
Friday, December 5, 2014

Cookies for Breakfast

I'm an adult. I can have cookies for breakfast if I want to.

You're an adult. Your parents aren't supervising your meal choices. Instead, there is a world of advertising encouraging you to act in certain ways. Some of those ways match up with what you want out of life and some do not. You're an adult. If you don't work out the difference, nobody will.

I'm an adult. I can continue improving myself if I want to. I can eat for a lifetime of health rather than a morning sugar rush if I want to. I can exercise my body regularly so it's there for me when I need it if I want to. I can continue improving my mind if I want to. I can choose to spend my money and time bettering myself and my community if I want to. I can pray for wisdom, guidance, and thanksgiving if I want to.

You're an adult. Your parents don't supervise your life. Will you?

Monday, December 1, 2014

Unfollow non-friends to defeat clickbaiters

It's easy:

  1. Notice a non-friend in your Facebook news feed.
  2. Click the top-right of the post and select the option to "unfollow" that entity.
  3. Repeat until Facebook is actually a place to keep in touch with your friends again.

Clickbait: Just say No
For the last decade there's been an explosion of entities competing for consumer attention. Time spent on TV, movies, and video games has been funneled to the web, to YouTube videos, and now to Facebook.

The Facebook News Feed is one of 2014's most effective broadcast channels. The effect of this is that Facebook isn't about helping you keep in touch with friends any more: it's about selling your attention to the highest bidder.*

Go to your news feed now and just start scrolling. How many updates are from your friends vs. news sources, comedy sites, celebrities, tabloids, or other corporate entities? Kind of sobering, isn't it?

Facebook wants your eyeballs. They will work overtime to keep them. They will show you what you're most likely to interact with, all by default. And maybe this is what you want. But maybe it isn't.

I get my news and comedy from RSS sources I trust (e.g. Ars Technica, Saturday Morning Breakfast Cereal). There are some sources where I want to read each post. I don't need to see the same stuff in Feedly and in Facebook, because when I go on Facebook it's to get in touch with friends, not to get distracted by clickbait and ragemongers.

We live in a post-boredom society. There are times I don't want to do anything, there are times I want to veg out, but there are no times when there is nothing to consume or nothing to do. One of the worst ways to spend a day is in all consumption and no doing. I don't go to Facebook because I'm bored. I go to Facebook to get in touch with friends, to broadcast news and thoughts to friends, to see what's going on with people far away, to send messages, and to plan events. Not to be another pair of eyeballs for BuzzFeed and HuffPo.

*These bids have two axes. One axis is ad dollars paid to Facebook, but the more significant axis of the bid is research into how to make people click on a link. If you've never spent more time on a clicked article than the article was worth, feel free to ignore this post.

Monday, September 29, 2014

Your MOM's a determinant

To my sister and her classmates, so that their math homework can be just a little less depressing.

Here are a few perfectly reasonable questions from high school sophomores about finding the determinant of a matrix:

  • What are they good for?
  • Who cares?
  • This is stupid and I hate you.
Let's address these questions. The first stop in the 21st century is, of course, Wikipedia:
The determinant provides important information about a matrix of coefficients of a system of linear equations, or about a matrix that corresponds to a linear transformation of a vector space. In the first case the system has a unique solution exactly when the determinant is nonzero; when the determinant is zero there are either no solutions or many solutions. In the second case the transformation has an inverse operation exactly when the determinant is nonzero. A geometric interpretation can be given to the value of the determinant of a square matrix with real entries: the absolute value of the determinant gives the scale factor by which area or volume (or a higher-dimensional analogue) is multiplied under the associated linear transformation, while its sign indicates whether the transformation preserves orientation. Thus a 2 × 2 matrix with determinant −2, when applied to a region of the plane with finite area, will transform that region into one with twice the area, while reversing its orientation.
Determinants occur throughout mathematics. The use of determinants in calculus includes the Jacobian determinant in the substitution rule for integrals of functions of several variables. They are used to define the characteristic polynomial of a matrix that is an essential tool in eigenvalue problems in linear algebra. In some cases they are used just as a compact notation for expressions that would otherwise be unwieldy to write down.
Ugh, I think I see the problem. These are, actually, really good reasons to be familiar with determinants, but most of these concepts are held back until a college-level Linear Algebra course. On that day when these sophomores take Linear Algebra, they will realize the brilliance of determinants--but they're depressed now. Let's try to do better than that.

A matrix's determinant tells you
  • whether a matrix is invertible, which tells you
    • whether you can undo a matrix multiplication operation
    • whether you can solve a system of equations based on that matrix
  • what happens to a vector when it's multiplied by the matrix:
    • How much does it stretch?
    • Does it flip inside out?
    • There could also be some rotation of the vector, but if we only care about its size and inside-outness, we can save the trouble of matrix multiplication if we know the determinant.
This, of course, presupposes that you care about matrix multiplication, which is itself a worthy topic for a similar post.

In addition to what a determinant tells you about a matrix, there are a few mathematical formulas that can be succinctly represented as computing the determinant of a matrix. This could save space on a cheat sheet for a future physics test. This is a nice collection of determinants doing cool things.

Look, this determinant is doing volume!
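
For instance, the volume of the parallelepiped spanned by three vectors a, b, and c is the absolute value of a 3×3 determinant:

\[ V = \left| \det \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{pmatrix} \right| \]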

The procedure for computing the determinant of a 3x3 or larger matrix illustrates a key idea in mathematics: recursion. You find the determinant of a large matrix by finding the determinants of smaller submatrices and combining the results. This pattern--a procedure that solves a problem by running itself on smaller versions of the same problem--is called recursion, and it pops up everywhere once you start looking for it.
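
Here's a minimal Ruby sketch of that recursion--cofactor expansion along the first row (the det function and the example matrices are just for illustration):

def det(m)
  return m[0][0] if m.size == 1
  total = 0
  m.size.times do |col|
    sign  = col.even? ? 1 : -1
    # The submatrix with row 0 and column `col` removed
    minor = m[1..-1].map { |row| row[0...col] + row[col + 1..-1] }
    total += sign * m[0][col] * det(minor)   # recursion: determinant of a smaller matrix
  end
  total
end

puts det([[1, 2], [3, 4]])                   # => -2, the area-doubling, orientation-flipping example from the quote above
puts det([[2, 0, 0], [0, 3, 0], [0, 0, 4]])  # => 24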

The most important thing you get out of studying determinants in high school...


Drumroll, please.

It's tedious and awful.

You're insane.
Let me explain. Why is it important for high school students to be tortured with tedious, awful plug-n-chug work? Because it's a strong motivator for the self-study of computer programming.
That came out of nowhere.
Let me explain.

If you're in high school today, you've never known a planet without personal computers. You probably spend more time on tablets than you do watching TV. More and more data about what you do every day are winding up in computer systems. More and more jobs require the ability to analyze and manipulate data. Literacy was to the 20th century as digital literacy is to the 21st. Until computer programming is part of a normal curriculum, you're on your own to learn this stuff.

Computing the determinant of a large matrix is the sort of "hard" that mankind invented computers to solve: lots of simple steps strung together. You won't be hired to find the determinant of a matrix because we have software that does that. There are lots of other computational activities taken over by computers as well--pretty much any well-defined useful task that can be broken down into simple steps. This is a good thing because it lets people focus their time and energy on the next problem (there's always a next problem). The lessons you learn writing software to help you cheat on your math homework will be applied again and again throughout your career.

Knowing all this won't make finding determinants less tedious, but hopefully it won't seem like entirely random torture. Good luck!

Monday, September 22, 2014

Choosing not to have a choice

The highest expression of freedom is self-discipline in pursuit of a worthy goal.

The secret is to make it easy. The secret to making it easy is choosing not to have a choice.

When you have a choice, you can procrastinate. You can put off the exercise. You can have another cookie. Maybe just one more drink. It adds up. If these things are important to you--if you truly want to get work done, get in shape, lose weight, or kick the booze--choose not to have a choice.

You can procrastinate after the work's done.
You can contemplate the futility of working out after your workout.
Cookies are gasoline, not food.
Same with alcohol.

This is the nature of freedom for excellence. This is what it takes to live the life you meant to live. This is how you rise above your lot in life. This is how you get from what you need to accomplish to what you want to accomplish to what you were born to accomplish. You have no choice. Then it's easy.


Friday, September 19, 2014

Entropy by Another Alphabet -- Computing for Everyone

Previous: Entropy and Bits

All right, enough about passwords and entropy. This stuff is supposed to show up everywhere--how about another example?

Here are some Wheel of Fortune Google Image results with upper bounds on the corresponding entropy, for fun.

40.9 bits of entropy--luckily, that's a strict upper bound. I'm sure this guy's got something up his sleeve....
Good luck!
Oh no, an upper bound of 30.7 bits of entropy!
Luckily, as someone who was in an English-speaking country as a 5-year-old, you have the extra information needed to solve the puzzle.
An upper bound of 3.8 bits, but as an English speaker you've solved the puzzle
Only one possibility--exactly zero bits!
The bounds are based on any remaining letter being equally viable for each spot. The calculation uses knowledge of Wheel of Fortune's rules but no knowledge of English. With knowledge that these solutions are comprehensible in English, the entropy is significantly less--otherwise we'd never fit a show into 22 minutes!

40.9 bits of entropy was an upper bound, remember ;-)
The entropy of each puzzle is way less than each captioned upper bound because not all guesses are equally plausible. My previous post mentioned that when we use bits to measure entropy, we mean bits that have a 50/50 chance of being either 0 or 1. This equal probability is crucial.

This last one was an amazing performance by the contestant, but clearly my entropy upper bound calculations don't reflect the actual difficulty of the puzzle. Why? Because 41 bits of entropy would mean there were 2^41 equally plausible solutions to the puzzle. This guy's good, but not trillions-of-guesses good.
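
To put numbers on that (a throwaway Ruby calculation; the 17 remaining letters and 10 blank spaces are the counts discussed just below):

possibilities = 17**10                  # each of the 10 blanks could be any of the 17 unguessed letters
puts Math.log2(possibilities).round(1)  # => 40.9, the captioned upper bound
puts 2**41                              # => 2199023255552, i.e. about 2.2 trillion "solutions"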

The upper-bound any-of-17-letters-in-each-of-the-10-spaces is clearly too large a universe of possibilities. The contestant, knowing English, could work out that the first word was probably "new," especially since missing "t" and "o" means it isn't "net" or "neo."

This leaves us with the number of 4-letter English words that can be made from the remaining 17 letters, the number of 5-letter English words from those same letters, and the triplets that make sense in combination (unfortunately for the contestant, you could have a "new" pretty much anything--"new baby buggy" is as plausible as new any-other-possible-phrase).

His situation is bad, but it's bad on the order of thousands of possibilities rather than trillions. Since the category is "thing," one of those words should be a noun, which whittles down the possibilities even further, etc. And lucky for our hero, the solution was near the beginning of the alphabet. An amazing performance.

But still, the second example--30.7 bits of entropy as an upper bound. Surely there aren't over a billion equally-plausible solutions.

The same thing happens in computing--in guessing passwords, in back-solving logic puzzles, and in communication.

If any remaining letter were equally probable on a Wheel of Fortune board, the game couldn't exist. Not only would it be tremendously boring, but equally-probable letters mean that there's nothing you can do to guess the solution other than guessing every word. Thankfully, English doesn't work like that. This feature of the language is called redundancy. Redundancy is a measure of how many symbols, on average, can be missing from a message while still remaining comprehensible. The "on average" is an important qualifier--as an exercise, think of some humorous situations where changing one letter in a word or one word in a phrase yields comprehensible English with an intent far different from what was originally intended! From our new perspective we can look at Wheel of Fortune as a game where contestants press their luck to select letters that give them enough information to solve a puzzle before their opponents can.

Redundancy has implications for computing. You've probably heard of file compression, i.e. zipping files. This is the essence of what's happening when you zip a file: the compressor is measuring the statistics of what's in the file and coming up with an encoding scheme whereby any bit in the compressed file has about a 50/50 chance of being 0 or 1. And this is lossless compression, meaning when you unzip the compressed file on the other side, you get back exactly what you had before compression.
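
You can watch redundancy at work with Ruby's built-in Zlib (a throwaway sketch; the strings are invented for illustration): repetitive English-ish text crunches down to almost nothing, while random bytes barely budge.

require 'zlib'

redundant = "new baby buggy " * 100                 # 1,500 bytes of very redundant text
noise     = Array.new(1500) { rand(256).chr }.join  # 1,500 bytes of random junk

puts Zlib::Deflate.deflate(redundant).bytesize      # a few dozen bytes
puts Zlib::Deflate.deflate(noise).bytesize          # roughly 1,500 bytes, possibly a bit more

The compressor finds patterns to exploit in the first string and none in the second--which is exactly the redundancy-versus-entropy story above.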

Lossless compression of text files contrasts with lossy compression used in pictures, audio, and video. In that case, the uncompressed form isn't recoverable from the compressed form. We lose information, hence JPEG, MP3, and H.264 are all lossy. The trick with lossy compression of media files is to do so in a way where humans don't notice or care about the loss of fidelity. When encoding with lossy compression, the compression software often exposes settings for what bitrate you'd like to encode at to determine the threshold of what to smooth over. The rabbit hole of lossy compression schemes is extremely deep, but it all comes back to entropy and making the bits that are transmitted over the internet matter as bits you consume on the other end.

Wednesday, September 17, 2014

Entropy and Bits - Computing for Everyone

Previous: Entropy and Passwords

In information theory, entropy is measured in bits. For each bit of entropy in your set of rules for generating passwords, the number of possible passwords doubles. The famous correct horse battery staple comic does a great job of illustrating these bits as boxes:

Each square represents one bit of information. You know bits already as "binary digits"--something that can be either 1 or 0. This is still the case in information theory, but an important concept when talking about bits and entropy is that there is a 50/50 probability for each bit. The "caps?", "order unknown", and "common substitution" bits are self-explanatory in this context. But why 11 bits for the "uncommon (non-gibberish) base word", 4 bits for "punctuation", or 3 for "numeral"?

To answer this, we model each choice as if it were a random string of 1s and 0s and ask how long that string would have to be to yield about as many equally likely possibilities as the choice itself (you may remember this from your Algebra 2 class as the base-2 logarithm).

Example: Why is a numeral 3 bits?
Solution: There are 10 possibilities for a base-10 numeral. Three bits yields 2^3 = 8 possibilities, which is closer to 10 than 2^4 = 16 possibilities, so we'll round to 3 bits. The exact number of bits would be lg 10 ≈ 3.32 bits (I find it convenient to abbreviate log₂ n as lg n). I believe it was the correct artistic choice for Mr. Munroe not to depict .32 of a bit.
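
The same back-of-the-envelope calculation in Ruby (a throwaway sketch, using only the counts from this post and the comic):

def entropy_bits(n)
  Math.log2(n)   # bits for a uniform choice among n equally likely possibilities
end

puts entropy_bits(10)    # ~3.32 -- a numeral (the comic rounds to 3 bits)
puts entropy_bits(16)    # 4.0   -- one of 16 punctuation symbols
puts entropy_bits(2048)  # 11.0  -- a word from a 2048-word list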

Using similar reasoning, we see the comic estimates 16-32 punctuation symbols and in the ballpark of 2048 common words.

There's a password generator based on this comic online. Taking a look at its UI and word list, we can calculate the entropy of the default settings for the web site at about 48 bits--4 bits higher than the 44 bits advertised by the comic (3.32 of those come from appending a digit by default, which wasn't in the comic).
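
Roughly how that figure breaks down (a sketch; the last fraction of a bit presumably comes from the site's word list being somewhat larger than the comic's 2048 words):

comic_bits   = 4 * Math.log2(2048)        # four words from a 2048-word list: 44.0 bits
default_bits = comic_bits + Math.log2(10) # appending a digit adds ~3.32 bits
puts default_bits.round(1)                # => 47.3, most of the way to the ~48-bit total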

Next: Entropy by Another Alphabet

Monday, September 15, 2014

Entropy and Passwords - Computing for Everyone

Which is a better password (neither are good): "password" or "ospawdr"?
Intuitively, the second is better, because it's harder to guess--even though they both use the same letters, and the second collection of letters is shorter (7 letters instead of 8).

This is the information theory concept of entropy[1]. Once you're familiar with entropy, you'll see it pop up everywhere in computing and everywhere in life. Intuitively, a set of rules that generates passwords that are harder to guess has higher entropy than a set of rules that generates passwords that are easier to guess.

If your set of rules for generating passwords has high entropy, that means it will take more attempts to guess your password. High entropy is crucial because an attacker might be able to make guesses very, very, very quickly. Thankfully, you can use rules that can outrun the guess rate of an attacker. This is why some web sites have obnoxious rules about including a mix of characters that makes your passwords hard to remember: the rules make them hard to guess as well.

So why is "ospawdr" better than "password"? Let's look at the rules that generated each password.
"password" is a word in the dictionary. Worse, it's a common default password. As a common default password, a manual attacker might try it within the first half-dozen attempts (and laugh hysterically when it works). But let's be generous and say the rules you used to arrive at the password "password" are just that it's a common English word. How many English words are there? Let's say there are a million. But "password" is a common English word--a clever attacker would surely try common words first, no? The second edition of the Oxford English Dictionary contains under 175,000 entries--most of which you probably haven't heard of. But rest assured, a computer can make 175,000 guesses within seconds. So with a generous estimate, we'll say we're guaranteed to guess "password" within the first 200,000 guesses.

Now let's look at "ospawdr". My rules for making this password were to choose an arbitrary 7 letters from the word "password". "password" has 7 different letters, so my rules generate 7^7 = 823,543 different passwords. Much better!

There's a problem, though: my "arbitrary" 7 letters still don't have great entropy! Why? Because it turns out that humans are bad at making random-looking choices--what looks random to a human isn't as "random" as it should be! For example, I chose these characters in my head while looking at the word, and it turns out that I chose 7 distinct letters in a 7-character password from a 7-character alphabet. The fraction of truly random passwords that share this characteristic is 7 • 6 • 5 • 4 • 3 • 2 • 1 = 7! = 5,040 out of a space of 823,543. My "arbitrary" rules picked a password that was actually in a subset of about 0.6% of the space I thought I was using. What a disaster![2]
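
In Ruby terms, the size of that accidental subset:

permutations = (1..7).reduce(:*)              # 7! = 5,040 orderings of the 7 distinct letters
space        = 7**7                           # 823,543 length-7 strings over those letters
puts (100.0 * permutations / space).round(1)  # => 0.6 (percent of the space I thought I was using)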

Next: Entropy and Bits

[1] There's a related physics version of entropy.
[2] For comparison, here are 10 passwords I generated with a Ruby script using the rules I thought I was using:

dddrdrr
wpswoow
oaprrrr
drpaawo
sraddsd
warsosr
psdasso
rdpdwao
ooapwaw
apsppwo
And the script:
alphabet = 'pasword'  # the 7 distinct letters of "password"
10.times do
  password = ''
  7.times do
    password << alphabet.chars.sample  # each character drawn uniformly; repeats allowed
  end
  puts password
end

These script-generated passwords don't "look" as random, but they're actually harder to guess than the original. This is why I recommend using a password manager and generator.

Here are 10 sample "pasword" permutations:
posadwr
owpdsar
prowads
podwasr
sdawpro
opadrsw
rowsdpa
wsaodpr
rswdaop
drsoapw

And the Ruby code:
alphabet = 'pasword'  # the 7 distinct letters of "password"
10.times do
  puts alphabet.chars.shuffle.join  # a random permutation: each letter used exactly once
end

As an example of how the human mind is bad at picking out "random" data, here are the two next to each other.

Random characters (entropy ≈ 20 bits):
dddrdrr
wpswoow
oaprrrr
drpaawo
sraddsd
warsosr
psdasso
rdpdwao
ooapwaw
apsppwo

Random permutation (entropy ≈ 12 bits):
posadwr
owpdsar
prowads
podwasr
sdawpro
opadrsw
rowsdpa
wsaodpr
rswdaop
drsoapw
For more on passwords, see

Monday, June 2, 2014

PSA: Authentication != Authorization

Unfortunately, it's tempting to abbreviate both authentication and authorization as "auth." This common abbreviation causes people to confuse the two in their minds. I BEG you all to disambiguate as follows:

  • Authentication = AuthN
  • Authorization = AuthZ
(as per common sense and this other site I found)

Non-tech explanation:

Authentication (AuthN) is a means of confirming your identity. A weak real-world example is when a cashier checks your photo ID to match your name and face to the credit card you're using for a large purchase.
Authentication (AuthN) to match the human to the credit card
Authorization (AuthZ) is verifying that you have permission to perform an action. A weak real-world example is when a cashier checks your photo ID to verify you're old enough to legally purchase alcohol.
Authorization (AuthZ) to verify the human may purchase alcohol
By information security standards, our real-world mechanisms are extremely weak.
These are confused all the time because they both start with "auth" and you can use a driver's license to explain both. Really, they're quite different. Here's a non-driver's license example for both:

AuthN: You recognize your significant other's voice before engaging in flirtatious conversation.
Authentication (AuthN) to know you're flirting with your SO, not your parents
AuthZ: You know a secret handshake to get into a club.
Authorization (AuthZ) to grant access to premier plumbing services as a member of the Stonecutters
AuthN: Who am I?
AuthZ: May I?

Call to action for tech people:

Grep your codebase for instances of /auth[^enz]/ and eliminate them. Add a FindBugs rule to weed them out. Call out auth conflation in code reviews. WHEN TAKING NOTES, PERSONALLY ABBREVIATE VIA AUTHN OR AUTHZ, NEVER AUTH.

Also, OAuth is actually OAuthZ. OATH, a different thing entirely, is a reference architecture for AuthN. Got it?

Monday, March 31, 2014

I miss Lyft enough to use bold typeface at some point

I don't hate taxi companies. I hate
  • Calling for a cab
  • Not knowing where my cab is
  • The impersonal divide between me and my driver
  • Haggling over directions/expecting to be "taken for a ride"
  • Having my dispatched cab mistakenly take another customer
  • Paying via carbon-paper credit card machines as I'm stepping out of the cab


I've seen what it's like when these things are fixed and I love it. I first tried Lyft in the summer of 2013 and I was instantly hooked. I understand Uber, SideCar, and a few other rideshare companies offer similar services. I wouldn't say I love Lyft unconditionally, but I do love
  • Hailing a cab with a few taps on my smart phone
  • Knowing exactly where my cab is
  • The friendly vibe with each and every Lyft driver I've ever had. It really brings the city together.
  • Regular navigation system use with a point-to-point fare structure. Common sense!
  • Knowing exactly which Lyft is mine--seeing exactly what the car and driver look like, and having the driver recognize my face, eliminating confusion even when several Lyfts are being hailed on the same block.
  • Paying via phone with my linked credit card and allowing my driver to get to his next fare.

Then there were several other things that Lyft did that delighted me to the point of addiction:
  • Smart phone chargers as a regular feature
  • Water (even though I didn't usually take it)
  • Candy (even though I didn't usually take it)
  • Swapping stories and news about the city

This represents a clear example of what's technically known as "deshittification:" a vast improvement on a substandard customer experience.
This Lyft car looks like what it feels to take a rideshare cab.
Then the Seattle city council enfuckified (once again, a technical term) the situation by capping the total number of drivers rideshare companies were allowed to have on the road. When I went to try to get a Lyft today for the first time in months, there were no drivers and I was forced back into using a traditional taxi company (ugh). I wouldn't say I hate the Seattle city council. I hate
  • Siding with interests who want to kneecap competitors with legislation rather than adapt to delight their customers
  • Wiping out transformative improvements in industry at the stroke of a pen
  • Telling the citizens it's for their own good
I didn't take taxis often, but when I did, it wasn't pleasant. Then I switched to Lyft and loved it (though I'm still an occasional customer only).

I pay attention to politics not to achieve a deeper meaning in my life, but to prevent those who seek meaning through meddling from messing it up for people who are just trying to live their lives--situations just like this.
DEY TOOK ER LYFT!
The good news for the Seattle city council is that I don't hate them. The bad news for the Seattle city council is that I don't have to hate them to vote against them next election. 

The council members who have insisted on rideshare caps don't deserve the vote of the great city of Seattle. They have screwed Seattle rideshare customers in the name of "fairness"[1]:
  • Mike O'Brien
  • Kshama Sawant (her particularly anti-progress position was quoted here)
  • Nick Licata
  • Bruce Harrell
These four have betrayed the trust of our city. Vote against them the next time you have a chance.

There was one ally for consumers against kneecapping rideshare services whose support is easily Googleable: Sally Bagshaw. Her position in favor of a superior riding experience can be found here. She's earned her position. Yay Bagshaw!

The founders gave us a Republic, if we could keep it. Go out there and keep the crap out of it.



1 - If you read both sides, part of the argument for caps is to limit the unfair advantage that rideshare companies have against the more regulated taxi cab companies. It's hard not to notice that this lower regulation accompanied a tremendously improved customer experience. If regulations are making it difficult for taxi companies to compete, I submit to you that pushing the burden of these regulations yet wider is a substandard approach. In other words, this demonstrates that more regulation != more consumer protection.

Monday, March 17, 2014

Reader questions: Software Development Job Search from Quantitative Background

A Notre Dame alum reached out to me recently about switching to software development from a quantitative background. My responses seemed like they'd be of general interest, so please use them for good, not evil.

In the job search in general, the task is to match what a company needs and wants with what you have done and what you can do. Your task is to make that mapping as clear as possible throughout the process. The resume gets you the phone screen. The phone screen gets you the interview. The interview gets you the job. The best person for the job doesn't necessarily get the job; the person who is best at getting the job gets the job.
1)  How do you think I should pitch myself in my resume / cover letter to software companies?  How can I play down the fact that I have no work experience with "real" languages (e.g. Java / C++).
In my book, any Turing-complete language is a "real" language--anything from writing expense report software in Java to writing Tetris in Brainfuck. To me there seems to be an anecdotal inverse correlation (whatever that means) between demanding proficiency in a particular language and the quality of a company's software.

Amazon, Google, Microsoft, Facebook, etc. will all let you solve interview problems in the language with which you're the most comfortable. Even if the language is esoteric (e.g. Haskell (don't actually try to solve interview problems in Brainfuck)), if you can explain how your code works and characterize things like its big-O memory usage and runtime performance, then you should be just fine.

So how do you play down your hipster programming language work experience and real language academic experience in a cover letter? Just don't bring it up. There's no template for a cover letter. You have experience in legitimate programming languages and your coding and problem-solving skills will be assessed interactively anyway. Map what you've done and what you can do to what the company's needs and wants.
2)  How helpful would it be to start taking on Python/Java projects on GitHub?  I've heard that showing people that you're a "doer" makes a really big difference in marketing yourself.
Project experience is always impressive. Any opportunity you have to hyperlink your resume to something you've built is worth taking advantage of. Each hyperlink is an incarnation of the mapping between what you've done and what the company needs or wants (in the most general sense, it shows you have written code that does something).

Contributing to existing projects also mirrors what you'll do on the job. Very rarely do you start on a new system from scratch, and when you do the new system becomes an existing system after only a few months.

Finally, contributing to projects is a great way to learn a new language and see how it's used in practice. A good project will show good habits not only for the code itself, but for the code's organization and tests. This will also expose you to a lot of bad habits (which you can also learn from: if a stanza of code seems to make no sense, it's probably poorly written. Once you figure out what the code is actually doing, think about which language features could have made the code more readable) and get you used to setting up development environments and getting new code running.

Now for the flip side: Contributing to a project is a time investment. It's easy to get over-committed in life, so get a handle for what you can include in your routine and live within your time budget. The most helpful thing I've done on this front this year was to start using a to-do list app. Many of the benefits I just attributed to contributing to a project on GitHub are accessible by merely browsing the source code on GitHub. You have to decide whether to prioritize the additional work of contributing a patch (or just forking your own branch) in return for a compelling work sample. Having such a link isn't a prerequisite by any means, but it's definitely a boost.
3)  Let's assume I don't have the legendary skills / experience required by tech firms like Amazon/Google.  What are some examples of tier-2 tech firms that are known / respected?  I feel like I might have a shot at firms that aren't quite as good as Amazon.
I'm a bad person to answer this question since my tech career started with a summer internship at Johnson & Johnson followed by full-time at Microsoft followed by full-time at Amazon. I probably have a big head about it.

There's a range of consulting firms, smaller tech companies, companies that specialize in mobile development, high-frequency trading firms, startups, and contractor agencies outside of the megacorporation world. There are also large companies and various government agencies that do some in-house software development.

If I were good at answering this question, I'd be able to give the pros and cons for working at each, as well as what they're looking for and how to prepare. If you insist on preparing for not being good enough (wow, I definitely have a big head), I recommend researching some specific example companies for these categories. Employees at these companies will have better answers for the pros and cons of where they are.

No matter where you go, your first goal is always to make yourself as awesome as possible. It's easy to take jobs as a proxy for self-worth, and it's okay to be proud of where you work, but don't let where you work define your identity. When I got my offer from Microsoft senior year of college I'd felt that I had made it. Being smart and achieving was a big part of my identity, and my employment by Microsoft was a nice encapsulation of my achievements and sacrifices. I had a view of myself as a Microsoft employee. It defined my identity. The problem was I wasn't happy with my work there. Deciding to look around for other jobs was very liberating: First I saw that my team was bigger than my role, then that Microsoft was bigger than my team, then that the software industry was bigger than Microsoft, then that the world was bigger than the software industry. When I went from Microsoft to Amazon I also went from thinking of myself as an SDET who works for a great company to a great SDET who happens to work for Amazon (and I friggin' love Amazon). I'm an SDE at Amazon now, but the principle holds: I'm a sharp guy with a lot going on who happens to work for Amazon as an SDE (a fitting incarnation of such awesomeness (there's that big head again)).

Monday, March 3, 2014

We're all cyborgs now.

Alfred North Whitehead once said "Society advances by extending the number of important operations which we can perform without thinking of them." This is the promise of our digital age, but not its guarantee.

Digital technology has simplified our lives in areas like paying bills, coordinating with friends, and getting around in new cities, but the wealth of information available to us can also overwhelm our puny carbon-based brains. The human memory is fragile, error-prone, slow to store and retrieve information, and limited in short-term storage (five to nine "chunks" of information). The first priority is to cope with this influx of information; the second is to bend it to your will.


You can meet the first priority by tuning out a lot of distractions[1], but to thrive in the modern world, you must realize the meaning of this phrase: We're all cyborgs now. There's nothing special about our carbon brains that allows us to process this new influx of information; we have simply learned to make better use of the silicon extensions of those brains.



In much the same way that physical tools like levers, wedges, inclined planes, and pulleys allow people who understand them to apply their limited physical strength more effectively, logical tools allow people who understand them to apply their limited psychic strength[2] more effectively.

Just like basic physical tools are combined into machines which are better- and worse-suited to the human body (axes, dollies, bicycles, airplanes, dental drills), iteration, symbolic manipulation, and conditional execution are combined into machines which are better- and worse-suited to the human mind (digital calendars and address books, search engines, note-taking software, spreadsheets, safe e-commerce, awful programmable VCRs from the 90s). Devices which remember for us, research for us, repeat for us, compute decisions based on our criteria, and communicate amongst themselves are now ubiquitous.



Just like a basic understanding of physics and physiology can make us much safer and more effective when applying our physical strength, an understanding of the basic elements of computing and cognitive psychology makes us clearer and more effective when applying our psychic strength. This argues for the importance of widespread education in computing, and even basic programming (very different from the traditional "computer courses" where elementary students learn to use Microsoft Word to print out essays), and for raising the bar in terms of what we expect from ourselves when interacting with our technology.

What's the best way for a civilian aspiring-cyborg to learn these computing basics--the analog to basic physics in the mechanical world? I might not be the best person to ask about this, given that software is my bread-and-butter. My instinct is to point you at learning the basics of a programming language, probably JavaScript. It looks like this fellow might be in a situation similar to yours, you non-programmers out there. His recommendation of the online version of Eloquent JavaScript, in particular, seems to be a perfect fit.

END COMMUNICATION



[1] Here are the worst offenders of information overload in our age, and what to do about them: Email, Facebook, Cell phones, and news sites/blogs (this one included).

  • Email: If you use Outlook for email at work, disable the desktop notifications. Companies that use Outlook tend to use Communicator as well. Don't turn email into a hacky instant messaging client. 
  • Facebook: Learn about Operant Conditioning, particularly the insidious nature of Intermittent Reward. Then walk away. 
  • Cell phones: If someone said, "I'll sell you a brick that buzzes and beeps every 53 minutes to distract you from what you're doing," you wouldn't be interested. That's not why we buy cell phones, so we shouldn't let them turn into productivity randomization bricks by accident. At home, keep it plugged in away from you. At work, keep it silenced, face-down and out of sight. This will also spare you the embarrassing experience of your phone turning into a meeting-interrupting brick.
  • Blogs: You're never going to read the entire internet. Use your social network as a sort of reverse-pyramid scheme to filter out the real gems, but even then know what you want to accomplish before you sit down to consume hypermedia.

[2] I don't believe in psychics, but I think that "psychic strength" is a cool phrase for describing brain power.

Monday, February 24, 2014

If man were meant to fly...

...he'd have been born with wings.
adventure.howstuffworks.com
...he'd have been born into a world that needs shrinking...
clearsimpleliving.com
...he'd have been born with a need to explore...
blog.getsholidays.com
...he'd have been born with curiosity...
kidspot.com.au
...he'd have been born with intellect...
wrightbrothers.org
...he'd have been born with drive...
pedalpowerplanes.co.uk
...he'd have been born annoyed with the notion of the impossible...
upload.wikimedia.org
...and he'd have been born with dreams.
redbubble.net

Monday, February 17, 2014

It's Go time.

It's no secret that I love chess. Now for the bad news: most chess games I play in-person do very little to help my brain. I'm sure this would quickly change if I signed up for some weekend tournaments, and this isn't always the case in my online play, but the truth is the lazy side of my brain is good enough to convincingly dismantle all but a few people I've met in non-chess-oriented settings.




So what are my options? I'd like to continue growing my ability to learn via games, but I'd like to get some more normal human interaction back into the mix. Playing a variety of German-style board games is one approach, but there is a (low) chance of developing metagaming heuristics effective enough to wind up in a similar situation with German board games as I currently have with chess.[1] It would limit my play to the friends I normally board game with and other board game nerds. Like I said, it's a low chance, but why not protect against this risk via portfolio diversification?

What I need is a game with a deep heuristics ladder that's also withstood centuries (if not millennia) of scrutiny. What drew me to chess was its lack of overt random elements[2], its freedom from politics as a 2-player game, and that it is a game of perfect information[3]. There's at least one other game that fits all of these criteria and that I'm new to and therefore terrible at: Go.

I have no idea what's going on here, and I can't wait for that to change!
I'd like to commemorate my newfound enthusiasm for Go with a continued comparison against Chess. Since we've covered that both are two-player turn-based strategy games of perfect information with deep heuristic ladders and no overt random elements that have withstood the test of time, let's go into some differences.

  • Chess has much more complex rules. There are six types of pieces, pawns move three different ways, certain positions are drawn due to insufficient material, castling and its relationship to check and which pieces have moved is complex, and a large number of casual players don't know that en passant pawn captures exist.
  • In spite of simpler rules, Go has a much, much higher branching factor.[4] I see this partially as cheating by increasing the board size (19x19 vs. 8x8), but I'm over that.
  • The conceit of chess is much more amenable to suggesting state and directional heuristics to novice players: You want your King to live, a larger army will help capture the opponent's King, pieces have a sense of individuality and purpose both with and without the surrounding army. There's none of that in Go. The first few games of Go are a difficult process of discovery in transforming the basic set of rules and enormous branching factor into something that is within the limits of the human mind to evaluate.
  • Computers are terrible at Go. The anti-computer tactics employed by humans in Chess involve pushing games in the direction of favoring intuition over direct ply-by-ply calculation, which is Go's default setting.
  • Go is scored at the end. Chess is either a victory for one side or a draw.
  • Go's scoring mechanism and monotony of piece types allowed the creation of a simple handicap system to enable games between players of vastly different strength which are mutually useful intellectually. 
  • Philosophically, it's easy to imagine yourself as the King in a game of chess. If he's captured, you lose. It's a fight to the death, and you're right there in the middle. Go doesn't have that. You aren't any of the stones, and neither is your opponent. You play the role not of kings or battlefield commanders, but of emperor-gods hovering above a field of mortals.
  • If we insist on anthropomorphizing game pieces, Go may reflect a belief in the supremacy of groups over individual differences. Chess reflects the opposite.
  • While both games appreciate the role of forcing the opponent to respond (the Initiative in chess, Sente in go), chess's initiative is very all-or-nothing. You have it or you don't, and when you do then it's a mistake not to press it into something greater. Sente in go can be sort of "stacked," it seems, and much more difficult to put one's finger on. Whether a move in go forces an opponent to respond is harder to calculate directly, leading to more right-brained judgment calls. You see these sorts of judgment calls in chess as well (e.g. realizing an opponent's threat is a "ghost"), but it seems like a much more fundamental concept in go.
  • Similarly, developing one's pieces at the start of a chess game has a slightly different character than a ko threat in go. Areas of the board you judge as "dead" in go can still be useful as a ko threat later in the game, whereas lost material in chess will not rematerialize.
  • Go seems much more forgiving to beginners in the earlier stages of a game. With so many more moves to choose from, there are more viable moves than you would see in chess. Many bad moves beginners make in chess have simple explanations for the nature of the error and a conclusive demonstration of why such moves are mistakes. Mistakes in any phase of a chess game can kill you. In go there is more equality between the viability of different moves absent Sente, and failing to immediately press Sente isn't a mistake the way fumbling the initiative is in intermediate+ chess games.
  • Threats in go are inaccessible until you've absorbed, through reading and drills, several heuristics about living and dead structures--and probably a few more things that I haven't learned yet, hence my poor go strength.
Come to think of it, the depth of go's heuristics ladder may be counterproductive to my interest in it. There are fewer go players than chess players that I have met in my life, so absorbing a handful of heuristics puts me at a great advantage much sooner. Wait a minute, that's what that handicap mechanism is for! This is going to be fun.





[1] This would most likely take the form of recognizing the essential games embedded in the rules of more recent board games and quickly discovering effective state and directional heuristics by combining effective strategies

[2] i.e. no dice, no cards. See Characteristics of Games.

[3] All information about the state of the game is known to both players; there's nothing like a private hand of cards.

[4] average number of options available to a player in a typical game state

Monday, February 10, 2014

Reusability is not a design goal

http://xkcd.com/974/

When software reusability is a stated goal of a software system, it encourages fitting your customers' problem to your solution's implementation rather than the other way around.

Reuse viewed as an intrinsic good encourages design missteps such as favoring inheritance over composition, resulting in tight-coupling issues.
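
A tiny, hypothetical Ruby sketch of that misstep (all class names here are invented for illustration): inheriting just to reuse a method couples the new class to everything its parent does, while composition keeps the dependency narrow and explicit.

class CsvExporter
  def export(rows)
    rows.map { |r| r.join(",") }.join("\n")
  end
end

# "Reuse" via inheritance: the emailer is now coupled to all of CsvExporter, forever.
class InheritedReportEmailer < CsvExporter
  def deliver(rows)
    puts export(rows)   # stand-in for actually sending mail
  end
end

# Reuse via composition: the emailer only needs "something that responds to #export".
class ComposedReportEmailer
  def initialize(exporter)
    @exporter = exporter
  end

  def deliver(rows)
    puts @exporter.export(rows)
  end
end

ComposedReportEmailer.new(CsvExporter.new).deliver([[1, 2], [3, 4]])

Either version "reuses" CsvExporter; the difference is how tightly the reuser is bound to it.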

Finally, reuse as its own goal violates YAGNI (You Aren't Going to Need It). See this great Coding Horror article, which really steals a bunch of my thunder.

Reuse has its place. Reuse occurs while following a more powerful design principle: Don't Repeat Yourself (DRY). Each piece of information in and about a system should have exactly one canonical location. Well-designed systems thus reuse their own components in the service of DRY, but what we should emulate in designing our own systems is not the reuse but the single encoding of each idea in a system.

"Okay, Mr. Smarty-Pants: Doesn't the existence and widespread use of software libraries contradict the entire point of this article? Haven't you heard of Ruby on Rails? Spring? Java? .NET?"

These are products for use by software developers (e.g. the Java Collections library). Within effective software libraries, reuse is still incidental to the more powerful DRY principle. Libraries are still designed to serve an end. Reuse comes naturally when common instances of a problem occur via DRY, not via the inherent goodness of reuse.

Friday, January 31, 2014

Sharing tribal knowledge in N easy steps!

Fun for the whole team! You'll need:

  • a decent spectrum of junior and senior team members
  • a whiteboard
  • markers

These sessions can last for about an hour at a time, weekly or biweekly.

  1. Prepare the discussion by having two seniors discuss a design detail for a contemporary change to the system.
  2. Write the domain-specific terms on the whiteboard as they come up.
  3. When there's a good number of terms on the whiteboard, go around the room from the newest to the most tenured member of the group. Everyone takes a turn explaining as many terms as he can as accurately as he can. The next person may correct or augment the explanations of previous explainers before tackling terms that have not yet been discussed.
Try this when you have an influx of new teammates or just as an occasional refresher to keep everyone on the same page. These discussions have a way of fitting their groups like a glove. Open question: Does this work with teaching new games to board game groups?

Wednesday, January 29, 2014

A month with Daily Planner

My theme for 2014 is reclaiming time in my personal life. There's one app that has made all the difference so far.

Daily Planner
Daily Planner is awesome.

There's a lot I'd like to do in my life, and only so much time. Just like we have bills and budgets for our money, we have schedules and to-do lists for our time. Just like the best camera is the one you have with you, the best to-do list is the one you use.

This app doesn't do much, but what it does it does well. Its interface is clean and it has support for replenishing lists you make for tasks that recur daily and weekly (so the task to make my bed is waiting for me in the morning, rather than me having to add it).

To-do lists are important enough for your employer to want you to use them. How much more should you be using a list to keep track of your own life?

Perhaps you fear becoming a robot, a slave to a list? Consider this: we all have 24 hours in a day, and we spend those hours somehow. Planning for what you want in your life and augmenting your discipline with tools to get you there just means that you're the one filling out your list rather than someone else (reddit/Facebook/Netflix/Wikipedia binges/your media of choice). In many ways, using a to-do list is less robotic because you're choosing how to behave, rather than riding a wave of dopamine from systems that optimize for diffuse attention. What's more robotic: writing, eating breakfast, practicing the piano, having a clean house, and taking care of those long-procrastinated tasks, or clicking through even the highest-quality list-based articles with animated gifs and hyperbolic headlines? Anyway, that's just my response to the potential "robot" straw-man.

(It just took me about 30 seconds to recall the word "procrastination." I'm choosing to interpret this as an indicator that my productivity has indeed increased.)

Your time is valuable, so here is the short version of the other reasons I love using this app:

  • Urgent tasks rob me of time and freedom. Having a list allows me to optimize the completion of tasks I need to get done before they become urgent.
  • Having a list lets me take advantage of time that would be otherwise wasted.
  • Web browsing is no longer something I do for its own sake, but something I do in the service of working through items on my list.
  • Checking things off the list causes me to experience less guilt and more freedom.
  • Seeing the things I don't get to reminds me that time is limited. Having a prioritized list keeps me focused on what I most want and helps me stop myself from overcommitting.

So try it for yourself. Install Daily Planner or some equivalent app to your smartphone. Place the app's icon where Facebook's icon used to be (now you have muscle memory working for you rather than against you!). If you hate it, go back in a week. I doubt you'll hate it.

Welcome to the life you meant to live.

Monday, January 27, 2014

Every little bit counts

My junior year high school English teacher expertly seared certain moments into the class's brain. He would talk about how he had room for a finite number of friends and that no new spots were available. He would talk about how he could draw a perfect circle for the zero he would give us for papers which contained the passive voice, any conjugation of "to be" and any "thing" words.

A student skeptical of the benefits of Notehand (an abbreviated handwriting style we learned) asked Mr. Bounds how much time Notehand actually saves. "Tons," he replied.

If you have aspirations to speak to software, you will learn to interact with the command line. A handy tool of the *nix command line (though you can sort of do this in Windows) is the alias command, which allows you to define your own shorthand commands. Here are a few of the aliases I use for git:
alias g='git'
alias gs='git status'
alias gcm='git commit'
alias gch='git checkout'
alias gb='git branch'
alias gl='git log'
alias gm='git merge'
alias gr='git rebase'
With these aliases in my .zshrc file, I can spend more time examining the state of my repository on the command line and less time in the mental limbo between seeking information and accurately typing the request. Time saved: Tons.

Example 2: Google Chrome allows you to set up various "search engines" if you go to Settings > Search > Manage search engines... . This lets you do a Google search by typing into your address bar with a prefix you specify, e.g. "google" for Google, "yahoo" for Yahoo, or "bing" for Bing. Of course, it's much faster to change these from these outrageous defaults to g, y, and b respectively.

Here's another point to consider: Google Chrome can treat any URL as a "search engine." "Search engine" to Chrome just means, "here's a URL template. It's got a '%s' in there somewhere. When you type your keyword, what follows will be substituted for the %s and your browser will go to that URL." This means, for us with a company intranet, that many of the intranet pages and lookups can be greatly, greatly simplified.

As an aviation example, using "metar" for "http://aviationweather.gov/adds/metars/?station_ids=%s&std_trans=standard&chk_metars=on&hoursStr=most+recent+only&chk_tafs=on&submitmet=Submit" allows me to look up the current weather conditions at Friday Harbor by typing "metar kfhr". This isn't nearly as cool as my work examples, but has the huge benefit of containing no confidential information.

Look for other little things. Watch the time saved add up. Enjoy!

"Hello there, Miss Doesn't-find-me-sexually-attractive-anymore. I just tripled my productivity." - The Simpsons, "King Size Homer"