MathJax

Sunday, March 27, 2011

Fiction, fun and profit

A few months ago I realized I needed to read faster. I've always been an "A" student. I've shown my employers that I have the skills and wit they require. I needed to read faster because you can drown in the technical information available to people these days. The web is competing with that former juggernaut of time-sinks, television. What's more insidious is that the web surfer always has cause to surf further in the name of learning something edifying or useful to his job. I don't know anyone who complains that he can't find "anything good on the internet" the way people would gripe about there being nothing on TV.

With a strong and growing career in software engineering and a new hobby of aviation, I picked up the book 10 Days to Faster Reading, from the Princeton Language Institute. It's been a great investment, both of the paltry monetary sum and the 10 days themselves. The book covers the physiology of reading: how our eyes make short stops as they scan across a page, and how you can practice grabbing more words per eye movement.

What I realized was that technical science and math textbooks can train you to be a slower reader. Math books in particular are very dense, forcing the reader to savor each word, grok each sentence and appreciate each subtlety before moving on. But most of what we read isn't that dense. The redundancy in ordinary prose lets readers skip entire sections that don't apply, or read passages quickly, confident that the important information will likely resurface soon. If you studied math too hard, though, you may be carrying those habits over to non-technical works and slowing yourself down unnecessarily.

My solution? Read more novels. There are plenty of other reasons you would want to read novels as an engineer, but start with this one if you haven't picked up any pleasure reading in a while. Having an interesting plot and characters to look forward to can accelerate your engagement and your pace of reading. It just might train you out of reading each sentence like a mathematical equation.

*Note: 10 Days to Faster Reading says that pleasure reading should be done at your own pace. After all, it's for pleasure. While that's true, reading faster doesn't mean you miss much. I remember knocking back entire Goosebumps books in 30 to 60 minutes as a kid, and I didn't enjoy them any less for the speed.

How Thinking Small Matters to your Success

At a certain size of team, success for the individual contributor means moving to a smaller team. Larger teams tend to support safer ventures while crowding out opportunities for meaningful contribution. If you feel unnecessary, or if you want more control over your work, maybe your team is just too big. You can kill two birds with one stone by switching teams. There are definitely places that could use you.

Speaking of which, the Amazon payments organization is hiring. See what you think: http://payments-jobs.amazon.com

6 Myths about Test Code

I cringe whenever I hear the phrase "it's just test code." Usually the speaker is implying that the code doesn't need to have well-named variables, doesn't need modularity, doesn't need clarity of thought and that copy-pasta is just fine. Automation code certainly differs from production code, but not at the expense of code quality.

Myth #1: Copying and Pasting can help write test code faster
I see this all the time from SDET contractors. They don't tend to get renewed. In a rush to get something out the door and point to a massive checkin as evidence of their developer prowess, they paste the same changes over and over again.
Fact: Copy-pasta doesn't help any code get out the door faster. Copying and pasting is a way to generate needless work in the future. Every time you paste what should have been the body of a method, you're multiplying the scenarios which will be out-of-date when it comes time for the next design change. Put it in a method body and just let everyone sleep easier at night. TestNG even allows you to parameterize your test cases.
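As a sketch of the alternative (the checkout-totals helper and its data below are invented for illustration, not from any real suite): pull the pasted body into one method and drive it from a table of scenarios. TestNG's @DataProvider expresses the same idea declaratively, handing each row of data to a single @Test method.

```java
import java.util.List;

public class OrderTotals {

    // The logic that would otherwise be copy-pasted into every scenario.
    static int totalCents(List<Integer> itemCents, int shippingCents) {
        int sum = 0;
        for (int c : itemCents) sum += c;
        return sum + shippingCents;
    }

    public static void main(String[] args) {
        // One row per scenario: item price, shipping, expected total.
        // A TestNG @DataProvider would feed these rows to one @Test method.
        int[][] scenarios = {
            {500, 100, 600},
            {0, 100, 100},
        };
        for (int[] s : scenarios) {
            int got = totalCents(List.of(s[0]), s[1]);
            if (got != s[2]) {
                throw new AssertionError("expected " + s[2] + " but was " + got);
            }
            System.out.println("total=" + got);
        }
    }
}
```

When the next design change arrives, there is exactly one method body to update, and every scenario picks up the fix for free.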

Myth #2: Magic Numbers can help you write tests faster
Fact: Now that you've seen the statement in print, you immediately recognize it as absurd. Test code will constantly tempt you to hard-code magic data: retry timeouts, manually-created test orders, test account login information. Your task is to refuse such temptations. If your tests can't create everything they need at runtime, you'll wind up with dependencies on your environment or some hand-maintained configuration. Your tests become tightly coupled to the one instance of the system they run against. If there is common configuration data you genuinely need to reference, take a look at Spring or Java Beans (if you're testing Java code) as a convenient place to store it.
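Here's a minimal sketch of that refusal, with an invented naming scheme (the `it-` prefix is my assumption, not anything from the post): have each test mint the data it needs at runtime instead of leaning on a shared, hard-coded account.

```java
import java.util.UUID;

public class TestAccounts {

    // Unique per call, so suites can run in parallel against any
    // environment without colliding on a shared, hard-coded login.
    static String freshLogin() {
        return "it-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        String a = freshLogin();
        String b = freshLogin();
        if (a.equals(b)) {
            throw new AssertionError("logins must be unique per call");
        }
        System.out.println(a + " / " + b);
    }
}
```

The same trick applies to test orders and other fixtures: a setup method that creates them fresh beats a magic ID that only exists in one environment.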

Myth #3: Using try/catch blocks for flow control is OK in test code
Try/catch blocks are for error handling, and error handling alone. There's one situation that might tempt you to use a try/catch block in test code: Checking for expected exceptions. This is another area where TestNG shines. We've all seen the pattern:
@Test
public void throwsExceptionOnError() {
    try {
        makeErrorHappen();
        Assert.fail("We should have thrown an Exception");
    } catch (Exception e) {
        // Pass!
    }
}

Now: Go through your automation code base. How often have the authors omitted the unconditional Assert.fail() after your version of makeErrorHappen()? You're probably masking at least one or two bugs that you were never catching to begin with. Compare this with another great feature of TestNG: expectedExceptions and expectedExceptionsMessageRegExp. Now your test can look like this:
@Test(expectedExceptions = Exception.class)
public void throwsExceptionOnError() throws Exception {
    makeErrorHappen();
}

No spaghetti logic, no false passing when no exception is thrown. Check your test harness for its version of TestNG's expectedExceptions feature. You owe it to your customers.

Myth #4: Using try/catch blocks for anything in your test methods is optimal
Fact: You've been tempted to use try/catches in exactly two circumstances:

  1. Verifying expected exceptions
  2. Handling unexpected states during your tests
Case 1 should be dealt with as in Myth #3 above. As for case 2, an unexpected state during a test should lead to a test failure. Something went wrong, and your test didn't know about it! Either your tests are out-of-date or you just found a bug! Don't try to get the method to pass at all costs: just let the exception bubble up and fail the test. What's that? It's an intermittent timing issue? You're using exceptions to handle retry logic? Look into your test harness's version of TestNG's IRetryAnalyzer instead. Don't write retry logic into every test you may want to retry. The work is tedious, you'll miss a detail, and even if you do it right you'll have a lot of needlessly repeated logic cluttering your automation and obfuscating what you're really trying to test.
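To make the idea concrete, here is a hedged sketch of the retry pattern a harness-level hook like TestNG's IRetryAnalyzer centralizes for you; the `withRetries` helper and the flaky action are invented for illustration, not part of any framework.

```java
import java.util.concurrent.Callable;

public class Retry {

    // Retry a flaky action a bounded number of times; if every attempt
    // fails, rethrow the last exception so the test fails loudly.
    static <T> T withRetries(int maxAttempts, Callable<T> action) throws Exception {
        if (maxAttempts < 1) throw new IllegalArgumentException("maxAttempts must be >= 1");
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e; // remember the failure; try again if attempts remain
            }
        }
        throw last; // out of attempts: let the failure bubble up
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulates a dependency that succeeds on the third attempt.
        String result = withRetries(5, () -> {
            if (++calls[0] < 3) throw new IllegalStateException("flaky");
            return "ok";
        });
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The point of a harness-level hook is that this logic lives in exactly one place, instead of being re-implemented (slightly differently) inside every test method that touches a flaky dependency.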


Myth #5: Logging to the console gives back important data about a test run
Fact: I'm running hundreds to thousands of tests every time you check in, and I'm only watching assert and exception messages from failed cases. I don't care what you logged to System.out, and I'm not going to read it. In fact, I can't read it. It's gone now. Well, I suppose I could run the tests again and redirect the output to a text file. Wait, this suite takes 15 minutes to run and the issue only appears 20% of the time.
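One hedged sketch of the alternative (the assertion helper and the HTTP scenario are invented, not from the post): attach the diagnostic context to the assertion message itself, so the harness reports it alongside the failure instead of letting it vanish into stdout.

```java
public class AssertMessages {

    // A minimal stand-in for a framework's Assert.assertEquals(actual, expected,
    // message): the context rides along with the failure, where reports surface it.
    static void checkEquals(Object expected, Object actual, String context) {
        if (!expected.equals(actual)) {
            throw new AssertionError(
                context + ": expected <" + expected + "> but was <" + actual + ">");
        }
    }

    public static void main(String[] args) {
        try {
            // Invented scenario: an HTTP status check with its context inline.
            checkEquals(200, 503, "GET /orders/42 against test-env-1");
        } catch (AssertionError e) {
            // This is the message a test report would show for the failed case.
            System.out.println(e.getMessage());
        }
    }
}
```

Anything you would have logged "just in case" belongs in the assert message of the check it explains; that way it only appears when it matters, and it appears where people actually look.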

Myth #6: Rerunning the failed tests in a suite until everything passes is acceptable
Fact: This obscures the very nature of software testing. Software Testing can NEVER tell you that everything's all right; it can only do its damnedest to break your code. Why give up that perfectly beautiful failure with a rerun? If the tests can't all pass at once, there's either a problem with your tests or a problem with your code, and it's your job to find out which one it is. It's also your job to make sure that your regression suite can pass with a single button press, or else you're passing off too large an inner loop to the product team.

Where does test code differ from production code?
The mission of every automated test is to fail. Test cases store knowledge about how your system works and the demands it should be able to answer. Test code is different because of this mission, and it's different because test harnesses take care of a lot of the common reporting work of automated tests.

  • Test methods can throw checked exceptions. Make it a habit to add a "throws Exception" clause to your test method declarations (if you're working in Java).
  • Test methods handle errors by reporting them, not recovering from them. The system may have to recover, but your tests need to fail. A lot of the boilerplate logic of what expected/actual values were is taken care of by the suite of Assert methods you'll find with any testing framework worth its salt. In addition, the test harness knows to run every test, even if some fail. And it knows that you'll want the failure output, so expect that in the test reports.
Good code quality practices matter just as much to SDETs as they do to SDEs. They matter to anyone who writes code that may need to change in the future. The differences that do exist are there because of what test harnesses do to help, not because test code is less worthy of maintainability. Think about that the next time that tiny, evil voice in your head whispers seductively, "it's only test code...Ctrl+V...."