Sunday, March 27, 2011

6 Myths about Test Code

I cringe whenever I hear the phrase "it's just test code." Usually the speaker is implying that the code doesn't need well-named variables, doesn't need modularity, doesn't need clarity of thought, and that copy-pasta is just fine. Automation code certainly differs from production code, but not at the expense of code quality.

Myth #1: Copying and Pasting can help write test code faster
I see this all the time from SDET contractors. They don't tend to get renewed. In a rush to get something out the door and point to a massive checkin as evidence of their developer prowess, they paste the same changes over and over again.
Fact: Copy-pasta doesn't help any code get out the door faster. Copying and pasting is a way to generate needless work in the future. Every time you paste what should have been the body of a method, you're multiplying the scenarios which will be out-of-date when it comes time for the next design change. Put it in a method body and just let everyone sleep easier at night. TestNG even allows you to parameterize your test cases.
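To be concrete, here's roughly what that looks like with TestNG's @DataProvider; the scenario data, method names, and the applyDiscount helper below are made up for illustration:
@DataProvider(name = "discountScenarios")
public Object[][] discountScenarios() {
    // Each row is one scenario that would otherwise be a pasted copy of the test
    return new Object[][] {
        { 100.00, 0, 100.00 },   // no discount
        { 100.00, 10, 90.00 },   // simple percentage
        { 0.00, 50, 0.00 },      // zero-value order
    };
}

@Test(dataProvider = "discountScenarios")
public void appliesDiscountCorrectly(double subtotal, int percentOff, double expected) {
    // One method body covers every row; a design change touches exactly one place
    Assert.assertEquals(applyDiscount(subtotal, percentOff), expected, 0.001);
}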

Myth #2: Magic Numbers can help you write tests faster
Fact: Now that you've seen the statement in print, you immediately recognize it as absurd. Test code will constantly tempt you to hard-code magic data: retry timeouts, manually-created test orders, test account login information. Your task is to refuse such temptations. If your code can't create everything it needs to run at runtime, you'll wind up with dependencies on your environment or some configuration. Your tests become tightly coupled to the instance of the system they are testing against. If there is some common configuration data you need to reference, take a look at Spring or Java Beans (if you're testing Java code) as a convenient means to store common data.
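As a rough sketch of what that can look like, the shared values live in one place and the tests reference them by name; the file name and keys here are hypothetical, and a Spring context or bean definitions would serve the same purpose:
// Loads shared test configuration once, instead of scattering magic values through the tests.
// test-config.properties (a hypothetical classpath resource) might contain:
//   retry.timeout.ms=5000
//   service.base.url=http://localhost:8080
public final class TestConfig {
    private static final Properties PROPS = new Properties();
    static {
        try (InputStream in = TestConfig.class.getResourceAsStream("/test-config.properties")) {
            PROPS.load(in);
        } catch (Exception e) {
            throw new IllegalStateException("Could not load test configuration", e);
        }
    }

    public static int retryTimeoutMs() {
        return Integer.parseInt(PROPS.getProperty("retry.timeout.ms"));
    }

    public static String serviceBaseUrl() {
        return PROPS.getProperty("service.base.url");
    }
}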

Myth #3: Using try/catch blocks for flow control is OK in test code
Fact: Try/catch blocks are for error handling, and error handling alone. There's one situation that might tempt you to use a try/catch block in test code: checking for expected exceptions. This is another area where TestNG shines. We've all seen the pattern:
@Test
public void throwsExceptionOnError() {
    try {
        makeErrorHappen();
        Assert.fail("We should have thrown an Exception");
    } catch(Exception e) {
        // Pass!
    }
}

Now: Go through your automation code base. How often have the authors omitted the unconditional Assert.fail() after your version of makeErrorHappen()? You're probably masking at least 1 or 2 bugs that you were never catching to begin with. Compare this with another great feature of TestNG: expectedExceptions and expectedExceptionsMessageRegExp. Now your test can look like this:
@Test(expectedExceptions = Exception.class)
public void throwsExceptionOnError() throws Exception {
    makeErrorHappen();
}

No spaghetti logic, no false passing when no exception is thrown. Check your test harness for its version of TestNG's expectedExceptions feature. You owe it to your customers.
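If you also need to pin down the message, the regexp flavor mentioned above looks roughly like this; the exception type and message text are just examples:
@Test(expectedExceptions = IllegalArgumentException.class,
      expectedExceptionsMessageRegExp = ".*order id must be positive.*")
public void rejectsNegativeOrderIds() {
    makeErrorHappen();
}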

Myth #4: Using try/catch blocks for anything in your test methods is optimal
Fact: You've been tempted to use try/catches in exactly two circumstances:

  1. Verifying expected exceptions
  2. Handling unexpected states during your tests
Case 1 should be dealt with as in Myth #3 above. As for case 2, unexpected states during tests should lead to a test failure. Something went wrong, and your test didn't know about it! Either your tests are out-of-date or you just found a bug! Don't try to get the method to pass at all costs: just let the exception bubble up and fail the test. What's that? It's an intermittent timing issue? You're using exceptions to handle retry logic? Look into your test harness's version of TestNG's RetryAnalyzer. Don't write retry logic into every test you may want to retry. The work is tedious, you'll miss a detail, and even if you do it right you'll have a lot of needlessly repeated logic that will clutter your automation and obfuscate what you're really trying to test.
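If your harness is TestNG, the retry hook is a small class you write once; the class name and retry count below are placeholders:
// Retries a flaky test a bounded number of times, instead of burying retry loops in each test method
public class RetryFlaky implements IRetryAnalyzer {
    private static final int MAX_RETRIES = 2;
    private int attempts = 0;

    @Override
    public boolean retry(ITestResult result) {
        return attempts++ < MAX_RETRIES;
    }
}

A test opts in with @Test(retryAnalyzer = RetryFlaky.class), and the retry bookkeeping stays out of the test body.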


Myth #5: Logging to the console gives back important data about a test run
Fact: I'm running hundreds to thousands of tests every time you check in, and I'm only watching assert and exception messages from failed cases. I don't care what you logged to System.out, and I'm not going to read it. In fact, I can't read it. It's gone now. Well, I suppose I could run the tests again and redirect the output to a text file. Wait, this suite takes 15 minutes to run and the issue only appears 20% of the time.
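The fix is to put the data you'd want into the failure itself, where the harness keeps it and reports it. A small sketch, with made-up names for the object under test:
// Bad: the diagnostic context vanishes into console scrollback nobody will see
System.out.println("order state was: " + order.getState());
Assert.assertTrue(order.isShipped());

// Better: the context rides along with the failure and shows up in the test report
Assert.assertTrue(order.isShipped(),
        "Expected order " + order.getId() + " to be shipped, but state was " + order.getState());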

Myth #6: Rerunning the failed tests in a suite until everything passes is acceptable
Fact: This obscures the very nature of software testing. Software Testing can NEVER tell you that everything's all right; it can only do its damnedest to break your code. Why give up that perfectly beautiful failure with a rerun? If the tests can't all pass at once, there's either a problem with your tests or a problem with your code, and it's your job to find out which one it is. It's also your job to make sure that your regression suite can pass with a single button press, or else you're passing off too large an inner loop to the product team.

Where does test code differ from production code?
The mission of every automated test is to fail. Test cases store knowledge about how your system works and the demands it should be able to meet. Test code is different because of this mission, and it's different because test harnesses take care of a lot of the common reporting work of automated tests.

  • Test methods can throw checked exceptions. Make it a habit to just add the "throws Exception" clause to the signature of your test methods if you're working in Java (see the sketch after this list)
  • Test methods handle errors by reporting them, not recovering from them. The system may have to recover, but your tests need to fail. A lot of the boilerplate of reporting expected vs. actual values is taken care of by the suite of Assert methods you'll find in any testing framework worth its salt. In addition, the test harness knows to run every test, even if some fail. And it knows you'll want the failure output, so expect that in the test reports.
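Putting both points together, a typical test method ends up looking something like this; the domain names are illustrative:
@Test
public void newAccountStartsWithZeroBalance() throws Exception {
    Account account = createTestAccount();  // may throw checked exceptions; let them fail the test
    Assert.assertEquals(account.getBalance(), 0L,
            "A freshly created account should have no balance");
}
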
Good code quality practices matter just as much to SDETs as they do to SDEs. They matter to anyone who writes code that may need to change in the future. The differences that do exist are there because of what test harnesses do to help, not because test code is less worthy of maintainability. Think about that the next time that tiny, evil voice in your head whispers seductively, "it's only test code...Ctrl+V...."

1 comment:

  1. I'd agree with many of these points, but you forget to call out the lifespan of the test.

    If the lifespan is short, then copy/paste and magic numbers are actually ideal, as they improve ROI and reduce TCO.

    Now, the longer the lifespan, the more these things increase cost, for the reasons you point out.

    Young developers often over-optimize TCO while not properly determining ROI. Be careful not to fall into that pitfall.

    //wasntante
