I love automated testing. In a rare diversion into op-ed territory, I thought I'd put a few thoughts (read: opinions) together.
Before I start on how best to compose your tests, a brief question – what are the reasons for testing? Broadly, I think they are:
- Reduce total number of bugs / increase product stability
- Ensure software works as per specification
- Achieve the above at low cost, low impact.
I think this boils down to providing software that does what your customer wants (features), doesn't do what they don't want (bugs), and does it without making too much noise (cost).
Choosing your system
Choose a system that has a low barrier to entry, something people are keen to learn, or will already know:
- Pick one where there's value in learning it, such as a popular industry standard. Such systems will be better documented, better understood and more reliable, and your colleagues will be easier to get on board.
- Use the system “in-paradigm” – by which I mean, use it as it was meant to be used. Using it in an unusual, “out-of-paradigm” way will make your colleagues' lives difficult and prevent adoption.
Can you test multiple configurations, where some tests are only applicable to certain modules and configurations?
Is it robust?
- Will changes to test subjects make it easy to identify the tests that need changing? A change to your underlying implementation shouldn't silently break the tests.
- Avoid completely dynamic languages; compile-time checking prevents typographical errors and identifies tests that might need changing if the test subject changes.
Consider whether the system is usable both by developers and by less technical people – will you want testers or QA to be able to write tests?
Once upon a time I thought this was a no-brainer: is the test system fully automated? Or is it going to cost your company money each time you run the tests?
Tests should be fast to run and fast to write:
- Writing tests should not require time-consuming set-up of databases, DLLs or environments, automate anything of this nature.
- You should not need tacit knowledge of customised systems; no one wants to indulge in tedious manual set-up. It's just cost.
- Ask yourself – should running someone else's tests be possible with a single button press?
- The tests themselves should not take long to write.
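Automated set-up can be as simple as a helper that builds a throwaway environment for each test. A minimal sketch in plain Java – the `TestEnv` helper, its `app.properties` file and its layout are invented for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative helper: builds a disposable test environment so nobody
// has to hand-craft directories or config files before running tests.
public class TestEnv implements AutoCloseable {
    private final Path root;

    public TestEnv() throws IOException {
        root = Files.createTempDirectory("test-env");
        // Write the config the subject needs, rather than documenting
        // a manual set-up step somewhere.
        Files.writeString(root.resolve("app.properties"), "mode=test\n");
    }

    public Path root() {
        return root;
    }

    @Override
    public void close() throws IOException {
        // Clean up so repeated runs never collide.
        Files.deleteIfExists(root.resolve("app.properties"));
        Files.deleteIfExists(root);
    }
}
```

With JUnit 4 the same idea fits naturally into a @Rule, so every test gets a fresh environment with no manual steps.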
Don’t confuse tests for production code:
- Don’t worry too much about writing the most “effective Java” test code, or about reuse. Fields don’t need to be “private final”.
- You don’t need to enforce your coding standards on tests.
Test the behaviour, not the method (@Test void testMethodX anyone?):
- Consider a BDD based system.
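For instance, a test named after a behaviour reads as a sentence about the subject, not a pointer at a method. A framework-free sketch in plain Java – the `ShoppingCart` subject and its discount rule are invented:

```java
// Invented subject for illustration: a cart that discounts large orders.
class ShoppingCart {
    private int totalPence = 0;

    void add(int pricePence) {
        totalPence += pricePence;
    }

    // 10% off orders over 100.00 GBP -- the behaviour under test.
    int totalPence() {
        return totalPence > 10000 ? totalPence - totalPence / 10 : totalPence;
    }
}

class ShoppingCartBehaviour {
    // One check per behaviour, named after the behaviour -- contrast
    // the uninformative "testTotalPence".
    static boolean ordersOverOneHundredPoundsGetTenPercentDiscount() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(15000);
        return cart.totalPence() == 13500;
    }

    static boolean smallOrdersPayFullPrice() {
        ShoppingCart cart = new ShoppingCart();
        cart.add(500);
        return cart.totalPence() == 500;
    }
}
```

A BDD runner essentially formalises this naming, turning each method into a readable specification of the subject.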
Consider writing tests for interfaces, and then using a parameterized runner to run the same set of tests against each implementation.
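The idea, sketched runner-agnostically in plain Java – with JUnit 4, `@RunWith(Parameterized.class)` would supply each implementation instead of the explicit calls; the `Counter` interface and its implementations are invented:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Invented interface with two implementations sharing one contract.
interface Counter {
    void increment();
    int value();
}

class SimpleCounter implements Counter {
    private int n;
    public void increment() { n++; }
    public int value() { return n; }
}

class AtomicCounter implements Counter {
    private final AtomicInteger n = new AtomicInteger();
    public void increment() { n.incrementAndGet(); }
    public int value() { return n.get(); }
}

class CounterContract {
    // The same checks run against every implementation; a parameterized
    // runner would feed each one in as a test parameter.
    static boolean holdsFor(Counter counter) {
        if (counter.value() != 0) return false;
        counter.increment();
        counter.increment();
        return counter.value() == 2;
    }
}
```

A new implementation then gets the whole contract suite for free, just by being added to the parameter list.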
Test failure should clearly feedback into fixes:
- Capture output from tests so failure can be diagnosed.
- Make sure failed tests can be run in isolation from their suite, so you can focus on fixing failing tests.
- How long is the mean time between a test failing, the faulty code being fixed, and the test being rerun?
Test Support and Test Doubles
Document supporting code:
- Test doubles or fixtures won’t be reused if people don’t know they exist or how to use them.
With JUnit, consider using @Rules to provide mixin-esque components for tests.
Prefer fakes to other test doubles:
- They’re generally more versatile and reusable than stubs, dummies or mocks.
- They’ll give you a better understanding of the subject than other types of doubles.
- They can often share code with the implementation, and thereby test that as well.
- Give fakes a control interface so tests can drive them directly, e.g. putting components into error modes that cannot be stimulated through normal APIs, such as network issues or hardware failures.
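A control interface on a fake might look like this minimal sketch – the `FakeNetworkLink` class and its error-mode switch are invented to illustrate the point:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Invented fake: a network link whose failures can be switched on
// directly -- something no real API lets a test do on demand.
class FakeNetworkLink {
    private final List<String> sent = new ArrayList<>();
    private boolean errorMode = false;

    // The control interface: put the component into a failure state
    // that normal use could never trigger deterministically.
    void setErrorMode(boolean on) {
        errorMode = on;
    }

    void send(String message) throws IOException {
        if (errorMode) {
            throw new IOException("simulated network failure");
        }
        sent.add(message);
    }

    List<String> sent() {
        return sent;
    }
}
```

A test can now exercise the subject's error handling by flipping `setErrorMode(true)` before the call, rather than hoping for a real outage.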
Fake the third-party:
- In my job there’s a fair amount of JNI/JNA code that talks to hardware. By faking just the JNI methods, we can simulate various things, including timeouts and failures. I’ve done similar things with serial devices, faking javax.comm.SerialPort and pre-loading it with fake data that simulates failures or other errors.
- This works equally well with RESTful APIs and the like.
- Prefer running tests on a representative set-up using real code rather than using fakes.
- Try to run your tests out of container, so the software runs in as close to a production set-up as possible.
- If the software runs in a specific environment, run the tests there too, i.e. integration tests are preceded by a deployment (and an implicit test thereof); this in turn implies that deployment should be a button press.
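A serial-port fake of the sort described might be preloaded like this – a sketch only; the `FakeSerialPort` class is invented, standing in for a javax.comm-style device:

```java
import java.io.IOException;
import java.util.Deque;
import java.util.LinkedList;

// Invented stand-in for a third-party serial device: tests preload it
// with canned responses, including deliberate failures.
class FakeSerialPort {
    // A null entry marks a simulated read timeout.
    private final Deque<String> canned = new LinkedList<>();

    void preload(String response) {
        canned.addLast(response);
    }

    void preloadTimeout() {
        canned.addLast(null);
    }

    String read() throws IOException {
        if (canned.isEmpty()) {
            throw new IOException("no data");
        }
        String next = canned.removeFirst();
        if (next == null) {
            throw new IOException("simulated timeout");
        }
        return next;
    }
}
```

The subject under test talks to this exactly as it would the real port, so its timeout and error handling get exercised without any hardware on the desk.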
Make them repeatable:
- Tests written by one person should be easily accessible by another, i.e. version controlled.
- No tedious, error-prone work getting tests into version control – single-button commit.
- Can they run on computers other than your dev machine?
- If it’s not automated, it’s not repeatable.
Integrate with the build system:
- Your tests should run on your dev machine, on the CI server and in QA; each run gives you more confidence in the finished product.
- They should run in CI, probably headless, alongside concurrent executions of the same tests. Do they use the same hardcoded directories? Are they listening on the same ports?
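Concretely, a test can ask the OS for a free port and a private directory instead of hardcoding either; a sketch in plain Java (the `IsolatedResources` helper is invented):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.nio.file.Files;
import java.nio.file.Path;

// Per-run resources, so concurrent CI executions never collide.
class IsolatedResources {
    // Bind to port 0 and the OS hands back a free ephemeral port.
    static int freePort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }

    // A fresh directory per run, rather than a shared hardcoded path.
    static Path scratchDir() throws IOException {
        return Files.createTempDirectory("test-run-");
    }
}
```

Two builds running the same suite side by side then each get their own port and directory, and neither can break the other.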