This was originally posted on LinkedIn as Life without TDD – Driving with the brakes off, or driving with no brakes?
This version is updated with my current thinking around TDD and how to write!
Once upon a time…
Today one of our unit tests failed. Its failure stopped our automated release process in its tracks and we spent a little while figuring out a fix.
The cause of the failure was an unexpected change in a software version number. Although the test checked for anything “version number shaped”, rather than a specific version – a wise approach – it still failed. In short, an intentional change we made was delayed by a test that didn’t agree with it.
So we’d taken the effort to write a unit test, and then that very test had stopped us doing something perfectly valid. Does this invalidate TDD in some way? If you can’t make changes, does that equate to driving with the brakes on? Is TDD no good if, by doing it, we slow ourselves down?
What Went Wrong?
Having the test wasn’t wrong. There had been a bug in the way the software had been publishing its version number, and that test was there specifically to ensure it didn’t recur. When that test failed, we knew that the only thing to worry about was the version number.
We also knew that the feature we wanted to release was still working, even without spinning up the application to test it with our fingers and our eyes. In fact, we hadn’t needed to test it manually at all, thanks to TDD.
In terms of fixing the break in the test, the error was clear right away and we simply needed to add a couple of *s to a regular expression. Annoying and a bit more waiting time, but no big deal.
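To illustrate the kind of fix involved (the actual pattern isn’t shown in the post, so the regular expressions below are hypothetical), a “version number shaped” check can be too strict if it doesn’t anticipate suffixes or extra segments, and loosening it is a one-line change:

```java
import java.util.regex.Pattern;

public class VersionShapeCheck {

    // Hypothetical original pattern: only accepts MAJOR.MINOR.PATCH.
    static final Pattern STRICT = Pattern.compile("\\d+\\.\\d+\\.\\d+");

    // Loosened pattern: any number of dotted segments, plus optional
    // suffixes such as "-SNAPSHOT" or "-rc1".
    static final Pattern LOOSE = Pattern.compile("\\d+(\\.\\d+)*(-\\w+)*");

    static boolean looksLikeVersion(Pattern shape, String candidate) {
        // matches() requires the whole string to fit the pattern
        return shape.matcher(candidate).matches();
    }

    public static void main(String[] args) {
        System.out.println(looksLikeVersion(STRICT, "2.1.0"));          // true
        System.out.println(looksLikeVersion(STRICT, "2.1.0-SNAPSHOT")); // false: too strict
        System.out.println(looksLikeVersion(LOOSE, "2.1.0-SNAPSHOT"));  // true after loosening
    }
}
```

The point is that the test asserts the *shape* of the version rather than a hard-coded value, so a routine version bump doesn’t break it; only an unanticipated change of shape does, and that’s a quick pattern tweak.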
Total Cost of Ownership?
So did the test pay for itself, or did it cost us? Perhaps on this occasion it was a net cost of about 10 minutes. Perhaps the fact that we hadn’t needed to test the feature manually, which would potentially have required a few runs of the software before it was right, had already turned TDD into a huge net gain. It’s very hard to measure from this single incident.
The brothers and sisters of this test are still doing their job in securing the codebase against accidental change, though.
Later on in the day another change stopped our application from starting up on Tomcat. I know – what a day! We never got as far as trying to promote that failing build to a real environment – the automated end-to-end test found the failure and stopped us from promoting the change.
The fact that the tests stop genuinely bad things from progressing to stages in the pipeline where they can do damage is the real benefit here.
Sure, there’s braking occasionally, but the release train isn’t a runaway one!
It takes time to write tests but they pay back by stopping you from needing to run up an environment to be able to check the simplest of things. When testing’s integrated into your build, it stops you from messing up higher environments.
Finally, the discipline of trying to construct tests frequently leads to finding simpler solutions and designing cleaner interfaces.
Of course, you can easily argue that extra stuff that slows you down is both surplus to requirements and an obstacle to going quickly… But to argue that against TDD is to define TDD as the sort of tests you shouldn’t write – i.e. things which take too long to run and too long to maintain. That’s a straw man argument.
Finding the practical middle ground is an art and a balancing act. One that needs constant review.
Opinions expressed by Java Code Geeks contributors are their own.