Basic question: why do we write our tests first and make sure they go red?
Answer: because a test written after the fact might go green anyway, since it isn’t really testing the feature properly. Or we write the test after the fact, it fails, and we’re left with a lot of uncertainty about whether the feature is broken or the test is.
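To make the first failure mode concrete, here is a minimal, hypothetical Python sketch (all names are invented, not from this post): a test that goes green without exercising the feature at all, because it recomputes the expected value by hand instead of calling the code under test.

```python
def apply_discount(price, rate):
    # The feature we *meant* to test.
    return price * (1 - rate)

def vacuous_test():
    expected = 100 * (1 - 0.2)
    actual = 100 * (1 - 0.2)   # bug: never calls apply_discount
    assert expected == actual  # always passes, proves nothing
    return "green"

print(vacuous_test())
```

Written after the fact, this test goes green whether `apply_discount` works or not. Written first, it could never have gone red, which is exactly the signal that would have exposed it.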
My Test Went Surprisingly Red Just Now
So, in the spirit of practicing what I preach, we set about writing the tests before a recent change. We wrote a test that we thought ought to pass. This was really to anchor some existing behaviour: we had realised that the existing software assumed this case would probably exist, but hadn’t specifically tested for it at that particular level of abstraction.
The main long-term benefit of the passing test was to anchor some default behaviour we knew we were already seeing.
Then we wrote the test for the neighbouring case where we expected the software couldn’t do the thing we were testing for.
The test failed.
With the wrong sort of error.
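Here is a hypothetical sketch (invented names, not our actual code) of what a "wrong sort of red" can look like: we expect the code under test to raise a KeyError for an unknown input, but a bug in the test itself produces a TypeError instead.

```python
DISCOUNTS = {"gold": 0.2, "silver": 0.1}

def find_discount(tier):
    # Deliberately no default: an unknown tier should raise KeyError.
    return DISCOUNTS[tier]

def buggy_test():
    # Bug in the test: we pass a dict where a tier string was meant.
    find_discount({"tier": "bronze"})

def failure_kind():
    try:
        buggy_test()
    except KeyError:
        return "red for the expected reason"
    except TypeError:
        # A dict isn't hashable, so the lookup fails before the
        # behaviour we wanted to test is ever exercised.
        return "red for the WRONG reason - look at the test itself"

print(failure_kind())
```

The test is red either way; the point is that the *kind* of red tells you whether you are looking at the behaviour you meant to test or at a defect in the test itself.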
The Important Thing Is We Noticed
Because our attention was still on the behaviour of the problem, not on the solution to the problem, we were actually looking at how the test failed. When it turned out to fail in an unexpected way, we set our sights on determining:
- Why is the code doing this?
- Is the test doing what we think it is?
As it happens, it turned out that there was a bug in the implementation of the test.
Fixing this bug in the test saved us a lot of further surprise later on. Even if we had made the feature work, this test was ALWAYS going to give us problems.
Reducing The Cognitive Load
Since the only software change we were making was the construction of test code, we didn’t have too many variables to juggle as we worked out what was going on.
This is one of the key benefits of writing tests first. You focus on the right things with the least mental load.
The test soon went the correct shade of red, and we were able to proceed.