As discussed in What are We Testing Again? from Test Smells, making the test code explain its test case and test data is a vital responsibility.
When writing test code, the best advice is to focus on creating understanding rather than on keeping the code DRY. This is largely because refactoring tests to hide things behind methods or named constants can often leave the reader unable to see what’s behind those names.
The numbers used in an example can often be more transparent than names:
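The original example isn’t reproduced here, but a minimal sketch of the idea might look like this (the `is_earlier` function and the timestamp values are assumptions for illustration, not the original code):

```python
from datetime import datetime

# Hypothetical component under test (assumed for illustration):
# reports whether the first timestamp is strictly before the second.
def is_earlier(first, second):
    return first < second

# Literal values in the test body: the reader can see exactly what is compared.
def test_is_earlier():
    assert is_earlier(datetime(2023, 4, 1, 10, 30), datetime(2023, 4, 1, 10, 31))
```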
The above example kind of documents the earlier/later time values used, but it might be easier to understand if it were:
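As a hedged sketch of that improvement (again with an assumed `is_earlier` function and made-up values), deriving the later value from the earlier one makes the relationship between the two explicit:

```python
from datetime import datetime, timedelta

# Hypothetical component under test (assumed for illustration).
def is_earlier(first, second):
    return first < second

def test_is_earlier():
    earlier = datetime(2023, 4, 1, 10, 30)
    later = earlier + timedelta(minutes=1)  # the earlier/later relationship is now visible
    assert is_earlier(earlier, later)
```

The values are still right there in the test body, so nothing is hidden behind a distant constant, but the reader no longer has to compare two literal timestamps by eye.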
This helps the reader understand the precise use case.
Making it clear
Test names are a good option:
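For instance (a hypothetical test name and function, not the original code), the name itself can state the use case:

```python
from datetime import datetime, timedelta

# Hypothetical component under test (assumed for illustration).
def is_earlier(first, second):
    return first < second

# The name states the use case, so the reader depends less on the data.
def test_timestamp_one_minute_before_another_is_reported_as_earlier():
    some_time = datetime(2023, 4, 1, 10, 30)
    assert is_earlier(some_time - timedelta(minutes=1), some_time)
```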
With the above, we might rely less on seeing the exact test data in the code.
You have to look at this from both sides. We definitely need:
- Good names of the use case
- Transparent test data
If we start to hide the test data behind labels, we begin to break the latter. What if we’re writing a lot of examples, though? How can we avoid creating a multitude of near-identical test functions, each with explicit test data in it, just so that the name of the test function can explain the test case?
Naming in Parameterised Tests
We need to think of a parameterised test as a table of inputs and outputs that defines the specification by example of a component in the system. The core test function does much the same thing each time: feed in concrete test data and check the result. Making the data tabular and sharing a common function makes the test richer, but what about naming?
There are a few options:
- Name some constants and use them to populate the array of input values – if the names are great then the reader of the code can see the use cases
- Add a comment next to each row of the input explaining the use case – again, good for the reader of the test code
- Add a String value describing the use case to the row of inputs – visible both at code inspection time and also at test execution/reporting time
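The third option might be sketched like this in plain Python (hypothetical names and a hypothetical `is_earlier` function; in a framework such as pytest the same table would typically feed `@pytest.mark.parametrize`, with the description becoming the test ID):

```python
from datetime import datetime, timedelta

# Hypothetical component under test (assumed for illustration).
def is_earlier(first, second):
    return first < second

BASE = datetime(2023, 4, 1, 10, 30)

# Each row carries a description, so a failure names the use case.
CASES = [
    ("time before another is earlier", BASE - timedelta(minutes=1), BASE, True),
    ("identical times are not earlier", BASE, BASE, False),
    ("time after another is not earlier", BASE + timedelta(minutes=1), BASE, False),
]

def test_is_earlier_cases():
    for description, first, second, expected in CASES:
        assert is_earlier(first, second) == expected, description
```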
Of these, I recommend using none of them if the test is really simple. If we have several examples of exactly the same use case, then the one name and the data will be self-explanatory. If we have multiple use cases expressed through the same assertion mechanism, then adding a documentation field – and enriching the test to show it in the test name or mention it in an assertion failure – will help diagnose issues when the test fails, and will help readers and maintainers of the test in future.
There’s no perfect answer here, but I hope this discussion has helped you decide what works for you.