The “checking” part of testing is really about trust. We check because we want to make sure our system works as we anticipated. Therefore, we build a suite of tests that confirm our assumptions about the system. And every time we look at the test results, we want to be 100% sure these tests are not lying to us.
We need to trust our tests, because then we won’t need to recheck every time. We’ll know a failure points at a real problem, and that the mass of tests we’ve accumulated over the years was not an utter waste of our time.
We need to know that no matter:
- Where in the world the test runs
- When the test runs
- On which kind of machine the test runs
- Who runs the test
- How many times we run it
- In what order we run it, if run alone or in sequence
- And under which environmental conditions we run it
The result will not be affected.
Isolation means we can put a good chunk of trust in our tests, because we eliminate the effect of outside interference.
If we ensure total isolation, we’ll know that not only does Test XYZ have reliable results, it also doesn’t affect the results of any other test.
There’s only one small problem.
We cannot ensure total isolation!
Is the memory status the same every time we run the test?
Did our browser leave temporary files around the last time that might impact how full the disk is?
Did the almighty garbage collector clear all the unused objects?
Was it the same length of time since system reboot?
We don’t know.
Usually these things don’t matter. As in real life, we’re good at filtering out the un-risky stuff that can have an effect, but usually doesn’t.
So we need good-enough isolation. And that means a minimal, controllable footprint.
- Every memory allocated by the test should be freed
- Every file the test created should be deleted.
- Every file the test deleted should be restored.
- Every changed registry key, environment variable, log entry, etc. should be reverted.
I’m using Test, but I actually mean Test AND Code. So if the tested code does something that requires rollback, the test needs to do it as well.
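A sketch of what that rollback discipline can look like in practice (the `APP_MODE` variable and the structure are illustrative assumptions, not a prescribed pattern): the test restores everything it, or the code under test, touched, whether it passes or fails.

```python
import os
import tempfile

def test_with_minimal_footprint():
    created_path = None
    old_value = os.environ.get("APP_MODE")  # hypothetical variable
    try:
        # The test (or the tested code) changes the environment...
        os.environ["APP_MODE"] = "test"
        fd, created_path = tempfile.mkstemp()
        os.close(fd)
        # ... exercise the code under test here ...
    finally:
        # ... and rolls every change back, pass or fail.
        if created_path and os.path.exists(created_path):
            os.remove(created_path)
        if old_value is None:
            os.environ.pop("APP_MODE", None)
        else:
            os.environ["APP_MODE"] = old_value
```

The `finally` block is the point: cleanup runs even when the assertion in the middle blows up, so one failing test can’t poison the next.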
Mister, You Are A Terrible Isolationist!
It’s not the first time I’ve been called that.
Sounds a bit extreme, doesn’t it? I mean, if I test against a “dirty” database, and don’t rely on any previous state, am I doing things wrong? Do I need to always start from the same database?
Well, yes and no.
If you’ve analyzed the situation and written a test that doesn’t rely on previous state, you’ve already taken isolation into account. So a suite of tests that piles data onto the database and doesn’t clean it up is in a context that doesn’t care about footprint.
The question is: what if the test fails? Since you’ve allowed the tests to mark their territory, you now have failures that are hard to reproduce. That will cost you in debugging, and maybe even in resolving the problem.
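One way to get the best of both, sketched below with a plain dictionary standing in for the shared database (all names are illustrative): generate a unique key per run, so leftover data can’t collide with the test, and clean up anyway to keep failures reproducible.

```python
import uuid

# A hypothetical in-memory "database" standing in for a shared, dirty one.
db = {"existing-user": {"name": "left over from other tests"}}

def test_survives_dirty_state():
    # Instead of assuming the store is empty, the test generates a
    # unique key, so leftover rows cannot collide with it.
    key = f"user-{uuid.uuid4()}"
    db[key] = {"name": "Dana"}
    assert db[key]["name"] == "Dana"
    # Cleaning up keeps the footprint minimal and failures reproducible.
    del db[key]
```

The test tolerates a dirty store without depending on it, which is exactly the “doesn’t rely on previous state” context described above.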
As always, it’s an ROI balance of risk analysis and mitigation. The thing is, you need to be aware of that balance when making the decision.