Software Development

Why Most Unit Testing is Waste???

A rebuttal of points raised in this article by James O Coplien.

It’s worth noting that James O Coplien is a well-respected father of modern software engineering, so it feels odd to be writing such a piece.

Write the wrong sort of code, and the wrong sorts of tests will annoy you.

The myth about good code structure

The article starts with an incorrect claim: that code used to be well structured, and that you would trace the lower-level functions from the business case.

I’d counter that building the implementation directly from business requirements does not naturally lead to good code structure. A solution to a problem is often based on abstractions and simplifications, or on powerful patterns that can be applied to a business problem.

Similarly, when we apply approaches like DRYing out our software, reducing cyclomatic complexity and, above all, keeping functions small, our lower-level software becomes a set of abstract building blocks. Blocks with a single responsibility have proven to be very good ones.

These structures need only come into existence when the business problem we’re solving requires them. Critically, there should be automated tests that reflect the business cases. This is where BDD helps.
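As a rough sketch of what that looks like (the domain names and the JUnit 5 test framework are my assumptions, not anything from the original article), a unit test can be named after the business case it pins down:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Hypothetical domain rule, invented purely for illustration.
class DiscountPolicy {
   double priceAfterDiscount(double orderValue) {
      // orders of 100 or more get 10% off
      return orderValue >= 100 ? orderValue * 0.9 : orderValue;
   }
}

// The tests read like the business cases they protect.
class DiscountPolicyTest {
   private final DiscountPolicy policy = new DiscountPolicy();

   @Test
   void ordersOfOneHundredOrMoreGetTenPercentOff() {
      assertEquals(90.0, policy.priceAfterDiscount(100.0), 0.001);
   }

   @Test
   void smallerOrdersPayFullPrice() {
      assertEquals(50.0, policy.priceAfterDiscount(50.0), 0.001);
   }
}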

Object Oriented Programming Loses the Context

The idea that the various object oriented techniques lose us the ability to statically analyse the software is both true and false.

It’s true that you may need to execute a series of modules to understand their dynamic behaviour. It’s probably untrue that any non-trivial alternative approach is easier to follow. I’m presently working with some monolithic ASP pages. It’s all there in the page, with limited modularisation, and it’s harder to follow than well-formed, well-named components brought together.

The case against polymorphism, implied by the article, is terrifyingly backwards.

However, perhaps the point being made here is that unit testing becomes necessary to assemble loosely coupled modules together in a way that’s less relevant with older procedural programming. Who knows what we’d do with request and global state in the latter at test time, though.

Unit Tests are Unlikely to Test More Than One Trillionth of the Functionality of any Given Method

This is nonsense.

It’s true that there are some methods you can write that are so innately complex that you have no chance of trying out every permutation of pathways through them.

It’s also true that proving that all pathways are followed does not entirely equate to proof that the method works bug free.

Test-driven development takes the approach that we write a test case first, and this creates the need for well-formed software to achieve that outcome. We work in small increments, refactoring as we go, and we prefer small, single-responsibility, single-level-of-abstraction methods. Add all of this up, and the unit tests we write for the lowest-level functions let us test them exhaustively – especially because the exact boundaries we need are the first thing we think of with TDD.
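To illustrate (a minimal sketch, with an invented function and JUnit 5 assumed): a single-responsibility method has boundaries that are easy to enumerate before the code even exists.

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Invented single-responsibility function: is a value a valid percentage?
class Percentages {
   static boolean isValidPercentage(int value) {
      return value >= 0 && value <= 100;
   }
}

class PercentagesTest {
   // The interesting boundaries are known up front, so the unit tests
   // can cover them exhaustively.
   @Test
   void boundariesOfAValidPercentage() {
      assertFalse(Percentages.isValidPercentage(-1));
      assertTrue(Percentages.isValidPercentage(0));
      assertTrue(Percentages.isValidPercentage(100));
      assertFalse(Percentages.isValidPercentage(101));
   }
}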

Adding test automation AFTER the fact, to code structures that are naturally less manageable, will behave exactly as the article predicts, and waste time.

So don’t.

Smaller Functions Don’t Encapsulate Algorithms

The argument that you shouldn’t break a large function down into smaller ones doesn’t end well.

If you fracture code randomly while refactoring from larger things down to smaller things, then it ends badly. I’ve yet to find monolithic code that is easier to reason about than its reasonably refactored equivalent.

I’ve seen code over-refactored, and I’ve seen some patterns that increase the number of awkward boundaries possible in an algorithm.

But the argument that you can’t manage a broken-down algorithm, and that you’re just gaming the tests, is backwards.
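As a sketch of what a reasonable breakdown might look like (the invoice example and its names are mine, not the article’s), each step of a larger routine becomes a named building block that can be read and reasoned about on its own:

import java.util.List;

// Hypothetical refactoring: the steps of a larger calculation get names,
// instead of living inline in one long method.
class InvoiceCalculator {

   double totalFor(List<Double> lineItems, double taxRate) {
      double net = sumOf(lineItems);
      return applyTax(net, taxRate);
   }

   // Small, single-purpose steps: easy to name, read and test.
   double sumOf(List<Double> lineItems) {
      return lineItems.stream().mapToDouble(Double::doubleValue).sum();
   }

   double applyTax(double net, double taxRate) {
      return net * (1 + taxRate);
   }
}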

What is Good Coverage?

It’s a myth to assume that we’re aiming for 100% coverage. That said, I tend to achieve high coverage.

High coverage is a relatively weak metric of code quality.

However, low coverage – i.e. less than 80% – is a very strong metric. It implies a few things:

  • The developers don’t care about testing
  • We’re probably not doing TDD
  • We have higher cyclomatic complexity
  • We may be writing code that’s redundant
  • We’re using coding patterns that come with extraneous edge cases
  • We have boilerplate bloat without needing it

The waste the article speaks of is what happens when developers cargo-cult the process of software testing. The purpose of TDD is to drive features, quality and design into the software. The idea that someone has to use this function makes us see it differently and often build it better. Only by suffering the indignity of using our own software do we produce something better rounded.

Of course, high coverage requirements can lead to some seemingly wasteful practices. To make Sonar happy, we occasionally add something that doesn’t seem to add much value…

But every test we write is a stake in the ground. It pins some functionality or behaviour down and gives us early warning if our assumptions stop being met in the future.

Cut the Unit Tests and Go For More Integration Testing

The well known test pyramid begs to differ here.

Coplien argues that too much worry around unit testing may come from a lack of integration, and that you can measure the ratio of unit test code to actual code to determine the fear factor. He argues you should cut the unit testing and integrate more.

Integration tests require a complex universe to be set up before they can start, and then a complex analysis of outcomes to complete. They usually involve more steps, and they’re more brittle as systems change.

Why is this better?

The idea that there might be too many lines of unit test code is an interesting one, though, and it’s part of the challenge of writing good unit tests – see the Test Smells list for a lot more on this. There’s a dilemma: you invest time and lines of code in building test automation in order to document and nail down the behaviour of the system… but then it just sits there.

But running tests should be quick and cheap, and should help debugging. Tests also remove the need to start the whole application for 90% of the checks we need to believe the software is likely to work.

If your tests have no relevance to the likelihood of the system working, then you’re doing something wrong.

Throw Away Tests That Never Fail

This is almost a good idea. If a test doesn’t fail, then perhaps it’s testing nothing important. It might be testing some boilerplate, or an area of the code that’s seldom visited…

But the cost of running a test is essentially zero. Test suites are slowed down more by writing the wrong sorts of tests than by writing lots of small, simple ones.

We should throw away tests that:

  • Test implementation rather than behaviour (see the sketch after this list)
  • Duplicate other tests
  • Are impossible to understand – we should replace them with clearer tests as we refactor
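On the first of these, a small illustration (invented example, JUnit 5 assumed) of the difference between pinning behaviour and pinning implementation:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Invented example: a counter with an internal detail.
class VisitCounter {
   private int visits;                 // implementation detail
   void recordVisit() { visits++; }
   String summary() { return visits + " visit(s)"; }
}

class VisitCounterTest {

   // Behaviour: what callers actually observe. This survives refactoring
   // of the internals.
   @Test
   void summaryReflectsRecordedVisits() {
      VisitCounter counter = new VisitCounter();
      counter.recordVisit();
      assertEquals("1 visit(s)", counter.summary());
   }

   // An implementation-shaped test would instead reach for the private
   // 'visits' field (via reflection, or a getter added only for testing)
   // and would break the moment the internal representation changed.
}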

Keeping Tests Up To Date Reduces Velocity

In TDD this makes no sense. We always write tests first, and this implies keeping them up to date.

But a feature we add may contradict an existing test. That’s good. We can then update that test – perhaps without adding a new one, or at least without starting afresh with a new one.

If all our functionality sits in a mass of glue in some hotspot, then any small change there will damage tests across a wide blast radius.

If software change has a huge blast radius, then your software design is poor.

The same is also true for test structures. Tests of the implementation, or very implementation-dependent tests – often found in UI testing – also turn out to be a case of the code structure not being optimal for velocity.

The open closed principle probably helps us here.

As does abstraction.
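A minimal open/closed sketch (names invented, JUnit 5 assumed): when tests are written against an abstraction, adding new behaviour means adding a new class and a new test, rather than editing the existing ones.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Invented abstraction: new fee rules extend the system without
// modifying, or re-testing, what already exists.
interface FeeRule {
   double feeFor(double amount);
}

class FlatFee implements FeeRule {
   public double feeFor(double amount) { return 1.50; }
}

class PercentageFee implements FeeRule {
   public double feeFor(double amount) { return amount * 0.02; }
}

class FeeRuleTest {
   @Test
   void flatFeeIsAlwaysTheSame() {
      assertEquals(1.50, new FlatFee().feeFor(10.0), 0.001);
   }

   @Test
   void percentageFeeScalesWithAmount() {
      assertEquals(0.20, new PercentageFee().feeFor(10.0), 0.001);
   }
}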

Tracing Tests Back to Business Requirements

“If this unit test fails, what business requirement can’t we meet?”

Great point. It sounds to me like the unit test is a likely harbinger of an integration test that might also fail… or maybe we’re in the category of boundary conditions that are hard to contrive in any form of testing, but easy to manage in a unit test.

There’s huge value in this. We cannot achieve the permutational coverage needed to manage everything from system-sized black-box testing, but we can easily do it down at the unit level.
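For example (an invented calculation, with JUnit 5’s parameterized tests assumed), permutations that would be slow to contrive through the whole system are cheap to enumerate at the unit level:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Invented example: the awkward calendar boundaries are trivial to list here.
class LeapYears {
   static boolean isLeapYear(int year) {
      return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
   }
}

class LeapYearsTest {
   @ParameterizedTest
   @CsvSource({
      "2000, true",   // divisible by 400
      "1900, false",  // divisible by 100 but not 400
      "2024, true",   // divisible by 4
      "2023, false"   // not divisible by 4
   })
   void coversTheAwkwardCalendarBoundaries(int year, boolean expected) {
      assertEquals(expected, LeapYears.isLeapYear(year));
   }
}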

That we can’t relate each unit test failure to a business requirement is not necessarily an issue. The components of the system should be meaningful in their own right. If not, then we have badly designed code, not a testing problem.

Unit Tests are Assertions in Disguise

It used to be the case that you’d put assertions in the production code as you went along. If an assertion failed, the production code would crash and you could get a bug report from the logs.

Some companies insist on assertion-like checks at runtime. For example:

void myFunction(Input one, Input two) {
   Objects.requireNonNull(one);
   Objects.requireNonNull(two);

   // now proceed knowing there'll be no unexplained 
   // null pointer exceptions
}

I didn’t really enjoy using the above structures, because it felt like code bloat, but it does something useful: it brings assumptions and errors about runtime screw-ups right to the front, so they fail in a useful way before doing damage.

I resolved my discomfort with the above by knowing that I could make these failures more likely to happen at unit test time, where I could fix them, rather than at actual runtime.
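A sketch of what that can look like (the enclosing class, the Input type and JUnit 5 are my assumptions, since the snippet above is only a fragment): a unit test pins the null-rejecting guard down so it fails on a developer’s machine rather than in production.

import static org.junit.jupiter.api.Assertions.assertThrows;

import java.util.Objects;

import org.junit.jupiter.api.Test;

// Minimal stand-ins for the types in the snippet above (names assumed).
class Input { }

class Validator {
   void myFunction(Input one, Input two) {
      Objects.requireNonNull(one);
      Objects.requireNonNull(two);
   }
}

class ValidatorTest {
   // The guard clause now fails at unit test time, not as a runtime surprise.
   @Test
   void rejectsNullInputsUpFront() {
      assertThrows(NullPointerException.class,
            () -> new Validator().myFunction(null, new Input()));
   }
}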

There’s limited value in waiting for a bug report when we can meaningfully explore our code as we write it, driving most bugs out before we even start the application.

Create System Tests with Feature Not Code Coverage

I agree with this. This is where ATDD or BDD can help. Feature coverage, though impossible to measure precisely, is the real metric of test coverage.

Debugging is Not Testing

Agreed… almost.

A consultant I met a while back boasted of never using the debugger, because they just wrote tests and the tests found the issue.

I always feel bad running a debugger these days, as it’s an admission that I can’t think of a unit test that would drive a stake into the issue that’s going wrong.

I feel worse when I’m debugging the app as it runs. Most of my time running a debugger is spent debugging a test – a smaller, quicker thing to run – and the outcome is usually to write a test I’d missed for a particular edge case, and then make the necessary change to get all tests green again.

We Are Trying to Get The Computer to Think

The author is suggesting people adopt a test approach of:

  • Believe your tests are right because they’re more thorough
  • Then just hack code until it goes green, failing fast and often, and experimenting until the computer tells you you’re right

In some cases – trivial ones – I’d argue that this is exactly the method. I don’t really want to waste too much time agonising over operator precedence or whether something is a plus or a minus in a calculation. I’d rather tune this stuff empirically against meaningful tests that make me think about my goals, not any particular implementation.

However, the best advice for any developer is the same as Merlin gives to King Arthur in the musical Camelot. “Arthur: don’t forget to think!”.

Conclusion

If you write code and tests badly, if you set up a war between a tester and a developer fought over red/green test automation, or if you fail to maintain tests or write the wrong sort of test, then your tests will feel like a waste, because they’re not, themselves, functioning as either a test or a user feature.

If you do things as well as you can – driving good design and features into a system, and keeping them there with well-thought-out tests over a well-thought-out implementation – then it’ll speed you up overall.

There are some minor test rituals that we may do to appease coverage gods, but they pay off in other ways.

There’s also a cost to maintaining tests, especially flickering ones.

However, testing allows you to manage complexity by driving good practices into your software.

Published on Java Code Geeks with permission by Ashley Frieze, partner at our JCG program. See the original article here: Why Most Unit Testing is Waste???

Opinions expressed by Java Code Geeks contributors are their own.

Ashley Frieze

Software developer, stand-up comedian, musician, writer, jolly big cheer-monkey, skeptical thinker, Doctor Who fan, lover of fine sounds

Comments
Matt – 2 years ago

No discussion of mocking or the pitfalls of overdoing it. Oh well. :)

ivan – 2 years ago

I agree more with Coplien on this. I’m bothered by this idea of mocking; instead of mocking the db, just run a test db; instead of mocking a rest service, just call it. After all, how can you mock calling a complex pl/sql procedure, how can you mock a complex SQL statement, how can you mock a stateful series of rest calls, how can you mock an HSM module, how can you mock a message queue, a cache, etc.? With poorly written and misbehaving duplicates? With mocking, you build duplicates of all those complex external things and oversimplify them. You break DRY. For end-to-end tests, I have an environment… Read more »

Ashley Frieze – 2 years ago – Reply to ivan

@Ivan Hr – so do I… but… If you ONLY test against “the real thing” (and by the way, your emulators are just fancy mocks) then your tests will run veeeerrrryyyy sloooowwwllllyyy, which is another “waste”. There’s definitely a place for this type of testing. I use it very effectively. But… unit tests should really run in milliseconds and there should be many, many of them. So these tests you’re describing are probably there to test the real relationship between a client and its service. At unit level, perhaps we can mock the api client so that we can produce… Read more »

Lukasz Bownik – 2 years ago

Apparently we’ve been trying to grapple with Coplien’s essay at more or less the same time. You seem to disagree with most of what he said; I took a slightly different approach:
https://www.codeproject.com/Articles/5315115/On-Design-Testing-and-Why-Some-Unit-Tests-are-Wast

I’m interested in what you think about it. This could be a nice discussion :)

balkanoid – 2 years ago

Integration testing is far superior to unit testing, and TDD is possible only when the documentation in the solution proposal is complete… an ideal-world case. Covering more than 20% of complex logic code with unit tests is pure religion.

Ashley Frieze – 2 years ago – Reply to balkanoid

With respect. That’s insane. It’s literally the opposite of that. You cannot easily or quickly reach the edge cases from integration tests, where TDD forces you to both consider the edge cases, and engineer the code to handle them better.

Calling this religion somewhat ignores the thousands of practitioners who get excellent results from this every hour of their working day.

I presume you deploy to production rarely, and go through long test cycles.

balkanoid – 2 years ago – Reply to Ashley Frieze

Unfortunately, from my experience, unit tests are mostly written for parts of the code that obviously work. In the real world, where clients change their minds on an hourly basis, and with new-age managers (who have never coded before, and can hardly write a complete solution proposal), it is simply impossible to test “everything” and start with TDD – especially for the UI part of the code. Most of the bugs I have encountered are caused by mismatches between multiple components of the system: database, services, applications… And end-to-end tests can catch most of these. Those mismatches are made by distributed… Read more »

Ashley Frieze – 2 years ago – Reply to balkanoid

What we have here is an example of “TDD done wrong is rubbish”, and I agree with that. For a start “Testing things we know work” suggests writing pointless tests after the code, rather than using tests to drive features into code. As for “And does it work in practice”, if you get the lower layers and architecture right, then you DO need SOME testing around the integration. If you get them flimsy and bad, then you need PARANOID testing around the integration. Look at the “Test ice cream” vs the “Test Pyramid”. If we care about all layers of… Read more »


rune – 1 month ago

A piece of code that I’ve been using to demonstrate the issue of relying on unit tests: public T IncMax<T>(T value, T max){ return value + 1 <= max ? value++;value = max;} It’s absurdly easy to get 100% statement coverage, and even branch coverage, with a few unit tests. When I’ve done presentations, I’ve put a dinner at a fancy restaurant on the line if the audience could write tests that would make it impossible for me to write code that would fail. I.e. I’ve posited that I could write code where the result would be either higher than… Read more »
