
TDD Misbeliefs

While working on a previous article about Test-Driven Development (TDD), I read many blog posts and a few books on the subject and found that I disagree with some of them, even some rather important ones. It seems that most software experts simply misunderstand how software development works, maybe because they are not really programmers but book authors and conference speakers.

Test-Driven Development

Robert Martin (@unclebobmartin) in The Cycles Of TDD:

If the changes you make to the production code, pursuant to a test, make that test pass, but would not make other unwritten tests pass, then you are likely making the production code too specific.

I disagree. This statement goes against the very philosophy of testing: “a passing test is a weak test.” Unfortunately, the traditional understanding of testing is quite the opposite: tests must pass to make us happy. Thus, if we keep thinking about how to make our future tests pass, we will shoot ourselves in the foot: the tests will pass and the code quality will go down. Instead, we must design our code in a way that makes it easy for future tests to break it. The code must help its tests break it! A test that is easy to make “red” is a good test; a test that is always “green” is a useless test. Uncle Bob, I’m sure, is aware of that.
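To make this concrete, here is a minimal Java sketch (the Calculator class and both tests are hypothetical, invented for illustration). The hard-coded implementation is “too specific” in Uncle Bob’s sense, and it is exactly the unwritten test that exposes it:

// A hypothetical class whose implementation is "too specific":
// it was written only to satisfy the single test below.
class Calculator {
    int add(int a, int b) {
        return 4; // hard-coded; passes add(2, 2) and nothing else
    }
}

class CalculatorTest {
    public static void main(String[] args) {
        // The written test passes (run with: java -ea CalculatorTest):
        assert new Calculator().add(2, 2) == 4;
        // The "unwritten" test breaks it, which is exactly what
        // a good test should be able to do:
        assert new Calculator().add(3, 5) == 8; // fails until add() is real
    }
}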

James Shore (@jamesshore) in Red-Green-Refactor (Nov 2005):

You’ll run through several cycles very quickly, then find yourself slowing down and spending more time on refactoring.

I disagree. I have nothing against the first two steps: 1) the test is “red” because it doesn’t pass, and 2) the test is “green” once the code is fixed and it passes. What I disagree with is the idea that refactoring is the responsibility of the person who fixes the test. If the code needs refactoring, that is a bug like any other bug, whether functional or design-related. It has to be reported, assigned, and paid for. We must not make any code modifications, no matter how good our intentions are, if they are not required and paid for. Refactoring right after fixing the test is a frivolous violation of the management and coordination structure of a project.
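For the record, the two steps I do accept look like this in a hypothetical Java sketch (the Fraction class is invented for illustration):

class Fraction {
    private final int num;
    private final int den;
    Fraction(int n, int d) { num = n; den = d; }
    @Override
    public String toString() {
        // Step 1 ("red"): while this returned, say, String.valueOf(num),
        // the test below failed.
        // Step 2 ("green"): this fix makes the test pass. Any further
        // restructuring is a separate, reported and paid-for task.
        return num + "/" + den;
    }
    public static void main(String[] args) {
        // Run with: java -ea Fraction
        assert new Fraction(1, 2).toString().equals("1/2");
    }
}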

Robert Martin (@unclebobmartin) in When TDD doesn’t work (April 2014):

You don’t have to write tests for Test Doubles because the actual unit tests and the production code are the tests for those pieces of code.

I disagree. Test Doubles (also known as fake objects) are the tools that help us find out where the code is broken. If the tool is unreliable, how can we test our code against it? This reminds me of an old joke where a patient comes to the doctor and says, “Help me, doc, my body hurts anywhere I touch it with my finger!” and the doctor answers, “It’s just your finger — it’s broken!” A very similar situation occurs here: if we test our production objects with broken Test Doubles, they will all look broken.
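Here is a minimal Java sketch of the point (the Exchange interface, FakeExchange, and their test are hypothetical). The fake encodes non-trivial behavior of its own, so it deserves its own unit test; otherwise a bug in the fake makes every healthy production object that relies on it look broken:

interface Exchange {
    double rate(String from, String to);
}

// A Test Double: a fake implementation used by unit tests
// of production code instead of a real, slow, remote exchange.
final class FakeExchange implements Exchange {
    @Override
    public double rate(String from, String to) {
        return from.equals(to) ? 1.0d : 1.25d; // simplistic stubbed rates
    }
}

final class FakeExchangeTest {
    public static void main(String[] args) {
        Exchange fake = new FakeExchange();
        // If these assertions don't hold, it's the "finger" that is
        // broken, not the production code tested against the fake.
        // Run with: java -ea FakeExchangeTest
        assert fake.rate("USD", "USD") == 1.0d;
        assert fake.rate("EUR", "USD") == 1.25d;
    }
}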

David Heinemeier Hansson (@dhh) in Testing like the TSA (April 2012):

Code-to-test ratios above 1:2 is a smell, above 1:3 is a stink.

I disagree. I don’t know exactly what units of measurement were used to compare the “code” and the “test,” but I can only assume lines of code. I was curious and decided to calculate this ratio in a few projects of mine. First, I tried jcabi-github, an immutable GitHub Java API client. The numbers were: 9.8K LoC in production classes, 6.2K in built-in fake classes, and 16.2K in test classes. Thus, the ratio was 9.8 to 22.4, or roughly 1:2.3. Somewhere between a “smell” and a “stink,” according to David. Then I calculated the ratio for a few other projects of mine: jcabi-http (1:1), xembly (1:0.92), takes (1:0.91), and rultor (1:0.6). It seems that the higher the ratio, the higher my confidence in the product’s quality. Thus, I don’t think that a high ratio is a smell or a stink. Instead, in a yummy-scented product the amount of test code is a few times larger than that of its production counterpart.

Kent Beck (@kentbeck) in How deep are your unit tests? (Sep 2008):

I get paid for code that works, not for tests.

I disagree. Tests are not a separate product that we are either paid for or not; tests are part of the code, an instrument of its development, maintenance, and validation. Tests are similar to, say, file names. We don’t write our code naming all the files 1.java, 2.java, 234.java, and then say: “Now you pay me so that I can rename them properly.” That would be weird, right? That’s how the statement “I’m not paid for writing tests” sounds to me: weird. Do we really have to be paid to name files correctly? We just do it, because it’s convenient for us, because proper self-descriptive file names make our code more readable and maintainable. It’s impossible to imagine a modern maintainable code base without tests. I would actually suggest changing that phrase to: “I get paid for code that is tested, not just for code.”

I will keep updating this post. If you know a “good” article about TDD, please send it my way; maybe I’ll find something interesting to quote from it.

Published on Java Code Geeks with permission by Yegor Bugayenko, partner at our JCG program. See the original article here: TDD Misbeliefs

Opinions expressed by Java Code Geeks contributors are their own.

Yegor Bugayenko

Yegor Bugayenko is an Oracle certified Java architect, CEO of Zerocracy, author of the Elegant Objects book series about object-oriented programming, lead architect and founder of Cactoos, Takes, Rultor and Jcabi, and a big fan of test automation.
