
Chatty Logging in Tests

This is another test smell.

What is the purpose of log messages in tests?

Sometimes there are log messages, via a logger or stdout, inside a test suite. They happen. Sometimes they happen for a reason, but when you look at those reasons, they tend to indicate the lack of something better:

  • I want to know what the test is doing
  • What data does the test have right now?
  • Why did the test fail that time?

The reason that logging is generally not a great solution is:

A logged message is not usually part of the test report, nor does it usually affect the test outcome.

In other words, there are potentially better ways of instrumenting our tests to talk to us.

When Do I Add Log Messages?

I have temporarily added log output to some tests (and then left it there) when they’ve failed on the CI server, or occasionally locally (though locally I can just use a debugger), to give me an idea of what’s going on. By driving these log messages from a genuine need for a fact about the environment, or about the specific data, I’ve justified them as something that might save my ass when there’s a glitch in the pipeline. But they are NOT part of the overall test development strategy.

So What Is The Test Doing?

A test is not a script that runs and manually outputs a narrative of where it’s at. If the test is correctly broken up into test cases, then the test runner executing them should provide the feedback as to what happened and when.
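As a minimal JUnit 5 sketch (the ShoppingBasket class and its behaviours are invented for illustration), small, well-named test cases let the runner’s own output narrate what was checked, with no log statements needed:

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

@DisplayName("Shopping basket")
class ShoppingBasketTest {

    @Test
    @DisplayName("is empty when created")
    void isEmptyWhenCreated() {
        // ShoppingBasket is a hypothetical class under test;
        // the runner reports this case by name, so no System.out is needed
        assertEquals(0, new ShoppingBasket().itemCount());
    }

    @Test
    @DisplayName("counts the items added to it")
    void countsItemsAdded() {
        ShoppingBasket basket = new ShoppingBasket();
        basket.add("apple");
        basket.add("banana");
        assertEquals(2, basket.itemCount());
    }
}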

Test frameworks like Cucumber and Serenity can be used to structure the tests and instrument the test code to provide that narrative automatically, which is probably richer. More importantly, they can produce reports at the end that correlate what happened with the specification of what was supposed to be tested.
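As a sketch using Cucumber’s Java bindings (the WebShop helper and the step wording are hypothetical), the narrative comes from the step text itself, which Cucumber records in its report, so nobody needs to log “now adding items” by hand:

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

import static org.junit.jupiter.api.Assertions.assertEquals;

public class BasketStepDefinitions {

    // WebShop is a hypothetical test helper wrapping the system under test
    private final WebShop webShop = new WebShop();

    @Given("a logged in customer")
    public void aLoggedInCustomer() {
        webShop.logIn("test-user");
    }

    @When("they add {int} items to their basket")
    public void theyAddItemsToTheirBasket(int count) {
        webShop.addItemsToBasket(count);
    }

    @Then("the basket shows {int} items")
    public void theBasketShowsItems(int expected) {
        // Cucumber's report already records each step and its outcome
        assertEquals(expected, webShop.basketSize());
    }
}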

What Data are We Processing?

Tracking the data comes down to context. If we have fixed test data, we should know from the position in the test execution what’s being used. If we’re using data-driven tests, then the way we build them, with parameterised test runners or data-driven test frameworks, should automatically instrument our test reports with the data that was used. If we’re receiving data on the fly from the system under test, then lord help us… this is the fringe where we end up needing to add diagnostics to help us understand failure. Ideally, those diagnostics should be as close as possible to the way we detect failure.
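To illustrate the data-driven case, a JUnit 5 parameterised test can put the data into the test name, so each case in the report identifies its own inputs (the DiscountCalculator class and the values are invented for illustration):

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountCalculatorTest {

    // the name template puts the inputs into the report for every case
    @ParameterizedTest(name = "an order of {0} earns a discount of {1}")
    @CsvSource({
        "100, 0",
        "500, 25",
        "1000, 100"
    })
    void appliesTheExpectedDiscount(int orderValue, int expectedDiscount) {
        // DiscountCalculator is a hypothetical class under test
        assertEquals(expectedDiscount, new DiscountCalculator().discountFor(orderValue));
    }
}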

What’s The Failure?

Use assertions!

There, I said it.

Make the assertions explain not only the comparison they were making, but the context in which they were making it. However, don’t fall into the trap of writing glib, meaningless comments on every assertion.

Many assertions are the raison d’être of the test step they’re in, so we know their context. However, if an assertion is less clear, or we feel the need to provide supplementary information about other data related to the thing we were asserting – e.g. not only do we assert the HTTP status code, but we show the HTTP body when the status is not what we expected – then we should instrument the assertion, rather than add logging.
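As a sketch of that HTTP example, using AssertJ and Java’s built-in HTTP client (the URL and the expected status are placeholders), the assertion carries the body as context, so it only appears in the output when the status check fails:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

import static org.assertj.core.api.Assertions.assertThat;

class HealthEndpointTest {

    @Test
    void healthEndpointReturnsOk() throws Exception {
        // the URL is a placeholder for wherever the system under test is running
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("http://localhost:8080/health")).build(),
                HttpResponse.BodyHandlers.ofString());

        // the description is only rendered when the assertion fails,
        // so the body shows up exactly when we need it and never as log noise
        assertThat(response.statusCode())
                .as("expected a 200 from /health but the body was: %s", response.body())
                .isEqualTo(200);
    }
}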

Conclusion

You read the red/green status and the test reports of a test suite far more often than you deep-dive into the logs. Using the frameworks, and especially the assertion libraries, to paint the clearest picture of what was being tested and what the outcomes were is the best strategy for making that output easy to interpret.

Published on Java Code Geeks with permission by Ashley Frieze, partner at our JCG program. See the original article here: Chatty Logging in Tests

Opinions expressed by Java Code Geeks contributors are their own.

Ashley Frieze

Software developer, stand-up comedian, musician, writer, jolly big cheer-monkey, skeptical thinker, Doctor Who fan, lover of fine sounds