
Acceptance Testing: Blaming the Tools

About 5 years ago I was on a project to build a system for collateral management. The system was connected to a large financial network, and got its instructions through standardized financial messages.

This project was run in waterfall style, with lots of restrictions on collaboration between disciplines. Without digressing into the details, I still find it interesting that these project managers, even the experienced ones, believed that collaboration is bad. Anyhow, a team of business analysts wrote down the detailed specifications, then handed them over to the developers. The developers implemented the system according to the specifications and then delivered the software to the team of testers.

Testing the system

Because this particular system was operated through all these financial messages, it was much harder to test than a system with a regular user interface. The developers created a testing tool to interact with the system through a command-line interface. Thanks to that command-line interface, the tool was also suitable for scripting tests for entire business flows. The tool was low-level, just as the developers needed it to be.

The test team also needed a tool to interact with the system. The tool used by the developers was too low-level for the testers. One of the testers knew a tool he had used before that lets you describe tests in a language that is friendlier to most people, and then links that friendly language to code. This tool was FitNesse.
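To give an idea of what such a 'friendly' specification can look like, here is a hypothetical FitNesse-style test table. The domain details and all names are invented for illustration; they are not taken from the actual project:

|place collateral call|
|counterparty|amount|currency|call accepted?|
|Bank A|100000|EUR|yes|
|Bank B|5000000|USD|no|

Each row is one example. The tool passes the cell values to a small piece of glue code (a fixture) and compares the system's answer against the last column.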

To be able to interact with the main system from FitNesse, an adapter was added to the test tool the developers used: a fixture that let FitNesse drive the test tool. Some additional features were added to the test tool as well, allowing the testers to create template messages and then set specific values using XPath expressions in their FitNesse test tables.
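A minimal sketch of roughly how such a fixture might look in Java. Everything here is an assumption made for illustration: the class and method names, the template location, and the idea that the command-line test tool can read a message from stdin; none of it is taken from the actual project.

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import java.io.StringWriter;

// Hypothetical FitNesse fixture: loads a template message, lets the test
// table overwrite individual fields through XPath, and hands the result
// to the developers' command-line test tool.
public class SendMessageFixture {
    private final Document template;

    public SendMessageFixture(String templateName) throws Exception {
        // Load one of the message templates the testers maintain
        // (the location is an assumption for this sketch).
        template = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse("templates/" + templateName + ".xml");
    }

    // Called once per row in the test table: overwrite one field in the template.
    public void setValueAt(String xpath, String value) throws Exception {
        Node node = (Node) XPathFactory.newInstance().newXPath()
                .evaluate(xpath, template, XPathConstants.NODE);
        node.setTextContent(value);
    }

    // Serialize the message and hand it to the command-line test tool
    // (the tool name and its --stdin flag are invented for this sketch).
    public boolean send() throws Exception {
        StringWriter xml = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(template), new StreamResult(xml));
        Process tool = new ProcessBuilder("message-test-tool", "--stdin").start();
        tool.getOutputStream().write(xml.toString().getBytes("UTF-8"));
        tool.getOutputStream().close();
        return tool.waitFor() == 0;
    }
}

A test table row like |set value at|/Message/Amount|100000| would then map onto the setValueAt method, keeping the XPath plumbing out of sight of the people reading the tests.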

Making changes

This approach went well for about two months. In this time the test team had added a FitNesse plugin that connected to the system's database to verify some of the system's intermediate state, and had added a few layers of indirection to the FitNesse tests, so that it was no longer really clear which operations the 'friendly' specifications in the test tables performed on the system.

And then the type of financial messages to support changed. This affected all the message templates and all of the XPath expressions used so far. You can imagine that this touched almost every test, causing tons of rework for the test team. Because they couldn't get it done fast enough, they got help from the development team.

A short while after the tests were all 'migrated' to the new message structure, the database schema changed. And it didn't change subtly either. Again most of the acceptance tests failed and needed lots of rework.

Blaming the tool

Boy, we really started to hate this FitNesse tool. Every time the system evolved, all the tests started to fail. And to fix them, developers had to help the testers rework those tests. We could think of a nicer task or two.

It was not until a few weeks after I left the project that I realized that FitNesse was not to blame for the problems we faced. We had ourselves to blame! It wasn't the tool that was wrong, it was the people who used it wrongly. It was at the Advanced Test-Driven Development masterclass by Uncle Bob (Robert Martin) that I learned that the purpose of FitNesse isn't to provide a textual abstraction over running your acceptance tests.

Collaboration

Don’t get me wrong, that textual abstraction is really nice and all. But it shouldn’t be the primary goal.

FitNesse is a tool for automated acceptance testing, with its focus on collaboration between business people, developers and testers. Through collaboration, a team of developers and testers can learn all about the features together, each bringing their own skills and qualities to the discussion. Together they record what they learn as test tables or brief sentences.

As I mentioned earlier, collaboration wasn't one of the focal points of this project. Instead it was something you 'do not waste your time on'.

While in this particular project the problems appeared to revolve around FitNesse, I believe the same holds for many of the Cucumber stories I read. On the internet there are a lot of opinions about how Cucumber (and similar tools) adds no value to a project. I do believe this is true in many projects, but for different reasons than the ones given in most of those articles.

Tools like Cucumber and FitNesse are not developer tools. If no one but the developers reads and writes these scripts, using such tools is nothing more than a waste of time. Developers can automate acceptance tests faster using their regular test framework (e.g. RSpec or JUnit).
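For a developer-only audience, a plain JUnit test is usually quicker to write than a test table plus fixture. A minimal sketch of what that looks like; the domain check is stubbed out and all names are invented for illustration:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical developer-only acceptance test: everything lives in code,
// which is quick for developers to write but unreadable for anyone else.
public class CollateralCallAcceptanceTest {

    // Stand-in for a call to the real system; in the actual project this
    // would go through the command-line test tool.
    private String placeCollateralCall(String counterparty, long amount) {
        return !counterparty.isEmpty() && amount <= 1_000_000 ? "ACCEPTED" : "REJECTED";
    }

    @Test
    public void callWithinLimitIsAccepted() {
        assertEquals("ACCEPTED", placeCollateralCall("Bank A", 100_000));
    }

    @Test
    public void callAboveLimitIsRejected() {
        assertEquals("REJECTED", placeCollateralCall("Bank B", 5_000_000));
    }
}

The examples are the same as in a test table, but a business person or tester would have to dig through Java code to find them.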

But neither are these tools QA tools. Only by working together can testers and developers get the most out of these tools and put them to really good use. Both Cucumber and FitNesse are collaboration tools. They belong in the tool belt of agile teams.
 

Reference: Acceptance Testing: Blaming the Tools from our JCG partner Bart Bakker at the Software Craft blog.

Bart Bakker

Bart is a technologist who specializes in agile software development. He is passionate about creating working software that is easy to change and to maintain.