What Agile Testing Is (Not)

Ok, I have to admit I am the kind of person who, when asked about something, tends to start replying by saying what that something is not.

So, let’s start this piece of writing by stressing what Agile Testing is not in my opinion:

  • Testing faster
  • Skipping testing
  • Testing too late
  • Making unskilled people test

Let’s go through these points one by one.

Testing faster

It is one thing to be as time-effective as possible when testing a software product.
It is quite another to be forced to rush testing, driven by unreasonable deadlines.

It is even worse when all that is expected from the tester(s) is to confirm (as soon as possible, of course) that everything is ok.

So, to me Agile Testing has nothing to do with fast and shallow validations or with (quickly/dangerously) confirming (potentially wrong) assumptions. This would rather be reckless and meaningless testing.

So, as I see it, good testing takes (reasonable) time and efficient testers.
Bad or inefficient testing, on the other hand, may appear to take less time, but sooner or later you will realize it was actually meaningless.

Skipping testing

Well, you should be able to foresee the risk of doing that.
If you cannot, chances are you are living under the illusion that:

  • Your team is so good that they are producing perfect software or
  • Your customers are so easy-going that they are not going to notice that your software is actually pretty far from perfect.

Let me stress that I have never come across a software product so spotless that it didn’t need to be tested at all. Basically because such a product does not exist. And I’m pretty sure you too are aware of that.

After all, living in denial doesn’t solve anything.

Testing too late

In my experience, starting to test a product at a very late stage of the software development process is unfortunately still very common, especially when organizations believe that developing/delivering something is much more important than challenging their assumptions about it.

So, at some point, someone might have an idea which implies building a software product (hopefully, in order to solve a problem rather than to create another one) and they might start either doing that themselves or hiring a team able to do that.

Meanwhile, nobody is challenging the idea itself, the software architecture or the code that is being produced to implement it.

So, after a minimum viable product (which, by the way, too often is not viable at all) has taken shape, someone realizes that some glitches may be hindering the process.

So, one tester (usually mislabeled as “QA”) is urgently brought into the team to validate the software product and to check (and assure the stakeholders) that everything is ok.

I wonder, though: would you need a tester if everything were ok?

The thing is, challenging assumptions at this point is usually not well accepted: it’s actually something that only particularly experienced and brave testers would dare to do.

After all, what most managers want to hear now, without further ado, is just that everything is ok.

Which means the organization in question may end up:

  • Delivering a low-quality product without knowing that it is a low-quality product (their customers definitely will, though),
  • Delivering a low-quality product in spite of knowing that it is a low-quality product (while keeping their fingers crossed and hoping that their customer will not realize that),
  • Not delivering at all what they initially thought was a pretty good product but turned out to be a dangerous and embarrassing minefield.

Wouldn’t it have been better to start testing earlier, ideally at the same time as other activities (coding, business analysis, etc.) were being performed?

Making unskilled people test

This is probably the most controversial point to cover, due to the unfortunately common misunderstanding that testing is something anybody on the team can simply pick up.

After more than fifteen years working with software, and more than ten as a tester, I still struggle to understand what makes some people believe that software testing is so different from all the other things that might be needed to implement a software product (business analysis, software architecture and design, coding, etc.), that it can be performed by anybody.

I wonder, though, if under no circumstances would you like to make unskilled people code, why should you want unskilled people to test?

You may think lack of awareness about software testing could be one of the reasons behind that.

Nevertheless, lack of knowledge about coding doesn’t usually make people believe that anybody can code: actually, quite the opposite.

So, why underestimating software testing is so widespread seems to me a really challenging conundrum to solve.

Throw the usual misconceptions about Agile into the mix and you’ll have a recipe for disaster: people saying that, in order to be more efficient at testing, all you need to do is to automate (the execution of) your test cases, will deliver the finishing blow to your software development process.

What’s more, strange as it may sound, fallacies about such an unfortunate discipline are often spread not only outside the software testing community (where we can assume there might be a somehow reasonable lack of knowledge) but also inside the community itself, which seems to be plagued by a dangerous mixture of Dunning-Kruger effect and a lot of mythological Trojan horses.

The thing is, making unskilled people test usually means settling for confirmatory testing, that is to say, demonstrating that a software product does what it is expected to do, without caring at all about exploring a system to learn how it actually works or investigating under which circumstances it might fail. And yes, this is what software testing really is after all, isn’t it?
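The contrast above can be sketched in a few lines of code. This is a hypothetical example, not from the original article: `apply_discount` stands in for any piece of production code, and the two styles of checking it are the point.

```python
# A minimal sketch contrasting confirmatory checks with investigative testing.
# `apply_discount` is a hypothetical function standing in for real production code.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Confirmatory testing: demonstrate that the code does what it is expected
# to do on the happy path. This passes, and tells us very little.
assert apply_discount(100.0, 10) == 90.0

# Investigative testing: probe the circumstances under which it might fail.
# What about a negative discount, or one above 100%?
suspicious_inputs = [(100.0, -10), (100.0, 150)]
problems = [
    (price, pct, result)
    for price, pct in suspicious_inputs
    if not 0 <= (result := apply_discount(price, pct)) <= price
]

# Both probes reveal behavior nobody specified: a negative discount raises
# the price, and a discount above 100% yields a negative price.
print(problems)  # [(100.0, -10, 110.0), (100.0, 150, -50.0)]
```

The happy-path assertion would happily keep passing forever; only the questions nobody asked the developers to answer surface the two unspecified behaviors.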

On the other hand, it has to be noticed that even a usually skilled person might be unskilled at some point: if they built the software product themselves, for example, they will most likely lack the critical distance required to deeply and unbiasedly test the product itself, which is what you need when you really want to uncover unanticipated problems, don’t you?

Wrapping Up

Through this piece of writing, I hope I have made it clear that, to me, Agile Testing has nothing to do with unreasonably fast, meaningless, reckless, late, shallow or ineffective testing (or, maybe even worse, with no testing at all).
So, if you think Agile Testing doesn’t work for you or it is not providing you with the results you were hoping for, well, chances are you might just be getting it wrong.

Published on Java Code Geeks with permission by Ileana Belfiore, partner at our JCG program. See the original article here: What Agile Testing Is (Not)

Opinions expressed by Java Code Geeks contributors are their own.
