Software Development

How design for testability can improve test automation capability

Introduction

Testability refers to how easily something can be tested. When that something is an IT solution, the most suitable way to test it is through automation. But is it always possible to automate tests? If not, what are the reasons? And how can testability be improved?

Test Categories

Software is usually written to be used by a consumer, which could be a person, another piece of software, or a physical device. When deciding to automate tests, the first thing to determine is the SUT (System Under Test). This decision shapes the scope of the automation and the point of view that the test automation framework has to simulate. There are several possible choices; here are some of the most common:

  • Unit Tests: here we want to test the smallest parts of the code, that is, individual methods. The framework is typically another software program, written in the same language as the SUT, that is able to interact with each single method and to create all the necessary context around it in order to exercise the logic and verify the expected results (see the minimal sketch after this list). The more this logic is coupled with other logic, the less testable the code is.
  • Integration Tests: here we want to verify that two separate software components are able to communicate correctly with each other. The framework is similar to a unit test framework; the SUT is simply larger, because it involves more logic and separate components. In this kind of test it is interesting to check, for example, boundary cases and the exchange of data between the different components. As with unit tests, we must be able to isolate the components we want to test: too many dependencies can undermine the effectiveness of the test.
  • Functional Tests: here we want to test the behavior of the component as perceived by the program consumer. The framework is typically another software program that is able to reproduce all the interactions of the program consumer and to inspect, internally or externally, the result or the state of the SUT. These programs can be driven by a scripting language or can be extensions of the same frameworks used for unit tests (the latter case is common in Rich Client Platform solutions, for example in the NetBeans Platform, where the Jemmy/Jelly Tools extend JUnit to automate functional tests). These tests typically need complex context information in order to run, so strong dependencies on infrastructure technologies can make it difficult to simulate all the conditions necessary to execute them.
  • Non-regression Tests: here we want to verify that the implemented logic remains correct between two different versions of the software. The framework is typically a software program, like the one used for unit tests, with the capability to access the inputs, functions, and outputs of the SUT. This is where testability concerns matter least, because dependencies on infrastructure and other software programs cannot be avoided: the operational behavior of the system has to be reproduced in its entirety.
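To make the unit test case concrete, here is a minimal JUnit sketch; the PriceCalculator class and its applyDiscount method are hypothetical, invented only for illustration. The test framework builds the context (the object and its inputs), exercises a single method, and verifies the expected result, which is straightforward precisely because the logic is not coupled to anything else.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test: pure logic with no external dependencies,
// so the test can build all the context it needs on its own.
class PriceCalculator {
    double applyDiscount(double price, double discountPercent) {
        return price - (price * discountPercent / 100.0);
    }
}

public class PriceCalculatorTest {

    @Test
    public void applyDiscountReducesThePrice() {
        PriceCalculator calculator = new PriceCalculator();

        // Exercise the smallest unit of logic: a single method call.
        double result = calculator.applyDiscount(100.0, 20.0);

        // Verify the expected result.
        assertEquals(80.0, result, 0.001);
    }
}
```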

Limits to Testability

So, what are the factors that can undermine testability? I would say that bad dependencies, or strong coupling, are the major reasons. There are several types of dependencies that are better avoided:

  • Logic to Resources: This form of dependency occurs when the logic is strongly coupled with a specific resource, meaning that changing the resource will directly impact the logic (resources could be technology infrastructure, file systems, legacy software). In this scenario the logic cannot be decoupled from those resources, so if you want to automate tests, you also have to bring along all the related resources. If the resource is a database, having logic coupled with it probably means that each test needs a specific database; this creates serious problems of performance and governance of the test suite.
  • Logic to Logic: This form of dependency occurs when Loose Coupling has not been applied, meaning there are no interfaces that decouple two interacting software parts. Imagine that a method A of 10 lines uses a method B of a static object to perform its function. Ten lines should not be difficult to test, but what is the call to method B hiding? This strong coupling means that whatever method B does, its logic will be included in the test of method A; and if method B needs resources or technologies that are not easy to reproduce in a test environment, you will probably give up on automating the test of method A (see the sketch after this list).
  • Logic to UI: This form of dependency occurs when the source code has not been correctly organized into different layers. Logic that should be classified as purely application logic ends up acting as presentation logic, showing popups to the user and waiting for input. But, especially for unit tests, it is not possible to reproduce the user interactions, so in this case it will not be possible to automate unit tests (and non-regression and integration automation can become difficult as well).
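To make the Logic to Logic case concrete, here is a small sketch; all class and method names are hypothetical, invented only for illustration. Method A is only a few lines long, but its call to the static method B drags a database dependency into every test of A.

```java
// Hypothetical example of strong coupling; the names are invented for illustration.

// "Method B" lives in a static utility and talks directly to a database (a resource).
class TaxService {
    static double taxRateFor(String countryCode) {
        // Imagine a JDBC query here: any test of a caller now needs a real database.
        throw new UnsupportedOperationException("requires a database connection");
    }
}

class InvoiceCalculator {
    // "Method A": only a few lines, yet hard to unit test in isolation,
    // because the static call to TaxService cannot be replaced by a test double.
    double totalWithTax(double netAmount, String countryCode) {
        double rate = TaxService.taxRateFor(countryCode);
        return netAmount * (1.0 + rate);
    }
}
```

Any automated test of totalWithTax would also have to provide the database that TaxService needs, which is exactly the situation that pushes teams to give up on automation.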

How to increase test automation capability

Testability is the concern, and designing for testability from the beginning is the only way to guarantee a good level of test automation capability. Of course this is not possible in every project; sometimes we must deal with legacy code, and in that case code review and refactoring are the only possible solution. Some good points for increasing testability are:

  • Organize your source code into layers: This allows you to clearly classify your code depending on its scope and to avoid bad dependencies (for example you could build a Multilayered Architecture, as I have discussed in my previous posts).
  • Apply Loose Coupling: Always create an interface for accessing the functions, and use a technical environment that allows you to physically decouple the contract from the logic. When this principle is applied, you can replace an implementation with any other written purely for the purposes of the test. Applied to Logic to Logic coupling, it means you can create Test Double objects (fakes, mocks, stubs) that let you focus only on the lines of code you want to test; applied to Logic to Resource coupling, it lets you replace resources that are not easy to set up in test environments. For example, a database could be fully replaced with a lighter in-memory version or with some scripting that simulates specific contexts (see the sketch after this list).
  • Think modular: Modularize your architecture, define and scope each module well, and make public only what is actually needed by other modules. This reduces the number of dependencies that can be created, avoiding the useless ones.
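As a sketch of the Loose Coupling point, here is the hypothetical InvoiceCalculator from the previous section reworked so that it depends on a TaxRates interface instead of a static, database-backed class; the names are again invented for illustration. The test injects a trivial in-memory fake, so no database is needed.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// The contract: the logic depends only on this interface, not on how rates are stored.
interface TaxRates {
    double rateFor(String countryCode);
}

// A production implementation would read the rates from a database (omitted here).

class InvoiceCalculator {
    private final TaxRates taxRates;

    InvoiceCalculator(TaxRates taxRates) {
        this.taxRates = taxRates; // the collaborator is injected, not hard-coded
    }

    double totalWithTax(double netAmount, String countryCode) {
        return netAmount * (1.0 + taxRates.rateFor(countryCode));
    }
}

public class InvoiceCalculatorTest {

    @Test
    public void totalIncludesTheCountryTaxRate() {
        // Test double: a fake in-memory implementation replaces the database-backed one.
        TaxRates fakeRates = countryCode -> 0.20;

        InvoiceCalculator calculator = new InvoiceCalculator(fakeRates);

        assertEquals(120.0, calculator.totalWithTax(100.0, "IT"), 0.001);
    }
}
```

Because the production code only sees the contract, the database-backed implementation and the test double are interchangeable, and the few lines of logic become testable in isolation.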

Conclusions

When approaching a new design, it is worth treating testability as a major concern: once you launch a test automation campaign, you will easily reap the benefits. If instead you are dealing with legacy code, there are plenty of documented refactoring patterns that can help increase its testability. What is certain is that, sooner or later, if you do not design for testability, the overall capability of automating tests will become an issue.
 

Marco Di Stefano

Marco is a software engineer specialized in software architecture and process automation. He has been involved in all the phases of the software V-cycle and currently works in the railway industry.