Adding Database Logging to JUnit3

We have written many thousands of JUnit3 tests over the last decade and are now trying to consolidate the results in a database instead of scattered log files. It turns out to be remarkably easy to extend the TestCase class to do this. Note: this approach does not directly apply to JUnit4 or other test frameworks, but it is usually possible to do something analogous.
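
For example, in JUnit4 the rough equivalent is a RunListener registered with the runner (e.g., via JUnitCore.addListener()); a minimal sketch, with an illustrative class name:

import org.junit.runner.Description;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

// Sketch only: the JUnit4 analogue of the JUnit3 listener developed below.
public class Junit4DbRunListener extends RunListener {
    private long start;
    private String failureMessage;

    @Override
    public void testStarted(Description description) {
        start = System.currentTimeMillis();
        failureMessage = null;
    }

    @Override
    public void testFailure(Failure failure) {
        // cache information about the failure for the database record
        failureMessage = failure.getMessage();
    }

    @Override
    public void testFinished(Description description) {
        long elapsed = System.currentTimeMillis() - start;
        // write description.getClassName(), description.getMethodName(),
        // elapsed and failureMessage to the database here
        System.out.printf("[%s, %s, %d, %s]%n", description.getClassName(),
                description.getMethodName(), elapsed, failureMessage);
    }
}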

The tested class and its test

For demonstration purposes we can define a class with a single method to test.

public class MyTestedClass {

    public String op(String a, String b) {
        return ((a == null) ? "" : a) + ":" + ((b == null) ? "" : b);
    }
}

A class with a single method to be tested is less of a restriction than you might think. We are only testing four methods in the thousands of tests mentioned earlier.

Here are a handful of tests for the class above.

public class MySimpleTest extends SimpleTestCase {

    private MyTestedClass obj = new MyTestedClass();

    public void test1() {
        assertEquals("a:b", obj.op("a", "b"));
    }

    public void test2() {
        assertEquals(":b", obj.op(null, "b"));
    }

    public void test3() {
        assertEquals("a:", obj.op("a", null));
    }

    public void test4() {
        assertEquals(":", obj.op(null, null));
    }

    public void test5() {
        // this will fail
        assertEquals(" : ", obj.op(null, null));
    }
}

Capturing basic information with a TestListener

JUnit3 allows listeners to be added to its test runs. A listener is called before and after each test runs, plus any time a test fails or has an error (throws an exception). This TestListener writes basic test information to System.out as a proof of concept. It would be easy to modify it to write the information to a database, a JMS topic, etc.

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

import junit.framework.AssertionFailedError;
import junit.framework.Test;
import junit.framework.TestListener;

public class SimpleTestListener implements TestListener {
    private static final TimeZone UTC = TimeZone.getTimeZone("UTC");
    private long start;
    private boolean successful = true;
    private String name;
    private String failure = null;

    SimpleTestListener() {
    }

    public void setName(String name) {
        this.name = name;
    }

    public void startTest(Test test) {
        start = System.currentTimeMillis();
    }

    public void addError(Test test, Throwable t) {
        // cache information about error.
        successful = false;
    }

    public void addFailure(Test test, AssertionFailedError e) {
        // cache information about failure.
        failure = e.getMessage();
        successful = false;
    }

    /**
     * After the test finishes we can update the database with statistics about
     * the test - name, elapsed time, whether it was successful, etc.
     */
    public void endTest(Test test) {
        long elapsed = System.currentTimeMillis() - start;

        SimpleDateFormat fmt = new SimpleDateFormat();
        fmt.setTimeZone(UTC);

        System.out.printf("[%s, %s, %s, %d, %s, %s]\n", test.getClass().getName(), name, fmt.format(new Date(start)),
                elapsed, failure, Boolean.toString(successful));

        // write any information about errors or failures to database.
    }
}

A production TestListener should do a lot more with errors and failures. I’ve skipped that in order to focus on the broader issues.

This listener is not thread-safe, so we will want to use the Factory pattern to create a fresh instance for each test. We can create heavyweight objects in the factory – e.g., open a SQL DataSource in the factory and pass a fresh Connection to each listener instance (a sketch of this follows the factory below).

public class SimpleTestListenerFactory {
    public static final SimpleTestListenerFactory INSTANCE = new SimpleTestListenerFactory();

    private SimpleTestListenerFactory() {
        // heavyweight resources, e.g., a SQL DataSource, could be established here
    }

    public SimpleTestListener newInstance() {
        // initialize listener.
        SimpleTestListener listener = new SimpleTestListener();
        return listener;
    }
}
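
As an illustration of that, here is a minimal sketch of a database-backed factory. It assumes a DbTestListener subclass (hypothetical, not shown) that writes its records through the Connection it is handed; the JDBC URL is a placeholder, and a real implementation would probably use a pooled DataSource rather than DriverManager.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DbTestListenerFactory {
    public static final DbTestListenerFactory INSTANCE = new DbTestListenerFactory();

    // placeholder connection string - a real factory would read this from configuration
    private static final String JDBC_URL = "jdbc:h2:mem:testresults";

    private DbTestListenerFactory() {
    }

    public DbTestListener newInstance() {
        try {
            // each listener gets its own Connection so overlapping tests do not share state
            Connection conn = DriverManager.getConnection(JDBC_URL);
            // DbTestListener is a hypothetical SimpleTestListener subclass that writes
            // its endTest() record through the supplied connection
            return new DbTestListener(conn);
        } catch (SQLException e) {
            throw new RuntimeException("could not obtain connection for test listener", e);
        }
    }
}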

If we know the test framework is purely serial we can also capture all console output by creating a buffer and calling System.setOut() in startTest(), then restoring the original System.out in endTest(). This works well as long as tests never overlap, but be aware that IDEs may ship their own test runners that allow concurrent execution.
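
A minimal sketch of that console capture, assuming strictly serial execution (the class name is illustrative, and it must live in the same package as SimpleTestListener):

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

import junit.framework.Test;

public class ConsoleCapturingListener extends SimpleTestListener {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private PrintStream originalOut;

    public void startTest(Test test) {
        super.startTest(test);
        originalOut = System.out;
        // redirect everything the test prints into our buffer
        System.setOut(new PrintStream(buffer, true));
    }

    public void endTest(Test test) {
        // restore the real console before writing our own summary line
        System.setOut(originalOut);
        super.endTest(test);
        // buffer.toString() now holds the test's console output and could be
        // stored in the database alongside the other statistics
    }
}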

We override the standard run() method with one that creates and registers a listener before delegating to the existing run() method.

import junit.framework.TestCase;
import junit.framework.TestResult;

public class SimpleTestCase extends TestCase {

    public void run(TestResult result) {
        SimpleTestListener l = SimpleTestListenerFactory.INSTANCE.newInstance();
        result.addListener(l);
        l.setName(getName());
        super.run(result);
        result.removeListener(l);
    }
}

We now get the expected results written to System.out.

[MySimpleTest, test1, 8/2/15 11:58 PM, 0, null, true]
[MySimpleTest, test2, 8/2/15 11:58 PM, 10, null, true]
[MySimpleTest, test3, 8/2/15 11:58 PM, 0, null, true]
[MySimpleTest, test4, 8/2/15 11:58 PM, 0, null, true]
[MySimpleTest, test5, 8/2/15 11:58 PM, 4, expected same:<:> was not:< : >, false]

Capturing call information with a facade and TestListener

This is a good start but we might be able to do better. Only four methods are called in the thousands of tests mentioned above – it would be extremely powerful if we could capture the input and output values on those calls.

It is easy to wrap these methods with AOP, or with a logging facade if AOP is not acceptable for some reason. In simple cases we can just capture the input and output values.

public class MyFacadeClass extends MyTestedClass {
    private MyTestedClass parent;
    private String a;
    private String b;
    private String result;

    public MyFacadeClass(MyTestedClass parent) {
        this.parent = parent;
    }

    public String getA() {
        return a;
    }

    public String getB() {
        return b;
    }

    public String getResult() {
        return result;
    }

    /**
     * Wrap tested method so we can capture input and output.
     */
    public String op(String a, String b) {
        this.a = a;
        this.b = b;
        String result = parent.op(a, b);
        this.result = result;
        return result;
    }
}

We log the basic information as before and add just a bit of new code to log the inputs and outputs.

import junit.framework.Test;

public class AdvancedTestListener extends SimpleTestListener {

    AdvancedTestListener() {
    }

    /**
     * Log information as before but also log call details.
     */
    public void endTest(Test test) {
        super.endTest(test);

        // add captured inputs and outputs
        if (test instanceof MyAdvancedTest) {
            MyTestedClass obj = ((MyAdvancedTest) test).obj;
            if (obj instanceof MyFacadeClass) {
                MyFacadeClass facade = (MyFacadeClass) obj;
                System.out.printf("[, , %s, %s, %s]\n", facade.getA(), facade.getB(), facade.getResult());
            }
        }
    }
}
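
MyAdvancedTest is not shown above; here is a minimal sketch of what it might look like, assuming the test wraps its object in the facade and exposes it in a package-visible obj field so the listener above can reach it:

public class MyAdvancedTest extends SimpleTestCase {

    // package-visible so AdvancedTestListener can inspect the facade after each test
    final MyTestedClass obj = new MyFacadeClass(new MyTestedClass());

    public void test1() {
        assertEquals("a:b", obj.op("a", "b"));
    }

    public void test2() {
        assertEquals(":b", obj.op(null, "b"));
    }

    public void test3() {
        assertEquals("a:", obj.op("a", null));
    }

    public void test4() {
        assertEquals(":", obj.op(null, null));
    }
}

Registering AdvancedTestListener instead of SimpleTestListener is simply a matter of having the factory (or the overridden run() method) return the advanced listener.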

The logs now show both the basic information and the call details.

[MyAdvancedTest, test2, 8/3/15 12:13 AM, 33, null, true]
[, , null, b, :b]
[MyAdvancedTest, test3, 8/3/15 12:13 AM, 0, null, true]
[, , a, null, a:]
[MyAdvancedTest, test4, 8/3/15 12:13 AM, 0, null, true]
[, , null, null, :]
[MyAdvancedTest, test1, 8/3/15 12:13 AM, 0, null, true]
[, , a, b, a:b]

We want to associate the basic details with the call details, but that is easily handled by adding a unique test id to both records.

This approach is not enough in the real world, where the tested methods may be called multiple times during a single test. In that case we need either a way to cache multiple sets of input and output values (sketched below) or to extend the listener so we can call it at the end of each covered method.
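
A minimal sketch of the first option, assuming we simply keep a list of call records in the facade (the MyRecordingFacade and CallRecord names are illustrative):

import java.util.ArrayList;
import java.util.List;

public class MyRecordingFacade extends MyTestedClass {

    /** One captured call: inputs and output. */
    public static class CallRecord {
        public final String a;
        public final String b;
        public final String result;

        CallRecord(String a, String b, String result) {
            this.a = a;
            this.b = b;
            this.result = result;
        }
    }

    private final MyTestedClass parent;
    private final List<CallRecord> calls = new ArrayList<CallRecord>();

    public MyRecordingFacade(MyTestedClass parent) {
        this.parent = parent;
    }

    public List<CallRecord> getCalls() {
        return calls;
    }

    public String op(String a, String b) {
        String result = parent.op(a, b);
        // record every invocation instead of only the last one
        calls.add(new CallRecord(a, b, result));
        return result;
    }
}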

We can make our results more extensible by encoding them in XML or JSON instead of a simple positional list. This will allow us to capture only the values of interest, or to easily handle fields added in the future.

[MyAdvancedTest, test2, 8/3/15 12:13 AM, 33, null, true]
{"a":null, "b":"b", "results":":b" }
[MyAdvancedTest, test3, 8/3/15 12:13 AM, 0, null, true]
{"a":"a", "b":null, "results":"a:" }
[MyAdvancedTest, test4, 8/3/15 12:13 AM, 0, null, true]
{"a":null, "b":null, "results":":" }
[MyAdvancedTest, test1, 8/3/15 12:13 AM, 0, null, true]
{"a":" a", "b":"b", "results":" a:b" }

Capturing assertX information

We can now rerun the tests by replaying the captured inputs, but there are two problems with blindly comparing the results. First, it will be a lot of unnecessary work if we only care about a single value. Second, many tests are non-deterministic (e.g., they use canned data that changes over time, or even live data) and things we don't care about may change.

This is not an easy problem. If we're lucky the tests will follow the standard pattern and we can make a good guess at what is actually being checked, but that guess needs to be manually verified.

First, we need to wrap the object returned by the tested method with a facade that captures some or all method calls. The call history should become available in a form that we can replay later, e.g., a sequence of method names and serialized parameters.

Second, we need to wrap the TestCase assertX methods so that we capture the recent method calls and the values passed to the assert call (plus the results, of course).

Example

The process is easiest to show – and demolish – with an example. Let’s start with a simple POJO.

public class Person {
    private String firstName;
    private String lastName;

    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
}

In this case our facade only needs to record the method name.
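
A minimal sketch of such a facade, assuming it wraps the Person returned by getTestPerson() and simply appends each getter name to a list (the class and method names are illustrative, matching the wrapped assertEquals shown below):

import java.util.ArrayList;
import java.util.List;

public class PersonFacade extends Person {
    private final Person delegate;
    private final List<String> methodsCalled = new ArrayList<String>();

    public PersonFacade(Person delegate) {
        this.delegate = delegate;
    }

    public String getFirstName() {
        methodsCalled.add("getFirstName()");
        return delegate.getFirstName();
    }

    public String getLastName() {
        methodsCalled.add("getLastName()");
        return delegate.getLastName();
    }

    /** The method names called since the last assert, e.g. "getFirstName()". */
    public String getMethodsCalled() {
        // join the recorded names; good enough for logging purposes
        StringBuilder sb = new StringBuilder();
        for (String m : methodsCalled) {
            if (sb.length() > 0) {
                sb.append(", ");
            }
            sb.append(m);
        }
        return sb.toString();
    }

    public void clearMethodsCalled() {
        methodsCalled.clear();
    }
}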

A typical test method is

public void test1() {
    Person p = getTestPerson();
    assertEquals("John", p.getFirstName());
    assertEquals("Smith", p.getLastName());
}

with a wrapped assertX method of

static PersonFacade person;

public static void assertEquals(String expected, String actual) {
    // ignoring null handling...
    boolean results = expected.equals(actual);
    LOG.log("assertEquals('" + expected + "', " + person.getMethodsCalled() + ") = " + results);
    person.clearMethodsCalled();
    if (!results) {
        throw new AssertionFailedError("Expected same:<" + expected + "> was not:<" + actual + ">");
    }
}

so we would get results like

assertEquals('John', getFirstName()) = true;
assertEquals('Smith', getLastName()) = false;

It’s not hard to see how this could be parsed by a test framework but it’s too early to celebrate. The second test method is

public void test2() {
    Person p = getTestPerson();
    assertEquals("john", p.getFirstName().toLowerCase());
}

and our simple code will not capture the toLowerCase(). Our log will wrongly record the check as:

assertEquals('john', getFirstName()) = true;

even though replaying that captured assertion literally would evaluate to false, because getFirstName() returns "John", not "john".

A more pathological case is:

public void test3() {
    Person p = getTestPerson();
    LOG.log("testing " + p.getFirstName());
    assertEquals("john", "joe");
}

where the assertion has nothing to do with the wrapped class – yet the wrapper would still attribute getFirstName() to the assertion, because the logging call touched the facade just before the assert.

There are obvious band-aids, e.g., we could capture the return values in our facade, but this is a very deep rabbit hole that we want to stay far away from. I think the answer is to make a reasonable first attempt, manually verify the results, and leave it at that. (Alternative: rewrite the tests into a form that can be captured.)

Reference: Adding Database Logging to JUnit3 from our JCG partner Bear Giles at the Invariant Properties blog.