Core Java

JUnit 5 – Extension Model

We already know quite a lot about the next version of Java’s most ubiquitous testing framework. Let’s now look at the JUnit 5 extension model, which will allow libraries and frameworks to implement their own additions to JUnit.

Overview

Most of what you will read here and more can be found in the emerging JUnit 5 user guide. Note that it is based on an alpha version and hence subject to change.

Indeed, we are encouraged to open issues or pull requests so that JUnit 5 can improve further. Please make use of this opportunity! It is our chance to help JUnit help us, so if something you see here could be improved, make sure to take it upstream.

This post will get updated when it becomes necessary. The code samples I show here can be found on GitHub.

JUnit 4 Extension Model

Let’s first look at how JUnit 4 solved the problem. It has two, partly competing extension mechanisms: runners and rules.

Runners

Test runners manage a test’s life cycle: instantiation, calling setup and teardown methods, running the test, handling exceptions, sending notifications, and so forth. JUnit 4 provides an implementation that does all of that.

In 4.0 there was only one way to extend JUnit: Create a new runner and annotate your test class with @RunWith(MyRunner.class) so that JUnit uses it instead of its own implementation.
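
For illustration, this is what that registration looks like (MyRunner being a placeholder for the custom runner):

@RunWith(MyRunner.class)
public class RunByMyRunnerTest {

	@Test
	public void executedByMyRunner() {
		// MyRunner instantiates this class, invokes this method,
		// and reports the result instead of JUnit's default runner
	}
}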

This mechanism is pretty heavyweight and inconvenient for little extensions. And it has a severe limitation: there can only ever be one runner per test class, which makes it impossible to compose them. So there is no way to benefit from the features of, e.g., both the Mockito and the Spring runners at the same time.

Rules

To overcome these limitations, JUnit 4.7 introduced rules, which are annotated fields of the test class. JUnit 4 wraps test methods (and other actions) into a statement and passes it to the rules. They can then execute some code before and after executing the statement. Additionally, test methods usually call methods on rule instances during execution.

An example is the temporary folder rule:

public static class HasTempFolder {
	@Rule
	public TemporaryFolder folder = new TemporaryFolder();

	@Test
	public void testUsingTempFolder() throws IOException {
		File createdFile = folder.newFile("myfile.txt");
		File createdFolder = folder.newFolder("subfolder");
		// ...
	}
}

Due to the @Rule annotation, JUnit calls folder with a statement wrapping the method testUsingTempFolder. This specific rule is written so that folder creates a temporary folder, executes the test, and deletes the folder afterwards. The test itself can then create files and folders in the temporary folder.

Other rules might run the test in Swing’s Event Dispatch Thread, set up and tear down a database, or let the test time out if it ran too long.
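
To make the statement-wrapping mechanism a little more tangible, here is a minimal sketch of a custom rule (not one of JUnit’s built-in rules; the name StopwatchRule is made up) that prints how long each test ran:

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class StopwatchRule implements TestRule {

	@Override
	public Statement apply(Statement base, Description description) {
		// wrap the incoming statement in a new one
		// that measures how long the test takes
		return new Statement() {
			@Override
			public void evaluate() throws Throwable {
				long launchTime = System.currentTimeMillis();
				try {
					// runs the test (and whatever other rules wrap it)
					base.evaluate();
				} finally {
					long runtime = System.currentTimeMillis() - launchTime;
					System.out.printf("'%s' took %d ms.%n",
							description.getDisplayName(), runtime);
				}
			}
		};
	}
}

Like the temporary folder, it would be registered as a public field annotated with @Rule.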

Rules were a big improvement but are generally limited to executing some code before and after a test is run. They cannot help with extensions that can’t be implemented within that frame.

State Of Affairs

So since JUnit 4.7 there have been two competing extension mechanisms, each with its own limitations but also with quite an overlap. This makes clean extension difficult. Additionally, composing different extensions can be problematic and will often not do what the developer hoped it would.

[Image: junit-5-extension-model, published by Tony Walmsley under CC-BY 2.0]

JUnit 5 Extension Model

The JUnit Lambda project has a couple of core principles and one of them is to “prefer extension points over features”. This translated quite literally into an integral mechanism of the new version: extension points are not the only way to extend JUnit 5, but they are the most important one.

Extension Points

JUnit 5 extensions can declare interest in certain junctures of the test life cycle. When the JUnit 5 engine processes a test, it steps through these junctures and calls each registered extension. In rough order of appearance, these are the extension points:

  • Test Instance Post Processing
  • BeforeAll Callback
  • Conditional Test Execution
  • BeforeEach Callback
  • Parameter Resolution
  • Exception Handling
  • AfterEach Callback
  • AfterAll Callback

(Don’t worry if it’s not all that clear what each of them does. We will look at some of them later.)

Each extension point corresponds to an interface. Their methods take arguments that capture the context at that specific point in the test’s lifecycle, e.g. the test instance and method, the test’s name, parameters, annotations, and so forth.

An extension can implement any number of those interfaces and will get called by the engine with the respective arguments. It can then do whatever it needs to implement its functionality. One detail to consider: the engine makes no guarantees about when it instantiates extensions and how long it keeps instances around, so they have to be stateless. Any state they need to maintain has to be written to and loaded from a store that is made available by JUnit.
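
To make this a little more concrete, here is a minimal sketch of such a stateless extension. It sticks to the interface and store APIs that also appear in the benchmark example later in this post (which reflect the current alpha API); the extension’s name and what it stores are made up for illustration, and imports are omitted as elsewhere in this post:

public class TestLoggerExtension implements
		BeforeEachExtensionPoint, AfterEachExtensionPoint {

	// no instance fields: the engine may create and discard extension
	// instances at will, so all state goes into the store
	private static final Namespace NAMESPACE =
			Namespace.of("TestLoggerExtension");

	@Override
	public void beforeEach(TestExtensionContext context) {
		context.getStore(NAMESPACE).put("start", System.currentTimeMillis());
	}

	@Override
	public void afterEach(TestExtensionContext context) {
		long start = (Long) context.getStore(NAMESPACE).remove("start");
		System.out.printf("'%s' ran for %d ms.%n",
				context.getDisplayName(), System.currentTimeMillis() - start);
	}
}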

After creating the extension, all that is left to do is tell JUnit about it. This is as easy as adding @ExtendWith(MyExtension.class) to the test class or method that needs the extension.
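
Registering the hypothetical extension sketched above then looks like this:

@ExtendWith(TestLoggerExtension.class)
class ExtendedTest {

	@Test
	void someTest() {
		// TestLoggerExtension's beforeEach and afterEach
		// run around this test
	}
}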

Actually, a slightly less verbose and more revealing option exists. But for that we first have to look at the other pillar of JUnit’s extension model.

Custom Annotations

The JUnit 5 API is driven by annotations and the engine does a little extra work when it checks for their presence: it not only looks for annotations on classes, methods, and parameters but also on other annotations. And it treats everything it finds as if it were immediately present on the examined element. Annotating annotations is possible with so-called meta-annotations and the cool thing is, all JUnit annotations are totally meta.

This makes it possible to easily create and compose annotations that are fully functional within JUnit 5:

/**
 * We define a custom annotation that:
 * - stands in for '@Test' so that the method gets executed
 * - has the tag "integration" so we can filter by that,
 *   e.g. when running tests from the command line
 */
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Test
@Tag("integration")
public @interface IntegrationTest { }

We can then use it like this:

@IntegrationTest
void runsWithCustomAnnotation() {
    // this gets executed
    // even though `@IntegrationTest` is not defined by JUnit
}

Or we can create more succinct annotations for our extensions:

@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.ANNOTATION_TYPE })
@Retention(RetentionPolicy.RUNTIME)
@ExtendWith(ExternalDatabaseExtension.class)
public @interface Database { }

Now we can use @Database instead of @ExtendWith(ExternalDatabaseExtension.class). And since we added ElementType.ANNOTATION_TYPE to the list of allowed targets, it is also a meta-annotation and we or others can compose it further.
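
Used on a test method, the composed annotation might look like this (the method is made up; @Test still has to be added since @Database only registers the extension):

@Test
@Database
void queriesTheDatabase() {
	// ExternalDatabaseExtension is registered for this test
	// and gets called at whichever extension points it implements
}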

An Example

Let’s say we want to benchmark how long certain tests run. First, we create the annotation we want to use:

@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.ANNOTATION_TYPE })
@Retention(RetentionPolicy.RUNTIME)
@ExtendWith(BenchmarkCondition.class)
public @interface Benchmark { }

It already points to BenchmarkCondition, which we will implement next. This is our plan:

  • to measure the runtime of the whole test class, store the time before any test is executed
  • to measure the runtime of individual test methods, store the time before each test
  • after a test method is executed, retrieve the test’s launch time, then compute and print the resulting runtime
  • after all tests are executed, retrieve the class’s launch time, then compute and print the resulting runtime
  • only do any of this if the class or method is annotated with @Benchmark

The last point might not be immediately obvious. Why would a method not annotated with @Benchmark be processed by the extension? This stems from the fact that if an extension is applied to a class, it automatically applies to all methods therein. So if our requirements state that we might want to benchmark the class but not necessarily all individual methods, we need to exclude them. We do this by checking whether they are individually annotated.
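
A hypothetical test class shows the effect: the class as a whole gets benchmarked, but of its methods only the explicitly annotated one is:

@Benchmark
class SlowTests {

	@Test
	void notBenchmarkedIndividually() {
		// only covered by the class-level benchmark
	}

	@Test
	@Benchmark
	void benchmarkedIndividually() {
		// gets its own benchmark output as well
	}
}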

Coincidentally, the first four points directly correspond to the life cycle callbacks BeforeAll, BeforeEach, AfterEach, and AfterAll, so all we have to do is implement the four corresponding interfaces. The implementations are pretty trivial; they just do what we said above:

public class BenchmarkCondition implements
		BeforeAllExtensionPoint, BeforeEachExtensionPoint,
		AfterEachExtensionPoint, AfterAllExtensionPoint {

	private static final Namespace NAMESPACE =
			Namespace.of("BenchmarkCondition");

	@Override
	public void beforeAll(ContainerExtensionContext context) {
		if (!shouldBeBenchmarked(context))
			return;

		writeCurrentTime(context, LaunchTimeKey.CLASS);
	}

	@Override
	public void beforeEach(TestExtensionContext context) {
		if (!shouldBeBenchmarked(context))
			return;

		writeCurrentTime(context, LaunchTimeKey.TEST);
	}

	@Override
	public void afterEach(TestExtensionContext context) {
		if (!shouldBeBenchmarked(context))
			return;

		long launchTime = loadLaunchTime(context, LaunchTimeKey.TEST);
		long runtime = currentTimeMillis() - launchTime;
		print("Test", context.getDisplayName(), runtime);
	}

	@Override
	public void afterAll(ContainerExtensionContext context) {
		if (!shouldBeBenchmarked(context))
			return;

		long launchTime = loadLaunchTime(context, LaunchTimeKey.CLASS);
		long runtime = currentTimeMillis() - launchTime;
		print("Test container", context.getDisplayName(), runtime);
	}

	private static boolean shouldBeBenchmarked(ExtensionContext context) {
		return context.getElement().isAnnotationPresent(Benchmark.class);
	}

	private static void writeCurrentTime(
			ExtensionContext context, LaunchTimeKey key) {
		context.getStore(NAMESPACE).put(key, currentTimeMillis());
	}

	private static long loadLaunchTime(
			ExtensionContext context, LaunchTimeKey key) {
		return (Long) context.getStore(NAMESPACE).remove(key);
	}

	private static void print(
			String unit, String displayName, long runtime) {
		System.out.printf("%s '%s' took %d ms.%n", unit, displayName, runtime);
	}

	private enum LaunchTimeKey {
		CLASS, TEST
	}
}

Interesting details are shouldBeBenchmarked, which uses JUnit’s API to effortlessly determine whether the current element is (meta-)annotated with @Benchmark, and writeCurrentTime/loadLaunchTime, which use the store to write and read the launch times.

The next posts will talk about conditional test execution and parameter injection and show examples of how to use the corresponding extension points. If you can’t wait, check out this post, which shows how to port two JUnit 4 rules (conditional disable and temporary folder) to JUnit 5.

Summary

We have seen that JUnit 4’s runners and rules were not ideal for creating clean, powerful, and composable extensions. JUnit 5 aims to overcome their limitations with the more general concept of extension points. They allow extensions to specify at what points in a test’s life cycle they want to intervene. We have also looked at how meta-annotations enable easy creation of custom annotations.

What do you think?

Reference: JUnit 5 – Extension Model from our JCG partner Nicolai Parlog at the CodeFx blog.

Nicolai Parlog

Nicolai is a thirty-year-old boy, as the narrator would put it, who has found his passion in software development. He constantly reads, thinks, and writes about it, and codes for a living as well as for fun. Nicolai is the editor of SitePoint's Java channel, writes a book about Project Jigsaw, blogs about software development on codefx.org, and is a long-tail contributor to several open source projects. You can hire him for all kinds of things.