JUnit 5 - Part II
In the first part, I gave you a brief introduction to the basic functionality of the new JUnit 5 framework: new asserts, testing exceptions and timing, parameterizing and structuring tests. In this part I will explain the extension mechanism, which serves as a replacement for the runners and rules.
The New Extension Model
JUnit 4 introduced the concept of runners, which allowed you to implement a strategy for how to run a test. You specified the runner by attaching the @RunWith annotation to the test class, where the value of the annotation named the runner class to use.
You could do quite a lot with runners, but they had one central drawback: you could specify only one runner per test class :-|
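Just as a reminder, a typical runner declaration looked something like this - a minimal JUnit 4 sketch using the built-in Parameterized runner; the test class and data are made up for illustration:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

// JUnit 4: the runner applies to the whole class - and there can be only one.
@RunWith(Parameterized.class)
public class SquareTest {

    @Parameters
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { { 2, 4 }, { 3, 9 }, { 4, 16 } });
    }

    private final int input;
    private final int expected;

    public SquareTest(int input, int expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void square() {
        assertEquals(expected, input * input);
    }
}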
Since this was not flexible enough for most people, the concept of rules was introduced. Rules allow you to intercept the test execution, so you can do all kinds of things here, like test preparation and cleanup, but also conditional test execution. Additionally, you could combine multiple rules. But they could not satisfy all requirements, which is why runners were still needed, e.g. to run a Spring test.
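Rules, in contrast, are declared as annotated fields, and several of them can be combined on one class; a minimal JUnit 4 sketch using two of the built-in rules:

import static org.junit.Assert.assertTrue;

import java.io.File;
import java.io.IOException;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.TemporaryFolder;
import org.junit.rules.Timeout;

// JUnit 4: rules are fields, and multiple rules may be combined on one test class.
public class FileHandlingTest {

    @Rule
    public TemporaryFolder folder = new TemporaryFolder();

    @Rule
    public Timeout timeout = Timeout.seconds(10);

    @Test
    public void createsFile() throws IOException {
        File file = folder.newFile("data.txt");
        assertTrue(file.isFile());
    }
}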
So there were two disjoint concepts addressing the same problem. In JUnit 5 both of them have been discarded and replaced by the extension mechanism. In one sentence: extensions allow you to implement callbacks that hook into the test lifecycle. You attach an extension to a test using the @ExtendWith annotation, where the value specifies your extension class. In contrast to @RunWith, multiple extensions are allowed. Also, you may use an extension either on the test class or on a test method:

@ExtendWith(MockitoExtension.class)
class MockTests {
    // ...
}

@ExtendWith(MockitoExtension.class)
@Test
void mockTest() {
    // ...
}
An extension must implement the interface Extension, which is just a marker interface. The interesting stuff comes with the subtypes of Extension, which allow you to hook into the JUnit lifecycle.
Conditional Test Execution
This extension type allows you to decide whether a test should be executed at all. By implementing the interface ContainerExecutionCondition, you may decide about the execution of all tests in a test container, which is e.g. a test class:

public interface ContainerExecutionCondition extends Extension {
    ConditionEvaluationResult evaluate(ContainerExtensionContext context);
}
The context gives you access to the test container, e.g. the test class, so you may inspect it in order to make the decision. To decide per test instance whether to run a test or not, implement the interface TestExecutionCondition. The TestExtensionContext gives you access to the test method and the parent context:

public interface TestExecutionCondition extends Extension {
    ConditionEvaluationResult evaluate(TestExtensionContext context);
}
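To make this a bit more tangible, here is a minimal sketch of a custom condition, written against the milestone API shown above; the class name and the os.name check are made up for illustration:

import org.junit.jupiter.api.extension.ConditionEvaluationResult;
import org.junit.jupiter.api.extension.TestExecutionCondition;
import org.junit.jupiter.api.extension.TestExtensionContext;

// Hypothetical condition: skip a test when running on Windows.
public class DisabledOnWindowsCondition implements TestExecutionCondition {

    @Override
    public ConditionEvaluationResult evaluate(TestExtensionContext context) {
        boolean onWindows = System.getProperty("os.name").toLowerCase().contains("win");
        return onWindows
                ? ConditionEvaluationResult.disabled("disabled on Windows")
                : ConditionEvaluationResult.enabled("enabled");
    }
}

Attached to a test via @ExtendWith(DisabledOnWindowsCondition.class), it would simply skip that test on Windows machines.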
A practical example of a condition is the DisabledCondition, which implements both interfaces and checks whether the test method or the container is marked with a @Disabled annotation. Have a look at the source code on GitHub.
TestInstancePostProcessor
The TestInstancePostProcessor allows you to - make an educated guess - post-process the test class instance. This is useful e.g. to perform dependency injection and is used by the Spring and Mockito extensions to inject beans and mocks, respectively. We will use it soon in our practical example.
Test Lifecycle Callbacks
These extensions allow you to hook into JUnit's before/after lifecycle. You may implement one or even all of the callbacks, depending on your use case. The callbacks are listed below; a minimal example follows the list.
BeforeAllCallback
This extension is called before all tests and before all methods marked with the @BeforeAll annotation.

BeforeEachCallback
This extension is called before each test of the associated container, and before all methods marked with the @BeforeEach annotation.

BeforeTestExecutionCallback
This extension is called before each test of the associated container, but - in contrast to the BeforeEachCallback - after all methods marked with the @BeforeEach annotation.

AfterTestExecutionCallback
This extension is called after each test of the associated container, but before all methods marked with the @AfterEach annotation.

AfterEachCallback
This extension is called after each test of the associated container, and after all methods marked with the @AfterEach annotation.

AfterAllCallback
This extension is called after all tests and after all methods marked with the @AfterAll annotation.
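As a small illustration - a sketch against the milestone API used throughout this article - an extension that just logs each test could combine two of these callbacks:

import org.junit.jupiter.api.extension.AfterEachCallback;
import org.junit.jupiter.api.extension.BeforeEachCallback;
import org.junit.jupiter.api.extension.TestExtensionContext;

// Logs every test; beforeEach() runs before the @BeforeEach methods,
// afterEach() runs after the @AfterEach methods.
public class LoggingExtension implements BeforeEachCallback, AfterEachCallback {

    @Override
    public void beforeEach(TestExtensionContext context) throws Exception {
        System.out.println("about to run " + context.getDisplayName());
    }

    @Override
    public void afterEach(TestExtensionContext context) throws Exception {
        System.out.println("finished " + context.getDisplayName());
    }
}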
Set an Example - A Replacement for the TemporaryFolder Rule
Since extensions are a replacement for runners and rules, the old rules are no longer supported**. One rule often used is the TemporaryFolder rule, which provides temporary files and folders for every test, and also performs some cleanup afterwards. So we will now write an extension-based replacement using the extensions we have seen so far. You will find the source code in a GitHub repository accompanying this article.
The main functionality of creating and cleaning up the files and folders will be provided by the class TemporaryFolder (we use the same name here as the original rule, so we can easily use it as a replacement). It has some methods to create files and folders, and also before() and after() methods which are supposed to be called before and after every test, respectively:

public class TemporaryFolder {
    ...
    public File newFile() throws IOException { ... }
    public File newFolder() throws IOException { ... }
    public void before() throws IOException { ... }
    public void after() throws IOException { ... }
}
We are now going to write an extension that injects the TemporaryFolder into a test instance, and automatically calls the before() and after() methods before and after executing a test, respectively. Something like this:

@ExtendWith(TemporaryFolderExtension.class)
public class TempFolderTest {

    private TemporaryFolder temporaryFolder;

    @BeforeEach
    public void setUp() throws IOException {
        assertNotNull(temporaryFolder);
    }

    @Test
    public void testTemporaryFolderInjection() throws IOException {
        File file = temporaryFolder.newFile();
        assertNotNull(file);
        assertTrue(file.isFile());

        File folder = temporaryFolder.newFolder();
        assertNotNull(folder);
        assertTrue(folder.isDirectory());
    }
}
Let's start implementing that extension. We want to inject a TemporaryFolder into our test instance, and as already mentioned, the TestInstancePostProcessor is the extension designed for that use case. You get the test class instance and the extension context for the test class passed as parameters. So we need to inspect our test instance for fields of type TemporaryFolder, and assign a new instance to each such field:

public class TemporaryFolderExtension implements TestInstancePostProcessor {

    @Override
    public void postProcessTestInstance(Object testInstance, ExtensionContext context) throws Exception {
        for (Field field : testInstance.getClass().getDeclaredFields()) {
            if (field.getType().isAssignableFrom(TemporaryFolder.class)) {
                TemporaryFolder temporaryFolder = createTemporaryFolder(context, field);
                field.setAccessible(true);
                field.set(testInstance, temporaryFolder);
            }
        }
    }
    ...
}
Not that hard at all. But we need to remember the created TemporaryFolder instances in order to call the before() and after() methods on them. One would say: no problem, just save them in some kind of collection member. But there is a catch: extensions must not have state! This was a design decision in order to be flexible regarding the lifecycle of extensions. But since state is essential for certain kinds of extensions, there is a store API:

interface Store {
    Object get(Object key);
    <V> V get(Object key, Class<V> requiredType);
    <K, V> Object getOrComputeIfAbsent(K key, Function<K, V> defaultCreator);
    <K, V> V getOrComputeIfAbsent(K key, Function<K, V> defaultCreator, Class<V> requiredType);
    void put(Object key, Object value);
    Object remove(Object key);
    <V> V remove(Object key, Class<V> requiredType);
}
The store is provided by the ExtensionContext, which is passed to the extension callbacks as a parameter. Be aware that these contexts are organized hierarchically, meaning there is a context for the test (TestExtensionContext) and one for the surrounding test class (ContainerExtensionContext). And since test classes may be nested, so may those container contexts be. Each context provides its own store, so you have to take care where you store your stuff. Big words; let's just write our createTemporaryFolder() method, which creates the TemporaryFolder, associates it in a map using the given field as the key, and saves that map in the context's store:

protected TemporaryFolder createTemporaryFolder(ExtensionContext extensionContext, Member key) {
    Map<Member, TemporaryFolder> map = getStore(extensionContext).getOrComputeIfAbsent(
            extensionContext.getTestClass().get(), (c) -> new ConcurrentHashMap<>(), Map.class);
    return map.computeIfAbsent(key, (k) -> new TemporaryFolder());
}

protected ExtensionContext.Store getStore(ExtensionContext context) {
    return context.getStore(ExtensionContext.Namespace.create(getClass(), context));
}
OK, so we now create and inject the field, and remember it in the store. Are we done now? Let's write a test. We want our extension to inject a TemporaryFolder that we will use to create files and folders - either in the set up or in a test - and these files are supposed to be deleted after the test:

@ExtendWith(TemporaryFolderExtension.class)
public class TempFolderTest {

    private List<File> createdFiles = new ArrayList<>();
    private TemporaryFolder temporaryFolder;

    private void rememberFile(File file) {
        createdFiles.add(file);
    }

    private void checkFileAndParentHasBeenDeleted(File file) {
        assertFalse(file.exists(),
                String.format("file %s has not been deleted", file.getAbsolutePath()));
        assertFalse(file.getParentFile().exists(),
                String.format("folder %s has not been deleted", file.getParentFile().getAbsolutePath()));
    }

    @BeforeEach
    public void setUp() throws IOException {
        assertNotNull(temporaryFolder);
        createdFiles.clear();
        // create a file in set up
        File file = temporaryFolder.newFile();
        rememberFile(file);
    }

    @AfterEach
    public void tearDown() throws Exception {
        for (File file : createdFiles) {
            checkFileAndParentHasBeenDeleted(file);
        }
    }

    @Test
    public void testTemporaryFolderInjection() throws Exception {
        File file = temporaryFolder.newFile();
        rememberFile(file);
        assertNotNull(file);
        assertTrue(file.isFile());

        File folder = temporaryFolder.newFolder();
        rememberFile(folder);
        assertNotNull(folder);
        assertTrue(folder.isDirectory());
    }
}
Run the test, and...it fails:
org.opentest4j.AssertionFailedError: file C:\Users\Ralf\AppData\Local\Temp\junit6228173188033609420\junit1925268561755970404.tmp has not been deleted
...
at com.github.ralfstuckert.junit.jupiter.TempFolderTest.checkFileAndParentHasBeenDeleted(TempFolderTest.java:32)
at com.github.ralfstuckert.junit.jupiter.TempFolderTest.tearDown(TempFolderTest.java:55)
Well, no surprise, we are not cleaning up any files yet, so we need to implement that. We want to clean up the files right after the test, before the @AfterEach methods are triggered. The callback to do this is the AfterTestExecutionCallback:

public class TemporaryFolderExtension implements AfterTestExecutionCallback, TestInstancePostProcessor {
    ...
    @Override
    public void afterTestExecution(TestExtensionContext extensionContext) throws Exception {
        if (extensionContext.getParent().isPresent()) {
            // clean up injected member
            cleanUpTemporaryFolder(extensionContext.getParent().get());
        }
    }

    protected void cleanUpTemporaryFolder(ExtensionContext extensionContext) {
        for (TemporaryFolder temporaryFolder : getTemporaryFolders(extensionContext)) {
            temporaryFolder.after();
        }
    }

    protected Iterable<TemporaryFolder> getTemporaryFolders(ExtensionContext extensionContext) {
        Map<Object, TemporaryFolder> map =
                getStore(extensionContext).get(extensionContext.getTestClass().get(), Map.class);
        if (map == null) {
            return Collections.emptySet();
        }
        return map.values();
    }
}
So we are now called right after the test has been executed, retrieve all TemporaryFolder instances we saved in the store, and call the after() method which actually cleans up the files. One point to mention is that we are using the context's parent to retrieve the store. That's because we used the store of the (class-level) ContainerExtensionContext when we created the TemporaryFolder instances, but in afterTestExecution() we get passed the TestExtensionContext, which is the child context. So we have to climb up the context hierarchy in order to get the right context and the associated store. Let's run the test again...tada, green.
Provide the TemporaryFolder as a Parameter
We want the possibility to provide a TemporaryFolder as a parameter for a test method. We will specify this as a test first:

@Test
public void testTemporaryFolderAsParameter(final TemporaryFolder tempFolder) throws Exception {
    assertNotNull(tempFolder);
    assertNotSame(tempFolder, temporaryFolder);

    File file = tempFolder.newFile();
    rememberFile(file);
    assertNotNull(file);
    assertTrue(file.isFile());
}
Run the test...
org.junit.jupiter.api.extension.ParameterResolutionException: No ParameterResolver registered for parameter [com.github.ralfstuckert.junit.jupiter.extension.tempfolder.TemporaryFolder arg0] in executable [public void com.github.ralfstuckert.junit.jupiter.TempFolderTest.testTemporaryFolderAsParameter(com.github.ralfstuckert.junit.jupiter.extension.tempfolder.TemporaryFolder) throws java.lang.Exception].
This failure message already gives us a hint on what we have to do: a ParameterResolver. This is also an extension interface; it allows you to provide parameters for both test constructors and methods, so we will implement it. It consists of the two methods supports() and resolve(). The first one is called to check whether this extension is capable of providing the desired parameter, and the latter is then called to actually create an instance of that parameter:

public class TemporaryFolderExtension implements ParameterResolver, AfterTestExecutionCallback, TestInstancePostProcessor {

    @Override
    public boolean supports(ParameterContext parameterContext, ExtensionContext extensionContext)
            throws ParameterResolutionException {
        Parameter parameter = parameterContext.getParameter();
        return (extensionContext instanceof TestExtensionContext)
                && parameter.getType().isAssignableFrom(TemporaryFolder.class);
    }

    @Override
    public Object resolve(ParameterContext parameterContext, ExtensionContext extensionContext)
            throws ParameterResolutionException {
        TestExtensionContext testExtensionContext = (TestExtensionContext) extensionContext;
        try {
            TemporaryFolder temporaryFolder =
                    createTemporaryFolder(testExtensionContext, testExtensionContext.getTestMethod().get());
            Parameter parameter = parameterContext.getParameter();
            if (parameter.getType().isAssignableFrom(TemporaryFolder.class)) {
                return temporaryFolder;
            }
            throw new ParameterResolutionException("unable to resolve parameter for " + parameterContext);
        } catch (IOException e) {
            throw new ParameterResolutionException("failed to create temp file or folder", e);
        }
    }
    ...
}
That's it? No; if you run the test, it is still red, but with a different failure message saying that a file has not been deleted as expected. Well, if you look at the implementation you will see that we are saving the created TemporaryFolder in the store of the testExtensionContext, using the test method as the key. Previously we remembered all instances we injected in the (class-level) ContainerExtensionContext. So we have to take care of this one in our clean up code:

@Override
public void afterTestExecution(TestExtensionContext extensionContext) throws Exception {
    // clean up test instance
    cleanUpTemporaryFolder(extensionContext);
    if (extensionContext.getParent().isPresent()) {
        // clean up injected member
        cleanUpTemporaryFolder(extensionContext.getParent().get());
    }
}
Run the test again...green. Of course we could have climbed up to the class container extension context and used that store for remembering the new TemporaryFolder, but we want to fool around a bit here and try things out ;-)
More Fun with Parameters
By now we get a TemporaryFolder injected and passed as a parameter, and then we are using it to create files and folders. Why that extra step? I'd like a fresh temporary file or folder passed directly as a parameter. OK, it would be nice if we could express our desire for a temporary file. Also, we need something to distinguish between files and folders, since they both have the type File...how about this:

@Test
public void testTempFolder(@TempFolder final File folder) {
    rememberFile(folder);
    assertNotNull(folder);
    assertTrue(folder.exists());
    assertTrue(folder.isDirectory());
}

@Test
public void testTempFile(@TempFile final File file) {
    rememberFile(file);
    assertNotNull(file);
    assertTrue(file.exists());
    assertTrue(file.isFile());
}
Very nice: we just mark the parameter with an annotation that describes our needs. And this is easy to accomplish with the parameter resolver. First, we need our parameter annotations:
@Target({ ElementType.TYPE, ElementType.PARAMETER })
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface TempFile {}

@Target({ ElementType.TYPE, ElementType.PARAMETER })
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface TempFolder {}
Also, we just need to extend our existing code a little bit:
@Override
public boolean supports(ParameterContext parameterContext, ExtensionContext extensionContext)
        throws ParameterResolutionException {
    Parameter parameter = parameterContext.getParameter();
    return (extensionContext instanceof TestExtensionContext)
            && (parameter.getType().isAssignableFrom(TemporaryFolder.class)
                || (parameter.getType().isAssignableFrom(File.class)
                    && (parameter.isAnnotationPresent(TempFolder.class)
                        || parameter.isAnnotationPresent(TempFile.class))));
}

@Override
public Object resolve(ParameterContext parameterContext, ExtensionContext extensionContext)
        throws ParameterResolutionException {
    TestExtensionContext testExtensionContext = (TestExtensionContext) extensionContext;
    try {
        TemporaryFolder temporaryFolder =
                createTemporaryFolder(testExtensionContext, testExtensionContext.getTestMethod().get());
        Parameter parameter = parameterContext.getParameter();
        if (parameter.getType().isAssignableFrom(TemporaryFolder.class)) {
            return temporaryFolder;
        }
        if (parameter.isAnnotationPresent(TempFolder.class)) {
            return temporaryFolder.newFolder();
        }
        if (parameter.isAnnotationPresent(TempFile.class)) {
            return temporaryFolder.newFile();
        }
        throw new ParameterResolutionException("unable to resolve parameter for " + parameterContext);
    } catch (IOException e) {
        throw new ParameterResolutionException("failed to create temp file or folder", e);
    }
}
Run the tests, aaaand...green. That was easy. Just one more improvement: wouldn't it be useful if we could name our test files? Like this:
@Test
public void testTempFile(@TempFile("hihi") final File file) {
    rememberFile(file);
    assertNotNull(file);
    assertTrue(file.exists());
    assertTrue(file.isFile());
    assertEquals("hihi", file.getName());
}
That's easy. Just add a value to our file annotation, and evaluate it in the resolve() method:
@Target({ ElementType.TYPE, ElementType.PARAMETER })
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface TempFile {
    String value() default "";
}
@Override
public Object resolve(ParameterContext parameterContext, ExtensionContext extensionContext)
        throws ParameterResolutionException {
    ...
    if (parameter.isAnnotationPresent(TempFile.class)) {
        TempFile annotation = parameter.getAnnotation(TempFile.class);
        if (!annotation.value().isEmpty()) {
            return temporaryFolder.newFile(annotation.value());
        }
        return temporaryFolder.newFile();
    }
    ...
}
Annotation Composition
As already explained in the first part, JUnit 5 has support for composed and meta-annotations. This allows you to use JUnit annotations via inheritance (see the chapter on interface default methods). When searching for annotations, JUnit also inspects all super classes, interfaces, and even the annotations themselves, meaning you can also use JUnit annotations as meta-annotations on your own annotations. Let's say you have a bunch of tests you would like to benchmark. In order to group them, you tag them with @Tag("benchmark"). The benchmark functionality is provided by your custom BenchmarkExtension:
@Tag("benchmark") @ExtendWith(BenchmarkExtension.class) class SearchEngineTest { ...
We will now extract both the tag and the extension to our own meta-annotation...
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Tag("benchmark")
@ExtendWith(BenchmarkExtension.class)
public @interface Benchmark {
}
...and use that meta-annotation in our tests instead
@Benchmark
class SearchEngineTest {
    ...
So this is helpful to represent a bunch of annotations with one descriptive annotation. But there are other use cases, especially if you are dealing with libraries that themselves support composed and meta-annotations, like e.g. Spring. Spring has support for running integration tests with JUnit, where the application context is created before the test is run. Spring supports both JUnit 4 (using the SpringJUnit4ClassRunner) and JUnit 5 (using the SpringExtension). So what is our use case?
If you are working with Spring persistence, it is very easy to write integration tests that check your custom persistence logic through real interaction with the database. But after a test you need to clean up your test dirt. Some people do so by tracking the objects they have inserted for testing purposes and deleting them after the tests. So how about writing an extension that actually tracks certain entities created during a test, and automatically deletes them afterwards?
The MongoCleanup Extension
Let's say we have an entity Ticket, a MongoDB-based TicketRepository, and an integration test TicketRepositoryIT. What we want to achieve is that we mark our test with the @MongoCleanup annotation, which gets passed one or multiple entity classes to watch. All instances of those entities saved during the test will be automatically deleted after the test has finished:

@MongoCleanup(Ticket.class)
@ExtendWith(SpringExtension.class)
@SpringBootTest
public class TicketRepositoryIT {

    @Autowired
    private TicketRepository repository;

    @Test
    @DisplayName("Test the findByTicketId() method")
    public void testSaveAndFindTicket() throws Exception {
        Ticket ticket1 = new Ticket("1", "blabla");
        repository.save(ticket1);
        Ticket ticket2 = new Ticket("2", "hihi");
        repository.save(ticket2);
        ...
    }
}
In order to do so, we have to register a bean in the Spring context that tracks saved instances and provides some functionality to delete them. Also, we need an extension that has access to the Spring context, so it can retrieve that bean and trigger the deletion after the test is finished. Beans first:
public class MongoCleaner implements ApplicationListener<AfterSaveEvent> {

    @Override
    public void onApplicationEvent(AfterSaveEvent event) {
        // remember saved entities
        ...
    }

    public void prepare(final List<Class<?>> entityTypes) {
        // prepare entities to watch
        ...
    }

    public Map<Class<?>, Set<String>> cleanup() {
        // delete watched entities
        ...
    }
    ...
}
The concrete implementation is not the point here; if you are interested, have a look at the accompanying GitHub project. The bean is provided to the Spring context using a configuration class:
@Configuration
public class MongoCleanerConfig {

    @Bean
    public MongoCleaner mongoCleaner() {
        return new MongoCleaner();
    }
}
And now the extension: it retrieves the MongoCleaner bean from the Spring context using a static method of the SpringExtension, and calls the prepare() and cleanup() methods before and after each test, respectively:
public class MongoCleanupExtension implements BeforeEachCallback, AfterEachCallback {

    @Override
    public void beforeEach(TestExtensionContext context) throws Exception {
        MongoCleaner mongoCleaner = getMongoCleaner(context);
        List<Class<?>> entityTypesToCleanup = getEntityTypesToCleanup(context);
        mongoCleaner.prepare(entityTypesToCleanup);
    }

    @Override
    public void afterEach(TestExtensionContext context) throws Exception {
        MongoCleaner mongoCleaner = getMongoCleaner(context);
        Map<Class<?>, Set<String>> cleanupResult = mongoCleaner.cleanup();
        cleanupResult.forEach((entityType, ids) -> {
            context.publishReportEntry(
                    String.format("deleted %s entities", entityType.getSimpleName()), ids.toString());
        });
    }

    protected MongoCleaner getMongoCleaner(ExtensionContext context) {
        ApplicationContext applicationContext = SpringExtension.getApplicationContext(context);
        MongoCleaner mongoCleaner = applicationContext.getBean(MongoCleaner.class);
        return mongoCleaner;
    }

    protected List<Class<?>> getEntityTypesToCleanup(ExtensionContext context) {
        Optional<AnnotatedElement> element = context.getElement();
        MongoCleanup annotation = AnnotationUtils.findAnnotation(context.getTestClass().get(), MongoCleanup.class);
        return Arrays.asList(annotation.value());
    }
}
Well, but how is the bean configuration passed to Spring? And what about our MongoCleanupExtension, which must be provided to JUnit via an @ExtendWith annotation?!? Now that's the use case for a meta-annotation. We will create our own annotation @MongoCleanup, which is itself annotated with both the JUnit @ExtendWith AND the Spring @Import annotation:
@Target({ ElementType.TYPE })
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Import(MongoCleanerConfig.class)
@ExtendWith(MongoCleanupExtension.class)
public @interface MongoCleanup {

    /**
     * @return the entity classes to clean up.
     */
    Class[] value();
}
The @ExtendWith(MongoCleanupExtension.class) is processed by JUnit and hooks our extension into the test lifecycle. The @Import(MongoCleanerConfig.class) is processed by Spring and adds our MongoCleaner to the application context. So by adding one single annotation to our test class, we add functionality that hooks into two different frameworks. And this is possible because both frameworks support composed and meta-annotations.
Conclusion
JUnit 5 is a complete rewrite, and it looks promising. The separation of the framework into a platform and a test engine SPI decouples the tool providers from the test engines, giving you support for any test engine that implements the SPI. And the engine providers may improve and refactor their code without affecting the tool providers, which was quite a problem in the past. The usage of lambdas lets you write more concise test code, and nested test classes and dynamic tests give you some new flexibility to structure your tests. The runner and rule APIs have been replaced by the extension API, providing a clean mechanism to extend the framework. Be aware that the work on JUnit 5 is still in progress, so some APIs might change until the release in Q3. That was quite a lot more stuff than planned, but I hope you got an idea of what you can do with JUnit 5.

Best regards
Ralf
That's what makes this a particularly difficult sort of extraordinary case. The kind I like.
Jupiter Jones - Jupiter Ascending
** Limited Support for Some Old Rules

Since some people will miss rules like TemporaryFolder, the JUnit 5 team added some support for a selection of rules:

org.junit.rules.ExternalResource (including org.junit.rules.TemporaryFolder)
org.junit.rules.Verifier (including org.junit.rules.ErrorCollector)
org.junit.rules.ExpectedException

These are provided by the separate artifact junit-jupiter-migration-support. In order to use those rules, you have to add one of the responsible extensions to your test class, e.g. for Verifier it is the extension VerifierSupport. Or you just annotate your test with @EnableRuleMigrationSupport, which composes all rule support extensions:

@Target({ ElementType.TYPE })
@ExtendWith(ExternalResourceSupport.class)
@ExtendWith(VerifierSupport.class)
@ExtendWith(ExpectedExceptionSupport.class)
public @interface EnableRuleMigrationSupport {}
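For example, a test class using the old TemporaryFolder rule via the migration support might look like this - a minimal sketch, assuming the migration support artifact mentioned above is on the classpath and using the package names of current JUnit 5 versions:

import static org.junit.jupiter.api.Assertions.assertTrue;

import java.io.File;
import java.io.IOException;

import org.junit.Rule;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.migrationsupport.rules.EnableRuleMigrationSupport;
import org.junit.rules.TemporaryFolder;

@EnableRuleMigrationSupport
class LegacyRuleTest {

    // the old JUnit 4 rule, handled by the ExternalResourceSupport extension
    @Rule
    public TemporaryFolder folder = new TemporaryFolder();

    @Test
    void worksWithTheOldRule() throws IOException {
        File file = folder.newFile("legacy.txt");
        assertTrue(file.isFile());
    }
}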