Monday, March 27, 2017

JUnit 5 - Part II

In the first part, I gave you a brief introduction to the basic functionality of the new JUnit 5 framework: new asserts, testing exceptions and timing, parameterizing and structuring tests. In this part I will explain the extension mechanism, which serves as a replacement for the runners and rules.

The New Extension Model

JUnit 4 introduced the concept of runners, which allowed you to implement a strategy for how to run a test. You could specify the runner to use by attaching the @RunWith annotation to the test class, where the value of the annotation specified the runner class to use. You could do quite a lot of stuff with runners, but they had one central drawback: you could specify only one runner per test :-|

Since this was not flexible enough for most people, the concept of rules was invented. Rules allow you to intercept the test execution, so you can do all kinds of stuff here like test preparation and cleanup, but also conditionally executing a test. Additionally, you could combine multiple rules. But they could not satisfy all requirements, which is why runners were still needed to e.g. run a Spring test.

So there were two disjoint concepts applying to the same problem. In JUnit 5 both of them have been discarded and replaced by the extension mechanism. In one sentence: extensions allow you to implement callbacks that hook into the test lifecycle. You can attach an extension to a test using the @ExtendWith annotation, where the value specifies your extension class. In contrast to @RunWith, multiple extensions are allowed. Also, you may use an extension either on the test class or on a test method:

@ExtendWith(MockitoExtension.class)
class MockTests {
   // ...
}

   @ExtendWith(MockitoExtension.class)
   @Test
   void mockTest() {
      // ...
   }

An extension must implement the interface Extension, which is just a marker interface. The interesting stuff comes with the subtypes of Extension, which allow you to hook into the JUnit lifecycle.
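
For the record, the marker interface itself is literally empty:

public interface Extension {
}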

Conditional Test Execution

This extension type allows you to decide whether a test should be executed at all. By implementing the interface ContainerExecutionCondition, you may decide about the execution of all tests in a test container, which is e.g. a test class:

public interface ContainerExecutionCondition extends Extension {

   ConditionEvaluationResult evaluate(ContainerExtensionContext context);
}

The context gives you access to the test container, e.g. the test class, so you may inspect it in order to make the decision. To decide for each single test whether it should run or not, implement the interface TestExecutionCondition. The TestExtensionContext gives you access to the test method and the parent context:

public interface TestExecutionCondition extends Extension {

   ConditionEvaluationResult evaluate(TestExtensionContext context);
}

A practical example for a condition is the DisabledCondition, which implements both interfaces and checks whether either the test method or the container is marked with a @Disabled annotation. Have a look at the source code on GitHub.
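
Just to give you an idea of how such a condition could look, here is a minimal sketch (a hypothetical example, not part of JUnit) that disables tests on Windows:

public class DisabledOnWindowsCondition implements TestExecutionCondition {

   @Override
   public ConditionEvaluationResult evaluate(TestExtensionContext context) {
      // decide based on the operating system the test is running on
      if (System.getProperty("os.name").toLowerCase().contains("win")) {
         return ConditionEvaluationResult.disabled("disabled on Windows");
      }
      return ConditionEvaluationResult.enabled("enabled on " + System.getProperty("os.name"));
   }
}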

TestInstancePostProcessor

The TestInstancePostProcessor allows you to - make an educated guess - post-process the test class instance. This is useful to e.g. perform dependency injection and is used by the Spring and Mockito extensions to inject beans and mocks, respectively. We will use that soon in our practical example.

Test Lifecycle Callbacks

These extensions allow you to hook into JUnit's before/after lifecycle. You may implement one or even all callbacks, depending on your use case. The callbacks are:

BeforeAllCallback

This extension is called before all tests and before all methods marked with the @BeforeAll annotation.

BeforeEachCallback

This extension is called before each test of the associated container, and before all methods marked with the @BeforeEach annotation.

BeforeTestExecutionCallback

This extension is called before each test of the associated container, but - in contrast to the BeforeEachCallback - after all methods marked with the @BeforeEach annotation.

AfterTestExecutionCallback

This extension is called after each test of the associated container, but before all methods marked with the @AfterEach annotation.

AfterEachCallback

This extension is called after each test of the associated container, and after all methods marked with the @AfterEach annotation.

AfterAllCallback

This extension is called after all tests and after all methods marked with the @AfterAll annotation.
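
To illustrate the mechanics, here is a minimal (and deliberately stateless) sketch of an extension that logs each test; a hypothetical example, not part of JUnit:

public class LoggingExtension implements BeforeEachCallback, AfterEachCallback {

   @Override
   public void beforeEach(TestExtensionContext context) throws Exception {
      // called before each test, and before all @BeforeEach methods
      System.out.println("about to run: " + context.getDisplayName());
   }

   @Override
   public void afterEach(TestExtensionContext context) throws Exception {
      // called after each test, and after all @AfterEach methods
      System.out.println("finished: " + context.getDisplayName());
   }
}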

Set an Example - A Replacement for the TemporaryFolder Rule

Since extensions are a replacement for runners and rules, the old rules are no longer supported**. One rule often used is the TemporaryFolder rule, which provides temporary files and folders for every test, and also performs some cleanup afterwards. So we will now write an extension-based replacement using the extensions we have seen so far. You will find the source code in a GitHub repository accompanying this article. The main functionality of creating and cleaning the files and folders will be provided by the class TemporaryFolder (we use the same name here as the original rule, so we can easily use it as a replacement). It has some methods to create files and folders, and also before() and after() methods which are supposed to be called before and after every test, respectively:

public class TemporaryFolder {
...
    public File newFile() throws IOException { ... }

    public File newFolder() throws IOException { ... }
    
    public void before() throws IOException { ... }

    public void after() throws IOException { ... }
}

We're now gonna write an extension that injects the TemporaryFolder into a test instance, and automatically calls the before() and after() methods before and after executing a test, respectively. Something like this:

@ExtendWith(TemporaryFolderExtension.class)
public class TempFolderTest {

   private TemporaryFolder temporaryFolder;

   @BeforeEach
   public void setUp() throws IOException {
      assertNotNull(temporaryFolder);
   }

   @Test
   public void testTemporaryFolderInjection() {
      File file = temporaryFolder.newFile();
      assertNotNull(file);
      assertTrue(file.isFile());

      File folder = temporaryFolder.newFolder();
      assertNotNull(folder);
      assertTrue(folder.isDirectory());
   }
}

Let's start implementing that extension. We want to inject a TemporaryFolder into our test instance, and as already mentioned, the TestInstancePostProcessor is the extension designed for that use case. You get the test class instance and the extension context for the test class as parameters. So we need to inspect our test instance for fields of type TemporaryFolder, and assign a new instance to each such field:

public class TemporaryFolderExtension implements TestInstancePostProcessor {

   @Override
   public void postProcessTestInstance(Object testInstance, ExtensionContext context) throws Exception {
      for (Field field : testInstance.getClass().getDeclaredFields()) {
         if (field.getType().isAssignableFrom(TemporaryFolder.class)) {
            TemporaryFolder temporaryFolder = createTemporaryFolder(context, field);
            field.setAccessible(true);
            field.set(testInstance, temporaryFolder);
         }
      }
   }

   ...
}

Not that hard at all. But we need to remember the created TemporaryFolder instances, in order to call the before() and after() methods on them. One might say "No problem, just save them in some kind of collection member." But there is a catch: extensions must not have state! This was a design decision in order to be flexible on the lifecycle of extensions. But since state is essential for certain kinds of extensions, there is a store API:

interface Store {

   Object get(Object key);

   <V> V get(Object key, Class<V> requiredType);

   <K, V> Object getOrComputeIfAbsent(K key, Function<K, V> defaultCreator);

   <K, V> V getOrComputeIfAbsent(K key, Function<K, V> defaultCreator, Class<V> requiredType);

   void put(Object key, Object value);

   Object remove(Object key);

   <V> V remove(Object key, Class<V> requiredType);

}

The store is provided by the ExtensionContext, where the context is passed to the extension callbacks as a parameter. Be aware that these contexts are organized hierarchically, which means you have a context for the test (TestExtensionContext) and for the surrounding test class (ContainerExtensionContext). And since test classes may be nested, so may those container contexts. Each context provides its own store, so you have to take care where you are storing your stuff. Big words, let's just write our createTemporaryFolder() method, which creates the TemporaryFolder, associates it in a map using the given field as the key, and saves that map in the context's store:

    protected TemporaryFolder createTemporaryFolder(ExtensionContext extensionContext, Member key) {
        Map<Member, TemporaryFolder> map =
                getStore(extensionContext).getOrComputeIfAbsent(extensionContext.getTestClass().get(),
                        (c) -> new ConcurrentHashMap<>(), Map.class);
        return map.computeIfAbsent(key, (k) -> new TemporaryFolder());
    }

    protected ExtensionContext.Store getStore(ExtensionContext context) {
        return context.getStore(ExtensionContext.Namespace.create(getClass(), context));
    }


Ok, so we now create and inject the field, and remember that in the store. Are we done now? Let's write a test. We want our extension to inject a TemporaryFolder that we will use to create files and folders - either in the setup or in a test - and these files are supposed to be deleted after the test:

@ExtendWith(TemporaryFolderExtension.class)
public class TempFolderTest {

    private List<File> createdFiles = new ArrayList<>();
    private TemporaryFolder temporaryFolder;


    private void rememberFile(File file) {
        createdFiles.add(file);
    }

    private void checkFileAndParentHasBeenDeleted(File file) {
        assertFalse(file.exists(), String.format("file %s has not been deleted", file.getAbsolutePath()));
        assertFalse(file.getParentFile().exists(), String.format("folder %s has not been deleted", file.getParentFile().getAbsolutePath()));
    }

    @BeforeEach
    public void setUp() throws IOException {
        assertNotNull(temporaryFolder);

        createdFiles.clear();

        // create a file in set up
        File file = temporaryFolder.newFile();
        rememberFile(file);
    }

    @AfterEach
    public void tearDown() throws Exception {
        for (File file : createdFiles) {
            checkFileAndParentHasBeenDeleted(file);
        }
    }

    @Test
    public void testTemporaryFolderInjection() throws Exception  {
        File file = temporaryFolder.newFile();
        rememberFile(file);
        assertNotNull(file);
        assertTrue(file.isFile());

        File folder = temporaryFolder.newFolder();
        rememberFile(folder);
        assertNotNull(folder);
        assertTrue(folder.isDirectory());
    }

}

Run the test, and...it fails:

org.opentest4j.AssertionFailedError: file C:\Users\Ralf\AppData\Local\Temp\junit6228173188033609420\junit1925268561755970404.tmp has not been deleted
   ...
   at com.github.ralfstuckert.junit.jupiter.TempFolderTest.checkFileAndParentHasBeenDeleted(TempFolderTest.java:32)
   at com.github.ralfstuckert.junit.jupiter.TempFolderTest.tearDown(TempFolderTest.java:55)

Well, no surprise, we are not cleaning up any files yet, so we need to implement that. We want to clean up the files right after the test, before the @AfterEach methods are triggered. The callback to do this is AfterTestExecutionCallback:

public class TemporaryFolderExtension implements AfterTestExecutionCallback, TestInstancePostProcessor {

   ...
   
   @Override
   public void afterTestExecution(TestExtensionContext extensionContext) throws Exception {
      if (extensionContext.getParent().isPresent()) {
         // clean up injected member
         cleanUpTemporaryFolder(extensionContext.getParent().get());
      }
   }

   protected void cleanUpTemporaryFolder(ExtensionContext extensionContext) throws IOException {
      for (TemporaryFolder temporaryFolder : getTemporaryFolders(extensionContext)) {
         temporaryFolder.after();
      }
   }

   protected Iterable<TemporaryFolder> getTemporaryFolders(ExtensionContext extensionContext) {
      Map<Object, TemporaryFolder> map = getStore(extensionContext).get(extensionContext.getTestClass().get(), Map.class);
      if (map == null) {
         return Collections.emptySet();
      }
      return map.values();
   }

}

So we are now called right after the test has been executed, retrieve all TemporaryFolders we saved in the store in order to remember them, and call the after() method which actually cleans up the files. One point to mention is that we are using the context's parent to retrieve the store. That's because we used the store of the (class) ContainerExtensionContext when we created the TemporaryFolders, but in afterTestExecution() we get passed the TestExtensionContext, which is the child context. So we have to climb up the context hierarchy in order to get the right context and the associated store. Let's run the test again...tada, green:




Provide the TemporaryFolder as a Parameter

We want the possibility to provide a TemporaryFolder as a parameter for a test method. We will specify this as a test first:

   @Test
   public void testTemporaryFolderAsParameter(final TemporaryFolder tempFolder) throws Exception {
      assertNotNull(tempFolder);
      assertNotSame(tempFolder, temporaryFolder);

      File file = tempFolder.newFile();
      rememberFile(file);
      assertNotNull(file);
      assertTrue(file.isFile());
   }

Run the test...

org.junit.jupiter.api.extension.ParameterResolutionException: 
No ParameterResolver registered for parameter [com.github.ralfstuckert.junit.jupiter.extension.tempfolder.TemporaryFolder arg0] in executable [public void com.github.ralfstuckert.junit.jupiter.TempFolderTest.testTemporaryFolderAsParameter(com.github.ralfstuckert.junit.jupiter.extension.tempfolder.TemporaryFolder) throws java.lang.Exception].

This failure message already gives us a hint on what we have to do: a ParameterResolver. This is also an extension interface; it allows you to provide parameters for both test constructors and methods, so we will implement that. It consists of the two methods supports() and resolve(). The first one is called to check whether this extension is capable of providing the desired parameter, and the latter is then called to actually create an instance of that parameter:

public class TemporaryFolderExtension implements ParameterResolver, AfterTestExecutionCallback, TestInstancePostProcessor {

   @Override
   public boolean supports(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
      Parameter parameter = parameterContext.getParameter();
      return (extensionContext instanceof TestExtensionContext) && 
            parameter.getType().isAssignableFrom(TemporaryFolder.class);
   }

   @Override
   public Object resolve(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
      TestExtensionContext testExtensionContext = (TestExtensionContext) extensionContext;
      try {
         TemporaryFolder temporaryFolder = createTemporaryFolder(testExtensionContext, 
                                                   testExtensionContext.getTestMethod().get());

         Parameter parameter = parameterContext.getParameter();
         if (parameter.getType().isAssignableFrom(TemporaryFolder.class)) {
            return temporaryFolder;
         }

         throw new ParameterResolutionException("unable to resolve parameter for " + parameterContext);
      } catch (IOException e) {
         throw new ParameterResolutionException("failed to create temp file or folder", e);
      }
   }
}


That's it? No, if you run the test, it is still red, but with a different failure message saying that a file has not been deleted as expected. Well, if you look at the implementation you will see that we are saving the created TemporaryFolder in the store of the testExtensionContext, using the test method as the key. Before, we remembered all instances we injected in the (class) ContainerExtensionContext. So we have to take care of this one in our cleanup code:

   @Override
   public void afterTestExecution(TestExtensionContext extensionContext) throws Exception {
      // clean up test instance
      cleanUpTemporaryFolder(extensionContext);

      if (extensionContext.getParent().isPresent()) {
         // clean up injected member
         cleanUpTemporaryFolder(extensionContext.getParent().get());
      }
   }

Run the test again...green. Of course we could have climbed up to the class container extension context, and used that store for remembering the new TemporaryFolder, but we want to fool around here a bit and try things out ;-)


More Fun with Parameters

By now we get a TemporaryFolder injected and passed as a parameter, and then we are using that one to create files and folders. Why that extra step? I'd like a fresh temporary file or folder directly passed as a parameter. So it would be nice if we could express our desire for a temporary file. Also, we need something to distinguish between files and folders, since they both have the type File...how about this:

   @Test
   public void testTempFolder(@TempFolder final File folder) {
      rememberFile(folder);
      assertNotNull(folder);
      assertTrue(folder.exists());
      assertTrue(folder.isDirectory());
   }

   @Test
   public void testTempFile(@TempFile final File file) {
      rememberFile(file);
      assertNotNull(file);
      assertTrue(file.exists());
      assertTrue(file.isFile());
   }

Very nice: we just mark the parameter with an annotation that describes our needs. And this is easy to accomplish with the parameter resolver. First, we need our parameter annotations:

@Target({ ElementType.TYPE, ElementType.PARAMETER })
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface TempFile {}

@Target({ ElementType.TYPE, ElementType.PARAMETER })
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface TempFolder {}

Also, we just need to extend our existing code a little bit:

   @Override
   public boolean supports(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
      Parameter parameter = parameterContext.getParameter();
      return (extensionContext instanceof TestExtensionContext) && (parameter.getType().isAssignableFrom(TemporaryFolder.class) ||
            (parameter.getType().isAssignableFrom(File.class) && (parameter.isAnnotationPresent(TempFolder.class)
                  || parameter.isAnnotationPresent(TempFile.class))));
   }

   @Override
   public Object resolve(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
      TestExtensionContext testExtensionContext = (TestExtensionContext) extensionContext;
      try {
         TemporaryFolder temporaryFolder = createTemporaryFolder(testExtensionContext, testExtensionContext.getTestMethod().get());

         Parameter parameter = parameterContext.getParameter();
         if (parameter.getType().isAssignableFrom(TemporaryFolder.class)) {
            return temporaryFolder;
         }
         if (parameter.isAnnotationPresent(TempFolder.class)) {
            return temporaryFolder.newFolder();
         }
         if (parameter.isAnnotationPresent(TempFile.class)) {
            return temporaryFolder.newFile();
         }

         throw new ParameterResolutionException("unable to resolve parameter for " + parameterContext);
      } catch (IOException e) {
         throw new ParameterResolutionException("failed to create temp file or folder", e);
      }
   }

Run the tests, aaaand...green. That was easy. Just one more improvement: wouldn't it be useful if we could name our test files? Like this:

   @Test
   public void testTempFile(@TempFile("hihi") final File file) {
      rememberFile(file);
      assertNotNull(file);
      assertTrue(file.exists());
      assertTrue(file.isFile());
      assertEquals("hihi", file.getName());
   }

That's easy. Just add a value to our file annotation, and evaluate it in the resolve() method:

@Target({ ElementType.TYPE, ElementType.PARAMETER })
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface TempFile {

   String value() default "";
}

   @Override
   public Object resolve(ParameterContext parameterContext, ExtensionContext extensionContext) throws ParameterResolutionException {
         ...
         if (parameter.isAnnotationPresent(TempFile.class)) {
            TempFile annotation = parameter.getAnnotation(TempFile.class);
            if (!annotation.value().isEmpty()) {
               return temporaryFolder.newFile(annotation.value());
            }
            return temporaryFolder.newFile();
         }


Annotation Composition

As already explained in the first part, JUnit 5 has support for composed and meta-annotations. This allows you to use JUnit annotations by inheritance (see the chapter on interface default methods). When searching for annotations, JUnit also inspects all superclasses, interfaces and even the annotations themselves, which means you can also use JUnit annotations as meta-annotations on your own annotations. Let's say you have a bunch of tests you would like to benchmark. In order to group them, you tag them with @Tag("benchmark"). The benchmark functionality is provided by your custom BenchmarkExtension:

@Tag("benchmark")
@ExtendWith(BenchmarkExtension.class)
class SearchEngineTest {
   ...

We will now extract both the tag and the extension to our own meta-annotation...

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Tag("benchmark")
@ExtendWith(BenchmarkExtension.class)
public @interface Benchmark {
}

...and use that meta-annotation in our tests instead:

@Benchmark
class SearchEngineTest {
   ...

So this is helpful to represent a bunch of annotations by one descriptive annotation.

But there are other use cases, especially if you are dealing with libraries that themselves support composed and meta-annotations, like e.g. Spring. Spring has support for running integration tests with JUnit, where the application context is created before the test is run. It supports both JUnit 4 (using the SpringJUnit4ClassRunner) and JUnit 5 (using the SpringExtension). So what is our use case? If you are working with Spring persistence, it is very easy to write integration tests that check your custom persistence logic by real interaction with the database. But after a test you need to clean up your test dirt. Some people do so by tracking the objects they have inserted for testing purposes, and deleting them after the tests. So how about writing an extension that actually tracks certain entities created during a test, and automatically deletes them afterwards?


The MongoCleanup Extension

Let's say we have an entity Ticket, a MongoDB based TicketRepository, and an integration test TicketRepositoryIT. What we want to achieve is that we mark our test with the @MongoCleanup annotation, which gets passed one or multiple entity classes to watch. All instances of those entities saved during the test will be automatically deleted after the test has finished:

@MongoCleanup(Ticket.class)
@ExtendWith(SpringExtension.class)
@SpringBootTest
public class TicketRepositoryIT {

   @Autowired
   private TicketRepository repository;

   @Test
   @DisplayName("Test the findByTicketId() method")
   public void testSaveAndFindTicket() throws Exception {
      Ticket ticket1 = new Ticket("1", "blabla");
      repository.save(ticket1);
      Ticket ticket2 = new Ticket("2", "hihi");
      repository.save(ticket2);

      ...
   }

In order to do so, we've got to register a bean in the Spring context that tracks saved instances, and provides some functionality to delete them. Also we need an extension that has access to the Spring context, so it can retrieve that bean and trigger the delete after the test is finished. Beans first:

public class MongoCleaner implements ApplicationListener<AfterSaveEvent> {

   @Override
   public void onApplicationEvent(AfterSaveEvent event) {
      // remember saved entities
      ...
   }

   public void prepare(final List<Class<?>> entityTypes) {
      // prepare entities to watch
      ...
   }

   public Map<Class<?>, Set<String>> cleanup() {
      // delete watched entities
      ...
   }
   ...
}

The concrete implementation is not the point here; if you are interested, have a look at the accompanying GitHub project. The bean is provided to the Spring context using a configuration class:

@Configuration
public class MongoCleanerConfig {

   @Bean
   public MongoCleaner mongoCleaner() {
      return new MongoCleaner();
   }
}

And now the extension: it retrieves the MongoCleaner bean from the Spring context using a static function of the SpringExtension, and calls the prepare() and cleanup() methods before and after each test, respectively:

public class MongoCleanupExtension implements BeforeEachCallback, AfterEachCallback {

   @Override
   public void beforeEach(TestExtensionContext context) throws Exception {
      MongoCleaner mongoCleaner = getMongoCleaner(context);
      List<Class<?>> entityTypesToCleanup = getEntityTypesToCleanup(context);
      mongoCleaner.prepare(entityTypesToCleanup);
   }

   @Override
   public void afterEach(TestExtensionContext context) throws Exception {
      MongoCleaner mongoCleaner = getMongoCleaner(context);
      Map<Class<?>, Set<String>> cleanupResult = mongoCleaner.cleanup();
      cleanupResult.forEach((entityType, ids) -> {
         context.publishReportEntry(String.format("deleted %s entities", entityType.getSimpleName()), ids.toString());
      });
   }

   protected MongoCleaner getMongoCleaner(ExtensionContext context) {
      ApplicationContext applicationContext = SpringExtension.getApplicationContext(context);
      MongoCleaner mongoCleaner = applicationContext.getBean(MongoCleaner.class);
      return mongoCleaner;
   }

   protected List<Class<?>> getEntityTypesToCleanup(ExtensionContext context) {
      MongoCleanup annotation = AnnotationUtils.findAnnotation(context.getTestClass().get(), MongoCleanup.class);
      return Arrays.asList(annotation.value());
   }

}

Well, but how is the bean configuration passed to Spring? And what about our MongoCleanupExtension, which must be provided to JUnit via an @ExtendWith annotation?!? Now that's the use case for a meta-annotation. We will create our own annotation @MongoCleanup, which is itself annotated with the JUnit @ExtendWith AND the Spring @Import annotation:

@Target({ ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@Import(MongoCleanerConfig.class)
@ExtendWith(MongoCleanupExtension.class)
public @interface MongoCleanup {

   /**
    * @return the entity classes to clean up.
    */
   Class<?>[] value();
}

The @ExtendWith(MongoCleanupExtension.class) is processed by JUnit, and hooks our extension into the test lifecycle. The @Import(MongoCleanerConfig.class) is processed by Spring, and adds our MongoCleaner to the application context. So by adding one single annotation to our test class, we add functionality that hooks into two different frameworks. And this is possible since they both support composed and meta-annotations.


Conclusion

JUnit 5 is a complete rewrite, and it looks promising. The separation of the framework into a platform and a test engine SPI decouples the tool providers from the test engines, giving you support for any test engine that implements the SPI. And the engine providers may improve and refactor their code without affecting the tool providers, which was quite a problem in the past. The usage of lambdas lets you write more concise test code, and nested test classes and dynamic tests give you some new flexibility to structure your tests. The runners and rules API has been replaced by the extension API, providing you with a clean mechanism to extend the framework. Be aware that the work on JUnit 5 is still in progress, so some APIs might change before the release in Q3. That was quite a lot more stuff than planned, but I hope you got an idea of what to do with JUnit 5.

Best regards
Ralf
That's what makes this a particularly difficult sort of extraordinary case. The kind I like.
Jupiter Jones - Jupiter Ascending


** Limited Support for Some Old Rules

Since some people will miss rules like TemporaryFolder, the JUnit 5 team added some support for a selection of rules:

  • org.junit.rules.ExternalResource (including org.junit.rules.TemporaryFolder)
  • org.junit.rules.Verifier (including org.junit.rules.ErrorCollector)
  • org.junit.rules.ExpectedException

These are provided by the separate artifact junit-jupiter-migration-support. In order to use those rules, you have to add one of the responsible extensions to your test class, e.g. for Verifier it is the extension VerifierSupport. Or you just annotate your test with @EnableRuleMigrationSupport, which composes all rule support extensions:

@Target({ ElementType.TYPE })
@Retention(RetentionPolicy.RUNTIME)
@ExtendWith(ExternalResourceSupport.class)
@ExtendWith(VerifierSupport.class)
@ExtendWith(ExpectedExceptionSupport.class)
public @interface EnableRuleMigrationSupport {}
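
A sketch of how using the migration support might look, assuming the junit-jupiter-migration-support artifact is on the classpath:

@EnableRuleMigrationSupport
class OldRuleTest {

   // the old JUnit 4 rule, handled by ExternalResourceSupport
   @Rule
   public TemporaryFolder temporaryFolder = new TemporaryFolder();

   @Test
   void testWithOldTemporaryFolderRule() throws IOException {
      File file = temporaryFolder.newFile();
      assertTrue(file.isFile());
   }
}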

Monday, March 20, 2017

JUnit 5 - Part I

More than a decade ago I wrote an introduction to JUnit 4, which was – to be quite honest – just a catch-up with the more advanced TestNG. Now JUnit 5 is at the door, and it is a complete rewrite, so it's worth having a fresh look at it. In the first installment of this two-part article I will describe what's new in the basic usage: new asserts, testing exceptions and timing, parameterizing and structuring tests. JUnit 5 comes with a comprehensive user guide; most examples shown in this part are taken from there. So if you already read that, you may skip this and continue directly with the second part, which describes the new extension API that replaces the old runner and rules mechanisms.

One Jar fits all?

Prior to JUnit 5, all parts of the JUnit framework were packed into one jar: the API to write tests - e.g. assertions - and the API and implementation to actually run tests; for some time, even hamcrest was baked into it. This was a problem, since parts of JUnit could not be easily refactored without affecting the tool providers. JUnit 5 is a complete redesign and has been split into a platform and test engines. The platform provides mechanisms for discovering and launching tests, and serves as a foundation for the tool providers (e.g. IDE manufacturers). The platform also defines an SPI for test engines, where those test engines actually define the API to write a test. JUnit 5 provides two engines: the jupiter engine and the vintage engine. The jupiter engine is used to write new JUnit 5 style tests. The vintage engine can be used to run your old JUnit 4 tests.

So this decouples the tool manufacturers from the test framework providers. This also means that other test frameworks may be adapted to run on the JUnit platform. You may even write your own test engine, and run your tests in any tool that supports the platform. Currently IntelliJ runs JUnit 5 tests out of the box, and other IDEs will follow soon. Also there is support for Gradle and Maven, and a console runner, so you may start using JUnit 5 right now.

Platform, service providers, engines, a whole bunch of JUnit jars...holy chihuahua, what do I need to write JUnit tests? Well, to write tests with JUnit 5, you just need the junit-jupiter-api artifact, which defines the JUnit 5 API. It contains the API to write tests and extensions, and this is what we are gonna use in this article. The API in turn is implemented by the junit-jupiter-engine. The current version is 5.0.0-M3; the final release is scheduled for Q3 2017.
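
If you are using Maven, the dependency for the API artifact looks like this (version as of this writing):

<dependency>
   <groupId>org.junit.jupiter</groupId>
   <artifactId>junit-jupiter-api</artifactId>
   <version>5.0.0-M3</version>
   <scope>test</scope>
</dependency>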

Annotations

The first thing to notice is that the new API lives in a new namespace org.junit.jupiter.api. Most of the annotation names have been kept; the most notable renames are that Before is now BeforeEach, After is now AfterEach, BeforeClass is now BeforeAll, and AfterClass is now AfterAll:

import static org.junit.jupiter.api.Assertions.fail;

import org.junit.jupiter.api.AfterAll; 
import org.junit.jupiter.api.AfterEach; 
import org.junit.jupiter.api.BeforeAll; 
import org.junit.jupiter.api.BeforeEach; 
import org.junit.jupiter.api.Disabled; 
import org.junit.jupiter.api.Test; 

class ClassicTests { 

   @BeforeAll 
   static void setUpAll() { } 

   @BeforeEach 
   void setUp() { } 

   @AfterEach 
   void tearDown() { } 

   @AfterAll 
   static void tearDownAll() { } 

   @Test 
   void succeedingTest() { } 

   @Test 
   void failingTest() { 
      fail("a failing test");
   } 

   @Test 
   @Disabled("for demonstration purposes") 
   void skippedTest() { 
      // not executed 
   } 
}

Asserts

All assertions are still available via static imports, where the enclosing class is now org.junit.jupiter.api.Assertions:

import static org.junit.jupiter.api.Assertions.assertEquals;
...
  assertEquals(2, 2, "the message is now the last argument");

The message is now the last argument, in order to get the test facts to the pole position. Another improvement is the possibility to create the message lazily using a lambda expression. This avoids unnecessary operations, so the test may execute faster:

assertTrue(2 == 2, () -> "This message is created lazily");

Assertions may be grouped using assertAll(). All assertions in the group will be executed whether they fail or not, and the results are collected:

   @Test 
   void groupedAssertions() { 
      // In a grouped assertion all assertions are executed, and any 
      // failures will be reported together. 
      assertAll("address", 
         () -> assertEquals("John", address.getFirstName()), 
         () -> assertEquals("User", address.getLastName()) 
      ); 
   } 

In JUnit 4, exception testing was first done using the expected property of the @Test annotation. This had the drawback that you were unable to inspect the exception and to continue after the exception has been thrown. Therefore the ExpectedException rule was introduced to get around this flaw. Thanks to lambda expressions, there is now an assertion for testing exceptions: you execute a block of code that is expected to throw an exception, and you can continue the test afterwards and inspect that exception. And all that with just some local code:

   @Test 
   void exceptionTesting() { 
      Throwable exception = assertThrows(IllegalArgumentException.class, () -> { 
         throw new IllegalArgumentException("that hurts"); 
      }); 
      assertEquals("that hurts", exception.getMessage()); 
   }

JUnit 5 now also provides some asserts that allow you to time the code under test. This assertion comes in two flavours: the first just measures the time and fails if the given time has elapsed. The second is preemptive, meaning the test fails immediately once the time is up:

   @Test 
   void timeoutExceeded() { 
      // The following assertion fails with an error message similar to: 
      // execution exceeded timeout of 10 ms by 91 ms 
      assertTimeout(ofMillis(10), () -> { 
         // Simulate task that takes more than 10 ms. 
         Thread.sleep(100); 
      });
   } 
   
   @Test 
   void timeoutExceededWithPreemptiveTermination() { 
      // The following assertion fails with an error message similar to: 
      // execution timed out after 10 ms 
      assertTimeoutPreemptively(ofMillis(10), () -> { 
         // Simulate task that takes more than 10 ms. 
         Thread.sleep(100); 
      }); 
   }

Farewell to assertThat()

As already said, JUnit 5 is no longer a one-size-fits-all jar, but has been split up into different responsibilities. The core API now contains just the core API, nothing less, nothing more. Consequently, the baked-in hamcrest support has been thrown out. As a replacement, just use the bare hamcrest functionality, so all that changes are some imports:

import static org.hamcrest.CoreMatchers.equalTo; 
import static org.hamcrest.CoreMatchers.is; 
import static org.hamcrest.MatcherAssert.assertThat;

...
   assertThat(2 + 1, is(equalTo(3)));

Assumptions

Some assumptions are gone in JUnit 5, e.g. assumeThat() (see farewell to assertThat()) and assumeNoException(). But you may now use lambda expressions for the condition and the (lazy) message, and there is also a signature that allows you to execute a block of code if the condition matches:

   @Test
   void assumptionWithLambdaCondition() {
      assumeTrue(() -> "CI".equals(System.getenv("ENV")));
      // remainder of test
   }

   @Test
   void assumptionWithLambdaMessage() {
      assumeTrue("DEV".equals(System.getenv("ENV")),
         () -> "Aborting test: not on developer workstation");
      // remainder of test
   }

   @Test
   void assumptionWithCodeBlock() {
      assumingThat("CI".equals(System.getenv("ENV")),
         () -> {
            // perform these assertions only on the CI server
            assertEquals(2, 2);
         });

      // perform these assertions in all environments
      assertEquals("a string", "a string");
   }

Naming, Disabling and Filtering

Test runners use the test method name to visualize the test, so in order to make the result meaningful to us, we used to write XXXL-long method names. JUnit 5 allows you to add a @DisplayName annotation that may provide a readable description of the test, which may also contain blanks and all other kinds of characters. So you are no longer restricted to the set of characters allowed in Java method names:

   @Test 
   @DisplayName("Custom test name containing spaces") 
   void testWithDisplayNameContainingSpaces() { }

The @Ignore annotation has been renamed to @Disabled. You may still provide a reason. Nothing more to say on that:

   @Disabled("This test will be omitted")
   @Test void testWillBeSkipped() { }

We all use tags to mark content with some metadata in order to find, group or filter the data we are looking for in a certain context. Now tags are supported in JUnit also. You may add one or multiple tags to a test case and/or the complete test class:

   @Test 
   @Tag("acceptance") 
   void testingCalculation() { }

Then you can use these tags to group and filter the tests you want to run. Here is a Maven example:

<plugin> 
   <artifactId>maven-surefire-plugin</artifactId> 
   <version>2.19</version> 
   <configuration> 
      <properties> 
         <includeTags>acceptance</includeTags> 
         <excludeTags>integration, regression</excludeTags> 
      </properties> 
   </configuration> 
   <dependencies> ... </dependencies> 
</plugin>

Nested Tests

Sometimes it makes sense to organize tests in a hierarchy, e.g. in order to reflect the hierarchy of the structure under test. JUnit 5 allows you to organize tests in nested classes by just marking them with the @Nested annotation. The test discovery will organize these classes in nested test containers, which may be visualized reflecting this structure. A core benefit is that you may also organize the before and after methods hierarchically, meaning you may define a test setup for a group of nested tests:

...
import org.junit.jupiter.api.Nested;

@DisplayName("A stack")
class TestingAStackDemo {

   Stack<Object> stack;

   @Test
   @DisplayName("is instantiated with new Stack()")
   void isInstantiatedWithNew() {
      new Stack<>();
   }

   @Nested
   @DisplayName("when new")
   class WhenNew {

      @BeforeEach
      void createNewStack() {
         stack = new Stack<>();
      }

      @Test
      @DisplayName("is empty")
      void isEmpty() {
         assertTrue(stack.isEmpty());
      }

      @Test
      @DisplayName("throws EmptyStackException when popped")
      void throwsExceptionWhenPopped() {
         assertThrows(EmptyStackException.class, () -> stack.pop());
      }

      @Test
      @DisplayName("throws EmptyStackException when peeked")
      void throwsExceptionWhenPeeked() {
         assertThrows(EmptyStackException.class, () -> stack.peek());
      }

      @Nested
      @DisplayName("after pushing an element")
      class AfterPushing {

         String anElement = "an element";

         @BeforeEach
         void pushAnElement() {
            stack.push(anElement);
         }

         @Test
         @DisplayName("it is no longer empty")
         void isNotEmpty() {
            assertFalse(stack.isEmpty());
         }

         @Test
         @DisplayName("returns the element when popped and is empty")
         void returnElementWhenPopped() {
            assertEquals(anElement, stack.pop());
            assertTrue(stack.isEmpty());
         }

         @Test
         @DisplayName("returns the element when peeked but remains not empty")
         void returnElementWhenPeeked() {
            assertEquals(anElement, stack.peek());
            assertFalse(stack.isEmpty());
         }
      }
   }
}

You may nest tests arbitrarily deep, but be aware that this works for non-static classes only. Due to this restriction you cannot use @BeforeAll/@AfterAll methods in nested tests, since they are static. Let's have a look at how IntelliJ represents those nested test containers:



Dynamic Tests

More than once I had the case that I needed to repeat the same test logic for different contexts, e.g. different parameters. There are certain ways to manage that, but they all suck. I always wished I could just generate my tests on the fly for the different contexts. And finally the JUnit gods must have heard my prayers: create a method that returns an Iterator, an Iterable, a Collection, or a Stream of DynamicTests, and mark it as @TestFactory:

import static org.junit.jupiter.api.DynamicTest.dynamicTest;
import org.junit.jupiter.api.DynamicTest;
...
class DynamicTestsDemo {

   @TestFactory
   Iterator<DynamicTest> dynamicTestsFromIterator() {
      return Arrays.asList(
         dynamicTest("1st dynamic test", () -> assertTrue(true)),
         dynamicTest("2nd dynamic test", () -> assertEquals(4, 2 * 2))
      ).iterator();
   }

   @TestFactory
   Collection<DynamicTest> dynamicTestsFromCollection() {
      return Arrays.asList(
            dynamicTest("3rd dynamic test", () -> assertTrue(true)),
            dynamicTest("4th dynamic test", () -> assertEquals(4, 2 * 2))
      );
   }

   @TestFactory
   Stream<DynamicTest> dynamicTestsFromIntStream() {
      // Generates tests for the first 10 even integers.
      return IntStream.iterate(0, n -> n + 2).limit(10).mapToObj(
         n -> dynamicTest("test" + n, () -> assertTrue(n % 2 == 0)));
   }
}



Unlike nested tests, the dynamic tests are - at the time of this writing - not organized in nested containers, but run in the context of the test factory method. This has a significant flaw: the before/after lifecycle is not executed for every dynamic test, but only once for the test factory method. Maybe this will change before the final release.

Parameter Resolver

JUnit 5 provides an API to pass parameters to the test constructor and methods, including the before and after methods. You can define a ParameterResolver, which is responsible for providing parameters of a certain type. Besides passing parameters, this also allows stuff like dependency injection, and builds the foundation for e.g. the Spring and Mockito extensions. Since this is part of the extension mechanism, it will be described in more detail in the second part of this article. For now I will just give you an example using the built-in TestInfo parameter resolver, which will provide you some data on the current test method. Just add TestInfo as a parameter in your test, and JUnit will automatically inject it:

   @Test
   @DisplayName("my wonderful test")
   @Tag("this is my tag")
   void test(TestInfo testInfo) {
      assertEquals("my wonderful test", testInfo.getDisplayName());
      assertTrue(testInfo.getTags().contains("this is my tag"));
   }
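
Another built-in resolver provides a TestReporter, which lets you publish additional key-value data about the test run:

   @Test
   void reportingTest(TestReporter testReporter) {
      // the entry is published to the registered test execution listeners
      testReporter.publishEntry("a key", "a value");
   }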

Interface Default Methods

Since JUnit 5 has support for composed annotations, you may also use test annotations on interfaces...and also on interface default methods. This allows you to write interface contracts as poor man's mixins instead of abstract base classes. Here is an example for the Comparable interface:

public interface ComparableContract<T extends Comparable<T>> {

   T createValue();
   
   T createSmallerValue();

   @Test
   default void returnsZeroWhenComparedToItself() {
      T value = createValue();
      assertEquals(0, value.compareTo(value));
   }

   @Test
   default void returnsPositiveNumberComparedToSmallerValue() {
      T value = createValue();
      T smallerValue = createSmallerValue();
      assertTrue(value.compareTo(smallerValue) > 0);
   }

   @Test
   default void returnsNegativeNumberComparedToLargerValue() {
      T value = createValue();
      T smallerValue = createSmallerValue();
      assertTrue(smallerValue.compareTo(value) < 0);
   }

}

If we now write a test for a class that implements the Comparable interface, we can inherit all tests provided by ComparableContract. All we have to do is implement the two methods that provide appropriate values of our class: createValue() and createSmallerValue().

public class StringTest implements ComparableContract<String> {

   @Override
   public String createValue() {
      return "foo";
   }

   @Override
   public String createSmallerValue() {
      return "bar"; // "bar" < "foo"
   }

    @Test
    public void someStringSpecificTest() { }

    @Test
    public void anotherStringSpecificTest() { }
}

If we now run the StringTest, both our specific tests and the tests inherited from the ComparableContract are executed.



Ok, that's enough for today. Next week I will explain the JUnit 5 extension mechanism.

Best regards
Ralf
Perhaps a random drawing might be the most impartial way to figure things out.
Jupiter Jones - Jupiter Ascending

Sunday, August 14, 2016

Hyperlinks with PDFBox-Layout

One thing that made HTML so successful is the hyperlink. Keywords are marked up, and just by clicking on them, you are redirected to the referenced position in the document, or even to some totally different document. So it makes perfect sense to use hyperlinks in PDF documents also, whether you are linking the entries of a TOC to the corresponding chapters, or a piece of information to a corresponding URL in Wikipedia. Linking text enriches the content, and makes it more usable.

Luckily PDF (and PDFBox) supports hyperlinks, so why not use them? Because it's a pain. The PDF standard has no notion of marked-up text, but the more general and abstract idea of annotated areas: you can describe some area in the document by coordinates, and add some metadata telling the PDF reader what to do with that area. That's quite powerful. You can do highlighting and all kinds of actions with that, totally independent of the content, just by describing an area. And that's also the catch: it is totally independent of the content. If you are used to the notion of marked-up text, this feels a bit unhandy. Let me give you an example of how to do a link:

PDDocument document = new PDDocument();

PDPage page = new PDPage();
document.addPage(page);
float upperRightX = page.getMediaBox().getUpperRightX();
float upperRightY = page.getMediaBox().getUpperRightY();

PDFont font = PDType1Font.HELVETICA;
PDPageContentStream contentStream = new PDPageContentStream(document, page);
contentStream.beginText();
contentStream.setFont(font, 18);
contentStream.moveTextPositionByAmount( 0, upperRightY-20);
contentStream.drawString("This is a link to PDFBox");
contentStream.endText();
contentStream.close();

// create a link annotation
PDAnnotationLink txtLink = new PDAnnotationLink();

// add an underline
PDBorderStyleDictionary underline = new PDBorderStyleDictionary();
underline.setStyle(PDBorderStyleDictionary.STYLE_UNDERLINE);
txtLink.setBorderStyle(underline);

// set up the markup area
float offset = (font.getStringWidth("This is a link to ") / 1000) * 18;
float textWidth = (font.getStringWidth("PDFBox") / 1000) * 18;
PDRectangle position = new PDRectangle();
position.setLowerLeftX(offset);
position.setLowerLeftY(upperRightY - 24f);
position.setUpperRightX(offset + textWidth);
position.setUpperRightY(upperRightY -4);
txtLink.setRectangle(position);

// add an action
PDActionURI action = new PDActionURI();
action.setURI("http://www.pdfbox.org");
txtLink.setAction(action);

// and that's all ;-)
page.getAnnotations().add(txtLink);

document.save("link.pdf");



Ouch, this ain't no fun. Ok, I see you have all freedom to markup whatever you want. And you can do real fancy stuff with that, like highlight things, adding tooltips and a lot more. But if you just wanna add a hyperlink... hmmm. Let's do that again with PDFBox-Layout:

Document document = new Document();

Paragraph paragraph = new Paragraph();
paragraph.addText("This is a link to ", 18f,
PDType1Font.HELVETICA);
 
// create a hyperlink annotation
HyperlinkAnnotation hyperlink = 
   new HyperlinkAnnotation("http://www.pdfbox.org", LinkStyle.ul);

// create styled text annotated with the hyperlink
AnnotatedStyledText styledText = 
   new AnnotatedStyledText("PDFBox", 18f,
                           PDType1Font.HELVETICA, Color.black,
                           Collections.singleton(hyperlink));
paragraph.add(styledText);
document.add(paragraph);

final OutputStream outputStream = new FileOutputStream("link.pdf");
document.save(outputStream);

This performs exactly the same thing. But you just say what you want: add a hyperlink to the given text. And all this odd area-marking boilerplate code is handled by PDFBox-Layout. And we can do even better using markup:

Document document = new Document();

Paragraph paragraph = new Paragraph();
paragraph.addMarkup(
   "This is a link to {link[http://www.pdfbox.org]}PDFBox{link}", 
   18f, BaseFont.Helvetica);
document.add(paragraph);

final OutputStream outputStream = new FileOutputStream("link.pdf");
document.save(outputStream);

We just mark up the text with the hyperlink URL and that's it :-) So now we can do external URLs; what about links into the document itself? Let's take the example Links.java. In order to link to some point in the document, we have to add an anchor at this position. After that we can link to that anchor using the anchor's name:

paragraph0.addMarkup(
   "And here comes a link to an internal anchor name {link[#hello]}hello{link}.\n\n", 
   11, BaseFont.Times);

...
paragraph4.addMarkup(
   "\n\n{anchor:hello}Here{anchor} comes the internal anchor named *hello*\n\n", 
   15, BaseFont.Courier);

So we define an anchor "{anchor:hello}Here{anchor}" somewhere in the document with the logical name `hello`. This anchor name is used in the link, prefixed with a hash to indicate an internal link: "{link[#hello]}hello{link}". See the example PDF links.pdf for the results. And that's all there is to say about links. Hopefully easy to use, with the dirty work done behind the scenes by PDFBox-Layout.

Regards
Ralf
A chain is no stronger than its weakest link,
and life is after all a chain.
William James

Monday, June 13, 2016

Creating Lists with PDFBox-Layout

The last article gave you a brief introduction on what you can do with PDFBox-Layout. The new release 0.6.0 added support for indentation and lists, and that’s what this article is about.

Indentation

Indentation is often used to structure content, and it is also the base for creating lists. Let's start with a simple example using the Indent element:

paragraph.addMarkup(
    "This is an example for the new indent feature. Let's do some empty space indentation:\n",
        11, BaseFont.Times);
paragraph.add(new Indent(50, SpaceUnit.pt));
paragraph.addMarkup("Here we go indented.\n", 11, BaseFont.Times);
paragraph.addMarkup(
    "The Indentation holds for the rest of the paragraph, or... \n",
    11, BaseFont.Times);
paragraph.add(new Indent(70, SpaceUnit.pt));
paragraph.addMarkup("any new indent comes.\n", 11, BaseFont.Times);

So what do we do here: we add an indent of 50pt width. This indent will be automatically inserted after each newline until the end of the paragraph…or until a new indent is inserted. That's what we do here: we insert an indent of 70pt:

indention

An indent may also have a label (after all, this is the foundation for lists). By default the label is right aligned, as this makes sense for lists. But you may specify an alignment to fit your needs:

paragraph = new Paragraph();
paragraph
    .addMarkup(
        "New paragraph, now indentation is gone. But we can indent with a label also:\n",
        11, BaseFont.Times);
paragraph.add(new Indent("This is some label", 100, SpaceUnit.pt, 11,
    PDType1Font.TIMES_BOLD));
paragraph.addMarkup("Here we go indented.\n", 11, BaseFont.Times);
paragraph
    .addMarkup(
        "And again, the Indentation holds for the rest of the paragraph, or any new indent comes.\nLabels can be aligned:\n",
        11, BaseFont.Times);
paragraph.add(new Indent("Left", 100, SpaceUnit.pt, 11,
    PDType1Font.TIMES_BOLD, Alignment.Left));
paragraph.addMarkup("Indent with label aligned to the left.\n", 11,
    BaseFont.Times);
paragraph.add(new Indent("Center", 100, SpaceUnit.pt, 11,
    PDType1Font.TIMES_BOLD, Alignment.Center));
paragraph.addMarkup("Indent with label aligned to the center.\n", 11,
    BaseFont.Times);
paragraph.add(new Indent("Right", 100, SpaceUnit.pt, 11,
    PDType1Font.TIMES_BOLD, Alignment.Right));
paragraph.addMarkup("Indent with label aligned to the right.\n", 11,
    BaseFont.Times);
document.add(paragraph);

indentionWithLabel

Lists

As already said, indentations were introduced in order to support lists, so let's build one. It's nothing but an indentation with a label, where the label is a bullet character:

paragraph = new Paragraph();
paragraph.addMarkup(
    "So, what can you do with that? How about lists:\n", 11,
    BaseFont.Times);
paragraph.add(new Indent(bulletOdd, 4, SpaceUnit.em, 11,
    PDType1Font.TIMES_BOLD, Alignment.Right));
paragraph.addMarkup("This is a list item\n", 11, BaseFont.Times);
paragraph.add(new Indent(bulletOdd, 4, SpaceUnit.em, 11,
    PDType1Font.TIMES_BOLD, Alignment.Right));
paragraph.addMarkup("Another list item\n", 11, BaseFont.Times);
paragraph.add(new Indent(bulletEven, 8, SpaceUnit.em, 11,
    PDType1Font.TIMES_BOLD, Alignment.Right));
paragraph.addMarkup("Sub list item\n", 11, BaseFont.Times);
paragraph.add(new Indent(bulletOdd, 4, SpaceUnit.em, 11,
    PDType1Font.TIMES_BOLD, Alignment.Right));
paragraph.addMarkup("And yet another one\n", 11, BaseFont.Times);

list

Ordered Lists with Enumerators

Ordered lists are quite helpful to structure and reference text. You already have all the ingredients to build an ordered list: just use increasing numbers as labels, and that's it. What would I need an API for to do that?!? How about a list with roman numbers:

RomanEnumerator e1 = new RomanEnumerator();
LowerCaseAlphabeticEnumerator e2 = new LowerCaseAlphabeticEnumerator();
paragraph = new Paragraph();
paragraph.addMarkup("Also available with indents: Enumerators:\n", 11,
    BaseFont.Times);
paragraph.add(new Indent(e1.next() + ". ", 4, SpaceUnit.em, 11,
    PDType1Font.TIMES_BOLD, Alignment.Right));
paragraph.addMarkup("First item\n", 11, BaseFont.Times);
paragraph.add(new Indent(e1.next() + ". ", 4, SpaceUnit.em, 11,
    PDType1Font.TIMES_BOLD, Alignment.Right));
paragraph.addMarkup("Second item\n", 11, BaseFont.Times);
paragraph.add(new Indent(e2.next() + ") ", 8, SpaceUnit.em, 11,
    PDType1Font.TIMES_BOLD, Alignment.Right));
paragraph.addMarkup("A sub item\n", 11, BaseFont.Times);
paragraph.add(new Indent(e2.next() + ") ", 8, SpaceUnit.em, 11,
    PDType1Font.TIMES_BOLD, Alignment.Right));
paragraph.addMarkup("Another sub item\n", 11, BaseFont.Times);
paragraph.add(new Indent(e1.next() + ". ", 4, SpaceUnit.em, 11,
    PDType1Font.TIMES_BOLD, Alignment.Right));
paragraph.addMarkup("Third item\n", 11, BaseFont.Times);
document.add(paragraph);

enumerators

Enumerators ease the task of generating ordered lists programmatically (and the markup API could not live without 'em ;-). Currently the following enumerators are supported:
  • ArabicEnumerator (1, 2, 3, 4...)
  • RomanEnumerator (I, II, III, IV...)
  • LowerCaseRomanEnumerator (i, ii, iii, iv...)
  • AlphabeticEnumerator (A, B, C, D...)
  • LowerCaseAlphabeticEnumerator (a, b, c, d...)

Markup

The markup API eases the burden of creating these features programmatically, and for sure you can do indentation and lists with markup. We're gonna start with some simple indentation. You start the indentation with -- at the beginning of a new line:
"--At vero eos et accusam\n"

Indents run until the end of a paragraph, or until another indentation starts. But you can also explicitly end the indentation with -!:
"-!And end the indentation.\n"
indentation-markup

The default indent is 4 characters, but you can customize this by specifying the desired indentation in pt or em, so the markup --{50pt} will give you an indent of 50pt. Be aware that this size is per indent level, so if you prefix the markup with a space (one extra level), you will get 100pt in this case.
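A minimal sketch wiring these markup fragments into a paragraph (the text content is just a placeholder):

paragraph = new Paragraph();
paragraph.addMarkup(
    "--{50pt}This line is indented by 50pt.\n"
    + " --{50pt}This nested line is indented by 100pt.\n"
    + "-!Back at the left margin.\n", 11, BaseFont.Times);
document.add(paragraph);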

To create lists with bullets, use the -+ markup:
"-+This is a list item\n"
"-+Another list item\n"

You can specify different levels of indentation just by prefixing the indent markup with one or multiple spaces:
" -+A sub list item\n"

You can customize the indent size and the bullet character (after all, this is the indentation label). Let's do an indent of 8 characters and use >> as the bullet: -+{>>:8em}
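Here is a rough sketch of that custom bullet in context (the item text is just a placeholder):

paragraph.addMarkup(
    "-+This is a list item\n"
    + "-+{>>:8em}An item with an 8em indent and a >> bullet\n",
    11, BaseFont.Times);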
list-markup

Enumerated lists are also supported, just use the -# markup:

"-#This is a list item\n"
"-#Another list item\n"
" -#{a:}A sub list with lower case letter\n"
"-#And yet another one\n\n"

enumerators-markup
Again, you can customize the indent size and also the enumeration type. The default type is Arabic numerals, but let's use the Roman enumerator: -#{I:6em}. The following enumerator types are built in:
  • 1 arabic
  • I roman, upper case
  • i roman, lower case
  • A alphabetic, upper case
  • a alphabetic, lower case
So let's use some custom separators here:

"-#This is a list item\n"
"-!And you can customize it:\n"
"-#{I ->:5}This is a list item\n"
"-#{I ->:5}Another list item\n"
" -#{a ~:30pt}A sub list item\n"
"-#{I ->:5}And yet another one\n\n";
custom-lists-markup

But you may also write your own enumerator and use it in markup; see the class EnumeratorFactory for more details on that.
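Just to sketch the idea: any class that hands out label strings can drive a list programmatically, like the built-in enumerators above. The BinaryEnumerator below is hypothetical; to use it in markup, you would additionally have to register it with the EnumeratorFactory.

// Hypothetical enumerator producing binary numbers; all it has to do is
// deliver the next label string, like the built-in enumerators above.
class BinaryEnumerator {
   private int count = 0;

   public String next() {
      return Integer.toBinaryString(++count); // 1, 10, 11, 100...
   }
}

BinaryEnumerator bin = new BinaryEnumerator();
paragraph.add(new Indent(bin.next() + ". ", 4, SpaceUnit.em, 11,
    PDType1Font.TIMES_BOLD, Alignment.Right));
paragraph.addMarkup("First item\n", 11, BaseFont.Times);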

Regards
Ralf
The human animal differs from the lesser primates in his passion for lists.
H. Allen Smith

Sunday, April 17, 2016

PDF text layout made easy with PDFBox-Layout

More than a decade ago I was using iText to create PDF documents from scratch. It was quite easy to use, and did all the stuff I needed like organizing text in paragraphs, performing word wrapping and marking up text with bold and italic. But once upon a time Bruno Lowagie - the developer of iText - switched from open source to a proprietary license for reasons I do understand.

So when I now had to do some PDF processing for a new project, I was looking for an alternative. PDFBox is definitely the best open source choice, since it is quite mature. But when I was searching on how to do layout, I found a lot of people looking for exactly those features, and the common answer was: you have to do it on your own! Say what? Ouch. There must be someone out there who already wrote that stuff... Sure there is, but Google did not find him. So I started to write some simple word wrapping. And some simple pagination. And some simple markup for easy highlighting with bold and italic. Don't get me wrong: the stuff I wrote is neither sophisticated nor complete. It is drop dead simple, and does the things I need. But just in case someone out there may find it useful, I made it public under the MIT license on GitHub.

column


PDFBox-Layout

PDFBox-Layout acts as a layer on top of PDFBox that performs some basic layout operations for you:
  • word wrapping
  • text alignment
  • paragraphs
  • pagination
The API actually has two parts: the (low-level) text layout API, and the document layout API.

The Text Layout API

The text layout API is intended for direct usage with the low-level PDFBox API. You may organize text into blocks, do word wrapping and alignment, and highlight text with markup. This means: most features described in the remainder of this article may be used directly with PDFBox, without the document layout API. For more details on this API see the Text API Wiki page. What the document layout API gives you as a surplus is paragraph layout and pagination.
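To give a rough idea of the direct usage, here is a sketch; the TextFlowUtil/TextFlow names and signatures are assumptions from the Text API and may differ in detail, so check the wiki page for the real ones:

// create a plain PDFBox document and page
PDDocument pdfDocument = new PDDocument();
PDPage page = new PDPage(PDRectangle.A4);
pdfDocument.addPage(page);

PDPageContentStream contentStream =
    new PDPageContentStream(pdfDocument, page);
// build a text block from markup and wrap it to a width of 200
TextFlow text = TextFlowUtil.createTextFlowFromMarkup(
    "Hello *bold* _world_", 11, BaseFont.Helvetica);
text.setMaxWidth(200);
// draw the block at an absolute position on the page
text.drawText(contentStream, new Position(50, 700), Alignment.Left);
contentStream.close();

pdfDocument.save("textapi.pdf");
pdfDocument.close();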

The Document Layout API

The ingredients of the document layout API are documents, paragraphs and layouts. It is intended to let you easily create complete PDF documents from scratch, and performs things like word wrapping, paragraph layout and pagination for you.
Let's start with a simple example:

Document document = new Document(Constants.A4, 40, 60, 40, 60);

Paragraph paragraph = new Paragraph();
paragraph.addText("Hello Document", 20, PDType1Font.HELVETICA);
document.add(paragraph);

final OutputStream outputStream = 
    new FileOutputStream("hellodoc.pdf");
document.save(outputStream);

We start by creating a Document, which acts as a container for elements such as paragraphs. You specify the media box - A4 in this case - and the left, right, top and bottom margins of the document. The margins are applied to each page. After that we create a paragraph, which is a container for text fragments. We add the text "Hello Document" with the font HELVETICA and size 20 to the paragraph. That's it, let's save it to a file. The result looks like this:

hello

Word Wrapping

As already said, you can also perform word wrapping with PDFBox-Layout. Just use the method setMaxWidth() to set a maximum width, and the text container will do its best to not exceed the maximum width by word wrapping the text:

Paragraph paragraph = new Paragraph();
paragraph.addText(
    "This is some slightly longer text wrapped to a width of 100.", 
    11, PDType1Font.HELVETICA);
paragraph.setMaxWidth(100);
document.add(paragraph);

wrapped1

If you do not specify an explicit max width, the document's media box and the margins dictate the max width for a paragraph. This means: you may just write text, and more text, and even more text without the need for any line breaks, and the layout will do the word wrapping in order to fit the paragraph into the page boundaries.
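A minimal sketch of that implicit wrapping (someLongText stands for any long string):

Paragraph paragraph = new Paragraph();
// no setMaxWidth(): the media box minus the margins defines the width
paragraph.addText(someLongText, 11, PDType1Font.HELVETICA);
document.add(paragraph);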

Text-Alignment

As you might have already seen, you can specify a text alignment on the paragraph:
Paragraph paragraph = new Paragraph();
paragraph.addText(
    "This is some slightly longer text wrapped to a width of 100.", 
    11, PDType1Font.HELVETICA);
paragraph.setMaxWidth(100);
paragraph.setAlignment(Alignment.Right);
document.add(paragraph);

wrapped-right

The alignment tells the draw method what to do with extra horizontal space, where the extra space is the difference between the width of the text container and the width of the line. This means that the alignment only has a visible effect if there are multiple lines. Currently, Left, Center and Right alignment is supported.

Layout

The paragraphs in a document are sized and positioned using a layout strategy. By default, paragraphs are stacked vertically by the VerticalLayout. If a paragraph's width is smaller than the page width, you can specify an alignment with a layout hint:

document.add(paragraph, 
    new VerticalLayoutHint(Alignment.Left, 10, 10, 20, 0));

You can combine text and paragraph alignment any way you want:
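For example, a sketch of a right-positioned paragraph whose text is centered, reusing the API shown above:

paragraph.setAlignment(Alignment.Center);
document.add(paragraph,
    new VerticalLayoutHint(Alignment.Right, 10, 10, 20, 0));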

aligned

An alternative to the vertical layout is the ColumnLayout, which allows you to arrange the paragraphs in multiple columns on a page.

Document document = 
    new Document(Constants.A4, 40, 60, 40, 60);
 
Paragraph title = new Paragraph();
title.addMarkup("*This Text is organized in Colums*", 
    20, BaseFont.Times);
document.add(title, VerticalLayoutHint.CENTER);
document.add(new VerticalSpacer(5));

// use column layout from now on
document.add(new ColumnLayout(2, 10));

Paragraph paragraph1 = new Paragraph();
paragraph1.addMarkup(text1, 11, BaseFont.Times);
document.add(paragraph1);
...

column

But you may also set an absolute position on an element. If this is set, the layout will ignore this element, and render it directly at the given position:

Paragraph footer = new Paragraph();
footer.addMarkup("This is some example footer", 6, BaseFont.Times);
footer.setAbsolutePosition(new Position(20, 20));
document.add(footer);

Pagination

As you add more and more paragraphs to the document, the layout automatically creates a new page if the content does not fit completely on the current page. Elements have different strategies for how they are divided across multiple pages. Text is simply split by lines. Images may decide to either split, or - if they fit completely on the next page - to introduce some vertical space in order to be drawn on the next page. In any case, you can always insert a NEW_PAGE element to explicitly trigger a new page.
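A minimal sketch of an explicit page break, assuming the element is exposed as the constant ControlElement.NEWPAGE (paragraph1 and paragraph2 stand for any two paragraphs):

document.add(paragraph1);
// force a page break; paragraph2 starts on a fresh page
document.add(ControlElement.NEWPAGE);
document.add(paragraph2);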

Markup

Often you want to use just some basic text styling: a bold font here, some words emphasized with italic there, and that's it. Let's say we want to use different font styles for the following sentence:

"Markup supports bold, italic, and even mixed markup."

If you want to do that using the standard API, it would look like this:

Paragraph paragraph = new Paragraph();
paragraph.addText("Markup supports ", 11, PDType1Font.HELVETICA);
paragraph.addText("bold", 11, PDType1Font.HELVETICA_BOLD);
paragraph.addText(", ", 11, PDType1Font.HELVETICA);
paragraph.addText("italic", 11, PDType1Font.HELVETICA_OBLIQUE);
paragraph.addText(", and ", 11, PDType1Font.HELVETICA);
paragraph.addText("even ", 11, PDType1Font.HELVETICA_BOLD);
paragraph.addText("mixed", 11, PDType1Font.HELVETICA_BOLD_OBLIQUE);
paragraph.addText(" markup", 11, PDType1Font.HELVETICA_OBLIQUE);
paragraph.addText(".\n", 11, PDType1Font.HELVETICA);
document.add(paragraph);

That's annoying, isn't it? That's what the markup API is intended for. Use * to mark bold content, and _ for italic. Let's do the same example with markup:

Paragraph paragraph = new Paragraph();
paragraph.addMarkup(
    "Markup supports *bold*, _italic_, and *even _mixed* markup_.\n", 
    11, 
    PDType1Font.HELVETICA, 
    PDType1Font.HELVETICA_BOLD,
    PDType1Font.HELVETICA_OBLIQUE,
    PDType1Font.HELVETICA_BOLD_OBLIQUE);
document.add(paragraph);

To make things even easier, you may specify only the font family instead:

paragraph = new Paragraph();
paragraph.addMarkup(
    "Markup supports *bold*, _italic_, and *even _mixed* markup_.\n",
    11, BaseFont.Helvetica);

markup


That’s it

This was a short overview of what PDFBox-Layout can do for you. Have a look at the Wiki and the examples for more information and some visual impressions.

Monday, January 4, 2016

Avoid Vertical Limits in Microservice Architectures

The microservice architecture allows us to partition an application into tiny sub-applications, which are easy to maintain and to deploy. This pattern is already widely adopted to implement backend systems. But the frontend is usually still one large application, at least when it comes to deployment. This article describes some thoughts on how to address this problem.
The microservice architecture is en vogue, everybody seems to know all about it, and feels obliged to spread the truth. Including me ;-) But honestly, we are just about to learn how to cope with this kind of architecture. I guess we are like a kid that just managed to make some first steps, when we suddenly try to start running… that’s usually the moment when you fall over your own feet. Microservices are no free lunch, they definitely have their price (that’s the part we have already learned). For developers it feels uncomfortable, since instead of developing one application, they have to deal with dozens or hundreds. Usually the microservice believers are the people who drive and maintain an application over a long period of time; those poor chaps that know the pain of change. And that is where their trust – or better: hope – in microservices comes from.

So what is this article about? I’m currently working on a web project for a customer, where we are using Spring Boot to create a microservice-based backend. We heavily use the Job DSL to drive our CI pipeline, and also all kinds of configuration and tooling. And the frontend is a single-page application built with AngularJS. So far, so good, but there is a catch. One goal of the microservice idea is to have independent development teams that drive features within their own development cycle. In the backend this is feasible due to the microservice architecture. But the frontend is still one large application. Angular allows you to partition the application into modules, but after all it is assembled into one application, so this is a single point of integration. Let me explain that point using a practical example. It is a stupid-simple web shop, stripped down to the bones. It is just a catalog of items, which we can put into a shopping cart:



I have prepared the example as a Spring Boot based application in the backend, with some AngularJS in the frontend. You will find the code on GitHub; the version we start with is on the branch one-ui. All projects have Maven and Gradle builds, and the readme will provide you with enough information to run the application.
Just a note before we start: I’m not a web developer. In the last 20 years I have built – besides backend systems – various UIs in Motif, Qt, Java AWT, Swing and SWT. But I have never done web frontends; there has never been the chance or need for that. So please be merciful if my Angular and JavaScript looks rather… er, surprising to you ;-)
The architecture of our stupid simple web shop looks like this:

initial2

The browser accesses the application via a router, which limits access to the backend, helps with the CORS problem, and may also perform some security checks. The web server hosts the AngularJS application and the assets. The AngularJS application itself communicates with the API gateway, which encapsulates the microservices in the backend and provides services and data optimized for the UI. The API gateway pattern is often used so as not to bloat the microservices with functionality that does not belong there. We will see its advantages later on. The gateway talks to the backend microservices, which then perform their dedicated functionality using their own isolated databases (the databases have been omitted for simplicity in the example code).

So far nothing new, so what’s the point? Well, what is that microservice idea all about? Ease of change. Make small changes easy to apply and deploy. Minimize the risk of change by having to deploy only a small part of the application instead of a large one-size-fits-all application. In the backend this is now widely adopted. The microservices are tiny, and easy to maintain. As long as their interfaces to other services remain compatible, we may replace them with a new version in a running production environment without affecting the others. And – if done right – without downtime. But what about the frontend? The frontend is mostly still one large application. If we make a small change to the UI, we have to build and redeploy the whole thing. Even worse, since it is one application, bug fixes and features are often developed in parallel by large teams, which makes it difficult to release minor changes separately. After all, that’s what the microservice story is all about: dividing your application into small sub-applications which can be developed and deployed separately, giving each its own lifecycle. And consequently, this pattern should be applied to the complete application, from the database via the services to the frontend. But currently, our microservice architecture thinking ends at the UI, and that’s what I call a vertical limit.

We investigated how other people are dealing with this situation, and – no wonder – there were a lot of complaints about the same problem. But surprisingly, most advice on how to address this was to use the API gateway… er, but this does not solve the problem?!? So let’s think about it: we are dividing our backend into fine-grained microservices in order to cope with changes. Consequently, we should do exactly the same in the UI. We could partition our UI into components, where features make up the logical boundaries. Let’s take our silly little web shop example to exercise this. We separate this UI into at least two logical components: the catalog showing all available items, and the shopping cart. Angular provides a mechanism to build components: directives! The cart and the catalog are already encapsulated in Angular directives (what a coincidence ;-). So what if we put each of those parts on its own web server? Let’s see what our architecture would look like:

multipleUI_oneGateway2

Hmm, obviously the API gateway is still a common denominator. The gateway’s job is to abstract from the backend microservices and to serve the UI in the best-suited way. So consequently, we should divide the gateway as well. But since it is so closely related to the UI, we are packaging ‘em both into one service in our example. Let’s do so; you will find the code for this version on the branch ui-components. Now the architecture looks like this:

componentsDashed

Hey wait! I’ve had a look at your UI code. It’s not just the directives, there are also Angular services. And the add-button in the catalog directive is effectively calling addItem() on the cart service. That’s quite true. But is that a problem? In our microservice backend, services are calling other services as well. This is not a problem as long as the services’ APIs don’t change, or at least remain compatible. The same holds on the JavaScript side. As long as our API doesn’t change (remains compatible), new service rollouts and internal changes are not a problem. But we have to design these interfaces between components wisely: (Angular) services may be used by other components, so we have to be careful with changes. Another way of communication between components is broadcasting events. Actually the cart component is firing an event every time the cart changes, in order to give other components a chance to react on that. So we have to be cautious with changes to events as well. New or additional data is not a problem; removing or renaming existing properties is. So we simply put all functionality dealing with the cart on the cart web server, and the catalog stuff on the catalog web server… including their dedicated directives, services, assets and whatnot. Our communication looks like this:

communication

Big words and bloated architecture. It ain’t worth it! You think so? Let’s make a small change to our application and examine its impact. Our customer does not like our shopping cart: “It just shows each item’s article number and its amount. That’s not quite helpful. The user needs the article’s name… its price… and the sum of all items in the cart.”


Well, yeah, she’s right. So let’s fix that. But the cart microservice does not provide more information. Article names and prices are the catalog service’s domain. So the cart service could call the catalog service and enrich its data? But that’s not the cart’s domain; it should not have to know about that. No, providing the data in a way appropriate for the UI is the job of the API gateway (finally it pays off ;-). So instead of delegating all calls to the cart service, the cart’s API gateway merges the data from the cart with the catalog data to provide cart items with article names and prices:

cartGateway
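Just to illustrate the idea, here is a hypothetical sketch of that merge in a Spring controller; the DTOs, URLs and the wiring are made up for illustration, the real code is on the master branch:

@RestController
public class CartGatewayController {

   private final RestTemplate restTemplate = new RestTemplate();

   @RequestMapping("/cart/items")
   public List<CartItemView> cartItems() {
      // fetch the plain cart data from the cart service
      CartItem[] items = restTemplate.getForObject(
          "http://cart-service/items", CartItem[].class);
      List<CartItemView> result = new ArrayList<>();
      for (CartItem item : items) {
         // enrich each item with name and price from the catalog service
         Article article = restTemplate.getForObject(
             "http://catalog-service/articles/" + item.getArticleId(),
             Article.class);
         result.add(new CartItemView(item.getAmount(),
             article.getName(), article.getPrice()));
      }
      return result;
   }
}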

Now we simply adapt the UI code of the cart in order to use and visualize this additional data. You will find the source code of this final version on the master branch. All changes we made are isolated in the cart UI and its API gateway, so we only have to redeploy this single server. Let’s do so. And if we now reload our application, tada:


So, we made a change to the shopping cart without affecting the rest of the application. Without redeploying the complete UI. All the advantages we take the microservice burden on our shoulders for now pay off in the frontend as well. Just treat your frontend as a union of distinct features. In agile development it is all about features and stories, and now we are able to develop and release features in their own lifecycle. Hey wait, the index.html still assembles the cart and the catalog into a single page. So there is still a single point of integration! Yep, you’re right. This is still one UI, but we have componentized it. Once a component is referenced in a page, that’s it. Any further changes to the internals of that component do not affect the hosting page. Our microservices are also referenced by other parties, so this is quite similar.

The point is to avoid vertical limits in this kind of architecture. We have to separate our application into small sub-applications in all layers, namely persistence, service backend and frontend. In the backend this cut is often domain-driven, and in the frontend we can use features to partition our application. For sure there are better ways and technologies to implement that, but I guess this is a step in the right direction.
We learn from failure, not from success!
Bram Stoker

1) This was a poor attempt to avoid the word monolith, which is – at the time of writing – mistakenly tainted with a bad smell.