What's New Here?


Validating JAX-RS resource data with Bean Validation in Java EE 7 and WildFly

I have already approached this subject twice in the past. First, in my post Integrating Bean Validation with JAX-RS in Java EE 6, describing how to use Bean Validation with JAX-RS in JBoss AS 7, even before this was defined in the Java EE Platform Specification. And later, in an article written for JAX Magazine and subsequently republished on JAXenter, using the new standard way defined in Java EE 7 with the GlassFish 4 server (the first Java EE 7 certified server). Now that WildFly 8, previously known as JBoss Application Server, has finally reached its final version and joined the Java EE 7 certified servers club, it's time for a new post highlighting the specifics of, and differences between, these two application servers, GlassFish 4 and WildFly 8.

Specs and APIs

Java EE 7 is the long-awaited major overhaul of Java EE 6. With each release of Java EE, new features are added and existing specifications are enhanced. Java EE 7 builds on top of the success of Java EE 6 and continues to focus on increasing developer productivity. JAX-RS, the Java API for RESTful Web Services, is one of the fastest-evolving APIs in the Java EE landscape. This is, of course, due to the massive adoption of REST-based web services and the increasing number of applications that consume those services. This post will go through the steps required to configure REST endpoints to support a JavaScript client and to handle validation exceptions so that localized error messages are sent to the client in addition to HTTP error status codes.

Source code

The source code accompanying this article is available on GitHub.

Introduction to Bean Validation

JavaBeans Validation (Bean Validation) is a validation model introduced in the Java EE 6 platform. The Bean Validation model is supported by constraints in the form of annotations placed on a field, method, or class of a JavaBeans component, such as a managed bean. Several built-in constraints are available in the javax.validation.constraints package.
The Java EE 7 Tutorial contains a list of all these constraints. Constraints in Bean Validation are expressed via Java annotations:

```java
public class Person {

    @NotNull
    @Size(min = 2, max = 50)
    private String name;

    // ...
}
```

Bean Validation and RESTful web services

JAX-RS provides great support for extracting request values and binding them to Java fields, properties and parameters using annotations such as @HeaderParam, @QueryParam, etc. It also supports binding of request entity bodies to Java objects via non-annotated parameters (i.e., parameters not annotated with any of the JAX-RS annotations). However, prior to JAX-RS 2.0, any additional validation of these values in a resource class had to be performed programmatically. The latest release, JAX-RS 2.0, includes a solution that enables validation annotations to be combined with JAX-RS annotations. The following example shows how path parameters can be validated using the @Pattern validation annotation:

```java
@GET
@Path("{id}")
public Person getPerson(
        @PathParam("id")
        @Pattern(regexp = "[0-9]+", message = "The id must be a valid number")
        String id) {
    return persons.get(id);
}
```

Besides validating single fields, you can also validate entire entities with the @Valid annotation. As an example, the method below receives a Person object and validates it:

```java
@POST
public Response validatePerson(@Valid Person person) {
    // ...
}
```

Internationalization

In the previous example we used default or hard-coded error messages, but this is both bad practice and not flexible at all. I18n is part of the Bean Validation specification and allows us to specify custom error messages using a resource property file.
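Outside a container, the rule that the @Pattern constraint above enforces can be reproduced with plain java.util.regex. This is only an illustrative sketch of what the annotation checks, not Bean Validation itself:

```java
import java.util.regex.Pattern;

public class PatternCheckSketch {

    // Mirrors @Pattern(regexp = "[0-9]+") from the resource method:
    // the whole value must match, not just a substring.
    static final Pattern ID = Pattern.compile("[0-9]+");

    static boolean isValidId(String id) {
        return id != null && ID.matcher(id).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidId("42"));   // true
        System.out.println(isValidId("test")); // false
        System.out.println(isValidId("4a2"));  // false: matches() anchors the whole input
    }
}
```

Note that matches() requires the entire input to match the expression, which is also how Bean Validation applies @Pattern.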
The default resource file name is ValidationMessages.properties and it must contain property/value pairs such as:

```properties
person.id.notnull=The person id must not be null
person.id.pattern=The person id must be a valid number
person.name.size=The person name must be between {min} and {max} chars long
```

Note: {min} and {max} refer to the attributes of the constraint to which the message is associated. Once defined, these messages can then be referenced from the validation constraints:

```java
@POST
@Path("create")
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
public Response createPerson(
        @FormParam("id")
        @NotNull(message = "{person.id.notnull}")
        @Pattern(regexp = "[0-9]+", message = "{person.id.pattern}")
        String id,
        @FormParam("name")
        @Size(min = 2, max = 50, message = "{person.name.size}")
        String name) {
    Person person = new Person();
    person.setId(Integer.valueOf(id));
    person.setName(name);
    persons.put(id, person);
    return Response.status(Response.Status.CREATED).entity(person).build();
}
```

To provide translations to other languages, one must create a new file ValidationMessages_XX.properties with the translated messages, where XX is the code of the language being provided. Unfortunately, with some application servers the default Validator provider doesn't support i18n based on a specific HTTP request: they do not take the Accept-Language HTTP header into account and always use the default Locale as provided by Locale.getDefault(). To be able to change the Locale using the Accept-Language HTTP header (which maps to the language configured in your browser options), you must provide a custom implementation.

Custom Validator provider

Although WildFly 8 correctly uses the Accept-Language HTTP header to choose the correct resource bundle, other servers like GlassFish 4 do not use this header. Therefore, for completeness and easier comparison with the GlassFish code (available under the same GitHub project), I've also implemented a custom Validator provider for WildFly.
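The {min}/{max} substitution can be sketched in plain Java to show what the message interpolator does with the constraint attributes. This illustrates the mechanism only; it is not the Bean Validation implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class MessageInterpolationSketch {

    // Replaces {name} placeholders with constraint attribute values,
    // the way Bean Validation fills in {min} and {max} from @Size.
    static String interpolate(String template, Map<String, Object> attributes) {
        String result = template;
        for (Map.Entry<String, Object> e : attributes.entrySet()) {
            result = result.replace("{" + e.getKey() + "}", String.valueOf(e.getValue()));
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> attrs = new HashMap<>();
        attrs.put("min", 2);
        attrs.put("max", 50);
        System.out.println(interpolate(
                "The person name must be between {min} and {max} chars long", attrs));
        // The person name must be between 2 and 50 chars long
    }
}
```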
If you want to see a GlassFish example, please visit Integrating Bean Validation with JAX-RS on JAXenter.

Add the RESTEasy dependency to Maven

WildFly uses RESTEasy, the JBoss implementation of the JAX-RS specification. The RESTEasy dependencies are required for the Validator provider and the Exception Mapper discussed later in this post. Let's add them to Maven:

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.jboss.resteasy</groupId>
            <artifactId>resteasy-bom</artifactId>
            <version>3.0.6.Final</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-jaxrs</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.jboss.resteasy</groupId>
        <artifactId>resteasy-validator-provider-11</artifactId>
        <scope>provided</scope>
    </dependency>
</dependencies>
```

Create a ThreadLocal to store the Locale from the Accept-Language HTTP header

ThreadLocal variables differ from their normal counterparts in that each thread that accesses one has its own, independently initialized copy of the variable:

```java
/**
 * {@link ThreadLocal} to store the Locale to be used in the message interpolator.
 */
public class LocaleThreadLocal {

    public static final ThreadLocal<Locale> THREAD_LOCAL = new ThreadLocal<Locale>();

    public static Locale get() {
        return (THREAD_LOCAL.get() == null) ? Locale.getDefault() : THREAD_LOCAL.get();
    }

    public static void set(Locale locale) {
        THREAD_LOCAL.set(locale);
    }

    public static void unset() {
        THREAD_LOCAL.remove();
    }
}
```

Create a request filter to read the Accept-Language HTTP header

The request filter is responsible for reading the first language sent by the client in the Accept-Language HTTP header and storing the Locale in our ThreadLocal:

```java
/**
 * Checks whether the {@code Accept-Language} HTTP header exists and creates a {@link ThreadLocal} to store the
 * corresponding Locale.
 */
@Provider
public class AcceptLanguageRequestFilter implements ContainerRequestFilter {

    @Context
    private HttpHeaders headers;

    @Override
    public void filter(ContainerRequestContext requestContext) throws IOException {
        if (!headers.getAcceptableLanguages().isEmpty()) {
            LocaleThreadLocal.set(headers.getAcceptableLanguages().get(0));
        }
    }
}
```

Create a custom message interpolator to enforce a specific Locale

Next, create a custom message interpolator that enforces a specific Locale value by bypassing or overriding the default Locale strategy:

```java
/**
 * Delegates to a MessageInterpolator implementation but enforces a given Locale.
 */
public class LocaleSpecificMessageInterpolator implements MessageInterpolator {

    private final MessageInterpolator defaultInterpolator;

    public LocaleSpecificMessageInterpolator(MessageInterpolator interpolator) {
        this.defaultInterpolator = interpolator;
    }

    @Override
    public String interpolate(String message, Context context) {
        return defaultInterpolator.interpolate(message, context, LocaleThreadLocal.get());
    }

    @Override
    public String interpolate(String message, Context context, Locale locale) {
        return defaultInterpolator.interpolate(message, context, locale);
    }
}
```

Configure the Validator provider

RESTEasy obtains a Bean Validation implementation by looking for a Provider implementing ContextResolver<GeneralValidator>. To configure a new Validation Service Provider that uses our custom message interpolator, add the following:

```java
/**
 * Custom configuration of validation. This configuration can define custom:
 * <ul>
 * <li>MessageInterpolator - interpolates a given constraint violation message.</li>
 * <li>TraversableResolver - determines if a property can be accessed by the Bean Validation provider.</li>
 * <li>ConstraintValidatorFactory - instantiates a ConstraintValidator instance based off its class.</li>
 * <li>ParameterNameProvider - provides names for method and constructor parameters.</li>
 * </ul>
 */
@Provider
public class ValidationConfigurationContextResolver implements ContextResolver<GeneralValidator> {

    /**
     * Get a context of type {@code GeneralValidator} that is applicable to the supplied type.
     *
     * @param type the class of object for which a context is desired
     * @return a context for the supplied type or {@code null} if a context for the supplied type is not available
     *         from this provider.
     */
    @Override
    public GeneralValidator getContext(Class<?> type) {
        Configuration<?> config = Validation.byDefaultProvider().configure();
        BootstrapConfiguration bootstrapConfiguration = config.getBootstrapConfiguration();

        config.messageInterpolator(new LocaleSpecificMessageInterpolator(
                Validation.byDefaultProvider().configure().getDefaultMessageInterpolator()));

        return new GeneralValidatorImpl(config.buildValidatorFactory(),
                bootstrapConfiguration.isExecutableValidationEnabled(),
                bootstrapConfiguration.getDefaultValidatedExecutableTypes());
    }
}
```

Mapping exceptions

By default, when validation fails an exception is thrown by the container and an HTTP error is returned to the client. The Bean Validation specification defines a small hierarchy of exceptions (they all inherit from ValidationException) that can be thrown during initialization of the validation engine or, more importantly for our case, during validation of input/output values (ConstraintViolationException). If a thrown exception is a subclass of ValidationException other than ConstraintViolationException, it is mapped to an HTTP response with status code 500 (Internal Server Error). On the other hand, when a ConstraintViolationException is thrown, two different status codes can be returned:

500 (Internal Server Error): if the exception was thrown while validating a method return type.
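Outside the container, the interplay of Accept-Language parsing and the ThreadLocal can be sketched with the JDK alone. Here Locale.LanguageRange stands in for the parsing that HttpHeaders.getAcceptableLanguages() performs for the filter; this is an illustration, not the container code:

```java
import java.util.List;
import java.util.Locale;

public class AcceptLanguageSketch {

    // Same idea as LocaleThreadLocal: each request thread gets its own Locale.
    static final ThreadLocal<Locale> LOCALE = new ThreadLocal<>();

    // Locale.LanguageRange.parse() sorts the ranges by their q-weights in
    // descending order, so the first element is the client's preferred language.
    static Locale firstAcceptableLanguage(String acceptLanguageHeader) {
        List<Locale.LanguageRange> ranges = Locale.LanguageRange.parse(acceptLanguageHeader);
        return ranges.isEmpty() ? Locale.getDefault()
                                : Locale.forLanguageTag(ranges.get(0).getRange());
    }

    public static void main(String[] args) {
        Locale locale = firstAcceptableLanguage("pt-PT,pt;q=0.8,en;q=0.5");
        LOCALE.set(locale);
        System.out.println(LOCALE.get().toLanguageTag()); // pt-PT
        LOCALE.remove(); // mirror LocaleThreadLocal.unset() to avoid leaks in pooled threads
    }
}
```

Clearing the ThreadLocal at the end of each request matters in a server, because worker threads are pooled and reused across requests.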
400 (Bad Request): otherwise.

Unfortunately, instead of throwing a ConstraintViolationException for invalid input values, WildFly throws a ResteasyViolationException, which extends ValidationException. This behavior can be customized to allow us to add error messages to the response that is returned to the client:

```java
/**
 * {@link ExceptionMapper} for {@link ValidationException}.
 * <p>
 * Sends a {@link ViolationReport} in the {@link Response} in addition to the HTTP 400/500 status code. Supported
 * media types are: {@code application/json} / {@code application/xml} (if the appropriate provider is registered
 * on the server).
 * </p>
 *
 * @see org.jboss.resteasy.api.validation.ResteasyViolationExceptionMapper The original WildFly class:
 *      {@code org.jboss.resteasy.api.validation.ResteasyViolationExceptionMapper}
 */
@Provider
public class ValidationExceptionMapper implements ExceptionMapper<ValidationException> {

    @Override
    public Response toResponse(ValidationException exception) {
        if (exception instanceof ConstraintDefinitionException) {
            return buildResponse(unwrapException(exception), MediaType.TEXT_PLAIN, Status.INTERNAL_SERVER_ERROR);
        }
        if (exception instanceof ConstraintDeclarationException) {
            return buildResponse(unwrapException(exception), MediaType.TEXT_PLAIN, Status.INTERNAL_SERVER_ERROR);
        }
        if (exception instanceof GroupDefinitionException) {
            return buildResponse(unwrapException(exception), MediaType.TEXT_PLAIN, Status.INTERNAL_SERVER_ERROR);
        }
        if (exception instanceof ResteasyViolationException) {
            ResteasyViolationException resteasyViolationException = ResteasyViolationException.class.cast(exception);
            Exception e = resteasyViolationException.getException();
            if (e != null) {
                return buildResponse(unwrapException(e), MediaType.TEXT_PLAIN, Status.INTERNAL_SERVER_ERROR);
            } else if (resteasyViolationException.getReturnValueViolations().size() == 0) {
                return buildViolationReportResponse(resteasyViolationException, Status.BAD_REQUEST);
            } else {
                return buildViolationReportResponse(resteasyViolationException, Status.INTERNAL_SERVER_ERROR);
            }
        }
        return buildResponse(unwrapException(exception), MediaType.TEXT_PLAIN, Status.INTERNAL_SERVER_ERROR);
    }

    protected Response buildResponse(Object entity, String mediaType, Status status) {
        ResponseBuilder builder = Response.status(status).entity(entity);
        builder.type(MediaType.TEXT_PLAIN);
        builder.header(Validation.VALIDATION_HEADER, "true");
        return builder.build();
    }

    protected Response buildViolationReportResponse(ResteasyViolationException exception, Status status) {
        ResponseBuilder builder = Response.status(status);
        builder.header(Validation.VALIDATION_HEADER, "true");

        // Check standard media types.
        MediaType mediaType = getAcceptMediaType(exception.getAccept());
        if (mediaType != null) {
            builder.type(mediaType);
            builder.entity(new ViolationReport(exception));
            return builder.build();
        }

        // Default media type.
        builder.type(MediaType.TEXT_PLAIN);
        builder.entity(exception.toString());
        return builder.build();
    }

    protected String unwrapException(Throwable t) {
        StringBuffer sb = new StringBuffer();
        doUnwrapException(sb, t);
        return sb.toString();
    }

    private void doUnwrapException(StringBuffer sb, Throwable t) {
        if (t == null) {
            return;
        }
        sb.append(t.toString());
        if (t.getCause() != null && t != t.getCause()) {
            sb.append('[');
            doUnwrapException(sb, t.getCause());
            sb.append(']');
        }
    }

    private MediaType getAcceptMediaType(List<MediaType> accept) {
        Iterator<MediaType> it = accept.iterator();
        while (it.hasNext()) {
            MediaType mt = it.next();
            /*
             * The application/xml media type causes an exception:
             * org.jboss.resteasy.core.NoMessageBodyWriterFoundFailure: Could not find MessageBodyWriter for response
             * object of type: org.jboss.resteasy.api.validation.ViolationReport of media type: application/xml
             */
            /*if (MediaType.APPLICATION_XML_TYPE.getType().equals(mt.getType())
                    && MediaType.APPLICATION_XML_TYPE.getSubtype().equals(mt.getSubtype())) {
                return MediaType.APPLICATION_XML_TYPE;
            }*/
            if (MediaType.APPLICATION_JSON_TYPE.getType().equals(mt.getType())
                    && MediaType.APPLICATION_JSON_TYPE.getSubtype().equals(mt.getSubtype())) {
                return MediaType.APPLICATION_JSON_TYPE;
            }
        }
        return null;
    }
}
```

The above example is an implementation of the ExceptionMapper interface which maps exceptions of type ValidationException. This exception is thrown by the Validator implementation when validation fails. If the exception is an instance of ResteasyViolationException, we send a ViolationReport in the response in addition to the HTTP 400/500 status code. This ensures that the client receives a formatted response instead of just the exception propagated from the resource. The produced output looks like the following (in JSON format):

```json
{
    "exception": null,
    "fieldViolations": [],
    "propertyViolations": [],
    "classViolations": [],
    "parameterViolations": [
        {
            "constraintType": "PARAMETER",
            "path": "getPerson.id",
            "message": "The id must be a valid number",
            "value": "test"
        }
    ],
    "returnValueViolations": []
}
```

Running and testing

To run the application used for this article, build the project with Maven, deploy it to a WildFly 8 application server, and point your browser to http://localhost:8080/jaxrs-beanvalidation-javaee7/. Alternatively, you can run the tests from the class PersonsIT, which are built with Arquillian and JUnit. Arquillian will start an embedded WildFly 8 container automatically, so make sure you do not have another server running on the same ports.

Suggestions and improvements

We are dependent on application server code in order to implement a custom Validator provider. On GlassFish 4, ContextResolver<ValidationConfig> needs to be implemented, while on WildFly 8 we need to implement ContextResolver<GeneralValidator>. Why not define an interface in the Java EE 7 spec that both ValidationConfig and GeneralValidator must implement, instead of relying on application server specific code?

Make WildFly 8 Embedded easier to use and configure with Maven.
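The unwrapException() helper in the mapper flattens a cause chain into one string. A standalone sketch of the same recursion is shown below, using StringBuilder instead of StringBuffer since no thread-safety is needed for a method-local buffer:

```java
public class UnwrapExceptionSketch {

    // Flattens a Throwable's cause chain into a single bracketed string,
    // guarding against self-referential causes (t == t.getCause()).
    static String unwrap(Throwable t) {
        StringBuilder sb = new StringBuilder();
        doUnwrap(sb, t);
        return sb.toString();
    }

    private static void doUnwrap(StringBuilder sb, Throwable t) {
        if (t == null) {
            return;
        }
        sb.append(t.toString());
        if (t.getCause() != null && t != t.getCause()) {
            sb.append('[');
            doUnwrap(sb, t.getCause());
            sb.append(']');
        }
    }

    public static void main(String[] args) {
        Exception cause = new IllegalArgumentException("bad id");
        Exception wrapper = new RuntimeException("validation failed", cause);
        System.out.println(unwrap(wrapper));
        // java.lang.RuntimeException: validation failed[java.lang.IllegalArgumentException: bad id]
    }
}
```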
Currently, for it to be available to Arquillian, one needs to download the WildFly distribution (org.wildfly:wildfly-dist), unzip it into the target folder, and configure the system properties on the Surefire/Failsafe Maven plugins:

```xml
<systemPropertyVariables>
    <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
    <jboss.home>${wildfly.home}</jboss.home>
    <module.path>${wildfly.home}/modules</module.path>
</systemPropertyVariables>
```

Whereas for GlassFish you just need to define the correct dependency (org.glassfish.main.extras:glassfish-embedded-all).

Make RESTEasy a transitive dependency of WildFly Embedded. Having all the WildFly modules available at compile time just by defining a provided WildFly Embedded dependency would be a nice productivity boost.

It is currently not possible to use the option Run As >> JUnit Test in Eclipse, since a system property named jbossHome must exist. This property is not read from the Surefire/Failsafe configuration by Eclipse. Is there a workaround for this?

When using RESTEasy's default implementation of ExceptionMapper<ValidationException>, requesting the data in the application/xml media type while having validation errors will throw the following exception: org.jboss.resteasy.core.NoMessageBodyWriterFoundFailure: Could not find MessageBodyWriter for response object of type: org.jboss.resteasy.api.validation.ViolationReport of media type: application/xml. Is this a RESTEasy bug?

Reference: Validating JAX-RS resource data with Bean Validation in Java EE 7 and WildFly from our JCG partner Samuel Santos at the Samaxes blog.

Spring test with thymeleaf for views

I am a recent convert to Thymeleaf for view templating in Spring based web applications, preferring it over JSPs. All the arguments that the Thymeleaf documentation makes on why Thymeleaf over JSP hold water, and I am definitely sold. One of the big reasons for me, apart from being able to preview the template, is the way the view is rendered at runtime. Whereas the application stack has to defer the rendering of a JSP to the servlet container, it has full control over the rendering of Thymeleaf templates. To clarify this a little more: with JSP as the view technology, an application only returns the location of the JSP, and it is up to the servlet container to render it. So why again is this a big reason? Because using the MVC test support in the spring-test module, the actual rendered content can now be asserted on rather than just the name of the view. Consider a sample Spring MVC controller:

```java
@Controller
@RequestMapping("/shop")
public class ShopController {
    // ...

    @RequestMapping("/products")
    public String listProducts(Model model) {
        model.addAttribute("products", this.productRepository.findAll());
        return "products/list";
    }
}
```

Had the view been JSP based, I would have had a test which looks like this:

```java
@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration(classes = SampleWebApplication.class)
public class ShopControllerWebTests {

    @Autowired
    private WebApplicationContext wac;

    private MockMvc mockMvc;

    @Before
    public void setup() {
        this.mockMvc = MockMvcBuilders.webAppContextSetup(this.wac).build();
    }

    @Test
    public void testListProducts() throws Exception {
        this.mockMvc.perform(get("/shop/products"))
                .andExpect(status().isOk())
                .andExpect(view().name("products/list"));
    }
}
```

Here the assertion is only on the name of the view.
Now, consider a test with Thymeleaf used as the view technology:

```java
@Test
public void testListProducts() throws Exception {
    this.mockMvc.perform(get("/shop/products"))
            .andExpect(status().isOk())
            .andExpect(content().string(containsString("Dummy Book1")));
}
```

Here I am asserting on the actual rendered content. This is really good: whereas with JSP I would have had to verify at runtime, with a real container, that the JSP renders correctly, with Thymeleaf I can validate that the rendering is clean purely using tests.

Reference: Spring test with thymeleaf for views from our JCG partner Biju Kunjummen at the all and sundry blog.

Efficient Code Coverage with Eclipse

There is a saying that a fool with a tool is still a fool. But how to use a tool most efficiently is not always obvious to me. Because of this I typically spend some time checking out new playgrounds1 that promise to increase my work speed without impairing quality. This way I came across EclEmma, a code coverage tool for the Eclipse IDE, which can be quite useful for achieving comprehensive test cases.

Coverage

In general, 'Test coverage is a useful tool for finding untested parts of a codebase' because 'Test Driven Development is a very useful, but certainly not sufficient, tool to help you get good tests', as Martin Fowler puts it2. Given this, the usual way to analyse a codebase for untested parts is either to run an appropriate tool every now and then, or to have a report generated automatically, e.g. by a nightly build. However, the first approach seems a bit non-committal, and the second one involves the danger of focusing on high numbers3 instead of test quality, let alone the cost of the context switches involved in expanding coverage on blank spots you wrote a couple of days or weeks ago. Hence Paul Johnson suggests 'to use it as early as possible in the development process' and 'to run tests regularly with code coverage'4. But when exactly is as early as possible? On second thought, it occurred to me that the very moment just before finishing the work on a certain unit under test should be ideal. Since at that point in time all the unit's tests should be written and all its refactorings should be done, a quick coverage check might reveal an overlooked passage. And closing the gap at that time would come at minimal expense, as no context switch would be involved. Certainly the most important word in the last paragraph is quick, which means that this approach is only viable if the coverage data can be collected fast and the results are easy to check.
Luckily, EclEmma integrates seamlessly into Eclipse by providing launch configurations, appropriate shortcuts and editor highlighting to meet exactly these requirements, without burdening the developer with any code instrumentation handling.

EclEmma

In Eclipse there are several ways to execute a test case quickly5, and EclEmma makes it very easy to re-run the latest test launch, e.g. via the shortcut Ctrl+Shift+F11. As Test Driven Development demands that test cases run very fast, the related data collection also runs very fast. This means one can check the coverage of the unit under test in a kind of fly-by mode. Once data collection has finished, the coverage statistics are shown in a result view. But when running only a single test case, or a few of them, the overall numbers will be pretty bad. Much more interesting is the highlighting in the code editor. The image shows the supposedly pleasant case where full instruction and branch coverage has been reached. But it cannot be stressed enough that full coverage alone says nothing about the quality of the underlying tests!6 The only reasonable conclusion to draw is that there are obviously no uncovered spots, and if the tests are written thoroughly and thoughtfully, development of the unit might be declared as completed. If, however, we get a result like the following picture, we are definitely not done: as you can see, the tests do not cover several branches and miss a statement entirely, which means there is still work to do. The obvious solution would be to add a few tests to close the gaps. But according to Brian Marick, such gaps may be an indication of a more fundamental problem in your test case, called faults of omission7. So it might be advisable to reconsider the test case completely. Occasionally you may need other metrics than instruction and branch counters.
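What an uncovered branch looks like can be reduced to a minimal example. In the sketch below, a test suite that only ever passes positive values exercises the if branch, leaving the else branch uncovered, which EclEmma would flag in the editor highlighting:

```java
public class BranchCoverageSample {

    // Two branches: a suite that only calls classify() with positive
    // values never executes the else branch, so branch coverage stays partial.
    static String classify(int value) {
        if (value > 0) {
            return "positive";
        } else {
            return "non-positive";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(5));  // covers the if branch only
        System.out.println(classify(-1)); // this call is what closes the gap
    }
}
```

The second call is exactly the kind of "overlooked passage" a quick coverage check just before finishing the unit is meant to reveal.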
In this case you can drill down in the report view to the class you are currently working on and select an appropriate metric as shown below.

Conclusion

While much more could be said about coverage in general and how to interpret the reports, I leave this to more qualified people, like those mentioned in the footnotes of this post. Summarizing, one can say that full coverage is a necessary but not sufficient criterion for good tests. But note that full coverage is not always achievable, or would be unreasonably expensive to achieve. So be careful not to overdo things, or, to quote Martin Fowler again: 'I would be suspicious of anything like 100% – it would smell of someone writing tests to make the coverage numbers happy, but not thinking about what they are doing'2. Working with the approach described in this post, the numbers usually end up in the lower 90s of project-wide coverage8, given that your co-workers follow the same pattern and principles, which mine usually do – at least after a while!

1. Regarding software development, such playgrounds may be methodologies, development techniques, frameworks, libraries and of course the usage of tools
2. TestCoverage, Martin Fowler, 4/17/2012
3. See Dashboards promote ignorance, Sriram Narayan, 4/11/2011
4. Testing and Code Coverage, Paul Johnson, 2002
5. See also Working Efficiently with JUnit in Eclipse
6. To make this point clear, simply comment out every assert and verification in a test case that produces full coverage like the one shown above. Doing so should usually not change the coverage report at all, although the test case is now pretty useless
7. How to Misuse Code Coverage, Brian Marick
8. Keep in mind that the numbers depend on the selected metric. Path coverage numbers are usually smaller than those of branch coverage, and branch coverage numbers can be smaller than statement coverage ones

Reference: Efficient Code Coverage with Eclipse from our JCG partner Frank Appel at the Code Affine blog.

OpenSource License Manager

What is a License Manager?

License managers are used to enforce license rights, or at least to support their enforcement. When you develop an open source program, there is not much you need to, or can, do to enforce license rights. The code is there, and if anyone wants to abuse the program there is nothing technical that could stop them. Closed source programs are different. (Are they?) In that case the source code is not available to the client. It is not possible to alter the program so that it circumvents the license enforcement code, and thus there is a real role for license rights enforcement. But this is not true. The truth is that there is no fundamental difference between closed and open source code in this respect. Closed source code can also be altered. The ultimate "source" for the execution is there after all: the machine code. There are tools that help to analyze and decode the binary into a more or less human readable format, and thus it is possible to circumvent the license management. It is possible, and there is a great number of examples of it. On some sites hosted in some countries you can simply download a cracked version of practically any software. I do not recommend doing that, and not only for ethical reasons: you just never know which of those sites are funded by secret services or criminals (if there is any difference), and you never know whether you install spy software on your machine when using the cracked version. Once I worked for a company where one of the success measurements for their software was the number of days after release until cracked versions appeared on the different sites, compared to the same value for the competitor. The smaller the number was for their software, the happier they were. Were they crazy? Why were they happy to know that their software was cracked?
When that number was a single day, why did they not consider applying stronger license enforcement measures, like code morphing, hardware keys and so on? The answer is the following. This company knew very well that license management is not there to prevent unauthorized use. It can be used that way, but doing so has two major effects which will ruin your business:

Writing license management code, you spend your time on non-productive code.
License management (used this way) works against your customer.

Never implement license management against your customer. When your license management solution is too restrictive, you may restrict the software use of your customer. When you deliver your code with a hardware key, you impose inconvenience on your customer. When you bind your license to the Ethernet MAC address of the machine the application is running on, again: you work against your customer.

Set<User> != Set<Customer>

Face the sad truth: there will always be people who use your software without paying for it. They are not your customers. Do they steal from you? Not necessarily. If there is someone who is not buying your software, he is not your customer. If you know that there is no way they would pay for the software, and the decision were in your hands whether you want them to use your software or that of your competitor, what would you choose? I guess you would like your software to be used, to get more feedback and more knowledge even in the area of non-customers. People using your software may become your customers more likely than people not using it. This is why big companies sell educational licenses to universities and other academic institutions. Should we use license management at all in that case? Is license management bad down to the ground, in all aspects? My answer is that it is not. There is a correct use case for license management, even when the software is open source (but not free, like Atlassian products).
To find and understand this use case there is one major thing to understand: the software is for the customer, and every line of code has to support the customers in reaching their business goals. Paying the fee for the software is also for the customer. If nobody finances a piece of software, the software will die. There is no such thing as a free lunch: somebody has to pay for it. Becoming a customer and paying for the software used is the most straightforward business model, and it provides the strongest feedback and control for the customer over the vendor to get the features needed. At the same time, paying for the software is not the core business of the customer; paying for the resources used supports their business goals only indirectly. This is where license management comes into the picture: it helps the customers do their duty. It helps them remember their long term needs. This also means that license management should not prevent functionality. No functionality should stop if a license expires, not to mention functionality that may prevent access to data that actually belongs to the customer. If you approach license management with this mindset, you can see that even open source (but not free) software may need it.

License Management Tool: License3j

Many years ago I was looking for a license management library and found that there was no open source one. I wanted to create an open source (but not free) application, and that required the license management to be open source as well. What I found was also overpriced, taking into account our budget, which was just zero for a part-time start-up software (which actually failed miserably business-wise, but that is another story). For this reason I created License3j, which surprisingly became one of the most used libraries among my OS projects. License3j is very simple in terms of business objects. It uses a simple property file and lets the application check the content of the individual fields.
The added value is handling the electronic signature and checking the authenticity of the license file. Essentially it is hardly more than a single class file.

<dependency>
  <groupId>com.verhas</groupId>
  <artifactId>license3j</artifactId>
  <version>1.0.4</version>
</dependency>

Feel free to use it if you like. Reference: OpenSource License Manager from our JCG partner Peter Verhas at the Java Deep blog....
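The "added value" mentioned above – signing a license and verifying its authenticity – can be sketched with the plain JDK java.security API. This is not License3j's actual API, just an illustration of the underlying mechanism:

```java
import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

class LicenseSignatureDemo {

    // The vendor signs the license text with its private key.
    static byte[] sign(String licenseText, PrivateKey key) throws Exception {
        Signature signature = Signature.getInstance("SHA256withRSA");
        signature.initSign(key);
        signature.update(licenseText.getBytes(StandardCharsets.UTF_8));
        return signature.sign();
    }

    // The application ships with the public key and checks authenticity.
    static boolean verify(String licenseText, byte[] sig, PublicKey key) throws Exception {
        Signature signature = Signature.getInstance("SHA256withRSA");
        signature.initVerify(key);
        signature.update(licenseText.getBytes(StandardCharsets.UTF_8));
        return signature.verify(sig);
    }
}
```

Any tampering with the property-file content invalidates the signature, which is exactly the check a library like License3j performs for you.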

HOW-TO: Spring Boot and Thymeleaf with Maven

Spring Boot is a great piece of software that lets you bootstrap a Spring application within a few seconds. And it really works: as little configuration as possible to get started, while it is still possible to change the defaults. Let's see how easy it is to bootstrap Spring MVC with Thymeleaf and Maven, and work with it in IntelliJ. Basic setup: Spring MVC + Thymeleaf with Maven. Make sure you have Maven 3 installed with the following command: mvn --version. Navigate to the directory you want to create your project in and execute the Maven archetype:

mvn archetype:generate -DarchetypeArtifactId=maven-archetype-quickstart -DgroupId=pl.codeleak.demos.sbt -DartifactId=spring-boot-thymeleaf -DinteractiveMode=false

The above command will create a new directory, spring-boot-thymeleaf. Now you can import it into your IDE; in my case this is IntelliJ. The next step is to configure the application. Open pom.xml and add a parent project. Values from the parent project will be the defaults for this project if they are left unspecified. The next step is to add the web dependencies. In order to do so, I first removed all previous dependencies (junit 3.8.1, actually) and added the web starter dependencies. Now wait a second until Maven downloads the dependencies, and run mvn dependency:tree to see what dependencies are included. The next thing is the packaging configuration: let's add the Spring Boot Maven Plugin. With the above steps, the basic configuration is ready. Now we can run the application. The Spring Boot Maven Plugin offers two goals: run and repackage. So let's run the application using mvn spring-boot:run. The command should produce Hello World!. Please note that the App class has a main method, so in fact you can run this class in IntelliJ (or any other IDE). Hello World! But wait a moment. This is not a web application yet.
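For reference, the parent, the web/Thymeleaf starters and the plugin mentioned above would typically look like this in pom.xml (the version number is illustrative for the Spring Boot 1.0.x line current at the time of writing):

```xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.0.1.RELEASE</version>
</parent>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
```

Note that the starters inherit their versions from the parent, which is the whole point of using it.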
So let's modify the App class so that it is the entry point of the Spring Boot application:

package pl.codeleak.demos.sbt;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@EnableAutoConfiguration
@Configuration
@ComponentScan
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class);
    }
}

In addition to the above, I would remove the AppTest, as it sucks (it was created by the maven-archetype-quickstart)! Now we can run the application again to see what happens:

java.lang.IllegalStateException: Cannot find template location: class path resource [templates/] (please add some templates or check your Thymeleaf configuration)

Clear. Let's add some Thymeleaf templates then. Where to put Thymeleaf templates? The default place for templates is … templates available on the classpath. So we need to put at least one template into the src/main/resources/templates directory. Let's create a simple one. Running the application again will start an embedded Tomcat with our application on port 8080:

Tomcat started on port(s): 8080/http

Ok. But something is missing. When we navigate to localhost:8080 we will see a 404 page. Of course! There are no controllers yet. So let's create one:

package pl.codeleak.demos.sbt.home;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
class HomeController {
    @RequestMapping("/")
    String index() {
        return "index";
    }
}

After running the application again you should be able to see the Hello Spring Boot! page. Adding static resources: similarly to Thymeleaf templates, static resources are served from the classpath by default. We may put CSS files into src/main/resources/css, JavaScript files into src/main/resources/js, etc.
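A minimal src/main/resources/templates/index.html could look like this (hypothetical content matching the "Hello Spring Boot!" page mentioned above; the th:href shows how a static CSS resource would be referenced):

```html
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<head>
    <title>Spring Boot and Thymeleaf</title>
    <link rel="stylesheet" th:href="@{/css/main.css}"/>
</head>
<body>
    <p>Hello Spring Boot!</p>
</body>
</html>
```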
In a Thymeleaf template we reference them accordingly. Converting packaging from jar to war: but what if we want to run the application as a plain web app and provide it as a war package? It is fairly easy with Spring Boot. Firstly, we need to change the type of packaging in pom.xml from jar to war (the packaging element). Secondly, make sure that Tomcat is a provided dependency. The last step is to bootstrap a servlet configuration. Create an Init class and inherit from SpringBootServletInitializer:

package pl.codeleak.demos.sbt;

import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.context.web.SpringBootServletInitializer;

public class Init extends SpringBootServletInitializer {
    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(App.class);
    }
}

We can check that the configuration works with Maven: mvn clean package. The war file should be created:

Building war: C:\Projects\demos\spring-boot-thymeleaf\target\spring-boot-thymeleaf-1.0-SNAPSHOT.war

Start the application from the war file directly:

java -jar target\spring-boot-thymeleaf-1.0-SNAPSHOT.war

Having a war project, we can also run the application in IntelliJ. After we changed the packaging, IntelliJ should detect the changes in the project and add a web facet to it. The next step is to configure a Tomcat server and run it. Navigate to Edit Configurations and add a Tomcat server with the exploded war artifact. Now you can run the application like any other web application. Reloading Thymeleaf templates: since the application is running on a local Tomcat server in IntelliJ, we may reload static resources (e.g. CSS files) without restarting the server. By default, however, Thymeleaf caches the templates, so in order to update Thymeleaf templates we need to change this behaviour. To do this, add application.properties to the src/main/resources directory with the following property: spring.thymeleaf.cache=false.
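The two pom.xml changes for war packaging described above would typically look like this (coordinates shown are the usual Spring Boot ones; the provided scope keeps the embedded Tomcat out of the war so an external servlet container can host it):

```xml
<packaging>war</packaging>

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>
```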
Restart the server, and from now on you can reload Thymeleaf templates without restarting it. Changing the other configuration defaults: cache configuration is not the only available configuration we can adjust. Please look at the ThymeleafAutoConfiguration class to see what other things you can change, to mention a few: spring.thymeleaf.mode, spring.thymeleaf.encoding. Final thoughts: Spring Boot simplifies bootstrapping a web application. With just a couple of steps you have a fully working web application that can be self-contained or can run in any servlet environment. Instead of learning Spring configuration, you may focus on development. To learn more about Spring Boot, read the manual and check the Spring guides, which provide many useful getting-started tutorials. Enjoy! Resources: Spring Boot Thymeleaf project sources, Spring Boot Reference Guide, Spring guides, Thymeleaf project. Reference: HOW-TO: Spring Boot and Thymeleaf with Maven from our JCG partner Rafal Borowiec at the Codeleak.pl blog....

Clean Synchronization Using ReentrantLock and Lambdas

Recently I was reading an informative post about the differences between synchronized and ReentrantLock by Javin Paul1. He emphasises the advantages of the latter, but does not withhold some downsides, which are related to the cumbersome try-finally block needed for proper usage. While agreeing with his statements, I brooded over a thought that always bothers me when it comes to synchronization: both approaches mix up separate concerns – synchronization and the functionality of the synchronized content – which hampers testing those concerns one by one. Being the explorative type, I picked up a solution for this problem that I had already tried in the past. At that time, however, I did not like the programming pattern too much, because of its verbosity due to an anonymous class. But having Java 8 and lambda expressions at hand, I thought it might be worth reconsidering. So I copied the 'counter' part of Javin Paul's example, wrote a simple test case and started refactoring. This was the initial situation:

class Counter {

  private final Lock lock;
  private int count;

  Counter() {
    lock = new ReentrantLock();
  }

  int next() {
    lock.lock();
    try {
      return count++;
    } finally {
      lock.unlock();
    }
  }
}

One can clearly see the ugly try-finally block that produces a lot of noise around the actual functionality2. The idea is to move this block into its own class that serves as a synchronization aspect to the kind of operation that performs the increment.
The next snippet shows what such a newly created Operation interface may look like and how it can be used with a lambda expression3:

class Counter {

  private final Lock lock;
  private int count;

  interface Operation<T> {
    T execute();
  }

  Counter() {
    lock = new ReentrantLock();
  }

  int next() {
    lock.lock();
    try {
      Operation<Integer> operation = () -> { return count++; };
      return operation.execute();
    } finally {
      lock.unlock();
    }
  }
}

In the following class-extracting step, the Synchronizer type is introduced to serve as an executor that ensures a given Operation is performed within proper synchronization boundaries:

class Counter {

  private final Synchronizer synchronizer;
  private int count;

  interface Operation<T> {
    T execute();
  }

  static class Synchronizer {

    private final Lock lock;

    Synchronizer() {
      lock = new ReentrantLock();
    }

    private int execute( Operation<Integer> operation ) {
      lock.lock();
      try {
        return operation.execute();
      } finally {
        lock.unlock();
      }
    }
  }

  Counter() {
    synchronizer = new Synchronizer();
  }

  int next() {
    return synchronizer.execute( () -> { return count++; } );
  }
}

If I am not completely mistaken, this should do the same as the initial class. Well, the tests were green, but plain JUnit tests usually do not help much regarding concurrency. With a last change, though, it is at least possible to verify the proper invocation sequence in a unit test to ensure synchronization:

public class Counter {

  final Synchronizer<Integer> synchronizer;
  final Operation<Integer> incrementer;

  private int count;

  public Counter( Synchronizer<Integer> synchronizer ) {
    this.synchronizer = synchronizer;
    this.incrementer = () -> { return count++; };
  }

  public int next() {
    return synchronizer.execute( incrementer );
  }
}

As you can see, the Operation and Synchronizer have been moved to their own files. This way the synchronization aspect is provided and can be tested as a separate unit. The Counter class now uses the constructor to inject a synchronizer instance4.
Furthermore, the increment operation has been assigned to a field named 'incrementer'. To ease testing a bit, the final fields' visibility has been opened to default. A test using Mockito, e.g. for spying on the synchronizer, could now ensure the proper synchronization call like this:

@Test
public void synchronization() {
  Synchronizer<Integer> synchronizer = spy( new Synchronizer<>() );
  Counter counter = new Counter( synchronizer );

  counter.next();

  verify( synchronizer ).execute( counter.incrementer );
}

Usually I am not overly excited about using method invocation verification, as this creates a very tight coupling between unit and test case. But given the circumstances above, it does not look like too bad a compromise to me. However, I am just doing my first warmups with Java 8 and lambda expressions, and maybe I am missing something on the concurrency side too – so what do you think?

ReentrantLock Example in Java, Difference between synchronized vs ReentrantLock, Javin Paul, March 7, 2013 ↩
Obviously enough noise to confuse me, because my first test version failed… ↩
I decided to go with a type-parameter return value instead of int. This way the resulting synchronization mechanism can be reused more easily. But I am not sure whether e.g. autoboxing is uncritical here, for performance or other reasons. For a general approach there are probably some more things to consider, which are out of the scope of this post, though ↩
If changing the constructor is for any reason not possible, one might introduce a delegating default constructor that injects a new instance of Synchronizer into the parameterized one like this: this( new Synchronizer() );. This approach might be an acceptable overhead for testing purposes ↩

Reference: Clean Synchronization Using ReentrantLock and Lambdas from our JCG partner Frank Appel at the Code Affine blog....
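Putting the extracted pieces together, the final pattern can be sketched as one self-contained snippet (the article keeps Operation and Synchronizer in separate files; they are combined here only for brevity):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// The functional concern: an arbitrary operation producing a result.
interface Operation<T> {
  T execute();
}

// The synchronization concern: runs any Operation inside lock boundaries.
class Synchronizer<T> {

  private final Lock lock = new ReentrantLock();

  T execute( Operation<T> operation ) {
    lock.lock();
    try {
      return operation.execute();
    } finally {
      lock.unlock();
    }
  }
}

// The domain class only wires the two concerns together.
class Counter {

  final Synchronizer<Integer> synchronizer;
  final Operation<Integer> incrementer;

  private int count;

  Counter( Synchronizer<Integer> synchronizer ) {
    this.synchronizer = synchronizer;
    this.incrementer = () -> count++;
  }

  int next() {
    return synchronizer.execute( incrementer );
  }
}
```

Because the try-finally noise lives entirely in Synchronizer, the counting logic can be unit-tested on its own, and the locking behaviour can be tested (or mocked) separately.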

How to manage Git Submodules with JGit

For a larger project with Git, you may find yourself wanting to share code among multiple repositories – whether it is a shared library between projects, or perhaps templates and the like used among multiple different products. Git's built-in answer to this problem is submodules. They allow putting a clone of another repository as a subdirectory within a parent repository (sometimes also referred to as the superproject). A submodule is a repository in its own right. You can commit, branch, rebase, etc. from inside it, just as with any other repository. JGit offers an API that implements most of the Git submodule commands, and it is this API that I would like to introduce you to. The Setup: the code snippets used throughout this article are written as learning tests1. Simple tests can help in understanding how third-party code works and in adopting new APIs. They can be viewed as controlled experiments that allow you to discover exactly how the third-party code behaves. A helpful side effect is that, if you keep the tests, they can help you verify new releases of the third-party code. If your tests cover how you use the library, then incompatible changes in the third-party code will show themselves early on. Back to the topic at hand: all tests share the same setup. See the full source code for details. There is an empty repository called parent. Next to it there is a library repository. The tests will add the latter as a submodule to the parent. The library repository has an initial commit with a file named readme.txt in it. A setUp method creates both repositories like so:

Git git = Git.init().setDirectory( "/tmp/path/to/repo" ).call();

The repositories are represented through the fields parent and library of type Git. This class wraps a repository and gives access to all commands available in JGit. As I explained here earlier, each command class corresponds to a native Git porcelain command. To invoke a command, the builder pattern is used.
For example, the result of the Git.commit() method is actually a CommitCommand. After providing any necessary arguments you can invoke its call() method. Add a Submodule: the first and obvious step is to add a submodule to an existing repository. Using the setup outlined above, the library repository should be added as a submodule in the modules/library directory of the parent repository.

@Test
public void testAddSubmodule() throws Exception {
  String uri = library.getRepository().getDirectory().getCanonicalPath();
  SubmoduleAddCommand addCommand = parent.submoduleAdd();
  addCommand.setURI( uri );
  addCommand.setPath( "modules/library" );
  Repository repository = addCommand.call();
  repository.close();

  File workDir = parent.getRepository().getWorkTree();
  File readme = new File( workDir, "modules/library/readme.txt" );
  File gitmodules = new File( workDir, ".gitmodules" );
  assertTrue( readme.isFile() );
  assertTrue( gitmodules.isFile() );
}

The two things the SubmoduleAddCommand needs to know are from where the submodule should be cloned and where it should be stored. The URI (shouldn't it be called URL?) attribute denotes the location of the repository to clone from, as it would be given to the clone command. And the path attribute specifies in which directory – relative to the root of the parent repository's work directory – the submodule should be placed. After the command was run, the work directory of the parent repository looks like this: the library repository is placed in the modules/library directory and its work tree is checked out. call() returns a Repository object that you can use like a regular repository. This also means that you have to explicitly close the returned repository to avoid leaking file handles. The SubmoduleAddCommand did one more thing: it created a .gitmodules file in the root of the parent repository's work directory and added it to the index.
[submodule "modules/library"]
  path = modules/library
  url = git@example.com:path/to/lib.git

If you have ever looked into a Git config file, you will recognize the syntax. The file lists all the submodules that are referenced from this repository. For each submodule it stores the mapping between the repository's URL and the local directory it was pulled into. Once this file is committed and pushed, everyone who clones the repository knows where to get the submodules from (more on that later). Inventory: once we have added a submodule, we may want to know that it is actually known by the parent repository. The first test did a naive check in that it verified that certain files and directories existed. But there is also an API to list the submodules of a repository. This is what the code below does:

@Test
public void testListSubmodules() throws Exception {
  addLibrarySubmodule();

  Map<String,SubmoduleStatus> submodules = parent.submoduleStatus().call();

  assertEquals( 1, submodules.size() );
  SubmoduleStatus status = submodules.get( "modules/library" );
  assertEquals( INITIALIZED, status.getType() );
}

The SubmoduleStatus command returns a map of all the submodules in the repository, where the key is the path to the submodule and the value is a SubmoduleStatus. With the above code we can verify that the just-added submodule is actually there and INITIALIZED. The command also allows adding one or more paths to limit the status reporting to. Speaking of status, JGit's StatusCommand isn't at the same level as native Git. Submodules are always treated as if the command was run with --ignore-submodules=dirty: changes to the work directory of submodules are ignored. Updating a Submodule: submodules always point to a specific commit of the repository that they represent. Someone who clones the parent repository sometime in the future will get the exact same submodule state, although the submodule may have new commits upstream.
In order to change the revision, you must explicitly update a submodule, as outlined here:

@Test
public void testUpdateSubmodule() throws Exception {
  addLibrarySubmodule();
  ObjectId newHead = library.commit().setMessage( "msg" ).call();

  File workDir = parent.getRepository().getWorkTree();
  Git libModule = Git.open( new File( workDir, "modules/library" ) );
  libModule.pull().call();
  libModule.close();
  parent.add().addFilepattern( "modules/library" ).call();
  parent.commit().setMessage( "Update submodule" ).call();

  assertEquals( newHead, getSubmoduleHead( "modules/library" ) );
}

This rather lengthy snippet first commits something to the library repository and then updates the library submodule to the latest commit by pulling inside it. To make the update permanent, the submodule change must be added and committed in the parent repository. The commit stores the updated commit-id of the submodule under its name (modules/library in this example). Finally, you usually want to push the changes to make them available to others. Updating Changes to Submodules in the Parent Repository: fetching commits from upstream into the parent repository may also change the submodule configuration. The submodules themselves, however, are not updated automatically. This is what the SubmoduleUpdateCommand solves. Using the command without further parametrization will update all registered submodules: it will clone missing submodules and check out the commit specified in the configuration. Like with other submodule commands, there is an addPath() method to only update submodules within the given paths. Cloning a Repository with Submodules: you have probably got the pattern by now – everything to do with submodules is manual labor. Cloning a repository that has a submodule configuration does not clone the submodules by default. But the CloneCommand has a cloneSubmodules attribute, and setting this to true, well, also clones the configured submodules.
Internally, the SubmoduleInitCommand and SubmoduleUpdateCommand are executed recursively after the (parent) repository was cloned and its work directory was checked out. Removing a Submodule: to remove a submodule you would expect to write something like

git.submoduleRm().setPath( ... ).call();

Unfortunately, neither native Git nor JGit has a built-in command to remove submodules. Hopefully this will be resolved in the future. Until then we must remove submodules manually. If you scroll down to the removeSubmodule() method you will see that it is no rocket science. First, the respective submodule section is removed from the .gitmodules and .git/config files. Then the submodule entry in the index is also removed. Finally, the changes – .gitmodules and the removed submodule in the index – are committed, and the submodule content is deleted from the work directory. For-Each Submodule: native Git offers the git submodule foreach command to execute a shell command for each submodule. While JGit doesn't exactly support such a command, it offers the SubmoduleWalk. This class can be used to iterate over the submodules in a repository. The following example fetches upstream commits for all submodules.

@Test
public void testSubmoduleWalk() throws Exception {
  addLibrarySubmodule();

  int submoduleCount = 0;
  Repository parentRepository = parent.getRepository();
  SubmoduleWalk walk = SubmoduleWalk.forIndex( parentRepository );
  while( walk.next() ) {
    Repository submoduleRepository = walk.getRepository();
    Git.wrap( submoduleRepository ).fetch().call();
    submoduleRepository.close();
    submoduleCount++;
  }
  walk.release();

  assertEquals( 1, submoduleCount );
}

With next(), the walk can be advanced to the next submodule. The method returns false if there are no more submodules. When done with a SubmoduleWalk, its allocated resources should be freed by calling release(). Again, if you obtain a Repository instance for a submodule, do not forget to close it.
The SubmoduleWalk can also be used to gather detailed information about submodules. Most of its getters relate to properties of the current submodule, like path, head, remote URL, etc. Sync Remote URLs: we have seen before that submodule configurations are stored in the .gitmodules file at the root of the repository work directory. However, the remote URL can be overridden in .git/config. And then there is the config file of the submodule itself, which in turn can have yet another remote URL. The SubmoduleSyncCommand can be used to reset all remote URLs to the settings in .gitmodules. As you can see, the support for submodules in JGit is almost on a level with native Git. Most of its commands are implemented, or can be emulated with little effort. And if you find that something is not working or missing, you can always ask the friendly and helpful JGit community for assistance.

The term is taken from the section on 'Exploring and Learning Boundaries' in Clean Code by Robert C. Martin ↩

Reference: How to manage Git Submodules with JGit from our JCG partner Rudiger Herrmann at the Code Affine blog....

How can I do This? – With SQL of Course!

Haven't we all been wondering: how can I do this? I have these data in Excel and I want to group / sort / assign / combine … While you could probably pull up a Visual Basic script to do the work, or export the data to Java or any other procedural language of choice, why not just use SQL? The use-case: counting neighbouring colours in a stadium choreography. This might not be an everyday use-case for many of you, but for our office friends at FanPictor, it is. They're creating software to draw a fan choreography directly into a stadium. Here's the use-case on a high level:

- You submit a choreography suggestion
- The event organisers choose the best submission
- The event organisers export the choreography as an Excel file
- The Excel file is fed into a print shop, printing red/red, red/white, white/red, white/white panels (or any other colours)
- The event helpers distribute the coloured panels on the appropriate seats
- The fans get all excited

It's immediately clear what this fun software does, right? Having a look at the Excel spreadsheet: distributing these panels by hand is silly, repetitive work. From experience, our friends at FanPictor wanted to have instructions associated with each panel to indicate:

- whether a consecutive row of identical panels starts or stops
- how many identical panels there are in such a row

"Consecutive" means that within a stadium sector and row, there are adjacent seats with the same (Scene1, Scene2) tuple. How do we solve this problem? We solve this problem with SQL, of course – and with a decent database that supports window functions, e.g. PostgreSQL, or any commercial database of your choice (you won't find this sort of feature in MySQL).
Here's the query:

with data as (
  select d.*,
    row(sektor, row, scene1, scene2) block
  from d
)
select
  sektor,
  row,
  seat,
  scene1,
  scene2,
  case
    when lag (block) over(o) is distinct from block
     and lead(block) over(o) is distinct from block
    then 'start / stop'
    when lag (block) over(o) is distinct from block
    then 'start'
    when lead(block) over(o) is distinct from block
    then 'stop'
    else ''
  end start_stop,
  count(*) over(
    partition by sektor, row, scene1, scene2
  ) cnt
from data
window o as (
  order by sektor, row, seat
)
order by sektor, row, seat;

That's it! Not too hard, is it? Let's go through a couple of details. We're using quite a few awesome SQL standard / PostgreSQL concepts, which deserve to be explained: Row value constructor. The ROW() value constructor is a very powerful feature that can be used to combine several columns (or rows) into a single ROW / RECORD type:

row(sektor, row, scene1, scene2) block

This type can then be used for row value comparisons, saving you a lot of time compared to comparing column by column. The DISTINCT predicate:

lag (block) over(o) is distinct from block

The result of the above window function is compared with the previously constructed ROW using the DISTINCT predicate, which is a great way of comparing things "null-safely" in SQL. Remember that SQL NULLs are some of the hardest things in SQL to get right. Window functions: window functions are a very awesome concept. Without any GROUP BY clause, you can calculate aggregate functions, window functions, ranking functions, etc. in the context of the current row while you're projecting the SELECT clause. For instance:

count(*) over(
  partition by sektor, row, scene1, scene2
) cnt

The above window function counts all rows that are in the same partition ("group") as the current row, given the partition criteria. In other words, all the seats that have the same (scene1, scene2) colouring and that are located in the same (sektor, row).
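To make the lag/lead mechanics tangible, here is a small Java re-implementation of the start/stop labelling over hypothetical sample rows (the data and the class names are made up for illustration; the SQL above remains the authoritative version, and the input is assumed to be pre-sorted by sektor, row, seat):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;

class Choreo {

    record Seat(String sektor, int row, int seat, String scene1, String scene2) {
        // the equivalent of row(sektor, row, scene1, scene2)
        List<Object> block() { return List.of(sektor, row, scene1, scene2); }
    }

    // Labels each seat 'start', 'stop', 'start / stop' or '' like the CASE expression.
    static List<String> startStop(List<Seat> seats) {
        List<String> labels = new ArrayList<>();
        for (int i = 0; i < seats.size(); i++) {
            Object block = seats.get(i).block();
            Object lag  = i > 0 ? seats.get(i - 1).block() : null;                // lag(block) over(o)
            Object lead = i < seats.size() - 1 ? seats.get(i + 1).block() : null; // lead(block) over(o)
            // Objects.equals is null-safe, mirroring IS DISTINCT FROM
            boolean startsHere = !Objects.equals(lag, block);
            boolean stopsHere  = !Objects.equals(lead, block);
            labels.add(startsHere && stopsHere ? "start / stop"
                     : startsHere ? "start"
                     : stopsHere  ? "stop" : "");
        }
        return labels;
    }
}
```

Notice how the null block at the very first and last position falls out naturally, just as lag()/lead() return NULL at the window edges and the DISTINCT predicate treats NULL as "different".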
The other window functions are lead and lag, which return a value from a previous or subsequent row, given a specific ordering: lag (block) over(o), lead(block) over(o) -- ... window o as ( order by sektor, row, seat ) Note also the use of the SQL standard WINDOW clause, which is supported only by PostgreSQL and Sybase SQL Anywhere. In the above snippet, lag() returns the block value of the previous row given the ordering o, whereas lead() would return the next row’s value for block – or NULL, in case of which we’re glad that we used the DISTINCT predicate, before. Note that you can also optionally supply an additional numeric parameter, to indicate that you want to access the second, third, fifth, or eighth, row backwards or forward. SQL is your most powerful and underestimated tool At Data Geekery, we always say that SQL is a device whose mystery is only exceeded by its power If you’ve been following our blog, you may have noticed that we try to evangelise SQL as a great first-class citizen for Java developers. Most of the above features are supported by jOOQ, and translated to your native SQL dialect, if they’re not available. So, if you haven’t already, listen to Peter Kopfler who was so thrilled after our recent jOOQ/SQL talks in Vienna that he’s now all into studying standards and using PostgreSQL: Mind bending talk by @lukaseder about @JavaOOQ at tonight's @jsugtu. My new resolution: Install PostgreSQL and study SQL standard at once. — Peter Kofler (@codecopkofler) April 7, 2014Further reading There was SQL before window functions and SQL after window functionsReference: How can I do This? – With SQL of Course! from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Productive Developers are Smart and Lazy

When I use the terms smart, lazy, and developer, I mean the following:

- Smart as in intelligent and able to think things through (i.e. not smart-ass), not a dreamer who never gets around to writing anything practical
- Lazy as in lazy-loading, that is, waiting to write code (i.e. not couch potato)
- Developer as in energetic and focused on building real-world code solutions

Good development is lazy development; it is when the developer spends the time necessary to think through all the pathways of the solution that he is developing BEFORE writing the code. That is lazy writing of code, i.e. not writing code before you really understand it. The more due diligence a developer does to make sure that he is writing the correct code, the less code needs to be written. This due diligence takes the form of:

- Really understanding the requirements and getting product management (business analysts) to be clear on what the ACTUAL requirements are. They often are not given time to gather requirements; they often don't have access to the right subject matter experts; they sometimes have very poor abilities to synthesize consistent and complete requirements (see When BA means B∪ll$#!t Artist)
- Really making sure that you understand how you are interfacing with other developers on your team and other teams. This involves quite a bit of whiteboarding; often this involves producing diagrams (ideally UML and Visio diagrams)

It takes time to do this due diligence, to make sure that you have consistent requirements and that you will have consistent interfaces with your peers. However, developers are eager to start banging out code and spend hours at their desks doing so. In reality, less than 5% of that time is spent productively (see The Programmer Productivity Paradox). If you see developers spending 100% of their time staring at their screens with no human interaction, then you are looking at some of the worst developers.
It is a bad sign if developers are always coding. Productive developers are constantly checking their understanding of the requirements and making sure that they stay in sync with their team's code. Productive developers are in regular contact with the product managers/business analysts and can often be seen whiteboarding with their peers and architects. There are definitely developers who use their years of experience to become more productive; in fact, among the best developers:

- the ratio of initial coding time was about 20 to 1
- the ratio of debugging times over 25 to 1
- program execution speed about 10 to 1
- program size 5 to 1

However, in the aggregate, developers do not become more productive over time (see No Experience Necessary!), i.e. over thousands of developers there is no correlation between years of experience and productivity. In fact, we have measured productivity regularly, 8 times over the last 50 years, and years of experience do not correlate (in the aggregate) with productivity. Why is lazy writing of code so important? Code is often written before the requirements are understood or gathered. In addition, quickly written code often fails to fit with everyone else's code; often, it is only during integration that this problem is discovered. Good developers are patient and realize that there is a cost to writing code quickly. Developers become psychologically attached to their code. Bad developers are reluctant to change poorly written code. Rather than rewrite suboptimal code, bad developers will simply add more code to make up for deficiencies. Even worse, they tend to blame everyone else for the bad code. What you end up with is band-aid after band-aid, leading to a severely buggy and unstable system. Don't get me wrong: good developers can find themselves in a situation where they have written sub-optimal code.
The difference is that a good developer will recognize a problematic section of code and:

  • Refactor the code if the code is largely doing the right thing
  • Rewrite the code otherwise

When developers produce and maintain sub-optimal code, it becomes harder and harder to change this code as time goes on. That is because their peers will need to write code that interfaces with the sub-optimal code and build clumsy interfaces or work-arounds to make the code work. As the code base grows, too many later code units rely on the functionality of this initial code. Of course, the later code can do little to increase the stability of the code, and bugs multiply when simple changes are made; in short, development becomes slower and slower.

When in doubt, be lazy and write code late.

Reference: Productive Developers are Smart and Lazy from our JCG partner Dalip Mahal at the Accelerated Development blog....

Quick, and a bit dirty, JSON Schema generation with MOXy 2.5.1

So I am working on a new REST API for an upcoming Oracle cloud service these days, and one of the things I needed was the ability to automatically generate a JSON Schema for the beans in my model. I am using MOXy to generate the JSON from POJOs, and as of version 2.5.1 of EclipseLink it now has the ability to generate a JSON Schema from the bean model. There will be a more formal solution integrated into Jersey 2.x at a future date, but this solution will do for the moment if you want to play around with this.

The first class we need to put in place is a model processor, very much an internal Jersey class, that allows us to amend the resource model with extra methods and resources. To each resource in the model we add a JsonSchemaHandler, which does the hard work of generating a new schema. Since this is a simple POC there is no caching going on here; please be aware of this if you are going to use this in production code.

```java
import com.google.common.collect.Lists;
import example.Bean;

import java.io.IOException;
import java.io.StringWriter;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;

import javax.inject.Inject;
import javax.ws.rs.HttpMethod;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.core.Configuration;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.xml.bind.JAXBException;
import javax.xml.bind.SchemaOutputResolver;
import javax.xml.transform.Result;
import javax.xml.transform.stream.StreamResult;

import org.eclipse.persistence.jaxb.JAXBContext;
import org.glassfish.jersey.process.Inflector;
import org.glassfish.jersey.server.ExtendedUriInfo;
import org.glassfish.jersey.server.model.ModelProcessor;
import org.glassfish.jersey.server.model.ResourceMethod;
import org.glassfish.jersey.server.model.ResourceModel;
import org.glassfish.jersey.server.model.RuntimeResource;
import org.glassfish.jersey.server.model.internal.ModelProcessorUtil;
import org.glassfish.jersey.server.wadl.internal.WadlResource;

public class JsonSchemaModelProcessor implements ModelProcessor {

    private static final MediaType JSON_SCHEMA_TYPE = MediaType.valueOf("application/schema+json");
    private final List<ModelProcessorUtil.Method> methodList;

    public JsonSchemaModelProcessor() {
        methodList = Lists.newArrayList();
        methodList.add(new ModelProcessorUtil.Method("$schema", HttpMethod.GET,
                MediaType.WILDCARD_TYPE, JSON_SCHEMA_TYPE, JsonSchemaHandler.class));
    }

    @Override
    public ResourceModel processResourceModel(ResourceModel resourceModel, Configuration configuration) {
        return ModelProcessorUtil.enhanceResourceModel(resourceModel, true, methodList, true).build();
    }

    @Override
    public ResourceModel processSubResource(ResourceModel resourceModel, Configuration configuration) {
        return ModelProcessorUtil.enhanceResourceModel(resourceModel, true, methodList, true).build();
    }

    public static class JsonSchemaHandler implements Inflector<ContainerRequestContext, Response> {

        private final String lastModified = new SimpleDateFormat(WadlResource.HTTPDATEFORMAT).format(new Date());

        @Inject
        private ExtendedUriInfo extendedUriInfo;

        @Override
        public Response apply(ContainerRequestContext containerRequestContext) {

            // Find the resource that we are decorating, then work out the
            // return type on the first GET
            List<RuntimeResource> ms = extendedUriInfo.getMatchedRuntimeResources();
            List<ResourceMethod> rms = ms.get(1).getResourceMethods();
            Class responseType = null;
            found: for (ResourceMethod rm : rms) {
                if ("GET".equals(rm.getHttpMethod())) {
                    responseType = (Class) rm.getInvocable().getResponseType();
                    break found;
                }
            }

            if (responseType == null) {
                throw new WebApplicationException("Cannot resolve type for schema generation");
            }

            try {
                JAXBContext context = (JAXBContext) JAXBContext.newInstance(responseType);

                StringWriter sw = new StringWriter();
                final StreamResult sr = new StreamResult(sw);

                context.generateJsonSchema(new SchemaOutputResolver() {
                    @Override
                    public Result createOutput(String namespaceUri, String suggestedFileName) throws IOException {
                        return sr;
                    }
                }, responseType);

                return Response.ok().type(JSON_SCHEMA_TYPE)
                        .header("Last-modified", lastModified)
                        .entity(sw.toString()).build();
            } catch (JAXBException jaxb) {
                throw new WebApplicationException(jaxb);
            }
        }
    }
}
```

Note the very simple heuristic in the JsonSchemaHandler code: it assumes that for each resource there is a 1:1 mapping to a single JSON Schema element. This of course might not be true for your particular application.

Now that we have the schema generated in a known location, we need to tell the client about it. The first thing we will do is make sure that there is a suitable Link header when the user invokes OPTIONS on a particular resource:

```java
import java.io.IOException;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Link;
import javax.ws.rs.core.UriInfo;

public class JsonSchemaResponseFilter implements ContainerResponseFilter {

    @Context
    private UriInfo uriInfo;

    @Override
    public void filter(ContainerRequestContext containerRequestContext,
                       ContainerResponseContext containerResponseContext) throws IOException {

        String method = containerRequestContext.getMethod();
        if ("OPTIONS".equals(method)) {
            Link schemaUriLink = Link.fromUriBuilder(uriInfo.getRequestUriBuilder()
                    .path("$schema")).rel("describedBy").build();
            containerResponseContext.getHeaders().add("Link", schemaUriLink);
        }
    }
}
```

Since this is JAX-RS 2.x we are working with, we are of course going to bundle all the bits together into a feature:

```java
import javax.ws.rs.core.Feature;
import javax.ws.rs.core.FeatureContext;

public class JsonSchemaFeature implements Feature {

    @Override
    public boolean configure(FeatureContext featureContext) {
        if (!featureContext.getConfiguration().isRegistered(JsonSchemaModelProcessor.class)) {
            featureContext.register(JsonSchemaModelProcessor.class);
            featureContext.register(JsonSchemaResponseFilter.class);
            return true;
        }
        return false;
    }
}
```

I am not going to show my entire set of POJO classes, but just quickly, this is the resource class with the @GET method required by the schema generation code:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/bean")
public class BeanResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Bean getBean() {
        return new Bean();
    }
}
```

And finally here is what you see if you perform a GET on the resource:

```
GET .../resources/bean
Content-Type: application/json

{
  "message" : "hello",
  "other" : {
    "message" : "OtherBean"
  },
  "strings" : [ "one", "two", "three", "four" ]
}
```

And OPTIONS:

```
OPTIONS .../resources/bean
Content-Type: text/plain
Link: <http://.../resources/bean/$schema>; rel="describedBy"

GET, OPTIONS, HEAD
```

And finally if you resolve the schema resource:

```
GET .../resources/bean/$schema
Content-Type: application/schema+json

{
  "$schema" : "http://json-schema.org/draft-04/schema#",
  "title" : "example.Bean",
  "type" : "object",
  "properties" : {
    "message" : { "type" : "string" },
    "other" : { "$ref" : "#/definitions/OtherBean" },
    "strings" : {
      "type" : "array",
      "items" : { "type" : "string" }
    }
  },
  "additionalProperties" : false,
  "definitions" : {
    "OtherBean" : {
      "type" : "object",
      "properties" : {
        "message" : { "type" : "string" }
      },
      "additionalProperties" : false
    }
  }
}
```

There is quite a bit of work to do here, in particular generating the hypermedia extensions based on the declarative linking annotations that I forward-ported into Jersey 2.x a little while back.
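On the client side, the describedBy link advertised by the OPTIONS filter has to be picked out of the raw Link header. A minimal parser sketch in plain Java (the class name and method are hypothetical, and this deliberately ignores edge cases such as commas inside URIs, which a real client would handle via the JAX-RS Link.valueOf API):

```java
// Minimal sketch: extract the target URI of a Link header entry with a given
// relation, e.g.  Link: <http://host/resources/bean/$schema>; rel="describedBy"
public class LinkHeader {

    public static String uriForRel(String linkHeader, String rel) {
        // A Link header may carry several comma-separated links
        for (String link : linkHeader.split(",")) {
            String[] parts = link.split(";");
            String uri = parts[0].trim();
            if (!uri.startsWith("<") || !uri.endsWith(">")) {
                continue; // not a well-formed link-value
            }
            // Look for a rel parameter matching the requested relation
            for (int i = 1; i < parts.length; i++) {
                String param = parts[i].trim();
                if (param.equals("rel=\"" + rel + "\"") || param.equals("rel=" + rel)) {
                    return uri.substring(1, uri.length() - 1); // strip < and >
                }
            }
        }
        return null; // no link with that relation
    }
}
```

A client would feed the header value from the OPTIONS response into uriForRel(header, "describedBy") and then GET the returned URI to fetch the schema.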
But it does point towards a solution, and we get to exercise a variety of techniques to get something working now.

Reference: Quick, and a bit dirty, JSON Schema generation with MOXy 2.5.1 from our JCG partner Gerard Davison at the Gerard Davison blog....
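As noted above, the POC regenerates the schema on every request. One obvious first step towards production readiness is a per-response-type cache; the sketch below is a hypothetical helper (not part of the article's code) that memoizes whatever generation function you plug in, such as the MOXy generation shown earlier:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical schema cache: memoizes the generated schema per response type,
// so the (expensive) generation runs at most once per type.
public class JsonSchemaCache {

    private final Map<Class<?>, String> cache = new ConcurrentHashMap<>();
    private final Function<Class<?>, String> generator;

    public JsonSchemaCache(Function<Class<?>, String> generator) {
        this.generator = generator;
    }

    // computeIfAbsent invokes the generator only on a cache miss
    public String schemaFor(Class<?> responseType) {
        return cache.computeIfAbsent(responseType, generator);
    }
}
```

The JsonSchemaHandler could hold one static instance of this and delegate its schema generation through schemaFor(responseType); note that this assumes schemas do not change at runtime, which holds for a static JAXB bean model.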
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use
