My most useful IntelliJ IDEA keyboard shortcuts

Are you looking for ways to be more productive? It shouldn’t be a secret that performing actions with the keyboard instead of the mouse will save you time. A single action is not a big deal, but what if you perform the same action multiple times a day? Added up, all these actions can have a great impact on your productivity. I’m more or less used to driving most of my actions with keyboard shortcuts. When I was younger, I played Real Time Strategy computer games semi-professionally, including Starcraft and Warcraft III. Starcraft popularized the term APM (Actions per Minute), which counted the number of actions a player performed per minute. Using tools, it was possible to record APMs and tell whether players were using mouse actions only or a keyboard and mouse combination. Players with a keyboard and mouse combination usually had a better chance of winning games than those who just clicked.

So, what does this have to do with code and IntelliJ? Well, I believe that you can increase your development productivity by learning and using keyboard shortcuts to perform the desired actions. You can check the keyboard shortcuts in IntelliJ and you can also check the Productivity Guide, which monitors your most used actions. This information is very useful, but it may be a little difficult to change your habits right away. To help you with this, I will describe my most used shortcuts in IntelliJ. You can start familiarizing yourself with these and slowly introduce additional shortcuts.

CTRL + W / CMD + W – Syntax Aware Selection
This allows you to select code with context. Awesome when you need to select large blocks or just specific parts of a piece of code. If you have this code:

files.getFiles().forEach(auctionFile -> createAuctionFile(realm, auctionFile));

and place the cursor on auctionFile and use the shortcut, it will select auctionFile. Press it again and the selection will expand to auctionFile -> createAuctionFile(realm, auctionFile). Press it again, and the selection will expand to files.getFiles().forEach(auctionFile -> createAuctionFile(realm, auctionFile)). Pressing a final time, you get the full piece of code selected. If you combine it with SHIFT, you can unselect by context as well.

CTRL + E / CMD + E – Recently Viewed Files
This shows you a popup with all the recent files you have opened in the IDE. If you start typing, you can filter the files.

CTRL + SHIFT + E / CMD + SHIFT + E – Recently Edited Files
Same as Recently Viewed Files, but only shows the files you’ve actually changed.

CTRL + B / CMD + B – Go to Declaration
If you place the cursor on a class, method or variable and use the shortcut, you jump immediately to the declaration of the element.

CTRL + SHIFT + ENTER / CMD + SHIFT + ENTER – Complete Statement
This tries to complete your current statement by adding curly braces, or a semicolon and a line change. For instance, if you have the statement System.out.print(), press the shortcut once to add an ending semicolon. Press it again to add a new line and position the cursor aligned with the last line. Another example: for if (condition == true), press the shortcut to add opening and closing curly braces and place the cursor inside the if body with additional indentation.

CTRL + N / CMD + N – Go to Class
This one allows you to search by name for a Java file in your project. If you combine it with SHIFT, it searches any file. Adding ALT on top of that, it searches for symbols. In the search area, you can use CamelHumps notation (type only the capital letters of the class name) to filter files.

CTRL + SHIFT + SPACE / CMD + SHIFT + SPACE – Smart Type Completion
I didn’t mention it before, but I guess you are familiar with auto completion via CTRL + SPACE or CMD + SPACE. If you add SHIFT you get smart completion. This means that the IDE will try to match expected types that suit the current context and filter out all the other options.

CTRL + ALT + ← / CMD + ALT + ← – Navigate Back
This allows you to navigate back, like a browser action. It remembers where your cursor was positioned and navigates back even to other files.

CTRL + ALT + → / CMD + ALT + → – Navigate Forward
It’s like Navigate Back but goes forward. Duh!

CTRL + SHIFT + F7 / CMD + SHIFT + F7 – Highlight Usages
Place the cursor on an element and after pressing the shortcut the IDE will highlight all the occurrences of the selected element.

There are many more keyboard shortcuts; almost every action has an equivalent shortcut. It’s hard to learn them all, and it takes time and practice. I still learn new things every week, and if for some reason I don’t code as much for a few days, I forget the new shortcuts I’ve learned. It’s practice, practice, practice! Try to learn a few and master them instead of trying to do everything in one go. It’s easier! An IntelliJ plugin exists that tells you which shortcuts you should use when you use the mouse. It’s called Key Promoter, but unfortunately it seems it’s not maintained anymore. Maybe I can update it for the latest IntelliJ versions. I would also like to see in the Productivity Guide a count of actions performed by shortcuts or mouse. If I find some free time, maybe I can do that too. Hope you enjoyed it.

Reference: My most useful IntelliJ IDEA keyboard shortcuts from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog....

Quick way to check if the REST API is alive – GET details from Manifest file

There might be cases when you want to quickly verify that your REST API, deployed on a dev, test or prod environment, is reachable at all. A common way to do this is to build a generic resource that delivers, for example, the version of the deployed API. You can trigger a request to this resource manually or, even better, have a Jenkins/Hudson job that runs a checkup after deployment. In this post, I will present how to implement such a service that reads the implementation details from the application’s manifest file. The API verified is the one developed in the Tutorial – REST API design and implementation in Java with Jersey and Spring.

Software used:
- Jersey JAX-RS implementation 2.14
- Spring 4.1.4
- Maven 3.1.1
- JDK 7

REST resource

I have developed two REST resources reading from the Manifest file:
- /manifest – returns the manifest’s main attributes as key-value pairs
- /manifest/implementation-details – returns only the implementation details from the manifest file

Manifest REST resource

@Path("/manifest")
public class ManifestResource {

    @Autowired
    ManifestService manifestService;

    @GET
    @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    public Response getManifestAttributes() throws FileNotFoundException, IOException {
        Attributes manifestAttributes = manifestService.getManifestAttributes();
        return Response.status(Response.Status.OK)
                .entity(manifestAttributes)
                .build();
    }

    @Path("/implementation-details")
    @GET
    @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    public Response getVersion() throws FileNotFoundException, IOException {
        ImplementationDetails implementationVersion = manifestService.getImplementationVersion();
        return Response.status(Response.Status.OK)
                .entity(implementationVersion)
                .build();
    }
}

Request

GET request example – implementation details

GET http://localhost:8888/demo-rest-jersey-spring/manifest HTTP/1.1
Accept-Encoding: gzip,deflate
Accept: application/json
Host: localhost:8888
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.1.1 (java 1.5)

Response – 200 OK

Response in JSON format

{
  "Implementation-Title": "DemoRestWS",
  "Implementation-Version": "0.0.1-SNAPSHOT",
  "Implementation-Vendor-Id": "org.codingpedia",
  "Built-By": "ama",
  "Build-Jdk": "1.7.0_40",
  "Manifest-Version": "1.0",
  "Created-By": "Apache Maven 3.1.1",
  "Specification-Title": "DemoRestWS",
  "Specification-Version": "0.0.1-SNAPSHOT"
}

The returned values in case of success (HTTP Status 200 OK) contain different default data related to implementation and specification details. These are automatically generated in the Manifest file by the Maven plugin, which I will present in the next section.

Generate Manifest file with Maven

Since the demo application is a web application, I am using the Apache maven-war-plugin, supported by the Apache Maven Archiver, to generate the Manifest file:

maven-war-plugin configuration

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <version>2.5</version>
  <configuration>
    <warName>${project.artifactId}</warName>
    <archive>
      <manifest>
        <addDefaultImplementationEntries>true</addDefaultImplementationEntries>
        <addDefaultSpecificationEntries>true</addDefaultSpecificationEntries>
      </manifest>
    </archive>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>manifest</goal>
      </goals>
      <inherited>true</inherited>
    </execution>
  </executions>
</plugin>

The addDefaultImplementationEntries and addDefaultSpecificationEntries flags will generate default implementation and specification details, respectively, out of the project properties defined in the pom.xml file:

Default implementation details

Implementation-Title: ${project.name}
Implementation-Version: ${project.version}
Implementation-Vendor-Id: ${project.groupId}
Implementation-Vendor: ${project.organization.name}
Implementation-URL: ${project.url}

and, respectively:

Default specification details

Specification-Title: ${project.name}
Specification-Version: ${project.version}
Specification-Vendor: ${project.organization.name}

See Apache Maven Archiver for further details. Notice that in order to also generate the MANIFEST.MF file in the file system under webapp/META-INF, you need to bind the manifest goal to an execution phase (e.g. package):

Bind manifest goal to package phase

<executions>
  <execution>
    <phase>package</phase>
    <goals>
      <goal>manifest</goal>
    </goals>
    <inherited>true</inherited>
  </execution>
</executions>

Read from Manifest file

Reading from the manifest file occurs in the injected ManifestService class:

ManifestService.java

public class ManifestService {

    @Autowired
    ServletContext context;

    Attributes getManifestAttributes() throws FileNotFoundException, IOException {
        InputStream resourceAsStream = context.getResourceAsStream("/META-INF/MANIFEST.MF");
        Manifest mf = new Manifest();
        mf.read(resourceAsStream);
        Attributes atts = mf.getMainAttributes();
        return atts;
    }

    ImplementationDetails getImplementationVersion() throws FileNotFoundException, IOException {
        String appServerHome = context.getRealPath("/");
        File manifestFile = new File(appServerHome, "META-INF/MANIFEST.MF");
        Manifest mf = new Manifest();
        mf.read(new FileInputStream(manifestFile));
        Attributes atts = mf.getMainAttributes();
        ImplementationDetails response = new ImplementationDetails();
        response.setImplementationTitle(atts.getValue("Implementation-Title"));
        response.setImplementationVersion(atts.getValue("Implementation-Version"));
        response.setImplementationVendorId(atts.getValue("Implementation-Vendor-Id"));
        return response;
    }
}

To access the MANIFEST.MF file you need to inject the ServletContext and call one of its methods:
- ServletContext#getResourceAsStream() – the preferred way
- ServletContext#getRealPath() – gets the real path corresponding to the given virtual path. The real path returned will be in a form appropriate to the computer and operating system on which the servlet container is running, including the proper path separators. Its biggest problem in this case is that if you don’t deploy the .war exploded, you won’t have access to the manifest file.

Java EE version

In a Java EE environment, you would have the ServletContext injected via the @Context annotation:

Java EE implementation version

public class ManifestResource {

    @Context
    ServletContext context;

    @GET
    @Produces({ MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML })
    public Response getManifestAttributes() throws FileNotFoundException, IOException {
        InputStream resourceAsStream = context.getResourceAsStream("/META-INF/MANIFEST.MF");
        Manifest mf = new Manifest();
        mf.read(resourceAsStream);
        Attributes atts = mf.getMainAttributes();
        return Response.status(Response.Status.OK)
                .entity(atts)
                .build();
    }
    ...
}

Here you have it – a quick way to verify that your REST API is reachable.

Resources:
- Apache Maven
- Apache Maven Archiver
- Introduction to the Build Lifecycle#Built-in_Lifecycle_Bindings
- Oracle docs – Working with Manifest Files: The Basics
- Stackoverflow: How to get Maven Artifact version at runtime?
- Stackoverflow: How to Get Maven Project Version From Java Method as Like at Pom

Reference: Quick way to check if the REST API is alive – GET details from Manifest file from our JCG partner Adrian Matei at the Codingpedia.org blog....
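To automate the checkup job mentioned at the start, a small client can probe the manifest resource and fail the build if it is unreachable. Here is a minimal sketch using only the JDK; the base URL and the class name are assumptions matching the example request above:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class ApiAliveCheck {

    // Returns true if the manifest resource answers with HTTP 200
    public static boolean isAlive(String baseUrl) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(baseUrl + "/manifest").openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Accept", "application/json");
            conn.setConnectTimeout(5000);
            conn.setReadTimeout(5000);
            int status = conn.getResponseCode();
            conn.disconnect();
            return status == 200;
        } catch (Exception e) {
            return false; // unreachable counts as "not alive"
        }
    }

    public static void main(String[] args) {
        // Hypothetical deployment URL; adjust to your environment
        String baseUrl = args.length > 0
                ? args[0]
                : "http://localhost:8888/demo-rest-jersey-spring";
        if (isAlive(baseUrl)) {
            System.out.println("API is alive");
        } else {
            System.out.println("API is NOT reachable");
            System.exit(1); // non-zero exit fails a Jenkins/Hudson job
        }
    }
}
```

Run as a post-deployment step, the non-zero exit code marks the build as failed when the API does not respond.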

Concepts of Serialization

With all this talk about why Optional isn’t serializable and what to do about it (coming up soon), let’s have a closer look at serialization.

Overview

This post presents some key concepts of serialization. It tries to do so succinctly without going into great detail, which includes keeping advice to a minimum. It has no narrative and is more akin to a wiki article. The main source is Joshua Bloch’s excellent book Effective Java, which has several items covering serialization (1st edition: items 54-57; 2nd edition: items 74-78). Way more information can be found in the official serialization specification.

Definition

With serialization, instances can be encoded as a byte stream (called serializing) and such a byte stream can be turned back into an instance (called deserializing). The key feature is that both processes do not have to be executed by the same JVM. This makes serialization a mechanism for storing objects on disk between system runs or transferring them between different systems for remote communication.

Extralinguistic Character

Serialization is a somewhat strange mechanism. It converts instances into a stream of bytes and vice versa with only little visible interaction with the class. It neither calls accessors to get to the values nor uses a constructor to create instances. And for that to happen, all the developer of the class is required to do is implement an interface with no methods. Bloch describes this as an extralinguistic character, and it is the root of many of the issues with serialization.

Methods

The serialization process can be customized by implementing some of the following methods. They can be private and the JVM will find them based on their signature. The descriptions are taken from the class comment on Serializable.

private void writeObject(java.io.ObjectOutputStream out) throws IOException
Is responsible for writing the state of the object for its particular class so that the corresponding readObject method can restore it.
private void readObject(java.io.ObjectInputStream in) throws IOException, ClassNotFoundException
Is responsible for reading from the stream and restoring the class’s fields.

private void readObjectNoData() throws ObjectStreamException
Is responsible for initializing the state of the object for its particular class in the event that the serialization stream does not list the given class as a superclass of the object being deserialized.

ANY-ACCESS-MODIFIER Object writeReplace() throws ObjectStreamException
Designates an alternative object to be used when writing an object of this class to the stream.

ANY-ACCESS-MODIFIER Object readResolve() throws ObjectStreamException
Designates a replacement object when an instance of this class is read from the stream.

A good way to deal with the extralinguistic character of deserialization is to see all involved methods as an additional constructor of that class. The object streams involved in (de)serializing provide these helpful default (de)serialization methods:

java.io.ObjectOutputStream.defaultWriteObject() throws IOException
Writes the non-static and non-transient fields of the current class to this stream.

java.io.ObjectInputStream.defaultReadObject() throws IOException, ClassNotFoundException
Reads the non-static and non-transient fields of the current class from this stream.

Invariants

One effect of not using a constructor to create instances is that a class’s invariants are not automatically established on deserialization. So while a class usually checks all constructor arguments for validity, this mechanism is not automatically applied to the deserialized values of fields. Implementing such a check for deserialization is an extra effort which easily leads to code duplication and all the problems that typically ensue. If forgotten or done carelessly, the class is open to bugs or security holes.

Serialized Form

The structure of a serializable class’s byte stream encoding is called its serialized form.
It is mainly defined by the names and types of the class’s fields. The serialized form has some properties that are not immediately obvious. While some of the problematic ones can be mitigated by carefully defining the form, they will usually still be a burden on future development of a class.

Public API

The most important property of the serialized form is: it is part of the class’s public API! From the moment a serializable class is deployed, it has to be assumed that serialized instances exist. And it is usually expected of a system to support the deserialization of instances which were created with older versions of the same system. Users of a class rely on its serialized form as much as on its documented behavior.

Reduced Information Hiding

The concept of information hiding allows a class to maintain its documented behavior while changing its way of implementing it. This expressly includes the representation of its state, which is usually hidden and can be adapted as needed. Since the serialized form, which captures that representation of the state, becomes part of the public API, so does the representation itself. A serializable class only effectively hides the implementation of its behavior while exposing the definition of that behavior and the state it uses to implement it.

Reduced Flexibility

Hence, just as changing a class’s API (e.g. by changing or removing methods or altering their documented behavior) can break code using it, so does changing the serialized form. It is easy to see that improving a class becomes vastly more difficult if its fields are fixed. This greatly reduces the flexibility to change such a class if the need arises.

“Making something in the JDK serializable makes a dramatic increase in our maintenance costs, because it means that the representation is frozen for all time.
This constrains our ability to evolve implementations in the future, and the number of cases where we are unable to easily fix a bug or provide an enhancement, which would otherwise be simple, is enormous. So, while it may look like a simple matter of ‘implements Serializable’ to you, it is more than that. The amount of effort consumed by working around an earlier choice to make something serializable is staggering.” – Brian Goetz

Increased Testing Effort

If a serializable class is changed, it is necessary to test whether serialization and deserialization work across different versions of the system. This is no trivial task and will create measurable costs.

Class Representations

The serialized form represents a class, but not all representations are equal.

Physical

If a class defines fields with reference types (i.e. non-primitives), its instances contain pointers to instances of those types. Those instances, in turn, can point to other ones and so on. This defines a directed graph of interlinked instances. The physical representation of an instance is the graph of all instances reachable from it. As an example, consider a doubly linked list. Each element of the list is contained in a node and each node knows the previous and the next one. This is basically already the list’s physical representation. A list with a dozen elements would be a graph of 13 nodes. The list instance points to the first and last list node and starting from there one can traverse the ten nodes in between in both directions. One way to serialize an instance of a class is to simply traverse the graph and serialize each instance. This effectively writes the physical representation to the byte stream, which is the default serialization mechanism. While the physical representation of a class is usually an implementation detail, this way of serializing it exposes this otherwise hidden information.
Serializing the physical representation effectively binds the class to it, which makes it extremely hard to change in the future. There are other disadvantages, which are described in Effective Java (p. 297 in the 2nd edition).

Logical

The logical representation of a class’s state is often more abstract. It is usually further removed from the implementation details and contains less information. When trying to formulate this representation, it is advisable to push both aspects as far as possible. It should be as implementation independent as possible and should be minimal in the sense that leaving out any bit of information makes it impossible to recreate an instance from it. To continue the example of the linked list, consider what it actually represents: just some elements in a certain order. Whether these are contained in nodes or not and how those hypothetical nodes might be linked is irrelevant. A minimal, logical representation would hence only consist of those elements. (In order to properly recreate an instance from the stream, it is necessary to add the number of elements. While this is redundant information, it doesn’t seem to hurt much.) So a good logical representation only captures the state’s abstract structure and not the concrete fields representing it. This implies that while changing the former is still problematic, the latter can be evolved freely. Compared to serializing the physical representation, this restores a big part of the flexibility for further development of the class.

Serialization Patterns

There are at least three ways to serialize a class. Calling all of them patterns is a little overboard, so the term is used loosely.

Default Serialized Form

This is as simple as adding implements Serializable to the declaration. The serialization mechanism will then write all non-transient fields to the stream and on deserialization assign all the values present in a stream to their matching fields.
This is the most straightforward way to serialize a class. It is also the one where all the sharp edges of serialization are unblunted and waiting for their turn to really hurt you. The serialized form captures the physical representation and there is absolutely no checking of invariants.

Custom Serialized Form

By implementing writeObject, a class can define what gets written to the byte stream. A matching readObject must read an according stream and use the information to assign values to fields. This approach allows more flexibility than the default form and can be used to serialize the class’s logical representation. There are some details to consider, and I can only recommend reading the respective item in Effective Java (item 55 in the 1st edition; item 75 in the 2nd edition).

Serialization Proxy Pattern

In this case the instance to serialize is replaced by a proxy. This proxy is written to and read from the byte stream instead of the original instance. This is achieved by implementing the methods writeReplace and readResolve. In most cases this is by far the best approach to serialization. It deserves its own post and it will get one soon (stay tuned).

Misc

Some other details about serialization.

Artificial Byte Stream

The happy path of deserialization assumes a byte stream which was created by serializing an instance of the same class. While doing so is alright in most situations, it must be avoided in security critical code. This includes any publicly reachable service which uses serialization for remote communication. Instead, the assumption must be that an attacker carefully handcrafted the stream to violate the class’s invariants. If this is not countered, the result can be an unstable system which might crash, corrupt data or be open to attacks.

Documentation

Javadoc has special tags to document the serialized form of a class.
For this it creates a special page in the docs where it lists the following information:

- The tag @serialData can annotate methods, and the following comment is supposed to document the data written to the byte stream. The method signature and the comment are shown under Serialization Methods.
- The tag @serial can annotate fields, and the following comment is supposed to describe the field. The field’s type and name and the comment are then listed under Serialized Fields.

A good example is the documentation for the LinkedList.

Reference: Concepts of Serialization from our JCG partner Nicolai Parlog at the CodeFx blog....
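The ideas above – serializing a logical representation, establishing invariants through a real constructor, and the serialization proxy pattern – can be combined in one small sketch. The class and its fields are made up for the example:

```java
import java.io.InvalidObjectException;
import java.io.ObjectInputStream;
import java.io.Serializable;

// A range with the invariant low <= high.
public final class Range implements Serializable {

    private final int low;
    private final int high;

    public Range(int low, int high) {
        if (low > high)
            throw new IllegalArgumentException("low must not exceed high");
        this.low = low;
        this.high = high;
    }

    public int low()  { return low; }
    public int high() { return high; }

    // Serialize a proxy instead of this instance.
    private Object writeReplace() {
        return new SerializationProxy(this);
    }

    // Reject direct deserialization; only the proxy may appear in a stream.
    private void readObject(ObjectInputStream in) throws InvalidObjectException {
        throw new InvalidObjectException("Proxy required");
    }

    // The proxy captures the logical representation: just the two bounds.
    private static final class SerializationProxy implements Serializable {
        private static final long serialVersionUID = 1L;
        private final int low;
        private final int high;

        SerializationProxy(Range range) {
            this.low = range.low;
            this.high = range.high;
        }

        // Deserialization goes through the real constructor, so the
        // invariant is checked even for handcrafted byte streams.
        private Object readResolve() {
            return new Range(low, high);
        }
    }
}
```

A round trip through ObjectOutputStream and ObjectInputStream yields an equal Range, while an artificial stream claiming low > high fails in the constructor instead of producing a corrupt instance.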

2015 Starts Off Strong for Java 8

JDK 8 is starting 2015 with a surge in popularity in terms of blog posts and articles. This coincides with Java being automatically upgraded to JDK 8 this month. In this post, I list and briefly describe some of the numerous articles and posts on JDK 8 that have been published already in 2015. JDK 8 Streams have been justifiably popular in recent posts. My first blog post of 2015 was Stream-Powered Collections Functionality in JDK 8; it demonstrates performing some common functions against Java collections with greater ease and conciseness using Streams than was possible before Streams. The post Fail-fast validations using Java 8 streams looks at fluent fail-fast validation of state and was improved from its original writing based on feedback. The post Java 8: No more loops talks about streams providing concise alternatives to looping over collections. What is the difference between Collections and Streams in Java 8? and Java 8 Streams API as Friendly ForkJoinPool Facade were also posted this month. Lambda expressions are obviously a big part of JDK 8. The post Java 8 Stream and Lambda Expressions – Parsing File Example demonstrates the use of lambda expressions and streams to parse a log file. A quick overview of features new to JDK 8 is available in What Are the Most Important New Features in the Java 8 Release?. The post Java 8 Default Methods Explained in 5 minutes describes JDK 8’s default methods. Daniel Shaya warns of two potential caveats of using JDK 8 functionality in the posts Java8 Sorting – Performance Pitfall and What’s Stopping Me Using Java8 Lambdas – Try Debugging Them. Peter Ledbrook reexamines the use of Groovy in light of JDK 8 in the post Groovy in light of Java 8. We are only half-way through the first month of 2015 and JDK 8 continues to see increased adoption and, correspondingly, increased online coverage of its features.
Most of the focus seems to be on the functional aspects that JDK 8 brings to Java.Reference: 2015 Starts Off Strong for Java 8 from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
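Several of the posts listed above contrast loops with stream pipelines; the flavor of that difference can be seen in a small sketch (the data and the query are made up for illustration):

```java
import java.util.Arrays;
import java.util.List;

public class LoopsVsStreams {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ada", "Linus", "Grace", "Alan");

        // Pre-JDK 8: an explicit loop with mutable state
        int count = 0;
        for (String name : names) {
            if (name.startsWith("A")) {
                count++;
            }
        }
        System.out.println(count); // prints 2

        // JDK 8: the same query as a stream pipeline with a lambda
        long streamCount = names.stream()
                .filter(name -> name.startsWith("A"))
                .count();
        System.out.println(streamCount); // prints 2
    }
}
```

The stream version states what is computed (filter, then count) rather than how, which is the recurring theme of the functional-style coverage above.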

Getting Started with Gradle: Creating a Multi-Project Build

Although we can create a working application by using only one module, sometimes it is wiser to divide our application into multiple smaller modules. Because this is a rather common use case, every self-respecting build tool must support it, and Gradle is no exception. If a Gradle project has more than one module, it is called a multi-project build. This blog post describes how we can create a multi-project build with Gradle. Let’s start by taking a look at the requirements of our Gradle build.

Additional Reading: If you are not familiar with Gradle, you should read the following blog posts before you continue reading this blog post:

- Getting Started with Gradle: Introduction helps you to install Gradle, describes the basic concepts of a Gradle build, and describes how you can add functionality to your build by using Gradle plugins.
- Getting Started with Gradle: Our First Java Project describes how you can create a Java project by using Gradle and package your application to an executable jar file.
- Getting Started with Gradle: Dependency Management describes how you can manage the dependencies of your Gradle project.

The Requirements of Our Gradle Build

Our example application has two modules:

- The core module contains the common components that are used by the other modules of our application. In our case, it contains only one class: the MessageService class returns the string ‘Hello World!’. This module has only one dependency: it has one unit test that uses JUnit 4.11.
- The app module contains the HelloWorld class that starts our application, gets a message from a MessageService object, and writes the received message to a log file. This module has two dependencies: it needs the core module and uses Log4j 1.2.17 as a logging library.

Our Gradle build also has two other requirements:

- We must be able to run our application with Gradle.
- We must be able to create a runnable binary distribution that doesn’t use the so-called “fat jar” approach.

If you don’t know how you can run your application and create a runnable binary distribution with Gradle, you should read the following blog post before you continue reading this blog post:

- Getting Started with Gradle: Creating a Binary Distribution

Let’s move on and find out how we can create a multi-project build that fulfills our requirements.

Creating a Multi-Project Build

Our next step is to create a multi-project Gradle build that has two subprojects: app and core. Let’s start by creating the directory structure of our Gradle build.

Creating the Directory Structure

Because the core and app modules use Java, they both use the default project layout of a Java project. We can create the correct directory structure by following these steps:

1. Create the root directory of the core module (core) and create the following subdirectories:
   - The src/main/java directory contains the source code of the core module.
   - The src/test/java directory contains the unit tests of the core module.
2. Create the root directory of the app module (app) and create the following subdirectories:
   - The src/main/java directory contains the source code of the app module.
   - The src/main/resources directory contains the resources of the app module.

We have now created the required directories. Our next step is to configure our Gradle build. Let’s start by configuring the projects that are included in our multi-project build.

Configuring the Projects that Are Included in Our Multi-Project Build

We can configure the projects that are included in our multi-project build by following these steps:

1. Create the settings.gradle file in the root directory of the root project. A multi-project Gradle build must have this file because it specifies the projects that are included in the multi-project build.
2. Ensure that the app and core projects are included in our multi-project build.

Our settings.gradle file looks as follows:

include 'app'
include 'core'

Additional Reading:
- Gradle User Guide: 56.2 Settings file
- Gradle DSL Reference: Settings

Let’s move on and configure the core project.

Configuring the Core Project

We can configure the core project by following these steps:

1. Create the build.gradle file in the root directory of the core project.
2. Create a Java project by applying the Java plugin.
3. Ensure that the core project gets its dependencies from the central Maven2 repository.
4. Declare the JUnit dependency (version 4.11) and use the testCompile configuration. This configuration describes that the core project needs the JUnit library before its unit tests can be compiled.

The build.gradle file of the core project looks as follows:

apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    testCompile 'junit:junit:4.11'
}

Additional Reading:
- Getting Started with Gradle: Our First Java Project
- Getting Started with Gradle: Dependency Management

Let’s move on and configure the app project.

Configuring the App Project

Before we can configure the app project, we have to take a quick look at the dependency management of dependencies that are part of the same multi-project build. These dependencies are called project dependencies. If our multi-project build has projects A and B, and the compilation of project B requires project A, we can configure this dependency by adding the following dependency declaration to the build.gradle file of project B:

dependencies {
    compile project(':A')
}

Additional Reading:
- Gradle User Guide: 51.4.3. Project dependencies
- Gradle User Guide: 57.7. Project lib dependencies

We can now configure the app project by following these steps:

1. Create the build.gradle file in the root directory of the app project.
2. Create a Java project by applying the Java plugin.
Ensure that the app project gets its dependencies from the central Maven 2 repository. Configure the required dependencies. The app project has two dependencies that are required when it is compiled:
- Log4j (version 1.2.17)
- The core module

Create a runnable binary distribution.

The build.gradle file of the app project looks as follows:

apply plugin: 'application'
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile 'log4j:log4j:1.2.17'
    compile project(':core')
}

mainClassName = 'net.petrikainulainen.gradle.client.HelloWorld'

task copyLicense {
    outputs.file new File("$buildDir/LICENSE")
    doLast {
        copy {
            from "LICENSE"
            into "$buildDir"
        }
    }
}

applicationDistribution.from(copyLicense) {
    into ""
}

Additional Reading:
- Getting Started with Gradle: Creating a Binary Distribution

Let’s move on and remove the duplicate configuration found in the build scripts of the core and app projects.

Removing Duplicate Configuration

When we configured the subprojects of our multi-project build, we added duplicate configuration to the build scripts of the core and app projects:
- Because both projects are Java projects, they apply the Java plugin.
- Both projects use the central Maven 2 repository.

In other words, both build scripts contain the following configuration:

apply plugin: 'java'

repositories {
    mavenCentral()
}

Let’s move this configuration to the build.gradle file of our root project. Before we can do this, we have to learn how we can configure our subprojects in the build.gradle file of our root project. 
If we want to add configuration to a single subproject called core, we have to add the following snippet to the build.gradle file of our root project:

project(':core') {
    //Add core specific configuration here
}

In other words, if we want to move the duplicate configuration to the build script of our root project, we have to add the following configuration to its build.gradle file:

project(':app') {
    apply plugin: 'java'

    repositories {
        mavenCentral()
    }
}

project(':core') {
    apply plugin: 'java'

    repositories {
        mavenCentral()
    }
}

This doesn’t really change our situation. We still have duplicate configuration in our build scripts. The only difference is that the duplicate configuration is now found in the build.gradle file of our root project. Let’s eliminate this duplicate configuration.

If we want to add common configuration to the subprojects of our root project, we have to add the following snippet to the build.gradle file of our root project:

subprojects {
    //Add common configuration here
}

After we have moved the duplicate configuration to the build.gradle file of our root project, it looks as follows:

subprojects {
    apply plugin: 'java'

    repositories {
        mavenCentral()
    }
}

If we have configuration that is shared by all projects of our multi-project build, we should add the following snippet to the build.gradle file of our root project:

allprojects {
    //Add configuration here
}

Additional Reading:
- Gradle User Guide: 57.1 Cross project configuration
- Gradle User Guide: 57.2 Subproject configuration

We can now remove the duplicate configuration from the build scripts of our subprojects. 
The new build scripts of our subprojects look as follows:

The core/build.gradle file looks as follows:

dependencies {
    testCompile 'junit:junit:4.11'
}

The app/build.gradle file looks as follows:

apply plugin: 'application'

dependencies {
    compile 'log4j:log4j:1.2.17'
    compile project(':core')
}

mainClassName = 'net.petrikainulainen.gradle.client.HelloWorld'

task copyLicense {
    outputs.file new File("$buildDir/LICENSE")
    doLast {
        copy {
            from "LICENSE"
            into "$buildDir"
        }
    }
}

applicationDistribution.from(copyLicense) {
    into ""
}

We have now created a multi-project Gradle build. Let’s find out what we just did.

What Did We Just Do?

When we run the command gradle projects in the root directory of our multi-project build, we see the following output:

> gradle projects
:projects

------------------------------------------------------------
Root project
------------------------------------------------------------

Root project 'multi-project-build'
+--- Project ':app'
\--- Project ':core'

To see a list of the tasks of a project, run gradle <project-path>:tasks
For example, try running gradle :app:tasks

BUILD SUCCESSFUL

As we can see, this command lists the subprojects (app and core) of our root project. This means that we have just created a multi-project Gradle build that has two subprojects. When we run the command gradle tasks in the root directory of our multi-project build, we see the following output (only the relevant part is shown below):

> gradle tasks
:tasks

------------------------------------------------------------
All tasks runnable from root project
------------------------------------------------------------

Application tasks
-----------------
distTar - Bundles the project as a JVM application with libs and OS specific scripts.
distZip - Bundles the project as a JVM application with libs and OS specific scripts. 
installApp - Installs the project as a JVM application along with libs and OS specific scripts
run - Runs this project as a JVM application

As we can see, we can run our application by using Gradle and create a binary distribution that doesn’t use the so-called “fat jar” approach. This means that we have fulfilled all requirements of our Gradle build.

Additional Information:
- Gradle User Guide: 11.6. Obtaining information about your build

Let’s move on and find out what we learned from this blog post.

Summary

This blog post has taught us three things:
- A multi-project build must have the settings.gradle file in the root directory of the root project because it specifies the projects that are included in the multi-project build.
- If we have to add common configuration or behavior to all projects of our multi-project build, we should add this configuration (use allprojects) to the build.gradle file of our root project.
- If we have to add common configuration or behavior to the subprojects of our root project, we should add this configuration (use subprojects) to the build.gradle file of our root project.

P.S. You can get the example application of this blog post from Github.

Reference: Getting Started with Gradle: Creating a Multi-Project Build from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....

Scala snippets 4: Pimp my library pattern with type classes.

I wanted to write an article on the fun parts of Scalaz, but thought it would be best to first look a bit closer at the type class system provided by Scala. So in this snippet we’ll explore a small part of how type classes work and how they can help you write more generic code. More snippets can be found here:

- Scala snippets 1: Folding
- Scala snippets 2: List symbol magic
- Scala snippets 3: Lists together with Map, flatmap, zip and reduce
- Scala snippets 4: Pimp my library pattern with type classes

Type classes

Looking at the type class definition from Wikipedia might quickly scare you away: “In computer science, a type class is a type system construct that supports ad hoc polymorphism. This is achieved by adding constraints to type variables in parametrically polymorphic types. Such a constraint typically involves a type class ‘T’ and a type variable ‘a’, and means that ‘a’ can only be instantiated to a type whose members support the overloaded operations associated with ‘T’.”

Basically, what type classes allow is adding functionality to existing classes without needing to touch those classes. We could, for instance, add standard “comparable” functionality to Strings without having to modify the existing classes. Note that you could also just use implicit functions to add custom behavior (e.g. the “Pimp my library” pattern, https://coderwall.com/p/k_1jzw/scala-s-pimp-my-library-pattern-example), but using type classes is much safer and more flexible. A good discussion on this can be found here (http://stackoverflow.com/questions/8524878/implicit-conversion-vs-type-c…).

So, enough introduction, let’s look at a very simple example of type classes. Creating a type class in Scala takes a number of different steps. The first step is to create a trait. This trait is the actual type class and defines the functionality that we want to provide. For this article we’ll create a very contrived example where we define a “Duplicate” trait. 
With this trait we duplicate a specific object. So when we get a string value of “hello”, we want to return “hellohello”; when we get an integer we return value*value; when we get a char ‘c’, we return “cc”. All this in a type safe manner. Our type class is actually very simple:

trait Duplicate[A,B] {
  def duplicate(value: A): B
}

Note that this looks a lot like a Scala mix-in trait, but it is used in a completely different way. Once we’ve got the type class definition, the next step is to create some default implementations. We do this in the trait’s companion object.

object Duplicate {

  // implemented as a singleton object
  implicit object DuplicateString extends Duplicate[String,String] {
    def duplicate(value: String) = value.concat(value)
  }

  // or directly, which I like better
  implicit val duplicateInt = new Duplicate[Int, Int] {
    def duplicate(value: Int) = value * value
  }

  implicit val duplicateChar = new Duplicate[Char, String] {
    def duplicate(value: Char) = value.toString + value.toString
  }
}

As you can see, we can do this in a couple of different ways. The most important part here is the implicit keyword. Using this keyword we can make these members implicitly available under certain circumstances. When you look at the implementations you’ll notice that they are all very straightforward. We just implement the trait we defined for specific types: in this case for a string, an integer and a character. Now we can start using the type class.

object DuplicateWriter {

  // import the conversions for use within this object
  import conversions.Duplicate

  // Generic method that takes a value, and looks for an implicit
  // conversion of type Duplicate. If no implicit Duplicate is available
  // an error will be thrown. Scala will first look in the local
  // scope before looking for implicits in the companion object
  // of the trait class. 
  def write[A,B](value: A)(implicit dup: Duplicate[A, B]) : B = {
    dup.duplicate(value)
  }
}

// simple app that runs our conversions
object Example extends App {

  import snippets.conversions.Duplicate

  implicit val anotherDuplicateInt = new Duplicate[Int, Int] {
    def duplicate(value: Int) = value + value
  }

  println(DuplicateWriter.write("Hello"))
  println(DuplicateWriter.write('c'))
  println(DuplicateWriter.write(10))
  println(DuplicateWriter.write(10)(Duplicate.duplicateInt))
}

In this example we’ve created a DuplicateWriter which calls the duplicate function on the provided value by looking for a matching type class implementation. In our Example object we also override the default Duplicate instance for the Int type with a custom one (value + value instead of value * value), which takes precedence because it is in the local scope. In the last line we explicitly pass a specific Duplicate instance for the DuplicateWriter to use. The output of this application is:

HelloHello
cc
20
100

If we run with an unsupported type (e.g. a double):

println(DuplicateWriter.write(0d))

we get the following compile time messages (IntelliJ IDE in this case):

Error:(56, 32) could not find implicit value for parameter dup: snippets.conversions.Duplicate[Double,B]
println(DuplicateWriter.write(0d))
                             ^

Error:(56, 32) not enough arguments for method write: (implicit dup: snippets.conversions.Duplicate[Double,B])B. Unspecified value parameter dup.
println(DuplicateWriter.write(0d))
                             ^

We can also customize the first of these messages by adding the following annotation to our trait/type class definition:

@implicitNotFound("No member of type class Duplicate in scope for ${A}")
trait Duplicate[A,B] {
  def duplicate(value: A): B
}

So that is a very quick introduction to type classes. As you can see, they provide a very easy way to add custom functionality to classes, even if you don’t control them. In the next snippet we’ll explore a couple of common, very useful type classes from the Scalaz library.

Reference: Scala snippets 4: Pimp my library pattern with type classes. 
from our JCG partner Jos Dirksen at the Smart Java blog....

How to create a pub/sub application with MongoDB? Introduction

In this article we will see how to create a pub/sub application (messaging, chat, notification), fully based on MongoDB (without any message broker like RabbitMQ, JMS, …). So, what needs to be done to achieve such a thing?

- An application “publishes” a message. In our case, we simply save a document into MongoDB.
- Another application, or thread, subscribes to these events and will receive messages automatically. In our case this means that the application should automatically receive newly created documents out of MongoDB.

All this is possible with some very cool MongoDB features: capped collections and tailable cursors.

Capped Collections and Tailable Cursors

As you can see in the documentation, capped collections are fixed-size collections that work in a way similar to circular buffers: once a collection fills its allocated space, it makes room for new documents by overwriting the oldest documents. MongoDB capped collections can be queried using tailable cursors, which are similar to the Unix tail -f command. Your application continues to retrieve documents as they are inserted into the collection. I also like to call this a “continuous query”. Now that we have seen the basics, let’s implement it.

Building a very basic application

Create the collection

The first thing to do is to create a new capped collection:

$> mongo

use chat

db.messages.drop()

db.createCollection('messages', { capped: true, size: 10000 })

db.messages.insert({"type":"init"});

For simplicity, I am using the MongoDB shell to create the messages collection in the chat database. You can see how to create a capped collection, with 2 options:

- capped: true — this one is obvious
- size: 10000 — this is a mandatory option when you create a capped collection. It is the maximum size in bytes (it will be raised to a multiple of 256).

Finally, I insert a dummy document; this is also mandatory to be able to get the tailable cursor to work. 
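The overwrite behavior of a capped collection can be pictured as a fixed-size ring buffer. The following plain Java sketch (the class and method names are invented for illustration; no MongoDB is involved) mimics that semantics: once capacity is reached, inserting a new document evicts the oldest one.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Analogy only: mimics how a capped collection makes room for new
// documents by overwriting (here: removing) the oldest ones.
class CappedBuffer {
    private final int capacity;
    private final Deque<String> docs = new ArrayDeque<>();

    CappedBuffer(int capacity) {
        this.capacity = capacity;
    }

    void insert(String doc) {
        if (docs.size() == capacity) {
            docs.removeFirst(); // evict the oldest "document"
        }
        docs.addLast(doc);
    }

    Deque<String> documents() {
        return docs;
    }
}
```

For example, inserting "a", "b" and "c" into a buffer of capacity 2 leaves only "b" and "c": the oldest entry is gone, just like the oldest document of a full capped collection.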
Write an application

Now that we have the collection, let’s write some code, first in Node.js:

var mongo = require("mongodb");

var mongodbUri = "mongodb://";

mongo.MongoClient.connect(mongodbUri, function (err, db) {

  db.collection('messages', function(err, collection) {
    // open a tailable cursor
    console.log("== open tailable cursor");
    collection.find({}, {tailable:true, awaitdata:true, numberOfRetries:-1})
      .sort({ $natural: 1 })
      .each(function(err, doc) {
        console.log(doc);
      })
  });

});

First, I connect to my local MongoDB instance and get the messages collection. Then I execute a find, using a tailable cursor with specific options:

- {} — no filter, so all documents will be returned
- tailable: true — this one is clear: we want to create a tailable cursor
- awaitdata: true — the cursor blocks and waits for new data instead of returning immediately when no data is available
- numberOfRetries: -1 — the number of times to retry on timeout; -1 is infinite, so the application will keep trying

The sort is forced to the natural order; the cursor then returns the data, and each document is printed to the console as it is inserted.

Test the Application

Start the application:

node app.js

Insert documents in the messages collection, from the shell or any other tool. 
You can find below a screencast showing this very basic application working. The source code of this sample application is in this Github repository; take the step-01 branch and clone it using:

git clone -b step-01 https://github.com/tgrall/mongodb-realtime-pubsub.git

I have also created a gist showing the same behavior in Java:

package org.mongodb.demos.tailable;

import com.mongodb.*;

public class MyApp {

  public static void main(String[] args) throws Exception {

    MongoClient mongoClient = new MongoClient();
    DBCollection coll = mongoClient.getDB("chat").getCollection("messages");

    DBCursor cur = coll.find()
        .sort(BasicDBObjectBuilder.start("$natural", 1).get())
        .addOption(Bytes.QUERYOPTION_TAILABLE | Bytes.QUERYOPTION_AWAITDATA);

    System.out.println("== open cursor ==");

    Runnable task = () -> {
      System.out.println("\tWaiting for events");
      while (cur.hasNext()) {
        DBObject obj = cur.next();
        System.out.println(obj);
      }
    };
    new Thread(task).start();
  }
}

Mathieu Ancelin has written it in Scala:

package org.mongodb.demos.tailable

import reactivemongo.api._
import reactivemongo.bson._
import play.api.libs.iteratee.Iteratee
import scala.concurrent.ExecutionContext.Implicits.global
import reactivemongo.api.collections.default.BSONCollection

object Capped extends App {

  val driver = new MongoDriver
  val connection = driver.connection(List("localhost"))
  val db = connection("chat")
  val collection = db.collection[BSONCollection]("messages")

  val cursor = collection
    .find(BSONDocument())
    .options(QueryOpts().tailable.awaitData)
    .cursor[BSONDocument]

  println("== open tailable cursor")
  cursor.enumerate().apply(Iteratee.foreach { doc =>
    println(s"Document inserted: ${BSONDocument.pretty(doc)}")
  })
}

Add some user interface

We have the basics of a publish/subscribe based application:

- publish by inserting a document into MongoDB
- subscribe by reading documents using a tailable cursor

Let’s now push the messages to a user using, for example, socket.io. 
For this we need to:

- add the socket.io dependency to our Node project
- add an HTML page to show the messages

The following gists show the updated versions of app.js and index.html. Let’s take a look:

"use strict";

var mongo = require("mongodb"),
    fs = require("fs"),         // to read static files
    io = require("socket.io"),  // socket io server
    http = require("http");

var mongodbUri = "mongodb://";

var app = http.createServer(handler);
io = io.listen(app);
app.listen(3000);
console.log("http server on port 3000");

function handler(req, res) {
  fs.readFile(__dirname + "/index.html", function (err, data) {
    res.writeHead(200);
    res.end(data);
  });
}

mongo.MongoClient.connect(mongodbUri, function (err, db) {

  db.collection('messages', function(err, collection) {

    // open socket
    io.sockets.on("connection", function (socket) {
      // open a tailable cursor
      console.log("== open tailable cursor");
      collection.find({}, {tailable:true, awaitdata:true, numberOfRetries:-1})
        .sort({ $natural: 1 })
        .each(function(err, doc) {
          console.log(doc);
          // send message to client
          if (doc.type == "message") {
            socket.emit("message", doc);
          }
        })
    });

  });

});

The Node application has been updated with the following features:

- import of http, the file system module and socket.io
- configuration and startup of the HTTP server, with a simple handler to serve a static HTML file
- support for Web sockets using socket.io, where I open the tailable cursor and push/emit the messages on the socket

As you can see, the code that I have added is simple. I do not use any advanced framework, nor manage exceptions, for simplicity and readability. Let’s now look at the client (HTML page). 
<!doctype html>
<html>
<head>
  <title>MongoDB pub/sub</title>
  <style>
    * { margin: 0; padding: 10px; box-sizing: border-box; }
    body { font: 13px Helvetica, Arial; }
    #messages { list-style-type: none; margin: 0; padding: 0; }
    #messages li { padding: 5px 10px; }
    #messages li:nth-child(odd) { background: #eee; }
  </style>
</head>
<body>
  <h2>MongoDB/Socket.io demonstration</h2>

  <ul id="messages"></ul>

  <script src="https://cdn.socket.io/socket.io-1.2.0.js"></script>
  <script src="https://code.jquery.com/jquery-2.1.3.min.js"></script>
  <script>
    var socket = io();
    socket.on('message', function(doc){
      $('#messages').append($('<li>').text(doc.text));
    });
  </script>
</body>
</html>

As on the server side, it is really simple and does not use any advanced libraries except the socket.io client and jQuery, which are used to receive the messages and print them in the page.

I have created a screencast of this version of the application. You can find the source code in this Github repository; take the step-02 branch and clone it using:

git clone -b step-02 https://github.com/tgrall/mongodb-realtime-pubsub.git

Conclusion

In this first post, we have:

- learned about tailable cursors and capped collections
- seen how they can be used to develop a pub/sub application
- exposed this in a basic web socket based application

Reference: How to create a pub/sub application with MongoDB? Introduction from our JCG partner Tugdual Grall at the Tug’s Blog blog....

New Javadoc Tags @apiNote, @implSpec and @implNote

If you’re already using Java 8, you might have seen some new Javadoc tags: @apiNote, @implSpec and @implNote. What’s up with them? And what do you have to do if you want to use them?

Overview

This post will take a quick look at the tags’ origin and current status. It will then explain their meaning and detail how they can be used with IDEs, the Javadoc tool and via Maven’s Javadoc plugin. I created a demo project on GitHub to show some examples and the necessary additions to Maven’s pom.xml. To make things easier for the Maven-averse, it already contains the generated javadoc.

Context

Origin

The new Javadoc tags are a byproduct of JSR-335, which introduced lambda expressions. They came up in the context of default methods because these required a more standardized and fine-grained documentation. In January 2013 Brian Goetz gave a motivation and made a proposal for these new tags. After a short discussion it turned into a feature request three weeks later. By April the JDK Javadoc maker was updated and the mailing list informed that they were ready to use.

Current Status

It is important to note that the new tags are not officially documented (they are missing in the official list of Javadoc tags) and thus subject to change. Furthermore, the implementer Mike Duigou wrote:

There are no plans to attempt to popularize these particular tags outside of use by JDK documentation.

So while it is surely beneficial to understand their meaning, teams should carefully consider whether using them is worth the risk which comes from relying on undocumented behavior. Personally, I think so, as I deem the considerable investment already made in the JDK too high to be reversed. It would also be easy to remove or search/replace their occurrences in a code base if that became necessary.

@apiNote, @implSpec and @implNote

Let’s cut to the heart of things. What is the meaning of these new tags? And where and how are they used? 
Meaning

The new Javadoc tags are explained pretty well in the feature request’s description (I changed the layout a little):

There are lots of things we might want to document about a method in an API. Historically we’ve framed them as either being “specification” (e.g., necessary postconditions) or “implementation notes” (e.g., hints that give the user an idea what’s going on under the hood.) But really, there are four boxes (and we’ve been cramming them into two, or really 1.5):

{ API, implementation } x { specification, notes }

(We sometimes use the terms normative/informative to describe the difference between specification/notes.) Here are some descriptions of what belongs in each box.

1. API specification. This is the one we know and love; a description that applies equally to all valid implementations of the method, including preconditions, postconditions, etc.

2. API notes. Commentary, rationale, or examples pertaining to the API.

3. Implementation specification. This is where we say what it means to be a valid default implementation (or an overrideable implementation in a class), such as “throws UOE.” Similarly this is where we’d describe what the default for putIfAbsent does. It is from this box that the would-be-implementer gets enough information to make a sensible decision as to whether or not to override.

4. Implementation notes. Informative notes about the implementation, such as performance characteristics that are specific to the implementation in this class in this JDK in this version, and might change. These things are allowed to vary across platforms, vendors and versions.

The proposal: add three new Javadoc tags, @apiNote, @implSpec, and @implNote. (The remaining box, API Spec, needs no new tag, since that’s how Javadoc is used already.) @impl{spec,note} can apply equally well to a concrete method in a class or a default method in an interface.

So the new Javadoc tags are meant to categorize the information given in a comment. 
It distinguishes between the specification of the method’s, class’s, … behavior (which is relevant for all users of the API – this is the “regular” comment and would be @apiSpec if it existed) and other, more ephemeral or less universally useful documentation. More concretely, an API user can not rely on anything written in @implSpec or @implNote, because these tags are concerned with this implementation of the method, saying nothing about overriding implementations.

This shows that using these tags will mainly benefit API designers. But even Joe Developer, working on a large project, can be considered a designer in this context as his code is surely consumed and/or changed by his colleagues at some point in the future. In that case, it helps if the comment clearly describes the different aspects of the API. E.g. is “runs in linear time” part of the method’s specification (and should hence not be degraded) or a detail of the current implementation (so it could be changed)?

Examples

Let’s see some examples! First from the demo project to show some rationale behind how to use the tags and then from the JDK to see them in production.

The Lottery

The project contains an interface Lottery from some fictitious library. The interface was first included in version 1.0 of the library but a new method has to be added for version 1.1. To keep backwards compatibility this is a default method but the plan is to make it abstract in version 2.0 (giving customers some time to update their code). With the new tags the method’s documentation clearly distinguishes the meanings of its documentation:

Documentation of Lottery.pickWinners

/**
 * Picks the winners from the specified set of players.
 * <p>
 * The returned list defines the order of the winners, where the first
 * prize goes to the player at position 0. The list will not be null but
 * can be empty.
 *
 * @apiNote This method was added after the interface was released in
 *          version 1.0. 
 *          It is defined as a default method for compatibility
 *          reasons. From version 2.0 on, the method will be abstract and
 *          all implementations of this interface have to provide their own
 *          implementation of the method.
 * @implSpec The default implementation will consider each player a winner
 *           and return them in an unspecified order.
 * @implNote This implementation has linear runtime and does not filter out
 *           null players.
 * @param players
 *            the players from which the winners will be selected
 * @return the (ordered) list of the players who won; the list will not
 *         contain duplicates
 * @since 1.1
 */
default List<String> pickWinners(Set<String> players) {
    return new ArrayList<>(players);
}

JDK

The JDK widely uses the new tags. Some examples:

ConcurrentMap:
- Several @implSpecs defining the behavior of the default implementations, e.g. on replaceAll.
- Interesting @implNotes on getOrDefault and forEach.
- Repeated @implNotes on abstract methods which have default implementations in Map, documenting that “This implementation intentionally re-abstracts the inappropriate default provided in Map.”, e.g. replace.

Objects uses @apiNote to explain why the seemingly useless methods isNull and nonNull were added. The abstract class Clock uses @implSpec and @implNote in its class comment to distinguish what implementations must beware of and how the existing methods are implemented.

Inheritance

When an overriding method has no comment or inherits its comment via {@inheritDoc}, the new tags are not included. This is a good thing, since they will not generally apply. To inherit specific tags, just add the snippet @tag {@inheritDoc} to the comment. The implementing classes in the demo project examine the different possibilities. The README gives an overview.

Tool Support

IDEs

You will likely want to see the improved documentation (the JDK’s and maybe your own) in your IDE. So how do the most popular ones currently handle them? 
Eclipse displays the tags and their content but provides no special rendering, like ordering or prettifying the tag headers. There is a feature request to resolve this. IntelliJ’s current community edition 14.0.2 displays neither the tags nor their content. This was apparently solved on Christmas Eve (see this ticket) so I guess the next version will not have this problem anymore. I cannot say anything regarding the rendering, though. NetBeans also shows neither tags nor content and I could find no ticket asking to fix this.

All in all not a pretty picture, but understandable considering the fact that this is no official Javadoc feature.

Generating Javadoc

If you start using those tags in your own code, you will soon realize that generating Javadoc fails because of the unknown tags. That is easy to fix; you just have to tell the tool how to handle them.

Command Line

This can be done via the command line argument -tag. The following arguments allow those tags everywhere (i.e. on packages, types, methods, …) and give them the headers currently used by the JDK:

Telling Javadoc About The New Tags

-tag "apiNote:a:API Note:"
-tag "implSpec:a:Implementation Requirements:"
-tag "implNote:a:Implementation Note:"

(I read the official documentation as if those arguments should be -tag apiNote:a:"API Note:" [note the quotation marks] but that doesn’t work for me. If you want to limit the use of the new tags or not include them at all, the documentation of -tag tells you how to do that.)

By default all new tags are added to the end of the generated doc, which puts them below, e.g., @param and @return. 
To change this, all tags have to be listed in the desired order, so you have to add the known tags to the list below the three above:

Listing The Known Tags After The New Ones

-tag "param"
-tag "return"
-tag "throws"
-tag "since"
-tag "version"
-tag "serialData"
-tag "see"

Maven

Maven’s Javadoc plugin has a configuration setting tag which is used to verbosely create the same command line arguments. The demo project on GitHub shows how this looks in the pom.

Reflection

We have seen that the new Javadoc tags @apiNote, @implSpec and @implNote were added to allow the division of documentation into parts with different semantics. Understanding them is helpful to every Java developer. API designers might choose to employ them in their own code but must keep in mind that they are still undocumented and thus subject to change. We finally took a look at some of the involved tools and saw that IDE support needs to improve but the Javadoc tool and the Maven plugin can be parameterized to make full use of them.

Reference: New Javadoc Tags @apiNote, @implSpec and @implNote from our JCG partner Nicolai Parlog at the CodeFx blog....

Multiple Return Statements

I once heard that in the past people strived for methods to have a single exit point. I understood this was an outdated approach and never considered it especially noteworthy. But lately I’ve come in contact with some developers who still adhere to that idea (the last time was here) and it got me thinking. So for the first time, I really sat down and compared the two approaches.

Overview

The first part of the post will repeat the arguments for and against multiple return statements. It will also identify the critical role clean code plays in assessing these arguments. The second part will categorize the situations which benefit from returning early. To avoid always writing about “methods with multiple return statements”, I’ll call the approach of structuring methods that way a pattern. While this might be a little overboard it surely is more concise.

The Discussion

I’m discussing whether a method should always run to its last line, from where it returns its result, or can have multiple return statements and “return early”. This is no new discussion of course. See, for example, Wikipedia, Hacker Chick or StackOverflow.

Structured Programming

The idea that a single return statement is desirable stems from the paradigm of structured programming, developed in the 1960s. Regarding subroutines, it promotes that they have a single entry and a single exit point. While modern programming languages guarantee the former, the latter is somewhat outdated for several reasons.

The main problem the single exit point solved were memory and resource leaks. These occurred when a return statement somewhere inside a method prevented the execution of some cleanup code which was located at its end. Today, much of that is handled by the language runtime (e.g. garbage collection) and explicit cleanup blocks can be written with try-catch-finally. So now the discussion mainly revolves around readability. 
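To illustrate why cleanup no longer forces a single exit point: a finally block runs regardless of which return statement is taken. A minimal sketch (the class and method names are invented for illustration):

```java
// The finally block runs no matter which return statement exits
// the method, so an early return cannot skip the cleanup anymore.
class EarlyReturnCleanup {
    static boolean cleanedUp = false;

    static String process(String input) {
        try {
            if (input == null) {
                return "default"; // early return...
            }
            return input.trim();
        } finally {
            cleanedUp = true; // ...the cleanup still runs
        }
    }
}
```

Both exits of process trigger the finally block; try-with-resources gives the same guarantee for closing resources.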
Readability

Sticking to a single return statement can lead to increased nesting and require additional variables (e.g. to break loops). On the other hand, having a method return from multiple points can lead to confusion about its control flow and thus make it less maintainable.

It is important to notice that these two sides behave very differently with respect to the overall quality of the code. Consider a method which adheres to clean coding guidelines: it is short and to the point, with a clear name and an intention-revealing structure. The relative loss in readability from introducing more nesting and more variables is very noticeable and might muddy the clean structure. But since the method can be easily understood due to its brevity and form, there is no big risk of overlooking any return statement. So even in the presence of more than one, the control flow remains obvious.

Contrast this with a longer method, maybe part of a complicated or optimized algorithm. Now the situation is reversed. The method already contains a number of variables and likely some levels of nesting. Introducing more has little relative cost in readability. But the risk of overlooking one of several returns, and thus misunderstanding the control flow, is very real.

So it comes down to the question whether methods are short and readable. If they are, multiple return statements will generally be an improvement. If they aren't, a single return statement is preferable.

Other Factors

Readability might not be the only factor, though. Another aspect of this discussion can be logging. In case you want to log return values but do not resort to aspect-oriented programming, you have to manually insert logging statements at the methods' exit point(s). Doing this with multiple return statements is tedious, and forgetting one is easy. Similarly, you might prefer a single exit point if you want to assert certain properties of your results before returning from the method.
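To illustrate the logging argument, here is a small hypothetical sketch (the method and its names are made up for this example): with a single exit point, one statement covers every possible result, whereas with early returns each return would need its own.

```java
// Hypothetical sketch: a single exit point lets one logging (or assertion)
// statement cover every possible result of the method.
public class ClampExample {

	static int clamp(int value, int min, int max) {
		int result = value;
		if (value < min)
			result = min;
		else if (value > max)
			result = max;
		// single exit point: log once, assert once, for all paths
		System.out.println("clamp result: " + result);
		return result;
	}
}
```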
Situations For Multiple Return Statements

There are several kinds of situations in which a method can profit from multiple return statements. I tried to categorize them here but make no claim to have a complete list. (If you come up with another recurring situation, leave a comment and I will include it.) Every situation will come with a code sample. Note that these are shortened to bring the point across and can be improved in several ways.

Guard Clauses

Guard clauses stand at the beginning of a method. They check its arguments and for certain special cases immediately return a result.

Guard Clause Against Null Or Empty Collections

private <T> Set<T> intersection(Collection<T> first, Collection<T> second) {
	// intersection with an empty collection is empty
	if (isNullOrEmpty(first) || isNullOrEmpty(second))
		return new HashSet<>();

	return first.stream()
			.filter(second::contains)
			.collect(Collectors.toSet());
}

Excluding edge cases at the beginning has several advantages:

- it cleanly separates handling of special cases and regular cases, which improves readability
- it provides a default location for additional checks, which preserves readability
- it makes implementing the regular cases less error prone
- it might improve performance for those special cases (though this is rarely relevant)

Basically all methods for which this pattern is applicable will benefit from its use. A noteworthy proponent of guard clauses is Martin Fowler, although I would consider his example on the edge of branching (see below).

Branching

Some methods' responsibilities demand to branch into one of several, often specialized subroutines. It is usually best to implement these subroutines as methods in their own right. The original method is then left with the only responsibility to evaluate some conditions and call the correct routine.
Delegating To Specialized Methods

public Offer makeOffer(Customer customer) {
	boolean isSucker = isSucker(customer);
	boolean canAffordLawSuit = customer.canAfford(
			legalDepartment.estimateLawSuitCost());

	if (isSucker) {
		if (canAffordLawSuit)
			return getBigBucksButStayLegal(customer);
		else
			return takeToTheCleaners(customer);
	} else {
		if (canAffordLawSuit)
			return getRid(customer);
		else
			return getSomeMoney(customer);
	}
}

(I know that I could leave out all else-lines. Someday I might write a post explaining why in cases like this, I don't.)

Using multiple return statements has several advantages over a result variable and a single return:

- the method more clearly expresses its intent to branch to a subroutine and simply return its result
- in any sane language, the method does not compile if the branches do not cover all possibilities (in Java, this can also be achieved with a single return if the variable is not initialized to a default value)
- there is no additional variable for the result, which would span almost the whole method
- the result of the called method can not be manipulated before being returned (in Java, this can also be achieved with a single return if the variable is final and its class immutable; the latter is not obvious to the reader, though)
- if a switch statement is used in a language with fall through (like Java), immediate return statements save a line per case because no break is needed, which reduces boilerplate and improves readability

This pattern should only be applied to methods which do little else than branching. It is especially important that the branches cover all possibilities. This implies that there is no code below the branching statements. If there were, it would take much more effort to reason about all paths through the method. If a method fulfills these conditions, it will be small and cohesive, which makes it easy to understand.
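The last point about switch statements can be sketched as follows (the enum, method and return values are invented for this illustration): each case returns immediately, so no break statements and no result variable are needed.

```java
// Hypothetical example: immediate returns in a switch remove the need
// for break statements and for a result variable spanning the method.
public class OfferSwitch {

	enum CustomerType { SUCKER, LITIGIOUS, REGULAR }

	static String makeOffer(CustomerType type) {
		switch (type) {
			case SUCKER:
				return "big bucks offer";   // no break needed
			case LITIGIOUS:
				return "cautious offer";
			case REGULAR:
				return "standard offer";
			default:
				throw new IllegalArgumentException("Unknown type: " + type);
		}
	}
}
```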
Cascading Checks

Sometimes a method's behavior mainly consists of multiple checks where each check's outcome might make further checks unnecessary. In that case, it is best to return as soon as possible (maybe after each check).

Cascading Checks While Looking For An Anchor Parent

private Element getAnchorAncestor(Node node) {
	// if there is no node, there can be no anchor,
	// so return null
	if (node == null)
		return null;

	// only elements can be anchors,
	// so if the node is no element, recurse to its parent
	boolean nodeIsNoElement = !(node instanceof Element);
	if (nodeIsNoElement)
		return getAnchorAncestor(node.getParentNode());

	// since the node is an element, it might be an anchor
	Element element = (Element) node;
	boolean isAnchor = element.getTagName().equalsIgnoreCase("a");
	if (isAnchor)
		return element;

	// if the element is no anchor, recurse to its parent
	return getAnchorAncestor(element.getParentNode());
}

Other examples of this are the usual implementations of equals or compareTo in Java. They also usually consist of a cascade of checks where each check might determine the method's result. If it does, the value is immediately returned; otherwise the method continues with the next check.

Compared to a single return statement, this pattern does not require you to jump through hoops to prevent ever deeper indentation. It also makes it straightforward to add new checks and place comments before a check-and-return block.

As with branching, multiple return statements should only be applied to methods which are short and do little else. The cascading checks should be their central, or better yet, their only content (besides input validation). If a check or the computation of the return value needs more than two or three lines, it should be refactored into a separate method.

Searching

Where there are data structures, there are items with special conditions to be found in them. Methods which search for them often look similar.
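A typical compareTo, for instance, is exactly such a cascade. The class below is invented for this illustration: as soon as one field decides the order, the method returns.

```java
// Hypothetical example: compareTo as a cascade of checks,
// returning as soon as one field decides the order.
public class Version implements Comparable<Version> {

	private final int major;
	private final int minor;

	public Version(int major, int minor) {
		this.major = major;
		this.minor = minor;
	}

	@Override
	public int compareTo(Version other) {
		int byMajor = Integer.compare(major, other.major);
		if (byMajor != 0)
			return byMajor;   // the major version already decides the order

		// only if the majors are equal does the minor version matter
		return Integer.compare(minor, other.minor);
	}
}
```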
If such a method encounters the item it was searching for, it is often easiest to immediately return it.

Immediately Returning The Found Element

private <T> T findFirstIncreaseElement(Iterable<T> items, Comparator<? super T> comparator) {
	T lastItem = null;
	for (T currentItem : items) {
		boolean increase = increase(lastItem, currentItem, comparator);
		lastItem = currentItem;

		if (increase) {
			return currentItem;
		}
	}

	return null;
}

Compared to a single return statement, this saves us from finding a way to get out of the loop. This has the following advantages:

- there is no additional boolean variable to break the loop
- there is no additional condition for the loop, which is easily overlooked (especially in for loops) and thus fosters bugs
- the last two points together keep the loop much easier to understand
- there is most likely no additional variable for the result, which would span almost the whole method

Like most patterns which use multiple return statements, this also requires clean code. The method should be small and have no other responsibility but searching. Nontrivial checks and result computations should have their own methods.

Reflection

We have seen the arguments for and against multiple return statements and the critical role clean code plays. The categorization should help to identify recurring situations in which a method will benefit from returning early.

Reference: Multiple Return Statements from our JCG partner Nicolai Parlog at the CodeFx blog....

Pushing the Limits – Howto use AeroGear Unified Push for Java EE and Node.js

At the end of 2014 the AeroGear team announced the availability of the Red Hat JBoss Unified Push Server on xPaaS. Let's take a closer look!

Overview

The Unified Push Server allows developers to send native push messages to Apple's Push Notification Service (APNS) and Google's Cloud Messaging (GCM). It features a built-in administration console that makes it easy for developers to create and manage push-related aspects of their applications for any mobile development environment. It includes client SDKs (iOS, Android, & Cordova) and a REST-based sender service with an available Java sender library.

The following image shows how the Unified Push Server enables applications to send native push messages to Apple's Push Notification Service (APNS) and Google's Cloud Messaging (GCM):

Architecture

The xPaaS offering is deployed in a managed EAP container, while the server itself is based on standard Java EE APIs like:

- JAX-RS
- EJB
- CDI
- JPA

Another critical component is Keycloak, which is used for user management and authentication. The heart of the Unified Push Server are its public RESTful endpoints. These services are the entry point for all mobile devices as well as for 3rd-party business applications when they want to issue a push notification to be delivered to the mobile devices registered with the server.

Backend integration

Being based on the JAX-RS standard makes integration with any backend platform very easy. It just needs to speak HTTP…

Java EE

The project has a Java library to send push notification requests from any Java-based backend.
The fluent builder API is used to set up the integration with the desired Unified Push Server. With the help of CDI we can extract that into a very simple factory:

@Produces
public PushSender setup() {
	PushSender defaultPushSender = DefaultPushSender.withRootServerURL("http://localhost:8080/ag-push")
		.pushApplicationId("c7fc6525-5506-4ca9-9cf1-55cc261ddb9c")
		.masterSecret("8b2f43a9-23c8-44fe-bee9-d6b0af9e316b")
		.build();
	return defaultPushSender;
}

Next we would need to inject the `PushSender` into a Java class which is responsible for sending a push request to the Unified Push Server:

@Inject
private PushSender sender;
...
public void sendPushNotificationRequest() {
	...
	UnifiedMessage unifiedMessage = ...;
	sender.send(unifiedMessage);
}

The API for the `UnifiedMessage` is leveraging the builder pattern as well:

UnifiedMessage unifiedMessage = UnifiedMessage.withMessage()
	.alert("Hello from Java Sender API!")
	.sound("default")
	.userData("foo-key", "foo-value")
	...
	.build();

Node.js

Being a RESTful server does not limit the integration to traditional platforms like Java EE. AeroGear also has a Node.js library. Below is a short example of how to send push notifications from a Node.js based backend:

// setup the integration with the desired Unified Push Server
var agSender = require( "unifiedpush-node-sender" ),
	settings = {
		url: "http://localhost:8080/ag-push",
		applicationId: "c7fc6525-5506-4ca9-9cf1-55cc261ddb9c",
		masterSecret: "8b2f43a9-23c8-44fe-bee9-d6b0af9e316b"
	},
	// options for the submission; left empty here
	options = {};

// build the push notification payload:
message = {
	alert: "Hello from Node.js Sender API!",
	sound: "default",
	userData: {
		"foo-key": "foo-value"
	}
};

// send it to the server:
agSender.Sender( settings ).send( message, options ).on( "success", function( response ) {
	console.log( "success called", response );
});

What's next?

The Unified Push Server on xPaaS supports Android and iOS at the moment, but the AeroGear team is looking to enhance the service for more mobile platforms.
The community project currently supports the following platforms:

- Android
- Chrome Packaged Apps
- iOS
- SimplePush / Firefox OS
- Windows

There are plans for adding support for the Safari browser and Amazon's Device Messaging (ADM).

Getting started

To see the Unified Push Server in action, check out the video below:

The xPaaS release comes with different demos for Android, iOS and Apache Cordova clients as well as a Java EE based backend demo. You can find the downloads here. More information can be found on the Unified Push homepage. You can reach out to the AeroGear team via IRC or email. Have fun and enjoy!

Reference: Pushing the Limits – Howto use AeroGear Unified Push for Java EE and Node.js from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.