What's New Here?


2015 Starts Off Strong for Java 8

JDK 8 is starting 2015 with a surge in popularity in terms of blog posts and articles. This coincides with Java being automatically upgraded to JDK 8 this month. In this post, I list and briefly describe some of the numerous articles and posts on JDK 8 that have been published already in 2015. JDK 8 Streams have been justifiably popular in recent posts. My first blog post of 2015 was Stream-Powered Collections Functionality in JDK 8, which demonstrates performing some common operations against Java collections with greater ease and conciseness using Streams than was possible before. The post Fail-fast validations using Java 8 streams looks at fluent fail-fast validation of state and was improved from its original version based on feedback. The post Java 8: No more loops talks about streams providing concise alternatives to looping on collections. What is the difference between Collections and Streams in Java 8? and Java 8 Streams API as Friendly ForkJoinPool Facade were also posted this month. Lambda expressions are obviously a big part of JDK 8. The post Java 8 Stream and Lambda Expressions – Parsing File Example demonstrates the use of lambda expressions and streams to parse a log file. A quick overview of features new to JDK 8 is available in What Are the Most Important New Features in the Java 8 Release?. The post Java 8 Default Methods Explained in 5 minutes describes JDK 8’s default methods. Daniel Shaya warns of two potential caveats of using JDK 8 functionality in the posts Java8 Sorting – Performance Pitfall and What’s Stopping Me Using Java8 Lambdas – Try Debugging Them. Peter Ledbrook reexamines the use of Groovy in light of JDK 8 in the post Groovy in light of Java 8. We are only half-way through the first month of 2015 and JDK 8 continues to see increased adoption and, correspondingly, increased online coverage of its features. Most of the focus seems to be on the functional aspects that JDK 8 brings to Java.

Reference: 2015 Starts Off Strong for Java 8 from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
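To ground the Streams coverage above, here is a minimal, self-contained sketch (not taken from any of the linked posts; the data and the filter are invented for illustration) that contrasts a pre-JDK 8 loop with the equivalent JDK 8 stream pipeline:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamsInsteadOfLoops {

    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ada", "Alan", "Grace", "Linus");

        // pre-JDK 8: an explicit loop plus an accumulator variable
        StringBuilder byLoop = new StringBuilder();
        for (String name : names) {
            if (name.startsWith("A")) {
                if (byLoop.length() > 0) {
                    byLoop.append(", ");
                }
                byLoop.append(name.toUpperCase());
            }
        }
        System.out.println(byLoop); // ADA, ALAN

        // JDK 8: the same operation expressed as a stream pipeline
        String byStream = names.stream()
                .filter(name -> name.startsWith("A"))
                .map(String::toUpperCase)
                .collect(Collectors.joining(", "));
        System.out.println(byStream); // ADA, ALAN
    }
}

The pipeline reads as a description of what should happen to the data, which is the conciseness the posts above are praising.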

Getting Started with Gradle: Creating a Multi-Project Build

Although we can create a working application by using only one module, sometimes it is wiser to divide our application into multiple smaller modules. Because this is a rather common use case, every self-respecting build tool must support it, and Gradle is no exception. If a Gradle project has more than one module, it is called a multi-project build. This blog post describes how we can create a multi-project build with Gradle. Let’s start by taking a look at the requirements of our Gradle build.     Additional Reading: If you are not familiar with Gradle, you should read the following blog posts before you continue reading this blog post:Getting Started with Gradle: Introduction helps you to install Gradle, describes the basic concepts of a Gradle build, and describes how you can add functionality to your build by using Gradle plugins. Getting Started with Gradle: Our First Java Project describes how you can create a Java project by using Gradle and package your application to an executable jar file. Getting Started with Gradle: Dependency Management describes how you can manage the dependencies of your Gradle project.The Requirements of Our Gradle Build Our example application has two modules:The core module contains the common components that are used by the other modules of our application. In our case, it contains only one class: the MessageService class returns the string ‘Hello World!’. This module has only one dependency: it has one unit test that uses Junit 4.11. The app module contains the HelloWorld class that starts our application, gets a message from a MessageService object, and writes the received message to a log file. This module has two dependencies: it needs the core module and uses Log4j 1.2.17 as a logging library.Our Gradle build has also two other requirements:We must be able to run our application with Gradle. We must be able to create a runnable binary distribution that doesn’t use the so called “fat jar” approach.If you don’t know how you can run your application and create a runnable binary distribution with Gradle, you should read the following blog post before you continue reading this blog post:Getting Started with Gradle: Creating a Binary DistributionLet’s move on and find out how we can create a multi-project build that fulfills our requirements. Creating a Multi-Project Build Our next step is to create a multi-project Gradle build that has two subprojects: app and core. Let’s start by creating the directory structure of our Gradle build. Creating the Directory Structure Because the core and app modules use Java, they both use the default project layout of a Java project. We can create the correct directory structure by following these steps:Create the root directory of the core module (core) and create the following the subdirectories:The src/main/java directory contains the source code of the core module. The src/test/java directory contains the unit tests of the core module.Create the root directory of the app module (app) and create the following subdirectories:The src/main/java directory contains the source code of the app module. The src/main/resources directory contains the resources of the app module.We have now created the required directories. Our next step is to configure our Gradle build. Let’s start by configuring the projects that are included in our multi-project build. 
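Before moving on, here is a rough sketch of the directory layout just described (the settings.gradle file, the root build.gradle file, and the two module build scripts are created in the following steps):

multi-project-build/
    settings.gradle
    build.gradle
    app/
        build.gradle
        src/main/java/
        src/main/resources/
    core/
        build.gradle
        src/main/java/
        src/test/java/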
Configuring the Projects that Are Included in Our Multi-Project Build We can configure the projects that are included in our multi-project build by following these steps:Create the settings.gradle file to the root directory of the root project. A multi-project Gradle build must have this file because it specifies the projects that are included in the multi-project build. Ensure that the app and core projects are included in our multi-project build.Our settings.gradle file looks as follows: include 'app' include 'core' Additional Reading:Gradle User Guide: 56.2 Settings file Gradle DSL Reference: SettingsLet’s move on and configure the core project. Configuring the Core Project We can configure the core project by following these steps:Create the build.gradle file to the root directory of the core project. Create a Java project by applying the Java plugin. Ensure that the core project gets its dependencies from the central Maven2 repository. Declare the JUnit dependency (version 4.11) and use the testCompile configuration. This configuration describes that the core project needs the JUnit library before its unit tests can be compiled.The build.gradle file of the core project looks as follows: apply plugin: 'java'repositories { mavenCentral() }dependencies { testCompile 'junit:junit:4.11' } Additional Reading:Getting Started with Gradle: Our First Java Project Getting Started with Gradle: Dependency ManagementLet’s move on and configure the app project. Configuring the App Project Before we can configure the app project, we have to take a quick look at the dependency management of such dependencies that are part of the same multi-project build. These dependencies are called project dependencies. If our multi-project build has projects A and B, and the compilation of the project B requires the project A, we can configure this dependency by adding the following dependency declaration to the build.gradle file of the project B: dependencies { compile project(':A') } Additional Reading:Gradle User Guide: 51.4.3. Project dependencies Gradle User Guide: 57.7. Project lib dependenciesWe can now configure the app project by following these steps:Create the build.gradle file to the root directory of the app project. Create a Java project by applying the Java plugin. Ensure that the app project gets its dependencies from the central Maven2 repository. Configure the required dependencies. The app project has two dependencies that are required when it is compiled:Log4j (version 1.2.17) The core moduleCreate a runnable binary distribution.The build.gradle file of the app project looks as follows: apply plugin: 'application' apply plugin: 'java'repositories { mavenCentral() }dependencies { compile 'log4j:log4j:1.2.17' compile project(':core') }mainClassName = 'net.petrikainulainen.gradle.client.HelloWorld'task copyLicense { outputs.file new File("$buildDir/LICENSE") doLast { copy { from "LICENSE" into "$buildDir" } } }applicationDistribution.from(copyLicense) { into "" } Additional Reading:Getting Started with Gradle: Creating a Binary DistributionLet’s move on and remove the duplicate configuration found from the build scripts of the core and app projects. Removing Duplicate Configuration When we configured the subprojects of our multi-project build, we added duplicate configuration to the build scripts of the core and app projects:Because both projects are Java projects, they apply the Java plugin. 
Both projects use the central Maven 2 repository.In other words, both build scripts contain the following configuration: apply plugin: 'java'repositories { mavenCentral() } Let’s move this configuration to the build.gradle file of our root project. Before we can do this, we have to learn how we can configure our subprojects in the build.gradle file of our root project. If we want to add configuration to a single subproject called core, we have to add the following snippet to the build.gradle file of our root project: project(':core') { //Add core specific configuration here } In other words, if we want to move the duplicate configuration to the build script of our root project, we have to add the following configuration to its build.gradle file: project(':app') { apply plugin: 'java'repositories { mavenCentral() } }project(':core') { apply plugin: 'java'repositories { mavenCentral() } } This doesn’t really change our situation. We still have duplicate configuration in our build scripts. The only difference is that the duplicate configuration is now found from the build.gradle file of our root project. Let’s eliminate this duplicate configuration. If we want to add common configuration to the subprojects of our root project, we have to add the following snippet to the build.gradle file of our root project: subprojects { //Add common configuration here } After we have removed the duplicate configuration from the build.gradle file of our root project, it looks as follows: subprojects { apply plugin: 'java'repositories { mavenCentral() } } If we have configuration that is shared by all projects of our multi-project build, we should add the following snippet to the build.gradle file of our root project: allprojects { //Add configuration here } Additional Reading:Gradle User Guide: 57.1 Cross project configuration Gradle User Guide: 57.2 Subproject configurationWe can now remove the duplicate configuration from the build scripts of our subprojects. The new build scripts of our subprojects looks as follows: The core/build.gradle file looks as follows: dependencies { testCompile 'junit:junit:4.11' } The app/build.gradle file looks as follows: apply plugin: 'application'dependencies { compile 'log4j:log4j:1.2.17' compile project(':core') }mainClassName = 'net.petrikainulainen.gradle.client.HelloWorld'task copyLicense { outputs.file new File("$buildDir/LICENSE") doLast { copy { from "LICENSE" into "$buildDir" } } }applicationDistribution.from(copyLicense) { into "" } We have now created a multi-project Gradle build. Let’s find out what we just did. What Did We Just Do? When we run the command gradle projects in the root directory of our multi-project build, we see the following output: > gradle projects :projects------------------------------------------------------------ Root project ------------------------------------------------------------Root project 'multi-project-build' +--- Project ':app' \--- Project ':core'To see a list of the tasks of a project, run gradle <project-path>:tasks For example, try running gradle :app:tasksBUILD SUCCESSFUL As we can see, this command lists the subprojects (app and core) of our root project. This means that we have just created a multi-project Gradle build that has two subprojects. 
When we run the command gradle tasks in the root directory of our multi-project build, we see the following output (only relevant part of it is shown below): > gradle tasks :tasks------------------------------------------------------------ All tasks runnable from root project ------------------------------------------------------------Application tasks ----------------- distTar - Bundles the project as a JVM application with libs and OS specific scripts. distZip - Bundles the project as a JVM application with libs and OS specific scripts. installApp -Installs the project as a JVM application along with libs and OS specific scripts run - Runs this project as a JVM application As we can see, we can run our application by using Gradle and create a binary distribution that doesn’t use the so called “fat jar” approach. This means that we have fulfilled all requirements of our Gradle build. Additional Information:Gradle User Guide: 11.6. Obtaining information about your buildLet’s move on and find out what we learned from this blog post. Summary This blog post has taught us three things:A multi-project build must have the settings.gradle file in the root directory of the root project because it specifies the projects that are included in the multi-project build. If we have to add common configuration or behavior to all projects of our multi-project build, we should add this configuration (use allprojects) to the build.gradle file of our root project. If we have to add common configuration or behavior to the subprojects of our root project, we should add this configuration (use subprojects) to the build.gradle file of our root project.P.S. You can get the example application of this blog post from Github.Reference: Getting Started with Gradle: Creating a Multi-Project Build from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....

Scala snippets 4: Pimp my library pattern with type classes.

I wanted to write an article on the fun parts of Scalaz, but thought it would be best to first look a bit closer at the type class system provided by Scala. So in this snippet we’ll explore a small part of how type classes work and how they can help you write more generic code. More snippets can be found here:

Scala snippets 1: Folding
Scala snippets 2: List symbol magic
Scala snippets 3: Lists together with Map, flatmap, zip and reduce
Scala snippets 4: Pimp my library pattern with type classes

Type classes

Looking at the type class definition from Wikipedia might quickly scare you away: “In computer science, a type class is a type system construct that supports ad hoc polymorphism. This is achieved by adding constraints to type variables in parametrically polymorphic types. Such a constraint typically involves a type class ‘T’ and a type variable ‘a’, and means that ‘a’ can only be instantiated to a type whose members support the overloaded operations associated with ‘T’.” Basically, what type classes allow is to add functionality to existing classes without needing to touch those existing classes. We could for instance add standard “comparable” functionality to Strings without having to modify the existing classes. Note that you could also just use implicit functions to add custom behavior (e.g. the “Pimp my library pattern”, https://coderwall.com/p/k_1jzw/scala-s-pimp-my-library-pattern-example), but using type classes is much safer and more flexible. A good discussion on this can be found here (http://stackoverflow.com/questions/8524878/implicit-conversion-vs-type-c…). So enough introduction, let’s look at a very simple example of type classes. Creating a type class in Scala takes a number of different steps. The first step is to create a trait. This trait is the actual type class and defines the functionality that we want to provide. For this article we’ll create a very contrived example where we define a “Duplicate” trait. With this trait we duplicate a specific object. So when we get a string value of “hello”, we want to return “hellohello”; when we get an integer we return value*value; when we get a char ‘c’, we return “cc”. All this in a type-safe manner. Our type class is actually very simple:

trait Duplicate[A,B] {
  def duplicate(value: A): B
}

Note that this looks a lot like a Scala mix-in trait, but it is used completely differently. Once we’ve got the type class definition, the next step is to create some default implementations. We do this in the trait’s companion object.

object Duplicate {

  // implemented as a singleton object
  implicit object DuplicateString extends Duplicate[String,String] {
    def duplicate(value: String) = value.concat(value)
  }

  // or directly, which I like better.
  implicit val duplicateInt = new Duplicate[Int, Int] {
    def duplicate(value: Int) = value * value
  }

  implicit val duplicateChar = new Duplicate[Char, String] {
    def duplicate(value: Char) = value.toString + value.toString
  }
}

As you can see, we can do this in a couple of different ways. The most important part here is the implicit keyword. Using this keyword we can make these members implicitly available under certain circumstances. When you look at the implementations you’ll notice that they are all very straightforward. We just implement the trait we defined for specific types: in this case for a string, an integer and a character. Now we can start using the type classes.
object DuplicateWriter {

  // import the conversions for use within this object
  import conversions.Duplicate

  // Generic method that takes a value, and looks for an implicit
  // conversion of type Duplicate. If no implicit Duplicate is available
  // an error will be thrown. Scala will first look in the local
  // scope before looking for implicits in the companion object
  // of the trait class.
  def write[A,B](value: A)(implicit dup: Duplicate[A, B]) : B = {
    dup.duplicate(value)
  }
}

// simple app that runs our conversions
object Example extends App {

  import snippets.conversions.Duplicate

  implicit val anotherDuplicateInt = new Duplicate[Int, Int] {
    def duplicate(value: Int) = value + value
  }

  println(DuplicateWriter.write("Hello"))
  println(DuplicateWriter.write('c'))
  println(DuplicateWriter.write(0))
  println(DuplicateWriter.write(0)(Duplicate.duplicateInt))
}

In this example we’ve created a DuplicateWriter which calls the duplicate function on the provided value by looking for a matching type class implementation. In our Example object we also override the default duplicate function for the Int type with a custom one. In the last line we provide a specific Duplicate object to be used by the DuplicateWriter. The output of this application is this:

20 100 HelloHello cc

If we run with an unsupported type (e.g. a double):

println(DuplicateWriter.write(0d))

we get the following compile-time messages (IntelliJ IDEA in this case):

Error:(56, 32) could not find implicit value for parameter dup: snippets.conversions.Duplicate[Double,B]
println(DuplicateWriter.write(0d))
^

Error:(56, 32) not enough arguments for method write: (implicit dup: snippets.conversions.Duplicate[Double,B])B. Unspecified value parameter dup.
println(DuplicateWriter.write(0d))
^

We can also customize the first of these messages by adding the following annotation to our trait/type class definition:

@implicitNotFound("No member of type class Duplicate in scope for ${T}")
trait Duplicate[A,B] {
  def duplicate(value: A): B
}

So that is a very quick introduction to type classes. As you can see, they provide a very easy way to add custom functionality to classes, even if you don’t control them. In the next snippet we’ll explore a couple of common, very useful type classes from the Scalaz library.

Reference: Scala snippets 4: Pimp my library pattern with type classes. from our JCG partner Jos Dirksen at the Smart Java blog....

How to create a pub/sub application with MongoDB? Introduction

In this article we will see how to create a pub/sub application (messaging, chat, notification), fully based on MongoDB (without any message broker like RabbitMQ, JMS, …). So, what needs to be done to achieve such a thing:

an application “publishes” a message. In our case, we simply save a document into MongoDB
another application, or thread, subscribes to these events and will receive messages automatically. In our case this means that the application should automatically receive newly created documents out of MongoDB

All this is possible with some very cool MongoDB features: capped collections and tailable cursors.

Capped Collections and Tailable Cursors

As you can see in the documentation, capped collections are fixed-size collections that work in a way similar to circular buffers: once a collection fills its allocated space, it makes room for new documents by overwriting the oldest documents. MongoDB capped collections can be queried using tailable cursors, which are similar to the Unix tail -f command. Your application continues to retrieve documents as they are inserted into the collection. I also like to call this a “continuous query”. Now that we have seen the basics, let’s implement it.

Building a very basic application

Create the collection

The first thing to do is to create a new capped collection:

$> mongo
use chat
db.messages.drop()
db.createCollection('messages', { capped: true, size: 10000 })
db.messages.insert({"type":"init"});

For simplicity, I am using the MongoDB shell to create the messages collection in the chat database. You can see how to create a capped collection with 2 options:

capped: true : this one is obvious
size: 10000 : this is a mandatory option when you create a capped collection. This is the maximum size in bytes (it will be raised to a multiple of 256).

Finally, I insert a dummy document; this is also mandatory to be able to get the tailable cursor to work.

Write an application

Now that we have the collection, let’s write some code. First in Node.js:

var mongo = require("mongodb");

var mongodbUri = "mongodb://127.0.0.1/chat";

mongo.MongoClient.connect(mongodbUri, function (err, db) {

  db.collection('messages', function(err, collection) {
    // open a tailable cursor
    console.log("== open tailable cursor");
    collection.find({}, {tailable:true, awaitdata:true, numberOfRetries:-1})
      .sort({ $natural: 1 })
      .each(function(err, doc) {
        console.log(doc);
      })
  });

});

First I connect to my local MongoDB instance and get the messages collection. Then I execute a find, using a tailable cursor, with specific options:

{} : no filter, so all documents will be returned
tailable: true : this one is clear, to say that we want to create a tailable cursor
awaitdata: true : to say that we wait for data before returning no data to the client
numberOfRetries: -1 : the number of times to retry on time out; -1 is infinite, so the application will keep trying

The sort forces the natural order; then the cursor returns the data, and the document is printed in the console each time it is inserted.

Test the Application

Start the application:

node app.js

Insert documents in the messages collection, from the shell or any other tool.
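If you prefer to trigger a test message from Java instead of the shell, a minimal insert using the same legacy driver API as the Java example further below might look like this (the text field is just an illustrative payload; any running tailable-cursor client should print the document immediately):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.MongoClient;

public class InsertTestMessage {

    public static void main(String[] args) throws Exception {
        MongoClient mongoClient = new MongoClient();
        DBCollection messages = mongoClient.getDB("chat").getCollection("messages");

        // the tailable-cursor readers pick this up as soon as it is written
        messages.insert(new BasicDBObject("type", "message").append("text", "Hello from Java!"));

        mongoClient.close();
    }
}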
You can find below a screencast showing this very basic application working:The source code of this sample application in this Github repository, take the step-01 branch; clone this branch using: git clone -b step-01 https://github.com/tgrall/mongodb-realtime-pubsub.git I have also created a gist showing the same behavior in Java: package org.mongodb.demos.tailable;import com.mongodb.*;public class MyApp {public static void main(String[] args) throws Exception {MongoClient mongoClient = new MongoClient(); DBCollection coll = mongoClient.getDB("chat").getCollection("messages");DBCursor cur = coll.find().sort(BasicDBObjectBuilder.start("$natural", 1).get()) .addOption(Bytes.QUERYOPTION_TAILABLE | Bytes.QUERYOPTION_AWAITDATA);System.out.println("== open cursor ==");Runnable task = () -> { System.out.println("\tWaiting for events"); while (cur.hasNext()) { DBObject obj = cur.next(); System.out.println( obj );} }; new Thread(task).start(); } } Mathieu Ancelin has written it in Scala: package org.mongodb.demos.tailableimport reactivemongo.api._ import reactivemongo.bson._ import play.api.libs.iteratee.Iteratee import scala.concurrent.ExecutionContext.Implicits.global import reactivemongo.api.collections.default.BSONCollectionobject Capped extends App {val driver = new MongoDriver val connection = driver.connection(List("localhost")) val db = connection("chat") val collection = db.collection[BSONCollection]("messages")val cursor = collection .find(BSONDocument()) .options(QueryOpts().tailable.awaitData) .cursor[BSONDocument]println("== open tailable cursor") cursor.enumerate().apply(Iteratee.foreach { doc => println(s"Document inserted: ${BSONDocument.pretty(doc)}") }) } Add some user interface We have the basics of a publish subscribe based application:publish by inserting document into MongoDB subscribe by reading document using a tailable cursorLet’s now push the messages to a user using for example socket.io. For this we need to:add socket.io dependency to our node project add HTML page to show messagesThe following gists shows the updated version of the app.js and index.html, let’s take a look: "use strict";var mongo = require("mongodb"), fs = require("fs"), // to read static files io = require("socket.io"), // socket io server http = require("http");var mongodbUri = "mongodb://127.0.0.1/chat";var app = http.createServer(handler); io = io.listen(app); app.listen(3000); console.log("http server on port 3000");function handler(req, res){ fs.readFile(__dirname + "/index.html", function (err, data) { res.writeHead(200); res.end(data); }); }mongo.MongoClient.connect (mongodbUri, function (err, db) {db.collection('messages', function(err, collection) {// open socket io.sockets.on("connection", function (socket) { // open a tailable cursor console.log("== open tailable cursor"); collection.find({}, {tailable:true, awaitdata:true, numberOfRetries:-1}).sort({ $natural: 1 }).each(function(err, doc) { console.log(doc); // send message to client if (doc.type == "message") { socket.emit("message",doc); } })});});});The node application has been updated with the following features:lines #4-7: import of http, file system and socket.io lines #10-21: configure and start the http server. You can see that I have created a simple handler to serve static html file lines #28-39: I have added support to Web socket using socket.io where I open the tailable cursor, and push/emit the messages on the socket.As you can see, the code that I have added is simple. 
I do not use any advanced framework, nor manage exceptions, this for simplicity and readability. Let’s now look at the client (html page). <!doctype html> <html> <head> <title>MongoDB pub/sub</title> <style> * { margin: 0; padding: 10px; box-sizing: border-box; } body { font: 13px Helvetica, Arial; } #messages { list-style-type: none; margin: 0; padding: 0; } #messages li { padding: 5px 10px; } #messages li:nth-child(odd) { background: #eee; } </style> </head> <body> <h2>MongoDB/Socket.io demonstration</h2><ul id="messages"></ul><script src="https://cdn.socket.io/socket.io-1.2.0.js"></script> <script src="https://code.jquery.com/jquery-2.1.3.min.js"></script> <script> var socket = io(); socket.on('message', function(doc){ $('#messages').append($('<li>').text(doc.text)); }); </script> </body> </html>Same as the server, it is really simple and does not use any advanced libraries except socket.io client (line #18) and JQuery (line #19), and used:on line #22 to received messages ans print them in the page using JQuery on line #23I have created a screencast of this version of the application:You can find the source code in this Github repository, take the step-02 branch; clone this branch using: git clone -b step-02 https://github.com/tgrall/mongodb-realtime-pubsub.git Conclusion In this first post, we have:learned about tailable cursor and capped collection see how it can be used to develop a pub/sub application expose this into a basic web socket based applicationReference: How to create a pub/sub application with MongoDB ? Introduction from our JCG partner Tugdual Grall at the Tug’s Blog blog....

New Javadoc Tags @apiNote, @implSpec and @implNote

If you’re already using Java 8, you might have seen some new Javadoc tags: @apiNote, @implSpec and @implNote. What’s up with them? And what do you have to do if you want to use them? Overview This post will have a quick view at the tags’ origin and current status. It will then explain their meaning and detail how they can be used with IDEs, the Javadoc tool and via Maven’s Javadoc plugin. I created a demo project on GitHub to show some examples and the necessary additions to Maven’s pom.xml. To make things easier for the Maven-averse, it already contains the generated javadoc. Context Origin The new Javadoc tags are a byproduct of JSR-335, which introduced lambda expressions. They came up in the context of default methods because these required a more standardized and fine grained documentation. In January 2013 Brian Goetz gave a motivation and made a proposal for these new tags. After a short discussion it turned into a feature request three weeks later. By April the JDK Javadoc maker was updated and the mailing list informed that they were ready to use. Current Status It is important to note that the new tags are not officially documented (they are missing in the official list of Javadoc tags) and thus subject to change. Furthermore, the implementer Mike Duigou wrote: There are no plans to attempt to popularize these particular tags outside of use by JDK documentation. So while it is surely beneficial to understand their meaning, teams should carefully consider whether using them is worth the risk which comes from relying on undocumented behavior. Personally, I think so as I deem the considerable investment already made in the JDK as too high to be reversed. It would also be easy to remove or search/replace their occurrences in a code base if that became necessary. @apiNote, @implSpec and @implNoteLet’s cut to the heart of things. What is the meaning of these new tags? And where and how are they used? Meaning The new Javadoc tags are explained pretty well in the feature request’s description (I changed the layout a little): There are lots of things we might want to document about a method in an API. Historically we’ve framed them as either being “specification” (e.g., necessary postconditions) or “implementation notes” (e.g., hints that give the user an idea what’s going on under the hood.) But really, there are four boxes (and we’ve been cramming them into two, or really 1.5): { API, implementation } x { specification, notes } (We sometimes use the terms normative/informative to describe the difference between specification/notes.) Here are some descriptions of what belongs in each box. 1. API specification. This is the one we know and love; a description that applies equally to all valid implementations of the method, including preconditions, postconditions, etc. 2. API notes. Commentary, rationale, or examples pertaining to the API. 3. Implementation specification. This is where we say what it means to be a valid default implementation (or an overrideable implementation in a class), such as “throws UOE.” Similarly this is where we’d describe what the default for putIfAbsent does. It is from this box that the would-be-implementer gets enough information to make a sensible decision as to whether or not to override. 4. Implementation notes. Informative notes about the implementation, such as performance characteristics that are specific to the implementation in this class in this JDK in this version, and might change. 
These things are allowed to vary across platforms, vendors and versions. The proposal: add three new Javadoc tags, @apiNote, @implSpec, and @implNote. (The remaining box, API Spec, needs no new tag, since that’s how Javadoc is used already.) @impl{spec,note} can apply equally well to a concrete method in a class or a default method in an interface. So the new Javadoc tags are meant to categorize the information given in a comment. It distinguishes between the specification of the method’s, class’s, … behavior (which is relevant for all users of the API – this is the “regular” comment and would be @apiSpec if it existed) and other, more ephemeral or less universally useful documentation. More concretely, an API user can not rely on anything written in @implSpec or @implNote, because these tags are concerned with this implementation of the method, saying nothing about overriding implementations. This shows that using these tags will mainly benefit API designers. But even Joe Developer, working on a large project, can be considered a designer in this context as his code is surely consumed and/or changed by his colleagues at some point in the future. In that case, it helps if the comment clearly describes the different aspects of the API. E.g. is “runs in linear time” part of the method’s specification (and should hence not be degraded) or a detail of the current implementation (so it could be changed). Examples Let’s see some examples! First from the demo project to show some rationale behind how to use the tags and then from the JDK to see them in production. The Lottery The project contains an interface Lottery from some fictitious library. The interface was first included in version 1.0 of the library but a new method has to be added for version 1.1. To keep backwards compatibility this is a default method but the plan is to make it abstract in version 2.0 (giving customers some time to update their code). With the new tags the method’s documentation clearly distinguishes the meanings of its documentation: Documentation of Lottery.pickWinners /** * Picks the winners from the specified set of players. * <p> * The returned list defines the order of the winners, where the first * prize goes to the player at position 0. The list will not be null but * can be empty. * * @apiNote This method was added after the interface was released in * version 1.0. It is defined as a default method for compatibility * reasons. From version 2.0 on, the method will be abstract and * all implementations of this interface have to provide their own * implementation of the method. * @implSpec The default implementation will consider each player a winner * and return them in an unspecified order. * @implNote This implementation has linear runtime and does not filter out * null players. * @param players * the players from which the winners will be selected * @return the (ordered) list of the players who won; the list will not * contain duplicates * @since 1.1 */ default List<String> pickWinners(Set<String> players) { return new ArrayList<>(players); } JDK The JDK widely uses the new tags. Some examples:ConcurrentMap:Several @implSpecs defining the behavior of the default implementations, e.g. on replaceAll. Interesting @implNotes on getOrDefault and forEach. Repeated @implNotes on abstract methods which have default implementations in Map documenting that “This implementation intentionally re-abstracts the inappropriate default provided in Map.”, e.g. 
replace.Objects uses @apiNote to explain why the seemingly useless methods isNull and nonNull were added. The abstract class Clock uses @implSpec and @implNote in its class comment to distinguish what implementations must beware of and how the existing methods are implemented.Inheritance When an overriding method has no comment or inherits its comment via {@inheritDoc}, the new tags are not included. This is a good thing, since they will not generally apply. To inherit specific tags, just add the snippet @tag {@inheritDoc} to the comment. The implementing classes in the demo project examine the different possibilities. The README gives an overview. Tool Support IDEs You will likely want to see the improved documentation (the JDK’s and maybe your own) in your IDE. So how do the most popular ones currently handle them? Eclipse displays the tags and their content but provides no special rendering, like ordering or prettifying the tag headers. There is a feature request to resolve this. IntellyJ‘s current community edition 14.0.2 displays neither the tags nor their content. This was apparently solved on Christmas Eve (see this ticket) so I guess the next version will not have this problem anymore. I cannot say anything regarding the rendering, though. NetBeans also shows neither tags nor content and I could find no ticket asking to fix this. All in all not a pretty picture but understandable considering the fact that this is no official Javadoc feature. Generating Javadoc If you start using those tags in your own code, you will soon realize that generating Javadoc fails because of the unknown tags. That is easy to fix, you just have to tell it how to handle them. Command Line This can be done via the command line argument -tag. The following arguments allow those tags everywhere (i.e. on packages, types, methods, …) and give them the headers currently used by the JDK: Telling Javadoc About The New Tags -tag "apiNote:a:API Note:" -tag "implSpec:a:Implementation Requirements:" -tag "implNote:a:Implementation Note:" (I read the official documentation as if those arguments should be -tag apiNote:a:”API Note:” [note the quotation marks] but that doesn’t work for me. If you want to limit the use of the new tags or not include them at all, the documentation of -tag tells you how to do that.) By default all new tags are added to the end of the generated doc, which puts them below, e.g., @param and @return. To change this, all tags have to be listed in the desired order, so you have to add the known tags to the list below the three above: Listing The Known Tags After The New Ones -tag "param" -tag "return" -tag "throws" -tag "since" -tag "version" -tag "serialData" -tag "see" Maven Maven’s Javadoc plugin has a configuration setting tag which is used to verbosely create the same command line arguments. The demo project on GitHub shows how this looks like in the pom. Reflection We have seen that the new Javadoc tags @apiNote, @implSpec and @implNote were added to allow the division of documentation into parts with different semantics. Understanding them is helpful to every Java developer. API designers might chose to employ them in their own code but must keep in mind that they are still undocumented and thus subject to change. 
We finally took a look at some of the involved tools and saw that IDE support needs to improve but the Javadoc tool and the Maven plugin can be parameterized to make full use of them.Reference: New Javadoc Tags @apiNote, @implSpec and @implNote from our JCG partner Nicolai Parlog at the CodeFx blog....
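As a concrete illustration of the Inheritance section above, here is a hedged sketch of how an implementing class could pull the @implSpec text of Lottery.pickWinners into its own Javadoc via @implSpec {@inheritDoc}; the class name and its trivial body are invented for illustration:

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class EveryoneWinsLottery implements Lottery {

    /**
     * {@inheritDoc}
     *
     * @implSpec {@inheritDoc}
     */
    @Override
    public List<String> pickWinners(Set<String> players) {
        // matches the inherited specification: every player wins, order unspecified
        return new ArrayList<>(players);
    }
}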

Multiple Return Statements

I once heard that in the past people strived for methods to have a single exit point. I understood this was an outdated approach and never considered it especially noteworthy. But lately I’ve come in contact with some developers who still adhere to that idea (the last time was here) and it got me thinking. So for the first time, I really sat down and compared the two approaches. Overview The first part of the post will repeat the arguments for and against multiple return statements. It will also identify the critical role clean code plays in assessing these arguments. The second part will categorize the situations which benefit from returning early. To not always write about “methods with multiple return statements” I’ll call the approach to structure methods that way a pattern. While this might be a little overboard it surely is more concise. The Discussion I’m discussing whether a method should always run to its last line, from where it returns its result, or can have multiple return statements and “return early”. This is no new discussion of course. See, for example, Wikipedia, Hacker Chick or StackOverflow. Structured Programming The idea that a single return statement is desirable stems from the paradigm of structured programming, developed in the 1960s. Regarding subroutines, it promotes that they have a single entry and a single exit point. While modern programming languages guarantee the former, the latter is somewhat outdated for several reasons. The main problem the single exit point solved were memory or resource leaks. These occurred when a return statement somewhere inside a method prevented the execution of some cleanup code which was located at its end. Today, much of that is handled by the language runtime (e.g. garbage collection) and explicit cleanup blocks can be written with try-catch-finally. So now the discussion mainly revolves around readability. Readability Sticking to a single return statement can lead to increased nesting and require additional variables (e.g. to break loops). On the other hand, having a method return from multiple points can lead to confusion as to its control flow and thus make it less maintainable. It is important to notice that these two sides behave very differently with respect to the overall quality of the code. Consider a method which adheres to clean coding guidelines: it is short and to the point with a clear name and an intention revealing structure. The relative loss in readability by introducing more nesting and more variables is very noticeable and might muddy the clean structure. But since the method can be easily understood due to its brevity and form, there is no big risk of overlooking any return statement. So even in the presence of more than one, the control flow remains obvious. Contrast this with a longer method, maybe part of a complicated or optimized algorithm. Now the situation is reversed. The method already contains a number of variables and likely some levels of nesting. Introducing more has little relative cost in readability. But the risk of overlooking one of several returns and thus misunderstanding the control flow is very real. So it comes down to the question whether methods are short and readable. If they are, multiple return statements will generally be an improvement. If they aren’t, a single return statement is preferable. Other Factors Readability might not be the only factor, though. Another aspect of this discussion can be logging. 
In case you want to log return values but do not resort to aspect oriented programming, you have to manually insert logging statements at the methods’ exit point(s). Doing this with multiple return statements is tedious and forgetting one is easy. Similarly, you might want to prefer a single exit point if you want to assert certain properties of your results before returning from the method. Situations For Multiple Returns Statements There are several kinds of situations in which a method can profit from multiple return statements. I tried to categorize them here but make no claim to have a complete list. (If you come up with another recurring situation, leave a comment and I will include it.) Every situation will come with a code sample. Note that these are shortened to bring the point across and can be improved in several ways.Guard Clauses Guard clauses stand at the beginning of a method. They check its arguments and for certain special cases immediately return a result. Guard Clause Against Null Or Empty Collections private Set<T> intersection(Collection<T> first, Collection<T> second) { // intersection with an empty collection is empty if (isNullOrEmpty(first) || isNullOrEmpty(second)) return new HashSet<>();return first.stream() .filter(second::contains) .collect(Collectors.toSet()); } Excluding edge cases at the beginning has several advantages:it cleanly separates handling of special cases and regular cases, which improves readability it provides a default location for additional checks, which preserves readability it makes implementing the regular cases less error prone it might improve performance for those special cases (though this is rarely relevant)Basically all methods for which this pattern is applicable will benefit from its use. A noteworthy proponent of guard clauses is Martin Fowler, although I would consider his example on the edge of branching (see below). Branching Some methods’ responsibilities demand to branch into one of several, often specialized subroutines. It is usually best to implement these subroutines as methods in their own right. The original method is then left with the only responsibility to evaluate some conditions and call the correct routine. Delegating To Specialized Methods public Offer makeOffer(Customer customer) { boolean isSucker = isSucker(customer); boolean canAffordLawSuit = customer.canAfford( legalDepartment.estimateLawSuitCost());if (isSucker) { if (canAffordLawSuit) return getBigBucksButStayLegal(customer); else return takeToTheCleaners(customer); } else { if (canAffordLawSuit) return getRid(customer); else return getSomeMoney(customer); } } (I know that I could leave out all else-lines. Someday I might write a post explaining why in cases like this, I don’t.) 
Using multiple return statements has several advantages over a result variable and a single return:the method more clearly expresses its intend to branch to a subroutine and simply return its result in any sane language, the method does not compile if the branches do not cover all possibilities (in Java, this can also be achieved with a single return if the variable is not initialized to a default value) there is no additional variable for the result, which would span almost the whole method the result of the called method can not be manipulated before being returned (in Java, this can also be achieved with a single return if the variable is final and its class immutable; the latter is not obvious to the reader, though) if a switch statement is used in a language with fall through (like Java), immediate return statements save a line per case because no break is needed, which reduces boilerplate and improves readabilityThis pattern should only be applied to methods which do little else than branching. It is especially important that the branches cover all possibilities. This implies that there is no code below the branching statements. If there were, it would take much more effort to reason about all paths through the method. If a method fulfills these conditions, it will be small and cohesive, which makes it easy to understand. Cascading Checks Sometimes a method’s behavior mainly consists of multiple checks where each check’s outcome might make further checks unnecessary. In that case, it is best to return as soon as possible (maybe after each check). Cascading Checks While Looking For an Anchor Parent private Element getAnchorAncestor(Node node) { // if there is no node, there can be no anchor, // so return null if (node == null) return null;// only elements can be anchors, // so if the node is no element, recurse to its parent boolean nodeIsNoElement = !(node instanceof Element); if (nodeIsNoElement) return getAnchorAncestor(node.getParentNode());// since the node is an element, it might be an anchor Element element = (Element) node; boolean isAnchor = element.getTagName().equalsIgnoreCase("a"); if (isAnchor) return element;// if the element is no anchor, recurse to its parent return getAnchorAncestor(element.getParentNode()); } Other examples of this are the usual implementations of equals or compareTo in Java. They also usually consist of a cascade of checks where each check might determine the method’s result. If it does, the value is immediately returned, otherwise the method continues with the next check. Compared to a single return statement, this pattern does not require you to jump through hoops to prevent ever deeper indentation. It also makes it straight forward to add new checks and place comments before a check-and-return block. As with branching, multiple return statements should only be applied to methods which are short and do little else. The cascading checks should be their central, or better yet, their only content (besides input validation). If a check or the computation of the return value needs more than two or three lines, it should be refactored into a separate method. Searching Where there are data structures, there are items with special conditions to be found in them. Methods which search for them often look similar. If such a method encounters the item it was searching for, it is often easiest to immediately return it. Immediately Returning The Found Element private <T> T findFirstIncreaseElement(Iterable<T> items, Comparator<? 
super T> comparator) { T lastItem = null; for (T currentItem : items) { boolean increase = increase(lastItem, currentItem, comparator); lastItem = currentItem;if (increase) { return currentItem; } }return null; } Compared to a single return statement, this saves us from finding a way to get out of the loop. This has the following advantages:there is no additional boolean variable to break the loop there is no additional condition for the loop, which is easily overlooked (especially in for loops) and thus fosters bugs the last two points together keep the loop much easier to understand there is most likely no additional variable for the result, which would span almost the whole methodLike most patterns which use multiple return statements, this also requires clean code. The method should be small and have no other responsibility but searching. Nontrivial checks and result computations should have their own methods. Reflection We have seen the arguments for and against multiple returns statements and the critical role clean code plays. The categorization should help to identify recurring situations in which a method will benefit from returning early.Reference: Multiple Return Statements from our JCG partner Nicolai Parlog at the CodeFx blog....
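To add one more concrete instance of the cascading-checks pattern mentioned above, a typical equals implementation returns as soon as any check settles the result; this generic sketch (the Point class is invented for illustration) shows the shape:

public final class Point {

    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object other) {
        // each check can settle the result, so we return immediately
        if (this == other)
            return true;
        if (!(other instanceof Point))
            return false;
        Point point = (Point) other;
        return this.x == point.x && this.y == point.y;
    }

    @Override
    public int hashCode() {
        return 31 * x + y;
    }
}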

Pushing the Limits – Howto use AeroGear Unified Push for Java EE and Node.js

At the end of 2014 the AeroGear team announced the availability of the Red Hat JBoss Unified Push Server on xPaaS. Let’s take a closer look!

Overview

The Unified Push Server allows developers to send native push messages to Apple’s Push Notification Service (APNS) and Google’s Cloud Messaging (GCM). It features a built-in administration console that makes it easy for developers to create and manage push-related aspects of their applications for any mobile development environment. It includes client SDKs (iOS, Android, & Cordova), and a REST-based sender service with an available Java sender library. The following image shows how the Unified Push Server enables applications to send native push messages to Apple’s Push Notification Service (APNS) and Google’s Cloud Messaging (GCM).

Architecture

The xPaaS offering is deployed in a managed EAP container, while the server itself is based on standard Java EE APIs like:

JAX-RS
EJB
CDI
JPA

Another critical component is Keycloak, which is used for user management and authentication. The heart of the Unified Push Server is its public RESTful endpoints. These services are the entry point for all mobile devices as well as for 3rd-party business applications when they want to issue a push notification to be delivered to the mobile devices registered with the server.

Backend integration

Being based on the JAX-RS standard makes integration with any backend platform very easy. It just needs to speak HTTP…

Java EE

The project has a Java library to send push notification requests from any Java-based backend. The fluent builder API is used to set up the integration with the desired Unified Push Server. With the help of CDI we can extract that into a very simple factory:

@Produces
public PushSender setup() {
  PushSender defaultPushSender = DefaultPushSender.withRootServerURL("http://localhost:8080/ag-push")
    .pushApplicationId("c7fc6525-5506-4ca9-9cf1-55cc261ddb9c")
    .masterSecret("8b2f43a9-23c8-44fe-bee9-d6b0af9e316b")
    .build();
  return defaultPushSender;
}

Next we would need to inject the PushSender into a Java class which is responsible for sending a push request to the Unified Push Server:

@Inject
private PushSender sender;
...
public void sendPushNotificationRequest() {
  ...
  UnifiedMessage unifiedMessage....;
  sender.send(unifiedMessage);
}

The API for the UnifiedMessage leverages the builder pattern as well:

UnifiedMessage unifiedMessage = UnifiedMessage.withMessage()
  .alert("Hello from Java Sender API!")
  .sound("default")
  .userData("foo-key", "foo-value")
  ...
  .build();

Node.js

Being a RESTful server does not limit the integration to traditional platforms like Java EE. The AeroGear project also has a Node.js library. Below is a short example of how to send push notifications from a Node.js based backend:

// setup the integration with the desired Unified Push Server
var agSender = require( "unifiedpush-node-sender" ),
  settings = {
    url: "http://localhost:8080/ag-push",
    applicationId: "c7fc6525-5506-4ca9-9cf1-55cc261ddb9c",
    masterSecret: "8b2f43a9-23c8-44fe-bee9-d6b0af9e316b"
  };

// build the push notification payload:
message = {
  alert: "Hello from Node.js Sender API!",
  sound: "default",
  userData: {
    "foo-key": "foo-value"
  }
};

// send it to the server:
agSender.Sender( settings ).send( message, options ).on( "success", function( response ) {
  console.log( "success called", response );
});

What’s next?

The Unified Push Server on xPaaS supports Android and iOS at the moment, but the AeroGear team is looking to enhance the service for more mobile platforms.
The community project is currently supporting the following platforms:

Android
Chrome Packaged Apps
iOS
SimplePush / Firefox OS
Windows

There are plans for adding support for the Safari browser and Amazon’s Device Messaging (ADM).

Getting started

To see the Unified Push Server in action, check out the video below. The xPaaS release comes with different demos for Android, iOS and Apache Cordova clients as well as a Java EE based backend demo. You can find the downloads here. More information can be found on the Unified Push homepage. You can reach out to the AeroGear team via IRC or email. Have fun and enjoy!

Reference: Pushing the Limits – Howto use AeroGear Unified Push for Java EE and Node.js from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

Don’t Remove Listeners – Use ListenerHandles

Listening to an observable instance and reacting to its changes is fun. Doing what is necessary to interrupt or end this listening is way less fun. Let’s have a look at where the trouble comes from and what can be done about it. Overview The post will first describe the situation before discussing the common approach and what’s wrong with it. It will then present an easy abstraction which solves most problems. While the examples use Java, the deficiency is present in many other languages as well. The proposed solution can be applied in all object oriented languages. Those too lazy to implement the abstraction in Java themselves, can use LibFX. The SituationSay we want to listen to the changes of a property’s value. That’s straight forward: Simple Case Which Does Not Support Removal private void startListeningToNameChanges(Property<String> name) { name.addListener((obs, oldValue, newValue) -> nameChanged(newValue)); } Now assume we want to interrupt listening during certain intervals or stop entirely. Keeping References Around The most common approach to solve this is to keep a reference to the listener and another one to the property around. Depending on the concrete use case, the implementations will differ, but they all come down to something like this: Removing A Listener The Default Way private Property<String> listenedName; private ChangeListener<String> nameListener;...private void startListeningToNameChanges(Property<String> name) { listenedName = name; nameListener = (obs, oldValue, newValue) -> nameChanged(newValue); listenedName.addListener(nameListener); }private void stopListeningToNameChanges() { listenedName.removeListener(nameListener); } While this might look ok, I’m convinced it’s actually a bad solution (albeit being the default one). First, the extra references clutter the code. It is hard to make them express the intent of why they are kept around, so they reduce readability. Second, they increase complexity by adding a new invariant to the class: The property must always be the one to which the listener was added. Otherwise the call to removeListener will silently do nothing and the listener will still be executed on future changes. Unriddling this can be nasty. While upholding that invariant is easy if the class is short, it can become a problem if it grows more complex. Third, the references (especially the one to the property) invite further interaction with them. This is likely not intended but nothing keeps the next developer from doing it anyway (see the first point). And if someone does start to operate on the property, the second point becomes a very real risk. These aspects already disqualify this from being the default solution. But there is more! Having to do this in many classes leads to code duplication. And finally, the implementation above contains a race condition. ListenerHandle Most issues come from handling the observable and the listener directly in the class which needs to interrupt/end the listening. This is unnecessary and all of these problems go away with a simple abstraction: the ListenerHandle. The ListenerHandle public interface ListenerHandle { void attach(); void detach(); } The ListenerHandle holds on to the references to the observable and the listener. Upon calls to attach() or detach() it either adds the listener to the observable or removes it. For this to be embedded in the language, all methods which currently add listeners to observables should return a handle to that combination. 
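To give an idea of what implementing such a handle could look like, here is a minimal, non-thread-safe sketch for JavaFX’s ObservableValue/ChangeListener combination; it is illustrative only and not the actual LibFX implementation:

import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;

public final class ChangeListenerHandle<T> implements ListenerHandle {

    private final ObservableValue<T> observable;
    private final ChangeListener<? super T> listener;
    private boolean attached;

    public ChangeListenerHandle(ObservableValue<T> observable, ChangeListener<? super T> listener) {
        this.observable = observable;
        this.listener = listener;
    }

    @Override
    public void attach() {
        if (!attached) {
            observable.addListener(listener);
            attached = true;
        }
    }

    @Override
    public void detach() {
        if (attached) {
            observable.removeListener(listener);
            attached = false;
        }
    }
}

The boolean flag keeps repeated attach or detach calls harmless, so the handle never adds the same listener twice.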
Now all that is left to do is to actually implement handles for all possible scenarios. Or convince those developing your favorite programming language to do it. This is left as an exercise to the reader.

Note that this solves all problems described above with the exception of the race condition. There are two ways to tackle this:

- handle implementations could be inherently thread-safe
- a synchronizing decorator could be implemented

ListenerHandles in LibFX

As a Java developer you can use LibFX, which supports listener handles on three levels.

Features Are Aware Of ListenerHandles

Every feature of LibFX which can do so without conflicting with the Java API returns a ListenerHandle when adding listeners. Take the WebViewHyperlinkListener as an example:

Getting a 'ListenerHandle' to a 'WebViewHyperlinkListener'

WebView webView;

ListenerHandle eventProcessingListener = WebViews
    .addHyperlinkListener(webView, this::processEvent);

Utilities For JavaFX

Since LibFX has strong connections to JavaFX (who would have thought!), it provides a utility class which adds listeners to observables and returns handles. This is implemented for all observable/listener combinations which exist in JavaFX. As an example, let's look at the combination ObservableValue<T> / ChangeListener<? super T>:

Some Methods In 'ListenerHandles'

public static <T> ListenerHandle createAttached(
        ObservableValue<T> observableValue,
        ChangeListener<? super T> changeListener);

public static <T> ListenerHandle createDetached(
        ObservableValue<T> observableValue,
        ChangeListener<? super T> changeListener);

ListenerHandleBuilder

In all other cases, i.e. for any observable/listener combination not covered above, a handle can be created with a builder:

Creating a 'ListenerHandle' For Custom Classes

// These classes do not need to implement any special interfaces.
// Their only connection are the methods 'doTheAdding' and 'doTheRemoving',
// which the builder does not need to know about.
MyCustomObservable customObservable;
MyCustomListener customListener;

ListenerHandles
    .createFor(customObservable, customListener)
    .onAttach((obs, listener) -> obs.doTheAdding(listener))
    .onDetach((obs, listener) -> obs.doTheRemoving(listener))
    .buildAttached();

Reactive Programming

While this is no post on reactive programming, it should still be mentioned. Check out ReactiveX (for many languages including Java, Scala, Python, C++, C# and more) or ReactFX (or this introductory post) for some implementations.

Reflection

We have seen that the default approach to removing listeners from observables produces a number of hazards and needs to be avoided. The listener handle abstraction provides a clean way around many/all of these problems, and LibFX provides an implementation.

Reference: Don't Remove Listeners – Use ListenerHandles from our JCG partner Nicolai Parlog at the CodeFx blog.

Job Search Trap: It Doesn’t Matter What I Look Like

If you are a technical person, you probably dress in a casual way for work. I do. When it's time to meet people, either when you network or when you interview, do you wear the same clothes that you wear to work?

When I meet people at networking meetings, they are casual. And I wonder when some of them last bathed or brushed their teeth. That's a problem.

When you look for a job and you are out in public, you are networking. You don't have to be overt, as in telling everyone, "I'm looking for a job. Know of anything?" On the other hand, every time people see you, they judge you. You need to be ready for that judgement.

How do you look? Are you well-groomed? This includes:

- Having your hair cut properly. It doesn't matter if you have long hair. Is your hair cut/styled so people can see your face? Is your hair clean and neat?
- Do your clothes fit? Do all buttons button? Do all zippers zip? Are your clothes clean and not wrinkled?
- How do your teeth look? Do you need to see a dentist? I've met a number of job hunters who were missing teeth and didn't have dentures or bridges. They also looked like they needed to see a dentist.
- Do you remember to clean your glasses every so often? (I don't. That's why I'm so aware of this one.) Are your glasses frames held together with tape? Fix your glasses if you need to. No tape. Clean them, so other people can see your eyes.
- Are your clothes professional? Not too casual. No sandals. No short skirts or low-cut shirts for ladies.

I met a gentleman a couple of months ago who arrived at a networking meeting with wrinkled clothes, and hair that looked like he hadn't combed it in years. He was looking for a senior development position. He said he'd had trouble in the interview stage. No requests for a second interview, no offers. What was he doing wrong?

I asked him a number of questions about his technical background. He sounded like a great guy. He told me he nailed the phone screens. I then asked how he dressed for the interview. "Just like this. These are my interview clothes."

I asked him if he wanted feedback. He said he did. I told him the above and explained that it did matter how he looked. He was not happy.

I recently heard from him. He's now made it to several second-round interviews. He might even get a job offer from one organization. Here's the best part: he feels better about himself. He doesn't feel as if he's begging for a job. Because he changed his appearance from scruffy to professional, he feels better. He's not wearing suits; he's wearing chinos or khakis and nice shirts and sweaters. Everything is clean and ironed. Everything fits. His hair is long, and it's tidy.

I can't guarantee you a job if you look as if you take care of yourself. However, if you do take care of your appearance, you will project a self-confident persona when you interview. That's what people want to see. It does matter what you look like. Make your image reflect your best self.

Reference: Job Search Trap: It Doesn't Matter What I Look Like from our JCG partner Johanna Rothman at the Managing Product Development blog.

We can’t measure Programmer Productivity… or can we?

If you go to Google and search for "measuring software developer productivity" you will find a whole lot of nothing. Seriously – nothing.
Nick Hodges, Measuring Developer Productivity

By now we should all know that we don't know how to measure programmer productivity. There is no clear-cut way to measure which programmers are doing a better or faster job, or to compare productivity across teams. We "know" who the stars on a team are, who we can depend on to deliver, and who is struggling. And we know if a team is kicking ass – or dragging their asses. But how do we prove it? How can we quantify it? All sorts of stupid and evil things can happen when you try to measure programmer productivity. But let's do it anyways.

We're writing more code, so we must be more productive

Developers are paid to write code. So why not measure how much code they write – how many lines of code get delivered? Because we've known since the 1980s that this is a lousy way to measure productivity. Lines of code can't be compared across languages (of course), or even between programmers using the same language working in different frameworks or following different styles. Which is why Function Points were invented – an attempt to standardize and compare the size of work in different environments. Sounds good, but Function Points haven't made it into the mainstream, and probably never will – very few people know how Function Points work, how to calculate them, and how they should be used.

The more fundamental problem is that measuring productivity by lines (or Function Points or other derivatives) typed doesn't make any sense. A lot of important work in software development, the most important work, involves thinking and learning – not typing. The best programmers spend a lot of time understanding and solving hard problems, or helping other people understand and solve hard problems, instead of typing. They find ways to simplify code and eliminate duplication. And a lot of the code that they do write won't count anyways, as they iterate through experiments and build prototypes and throw all of it away in order to get to an optimal solution.

The flaws in these measures are obvious if we consider the ideal outcomes: the fewest lines of code possible in order to solve a problem, and the creation of simplified, common processes and customer interactions that reduce complexity in IT systems. Our most productive people are those that find ingenious ways to avoid writing any code at all.
Jez Humble, The Lean Enterprise

This is clearly one of those cases where size doesn't matter.

We're making (or saving) more money, so we must be working better

We could try to measure productivity at a high level using profitability or financial return on what each team is delivering, or some other business measure such as how many customers are using the system – if developers are making more money for the business (or saving more money), they must be doing something right. Using financial measures seems like a good idea at the executive level, especially now that "every company is a software company". These are organizational measures that developers should share in. But they are not effective – or fair – measures of developer productivity. Too many business factors are outside of the development team's control. Some products or services succeed even if the people delivering them are doing a lousy job, or fail even if the team did a great job.
Focusing on cost savings in particular leads many managers to cut people and try "to do more with less" instead of investing in real productivity improvements. And as Martin Fowler points out, there is a time lag, especially in large organizations – it can sometimes take months or years to see real financial results from an IT project, or from productivity improvements. We need to look somewhere else to find meaningful productivity metrics.

We're going faster, so we must be getting more productive

Measuring speed of development – velocity in Agile – looks like another way to measure productivity at the team level. After all, the point of software development is to deliver working software. The faster that a team delivers, the better. But velocity (how much work, measured in story points or feature points or ideal days, that the team delivers in a period of time) is really a measure of predictability, not productivity. Velocity is intended to be used by a team to measure how much work they can take on, to calibrate their estimates and plan their work forward.

Once a team's velocity has stabilized, you can measure changes in velocity within the team as a relative measure of productivity. If the team's velocity is decelerating, it could be an indicator of problems in the team or the project or the system. Or you can use velocity to measure the impact of process improvements, to see if training or new tools or new practices actually make the team's work measurably faster. But you will have to account for changes in the team, as people join or leave. And you will have to remember that velocity is a measure that only makes sense within a team – you can't compare velocity between teams.

Although this doesn't stop people from trying. Some shops use the idea of a well-known reference story that all teams in a program understand and use to base their story point estimates on. As long as teams aren't given much freedom in how they come up with estimates, and as long as the teams are working in the same project or program with the same constraints and assumptions, you might be able to do a rough comparison of velocity between teams. But Mike Cohn warns that if teams feel the slightest indication that velocities will be compared between teams, there will be gradual but consistent "point inflation."

ThoughtWorks explains that velocity <> productivity in their latest Technology Radar:

We continue to see teams and organizations equating velocity with productivity. When properly used, velocity allows the incorporation of "yesterday's weather" into a team's internal iteration planning process. The key here is that velocity is an internal measure for a team, it is just a capacity estimate for that given team at that given time. Organizations and managers who equate internal velocity with external productivity start to set targets for velocity, forgetting that what actually matters is working software in production. Treating velocity as productivity leads to unproductive team behaviors that optimize this metric at the expense of actual working software.

Just stay busy

One manager I know says that instead of trying to measure productivity, "We just stay busy. If we're busy working away like maniacs, we can look out for problems and bottlenecks and fix them and keep going." In this case you would measure – and optimize for – cycle time, like in Lean manufacturing.
Cycle time – turnaround time or change lead time, from when the business asks for something to when they get it in their hands and see it working – is something that the business cares about, and something that everyone can see and measure. And once you start looking closely, waste and delays will show up as you measure waiting/idle time, value-add vs. non-value-add work, and process cycle efficiency (total value-add time / total cycle time).

"It's not important to define productivity, or to measure it. It's much more important to identify non-productive activities and drive them down to zero."
Erik Simmons, Intel

Teams can use Kanban to monitor – and limit – work in progress and identify delays and bottlenecks, and Value Stream Mapping to understand the steps, queues, delays and information flows which need to be optimized. To be effective, you have to look at the end-to-end process from when requests are first made to when they are delivered and running, and optimize all along the path, not just the work in development. This may mean changing how the business prioritizes, how decisions are made and who makes the decisions.

In almost every case we have seen, making one process block more efficient will have a minimal effect on the overall value stream. Since rework and wait times are some of the biggest contributors to overall delivery time, adopting "agile" processes within a single function (such as development) generally has little impact on the overall value stream, and hence on customer outcomes.
Jez Humble, The Lean Enterprise

The downside of equating delivery speed with productivity? Optimizing for cycle time/speed of delivery by itself could lead to problems over the long term, because it incents people to think short term, and to cut corners and take on technical debt.

We're writing better software, so we must be more productive

"The paradox is that when managers focus on productivity, long-term improvements are rarely made. On the other hand, when managers focus on quality, productivity improves continuously."
John Seddon, quoted in The Lean Enterprise

We know that fixing bugs later costs more. Whether it's 10x or 100+x, it doesn't really matter. And that projects with fewer bugs are delivered faster – at least up to a point of diminishing returns for safety-critical and life-critical systems. And we know that the costs of bugs and mistakes in software to the business can be significant. Not just development rework costs and maintenance and support costs, but direct costs to the business. Downtime. Security breaches. Lost IP. Lost customers. Fines. Lawsuits. Business failure.

It's easy to measure whether you are writing good – or bad – software. Defect density. Defect escape rates (especially defects – including security vulnerabilities – that escape to production). Static analysis metrics on the code base, using tools like SonarQube. And we know how to write good software – or we should know by now. But is software quality enough to define productivity?

Devops – Measuring and Improving IT Performance

Devops teams who build/maintain and operate/support systems extend productivity from dev into ops. They measure productivity across two dimensions that we have already looked at: speed of delivery, and quality.
But devops isn't limited to just building and delivering code – instead it looks at performance metrics for end-to-end IT service delivery:

- Delivery Throughput: deployment frequency and lead time, maximizing the flow of work into production
- Service Quality: change failure rate and MTTR

It's not a matter of just delivering software faster or better. It's dev and ops working together to deliver services better and faster, striking a balance between moving too fast or trying to do too much at a time, and excessive bureaucracy and over-caution resulting in waste and delays. Dev and ops need to share responsibility and accountability for the outcome, and for measuring and improving productivity and quality. As I pointed out in an earlier post, this makes operational metrics more important than developer metrics. According to recent studies, success in achieving these goals leads to improvements in business success: not just productivity, but market share and profitability.

Measure Outcomes, not Output

In The Lean Enterprise (which you can tell I just finished reading), Jez Humble talks about the importance of measuring productivity by outcome – measuring things that matter to the organization – not output. "It doesn't matter how many stories we complete if we don't achieve the business outcomes we set out to achieve in the form of program-level target conditions."

Stop trying to measure individual developer productivity. It's a waste of time. Everyone knows who the top performers are. Point them in the right direction, and keep them happy. Everyone knows the people who are struggling. Get them the help that they need to succeed. Everyone knows who doesn't fit in. Move them out. Measuring and improving productivity at the team or (better) organization level will give you much more meaningful returns.

When it comes to productivity:

- Measure things that matter – things that will make a difference to the team or to the organization. Measures that are clear, important, and that aren't easy to game.
- Use metrics for good, not for evil – to drive learning and improvement, not to compare output between teams or to rank people.

I can see why measuring productivity is so seductive. If we could do it we could assess software much more easily and objectively than we can now. But false measures only make things worse.
Martin Fowler, CannotMeasureProductivity

Reference: We can't measure Programmer Productivity… or can we? from our JCG partner Jim Bird at the Building Real Software blog.
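For readers who want to see what tracking the delivery metrics named in the article above might look like in code, here is a small, hypothetical Java sketch that computes deployment frequency, average lead time, change failure rate and MTTR from a list of deployment records. The Deployment class, its field names and the calculations are invented for illustration only; they do not come from the article or from any particular tool.

import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Hypothetical deployment record; the fields are invented for this sketch.
class Deployment {
    final Instant committedAt;    // when the change was committed
    final Instant deployedAt;     // when it reached production
    final boolean causedFailure;  // did this change fail in production?
    final Duration timeToRestore; // time to restore service, if it failed

    Deployment(Instant committedAt, Instant deployedAt,
               boolean causedFailure, Duration timeToRestore) {
        this.committedAt = committedAt;
        this.deployedAt = deployedAt;
        this.causedFailure = causedFailure;
        this.timeToRestore = timeToRestore;
    }
}

class DeliveryMetrics {

    // Throughput: deployments per day over the observed period.
    static double deploymentFrequency(List<Deployment> deployments, Duration period) {
        return deployments.size() / (double) period.toDays();
    }

    // Throughput: average lead time from commit to running in production.
    static Duration averageLeadTime(List<Deployment> deployments) {
        double avgSeconds = deployments.stream()
                .mapToLong(d -> Duration.between(d.committedAt, d.deployedAt).getSeconds())
                .average()
                .orElse(0);
        return Duration.ofSeconds((long) avgSeconds);
    }

    // Stability: share of deployments that caused a failure in production.
    static double changeFailureRate(List<Deployment> deployments) {
        if (deployments.isEmpty()) return 0;
        long failures = deployments.stream().filter(d -> d.causedFailure).count();
        return failures / (double) deployments.size();
    }

    // Stability: mean time to restore service after a failed change.
    static Duration meanTimeToRestore(List<Deployment> deployments) {
        double avgSeconds = deployments.stream()
                .filter(d -> d.causedFailure)
                .mapToLong(d -> d.timeToRestore.getSeconds())
                .average()
                .orElse(0);
        return Duration.ofSeconds((long) avgSeconds);
    }
}

How such records would be collected from a real deployment pipeline is deliberately left open; the point of the sketch is that these are operational, end-to-end measures of the team and the service, not measures of individual developers.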
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.