Beginner’s Guide to Hazelcast Part 2

This article continues the series that I have started featuring Hazelcast, a distributed, in-memory database. If one has not read the first post, please click here.

Distributed Collections

Hazelcast has a number of distributed collections that can be used to store data. Here is a list of them:

IList
ISet
IQueue

IList

IList is a collection that keeps the order of insertion and can contain duplicates. In fact, it implements the java.util.List interface. It is not thread safe, so one must use some sort of mutex or lock to control access by many threads. I suggest Hazelcast's ILock.

ISet

ISet is a collection that does not keep the order of the items placed in it, but its elements are unique. This collection implements the java.util.Set interface. Like IList, this collection is not thread safe. I suggest using the ILock again.

IQueue

IQueue is a collection that keeps the order of what comes in and allows duplicates. It implements java.util.concurrent.BlockingQueue, so it is thread safe. This is the most scalable of the collections because its capacity grows as the number of instances goes up. For instance, let's say there is a limit of 10 items for a queue. Once the queue is full, no more items can go in unless another Hazelcast instance comes up; then another 10 spaces become available, and a copy of the queue is also made. IQueues can also be persisted by implementing the QueueStore interface.

What They Have in Common

All three of them implement the ICollection interface. This means one can add an ItemListener to them, which lets one know when an item is added or removed. An example of this is in the Examples section.

Scalability

As scalability goes, ISet and IList don't do that well in Hazelcast 3.x. This is because the implementation changed from being map based to being a collection inside a MultiMap. This means they don't partition and don't go beyond a single machine.
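As a concrete illustration of the ILock suggestion above, here is a minimal sketch of guarding an IList with a distributed lock. It assumes a running Hazelcast 3.x instance on the classpath; the lock and list names are illustrative, not taken from the article's examples.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IList;
import com.hazelcast.core.ILock;

public class GuardedListExample {

    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();

        // ILock is a distributed lock, so it protects the IList against
        // concurrent access from any instance in the cluster.
        ILock lock = instance.getLock("listLock");
        IList<String> list = instance.getList("list");

        lock.lock();
        try {
            // This check-then-act sequence is only safe because the lock is held.
            if (!list.contains("Once")) {
                list.add("Once");
            }
        } finally {
            lock.unlock();
        }

        instance.shutdown();
    }
}
```

The same pattern applies to ISet; IQueue does not need it because BlockingQueue operations are already thread safe.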
Striping the collections can go a long way, or one can build one's own collections based on the mighty IMap. Another way is to implement Hazelcast's SPI.

Examples

Here is an example of an ISet, an IList and an IQueue. All three of them have an ItemListener. The ItemListener is added in the hazelcast.xml configuration file; one can also add an ItemListener programmatically for those so inclined. For each collection, a main class and the snippet of the configuration file that configures the collection will be shown.

CollectionItemListener

I implemented the ItemListener interface to show that all three of the collections can have an ItemListener. Here is the implementation:

package hazelcastcollections;

import com.hazelcast.core.ItemEvent;
import com.hazelcast.core.ItemListener;

/**
 * @author Daryl
 */
public class CollectionItemListener implements ItemListener {

    @Override
    public void itemAdded(ItemEvent ie) {
        System.out.println("ItemListener - itemAdded: " + ie.getItem());
    }

    @Override
    public void itemRemoved(ItemEvent ie) {
        System.out.println("ItemListener - itemRemoved: " + ie.getItem());
    }
}

ISet

Code

package hazelcastcollections.iset;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ISet;

/**
 * @author Daryl
 */
public class HazelcastISet {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();

        ISet<String> set = instance.getSet("set");
        set.add("Once");
        set.add("upon");
        set.add("a");
        set.add("time");

        ISet<String> set2 = instance2.getSet("set");
        for (String s : set2) {
            System.out.println(s);
        }

        System.exit(0);
    }
}

Configuration

<set name="set">
    <item-listeners>
        <item-listener include-value="true">hazelcastcollections.CollectionItemListener</item-listener>
    </item-listeners>
</set>

IList

Code

package hazelcastcollections.ilist;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IList;

/**
 * @author Daryl
 */
public class HazelcastIlist {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();

        IList<String> list = instance.getList("list");
        list.add("Once");
        list.add("upon");
        list.add("a");
        list.add("time");

        IList<String> list2 = instance2.getList("list");
        for (String s : list2) {
            System.out.println(s);
        }

        System.exit(0);
    }
}

Configuration

<list name="list">
    <item-listeners>
        <item-listener include-value="true">hazelcastcollections.CollectionItemListener</item-listener>
    </item-listeners>
</list>

IQueue

I left this one for last because I have also implemented a QueueStore. There is no call on IQueue to add a QueueStore; one has to configure it in the hazelcast.xml file.

Code

package hazelcastcollections.iqueue;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

/**
 * @author Daryl
 */
public class HazelcastIQueue {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        HazelcastInstance instance2 = Hazelcast.newHazelcastInstance();

        IQueue<String> queue = instance.getQueue("queue");
        queue.add("Once");
        queue.add("upon");
        queue.add("a");
        queue.add("time");

        IQueue<String> queue2 = instance2.getQueue("queue");
        for (String s : queue2) {
            System.out.println(s);
        }

        System.exit(0);
    }
}

QueueStore Code

package hazelcastcollections.iqueue;

import com.hazelcast.core.QueueStore;
import java.util.Collection;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

/**
 * @author Daryl
 */
public class QueueQStore implements QueueStore<String> {

    @Override
    public void store(Long l, String t) {
        System.out.println("storing " + t + " with " + l);
    }

    @Override
    public void storeAll(Map<Long, String> map) {
        System.out.println("store all");
    }

    @Override
    public void delete(Long l) {
        System.out.println("removing " + l);
    }

    @Override
    public void deleteAll(Collection<Long> clctn) {
        System.out.println("deleteAll");
    }

    @Override
    public String load(Long l) {
        System.out.println("loading " + l);
        return "";
    }

    @Override
    public Map<Long, String> loadAll(Collection<Long> clctn) {
        System.out.println("loadAll");
        Map<Long, String> retMap = new TreeMap<>();
        return retMap;
    }

    @Override
    public Set<Long> loadAllKeys() {
        System.out.println("loadAllKeys");
        return new TreeSet<>();
    }
}

Configuration

A few points need to be addressed when it comes to configuring the QueueStore. There are three properties that do not get passed to the implementation. The binary property controls how Hazelcast sends the data to the store: normally, Hazelcast stores the data serialized and deserializes it before it is sent to the QueueStore, but if the property is true, the data is sent serialized. The default is false. The memory-limit is how many entries are kept in memory before being put into the QueueStore; a memory-limit of 10000 means that the 10001st entry is sent to the QueueStore. At initialization of the IQueue, entries are loaded from the QueueStore; the bulk-load property is how many entries can be pulled from the QueueStore at a time.

<queue name="queue">
    <max-size>10</max-size>
    <item-listeners>
        <item-listener include-value="true">hazelcastcollections.CollectionItemListener</item-listener>
    </item-listeners>
    <queue-store>
        <class-name>hazelcastcollections.iqueue.QueueQStore</class-name>
        <properties>
            <property name="binary">false</property>
            <property name="memory-limit">10000</property>
            <property name="bulk-load">500</property>
        </properties>
    </queue-store>
</queue>

Conclusion

I hope one has learned about distributed collections inside Hazelcast. ISet, IList and IQueue were discussed.
The ISet and IList stay only on the instance on which they are created, while the IQueue has a copy made, can be persisted, and has a capacity that increases as the number of instances increases. The code can be seen here.

References

The Book of Hazelcast: www.hazelcast.com
Hazelcast Documentation (comes with the Hazelcast download)

Reference: Beginner's Guide to Hazelcast Part 2 from our JCG partner Daryl Mathison at the Daryl Mathison's Java Blog blog....

Using Asciidoctor with Spring: Rendering Asciidoc Documents with Spring MVC

Asciidoc is a text-based document format, which is why it is very useful if we want to commit our documents into a version control system and track the changes between different versions. This makes Asciidoc a perfect tool for writing books, technical documents, FAQs, or user's manuals. After we have created an Asciidoc document, the odds are that we want to publish it, and one way to do this is to publish that document on our website. Today we will learn how we can transform Asciidoc documents into HTML by using AsciidoctorJ and render the created HTML with Spring MVC.

The requirements of our application are:

It must support Asciidoc documents that are found from the classpath.
It must support Asciidoc markup that is given as a String object.
It must transform the Asciidoc documents into HTML and render the created HTML.
It must "embed" the created HTML into the layout of our application.

Let's start by getting the required dependencies with Maven.

Getting the Required Dependencies with Maven

We can get the required dependencies with Maven by following these steps:

Enable the Spring IO platform.
Configure the required dependencies.

First, we can enable the Spring IO platform by adding the following snippet to our POM file:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.spring.platform</groupId>
            <artifactId>platform-bom</artifactId>
            <version>1.0.2.RELEASE</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

Second, we can configure the required dependencies by following these steps:

Configure the logging dependencies in the pom.xml file.
Add the spring-webmvc dependency to the pom.xml file.
Add the Servlet API dependency to the POM file.
Configure the Sitemesh (version 3.0.0) dependency in the POM file. Sitemesh ensures that every page of our application uses a consistent look and feel.
Add the asciidoctorj dependency (version 1.5.0) to the pom.xml file.
AsciidoctorJ is a Java API for Asciidoctor, and we use it to transform Asciidoc documents into HTML.

The relevant part of our pom.xml file looks as follows:

<dependencies>
    <!-- Logging -->
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-api</artifactId>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-log4j12</artifactId>
    </dependency>
    <!-- Spring -->
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
    </dependency>
    <!-- Java EE -->
    <dependency>
        <groupId>javax.servlet</groupId>
        <artifactId>javax.servlet-api</artifactId>
        <scope>provided</scope>
    </dependency>
    <!-- Sitemesh -->
    <dependency>
        <groupId>org.sitemesh</groupId>
        <artifactId>sitemesh</artifactId>
        <version>3.0.0</version>
    </dependency>
    <!-- AsciidoctorJ -->
    <dependency>
        <groupId>org.asciidoctor</groupId>
        <artifactId>asciidoctorj</artifactId>
        <version>1.5.0</version>
    </dependency>
</dependencies>

Because we use the Spring IO Platform, we don't have to specify the dependency versions of the artifacts that are part of the Spring IO Platform. Let's move on and start implementing our application.

Rendering Asciidoc Documents with Spring MVC

We can fulfil the requirements of our application by following these steps:

Configure our web application and the Sitemesh filter.
Implement the view classes that are responsible for transforming Asciidoc documents into HTML and rendering the created HTML.
Implement the controller methods that use the created view classes.

Let's get started.

Configuring Sitemesh

The first thing that we have to do is configure Sitemesh. We can do that by following these three steps:

Configure the Sitemesh filter in the web application configuration.
Create the decorator that is used to create a consistent look and feel for our application.
Configure the decorator that is used by the Sitemesh filter.

First, we have to configure the Sitemesh filter in our web application configuration. We can configure our web application by following these steps:

Create a WebAppConfig class that implements the WebApplicationInitializer interface.
Implement the onStartup() method of the WebApplicationInitializer interface by following these steps:
- Create an AnnotationConfigWebApplicationContext object and configure it to process our application context configuration class.
- Configure the dispatcher servlet.
- Configure the Sitemesh filter to process the HTML returned by the JSP pages of our application and by all controller methods that use the URL pattern '/asciidoctor/*'.
- Add a new ContextLoaderListener object to the ServletContext. A ContextLoaderListener is responsible for starting and shutting down the Spring WebApplicationContext.

The source code of the WebAppConfig class looks as follows:

import org.sitemesh.config.ConfigurableSiteMeshFilter;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.servlet.DispatcherServlet;

import javax.servlet.DispatcherType;
import javax.servlet.FilterRegistration;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletRegistration;
import java.util.EnumSet;

public class WebAppConfig implements WebApplicationInitializer {

    private static final String DISPATCHER_SERVLET_NAME = "dispatcher";

    private static final String SITEMESH3_FILTER_NAME = "sitemesh";
    private static final String[] SITEMESH3_FILTER_URL_PATTERNS = {"*.jsp", "/asciidoctor/*"};

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        AnnotationConfigWebApplicationContext rootContext = new AnnotationConfigWebApplicationContext();
        rootContext.register(WebAppContext.class);

        configureDispatcherServlet(servletContext, rootContext);
        configureSitemesh3Filter(servletContext);

        servletContext.addListener(new ContextLoaderListener(rootContext));
    }

    private void configureDispatcherServlet(ServletContext servletContext, WebApplicationContext rootContext) {
        ServletRegistration.Dynamic dispatcher = servletContext.addServlet(
                DISPATCHER_SERVLET_NAME, new DispatcherServlet(rootContext)
        );
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }

    private void configureSitemesh3Filter(ServletContext servletContext) {
        FilterRegistration.Dynamic sitemesh = servletContext.addFilter(SITEMESH3_FILTER_NAME,
                new ConfigurableSiteMeshFilter()
        );
        EnumSet<DispatcherType> dispatcherTypes = EnumSet.of(DispatcherType.REQUEST,
                DispatcherType.FORWARD
        );
        sitemesh.addMappingForUrlPatterns(dispatcherTypes, true, SITEMESH3_FILTER_URL_PATTERNS);
    }
}

If you want to take a look at the application context configuration class of the example application, you can get it from Github.

Second, we have to create the decorator that provides a consistent look and feel for our application. We can do this by following these steps:

Create the decorator file in the src/main/webapp/WEB-INF directory. The decorator file of our example application is called layout.jsp.
Add the HTML that provides the consistent look and feel to the created decorator file.
Ensure that Sitemesh adds the title found from the returned HTML to the HTML that is rendered by the web browser.
Configure Sitemesh to add the HTML elements found from the head of the returned HTML to the head of the rendered HTML.
Ensure that Sitemesh adds the body found from the returned HTML to the HTML that is shown to the user.

The source code of our decorator file (layout.jsp) looks as follows:

<!doctype html>
<%@ page contentType="text/html;charset=UTF-8" language="java" %>
<html>
<head>
    <title><sitemesh:write property="title"/></title>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" type="text/css" href="${contextPath}/static/css/bootstrap.css"/>
    <link rel="stylesheet" type="text/css" href="${contextPath}/static/css/bootstrap-theme.css"/>
    <script type="text/javascript" src="${contextPath}/static/js/jquery-2.1.1.js"></script>
    <script type="text/javascript" src="${contextPath}/static/js/bootstrap.js"></script>
    <sitemesh:write property="head"/>
</head>
<body>
<nav class="navbar navbar-inverse" role="navigation">
    <div class="container-fluid">
        <!-- Brand and toggle get grouped for better mobile display -->
        <div class="navbar-header">
            <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
                <span class="sr-only">Toggle navigation</span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
            </button>
        </div>
        <div class="collapse navbar-collapse">
            <ul class="nav navbar-nav">
                <li><a href="${contextPath}/">Document list</a></li>
            </ul>
        </div>
    </div>
</nav>
<div class="container-fluid">
    <sitemesh:write property="body"/>
</div>
</body>
</html>

Third, we have to configure Sitemesh to use the decorator file that we created in the second step. We can do this by following these steps:

Create a sitemesh3.xml file in the src/main/webapp/WEB-INF directory.
Configure Sitemesh to use our decorator for all requests that are processed by the Sitemesh filter.

The sitemesh3.xml file looks as follows:

<sitemesh>
    <mapping path="/*" decorator="/WEB-INF/layout/layout.jsp"/>
</sitemesh>

That is it. We have now configured Sitemesh to provide a consistent look and feel for our application. Let's move on and find out how we can implement the view classes that transform Asciidoc markup into HTML and render the created HTML.

Implementing the View Classes

Before we can start implementing the view classes that transform Asciidoc markup into HTML and render the created HTML, we have to take a quick look at our requirements. The requirements that are relevant for this step are:

Our solution must support Asciidoc documents that are found from the classpath.
Our solution must support Asciidoc markup that is given as a String object.
Our solution must transform the Asciidoc documents into HTML and render the created HTML.

These requirements suggest that we should create three view classes, described in the following:

We should create an abstract base class that contains the logic that transforms Asciidoc markup into HTML and renders the created HTML.
We should create a view class that can read the Asciidoc markup from a file that is found from the classpath.
We should create a view class that can read the Asciidoc markup from a String object.

First, we have to implement the AbstractAsciidoctorHtmlView class. This class is an abstract base class that transforms Asciidoc markup into HTML and renders the created HTML. We can implement this class by following these steps:

Create the AbstractAsciidoctorHtmlView class and extend the AbstractView class.
Add a constructor to the created class and set the content type of the view to 'text/html'.
Add a protected abstract method getAsciidocMarkupReader() to the created class and set its return type to Reader.
The subclasses of this abstract class must implement this method, and the implementation must return a Reader object that can be used to read the rendered Asciidoc markup.
Add a private getAsciidoctorOptions() method to the created class and implement it by returning the configuration options of Asciidoctor.
Override the renderMergedOutputModel() method of the AbstractView class, and implement it by transforming the Asciidoc document into HTML and rendering the created HTML.

The source code of the AbstractAsciidoctorHtmlView class looks as follows:

import org.asciidoctor.Asciidoctor;
import org.asciidoctor.Options;
import org.springframework.http.MediaType;
import org.springframework.web.servlet.view.AbstractView;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.Reader;
import java.io.Writer;
import java.util.Map;

public abstract class AbstractAsciidoctorHtmlView extends AbstractView {

    public AbstractAsciidoctorHtmlView() {
        super.setContentType(MediaType.TEXT_HTML_VALUE);
    }

    protected abstract Reader getAsciidocMarkupReader();

    @Override
    protected void renderMergedOutputModel(Map<String, Object> model,
                                           HttpServletRequest request,
                                           HttpServletResponse response) throws Exception {
        //Set the content type of the response to 'text/html'
        response.setContentType(super.getContentType());

        Asciidoctor asciidoctor = Asciidoctor.Factory.create();
        Options asciidoctorOptions = getAsciidoctorOptions();

        try (
            //Get the reader that reads the rendered Asciidoc document
            //and the writer that writes the HTML markup to the request body
            Reader asciidoctorMarkupReader = getAsciidocMarkupReader();
            Writer responseWriter = response.getWriter();
        ) {
            //Transform Asciidoc markup into HTML and write the created HTML
            //to the response body
            asciidoctor.render(asciidoctorMarkupReader, responseWriter, asciidoctorOptions);
        }
    }

    private Options getAsciidoctorOptions() {
        Options asciiDoctorOptions = new Options();
        //Ensure that Asciidoctor includes both the header and the footer of the
        //Asciidoc document when it is transformed into HTML.
        asciiDoctorOptions.setHeaderFooter(true);
        return asciiDoctorOptions;
    }
}

Second, we have to implement the ClasspathFileAsciidoctorHtmlView class. This class can read the Asciidoc markup from a file that is found from the classpath. We can implement this class by following these steps:

Create the ClasspathFileAsciidoctorHtmlView class and extend the AbstractAsciidoctorHtmlView class.
Add a private String field called asciidocFileLocation to the created class. This field contains the location of the Asciidoc file that is transformed into HTML. This location must be given in a format that is understood by the getResourceAsStream() method of the Class class.
Create a constructor that takes the location of the rendered Asciidoc file as a constructor argument. Implement the constructor by calling the constructor of the superclass and storing the location of the rendered Asciidoc file in the asciidocFileLocation field.
Override the getAsciidocMarkupReader() method and implement it by returning a new InputStreamReader object that is used to read the Asciidoc file found from the classpath.

The source code of the ClasspathFileAsciidoctorHtmlView class looks as follows:

import java.io.InputStreamReader;
import java.io.Reader;

public class ClasspathFileAsciidoctorHtmlView extends AbstractAsciidoctorHtmlView {

    private final String asciidocFileLocation;

    public ClasspathFileAsciidoctorHtmlView(String asciidocFileLocation) {
        super();
        this.asciidocFileLocation = asciidocFileLocation;
    }

    @Override
    protected Reader getAsciidocMarkupReader() {
        return new InputStreamReader(this.getClass().getResourceAsStream(asciidocFileLocation));
    }
}

Third, we have to implement the StringAsciidoctorHtmlView class that can read the Asciidoc markup from a String object.
We can implement this class by following these steps:

Create the StringAsciidoctorHtmlView class and extend the AbstractAsciidoctorHtmlView class.
Add a private String field called asciidocMarkup to the created class. This field contains the Asciidoc markup that is transformed into HTML.
Create a constructor that takes the rendered Asciidoc markup as a constructor argument. Implement this constructor by calling the constructor of the superclass and setting the rendered Asciidoc markup to the asciidocMarkup field.
Override the getAsciidocMarkupReader() method and implement it by returning a new StringReader object that is used to read the Asciidoc markup stored in the asciidocMarkup field.

The source code of the StringAsciidoctorHtmlView class looks as follows:

import java.io.Reader;
import java.io.StringReader;

public class StringAsciidoctorHtmlView extends AbstractAsciidoctorHtmlView {

    private final String asciidocMarkup;

    public StringAsciidoctorHtmlView(String asciidocMarkup) {
        super();
        this.asciidocMarkup = asciidocMarkup;
    }

    @Override
    protected Reader getAsciidocMarkupReader() {
        return new StringReader(asciidocMarkup);
    }
}

We have now created the required view classes. Let's move on and find out how we can use these classes in a Spring MVC web application.

Using the Created View Classes

Our last step is to create the controller methods that use the created view classes. We have to implement two controller methods, described in the following:

The renderAsciidocDocument() method processes GET requests sent to the URL '/asciidoctor/document'; it transforms an Asciidoc document into HTML and renders the created HTML.
The renderAsciidocString() method processes GET requests sent to the URL '/asciidoctor/string'; it transforms an Asciidoc String into HTML and renders the created HTML.

The source code of the AsciidoctorController class looks as follows:

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.servlet.ModelAndView;

@Controller
public class AsciidoctorController {

    private static final String ASCIIDOC_FILE_LOCATION = "/asciidoctor/document.adoc";

    private static final String ASCIIDOC_STRING = "= Hello, AsciiDoc (String)!\n" +
            "Doc Writer <doc@example.com>\n" +
            "\n" +
            "An introduction to http://asciidoc.org[AsciiDoc].\n" +
            "\n" +
            "== First Section\n" +
            "\n" +
            "* item 1\n" +
            "* item 2\n" +
            "\n" +
            "1\n" +
            "puts \"Hello, World!\"";

    @RequestMapping(value = "/asciidoctor/document", method = RequestMethod.GET)
    public ModelAndView renderAsciidocDocument() {
        //Create the view that transforms an Asciidoc document into HTML and
        //renders the created HTML.
        ClasspathFileAsciidoctorHtmlView docView =
                new ClasspathFileAsciidoctorHtmlView(ASCIIDOC_FILE_LOCATION);
        return new ModelAndView(docView);
    }

    @RequestMapping(value = "/asciidoctor/string", method = RequestMethod.GET)
    public ModelAndView renderAsciidocString() {
        //Create the view that transforms an Asciidoc String into HTML and
        //renders the created HTML.
        StringAsciidoctorHtmlView stringView =
                new StringAsciidoctorHtmlView(ASCIIDOC_STRING);
        return new ModelAndView(stringView);
    }
}

Additional information:

The Javadoc of the @Controller annotation
The Javadoc of the @RequestMapping annotation
The Javadoc of the ModelAndView class

We have now created the controller methods that use our view classes.
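Stripped of Spring MVC, the core transformation that both controller methods trigger boils down to a single AsciidoctorJ call. Here is a minimal standalone sketch for trying it out; it assumes the asciidoctorj 1.5.0 dependency configured earlier, and the markup is illustrative:

```java
import org.asciidoctor.Asciidoctor;
import org.asciidoctor.Options;

public class AsciidoctorSketch {

    public static void main(String[] args) {
        // Creating the Asciidoctor instance is expensive because it boots
        // JRuby under the hood; one instance per JVM is enough.
        Asciidoctor asciidoctor = Asciidoctor.Factory.create();

        Options options = new Options();
        // Include the document header and footer, as the view classes do.
        options.setHeaderFooter(true);

        String html = asciidoctor.render("= Hello\n\nSome *bold* text.", options);
        System.out.println(html);
    }
}
```

Running this prints a full HTML document, which is exactly what the view classes stream into the response body.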
When the user of our application invokes a GET request to the URL '/asciidoctor/document', the source code of the rendered HTML page looks as follows:

<!doctype html>
<html>
<head>
    <title>Hello, AsciiDoc (File)!</title>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" type="text/css" href="/static/css/bootstrap.css"/>
    <link rel="stylesheet" type="text/css" href="/static/css/bootstrap-theme.css"/>
    <script type="text/javascript" src="/static/js/jquery-2.1.1.js"></script>
    <script type="text/javascript" src="/static/js/bootstrap.js"></script>
    <meta charset="UTF-8">
    <!--[if IE]><meta http-equiv="X-UA-Compatible" content="IE=edge"><![endif]-->
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta name="generator" content="Asciidoctor 1.5.0">
    <meta name="author" content="Doc Writer">
    <link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Open+Sans:300,300italic,400,400italic,600,600italic|Noto+Serif:400,400italic,700,700italic|Droid+Sans+Mono:400">
    <link rel="stylesheet" href="./asciidoctor.css">
</head>
<body>
<nav class="navbar navbar-inverse" role="navigation">
    <div class="container-fluid">
        <!-- Brand and toggle get grouped for better mobile display -->
        <div class="navbar-header">
            <button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
                <span class="sr-only">Toggle navigation</span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
            </button>
        </div>
        <div class="collapse navbar-collapse">
            <ul class="nav navbar-nav">
                <li><a href="/">Document list</a></li>
            </ul>
        </div>
    </div>
</nav>
<div class="container-fluid">
    <div id="header">
        <h1>Hello, AsciiDoc (File)!</h1>
        <div class="details">
            <span id="author" class="author">Doc Writer</span><br>
            <span id="email" class="email"><a href="mailto:doc@example.com">doc@example.com</a></span><br>
        </div>
    </div>
    <div id="content">
        <div id="preamble">
            <div class="sectionbody">
                <div class="paragraph">
                    <p>An introduction to <a href="http://asciidoc.org">AsciiDoc</a>.</p>
                </div>
            </div>
        </div>
        <div class="sect1">
            <h2 id="_first_section">First Section</h2>
            <div class="sectionbody">
                <div class="ulist">
                    <ul>
                        <li><p>item 1</p></li>
                        <li><p>item 2</p></li>
                    </ul>
                </div>
                <div class="listingblock">
                    <div class="content">
                        <pre class="highlight"><code class="language-ruby" data-lang="ruby">puts "Hello, World!"</code></pre>
                    </div>
                </div>
            </div>
        </div>
    </div>
    <div id="footer">
        <div id="footer-text">
            Last updated 2014-09-21 14:21:59 EEST
        </div>
    </div>
</div>
</body>
</html>

As we can see, the HTML created by Asciidoctor is embedded into our layout, which provides a consistent user experience to the users of our application. Let's move on and evaluate the pros and cons of this solution.

Pros and Cons

The pros of our solution are:

The rendered HTML documents share the same look and feel as the other pages of our application. This means that we can provide a consistent user experience to the users of our application.
We can render both static files and strings that can be loaded from a database.

The cons of our solution are:

The war file of our simple application is huge (51.9 MB). The reason for this is that even though Asciidoctor has a Java API, it is written in Ruby. Thus, our application needs two big jar files: the asciidoctorj-1.5.0.jar file is 27.5 MB, and the jruby-complete-1.7.9.jar file is 21.7 MB.
Our application transforms Asciidoc documents into HTML when the user requests them. This has a negative impact on the response time of our controller methods, because the bigger the document, the longer it takes to process it.
The first request that renders an Asciidoc document as HTML is 4-5 times slower than the subsequent requests.
I didn't profile the application, but I assume that JRuby has something to do with this.
At the moment it is not possible to use this technique if we want to transform Asciidoc documents into PDF documents.

Let's move on and summarize what we have learned from this blog post.

Summary

This blog post has taught us three things:

We learned how we can configure Sitemesh to provide a consistent look and feel for our application.
We learned how we can create the view classes that transform Asciidoc documents into HTML and render the created HTML.
Even though our solution works, it has a lot of downsides that can make it unusable in real-life applications.

The next part of this tutorial describes how we can solve the performance problems of this solution.

P.S. If you want to play around with the example application of this blog post, you can get it from Github.

Reference: Using Asciidoctor with Spring: Rendering Asciidoc Documents with Spring MVC from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....

Getting Started with Docker

If the numbers of articles, meetups, talk submissions at different conferences, tweets, and other indicators are taken into consideration, then it seems like Docker is going to solve world hunger. It would be nice if it would, but apparently not. But it does solve one problem really well! Let’s hear it from @solomonstre – creator of the Docker project! In short, Docker simplifies software delivery by making it easy to build and share images that contain your application’s entire environment, or application operating system. What is meant by application operating system? Your application typically requires a specific version of the operating system, application server, JDK, and database server, may require tuning of configuration files, and similarly has multiple other dependencies. The application may need binding to specific ports and a certain amount of memory. The components and configuration together required to run your application are what is referred to as the application operating system. You can certainly provide an installation script that will download and install these components. Docker simplifies this process by allowing you to create an image that contains your application and infrastructure together, managed as one component. These images are then used to create Docker containers which run on the container virtualization platform, provided by Docker. What are the main components of Docker? Docker has two main components:Docker: the open source container virtualization platform Docker Hub: SaaS platform for sharing and managing Docker imagesDocker uses Linux Containers to provide isolation, sandboxing, reproducibility, constraining resources, snapshotting and several other advantages. Read this excellent piece at InfoQ on Docker Containers for more details on this. Images are the “build component” of Docker and a read-only template of the application operating system. Containers are the runtime representation and are created from images. They are the “run component” of Docker. 
Containers can be run, started, stopped, moved, and deleted. Images are stored in a registry, the “distribution component” of Docker. Docker in turn contains two components:The Daemon runs on a host machine and does the heavy lifting of building, running, and distributing Docker containers. The Client is a Docker binary that accepts commands from the user and communicates back and forth with the daemon. How do these work together? The Client communicates with the Daemon, either co-located on the same host, or on a different host. It requests the Daemon to pull an image from the repository using the pull command. The Daemon then downloads the image from Docker Hub, or whatever registry is configured. Multiple images can be downloaded from the registry and installed on the Daemon host. The Client can then start the Container using the run command. The complete list of client commands can be seen here. The Client communicates with the Daemon using sockets or the REST API. Because Docker uses Linux Kernel features, does that mean I can use it only on Linux-based machines? The Docker daemon and client for different operating systems can be installed from docs.docker.com/installation/. As you can see, it can be installed on a wide variety of platforms, including Mac and Windows. For non-Linux machines, a lightweight Virtual Machine needs to be installed and the Daemon is installed within it. A native client is then installed on the machine that communicates with the Daemon. Here is the log from booting the Docker daemon on a Mac: bash unset DYLD_LIBRARY_PATH ; unset LD_LIBRARY_PATH mkdir -p ~/.boot2docker if [ ! -f ~/.boot2docker/boot2docker.iso ]; then cp /usr/local/share/boot2docker/boot2docker.iso ~/.boot2docker/ ; fi /usr/local/bin/boot2docker init /usr/local/bin/boot2docker up && export DOCKER_HOST=tcp://$(/usr/local/bin/boot2docker ip 2>/dev/null):2375 docker version ~> bash ~> unset DYLD_LIBRARY_PATH ; unset LD_LIBRARY_PATH ~> mkdir -p ~/.boot2docker ~> if [ ! 
-f ~/.boot2docker/boot2docker.iso ]; then cp /usr/local/share/boot2docker/boot2docker.iso ~/.boot2docker/ ; fi ~> /usr/local/bin/boot2docker init 2014/07/16 09:57:13 Virtual machine boot2docker-vm already exists ~> /usr/local/bin/boot2docker up && export DOCKER_HOST=tcp://$(/usr/local/bin/boot2docker ip 2>/dev/null):2375 2014/07/16 09:57:13 Waiting for VM to be started... ....... 2014/07/16 09:57:35 Started. 2014/07/16 09:57:35 To connect the Docker client to the Docker daemon, please set: 2014/07/16 09:57:35 export DOCKER_HOST=tcp:// ~> docker version Client version: 1.1.1 Client API version: 1.13 Go version (client): go1.2.1 Git commit (client): bd609d2 Server version: 1.1.1 Server API version: 1.13 Go version (server): go1.2.1 Git commit (server): bd609d2 For example, Docker Daemon and Client can be installed on Mac following the instructions at docs.docker.com/installation/mac. The VM can be stopped from the CLI as: boot2docker stop And then restarted again as: boot2docker boot And logged in as: boot2docker ssh The complete list of boot2docker commands is available via help: ~> boot2docker help Usage: boot2docker [] [] boot2docker management utility. Commands: init Create a new boot2docker VM. up|start|boot Start VM from any states. ssh [ssh-command] Login to VM via SSH. save|suspend Suspend VM and save state to disk. down|stop|halt Gracefully shutdown the VM. restart Gracefully reboot the VM. poweroff Forcefully power off the VM (might corrupt disk image). reset Forcefully power cycle the VM (might corrupt disk image). delete|destroy Delete boot2docker VM and its disk image. config|cfg Show selected profile file settings. info Display detailed information of VM. ip Display the IP address of the VM's Host-only network. status Display current state of VM. download Download boot2docker ISO image. version Display version information. Enough talk, show me an example? 
Some of the JBoss projects are available as Docker images at www.jboss.org/docker and can be installed following the commands explained on that page. For example, the WildFly Docker image can be installed as: ~> docker pull jboss/wildfly Pulling repository jboss/wildfly 2f170f17c904: Download complete 511136ea3c5a: Download complete c69cab00d6ef: Download complete 88b42ffd1f7c: Download complete fdbe853b54e1: Download complete bc93200c3ba0: Download complete 0daf76299550: Download complete 3a7e1274035d: Download complete e6e970a0db40: Download complete 1e34f7a18753: Download complete b18f179f7be7: Download complete e8833789f581: Download complete 159f5580610a: Download complete 3111b437076c: Download complete The image can be verified using the command: ~> docker images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE jboss/wildfly latest 2f170f17c904 8 hours ago 1.048 GB Once the image is downloaded, the container can be started as: docker run jboss/wildfly By default, Docker containers do not provide an interactive shell and input from STDIN. So if the WildFly Docker container is started using the command above, it cannot be terminated using Ctrl + C. Specifying the -i option will make it interactive and the -t option allocates a pseudo-TTY. In addition, we’d also like to make the port 8080 accessible outside the container, i.e. on our localhost. This can be achieved by specifying -p 80:8080 where 80 is the host port and 8080 is the container port. 
So we’ll run the container as: docker run -i -t -p 80:8080 jboss/wildfly =========================================================================JBoss Bootstrap EnvironmentJBOSS_HOME: /opt/wildflyJAVA: javaJAVA_OPTS: -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true=========================================================================22:08:29,943 INFO [org.jboss.modules] (main) JBoss Modules version 1.3.3.Final 22:08:30,200 INFO [org.jboss.msc] (main) JBoss MSC version 1.2.2.Final 22:08:30,297 INFO [org.jboss.as] (MSC service thread 1-6) JBAS015899: WildFly 8.1.0.Final "Kenny" starting 22:08:31,935 INFO [org.jboss.as.server] (Controller Boot Thread) JBAS015888: Creating http management service using socket-binding (management-http) 22:08:31,961 INFO [org.xnio] (MSC service thread 1-7) XNIO version 3.2.2.Final 22:08:31,974 INFO [org.xnio.nio] (MSC service thread 1-7) XNIO NIO Implementation Version 3.2.2.Final 22:08:32,057 INFO [org.wildfly.extension.io] (ServerService Thread Pool -- 31) WFLYIO001: Worker 'default' has auto-configured to 16 core threads with 128 task threads based on your 8 available processors 22:08:32,108 INFO [org.jboss.as.clustering.infinispan] (ServerService Thread Pool -- 32) JBAS010280: Activating Infinispan subsystem. 22:08:32,110 INFO [org.jboss.as.naming] (ServerService Thread Pool -- 40) JBAS011800: Activating Naming Subsystem 22:08:32,133 INFO [org.jboss.as.security] (ServerService Thread Pool -- 45) JBAS013171: Activating Security Subsystem 22:08:32,178 INFO [org.jboss.as.jsf] (ServerService Thread Pool -- 38) JBAS012615: Activated the following JSF Implementations: [main] 22:08:32,206 WARN [org.jboss.as.txn] (ServerService Thread Pool -- 46) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique. 
22:08:32,348 INFO [org.jboss.as.security] (MSC service thread 1-3) JBAS013170: Current PicketBox version=4.0.21.Beta1 22:08:32,397 INFO [org.jboss.as.webservices] (ServerService Thread Pool -- 48) JBAS015537: Activating WebServices Extension 22:08:32,442 INFO [org.jboss.as.connector.logging] (MSC service thread 1-13) JBAS010408: Starting JCA Subsystem (IronJacamar 1.1.5.Final) 22:08:32,512 INFO [org.wildfly.extension.undertow] (MSC service thread 1-9) JBAS017502: Undertow 1.0.15.Final starting 22:08:32,512 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 47) JBAS017502: Undertow 1.0.15.Final starting 22:08:32,570 INFO [org.jboss.as.connector.subsystems.datasources] (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3) 22:08:32,660 INFO [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-10) JBAS010417: Started Driver service with driver-name = h2 22:08:32,736 INFO [org.jboss.remoting] (MSC service thread 1-7) JBoss Remoting version 4.0.3.Final 22:08:32,836 INFO [org.jboss.as.naming] (MSC service thread 1-15) JBAS011802: Starting Naming Service 22:08:32,839 INFO [org.jboss.as.mail.extension] (MSC service thread 1-15) JBAS015400: Bound mail session 22:08:33,406 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 47) JBAS017527: Creating file handler for path /opt/wildfly/welcome-content 22:08:33,540 INFO [org.wildfly.extension.undertow] (MSC service thread 1-13) JBAS017525: Started server default-server. 
22:08:33,603 INFO [org.wildfly.extension.undertow] (MSC service thread 1-8) JBAS017531: Host default-host starting 22:08:34,072 INFO [org.wildfly.extension.undertow] (MSC service thread 1-13) JBAS017519: Undertow HTTP listener default listening on / 22:08:34,599 INFO [org.jboss.as.server.deployment.scanner] (MSC service thread 1-11) JBAS015012: Started FileSystemDeploymentService for directory /opt/wildfly/standalone/deployments 22:08:34,619 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-9) JBAS010400: Bound data 22:08:34,781 INFO [org.jboss.ws.common.management] (MSC service thread 1-13) JBWS022052: Starting JBoss Web Services - Stack CXF Server 4.2.4.Final 22:08:34,843 INFO [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on 22:08:34,844 INFO [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on 22:08:34,845 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.1.0.Final "Kenny" started in 5259ms - Started 184 of 233 services (81 services are lazy, passive or on-demand) The container’s IP address can be found as: ~> boot2docker ip The VM's Host only interface IP address is: The started container can be verified using the command: ~> docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES b2f8001164b0 jboss/wildfly:latest /opt/wildfly/bin/sta 46 minutes ago Up 12 minutes 8080/tcp, 9990/tcp sharp_pare And now the WildFly server can be accessed on your local machine and looks as shown: Finally the container can be stopped by hitting Ctrl + C, or giving the command as: ~> docker stop b2f8001164b0 b2f8001164b0 The container id obtained from “docker ps” is passed to the command here. More detailed instructions to use this image, such as booting in domain mode, deploying applications, etc. can be found at github.com/jboss/dockerfiles/blob/master/wildfly/README.md. What else would you like to see in the WildFly Docker image? 
File an issue at github.com/jboss/dockerfiles/issues. Other images that are available at jboss.org/docker are: KeyCloak, TorqueBox, Immutant, LiveOak and AeroGear. Reference: Getting Started with Docker from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
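As noted above, the client/daemon split is plain HTTP underneath: the client issues REST calls against the daemon's socket. The short Java sketch below queries a daemon's /version endpoint (the Remote API call behind `docker version`) on the TCP port 2375 that the boot2docker log above exports. The class name and the offline fallback are my own; treat this as an illustrative sketch, not an official client.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DockerVersionClient {

    // Build a Remote API URL for a daemon host/port (2375 is the
    // default exported by boot2docker in the log above).
    static String endpoint(String host, int port, String path) {
        return "http://" + host + ":" + port + path;
    }

    // Extract the "Version" field from the daemon's JSON reply,
    // e.g. {"Version":"1.1.1","ApiVersion":"1.13",...}
    static String parseVersion(String json) {
        Matcher m = Pattern.compile("\"Version\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) throws Exception {
        // Only hit the network when a daemon host is supplied as an argument;
        // otherwise demonstrate the parsing on a canned reply.
        if (args.length < 1) {
            System.out.println(parseVersion("{\"Version\":\"1.1.1\",\"ApiVersion\":\"1.13\"}")); // prints 1.1.1
            return;
        }
        URL url = new URL(endpoint(args[0], 2375, "/version"));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            System.out.println(parseVersion(in.readLine()));
        }
    }
}
```

Run with the boot2docker VM's IP as the first argument to query a live daemon, or with no arguments to see the parsing only.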

The reuse dilemma

The first commandment that any young programmer learns is “Thou Shalt Not Duplicate”. Thus instructed, whenever we see something that looks like it may be repeated code, we refactor. We create libraries and frameworks. But removing duplication doesn’t come for free. Suppose I refactor some code so that, instead of duplicating some logic in Class A and Class B, these classes share the logic in Class R (for reuse!). Now Classes A and B are indirectly coupled. This is not necessarily a bad thing, but it comes with some consequences, which are often overlooked. If Class A requires the shared functionality to change, we have a choice: We make the change or we stymie Class A. Making the change comes at a cost of breaking Class B. If these classes are in the same Java package (or .NET Namespace), chances are that we will be able to verify that the change didn’t break anything. If the reused functionality is in a library that is used by another library that is used by Class B, verifying that the change was good is harder. This is where Continuous Integration comes in. We check in our change to Class R (for reuse, remember). A build is triggered on Jenkins, Bamboo, TFS or what have you. The build causes other projects to get built and eventually, we get a failing build where a unit test for Class B breaks. If we did our job right. Even with this failing test, we’re not out of the woods. A build system for a large enterprise may take up to an hour or even more to run. This means that by trying to improve Class R, the developers of Class A have made a mess for the developers of Class B, at least for a good while. When organizations see repeated build breaks due to changes in dependencies, the reaction is usually the same: Let’s limit changes to the shared code. Perhaps we’ll even version it, so that any change that you need will be released in the next version and dependencies can upgrade at their own pace. 
Now we’ve introduced another problem: We are no longer able to change the code. Developers of Class A will have to take Class R as it is, or at the very least go through substantial work to change it. If the reused code is very mature, this is fine. After all, this is what we have to deal with when it comes to language standard libraries and open source projects. Most reused code inside an organization, however, isn’t so mature. The developers of Class A will often be frustrated by the limitations of Class R. If the reused class is something very domain specific, the limitation is even worse. So the developers do what any reasonable developer would do: They make a copy of Class R (or they fork the repository where it lives), creating a new duplication of the code. And so it goes: Removing duplication leads to a risk of adverse interactions between reusers. Organizations enforce change control on the reused code to prevent changes made in one context from breaking other parts of the system, resulting in paralysis for the reusers who need the reused code to change. The reusers eventually duplicate the reused code to their own branch where they can evolve it, leading to duplication. Until someone gets the urge to remove the duplication. The dilemma happens in the small and in the large, but the trade-offs are different. What we should do less of is reuse immature code with little novel value. The cost of the extra coupling often far outweighs the benefit of reuse. Reference: The reuse dilemma from our JCG partner Johannes Brodwall at the Thinking Inside a Bigger Box blog....
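The A, B and R classes discussed above can be sketched concretely. The code below is a hypothetical illustration, not taken from any real project, but it shows the shape of the coupling: one edit to R ripples into every reuser.

```java
// Hypothetical illustration of the A/B/R coupling discussed above.
public class ReuseDilemma {

    // Class R: the extracted, shared logic.
    static class R {
        static String format(String value) {
            return "[" + value + "]"; // any change here affects both A and B
        }
    }

    // Class A and Class B no longer duplicate the logic...
    static class A {
        String render(String s) { return "A" + R.format(s); }
    }

    // ...but are now indirectly coupled through R: A's "improvement"
    // to R can silently break B's output and its unit tests.
    static class B {
        String render(String s) { return "B" + R.format(s); }
    }

    public static void main(String[] args) {
        System.out.println(new A().render("x")); // A[x]
        System.out.println(new B().render("x")); // B[x]
    }
}
```

When A and B live in the same package, a broken B shows up immediately; once R moves into its own versioned library, the same break only surfaces on the CI server, an hour later.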

Keyboard shortcuts in IntelliJ

I attended a talk by Hadi Hariri at JavaOne last week. He introduced a whole bunch of IntelliJ keyboard shortcuts I was not aware of. Very useful talk. I have listed some of the most useful ones below.Cmd-1: Move focus to the Project window. While there, type any class name (including using Camel Case e.g. HW to find HelloWorld), then hit enter to have it open in the editor. F4 or Escape brings you to the editor. Fn-Alt-Left arrow (Alt + ↖; Alt + Home also seems to work): Jump to Navigation Bar.The Navigation Bar is a quick alternative to the Project view, for navigating to and opening files for editing. Since I do not have a function key on my Kinesis Advantage keyboard, I reprogrammed this to be Cmd ‘ (Command and quote). Note that my preference is to hide the Navigation bar (View -> Navigation bar), and just call it on demand via the Alt + Home (Alt + ↖ for Mac) shortcut. Ctrl-N (or Alt-Insert) can then be used to create a new class.Cmd-E: Show recent files Cmd-Shift-E: Show recently edited files Cmd-Shift-Backspace: jump to the latest edit location These are all very useful and even allow you to turn off the use of tabs completely (Preferences -> IDE Settings -> Editor -> Editor Tabs -> Placement -> None).Editing shortcutsCmd-D: duplicate current line Cmd-Y: delete current line Ctrl+Shift+Up or Ctrl+Shift+Down: Move Statement Up/Move Statement Down. Note that if you place the caret at the method declaration, it will move the whole method for you. See more at https://www.jetbrains.com/idea/webhelp/adding-deleting-and-moving-lines.html Ctrl-W: select ever expanding blocks of text Alt-Backspace, Alt-Delete: delete to word start/end Cmd-,: Preferences Cmd-`: Switch between projects Shift-Cmd-A: Show shortcuts; this is a useful way to navigate to any IntelliJ command. Reference: Keyboard shortcuts in IntelliJ from our JCG partner Shaun Abram at the Shaun Abram blog....

The Estimates Land mine – use and misuse of estimates

After posting my last post – Estimate or #NoEstimate that is the question? – I felt a little as if I’d stepped on a land-mine. That is to say I had a few comments and a bit of a mini-Twitter storm. If I’m being honest I have been avoiding some of the estimates/#NoEstimates debate until now precisely because it is obvious feelings on the topic run high. Perhaps the thing that surprised me most was that a post intended to support making human estimates was interpreted by many Tweeters as part of the #NoEstimates movement! Maybe convergence between #NoEstimates and #NoProjects is already happening in the public mind. In the meantime it seems to me that a lot of the problem with Estimates lies in what they are, what they are not, how they are used and how they are mis-used. As is so often the case it all depends on what we mean by the word, in this case “Estimate”. Generally I find it useful to agree with Humpty Dumpty: “When I use a word it means just what I choose it to mean—neither more nor less.” (Through the Looking-Glass, Lewis Carroll). After all, who can forget Bill Clinton saying: “It depends upon what the meaning of the word ‘is’ is.” While I am sometimes guilty of the use and misuse of words myself I find it helps to keep an open mind on just what someone means when they use a word or phrase. For example, if a developer says “Unit tests” I try not to jump to assumptions about what “Unit tests” actually are. The fact that such language is used is itself interesting but I also want to know what they actually mean by it. But back to the word “Estimates.” On occasions like this I like to check my dictionary, in this case the Collins: “Estimate 1. to form an approximate idea of (size, cost, etc.); calculate roughly 2. to form an opinion; judge 3. submit an approximate price for a job to a prospective client 4. an approximate calculation 5. a statement of the likely charge for certain work 6. 
an opinion” (Collins Paperback English Dictionary 2001)My other usual source is Wikipedia, which on this occasion gives: “Estimation is the process of finding an estimate, or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable. The value is nonetheless usable because it is derived from the best information available.”From these definitions I get a sense that an estimate is:Approximate A statement of possibility Based on available data which may itself be incomplete or variableMaybe all estimates should be accompanied by a statement of probability but, as Kahneman and Tversky described in the planning fallacy, and as has been proved repeatedly since, not only do humans underestimate time but humans are overconfident in their estimates. Thus any estimate probability statement would probably itself be an overestimate of probability. Besides, very few of the “estimates” I’ve ever seen are accompanied by a statement of probability so I don’t think this suggestion will get very far. More importantly these definitions also help us determine what an estimate is not:An estimate is not exact An estimate is not a promise, guarantee or commitment An estimate is not a target or deadlineAnd it is not several other things. Now in my previous blog post I introduced the idea of “Accurate Estimates”, so I was actually sneaking in the idea that an estimate could have a high probability and could be an accurate indicator of what will happen. Perhaps I was guilty of something there. The trap I fell into is one that many fall into, that of accepting the general usage of the word “Estimate”. In general usage – in the software community – we misuse the word estimate. Firstly we automatically equate Estimate with “Effort Estimate”: effort (and therefore cost) estimates proliferate in software development but we overlook other estimates that might be useful, in particular Benefit Estimates. 
Second, the inherently approximate nature of estimation is too often ignored; estimates are endowed with a sense of promise of what will be rather than recognising their inherent approximate nature. (And as noted in the Planning Fallacy, Vierordt’s Law and Hofstadter’s Law, time estimates will always be underestimates.) This also leads to too much conversation about “Why was the estimate wrong?” – sometimes blame may be implied. The answer to the question is really: “The estimate is not wrong because it was an ESTIMATE” That is to say: An estimate is never wrong because an estimate is an approximation and therefore is not binary “Right” or “Wrong”. Sure, you can have a conversation about why the estimate was very different to what actually played out, but the nature of that conversation is going to be different depending on what you will do with the findings of the conversation. For example, if the resulting information is used to refine future estimates it will be a very different conversation to one where the result will be punishment for someone. (Yes, people do get punished; I once saw a company where Project Managers were rewarded/punished based on the variance between estimated time spent on work and actual time spent.) In short, too often an approximate estimate based on variable information is used as some kind of exact promise to meet a deadline. Software developers love to imagine it is evil managers who take their estimates, massage them to be politically correct, promise them to higher-ups and then force poor coders to honour the number they first thought of, but, big BUT, managers are not the only ones. Even in everyday life the Planning Fallacy, Vierordt’s Law and Hofstadter’s Law hold. Observe yourself next time you have to catch a bus, train, complete a tax return, hand in course work or do something (almost anything) with your kids. 
I would love it if I could wave a magic wand and reset everyone’s understanding of the word Estimate but I don’t see it happening. And I think – although Woody and Vasco might like to correct me – that a large part of the #NoEstimates movement is motivated by this problem. The way I see the logic is:Estimates are seldom recognised for what they really are and honoured as such. Estimates are misused and used as a stick to beat people and organizations. Therefore estimates have become a problem and it is better off finding a way of working without them.I’m not saying this is the whole #NoEstimates logic but it is the part which strikes a chord with me. Incidentally, because I believe estimates are not a promise I don’t believe in Scrum commitment, and because I believe they are approximate, in Xanpan I focus their use on the near term. And because I believe benefits should dictate deadlines, not effort, I refuse to use estimates as deadlines.Reference: The Estimates Land mine – use and misuse of estimates from our JCG partner Allan Kelly at the Agile, Lean, Patterns blog....

JavaOne 2014 Observations by Proxy

I wasn’t able to attend JavaOne this year, but have been happy to see some online resources covering what happened at JavaOne 2014. In this post, I summarize some of the observations made at JavaOne 2014 and provide links to references providing these observations or providing more background details on those observations. The listed observations are in no particular order and many of them come from the JavaOne 2014 Keynote Addresses. Rapid Adoption of Java 8 The Oracle Press Release associated with JavaOne 2014 states, “Since its launch in March 2014, Java SE 8 has achieved record adoption rates. Overall, adoption is up more than 20 percent compared to the same post-launch time period for Java SE 7.”1 George Saab highlighted this rapid adoption with the observation in the Strategy Keynote that there are already eight Java 8 publications available in eight different languages.4 Intel and Java Intel was an “Innovation Sponsor” of JavaOne 2014 and, because of that, had a portion of the JavaOne 2014 “Java Partner Community Keynote” address. In this address, it was stated that Java runs 32 times faster on Intel since 2007.2 It was also announced that Intel has joined the OpenJDK as a Contributing Member.2,5 New OpenJDK Partners The Oracle Press Release for JavaOne 2014 mentions other recently added new partners to the OpenJDK team: FreeBSD Foundation, GE Digital Energy, and Microsoft Open Technologies, Inc.1 JDK Modularity / Project Jigsaw It was confirmed in the “Java Partner Community Keynote” address that Oracle does intend to deliver modularity with JDK 9.2,6 Modularization was scheduled for previous versions of Java, but has been kicked down the road from JDK 8 and from JDK 7 before that. The Oracle Press Release announcing JavaOne 2014 states, “Oracle has begun work on the JDK 9 Project in the OpenJDK Community. 
New features will focus on modularity, performance, stability, and portability.”1 Project Valhalla and Project Panama In the Community Keynote2,5, Brian Goetz cited Project Valhalla (experimental JVM and language features and not to be confused with a much older Project Valhalla) and Project Panama (“Interconnecting JVM and native code”). The promise of value types was also discussed in this part of the keynote.2,5 Eclipse’s Open IoT Stack The Eclipse Foundation announced the Open IoT (Internet of Things) Stack at JavaOne 2014.3 JavaOne 2014 Talks on Parleys.com It was announced that JavaOne 2014 talks will be on Parleys.com.5 Miscellaneous Java Usage Statistics Oracle likes to announce splashy statistics related to “Java” (the language and the platform). This year’s edition was no different1:9 million developers worldwide More than 3 billion devices are powered by Java technology More than 125 million Java-based media devices have been deployed Over 10 billion Java Cards have been shipped since its introductionDuke Has An Alias: Fang One of the more important revelations from JavaOne 2014 for some of us is that Duke was formerly known as Fang.5. JavaOne 2015 JavaOne 2015 will be October 25–29, 2015, in San Francisco, California. Online References to JavaOne 20141Oracle Press Release: Oracle Highlights Continued Java SE Momentum and Innovation at JavaOne 2014 2InfoQ‘s (Ben Evans‘s) Java One – Final Day and Community Keynote 3InfoQ’s (Ben Evans’s) JavaOne 2014 – Day One and Eclipse IoT Announcement 4Oracle’s (Timothy Beneke’s) JavaOne Strategy and Technical Keynotes Look to the Future 5IDR Solutions‘s (Mark Stephens’s) My Key takeaways from JavaOne Community Keynote 6Mark Stoetzer‘s JavaOne 2014 – Day 1 – Keynote 7JavaOne 2014 Keynotes VideosReference: JavaOne 2014 Observations by Proxy from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Small Internal Releases Lead to Happy Customers

If you saw Large Program? Release More Often, you might have noted that I said, You want to release all the time inside your building. You need the feedback, to watch the product grow. Some of my clients have said, “But my customers don’t want the software that often.” That might be true. You may have product constraints, also. If you are working on a hardware/software product, you can’t integrate the software with the hardware until the hardware is ready, or can’t do it that often. I’m not talking about releasing the product to the customers. I’m not talking about integrating the software with the hardware. I’m talking about small, frequent, fully functional releases that help you know that your software is actually done. You don’t need hardening sprints. Or, if you do, you know it early. You know you have that technical debt now, not later. You can fix things when the problem is small. You see, I don’t believe in hardening sprints. Hardening sprints mean you are not getting to done on your features. The features might be too big. Your developers are not finishing the code, so the testers can’t finish the tests. Your testers might not be automating enough. Let’s not forget architectural debt. It could be any number of things. Hardening sprints are a sign that “the software is not done.” Wouldn’t you like to know that every three or four weeks, not every ten or twelve? You could fix it when the problem is small and easier to fix. Here’s an example. I have a number of clients who develop software for the education market. One of them said to me, “We can’t release all the time.” I said, “Sure, you can’t release the grading software in the middle of the semester. You don’t want to upset the teachers. I get that. What about the how-to-buy-books module? Can you update that module?” “Of course. That’s independent. We’re not sure anyone uses that in the middle of the semester anyway.” I was pretty sure I knew better. Teachers are always asking students to buy books. 
Students procrastinate. Why do you think they call it “Student syndrome”? But I decided to keep my mouth shut. Maybe I didn’t know better. The client decided to try just updating the buy-the-book module as they fixed things. The client cleaned up the UI and fixed irritating defects. They released internally every two weeks for about six weeks. They finally had the courage to release mid-semester. A couple of schools sent emails, asking why they waited so long to install these fixes. “Please fix the rest of these problems, as soon as you can. Please don’t wait.” The client had never released this often before. It scared them. It didn’t scare their customers. Their customers were quite happy. And, the customers didn’t have all the interim releases; they had the planned mini-releases that the Product Owner planned. My client still doesn’t release every day. They still have an internal process where they review their fixes for a couple of weeks before the fixes go live. They like that. But, they have a schedule of internal releases that is much shorter than what they used to have. They also release more often to their customers. The customers feel as if they have a “tighter” relationship with my client. Everyone is happier. My client no longer has big-bang external releases. They have many small internal releases. They have happier customers. That is what I invite you to consider. Release externally whenever you want. That is a business decision. Separate that business decision from your ability to release internally all the time. Consider moving to a continuous delivery model internally, or as close as you can get to continuous delivery internally. Now, you can decide what you release externally. That is a business decision. What do you need to do to your planning, your stories, your technical practices to do so?Reference: Small Internal Releases Lead to Happy Customers from our JCG partner Johanna Rothman at the Managing Product Development blog....

Using Java API for WebSockets in JDeveloper 12.1.3

Introduction

The latest release of JDeveloper 12c, along with WebLogic Server 12.1.3, came up with some new Java EE 7 features. One of them is support for JSR 356, the Java API for WebSockets. The WebSocket protocol (RFC 6455) has actually been supported since an earlier release, but that support was based on a WebLogic-specific implementation of the WebSocket API. This proprietary WebLogic Server WebSocket API has now been deprecated, although it is still supported for backward compatibility.

In this post I am going to show an example of using the JSR 356 Java API for WebSockets in a simple ADF application. The use case is a sailing regatta taking place in the Tasman Sea. Three boats are participating in the regatta, and they are going to cross the Tasman Sea, sailing from the Australian to the New Zealand coast. The goal of the sample application is to monitor the regatta and keep users informed of how it is going, showing the positions of the boats on a map. We're going to declare a WebSocket server endpoint in the application, and when a user opens a page, a JavaScript function opens a new WebSocket connection. The application uses a scheduled service which updates the boats' coordinates every second and sends a message containing the new positions to all active WebSocket clients. On the client side, a JavaScript function receives the message and adds markers to the Google map according to the GPS coordinates. So each user interested in the regatta is going to see the same updated picture representing the current status of the competition.

WebSocket server endpoint

Let's start with declaring a WebSocket server endpoint. There is a small issue in the current implementation, which will probably be resolved in future releases: WebSocket endpoints cannot be mixed with ADF pages and have to be deployed in a separate WAR file.
The easiest way to do that is to create a separate WebSocket project within the application and to declare all the necessary endpoints in this project. It is also important to set up a readable Java EE Web Context Root for the project.

The next step is to create a Java class which is going to be the WebSocket endpoint. This is a usual class with a special annotation at the very beginning:

    @ServerEndpoint(value = "/message")
    public class MessageEndPoint {

        public MessageEndPoint() {
            super();
        }
    }

Note that JDeveloper underlines the annotation in red. We are going to fix the issue by letting JDeveloper configure the project for WebSocket. Having done that, JDeveloper converts the project into a Web project, adding the web.xml file and the necessary library. Furthermore, the endpoint class becomes runnable, so we can just run it to check how it actually works. In response, JDeveloper generates the URL at which the WebSocket endpoint is available. Note that the URL contains the project context root (WebSocket) and the value property of the annotation (/message). If everything is ok, then when we click the URL we'll get the "Connected successfuly" information window. By the way, there is a typo in the message.

And now let's add some implementation to the WebSocket endpoint class. According to the specification, a new instance of the MessageEndPoint class is going to be created for each WebSocket connection.
In order to hold all active WebSocket sessions, we're going to use a static queue:

    public class MessageEndPoint {
        // A new instance of the MessageEndPoint class
        // is going to be created for each WebSocket connection.
        // This queue contains all active WebSocket sessions.
        final static Queue<Session> queue = new ConcurrentLinkedQueue<>();

        @OnOpen
        public void open(Session session) {
            queue.add(session);
        }

        @OnClose
        public void closedConnection(Session session) {
            queue.remove(session);
        }

        @OnError
        public void error(Session session, Throwable t) {
            queue.remove(session);
            t.printStackTrace();
        }

The annotated methods open, closedConnection and error are going to be invoked, respectively, when a new connection has been established, when it has been closed, and when something has gone wrong. Having done that, we can use a static method to broadcast a text message to all clients:

    public static void broadcastText(String message) {
        for (Session session : queue) {
            try {
                session.getBasicRemote().sendText(message);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

In our use case we have to notify users of the new GPS coordinates of the boats, so we should be able to send something more complex than just text messages via WebSockets.
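The choice of ConcurrentLinkedQueue matters here: the @OnOpen and @OnClose callbacks and the broadcast loop can run on different container threads at the same time, and this queue's weakly consistent iterator never throws ConcurrentModificationException. A minimal plain-JDK sketch of that property (with String stand-ins for Session, since javax.websocket is not assumed to be on the classpath):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueSketch {
    // Stand-in for the endpoint's session registry; Session replaced by String.
    static final Queue<String> queue = new ConcurrentLinkedQueue<>();

    public static void main(String[] args) throws InterruptedException {
        // Simulate @OnOpen callbacks arriving from several container threads.
        Thread[] openers = new Thread[4];
        for (int t = 0; t < openers.length; t++) {
            final int id = t;
            openers[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    queue.add("session-" + id + "-" + i);
                }
            });
            openers[t].start();
        }

        // The broadcast loop may iterate while sessions are still being added;
        // the weakly consistent iterator simply sees some snapshot of the queue.
        int seen = 0;
        for (String s : queue) {
            seen++;
        }

        for (Thread t : openers) {
            t.join();
        }
        System.out.println("registered=" + queue.size());
    }
}
```

With a synchronized collection such as a plain ArrayList, the same interleaving of iteration and additions could fail; here it is safe by design.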
Sending an object

Basically, the business model of the sample application is represented by two plain Java classes. Boat:

    public class Boat {
        private final String country;
        private final double startLongitude;
        private final double startLatitude;
        private double longitude;
        private double latitude;

        public Boat(String country, double longitude, double latitude) {
            this.country = country;
            this.startLongitude = longitude;
            this.startLatitude = latitude;
            // initialize the current position as well, so the getters
            // return the starting coordinates before the first move
            this.longitude = longitude;
            this.latitude = latitude;
        }

        public String getCountry() {
            return country;
        }

        public double getLongitude() {
            return longitude;
        }

        public double getLatitude() {
            return latitude;
        }
    ...

and Regatta:

    public class Regatta {
        private final Boat[] participants = new Boat[] {
            new Boat("us", 151.644, -33.86),
            new Boat("ca", 151.344, -34.36),
            new Boat("nz", 151.044, -34.86)
        };

        public Boat[] getParticipants() {
            return participants;
        }
    ...

For our use case we're going to send an instance of the Regatta class to the WebSocket clients. The Regatta contains all the regatta participants, represented by Boat instances carrying updated GPS coordinates (longitude and latitude). This can be done by creating a custom implementation of the Encoder.Text<Regatta> interface; in other words, we're going to create an encoder which can transform a Regatta instance into text, and specify that this encoder is to be used by the WebSocket endpoint when sending an instance of Regatta.
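The scheduler shown later in the post calls regatta.move() and regatta.getLongitude(), but neither method appears in the Regatta class above. A plausible sketch of what they might look like: the method names come from the post, while the bodies are assumptions (each boat advances eastward by a small random step per tick, and getLongitude() reports the leading boat's position):

```java
import java.util.Random;

public class RegattaSketch {
    static class Boat {
        double longitude;
        Boat(double longitude) { this.longitude = longitude; }
    }

    static class Regatta {
        private final Boat[] participants = {
            new Boat(151.644), new Boat(151.344), new Boat(151.044)
        };
        private final Random random = new Random();

        // Hypothetical: advance each boat east by 0.05 to 0.1 degrees per tick.
        public void move() {
            for (Boat boat : participants) {
                boat.longitude += 0.05 + random.nextDouble() * 0.05;
            }
        }

        // Hypothetical: the easternmost (leading) boat's longitude, which
        // RegattaRun could compare against a finish line to stop the race.
        public double getLongitude() {
            double max = Double.NEGATIVE_INFINITY;
            for (Boat boat : participants) {
                max = Math.max(max, boat.longitude);
            }
            return max;
        }
    }

    public static void main(String[] args) {
        Regatta regatta = new Regatta();
        double before = regatta.getLongitude();
        regatta.move();
        // the leading boat is strictly further east after one tick
        System.out.println(regatta.getLongitude() > before);
    }
}
```

Only the latitude-free movement is sketched here; the real application presumably updates both coordinates so the markers follow a plausible course.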
    public class RegattaTextEncoder implements Encoder.Text<Regatta> {

        @Override
        public void init(EndpointConfig ec) { }

        @Override
        public void destroy() { }

        private JsonObject encodeBoat(Boat boat) throws EncodeException {
            JsonObject jsonBoat = Json.createObjectBuilder()
                .add("country", boat.getCountry())
                .add("longitude", boat.getLongitude())
                .add("latitude", boat.getLatitude()).build();
            return jsonBoat;
        }

        @Override
        public String encode(Regatta regatta) throws EncodeException {
            JsonArrayBuilder arrayBuilder = Json.createArrayBuilder();
            for (Boat boat : regatta.getParticipants()) {
                arrayBuilder.add(encodeBoat(boat));
            }
            return arrayBuilder.build().toString();
        }
    }

    @ServerEndpoint(
        value = "/message",
        encoders = { RegattaTextEncoder.class })

Having done that, we can send objects to our clients:

    public static void sendRegatta(Regatta regatta) {
        for (Session session : queue) {
            try {
                session.getBasicRemote().sendObject(regatta);
            } catch (EncodeException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

The RegattaTextEncoder represents a Regatta object as a list of boats in JSON notation, so the message is going to look something like this:

    [{"country":"us","longitude":151.67,"latitude":-33.84},{"country":"ca", ...},{"country":"nz", ...}]

Receiving a message

On the client side we use a JavaScript function to open a new WebSocket connection:

    // Open a new WebSocket connection.
    // Invoked on page load.
    function connectSocket() {
        websocket = new WebSocket(getWSUri());
        websocket.onmessage = onMessage;
    }

And when a message arrives, we loop over the array of boats and add a marker on the map for each boat:

    function onMessage(evt) {
        var boats = JSON.parse(evt.data);
        for (i = 0; i < boats.length; i++) {
            markBoat(boats[i]);
        }
    }

    function markBoat(boat) {
        var image = '../resources/images/' + boat.country + '.png';
        var latLng = new google.maps.LatLng(boat.latitude, boat.longitude);
        mark = new google.maps.Marker({
            position: latLng,
            map: map,
            title: boat.country,
            icon: image
        });
    }

You can learn here how to integrate Google Maps into your applications.

Run the regatta

In order to emulate a live show, we use a ScheduledExecutorService. Every second we update the GPS coordinates and broadcast the update to all subscribers:

    private final ScheduledExecutorService scheduler =
        Executors.newScheduledThreadPool(1);
    private ScheduledFuture<?> runHandle;

    // Schedule a new regatta on Start button click
    public void startRegatta(ActionEvent actionEvent) {
        // Cancel the previous regatta
        if (runHandle != null) {
            runHandle.cancel(false);
        }
        runHandle = scheduler.scheduleAtFixedRate(new RegattaRun(), 1, 1,
                                                  TimeUnit.SECONDS);
    }

    public class RegattaRun implements Runnable {

        private final static double FINISH_LONGITUDE = 18;
        private final Regatta regatta = new Regatta();

        // Every second, update GPS coordinates and broadcast
        // the new positions of the boats
        public void run() {
            regatta.move();
            MessageEndPoint.sendRegatta(regatta);
            if (regatta.getLongitude() >= FINISH_LONGITUDE) {
                runHandle.cancel(true);
            }
        }
    }

Bet on your boat

And finally, the result of our work looks like this:

The sample application for this post requires JDeveloper 12.1.3. Have fun! That's it!

Reference: Using Java API for WebSockets in JDeveloper 12.1.3 from our JCG partner Eugene Fedorenko at the ADF Practice blog.

JavaOne 2014: Conferences conflict with contractual interests

The Duke’s Street Cafe, where engineers can have a hallway conversation on the street.

Incompatible with contracting

My eleventh JavaOne conference (11 = 10 + 1, 2004 to 2014) was splendid. It was worth attending this event and meeting all the people involved in the community. Now here comes the gentleman’s but. My attendance came at some cost beyond the obvious financial one of the hotel and plane ticket. It appears going to conferences is seriously incompatible with the motivations around the business of contracting. One cannot have freedom and escape obligation to professional work. Despite all of the knowledge that we have gained as professional developers, designers and architects, if your client requires you to be on site and you are not around, attending a conference like JavaOne 2014 can, in certain minds, be taken as an illustrious and salubrious adventure for your own benefit. On the other hand, this is a fair assessment: a client pays a contractor to be available, around for a burning need, and that is balanced against teamwork, morale, deadlines and commitments.

At the back of my mind, there are two schools of thought. One way is not to care too much about clients, but then a contractor will find they have a devalued reputation and a lack of repeat business. The other way is never to take time off or away from project work for a client, and to rely on contracts ending or finishing exactly before or after a major conference like JavaOne. So what to do in 2015? How can I reconcile contracting and conferences? I believe the answer, obviously, is to reduce the conferences that I actually attend to the minimum that I can, unfortunately. It means that I will consider whether JavaOne 2015 is going to be viable or not.

The keynote question and answer session with a Twitter hashtag, #j1qa, which obviously has long expired, featuring John Rose (far left), James Gosling (inner left), Brian Goetz (middle), Brian Oliver (inner right) and Charles Nutter (far right).
The chair was Mark Reinhold.

Ten years ago, when I worked in investment banking in the good times, I could pretty much rely on 6-month J2EE contracts lasting as long as that term. At Credit Suisse bank, I managed six-month contract renewals with ease, as long as I performed and finished project work on time. In 2014, the climate is more restrictive; the pressure on high-profile projects and the uncertainty of business mean that contract lengths are typically 3 months to start with, with no guarantee of renewal. And if you think that permanent employment solves the dilemma, then you are incorrect. A contract is temporary, and by definition that implies a contractor is treated as a temporary resource, but a permanent person can also be removed at short notice in the United Kingdom if they have less than two years with the employer. When you consider that a typical IT job lasts about two to three years before somebody changes employer, you can see that even permanent people have to be extremely careful with their holiday planning and entitlement. Yes, you are entitled to 25 days or more, but if you fail to give forewarning and mess around with the program delivery manager’s project plan too much, don’t be surprised if a ton of bricks eventually comes tumbling down.

A picture with the Java mascot to complete the collection. I wonder if Duke has a sixth sense and if she/he/it can sense the trouble ahead lurking in my subconscious.

Frustrating as it is, and with still more than a year until the next JavaOne conference, late October 2015, from Sunday 25th to Thursday 29th, I can’t say with real confidence that I will be there. I will, of course, submit some Calls For Papers when the time approaches, but whether I can attend will depend on client requirements. If I do attend, then I probably cannot stick around California and see friends. Even for the UK and European conferences, I can only see trouble ahead with more conflicts.
I have already decided that I will not be at Devoxx in Belgium. There are also issues when conference planning is late: if confirmations are validated less than three months before the event, project managers are already looking at their schedules for resourcing, and if a contractor is going to disappear, they are easily replaced with somebody who will be around to fix the present pain, which is what work is more often than not about. I have found that clients typically do not have an attitude of kindness; it is about the budget and time. That is the way the business world is running now, and the only conference speakers who can give up the time are the developer advocates, the people who are paid to speak or promote at conferences. Independents are finding it harder, and there will be no improvement in this situation. I just can’t seem to find that benevolent, technology-loving business client who understands me for what I am.

These guys and gals at Aldebaran Robotics with their NAO robots are inspirational. This is a photo from the JavaOne demo grounds and exhibition.

Reference: JavaOne 2014: Conferences conflict with contractual interests from our JCG partner Peter Pilgrim at Peter Pilgrim’s blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.