


How to write better POJO Services

In Java, you can easily implement business logic in Plain Old Java Object (POJO) classes and run it in a fancy server or framework without much hassle. There are many servers and frameworks, such as JBoss AS, Spring or Camel, that let you deploy POJOs without hard-coding against their APIs. You obviously get more advanced features if you are willing to couple to their API specifics, but even then you can keep the coupling minimal by encapsulating your own POJOs and their API in a wrapper. By designing your application as plain POJOs as far as possible, you keep the most flexibility in choosing a framework or server to deploy and run it. One effective way to write business logic in these environments is to use Service components. In this article I will share a few things I have learned about writing Services.

What is a Service? The word Service is overused today, and it means different things to different people. When I say Service, I mean a software component with a minimal set of life-cycle stages such as init, start, stop and destroy. You may not need every stage in every service you write, but you can simply ignore the ones that don't apply. When writing a large, long-running application such as a server component, defining these life-cycles and ensuring they are executed in the proper order is crucial!

I will walk you through a Java demo project that I have prepared. It is very basic and runs stand-alone; its only dependency is the SLF4J logger. If you don't know how to use a logger, simply replace the calls with System.out.println, though I would strongly encourage you to learn to use a logger effectively during application development. Also, if you want to try the Spring-related demos, you will obviously need the Spring jars as well.

Writing basic POJO services

You can quickly define the contract of a Service with life-cycles in an interface:
    package servicedemo;

    public interface Service {
        void init();
        void start();
        void stop();
        void destroy();
        boolean isInited();
        boolean isStarted();
    }

Developers are free to do what they want in their Service implementations, but you might want to give them an adapter class so that they don't have to rewrite the same basic logic in each Service. I would provide an abstract service like this:

    package servicedemo;

    import java.util.concurrent.atomic.*;
    import org.slf4j.*;

    public abstract class AbstractService implements Service {

        protected Logger logger = LoggerFactory.getLogger(getClass());
        protected AtomicBoolean started = new AtomicBoolean(false);
        protected AtomicBoolean inited = new AtomicBoolean(false);

        public void init() {
            if (!inited.get()) {
                initService();
                inited.set(true);
                logger.debug("{} initialized.", this);
            }
        }

        public void start() {
            // Init service if it has not been done yet.
            if (!inited.get()) {
                init();
            }
            // Start service now.
            if (!started.get()) {
                startService();
                started.set(true);
                logger.debug("{} started.", this);
            }
        }

        public void stop() {
            if (started.get()) {
                stopService();
                started.set(false);
                logger.debug("{} stopped.", this);
            }
        }

        public void destroy() {
            // Stop service if it is still running.
            if (started.get()) {
                stop();
            }
            // Destroy service now.
            if (inited.get()) {
                destroyService();
                inited.set(false);
                logger.debug("{} destroyed.", this);
            }
        }

        public boolean isStarted() {
            return started.get();
        }

        public boolean isInited() {
            return inited.get();
        }

        @Override
        public String toString() {
            return getClass().getSimpleName() + "[id=" + System.identityHashCode(this) + "]";
        }

        protected void initService() { }
        protected void startService() { }
        protected void stopService() { }
        protected void destroyService() { }
    }

This abstract class provides the basics that most services need. It has a logger and state flags to keep track of the life-cycles. It delegates to a new set of life-cycle methods that subclasses can choose to override.
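The adapter's auto-init behavior can be sketched in a self-contained form. This is a minimal sketch, not the article's actual class: System.out stands in for SLF4J, and compareAndSet is used for the flag flips (a small thread-safety variation on the separate get/set calls above).

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal sketch of the lifecycle pattern: start() triggers init() implicitly,
// destroy() triggers stop() implicitly, and repeated calls are no-ops.
public class LifecycleDemo {

    static class SimpleService {
        final AtomicBoolean inited = new AtomicBoolean(false);
        final AtomicBoolean started = new AtomicBoolean(false);
        final StringBuilder log = new StringBuilder();

        void init()    { if (inited.compareAndSet(false, true)) log.append("init,"); }
        void start()   { init(); if (started.compareAndSet(false, true)) log.append("start,"); }
        void stop()    { if (started.compareAndSet(true, false)) log.append("stop,"); }
        void destroy() { stop(); if (inited.compareAndSet(true, false)) log.append("destroy,"); }
    }

    public static String run() {
        SimpleService s = new SimpleService();
        s.start();   // init() is invoked implicitly first
        s.start();   // second call is a no-op
        s.destroy(); // stop() is invoked implicitly first
        return s.log.toString();
    }

    public static void main(String[] args) {
        System.out.println(run()); // init,start,stop,destroy,
    }
}
```

The two start() calls produce only one "start" event, which is exactly the guard behavior the abstract class relies on when a container calls life-cycle methods more than once.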
Notice that start() checks whether init() has already been called and invokes it automatically if it hasn't. destroy() does the same for stop(). This is important when the service is used in a container that only has a two-stage life-cycle: in that case we can simply invoke start() and destroy() and still pass through all four of our service's stages. Some frameworks go even further and create separate interfaces for each life-cycle stage, such as InitableService or StartableService. I think that would be too much in a typical app. In most cases you want something simple, so I prefer a single interface. Users may choose to ignore the methods they don't need, or simply use the adapter class. Before we end this section, here is a silly hello-world service that we can use in the demos later:

    package servicedemo;

    public class HelloService extends AbstractService {
        public void initService() { logger.info(this + " inited."); }
        public void startService() { logger.info(this + " started."); }
        public void stopService() { logger.info(this + " stopped."); }
        public void destroyService() { logger.info(this + " destroyed."); }
    }

Managing multiple POJO services with a container

Now that the basic Service definition is in place, your development team can start writing business logic! Before long you will have a library of your own services to reuse. To group and control these services effectively, we also want a container to manage them. The idea is that we typically want to control and manage multiple services as a group at a higher level.
Here is a simple implementation to get you started:

    package servicedemo;

    import java.util.*;

    public class ServiceContainer extends AbstractService {

        private List<Service> services = new ArrayList<Service>();

        public void setServices(List<Service> services) {
            this.services = services;
        }

        public void addService(Service service) {
            this.services.add(service);
        }

        public void initService() {
            logger.debug("Initializing " + this + " with " + services.size() + " services.");
            for (Service service : services) {
                logger.debug("Initializing " + service);
                service.init();
            }
            logger.info(this + " inited.");
        }

        public void startService() {
            logger.debug("Starting " + this + " with " + services.size() + " services.");
            for (Service service : services) {
                logger.debug("Starting " + service);
                service.start();
            }
            logger.info(this + " started.");
        }

        public void stopService() {
            int size = services.size();
            logger.debug("Stopping " + this + " with " + size + " services in reverse order.");
            for (int i = size - 1; i >= 0; i--) {
                Service service = services.get(i);
                logger.debug("Stopping " + service);
                service.stop();
            }
            logger.info(this + " stopped.");
        }

        public void destroyService() {
            int size = services.size();
            logger.debug("Destroying " + this + " with " + size + " services in reverse order.");
            for (int i = size - 1; i >= 0; i--) {
                Service service = services.get(i);
                logger.debug("Destroying " + service);
                service.destroy();
            }
            logger.info(this + " destroyed.");
        }
    }

From the code above, notice a few important things:

- We extend AbstractService, so the container is a service itself.
- We invoke each life-cycle stage on all services before moving to the next stage: no service is started unless all of them have been inited.
- We stop and destroy services in reverse order, which is what most general use cases want.

This container implementation is simple and runs synchronously: when you start the container, all services start in the order you added them, and stopping works the same way but in reverse order.
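The ordering contract (start in insertion order, stop in reverse) can be demonstrated with a tiny self-contained sketch. The recorder services and names below are illustrative, not part of the demo project:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the container's ordering guarantee: services started first are stopped last.
public class ContainerOrderDemo {

    interface Service { void start(); void stop(); }

    // A service that records its life-cycle events into a shared log.
    static Service recording(String name, List<String> log) {
        return new Service() {
            public void start() { log.add("start:" + name); }
            public void stop()  { log.add("stop:" + name); }
        };
    }

    public static List<String> run() {
        List<String> log = new ArrayList<>();
        List<Service> services = new ArrayList<>();
        services.add(recording("db", log));   // added first, so started first
        services.add(recording("web", log));  // added last, so stopped first

        for (Service s : services) s.start();                                   // forward order
        for (int i = services.size() - 1; i >= 0; i--) services.get(i).stop();  // reverse order
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run()); // [start:db, start:web, stop:web, stop:db]
    }
}
```

Reverse shutdown matters when later services depend on earlier ones: "web" can still use "db" while it shuts down.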
I also hope you can see that there is plenty of room to improve this container. For example, you could add a thread pool to execute the services asynchronously.

Running POJO services

Running services with a simple runner program. In the simplest form, we can run our POJO services on our own without any fancy server or framework. A Java program starts its life from a static main method, so we can certainly invoke init and start on our services there. But we also need to address the stop and destroy life-cycles when the user shuts the program down (usually by hitting CTRL+C). For this, Java provides the java.lang.Runtime#addShutdownHook() facility. You can create a simple stand-alone runner to bootstrap a Service like this:

    package servicedemo;

    import org.slf4j.*;

    public class ServiceRunner {

        private static Logger logger = LoggerFactory.getLogger(ServiceRunner.class);

        public static void main(String[] args) {
            ServiceRunner main = new ServiceRunner();
            main.run(args);
        }

        public void run(String[] args) {
            if (args.length < 1)
                throw new RuntimeException("Missing service class name as argument.");

            String serviceClassName = args[0];
            try {
                logger.debug("Creating " + serviceClassName);
                Class<?> serviceClass = Class.forName(serviceClassName);
                if (!Service.class.isAssignableFrom(serviceClass)) {
                    throw new RuntimeException("Service class " + serviceClassName
                            + " does not implement " + Service.class.getName());
                }
                Object serviceObject = serviceClass.newInstance();
                Service service = (Service) serviceObject;

                registerShutdownHook(service);

                logger.debug("Starting service " + service);
                service.init();
                service.start();
                logger.info(service + " started.");

                synchronized (this) {
                    this.wait();
                }
            } catch (Exception e) {
                throw new RuntimeException("Failed to create and run " + serviceClassName, e);
            }
        }

        private void registerShutdownHook(final Service service) {
            Runtime.getRuntime().addShutdownHook(new Thread() {
                public void run() {
                    logger.debug("Stopping service " + service);
                    service.stop();
                    service.destroy();
                    logger.info(service + " stopped.");
                }
            });
        }
    }

With the above runner, you should be able to run the demo with this command:

    $ java servicedemo.ServiceRunner servicedemo.HelloService

Look carefully and you'll see that there are many options for running multiple services with this runner. Let me highlight a couple:

- Improve the runner to treat every argument as a service class name, instead of just the first element.
- Write a MultiLoaderService that loads the multiple services you want; you can control argument passing using system properties.

Can you think of other ways to improve this runner?

Running services with Spring

The Spring framework is an IoC container that is well known for working easily with POJOs, and it lets you wire your application together. This is a perfect fit for our POJO services. However, for all the features Spring brings, it lacks an easy-to-use, out-of-the-box main program to bootstrap Spring XML context files. With what we have built so far, this is actually easy to add. Let's write one of our POJO Services to bootstrap a Spring context file:

    package servicedemo;

    import org.springframework.context.ConfigurableApplicationContext;
    import org.springframework.context.support.FileSystemXmlApplicationContext;

    public class SpringService extends AbstractService {

        private ConfigurableApplicationContext springContext;

        public void startService() {
            String springConfig = System.getProperty("springContext", "spring.xml");
            springContext = new FileSystemXmlApplicationContext(springConfig);
            logger.info(this + " started.");
        }

        public void stopService() {
            springContext.close();
            logger.info(this + " stopped.");
        }
    }

With this simple SpringService you can run and load any Spring XML file.
For example, try this:

    $ java -DspringContext=config/service-demo-spring.xml servicedemo.ServiceRunner servicedemo.SpringService

Inside the config/service-demo-spring.xml file, you can easily create our container hosting one or more services as Spring beans:

    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
               http://www.springframework.org/schema/beans/spring-beans.xsd">

        <bean id="helloService" class="servicedemo.HelloService">
        </bean>

        <bean id="serviceContainer" class="servicedemo.ServiceContainer"
              init-method="start" destroy-method="destroy">
            <property name="services">
                <list>
                    <ref bean="helloService"/>
                </list>
            </property>
        </bean>

    </beans>

Notice that I only need to set init-method and destroy-method once, on the serviceContainer bean. You can then add as many other services as you want, such as helloService; they will all be started, managed and shut down when you close the Spring context. Note that the Spring context does not explicitly have the same life-cycles as our services. Spring automatically instantiates all your dependency beans and then invokes every bean whose init-method is set, all inside the constructor of FileSystemXmlApplicationContext; the user never calls an explicit init method. At the end, during shutdown, Spring provides context#close() to clean things up, and again it does not differentiate stop from destroy. Because of this, we must merge our init and start into Spring's init phase, and merge stop and destroy into Spring's close phase. Recall that our AbstractService#destroy automatically invokes stop if it hasn't been called already. This is the trick you need to understand to use Spring effectively here.

Running services with a JEE app server

In a corporate environment, we usually do not have the freedom to run whatever we want as a stand-alone program.
Instead, there is usually some infrastructure and a stricter standard technology stack in place already, such as a JEE application server. In that situation, the most portable way to run POJO services is inside a WAR web application. In a Servlet web application, you can write a class that implements javax.servlet.ServletContextListener, which gives you life-cycle hooks via contextInitialized and contextDestroyed. In there, you can instantiate your ServiceContainer object and call its start and destroy methods accordingly. Here is an example you can explore:

    package servicedemo;

    import java.util.*;
    import javax.servlet.*;
    import org.slf4j.*;

    public class ServiceContainerListener implements ServletContextListener {

        private static Logger logger = LoggerFactory.getLogger(ServiceContainerListener.class);
        private ServiceContainer serviceContainer;

        public void contextInitialized(ServletContextEvent sce) {
            serviceContainer = new ServiceContainer();
            List<Service> services = createServices();
            serviceContainer.setServices(services);
            serviceContainer.start();
            logger.info(serviceContainer + " started in web application.");
        }

        public void contextDestroyed(ServletContextEvent sce) {
            serviceContainer.destroy();
            logger.info(serviceContainer + " destroyed in web application.");
        }

        private List<Service> createServices() {
            List<Service> result = new ArrayList<Service>();
            // populate services here.
            return result;
        }
    }

You can configure the above in WEB-INF/web.xml like this:

    <listener>
        <listener-class>servicedemo.ServiceContainerListener</listener-class>
    </listener>

The demo only provides a placeholder, so you must add your services in code, but you can easily make that configurable using context parameters in web.xml.
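As a sketch of that configurable variant: the context-param below shows how the class names could be moved out of the code. The parameter name "serviceClasses" is a hypothetical choice, not part of the original demo; the listener would read it via sce.getServletContext().getInitParameter("serviceClasses"), split on commas, and instantiate each class reflectively, as ServiceRunner does.

```xml
<!-- Hypothetical sketch: service class names configured in web.xml instead of code.
     The "serviceClasses" parameter name is an assumption for illustration. -->
<context-param>
    <param-name>serviceClasses</param-name>
    <param-value>servicedemo.HelloService,servicedemo.SpringService</param-value>
</context-param>
<listener>
    <listener-class>servicedemo.ServiceContainerListener</listener-class>
</listener>
```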
If you use Spring inside a Servlet container, you can directly use its org.springframework.web.context.ContextLoaderListener class, which does pretty much the same as the listener above, except that it lets you specify the XML configuration file via the contextConfigLocation context parameter. That is how a typical Spring MVC application is configured. Once you have this set up, you can experiment with our POJO services just as in the Spring XML sample above and watch them in action through your logger output. PS: What we have described here is actually plain Servlet web application territory, not JEE-specific, so a Tomcat server works just fine as well.

The importance of a Service's life-cycles and their real-world usage

Nothing I have presented here is a novelty or a killer design pattern; in fact it has been used in many popular open source projects. However, in my past work experience, people always manage to make these things extremely complicated, and in the worst case they completely disregard the importance of life-cycles when writing services. It's true that not everything you write needs to be a service, but if you find the need, please do pay attention to the life-cycles and take good care that they are invoked properly. The last thing you want is to exit the JVM without cleaning up services for which you allocated precious resources. This becomes even more disastrous if you allow your application to be dynamically reloaded during deployment without exiting the JVM, which will lead to system resource leakage. The Service practice above has been put to use in the TimeMachine project. In fact, if you look at timemachine.scheduler.service.SchedulerEngine, it is just a container of many services running together. That is also how users can extend the scheduler's functionality: by writing a Service. You can load these services dynamically through a simple properties file.
Reference: How to write better POJO Services from our JCG partner Zemian Deng at the A Programmer’s Journal blog....

Naming Antipatterns

One of the annoying challenges when coding is finding proper names for your classes. There are tools available that make fun of our inability to come up with proper names, and while I enjoy that kind of gag, I think a serious problem is hiding underneath. The problem is: classes should be abstractions, and I should have only one abstraction for a single purpose. But if my classes are unique abstractions, it should be easy to name them, right? You only have to look into the average code base to see that it isn't easy. Let's have a look at a couple of common antipatterns.

AbstractAnything: Often you have a possibly large set of classes that share a lot of common code. In many cases you find an abstract class at the top of their inheritance tree named AbstractWhatever. That name is technically correct but not very helpful: the class itself already tells me it is abstract, so there is no need to put that in the name. But apart from being picky about the name, a couple of serious design problems come with these classes. They tend to gather functionality common to their subclasses, and the problem is: just because all (or, even worse, many) of the subclasses need a feature, it shouldn't be embedded in their superclass. The features are often mostly independent and therefore should go into separate classes. Take the example of an AbstractEditor intended as the base class for editor models in a Swing application. You might find things in an AbstractEditor like:

- a save method that sets the cursor to its wait state before calling an abstract real save method
- a boolean property telling you if this editor needs saving
- the class and id of the entity being edited
- property change handling infrastructure
- infrastructure code for validating inputs
- code that handles asking the user whether to save changes when he tries to close the editor

and so on.
Of course these features depend on each other, but when they are lumped into a single class the dependencies become muddled. If the need for a new feature comes up, there is hardly any option left but to put it into the same class, and after some time the class looks almost like this example. Note that some developers try to hide the application of this antipattern by renaming the class to BaseSomething. That doesn't help, though.

AnythingDo, AnythingTo, AnythingBs: With this antipattern you have one properly named class and lots of very similar classes with various suffixes. These suffixes are often very short and denote different layers of the application; on the boundaries of these layers, data gets copied from one object into the other with barely any logic. The problem with this antipattern is that while there might be valid reasons for these classes to exist, the seemingly deterministic way of constructing one from the other often runs against the purpose of these classes. An example: you might have a Person class that represents a person in your system. You might also have a PersonHe ("He" as in Hibernate Entity) which is mapped to a database table using Hibernate. The Person class is intended to be used in all the business logic, but since it is simply copied over to the Hibernate entity at the boundary of the persistence layer, it has to be handled in the way Hibernate expects. For example, you have to move the complete Person object around even if you just want to change a single attribute (e.g. the marriage status), because if you leave fields empty, Hibernate will store those empty fields in the database, and you end up with persons that have no useful property anymore except being married. Although this actually describes reality pretty well in some cases, it normally isn't what you want.
Instead, consider a design where in the case of a marriage you actually create a Marriage object in your business logic, one that has no direct counterpart in the database. You would do all the checks and logic in your business layer, without having code like:

    if (oldPerson.married != newPerson.married && newPerson.married) ...

Only when you store it do you put the information from the Marriage into Hibernate Person entities; there is no MarriageHe or anything like it. This kind of design makes for much more expressive code. But developers often don't realize this option exists, and it can be incredibly hard to force it into the existing infrastructure and architecture, because everything assumes a 1:1 relationship between Person, PersonHe and all the other Person classes.

AnythingImpl: This one is annoying, and most people actually feel they are doing something wrong when they have an interface X and a single implementation XImpl. It is bad because the Impl suffix tells us basically nothing: the JavaDoc already tells us it is the implementation of the interface, so there is no need to put that fact into the class name. It also suggests there will only ever be one implementation; at least I hope you don't have classes ending in Impl2 and Impl3 in your code base. But if you have only one implementation in the first place, why do you have an interface at all? It doesn't make sense. Let's think hard about what other implementations there are (or might be). A classical example is the PersonDao interface and PersonDaoImpl. Here are some possible implementation alternatives I would come up with:

- one implementation could use Hibernate to store and retrieve data
- one implementation could use a map or a similar in-memory structure: very useful for testing
- one implementation might use special Oracle features

Which one is PersonDaoImpl? And, by contrast, which one is OraclePersonDao, HibernatePersonDao and InMemoryPersonDao?
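The in-memory alternative makes the point concrete. This is an illustrative sketch (the PersonDao methods below are invented for the example, not from any real codebase): the name InMemoryPersonDao says exactly what distinguishes this implementation, where PersonDaoImpl would say nothing.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: name implementations after what distinguishes them, not with "Impl".
public class DaoNamingDemo {

    interface PersonDao {
        void save(long id, String name);
        String findName(long id);
    }

    // The name announces what this implementation is good for: fast tests,
    // no database required. A HibernatePersonDao would sit alongside it.
    static class InMemoryPersonDao implements PersonDao {
        private final Map<Long, String> store = new HashMap<>();
        public void save(long id, String name) { store.put(id, name); }
        public String findName(long id) { return store.get(id); }
    }

    public static String run() {
        PersonDao dao = new InMemoryPersonDao();
        dao.save(1L, "Ada");
        return dao.findName(1L);
    }

    public static void main(String[] args) {
        System.out.println(run()); // Ada
    }
}
```

Calling code depends only on PersonDao, so swapping in a HibernatePersonDao later touches nothing but the wiring.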
If nothing else, consider ProductionPersonDao to distinguish it from the implementation used for testing. The next time you have a good class or interface name and feel like slapping some kind of standard suffix or prefix onto it to create the name for another class, think twice: you might be creating a useless class name, or you might be screwing up your software design. Reference: Naming Antipatterns from our JCG partner Jens Schauder at the Schauderhaft blog....

Sonar’s Quality Alphabet

Sonar (by SonarSource.com) is getting more and more popular among developer teams. It's an open source platform that measures software quality along the following seven axes: architecture and design, comments, coding rules, complexity, code duplication, potential bugs, and unit tests. If you're a Sonar newbie, you may find this blog post very useful. On the other hand, if you're an experienced user, you can use it to refresh what you've learned so far. Sonar's alphabet is not a user manual; it's a reference to help you learn (and teach others) some basic terms and words used in the world of Sonar.

A for Analysis: Sonar's basic feature is the ability to analyse source code in various ways (Maven, Ant, Sonar Runner, triggered by a CI system). You can have static and/or dynamic analysis if supported by the analyzed language.
B for Blockers: These are violations of the highest severity. They are considered real (not potential) bugs, so fix them as soon as possible.
C for Continuous Inspection: Continuous inspection requires a tool that automates data collection, reports on measures and highlights hot spots and defects, and yes, Sonar is currently the leading "all-in-one" continuous inspection engine.
D for Differential Views: Sonar's star feature lets you compare a snapshot analysis with a previous analysis. Fully customizable and dynamic, it makes continuous inspection a piece of cake.
E for Eclipse: If you're an Eclipse fan, did you know that you can have most of Sonar's features in your IDE without leaving it? If not, you should give the Sonar Eclipse plugin a try.
F for Filters: Filters are used to specify conditions and criteria on which projects are displayed. They can be used in dashboards or in widgets that require a filter.
G for Global Dashboards: Global dashboards are available at instance level and can be accessed through the menu on the left. One of those global dashboards is set as your home page. Any widget can be added to a global dashboard.
Thus, any kind of information from a project or from the Sonar instance can be displayed at will.
H for Historical Information: Knowing the quality level of your source code at a specific point in time is not enough; you need to be able to compare it with previous analyses. Sonar keeps historical information that can be viewed in many ways, such as the Timeline widget, the Historical table widget or metric tendencies.
I for Internationalization: Sonar (and some of the open source plugins) supports internationalization. It's available in 7 languages.
J for Jenkins: Although Jenkins is not a Sonar term, you'll read it in many posts and articles. A best practice for running Sonar analysis and achieving continuous inspection is to automate it with a CI server. The Sonar folks have created a very simple, still useful plugin that integrates Sonar with Jenkins.
K for Key: If you want to dive into Sonar's technical details or write your own plugin, don't forget that most core concepts are identified by a key (project key, metric key, coding rule key etc.).
L for Languages: Sonar was initially designed to analyze Java source code. Today, more than 20 languages are supported by free or commercial plugins.
M for Manual Measures: You can even define your own measures and set their values when automated calculation is not feasible (such as team size, project budget etc.).
N for Notifications: Let Sonar send you an email when changes occur in reviews assigned to you or created by you, or when new violations on your favorite projects are introduced during the first differential view period.
O for Open source: The Sonar core as well as most of the plugins are available on Codehaus or GitHub.
P for Plugins: More than 50 Sonar plugins are available covering a variety of topics: new languages, reporting, integration with other systems and many more. The best way to install or update them is through the Update Center.
Q for Quality Profiles: Sonar comes with default quality profiles.
For each language you can create your own profiles or edit the existing ones to adjust Sonar analysis to your demands. For each quality profile you activate or deactivate rules from the most popular tools, such as PMD, FindBugs and Checkstyle, and of course rules created directly by the Sonar guys.
R for Reviews: Code reviews are made easy with Sonar. You can assign reviews directly to Sonar users and associate them with a violation. Create action plans to group them and track their progress from analysis to analysis.
S for Sonar in Action book: The only Sonar book that covers all aspects of Sonar, for beginners to advanced users, even for developers who want to write their own plugins.
T for Testing: Sonar provides many test metrics such as line coverage, branch coverage and code coverage. It integrates with the most popular coverage tools (JaCoCo, Emma, Cobertura, Clover). It can also show metrics on integration tests, and by installing open source plugins you can integrate it with other test frameworks (JMeter, Thucydides, GreenPepper etc.).
U for User mailing list: Being an active member of this list, I can assure you that you can get answers for all your issues and problems.
V for Violations: A very popular term in Sonar. When a source code file (test files too) doesn't comply with a coding rule, Sonar creates a violation for it.
W for Widgets: Everything you see in a dashboard is a widget. Some of them are available only for global dashboards. You can add as many as you want to a dashboard and customize them to fit your needs. There are many Sonar core widgets, and plugins usually offer additional widgets.
X for X-ray: You can think of Sonar as x-ray glasses that let you actually see IN your code. Nothing is hidden anymore and everything is measured.
Y for Yesterday's comparison: One of the most common differential view usages is to compare the current analysis snapshot with the analysis triggered yesterday.
Very useful if you don’t want to add up your technical debt and handle it only at the end of each development cycle.  Z for Zero values : For many Sonar metrics such as code duplication, critical/blocker violations, package cycles your purpose should be to minimize or nullify them that means seeing a lot of Zero values in your dashboard.When I was trying to create this alphabet in some cases/letters I was really in big dilemma which word/term to cover. For instance the Sonar runner, which is not mentioned above, is the proposed and standard way to analyze any project with Sonar regardless the programming language. If you think that an important Sonar term is missing feel free to comment and I’ll adjust the text. Reference: Sonar’s Quality Alphabet from our JCG partner  Patroklos Papapetrou at the Only Software matters blog....

Service-Oriented UI with JSF

In large software development projects, a service-oriented architecture is very common because it provides functional interfaces that can be used by different teams or departments. The same principles should be applied when creating user interfaces. Consider a large company whose organizational chart includes, among others, a billing department and a customer management department. Suppose the billing department wants to develop a new dialog for creating invoices. The invoice screen references a customer in its upper part; clicking the ".." button right behind the short name text field opens a dialog that allows the user to select the customer. After pressing "Select", the customer data is shown in the invoice form. It's also possible to select a customer by simply entering a customer number or typing a short name into the text fields on the invoice screen. If a unique short name is entered, no selection dialog appears at all; instead, the customer data is displayed directly. Only an ambiguous short name results in opening the customer selection screen. The customer functionality will be provided by developers who belong to the customer management team. A typical approach is for the customer management team to provide some services while the billing department developers create the user interface and call these services. However, this approach couples the two departments more strongly than is actually necessary. The invoice only needs a unique ID for referencing the customer data; the developers creating the invoice dialog don't really want to know how the customer data is queried or which services are used in the background to obtain it. Instead, the customer management developers should provide the complete part of the UI that displays the customer ID and handles the selection of the customer. Using JSF 2, this is easy to achieve with composite components.
The logical interface between the customer management department and the billing department consists of three parts:Composite component (XHTML) Backing bean for the composite component Listener interface for handling the selection resultsProvider (customer management departement) Composite component: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core" xmlns:composite="http://java.sun.com/jsf/composite" xmlns:ice="http://www.icesoft.com/icefaces/component" xmlns:ace="http://www.icefaces.org/icefaces/components" xmlns:icecore="http://www.icefaces.org/icefaces/core"><ui:composition><composite:interface name="customerSelectionPanel" displayName="Customer Selection Panel" shortDescription="Select a customer using it's number or short name"> <composite:attribute name="model" type="org.fuin.examples.soui.view.CustomerSelectionBean" required="true" /> </composite:interface><composite:implementation> <ui:param name="model" value="#{cc.attrs.model}"/> <ice:form id="customerSelectionForm"> <icecore:singleSubmit submitOnBlur="true" /> <h:panelGroup id="table" layout="block"><table><tr> <td><h:outputLabel for="customerNumber" value="#{messages.customerNumber}" /></td> <td><h:inputText id="customerNumber" value="#{model.id}" required="false" /></td> <td>&nbsp;</td> <td><h:outputLabel for="customerShortName" value="#{messages.customerShortName}" /></td> <td><h:inputText id="customerShortName" value="#{model.shortName}" required="false" /></td> <td><h:commandButton action="#{model.select}" value="#{messages.select}" /></td> </tr><tr> <td><h:outputLabel for="customerName" value="#{messages.customerName}" /></td> <td colspan="5"><h:inputText id="customerName" value="#{model.name}" readonly="true" /></td> </tr></table></h:panelGroup> 
</ice:form></composite:implementation></ui:composition></html> Backing bean for the composite component: package org.fuin.examples.soui.view;import java.io.Serializable;import javax.enterprise.context.Dependent; import javax.inject.Inject; import javax.inject.Named;import org.apache.commons.lang.ObjectUtils; import org.fuin.examples.soui.model.Customer; import org.fuin.examples.soui.services.CustomerService; import org.fuin.examples.soui.services.CustomerShortNameNotUniqueException; import org.fuin.examples.soui.services.UnknownCustomerException;@Named @Dependent public class CustomerSelectionBean implements Serializable {private static final long serialVersionUID = 1L;private Long id;private String shortName;private String name;private CustomerSelectionListener listener;@Inject private CustomerService service;public CustomerSelectionBean() { super(); listener = new DefaultCustomerSelectionListener(); }public Long getId() { return id; }public void setId(final Long id) { if (ObjectUtils.equals(this.id, id)) { return; } if (id == null) { clear(); } else { clear(); this.id = id; try { final Customer customer = service.findById(this.id); changed(customer); } catch (final UnknownCustomerException ex) { FacesUtils.addErrorMessage(ex.getMessage()); } } }public String getShortName() { return shortName; }public void setShortName(final String shortNameX) { final String shortName = ("".equals(shortNameX)) ? 
null : shortNameX; if (ObjectUtils.equals(this.shortName, shortName)) { return; } if (shortName == null) { clear(); } else { if (this.id != null) { clear(); } this.shortName = shortName; try { final Customer customer = service .findByShortName(this.shortName); changed(customer); } catch (final CustomerShortNameNotUniqueException ex) { select(); } catch (final UnknownCustomerException ex) { FacesUtils.addErrorMessage(ex.getMessage()); } } }public String getName() { return name; }public CustomerSelectionListener getConnector() { return listener; }public void select() { // TODO Implement... }public void clear() { changed(null); }private void changed(final Customer customer) { if (customer == null) { this.id = null; this.shortName = null; this.name = null; listener.customerChanged(null, null); } else { this.id = customer.getId(); this.shortName = customer.getShortName(); this.name = customer.getName(); listener.customerChanged(this.id, this.name); } }public void setListener(final CustomerSelectionListener listener) { if (listener == null) { this.listener = new DefaultCustomerSelectionListener(); } else { this.listener = listener; } }public void setCustomerId(final Long id) throws UnknownCustomerException { clear(); if (id != null) { clear(); this.id = id; changed(service.findById(this.id)); } }private static final class DefaultCustomerSelectionListener implements CustomerSelectionListener {@Override public final void customerChanged(final Long id, final String name) { // Do nothing... }}} Listener interface for handling results: package org.fuin.examples.soui.view;/** * Gets informed if customer selection changed. */ public interface CustomerSelectionListener {/** * Customer selection changed. * * @param id New unique customer identifier - May be NULL. * @param name New customer name - May be NULL. 
*/ public void customerChanged(Long id, String name);}User (billing departement) The invoice bean simply uses the customer selection bean by injecting it, and connects to it using the listener interface: package org.fuin.examples.soui.view;import java.io.Serializable;import javax.annotation.PostConstruct; import javax.enterprise.context.SessionScoped; import javax.enterprise.inject.New; import javax.inject.Inject; import javax.inject.Named;@Named("invoiceBean") @SessionScoped public class InvoiceBean implements Serializable {private static final long serialVersionUID = 1L;@Inject @New private CustomerSelectionBean customerSelectionBean;private Long customerId;private String customerName;@PostConstruct public void init() { customerSelectionBean.setListener(new CustomerSelectionListener() { @Override public final void customerChanged(final Long id, final String name) { customerId = id; customerName = name; } }); }public CustomerSelectionBean getCustomerSelectionBean() { return customerSelectionBean; }public String getCustomerName() { return customerName; }} Finally, in the invoice XHTML, the composite component is used and linked to the injected backing bean: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core" xmlns:fuin="http://fuin.org/examples/soui/facelets" xmlns:customer="http://java.sun.com/jsf/composite/customer"><ui:composition template="/WEB-INF/templates/template.xhtml"> <ui:param name="title" value="#{messages.invoiceTitle}" /> <ui:define name="header"></ui:define> <ui:define name="content"> <customer:selection-panel model="#{invoiceBean.customerSelectionBean}" /> </ui:define><ui:define name="footer"></ui:define></ui:composition> </html>Summary In conclusion, parts of the user interface that reference data from other 
departments should be the responsibility of the department that delivers the data. Any changes in the providing code can then be easily made without any changes to the using code. Another important benefit of this method is the harmonization of the application’s user interface. Controls and panels that display the same data always look the same. Every department can also create a repository of its provided user interface components, making the process of designing a new dialog as easy as putting the right components together. Reference: Service-Oriented UI from our JCG partner Michael Schnell at the A Java Developer’s Life blog....

Mobile Development Job Trends – 2012-08

Today we have the last installment of the job trends posts, mobile development job trends. Everyone is talking about mobile development because it really is the future of interaction. With smart phones and tablets, app development is becoming hugely important even for the enterprise. The terms included in this list were iPhone, Android, WP7 or “Windows Phone“, BlackBerry, Symbian, WebOS and PhoneGap. There is some noise in the data, but not enough to significantly affect the trends. I am watching to see if Apache Cordova starts gaining adoption, but it barely registers on the trend graphs right now.First, let’s look at the basic job trends from Indeed:Based on this graph, you would think that Android demand has surpassed iPhone demand. Generally, this is not quite true because of the introduction of the iPad and iOS. The iPad does not change the demand too much, but adding iOS to the graph really changes the outlook:Now you can see that iOS demand is around 75% larger than Android, with iOS growth outpacing Android in 2012. As you can see, Blackberry is slowly declining as should be expected. WebOS and Symbian never really gained much attention, and the little that they did is obviously declining as well. Windows Phone is showing some growth, just not as quick as Android or iOS. You cannot see it very well in this graph, but PhoneGap is showing some solid growth during this year.Now we look at the short-term trends from SimplyHired:The SimplyHired trends are very difficult to get a good handle on. I tried adding the iPad and iOS to the query, which made the demand decrease. Obviously that makes no sense, so I reverted to only looking at the iPhone demand. Oddly, iPhone demand had a huge dip around May, but rebounded nicely. All other mobile development looks to be declining. Granted, the general trend for Android still looks positive, but I am not sure what is changing the trend during the summer. Blackberry has a slow decline throughout the year as expected. 
The others barely register in this graph and do not seem to be showing any real growth. I am not sure why SimplyHired has trends that look so much different than Indeed.Lastly, we look at the relative scaling from Indeed, which shows trends based on job growth:This is another graph where things are really strange depending on the terms you add. If I include iPad and iOS, there is very stable growth, but much lower than what is shown above. The growth for iPhone does not seem to match the general trend lines either, but we do know that iOS development is growing rapidly. So, let’s look at a graph that does not include the iPhone line.That looks a lot more readable. You can see that WebOS had a spike in the beginning of 2011, but that has dropped off significantly and been stable for this year. Windows phone shows a very nice growth line, as does Android. PhoneGap is probably the biggest surprise, and something that needs to be watched. While it does not have the gross demand of the other technologies, this type of growth shows it is gaining some serious attention. With the breadth of devices that people need to support, I expect more device agnostic frameworks to become popular. The other technologies, Blackberry and Symbian, barely register on this graph.The next year will be interesting in mobile development as we see more devices introduced and we see the future of some of the older technologies. If you are looking at mobile development, your main focus should be on iPhone development and Android development. If you like the idea of cross-platform development, PhoneGap development is gaining adoption and a solid option to review.Reference: Mobile Development Job Trends – August 2012 from our JCG partner Rob Diana at the Regular Geek blog....

JavaFX Tutorial – Basics

JavaFX seems to be gaining ground in the RIA space. With the right tools and development support, it will definitely make its mark as the next big technology “thing!”. I’m not writing a JavaFX review here, since there are plenty of technology critiques out there that have probably reviewed it extensively; instead, I will write a simple tutorial on how you can develop a JavaFX application on Mac OS X Lion. First, some prerequisites:JavaFX Runtime Environment Java Runtime Environment JavaFX SDK JavaFX Scene Builder JavaFX IDE – I chose NetBeans 7 as it already has support for JavaFX.These can all be downloaded from the Oracle website. Requirement: Create a simple application that accepts (you guessed it) person details (a simple registration), a custom web browser and some analytics. Technology: JavaFX and JPA Step 1: Create the Database and Tables A simple database and table; download the SQL file: here. Step 2: Create the User Interface and Specify the Controller Using the JavaFX Scene Builder, create the user interface. Step 3: Development Code the app! NetBeans has support for JPA – so I used it to interact with the database. Download the source: here. It’s basically a very simple application, but I think this sample will give you a brief introduction or a head start on how to actually develop an application on the platform. Not a bad alternative if you want to create desktop applications, for which, of course, .NET offers a much better solution. Though the main takeaway here is that JavaFX redefines Java desktop application development – it is flexible enough to support the best of both worlds (desktop and web) and, if that’s not enough, it also supports mobile phones. Reference: JavaFX Tutorial – Basics from our JCG partner Alvin Reyes at the Alvin “Jay” Reyes Blog blog....

Resource Bundle Tricks and Best Practices

Today is resource bundle day. This is the most well-known mechanism for internationalization (i18n) in Java. Working with it should be a breeze, but there are many little questions that come up while getting your hands dirty with it. If you are feeling the same, this post is for you. Basics The java.util.ResourceBundle defines a standardized way for accessing translations in Java. Bundles contain locale-specific resources. Resource bundles belong to families whose members share a common base name, but whose names also have additional components that identify their locales. Each resource bundle in a family contains the same items, but the items have been translated for the locale represented by that resource bundle. The items are key/value pairs, and the keys uniquely identify a locale-specific object in the bundle. The most basic example uses the following family: Messages.properties Messages_de.properties Messages_en.properties If you need to query a bundle in your application, you simply call ResourceBundle bundle = ResourceBundle.getBundle("Messages");and query the returned bundle: bundle.getString("welcome.message");If you are wondering which Locale is going to be used here, you are right to ask: the single-argument getBundle call implicitly uses Locale.getDefault() to resolve the language. That might not be what you want, so you should pass the locale explicitly: ResourceBundle bundle = ResourceBundle.getBundle("Messages", locale); You cannot set the locale after you have retrieved the bundle – every ResourceBundle has exactly one locale. Naming stuff   Some thoughts about naming. Name the bundle properties after their contents. You can go a more general way by simply naming them “Messages” and “Errors” etc., but it is also possible to have a bundle per subsystem or component. Whatever fits your needs. Maintaining the contents isn’t easy with lots of entries, so any kind of contextual split makes developers happy. The bundle properties files are equivalent to classes; name them accordingly. 
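Before moving on: the lookup and fallback behaviour described in the basics section above can be verified in isolation. The sketch below writes a tiny Messages bundle family to a temporary directory instead of the classpath (purely so the example is self-contained) and shows how an unsupported locale falls back to the base bundle:

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Locale;
import java.util.ResourceBundle;

public class FallbackDemo {
    public static void main(String[] args) throws Exception {
        // Stand-ins for Messages.properties / Messages_de.properties on the classpath.
        Path dir = Files.createTempDirectory("bundles");
        Files.write(dir.resolve("Messages.properties"),
                "welcome.message=Welcome!".getBytes("ISO-8859-1"));
        Files.write(dir.resolve("Messages_de.properties"),
                "welcome.message=Willkommen!".getBytes("ISO-8859-1"));
        ClassLoader loader = new URLClassLoader(new URL[] { dir.toUri().toURL() });

        // Pin the default locale: getBundle consults it before falling back
        // to the base bundle, which would make the demo non-deterministic.
        Locale.setDefault(Locale.ENGLISH);

        // The requested locale picks the matching family member ...
        ResourceBundle de = ResourceBundle.getBundle("Messages", Locale.GERMAN, loader);
        System.out.println(de.getString("welcome.message")); // Willkommen!

        // ... while a locale with no member (and no default-locale member)
        // falls back to the base Messages bundle.
        ResourceBundle fr = ResourceBundle.getBundle("Messages", Locale.FRENCH, loader);
        System.out.println(fr.getString("welcome.message")); // Welcome!
    }
}
```

Note the middle step: without pinning the default locale, the French lookup would first try the default locale's bundle before reaching the base one – exactly the Locale.getDefault() surprise mentioned above.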
And further on you should find a common system for naming your keys. Depending on the split you have chosen for the property files you might also introduce some kind of subsystem or component namespace with your keys. Page prefixes are also possible. Think about this wisely and play around with it. You are aiming to have as few duplicates as possible in your keys. Encapsulating   As you have seen, you use the string representation of the bundle names a lot. Given that those are actually file names (or rather class names), you are better off with a simple enum which encapsulates everything for you: public enum ResourceBundles { MESSAGES("Messages"), ERRORS("Errors"); private String bundleName;ResourceBundles(String bundleName) { this.bundleName = bundleName; }public String getBundleName() { return bundleName; }@Override public String toString() { return bundleName; } }Having this, you can simply write ResourceBundle bundle = ResourceBundle.getBundle(MESSAGES.getBundleName());Java Server Faces and ResourceBundles   To use resource bundles in your JSF-based application you simply have to define them in your faces-config.xml and use the shortcuts in your xhtml files. <resource-bundle> <base-name>Messages</base-name> <var>msgs</var> </resource-bundle><h:outputLabel value="#{msgs['welcome.general']}" />JSF takes care of the rest. What about parameter substitution? Think about a key-value pair like the following: welcome.name=Hi {0}! How are you?You can pass the parameter via the f:param tag: <h:outputFormat value="#{msgs['welcome.name']}"> <f:param value="Markus" /> </h:outputFormat>To change the language you have to set a specific locale for your current FacesContext instance. 
It’s best to do this via a value change listener: public void countryLocaleCodeChanged(ValueChangeEvent e) { String newLocaleValue = e.getNewValue().toString(); //loop country map to compare the locale code for (Map.Entry<String, Object> entry : countries.entrySet()) { if (entry.getValue().toString().equals(newLocaleValue)) { FacesContext.getCurrentInstance() .getViewRoot().setLocale((Locale) entry.getValue()); } } }Resource Bundles in EJBs   JSF obviously is very easily integrated. What about using those bundles in EJBs? It is basically the same. You have the same mechanisms in place to get hold of the bundle and use it. There is one thing that you should keep in mind. You probably don’t want to always use the default locale. So you have to find a way to pass the locale down from the UI. If you are thinking about @Injecting the MessageBundle via a @Produces annotation you have to think more than once. Especially if you are working with @Stateless EJBs. Those instances get pooled and you have to pass the Locale to any business method that needs to know about the current Locale. You typically would do this with a parameter object or some kind of user session profile. Don’t add the Locale to method signatures all over the place. Resource Bundles from the DB   In most of the cases I see, you need to pull the keys from a DB. Given the inner workings of the ResourceBundle (one “class” per locale) you end up having to implement the logic in your own ResourceBundle implementation. Most of the examples you find on the web do this by overriding the handleGetObject(String key) method. I don’t like this approach, especially since we have a far better way using the ResourceBundle.Control mechanism. Now you can override the newBundle() method and return your own ResourceBundle implementation. 
All you have to do is to set your own Control as a parent with your DatabaseResourceBundle: public DatabaseResourceBundle() { setParent(ResourceBundle.getBundle(BUNDLE_NAME, FacesContext.getCurrentInstance().getViewRoot().getLocale(), new DBControl())); }The DBControl returns MyResourceBundle which is a ListResourceBundle: protected class DBControl extends Control {@Override public ResourceBundle newBundle(String baseName, Locale locale, String format, ClassLoader loader, boolean reload) throws IllegalAccessException, InstantiationException, IOException { return new MyResources(locale); }/** * A simple ListResourceBundle */ protected class MyResources extends ListResourceBundle {private Locale locale;/** * ResourceBundle constructor with locale * * @param locale */ public MyResources(Locale locale) { this.locale = locale; }@Override protected Object[][] getContents() { TypedQuery<ResourceEntity> query = _entityManager.createNamedQuery("ResourceEntity.findForLocale", ResourceEntity.class); query.setParameter("locale", locale);List<ResourceEntity> resources = query.getResultList(); Object[][] all = new Object[resources.size()][2]; int i = 0; for (Iterator<ResourceEntity> it = resources.iterator(); it.hasNext();) { ResourceEntity resource = it.next(); all[i] = new Object[]{resource.getKey(), resource.getValue()}; values.put(resource.getKey(), resource.getValue()); i++; } return all; } } }As you can see, this is backed by an entitymanager and a simple ResourceEntity which has all the fields and NamedQueries necessary for building up the different bundles. @Id @GeneratedValue(strategy = GenerationType.AUTO) private Long id; @Column(name = "i18n_key") private String key; @Column(name = "i18n_value") private String value; @Column(name = "i18n_locale") private Locale locale;By putting the bundles in a private Map<String, String> values = new HashMap<String, String>(); you also have a good way of caching the results after the bundles have been build up for the first time. 
This still isn’t the best solution, as ResourceBundles have their own way of caching – but I might dig into this in more detail later. Until then, this bundle is cached forever (or at least until the next redeployment). Rewrite as Language Switch   Finally, you can also have some fancy add-ons here. If you already have the JSF language switch magic in place it is simple to add ocpsoft’s Rewrite to your application. This is a simple way to encode the language in URLs like this: http://yourhost.com/Bundle-Provider-Tricks/en/index.html All you have to do is to add Rewrite to the game by adding two simple dependencies: <dependency> <groupId>org.ocpsoft.rewrite</groupId> <artifactId>rewrite-servlet</artifactId> <version>1.1.0.Final</version> </dependency> <dependency> <groupId>org.ocpsoft.rewrite</groupId> <artifactId>rewrite-integration-faces</artifactId> <version>1.1.0.Final</version> </dependency>Rewrite needs you to add your own ConfigurationProvider, which is the central place to hold your rewriting rules. Implement the following: public class BundleTricksProvider extends HttpConfigurationProvider {@Override public Configuration getConfiguration(ServletContext context) { return ConfigurationBuilder.begin() // Locale Switch .addRule(Join.path("/{locale}/{page}.html").to("/{page}.xhtml") .where("page").matches(".*") .where("locale").bindsTo(PhaseBinding.to(El.property("#{languageSwitch.localeCode}")).after(PhaseId.RESTORE_VIEW))); }@Override public int priority() { return 10; } }Next is to add a file named “org.ocpsoft.rewrite.config.ConfigurationProvider” to your META-INF/services folder and put the fully qualified name of your ConfigurationProvider implementation there. One last thing to tweak is the logic in the LanguageSwitch bean. Rewrite isn’t able to trigger a ValueChangeEvent (as far as I know :)) so you have to add some magic to change the Locale when the setter is called. That’s it ... very easy! 
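Coming back to the caching caveat above: the JDK does offer a hook to stop a bundle from being cached forever. ResourceBundle.Control has a time-to-live mechanism, and in the article's setup the two overrides below could be added to the DBControl class. This is only a sketch – the five-minute TTL and the "always reload" answer are arbitrary choices; a smarter needsReload could compare a version column in the database:

```java
import java.util.Locale;
import java.util.ResourceBundle;

// Sketch: bound the lifetime of a cached bundle instead of caching it forever.
public class ExpiringControl extends ResourceBundle.Control {

    static final long TTL_MILLIS = 5 * 60 * 1000; // arbitrary: re-check every 5 minutes

    @Override
    public long getTimeToLive(String baseName, Locale locale) {
        // The default is TTL_NO_EXPIRATION_CONTROL, i.e. "never expire".
        return TTL_MILLIS;
    }

    @Override
    public boolean needsReload(String baseName, Locale locale, String format,
                               ClassLoader loader, ResourceBundle bundle, long loadTime) {
        // Called only after the TTL has expired; returning true forces
        // newBundle() to run again and rebuild the bundle from the DB.
        return true;
    }

    public static void main(String[] args) {
        // The control itself is plain logic, so it can be exercised directly:
        System.out.println(new ExpiringControl().getTimeToLive("Messages", Locale.GERMAN));
    }
}
```

With this in place, getBundle would call newBundle() again at most once per TTL window, picking up changed rows from the database without a redeployment.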
Reference: Resource Bundle Tricks and Best Practices from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

BTrace: hidden gem in Java developer toolbox

This post is about BTrace, which I consider a hidden gem for Java developers. BTrace is a safe, dynamic tracing tool for the Java platform. BTrace can be used to dynamically trace a running Java program (similar to DTrace for OpenSolaris applications and OS). In short, the tool allows you to inject tracing points without restarting or reconfiguring your Java application while it’s running. Though there are several ways to do that, the one I would like to discuss today is using the JVisualVM tool from the standard JDK bundle. What is very cool is that BTrace itself uses the Java language to define injection trace points. The approach will look very familiar if you have ever done aspect-oriented programming (AOP). So let’s get started with a problem: we have an application which uses one of the NoSQL databases (for example, MongoDB) and suddenly starts to experience a significant performance slowdown. Developers suspect that the application runs too many queries or updates but cannot say so with confidence. Here BTrace can help. First things first, let’s run JVisualVM and install the BTrace plugin: JVisualVM should be restarted in order for the plugin to appear. Now, while our application is up and running, let’s right-click on it in the JVisualVM applications tree: The following very intuitive BTrace editor (with a simple toolbar) should appear: This is the place where tracing instrumentation can be defined and dynamically injected into the running application. BTrace has a very rich model for defining what exactly should be traced: methods, constructors, method returns, errors, etc. It also supports aggregations out of the box, so it is quite easy to collect a bunch of metrics while the application is running. For our problem, we would like to see which MongoDB-related methods are being executed. 
As my application uses Spring Data MongoDB, I am interested in which methods of any implementation of the org.springframework.data.mongodb.core.MongoOperations interface are being called by the application and how long every call takes. So I have defined a very simple BTrace script: import com.sun.btrace.*; import com.sun.btrace.annotations.*; import static com.sun.btrace.BTraceUtils.*;@BTrace public class TracingScript { @TLS private static String method;@OnMethod( clazz = "+org.springframework.data.mongodb.core.MongoOperations", method = "/.*/" ) public static void onMongo( @ProbeClassName String className, @ProbeMethodName String probeMethod, AnyType[] args ) { method = strcat( strcat( className, "::" ), probeMethod ); } @OnMethod( clazz = "+org.springframework.data.mongodb.core.MongoOperations", method = "/.*/", location = @Location( Kind.RETURN ) ) public static void onMongoReturn( @Duration long duration ) { println( strcat( strcat( strcat( strcat( "Method ", method ), " executed in " ), str( duration / 1000000 ) ), "ms" ) ); } } Let me explain briefly what I am doing here. Basically, I would like to know when any method of any implementation of org.springframework.data.mongodb.core.MongoOperations is called (onMongo marks that) and the duration of the call (onMongoReturn marks that in turn). The thread-local variable method holds the fully qualified method name (with its class), while, thanks to a useful predefined BTrace annotation, the duration parameter holds the method execution time in nanoseconds (the script converts it to milliseconds for printing). Though it’s pure Java, BTrace allows only a small subset of Java classes to be used. That is not a problem, as the com.sun.btrace.BTraceUtils class provides a lot of useful methods (e.g., strcat) to fill the gaps. Running this script produces the following output: ** Compiling the BTrace script ... *** Compiled ** Instrumenting 1 classes ... 
Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 25ms Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 3ms Method org.springframework.data.mongodb.core.MongoTemplate::getDb executed in 22ms Method org.springframework.data.mongodb.core.MongoTemplate::prepareCollection executed in 2ms Method org.springframework.data.mongodb.core.MongoTemplate::prepareCollection executed in 19ms Method org.springframework.data.mongodb.core.MongoTemplate::access$100 executed in 2ms Method org.springframework.data.mongodb.core.MongoTemplate::access$100 executed in 1ms Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 3ms Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 2ms Method org.springframework.data.mongodb.core.MongoTemplate::getDb executed in 2ms Method org.springframework.data.mongodb.core.MongoTemplate::prepareCollection executed in 1ms Method org.springframework.data.mongodb.core.MongoTemplate::prepareCollection executed in 6ms Method org.springframework.data.mongodb.core.MongoTemplate::access$100 executed in 1ms Method org.springframework.data.mongodb.core.MongoTemplate::access$100 executed in 0ms Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 2ms Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 1ms Method org.springframework.data.mongodb.core.MongoTemplate::getDb executed in 2ms Method org.springframework.data.mongodb.core.MongoTemplate::prepareCollection executed in 1ms Method org.springframework.data.mongodb.core.MongoTemplate::prepareCollection executed in 6ms Method org.springframework.data.mongodb.core.MongoTemplate::access$100 executed in 1ms Method org.springframework.data.mongodb.core.MongoTemplate::access$100 executed in 0ms Method org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 2ms Method 
org.springframework.data.mongodb.core.MongoTemplate::maybeEmitEvent executed in 1ms ... As you can see, the output contains a bunch of synthetic methods (such as access$100) which could easily be eliminated by providing more precise method name templates (or maybe even by tracing the MongoDB driver instead). I have just started to discover BTrace, but as a developer I definitely see great value in this awesome tool. Reference: BTrace: hidden gem in Java developer toolbox from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog....

Contracting in Agile – You try it

One of the key principles in Agile development is “Customer collaboration over contract negotiation”. Unfortunately, that means that if you’re trying to follow Agile methods, you’re left without useful guidelines when it comes to negotiating contracts that fit the way that Agile teams work. Time-and-materials of course is a no-brainer, regardless of how the team works – do the work, track the time and other costs, and charge the customer as you go. But it’s especially challenging for people who have to work within contract structures such as fixed price / fixed scope, which is the way that many government contracts are awarded and the way that a number of large businesses still contract development work. The advice for Agile teams usually runs something like: it’s up to you to convince the purchaser to change the rules and accept a fuzzier, more balanced way of contracting, with more give-and-take. Something that fits the basic assumptions of Agile development: that costs (mostly the people on the team) and schedule can be fixed, but the scope needs to be flexible and worked out as the project goes on. But in many business situations the people paying for the work aren’t interested in changing how they think and plan – it’s their money and they want what they want when they want it. They are calling the shots. If you don’t comply with the terms of the bidding process, you don’t get the opportunity to work with the customer at all. And the people paying you (your management) also need to know how much it is going to cost and when it is going to be done and what the risks are, so they know if they can afford to take the project on. This puts the developers in a difficult (maybe impossible) situation. Money for Nothing and Change for Free   Jeff Sutherland, one of the creators of Scrum, proposes a contract structure called “Money for Nothing and your Change for Free”. 
The development team delivers software incrementally – if they are following Scrum properly, they should start with the work that is most important to the customer first, and deliver what the customer needs the most as early as possible. The customer can terminate the contract at any point (because they’ve already got what they really need), and pay some percentage of the remainder of the contract to compensate the developer for losing the revenue that they planned to get for completing the entire project. So obviously, the payment schedule for the contract can’t be weighted towards the end of the project (no large payments on “final acceptance”, since it may never happen). That’s the “money for nothing” part. “Change for free” means that the customer can’t add scope to the project, but can make changes as long as they substitute work still to be done in the backlog with work that is the same size or smaller. So new work can come up, the customer can change their mind, but the overall size of the project remains the same, which means that the team should still be able to deliver the project by the scheduled end date. To do this you have to define, understand and size all of the work that needs to be done upfront – which doesn’t fit well with the iterative, incremental way that Agile teams work. And it ignores the fact that changes still carry a price: the developers have to throw away the time that they spent upfront understanding what needed to be done well enough to estimate it, and the work that went into planning it, and they have to do more work to review and understand the change, estimate it and replan. Change is cheap in Agile development, but it’s not free. If the customer needs to make a handful of changes, the cost isn’t great. 
But it can become a real drag on delivery and add significant cost if a customer does this dozens or hundreds of times over a project.

Fixed Price and Fixed Everything Contracts

Fixed Price contracts, and especially what Alistair Cockburn calls Fixed-Everything contracts (fixed-price, fixed-scope and fixed-time too), are a nasty fact of business. Cockburn says that these contracts are usually created out of a lack of trust – the people paying for the system to be developed don’t trust the people building the software to do what they need, and try to push the risk onto the development team. Even if people started out trusting each other, these contracts often create an environment where trust breaks down – the customer doesn’t trust the developers, the developers hide things from the customer, and the people who are paying the developers don’t trust anybody. But it’s still a common way to contract work, because for many customers it is easier to plan around, and it makes sense for organizations that think of software development projects as engineering projects and want to treat them the same way as building a road or a bridge: this is what we told you we want, this is when we need it, that’s how much you said it was going to cost (including your risk and profit margin), we agree that’s what we’re willing to pay, now go build it and we’ll pay you when you get it done. Cockburn does describe a case where a team was successful in changing a fixed-everything contract into a time-and-materials contract over time, by working closely with the customer and proving that they could give the customer what they needed. After each delivery, the team would meet with the customer and discuss whether to continue with the contract as written or work on something that the customer really needed instead, renegotiating the contract as they went on.
I’ve seen this happen, but it’s rare, unless both companies do a lot of work together and the stakes of failure on a project are low. Ken Schwaber admits that fixed price contracting can’t be done with Scrum projects (read the book). Again, the solution is to convince the customer to accept and pay for work in an incremental, iterative way. Martin Fowler says that you can’t deliver a fixed price, fixed time and fixed scope contract without detailed, stable and accurate requirements – which he believes can’t be done. His solution is to fix the price and time, work with the customer to deliver what you can by the agreed end date, and hope that this will be enough. The most useful reference I’ve found on contracting in Agile projects is the freely-available Agile Contracts Primer from Practices for Scaling Lean and Agile Development, by Arbogast, Larman and Vodde. Their advice: avoid fixed-price, fixed-scope (FPFS) contracts, because they are a lose-lose for both customer and supplier. The customer is less likely to get what they need, because the supplier will at some point panic over delivery and be forced to cut quality; and if the supplier is able to deliver, the customer has to pay more than they should, because of the risk premium that the supplier has to add. Working this way leads to a lack of transparency and to game playing on both sides. But, if you have to do it:

- Obviously it’s going to require up-front planning and design work to understand and estimate everything that has to get done – which means you have to bend Agile methods a lot.
- You don’t have to allow changes – you can just work incrementally from the backlog that is defined upfront.
- Or you can restrict the customer to only changing their mind on the priority of work to be done (which gives them transparency and some control), or allow them to substitute a new requirement for an existing requirement of the same size (Sutherland’s “Change for Free”).

To succeed in this kind of contract you have to:

- Invest heavily in detailed, upfront requirements analysis, some design work, thorough acceptance test definition and estimation – done by the experienced people who are going to do the work.
- Not allow changes in requirements or scope – just replacement / substitution.
- Increase the margin of the contract price.
- Make sure that you understand the problem you are working on – the domain and the technology.
- Deliver important things early, and hope that the customer will be flexible with you towards the end if you still can’t deliver everything.

PMI-ACP on Agile Contracting?

For all of the projects that have been delivered using Agile methods, contracting still seems to be a work in progress. There are lots of good ideas and suggestions, but no solid answers. I’ve gone through the study guide materials for the PMI-ACP certification to see what PMI has to say about contracting in Agile projects. There is the same material about Sutherland’s “Money for Nothing and your Change for Free” and a few other options. It’s clear that the PMI didn’t take contracting in Agile projects on as a serious problem. This means that they missed another opportunity to help large organizations, and the people working with large organizations (the kind of people who are going to care about the PMI-ACP certification), understand how to work with Agile methods in real-life situations. Reference: Contracting in Agile – You try it from our JCG partner Jim Bird at the Building Real Software blog....

Java 7: HashMap vs ConcurrentHashMap

As you may have seen from my past performance-related articles and HashMap case studies, Java thread safety problems can bring down your Java EE application and the Java EE container fairly easily. One of the most common problems I have observed when troubleshooting Java EE performance issues is infinite looping triggered by the non-thread-safe HashMap get() and put() operations. This problem has been known for several years, but recent production problems have forced me to revisit it one more time. This article will revisit this classic thread safety problem and demonstrate, using a simple Java program, the risk associated with improper usage of the plain old java.util.HashMap data structure in a concurrent-threads context. This proof-of-concept exercise will attempt to achieve the following 3 goals:

- Revisit and compare the Java program performance level between the non-thread-safe and thread-safe Map data structure implementations (HashMap, Hashtable, synchronized HashMap, ConcurrentHashMap)
- Replicate and demonstrate the HashMap infinite looping problem using a simple Java program that everybody can compile, run and understand
- Review the usage of the above Map data structures in a real-life, modern Java EE container implementation such as JBoss AS7

For more detail on the ConcurrentHashMap implementation strategy, I highly recommend Brian Goetz’s great article on this subject.
Tools and server specifications

As a starting point, find below the different tools and software used for the exercise:

- Sun/Oracle JDK & JRE 1.7 64-bit
- Eclipse Java EE IDE
- Windows Process Explorer (CPU per Java thread correlation)
- JVM thread dump (stuck thread analysis and CPU per thread correlation)

The following local computer was used for the problem replication process and performance measurements:

- Intel(R) Core(TM) i5-2520M CPU @ 2.50GHz (2 CPU cores, 4 logical cores)
- 8 GB RAM
- Windows 7 64-bit

* Results and performance of the Java program may vary depending on your workstation or server specifications.

Java program

In order to help us achieve the above goals, a simple Java program was created as per below:

- The main Java program is HashMapInfiniteLoopSimulator.java
- A worker thread class, WorkerThread.java, was also created

The program performs the following:

- Initialize different static Map data structures with an initial size of 2
- Assign the chosen Map to the worker threads (you can choose between 4 Map implementations)
- Create a certain number of worker threads (as per the header configuration); 3 worker threads were created for this proof of concept (NB_THREADS = 3)
- Each of these worker threads has the same task: look up and insert a new element in the assigned Map data structure, using a random Integer element between 1 and 1,000,000.
- Each worker thread performs this task for a total of 500K iterations
- The overall program performs 50 iterations, in order to allow enough ramp-up time for the HotSpot JVM
- The concurrent-threads context is achieved using the JDK ExecutorService

As you can see, the Java program task is fairly simple, but complex enough to generate the following critical criteria:

- Concurrency against a shared / static Map data structure
- A mix of get() and put() operations, in order to attempt to trigger internal locks and / or internal corruption (for the non-thread-safe implementation)
- A small Map initial size of 2, forcing the internal HashMap to trigger an internal rehash/resize

Finally, the following parameters can be modified at your convenience:

## Number of worker threads
private static final int NB_THREADS = 3;

## Number of Java program iterations
private static final int NB_TEST_ITERATIONS = 50;

## Map data structure assignment. You can choose between 4 structures

// Plain old HashMap (since JDK 1.2)
nonThreadSafeMap = new HashMap<String, Integer>(2);

// Plain old Hashtable (since JDK 1.0)
threadSafeMap1 = new Hashtable<String, Integer>(2);

// Fully synchronized HashMap
threadSafeMap2 = new HashMap<String, Integer>(2);
threadSafeMap2 = Collections.synchronizedMap(threadSafeMap2);

// ConcurrentHashMap (since JDK 1.5)
threadSafeMap3 = new ConcurrentHashMap<String, Integer>(2);

/*** Assign map at your convenience ****/
assignedMapForTest = threadSafeMap3;

Now find below the source code of our sample program.
#### HashMapInfiniteLoopSimulator.java

package org.ph.javaee.training4;

import java.util.Collections;
import java.util.Map;
import java.util.HashMap;
import java.util.Hashtable;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * HashMapInfiniteLoopSimulator
 * @author Pierre-Hugues Charbonneau
 */
public class HashMapInfiniteLoopSimulator {

    private static final int NB_THREADS = 3;
    private static final int NB_TEST_ITERATIONS = 50;

    private static Map<String, Integer> assignedMapForTest = null;
    private static Map<String, Integer> nonThreadSafeMap = null;
    private static Map<String, Integer> threadSafeMap1 = null;
    private static Map<String, Integer> threadSafeMap2 = null;
    private static Map<String, Integer> threadSafeMap3 = null;

    /**
     * Main program
     * @param args
     */
    public static void main(String[] args) {

        System.out.println("Infinite Looping HashMap Simulator");
        System.out.println("Author: Pierre-Hugues Charbonneau");
        System.out.println("http://javaeesupportpatterns.blogspot.com");

        for (int i = 0; i < NB_TEST_ITERATIONS; i++) {

            // Plain old HashMap (since JDK 1.2)
            nonThreadSafeMap = new HashMap<String, Integer>(2);

            // Plain old Hashtable (since JDK 1.0)
            threadSafeMap1 = new Hashtable<String, Integer>(2);

            // Fully synchronized HashMap
            threadSafeMap2 = new HashMap<String, Integer>(2);
            threadSafeMap2 = Collections.synchronizedMap(threadSafeMap2);

            // ConcurrentHashMap (since JDK 1.5)
            threadSafeMap3 = new ConcurrentHashMap<String, Integer>(2);

            /*** Assign map at your convenience ****/
            assignedMapForTest = threadSafeMap3;

            long timeBefore = System.currentTimeMillis();
            long timeAfter = 0;
            Float totalProcessingTime = null;

            ExecutorService executor = Executors.newFixedThreadPool(NB_THREADS);

            for (int j = 0; j < NB_THREADS; j++) {
                /** Assign the Map at your convenience **/
                Runnable worker = new WorkerThread(assignedMapForTest);
                executor.execute(worker);
            }

            // This will make the executor accept no new threads
            // and finish all existing threads in the queue
            executor.shutdown();

            // Wait until all threads are finished
            while (!executor.isTerminated()) {
            }

            timeAfter = System.currentTimeMillis();
            totalProcessingTime = new Float((float) (timeAfter - timeBefore) / (float) 1000);

            System.out.println("All threads completed in " + totalProcessingTime + " seconds");
        }
    }
}

#### WorkerThread.java

package org.ph.javaee.training4;

import java.util.Map;

/**
 * WorkerThread
 * @author Pierre-Hugues Charbonneau
 */
public class WorkerThread implements Runnable {

    private Map<String, Integer> map = null;

    public WorkerThread(Map<String, Integer> assignedMap) {
        this.map = assignedMap;
    }

    @Override
    public void run() {

        for (int i = 0; i < 500000; i++) {

            // Generate 2 integers between 1-1000000 inclusive
            Integer newInteger1 = (int) Math.ceil(Math.random() * 1000000);
            Integer newInteger2 = (int) Math.ceil(Math.random() * 1000000);

            // 1. Attempt to retrieve a random Integer element
            Integer retrievedInteger = map.get(String.valueOf(newInteger1));

            // 2. Attempt to insert a random Integer element
            map.put(String.valueOf(newInteger2), newInteger2);
        }
    }
}

Performance comparison between thread-safe Map implementations

The first goal is to compare the performance level of our program when using different thread-safe Map implementations:

- Plain old Hashtable (since JDK 1.0)
- Fully synchronized HashMap (via Collections.synchronizedMap())
- ConcurrentHashMap (since JDK 1.5)

Find below the graphical results of the execution of the Java program for each iteration, along with a sample of the program console output.

# Output when using ConcurrentHashMap
Infinite Looping HashMap Simulator
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com
All threads completed in 0.984 seconds
All threads completed in 0.908 seconds
All threads completed in 0.706 seconds
All threads completed in 1.068 seconds
All threads completed in 0.621 seconds
All threads completed in 0.594 seconds
All threads completed in 0.569 seconds
All threads completed in 0.599 seconds
………………

As you can see, the ConcurrentHashMap is the clear winner here, taking on average only half a second (after an initial ramp-up) for all 3 worker threads to concurrently read and insert data within a 500K looping statement against the assigned shared Map. Please note that no problem was found with the program execution, e.g. no hang situation. The performance boost is definitely due to ConcurrentHashMap improvements such as the non-blocking get() operation. The 2 other Map implementations performed at a fairly similar level, with a small advantage for the synchronized HashMap.

HashMap infinite looping problem replication

The next objective is to replicate the HashMap infinite looping problem observed so often in Java EE production environments.
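Beyond raw speed, ConcurrentHashMap also offers atomic compound operations that a plain or synchronized HashMap cannot match without external locking. As a small illustrative sketch (not part of the original benchmark program), the check-then-act sequence below is racy on a synchronized map but atomic with putIfAbsent(), available since JDK 1.5:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class PutIfAbsentDemo {

    public static void main(String[] args) {
        ConcurrentMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();

        // Racy pattern on a plain or synchronized Map: another thread can
        // insert the key between the containsKey() check and the put():
        //   if (!map.containsKey("counter")) { map.put("counter", 1); }

        // Atomic equivalent: only the first caller's value is stored.
        Integer previous = map.putIfAbsent("counter", 1);
        System.out.println(previous);              // null (no prior mapping)

        previous = map.putIfAbsent("counter", 99); // no-op, key already present
        System.out.println(previous);              // 1

        System.out.println(map.get("counter"));    // 1
    }
}
```

This matters in exactly the read-then-write workload the benchmark simulates: with ConcurrentHashMap the whole compound step stays correct under concurrency, not just the individual get() and put() calls.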
In order to do that, you simply need to assign the non-thread-safe HashMap implementation, as per the code snippet below:

/*** Assign map at your convenience ****/
assignedMapForTest = nonThreadSafeMap;

Running the program as is, using the non-thread-safe HashMap, should lead to:

- No output other than the program header
- A significant CPU increase observed on the system
- At some point the Java program will hang and you will be forced to kill the Java process

What happened? In order to understand this situation and confirm the problem, we will perform a CPU-per-thread analysis on the Windows OS using Process Explorer and a JVM thread dump.

1 – Run the program again, then quickly capture the thread-per-CPU data from Process Explorer as per below. Under explorer.exe you will need to right-click on javaw.exe and select Properties; the Threads tab will be displayed. We can see overall 4 threads using almost all the CPU of our system.

2 – Now you have to quickly capture a JVM thread dump using the JDK 1.7 jstack utility. For our example, we can see our 3 worker threads, which seem busy/stuck performing get() and put() operations.
..\jdk1.7.0\bin>jstack 272
2012-08-29 14:07:26
Full thread dump Java HotSpot(TM) 64-Bit Server VM (21.0-b17 mixed mode):

"pool-1-thread-3" prio=6 tid=0x0000000006a3c000 nid=0x18a0 runnable [0x0000000007ebe000]
   java.lang.Thread.State: RUNNABLE
        at java.util.HashMap.put(Unknown Source)
        at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:32)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)

"pool-1-thread-2" prio=6 tid=0x0000000006a3b800 nid=0x6d4 runnable [0x000000000805f000]
   java.lang.Thread.State: RUNNABLE
        at java.util.HashMap.get(Unknown Source)
        at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:29)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)

"pool-1-thread-1" prio=6 tid=0x0000000006a3a800 nid=0x2bc runnable [0x0000000007d9e000]
   java.lang.Thread.State: RUNNABLE
        at java.util.HashMap.put(Unknown Source)
        at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:32)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)
..............

It is now time to convert the Process Explorer thread IDs from DECIMAL format to HEXA format as per below.
The HEXA value allows us to map and identify each thread as per below:

## TID: 1748 (nid=0x6D4)
Thread name: pool-1-thread-2
CPU @25.71%
Task: Worker thread executing a HashMap.get() operation

at java.util.HashMap.get(Unknown Source)
at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:29)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

## TID: 700 (nid=0x2BC)
Thread name: pool-1-thread-1
CPU @23.55%
Task: Worker thread executing a HashMap.put() operation

at java.util.HashMap.put(Unknown Source)
at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

## TID: 6304 (nid=0x18A0)
Thread name: pool-1-thread-3
CPU @12.02%
Task: Worker thread executing a HashMap.put() operation

at java.util.HashMap.put(Unknown Source)
at org.ph.javaee.training4.WorkerThread.run(WorkerThread.java:32)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)

## TID: 5944 (nid=0x1738)
Thread name: main
CPU @20.88%
Task: Main Java program execution

"main" prio=6 tid=0x0000000001e2b000 nid=0x1738 runnable [0x00000000029df000]
   java.lang.Thread.State: RUNNABLE
        at org.ph.javaee.training4.HashMapInfiniteLoopSimulator.main(HashMapInfiniteLoopSimulator.java:75)

As you can see, the above correlation and analysis is quite revealing. Our main Java program is in a hang state because our 3 worker threads are using a lot of CPU and not going anywhere. They may appear ‘stuck’ performing HashMap get() & put(), but in fact they are all caught in an infinite loop condition. This is exactly what we wanted to replicate.
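The decimal-to-hex conversion above can be done with any calculator, or with one Java call per thread ID (a quick helper, not part of the original article's program):

```java
public class TidToNid {

    public static void main(String[] args) {
        // Process Explorer shows thread IDs in decimal; thread-dump nid values are hex.
        System.out.println(Integer.toHexString(1748)); // 6d4  -> nid=0x6d4  (pool-1-thread-2)
        System.out.println(Integer.toHexString(700));  // 2bc  -> nid=0x2bc  (pool-1-thread-1)
        System.out.println(Integer.toHexString(6304)); // 18a0 -> nid=0x18a0 (pool-1-thread-3)
        System.out.println(Integer.toHexString(5944)); // 1738 -> nid=0x1738 (main)
    }
}
```

Matching the converted values against the nid fields in the jstack output is what lets us attribute each slice of CPU usage to a specific Java thread.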
HashMap infinite looping deep dive

Now let’s push the analysis one step further to better understand this looping condition. For this purpose, we added tracing code within the JDK 1.7 HashMap Java class itself in order to understand what is happening. Similar logging was added to the put() operation, along with a trace indicating that the internal & automatic rehash/resize got triggered. The tracing added to the get() and put() operations allows us to determine whether the for() loop is dealing with a circular dependency, which would explain the infinite looping condition.

#### HashMap.java get() operation
public V get(Object key) {
    if (key == null)
        return getForNullKey();
    int hash = hash(key.hashCode());

    /*** P-H add-on - iteration counter ***/
    int iterations = 1;

    for (Entry<K,V> e = table[indexFor(hash, table.length)]; e != null; e = e.next) {

        /*** Circular dependency check ***/
        Entry<K,V> currentEntry = e;
        Entry<K,V> nextEntry = e.next;
        Entry<K,V> nextNextEntry = e.next != null ? e.next.next : null;

        K currentKey = currentEntry.key;
        K nextNextKey = nextNextEntry != null ? (nextNextEntry.key != null ? nextNextEntry.key : null) : null;

        System.out.println("HashMap.get() #Iterations : " + iterations++);

        if (currentKey != null && nextNextKey != null) {
            if (currentKey == nextNextKey || currentKey.equals(nextNextKey))
                System.out.println(" ** Circular Dependency detected! [" + currentEntry + "][" + nextEntry + "][" + nextNextEntry + "]");
        }
        /***** END ***/

        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k)))
            return e.value;
    }
    return null;
}

HashMap.get() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.resize() in progress...
HashMap.put() #Iterations : 1
HashMap.put() #Iterations : 2
HashMap.resize() in progress...
HashMap.resize() in progress...
HashMap.put() #Iterations : 1
HashMap.put() #Iterations : 2
HashMap.put() #Iterations : 1
HashMap.get() #Iterations : 1
HashMap.get() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.get() #Iterations : 1
HashMap.get() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.get() #Iterations : 1
HashMap.put() #Iterations : 1
 ** Circular Dependency detected! [362565=362565][333326=333326][362565=362565]
HashMap.put() #Iterations : 2
 ** Circular Dependency detected! [333326=333326][362565=362565][333326=333326]
HashMap.put() #Iterations : 1
HashMap.put() #Iterations : 1
HashMap.get() #Iterations : 1
HashMap.put() #Iterations : 1
.............................
HashMap.put() #Iterations : 56823

Again, the added logging was quite revealing. We can see that following a few internal HashMap.resize() calls, the internal structure became corrupted, creating circular dependency conditions and triggering this infinite looping condition (the #Iterations count increasing and increasing…) with no exit condition. It also shows that the resize() / rehash operation is the most at risk of internal corruption, especially when using the default HashMap size of 16. This means that the initial size of the HashMap appears to be a big factor in the risk & problem replication. Finally, it is interesting to note that we were able to successfully run the test case with the non-thread-safe HashMap by assigning an initial size of 1,000,000, preventing any resize at all. Find below the merged graph results.

The HashMap was our top performer, but only when preventing an internal resize. Again, this is definitely not a solution to the thread safety risk, but just a way to demonstrate that the resize operation is the most at risk, given the amount of internal HashMap manipulation performed at that time. The ConcurrentHashMap, by far, is our overall winner, providing both fast performance and thread safety in this test case.
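The “no resize” workaround described above can be reproduced by sizing the map upfront. As a small sketch (relying only on HashMap's documented default load factor of 0.75): a map created with an initial capacity of at least expectedEntries / loadFactor will never need to rehash.

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMap {

    // Smallest initial capacity guaranteeing that 'expected' entries
    // never trigger a resize with the default 0.75 load factor.
    static int capacityFor(int expected) {
        return (int) Math.ceil(expected / 0.75);
    }

    public static void main(String[] args) {
        int expected = 1_000_000;
        // HashMap rounds the requested capacity up to the next power of two
        // internally, so the actual table is at least this large.
        Map<String, Integer> map = new HashMap<String, Integer>(capacityFor(expected));
        System.out.println(capacityFor(expected)); // 1333334
    }
}
```

To be clear, and as the article stresses: this only sidesteps the rehash. Concurrent get()/put() access to a plain HashMap remains unsafe regardless of its initial size.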
JBoss AS7 Map data structures usage

We will now conclude this article by looking at the different Map implementations within a modern Java EE container implementation such as JBoss AS 7.1.2. You can obtain the latest source code from the github master branch. Find below the report:

- Total JBoss AS 7.1.2 Java files (August 28, 2012 snapshot): 7302
- Total Java classes using java.util.Hashtable: 72
- Total Java classes using java.util.HashMap: 512
- Total Java classes using synchronized HashMap: 18
- Total Java classes using ConcurrentHashMap: 46

Hashtable references were found mainly within the test suite components and in naming- and JNDI-related implementations. This low usage is no surprise here. References to java.util.HashMap were found in 512 Java classes – again no surprise, given how common this implementation has been over the last several years. However, it is important to mention that a good ratio of these were either local variables (not shared across threads), synchronized HashMaps or protected by manual synchronization, so “technically” thread safe and not exposed to the above infinite looping condition (pending/hidden bugs are still a reality, given the complexity of Java concurrency programming… this case study involving Oracle Service Bus 11g is a perfect example). A low usage of synchronized HashMap was found, with only 18 Java classes, in packages such as JMS, EJB3, RMI and clustering. Finally, find below a breakdown of the ConcurrentHashMap usage, which was our main interest here. As you will see, this Map implementation is used by critical JBoss component layers such as the Web container, the EJB3 implementation etc.

## JBoss Single Sign-On
Used to manage internal SSO IDs involving concurrent thread access.
Total: 1

## JBoss Java EE & Web Container
Not surprising here, since a lot of internal Map data structures are used to manage the HTTP session objects, deployment registry, clustering & replication, statistics etc., with heavy concurrent thread access.
Total: 11

## JBoss JNDI & Security Layer
Used by highly concurrent structures such as internal JNDI security management.
Total: 4

## JBoss domain & managed server management, rollout plans…
Total: 7

## JBoss EJB3
Used by data structures such as the file timer persistence store, application exceptions, the Entity Bean cache, serialization, passivation…
Total: 8

## JBoss kernel, Thread Pools & protocol management
Used by highly concurrent Map data structures involved in handling and dispatching/processing incoming requests such as HTTP.
Total: 3

## JBoss connectors such as JDBC/XA DataSources…
Total: 2

## Weld (reference implementation of JSR-299: Contexts and Dependency Injection for the Java EE platform)
Used in the context of ClassLoaders and concurrent static Map data structures involving concurrent thread access.
Total: 3

## JBoss Test Suite
Used in some integration test cases such as an internal data store, ClassLoader testing etc.
Total: 3

Final words

I hope this article has helped you revisit this classic problem and understand one of the common risks associated with improper usage of the non-thread-safe HashMap implementation. My main recommendation: be careful when using a HashMap in a concurrent-threads context. Unless you are a Java concurrency expert, I recommend that you use ConcurrentHashMap instead, which offers a very good balance between performance and thread safety. As usual, extra due diligence such as cycles of load & performance testing is always recommended; this will allow you to detect thread safety and/or performance problems before you promote the solution to your client’s production environment. Reference: Java 7: HashMap vs ConcurrentHashMap from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.