


EhCache replication: RMI vs JGroups

Recently, I was working on a product which required replicated caching. The caching provider was already decided – EhCache – and what remained was the question of transport. Which one is the best option? By "best option" here I simply mean the one with better performance. The performance measurement was done between only two of the available transports – JGroups and RMI; others were not considered, sorry. Replication was tested between two nodes. The main goal was to understand how increasing the message data size and the total number of messages affects performance. Another goal was to find the point where replication performance gets really bad. The latter is not that easy, because the test used a limited amount of memory, and non-linear performance deterioration could be caused by exhausting the free heap space. Below are the memory size and software versions used to run the test:

- All tests used 6 GB of heap for all executions.
- Tests were executed on EhCache v2.3.2.
- The JVM is Sun Java 1.6.0_21.

The test itself is very simple. One node puts some number of elements of some size into the cache, and the other node reads all these elements. The test output is the time required to read all elements. The timer starts just after the first element is read. The first test creates 10000 elements on each iteration. The variable is the message size, which doubles on each iteration. On the first iteration the size is 1280 bytes, on the last one – 327680 bytes (320 KB). It means that the final iteration, with 10000 elements of 320 KB each, transfers approximately 3 GB of data. The tests have shown that EhCache copes very well with an increasing element size, and the slowdown was approximately proportional to the amount of transferred data, as can be seen on the graph. Here the y-axis is the time required for the transfer in milliseconds and the x-axis is the size of the element. No need for much comment – RMI definitely looks better than JGroups. 
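The shape of the harness is easy to sketch. Below is an illustrative reimplementation of the measurement loop described above, with a plain in-memory map standing in for the replicated cache (the real test ran against two EhCache nodes, and this is not the article's actual source); the `totalBytes` arithmetic also confirms the ~3 GB figure for the last iteration.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Rough sketch of the measurement described above. A plain map stands in
 * for the replicated cache; the full run (10000 elements of up to 320 KB)
 * needs several GB of heap, so main() uses smaller numbers.
 */
public class ReplicationTimingSketch {

    // Writer side: put 'count' payloads of the given size into the cache.
    static void write(Map<Integer, byte[]> cache, int count, int elementSize) {
        for (int i = 0; i < count; i++) {
            cache.put(i, new byte[elementSize]);
        }
    }

    // Reader side: time how long reading every element takes. As in the
    // article, the clock starts only after the first element is read.
    static long readAllMillis(Map<Integer, byte[]> cache, int count) {
        cache.get(0);                         // first read, then start timing
        long start = System.currentTimeMillis();
        for (int i = 1; i < count; i++) {
            cache.get(i);
        }
        return System.currentTimeMillis() - start;
    }

    // Total payload transferred in one iteration.
    static long totalBytes(int count, int elementSize) {
        return (long) count * elementSize;
    }

    public static void main(String[] args) {
        // Element size doubles each iteration, as in the first test.
        for (int size = 1280; size <= 10_240; size *= 2) {
            Map<Integer, byte[]> cache = new HashMap<>();
            write(cache, 1_000, size);
            System.out.printf("size=%d bytes: read in %d ms%n",
                    size, readAllMillis(cache, 1_000));
        }
    }
}
```

With the article's full numbers, 10000 elements of 327680 bytes each come to 3,276,800,000 bytes, which is the "approximately 3 GB" quoted above.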
In the second test, the variable was the number of elements, while the element size stayed constant at 1280 bytes. As in the previous test, the number of messages was doubled on each iteration, and the amount of data transferred in the final iteration was the same 3 GB. The graph below shows how it went. As in the previous graph, the y-axis is the time required to transfer all elements in one iteration; the x-axis is the number of elements. Again, it can be seen that RMI is the leader. I believe that JGroups hit the heap limit on the last iteration, which is why it performed so badly. It means that JGroups has more memory overhead per element. For those who do not trust my results (I wouldn't ;) ) and want to try it themselves, here are the sources and configuration. And, as a conclusion… Well, RMI and JGroups are both acceptably fast. JGroups definitely consumes more memory, which means one can hit a problem using it with big amounts of data. RMI, on the other hand, uses TCP instead of UDP, which, with a big number of nodes, may cause higher network load. The latter, unfortunately, is not covered by the test, and the real impact is not clear. Reference: EhCache replication: RMI vs JGroups from our JCG partner Stanislav Kobylansky at Stas's blog....
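For reference, RMI replication in EhCache is configured in ehcache.xml roughly as follows. The factory class names are EhCache's own; the addresses, ports, and cache name are illustrative placeholders, not the configuration used in the test above.

```xml
<!-- Peer discovery: find the other node(s) via multicast (illustrative values) -->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=automatic, multicastGroupAddress=230.0.0.1,
                multicastGroupPort=4446, timeToLive=1"/>

<!-- Listener: the port on which this node receives replicated updates -->
<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="port=40001, socketTimeoutMillis=2000"/>

<cache name="testCache" maxElementsInMemory="10000" eternal="true">
    <!-- Replicate puts/updates to peers; asynchronously for throughput -->
    <cacheEventListenerFactory
        class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
        properties="replicateAsynchronously=true"/>
</cache>
```

For JGroups, the equivalent factories are net.sf.ehcache.distribution.jgroups.JGroupsCacheManagerPeerProviderFactory and JGroupsCacheReplicatorFactory.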

Java EE 6 Testing Part I – EJB 3.1 Embeddable API

One of the most common requests we hear from Enterprise JavaBeans developers is for improved unit/integration testing support. The EJB 3.1 specification introduced the EJB 3.1 Embeddable API for executing EJB components within a Java SE environment. Unlike traditional Java EE server-based execution, embeddable usage allows client code and its corresponding enterprise beans to run within the same JVM and class loader. This provides better support for testing, offline processing (e.g. batch), and the use of the EJB programming model in desktop applications. […] The embeddable EJB container provides a managed environment with support for the same basic services that exist within a Java EE runtime: injection, access to a component environment, container-managed transactions, etc. In general, enterprise bean components are unaware of the kind of managed environment in which they are running. This allows maximum reusability of enterprise components across a wide range of testing and deployment scenarios without significant rework.

Let's look at an example. Start by creating a Maven project and add the embeddable GlassFish dependency. I chose to use the TestNG testing framework, but JUnit should work just as well.

<dependencies>
    <dependency>
        <groupId>org.glassfish.extras</groupId>
        <artifactId>glassfish-embedded-all</artifactId>
        <version>3.1.2</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>6.4</version>
        <scope>test</scope>
    </dependency>
    <!-- The javaee-api is stripped of any code and is just used to compile
         your application. The provided scope in Maven means that it is used
         for compiling, but is also available when testing. For this reason,
         the javaee-api needs to be below the embedded GlassFish dependency.
         The javaee-api can actually be omitted when the embedded GlassFish
         dependency is included, but to keep your project Java EE 6 rather
         than GlassFish-specific, the specification is important. -->
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>6.0</version>
        <scope>provided</scope>
    </dependency>
</dependencies>

Here's a simple stateless session bean:

@Stateless
public class HelloWorld {
    public String hello(String message) {
        return "Hello " + message;
    }
}

It exposes business methods through a no-interface view. There is no special API it must use to be capable of embeddable execution. Here is some test code to execute the bean in an embeddable container:

public class HelloWorldTest {

    private static EJBContainer ejbContainer;
    private static Context ctx;

    @BeforeClass
    public static void setUpClass() throws Exception {
        // Instantiate an embeddable EJB container and search the
        // JVM class path for eligible EJB modules or directories
        ejbContainer = EJBContainer.createEJBContainer();

        // Get a naming context for session bean lookups
        ctx = ejbContainer.getContext();
    }

    @AfterClass
    public static void tearDownClass() throws Exception {
        // Shutdown the embeddable container
        ejbContainer.close();
    }

    @Test
    public void hello() throws NamingException {
        // Retrieve a reference to the session bean using a portable
        // global JNDI name
        HelloWorld helloWorld = (HelloWorld) ctx.lookup("java:global/classes/HelloWorld");

        // Do your tests
        assertNotNull(helloWorld);
        String expected = "World";
        String hello = helloWorld.hello(expected);
        assertNotNull(hello);
        assertTrue(hello.endsWith(expected));
    }
}

The source code is available on GitHub under the folder ejb31-embeddable. For a step-by-step tutorial with a JPA example, take a look at Using the Embedded EJB Container to Test Enterprise Applications from the NetBeans docs. While this new API is a step forward, I still have an issue with this approach: you are bringing the container to the test. This requires a specialized container which is different from your production environment. In Java EE 6 Testing Part II, I will introduce Arquillian and ShrinkWrap. 
Arquillian, a powerful container-oriented testing framework layered atop TestNG and JUnit, gives you the ability to create the production environment on the container of your choice and just execute tests in that environment (using the datasources, JMS destinations, and a whole lot of other configurations you expect to see in a production environment). Instead of bringing your runtime to the test, Arquillian brings your test to the runtime. Reference: Java EE 6 Testing Part I – EJB 3.1 Embeddable API from our JCG partner Samuel Santos at the Samaxes blog....

Java EE 6 Testing Part II – Introduction to Arquillian and ShrinkWrap

In Java EE 6 Testing Part I I briefly introduced the EJB 3.1 Embeddable API, using the GlassFish embedded container to demonstrate how to start the container, look up a bean in the project classpath, and run a very simple integration test. This post focuses on Arquillian and ShrinkWrap and why they are awesome tools for integration testing of enterprise Java applications. The source code used for this post is available on GitHub under the folder arquillian-shrinkwrap.

The tools

Arquillian

Arquillian brings test execution to the target runtime, alleviating the burden on the developer of managing the runtime from within the test or project build. To invert this control, Arquillian wraps a lifecycle around test execution that does the following:

- Manages the lifecycle of one or more containers
- Bundles the test case, dependent classes and resources as ShrinkWrap archives
- Deploys the archives to the containers
- Enriches the test case with dependency injection and other declarative services
- Executes the tests inside (or against) the containers
- Returns the results to the test runner for reporting

ShrinkWrap

ShrinkWrap, a central component of Arquillian, provides a simple mechanism to assemble archives like JARs, WARs, and EARs with a friendly, fluent API. One of the major benefits of using Arquillian is that you run the tests in a remote container (i.e. application server). That means you'll be testing the real deal. No mocks. Not even embedded runtimes!

Agenda

The following topics will be covered in this post:

- Configure the Arquillian infrastructure in a Maven-based Java project
- Inject EJBs and Managed Beans (CDI) directly in test instances
- Test the Java Persistence API (JPA) layer
- Run Arquillian in client mode
- Run and debug Arquillian tests inside your IDE

Configure Maven to run integration tests

To run integration tests with Maven we need a different approach. By a different approach I mean a different plugin: the Maven Failsafe Plugin. 
The Failsafe Plugin is a fork of the Maven Surefire Plugin designed to run integration tests. The Failsafe plugin goals are designed to run after the package phase, on the integration-test phase. The Maven lifecycle has four phases for running integration tests:

- pre-integration-test: in this phase we can start any required service or do any action, like starting a database or starting a webserver, anything…
- integration-test: Failsafe runs the tests in this phase, after all required services are started.
- post-integration-test: time to shut down all services…
- verify: Failsafe runs another goal that interprets the results of the tests here; if any tests didn't pass, Failsafe will display the results and fail the build.

Configuring Failsafe in the POM:

<!-- clip -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.12</version>
    <configuration>
        <skipTests>true</skipTests>
    </configuration>
</plugin>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.12</version>
    <configuration>
        <encoding>UTF-8</encoding>
    </configuration>
    <executions>
        <execution>
            <id>integration-test</id>
            <goals>
                <goal>integration-test</goal>
            </goals>
        </execution>
        <execution>
            <id>verify</id>
            <goals>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>
<!-- clip -->

By default, the Surefire plugin executes **/Test*.java, **/*Test.java, and **/*TestCase.java test classes. The Failsafe plugin will look for **/IT*.java, **/*IT.java, and **/*ITCase.java. If you are using both the Surefire and Failsafe plugins, make sure that you use this naming convention to make it easier to identify which tests are being executed by which plugin. 
Configure Arquillian infrastructure in Maven

Configure your Maven project descriptor to use Arquillian by appending the following XML fragment:

<!-- clip -->
<repositories>
    <repository>
        <id>jboss-public-repository-group</id>
        <name>JBoss Public Repository Group</name>
        <url>http://repository.jboss.org/nexus/content/groups/public/</url>
    </repository>
</repositories>
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.jboss.arquillian</groupId>
            <artifactId>arquillian-bom</artifactId>
            <version>1.0.1.Final</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.jboss.arquillian.testng</groupId>
        <artifactId>arquillian-testng-container</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
        <version>6.4</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.jboss.spec</groupId>
        <artifactId>jboss-javaee-6.0</artifactId>
        <version>3.0.1.Final</version>
        <scope>provided</scope>
        <type>pom</type>
    </dependency>
</dependencies>
<profiles>
    <profile>
        <id>jbossas-remote-7</id>
        <activation>
            <activeByDefault>true</activeByDefault>
        </activation>
        <dependencies>
            <dependency>
                <groupId>org.jboss.as</groupId>
                <artifactId>jboss-as-arquillian-container-remote</artifactId>
                <version>7.1.1.Final</version>
                <scope>test</scope>
            </dependency>
        </dependencies>
    </profile>
</profiles>
<!-- clip -->

Arquillian has a vast list of container adapters. An Arquillian test can be executed in any container that is compatible with the programming model used in the test. However, throughout this post, only JBoss AS 7 is used. Similarly to Java EE 6 Testing Part I, I chose to use the TestNG testing framework, but again, JUnit should work just as well.

Create testable components

Before looking at how to write integration tests with Arquillian we first need to have a component to test. 
A session bean is a common component in the Java EE stack and will serve as the test subject. In this post, I'll be creating a very basic backend for adding new users to a database.

@Stateless
public class UserServiceBean {

    @PersistenceContext
    private EntityManager em;

    public User addUser(User user) {
        em.persist(user);
        return user;
    }

    // The annotation says that we do not need to open a transaction
    @TransactionAttribute(TransactionAttributeType.SUPPORTS)
    public User findUserById(Long id) {
        return em.find(User.class, id);
    }
}

The code above uses JPA, so we need a persistence unit. A persistence unit defines the set of all entity classes that are managed by EntityManager instances in an application. This set of entity classes represents the data contained within a single data store. Persistence units are defined by the persistence.xml configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                                 http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
             version="2.0">
    <persistence-unit name="example">
        <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
        <properties>
            <property name="hibernate.hbm2ddl.auto" value="create-drop" />
            <property name="hibernate.show_sql" value="true" />
        </properties>
    </persistence-unit>
</persistence>

In this example I'm using an example data source that uses the H2 database and comes already configured with JBoss AS 7. Finally, we also need an entity that maps to a table in the database:

@Entity
public class User {

    @Id
    @GeneratedValue
    private Long id;

    @NotNull
    private String name;

    // Constructors, getters and setters removed for brevity

    @Override
    public String toString() {
        return "User [id=" + id + ", name=" + name + "]";
    }
}

Test JPA with Arquillian

We are now all set to write our first Arquillian test. An Arquillian test case looks just like a unit test with some extras. 
It must have three things:

- Extend the Arquillian class (this is specific to TestNG; with JUnit you need a @RunWith(Arquillian.class) annotation on the class)
- A public static method annotated with @Deployment that returns a ShrinkWrap archive
- At least one method annotated with @Test

public class UserServiceBeanIT extends Arquillian {

    private static final Logger LOGGER = Logger.getLogger(UserServiceBeanIT.class.getName());

    @Inject
    private UserServiceBean service;

    @Deployment
    public static JavaArchive createTestableDeployment() {
        final JavaArchive jar = ShrinkWrap.create(JavaArchive.class, "example.jar")
                .addClasses(User.class, UserServiceBean.class)
                .addAsManifestResource("META-INF/persistence.xml", "persistence.xml")
                // Enable CDI
                .addAsManifestResource(EmptyAsset.INSTANCE, ArchivePaths.create("beans.xml"));

        LOGGER.info(jar.toString(Formatters.VERBOSE));

        return jar;
    }

    @Test
    public void callServiceToAddNewUserToDB() {
        final User user = new User("Ike");
        service.addUser(user);
        assertNotNull(user.getId(), "User id should not be null!");
    }
}

This test is straightforward: it inserts a new user and checks that the id property has been filled with the generated value from the database. Since the test is enriched by Arquillian, you can inject EJBs and managed beans normally using the @EJB or @Inject annotations. The method annotated with @Deployment uses ShrinkWrap to build a JAR archive which will be deployed to the container and against which your tests will run. ShrinkWrap isolates the classes and resources needed by the test from the remainder of the classpath, so you should include every component needed for the test to run inside the deployment archive.

Client mode

Arquillian supports three test run modes:

- In-container mode is to test your application internals. This gives Arquillian the ability to communicate with the test, enrich the test and run the test remotely. In this mode, the test executes in the remote container; Arquillian uses this mode by default. 
- Client mode is to test how your application is used by clients. As opposed to in-container mode, which repackages and overrides the test execution, client mode does as little as possible. It does not repackage your @Deployment, nor does it forward the test execution to a remote server. Your test case runs in your JVM as expected and you are free to test the container from the outside, as your clients see it. The only thing Arquillian does is control the lifecycle of your @Deployment.
- Mixed mode allows mixing the two run modes within the same test class.

To run Arquillian in client mode, let's first build a servlet to be tested:

@WebServlet("/User")
public class UserServlet extends HttpServlet {

    private static final long serialVersionUID = -7125652220750352874L;

    @Inject
    private UserServiceBean service;

    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/plain");

        PrintWriter out = response.getWriter();
        out.println(service.addUser(new User("Ike")).toString());
        out.close();
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        doGet(request, response);
    }
}

And now let's test it:

public class UserServletIT extends Arquillian {

    private static final Logger LOGGER = Logger.getLogger(UserServletIT.class.getName());

    // Not managed, should be used for external calls (e.g. HTTP)
    @Deployment(testable = false)
    public static WebArchive createNotTestableDeployment() {
        final WebArchive war = ShrinkWrap.create(WebArchive.class, "example.war")
                .addClasses(User.class, UserServiceBean.class, UserServlet.class)
                .addAsResource("META-INF/persistence.xml")
                // Enable CDI
                .addAsWebInfResource(EmptyAsset.INSTANCE, ArchivePaths.create("beans.xml"));

        LOGGER.info(war.toString(Formatters.VERBOSE));

        return war;
    }

    @RunAsClient // Same as @Deployment(testable = false), should only be used in mixed mode
    @Test(dataProvider = Arquillian.ARQUILLIAN_DATA_PROVIDER)
    public void callServletToAddNewUserToDB(@ArquillianResource URL baseURL) throws IOException {
        // The servlet is listening at <context_path>/User
        final URL url = new URL(baseURL, "User");
        final User user = new User(1L, "Ike");

        StringBuilder builder = new StringBuilder();
        BufferedReader reader = new BufferedReader(new InputStreamReader(url.openStream()));
        String line;

        while ((line = reader.readLine()) != null) {
            builder.append(line);
        }
        reader.close();

        assertEquals(builder.toString(), user.toString());
    }
}

Although this test is very simple, it allows you to test multiple layers of your application with a single method call.

Run tests inside Eclipse

You can run an Arquillian test from inside your IDE just like a unit test.

Run an Arquillian test (click on the images to enlarge):

- Install the TestNG and JBoss Tools Eclipse plugins.
- Add a new JBoss AS server to Eclipse.
- Start the JBoss AS server.
- Run the test case from Eclipse: right-click on the test file in the Project Explorer and select Run As > TestNG Test. The result should look similar to this:

Debug an Arquillian test (click on the images to enlarge)

Since we are using a remote container, Debug As > TestNG Test does not cause breakpoints to be activated. Instead, we need to start the container in debug mode and attach the debugger. That's because the test is run in a different JVM than the original test runner. 
The only change you need to make to debug your test is to start the JBoss AS server in debug mode:

- Start the JBoss AS server in debug mode.
- Add the breakpoints you need to your code.
- Debug the test by right-clicking on the test file in the Project Explorer and selecting Run As > TestNG Test.

More resources

I hope to have been able to highlight some of the benefits of Arquillian. For more Arquillian awesomeness take a look at the following resources:

- Arquillian Guides
- Arquillian Community
- Arquillian Git Repository

Reference: Java EE 6 Testing Part II – Introduction to Arquillian and ShrinkWrap from our JCG partner Samuel Santos at the Samaxes blog....

Grails Dynamic Dropdown

Recently I had a UI requirement where a customer wanted to select values from two separate dropdowns. The value of the first dropdown essentially filtered the values for the second dropdown. Given that the financial projects we support are not heavy on UI requirements, I had to do some initial learning and experimentation to yield a good implementation. This blog entry details how to implement dynamic dropdowns in Grails with Ajax and minimal JavaScript.

Example Problem

A contrived example for dynamic dropdowns can be described as follows: a user would like to select a sports team for a city. The user first selects a value in a dropdown to choose a city. A second dropdown is then filtered with the sports teams within that city. An example to clarify:

- The user selects Dallas as the city in the first dropdown. The second dropdown now displays the values Mavericks, Cowboys and Rangers.
- The user selects Pittsburgh as the city in the first dropdown. The second dropdown now displays the values Steelers, Pirates, and Penguins.

High Level Design in Grails

Before we get into the details, we can take a step back and describe how to accomplish a dynamic dropdown in the Grails framework:

- On a gsp page, create a select dropdown with the list of cities.
- On change of the city dropdown, send an Ajax call to the server with a param of the selected city.
- A controller on the server receives the parameter and looks up teams based on the selected city.
- Return a template with a new select dropdown for the teams, providing a model with the filtered list of teams.

We will continue below with code snippets. The code was demoed with Grails 2.0.

Domain Objects

The domain objects for this example are quite simple: a City object with a name, and a Team object. 
package dropdown

class City {

    String name

    static hasMany = [teams: Team]

    static constraints = {
    }
}

package dropdown

class Team {

    String name

    static belongsTo = [city: City]

    static constraints = {
    }

    String toString() { name }
}

Gsp Page

A gsp page contains a list of the cities directly from a GORM call. This is commonly performed and demonstrated by the default generated Grails gsp pages. Note the use of remoteFunction. This is a Grails gsp utility which makes an Ajax call to the server and provides 'update' for the section of the DOM to be updated on return. For the team dropdown, we will start off with an empty select tag. Below is a snippet.

<g:select name="city.id" from="${City.list()}" optionKey="id" optionValue="name"
          noSelection="['':'Choose City']"
          onchange="${remoteFunction(
              controller: 'city',
              action: 'findTeamsForCity',
              params: '\'city.id=\' + this.value',
              update: 'teamSelection')}" />
....
<td id="teamSelection" valign="top">
    <select>
        <option>Choose Team</option>
    </select>
</td>

Controller used for Filtering

The controller has a closure which takes in the city id and then uses it to provide the teams associated with the city. This closure is invoked via Ajax. The closure renders a template and a model. The dynamicDropdown closure is just used for navigation; by convention it renders the gsp of the same name.

package dropdown

class CityController {

    static scaffold = City

    // just navigation to the gsp
    def dynamicDropdown = {
    }

    def findTeamsForCity = {
        def city = City.get(params.city.id)
        render(template: 'teamSelection', model: [teams: city.teams])
    }
}

Template

The template is used to replace a section of the DOM in the gsp. It accepts any model that is provided.

<!-- This template renders a drop down after a city is selected -->
<g:select name="team.id" from="${teams}" optionValue="name" optionKey="id"/>

Conclusion

There are multiple ways to accomplish a dynamic dropdown. Native jQuery can be used, or even native JavaScript. 
I chose to utilize the built-in functions of Grails and lessen my dependency on client-side programming. This proved to be clean, quick and quite simple! Reference: Grails Dynamic Dropdown from our JCG partner Nirav Assar at the Assar Java Consulting blog....

Spring & JSF integration: Internationalization and Localization

If you are working on a JSF application that is targeted at multiple languages, you may well be familiar with the <f:loadBundle> tag. Even if your application does not support internationalization, using message bundles is still probably a good idea. Under the hood, the <f:loadBundle> tag reads messages from a java.util.ResourceBundle and, whilst this will work, Spring developers often prefer the org.springframework.context.MessageSource interface. As an alternative to <f:loadBundle>, I have been developing a new <s:messageSource> component that can be used to expose messages from any Spring MessageSource, as well as offering a few other advantages. The new component is a drop-in replacement for <f:loadBundle>.

<s:messageSource source="#{messageSource}" var="messages"/>
<p>
    <h:outputText value="#{messages.hello}"/>
</p>

The source attribute can be any EL expression that resolves to a MessageSource instance. If the source is not specified, the Spring ApplicationContext will be used. The var attribute is the name of the variable that will be used to access the messages. Unlike standard JSF, the key of the message to load will be built from the ID of the page being rendered. For example, assuming the page above is from the file WEB-INF/pages/messages/simple.xhtml, the key used to load the hello message will be pages.messages.simple.hello. Using these compound keys prevents message key clashes and keeps the page mark-up nice and concise. You can use the prefix attribute to override this behaviour if you need to. If you make reference to a message in your XHTML that you have forgotten to define, you will either see a warning message (when in development) or an exception will be thrown (when in production). 
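To make the key-building rule concrete, here is a small stand-alone sketch of the convention described above. buildKey is a hypothetical helper written for illustration; it is not the component's actual code.

```java
/**
 * Illustrative sketch of the compound message-key convention:
 * strip the WEB-INF prefix and the file extension from the view id,
 * turn path separators into dots, then append the message name.
 */
public class MessageKeySketch {

    // Hypothetical helper, not part of the actual <s:messageSource> component.
    static String buildKey(String viewId, String message) {
        String path = viewId.startsWith("/") ? viewId.substring(1) : viewId;
        if (path.startsWith("WEB-INF/")) {
            path = path.substring("WEB-INF/".length());
        }
        int dot = path.lastIndexOf('.');
        if (dot >= 0) {
            path = path.substring(0, dot);   // drop the .xhtml extension
        }
        return path.replace('/', '.') + '.' + message;
    }

    public static void main(String[] args) {
        // The article's example page and message name:
        System.out.println(buildKey("/WEB-INF/pages/messages/simple.xhtml", "hello"));
        // pages.messages.simple.hello
    }
}
```

Deriving keys from the page location this way is what prevents two pages from clashing over a generic name like "hello".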
As with standard JSF, your messages can include place-holders for use with <h:outputFormat>:

pages.message.simple.welcome=Welcome to {1} with {0}

<h:outputFormat value="#{messages.welcome}">
    <f:param value="Spring"/>
    <f:param value="JSF"/>
</h:outputFormat>

The <h:outputFormat> tag is a little bit verbose, so, for convenience, Spring messages can be used as Maps. This allows you to reference place-holders in a much more concise way:

<h:outputText value="#{messages.welcome['Spring']['JSF']}"/>

The same syntax allows you to map Java objects to messages. By default, objects are mapped by building a message key from the class name. For example, the following class:

package org.example;

public class ExampleObject {
}

can be referenced in JSF:

<h:outputText value="#{messages[exampleInstance]}"/>

resolving to the following message:

org.example.ExampleObject=example

For enum objects, the message key includes the enum name as well as the class:

package org.example;

public enum ExampleObject {
    ONE, // mapped to message key org.example.ExampleObject.ONE
    TWO  // mapped to message key org.example.ExampleObject.TWO
}

Object messages can also make reference to properties that should form part of the message:

org.example.PersonName=Name is {first} {last}
...
package org.example;

public class PersonName {
    ...
    public String getFirst() {...}
    public String getLast() {...}
}

You can also define your own object message strategies by using a message source that implements the org.springframework.springfaces.message.ObjectMessageSource interface. If you want to check out any of this code, take a look at the org.springframework.springfaces.message and org.springframework.springfaces.message.ui packages from the GitHub project. Reference: Integrating Spring & JavaServer Faces: Internationalization and Localization from our JCG partner Phillip Webb at the Phil Webb's Blog blog....

So you are a programmer…

Been there. Done that. And suffered for that… Programming is fun. But there is some other associated stuff we programmers blissfully skip or procrastinate on because it is not so cool. The end result? Somebody is going to get hurt at the end of the day, and that somebody may very well be ourselves. So here is some stuff I have experienced, some of it stuff I myself have been guilty of doing, and the insights I have gotten from it.

Good ol' docs

It's a well documented fact that documentation is… hmm, well… let me think… good to have. Or is it important? Yep, I know the feeling. But, as things turn out, it is something that needs to be done at the end of the day. Who said we programmers do not have to toil for our food, right? From a user's perspective, a feature without proper documentation is close to a feature which is not there at all. Say you developed a really neat feature, and obviously you want people to try it out, right? But what if they are not able to wrap their heads around how to use it, or have to play a guessing game trying to get it to work, failing miserably in the process? Now not only have you wasted their time, you have also earned some bad karma. And yes, an intuitive user interface can go a long way to ease a user's pain, but good, to-the-point documentation sprinkled on top makes up a recipe that users can't get enough of.

The Extra Mile

Say you developed this new cool feature. But in the hurry of pushing it out, you cut some corners and left some manual step in the usage flow which would better have been done behind the curtains, unbeknownst to the user. Now the user has to do this manual step every time they use your feature, which quickly becomes a pain, specially if it turns out to be a heavily used feature. Optimize the UX. Cut unnecessary stuff from the user flow. Go that extra mile and users will thank you for it.

Mind your cycle

Go easy on yourself. Make your development cycle quicker. 
Say you have some repetitive process to do in order to make the code you wrote run in the test environment, so you can check whether your feature or fix is working correctly. Invest some time in automating this process, maybe by writing a handy script, and it will help you finish your work early and then go play.

Let's configure it

What if a user wants to fine tune the size of the foo queue holding tasks for the bar thread pool of your program? OK, let's make it configurable via the UI then, right? Or should we? Too much configurability thrown in the user's face kills the user experience. Do not force your users to fill in stuff which is better left with some sensible defaults every time they use your stuff. It may be that there is no need to configure every nook and corner of your program to make it work the way you want. Decide what should be in and what should be out. A better middle ground would be to provide those configurations in an optional advanced configuration section, with some sensible defaults, which the user can go and change if they see fit. And remember to document them clearly as well, so that the user knows better when configuring them.

Nasty API docs

Wrong API docs are worse than having no API docs at all. It really happened to me once, with a JMS API not working as published in its API docs. And I thought my thread programming was causing it. It took a considerable amount of hair-pulling to figure out that the fault was with the API. Since my assumptions about the API, derived from the docs, were wrong, so was my program. Be specially mindful when you are changing an existing API implementation: check whether the assumptions and the results returned in certain conditions specified in the API docs still hold. If not, change the docs accordingly.

Carpenters wanted…

Manage your broken windows. You may have to cut some corners and pull out some hacks due to time or release pressures. 
It’s OK as long as you know what your broken windows are and you intend to repair them at the first chance you get. Leave some reminders and attend to them when you get the chance.Love thy code.Show that you care, so that others will care. If you keep your code in good condition, other people taking over or contributing to it will tend to maintain it the same way. This is especially important in open source settings, where at the end of the day you will not be the only one mending a piece of code you wrote.So there goes my list of tidbits on programming for the better. Pretty much regulation and common sense stuff which does not warrant a mention, you might say. But as mentioned in the beginning, I have been there. Done that. And have paid for it . And we keep doing it as well. So I hope this post will serve as a reminder, for me at least, when I am on the verge of doing some nasty thing next time around . Anyway, this is just my 2 cents. Please holler if you beg to differ.Reference: So you are a programmer.. from our JCG partner Buddhika Chamith at the Source Open blog....

Using Tomcat JDBC Connection Pool in Standalone Java Application

This is a guest article from our W4G partner Clarence Ho, author of Pro Spring 3 from APress. You may find a discount coupon code for the book at the end of the article, only for the readers of Java Code Geeks! Enjoy! When using a JDBC connection pool in standalone Java applications that require data access, most developers will use either commons-dbcp or c3p0. In this tutorial, we will discuss using the JDBC connection pool from the Apache Tomcat web container in standalone Java applications.One of the new features of Tomcat 7 is the tomcat-jdbc connection pool, which is a replacement for the commons-dbcp connection pool. The main advantages of tomcat-jdbc over commons-dbcp and other connection pool libraries are listed below:Support for highly concurrent environments and multi core/cpu systems Commons-dbcp is single-threaded and slow Commons-dbcp is complex (over 60 classes), while the tomcat-jdbc core contains only 8 classes Support for asynchronous connection retrieval XA connection support The connection pool object exposes an MBean that can be registered for monitoring purposes Most of the attributes in commons-dbcp are supported, as well as many enhanced attributes Support for JDBC interceptorsFor a detailed description and configuration documentation, please refer to the official documentation page on the Apache Tomcat web site. In this tutorial, we will demonstrate using tomcat-jdbc to develop a simple standalone data access Java application. This application will use the following frameworks and libraries:Spring Framework 3.1.1 Hibernate 4.1.3 Spring Data JPA 1.1.0 Tomcat JDBC Connection Pool 7.0.27 H2 database 1.3.167 Guava 12.0The sample was developed using SpringSource Tool Suite and a zipped archive can be downloaded at the end of this article. This tutorial assumes that you already have an understanding of developing JPA applications with Spring and Hibernate. Dependencies The project dependencies are managed by Maven. 
The following is the snippet from the POM file (pom.xml) of the project. Listing 1 – Project dependencies <properties> <maven.test.failure.ignore>true</maven.test.failure.ignore> <spring.framework.version>3.1.1.RELEASE</spring.framework.version> <hibernate.version>4.1.3.Final</hibernate.version> <spring.data.jpa.version>1.1.0.RELEASE</spring.data.jpa.version> <tomcat.dbcp.version>7.0.27</tomcat.dbcp.version> <h2.version>1.3.167</h2.version> <slf4j.version>1.6.4</slf4j.version> <log4j.version>1.2.16</log4j.version> <guava.version>12.0</guava.version> </properties><dependencies><!-- Hibernate --><dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-entitymanager</artifactId> <version>${hibernate.version}</version> </dependency><!-- Spring Framework --><dependency> <groupId>org.springframework</groupId> <artifactId>spring-context-support</artifactId> <version>${spring.framework.version}</version> </dependency><dependency> <groupId>org.springframework</groupId> <artifactId>spring-aop</artifactId> <version>${spring.framework.version}</version> </dependency><dependency> <groupId>org.springframework</groupId> <artifactId>spring-orm</artifactId> <version>${spring.framework.version}</version> </dependency><!-- Spring Data JPA --><dependency> <groupId>org.springframework.data</groupId> <artifactId>spring-data-jpa</artifactId> <version>${spring.data.jpa.version}</version> </dependency><!-- Tomcat DBCP --><dependency> <groupId>org.apache.tomcat</groupId> <artifactId>tomcat-jdbc</artifactId> <version>${tomcat.dbcp.version}</version> </dependency> <!-- Logging --><dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>${slf4j.version}</version> </dependency><dependency> <groupId>org.slf4j</groupId> <artifactId>jcl-over-slf4j</artifactId> <version>${slf4j.version}</version> <scope>runtime</scope> </dependency><dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>${slf4j.version}</version> 
<scope>runtime</scope> </dependency><dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>${log4j.version}</version> </dependency> <!-- Others --><dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId> <version>${h2.version}</version> </dependency><dependency> <groupId>com.google.guava</groupId> <artifactId>guava</artifactId> <version>${guava.version}</version> </dependency> </dependencies>Domain Object Model The object model is a simple contact information model. Each contact has their first name, last name, and date of birth. Also, each contact will be associated with zero or more hobbies (e.g. swimming, jogging, reading, etc.). In the DOM, there are 2 main classes, namely the Contact and Hobby classes. Listing 2 and 3 shows the code listing of the classes respectively. Listing 2 – the Contact class @Entity @Table(name = "contact") public class Contact {private Long id; private int version; private String firstName; private String lastName; private Date birthDate; private Set<Hobby> hobbies = new HashSet<Hobby>(); @Id @GeneratedValue(strategy=GenerationType.IDENTITY) @Column(name = "ID") public Long getId() { return id; } public void setId(Long id) { this.id = id; } @Version @Column(name = "VERSION") public int getVersion() { return version; } public void setVersion(int version) { this.version = version; } @Column(name = "FIRST_NAME") public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; }@Column(name = "LAST_NAME") public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } @Column(name = "BIRTH_DATE") @Temporal(TemporalType.DATE) public Date getBirthDate() { return birthDate; } public void setBirthDate(Date birthDate) { this.birthDate = birthDate; } @ManyToMany @JoinTable(name = "contact_hobby_detail", joinColumns = @JoinColumn(name = "CONTACT_ID"), inverseJoinColumns = @JoinColumn(name = 
"HOBBY_ID")) public Set<Hobby> getHobbies() { return this.hobbies; }public void setHobbies(Set<Hobby> hobbies) { this.hobbies = hobbies; } public String toString() { return "Contact - Id: " + id + ", First name: " + firstName + ", Last name: " + lastName + ", Birthday: " + birthDate; } }Listing 3 – the Hobby class @Entity @Table(name = "hobby") public class Hobby {private String hobbyId; private Set<Contact> contacts = new HashSet<Contact>();public Hobby() { }public Hobby(String hobbyId) { this.hobbyId = hobbyId; }public Hobby(String hobbyId, Set<Contact> contacts) { this.hobbyId = hobbyId; this.contacts = contacts; }@Id @Column(name = "HOBBY_ID") public String getHobbyId() { return this.hobbyId; }public void setHobbyId(String hobbyId) { this.hobbyId = hobbyId; }@ManyToMany @JoinTable(name = "contact_hobby_detail", joinColumns = @JoinColumn(name = "HOBBY_ID"), inverseJoinColumns = @JoinColumn(name = "CONTACT_ID")) public Set<Contact> getContacts() { return this.contacts; }public void setContacts(Set<Contact> contacts) { this.contacts = contacts; } }From Listings 2 and 3, note that there is a many-to-many relationship between the Contact and Hobby classes. Database Schema In this tutorial, we will use the H2 in-memory database. There are 3 tables:CONTACT: stores the contact information HOBBY: stores the list of hobbies available for the application CONTACT_HOBBY_DETAIL: models the many-to-many relationship between the Contact and Hobby classesListings 4 and 5 show the content of the database schema creation script and the testing data population script respectively. 
Listing 4 – Database schema creation script (schema.sql) DROP TABLE IF EXISTS CONTACT;CREATE TABLE CONTACT ( ID INT NOT NULL AUTO_INCREMENT ,FIRST_NAME VARCHAR(60) NOT NULL ,LAST_NAME VARCHAR(40) NOT NULL ,BIRTH_DATE DATE ,VERSION INT NOT NULL DEFAULT 0 ,UNIQUE UQ_CONTACT_1 (FIRST_NAME, LAST_NAME) ,PRIMARY KEY (ID) );CREATE TABLE HOBBY ( HOBBY_ID VARCHAR(20) NOT NULL ,PRIMARY KEY (HOBBY_ID) );CREATE TABLE CONTACT_HOBBY_DETAIL ( CONTACT_ID INT NOT NULL ,HOBBY_ID VARCHAR(20) NOT NULL ,PRIMARY KEY (CONTACT_ID, HOBBY_ID) ,CONSTRAINT FK_CONTACT_HOBBY_DETAIL_1 FOREIGN KEY (CONTACT_ID) REFERENCES CONTACT (ID) ON DELETE CASCADE ,CONSTRAINT FK_CONTACT_HOBBY_DETAIL_2 FOREIGN KEY (HOBBY_ID) REFERENCES HOBBY (HOBBY_ID) );Listing 5 – Testing data population script (test-data.sql) insert into contact (first_name, last_name, birth_date) values ('Clarence', 'Ho', '1980-07-30'); insert into contact (first_name, last_name, birth_date) values ('Scott', 'Tiger', '1990-11-02');insert into hobby (hobby_id) values ('Swimming'); insert into hobby (hobby_id) values ('Jogging'); insert into hobby (hobby_id) values ('Programming'); insert into hobby (hobby_id) values ('Movies'); insert into hobby (hobby_id) values ('Reading');insert into contact_hobby_detail(contact_id, hobby_id) values (1, 'Swimming'); insert into contact_hobby_detail(contact_id, hobby_id) values (1, 'Movies'); insert into contact_hobby_detail(contact_id, hobby_id) values (2, 'Swimming');Service Layer In the service layer, there are 2 interfaces:ContactService: provides services for accessing contact information HobbyService: provides services for accessing hobby informationListings 6 and 7 show the ContactService and HobbyService interfaces respectively. 
Listing 6 – the ContactService interface public interface ContactService {public List<Contact> findAll(); public Contact findById(Long id); public Contact save(Contact contact); }Listing 7 – the HobbyService interface public interface HobbyService {public List<Hobby> findAll(); }Spring Configuration Let’s take a look at the Spring configuration. Listing 8 shows the data source, transaction and JPA configurations. Listing 8 – Spring JPA configuration (datasource-tx-jpa.xml) <!--Tomcat JDBC connection pool configuration --> <bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource"> <property name="driverClassName" value="org.h2.Driver" /> <property name="url" value="jdbc:h2:mem:testdb" /> <property name="username" value="sa" /> <property name="password" value="" /> </bean><!--Initialize the database schema with test data --> <jdbc:initialize-database data-source="dataSource"> <jdbc:script location="classpath:schema.sql"/> <jdbc:script location="classpath:test-data.sql"/> </jdbc:initialize-database> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="emf"/> </bean><tx:annotation-driven transaction-manager="transactionManager" /><bean id="emf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/> </property> <property name="packagesToScan" value="com.skywidesoft.tomcat.dbcp.tutorial.domain"/> <property name="jpaProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.H2Dialect</prop> <prop key="hibernate.max_fetch_depth">3</prop> <prop key="hibernate.jdbc.fetch_size">50</prop> <prop key="hibernate.jdbc.batch_size">10</prop> <prop key="hibernate.show_sql">true</prop> </props> </property> </bean> <context:annotation-config/><!--Spring Data JPA Repository Configuration --> 
<jpa:repositories base-package="com.skywidesoft.tomcat.dbcp.tutorial.repository" entity-manager-factory-ref="emf" transaction-manager-ref="transactionManager"/>Some highlights of the configuration in Listing 8 are listed below:For the dataSource bean, the class org.apache.tomcat.jdbc.pool.DataSource is used to provide the JDBC DataSource interface for the underlying connection. You will see that the configuration is basically the same as using commons-dbcp. The <jdbc:initialize-database> tag is Spring 3.1’s support for initializing the database with the database schema and testing data The <jpa:repositories> tag configures Spring Data JPA’s repository abstraction.Listing 9 shows the Spring application context configuration. Listing 9 – Spring application context (app-context.xml) <import resource="classpath:datasource-tx-jpa.xml"/><context:component-scan base-package="com.skywidesoft.tomcat.dbcp.tutorial.service.jpa"/>Spring Data JPA Repository Abstraction Spring Data JPA’s repository abstraction provides a simplified approach to developing JPA based data access applications. For details, please refer to the project website. The repository abstraction layer is developed using Java interfaces. Listings 10 and 11 show the code of the ContactRepository and HobbyRepository interfaces respectively. Listing 10 – The ContactRepository Interface public interface ContactRepository extends CrudRepository<Contact, Long>{}Listing 11 – The HobbyRepository Interface public interface HobbyRepository extends CrudRepository<Hobby, String>{}Note that each interface simply extends Spring Data Commons’ CrudRepository<T,ID> interface, which already provides common data access operations (e.g. findAll, findOne, save, delete, etc.). JPA Implementation Classes The next step is to develop the JPA implementation of the service layer interfaces in Listings 6 and 7. 
The classes adopt Spring Framework’s annotations for Spring bean declaration, auto-wiring of dependencies, transaction requirements, etc. Listings 12 and 13 show the ContactServiceImpl and HobbyServiceImpl classes respectively. Listing 12 – The ContactServiceImpl class @Service("contactService") @Repository @Transactional public class ContactServiceImpl implements ContactService {final static Logger logger = LoggerFactory.getLogger(ContactServiceImpl.class); @Autowired private ContactRepository contactRepository; @Transactional(readOnly=true) public List<Contact> findAll() { logger.info("Finding all contacts"); return Lists.newArrayList(contactRepository.findAll()); }@Transactional(readOnly=true) public Contact findById(Long id) { return contactRepository.findOne(id); }public Contact save(Contact contact) { return contactRepository.save(contact); }}Listing 13 – The HobbyServiceImpl class @Service("hobbyService") @Repository @Transactional public class HobbyServiceImpl implements HobbyService {@Autowired private HobbyRepository hobbyRepository;@Transactional(readOnly=true) public List<Hobby> findAll() { return Lists.newArrayList(hobbyRepository.findAll()); }}Testing Let’s see the application in action. Listing 14 shows the ContactServiceTest class, which simply bootstraps the Spring application context from the app-context.xml file, looks up the contactService bean, and invokes the findAll operation to retrieve all the contacts from the database. 
Listing 14 – The ContactServiceTest class public class ContactServiceTest {public static void main(String[] args) {GenericXmlApplicationContext ctx = new GenericXmlApplicationContext(); ctx.load("classpath:app-context.xml"); ctx.refresh(); ContactService contactService = ctx.getBean("contactService", ContactService.class); List<Contact> contacts = contactService.findAll(); for (Contact contact: contacts) { System.out.println(contact); }}}Running the above class will produce the following output in the console window (other, non-relevant output is omitted): 2012-05-25 13:35:43,552 INFO [com.skywidesoft.tomcat.dbcp.tutorial.service.jpa.ContactServiceImpl] - <Finding all contacts> 2012-05-25 13:35:43,665 DEBUG [org.hibernate.SQL] - <select contact0_.ID as ID0_, contact0_.BIRTH_DATE as BIRTH2_0_, contact0_.FIRST_NAME as FIRST3_0_, contact0_.LAST_NAME as LAST4_0_, contact0_.VERSION as VERSION0_ from contact contact0_> Hibernate: select contact0_.ID as ID0_, contact0_.BIRTH_DATE as BIRTH2_0_, contact0_.FIRST_NAME as FIRST3_0_, contact0_.LAST_NAME as LAST4_0_, contact0_.VERSION as VERSION0_ from contact contact0_ Contact - Id: 1, First name: Clarence, Last name: Ho, Birthday: 1980-07-30 Contact - Id: 2, First name: Scott, Last name: Tiger, Birthday: 1990-11-02From the above output, you can see that the contact information which was populated by the test-data.sql script was retrieved from the database correctly. Conclusion This tutorial presents using Tomcat’s JDBC connection pool in standalone Java applications. Tomcat’s JDBC connection pool is a replacement for the commons-dbcp connection pool, providing a faster and more feature-rich JDBC connection pool solution. Its neat design, high performance, and support for highly concurrent environments and multi core/cpu systems make it a compelling choice as the JDBC connection pool provider, both in Tomcat’s web container and in standalone Java application environments. 
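For readers who want the pool entirely without Spring XML, tomcat-jdbc can also be configured programmatically through its PoolProperties class. The following is a minimal sketch based on the tomcat-jdbc documentation, reusing the same H2 settings as Listing 8; the pool sizing values are illustrative assumptions, not recommendations from the article, and the snippet needs tomcat-jdbc and H2 on the classpath:

```java
import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PooledDataSourceFactory {

    // Builds a tomcat-jdbc DataSource for the same in-memory H2 database
    // used in this tutorial. The pool sizes below are illustrative only.
    public static DataSource createDataSource() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:h2:mem:testdb");
        p.setDriverClassName("org.h2.Driver");
        p.setUsername("sa");
        p.setPassword("");
        p.setInitialSize(5);            // connections created at startup
        p.setMaxActive(20);             // hard cap on open connections
        p.setMaxIdle(10);               // idle connections kept in the pool
        p.setTestOnBorrow(true);        // validate a connection before handing it out
        p.setValidationQuery("SELECT 1");

        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}
```

Such a DataSource could then be passed to the dataSource property of LocalContainerEntityManagerFactoryBean in Java-based configuration, or used directly with plain JDBC.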
Download full Eclipse Maven Project.Reference: Using Tomcat JDBC Connection Pool in Standalone Java Application from our W4G partner Clarence Ho. Clarence Ho is lead author of Pro Spring 3 from APress. With Pro Spring 3, you’ll learn Spring basics and core topics, and gain access to the authors’ insights and real–world experiences with remoting, Hibernate, and EJB. Beyond the basics, you’ll learn how to leverage the Spring Framework to build various tiers or parts of an enterprise Java application like transactions, the web and presentations tiers, deployment, and much more. A full sample application allows you to apply many of the technologies and techniques covered in this book and see how they work together. APress has provided the readers of Java Code Geeks with a discount coupon code. The coupon code is: SPR76 and is valid till July 6, 2012. The code offers a 40% off the eBook only from apress.com....

Profiling JavaFX Mobile applications

NOTE: This article was originally published in 2009 and is provided for reference reasons. Please check out the rest of our JavaFX articles. Today is a great day for every developer of JavaFX Mobile applications. You wonder why? Because the JavaME SDK 3.0 was released. It was long, hard work, from what I heard during our lunch breaks, but the result is a fantastic tool. Congratulations to the whole team and I am looking forward to celebrating this launch with you guys! Some people might be wondering now, JavaME SDK – JavaFX Mobile, where is the connection? The JavaME SDK finally enables a so far well hidden functionality of JavaFX Mobile: profiling – which makes it the most important tool for JavaFX Mobile developers in my opinion. (Ok, maybe I am a little biased here, because performance is my day-to-day job…) Oh, yea, right. The JavaME SDK is also a great tool for developing JavaME applications – at least that’s what I heard. The remainder of this article will explain how you enable the profiler, what you have to consider while profiling, and finally how to view the results. Enabling the profiler To enable profiling of a JavaFX Mobile application, you need to change the settings of the VM. This is conveniently possible simply by changing the properties of one of the predefined devices. (Alternatively, you can define a new device explicitly for profiling.) To enable profiling with the default device DefaultFxPhone1, open the file device.properties in ~/javafx-sdk/1.1/work/0, which is located in your home-folder. You need to start the emulator at least once, so that the folder and file are created. If you look at the content, it will look similar to this: # # Copyright (c) 2009 Sun Microsystems, Inc. All rights reserved. # Use is subject to license terms. 
# phone.number: 123456789 runtime.internal.com.sun.io.j2me.apdu.hostsandports = localhost:9025,localhost:9026 profiler.enabled: false profiler.file: data.prof netmon.enabled: false runtime.internal.JAVA_HEAP_SIZE: 15728640 runtime.internal.MAIN_MEMORY_CHUNK_SIZE: 26214400 runtime.internal.microedition.locale: en-US File Content of device.properties For profiling we are only interested in the properties profiler.enabled and profiler.file. Enable profiling by setting the flag profiler.enabled: profiler.enabled: true The property profiler.file determines where the profiling data will be stored. If you do not change the default, it will be stored in the file data.prof in the same directory as device.properties. After you have changed the properties, you have to restart the emulator and the device-manager. Running a profiling-session Whenever you run an application in the emulator now, it will be profiled. After the application finishes, the result will be stored in the file which is configured in device.properties. Be aware that if you run two applications, finishing the second one will overwrite the profiling data of the first, so be sure to copy the file before running the second application. While profiling, two issues need to be considered. First of all, an application that is being profiled runs extremely slowly. In fact it runs so slowly that any user interaction is very difficult, if not impossible. The best option is to make your test run fully automated, without the need for user interaction. The slow execution also affects animations; almost all frames will be dropped when profiling. You can change the duration of an animation if it is important to execute more frames. The other issue to consider is that the VM needs some time to write the profiling data to the file system after your application finishes. If you close the emulator window directly, the VM will be shut down immediately and the file with the profiling data is usually corrupted. 
One solution to overcome this is to make sure the application finishes by itself. You can call FX.exit() to quit a JavaFX application anytime. If you need to stop the application manually, press the red cancel button on the device. This will put the JavaFX application in the background and show the AMS (Application Management System). From there you can end the application without stopping the VM by selecting the running application and selecting “End” from the menu. Viewing the profiling data This is where the JavaME SDK finally comes into play. Start the SDK and select the entry “Import Java ME SDK snapshot…” from the Tools menu to load the file generated in your profiling session. This will open a view similar to the profiler window in NetBeans and give you an easy-to-use representation of the generated data. Reference: Profiling JavaFX Mobile applications from our JCG partner Michael Heinrichs at the Mike’s Blog blog....

Comparing OpenDDR to WURFL

Web content delivered to mobile devices usually benefits from being tailored to take into account a range of factors such as screen size, markup language support and image format support. Such information is stored in “Device Description Repositories” (DDRs). Both the WURFL and OpenDDR projects provide an API to access the DDRs, in order to ease and promote the development of Web content that adapts to its Delivery Context. WURFL recently changed its license to AGPL (Affero GPL) v3, meaning that it is no longer free for commercial use. Consequently, some free open source alternatives have recently started to show up. OpenDDR is one of them. In this post I will share my findings on how the OpenDDR Java API compares to WURFL. Add dependencies to project This section describes how to add WURFL and OpenDDR to a Maven project. WURFL WURFL is really straightforward since it is available in the Maven central repository. All you have to do is include the dependency in your project: <dependency> <groupId>net.sourceforge.wurfl</groupId> <artifactId>wurfl</artifactId> <version>1.2.2</version><!-- the last free version --> </dependency>OpenDDR OpenDDR on the other hand is quite difficult to configure. Follow these steps to include OpenDDR in your project:Download the OpenDDR-Simple-API zip. Unzip it and create a new Java project in Eclipse based on the resulting folder. Export the OpenDDR-Simple-API JAR using Eclipse (File >> Export...), including only the content of the src folder and excluding the oddr.properties file. 
Install the resulting JAR and DDR-Simple-API.jar from the lib folder into your local Maven repository mvn install:install-file -DgroupId=org.w3c.ddr.simple -DartifactId=DDR-Simple-API -Dversion=2008-03-30 -Dpackaging=jar -Dfile=DDR-Simple-API.jar -DgeneratePom=true -DcreateChecksum=true mvn install:install-file -DgroupId=org.openddr.simpleapi.oddr -DartifactId=OpenDDR -Dversion= -Dpackaging=jar -Dfile=OpenDDR- -DgeneratePom=true -DcreateChecksum=trueAdd the dependencies to your project pom.xml file: <dependency> <groupId>org.w3c.ddr.simple</groupId> <artifactId>DDR-Simple-API</artifactId> <version>2008-03-30</version> </dependency> <dependency> <groupId>org.openddr.simpleapi.oddr</groupId> <artifactId>OpenDDR</artifactId> <version></version> </dependency> <dependency> <groupId>org.apache.commons</groupId> <artifactId>commons-jexl</artifactId> <version>2.1.1</version> </dependency> <dependency> <groupId>commons-lang</groupId> <artifactId>commons-lang</artifactId> <version>2.6</version> </dependency>Load repository/capabilities file This section describes how to load WURFL and OpenDDR repository file(s) and import it in your project. WURFL Copy wurfl-2.1.1.xml.gz file (the last free version) into your project src/main/resources folder and import it using: WURFLHolder wurflHolder = new CustomWURFLHolder(getClass().getResource("/wurfl-2.1.1.xml.gz").toString());OpenDDR Copy oddr.properties from the OpenDDR-Simple-API src folder and all the files inside OpenDDR-Simple-API resources folder into your project src/main/resources folder. 
Import them using: Service identificationService = null; try { Properties initializationProperties = new Properties(); initializationProperties.load(getClass().getResourceAsStream("/oddr.properties")); identificationService = ServiceFactory .newService("org.openddr.simpleapi.oddr.ODDRService", initializationProperties.getProperty(ODDRService.ODDR_VOCABULARY_IRI), initializationProperties); } catch (IOException e) { LOGGER.error(e.getMessage(), e); } catch (InitializationException e) { LOGGER.error(e.getMessage(), e); } catch (NameException e) { LOGGER.error(e.getMessage(), e); }Using the API This section describes how to use the WURFL and OpenDDR Java APIs to access the device capabilities. WURFL The WURFL API is very easy to use and has the big advantage of a fall-back hierarchy that infers capabilities for devices not yet in its repository file. Device device = wurflHolder.getWURFLManager().getDeviceForRequest(getContext().getRequest()); int resolutionWidth = Integer.valueOf(device.getCapability("resolution_width")); int resolutionHeight = Integer.valueOf(device.getCapability("resolution_height")); There’s no need to validate device.getCapability("resolution_width") against a null value when no data is available. OpenDDR OpenDDR is quite the opposite: very cumbersome, and it does not have a fall-back hierarchy, forcing the developer to validate each property value. 
PropertyRef displayWidthRef; PropertyRef displayHeightRef;try { displayWidthRef = identificationService.newPropertyRef("displayWidth"); displayHeightRef = identificationService.newPropertyRef("displayHeight"); } catch (NameException ex) { throw new RuntimeException(ex); }PropertyRef[] propertyRefs = new PropertyRef[] { displayWidthRef, displayHeightRef }; Evidence e = new ODDRHTTPEvidence(); e.put("User-Agent", getContext().getRequest().getHeader("User-Agent"));int maxImageWidth = 320; // A default value int maxImageHeight = 480; // A default value try { PropertyValues propertyValues = identificationService.getPropertyValues(e, propertyRefs); PropertyValue displayWidth = propertyValues.getValue(displayWidthRef); PropertyValue displayHeight = propertyValues.getValue(displayHeightRef);if (displayWidth.exists()) { maxImageWidth = displayWidth.getInteger(); } if (displayHeight.exists()) { maxImageHeight = displayHeight.getInteger(); } } catch (Exception ex) { throw new RuntimeException(ex); }Results The following table shows the results of tests run against an application for server-side image adaptation using both WURFL and OpenDDR. 
These tests were performed on real devices and pages were served as XHTML BASIC (same as XHTML MP).

Platform           | Device                   | Property | WURFL max_image_width / max_image_height (1) | WURFL resolution_width / resolution_height | OpenDDR displayWidth / displayHeight
N/A                | Firefox desktop          | width    | 650 | 640 | Not supported
N/A                | Firefox desktop          | height   | 600 | 480 | Not supported
iOS                | iPhone 4S                | width    | 320 | 320 | 320
iOS                | iPhone 4S                | height   | 480 | 480 | 480
Android            | HTC One V                | width    | 320 | 540 | Not supported
Android            | HTC One V                | height   | 400 | 960 | Not supported
Android            | HTC Hero                 | width    | 300 | 320 | 320
Android            | HTC Hero                 | height   | 460 | 480 | 480
Windows Phone 7.5  | Nokia Lumia 710          | width    | 600 | 640 | 480
Windows Phone 7.5  | Nokia Lumia 710          | height   | 600 | 480 | 800
BlackBerry         | BlackBerry Bold 9900     | width    | 228 | 480 | 640
BlackBerry         | BlackBerry Bold 9900     | height   | 280 | 640 | 480
Symbian S60        | Nokia E52 (Webkit)       | width    | 234 | 240 | 240
Symbian S60        | Nokia E52 (Webkit)       | height   | 280 | 320 | 320
Symbian S60        | Nokia E52 (Opera Mobile) | width    | 240 | 240 | Not supported
Symbian S60        | Nokia E52 (Opera Mobile) | height   | 280 | 320 | Not supported
Windows Mobile 6.1 | HTC Touch HD T8282       | width    | 440 | 480 | 480
Windows Mobile 6.1 | HTC Touch HD T8282       | height   | 700 | 800 | 800

(1) The max_image_width capability is very handy: the viewable (usable) width of images, expressed in pixels. This capability refers to the image when used in “mobile mode”, i.e. when the page is served as XHTML MP, or it uses meta-tags such as “viewport”, “handheldfriendly”, “mobileoptimised” to disable “web rendering” and force a mobile user-experience.

Pros and Cons

WURFL
Pros: A device hierarchy that yields a high chance that capability values are inferred correctly, even when the device is not yet recognized. Lots and lots of capabilities. Easier to configure. Cleaner API.
Cons: Pricing and licensing.

OpenDDR
Pros: Free to use, even commercially. Growing community.
Cons: Limited capabilities. 
OpenDDR seems to be limited to W3C DDR Core Vocabulary. Reference: Comparing Device Description Repositories from our JCG partner Samuel Santos at the Samaxes blog....

Avoid Null Pointer Exception in Java

Null Pointer Exception is the most common and most annoying exception in Java. In this post I want to show how to avoid this undesired exception. First, let’s create an example that raises a Null Pointer Exception private Boolean isFinished(String status) { if (status.equalsIgnoreCase("Finish")) { return Boolean.TRUE; } else { return Boolean.FALSE; } } In the previous method, if we pass the value of the “status” variable as null, it will raise a Null Pointer Exception in the line below if (status.equalsIgnoreCase("Finish")) { So we should change the code to the code below to avoid the Null Pointer Exception private Boolean isFinished(String status) { if ("Finish".equalsIgnoreCase(status)) { return Boolean.TRUE; } else { return Boolean.FALSE; } } In the previous method, if we pass the value of the “status” variable as null, it will not raise a Null Pointer Exception. If you have object.equals("literal") you should replace it with "literal".equals(object). If you have object.equals(Enum.enumElement) you should replace it with Enum.enumElement.equals(object). In general, call equals() on the object that you are sure does not hold a null value. I will continue providing more best practices and advice. In the part 1 post I listed how to avoid NPE in the equalsIgnoreCase() method and enumerators; today I will write about the cases below 1- Empty Collection 2- Use some Methods 3- assert Keyword 4- Assert Class 5- Exception Handling 6- Too many dot syntax 7- StringUtils Class 1- Empty Collection An empty collection is a collection which has no elements. Some developers return a null value for a Collection which has no elements, but this is wrong; you should return Collections.EMPTY_LIST, Collections.EMPTY_SET or Collections.EMPTY_MAP. 
Wrong Code:

```java
public static List getEmployees() {
    List list = null;
    return list;
}
```

Correct Code:

```java
public static List getEmployees() {
    List list = Collections.EMPTY_LIST;
    return list;
}
```

2- Use some Methods

Use methods that let you check for the presence of an element before dereferencing it, like contains(), indexOf(), isEmpty(), containsKey(), containsValue() and hasNext(). Example:

```java
String myName = "Mahmoud A. El-Sayed";

List list = Collections.EMPTY_LIST;
boolean exist = list.contains(myName);
int index = list.indexOf(myName);
boolean isEmpty = list.isEmpty();

Map map = Collections.EMPTY_MAP;
exist = map.containsKey(myName);
exist = map.containsValue(myName);
isEmpty = map.isEmpty();

Set set = Collections.EMPTY_SET;
exist = set.contains(myName);
isEmpty = set.isEmpty();

Iterator iterator = list.iterator();
exist = iterator.hasNext();
```

3- assert Keyword

assert is a keyword introduced in Java 1.4 which enables you to test your assumptions about your code. Syntax of the assert keyword:

assert expression1;

expression1 is a boolean expression which is evaluated; if it is false, the system throws an AssertionError with no detail message.

assert expression1 : expression2;

expression1 is a boolean expression which is evaluated; if it is false, the system throws an AssertionError whose detail message is expression2.

For example, if I want to assert that an expression is not null, I should write the code below:

```java
public static String getManager(String employeeId) {
    assert (employeeId != null) : "employeeId must be not null";
    return "Mahmoud A. El-Sayed";
}
```

If I try to call the getManager method using getManager(null), it will raise "java.lang.AssertionError: employeeId must be not null".

Note: use the -enableassertions (or -ea) JVM option when running your code to enable assertions.

4- Assert Class

An Assert class exists in Spring's org.springframework.util package (repackaged in WebLogic under com.bea.core.repackaged.springframework.util) and has a lot of methods used for assertions.
Example:

```java
public static String getManager(String employeeId) {
    Assert.notNull(employeeId, "employeeId must be not null");
    Assert.hasLength(employeeId, "employeeId must have length greater than 0");
    return "Mahmoud A. El-Sayed";
}
```

If I try to call the getManager method using getManager(null), it will raise "java.lang.IllegalArgumentException: employeeId must be not null".

5- Exception Handling

I should take care in exception handling, using a try-catch statement or checking variables for null values. For example:

```java
public static String getManager(String employeeId) {
    return null;
}
```

I will call it using the code below:

```java
String managerId = getManager("A015");
System.out.println(managerId.toString());
```

This raises "java.lang.NullPointerException", so to handle this exception I should use try-catch or check for null values.

a- try-catch statement

I will change the calling code to the code below:

```java
String managerId = getManager("A015");
try {
    System.out.println(managerId.toString());
} catch (NullPointerException npe) {
    // write your code here
}
```

b- Checking for null values

I will change the calling code to the code below:

```java
String managerId = getManager("A015");
if (managerId != null) {
    System.out.println(managerId.toString());
} else {
    // write your code here
}
```

6- Too many dot syntax

Some developers use this approach because it means writing less code, but in the future it will not be easy to maintain or to handle exceptions.

Wrong Code:

```java
String attrValue = (String) findViewObject("VO_NAME").getCurrentRow().getAttribute("Attribute_NAME");
```

Correct Code:

```java
ViewObject vo = findViewObject("VO_NAME");
Row row = vo.getCurrentRow();
String attrValue = (String) row.getAttribute("Attribute_NAME");
```

7- StringUtils Class

The StringUtils class is part of the org.apache.commons.lang package. I can use it to avoid NPE, especially since all its methods are null-safe, for example StringUtils.isEmpty(), StringUtils.isBlank(), StringUtils.equals(), and many more.
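To make the null-safe behavior concrete without pulling in the Commons Lang dependency, here is a minimal sketch of helpers that mimic what StringUtils.isEmpty() and StringUtils.equals() do internally (the SafeStrings class name is mine; the real library covers many more cases):

```java
public class SafeStrings {

    // Mimics StringUtils.isEmpty(): null and "" both count as empty, no NPE possible.
    public static boolean isEmpty(String s) {
        return s == null || s.isEmpty();
    }

    // Mimics StringUtils.equals(): never throws NPE, and two nulls compare equal.
    public static boolean equals(String a, String b) {
        return (a == null) ? (b == null) : a.equals(b);
    }

    public static void main(String[] args) {
        System.out.println(isEmpty(null));          // true, no NPE
        System.out.println(equals(null, null));     // true
        System.out.println(equals("Finish", null)); // false, no NPE
    }
}
```

The key design point is that the helper owns the null check, so every call site gets the protection for free instead of repeating `if (s != null)` everywhere.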
You can read the specification of this class here.

Conclusion

Always take care of NullPointerException when writing code: think about how it could be thrown in your code, and write a //TODO comment for solving it later if you don't have time now.

Reference: Avoid Null Pointer Exception Part 1, Avoid Null Pointer Exception Part 2 from our JCG partner Mahmoud A. ElSayed at the Dive in Oracle blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.