


Hibernate locking patterns – How does Optimistic Lock Mode work

Explicit optimistic locking

In my previous post, I introduced the basic concepts of Java Persistence locking. The implicit locking mechanism prevents lost updates and it’s suitable for entities that we can actively modify. While implicit optimistic locking is a widespread technique, few happen to understand the inner workings of the explicit optimistic lock mode. Explicit optimistic locking can prevent data integrity anomalies when the locked entities are modified by some external mechanism.

The product ordering use case

Let’s say we have the following domain model. Our user, Alice, wants to order a product. The purchase goes through the following steps:

1. Alice loads a Product entity.
2. Because the price is convenient, she decides to order the Product.
3. The price engine batch job changes the Product price (taking into consideration currency changes, tax changes and marketing campaigns).
4. Alice issues the Order without noticing the price change.

Implicit locking shortcomings

First, we are going to test whether the implicit locking mechanism can prevent such anomalies.
Our test case looks like this:

doInTransaction(new TransactionCallable<Void>() {
    @Override
    public Void execute(Session session) {
        final Product product = (Product) session.get(Product.class, 1L);
        try {
            executeAndWait(new Callable<Void>() {
                @Override
                public Void call() throws Exception {
                    return doInTransaction(new TransactionCallable<Void>() {
                        @Override
                        public Void execute(Session _session) {
                            Product _product = (Product) _session.get(Product.class, 1L);
                            assertNotSame(product, _product);
                            _product.setPrice(BigDecimal.valueOf(14.49));
                            return null;
                        }
                    });
                }
            });
        } catch (Exception e) {
            fail(e.getMessage());
        }
        OrderLine orderLine = new OrderLine(product);
        session.persist(orderLine);
        return null;
    }
});

The test generates the following output:

#Alice selects a Product
Query:{[select abstractlo0_.id as id1_1_0_, abstractlo0_.description as descript2_1_0_, abstractlo0_.price as price3_1_0_, abstractlo0_.version as version4_1_0_ from product abstractlo0_ where abstractlo0_.id=?][1]}

#The price engine selects the Product as well
Query:{[select abstractlo0_.id as id1_1_0_, abstractlo0_.description as descript2_1_0_, abstractlo0_.price as price3_1_0_, abstractlo0_.version as version4_1_0_ from product abstractlo0_ where abstractlo0_.id=?][1]}

#The price engine changes the Product price
Query:{[update product set description=?, price=?, version=? where id=? and version=?][USB Flash Drive,14.49,1,1,0]}

#The price engine transaction is committed
DEBUG [pool-2-thread-1]: o.h.e.t.i.j.JdbcTransaction - committed JDBC Connection

#Alice inserts an OrderLine without realizing the Product price change
Query:{[insert into order_line (id, product_id, unitPrice, version) values (default, ?, ?, ?)][1,12.99,0]}

#Alice transaction is committed unaware of the Product state change
DEBUG [main]: o.h.e.t.i.j.JdbcTransaction - committed JDBC Connection

The implicit optimistic locking mechanism cannot detect external changes, unless the entities are also changed by the current Persistence Context.
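The failure mode in the log above does not need Hibernate to be demonstrated. Here is a minimal plain-Java sketch (a toy model, not code from the article; VersionedProduct is a made-up class) of the version-column semantics: an UPDATE only succeeds while the expected version still matches, but a transaction that never updates the Product — like Alice's, which only inserts an OrderLine — is never version-checked at all:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of a versioned row: an update only succeeds when the
// expected version still matches, mimicking "UPDATE ... WHERE version = ?".
class VersionedProduct {
    private double price = 12.99;
    private final AtomicInteger version = new AtomicInteger(0);

    int version() { return version.get(); }
    double price() { return price; }

    // Returns true when the optimistic check passes (the row was updated).
    synchronized boolean updatePrice(double newPrice, int expectedVersion) {
        if (version.get() != expectedVersion) {
            return false; // version mismatch -> lost update prevented
        }
        price = newPrice;
        version.incrementAndGet();
        return true;
    }
}

public class ImplicitLockDemo {
    public static void main(String[] args) {
        VersionedProduct product = new VersionedProduct();

        // Alice reads the product at version 0 but never updates it.
        int aliceSnapshot = product.version();

        // The price engine updates the product; its optimistic check passes.
        boolean engineUpdated = product.updatePrice(14.49, aliceSnapshot);

        // Alice only INSERTs an OrderLine, so no version check ever runs
        // for the Product in her transaction -- the stale read goes unnoticed.
        System.out.println("engine updated: " + engineUpdated
                + ", product version now: " + product.version());
    }
}
```

The key point the sketch makes: implicit optimistic locking is only triggered by modifications flushed from the current Persistence Context, so a read-only reference to the Product escapes verification entirely.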
To protect against issuing an Order for a stale Product state, we need to apply an explicit lock on the Product entity.

Explicit locking to the rescue

The Java Persistence LockModeType.OPTIMISTIC is a suitable candidate for such scenarios, so we are going to put it to the test. Hibernate comes with a LockModeConverter utility that is able to map any Java Persistence LockModeType to its associated Hibernate LockMode. For simplicity's sake, we are going to use the Hibernate-specific LockMode.OPTIMISTIC, which is effectively identical to its Java Persistence counterpart. According to the Hibernate documentation, the explicit OPTIMISTIC lock mode will:

assume that transaction(s) will not experience contention for entities. The entity version will be verified near the transaction end.

I will adjust our test case to use explicit OPTIMISTIC locking instead:

try {
    doInTransaction(new TransactionCallable<Void>() {
        @Override
        public Void execute(Session session) {
            final Product product = (Product) session.get(Product.class, 1L,
                new LockOptions(LockMode.OPTIMISTIC));

            executeAndWait(new Callable<Void>() {
                @Override
                public Void call() throws Exception {
                    return doInTransaction(new TransactionCallable<Void>() {
                        @Override
                        public Void execute(Session _session) {
                            Product _product = (Product) _session.get(Product.class, 1L);
                            assertNotSame(product, _product);
                            _product.setPrice(BigDecimal.valueOf(14.49));
                            return null;
                        }
                    });
                }
            });

            OrderLine orderLine = new OrderLine(product);
            session.persist(orderLine);
            return null;
        }
    });
    fail("It should have thrown OptimisticEntityLockException!");
} catch (OptimisticEntityLockException expected) {
    LOGGER.info("Failure: ", expected);
}

The new test version generates the following output:

#Alice selects a Product
Query:{[select abstractlo0_.id as id1_1_0_, abstractlo0_.description as descript2_1_0_, abstractlo0_.price as price3_1_0_, abstractlo0_.version as version4_1_0_ from product abstractlo0_ where abstractlo0_.id=?][1]}

#The price engine selects the Product as well
Query:{[select abstractlo0_.id as id1_1_0_, abstractlo0_.description as descript2_1_0_, abstractlo0_.price as price3_1_0_, abstractlo0_.version as version4_1_0_ from product abstractlo0_ where abstractlo0_.id=?][1]}

#The price engine changes the Product price
Query:{[update product set description=?, price=?, version=? where id=? and version=?][USB Flash Drive,14.49,1,1,0]}

#The price engine transaction is committed
DEBUG [pool-1-thread-1]: o.h.e.t.i.j.JdbcTransaction - committed JDBC Connection

#Alice inserts an OrderLine
Query:{[insert into order_line (id, product_id, unitPrice, version) values (default, ?, ?, ?)][1,12.99,0]}

#Alice transaction verifies the Product version
Query:{[select version from product where id =?][1]}

#Alice transaction is rolled back due to Product version mismatch
INFO [main]: c.v.h.m.l.c.LockModeOptimisticTest - Failure:
org.hibernate.OptimisticLockException: Newer version [1] of entity [[com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.AbstractLockModeOptimisticTest$Product#1]] found in database

The operation flow goes like this: the Product version is checked towards the transaction end, and any version mismatch triggers an exception and a transaction rollback.

Race condition risk

Unfortunately, the application-level version check and the transaction commit are not an atomic operation.
The check happens in EntityVerifyVersionProcess, during the before-transaction-commit stage:

public class EntityVerifyVersionProcess implements BeforeTransactionCompletionProcess {

    private final Object object;
    private final EntityEntry entry;

    /**
     * Constructs an EntityVerifyVersionProcess
     *
     * @param object The entity instance
     * @param entry The entity's referenced EntityEntry
     */
    public EntityVerifyVersionProcess(Object object, EntityEntry entry) {
        this.object = object;
        this.entry = entry;
    }

    @Override
    public void doBeforeTransactionCompletion(SessionImplementor session) {
        final EntityPersister persister = entry.getPersister();
        final Object latestVersion = persister.getCurrentVersion(entry.getId(), session);
        if (!entry.getVersion().equals(latestVersion)) {
            throw new OptimisticLockException(
                object,
                "Newer version [" + latestVersion +
                "] of entity [" + MessageHelper.infoString(entry.getEntityName(), entry.getId()) +
                "] found in database"
            );
        }
    }
}

The AbstractTransactionImpl.commit() method call will execute the before-transaction-commit stage and then commit the actual transaction:

@Override
public void commit() throws HibernateException {
    if (localStatus != LocalStatus.ACTIVE) {
        throw new TransactionException("Transaction not successfully started");
    }

    LOG.debug("committing");

    beforeTransactionCommit();

    try {
        doCommit();
        localStatus = LocalStatus.COMMITTED;
        afterTransactionCompletion(Status.STATUS_COMMITTED);
    } catch (Exception e) {
        localStatus = LocalStatus.FAILED_COMMIT;
        afterTransactionCompletion(Status.STATUS_UNKNOWN);
        throw new TransactionException("commit failed", e);
    } finally {
        invalidate();
        afterAfterCompletion();
    }
}

Between the check and the actual transaction commit, there is a very short time window for some other transaction to silently commit a Product price change.

Conclusion

The explicit OPTIMISTIC locking strategy offers only limited protection against stale state anomalies.
This race condition is a typical case of a time-of-check-to-time-of-use (TOCTOU) data integrity anomaly. In my next article, I will explain how we can fix this example using the explicit lock upgrade technique. The code is available on GitHub.

Reference: Hibernate locking patterns – How does Optimistic Lock Mode work from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog.
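The check-then-commit window can be condensed into a plain-Java sketch (a toy model of the mechanism, not Hibernate code): the version verification passes, and only afterwards does the real commit happen, leaving a gap in which another transaction can slip in.

```java
public class VerifyThenCommitDemo {
    // Toy "database" state shared by concurrent transactions.
    static int dbVersion = 0;
    static int dbPriceCents = 1299;

    // Analogue of EntityVerifyVersionProcess: passes while the snapshot
    // version still matches the current database version.
    static boolean verifyVersion(int snapshotVersion) {
        return dbVersion == snapshotVersion;
    }

    public static void main(String[] args) {
        int aliceSnapshot = dbVersion; // Alice read the Product at version 0

        // 1. before-transaction-completion: the version check passes...
        boolean checkPassed = verifyVersion(aliceSnapshot);

        // 2. ...but in the window before doCommit(), another transaction
        //    silently commits a price change:
        dbPriceCents = 1449;
        dbVersion++;

        // 3. Alice's commit now proceeds even though her state is stale.
        System.out.println("check passed: " + checkPassed
                + ", but db is already at version " + dbVersion);
    }
}
```

In a real database the window is tiny, but it exists; this is why the next article's lock-upgrade technique (verifying the version with an UPDATE inside the committing transaction) is needed for full protection.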

Learning Netflix Governator – Part 2

To continue from the previous entry on some basic learnings on Netflix Governator, here I will cover one more enhancement that Netflix Governator brings to Google Guice – lifecycle management.

Lifecycle management essentially provides hooks into the different lifecycle phases that an object is taken through. To quote the wiki article on Governator:

Allocation (via Guice)
        |
        v
Pre Configuration
        |
        v
Configuration
        |
        v
Set Resources
        |
        v
Post Construction
        |
        v
Validation and Warm Up
        |
        v
-- application runs until termination, then... --
        |
        v
Pre Destroy

To illustrate this, consider the following code:

import com.google.inject.Inject;
import com.netflix.governator.annotations.AutoBindSingleton;
import sample.dao.BlogDao;
import sample.model.BlogEntry;
import sample.service.BlogService;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;

@AutoBindSingleton(baseClass = BlogService.class)
public class DefaultBlogService implements BlogService {
    private final BlogDao blogDao;

    @Inject
    public DefaultBlogService(BlogDao blogDao) {
        this.blogDao = blogDao;
    }

    @Override
    public BlogEntry get(long id) {
        return this.blogDao.findById(id);
    }

    @PostConstruct
    public void postConstruct() {
        System.out.println("Post-construct called!!");
    }

    @PreDestroy
    public void preDestroy() {
        System.out.println("Pre-destroy called!!");
    }
}

Here two methods have been annotated with the @PostConstruct and @PreDestroy annotations to hook into these specific phases of Governator’s lifecycle for this object. The neat thing is that these annotations are not Governator-specific but are JSR-250 annotations that are now baked into the JDK.
Calling the test for this class appropriately calls the annotated methods. Here is a sample test:

import com.google.inject.Injector;
import com.netflix.governator.guice.LifecycleInjector;
import com.netflix.governator.lifecycle.LifecycleManager;
import org.junit.Test;
import sample.service.BlogService;

import static org.hamcrest.MatcherAssert.*;
import static org.hamcrest.Matchers.*;

public class SampleWithGovernatorTest {

    @Test
    public void testExampleBeanInjection() throws Exception {
        Injector injector = LifecycleInjector
                .builder()
                .withModuleClass(SampleModule.class)
                .usingBasePackages("")
                .build()
                .createInjector();

        LifecycleManager manager = injector.getInstance(LifecycleManager.class);

        manager.start();

        BlogService blogService = injector.getInstance(BlogService.class);
        assertThat(blogService.get(1L), is(notNullValue()));
        manager.close();
    }
}

Spring Framework has supported a similar mechanism for a long time – so the exact same JSR-250 based annotations work for Spring beans too. If you are interested in exploring this further, my GitHub project has samples with lifecycle management.

Reference: Learning Netflix Governator – Part 2 from our JCG partner Biju Kunjummen at the all and sundry blog.
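The way containers like Governator (or Spring) honor such annotations is essentially reflective method discovery. The following self-contained sketch shows the idea; note that the annotations here are stand-ins I defined for javax.annotation.PostConstruct/PreDestroy (which are no longer bundled with recent JDKs), so this is an illustration of the mechanism, not Governator's actual implementation:

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Stand-ins for the JSR-250 annotations, keeping the sketch dependency-free.
@Retention(RetentionPolicy.RUNTIME) @interface PostConstruct {}
@Retention(RetentionPolicy.RUNTIME) @interface PreDestroy {}

class Service {
    final StringBuilder log = new StringBuilder();
    @PostConstruct void init()  { log.append("init;"); }
    @PreDestroy    void close() { log.append("close;"); }
}

public class LifecycleSketch {
    // Invoke every no-arg method on the bean carrying the given annotation.
    static void fire(Object bean, Class<? extends Annotation> phase) {
        for (Method m : bean.getClass().getDeclaredMethods()) {
            if (m.isAnnotationPresent(phase)) {
                try {
                    m.setAccessible(true);
                    m.invoke(bean);
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(e);
                }
            }
        }
    }

    public static void main(String[] args) {
        Service s = new Service();          // Allocation (via Guice)
        fire(s, PostConstruct.class);       // Post Construction phase
        // ... application runs ...
        fire(s, PreDestroy.class);          // Pre Destroy phase
        System.out.println(s.log);
    }
}
```

This is why the same annotated bean works unchanged under Governator, Spring, or a Java EE container: each of them performs this scan at the appropriate point of its own lifecycle.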

SSL with WildFly 8 and Undertow

I’ve been working my way through some security topics on WildFly 8 and stumbled upon some configuration options that are not very well documented. One of them is the TLS/SSL configuration for the new web subsystem, Undertow. There’s plenty of documentation for the older web subsystem, which is indeed still available, but here is a short how-to for configuring it the new way.

Generate a keystore and self-signed certificate

The first step is to generate a certificate. In this case, it’s going to be a self-signed one, which is enough to show how to configure everything. I’m going to use the plain Java way of doing it, so all you need is the JRE keytool. Java keytool is a key and certificate management utility. It allows users to manage their own public/private key pairs and certificates, and to cache certificates. Keytool stores the keys and certificates in what is called a keystore, which by default is implemented as a file and protects private keys with a password. A keystore contains the private key and any certificates necessary to complete a chain of trust and establish the trustworthiness of the primary certificate.

Please keep in mind that an SSL certificate serves two essential purposes: distributing the public key and verifying the identity of the server, so users know they aren’t sending their information to the wrong server. It can only properly verify the identity of the server when it is signed by a trusted third party. A self-signed certificate is a certificate that is signed by itself rather than by a trusted authority.

Switch to a command line and execute the following command, which has some defaults set and also prompts you to enter some more information:

$> keytool -genkey -alias mycert -keyalg RSA -sigalg MD5withRSA -keystore my.jks -storepass secret -keypass secret -validity 9999

What is your first and last name?
  [Unknown]:  localhost
What is the name of your organizational unit?
  [Unknown]:  myfear
What is the name of your organization?
  [Unknown]:
What is the name of your City or Locality?
  [Unknown]:  Grasbrun
What is the name of your State or Province?
  [Unknown]:  Bavaria
What is the two-letter country code for this unit?
  [Unknown]:  ME
Is CN=localhost, OU=myfear,, L=Grasbrun, ST=Bavaria, C=ME correct?
  [no]:  yes

Make sure to put your desired “hostname” into the “first and last name” field, otherwise you might run into issues while permanently accepting this certificate as an exception in some browsers. Chrome doesn’t have an issue with that, though. The command generates a my.jks file in the folder where it is executed. Copy this to your WildFly configuration directory (%JBOSS_HOME%/standalone/configuration).

Configure the additional WildFly security realm

The next step is to configure the new keystore as a server identity for SSL in the security-realms section of standalone.xml (if you’re using the -ha or other variants, edit those):

<management>
    <security-realms>
        <!-- ... -->
        <security-realm name="UndertowRealm">
            <server-identities>
                <ssl>
                    <keystore path="my.jks" relative-to="jboss.server.config.dir" keystore-password="secret" alias="mycert" key-password="secret"/>
                </ssl>
            </server-identities>
        </security-realm>
        <!-- ... -->
    </security-realms>
</management>

And you’re ready for the next step.

Configure the Undertow subsystem for SSL

If you’re running with the default server, add the https-listener to the undertow subsystem:

<subsystem xmlns="urn:jboss:domain:undertow:1.2">
    <!-- ... -->
    <server name="default-server">
        <!-- ... -->
        <https-listener name="https" socket-binding="https" security-realm="UndertowRealm"/>
        <!-- ... -->
    </server>
</subsystem>

That’s it; now you’re ready to connect to the SSL port of your instance at https://localhost:8443/. Note that the browser shows a privacy error, because the self-signed certificate is untrusted.
If you need to use a fully signed certificate, you will usually get a PEM file from the certificate authority. In this case, you need to import it into the keystore; this Stack Overflow thread may help you with that.

Reference: SSL with WildFly 8 and Undertow from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.
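The keystore that keytool produces can also be created and inspected programmatically through the JDK's java.security.KeyStore API. The following self-contained sketch (not part of the original how-to; it works purely in memory) shows the round trip of creating, password-protecting, storing and reloading a keystore — the same -storepass mechanics the keytool command uses:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;

public class KeystoreRoundTrip {

    // Creates an empty, password-protected JKS keystore in memory,
    // serializes it, reloads it with the same password and returns
    // the number of entries it contains.
    static int roundTripEntryCount(char[] password) {
        try {
            KeyStore ks = KeyStore.getInstance("JKS");
            ks.load(null, password);                 // initialize an empty store

            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ks.store(out, password);                 // protected like "-storepass secret"

            KeyStore reloaded = KeyStore.getInstance("JKS");
            reloaded.load(new ByteArrayInputStream(out.toByteArray()), password);
            return reloaded.size();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // A freshly created store has no entries until a key pair is imported.
        System.out.println("entries: "
                + roundTripEntryCount("secret".toCharArray()));
    }
}
```

Loading with a wrong password throws an IOException, which is exactly the failure you see when the keystore-password in standalone.xml does not match the one used at generation time.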

Display a string list in an Android ListView

Showing a list of items is a very common pattern in mobile applications. This pattern comes up often when I make a tutorial: I often need to interact with data, but I don’t want to spend a lot of time just on displaying that data when it’s not the point of the tutorial. So, what is the easiest way to display a simple list of values in Android, like a list of strings?

In the Android SDK, the widget used to show lists of items is a ListView. A ListView must always get its data from an adapter class. That adapter class manages the layout used to display each individual item, how it should behave and the data itself. All the other widgets that display multiple items in the Android SDK, like the spinner and the grid, also need an adapter.

When I was making the knitting row counter for my series on saving data with Android, I needed to show a list of all the projects in the database but I wanted to do the absolute minimum. The names of the projects are strings, so I used the ArrayAdapter class from the Android SDK to display that list of strings. Here is how to create the adapter and set it on the ListView to display the list of items:

private ListView mListView;

@Override
protected void onStart() {
    super.onStart();

    // Add the project titles to display in a list for the listview adapter.
    List<String> listViewValues = new ArrayList<String>();
    for (Project currentProject : mProjects) {
        listViewValues.add(currentProject.getName());
    }

    // Initialise a listview adapter with the project titles and use it
    // in the listview to show the list of projects.
    mListView = (ListView) findViewById(;
    ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
            android.R.layout.simple_list_item_1,
            listViewValues.toArray(new String[listViewValues.size()]));
    mListView.setAdapter(adapter);
}

After the adapter for the list is set, you can also add an action to execute when an item is clicked.
For the row counter application, clicking an item opens a new activity showing the details of the selected project:

private ListView mListView;

@Override
protected void onStart() {
    [...]

    // Sets a click listener on the elements of the listview so the
    // details of the clicked project can be opened.
    mListView.setOnItemClickListener(new OnItemClickListener() {

        @Override
        public void onItemClick(AdapterView<?> parent, View view,
                                int position, long id) {
            // Get the clicked project.
            Project project = mProjects.get(position);
            // Open the activity for the selected project.
            Intent projectIntent = new Intent(MainActivity.this,
                    ProjectActivity.class);
            projectIntent.putExtra("project_id", project.getId());
            MainActivity.this.startActivity(projectIntent);
        }
    });
}

If you need to go further than the default layout, you’ll need to create your own custom layout and adapter to show the data the way you want, but what is shown here is enough to get started displaying data. If you want to run the example, you can find the complete RowCounter project on my GitHub.

Reference: Display a string list in an Android ListView from our JCG partner Cindy Potvin at the Web, Mobile and Android Programming blog.

Self-Signed Certificate for Apache TomEE (and Tomcat)

Probably in most of your Java EE projects you will have part or the whole system with SSL support (https), so browsers and servers can communicate over a secured connection. This means that the data being sent is encrypted, transmitted and finally decrypted before processing. The problem is that sometimes the official “keystore” is only available for the production environment and cannot be used on development/testing machines. One possible approach is for a member of the team to create a non-official “keystore” and share it with all members, so everyone can test locally using https, and likewise for testing/QA environments. But using this approach you run into one problem: when you run the application, you will receive a warning/error message that the certificate is untrusted. You can live with this, but we can also do better and avoid this situation by creating a self-signed SSL certificate.

In this post we are going to see how to create and enable SSL in Apache TomEE (and Tomcat) with a self-signed certificate.

The first thing to do is to install openssl. This step depends on your OS; in my case I run Ubuntu 14.04. Then we need to generate a 1024-bit RSA private key using the Triple-DES algorithm, stored in PEM format. I am going to use the {userhome}/certs directory for all required resources, but it can be changed without any problem.

Generate the private key:

openssl genrsa -des3 -out server.key 1024

Here we must introduce a password; for this example I am going to use apachetomee (please don’t do that in production).

Generate the CSR:

The next step is to generate a CSR (Certificate Signing Request). Ideally this file would be generated and sent to a Certificate Authority such as Thawte or Verisign, who would verify the identity. But in our case we are going to self-sign the CSR with the previous private key.

openssl req -new -key server.key -out server.csr

One of the prompts will be for “Common Name (e.g. server FQDN or YOUR name)”.
It is important that this field be filled in with the fully qualified domain name of the server to be protected by SSL. In the case of a development machine you can set “localhost”. Now that we have the private key and the CSR, we are ready to generate an X.509 self-signed certificate valid for one year by running the next command.

Generate a self-signed certificate:

openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

To install the certificate inside Apache TomEE (and Tomcat) we need to use a keystore, which is generated with the keytool command. To use this tool, the certificate should be a PKCS12 certificate. For this reason we are going to use openssl to transform the certificate to PKCS12 format by running:

Prepare for Apache TomEE:

openssl pkcs12 -export -in server.crt -inkey server.key -out server.p12 -name test_server -caname root_ca

We are almost done; now we only need to create the keystore. I have used the same password to protect the keystore as for all the other resources, which is apachetomee.

keytool -importkeystore -destkeystore keystore.jks -srckeystore server.p12 -srcstoretype PKCS12 -srcalias test_server -destalias test_server

And now we have a keystore.jks file created at {userhome}/certs.

Installing the keystore into Apache TomEE

The process of installing a keystore into Apache TomEE (and Tomcat) is described in the official documentation, but in summary the only thing to do is to open ${TOMEE_HOME}/conf/server.xml and define the SSL connector:

<Service name="Catalina">
    <Connector port="8443" protocol="HTTP/1.1"
               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
               keystoreFile="${user.home}/certs/keystore.jks" keystorePass="apachetomee"
               clientAuth="false" sslProtocol="TLS" />
</Service>

Note that you need to set the keystore location (in my case {userhome}/certs/keystore.jks) and the password used to open the keystore, which is apachetomee.
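The interactive openssl prompts above can be scripted, which is handy for CI or for regenerating dev certificates. A sketch of a non-interactive variant (pass:apachetomee and /CN=localhost are the example values from this post; openssl must be installed):

```shell
# Key, CSR and self-signed certificate in one non-interactive pass.
openssl genrsa -des3 -passout pass:apachetomee -out server.key 1024
openssl req -new -key server.key -passin pass:apachetomee \
    -subj "/CN=localhost" -out server.csr
openssl x509 -req -days 365 -in server.csr -signkey server.key \
    -passin pass:apachetomee -out server.crt
```

The -passout/-passin options replace the password prompts, and -subj replaces the distinguished-name questionnaire, so the whole flow can run in a script.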
Preparing the browser

Before starting the server we need to add server.crt as a trusted authority in the browser.

In Firefox: Preferences -> Advanced -> View Certificates -> Authorities (tab), then import the server.crt file.

In Chrome: Settings -> HTTPS/SSL -> Manage Certificates… -> Authorities (tab), then import the server.crt file.

Now you are ready to start Apache TomEE (or Tomcat) and you can navigate to any deployed application using https and port 8443. And that’s all; now we can run tests (with Selenium) without worrying about untrusted-certificate warnings.

Reference: Self-Signed Certificate for Apache TomEE (and Tomcat) from our JCG partner Alex Soto at the One Jar To Rule Them All blog.

NoSQL with Hibernate OGM – Part one: Persisting your first Entities

The first final version of Hibernate OGM is out and the team has recovered a bit from the release frenzy, so they thought about starting a series of tutorial-style blog posts which give you the chance to get started easily with Hibernate OGM. Thanks to Gunnar Morling (@gunnarmorling) for creating this tutorial.

Introduction

Don’t know what Hibernate OGM is? Hibernate OGM is the newest project under the Hibernate umbrella and allows you to persist entity models in different NoSQL stores via the well-known JPA. We’ll cover these topics in the following weeks:

- Persisting your first entities (this instalment)
- Querying for your data
- Running on WildFly
- Running with CDI on Java SE
- Storing data into two different stores in the same application

If you’d like us to discuss any other topics, please let us know. Just add a comment below or tweet your suggestions to us.

In this first part of the series we are going to set up a Java project with the required dependencies, create some simple entities and write/read them to and from the store. We’ll start with the Neo4j graph database and then we’ll switch to the MongoDB document store with only a small configuration change.

Project set-up

Let’s first create a new Java project with the required dependencies. We’re going to use Maven as the build tool in the following, but of course Gradle or others would work equally well. Add this to the dependencyManagement block of your pom.xml:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.hibernate.ogm</groupId>
            <artifactId>hibernate-ogm-bom</artifactId>
            <type>pom</type>
            <version>4.1.1.Final</version>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

This will make sure that you are using matching versions of the Hibernate OGM modules and their dependencies. Then add the following to the dependencies block:
<dependencies>
    <dependency>
        <groupId>org.hibernate.ogm</groupId>
        <artifactId>hibernate-ogm-neo4j</artifactId>
    </dependency>
    <dependency>
        <groupId>org.jboss.jbossts</groupId>
        <artifactId>jbossjta</artifactId>
    </dependency>
</dependencies>

The dependencies are:

- The Hibernate OGM module for working with an embedded Neo4j database; this will pull in all other required modules such as Hibernate OGM core and the Neo4j driver. When using MongoDB, you’d swap it with hibernate-ogm-mongodb.
- JBoss’ implementation of the Java Transaction API (JTA), which is needed when not running within a Java EE container such as WildFly.

The domain model

Our example domain model is made up of three classes: Hike, HikeSection and Person. There is a composition relationship between Hike and HikeSection, i.e. a hike comprises several sections whose life cycle is fully dependent on the Hike. The list of hike sections is ordered, and this order needs to be maintained when persisting a hike and its sections. The association between Hike and Person (acting as hike organizer) is a bi-directional many-to-one/one-to-many relationship: one person can organize zero or more hikes, whereas one hike has exactly one person acting as its organizer.

Mapping the entities

Now let’s map the domain model by creating the entity classes and annotating them with the required meta-data. Let’s start with the Person class:

@Entity
public class Person {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String firstName;
    private String lastName;

    @OneToMany(mappedBy = "organizer", cascade = CascadeType.PERSIST)
    private Set<Hike> organizedHikes = new HashSet<>();

    // constructors, getters and setters...
}

The entity type is marked as such using the @Entity annotation, while the property representing the identifier is annotated with @Id.
Instead of assigning ids manually, Hibernate OGM can take care of this, offering several id generation strategies such as (emulated) sequences, UUIDs and more. Using a UUID generator is usually a good choice, as it ensures portability across different NoSQL datastores and makes id generation fast and scalable. But depending on the store you work with, you could also use store-specific id types such as object ids in the case of MongoDB (see the reference guide for the details).

Finally, @OneToMany marks the organizedHikes property as an association between entities. As it is a bi-directional association, the mappedBy attribute is required for specifying the side of the association which is in charge of managing it. Specifying the cascade type PERSIST ensures that persisting a person will automatically cause its associated hikes to be persisted, too.

Next is the Hike class:

@Entity
public class Hike {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String description;
    private Date date;
    private BigDecimal difficulty;

    @ManyToOne
    private Person organizer;

    @ElementCollection
    @OrderColumn(name = "sectionNo")
    private List<HikeSection> sections;

    // constructors, getters and setters...
}

Here the @ManyToOne annotation marks the other side of the bi-directional association between Hike and its organizer. As HikeSection is supposed to be dependent on Hike, the sections list is mapped via @ElementCollection. To ensure the order of sections is maintained in the datastore, @OrderColumn is used. This will add one extra “column” to the persisted records which holds the order number of each section.

Finally, the HikeSection class:

@Embeddable
public class HikeSection {

    private String start;
    private String end;

    // constructors, getters and setters...
}

Unlike Person and Hike, it is not mapped via @Entity but using @Embeddable.
This means it is always part of another entity (Hike in this case) and as such has no identity of its own. Therefore it doesn’t declare any @Id property.

Note that these mappings would look exactly the same had you been using Hibernate ORM with a relational datastore. And indeed that’s one of the promises of Hibernate OGM: make the migration between the relational and the NoSQL paradigms as easy as possible!

Creating the persistence.xml

With the entity classes in place, one more thing is missing: JPA’s persistence.xml descriptor. Create it under src/main/resources/META-INF/persistence.xml:

<?xml version="1.0" encoding="utf-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
             version="2.0">

    <persistence-unit name="hikePu" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>

        <properties>
            <property name="hibernate.ogm.datastore.provider" value="neo4j_embedded" />
            <property name="hibernate.ogm.datastore.database" value="HikeDB" />
            <property name="hibernate.ogm.neo4j.database_path" value="target/test_data_dir" />
        </properties>
    </persistence-unit>
</persistence>

If you have worked with JPA before, this persistence unit definition should look very familiar. The main difference to using classic Hibernate ORM on top of a relational database is the specific provider class we need to specify for Hibernate OGM: org.hibernate.ogm.jpa.HibernateOgmPersistence.

In addition, some properties specific to Hibernate OGM and the chosen back end are defined to set:

- the back end to use (an embedded Neo4j graph database in this case)
- the name of the Neo4j database
- the directory for storing the Neo4j database files

Depending on your usage and the back end, other properties might be required, e.g. for setting a host, user name, password etc. You can find all available properties in a class named <BACK END>Properties, e.g. Neo4jProperties, MongoDBProperties and so on.
Saving and loading an entity

With all these bits in place it’s time to persist (and load) some entities. Create a simple JUnit test shell for doing so:

public class HikeTest {

    private static EntityManagerFactory entityManagerFactory;

    @BeforeClass
    public static void setUpEntityManagerFactory() {
        entityManagerFactory = Persistence.createEntityManagerFactory( "hikePu" );
    }

    @AfterClass
    public static void closeEntityManagerFactory() {
        entityManagerFactory.close();
    }
}

The two methods manage an entity manager factory for the persistence unit defined in persistence.xml. It is kept in a field so it can be used for several test methods (remember, entity manager factories are rather expensive to create, so they should be initialized once and kept around for re-use). Then create a test method persisting and loading some data:

@Test
public void canPersistAndLoadPersonAndHikes() {
    EntityManager entityManager = entityManagerFactory.createEntityManager();

    entityManager.getTransaction().begin();

    // create a Person
    Person bob = new Person( "Bob", "McRobb" );

    // and two hikes
    Hike cornwall = new Hike(
        "Visiting Land's End", new Date(), new BigDecimal( "5.5" ),
        new HikeSection( "Penzance", "Mousehole" ),
        new HikeSection( "Mousehole", "St. Levan" ),
        new HikeSection( "St. Levan", "Land's End" )
    );
    Hike isleOfWight = new Hike(
        "Exploring Carisbrooke Castle", new Date(), new BigDecimal( "7.5" ),
        new HikeSection( "Freshwater", "Calbourne" ),
        new HikeSection( "Calbourne", "Carisbrooke Castle" )
    );

    // let Bob organize the two hikes
    cornwall.setOrganizer( bob );
    bob.getOrganizedHikes().add( cornwall );

    isleOfWight.setOrganizer( bob );
    bob.getOrganizedHikes().add( isleOfWight );

    // persist organizer (will be cascaded to hikes)
    entityManager.persist( bob );

    entityManager.getTransaction().commit();

    // get a new EM to make sure data is actually retrieved from the store and not Hibernate's internal cache
    entityManager.close();
    entityManager = entityManagerFactory.createEntityManager();

    // load it back
    entityManager.getTransaction().begin();

    Person loadedPerson = entityManager.find( Person.class, bob.getId() );
    assertThat( loadedPerson ).isNotNull();
    assertThat( loadedPerson.getFirstName() ).isEqualTo( "Bob" );
    assertThat( loadedPerson.getOrganizedHikes() )
        .onProperty( "description" )
        .containsOnly( "Visiting Land's End", "Exploring Carisbrooke Castle" );

    entityManager.getTransaction().commit();

    entityManager.close();
}

Note how both actions happen within a transaction. Neo4j is a fully transactional datastore which can be controlled nicely via JPA’s transaction API. Within an actual application one would probably work with a less verbose approach for transaction control. Depending on the chosen back end and the kind of environment your application runs in (e.g. a Java EE container such as WildFly), you could take advantage of declarative transaction management via CDI or EJB. But let’s save that for another time.

Having persisted some data, you can examine the entities persisted by the test using the nice web console coming with Neo4j.

Hibernate OGM aims for the most natural mapping possible for the datastore you are targeting.
In the case of Neo4j as a graph datastore this means that any entity will be mapped to a corresponding node. The entity properties are mapped as node properties (see the black box describing one of the Hike nodes). Any property types not natively supported will be converted as required; e.g. that’s the case for the date property, which is persisted as an ISO-formatted String. Additionally, each entity node has the label ENTITY (to distinguish it from nodes of other types) and a label specifying its entity type (Hike in this case). Associations are mapped as relationships between nodes, with the association role being mapped to the relationship type. Note that Neo4j does not have the notion of embedded objects. Therefore, the HikeSection objects are mapped as nodes with the label EMBEDDED, linked with the owning Hike nodes. The order of sections is persisted via a property on the relationship.

Switching to MongoDB

One of Hibernate OGM’s promises is to allow using the same API – namely, JPA – to work with different NoSQL stores. So let’s see how that holds up and make use of MongoDB which, unlike Neo4j, is a document datastore and persists data in a JSON-like representation. To do so, first replace the Neo4j back end with the following one:

...
<dependency>
    <groupId>org.hibernate.ogm</groupId>
    <artifactId>hibernate-ogm-mongodb</artifactId>
</dependency>
...

Then update the configuration in persistence.xml to work with MongoDB as the back end, using the properties accessible through MongoDBProperties to give host name and credentials matching your environment (if you don’t have MongoDB installed yet, you can download it here):

...
<properties>
    <property name="hibernate.ogm.datastore.provider" value="mongodb" />
    <property name="hibernate.ogm.datastore.database" value="HikeDB" />
    <property name="" value="" />
    <property name="hibernate.ogm.datastore.username" value="db_user" />
    <property name="hibernate.ogm.datastore.password" value="top_secret!" />
</properties>
...
And that’s all you need to do to persist your entities in MongoDB rather than Neo4j. If you now run the test again, you’ll find the following BSON documents in your datastore:

# Collection "Person"
{
    "_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0",
    "firstName" : "Bob",
    "lastName" : "McRobb",
    "organizedHikes" : [
        "a78d731f-eff0-41f5-88d6-951f0206ee67",
        "32384eb4-717a-43dc-8c58-9aa4c4e505d1"
    ]
}

# Collection "Hike"
{
    "_id" : "a78d731f-eff0-41f5-88d6-951f0206ee67",
    "date" : ISODate("2015-01-16T11:59:48.928Z"),
    "description" : "Visiting Land's End",
    "difficulty" : "5.5",
    "organizer_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0",
    "sections" : [
        { "sectionNo" : 0, "start" : "Penzance", "end" : "Mousehole" },
        { "sectionNo" : 1, "start" : "Mousehole", "end" : "St. Levan" },
        { "sectionNo" : 2, "start" : "St. Levan", "end" : "Land's End" }
    ]
}
{
    "_id" : "32384eb4-717a-43dc-8c58-9aa4c4e505d1",
    "date" : ISODate("2015-01-16T11:59:48.928Z"),
    "description" : "Exploring Carisbrooke Castle",
    "difficulty" : "7.5",
    "organizer_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0",
    "sections" : [
        { "sectionNo" : 1, "start" : "Calbourne", "end" : "Carisbrooke Castle" },
        { "sectionNo" : 0, "start" : "Freshwater", "end" : "Calbourne" }
    ]
}

Again, the mapping is very natural and just as you’d expect when working with a document store like MongoDB. The bi-directional one-to-many/many-to-one association between Person and Hike is mapped by storing the referenced id(s) on either side. When loading back the data, Hibernate OGM will resolve the ids and let you navigate the association from one object to the other. Element collections are mapped using MongoDB’s capabilities for storing hierarchical structures. Here the sections of a hike are mapped to an array within the document of the owning hike, with an additional field sectionNo to maintain the collection order. This allows loading an entity and its embedded elements very efficiently via a single round-trip to the datastore.
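The id-based association mapping described above can be illustrated with a small plain-Java sketch. No Hibernate OGM is involved here, and the class and method names are made up for illustration: loading the Person conceptually means collecting the Hike documents whose _id values appear in its organizedHikes array.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class IdResolutionDemo {

    // Simplified stand-in for the Hike BSON documents shown above
    static class HikeDoc {
        final String id;
        final String description;

        HikeDoc(String id, String description) {
            this.id = id;
            this.description = description;
        }
    }

    // Conceptually what a mapper does when navigating the association:
    // look each stored id up in the Hike collection
    static List<HikeDoc> resolveHikes(List<String> organizedHikeIds,
                                      Map<String, HikeDoc> hikesById) {
        List<HikeDoc> hikes = new ArrayList<>();
        for (String id : organizedHikeIds) {
            hikes.add(hikesById.get(id));
        }
        return hikes;
    }

    public static void main(String[] args) {
        Map<String, HikeDoc> hikesById = Map.of(
                "a78d731f", new HikeDoc("a78d731f", "Visiting Land's End"),
                "32384eb4", new HikeDoc("32384eb4", "Exploring Carisbrooke Castle"));
        // the ids stored on the Person side drive the lookup
        List<HikeDoc> bobsHikes = resolveHikes(List.of("a78d731f", "32384eb4"), hikesById);
        System.out.println(bobsHikes.get(0).description); // Visiting Land's End
    }
}
```

The ids are shortened for readability; in the real documents they are full UUID strings.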
Wrap-up

In this first instalment of NoSQL with Hibernate OGM 101 you’ve learned how to set up a project with the required dependencies, map some entities and associations and persist them in Neo4j and MongoDB. All this happens via the well-known JPA API. So if you have worked with Hibernate ORM and JPA in the past on top of relational databases, it has never been easier to dive into the world of NoSQL. At the same time, each store is geared towards certain use cases and thus provides specific features and configuration options. Naturally, those cannot be exposed through a generic API such as JPA. Therefore Hibernate OGM lets you make use of native NoSQL queries and allows you to configure store-specific settings via its flexible option system. You can find the complete example code of this blog post on GitHub. Just fork it and play with it as you like. Of course storing entities and getting them back via their id is only the beginning. In any real application you’d want to run queries against your data and you’d likely also want to take advantage of some specific features and settings of your chosen NoSQL store. We’ll come to that in the next parts of this series, so stay tuned!

Reference: NoSQL with Hibernate OGM – Part one: Persisting your first Entities from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

Development Horror Story – Release Nightmare

Everyone has good stories about releases that went wrong, right? I’m no exception and I have a few good ones from my development career. These were usually very stressful at the time, but now my teammates and I can’t talk about these stories without laughing.

History

I think this happened around 2009. My team and I had to maintain a medium to large legacy web application with around 500k lines of code. This application was developed by another company, so we didn’t have the code. Since we were in charge now and needed the code to maintain it, they handed us the code in a zip file (first sign that something was wrong)! Their release process was peculiar, to say the least. I’m pretty sure there are worse release procedures out there. This one consisted of copying the changed files (*.class, *.jsp, *.html, etc.) to an exploded war folder on a Tomcat server. We also had three environments (QA, PRE, PROD) with different application versions and no idea which files were deployed on each. They also had a ticket management application with attached compiled files, ready to be deployed, and no idea of the original sources. What could possibly go wrong here?

The Problem

Our team was able to make changes required by the customer and push them to PROD servers. We had done it a few times successfully, even with all the handicaps. Everything was looking good until we got another request for additional changes. These changes were only a few improvements in the log messages of a batch process. The batch’s purpose was to copy files sent to the application with financial data input to insert into a database. I guess I don’t have to state the obvious: this data was critical to calculate financial movements with direct impact on the amounts paid by the application users. After our team made the changes and performed the release, all hell broke loose. Files were not being copied to the correct locations. Data was duplicated in the database and the file system.
Financial transactions with incorrect amounts. You name it. A complete nightmare. But why? The only change was a few improvements in the log messages.

The Cause

The problem was not exactly related to the changed code. Look at the following files:

public class BatchConfiguration {
    public static final String OS = "Windows";
}

And:

public class BatchProcess {
    public void copyFile() {
        if (BatchConfiguration.OS.equals("Windows")) {
            System.out.println("Windows");
        } else if (BatchConfiguration.OS.equals("Unix")) {
            System.out.println("Unix");
        }
    }

    public static void main(String[] args) {
        new BatchProcess().copyFile();
    }
}

This is not the real code, but for the problem’s purposes it was laid out like this. Don’t ask me why it was like this. We got it in the zip file, remember? So we have here a variable which sets the expected operating system, and the logic to copy the file depends on it. The server was running on a Unix box, so the variable value was Unix. Unfortunately, all the developers were working on Windows boxes. I said unfortunately, because if the developer that implemented the changes had been using Unix, everything would have been fine. Anyway, the developer changed the variable to Windows so he could proceed with some tests. Everything was fine, so he performed the release. He copied the resulting BatchProcess.class onto the server. He didn’t bother about BatchConfiguration, since the one on the server was configured to Unix, right? Maybe you already spotted the problem. If you haven’t, try the following:

1. Copy and build the code.
2. Execute it. Check the output; you should get Windows.
3. Copy the resulting BatchProcess.class to an empty directory.
4. Execute this one again, using the command line: java BatchProcess

What happened? You got the output Windows, right? Wait! We didn’t have the BatchConfiguration.class file in the executing directory. How is that possible? Shouldn’t we need this file there? Shouldn’t we get an error?
When you build the code, the Java compiler will inline the BatchConfiguration.OS variable. This means that the compiler will replace the variable expression in the if statement with the actual variable value. It’s like having if ("Windows".equals("Windows")). Try executing javap -c BatchProcess. This will show you a bytecode representation of the class file:

BatchProcess.class
public void copyFile();
  Code:
     0: ldc           #3    // String Windows
     2: ldc           #3    // String Windows
     4: invokevirtual #4    // Method java/lang/String.equals:(Ljava/lang/Object;)Z
     7: ifeq          21
    10: getstatic     #5    // Field java/lang/System.out:Ljava/io/PrintStream;
    13: ldc           #3    // String Windows
    15: invokevirtual #6    // Method java/io/PrintStream.println:(Ljava/lang/String;)V
    18: goto          39
    21: ldc           #3    // String Windows
    23: ldc           #7    // String Unix
    25: invokevirtual #4    // Method java/lang/String.equals:(Ljava/lang/Object;)Z
    28: ifeq          39
    31: getstatic     #5    // Field java/lang/System.out:Ljava/io/PrintStream;
    34: ldc           #7    // String Unix
    36: invokevirtual #6    // Method java/io/PrintStream.println:(Ljava/lang/String;)V
    39: return

You can confirm that all the variables are replaced with their constant values. Now, returning to our problem. The .class file that was copied to the PROD servers had the Windows value baked in. This messed up everything in the execution runtime that handled the input files with the financial data. This was the cause of the problems I’ve described earlier.

Aftermath

Fixing the original problem was easy. Fixing the problems caused by the release was painful. It involved many people, many hours, pizza, loads of SQL queries, shell scripts and so on. Even our CEO came to help us. We called this the mUtils problem, since that was the original Java class name in the code. Yes, we migrated the code to something manageable. It’s now in a VCS with a tag for every release and version.

Reference: Development Horror Story – Release Nightmare from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog....
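A footnote to the inlining trap above: only compile-time constant expressions (a static final field initialized with a constant expression, per JLS §15.28) get baked into referencing classes. If the field is instead initialized through a method call, it becomes a runtime constant, the compiler emits a real getstatic field read, and a BatchProcess.class shipped alone would have picked up the value configured on the server. A minimal sketch of the safer pattern (the batch.os property name is made up for illustration):

```java
public class BatchConfiguration {
    // Initialized via a method call, so this is NOT a compile-time
    // constant expression: classes referencing OS compile to a getstatic
    // field read instead of an inlined string literal.
    public static final String OS = System.getProperty("batch.os", "Unix");

    public static void main(String[] args) {
        System.out.println(OS); // prints Unix when batch.os is not set
    }
}
```

With this layout, deploying a recompiled BatchProcess.class alone would have been harmless, because the OS value would still be read from the BatchConfiguration class present on the server at run time.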

Java8 Lambdas: Sorting Performance Pitfall EXPLAINED

Written in collaboration with Peter Lawrey. A few days ago I raised a serious problem with the performance of sorting using the new Java8 declarative style. See blogpost here. In that post I only pointed out the issue, but in this post I’m going to go a bit deeper into understanding and explaining the causes of the problem. This will be done by reproducing the issue using the declarative style, and bit by bit modifying the code until we have removed the performance issue and are left with the performance that we would expect using the old-style compare. To recap, we are sorting instances of this class:

private static class MyComparableInt {
    private int a, b, c, d;

    public MyComparableInt(int i) {
        a = i % 2;
        b = i % 10;
        c = i % 1000;
        d = i;
    }

    public int getA() { return a; }
    public int getB() { return b; }
    public int getC() { return c; }
    public int getD() { return d; }
}

Using the declarative Java 8 style (below) it took ~6s to sort 10m instances:

List<MyComparableInt> mySortedList = myComparableList.stream() // myComparableList holds the 10m instances
        .sorted(Comparator.comparing(MyComparableInt::getA)
                .thenComparing(MyComparableInt::getB)
                .thenComparing(MyComparableInt::getC)
                .thenComparing(MyComparableInt::getD))
        .collect(Collectors.toList());

Using a custom sorter (below) took ~1.6s to sort 10m instances. This is the code call to sort:

List<MyComparableInt> mySortedList = myComparableList.stream()
        .sorted(MyComparableIntSorter.INSTANCE)
        .collect(Collectors.toList());

Using this custom Comparator:

public enum MyComparableIntSorter implements Comparator<MyComparableInt> {
    INSTANCE;

    @Override
    public int compare(MyComparableInt o1, MyComparableInt o2) {
        int comp = Integer.compare(o1.getA(), o2.getA());
        if (comp == 0) {
            comp = Integer.compare(o1.getB(), o2.getB());
            if (comp == 0) {
                comp = Integer.compare(o1.getC(), o2.getC());
                if (comp == 0) {
                    comp = Integer.compare(o1.getD(), o2.getD());
                }
            }
        }
        return comp;
    }
}

Let’s create a comparing method in our class so we can analyse the code more closely. The reason for the comparing method is to allow us to easily swap implementations but leave the calling code the same.
In all cases this is how the comparing method will be called:

List<MyComparableInt> mySortedList = myComparableList.stream() // myComparableList holds the 10m instances
        .sorted(comparing(
                MyComparableInt::getA,
                MyComparableInt::getB,
                MyComparableInt::getC,
                MyComparableInt::getD))
        .collect(Collectors.toList());

The first implementation of comparing is pretty much a copy of the one in the JDK:

public static <T, U extends Comparable<? super U>> Comparator<T> comparing(
        Function<? super T, ? extends U> ke1,
        Function<? super T, ? extends U> ke2,
        Function<? super T, ? extends U> ke3,
        Function<? super T, ? extends U> ke4) {
    return Comparator.comparing(ke1).thenComparing(ke2)
            .thenComparing(ke3).thenComparing(ke4);
}

Not surprisingly this took ~6s to run through the test – but at least we have reproduced the problem and have a basis for moving forward. Let’s look at the flight recording for this test:

As can be seen there are two big issues:

- A performance issue in the lambda$comparing method
- Repeatedly calling Integer.valueOf (auto-boxing)

Let’s try and deal with the first one, which is in the comparing method. At first sight this seems strange, because when you look at the code there’s not much happening in that method. One thing however that is going on here extensively is virtual table lookups, as the code finds the correct implementation of the function. Virtual table lookups are used when there are multiple methods called from a single line of code. We can eliminate this source of latency with the following implementation of comparing. By expanding all of the uses of the Function interface, each line can only call one implementation and thus the method is inlined.

public static <T, U extends Comparable<? super U>> Comparator<T> comparing(
        Function<? super T, ? extends U> ke1,
        Function<? super T, ? extends U> ke2,
        Function<? super T, ? extends U> ke3,
        Function<? super T, ? extends U> ke4) {
    return (c1, c2) -> {
        int comp = compare(ke1.apply(c1), ke1.apply(c2));
        if (comp == 0) {
            comp = compare(ke2.apply(c1), ke2.apply(c2));
            if (comp == 0) {
                comp = compare(ke3.apply(c1), ke3.apply(c2));
                if (comp == 0) {
                    comp = compare(ke4.apply(c1), ke4.apply(c2));
                }
            }
        }
        return comp;
    };
}

By unwinding the method the JIT should be able to inline the method lookup. Indeed the time almost halves to 3.5s. Let’s look at the Flight Recording for this run:

When I first saw this I was very surprised, because as yet we haven’t done any changes to reduce the calls to Integer.valueOf but that percentage has gone right down! What has actually happened here is that, because of the changes we made to allow inlining, the Integer.valueOf has been inlined and the time taken for Integer.valueOf is being blamed on the caller (lambda$comparing) which has inlined the callee (Integer.valueOf). This is a common problem in profilers as they can be mistaken as to which method to blame, especially when inlining has taken place. But we know that in the previous Flight Recording Integer.valueOf was highlighted, so let’s remove that with this implementation of comparing and see if we can reduce the time further (the key-extractor parameters are now typed as ToIntFunction<? super T> rather than Function, so applyAsInt returns a primitive int and no boxing occurs):

return (c1, c2) -> {
    int comp = compare(ke1.applyAsInt(c1), ke1.applyAsInt(c2));
    if (comp == 0) {
        comp = compare(ke2.applyAsInt(c1), ke2.applyAsInt(c2));
        if (comp == 0) {
            comp = compare(ke3.applyAsInt(c1), ke3.applyAsInt(c2));
            if (comp == 0) {
                comp = compare(ke4.applyAsInt(c1), ke4.applyAsInt(c2));
            }
        }
    }
    return comp;
};

With this implementation the time goes right down to 1.6s, which is what we could achieve with the custom Comparator. Let’s again look at the flight recording for this run:

All the time is now going in the actual sort methods and not in overhead.
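Worth noting: since Java 8 the JDK itself ships primitive-specialised comparator factories, Comparator.comparingInt and thenComparingInt, which take ToIntFunction key extractors and therefore avoid the boxing without any hand-rolled comparing helper. A self-contained sketch (our own demo, not code from the original benchmark):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class ComparingIntDemo {

    static class MyComparableInt {
        final int a, b, c, d;

        MyComparableInt(int i) {
            a = i % 2;
            b = i % 10;
            c = i % 1000;
            d = i;
        }

        int getA() { return a; }
        int getB() { return b; }
        int getC() { return c; }
        int getD() { return d; }
    }

    // comparingInt/thenComparingInt take a ToIntFunction, so the keys are
    // compared as primitive ints and no Integer boxes are allocated
    static final Comparator<MyComparableInt> CMP =
            Comparator.comparingInt(MyComparableInt::getA)
                      .thenComparingInt(MyComparableInt::getB)
                      .thenComparingInt(MyComparableInt::getC)
                      .thenComparingInt(MyComparableInt::getD);

    public static void main(String[] args) {
        List<MyComparableInt> list = new ArrayList<>();
        for (int i = 4; i >= 0; i--) {
            list.add(new MyComparableInt(i));
        }
        list.sort(CMP);
        for (MyComparableInt m : list) {
            System.out.print(m.d + " "); // prints 0 2 4 1 3
        }
    }
}
```

Whether the built-in specialisations hit exactly the same JIT behaviour as the hand-unrolled version above would need its own benchmark, but they remove the Integer.valueOf allocations by construction.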
In conclusion we have learnt a couple of interesting things from this investigation:

- Using the new Java8 declarative sort will in some cases be up to 4x slower than writing a custom Comparator, because of the cost of auto-boxing and virtual table lookups.
- FlightRecorder, whilst being better than other profilers (see my first blog post on this issue), will still attribute time to the wrong methods, especially when inlining is taking place.

Reference: Java8 Lambdas: Sorting Performance Pitfall EXPLAINED from our JCG partner Daniel Shaya at the Rational Java blog....

Logging to Redis using Spring Boot and Logback

When doing centralized logging, e.g. using Elasticsearch, Logstash and Kibana or Graylog2, you have several options available for your Java application. You can write your standard application logs and parse those using Logstash, either consumed directly or shipped to another machine using something like logstash-forwarder. Alternatively you can write in a more appropriate format like JSON directly, so the processing step doesn’t need that much work for parsing your messages. A third option is to write to a different data store directly, which acts as a buffer for your log messages. In this post we are looking at how we can configure Logback in a Spring Boot application to write the log messages to Redis directly.

Redis

We are using Redis as a log buffer for our messages. Not everyone is happy with Redis, but it is a common choice. Redis stores its content in memory, which makes it well suited for fast access, but it can also sync its content to disc when necessary. A special feature of Redis is that the values can be different data types like strings, lists or sets. Our application uses a single key and value pair where the key is the name of the application and the value is a list that contains all our log messages. This way we can handle several logging applications in one Redis instance. When testing your setup you might also want to look into the data that is stored in Redis. You can access it using the redis-cli client. I collected some useful commands for validating that your log messages are actually written to Redis:

KEYS *            Show all keys in this Redis instance
LLEN key          Show the number of messages in the list for key
LRANGE key 0 100  Show the first 100 messages in the list for key

The Logback Config

When working with Logback, most of the time an XML file is used for all the configuration. Appenders are the things that send the log output somewhere. Loggers are used to set log levels and attach appenders to certain pieces of the application.
For Spring Boot, Logback is available for any application that uses spring-boot-starter-logging, which is also a dependency of the common spring-boot-starter-web. The configuration can be added to a file called logback.xml that resides in src/main/resources. Spring Boot comes with a file and a console appender that are already configured correctly. We can include the base configuration in our file to keep all the predefined configurations. For logging to Redis we need to add another appender. A good choice is the logback-redis-appender, which is rather lightweight and uses the Java client Jedis. The log messages are written to Redis in JSON directly, so it’s a perfect match for something like Logstash. We can make Spring Boot log to a local instance of Redis by using the following configuration:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <appender name="LOGSTASH" class="com.cwbase.logback.RedisAppender">
        <host>localhost</host>
        <port>6379</port>
        <key>my-spring-boot-app</key>
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="FILE" />
    </root>
</configuration>

We configure an appender named LOGSTASH that is an instance of RedisAppender. Host and port are set for a local Redis instance; key identifies the Redis key that is used for our logs. There are more options available, like the interval at which to push log messages to Redis. Explore the readme of the project for more information.

Spring Boot Dependencies

To make the logging work we of course have to add a dependency on the logback-redis-appender to our pom. Depending on your Spring Boot version you might see some errors in your log file about missing methods. This is because Spring Boot manages the dependencies it uses internally, and the versions for jedis and commons-pool2 do not match the ones that we need.
If this happens we can configure the versions to use in the properties section of our pom:

<properties>
    <commons-pool2.version>2.0</commons-pool2.version>
    <jedis.version>2.5.2</jedis.version>
</properties>

Now the application will start and you can see that it sends the log messages to Redis as well.

Enhancing the Configuration

Having the host and port configured in the logback.xml is not the best thing to do. When deploying to another environment with different settings you have to change the file or deploy a custom one. The Spring Boot integration of Logback allows setting some of the configuration options, like the file to log to and the log levels, using the main configuration file application.properties. Unfortunately this is a special treatment for some values and you can’t add custom values as far as I could see. But fortunately Logback supports the use of environment variables, so we don’t have to rely on configuration files. Having set the environment variables REDIS_HOST and REDIS_PORT, you can use the following configuration for your appender:

<appender name="LOGSTASH" class="com.cwbase.logback.RedisAppender">
    <host>${REDIS_HOST}</host>
    <port>${REDIS_PORT}</port>
    <key>my-spring-boot-app</key>
</appender>

We can even go one step further. To only activate the appender when the properties are set, you can add conditional processing to your configuration:

<if condition='isDefined("REDIS_HOST") && isDefined("REDIS_PORT")'>
    <then>
        <appender name="LOGSTASH" class="com.cwbase.logback.RedisAppender">
            <host>${REDIS_HOST}</host>
            <port>${REDIS_PORT}</port>
            <key>my-spring-boot-app</key>
        </appender>
    </then>
</if>

You can use a Java expression for deciding whether the block should be evaluated. When the appender is not available, Logback will just log an error and use any other appenders that are configured. For this to work you need to add the Janino library to your pom. Now the appender is activated based on the environment variables.
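Conceptually, Logback's ${VAR} substitution just looks the name up among defined properties and the OS environment, and it also supports an inline default written as ${VAR:-default}. The resolution logic is roughly equivalent to this plain-Java sketch (a simplification for illustration, not Logback's actual implementation):

```java
import java.util.Map;

public class VarResolver {

    // Roughly what a ${VAR:-default}-style lookup does: take the value
    // from the given environment, fall back to the default when absent.
    static String resolve(Map<String, String> env, String name, String defaultValue) {
        String value = env.get(name);
        return (value != null && !value.isEmpty()) ? value : defaultValue;
    }

    public static void main(String[] args) {
        Map<String, String> env = Map.of("REDIS_HOST", "redis.example.com");
        System.out.println(resolve(env, "REDIS_HOST", "localhost")); // redis.example.com
        System.out.println(resolve(env, "REDIS_PORT", "6379"));      // 6379
    }
}
```

This is also why the conditional block above is useful: substitution alone would leave an unresolved or defaulted value in place, while isDefined() lets you skip the appender entirely when the variables are missing.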
If you like you can skip the setup for local development and only set the variables on production systems.

Conclusion

Getting started with Spring Boot or logging to Redis alone is very easy, but some of the details take some work to get right. It’s worth the effort though: once you get used to centralized logging you don’t want to have your systems running without it anymore.

Reference: Logging to Redis using Spring Boot and Logback from our JCG partner Florian Hopf at the Dev Time blog....

Separating Integration Tests from Unit Tests Using Maven Failsafe & JUnit @Category

Why Unit Tests Should Run Separately From Integration Tests

TDD at the Unit Testing level is fairly straightforward, since classes in unit testing either do not have complex dependencies, or you mock out the dependencies with a mocking framework (ex. Mockito). However, TDD quickly becomes difficult when we get to Integration Testing. Integration Testing is basically testing a component with some or all of its dependencies instead of mocking them all out. Examples are tests that cut across multiple layers, tests that read or write to a database or file system, tests that require a Servlet container or EJB container to be up, tests that involve network communication, web services, etc. Integration Tests tend to be fragile and/or slow. Examples:

- A test that talks to a DB might fail not because the logic in the code was wrong, but because the DB was down, the URL/username/password to the DB was changed, or there was wrong data in the DB.
- Tests that read or write to disk are slow, and each time you run a test the file or database needs to be reset with the proper data or content.
- Packaging and deploying to a container is slow.
- Tests that make network calls may fail not because the logic in the code is wrong, but because the network resource is unavailable, or there are problems with the network itself.

These hassles tend to discourage developers from running tests frequently. When tests are run few and far between, developers end up writing a lot of code before they catch errors. Therefore, when tests are run infrequently, productivity goes down because errors are harder to find and fix after a lot of code has been written, and there’s an increased risk of quality issues. Also, when running tests is a hassle, developers are discouraged from writing enough tests. It therefore makes sense to run unit tests separately from integration tests.
Unit tests run completely in memory with no external dependencies, so even for large projects they should all run in just a few seconds, and should run robustly each time since they depend only on the logic of the code under test. Developers are therefore encouraged to run all their unit tests with every small change.

Using Maven Failsafe and JUnit @Category to Separate Integration Tests

There’s more than one way to separate integration tests. By default, Failsafe picks up any class with a suffix “IT” or “ITCase”, or prefixed with “IT”. However, some testing frameworks also require suffixes or prefixes, which makes using that approach cumbersome. Another approach is to place integration tests in a separate source directory. I’ve chosen to use JUnit @Category since I’m also using Concordion, which requires a suffix in its test classes. The rest of this article just documents how I implemented the advice from the 2012 article by John Doble entitled, “Unit and Integration Tests With Maven and JUnit Categories”. You can find my source code here.

Creating the JUnit Category

Creating a JUnit Category is just creating an empty interface. Really, that’s it! See below:

package com.orangeandbronze.test;

public interface IntegrationTest {}

Now I can apply this “marker interface” as a Category to my integration tests – in the example below, to SectionDaoTest:

import org.junit.experimental.categories.Category;
import com.orangeandbronze.test.IntegrationTest;

@Category(IntegrationTest.class)
public class SectionDaoTest extends DaoTest { ... }

Adding the Surefire and Failsafe Plugins

Now to add the Surefire and Failsafe plugins. I need to exclude all tests marked with IntegrationTest in Surefire (which runs unit tests), and include all the tests marked with IntegrationTest in Failsafe (which runs integration tests). Also, I had to include “**/*.java” or the tests don’t run, I don’t know why.
<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.18.1</version>
    <configuration>
        <excludedGroups>com.orangeandbronze.test.IntegrationTest</excludedGroups>
    </configuration>
</plugin>
<plugin>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.18.1</version>
    <configuration>
        <includes>
            <include>**/*.java</include>
        </includes>
        <groups>com.orangeandbronze.test.IntegrationTest</groups>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Running the Tests

So now, when I run mvn test, only the unit tests get run, whereas when I run mvn integration-test or mvn verify (I usually run mvn verify), not only do the unit tests run, but my project gets packaged and then the integration tests run. In a real project, each developer would run all the unit tests after just a few changes, dozens of times a day, while he would run the integration tests less frequently, but at least once a day. The CI server would also run both unit and integration tests during its builds.

Reference: Separating Integration Tests from Unit Tests Using Maven Failsafe & JUnit @Category from our JCG partner Calen Legaspi at the Calen Legaspi blog....
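A closing aside: there is no magic behind the @Category filtering. JUnit's org.junit.experimental.categories.Category annotation is retained at runtime and carries the marker classes, so a runner (or the Surefire/Failsafe groups support) can check membership reflectively. The sketch below re-implements the idea in plain Java to show the mechanism; the names are simplified stand-ins, not JUnit's actual code:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class CategoryFilterDemo {

    // Simplified stand-in for org.junit.experimental.categories.Category
    @Retention(RetentionPolicy.RUNTIME)
    @interface Category {
        Class<?>[] value();
    }

    // The marker interface, as in the article
    interface IntegrationTest {}

    @Category(IntegrationTest.class)
    static class SectionDaoTest {}

    static class SectionServiceTest {} // no category: a plain unit test

    // Roughly what a runner does when told to include or exclude a group:
    // read the annotation and test the markers against the requested group
    static boolean isInCategory(Class<?> testClass, Class<?> group) {
        Category category = testClass.getAnnotation(Category.class);
        if (category == null) {
            return false;
        }
        for (Class<?> marker : category.value()) {
            if (group.isAssignableFrom(marker)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isInCategory(SectionDaoTest.class, IntegrationTest.class));     // true
        System.out.println(isInCategory(SectionServiceTest.class, IntegrationTest.class)); // false
    }
}
```

The isAssignableFrom check is also why marker interfaces can form hierarchies: a marker extending IntegrationTest would still match an IntegrationTest group.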
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact