

Web App Architecture – the Spring MVC – AngularJs stack

Spring MVC and AngularJs together make for a really productive and appealing frontend development stack for building form-intensive web applications. In this blog post we will see how a form-intensive web app can be built using these technologies, and compare this approach with the other available options. A fully functional and secured sample Spring MVC / AngularJs web app can be found in this github repository. We will go over the following topics:

- The architecture of a Spring MVC + Angular single page app
- How to structure a web UI using Angular
- Which Javascript / CSS libraries complement Angular well?
- How to build a REST API backend with Spring MVC
- Securing a REST API using Spring Security
- How does this compare with other approaches that use a full Java-based solution?

The architecture of a Spring MVC + Angular single page web app

Form-intensive enterprise-class applications are ideally suited to being built as single page web apps. The main idea compared to other, more traditional server-side architectures is to build the server as a set of stateless reusable REST services and, from an MVC perspective, to take the controller out of the backend and move it into the browser. The client is MVC-capable and contains all the presentation logic, separated into a view layer, a controller layer and a frontend services layer. After the initial application startup, only JSON data goes over the wire between client and server.

How is the backend built?

The backend of an enterprise frontend application can be built in a very natural and web-like way as a REST API. The same technology can be used to provide web services to third-party applications – obviating in many cases the need for a separate SOAP web services stack. From a DDD perspective, the domain model remains on the backend, at the service and persistence layer level. Only DTOs go over the wire, not the domain model.
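The DTO-over-the-wire idea can be sketched in a few lines of plain Java. The names below follow the UserInfoDTO example that appears later in the article; the mapper method and field set are a hypothetical illustration, not the sample app's actual code:

```java
// Sketch of the DTO-over-the-wire idea: the domain entity stays on the
// backend; only a flat DTO is serialized to JSON. Names follow the
// UserInfoDTO example later in the article; the mapper is hypothetical.
public class DtoSketch {

    // Domain entity: lives in the service/persistence layers only.
    static class User {
        private final String username;
        private final Long maxCaloriesPerDay;
        private final String passwordHash; // never leaves the backend

        User(String username, Long maxCaloriesPerDay, String passwordHash) {
            this.username = username;
            this.maxCaloriesPerDay = maxCaloriesPerDay;
            this.passwordHash = passwordHash;
        }
        String getUsername() { return username; }
        Long getMaxCaloriesPerDay() { return maxCaloriesPerDay; }
    }

    // DTO: the only shape that crosses the wire.
    static class UserInfoDTO {
        final String userName;
        final Long maxCaloriesPerDay;
        final Long todaysCalories;

        UserInfoDTO(String userName, Long maxCaloriesPerDay, Long todaysCalories) {
            this.userName = userName;
            this.maxCaloriesPerDay = maxCaloriesPerDay;
            this.todaysCalories = todaysCalories;
        }
    }

    static UserInfoDTO toDto(User user, Long todaysCalories) {
        return new UserInfoDTO(user.getUsername(),
            user.getMaxCaloriesPerDay(), todaysCalories);
    }

    public static void main(String[] args) {
        User alice = new User("alice", 2000L, "$2a$...");
        UserInfoDTO dto = toDto(alice, 1200L);
        // Only the DTO fields would be serialized to JSON by Jackson.
        System.out.println(dto.userName + "," + dto.maxCaloriesPerDay + "," + dto.todaysCalories);
    }
}
```

Keeping sensitive or persistence-only fields (like the password hash above) out of the DTO is the practical payoff of this separation.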
How to structure the frontend of a web app using Angular

The frontend should be built around a view-specific model (which is not the domain model), and should handle only presentation logic, not business logic. These are the three layers of the frontend:

The View Layer

The view layer is composed of Html templates, CSS, and any Angular directives representing the different UI components. This is an example of a simple view for a login form:

<form ng-submit="onLogin()" name="form" novalidate="" ng-controller="LoginCtrl">
  <fieldset>
    <legend>Log In</legend>
    <div class="form-field">
      <input ng-model="vm.username" name="username" required="" ng-minlength="6" type="text">
      <div class="form-field">
        <input ng-model="vm.password" name="password" required="" ng-minlength="6" pattern="(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{6,}" type="password">
      </div>
    </div>
  </fieldset>
  <button type="submit">Log In</button>
  <a href="/resources/public/new-user.html">New user?</a>
</form>

The Controller Layer

The controller layer is made of Angular controllers that glue the data retrieved from the backend and the view together. The controller initializes the view model and defines how the view should react to model changes and vice versa:

angular.module('loginApp', ['common', 'editableTableWidgets'])
    .controller('LoginCtrl', function ($scope, LoginService) {
        $scope.onLogin = function () {
            console.log('Attempting login with username ' + $scope.vm.username
                + ' and password ' + $scope.vm.password);
            if ($scope.form.$invalid) {
                return;
            }
            LoginService.login($scope.vm.username, $scope.vm.password);
        };
    });

One of the main responsibilities of the controller is to perform frontend validations. Any validations done on the frontend are for user convenience only – for example, they immediately inform the user that a field is required. Any frontend validations need to be repeated in the backend at the service layer level for security reasons, because the frontend validations can easily be bypassed.
The Frontend Services Layer

A set of Angular services that allow the controllers to interact with the backend and that can be injected into Angular controllers:

angular.module('frontendServices', [])
    .service('UserService', ['$http', '$q', function($http, $q) {
        return {
            getUserInfo: function() {
                var deferred = $q.defer();
                $http.get('/user')
                    .then(function (response) {
                        if (response.status == 200) {
                            deferred.resolve(response.data);
                        } else {
                            deferred.reject('Error retrieving user info');
                        }
                    });
                return deferred.promise;
            }
        };
    }]);

Let's see what other libraries we need to have the frontend up and running.

Which Javascript / CSS libraries are necessary to complement Angular?

Angular already provides a large part of the functionality needed to build the frontend of our app. Some good complements to Angular are:

- An easily themeable pure CSS library of only 4k from Yahoo named PureCss. Its Skin Builder allows you to easily generate a theme based on a primary color. It's a BYOJ (Bring Your Own Javascript) solution, which helps keep things the 'Angular way'.
- A functional programming library to manipulate data. The one that seems the most used, and better maintained and documented these days, is lodash.

With these two libraries and Angular, almost any form-based application can be built; nothing else is really required. Some other libraries that might be an option depending on your project are:

- A module system like requirejs is nice to have, but because the Angular module system does not handle file retrieval, this introduces some duplication between the dependency declarations of requirejs and the Angular modules.
- A CSRF Angular module, to prevent cross-site request forgery attacks.
- An internationalization module.

How to build a REST API backend using Spring MVC

The backend is built using the usual backend layers:

- Router Layer: defines which service entry points correspond to a given HTTP URL, and how parameters are to be read from the HTTP request
- Service Layer: contains any business logic such as validations, and defines the scope of business transactions
- Persistence Layer: maps the database to/from in-memory domain objects

Spring MVC is currently best configured using only Java configuration. The web.xml is hardly ever needed; see here an example of a fully configured application using Java config only. The service and persistence layers are built using the usual DDD approach, so let's focus our attention on the Router Layer.

The Router Layer

The same Spring MVC annotations used to build a JSP/Thymeleaf application can also be used to build a REST API. The big difference is that the controller methods do not return a String that defines which view template should be rendered. Instead, the @ResponseBody annotation indicates that the return value of the controller method should be rendered directly and become the response body:

@ResponseBody
@ResponseStatus(HttpStatus.OK)
@RequestMapping(method = RequestMethod.GET)
public UserInfoDTO getUserInfo(Principal principal) {
    User user = userService.findUserByUsername(principal.getName());
    Long todaysCalories = userService.findTodaysCaloriesForUser(principal.getName());
    return user != null ?
        new UserInfoDTO(user.getUsername(), user.getMaxCaloriesPerDay(), todaysCalories) : null;
}

If all the methods of the class are to be annotated with @ResponseBody, then it's better to annotate the whole class with @RestController instead. By adding the Jackson JSON library, the method return value will be converted to JSON without any further configuration. It's also possible to convert to XML or other formats, depending on the value of the Accept HTTP header specified by the client.
See here an example of a couple of controllers with error handling configured.

How to secure a REST API using Spring Security

A REST API can be secured using Spring Security Java configuration. A good approach is to use form login with a fallback to HTTP Basic authentication, and to include some CSRF protection and the possibility to enforce that all backend methods are only accessible via HTTPS. This means the backend will present browser clients with a login form and assign a session cookie on successful login, but it will still work well for non-browser clients by supporting a fallback to HTTP Basic, where credentials are passed via the Authorization HTTP header. Following OWASP recommendations, the REST services can be kept minimally stateless (the only server state is the session cookie used for authentication) to avoid having to send credentials over the wire for each request. This is an example of how to configure the security of a REST API:

http
    .authorizeRequests()
    .antMatchers("/resources/public/**").permitAll()
    .anyRequest().authenticated()
    .and()
    .formLogin()
    .defaultSuccessUrl("/resources/calories-tracker.html")
    .loginProcessingUrl("/authenticate")
    .loginPage("/resources/public/login.html")
    .and()
    .httpBasic()
    .and()
    .logout()
    .logoutUrl("/logout");

if ("true".equals(System.getProperty("httpsOnly"))) {
    log.info("launching the application in HTTPS-only mode");
    http.requiresChannel().anyRequest().requiresSecure();
}

This configuration covers only the authentication aspect of security; choosing an authorization strategy depends on the security requirements of the API. If you need very fine-grained control over authorization, then check whether Spring Security ACLs could be a good fit for your use case. Let's now see how this approach to building web apps compares with other commonly used approaches.
Comparing the Spring MVC / Angular stack with other common approaches

This approach of using Javascript for the frontend and Java for the backend makes for a simplified and productive development workflow. When the backend is running, no special tools or plugins are needed to achieve full frontend hot-deploy capability: just publish the resources to the server using your IDE (for example by hitting Ctrl+F10 in IntelliJ) and refresh the browser page. The backend classes can still be reloaded using JRebel, but for the frontend nothing special is needed. Actually, the whole frontend can be built by mocking the backend with, for example, json-server. This would allow different developers to build the frontend and the backend in parallel if needed.

Productivity gains of full stack development?

From my experience, being able to edit the Html and CSS directly, with no layers of indirection in between (see here a high-level comparison of Angular with GWT and JSF), helps to reduce mental overhead and keeps things simple. The edit-save-refresh development cycle is very fast and reliable and gives a huge productivity boost. The largest productivity gain is obtained when the same developers build both the Javascript frontend and the Java backend, because most features require simultaneous changes to both. The potential downside of this is that developers also need to know Html, CSS and Javascript, but this seems to have become more common in the last couple of years. In my experience, going full stack makes it possible to implement complex frontend use cases in a fraction of the time of the equivalent full Java solution (days instead of weeks), so the productivity gain makes the learning curve definitely worth it.

Conclusions

Spring MVC and Angular combined really open the door to a new way of building form-intensive web apps. The productivity gains that this approach allows make it an alternative worth looking into.
The absence of any server state between requests (besides the authentication cookie) eliminates by design a whole category of bugs. For further details have a look at this sample application on github, and let us know your thoughts/questions in the comments below.

Reference: Web App Architecture – the Spring MVC – AngularJs stack from our JCG partner Aleksey Novik at The JHades Blog blog....

Testing and System.out with system-rules

Writing unit tests is an integral part of software development. One problem you have to solve when your class under test interacts with the operating system is to simulate its behaviour. This can be done by using mocks instead of the real objects provided by the Java Runtime Environment (JRE). Libraries that support mocking for Java are, for example, mockito or jMock. Mocking objects is a great thing when you have complete control over their instantiation. When dealing with standard input and standard output this is a little bit tricky, but not impossible, as java.lang.System lets you replace the standard InputStream and OutputStream:

System.setIn(in);
System.setOut(out);

So that you do not have to replace the streams before and after each test case manually, you can utilize org.junit.rules.ExternalResource. This class provides the two methods before() and after() that are called, like their names suggest, before and after each test case. This way you can easily set up and clean up resources that all of your tests within one class need. Or, to come back to the original problem, replace the input and output streams for java.lang.System. Exactly what I have described above is implemented by the library system-rules.
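Under the hood, such a rule boils down to swapping the System streams before the test and restoring them afterwards. A minimal sketch of that mechanism in plain Java, without JUnit, might look like this (the class and method names are illustrative, not part of system-rules):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.PrintStream;
import java.util.Scanner;

// Minimal sketch of what a stream-swapping rule does: replace System.in and
// System.out around a piece of code, capture the output, then restore the
// original streams.
public class StreamSwapSketch {

    static String runWithStdIn(String input, Runnable code) {
        InputStream originalIn = System.in;
        PrintStream originalOut = System.out;
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        try {
            System.setIn(new ByteArrayInputStream(input.getBytes()));
            System.setOut(new PrintStream(captured, true));
            code.run();
        } finally {
            // Always restore, otherwise later tests see the replaced streams.
            System.setIn(originalIn);
            System.setOut(originalOut);
        }
        return captured.toString();
    }

    public static void main(String[] args) {
        String out = runWithStdIn("2\n3\n", () -> {
            Scanner scanner = new Scanner(System.in);
            int sum = scanner.nextInt() + scanner.nextInt();
            System.out.println(sum);
        });
        System.out.print(out); // prints "5"
    }
}
```

system-rules packages exactly this setup/teardown behind JUnit's @Rule mechanism, which is what the example below uses.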
To see how it works, let's start with a simple example:

public class CliExample {
    private Scanner scanner = new Scanner(System.in, "UTF-8");

    public static void main(String[] args) {
        CliExample cliExample = new CliExample();
        cliExample.run();
    }

    private void run() {
        try {
            int a = readInNumber();
            int b = readInNumber();
            int sum = a + b;
            System.out.println(sum);
        } catch (InputMismatchException e) {
            System.err.println("The input is not a valid integer.");
        } catch (IOException e) {
            System.err.println("An input/output error occurred: " + e.getMessage());
        }
    }

    private int readInNumber() throws IOException {
        System.out.println("Please enter a number:");
        String nextInput = scanner.next();
        try {
            return Integer.valueOf(nextInput);
        } catch(Exception e) {
            throw new InputMismatchException();
        }
    }
}

The code above reads two integers from standard input and prints out their sum. In case the user provides invalid input, the program should output an appropriate message on the error stream. In the first test case, we want to verify that the program correctly sums up two numbers and prints out the result:

public class CliExampleTest {
    @Rule
    public final StandardErrorStreamLog stdErrLog = new StandardErrorStreamLog();
    @Rule
    public final StandardOutputStreamLog stdOutLog = new StandardOutputStreamLog();
    @Rule
    public final TextFromStandardInputStream systemInMock = emptyStandardInputStream();

    @Test
    public void testSuccessfulExecution() {
        systemInMock.provideText("2\n3\n");
        CliExample.main(new String[]{});
        assertThat(stdOutLog.getLog(), is("Please enter a number:\r\nPlease enter a number:\r\n5\r\n"));
    }
    ...
}

To simulate System.in we utilize system-rules' TextFromStandardInputStream. The instance variable is initialized with an empty input stream by calling emptyStandardInputStream(). In the test case itself we provide the input for the application by calling provideText() with a newline at the appropriate points. Then we call the main() method of our application.
Finally we have to assert that the application has written the two input prompts and the result to standard output. The latter is done through an instance of StandardOutputStreamLog. By calling its method getLog() we retrieve everything that has been written to standard output during the current test case. The StandardErrorStreamLog can be used alike for the verification of what has been written to standard error:

@Test
public void testInvalidInput() throws IOException {
    systemInMock.provideText("a\n");
    CliExample.main(new String[]{});
    assertThat(stdErrLog.getLog(), is("The input is not a valid integer.\r\n"));
}

Beyond that, system-rules also offers rules for working with System.getProperty(), System.setProperty(), System.exit() and System.getSecurityManager().

Conclusion: With system-rules, testing command line applications with unit tests becomes even simpler than using JUnit's Rules itself. All the boilerplate code to update the system environment before and after each test case comes within some easy-to-use rules.

PS: You can find the complete sources here.

Reference: Testing and System.out with system-rules from our JCG partner Martin Mois at the Martin's Developer World blog....

Tips for Importing Data

I'm currently importing a large amount of spatial data into a PostgreSQL/PostGIS database and realized others could learn from my experience. Most of the advice is not specific to PostgreSQL or PostGIS.

Know the basic techniques

Know the basic techniques for loading bulk data. Use COPY if possible. If not, use batch processing if possible. If not, turn off auto-commit before doing individual INSERT or UPDATE calls and only commit every Nth call. Use auto-committed INSERT and UPDATE as an absolute last resort. Unfortunately the latter is the usual default with JDBC connections. Another standard technique is to drop any indexes before loading large amounts of data and recreate them afterwards. This is not always possible, e.g., if you're updating a live table, but it can mean a big performance boost. Finally, remember to update your index statistics. In PostgreSQL this is VACUUM ANALYZE.

Know your database

PostgreSQL allows tables to be created as "UNLOGGED". That means there can be data loss if the system crashes while (or soon after) uploading your data, but so what? If that happens you'll probably want to restart the upload from the start anyway. I haven't done performance testing (yet) but it's another trick to keep in mind.

Know how to use the ExecutorService for multithreading

Every call to the database will have dead time due to network latency and the time required for the database to complete the call. On the flip side, the database is idle while waiting for the desktop to prepare and upload each call. You can fill that dead time by using a multithreaded uploader. The ExecutorService makes it easy to break the work into meaningful chunks. E.g., each thread uploads a single table or a single data file. The ExecutorService also allows you to be more intelligent about how you upload your data.

Upload to a dedicated schema

If possible, upload to a dedicated schema. This gives you more flexibility later.
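The commit-every-Nth-call technique mentioned above can be sketched as a small helper that buffers records and triggers a flush action once the batch size is reached. In a real importer the flush action would add the buffered rows to a PreparedStatement batch, call executeBatch() and commit(); the class below is a hypothetical illustration with a pluggable flush action so the batching logic itself is easy to test without a database:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical helper illustrating commit-every-Nth-record batching.
// In a real importer the flush action would fill a PreparedStatement batch,
// call executeBatch() and commit() on the Connection.
public class BatchingUploader<T> {
    private final int batchSize;
    private final Consumer<List<T>> flushAction;
    private final List<T> buffer = new ArrayList<>();

    public BatchingUploader(int batchSize, Consumer<List<T>> flushAction) {
        this.batchSize = batchSize;
        this.flushAction = flushAction;
    }

    public void add(T record) {
        buffer.add(record);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    // Call once at the end to push any remaining records.
    public void finish() {
        if (!buffer.isEmpty()) {
            flush();
        }
    }

    private void flush() {
        flushAction.accept(new ArrayList<>(buffer));
        buffer.clear();
    }

    public static void main(String[] args) {
        List<Integer> flushSizes = new ArrayList<>();
        BatchingUploader<String> uploader =
            new BatchingUploader<>(100, batch -> flushSizes.add(batch.size()));
        for (int i = 0; i < 250; i++) {
            uploader.add("row-" + i);
        }
        uploader.finish();
        System.out.println(flushSizes); // [100, 100, 50]
    }
}
```

The batch size is a tuning knob: larger batches mean fewer round trips and commits, at the cost of more memory and more rework if a batch fails.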
In PostgreSQL this can be transparent to most users by calling ALTER DATABASE database SET SEARCH_PATH=schema1,schema2,schema3 and specifying both public and the schema containing the uploaded data. In other databases you can only have one active schema at a time and you will need to explicitly specify the schema later.

Upload unprocessed data

I've sometimes found it faster to upload raw data and process it in the database (with queries and stored procedures) than to process it on the desktop and upload the cooked data. This is especially true with fixed-format records. There are benefits besides performance – see the following items.

Keep track of your source

Few things are more frustrating than having questions about what's in the database and no way to trace it back to its source. (Obviously this applies to data you uploaded from elsewhere, not data your application generated itself.) Adding a couple of fields for, e.g., filename and record number can save a lot of headaches later.

Plan your work

In my current app one of my first steps is to decompress and open each shapefile (data file), get the number of records, and then close it and delete the uncompressed file. This seems like pointless work, but it allows me to get a solid estimate of the amount of time that will be required to upload the file. That, in turn, allows me to make intelligent decisions about how to schedule the work. E.g., one standard approach is to use a priority queue where you always grab the most expensive work item (e.g., by number of records to process) when a thread becomes available. This will result in the fastest overall uploads. This has two other benefits. First, it allows me to verify that I can open and read all of the shapefiles. I won't waste hours before running into a fatal problem. Second, it allows me to verify that I wrote everything expected. There's a problem if I found 813 records while planning but later could only find 811 rows in the table.
Or worse, found 815 rows in the table.

Log everything

I write to a log table after every database call. I write to a log table after every exception. It's expensive, but it's a lot easier to query a database table or two later than to parse log files. It's also less expensive than you think if you have a large batch size. I log the time, the shapefile's pathname, the number of records read and the number of records written. The exceptions log the time, the shapefile's pathname, and the exception messages. I log all exception messages – both 'cause' and 'nextException' (for SQLExceptions).

Perform quality checks

Check that you've written all expected records and had no exceptions. Check the data for self-consistency. Add unique indexes. Add referential integrity constraints. There's a real cost to adding indexes when inserting and updating data, but there's no downside if these will be read-only tables. Check for null values. Check for reasonable values, e.g., populations and ages can never be negative. You can often perform checks specific to the data. E.g., I know that states/provinces must be WITHIN their respective countries, and cities must be WITHIN their respective states/provinces. (Some US states require cities to be WITHIN a single county but other states allow cities to cross county boundaries.) There are two possibilities if the QC fails. First, your upload could be faulty. This is easy to check if you recorded the source of each record. Second, the data itself could be faulty. In either case you want to know this as soon as possible.

Backup your work

This should be a no-brainer, but immediately perform a database backup of these tables after the QC passes.

Lock down the data

This only applies if the tables will never or rarely be modified after loading. (Think of things like tables containing information about states or area codes.) Lock down the tables. You can use both belts and suspenders: REVOKE INSERT, UPDATE, TRUNCATE ON TABLE table FROM PUBLIC.
ALTER TABLE table SET READ ONLY. Some people prefer VACUUM FREEZE table SET READ ONLY. If you uploaded the data into a dedicated schema you can use the shorthand REVOKE INSERT, UPDATE, TRUNCATE ON ALL TABLES IN SCHEMA schema FROM PUBLIC. You can also REVOKE CREATE permissions on a schema.

Reference: Tips for Importing Data from our JCG partner Bear Giles at the Invariant Properties blog....

Grails Tutorial for Beginners – HQL Queries (executeQuery and executeUpdate)

This Grails tutorial will teach the basics of using HQL. Grails supports dynamic finders, which make it convenient to perform simple database queries. But for more complex cases, Grails provides both the Criteria API and HQL. This tutorial will focus on the latter.

Introduction

It is well known that Grails sits on top of Spring and Hibernate – two of the most popular Java frameworks. Hibernate is used as the underlying technology for the object-relational mapping of Grails (GORM). Hibernate is database agnostic, and since Grails is based on it, we can write applications that are compatible with most popular databases. We don't need to write different queries for each possible database. The easiest way to perform database queries is through dynamic finders. It's simple and very intuitive. Check my previous post for a tutorial on this topic. Dynamic finders, however, are very limited. They may not be suitable for complex requirements and cases where the developer needs a lower level of control. HQL is a very good alternative, as it is very similar to SQL. HQL is fully object oriented and understands inheritance, polymorphism and association. Using it provides a very powerful and flexible API while still keeping your application database agnostic.
In Grails, there are two domain methods to use to invoke HQL:

- executeQuery – Executes HQL queries (SELECT operations)
- executeUpdate – Updates the database with DML-style operations (UPDATE and DELETE)

executeQuery

Here is a sample domain class that we will query from:

package asia.grails.test

class Person {
    String firstName
    String lastName
    int age
    static constraints = {
    }
}

Retrieve all domain objects

This is the code to retrieve all Person objects from the database:

def listOfAllPerson = Person.executeQuery("from Person")

Notice that:

- executeQuery is a method of a domain class and is used for retrieving information (SELECT statements)
- Similar to SQL, the from identifier is required
- Instead of specifying the table, we specify the domain class right after the from keyword. We could also write the query like this: def listOfAllPerson = Person.executeQuery("from asia.grails.test.Person")
- It is valid not to specify a select clause. By default, the query will return the object instances of the specified domain class. In the example, it will return a list of all Person objects.

Here is a sample code of how we could use the result:

listOfAllPerson.each { person ->
    println "First Name = ${person.firstName}"
    println "Last Name = ${person.lastName}"
    println "Age = ${person.age}"
}

Since listOfAllPerson is a list of Person instances, we can iterate over it and print the details.

Select clause

When the select clause is explicitly used, HQL will not return a list of domain objects. Instead, it will return a two-dimensional list. Here is an example, assuming that at least one record is in the database:

def list = Person.executeQuery("select firstName, lastName from Person")
def firstPerson = list[0]
def firstName = firstPerson[0]
def lastName = firstPerson[1]
println "First Name = ${firstName}"
println "Last Name = ${lastName}"

The variable list will be assigned a list of items. Each item is a list that corresponds to the values as enumerated in the select clause.
The code can also be written like this to help visualize the data structure:

def list = Person.executeQuery("select firstName, lastName from Person")
def firstName = list[0][0]
def lastName = list[0][1]
println "First Name = ${firstName}"
println "Last Name = ${lastName}"

Where clause

Just like SQL, we can filter query results using a where clause. Here are some examples:

People with surname Doe:
def peopleWithSurnameDoe = Person.executeQuery("from Person where lastName = 'Doe'")

People who are at least 18 years old:
def adults = Person.executeQuery("from Person where age >= 18")

People having a first name that contains John:
def peopleWithFirstNameLikeJohn = Person.executeQuery("from Person where firstName like '%John%'")

Group clause

A group clause is also permitted. The behavior is similar to SQL. Here is an example:

def list = Person.executeQuery("select age, count(*) from Person group by age")
list.each { item ->
    def age = item[0]
    def count = item[1]
    println "There are ${count} people with age ${age} years old"
}

This will print all ages found in the table and how many people have each age.

Having clause

The having clause is useful to filter the result of a group by. Here is an example:

def list = Person.executeQuery(
    "select age, count(*) from Person group by age having count(*) > 1")
list.each { item ->
    def age = item[0]
    def count = item[1]
    println "There are ${count} people with age ${age} years old"
}

This will print all ages found in the table and how many people have each age, provided that there is more than one person in the age group.

Pagination

It is not good for performance to retrieve all records in a table at once. It is more efficient to page results. For example, get 10 records at a time.
Here is a code sample on how to do that:

def listPage1 = Person.executeQuery("from Person order by id", [offset:0, max:10])
def listPage2 = Person.executeQuery("from Person order by id", [offset:10, max:10])
def listPage3 = Person.executeQuery("from Person order by id", [offset:20, max:10])

The parameter max informs GORM to fetch a maximum of 10 records only. The offset means how many records to skip before reading the first result.

- On page 1, we don't skip any records and get the first 10 results.
- On page 2, we skip the first 10 records and get the 11th to 20th records.
- On page 3, we skip the first 20 records and get the 21st to 30th records.

GORM/Hibernate will translate the paging information to its proper SQL syntax depending on the database. Note: It is usually better to have an order by clause when paginating results, otherwise most databases offer no guarantee on how records are sorted between each query.

List Parameters

HQL statements can have parameters. Here is an example:

def result = Person.executeQuery(
    "from Person where firstName = ? and lastName = ?", ['John', 'Doe'])

The parameters can be passed as a list. The first parameter (John) is used in the first question mark, the second parameter (Doe) is used in the second question mark, and so on. Results can also be paginated:

def result = Person.executeQuery(
    "from Person where firstName = ? and lastName = ?", ['John', 'Doe'], [offset:0, max:5])

Named Parameters

Providing list parameters is usually hard to read and prone to errors. It is easier to use named parameters. For example:

def result = Person.executeQuery(
    "from Person where firstName = :searchFirstName and lastName = :searchLastName",
    [searchFirstName:'John', searchLastName:'Doe'])

The colon signifies a named parameter variable. The values can then be passed as a map.
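The offset/max arithmetic used for pagination generalizes to any page number: for a 1-based page n with page size max, the offset is (n - 1) * max. A small Java sketch of that calculation (the helper name is made up for illustration):

```java
// Hypothetical helper showing the offset/max arithmetic behind paginated
// queries such as: executeQuery("from Person order by id", [offset: o, max: m])
public class PageMath {

    // 1-based page number -> number of records to skip.
    static int offsetFor(int page, int pageSize) {
        if (page < 1 || pageSize < 1) {
            throw new IllegalArgumentException("page and pageSize must be >= 1");
        }
        return (page - 1) * pageSize;
    }

    public static void main(String[] args) {
        int pageSize = 10;
        for (int page = 1; page <= 3; page++) {
            System.out.println("page " + page + ": offset=" + offsetFor(page, pageSize)
                + ", max=" + pageSize);
        }
        // page 1: offset=0, max=10
        // page 2: offset=10, max=10
        // page 3: offset=20, max=10
    }
}
```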
Results can also be paginated:

def result = Person.executeQuery(
    "from Person where firstName = :searchFirstName and lastName = :searchLastName",
    [searchFirstName:'John', searchLastName:'Doe'], [offset:0, max:5])

How to perform JOINs

Here is an example of one-to-many relationship domain classes:

package asia.grails.test

class Purchase {
    static hasMany = [items:PurchaseItem]
    String customer
    Date dateOfPurchase
    double price
}

package asia.grails.test

class PurchaseItem {
    static belongsTo = Purchase
    Purchase parentPurchase
    String product
    double price
    int quantity
}

Here is a sample code that joins the two tables:

def customerWhoBoughtPencils = Purchase.executeQuery(
    "select p.customer from Purchase p join p.items i where i.product = 'Pencil' ")

This returns all customers who bought pencils.

executeUpdate

We can update or delete records using executeUpdate. This is sometimes more efficient, especially when dealing with large sets of records.

Delete

Here are some examples of how to delete records using executeUpdate. Delete all Person records in the database:

Person.executeUpdate("delete Person")

Here are different ways to delete people with first name John:

Person.executeUpdate("delete Person where firstName = 'John'")
Person.executeUpdate("delete Person where firstName = ? ", ['John'])
Person.executeUpdate("delete Person where firstName = :firstNameToDelete ", [firstNameToDelete:'John'])

Update

Here are some examples of how to update records using executeUpdate. Here are different ways to make all people have the age 15:

Person.executeUpdate("update Person set age = 15")
Person.executeUpdate("update Person set age = ?", [15])
Person.executeUpdate("update Person set age = :newAge", [newAge:15])

Here are different ways to set John Doe's age to 15.
Person.executeUpdate(
    "update Person set age = 15 where firstName = 'John' and lastName = 'Doe'")
Person.executeUpdate(
    "update Person set age = ? where firstName = ? and lastName = ?", [15, 'John', 'Doe'])
Person.executeUpdate(
    "update Person set age = :newAge where firstName = :firstNameToSearch and lastName = :lastNameToSearch",
    [newAge:15, firstNameToSearch:'John', lastNameToSearch:'Doe'])

Reference: Grails Tutorial for Beginners – HQL Queries (executeQuery and executeUpdate) from our JCG partner Jonathan Tan at the Grails cookbook blog....

Hibernate locking patterns – How does Optimistic Lock Mode work

Explicit optimistic locking

In my previous post, I introduced the basic concepts of Java Persistence locking. The implicit locking mechanism prevents lost updates and it's suitable for entities that we can actively modify. While implicit optimistic locking is a widespread technique, few happen to understand the inner workings of the explicit optimistic lock mode. Explicit optimistic locking can prevent data integrity anomalies when the locked entities are modified by some external mechanism.

The product ordering use case

Let's say we have the following domain model. Our user, Alice, wants to order a product. The purchase goes through the following steps:

1. Alice loads a Product entity
2. Because the price is convenient, she decides to order the Product
3. The price engine batch job changes the Product price (taking into consideration currency changes, tax changes and marketing campaigns)
4. Alice issues the Order without noticing the price change

Implicit locking shortcomings

First, we are going to test whether the implicit locking mechanism can prevent such anomalies.
Our test case looks like this:

doInTransaction(new TransactionCallable<Void>() {
    @Override
    public Void execute(Session session) {
        final Product product = (Product) session.get(Product.class, 1L);
        try {
            executeAndWait(new Callable<Void>() {
                @Override
                public Void call() throws Exception {
                    return doInTransaction(new TransactionCallable<Void>() {
                        @Override
                        public Void execute(Session _session) {
                            Product _product = (Product) _session.get(Product.class, 1L);
                            assertNotSame(product, _product);
                            _product.setPrice(BigDecimal.valueOf(14.49));
                            return null;
                        }
                    });
                }
            });
        } catch (Exception e) {
            fail(e.getMessage());
        }
        OrderLine orderLine = new OrderLine(product);
        session.persist(orderLine);
        return null;
    }
});

The test generates the following output:

#Alice selects a Product
Query:{[select as id1_1_0_, abstractlo0_.description as descript2_1_0_, abstractlo0_.price as price3_1_0_, abstractlo0_.version as version4_1_0_ from product abstractlo0_ where][1]}

#The price engine selects the Product as well
Query:{[select as id1_1_0_, abstractlo0_.description as descript2_1_0_, abstractlo0_.price as price3_1_0_, abstractlo0_.version as version4_1_0_ from product abstractlo0_ where][1]}

#The price engine changes the Product price
Query:{[update product set description=?, price=?, version=? where id=? and version=?][USB Flash Drive,14.49,1,1,0]}

#The price engine transaction is committed
DEBUG [pool-2-thread-1]: o.h.e.t.i.j.JdbcTransaction - committed JDBC Connection

#Alice inserts an OrderLine without realizing the Product price change
Query:{[insert into order_line (id, product_id, unitPrice, version) values (default, ?, ?, ?)][1,12.99,0]}

#Alice transaction is committed unaware of the Product state change
DEBUG [main]: o.h.e.t.i.j.JdbcTransaction - committed JDBC Connection

The implicit optimistic locking mechanism cannot detect external changes, unless the entities are also changed by the current Persistence Context.
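What we are after is a commit-time re-check of the Product version. That idea can be sketched in plain Java, with no Hibernate involved; everything here (class, method and map names) is a made-up illustration of the concept, not Hibernate's actual code:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OptimisticCheckDemo {
    // A toy "database" holding the current version of each product row.
    public static final Map<Long, Integer> PRODUCT_VERSIONS = new ConcurrentHashMap<>();

    // Verify, at commit time, that the version read earlier is still the latest.
    // Any mismatch means another transaction changed the row and we must abort.
    public static void verifyVersion(long productId, int versionReadAtLoadTime) {
        int latest = PRODUCT_VERSIONS.get(productId);
        if (latest != versionReadAtLoadTime) {
            throw new IllegalStateException(
                "Newer version [" + latest + "] of entity [Product#" + productId + "] found in database");
        }
    }

    public static void main(String[] args) {
        PRODUCT_VERSIONS.put(1L, 0);          // Alice loads Product#1 at version 0
        int aliceVersion = 0;
        PRODUCT_VERSIONS.put(1L, 1);          // the price engine commits, bumping the version
        try {
            verifyVersion(1L, aliceVersion);  // Alice's commit-time check fails
        } catch (IllegalStateException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```

Even in this toy version, the check and whatever "commit" follows it are two separate steps, the same non-atomicity discussed at the end of this article.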
To protect against issuing an Order for a stale Product state, we need to apply an explicit lock on the Product entity.

Explicit locking to the rescue

The Java Persistence LockModeType.OPTIMISTIC is a suitable candidate for such scenarios, so we are going to put it to a test. Hibernate comes with a LockModeConverter utility, which is able to map any Java Persistence LockModeType to its associated Hibernate LockMode. For simplicity's sake, we are going to use the Hibernate-specific LockMode.OPTIMISTIC, which is effectively identical to its Java Persistence counterpart. According to the Hibernate documentation, the explicit OPTIMISTIC Lock Mode will:

assume that transaction(s) will not experience contention for entities. The entity version will be verified near the transaction end.

I will adjust our test case to use explicit OPTIMISTIC locking instead:

try {
    doInTransaction(new TransactionCallable<Void>() {
        @Override
        public Void execute(Session session) {
            final Product product = (Product) session.get(Product.class, 1L,
                new LockOptions(LockMode.OPTIMISTIC));

            executeAndWait(new Callable<Void>() {
                @Override
                public Void call() throws Exception {
                    return doInTransaction(new TransactionCallable<Void>() {
                        @Override
                        public Void execute(Session _session) {
                            Product _product = (Product) _session.get(Product.class, 1L);
                            assertNotSame(product, _product);
                            _product.setPrice(BigDecimal.valueOf(14.49));
                            return null;
                        }
                    });
                }
            });

            OrderLine orderLine = new OrderLine(product);
            session.persist(orderLine);
            return null;
        }
    });
    fail("It should have thrown OptimisticEntityLockException!");
} catch (OptimisticEntityLockException expected) {"Failure: ", expected);
}

The new test version generates the following output:

#Alice selects a Product
Query:{[select as id1_1_0_, abstractlo0_.description as descript2_1_0_, abstractlo0_.price as price3_1_0_, abstractlo0_.version as version4_1_0_ from product abstractlo0_ where][1]}

#The price engine selects the Product as well
Query:{[select as id1_1_0_,
abstractlo0_.description as descript2_1_0_, abstractlo0_.price as price3_1_0_, abstractlo0_.version as version4_1_0_ from product abstractlo0_ where][1]}

#The price engine changes the Product price
Query:{[update product set description=?, price=?, version=? where id=? and version=?][USB Flash Drive,14.49,1,1,0]}

#The price engine transaction is committed
DEBUG [pool-1-thread-1]: o.h.e.t.i.j.JdbcTransaction - committed JDBC Connection

#Alice inserts an OrderLine
Query:{[insert into order_line (id, product_id, unitPrice, version) values (default, ?, ?, ?)][1,12.99,0]}

#Alice transaction verifies the Product version
Query:{[select version from product where id =?][1]}

#Alice transaction is rolled back due to Product version mismatch
INFO [main]: c.v.h.m.l.c.LockModeOptimisticTest - Failure: org.hibernate.OptimisticLockException: Newer version [1] of entity [[com.vladmihalcea.hibernate.masterclass.laboratory.concurrency.AbstractLockModeOptimisticTest$Product#1]] found in database

The operation flow goes like this: the Product version is checked towards the transaction end, and any version mismatch triggers an exception and a transaction rollback.

Race condition risk

Unfortunately, the application-level version check and the transaction commit are not an atomic operation.
The check happens in EntityVerifyVersionProcess, during the before-transaction-commit stage:

public class EntityVerifyVersionProcess implements BeforeTransactionCompletionProcess {
    private final Object object;
    private final EntityEntry entry;

    /**
     * Constructs an EntityVerifyVersionProcess
     *
     * @param object The entity instance
     * @param entry The entity's referenced EntityEntry
     */
    public EntityVerifyVersionProcess(Object object, EntityEntry entry) {
        this.object = object;
        this.entry = entry;
    }

    @Override
    public void doBeforeTransactionCompletion(SessionImplementor session) {
        final EntityPersister persister = entry.getPersister();

        final Object latestVersion = persister.getCurrentVersion(entry.getId(), session);
        if (!entry.getVersion().equals(latestVersion)) {
            throw new OptimisticLockException(
                object,
                "Newer version [" + latestVersion +
                "] of entity [" + MessageHelper.infoString(entry.getEntityName(), entry.getId()) +
                "] found in database"
            );
        }
    }
}

The AbstractTransactionImpl.commit() method call will execute the before-transaction-commit stage and then commit the actual transaction:

@Override
public void commit() throws HibernateException {
    if (localStatus != LocalStatus.ACTIVE) {
        throw new TransactionException("Transaction not successfully started");
    }

    LOG.debug("committing");

    beforeTransactionCommit();

    try {
        doCommit();
        localStatus = LocalStatus.COMMITTED;
        afterTransactionCompletion(Status.STATUS_COMMITTED);
    }
    catch (Exception e) {
        localStatus = LocalStatus.FAILED_COMMIT;
        afterTransactionCompletion(Status.STATUS_UNKNOWN);
        throw new TransactionException("commit failed", e);
    }
    finally {
        invalidate();
        afterAfterCompletion();
    }
}

Between the check and the actual transaction commit, there is a very short time window for some other transaction to silently commit a Product price change.

Conclusion

The explicit OPTIMISTIC locking strategy offers limited protection against stale state anomalies.
This race condition is a typical case of a time-of-check-to-time-of-use data integrity anomaly. In my next article, I will explain how we can save this example using the explicit lock upgrade technique. Code available on GitHub.

Reference: Hibernate locking patterns – How does Optimistic Lock Mode work from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog....

Learning Netflix Governator – Part 2

To continue from the previous entry on some basic learnings on Netflix Governator, here I will cover one more enhancement that Netflix Governator brings to Google Guice – lifecycle management.

Lifecycle management essentially provides hooks into the different lifecycle phases that an object is taken through. To quote the wiki article on Governator:

Allocation (via Guice)
 |
 v
Pre Configuration
 |
 v
Configuration
 |
 V
Set Resources
 |
 V
Post Construction
 |
 V
Validation and Warm Up
 |
 V
-- application runs until termination, then... --
 |
 V
Pre Destroy

To illustrate this, consider the following code:

import com.netflix.governator.annotations.AutoBindSingleton;
import sample.dao.BlogDao;
import sample.model.BlogEntry;
import sample.service.BlogService;

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.inject.Inject;

@AutoBindSingleton(baseClass = BlogService.class)
public class DefaultBlogService implements BlogService {
    private final BlogDao blogDao;

    @Inject
    public DefaultBlogService(BlogDao blogDao) {
        this.blogDao = blogDao;
    }

    @Override
    public BlogEntry get(long id) {
        return this.blogDao.findById(id);
    }

    @PostConstruct
    public void postConstruct() {
        System.out.println("Post-construct called!!");
    }

    @PreDestroy
    public void preDestroy() {
        System.out.println("Pre-destroy called!!");
    }
}

Here two methods have been annotated with the @PostConstruct and @PreDestroy annotations to hook into these specific phases of Governator's lifecycle for this object. The neat thing is that these annotations are not Governator-specific but are JSR-250 annotations that are now baked into the JDK.
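How a container discovers such methods is plain reflection. Below is a stripped-down, hypothetical sketch of the discover-and-invoke step; the annotation is a local stand-in (the real ones live in javax.annotation) and none of this is Governator's actual implementation:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class LifecycleDemo {
    // Hypothetical stand-in for javax.annotation.PostConstruct.
    @Retention(RetentionPolicy.RUNTIME)
    @interface PostConstruct {}

    // Scan an object's methods and invoke the ones carrying the annotation,
    // which is essentially what a lifecycle manager does after injection.
    public static void firePostConstruct(Object bean) {
        for (Method m : bean.getClass().getDeclaredMethods()) {
            if (m.isAnnotationPresent(PostConstruct.class)) {
                try {
                    m.invoke(bean);
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(e);
                }
            }
        }
    }

    public static class Service {
        public boolean initialized;

        @PostConstruct
        public void init() { initialized = true; }
    }

    public static void main(String[] args) {
        Service s = new Service();
        firePostConstruct(s);
        System.out.println(s.initialized); // true
    }
}
```

A real container adds ordering, inheritance handling and error reporting on top, but the core mechanism is just this annotation scan.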
Calling the test for this class appropriately calls the annotated methods. Here is a sample test:

import com.google.inject.Injector;
import com.netflix.governator.guice.LifecycleInjector;
import com.netflix.governator.lifecycle.LifecycleManager;
import org.junit.Test;
import sample.service.BlogService;

import static org.hamcrest.MatcherAssert.*;
import static org.hamcrest.Matchers.*;

public class SampleWithGovernatorTest {

    @Test
    public void testExampleBeanInjection() throws Exception {
        Injector injector = LifecycleInjector
                .builder()
                .withModuleClass(SampleModule.class)
                .usingBasePackages("")
                .build()
                .createInjector();

        LifecycleManager manager = injector.getInstance(LifecycleManager.class);

        manager.start();

        BlogService blogService = injector.getInstance(BlogService.class);
        assertThat(blogService.get(1l), is(notNullValue()));
        manager.close();
    }

}

Spring Framework has supported a similar mechanism for a long time, so the exact same JSR-250 based annotations work for Spring beans too. If you are interested in exploring this further, here is my github project with Lifecycle management samples.

Reference: Learning Netflix Governator – Part 2 from our JCG partner Biju Kunjummen at the all and sundry blog....

SSL with WildFly 8 and Undertow

I've been working my way through some security topics with WildFly 8 and stumbled upon some configuration options that are not very well documented. One of them is the TLS/SSL configuration for the new web subsystem, Undertow. There's plenty of documentation for the older web subsystem, which is indeed still available to use, but here is a short how-to for configuring it the new way.

Generate a keystore and self-signed certificate

The first step is to generate a certificate. In this case, it's going to be a self-signed one, which is enough to show how to configure everything. I'm going to use the plain Java way of doing it, so all you need is the JRE keytool. Java Keytool is a key and certificate management utility. It allows users to manage their own public/private key pairs and certificates. It also allows users to cache certificates. Java Keytool stores the keys and certificates in what is called a keystore. By default the Java keystore is implemented as a file. It protects private keys with a password. A Keytool keystore contains the private key and any certificates necessary to complete a chain of trust and establish the trustworthiness of the primary certificate.

Please keep in mind that an SSL certificate serves two essential purposes: distributing the public key and verifying the identity of the server so users know they aren't sending their information to the wrong server. It can only properly verify the identity of the server when it is signed by a trusted third party. A self-signed certificate is a certificate that is signed by itself rather than by a trusted authority.

Switch to a command line and execute the following command, which has some defaults set and also prompts you to enter some more information:

$> keytool -genkey -alias mycert -keyalg RSA -sigalg MD5withRSA -keystore my.jks -storepass secret -keypass secret -validity 9999

What is your first and last name?
  [Unknown]:  localhost
What is the name of your organizational unit?
  [Unknown]:  myfear
What is the name of your organization?
  [Unknown]:
What is the name of your City or Locality?
  [Unknown]:  Grasbrun
What is the name of your State or Province?
  [Unknown]:  Bavaria
What is the two-letter country code for this unit?
  [Unknown]:  ME
Is CN=localhost, OU=myfear,, L=Grasbrun, ST=Bavaria, C=ME correct?
  [no]:  yes

Make sure to put your desired "hostname" into the "first and last name" field, otherwise you might run into issues while permanently accepting this certificate as an exception in some browsers. Chrome doesn't have an issue with that, though. The command generates a my.jks file in the folder where it is executed. Copy it to your WildFly config directory (%JBOSS_HOME%/standalone/configuration).

Configure The Additional WildFly Security Realm

The next step is to configure the new keystore as a server identity for ssl in the WildFly security-realms section of the standalone.xml (if you're using the -ha or other variants, edit those):

<management>
    <security-realms>
        <!-- ... -->
        <security-realm name="UndertowRealm">
            <server-identities>
                <ssl>
                    <keystore path="my.jks" relative-to="jboss.server.config.dir" keystore-password="secret" alias="mycert" key-password="secret"/>
                </ssl>
            </server-identities>
        </security-realm>
        <!-- ... -->
    </security-realms>
</management>

And you're ready for the next step.

Configure Undertow Subsystem for SSL

If you're running with the default-server, add the https-listener to the undertow subsystem:

<subsystem xmlns="urn:jboss:domain:undertow:1.2">
    <!-- ... -->
    <server name="default-server">
        <!-- ... -->
        <https-listener name="https" socket-binding="https" security-realm="UndertowRealm"/>
        <!-- ... -->
    </server>
</subsystem>

That's it; now you're ready to connect to the SSL port of your instance at https://localhost:8443/. Note that you get the privacy error (compare screenshot).
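As a side note, the my.jks file keytool produced is an ordinary JKS keystore that the JDK's KeyStore API can read and write as well. The sketch below creates and reloads an empty in-memory JKS just to demonstrate the calls; it is plain JDK, nothing WildFly-specific, and the password only mirrors the example above:

```java

public class KeystoreDemo {
    static final char[] PASSWORD = "secret".toCharArray();

    // Create an empty JKS keystore in memory, write it out, and read it back.
    // The same load/store calls work against the my.jks file on disk.
    public static KeyStore roundTrip() {
        try {
            KeyStore ks = KeyStore.getInstance("JKS");
            ks.load(null, PASSWORD);                  // load(null, ...) initializes a new, empty store
            ByteArrayOutputStream out = new ByteArrayOutputStream();
  , PASSWORD);                  // analogous to keytool writing my.jks

            KeyStore reloaded = KeyStore.getInstance("JKS");
            reloaded.load(new ByteArrayInputStream(out.toByteArray()), PASSWORD);
            return reloaded;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        KeyStore ks = roundTrip();
        System.out.println("entries: " + ks.size());                     // 0 for our empty store
        System.out.println("has mycert: " + ks.containsAlias("mycert")); // false here
    }
}
```

Pointing the same load call at my.jks with the store password would let you verify the mycert alias programmatically before wiring it into the security realm.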
If you need to use a fully signed certificate, you usually get a PEM file from the certificate authority. In this case, you need to import it into the keystore. This stackoverflow thread may help you with that.

Reference: SSL with WildFly 8 and Undertow from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

Display a string list in an Android ListView

Showing a list of items is a very common pattern in mobile applications. This pattern comes up often when I make a tutorial: I often need to interact with data, but I don't want to spend a lot of time just on displaying that data when that's not the point of the tutorial. So, what is the easiest way to display a simple list of values in Android, like a list of strings?

In the Android SDK, the widget used to show lists of items is a ListView. A listview must always get its data from an adapter class. That adapter class manages the layout used to display each individual item, how it should behave and the data itself. All the other widgets that display multiple items in the Android SDK, like the spinner and the grid, also need an adapter.

When I was making the knitting row counter for my series on saving data with Android, I needed to show a list of all the projects in the database, but I wanted to do the absolute minimum. The names of the projects are strings, so I used the ArrayAdapter class from the Android SDK to display that list of strings. Here is how to create the adapter and set it on the listview to display the list of items:

private ListView mListView;

@Override
protected void onStart() {
    super.onStart();

    // Add the project titles to display in a list for the listview adapter.
    List<String> listViewValues = new ArrayList<String>();
    for (Project currentProject : mProjects) {
        listViewValues.add(currentProject.getName());
    }

    // Initialise a listview adapter with the project titles and use it
    // in the listview to show the list of projects.
    mListView = (ListView) findViewById(;
    ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
            android.R.layout.simple_list_item_1,,
            listViewValues.toArray(new String[listViewValues.size()]));
    mListView.setAdapter(adapter);
}

After the adapter for the list is set, you can also add an action to execute when an item is clicked.
For the row counter application, clicking an item opens a new activity showing the details of the selected project.

private ListView mListView;

@Override
protected void onStart() {
    [...]

    // Sets a click listener on the elements of the listview so a
    // message can be shown for each project.
    mListView.setOnItemClickListener(new OnItemClickListener() {

        @Override
        public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
            // Get the clicked project.
            Project project = mProjects.get(position);
            // Open the activity for the selected project.
            Intent projectIntent = new Intent(MainActivity.this, ProjectActivity.class);
            projectIntent.putExtra("project_id", project.getId());
            MainActivity.this.startActivity(projectIntent);
        }
    });
}

If you need to go further than the default layout, you'll need to create your own custom layout and adapter to show the data the way you want, but what is shown here is enough to get started displaying data. If you want to run the example, you can find the complete RowCounter project on my GitHub; the listview code is in the file.

Reference: Display a string list in an Android ListView from our JCG partner Cindy Potvin at the Web, Mobile and Android Programming blog....

Self-Signed Certificate for Apache TomEE (and Tomcat)

Probably in most of your Java EE projects you will have part or the whole system with SSL support (https), so browsers and servers can communicate over a secured connection. This means that the data being sent is encrypted, transmitted and finally decrypted before being processed.

The problem is that sometimes the official "keystore" is only available for the production environment and cannot be used on development/testing machines. Then one possible step is creating a non-official "keystore" by one member of the team and sharing it with all members, so everyone can test locally using https, and the same for testing/QA environments. But using this approach you are running into one problem: when you run the application, you will receive a warning/error message that the certificate is untrusted. You can live with this, but we can also do better and avoid this situation by creating a self-signed SSL certificate.

In this post we are going to see how to create and enable SSL in Apache TomEE (and Tomcat) with a self-signed certificate.

The first thing to do is to install openssl. This step will depend on your OS. In my case I run Ubuntu 14.04. Then we need to generate a 1024-bit RSA private key, encrypted with the Triple-DES algorithm and stored in PEM format. I am going to use the {userhome}/certs directory for all required resources, but it can be changed without any problem.

Generate Private Key

openssl genrsa -des3 -out server.key 1024

Here we must introduce a password; for this example I am going to use apachetomee (please don't do that in production).

Generate CSR

The next step is to generate a CSR (Certificate Signing Request). Ideally this file would be generated and sent to a Certificate Authority such as Thawte or Verisign, who would verify the identity. But in our case we are going to self-sign the CSR with the previous private key.

openssl req -new -key server.key -out server.csr

One of the prompts will be for "Common Name (e.g. server FQDN or YOUR name)".
It is important that this field be filled in with the fully qualified domain name of the server to be protected by SSL. In the case of a development machine you can set "localhost". Now that we have the private key and the CSR, we are ready to generate an X.509 self-signed certificate valid for one year by running the next command:

Generate a Self-Signed Certificate

openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

To install the certificate inside Apache TomEE (and Tomcat) we need to use a keystore. This keystore is generated using the keytool command. To use this tool, the certificate should be a PKCS12 certificate. For this reason we are going to use openssl to transform the certificate to PKCS12 format by running:

Prepare for Apache TomEE

openssl pkcs12 -export -in server.crt -inkey server.key -out server.p12 -name test_server -caname root_ca

We are almost done; now we only need to create the keystore. I have used the same password to protect the keystore as for all other resources, which is apachetomee.

keytool -importkeystore -destkeystore keystore.jks -srckeystore server.p12 -srcstoretype PKCS12 -srcalias test_server -destalias test_server

And now we have a keystore.jks file created at {userhome}/certs.

Installing Keystore into Apache TomEE

The process of installing a keystore into Apache TomEE (and Tomcat) is described in the Tomcat documentation. But in summary, the only thing to do is open ${TOMEE_HOME}/conf/server.xml and define the SSL connector:

<Service name="Catalina">
    <Connector port="8443" protocol="HTTP/1.1"
               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
               keystoreFile="${user.home}/certs/keystore.jks" keystorePass="apachetomee"
               clientAuth="false" sslProtocol="TLS" />
</Service>

Note that you need to set the keystore location (in my case {userhome}/certs/keystore.jks) and the password to be used to open the keystore, which is apachetomee.
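If plain Java client code (rather than a browser) needs to call the https endpoint during tests, one common workaround for the self-signed certificate is an SSLContext whose trust manager accepts any chain. This is a sketch for test environments only, never for production, and it is not part of the TomEE setup itself:

```java
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class TrustAllDemo {
    // Build an SSLContext that trusts every certificate chain.
    // Suitable only for talking to a dev server with a self-signed cert.
    public static SSLContext trustAllContext() {
        TrustManager[] trustAll = new TrustManager[] {
            new X509TrustManager() {
                public void checkClientTrusted(X509Certificate[] chain, String authType) {}
                public void checkServerTrusted(X509Certificate[] chain, String authType) {}
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
            }
        };
        try {
            SSLContext context = SSLContext.getInstance("TLS");
            context.init(null, trustAll, new SecureRandom());
            return context;
        } catch (GeneralSecurityException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Install for all HttpsURLConnection instances in this JVM.
        HttpsURLConnection.setDefaultSSLSocketFactory(trustAllContext().getSocketFactory());
        System.out.println(trustAllContext().getProtocol()); // TLS
    }
}
```

Importing server.crt into the browser (next section) is the cleaner option when the client is a browser; the trust-all context above only serves headless test clients.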
Preparing the Browser

Before starting the server we need to add server.crt as a valid authority in the browser.

In Firefox: Firefox Preferences -> Advanced -> View Certificates -> Authorities (tab) and then import the server.crt file.

In Chrome: Settings -> HTTPS/SSL -> Manage Certificates … -> Authorities (tab) and then import the server.crt file.

And now you are ready to start Apache TomEE (or Tomcat), and you can navigate to any deployed application using https and port 8443. And that's all: now we can run tests (with Selenium) without worrying about untrusted certificate warnings.

Reference: Self-Signed Certificate for Apache TomEE (and Tomcat) from our JCG partner Alex Soto at the One Jar To Rule Them All blog....

NoSQL with Hibernate OGM – Part one: Persisting your first Entities

The first final version of Hibernate OGM is out and the team has recovered a bit from the release frenzy. So they thought about starting a series of tutorial-style blogs which give you the chance to start over easily with Hibernate OGM. Thanks to Gunnar Morling (@gunnarmorling) for creating this tutorial.

Introduction

Don't know what Hibernate OGM is? Hibernate OGM is the newest project under the Hibernate umbrella and allows you to persist entity models in different NoSQL stores via the well-known JPA. We'll cover these topics in the following weeks:

Persisting your first entities (this instalment)
Querying for your data
Running on WildFly
Running with CDI on Java SE
Storing data into two different stores in the same application

If you'd like us to discuss any other topics, please let us know. Just add a comment below or tweet your suggestions to us.

In this first part of the series we are going to set up a Java project with the required dependencies, create some simple entities and write/read them to and from the store. We'll start with the Neo4j graph database and then we'll switch to the MongoDB document store with only a small configuration change.

Project set-up

Let's first create a new Java project with the required dependencies. We're going to use Maven as a build tool in the following, but of course Gradle or others would work equally well. Add this to the dependencyManagement block of your pom.xml:

<dependencyManagement>
    <dependencies>
        ...
        <dependency>
            <groupId>org.hibernate.ogm</groupId>
            <artifactId>hibernate-ogm-bom</artifactId>
            <type>pom</type>
            <version>4.1.1.Final</version>
            <scope>import</scope>
        </dependency>
        ...
    </dependencies>
</dependencyManagement>

This will make sure that you are using matching versions of the Hibernate OGM modules and their dependencies. Then add the following to the dependencies block:

<dependencies>
    ...
    <dependency>
        <groupId>org.hibernate.ogm</groupId>
        <artifactId>hibernate-ogm-neo4j</artifactId>
    </dependency>
    <dependency>
        <groupId>org.jboss.jbossts</groupId>
        <artifactId>jbossjta</artifactId>
    </dependency>
    ...
</dependencies>

The dependencies are:

The Hibernate OGM module for working with an embedded Neo4j database; this will pull in all other required modules such as Hibernate OGM core and the Neo4j driver. When using MongoDB, you'd swap that with hibernate-ogm-mongodb.
JBoss' implementation of the Java Transaction API (JTA), which is needed when not running within a Java EE container such as WildFly.

The domain model

Our example domain model is made up of three classes: Hike, HikeSection and Person. There is a composition relationship between Hike and HikeSection, i.e. a hike comprises several sections whose life cycle is fully dependent on the Hike. The list of hike sections is ordered; this order needs to be maintained when persisting a hike and its sections. The association between Hike and Person (acting as hike organizer) is a bi-directional many-to-one/one-to-many relationship: one person can organize zero or more hikes, whereas one hike has exactly one person acting as its organizer.

Mapping the entities

Now let's map the domain model by creating the entity classes and annotating them with the required meta-data. Let's start with the Person class:

@Entity
public class Person {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String firstName;
    private String lastName;

    @OneToMany(mappedBy = "organizer", cascade = CascadeType.PERSIST)
    private Set<Hike> organizedHikes = new HashSet<>();

    // constructors, getters and setters...
}

The entity type is marked as such using the @Entity annotation, while the property representing the identifier is annotated with @Id.
Instead of assigning ids manually, Hibernate OGM can take care of this, offering several id generation strategies such as (emulated) sequences, UUIDs and more. Using a UUID generator is usually a good choice as it ensures portability across different NoSQL datastores and makes id generation fast and scalable. But depending on the store you work with, you also could use specific id types such as object ids in the case of MongoDB (see the reference guide for the details).

Finally, @OneToMany marks the organizedHikes property as an association between entities. As it is a bi-directional association, the mappedBy attribute is required for specifying the side of the association which is in charge of managing it. Specifying the cascade type PERSIST ensures that persisting a person will automatically cause its associated hikes to be persisted, too.

Next is the Hike class:

@Entity
public class Hike {

    @Id
    @GeneratedValue(generator = "uuid")
    @GenericGenerator(name = "uuid", strategy = "uuid2")
    private String id;

    private String description;
    private Date date;
    private BigDecimal difficulty;

    @ManyToOne
    private Person organizer;

    @ElementCollection
    @OrderColumn(name = "sectionNo")
    private List<HikeSection> sections;

    // constructors, getters and setters...
}

Here the @ManyToOne annotation marks the other side of the bi-directional association between Hike and Organizer. As HikeSection is supposed to be dependent on Hike, the sections list is mapped via @ElementCollection. To ensure the order of sections is maintained in the datastore, @OrderColumn is used. This will add one extra "column" to the persisted records which holds the order number of each section.

Finally, the HikeSection class:

@Embeddable
public class HikeSection {

    private String start;
    private String end;

    // constructors, getters and setters...
}

Unlike Person and Hike, it is not mapped via @Entity but using @Embeddable.
This means it is always part of another entity (Hike in this case) and as such also has no identity of its own. Therefore it doesn't declare any @Id property.

Note that these mappings would look exactly the same had you been using Hibernate ORM with a relational datastore. And indeed that's one of the promises of Hibernate OGM: make the migration between the relational and the NoSQL paradigms as easy as possible!

Creating the persistence.xml

With the entity classes in place, one more thing is missing: JPA's persistence.xml descriptor. Create it under src/main/resources/META-INF/persistence.xml:

<?xml version="1.0" encoding="utf-8"?>

<persistence xmlns=""
    xmlns:xsi=""
    version="2.0">

    <persistence-unit name="hikePu" transaction-type="RESOURCE_LOCAL">
        <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>

        <properties>
            <property name="hibernate.ogm.datastore.provider" value="neo4j_embedded" />
            <property name="hibernate.ogm.datastore.database" value="HikeDB" />
            <property name="hibernate.ogm.neo4j.database_path" value="target/test_data_dir" />
        </properties>
    </persistence-unit>
</persistence>

If you have worked with JPA before, this persistence unit definition should look very familiar to you. The main difference to using classic Hibernate ORM on top of a relational database is the specific provider class we need to specify for Hibernate OGM: org.hibernate.ogm.jpa.HibernateOgmPersistence.

In addition, some properties specific to Hibernate OGM and the chosen back end are defined to set:

the back end to use (an embedded Neo4j graph database in this case)
the name of the Neo4j database
the directory for storing the Neo4j database files

Depending on your usage and the back end, other properties might be required, e.g. for setting a host, user name, password etc. You can find all available properties in a class named <BACK END>Properties, e.g. Neo4jProperties, MongoDBProperties and so on.
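Coming back to the uuid2 identifier strategy used in the mappings above: the reason UUIDs port well across NoSQL stores is that they can be generated client-side, with no datastore round-trip or sequence table. Plain java.util.UUID (the type Hibernate's uuid2 generator produces values from) shows the idea:

```java
import java.util.UUID;

public class UuidIdDemo {
    public static void main(String[] args) {
        // Each call yields a random (version 4) UUID: no sequence table,
        // no coordination with the datastore, safe across nodes.
        UUID id = UUID.randomUUID();
        System.out.println(id);                      // e.g. 8f14e45f-...-d72ad1c6b9f3
        System.out.println(id.version());            // 4
        System.out.println(id.toString().length());  // 36
    }
}
```

The 36-character string form is what ends up stored as the entity's id, which is why the id fields above are declared as String.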
Saving and loading an entity

With all these bits in place it's time to persist (and load) some entities. Create a simple JUnit test shell for doing so:

public class HikeTest {

    private static EntityManagerFactory entityManagerFactory;

    @BeforeClass
    public static void setUpEntityManagerFactory() {
        entityManagerFactory = Persistence.createEntityManagerFactory( "hikePu" );
    }

    @AfterClass
    public static void closeEntityManagerFactory() {
        entityManagerFactory.close();
    }
}

The two methods manage an entity manager factory for the persistence unit defined in persistence.xml. It is kept in a field so it can be used for several test methods (remember, entity manager factories are rather expensive to create, so they should be initialized once and be kept around for re-use).

Then create a test method persisting and loading some data:

@Test
public void canPersistAndLoadPersonAndHikes() {
    EntityManager entityManager = entityManagerFactory.createEntityManager();

    entityManager.getTransaction().begin();

    // create a Person
    Person bob = new Person( "Bob", "McRobb" );

    // and two hikes
    Hike cornwall = new Hike(
        "Visiting Land's End", new Date(), new BigDecimal( "5.5" ),
        new HikeSection( "Penzance", "Mousehole" ),
        new HikeSection( "Mousehole", "St. Levan" ),
        new HikeSection( "St. Levan", "Land's End" )
    );
    Hike isleOfWight = new Hike(
        "Exploring Carisbrooke Castle", new Date(), new BigDecimal( "7.5" ),
        new HikeSection( "Freshwater", "Calbourne" ),
        new HikeSection( "Calbourne", "Carisbrooke Castle" )
    );

    // let Bob organize the two hikes
    cornwall.setOrganizer( bob );
    bob.getOrganizedHikes().add( cornwall );

    isleOfWight.setOrganizer( bob );
    bob.getOrganizedHikes().add( isleOfWight );

    // persist organizer (will be cascaded to hikes)
    entityManager.persist( bob );

    entityManager.getTransaction().commit();

    // get a new EM to make sure data is actually retrieved from the store and not Hibernate's internal cache
    entityManager.close();
    entityManager = entityManagerFactory.createEntityManager();

    // load it back
    entityManager.getTransaction().begin();

    Person loadedPerson = entityManager.find( Person.class, bob.getId() );
    assertThat( loadedPerson ).isNotNull();
    assertThat( loadedPerson.getFirstName() ).isEqualTo( "Bob" );
    assertThat( loadedPerson.getOrganizedHikes() ).onProperty( "description" ).containsOnly( "Visiting Land's End", "Exploring Carisbrooke Castle" );

    entityManager.getTransaction().commit();

    entityManager.close();
}

Note how both actions happen within a transaction. Neo4j is a fully transactional datastore which can be controlled nicely via JPA's transaction API. Within an actual application one would probably work with a less verbose approach for transaction control. Depending on the chosen back end and the kind of environment your application runs in (e.g. a Java EE container such as WildFly), you could take advantage of declarative transaction management via CDI or EJB. But let's save that for another time.

Having persisted some data, you can examine it using the nice web console coming with Neo4j, which shows the entities persisted by the test. Hibernate OGM aims for the most natural mapping possible for the datastore you are targeting.
In the case of Neo4j as a graph datastore this means that each entity is mapped to a corresponding node. The entity properties are mapped as node properties (see the black box describing one of the Hike nodes). Any property types not natively supported are converted as required; that's the case, for example, for the date property, which is persisted as an ISO-formatted String. Additionally, each entity node has the label ENTITY (to distinguish it from nodes of other types) and a label specifying its entity type (Hike in this case).

Associations are mapped as relationships between nodes, with the association role being mapped to the relationship type.

Note that Neo4j does not have the notion of embedded objects. Therefore, the HikeSection objects are mapped as nodes with the label EMBEDDED, linked with the owning Hike nodes. The order of sections is persisted via a property on the relationship.

Switching to MongoDB

One of Hibernate OGM's promises is to allow using the same API – namely, JPA – to work with different NoSQL stores. So let's see how that holds up and make use of MongoDB which, unlike Neo4j, is a document datastore and persists data in a JSON-like representation. To do so, first replace the Neo4j back end with the MongoDB one:

```xml
...
<dependency>
    <groupId>org.hibernate.ogm</groupId>
    <artifactId>hibernate-ogm-mongodb</artifactId>
</dependency>
...
```

Then update the configuration in persistence.xml to work with MongoDB as the back end, using the properties accessible through MongoDBProperties to give a host name and credentials matching your environment (if you don't have MongoDB installed yet, you can download it here):

```xml
...
<properties>
    <property name="hibernate.ogm.datastore.provider" value="mongodb" />
    <property name="hibernate.ogm.datastore.database" value="HikeDB" />
    <property name="hibernate.ogm.datastore.host" value="mongodb.mycompany.com" />
    <property name="hibernate.ogm.datastore.username" value="db_user" />
    <property name="hibernate.ogm.datastore.password" value="top_secret!" />
</properties>
...
```
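As a side note, since these are plain string properties, the same configuration could also be supplied programmatically via JPA's standard Persistence.createEntityManagerFactory(String, Map) overload, which accepts a map of properties overriding persistence.xml. A small sketch of that idea (the host value is just a placeholder for your environment; this helper is not part of the example project):

```java
import java.util.HashMap;
import java.util.Map;

class MongoDbSettings {

    // The same MongoDB settings as in persistence.xml, built as a map;
    // it could be passed to Persistence.createEntityManagerFactory("hikePu", settings)
    // to override the XML configuration.
    static Map<String, String> settings() {
        Map<String, String> settings = new HashMap<>();
        settings.put("hibernate.ogm.datastore.provider", "mongodb");
        settings.put("hibernate.ogm.datastore.database", "HikeDB");
        settings.put("hibernate.ogm.datastore.host", "localhost"); // placeholder
        settings.put("hibernate.ogm.datastore.username", "db_user");
        settings.put("hibernate.ogm.datastore.password", "top_secret!");
        return settings;
    }
}
```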
And that's all you need to do to persist your entities in MongoDB rather than Neo4j. If you now run the test again, you'll find the following BSON documents in your datastore:

```
# Collection "Person"
{
    "_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0",
    "firstName" : "Bob",
    "lastName" : "McRobb",
    "organizedHikes" : [
        "a78d731f-eff0-41f5-88d6-951f0206ee67",
        "32384eb4-717a-43dc-8c58-9aa4c4e505d1"
    ]
}

# Collection "Hike"
{
    "_id" : "a78d731f-eff0-41f5-88d6-951f0206ee67",
    "date" : ISODate("2015-01-16T11:59:48.928Z"),
    "description" : "Visiting Land's End",
    "difficulty" : "5.5",
    "organizer_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0",
    "sections" : [
        { "sectionNo" : 0, "start" : "Penzance", "end" : "Mousehole" },
        { "sectionNo" : 1, "start" : "Mousehole", "end" : "St. Levan" },
        { "sectionNo" : 2, "start" : "St. Levan", "end" : "Land's End" }
    ]
}
{
    "_id" : "32384eb4-717a-43dc-8c58-9aa4c4e505d1",
    "date" : ISODate("2015-01-16T11:59:48.928Z"),
    "description" : "Exploring Carisbrooke Castle",
    "difficulty" : "7.5",
    "organizer_id" : "50b62f9b-874f-4513-85aa-c2f59015a9d0",
    "sections" : [
        { "sectionNo" : 1, "start" : "Calbourne", "end" : "Carisbrooke Castle" },
        { "sectionNo" : 0, "start" : "Freshwater", "end" : "Calbourne" }
    ]
}
```

Again, the mapping is very natural and just as you'd expect when working with a document store like MongoDB. The bi-directional one-to-many/many-to-one association between Person and Hike is mapped by storing the referenced id(s) on either side. When loading the data back, Hibernate OGM will resolve the ids and let you navigate the association from one object to the other.

Element collections are mapped using MongoDB's capabilities for storing hierarchical structures. Here the sections of a hike are mapped to an array within the document of the owning hike, with an additional field sectionNo to maintain the collection order. This allows an entity and its embedded elements to be loaded very efficiently via a single round-trip to the datastore.
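Note in the second Hike document that the persisted array is not necessarily in section order (sectionNo 1 appears before 0); it is the sectionNo field that allows the collection order to be restored on load. Conceptually, this is just a sort on that field. The following is an illustration of the idea, not Hibernate OGM's actual code:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class SectionOrdering {

    // Simplified stand-in for one element of the "sections" array
    record Section(int sectionNo, String start, String end) {}

    // Restore the collection order from the persisted sectionNo field,
    // as any reader of the store would have to do
    static List<Section> inOrder(List<Section> stored) {
        List<Section> ordered = new ArrayList<>(stored);
        ordered.sort(Comparator.comparingInt(Section::sectionNo));
        return ordered;
    }

    public static void main(String[] args) {
        // The Isle of Wight hike's sections, in the order they
        // appear in the document above
        List<Section> stored = List.of(
                new Section(1, "Calbourne", "Carisbrooke Castle"),
                new Section(0, "Freshwater", "Calbourne"));

        // After sorting, the Freshwater -> Calbourne leg comes first
        System.out.println(inOrder(stored));
    }
}
```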
Wrap-up

In this first instalment of NoSQL with Hibernate OGM 101 you've learned how to set up a project with the required dependencies, map some entities and associations, and persist them in Neo4j and MongoDB. All of this happens via the well-known JPA API. So if you have worked with Hibernate ORM and JPA on top of relational databases in the past, it has never been easier to dive into the world of NoSQL.

At the same time, each store is geared towards certain use cases and thus provides specific features and configuration options. Naturally, those cannot be exposed through a generic API such as JPA. Therefore Hibernate OGM lets you use native NoSQL queries and allows store-specific settings to be configured via its flexible option system.

You can find the complete example code of this blog post on GitHub. Just fork it and play with it as you like. Of course storing entities and getting them back via their id is only the beginning. In any real application you'd want to run queries against your data, and you'd likely also want to take advantage of some specific features and settings of your chosen NoSQL store. We'll come to that in the next parts of this series, so stay tuned!

Reference: NoSQL with Hibernate OGM – Part one: Persisting your first Entities from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.