

A 3 Step Guide to Getting Started with NoSQL

I have been looking into NoSQL databases for a few months and would like to share my experience. This post might help you if you intend to start learning about NoSQL databases; I will try to link the resources I found useful here.

Step 1: What is NoSQL?

NoSQL DEFINITION: Next-generation databases mostly addressing some of these points: non-relational, distributed, open-source and horizontally scalable. The original intention was modern web-scale databases. The movement began in early 2009 and is growing rapidly. Often more characteristics apply, such as: schema-free, easy replication support, simple API, eventually consistent / BASE (not ACID), a huge amount of data, and more. Hence the misleading term 'nosql' (the community now translates it mostly as 'not only sql').

Martin Fowler's NoSQL page is a good starting point. His talk at the GOTO conference, linked there, explains the need for and the structure of NoSQL data stores. Martin and Pramod have written a book titled 'NoSQL Distilled: A Brief Guide to the Emerging World of Polyglot Persistence', which is a good read; it summarizes his talks and other blog posts. Martin has been an influential speaker on this topic and has written a number of articles on it. I have read and seen many introductions, but his work is what made things click for me. If you prefer slides, the presentation 'NOSQL for Dummies' by Tobias Lindaaker on SlideShare might inspire you; he presents similar ideas. MongoDB also has an online course, MongoDB for Java Developers, which is really useful if you are interested in trying things out.

Step 2: How and for what is NoSQL used in the real world?

Once you have some idea, try to find the usage patterns. The above presentations give a lot of information on how these systems are used. You could also go through the links below, which explain how specific business problems are solved using NoSQL.
This is important because we can easily relate to the case studies and get more insight into the capabilities of these systems.

- MongoDB Customers page
- Powered By Hadoop
- Neo4j Customers page

Step 3: Find usage patterns that you could work on!

Once you have reached this point, you should try to implement the concepts. Look back at the application you are working on and see if there is a need for an alternative data store. Do you store product recommendations? Do you have issues with heterogeneous data? Can your application trade the ACID model for scalability? Do you store XML files or images in your relational DB? These are some of the questions you could ask. This way you can determine whether a serious investigation of alternative persistence mechanisms is warranted. This by no means implies removing the RDBMS completely, but rather moving to a polyglot structure of data stores. If there is no opportunity to try out these concepts in your work, you could create your own test projects and implement them there. That way you will encounter problems and learn from them.

Reference: A 3 Step Guide to Getting Started with NoSQL from our JCG partner Manu PK at The Object Oriented Life blog.

Indexing data in Solr from disparate sources using Camel

Apache Solr is 'the popular, blazing fast open source enterprise search platform' built on top of Lucene. In order to do a search (and find results) there is the initial requirement of data ingestion, usually from disparate sources like content management systems, relational databases, legacy systems, you name it. Then there is also the challenge of keeping the index up to date by adding new data, updating existing records and removing obsolete data. The new sources of data could be the same as the initial ones, but could also be sources like Twitter, AWS or REST endpoints. Solr understands different file formats and provides a fair amount of options for data indexing:

- Direct HTTP and remote streaming – allows you to interact with Solr over HTTP by posting a file for direct indexing, or the path to the file for remote streaming.
- DataImportHandler – a module that enables both full and incremental delta imports from relational databases or the file system.
- SolrJ – a Java client to access Solr using Apache Commons HTTP Client.

But in real life, indexing data from different sources with millions of documents, dozens of transformations, filtering, content enriching, replication and parallel processing requires much more than that. One way to cope with such a challenge is to reinvent the wheel: write a few custom applications and combine them with some scripts or cron jobs. Another approach is to use a tool that is flexible, designed to be configurable and pluggable, and that can help you scale and distribute the load with ease. Such a tool is Apache Camel, which now also has a Solr connector. It all started a few months ago, during basecamp days at Sourcesense, where my colleague Alex and I were experimenting with different projects to implement a pipeline for indexing data into Solr. As expected we discovered Camel, and after a few days of pairing we were ready with the initial version of the Solr component, which got committed to Camel and was extended further by Ben Oday.
At the moment it is a full-featured Solr connector that uses SolrJ behind the scenes and lets you: configure all parameters of SolrServer and StreamingUpdateSolrServer; use the operations insert, add_bean, delete_by_id, delete_by_query, commit, rollback and optimize; and index files, SolrInputDocument instances, beans with annotations, or individual message headers. Creating a Camel route to index all the data from a relational database table and the local file system is simple:

public void configure() {
    from("timer://clear?repeatCount=1")
        .to("direct:clearIndex");

    from("file:src/data?noop=true")
        .to("direct:insert");

    from("timer://database?repeatCount=1")
        .to("sql:select * from products?dataSourceRef=productDataSource")
        .split(body())
        .process(new SqlToSolrMapper())
        .to("direct:insert");

    from("direct:insert")
        .setHeader(SolrConstants.OPERATION, constant(SolrConstants.OPERATION_INSERT))
        .to(SOLR_URL)
        .setHeader(SolrConstants.OPERATION, constant(SolrConstants.OPERATION_COMMIT))
        .to(SOLR_URL);

    from("direct:clearIndex")
        .setHeader(SolrConstants.OPERATION, constant(SolrConstants.OPERATION_DELETE_BY_QUERY))
        .setBody(constant("*:*"))
        .to(SOLR_URL)
        .setHeader(SolrConstants.OPERATION, constant(SolrConstants.OPERATION_COMMIT))
        .to(SOLR_URL);
}

The above routes will first clear the index by deleting all the documents, followed by a commit. Then Camel will start polling files from the src/data folder, read each file and send it to the Solr endpoint. Assuming the files are in a format Solr can understand, they will be indexed and committed. The third route will retrieve all the products from the database (in memory), split them into individual records, map each record to Solr fields, and ingest them. Luckily, in 2012, the life of a software developer is not that simple and boring. A more realistic indexing requirement nowadays would consist of something like this:

1. Get the backup files from Amazon S3 and index them.
If a document is approved, commit it as soon as possible; otherwise commit every 10 minutes. How can Camel help you with this requirement? Camel supports the most popular Amazon APIs, including S3. Using the aws-s3 component, it is possible to read files from an S3 bucket and then apply a filter for approved documents, in order to send them to a separate route for instant commit:

<route>
    <from uri="aws-s3://MyBucket?delay=5000&amp;maxMessagesPerPoll=5"/>
    <choice>
        <when>
            <xpath>/add/doc[@status='approved']</xpath>
            <to uri="direct:indexAndCommit"/>
        </when>
        <otherwise>
            <to uri="direct:index"/>
        </otherwise>
    </choice>
</route>
<route>
    <from uri="timer://commit?fixedRate=true&amp;period=600s"/>
    <from uri="direct:commit"/>
</route>

2. Retrieve customer data from the database every 5 seconds, reading 10 records at a time. Also look for deltas. Enrich the address data with latitude/longitude by calling an external service, to facilitate spatial search in Solr.

<route id="fromDB">
    <from uri="jpa://com.ofbizian.pipeline.Customer?consumer.namedQuery=newCustomers&amp;maximumResults=10&amp;delay=5000"/>
    <enrich uri="direct:coordinateEnricher" strategyRef="latLongAggregationStrategy"/>
    <to uri="direct:index"/>
</route>

<route>
    <from uri="direct:coordinateEnricher"/>
    <setHeader headerName="CamelHttpQuery">
        <simple>address='${body.address}'&amp;sensor=false</simple>
    </setHeader>
    <to uri=""/>
    <setHeader headerName="lat">
        <xpath resultType="java.lang.Double">//result[1]/geometry/location/lat/text()</xpath>
    </setHeader>
    <setHeader headerName="lng">
        <xpath resultType="java.lang.Double">//result[1]/geometry/location/lng/text()</xpath>
    </setHeader>
</route>

The above route reads 10 records at a time from the Customer table, and for each one calls Google's Maps API to get latitude and longitude from the customer address field. The coordinates are extracted from the response using XPath and merged back into the Customer object. Simple, isn't it?

3.
Index the content under this/that/path in our content management system and also monitor it for updates.

<route>
    <from uri="jcr://user:pass@repository/import/inbox/signal?eventTypes=3&amp;deep=true&amp;synchronous=false"/>
    <to uri="direct:index"/>
</route>

Camel has a jcr connector, which allows you to create content in any Java Content Repository. There is also an improvement submitted in CAMEL-5155 which will soon allow reading content from JCR v2 repositories. If you are lucky and your CMS supports CMIS, you can use my camel-cmis connector from GitHub for the same purpose.

4. Listen for tweets about our product/company, do sentiment analysis, and index only positive tweets.

<route id="fromTwitter">
    <from uri="twitter://streaming/filter?type=event&amp;keywords=productName&amp;consumerKey={{consumer.key}}&amp;consumerSecret={{consumer.secret}}"/>
    <setHeader headerName="CamelHttpQuery">
        <language language="beanshell">
            "q=" + URLEncoder.encode(request.getBody().toString(), "UTF-8")
        </language>
    </setHeader>
    <throttle timePeriodMillis="1500">
        <constant>1</constant>
        <to uri=""/>
        <setHeader headerName="sentiment">
            <xpath resultType="java.lang.Double">/sentiment/value/text()</xpath>
        </setHeader>
        <filter>
            <simple>${in.header.sentiment} > 0</simple>
            <to uri="direct:index"/>
        </filter>
    </throttle>
</route>

This route listens for tweets using Twitter's real-time API, URL-encodes the tweet, and calls the tweetsentiments API for sentiment analysis. In addition it applies throttling, so that at most one request is made every 1500 milliseconds, because there is a restriction on the number of calls per second. The route then applies a filter to ignore all the negative tweets before indexing. As you can see, Camel can easily interact with many disparate systems (including Solr), and even if you have a very custom application, writing a connector for it would not be difficult. But this is only one side of the story.
On the other side, there is the full list of Enterprise Integration Patterns implemented by Camel, which are needed for any serious data ingestion pipeline: Router, Translator, Filter, Splitter, Aggregator, Content Enricher, Load Balancer... Last but not least: exception handling, logging, monitoring, DSLs... In two words: Camel rocks! PS: The full source code of the examples can be found on my GitHub account.

Reference: Indexing data in Solr from disparate sources using Camel from our JCG partner Bilgin Ibryam at the OFBIZian blog.
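The coordinate-enricher route earlier extracts latitude and longitude from the geocoding response with two XPath expressions. The same extraction can be sketched with just the JDK's javax.xml.xpath API; note that the response fragment below is a simplified, made-up stand-in, shaped only so that the route's XPath expressions match:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;

public class CoordinateExtractor {

    // Hypothetical response fragment, shaped so the route's XPath
    // expressions (//result[1]/geometry/location/lat ...) match.
    static final String RESPONSE =
        "<GeocodeResponse>"
        + "<result><geometry><location>"
        + "<lat>41.38</lat><lng>2.17</lng>"
        + "</location></geometry></result>"
        + "</GeocodeResponse>";

    public static double extract(String xml, String expression) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // XPathConstants.NUMBER yields a Double, matching the
        // resultType="java.lang.Double" used in the Camel route
        return (Double) XPathFactory.newInstance().newXPath()
                .evaluate(expression, doc, XPathConstants.NUMBER);
    }

    public static void main(String[] args) throws Exception {
        double lat = extract(RESPONSE, "//result[1]/geometry/location/lat/text()");
        double lng = extract(RESPONSE, "//result[1]/geometry/location/lng/text()");
        System.out.println(lat + "," + lng); // prints "41.38,2.17"
    }
}
```

This is exactly the work the two `<xpath>` headers do inside the route, only without the surrounding Camel machinery.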

Spring MVC: Creation of a simple Controller with Java based config

This is the first article on my blog related to Spring MVC. The beginning is always exciting, so I will try to be concise and informative. Spring MVC allows creation of web applications in the most convenient, straightforward and fast way. Starting to work with this technology implies knowledge of Spring Core. In this post you will read about the creation of a simple Spring MVC controller. I prefer Java-based configuration of the application, so the example will use this approach. The main aim is the creation of a controller which will process a request; hence, after a click on a link you will be redirected to a concrete page with the help of the Spring controller.

Preparation

Create a new Dynamic Web Project in Eclipse, then convert it to a Maven project. Verify that your web.xml file looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="" xmlns="" xmlns:web="" xsi:schemaLocation="" id="WebApp_ID" version="3.0">
    <display-name>SimpleController</display-name>
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
</web-app>

index.jsp will play the role of the home page in the application; place it into src/main/webapp/index.jsp. Here is the code of index.jsp:

<%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
...
<h1>Home page</h1>
<p>This is a Home Page.</p>
...

As a result, the project structure will look like this: (screenshot omitted)

Setting up dependencies

What I need to do next is to add some dependencies to the pom.xml file.
I'm not going to speak about dependencies in detail; the comments in the code below explain them:

<properties>
    <spring.version>3.1.1.RELEASE</spring.version>
</properties>

<dependencies>
    <!-- Spring -->
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webmvc</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-beans</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-web</artifactId>
        <version>${spring.version}</version>
    </dependency>
    <!-- CGLIB is required to process @Configuration classes -->
    <dependency>
        <groupId>cglib</groupId>
        <artifactId>cglib</artifactId>
        <version>2.2.2</version>
    </dependency>
    <!-- Servlet API, JSTL -->
    <dependency>
        <groupId>javax.servlet</groupId>
        <artifactId>javax.servlet-api</artifactId>
        <version>3.0.1</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>jstl</groupId>
        <artifactId>jstl</artifactId>
        <version>1.2</version>
    </dependency>
</dependencies>

More information about the Spring dependencies can be found on the official blog.

Java-based configuration

It's time to create the configuration for the application. As I mentioned above, this approach is convenient, and one of the reasons is the use of annotations.
First I'm going to create the WebAppConfig class:

package com.sprmvc.init;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.view.JstlView;
import org.springframework.web.servlet.view.UrlBasedViewResolver;

@Configuration // Marks the class as a configuration class
@ComponentScan("com.sprmvc") // Specifies which package to scan
@EnableWebMvc // Enables Spring's MVC annotations in the code
public class WebAppConfig {

    @Bean
    public UrlBasedViewResolver setupViewResolver() {
        UrlBasedViewResolver resolver = new UrlBasedViewResolver();
        resolver.setPrefix("/WEB-INF/pages/");
        resolver.setSuffix(".jsp");
        resolver.setViewClass(JstlView.class);
        return resolver;
    }
}

The view resolver points out the path where all JSPs are stored; this is required in order to use readable URLs later. Now it is the turn of the Initializer class:

package com.sprmvc.init;

import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletRegistration.Dynamic;

import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
import org.springframework.web.servlet.DispatcherServlet;

public class Initializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        AnnotationConfigWebApplicationContext ctx = new AnnotationConfigWebApplicationContext();
        ctx.register(WebAppConfig.class);
        ctx.setServletContext(servletContext);

        Dynamic servlet = servletContext.addServlet("dispatcher", new DispatcherServlet(ctx));
        servlet.addMapping("/");
        servlet.setLoadOnStartup(1);
    }
}

Notice that the Initializer class implements the WebApplicationInitializer interface. This is required to avoid XML configuration of the web application.
JSP for the Controller

Before I show you how to create a simple controller, I need to create the JSP file to which the controller will lead us:

<%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
...
<p>Hello world: ${message}</p>
<p>Well done!</p>
...

Here is the path to the JSP file: src/main/webapp/WEB-INF/pages/hello.jsp. Notice that in the WebAppConfig class I specified the corresponding prefix and suffix.

Controller

And finally the code of the LinkController class:

package com.sprmvc.controller;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.servlet.ModelAndView;

@Controller
public class LinkController {

    @RequestMapping(value = "/hello-page")
    public ModelAndView goToHelloPage() {
        ModelAndView view = new ModelAndView();
        view.setViewName("hello"); // name of the JSP file in the "pages" folder

        String str = "MVC Spring is here!";
        view.addObject("message", str); // adding the str object as the "message" parameter

        return view;
    }
}

Now you need to update the index.jsp file by adding a link to the hello page:

<%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%>
...
<h1>Home page</h1>
<p>This is a Home Page.</p>
<p><a href="hello-page.html">Hello world link</a></p>
...

The final project structure is: (screenshot omitted)

Launch the project, open index.jsp and click on the link; you will see the hello page.

Summary

At times tutorials are really helpful, but the best way to learn how to use Spring is to read the official documentation, so I recommend you dig deeper on the Spring blog.

Reference: Spring MVC: Creation of a simple Controller with Java based config from our JCG partner Alex Fruzenshtein at the Fruzenshtein's notes blog.

Introduction to Default Methods (Defender Methods) in Java 8

We all know that interfaces in Java contain only method declarations and no implementations, and that any non-abstract class implementing an interface has to provide implementations for its methods. Let's look at an example:

public interface SimpleInterface {
    public void doSomeWork();
}

class SimpleInterfaceImpl implements SimpleInterface {
    @Override
    public void doSomeWork() {
        System.out.println("Do Some Work implementation in the class");
    }

    public static void main(String[] args) {
        SimpleInterfaceImpl simpObj = new SimpleInterfaceImpl();
        simpObj.doSomeWork();
    }
}

Now what if I add a new method to SimpleInterface?

public interface SimpleInterface {
    public void doSomeWork();
    public void doSomeOtherWork();
}

If we try to compile the code, we end up with:

$javac .\ .\ error: SimpleInterfaceImpl is not abstract and does not override abstract method doSomeOtherWork() in SimpleInterface
class SimpleInterfaceImpl implements SimpleInterface{
^
1 error

This limitation makes it almost impossible to extend/improve existing interfaces and APIs. The same challenge was faced while enhancing the Collections API in Java 8 to support lambda expressions. To overcome this limitation, a new concept was introduced in Java 8 called default methods, also referred to as defender methods or virtual extension methods. Default methods are methods which have a default implementation and help in evolving interfaces without breaking existing code. Let's look at an example:

public interface SimpleInterface {
    public void doSomeWork();

    // A default method in the interface created using the "default" keyword
    default public void doSomeOtherWork() {
        System.out.println("DoSomeOtherWork implementation in the interface");
    }
}

class SimpleInterfaceImpl implements SimpleInterface {
    @Override
    public void doSomeWork() {
        System.out.println("Do Some Work implementation in the class");
    }

    /*
     * Not required to override to provide an implementation
     * for doSomeOtherWork.
     */

    public static void main(String[] args) {
        SimpleInterfaceImpl simpObj = new SimpleInterfaceImpl();
        simpObj.doSomeWork();
        simpObj.doSomeOtherWork();
    }
}

And the output is:

Do Some Work implementation in the class
DoSomeOtherWork implementation in the interface

This was a very brief introduction to default methods; one can read about them in more depth here.

Reference: Introduction to Default Methods (Defender Methods) in Java 8 from our JCG partner Mohamed Sanaulla at the Experiences Unlimited blog.
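One follow-up question the introduction above does not cover: what happens when a class inherits the same default method from two interfaces? A short sketch (the interface and class names here are mine, not from the original post) showing that the compiler forces an explicit override, which may delegate to a specific parent using the Interface.super syntax:

```java
// Two unrelated interfaces that happen to declare the same default method.
interface Worker {
    default String describe() {
        return "Worker";
    }
}

interface Machine {
    default String describe() {
        return "Machine";
    }
}

// Robot inherits describe() from both interfaces, so the compiler
// rejects it unless the conflict is resolved with an explicit override.
class Robot implements Worker, Machine {
    @Override
    public String describe() {
        // Interface.super.method() delegates to a chosen parent's default
        return Worker.super.describe() + "+" + Machine.super.describe();
    }
}

public class DefaultMethodConflict {
    public static void main(String[] args) {
        System.out.println(new Robot().describe()); // prints "Worker+Machine"
    }
}
```

Without the override in Robot, compilation fails with a "inherits unrelated defaults" error, so default methods never silently pick one implementation over another.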

Bidirectional @OneToMany / @ManyToOne association

One of the goals in programming is representing models from the real world. Very often an application needs to model some relationship between entities. In the last article about Hibernate associations I described the rules for setting up a "one to one" relationship. Today I'm going to show you how to set up a bidirectional "one to many" / "many to one" association. This example is based on the previous Hibernate tutorials. First I need to say that my code example is based on a simple situation. Let's imagine a football league. Every league has teams, and in a team play some players. So the summary is the following: one team has many players; one player can play for only one team. In this way we get obvious "one to many" and "many to one" relationships. I use MySQL as the database in this example. Here are the scripts for the table creation:

CREATE TABLE `teams` (
  `id` int(6) NOT NULL AUTO_INCREMENT,
  `name` varchar(20) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8;

CREATE TABLE `players` (
  `id` int(6) NOT NULL AUTO_INCREMENT,
  `lastname` varchar(20) NOT NULL,
  `team_id` int(6) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `player's team` (`team_id`),
  CONSTRAINT `player's team` FOREIGN KEY (`team_id`) REFERENCES `teams` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=7 DEFAULT CHARSET=utf8;

The next step is the creation of the POJOs:

import java.util.Set;

import javax.persistence.*;

@Entity
@Table(name = "teams")
public class Team {

    @Id
    @GeneratedValue
    private Integer id;

    private String name;

    @OneToMany(mappedBy = "team", cascade = CascadeType.ALL)
    private Set<Player> players;

    public Team(String name) { = name;
    }

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) { = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) { = name;
    }

    public Set<Player> getPlayers() {
        return players;
    }

    public void setPlayers(Set<Player> players) {
        this.players = players;
    }
}

I have used @OneToMany because one team can have many players.
In the next POJO, the association will be @ManyToOne, since many players can play for one team.

import javax.persistence.*;

@Entity
@Table(name = "players")
public class Player {

    @Id
    @GeneratedValue
    private Integer id;

    private String lastname;

    @ManyToOne
    @JoinColumn(name = "team_id")
    private Team team;

    public Player(String lastname) {
        this.lastname = lastname;
    }

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) { = id;
    }

    public String getLastname() {
        return lastname;
    }

    public void setLastname(String lastname) {
        this.lastname = lastname;
    }

    public Team getTeam() {
        return team;
    }

    public void setTeam(Team team) { = team;
    }
}

Here I specify the column (team_id) which will be joined; the Player side holds the foreign key and is therefore the owning side of the association. Notice that I don't declare a team_id field in the POJO. If I need to change a team for a player, I just use the setTeam(Team team) setter. After the POJOs are declared, I can demonstrate how to persist them:

...
public static void main(String[] args) {

    SessionFactory sessionFactory = HibernateUtil.getSessionFactory();
    Session session = sessionFactory.openSession();
    session.beginTransaction();

    Team team = new Team("Barcelona");
    Set<Player> players = new HashSet<Player>();

    Player p1 = new Player("Messi");
    Player p2 = new Player("Xavi");

    p1.setTeam(team);
    p2.setTeam(team);

    players.add(p1);
    players.add(p2);

    team.setPlayers(players);;

    session.getTransaction().commit();
    session.close();
}
...

The result of the code execution is:

Hibernate: insert into teams (name) values (?)
Hibernate: insert into players (lastname, team_id) values (?, ?)
Hibernate: insert into players (lastname, team_id) values (?, ?)

That's it. In this tutorial I have demonstrated how to set up a bidirectional "one to many" / "many to one" association. I don't see any sense in a similar tutorial with an example of the unidirectional association, because Hibernate has its own best practice: unidirectional associations are more difficult to query.
In a large application, almost all associations must be navigable in both directions in queries.

Reference: Bidirectional @OneToMany / @ManyToOne association from our JCG partner Alex Fruzenshtein at the Fruzenshtein's notes blog.
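One practical note on the persisting code above: both sides of the association had to be wired by hand (setTeam on each player plus setPlayers on the team). A common convention is to add a helper method on the collection side that keeps both directions consistent in one call. A minimal sketch with annotation-free stand-ins for the entities (class and method names here are my own, not from the tutorial):

```java
import java.util.HashSet;
import java.util.Set;

// Annotation-free stand-ins for the Team/Player entities, kept minimal
// to show only the association-maintenance logic.
class SimpleTeam {
    private final Set<SimplePlayer> players = new HashSet<SimplePlayer>();

    // One call wires up both directions of the association.
    public void addPlayer(SimplePlayer player) {
        players.add(player);
        player.setTeam(this);
    }

    public Set<SimplePlayer> getPlayers() {
        return players;
    }
}

class SimplePlayer {
    private SimpleTeam team;

    public void setTeam(SimpleTeam team) { = team;
    }

    public SimpleTeam getTeam() {
        return team;
    }
}

public class BidirectionalSyncDemo {
    public static void main(String[] args) {
        SimpleTeam team = new SimpleTeam();
        SimplePlayer player = new SimplePlayer();
        team.addPlayer(player);
        // Both sides of the association now agree
        System.out.println(player.getTeam() == team);           // prints "true"
        System.out.println(team.getPlayers().contains(player)); // prints "true"
    }
}
```

With such a helper on the real entities, the main method above would shrink to a few addPlayer calls, and there would be no way to forget one half of the link.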

Bidirectional @OneToOne primary key association

It's time to continue the articles about Hibernate. The last one was dedicated to the unidirectional @OneToOne association, so today I will show you how to obtain a bidirectional @OneToOne primary key association. The example in this tutorial is based on the previous article. Let's get started. I will work with the same tables which I created previously. In order to set up a bidirectional one to one association, I need to update the two POJOs and the saving process. Let's consider the new version of the Author class:

import javax.persistence.*;

@Entity
@Table(name = "authors")
public class Author {

    @Id
    @GeneratedValue
    private Integer id;

    private String name;

    @OneToOne(mappedBy = "author", cascade = CascadeType.ALL)
    private Biography biography;

    public Integer getId() {
        return id;
    }

    public void setId(Integer id) { = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) { = name;
    }

    public Biography getBiography() {
        return biography;
    }

    public void setBiography(Biography biography) {
        this.biography = biography;
    }
}

The changes are minimal: I have just removed @PrimaryKeyJoinColumn from the biography field and mapped it with mappedBy. In a bidirectional association there are two sides, an owning side and an inverse side. A quote from the JPA 2 specification: "The inverse side of a bidirectional relationship must refer to its owning side by use of the mappedBy element of the OneToOne, OneToMany, or ManyToMany annotation. The mappedBy element designates the property or field in the entity that is the owner of the relationship." For one to one relationships, the owning side corresponds to the side that contains the foreign key. Since the Author class uses mappedBy, it is the inverse side; the owning side in this example is the Biography class, whose primary key doubles as the foreign key. The Biography class requires more substantial changes than the Author class.
import javax.persistence.*;

import org.hibernate.annotations.GenericGenerator;
import org.hibernate.annotations.Parameter;

@Entity
@Table(name = "biographies")
public class Biography {

    @Id
    @Column(name = "author_id")
    @GeneratedValue(generator = "gen")
    @GenericGenerator(name = "gen", strategy = "foreign",
            parameters = @Parameter(name = "property", value = "author"))
    private Integer authorId;

    private String information;

    @OneToOne
    @PrimaryKeyJoinColumn
    private Author author;

    public Author getAuthor() {
        return author;
    }

    public void setAuthor(Author author) { = author;
    }

    public Integer getAuthorId() {
        return authorId;
    }

    public void setAuthorId(Integer authorId) {
        this.authorId = authorId;
    }

    public String getInformation() {
        return information;
    }

    public void setInformation(String information) {
        this.information = information;
    }
}

The first important thing is the decoration of the authorId field with additional annotations:

...
@GeneratedValue(generator = "gen")
@GenericGenerator(name = "gen", strategy = "foreign",
        parameters = @Parameter(name = "property", value = "author"))
...

In @GeneratedValue I specify the name of the generator ("gen"), and in @GenericGenerator I define the strategy for the generator: the "foreign" strategy makes the biography's primary key take its value from the associated author. The second important thing is the addition of the author field to the class, with an appropriate getter and setter:

...
@OneToOne
@PrimaryKeyJoinColumn
private Author author;
...

In this way we obtain a bidirectional association: we can now access the Author from the Biography and vice versa, because both objects hold references to each other. Now the saving process must be updated:

...
public static void main(String[] args) {

    SessionFactory sessionFactory = HibernateUtil.getSessionFactory();
    Session session = sessionFactory.openSession();
    session.beginTransaction();

    Author author = new Author();
    author.setName("O. Henry");

    Biography biography = new Biography();
    biography.setInformation("William Sydney Porter better known as O. Henry...");

    author.setBiography(biography);
    biography.setAuthor(author);;

    session.getTransaction().commit();
    session.close();
}
...

Notice that now I don't persist the owning side before adding the inverse side to it. Instead, I set the biography on the author and, on the following line, the author on the biography. This is the main point of a bidirectional association. The result of the code execution is:

Hibernate: insert into authors (name) values (?)
Hibernate: insert into biographies (information, author_id) values (?, ?)

Reference: Bidirectional @OneToOne primary key association from our JCG partner Alex Fruzenshtein at the Fruzenshtein's notes blog.

ARM Virtualization Extensions – Introduction (Part 1)

Sorry guys for another hiatus, my job at Calxeda keeps me busy. I was recently discussing ARM's virtualization support with my friend Ali Hussain (yup, that's our idea of a fun dinner conversation) and found some very interesting facts. I asked Ali to share his knowledge in a blog post series on this topic, so here you go. Ali is on ARM's performance modeling team and has been working on ARM cores since 2008.

The idea for this blog post stemmed from talking to people who had the impression that ARM's virtualization support, even with the virtualization extensions in Cortex-A15, is limited. I plan to write a few posts exploring virtualization and the support for it in the ARM and x86 ISAs. This post will draw heavily on my understanding of the ARM architecture and operating systems.

What is Virtualization?

Before we can look at virtualization, we need to define a few key things. The first is virtualization itself. Virtualization in general means creating an environment in software to emulate something physical. More specifically, when we talk about hardware virtualization, it is running an operating system in a sandboxed virtual machine (VM) as opposed to giving it access to the physical hardware. This is done through the hypervisor, which manages guest operating systems the same way an operating system manages applications.

Virtualization does not require hardware support; it can be performed completely in software. An existing operating system can be patched to work at a lower privilege level and trap to the hypervisor. This is called paravirtualization. The advantage of hardware support is in simplifying the software work of the hypervisor and providing performance improvements. In this post, I want to discuss how hardware can help the hypervisor perform its functions.
Those interested in paravirtualization will find VMware's talk about the problems faced while implementing paravirtualization on ARM interesting. Let's explore some of the duties of the hypervisor, and how the hardware accomplishes them, to better understand hardware-assisted virtualization.

Management Has Its Privileges

To sandbox an OS, the hypervisor has to be at a higher privilege level than the guest OS. ARM, for example, creates a higher privilege level called hypervisor mode. Hypervisor mode has access to its own set of system registers that are analogous to the registers present in system mode. For example, just as the OS tracks process IDs using an Address Space IDentifier (the ASID, which is part of the TTBR), the hypervisor tracks the current VM using a VMID (which is part of the VTTBR).

The hypervisor also has access controls similar to those of supervisor mode. In addition, the hypervisor can read and write the system control registers for the OS. Having two layers of access control does create a lot of interesting scenarios where the hypervisor and the guest OS compete for a trap. ARM has gone with the philosophy that a good manager is one who intrudes into your work as little as possible. So, if an exception occurs, the guest OS is typically given the opportunity to service it before the hypervisor, because the OS is better equipped to handle the applications' requirements. Let me explain this with an example. Both the OS and the hypervisor provide a similar feature for disabling access to the floating point and SIMD units, using the CPACR (Coprocessor Access Control Register) and the HCPTR (Hypervisor Coprocessor Trap Register) respectively. When an application running in the guest OS is disallowed access by both the CPACR and the HCPTR, the CPACR has priority. This is an interesting design choice with two advantages: first, it improves the response time of the exception; second, it allows the OS to behave as normal.
However, it also makes it extremely difficult to provide certain functionality, e.g., making the floating-point unit invisible to the guest. Next week I’ll add another post covering memory management and interrupt handling in a hypervisor.

Disclaimers: All opinions expressed here are my own and not those of ARM or any other entity. I haven’t implemented a hypervisor or operating system myself, so I would love to hear comments from experts in the field.

Reference: ARM Virtualization Extensions – Introduction (Part 1) from our JCG partner Aater Suleman at the Future Chips blog.

Apache Camel meets Redis

The Lamborghini of Key-Value Stores

Camel is a best-of-breed integration framework, and in this post I’m going to show you how to make it even more powerful by leveraging another great project: Redis. Camel 2.11 is on its way to be released soon with lots of new features, bug fixes and components. A couple of these new components are authored by me, the redis component being my favourite. Redis, a light key/value store, is an amazing piece of Italian software designed for speed (much like a Lamborghini, a two-seater Italian car also designed for speed). Written in C and with an in-memory, close-to-the-metal nature, Redis performs extremely well (Lamborghini’s motto is ‘Closer to the Road’). Redis is often referred to as a data structure server, since keys can contain strings, hashes, lists and sorted sets. A fast and light data structure server is like a super sportscar for software engineers: it just flies. If you want to find out more about Redis’ (and Lamborghini’s) unique performance characteristics, google around and you will see for yourself.

Getting started with Redis is easy: download, make, and start a redis-server. After these steps, you’re ready to use it from your Camel application. Internally the component uses Spring Data Redis, which in turn uses the Jedis driver, with the possibility of switching to other Redis drivers. Here are a few use cases where the camel-redis component is a good fit:

Idempotent Repository

The term idempotent is used in mathematics to describe a function that produces the same result when applied to itself. In messaging, this concept translates into a message that has the same effect whether it is received once or multiple times.
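Stripped of both Camel and Redis, the idempotent-consumer idea boils down to a set of already-seen message IDs guarding the processing step. The toy sketch below uses a plain `java.util.HashSet` as a stand-in for the Redis set structure (class and method names are my own, purely for illustration; this is not the actual RedisIdempotentRepository implementation):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class IdempotentSketch {
    private final Set<String> seenIds = new HashSet<>();     // stands in for the Redis set
    private final List<String> processed = new ArrayList<>();

    /** Process the message body only if its ID has not been seen before. */
    public void onMessage(String messageId, String body) {
        if (seenIds.add(messageId)) {   // Set.add() returns false for duplicates
            processed.add(body);
        }
    }

    public List<String> processed() {
        return processed;
    }

    public static void main(String[] args) {
        IdempotentSketch consumer = new IdempotentSketch();
        consumer.onMessage("order-1", "first");
        consumer.onMessage("order-1", "first (redelivered)"); // duplicate ID, ignored
        consumer.onMessage("order-2", "second");
        System.out.println(consumer.processed()); // [first, second]
    }
}
```

Swapping the HashSet for a Redis set is what buys you idempotency across JVMs and restarts, which is exactly the niche the camel-redis repository fills.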
In Camel this pattern is implemented by the IdempotentConsumer class, which uses an Expression to calculate a unique message ID string for a given message exchange; this ID is then looked up in the IdempotentRepository to see if it has been seen before; if it has, the message is consumed; if it is not there, the message is processed and the ID is added to the repository. RedisIdempotentRepository uses a set structure to store and check for existing IDs:

<bean id="idempotentRepository" class="org.apache.camel.component.redis.processor.idempotent.RedisIdempotentRepository">
  <constructor-arg value="test-repo"/>
</bean>

<route>
  <from uri="direct:start"/>
  <idempotentConsumer messageIdRepositoryRef="idempotentRepository">
    <simple>${}</simple>
    <to uri="mock:result"/>
  </idempotentConsumer>
</route>

Caching

One of the main uses of Redis is as an LRU cache. It can store data in memory like Memcached, or it can be tuned to be durable, flushing data to a log file that can be replayed if the node restarts. The various eviction policies applied when maxmemory is reached allow creating caches for specific needs:

volatile-lru: remove a key among the ones with an expire set, trying to remove keys not recently used.
volatile-ttl: remove a key among the ones with an expire set, trying to remove keys with a short remaining time to live.
volatile-random: remove a random key among the ones with an expire set.
allkeys-lru: like volatile-lru, but will remove every kind of key, both normal keys and keys with an expire set.
allkeys-random: like volatile-random, but will remove every kind of key, both normal keys and keys with an expire set.

Once your Redis server is configured with the right policies and running, the only operations you need are SET and GET:

<route>
  <from uri="direct:start"/>
  <setHeader headerName="CamelRedis.Command">
    <constant>SET</constant>
  </setHeader>
  <setHeader headerName="CamelRedis.Key">
    <constant>keyOne</constant>
  </setHeader>
  <setHeader headerName="CamelRedis.Value">
    <constant>valueOne</constant>
  </setHeader>
  <to uri="redis://localhost:6379"/>
</route>

Interapp Pub/Sub with Redis

Camel has various components for interacting between routes:

direct: provides direct, synchronous invocation in the same CamelContext.
seda: asynchronous behaviour, where messages are exchanged on a BlockingQueue, again in the same CamelContext.
vm: asynchronous behaviour like seda, but also supports communication across CamelContexts as long as they are in the same JVM.

Complex applications usually consist of more than one standalone Camel instance running on separate machines. For these scenarios, Camel provides jms, activemq, or a combination of AWS SNS with SQS for messaging between instances. Redis has a simpler solution for the publish/subscribe messaging paradigm. Subscribers subscribe to one or more channels, by specifying the channel names or using pattern matching to receive messages from multiple channels. A publisher then publishes messages to a channel, and Redis makes sure they reach all the matching subscribers.
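Before looking at the Camel route below, the channel model itself can be sketched as a toy in-JVM bus (exact channel names only, no pattern matching; `ChannelBus` is a made-up illustration class, not Redis or a Camel API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

/** Toy stand-in for Redis pub/sub: subscribers register per channel, publishers fan out. */
public class ChannelBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String channel, Consumer<String> handler) {
        subscribers.computeIfAbsent(channel, k -> new ArrayList<>()).add(handler);
    }

    /** Deliver the message to every subscriber of the channel; returns the receiver count. */
    public int publish(String channel, String message) {
        List<Consumer<String>> handlers = subscribers.getOrDefault(channel, List.of());
        handlers.forEach(h -> h.accept(message));
        return handlers.size();
    }

    public static void main(String[] args) {
        ChannelBus bus = new ChannelBus();
        bus.subscribe("testChannel", msg -> System.out.println("got: " + msg));
        bus.publish("testChannel", "Test Message"); // prints: got: Test Message
    }
}
```

What Redis adds over this sketch is that subscribers and publishers live in different processes on different machines, plus PSUBSCRIBE-style pattern matching on channel names.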
<?xml version="1.0" encoding="UTF-8"?>
<camelContext id="camel" xmlns="">
  <route startupOrder="1">
    <from uri="redis://localhost:6379?command=SUBSCRIBE&amp;channels=testChannel"/>
    <to uri="mock:result"/>
  </route>
  <route startupOrder="2">
    <from uri="direct:start"/>
    <setHeader headerName="CamelRedis.Command">
      <constant>PUBLISH</constant>
    </setHeader>
    <setHeader headerName="CamelRedis.CHANNEL">
      <constant>testChannel</constant>
    </setHeader>
    <setHeader headerName="CamelRedis.MESSAGE">
      <constant>Test Message</constant>
    </setHeader>
    <to uri="redis://localhost:6379"/>
  </route>
</camelContext>

Other usages

Guaranteed Delivery: Camel supports this EIP using JMS, File, JPA and a few other components. Here Redis can be used as a lightweight key-value persistent store, with its transaction support. The Claim Check from the EIP patterns allows you to replace message content with a claim check (a unique key), which can be used to retrieve the message content at a later time; the message content can be stored temporarily in Redis. Redis is also very popular for implementing counters, leaderboards, tagging systems and many more functionalities. Now, with two Swiss Army knives under your belt, the integrations you can make are limited only by your imagination.

Reference: Apache Camel meets Redis from our JCG partner Bilgin Ibryam at the OFBIZian blog.

Java Collections API Quirks

So we tend to think we’ve seen it all when it comes to the Java Collections API. We know our way around Lists, Sets, Maps, Iterables, Iterators. We’re ready for Java 8’s Collections API enhancements. But then, every once in a while, we stumble upon one of these weird quirks that originate from the depths of the JDK and its long history of backwards compatibility. Let’s have a look at unmodifiable collections.

Unmodifiable Collections

Whether a collection is modifiable or not is not reflected by the Collections API. There is no immutable List, Set or Collection base type which mutable subtypes could extend. So, the following API doesn’t exist in the JDK:

// Immutable part of the Collection API
public interface Collection<E> {
    boolean contains(Object o);
    boolean containsAll(Collection<?> c);
    boolean isEmpty();
    int size();
    Object[] toArray();
    <T> T[] toArray(T[] array);
}

// Mutable part of the Collection API
public interface MutableCollection<E> extends Collection<E> {
    boolean add(E e);
    boolean addAll(Collection<? extends E> c);
    void clear();
    boolean remove(Object o);
    boolean removeAll(Collection<?> c);
    boolean retainAll(Collection<?> c);
}

Now, there are probably reasons why things weren’t implemented this way in the early days of Java. Most likely, mutability wasn’t seen as a feature worthy of occupying its own type in the type hierarchy. So along came the Collections helper class, with useful methods such as unmodifiableList(), unmodifiableSet(), unmodifiableCollection(), and others. But beware when using unmodifiable collections! There is a very strange thing mentioned in the Javadoc:

The returned collection does not pass the hashCode and equals operations through to the backing collection, but relies on Object’s equals and hashCode methods. This is necessary to preserve the contracts of these operations in the case that the backing collection is a set or a list.

“To preserve the contracts of these operations”. That’s quite vague. What’s the reasoning behind it?
A nice explanation is given in this Stack Overflow answer:

An UnmodifiableList is an UnmodifiableCollection, but the same is not true in reverse: an UnmodifiableCollection that wraps a List is not an UnmodifiableList. So if you compare an UnmodifiableCollection that wraps a List a with an UnmodifiableList that wraps the same List a, the two wrappers should not be equal. If you just passed through to the wrapped list, they would be equal.

While this reasoning is correct, the implications may be rather unexpected.

The bottom line

The bottom line is that you cannot rely on Collection.equals(). While List.equals() and Set.equals() are well-defined, don’t trust Collection.equals(). It may not behave meaningfully. Keep this in mind when accepting a Collection in a method signature:

public class MyClass {
    public void doStuff(Collection<?> collection) {
        // Don't rely on collection.equals() here!
    }
}

Reference: Java Collections API Quirks from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.
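The quirk described in this article is easy to reproduce with a few lines of plain JDK code: the List wrapper delegates equals() to the backing list, while the plain Collection wrapper falls back to Object identity.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class UnmodifiableEqualsDemo {
    public static void main(String[] args) {
        List<String> backing = Arrays.asList("a", "b");

        // The List wrapper passes equals() through to the backing list...
        List<String> list = Collections.unmodifiableList(backing);
        System.out.println(list.equals(backing));   // true

        // ...but the Collection wrapper relies on Object's identity equals().
        Collection<String> coll = Collections.unmodifiableCollection(backing);
        System.out.println(coll.equals(backing));   // false
        System.out.println(coll.equals(coll));      // true (same reference)
    }
}
```

So two unmodifiableCollection() wrappers around the very same backing list compare as unequal, which is exactly why a Collection parameter in a method signature cannot be compared meaningfully.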

Using Maven Jetty plugin

Although I have been using Maven for a long time, I never used the Jetty plugin until recently. To be able to test a REST client, I created a servlet that shows all parameters and headers of the incoming request. To run the servlet in a container I decided to give the Maven Jetty plugin a go. So first I created a web application by using the specific Maven archetype:

mvn archetype:generate -DgroupId=net.pascalalma -DartifactId=rest-service -Dversion=1.0.0-SNAPSHOT -DarchetypeArtifactId=maven-archetype-webapp

This results in the complete project and the following logging:

[INFO] Scanning for projects...
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] >>> maven-archetype-plugin:2.2:generate (default-cli) @ standalone-pom >>>
[INFO]
[INFO] <<< maven-archetype-plugin:2.2:generate (default-cli) @ standalone-pom <<<
[INFO]
[INFO] --- maven-archetype-plugin:2.2:generate (default-cli) @ standalone-pom ---
[INFO] Generating project in Interactive mode
Downloading:
Downloaded: (4 KB at 5.2 KB/sec)
Downloading:
Downloaded: (533 B at 1.1 KB/sec)
[INFO] Using property: groupId = net.pascalalma
[INFO] Using property: artifactId = rest-service
[INFO] Using property: version = 1.0.0-SNAPSHOT
[INFO] Using property: package = net.pascalalma
Confirm properties configuration:
groupId: net.pascalalma
artifactId: rest-service
version: 1.0.0-SNAPSHOT
package: net.pascalalma
Y: : Y
[INFO] ----------------------------------------------------------------------------
[INFO] Using following parameters for creating project from Old (1.x) Archetype: maven-archetype-webapp:1.0
[INFO] ----------------------------------------------------------------------------
[INFO] Parameter: groupId, Value: net.pascalalma
[INFO] Parameter: packageName, Value: net.pascalalma
[INFO] Parameter: package, Value: net.pascalalma
[INFO] Parameter: artifactId, Value: rest-service
[INFO] Parameter: basedir, Value: /Users/pascal/projects
[INFO] Parameter: version, Value: 1.0.0-SNAPSHOT
[INFO] project created from Old (1.x) Archetype in dir: /Users/pascal/projects/rest-service
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 13.057s
[INFO] Finished at: Sun Feb 03 17:13:33 CET 2013
[INFO] Final Memory: 7M/81M
[INFO] ------------------------------------------------------------------------
MacBook-Air-van-Pascal:projects pascal$

Next I added the servlet code to the project (the import statements were lost in the original post; the two java.io imports below follow from the code's use of PrintWriter and IOException):

package net.pascalalma.servlets;

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Enumeration;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * @author pascal
 */
public class TestRestServlet extends HttpServlet {

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        out.println("GET method called");
        out.println("parameters:\n " + parameters(request));
        out.println("headers:\n " + headers(request));
    }

    public void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        out.println("POST method called");
        out.println("parameters: " + parameters(request));
        out.println("headers: " + headers(request));
    }

    public void doDelete(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        PrintWriter out = response.getWriter();
        out.println("Delete method called");
    }

    private String parameters(HttpServletRequest request) {
        StringBuilder builder = new StringBuilder();
        for (Enumeration e = request.getParameterNames(); e.hasMoreElements();) {
            String name = (String) e.nextElement();
            builder.append("|" + name + "->" + request.getParameter(name) + "\n");
        }
        return builder.toString();
    }

    private String headers(HttpServletRequest request) {
        StringBuilder builder = new StringBuilder();
        for (Enumeration e = request.getHeaderNames(); e.hasMoreElements();) {
            String name = (String) e.nextElement();
            builder.append("|" + name + "->" + request.getHeader(name) + "\n");
        }
        return builder.toString();
    }
}

And configure the servlet in the ‘web.xml’. By the way, the generated ‘web.xml’ couldn’t be shown in my NetBeans version (v7.2.1). I got the message:

Web application version is unsupported. Upgrade web.xml to version 2.4 or newer or use previous version of NetBeans

To fix this I modified the web.xml so it starts with the following declaration of namespaces:

<web-app xmlns:xsi='' xmlns='' xmlns:web='' xsi:schemaLocation='' id='WebApp_ID' version='2.5'>

Next, add the servlet to the modified ‘web.xml’:

<?xml version='1.0' encoding='UTF-8'?>
...
<display-name>Archetype Created Web Application</display-name>
<servlet>
  <servlet-name>TestRestServlet</servlet-name>
  <servlet-class>net.pascalalma.servlets.TestRestServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>TestRestServlet</servlet-name>
  <url-pattern>/TestRestServlet</url-pattern>
</servlet-mapping>
...

Now everything is ready to test the servlet. As I said before, I am going to use the Jetty plugin for this. To add the plugin to the project, simply put the following in your ‘pom.xml’:

<plugins>
  <plugin>
    <groupId>org.mortbay.jetty</groupId>
    <artifactId>jetty-maven-plugin</artifactId>
    <configuration>
      <scanIntervalSeconds>10</scanIntervalSeconds>
      <contextPath>/</contextPath>
      <stopKey>STOP</stopKey>
      <stopPort>8005</stopPort>
      <port>8080</port>
    </configuration>
  </plugin>
</plugins>

Now I can run the command ‘mvn jetty:run’ in my terminal to have the container run the servlet. The log should end with something like:

....
2013-02-19 09:54:53.044:INFO:oejs.AbstractConnector:Started SelectChannelConnector@
[INFO] Started Jetty Server
[INFO] Starting scanner at interval of 10 seconds.

Now if you open a browser and go to the URL ‘http://localhost:8080/TestRestServlet?bla=true’ you will see the servlet in action, outputting to the browser:

GET method called
parameters:
|bla->true
headers:
|DNT->1
|Host->localhost:8080
|Accept->text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
|Accept-Charset->ISO-8859-1,utf-8;q=0.7,*;q=0.3
|Accept-Language->en-US,en;q=0.8
|User-Agent->Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.17 (KHTML, like Gecko) Chrome/24.0.1312.57 Safari/537.17
|Connection->keep-alive
|Cache-Control->max-age=0
|Accept-Encoding->gzip,deflate,sdch

One note: as you can see in the plugin configuration, I have added a few extra parameters for my convenience. The container will check every 10 seconds for changes in the servlet, so I don’t have to restart the Jetty container after each change. To stop the container you can enter the command ‘mvn jetty:stop -DstopPort=8005 -DstopKey=STOP’ in another terminal session. By the way, make sure you name the plugin ‘jetty-maven-plugin’ and not ‘maven-jetty-plugin’, because then you would be using an old version of the plugin which doesn’t pick up the configuration parameters (yes, very confusing and frustrating, as I found out).

Reference: Using Maven Jetty plugin from our JCG partner Pascal Alma at The Pragmatic Integrator blog.
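A side note on the servlet code above: since Servlet 3.0, getParameterNames() and getHeaderNames() return Enumeration<String>, so the (String) casts in parameters() and headers() can be dropped. The same dumping loop can be sketched standalone against a generic Enumeration (DumpHelper and its dump() method are hypothetical names of mine, shown with plain JDK types so no servlet container is needed):

```java
import java.util.Collections;
import java.util.Enumeration;
import java.util.List;
import java.util.function.Function;

public class DumpHelper {
    /** Builds the same "|name->value\n" dump as the servlet's helpers, without casts. */
    static String dump(Enumeration<String> names, Function<String, String> lookup) {
        StringBuilder builder = new StringBuilder();
        while (names.hasMoreElements()) {
            String name = names.nextElement();
            builder.append('|').append(name).append("->").append(lookup.apply(name)).append('\n');
        }
        return builder.toString();
    }

    public static void main(String[] args) {
        // stand-in for request.getHeaderNames() / request.getHeader(name)
        Enumeration<String> names = Collections.enumeration(List.of("Host", "Accept"));
        System.out.print(dump(names, String::toLowerCase));
        // |Host->host
        // |Accept->accept
    }
}
```

In the servlet itself, both helpers could then collapse into one call: `dump(request.getHeaderNames(), request::getHeader)` and `dump(request.getParameterNames(), request::getParameter)`.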
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.