What's New Here?


Get your Advanced Java Programming Degree with these Tutorials and Courses

Getting started as a Java developer these days is quite straightforward. There are countless books on the subject, and of course an abundance of online material to study. Of course, our own site offers a vast array of tutorials and articles to guide you through the language and we genuinely believe that Java Code Geeks offer the best way to learn Java programming.

Things get a bit trickier once you have successfully passed the beginner phase. In order to reach a more advanced level of competence, you will need to reach out and look for targeted resources. A higher level of sophistication is required and the random tutorials that you find online might not “cut it”. For this reason, we have created and featured numerous tutorials on our site. You may find them at the following pages:

- Core Java Tutorials
- Enterprise Java Tutorials
- Spring Tutorials
- Desktop Java Tutorials

Additionally, we have created several “Ultimate” tutorials, discussing OOP concepts, popular Java tools and frameworks, and more. Have a look at those too:

- Java 8 Features Tutorial
- Java Annotations Tutorial
- Java Servlet Tutorial
- Java Reflection Tutorial
- Abstraction in Java
- JMeter Tutorial for Load Testing
- JUnit Tutorial for Unit Testing
- JAXB Tutorial for Java XML Binding

On top of the above, to get you prepared for your programming interviews, we have created some great QnA guides:

- 115 Java Interview Questions and Answers
- 69 Spring Interview Questions and Answers
- Multithreading and Concurrency Interview Questions and Answers
- Core Java Interview Questions
- 40 Java Collections Interview Questions and Answers
- Top 100 Java Servlet Questions

For even more high-end training, we would like to suggest our JCG Academy courses. With JCG Academy’s course offerings, you tackle real-world projects built by programming experts. Courses offered are designed to help you master new concepts quickly and effectively. All courses could be beneficial to the modern age developer, but let’s focus on the Java related ones.

The Advanced Java course is the flagship course that every Java developer should take. This course is designed to help you make the most effective use of Java. It discusses advanced topics, including object creation, concurrency, serialization, reflection and many more. It will guide you through your journey to Java mastery!

Next on, we have the Java Design Patterns course (standalone version here). Design patterns are general reusable solutions to commonly occurring problems within a given context in software design. In this course you will delve into a vast number of Design Patterns and see how those are implemented and utilized in Java. You will understand the reasons why patterns are so important and learn when and how to apply each one of them.

In the new age of multi-core processors, every developer should be competent in concurrent programming. For this reason we created the Java Concurrency Essentials course (you can join this for FREE!). In this course, you will dive into the magic of concurrency. You will be introduced to the fundamentals of concurrency and concurrent code and you will learn about concepts like atomicity, synchronization and thread safety. As you advance, the following lessons will deal with the tools you can leverage, such as the Fork/Join framework and the java.util.concurrent JDK package.

Finally, in order to stay up to date with the latest developments, make sure to join our ever growing newsletter (with more than 73,000 subscribers). By joining, you will also get 11 programming books for FREE!
Summing up, you don’t have to spend a bunch of money or waste countless hours to reach an advanced level in Java programming. Instead, you need to study the right material and use it in your day-to-day work in order to gain the relevant experience. The good thing about the programming world is that people care only about results. If you can show them that you are great at executing and getting results, you’ll do phenomenally well as a Java programmer. Geek on!

FREE Programming books with the WCG Newsletter

Dear fellow geek, it is with great honor that we announce the launch of Web Code Geeks! This is our sister site, targeted at Web developers. Come on, admit it, there is a web developer inside you too, so make sure to check it out. To celebrate this, we have decided to distribute 2 of our books for free. You can get access to them by joining our Newsletter. Additionally, you will also receive weekly news, tips and special offers delivered to your inbox courtesy of Web Code Geeks! This is just the beginning, and as Web Code Geeks grows, there will be more free goodies for you! So let’s see what you get in detail!

Building web apps with Node.js

Node.js is an exciting software platform for building scalable server-side and networking applications. Node.js applications are written in JavaScript, and can be run within the Node.js runtime on Windows, Mac OS X and Linux with no changes. Node.js applications are designed to maximize throughput and efficiency, using non-blocking I/O and asynchronous events. In this book, you will get introduced to Node.js. You will learn how to install, configure and run the server and how to load various modules. Additionally, you will build a sample application from scratch and also get your hands dirty with Node.js command line programming.

CouchDB Database for the Web

CouchDB is an open source database that focuses on ease of use and on being “a database that completely embraces the web”. It is a NoSQL database that uses JSON to store data, JavaScript as its query language using MapReduce, and HTTP for an API. One of its distinguishing features is multi-master replication. CouchDB was first released in 2005 and later became an Apache project in 2008. This book is a hands-on course on CouchDB. You will learn how to install and configure CouchDB and how to perform common operations with it. Additionally, you will build an example application from scratch and then finish the course with advanced topics like scaling, replication and load balancing.

So, fellow geeks, hop on our newsletter and enjoy our kick-ass books!

NOTE: If you have subscribed and not received an email yet, please send us an email at support[at]webcodegeeks.com and we will provide immediate assistance.

Quick peek at JAX-RS request to method matching

In this post, let’s look at the HTTP request to resource method matching in JAX-RS. It is one of the most fundamental features of JAX-RS. Generally, developers using the JAX-RS API are not exposed to (or do not really need to know) the nitty gritty of the matching process; rest assured that the JAX-RS runtime churns out its algorithms quietly in the background as our RESTful clients keep those HTTP requests coming! Just in case the term request to resource method matching is new to you – it’s nothing but the process via which the JAX-RS provider dispatches an HTTP request to a particular method of one of your resource classes (decorated with @Path). Hats off to the JAX-RS spec doc for explaining this in great detail (we’ll just cover the tip of the iceberg in this post though!)

Primary criteria

What are the factors taken into consideration during the request matching process?

- HTTP request URI
- HTTP request method (GET, PUT, POST, DELETE etc)
- Media type of the HTTP request
- Media type of the requested response

High level steps

A rough diagram should help. Before we look at that, here is the example scenario:

- Two resource classes – Books.java, Movies.java
- Resource method paths in Books.java – /books/, /books/{id} (URI path parameter), /books?{isbn} (URI query parameter)
- HTTP request URI – /books?isbn=xyz

Who will win?

@Path("books")
public class Books {

    @Produces("application/json")
    @GET
    public List<Book> findAll() {
        //find all books
    }

    @Produces("application/json")
    @GET
    @Path("{id}")
    public Book findById(@PathParam("id") String bookId) {
        //find book by id e.g. /books/123
    }

    @Produces("application/json")
    @GET
    public Book findByISBN(@QueryParam("isbn") String bookISBN) {
        //find book by ISBN e.g. /books?isbn=xyz
    }
}

@Path("movies")
public class Movies {

    @Produces("application/json")
    @GET
    public List<Movie> findAll() {
        //find all movies e.g. /movies/
    }

    @Produces("application/json")
    @GET
    @Path("{name}")
    public Movie findById(@PathParam("name") String name) {
        //find movie by name e.g. /movies/SourceCode
    }
}

JAX-RS request to method matching process

Break down of what’s going on:

- Narrow down the possible matching candidates to a set of resource classes. This is done by matching the HTTP request URI with the value of the @Path annotation on the resource classes.
- From the set of resource classes in the previous step, find a set of methods which are possible matching candidates (the algorithm is applied to the filtered set of resource classes).
- Boil down to the exact method which can serve the HTTP request. The HTTP request verb is compared against the HTTP method specific annotations (@GET, @POST etc), the request media type specified by the Content-Type header is compared against the media type specified in the @Consumes annotation, and the response media type specified by the Accept header is compared against the media type specified in the @Produces annotation.

I would highly recommend looking at the Jersey server side logic for implementation classes in the org.glassfish.jersey.server.internal.routing package to get a deeper understanding. Some of the classes/implementations which you can look at are:

- MatchResultInitializerRouter
- SubResourceLocatorRouter
- MethodSelectingRouter
- PathMatchingRouter

Time to dig in….? Happy hacking!

Reference: Quick peek at JAX-RS request to method matching from our JCG partner Abhishek Gupta at the Object Oriented.. blog.
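The example above exercises the URI part of the algorithm. To also illustrate the media-type criteria mentioned earlier (Content-Type against @Consumes, Accept against @Produces), here is a small hypothetical resource of my own; it is not from the original post, just a sketch of how two methods on the same path can be told apart purely by request media type:

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Illustrative resource (not part of the article's example)
@Path("reviews")
public class Reviews {

    // Selected for: POST /reviews with Content-Type: application/json
    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    @Produces(MediaType.APPLICATION_JSON)
    public Response addFromJson(String json) {
        return Response.status(Response.Status.CREATED).entity(json).build();
    }

    // Selected for: POST /reviews with Content-Type: application/xml
    @POST
    @Consumes(MediaType.APPLICATION_XML)
    @Produces(MediaType.APPLICATION_XML)
    public Response addFromXml(String xml) {
        return Response.status(Response.Status.CREATED).entity(xml).build();
    }
}

Both methods match the URI and the verb; only the Content-Type header decides whether addFromJson or addFromXml is invoked.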

Running PageRank Hadoop job on AWS Elastic MapReduce

In a previous post I described an example to perform a PageRank calculation, which is part of the Mining Massive Datasets course, with Apache Hadoop. In that post I took an existing Hadoop job in Java and modified it somewhat (added unit tests and made file paths set by a parameter). This post shows how to use this job on a real-life Hadoop cluster. The cluster is an AWS EMR cluster of 1 Master Node and 5 Core Nodes, each backed by an m3.xlarge instance.

The first step is to prepare the input for the cluster. I make use of AWS S3 since this is a convenient way when working with EMR. I created a new bucket, ‘emr-pagerank-demo’, and made the following subfolders:

- in: the folder containing the input files for the job
- job: the folder containing my executable Hadoop jar file
- log: the folder where EMR will put its log files

In the ‘in’ folder I then copied the data that I want to be ranked. I used this file as input. Unzipped it becomes a 5 GB file with XML content; although not really massive, it is sufficient for this demo. When you take the sources of the previous post and run ‘mvn clean install’ you will get the jar file ‘hadoop-wiki-pageranking-0.2-SNAPSHOT.jar’. I uploaded this jar file to the ‘job’ folder.

That is it for the preparation. Now we can fire up the cluster. For this demo I used the AWS Management Console:

- Name the cluster
- Enter the log folder as log location
- Enter the number of Core instances
- Add a step for our custom jar
- Configure the step like this:
- This should result in the following overview:

If this is correct you can press the ‘Create Cluster’ button and have EMR do its work. You can monitor the cluster in the ‘Monitoring’ part of the console, and monitor the status of the steps in the ‘Steps’ part. After a few minutes the job will be finished (depending on the size of the input files and the cluster used, of course). In our S3 bucket we can see that log files are created in the ‘log’ folder.

Here we see a total of 7 jobs: 1 x the Xml preparation step, 5 x the rankCalculator step and 1 x the rankOrdering step. And, more importantly, we can see the results in the ‘Result’ folder. Each reducer creates its own result file, so we have multiple files here. We are interested in the one with the highest number since that is where the pages with the highest ranks are. If we look into this file we see the following result as top-10 ranking:

271.6686 Spaans
274.22974 Romeinse_Rijk
276.7207 1973
285.39502 Rondwormen
291.83002 Decapoda
319.89224 Brussel_(stad)
390.02606 2012
392.08563 Springspinnen
652.5087 2007
2241.2773 Boktorren

Please note that the current implementation only runs the calculation 5 times (hard coded), so not really the power iteration as described in the theory of MMDS (a nice modification for a next release of the software :-)). Also note that the cluster is not terminated after the job is finished when the default settings are used, so costs for the cluster increase until the cluster is terminated manually.

Reference: Running PageRank Hadoop job on AWS Elastic MapReduce from our JCG partner Pascal Alma at The Pragmatic Integrator blog.
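On that last point about the hard-coded 5 iterations: purely as an illustration of one possible direction (this is not part of the original job), a driver along the following lines could keep iterating until the ranks converge. The buildRankJob factory method and the "RANK_DELTA" counter are hypothetical; the real mappers and reducers would have to accumulate the total (scaled) rank change into such a counter for the convergence check to work:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RankDriver {

    private static final double EPSILON = 0.0001;  // assumed convergence threshold
    private static final int MAX_ITERATIONS = 50;  // safety cap

    public static void main(String[] args) throws Exception {
        String input = args[0];
        for (int i = 0; i < MAX_ITERATIONS; i++) {
            String output = args[1] + "/iter-" + i;
            Job job = buildRankJob(new Configuration(), new Path(input), new Path(output));
            if (!job.waitForCompletion(true)) {
                throw new IllegalStateException("Rank iteration " + i + " failed");
            }
            // Hypothetical counter: reducers would add up the scaled rank change here.
            long scaledDelta = job.getCounters().findCounter("pagerank", "RANK_DELTA").getValue();
            if (scaledDelta / 1_000_000.0 < EPSILON) {
                break;          // ranks stopped moving: converged
            }
            input = output;     // next iteration reads the previous output
        }
    }

    private static Job buildRankJob(Configuration conf, Path in, Path out) throws Exception {
        Job job = Job.getInstance(conf, "rankCalculator");
        // mapper/reducer/key/value configuration omitted; see the project sources
        FileInputFormat.addInputPath(job, in);
        FileOutputFormat.setOutputPath(job, out);
        return job;
    }
}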

Java EE7 and Maven project for newbies – part 8

Part #1, Part #2, Part #3, Part #4, Part #5, Part #6, Part #7, Part #8

It’s been a long time since my last post for this series of tutorials. It’s time to resume and add new features to our simple project. As I have mentioned in previous posts, this series targets mostly Maven and Java EE 7 newcomers, so I welcome any questions or comments (and fixes) on the contents below. I promise I will try to keep up with the updates.

Git tag for this post? The tag for this post is post8, and can be found on my bitbucket repo.

What has changed from the previous post?

- Some comments and fixes on the code from readers have already been integrated. Thank you all very much for your time.
- I have updated the Wildfly Application Server version from 8.1 to 8.2, so all the examples and code run under the new server.
- I have also updated the versions of the Arquillian BOM(s) to the latest version, which is now 1.1.7.Final.
- I have also added a property under the sample-parent project that indicates the path to which the various Maven modules will download and use the Wildfly server automatically, so that you don’t have to download it on your own. The server will be automatically downloaded and extracted to the predefined path as soon as you try to execute one of the unit tests from the previous posts (sample-services module):

<!-- path to download wildfly -->
<wildfly-server-home>${project.basedir}/servers/</wildfly-server-home>

Adding a JSF enabled war Maven Module on our ear

Our project structure already featured a war (see sample-web) Maven module, so there is no extra module introduced, rather changes on the existing pom.xml files of the parent and the module itself.

Step 1: changes on web.xml

Our application server is already bundled with the required libraries and settings in order to support applications that make use of the JSF 2.2 specification (Wildfly bundles Mojarra 2.2.8). What we have to do is just update some configuration descriptors (eventually only one). The most important is web.xml, which now looks like this:

<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">
  <context-param>
    <param-name>javax.faces.PROJECT_STAGE</param-name>
    <param-value>Development</param-value>
  </context-param>
  <servlet>
    <servlet-name>Faces Servlet</servlet-name>
    <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>Faces Servlet</servlet-name>
    <url-pattern>/faces/*</url-pattern>
  </servlet-mapping>
  <session-config>
    <session-timeout>15</session-timeout>
  </session-config>
  <welcome-file-list>
    <welcome-file>faces/index.xhtml</welcome-file>
  </welcome-file-list>
</web-app>

Step 2: Packaging of the war and the skinny war issue

Our war module follows a packaging scheme called skinny war. Please read the related page from the Apache Maven war plugin. To cut a long story short, in order to reduce the overall size of our deployable (ear), we package all the required libraries under a predefined folder on the ear level, usually called \lib, and we don’t include libraries under the war’s WEB-INF\lib folder. The only thing you need to do is add those dependencies of your war to the ear level.

Despite the fact that the overall ‘hack’ does not feel very Maven like, it works if you follow the proposed configuration, but there are cases where skinny war packaging won’t work. One of these is usually JSF based Java EE web applications, where the implementation of the JSF widget engine should be packaged within the war’s WEB-INF\lib. For our sample project I am using the excellent and free Primefaces library, which I highly recommend for your next JSF based project. So I need to define a dependency on my war module for the primefaces jar, but bypass the skinny war mechanism only for this jar, so that it is packaged in the right place. This is how we do it:

<!-- from the war module pom.xml -->

<!-- This is the dependency -->
<dependency>
  <groupId>org.primefaces</groupId>
  <artifactId>primefaces</artifactId>
  <version>${primefaces-version}</version>
</dependency>

<!-- See the packaging exclude: we exclude all the jars apart from the one we want to be bundled within the WAR -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <packagingExcludes>%regex[WEB-INF/lib/(?!primefaces).*.jar]</packagingExcludes>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
        <classpathPrefix>lib/</classpathPrefix>
      </manifest>
      <manifestEntries>
        <Class-Path>sample-services-${project.version}.jar</Class-Path>
      </manifestEntries>
    </archive>
  </configuration>
</plugin>

Step 3: Add some JSF love, a managed bean and an xhtml page with the appropriate tags

Our code is just a small table and a couple of tags from Primefaces (a rough sketch of such a backing bean follows after this post). If you think that you need to read more about JSF 2.x, please have a look at the following links:

- JSF 2.2 tutorial by one of the JSF gods (BalusC)
- Primefaces documentation
- Primefaces Show Case
- The JavaEE tutorial – JSF 2.2 from Oracle
- JSF 2.2 examples

Step 4: Package and deploy to a running server

Start your Wildfly (you are expected to have one under your project-base dir and the subfolder servers, i.e. <wildfly-server-home>${project.basedir}/servers/</wildfly-server-home>) and then under the sample-parent project type:

mvn clean install -Ph2

You should have your JSF 2.2 enabled demo app at http://localhost:8080/sample-web/ and see something like the following.

That’s all; this will give you a simple start in order to expand on something more than a demo! As always you will find the complete example under tag post8.

Reference: Java EE7 and Maven project for newbies – part 8 from our JCG partner Paris Apostolopoulos at the Papo’s log blog.
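As mentioned under Step 3, here is a rough idea of what a backing bean behind such a page could look like. This is my own minimal sketch, not the code from the sample project (names like ProductView and Product are made up), but it compiles against JSF 2.2 and CDI:

import java.io.Serializable;
import java.util.Arrays;
import java.util.List;
import javax.faces.view.ViewScoped;
import javax.inject.Named;

// Hypothetical backing bean for a small Primefaces table
@Named
@ViewScoped
public class ProductView implements Serializable {

    // Simple row model for the table
    public static class Product implements Serializable {
        private final String name;
        private final double price;

        public Product(String name, double price) {
            this.name = name;
            this.price = price;
        }

        public String getName() { return name; }
        public double getPrice() { return price; }
    }

    private final List<Product> products = Arrays.asList(
            new Product("Keyboard", 25.0),
            new Product("Monitor", 180.0));

    public List<Product> getProducts() {
        return products;
    }
}

The xhtml page would then render the list with something like a p:dataTable iterating over #{productView.products}.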

Why Non-Blocking?

I’ve been writing non-blocking, asynchronous code for the past year. Learning how it works and how to write it is not hard. Where the benefits come from is what I don’t understand. Moreover, there is so much hype surrounding some programming models that you have to be pretty good at telling marketing from rumours from facts.

So let’s first start with clarifying the terms. Non-blocking applications are written in a way that threads never block – whenever a thread would have to block on I/O (e.g. reading/writing from/to a socket), it instead gets notified when new data is available. How that is implemented is out of the scope of this post. Non-blocking applications are normally implemented with message passing (or events). “Asynchronous” is related to that (in fact, in many cases it’s a synonym for “non-blocking”), as you send your request events and then get responses to them in a different thread, at a different time – asynchronously. And then there’s the “reactive” buzzword, which I honestly can’t explain – on one hand there’s reactive functional programming, which is rather abstract; on the other hand there’s the reactive manifesto which defines 3 requirements for practically every application out there (responsive, elastic, resilient) and one implementation detail (message-driven), which is there for no apparent reason. And how does the whole thing relate to non-blocking/asynchronous programming – probably because of the message-driven thing, but often the three go together in the buzzword-driven marketing jargon.

Two examples of frameworks/tools that are used to implement non-blocking (web) applications are Akka (for Scala and Java) and Node.js. I’ve been using the former, but most of the things are relevant to Node as well. Here’s a rather simplified description of how it works. It uses the reactor pattern (ahaa, maybe that’s where “reactive” comes from?) where one thread serves all requests by multiplexing between tasks and never blocks anywhere – whenever something is ready, it gets processed by that thread (or a couple of threads). So, if two requests are made to a web app that reads from the database and writes the response, the framework reads the input from each socket (by getting notified on incoming data, switching between the two sockets), and when it has read everything, passes a “here’s the request” message to the application code. The application code then sends a message to a database access layer, which in turn sends a message to the database (driver), and gets notified whenever reading the data from the database is complete. In the callback it in turn sends a message to the frontend/controller, which in turn writes the data as response, by sending it as message(s). Everything consists of a lot of message passing and possibly callbacks.

One problem of that setup is that if at any point in the code the thread blocks, then the whole thing goes to hell. But let’s assume all your code and 3rd party libraries are non-blocking and/or you have some clever way to avoid blocking everything (e.g. an internal thread pool that handles the blocking part). That brings me to another point – whether only reading and writing the socket is non-blocking as opposed to the whole application being non-blocking. For example, Tomcat’s NIO connector is non-blocking, but (afaik, via a thread pool) the application code can be executed in the “good old” synchronous way.
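To make the two styles concrete, here is a small example of my own (not from the original article; the lookup methods are stand-ins for real I/O) showing roughly how the same request handling looks in blocking and in callback-based, non-blocking form:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

public class Handlers {

    // Blocking style: the calling thread waits until the lookup finishes.
    static String handleBlocking(int id) {
        String data = slowLookup(id);   // thread is parked here for the whole call
        return "response:" + data;
    }

    // Non-blocking style: the method returns immediately; the continuation runs
    // later, on whatever thread completes the future.
    static CompletionStage<String> handleAsync(int id) {
        return slowLookupAsync(id).thenApply(data -> "response:" + data);
    }

    // Stand-ins for real I/O; in a real app these would hit a socket or database.
    static String slowLookup(int id) {
        return "data-" + id;
    }

    static CompletableFuture<String> slowLookupAsync(int id) {
        return CompletableFuture.supplyAsync(() -> slowLookup(id));
    }

    public static void main(String[] args) {
        System.out.println(handleBlocking(1));
        handleAsync(2).thenAccept(System.out::println).toCompletableFuture().join();
    }
}

In the blocking version the request thread is tied up for the whole lookup; in the asynchronous version the method returns immediately and the rest of the work happens in a callback.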
Though I admit I don’t fully understand that part, we have to distinguish asynchronous application code from asynchronous I/O, provided by the infrastructure. And another important distinction – the fact that your server code is non-blocking/asynchronous doesn’t mean your application is asynchronous to the client. The two things are related, but not the same – if your client uses a long-living connection where it expects new data to be pushed from the server (e.g. websockets/comet) then the asynchronicity goes outside your code and becomes a feature of your application, from the perspective of the client. And that can be achieved in multiple ways, including Java Servlet with async=true (that is using a non-blocking model so that long-living connections do not each hold a blocked thread).

Okay, now we know roughly how it works, and we can even write code in that paradigm. We can pass messages around, write callbacks, or get notified with a different message (i.e. akka’s “ask” vs “tell” pattern). But again – what’s the point?

That’s where it gets tricky. You can experiment with googling for stuff like “benefits of non-blocking/NIO”, benchmarks, “what is faster – blocking or non-blocking”, etc. People will say non-blocking is faster, or more scalable, that it requires less memory for threads, has higher throughput, or any combination of these. Are they true? Nobody knows. It indeed makes sense that by not blocking your threads, and when you don’t have a thread-per-socket, you can have fewer threads service more requests. But is that faster or more memory efficient? Do you reach the maximum number of threads in a big thread pool before you max the CPU, network I/O or disk I/O? Is the bottleneck in a regular web application really the thread pool? Possibly, but I couldn’t find a definitive answer.

This benchmark shows raw servlets are faster than Node (and when spray (akka) was present in that benchmark, it was also slower). This one shows that the NIO Tomcat connector gives worse throughput. My own benchmark (which I lost) of spray vs spring-mvc showed that spray started returning 500 (Internal Server Error) responses with way fewer concurrent requests than spring-mvc. I would bet there are counter-benchmarks that “prove” otherwise.

The most comprehensive piece on the topic is the “Thousands of Threads and Blocking I/O” presentation from 2008, which says something I myself felt – that everyone “knows” non-blocking is better and faster, but nobody actually tested it, and that people sometimes confuse “fast” and “scalable”. And that blocking servers actually perform ~20% faster. That presentation, complemented by this “Avoid NIO” post, claims that the non-blocking approach is actually worse in terms of scalability and performance. And this paper (from 2003) claims that “Events Are A Bad Idea (for high-concurrency servers)”. But is all this objective, does it hold true only for the Java NIO library or for the non-blocking approach in general; does it apply to Node.js and akka/spray, and how do applications that are asynchronous from the client perspective fit into the picture – I honestly don’t know. It feels like the old, thread-pool-based, blocking approach is at least good enough, if not better. Despite the “common knowledge” that it is not. And to complicate things even further, let’s consider usecases.
Maybe you should use a blocking approach for a RESTful API with a traditional request/response paradigm, but maybe you should make a high-speed trading web application non-blocking, because of its asynchronous nature. Should you have only your “connector” (in Tomcat terms) non-blocking, and the rest of your application blocking… except for the asynchronous (from the client perspective) part? It gets really complicated to answer. And even “it depends” is not a good-enough answer. Some people would say that you should do your own benchmark, for your usecase. But for a benchmark you need an actual application, written in all possible ways. Yes, you can use some prototype, basic functionality, but choosing the programming paradigm must happen very early (and it’s hard to refactor it later).

So, which approach is more performant, scalable, memory-efficient? I don’t know. What I do know, however, is which is easier to program, easier to test and easier to support. And that’s the blocking paradigm. Where you simply call methods on objects, not caring about callbacks and handling responses. Synchronous, simple, straightforward. This is actually one of the points in both the presentation and the paper I linked above – that it’s harder to write non-blocking code. And given the unclear benefits (if any), I would say that programming, testing and supporting the code is the main distinguishing feature. Whether you are going to be able to serve 10000 or 11000 concurrent users from a single machine doesn’t really matter. Hardware is cheap. (Unless it’s 1000 vs 10000, of course.)

But why is the non-blocking, asynchronous, event/message-driven programming paradigm harder? For me, at least, even after a year of writing in that paradigm, it’s still messier. First, it is way harder to trace the programming flow. With synchronous code you would just tell your IDE to fetch the call hierarchy (or find the usage of a given method if your language is not IDE-friendly), and see where everything comes and goes. With events it’s not that trivial. Who constructs this message? Where is it sent to / who consumes it? How is the response obtained – via callback, via another message? When is the response message constructed and who actually consumes it? And no, that’s not “loose coupling”, because your code is still pretty logically (and compilation-wise) coupled, it’s just harder to read.

What about thread-safety – the event passing allegedly ensures that no contention, deadlocks, or race-conditions occur. Well, even that’s not necessarily true. You have to be very careful with callbacks (unless you really have one thread like in Node) and your “actor” state. Which piece of code is executed by which thread is important (in akka at least), and you can still have shared state even though only a few threads do the work. And for the synchronous approach you just have to follow one simple rule – state does not belong in the code, period. No instance variables and you are safe, regardless of how many threads execute the same piece of code. The presentation above also mentions immutable and concurrent data structures that are inherently thread-safe and can be used in either of the paradigms. So in terms of concurrency, it’s pretty easy, from the perspective of the developer.

Testing complicated message-passing flows is a nightmare, really. And whereas test code is generally less readable than the production code, test code for a non-blocking application is, in my experience, much uglier. But that’s subjective again, I agree.
I wouldn’t like to finish this long and unfocused piece with “it depends”. I really think the synchronous/blocking programming model, with a thread pool and no message passing in the business logic, is the simpler and more straightforward way to write code. And if, as pointed out by the presentation and paper linked above, it’s also faster – great. And when you really need to send responses to clients asynchronously – consider the non-blocking approach only for that part of the functionality. Ultimately, given similar performance, throughput and scalability (and ignoring the marketing buzz), I think one should choose the programming paradigm that is easier to write, read and test. Because it takes 30 minutes to start another server, but accidental complexity can burn weeks and months of programming effort. For me, the blocking/synchronous approach is the easier to write, read and test, but that isn’t necessarily universal. I would just not base my choice of a programming paradigm on vague claims about performance and scalability.

Reference: Why Non-Blocking? from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog.

JPA 2.1: Unsynchronized persistence context

The JPA version 2.1 brings a new way to handle the synchronization between the persistence context and the current JTA transaction as well as the resource manager. The term resource manager comes from the Java Transaction API and denotes a component that manipulates one resource (for example a concrete database that is manipulated by using its JDBC driver). By default a container-managed persistence context is of type SynchronizationType.SYNCHRONIZED, i.e. this persistence context automatically joins the current JTA transaction and updates to the persistence context are propagated to the underlying resource manager.

By creating a persistence context that is of the new type SynchronizationType.UNSYNCHRONIZED, the automatic join of the transaction as well as the propagation of updates to the resource manager is disabled. In order to join the current JTA transaction the code has to call the method joinTransaction() of the EntityManager. This way the EntityManager’s persistence context gets enlisted in the transaction and is registered for subsequent notifications. Once the transaction is committed or rolled back, the persistence context leaves the transaction and is not attached to any further transaction until the method joinTransaction() is called once again for a new JTA transaction.

Before JPA 2.1 one could implement a conversation that spans multiple method calls with a @Stateful session bean as described by Adam Bien here:

@Stateful
@TransactionAttribute(TransactionAttributeType.NEVER)
public class Controller {

    @PersistenceContext(type = PersistenceContextType.EXTENDED)
    EntityManager entityManager;

    public Person persist() {
        Person p = new Person();
        p.setFirstName("Martin");
        p.setLastName("Developer");
        return entityManager.merge(p);
    }

    public List<Person> list() {
        return entityManager.createQuery("from Person", Person.class).getResultList();
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void commit() {
    }

    @Remove
    public void remove() {
    }
}

The persistence context is of type EXTENDED and therefore lives longer than the JTA transactions it is attached to. As the persistence context is per default also of type SYNCHRONIZED, it will automatically join any transaction that is running when any of the session bean’s methods are called. In order to prevent that from happening for most of the bean’s methods, the annotation @TransactionAttribute(TransactionAttributeType.NEVER) tells the container not to open any transaction for this bean. Therefore the methods persist() and list() run without a transaction. This behavior is different for the method commit(). Here the annotation @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) tells the container to create a new transaction before the method is called, and therefore the bean’s EntityManager will join it automatically.

With the new type SynchronizationType.UNSYNCHRONIZED the code above can be rewritten as depicted in the following listing:

@Stateful
public class Controller {

    @PersistenceContext(type = PersistenceContextType.EXTENDED,
                        synchronization = SynchronizationType.UNSYNCHRONIZED)
    EntityManager entityManager;

    public Person persist() {
        Person p = new Person();
        p.setFirstName("Martin");
        p.setLastName("Developer");
        return entityManager.merge(p);
    }

    public List<Person> list() {
        return entityManager.createQuery("from Person", Person.class).getResultList();
    }

    public void commit() {
        entityManager.joinTransaction();
    }

    @Remove
    public void remove() {
    }
}

Now that the EntityManager won’t automatically join the current transaction, we can omit the @TransactionAttribute annotations. Any running transaction won’t have an impact on the EntityManager until we explicitly join it. This is now done in the method commit() and could even be done on the basis of some dynamic logic.

In order to test the implementation above, we utilize a simple REST resource:

@Path("rest")
@Produces("text/json")
@SessionScoped
public class RestResource implements Serializable {

    @Inject
    private Controller controller;

    @GET
    @Path("persist")
    public Person persist(@Context HttpServletRequest request) {
        return controller.persist();
    }

    @GET
    @Path("list")
    public List<Person> list() {
        return controller.list();
    }

    @GET
    @Path("commit")
    public void commit() {
        controller.commit();
    }

    @PreDestroy
    public void preDestroy() {
    }
}

This resource provides methods to persist a person, list all persisted persons and to commit the current changes. As we are going to use a stateful session bean, we annotate the resource with @SessionScoped and let the container inject the Controller bean.

By calling the following URL after the application has been deployed to some Java EE container, a new person gets added to the unsynchronized persistence context, but is not stored in the database:

http://localhost:8080/jpa2.1-unsychronized-pc/rest/persist

Even a call of the list() method won’t return the newly added person. Only by finally synchronizing the changes in the persistence context to the underlying resource with a call of commit() is the insert statement sent to the underlying database.

Conclusion

The new UNSYNCHRONIZED mode of the persistence context lets us implement conversations over more than one method invocation of a stateful session bean, with the flexibility to join a JTA transaction dynamically based on our application logic without the need of any annotation magic.

PS: The source code is available at github.

Reference: JPA 2.1: Unsynchronized persistence context from our JCG partner Martin Mois at the Martin’s Developer World blog.
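A side note of mine, not covered in the article: container-managed injection is not the only way to obtain an unsynchronized persistence context. JPA 2.1 also added an EntityManagerFactory.createEntityManager(SynchronizationType) overload, so an application-managed EntityManager can opt out of automatic joining in the same way. A rough sketch, with the bean name and behavior made up for illustration:

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.persistence.SynchronizationType;

// Hypothetical bean: runs read-only work outside the caller's transaction
@Stateless
public class ReportBean {

    @PersistenceUnit
    private EntityManagerFactory emf;

    public void runReport() {
        // JPA 2.1: this persistence context will not join the active JTA
        // transaction unless joinTransaction() is invoked explicitly.
        EntityManager em = emf.createEntityManager(SynchronizationType.UNSYNCHRONIZED);
        try {
            // queries here run outside the transaction's flush/commit cycle;
            // call em.joinTransaction() only when updates must be persisted
        } finally {
            em.close();
        }
    }
}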

Bind WildFly to a different IP address, or all addresses on multihomed

Interface is a logical name, in WildFly parlance, for a network interface/IP address/host name to which sockets can be bound. There are two interfaces: “public” and “management”. The “public” interface binding is used for all application related network communication (i.e. Web, Messaging, etc). The “management” interface is used for all components and services that are required by the management layer (i.e. the HTTP Management Endpoint).

By default, the “public” interface is configured to listen on the loopback address of 127.0.0.1. So if you start WildFly as:

./bin/standalone.sh

then the WildFly default page can be accessed at http://127.0.0.1:8080. Usually, /etc/hosts provides a mapping of 127.0.0.1 to localhost, and so the same page is accessible at http://localhost:8080. 8080 is the port where all applications are accessed.

On a multihomed machine, you may like to start WildFly and bind the “public” interface to a specific IP address. This can be easily done as:

./bin/standalone.sh -b=192.168.1.1

Now the applications can be accessed at http://192.168.1.1:8080. For compatibility, -b 192.168.1.1 is also supported, but -b=192.168.1.1 is recommended. Or, if you want to bind to all available IP addresses, then you can do:

./bin/standalone.sh -b=0.0.0.0

Similarly, by default, WildFly can be managed using the Admin Console at http://127.0.0.1:9990. 9990 is the management port. The WildFly “management” interface can be bound to a specific IP address as:

./bin/standalone.sh -bmanagement=192.168.1.1

Now the Admin Console can be accessed at http://192.168.1.1:9990. Or, bind the “management” interface to all available IP addresses as:

./bin/standalone.sh -bmanagement=0.0.0.0

You can also bind to two specific addresses as explained here. Of course, you can bind the WildFly “public” and “management” interfaces together as:

./bin/standalone.sh -b=0.0.0.0 -bmanagement=0.0.0.0

Learn more about it in Interface and Port Configuration in WildFly, and more about these switches in Controlling the Bind Address with -b.

Reference: Bind WildFly to a different IP address, or all addresses on multihomed from our JCG partner Arun Gupta at the Miles to go 2.0 … blog.

Law of Demeter and How to Work With It

The Law of Demeter is an interesting programming principle. It’s the only one I know of that has a near-mathematical definition: any method m of an object O may only invoke the methods of the following kinds of objects:

- O itself
- m‘s parameters
- any objects created/instantiated within m
- O‘s direct component objects
- a global variable, accessible by O, in the scope of m

In case you didn’t realize, this list doesn’t allow you to call the methods of anything returned by any of the methods allowed. It essentially prohibits chaining of method calls. I’ll give some exceptions to that in a bit.

Why?

What is the purpose of the Law of Demeter? What is it trying to prevent? The Law of Demeter is designed to decrease coupling and increase encapsulation. By following the Law of Demeter, you make it so that the class O doesn’t need to know anything about the source of m other than what m does. O shouldn’t know anything about what m returns, other than its existence, in order to reduce coupling to it. I may not be explaining it the best. It’s not a bad idea to go out and see how others explain it. The real purpose of this post is to describe ways of working with the law, an anti-pattern, and some exceptions.

Anti-Pattern

We’ll start with the anti-pattern. I call it Layered Private Methods, and it was my first thought about “getting” around the Law of Demeter, but I quickly figured out that, while it technically follows the rules given, it doesn’t follow the reason for the law. The anti-pattern works like this. Say you originally had the following piece of code that you want to modify to follow Demeter:

public void m(Parameter parameter) {
    ReturnValue result = parameter.method();
    result.method();
}

To fix it, you change it to this:

public void m(Parameter parameter) {
    otherMethod(parameter.method());
}

private void otherMethod(ReturnValue rv) {
    rv.method();
}

As you can see, each method still follows the law, but the class is still coupled. Whenever you call another method of O from within it, you should consider the code in that called method to be part of m in order to still follow the spirit of the Law of Demeter.

Layers

So what should you do to fix the code? In this case, you can turn the private method call into a call on a new class:

public class NewClass {
    public void extractedMethod(ReturnValue rv) {
        rv.method();
    }
}

So m can look like this now:

public void m(Parameter parameter) {
    NewClass helper = new NewClass();
    helper.extractedMethod(parameter.method());
}

Now, this may seem like a complete waste, creating a new class that is essentially a replacement for a private method. But, in fact, there are many out there who think that every private method should become a new class, and I agree that our private methods do display a tendency toward going beyond the scope of the classes they’re contained within. Whenever you see a private method, consider extracting a class from it.

But is this the best way to call that class? By now, you should know about dependency injection (look it up if you don’t; it’s super simple, but important). If the calls moved to the classes are complex enough that they merit their own testing, you should apply DI any way you find to be most appropriate. If not, still consider it. There may not be a good reason to do it right away, though, unless you expect to have multiple implementations of it.

Exception 1: Data Structures

When m is a method on a data structure or returns a data structure, the Law of Demeter can be ignored. A data structure is a class whose entire purpose is based around storing/representing data. This includes the primitive type wrappers, String, “java beans”, and others. It’s understandable that, if the definition of the data changes, users of the data should have to adapt to the change. If the method you call (that breaks the Law of Demeter) is simply returning public internal data (the data may itself be private, but with public getters), there’s no good reason not to allow it.

Immutable objects can be part of breaking the law too, with these guidelines: the “bad” method call 1) returns public internal data (like before) or 2) returns a new object of the same type as itself. Number 2 is safe because the method call is equivalent to incrementing a number. If integers didn’t have the ability to use operators, incrementing would be exactly like number 2 (n = n.increment()), assuming immutable objects.

Exception 2: The Builder Pattern and Other Fluent APIs

When an API is designed to be fluent (and therefore, usually chained), there’s no good reason to lose the readability just to be a strict Law of Demeter follower. For example, Java 8’s Stream API would be worthless if you didn’t allow yourself to chain methods.

Outro

I hope you now have a better understanding of the purpose and use of the Law of Demeter. Now go and program! And have fun!

Reference: Law of Demeter and How to Work With It from our JCG partner Jacob Zimmerman at the Programming Ideas With Jake blog.
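To make Exception 2 a bit more concrete, here is a small illustrative snippet of my own (not from the original post) showing the kind of chaining that is fine to keep, because the whole point of builders and streams is the chain:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FluentExamples {
    public static void main(String[] args) {
        // StringBuilder is a classic builder: each append returns the builder itself
        String message = new StringBuilder()
                .append("Hello, ")
                .append("Demeter")
                .toString();

        // The Stream API is designed around chaining intermediate operations
        List<String> upper = Arrays.asList("action", "sci-fi", "romance").stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(message + " " + upper);
    }
}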

LOVs in Oracle MAF

We all love one of the most powerful ADF features: lists of values. Using them we can declaratively and easily build pretty complicated functionality in ADF applications. The good thing is that we have a similar approach in Oracle MAF as well. In ADF BC we define LOVs, attribute UI hints, validation rules, etc. at the business service level, basically at the Entity or VO level. In MAF we are able to do the same, but at the Data Controls level. This is pretty obvious, since who knows what the business service is; it can be whatever in Oracle MAF. So, in this post I am going to show how we can define and work with LOVs in Oracle MAF.

Let’s consider a simple use-case. There is a payment form: an end user selects an account in the drop-down list and the total account balance is going to be used as a default payment amount, however the amount can be changed.

The business model is based on a couple of POJO classes:

public class PaymentBO {

    private int accountid;
    private double amount;
    private String note;

and

public class AccountBO {

    private int id;
    private String accountName;
    private double balance;

There is also an AccountService class providing a list of available accounts:

public class AccountService {

    private final static AccountService accountService = new AccountService();

    private AccountBO[] accounts = new AccountBO[] {
        new AccountBO(1, "Main Account", 1000.89),
        new AccountBO(2, "Secondary Account", 670.78),
        new AccountBO(3, "Pocket Account", 7876.84),
        new AccountBO(4, "Emergency Account", 7885.80)
    };

    public AccountBO[] getAccounts() {
        return accounts;
    }

    public static synchronized AccountService getInstance() {
        return accountService;
    }
}

And there is a PaymentDC class which is exposed as a data control:

public class PaymentDC {

    private final PaymentBO payment = new PaymentBO();
    private final AccountService accountService = AccountService.getInstance();

    public PaymentBO getPayment() {
        return payment;
    }

    public AccountBO[] getAccounts() {
        return accountService.getAccounts();
    }
}

In order to be able to define Payment attribute settings such as UI hints, validation rules, LOVs, etc., I am going to click the pencil button in the DataControl structure, which brings up a form that looks pretty similar to what we have in ADF BC. Those who are familiar with ADF BC will hardly get lost here. So, at the List of Values page we can define a LOV for the accountid attribute, and having done that, we’re able to set up the LOV’s UI hints, etc.

Basically that’s it. All we need to do is to drop the accountid attribute from the DataControl palette onto a page as a selectOneChoice component:

<amx:selectOneChoice value="#{bindings.accountid.inputValue}"
                     label="#{bindings.accountid.label}" id="soc1">
    <amx:selectItems value="#{bindings.accountid.items}" id="si1"/>
</amx:selectOneChoice>

The framework will do the rest, defining the list binding definition in the pageDef file:

<list IterBinding="paymentIterator" StaticList="false"
      Uses="LOV_accountid" id="accountid" DTSupportsMRU="true"
      SelectItemValueMode="ListObject"/>

But we still have to implement setting the payment amount to the account balance when an account is selected. In ADF we would be able to define multiple attribute mappings in the LOV’s definition and that would be the solution. But in MAF it doesn’t work. Unfortunately. Only the primary mapping works.

So, we’re going to do that manually in the PaymentBO.setAccountid method:

public void setAccountid(int accountid) {
    this.accountid = accountid;

    AccountBO account = AccountService.getInstance().getAccountById(accountid);
    if (account != null) {
        setAmount(account.getBalance());
    }
}

And in the PaymentBO.setAmount method we have to fire a change event in order to get the amount field refreshed on the page:

public void setAmount(double amount) {
    double oldAmount = this.amount;
    this.amount = amount;
    propertyChangeSupport.firePropertyChange("amount", oldAmount, amount);
}

That’s it! The sample application for this post can be downloaded here. It requires JDeveloper 12.1.3 and MAF 2.1.0.

Reference: LOVs in Oracle MAF from our JCG partner Eugene Fedorenko at the ADF Practice blog.
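One small note of mine: setAccountid above calls AccountService.getAccountById(...), which is not shown in the AccountService listing earlier. A hypothetical implementation, assuming AccountBO exposes a getId() getter, could be as simple as a method added to AccountService:

public AccountBO getAccountById(int id) {
    // linear scan over the in-memory array of demo accounts
    for (AccountBO account : accounts) {
        if (account.getId() == id) {
            return account;
        }
    }
    return null; // no matching account found
}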

A fresh look on accessing database on JVM platform: Slick from Typesafe

In today’s post we are going to open our minds, step away from the traditional Java EE / Java SE JPA-based stack (which I think is great) and take a refreshing look at how to access a database in your Java applications using the new kid on the block: Slick 2.1 from Typesafe. So if JPA is so great, why bother? Well, sometimes you need to do very simple things and there is no need to bring in a complete, well modeled persistence layer for that. This is where Slick shines. In essence, Slick is a database access library, not an ORM. Though it is written in Scala, the examples we are going to look at do not require any particular knowledge of this excellent language (although it is just Scala that made Slick possible).

Our relational database schema will have only two tables, customers and addresses, linked by a one-to-many relationship. For simplicity, H2 has been picked as an in-memory database engine.

The first question which comes up is defining the database tables (schema) and, naturally, database specific DDLs are the standard way of doing that. Can we do something about it and try another approach? If you are using Slick 2.1, the answer is yes, absolutely. Let us just describe our tables as Scala classes:

// The 'customers' relation table definition
class Customers( tag: Tag ) extends Table[ Customer ]( tag, "customers" ) {
  def id = column[ Int ]( "id", O.PrimaryKey, O.AutoInc )

  def email = column[ String ]( "email", O.Length( 512, true ), O.NotNull )
  def firstName = column[ String ]( "first_name", O.Length( 256, true ), O.Nullable )
  def lastName = column[ String ]( "last_name", O.Length( 256, true ), O.Nullable )

  // Unique index for customer's email
  def emailIndex = index( "idx_email", email, unique = true )
}

Very easy and straightforward, resembling a lot the typical CREATE TABLE construct. The addresses table is going to be defined the same way and reference the customers table by a foreign key.

// The 'addresses' relation table definition
class Addresses( tag: Tag ) extends Table[ Address ]( tag, "addresses" ) {
  def id = column[ Int ]( "id", O.PrimaryKey, O.AutoInc )

  def street = column[ String ]( "street", O.Length( 100, true ), O.NotNull )
  def city = column[ String ]( "city", O.Length( 50, true ), O.NotNull )
  def country = column[ String ]( "country", O.Length( 50, true ), O.NotNull )

  // Foreign key to 'customers' table
  def customerId = column[Int]( "customer_id", O.NotNull )
  def customer = foreignKey( "customer_fk", customerId, Customers )( _.id )
}

Great, leaving off some details, that is it: we have defined two database tables in pure Scala. But details are important and we are going to look closely at the following two declarations: Table[ Customer ] and Table[ Address ]. Essentially, each table could be represented as a tuple with as many elements as it has columns. For example, the customers table is a tuple of (Int, String, String, String), while the addresses table is a tuple of (Int, String, String, String, Int). Tuples in Scala are great, but they are not very convenient to work with. Luckily, Slick allows us to use case classes instead of tuples by providing the so called Lifted Embedding technique. Here are our Customer and Address case classes:

case class Customer( id: Option[Int] = None, email: String,
  firstName: Option[ String ] = None, lastName: Option[ String ] = None)

case class Address( id: Option[Int] = None, street: String, city: String,
  country: String, customer: Customer )

The last but not least question is how Slick is going to convert from tuples to case classes and vice-versa. It would be awesome to have such a conversion out of the box, but at this stage Slick needs a bit of help. Using Slick terminology, we are going to shape the * table projection (which corresponds to the SELECT * FROM … SQL construct). Let us see how it looks for customers:

// Converts from Customer domain instance to table model and vice-versa
def * = ( id.?, email, firstName.?, lastName.? ).shaped <> ( Customer.tupled, Customer.unapply )

For the addresses table, the shaping looks a little bit more verbose, due to the fact that Slick does not have a way to refer to a Customer case class instance by foreign key. Still, it is pretty straightforward, we just construct a temporary Customer from its identifier.

// Converts from Address domain instance to table model and vice-versa
def * = ( id.?, street, city, country, customerId ).shaped <> (
  tuple => {
    Address.apply(
      id = tuple._1,
      street = tuple._2,
      city = tuple._3,
      country = tuple._4,
      customer = Customer( Some( tuple._5 ), "" )
    )
  }, {
    (address: Address) => Some {
      ( address.id, address.street, address.city, address.country, address.customer.id getOrElse 0 )
    }
  }
)

Now, when all the details have been explained, how can we materialize our Scala table definitions into real database tables? Thanks to Slick, it is as easy as this:

implicit lazy val DB = Database.forURL( "jdbc:h2:mem:test", driver = "org.h2.Driver" )
DB withSession { implicit session =>
  ( Customers.ddl ++ Addresses.ddl ).create
}

Slick has many ways to query and update data in the database. The most beautiful and powerful one is just using pure functional constructs of the Scala language. The easiest way of doing that is by defining a companion object and implementing typical CRUD operations in it. For example, here is the method which inserts a new customer record into the customers table:

object Customers extends TableQuery[ Customers ]( new Customers( _ ) ) {
  def create( customer: Customer )( implicit db: Database ): Customer =
    db.withSession { implicit session =>
      val id = this.autoIncrement.insert( customer )
      customer.copy( id = Some( id ) )
    }
}

And it could be used like this:

Customers.create( Customer( None, "tom@b.com", Some( "Tom" ), Some( "Tommyknocker" ) ) )
Customers.create( Customer( None, "bob@b.com", Some( "Bob" ), Some( "Bobbyknocker" ) ) )

Similarly, the family of find functions could be implemented using regular Scala for comprehensions:

def findByEmail( email: String )( implicit db: Database ): Option[ Customer ] =
  db.withSession { implicit session =>
    ( for { customer <- this if ( customer.email === email.toLowerCase ) } yield customer ) firstOption
  }

def findAll( implicit db: Database ): Seq[ Customer ] =
  db.withSession { implicit session =>
    ( for { customer <- this } yield customer ) list
  }

And here are usage examples:

val customers = Customers.findAll
val customer = Customers.findByEmail( "bob@b.com" )

Updates and deletes are a bit different, though very simple as well; let us take a look at those:

def update( customer: Customer )( implicit db: Database ): Boolean =
  db.withSession { implicit session =>
    val query = for { c <- this if ( c.id === customer.id ) } yield (c.email, c.firstName.?, c.lastName.?)
    query.update(customer.email, customer.firstName, customer.lastName) > 0
  }

def remove( customer: Customer )( implicit db: Database ): Boolean =
  db.withSession { implicit session =>
    ( for { c <- this if ( c.id === customer.id ) } yield c ).delete > 0
  }

Let us see those two methods in action:

Customers.findByEmail( "bob@b.com" ) map { customer =>
  Customers.update( customer.copy( firstName = Some( "Tommy" ) ) )
  Customers.remove( customer )
}

Looks very neat. I am personally still learning Slick, however I am pretty excited about it. It helps me to get things done much faster, enjoying the beauty of the Scala language and functional programming. No doubt, the upcoming version 3.0 is going to bring even more interesting features; I am looking forward to it.

This post is just an introduction into the world of Slick; a lot of implementation details and use cases have been left aside to keep it short and simple. However Slick’s documentation is pretty good, so please do not hesitate to consult it. The complete project is available on GitHub.

Reference: A fresh look on accessing database on JVM platform: Slick from Typesafe from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog.

Head first elastic search on java with spring boot and data features

In this article I’ll try to give you an easy introduction on how to use Elastic Search in a Java project. As Spring Boot is the easiest and fastest way to begin our project I choose to use it. Futhermore, we will heavily use Repository goodies of beloved Spring Data. Let’s begin by installing Elastic Search on our machine and run our elastic server for the first time. I go to elastic-folder\bin and run elasticsearch.bat (yeah I’m using Windows) but no luck. I get this:    “Error occurred during initialization of VM Could not reserve enough space for object heap Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit.” What a great start! In my bin folder there’s a “elasticsearch.in.bat” file. I set ES_MAX_MEM=1g to ES_MAX_MEM=512mb and voila it is fixed. I start a new server without problem after that. Now it is time to define the document we will index in elastic search. Assume we have movie information to index. Our model is quite straightforward. Movie has a name, rating and a genre in it. I chose “elastic_sample” as index name which sounds good as a database name and “movie” as type which is good for a table name if we think in relational database terms. Nothing fancy in the model as you can see. @Document(indexName = "elastic_sample", type = "movie") public class Movie {@Id private String id;private String name;@Field(type = FieldType.Nested) private List < Genre > genre;private Double rating;public Double getRating() { return rating; }public void setRating(Double rating) { this.rating = rating; }public void setId(String id) { this.id = id; }public List < Genre > getGenre() { return genre; }public void setGenre(List < Genre > genre) { this.genre = genre; }public String getId() { return id; }public String getName() { return name;}public void setName(String name) { this.name = name; }@Override public String toString() { return "Movie{" + "id=" + id + ", name='" + name + '\'' + ", genre=" + genre + ", rating=" + rating + '}'; } } For those who wonder what Genre is here is it. Just a POJO. public class Genre { private String name;public Genre() { }public Genre(String name) { this.name = name; }public String getName() { return name; }@Override public String toString() { return "Genre{" + "name='" + name + '\'' + '}'; }public void setName(String name) { this.name = name; } } Not it is time to create DAO layer so we can save and load our document to/from our elastic search server. Our Repository extends the classic ElasticserchRepository (no idea why it is search and not Search). As you probably know Spring Data can query one or more fields with these predefined methods where we use our field names. findByName will search in the name field, findByRating will search in the rating field so on so forth. Furthermore thanks to Spring Data we don’t need to write implementation for it, we just put method names in the interface and that’s finished. 
Now it is time to create the DAO layer, so we can save our documents to and load them from our Elasticsearch server. Our repository extends the classic ElasticsearchRepository (no idea why it is "search" and not "Search"). As you probably know, Spring Data can query one or more fields with predefined methods that use our field names: findByName will search in the name field, findByRating will search in the rating field, and so on. Furthermore, thanks to Spring Data we don't need to write the implementation; we just put the method names in the interface and we are done.

public interface MovieRepository extends ElasticsearchRepository<Movie, String> {

    public List<Movie> findByName(String name);

    public List<Movie> findByRatingBetween(Double beginning, Double end);
}

Our DAO layer will be called by a service layer:

@Service
public class MovieService {

    @Autowired
    private MovieRepository repository;

    public List<Movie> getByName(String name) {
        return repository.findByName(name);
    }

    public List<Movie> getByRatingInterval(Double beginning, Double end) {
        return repository.findByRatingBetween(beginning, end);
    }

    public void addMovie(Movie movie) {
        repository.save(movie);
    }
}
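The two finder methods above are all this sample needs, but the same naming convention covers a lot more ground. As a small hypothetical sketch (these extra methods are not part of the original example and assume the standard Spring Data query-derivation keywords; check the reference documentation for your version), the repository could be extended like this:

import java.util.List;
import org.springframework.data.elasticsearch.repository.ElasticsearchRepository;

public interface MovieRepository extends ElasticsearchRepository<Movie, String> {

    List<Movie> findByName(String name);

    List<Movie> findByRatingBetween(Double beginning, Double end);

    // hypothetical addition: substring match on the name field
    List<Movie> findByNameContaining(String namePart);

    // hypothetical addition: exact name match, ordered by rating, highest first
    List<Movie> findByNameOrderByRatingDesc(String name);
}

On top of derived queries like these, ElasticsearchRepository already inherits the usual CRUD operations (save, findAll, count, delete, and so on), which is why MovieService can call repository.save(movie) without any extra code.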
Here is the main class we will use to run our application. @EnableAutoConfiguration will auto-configure everything it recognizes on our classpath. @ComponentScan will scan for Spring annotations under the main class's package.

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class BootElastic implements CommandLineRunner {

    @Autowired
    private MovieService movieService;

    private static final Logger logger = LoggerFactory.getLogger(BootElastic.class);

    // add Star Wars and The Princess Bride
    // as movies to Elasticsearch
    private void addSomeMovies() {
        Movie starWars = getFirstMovie();
        movieService.addMovie(starWars);

        Movie princessBride = getSecondMovie();
        movieService.addMovie(princessBride);
    }

    private Movie getSecondMovie() {
        Movie secondMovie = new Movie();
        secondMovie.setId("2");
        secondMovie.setRating(8.4d);
        secondMovie.setName("The Princess Bride");

        List<Genre> princessBrideGenre = new ArrayList<Genre>();
        princessBrideGenre.add(new Genre("ACTION"));
        princessBrideGenre.add(new Genre("ROMANCE"));
        secondMovie.setGenre(princessBrideGenre);

        return secondMovie;
    }

    private Movie getFirstMovie() {
        Movie firstMovie = new Movie();
        firstMovie.setId("1");
        firstMovie.setRating(9.6d);
        firstMovie.setName("Star Wars");

        List<Genre> starWarsGenre = new ArrayList<Genre>();
        starWarsGenre.add(new Genre("ACTION"));
        starWarsGenre.add(new Genre("SCI_FI"));
        firstMovie.setGenre(starWarsGenre);

        return firstMovie;
    }

    public void run(String... args) throws Exception {
        addSomeMovies();
        // We indexed Star Wars and The Princess Bride
        // into our movie listing in Elasticsearch

        // Let's query if we have a movie with "Star Wars" as name
        List<Movie> starWarsNameQuery = movieService.getByName("Star Wars");
        logger.info("Content of star wars name query is {}", starWarsNameQuery);

        // Let's query if we have a movie with "The Princess Bride" as name
        List<Movie> brideQuery = movieService.getByName("The Princess Bride");
        logger.info("Content of princess bride name query is {}", brideQuery);

        // Let's query if we have movies with a rating between 6 and 9
        List<Movie> byRatingInterval = movieService.getByRatingInterval(6d, 9d);
        logger.info("Content of Rating Interval query is {}", byRatingInterval);
    }

    public static void main(String[] args) throws Exception {
        SpringApplication.run(BootElastic.class, args);
    }
}

If we run it, the result is:

2015-02-28 18:26:12.368  INFO 3616 --- [main] main.BootElastic: Content of star wars name query is [Movie{id=1, name='Star Wars', genre=[Genre{name='ACTION'}, Genre{name='SCI_FI'}], rating=9.6}]
2015-02-28 18:26:12.373  INFO 3616 --- [main] main.BootElastic: Content of princess bride name query is [Movie{id=2, name='The Princess Bride', genre=[Genre{name='ACTION'}, Genre{name='ROMANCE'}], rating=8.4}]
2015-02-28 18:26:12.384  INFO 3616 --- [main] main.BootElastic: Content of Rating Interval query is [Movie{id=2, name='The Princess Bride', genre=[Genre{name='ACTION'}, Genre{name='ROMANCE'}], rating=8.4}]

As you can see, the interval query only retrieved The Princess Bride. And we did not do any configuration, right? That is unusual. Well, I have to share the huge configuration file with you:

spring.data.elasticsearch.cluster-nodes=localhost:9300
# if spring data repository support is enabled
spring.data.elasticsearch.repositories.enabled=true

Normally you would use port 9200 when you query your elastic server over HTTP, but when we reach it programmatically we use 9300, the native transport port. If you have more than one node, you would separate them with a comma and use 9301, 9302, etc. as port numbers.
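For instance, a hypothetical setup with a second local node listening on the next transport port would be configured like this (hosts and ports here are made up for illustration):

spring.data.elasticsearch.cluster-nodes=localhost:9300,localhost:9301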
Our pom file is no surprise either. Just the elastic starter pom and we are set to go.

<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://maven.apache.org/POM/4.0.0"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>caught.co.nr</groupId>
    <artifactId>boot-elastic-sample</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>war</packaging>

    <!-- Inherit defaults from Spring Boot -->
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.2.2.RELEASE</version>
    </parent>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
        </dependency>
    </dependencies>

    <!-- Needed for fat jar -->
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

As you can see, thanks to Spring Boot and Spring Data it is quite easy to work with Elasticsearch. Let's check what we indexed through the server API as well. I'll use Sense, a Chrome plug-in for Elasticsearch commands. Here's the resulting JSON:

{
   "took": 2,
   "timed_out": false,
   "_shards": {
      "total": 1,
      "successful": 1,
      "failed": 0
   },
   "hits": {
      "total": 2,
      "max_score": 1,
      "hits": [
         {
            "_index": "elastic_sample",
            "_type": "movie",
            "_id": "1",
            "_score": 1,
            "_source": {
               "id": 1,
               "name": "Star Wars",
               "genre": [
                  { "name": "ACTION" },
                  { "name": "SCI_FI" }
               ]
            }
         },
         {
            "_index": "elastic_sample",
            "_type": "movie",
            "_id": "2",
            "_score": 1,
            "_source": {
               "id": 2,
               "name": "The Princess Bride",
               "genre": [
                  { "name": "ACTION" },
                  { "name": "ROMANCE" }
               ]
            }
         }
      ]
   }
}

You can check out the whole project on GitHub.

Reference: Head first elastic search on java with spring boot and data features from our JCG partner Sezin Karli at the caught Somewhere In Time = true; blog.