

What’s the opposite of duplication?

“If you see the same code structure in more than one place,” writes Martin Fowler in his wonderful book Refactoring, “you can be sure that your program will be better if you can find a way to unify them.” He then describes how skillful programmers use the twin scalpels of extraction and substitution to excise duplicated expression and metastasized algorithm. Almost lost in this flurry of fine advice lies the word “structure.” Fowler seems to use it in a slightly unorthodox fashion. Structure being a set of elements and their relationships, that to which Fowler refers certainly expresses structure: the source code lines constitute the elements; sequential control flow, the relationships. And true, multiplicity in this context – lines of code – represents the most fundamental form of duplication, the one on which all other duplication builds. Yet other structure exists, structure of more traditional form, structure that also reflects – perhaps imperfectly – the underpinning textual substrate. This higher level of structure models methods as elements and dependencies as relationships. What does duplication look like in this context?

Figure 1 shows the spoiklin diagram of a program bedridden with the vile disease. In figure 1, a() calls five other methods: e(), f(), g(), h() and i(). And so does b(). And so does c(). This structural form does not preserve the order of method invocation, and hence figure 1 can only suggest and not pronounce duplication – abstractions shed their information ballast as they float up over the source code proper – but, as we shall see later, pragmatism easily vanquishes such theoretical constraint. Consider that figure 1 successfully identifies a structural duplication: that a(), b() and c() all house precisely the same sequence of method invocations. What's to be done, and how should the structural solution appear?
The solution, of course, crushes the three sequences together into a new method, d(), which the original top three then call; see figure 2. Here we see, if not the opposite of duplication, at least the cost of its eradication: depth. The transitive dependencies of figure 1 are two elements long; in figure 2, they all stretch three elements long. This eradication comes, then, at the risk of increased ripple effects, but duplication, the root of all evil in software design, seems worth the price.

This may strike some as rather trivial. What could be more obvious than such duplication reduction? On this point, most of the software greats stand in rare agreement. Fowler elevates code duplication to the first and most important code smell in his book. Kent Beck writes in his Extreme Programming Explained, “When the system requires that you duplicate code, it is asking for refactoring.” J. B. Rainsberger professes just two requirements of simple design, one being that it “minimizes duplication.”

The problem is that duplication minimization first requires duplication identification, a task at which, rather like performance optimization, machines excel and humans, sadly, do not. Several modern design processes urge programmers to satisfy functional requirements first and to briefly postpone those structural redresses necessitated by function-blinkered design. During this postponed refactoring, these programmers naturally focus on the newly accreted logic and so may fail to notice when local additions mirror existing code in distant, unstudied packages. The futility of such efforts to manually root out duplication often scars even the most popular programs long after their release. A casual processing of the recently reviewed FitNesse reveals it to be littered with many snippets of identical or nearly identical code, most trivially small, but some boasting impressive reach, such as the tentacled ...
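The refactoring of figures 1 and 2 can be sketched in Java. The method names follow the figures; the trace field is not part of the article's example, it is only an illustrative way to show that behaviour is preserved by the extraction:

```java
// A sketch of the structure in figures 1 and 2. Method names follow the
// figures; trace exists only to make the call sequence observable.
public class ExtractMethodSketch {

    static final StringBuilder trace = new StringBuilder();

    static void e() { trace.append("e"); }
    static void f() { trace.append("f"); }
    static void g() { trace.append("g"); }
    static void h() { trace.append("h"); }
    static void i() { trace.append("i"); }

    // Figure 2: d() now owns the sequence once duplicated by a(), b() and c().
    static void d() { e(); f(); g(); h(); i(); }

    // a(), b() and c() shrink to a single call each, at the cost of one extra
    // level of transitive dependency: the added depth described above.
    static void a() { d(); }
    static void b() { d(); }
    static void c() { d(); }

    public static void main(String[] args) {
        a(); b(); c();
        System.out.println(trace);  // efghiefghiefghi
    }
}
```

Before the extraction, the bodies of a(), b() and c() would each repeat the five calls e(); f(); g(); h(); i(); – textually identical sequences that a structural diagram renders visible at a glance.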

Go for Java Programmers: Introduction

Background

Go (often referred to as “Golang”) is a fairly new programming language, first conceived in 2007, with version 1.0 launched in 2012.  Its three inventors are currently Google employees, with impressive credentials.  Ken Thompson is the legendary father of UNIX.  Rob Pike created the influential Plan 9 operating system alongside Thompson, and Robert Griesemer worked on the Java HotSpot virtual machine and Google’s V8 JavaScript engine.

The origins of Go stem from a belief that C++ has grown too complex.  The three inventors wanted a new systems-level language… likewise building upon a C foundation, but adding contemporary concepts and eliminating cruft.  Like C, Go has a fairly compact language specification, and a small standard library compared to other modern languages.  However, Go introduces features such as: static typing and type safety, automatic garbage collection, simplified use of pointers, built-in support for concurrency, a dependency management system… and much more.

It was originally assumed that Go’s target audience would be C and C++ developers.  However, so far the most enthusiasm has come from Python and other dynamic language communities.  Programming in Go “feels” somewhat similar to Python in many ways… adding support for static types, easy concurrency, and ridiculously faster execution speed.  It can be described as a “best of both worlds” model, combining the efficiency of a low-level systems language with the ease and expressiveness of much higher-level languages.

Go for Java Programmers?

In a sense, the motives behind Go are similar to those behind Java’s origins.  Twenty years ago, Sun Microsystems wanted a programming language that was easier and more forgiving than C/C++ (e.g. garbage collection, no pointer arithmetic!).  Yet it had to be powerful, with a rich standard library sufficient for most tasks.
The industry was just entering the commercial Internet age, and needed a language with strong networking support built into its foundation.

I have been working with Java professionally for over 15 years, and I don’t see its status as my primary language changing anytime soon.  However, I have always had need for a secondary language to serve where Java is a poor fit.  Sometimes I need a quick script to parse through several gigabytes of dirty log files.  I need a one-off command line utility for feeding information into a database.  I need to easily deploy some batch processing logic, to be invoked from a cron job.  I might need a basic CRUD web app, simply wrapping a few database tables for easy data entry.  Etc.

Of course, I can do all of these things in Java.  It’s just that using Java for many of these tasks is like using a chainsaw when you need a pair of scissors.  Compared to other languages, it takes a good amount of work to bootstrap a Java program.  You have to either set up a build system such as Maven or Gradle, or else manage your JAR file dependencies by hand.  Deployment for a small special-purpose utility feels like overkill.  Deploying JARs separately means having to manage your classpath when invoking the JVM, whereas bundling them into a monolithic executable JAR can be problematic.

More importantly, Java is a very verbose language by today’s standards.  I don’t consider this a drawback for large team-oriented projects, or for complex logic meant to be maintained for a long time.  I find that criticism comes from newbies trying to use Java for simple, short-lived, one-off tasks… where the verbosity is unnecessary overhead.  However, opening and parsing a bunch of files does require a frustrating amount of boilerplate code, if that’s all you need a small utility to do.

Therefore, like many Java programmers, I’ve always had a secondary language or two in my back pocket for such side tasks.  Back in the late ’90s, it was Perl.
That eventually shifted to Python, although I’ve also spent significant amounts of time with Groovy, Scala, and others.  These extra languages have not only filled Java’s gaps for special side tasks, they have also served to improve my Java development skills.  When I started working with asynchronous Java code, it was easier due to having previously worked with JavaScript (where asynchronous is the norm).  If I had not already been working with Scala for several years, then learning the new functional constructs in Java 8 would be a far more daunting task.

Go has become my primary “other” language over the past year.  It offers several great advantages for a Java programmer:

A light and clean programming style that feels like working with a dynamic language, yet provides the compile-time type checking that Java developers prefer.

A standard library that is small enough to quickly learn, yet rich enough for all the things you might commonly need.  There are built-in parsers for the most common data formats (e.g. XML, JSON, CSV, etc).  A database library that resembles JDBC.  A built-in HTTP server with excellent performance, for writing quick RESTful services or small web apps.  Etc, etc.

Easy dependency management and deployment.  Go programs compile directly to native executables, with all dependencies statically linked.  There’s no virtual machine!  You don’t have to worry about having the correct version of Python installed on the target box, or wrestle with multiple environments on your own box using tools like virtualenv.  Just build an exe, and run it.  Although executables are platform-specific, Go makes it easy to cross-compile… building a Linux executable on a Windows workstation, for example.

Not only are Go executables fast, but so is the compiler.  The compiler is so fast that you can run Go programs as “scripts”, with the compilation occurring silently under the covers.
One of the easiest models out there for writing highly concurrent code in a safe manner.

Learning Go as a Java Programmer

Although Go is a fairly light and easy language to learn, there are some cultural challenges in approaching it as a Java developer.  There are several excellent books on Go (see the Resources section at the bottom), but they are mostly written from a C/C++ perspective.  Many of the things they emphasize are alien and distracting to Java programmers.  Meanwhile, there are IRC, StackOverflow, and many other online Go communities, but these seem largely populated by Python developers.  They can contrast Go’s environment and build tools to “pip” and “virtualenv”, but are not as able to explain them relative to Maven and other Java concepts.

So in the weeks and months ahead, I plan to write a series of blog posts about Go, covering different aspects from a perspective helpful to Java programmers.  I will compare Go concepts to their Java counterparts.  Hopefully this will be more helpful to Java programmers than reading explanations meant for a C/C++ or Python audience, and having to mentally translate them into Java terms.

Links to all of the posts in this series, as well as topics planned for the future, are listed on the main Go for Java Programmers series page.  If there are any additional subjects that you would like to see covered at some point, please don’t hesitate to contact me with your suggestions.  Thanks for reading, and enjoy your experience with Go!

Additional Resources

A Tour of Go – An interactive browser-based tutorial, for learning the basics of Go through live code examples.  Showcases the Go Playground, a means for testing out Go code in a web browser, hosted on Google App Engine.

Effective Go – A 50-page survey of language features, ideal for reading after you complete the Tour of Go tutorial.
How to Write Go Code – A brief overview of environmental topics, such as library management and the proper directory structure for a full Go application.

An Introduction to Programming in Go – A 165-page book by Caleb Doxsey, available for free on his website, with an Amazon Kindle version available for a nominal charge.  I have previously written a detailed review of the book on this site.

The Go Programming Language Phrasebook – This 288-page book by David Chisnall has an odd title, since it certainly isn’t a mere “phrase book”.  Instead, it’s a well-written mid-level overview of the language.  Its scope and density fall somewhere in between Doxsey’s book above and Summerfield’s book below.

Programming in Go: Creating Applications for the 21st Century – At 496 pages, Mark Summerfield’s book is the most thorough deep-dive of the language currently on the market that I would recommend.  If you ever learned Perl back in the day, then you can think of Chisnall’s book as analogous to the “llama book”, while Summerfield’s is analogous to the “camel book”.

In the next article of the Go for Java Programmers series, we’ll compare the differences in basic syntax between Java and Go… starting with packages, functions, and variables.

Reference: Go for Java Programmers: Introduction from our JCG partner Steve Perkins at the steveperkins.net blog. ...

Exporting Spring Data JPA Repositories as REST Services using Spring Data REST

Spring Data provides various modules to work with different types of datastores, such as RDBMS and NoSQL stores, in a unified way. In my previous article SpringMVC4 + Spring Data JPA + SpringSecurity configuration using JavaConfig I explained how to configure Spring Data JPA using JavaConfig. In this post, let us see how we can use Spring Data JPA repositories and export JPA entities as REST endpoints using Spring Data REST.

First let us configure the spring-data-jpa and spring-data-rest-webmvc dependencies in our pom.xml.

    <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-jpa</artifactId>
        <version>1.5.0.RELEASE</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-rest-webmvc</artifactId>
        <version>2.0.0.RELEASE</version>
    </dependency>

Make sure you have the latest released versions configured correctly, otherwise you will encounter the following error:

    java.lang.ClassNotFoundException: org.springframework.data.mapping.SimplePropertyHandler

Create the JPA entities.

    @Entity
    @Table(name = "USERS")
    public class User implements Serializable {
        private static final long serialVersionUID = 1L;

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name = "user_id")
        private Integer id;
        @Column(name = "username", nullable = false, unique = true, length = 50)
        private String userName;
        @Column(name = "password", nullable = false, length = 50)
        private String password;
        @Column(name = "firstname", nullable = false, length = 50)
        private String firstName;
        @Column(name = "lastname", length = 50)
        private String lastName;
        @Column(name = "email", nullable = false, unique = true, length = 50)
        private String email;
        @Temporal(TemporalType.DATE)
        private Date dob;
        private boolean enabled = true;
        @OneToMany(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
        @JoinColumn(name = "user_id")
        private Set<Role> roles = new HashSet<>();
        @OneToMany(mappedBy = "user")
        private List<Contact> contacts = new ArrayList<>();

        //setters and getters
    }

    @Entity
    @Table(name = "ROLES")
    public class Role implements Serializable {
        private static final long serialVersionUID = 1L;

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name = "role_id")
        private Integer id;
        @Column(name = "role_name", nullable = false)
        private String roleName;

        //setters and getters
    }

    @Entity
    @Table(name = "CONTACTS")
    public class Contact implements Serializable {
        private static final long serialVersionUID = 1L;

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        @Column(name = "contact_id")
        private Integer id;
        @Column(name = "firstname", nullable = false, length = 50)
        private String firstName;
        @Column(name = "lastname", length = 50)
        private String lastName;
        @Column(name = "email", nullable = false, unique = true, length = 50)
        private String email;
        @Temporal(TemporalType.DATE)
        private Date dob;
        @ManyToOne
        @JoinColumn(name = "user_id")
        private User user;

        //setters and getters
    }

Configure the DispatcherServlet using AbstractAnnotationConfigDispatcherServletInitializer.
Observe that we have added RepositoryRestMvcConfiguration.class to the getServletConfigClasses() method. RepositoryRestMvcConfiguration is the one which does the heavy lifting of looking for Spring Data repositories and exporting them as REST endpoints.

    package com.sivalabs.springdatarest.web.config;

    import javax.servlet.Filter;
    import org.springframework.data.rest.webmvc.config.RepositoryRestMvcConfiguration;
    import org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter;
    import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;

    import com.sivalabs.springdatarest.config.AppConfig;

    public class SpringWebAppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

        @Override
        protected Class<?>[] getRootConfigClasses() {
            return new Class<?>[] { AppConfig.class };
        }

        @Override
        protected Class<?>[] getServletConfigClasses() {
            return new Class<?>[] { WebMvcConfig.class, RepositoryRestMvcConfiguration.class };
        }

        @Override
        protected String[] getServletMappings() {
            return new String[] { "/rest/*" };
        }

        @Override
        protected Filter[] getServletFilters() {
            return new Filter[] { new OpenEntityManagerInViewFilter() };
        }
    }

Create Spring Data JPA repositories for the JPA entities.

    public interface UserRepository extends JpaRepository<User, Integer> {
    }

    public interface RoleRepository extends JpaRepository<Role, Integer> {
    }

    public interface ContactRepository extends JpaRepository<Contact, Integer> {
    }

That’s it. Spring Data REST will take care of the rest. You can use the Spring rest-shell https://github.com/spring-projects/rest-shell or Chrome’s Postman addon to test the exported REST services.
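Besides rest-shell and Postman, the exported endpoints can be hit from plain Java. Here is a minimal sketch using the JDK's java.net.http client (Java 11+); the class name is mine, and the base URI is an assumption matching the demo deployment above, so the app must be running for main() to succeed:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UsersEndpointClient {

    // Builds a GET request for the exported users collection resource.
    static HttpRequest usersRequest(String baseUri) {
        return HttpRequest.newBuilder(URI.create(baseUri + "/users"))
                .header("Accept", "application/hal+json")
                .GET()
                .build();
    }

    public static void main(String[] args) throws Exception {
        HttpRequest request =
                usersRequest("http://localhost:8080/spring-data-rest-demo/rest");
        // Prints the HAL response body (the same JSON shown in the
        // rest-shell session below).
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```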
    D:\rest-shell-1.2.1.RELEASE\bin>rest-shell

    http://localhost:8080:>

Now we can change the baseUri using the baseUri command as follows:

    http://localhost:8080:> baseUri http://localhost:8080/spring-data-rest-demo/rest/
    http://localhost:8080/spring-data-rest-demo/rest/> list
    rel         href
    ======================================================================================
    users       http://localhost:8080/spring-data-rest-demo/rest/users{?page,size,sort}
    roles       http://localhost:8080/spring-data-rest-demo/rest/roles{?page,size,sort}
    contacts    http://localhost:8080/spring-data-rest-demo/rest/contacts{?page,size,sort}

Note: It seems there is an issue with rest-shell: when the DispatcherServlet is mapped to “/” and you issue the list command, it responds with “No resources found”.

    http://localhost:8080/spring-data-rest-demo/rest/> get users/
    {
      "_links": {
        "self": {
          "href": "http://localhost:8080/spring-data-rest-demo/rest/users/{?page,size,sort}",
          "templated": true
        },
        "search": {
          "href": "http://localhost:8080/spring-data-rest-demo/rest/users/search"
        }
      },
      "_embedded": {
        "users": [
          {
            "userName": "admin",
            "password": "admin",
            "firstName": "Administrator",
            "lastName": null,
            "email": "admin@gmail.com",
            "dob": null,
            "enabled": true,
            "_links": {
              "self": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/1" },
              "roles": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/1/roles" },
              "contacts": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/1/contacts" }
            }
          },
          {
            "userName": "siva",
            "password": "siva",
            "firstName": "Siva",
            "lastName": null,
            "email": "sivaprasadreddy.k@gmail.com",
            "dob": null,
            "enabled": true,
            "_links": {
              "self": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/2" },
              "roles": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/2/roles" },
              "contacts": { "href": "http://localhost:8080/spring-data-rest-demo/rest/users/2/contacts" }
            }
          }
        ]
      },
      "page": {
        "size": 20,
        "totalElements": 2,
        "totalPages": 1,
        "number": 0
      }
    }

You can find the source code at https://github.com/sivaprasadreddy/sivalabs-blog-samples-code/tree/master/spring-data-rest-demo

For more info on the Spring rest-shell: https://github.com/spring-projects/rest-shell

Reference: Exporting Spring Data JPA Repositories as REST Services using Spring Data REST from our JCG partner Siva Reddy at the My Experiments on Technology blog. ...

Can MapReduce solve planning problems?

To solve a planning or optimization problem, some solvers tend to scale out poorly: as the problem gains more variables and more constraints, they use a lot more RAM and CPU power. They can hit hardware memory limits at a few thousand variables and a few million constraint matches. One way their users typically work around such hardware limits is to use MapReduce. Let’s see what happens if we MapReduce a planning problem, such as the Traveling Salesman Problem.

About MapReduce

MapReduce is a programming model which has proven to be very effective for running a query on big data. Generally speaking, it works like this:

The data is partitioned across multiple computer nodes.
A map function runs on every partition and returns a result.
A reduce function reduces 2 results into one result. It runs continuously until only a single result remains.

For example, suppose we need to find the most expensive invoice record in a data cluster:

The invoice records are partitioned across multiple computer nodes.
For each node, the map function extracts the most expensive invoice for that node.
The reduce function takes 2 invoices and returns the most expensive.

About the Traveling Salesman Problem

The Traveling Salesman Problem (TSP) is a very basic planning problem. Given a list of cities, find the shortest path to visit all cities. For example, here’s a dataset with 68 cities and its optimal tour with a distance of 674.

The search space of this small dataset has 68! (about 10^96) combinations. That’s a lot. A more realistic planning problem, such as vehicle routing, has more constraints (both in number and in complexity), such as: vehicle capacity, vehicle type limitations, time windows, driver limits, etc.

MapReduce on TSP

Even though most solvers probably won’t go out of memory on only 68 variables, the small size of this problem allows us to visualize it clearly. Let’s apply MapReduce on it.

1) Partition: Divide the problem into n pieces

First, we take the problem and split it into n pieces.
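Before looking at partitioning in detail, note why the invoice query above MapReduces so well: "most expensive" is associative, so merging partial maxima loses nothing. A few lines of plain Java make the point; the Invoice type and figures are hypothetical, and the outer list stands in for real distributed partitions:

```java
import java.util.Comparator;
import java.util.List;

public class InvoiceMapReduce {

    // Hypothetical invoice record, purely for illustration.
    record Invoice(String id, double amount) {}

    // Map: each partition (node) extracts its most expensive invoice.
    // Reduce: partial results are pairwise combined until one remains.
    static Invoice mostExpensive(List<List<Invoice>> partitions) {
        return partitions.stream()
                .map(p -> p.stream()
                        .max(Comparator.comparingDouble(Invoice::amount))
                        .orElseThrow())
                .reduce((a, b) -> a.amount() >= b.amount() ? a : b)
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<List<Invoice>> partitions = List.of(
                List.of(new Invoice("A", 120.0), new Invoice("B", 340.0)),
                List.of(new Invoice("C", 75.0), new Invoice("D", 510.0)));
        System.out.println(mostExpensive(partitions).id());  // D
    }
}
```

A partition's local maximum is always a candidate for the global maximum. The TSP has no such property: an optimal sub-tour is not guaranteed to be part of the optimal tour, which is exactly where the trouble starts.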
Usually, n is the number of computer nodes in our system. For visual reasons, we divide it into only 4 pieces. TSP is easily partitioned because it only has 1 relevant constraint: find the shortest path. In a more realistic planning problem, sane partitioning can be hard or even impossible. For example:

In capacitated vehicle routing, no 2 partitions should share the same vehicle. What if we have more partitions than vehicles?
In vehicle routing with time windows, each partition should have enough vehicle time to service each customer and drive to each location. Catch-22: how do we determine the drive time if we don’t know the vehicle routes yet?

It’s tempting to make false assumptions.

2) Map: Solve each piece separately

Solve each partition using a Solver. We get 4 pieces, each with a partial solution.

3) Reduce: Merge solution pieces

Merge the pieces together. To merge 2 pieces, we remove an arc from each piece and add 2 arcs to connect cities of different pieces. We merge several times until all pieces are merged. There are several ways to merge 2 pieces together. Here we try every combination and take the optimal one. For performance reasons, we might instead connect the 2 closest cities of different pieces with an arc, and then add a correcting arc on the other side (however long that may be). In a more realistic planning problem, with more complex constraints, merging feasible partial solutions often results in an infeasible solution (with broken hard constraints). Smarter partitioning, which takes all the hard constraints into account, can sometimes solve this… at the expense of more broken soft constraints and a higher maintenance cost.

4) Result: What is the quality of the result?

Each piece was solved optimally. Pieces were merged optimally. But the result is not optimal. In fact, the results aren’t even near optimal, especially as we scale out with a MapReduce approach:

More variables result in a lower result quality.
More constraints result in a lower result quality, presuming it’s even possible to partition and reduce sanely.
More partitions result in a lower result quality.

Conclusion

MapReduce is a great approach for handling a query problem (and presumably many other problems). But MapReduce is a terrible approach for a planning or optimization problem. Use the right tool for the job.

Note: We applied MapReduce to the planning problem, not to the optimization algorithm implementation in a Solver, for which it can make sense. For example, in a depth-first search algorithm, MapReduce can make sense for exploring the search tree (although the search tree scales exponentially, which dwarfs any gain from MapReduce). To solve a big planning problem, use a Solver (such as OptaPlanner) that scales well in memory, so you don’t have to resort to partitioning at the expense of solution quality.

Reference: Can MapReduce solve planning problems? from our JCG partner Geoffrey De Smet at the OptaPlanner blog. ...

WAR files vs. Java apps with embedded servers

Most server-side Java applications (e.g. web or service-oriented) are intended to run within a container.  The traditional way to package these apps for distribution is to bundle them as a WAR file.  This is nothing more than a ZIP archive with a standard directory layout, containing all of the libraries and application-level dependencies needed at runtime.  This format is mostly interoperable, and can be deployed to whichever server container you like: Tomcat or Jetty, JBoss or GlassFish, etc.

However, there is another school of thought which completely turns this model on its head.  In this approach, Java applications are packaged for command-line execution like any normal app.  Rather than the app being deployed to a container, an embedded container is deployed within the application itself!

This is par for the course with many languages.  The Django framework for Python includes a bundled server for development and testing, while Ruby on Rails comes with an embedded server that is used in production as well.  The concept has even been around for a while in Java, with Jetty specializing in the embedded niche.  However, this is far from the norm, and the de facto standard is still a WAR file that can be deployed to Tomcat.

The buzz is growing, though.  At last year’s DevNexus conference, I went to a session by James Ward, who at that time was a “developer evangelist” at Heroku.  Bundling your own container is the recommended approach for deploying apps to Heroku’s cloud-based platform, and James is a big proponent.  His session was specifically about the Play framework for Java and Scala, which embeds Netty in a manner similar to the Rails server.  Unlike Grails, which uses a Django-style server for development and then ships as a WAR file, Play is meant to use its own server all the way to production.  James advocated this approach for all Java apps.

An embedded adventure

I had at least a sip of the Kool-Aid.
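For readers who haven't seen the pattern, here is a minimal sketch of the embedded idea, using the JDK's built-in com.sun.net.httpserver rather than Jetty or Netty (purely because it is the smallest self-contained illustration): the application owns main() and starts its own container, instead of being dropped into one.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// The app starts its own HTTP container; no WAR file, no external server.
public class EmbeddedServerSketch {

    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            byte[] body = "hello from an embedded server".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();  // returns immediately; a real app would then block
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8080);
        System.out.println("Listening on http://localhost:8080/");
    }
}
```

The whole thing runs with a plain java command, which is exactly the deployment story the embedded-server camp is selling: package as a normal app, run like a normal app.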
When I started writing my book, Hibernate Search by Example, I wanted to keep its focus on Hibernate Search rather than any other frameworks or server issues.  So I eschewed Spring, and wrote the book’s example application using a vanilla Servlet 3.0 approach.  I normally use Eclipse in my own development environment, and point it at a local Tomcat instance for testing web apps.  However, I wanted to support readers who were more comfortable using IntelliJ, NetBeans, or no IDE at all.  So I decided to have my build script embed a test server, so readers could run the examples without installing or configuring anything.

Using an embedded server with Maven

My first goal was simply to launch a server from within my Maven build scripts, so the readers wouldn’t have to install a server or integrate it into their IDE.  I had seen this done before, and it was a simple matter of adding the Jetty Maven Plugin to a project’s POM.  Readers should be able to build the example application and launch it in one step, with the command:

    mvn clean jetty:run

Eh, not so fast.  You’re supposed to be able to make changes to your static content, and see the changes immediately take effect while the server is running.  However, I ran into errors about my files being locked.  After wasting some time on research, I discovered that Jetty’s default settings do not play nice with Windows file locking.  This can be fixed by toggling one property in one config file.  However, you have to crack open a Jetty JAR file to get a correct copy of this config file.  First, you have to dig around your local Maven repo and figure out which JAR file to crack open (it turns out to be jetty-webapp rather than jetty-server).
Once you get a copy of the webdefault.xml file and toggle the useFileMappedBuffer setting, you have to save your copy somewhere in your project, and update your Maven POM to look there rather than inside the Jetty JAR:

    <plugin>
        <groupId>org.mortbay.jetty</groupId>
        <artifactId>jetty-maven-plugin</artifactId>
        <version>8.1.7.v20120910</version>
        <configuration>
            <webAppConfig>
                <defaultsDescriptor>${basedir}/src/main/webapp/WEB-INF/webdefault.xml</defaultsDescriptor>
            </webAppConfig>
        </configuration>
    </plugin>

Okay… a little more hassle than I was expecting, but I can deal with this.

Using an embedded server with other build systems

I know that many Java developers hate Maven.  So I wanted to provide a version of my book’s example application built using Ant, to illustrate how the default concepts can be adapted.  So, which line do I add to build.xml to make Ant use Jetty?

Eh, not so fast.  There is Ant integration for Jetty, but it is even more cumbersome than Maven’s.  Even if you are using a dependency-management system such as Ivy, your Ant script can’t download and manage the embedded server for you.  Instead, you have to download a full standalone Jetty server, and manually copy bits and pieces of it into your project.  Who doesn’t want 6 megs of executable binaries committed into source control?  After you copy over the Jetty server JARs, you need to manually add another JAR file for the Ant integration.  To my surprise, I discovered that the most recent supported version was Jetty 7, implementing the Servlet 2.5 spec that is almost eight years old.  I see that they finally added Jetty 8 last month, but that didn’t help me when I was writing the book last autumn.  I had to re-write this version of my example app for Servlet 2.5 instead of 3.0, and was starting to wonder if this was really worth it.

Using an embedded server from code

This last chapter of my book talks about Hibernate Search applications running in a clustered server environment.
The Maven plugin is purely single-instance, so I decided to write a small bootstrap class that would programmatically launch two Jetty instances on different ports.  By structuring this class as a JUnit test, I could still have Maven launch it automatically like this:

    mvn clean compile war:exploded test

Eh, not so fast.  My application’s servlets, listeners, and RESTful services were not being registered at startup.  After much more wasted research time, I discovered that there are different “flavors” of Jetty available, with Servlet 3.0 features (such as annotations) enabled or disabled by default.  To be honest, I still don’t completely understand how to tell the difference between “hightide” and “non-hightide”.  All I can tell you is that I had to add this hunk of code to my bootstrap class in order to make annotations work:

    ...
    masterContext.setConfigurations(new Configuration[] {
        new WebInfConfiguration(),
        new WebXmlConfiguration(),
        new MetaInfConfiguration(),
        new FragmentConfiguration(),
        new EnvConfiguration(),
        new PlusConfiguration(),
        new AnnotationConfiguration(),
        new JettyWebXmlConfiguration(),
        new TagLibConfiguration()
    });
    ...

So much simpler and more intuitive than dropping a WAR file in Tomcat’s /webapps folder, right?

Using an embedded server from the console and the cloud

With the book complete, I wanted a demo version of the example code pushed to GitHub and deployed to Heroku.  Theoretically, Heroku can run any application that you can run locally from the command line.  If Heroku finds a Maven POM, it will run mvn clean package, and then execute whatever startup command you have placed in a script named Procfile.  My programmatic Jetty launcher worked fine within the context of a Maven run.  However, Maven was managing my classpath dependencies at test time, and now I needed Jetty available without that help.  Heroku’s recommended approach, used in their demo Java applications, is to bundle your app with a one-file version of Tomcat.
Awesome, I'm more familiar with Tomcat anyway! Eh, not so fast. If your application expects database connections (or anything else) to be registered as JNDI resources, then you are on your own. Heroku's bundled Tomcat runner doesn't support JNDI setup. Hmm… maybe this is why Heroku's vanilla servlet demo doesn't really do anything, and why the only demo app that does do something is Spring-based instead. Now that I think about it, James Ward left Heroku to work for TypeSafe last year, and Heroku hasn't made a single update to their Java site since he left. Gulp. Don't worry, because there is a similar one-file Jetty Runner, and it does let you pass JNDI settings as command-line parameters. Besides, we've invested a lot of time solving all the problems with embedded Jetty already! Eh, still too fast. If you use JSTL taglibs in your JSP views (i.e. you live in the 21st century), then Jetty Runner is a mess that puts you in classpath hell. When running it from the command-line, you need to pass parameters to Java for:

- The Jetty Runner JAR file
- Your web application's WAR file (*)
- The exploded version of your WAR file, generated during the Maven build

(*) You read that correctly. After all of this embedded nightmare, Heroku is actually still using a WAR file!!! My Heroku Procfile ended up looking like this:

web: java $JAVA_OPTS -jar target/dependency/jetty-runner-8.1.7.v20120910.jar --lib target/hibernate-search-demo-0.0.1-SNAPSHOT/WEB-INF/lib --port $PORT --jdbc org.apache.commons.dbcp.BasicDataSource "url=jdbc:h2:mem:vaporware;DB_CLOSE_DELAY=-1" "jdbc/vaporwareDB" target/*.war

There is more than one classloader at work here, and this allows the Jetty Runner to load the JSTL/taglib stuff from its classpath rather than the web app's classpath.

Conclusion

There is nothing inherently wrong with the embedded server concept, when it is baked-in to a framework from the outset. Writing Play applications is a pleasure, and they are almost trivial to deploy on Heroku.
At my day job, I use a Spring-based commerce package called hybris, whose extensive build system bundles a Tomcat server into your app. As long as you don't need to customize the build scripts too much, this works fine. On the other hand, the concept is just too fragile and brittle for wide general-purpose use. Duct taping an embedded server onto a normal Java application is pure pain. You might be able to cling to the safety of someone else's working example, but the moment your app does anything different, you are on your own to fix the breakage. Take my embedded adventure above, and contrast it with the "hassle" of using Tomcat:

- Download Tomcat and unzip it somewhere
- Drop your WAR file in Tomcat's /webapps subdirectory
- Start Tomcat

The only real advantage I gained was the ability to run a demo on Heroku. However, Java support from cloud providers is improving every day. Jelastic lets you deploy normal WAR files to Tomcat 7 or GlassFish 3 right now. AppFog supports deployment to Tomcat 6, with support for Tomcat 7 coming soon. I suspect that in the not-so-distant future, the idea of modifying your apps for cloud deployment will be seen as an anachronism. So in a nutshell, it depends on the framework you're using. If embedded servers are baked-in, then they can be very cool. If they are duct-taped on, then they can be horrible. If I were writing Hibernate Search by Example today, the example application build scripts would produce two things: a WAR file, and a Tomcat download link.

Reference: WAR files vs. Java apps with embedded servers from our JCG partner Steve Perkins at the steveperkins.net blog.

Integration testing with Maven and Docker

Docker is one of the new hot things out there. With a different set of technologies and ideas compared to traditional virtual machines, it implements something similar and at the same time different, with the idea of containers: almost all of a VM's power but much faster and with very interesting additional goodies. In this article I assume you already know something about Docker and know how to interact with it. If that's not the case, I can suggest these links to start with:

- http://www.docker.io/gettingstarted
- http://coreos.com/docs/launching-containers/building/getting-started-with-docker/
- http://robknight.org.uk/blog/2013/05/drupal-on-docker/

My personal contribution to the topic is to show you a possible workflow that allows you to start and stop Docker containers from within a Maven job. The reason why I have investigated this functionality is to help with tests and integration tests in Java projects built with Maven. The problem is well known: your code interacts with external systems and services. Depending on what you are really writing, this could mean databases, message brokers, web services and so on. The usual strategies to test these interactions are:

- In-memory servers, implemented in Java, that are usually very fast, but too often their limit is that they are not the real thing
- A layer of stubbed services that you implement to offer the interfaces that you need
- Real external processes, sometimes remote, to test real interactions

Those strategies work, but they often require a lot of effort to be put in place. And the most complete one, the one that uses proper external services, poses problems for what concerns isolation: imagine that you are interacting with a database and that you perform read/write operations just while someone else is accessing the same resources. Again, you may find the correct workflows that involve creating separate schemas and so on, but, again, this is extra work and very often not a very straightforward activity.
Wouldn't it be great if we could have the same opportunities that these external systems offer, but in total isolation? And what do you think if I also add speed to the offer? Docker is a tool that offers us this opportunity. You can start a set of Docker containers with all the services that you need at the beginning of the testing suite, and tear it down at the end of it. And your Maven job can be the only consumer of these services, with all the isolation that it needs. And all of this can easily be scripted with the help of Dockerfiles, which are, in the end, not much more than a sequential set of command line invocations. Let's see how to enable all of this. The first prerequisite is obviously to have Docker installed on your system. As you may already know, Docker technology depends on the capabilities of the Linux kernel, so you have to be on Linux OR you need the help of a traditional VM to host the Docker server process. This is the official documentation guide that shows you how to install under different Linux distros: http://docs.docker.io/en/latest/installation/ While this is a very quick guide showing how to install if you are on Mac OS X: http://blog.javabien.net/2014/03/03/setup-docker-on-osx-the-no-brainer-way/ Once you are ready and you have Docker installed, you need to apply a specific configuration. Docker, in recent versions, exposes its remote API, by default, only over Unix sockets. Although we could interact with them with the right code, I find it much easier to interact with the API over HTTP. To obtain this, you have to pass a specific flag to the Docker daemon to tell it to listen also on HTTP. I am using Fedora, and the configuration file to modify is /usr/lib/systemd/system/docker.service.
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.io
After=network.target

[Service]
ExecStart=/usr/bin/docker -d -H tcp:// -H unix:///var/run/docker.sock
Restart=on-failure

[Install]
WantedBy=multi-user.target

The only modification compared to the defaults is the added -H tcp:// flag. Now, after I have reloaded the systemd scripts and restarted the service, I have a Docker daemon that exposes a nice REST API I can poke with curl.

sudo systemctl daemon-reload
sudo systemctl restart docker
curl # returns a json in output

You probably also want this configuration to survive future Docker rpm updates. To achieve that, you have to copy the file you have just modified to a location that survives rpm updates. The correct way to achieve this in systemd is:

sudo cp /usr/lib/systemd/system/docker.service /etc/systemd/system

If you are using Ubuntu you have to configure a different file. Look at this page: http://blog.trifork.com/2013/12/24/docker-from-a-distance-the-remote-api/ Now we have all we need to interact easily with Docker. You may at this point expect me to describe how to use the Maven Docker plugin. Unluckily, that's not the case. There is no such plugin yet, or at least I am not aware of one. I am considering writing one, but for the time being I have solved my problems quickly with the help of the GMaven plugin, a little bit of Groovy code and the help of the Java library Rest-assured.
Here is the code to start up Docker containers:

import com.jayway.restassured.RestAssured
import static com.jayway.restassured.RestAssured.*
import static com.jayway.restassured.matcher.RestAssuredMatchers.*
import com.jayway.restassured.path.json.JsonPath
import com.jayway.restassured.response.Response

RestAssured.baseURI = ""
RestAssured.port = 4243

// here you can specify advanced docker params, but the mandatory one is the name of the Image you want to use
def dockerImageConf = '{"Image":"${docker.image}"}'
def dockerImageName = JsonPath.from(dockerImageConf).get("Image")

log.info "Creating new Docker container from image $dockerImageName"
def response = with().body(dockerImageConf).post("/containers/create")

if( 404 == response.statusCode ) {
    log.info "Docker image not found in local repo. Trying to download image '$dockerImageName' from remote repos"
    response = with().parameter("fromImage", dockerImageName).post("/images/create")
    def message = response.asString()
    // odd: rest api always returns 200 and doesn't return proper json. I have to grep
    if( message.contains("404") )
        fail("Image $dockerImageName NOT FOUND remotely. Abort. $message")
    log.info "Image downloaded"
    // retry to create the container
    response = with().body(dockerImageConf).post("/containers/create")
    if( 404 == response.statusCode )
        fail("Unable to create container with conf $dockerImageConf: ${response.asString()}")
}

def containerId = response.jsonPath().get("Id")
log.info "Container created with id $containerId"
// set the containerId to be retrieved later during the stop phase
project.properties.setProperty("containerId", "$containerId")

log.info "Starting container $containerId"
with().post("/containers/$containerId/start").asString()

def ip = with().get("/containers/$containerId/json").path("NetworkSettings.IPAddress")
log.info "Container started with ip: $ip"

System.setProperty("MONGODB_HOSTNAME", "$ip")
System.setProperty("MONGODB_PORT", "27017")

And this is the one to stop them:

import com.jayway.restassured.RestAssured
import static com.jayway.restassured.RestAssured.*
import static com.jayway.restassured.matcher.RestAssuredMatchers.*

RestAssured.baseURI = ""
RestAssured.port = 4243

def containerId = project.properties.getProperty('containerId')
log.info "Stopping Docker container $containerId"
with().post("/containers/$containerId/stop")
log.info "Docker container stopped"
if( true == ${docker.remove.container} ) {
    with().delete("/containers/$containerId")
    log.info "Docker container deleted"
}

Rest-assured's fluent API should suggest what is happening, and the inline comments should clarify it, but let me add a couple of notes. The code to start a container is my implementation of the functionality of docker run as described in the official API documentation here: http://docs.docker.io/en/latest/reference/api/docker_remote_api_v1.9/#inside-docker-run The specific problem I had to solve was how to propagate the id of my Docker container from one Maven phase to another.
I have achieved the functionality thanks to this line:

// set the containerId to be retrieved later during the stop phase
project.properties.setProperty("containerId", "$containerId")

I have also exposed a couple of Maven properties that can be useful to interact with the API:

- docker.image – The name of the image you want to spin up
- docker.remove.container – If set to false, tells Maven not to remove the stopped container from the filesystem (useful to inspect your docker container after the job has finished)

Ex. mvn verify -Ddocker.image=pantinor/fuse -Ddocker.remove.container=false

You may find here a full working example. I have been told that sometimes my syntax colorizer script eats some keyword or changes the case of words, so if you want to copy and paste it may be a better idea to crop from GitHub. This is a portion of the output while running the Maven build with the command mvn verify:

...
[INFO] --- gmaven-plugin:1.4:execute (start-docker-images) @ gmaven-docker ---
[INFO] Creating new Docker container from image {"Image":"pantinor/centos-mongodb"}
log4j:WARN No appenders could be found for logger (org.apache.http.impl.conn.BasicClientConnectionManager).
log4j:WARN Please initialize the log4j system properly.
[INFO] Container created with id 5283d970dc16bd7d64ec08744b5ecec09b57d9a81162826e847666b8fb421dbc
[INFO] Starting container 5283d970dc16bd7d64ec08744b5ecec09b57d9a81162826e847666b8fb421dbc
[INFO] Container started with ip:
[INFO] --- gmaven-plugin:1.4:execute (stop-docker-images) @ gmaven-docker ---
[INFO] Stopping Docker container 5283d970dc16bd7d64ec08744b5ecec09b57d9a81162826e847666b8fb421dbc
[INFO] Docker container stopped
[INFO] Docker container deleted
...

If you have any question or suggestion please feel free to let me know!
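The key enabler for the property trick is that GMaven hands every execution the same mutable Maven project object, so a property set during the test phase is still visible during post-integration-test. Stripped of Maven and Docker, the mechanism reduces to a shared java.util.Properties instance; a minimal sketch (class name and the shortened container id are made up for illustration):

```java
import java.util.Properties;

public class ContainerIdHandoff {

    // start-docker-images execution (test phase): remember the id
    // returned by POST /containers/create in the shared project properties.
    static void startPhase(Properties projectProperties, String createdId) {
        projectProperties.setProperty("containerId", createdId);
    }

    // stop-docker-images execution (post-integration-test phase):
    // read the id back in order to stop and delete that same container.
    static String stopPhase(Properties projectProperties) {
        return projectProperties.getProperty("containerId");
    }

    public static void main(String[] args) {
        // Stand-in for Maven's project.properties object, which GMaven
        // exposes to every Groovy execution within the same build.
        Properties projectProperties = new Properties();
        startPhase(projectProperties, "5283d970dc16"); // shortened, made-up id
        System.out.println("Stopping Docker container " + stopPhase(projectProperties));
    }
}
```

Note that System properties would not work as cleanly here if the tests fork a new JVM, which is why the project-level properties object is the natural place for this hand-off.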
Full Maven `pom.xml` available also here: https://raw.githubusercontent.com/paoloantinori/gmaven_docker/master/pom.xml

<?xml version="1.0"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <artifactId>gmaven-docker</artifactId>
    <groupId>paolo.test</groupId>
    <version>1.0.0-SNAPSHOT</version>
    <name>Sample Maven Docker integration</name>
    <description>See companion blogpost here: </description>
    <build>
        <plugins>
            <plugin>
                <groupId>org.codehaus.gmaven</groupId>
                <artifactId>gmaven-plugin</artifactId>
                <version>1.4</version>
                <configuration>
                    <providerSelection>2.0</providerSelection>
                </configuration>
                <executions>
                    <execution>
                        <id>start-docker-images</id>
                        <phase>test</phase>
                        <goals>
                            <goal>execute</goal>
                        </goals>
                        <configuration>
                            <source><![CDATA[
import com.jayway.restassured.RestAssured
import static com.jayway.restassured.RestAssured.*
import static com.jayway.restassured.matcher.RestAssuredMatchers.*

RestAssured.baseURI = ""
RestAssured.port = 4243

// here you can specify advanced docker params, but the mandatory one is the name of the Image you want to use
def dockerImage = '{"Image":"pantinor/centos-mongodb"}'
log.info "Creating new Docker container from image $dockerImage"
def response = with().body(dockerImage).post("/containers/create")
if( 404 == response.statusCode ) {
    log.info "[INFO] Docker Image not found. Downloading from Docker Registry"
    log.info with().parameter("fromImage", "pantinor/centos-mongodb").post("/images/create").asString()
    log.info "Image downloaded"
}
// retry to create the container
def containerId = with().body(dockerImage).post("/containers/create").path("Id")
log.info "Container created with id $containerId"
// set the containerId to be retrieved later during the stop phase
project.properties.setProperty("containerId", "$containerId")
log.info "Starting container $containerId"
with().post("/containers/$containerId/start").asString()
def ip = with().get("/containers/$containerId/json").path("NetworkSettings.IPAddress")
log.info "Container started with ip: $ip"
System.setProperty("MONGODB_HOSTNAME", "$ip")
System.setProperty("MONGODB_PORT", "27017")
]]></source>
                        </configuration>
                    </execution>
                    <execution>
                        <id>stop-docker-images</id>
                        <phase>post-integration-test</phase>
                        <goals>
                            <goal>execute</goal>
                        </goals>
                        <configuration>
                            <source><![CDATA[
import com.jayway.restassured.RestAssured
import static com.jayway.restassured.RestAssured.*
import static com.jayway.restassured.matcher.RestAssuredMatchers.*

RestAssured.baseURI = ""
RestAssured.port = 4243

def containerId = project.properties.getProperty('containerId')
log.info "Stopping Docker container $containerId"
with().post("/containers/$containerId/stop")
log.info "Docker container stopped"
with().delete("/containers/$containerId")
log.info "Docker container deleted"
]]></source>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
    <dependencies>
        <dependency>
            <groupId>com.jayway.restassured</groupId>
            <artifactId>rest-assured</artifactId>
            <version>1.8.1</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

Reference: Integration testing with Maven and Docker from our JCG partner Paolo Antinori at the Someday Never Comes blog.

Java Facts to Blow your Mind! (infographic)

With the release of Java 8 scheduled for the coming days, we were on the lookout for some Java facts that would really capture the effect of this programming language on the world. So, we decided to create a simple infographic depicting some important stats about the history of Java. The main source of information was Oracle's Java Timeline. We urge you to have a look at it and discover how Java came to be the incredible platform and ecosystem that it is today. As a high-level overview, here are some totally impressive stats:

- #1 Development Platform
- 9 Million Developers
- 1 Billion Java Downloads per Year
- 3 Billion devices run Java
- 97% of Enterprise Desktops run Java
- 100% of Blu-ray Disc Players ship with Java

The verdict is indisputable: the effect that Java has had on our world is stunning. Note that the timeline seems to not have been updated for a couple of years, and I am pretty confident that Java's predominance has grown since then, so those numbers seem to be on the lower end. To present the Java facts in a more eye-catching format that you can show your friends, we have decided to create an infographic here at Java Code Geeks. Enjoy! Don't forget to share with your fellow Java developers! Also find below the stats in a text format.

Language Principles

There were five primary goals in the creation of the Java language:

- It should be "simple, object-oriented and familiar"
- It should be "robust and secure"
- It should be "architecture-neutral and portable"
- It should execute with "high performance"
- It should be "interpreted, threaded, and dynamic"

Java Editions

There are four editions of Java defined and supported, targeting different application environments. The APIs are segmented so that they belong to one of the platforms. The platforms are:

- Java Card for smartcards.
- Java Platform, Micro Edition (Java ME) — targeting environments with limited resources.
- Java Platform, Standard Edition (Java SE) — targeting workstation environments.
- Java Platform, Enterprise Edition (Java EE) — targeting large distributed enterprise or Internet environments.

Java Versions

Major release versions of Java, along with their release dates:

- JDK 1.0 (January 21, 1996)
- JDK 1.1 (February 19, 1997)
- J2SE 1.2 (December 8, 1998)
- J2SE 1.3 (May 8, 2000)
- J2SE 1.4 (February 6, 2002)
- J2SE 5.0 (September 30, 2004)
- Java SE 6 (December 11, 2006)
- Java SE 7 (July 28, 2011)
- Java SE 8 (March 18, 2014)

Duke, the Java mascot

Duke was designed to represent a "software agent" that performed tasks for the user. Duke was the interactive host that enabled a new type of user interface that went beyond the buttons, mice, and pop-up menus of the desktop computing world. Duke was instantly embraced. In fact, at about the same time Java was first introduced and the first Java cup logo was commissioned, Duke became the official mascot of Java technology. In 2006, Duke was officially "open sourced" under a BSD license. Duke is celebrated at Oracle. A living, life-size Duke is a popular feature at every JavaOne developer conference. And each year, Oracle releases a new Duke personality.

JVM Languages

- BeanShell – A lightweight scripting language for Java.
- Clojure – A dialect of the Lisp programming language.
- Groovy – A dynamic language with features similar to those of Python, Ruby, Perl, and Smalltalk.
- JRuby – A Ruby interpreter.
- Jython – A Python interpreter.
- Kotlin – An industrial programming language for the JVM with full Java interoperability.
- Rhino – A JavaScript interpreter.
- Scala – A multi-paradigm programming language designed as a "better Java".
- Gosu – A general-purpose Java Virtual Machine-based programming language released under the Apache License 2.0.

Java and the Future

Java 8 is expected on 18 March 2014:

- JSR 335, JEP 126: Language-level support for lambda expressions
- JSR 223, JEP 174: Project Nashorn, a JavaScript runtime
- JSR 308, JEP 104: Annotations on Java Types
- Unsigned Integer Arithmetic
- JSR 310, JEP 150: Date and Time API

Java 9 is expected in 2016 (as mentioned at JavaOne 2011):

- JSR 294: Modularization of the JDK under Project Jigsaw
- JSR 354: Money and Currency API
- Tight integration with JavaFX

References

- http://oracle.com.edgesuite.net/timeline/java/
- http://www.oracle.com/us/technologies/java/duke-424174.html
- http://en.wikipedia.org/wiki/Java_%28programming_language%29
- https://en.wikipedia.org/wiki/Java_%28software_platform%29
- http://en.wikipedia.org/wiki/Java_version_history

Integration testing custom validation constraints in Jersey 2

I recently joined a team trying to switch a monolithic legacy system into a set of RESTful services in Java. They decided to use the latest 2.x version of Jersey as a REST container, which was not a first choice for me, since I'm not a big fan of JSR-* specs. But now I must admit that JAX-RS 2.x is doing things right: it requires almost zero boilerplate code, supports auto-discovery of features and prefers convention over configuration like other modern frameworks. Since the spec is still young, it's hard to find good tutorials and kick-off projects with some working code. I created the jersey2-starter project on GitHub which can be used as a starting point for your own production-ready RESTful service. In this post I'd like to cover how to implement and integration-test your own validation constraints of REST resources.

Custom constraints

One of the issues which bothers me when coding REST in Java is littering your class model with annotations. Suppose you want to build a simple Todo list REST service; when using Jackson, validation and Spring Data, you can easily end up with this as your entity class:

@Document
public class Todo {
    private Long id;
    @NotNull
    private String description;
    @NotNull
    private Boolean completed;
    @NotNull
    private DateTime dueDate;

    @JsonCreator
    public Todo(@JsonProperty("description") String description, @JsonProperty("dueDate") DateTime dueDate) {
        this.description = description;
        this.dueDate = dueDate;
        this.completed = false;
    }
    // getters and setters
}

Your domain model is now effectively blurred by messy annotations almost everywhere. Let's see what we can do with the validation constraints (@NotNulls). Some may say that you could introduce a DTO layer with its own validation rules, but for me that conflicts with pure REST API design, which states that you operate on resources which should map to your domain classes. On the other hand – what does it mean that a Todo object is valid?
When you create a Todo you should provide a description and due date, but what about when you're updating? You should be able to change any of description, due date (postponing) and completion flag (marking as done) – but you should provide at least one of these as a valid modification. So my idea is to introduce custom validation constraints, different ones for creation and modification:

@Target({TYPE, PARAMETER})
@Retention(RUNTIME)
@Constraint(validatedBy = ValidForCreation.Validator.class)
public @interface ValidForCreation {
    //...
    class Validator implements ConstraintValidator<ValidForCreation, Todo> {
        //...
        @Override
        public boolean isValid(Todo todo, ConstraintValidatorContext constraintValidatorContext) {
            return todo != null
                && todo.getId() == null
                && todo.getDescription() != null
                && todo.getDueDate() != null;
        }
    }
}

@Target({TYPE, PARAMETER})
@Retention(RUNTIME)
@Constraint(validatedBy = ValidForModification.Validator.class)
public @interface ValidForModification {
    //...
    class Validator implements ConstraintValidator<ValidForModification, Todo> {
        //...
        @Override
        public boolean isValid(Todo todo, ConstraintValidatorContext constraintValidatorContext) {
            return todo != null
                && todo.getId() == null
                && (todo.getDescription() != null || todo.getDueDate() != null || todo.isCompleted() != null);
        }
    }
}

And now you can move the validation annotations to the definition of a REST endpoint:

@POST
@Consumes(APPLICATION_JSON)
public Response create(@ValidForCreation Todo todo) {...}

@PUT
@Consumes(APPLICATION_JSON)
public Response update(@ValidForModification Todo todo) {...}

And now you can remove those @NotNulls from your model.

Integration testing

There are in general two approaches to integration testing:

- the test is run on a separate JVM from the app, which is deployed on some other integration environment
- the test deploys the application programmatically in its setup block

Both of these have their pros and cons, but for small enough services, I personally prefer the second approach.
It's much easier to set up and you have only one JVM started, which makes debugging really easy. You can use a generic framework like Arquillian for starting your application in a container environment, but I prefer simple solutions and just use embedded Jetty. To make the test setup 100% production-equivalent, I'm creating Jetty's full WebAppContext and have to resolve all runtime dependencies for Jersey auto-discovery to work. This can be simply achieved with the Maven resolver from ShrinkWrap – an Arquillian subproject:

WebAppContext webAppContext = new WebAppContext();
webAppContext.setResourceBase("src/main/webapp");
webAppContext.setContextPath("/");
File[] mavenLibs = Maven.resolver().loadPomFromFile("pom.xml")
    .importCompileAndRuntimeDependencies()
    .resolve().withTransitivity().asFile();
for (File file: mavenLibs) {
    webAppContext.getMetaData().addWebInfJar(new FileResource(file.toURI()));
}
webAppContext.getMetaData().addContainerResource(new FileResource(new File("./target/classes").toURI()));

webAppContext.setConfigurations(new Configuration[] {
    new AnnotationConfiguration(),
    new WebXmlConfiguration(),
    new WebInfConfiguration()
});
server.setHandler(webAppContext);

(this Stack Overflow thread inspired me a lot here) Now it's time for the last part of the post: parametrizing our integration tests. Since we want to test validation constraints, there are many edge paths to check (to make your code coverage close to 100%). Writing one test per case could be a bad idea. Among the many solutions for JUnit, I'm most convinced by JUnitParams from the Pragmatists team. It's really simple and has a nice concept of a jQuery-like helper for creating providers.
Here is my test code (I'm also using the builder pattern here to create various kinds of Todos):

@Test
@Parameters(method = "provideInvalidTodosForCreation")
public void shouldRejectInvalidTodoWhenCreate(Todo todo) {
    Response response = createTarget().request().post(Entity.json(todo));

    assertThat(response.getStatus()).isEqualTo(BAD_REQUEST.getStatusCode());
}

private static Object[] provideInvalidTodosForCreation() {
    return $(
        new TodoBuilder().withDescription("test").build(),
        new TodoBuilder().withDueDate(DateTime.now()).build(),
        new TodoBuilder().withId(123L).build(),
        new TodoBuilder().build()
    );
}

OK, enough of reading, feel free to clone the project and start writing your REST services!

Reference: Integration testing custom validation constraints in Jersey 2 from our JCG partner Piotr Jagielski at the Full stack JVM development… blog.
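As a footnote to the constraints section above: the creation/modification validators are plain predicates over the Todo fields, so their decision logic can also be exercised without any JAX-RS or Bean Validation wiring at all. A self-contained sketch of that logic (the nested Todo class here is a stripped-down hypothetical stand-in for the real entity, using java.time instead of Joda-Time):

```java
import java.time.LocalDate;

public class TodoValidationSketch {

    // Stripped-down stand-in for the real Todo entity.
    static class Todo {
        Long id;
        String description;
        Boolean completed;
        LocalDate dueDate; // the real entity uses Joda-Time's DateTime
    }

    // The predicate encoded by ValidForCreation.Validator.isValid(...):
    // a new Todo must have no id yet, plus a description and a due date.
    static boolean validForCreation(Todo todo) {
        return todo != null
            && todo.id == null
            && todo.description != null
            && todo.dueDate != null;
    }

    // The predicate encoded by ValidForModification.Validator.isValid(...):
    // at least one of the three mutable fields must be present.
    static boolean validForModification(Todo todo) {
        return todo != null
            && todo.id == null
            && (todo.description != null || todo.dueDate != null || todo.completed != null);
    }

    public static void main(String[] args) {
        Todo create = new Todo();
        create.description = "write post";
        create.dueDate = LocalDate.now().plusDays(10);
        System.out.println(validForCreation(create));   // complete creation payload

        Todo update = new Todo();
        update.completed = Boolean.TRUE;                 // marking as done is enough to modify
        System.out.println(validForModification(update));
        System.out.println(validForCreation(update));    // but not enough to create
    }
}
```

Testing the predicates this way complements, rather than replaces, the integration tests: the unit level pins down the boolean logic, while the parametrized HTTP tests confirm the constraints are actually wired into the endpoints.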

Postgres and Oracle compatibility with Hibernate

There are situations where your JEE application needs to support both Postgres and Oracle as a database. Hibernate should do the job here; however, there are some specifics worth mentioning. While enabling Postgres for an application already running Oracle, I came across the following tricky parts:

- BLOBs support,
- CLOBs support,
- Oracle not knowing the Boolean type (using Integer instead) and
- the DUAL table.

These were the tricks I had to apply to make the @Entity classes run on both of these. Please note I've used Postgres 9.3 with Hibernate 4.2.1.SP1.

BLOBs support

The problem with Postgres is that it offers 2 types of BLOB storage:

- bytea – data stored in the table
- oid – the table holds just an identifier to data stored elsewhere

I guess in most situations you can live with bytea, as I did. The other one, as far as I've read, is to be used for some huge data (in gigabytes), as it supports streams for IO operations. Well, it sounds nice that there is such support, however using Hibernate in this case can make things quite problematic (due to the need to use specific annotations), especially if you try to achieve compatibility with Oracle. To see the trouble here, see StackOverflow: proper hibernate annotation for byte[]. All the combinations are described there:

annotation                postgres     oracle      works on
-------------------------------------------------------------
byte[] + @Lob             oid          blob        oracle
byte[]                    bytea        raw(255)    postgresql
byte[] + @Type(PBA)       oid          blob        oracle
byte[] + @Type(BT)        bytea        blob        postgresql

where @Type(PBA) stands for @Type(type="org.hibernate.type.PrimitiveByteArrayBlobType") and @Type(BT) stands for @Type(type="org.hibernate.type.BinaryType").
These result in all sorts of Postgres errors, like:

ERROR: column "foo" is of type oid but expression is of type bytea

or

ERROR: column "foo" is of type bytea but expression is of type oid

Well, there seems to be a solution, still it includes patching the Hibernate library (something I see as the last option when playing with a 3rd-party library). There is also a reference to an official blog post from the Hibernate guys on the topic: PostgreSQL and BLOBs. Still, the solution described in the blog post seems not to work for me and, based on the comments, seems to be invalid for more people.

BLOBs solved

OK, so now the optimistic part. After quite some debugging I ended up with an Entity definition like this:

@Lob
private byte[] foo;

Oracle has no trouble with that; moreover, I had to customize the Postgres dialect in this way:

public class PostgreSQLDialectCustom extends PostgreSQL82Dialect {

    @Override
    public SqlTypeDescriptor remapSqlTypeDescriptor(SqlTypeDescriptor sqlTypeDescriptor) {
        if (sqlTypeDescriptor.getSqlType() == java.sql.Types.BLOB) {
            return BinaryTypeDescriptor.INSTANCE;
        }
        return super.remapSqlTypeDescriptor(sqlTypeDescriptor);
    }
}

That's it! Quite simple, right? That works for persisting to bytea-typed columns in Postgres (as that fits my use case).

CLOBs support

The errors in misconfiguration looked something like this:

org.postgresql.util.PSQLException: Bad value for type long : ...

So first I found (in String LOBs on PostgreSQL with Hibernate 3.6) the following solution:

@Lob
@Type(type = "org.hibernate.type.TextType")
private String foo;

Well, that works, but for Postgres only. Then there was a suggestion (on StackOverflow: Postgres UTF-8 clobs with JDBC) to go for:

@Lob
@Type(type="org.hibernate.type.StringClobType")
private String foo;

That pointed me in the right direction (the funny part was that it was just a comment to some answers). It was quite close, but didn't work for me in all cases, still resulted in errors in my tests.
CLOBs solved

The important hint was the @deprecated javadoc in org.hibernate.type.StringClobType that brought me to the working one:

@Lob
@Type(type="org.hibernate.type.MaterializedClobType")
private String foo;

That works for both Postgres and Oracle, without any further hacking (on the Hibernate side) needed.

Boolean type

Oracle knows no Boolean type, and the trouble is that Postgres does. As there was also some plain SQL present, I ended up in Postgres with the error:

ERROR: column "foo" is of type boolean but expression is of type integer

I decided to enable the cast from Integer to Boolean in Postgres rather than fixing all the plain SQL places (in a way found in Forum: Automatically Casting From Integer to Boolean):

update pg_cast set castcontext = 'i' where oid in (
    select c.oid
    from pg_cast c
    inner join pg_type src on src.oid = c.castsource
    inner join pg_type tgt on tgt.oid = c.casttarget
    where src.typname like 'int%' and tgt.typname like 'bool%');

Please note you should run the SQL update as a user with privileges to update catalogs (probably not the postgres user used for the DB connection from your application), as I learned on StackOverflow: Postgres – permission denied on updating pg_catalog.pg_cast.

DUAL table

There is one more specific of Oracle I came across. If you have plain SQL, in Oracle there is a DUAL table provided (see more info on Wikipedia) that might harm you in Postgres. Still, the solution is simple. In Postgres, create a view that fulfills a similar purpose. It can be created like this:

create or replace view dual as select 1;

Conclusion

Well, that should be it. Enjoy your cross-DB compatible JEE apps.

Reference: Postgres and Oracle compatibility with Hibernate from our JCG partner Peter Butkovic at the pb's blog about life and IT blog.

Event processing in camel-drools

In a previous post about camel-drools I introduced the camel-drools component and implemented a simple task-oriented process using rules inside a Camel route. Today I'll show how to extend this example by adding event processing. So how do we describe an event? Each event occurs at some point in time and lasts for some duration, and events happen in some particular order. We then have a 'cloud of events' from which we want to identify those that form some interesting correlations. And here the usage of Drools becomes reasonable – we don't have to react to each event, just describe a set of rules and consequences for the interesting correlations. The Drools engine will find them and fire the matching rules.

Suppose our system has to monitor the execution of tasks assigned to users. After a task is created, the user has 10 days to complete it. When he doesn't – an e-mail reminder should be sent. The rule definition may look like this:

```
import org.apache.camel.component.drools.stateful.model.*

global org.apache.camel.component.drools.CamelDroolsHelper helper

declare TaskCreated
    @role( event )
    @expires( 365d )
end

declare TaskCompleted
    @role( event )
    @expires( 365d )
end

rule "Task not completed after 10 days"
when
    $t : TaskCreated()
    not(TaskCompleted(name==$t.name, this after [-*, 10d] $t))
then
    helper.send("direct:escalation", $t.getName());
end
```

As you can see, there are two types of events: TaskCreated – when the system assigns a task to a user, and TaskCompleted – when the user finishes the task. We correlate those two by the 'name' property. First, we need to declare our model classes as events by adding the @role(event) and @expires annotations. Then we describe the rule: 'when there is no TaskCompleted event within 10 days of a TaskCreated event, send the task name to the direct:escalation route'. Again, this is an example of declarative programming – we don't even have to specify the actual names of tasks, just correlate TaskCreated with TaskCompleted events by name. In this example, I used the 'after' temporal operator.
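The rule imports its event model from org.apache.camel.component.drools.stateful.model, but the post doesn't show those classes. For the 'name == $t.name' correlation to work, minimal POJOs along these lines suffice (a sketch for illustration, not the component's actual source):

```java
// Minimal event model assumed by the rule: each event carries the task name
// that the correlation constraint matches on.
class TaskCreated {
    private final String name;

    TaskCreated(String name) {
        this.name = name;
    }

    // Drools resolves the 'name' property through this getter
    public String getName() {
        return name;
    }
}

class TaskCompleted {
    private final String name;

    TaskCompleted(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}
```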
For a description of the others – see the Drools Fusion documentation. And finally, here is the JUnit test code snippet:

```java
public class TaskEventsTest extends GenericTest {

    DefaultCamelContext ctx;

    @Test
    public void testCompleted() throws Exception {
        insertAdvanceDays(new TaskCreated("Task1"), 4);
        assertContains(0);
        insertAdvanceDays(new TaskCompleted("Task1"), 4);
        advanceDays(5);
        assertContains(0);
    }

    @Test
    public void testNotCompleted() throws Exception {
        insertAdvanceDays(new TaskCreated("Task1"), 5);
        assertContains(0);
        advanceDays(5);
        assertContains("Task1");
    }

    @Test
    public void testOneNotCompleted() throws Exception {
        ksession.insert(new TaskCreated("Task1"));
        insertAdvanceDays(new TaskCreated("Task2"), 5);
        assertContains(0);
        insertAdvanceDays(new TaskCompleted("Task1"), 4);
        assertContains(0);
        advanceDays(1);
        assertContains("Task2");
        advanceDays(10);
        assertContains("Task2");
    }

    @Override
    protected void setUpResources(KnowledgeBuilder kbuilder) throws Exception {
        kbuilder.add(new ReaderResource(new StringReader(
                IOUtils.toString(getClass()
                        .getResourceAsStream("/stateful/task-event.drl")))),
                ResourceType.DRL);
    }

    @Override
    public void setUpInternal() throws Exception {
        this.ctx = new DefaultCamelContext();
        CamelDroolsHelper helper = new CamelDroolsHelper(ctx, new DefaultExchange(ctx)) {
            public Object send(String uri, Object body) {
                sentStuff.add(body.toString());
                return null;
            }
        };
        ksession.setGlobal("helper", helper);
    }
}
```

You can find the source code for this example here. Reference: Event processing in camel-drools from our JCG partner Piotr Jagielski at the Full stack JVM development… blog. ...
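Stripped of the Drools machinery, the condition the rule and tests exercise can be paraphrased in plain Java – a toy sketch of the logic only (all names here are made up; none of this is the component's code):

```java
import java.util.*;

// Toy sketch of the rule's logic: a task escalates when no completion
// event arrives within 10 days of its creation event.
class OverdueCheck {
    static final long TEN_DAYS_MS = 10L * 24 * 60 * 60 * 1000;

    // created:   task name -> creation timestamp (ms)
    // completed: task name -> completion timestamp (ms)
    static List<String> overdue(Map<String, Long> created,
                                Map<String, Long> completed,
                                long now) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, Long> e : created.entrySet()) {
            Long done = completed.get(e.getKey());
            // completed within the 10-day window counts as on time
            boolean completedInTime = done != null && done - e.getValue() <= TEN_DAYS_MS;
            if (!completedInTime && now - e.getValue() > TEN_DAYS_MS) {
                result.add(e.getKey());
            }
        }
        return result;
    }
}
```

Drools earns its keep precisely because you do not have to write and schedule this bookkeeping yourself – the engine watches the event cloud and fires the rule when the window elapses.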
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
