Maven Integration Testing And Spring Restful Services

Introduction

My original blog post showed how to separate Maven unit and integration tests using a very simple example: http://johndobie.blogspot.com/2011/06/seperating-maven-unit-integration-tests.html Since then a lot of people have asked me for a more realistic example than the one used originally. This post shows how you split your unit and integration tests using the original method in a realistic environment, where the application is actually deployed to a server.

We use Maven to build and unit test some Spring-based RESTful web services. We then use the Maven Jetty plugin to start a web server and deploy them to it. We create an in-memory database and create the schema. Finally we run all of the integration tests in the separate \src\integrationtest\java directory.

This article is aimed squarely at showing how to use Maven in a realistic way to start and deploy a set of services to a running server before running your integration tests. It is not about the subtle details of REST or Spring MVC. I'll cover those lightly enough to build a working application, whilst providing references to more in-depth articles for those that want more detail.

Running the Example

The full code is hosted at Google Code. Use the following commands to check it out and run it. Make sure you have nothing running on port 8080 before running the tests.

svn co https://designbycontract.googlecode.com/svn/trunk/examples/maven/spring-rest-example
cd spring-rest-example
mvn clean install -Pit,jetty

You can see the full build on the following CloudBees-hosted Jenkins instance: https://designbycontract.ci.cloudbees.com/job/spring-rest-example/

Results of running the example: the tests in the standard Maven test structure are run during the unit test phase as usual. A Jetty web server is started. The war containing the web services is deployed to the server. The HSQLDB in-memory database is started and the schema created.
The tests in the \src\integrationtest\java directory are run during the integration test phase. The server is shut down.

How to create the Spring Service class

The trade service is very simple. It uses a repository to create and find trades. I haven't included exceptions to keep the whole thing as simple as possible. The only trick here is to add the @Service annotation; otherwise it is straight Java.

@Service
public class SimpleTradeService implements TradeService {

    @Autowired
    TradeRepository tradeRepository;

    public SimpleTradeService(TradeRepository tradeRepository) {
        this.tradeRepository = tradeRepository;
    }

    @Override
    public Long createTrade(Trade t) {
        Long id = tradeRepository.createTrade(t);
        return id;
    }

    @Override
    public Trade getTradeById(Long id) {
        return tradeRepository.getTradeById(id);
    }
}

How to create the Database repository class

The above service uses a trade repository to create and find trades. We use the Spring class HibernateDaoSupport to create this class and keep things simple. By extending this class we simply need to create our trade object class and define our database details in the Spring config. All of the other details are taken care of by the framework.

public class HibernateTradeRepository extends HibernateDaoSupport implements TradeRepository {

    @Override
    public Trade getTradeByReference(String reference) {
        throw new RuntimeException();
    }

    @Override
    public Long createTrade(Trade trade) {
        return (Long) getHibernateTemplate().save(trade);
    }

    @Override
    public Trade getTradeById(Long id) {
        return getHibernateTemplate().get(Trade.class, id);
    }
}

How to create the Database Trade Class

We use standard JPA annotations to define our database trade object:

@Entity
public class Trade {

    @Id
    private long id;

The @Entity annotation marks the object as a database entity. The @Id annotation shows which field we want to be our table primary key. For the rest of the fields we use default behaviour, so no other annotations are required.
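Before wiring anything into Spring, the service can be exercised in plain Java against an in-memory stub repository. The sketch below re-declares minimal versions of the Trade, TradeRepository and service classes so it compiles on its own (the real ones live in the project); the InMemoryTradeRepository stub and the roundTrip helper are my own illustrative names, not part of the article's code.

```java
import java.util.HashMap;
import java.util.Map;

public class TradeServiceSketch {

    // Minimal stand-ins for the article's classes so this file is self-contained.
    static class Trade {
        private final long id;
        Trade(long id) { this.id = id; }
        long getId() { return id; }
    }

    interface TradeRepository {
        Long createTrade(Trade t);
        Trade getTradeById(Long id);
    }

    // Hypothetical stub repository backed by a HashMap instead of Hibernate.
    static class InMemoryTradeRepository implements TradeRepository {
        private final Map<Long, Trade> store = new HashMap<>();
        public Long createTrade(Trade t) { store.put(t.getId(), t); return t.getId(); }
        public Trade getTradeById(Long id) { return store.get(id); }
    }

    static class SimpleTradeService {
        private final TradeRepository tradeRepository;
        SimpleTradeService(TradeRepository tradeRepository) { this.tradeRepository = tradeRepository; }
        Long createTrade(Trade t) { return tradeRepository.createTrade(t); }
        Trade getTradeById(Long id) { return tradeRepository.getTradeById(id); }
    }

    // Create a trade with the given id and read it back through the service.
    static long roundTrip(long id) {
        SimpleTradeService service = new SimpleTradeService(new InMemoryTradeRepository());
        Long created = service.createTrade(new Trade(id));
        return service.getTradeById(created).getId();
    }

    public static void main(String[] args) {
        System.out.println("round trip for trade 42: " + roundTrip(42L));
    }
}
```

This keeps the fast unit tests entirely out of the integration-test phase: only the RestTemplate tests need the running Jetty server.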
How to Configure the Database

For this example we are going to use HSQLDB (http://hsqldb.org/) to create our database. A new instance will be created every time we start the server. To set up the database all we have to do is define it in the Spring config trade-servlet.xml:

<bean id="sessionFactory"
      class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
  <property name="packagesToScan" value="com.dbc.model" />
  <property name="hibernateProperties">
    <props>
      <prop key="hibernate.show_sql">true</prop>
      <prop key="hibernate.format_sql">true</prop>
      <prop key="hibernate.transaction.factory_class">
        org.hibernate.transaction.JDBCTransactionFactory
      </prop>
      <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect</prop>
      <prop key="hibernate.connection.pool_size">0</prop>
      <prop key="hibernate.connection.driver_class">org.hsqldb.jdbcDriver</prop>
      <prop key="hibernate.connection.url">
        jdbc:hsqldb:target/data/tradedatabase;shutdown=true
      </prop>
      <prop key="hibernate.connection.username">sa</prop>
      <prop key="hibernate.connection.password"></prop>
      <prop key="hibernate.connection.autocommit">true</prop>
      <prop key="hibernate.jdbc.batch_size">0</prop>
      <prop key="hibernate.hbm2ddl.auto">update</prop>
    </props>
  </property>
</bean>

The session factory defines our database connection details. The most important property is:

<prop key="hibernate.hbm2ddl.auto">update</prop>

This property tells Hibernate to update the database when the application starts. It effectively creates the table for the trade object from the annotations on our Trade class. When you run the tests, you will see that the following SQL is executed on startup:

11:30:31,899 DEBUG org.hibernate.tool.hbm2ddl.SchemaUpdate SchemaUpdate:203 - create table Trade (id bigint not null, description varchar(255), reference varchar(255), primary key (id))

That's a new database set up and ready to go.

Creating The Restful Interface

I'm just going to cover the basics here.
For some great examples follow these links:
http://blog.springsource.com/2009/03/08/rest-in-spring-3-mvc/
http://www.stupidjavatricks.com/?p=54

How to Create the Spring Controller

The Spring controller is the key to this whole example. It is the controller that takes our requests and passes them to the trade service for processing. It defines the RESTful interface. We use @PathVariable to make things simple.

@RequestMapping(value = "/create/trade/{id}")
public ModelAndView createTrade(@PathVariable Long id) {
    Trade trade = new Trade(id);
    service.createTrade(trade);
    ModelAndView mav = new ModelAndView("tradeView",
            BindingResult.MODEL_KEY_PREFIX + "trade", trade);
    return mav;
}

@RequestMapping(value = "/find/trade/{id}")
public ModelAndView findTradeById(@PathVariable Long id) {
    Trade trade = service.getTradeById(id);
    ModelAndView mav = new ModelAndView("tradeView",
            BindingResult.MODEL_KEY_PREFIX + "trade", trade);
    return mav;
}

It works quite simply by populating the @PathVariable id with the value from /find/trade/{id}. For example, requesting /find/trade/1 will populate id with "1", and requesting /find/trade/29 will populate id with "29". More information can be found here: http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/mvc.html#mvc-ann-requestmapping-uri-templates

How to configure the Web Application

The configuration of the web application in web.xml is very straightforward. First we register the Spring DispatcherServlet (org.springframework.web.servlet.DispatcherServlet) under the servlet name trade. Next we define a servlet mapping that passes all requests (/*) to that servlet.

How to Configure Spring

The Spring configuration consists of a number of distinct elements. The first line simply tells Spring where to look for annotations. The BeanNameViewResolver takes the view name and resolves it to a view bean. A further, scary-looking piece of XML does the job of making sure that the Trade object is returned as XML.
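Spring resolves the {id} placeholder internally; purely to illustrate the idea of URI-template binding (this is a sketch of the concept, not Spring's actual implementation), a template like /find/trade/{id} can be matched against a request path with a regex and a named group:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PathVariableSketch {

    // Convert "/find/trade/{id}" into a regex with a named group and
    // extract the value bound to {id}, or return null if the path doesn't match.
    static String extractId(String template, String requestPath) {
        String regex = template.replace("{id}", "(?<id>[^/]+)");
        Matcher m = Pattern.compile(regex).matcher(requestPath);
        return m.matches() ? m.group("id") : null;
    }

    public static void main(String[] args) {
        System.out.println(extractId("/find/trade/{id}", "/find/trade/29"));
    }
}
```

Running this prints the bound value for the /find/trade/29 request, which is exactly what Spring hands to the controller method as the id argument.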
XStream will take the object and automatically convert it to an XML format. The Trade class defines the XStream annotation for this:

@XStreamAlias("trade")
public class Trade {

In our case you can see from the test what we get back from /search/trade/1.

How to start and stop the Jetty Server

I use the Jetty plugin to start the server and deploy the war file containing the services. http://docs.codehaus.org/display/JETTY/Maven+Jetty+Plugin

The server is started with the following snippet from pom.xml:

<execution>
  <id>start-jetty</id>
  <phase>pre-integration-test</phase>
  <goals>
    <goal>run</goal>
  </goals>
</execution>

The server is stopped with the following snippet from pom.xml:

<execution>
  <id>stop-jetty</id>
  <phase>post-integration-test</phase>
  <goals>
    <goal>stop</goal>
  </goals>
</execution>

How to run the Integration Tests

The integration tests are run using Failsafe, as described in the original article: http://johndobie.blogspot.com/2011/06/seperating-maven-unit-integration-tests.html

We use the new Spring RestTemplate to make the call to the service easy.

@Test
public void testGetTradeFromRestService() throws Exception {
    long id = 10L;
    createTrade(id);
    String tradeXml = new RestTemplate()
            .getForObject("http://localhost:8080/find/trade/{id}",
                    String.class, id);
    System.out.println(tradeXml);
    Trade trade = getTradeFromXml(tradeXml);
    assertEquals(trade.getId(), id);
}

Reference: Maven Integration Testing And Spring Restful Services from our JCG partner John Dobie at the Agile Engineering Techniques blog.

JavaFX-Based SimpleDateFormat Demonstrator

One of the things that can be a little tricky for developers new to Java, or even for experienced Java developers new to formatting with Java Dates, is the specification of a date/time format using SimpleDateFormat. The class-level Javadoc-based documentation for SimpleDateFormat is pretty thorough in its coverage of patterns representing various components of a date/time. However, unless one carefully reads and understands these various patterns, it can be tricky to remember the difference between lowercase 'd' for day in the month and uppercase 'D' for day in the year, or to remember whether it's lowercase 'm' or uppercase 'M' that is used for months versus minutes. In this post, I look at a simple application written in JavaFX that allows a developer to quickly try arbitrary patterns to see how SimpleDateFormat will render the current date/time given the arbitrary pattern. In theory, a developer could use this simple tool to quickly determine the effect of his or her date/time pattern, but it's really more of an excuse to apply JavaFX. The code listing below contains the complete JavaFX 2.x-based application.

package dustin.examples;

import java.text.SimpleDateFormat;
import java.util.Date;
import javafx.application.Application;
import javafx.event.EventHandler;
import javafx.geometry.Pos;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.TextField;
import javafx.scene.control.TextFieldBuilder;
import javafx.scene.input.MouseEvent;
import javafx.scene.layout.Pane;
import javafx.scene.layout.VBox;
import javafx.scene.paint.Color;
import javafx.stage.Stage;

/**
 * JavaFX application allowing for testing and demonstration of various String
 * formats for date/time.
 *
 * @author Dustin
 */
public class DateTimeStringFormatDemonstrator extends Application
{
   /**
    * Generate the application's main pane.
    *
    * @return Main pane for the application.
    */
   private Pane generateMainPane()
   {
      final VBox vbox = new VBox();
      final TextField dateTimeFormatField =
         TextFieldBuilder.create().prefWidth(350).alignment(Pos.CENTER)
                         .promptText("Enter DateFormat")
                         .build();
      vbox.getChildren().add(dateTimeFormatField);
      final TextField formattedDateField =
         TextFieldBuilder.create().prefWidth(350).alignment(Pos.BASELINE_CENTER)
                         .promptText("Date Output Goes Here").build();
      formattedDateField.setEditable(false);
      final Button applyButton = new Button("Apply Format");
      applyButton.setPrefWidth(350);
      applyButton.setOnMousePressed(
         new EventHandler<MouseEvent>()
         {
            @Override
            public void handle(MouseEvent mouseEvent)
            {
               try
               {
                  final SimpleDateFormat sdf =
                     new SimpleDateFormat(dateTimeFormatField.getText());
                  formattedDateField.setText(sdf.format(new Date()));
                  formattedDateField.setAlignment(Pos.CENTER);
               }
               catch (Exception ex)
               {
                  formattedDateField.setText("ERROR");
                  formattedDateField.setAlignment(Pos.CENTER);
               }
               formattedDateField.setAlignment(Pos.BASELINE_CENTER);
            }
         });
      vbox.getChildren().add(applyButton);
      vbox.getChildren().add(formattedDateField);
      return vbox;
   }

   /**
    * The method overridden from Application for starting the application.
    *
    * @param stage Primary stage.
    * @throws Exception Exceptions thrown during execution of the JavaFX application.
    */
   @Override
   public void start(final Stage stage) throws Exception
   {
      stage.setTitle("JavaFX Date/Time String Format Presenter");
      final Group group = new Group();
      group.getChildren().add(generateMainPane());
      final Scene scene = new Scene(group, 350, 65, Color.DARKKHAKI);
      stage.setScene(scene);
      stage.show();
   }

   /**
    * Main function for running date/time format JavaFX application.
    *
    * @param arguments Command-line arguments; none expected.
    */
   public static void main(final String[] arguments)
   {
      Application.launch(arguments);
   }
}

The simple JavaFX 2-based application shown above makes it easy to try out different date/time format patterns to see what SimpleDateFormat will do with each.
A series of these, used on the evening of Tuesday, 8 May 2012, are shown next. These examples demonstrate several key aspects of using SimpleDateFormat:

Uppercase 'M' is used for months while lowercase 'm' is used for minutes.
The number of 'M' characters determines the month's representation (example: 5, 05, or 'May' for May).
Uppercase 'D' is for the number of the day of the year (since January 1) while lowercase 'd' is the number of the day of the month (since May 1 in this case).
Two 'y' or 'Y' digits represent a 2-digit year, but 3 or 4 'y' or 'Y' digits can be used for a 4-digit year.

The simple example highlighted in this blog post demonstrates the simplicity of JavaFX and provides an example of how JavaFX can provide graphical interfaces to make Java applications more intuitive. As part of this, mouse event handling in JavaFX and the common JavaFX idiom of using builders are both demonstrated. A practical use of this application is to quickly and easily determine the representation that is provided by SimpleDateFormat for a given pattern. Reference: JavaFX-Based SimpleDateFormat Demonstrator from our JCG partner Dustin Marx at the Inspired by Actual Events blog.
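The same pattern behaviour the application demonstrates interactively can be checked in a few lines of plain Java. This sketch pins the date to 8 May 2012 (the date used for the screenshots above) so the output is deterministic; the FormatDemo class and formatMay8 helper are my own names for illustration:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.Locale;

public class FormatDemo {

    // Format 8 May 2012 with the given SimpleDateFormat pattern.
    static String formatMay8(String pattern) {
        Date d = new GregorianCalendar(2012, Calendar.MAY, 8).getTime();
        return new SimpleDateFormat(pattern, Locale.US).format(d);
    }

    public static void main(String[] args) {
        System.out.println(formatMay8("dd/MM/yyyy")); // day of month / month / year
        System.out.println(formatMay8("MMM"));        // month name
        System.out.println(formatMay8("DDD"));        // day of year (2012 is a leap year)
    }
}
```

For 8 May 2012 the day-of-year pattern "DDD" yields 129 (31 + 29 + 31 + 30 + 8), which is exactly the uppercase-'D' versus lowercase-'d' distinction listed above.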

Gradle archetype for Spring applications

I am releasing a Gradle archetype useful for creating Java/Groovy applications based on the Spring framework. Of course, it is not a real archetype, because such a creation is not possible. However, with very few steps you can create, edit and deploy an application server. It should be a most accommodating starting point for deployable software projects. This release is an attempt to mitigate common issues related to development life-cycle phases such as testing, running the application and deployment in various environments. The archetype leverages a flexible build process and a fully featured IoC (Inversion of Control) management system. When creating application modules for linking services through HTTP, JMS or any other connector type, this archetype can be applied to satisfy these requirements:

Automatic testing, building and continuous integration.
A different configuration for each environment (development, integration, production).
A Spring-framework-based system.
Groovy support.

The project consists of:

Utility classes for the given Spring context.
A Grails-like DSL for Spring setup (beans.groovy).
Logging and application configuration properties for each environment (development/integration/production).
A Gradle config file.

Why Gradle? Problems exist using Maven in Groovy projects due to the gmaven plugin, which may indicate that it is not ready for the Groovy user community. Indeed, Gradle works perfectly on Groovy projects. It is so concise and elastic that you don't just have a build system, you have a programming tool. When a proper plugin for customized behaviour cannot be found in the registry, you may add custom tasks by writing Groovy code directly in the build.gradle descriptor. Gradle is a swiss army knife for developers.

Getting started

Run git clone git@github.com:gfrison/proto-app.git myApp, where myApp is the name of your project. Edit the property 'projectName' in build.gradle with the project name.
Add classes, and manage them with the Spring beans.groovy. You are now ready to test, run and deploy your project through a continuous integration system such as Jenkins.

If you have suggestions, or pull requests on GitHub, I, as the author, would be happy to consider them. Reference: Gradle archetype for Spring applications from our JCG partner Giancarlo Frison at the Making Things Simple Through The Complex blog.

Java Memcached on Mac OS X

Introduction

In this article I will explain how you can:

Install and configure Memcached on Mac OS X.
Use Memcached in your Java application.

I won't go into too much detail about the benefits of using a distributed cache in your applications, but let's at least provide some use cases for applications that are running in the context of an enterprise portal, eXo Platform in my case – surprising, isn't it? I will show this in another post. We have many reasons to use a cache (distributed or not) in the context of an enterprise portal; let's take a look at some of them:

A portal is used to aggregate data in a single page. These data could come from different sources: web services, databases, ERPs, ... and accessing the data in real time could be costly. So it is quite interesting to cache the result of the call when possible.
If the portal is used to aggregate many data from many sources, it is sometimes necessary to jump into another application to continue some operation. A distributed and shared cache could be used to manage some context between different applications running in different processes (JVMs or even technologies).

These are two examples where a shared cache could be interesting for your portal-based applications; we can find many other reasons. Note that the Portlet API (JSR-286) already contains a cache mechanism that caches the HTML fragment, and that eXo Platform also provides a low-level cache based on JBoss Cache.

Installation and Configuration

Installing Memcached from sources

You can find some information about Memcached installation on the Memcached wiki. The following steps are the steps that I have used on my environment. As far as I know, Memcached is not available as a package for Mac OS X. I am still on Snow Leopard (10.6.8), and I have installed Xcode and all development tools. I have used the article "Installing memcached 1.4.1 on Mac OS X 10.6 Snow Leopard" from wincent.com.
For simplicity I have duplicated the content and updated it to the latest releases.

1. Create a working directory:

$ mkdir memcachedbuild
$ cd memcachedbuild

2. Install libevent, which is mandatory for memcached:

$ curl -O http://www.monkey.org/~provos/libevent-1.4.14-stable.tar.gz
$ tar xzvf libevent-1.4.14-stable.tar.gz
$ cd libevent-1.4.14-stable
$ ./configure
$ make
$ make verify
$ sudo make install

3. Install memcached. Go back to your install directory (memcachedbuild):

$ curl -O http://memcached.googlecode.com/files/memcached-1.4.10.tar.gz
$ tar xzvf memcached-1.4.10.tar.gz
$ cd memcached-1.4.10
$ ./configure
$ make
$ make test
$ sudo make install

You are now ready to use memcached, which is available at /usr/local/bin/memcached. This allows you to avoid changing the pre-installed memcached located in /usr/bin. If you want to replace it instead of having your own install, just run the configure command with the following parameter: ./configure --prefix=/usr

Starting and testing Memcached

Start the memcached server using the following command line:

$ /usr/local/bin/memcached -d -p 11211

This command starts the memcached server as a daemon (-d parameter) on TCP port 11211 (this is the default value). You can find out more about the memcached command using man memcached. It is possible to connect to and test your server using a telnet connection. Once connected you can set and get objects in the cache; take a look at the following paragraph.

$ telnet 11211
Trying
Connected to tgrall-server.
Escape character is '^]'.
set KEY 0 600 16
This is my value
STORED
get KEY
VALUE KEY 0 16
This is my value
END

The set command allows you to put a new value in the cache using the following syntax:

set <key> <flags> <expiration_time> <number_of_bytes> [noreply]
<value>

key: the key used to store the data in the cache.
flags: a 32-bit unsigned integer that memcached stores with the data.
expiration_time: expiration time in seconds; if you put 0 this means no delay.
number_of_bytes: number of bytes in the data block.
noreply: option to tell the server not to return any value.
value: the value to store and associate with the key.

This is a short view of the documentation located in your source directory /memcachedbuild/memcached-1.4.10/doc/protocol.txt. The get command allows you to access the value that is associated with the key. You can check the version of memcached you are running by calling the stats command in your telnet session. Your memcached server is up and running; you can now start to use it inside your applications.

Simple Java Application with Memcached

The easiest way to use memcached from your Java applications is to use a client library. You can find many client libraries. In this example I am using spymemcached, developed by the people from Couchbase.

1. Adding SpyMemcached to your Maven project

Add the repository to your pom.xml (or your settings.xml):

<repository>
  <id>spy</id>
  <name>Spy Repository</name>
  <layout>default</layout>
  <url>http://files.couchbase.com/maven2/</url>
</repository>

Then the dependency to your pom.xml:

<dependency>
  <groupId>spy</groupId>
  <artifactId>spymemcached</artifactId>
  <version>2.7.3</version>
</dependency>

2. Use the SpyMemcached client in your application

The following code is a simple Java class that allows you to enter a key and a value and set it in the cache.
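The trickiest part of typing a set by hand is the byte count, which must equal the encoded length of the value. A small sketch (my own helper, not part of the memcached distribution) that builds the command line for you:

```java
import java.nio.charset.StandardCharsets;

public class MemcachedSetCommand {

    // Build the text-protocol "set" line; <number_of_bytes> must match the
    // encoded length of the value or the server will reject the data block.
    static String setCommand(String key, int flags, int expirationSeconds, String value) {
        int bytes = value.getBytes(StandardCharsets.UTF_8).length;
        return "set " + key + " " + flags + " " + expirationSeconds + " " + bytes;
    }

    public static void main(String[] args) {
        // "This is my value" is 16 bytes, matching the telnet session above.
        System.out.println(setCommand("KEY", 0, 600, "This is my value"));
    }
}
```

Note that for multi-byte characters the byte count differs from the character count, which is why the helper measures the UTF-8 encoded length rather than calling String.length().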
package com.grallandco.blog;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.Console;
import java.io.InputStreamReader;
import java.util.Date;
import java.util.logging.Level;
import java.util.logging.Logger;
import net.spy.memcached.AddrUtil;
import net.spy.memcached.MemcachedClient;

public class Test {

    public static void main(String[] args) {
        try {
            System.out.print("Enter the new key : ");
            BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
            String key = null;
            key = reader.readLine();
            System.out.print("Enter the new value : ");
            String value = null;
            value = reader.readLine();
            MemcachedClient cache = new MemcachedClient(AddrUtil.getAddresses(""));

            // read the object from memory
            System.out.println("Get Object before set :" + cache.get(key));

            // set a new object
            cache.set(key, 0, value);

            System.out.println("Get Object after set :" + cache.get(key));
        } catch (IOException ex) {
            Logger.getLogger(Test.class.getName()).log(Level.SEVERE, null, ex);
            System.exit(0);
        }
        System.exit(0);
    }
}

So when executing the application you will see something like:

Enter the new key : CITY
Enter the new value : Paris, France
2011-11-16 15:22:09.928 INFO net.spy.memcached.MemcachedConnection: Added {QA sa=/, #Rops=0, #Wops=0, #iq=0, topRop=null, topWop=null, toWrite=0, interested=0} to connect queue
2011-11-16 15:22:09.932 INFO net.spy.memcached.MemcachedConnection: Connection state changed for sun.nio.ch.SelectionKeyImpl@5b40c281
Get Object before set :null
Get Object after set :Paris, France

You can also access the object from a telnet session:

get CITY
VALUE CITY 0 13
Paris, France
END

You can use any Java class in your application; the only thing to do is to make the class serializable. This is it for the first post about memcached and Java. I am currently working on a small example integrating web service calls, portlets and memcached.
Reference: Installing Memcached on Mac OS X and using it in Java from our JCG partner Tugdual Grall at the Tug’s Blog blog....

Does your App Support Android 4.0? Think Again

Think your app supports Android 4.0 just because you support 2.2 or 2.3? Maybe not. If your app uses a few common Java classes, you may want to double-check your app on several 4.0 devices. Probably like most developers, when we saw Ice Cream Sandwich released, we didn't have any thought that our app might not work properly. All the changes seemed to be more closely related to the UI, not the heart of the framework. A few months ago, we noticed a handful of reviews popping up on the market mentioning total failures and really strange behavior. On some of the reviews, we got the lucky tip of a Nexus S being used, as well as other potential 4.0 devices like the Motorola Xoom. These strange reviews prompted us to go out and grab several 4.0 devices just to see what was going on. It turns out, the vast majority of our web calls were being kicked back from the server with strange errors about POST requests being sent instead of GET requests. After a bit of research, I discovered Stack Overflow comes to the rescue as usual (one and two). In short, the problem has to do with an update to the HttpURLConnection class. Specifically, the setDoOutput method has been changed (it actually works now). A quick review of the HttpURLConnection documentation shows a few new updates that really should be reviewed by anyone who is using it. So with the above changes in mind, it's a really good idea to check your app against any new versions coming out. There's really no way to find this landmine without actually using the app on an Ice Cream Sandwich device. Always remember that the Android-specific APIs are not the only classes modified by Google. The Android team has free rein to change any other aspect they see fit. Reference: Does your App Support Android 4.0? Think Again from our JCG partner Isaac Taylor at the Programming Mobile blog.
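A defensive pattern that sidesteps the behaviour change is to state the request method explicitly and only enable output for requests that actually carry a body. This is a hedged sketch of that idea, not the app's actual networking code; the URL is a placeholder, and nothing goes over the wire because connect() is never called:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class DoOutputCheck {

    // Configure a connection explicitly; on Ice Cream Sandwich,
    // setDoOutput(true) alone silently turns a request into a POST.
    static String configuredMethod(boolean hasBody) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://example.invalid/").openConnection();
            if (hasBody) {
                conn.setDoOutput(true);        // needed to write a body...
                conn.setRequestMethod("POST"); // ...but declare POST explicitly anyway
            } else {
                conn.setRequestMethod("GET");  // and never setDoOutput(true) for a GET
            }
            return conn.getRequestMethod();    // no network traffic until connect()
        } catch (Exception e) {
            return "ERROR";
        }
    }

    public static void main(String[] args) {
        System.out.println(configuredMethod(false));
        System.out.println(configuredMethod(true));
    }
}
```

Writing the intent down this way means the method no longer depends on platform-specific doOutput side effects, which is what bit us across 2.x and 4.0 devices.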

Developer’s Fantasies

The moment I became a proud father I almost lost all of my free time. Since then I have become a proud father two more times. To make a long story short – I have three kids. This basically means no free time at all. Every now and then, when I have five minutes to think clearly, I fantasize about things I would have done if only I had the time. This includes learning how to play the guitar, teaching high school kids math, opening my own pizzeria and much more. Even when I do find free time I do not spend it on my fantasies, because there is simply no point pursuing any of them if I have no plan to stick with it. As software developers we also have fantasies (job-related – other fantasies are out of scope). Each of us has his own ideas about a great change which will definitely bring improvement. Those ideas are not realized because they have no direct economic incentive. Such ideas range from developing a smart text box which uses natural language processing to understand user requests, to replacing our existing old annoying build technology with a new cool one. However, no one asked for a smart text box, and the old annoying build system works just fine. What happens to these fantasies in the agile era? Agile development teaches us to pursue customer value. Tasks are wrapped as user stories to ensure there is no waste. Said stories are prioritized by value so that at all times we work on the most important and valuable story. Technical debt and technical user stories are dealt with extra care. We only work on the most painful ones. We have been taught to adopt a new technology only if it helps us solve a problem, not entertain us or keep us up to date. The problem is that besides the fact that it killed some of the fun in development, it also keeps our solutions close to where we found them. Several algorithms in computer science, such as the Genetic Algorithm and the Simulated Annealing Algorithm, use randomization in order to escape minimum locality.
In a nutshell, these algorithms are based on the following methodology: they start with an initial solution to the problem, then iteratively use 'smart' assumptions to improve the solution by making small modifications to it. The more iterations they undergo, the better the solution they will find. The problem with this approach is that the initial solution might be surrounded by bad solutions (minimum locality). In such a scenario the algorithm will not perform well and will eventually return something close to the initial solution. How do we escape such minimum locality? One suggestion is, once in a while, to change your solution randomly, without using your brain and smart assumptions. The idea is that sometimes such a bounce will upgrade your solution so well that it will compensate for the other times when it did not help at all. How can we escape minimum locality in our development process? Even in the agile era we should spend some time on our fantasies. Such random bounces should be integrated into our agile iterations. If we stick to it long enough we will eventually find better solutions to our problems. One thing we can promise: it will be much more fun. Reference: Developer's Fantasies from our JCG partner Nadav Azaria & Roi Gamliel at the DeveloperLife blog.

Squealer: An Anti-ORM Influenced Scala Tool

I was reading a blog post from Prismatic the other day and it got me thinking about how we, as programmers, have diverged so much from our roots. In the beginning, we designed small tools which did one thing and did it well. Now we're more concerned with meeting deadlines and shipping code as fast as possible. We've fallen in love with the phrase:

Release often. Release early. – The Cathedral and the Bazaar, Eric S. Raymond

and as a result have rushed design decisions or used something that made the decision for us. Inevitably, that means we've used some framework, somewhere in our code or stack, which could have easily been replaced with either a simple tool or a collection of libraries and some glue. There's nothing wrong with RORE as a guiding principle. Focusing first on an MVP which is not feature-rich is a great business strategy. But it's a business strategy, not a software strategy. At the cost of repeating what was said by the Prismatic team, there will be a point where using a framework costs you more than not using a framework. I will even argue that in most cases you can work just as fast, produce just as high-quality a piece of code, and be in more control of your product if you avoid frameworks at all costs. Frameworks force your hand. In many cases they cause you to structure your code to work around their limitations. Case in point: ORMs. Look at the number of articles that come up with a quick search of Google using the keywords "ORM" and "problem":

The N+1 selects problem
Coding Horror on ORMs
The Vietnam of Computer Science
ORM: A Solution that Creates Many Problems

If you were to ask me about ORMs you'd quickly find out that I'm a rather vocal opponent of using them for any medium- to large-scale project. Which brings me to the point of this post.

My History With DB "Solutions"

In projects past, I worked in environments where the schema changed at least once a week and the code took 20 minutes to compile.
I was required to support all database types and schemas that were handed to us, “within reason.” That pretty much meant whoever wanted whatever schema on whatever database for whatever demo they were about to give, generally with only a few hours of notice. We were already using a third-party database abstraction layer to help “ease the burden” of database interaction so other than a few config files what was the problem? The speed of this abstraction layer depended heavily on the underlying database. On some databases, one query mapped to a single join, on another table, the same query mapped to a 5 table join. Thus, to reach acceptable performance speeds of the code, the queries used changed and the code handling it had to change as well. By using this ORM, in this manner, we lost most, if not all, of the benefits of using an ORM. The first cut of the code went fast, the next few were a frustrating experience of compile, test, compile, test, explain the delay to management, etc. While my experiences with ORMs and all database “solutions” have improved significantly since that time, I am still skeptical of the purported benefits promised by these solutions. Nothing beats writing bare SQL for fine tuning performance, memory usage, caching strategies and ease of debugging issues. That said, writing bare SQL is time consuming, error prone, and many times if the DBAs change the schema, you, the developer won’t know until a run-time error happens. Thus sprang the genesis for the ideas of Squealer, a tool which could write the code for you based upon your queries and validated against the DB you were hoping to use. Introducing Squealer Squealer, is my way of avoiding an ORM yet still reaping many of the rewards. It’s not a library but rather a tool which builds code based on the database you’re working with. How it works is simple. Take an automatic code generation exercise based on parsing text and then apply it to parsing a database. 
For classes which represent an individual table, you'd have access to column names, column data types, column default values, and any comments the DBAs left in the schema. Since this is a Scala solution and something that I started working on in earnest after attending NEScala ’12, I'm using a few libraries I heard of or watched a presentation about while attending it:

- TreeHugger, a library which exposes parts of the Scala AST to generate code
- GLL-Combinators, a parsing library with an upper bound of O(n³), capable of handling ambiguity
- Config, a configuration library for JVM languages

I'd like to switch to using ScalaTest, since Bill Venners was there holding a session on the next version of it. I'm also thinking of forking a co-worker's SQL parsing and conversion library, Seekwell, to port it to gll-combinators. This might open up the possibility of writing queries once and porting DB-specific expressions to different DBs. The current version of Squealer only does one thing right now: it parses the database and generates classes and companion objects based on the database tables. You can and should limit it to a select few tables, otherwise you'll wind up with classes generated for meta-tables too. All data mappings are the data mappings suggested by Oracle. The next step is to add the ability to parse SQL statements and generate code based on those statements. I'm currently writing the code for this, but I will admit I'm not happy with it yet. Hopefully I'll be able to find enough time to finish before Scalathon ’12 so that I can present it. Reference: Squealer: An Anti-ORM Influenced Scala Tool for Working with Relational DB from our JCG partner Owein Reese at the Statically Typed blog....

SQL tooling, the ranking

When you need to get up and running quickly with your database, the tooling becomes very important. When developing jOOQ and adding integrations for new databases, I really love the ones that provide me with simple ways to create new databases, schemata, users, roles, grants, whatever is needed, using simple dialogs where I can click next, next, next. After all, we're in the 21st century, and I don't want to configure my software with punchcards anymore.

Database tooling categories

So with jOOQ development, I've seen a fair share of databases and their tooling. I'd like to divide them into three categories. Please note that this division is subjective, from the point of view of jOOQ development. With most of these databases, I have no productive experience (except Oracle and MySQL). Things may change drastically when you go into production. So here are the categories:

The “all-you-can-wish-for” ones

These friends of mine ship with excellent tooling already integrated into their standard deliverable for free. It is easy to start the tooling and use it right away, without any configuration. The tooling is actually an intuitive rich client and I don't have to read thousands of manual pages and google all around, or pay extra license fees to get the add-on. This category contains (in alphabetical order):

- CUBRID with its Eclipse-RCP based CUBRID Manager. This is a very nice tool for a newcomer.
- DB2 with its Eclipse-RCP based IBM Data Studio. IBM created Eclipse. It would've been a shame if they hadn't created the Data Studio.
- Postgres with pgAdmin III. Very very nice looking and fast.
- SQL Server with its SQL Server Management Studio. This is probably the most complete of all. You can lose yourself in its myriads of properties and configuration popups.
- Sybase SQL Anywhere and Sybase ASE, both of which share the same tooling, called Sybase Central.
It looks a bit out of date, but all administrative operations can be done easily.

The ones with sufficient tooling

These databases have tooling that is “sufficient”. This means that they ship with some integrated scripting-enabled console. Some of them are also generally popular, such that free open source tools exist to administer them. This includes MySQL and Oracle. Here are the “OK” ones:

- H2. Its web-based console is actually quite nice-looking. It features DHTML-based auto-completion and scripting. I can live with that.
- Ingres. This dinosaur seems not to have upgraded its UI components since Windows 95, but it works as well as it has to.
- MySQL, with phpMyAdmin. This is a very nice, independent, open source PHP application for MySQL administration. You can install it easily along with MySQL using XAMPP, a nice Apache, MySQL, PHP, Perl distribution. Yes, I like installing complete things using the next, next, next pattern!
- Oracle. It has sql*plus for scripting, and there are many commercial and open source products with user interfaces. My favourites are Toad and Toad Extensions, a really nice and free Eclipse plugin. It is worth mentioning that if you pay the extra license fee, you will have access to Oracle Enterprise Manager and other very, very fancy tools. With money, you clearly can't complain here.

The other ones…

Here, you're back to loading *.sql files with DDL all along. No help from the vendors here.

- Derby. I'm not aware of any tooling. Correct me if I'm wrong.
- HSQLDB. Its integrated console can execute SQL, but it doesn't provide syntax highlighting, checking, autocompletion, etc. I'm probably better off using SQuirreL SQL, or any other generic SQL tool.
- SQLite. Good luck there! This database is really minimal!

Screenshots (ordered by database, alphabetically) [screenshots not reproduced here]

Reference: SQL tooling, the ranking from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Better looking HTML test reports for TestNG with ReportNG – Maven guide

TestNG is a testing framework created as an annotation-driven alternative to JUnit 3 in times when “extends TestCase” was an indispensable part of writing tests. Even now it provides some interesting features like data providers, parallel tests or test groups. When our tests are not executed from an IDE, it's often useful to take a look at the test results in an HTML report. The original TestNG reports look… raw. What is more, they are not very intuitive or readable. There is an alternative – ReportNG. It provides better looking and more lucid HTML test reports. More information about ReportNG can be found at its webpage, but when I tried to use it for my AppInfo library in Maven builds running from a CI server, I had trouble finding any at-a-glance guide on how to use it with Maven. Fortunately there are samples for Ant and Gradle, so I was able to figure it out, but I hope that with this post everyone wanting to use ReportNG with Maven will be able to achieve it without any problem within a few minutes.

First, an additional dependency has to be added to pom.xml:

<dependencies>
  <dependency>
    <groupId>org.uncommons</groupId>
    <artifactId>reportng</artifactId>
    <version>1.1.2</version>
    <scope>test</scope>
    <exclusions>
      <exclusion>
        <groupId>org.testng</groupId>
        <artifactId>testng</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
  (...)
</dependencies>

Usually a newer TestNG version is already used in the project, so ReportNG's transitive TestNG dependency should be excluded. Next, the Surefire plugin has to be configured:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.5</version>
      <configuration>
        <properties>
          <property>
            <name>usedefaultlisteners</name>
            <value>false</value>
          </property>
          <property>
            <name>listener</name>
            <value>org.uncommons.reportng.HTMLReporter, org.uncommons.reportng.JUnitXMLReporter</value>
          </property>
        </properties>
        <workingDirectory>target/</workingDirectory>
      </configuration>
    </plugin>
    (...)
  </plugins>
</build>

ReportNG uses two reporters pluggable into TestNG. JUnitXMLReporter generates an XML summary of the test run, meant for tools (like a CI server). HTMLReporter creates a human-readable HTML report. The default TestNG listeners should be disabled. I also added the workingDirectory property, which causes velocity.log (a file created by the Velocity engine used internally by ReportNG) to be placed in target instead of the main project directory (and therefore deleted by the “mvn clean” command). One more thing: unfortunately the ReportNG jar isn't available in Maven Central Repository, so it could be required to add the java.net repository in your settings.xml.

<repositories>
  <repository>
    <id>java-net</id>
    <url>http://download.java.net/maven/2</url>
  </repository>
  (...)
</repositories>

That's all. Now “mvn clean test” should generate a nice-looking HTML report for all the tests covering our project. Reference: Better looking HTML test reports for TestNG with ReportNG – Maven guide from our JCG partner Marcin Zajaczkowski at the Solid Soft blog....

Joins with Map Reduce

I have been reading up on the join implementations available for Hadoop for the past few days. In this post I recap some techniques I learnt in the process. Joins can be done at either the map side or the reduce side, depending on the nature of the data sets to be joined.

Reduce Side Join

Let's take the following tables containing employee and department data. Let's see how the join query below can be achieved using a reduce side join.

SELECT Employees.Name, Employees.Age, Department.Name FROM Employees INNER JOIN Department ON Employees.Dept_Id=Department.Dept_Id

The map side is responsible for emitting the join predicate value (the department id) along with the corresponding record from each table, so that records having the same department id in both tables end up at the same reducer, which then joins the records having the same department id. However it is also required to tag each record to indicate which table it originated from, so that the joining happens between records of the two different tables. The following diagram illustrates the reduce side join process. Here is the pseudo code for the map function in this scenario.

map (K table, V rec) {
  dept_id = rec.Dept_Id
  tagged_rec.tag = table
  tagged_rec.rec = rec
  emit(dept_id, tagged_rec)
}

At the reduce side, the join happens between records having different tags.

reduce (K dept_id, list<tagged_rec> tagged_recs) {
  for (tagged_rec : tagged_recs) {
    for (tagged_rec1 : tagged_recs) {
      if (tagged_rec.tag != tagged_rec1.tag) {
        joined_rec = join(tagged_rec, tagged_rec1)
        emit(dept_id, joined_rec)
      }
    }
  }
}

Map Side Join (Replicated Join) Using Distributed Cache on the Smaller Table

For this implementation to work, one relation has to fit into memory. The smaller table is replicated to each node and loaded into memory. The join happens at the map side without reducer involvement, which significantly speeds up the process since it avoids shuffling all the data across the network, even though most of the non-matching records would later be dropped anyway.
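Before looking at how the smaller table is loaded, the reduce side flow above can be simulated end to end with plain Java collections — a map grouping tagged records by Dept_Id stands in for the shuffle, and the Tagged type, the join method and the sample rows are all invented for this sketch (this is not the Hadoop API):

```java
import java.util.*;

public class ReduceSideJoin {
    // A record tagged with the table it came from, mirroring the pseudo code above.
    record Tagged(String table, String[] rec) {}

    // Simulates map (tag + emit keyed by dept_id), shuffle (group by key)
    // and reduce (nested-loop join within each group).
    public static List<String> join(String[][] employees, String[][] departments) {
        Map<String, List<Tagged>> shuffled = new TreeMap<>();
        for (String[] e : employees)   // employee rows: {name, age, dept_id}
            shuffled.computeIfAbsent(e[2], k -> new ArrayList<>()).add(new Tagged("Employees", e));
        for (String[] d : departments) // department rows: {dept_id, name}
            shuffled.computeIfAbsent(d[0], k -> new ArrayList<>()).add(new Tagged("Department", d));

        List<String> out = new ArrayList<>();
        for (List<Tagged> group : shuffled.values())  // one group per reducer key
            for (Tagged a : group)
                for (Tagged b : group)
                    // only records with different tags are joined
                    if (a.table().equals("Employees") && b.table().equals("Department"))
                        out.add(a.rec()[0] + "," + a.rec()[1] + "," + b.rec()[1]);
        return out;
    }

    public static void main(String[] args) {
        join(new String[][]{{"Anne", "32", "D1"}, {"Bob", "41", "D2"}},
             new String[][]{{"D1", "Eng"}, {"D2", "Sales"}})
            .forEach(System.out::println);
    }
}
```

Note how the nested loop in the reduce step mirrors the pseudo code: pairs of records from the same table never combine, only cross-table pairs sharing a Dept_Id do.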
The smaller table can be loaded into a hash table so that look-ups by Dept_Id can be done. The pseudo code is outlined below.

map (K table, V rec) {
  list recs = lookup(rec.Dept_Id) // get the smaller table's records having this Dept_Id
  for (small_table_rec : recs) {
    joined_rec = join(small_table_rec, rec)
    emit(rec.Dept_Id, joined_rec)
  }
}

Using Distributed Cache on a Filtered Table

If the smaller table doesn't fit in memory, it may be possible to prune its contents when a filtering expression has been specified in the query. Consider the following query.

SELECT Employees.Name, Employees.Age, Department.Name FROM Employees INNER JOIN Department ON Employees.Dept_Id=Department.Dept_Id WHERE Department.Name="Eng"

Here a smaller data set can be derived from the Department table by filtering out records having department names other than “Eng”. Now it may be possible to do the replicated map side join with this smaller data set.

Replicated Semi-Join (Reduce Side Join with Map Side Filtering)

Even if the filtered data of the small table doesn't fit into memory, it may be possible to include just the Dept_Ids of the filtered records in the replicated data set. Then, at the map side, this cache can be used to filter out records which would otherwise be sent over to the reduce side, thus reducing the amount of data moved between the mappers and reducers. The map side logic would look as follows.

map (K table, V rec) {
  // check if this record needs to be sent to the reducer
  boolean sendToReducer = check_cache(rec.Dept_Id)
  if (sendToReducer) {
    dept_id = rec.Dept_Id
    tagged_rec.tag = table
    tagged_rec.rec = rec
    emit(dept_id, tagged_rec)
  }
}

The reduce side logic would be the same as in the reduce side join case.

Using a Bloom Filter

A bloom filter is a construct which can be used to test a given element for membership in a set. A smaller representation of the filtered Dept_Ids can be derived if the Dept_Id values are added to a bloom filter. Then this bloom filter can be replicated to each node.
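Such a bloom filter is just a bit array plus a handful of hash functions. The toy version below is only an illustration — the DeptBloomFilter name, size and hash count are made up, and production code would use a library implementation — but it shows the one property the technique relies on: a key that was added is always reported as present.

```java
import java.util.BitSet;

public class DeptBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public DeptBloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive the i-th bit position for a key from two cheap hash mixes.
    private int position(String key, int i) {
        int h1 = key.hashCode();
        int h2 = (h1 >>> 16) | (h1 << 16); // second hash: rotate the first
        return Math.floorMod(h1 + i * h2, size);
    }

    // Set one bit per hash function for the key.
    public void add(String key) {
        for (int i = 0; i < hashes; i++) bits.set(position(key, i));
    }

    // May return a false positive, but never a false negative:
    // every bit of an added key is set, so added keys always pass.
    public boolean mightContain(String key) {
        for (int i = 0; i < hashes; i++)
            if (!bits.get(position(key, i))) return false;
        return true;
    }

    public static void main(String[] args) {
        DeptBloomFilter filter = new DeptBloomFilter(1024, 3);
        filter.add("D1"); // Dept_Ids of the filtered Department records
        System.out.println(filter.mightContain("D1")); // always true for added keys
    }
}
```

mightContain may occasionally return true for a Dept_Id that was never added (a false positive); those extra records are simply dropped at the reduce side, so correctness is unaffected.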
At the map side, for each input record the bloom filter is used to check whether the record's Dept_Id is present, and only if so is the record emitted to the reduce side. Since a bloom filter is guaranteed not to produce false negatives, no matching record is lost, and the occasional false positive merely sends a few extra records to the reducer where they are dropped, so the result is still accurate. Reference: Joins with Map Reduce from our JCG partner Buddhika Chamith at the Source Open blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.