
Apache Bigtop – Installing Hive, HBase and Pig

In the previous post we learnt how easy it is to install Hadoop with Apache Bigtop! We know it's not just Hadoop, though – there are sub-projects around the table! So, let's have a look at how to install Hive, HBase and Pig in this post. Before rowing your boat… please follow the previous post and get ready with Hadoop installed! Follow the link for the previous post: http://femgeekz.blogspot.in/2012/06/hadoop-hangover-introduction-to-apache.html The same can also be found at DZone, developer site: http://www.dzone.com/links/hadoop_hangover_introduction_to_apache_bigtop_and.html All set?? Great! Head on..

Make sure all the services of Hadoop are running, namely JobTracker, SecondaryNameNode, TaskTracker, DataNode and NameNode. [standalone mode]

Hive with Bigtop: The steps here are almost the same as installing Hive as a separate project. However, a few steps are reduced. The Hadoop installed in the previous post is release 1.0.1. We had installed Hadoop with the following command:

    sudo apt-get install hadoop\*

Step 1: Installing Hive
We have installed Bigtop 0.3.0, so issuing the following command installs all the Hive components, i.e. hive, hive-metastore and hive-server (the daemon names are different in Bigtop 0.3.0):

    sudo apt-get install hive\*

After installing, the scripts must be able to create /tmp and /user/hive/warehouse, and HDFS doesn't allow these to be created while installing, as it is unaware of the path to Java. So, create the directories if not created and grant the execute permissions. In the Hadoop directory, i.e. /usr/lib/hadoop/:

    bin/hadoop fs -mkdir /tmp
    bin/hadoop fs -mkdir /user/hive/warehouse
    bin/hadoop fs -chmod g+x /tmp
    bin/hadoop fs -chmod g+x /user/hive/warehouse

Step 2: The alternative directories could be /var/run/hive and /var/lock/subsys:

    sudo mkdir /var/run/hive
    sudo mkdir /var/lock/subsys

Step 3: Start the Hive server, a daemon:

    sudo /etc/init.d/hive-server start

Step 4: Running Hive
Go to the directory /usr/lib/hive and start the shell (see the screenshots in the original post):

    bin/hive

Step 5: Operations on Hive.

HBase with Bigtop: Installing HBase is similar to Hive.

Step 1: Installing HBase

    sudo apt-get install hbase\*

Step 2: Starting HMaster

    sudo service hbase-master start

Step 3: Starting the HBase shell

    hbase shell

Step 4: HBase operations.

Pig with Bigtop: Installing Pig is similar too.

Step 1: Installing Pig

    sudo apt-get install pig

Step 2: Moving a file to HDFS
Step 3: Installed Pig-0.9.2
Step 4: Starting the grunt shell

    pig

Step 5: Pig basic operations.

We saw that it is possible to install the sub-projects and work with Hadoop, with no issues. Apache Bigtop has its own spark! :) There is a release coming, Bigtop 0.4.0, which is supposed to fix the following issues: https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12318889&styleName=Html&projectId=12311420 Source and binary files: http://people.apache.org/~rvs/bigtop-0.4.0-incubating-RC0 Maven staging repo: https://repository.apache.org/content/repositories/orgapachebigtop-279 Bigtop's KEYS file containing the PGP keys we use to sign the release: http://svn.apache.org/repos/asf/incubator/bigtop/dist/KEYS Let us see how to install other sub-projects in the coming posts! Until then, Happy Learning!

Reference: Hadoop Hangover: Introduction To Apache Bigtop and Installing Hive, HBase and Pig from our JCG partner Swathi V at the * Techie(S)pArK * blog....

JavaFX 2.0 Hello World

Before talking about the example itself, I want to show you how to create a JavaFX application in NetBeans. (If you haven't installed JavaFX and NetBeans yet, please see my previous post Installing JavaFX 2.0 and NetBeans 7.7.1.) Click on "New Project" in the "File" menu to open the project wizard. Then choose "JavaFX -> JavaFX Application" and press "Next". In the next dialog you can specify the name of your application and a destination folder, where it should be stored. If you have installed JavaFX correctly, the "JavaFX Platform" should be specified already. Otherwise you can add the platform yourself by clicking on "Manage Platforms -> Add Platform" and specifying the paths to your JavaFX installation.

Note: By default the "Create Application Class" checkbox is checked. Please uncheck it because we'll create our own application class. Click on "Finish" to create your first JavaFX application.

Hello World in JavaFX 2.0 – Example 1

Probably every single software developer knows the famous "HelloWorld" example, as it is often used to show the syntax of an (unknown) programming language and to give a first clue of what the language looks like. I don't want to break this tradition, so here are 2 different versions of a HelloWorld program in JavaFX 2.0. I'll show the complete code first and then explain the individual parts.

import javafx.application.Application;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;

/**
 * Created on: 17.03.2012
 * @author Sebastian Damm
 */
public class HelloJavaFX extends Application
{
    @Override
    public void start(Stage stage) throws Exception
    {
        Button bt = new Button("Print HelloWorld");
        bt.setOnAction(new EventHandler<ActionEvent>()
        {
            @Override
            public void handle(ActionEvent arg0)
            {
                System.out.println("HelloWorld! :)");
            }
        });

        StackPane root = new StackPane();
        Scene scene = new Scene(root, 300, 150);
        root.getChildren().add(bt);

        stage.setTitle("HelloWorld in JavaFX 2.0");
        stage.setScene(scene);
        stage.show();
    }

    public static void main(String[] args)
    {
        Application.launch(args);
    }
}

The first thing worth mentioning is that you have to extend the Application class in order to create a working JavaFX application. This class provides several life-cycle methods and is the starting point for your application. It is an abstract class (which means that you cannot instantiate it) with a single abstract method, start, that you have to override. You are provided a Stage object by the JavaFX runtime, which you can use to display your UI.

To start your application you have to call the static method launch, as seen in the main method in this example. After launching your application, it will call the start method. Here is the JavaDoc of the Application class, which shows the individual steps when starting a JavaFX application. The entry point for JavaFX applications is the Application class. The JavaFX runtime does the following, in order, whenever an application is launched:
1. Constructs an instance of the specified Application class
2. Calls the init() method
3. Calls the start(javafx.stage.Stage) method
4. Waits for the application to finish, which happens either when the last window has been closed, or the application calls Platform.exit()
5. Calls the stop() method

Let's start with the real source code inside the start method. First of all we create a simple Button and specify an action to be triggered when the button is clicked via the setOnAction method (compare JButton's addActionListener). Next we create a StackPane object, which is one of the layout panes in JavaFX (one of the next blog posts will cover all the different layout panes in JavaFX).
I use a StackPane here because it automatically takes all the available space provided by its surrounding parent and because it automatically centers its children.

Note: The foundation of a JavaFX application is the Scene graph. Every single Node (which includes simple controls, groups and layout panes) is part of a hierarchical tree of nodes, which is called the Scene graph. The Scene graph, and therefore your whole JavaFX application, always has one single root node!

As mentioned above, the start method has a Stage object parameter, which is provided by the JavaFX runtime. This Stage object is a kind of window. You have to give it a Scene object as its viewable content. You can create a Scene object by passing the root node of your application. Optional parameters are the width and the height of your scene as well as a Paint object, which includes simple colors and also complex color gradients.

With root.getChildren().add(bt); you add the button to your root node, which is a StackPane. After that we set a title on the stage and apply the created Scene object. Finally, with the show method we tell the stage to show (compare Swing's setVisible). Now your application should look like this: and it should print 'HelloWorld' to the command line if you click the button. Nothing spectacular yet, but it's your first working JavaFX application, so congratulations! :)

Hello World in JavaFX 2.0 – Example 2

Additionally, here is a slightly changed example, which will show the text in the GUI.
The code:

import javafx.application.Application;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.effect.DropShadow;
import javafx.scene.paint.Color;
import javafx.scene.text.Font;
import javafx.scene.text.Text;
import javafx.stage.Stage;

/**
 * Created on: 17.03.2012
 * @author Sebastian Damm
 */
public class HelloJavaFX2 extends Application
{
    @Override
    public void start(Stage stage) throws Exception
    {
        final Group root = new Group();
        Scene scene = new Scene(root, 500, 200, Color.DODGERBLUE);

        final Text text = new Text(140, 120, "Hello JavaFX 2.0!");
        text.setFont(Font.font("Calibri", 35));
        text.setFill(Color.WHITE);
        text.setEffect(new DropShadow());

        Button bt = new Button("Show HelloWorld");
        bt.setLayoutX(180);
        bt.setLayoutY(50);
        bt.setOnAction(new EventHandler<ActionEvent>()
        {
            @Override
            public void handle(ActionEvent arg0)
            {
                root.getChildren().add(text);
            }
        });

        root.getChildren().add(bt);
        stage.setTitle("HelloWorld in JavaFX 2.0");
        stage.setScene(scene);
        stage.show();
    }

    public static void main(String[] args)
    {
        Application.launch(args);
    }
}

Instead of using a layout pane, we use a Group object here. Group is a subclass of Parent (which is a subclass of Node) and takes one or more children. A Group isn't directly resizable, and you can add transformations or effects to a Group which will then affect all children of the Group. (Note that we now also provided a Paint for the Scene.) Next we create a Text object. Because we have no layout pane, we specify the x and y coordinates directly. We specify a custom font, change the color to white and add a DropShadow. The Button also gets coordinates, and instead of printing "HelloWorld" to the command line when we click the button, we add the created Text object to our root element (and therefore to the Scene graph).
After clicking the button, your application should look like this.

Summary:
• A JavaFX Stage object is a kind of window and behaves similar to a JFrame or JDialog in Swing.
• A JavaFX Scene object is the viewable content of a Stage and has a single Parent root node.
• Node is one of the most important classes in JavaFX. Every control or layout pane is a kind of node.
• The Scene graph is a hierarchical tree of nodes. It has one single root node, is the foundation of your application, and has to be passed to a Scene object.

In order to create and start a JavaFX application you have to complete the following steps:
1. Extend the Application class
2. Override the abstract start method
3. Create a root node and add some elements to it
4. Create a Scene object and pass the root node to it
5. Apply the Scene to the Stage via setScene
6. Tell the Stage to show with the show method
7. Call Application.launch in your main method

Reference: Hello World in JavaFX 2.0 from our JCG partner Sebastian Damm at the Just my 2 cents about Java blog....

All you need to know about QuickSort

It would be true to say that Quicksort is one of the most popular sorting algorithms. You can find it implemented in most languages, and it is present in almost any core library. In Java and Go, Quicksort is the default sorting algorithm for some data types, and it is used in the C++ STL (Introsort, which is used there, begins with Quicksort). Such popularity can be explained by the fact that, on average, Quicksort is one of the fastest known sorting algorithms. Interestingly, the complexity of Quicksort is not less than it is for other algorithms like MergeSort or HeapSort. The best-case performance is O(n log n), and in the worst case it gives O(n^2). The latter, luckily, is an exceptional case for a proper implementation. Quicksort's performance is gained by the main loop, which tends to make excellent use of CPU caches. Another reason for its popularity is that it doesn't need allocation of additional memory.

Personally, for me Quicksort appeared as one of the most complex sorting algorithms. The basic idea is pretty simple and usually takes just a few minutes to implement. But that version, of course, is not practically usable. When it comes to details and to efficiency, it gets more and more complicated.

Quicksort was first described by C.A.R. Hoare in 1962 (see "Quicksort," Computer Journal 5, 1, 1962), and in the following years the algorithm slightly mutated. The best-known version is three-way Quicksort. The most comprehensive of the widely known ones is dual-pivot Quicksort. Both algorithms will be covered in this post. The Java language was used to implement all algorithms. This post does not pretend to be an adequate performance analysis. The test data used for performance comparison is incomplete and used just to show certain optimization techniques. Also, the algorithm implementations are not necessarily optimal. Just keep that in mind while you are reading.
Basics

The basic version of Quicksort is pretty simple and can be implemented in just a few lines of code:

public static void basicQuickSort(long arr[], int beginIdx, int len) {
    if ( len <= 1 )
        return;

    final int endIdx = beginIdx + len - 1;

    // pivot selection
    final int pivotPos = beginIdx + len/2;
    final long pivot = arr[pivotPos];
    Utils.swap(arr, pivotPos, endIdx);

    // partitioning
    int p = beginIdx;
    for(int i = beginIdx; i != endIdx; ++i) {
        if ( arr[i] <= pivot ) {
            Utils.swap(arr, i, p++);
        }
    }
    Utils.swap(arr, p, endIdx);

    // recursive calls
    basicQuickSort(arr, beginIdx, p-beginIdx);
    basicQuickSort(arr, p+1, endIdx-p);
}

The code looks pretty simple and is easily readable. Pivot selection is trivial and doesn't require any explanation. The partitioning process can be illustrated with the following figure: pointer "i" moves from the beginning to the end of the array (note that the last element of the array is skipped – we know that it is the pivot). If the i-th element is "<= pivot", then the i-th and p-th elements are swapped and the "p" pointer is moved to the next element. When partitioning is finished, the array will look like this: remember that in the code, at the end of the array there is the element with the pivot value, and that element is excluded from the partitioning loop. That element is put in the p-th position, which makes the p-th element included in the "<= pivot" area. If you need more details, have a look at Wikipedia; there is a pretty good explanation with lots of references. I would just draw your attention to the fact that the algorithm consists of three main sections: pivot selection, partitioning, and recursive calls to sort each partition.
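To make the partition step concrete, here is a self-contained trace of the same scheme on a small array (with a local swap helper standing in for Utils.swap, which the post does not show):

```java
import java.util.Arrays;

public class PartitionDemo {
    static void swap(long[] a, int i, int j) { long t = a[i]; a[i] = a[j]; a[j] = t; }

    // Same partition as in the post: arr[endIdx] holds the pivot itself,
    // everything <= pivot is moved to the front, then the pivot lands at p.
    static int partition(long[] arr, int beginIdx, int len, long pivot) {
        final int endIdx = beginIdx + len - 1;
        int p = beginIdx;
        for (int i = beginIdx; i != endIdx; ++i) {
            if (arr[i] <= pivot) swap(arr, i, p++);
        }
        swap(arr, p, endIdx); // pivot goes to its final position
        return p;
    }

    public static void main(String[] args) {
        // Pivot 5 has already been swapped to the last position.
        long[] arr = {3, 8, 2, 7, 1, 4, 5};
        int p = partition(arr, 0, arr.length, 5);
        System.out.println(p + " " + Arrays.toString(arr));
        // prints: 4 [3, 2, 1, 4, 5, 7, 8]
    }
}
```

Everything left of index 4 is <= 5, the pivot sits at index 4, and everything to the right is greater, exactly as the figure describes.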
To make the separation clearer, the algorithm can be written down as:

public static void basicQuickSort(long arr[], int beginIdx, int len) {
    if ( len <= 1 )
        return;

    final int endIdx = beginIdx + len - 1;
    final int pivotIdx = getPivotIdx(arr, beginIdx, len);
    final long pivot = arr[pivotIdx];

    Utils.swap(arr, pivotIdx, endIdx);
    int p = partition(arr, beginIdx, len, pivot);
    Utils.swap(arr, p, endIdx);

    basicQuickSort(arr, beginIdx, p-beginIdx);
    basicQuickSort(arr, p+1, endIdx-p);
}

public static int partition(long[] arr, int beginIdx, int len, long pivot) {
    final int endIdx = beginIdx + len - 1;
    int p = beginIdx;
    for(int i = beginIdx; i != endIdx; ++i) {
        if ( arr[i] <= pivot ) {
            Utils.swap(arr, i, p++);
        }
    }
    return p;
}

public static int getPivotIdx(long arr[], int beginIdx, int len) {
    return beginIdx + len/2;
}

Now let's have a look at how it performs vs. the Java 6 sort algorithm. For the test I will generate an array using the following loop:

static Random rnd = new Random();

private static long[] generateData() {
    long arr[] = new long[5000000];
    for(int i = 0; i != arr.length; ++i) {
        arr[i] = rnd.nextInt(arr.length);
    }
    return arr;
}

Then I ran each of JDK 6 Arrays.sort() and basicQuickSort() 30 times and took the average run time as the result. A new set of random data was generated for each run. The result of that exercise is this:

                      arr[i]=rnd.nextInt(arr.length)
Java 6 Arrays.sort    1654ms
basicQuickSort        1431ms

Not that bad. Now look at what happens if the input data has some more repeated elements. To generate that data, I just divided the nextInt() argument by 100:

                      arr[i]=rnd.nextInt(arr.length)    arr[i]=rnd.nextInt(arr.length/100)
Java 6 Arrays.sort    1654ms                            935ms
basicQuickSort        1431ms                            2570ms

Now that is very bad. Obviously the simple algorithm doesn't behave well in such cases. It can be assumed that the problem is in the quality of the pivot. The worst possible pivot is the biggest or the smallest element of the array; in that case, the algorithm would have O(n^2) complexity.
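The timing methodology above (30 runs, fresh random data each run, average taken) can be sketched as a small harness. The names here are hypothetical, not the author's exact code, and Arrays.sort stands in for whichever implementation is being measured:

```java
import java.util.Arrays;
import java.util.Random;

public class Benchmark {
    static final Random rnd = new Random();

    // Fresh random data for every run, like the post's generateData().
    static long[] generateData(int size, int bound) {
        long[] arr = new long[size];
        for (int i = 0; i != arr.length; ++i) arr[i] = rnd.nextInt(bound);
        return arr;
    }

    // Average wall-clock milliseconds over 'runs' sorts.
    static double averageSortMillis(int runs, int size, int bound) {
        long totalNanos = 0;
        for (int r = 0; r != runs; ++r) {
            long[] arr = generateData(size, bound);
            long start = System.nanoTime();
            Arrays.sort(arr); // swap in basicQuickSort(arr, 0, arr.length) to compare
            totalNanos += System.nanoTime() - start;
        }
        return totalNanos / (runs * 1000000.0);
    }

    public static void main(String[] args) {
        System.out.println(averageSortMillis(5, 100000, 100000) + " ms");
        System.out.println(averageSortMillis(5, 100000, 1000) + " ms"); // more duplicates
    }
}
```

Passing bound = size/100 reproduces the "more repeated elements" scenario from the second table.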
Ideally, the pivot should be chosen so that it splits the array into two parts of equal size. That means the ideal pivot is the median of all values of the given array. Practically, that is not a good idea – too slow. Therefore implementations usually use the median of 3-5 elements, and the decision on the number of elements used for the pivot can be based on the size of the partitioned array. The code for the pivot selection may look like this:

public static int getPivotIdx(long arr[], int beginIdx, int len) {
    if ( len <= 512 ) {
        int p1 = beginIdx;
        int p2 = beginIdx+(len>>>1);
        int p3 = beginIdx+len-1;

        if ( arr[p1] > arr[p2] ) { int tmp = p1; p1 = p2; p2 = tmp; }
        if ( arr[p2] > arr[p3] ) { p2 = p3; }
        if ( arr[p1] > arr[p2] ) { p2 = p1; }

        return p2;
    } else {
        int p1 = beginIdx+(len/4);
        int p2 = beginIdx+(len>>1);
        int p3 = beginIdx+(len-len/4);
        int p4 = beginIdx;
        int p5 = beginIdx+len-1;

        if ( arr[p1] > arr[p2] ) { int tmp = p1; p1 = p2; p2 = tmp; }
        if ( arr[p2] > arr[p3] ) { int tmp = p2; p2 = p3; p3 = tmp; }
        if ( arr[p1] > arr[p2] ) { int tmp = p1; p1 = p2; p2 = tmp; }
        if ( arr[p3] > arr[p4] ) { int tmp = p3; p3 = p4; p4 = tmp; }
        if ( arr[p2] > arr[p3] ) { int tmp = p2; p2 = p3; p3 = tmp; }
        if ( arr[p1] > arr[p2] ) { p2 = p1; }
        if ( arr[p4] > arr[p5] ) { p4 = p5; }
        if ( arr[p3] > arr[p4] ) { p3 = p4; }
        if ( arr[p2] > arr[p3] ) { p3 = p2; }

        return p3;
    }
}

Here are the results after the improvement in the pivot selection strategy:

                                      arr[i]=rnd.nextInt(arr.length)    arr[i]=rnd.nextInt(arr.length/100)
Java 6 Arrays.sort                    1654ms                            935ms
basicQuickSort                        1431ms                            2570ms
basicQuickSort with 'better' pivot    1365ms                            2482ms

Unfortunately, the improvement is almost nothing. It appears that pivot selection is not the root cause of the problem. Still, let's keep it: it doesn't harm, it even helps a little bit, and it significantly reduces the possibility of O(n^2) behaviour. The other suspect is the algorithm itself. It seems it's not good enough: obviously it doesn't perform well when the collection has repeated elements.
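The getPivotIdx above is just a chain of compare-and-swaps; the underlying median-of-three can be written more compactly on values (a sketch, not the post's index-based version):

```java
public class MedianOfThree {
    // Median of three values via the identity
    // median(x, y, z) = max(min(x, y), min(max(x, y), z)).
    static long medianOf3(long x, long y, long z) {
        return Math.max(Math.min(x, y), Math.min(Math.max(x, y), z));
    }

    public static void main(String[] args) {
        System.out.println(medianOf3(3, 1, 2)); // prints: 2
        System.out.println(medianOf3(1, 2, 3)); // prints: 2
        System.out.println(medianOf3(5, 5, 1)); // prints: 5
    }
}
```

Sampling the pivot this way makes the worst case (picking the minimum or maximum) far less likely on real data, which is exactly the goal of the 'better' pivot strategy.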
Therefore something has to be changed.

Three-way partitioning

The way to get around that problem is three-way partitioning. As a result of such partitioning, elements which are equal to the pivot are put in the middle of the array. Elements which are bigger than the pivot are put on the right side of the array and ones which are smaller on the left side, appropriately. The implementation of that partitioning method consists of two stages. In the first stage the array is scanned by two pointers ("i" and "j") which approach each other from opposite directions. Elements which are equal to the pivot are moved to the ends of the array. It can be seen that after the first stage, elements which are equal to the pivot are located at the edges of the array. In the second stage these elements are moved to the middle. That is now their final position and they can be excluded from further sorting.

With this algorithm the partitioning function gets much more complicated. In this implementation the result of the partitioning is the lengths of the two outer partitions, packed into a single long:

public static long partition(long[] arr, int beginIdx, int endIdx, long pivot) {
    int i = beginIdx-1;
    int l = i;
    int j = endIdx+1;
    int r = j;
    while ( true ) {
        while ( arr[++i] < pivot ) {}
        while ( pivot < arr[--j] ) {}

        if ( i >= j )
            break;

        Utils.swap(arr, i, j);
        if ( arr[i] == pivot ) { Utils.swap(arr, i, ++l); }
        if ( arr[j] == pivot ) { Utils.swap(arr, j, --r); }
    }
    // if i == j then arr[i] == arr[j] == pivot
    if ( i == j ) { ++i; --j; }

    final int lLen = j-l;
    final int rLen = r-i;

    final int pLen = l-beginIdx;
    final int exchp = pLen > lLen ? lLen : pLen;
    int pidx = beginIdx;
    for(int s = 0; s <= exchp; ++s) {
        Utils.swap(arr, pidx++, j--);
    }
    final int qLen = endIdx-r;
    final int exchq = rLen > qLen ? qLen : rLen;
    int qidx = endIdx;
    for(int s = 0; s <= exchq; ++s) {
        Utils.swap(arr, qidx--, i++);
    }

    return (((long)lLen)<<32)|rLen;
}

The pivot selection has to be changed as well, but more for convenience; the idea remains absolutely the same.
Now it returns the actual pivot value, instead of an index:

public static long getPivot(long arr[], int beginIdx, int len) {
    if ( len <= 512 ) {
        long p1 = arr[beginIdx];
        long p2 = arr[beginIdx+(len>>1)];
        long p3 = arr[beginIdx+len-1];

        return getMedian(p1, p2, p3);
    } else {
        long p1 = arr[beginIdx+(len/4)];
        long p2 = arr[beginIdx+(len>>1)];
        long p3 = arr[beginIdx+(len-len/4)];
        long p4 = arr[beginIdx];
        long p5 = arr[beginIdx+len-1];

        return getMedian(p1, p2, p3, p4, p5);
    }
}

And here is the main method, which is slightly changed as well:

public static void threeWayQuickSort(long[] arr, int beginIdx, int len) {
    if ( len < 2 )
        return;

    final int endIdx = beginIdx+len-1;
    final long pivot = getPivot(arr, beginIdx, len);
    final long lengths = threeWayPartitioning(arr, beginIdx, endIdx, pivot);

    final int lLen = (int)(lengths>>32);
    final int rLen = (int)lengths;

    threeWayQuickSort(arr, beginIdx, lLen);
    threeWayQuickSort(arr, endIdx-rLen+1, rLen);
}

Now let's compare it with the Java 6 sort:

                                      arr[i]=rnd.nextInt(arr.length)    arr[i]=rnd.nextInt(arr.length/100)
Java 6 Arrays.sort                    1654ms                            935ms
basicQuickSort                        1431ms                            2570ms
basicQuickSort with 'better' pivot    1365ms                            2482ms
Three-way partitioning Quicksort      1330ms                            829ms

Huh, impressive! It is faster than the standard library, which, by the way, implements the same algorithm. To be honest, I was surprised when I found that it is such an easy task to beat the standard library. But what about making it even faster? There is one trick which always helps, and it works for all sorting algorithms which work with consecutive memory. That trick is insertion sort. Although it has a big chance of O(n^2), it appears to be very effective on small arrays and always gives some performance improvement. That is especially noticeable when the input data is not sorted and there are not many repeated elements.
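The code that follows relies on an InsertionSort.sort helper which the post never shows. A minimal version with the same (beginIdx, len) signature might look like this:

```java
public class InsertionSort {
    // Insertion sort on the subrange arr[beginIdx .. beginIdx+len-1].
    public static void sort(long[] arr, int beginIdx, int len) {
        final int endIdx = beginIdx + len;
        for (int i = beginIdx + 1; i < endIdx; ++i) {
            long key = arr[i];
            int j = i - 1;
            // Shift larger elements one slot to the right.
            while (j >= beginIdx && arr[j] > key) {
                arr[j + 1] = arr[j];
                --j;
            }
            arr[j + 1] = key;
        }
    }
}
```

On tiny subarrays this loop is branch-friendly and cache-friendly, which is why handing small partitions to it beats recursing all the way down.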
All you need to do is add it at the beginning of the sorting method:

public static void threeWayQuickSort(long[] arr, int beginIdx, int len) {
    if ( len < 2 )
        return;

    if ( len < 17 ) {
        InsertionSort.sort(arr, beginIdx, len);
        return;
    }

    final int endIdx = beginIdx+len-1;
    final long pivot = getPivot(arr, beginIdx, len);
    final long lengths = threeWayPartitioning(arr, beginIdx, endIdx, pivot);

    final int lLen = (int)(lengths>>32);
    final int rLen = (int)lengths;

    threeWayQuickSort(arr, beginIdx, lLen);
    threeWayQuickSort(arr, endIdx-rLen+1, rLen);
}

and run the test again:

                                                          arr[i]=rnd.nextInt(arr.length)    arr[i]=rnd.nextInt(arr.length/100)
Java 6 Arrays.sort                                        1654ms                            935ms
basicQuickSort                                            1431ms                            2570ms
basicQuickSort with 'better' pivot                        1365ms                            2482ms
Three-way partitioning Quicksort                          1330ms                            829ms
Three-way partitioning Quicksort with Insertion sort      1155ms                            818ms

Now the standard library looks just awful. It seems that all is said and done, but in reality that's not the end of the story and there is something else to talk about.

Dual-pivot Quicksort

Moving forward, I found that Java 7 is much more advanced; it performs much faster than the Java 6 version and outperforms all previous tests:

                                                          arr[i]=rnd.nextInt(arr.length)    arr[i]=rnd.nextInt(arr.length/100)
Java 6 Arrays.sort                                        1654ms                            935ms
Java 7 Arrays.sort                                        951ms                             764ms
basicQuickSort                                            1431ms                            2570ms
basicQuickSort with 'better' pivot                        1365ms                            2482ms
Three-way partitioning Quicksort                          1330ms                            829ms
Three-way partitioning Quicksort with Insertion sort      1155ms                            818ms

After several seconds of very exciting research it was found that Java 7 uses a new version of the Quicksort algorithm, which was discovered just in 2009 by Vladimir Yaroslavskiy and named Dual-Pivot Quicksort. Interestingly, after some searching on the internet, I found an algorithm called 'Multiple pivot sorting' which was published in 2007. It seems to be a generic case of Dual-Pivot Quicksort, where it is possible to have any number of pivots.
As you may notice from the name, the main difference of this algorithm is that it uses two pivots instead of one. The coding now gets even more complicated. The simplest version of the algorithm may look like this:

public static void dualPivotQuicksort(long arr[], int beginIdx, int len) {
    if ( len < 2 )
        return;

    final int endIdx = beginIdx+len-1;

    long pivot1 = arr[beginIdx];
    long pivot2 = arr[endIdx];

    if ( pivot1 == pivot2 ) {
        final long lengths = QuickSort.threeWayPartitioning(arr, beginIdx, endIdx, pivot1);
        final int lLen = (int)(lengths>>32);
        final int rLen = (int)lengths;

        dualPivotQuicksort(arr, beginIdx, lLen);
        dualPivotQuicksort(arr, endIdx-rLen+1, rLen);
    } else {
        if ( pivot1 > pivot2 ) {
            long tmp = pivot1; pivot1 = pivot2; pivot2 = tmp;
            Utils.swap(arr, beginIdx, endIdx);
        }

        int l = beginIdx;
        int r = endIdx;
        int p = beginIdx;

        while ( p <= r ) {
            if ( arr[p] < pivot1 ) {
                Utils.swap(arr, l++, p++);
            } else if ( arr[p] > pivot2 ) {
                while ( arr[r] > pivot2 && r > p ) { --r; }
                Utils.swap(arr, r--, p);
            } else {
                ++p;
            }
        }
        if ( arr[l] == pivot1 ) ++l;
        if ( arr[r] == pivot2 ) --r;

        dualPivotQuicksort(arr, beginIdx, l-beginIdx);
        dualPivotQuicksort(arr, l, r-l+1);
        dualPivotQuicksort(arr, r+1, endIdx-r);
    }
}

First the code picks two pivots. If the pivots are the same, it means we have just one pivot, and in that case we can use the three-way method for partitioning. If the pivots are different, the partitioning process works like this: the scanning pointer "p" moves from the beginning of the array. If the current element is less than pivot1, it is swapped with the l-th element and "l" is moved to the next element. If the current element is greater than pivot2, elements greater than pivot2 at the right end are first skipped, then the r-th element is swapped with the p-th and the "r" pointer is moved backwards to the next element. It all stops when "p" passes "r". After partitioning, the array consists of three parts: elements less than pivot1, elements between the pivots, and elements greater than pivot2. When partitioning is finished, the algorithm is called recursively for each partition. The reader shouldn't expect good performance from the provided code; it is not fast and performs even worse than Java 6 Arrays.sort.
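For readers who want to experiment, the same dual-pivot scheme can be condensed into a self-contained class. This is a sketch under the same partitioning idea, with simplified pivot placement and without the three-way fallback for equal pivots; it is not the Java 7 implementation:

```java
import java.util.Arrays;

public class DualPivotDemo {
    static void swap(long[] a, int i, int j) { long t = a[i]; a[i] = a[j]; a[j] = t; }

    // Sort a[lo..hi] using two pivots taken from the ends of the range.
    static void sort(long[] a, int lo, int hi) {
        if (lo >= hi) return;
        if (a[lo] > a[hi]) swap(a, lo, hi);
        long p1 = a[lo], p2 = a[hi];            // p1 <= p2
        int l = lo + 1, r = hi - 1, p = l;
        while (p <= r) {
            if (a[p] < p1) {
                swap(a, l++, p++);              // grow the "< p1" area
            } else if (a[p] > p2) {
                while (a[r] > p2 && r > p) --r; // skip elements already in place
                swap(a, p, r--);                // grow the "> p2" area
            } else {
                ++p;                            // element stays in the middle
            }
        }
        swap(a, lo, --l);                       // move pivot1 to its final slot
        swap(a, hi, ++r);                       // move pivot2 to its final slot
        sort(a, lo, l - 1);                     // < p1
        sort(a, l + 1, r - 1);                  // between the pivots
        sort(a, r + 1, hi);                     // > p2
    }

    public static void main(String[] args) {
        long[] a = {5, 9, 1, 7, 3, 8, 2, 6, 4};
        sort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a));
        // prints: [1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```

Without the three-way fallback, arrays of many equal elements degrade here, which is exactly why the version above delegates to threeWayPartitioning when pivot1 == pivot2.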
It was provided just to illustrate the concept. To be honest, I failed to make my implementation perform any better than the version from Java 7. I must admit that Yaroslavskiy did a very good job there. Therefore I do not think there is any sense in discussing my implementation here in detail.

But if someone wants to challenge the Java 7 version, I can point in some directions for optimization. The first, which seems obvious, is pivot selection. Another easy improvement is insertion sort at the beginning. Also, I have noticed that this algorithm is very sensitive to inlining, so there is sense in inlining Utils.swap(). As another option, you can decide to go through the middle partition and move elements equal to pivot1 or pivot2 to their final positions, which will exclude them from further sorting; I found that this is effective for relatively small (<=512 elements) arrays. You can also have a look at the source from Java 7 and try to implement some tricks from there. Be ready to spend a lot of time :)

All in all, it can be seen that over the years sorting is getting better and better. And that statement doesn't relate only to Quicksort; other sorting algorithms are improving as well. Consider Introsort or Timsort as examples. However, it would be true to say that nothing really new has been discovered in this area since the 1960s-1980s. Hopefully we will be lucky enough to see something completely new and radical in the future.

For those who want to dig deeper, as a starting point I would suggest the following links:
• Quicksort Wikipedia article
• Dual-Pivot QuickSort paper
• 'Quicksort Is Optimal' presentation by Robert Sedgewick & Jon Bentley
• MIT lecture about Quicksort

Reference: All you need to know about QuickSort from our JCG partner Stanislav Kobylansky at Stas's blog....

Impressive first Apache Camel release

In preparation for the CamelOne conference next week, I took time to look back at the history of the Apache Camel project. Among other things, I had a look at the first official 1.0 release of Apache Camel.

Apache Camel 1.0 – 5 years ago

The more I looked, the more impressed I am with this release. You have to consider this was done 5 years ago, and already in this release the Camel founders had in the DNA of the project:
• Java DSL
• XML DSL (using Spring)
• OSGi on the roadmap
• camel-core JAR of 660kb
• 18 external components (+ what's in camel-core)
• 2 working examples
• full website with documentation included, incl. FAQs
• project logo and box
• the Camel Maven plugin to easily run Camel and its examples
• Test Kit

Below is a screenshot of the tar ball distribution of this release: Camel 1.0 distribution (hint: the OSGi ambitions in the pom.xml).

When you hear James talk about the past and how he created Camel, his ambition was that Camel should not restrain you. If you want to use Java and not XML, then fine. If you are on the Spring XML wagon, then fine. If you are into Groovy, then fine. If you want to use Ruby, then hell yeah (Ruby support was added in Camel 1.3). Let's take a look down the lane of the DSLs. Apache Camel is most likely the first integration project that offered multiple language DSLs out of the box in its very first release. It is simply in the project's DNA, and it is what IMHO makes Apache Camel stand out from the rest – the diverse and vibrant community and the DNA of the Camel project embracing 'no shoe fits all'. So let's take a look at this example with the Java DSL.
People using the latest Camel release today, e.g. 2.9.2, should be instantly familiar with the DSL – something that just works from the very beginning!

Java DSL in Camel 1.0

And here is a sample of the XML DSL, which you can see in the source code as well.

XML DSL in Camel 1.0

And in this first release we also have the excellent Test Kit; for example, notice the usage of mocks and setting up expectations in the screenshot below. Testing Camel was made easy from day one. Yes, it's in the DNA of the Camel project.

Camel Test Kit already in Camel 1.0

And notice from the unit test above the reference to the founders of Apache Camel:
• James Strachan
• Rob Davies
• Hiram Chirino
• Guillaume Nodet

Thanks guys for creating this marvelous project. An impressive first release you guys did 5 years ago. I will end this blog by running the camel-example-spring from the Apache Camel 1.0 release.

$ cd examples
$ cd camel-example-spring
$ mvn compile
$ mvn camel:run

Now you should have patience, as Maven is downloading ancient JARs that are 5 years old. So it takes a while :)

Camel 1.0 example running

The screenshot above shows the Camel 1.0 example running. This example kicks off by consuming messages from a JMS queue and writing those to a file. So we need to connect with jconsole to send a message. I have highlighted the service URL to use in jconsole.

jconsole to send a message – Camel 1.0 rocks

In jconsole we expand the tree and find the test queue, and invoke the sendTextMessage operation with the text 'Camel 1.0 rocks'. In the 2nd screenshot above, you may notice in the last line from the console it says 'Received Exchange'. This is Camel logging this, as the example uses the route shown in the screenshot at the top of this blog. We can then see the file was written to the test directory as well, where the file name is the message id and the file content is what we sent from jconsole. This was 5 years ago, so let's fast forward to today.
The latest release of Apache Camel is 2.9.2, so let's migrate the old example to use this version instead. To do that you need to:
• Adjust the pom.xml to use Camel 2.9.2. The camel-activemq component has been moved from Camel to ActiveMQ, so you need to include that, and for logging we now use slf4j. The modified pom.xml is shown below.

Upgrading the example from Camel 1.0 to 2.9.2, adjusting the pom.xml file

• In the Spring XML file you need to change the Camel namespace: when Camel graduated to become an Apache Top Level Project, the namespace was migrated from activemq to camel. We also upgrade to Spring 3.0 in the XSD, and the activemq component now comes from ActiveMQ and not Camel. Finally, the packages attribute is now an XML tag, so you need to use <package> inside the <camelContext>. The updated file is shown below:

Upgrading Spring XML from Camel 1.0 to Camel 2.9.2

Okay, we are now ready to go. There is no need for any changes in the Java source code!

The example migrated from Camel 1.0 to 2.9.2 without any Java code changes!

And like before, we use JConsole to send a text message. I must say James and the founders nailed it in the Camel 1.0 release: the DSL from the example is fully compatible with today's Camel release. Indeed a very impressive first release. Camel was off to a great start, and the project has grown from strength to strength ever since. Reference: Looking at the impressive first Apache Camel release from our JCG partner Claus Ibsen at the Claus Ibsen riding the Apache Camel blog....
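The namespace migration described above looks roughly like this. This is a sketch only: the exact historic schema URIs and the placement of the package scan element depend on the precise Camel versions, so treat the URIs below as assumptions rather than gospel.

```xml
<!-- Before (Camel 1.0, pre-graduation): the schema lived under activemq,
     and packages was an attribute -->
<camelContext xmlns="http://activemq.apache.org/camel/schema/spring"
              packages="org.apache.camel.example.spring"/>

<!-- After (Camel 2.x): camel namespace, and the package scan is a nested tag -->
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <package>org.apache.camel.example.spring</package>
</camelContext>
```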

ANTLR: Getting Started

This post drives you through the basics of ANTLR. Previously, we learnt about setting up ANTLR as an external tool. RECAP! It's here: ANTLR External Tool :) So, here we go….

What is ANTLR?
• ANother Tool for Language Recognition – a language tool that provides a framework for constructing recognizers, interpreters, compilers, and translators from grammatical descriptions containing actions.

What can the target languages be?
• ActionScript, Ada, C, C#, C#2, C#3, D, Emacs ELisp, Objective-C, Java, JavaScript, Python, Ruby, Perl 6, Perl, PHP, Oberon, Scala

What does ANTLR support?
• Tree construction
• Error recovery
• Error handling
• Tree walking
• Translation

What environment does it support?
ANTLRWorks is the IDE for ANTLR. It is the graphical grammar editor and debugger, written by Jean Bovet using Swing.

What can ANTLR be used for?
• "Real" programming languages
• Domain-specific languages [DSL]

Who is using ANTLR?
• Programming languages: Boo, Groovy, Mantra, Nemerle, XRuby, etc.
• Other tools: Hibernate, IntelliJ IDEA, Jazillian, JBoss Rules, Keynote (Apple), WebLogic (Oracle), etc.

Where can you look for ANTLR?
You can always go to http://www.antlr.org
• to download ANTLR and ANTLRWorks, which are free and open source
• for docs, articles, the wiki, the mailing list, examples…. You can catch everything there!

Row your Boat….

Basic terms
• Lexer: converts a stream of characters to a stream of tokens.
• Parser: processes a stream of tokens, possibly creating an AST.
• Abstract Syntax Tree (AST): an intermediate tree representation of the parsed input that is simpler to process than the stream of tokens; it can also be processed multiple times.
• Tree Parser: processes an AST.
• StringTemplate: a library that supports using templates with placeholders for outputting text.

General Steps
• Write the grammar in one or more files
• Write string templates [optional]
• Debug your grammar with ANTLRWorks
• Generate classes from the grammar
• Write an application that uses the generated classes
• Feed the application text that conforms to the grammar

A Bit Further….

Let's write a simple grammar which consists of a lexer and a parser.

Lexer: breaks the input stream into tokens. Let's take the example of a simple declaration in C of the form "int a,b;" or "int a;", and the same with float. We can write the lexer as follows:

//TestLexer.g
grammar TestLexer;

ID        : ('a'..'z'|'A'..'Z'|'_') ('a'..'z'|'A'..'Z'|'0'..'9'|'_'|'.')*;
COMMA     : ',';
SEMICOLON : ';';
DATATYPE  : 'int' | 'float';

As we can see, these are the characters to be converted to tokens. Now let's write some rules that process the generated tokens, possibly creating a parse tree:

//TestParser.g
grammar TestParser;
options { language : Java; }

decl : DATATYPE ID (',' ID)* ;

Running ANTLR on the grammars generates the lexer and parser, TestLexer and TestParser. To actually try the grammar on some input, we need a test rig with a main() method as follows:

// Test.java
import org.antlr.runtime.*;

public class Test {
    public static void main(String[] args) throws Exception {
        // Create an input character stream from a file
        ANTLRFileStream input = new ANTLRFileStream("input"); // give the path to the input file
        // Create a TestLexer that feeds from that stream
        TestLexer lexer = new TestLexer(input);
        // Create a stream of tokens fed by the lexer
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        // Create a parser that feeds off the token stream
        TestParser parser = new TestParser(tokens);
        // Begin parsing at rule decl
        parser.decl();
    }
}

We shall see how to create an AST and walk over the tree in the next blog post… Happy learning….!
:) Reference: Getting Started With ANTLR:Basics from our JCG partner Swathi V at the * Techie(S)pArK * blog....
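As an aside, to get a feel for what the generated lexer does, here is a tiny hand-written tokenizer in plain Java that mimics the TestLexer rules above (ID, COMMA, SEMICOLON, DATATYPE). This is not generated by ANTLR and is not its API — just an illustration of the lexing step, with some liberties (e.g. `Character.isLetter` is broader than 'a'..'z'|'A'..'Z'):

```java
import java.util.ArrayList;
import java.util.List;

// Hand-written sketch of the lexing step described above: turn a stream of
// characters into a stream of tokens, with keywords winning over identifiers
// the way the DATATYPE rule does.
public class MiniLexer {
    public static List<String> tokenize(String input) {
        List<String> tokens = new ArrayList<>();
        int i = 0;
        while (i < input.length()) {
            char c = input.charAt(i);
            if (Character.isWhitespace(c)) { i++; continue; }
            if (c == ',') { tokens.add("COMMA"); i++; continue; }
            if (c == ';') { tokens.add("SEMICOLON"); i++; continue; }
            if (Character.isLetter(c) || c == '_') {
                int start = i;
                while (i < input.length() && (Character.isLetterOrDigit(input.charAt(i))
                        || input.charAt(i) == '_' || input.charAt(i) == '.')) {
                    i++;
                }
                String word = input.substring(start, i);
                // Keywords win over plain identifiers, as in the DATATYPE rule
                tokens.add(word.equals("int") || word.equals("float") ? "DATATYPE" : "ID:" + word);
                continue;
            }
            throw new IllegalArgumentException("Unexpected character: " + c);
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("int a,b;"));
        // [DATATYPE, ID:a, COMMA, ID:b, SEMICOLON]
    }
}
```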

Spring 3.1 profiles and Tomcat configuration

Spring 3.1 introduced a very useful feature called profiles. Thanks to it, it's easy to build one package that can be deployed in all environments (development, test, production and so on). By defining the system property spring.profiles.active, Spring allows us to create different beans depending on the active profile name, using XML configuration or the @Profile annotation. As we all know, system properties can be used in Spring XML files, and we will take advantage of that. In this post I will show how to use Spring profiles to create one package for all environments, and how to run it on Apache Tomcat.

Example architecture

I think the most common and wanted architecture is where applications deployed on dev, test and production differ only in the properties file containing their configuration. The WAR contains configuration for all environments and the correct one is chosen at runtime. So it is best if the application resources contain files like:

src/main/resources
- config_dev.properties
- config_production.properties
...

Configuring the Spring property placeholder

In order to load properties files in Spring we use <context:property-placeholder /> or the @PropertySource annotation. In my example I will follow the XML configuration approach for loading the properties file:

<?xml version='1.0' encoding='UTF-8'?>
<beans xmlns='http://www.springframework.org/schema/beans'
       xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
       xmlns:context='http://www.springframework.org/schema/context'
       xsi:schemaLocation='http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd'>

    <context:property-placeholder location='classpath:config_${spring.profiles.active}.properties' />

</beans>

Configuring Tomcat

Now it's time to tell Tomcat which profile is active.
There are at least two ways to do that:
• defining a context param in web.xml – this breaks the "one package for all environments" principle, so I don't recommend it
• defining the system property -Dspring.profiles.active=your-active-profile

I believe that defining a system property is the much better approach. So how do we define a system property for Tomcat? Over the internet I could find a lot of advice like "modify catalina.sh", because you will not find any configuration file for stuff like that. Modifying catalina.sh is a dirty, unmaintainable solution. There is a better way: just create the file setenv.sh in Tomcat's bin directory with the content:

JAVA_OPTS="$JAVA_OPTS -Dspring.profiles.active=dev"

and it will be loaded automatically when running catalina.sh start or run.

Conclusion

Using Spring profiles we can create flexible applications that can be deployed in several environments. How is this different from the Maven profiles approach? With Maven, the person building the application had to define which environment it was supposed to run in. With the approach described above, the environment decides whether it is development, testing or production. Thanks to that we can use exactly the same WAR file and deploy it everywhere. Reference: Spring 3.1 profiles and Tomcat configuration from our JCG partner Maciej Walkowiak at the Software Development Journey blog....
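The placeholder resolution above boils down to reading a system property and building a resource name from it. Here is a minimal plain-Java sketch of that mechanism; the file-name pattern is the one used in this post, while the "dev" fallback is purely an assumption of this sketch (Spring itself has no such default for this placeholder):

```java
// Sketch of what config_${spring.profiles.active}.properties resolution does:
// read the spring.profiles.active system property and build the resource name.
public class ProfileConfigResolver {
    public static String configFileFor(String activeProfile) {
        if (activeProfile == null || activeProfile.isEmpty()) {
            activeProfile = "dev"; // assumed fallback, for illustration only
        }
        return "config_" + activeProfile + ".properties";
    }

    public static void main(String[] args) {
        // e.g. run with -Dspring.profiles.active=production
        String profile = System.getProperty("spring.profiles.active");
        System.out.println("Loading " + configFileFor(profile));
    }
}
```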

Chain of Responsibility Pattern in Java

The Chain of Responsibility design pattern is needed when several processors should exist for performing an operation and a particular order should be defined for those processors. The ability to change the order of processors at runtime is also important.

The UML representation of the pattern is as below:

Handler defines the general structure of the processor objects. 'HandleRequest' here is the abstract processor method. Handler also holds a reference of its own type, which represents the next handler; for this a public 'setNextHandler' method should be defined, and so Handler is an abstract class. ConcreteHandlers define the different processor implementations. Finally, the Client is responsible for creating the required handlers (processors) and defining a chain order between them.

Generally, two different implementations may exist for this pattern. The difference concerns the location of the chain routing business logic, which may be in either the Handler abstract class or the ConcreteHandler classes, or in both. Samples of the first two approaches are given below:

1.
'Handler' has the chain routing business logic:

public abstract class Processor {

    protected Processor next;
    protected int threshold;

    public void setNextProcessor(Processor p) {
        next = p;
    }

    public void process(String data, int value) {
        if (value <= threshold) {
            processData(data);
        }
        if (next != null) {
            next.process(data, value);
        }
    }

    abstract protected void processData(String data);
}

public class ProcessorA extends Processor {

    public ProcessorA(int threshold) {
        this.threshold = threshold;
    }

    protected void processData(String data) {
        System.out.println("Processing with A: " + data);
    }
}

public class ProcessorB extends Processor {

    public ProcessorB(int threshold) {
        this.threshold = threshold;
    }

    protected void processData(String data) {
        System.out.println("Processing with B: " + data);
    }
}

public class Client {

    public static void main(String[] args) {
        Processor p, p1, p2;
        p1 = p = new ProcessorA(2);
        p2 = new ProcessorB(1);
        p1.setNextProcessor(p2);

        // Handled by ProcessorA only
        p.process("data1", 2);
        // Handled by ProcessorA and ProcessorB
        p.process("data2", 1);
    }
}

2.
'ConcreteHandler's have the chain routing business logic:

public abstract class Processor {

    protected Processor next;
    protected int threshold;

    public void setNextProcessor(Processor p) {
        next = p;
    }

    abstract protected void processData(String data, int value);
}

public class ProcessorA extends Processor {

    public ProcessorA(int threshold) {
        this.threshold = threshold;
    }

    protected void processData(String data, int value) {
        System.out.println("Processing with A: " + data);
        if (value >= threshold && next != null) {
            next.processData(data, value);
        }
    }
}

public class ProcessorB extends Processor {

    public ProcessorB(int threshold) {
        this.threshold = threshold;
    }

    protected void processData(String data, int value) {
        System.out.println("Processing with B: " + data);
        if (value >= threshold && next != null) {
            next.processData(data, value);
        }
    }
}

public class Client {

    public static void main(String[] args) {
        Processor p, p1, p2;
        p1 = p = new ProcessorA(2);
        p2 = new ProcessorB(1);
        p1.setNextProcessor(p2);

        // Handled by ProcessorA only
        p.processData("data1", 1);
        // Handled by ProcessorA and ProcessorB
        p.processData("data2", 2);
    }
}

Reference: 2 Implementations of "Chain of Responsibility" Pattern with Java from our JCG partner Cagdas Basaraner at the CodeBuild blog....
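For a quick self-contained check of the chaining behavior, here is the first variant collapsed into one file. Collecting handler names into a list instead of printing is an addition of this sketch (not part of the pattern description above), purely so the behavior is easy to assert:

```java
import java.util.ArrayList;
import java.util.List;

// Single-file sketch of the first variant: the abstract handler owns the
// chain routing; concrete handlers only contribute their own processing.
public class ChainDemo {
    static abstract class Processor {
        protected Processor next;
        protected final int threshold;
        Processor(int threshold) { this.threshold = threshold; }
        void setNextProcessor(Processor p) { next = p; }

        void process(String data, int value, List<String> handledBy) {
            if (value <= threshold) {
                handledBy.add(name() + ":" + data); // this handler accepts the request
            }
            if (next != null) {
                next.process(data, value, handledBy); // pass along the chain
            }
        }
        abstract String name();
    }

    static class ProcessorA extends Processor {
        ProcessorA(int threshold) { super(threshold); }
        String name() { return "A"; }
    }

    static class ProcessorB extends Processor {
        ProcessorB(int threshold) { super(threshold); }
        String name() { return "B"; }
    }

    public static List<String> run(String data, int value) {
        Processor a = new ProcessorA(2);
        Processor b = new ProcessorB(1);
        a.setNextProcessor(b);
        List<String> handledBy = new ArrayList<>();
        a.process(data, value, handledBy);
        return handledBy;
    }

    public static void main(String[] args) {
        System.out.println(run("data1", 2)); // only A handles it (2 <= 2, but 2 > 1)
        System.out.println(run("data2", 1)); // A and B both handle it
    }
}
```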

Concurrency – Executors and Spring Integration

Thread Pool/Executors Based Implementation A better approach than the raw thread version, is a Thread pool based one, where an appropriate thread pool size is defined based on the system where the task is running – Number of CPU’s/(1-Blocking Coefficient of Task). Venkat Subramaniams book has more details:First I defined a custom task to generate the Report Part, given the Report Part Request, this is implemented as a Callable: public class ReportPartRequestCallable implements Callable<ReportPart> { private final ReportRequestPart reportRequestPart; private final ReportPartGenerator reportPartGenerator;public ReportPartRequestCallable(ReportRequestPart reportRequestPart, ReportPartGenerator reportPartGenerator) { this.reportRequestPart = reportRequestPart; this.reportPartGenerator = reportPartGenerator; }@Override public ReportPart call() { return this.reportPartGenerator.generateReportPart(reportRequestPart); } }public class ExecutorsBasedReportGenerator implements ReportGenerator { private static final Logger logger = LoggerFactory.getLogger(ExecutorsBasedReportGenerator.class);private ReportPartGenerator reportPartGenerator;private ExecutorService executors = Executors.newFixedThreadPool(10);@Override public Report generateReport(ReportRequest reportRequest) { List<Callable<ReportPart>> tasks = new ArrayList<Callable<ReportPart>>(); List<ReportRequestPart> reportRequestParts = reportRequest.getRequestParts(); for (ReportRequestPart reportRequestPart : reportRequestParts) { tasks.add(new ReportPartRequestCallable(reportRequestPart, reportPartGenerator)); }List<Future<ReportPart>> responseForReportPartList; List<ReportPart> reportParts = new ArrayList<ReportPart>(); try { responseForReportPartList = executors.invokeAll(tasks); for (Future<ReportPart> reportPartFuture : responseForReportPartList) { reportParts.add(reportPartFuture.get()); }} catch (Exception e) { logger.error(e.getMessage(), e); throw new RuntimeException(e); } return new Report(reportParts); 
}
    ......
}

Here a thread pool is created using the Executors.newFixedThreadPool(10) call, with a pool size of 10. A callable task is generated for each of the report request parts and handed over to the thread pool using the ExecutorService abstraction:

responseForReportPartList = executors.invokeAll(tasks);

This call returns a List of Futures, which support a get() method that blocks until the response is available. This is clearly a much better implementation compared to the raw thread version; the number of threads is constrained to a manageable number under load.

Spring Integration Based Implementation

The approach that I personally like the most is using Spring Integration. The reason is that with Spring Integration I focus on the components doing the different tasks and leave it up to Spring Integration to wire the flow together, using an XML-based or annotation-based configuration. Here I will be using an XML-based configuration. The components in my case are:

1. The component to generate the report part, given the report part request, which I showed earlier.
2. A component to split the report request into report request parts:

public class DefaultReportRequestSplitter implements ReportRequestSplitter {
    @Override
    public List<ReportRequestPart> split(ReportRequest reportRequest) {
        return reportRequest.getRequestParts();
    }
}

3.
A component to assemble/aggregate the report parts into a whole report: public class DefaultReportAggregator implements ReportAggregator{@Override public Report aggregate(List<ReportPart> reportParts) { return new Report(reportParts); }} And that is all the java code that is required with Spring Integration, the rest of the is wiring – here I have used a Spring integration configuration file: <?xml version='1.0' encoding='UTF-8'?> <beans ....<int:channel id='report.partsChannel'/> <int:channel id='report.reportChannel'/> <int:channel id='report.partReportChannel'> <int:queue capacity='50'/> </int:channel> <int:channel id='report.joinPartsChannel'/><int:splitter id='splitter' ref='reportsPartSplitter' method='split' input-channel='report.partsChannel' output-channel='report.partReportChannel'/> <task:executor id='reportPartGeneratorExecutor' pool-size='10' queue-capacity='50' /> <int:service-activator id='reportsPartServiceActivator' ref='reportPartReportGenerator' method='generateReportPart' input-channel='report.partReportChannel' output-channel='report.joinPartsChannel'> <int:poller task-executor='reportPartGeneratorExecutor' fixed-delay='500'> </int:poller> </int:service-activator><int:aggregator ref='reportAggregator' method='aggregate' input-channel='report.joinPartsChannel' output-channel='report.reportChannel' ></int:aggregator><int:gateway id='reportGeneratorGateway' service-interface='org.bk.sisample.springintegration.ReportGeneratorGateway' default-request-channel='report.partsChannel' default-reply-channel='report.reportChannel'/> <bean name='reportsPartSplitter' class='org.bk.sisample.springintegration.processors.DefaultReportRequestSplitter'></bean> <bean name='reportPartReportGenerator' class='org.bk.sisample.processors.DummyReportPartGenerator'/> <bean name='reportAggregator' class='org.bk.sisample.springintegration.processors.DefaultReportAggregator'/> <bean name='reportGenerator' 
class='org.bk.sisample.springintegration.SpringIntegrationBasedReportGenerator'/></beans> Spring Source Tool Suite provides a great way of visualizing this file:this matches perfectly with my original view of the user flow:In the Spring Integration version of the code, I have defined the different components to handle the different parts of the flow: 1. A splitter to convert a report request to report request parts: <int:splitter id='splitter' ref='reportsPartSplitter' method='split' input-channel='report.partsChannel' output-channel='report.partReportChannel'/> 2. A service activator component to generate a report part from a report part request: <int:service-activator id='reportsPartServiceActivator' ref='reportPartReportGenerator' method='generateReportPart' input-channel='report.partReportChannel' output-channel='report.joinPartsChannel'> <int:poller task-executor='reportPartGeneratorExecutor' fixed-delay='500'> </int:poller> </int:service-activator> 3. An aggregator to join the report parts back to a report, and is intelligent enough to correlate the original split report requests appropriately without any explicit coding required for it: <int:aggregator ref='reportAggregator' method='aggregate' input-channel='report.joinPartsChannel' output-channel='report.reportChannel' ></int:aggregator> What is interesting in this code is that, like in the executors based sample, the number of threads that services each of these components is completely configurable using the xml file, by using appropriate channels to connect the different components together and by using task executors with the thread pool size set as attribute of the executor. 
In this code, I have defined a queue channel where the report request parts come in: <int:channel id='report.partReportChannel'> <int:queue capacity='50'/> </int:channel> and is serviced by the service activator component, using a task executor with a thread pool of size 10, and a capacity of 50: <task:executor id='reportPartGeneratorExecutor' pool-size='10' queue-capacity='50' /> <int:service-activator id='reportsPartServiceActivator' ref='reportPartReportGenerator' method='generateReportPart' input-channel='report.partReportChannel' output-channel='report.joinPartsChannel'> <int:poller task-executor='reportPartGeneratorExecutor' fixed-delay='500'> </int:poller> </int:service-activator> All this through configuration! The entire codebase for this sample is available at this github location: https://github.com/bijukunjummen/si-sample Reference: Concurrency – Executors and Spring Integration from our JCG partner Biju Kunjummen at the all and sundry blog....
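The executor-based pattern from this post (split the work into Callables, hand them to invokeAll, then block on each Future) can be reduced to a small self-contained sketch. The "report parts" here are plain strings standing in for the real domain objects of the article:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the fork/join-style flow: one Callable per request part,
// invokeAll on a fixed pool, then blocking get() on each Future to
// assemble the final result in the original order.
public class ExecutorsReportSketch {
    public static List<String> generateReport(List<String> requestParts) {
        ExecutorService executors = Executors.newFixedThreadPool(10);
        try {
            List<Callable<String>> tasks = new ArrayList<>();
            for (String part : requestParts) {
                tasks.add(() -> "generated-" + part); // stand-in for the part generator
            }
            List<String> reportParts = new ArrayList<>();
            for (Future<String> f : executors.invokeAll(tasks)) {
                reportParts.add(f.get()); // blocks until each part is ready
            }
            return reportParts;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            executors.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(generateReport(List.of("part1", "part2")));
        // invokeAll preserves task order: [generated-part1, generated-part2]
    }
}
```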

Bye, Bye, 5 * 60 * 1000 //Five Minutes, Bye, Bye

In this post I am going to talk about a class that was first introduced in version 1.5, which I have used a lot, but talking with some people I found they didn't know it exists. This class is TimeUnit. The TimeUnit class represents time durations at a given unit of granularity and also provides utility methods to convert between different units, as well as methods to perform timing delays. TimeUnit is an enum with seven levels of granularity: DAYS, HOURS, MICROSECONDS, MILLISECONDS, MINUTES, NANOSECONDS and SECONDS.

The first feature that I find useful is the convert method. With this method you can say goodbye to the typical:

private static final int FIVE_SECONDS_IN_MILLIS = 1000 * 5;

and write something like:

long duration = TimeUnit.MILLISECONDS.convert(5, TimeUnit.SECONDS);

Equivalent operations with more readable method names also exist. For example, the same conversion could be expressed as:

long duration = TimeUnit.SECONDS.toMillis(5);

The second really useful set of operations are those related to pausing the current thread. For example, you can sleep the current thread with:

TimeUnit.MINUTES.sleep(5);

instead of:

Thread.sleep(5*60*1000);

You can also use it for join and wait with a timeout:

Thread t = new Thread();
TimeUnit.SECONDS.timedJoin(t, 5);

So as we can see, the TimeUnit class is designed for expressiveness: you can do the same as you did previously, but in a more readable way. Notice that you can also use static imports and the code will be even more readable. Reference: Bye, Bye, 5 * 60 * 1000 //Five Minutes, Bye, Bye from our JCG partner Alex Soto at the One Jar To Rule Them All blog....
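The conversions above are easy to verify with a short runnable check:

```java
import java.util.concurrent.TimeUnit;

// Demonstration that convert() and the named shortcuts (toMillis, ...) agree,
// replacing the hand-rolled 5 * 60 * 1000 arithmetic.
public class TimeUnitDemo {
    public static void main(String[] args) {
        long viaConvert = TimeUnit.MILLISECONDS.convert(5, TimeUnit.SECONDS);
        long viaShortcut = TimeUnit.SECONDS.toMillis(5);
        System.out.println(viaConvert + " == " + viaShortcut); // 5000 == 5000

        // Five minutes in milliseconds, no manual arithmetic needed
        System.out.println(TimeUnit.MINUTES.toMillis(5)); // 300000
    }
}
```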

Using Redis with Spring

As NoSQL solutions are getting more and more popular for many kinds of problems, modern projects more often consider using some (or several) of them instead of (or side-by-side with) a traditional RDBMS. I have already covered my experience with MongoDB in this, this and this posts. In this post I would like to switch gears a bit towards Redis, an advanced key-value store. Aside from very rich key-value semantics, Redis also supports pub-sub messaging and transactions. In this post I am going just to touch the surface and demonstrate how simple it is to integrate Redis into your Spring application.

As always, we will start with the Maven POM file for our project:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example.spring</groupId>
    <artifactId>redis</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.version>3.1.0.RELEASE</spring.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-redis</artifactId>
            <version>1.0.0.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib-nodep</artifactId>
            <version>2.2</version>
        </dependency>

        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.16</version>
        </dependency>

        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.0.0</version>
            <type>jar</type>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>
</project>

Spring Data Redis is another project under the Spring Data umbrella which provides seamless integration of Redis into your application. There are several Redis clients for Java, and I have chosen Jedis as it is stable and recommended by the Redis team at the moment of writing this post. We will start with a simple configuration and introduce the necessary components first. Then, as we move forward, the configuration will be extended a bit to demonstrate pub-sub capabilities. Thanks to Java config support, we will create the configuration class and have all our dependencies strongly typed, no XML anymore:

package com.example.redis.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericToStringSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class AppConfig {
    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        return new JedisConnectionFactory();
    }

    @Bean
    RedisTemplate< String, Object > redisTemplate() {
        final RedisTemplate< String, Object > template = new RedisTemplate< String, Object >();
        template.setConnectionFactory( jedisConnectionFactory() );
        template.setKeySerializer( new StringRedisSerializer() );
        template.setHashValueSerializer( new GenericToStringSerializer< Object >( Object.class ) );
        template.setValueSerializer( new GenericToStringSerializer< Object >( Object.class ) );
        return template;
    }
}

That's basically everything we need, assuming we have a single Redis server up and running on localhost with the default configuration. Let's consider several common use cases: setting a key to some value, storing an object and, finally, a pub-sub implementation.
Storing and retrieving a key/value pair is very simple:

@Autowired private RedisTemplate< String, Object > template;

public Object getValue( final String key ) {
    return template.opsForValue().get( key );
}

public void setValue( final String key, final String value ) {
    template.opsForValue().set( key, value );
}

Optionally, the key could be set to expire (yet another useful feature of Redis), e.g. let our keys expire in 1 second:

public void setValue( final String key, final String value ) {
    template.opsForValue().set( key, value );
    template.expire( key, 1, TimeUnit.SECONDS );
}

Arbitrary objects could be saved into Redis as hashes (maps); e.g. let's save an instance of some class User

public class User {
    private final Long id;
    private String name;
    private String email;

    // Setters and getters are omitted for simplicity
}

into Redis using the key pattern "user:<id>":

public void setUser( final User user ) {
    final String key = String.format( "user:%s", user.getId() );
    final Map< String, Object > properties = new HashMap< String, Object >();

    properties.put( "id", user.getId() );
    properties.put( "name", user.getName() );
    properties.put( "email", user.getEmail() );

    template.opsForHash().putAll( key, properties );
}

Respectively, the object could easily be inspected and retrieved using the id:

public User getUser( final Long id ) {
    final String key = String.format( "user:%s", id );

    final String name = ( String )template.opsForHash().get( key, "name" );
    final String email = ( String )template.opsForHash().get( key, "email" );

    return new User( id, name, email );
}

There is much, much more that can be done using Redis; I highly encourage you to take a look at it. It surely is not a silver bullet, but it can solve many challenging problems very easily. Finally, let me show how to use pub-sub messaging with Redis.
Let's add a bit more configuration here (as part of the AppConfig class):

@Bean
MessageListenerAdapter messageListener() {
    return new MessageListenerAdapter( new RedisMessageListener() );
}

@Bean
RedisMessageListenerContainer redisContainer() {
    final RedisMessageListenerContainer container = new RedisMessageListenerContainer();

    container.setConnectionFactory( jedisConnectionFactory() );
    container.addMessageListener( messageListener(), new ChannelTopic( "my-queue" ) );

    return container;
}

The style of the message listener definition should look very familiar to Spring users: generally, it is the same approach we follow to define JMS message listeners. The missing piece is our RedisMessageListener class definition:

package com.example.redis.impl;

import org.springframework.data.redis.connection.Message;
import org.springframework.data.redis.connection.MessageListener;

public class RedisMessageListener implements MessageListener {
    @Override
    public void onMessage( Message message, byte[] pattern ) {
        System.out.println( "Received by RedisMessageListener: " + message.toString() );
    }
}

Now that we have our message listener, let's see how we can push some messages into the channel using Redis. As always, it's pretty simple (note that the channel name must match the one the listener is subscribed to, "my-queue"):

@Autowired private RedisTemplate< String, Object > template;

public void publish( final String message ) {
    template.execute( new RedisCallback< Long >() {
        @SuppressWarnings( "unchecked" )
        @Override
        public Long doInRedis( RedisConnection connection ) throws DataAccessException {
            return connection.publish(
                ( ( RedisSerializer< String > )template.getKeySerializer() ).serialize( "my-queue" ),
                ( ( RedisSerializer< Object > )template.getValueSerializer() ).serialize( message ) );
        }
    } );
}

That's basically it for a very quick introduction, but definitely enough to fall in love with Redis. Reference: Using Redis with Spring from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog....
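If you don't have a Redis server handy, the pub-sub contract above (a listener bound to a channel, a publisher pushing messages to it) can be illustrated with a tiny in-memory stand-in. This is emphatically not Redis or Spring Data Redis, just the shape of the interaction — including why the publisher and listener must agree on the channel name:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// In-memory stand-in for the channel/listener interaction above. Not Redis:
// just the publish/subscribe shape, with PUBLISH returning the receiver
// count the way Redis does.
public class MiniBroker {
    private final Map<String, List<Consumer<String>>> listeners = new HashMap<>();

    public void addMessageListener(String channel, Consumer<String> listener) {
        listeners.computeIfAbsent(channel, c -> new ArrayList<>()).add(listener);
    }

    public int publish(String channel, String message) {
        List<Consumer<String>> subs = listeners.getOrDefault(channel, List.of());
        subs.forEach(l -> l.accept(message));
        return subs.size(); // number of listeners that received the message
    }

    public static void main(String[] args) {
        MiniBroker broker = new MiniBroker();
        broker.addMessageListener("my-queue", m -> System.out.println("Received: " + m));
        System.out.println(broker.publish("my-queue", "hello")); // 1 receiver
        System.out.println(broker.publish("other", "hello"));    // 0 receivers
    }
}
```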
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy