What's New Here?


Clean Unit Test Patterns – Presentation Slides

I was given the opportunity to talk at the GDG DevFest Karlsruhe 2014 conference about ‘Clean Unit Test Patterns’. Thanks to the organizers for inviting me and thanks to everyone who listened to my talk. As promised, I have shared the presentation, e.g. for those who want to have a look at the additional slides I did not cover during the talk: Clean Unit Test Patterns, GDG DevFest Karlsruhe 2014 – October 25th, 2014. JUnit testing is not as trivial as it might look. If not written with care, tests can be a show-stopper with respect to maintenance and progression. Hence this session introduces the clean structure of well-written unit tests. It explains the significance of test isolation and how it can be achieved by means of various test double patterns. The topic is deepened by a brief discussion of the pros and cons of test double frameworks. The talk continues with the JUnit concepts Runners and Rules and illustrates how these affect testing efficiency and readability. Descriptive examples are used to enlarge upon the subject. Finally, the presentation covers unit test assertions and shows how custom verification patterns of Hamcrest or AssertJ can help write clear, simple and expressive assertion statements. Reference: Clean Unit Test Patterns – Presentation Slides from our JCG partner Frank Appel at the Code Affine blog.
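The slides themselves are not reproduced here, but to make the last point concrete, a custom AssertJ assertion of the kind mentioned in the abstract typically looks like the sketch below; the Account class and its balance accessor are hypothetical, invented purely for illustration:

// Hypothetical domain class, used only for illustration.
class Account {
    private final long balance;
    Account(long balance) { this.balance = balance; }
    long getBalance() { return balance; }
}

// A custom AssertJ assertion: test code reads like a specification.
class AccountAssert extends org.assertj.core.api.AbstractAssert<AccountAssert, Account> {
    AccountAssert(Account actual) { super(actual, AccountAssert.class); }

    static AccountAssert assertThat(Account actual) {
        return new AccountAssert(actual);
    }

    AccountAssert hasBalance(long expected) {
        isNotNull();
        if (actual.getBalance() != expected) {
            failWithMessage("Expected balance <%s> but was <%s>",
                    expected, actual.getBalance());
        }
        return this;
    }
}

// In a test: AccountAssert.assertThat(account).hasBalance(100);

The benefit is that the assertion both reads fluently and produces a domain-specific failure message, instead of a generic equality error.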

Apache Commons IO Tutorial: A beginner’s guide

Apache Commons IO is a Java library created and maintained by the Apache Foundation. It provides a multitude of classes that enable developers to do common tasks easily and with much less boiler-plate code, that needs to be written over and over again for every single project.The importance of libraries like that is huge, because they are mature and maintained by experienced developers, who have thought of every possible edge-case, or fixed the various bugs when they appeared. In this example, we are going to present some methods with varying functionality, depending on the package of org.apache.commons.io that they belong to. We are not going to delve too deep inside the library, as it is enormous, but we are going to provide examples for some common usage that can definitely come in handy for every developer, beginner or not.1. Apache Commons IO Example The code for this example will be broken into several classes, and each of them will be representative of a particular area that Apache Commons IO covers. These areas are:Utility classes Input Output Filters Comparators File MonitorTo make things even clearer, we are going to break down the output in chunks, one for each of the classes that we have created. We have also created a directory inside the project folder (named ExampleFolder) which will contain the various files that will be used in this example to show the functionality of the various classes. NOTE: In order to use org.apache.commons.io, you need to download the jar files (found here) and add them to the build path of your Eclipse project, by right clicking on the project folder -> Build Path -> Add external archives. ApacheCommonsExampleMain.java public class ApacheCommonsExampleMain {public static void main(String[] args) { UtilityExample.runExample(); FileMonitorExample.runExample(); FiltersExample.runExample(); InputExample.runExample(); OutputExample.runExample(); ComparatorExample.runExample(); } }This is the main class that will be used to run the methods from the other classes of our example. You can comment certain classes in order to see the output that you want to.1.1 Utility Classes There are various Utility classes, inside the package org.apache.commons.io, most of which have to do with file manipulation and String comparison. We have used some of the most important ones here:FilenameUtils: This class has methods that work with file names, and the main point is to make life easier in every OS (works equally well in Unix and Windows systems). FileUtils: It provides methods for file manipulation (moving, opening and reading a file, checking if a file exists, etc). IOCase: String manipulation and comparison methods. FileSystemUtils: Its methods return the free space of a designated drive.UtilityExample.java import java.io.File; import java.io.IOException;import org.apache.commons.io.FileSystemUtils; import org.apache.commons.io.FileUtils; import org.apache.commons.io.FilenameUtils; import org.apache.commons.io.LineIterator; import org.apache.commons.io.IOCase;public final class UtilityExample { // We are using the file exampleTxt.txt in the folder ExampleFolder, // and we need to provide the full path to the Utility classes. 
private static final String EXAMPLE_TXT_PATH = "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\exampleTxt.txt"; private static final String PARENT_DIR = "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample";public static void runExample() throws IOException { System.out.println("Utility Classes example..."); // FilenameUtils System.out.println("Full path of exampleTxt: " + FilenameUtils.getFullPath(EXAMPLE_TXT_PATH)); System.out.println("Full name of exampleTxt: " + FilenameUtils.getName(EXAMPLE_TXT_PATH)); System.out.println("Extension of exampleTxt: " + FilenameUtils.getExtension(EXAMPLE_TXT_PATH)); System.out.println("Base name of exampleTxt: " + FilenameUtils.getBaseName(EXAMPLE_TXT_PATH)); // FileUtils // We can create a new File object using FileUtils.getFile(String) // and then use this object to get information from the file. File exampleFile = FileUtils.getFile(EXAMPLE_TXT_PATH); LineIterator iter = FileUtils.lineIterator(exampleFile); System.out.println("Contents of exampleTxt..."); while (iter.hasNext()) { System.out.println("\t" + iter.next()); } iter.close(); // We can check if a file exists somewhere inside a certain directory. File parent = FileUtils.getFile(PARENT_DIR); System.out.println("Parent directory contains exampleTxt file: " + FileUtils.directoryContains(parent, exampleFile)); // IOCase String str1 = "This is a new String."; String str2 = "This is another new String, yes!"; System.out.println("Ends with string (case sensitive): " + IOCase.SENSITIVE.checkEndsWith(str1, "string.")); System.out.println("Ends with string (case insensitive): " + IOCase.INSENSITIVE.checkEndsWith(str1, "string.")); System.out.println("String equality: " + IOCase.SENSITIVE.checkEquals(str1, str2)); // FileSystemUtils System.out.println("Free disk space (in KB): " + FileSystemUtils.freeSpaceKb("C:")); System.out.println("Free disk space (in MB): " + FileSystemUtils.freeSpaceKb("C:") / 1024); } }Output Utility Classes example... Full path of exampleTxt: C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\ Full name of exampleTxt: exampleTxt.txt Extension of exampleTxt: txt Base name of exampleTxt: exampleTxt Contents of exampleTxt... This is an example text file. We will use it for experimenting with Apache Commons IO. Parent directory contains exampleTxt file: true Ends with string (case sensitive): false Ends with string (case insensitive): true String equality: false Free disk space (in KB): 32149292 Free disk space (in MB): 313951.2 File Monitor The org.apache.commons.io.monitor package contains methods that can get specific information about a File, but more importantly, it can create handlers that can be used to track changes in a specific file or folder and take action depending on the changes. 
Let’s take a look on the code: FileMonitorExample.java import java.io.File; import java.io.IOException;import org.apache.commons.io.FileDeleteStrategy; import org.apache.commons.io.FileUtils; import org.apache.commons.io.monitor.FileAlterationListenerAdaptor; import org.apache.commons.io.monitor.FileAlterationMonitor; import org.apache.commons.io.monitor.FileAlterationObserver; import org.apache.commons.io.monitor.FileEntry;public final class FileMonitorExample { private static final String EXAMPLE_PATH = "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\exampleFileEntry.txt"; private static final String PARENT_DIR = "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder"; private static final String NEW_DIR = "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\newDir"; private static final String NEW_FILE = "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\newFile.txt";public static void runExample() { System.out.println("File Monitor example..."); // FileEntry // We can monitor changes and get information about files // using the methods of this class. FileEntry entry = new FileEntry(FileUtils.getFile(EXAMPLE_PATH)); System.out.println("File monitored: " + entry.getFile()); System.out.println("File name: " + entry.getName()); System.out.println("Is the file a directory?: " + entry.isDirectory()); // File Monitoring // Create a new observer for the folder and add a listener // that will handle the events in a specific directory and take action. File parentDir = FileUtils.getFile(PARENT_DIR); FileAlterationObserver observer = new FileAlterationObserver(parentDir); observer.addListener(new FileAlterationListenerAdaptor() { @Override public void onFileCreate(File file) { System.out.println("File created: " + file.getName()); } @Override public void onFileDelete(File file) { System.out.println("File deleted: " + file.getName()); } @Override public void onDirectoryCreate(File dir) { System.out.println("Directory created: " + dir.getName()); } @Override public void onDirectoryDelete(File dir) { System.out.println("Directory deleted: " + dir.getName()); } }); // Add a monior that will check for events every x ms, // and attach all the different observers that we want. FileAlterationMonitor monitor = new FileAlterationMonitor(500, observer); try { monitor.start(); // After we attached the monitor, we can create some files and directories // and see what happens! File newDir = new File(NEW_DIR); File newFile = new File(NEW_FILE); newDir.mkdirs(); newFile.createNewFile(); Thread.sleep(1000); FileDeleteStrategy.NORMAL.delete(newDir); FileDeleteStrategy.NORMAL.delete(newFile); Thread.sleep(1000); monitor.stop(); } catch (IOException e) { e.printStackTrace(); } catch (InterruptedException e) { e.printStackTrace(); } catch (Exception e) { e.printStackTrace(); } } } Output File Monitor example... File monitored: C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\exampleFileEntry.txt File name: exampleFileEntry.txt Is the file a directory?: false Directory created: newDir File created: newFile.txt Directory deleted: newDir File deleted: newFile.txtLet’s take a look on what happened here. We used some classes of the org.apache.commons.io.monitor package, that enable us to create handlers that listen to specific events (in our case, everything that has to do with files, folders, directories etc). 
In order to achieve that, there are certain steps that need to be taken:Create a File object, that is a reference to the directory that we want to listen to for changes. Create a FileAlterationObserver object, that will observe for those changes. Add a FileAlterationListenerAdaptor to the observer using the addListener() method. You can create the adaptor using various ways, but in our example we used a nested class that implements only some of the methods (the ones we need for the example requirements). Create a FileAlterationMonitor and add the observers that you have, as well as the interval (in ms). Start the monitor using the start() method and stop it when necessary using the stop() method.1.3 Filters Filters can be used in a variety of combinations and ways. Their job is to allow us to easily make distinctions between files and get the ones that satisfy certain criteria. We can also combine filters to perform logical comparisons and get our files much more precisely, without using tedious String comparisons afterwards. FiltersExample.java import java.io.File;import org.apache.commons.io.FileUtils; import org.apache.commons.io.IOCase; import org.apache.commons.io.filefilter.AndFileFilter; import org.apache.commons.io.filefilter.NameFileFilter; import org.apache.commons.io.filefilter.NotFileFilter; import org.apache.commons.io.filefilter.OrFileFilter; import org.apache.commons.io.filefilter.PrefixFileFilter; import org.apache.commons.io.filefilter.SuffixFileFilter; import org.apache.commons.io.filefilter.WildcardFileFilter;public final class FiltersExample { private static final String PARENT_DIR = "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder";public static void runExample() { System.out.println("File Filter example..."); // NameFileFilter // Right now, in the parent directory we have 3 files: // directory example // file exampleEntry.txt // file exampleTxt.txt // Get all the files in the specified directory // that are named "example". File dir = FileUtils.getFile(PARENT_DIR); String[] acceptedNames = {"example", "exampleTxt.txt"}; for (String file: dir.list(new NameFileFilter(acceptedNames, IOCase.INSENSITIVE))) { System.out.println("File found, named: " + file); } //WildcardFileFilter // We can use wildcards in order to get less specific results // ? used for 1 missing char // * used for multiple missing chars for (String file: dir.list(new WildcardFileFilter("*ample*"))) { System.out.println("Wildcard file found, named: " + file); } // PrefixFileFilter // We can also use the equivalent of startsWith // for filtering files. for (String file: dir.list(new PrefixFileFilter("example"))) { System.out.println("Prefix file found, named: " + file); } // SuffixFileFilter // We can also use the equivalent of endsWith // for filtering files. for (String file: dir.list(new SuffixFileFilter(".txt"))) { System.out.println("Suffix file found, named: " + file); } // OrFileFilter // We can use some filters of filters. // in this case, we use a filter to apply a logical // or between our filters. for (String file: dir.list(new OrFileFilter( new WildcardFileFilter("*ample*"), new SuffixFileFilter(".txt")))) { System.out.println("Or file found, named: " + file); } // And this can become very detailed. // Eg, get all the files that have "ample" in their name // but they are not text files (so they have no ".txt" extension. for (String file: dir.list(new AndFileFilter( // we will match 2 filters... new WildcardFileFilter("*ample*"), // ...the 1st is a wildcard... 
new NotFileFilter(new SuffixFileFilter(".txt"))))) { // ...and the 2nd is NOT .txt. System.out.println("And/Not file found, named: " + file); } } }Output File Filter example... File found, named: example File found, named: exampleTxt.txt Wildcard file found, named: example Wildcard file found, named: exampleFileEntry.txt Wildcard file found, named: exampleTxt.txt Prefix file found, named: example Prefix file found, named: exampleFileEntry.txt Prefix file found, named: exampleTxt.txt Suffix file found, named: exampleFileEntry.txt Suffix file found, named: exampleTxt.txt Or file found, named: example Or file found, named: exampleFileEntry.txt Or file found, named: exampleTxt.txt And/Not file found, named: example1.4 Comparators The org.apache.commons.io.comparator package contains classes that allow us to easily compare and sort files and directories. We just need to provide a list of files and, depending on the class, compare them in various ways. ComparatorExample.java import java.io.File; import java.util.Date;import org.apache.commons.io.FileUtils; import org.apache.commons.io.IOCase; import org.apache.commons.io.comparator.LastModifiedFileComparator; import org.apache.commons.io.comparator.NameFileComparator; import org.apache.commons.io.comparator.SizeFileComparator;public final class ComparatorExample { private static final String PARENT_DIR = "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder"; private static final String FILE_1 = "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\example"; private static final String FILE_2 = "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\exampleTxt.txt"; public static void runExample() { System.out.println("Comparator example..."); //NameFileComparator // Let's get a directory as a File object // and sort all its files. File parentDir = FileUtils.getFile(PARENT_DIR); NameFileComparator comparator = new NameFileComparator(IOCase.SENSITIVE); File[] sortedFiles = comparator.sort(parentDir.listFiles()); System.out.println("Sorted by name files in parent directory: "); for (File file: sortedFiles) { System.out.println("\t"+ file.getAbsolutePath()); } // SizeFileComparator // We can compare files based on their size. // The boolean in the constructor is about the directories. // true: directory's contents count to the size. // false: directory is considered zero size. SizeFileComparator sizeComparator = new SizeFileComparator(true); File[] sizeFiles = sizeComparator.sort(parentDir.listFiles()); System.out.println("Sorted by size files in parent directory: "); for (File file: sizeFiles) { System.out.println("\t"+ file.getName() + " with size (kb): " + file.length()); } // LastModifiedFileComparator // We can use this class to find which file was more recently modified. LastModifiedFileComparator lastModified = new LastModifiedFileComparator(); File[] lastModifiedFiles = lastModified.sort(parentDir.listFiles()); System.out.println("Sorted by last modified files in parent directory: "); for (File file: lastModifiedFiles) { Date modified = new Date(file.lastModified()); System.out.println("\t"+ file.getName() + " last modified on: " + modified); } // Or, we can also compare 2 specific files and find which one was last modified. // returns > 0 if the first file was last modified. 
// returns < 0 if the second file was last modified. File file1 = FileUtils.getFile(FILE_1); File file2 = FileUtils.getFile(FILE_2); if (lastModified.compare(file1, file2) > 0) System.out.println("File " + file1.getName() + " was modified last because..."); else System.out.println("File " + file2.getName() + " was modified last because..."); System.out.println("\t"+ file1.getName() + " last modified on: " + new Date(file1.lastModified())); System.out.println("\t"+ file2.getName() + " last modified on: " + new Date(file2.lastModified())); } } Output Comparator example... Sorted by name files in parent directory: C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\comparator1.txt C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\comperator2.txt C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\example C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\exampleFileEntry.txt C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\exampleTxt.txt Sorted by size files in parent directory: example with size (kb): 0 exampleTxt.txt with size (kb): 87 exampleFileEntry.txt with size (kb): 503 comperator2.txt with size (kb): 1458 comparator1.txt with size (kb): 4436 Sorted by last modified files in parent directory: exampleTxt.txt last modified on: Sun Oct 26 14:02:22 EET 2014 example last modified on: Sun Oct 26 23:42:55 EET 2014 comparator1.txt last modified on: Tue Oct 28 14:48:28 EET 2014 comperator2.txt last modified on: Tue Oct 28 14:48:52 EET 2014 exampleFileEntry.txt last modified on: Tue Oct 28 14:53:50 EET 2014 File example was modified last because... example last modified on: Sun Oct 26 23:42:55 EET 2014 exampleTxt.txt last modified on: Sun Oct 26 14:02:22 EET 2014 Let’s see what classes were used here: NameFileComparator: Compares files according to their name. SizeFileComparator: Compares files according to their size. LastModifiedFileComparator: Compares files according to the date they were last modified. You should also note here that the comparisons can happen either on whole directories (where the files are sorted using the sort() method), or separately for 2 specific files (using compare()). 1.5 Input There are various implementations of InputStream in the org.apache.commons.io.input package. We are going to examine one of the most useful, TeeInputStream, which takes as arguments both an InputStream and an OutputStream, and automatically copies the read bytes from the input to the output. Moreover, through its third, boolean argument, closing just the TeeInputStream in the end closes the two other streams as well. InputExample.java import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.File; import java.io.IOException;import org.apache.commons.io.FileUtils; import org.apache.commons.io.input.TeeInputStream; import org.apache.commons.io.input.XmlStreamReader;public final class InputExample { private static final String XML_PATH = "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\InputOutputExampleFolder\\web.xml"; private static final String INPUT = "This should go to the output.";public static void runExample() { System.out.println("Input example..."); XmlStreamReader xmlReader = null; TeeInputStream tee = null; try { // XmlStreamReader // We can read an xml file and get its encoding. File xml = FileUtils.getFile(XML_PATH); xmlReader = new XmlStreamReader(xml); System.out.println("XML encoding: " + xmlReader.getEncoding()); // TeeInputStream // This very useful class copies an input stream to an output stream // and closes both using only one close() method (by defining the 3rd // constructor parameter as true).
ByteArrayInputStream in = new ByteArrayInputStream(INPUT.getBytes("US-ASCII")); ByteArrayOutputStream out = new ByteArrayOutputStream(); tee = new TeeInputStream(in, out, true); tee.read(new byte[INPUT.length()]);System.out.println("Output stream: " + out.toString()); } catch (IOException e) { e.printStackTrace(); } finally { try { xmlReader.close(); } catch (IOException e) { e.printStackTrace(); } try { tee.close(); } catch (IOException e) { e.printStackTrace(); } } } }Output Input example... XML encoding: UTF-8 Output stream: This should go to the output.1.6 Output Similar to the org.apache.commons.io.input, org.apache.commons.io.output has implementations of OutputStream, that can be used in many situations. A very interesting one is TeeOutputStream, which allows an output stream to be branched, or in other words, we can send an input stream to 2 different outputs. OutputExample.java import java.io.ByteArrayInputStream; import java.io.ByteArrayOutputStream; import java.io.IOException;import org.apache.commons.io.input.TeeInputStream; import org.apache.commons.io.output.TeeOutputStream;public final class OutputExample { private static final String INPUT = "This should go to the output.";public static void runExample() { System.out.println("Output example..."); TeeInputStream teeIn = null; TeeOutputStream teeOut = null; try { // TeeOutputStream ByteArrayInputStream in = new ByteArrayInputStream(INPUT.getBytes("US-ASCII")); ByteArrayOutputStream out1 = new ByteArrayOutputStream(); ByteArrayOutputStream out2 = new ByteArrayOutputStream(); teeOut = new TeeOutputStream(out1, out2); teeIn = new TeeInputStream(in, teeOut, true); teeIn.read(new byte[INPUT.length()]);System.out.println("Output stream 1: " + out1.toString()); System.out.println("Output stream 2: " + out2.toString()); } catch (IOException e) { e.printStackTrace(); } finally { // No need to close teeOut. When teeIn closes, it will also close its // Output stream (which is teeOut), which will in turn close the 2 // branches (out1, out2). try { teeIn.close(); } catch (IOException e) { e.printStackTrace(); } } } }Output Output example... Output stream 1: This should go to the output. Output stream 2: This should go to the output.2. Download the Complete Example This was an introduction to Apache Commons IO, covering most of the important classes that provide easy solutions to developers. There are many other capabilities in this vast package,but using this intro you get the general idea and a handful of useful tools for your future projects! DownloadYou can download the full source code of this example here: ApacheCommonsIOExample.rar ...
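One more class deserves a quick mention before closing: IOUtils offers one-line solutions for common stream chores. The snippet below is a small additional sketch, not part of the downloadable example, and assumes the commons-io 2.x signatures:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.commons.io.IOUtils;

public class IOUtilsSketch {
    public static void main(String[] args) throws IOException {
        InputStream in = new ByteArrayInputStream("Hello, Commons IO!".getBytes("UTF-8"));
        // Read an entire stream into a String in one call,
        // instead of looping over a buffer manually.
        String content = IOUtils.toString(in, "UTF-8");
        System.out.println(content);
        // Quietly close the stream, swallowing any IOException.
        IOUtils.closeQuietly(in);
    }
}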

ZooKeeper on Kubernetes

The last couple of weeks I’ve been playing around with docker and kubernetes. If you are not familiar with kubernetes, let’s just say for now that it’s an open source container cluster management implementation, which I find really, really awesome. One of the first things I wanted to try out was running an Apache ZooKeeper ensemble inside kubernetes and I thought that it would be nice to share the experience. For my experiments I used Docker v. 1.3.0 and Openshift V3, which I built from source and which includes Kubernetes. ZooKeeper on Docker Managing a ZooKeeper ensemble is definitely not a trivial task. You usually need to configure an odd number of servers and all of the servers need to be aware of each other. This is a PITA on its own, but it gets even more painful when you are working with something as static as docker images. The main difficulty could be expressed as: “How can you create multiple containers out of the same image and have them point to each other?” One approach would be to use docker volumes and provide the configuration externally. This would mean that you create the configuration for each container, store it somewhere in the docker host and then pass the configuration to each container as a volume at creation time. I’ve never tried that myself and I can’t tell if it’s a good or bad practice; I can see some benefits, but I can also see that this is something I am not really excited about. It could look like this: docker run -p 2181:2181 -v /path/to/my/conf:/opt/zookeeper/conf my/zookeeper Another approach would be to pass all the required information as environment variables to the container at creation time and then create a wrapper script which will read the environment variables, modify the configuration files accordingly and launch zookeeper. This is definitely easier to use, but it’s not flexible enough to perform other types of tuning without rebuilding the image itself. Last but not least, one could combine the two approaches and do something like:Make it possible to provide the base configuration externally using volumes. Use env and scripting to just configure the ensemble.There are plenty of images out there that take one or the other approach. I am more fond of the environment variables approach and since I needed something that would follow some of the kubernetes conventions in terms of naming, I decided to hack an image of my own using the env variables way. Creating a custom image for ZooKeeper I will just focus on the configuration that is required for the ensemble. In order to configure a ZooKeeper ensemble, one has to assign a numeric id to each server and then add in its configuration an entry per zookeeper server that contains the ip of the server, the peer port of the server and the election port. The server id is added in a file called myid under the dataDir. The rest of the configuration looks like: server.1=server1.example.com:2888:3888 server.2=server2.example.com:2888:3888 server.3=server3.example.com:2888:3888 ... server.current=[bind address]:[peer binding port]:[election binding port] Note that if the server id is X, the server.X entry needs to contain the bind ip and ports and not the connection ip and ports. So what we actually need to pass to the container as environment variables are the following:The server id. For each server in the ensemble:The hostname or ip The peer port The election portIf these are set, then the script that updates the configuration could look like: if [ !
-z "$SERVER_ID" ]; then echo "$SERVER_ID" > /opt/zookeeper/data/myid #Find the servers exposed in env. for i in `echo {1..15}`; do HOST=`envValue ZK_PEER_${i}_SERVICE_HOST` PEER=`envValue ZK_PEER_${i}_SERVICE_PORT` ELECTION=`envValue ZK_ELECTION_${i}_SERVICE_PORT` if [ "$SERVER_ID" = "$i" ]; then echo "server.$i=0.0.0.0:2888:3888" >> conf/zoo.cfg elif [ -z "$HOST" ] || [ -z "$PEER" ] || [ -z "$ELECTION" ] ; then #if a server is not fully defined stop the loop here. break else echo "server.$i=$HOST:$PEER:$ELECTION" >> conf/zoo.cfg fi done fi For simplicity, the envValue function that reads the keys and values from the environment is excluded. The complete image and helper scripts to launch zookeeper ensembles of variable size can be found in the fabric8io repository. ZooKeeper on Kubernetes The docker image above can be used directly with docker, provided that you take care of the environment variables. Now I am going to describe how this image can be used with kubernetes. But first a little rambling… What I really like about using kubernetes with ZooKeeper is that kubernetes will recreate the container if it dies or the health check fails. For ZooKeeper this also means that if a container that hosts an ensemble server dies, it will get replaced by a new one. This guarantees that there will constantly be a quorum of ZooKeeper servers. I also like that you don’t need to worry about the connection string that the clients will use if containers come and go. You can use kubernetes services to load balance across all the available servers and you can even expose that outside of kubernetes. Creating a Kubernetes config for ZooKeeper I’ll try to explain how you can create a 3-server ZooKeeper ensemble in Kubernetes. What we need is 3 docker containers all running ZooKeeper with the right environment variables: { "image": "fabric8/zookeeper", "name": "zookeeper-server-1", "env": [ { "name": "ZK_SERVER_ID", "value": "1" } ], "ports": [ { "name": "zookeeper-client-port", "containerPort": 2181, "protocol": "TCP" }, { "name": "zookeeper-peer-port", "containerPort": 2888, "protocol": "TCP" }, { "name": "zookeeper-election-port", "containerPort": 3888, "protocol": "TCP" } ] } The env needs to specify all the parameters discussed previously. So along with the ZK_SERVER_ID, we need to add the following:ZK_PEER_1_SERVICE_HOST ZK_PEER_1_SERVICE_PORT ZK_ELECTION_1_SERVICE_PORT ZK_PEER_2_SERVICE_HOST ZK_PEER_2_SERVICE_PORT ZK_ELECTION_2_SERVICE_PORT ZK_PEER_3_SERVICE_HOST ZK_PEER_3_SERVICE_PORT ZK_ELECTION_3_SERVICE_PORTAn alternative approach could be, instead of adding all this manual configuration, to expose peer and election as kubernetes services. I tend to favor the latter approach as it can make things simpler when working with multiple hosts. It’s also a nice exercise for learning kubernetes. So how do we configure those services? To configure them we need to know:the name of the port the kubernetes pod that provides the serviceThe name of the port is already defined in the previous snippet. So we just need to find out how to select the pod. For this use case, it makes sense to have a different pod for each zookeeper server container. So we just need to have a label for each pod that designates it as a zookeeper server pod and also a label that designates the zookeeper server id. "labels": { "name": "zookeeper-pod", "server": 1 } Something like the above could work. Now we are ready to define the service. I will just show how we can expose the peer port of the server with id 1 as a service.
The rest can be done in a similar fashion: { "apiVersion": "v1beta1", "creationTimestamp": null, "id": "zk-peer-1", "kind": "Service", "port": 2888, "containerPort": "zookeeper-peer-port", "selector": { "name": "zookeeper-pod", "server": 1 } } The basic idea is that in the service definition, you create a selector which can be used to query/filter pods. Then you define the name of the port to expose and this is pretty much it. Just to clarify, we need a service definition just like the one above per zookeeper server container. And of course we need to do the same for the election port. Finally, we can define an other kind of service, for the client connection port. This time we are not going to specify the sever id, in the selector, which means that all 3 servers will be selected. In this case kubernetes will load balance across all ZooKeeper servers. Since ZooKeeper provides a single system image (it doesn’t matter on which server you are connected) then this is pretty handy. { "apiVersion": "v1beta1", "creationTimestamp": null, "id": "zk-client", "kind": "Service", "port": 2181, "createExternalLoadBalancer": "true", "containerPort": "zookeeper-client-port", "selector": { "name": "zookeeper-pod" } } I hope you found it useful. There is definitely room for improvement so feel free to leave comments.Reference: ZooKeeper on Kubernetes from our JCG partner Ioannis Canellos at the Ioannis Canellos Blog blog....
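To make the client-side benefit concrete: because the zk-client service load balances across the whole ensemble, a plain ZooKeeper client only ever needs the service address. The sketch below is not from the article; the zk-client host name assumes the service is resolvable by name (e.g. via DNS or an injected service IP):

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class ZkClientSketch {
    public static void main(String[] args) throws Exception {
        // Connect through the kubernetes service instead of
        // listing individual ensemble members.
        ZooKeeper zk = new ZooKeeper("zk-client:2181", 3000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                System.out.println("Event: " + event.getState());
            }
        });
        // Because of ZooKeeper's single system image, it doesn't
        // matter which ensemble member actually answers.
        System.out.println("Root znode exists: " + (zk.exists("/", false) != null));
        zk.close();
    }
}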

Chronicle Map and Yahoo Cloud Service Benchmark

Overview Yahoo Cloud Service Benchmark is a reasonably widely used benchmarking tool for testing key-value stores with a significant number of keys, e.g. 100 million, and a modest number of clients, i.e. served from one machine. In this article I look at how a test of 100 million * 1 KB key/values performed using Chronicle Map on a single machine with 128 GB memory, dual Intel E5-2650 v2 @ 2.60GHz, and six Samsung 840 EVO SSDs. The 1 KB value consists of ten fields of 100 byte Strings. For a more optimal solution, primitive numbers would be a better choice. While the SSDs helped, the peak transfer rate was 700 MB/s, which could be supported by two SATA SSD drives. These benchmarks were performed using the latest version at the time of the report, Chronicle Map 2.0.5a-SNAPSHOT. Micro-second world Something which confounds me when reading benchmarks about key-value stores is that they start with the premise that performance is really important. IMHO, about 90% of the time, performance is not the most important feature, provided you have sufficient performance. These benchmark reports then continue to report times in milli-seconds, not micro-seconds, and throughputs in the tens of thousands instead of the hundreds of thousands or millions. If performance really was that important, they would have built their products around performance, instead of the useful features they do support, like multi-key transactionality, quorum updates and other features Chronicle Map doesn’t support, for performance reasons. So how would a key-value store built for performance look with YCSB? Throughput measures The “50/50” test is 50% random reads and 50% random writes, the “95/5” test is 95% reads to 5% writes. It is expected that writes will be more expensive, and a higher percentage of reads results in higher throughputs.

Threads   50/50 read/update   95/5 read/update
1         122 K/s             245 K/s
2         235 K/s             414 K/s
4         339 K/s             750 K/s
8         646 K/s             1.295 M/s
15        819 K/s             1.452 M/s
30        900 K/s             1.641 M/s

Latencies The following latencies are in micro-seconds, not milli-seconds.

Threads: 8   50/50 read   95/5 read   50/50 update   95/5 update
average      5 µs         3.9 µs      15.9 µs        11.3 µs
95th         12 µs        8 µs        31 µs          19 µs
99th         19 µs        14 µs       42 µs          27 µs
worst        67 ms        70 ms       67 ms          70 ms

Note: the benchmark is not designed to be GC-free and creates some garbage. This is not particularly high and the benchmark itself uses only about 1/4 of the CPU according to flight simulator, however it does impact the worst latencies. Conclusion Make sure the key-value store has the features you need, but if performance is critical, look for a solution designed for performance, as this can be 100x faster than full-featured products. Other high performance examples Aerospike benchmark – Single server benchmark with over 1 M TPS, sub-micro-second latencies. Uses smaller 100 byte records. NuoDB benchmark – Supports transactions across a quorum. 24 nodes for 1 M TPS. Oracle NoSQL benchmark – A couple of years old, uses a lot of threads, otherwise a good result. VoltDB benchmark – Not tested to 1 M TPS, but promising. Latencies around 1-2 ms; the report has 99th percentile latencies which others don’t include. Room for improvement MongoDB driver benchmark – Has 1000s of micro-seconds instead of milli-seconds. Cassandra, HBase, Redis – Shows you can get 1 million TPS if you use enough servers, 288 nodes for 1 M TPS. Report including Elasticsearch – Report includes runtime in a “resource Austere Environment”. Hyperdex – Covers throughput only. WhiteDB – Reports latencies in micro-seconds for 170 K records, and modest throughputs.
Benchmark including Aerospike – Reports Footnote: Using smaller values helps, and we suggest trying to make values closer to 100 bytes. This is the result of the 95/5 workload B, using 10×10 byte fields, and 50 M entries as the Aerospike benchmark does.

[OVERALL], RunTime(ms), 29,542
[OVERALL], Throughput(ops/sec), 3,385,011
[READ], Operations, 94998832
[READ], AverageLatency(us), 1.88
[READ], MinLatency(us), 0
[READ], MaxLatency(us), 50201
[READ], 95thPercentileLatency(ms), 0.004
[READ], 99thPercentileLatency(ms), 0.006
[READ], Return=0, 48768825
[READ], Return=1, 46230007
[UPDATE], Operations, 5001168
[UPDATE], AverageLatency(us), 8.04
[UPDATE], MinLatency(us), 0
[UPDATE], MaxLatency(us), 50226
[UPDATE], 95thPercentileLatency(ms), 0.012
[UPDATE], 99thPercentileLatency(ms), 0.018
[UPDATE], Return=0, 5001168

Reference: Chronicle Map and Yahoo Cloud Service Benchmark from our JCG partner Peter Lawrey at the Vanilla Java blog.
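For readers who want to reproduce something similar, this is roughly how a persisted Chronicle Map is created in code. The sketch below is not from the report and uses the 2.x builder API as I recall it, so method names may differ between versions:

import java.io.File;
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public class ChronicleMapSketch {
    public static void main(String[] args) throws Exception {
        File store = new File("ycsb-test.dat");
        // Sizing up front matters: Chronicle Map pre-allocates
        // memory-mapped, off-heap space for the expected entry count.
        ChronicleMap<String, String> map = ChronicleMapBuilder
                .of(String.class, String.class)
                .entries(1_000_000)
                .createPersistedTo(store);
        map.put("key-1", "value-1");
        System.out.println(map.get("key-1"));
        map.close();
    }
}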

Spring Boot Actuator: custom endpoint with MVC layer on top of it

Spring Boot Actuator endpoints allow you to monitor and interact with your application. Spring Boot includes a number of built-in endpoints and you can also add your own. Adding custom endpoints is as easy as creating a class that extends from org.springframework.boot.actuate.endpoint.AbstractEndpoint. But Spring Boot Actuator also offers the possibility to decorate endpoints with an MVC layer. Endpoints endpoint There are many built-in endpoints, but one that is missing is an endpoint to expose all endpoints. By default endpoints are exposed via HTTP, where the ID of an endpoint is mapped to a URL. In the below example, a new endpoint with the ID endpoints is created and its invoke method returns all available endpoints: @Component public class EndpointsEndpoint extends AbstractEndpoint<List<Endpoint>> {private List<Endpoint> endpoints;@Autowired public EndpointsEndpoint(List<Endpoint> endpoints) { super("endpoints"); this.endpoints = endpoints; }@Override public List<Endpoint> invoke() { return endpoints; } } The @Component annotation adds the endpoint to the list of existing endpoints. The /endpoints URL will now expose all endpoints with their id, enabled and sensitive properties: [ { "id": "trace", "sensitive": true, "enabled": true }, { "id": "configprops", "sensitive": true, "enabled": true } ] The new endpoint will also be registered with the JMX server as an MBean: [org.springframework.boot:type=Endpoint,name=endpointsEndpoint] MVC Endpoint Spring Boot Actuator offers an additional feature, which is a strategy for the MVC layer on top of an Endpoint through the org.springframework.boot.actuate.endpoint.mvc.MvcEndpoint interface. The MvcEndpoint can use @RequestMapping and other Spring MVC features. Please note that EndpointsEndpoint returns all available endpoints. But it would be nice if the user could filter endpoints by their enabled and sensitive properties. In order to do so, a new MvcEndpoint must be created with a valid @RequestMapping method. Please note that using @Controller and @RequestMapping on the class level is not allowed, therefore @Component was used to make the endpoint available: @Component public class EndpointsMvcEndpoint extends EndpointMvcAdapter {private final EndpointsEndpoint delegate;@Autowired public EndpointsMvcEndpoint(EndpointsEndpoint delegate) { super(delegate); this.delegate = delegate; }@RequestMapping(value = "/filter", method = RequestMethod.GET) @ResponseBody public Set<Endpoint> filter(@RequestParam(required = false) Boolean enabled, @RequestParam(required = false) Boolean sensitive) {} } The new method will be available under the /endpoints/filter URL.
The implementation of this method is simple: it gets optional enabled and sensitive parameters and filters the delegate’s invoke method result: @RequestMapping(value = "/filter", method = RequestMethod.GET) @ResponseBody public Set<Endpoint> filter(@RequestParam(required = false) Boolean enabled, @RequestParam(required = false) Boolean sensitive) {Predicate<Endpoint> isEnabled = endpoint -> matches(endpoint::isEnabled, ofNullable(enabled));Predicate<Endpoint> isSensitive = endpoint -> matches(endpoint::isSensitive, ofNullable(sensitive));return this.delegate.invoke().stream() .filter(isEnabled.and(isSensitive)) .collect(toSet()); }private <T> boolean matches(Supplier<T> supplier, Optional<T> value) { return !value.isPresent() || supplier.get().equals(value.get()); } Usage examples:All enabled endpoints: /endpoints/filter?enabled=true All sensitive endpoints: /endpoints/filter?sensitive=true All enabled and sensitive endpoints: /endpoints/filter?enabled=true&sensitive=trueMake endpoints discoverable EndpointsMvcEndpoint utilizes MVC capabilities, but still returns plain endpoint objects. In case Spring HATEOAS is on the classpath, the filter method could be extended to return org.springframework.hateoas.Resource with links to endpoints: class EndpointResource extends ResourceSupport {private final String managementContextPath; private final Endpoint endpoint;EndpointResource(String managementContextPath, Endpoint endpoint) { this.managementContextPath = managementContextPath; this.endpoint = endpoint;if (endpoint.isEnabled()) {UriComponentsBuilder path = fromCurrentServletMapping() .path(this.managementContextPath) .pathSegment(endpoint.getId());this.add(new Link(path.build().toUriString(), endpoint.getId())); } }public Endpoint getEndpoint() { return endpoint; } } The EndpointResource will contain a link to each enabled endpoint. Note that the constructor takes a managementContextPath variable. This variable contains the Spring Boot Actuator management.contextPath property value, which is used to set a prefix for the management endpoints. The changes required in the EndpointsMvcEndpoint class: @Component public class EndpointsMvcEndpoint extends EndpointMvcAdapter {@Value("${management.context-path:/}") // default to '/' private String managementContextPath;@RequestMapping(value = "/filter", method = RequestMethod.GET) @ResponseBody public Set<Endpoint> filter(@RequestParam(required = false) Boolean enabled, @RequestParam(required = false) Boolean sensitive) {// predicates declarationsreturn this.delegate.invoke().stream() .filter(isEnabled.and(isSensitive)) .map(e -> new EndpointResource(managementContextPath, e)) .collect(toSet()); } } The result can be seen in my Chrome browser with JSON Formatter installed. But why not return the resource directly from EndpointsEndpoint? In EndpointResource a UriComponentsBuilder that extracts information from an HttpServletRequest was used, which will throw an exception when the MBean’s getData operation is called (a problem unless JMX access is not desired). Manage endpoint state Endpoints can be used not only for monitoring, but also for management. There is already a built-in ShutdownEndpoint (disabled by default) that allows shutting down the ApplicationContext.
In the below (hypothetical) example, user can change state of selected endpoint: @RequestMapping(value = "/{endpointId}/state") @ResponseBody public EndpointResource enable(@PathVariable String endpointId) { Optional<Endpoint> endpointOptional = this.delegate.invoke().stream() .filter(e -> e.getId().equals(endpointId)) .findFirst(); if (!endpointOptional.isPresent()) { throw new RuntimeException("Endpoint not found: " + endpointId); }Endpoint endpoint = endpointOptional.get(); ((AbstractEndpoint) endpoint).setEnabled(!endpoint.isEnabled());return new EndpointResource(managementContextPath, endpoint); } While calling a disabled endpoint user should receive the following response: { "message": "This endpoint is disabled" } Going further The next step could be adding a user interface for custom (or existing) endpoints, but it is not in scope of this article. If you are interested you may have a look at Spring Boot Admin that is a simple admin interface for Spring Boot applications. Summary Spring Boot Actuator provides all of Spring Boot’s production-ready features with a number of built-in endpoints. With minimal effort custom endpoints can be added to extend monitoring and management capabilities of the application. Referenceshttp://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-readyReference: Spring Boot Actuator: custom endpoint with MVC layer on top of it from our JCG partner Rafal Borowiec at the Codeleak.pl blog....
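As a final sanity check, note that the custom endpoint is a plain Spring bean, so it can be exercised without HTTP at all. The snippet below is a hypothetical smoke test, assuming ShutdownEndpoint's Boot 1.x no-argument constructor; it is not part of the article's sources:

import java.util.Arrays;
import java.util.List;
import org.springframework.boot.actuate.endpoint.Endpoint;
import org.springframework.boot.actuate.endpoint.ShutdownEndpoint;

public class EndpointsEndpointSmokeTest {
    public static void main(String[] args) {
        List<Endpoint> known = Arrays.<Endpoint>asList(new ShutdownEndpoint());
        EndpointsEndpoint endpoint = new EndpointsEndpoint(known);
        // The custom endpoint simply echoes back the endpoints it was given.
        System.out.println("id = " + endpoint.getId());            // "endpoints"
        System.out.println("count = " + endpoint.invoke().size()); // 1
    }
}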

Securing WebSockets using Username/Password and Servlet Security

RFC 6455 provide a complete list of security considerations for WebSockets. Some of them are baked in the protocol itself, and others need more explanation on how they can be achieved on a particular server. Lets talk about some of the security built into the protocol itself:              The Origin header in HTTP request includes only the information required to identify the principal (web page, JavaScript or any other client) that initiated the request (typically the scheme, host, and port of initiating origin). For WebSockets, this header field is included in the client’s opening handshake. This is used to inform server of the script origin generating the WebSocket connection request. The server may then decide to accept or reject the handshake request accordingly. This allows the server to protect against unauthorized cross-origin use of a WebSocket server by scripts using the WebSocket API in a browser.For example, if Java EE 7 WebSocket Chat sample is deployed to WildFly and accessed at localhost:8080/chat/ then the Origin header is “http://localhost:8080″. Non-browser clients may use the Origin header to specify the origin of the request. WebSocket servers should be careful about receiving such requests. WebSocket opening handshake from client must include Sec-WebSocket-Key and Sec-WebSocket-Version HTTP header field. XMLHttpRequest can be used to make HTTP requests, and allows to set headers as part of that request as: xhr.onreadystatechange = function () { if (xhr.readyState == 4 && xhr.status == 200) { document.getElementById("myDiv").innerHTML = xhr.responseText; } } xhr.open("GET", "http://localhost:8080", true); xhr.setRequestHeader("foo", "bar"); xhr.setRequestHeader("Sec-WebSocket-Key", "myKey"); xhr.send(); If XMLHttpRequest tries to set any header fields starting with Sec- then they are ignored. So a malicious user cannot simulate a WebSocket connection to a server by using HTML and JavaScript APIs.In addition to these two primary ways, WebSockets can be secured using client authentication mechanism available to any HTTP servers. This Tech Tip will show how to authenticate Java EE 7 WebSockets deployed on WildFly. Lets get started!Clone Java EE 7 Samples workspace: git clone https://github.com/javaee-samples/javaee7-samples.gitThe “websocket/endpoint-security” sample shows how client authentication can be done before the WebSocket handshake is initiated from the client. This is triggered by including the following deployment descriptor: <security-constraint> <web-resource-collection> <web-resource-name>WebSocket Endpoint</web-resource-name> <url-pattern>/*</url-pattern> <http-method>GET</http-method> </web-resource-collection> <auth-constraint> <role-name>g1</role-name> </auth-constraint> </security-constraint> <login-config> <auth-method>BASIC</auth-method> <realm-name>file</realm-name> </login-config> <security-role> <role-name>g1</role-name> </security-role> Some key points to understand about this descriptor:<url-pattern> indicates that any request made to this application will be prompted for authentication <auth-constraint> defines the security role that can access this resource <login-config> shows that file-based realm is used with basic authentication <security-role> defines the security roles referenced by this applicationIn our particular case, the page that creates the WebSocket connection is protected by basic authentication. 
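Once the handshake is authenticated this way, the endpoint code usually also needs to know who the user actually is. The sketch below is not part of the sample; it shows the standard JSR 356 way to capture the authenticated principal during the handshake via a ServerEndpointConfig.Configurator:

import java.security.Principal;
import javax.websocket.HandshakeResponse;
import javax.websocket.server.HandshakeRequest;
import javax.websocket.server.ServerEndpointConfig;

public class AuthAwareConfigurator extends ServerEndpointConfig.Configurator {
    @Override
    public void modifyHandshake(ServerEndpointConfig config,
                                HandshakeRequest request,
                                HandshakeResponse response) {
        // The servlet container has already performed BASIC auth at this
        // point, so the principal is available on the handshake request.
        Principal user = request.getUserPrincipal();
        if (user != null) {
            config.getUserProperties().put("user", user.getName());
        }
    }
}

The configurator is then registered on the endpoint class, e.g. @ServerEndpoint(value = "/websocket", configurator = AuthAwareConfigurator.class). With that in place, the sample can be run as follows.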
Download WildFly 8.1, unzip, and add a new user by invoking the following script: ./bin/add-user.sh -a -u u1 -p p1 -g g1 This will add user “u1″ with password “p1″ in group “g1″. The group specified here needs to match as defined in <role-name> in the deployment descriptor. Deploy the sample by giving the command: mvn wildfly:deployNow when the application is accessed at localhost:8080/endpoint-security then a security dialog box pops up as shown:Enter “u1″ as the username and “p1″ as the password to authenticate. These credentials are defined in the group “g1″ which is referenced in the deployment descriptor. Any other credentials will keep bringing the dialog back. As soon as the request is successfully authenticated, the WebSocket connection is established and a message is shown on the browser. If you are interested in securing only the WebSocket URL then change the URL pattern from: /* to: /websocket In websocket.js, change the URL to create WebSocket endpoint from: var wsUri = "ws://" + document.location.host + document.location.pathname + "websocket"; to: var wsUri = "ws://u1:p1@" + document.location.host + document.location.pathname + "websocket"; Note, how credentials are passed in the URL itself. As of Google Chrome 38.0.2125.104, a browser popup does not appear if only WebSocket URL requires authentication. Next Tech Tip will explain how to secure WebSocket using wss:// protocol.Reference: Securing WebSockets using Username/Password and Servlet Security from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
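Since browsers will not prompt for WebSocket-only authentication and embedding credentials in the URL is awkward, a non-browser client can instead send a standard Authorization header during the handshake. The following is a minimal sketch using the JSR 356 client API; the target URI matches the sample's endpoint, while the rest is an illustrative assumption:

import java.net.URI;
import java.util.Arrays;
import java.util.Base64;
import java.util.List;
import java.util.Map;
import javax.websocket.ClientEndpointConfig;
import javax.websocket.ContainerProvider;
import javax.websocket.Endpoint;
import javax.websocket.EndpointConfig;
import javax.websocket.Session;

public class BasicAuthWebSocketClient {
    public static void main(String[] args) throws Exception {
        ClientEndpointConfig config = ClientEndpointConfig.Builder.create()
            .configurator(new ClientEndpointConfig.Configurator() {
                @Override
                public void beforeRequest(Map<String, List<String>> headers) {
                    // Same credentials as created with add-user.sh above.
                    String token = Base64.getEncoder().encodeToString("u1:p1".getBytes());
                    headers.put("Authorization", Arrays.asList("Basic " + token));
                }
            })
            .build();
        Session session = ContainerProvider.getWebSocketContainer().connectToServer(
            new Endpoint() {
                @Override
                public void onOpen(Session session, EndpointConfig cfg) {
                    System.out.println("Connected: " + session.getId());
                }
            },
            config,
            URI.create("ws://localhost:8080/endpoint-security/websocket"));
        session.close();
    }
}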

Java EE 7 / JAX-RS 2.0: Simple REST API Authentication & Authorization with Custom HTTP Header

REST has made a lot of conveniences when it comes to implementing web services with the already available HTTP protocol at its disposal. By just firing GET, POST and other HTTP methods through the designated URL, you’ll sure to get something done through a response out of a REST service. But whatever conveniences which REST has given to the developers, the subject of security and access control should always be addressed. This article will show you how to implement simple user based authentication with the use of HTTP Headers and JAX-RS 2.0 interceptors.         Authenticator Let’s begin with an authenticator class. This DemoAuthenticator with the codes below provides the necessary methods for authenticating any users which is request access to the REST web service. Please read through the codes and the comments are there to guide the understanding. Codes for DemoAuthenticator: package com.developerscrappad.business;   import java.util.HashMap; import java.util.Map; import java.util.UUID; import java.security.GeneralSecurityException; import javax.security.auth.login.LoginException;   public final class DemoAuthenticator {   private static DemoAuthenticator authenticator = null;   // A user storage which stores <username, password> private final Map<String, String> usersStorage = new HashMap();   // A service key storage which stores <service_key, username> private final Map<String, String> serviceKeysStorage = new HashMap();   // An authentication token storage which stores <service_key, auth_token>. private final Map<String, String> authorizationTokensStorage = new HashMap();   private DemoAuthenticator() { // The usersStorage pretty much represents a user table in the database usersStorage.put( "username1", "passwordForUser1" ); usersStorage.put( "username2", "passwordForUser2" ); usersStorage.put( "username3", "passwordForUser3" );   /**   * Service keys are pre-generated by the system and is given to the   * authorized client who wants to have access to the REST API. Here,   * only username1 and username2 is given the REST service access with   * their respective service keys.   */ serviceKeysStorage.put( "f80ebc87-ad5c-4b29-9366-5359768df5a1", "username1" ); serviceKeysStorage.put( "3b91cab8-926f-49b6-ba00-920bcf934c2a", "username2" ); }   public static DemoAuthenticator getInstance() { if ( authenticator == null ) { authenticator = new DemoAuthenticator(); }   return authenticator; }   public String login( String serviceKey, String username, String password ) throws LoginException { if ( serviceKeysStorage.containsKey( serviceKey ) ) { String usernameMatch = serviceKeysStorage.get( serviceKey );   if ( usernameMatch.equals( username ) && usersStorage.containsKey( username ) ) { String passwordMatch = usersStorage.get( username );   if ( passwordMatch.equals( password ) ) {   /**   * Once all params are matched, the authToken will be   * generated and will be stored in the   * authorizationTokensStorage. The authToken will be needed   * for every REST API invocation and is only valid within   * the login session   */ String authToken = UUID.randomUUID().toString(); authorizationTokensStorage.put( authToken, username );   return authToken; } } }   throw new LoginException( "Don't Come Here Again!" ); }   /**   * The method that pre-validates if the client which invokes the REST API is   * from a authorized and authenticated source.   
*   * @param serviceKey The service key   * @param authToken The authorization token generated after login   * @return TRUE for acceptance and FALSE for denied.   */ public boolean isAuthTokenValid( String serviceKey, String authToken ) { if ( isServiceKeyValid( serviceKey ) ) { String usernameMatch1 = serviceKeysStorage.get( serviceKey );   if ( authorizationTokensStorage.containsKey( authToken ) ) { String usernameMatch2 = authorizationTokensStorage.get( authToken );   if ( usernameMatch1.equals( usernameMatch2 ) ) { return true; } } }   return false; }   /**   * This method checks is the service key is valid   *   * @param serviceKey   * @return TRUE if service key matches the pre-generated ones in service key   * storage. FALSE for otherwise.   */ public boolean isServiceKeyValid( String serviceKey ) { return serviceKeysStorage.containsKey( serviceKey ); }   public void logout( String serviceKey, String authToken ) throws GeneralSecurityException { if ( serviceKeysStorage.containsKey( serviceKey ) ) { String usernameMatch1 = serviceKeysStorage.get( serviceKey );   if ( authorizationTokensStorage.containsKey( authToken ) ) { String usernameMatch2 = authorizationTokensStorage.get( authToken );   if ( usernameMatch1.equals( usernameMatch2 ) ) {   /**   * When a client logs out, the authentication token will be   * remove and will be made invalid.   */ authorizationTokensStorage.remove( authToken ); return; } } }   throw new GeneralSecurityException( "Invalid service key and authorization token match." ); } }General Code Explanation: Generally, there are only a few important items that makes up the authenticator and that that is: service key, authorization token, username and password. The username and password will commonly go in pairs. Service Key The service key may be new to some readers; in some public REST API service, a service key and sometimes known as API key, is generated by the system and then sends to the user/client (either through email or other means) that is permitted to access the REST service. So besides login into the REST service with just mere username and password, the system will also check on the service key if the user/client is permitted to access the REST APIs. The usernames, passwords and service keys are all predefined in the codes above for now only demo purpose. Authorization Token Upon authentication (through the login() method), the system will then generate an authorization token for the authenticated user. This token is passed back to the user/client through HTTP response and is to be used for any REST API invocation later. The user/client will have to find a way to store and use it throughout the login session. We’ll look at that later. Required HTTP Headers Name Definition Moving forward, instead of having the service key and authorization token to be passed to the server-side app as HTTP parameters (Form or Query), we’ll have it pass as HTTP Headers. This is to allow the request to be first filtered before being processed by the targeted REST method. The names for the HTTP Headers are below:HTTP Header Name Descriptionservice_key The service key that enables a HTTP client to access the REST Web Services. 
This is the first layer of authenticating and authorizing the HTTP Request.auth_token The token generated upon username/password authentication, which is to be used for any REST Web Service calls (except for the authentication method shown later).REST API Implementation For convenience and further code error reduction, let’s put the HTTP Header names into an interface as static final variables for the use in the rest of the classes. Codes for DemoHTTPHeaderNames.java: package com.developerscrappad.intf;   public interface DemoHTTPHeaderNames {   public static final String SERVICE_KEY = "service_key"; public static final String AUTH_TOKEN = "auth_token"; }For the implementation of the authentication process and other demo methods, the methods’ signature are defined in DemoBusinessRESTResourceProxy, along with the appropriate HTTP Methods, parameters and the business implementation is defined in DemoBusinessRESTResource. Codes for DemoBusinessRESTResourceProxy.java: package com.developerscrappad.intf;   import java.io.Serializable; import javax.ejb.Local; import javax.ws.rs.FormParam; import javax.ws.rs.GET; import javax.ws.rs.POST; import javax.ws.rs.Path; import javax.ws.rs.Produces; import javax.ws.rs.core.Context; import javax.ws.rs.core.HttpHeaders; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.Response;   @Local @Path( "demo-business-resource" ) public interface DemoBusinessRESTResourceProxy extends Serializable {   @POST @Path( "login" ) @Produces( MediaType.APPLICATION_JSON ) public Response login( @Context HttpHeaders httpHeaders, @FormParam( "username" ) String username, @FormParam( "password" ) String password );   @GET @Path( "demo-get-method" ) @Produces( MediaType.APPLICATION_JSON ) public Response demoGetMethod();   @POST @Path( "demo-post-method" ) @Produces( MediaType.APPLICATION_JSON ) public Response demoPostMethod();   @POST @Path( "logout" ) public Response logout( @Context HttpHeaders httpHeaders ); }Codes for DemoBusinessRESTResource.java: package com.developerscrappad.business;   import com.developerscrappad.intf.DemoBusinessRESTResourceProxy; import com.developerscrappad.intf.DemoHTTPHeaderNames; import java.security.GeneralSecurityException; import javax.ejb.Stateless; import javax.json.Json; import javax.json.JsonObject; import javax.json.JsonObjectBuilder; import javax.security.auth.login.LoginException; import javax.ws.rs.FormParam; import javax.ws.rs.core.CacheControl; import javax.ws.rs.core.Context; import javax.ws.rs.core.HttpHeaders; import javax.ws.rs.core.Response;   @Stateless( name = "DemoBusinessRESTResource", mappedName = "ejb/DemoBusinessRESTResource" ) public class DemoBusinessRESTResource implements DemoBusinessRESTResourceProxy {   private static final long serialVersionUID = -6663599014192066936L;   @Override public Response login( @Context HttpHeaders httpHeaders, @FormParam( "username" ) String username, @FormParam( "password" ) String password ) {   DemoAuthenticator demoAuthenticator = DemoAuthenticator.getInstance(); String serviceKey = httpHeaders.getHeaderString( DemoHTTPHeaderNames.SERVICE_KEY );   try { String authToken = demoAuthenticator.login( serviceKey, username, password );   JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder(); jsonObjBuilder.add( "auth_token", authToken ); JsonObject jsonObj = jsonObjBuilder.build();   return getNoCacheResponseBuilder( Response.Status.OK ).entity( jsonObj.toString() ).build();   } catch ( final LoginException ex ) { JsonObjectBuilder jsonObjBuilder = 
package com.developerscrappad.business;

import com.developerscrappad.intf.DemoBusinessRESTResourceProxy;
import com.developerscrappad.intf.DemoHTTPHeaderNames;
import java.security.GeneralSecurityException;
import javax.ejb.Stateless;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonObjectBuilder;
import javax.security.auth.login.LoginException;
import javax.ws.rs.FormParam;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.Response;

@Stateless( name = "DemoBusinessRESTResource", mappedName = "ejb/DemoBusinessRESTResource" )
public class DemoBusinessRESTResource implements DemoBusinessRESTResourceProxy {

    private static final long serialVersionUID = -6663599014192066936L;

    @Override
    public Response login(
        @Context HttpHeaders httpHeaders,
        @FormParam( "username" ) String username,
        @FormParam( "password" ) String password ) {

        DemoAuthenticator demoAuthenticator = DemoAuthenticator.getInstance();
        String serviceKey = httpHeaders.getHeaderString( DemoHTTPHeaderNames.SERVICE_KEY );

        try {
            String authToken = demoAuthenticator.login( serviceKey, username, password );

            JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
            jsonObjBuilder.add( "auth_token", authToken );
            JsonObject jsonObj = jsonObjBuilder.build();

            return getNoCacheResponseBuilder( Response.Status.OK ).entity( jsonObj.toString() ).build();

        } catch ( final LoginException ex ) {
            JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
            jsonObjBuilder.add( "message", "Problem matching service key, username and password" );
            JsonObject jsonObj = jsonObjBuilder.build();

            return getNoCacheResponseBuilder( Response.Status.UNAUTHORIZED ).entity( jsonObj.toString() ).build();
        }
    }

    @Override
    public Response demoGetMethod() {
        JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
        jsonObjBuilder.add( "message", "Executed demoGetMethod" );
        JsonObject jsonObj = jsonObjBuilder.build();

        return getNoCacheResponseBuilder( Response.Status.OK ).entity( jsonObj.toString() ).build();
    }

    @Override
    public Response demoPostMethod() {
        JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
        jsonObjBuilder.add( "message", "Executed demoPostMethod" );
        JsonObject jsonObj = jsonObjBuilder.build();

        return getNoCacheResponseBuilder( Response.Status.ACCEPTED ).entity( jsonObj.toString() ).build();
    }

    @Override
    public Response logout( @Context HttpHeaders httpHeaders ) {
        try {
            DemoAuthenticator demoAuthenticator = DemoAuthenticator.getInstance();
            String serviceKey = httpHeaders.getHeaderString( DemoHTTPHeaderNames.SERVICE_KEY );
            String authToken = httpHeaders.getHeaderString( DemoHTTPHeaderNames.AUTH_TOKEN );

            demoAuthenticator.logout( serviceKey, authToken );

            return getNoCacheResponseBuilder( Response.Status.NO_CONTENT ).build();
        } catch ( final GeneralSecurityException ex ) {
            return getNoCacheResponseBuilder( Response.Status.INTERNAL_SERVER_ERROR ).build();
        }
    }

    private Response.ResponseBuilder getNoCacheResponseBuilder( Response.Status status ) {
        CacheControl cc = new CacheControl();
        cc.setNoCache( true );
        cc.setMaxAge( -1 );
        cc.setMustRevalidate( true );

        return Response.status( status ).cacheControl( cc );
    }
}

The login() method authenticates the username and password together with the right service key. After login(), the authorization token is generated and returned to the client, which must use it for any other method invocation later on. The demoGetMethod() and demoPostMethod() are just dummy methods that return a JSON message for demo purposes, with the special condition that a valid authorization token must be present. The logout() method logs the user out of the REST service; the user is identified by the "auth_token". The service key and the authorization token are made available to the REST service methods through:

@Context HttpHeaders httpHeaders

The httpHeaders, an instance of javax.ws.rs.core.HttpHeaders, is an object that contains the header names and values for the application to use further on.
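To see these headers in flight before we get to the interceptors, here is a minimal plain-Java client sketch for the login() method above. The host and port (localhost:8080) are assumptions; the header name, form parameters, demo credentials and URL pattern follow the ones used in this article:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class LoginClientSketch {

    public static void main( String[] args ) throws Exception {
        URL url = new URL( "http://localhost:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/login/" );
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod( "POST" );
        conn.setDoOutput( true );

        // The service key travels as an HTTP header, not as a form or query parameter.
        conn.setRequestProperty( "service_key", "3b91cab8-926f-49b6-ba00-920bcf934c2a" );
        conn.setRequestProperty( "Content-Type", "application/x-www-form-urlencoded" );

        // Username and password are ordinary form parameters, as defined by login().
        byte[] form = "username=username2&password=passwordForUser2".getBytes( "UTF-8" );
        OutputStream os = conn.getOutputStream();
        os.write( form );
        os.close();

        // On success the JSON body contains the auth_token, which must then be
        // sent as the auth_token header with every subsequent request.
        BufferedReader in = new BufferedReader( new InputStreamReader( conn.getInputStream(), "UTF-8" ) );
        String line;
        while ( ( line = in.readLine() ) != null ) {
            System.out.println( line );
        }
        in.close();
    }
}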
But in order to get the REST service to accept these HTTP headers, something needs to be done first through both the REST request interceptor and the response interceptor.

Authentication With HTTP Headers Through JAX-RS 2.0 Interceptors

Due to certain security limitations, don't expect that arbitrary HTTP headers can be passed by just any REST client and be accepted by the REST service; it simply doesn't work that way. In order for a specific header to be accepted by the REST service, we have to declare the accepted headers very specifically in the response filter interceptor. Codes for DemoRESTResponseFilter.java:

package com.developerscrappad.interceptors;

import com.developerscrappad.intf.DemoHTTPHeaderNames;
import java.io.IOException;
import java.util.logging.Logger;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class DemoRESTResponseFilter implements ContainerResponseFilter {

    private final static Logger log = Logger.getLogger( DemoRESTResponseFilter.class.getName() );

    @Override
    public void filter( ContainerRequestContext requestCtx, ContainerResponseContext responseCtx ) throws IOException {

        log.info( "Filtering REST Response" );

        // You may restrict this to specific trusted origins instead of '*'
        responseCtx.getHeaders().add( "Access-Control-Allow-Origin", "*" );
        responseCtx.getHeaders().add( "Access-Control-Allow-Credentials", "true" );
        responseCtx.getHeaders().add( "Access-Control-Allow-Methods", "GET, POST, DELETE, PUT" );
        responseCtx.getHeaders().add( "Access-Control-Allow-Headers",
            DemoHTTPHeaderNames.SERVICE_KEY + ", " + DemoHTTPHeaderNames.AUTH_TOKEN );
    }
}

DemoRESTResponseFilter is a JAX-RS 2.0 interceptor which implements ContainerResponseFilter. Don't forget to annotate it with both @Provider and @PreMatching. In order for specific custom HTTP headers to be accepted, the response must include the "Access-Control-Allow-Headers" header, whose value lists the allowed custom header names separated by ",". This is the way to inform the browser or REST client which custom headers are allowed. The rest of the headers are for CORS, about which you can read more in one of our articles: Java EE 7 / JAX-RS 2.0 – CORS on REST (How to make REST APIs accessible from a different domain).
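As a side note on the Access-Control-Allow-Origin comment in the code above: a stricter alternative to "*" is to echo back only whitelisted origins. The sketch below is a hypothetical variation, not part of the original demo, and the whitelist content is made up for illustration:

package com.developerscrappad.interceptors;

import java.io.IOException;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class WhitelistingResponseFilterSketch implements ContainerResponseFilter {

    // Illustrative whitelist; replace with the origins you actually trust.
    private static final Set<String> ALLOWED_ORIGINS = new HashSet<String>(
        Arrays.asList( "http://www.developerscrappad.com" ) );

    @Override
    public void filter( ContainerRequestContext requestCtx, ContainerResponseContext responseCtx ) throws IOException {
        String origin = requestCtx.getHeaderString( "Origin" );

        // Echo the Origin back only when it is explicitly whitelisted,
        // instead of allowing every origin with "*".
        if ( origin != null && ALLOWED_ORIGINS.contains( origin ) ) {
            responseCtx.getHeaders().add( "Access-Control-Allow-Origin", origin );
            responseCtx.getHeaders().add( "Access-Control-Allow-Credentials", "true" );
        }
    }
}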
Next, to validate and verify the service key and authorization token, we need to extract them from the HTTP headers and pre-process them with the request filter interceptor. Codes for DemoRESTRequestFilter.java:

package com.developerscrappad.interceptors;

import com.developerscrappad.business.DemoAuthenticator;
import com.developerscrappad.intf.DemoHTTPHeaderNames;
import java.io.IOException;
import java.util.logging.Logger;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class DemoRESTRequestFilter implements ContainerRequestFilter {

    private final static Logger log = Logger.getLogger( DemoRESTRequestFilter.class.getName() );

    @Override
    public void filter( ContainerRequestContext requestCtx ) throws IOException {

        String path = requestCtx.getUriInfo().getPath();
        log.info( "Filtering request path: " + path );

        // IMPORTANT!!! First, acknowledge any pre-flight test from browsers
        // before validating the headers (CORS stuff)
        if ( requestCtx.getRequest().getMethod().equals( "OPTIONS" ) ) {
            requestCtx.abortWith( Response.status( Response.Status.OK ).build() );

            return;
        }

        // Then check if the service key exists and is valid.
        DemoAuthenticator demoAuthenticator = DemoAuthenticator.getInstance();
        String serviceKey = requestCtx.getHeaderString( DemoHTTPHeaderNames.SERVICE_KEY );

        if ( !demoAuthenticator.isServiceKeyValid( serviceKey ) ) {
            // Kick anyone without a valid service key
            requestCtx.abortWith( Response.status( Response.Status.UNAUTHORIZED ).build() );

            return;
        }

        // For any other method besides login, the authToken must be verified
        if ( !path.startsWith( "/demo-business-resource/login/" ) ) {
            String authToken = requestCtx.getHeaderString( DemoHTTPHeaderNames.AUTH_TOKEN );

            // If it isn't valid, just kick them out.
            if ( !demoAuthenticator.isAuthTokenValid( serviceKey, authToken ) ) {
                requestCtx.abortWith( Response.status( Response.Status.UNAUTHORIZED ).build() );
            }
        }
    }
}

To get a header value, we invoke the getHeaderString() method of the ContainerRequestContext instance, for example:

String serviceKey = requestCtx.getHeaderString( "service_key" );

The rest of the code in DemoRESTRequestFilter is pretty straightforward, validating and verifying the service key and the authorization token.

REST Service Deployment

Don't forget the web.xml entries that enable the REST service. Codes for web.xml:

<servlet>
    <servlet-name>javax.ws.rs.core.Application</servlet-name>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>javax.ws.rs.core.Application</servlet-name>
    <url-pattern>/rest-api/*</url-pattern>
</servlet-mapping>
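As an aside, on a Servlet 3.x container the same /rest-api/* mapping can usually be achieved without any web.xml by subclassing Application; the class name below is made up for illustration:

import javax.ws.rs.ApplicationPath;
import javax.ws.rs.core.Application;

// An empty Application subclass is enough: JAX-RS scans for resources
// and providers, and @ApplicationPath replaces the web.xml servlet mapping.
@ApplicationPath( "rest-api" )
public class RestApiApplicationSketch extends Application {
}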
For this demo, I have packaged the compiled code into a war file named RESTSecurityWithHTTPHeaderDemo.war. I have chosen to deploy on GlassFish 4.0 on the domain developerscrappad.com (the domain of this blog). If you are going through everything in this tutorial, you may choose a different domain of your own. The REST API URLs will be in the format of:

http://<domain>:<port>/RESTSecurityWithHTTPHeaderDemo/rest-api/path/method-path/

Anyway, the summary of the URLs for the test client which I'm using is:

DemoBusinessRESTResourceProxy.login(): POST http://developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/login/
DemoBusinessRESTResourceProxy.demoGetMethod(): GET http://developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-get-method/
DemoBusinessRESTResourceProxy.demoPostMethod(): POST http://developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-post-method/
DemoBusinessRESTResourceProxy.logout(): POST http://developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/logout/

The REST Client

Putting it all together, here's a REST client which I've written to test the REST APIs. The REST client is just an HTML file (specifically HTML5, which supports web storage) that leverages jQuery for the REST API calls. What the REST client does is as follows:

1. First, the REST client makes a REST API call without a service key and authorization token. The call is rejected with HTTP status 401 (Unauthorized).
2. Next, it performs a login with the specific service key (hard-coded for now in the authenticator class) for "username2". Once the authorization token has been received, it is stored in sessionStorage for further use.
3. Then, it calls the dummy GET and POST methods.
4. After that, it performs a logout.
5. Once the user is logged out, the client performs another call to the dummy GET and POST methods, but access is denied with HTTP status 401 because the authorization token has expired.

Codes for rest-auth-test.html:

<html>
<head>
    <title>REST Authentication Tester</title>
    <meta charset="UTF-8">
</head>
<body>
    <div id="logMsgDiv"></div>

    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
    <script type="text/javascript">
        var $ = jQuery.noConflict();

        // Disable async
        $.ajaxSetup( { async: false } );

        // Using Service Key 3b91cab8-926f-49b6-ba00-920bcf934c2a and username2

        // This is what happens when you call the REST APIs without a service key and authorisation token
        $.ajax( {
            cache: false,
            crossDomain: true,
            url: "http://www.developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-post-method/",
            type: "POST",
            success: function( jsonObj, textStatus, xhr ) {
                var htmlContent = $( "#logMsgDiv" ).html( ) + "<p style='color: red;'>If this portion is executed, something must be wrong</p>";
                $( "#logMsgDiv" ).html( htmlContent );
            },
            error: function( xhr, textStatus, errorThrown ) {
                var htmlContent = $( "#logMsgDiv" ).html( )
                    + "<p style='color: red;'>This is what happens when you call the REST APIs without a service key and authorisation token."
                    + "<br />HTTP Status: " + xhr.status + ", Unauthorized access to demo-post-method</p>";

                $( "#logMsgDiv" ).html( htmlContent );
            }
        } );

        // Performing login with username2 and passwordForUser2
        $.ajax( {
            cache: false,
            crossDomain: true,
            headers: { "service_key": "3b91cab8-926f-49b6-ba00-920bcf934c2a" },
            dataType: "json",
            url: "http://www.developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/login/",
            type: "POST",
            data: { "username": "username2", "password": "passwordForUser2" },
            success: function( jsonObj, textStatus, xhr ) {
                sessionStorage.auth_token = jsonObj.auth_token;

                var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>Perform Login. Gotten auth-token as: " + sessionStorage.auth_token + "</p>";
                $( "#logMsgDiv" ).html( htmlContent );
            },
            error: function( xhr, textStatus, errorThrown ) {
                console.log( "HTTP Status: " + xhr.status );
                console.log( "Error textStatus: " + textStatus );
                console.log( "Error thrown: " + errorThrown );
            }
        } );

        // After login, execute demoGetMethod with the auth-token obtained
        $.ajax( {
            cache: false,
            crossDomain: true,
            headers: {
                "service_key": "3b91cab8-926f-49b6-ba00-920bcf934c2a",
                "auth_token": sessionStorage.auth_token
            },
            dataType: "json",
            url: "http://www.developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-get-method/",
            type: "GET",
            success: function( jsonObj, textStatus, xhr ) {
                var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>After login, execute demoGetMethod with the auth-token obtained.
JSON Message: " + jsonObj.message + "</p>"; $( "#logMsgDiv" ).html( htmlContent ); }, error: function( xhr, textStatus, errorThrown ) { console.log( "HTTP Status: " + xhr.status ); console.log( "Error textStatus: " + textStatus ); console.log( "Error thrown: " + errorThrown ); } } );   // Execute demoPostMethod with the auth-token obtained $.ajax( { cache: false, crossDomain: true, headers: { "service_key": "3b91cab8-926f-49b6-ba00-920bcf934c2a", "auth_token": sessionStorage.auth_token }, dataType: "json", url: "http://www.developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-post-method/", type: "POST", success: function( jsonObj, textStatus, xhr ) { var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>Execute demoPostMethod with the auth-token obtained. JSON message: " + jsonObj.message + "</p>"; $( "#logMsgDiv" ).html( htmlContent ); }, error: function( xhr, textStatus, errorThrown ) { console.log( "HTTP Status: " + xhr.status ); console.log( "Error textStatus: " + textStatus ); console.log( "Error thrown: " + errorThrown ); } } );   // Let's logout after all the above. No content expected $.ajax( { cache: false, crossDomain: true, headers: { "service_key": "3b91cab8-926f-49b6-ba00-920bcf934c2a", "auth_token": sessionStorage.auth_token }, url: "http://www.developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/logout/", type: "POST", success: function( jsonObj, textStatus, xhr ) { var htmlContent = $( "#logMsgDiv" ).html( ) + "<p>Let's logout after all the above. No content expected.</p>"; $( "#logMsgDiv" ).html( htmlContent ); }, error: function( xhr, textStatus, errorThrown ) { console.log( "HTTP Status: " + xhr.status ); console.log( "Error textStatus: " + textStatus ); console.log( "Error thrown: " + errorThrown ); } } );   // This is what happens when someone reuses the authorisation token after a user had been logged out $.ajax( { cache: false, crossDomain: true, headers: { "service_key": "3b91cab8-926f-49b6-ba00-920bcf934c2a", "auth_token": sessionStorage.auth_token }, url: "http://www.developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-get-method/", type: "GET", success: function( jsonObj, textStatus, xhr ) { var htmlContent = $( "#logMsgDiv" ).html( ) + "<p style='color: red;'>If this is portion is executed, something must be wrong</p>"; $( "#logMsgDiv" ).html( htmlContent ); }, error: function( xhr, textStatus, errorThrown ) { var htmlContent = $( "#logMsgDiv" ).html( ) + "<p style='color: red;'>This is what happens when someone reuses the authorisation token after a user had been logged out" + "<br />HTTP Status: " + xhr.status + ", Unauthorized access to demo-get-method</p>";   $( "#logMsgDiv" ).html( htmlContent ); } } ); </script> </body> </html>The Result The rest-auth-test.html need not be packaged with the war file, this is to separate invoking client script from the server-side app to simulate a cross-origin request. To run the rest-auth-test.html, all you need to do is to execute it from a web browser. For me, I have done this through Firefox with the Firebug plugin, and the below is the result:It worked pretty well. The first and the last request will be rejected as 401 (Unauthorized) HTTP Status because it was executed before authentication and after logout (invalid auth_token). 
Final Words

When it comes to dealing with custom HTTP headers in a JAX-RS 2.0 application, just remember to have the custom HTTP header names included as part of "Access-Control-Allow-Headers" in the response filter, e.g.

Access-Control-Allow-Headers: custom_header_name1, custom_header_name2

After that, the HTTP headers can be retrieved easily in the REST Web Service methods with the help of javax.ws.rs.core.HttpHeaders through the REST context. Don't forget the restrictions and impact of CORS, which should be taken care of in both the REST request and response interceptors. Thank you for reading and I hope this article helps. Related Articles:

Java EE 7 / JAX-RS 2.0 – CORS on REST (How to make REST APIs accessible from a different domain)
http://en.wikipedia.org/wiki/Cross-origin_resource_sharing
http://www.html5rocks.com/en/tutorials/cors/
http://www.w3.org/TR/cors/
https://developer.mozilla.org/en/docs/HTTP/Access_control_CORS

Reference: Java EE 7 / JAX-RS 2.0: Simple REST API Authentication & Authorization with Custom HTTP Header from our JCG partner Max Lam at the A Developer's Scrappad blog....

Grails Generate Asynchronous Controller

Since version 2.3, Grails supports asynchronous parallel programming to make use of modern multi-core hardware. Therefore, a new Grails command has been added to generate asynchronous controllers for domain classes. The generated controller contains CRUD actions for a given domain class. In the example below, we will generate a default asynchronous implementation of a Grails controller. First we create a domain object:

$ grails create-domain-class grails.data.Movie

Second, we generate the asynchronous controller using the new generate-async-controller command:

$ grails generate-async-controller grails.data.Movie

Grails now generates an asynchronous controller with the name MovieController. Below you can see the default implementation of the index method:

def index(Integer max) {
    params.max = Math.min(max ?: 10, 100)
    Movie.async.task {
        [movieInstanceList: list(params), count: count()]
    }.then { result ->
        respond result.movieInstanceList, model: [movieInstanceCount: result.count]
    }
}

The async namespace makes sure the GORM methods inside the task method are performed on a different thread, and are therefore asynchronous. The task method returns a Promise object, which you can use to register callback operations like onError and onComplete.

Reference: Grails Generate Asynchronous Controller from our JCG partner Albert van Veen at the JDriven blog....

Testing your plugin with multiple version of Play

So, you've written a plugin for Play… are you sure it works? I've been giving Deadbolt some love recently, and as part of the work I've added a test application for functional testing. This is an application that uses all the features of Deadbolt, and is driven by HTTP calls from REST-Assured. Initially, it was based on Play 2.3.5, but this ignores the supported Play versions 2.3.1 through 2.3.4. Additionally, those hard-working people on the Play team at Typesafe keep cranking out new feature-filled versions. On top of that, support for Scala 2.10.4 and 2.11.1 is required, so cross-Scala-version testing is needed. Clearly, testing your plugin against a single version of Play is not enough. Seems like some kind of continuous integration could help us out here…

Building on Travis CI

Deadbolt builds on Travis CI, a great CI platform that's free for open-source projects. This runs the tests, and publishes snapshot versions to Sonatype. I'm not going into detail on this, because there's already a great guide over at Cake Solutions. You can find the guide here: http://www.cakesolutions.net/teamblogs/publishing-artefacts-to-oss-sonatype-nexus-using-sbt-and-travis-ci-here…

I've made some changes to the build script because the plugin code is not at the top level of the repository; rather, it resides one level down. The repository looks like this:

deadbolt-2-java
|-code       # plugin code lives here
|-test-app   # the functional test application

As a result, the .travis.yml file that defines the build looks like this:

language: scala
jdk:
- openjdk6
scala:
- 2.11.1
script:
- cd code
- sbt ++$TRAVIS_SCALA_VERSION +test
- cd ../test-app
- sbt ++$TRAVIS_SCALA_VERSION +test
- cd ../code
- sbt ++$TRAVIS_SCALA_VERSION +publish-local
after_success:
- ! '[[ $TRAVIS_BRANCH == "master" ]] && { sbt +publish; };'
env:
  global:
  - secure: foo
  - secure: bar

This sets the Java version (people get angry when I don't provide Java 6-compatible versions), and defines a script as the build process. Note the cd commands used to switch between the plugin directory and the test-app directory. This script already covers the cross-Scala-version requirement: prefixing a command with +, e.g. +test, will execute that command against all versions of Scala defined in your build.sbt. It's important to note that although only Scala 2.11.1 is defined in .travis.yml, SBT itself will take care of setting the current build version based on build.sbt.

crossScalaVersions := Seq("2.11.1", "2.10.4")

Testing multiple versions of Play

However, the version of Play used by the test-app is still hard-coded to 2.3.5 in test-app/project/plugins.sbt:

addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.3.5")

Happily, .sbt files are not just configuration files but actual code. This means we can change the Play version based on a system property. A default value of 2.3.5 is given to allow the tests to run locally without having to set the version:

addSbtPlugin("com.typesafe.play" % "sbt-plugin" % System.getProperty("playTestVersion", "2.3.5"))

Finally, we update .travis.yml to take advantage of this.
language: scala
jdk:
- openjdk6
scala:
- 2.11.1
script:
- cd code
- sbt ++$TRAVIS_SCALA_VERSION +test
- cd ../test-app
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.1 +test
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.2 +test
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.3 +test
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.4 +test
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.5 +test
- cd ../code
- sbt ++$TRAVIS_SCALA_VERSION +publish-local
after_success:
- ! '[[ $TRAVIS_BRANCH == "master" ]] && { sbt +publish; };'
env:
  global:
  - secure: foo
  - secure: bar

This means the following steps occur during the build:

sbt ++$TRAVIS_SCALA_VERSION +test
- Run the plugin tests against Scala 2.11.1
- Run the plugin tests against Scala 2.10.4

sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.1 +test
- Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.1
- Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.1

sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.2 +test
- Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.2
- Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.2

sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.3 +test
- Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.3
- Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.3

sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.4 +test
- Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.4
- Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.4

sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.5 +test
- Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.5
- Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.5

If all these steps pass, the after_success branch of the build script will execute. If any of the steps fail, the build will break and the snapshots won't be published.

You can take a look at a repository using this approach here: https://github.com/schaloner/deadbolt-2-java. The resulting Travis build is available here: https://travis-ci.org/schaloner/deadbolt-2-java.

Reference: Testing your plugin with multiple version of Play from our JCG partner Steve Chaloner at the Objectify blog....

Quo Vadis JUnit

For me, JUnit is the most important library of the Java universe. But I think a new version of it is overdue. With its approach of having a method definition as a test definition, JUnit is mighty inflexible and needs various hacks… sorry, features, to do what you really should be able to do with basic (Java 8) language features. If you aren't sure what I'm talking about, check out this article about ScalaTest. Something like this should be the standard for JUnit. Of course you can implement your own TestRunner to get something like this going. But there are already many important TestRunners (SpringJUnit4ClassRunner anyone?) and they have the huge drawback that you can have only one of them. Another alternative would be to just say good-bye to JUnit and use a different test framework. But all these other test frameworks don't have the support from third-party tools that JUnit has, so I'd really prefer JUnit to evolve instead of being replaced by something else. I was thinking about these issues for quite some time and actually brought them up on the JUnit mailing list, with lots of interesting feedback, but nothing happened. So when I met Marc, one of the JUnit committers, at the XP-Days, we started to discuss the situation, joined by Stefan, another JUnit committer, and various XP-Days participants. And as so often, nothing is as easy as it seems. JUnit is a very successful library, but it also doesn't offer all the features people want or need. This has the effect that people use JUnit in all kinds of weird ways, which makes it really hard to evolve. For example, Marc and Stefan told a story about the latest version of JUnit, where they learned that a certain IDE uses reflection on a private field of JUnit, resulting in a "bug" when the name of that field was changed. Therefore it seems that before one can make a change as big as a different default TestRunner, one has to revamp JUnit. I envision something like the following:

- gather the various features that others bolted onto JUnit that probably should be part of JUnit itself
- provide a clean, supported API for those
- apply gentle pressure and time for third parties to switch to the new APIs
- behind that API, provide a new, more flexible way to create tests
- profit

And since JUnit is an open source project and all developers seem to work on it only in their private time, we started right there at the XP-Days gathering stuff that needs consideration. I put the results in a wiki page in the JUnit github repository. Get over there and see if you can add something.

Reference: Quo Vadis JUnit from our JCG partner Jens Schauder at the Schauderhaft blog....
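To make the kind of flexibility discussed above a bit more concrete, here is a purely hypothetical sketch of tests as plain Java 8 values instead of method definitions. None of this is a real JUnit API; it only illustrates what basic language features already allow:

import java.util.Arrays;
import java.util.List;

public class LambdaTestsSketch {

    interface TestBody {
        void run() throws Exception;
    }

    static final class NamedTest {
        final String name;
        final TestBody body;

        NamedTest( String name, TestBody body ) {
            this.name = name;
            this.body = body;
        }
    }

    static NamedTest test( String name, TestBody body ) {
        return new NamedTest( name, body );
    }

    public static void main( String[] args ) {
        // Tests are ordinary values: they can be generated, filtered or composed.
        List<NamedTest> tests = Arrays.asList(
            test( "an empty string has length 0", () -> {
                if ( "".length() != 0 ) {
                    throw new AssertionError( "expected length 0" );
                }
            } ),
            test( "substring works from an index", () -> {
                if ( !"JUnit".substring( 1 ).equals( "Unit" ) ) {
                    throw new AssertionError( "unexpected substring" );
                }
            } )
        );

        for ( NamedTest t : tests ) {
            try {
                t.body.run();
                System.out.println( "PASSED: " + t.name );
            } catch ( Throwable e ) {
                System.out.println( "FAILED: " + t.name + " (" + e + ")" );
            }
        }
    }
}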