What's New Here?


Lambda Expressions and Stream API: basic examples

This blog post contains a list of basic Lambda expression and Stream API examples I used in a live coding presentation I gave in June 2014 at the Java User Group – Politechnica Gedanensis (Technical University of Gdańsk) and at Goyello.

Lambda Expressions Syntax

The most common example:

```java
Runnable runnable = () -> System.out.println("Hello!");
Thread t = new Thread(runnable);
t.start();
t.join();
```

One can write this differently:

```java
Thread t = new Thread(() -> System.out.println("Hello!"));
t.start();
t.join();
```

What about arguments?

```java
Comparator<String> stringComparator = (s1, s2) -> s1.compareTo(s2);
```

And expanding to a full expression:

```java
Comparator<String> stringComparator = (String s1, String s2) -> {
    System.out.println("Comparing...");
    return s1.compareTo(s2);
};
```

Functional interface

Lambda expressions let you express instances of single-abstract-method interfaces more compactly. Such interfaces are called functional interfaces and can be annotated with @FunctionalInterface:

```java
@FunctionalInterface
public interface MyFunctionalInterface<T> {
    boolean test(T t);
}

// Usage
MyFunctionalInterface<String> l = s -> s.startsWith("A");
```

Method references

Method references are compact, easy-to-read lambda expressions for methods that already have a name. Let's look at this simple example:

```java
public class Sample {

    public static void main(String[] args) {
        Runnable runnable = Sample::run;
    }

    private static void run() {
        System.out.println("Hello!");
    }
}
```

Another example:

```java
public static void main(String[] args) {
    Sample sample = new Sample();
    Comparator<String> stringLengthComparator = sample::compareLength;
}

private int compareLength(String s1, String s2) {
    return s1.length() - s2.length();
}
```

Stream API – basics

A stream is a sequence of elements supporting sequential and parallel bulk operations.
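For reference, method references come in a few flavors beyond the two shown above. Here is a quick sketch (the class and field names are mine, not from the talk):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiFunction;
import java.util.function.Function;
import java.util.function.Supplier;

public class MethodRefKinds {
    // Static method reference: ClassName::staticMethod
    static Function<String, Integer> parse = Integer::parseInt;

    // Bound instance method reference: instance::method
    static Function<String, Boolean> startsWithA = "Apple"::startsWith;

    // Unbound instance method reference: ClassName::instanceMethod
    // (the first argument becomes the receiver)
    static BiFunction<String, String, Boolean> prefix = String::startsWith;

    // Constructor reference: ClassName::new
    static Supplier<List<String>> listFactory = ArrayList::new;
}
```

Each of these is just a shorthand for the equivalent lambda, e.g. `String::startsWith` for `(s, p) -> s.startsWith(p)`.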
Iterating over a list

```java
List<String> list = Arrays.asList("one", "two", "three", "four", "five", "six");

list.stream()
    .forEach(s -> System.out.println(s));
```

Filtering

Java 8 introduced default methods in interfaces. They come in handy in the Stream API:

```java
Predicate<String> lowerThanOrEqualToFour = s -> s.length() <= 4;
Predicate<String> greaterThanOrEqualToThree = s -> s.length() >= 3;

list.stream()
    .filter(lowerThanOrEqualToFour.and(greaterThanOrEqualToThree))
    .forEach(s -> System.out.println(s));
```

Sorting

```java
Predicate<String> lowerThanOrEqualToFour = s -> s.length() <= 4;
Predicate<String> greaterThanOrEqualToThree = s -> s.length() >= 3;
Comparator<String> byLastLetter = (s1, s2) -> s1.charAt(s1.length() - 1) - s2.charAt(s2.length() - 1);
Comparator<String> byLength = (s1, s2) -> s1.length() - s2.length();

list.stream()
    .filter(lowerThanOrEqualToFour.and(greaterThanOrEqualToThree))
    .sorted(byLastLetter.thenComparing(byLength))
    .forEach(s -> System.out.println(s));
```

In the above example the default method `and` of java.util.function.Predicate is used. Default (and static) methods are new to interfaces in Java 8.
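Java 8 also added comparator factory methods, so the hand-rolled subtraction comparators above can be written with Comparator.comparingInt, which avoids the overflow risk of int subtraction. The pipeline below is my restatement of the example, not code from the talk:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class ComparatorFactories {

    public static List<String> sortByLastLetterThenLength(List<String> input) {
        // comparingInt builds a comparator from a key-extractor lambda.
        Comparator<String> byLastLetter = Comparator.comparingInt(s -> s.charAt(s.length() - 1));
        Comparator<String> byLength = Comparator.comparingInt(String::length);
        return input.stream()
                .filter(s -> s.length() >= 3 && s.length() <= 4)
                .sorted(byLastLetter.thenComparing(byLength))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> list = Arrays.asList("one", "two", "three", "four", "five", "six");
        System.out.println(sortByLastLetterThenLength(list));
    }
}
```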
Limit

```java
Predicate<String> lowerThanOrEqualToFour = s -> s.length() <= 4;
Predicate<String> greaterThanOrEqualToThree = s -> s.length() >= 3;
Comparator<String> byLastLetter = (s1, s2) -> s1.charAt(s1.length() - 1) - s2.charAt(s2.length() - 1);
Comparator<String> byLength = (s1, s2) -> s1.length() - s2.length();

list.stream()
    .filter(lowerThanOrEqualToFour.and(greaterThanOrEqualToThree))
    .sorted(byLastLetter.thenComparing(byLength))
    .limit(4)
    .forEach(s -> System.out.println(s));
```

Collect to a list

```java
Predicate<String> lowerThanOrEqualToFour = s -> s.length() <= 4;
Predicate<String> greaterThanOrEqualToThree = s -> s.length() >= 3;
Comparator<String> byLastLetter = (s1, s2) -> s1.charAt(s1.length() - 1) - s2.charAt(s2.length() - 1);
Comparator<String> byLength = (s1, s2) -> s1.length() - s2.length();

List<String> result = list.stream()
    .filter(lowerThanOrEqualToFour.and(greaterThanOrEqualToThree))
    .sorted(byLastLetter.thenComparing(byLength))
    .limit(4)
    .collect(Collectors.toList());
```

Parallel processing

I used a quite common example of iterating over a list of files:

```java
public static void main(String[] args) {
    File[] files = new File("c:/windows").listFiles();
    Stream.of(files)
        .parallel()
        .forEach(Sample::process);
}

private static void process(File file) {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
    }
    System.out.println("Processing -> " + file);
}
```

Please note that while showing the examples I explained some known drawbacks of parallel processing of streams.
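As a sketch of one such drawback (my illustration, not from the original talk): side effects from a parallel forEach run on multiple threads, so the target must be thread-safe and the encounter order is generally lost, while collect handles both concerns:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelCaveats {

    // forEach on a parallel stream needs a thread-safe target; a plain
    // ArrayList here would be a race condition. Insertion order is arbitrary.
    public static int countWithForEach() {
        List<Integer> seen = new CopyOnWriteArrayList<>();
        IntStream.range(0, 1000).parallel().forEach(seen::add);
        return seen.size();
    }

    // collect() merges per-thread partial results safely and preserves
    // the encounter order of the source.
    public static List<Integer> collectOrdered() {
        return IntStream.range(0, 1000).parallel()
                .boxed()
                .collect(Collectors.toList());
    }
}
```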
Stream API – more examples

Mapping

Iterate over files in a directory and return a FileSize object:

```java
class FileSize {

    private final File file;
    private final Long size;

    FileSize(File file, Long size) {
        this.file = file;
        this.size = size;
    }

    // Allows FileSize::new to be used on a Stream<File>
    FileSize(File file) {
        this(file, file.length());
    }

    File getFile() {
        return file;
    }

    Long getSize() {
        return size;
    }

    String getName() {
        return getFile().getName();
    }

    String getFirstLetter() {
        return getName().substring(0, 1);
    }

    @Override
    public String toString() {
        // Objects.toStringHelper comes from Guava (MoreObjects.toStringHelper in newer versions)
        return Objects.toStringHelper(this)
            .add("file", file)
            .add("size", size)
            .toString();
    }
}
```

The final code of the mapping:

```java
File[] files = new File("c:/windows").listFiles();
List<FileSize> result = Stream.of(files)
    .map(FileSize::new)
    .collect(Collectors.toList());
```

Grouping

Group FileSize objects by the first letter of the file name:

```java
Map<String, List<FileSize>> result = Stream.of(files)
    .map(FileSize::new)
    .collect(Collectors.groupingBy(FileSize::getFirstLetter));
```

Reduce

Get the biggest/smallest file in a directory:

```java
Optional<FileSize> filesize = Stream.of(files)
    .map(FileSize::new)
    .reduce((fs1, fs2) -> fs1.getSize() > fs2.getSize() ? fs1 : fs2);
```

In case you don't need a FileSize object, but only a number:

```java
OptionalLong max = Stream.of(files)
    .map(FileSize::new)
    .mapToLong(fs -> fs.getSize())
    .max();
```

Reference: Lambda Expressions and Stream API: basic examples from our JCG partner Rafal Borowiec at the Codeleak.pl blog....
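The reduce above can also be phrased with Stream.max and a comparator, which states the intent directly. A self-contained sketch (the Item class is a hypothetical stand-in for FileSize so the example runs without touching the file system):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class MaxBySize {

    // Hypothetical stand-in for the post's FileSize.
    static class Item {
        private final String name;
        private final long size;

        Item(String name, long size) {
            this.name = name;
            this.size = size;
        }

        String getName() { return name; }
        long getSize() { return size; }
    }

    // reduce((fs1, fs2) -> fs1.getSize() > fs2.getSize() ? fs1 : fs2)
    // and max(comparator) are equivalent for finding the biggest element.
    public static Optional<Item> biggest(List<Item> items) {
        return items.stream().max(Comparator.comparingLong(Item::getSize));
    }
}
```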

Graph Degree Distributions using R over Hadoop

There are two common types of graph engines. One type is focused on providing real-time, traversal-based algorithms over linked-list graphs represented on a single server. Such engines are typically called graph databases, and some of the vendors include Neo4j, OrientDB, DEX, and InfiniteGraph. The other type of graph engine is focused on batch processing using vertex-centric message passing within a graph represented across a cluster of machines. Graph engines of this form include Hama, Golden Orb, Giraph, and Pregel.

The purpose of this post is to demonstrate how to express the computation of two fundamental graph statistics: each as a graph traversal and as a MapReduce algorithm. The graph engines explored for this purpose are Neo4j and Hadoop. However, with respect to Hadoop, instead of focusing on a particular vertex-centric BSP-based graph-processing package such as Hama or Giraph, the results presented are via native Hadoop (HDFS + MapReduce). Moreover, instead of developing the MapReduce algorithms in Java, the R programming language is used. RHadoop is a small, open-source package developed by Revolution Analytics that binds R to Hadoop and allows for the representation of MapReduce algorithms using native R.

The two graph algorithms presented compute degree statistics: vertex in-degree and graph in-degree distribution. Both are related; in fact, the results of the first can be used as the input to the second. That is, graph in-degree distribution is a function of vertex in-degree. Together, these two fundamental statistics serve as a foundation for more quantifying statistics developed in the domains of graph theory and network science.

- Vertex in-degree: How many incoming edges does vertex X have?
- Graph in-degree distribution: How many vertices have X number of incoming edges?

These two algorithms are calculated over an artificially generated graph that contains 100,000 vertices and 704,002 edges. A subset is diagrammed on the left.
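In any language, the two statistics boil down to two counting passes, where the second pass counts over the output of the first. A minimal sketch in plain Java over an edge list (class and method names are mine):

```java
import java.util.HashMap;
import java.util.Map;

public class DegreeStats {

    // Edge list as [tail, head] pairs; counts incoming edges per head vertex.
    // Vertices with no incoming edges do not appear, matching the MapReduce
    // job described later in the post.
    public static Map<Integer, Integer> vertexInDegree(int[][] edges) {
        Map<Integer, Integer> inDegree = new HashMap<>();
        for (int[] edge : edges) {
            inDegree.merge(edge[1], 1, Integer::sum);
        }
        return inDegree;
    }

    // Graph in-degree distribution: how many vertices share each in-degree value.
    public static Map<Integer, Integer> inDegreeDistribution(Map<Integer, Integer> inDegree) {
        Map<Integer, Integer> distribution = new HashMap<>();
        for (int degree : inDegree.values()) {
            distribution.merge(degree, 1, Integer::sum);
        }
        return distribution;
    }
}
```

Using the five sample edges from the R session below (2→0, 2→1, 3→0, 4→0, 4→1), vertex 0 has in-degree 3 and vertex 1 has in-degree 2, so the distribution records one vertex of degree 3 and one of degree 2.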
The algorithm used to generate the graph is called preferential attachment. Preferential attachment yields graphs with "natural statistics" that have degree distributions analogous to real-world graphs/networks. The respective igraph R code is provided below. Once constructed and simplified (i.e. no more than one edge between any two vertices and no self-loops), the vertices and edges are counted. Next, the first five edges are iterated and displayed. The first edge reads, "vertex 2 is connected to vertex 0." Finally, the graph is persisted to disk as a GraphML file.

```r
~$ r

R version 2.13.1 (2011-07-08)
Copyright (C) 2011 The R Foundation for Statistical Computing

> g <- simplify(barabasi.game(100000, m=10))
> length(V(g))
[1] 100000
> length(E(g))
[1] 704002
> E(g)[1:5]
Edge sequence:
[1] 2 -> 0
[2] 2 -> 1
[3] 3 -> 0
[4] 4 -> 0
[5] 4 -> 1
> write.graph(g, '/tmp/barabasi.xml', format='graphml')
```

Graph Statistics using Neo4j

When a graph is on the order of 10 billion elements (vertices + edges), then a single-server graph database is sufficient for performing graph analytics. As a side note, when those analytics/algorithms are "ego-centric" (i.e. when the traversal emanates from a single vertex or small set of vertices), then they can typically be evaluated in real-time (e.g. < 1000 ms). To compute these in-degree statistics, Gremlin is used. Gremlin is a graph traversal language developed by TinkerPop that is distributed with Neo4j, OrientDB, DEX, InfiniteGraph, and the RDF engine Stardog. The Gremlin code below loads the GraphML file created by R in the previous section into Neo4j. It then performs a count of the vertices and edges in the graph.
```text
~$ gremlin

         \,,,/
         (o o)
-----oOOo-(_)-oOOo-----
gremlin> g = new Neo4jGraph('/tmp/barabasi')
==>neo4jgraph[EmbeddedGraphDatabase [/tmp/barabasi]]
gremlin> g.loadGraphML('/tmp/barabasi.xml')
==>null
gremlin> g.V.count()
==>100000
gremlin> g.E.count()
==>704002
```

The Gremlin code to calculate vertex in-degree is provided below. The first line iterates over all vertices and outputs the vertex and its in-degree. The second line provides a range filter in order to only display the first five vertices and their in-degree counts. Note that the clarifying diagrams demonstrate the transformations on a toy graph, not the 100,000-vertex graph used in the experiment.

```text
gremlin> g.V.transform{[it, it.in.count()]}
...
gremlin> g.V.transform{[it, it.in.count()]}[0..4]
==>[v[1], 99104]
==>[v[2], 26432]
==>[v[3], 20896]
==>[v[4], 5685]
==>[v[5], 2194]
```

Next, to calculate the in-degree distribution of the graph, the following Gremlin traversal can be evaluated. This expression iterates through all the vertices in the graph, emits their in-degree, and then counts the number of times a particular in-degree is encountered. These counts are saved into an internal map maintained by groupCount. The final cap step yields the internal groupCount map. In order to only display the top five counts, a range filter is applied. The first line emitted says: "There are 52,611 vertices that do not have any incoming edges." The second line says: "There are 16,758 vertices that have one incoming edge."

```text
gremlin> g.V.transform{it.in.count()}.groupCount.cap
...
gremlin> g.V.transform{it.in.count()}.groupCount.cap.next()[0..4]
==>0=52611
==>1=16758
==>2=8216
==>3=4805
==>4=3191
```

To calculate both statistics by using the results of the previous computation in the latter, the following traversal can be executed. This representation has a direct correlate to how vertex in-degree and graph in-degree distribution are calculated using MapReduce (demonstrated in the next section).
```text
gremlin> degreeV = [:]
gremlin> degreeG = [:]
gremlin> g.V.transform{[it, it.in.count()]}.sideEffect{degreeV[it[0]] = it[1]}.transform{it[1]}.groupCount(degreeG)
...
gremlin> degreeV[0..4]
==>v[1]=99104
==>v[2]=26432
==>v[3]=20896
==>v[4]=5685
==>v[5]=2194
gremlin> degreeG.sort{a,b -> b.value <=> a.value}[0..4]
==>0=52611
==>1=16758
==>2=8216
==>3=4805
==>4=3191
```

Graph Statistics using Hadoop

When a graph is on the order of 100+ billion elements (vertices + edges), then a single-server graph database will not be able to represent nor process the graph. A multi-machine graph engine is required. While native Hadoop is not a graph engine, a graph can be represented in its distributed HDFS file system and processed using its distributed MapReduce processing framework. The graph generated previously is loaded up in R and a count of its vertices and edges is conducted. Next, the graph is represented as an edge list. An edge list (for a single-relational graph) is a list of pairs, where each pair is ordered and denotes the tail vertex id and the head vertex id of the edge. The edge list can be pushed to HDFS using RHadoop. The variable edge.list represents a pointer to this HDFS file.

```r
> g <- read.graph('/tmp/barabasi.xml', format='graphml')
> length(V(g))
[1] 100000
> length(E(g))
[1] 704002
> edge.list <- to.dfs(get.edgelist(g))
```

In order to calculate vertex in-degree, a MapReduce job is evaluated on edge.list. The map function is fed key/value pairs where the key is an edge id and the value is the ids of the tail and head vertices of the edge (represented as a list). For each key/value input, the head vertex (i.e. incoming vertex) is emitted along with the number 1. The reduce function is fed key/value pairs where the keys are vertices and the values are lists of 1s. The output of the reduce job is a vertex id and the length of the list of 1s (i.e. the number of times that vertex was seen as an incoming/head vertex of an edge).
The results of this MapReduce job are saved to HDFS and degree.V is the pointer to that file. The final expression in the code chunk below reads the first key/value pair from degree.V: vertex 10030 has an in-degree of 5.

```r
> degree.V <- mapreduce(edge.list,
    map=function(k,v) keyval(v[2],1),
    reduce=function(k,v) keyval(k,length(v)))
> from.dfs(degree.V)[[1]]
$key
[1] 10030

$val
[1] 5

attr(,"rmr.keyval")
[1] TRUE
```

In order to calculate graph in-degree distribution, a MapReduce job is evaluated on degree.V. The map function is fed the key/value results stored in degree.V. The function emits the degree of the vertex with the number 1 as its value. For example, if vertex 6 has an in-degree of 100, then the map function emits the key/value [100,1]. Next, the reduce function is fed keys that represent degrees with values that are the number of times that degree was seen as a list of 1s. The output of the reduce function is the key along with the length of the list of 1s (i.e. the number of times a degree of a particular count was encountered). The final code fragment below grabs the first key/value pair from degree.g: degree 1354 was encountered 1 time.

```r
> degree.g <- mapreduce(degree.V,
    map=function(k,v) keyval(v,1),
    reduce=function(k,v) keyval(k,length(v)))
> from.dfs(degree.g)[[1]]
$key
[1] 1354

$val
[1] 1

attr(,"rmr.keyval")
[1] TRUE
```

In concert, these two computations can be composed into a single MapReduce expression.

```r
> degree.g <- mapreduce(
    mapreduce(edge.list,
      map=function(k,v) keyval(v[2],1),
      reduce=function(k,v) keyval(k,length(v))),
    map=function(k,v) keyval(v,1),
    reduce=function(k,v) keyval(k,length(v)))
```

Note that while a graph can be on the order of 100+ billion elements, the degree distribution is much smaller and can typically fit into memory. In general, edge.list > degree.V > degree.g. Due to this fact, it is possible to pull the degree.g file off of HDFS, place it into main memory, and plot the results stored within.
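The composed expression has a direct analogue in any environment with a group-and-count primitive. As a sketch under my own naming, the same two-stage composition written with Java 8 collectors: the inner grouping counts edges per head vertex, and the outer grouping counts vertices per degree.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class ComposedDegreeJob {

    // Mirrors mapreduce(mapreduce(edge.list, ...), ...): group edges by head
    // vertex and count (vertex -> in-degree), then group those counts by
    // value and count again (degree -> frequency).
    public static Map<Long, Long> degreeDistribution(List<int[]> edgeList) {
        return edgeList.stream()
                .collect(Collectors.groupingBy(e -> e[1], Collectors.counting()))
                .values().stream()
                .collect(Collectors.groupingBy(d -> d, Collectors.counting()));
    }
}
```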
The degree.g distribution is plotted on a log/log plot. As suspected, the preferential attachment algorithm generated a graph with natural "scale-free" statistics: most vertices have a small in-degree and very few have a large in-degree.

```r
> degree.g.memory <- from.dfs(degree.g)
> plot(keys(degree.g.memory), values(degree.g.memory),
    log='xy',
    main='Graph In-Degree Distribution',
    xlab='in-degree', ylab='frequency')
```

Related Material

Cohen, J., "Graph Twiddling in a MapReduce World," Computing in Science & Engineering, IEEE, 11(4), pp. 29-41, July 2009.

Reference: Graph Degree Distributions using R over Hadoop from our JCG partner Marko Rodriguez at the AURELIUS blog....

Code For Maintainability So The Next Developer Doesn’t Hate You

Unless your problem domain includes some specific need for highly optimized code, consider what your biggest coding priority is. I'm going to suggest that you make it maintainability.

There was an online meme going around recently that suggested you should code as if the person who will maintain your code is a homicidal maniac who knows your address. Okay, that's over the top, but I would like to suggest you code as if future changes to your code will cost someone money. Preferably, someone you like. The more time the next programmer has to spend figuring out why you did something, the more money it will cost.

Before I continue any further, I would like to apologize to anyone who has had to maintain any of the code I have written over the last, ahem, 20-plus years. Especially anyone who has stopped and said, "what was he thinking?" We all have "that" code out there, but I hope that I have less than most, as I am sure we all do.

How to achieve it?

It's easy enough to say you are coding for maintainability, but what should you do to achieve it? I'd love to see some suggestions in the comments. Here are some of mine.

No such thing as prototyping. I had the pleasure of working with a team who had to maintain and extend a project that started its life as a prototype. A couple of bright programmers put together a concept they thought would benefit their company. It was quickly slapped together and shown to their bosses and a marketing type or two. I know a few of you know where I'm going with this. Within a month, the marketing types had a client signed on to use this new service. Having been thrown together without any real architecture, the product was a real problem to maintain. I'm not saying never prototype, but when you do, keep in mind that this code may not be thrown away. Code the prototype using good software design skills.

Use TDD. I'm going to approach this from a different direction than most do.
Yes, Test Driven Development is great for maintainability because if you break something, your tests will let you know. But it is also a great way to document your code. Documentation in the comments is often wrong because the code changes but the documentation doesn't, or because the original programmer does not take the time to write well-understood documentation. Or, most likely, the original programmer never gets around to writing the comments at all. As long as the tests are being run, they will usually be a reflection of how the programmer expected the process to perform. If you fully test your code, the next programmer can get a really good idea of what you were thinking about from how you set up your tests. There is a lot to learn from how you build the objects you pass into your code.

Don't be cute. Sure, that way of using a while loop instead of a for loop is cool and different, but the next programmer has no idea why you did it. If you need to do something non-standard, document it in place and include why.

Peer review. We all get in a hurry trying to meet a deadline, and the brain locks up when trying to remember how to do something simple. We write some type of hack to get around it, thinking we'll go back and fix it later when more sleep and more caffeine have done their magic. But by then, you've slept and forgotten all about the hack. Having to explain why you did it that way to another programmer keeps some really bizarre code from getting checked in.

Build process and dependency control. At first glance, this may not seem to be an important part of writing maintainable code. Starting on any project, there is a huge curve in getting to know and understand that project. If the next developer can get past spending time figuring out what dependencies the project requires and what settings they have to change in their IDE, you've cut down a bit on the time it takes to maintain the project.

Read, read, and read some more. It's a great time to be a programmer.
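To illustrate the tests-as-documentation point with a hypothetical example (the class and rule are mine, not from the article): the test name and assertions state the business rule directly, and unlike a comment they fail loudly when the rule changes.

```java
public class DiscountCalculatorTest {

    // Hypothetical class under test: orders of $100.00 (10000 cents) or more get 10% off.
    static class DiscountCalculator {
        int priceAfterDiscountCents(int totalCents) {
            return totalCents >= 10000 ? totalCents * 9 / 10 : totalCents;
        }
    }

    // The test reads as a statement of the rule; a maintenance programmer
    // learns the intended behavior from the name and the boundary examples.
    static void ordersOfOneHundredDollarsOrMoreGetTenPercentOff() {
        DiscountCalculator calc = new DiscountCalculator();
        if (calc.priceAfterDiscountCents(10000) != 9000) throw new AssertionError();
        if (calc.priceAfterDiscountCents(9999) != 9999) throw new AssertionError();
    }

    public static void main(String[] args) {
        ordersOfOneHundredDollarsOrMoreGetTenPercentOff();
    }
}
```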
There are tons of articles and blogs that contain sample code all over the Internet. The publishing industry is trying hard to keep up with the ever-changing landscape. Reading code that contains best practices is an obvious way to improve your own code and help you create maintainable code. But also, reading code that does not follow best practices is a great way to see how not to do it. The trick is to know the difference between the two.

Refactor. When you have the logic worked out and your code now works, take some time to look through your code and see where you can tighten it up. This is a good time to see if you've repeated code that can be moved into its own method. Use this time to read the code like a maintenance programmer. Would you understand this code if you saw it for the first time?

Leave the code in better condition than when you found it. Many of us are loath to change working code just because it's "ugly," and I'm not giving out a license to wholesale refactor any process you open in an editor. But if you're updating code that is surrounded by hard-to-maintain code, don't take that as permission to write more bad code.

Final Thoughts

Thinking that your code will never be touched again is many things, but especially unrealistic. At the very least, business requirements change and may require modifications to your code. You may even be the person who has to maintain it, and trust me, after 6 months or so of writing other code, you will not be in the same frame of mind. Spend some time writing code that won't be cursed by the next programmer.

Reference: Code For Maintainability So The Next Developer Doesn't Hate You from our JCG partner Rik Scarborough at the Keyhole Software blog....

10 things you can do to make your app secure: #4 Access Control

This is #4 in a series on the OWASP Top 10 Proactive Controls: 10 things that developers can do to make sure that their app is secure.

Access Control, aka Authorization (deciding who needs what access to which data and to which features, and how these rules will be enforced), needs to be carefully thought through up front in design. It's difficult to retrofit access control later without making mistakes. Come up with a pattern early, and make sure that it is applied consistently. And make sure to follow these simple rules:

Deny by Default

In many apps, the default behaviour is to allow access to features and to data or other resources unless an access control check is added and the check fails. Take a few seconds and think about what could go wrong with this approach. If it's not obvious, go to OWASP's Top 10 list of the most serious application vulnerabilities, #7: Missing Function-Level Access Control. Then make sure to only permit access to a function if an authorization check passes.

What's your Access Control Policy anyway?

Access checks, even checks that are done properly using a positive access approach, are often sprinkled throughout application code, looking something like this:

```java
if (user.isManager() ||
    user.isAdministrator() ||
    user.isEditor() ||
    user.isUser()) {
    // execute action
}
```

The problem with this approach is that it's really hard to review your access control rules and make sure that they are correct, and it's hard to make changes because you can't be sure that you found all of the checks and changed them correctly. Instead of embedding access control rules based on the user id or role inside application logic, centralize access control rules in a data-driven authorization service which maps users against roles or other authorization schemes, and provide a simple API to this service that the application code can call. Much easier to audit, much more extensible and maintainable.
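A minimal sketch of what such a data-driven authorization service could look like (all names here are hypothetical; in practice the rules would live in a store an administrator can edit, not an in-memory map):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class AccessControlService {

    // Role -> set of permission strings; the single place where rules live.
    private final Map<String, Set<String>> rolePermissions = new HashMap<>();

    public void grant(String role, String permission) {
        rolePermissions.computeIfAbsent(role, r -> new HashSet<>()).add(permission);
    }

    // The one question application code asks. Deny by default: unknown roles
    // and unknown permissions both answer false.
    public boolean isAllowed(Set<String> userRoles, String permission) {
        return userRoles.stream()
                .map(role -> rolePermissions.getOrDefault(role, Collections.emptySet()))
                .anyMatch(perms -> perms.contains(permission));
    }
}
```

Application code then reduces the role-check chain above to a single call such as `if (authz.isAllowed(user.getRoles(), "order:edit"))`, and the rules can be audited and changed in one place.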
If this isn’t already available in the application framework that you are using, look for a good security library to do the job. Apache Shiro offers an easy and flexible access control framework which you can use to implement these ideas. OWASP’s ESAPI also has a framework to enforce fine-grained access control rules at function, service, URL, data, and file levels. Don’t trust – verify Back again to the issue of trusting data. Never use client-side data or other untrusted data in access control decisions. Only use trusted server-side data. For more on Access Control patterns and anti-patterns and common problems in implementing Access Controls properly, please read OWASP’s Access Control Cheat Sheet. Access Control is closely tied to Authentication – in fact, some people mix these ideas up entirely. So let’s look at key issues in implementing Authentication next.Reference: 10 things you can do to make your app secure: #4 Access Control from our JCG partner Jim Bird at the Building Real Software blog....

Creating logs in Android applications

For Android applications, logging is handled by the android.util.Log class, which is a basic logging class that stores the logs in a circular buffer for the whole device. All logs for the device can be seen in the LogCat tab in Eclipse, or read using the logcat command.

There are five levels of logs you can use in your application, from the most verbose to the least verbose:

- Verbose: For extra information messages that are only compiled in a debug application, but never included in a release application.
- Debug: For debug log messages that are always compiled, but stripped at runtime in a release application.
- Info: For information in the logs that will be written in debug and release.
- Warning: For a warning in the logs that will be written in debug and release.
- Error: For an error in the logs that will be written in debug and release.

A log message includes a tag identifying a group of messages, and the message itself. By default the log level of all tags is Info, which means that messages of level Debug and Verbose should never be shown unless the setprop command is used to change the log level. So, to write a verbose message to the log, you should call the isLoggable method to check if the message can be logged, and then call the logging method:

```java
if (Log.isLoggable(logMessageTag, Log.VERBOSE))
    Log.v(logMessageTag, logMessage);
```

And to show the Debug and Verbose log messages for a specific tag, run the setprop command while your device is plugged in. If you reboot the device you will have to run the command again.

```shell
adb shell setprop log.tag.MyApplicationTag VERBOSE
```

Unfortunately, starting with Android 4.0, an application can only read its own logs. It was useful for debugging to be able to read logs from another application, but in some cases sensitive information was written in those logs, and malicious apps were created to retrieve them.
So if you need to have log files sent to you for debugging, you will need to create your own log class using the methods from the android.util.Log class. Remember, only Info, Warning and Error logs should be shown when the application is not run in debug mode. Here is an example of a simple logger wrapping the call to isLoggable and storing the log messages on the primary storage of the device (requires the permission WRITE_EXTERNAL_STORAGE) and to the standard buffer:

```java
/**
 * A logger that uses the standard Android Log class to log exceptions, and also logs them to a
 * file on the device. Requires permission WRITE_EXTERNAL_STORAGE in AndroidManifest.xml.
 * @author Cindy Potvin
 */
public class Logger {

    /**
     * Sends an error message to LogCat and to a log file.
     * @param context The context of the application.
     * @param logMessageTag A tag identifying a group of log messages. Should be a constant in the
     *                      class calling the logger.
     * @param logMessage The message to add to the log.
     */
    public static void e(Context context, String logMessageTag, String logMessage) {
        if (!Log.isLoggable(logMessageTag, Log.ERROR))
            return;

        int logResult = Log.e(logMessageTag, logMessage);
        if (logResult > 0)
            logToFile(context, logMessageTag, logMessage);
    }

    /**
     * Sends an error message and the exception to LogCat and to a log file.
     * @param context The context of the application.
     * @param logMessageTag A tag identifying a group of log messages. Should be a constant in the
     *                      class calling the logger.
     * @param logMessage The message to add to the log.
     * @param throwableException An exception to log.
     */
    public static void e(Context context, String logMessageTag, String logMessage,
            Throwable throwableException) {
        if (!Log.isLoggable(logMessageTag, Log.ERROR))
            return;

        int logResult = Log.e(logMessageTag, logMessage, throwableException);
        if (logResult > 0)
            logToFile(context, logMessageTag,
                logMessage + "\r\n" + Log.getStackTraceString(throwableException));
    }

    // The i and w methods for info and warning logs should be implemented in the same way as the
    // e method for error logs.

    /**
     * Sends a message to LogCat and to a log file.
     * @param context The context of the application.
     * @param logMessageTag A tag identifying a group of log messages. Should be a constant in the
     *                      class calling the logger.
     * @param logMessage The message to add to the log.
     */
    public static void v(Context context, String logMessageTag, String logMessage) {
        // If the build is not debug, do not try to log: verbose calls are
        // stripped at compilation in a release build.
        if (!BuildConfig.DEBUG || !Log.isLoggable(logMessageTag, Log.VERBOSE))
            return;

        int logResult = Log.v(logMessageTag, logMessage);
        if (logResult > 0)
            logToFile(context, logMessageTag, logMessage);
    }

    /**
     * Sends a message and the exception to LogCat and to a log file.
     * @param context The context of the application.
     * @param logMessageTag A tag identifying a group of log messages. Should be a constant in the
     *                      class calling the logger.
     * @param logMessage The message to add to the log.
     * @param throwableException An exception to log.
     */
    public static void v(Context context, String logMessageTag, String logMessage,
            Throwable throwableException) {
        // If the build is not debug, do not try to log: verbose calls are
        // stripped at compilation in a release build.
        if (!BuildConfig.DEBUG || !Log.isLoggable(logMessageTag, Log.VERBOSE))
            return;

        int logResult = Log.v(logMessageTag, logMessage, throwableException);
        if (logResult > 0)
            logToFile(context, logMessageTag,
                logMessage + "\r\n" + Log.getStackTraceString(throwableException));
    }

    // The d method for debug logs should be implemented in the same way as the v method for
    // verbose logs.

    /**
     * Gets a stamp containing the current date and time to write to the log.
     * @return The stamp for the current date and time.
     */
    private static String getDateTimeStamp() {
        Date dateNow = Calendar.getInstance().getTime();
        // A fixed locale, so all the log files have the same date and time format.
        return (DateFormat.getDateTimeInstance(DateFormat.SHORT, DateFormat.SHORT,
            Locale.CANADA_FRENCH).format(dateNow));
    }

    /**
     * Writes a message to the log file on the device.
     * @param logMessageTag A tag identifying a group of log messages.
     * @param logMessage The message to add to the log.
     */
    private static void logToFile(Context context, String logMessageTag, String logMessage) {
        try {
            // Gets the log file from the root of the primary storage. If it does
            // not exist, the file is created.
            File logFile = new File(Environment.getExternalStorageDirectory(),
                "TestApplicationLog.txt");
            if (!logFile.exists())
                logFile.createNewFile();
            // Write the message to the log with a timestamp.
            BufferedWriter writer = new BufferedWriter(new FileWriter(logFile, true));
            writer.write(String.format("%1s [%2s]:%3s\r\n",
                getDateTimeStamp(), logMessageTag, logMessage));
            writer.close();
            // Refresh the data so it can be seen when the device is plugged in a
            // computer. You may have to unplug and replug to see the latest changes.
            MediaScannerConnection.scanFile(context, new String[] { logFile.toString() },
                null, null);
        } catch (IOException e) {
            Log.e("com.cindypotvin.Logger", "Unable to log exception to file.");
        }
    }
}
```

If you release an application to the app store or to a client with this kind of logger, you should disable logging by default and add a switch in the preferences to enable logging on demand. If the logger is always enabled, your application will often write to the primary storage and to the logcat, which is an unnecessary overhead when everything works correctly. Also, the size of the log file should be limited in some way to avoid filling up the primary storage.

Reference: Creating logs in Android applications from our JCG partner Cindy Potvin at the Web, Mobile and Android Programming blog....
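On that last point, size-limiting can be as simple as rolling the file over to a single backup once it passes a threshold, which bounds total usage at roughly twice the threshold. A framework-neutral sketch in plain Java (not Android-specific; the names are mine):

```java
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.StandardOpenOption;

public class RotatingLogFile {
    private final File file;
    private final File backup;
    private final long maxBytes;

    public RotatingLogFile(File file, long maxBytes) {
        this.file = file;
        this.backup = new File(file.getPath() + ".1");
        this.maxBytes = maxBytes;
    }

    // Appends a line; once the file reaches maxBytes it is rolled to a single
    // backup, overwriting the previous backup.
    public synchronized void append(String line) throws IOException {
        if (file.exists() && file.length() >= maxBytes) {
            Files.deleteIfExists(backup.toPath());
            Files.move(file.toPath(), backup.toPath());
        }
        Files.write(file.toPath(), (line + "\r\n").getBytes(StandardCharsets.UTF_8),
            StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }
}
```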

RESTBucks Evolved

The book REST in Practice: Hypermedia and Systems Architecture uses an imaginary Starbucks-like company as its running example. I think this is a great example, since most people are familiar with the domain. The design is also simple enough to follow, yet complex enough to be interesting.

Problem Domain

RESTBucks is about ordering and paying for coffee (or tea) and food. Here is the state diagram for the client:

1. Create the order
2. Update the order
3. Cancel the order
4. Pay for the order
5. Wait for the order to be prepared
6. Take the order

Book Design

The hypermedia design in the book for the service is as follows:

1. The client POSTs an order to the well-known RESTBucks URI. This returns the order URI in the Location header.
2. The client then GETs the order URI.
3. The client POSTs an updated order to the order URI.
4. The client DELETEs the order URI.
5. The client PUTs a payment to the URI found by looking up a link with relation http://relations.restbucks.com/payment.
6. The client GETs the order URI until the state changes.
7. The client DELETEs the URI found by looking up a link with relation http://relations.restbucks.com/receipt.

The book uses the specialized media type application/vnd.restbucks.order+xml for all messages exchanged.

Design Problems

Here are some of the problems that I have with the above approach:

- I think the well-known URI for the service (what Mike Amundsen calls the billboard URI) should respond to a GET, so that clients can safely explore it. This adds an extra message, but it also makes it possible to expand the service with additional functionality. For instance, when menus are added in a later chapter of the book, a second well-known URI is introduced. With a proper home document-like resource in front of the order service, this could have been limited to a new link relation.
- I’d rather use PUT for updating an order, since that is an idempotent method.
The book states that the representation returned by GET contains links, and argues that this implies (1) that PUT messages should also contain those links and (2) that this would be strange, since those links are under the server’s control. I disagree with both statements. A server doesn’t necessarily have to make the formats for GET and PUT exactly the same. Even if it did, some parts, like the links, could be optional. Furthermore, there is no reason the server couldn’t simply accept and ignore the links.

- The DELETE is fine. An alternative is to use PUT with a status of canceled, since we already have a status property anyway. That opens up some more possibilities, like re-instating a canceled order, but it also introduces issues like garbage collection.
- I don’t think PUT is the correct method for the payment. Can the service really guarantee under all circumstances that my credit card won’t get charged twice if I repeat the payment? More importantly, this design assumes that payments are always for the whole order. That may seem logical at first, but once the book introduces vouchers that logic evaporates. If I have a voucher for a free coffee, then I still have to pay for anything to eat or for a second coffee. I’d create a collection of payments that the client should POST to. I’d also use the standard payment link relation defined in RFC 5988.
- Polling the order URI until the state changes is fine.
- The final step makes no sense to me: taking the order is not the same as deleting the receipt. I need the receipt when I’m on a business trip, so I can get reimbursed! I’d rather PUT a new order with status taken.

Service Evolution

Suppose you’ve implemented your own service based on the design in the book. Further suppose that after reading the above, you want to change your service. How can you do that without breaking any clients that may be out there? After all, isn’t that what proponents tout as one of the advantages of a RESTful approach? Well, yes and no.
The media type defined in the book is at level 3a, and so will allow you to change URIs. However, the use of HTTP methods is defined out-of-band and you can’t easily change that. Now imagine that the book would have used a full hypermedia type (level 3b) instead. In that case, the HTTP method used would be part of the message. The client would have to discover it, and the server could easily change it without fear of breaking clients. Of course, this comes at the cost of having to build more logic into the client. That’s why I think it’s a good idea to use an existing full hypermedia type like Mason, Siren, or UBER. Such generic media types are much more likely to come with libraries that will handle this sort of work for the client.Reference: RESTBucks Evolved from our JCG partner Remon Sinnema at the Secure Software Development blog....
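To make the 3a/3b distinction concrete: with a full hypermedia type, the HTTP method travels inside the message, so the client only hard-codes relation names. The sketch below is hypothetical and deliberately minimal; the Action shape is mine, not taken from Mason, Siren, or UBER.

```java
import java.util.List;
import java.util.Optional;

class HypermediaClient {
   // A much-simplified stand-in for what a full hypermedia type (level 3b)
   // conveys: the relation name, HTTP method and target URI are all data.
   record Action(String rel, String method, String href) {}

   // The client hard-codes only the relation name. The server can change
   // the method (e.g. PUT to POST) or the URI without breaking this lookup.
   static Optional<Action> findByRel(List<Action> actions, String rel) {
      return actions.stream()
            .filter(action -> action.rel().equals(rel))
            .findFirst();
   }
}
```

If the service later switches payments from a PUT on a single resource to a POST on a payments collection, only the message content changes; the lookup above keeps working.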

Making Unsafe safer

Overview

If you use Unsafe directly, you risk crashing the JVM. This happens when you access a page of memory which hasn’t been mapped; the result on Unix is a SIGSEGV (if you access page 0) or a SIGBUS (if you access another page which is not mapped).

Using MethodHandles

Wrapping each Unsafe method with a MethodHandle is a possible solution. You can add code to the Method Handles to check for a zero-page access, e.g. unsigned_ptr < 4096. The reason you should add this to the MethodHandle is that it makes it easier to optimise away the check. The downsides of this are:

- You have to use MethodHandles, which complicates the syntax and obscures what you are really doing.
- It doesn’t work if you don’t use the wrapped Method Handles.
- It doesn’t cover bus errors, nor could it, as the mapping for the whole application is complex and can change in any thread at any time.
- Optimising away the bounds check requires some work on the optimiser which is yet to be proven.

Using Signals

If only there was some way to do this in the hardware already, and there is. The CPU already checks whether a page you attempt to access is valid, and it throws an interrupt if the page is not in cache. This interrupt is turned into a signal if the OS cannot find/create a mapping for this cache miss. If only there was a signal handler in the JVM already, and there is: this is what produces the crash report. If only there was some way for an interrupt handler to trigger an error or exception back to the code which triggered it, like Thread.currentThread().stop(e); (you get the idea).

Advantages

- No additional work is required to perform the checking, as it is already done by the CPU.
- Minimal changes to the optimiser (if any).
- Potentially works for signals produced from a range of sources.
- Using signals is a mature, old-tech way of trapping runtime errors, one which pre-dates Java.

Disadvantages

- Signal processing is likely to be a stop-the-world operation (there is no way to benchmark this in Java currently).
- Even if it is not, it is likely to be much more expensive when an error is triggered.
- You would have to change the signal handler, which traditionally hasn’t been changed; i.e. there is much more experience of changing the optimiser.

Possible exceptions thrown

New exceptions could be thrown; however, I suggest re-using existing exceptions.

Access to page 0 – NullPointerException

Accesses to page 0 (not just access via a NULL pointer) trigger a SIGSEGV. NPE is named after the access of a NULL pointer in C, and it is perhaps more obvious to have an NPE for an access via a NULL pointer than via a reference; i.e. it could have been called NullReferenceException, since Java doesn’t have pointers.

Invalid access – IndexOutOfBoundsException

Other candidates include BufferUnderflowException (if you are a page short of a mapped region) and BufferOverflowException (if you are a page past the end of a mapped region). Something these all have in common is that they are RuntimeExceptions. If a custom, more descriptive exception is raised, making it a RuntimeException would be consistent with the throwables currently thrown.

Conclusion

A common trick to maximise performance is: don’t write in Java something your system is already doing for you. In Chronicle we use the OS to do the asynchronous persistence to disk, and it is more efficient and reliable than writing the same again in Java. Similarly, trapping and handling invalid memory accesses would be more efficient and more robust if the facilities provided by the CPU and OS were re-used. Generally speaking, you would only re-write OS features to support cross-platform compatibility when each OS does things differently, and then only the minimum required. This is why Java doesn’t have a thread scheduler and has relatively little control over how threads are run. Virtual memory handling is so old and standard that all the major platforms work basically the same way.

Reference: Making Unsafe safer from our JCG partner Peter Lawrey at the Vanilla Java blog....
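The zero-page check described above (unsigned_ptr < 4096) is easy to sketch in plain Java. The Memory interface here is a hypothetical stand-in for the raw accessor being wrapped; a real version would sit behind a MethodHandle around Unsafe, as the post describes.

```java
class PageZeroGuard {
   // Page 0 is never mapped; 4096 is the page size the post assumes.
   static final long PAGE_SIZE = 4096;

   // Hypothetical raw-memory accessor standing in for Unsafe.getLong(address).
   interface Memory {
      long getLong(long address);
   }

   // Turns a would-be SIGSEGV on page 0 into a Java exception. The unsigned
   // comparison also catches addresses with the top bit set.
   static long checkedGetLong(Memory memory, long address) {
      if (Long.compareUnsigned(address, PAGE_SIZE) < 0)
         throw new NullPointerException("access to page 0 at address " + address);
      return memory.getLong(address);
   }
}
```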

The Product Owner’s Guide to the Sprint Retrospective

Summary

The sprint retrospective is the key mechanism in Scrum to improve the way people work. Some product owners believe, though, that they should not attend the meeting, and if they do, then only as guests and not as active participants. But the retrospective does not only benefit the development team and the ScrumMaster; it is also an opportunity for the product owner to learn and improve, as this post explains.

The Retrospective in a Nutshell

The sprint retrospective is an opportunity to pause for a short while and reflect on what happened in the sprint. This allows the attendees to improve their collaboration and their work practices to get even better at creating a great product. The meeting takes place right at the end of the sprint, after the sprint review meeting. Its outcome should be actionable improvement measures. These can range from making a firm commitment to start and end future meetings on time to bigger process changes. The retrospective is not a finger-pointing exercise. As Mahatma Gandhi famously said: “Be the change you want to see in the world.”

Take Part

As the product owner, you are a member of the Scrum team, which also includes the development team and the ScrumMaster. While you are in charge of the product, you rely on the collaboration of the other Scrum team members to create a successful software product. If you don’t attend the retrospective as the product owner, you waste the opportunity to strengthen the relationship and to improve the collaboration with them. But there is more to it: taking part in the sprint retrospective allows you to understand why the team requires some time in the next sprint to carry out improvements, such as refactoring the build script or investigating a new test tool; and, maybe more importantly, it helps you improve your own work. Say that some of the user stories the team had worked on did not get finished in the sprint. At first sight this looks like the development team’s fault.
But analysing the issue may well reveal that the size of the stories and the quality of the acceptance criteria contributed to the problem. As you are responsible for ensuring that the product backlog is ready, this finding affects your work: it shows you that you have to decompose the user stories further, and it suggests that the development team’s involvement in getting the stories ready should be improved; otherwise you would have spotted the issue before the stories went into the sprint. If you had not been in the retrospective, would you then whole-heartedly support the resulting improvement measures and change the way you work?

Be an Active Participant

Don’t attend the retrospective as a guest who will speak when asked but otherwise remains silent. Be an active participant, use the sprint retrospective to get feedback on your work, and raise any issues that you want to see improved. Be constructive and collaborative, but don’t shy away from tough problems. Here are some questions that you may want to explore in the retrospective:

- Do you spend enough time with the development team? Are you available to answer questions or provide feedback quickly enough?
- Do you provide the right level of feedback and guidance in the right way?
- Is the communication between the team and you open, honest, and trustful?
- Does the team know how the users employ the product? Are the team members happy with their involvement in analysing user feedback and data, changing the product backlog, and getting stories ready for the sprint?
- Do you get enough support from the team to “groom” the backlog?
- Are the team members aware of the big picture – the overall vision, the product strategy, and the product roadmap?
- Do you get enough of the team members’ time to help you with product planning and product roadmapping?

Improve the Wider Collaboration

As important as it is, continuously improving your collaboration with the development team and the ScrumMaster is usually not enough.
You also need strong relationships with all the other people required to make the product a success. These include individuals from marketing, sales, customer service, finance, and legal, as the following picture shows. A great way to review and improve the collaboration with your partners from marketing, sales and so forth is to invite them to the retrospective on a regular basis. Depending on how closely you collaborate, this may range from once per month to once per major release. A joint retrospective will help you develop closer and more trustful relationships, help smooth the launch of new product versions, and improve selling and servicing the product. Here are some questions that you may want to explore in an extended retrospective:

- Are the partners from marketing, sales etc. involved enough in the product planning and roadmapping activities?
- Do they regularly participate in the sprint review meetings? Are the review meetings beneficial for them? Do they understand the project progress?
- Do they receive the information necessary to carry out their work in a timely manner, for instance, to prepare the marketing campaign and to compile the sales collateral?
- Do you get enough data and support from the partners, for instance, regular updates on the sales figures and the market feedback you require?

You can, of course, also discuss these questions one-on-one. But getting any issues on the table and discussing improvement opportunities creates a sense of we-are-all-in-this-together; it reaffirms the need for collaboration and teamwork to provide a great product; and it can break down departmental boundaries.

Learn More

You can learn more about the sprint retrospective meeting and the product owner by attending my Certified Scrum Product Owner training course.
Please contact me if you want me to teach the course onsite or if you would like me to run a product owner workshop at your office.

Reference: The Product Owner’s Guide to the Sprint Retrospective from our JCG partner Roman Pichler at Pichler’s blog....

Test Data Builders and Object Mother: another look

Constructing objects in tests is usually painstaking work, and it usually produces a lot of repetitive, hard-to-read code. There are two common solutions for working with complex test data: Object Mother and Test Data Builder. Both have advantages and disadvantages, but (smartly) combined they can bring new quality to your tests. Note: there are already many articles you can find about both Object Mother and Test Data Builder, so I will keep my description really concise.

Object Mother

Shortly, an Object Mother is a set of factory methods that allow us to create similar objects in tests:

// Object Mother
public class TestUsers {

  public static User aRegularUser() {
    return new User("John Smith", "jsmith", "42xcc", "ROLE_USER");
  }

  // other factory methods
}

// arrange
User user = TestUsers.aRegularUser();
User adminUser = TestUsers.anAdmin();

Each time a user with a slightly different variation of data is required, a new factory method is created, which means the Object Mother may grow over time. This is one of the disadvantages of Object Mother. This problem can be solved by introducing a Test Data Builder.

Test Data Builder

A Test Data Builder uses the Builder pattern to create objects in unit tests. A short reminder of a Builder:

The builder pattern is an object creation software design pattern.
[…] The intention of the builder pattern is to find a solution to the telescoping constructor anti-pattern.

Let’s look at an example of a Test Data Builder:

public class UserBuilder {

  public static final String DEFAULT_NAME = "John Smith";
  public static final String DEFAULT_ROLE = "ROLE_USER";
  public static final String DEFAULT_PASSWORD = "42";

  private String username;
  private String password = DEFAULT_PASSWORD;
  private String role = DEFAULT_ROLE;
  private String name = DEFAULT_NAME;

  private UserBuilder() {
  }

  public static UserBuilder aUser() {
    return new UserBuilder();
  }

  public UserBuilder withName(String name) {
    this.name = name;
    return this;
  }

  public UserBuilder withUsername(String username) {
    this.username = username;
    return this;
  }

  public UserBuilder withPassword(String password) {
    this.password = password;
    return this;
  }

  public UserBuilder withNoPassword() {
    this.password = null;
    return this;
  }

  public UserBuilder inUserRole() {
    this.role = "ROLE_USER";
    return this;
  }

  public UserBuilder inAdminRole() {
    this.role = "ROLE_ADMIN";
    return this;
  }

  public UserBuilder inRole(String role) {
    this.role = role;
    return this;
  }

  public UserBuilder but() {
    return UserBuilder
        .aUser()
        .inRole(role)
        .withName(name)
        .withPassword(password)
        .withUsername(username);
  }

  public User build() {
    return new User(name, username, password, role);
  }
}

In our test we can use the builder as follows:

UserBuilder userBuilder = UserBuilder.aUser()
    .withName("John Smith")
    .withUsername("jsmith");

User user = userBuilder.build();
User admin = userBuilder.but()
    .withNoPassword().inAdminRole().build();

The above code seems pretty nice. We have a fluent API that improves the readability of the test code, and it certainly eliminates the problem of having multiple factory methods for the object variations we need in tests while using Object Mother. Please note that I added some default property values that may not be relevant for most of the tests.
But since they are defined as public constants, they can be used in assertions if we want to. Note: the example used in this article is relatively simple. It is used to visualize the solution.

Object Mother and Test Data Builder combined

Neither solution is perfect. But what if we combine them? Imagine that the Object Mother returns a Test Data Builder. Having this, you can manipulate the builder state before calling a terminal operation. It is a kind of template. Look at the example below:

public final class TestUsers {

  public static UserBuilder aDefaultUser() {
    return UserBuilder.aUser()
        .inUserRole()
        .withName("John Smith")
        .withUsername("jsmith");
  }

  public static UserBuilder aUserWithNoPassword() {
    return UserBuilder.aUser()
        .inUserRole()
        .withName("John Smith")
        .withUsername("jsmith")
        .withNoPassword();
  }

  public static UserBuilder anAdmin() {
    return UserBuilder.aUser()
        .inAdminRole()
        .withName("Chris Choke")
        .withUsername("cchoke")
        .withPassword("66abc");
  }
}

Now TestUsers provides factory methods for creating similar test data in our tests. Each method returns a builder instance, so we are able to quickly and nicely modify the object in our test as we need:

UserBuilder user = TestUsers.aDefaultUser();
User admin = user.but().withNoPassword().build();

The benefits are great. We have a template for creating similar objects, and we have the power of a builder if we need to modify the state of the returned object before using it.

Enriching a Test Data Builder

While thinking about the above, I am not sure if keeping a separate Object Mother is really necessary.
We could easily move the methods from the Object Mother directly into the Test Data Builder:

public class UserBuilder {

  public static final String DEFAULT_NAME = "John Smith";
  public static final String DEFAULT_ROLE = "ROLE_USER";
  public static final String DEFAULT_PASSWORD = "42";

  // field declarations omitted for readability

  private UserBuilder() {}

  public static UserBuilder aUser() {
    return new UserBuilder();
  }

  public static UserBuilder aDefaultUser() {
    return UserBuilder.aUser()
        .withUsername("jsmith");
  }

  public static UserBuilder aUserWithNoPassword() {
    return UserBuilder.aDefaultUser()
        .withNoPassword();
  }

  public static UserBuilder anAdmin() {
    return UserBuilder.aUser()
        .inAdminRole();
  }

  // remaining methods omitted for readability
}

Thanks to that, we can maintain the creation of User test data inside a single class. Please note that in this case the Test Data Builder is test code. If we already have a builder in production code, creating an Object Mother that returns an instance of the builder sounds like a better solution.

What about mutable objects?

There are some possible drawbacks to the Test Data Builder approach when it comes to mutable objects. And in many applications I mostly deal with mutable objects (a.k.a. beans or an anemic data model), and probably many of you do as well. The Builder pattern is meant for creating immutable value objects – in theory. Typically, if we deal with mutable objects, a Test Data Builder may seem like duplication at first sight:

// Mutable class with setters and getters
class User {

  private String name;

  public String getName() { ... }
  public void setName(String name) { ... }

  // ...
}

public class UserBuilder {

  private User user = new User();

  public UserBuilder withName(String name) {
    user.setName(name);
    return this;
  }

  // other methods

  public User build() {
    return user;
  }
}

In a test we can then create a user like this:

User aUser = UserBuilder.aUser()
    .withName("John")
    .withPassword("42abc")
    .build();

Instead of:

User aUser = new User();
aUser.setName("John");
aUser.setPassword("42abc");

In such a case, creating a Test Data Builder is a trade-off. It requires writing more code that needs to be maintained. On the other hand, the readability is greatly improved.

Summary

Managing test data in unit tests is not an easy job. If you don’t find a good solution, you end up with plenty of boilerplate code that is hard to read, hard to understand, and hard to maintain. On the other hand, there is no silver-bullet solution to this problem. I have experimented with many approaches. Depending on the size of the problem I need to deal with, I select a different approach, sometimes combining multiple approaches in one project. How do you deal with constructing data in your tests?

Resources

- Petri Kainulainen: Writing Clean Tests – New Considered Harmful
- Growing Object-Oriented Software, Guided by Tests – Chapter 22: Constructing Complex Test Data

Reference: Test Data Builders and Object Mother: another look from our JCG partner Rafal Borowiec at the Codeleak.pl blog....
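One more wrinkle with the mutable variant shown above: because the builder mutates the single User it holds, calling build() twice hands out the same instance, so two "different" test users can accidentally share state. A hedged sketch of one fix, assembling a fresh object on every build(); the User bean below is a minimal stand-in for the article's class, and the copy logic is mine:

```java
// Minimal stand-in for the article's mutable User bean.
class User {
   private String name;
   private String password;

   public String getName() { return name; }
   public void setName(String name) { this.name = name; }
   public String getPassword() { return password; }
   public void setPassword(String password) { this.password = password; }
}

class MutableUserBuilder {
   private String name;
   private String password;

   MutableUserBuilder withName(String name) {
      this.name = name;
      return this;
   }

   MutableUserBuilder withPassword(String password) {
      this.password = password;
      return this;
   }

   // A fresh User per call: built instances never share state.
   User build() {
      User user = new User();
      user.setName(name);
      user.setPassword(password);
      return user;
   }
}
```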

Docker Containers With Gradle in 4 Steps

Do you need to create a Docker image from your Java web app? Are you using Gradle? If so, then you are only 4 steps away from Docker nirvana. For this example, I’m going to use a simple Spring Boot application. You can find all the source code in my GitHub repository dubbed galoshe. If you haven’t had a chance to see Spring Boot in action, then you’re in for a treat, especially if the words simple and Java web app in the same sentence make you flinch. That was certainly my long-standing reaction until I took a serious look at Boot. For instance, a quick and dirty “hello world” Boot app is essentially more imports and annotations than actual code. Check it out:

A simple Spring Boot application:

package com.github.aglover.galoshe;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Configuration
@EnableAutoConfiguration
@ComponentScan
public class Application {

  public static void main(String[] args) {
    ApplicationContext ctx = SpringApplication.run(Application.class, args);
  }

  @RequestMapping("/")
  public String index() {
    return "Hello to you, world";
  }
}

Running this application is as easy as typing:

$ java -jar build/libs/galoshe-0.1.0.jar

That command will fire up an embedded web container with the request path / mapped to return the simple String “Hello to you, world”. You can define what port this application will run on via an application.properties file like so:

server.port: 8080

Consequently, if I take my browser and point it to localhost:8080, I see the pedestrian but oh-so-gratifying-when-you-see-it salutation.
Now that you’ve been introduced to the application I’d like to distribute as a Docker container, let me show you how to do it in 4 easy steps. Keep in mind, however, that in order to use the gradle-docker plugin I use in this example, you’ll need to have Docker installed, as the plugin shells out to the docker command.

Step 1: Apply some plugins

First and foremost, to Docker-ize your application, you’ll need to use two Gradle plugins: docker and application. The gradle-docker plugin by Transmode is actually one of two available plugins for Docker-ing with Gradle. The other plugin, by Ben Muschko of Gradleware, is a bit more advanced with additional features; however, I find the Transmode plugin the easiest and quickest to get going. The application plugin is actually included automatically via the spring-boot plugin in my particular example; however, if you aren’t using Boot, then you’ll need to add the following two plugins to your build.gradle file:

apply plugin: 'application'
apply plugin: 'docker'

As the docker plugin is a 3rd-party plugin, you’ll need to tell Gradle how to find it via a dependencies clause:

buildscript {
  repositories {
    mavenCentral()
  }
  dependencies {
    classpath 'se.transmode.gradle:gradle-docker:1.1'
  }
}

Now your Gradle script is ready to start Docker-ing. Next up, you’ll need to provide some clues so the plugin can create a valid Dockerfile.

Step 2: Provide some properties

The gradle-docker plugin doesn’t directly create a Docker container – it merely creates a Dockerfile and then shells out to the docker command to build an image. Consequently, you need to specify a few properties in your build.gradle file so that the corresponding Dockerfile builds a valid container that automatically runs your application. You need to provide:

- The class to run, i.e. the class in your application that contains a main method
- The target JVM version (the default is Java 7)
- Optionally, a group id, which feeds into the corresponding Docker tag

Accordingly, my build.gradle defines all three properties like so:

group = 'aglover'
sourceCompatibility = 1.7
mainClassName = 'com.github.aglover.galoshe.Application'

A few notes about these properties. Firstly, Java 8 isn’t currently available for this plugin. If you don’t specify a sourceCompatibility, you’ll get Java 7. Next, the group property isn’t required; however, it helps with Docker tagging. For example, my project’s baseName is dubbed galoshe; consequently, when the plugin creates a Docker image, it’ll tag that image with the pattern group/name. So in my case, the corresponding image is tagged aglover/galoshe. Finally, the mainClassName shouldn’t be too surprising – it’s the hook into your application. In truth, the plugin will create a script that your resultant Docker image will invoke on startup. That script will essentially call the command:

java -classpath your_class_path your_main_class

At this point, you are almost done. Next up, you’ll need to specify any Dockerfile instructions.

Step 3: Specify any required Dockerfile instructions

Dockerfiles contain specialized instructions for the corresponding image they create. There are a few important ones; nevertheless, my Boot app only requires one: the exposed port, which is set via the exposePort method of the plugin. Consequently, to ensure my Docker container exposes port 8080, as defined in my application.properties file, I’ll add the following clause to my build.gradle file:

distDocker {
  exposePort 8080
}

A few other aspects you can muddle with via the plugin are addFile, which results in an ADD instruction; runCommand, which results in a RUN instruction; and finally setEnvironment, which creates an ENV instruction. Now you’re done with your Gradle build.
All that’s left to do is run your build and fire the image up!

Step 4: Build and run it

Provided you’ve configured the gradle-docker plugin properly, all that’s left to do is run your build. In this case, the command is simply distDocker:

$ ./gradlew distDocker

The first time you run this command it’ll take a bit, as various images will be downloaded. Subsequent runs will be lightning quick though. After your build completes, your image will be created with the tag I noted earlier. In my case, the tag is aglover/galoshe, which I can quickly see by running the images command:

$ docker images
REPOSITORY        TAG     IMAGE ID      CREATED       VIRTUAL SIZE
aglover/galoshe   latest  332e163221bc  20 hours ago  1.042 GB
dockerfile/java   latest  f9793c257930  3 weeks ago   1.026 GB

I can subsequently run my image like so:

docker run 332e163221bc

I can naturally go to my browser, hit localhost:8080, and find myself quite satisfied that my image runs a nifty greeting. Of course, I would need to publish this image for others to use it; nevertheless, as you can see, the gradle-docker plugin allows me to quickly create Docker containers for Java apps.

Reference: Docker Containers With Gradle in 4 Steps from our JCG partner Andrew Glover at The Disco Blog....
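For the curious, the Dockerfile that distDocker generates from the properties above would look roughly like the following. This is an illustration pieced together from details in the article (the base image visible in the docker images listing, the exposed port, and the application plugin's start script), not the plugin's verbatim output:

```dockerfile
# Roughly what the gradle-docker plugin assembles for this build (illustrative)
FROM dockerfile/java
ADD galoshe-0.1.0.tar /
EXPOSE 8080
ENTRYPOINT ["/galoshe-0.1.0/bin/galoshe"]
```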
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
