Chronicle and low latency in Java

Overview I was watching this excellent presentation by Roland Kuhn of Typesafe, Introducing Reactive Streams. At first glance it appears to have some goals similar to Chronicle's, but as you dig into the details it became clear to me that a few key assumptions were fundamentally different.

Key assumptions The key assumptions in the design of Chronicle are:

- Low latency is your problem, not throughput. Data comes in micro-bursts which you want to handle as quickly as possible, long before the next micro-burst of activity.
- You can't pause an exchange/producer if you are busy (or pausing the end user is not an option).
- Your information is high value; recording every event with detailed timing is valuable. Recording all your events is key to understanding micro-bursts. You want to be able to examine any event which occurred in the past.

Low latency is essential The key problem Chronicle is designed to help you solve is consistent low latency. It assumes that if your latency is low enough, you don't have a problem with throughput. Many web based systems are designed for throughput, and as long as the latency is not visible to end users, latency is not an issue. For soft real time systems, you need low latency most of the time and a modest worst case latency, much faster than a human can see.

You can't stop the world Another key assumption is that flow control is not an option. If you are running slow, you can't say to the exchange and all its users, "wait up a second while I catch up". This means the producer can never be slowed down by a consumer. Slowing the producer is effectively the same as adding latency, but this latency is easy to hide. If you wait until your producer timestamps an event, this can make your latencies look better. If you want a more realistic measure, you should use the timestamp the event should have been sent at by a producer which is not delayed.
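The difference between the two measurements can be shown in a few lines of code. This is a hedged sketch of a hypothetical benchmark (not Chronicle code, and all the names and numbers are illustrative): a producer that should emit one event per millisecond stalls, and we compare the latency measured from the producer's late timestamp against the latency measured from when the event should have been sent.

```java
// Sketch: measuring latency against the intended send time rather than the
// actual (possibly delayed) producer timestamp. All values are illustrative.
public class LatencyMeasure {
    public static void main(String[] args) {
        long intervalNanos = 1_000_000;     // one event every millisecond
        long startNanos = 0;                // simulated clock, in nanoseconds

        // Suppose the producer stalled and sent event #5 at t = 7 ms,
        // and the consumer finished processing it at t = 7.2 ms.
        int eventNumber = 5;
        long actualSendNanos = 7_000_000;
        long processedNanos = 7_200_000;

        // Optimistic measure: from the (late) producer timestamp.
        long fromActualSend = processedNanos - actualSendNanos;

        // Realistic measure: from when the event should have been sent.
        long intendedSendNanos = startNanos + eventNumber * intervalNanos;
        long fromIntendedSend = processedNanos - intendedSendNanos;

        System.out.println("Latency from actual send:   " + fromActualSend + " ns");
        System.out.println("Latency from intended send: " + fromIntendedSend + " ns");
    }
}
```

The first measure reports 200 microseconds; the second reports 2.2 milliseconds, because it includes the time the producer spent stalled.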
You need to record everything for replay Replaying can be useful for testing your application under a range of conditions, e.g. you can change your application and see not only that it behaves correctly, but that it behaves in a timely manner. For quantitative analysts, a set of data is needed to tune their strategies. Replay an old event in real time: instead of taking a copy of an event you might want to refer to later, you can remember its index and replay that event later on demand. This saves memory in the heap, and just-in-case copies of data. Micro-bursts are critical to understanding your system The performance of some systems is characterised in terms of transactions per day. This implies that if no transactions were completed for the first 23 hours and all of them completed in the last hour, you would still perform this many transactions per day. Often transactions per day is quoted because it's a higher number, but in my case having all day to smooth out the work load sounds like a luxury. Some systems might be characterised in terms of the number of transactions per second. This may imply that those transactions can start and complete in one second, but not always. If you have 1000 transactions and one comes in every milli-second, you get an even response time. What I find more interesting is the number of transactions in the busiest second of the day. This gives you an indication of the flow rate your system should be able to handle. Examining micro-bursts Consider a system which gets 30 events all in the same 100 micro-seconds and these bursts are 100 milli-seconds apart.
This could appear as (30 / 0.1 =) 300 transactions per second, which sounds relatively easy if all we need to do is keep up, but if we want to respond as quickly as possible, the short term/burst throughput is 30 in 100 micro-seconds, or 300,000 events per second. In other words, to handle micro-bursts as fast as possible, you will need a system which can handle throughputs many orders of magnitude higher than you would expect over seconds or minutes or a day. Ideally, the throughput of your system will be 100x the busiest second of the day. This is required to handle the busiest 10 milli-seconds in any second without slowing the handling of these bursts of data. How does Chronicle improve handling of micro-bursts? Low garbage rate Minimising garbage is key to avoiding GC pauses. To use your L1 and L2 caches efficiently, you need to keep your garbage rates very low. If you are not using these caches efficiently, your application can be 2-5x slower. The garbage from Chronicle is low enough that you can process one million events without jstat detecting that you have created any garbage. jstat only displays multiples of 4 KB, and only when a new TLAB is allocated. Chronicle does create garbage, but it is extremely low, i.e. a few objects per million events processed. Once you make the GC pauses manageable, or non-existent, you start to see other sources of delay in your system. Take away the boulders and you start to see the rocks. Take away the rocks and you start to see the pebbles. Supports a write everything model It is common knowledge that if you leave DEBUG level logging on, it can slow down your application dramatically. There is a tension between recording everything you might want to know later, and the impact on your application. Chronicle is designed to be fast enough that you can record everything.
If you replace the queues and IPC connections in your system, it can improve the performance and you get "record everything" for free, or even better. Being able to record everything means you can add trace timings through every stage of your system and then monitor your system, but also drill into the worst 1% of delays in your system. This is not something you can do with a profiler, which gives you averages. With Chronicle you can answer questions such as: which parts of the system were responsible for the slowest 20 events of the day? Chronicle has minimal interaction with the Operating System System calls are slow, and if you can avoid calling the OS, you can save significant amounts of latency. For example, if you send a message over TCP on loopback, this can add 10 micro-seconds of latency between writing and reading the data. You can write to a chronicle, which is a plain write to memory, and read from a chronicle, which is also a read from memory, with a latency of 0.2 micro-seconds. (And as I mentioned before, you get persistence as well.) No need to worry about running out of heap A common problem with unbounded queues is that they use an open-ended amount of heap. Chronicle solves this by not using the heap to store data, but instead using memory mapped files. This improves memory utilisation by making the data more compact, but also means a 1 GB JVM can stream 1 TB of data over a day without worrying about the heap or how much main memory you have. In this case, an unbounded queue becomes easier to manage. Conclusion By being built on different assumptions, Chronicle solves problems other solutions avoid, such as the need for flow control or consuming messages (deleting processed messages). Chronicle is designed to use your hardware more efficiently, so you don't need a cloud of say 30 servers to handle around one million events per second (as a number of cloud based solutions claim); you can do this event rate with a developer laptop.
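The memory-mapped-file mechanism that Chronicle builds on can be sketched with nothing but the JDK. This is plain java.nio, not the Chronicle API: a "write" to the mapped region is just a memory store into the OS page cache, a reader mapping the same region sees the data without a system call, and the OS persists the file asynchronously.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedWrite {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("journal", ".dat");
        try (FileChannel channel = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map 4 KB of the file into memory; writes go straight to the page cache.
            MappedByteBuffer writer = channel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);

            byte[] event = "event-1".getBytes(StandardCharsets.UTF_8);
            writer.put(event);                 // "writing" is just a memory store

            // A second mapping of the same region sees the data with no system call
            // on the read path; the OS flushes the file to disk in the background.
            MappedByteBuffer reader = channel.map(FileChannel.MapMode.READ_ONLY, 0, 4096);
            byte[] readBack = new byte[event.length];
            reader.get(readBack);
            System.out.println(new String(readBack, StandardCharsets.UTF_8));
        }
    }
}
```

Chronicle adds the wire format, indexing and rolling of files on top of this mechanism; the sketch only shows why the read/write path itself avoids OS calls.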
Reference: Chronicle and low latency in Java from our JCG partner Peter Lawrey at the Vanilla Java blog....

Micro Services the easy way with Fabric8

Micro Services have received a lot of discussion of late. While it's easy to argue the exact meaning of the term, it's hard to deny there's a clear movement in the Java ecosystem towards micro services: using smaller, lighter weight and isolated micro service processes instead of putting all your code into monolithic application servers, with approaches like DropWizard, Spring Boot and Vert.x. There are lots of benefits to the micro services approach, particularly considering DevOps and the cloud. Fabric8 is a poly app server On the fabric8 project we're very application server agnostic; there are lots of pros and cons to using different application servers. The ideal choice often comes down to your requirements and your team's knowledge and history. Folks tend to prefer to stick with the application servers they know and love (or that their operations folks are happy managing) rather than switching, as all application servers require time to learn. OSGi is very flexible, modular and standards based, but the modularity has a cost in terms of learning and development time. (OSGi is kinda marmite technology; you tend to either love it or hate it ;). Servlet engines like Tomcat are really simple, but with very limited modularity. Then for those that love JEE there's WildFly and TomEE. In the fabric8 project we initially started supporting OSGi for managing things like JBoss Fuse, Apache Karaf and Apache ServiceMix. If you use version 1.0.x of fabric8 (which is included in JBoss Fuse 6.1) then OSGi is the only application server model supported. However, in 1.1.x we're working hard on supporting Apache Tomcat, Apache TomEE and WildFly as first class citizens in fabric8, so you can pick whichever application server model you and your team prefer, including using a mixture of container types for different services.
Fabric8 1.1.0.Beta5 now supports Java Containers I'm personally really excited about the new Java Container capability, available in version 1.1.0.Beta5 or later of fabric8, which lets you easily provision and manage java based Micro Services. A Java Container is an alternative to using an application server; it's literally just the java process with a classpath and main class you specify. So there's no mandated application server or libraries. With pretty much all application servers you're gonna hit class loader issues at some point; with the Java Container in fabric8 it's a simple flat class loader that you fully control. Simples! If things work in maven and your tests, they work in the Java Container (*), since it's the same classpath – a flat list of jars. * (provided you don't include duplicate classes in different jars, where the order of the jars in the classpath can cause issues, but that's easy to check for in your build). The easiest, simplest thing that could possibly work as an application developer is just using a simple flat class loader, i.e. using the java process on the command line like this:

java -cp "lib/*" $MAINCLASS

This then means that each micro service is a separate isolated container (operating system process) with its own class path, so it's easy to monitor and perform incremental upgrades of dependencies without affecting other containers; this makes version upgrades a breeze. However, the problem with micro services is managing the deployment of all these java processes: starting them, stopping them, managing them, having nice tooling to view what's happening and performing rolling upgrades of changes. That's where fabric8 comes in to help! The easiest way to see is via a demo… Demo Here's a screencast I just recorded to show how easy it is to work with any Java project which has a maven build and a Java main function (a static main(String[] args) function) to bootstrap the Java code.
I use an off the shelf Apache Camel and Spring example, but really any Java project with an executable jar or main class would do. For more background on how to use this and how it all works, check the documentation on the Java Container and Micro Services in Fabric8. Reference: Micro Services the easy way with Fabric8 from our JCG partner James Strachan at the James Strachan's Blog blog....

Spring Java Configuration: Session timeout

We live in a nice time, when you can develop a Spring application using java based configuration. No redundant XML code any more, just pure java code. In this article I want to discuss a popular topic: session management in Spring applications. To be more precise, I'm going to talk about the session timeout in the java configuration style. In one of my previous blog posts I've already described how to manage the lifetime of a session. But that solution implies usage of the web.xml file, which is not required for java based configs, because its role is played by a class which extends the AbstractAnnotationConfigDispatcherServletInitializer class. Frequently it looks something like this:

import javax.servlet.Filter;

import org.springframework.web.filter.HiddenHttpMethodFilter;
import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;

public class Initializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        return null;
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        return new Class<?>[] { WebAppConfig.class };
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }

    @Override
    protected Filter[] getServletFilters() {
        return new Filter[] { new HiddenHttpMethodFilter() };
    }
}

I've written a lot about the usage of such configurations, but here we should pay extra attention to the class which AbstractAnnotationConfigDispatcherServletInitializer extends: the AbstractDispatcherServletInitializer class. In turn, it has an onStartup(ServletContext servletContext) method. Its purpose is to configure a ServletContext with any servlets, filters, listeners, context-params and attributes necessary for initializing the web application. This is a good place to recall the HttpSessionListener interface.
As you have probably guessed, in an implementation of this interface we are able to manage each newly created session in an application. For example, we can set a maximum inactive interval equal to 5 minutes:

import javax.servlet.http.HttpSessionEvent;
import javax.servlet.http.HttpSessionListener;

public class SessionListener implements HttpSessionListener {

    @Override
    public void sessionCreated(HttpSessionEvent event) {
        System.out.println("==== Session is created ====");
        event.getSession().setMaxInactiveInterval(5 * 60);
    }

    @Override
    public void sessionDestroyed(HttpSessionEvent event) {
        System.out.println("==== Session is destroyed ====");
    }
}

In order to apply these session management changes to our java based configuration, we have to add the following code snippet to the Initializer class:

...
@Override
public void onStartup(ServletContext servletContext) throws ServletException {
    super.onStartup(servletContext);
    servletContext.addListener(new SessionListener());
}
...

That's all, java geeks, enjoy coding. Reference: Spring Java Configuration: Session timeout from our JCG partner Alexey Zvolinskiy at the Fruzenshtein's notes blog....

Tracking Exceptions – Part 6 – Building an Executable Jar

If you've read the previous five blogs in this series, you'll know that I've been building a Spring application that runs periodically to check a whole bunch of error logs for exceptions and then emails you the results. Having written the code and the tests, and being fairly certain it'll work, the next and final step is to package the whole thing up and deploy it to a production machine. The actual deployment and packaging methods will depend upon your own organisation's processes and procedures. In this example, however, I'm going to choose the simplest way possible to create and deploy an executable JAR file. The first step was completed several weeks ago, and that's defining our output as a JAR file in the Maven POM file, which, as you'll probably already know, is done using the packaging element:

<packaging>jar</packaging>

It's okay having a JAR file, but in this case there's a further step involved: making it executable. To make a JAR file executable you need to add a MANIFEST.MF file and place it in a directory called META-INF. The manifest file is a file that describes the JAR file to both the JVM and human readers. As usual, there are a couple of ways of doing this. For example, if you wanted to make life difficult for yourself, you could hand-craft your own file and place it in the META-INF directory inside the project's src/main/resources directory. On the other hand, you could use the maven-jar plug-in and do it automatically. To do that, you need to add the following to your POM file:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <version>2.4</version>
  <configuration>
    <archive>
      <manifest>
        <addClasspath>true</addClasspath>
        <mainClass>com.captaindebug.errortrack.Main</mainClass>
        <classpathPrefix>lib/</classpathPrefix>
      </manifest>
    </archive>
  </configuration>
</plugin>

The interesting point here is the <archive><manifest> configuration element.
It contains three sub-elements:

- addClasspath: this means that the plug-in will add the classpath to the MANIFEST.MF file so that the JVM can find all the support jars when running the app.
- mainClass: this tells the plug-in to add a Main-Class attribute to the MANIFEST.MF file, so that the JVM knows where to find the entry point to the application. In this case it's com.captaindebug.errortrack.Main.
- classpathPrefix: this is really useful. It allows you to locate all the support jars in a different directory to the main part of the application. In this case I've chosen the very simple and short name of lib.

If you run the build and then open up the resulting JAR file and extract and examine the /META-INF/MANIFEST.MF file, you'll find something rather like this:

Manifest-Version: 1.0
Built-By: Roger
Build-Jdk: 1.7.0_09
Class-Path: lib/spring-context-3.2.7.RELEASE.jar lib/spring-aop-3.2.7.RELEASE.jar lib/aopalliance-1.0.jar lib/spring-beans-3.2.7.RELEASE.jar lib/spring-core-3.2.7.RELEASE.jar lib/spring-expression-3.2.7.RELEASE.jar lib/slf4j-api-1.6.6.jar lib/slf4j-log4j12-1.6.6.jar lib/log4j-1.2.16.jar lib/guava-13.0.1.jar lib/commons-lang3-3.1.jar lib/commons-logging-1.1.3.jar lib/spring-context-support-3.2.7.RELEASE.jar lib/spring-tx-3.2.7.RELEASE.jar lib/quartz-1.8.6.jar lib/mail-1.4.jar lib/activation-1.1.jar
Created-By: Apache Maven 3.0.4
Main-Class: com.captaindebug.errortrack.Main
Archiver-Version: Plexus Archiver

The last step is to marshall all the support jars into one directory, in this case the lib directory, so that the JVM can find them when you run the application. Again, there are two ways of approaching this: the easy way and the hard way. The hard way involves manually collecting together all the JAR files as defined by the POM (both direct and transient dependencies) and copying them to an output directory. The easy way involves getting the maven-dependency-plugin to do it for you.
This involves adding the following to your POM file:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <version>2.5.1</version>
  <executions>
    <execution>
      <id>copy-dependencies</id>
      <phase>package</phase>
      <goals>
        <goal>copy-dependencies</goal>
      </goals>
      <configuration>
        <outputDirectory>${project.build.directory}/lib/</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>

In this case you're using the copy-dependencies goal, executed in the package phase, to copy all the project dependencies to the ${project.build.directory}/lib/ directory – note that the final part of the directory path, lib, matches the classpathPrefix setting from the previous step. In order to make life easier, I've also created a small run script, runme.sh:

#!/bin/bash
echo Running Error Tracking...
java -jar error-track-1.0-SNAPSHOT.jar

Note that with -jar there's no need to name the main class on the command line; the JVM reads it from the manifest's Main-Class attribute, and anything after the JAR name would simply be passed to the application as an argument. And that's about it. The application is just about complete. I've copied it to my build machine where it now monitors the Captain Debug Github sample apps and build. I could, and indeed may, add a few more features to the app. There are a few rough edges that need knocking off the code: for example, is it best to run it as a separate app, or would it be a better idea to turn it into a web app? Furthermore, wouldn't it be a good idea to ensure that the same errors aren't reported twice? I may get around to that soon… or maybe I'll talk about something else; so much to blog about, so little time… The code for this blog is available on Github at: https://github.com/roghughe/captaindebug/tree/master/error-track.
If you want to look at other blogs in this series, take a look here:

- Tracking Application Exceptions With Spring
- Tracking Exceptions With Spring – Part 2 – Delegate Pattern
- Error Tracking Reports – Part 3 – Strategy and Package Private
- Tracking Exceptions – Part 4 – Spring's Mail Sender
- Tracking Exceptions – Part 5 – Scheduling With Spring

Reference: Tracking Exceptions – Part 6 – Building an Executable Jar from our JCG partner Roger Hughes at the Captain Debug's Blog blog....

See, always told you: testing is just a waste of time

Automated testing has become something people don't speak about any more. It has matured into a standard in software development that everyone is, and should be, practicing. No more talks at conferences, only a few blog posts and online articles every now and then. That changed radically recently, with testing again becoming a hotly debated topic, at least after some of @dhh's quite provocative posts and his RailsConf keynote. (Image source: http://www.industriallogic.com/blog/tdd-dead-sale/) I learned about automated testing, particularly unit testing, during my Bachelor studies at university. It was kind of awkward, initially, and we definitely wrote tests to fulfill the submission criteria of our lab projects. After years of practicing, more experience, a dozen read articles, books and blog posts, I really started to love it. Especially after I felt the pain of having to refactor a codebase without any kind of regression tests in place: you cannot move; you want to, but you cannot! A total nightmare, just blindly changing the code, hoping for the best. Stop doing this. Today I almost always write automated tests because:

- I got used to it
- it makes me develop faster
- it gives me the freedom to improve later, making it possible to quickly code even a sub-optimal implementation now, because I know I'll be able to change, refactor and optimize it later more easily
- …

Even when coding some quick experiments, I create some automated tests that prove my assumptions. Overkill? I don't think so. There are so many nice tools in place which execute your tests right after you save/compile your code. Would I be faster by writing and executing console.log statements after each change? Hardly… Whenever you are tempted to type something into a print statement or a debugger expression, write it as a test instead.
Martin Fowler I simply love the fast feedback I get by looking at the test run indicator while coding, without having to move my fingers from the keyboard to click through some UI to verify that my code (still) works. Moreover, I build a regression suite: each additional test I add gets executed after each modification to my code. I'm basically testing all of the scenarios (mine and those potentially implemented by my colleagues) from the beginning of my coding up to where I currently am, in milliseconds. Can you do the same with the debugger or console.log?? ;) When I code JavaScript, I write automated tests! Of course: I don't even have a compiler there, everything happens/breaks at runtime. It's even more important than in statically typed, compiled languages. So, hell yes, I do create automated tests that are executed by Jenkins when I commit my feature, to make sure I didn't break anything else. Do I do TDD? Hmm… I follow a test-first approach, I'd say, and I try to have a cycle like:

- Implement the test
- See it fail
- Write code
- See the test pass
- (…)

Admittedly it often slightly distorts into an "implement production code, gosh… this is going to be complicated… need a test, then comment the production code out, write the test, see it fail, then uncomment the code, see it still fail (?!), write some more production code, see it succeed" kind of workflow. But that's normal, I guess. What about Test Driven Design? You mean to let my architecture evolve blindly by the magic of TDD? I didn't succeed at that (I'd require a mentor here in case someone is interested ;)). I usually think about the architecture or a possible implementation of a feature before I write the first line of code. This happens automatically. Do the tests influence/change/adapt that initial design during the implementation? Most often, yes. Furthermore, based on the experience gained in writing lots of tests, my architecture has evolved over the years to facilitate testing out of the box.
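Fowler's "write it as a test instead" advice can be shown with a tiny, framework-free sketch (in a real project you would use JUnit; the parsePrice function and its expected values are made up for illustration):

```java
// Instead of sprinkling System.out.println(parsePrice("1,299.50")) through the
// code and eyeballing the output, capture the expectation as an assertion that
// runs on every build.
public class PriceParserTest {
    // The (hypothetical) production code under test.
    static double parsePrice(String raw) {
        return Double.parseDouble(raw.replace(",", ""));
    }

    public static void main(String[] args) {
        check(parsePrice("1,299.50") == 1299.50, "grouped thousands");
        check(parsePrice("42") == 42.0, "plain integer");
        System.out.println("All tests passed");
    }

    static void check(boolean condition, String name) {
        if (!condition) throw new AssertionError("Failed: " + name);
    }
}
```

The print statement answers the question once; the assertion keeps answering it after every future change, which is what turns ad-hoc checks into a regression suite.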
For me, the important thing to have is automation and fast feedback (which implies having automation in place). Then, I write all kinds of tests, depending on my needs: automated JMeter tests that call my REST API after each deploy; integration tests starting from my frontend controller and going down through the dependency injection framework and the business layer to the data access and the DB. Unit tests, on the other side, for reusable components, for critical code, and for situations where I wouldn't want to set up data in the DB to test a particular use case and where it's much easier and faster to simply provide some stubs. The debate: most unit testing is a waste The testing debate started with a quite provocative article entitled "Why Most Unit Testing is Waste". In a 19-page article, James O. Coplien outlines some issues he encountered while writing unit tests. If you have the time, absolutely read it, but read the entire article and don't just draw (wrong) conclusions from the headline! Alternatively, Rodolfo Grave created a nice summary on his blog. Coplien's résumé:

- Keep regression tests around for up to a year — but most of those will be system-level tests rather than unit tests.
- Keep unit tests that test key algorithms for which there is a broad, formal, independent oracle of correctness, and for which there is ascribable business value.
- Except for the preceding case, if X has business value and you can test X with either a system test or a unit test, use a system test — context is everything.
- Design a test with more care than you design the code.
- Turn most unit tests into assertions.
- Throw away tests that haven't failed in a year.
- Testing can't replace good development: a high test failure rate suggests you should shorten development intervals, perhaps radically, and make sure your architecture and design regimens have teeth.
- If you find that individual functions being tested are trivial, double-check the way you incentivize developers' performance.
Rewarding coverage or other meaningless metrics can lead to rapid architecture decay. Be humble about what tests can achieve. Tests don't improve quality: developers do.

As Rodolfo already comments, other than the "throw tests away" advice, I fully agree. IMHO, this article is written by a person who has mastered writing automated tests. A bit later, David Heinemeier Hansson (@dhh), creator of Ruby on Rails and founder & CTO of Basecamp, entered the debate with his "TDD is dead" articles:

- TDD is dead. Long live testing.
- Test-induced design damage

…followed by lots of discussions on HackerNews here and here and various ones on Twitter between Martin Fowler, Heinemeier Hansson, Uncle Bob Martin and Kent Beck. tdd is not "alive" or "dead". it is subject to tradeoffs, including risk of api changes, skill of practitioner, and existing design. — Kent Beck (@KentBeck) April 28, 2014 Numerous blog posts and articles emerged from these discussions. Here are some I was able to capture:

- Uncle Bob: Professionalism and TDD (Reprise)
- Uncle Bob: Test Induced Design Damage?
- TDD, Straw Men, and Rhetoric
- Martin Fowler, attempting to better define "Unit test"

Someone even started a #WhyITDD hashtag on Twitter. To conclude: It seems to me that using good design principles that make your tests run faster is a noble goal. It also seems to me that decoupling from frameworks such as Rails, as your applications grow, is a wise action. I believe these things to be evidence that professionals, like Jim Weirich, are at work. Uncle Bob See, always told you… If you have ever tried to convince/coach people in writing automated tests, then you know how damn hard it is. Personally I think it's nearly impossible. You can only give an initial hint on some techniques, and then each dev needs to practice and experience it by himself. It's something that has quite a steep learning curve.
The main problem (which I'm quite sick of hearing about…) is that devs take the articles mentioned previously as proof that they were correct in not writing any tests, in the past and in the future. This is total nonsense! If you read beyond the headline, none of them questions the creation of automated tests, but rather:

- the TDD approach
- unit tests vs. integration tests vs. acceptance tests
- etc…

Make sure you understand what automated testing is about, what unit tests are, what TDD is all about, etc. Fowler's collection of articles might be a good starting point. Thoughtworks event: "Is TDD dead?" Martin Fowler announced yesterday that ThoughtWorks will be hosting a debate between himself, Kent Beck and David Heinemeier Hansson about whether TDD is dead. You should absolutely participate in this hangout or watch the recorded session afterwards. Watch myself, @KentBeck and @dhh discuss TDD in a hangout on Friday at 11am EST [corrected time] https://t.co/mV4gVYAh6D … — Martin Fowler (@martinfowler) May 6, 2014 Relevant links All relevant links together (will be updated if new ones emerge):

- "Why Most Unit Testing is Waste" (Summary)
- TDD is dead. Long live testing.
- David Heinemeier Hansson's RailsConf keynote
- Test-induced design damage
- Uncle Bob: Professionalism and TDD (Reprise)
- Uncle Bob: Test Induced Design Damage?
- TDD, Straw Men, and Rhetoric
- Martin Fowler, attempting to better define "Unit test"
- ThoughtWorks Event: Is TDD dead?

Reference: See, always told you: testing is just a waste of time from our JCG partner Juri Strumpflohner at the Juri Strumpflohner's TechBlog blog....

Java 8 Features – The ULTIMATE Guide

EDITORIAL NOTE: It's been a while since Java 8 came out in public, and everything points to the fact that this is a really major release. We have provided an abundance of tutorials here at Java Code Geeks, like Playing with Java 8 – Lambdas and Concurrency, Java 8 Date Time API Tutorial: LocalDateTime and Abstract Class Versus Interface in the JDK 8 Era. We also referenced 15 Must Read Java 8 Tutorials from other sources. Of course, we examined some of the shortfalls as well, like The Dark Side of Java 8. Now, it is time to gather all the major Java 8 features under one reference post for your reading pleasure. Enjoy!

Table Of Contents

1. Introduction
2. New Features in the Java language
   2.1. Lambdas and Functional Interfaces
   2.2. Interface Default and Static Methods
   2.3. Method References
   2.4. Repeating annotations
   2.5. Better Type Inference
   2.6. Extended Annotations Support
3. New Features in the Java compiler
   3.1. Parameter names
4. New Features in the Java libraries
   4.1. Optional
   4.2. Streams
   4.3. Date/Time API (JSR 310)
   4.4. Nashorn JavaScript engine
   4.5. Base64
   4.6. Parallel Arrays
   4.7. Concurrency
5. New Java tools
   5.1. Nashorn engine: jjs
   5.2. Class dependency analyzer: jdeps
6. New Features in the Java runtime (JVM)
7. Conclusions
8. Resources

1. Introduction Without a doubt, the Java 8 release is the greatest thing in the Java world since Java 5 (released quite a while ago, back in 2004). It brings tons of new features to Java as a language, its compiler, libraries, tools and the JVM (Java Virtual Machine) itself. In this tutorial we are going to take a look at all these changes and demonstrate the different usage scenarios with real examples. The tutorial consists of several parts, where each one touches a specific side of the platform:

- language
- compiler
- libraries
- tools
- runtime (JVM)

2. New Features in the Java language Java 8 is by any means a major release. One might say it took so long to finalize in order to implement the features every Java developer was looking for.
In this section we are going to cover most of them. 2.1. Lambdas and Functional Interfaces Lambdas (also known as closures) are the biggest and most awaited language change in the whole Java 8 release. They allow us to treat functionality as a method argument (passing functions around), or to treat code as data: concepts every functional developer is very familiar with. Many languages on the JVM platform (Groovy, Scala, …) have had lambdas since day one, but Java developers had no choice but to emulate lambdas with boilerplate anonymous classes. Lambda design discussions took a lot of time and community effort. But finally, the trade-offs were settled, leading to new concise and compact language constructs. In its simplest form, a lambda could be represented as a comma-separated list of parameters, the -> symbol and the body. For example: Arrays.asList( "a", "b", "d" ).forEach( e -> System.out.println( e ) ); Please notice that the type of the argument e is inferred by the compiler. Alternatively, you may explicitly provide the type of the parameter, wrapping the definition in parentheses. For example: Arrays.asList( "a", "b", "d" ).forEach( ( String e ) -> System.out.println( e ) ); In case the lambda's body is more complex, it may be wrapped in curly braces, just like a usual method body in Java. For example: Arrays.asList( "a", "b", "d" ).forEach( e -> { System.out.print( e ); System.out.print( e ); } );Lambdas may reference class members and local variables (implicitly making them effectively final whether they are declared as such or not). For example, these two snippets are equivalent: String separator = ","; Arrays.asList( "a", "b", "d" ).forEach( ( String e ) -> System.out.print( e + separator ) );And: final String separator = ","; Arrays.asList( "a", "b", "d" ).forEach( ( String e ) -> System.out.print( e + separator ) );Lambdas may return a value. The type of the return value will be inferred by the compiler. 
The return statement is not required if the lambda body is just a one-liner. The two code snippets below are equivalent: Arrays.asList( "a", "b", "d" ).sort( ( e1, e2 ) -> e1.compareTo( e2 ) ); And: Arrays.asList( "a", "b", "d" ).sort( ( e1, e2 ) -> { int result = e1.compareTo( e2 ); return result; } );Language designers put a lot of thought into how to make already existing functionality lambda-friendly. As a result, the concept of functional interfaces emerged. A functional interface is an interface with just one abstract method. As such, it may be implicitly converted to a lambda expression. java.lang.Runnable and java.util.concurrent.Callable are two great examples of functional interfaces. In practice, functional interfaces are fragile: if someone adds just one more abstract method to the interface definition, it will not be functional anymore and the compilation process will fail. To overcome this fragility and explicitly declare the intent of an interface as being functional, Java 8 adds the special annotation @FunctionalInterface (the matching interfaces in the Java library have been annotated with @FunctionalInterface as well). Let us take a look at this simple functional interface definition: @FunctionalInterface public interface Functional { void method(); } One thing to keep in mind: default and static methods do not break the functional interface contract and may be declared: @FunctionalInterface public interface FunctionalDefaultMethods { void method(); default void defaultMethod() { } } Lambdas are the largest selling point of Java 8. They have all the potential to attract more and more developers to this great platform and to provide state-of-the-art support for functional programming concepts in pure Java. For more details please refer to the official documentation. 2.2. Interface Default and Static Methods Java 8 extends interface declarations with two new concepts: default and static methods. 
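Before moving on to default and static methods, the lambda and functional interface concepts above can be tied together in one small self-contained sketch. The StringTransformer interface and all names below are illustrative, not part of the JDK: it declares a custom functional interface and implements it with the three lambda forms shown earlier (inferred parameter type, explicit parameter type, and a block body with return).

```java
public class LambdaSketch {

    // A hypothetical functional interface (not part of the JDK):
    // exactly one abstract method, so any matching lambda converts to it.
    @FunctionalInterface
    interface StringTransformer {
        String transform(String input);
    }

    // Accepts behavior as an argument -- the "functionality as data" idea.
    static String apply(String value, StringTransformer transformer) {
        return transformer.transform(value);
    }

    public static void main(String[] args) {
        // The three lambda forms from the section above, side by side.
        StringTransformer inferred = s -> s.toUpperCase();      // parameter type inferred
        StringTransformer explicit = (String s) -> s.trim();    // parameter type explicit
        StringTransformer block = s -> {                        // block body needs return
            return new StringBuilder(s).reverse().toString();
        };

        System.out.println(apply("java", inferred));  // JAVA
        System.out.println(apply(" 8 ", explicit));   // 8
        System.out.println(apply("abc", block));      // cba
    }
}
```

Because StringTransformer has a single abstract method, the compiler converts each lambda into an instance of it, with no anonymous class boilerplate.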
Default methods make interfaces somewhat similar to traits, but they serve a slightly different goal. They allow adding new methods to existing interfaces without breaking binary compatibility with code written for older versions of those interfaces. The difference between default methods and abstract methods is that abstract methods are required to be implemented, while default methods are not. Instead, each default method carries a so-called default implementation, and all implementers will inherit it by default (with the possibility to override this default implementation if needed). Let us take a look at the example below. private interface Defaulable { // Interfaces now allow default methods, the implementer may or // may not implement (override) them. default String notRequired() { return "Default implementation"; } } private static class DefaultableImpl implements Defaulable { } private static class OverridableImpl implements Defaulable { @Override public String notRequired() { return "Overridden implementation"; } } The interface Defaulable declares a default method notRequired() using the keyword default as part of the method definition. One of the classes, DefaultableImpl, implements this interface, leaving the default method implementation as-is. The other one, OverridableImpl, overrides the default implementation and provides its own. Another interesting feature delivered by Java 8 is that interfaces can declare (and provide implementations of) static methods. Here is an example. private interface DefaulableFactory { // Interfaces now allow static methods static Defaulable create( Supplier< Defaulable > supplier ) { return supplier.get(); } } The small code snippet below glues together the default methods and static methods from the examples above. 
public static void main( String[] args ) { Defaulable defaulable = DefaulableFactory.create( DefaultableImpl::new ); System.out.println( defaulable.notRequired() ); defaulable = DefaulableFactory.create( OverridableImpl::new ); System.out.println( defaulable.notRequired() ); } The console output of this program looks like this: Default implementation Overridden implementation The implementation of default methods on the JVM is very efficient, and method invocation is supported by byte code instructions. Default methods allowed existing Java interfaces to evolve without breaking the compilation process. Good examples are the plethora of methods added to the java.util.Collection interface: stream(), parallelStream(), forEach(), removeIf(), … Though powerful, default methods should be used with caution: before declaring a method as default it is better to think twice whether it is really needed, as it may cause ambiguity and compilation errors in complex hierarchies. For more details please refer to the official documentation. 2.3. Method References Method references provide useful syntax to refer directly to existing methods or constructors of Java classes or objects (instances). In conjunction with lambda expressions, method references make language constructs look compact and concise, leaving off boilerplate. Below, using the class Car as an example of different method definitions, let us distinguish the four supported types of method references. 
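Before looking at the Car class below, here is a brief sketch of the default-method ambiguity warned about above. The interfaces English and French are hypothetical: when a class inherits the same default method from two unrelated interfaces, the compiler rejects it unless the class overrides the method explicitly, and the Interface.super.method() syntax can delegate to a chosen parent.

```java
public class DiamondSketch {

    // Two hypothetical interfaces that both provide a default greet() method.
    interface English {
        default String greet() { return "Hello"; }
    }

    interface French {
        default String greet() { return "Bonjour"; }
    }

    // Without an explicit override, the compiler rejects this class with
    // an error like "inherits unrelated defaults for greet()".
    static class Bilingual implements English, French {
        @Override
        public String greet() {
            // Interface.super.method() delegates to a specific inherited default.
            return English.super.greet() + " / " + French.super.greet();
        }
    }

    public static void main(String[] args) {
        System.out.println(new Bilingual().greet()); // Hello / Bonjour
    }
}
```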
public static class Car { public static Car create( final Supplier< Car > supplier ) { return supplier.get(); } public static void collide( final Car car ) { System.out.println( "Collided " + car.toString() ); } public void follow( final Car another ) { System.out.println( "Following the " + another.toString() ); } public void repair() { System.out.println( "Repaired " + this.toString() ); } } The first type of method reference is a constructor reference, with the syntax Class::new or, alternatively for generics, Class< T >::new. Please notice that the constructor has no arguments. final Car car = Car.create( Car::new ); final List< Car > cars = Arrays.asList( car ); The second type is a reference to a static method, with the syntax Class::static_method. Please notice that the method accepts exactly one parameter of type Car. cars.forEach( Car::collide ); The third type is a reference to an instance method of an arbitrary object of a specific type, with the syntax Class::method. Please notice that no arguments are accepted by the method. cars.forEach( Car::repair ); And the last, fourth type is a reference to an instance method of a particular class instance, with the syntax instance::method. Please notice that the method accepts exactly one parameter of type Car. final Car police = Car.create( Car::new ); cars.forEach( police::follow ); Running all those examples as a Java program produces the following output on the console (the actual Car instances might be different): Collided com.javacodegeeks.java8.method.references.MethodReferences$Car@7a81197d Repaired com.javacodegeeks.java8.method.references.MethodReferences$Car@7a81197d Following the com.javacodegeeks.java8.method.references.MethodReferences$Car@7a81197d For more examples and details on method references, please refer to the official documentation. 2.4. Repeating annotations Since Java 5 introduced annotations support, this feature has become very popular and is very widely used. 
However, one of the limitations of annotation usage was the fact that the same annotation could not be declared more than once at the same location. Java 8 breaks this rule and introduces repeating annotations, which allow the same annotation to be repeated several times in the place it is declared. Repeating annotations should themselves be annotated with the @Repeatable annotation. In fact, this is not a language change but more of a compiler trick, as underneath the technique stays the same. Let us take a look at a quick example: package com.javacodegeeks.java8.repeatable.annotations;import java.lang.annotation.ElementType; import java.lang.annotation.Repeatable; import java.lang.annotation.Retention; import java.lang.annotation.RetentionPolicy; import java.lang.annotation.Target;public class RepeatingAnnotations { @Target( ElementType.TYPE ) @Retention( RetentionPolicy.RUNTIME ) public @interface Filters { Filter[] value(); } @Target( ElementType.TYPE ) @Retention( RetentionPolicy.RUNTIME ) @Repeatable( Filters.class ) public @interface Filter { String value(); }; @Filter( "filter1" ) @Filter( "filter2" ) public interface Filterable { } public static void main(String[] args) { for( Filter filter: Filterable.class.getAnnotationsByType( Filter.class ) ) { System.out.println( filter.value() ); } } } As we can see, there is an annotation class Filter annotated with @Repeatable( Filters.class ). Filters is just a holder of Filter annotations, but the Java compiler tries hard to hide its presence from developers. As such, the interface Filterable has the Filter annotation defined twice (with no mention of Filters). Also, the Reflection API provides the new method getAnnotationsByType() to return repeating annotations of some type (please notice that Filterable.class.getAnnotation( Filters.class ) will return the instance of Filters injected by the compiler). The program output looks like this: filter1 filter2 For more details please refer to the official documentation. 2.5. 
Better Type Inference The Java 8 compiler has improved a lot on type inference. In many cases explicit type parameters can be inferred by the compiler, keeping the code cleaner. Let us take a look at one of the examples. package com.javacodegeeks.java8.type.inference;public class Value< T > { public static< T > T defaultValue() { return null; } public T getOrDefault( T value, T defaultValue ) { return ( value != null ) ? value : defaultValue; } } And here is the usage of the Value< String > type. package com.javacodegeeks.java8.type.inference;public class TypeInference { public static void main(String[] args) { final Value< String > value = new Value<>(); value.getOrDefault( "22", Value.defaultValue() ); } } The type parameter of Value.defaultValue() is inferred and is not required to be provided. In Java 7, the same example would not compile and would have to be rewritten as Value.< String >defaultValue(). 2.6. Extended Annotations Support Java 8 extends the contexts where annotations might be used. Now, it is possible to annotate almost anything: local variables, generic types, super-classes and implemented interfaces, even a method's exceptions declaration. A couple of examples are shown below. 
package com.javacodegeeks.java8.annotations;import java.lang.annotation.ElementType; import java.lang.annotation.Retention; import java.lang.annotation.RetentionPolicy; import java.lang.annotation.Target; import java.util.ArrayList; import java.util.Collection;public class Annotations { @Retention( RetentionPolicy.RUNTIME ) @Target( { ElementType.TYPE_USE, ElementType.TYPE_PARAMETER } ) public @interface NonEmpty { } public static class Holder< @NonEmpty T > extends @NonEmpty Object { public void method() throws @NonEmpty Exception { } } @SuppressWarnings( "unused" ) public static void main(String[] args) { final Holder< String > holder = new @NonEmpty Holder< String >(); @NonEmpty Collection< @NonEmpty String > strings = new ArrayList<>(); } } The ElementType.TYPE_USE and ElementType.TYPE_PARAMETER are two new element types to describe the applicable annotation contexts. The Annotation Processing API also underwent some minor changes to recognize these new type annotations in the Java programming language. 3. New Features in Java compiler 3.1. Parameter names Literally for ages, Java developers have been inventing different ways to preserve method parameter names in Java byte code and make them available at runtime (for example, the Paranamer library). And finally, Java 8 bakes this long-demanded feature into the language (using the Reflection API and the Parameter.getName() method) and the byte code (using the new javac compiler argument -parameters). 
package com.javacodegeeks.java8.parameter.names;import java.lang.reflect.Method; import java.lang.reflect.Parameter;public class ParameterNames { public static void main(String[] args) throws Exception { Method method = ParameterNames.class.getMethod( "main", String[].class ); for( final Parameter parameter: method.getParameters() ) { System.out.println( "Parameter: " + parameter.getName() ); } } } If you compile this class without the -parameters argument and then run the program, you will see something like this: Parameter: arg0 With the -parameters argument passed to the compiler the program output will be different (the actual name of the parameter will be shown): Parameter: args For experienced Maven users, the -parameters argument can be added to the compiler using the configuration section of the maven-compiler-plugin: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.1</version> <configuration> <compilerArgument>-parameters</compilerArgument> <source>1.8</source> <target>1.8</target> </configuration> </plugin> The latest Eclipse Kepler SR2 release with Java 8 support (please check out these download instructions) provides a useful configuration option to control this compiler setting. Additionally, to verify the availability of parameter names, there is a handy method isNamePresent() provided by the Parameter class. 4. New Features in Java libraries Java 8 adds a lot of new classes and extends existing ones in order to provide better support for modern concurrency, functional programming, date/time handling, and more. 4.1. Optional The famous NullPointerException is by far the most popular cause of Java application failures. A long time ago, the great Google Guava project introduced Optional as a solution to NullPointerException, discouraging codebase pollution with null checks and encouraging developers to write cleaner code. 
Inspired by Google Guava, Optional is now a part of the Java 8 library. Optional is just a container: it can hold a value of some type T or simply be empty. It provides a lot of useful methods, so explicit null checks have no excuse anymore. Please refer to the official Java 8 documentation for more details. We are going to take a look at two small examples of Optional usage: with a nullable value and with a value which does not allow nulls. Optional< String > fullName = Optional.ofNullable( null ); System.out.println( "Full Name is set? " + fullName.isPresent() ); System.out.println( "Full Name: " + fullName.orElseGet( () -> "[none]" ) ); System.out.println( fullName.map( s -> "Hey " + s + "!" ).orElse( "Hey Stranger!" ) ); The isPresent() method returns true if this instance of Optional holds a value and false otherwise. The orElseGet() method provides a fallback mechanism in case the Optional is empty, accepting a function to generate the default value. The map() method transforms the current Optional's value and returns a new Optional instance. The orElse() method is similar to orElseGet(), but instead of a function it accepts the default value directly. Here is the output of this program: Full Name is set? false Full Name: [none] Hey Stranger! Let us briefly look at another example: Optional< String > firstName = Optional.of( "Tom" ); System.out.println( "First Name is set? " + firstName.isPresent() ); System.out.println( "First Name: " + firstName.orElseGet( () -> "[none]" ) ); System.out.println( firstName.map( s -> "Hey " + s + "!" ).orElse( "Hey Stranger!" ) ); System.out.println(); And here is the output: First Name is set? true First Name: Tom Hey Tom! For more details please refer to the official documentation. 4.2. Streams The newly added Stream API (java.util.stream) introduces real-world functional-style programming into Java. 
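Before diving deeper into streams, one more hedged Optional sketch. The findEmail lookup and all names below are made up for illustration: filter() and map() compose into a pipeline where an empty Optional simply propagates through each step, so no explicit null checks are needed anywhere.

```java
import java.util.Optional;

public class OptionalSketch {

    // A hypothetical lookup that may or may not find a value.
    static Optional<String> findEmail(String user) {
        return "tom".equals(user)
                ? Optional.of("TOM@EXAMPLE.COM")
                : Optional.empty();
    }

    static String displayEmail(String user) {
        // Each step passes an empty Optional straight through:
        // filter() keeps the value only if the predicate holds,
        // map() transforms it, orElse() supplies the fallback.
        return findEmail(user)
                .filter(e -> e.contains("@"))
                .map(String::toLowerCase)
                .orElse("no email on file");
    }

    public static void main(String[] args) {
        System.out.println(displayEmail("tom"));   // tom@example.com
        System.out.println(displayEmail("alice")); // no email on file
    }
}
```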
This is by far the most comprehensive addition to the Java library, intended to make Java developers significantly more productive by allowing them to write effective, clean, and concise code. The Stream API makes collection processing greatly simplified (but it is not limited to Java collections only, as we will see later). Let us start off with a simple class called Task. public class Streams { private enum Status { OPEN, CLOSED }; private static final class Task { private final Status status; private final Integer points;Task( final Status status, final Integer points ) { this.status = status; this.points = points; } public Integer getPoints() { return points; } public Status getStatus() { return status; } @Override public String toString() { return String.format( "[%s, %d]", status, points ); } } } Task has some notion of points (or pseudo-complexity) and can be either OPEN or CLOSED. Then let us introduce a small collection of tasks to play with. final Collection< Task > tasks = Arrays.asList( new Task( Status.OPEN, 5 ), new Task( Status.OPEN, 13 ), new Task( Status.CLOSED, 8 ) ); The first question we are going to address is: how many points in total do all OPEN tasks have? Up to Java 8, the usual solution for this would be some sort of foreach iteration. But in Java 8 the answer is streams: a sequence of elements supporting sequential and parallel aggregate operations. // Calculate total points of all active tasks using sum() final long totalPointsOfOpenTasks = tasks .stream() .filter( task -> task.getStatus() == Status.OPEN ) .mapToInt( Task::getPoints ) .sum(); System.out.println( "Total points: " + totalPointsOfOpenTasks ); And the output on the console looks like this: Total points: 18 There are a couple of things going on here. Firstly, the tasks collection is converted to its stream representation. Then, the filter operation on the stream filters out all CLOSED tasks. 
In the next step, the mapToInt operation converts the stream of Tasks to a stream of Integers using the Task::getPoints method of each task instance. And lastly, all points are summed up using the sum method, producing the final result. Before moving on to the next examples, there are some notes to keep in mind about streams (more details here). Stream operations are divided into intermediate and terminal operations. Intermediate operations return a new stream. They are always lazy: executing an intermediate operation such as filter does not actually perform any filtering, but instead creates a new stream that, when traversed, contains the elements of the initial stream that match the given predicate. Terminal operations, such as forEach or sum, may traverse the stream to produce a result or a side-effect. After the terminal operation is performed, the stream pipeline is considered consumed, and can no longer be used. In almost all cases, terminal operations are eager, completing their traversal of the underlying data source. Yet another value proposition of streams is out-of-the-box support for parallel processing. Let us take a look at this example, which sums the points of all the tasks. // Calculate total points of all tasks final double totalPoints = tasks .stream() .parallel() .map( task -> task.getPoints() ) // or map( Task::getPoints ) .reduce( 0, Integer::sum ); System.out.println( "Total points (all tasks): " + totalPoints ); It is very similar to the first example, except that we try to process all the tasks in parallel and calculate the final result using the reduce method. Here is the console output: Total points (all tasks): 26.0 Often, there is a need to perform a grouping of collection elements by some criterion. Streams can help with that as well, as the example below demonstrates. 
// Group tasks by their status final Map< Status, List< Task > > map = tasks .stream() .collect( Collectors.groupingBy( Task::getStatus ) ); System.out.println( map ); The console output of this example looks like this: {CLOSED=[[CLOSED, 8]], OPEN=[[OPEN, 5], [OPEN, 13]]} To finish up with the tasks example, let us calculate the overall percentage (or weight) of each task across the whole collection, based on its points. // Calculate the weight of each task (as percent of total points) final Collection< String > result = tasks .stream() // Stream< Task > .mapToInt( Task::getPoints ) // IntStream .asLongStream() // LongStream .mapToDouble( points -> points / totalPoints ) // DoubleStream .boxed() // Stream< Double > .mapToLong( weight -> ( long )( weight * 100 ) ) // LongStream .mapToObj( percentage -> percentage + "%" ) // Stream< String > .collect( Collectors.toList() ); // List< String > System.out.println( result ); The console output is: [19%, 50%, 30%] And lastly, as we mentioned before, the Stream API is not only about Java collections. Typical I/O operations, like reading a text file line by line, are very good candidates to benefit from stream processing. Here is a small example to confirm that. final Path path = new File( filename ).toPath(); try( Stream< String > lines = Files.lines( path, StandardCharsets.UTF_8 ) ) { lines.onClose( () -> System.out.println("Done!") ).forEach( System.out::println ); } The onClose method called on the stream returns an equivalent stream with an additional close handler. Close handlers are run when the close() method is called on the stream. The Stream API together with lambdas and method references, backed by interface default and static methods, is the Java 8 response to the modern paradigms in software development. For more details, please refer to the official documentation. 4.3. Date/Time API (JSR 310) Java 8 makes one more attempt at date and time management by delivering the New Date-Time API (JSR 310). 
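Before turning to dates and times, one more hedged stream sketch. As noted above, streams are not tied to collections: they can be generated from a numeric range or even from an infinite sequence, which a terminal operation then cuts down. All names below are illustrative.

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class GeneratedStreams {

    // Sum of the squares of 1..10, with no backing collection at all.
    static int sumOfSquares() {
        return IntStream.rangeClosed(1, 10)
                .map(n -> n * n)
                .sum();
    }

    // An infinite stream of powers of two; laziness means only the
    // first five elements are ever produced, thanks to limit().
    static List<Long> firstPowersOfTwo() {
        return Stream.iterate(1L, n -> n * 2)
                .limit(5)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares());      // 385
        System.out.println(firstPowersOfTwo());  // [1, 2, 4, 8, 16]
    }
}
```

Note how limit() makes the infinite Stream.iterate source safe: because intermediate operations are lazy, no element beyond the fifth is ever computed.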
Date and time manipulation has long been one of the worst pain points for Java developers. The standard java.util.Date, followed by java.util.Calendar, hasn't improved the situation at all (arguably, it made it even more confusing). That is how Joda-Time was born: the great alternative date/time API for Java. Java 8's New Date-Time API (JSR 310) was heavily influenced by Joda-Time and took the best of it. The new java.time package contains all the classes for date, time, date/time, time zone, instant, duration, and clock manipulation. In the design of the API, immutability has been taken very seriously: no changes are allowed (a tough lesson learnt from java.util.Calendar). If a modification is required, a new instance of the respective class is returned. Let us take a look at the key classes and examples of their usage. The first class is Clock, which provides access to the current instant, date and time using a time-zone. Clock can be used instead of System.currentTimeMillis() and TimeZone.getDefault(). // Get the system clock as UTC offset final Clock clock = Clock.systemUTC(); System.out.println( clock.instant() ); System.out.println( clock.millis() ); The sample output on the console: 2014-04-12T15:19:29.282Z 1397315969360 Other new classes we are going to look at are LocalDate and LocalTime. LocalDate holds only the date part without a time-zone in the ISO-8601 calendar system. Respectively, LocalTime holds only the time part without a time-zone in the ISO-8601 calendar system. Both LocalDate and LocalTime can be created from a Clock. 
// Get the local date and local time final LocalDate date = LocalDate.now(); final LocalDate dateFromClock = LocalDate.now( clock ); System.out.println( date ); System.out.println( dateFromClock ); // Get the local date and local time final LocalTime time = LocalTime.now(); final LocalTime timeFromClock = LocalTime.now( clock ); System.out.println( time ); System.out.println( timeFromClock ); The sample output on the console: 2014-04-12 2014-04-12 11:25:54.568 15:25:54.568 LocalDateTime combines LocalDate and LocalTime and holds a date with time, but without a time-zone, in the ISO-8601 calendar system. A quick example is shown below. // Get the local date/time final LocalDateTime datetime = LocalDateTime.now(); final LocalDateTime datetimeFromClock = LocalDateTime.now( clock ); System.out.println( datetime ); System.out.println( datetimeFromClock ); The sample output on the console: 2014-04-12T11:37:52.309 2014-04-12T15:37:52.309 In case you need a date/time for a particular timezone, ZonedDateTime is here to help. It holds a date with time and with a time-zone in the ISO-8601 calendar system. Here are a couple of examples for different timezones. // Get the zoned date/time final ZonedDateTime zonedDatetime = ZonedDateTime.now(); final ZonedDateTime zonedDatetimeFromClock = ZonedDateTime.now( clock ); final ZonedDateTime zonedDatetimeFromZone = ZonedDateTime.now( ZoneId.of( "America/Los_Angeles" ) ); System.out.println( zonedDatetime ); System.out.println( zonedDatetimeFromClock ); System.out.println( zonedDatetimeFromZone ); The sample output on the console: 2014-04-12T11:47:01.017-04:00[America/New_York] 2014-04-12T15:47:01.017Z 2014-04-12T08:47:01.017-07:00[America/Los_Angeles] And finally, let us take a look at the Duration class: an amount of time in terms of seconds and nanoseconds. It makes it very easy to compute the difference between two date/times. Let us take a look at that. 
// Get duration between two dates final LocalDateTime from = LocalDateTime.of( 2014, Month.APRIL, 16, 0, 0, 0 ); final LocalDateTime to = LocalDateTime.of( 2015, Month.APRIL, 16, 23, 59, 59 );final Duration duration = Duration.between( from, to ); System.out.println( "Duration in days: " + duration.toDays() ); System.out.println( "Duration in hours: " + duration.toHours() ); The example above computes the duration (in days and hours) between two dates, 16 April 2014 and 16 April 2015. Here is the sample output on the console: Duration in days: 365 Duration in hours: 8783 The overall impression of Java 8's new date/time API is very, very positive. Partially because of the battle-proven foundation it is built upon (Joda-Time), partially because this time it was finally tackled seriously and developer voices were heard. For more details please refer to the official documentation. 4.4. Nashorn JavaScript engine Java 8 comes with the new Nashorn JavaScript engine, which allows developing and running certain kinds of JavaScript applications on the JVM. The Nashorn JavaScript engine is just another implementation of javax.script.ScriptEngine and follows the same set of rules, permitting Java and JavaScript interoperability. Here is a small example. ScriptEngineManager manager = new ScriptEngineManager(); ScriptEngine engine = manager.getEngineByName( "JavaScript" ); System.out.println( engine.getClass().getName() ); System.out.println( "Result:" + engine.eval( "function f() { return 1; }; f() + 1;" ) ); The sample output on the console: jdk.nashorn.api.scripting.NashornScriptEngine Result: 2 We will get back to Nashorn later in the section dedicated to new Java tools. 4.5. Base64 Finally, support for Base64 encoding has made its way into the Java standard library with the Java 8 release. It is very easy to use, as the following example shows. 
package com.javacodegeeks.java8.base64;import java.nio.charset.StandardCharsets; import java.util.Base64;public class Base64s { public static void main(String[] args) { final String text = "Base64 finally in Java 8!"; final String encoded = Base64 .getEncoder() .encodeToString( text.getBytes( StandardCharsets.UTF_8 ) ); System.out.println( encoded ); final String decoded = new String( Base64.getDecoder().decode( encoded ), StandardCharsets.UTF_8 ); System.out.println( decoded ); } } The console output from the program run shows both the encoded and decoded text: QmFzZTY0IGZpbmFsbHkgaW4gSmF2YSA4IQ== Base64 finally in Java 8! There are also URL-friendly and MIME-friendly encoders/decoders provided by the Base64 class (Base64.getUrlEncoder() / Base64.getUrlDecoder(), Base64.getMimeEncoder() / Base64.getMimeDecoder()). 4.6. Parallel Arrays The Java 8 release adds a lot of new methods to allow parallel array processing. Arguably, the most important one is parallelSort(), which may significantly speed up sorting on multicore machines. The following small example demonstrates this new method family (parallelXxx) in action. package com.javacodegeeks.java8.parallel.arrays;import java.util.Arrays; import java.util.concurrent.ThreadLocalRandom;public class ParallelArrays { public static void main( String[] args ) { long[] arrayOfLong = new long [ 20000 ]; Arrays.parallelSetAll( arrayOfLong, index -> ThreadLocalRandom.current().nextInt( 1000000 ) ); Arrays.stream( arrayOfLong ).limit( 10 ).forEach( i -> System.out.print( i + " " ) ); System.out.println(); Arrays.parallelSort( arrayOfLong ); Arrays.stream( arrayOfLong ).limit( 10 ).forEach( i -> System.out.print( i + " " ) ); System.out.println(); } } This small code snippet uses the method parallelSetAll() to fill up an array with 20000 random values. After that, parallelSort() is applied. The program outputs the first 10 elements before and after sorting so as to ensure the array is really ordered. 
The sample program output may look like this (please notice that the array elements are randomly generated): Unsorted: 591217 891976 443951 424479 766825 351964 242997 642839 119108 552378 Sorted: 39 220 263 268 325 607 655 678 723 793 4.7. Concurrency New methods have been added to the java.util.concurrent.ConcurrentHashMap class to support aggregate operations based on the newly added streams facility and lambda expressions. Also, new methods have been added to the java.util.concurrent.ForkJoinPool class to support a common pool (check also our free course on Java concurrency). The new java.util.concurrent.locks.StampedLock class has been added to provide a capability-based lock with three modes for controlling read/write access (it may be considered a better alternative to the infamous java.util.concurrent.locks.ReadWriteLock). New classes have been added to the java.util.concurrent.atomic package: DoubleAccumulator, DoubleAdder, LongAccumulator and LongAdder. 5. New Java tools Java 8 comes with a new set of command line tools. In this section we are going to look at the most interesting of them. 5.1. Nashorn engine: jjs jjs is a command-line standalone Nashorn engine. It accepts a list of JavaScript source code files as arguments and runs them. For example, let us create a file func.js with the following content: function f() { return 1; };print( f() + 1 ); To execute this file from the command line, let us pass it as an argument to jjs: jjs func.js The output on the console will be: 2 For more details please refer to the official documentation. 5.2. Class dependency analyzer: jdeps jdeps is a really great command line tool. It shows the package-level or class-level dependencies of Java class files. It accepts a .class file, a directory, or a JAR file as input. By default, jdeps outputs the dependencies to the standard output (console). As an example, let us take a look at the dependencies report for the popular Spring Framework library. 
To keep the example short, let us analyze only one JAR file: org.springframework.core-3.0.5.RELEASE.jar.

jdeps org.springframework.core-3.0.5.RELEASE.jar

This command outputs quite a lot, so we are going to look at only part of it. The dependencies are grouped by package. If a dependency is not available on the classpath, it is shown as not found.

org.springframework.core-3.0.5.RELEASE.jar -> C:\Program Files\Java\jdk1.8.0\jre\lib\rt.jar
   org.springframework.core (org.springframework.core-3.0.5.RELEASE.jar)
      -> java.io
      -> java.lang
      -> java.lang.annotation
      -> java.lang.ref
      -> java.lang.reflect
      -> java.util
      -> java.util.concurrent
      -> org.apache.commons.logging    not found
      -> org.springframework.asm       not found
      -> org.springframework.asm.commons    not found
   org.springframework.core.annotation (org.springframework.core-3.0.5.RELEASE.jar)
      -> java.lang
      -> java.lang.annotation
      -> java.lang.reflect
      -> java.util

For more details please refer to the official documentation.

6. New Features in Java runtime (JVM)

The PermGen space is gone and has been replaced with Metaspace (JEP 122). The JVM options -XX:PermSize and -XX:MaxPermSize have been replaced by -XX:MetaspaceSize and -XX:MaxMetaspaceSize respectively.

7. Conclusions

The future is here: Java 8 moves this great platform forward by delivering features that make developers much more productive. It is too early to move production systems to Java 8, but in the next couple of months its adoption should slowly start growing. Nevertheless, the time is right to start preparing your code bases to be compatible with Java 8, and to be ready to turn the switch once Java 8 proves to be safe and stable enough. As a confirmation of community acceptance of Java 8, Pivotal recently released Spring Framework 4.0.3 with production-ready Java 8 support.
You are welcome to contribute with your comments about the exciting new Java 8 features!

8. Resources

Some additional resources which discuss in depth different aspects of Java 8 features:

- What's New in JDK 8: http://www.oracle.com/technetwork/java/javase/8-whats-new-2157071.html
- The Java Tutorials: http://docs.oracle.com/javase/tutorial/
- WildFly 8, JDK 8, NetBeans 8, Java EE 7: http://blog.arungupta.me/2014/03/wildfly8-jdk8-netbeans8-javaee7-excellent-combo-enterprise-java/
- Java 8 Tutorial: http://winterbe.com/posts/2014/03/16/java-8-tutorial/
- JDK 8 Command-line Static Dependency Checker: http://marxsoftware.blogspot.ca/2014/03/jdeps.html
- The Illuminating Javadoc of JDK 8: http://marxsoftware.blogspot.ca/2014/03/illuminating-javadoc-of-jdk-8.html
- The Dark Side of Java 8: http://blog.jooq.org/2014/04/04/java-8-friday-the-dark-side-of-java-8/
- Installing Java™ 8 Support in Eclipse Kepler SR2: http://www.eclipse.org/downloads/java8/
- Java 8: http://www.baeldung.com/java8
- Oracle Nashorn. A Next-Generation JavaScript Engine for the JVM: http://www.oracle.com/technetwork/articles/java/jf14-nashorn-2126515.html

Test Driven Discipline

As the “Is TDD Dead or Alive” debate continues, it is interesting to see the different kinds of discussion. Here's an example:

The new default answer to TDD critics: How Would You Know? — Darren Cauthon (@darrencauthon) May 5, 2014

Not much of a two-way conversation there. In contrast to the tweet above, people who were disappointed by TDD have actually used it. Maybe to the letter, or maybe not. The fact is, they think TDD let them down. And I get why.

Here's a recent story. Recently, working on a 6-year-old code base, we wanted to add some new functionality. We decided to use TDD for the new components, and then embed them into the codebase. This would be covered by new integration tests, written upfront. So far, so good. By-the-book test-first. We wanted to use TDD because:

- There's a lot of internal logic involved and we want to make sure it's correct.
- We're not sure what the right design is.
- Developing the new functionality away from the current implementation gives us better control, and quicker feedback.

My partner was new to TDD, and we paired on this for a couple of days, moving from class to class, adding tests and seeing them pass. In just a few days, I encountered two kinds of struggle. The first one was between the two of us: me pushing for incremental steps, the test-code-refactor, don't-jump-ahead method. In other words, restraining my partner. I could see how it tormented him. There were moments of appreciation for the evolving design, but more moments of “I'll play along, but not for the fun of it”. Indeed, at the end of the second day we split, and re-joined a day later. He had added some code to the existing codebase, without tests and not part of the new design. The second one was internal: as work continued, we added more and more tests, changed the design a couple of times, renamed and extracted. All the good stuff that comes with TDD. Yet a little voice kept talking inside my head.
It said: “All this work could be done in a few hours; you're working too much for this task”. That's the result of two days of work. How much struggle do you think months-long projects involve? Can everyone weather the storms?

TDD is a discipline

I usually explain TDD as a methodology, then later explain that it's really a discipline, in both senses of the word. Using TDD requires discipline and confidence, not giving in to the dark side. Which may not always be so dark. Or may not seem like it. The people who swear by TDD have grown this discipline, as well as many skills to make it work and avoid the many pot-holes. Many have conquered that little voice in their head. But I bet it's still there. The people who were disappointed were not able to do this. It's easy to say “they don't understand it”, or “they didn't stick around to see the value”, or “how would you know”. I think the disappointment comes from an unfulfilled promise: we describe TDD as simple, just following a few steps. Nobody talks about the constant struggle between people with different skills, the pressure to complete tasks quickly, and the totality of it all: no compromises, or you're doing it wrong. Here's the whole truth: stick with it, and you'll get rewards. But know this: TDD is hard work. I hope you won't get disappointed.

Reference: Test Driven Discipline from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Inconvenient truths of project Status reporting

Long-time readers of this blog may recall that I subscribe to the high-brow MIT Sloan Management Review. They may also remember that I'm never quite sure it is worth the rather high subscription fee. But once in a while an article comes along which is worth a year's subscription. “The Pitfalls of Project Status Reporting” is this year's article. As the title suggests, the article looks at why project status reporting, specifically for IT projects, so often fails to alert companies to the problems in project work. (Clearly the authors are unfamiliar with #BeyondProjects/#NoProjects, but I suspect most of the same problems can be found in the false-projects so beloved of IT organizations.) If you want the full story you will need to buy your own copy of the article (Amazon have it for £2.50), but for the rest of you here are the highlights: the 5 points the authors called “Inconvenient Truths”, plus some of the text and a couple of comments from myself. The comments are all my own, but the quotes – and the headlines in bold – are from the article, which is research, not conjecture; something we could do with more of!

1. Executives can't rely on project staff and other employees to accurately report project status information and to speak up when they see problems.

“[software project managers] write biased reports 60% of the time and their bias is more than twice as likely to be optimistic.”

Basically, people don't like reporting bad news and worry that reporting it will reflect badly on themselves.

2. A variety of reasons can cause people to misreport about project status; individual personality traits, work climate and cultural norms can all play a role.

“employees who work in climates that support self-interested behaviour are more likely to misreport than employees who work in climates based on ‘rules and code’.”

Why do I think of banks and bonuses when I read this?

3.
An aggressive audit team can't counter the effects of project status misreporting and withholding of information by project staff.

“Once auditors are added to the mix… a dysfunctional cycle that results in even less openness regarding status information.”

“Diminished trust in the senior executive who controlled their project was associated with an increase in misreporting.”

More reporting will make things worse: if auditors arrive then people don't feel they are trusted, and guess what? They might not show the auditors the dirty washing, and might tend to paint things as good.

4. Putting a senior executive in charge of a project may increase misreporting.

“research suggests that the stronger the power of the sponsor or the project leader, the less inclined subordinates are to report accurately.”

I find this suggestion very worrying, because some people in the Scrum community insist that the Product Owner must be an executive (“the real product owner”) who has real power to secure the resources and changes that make the project a success. This research suggests that having a strong, senior Product Owner could make things worse.

5. Executives often ignore bad news if they receive it.

I am reminded of one client where my own attempts to raise a red flag went nowhere. On my final visit to the client I spoke with the senior architect to again voice both my concerns over the work and the failure of anyone to listen. Unfortunately he had the same problem: he saw the same problems but couldn't find anyone willing to listen. My initial thought on all of this is that it is all the more reason to base reporting on deliverables, i.e. what has actually been delivered that is working and in use. Rather than reporting proxy “status”, report actual progress.

Reference: Inconvenient truths of project Status reporting from our JCG partner Allan Kelly at the Agile, Lean, Patterns blog....

Continuous Delivery: CI Tools Setup

This is the second article in the “Continuous Delivery” series. We'll continue where we left off in Introduction to concepts and tools. The goal of this article is to set up a Jenkins server locally through an automated and repeatable process, with all the artifacts stored in a Git repository. This will require tools like VirtualBox and Vagrant. It will also require registration with Docker. Unlike the previous article, which provided only general information, in this one we'll have to get our hands dirty and follow the examples.

What do we need in order to have a Jenkins server up and running locally? We need a virtual machine or an available server, an operating system installed and configured, and Jenkins installed and configured. Since everything we do needs to be reliable and repeatable, we'll also need a version control system. Examples in this article will use GitHub as the VCS. The following software is required by the examples in this article: VirtualBox, Vagrant, Docker, a GitHub account and a Git client (we're using TortoiseGit but any other should do). Please consult the above-mentioned sites for installation instructions. All the code from this article can be found in the TechnologyConversationsCD GitHub repository.

Vagrant: Virtual Machine with Operating System

We'll create a VirtualBox machine. The process itself will be done with Vagrant. Vagrant is a tool for building environments. Even though it can be used to build a complete environment, we'll use it only to create a virtual machine with an Ubuntu server. The process is very simple. Create a file called Vagrantfile. The contents of that file should be:

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
end

The full source code can be found in the Vagrantfile. The key line is the one starting with config.*. It specifies which server we'd like to install. Easiest is to use one of the already created boxes from the Vagrant Cloud service.
For the purpose of this article, I chose a standard Ubuntu 12.04 LTS 64-bit box called “hashicorp/precise64”. Next, we should bring up the virtual machine. To do that, the following should be executed from the command prompt while in the same directory where the Vagrantfile is located:

$ vagrant up

It takes some time to download the box when run for the first time. Once done, creating virtual machines will be almost instantaneous. From now on we can log in to our new Ubuntu virtual machine with the username and password vagrant/vagrant. Fire up your favorite SSH client (Putty is always a good choice) and play around with your new server. Once done, we'll proceed with the setup of Docker.

Docker: Jenkins server

Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere. Let's create an Ubuntu container with Jenkins. To do that we'll have to create a Dockerfile with the following content:

FROM ubuntu:12.04
MAINTAINER Viktor Farcic, "viktor@farcic.com"

RUN echo deb http://archive.ubuntu.com/ubuntu precise universe >> /etc/apt/sources.list
RUN apt-get update && apt-get clean
RUN apt-get install -q -y openjdk-7-jre-headless && apt-get clean

ADD http://mirrors.jenkins-ci.org/war/1.560/jenkins.war /opt/jenkins.war
RUN ln -sf /jenkins /root/.jenkins

ENTRYPOINT ["java", "-jar", "/opt/jenkins.war"]
EXPOSE 8080
VOLUME ["/jenkins"]
CMD [""]

The full source code can be found in the Dockerfile. Docker uses its own domain-specific language (DSL) that is pretty straightforward. I'll go briefly through the instructions used in the above file. For detailed information, please consult the Docker documentation.

- FROM: The FROM instruction sets the base image for subsequent instructions. In our case, the base image is the Ubuntu image found in the Docker public repository. We'll build on top of this base and set up a Jenkins server.
- MAINTAINER: Name and email of the author.
The purpose is purely informational.

- RUN: The RUN instruction executes any command in a new layer on top of the current image and commits the result. It has several forms; the one we're using simply executes shell commands as they are written.
- ADD: The ADD instruction copies new files and adds them to the container's filesystem.
- ENTRYPOINT: An ENTRYPOINT helps you configure a container that you can run as an executable; when you specify an ENTRYPOINT, the whole container runs as if it were just that executable. In our case, we're telling Docker to run the Jenkins WAR file.
- EXPOSE: The EXPOSE instruction informs Docker that the container will listen on the specified network ports at runtime. Since Jenkins uses port 8080, we need to expose it to processes running outside the container.

The next step is to push the file we just created to our GitHub repo and configure Docker to build the container every time we change the contents of that file. The rest of the article assumes that the reader is registered with Docker. Go to your Docker profile and select “Authorized Services” followed by “Go to application”. By clicking “+ Add” > “Add trusted (source) build” we can add a connection from Docker to our GitHub repo. Please follow the instructions on the screen for the rest of the steps. The build of our new container can be followed from the build status Docker page. Once done (Active status), the newly created container is ready for use. I named my container repo vfarcic/cd_jenkins. It is set to be publicly available so that anyone can reuse it. The rest of the article will use that Docker container. If you followed the instructions and created your own, please change vfarcic/cd_jenkins to the name you used.

At this moment we have a virtual machine created with VirtualBox and Vagrant. That VM has only the Ubuntu OS installed. On the other hand, we created a Docker container with a Jenkins server. That container can be used on almost any machine where we need it.
It can be a physical server, a virtual machine, a laptop, etc.

Virtual Machine with the Jenkins server

The next step is to use the Ubuntu VM and Jenkins together. In order to do that we should go back to our Vagrantfile and specify that we want our container to be set up and run every time we start the VM. Vagrant supports different provisioners. Provisioners in Vagrant allow you to automatically install software, alter configurations, and more on the machine as part of the vagrant up process. The most basic provisioners are shell scripts. For simple projects, they work fine. However, shell scripts become cumbersome very quickly. For that purpose Vagrant supports Ansible, CFEngine, Chef, Puppet, Salt and Docker. As you probably guessed, we'll proceed using Docker. I encourage you to explore the rest as well. The modified Vagrantfile looks like the following:

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "hashicorp/precise64"
  config.vm.provision "docker" do |d|
    d.run "vfarcic/cd_jenkins", args: "-d -p"
  end
  config.vm.network :forwarded_port, host: 4567, guest: 49153
end

We added two new instructions. The first one is provision. It tells Vagrant to use the docker provisioner, run the container that we just created (vfarcic/cd_jenkins) and supply it with the -p argument, specifying that all container communication on the Jenkins port 8080 should be redirected to 49153. Now we have Jenkins up and running on port 49153. To make that port visible outside Vagrant, we need to forward it again using the network instruction. To apply the changes we just made to the already running VM, run the following:

vagrant reload --provision

With reload we're restarting the virtual machine. --provision tells Vagrant to enable provisioning (in this case with Docker). Prior to the execution of the Docker instructions, Vagrant will install Docker if it does not exist (as is the case with our Ubuntu server). This is the big moment!
Open http://localhost:4567 in your favorite browser and start using the newly created Jenkins server.

Summary

Even though the whole process might look complicated at first glance, with a bit of practice it is very easy and intuitive. As a result, we have a virtual machine that can be easily created, destroyed, multiplied, distributed, etc. (see the Vagrant documentation for more info). That virtual machine comes with a Docker container running Jenkins. Docker does not depend on Vagrant, and the same container can be put on any other machine. It is a common practice to use Vagrant as a tool to create development machines used locally. Once we're done with the machine it can be suspended (and later resumed) or destroyed (and later recreated). In some cases Vagrant is used in production environments as well. However, it is more optimal to use only Docker, without the overhead that a VM has. One server can run multiple Docker containers without the need for separate VMs with separate operating systems. We stored two small files in our GitHub repository (Vagrantfile and Dockerfile). With them we can recreate exactly the same process as many times as needed. More importantly, having editable text files will allow us to easily build on top of this article and extend our solution. The next articles will start the journey of the CI flow. We'll pull the code from our repository, run static code analysis, build it, run different kinds of tests, etc. It will be a fun ride, and Docker will be one of our key tools. Until the next article is published, I encourage you to explore Vagrant and Docker in more detail.

What about Travis and other solutions?

In the previous article I promised that we'd work with both Jenkins and Travis, and yet I did not even mention the latter. That's because we'll use the Travis cloud server, which is already set up and requires only registration.
The goal of this series is to show how to set up Continuous Integration (and later on Continuous Deployment and Delivery) servers and flows. Depending on your needs, you might want to set it up yourself (as we just did with Jenkins) or use an already set-up cloud solution. We'll continue using Jenkins as an example of a locally set-up server and Travis as one of the many cloud-based solutions. That does not mean that there is no cloud-based Jenkins solution, nor that Travis cannot be installed locally. We'll explore those options as well. Since the previous article was published, I have received suggestions for several alternative cloud-based CI/CD solutions. Starting from the next article, I'll extend our stack with a few other cloud-based solutions besides Travis.

Reference: Continuous Delivery: CI Tools Setup from our JCG partner Viktor Farcic at the Technology conversations blog....

How to Implement Sort Indirection in SQL

I've recently stumbled upon this interesting Stack Overflow question, where the user essentially wanted to ensure that the resulting records are delivered in a well-defined order. They wrote:

SELECT name
FROM product
WHERE name IN ('CE367FAACDHCANPH-151556',
               'CE367FAACEX9ANPH-153877',
               'NI564FAACJSFANPH-162605',
               'GE526OTACCD3ANPH-149839')

They got:

CE367FAACDHCANPH-151556
CE367FAACEX9ANPH-153877
GE526OTACCD3ANPH-149839
NI564FAACJSFANPH-162605

They wanted:

CE367FAACDHCANPH-151556
CE367FAACEX9ANPH-153877
NI564FAACJSFANPH-162605
GE526OTACCD3ANPH-149839

Very often, according to your business rules, sorting orders are not “natural”, as in numeric or alpha-numeric sorting. Some business rule probably specified that GE526OTACCD3ANPH-149839 needs to appear last in the list. Or the user might have re-arranged product names on their screen with drag and drop, producing a new sort order. We could discuss, of course, whether such sorting should be performed in the UI layer or not, but let's assume that the business case, the performance requirements or the general architecture required this sorting to be done in the database. How to do it? Through…

Sort Indirection

In fact, you don't want to sort by the product name, but by a pre-defined enumeration of such names. In other words, you want a function like this:

CE367FAACDHCANPH-151556 -> 1
CE367FAACEX9ANPH-153877 -> 2
NI564FAACJSFANPH-162605 -> 3
GE526OTACCD3ANPH-149839 -> 4

With plain SQL, there are many ways to do the above.
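As an aside, if sorting in the application layer were acceptable after all, the same name-to-rank indirection could be expressed in plain Java with a Comparator over a rank map. This is a sketch of ours, not from the original article; the class name and sample data are illustrative:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SortIndirection {
    public static void main(String[] args) {
        // The pre-defined enumeration: name -> rank
        final Map<String, Integer> rank = new HashMap<>();
        rank.put("CE367FAACDHCANPH-151556", 1);
        rank.put("CE367FAACEX9ANPH-153877", 2);
        rank.put("NI564FAACJSFANPH-162605", 3);
        rank.put("GE526OTACCD3ANPH-149839", 4);

        List<String> names = new ArrayList<>(Arrays.asList(
            "GE526OTACCD3ANPH-149839",
            "CE367FAACDHCANPH-151556",
            "NI564FAACJSFANPH-162605",
            "CE367FAACEX9ANPH-153877"));

        // Sort by the rank of each name, not by the name itself
        names.sort(Comparator.comparing(rank::get));

        // Prints the four names in the desired 1..4 order
        names.forEach(System.out::println);
    }
}
```

In the database, though, the indirection has to go into the query itself.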
Here are two of them (also seen in my Stack Overflow answer):

By using a CASE expression

You can tell the database the explicit sort indirection easily, using a CASE expression in your ORDER BY clause:

SELECT name
FROM product
WHERE name IN ('CE367FAACDHCANPH-151556',
               'CE367FAACEX9ANPH-153877',
               'NI564FAACJSFANPH-162605',
               'GE526OTACCD3ANPH-149839')
ORDER BY
  CASE WHEN name = 'CE367FAACDHCANPH-151556' THEN 1
       WHEN name = 'CE367FAACEX9ANPH-153877' THEN 2
       WHEN name = 'NI564FAACJSFANPH-162605' THEN 3
       WHEN name = 'GE526OTACCD3ANPH-149839' THEN 4
  END

Note that I've used the CASE WHEN predicate THEN value END syntax, because this is implemented in all SQL dialects. Alternatively (if you're not using Apache Derby), you could also save some characters when typing, writing:

ORDER BY
  CASE name
    WHEN 'CE367FAACDHCANPH-151556' THEN 1
    WHEN 'CE367FAACEX9ANPH-153877' THEN 2
    WHEN 'NI564FAACJSFANPH-162605' THEN 3
    WHEN 'GE526OTACCD3ANPH-149839' THEN 4
  END

Of course, this requires repeating the same values in the predicate and in the sort indirection. This is why, in some cases, you might be more lucky…

By using INNER JOIN

In the following example, the predicate and the sort indirection are taken care of in a simple derived table that is INNER JOIN'ed to the original query:

SELECT product.name
FROM product
JOIN (
  VALUES('CE367FAACDHCANPH-151556', 1),
        ('CE367FAACEX9ANPH-153877', 2),
        ('NI564FAACJSFANPH-162605', 3),
        ('GE526OTACCD3ANPH-149839', 4)
) AS sort (name, sort)
ON product.name = sort.name
ORDER BY sort.sort

The above example uses PostgreSQL syntax, but you might be able to implement the same in a different way in your database.

Using jOOQ's sort indirection API

Sort indirection is a bit tedious to write out, which is why jOOQ has a special syntax for this kind of use-case, which is also documented in the manual.
Any of the following statements perform the same as the above query:

// jOOQ generates 1, 2, 3, 4 as values in the
// generated CASE expression
DSL.using(configuration)
   .select(PRODUCT.NAME)
   .from(PRODUCT)
   .where(NAME.in(
       "CE367FAACDHCANPH-151556",
       "CE367FAACEX9ANPH-153877",
       "NI564FAACJSFANPH-162605",
       "GE526OTACCD3ANPH-149839"
   ))
   .orderBy(PRODUCT.NAME.sortAsc(
       "CE367FAACDHCANPH-151556",
       "CE367FAACEX9ANPH-153877",
       "NI564FAACJSFANPH-162605",
       "GE526OTACCD3ANPH-149839"
   ))
   .fetch();

// You can choose your own indirection values to
// be generated in the CASE expression
.orderBy(PRODUCT.NAME.sort(
    new HashMap<String, Integer>() {{
        put("CE367FAACDHCANPH-151556", 2);
        put("CE367FAACEX9ANPH-153877", 3);
        put("NI564FAACJSFANPH-162605", 5);
        put("GE526OTACCD3ANPH-149839", 8);
    }}
))

Conclusion

Sort indirection is a nice trick to have up your sleeve every now and then. Never forget that you can put almost arbitrary column expressions in your SQL statement's ORDER BY clause. Use them!

Reference: How to Implement Sort Indirection in SQL from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
