RateLimiter – discovering Google Guava

RateLimiter class was recently added to Guava libraries (since 13.0) and it is already among my favourite tools. Let's start with a simple example. Say we have a long-running process that needs to broadcast its progress to a supplied listener:

def longRunning(listener: Listener) {
  var processed = 0
  for(item <- items) {
    //..do work...
    processed += 1
    listener.progressChanged(100.0 * processed / items.size)
  }
}

trait Listener {
  def progressChanged(percentProgress: Double)
}

Please forgive me the imperative style of this Scala code, but that's not the point. The problem I want to highlight becomes obvious once we start our application with some concrete listener:

class ConsoleListener extends Listener {
  def progressChanged(percentProgress: Double) {
    println("Progress: " + percentProgress)
  }
}

longRunning(new ConsoleListener)

Imagine that longRunning() processes millions of items but each iteration takes just a split second. The amount of logging messages is insane, not to mention that console output probably takes much more time than the processing itself. You've probably faced such a problem several times and have a simple workaround:

if(processed % 100 == 0) {
  listener.progressChanged(100.0 * processed / items.size)
}

There, I Fixed It! We only report progress every 100th iteration. However, this approach has several drawbacks:

- code is polluted with unrelated logic
- there is no guarantee that every 100th iteration is slow enough...
- ...or maybe it's still too slow?

What we really want is to limit the frequency of progress updates (say: two times per second). OK, going deeper into the rabbit hole:

def longRunning(listener: Listener) {
  var processed = 0
  var lastUpdateTimestamp = 0L
  for(item <- items) {
    //..do work...
    processed += 1
    if(System.currentTimeMillis() - lastUpdateTimestamp > 500) {
      listener.progressChanged(100.0 * processed / items.size)
      lastUpdateTimestamp = System.currentTimeMillis()
    }
  }
}

Do you also have a feeling that we are going in the wrong direction? Ladies and gentlemen, I give you RateLimiter:

var processed = 0
val limiter = RateLimiter.create(2)
for (item <- items) {
  //..do work...
  processed += 1
  if (limiter.tryAcquire()) {
    listener.progressChanged(100.0 * processed / items.size)
  }
}

Getting better? If the API is not clear: we first create a RateLimiter with 2 permits per second. This means we can acquire at most two permits during any one second; if we try more often, tryAcquire() returns false (or the thread blocks if acquire() is used instead; see note 1 below). So the code above guarantees that the listener won't be called more than two times per second. As a bonus, if you want to completely remove the unrelated throttling code from the business logic, the decorator pattern comes to the rescue. First let's create a listener that wraps another (concrete) listener and delegates to it only at a given rate:

class RateLimitedListener(target: Listener) extends Listener {
  val limiter = RateLimiter.create(2)
  def progressChanged(percentProgress: Double) {
    if (limiter.tryAcquire()) {
      target.progressChanged(percentProgress)
    }
  }
}

What's best about the decorator pattern is that neither the code using the listener nor the concrete implementation is aware of the decorator. The client code also became much simpler (essentially we are back to the original):

def longRunning(listener: Listener) {
  var processed = 0
  for (item <- items) {
    //..do work...
    processed += 1
    listener.progressChanged(100.0 * processed / items.size)
  }
}

longRunning(new RateLimitedListener(new ConsoleListener))

But we've only scratched the surface of where RateLimiter can be used! Say we want to avoid the aforementioned denial-of-service attack or slow down automated clients of our API.
It's very simple with RateLimiter and a servlet filter:

@WebFilter(urlPatterns = Array("/*"))
class RateLimiterFilter extends Filter {
  val limiter = RateLimiter.create(100)
  def init(filterConfig: FilterConfig) {}
  def doFilter(request: ServletRequest, response: ServletResponse, chain: FilterChain) {
    if(limiter.tryAcquire()) {
      chain.doFilter(request, response)
    } else {
      response.asInstanceOf[HttpServletResponse].sendError(SC_TOO_MANY_REQUESTS)
    }
  }
  def destroy() {}
}

Another self-descriptive sample. This time we limit our API to handle no more than 100 requests per second (of course RateLimiter is thread safe). All HTTP requests that come through our filter are subject to rate limiting. If we cannot handle an incoming request, we send the HTTP 429 Too Many Requests error code (not yet available in the servlet spec). Alternatively you may wish to block the client for a while instead of eagerly rejecting it. That's fairly straightforward as well:

def doFilter(request: ServletRequest, response: ServletResponse, chain: FilterChain) {
  limiter.acquire()
  chain.doFilter(request, response)
}

limiter.acquire() will block as long as needed to keep the desired limit of 100 requests per second. Yet another alternative is to use tryAcquire() with a timeout (blocking up to a given amount of time). The blocking approach is better if you want to avoid sending errors to the client. However, under high load it's easy to imagine almost all HTTP threads blocked waiting for RateLimiter, eventually causing the servlet container to reject connections. So dropping clients can only be partially avoided. This filter is a good starting point for building more sophisticated solutions; a map of rate limiters keyed by IP or user name is a good example. What we haven't covered yet is acquiring more than one permit at a time. It turns out RateLimiter can also be used e.g. to limit network bandwidth or the amount of data being sent/received.
Imagine you create a search servlet and you want to ensure that no more than 1000 results are returned per second. In each request the user decides how many results she wants to receive per response: it can be 500 requests each containing 2 results or 1 huge request asking for 1000 results at once, but never more than 1000 results within a second on average. Users are free to use their quota as they wish:

@WebServlet(urlPatterns = Array("/search"))
class SearchServlet extends HttpServlet {
  val limiter = RateLimiter.create(1000)
  override def doGet(req: HttpServletRequest, resp: HttpServletResponse) {
    val resultsCount = req.getParameter("results").toInt
    limiter.acquire(resultsCount)
    //process and return results...
  }
}

By default acquire() takes one permit per invocation. A non-blocking servlet would call limiter.tryAcquire(resultsCount) and check the result; you know that by now. If you are interested in rate limiting of network traffic, don't forget to check out my Tenfold increase in server throughput with Servlet 3.0 asynchronous processing. Due to its blocking nature, RateLimiter is not very well suited to writing scalable upload/download servers with throttling. The last example I would like to share with you is throttling client code to avoid overloading the server we are talking to. Imagine a batch import/export process that calls some server thousands of times exchanging data. If we don't throttle the client and there is no rate limiting on the server side, the server might get overloaded and crash. RateLimiter is once again very helpful:

val limiter = RateLimiter.create(20)

def longRunning() {
  for (item <- items) {
    limiter.acquire()
    server.sync(item)
  }
}

This sample is very similar to the first one. The difference is that this time we block instead of discarding when permits are missing. Thanks to blocking, the external call to server.sync(item) won't overload the third-party server, calling it at most 20 times per second.
Of course if you have several threads interacting with the server, they can all share the same RateLimiter. To wrap up:

- RateLimiter allows you to perform certain actions no more often than at a given frequency
- It's a small and lightweight class (no threads involved!)
- You can create thousands of rate limiters (per client?) or share one among several threads
- We haven't covered the warm-up functionality: if a RateLimiter was completely idle for a long time, it will gradually increase the allowed frequency over a configured time up to the configured maximum value, instead of allowing the maximum frequency from the very beginning

I have a feeling that we'll go back to this class soon. I hope you'll find it useful in your next project!

1 – I am using Guava 14.0-SNAPSHOT. If 14.0 stable is not available by the time you are reading this, you must use the more verbose tryAcquire(1, 0, TimeUnit.MICROSECONDS) instead of tryAcquire() and acquire(1) instead of acquire().

Happy coding and don't forget to share! Reference: RateLimiter – discovering Google Guava from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.
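The wrap-up above notes that RateLimiter needs no threads of its own: it is pure bookkeeping about when the next permit becomes free. A minimal, JDK-only sketch of that idea (this is not Guava's actual implementation, which uses a smoother token-bucket algorithm with warm-up support; the class name and details here are mine, for illustration of the tryAcquire() contract only):

```java
import java.util.concurrent.atomic.AtomicLong;

/** Illustrative non-blocking rate limiter: grants up to permitsPerSecond permits/second. */
public class SimpleRateLimiter {
    private final long intervalNanos;      // nanoseconds between consecutive permits
    private final AtomicLong nextFreeSlot; // earliest time the next permit may be granted

    public SimpleRateLimiter(double permitsPerSecond) {
        this.intervalNanos = (long) (1_000_000_000L / permitsPerSecond);
        this.nextFreeSlot = new AtomicLong(System.nanoTime());
    }

    /** Returns true if a permit is available right now, false otherwise (never blocks). */
    public boolean tryAcquire() {
        while (true) {
            long slot = nextFreeSlot.get();
            long now = System.nanoTime();
            if (now < slot) {
                return false; // too early: the configured rate would be exceeded
            }
            // Claim this slot atomically; retry on contention from other threads.
            if (nextFreeSlot.compareAndSet(slot, Math.max(slot + intervalNanos, now))) {
                return true;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleRateLimiter limiter = new SimpleRateLimiter(2); // 2 permits per second
        int granted = 0;
        for (int i = 0; i < 1000; i++) { // hammer the limiter for roughly a second
            if (limiter.tryAcquire()) granted++;
            Thread.sleep(1);
        }
        // Despite ~1000 attempts, only a low single-digit number of permits is granted.
        System.out.println("granted=" + granted);
    }
}
```

Like Guava's class, it is thread safe and thread-free: all state is a single atomic timestamp, which is why creating thousands of such limiters (one per client, say) is cheap.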

JavaOne 2012: The Road to Lambda

One of the presentations I most eagerly anticipated for JavaOne 2012 was Brian Goetz's 'The Road to Lambda.' The taste of Lambda at last night's Technical Keynote only added to the anticipation. Held in the Hilton Plaza A/B, this was a short walk from the previous presentation that I attended in Golden Gate A/B/C. I had expected the relatively large Plaza A/B to be packed (standing-room only), but there were far more empty seats than I had expected.

Goetz started by talking about Java 8 being 'in the home stretch,' but not yet released or ready for delivery. He said he expects Java 8 and Lambda to be available about this time next year. Goetz said that you 'can write any program that's worth it to write using Java,' but that Java 8 will make it much easier to do so. His slide 'Modernizing Java' talked about Java SE 8 'modernizing the Java language' and 'modernizing the Java libraries.' His last bullet on the slide stated, 'Together, perhaps the biggest upgrade ever to the Java programming model.' This has been my feeling too, and that's part of why I was surprised this presentation was not better attended. Goetz stated that a lambda expression is 'an anonymous method.' It has everything that a method has (argument list, return type, and body) except for the name. It allows you to 'treat code as data.' A method reference references an existing method. Goetz reiterated the huge fundamental shift in writing and using libraries that will result from the addition of lambda expressions. Goetz pointed out that most languages did not have closures when Java started in 1995, but that most languages other than Java do have closures today. He then summarized some of the history of closures in Java in the slide titled 'Closures for Java – a long and winding road.' He referenced Odersky's and Wadler's 'Pizza' (1997), Java 1.1's inner classes (1997), and the 2006–2008 'vigorous community debate about closures' (including BGGA and CICE).
Project Lambda was formed in December 2009 and the associated JSR 335 was filed in November 2010. It is 'fairly close to completion' today. Goetz stated that the for loop is 'over-specified for today's hardware' while describing the 'accidental complexity' associated with the 'external iteration' that we frequently use today. I agreed with the point he made that the 'foreach loop hides complex interaction between client and library.' The goal of lambda expressions is to allow the 'how' to be moved from the client to the library. Goetz emphasized that this is more than a syntactic change because the library is in control with lambda expressions and it is an internal iteration. Goetz stated, 'The client handles the 'what' and the library handles the 'how' and that is a good thing.' He added that lambda expressions have a profound effect on how we code and especially on how we develop libraries. Goetz discussed the new forEach(Block) method added to collections using the new default implementation mechanism for Java interfaces. Goetz differentiated that Java has always had multiple inheritance of types (you can implement multiple interfaces), is now (Java 8) going to have multiple inheritance of behaviors (default method implementations available for interfaces), but still won't have multiple inheritance of state (the last of which he describes as the most dangerous). Goetz had a slide dedicated to explaining why 'diamonds are easy' when you take data (state) out of multiple inheritance. Goetz had a nice slide summarizing 'Default Methods – Inheritance Rules.' This slide featured three rules. He pointed out that 'if default cannot be resolved via the rules, subclass must implement it.' Goetz pointed out that an interface could provide a 'weak' default implementation and subclasses can provide better implementations.
Another advantage of default methods on interfaces is that the default implementation can throw an exception (such as UnsupportedOperationException) for optional methods so that subclasses not implementing the optional behavior don’t need to do anything else. Goetz also showed how lambda expressions enable the addition of reverse() and compose() methods to Comparator. Goetz showed several examples of code that illustrated that lambda expressions allow for ‘cleaner’ and ‘more natural’ representations. In his words, ‘the code reads like the problem statement’ thanks to the composability of the lambda expression-powered operations. There is also ‘no mutable state in the client.’ One of Goetz’s slides had a quote I plan to use in the future: ‘Laziness can be more efficient.’ The context of this is that laziness can be more efficient if you’re not going to use all of the results because you can stop looking once a match is determined. Stream operations are either intermediate (lazy) or terminal (naturally eager). The Stream is an abstraction introduced to allow for addition of bulk operations and ‘represents a stream of values.’ Goetz’s bullet cautioned that a Stream is ‘not a data structure’ and ‘doesn’t store the values.’ The aim here was to avoid noise in setting things up and try to be more ‘fluent.’ Goetz stated that ‘one of Java’s friends has always been libraries.’ He talked about how lambda expressions enable greater parallelism in the Java libraries. Goetz stated that fork-join is powerful but not necessarily easy to use. Goetz emphasized that ‘Writing serial code is easy; writing parallel code is a pain in the ass.’ Lambda expressions will still require parallelism to be explicit, but should be unobtrusive with lambda expressions and their impact on the libraries. 
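Default methods and the Comparator combinators described above can be seen in a small sketch using Java 8 as it eventually shipped (the Greeter interface is my own illustrative example, not from the talk; note that the released Comparator API spells the combinators reversed() and thenComparing()):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class DefaultMethodDemo {
    // An interface may now carry behavior: greet() has a default body that
    // implementing classes inherit without writing any code.
    interface Greeter {
        String name();
        default String greet() {        // multiple inheritance of behavior...
            return "Hello, " + name();  // ...but not of state: no fields in interfaces
        }
    }

    public static void main(String[] args) {
        // A lambda implements the single abstract method name().
        Greeter g = () -> "JavaOne";
        System.out.println(g.greet()); // Hello, JavaOne

        // Comparator gained composing methods in Java 8, in the spirit of the
        // reverse()/compose() additions mentioned in the talk.
        List<String> words = Arrays.asList("stream", "lambda", "jdk");
        words.sort(Comparator.comparing(String::length).reversed());
        System.out.println(words);     // [stream, lambda, jdk]
    }
}
```

Because greet() has a default body, a lambda (or any implementing class) only needs to supply name(); and because the sort is stable, the two six-letter words keep their original relative order.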
To emphasize Project Lambda's effect on parallelism in the libraries, Goetz showed a painful slide with how a parallel sum over collections would be done today with fork-join and then another slide showing the much simpler use of lambda expressions. The point was made: much less code with lambda expressions, making the business logic a much larger percentage of the overall code. Goetz introduced Spliterator as the 'parallel analogue of Iterator.' The Spliterator's prescribed behaviors are available to any object that knows how to split itself (Spliterable). The slide 'Lambdas Enable Better APIs' drove home the powerful and welcome effect of lambda expressions on the standard Java APIs. He emphasized that the 'key effect on APIs is more composability.' Goetz stated that we typically prefer evolving the programming model via libraries rather than language syntax for numerous reasons such as less cost, less risk, etc. He summarized his presentation by stating that times have changed and it is no longer a radical idea for Java to support closures. One of the attendees asked why lambda expression method support is on the collections rather than on iterators. Goetz said that although C# did approach it from the iterator angle, his team found it less confusing for developers to have the methods on the collections instead of on the iterators. In response to another question, Goetz stated that reflection on lambda expressions is not yet available due to its complexity. In response to another question, Goetz stated that lambda expression support is built with invokedynamic and method handles. This is part of an effort to make lambda expressions 'fun to program' and 'fast.' Another question led to a really interesting response from Goetz, in which he explained that the availability of internal iteration within a collection itself means the iteration complexity will be encountered by far fewer people (library developers rather than end-user developers).
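The fork-join-versus-lambda contrast from those slides ends up looking like this in the stream API that Java 8 eventually shipped (a sketch of the idea, not the exact slide code from the talk):

```java
import java.util.stream.LongStream;

public class ParallelSumDemo {
    public static void main(String[] args) {
        // Serial: the library iterates internally; no explicit loop counter.
        long serial = LongStream.rangeClosed(1, 1_000_000).sum();

        // Parallel: the same pipeline with one extra call. The splitting and
        // joining (fork-join under the hood) is the library's problem, not ours.
        long parallel = LongStream.rangeClosed(1, 1_000_000).parallel().sum();

        System.out.println(serial == parallel); // true
        System.out.println(serial);             // 500000500000
    }
}
```

This is exactly the shift described above: parallelism stays explicit (the parallel() call), but the dozens of lines of RecursiveTask boilerplate disappear into the library.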
Goetz encouraged attendees to run the Java 8 drops currently available to help determine whether lambda expressions are being handled correctly. Goetz remarked, 'The most valuable contribution we get from the community are people who say, 'I tried it out and found this bug.'' Goetz started this presentation by stating that it was one in a long line of presentations at previous JavaOne conferences and other conferences on the state of Lambda. What was different about this one, however, is that Project Lambda is 'almost there' and, with that in mind, it seems that the syntax and concepts are largely in place. This obvious solidification of the APIs and syntax is welcome, and this presentation met my very high expectations. Reference: JavaOne 2012: The Road to Lambda from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Enhancing Spring Test Framework with beforeClass and afterClass setup

How to allow instance methods to run as JUnit BeforeClass behavior: JUnit lets you set up methods at the class level, run once before and once after all test method invocations. However, by design it restricts these to static methods annotated with @BeforeClass and @AfterClass. For example, this simple demo shows the typical JUnit setup:

package deng.junitdemo;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class DemoTest {

  @Test
  public void testOne() {
    System.out.println("Normal test method #1.");
  }

  @Test
  public void testTwo() {
    System.out.println("Normal test method #2.");
  }

  @BeforeClass
  public static void beforeClassSetup() {
    System.out.println("A static method setup before class.");
  }

  @AfterClass
  public static void afterClassSetup() {
    System.out.println("A static method setup after class.");
  }
}

And the above should produce the following output:

A static method setup before class.
Normal test method #1.
Normal test method #2.
A static method setup after class.

This usage is fine most of the time, but there are times you want to use non-static methods to set up the test. I will show you a more detailed use case later, but for now, let's see how we can solve this naughty problem with JUnit first. We can solve it by making the test implement a listener that provides the before and after callbacks, and we will need to dig into JUnit to detect this listener and invoke our methods.
This is the solution I came up with:

package deng.junitdemo;

import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(InstanceTestClassRunner.class)
public class Demo2Test implements InstanceTestClassListener {

  @Test
  public void testOne() {
    System.out.println("Normal test method #1");
  }

  @Test
  public void testTwo() {
    System.out.println("Normal test method #2");
  }

  @Override
  public void beforeClassSetup() {
    System.out.println("An instance method setup before class.");
  }

  @Override
  public void afterClassSetup() {
    System.out.println("An instance method setup after class.");
  }
}

As stated above, our listener is a simple contract:

package deng.junitdemo;

public interface InstanceTestClassListener {
  void beforeClassSetup();
  void afterClassSetup();
}

Our next task is to provide the JUnit runner implementation that will trigger the setup methods:

package deng.junitdemo;

import org.junit.runner.notification.RunNotifier;
import org.junit.runners.BlockJUnit4ClassRunner;
import org.junit.runners.model.InitializationError;

public class InstanceTestClassRunner extends BlockJUnit4ClassRunner {

  private InstanceTestClassListener instanceSetupListener;

  public InstanceTestClassRunner(Class<?> klass) throws InitializationError {
    super(klass);
  }

  @Override
  protected Object createTest() throws Exception {
    Object test = super.createTest();
    // Note that JUnit4 will call this createTest() multiple times, once for each
    // test method, so we need to ensure 'beforeClassSetup' is called only once.
    if (test instanceof InstanceTestClassListener && instanceSetupListener == null) {
      instanceSetupListener = (InstanceTestClassListener) test;
      instanceSetupListener.beforeClassSetup();
    }
    return test;
  }

  @Override
  public void run(RunNotifier notifier) {
    super.run(notifier);
    if (instanceSetupListener != null) instanceSetupListener.afterClassSetup();
  }
}

Now we are in business. If we run the above test, it should give us a similar result, but this time we are using instance methods instead!

An instance method setup before class.
Normal test method #1
Normal test method #2
An instance method setup after class.

A concrete use case: working with the Spring Test Framework

Now let me show you a real use case for the above. If you use the Spring Test Framework, you would normally set up a test like this, so that you may have test fixtures injected as member instances:

package deng.junitdemo.spring;

import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;

import java.util.List;

import javax.annotation.Resource;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class SpringDemoTest {

  @Resource(name = "myList")
  private List<String> myList;

  @Test
  public void testMyListInjection() {
    assertThat(myList.size(), is(2));
  }
}

You also need a Spring XML file under the same package for the above to run:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
  <bean id="myList" class="java.util.ArrayList">
    <constructor-arg>
      <list>
        <value>one</value>
        <value>two</value>
      </list>
    </constructor-arg>
  </bean>
</beans>

Pay very close attention to the member instance List<String> myList. When running the JUnit test, that field will be injected by Spring, and it can be used in any test method. However, if you ever want a one-time setup of some code with a reference to a Spring-injected field, then you are out of luck. This is because JUnit's @BeforeClass forces your method to be static, and if you make your field static, Spring injection won't work in your test!
Now, if you are a frequent Spring user, you should know that the Spring Test Framework already provides a way to handle this type of use case. Here is a way to do class-level setup Spring's style:

package deng.junitdemo.spring;

import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;

import java.util.List;

import javax.annotation.Resource;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.TestContext;
import org.springframework.test.context.TestExecutionListeners;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.support.AbstractTestExecutionListener;
import org.springframework.test.context.support.DependencyInjectionTestExecutionListener;

@RunWith(SpringJUnit4ClassRunner.class)
@TestExecutionListeners(listeners = {
    DependencyInjectionTestExecutionListener.class,
    SpringDemo2Test.class })
@ContextConfiguration
public class SpringDemo2Test extends AbstractTestExecutionListener {

  @Resource(name = "myList")
  private List<String> myList;

  @Test
  public void testMyListInjection() {
    assertThat(myList.size(), is(2));
  }

  @Override
  public void afterTestClass(TestContext testContext) {
    List<?> list = testContext.getApplicationContext().getBean("myList", List.class);
    assertThat((String) list.get(0), is("one"));
  }

  @Override
  public void beforeTestClass(TestContext testContext) {
    List<?> list = testContext.getApplicationContext().getBean("myList", List.class);
    assertThat((String) list.get(1), is("two"));
  }
}

As you can see, Spring offers the @TestExecutionListeners annotation that lets you write any listener, and in it you get a reference to the TestContext, whose ApplicationContext gives you access to the injected beans. This works, but I find it not very elegant. It forces you to look up the bean, while your injected field is already available as a field.
But you can't use it unless you go through the TestContext parameter. Now, if we mix in the solution from the beginning, we get a prettier test setup. Let's see it:

package deng.junitdemo.spring;

import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;

import java.util.List;

import javax.annotation.Resource;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;

import deng.junitdemo.InstanceTestClassListener;

@RunWith(SpringInstanceTestClassRunner.class)
@ContextConfiguration
public class SpringDemo3Test implements InstanceTestClassListener {

  @Resource(name = "myList")
  private List<String> myList;

  @Test
  public void testMyListInjection() {
    assertThat(myList.size(), is(2));
  }

  @Override
  public void beforeClassSetup() {
    assertThat((String) myList.get(0), is("one"));
  }

  @Override
  public void afterClassSetup() {
    assertThat((String) myList.get(1), is("two"));
  }
}

JUnit only allows you to use a single Runner, so we must extend Spring's version to insert what we did before:

package deng.junitdemo.spring;

import org.junit.runner.notification.RunNotifier;
import org.junit.runners.model.InitializationError;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import deng.junitdemo.InstanceTestClassListener;

public class SpringInstanceTestClassRunner extends SpringJUnit4ClassRunner {

  private InstanceTestClassListener instanceSetupListener;

  public SpringInstanceTestClassRunner(Class<?> clazz) throws InitializationError {
    super(clazz);
  }

  @Override
  protected Object createTest() throws Exception {
    Object test = super.createTest();
    // Note that JUnit4 will call this createTest() multiple times, once for each
    // test method, so we need to ensure 'beforeClassSetup' is called only once.
    if (test instanceof InstanceTestClassListener && instanceSetupListener == null) {
      instanceSetupListener = (InstanceTestClassListener) test;
      instanceSetupListener.beforeClassSetup();
    }
    return test;
  }

  @Override
  public void run(RunNotifier notifier) {
    super.run(notifier);
    if (instanceSetupListener != null) instanceSetupListener.afterClassSetup();
  }
}

That should do the trick. Running the test gives us this output:

12:58:48 main INFO org.springframework.test.context.support.AbstractContextLoader:139 | Detected default resource location 'classpath:/deng/junitdemo/spring/SpringDemo3Test-context.xml' for test class [deng.junitdemo.spring.SpringDemo3Test].
12:58:48 main INFO org.springframework.test.context.support.DelegatingSmartContextLoader:148 | GenericXmlContextLoader detected default locations for context configuration [ContextConfigurationAttributes@74b23210 declaringClass = 'deng.junitdemo.spring.SpringDemo3Test', locations = '{classpath:/deng/junitdemo/spring/SpringDemo3Test-context.xml}', classes = '{}', inheritLocations = true, contextLoaderClass = 'org.springframework.test.context.ContextLoader'].
12:58:48 main INFO org.springframework.test.context.support.AnnotationConfigContextLoader:150 | Could not detect default configuration classes for test class [deng.junitdemo.spring.SpringDemo3Test]: SpringDemo3Test does not declare any static, non-private, non-final, inner classes annotated with @Configuration.
12:58:48 main INFO org.springframework.test.context.TestContextManager:185 | @TestExecutionListeners is not present for class [class deng.junitdemo.spring.SpringDemo3Test]: using defaults.
12:58:48 main INFO org.springframework.beans.factory.xml.XmlBeanDefinitionReader:315 | Loading XML bean definitions from class path resource [deng/junitdemo/spring/SpringDemo3Test-context.xml]
12:58:48 main INFO org.springframework.context.support.GenericApplicationContext:500 | Refreshing org.springframework.context.support.GenericApplicationContext@44c9d92c: startup date [Sat Sep 29 12:58:48 EDT 2012]; root of context hierarchy
12:58:49 main INFO org.springframework.beans.factory.support.DefaultListableBeanFactory:581 | Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@73c6641: defining beans [myList,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.ConfigurationClassPostProcessor$ImportAwareBeanPostProcessor#0]; root of factory hierarchy
12:58:49 Thread-1 INFO org.springframework.context.support.GenericApplicationContext:1025 | Closing org.springframework.context.support.GenericApplicationContext@44c9d92c: startup date [Sat Sep 29 12:58:48 EDT 2012]; root of context hierarchy
12:58:49 Thread-1 INFO org.springframework.beans.factory.support.DefaultListableBeanFactory:433 | Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@73c6641: defining beans [myList,org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.
context.annotation.ConfigurationClassPostProcessor$ImportAwareBeanPostProcessor#0]; root of factory hierarchy

Obviously the output shows nothing interesting here, but the test should run with all assertions passing. The point is that we now have a more elegant way of invoking a before and after test setup at the class level, and the setup methods can be instance methods, which allows Spring injection.

Download the demo code: you may get the above demo code in a working Maven project from my sandbox. Reference: Enhancing Spring Test Framework with beforeClass and afterClass setup from our JCG partner Zemian Deng at the A Programmer's Journal blog.
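Stripped of JUnit and Spring, the once-per-class trick in the custom runner is just bookkeeping: createTest() runs once per test method, but the before hook fires only for the first instance that implements the listener interface, and the after hook fires once when the whole run ends. A JDK-only sketch of that control flow (the mini "runner" below is my own stand-in for illustration, not real JUnit internals):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class MiniRunnerDemo {
    interface InstanceTestClassListener {
        void beforeClassSetup();
        void afterClassSetup();
    }

    // Mimics the custom runner: a fresh test instance per test method, but
    // beforeClassSetup() fires only for the first instance and
    // afterClassSetup() fires once after the whole run.
    static List<String> run(Supplier<Object> factory, int testMethods) {
        List<String> log = new ArrayList<>();
        InstanceTestClassListener setupListener = null;
        for (int i = 0; i < testMethods; i++) {
            Object test = factory.get(); // like createTest(), called per method
            if (test instanceof InstanceTestClassListener && setupListener == null) {
                setupListener = (InstanceTestClassListener) test;
                setupListener.beforeClassSetup();
                log.add("beforeClassSetup");
            }
            log.add("test#" + (i + 1));
        }
        if (setupListener != null) { // like the end of run(RunNotifier)
            setupListener.afterClassSetup();
            log.add("afterClassSetup");
        }
        return log;
    }

    public static void main(String[] args) {
        List<String> log = run(() -> new InstanceTestClassListener() {
            public void beforeClassSetup() {}
            public void afterClassSetup() {}
        }, 2);
        System.out.println(log); // [beforeClassSetup, test#1, test#2, afterClassSetup]
    }
}
```

The null check on setupListener is the whole trick: it is what keeps the instance-level hook from firing once per test method.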

Spring 3.1: Caching and EhCache

If you look around the web for examples of using Spring 3.1's built-in caching then you'll usually bump into Spring's SimpleCacheManager, which the guys at Spring say is 'useful for testing or simple caching declarations'. I actually prefer to think of SimpleCacheManager as lightweight rather than simple; useful in those situations where you want a small in-memory cache on a per-JVM basis. If the guys at Spring were running a supermarket then SimpleCacheManager would be in their own-brand 'basics' product range. If, on the other hand, you need a heavy-duty cache, one that's scalable, persistent and distributed, then Spring also comes with a built-in EhCache wrapper. The good news is that swapping between Spring's caching implementations is easy. In theory it's all a matter of configuration and, to prove the theory correct, I took the sample code from my Caching and @Cacheable blog and ran it using an EhCache implementation. The configuration steps are similar to those described in my last blog, Caching and Config, in that you still need to specify:

<cache:annotation-driven />

...in your Spring config file to switch caching on. You also need to define a bean with an id of cacheManager, only this time you reference Spring's EhCacheCacheManager class instead of SimpleCacheManager:

<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager"
      p:cacheManager-ref="ehcache"/>

The example above demonstrates an EhCacheCacheManager configuration. Notice that it references a second bean with an id of 'ehcache'. This is configured as follows:

<bean id="ehcache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
      p:configLocation="ehcache.xml" p:shared="true"/>

The 'ehcache' bean has two properties: configLocation and shared. 'configLocation' is an optional attribute used to specify the location of an ehcache configuration file.
In my test code I used the following example file: <?xml version='1.0' encoding='UTF-8'?> <ehcache xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:noNamespaceSchemaLocation='http://ehcache.org/ehcache.xsd'> <defaultCache eternal='true' maxElementsInMemory='100' overflowToDisk='false' /> <cache name='employee' maxElementsInMemory='10000' eternal='true' overflowToDisk='false' /> </ehcache> …which creates two caches: a default cache and one named “employee”. If this file is missing then the EhCacheManagerFactoryBean simply picks up a default ehcache config file: ehcache-failsafe.xml, which is located in ehcache’s ehcache-core jar file. The other EhCacheManagerFactoryBean attribute is ‘ shared‘. This is supposed to be optional as the documentation states that it defines ‘whether the EHCache CacheManager should be shared (as a singleton at the VM level) or independent (typically local within the application). Default is ‘false’, creating an independent instance.” However, if this is set to false then you’ll get the following exception: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.cache.interceptor.CacheInterceptor#0': Cannot resolve reference to bean 'cacheManager' while setting bean property 'cacheManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cacheManager' defined in class path resource [ehcache-example.xml]: Cannot resolve reference to bean 'ehcache' while setting bean property 'cacheManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ehcache' defined in class path resource [ehcache-example.xml]: Invocation of init method failed; nested exception is net.sf.ehcache.CacheException: Another unnamed CacheManager already exists in the same VM. Please provide unique names for each CacheManager in the config or do one of following: 1. 
Use one of the CacheManager.create() static factory methods to reuse same CacheManager with same name or create one if necessary 2. Shutdown the earlier cacheManager before creating new one with same name. The source of the existing CacheManager is: InputStreamConfigurationSource [stream=java.io.BufferedInputStream@424c414] at org.springframework.beans.factory.support.BeanDefinitionValueResolver. resolveReference(BeanDefinitionValueResolver.java:328) at org.springframework.beans.factory.support.BeanDefinitionValueResolver. resolveValueIfNecessary(BeanDefinitionValueResolver.java:106) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1360) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. populateBean(AbstractAutowireCapableBeanFactory.java:1118) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. doCreateBean(AbstractAutowireCapableBeanFactory.java:517) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. createBean(AbstractAutowireCapableBeanFactory.java:456) ... stack trace shortened for clarity at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner. java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner. run(RemoteTestRunner.java:390) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner. 
main(RemoteTestRunner.java:197) Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'cacheManager' defined in class path resource [ehcache-example.xml]: Cannot resolve reference to bean 'ehcache' while setting bean property 'cacheManager'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ehcache' defined in class path resource [ehcache-example.xml]: Invocation of init method failed; nested exception is net.sf.ehcache.CacheException: Another unnamed CacheManager already exists in the same VM. Please provide unique names for each CacheManager in the config or do one of following: 1. Use one of the CacheManager.create() static factory methods to reuse same CacheManager with same name or create one if necessary 2. Shutdown the earlier cacheManager before creating new one with same name. The source of the existing CacheManager is: InputStreamConfigurationSource [stream=java.io.BufferedInputStream@424c414] at org.springframework.beans.factory.support.BeanDefinitionValueResolver. resolveReference(BeanDefinitionValueResolver.java:328) at org.springframework.beans.factory.support.BeanDefinitionValueResolver. resolveValueIfNecessary(BeanDefinitionValueResolver.java:106) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. applyPropertyValues(AbstractAutowireCapableBeanFactory.java:1360) ... stack trace shortened for clarity at org.springframework.beans.factory.support.AbstractBeanFactory. getBean(AbstractBeanFactory.java:193) at org.springframework.beans.factory.support.BeanDefinitionValueResolver. resolveReference(BeanDefinitionValueResolver.java:322) ... 
38 more Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'ehcache' defined in class path resource [ehcache-example.xml]: Invocation of init method failed; nested exception is net.sf.ehcache.CacheException: Another unnamed CacheManager already exists in the same VM. Please provide unique names for each CacheManager in the config or do one of following: 1. Use one of the CacheManager.create() static factory methods to reuse same CacheManager with same name or create one if necessary 2. Shutdown the earlier cacheManager before creating new one with same name. The source of the existing CacheManager is: InputStreamConfigurationSource [stream=java.io.BufferedInputStream@424c414] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. initializeBean(AbstractAutowireCapableBeanFactory.java:1455) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. doCreateBean(AbstractAutowireCapableBeanFactory.java:519) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. createBean(AbstractAutowireCapableBeanFactory.java:456) at org.springframework.beans.factory.support.AbstractBeanFactory$1 .getObject(AbstractBeanFactory.java:294) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry. getSingleton(DefaultSingletonBeanRegistry.java:225) at org.springframework.beans.factory.support.AbstractBeanFactory. doGetBean(AbstractBeanFactory.java:291) at org.springframework.beans.factory.support.AbstractBeanFactory. getBean(AbstractBeanFactory.java:193) at org.springframework.beans.factory.support.BeanDefinitionValueResolver. resolveReference(BeanDefinitionValueResolver.java:322) ... 48 more Caused by: net.sf.ehcache.CacheException: Another unnamed CacheManager already exists in the same VM. Please provide unique names for each CacheManager in the config or do one of following: 1. 
Use one of the CacheManager.create() static factory methods to reuse same CacheManager with same name or create one if necessary 2. Shutdown the earlier cacheManager before creating new one with same name. The source of the existing CacheManager is: InputStreamConfigurationSource [stream=java.io.BufferedInputStream@424c414] at net.sf.ehcache.CacheManager.assertNoCacheManagerExistsWithSameName(CacheManager. java:521) at net.sf.ehcache.CacheManager.init(CacheManager.java:371) at net.sf.ehcache.CacheManager. (CacheManager.java:339) at org.springframework.cache.ehcache.EhCacheManagerFactoryBean. afterPropertiesSet(EhCacheManagerFactoryBean.java:104) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1514) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory. initializeBean(AbstractAutowireCapableBeanFactory.java:1452) ... 55 more …when you try to run a bunch of unit tests. I think that this comes down to a simple bug in Spring’s EhCache manager factory, as it’s trying to create multiple CacheManager instances using new() rather than using, as the exception states, ‘one of the CacheManager.create() static factory methods’, which would allow it to reuse the same CacheManager with the same name. Hence, my first JUnit test works okay, but all others fail. The offending line of code is: this.cacheManager = (this.shared ?
CacheManager.create() : new CacheManager()); My full XML config file is listed below for completeness: <?xml version='1.0' encoding='UTF-8'?> <beans xmlns='http://www.springframework.org/schema/beans' xmlns:p='http://www.springframework.org/schema/p' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:cache='http://www.springframework.org/schema/cache' xmlns:context='http://www.springframework.org/schema/context' xsi:schemaLocation='http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd'> <!-- Switch on the Caching --> <cache:annotation-driven /><!-- Do the component scan path --> <context:component-scan base-package='caching' /><bean id='cacheManager' class='org.springframework.cache.ehcache.EhCacheCacheManager' p:cacheManager-ref='ehcache'/> <bean id='ehcache' class='org.springframework.cache.ehcache.EhCacheManagerFactoryBean' p:configLocation='ehcache.xml' p:shared='true'/> </beans> In using ehcache, the only other configuration details to consider are the Maven dependencies. These are pretty straightforward as the Guys at Ehcache have combined all the various ehcache jars into one Maven POM module.
This POM module can be added to your project’s POM file using the XML below: <dependency> <groupId>net.sf.ehcache</groupId> <artifactId>ehcache</artifactId> <version>2.6.0</version> <type>pom</type> <scope>test</scope> </dependency> Finally, the ehcache Jar files are available from both the Maven Central and Sourceforge repositories: <repositories> <repository> <id>sourceforge</id> <url>http://oss.sonatype.org/content/groups/sourceforge/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </repository> </repositories>Happy coding and don’t forget to share! Reference: Spring 3.1: Caching and EhCache from our JCG partner Roger Hughes at the Captain Debug’s Blog blog....
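Stepping back to the shared CacheManager problem above: the behavioral difference between the two branches of that offending ternary can be sketched in a few lines of plain Java. The class below is purely illustrative (it is not ehcache's actual code); it just mimics ehcache's duplicate-instance check to show why the static-factory path is safe to call repeatedly across tests in one VM, while the plain-new path is not.

```java
// Illustrative sketch only -- NOT ehcache's actual implementation. It mimics
// ehcache's "another CacheManager already exists" check to show why the
// shared=true path (a static create() factory) can be called repeatedly,
// while the shared=false path (plain new) fails on the second instantiation.
public class SharedFactoryDemo {

    private static SharedFactoryDemo singleton;
    private static int liveInstances = 0;

    public SharedFactoryDemo() {
        if (liveInstances > 0) {
            // ehcache throws net.sf.ehcache.CacheException here; we approximate it
            throw new IllegalStateException(
                    "Another unnamed CacheManager already exists in the same VM");
        }
        liveInstances++;
    }

    // The shared=true branch: reuse the single per-VM instance
    public static synchronized SharedFactoryDemo create() {
        if (singleton == null) {
            singleton = new SharedFactoryDemo();
        }
        return singleton;
    }

    public static void main(String[] args) {
        // Two unit tests sharing a VM: the factory path is safe...
        System.out.println(create() == create());     // prints: true
        try {
            new SharedFactoryDemo();                  // ...but a second 'new' is not
        } catch (IllegalStateException expected) {
            System.out.println("duplicate rejected"); // prints: duplicate rejected
        }
    }
}
```

This is why setting p:shared='true' works around the problem: it steers EhCacheManagerFactoryBean onto the CacheManager.create() branch.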

JavaOne 2012: How Do Non-Blocking Data Structures Work?

I was a little surprised when I looked at my schedule for today and noted that all of the sessions I have currently planned to see today are in the Hilton. This became a little less surprising when I realized that about half of the JavaOne presentations are in the Hilton and that they seem to be roughly located by track. Tobias Lindaaker‘s (Neo Technology) presentation ‘How Do Atomic Data Structures Work?’ was held in the Hilton’s Golden Gate 3/4/5 conference room area. Lindaaker changed his presentation’s title since he originally submitted the abstract. The abstract’s title (and that listed in the conference materials) was ‘How Do Atomic Data Structures Work?,’ but he has renamed it to ‘How Do Non-Blocking Data Structures Work?’ Lindaaker explained that ‘atomic’ comes from Greek and means ‘undividable.’ He explained that a ‘lock-free data structure’ is ‘a data structure that does not block any threads when performing an operation on the data structure (read or write).’ He stated that one wants to avoid ‘spin-waiting‘ whenever possible. Lindaaker talked about synchronized regions. He said such regions ‘create a serialized path through the code’ and ‘guarantee safe publication.’ He defined ‘safe publication’ as meaning ‘everything written before exiting synchronized [block]‘ and ‘guaranteed to be visible on entry of synchronized [block].’ One of his bullets stated, ‘volatile fields give you safe publication without serialization.’ Lindaaker focused more on the volatile keyword modifier in his ‘volatile fields’ slide. The slide ‘What is a memory barrier?’ provided a simple visual representation of the memory barrier concept. For his slide ‘Atomic updates,’ Lindaaker stated that the easiest way to access an atomic reference is via use of java.util.concurrent.atomic.AtomicReference<V>. Lindaaker provided a physical demonstration using coasters to illustrate the difference between compareAndSet (sets a value only if the current value matches the expected one) and getAndSet
(sets a new value and returns the old value). Lindaaker prefers java.util.concurrent.atomic.AtomicReferenceFieldUpdater<T,V> because of its ‘lower memory overhead’ (‘fewer object headers’) and ‘better memory locality’ (‘no reference indirection’). Lindaaker explained that array-based queues do block (sometimes a benefit when the amount of work needs to be limited due to finite hardware resources), while linked queues do not. Lindaaker used a supermarket queue as an example of the differences. In the linked queue, you always stand behind the same customer in front of you in the queue. In the array-based queue, you always remain in the same position. Bounded queues ‘frequently perform better,’ but will block when full. One of the main themes of this presentation was the idea of learning new ideas and then individually researching them further. Lindaaker recommended that audience members look at the JDK’s code to see some impressive and less impressive code examples. Lindaaker referenced LMAX (London Multi Asset Exchange) Disruptor as an example of a ‘ring buffer’ (‘array with a read mark and a write mark’). He stated that ‘readers contend on the read mark, writers on write mark’ and highlighted the consequence of this: ‘With single reader / single writer, there is no contention.’ The Disruptor page describes Disruptor as a ‘High Performance Inter-Thread Messaging Library.’ Lindaaker stated that java.util.concurrent.ConcurrentHashMap is a good general choice, but is not very exciting for discussion in his presentation. He stated that it ‘scales reasonably well on current commodity hardware’ (fewer than 100 CPUs) with proper tuning. Neo Technology provides a database implementation (Neo4j) that is not relational (graph database). Lindaaker described Neo Technology’s graph-based database offering as, ‘Stores data as nodes and relationships between nodes.’ Don’t forget to share! Reference: JavaOne 2012: How Do Non-Blocking Data Structures Work?
from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
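Lindaaker's coaster demonstration maps directly onto the JDK's AtomicReference API. A minimal sketch of the two operations he contrasted:

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasDemo {
    public static void main(String[] args) {
        AtomicReference<String> coaster = new AtomicReference<>("red");

        // compareAndSet: succeeds only if the current value matches the expected one
        boolean swapped = coaster.compareAndSet("red", "green"); // true; value is now "green"
        boolean failed = coaster.compareAndSet("red", "blue");   // false; "red" is no longer current

        // getAndSet: unconditionally installs the new value and returns the old one
        String previous = coaster.getAndSet("blue");             // "green"; value is now "blue"

        System.out.println(swapped + " " + failed + " " + previous + " " + coaster.get());
        // prints: true false green blue
    }
}
```

compareAndSet is the building block of the lock-free structures the talk described: read the current value, compute a replacement, try to install it, and loop if another thread won the race, so no thread ever blocks.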

Stuff I Learned from Grails Consulting

I don’t do much Grails consulting since I work for the Engineering group, and we have an excellent group of support engineers who usually work directly with clients. I do occasionally teach the 3-day Groovy and Grails course but I’ve only been on two onsite consulting gigs so far, and one was a two-week engagement that ended last week. As is often the case when you teach something or help someone else out, I learned a lot and was reminded of a lot of stuff I’d forgotten about, so I thought it would be good to write some of that down for future reference.

SQL Logging

There are two ways to view SQL output from queries: adding logSql = true in DataSource.groovy, and configuring Log4j loggers. The Log4j approach is a lot more flexible since it doesn’t just dump to stdout, and can be routed to a file or other appender and conveniently enabled and disabled. But it turns out it’s easy to toggle logSql-style SQL console logging at runtime. Get a reference to the sessionFactory bean (e.g. using dependency injection with def sessionFactory) and turn it on with sessionFactory.settings.sqlStatementLogger.logToStdout = true and off with sessionFactory.settings.sqlStatementLogger.logToStdout = false

stacktrace.log

The stacktrace.log file was getting very large and they wanted to configure it to use a rolling file appender. Seemed simple enough, but it took a lot longer than I expected. The trick is to create an appender with the name 'stacktrace'; the Grails logic that parses the Log4j DSL looks for an existing appender and uses it, and only configures the default one if there isn’t one already configured. So here’s one that configures a RollingFileAppender with a maximum of 10 files, each a maximum of 10MB in size, and with the standard layout pattern. In addition it includes logic to determine if it’s deployed in Tomcat so it can write to the Tomcat logs folder, or the target folder if you’re using run-app.
If you’re deploying to a different container, adjust the log directory calculation appropriately.

appenders {
   String logDir = grails.util.Environment.warDeployed ?
         System.getProperty('catalina.home') + '/logs' : 'target'
   rollingFile name: 'stacktrace',
         maximumFileSize: 10 * 1024 * 1024,
         file: "$logDir/stacktrace.log",
         layout: pattern(conversionPattern: '%d [%t] %-5p %c{2} %x - %m%n'),
         maxBackupIndex: 10
}

Note that the file path must be in double quotes so that $logDir is interpolated as a GString.

Dynamic fooId property

In a many-to-one where you have a Foo foo field (or static belongsTo = [foo: Foo] which triggers adding a ‘foo’ field) you can access its foreign key with the dynamic fooId property. This can be used in a few ways. Since references like this are lazy by default, checking if a nullable reference exists using foo != null involves loading the entire instance from the database. But checking fooId != null involves no database access. Other queries or updates that really only need the foreign key will be cheaper using fooId. For example, to set a reference in another instance you would typically use code like this: bar2.foo = bar1.foo bar2.save() But you can use the load method bar2.foo = bar1.fooId ? Foo.load(bar1.fooId) : null bar2.save() and avoid loading the Foo instance just to set its foreign key in the second instance and then discard it. Deleting by id is less expensive too; ordinarily you use get to load an instance and call its delete method, but retrieving the entire instance isn’t needed. You can do this instead: Foo.load(bar.fooId).delete()

DRY constraints

You can use the importFrom method inside a constraints block in a domain class to avoid repeating constraints. You can import all constraints from another domain class: static constraints = { someProperty nullable: true ... importFrom SomeOtherDomainClass } and optionally use the include and/or exclude properties to use a subset: static constraints = { someProperty nullable: true ...
importFrom SomeOtherDomainClass, exclude: ['foo', 'bar'] }

Flush event listener

They were seeing some strange behavior where collections that weren’t explicitly modified were being changed and saved, causing StaleObjectStateExceptions. It wasn’t clear what was triggering this behavior, so I suggested registering a Hibernate FlushEventListener to log the state of the dirty instances and collections during each flush:

package com.burtbeckwith.blog

import org.hibernate.HibernateException
import org.hibernate.collection.PersistentCollection
import org.hibernate.engine.EntityEntry
import org.hibernate.engine.PersistenceContext
import org.hibernate.event.FlushEvent
import org.hibernate.event.FlushEventListener

class LoggingFlushEventListener implements FlushEventListener {

   void onFlush(FlushEvent event) throws HibernateException {
      PersistenceContext pc = event.session.persistenceContext

      pc.entityEntries.each { instance, EntityEntry value ->
         if (instance.dirty) {
            println "Flushing instance $instance"
         }
      }

      pc.collectionEntries.each { PersistentCollection collection, value ->
         if (collection.dirty) {
            println "Flushing collection '$collection.role' $collection"
         }
      }
   }
}

It’s not sufficient in this case to use the standard hibernateEventListeners map (described in the docs here) since that approach adds your listeners to the end of the list, and this listener needs to be at the beginning.
So instead use this code in BootStrap.groovy to register it:

import org.hibernate.event.FlushEventListener
import com.burtbeckwith.blog.LoggingFlushEventListener

class BootStrap {

   def sessionFactory

   def init = { servletContext ->
      def listeners = [new LoggingFlushEventListener()]
      def currentListeners = sessionFactory.eventListeners.flushEventListeners
      if (currentListeners) {
         listeners.addAll(currentListeners as List)
      }
      sessionFactory.eventListeners.flushEventListeners = listeners as FlushEventListener[]
   }
}

“Read only” objects and Sessions

The read method was added to Grails a while back, and it works like get except that it marks the instance as read-only in the Hibernate Session. It’s not really read-only, but if it is modified it won’t be a candidate for auto-flushing using dirty detection. But you can explicitly call save() or delete() and the action will succeed. This can be useful in a lot of ways, and in particular it is more efficient if you won’t be changing the instance since Hibernate will not maintain a copy of the original database data for dirty checking during the flush, so each instance will use about half of the memory that it would otherwise. One limitation of the read method is that it only works for instances loaded individually by id. But there are other approaches that affect multiple instances. One is to make the entire session read-only: session.defaultReadOnly = true Now all loaded instances will default to read-only, for example instances from criteria queries and finders. A convenient way to access the session is the withSession method on an arbitrary domain class: SomeDomainClass.withSession { session -> session.defaultReadOnly = true } It’s rare that an entire session will be read-only though.
You can set the results of an individual criteria query to be read-only with the setReadOnly method: def c = Account.createCriteria() def results = c { between('balance', 500, 1000) eq('branch', 'London') maxResults(10) setReadOnly true } One significant limitation of this technique is that attached collections are not affected by the read-only status of the owning instance (and there doesn’t seem to be a way to configure a collection to ignore changes on a per-instance basis). Read more about this in the Hibernate documentation. Reference: Stuff I Learned Consulting from our JCG partner Burt Beckwith at the An Army of Solipsists blog....

JavaOne 2012: JavaOne Technical Keynote

Mark Reinhold started off the JavaOne 2012 Technical Keynote. He said this year’s edition would be a little different because it would use largely the same example to illustrate various aspects of Java rather than standalone individual coverage of each component of Java. Richard Bair and Jasper Potts of the JavaFX team (and associated with FXExperience) introduced this example application, a schedule builder with presentation and speaker data from this year’s JavaOne. As part of the introduction of the example application, the presenters made extra effort to point out that Oracle is shipping the JVM for MacOS and that OpenJDK is what is being used in the example. They also stated that the example runs on Linux as well. They used Java SE 7 and JavaFX 2 for this application and they talked about the availability of SceneBuilder for building a JavaFX application. They demonstrated the use of SceneBuilder within NetBeans to generate the JavaFX-based login page. Other interesting JavaFX advancements mentioned include the addition of a ComboBox (though there is no Date Picker yet), interoperability with SWT, and the availability of a JavaFX Packager. It was also mentioned that JavaFX was architected and designed from the beginning to allow for the main UI thread to be separate from background threads, allowing it to take advantage of multiple CPUs. Bair showed the relatively verbose code that would be required to implement a JavaFX application to fully take advantage of multiple threads today. Brian Goetz came to the stage to describe how Project Lambda and the changes to the Java language will enable ‘better parallel libraries.’ Goetz said that the easiest way to help developers is to give them better libraries, but the language must sometimes be extended when the limits of the language prevent libraries from being written to fully satisfy the need.
Goetz stated that the goals of inner classes are the same as Project Lambda, but inner classes have ‘a whole lot of other baggage.’ Goetz added that bulk operations on collections may not ‘really be needed, but things are better this way.’ Goetz then showed a simple but highly illustrative example of how Project Lambda changes how we process bulk data changes in a collection. His slide showed how what the J2SE 5 enhanced for loop does today can instead be done with the forEach method (added to all of the collections via the new default method approach for interfaces) and a Groovy-like closure syntax (->). Goetz’s next slide was even more impressive. He showed what appeared to be three operations being performed on a collection as it was iterated. However, he pointed out that these would all be enacted at once on the collection with only a single traversal of that collection. All I could think was, ‘Wow!’ Goetz also had a slide showing off the computeIfAbsent operation on collections. He ended by saying there’s still lots of work to do and citing two URLs for playing with Project Lambda: http://openjdk.java.net/projects/lambda/ and http://jdk8.java.net/lambda/. There was some interesting discussion on the differences between traditional Java environments and embedded environments. Raspberry Pi received multiple and prominent mentions. Reinhold started talking about modularity and Project Jigsaw and showed a ‘little bit of a spaghetti diagram that is way cleaner than where we started, which was a total spaghetti diagram.’ He used this as a starting point for discussing the controversial decision to boot Project Jigsaw from Java 8 to Java 9. Reinhold had a slide focused on things that are in Java 8 such as Project Lambda, Compact Profiles, Type Annotations, Project Nashorn, and the new Date/Time API.
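The three ideas Goetz demonstrated (forEach with a lambda, bulk operations fused into a single traversal, and computeIfAbsent) can be sketched using the API as it eventually shipped in Java 8. The syntax on his slides was still in flux at the time of the talk, so treat this as an approximation rather than a transcription:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class LambdaDemo {
    public static void main(String[] args) {
        List<String> speakers = Arrays.asList("Goetz", "Reinhold", "Bair", "Gupta");

        // The enhanced for loop replaced by forEach, a default method on Iterable
        speakers.forEach(name -> System.out.println(name));

        // Several operations fused into a single traversal of the collection
        List<String> longNames = speakers.stream()
                .filter(name -> name.length() > 5)
                .map(String::toUpperCase)
                .collect(Collectors.toList());
        System.out.println(longNames); // prints: [REINHOLD]

        // computeIfAbsent: create and cache an entry only when the key is missing
        Map<String, List<String>> sessions = new HashMap<>();
        sessions.computeIfAbsent("keynote", key -> new ArrayList<>()).add("Technical Keynote");
        System.out.println(sessions); // prints: {keynote=[Technical Keynote]}
    }
}
```

The "three operations, one traversal" point is visible in the stream pipeline: filter and map are lazy, so the elements are visited once, not once per operation.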
Reinhold added that ‘all this work is being done in OpenJDK’ and that ‘all the specification work is being done in the JCP.’ Arun Gupta had the unenviable task of beginning his presentation at the time the keynote was scheduled to end (7 pm local time). He talked about Java EE and showed a slide titled, ‘Java EE Past, Present, & Future.’ This slide showed how Java EE has added features since the ten specifications of J2EE 1.2 in December 1999. Gupta had another slide talking about ‘Java EE 7 Revised Scope’ and how it increases productivity (via less boilerplate code with richer functionality and more defaults) and adds HTML5 support (WebSocket, JSON, and HTML5 Forms). Another Gupta slide was titled ‘Java EE 7 – Candidate JSRs’ that listed JSRs that are all new to Java EE 7 as well as those being modified. He then focused individual slides on some of them. His ‘Java API for RESTful Web Services 2.0’ slide talked about a standardized approach using a client API. Gupta’s slides showing how this is done today (without libraries) and comparing it to the new client API demonstrated how much simpler this is going to be. Gupta’s coverage of JMS 2.0 included discussion of less verbosity in JMS thanks to annotations and other new features in the Java programming language. He mentioned that the required resource adapter will make it easier to ‘mix and match’ JMS providers in the future. Gupta showed a slide full of small-font code (‘this code is not meant to be readable’) demonstrating sending a message using JMS 1.1. This was followed with a slide showing significantly less (and much clearer) code in JMS 2.0 taking advantage of annotations and resource injection to send a message. Gupta’s coverage of the JSON support to be added to Java EE included the bullet ‘API to parse, generate, transform, query, etc. JSON.’ He then showed some slides with example JSON-formatted data and example code for using builder-style to access the JSON. It felt a lot like Groovy’s JSON handling.
Java API for WebSocket 1.0 will allow annotations to be used to easily work with WebSocket. When covering Bean Validation 1.1, Gupta pointed out that not all newly adopted JSRs are being led by Oracle. He showed using the built-in @NotNull annotation on method parameters, but also showed that one will be able to write custom constraints that can be similarly applied to method arguments. Gupta highlighted miscellaneous improvements to Java EE such as JPA 2.1, EJB 3.2, etc. The majority of these JSRs have early public drafts available. GlassFish 4 is the reference implementation of Java EE 7 and already includes WebSocket, JSON, JMS 2, and more. One of Gupta’s slides was focused on Avatar. The ‘Angry Bids’ example application was demonstrated. It is based on Avatar and runs on GlassFish and uses standard Java EE 7 components. Gupta introduced Project Easel for NetBeans. It was mentioned that NetBeans 7.3 beta would be coming out later this week and will include support for HTML5 as a new project type. The example being shown uses JQuery and CSS. The NetBeans-based example communicated through Google Chrome to WebKit (it also works with the JavaFX-embedded browser), but it is expected to work eventually with any WebKit-based browser or device. The demonstrator showed how his changes to HTML5 code (HTML, JavaScript, and CSS) within NetBeans were updated in the Google Chrome browser. It was pretty impressive and makes me wish I had enough time to have accepted an invitation to provide early testing of NetBeans 7.3. NetBeans is going to be able to generate RESTful clients, support JQuery, and provide a Project Nashorn editor. A similar demo to this one is available at http://netbeans.org/kb/docs/web/html5-gettingstarted-screencast.html. Like the Strategy Keynote, this Technical Keynote was held in the Masonic Auditorium.
One of the interesting trends I noticed in tonight’s keynotes was that at least three different people from three different organizations mentioned looking for skilled Java developers should contact them if interested in job opportunities. Reference: JavaOne 2012: JavaOne Technical Keynote from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

JavaOne 2012: Java Strategy Keynote and IBM Keynote

I had a rough start to JavaOne 2012 similar to that at JavaOne 2010. It took 70 minutes for the people handling the check-in to provide me with a JavaOne badge due to ‘computer and printer technical difficulties.’ Although I’m not the most patient person in the world, the part of this that was even more disappointing than the wait is that I missed being part of the ‘Community Session: For You – By You: Growing the NetBeans Community’ panel at NetBeans Community Day at JavaOne 2012, something I was really looking forward to attending and participating in. I had arrived at Moscone West about 15 minutes before that panel was to begin, but did not end up getting my badge until well after the panel was over. In a disappointed mood, I headed to the Nob Hill Masonic Center (AKA Masonic Auditorium, AKA California Masonic Memorial Temple) on Nob Hill to attend the initial evening’s keynote address.

Java Strategy Keynote

The first announcement was to ‘turn off all electronic devices.’ After that announcement, a video was shown. I was happy that it was short. Hasan Rizvi introduced the theme for JavaOne 2012: ‘Make the Future Java.’ He showed slides indicating the 2012 Scorecard for three areas of Java’s strategy: technical innovation, community involvement, and Oracle leadership. Georges Saab stated that Oracle has made Java available for more new platforms in the past year with JDK 7 than added in the previous ten years. He highlighted JDK 7’s adoption and talked about OpenJDK. One feature of JDK 8 that he highlighted is Project Nashorn, a JavaScript implementation taking advantage of invokedynamic for high performance with high interoperability with Java and the JVM. He announced that Project Nashorn will be contributed to OpenJDK. He stated that IBM, RedHat, and Twitter have already expressed support for Project Nashorn as part of OpenJDK. Dierk König of Canoo and Navis were guest speakers at this Strategy Keynote.
They talked about their use of JavaFX and the Canoo Dolphin project being open sourced. Nandini Ramani talked about JavaFX and stated that it's now available for all major platforms. She also cited the release of NetBeans 7.2 with integrated Scene Builder as part of improved tooling. She also reminded the audience that JavaFX is now bundled with current versions of Java. Ramani announced that JavaFX is now available for Linux ARM. She mentioned that 3D is coming to JavaFX. It was also mentioned in this keynote that they expect JavaFX to be fully open source by the end of the calendar year. AMD's Phil Rogers talked about hardware trends moving from single-core CPUs to multi-core CPUs to GPUs using 'a single piece of silicon and shared memory.' Saab and Rogers stated that Project Sumatra allows the JVM to be modified so that Java developers can take advantage of new features in the hardware with existing Java language skills. The JVM will be able to decide whether to run the Java code on a multi-core CPU or a GPU. Ramani returned to the stage and mentioned two recently announced releases: Java ME Embedded 3.2 and Java Embedded Suite 7.0. Axel Hansmann of Cinterion talked about his company's use of Java ME Embedded. Marc Brule of the Royal Canadian Mint joined Ramani on stage to talk about their use of Java Card: MintChip ('The Evolution of Currency'). Cameron Purdy came to the stage to discuss Java EE. Purdy announced that the earliest releases of the Java EE 7 SDK can be downloaded via GlassFish versions. Purdy also announced that GlassFish 4 already includes significant HTML5 additions mentioned at JavaOne 2011. Purdy pointed out that NoSQL is not standardized yet ('you could call it "No Standard Databases"') and pointed out that JPA already supports MongoDB and Oracle NoSQL with planned support for Cassandra and other NoSQL implementations. Purdy stated that April 2013 is the currently planned timeframe for the release of Java EE 7.
Nicole Otto of Nike joined Purdy on stage and showed a brief video (FuelBand: 'Life is a Sport: Make It Count'). She talked about Java EE being used to track activity data. Purdy had a slide 'Java EE 8 and beyond' that was sub-titled 'Standards-based cloud programming model.' A short film was shown to introduce Dr. Robert Ballard (now I know why we were given the Alien Deep DVD upon entrance to the keynote). Dr. Ballard talked about his discovery of the Titanic and explained how the technology used for that discovery was like tying two tin cans together compared to the technology available today. The most laughter and applause of the night came with his statement that he hoped to find a spaceship in his explorations so that he never has to talk about discovering the Titanic again. Dr. Ballard stated that we should not sell science or engineering but should make it more personal to kids and sell scientists and engineers. He stated that 'the battle for a scientist or engineer is over by the eighth grade.' IBM Keynote: (hardware,software)–>{IBM.java.patterns} We moved, without a break, directly into the IBM Keynote. Jason McGee (blog), an IBM representative, talked about 'some of the things we've seen related to Java and the cloud.' He talked about 'Java Challenges' as 'share more,' 'cooperate,' 'use less' (resources), and 'exploit technology.' John Duimovich came to the stage to talk more about these four challenges 'in context on the Java Virtual Machine.' Duimovich talked about the 'shared classes cache' and AOT (Ahead of Time compilation), described as 'JIT code saved for the next JVM run.' Duimovich also talked about multi-tenancy and supporting 'isolation within a single JVM.' He had a slide on the Liberty Profile 'for Web, OSGi, and Mobile Apps.' Duimovich introduced 'really cool hardware' called System z and explained the advantages of running Java (rather than C or C++) on this hardware.
Duimovich stated that 'Oracle and IBM team together on Java, but compete head to head.' He pointed out that this 'competition drives innovation' and is good for customers and developers. McGee returned to the stage to talk about a few more themes, observations, and trends. He pointed out that 'Java provides developers abstraction from underlying hardware,' but that 'hardware is changing and evolving rapidly.' McGee's slide stated that 'both Java and Cloud need to enable the exploitation of these hardware advances while still preserving the "run anywhere" benefit.' Another McGee slide was titled 'Java in a polyglot world…' and he used this slide to talk about the world transitioning from an all-Java enterprise world to today, with applications written in numerous languages. He mentioned several alternative JVM-based languages and put in a plug for IBM's X10 language. McGee believes that Java will be part of, but not all of, future enterprise applications. Don't forget to share! Reference: JavaOne 2012: Java Strategy Keynote and IBM Keynote from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Google Guava v07 examples

We have something called Weekly Technology Workshops at TouK, that is, every Friday at 16:00 somebody gives a presentation for everyone willing to come. We present stuff we learn and work on at home, but we also have a bulletin board with topics that people would like to hear about. Last week Maciej Próchniak had a talk about Clojure; this time a few folks asked for an introduction to the Google Guava libraries. Since this was a dead simple task, I was happy to deliver. WTF is Guava? It's a set of very simple, basic classes that you end up writing yourself anyway. Think in terms of Apache Commons, just by Google. Just to make your life a little bit easier. There is an early (v04) presentation and there was a different one (in Polish) at Javarsovia 2010 by Wiktor Gworek. At the time of writing this, the latest version is v07; it's been mavenized and is available in a public Maven repo. Here's a quick review of a few interesting things. Don't expect anything fancy though, Guava is very BASIC. @VisibleForTesting A simple annotation that tells you why a particular property access restriction has been relaxed. A common trick in testing is to relax access restrictions to default for a particular property, so that you can use it in a unit test which resides in the same package (though in a different directory). Whether you think it's good or bad, remember to give a hint about it to the developer. Consider: public class User { private Long id; private String firstName; private String lastName; String login; Why is login package scoped? public class User { private Long id; private String firstName; private String lastName; @VisibleForTesting String login; Ah, that's why. Preconditions Guava has a few preconditions for defensive programming (Design By Contract), but they are not quite as good as what Apache Commons / the Spring framework has. One interesting thing is that the Guava solution returns the object, so it can be inlined.
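Because each check returns its argument, validation can be inlined straight into an assignment. A minimal sketch of that idiom, re-implemented in plain Java here so the example runs without the Guava jar (with Guava you would simply static-import com.google.common.base.Preconditions.checkNotNull; the User fields below are illustrative):

```java
// Sketch of the "precondition returns its argument" idiom used by Guava's
// Preconditions.checkNotNull, re-implemented without the Guava dependency.
public class PreconditionsSketch {

    static <T> T checkNotNull(T reference, String message) {
        if (reference == null) {
            throw new NullPointerException(message);
        }
        return reference; // returning the argument is what allows inlining
    }

    static class User {
        final Long id;
        final String login;

        // Validation and assignment collapse into a single expression.
        User(Long id, String login) {
            this.id = checkNotNull(id, "id cannot be null");
            this.login = checkNotNull(login, "login cannot be null").toLowerCase();
        }
    }

    public static void main(String[] args) {
        User user = new User(1L, "RAMBO");
        System.out.println(user.login); // prints "rambo"
        try {
            new User(null, "x");
        } catch (NullPointerException expected) {
            System.out.println("caught: " + expected.getMessage());
        }
    }
}
```

The hand-written and Guava-style constructors below show the same contrast at full length.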
Consider: Using hand written preconditions: public User(Long id, String firstName, String lastName, String login) { validateParameters(id, firstName, lastName, login); this.id = id; this.firstName = firstName; this.lastName = lastName; this.login = login.toLowerCase(); } private void validateParameters(Long id, String firstName, String lastName, String login) { if(id == null) { throw new IllegalArgumentException("id cannot be null"); } if(firstName == null || firstName.length() == 0) { throw new IllegalArgumentException("firstName cannot be empty"); } if(lastName == null || lastName.length() == 0) { throw new IllegalArgumentException("lastName cannot be empty"); } if(login == null || login.length() == 0) { throw new IllegalArgumentException("login cannot be empty"); } } Using Guava preconditions: public void fullyImplementedGuavaConstructorWouldBe(Long id, String firstName, String lastName, String login) { this.id = checkNotNull(id); this.firstName = checkNotNull(firstName); this.lastName = checkNotNull(lastName); this.login = checkNotNull(login); checkArgument(firstName.length() > 0); checkArgument(lastName.length() > 0); checkArgument(login.length() > 0); } (Thanks Yom for noticing that checkNotNull must go before checkArgument, though it makes it a bit unintuitive) Using Spring or Apache Commons preconditions (the use looks exactly the same for both libraries): public void springConstructorWouldBe(Long id, String firstName, String lastName, String login) { notNull(id); hasText(firstName); hasText(lastName); hasText(login); this.id = id; this.firstName = firstName; this.lastName = lastName; this.login = login; } CharMatcher For people who hate regexps or just want a simple and good looking object-style pattern matching solution.
Examples: And/or ease of use String input = "This invoice has an id of 192/10/10"; CharMatcher charMatcher = CharMatcher.DIGIT.or(CharMatcher.is('/')); String output = charMatcher.retainFrom(input); output is: 192/10/10 Negation: String input = "DO NOT scream at me!"; CharMatcher charMatcher = CharMatcher.JAVA_LOWER_CASE.or(CharMatcher.WHITESPACE).negate(); String output = charMatcher.retainFrom(input); output is: DONOT! Ranges: String input = "DO NOT scream at me!"; CharMatcher charMatcher = CharMatcher.inRange('m', 's').or(CharMatcher.is('a').or(CharMatcher.WHITESPACE)); String output = charMatcher.retainFrom(input); output is: sram a m Joiner / Splitter As the names suggest, it's string joining/splitting done the right way, although I find the inversion of calls a bit… oh well, it's Java. String[] fantasyGenres = {"Space Opera", "Horror", "Magic realism", "Religion"}; String joined = Joiner.on(", ").join(fantasyGenres); Output: Space Opera, Horror, Magic realism, Religion You can skip nulls: String[] fantasyGenres = {"Space Opera", null, "Horror", "Magic realism", null, "Religion"}; String joined = Joiner.on(", ").skipNulls().join(fantasyGenres); Output: Space Opera, Horror, Magic realism, Religion You can fill nulls: String[] fantasyGenres = {"Space Opera", null, "Horror", "Magic realism", null, "Religion"}; String joined = Joiner.on(", ").useForNull("NULL!!!").join(fantasyGenres); Output: Space Opera, NULL!!!, Horror, Magic realism, NULL!!!, Religion You can join maps: Map<Integer, String> map = newHashMap(); map.put(1, "Space Opera"); map.put(2, "Horror"); map.put(3, "Magic realism"); String joined = Joiner.on(", ").withKeyValueSeparator(" -> ").join(map); Output: 1 -> Space Opera, 2 -> Horror, 3 -> Magic realism Split returns an Iterable instead of JDK arrays: String input = "Some very stupid data with ids of invoices like 121432, 3436534 and 8989898 inside"; Iterable<String> splitted = Splitter.on(" ").split(input); Split does fixed-length splitting, although you cannot give a different length for each "column", which makes its use a bit limited while parsing some badly exported Excels. String input = "A 1 1 1 1\n" + "B 1 2 2 2\n" + "C 1 2 3 3\n" + "D 1 2 5 3\n" + "E 3 2 5 4\n" + "F 3 3 7 5\n" + "G 3 3 7 5\n" + "H 3 3 9 7"; Iterable<String> splitted = Splitter.fixedLength(3).trimResults().split(input); You can use a CharMatcher while splitting: String input = "Some very stupid data with ids of invoices like 123231/fv/10/2010, 123231/fv/10/2010 and 123231/fv/10/2010"; Iterable<String> splitted = Splitter.on(CharMatcher.DIGIT.negate()) .trimResults() .omitEmptyStrings() .split(input); Predicates / Functions Predicates alone are not much, just an interface with a method that returns true, but if you combine predicates with functions and Collections2 (a Guava class that simplifies working on collections), you get a nice tool in your toolbox. But let's start with basic predicate use. Imagine we want to find whether there are users who have logins with digits inside.
The invocation would be (returns boolean): Iterables.any(users, new ShouldNotHaveDigitsInLoginPredicate()); And the predicate looks like this: public class ShouldNotHaveDigitsInLoginPredicate implements Predicate<User> { @Override public boolean apply(User user) { checkNotNull(user); return CharMatcher.DIGIT.retainFrom(user.login).length() == 0; } } Now let's add a function that will transform a user to his full name: public class FullNameFunction implements Function<User, String> { @Override public String apply(User user) { checkNotNull(user); return user.getFirstName() + " " + user.getLastName(); } } You can invoke it using the static method transform: List<User> users = newArrayList(new User(1L, "sylwek", "stall", "rambo"), new User(2L, "arnold", "schwartz", "commando")); List<String> fullNames = transform(users, new FullNameFunction()); And now let's combine predicates with functions to print names of users that have logins which do not contain digits: List<User> users = newArrayList(new User(1L, "sylwek", "stall", "rambo"), new User(2L, "arnold", "schwartz", "commando"), new User(3L, "hans", "kloss", "jw23")); Collection<User> usersWithoutDigitsInLogin = filter(users, new ShouldNotHaveDigitsInLoginPredicate()); String names = Joiner.on("\n").join( transform(usersWithoutDigitsInLogin, new FullNameFunction()) ); What we do not get: fold (reduce) and tuples. Oh well, you'd probably turn to the Functional Java library anyway, if you wanted functions in Java, right? CaseFormat Ever wanted to turn those ugly PHP Pear names into nice Java/C++ style with a one liner? No? Well, anyway, you can: String pearPhpName = "Really_Fucked_Up_PHP_PearConvention_That_Looks_UGLY_because_of_no_NAMESPACES"; String javaAndCPPName = CaseFormat.UPPER_UNDERSCORE.to(CaseFormat.UPPER_CAMEL, pearPhpName); Output: ReallyFuckedUpPhpPearconventionThatLooksUglyBecauseOfNoNamespaces But since Oracle has taken over Sun, you may actually want to turn those into SQL style, right?
String sqlName = CaseFormat.UPPER_CAMEL.to(CaseFormat.LOWER_UNDERSCORE, javaAndCPPName); Output: really_fucked_up_php_pearconvention_that_looks_ugly_because_of_no_namespaces Collections Guava is a superset of the Google Collections library 1.0, and this indeed is a very good reason to include this dependency in your poms. I won't even try to describe all the features, but just to point out a few nice things: you have an Immutable version of pretty much everything; you get a few nice static and statically typed methods on common types like Lists, Sets, Maps, ObjectArrays, which include an easy way of creating based on return type (e.g. newArrayList), transform (a way to apply functions that returns an Immutable version), partition (paging) and reverse. And now for a few more interesting collections. Multimaps Multimap is basically a map that can have many values for a single key. Ever had to create a Map<T1, Set<T2>> in your code? You don't have to anymore. Multimap<Integer, String> multimap = HashMultimap.create(); multimap.put(1, "a"); multimap.put(2, "b"); multimap.put(3, "c"); multimap.put(1, "a2"); There are of course immutable implementations as well: ImmutableListMultimap, ImmutableSetMultimap, etc. You can construct immutables either in line (up to 5 elements) or using a builder: Multimap<Integer, String> multimap = ImmutableSetMultimap.of(1, "a", 2, "b", 3, "c", 1, "a2"); Multimap<Integer, String> multimap = new ImmutableSetMultimap.Builder<Integer, String>() .put(1, "a") .put(2, "b") .put(3, "c") .put(1, "a2") .build(); BiMap BiMap is a map that has only unique values. Consider this: @Test(expected = IllegalArgumentException.class) public void biMapShouldOnlyHaveUniqueValues() { BiMap<Integer, String> biMap = HashBiMap.create(); biMap.put(1, "a"); biMap.put(2, "b"); biMap.put(3, "a"); //argh!
an exception } That allows you to invert the map, so the values become keys and the other way around: BiMap<Integer, String> biMap = HashBiMap.create(); biMap.put(1, "a"); biMap.put(2, "b"); biMap.put(3, "c"); BiMap<String, Integer> invertedMap = biMap.inverse(); Not sure what I'd actually want to use it for. Constraints This allows you to add constraint checking on a collection, so that only values which pass the constraint may be added. Imagine we want a collection of users with the first letter 'r' in their logins. Constraint<User> loginMustStartWithR = new Constraint<User>() { @Override public User checkElement(User user) { checkNotNull(user); if(!user.login.startsWith("r")) { throw new IllegalArgumentException("GTFO, you are not Rrrrrrrrr"); } return user; } }; And now for a test: @Test(expected = IllegalArgumentException.class) public void shouldConstraintCollection() { //given Collection<User> users = newArrayList(new User(1L, "john", "rambo", "rambo")); Collection<User> usersThatStartWithR = constrainedCollection(users, loginMustStartWithR); //when usersThatStartWithR.add(new User(2L, "arnold", "schwarz", "commando")); } You also get a notNull constraint out of the box: //notice it's not an IllegalArgumentException :( @Test(expected = NullPointerException.class) public void notNullConstraintShouldWork() { //given Collection<Integer> users = newArrayList(1); Collection<Integer> notNullCollection = constrainedCollection(users, notNull()); //when notNullCollection.add(null); } Thing to remember: constraints do not check the data already present in a collection. Tables Just as expected, a table is a collection with columns, rows and values. No more Map<T1, Map<T2, T3>>, I guess. The usage is simple and you can transpose: Table<Integer, String, String> table = HashBasedTable.create(); table.put(1, "a", "1a"); table.put(1, "b", "1b"); table.put(2, "a", "2a"); table.put(2, "b", "2b"); Table<String, Integer, String> transposedTable = Tables.transpose(table); That's all, folks.
I didn’t present util.concurrent, primitives, io and net packages, but you probably already know what to expect. Happy coding and don’t forget to share! Reference: Google Guava v07 examples from our JCG partner Jakub Nabrdalik at the Solid Craft blog....

Business Agility Through DevOps and Continuous Delivery

The principles of Continuous Delivery and DevOps have been around for a few years. Developers and system administrators who follow the lean-startup movement are more than familiar with both. However, more often than not, implementing either or both within a traditional, large IT environment is a significant challenge compared to a new age, Web 2.0 type organization (think Flickr) or a Silicon Valley startup (think Instagram). This is a case study of how the consultancy firm I work for delivered the largest software upgrade in the history of one blue chip client, using both. Background The client is one of Australia's largest retailers. The firm I work for is a trusted consultant that has been working with them for over a decade. During this time (thankfully), we have earned enough credibility to influence business decisions heavily dependent on IT infrastructure. A massive IT infrastructure upgrade was imminent, when our client wanted to leverage their loyalty rewards program to fight competition head-on. With an existing user base of several million and our client looking to double this number with the new campaign, the expectations from the software were nothing short of spectacular. In addition to ramping up the existing software, a new set of software needed to be in place, capable of handling hundreds of thousands of new user registrations per hour. Maintenance downtime was not an option (is it ever?) once the system went live (especially during the marketing campaign period). Why DevOps? Our long relationship with this client and the way IT operations is organized meant that adopting DevOps was more evolutionary than revolutionary. The good folk at operations have a healthy respect and trust towards our developers and the feeling is mutual. Our consultants provided development and 24/7 support for the software. The software includes a Web Portal, back office systems, partner integration systems and customer support systems.
Adopting DevOps principles meant:
- Our developers have more control over the environments the software runs in, from build to production.
- Developers have a better understanding of the production environment the software eventually runs in, as opposed to their local machines.
- Developers are able to clearly explain to the infrastructure operations group what the software does in each environment.
- Simple, clear processes to manage the delivery of change.
- Better collaboration between developers and operations. No need to raise tickets.
Why Continuous Delivery? The most important reason was the reduced risk to our client's new campaign. With a massive marketing campaign in full throttle, targeting millions of new user sign-ups, the software systems needed to maintain 100% up-time. Taking software offline for maintenance meant lost opportunity and money for the business. In a nutshell:
- A big bang approach would have been fine for the initial release. But when issues are found we want to deliver fixes without down time.
- When the marketing campaign is running, improvements and features will need to be made to the software based on analytics and metrics. Delivering them in large batches (taking months) doesn't deliver good business value.
- From a developer's perspective, delivering small changes frequently helps to identify what went wrong easily and either roll back or re-deploy a fix.
- Years of Agile practices followed by us at the client's site ensured that a proper culture was in place to adopt continuous delivery painlessly.
- We were already using Hudson/Jenkins for continuous integration. We only needed the 'last mile' of the deployment pipeline to be built, in order to upgrade the existing technical process to one that delivered continuously.
The process: keep it simple and transparent The development process we follow is simple and the culture is such that each developer is aware that at any given moment one or more of their commits can be released to production.
To keep the burden to a minimum, we use subversion tags and branching so that release candidate revisions are tagged before a release candidate is promoted to the test environment (more on that later). The advantage of tagging early is that we have more control over changes we deliver into production, for instance, bug fixes versus feature releases.
Image credit – Wikipedia
The production environment consists of a cluster of twenty nodes. Each node contains a Tomcat instance fronted by Apache. The load balancer provides functionality to release nodes from the cluster when required, although not as advanced as the API level communication provided by Amazon's elastic load balancer (this is an investment made by the client way back, so we opted to work with it rather than complain). Jenkins CI is used as the foundation for our continuous delivery process. The deployment pipeline consists of several stages. We kept the process simple just like the diagram above, to minimize confusion.
1. Build – At this stage the latest revision from Subversion is checked out by Jenkins at the build server, unit tests are run and, once successful, the artifacts are bundled. The build environment is also equipped with infrastructure to test deploy the software for verification. Every build is deployed to this test infrastructure by Jenkins.
Creating a release candidate build with subversion tagging. Promotion tasks.
2. Test (UAT) – Once a build is verified by developers, it's promoted to the Test environment using a Jenkins task. A promotion indicates that the developers are confident of a build and that it's ready for quality assurance. The automated promotion process creates a tag in Subversion using the revision information packaged into the artifacts. Automated integration tests written using Selenium are run against the Test deployment.
The QA team uses this environment to carry out their testing.
3. Production Verification – Once artifacts are tested by the test team and no failures are reported by the automated integration tests, a node is picked from the production cluster and – using a Jenkins job – prepared for smoke testing. This automated process will:
- Remove the elected node from the cluster.
- Deploy the tested artifacts to this node.
Removing a node from the production cluster. Nominating a node(s) for production verification.
4. Production (Cut-over) – Once the smoke tests are done, the artifacts are deployed to the cluster by a separate Jenkins task. The deployment follows a round-robin schedule, where each node is taken off the load balancer to deploy and refresh the software. The deployment time is highly predictable and almost constant. As soon as a node is returned to the cluster, verification begins.
5. Rollback (Disaster recovery) – In case of a bad deployment, despite all the testing and verification, we roll back to the last stable deployment. Just like the cut-over deployment above, the time for a full rollback is predictable.
Preparing for rollback – The rollback process goes through the test server.
Implementation: Our tools
Jenkins – Jenkins is the user interface to the whole process. We used parametrized builds whenever we required a developer to interact with a certain job.
Jenkins Batch Task plugin – We automated all repetitive tasks to minimize human error. The Task Plugin was used extensively so that we have the flexibility to write scripts to do exactly what we want.
Bash – Most of the hard work is done by a set of Bash scripts. We configured keyless login from the build server with appropriate permissions, so that these scripts can perform just like a human, once told what to do via Jenkins.
Ant – The build scripts for the software were written in Ant. Ant also couples nicely with Jenkins and can be easily called from a shell script when needed.
JUnit and Selenium – Automation is great, but without a good feedback loop it can lead to disaster. JUnit tests provide us with feedback for every single build, while Selenium does the same for builds that are promoted to the test environment. An error means immediate termination of the deployment pipeline for that build. This, coupled with testing done by QA, keeps defects reaching production to a minimum.
Puppet – Puppet (http://puppetlabs.com) is used by the operations team to manage configurations across environments. Once the operations team builds a server for the developers, they have full access to go in and configure it to run the application. The most important part is to record everything we do while in there. Once a developer is satisfied that the configuration is working, they give a walk-through to the operations team, who in turn update their Puppet recipes. These changes are rolled out to the cluster by Puppet immediately.
Monitoring – The logs from all production nodes are harvested to a single location for easy analysis. A health check page is built into the application itself, so that we can check the status of the application running in each node.
Conclusion Neither DevOps nor Continuous Delivery is a silver bullet. However, nurturing a culture where developers and operations trust each other and work together can be very rewarding to a business. Cultivating such a culture allows a business to reap the full benefits of an Agile development process. Because of the mutual trust between us (the developers) and our client's operations team, we were able to implement a deployment pipeline that is capable of delivering features and fixes within hours if necessary, instead of months. During a crucial marketing campaign, this kind of agility allowed our client to keep the software infrastructure well in tune with feedback received through their marketing analytics and KPIs.
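The health check page mentioned above can be as small as a single HTTP endpoint that verifies a node's critical dependencies. A minimal sketch using the JDK's built-in com.sun.net.httpserver; the /health path and the databaseReachable() check are illustrative assumptions, since the real application would test its actual dependencies and serve the page through its own web stack:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class HealthCheckSketch {

    // Stand-in for real dependency checks (database, downstream services).
    static boolean databaseReachable() {
        return true;
    }

    // Starts an HTTP server answering GET /health with 200 "OK" when healthy,
    // 503 "FAIL" otherwise. Port 0 asks the OS for any free port.
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            boolean healthy = databaseReachable();
            byte[] body = (healthy ? "OK" : "FAIL").getBytes();
            exchange.sendResponseHeaders(healthy ? 200 : 503, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = start(0);
        System.out.println("Health check listening on port "
                + server.getAddress().getPort());
        server.stop(0);
    }
}
```

The load balancer or a monitoring script can then poll /health on every node; any non-200 answer flags the node for removal from rotation.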
Further reading A few articles you might find interesting:
- Four Principles of Low-Risk Software Releases
- On DVCS, continuous integration, and feature branches
- The Relationship Between Dev-Ops And Continuous Delivery
Reference: Business Agility Through DevOps and Continuous Delivery from our JCG partner Tyrell Perera at the Conundrum blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.