Tracking excessive garbage collection in Hotspot JVM

Quite frequently, due to memory leaks or other memory problems, applications freeze, leaving only the garbage collector (GC) running, unsuccessfully trying to free some space. This goes on until a watchdog (or a frustrated administrator) restarts the application, and the underlying problem is never solved. The goal of this article is to show how to identify excessive GC and how to get a heap dump whenever it occurs. It assumes Hotspot JVM version 1.6 or higher.

Relevant JVM flags

Using the following flags we can direct the Hotspot JVM to produce a heap dump when an application becomes GC driven. First of all, the -XX:+HeapDumpOnOutOfMemoryError flag should be added. Our goal is to have an OutOfMemoryError generated when the throughput of an application drops because of non-stop GC. There are two JVM flags that will help:

-XX:GCTimeLimit=98 – defines the limit on the proportion of time spent in GC before an OutOfMemoryError is thrown
-XX:GCHeapFreeLimit=2 – defines the minimum percentage of free space after a full GC before an OutOfMemoryError is thrown

Both flags are enabled by default, but the OutOfMemoryError is not triggered frequently. Hence, a deeper understanding is needed in order to decide when each flag is useful and what the values should be.

From the Oracle documentation on excessive GC time and OutOfMemoryError:

"The concurrent collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.

The policy is the same as that in the parallel collector, except that time spent performing concurrent collections is not counted toward the 98% time limit. In other words, only collections performed while the application is stopped count toward excessive GC time. Such collections are typically due to a concurrent mode failure or an explicit collection request (e.g., a call to System.gc())."

As the second passage states, GCTimeLimit does not work well with the parallel collector, as time spent in GC is a less important factor in that case.

Choosing defaults for GCHeapFreeLimit

The values should be chosen in such a way that an OutOfMemoryError is thrown when the application becomes unresponsive (i.e., driven by GC). Unresponsiveness is a subjective notion that may vary from application to application, therefore the resulting value may be (slightly) different for different applications.

First, a simple test should be created: a class that runs inside the application and allocates huge amounts of objects, part of which are collected quickly and part of which live for a long time. GC logging flags should be added before running the test. For my application the following flags were added:

-Xmx960m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -Xloggc:/root/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/root/dumps

The test class and the resulting GC log output are shown below.
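The test class itself is not shown in the original article; the following is a minimal sketch of what such an allocation test might look like (class and field names, allocation sizes and ratios are illustrative assumptions):

import java.util.ArrayList;
import java.util.List;

public class GcStressTest implements Runnable {

    // Long-lived objects: retained for the life of the test, slowly filling the old generation.
    private final List<byte[]> longLived = new ArrayList<byte[]>();

    @Override
    public void run() {
        List<byte[]> shortLived = new ArrayList<byte[]>();
        while (!Thread.currentThread().isInterrupted()) {
            // Most allocations die young and are reclaimed by minor GCs...
            shortLived.add(new byte[1024]);
            if (shortLived.size() > 10000) {
                shortLived.clear();
            }
            // ...while a small fraction is retained and eventually promoted.
            if (Math.random() < 0.01) {
                longLived.add(new byte[1024]);
            }
        }
    }
}

Started from inside the application under test (for example with new Thread(new GcStressTest()).start()) and launched with the flags above plus the -XX:GCHeapFreeLimit value under evaluation, e.g.:

java -Xmx960m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -Xloggc:/root/gc.log -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/root/dumps -XX:GCHeapFreeLimit=20 MyApplication

where MyApplication is a placeholder for the real main class.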
First full GC cycle – application is responsive:

[Full GC [PSYoungGen: 290112K->33213K(308672K)] [PSOldGen: 655359K->655359K(655360K)] 945471K->688573K(964032K)]

After a few minutes – application is unresponsive:

[Full GC [PSYoungGen: 290112K->213269K(308672K)] [PSOldGen: 655359K->655359K(655360K)] 945471K->868629K(964032K)]

For a 950 MB heap the distribution of memory spaces is: old generation = 650 MB, young generation = 300 MB. When the application is responsive, the old generation becomes full with strongly referenced data, while most of the young generation is garbage. Therefore one estimate for GCHeapFreeLimit (all of young is cleared and nothing is cleared from old) is 300/950 ~ 32%.

However, as time passes, objects from the young space cannot be promoted to the old one, because it is full. As the Oracle documentation says: "The youngest generation collection does not require a guarantee of full promotion of all live objects." (-XX:+HandlePromotionFailure flag).

Because of failed promotions there are many more referenced objects in the young generation than expected. In order to be on the safe side (fewer false positives), I assume that the system is driven by GC when more than 1/3 of the young space is garbage and the old space is full. Therefore the ratio I advise for applications similar to mine is 200/950 ~ 20%:

-XX:GCHeapFreeLimit=20

Experiments show that the OOM happens after 1-2 minutes of excessive GC above the 20% limit, with 30-35 full GCs occurring during that time.

Conclusion

Identifying excessive garbage collection in Java is a difficult task with no silver bullet. The Hotspot JVM developers have provided mechanisms that frequently help with identification of the root cause. The GCHeapFreeLimit and GCTimeLimit flags can shed light on the problem when used properly, with appropriate values.

References

1. http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html
2. http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html

Reference: Tracking excessive garbage collection in Hotspot JVM from our JCG partner Art Gourevitch at The Art of Java blog.

A Simple Introduction to AOP

Why use AOP? A simple way to answer this question is to show an implementation of a cross-cutting concern without using AOP. Consider a simple service and its implementation:

public interface InventoryService {
    public Inventory create(Inventory inventory);
    public List<Inventory> list();
    public Inventory findByVin(String vin);
    public Inventory update(Inventory inventory);
    public boolean delete(Long id);
    public Inventory compositeUpdateService(String vin, String newMake);
}

and its default implementation:

public class DefaultInventoryService implements InventoryService {

    @Override
    public Inventory create(Inventory inventory) {
        logger.info("Create Inventory called");
        inventory.setId(1L);
        return inventory;
    }

    @Override
    public List<Inventory> list() {
        return new ArrayList<Inventory>();
    }

    @Override
    public Inventory update(Inventory inventory) {
        return inventory;
    }

    @Override
    public boolean delete(Long id) {
        logger.info("Delete Inventory called");
        return true;
    }
    ....

This is just one service; assume that there are many more services in this project. So now, if there were a requirement to record the time taken by each of the service methods, the option without AOP would be something along the following lines. Create a decorator for the service:

public class InventoryServiceDecorator implements InventoryService {

    private static Logger logger = LoggerFactory.getLogger(InventoryServiceDecorator.class);
    private InventoryService decorated;

    @Override
    public Inventory create(Inventory inventory) {
        logger.info("before method: create");
        long start = System.nanoTime();
        Inventory inventoryCreated = decorated.create(inventory);
        long end = System.nanoTime();
        logger.info(String.format("%s took %d ns", "create", (end - start)));
        return inventoryCreated;
    }

This decorator essentially intercepts the call on behalf of the decorated object, records the time taken by the method call, and delegates the call to the decorated object. Imagine doing this for all the methods of all the services in the project. This is the scenario that AOP addresses: it provides a way for cross-cutting concerns (recording the time taken by service method calls, for example) to be modularized – packaged up separately without polluting the core of the classes.
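For completeness, wiring the decorator up at the call site would look something like the following minimal sketch. It assumes the decorator gains a constructor accepting the decorated instance (which the excerpt above omits) and reuses the Inventory constructor from the test shown later:

// Every client must remember to wrap the service itself:
InventoryService inventoryService =
        new InventoryServiceDecorator(new DefaultInventoryService());
inventoryService.create(new Inventory("testmake", "testmodel", "testtrim", "testvin"));

This per-client boilerplate is exactly what the approaches in the rest of the article remove.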
To end the session, a different way to implement the decorator would be using the dynamic proxy feature of Java:

public class AuditProxy implements java.lang.reflect.InvocationHandler {

    private static Logger logger = LoggerFactory.getLogger(AuditProxy.class);
    private Object obj;

    public static Object newInstance(Object obj) {
        return java.lang.reflect.Proxy.newProxyInstance(obj.getClass().getClassLoader(),
                obj.getClass().getInterfaces(), new AuditProxy(obj));
    }

    private AuditProxy(Object obj) {
        this.obj = obj;
    }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        Object result;
        try {
            logger.info("before method " + m.getName());
            long start = System.nanoTime();
            result = m.invoke(obj, args);
            long end = System.nanoTime();
            logger.info(String.format("%s took %d ns", m.getName(), (end - start)));
        } catch (InvocationTargetException e) {
            throw e.getTargetException();
        } catch (Exception e) {
            throw new RuntimeException("unexpected invocation exception: " + e.getMessage());
        } finally {
            logger.info("after method " + m.getName());
        }
        return result;
    }
}

So now, when creating an instance of InventoryService, I would create it through the AuditProxy dynamic proxy:

InventoryService inventoryService = (InventoryService) AuditProxy.newInstance(new DefaultInventoryService());

The overridden invoke method of java.lang.reflect.InvocationHandler intercepts all calls to an InventoryService created in this manner, and this is where the cross-cutting concern of auditing the method call time is recorded. This way the cross-cutting concern is modularized in one place (AuditProxy), but it still needs to be explicitly known by the clients of InventoryService when instantiating it.

Now, I will show how the cross-cutting concern can be implemented using Spring AOP. Spring offers multiple ways of implementing aspects: XML configuration based and @AspectJ based. In this specific example, I will use the XML configuration based way of defining the aspect. Spring AOP works in the context of a Spring container, so the service implementation that was defined in the previous session needs to be a Spring bean; I am defining it using the @Service annotation:

@Service
public class DefaultInventoryService implements InventoryService {
...
}

Now, I want to record the time taken by each of the method calls of my DefaultInventoryService. I am first going to modularize this as an "advice":

package org.bk.inventory.aspect;

import org.aspectj.lang.ProceedingJoinPoint;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class AuditAdvice {

    private static Logger logger = LoggerFactory.getLogger(AuditAdvice.class);

    public void beforeMethod() {
        logger.info("before method");
    }

    public void afterMethod() {
        logger.info("after method");
    }

    public Object aroundMethod(ProceedingJoinPoint joinpoint) {
        try {
            long start = System.nanoTime();
            Object result = joinpoint.proceed();
            long end = System.nanoTime();
            logger.info(String.format("%s took %d ns", joinpoint.getSignature(), (end - start)));
            return result;
        } catch (Throwable e) {
            throw new RuntimeException(e);
        }
    }
}

This advice is expected to capture the time taken by the methods in DefaultInventoryService.
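One practical note before the wiring: the configuration in the next step uses Spring's aop XML namespace, which must be declared on the root <beans> element. A minimal sketch of that declaration (the 3.0 schema version is an assumption; use whatever matches your Spring version):

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/aop
           http://www.springframework.org/schema/aop/spring-aop-3.0.xsd">

    <!-- bean definitions and <aop:config> go here -->

</beans>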
So now to wire this advice to the DefaultInventoryService Spring bean:

<bean id="auditAspect" class="org.bk.inventory.aspect.AuditAdvice" />

<aop:config>
    <aop:aspect ref="auditAspect">
        <aop:pointcut id="serviceMethods" expression="execution(* org.bk.inventory.service.*.*(..))" />
        <aop:before pointcut-ref="serviceMethods" method="beforeMethod" />
        <aop:around pointcut-ref="serviceMethods" method="aroundMethod" />
        <aop:after-returning pointcut-ref="serviceMethods" method="afterMethod" />
    </aop:aspect>
</aop:config>

This works by first defining the "pointcut" – the places (in this example, the service methods) to which the cross-cutting concern (capturing the method execution time, in this example) is added. Here I have defined it using a pointcut expression – execution(* org.bk.inventory.service.*.*(..)) – which selects all methods of all types in the org.bk.inventory.service package. Once the pointcut is defined, what needs to happen around the pointcut (the advice) is declared, using expressions such as:

<aop:around pointcut-ref="serviceMethods" method="aroundMethod" />

This basically says that around every method of any service type, the aroundMethod of the AuditAdvice defined earlier is executed. Now, if the service methods are executed, the advice is invoked during the method execution. The following is sample output when DefaultInventoryService's create method is called:

org.bk.inventory.service.InventoryService - Create Inventory called
org.bk.inventory.aspect.AuditAdvice - Inventory org.bk.inventory.service.InventoryService.create(Inventory) took 82492 ns

Spring's AOP implementation works by generating a dynamic proxy at runtime for all the target beans, based on the defined pointcut. Another way of defining an aspect is using @AspectJ annotations – which are natively understood by Spring:

package org.bk.inventory.aspect;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Aspect
public class AuditAspect {

    private static Logger logger = LoggerFactory.getLogger(AuditAspect.class);

    @Pointcut("execution(* org.bk.inventory.service.*.*(..))")
    public void serviceMethods() {
        //
    }

    @Before("serviceMethods()")
    public void beforeMethod() {
        logger.info("before method");
    }

    @Around("serviceMethods()")
    public Object aroundMethod(ProceedingJoinPoint joinpoint) {
        try {
            long start = System.nanoTime();
            Object result = joinpoint.proceed();
            long end = System.nanoTime();
            logger.info(String.format("%s took %d ns", joinpoint.getSignature(), (end - start)));
            return result;
        } catch (Throwable e) {
            throw new RuntimeException(e);
        }
    }

    @After("serviceMethods()")
    public void afterMethod() {
        logger.info("after method");
    }
}

The @Aspect annotation on the class identifies it as an aspect definition. It starts by defining the pointcut:

@Pointcut("execution(* org.bk.inventory.service.*.*(..))")
public void serviceMethods() {}

The above identifies all the methods of all types in the org.bk.inventory.service package; the pointcut is referred to by the name of the method on which the annotation is placed – in this case "serviceMethods".
Next, the advice is defined using the @Before("serviceMethods()"), @After("serviceMethods()") and @Around("serviceMethods()") annotations, and the specifics of what needs to happen are in the bodies of the methods carrying those annotations. Spring AOP natively understands the @AspectJ annotations if this aspect is defined as a bean:

<bean id="auditAspect" class="org.bk.inventory.aspect.AuditAspect" />

Spring will create a dynamic proxy to apply the advice to all the target beans identified by the pointcut.

Yet another way to define an aspect is using native AspectJ notation:

package org.bk.inventory.aspect;

import org.bk.inventory.types.Inventory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public aspect AuditAspect {

    private static Logger logger = LoggerFactory.getLogger(AuditAspect.class);

    pointcut serviceMethods() : execution(* org.bk.inventory.service.*.*(..));

    pointcut serviceMethodsWithInventoryAsParam(Inventory inventory) :
            execution(* org.bk.inventory.service.*.*(Inventory)) && args(inventory);

    before() : serviceMethods() {
        logger.info("before method");
    }

    Object around() : serviceMethods() {
        long start = System.nanoTime();
        Object result = proceed();
        long end = System.nanoTime();
        logger.info(String.format("%s took %d ns", thisJoinPointStaticPart.getSignature(), (end - start)));
        return result;
    }

    Object around(Inventory inventory) : serviceMethodsWithInventoryAsParam(inventory) {
        Object result = proceed(inventory);
        logger.info(String.format("WITH PARAM: %s", inventory.toString()));
        return result;
    }

    after() : serviceMethods() {
        logger.info("after method");
    }
}

This maps to the previously defined @AspectJ notation. Since this is a DSL specifically for defining aspects, it is not understood by the Java compiler. AspectJ provides a tool (ajc) to compile these native AspectJ files and to weave the aspects into the targeted pointcuts. Maven provides a plugin which seamlessly invokes ajc at the point of compilation:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>aspectj-maven-plugin</artifactId>
    <version>1.0</version>
    <dependencies>
        <dependency>
            <groupId>org.aspectj</groupId>
            <artifactId>aspectjrt</artifactId>
            <version>${aspectj.version}</version>
        </dependency>
        <dependency>
            <groupId>org.aspectj</groupId>
            <artifactId>aspectjtools</artifactId>
            <version>${aspectj.version}</version>
        </dependency>
    </dependencies>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
                <goal>test-compile</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <outxml>true</outxml>
        <aspectLibraries>
            <aspectLibrary>
                <groupId>org.springframework</groupId>
                <artifactId>spring-aspects</artifactId>
            </aspectLibrary>
        </aspectLibraries>
        <source>1.6</source>
        <target>1.6</target>
    </configuration>
</plugin>

To wrap up the AOP intro, here is an example that comprehensively exercises the concepts introduced in the previous sessions. The use case is simple: I am going to define a custom annotation, PerfLog, and I expect calls to methods annotated with it to be timed and logged.
Let me start by defining the annotation:

package org.bk.annotations;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface PerfLog {
}

Now to annotate some service methods with this annotation:

@Service
public class DefaultInventoryService implements InventoryService {

    private static Logger logger = LoggerFactory.getLogger(InventoryService.class);

    @Override
    public Inventory create(Inventory inventory) {
        logger.info("Create Inventory called");
        inventory.setId(1L);
        return inventory;
    }

    @Override
    public List<Inventory> list() {
        return new ArrayList<Inventory>();
    }

    @Override
    @PerfLog
    public Inventory update(Inventory inventory) {
        return inventory;
    }

    @Override
    public boolean delete(Long id) {
        logger.info("Delete Inventory called");
        return true;
    }

    @Override
    @PerfLog
    public Inventory findByVin(String vin) {
        logger.info("find by vin called");
        return new Inventory("testmake", "testmodel", "testtrim", "testvin");
    }

    @Override
    @PerfLog
    public Inventory compositeUpdateService(String vin, String newMake) {
        logger.info("composite Update Service called");
        Inventory inventory = findByVin(vin);
        inventory.setMake(newMake);
        update(inventory);
        return inventory;
    }
}

Here three methods of DefaultInventoryService have been annotated with the @PerfLog annotation: update, findByVin and compositeUpdateService, which internally invokes the methods findByVin and update. Now for the aspect which will intercept all calls to methods annotated with @PerfLog and log the time taken by the method call:

package org.bk.inventory.aspect;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Aspect
public class AuditAspect {

    private static Logger logger = LoggerFactory.getLogger(AuditAspect.class);

    @Pointcut("execution(@org.bk.annotations.PerfLog * *.*(..))")
    public void performanceTargets() {}

    @Around("performanceTargets()")
    public Object logPerformanceStats(ProceedingJoinPoint joinpoint) {
        try {
            long start = System.nanoTime();
            Object result = joinpoint.proceed();
            long end = System.nanoTime();
            logger.info(String.format("%s took %d ns", joinpoint.getSignature(), (end - start)));
            return result;
        } catch (Throwable e) {
            throw new RuntimeException(e);
        }
    }
}

Here the pointcut expression – @Pointcut("execution(@org.bk.annotations.PerfLog * *.*(..))") – selects all methods annotated with the @PerfLog annotation, and the aspect method logPerformanceStats logs the time taken by the method calls.
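The Spring context file used by the test in the next section (testApplicationContextAOP.xml) is not shown in the article. For an @AspectJ-style aspect like the one above it would, at a minimum, need to enable autoproxying, scan for the service beans and declare the aspect; a sketch under those assumptions:

<aop:aspectj-autoproxy />
<context:component-scan base-package="org.bk.inventory.service" />
<bean id="auditAspect" class="org.bk.inventory.aspect.AuditAspect" />

(The aop and context namespaces must be declared on the <beans> element, in the same way as shown earlier for aop.)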
To test this:

package org.bk.inventory;

import static org.hamcrest.CoreMatchers.*;
import static org.junit.Assert.*;

import org.bk.inventory.service.InventoryService;
import org.bk.inventory.types.Inventory;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:/testApplicationContextAOP.xml")
public class AuditAspectTest {

    @Autowired
    InventoryService inventoryService;

    @Test
    public void testInventoryService() {
        Inventory inventory = this.inventoryService.create(new Inventory("testmake", "testmodel", "testtrim", "testvin"));
        assertThat(inventory.getId(), is(1L));
        assertThat(this.inventoryService.delete(1L), is(true));
        assertThat(this.inventoryService.compositeUpdateService("vin", "newmake").getMake(), is("newmake"));
    }
}

When this test is invoked, the output is the following:

2011-09-08 20:54:03,521 org.bk.inventory.service.InventoryService - Create Inventory called
2011-09-08 20:54:03,536 org.bk.inventory.service.InventoryService - Delete Inventory called
2011-09-08 20:54:03,536 org.bk.inventory.service.InventoryService - composite Update Service called
2011-09-08 20:54:03,536 org.bk.inventory.service.InventoryService - find by vin called
2011-09-08 20:54:03,536 org.bk.inventory.aspect.AuditAspect - Inventory org.bk.inventory.service.DefaultInventoryService.findByVin(String) took 64893 ns
2011-09-08 20:54:03,536 org.bk.inventory.aspect.AuditAspect - Inventory org.bk.inventory.service.DefaultInventoryService.update(Inventory) took 1833 ns
2011-09-08 20:54:03,536 org.bk.inventory.aspect.AuditAspect - Inventory org.bk.inventory.service.DefaultInventoryService.compositeUpdateService(String, String) took 1371171 ns

The advice is correctly invoked for findByVin, update and compositeUpdateService.

This sample is available at: git://github.com/bijukunjummen/AOP-Samples.git

Reference: A simple introduction to AOP from our JCG partner Biju Kunjummen at the all and sundry blog.

NoSQLUnit 0.3.0 Released

Introduction

Unit testing is a method by which the smallest testable part of an application is validated. Unit tests must follow the FIRST rules: Fast, Isolated, Repeatable, Self-Validated and Timely. It is strange to think about a JEE application without a persistence layer (typically relational databases or the new NoSQL databases), so it should be interesting to write unit tests of the persistence layer too. When writing unit tests of the persistence layer we should focus on not breaking two main concepts of the FIRST rules: fast and isolated.

Our tests will be fast if they don't access the network or the filesystem, and in the case of persistence systems, the network and the filesystem are the most used resources. For RDBMS (SQL), many Java in-memory databases exist, like Apache Derby, H2 or HSQLDB. These databases, as their name suggests, are embedded into your program and data is stored in memory, so your tests remain fast. The problem is with NoSQL systems, because of their heterogeneity. Some systems use a document approach (like MongoDb), others a column approach (like HBase), or a graph approach (like Neo4J). For this reason the in-memory mode must be provided by the vendor; there is no generic solution.

Our tests must also be isolated from each other. It is not acceptable that one test method modifies the result of another test method. In the case of persistence tests this scenario occurs when a previous test method inserts an entry into the database and the next test method sees the change. So before the execution of each test, the database should be in a known state. Note that if your test finds the database in a known state, the test will be repeatable; if a test assertion depends on a previous test execution, each execution will be unique. For homogeneous systems like RDBMS, DBUnit exists to keep the database in a known state before each execution, but there is no DBUnit-like framework for heterogeneous NoSQL systems.

NoSQLUnit resolves this problem by providing a JUnit extension which helps us manage the lifecycle of NoSQL systems and also takes care of maintaining databases in a known state.

NoSQLUnit

NoSQLUnit is a JUnit extension that makes writing unit and integration tests of systems with a NoSQL backend easier. It is composed of two sets of Rules and a group of annotations.

The first set of Rules are those responsible for managing the database lifecycle; there are two for each supported backend:

- The first one (where possible) is the in-memory mode. This mode takes care of starting and stopping the database system in 'in-memory' mode. It will typically be used during unit testing.
- The second one is the managed mode. This mode is in charge of starting the NoSQL server as a remote process (on the local machine) and stopping it. It will typically be used during integration testing.

The second set of Rules are those responsible for maintaining the database in a known state. Each supported backend has its own, and it can be understood as a connection to the defined database which is used to execute the operations required to maintain the stability of the system. Note that because NoSQL databases are heterogeneous, each system requires its own implementation.

And finally two annotations are provided, @UsingDataSet and @ShouldMatchDataSet (thank you so much, Arquillian people, for the names), to specify the locations of datasets and expected datasets.
MongoDb Example

Now I am going to explain a very simple example of how to use NoSQLUnit; for a full explanation of all the features provided, please read the documentation online or download it in pdf format. To use NoSQLUnit with MongoDb you only need to add the next dependency:

<dependency>
    <groupId>com.lordofthejars</groupId>
    <artifactId>nosqlunit-mongodb</artifactId>
    <version>0.3.0</version>
</dependency>

The first step is defining which lifecycle management strategy is required for your tests. Depending on the kind of test you are implementing (unit test, integration test, deployment test, …) you will require an in-memory approach, managed approach or remote approach. For this example we are going to use the managed approach, using the ManagedMongoDb Rule, but note that in-memory MongoDb management is also supported (see the documentation for how).

The next step is configuring the MongoDb rule in charge of maintaining the MongoDb database in a known state by inserting and deleting defined datasets. You must register the MongoDbRule JUnit rule class, which requires a configuration parameter with information like host, port or database name. To make the developer's life easier and the code more readable, a fluent interface can be used to create these configuration objects. Let's see the code.

First comes a simple POJO class that will be used as the model class:

public class Book {

    private String title;
    private int numberOfPages;

    public Book(String title, int numberOfPages) {
        super();
        this.title = title;
        this.numberOfPages = numberOfPages;
    }

    public void setTitle(String title) {
        this.title = title;
    }

    public void setNumberOfPages(int numberOfPages) {
        this.numberOfPages = numberOfPages;
    }

    public String getTitle() {
        return title;
    }

    public int getNumberOfPages() {
        return numberOfPages;
    }
}

The next business class is responsible for managing access to the MongoDb server:

public class BookManager {

    private static final Logger LOGGER = LoggerFactory.getLogger(BookManager.class);
    private static final MongoDbBookConverter MONGO_DB_BOOK_CONVERTER = new MongoDbBookConverter();
    private static final DbObjectBookConverter DB_OBJECT_BOOK_CONVERTER = new DbObjectBookConverter();

    private DBCollection booksCollection;

    public BookManager(DBCollection booksCollection) {
        this.booksCollection = booksCollection;
    }

    public void create(Book book) {
        DBObject dbObject = MONGO_DB_BOOK_CONVERTER.convert(book);
        booksCollection.insert(dbObject);
    }
}

And now it is time for testing. In the next test we are going to validate that a book is inserted correctly into the database:

package com.lordofthejars.nosqlunit.demo.mongodb;

public class WhenANewBookIsCreated {

    @ClassRule
    public static ManagedMongoDb managedMongoDb = newManagedMongoDbRule()
            .mongodPath("optmongo").build();

    @Rule
    public MongoDbRule remoteMongoDbRule = new MongoDbRule(mongoDb().databaseName("test").build());

    @Test
    @UsingDataSet(locations = "initialData.json", loadStrategy = LoadStrategyEnum.CLEAN_INSERT)
    @ShouldMatchDataSet(location = "expectedData.json")
    public void book_should_be_inserted_into_repository() {
        BookManager bookManager = new BookManager(MongoDbUtil.getCollection(Book.class.getSimpleName()));
        Book book = new Book("The Lord Of The Rings", 1299);
        bookManager.create(book);
    }
}

See that first of all we are creating, using the ClassRule annotation, a managed connection to the MongoDb server. In this case we are configuring the MongoDb path programmatically, but it can also be set from the MONGO_HOME environment variable. See here the full description of all available parameters. This Rule will be executed when the test class is loaded and will start a MongoDb instance.
It will also shut down the server when all tests have been executed. The next Rule is executed before each test method and is responsible for maintaining the database in a known state. Note that we are only configuring the working database – in this case, the test one. And finally we annotate the test method with @UsingDataSet, indicating where to find the data to be inserted before the execution of each test, and with @ShouldMatchDataSet, locating the expected dataset:

initialData.json:

{
    "Book": [
        {"title": "The Hobbit", "numberOfPages": 293}
    ]
}

expectedData.json:

{
    "Book": [
        {"title": "The Hobbit", "numberOfPages": 293},
        {"title": "The Lord Of The Rings", "numberOfPages": 1299}
    ]
}

We are setting an initial dataset in the file initialData.json, located on the classpath at com/lordofthejars/nosqlunit/demo/mongodb/initialData.json, and an expected dataset called expectedData.json.

Final Notes

Although NoSQLUnit is at an early stage, the MongoDb part is almost finished; in the next releases new features and of course new databases will be supported. The next supported NoSQL engines will be Neo4J, Cassandra, HBase and CouchDb. Also read the documentation, where you will find a full explanation of each feature covered here. And finally, any suggestion, recommendation or advice you have will be welcomed. Keep Learning!

Full Code

Reference: NoSQLUnit 0.3.0 Released from our JCG partner Alex Soto at the One Jar To Rule Them All blog.

Getting Started with Spring Social

Like me, you will not have failed to notice the current rush to 'socialize' applications, whether it's adding a simple Facebook 'Like' button, a whole bunch of 'share' buttons or displaying timeline information. Everybody's doing it, including the Guys at Spring, and true to form they've come up with a rinky-dinky API called Spring Social that allows you to integrate your application with a number of Software as a Service (SaaS) feeds such as Twitter, Facebook, LinkedIn etc. This blog, and the following few, take a look at the whole social scene by demonstrating the use of Spring Social, and I'm going to start by getting very basic.

If you've seen the Spring Social samples, you'll know that they contain a couple of very good and complete 'quickstart' apps: one for Spring 3.0.x and another for Spring 3.1.x. In looking into these apps, the thing that struck me was the number of concepts you have to learn in order to appreciate just what's going on. This includes configuration, external authorization, feed integration, credential persistence etc. Most of this complexity stems from the fact that your user will need to log in to their Software as a Service (SaaS) account, such as Twitter, Facebook or QZone, so that your application can access their data 1. This is further complicated by the large number of SaaS providers around, together with the different authorization protocols they use. So, I thought that I'd try to break all this down into the individual components and explain how to build a useful app; however, I'm going to start with a little background.

The Guys at Spring have quite rightly realized that there are so many SaaS providers on the Internet that they'll never be able to code modules for all of them, so they've split the functionality into two parts. The first part comprises the spring-social-core and spring-social-web modules, which provide the basic connectivity and authorization code for every SaaS provider. Providing all this sounds like a mammoth task, but it's simplified in that to be a SaaS provider you need to implement what's known as the OAuth protocol. I'm not going into OAuth details just yet, but in a nutshell the OAuth protocol performs a complicated little jig that allows the user to share their SaaS data (i.e. stuff they have on Facebook etc.) with your application without handing their credentials to your application. There are at least three versions – 1.0, 1.0a and 2.0 – and SaaS providers are free to implement any version they like, often adding their own proprietary features.

The second part of this split consists of the SaaS provider modules that know how to talk to the individual service providers' servers at the lowest level. The Guys at Spring currently provide the basic services, which for the Western world are Facebook, LinkedIn and Twitter.
The benefit of taking this extensively modular approach is that there's also a whole bunch of other community-led modules that you can use:

- Spring Social 500px
- Spring Social BitBucket
- Spring Social Digg
- Spring Social Dropbox
- Spring Social Flattr
- Spring Social Flickr
- Spring Social Foursquare
- Spring Social Google
- Spring Social Instagram
- Spring Social Last.fm
- Spring Social Live (Windows Live)
- Spring Social Miso
- Spring Social Mixcloud
- Spring Social Nk
- Spring Social Salesforce
- Spring Social SoundCloud
- Spring Social Tumblr
- Spring Social Viadeo
- Spring Social Vkontakte
- Spring Social Weibo
- Spring Social Xing
- Spring Social Yammer
- Spring Social Security Module
- Spring Social Grails Plugin

This, however, is only a fraction of the number of services available: to see how large this list is, visit the AddThis web site and find out which services they support.

Back to the Code

Now, if you're like me, then when it comes to programming you'll hate security: from a development viewpoint it's a lot of faff, stops you from writing code and makes your life difficult, so I thought I'd start off by throwing all that stuff away and writing a small app that displays some basic SaaS data. This, it turns out, is possible because some SaaS providers, such as Twitter, serve both private and public data. Private data is the stuff that you need to log in for, whilst public data is available to anyone. In today's scenario, I'm writing a basic app that displays a Twitter user's timeline using the Spring Social Twitter module, and all you need for this is the screen name of a Twitter user.

To create the application, the first step is to create a basic Spring MVC project using the template section of the SpringSource Toolkit Dashboard. This provides a webapp that'll get you started. The second step is to add the following dependencies to your pom.xml file:

<!-- Twitter API -->
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-twitter</artifactId>
    <version>${org.springframework.social-twitter-version}</version>
</dependency>

<!-- CGLIB, only required and used for @Configuration usage: could be removed in future release of Spring -->
<dependency>
    <groupId>cglib</groupId>
    <artifactId>cglib-nodep</artifactId>
    <version>2.2</version>
</dependency>

The first dependency above is for Spring Social's Twitter API, whilst the second is required for configuring the application using Spring 3's @Configuration annotation. Note that you'll also need to specify the Twitter API version number by adding:

<org.springframework.social-twitter-version>1.0.2.RELEASE</org.springframework.social-twitter-version>

…to the <properties> section at the top of the file.

Step 3 is where you need to configure Spring. If you look at the Spring Social sample code, you'll notice that the Guys at Spring configure their apps using Java and the Spring 3 @Configuration annotation. This is because Java based configuration allows you a lot more flexibility than the original XML based configuration.

@Configuration
public class SimpleTwitterConfig {

    private static Twitter twitter;

    public SimpleTwitterConfig() {
        if (twitter == null) {
            twitter = new TwitterTemplate();
        }
    }

    /**
     * A proxy to a request-scoped object representing the simplest Twitter API
     * - one that doesn't need any authorization
     */
    @Bean
    @Scope(value = "request", proxyMode = ScopedProxyMode.INTERFACES)
    public Twitter twitter() {
        return twitter;
    }
}

All that the code above does is to provide Spring with a simple TwitterTemplate object via its Twitter interface.
Using @Configuration is strictly overkill for this basic application, but I will be building upon it in future blogs. For more information on the @Configuration annotation and Java based configuration, take a look at:

- Spring's Java Based Dependency Injection
- More Spring Java based DI

Having written the configuration class, the next thing to do is to sort out the controller. In this simple example, I've used a straightforward @RequestMapping handler that deals with URLs that look something like this:

<a href=timeline?id=roghughe>Grab Twitter User Time Line for @roghughe</a><br />

…and the code looks something like this:

@Controller
public class TwitterTimeLineController {

    private static final Logger logger = LoggerFactory.getLogger(TwitterTimeLineController.class);

    private final Twitter twitter;

    @Autowired
    public TwitterTimeLineController(Twitter twitter) {
        this.twitter = twitter;
    }

    @RequestMapping(value = "timeline", method = RequestMethod.GET)
    public String getUserTimeline(@RequestParam("id") String screenName, Model model) {
        logger.info("Loading Twitter timeline for :" + screenName);
        List<Tweet> results = queryForTweets(screenName);
        // Optional Step - format the Tweets into HTML
        formatTweets(results);
        model.addAttribute("tweets", results);
        model.addAttribute("id", screenName);
        return "timeline";
    }

    private List<Tweet> queryForTweets(String screenName) {
        TimelineOperations timelineOps = twitter.timelineOperations();
        List<Tweet> results = timelineOps.getUserTimeline(screenName);
        logger.info("Found Twitter timeline for :" + screenName + " adding " + results.size() + " tweets to model");
        return results;
    }

    private void formatTweets(List<Tweet> tweets) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        StateMachine<TweetState> stateMachine = createStateMachine(bos);
        for (Tweet tweet : tweets) {
            bos.reset();
            String text = tweet.getText();
            stateMachine.processStream(new ByteArrayInputStream(text.getBytes()));
            String out = bos.toString();
            tweet.setText(out);
        }
    }

    private StateMachine<TweetState> createStateMachine(ByteArrayOutputStream bos) {
        StateMachine<TweetState> machine = new StateMachine<TweetState>(TweetState.OFF);
        // Add some actions to the state machine
        machine.addAction(TweetState.OFF, new DefaultAction(bos));
        machine.addAction(TweetState.RUNNING, new DefaultAction(bos));
        machine.addAction(TweetState.READY, new ReadyAction(bos));
        machine.addAction(TweetState.HASHTAG, new CaptureTag(bos, new HashTagStrategy()));
        machine.addAction(TweetState.NAMETAG, new CaptureTag(bos, new UserNameStrategy()));
        machine.addAction(TweetState.HTTPCHECK, new CheckHttpAction(bos));
        machine.addAction(TweetState.URL, new CaptureTag(bos, new UrlStrategy()));
        return machine;
    }
}

The getUserTimeline method contains three steps: first it gets hold of some tweets, then it does a bit of formatting, and finally it puts the results into the model. In terms of this blog, getting hold of the tweets is the most important point, and you can see that this is done in the List<Tweet> queryForTweets(String screenName) method. This method has two steps: use the Twitter object to get hold of a TimelineOperations instance, and then use that object to query a timeline using the screen name as the argument. If you look at the Twitter interface, it acts as a factory object returning other objects that deal with different Twitter features: timelines, direct messaging, searching etc. I guess this is because the developers realized that Twitter itself encompasses so much functionality that if all the required methods were in one class, they'd have a God Object on their hands.
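Stripped of the controller plumbing, the unauthenticated lookup the article describes boils down to two calls; a minimal sketch (the screen name is just an example):

// No OAuth credentials supplied, so only public data is accessible.
Twitter twitter = new TwitterTemplate();
List<Tweet> tweets = twitter.timelineOperations().getUserTimeline("roghughe");
for (Tweet tweet : tweets) {
    System.out.println(tweet.getText());
}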
I've also included the optional step of converting the Tweets into HTML. To do this I've used the JAR from my State Machine project and blog, and you can see how this is done in the formatTweets(...) method. After putting the list of Tweets into the model as an attribute, the final thing to do is to write a JSP to display the data:

<ul>
    <c:forEach items='${tweets}' var='tweet'>
        <li><img src='${tweet.profileImageUrl}' align='middle'/>
            <c:out value='${tweet.createdAt}'/><br/>
            <c:out value='${tweet.text}' escapeXml='false'/></li>
    </c:forEach>
</ul>

If you implement the optional anchor tag formatting, the key thing to remember is to ensure that the formatted Tweet's HTML is picked up by the browser. This is achieved either by using the escapeXml='false' attribute of the c:out tag or by placing ${tweet.text} directly into the JSP. I haven't included any styling or a fancy front end in this sample, so if you run the code 2 you should see the user's timeline rendered as a plain list of tweets.

And that completes my simple introduction to Spring Social, but there's still a lot of ground to cover. In my next blog, I'll be taking a look at what's going on in the background.

1 I'm guessing that there are lots of privacy and data protection legality issues to consider here, especially if you use this API to store your users' data, and I'd welcome comments and observations on this.
2 The code is available on GitHub at git://github.com/roghughe/captaindebug.git in the social project.

Reference: Getting Started with Spring Social from our JCG partner Roger Hughes at the Captain Debug's Blog blog.

Spring Framework 3.2 M1 Released

SpringSource just announced the first milestone release toward Spring 3.2. The new release is now available from the SpringSource repository at http://repo.springsource.org/. Check out a quick tutorial on resolving these artifacts via Maven. This release includes:

- Initial support for asynchronous @Controller methods
- Early support for JCache-based cache providers
- Significant performance improvements in autowiring of non-singleton beans
- Initial delay support for @Scheduled
- Ability to choose between multiple executors with @Async
- Enhanced bean profile selection using the not (!) operator
- 48 bugs fixed, 8 new features and 36 improvements implemented

This is also the first release since Spring's move to GitHub and the first using the new Gradle build.

Some Java Code Geeks articles to kick-start you with Spring:

- Set up a Spring 3 development environment
- Gradle archetype for Spring applications
- Transaction configuration with JPA and Spring 3.1
- Spring 3.1 Cache Abstraction Tutorial

Happy Spring coding!

Rise above the Cloud hype with OpenShift

Are you tired of requesting a new development machine for your application? Are you sick of having to set up a new test environment for your application? Do you just want to focus on developing your application in peace, without 'dorking with the stack' all of the time? We hear you. We have been there too. Have no fear, OpenShift is here!

In this article we will walk you through the simple steps it takes to set up not one, not two, not three, but up to five new machines in the Cloud with OpenShift. You will have your applications deployed for development, testing or to present them to the world at large in minutes. No more messing around. We start with an overview of what OpenShift is, where it comes from and how you can get the client tooling set up on your workstation. You will then be taken on a tour of the client tooling as it applies to the entry level of OpenShift, called Express. In minutes you will be off and back to focusing on your application development, deploying to test it in OpenShift Express. When finished, you will just discard your test machine and move on. Once you have mastered this, it will be time to ramp up to the next level with OpenShift Flex. This opens up your options a bit, so you can do more with complex applications and deployments that might need a bit more firepower. After this you will be fully capable of ascending into the OpenShift Cloud when you choose, where you need it, and at a moment's notice. This is how development is supposed to be: development without stack distractions.

Introduction

There is a great amount of hype in the IT world right now about the Cloud, and there is no shortage of acronyms for the various areas that have been carved out, like IaaS, PaaS and SaaS. OpenShift is a Platform as a Service (PaaS) from Red Hat which provides you with a platform to run your applications. As a developer, you want to look at the environment where you put your applications as just a service being provided. You don't want to bother with how that service is constructed from a set of components, how they are configured or where they are running. You just want to make use of the service to deploy, develop, test and run your application. At this basic level, OpenShift provides a platform for your Java applications.

First let's take a quick look at where OpenShift comes from. It started at a company called Makara, based in Redwood City, Calif., which provided solutions to enable organizations to deploy, manage, monitor and scale their applications on both private and public clouds. Red Hat acquired Makara in November of 2010, and in the following year merged Red Hat technologies into a new project called OpenShift[1]. They launched a first project that initially provides two levels of service[2]: a shared hosting solution called Express and a dedicated hosting solution known as Flex. What makes this merging of technologies interesting for a Java developer is that Red Hat has included the next generation application platform based on JBoss AS 7 in OpenShift[3]. This brings a lightning fast application platform for all your development needs.

OpenShift Express

The OpenShift website states: "Express is a free, cloud-based application platform for Java, Perl, PHP, Python, and Ruby applications. It's super-simple—your development environment is also your deployment environment: git push, and you're in the cloud." This piques the interest, so let's give it a try and see if we can raise our web application into the clouds.
For this we have our jBPM Migration web application[4], which we will use as a running example for the rest of this exercise. Getting started in Express is well documented on the website as a quick start[5], which you can get to once you have signed up for a Red Hat Cloud (rhcloud) account. This quick start provides the four steps needed to get our application online, starting with the installation of the necessary client tools. This is outlined for Red Hat Enterprise Linux (RHEL), Fedora Linux, generic Linux distributions, Mac OS X and Windows. For RHEL and Fedora it is a simple package installation; for the rest it is a Ruby based gem installation, which we will leave for the reader to apply to her system.

Once the client tooling is installed, there are several commands of the form rhc-<command>. There is an online interface available, but most developers prefer the control offered by the command line client tools, so we will be making use of these. Here is an overview of what is available, with a brief description of each:

- rhc-create-domain – used to bind a registered rhcloud user to a domain in rhcloud. You can have a maximum of one domain per registered rhcloud user.
- rhc-create-app – used to create an application for a given rhcloud user, a given development environment (Java, Ruby, Python, Perl, PHP) and a given rhcloud domain. You can create up to five applications for a given domain. This will generate the full URI for your rhcloud instance, set up your rhcloud instance based on the environment you chose, and by default create a local git project for your chosen development environment.
- rhc-snapshot – used to create a local backup of a given rhcloud instance.
- rhc-ctl-app – used to control a given rhcloud application. Here you can add a database, check the status of the instance, start, stop, etc.
- rhc-tail-files – used to connect to a rhcloud application's log files and dump them into your command shell.
- rhc-user-info – used to look at a given rhcloud user, the defined domains and created applications.
- rhc-chk – used to run a simple configuration check on your setup.

Create your domain

To get started with our demo application we need to do a few simple things to get an Express instance set up for hosting our Java application, beginning with a domain.

# We need to create the domain for Express to start setting up
# our URL with the client tooling using
# rhc-create-domain -n domainname -l rhlogin
#
$ rhc-create-domain --help

Usage: /usr/bin/rhc-create-domain
Bind a registered rhcloud user to a domain in rhcloud.

NOTE: to change ssh key, please alter your ~/.ssh/libra_id_rsa and
~/.ssh/libra_id_rsa.pub key, then re-run with --alter

-n|--namespace namespace   Namespace for your application(s) (alphanumeric - max 16 chars) (required)
-l|--rhlogin rhlogin       Red Hat login (RHN or OpenShift login with OpenShift Express access) (required)
-p|--password password     RHLogin password (optional, will prompt)
-a|--alter                 Alter namespace (will change urls) and/or ssh key
-d|--debug                 Print Debug info
-h|--help                  Show Usage info

# So we set up one for our Java application. Note that we have already
# set up our ssh keys for OpenShift; if you have not yet done that,
# the tooling will walk you through it.
#
$ rhc-create-domain -n inthe -l [rhcloud-user] -p [mypassword]

OpenShift Express key found at /home/[homedir]/.ssh/libra_id_rsa. Reusing...
Contacting https://openshift.redhat.com
Creation successful

You may now create an application.
Please make note of your local config file in /home/[homedir]/.openshift/express.conf, which has been created and populated for you.

Create your application

Next we want to create our application, which means we want to tell OpenShift Express which stack we need. This is done with the rhc-create-app client tool.

# Let's take a look at the options available before we set up a Java
# instance for our application.
#
$ rhc-create-app --help
Contacting https://openshift.redhat.com to obtain list of cartridges...
(please excuse the delay)

Usage: /usr/bin/rhc-create-app
Create an OpenShift Express app.

-a|--app application   Application name (alphanumeric - max 16 chars) (required)
-t|--type type         Type of app to create (perl-5.10, jbossas-7.0, wsgi-3.2, rack-1.1, php-5.3) (required)
-l|--rhlogin rhlogin   Red Hat login (RHN or OpenShift login with OpenShift Express access) (Default: xxxxxxxxx)
-p|--password password RHLogin password (optional, will prompt)
-r|--repo path         Git Repo path (defaults to ./$app_name)
-n|--nogit             Only create remote space, don't pull it locally
-d|--debug             Print Debug info
-h|--help              Show Usage info

# It seems we can choose between several, but we want the jbossas-7.0
# stack (called a cartridge). Provide a user, password and location
# for the git repo to be created, called 'jbpmmigration' (see the
# documentation for the defaults). Let's watch the magic happen!
#
$ rhc-create-app -a jbpmmigration -t jbossas-7.0 -l [rhcloud-user] -p [mypassword] -r /home/[homedir]/git-projects/jbpmmigration

Found a bug? Post to the forum and we'll get right on it.
IRC: #openshift on freenode
Forums: https://www.redhat.com/openshift/forums

Attempting to create remote application space: jbpmmigration
Contacting https://openshift.redhat.com
API version: 1.1.1
Broker version: 1.1.1

RESULT:
Successfully created application: jbpmmigration

Checking ~/.ssh/config
Contacting https://openshift.redhat.com
Found rhcloud.com in ~/.ssh/config... No need to adjust
Now your new domain name is being propagated worldwide (this might take a minute)...
Pulling new repo down
Warning: Permanently added 'jbpmmigration-inthe.rhcloud.com,50.17.167.44' (RSA) to the list of known hosts.
Confirming application jbpmmigration is available
Attempt # 1

Success! Your application is now published here:

http://jbpmmigration-inthe.rhcloud.com/

The remote repository is located here:

ssh://1806d6b78bb844d49378874f222f4403@jbpmmigration-inthe.rhcloud.com/~/git/jbpmmigration.git/

To make changes to your application, commit to jbpmmigration/.
Then run 'git push' to update your OpenShift Express space.

If we take a look at the given repo path we find a git-projects/jbpmmigration git repository. The page is already live at http://jbpmmigration-ishereon.rhcloud.com/ – it is just a splash screen to get you started, so now we move on to deploying our existing jBPM Migration project. Note that if you decide to alter your domain name, you will have to adjust the git repository config file to reflect where the remote repository is located (see the 'ssh://…' line above, and the sketch below).
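For reference, the remote the tooling created ends up in the repository's .git/config; if you later alter the domain, this is the entry to adjust (the URL is taken from the transcript above, the fetch line is standard git):

[remote "origin"]
    url = ssh://1806d6b78bb844d49378874f222f4403@jbpmmigration-inthe.rhcloud.com/~/git/jbpmmigration.git/
    fetch = +refs/heads/*:refs/remotes/origin/*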
First let's look at the provided README in our git project, which gives some insight into the repository layout.

Repo layout
===========
deployments/ - location for built wars (details below)
src/ - maven src structure
pom.xml - maven build file
.openshift/ - location for openshift specific files
.openshift/config/ - location for configuration files such as standalone.xml (used to modify jboss config such as datasources)
../data - for persistent data (also in env var OPENSHIFT_DATA_DIR)
.openshift/action_hooks/build - script that gets run on every push, just prior to starting your app

For this article we will only examine the deployments and src directories. You can just drop in your WAR files, remove the pom.xml file in the root of the project, and they will be deployed automatically. If you want to deploy exploded WAR files, you just add a file called '.dodeploy' as outlined in the README file. For real project development we want to push our code through the normal src directory structure, and this is also possible by working with the provided pom.xml file. The README file gives all the details needed to get you started.

Our demo application, jbpmmigration, also comes with a README file that provides the instructions to add the project contents to our new git repository, so we will run these commands to pull the files into our local project:

# placing our application into our express git repo.
#
$ cd jbpmmigration
$ git remote add upstream -m master git://github.com/eschabell/openshift-jbpmmigration.git
$ git pull -s recursive -X theirs upstream master

# now we need to push the content.
#
$ git push origin

[jbpmmigration maven build log output removed]
...
remote: [INFO] ------------------------------------------------------------------------
remote: [INFO] BUILD SUCCESS
remote: [INFO] ------------------------------------------------------------------------
remote: [INFO] Total time: 3.114s
remote: [INFO] Finished at: Mon Nov 14 10:26:57 EST 2011
remote: [INFO] Final Memory: 5M/141M
remote: [INFO] ------------------------------------------------------------------------
remote: ~/git/jbpmmigration.git
remote: Running .openshift/action_hooks/build
remote: Running .openshift/action_hooks/deploy
remote: Starting application...
remote: Done
remote: Running .openshift/action_hooks/post_deploy
To ssh://1806d6b78bb844d49378874f222f4403@jbpmmigration-inthe.rhcloud.com/~/git/jbpmmigration.git/
   410a1c9..7ea0003  master -> master

As you can see, we have now pushed our content to the rhcloud instance we created; it deployed the content and started our instance. Now we should be able to find our application online at http://jbpmmigration-ishereon.rhcloud.com/jbpmmigration_upload-0.4/.

The final step would be that you are finished working on this application and want to free up the instance for a new application. You can make a backup with the rhc-snapshot client tool and then remove the instance with the rhc-ctl-app client tool.

# Ready to get rid of our application now.
#
$ rhc-ctl-app -a jbpmmigration -l eschabell -c destroy
Password: ********

Contacting https://openshift.redhat.com
!!!! WARNING !!!! WARNING !!!! WARNING !!!!
You are about to destroy the jbpmmigration application.

This is NOT reversible, all remote data for this application will be removed.
Do you want to destroy this application (y/n): y

Contacting https://openshift.redhat.com
API version: 1.1.1
Broker version: 1.1.1

RESULT:
Successfully destroyed application: jbpmmigration

As you can see, it is really easy to get started with the five free instances you have to play with for your application development.
You might notice that there are limitations: there is no ability to use specific integrated monitoring tooling, auto-scaling features are missing, and control of the configuration is limited. For those needing more access and features, take a look at the next step up with OpenShift Flex[6]. This completes our tour of the OpenShift Express project, where we provided you with a glimpse of the possibilities that await you and your applications. It was a breeze to create your domain, define your application's needs and import your project into the provided git repository. After pushing your changes to the new Express instance you are off and testing your application development in the cloud. This is real. This is easy. Now get out there and raise your code above the cloud hype.

Related links:
OpenShift, https://openshift.redhat.com.
Project overview OpenShift, https://openshift.redhat.com/app/platform.
JBoss AS7 in the Cloud, http://www.jboss.org/openshift.
jBPM Migration project web application, https://github.com/eschabell/jbpmmigration_upload.
OpenShift Express Quick Start, https://openshift.redhat.com/app/express#quickstart.
OpenShift Flex Quick Start, https://openshift.redhat.com/app/flex#quickstart.

Reference: Rise above the Cloud hype with OpenShift from our JCG partner Eric D. Schabell at the Thoughts on Middleware, Linux, software, cycling and other news… blog....

How I explained Dependency Injection to My Team

Recently our company started developing a new Java-based web application, and after an evaluation process we decided to use Spring. But many of the team members are not aware of Spring and Dependency Injection principles, so I was asked to give a crash course on what Dependency Injection is, along with the basics of Spring. Instead of reciting all the theory about IoC/DI, I thought I would explain it with an example.

Requirement: We receive a customer address and we need to validate it. After some evaluation we thought of using the Google Address Validation Service.

Legacy (bad) approach: Just create an AddressVerificationService class and implement the logic. Assume GoogleAddressVerificationService is a service provided by Google which takes an address as a String and returns longitude/latitude.

class AddressVerificationService {
    public String validateAddress(String address) {
        GoogleAddressVerificationService gavs = new GoogleAddressVerificationService();
        String result = gavs.validateAddress(address);
        return result;
    }
}

Issues with this approach:
1. If you want to change your Address Verification Service provider, you need to change the logic.
2. You can't unit test with a dummy AddressVerificationService (using mock objects).

For some reason the client asks us to support multiple Address Verification Service providers, and we need to determine which service to use at runtime. To accommodate this you might think of changing the above class as below:

class AddressVerificationService {
    // This method validates the given address and returns longitude/latitude details.
    public String validateAddress(String address) {
        String result = null;
        int serviceCode = 2; // read this code value from a config file
        if (serviceCode == 1) {
            GoogleAddressVerificationService googleAVS = new GoogleAddressVerificationService();
            result = googleAVS.validateAddress(address);
        } else if (serviceCode == 2) {
            YahooAddressVerificationService yahooAVS = new YahooAddressVerificationService();
            result = yahooAVS.validateAddress(address);
        }
        return result;
    }
}

Issues with this approach:
1. Whenever you need to support a new service provider, you need to add or change logic in the if-else-if chain.
2. You still can't unit test with a dummy AddressVerificationService (using mock objects).

IoC/DI approach: In the above approaches, AddressVerificationService is taking control of creating its dependencies, so whenever there is a change in its dependencies, AddressVerificationService itself has to change. Now let us rewrite the AddressVerificationService using the IoC/DI pattern.

class AddressVerificationService {
    private AddressVerificationServiceProvider serviceProvider;

    public AddressVerificationService(AddressVerificationServiceProvider serviceProvider) {
        this.serviceProvider = serviceProvider;
    }

    public String validateAddress(String address) {
        return this.serviceProvider.validateAddress(address);
    }
}

interface AddressVerificationServiceProvider {
    public String validateAddress(String address);
}

Here we are injecting the AddressVerificationService dependency, AddressVerificationServiceProvider. Now let us implement the AddressVerificationServiceProvider with multiple provider services.
class YahooAVS implements AddressVerificationServiceProvider {
    @Override
    public String validateAddress(String address) {
        System.out.println("Verifying address using YAHOO AddressVerificationService");
        return yahooAVSAPI.validate(address);
    }
}

class GoogleAVS implements AddressVerificationServiceProvider {
    @Override
    public String validateAddress(String address) {
        System.out.println("Verifying address using Google AddressVerificationService");
        return googleAVSAPI.validate(address);
    }
}

Now the client can choose which provider's service to use as follows:

AddressVerificationService verificationService = null;
AddressVerificationServiceProvider provider = null;
provider = new YahooAVS();  // to use the Yahoo AVS
provider = new GoogleAVS(); // to use the Google AVS
verificationService = new AddressVerificationService(provider);
String lnl = verificationService.validateAddress("HitechCity, Hyderabad");
System.out.println(lnl);

For unit testing we can implement a mock AddressVerificationServiceProvider.

class MockAVS implements AddressVerificationServiceProvider {
    @Override
    public String validateAddress(String address) {
        System.out.println("Verifying address using MOCK AddressVerificationService");
        return "<response><longitude>123</longitude><latitude>4567</latitude></response>";
    }
}

AddressVerificationServiceProvider provider = null;
provider = new MockAVS(); // to use the mock AVS
AddressVerificationService verificationService = new AddressVerificationService(provider);
String lnl = verificationService.validateAddress("Somajiguda, Hyderabad");
System.out.println(lnl);

With this approach we eliminated the issues of the non-IoC/DI based approaches above:
1. We can provide support for as many providers as we wish. Just implement AddressVerificationServiceProvider and inject it.
2. We can unit test using dummy data via a mock implementation.

So by following the Dependency Injection principle we can create interface-based, loosely-coupled and easily testable services.
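Since the team was adopting Spring, it is worth sketching how this last step (choosing and injecting the provider) can be delegated to the container. The following is a minimal, hypothetical sketch assuming Spring 3's Java-based configuration; the AddressVerificationConfig class name is mine, while the service and provider classes are the ones from the example above.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical Spring wiring for the example above: the container, not the
// client, decides which AddressVerificationServiceProvider gets injected.
@Configuration
public class AddressVerificationConfig {

    @Bean
    public AddressVerificationServiceProvider addressVerificationServiceProvider() {
        // Swap in new YahooAVS() or new MockAVS() here without touching any client code.
        return new GoogleAVS();
    }

    @Bean
    public AddressVerificationService addressVerificationService() {
        return new AddressVerificationService(addressVerificationServiceProvider());
    }
}

The client then obtains the service from the application context (for example via new AnnotationConfigApplicationContext(AddressVerificationConfig.class)) and never references a concrete provider; switching providers, or mocking for tests, becomes a one-line configuration change. Equivalent XML bean definitions work just as well.

Reference: How I explained Dependency Injection to My Team from our JCG partner Siva Reddy at the My Experiments on Technology blog....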

Proof of Concept: Play! Framework

We are starting a new project and we have to choose the web framework. Our default choice is Grails, because the team already has experience with it, but I decided to give Play! and Scala a chance. Play! has a lot of cool things, for which it received many pluses in my evaluation, but in the end we decided to stick with Grails. It's not that Grails is perfect and meets all the requirements, but Play! is not sufficiently better to make us switch. Anyway, here's a list of areas where Play! failed my evaluation. Please correct me if I've got something wrong:

- template engine – UI developers were furious with the template engine used in the previous project, FreeMarker, because it wasn't null-safe: it blew up each time a chain of invocations contained a null. Play templates use Scala, and so they are not null-safe. Scala has a different approach to nulls, Option, but third-party libraries and our core code will be in Java, so we'd have to introduce some null-to-Option conversion, and it will get ugly. This question shows a way to handle the case, but the comments make me hesitant to use it. That's only part of the story: with all my respect and awe for static typing, the UI layer must use a simple scripting language. EL/JSTL is a good example; it doesn't explode if it doesn't find some value.
- static assets – this is hard, and I couldn't find anything about using Play! with a CDN or how to merge multiple assets into one file. Is there an easy way to do that?
- IDE support – the only way to edit the templates is through the Scala editor, but it doesn't have HTML support. This is not a deal-breaker, but tooling around the framework is a good thing to have.
- community – there is a good community around Play!, but I viewed it compared to Grails. Play! is an older framework, and it has 2.5k questions on Stack Overflow, while Grails has 7.5k.
- module fragmentation – some of the important modules that I found were only for 1.x, without direct replacements in 2.0.

Other factors:

- I won't be working with it; UI developers will. Although I might be fine with all the type-safety and peculiar Scala concepts, UI developers will probably not be.
- Scala is ugly – now bash me for that. Yes, I'm not a Scala guy, but this being a highly upvoted answer kind of drove me off. It looks like a low-level programming language and, relevant to the previous point, it definitely doesn't look OK to our UI developers.
- change of programming model – I mentioned Option vs null, but there are tons of other things. This is not a problem of Scala, of course; it is even what makes it the cool thing that has generated all the hype. But it's a problem that too many people will have to switch their perspective at the same time.
- we have been using Spring and Spring MVC a lot, and Play's integration with Spring isn't as smooth as that of Grails (which is built on top of Spring MVC): http://zeroturnaround.com/blog/play-framework-unfeatures-that-irk-my-inner-geek/

As you can see, many of the problems are not universal; they are relevant to our experience and expectations. You may not need to use a CDN, and your UI developers may be Scala gurus instead of Groovy developers. And as I said in the beginning, Play! definitely looks good and has a lot of cool things that I omitted here (the list would be long).

Reference: Proof of Concept: Play! Framework from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog blog....

Software Architects Need Not Apply

I saw an online job posting several years ago that listed a set of desired software development and programming skills and concluded with the statement, "Architects Need Not Apply." Joe Winchester has written that Those Who Can, Code; Those Who Can't, Architect (beware an extremely obnoxious Flash-based popup) and has stated that part of his proposed Hippocratic Oath for Programmers would be to "swear that my desire to enter the computing profession is not to become an architect." Andriy Solovey has written the post Do We Need Software Architects? 10 Reasons Why Not and Sergey Mikhanov has proclaimed Why I don't believe in software architects. More recent posts have talked of Frustration with the Role and Purpose of Architects on Software Projects and The frustrated architect. In this post, I look at some of the reasons software architects are often held in low esteem in the software development community.

I have been (and am) a software architect at times and a software developer at times. Often, I must move rapidly between the two roles. This has allowed me to see both sides of the issue, and I believe that the best software architects are those who do architecture work, design work, and lower-level implementation coding and testing.

In Chapter 5 ("The Second-System Effect") of The Mythical Man-Month, Frederick P. Brooks, Jr., wrote of the qualities and characteristics of a successful architect. These are listed next:

- An architect "suggests" ("not dictates") implementation, because the programmer/coder/builder has the "inventive and creative responsibility."
- An architect should have an idea of how to implement his or her architecture, but should be "prepared to accept any other way that meets the objectives as well."
- An architect should be "ready to forego credit for suggested improvements."
- An architect should "listen to the builder's suggestions for architecture improvements."
- An architect should strive for work to be "spare and clean," avoiding "functional ornamentation" and "extrapolation of functions that are obviated by changes in assumptions and purposes."

Although the first edition of The Mythical Man-Month was published more than 35 years ago in 1975, violations of Brooks's suggestions for being a successful architect remain, in my opinion, the primary reason why software architecture as a discipline has earned some disrespect in the software development community.

One of the problems developers often have with software architects is the feeling that the software architect is micromanaging their technical decisions. As Brooks suggests, successful architects need to listen to the developers' alternative suggestions and recommendations for improvements. Indeed, in some cases, the open-minded architect might even be willing to go with a significant architectural change if the benefits outweigh the costs. In my opinion, good architects (like good software developers) should be willing to learn and even expect to learn from others (including developers).

A common complaint among software developers regarding architects is that architects are so high-level that they miss important details or ignore important concerns with their idealistic architectures. I have found that I'm a better architect when I have recently worked with low-level software implementation. The farther and longer removed I am from design and implementation, the less successful I can be in helping architect the best solutions.
Software developers are more confident in the architect's vision when they know that the architect is capable of implementing the architecture himself or herself if needed. An architect needs to be working among the masses and not lounging in the ivory tower. Indeed, it would be nice if the title "software architect" were NOT so frequently seen as a euphemism for "can no longer code."

The longer I work in the software development industry, the more convinced I am that "spare and clean" should be the hallmarks of all good designs and architectures. Modern software principles seem to support this. Concepts like Don't Repeat Yourself (DRY) and You Ain't Gonna Need It (YAGNI) have become popular for good reason.

Some software architects have an inflated opinion of their own value due to their title or other recognition. For these types, it is very difficult to follow Brooks's recommendation to "forego credit" for architecture and implementation improvements. Software developers are much more likely to embrace the architect who shares credit as appropriate and does not take credit for the developers' ideas and work.

I think there is a place for software architecture, but a portion of our fellow software architects have harmed the reputation of the discipline. Following Brooks's suggestions can begin to improve the reputation of software architects and their discipline, but, more importantly, can lead to better and more efficient software solutions.

Reference: Software Architects Need Not Apply from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Java Thread deadlock – Case Study

This article describes the complete root cause analysis of a recent Java deadlock problem observed on a Weblogic 11g production system running on the IBM JVM 1.6. This case study will also demonstrate the importance of mastering Thread Dump analysis skills, including for the IBM JVM Thread Dump format.

Environment specifications

- Java EE server: Oracle Weblogic Server 11g & Spring 2.0.5
- OS: AIX 5.3
- Java VM: IBM JRE 1.6.0
- Platform type: Portal & ordering application

Monitoring and troubleshooting tools

- JVM Thread Dump (IBM JVM format)
- Compuware Server Vantage (Weblogic JMX monitoring & alerting)

Problem overview

A major stuck-Threads problem was observed & reported from Compuware Server Vantage, affecting 2 of our Weblogic 11g production managed servers and causing application impact and timeout conditions for our end users.

Gathering and validation of facts

As usual, a Java EE problem investigation requires gathering of technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:

· What is the client impact? MEDIUM (only 2 managed servers / JVMs affected out of 16)
· Recent change of the affected platform? Yes (new JMS-related asynchronous component)
· Any recent traffic increase to the affected platform? No
· How does this problem manifest itself? A sudden increase of Threads was observed, leading to rapid Thread depletion
· Did a Weblogic managed server restart resolve the problem? Yes, but the problem returned after a few hours (unpredictable & intermittent pattern)

- Conclusion #1: The problem is related to an intermittent stuck-Threads behaviour affecting only a few Weblogic managed servers at a time
- Conclusion #2: Since the problem is intermittent, a global root cause such as a non-responsive downstream system is not likely

Thread Dump analysis – first pass

The first thing to do when dealing with stuck Thread problems is to generate a JVM Thread Dump. This is a golden rule regardless of your environment specifications & problem context. A JVM Thread Dump snapshot provides you with crucial information about the active Threads and what type of processing / tasks they are performing at that time. Now back to our case study: an IBM JVM Thread Dump (javacore.xyz format) was generated, and it revealed the following Java Thread deadlock condition:

1LKDEADLOCK Deadlock detected !!!
NULL --------------------- NULL
2LKDEADLOCKTHR Thread '[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'' (0x000000012CC08B00)
3LKDEADLOCKWTR is waiting for:
4LKDEADLOCKMON sys_mon_t:0x0000000126171DF8 infl_mon_t: 0x0000000126171E38:
4LKDEADLOCKOBJ weblogic/jms/frontend/FESession@0x07000000198048C0/0x07000000198048D8:
3LKDEADLOCKOWN which is owned by:
2LKDEADLOCKTHR Thread '[STUCK] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'' (0x000000012E560500)
3LKDEADLOCKWTR which is waiting for:
4LKDEADLOCKMON sys_mon_t:0x000000012884CD60 infl_mon_t: 0x000000012884CDA0:
4LKDEADLOCKOBJ weblogic/jms/frontend/FEConnection@0x0700000019822F08/0x0700000019822F20:
3LKDEADLOCKOWN which is owned by:
2LKDEADLOCKTHR Thread '[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'' (0x000000012CC08B00)

This deadlock situation can be translated as per below:

- Weblogic Thread #8 is waiting to acquire an Object monitor lock owned by Weblogic Thread #10
- Weblogic Thread #10 is waiting to acquire an Object monitor lock owned by Weblogic Thread #8

Conclusion: both Weblogic Threads #8 & #10 are waiting on each other; forever!

Now before going any deeper into this root cause analysis, let me provide a high-level overview of Java Thread deadlocks.

Java Thread deadlock overview

Most of you are probably familiar with Java Thread deadlock principles, but did you ever experience a true deadlock problem? From my experience, true Java deadlocks are rare; I have only seen ~5 occurrences over the last 10 years. The reason is that most stuck-Thread-related problems are due to Thread hanging conditions (waiting on a remote IO call etc.) and are not a true deadlock condition between Threads. A Java Thread deadlock is a situation, for example, where Thread A is waiting to acquire an Object monitor lock held by Thread B, which is itself waiting to acquire an Object monitor lock held by Thread A. Both of these Threads will wait for each other forever. This situation can be visualized as a simple cycle: Thread A waits for a lock held by Thread B, which waits for a lock held by Thread A.

Thread deadlock is confirmed…now what can you do?

Once the deadlock is confirmed (most JVM Thread Dump implementations will highlight it for you), the next step is to perform a deeper-dive analysis by reviewing each Thread involved in the deadlock situation along with their current task & wait condition.
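Before we dig into the actual production stack traces, here is a minimal, self-contained sketch (my own illustration, not the application code from this case study) of the classic pattern described above: two threads acquiring the same two monitor locks in opposite order. The DeadlockDemo class name and the 100 ms pause are illustrative choices. Running it and taking a Thread Dump (e.g. kill -3 on the JVM process) should produce a deadlock section very similar to the one shown above.

import java.util.concurrent.TimeUnit;

// Minimal deadlock reproduction: each thread grabs one lock,
// then blocks forever trying to grab the lock the other thread holds.
public class DeadlockDemo {

    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        new Thread(new Runnable() {
            public void run() {
                synchronized (LOCK_A) {
                    pause(); // give thread-2 time to acquire LOCK_B
                    synchronized (LOCK_B) { // blocks forever once thread-2 holds LOCK_B
                        System.out.println("thread-1 acquired both locks");
                    }
                }
            }
        }, "thread-1").start();

        new Thread(new Runnable() {
            public void run() {
                synchronized (LOCK_B) {
                    pause(); // give thread-1 time to acquire LOCK_A
                    synchronized (LOCK_A) { // blocks forever once thread-1 holds LOCK_A
                        System.out.println("thread-2 acquired both locks");
                    }
                }
            }
        }, "thread-2").start();
    }

    private static void pause() {
        try {
            TimeUnit.MILLISECONDS.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

With that pattern in mind, let's return to the production Thread Dump.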
Find below the partial Thread Stack Trace from our problem case for each Thread involved in the deadlock condition:

** Please note that the real application Java package name was renamed for confidentiality purposes **

Weblogic Thread #8

'[STUCK] ExecuteThread: '8' for queue: 'weblogic.kernel.Default (self-tuning)'' J9VMThread:0x000000012CC08B00, j9thread_t:0x00000001299E5100, java/lang/Thread:0x070000001D72EE00, state:B, prio=1
(native thread ID:0x111200F, native priority:0x1, native policy:UNKNOWN)
Java callstack:
at weblogic/jms/frontend/FEConnection.stop(FEConnection.java:671(Compiled Code))
at weblogic/jms/frontend/FEConnection.invoke(FEConnection.java:1685(Compiled Code))
at weblogic/messaging/dispatcher/Request.wrappedFiniteStateMachine(Request.java:961(Compiled Code))
at weblogic/messaging/dispatcher/DispatcherImpl.syncRequest(DispatcherImpl.java:184(Compiled Code))
at weblogic/messaging/dispatcher/DispatcherImpl.dispatchSync(DispatcherImpl.java:212(Compiled Code))
at weblogic/jms/dispatcher/DispatcherAdapter.dispatchSync(DispatcherAdapter.java:43(Compiled Code))
at weblogic/jms/client/JMSConnection.stop(JMSConnection.java:863(Compiled Code))
at weblogic/jms/client/WLConnectionImpl.stop(WLConnectionImpl.java:843)
at org/springframework/jms/connection/SingleConnectionFactory.closeConnection(SingleConnectionFactory.java:342)
at org/springframework/jms/connection/SingleConnectionFactory.resetConnection(SingleConnectionFactory.java:296)
at org/app/JMSReceiver.receive()
……………………………………………………………………

Weblogic Thread #10

'[STUCK] ExecuteThread: '10' for queue: 'weblogic.kernel.Default (self-tuning)'' J9VMThread:0x000000012E560500, j9thread_t:0x000000012E35BCE0, java/lang/Thread:0x070000001ECA9200, state:B, prio=1
(native thread ID:0x4FA027, native priority:0x1, native policy:UNKNOWN)
Java callstack:
at weblogic/jms/frontend/FEConnection.getPeerVersion(FEConnection.java:1381(Compiled Code))
at weblogic/jms/frontend/FESession.setUpBackEndSession(FESession.java:755(Compiled Code))
at weblogic/jms/frontend/FESession.consumerCreate(FESession.java:1025(Compiled Code))
at weblogic/jms/frontend/FESession.invoke(FESession.java:2995(Compiled Code))
at weblogic/messaging/dispatcher/Request.wrappedFiniteStateMachine(Request.java:961(Compiled Code))
at weblogic/messaging/dispatcher/DispatcherImpl.syncRequest(DispatcherImpl.java:184(Compiled Code))
at weblogic/messaging/dispatcher/DispatcherImpl.dispatchSync(DispatcherImpl.java:212(Compiled Code))
at weblogic/jms/dispatcher/DispatcherAdapter.dispatchSync(DispatcherAdapter.java:43(Compiled Code))
at weblogic/jms/client/JMSSession.consumerCreate(JMSSession.java:2982(Compiled Code))
at weblogic/jms/client/JMSSession.setupConsumer(JMSSession.java:2749(Compiled Code))
at weblogic/jms/client/JMSSession.createConsumer(JMSSession.java:2691(Compiled Code))
at weblogic/jms/client/JMSSession.createReceiver(JMSSession.java:2596(Compiled Code))
at weblogic/jms/client/WLSessionImpl.createReceiver(WLSessionImpl.java:991(Compiled Code))
at org/springframework/jms/core/JmsTemplate102.createConsumer(JmsTemplate102.java:204(Compiled Code))
at org/springframework/jms/core/JmsTemplate.doReceive(JmsTemplate.java:676(Compiled Code))
at org/springframework/jms/core/JmsTemplate$10.doInJms(JmsTemplate.java:652(Compiled Code))
at org/springframework/jms/core/JmsTemplate.execute(JmsTemplate.java:412(Compiled Code))
at org/springframework/jms/core/JmsTemplate.receiveSelected(JmsTemplate.java:650(Compiled Code))
at org/springframework/jms/core/JmsTemplate.receiveSelected(JmsTemplate.java:641(Compiled Code))
at org/app/JMSReceiver.receive()
……………………………………………………………

As you can see in the above Thread Stack Traces, the deadlock originated from our application code, which uses the Spring framework API for the JMS consumer implementation (very useful when not using MDBs). The Stack Traces are quite interesting, revealing that both Threads are in a race condition against the same Weblogic JMS consumer session / connection, leading to a deadlock situation:

- Weblogic Thread #8 is attempting to reset and close the current JMS connection
- Weblogic Thread #10 is attempting to use the same JMS Connection / Session in order to create a new JMS consumer
- Thread deadlock is triggered!

Root cause: non-Thread-safe Spring JMS SingleConnectionFactory implementation

A code review and a quick search of the Spring JIRA bug database revealed the following thread-safety defect, with a perfect correlation to the above analysis:

# SingleConnectionFactory's resetConnection is causing deadlocks with underlying OracleAQ's JMS connection
https://jira.springsource.org/browse/SPR-5987

A patch for Spring SingleConnectionFactory was released back in 2009 which involved adding a proper synchronized{} block in order to prevent Thread deadlock in the event of a JMS Connection reset operation:

synchronized (connectionMonitor) {
    // if condition added to avoid possible deadlocks when trying to reset the target connection
    if (!started) {
        this.target.start();
        started = true;
    }
}

Solution

Our team is currently planning to integrate this Spring patch into our production environment shortly. The initial tests performed in our test environment are positive.

Conclusion

I hope this case study has helped you understand a real-life Java Thread deadlock problem and how proper Thread Dump analysis skills can allow you to quickly pinpoint the root cause of stuck Thread related problems at the code level. Please don't hesitate to post any comment or question.
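As a closing practical note: besides reading Thread Dumps by hand, the JDK (1.6+) can report monitor deadlocks programmatically via JMX. Below is a small sketch, my own addition rather than part of the original case study, using the standard java.lang.management API; the DeadlockDetector class name is mine. Such a check could run inside a watchdog thread and complement monitoring tools like Compuware Server Vantage.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Sketch of a programmatic deadlock check using the standard JMX ThreadMXBean.
public class DeadlockDetector {

    public static void logDeadlockedThreads() {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        // findDeadlockedThreads() returns null when no threads are deadlocked
        long[] deadlockedThreadIds = threadMXBean.findDeadlockedThreads();
        if (deadlockedThreadIds == null) {
            System.out.println("No deadlock detected");
            return;
        }
        ThreadInfo[] threadInfos = threadMXBean.getThreadInfo(deadlockedThreadIds);
        for (ThreadInfo threadInfo : threadInfos) {
            if (threadInfo == null) {
                continue; // thread exited between the two calls
            }
            System.out.println("Deadlocked Thread: " + threadInfo.getThreadName()
                    + " | waiting on: " + threadInfo.getLockName()
                    + " | held by: " + threadInfo.getLockOwnerName());
        }
    }

    public static void main(String[] args) {
        logDeadlockedThreads();
    }
}

Run against the DeadlockDemo shown earlier, this prints both deadlocked threads along with the monitor each one is waiting on and who holds it, mirroring the information in the javacore deadlock section.

Reference: Java Thread deadlock – Case Study from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog....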