

Documenting Compliance – About TCKs, Specifications and Testing

Working with software specifications is hard. No matter the exact field, you end up with the big question: is everything that was ever specified actually implemented and tested? Back in the days of waterfall-driven methodologies this was an issue, and even today, at the time of writing, agility and user stories still don't guarantee a perfect fit. Many of today's agile approaches combine well with Test Driven Development or even Behavior Driven Development concepts to turn the issue upside down. Instead of asking "Does my code cover every single sentence of the written specification?", they simply assume that writing the tests first is a valid way of achieving the needed coverage. The downside here is the lack of documentation, which can easily happen. Additionally, you never find a suitable document workflow to refactor tests back into one single document. What might work for individual solutions and projects comes to an end when you look at things like Technology Compatibility Kits (TCKs), which by nature are more or less gathered from some kind of document-based written specification.

TCKs for the Java platforms

Diving into this kind of topic is always a good candidate to polarize the development community, especially because documentation is still a topic which tends to be forgotten or delayed completely. To me, documentation is key on many levels. On a framework level it assures that your users don't struggle, and you lay a good ground for quick adoption; to me, the Arquillian project and team did an amazing job here in their first years. Even on a project level it makes sense, to quickly swap new team members in and out without losing knowledge. But there is another area which doesn't simply benefit from documentation but has a strong relation to it: the Java TCKs. All Java platforms define Java Specification Requests (JSRs) as the point for language improvements.
A Technology Compatibility Kit (TCK) is a suite of tests that at least nominally checks a particular alleged implementation of a Java Specification Request (JSR) for compliance. Given the fact that most specifications exist as Office-like documents and are pushed around as PDFs for review and comments, it is nearly impossible to say whether a TCK has a defined coverage of the original specification at all. At best this is scary. Most of the time it is annoying, because Reference Implementations (RIs) simply forget to cover parts of the spec and the user has to handle the resulting bugs or behaviors in specific ways. If that is possible at all. Just a short note here regarding the availability of TCKs: most of them aren't available as of today but are subject to license terms and financial agreements. Hopefully this is going to change with the upcoming changes to the Java Community Process.

Some JBoss goodness to cure documentation

But some bright minds came up with a solution. It probably isn't a big surprise that a great effort came out of a couple of RedHatters. A small project, initially created as part of the hibernate-validator project (the RI for Bean Validation), is here to cure the problems. The largely unknown and itself mostly undocumented jboss-test-audit project calls itself "Utility classes for TCK Test Coverage Report", which perfectly nails it. It is a very lightweight but still powerful little addition to any RI which post-processes sources for special annotations to gather a coverage report for any project that has the goal of implementing a specification. It is licensed under the Apache License, Version 2.0, and you only need a few steps to get it up and running against your own setup. It all begins with the specification. This is an XML document which defines the different sections and required assertions.
<specification>
    <section id="1" title="Chapter 1 - Introduction"/>
    <section id="2" title="Chapter 2 - What's new">
        <assertion id="a">
            <text>A simple sample test</text>
        </assertion>
    </section>
</specification>

This document is the baseline for your tests. You now need to go ahead and equip all your tests with the relevant section and assertion information. This might look like the following:

@SpecVersion(spec = "spectests", version = "1.0.0")
public class AppTest {

    @Test
    @SpecAssertion(section = "2", id = "a")
    public void simpleTestForAssertion() {
        App app = new App();
        assertEquals(app.sayHello("Markus"), "Hello Markus");
    }
}

Combined with a bit of Maven magic (the maven-processor-plugin), all the annotations are parsed and a nice report is generated about the overall coverage. If you want to have a look at the complete bootstrap example, it is available online.

The Hard Parts

This obviously is a no-brainer. Adding some annotations to your tests will not be the hardest thing you ever did. What is really hard is converting your documentation into that fancy audit XML format. There are plenty of ways to do this. Given that most of the companies leading a JSR have some kind of hard-core document management in place, this should be a once-in-a-lifetime thing to implement. If you're working with Microsoft Word, you could also use the available XML schema to write well-formed documents with it (which is a pain! Don't do it!).

Plenty of Ideas

The little utility classes work comparably well, but there is still plenty of room for improvement. It might be a valid idea to have some supportive information here, like issue numbers or other references. I also would like to be able to use AsciiDoc in the documentation. But I'm not complaining, because I am not going to change it myself. If anybody is interested, the complete thing is available online, and I believe those guys know how community works and accept contributions.
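The "Maven magic" mentioned above amounts to wiring the jboss-test-audit annotation processor into the maven-processor-plugin so it runs over your test sources. The sketch below shows the general shape of that configuration; the plugin coordinates and the processor class name are assumptions from memory and should be verified against the jboss-test-audit version you use:

```xml
<plugin>
    <groupId>org.bsc.maven</groupId>
    <artifactId>maven-processor-plugin</artifactId>
    <executions>
        <execution>
            <id>generate-coverage-report</id>
            <phase>process-test-sources</phase>
            <goals>
                <goal>process-test</goal>
            </goals>
            <configuration>
                <!-- annotation processor shipped with jboss-test-audit;
                     class name may differ between versions -->
                <processors>
                    <processor>org.jboss.test.audit.report.CoverageProcessor</processor>
                </processors>
            </configuration>
        </execution>
    </executions>
</plugin>
```

The processor reads the @SpecAssertion annotations during compilation and writes the coverage report into the build output directory.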
Future Wishes for the JCP

Given this simple and easy approach, it would be a good thing to foster its adoption along with JSRs. So if you like it, approach the EC member you trust, make him/her aware of it, and put it as an idea on their list.

Reference: Documenting Compliance – About TCKs, Specifications and Testing from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

Unit Testing Tip: Create Descriptive Tests

Your unit tests should be as descriptive as possible. The feedback they give you should be so clear that you don't even need to fire up the debugger and step through the code line by line to inspect your local variables. Why? Because that takes time, and we're lazy, right? In order to achieve this you need to create descriptive tests. There are different approaches to realize that. Here are two of them.

Add Assert Messages

Assert messages can usually be specified as an additional parameter of your test assert, and they appear as the failure message when that specific test case fails. In JUnit, for instance, you'd specify your assert message like:

assertEquals("The firstname of the two people should match if the clone was successful", "Fritz", person1.getFirstname());

In MSTest, on the other hand, the assert message has to be added as the last parameter:

Assert.AreEqual("Fritz", person1.Firstname, "The firstname of the two people should match if the clone was successful");

Caution, don't "over-engineer". I treat these messages similarly to other code comments: just add them if they add meaningful information. Otherwise they're waste, and hence treat them as such.

Prefer Explicit Asserts

What I mean by explicit asserts is to use the correct assert for the kind of operation you're performing. For instance, if you need to perform an equality check as in the assert mentioned before, don't use a boolean assert statement. (Here is an example of a dummy QUnit test case.)

test('Should correctly clone two people', 1, function(){
    //...
    ok(person1.firstname === 'Fritz', 'I expect both names to match if the clone operation succeeded');
});

The corresponding result in the output window doesn't tell us much, right? What you can say is that the person's firstname didn't match what you expected, and as such that the clone operation probably didn't succeed. But why? What was its actual value then?
Use equal instead:

test('Should correctly clone two people', 1, function(){
    //...
    equal(person1.firstname, 'Fritz', 'I expect both names to match if the clone operation succeeded');
});

This time the outcome is much more descriptive. It doesn't only tell you that the operation failed, but also shows you the expected as well as the actual value. This might give you an immediate hint where the problem could be.

Reference: Unit Testing Tip: Create Descriptive Tests from our JCG partner Juri Strumpflohner at the Juri Strumpflohner's TechBlog blog.
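The same boolean-versus-equality distinction exists in JUnit: assertTrue can only report that a condition was false, while assertEquals also reports the expected and actual values. The sketch below demonstrates the difference with plain Java stand-ins for the two asserts (so it runs without a JUnit dependency; the names and failure-message format merely mimic JUnit's):

```java
public class DescriptiveAsserts {

    // Mimics JUnit's assertTrue: on failure you only learn the condition was false.
    static void assertTrue(String message, boolean condition) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }

    // Mimics JUnit's assertEquals: on failure you also see expected vs actual.
    static void assertEquals(String message, Object expected, Object actual) {
        if (!java.util.Objects.equals(expected, actual)) {
            throw new AssertionError(message + " expected:<" + expected + "> but was:<" + actual + ">");
        }
    }

    public static void main(String[] args) {
        String firstname = "Max"; // pretend a buggy clone operation produced this

        try {
            assertTrue("names should match", firstname.equals("Fritz"));
        } catch (AssertionError e) {
            System.out.println(e.getMessage()); // no hint what the actual value was
        }

        try {
            assertEquals("names should match", "Fritz", firstname);
        } catch (AssertionError e) {
            System.out.println(e.getMessage()); // shows both expected and actual
        }
    }
}
```

Running it prints `names should match` for the boolean assert, but `names should match expected:<Fritz> but was:<Max>` for the equality assert, which points you straight at the problem.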

How design for testability can improve test automation capability

Introduction

Testability refers to the capability of testing something. When this something is an IT solution, the most suitable way of doing it is automation. But is it always possible to automate tests? If not, what are the reasons? And how can testability be improved?

Test Categories

A piece of software is usually made to be used by a consumer, which could be a person, another piece of software, or a physical device. When deciding to automate tests, the first thing to decide is what the SUT (System Under Test) will be. This decision shapes the scope of the automation and the point of view that has to be simulated with the test automation framework. There can be several choices; here are some of the most used:

Unit Tests: here we want to test the smallest part of the code, that is, the methods. So the framework is typically another software program written in the same language as the SUT, which is able to interact with each single method and to create all the necessary context around it to exercise the logic and verify the expected results. The more this logic is coupled with other logic, the less testable the code is.

Integration Tests: here we want to test that two separate software components are able to talk to each other correctly. The framework is similar to a unit test framework; just the SUT is larger, because it involves more logic and separate components. In this kind of test it is interesting to check, for example, boundary cases and the exchange of data between different components. As for unit tests, we must be able to isolate the components we want to test; indeed, too many dependencies can undermine the efficacy of the test.

Functional Tests: here we want to test the behavior of the component as perceived by the program consumer. So the framework is typically another software program that is able to reproduce all the interactions of the program consumer and inspect, internally or externally, the result or the state of the SUT.
Those programs can be driven by a scripting language or can be extensions of the same frameworks used for unit tests (the latter case is common in Rich Client Platform solutions, for example the NetBeans Platform, where the Jemmy/Jelly tools extend JUnit to automate functional tests). For this kind of test, typically complex context information has to be provided for execution, so strong dependencies on infrastructure technologies can make it difficult to simulate all the necessary conditions to execute the test.

Non-regression Tests: here we want to test the correctness of the logic implemented between two different versions of the software. The framework is typically a software program, like the one for unit tests, with the capability to access inputs, functions and outputs of the SUT. Here testability concerns are less involved, because dependencies on infrastructure and other software programs cannot be avoided, as the operational behavior of the system has to be reproduced in its integrity.

Limits to Testability

So far, what are the factors that can undermine testability? I would say that bad dependencies, or strong coupling, are the major reasons. There are several types of dependencies that are better avoided:

Logic to Resources: this form of dependency occurs when the logic is strongly coupled with a specific resource, which means changing this resource will directly impact the logic (resources could be technology infrastructure, file systems, legacy software). In this scenario the logic cannot be decoupled from those resources, so if you need to automate tests, you have to bring along all the related resources as well. If we are speaking about a database, having logic coupled with it means that probably for each test you need a specific database; this will create serious problems of performance and governance of the test suite.

Logic to Logic: this form of dependency occurs when loose coupling has not been applied.
That means there are no interfaces that permit decoupling two interacting software parts. Imagine that a method A of 10 lines uses a method B of a static object to perform its function. Well, 10 lines should not be difficult to test, but what is the call to method B hiding? This strong coupling means that whatever method B is doing, its logic will be included in the test of method A, and if method B needs resources or technologies that are not easy to reproduce in a test environment, you will probably decline to automate tests for method A.

Logic to UI: this form of dependency occurs when the source code has not been correctly organized into different layers, so logic that should be classified as purely application logic acts as presentation logic, showing popups for the user and waiting for input. But, especially for unit tests, it is not possible to reproduce the user interactions, so in this specific case it will not be possible to automate unit tests (and non-regression and integration automation can become difficult too).

How to increase test automation capability

Testability is the concern, and designing for testability from the beginning is the only way to guarantee a good level of test automation capability. For sure this is not possible in all projects; sometimes we must deal with legacy code, and in that case code review and refactoring is the only possible solution. I think some good points for increasing testability are:

Organize your source code into layers: this will permit you to clearly classify your code depending on its scope, and to avoid bad dependencies (for example you could realize a multilayered architecture, as I have been discussing in my previous posts).

Apply loose coupling: always create an interface for accessing functions, and use a technical environment that permits physically decoupling the contract from the logic.
When this principle is applied you are able to replace an implementation with any other that is made simply for the scope of the test. Applied to logic-to-logic coupling, it means you can create test double objects (fake objects, mocks, stubs) that let you focus only on the lines of code you are interested in testing; applied to logic-to-resource coupling, it lets you replace resources that are not easy to set up in test environments. For example, a database could be fully replaced with a lighter in-memory version or with some scripting done to simulate certain contexts.

Think modular: modularize your architecture, define and scope each module well, and make public only what is necessarily used by other modules. This will reduce the number of dependencies you can create, avoiding the useless ones.

Conclusions

When approaching a new design it is worth taking testability as a major concern. Once you launch a test automation campaign, you will easily reap the fruits. If instead you are dealing with legacy code, there are plenty of documented refactoring patterns that can help increase testability. What is sure is that sooner or later, if you don't design for testability, the overall capability of automating tests will become an issue.

Reference: How design for testability can improve test automation capability from our JCG partner Marco Di Stefano at the Refactoring Ideas blog.
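The loose-coupling advice above can be sketched in a few lines of plain Java: the logic depends on an interface, so a fake can stand in for the real resource during a test. All class names here are made up for illustration:

```java
// The contract: logic depends on this interface, not on a concrete database.
interface BookRepository {
    String findTitle(int bookId);
}

// Test double: a fake that is trivial to set up in a test environment,
// replacing a production implementation that would talk to a real database.
class FakeBookRepository implements BookRepository {
    public String findTitle(int bookId) {
        return bookId == 42 ? "Refactoring" : null;
    }
}

// The logic under test only ever sees the interface.
class BookService {
    private final BookRepository repository;

    BookService(BookRepository repository) {
        this.repository = repository;
    }

    String describe(int bookId) {
        String title = repository.findTitle(bookId);
        return title == null ? "unknown book" : "Book: " + title;
    }
}

public class TestabilitySketch {
    public static void main(String[] args) {
        // Exercise BookService without any infrastructure at all.
        BookService service = new BookService(new FakeBookRepository());
        System.out.println(service.describe(42));
        System.out.println(service.describe(7));
    }
}
```

Because BookService never names a concrete repository, the test focuses on its own 10 lines of logic, exactly the situation the "method A calling method B" example above is warning about.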

MOXy’s @XmlVariableNode – JSON Schema Example

We are in the process of adding the ability to generate a JSON Schema from your domain model to EclipseLink MOXy. To accomplish this we have created a new variable node mapping. In this post I will demonstrate the new mapping by mapping a Java model to a JSON Schema. You can try this out today using a nightly build of EclipseLink 2.6.0.

JSON Schema (input.json/Output)

Below is the "Basic Example" from the JSON Schema documentation. Note how the type has many properties, but they don't appear as a JSON array. Instead they appear as separate JSON objects keyed on the property name.

{
    "title": "Example Schema",
    "type": "object",
    "properties": {
        "firstName": {
            "type": "string"
        },
        "lastName": {
            "type": "string"
        },
        "age": {
            "description": "Age in years",
            "type": "integer",
            "minimum": 0
        }
    },
    "required": ["firstName", "lastName"]
}

Java Model

Below is the Java model we will use for this example.

JsonSchema (Properties Stored in a List)

In this Java representation of the JSON Schema we have a class that has a collection of Property objects. Instead of the default representation of the collection (see: Binding to JSON & XML – Handling Collections), we want each Property to be keyed by its name. We can do this using the @XmlVariableNode annotation. With it we specify the field/property from the target object that should be used as the key.

package blog.variablenode.jsonschema;

import java.util.*;
import javax.xml.bind.annotation.*;
import org.eclipse.persistence.oxm.annotations.XmlVariableNode;

@XmlAccessorType(XmlAccessType.FIELD)
public class JsonSchema {

    private String title;

    private String type;

    @XmlElementWrapper
    @XmlVariableNode("name")
    public List<Property> properties;

    private List<String> required;

}

JsonSchema (Properties Stored in a Map)

In this version of the JsonSchema class we have changed the type of the properties property from List<Property> to Map<String, Property>.
The annotation remains the same; the difference is that when @XmlVariableNode is used on a Map, the variable node name is used as the map key.

package blog.variablenode.jsonschema;

import java.util.*;
import javax.xml.bind.annotation.*;
import org.eclipse.persistence.oxm.annotations.XmlVariableNode;

@XmlAccessorType(XmlAccessType.FIELD)
public class JsonSchema {

    private String title;

    private String type;

    @XmlElementWrapper
    @XmlVariableNode("name")
    public Map<String, Property> properties;

    private List<String> required;

}

Property

To prevent the name field from being marshalled we need to annotate it with @XmlTransient (see: JAXB and Unmapped Properties).

package blog.variablenode.jsonschema;

import javax.xml.bind.annotation.*;

@XmlAccessorType(XmlAccessType.FIELD)
public class Property {

    @XmlTransient
    private String name;

    private String description;

    private String type;

    private Integer minimum;

}

Demo Code

Below is some sample code that you can use to prove that everything works.

package blog.variablenode.jsonschema;

import java.util.*;
import javax.xml.bind.*;
import javax.xml.transform.stream.StreamSource;
import org.eclipse.persistence.jaxb.JAXBContextProperties;

public class Demo {

    public static void main(String[] args) throws Exception {
        Map<String, Object> properties = new HashMap<String, Object>();
        properties.put(JAXBContextProperties.MEDIA_TYPE, "application/json");
        properties.put(JAXBContextProperties.JSON_INCLUDE_ROOT, false);
        JAXBContext jc = JAXBContext.newInstance(new Class[] {JsonSchema.class}, properties);

        Unmarshaller unmarshaller = jc.createUnmarshaller();
        StreamSource json = new StreamSource("src/blog/variablenode/jsonschema/input.json");
        JsonSchema jsonSchema = unmarshaller.unmarshal(json, JsonSchema.class).getValue();

        Marshaller marshaller = jc.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(jsonSchema, System.out);
    }

}

External Metadata

MOXy also offers an external mapping document which allows you to provide metadata for third-party classes or apply
alternate mappings for your model (see: Mapping Objects to Multiple XML Schemas – Weather Example). Below is the mapping document for this example.

<?xml version="1.0"?>
<xml-bindings xmlns="" package-name="blog.variablenode.jsonschema" xml-accessor-type="FIELD">
    <java-types>
        <java-type name="JsonSchema">
            <java-attributes>
                <xml-variable-node java-attribute="properties" java-variable-attribute="name">
                    <xml-element-wrapper/>
                </xml-variable-node>
            </java-attributes>
        </java-type>
        <java-type name="Property">
            <java-attributes>
                <xml-transient java-attribute="name"/>
            </java-attributes>
        </java-type>
    </java-types>
</xml-bindings>

Reference: MOXy's @XmlVariableNode – JSON Schema Example from our JCG partner Blaise Doughan at the Java XML & JSON Binding blog.

JMeter custom function implementation

JMeter provides functions that can be used in samplers. While writing a complex test plan you may feel that JMeter is lacking some methods, so you use a Beanshell script to define your own custom method, and JMeter invokes the Beanshell interpreter to run the script. This works fine as long as you don't generate high load (a high number of threads), but once JMeter tries to generate high load it runs out of resources and slows down dramatically. If JMeter custom functions are used instead, JMeter is able to generate high load effortlessly. The only problem is figuring out the implementation requirements and how to integrate with JMeter. There is hardly any documentation provided by JMeter about custom function implementation, but after looking through the JMeter source code and Googling, I found the way to implement a JMeter custom function.

Custom function implementation

Let's dive into the details of the implementation. There are certain requirements that should be satisfied:

1. The function class package name must contain ".functions."
2. The function class must extend AbstractFunction and implement the execute(), setParameters(), getReferenceKey() and getArgumentDesc() methods.
3. Make a jar file, put it in the <JMETER_HOME>/lib/ext directory and restart JMeter.

Package name

JMeter is designed in such a way that it can run without a GUI (Graphical User Interface). It loads the core classes and executes the test plan. It gives high priority to core classes and prefers to load those classes first. In order to make sure that the GUI and the core/backend don't get mixed, it segregates the classes based on the package name. It follows the convention that a function implementation class should live in a package containing the word "functions", e.g. com.code4reference.jmeter.functions. Under the hood it looks in the properties file and tries to find the following property value:

classfinder.functions.contain=.functions.

As you can see, the default value provided is ".functions.".
You can change this to something else, but you have to make sure that the same word exists in the custom function class's package name. It's preferred to keep the default value. Once you have defined the package, it's time to write the function implementation class.

Function implementation class

While writing this class you have to implement the following methods:

String getReferenceKey(): the name of the function which can be called from a sampler. The convention is to put two underscores ("__") before the name of the function, e.g. __TimeInMillis, and the function name should be the same as the name of the class which implements it. This function name should be stored in a static final String variable so that it can't be changed during execution.

List<String> getArgumentDesc(): this method returns the argument descriptions as a list of strings. These descriptions appear in the Function Helper dialog.

void setParameters(Collection<CompoundVariable> parameters): this method is called by JMeter, passing the values given in the function call. The variables are passed as a collection of CompoundVariable. This method gets called even if no argument is provided. In this method global variables can be set and later accessed in the execute() method.

String execute(SampleResult previousResult, Sampler currentSampler): JMeter passes the previous SampleResult and the current Sampler. This method returns a string which is used as the replacement value for the function call. It can be called by multiple threads, so it has to be thread-safe. A peculiarity of this method is that after processing the arguments the result has to be converted to a string and returned.

Source code

In the sample source code below, I have implemented one function called __TimeInMillis. This method returns the time in milliseconds after adjusting the current time by the provided offset. For example, the call ${__TimeInMillis(2000)} returns 1371413879000 when the current time is 1371413877000.
package com.code4reference.jmeter.functions;

import java.util.Calendar;
import java.util.Collection;
import java.util.LinkedList;
import java.util.List;

import org.apache.jmeter.engine.util.CompoundVariable;
import org.apache.jmeter.functions.AbstractFunction;
import org.apache.jmeter.functions.InvalidVariableException;
import org.apache.jmeter.samplers.SampleResult;
import org.apache.jmeter.samplers.Sampler;
import org.apache.jorphan.logging.LoggingManager;
import org.apache.log.Logger;

public class TimeInMillis extends AbstractFunction {

    private static final List<String> desc = new LinkedList<String>();
    private static final String KEY = "__TimeInMillis";
    private static final int MAX_PARAM_COUNT = 1;
    private static final int MIN_PARAM_COUNT = 0;
    private static final Logger log = LoggingManager.getLoggerForClass();
    private Object[] values;

    static {
        desc.add("(Optional) Pass the milliseconds that should be added/subtracted from current time.");
    }

    /**
     * No-arg constructor.
     */
    public TimeInMillis() {
        super();
    }

    /** {@inheritDoc} */
    @Override
    public synchronized String execute(SampleResult previousResult, Sampler currentSampler)
            throws InvalidVariableException {
        Calendar cal = Calendar.getInstance();

        if (values.length == 1) {
            // If the user has provided an offset value then adjust the time.
            log.info("Got one parameter");
            try {
                Integer offsetTime = new Integer(((CompoundVariable) values[0]).execute().trim());
                cal.add(Calendar.MILLISECOND, offsetTime);
            } catch (Exception e) {
                // In case the user passes an invalid parameter.
                throw new InvalidVariableException(e);
            }
        }

        return String.valueOf(cal.getTimeInMillis());
    }

    /** {@inheritDoc} */
    @Override
    public synchronized void setParameters(Collection<CompoundVariable> parameters)
            throws InvalidVariableException {
        checkParameterCount(parameters, MIN_PARAM_COUNT, MAX_PARAM_COUNT);
        values = parameters.toArray();
    }

    /** {@inheritDoc} */
    @Override
    public String getReferenceKey() {
        return KEY;
    }

    /** {@inheritDoc} */
    @Override
    public List<String> getArgumentDesc() {
        return desc;
    }
}

A few parts of the code are worth pointing out: the function name is set in the KEY constant, and the function description is provided in the static initializer. In setParameters() the number of arguments is checked to make sure that the right number of arguments has been provided. The main part of the code is in execute(), where the current time is adjusted and returned as a string. If you are interested in other function implementations, check out the entire source code on github/Code4Reference. Once the code is written, compile it, make a jar file and place it in the <JMETER_HOME>/lib/ext directory. You can get a sample Gradle script for building the jar file in this post. If you don't know Gradle, you can use plain commands to generate the jar file; the easiest way of creating the jar file is by exporting the package in Eclipse and selecting Jar file as the export destination.

Reference: JMeter custom function implementation from our JCG partner Rakesh Cusat at the Code4Reference blog.

Git configuration options you can’t miss

Whenever I start using a new machine for development, these are the first options I set up.

First things first – your name

git config --global user.name "Andrea Salvadore"
git config --global user.email ""

Better log messages

git config --global alias.lg "log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit"

This command will generate nicely coloured and formatted git logs. See more details here.

Some common aliases

git config --global alias.st status

merge tool

I use DiffMerge or Meld. The following configuration is for DiffMerge:

git config --global merge.tool diffmerge
git config --global mergetool.diffmerge.trustexitcode true
git config --global mergetool.keepbackup false
git config --global mergetool.diffmerge.cmd "/usr/bin/diffmerge --merge --result=\"$MERGED\" \"$LOCAL\" \"$BASE\" \"$REMOTE\""

diff tool

git config --global diff.tool diffmerge
git config --global difftool.diffmerge.cmd "diffmerge '$LOCAL' '$REMOTE'"

push current branch

git config --global push.default current

This will allow you to type git push origin instead of git push origin <current_branch_name>.

Tell git to ignore file permission changes

git config --global core.filemode false

Reference: Git configuration options you can't miss from our JCG partner Andrea Salvadore at the Development in progress blog.
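All of these commands write to the global configuration file, so after running them your ~/.gitconfig ends up looking roughly like the sketch below (long values abbreviated, and the user details are placeholders, not real values):

```ini
[user]
    name = Andrea Salvadore
    email = you@example.com
[alias]
    st = status
    lg = log --color --graph --pretty=format:'...' --abbrev-commit
[merge]
    tool = diffmerge
[mergetool]
    keepBackup = false
[mergetool "diffmerge"]
    trustExitCode = true
    cmd = /usr/bin/diffmerge --merge --result="$MERGED" "$LOCAL" "$BASE" "$REMOTE"
[diff]
    tool = diffmerge
[push]
    default = current
[core]
    filemode = false
```

This also means you can copy a tuned ~/.gitconfig to a new machine instead of re-running the commands one by one.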

Simple Spring Memcached – Spring Caching Abstraction and Memcached

Caching remains one of the most basic performance-enhancing mechanisms in any read-heavy database application. The Spring 3.1 release came with a cool new feature called the cache abstraction. The Spring cache abstraction provides application developers an easy, transparent and decoupled way to plug in any caching solution. Memcached is one of the most popular distributed caching systems used across apps. In this post we will focus on how to integrate Memcached with a Spring-enabled application. Since Spring directly supports only Ehcache and ConcurrentHashMap, we will fall back to a third-party library, Simple Spring Memcached (SSM), to leverage the power of the Spring caching abstraction.

Getting the Code

Code for this tutorial can be downloaded from the SVN location referenced below. For the tutorial to work, please create the following table in your db, then modify the datasource in springcache.xml.

CREATE TABLE IF NOT EXISTS `adconnect`.`books` (
    `book_id` INT NOT NULL AUTO_INCREMENT,
    `book_name` VARCHAR(500) NULL,
    `book_author` VARCHAR(500) NULL,
    `category` VARCHAR(500) NULL,
    `numpages` INT NULL,
    `price` FLOAT NULL,
    PRIMARY KEY (`book_id`)
) ENGINE = InnoDB;

Integration Steps

1. Dependencies – I assume that you already have Hibernate, Spring and logging set up. To download the SSM dependencies, add the following to your POM (for the full set of dependencies please download the project from the SVN URL above):

<dependency>
    <groupId></groupId>
    <artifactId>spring-cache</artifactId>
    <version>3.1.0</version>
</dependency>

<dependency>
    <groupId></groupId>
    <artifactId>xmemcached-provider</artifactId>
    <version>3.1.0</version>
</dependency>

2. Enable Caching – To enable caching in your Spring application, add the following to your Spring context XML:

<cache:annotation-driven/>

3. Configure Spring to enable Memcached-based caching – add the following to your application context XML:
<bean name="cacheManager" class="">
    <property name="caches">
        <set>
            <bean class="">
                <constructor-arg name="cache" index="0" ref="defaultCache"/>
                <!-- 5 minutes -->
                <constructor-arg name="expiration" index="1" value="300"/>
                <!-- @CacheEvict(..., "allEntries" = true) doesn't work -->
                <constructor-arg name="allowClear" index="2" value="false"/>
            </bean>
        </set>
    </property>
</bean>

<bean name="defaultCache" class="">
    <property name="cacheName" value="defaultCache"/>
    <property name="cacheClientFactory">
        <bean name="cacheClientFactory" class=""/>
    </property>
    <property name="addressProvider">
        <bean class="">
            <property name="address" value=""/>
        </bean>
    </property>
    <property name="configuration">
        <bean class="">
            <property name="consistentHashing" value="true"/>
        </bean>
    </property>
</bean>

SSMCacheManager is an abstract class that acts as a manager for the underlying cache. SSMCache implements org.springframework.cache.Cache; this is the actual wrapper around the underlying cache client API.

4. Annotation-Driven Caching – Spring uses annotations to mark methods that are to be managed by the cache. These are the annotations defined by the Spring caching framework:

@Cacheable – marks a method whose results are to be cached. If a cacheable method is called, Spring first checks whether the result of the method is already cached. If it is present in the cache, the result is pulled from there; otherwise the method call is made.

@CachePut – methods marked with @CachePut are always run, and their results are pushed to the cache. You should not place both @CachePut and @Cacheable on the same method, as they have different behavior: @CachePut results in the method being executed every time, while @Cacheable executes the method only when the result is not already in the cache.

@CacheEvict – results in the eviction of objects from the cache.
This is generally used when the result object has been updated, so the stale object needs to be purged from the cache.

@Caching – used when multiple annotations of the same type are to be placed on a method.

@Cacheable Demo

@Cacheable(value = "defaultCache", key = "new Integer(#book_id).toString().concat('.BookVO')")
public BookVO get(int book_id) throws Exception {
    BookVO bookVO = null;
    try {
        Query query = getSession().createQuery("from BookVO bookVO where bookVO.book_id=:book_id");
        query.setLong("book_id", book_id);
        bookVO = (BookVO) query.uniqueResult();
    } catch (HibernateException he) {
        log.error("Error in finding a bookVO : " + he);
        throw new Exception("Error in finding bookVO for book_id : " + book_id, he);
    }
    return bookVO;
}

Please note the key attribute of the annotation. This is an example of Spring Expression Language (SpEL). You can use SpEL to create the memcache key according to your requirements. In this example I want a key of the form <book_id>.BookVO.

Another example – let's say I want to store a list of BookVOs by a given author. In that case I need a unique key of the form <author_name>.BookVOList, so I can use the following key:

@Cacheable(value = "defaultCache", key = "#author.concat('.BookVOList')")
public List<BookVO> getList(String author) throws Exception {

@CachePut Demo

@CachePut(value = "defaultCache", key = "new Integer(#bookVO.book_id).toString().concat('.BookVO')")
public BookVO create(BookVO bookVO) throws Exception {
    try {
        getSession().save(bookVO);
        getSession().flush();
    } catch (HibernateException he) {
        log.error("Error in inserting bookVO : " + he);
        throw new Exception("Error in inserting bookVO", he);
    }
    return bookVO;
}

@CachePut can be used while inserting data: the inserted data is put in the cache once the insertion is done.

@CacheEvict Demo

@CacheEvict(value = "defaultCache", key = "new Integer(#bookVO.book_id).toString().concat('.BookVO')")
public BookVO update(BookVO bookVO) throws Exception {
    try {
        Query query = getSession().createQuery("update BookVO bookVO set bookVO.book_name=:book_name, bookVO.book_author=:book_author, bookVO.category=:category, bookVO.numpages=:numpages, bookVO.price=:price where bookVO.book_id=:book_id");
        query.setString("book_name", bookVO.getBook_name());
        query.setString("book_author", bookVO.getBook_author());
        query.setString("category", bookVO.getCategory());
        query.setInteger("numpages", bookVO.getNumpages());
        query.setFloat("price", bookVO.getPrice());
        query.setLong("book_id", bookVO.getBook_id());
        query.executeUpdate();
    } catch (HibernateException he) {
        log.error("Error in updating bookVO : " + he);
        throw new Exception("Error in updating bookVO", he);
    }
    return bookVO;
}

Resources

Reference: Simple Spring Memcached – Spring Caching Abstraction and Memcached from our JCG partner Niraj Singh at the Weblog4j blog. ...
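To see why @Cacheable pays off, it helps to picture what the abstraction does under the hood: a cache-aside lookup keyed by an expression over the method arguments. The sketch below is plain Java with no Spring at all, and every name in it is hypothetical; a ConcurrentHashMap stands in for memcached.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical stand-in for what @Cacheable does: check the cache first,
// run the expensive loader only on a miss, then store the result.
public class CacheAsideDemo {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static int loaderCalls = 0; // counts real "database" hits

    static String getBook(int bookId, Function<Integer, String> loader) {
        String key = bookId + ".BookVO";         // same key shape as the SpEL example
        return cache.computeIfAbsent(key, k -> { // miss -> load and cache
            loaderCalls++;
            return loader.apply(bookId);
        });
    }

    public static void main(String[] args) {
        Function<Integer, String> db = id -> "BookVO#" + id; // fake DB lookup
        getBook(42, db);
        getBook(42, db); // served from cache, loader not called again
        System.out.println(loaderCalls); // prints 1
    }
}
```

The second call never touches the loader, which is exactly the behaviour the @Cacheable demo above relies on.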

Maven: Start an external process without blocking your build

Let’s assume that we have to execute a bunch of acceptance tests with a BDD framework like Cucumber as part of a Maven build. Using the Maven Failsafe Plugin is not complex, but it has an implicit requirement: the container that hosts the implementation we are about to test needs to be already running. Many containers like Jetty or JBoss provide their own Maven plugins that allow you to start the server as part of a Maven job, and there is also the good generic Maven Cargo plugin that offers the same behavior for many different containers. These plugins allow you, for instance, to start the server at the beginning of a Maven job, deploy the implementation that you want to test, fire your tests and stop the server at the end. All the mechanisms that I have described work, and they are usually very useful for the various testing approaches. Unluckily, I cannot apply this solution if my container is not a supported one, unless, obviously, I decide to write a custom plugin or add support for my specific container to Maven Cargo. In my specific case I had to find a way to use Red Hat’s JBoss Fuse, a Karaf-based container. I decided to try to keep it easy and not write a full-featured Maven plugin, relying instead on the GMaven plugin, or, as I have recently read on the internet, the “Poor Man’s Gradle”. GMaven is basically a plugin that adds Groovy support to your Maven job, allowing you to execute snippets of Groovy as part of it. I like it because it allows me to inline scripts directly in the pom.xml. It also permits you to define your script in a separate file and execute it, but that is exactly the same behaviour you could achieve with plain Java and the Maven Exec Plugin; a solution that I do not like much, because it hides the implementation and makes it harder to see what the full build is trying to achieve. Obviously, this approach makes sense only if the scripts you are about to write are simple enough to be self-descriptive.
I will describe my solution, starting by sharing my trial and errors and references to the various articles and posts I have found. At first I considered using the Maven Exec Plugin to directly launch my container, something like what was suggested here:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.1.1</version>
  <executions>
    <execution>
      <id>some-execution</id>
      <phase>compile</phase>
      <goals>
        <goal>exec</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <executable>hostname</executable>
  </configuration>
</plugin>

That plugin invocation, as part of a Maven job, actually allows me to start the container, but it has a huge drawback: the Maven lifecycle stops until the external process terminates or is manually stopped. This is because the external process execution is synchronous; Maven doesn’t consider the command execution finished, so it never goes on with the rest of the build instructions. This is not what I needed, so I looked for something different. I then found this suggestion to start a background process so that Maven does not block. The idea here is to execute a shell script that starts a background process and immediately returns:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.2.1</version>
  <executions>
    <execution>
      <id>start-server</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>src/test/scripts/</executable>
        <arguments>
          <argument>{server.home}/bin/server</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>

and the script is:

#! /bin/sh
$* > /dev/null 2>&1 &
exit 0

This approach actually works: my Maven build doesn’t stop, and the next lifecycle steps are executed. But I have a new problem now. The next steps are executed immediately; I have no way to trigger the continuation only after my container is up and running.
Browsing a little more, I found this nice article. The article, very well written, seems to describe exactly my scenario, and it is even applied to my exact context, trying to start a flavour of Karaf. It uses a different approach to start the process in the background: the Antrun Maven plugin. I gave it a try and, unluckily, ended up in the exact same situation as before: the integration tests are executed immediately, after the request to start the container but before the container is ready.

Convinced that I couldn’t find a ready-made solution, I decided to hack the current one with the help of some imperative code. I thought that I could insert a “wait script” after the start request but before the integration tests are fired, one that checks for a condition assuring me that the container is available. So, if the container is started during the pre-integration-test phase and my acceptance tests are started during the very next integration-test phase, I can insert some logic in pre-integration-test that keeps polling my container and returns only after the container is “considered” available:

import static com.jayway.restassured.RestAssured.*;

println("Wait for FUSE to be available")
for (int i = 0; i < 30; i++) {
    try {
        def response = with().get("http://localhost:8383/hawtio")
        def status = response.getStatusLine()
        println(status)
    } catch (Exception e) {
        Thread.sleep(1000)
        continue
    } finally {
        print(".")
    }
    if (!(status ==~ /.*OK.*/))
        Thread.sleep(1000)
}

And it is executed by this GMaven instance:

<plugin>
  <groupId>org.codehaus.gmaven</groupId>
  <artifactId>gmaven-plugin</artifactId>
  <configuration>
    <providerSelection>1.8</providerSelection>
  </configuration>
  <executions>
    <execution>
      <id>########### wait for FUSE to be available ############</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>execute</goal>
      </goals>
      <configuration>
        <source><![CDATA[
          import static com.jayway.restassured.RestAssured.*;
          ...
        ]]></source>
      </configuration>
    </execution>
  </executions>
</plugin>

My (ugly) script uses REST-assured and exception-based logic to check, for up to 30 seconds, whether a web resource that I know my container deploys becomes available. This check is not as robust as I would like, since it checks for a specific resource, which is not necessarily a confirmation that the whole deploy process has finished. Eventually, a better solution would be to use some management API able to check the state of the container, but honestly I do not know whether Karaf exposes one, and my simple check was enough for my limited use case. With the GMaven invocation, my Maven build now behaves the way I wanted.

This post showed a way to enrich your Maven script with some programmatic logic without the need to write a full-featured Maven plugin. Since you have full access to the Groovy context, you can perform any kind of task you find helpful. For instance, you could also start background threads that let the Maven lifecycle progress while your logic keeps executing. My last suggestion is to keep the logic in your scripts simple and not turn them into long and complex programs. Readability was the reason I decided to use REST-assured instead of direct access to Apache HttpClient.
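The wait-and-poll idea is not Groovy-specific. The sketch below expresses the same pattern in plain Java (all names hypothetical; the post itself uses Groovy and REST-assured): retry a readiness check once per interval until it succeeds or a timeout expires, treating exceptions as "not up yet".

```java
import java.util.function.BooleanSupplier;

// A generic "wait until ready" poller: retries a readiness check once per
// interval until it returns true or the overall timeout expires.
public class WaitForReady {
    static boolean waitUntil(BooleanSupplier ready, long timeoutMillis, long intervalMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            try {
                if (ready.getAsBoolean()) return true; // container answered OK
            } catch (RuntimeException e) {
                // connection refused etc. -> container not up yet, keep polling
            }
            Thread.sleep(intervalMillis);
        }
        return false; // gave up
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Fake readiness check that becomes true after ~300 ms, standing in
        // for an HTTP GET against something like http://localhost:8383/hawtio
        BooleanSupplier ready = () -> System.currentTimeMillis() - start > 300;
        System.out.println(waitUntil(ready, 5000, 100)); // prints true
    }
}
```

The real check would be an HTTP GET against a resource the container deploys, exactly as the Groovy script above does.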
This is a sample full pom.xml:

<!-- ======== Start FUSE ======== -->
<project xmlns="" xmlns:xsi="" xsi:schemaLocation="">
  <modelVersion>4.0.0</modelVersion>
  <name>${groupId}.${artifactId}</name>
  <parent>
    <groupId>xxxxxxx</groupId>
    <artifactId>esb</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>
  <artifactId>acceptance</artifactId>
  <properties>
    <fuse.home>/data/software/RedHat/FUSE/fuse_full/jboss-fuse-6.0.0.redhat-024/bin/</fuse.home>
  </properties>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-failsafe-plugin</artifactId>
        <version>2.12.2</version>
        <executions>
          <execution>
            <goals>
              <goal>integration-test</goal>
              <goal>verify</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <excludes>
            <exclude>**/*Test*.java</exclude>
          </excludes>
        </configuration>
        <executions>
          <execution>
            <id>integration-test</id>
            <goals>
              <goal>test</goal>
            </goals>
            <phase>integration-test</phase>
            <configuration>
              <excludes>
                <exclude>none</exclude>
              </excludes>
              <includes>
                <include>**/</include>
              </includes>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <artifactId>maven-antrun-plugin</artifactId>
        <version>1.6</version>
        <executions>
          <execution>
            <id>############## start-fuse ################</id>
            <phase>pre-integration-test</phase>
            <configuration>
              <target>
                <exec dir="${fuse.home}" executable="${fuse.home}/start" spawn="true"/>
              </target>
            </configuration>
            <goals>
              <goal>run</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <artifactId>maven-antrun-plugin</artifactId>
        <version>1.6</version>
        <executions>
          <execution>
            <id>############## stop-fuse ################</id>
            <phase>post-integration-test</phase>
            <configuration>
              <target>
                <exec dir="${fuse.home}" executable="${fuse.home}/stop" spawn="true"/>
              </target>
            </configuration>
            <goals>
              <goal>run</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.codehaus.gmaven</groupId>
        <artifactId>gmaven-plugin</artifactId>
        <configuration>
          <providerSelection>1.8</providerSelection>
        </configuration>
        <executions>
          <execution>
            <id>########### wait for FUSE to be available ############</id>
            <phase>pre-integration-test</phase>
            <goals>
              <goal>execute</goal>
            </goals>
            <configuration>
              <source><![CDATA[
                import static com.jayway.restassured.RestAssured.*;
                println("Wait for FUSE to be available")
                for (int i = 0; i < 30; i++) {
                    try {
                        def response = with().get("http://localhost:8383/hawtio")
                        def status = response.getStatusLine()
                        println(status)
                    } catch (Exception e) {
                        Thread.sleep(1000)
                        continue
                    } finally {
                        print(".")
                    }
                    if (!(status ==~ /.*OK.*/))
                        Thread.sleep(1000)
                }
              ]]></source>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <!-- -->
    </plugins>
  </build>
  <dependencies>
    <!-- -->
    <dependency>
      <groupId>info.cukes</groupId>
      <artifactId>cucumber-java</artifactId>
      <version>${cucumber.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>info.cukes</groupId>
      <artifactId>cucumber-picocontainer</artifactId>
      <version>${cucumber.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>info.cukes</groupId>
      <artifactId>cucumber-junit</artifactId>
      <version>${cucumber.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.11</version>
      <scope>test</scope>
    </dependency>
    <!-- groovy script dependencies -->
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.2.5</version>
    </dependency>
    <dependency>
      <groupId>com.jayway.restassured</groupId>
      <artifactId>rest-assured</artifactId>
      <version>1.8.1</version>
    </dependency>
  </dependencies>
</project>

Reference: Maven: Start an external process without blocking your build from our JCG partner Paolo Antinori at the Someday Never Comes blog. ...

Design Patterns: Strategy

This time I want to talk about the Strategy design pattern, and with it I start a series of articles about behavioral design patterns. These kinds of patterns represent schemas of interaction between objects that make code more flexible and well organized. The most essential point of this approach is loose coupling between objects.

The Strategy should be used when you have several implementations for one purpose in your application. In this case you create a strategy interface, concrete realizations of the interface, and finally a context class which encapsulates all the logic in a few methods. In order to understand this approach, let's see an example. The example will be based on football. Let's imagine that any football team can play in two manners: attacking and defending. These two tactics are particular realisations of a football strategy.

Strategy interface:

public interface FootballStrategy {
    public void adhereTactic(String team);
}

Concrete realizations:

public class AttackTactic implements FootballStrategy {
    @Override
    public void adhereTactic(String team) {
        System.out.println(team + " will play in attacking football!");
    }
}

And:

public class DefenceTactic implements FootballStrategy {
    @Override
    public void adhereTactic(String team) {
        System.out.println(team + " will make emphasis on defence!");
    }
}

Context class:

public class TacticContext {

    private FootballStrategy strategy = null;

    public void selectTactic(String team) {
        strategy.adhereTactic(team);
    }

    public FootballStrategy getStrategy() {
        return strategy;
    }

    public void setStrategy(FootballStrategy strategy) {
        this.strategy = strategy;
    }
}

Demonstration of the Strategy usage:

...
public static void main(String[] args) {

    String team1 = "Barcelona";
    String team2 = "Real Madrid";

    TacticContext context = new TacticContext();

    context.setStrategy(new AttackTactic());
    context.selectTactic(team1);

    context.setStrategy(new DefenceTactic());
    context.selectTactic(team2);
}
...

The result of the code execution:

Barcelona will play in attacking football!
Real Madrid will make emphasis on defence!

When to use the Strategy design pattern? Definitely when a client doesn't need to know about the implementation of the concrete strategies or about the data they use, and when you want to select one class from a set dynamically at runtime. There may be other situations worth mentioning, but I hope the example was expressive enough for you to draw your own conclusions about the pros and cons of the Strategy design pattern.

Reference: Design Patterns: Strategy from our JCG partner Alexey Zvolinskiy at the Fruzenshtein’s notes blog. ...
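Since the strategy interface above has a single method, in Java 8 and later the same example can be written without the concrete strategy classes at all: a functional interface plus lambdas. The sketch below follows the article's example but returns the message instead of printing it, so the behaviour is easy to verify; the class and constant names are my own.

```java
// Strategy as a functional interface: each tactic is just a lambda,
// and the "context" is reduced to a method that applies the strategy.
public class LambdaStrategyDemo {

    interface FootballStrategy {
        String adhereTactic(String team); // returns the message instead of printing it
    }

    static final FootballStrategy ATTACK =
            team -> team + " will play in attacking football!";
    static final FootballStrategy DEFENCE =
            team -> team + " will make emphasis on defence!";

    // The context: applies whatever strategy it is handed.
    static String selectTactic(String team, FootballStrategy strategy) {
        return strategy.adhereTactic(team);
    }

    public static void main(String[] args) {
        System.out.println(selectTactic("Barcelona", ATTACK));
        System.out.println(selectTactic("Real Madrid", DEFENCE));
    }
}
```

The loose coupling is the same as in the class-based version; the lambdas simply remove the boilerplate of one class per tactic.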

Quicksorting – 3-way and Dual Pivot

It’s no news that Quicksort is considered one of the most important algorithms of the century and that it is the de facto system sort for many languages, including Arrays.sort in Java. So, what’s new about quicksort? Well, nothing, except that I figured out just now (after two damn years since the release of Java 7) that the Quicksort implementation of Arrays.sort has been replaced with a variant called Dual-Pivot Quicksort. This thread is awesome not only for that reason but also for how humble Jon Bentley and Joshua Bloch really are. What did I do next? Just like everybody else, I wanted to implement it and do some benchmarking against some 10 million integers (random and duplicate). Oddly enough, I found the following results:

Random data:
Quick Sort Basic : 1222 millis
Quick Sort 3 Way : 1295 millis (seriously!!)
Quick Sort Dual Pivot : 1066 millis

Duplicate data:
Quick Sort Basic : 378 millis
Quick Sort 3 Way : 15 millis
Quick Sort Dual Pivot : 6 millis

Stupid Question 1: I am afraid that I am missing something in the implementation of the 3-way partition. Across several runs against random inputs of 10 million numbers, I could see that the single pivot always performs better (although the difference is less than 100 milliseconds for 10 million numbers). I understand that the whole point of making the 3-way Quicksort the default Quicksort until now is that it does not give O(n²) performance on duplicate keys, which is very evident when I run it against duplicate input. But is it true that, for the sake of handling duplicate data, a small penalty is paid by 3-way on random data? Or is my implementation bad?

Stupid Question 2: My Dual-Pivot implementation (link below) does not handle duplicates well. It takes a sweet forever (O(n²)) to execute. Is there a good way to avoid this? Looking at the Arrays.sort implementation, I figured out that ascending sequences and duplicates are eliminated well before the actual sorting is done.
So, as a dirty fix, if the pivots are equal I fast-forward the lowIndex until its value differs from pivot2. Is this a fair implementation?

else if (pivot1 == pivot2) {
    while (pivot1 == pivot2 && lowIndex < highIndex) {
        lowIndex++;
        pivot1 = input[lowIndex];
    }
}

That’s it. That is all I did. I always find tracing an algorithm interesting, but with the number of variables in Dual-Pivot quicksort, my eyes found it overwhelming while debugging. So I also went ahead and created trace-enabled implementations (for all three) so that I could see where the variable pointers currently are. These trace-enabled classes just overlay where the pointers are below the array values. I hope you find these classes useful, e.g. for a Dual-Pivot iteration.

The entire project (along with a few lame implementations of DSA) is available on github here. The quicksort classes alone can be found here. Here’s my implementation of the Single Pivot (Hoare), 3-way (Sedgewick) and the new Dual-Pivot (Yaroslavskiy) variants.

Single Pivot

package basics.sorting.quick;

import static basics.sorting.utils.SortUtils.exchange;
import static basics.sorting.utils.SortUtils.less;
import basics.shuffle.KnuthShuffle;

public class QuickSortBasic {

    public void sort(int[] input) {
        //KnuthShuffle.shuffle(input);
        sort(input, 0, input.length - 1);
    }

    private void sort(int[] input, int lowIndex, int highIndex) {
        if (highIndex <= lowIndex) {
            return;
        }

        int partIndex = partition(input, lowIndex, highIndex);

        sort(input, lowIndex, partIndex - 1);
        sort(input, partIndex + 1, highIndex);
    }

    private int partition(int[] input, int lowIndex, int highIndex) {
        int i = lowIndex;
        int pivotIndex = lowIndex;
        int j = highIndex + 1;

        while (true) {
            while (less(input[++i], input[pivotIndex])) {
                if (i == highIndex) break;
            }
            while (less(input[pivotIndex], input[--j])) {
                if (j == lowIndex) break;
            }
            if (i >= j) break;
            exchange(input, i, j);
        }

        exchange(input, pivotIndex, j);
        return j;
    }
}

3-way

package basics.sorting.quick;

import static basics.shuffle.KnuthShuffle.shuffle;
import static basics.sorting.utils.SortUtils.exchange;
import static basics.sorting.utils.SortUtils.less;

public class QuickSort3Way {

    public void sort(int[] input) {
        //input = shuffle(input);
        sort(input, 0, input.length - 1);
    }

    public void sort(int[] input, int lowIndex, int highIndex) {
        if (highIndex <= lowIndex) return;

        int lt = lowIndex;
        int gt = highIndex;
        int i = lowIndex + 1;

        int pivotIndex = lowIndex;
        int pivotValue = input[pivotIndex];

        while (i <= gt) {
            if (less(input[i], pivotValue)) {
                exchange(input, i++, lt++);
            } else if (less(pivotValue, input[i])) {
                exchange(input, i, gt--);
            } else {
                i++;
            }
        }

        sort(input, lowIndex, lt - 1);
        sort(input, gt + 1, highIndex);
    }
}

Dual Pivot

package basics.sorting.quick;

import static basics.shuffle.KnuthShuffle.shuffle;
import static basics.sorting.utils.SortUtils.exchange;
import static basics.sorting.utils.SortUtils.less;

public class QuickSortDualPivot {

    public void sort(int[] input) {
        //input = shuffle(input);
        sort(input, 0, input.length - 1);
    }

    private void sort(int[] input, int lowIndex, int highIndex) {
        if (highIndex <= lowIndex) return;

        int pivot1 = input[lowIndex];
        int pivot2 = input[highIndex];

        if (pivot1 > pivot2) {
            exchange(input, lowIndex, highIndex);
            pivot1 = input[lowIndex];
            pivot2 = input[highIndex];
            //sort(input, lowIndex, highIndex);
        } else if (pivot1 == pivot2) {
            while (pivot1 == pivot2 && lowIndex < highIndex) {
                lowIndex++;
                pivot1 = input[lowIndex];
            }
        }

        int i = lowIndex + 1;
        int lt = lowIndex + 1;
        int gt = highIndex - 1;

        while (i <= gt) {
            if (less(input[i], pivot1)) {
                exchange(input, i++, lt++);
            } else if (less(pivot2, input[i])) {
                exchange(input, i, gt--);
            } else {
                i++;
            }
        }

        exchange(input, lowIndex, --lt);
        exchange(input, highIndex, ++gt);

        sort(input, lowIndex, lt - 1);
        sort(input, lt + 1, gt - 1);
        sort(input, gt + 1, highIndex);
    }
}

Reference: Quicksorting – 3-way and Dual Pivot from our JCG partner Arun Manivannan at the blog. ...
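When benchmarking variants like these, it is worth first sanity-checking each implementation against Arrays.sort on both random and duplicate-heavy inputs. The snippet below is a self-contained sketch of such a check; it carries its own compact 3-way partition (so it is not the article's exact class, just the same Dutch-national-flag idea) and compares the result against the JDK sort.

```java
import java.util.Arrays;
import java.util.Random;

// Self-contained sanity check: a compact 3-way quicksort verified against Arrays.sort.
public class QuickSortCheck {

    static void sort3way(int[] a, int lo, int hi) {
        if (hi <= lo) return;
        int lt = lo, gt = hi, i = lo + 1;
        int pivot = a[lo];
        while (i <= gt) {
            if (a[i] < pivot)      swap(a, lt++, i++);
            else if (a[i] > pivot) swap(a, i, gt--);
            else                   i++;          // equal keys collect in the middle
        }
        sort3way(a, lo, lt - 1);  // recurse on the strictly-smaller part...
        sort3way(a, gt + 1, hi);  // ...and the strictly-larger part
    }

    static void swap(int[] a, int x, int y) { int t = a[x]; a[x] = a[y]; a[y] = t; }

    static boolean check(int[] input) {
        int[] expected = input.clone();
        Arrays.sort(expected);                  // trusted reference
        sort3way(input, 0, input.length - 1);
        return Arrays.equals(input, expected);
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int[] random = rnd.ints(100_000).toArray();      // random keys
        int[] dups = rnd.ints(100_000, 0, 10).toArray(); // heavy duplicates
        System.out.println(check(random) && check(dups));
    }
}
```

Because equal keys never re-enter the recursion, the duplicate-heavy input finishes in a handful of partitioning passes, which is exactly the effect behind the 15 ms figure above.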
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.