Technical Debt – How much is it Really Costing you?

The idea behind the technical debt metaphor is that there is a cost to taking shortcuts (intentional technical debt) or making mistakes (unintentional technical debt), and that the cost of not dealing with these shortcuts and mistakes will increase over time. The problem with the metaphor is that with financial debt, we know how much it would cost to pay off a debt today, and we can calculate how much interest we will have to pay in the future. Technical debt is much fuzzier. We don't really know how much debt we have taken on – you may have taken on a lot of unintentional technical debt, and you may still be taking it on without knowing it. And we can't quantify how much it is really costing us: how much interest we have paid so far, or what the total cost may be in the future if we don't take care of it today.

Some people have tried to put technical debt into concrete financial terms. For example, according to CAST Software's CRASH report, "applications carry on average $3.61 of technical debt per line of code". For some reason, the average cost for Java apps was even higher: $5.42 per line of code. These numbers are calculated by running static structural analysis on their customers' code. Sonar, an Open Source dashboard for managing code quality, also tries to calculate a technical debt cost for a code base, again using static analysis findings like code coverage of automated tests, code complexity, duplication, violations of coding practices and comment density. Thinking of technical debt in this way is interesting, but let's stop pretending that these are hard numbers that we can use to make trade-off decisions. Although the numbers appear precise, they're arbitrary guesses, and they assume that technical debt can be calculated by a tool looking at the structure of the code. Unfortunately, dealing with technical debt is not that straightforward.
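To make concrete what these per-line figures amount to, here is a back-of-envelope sketch (not part of the original article) of how such numbers get applied: multiply lines of code by an assumed cost per line. The 50,000-line code base is a made-up example; $5.42/line is the report's Java average.

```java
// Naive "technical debt" estimate from a per-line cost figure.
// Works in whole cents to avoid floating-point drift.
public class DebtEstimate {

    static long estimateDebtCents(long linesOfCode, long costPerLineCents) {
        return linesOfCode * costPerLineCents;
    }

    public static void main(String[] args) {
        // 50,000 LOC at $5.42 per line (the CRASH report's Java average)
        long cents = estimateDebtCents(50_000L, 542L);
        System.out.printf("Estimated technical debt: $%,.2f%n", cents / 100.0);
    }
}
```

Which is exactly the point of the article: the arithmetic is trivial, but the inputs are guesses, so the precision of the output is illusory.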
But if debt is too fuzzy to be measured in detailed cost terms, how do you know what kind of debt is hurting you the most, and how do you know when you have too much? Let's look at different kinds of technical debt, and how much they might cost you, using a fuzzier approach.

$$$ Making a fundamental mistake in architecture or the platform technology – you don't find out until too late, until you have real customers using the system, that a key piece of technology like the database or messaging fabric doesn't scale or isn't reliable, or that you can't scale out your architecture like you need to because of core dependency problems, or that you made some fundamentally incorrect assumptions about how the system is supposed to work or how customers will use it. Now you have no choice but to start again, or at least rewrite big chunks of the system to get it to work or to keep it working, and you don't have the time to do this properly.

$$-$$$ Error-prone code – the 20% of the code where 80% of bugs are found. Capers Jones says that all big systems have a small number of routines where bugs and problems cluster: code that is hard to understand and expensive and dangerous to change, because it was done badly in the first place or because it went to hell over time under the accumulation of short-sighted fixes and changes. Not rewriting this code is one of the most expensive mistakes that developers make.

$-$$ The system can't be easily tested – because you don't have good automated tests, or the tests are brittle and slow and keep falling apart when you change the code. Testing costs can make up more than half of the cost of making any change or fix – sometimes testing takes much more time and costs much more than making the fix itself – and testing costs tend to go up over time as you write more code and the system adds more interfaces and options.

$-$$ Not taking care of packaging, release and deployment – relying too much on manual steps and manual checks, leading to mistakes and problems in production, and late nights. Like testing, release and deployment costs don't go away; they just keep adding up incrementally.

$-$$ Code that mysteriously works, but nobody is sure how or why – usually performance-critical or safety-critical low-level plumbing code written by a wizard who has long since left the company. It might be beautiful code, but if nobody on the team understands it, it's a time bomb – someday, somebody is going to have to change it or fix it, or try to.

$-$$ Forward and backward compatibility adapters and compromises. This is necessary, short-term debt. But the cost rises the longer you have to maintain these compromises.

$-$$ Out-of-date libraries and middleware stack – you've fallen behind on applying patches and upgrades. Even if the code that you have now is stable, you run some risk of unpatched security vulnerabilities. The longer this goes on, the further behind you are and the higher the risk – at some point the software is no longer supported or supportable, and your hand is called.

$-$$ Duplicate, copy-and-paste code. This is one of the bugaboos of technical debt and static analysis tools. Almost everybody has it. But how bad is it, really? The cost depends on how many clones developers have made, how often they need to be changed, how many subtle differences there are between the copies, and how easily you can find the copies and keep track of them. If the developer who made the copies is still on the team and does a good job of keeping track of all of them, it doesn't cost much, if anything.

$-$$ Known, outstanding bugs in code and unresolved static analysis findings. The cost and risk depend on how many bugs and warnings you have, and how nasty they are. But if they are real problems, they should have been fixed by now. Is a bug really a bug if it isn't bugging anyone?

$-$$ Inefficient design or implementation, "throwing hardware at it", using too much memory or network bandwidth or CPU. Hardware is cheap, but these costs can add up a lot as you scale out.

$ Inconsistent use of programming idioms and patterns – developers either didn't understand the existing patterns, or didn't like them and introduced new ones, or didn't care and just wanted to get their change done. It's ugly, and it can be frustrating for developers, but the real cost of living with the situation is often less than the cost of trying to clean it all up.

$ Missing or poor error handling and exception handling. It will constantly bite you in the ass in production, but it won't cost a lot to at least get it mostly right.

$0.01 Hard coding, magic numbers, code that isn't standards compliant, poor element naming, missing comments, and code that needs tidying. This is a pain in the ass, but it's the kind of thing that is easy to clean up as part of standard refactoring work.

$0.01 Out-of-date documentation – another issue that is commonly counted as technical debt. But let's be honest: most programmers don't read documentation anyway. If nobody is using it, get rid of it. If people are using it, why isn't it up to date?

$0.00 Hand-rolled code that could have and should have been done using built-in language features or libraries, or an existing framework or common services. It's disappointing when somebody recognizes it, but unless this hand-rolled code has lots of bugs in it, it's a sunk cost, not a cost that is increasing over time.

There are different kinds of debt, with different costs. Figuring out where your real costs are, and what to do about them, isn't easy.

Reference: Technical Debt – How much is it Really Costing you? from our JCG partner Jim Bird at the Building Real Software blog....

Patching Java at runtime

This article will briefly highlight how to fix issues with third-party libs that:

- can't be circumvented
- are difficult to exclude/bypass/replace
- simply provide no bugfix

In such cases, solving the issue remains a challenging task. As a motivation for this scenario, consider the attacks on "hash indexed" data structures, such as java.util.Hashtable and java.util.HashMap (for those who are not familiar with these kinds of attacks, I would recommend the following talk from the 28C3: Denial-of-Service attacks on web applications made easy). To make a long story short, the core issue is the use of a non-cryptographic hash function (where finding collisions is easy). The root cause is hidden in the java.lang.String.hashCode() function. The obvious approach would be to patch the java.lang.String class, which is difficult for two reasons:

- it contains native code
- it belongs to the Java core classes, which are delivered with the Java installation and are thus out of our control

The first point would force us to patch with architecture- and OS-specific libs, which we should avoid whenever possible. The second point is true, but it is a little more flexible, as we will see in the following.

Ok, so let's reconsider: patching native code is dirty and we are not eager to go this way – we would have to do work for others (in this case patch SDK libs) who are not willing to fix their code.

An attempt: The classes java.util.Hashtable and java.util.HashMap are affected by the hashing issue and don't use any native code. Patching these classes is much easier, as it is sufficient to provide one compiled class for all architectures and OSs. We could use one of the proposed solutions for the bug and adjust (or replace) the original classes with fixed versions. The difficulty is to patch the VM without touching the core libs – I guess users would be very disappointed if they had to change parts of their JVM installation or, even worse, if our application did this automatically during installation.
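To see why String.hashCode() makes these attacks cheap: it is a fixed, public polynomial hash, so colliding pairs such as "Aa" and "BB" (both hash to 2112) can be found by hand and concatenated into exponentially many colliding keys. A minimal, self-contained demonstration (not from the original article):

```java
import java.util.ArrayList;
import java.util.List;

public class CollisionDemo {

    // "Aa" and "BB" collide under String.hashCode(): 'A'*31 + 'a' = 'B'*31 + 'B' = 2112.
    // Since hash(s + t) depends only on hash(s), hash(t) and t's length,
    // concatenating k such blocks yields 2^k strings sharing one hash code.
    static List<String> collidingKeys(int blocks) {
        List<String> keys = new ArrayList<>();
        keys.add("");
        for (int i = 0; i < blocks; i++) {
            List<String> next = new ArrayList<>();
            for (String k : keys) {
                next.add(k + "Aa");
                next.add(k + "BB");
            }
            keys = next;
        }
        return keys;
    }

    public static void main(String[] args) {
        // 8 keys such as "AaAaAa", "AaAaBB", ... all with the same hash code,
        // which degrades a HashMap bucket into a linked list.
        for (String k : collidingKeys(3)) {
            System.out.println(k + " -> " + k.hashCode());
        }
    }
}
```

Feeding a few hundred thousand such keys to a hash-indexed structure is what turns the O(1) lookup into the O(n^2) worst case exploited in the talk.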
Further on, introducing new, custom class loaders could be difficult in some cases. What we need is a solution to patch our single application on the fly – replace the buggy classes and don't touch anything else. If we do this transparently, other software parts don't even recognize any changes (in the best case) and keep interfacing with the classes without any modifications. This can easily be done by abusing the Java Instrumentation API. To quote the JavaDoc: "Provides services that allow Java programming language agents to instrument programs running on the JVM." And that is exactly what we need!

Proof of concept

At first we need a sample application to demonstrate the concept:

public class StringChanger {
    public static void main(String[] args) {
        System.out.println(A.shout());
    }
}

public class A {
    public static String shout() {
        return "A";
    }
}

When this class is run, it simply outputs:

A

After applying our "patch" we would like to have the following output:

Apatched

The "patched" code looks like this:

public class A {
    public static String shout() {
        return "Apatched";
    }
}

Further on, we need an "Agent" which watches the loaded classes and patches the right ones:

final public class PatchingAgent implements ClassFileTransformer {

    private static byte[] PATCHED_BYTES;
    private static final String PATH_TO_FILE = "Apatched.class";
    private static final String CLASS_TO_PATCH = "stringchanger/A";

    public PatchingAgent() throws FileNotFoundException, IOException {
        if (PATCHED_BYTES == null) {
            PATCHED_BYTES = readPatchedCode(PATH_TO_FILE);
        }
    }

    public static void premain(String agentArgument, final Instrumentation instrumentation) {
        System.out.println("Initializing hot patcher...");
        PatchingAgent agent = null;
        try {
            agent = new PatchingAgent();
        } catch (Exception e) {
            System.out.println("terrible things happened....");
        }
        instrumentation.addTransformer(agent);
    }

    @Override
    public byte[] transform(final ClassLoader loader, String className,
            final Class classBeingRedefined, final ProtectionDomain protectionDomain,
            final byte[] classfileBuffer) throws IllegalClassFormatException {
        byte[] result = null;
        if (className.equals(CLASS_TO_PATCH)) {
            System.out.println("Patching... " + className);
            result = PATCHED_BYTES;
        }
        return result;
    }

    private byte[] readPatchedCode(final String path) throws FileNotFoundException, IOException {
        ...
    }
}

Don't worry – I'm not going to bother you with implementation details, since this is only PoC code, far from being nice, clever, fast and neat. Apart from the fact that I'm catching Exception just because I'm too lazy at this point, I'm not filtering inputs or building deep copies (defensive programming as a buzzword) – this really shouldn't be taken as production code.

public PatchingAgent() initializes the agent, in this case fetching the bytes of a patched A.class file. The patched class was compiled beforehand and is stored somewhere where we can access it.

public static void premain(...) is called after the JVM has initialized and prepares the agent.

public byte[] transform(...) is invoked whenever a class is defined (for example by ClassLoader.defineClass(...)) and may transform the handled class bytes (classfileBuffer). As can be seen, we do this for our class A in the stringchanger package. You are not limited in how you transform the class (as long as it remains a valid Java class) – for example, you could utilize bytecode modification frameworks. To keep things simple, we assume that we replace the old byte[] with the one of the patched class (by simply buffering the complete patched A.class file into a byte[]).

That's all for the coding part of the patcher. As a final step we have to build a jar of the agent with a special manifest.mf file which tells the JVM how the agent can be invoked:

Manifest-Version: 1.0
X-COMMENT: Main-Class will be added automatically by build
Premain-Class: stringchanger.PatchingAgent

After building this jar we can try out our PoC application.
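Before running it, note that the key detail of transform(...) is its return contract: returning the replacement bytes rewrites the class, while returning null means "leave this class untouched". Since transform is a plain method, this contract can be exercised without starting an agent at all. The following self-contained sketch (my illustration, not the article's code; the byte array is a stand-in for a real patched class file) mirrors the agent's matching logic:

```java
import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

public class TransformContractDemo {

    // Minimal transformer mirroring PatchingAgent: returns replacement bytes
    // for the targeted class name, and null ("no change") for everything else.
    static class NamedTransformer implements ClassFileTransformer {
        private final String target;   // internal name, e.g. "stringchanger/A"
        private final byte[] patched;  // stand-in for a patched .class file

        NamedTransformer(String target, byte[] patched) {
            this.target = target;
            this.patched = patched;
        }

        @Override
        public byte[] transform(ClassLoader loader, String className,
                Class<?> classBeingRedefined, ProtectionDomain protectionDomain,
                byte[] classfileBuffer) {
            return target.equals(className) ? patched : null;
        }
    }

    public static void main(String[] args) {
        byte[] fake = { (byte) 0xCA, (byte) 0xFE };
        NamedTransformer t = new NamedTransformer("stringchanger/A", fake);
        // Only the targeted class is rewritten; all others pass through unchanged.
        System.out.println(t.transform(null, "stringchanger/A", null, null, new byte[0]) == fake);
        System.out.println(t.transform(null, "java/lang/String", null, null, new byte[0]) == null);
    }
}
```

Note that class names arrive in internal form with slashes ("stringchanger/A"), not dots, which is why the agent's CLASS_TO_PATCH constant is written that way.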
At first we will call it without the necessary JVM arguments to invoke the agent:

run:
A
BUILD SUCCESSFUL (total time: 0 seconds)

It behaves as expected and prints the output as defined by the unpatched class. Now we will try it with the magic JVM argument to invoke the agent, -javaagent:StringChanger.jar:

run:
Initializing hot patcher...
Reading patched file.
Patching... stringchanger/A
Apatched
BUILD SUCCESSFUL (total time: 0 seconds)

Voilà, the code was successfully patched on the fly! As we can see, it is possible to hot-patch a JVM dynamically without touching the delivered code. What has to be done is the development of a patching agent and a patched class. At this moment I'm not aware of any performance measurements, so I'm very unsure how practical this solution is for production systems and how far it influences application performance. To make it clear: this is not an elegant solution – it is in fact very dirty! The best way would be to patch the root cause, but as long as there is no vendor fix, developers can protect their software by hot-patching without rewriting every single line where the vulnerable classes are used. Finally, I would kindly ask for comments, improvements or simply better solutions. Many thanks to Juraj Somorovsky, who jointly works on this issue with me.

Reference: Patching Java at runtime from our JCG partner Christopher Meyer at the Java security and related topics blog....

Apache Mahout: Getting started

Recently I got an interesting problem to solve: how to classify text from different sources using automation? Some time ago I read about a project which does this, as well as a lot of other text analysis stuff – Apache Mahout. Though it's not a very mature project yet (the current version is 0.4), it's very powerful and scalable. Built on top of another excellent project, Apache Hadoop, it's capable of analyzing huge data sets.

So I did a small project in order to understand how Apache Mahout works. I decided to use Apache Maven 2 to manage all dependencies, so I will start with the POM file first:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.acme</groupId>
    <artifactId>mahout</artifactId>
    <version>0.94</version>
    <name>Mahout Examples</name>
    <description>Scalable machine learning library examples</description>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <apache.mahout.version>0.4</apache.mahout.version>
    </properties>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <encoding>UTF-8</encoding>
                    <source>1.6</source>
                    <target>1.6</target>
                    <optimize>true</optimize>
                </configuration>
            </plugin>
        </plugins>
    </build>

    <dependencies>
        <dependency>
            <groupId>org.apache.mahout</groupId>
            <artifactId>mahout-core</artifactId>
            <version>${apache.mahout.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.mahout</groupId>
            <artifactId>mahout-math</artifactId>
            <version>${apache.mahout.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.mahout</groupId>
            <artifactId>mahout-utils</artifactId>
            <version>${apache.mahout.version}</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>1.6.0</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-jcl</artifactId>
            <version>1.6.0</version>
        </dependency>
    </dependencies>
</project>

Then I looked into the Apache Mahout examples and the algorithms available for the text classification problem. The simplest and most accurate one is the Naive Bayes classifier. Here is a code snippet:

package org.acme;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.fs.Path;
import org.apache.mahout.classifier.ClassifierResult;
import org.apache.mahout.classifier.bayes.TrainClassifier;
import org.apache.mahout.classifier.bayes.algorithm.BayesAlgorithm;
import org.apache.mahout.classifier.bayes.common.BayesParameters;
import org.apache.mahout.classifier.bayes.datastore.InMemoryBayesDatastore;
import org.apache.mahout.classifier.bayes.exceptions.InvalidDatastoreException;
import org.apache.mahout.classifier.bayes.interfaces.Algorithm;
import org.apache.mahout.classifier.bayes.interfaces.Datastore;
import org.apache.mahout.classifier.bayes.model.ClassifierContext;
import org.apache.mahout.common.nlp.NGrams;

public class Starter {
    public static void main( final String[] args ) {
        final BayesParameters params = new BayesParameters();
        params.setGramSize( 1 );
        params.set( "verbose", "true" );
        params.set( "classifierType", "bayes" );
        params.set( "defaultCat", "OTHER" );
        params.set( "encoding", "UTF-8" );
        params.set( "alpha_i", "1.0" );
        params.set( "dataSource", "hdfs" );
        params.set( "basePath", "/tmp/output" );

        try {
            Path input = new Path( "/tmp/input" );
            TrainClassifier.trainNaiveBayes( input, "/tmp/output", params );

            Algorithm algorithm = new BayesAlgorithm();
            Datastore datastore = new InMemoryBayesDatastore( params );
            ClassifierContext classifier = new ClassifierContext( algorithm, datastore );
            classifier.initialize();

            final BufferedReader reader = new BufferedReader( new FileReader( args[ 0 ] ) );
            String entry = reader.readLine();
            while( entry != null ) {
                List< String > document = new NGrams( entry,
                        Integer.parseInt( params.get( "gramSize" ) ) )
                        .generateNGramsWithoutLabel();

                ClassifierResult result = classifier.classifyDocument(
                        document.toArray( new String[ document.size() ] ),
                        params.get( "defaultCat" ) );

                // the original snippet left result unused; print the assigned category
                System.out.println( entry + " => " + result.getLabel() );

                entry = reader.readLine();
            }
        } catch( final IOException ex ) {
            ex.printStackTrace();
        } catch( final InvalidDatastoreException ex ) {
            ex.printStackTrace();
        }
    }
}

There is one important note here: the system must be trained before starting classification. In order to do so, it's necessary to provide examples (the more, the better) of the different text classifications. These should be simple files where each line starts with the category, separated by a tab from the text itself. For example:

SUGGESTION	That's a great suggestion
QUESTION	Do you sell Microsoft Office?
...

The more files you provide, the more precise a classification you will get. All files must be put into the '/tmp/input' folder; they will be processed by Apache Hadoop first. :)

Reference: Getting started with Apache Mahout from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog....

MyBatis 3 – Spring integration tutorial

As a first step of this tutorial, Spring MVC 3 CRUD example with MyBatis 3, we will define a MyBatis service that will help us perform CRUD operations on the database. We have a domain class for User and a database table to store the User information in the database. We will use the XML configuration model for our example to define the SQL commands that perform the CRUD operations.

Our Domain class

package com.raistudies.domain;

import java.io.Serializable;

public class User implements Serializable {

    private static final long serialVersionUID = 3647233284813657927L;

    private String id;
    private String name = null;
    private String standard = null;
    private String age;
    private String sex = null;

    //setters and getters have been omitted to keep the code short

    @Override
    public String toString() {
        return "User [name=" + name + ", standard=" + standard
                + ", age=" + age + ", sex=" + sex + "]";
    }
}

We have five properties in our domain class User for which we have to provide database services.

Our Database Table

Following is our database table:

CREATE TABLE `user` (
    `id` varchar(36) NOT NULL,
    `name` varchar(45) DEFAULT NULL,
    `standard` varchar(45) DEFAULT NULL,
    `age` varchar(45) DEFAULT NULL,
    `sex` varchar(45) DEFAULT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Creating an interface for CRUD operations

For defining the CRUD database operations using MyBatis 3, we have to specify the methods that will be used to perform them. Following is the interface for our example:

package com.raistudies.persistence;

import java.util.List;

import com.raistudies.domain.User;

public interface UserService {

    public void saveUser(User user);
    public void updateUser(User user);
    public void deleteUser(String id);
    public List<User> getAllUser();
}

We have four methods here to create, update, delete and fetch users from the database.
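For completeness, the omitted setters and getters are plain bean accessors, which MyBatis relies on to populate the mapped properties. A reconstructed, compilable version of the domain class (the accessor bodies are my reconstruction; the fields and toString are from the tutorial) would look like this:

```java
import java.io.Serializable;

// Domain class with the elided bean accessors restored.
public class User implements Serializable {

    private static final long serialVersionUID = 3647233284813657927L;

    private String id;
    private String name = null;
    private String standard = null;
    private String age;
    private String sex = null;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getStandard() { return standard; }
    public void setStandard(String standard) { this.standard = standard; }
    public String getAge() { return age; }
    public void setAge(String age) { this.age = age; }
    public String getSex() { return sex; }
    public void setSex(String sex) { this.sex = sex; }

    @Override
    public String toString() {
        return "User [name=" + name + ", standard=" + standard
                + ", age=" + age + ", sex=" + sex + "]";
    }
}
```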
XML mapping file for the UserService interface

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">

<mapper namespace="com.raistudies.persistence.UserService">

    <resultMap id="result" type="user">
        <result property="id" column="id"/>
        <result property="name" column="name"/>
        <result property="standard" column="standard"/>
        <result property="age" column="age"/>
        <result property="sex" column="sex"/>
    </resultMap>

    <select id="getAllUser" resultMap="result">
        SELECT id,name,standard,age,sex FROM user;
    </select>

    <insert id="saveUser" parameterType="user">
        INSERT INTO user (id,name,standard,age,sex)
        VALUES (#{id},#{name},#{standard},#{age},#{sex})
    </insert>

    <update id="updateUser" parameterType="user">
        UPDATE user SET name = #{name}, standard = #{standard}, age = #{age}, sex = #{sex}
        WHERE id = #{id}
    </update>

    <delete id="deleteUser" parameterType="string">
        DELETE FROM user WHERE id = #{id}
    </delete>
</mapper>

You will see a lot of new things here:

- The mapping file contains the element <mapper/> to define the SQL statements for the services. Its "namespace" property names the interface for which this mapping file is defined.
- The <insert/> tag defines an insert operation. The value of the "id" property specifies the interface method for which the SQL statement is defined – here it is "saveUser". The "parameterType" property defines the type of the method parameter; we have used an alias for the class User here, which will be configured in the MyBatis configuration file later. Then we define the SQL statement itself. #{id} denotes that the property "id" of class User will be passed as a parameter to the SQL query.
- The <resultMap/> tag specifies the mapping between the User class and the user table. The id of <resultMap/> is a unique name for the mapping definition. Under this tag, we define which column is bound to which property.
- The <select/> tag specifies a select SQL statement. Again, the value of the "id" property names the interface method. The resultMap attribute defines the return type of the SQL statement as a collection of mapped objects.

MyBatis 3 configuration file

Following is our configuration file for MyBatis:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN" "http://mybatis.org/dtd/mybatis-3-config.dtd">

<configuration>
    <settings>
        <!-- changes from the defaults -->
        <setting name="lazyLoadingEnabled" value="false" />
    </settings>
    <typeAliases>
        <typeAlias type="com.raistudies.domain.User" alias="user"/>
    </typeAliases>
</configuration>

As you can see, we have not defined some very important things here:

- database connection properties,
- transaction related properties,
- and the mappers configuration.

MyBatis 3 is a very powerful SQL mapping framework with automatic generation of database access classes, using proxy implementations of the service interfaces defined by users. We will realize its true power when we integrate MyBatis 3 with the Spring framework and use these proxy implementations. It will reduce our database work by 80%.

Below, we will see how to integrate MyBatis 3 with the Spring 3 framework. Previously, we created the CRUD database service for class User using MyBatis 3. Now, we will integrate the data services implemented using MyBatis with the Spring framework.

Tools used:

- c3p0 – for providing pooled database connections
- mybatis-spring-1.0.0.jar – for integrating MyBatis with Spring (provided by the MyBatis team)
- Spring JDBC and Core libraries

To integrate these two frameworks, we have to follow the steps below.

Step 1: Defining the datasource as a Spring bean

As we will use the c3p0 data source provider, we have to define the datasource bean in Spring.
Following is the configuration snippet:

<!-- Declare a datasource that has pooling capabilities -->
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
      destroy-method="close"
      p:driverClass="${app.jdbc.driverClassName}"
      p:jdbcUrl="${app.jdbc.url}"
      p:user="${app.jdbc.username}"
      p:password="${app.jdbc.password}"
      p:acquireIncrement="10"
      p:idleConnectionTestPeriod="60"
      p:maxPoolSize="100"
      p:maxStatements="50"
      p:minPoolSize="10" />

Here we have created a Spring bean with id dataSource of class com.mchange.v2.c3p0.ComboPooledDataSource, which is provided by the c3p0 library for pooled data sources. We have set some properties on the bean:

- driverClass : driver class that will be used to connect to the database.
- jdbcUrl : JDBC url defining the database connection string.
- user : username of the database user.
- password : password of the database user.
- acquireIncrement : how many connections will be created at a time when there is a shortage of connections.
- idleConnectionTestPeriod : after how much idle time a connection will be closed if it is no longer in use.
- maxPoolSize : maximum number of connections that can be created.
- maxStatements : maximum number of SQL statements to be executed on a connection.
- minPoolSize : minimum number of connections to be created.

We have used Spring EL to define many of the property values, which will be brought in from a properties file.

Defining the Transaction Manager in Spring

We will use the Transaction Manager provided by the Spring JDBC framework, and for defining transaction levels we will use annotations.
Following is the configuration for the transaction manager:

<!-- Declare a transaction manager -->
<bean id="transactionManager"
      class="org.springframework.jdbc.datasource.DataSourceTransactionManager"
      p:dataSource-ref="dataSource" />

<!-- Enable annotation style of managing transactions -->
<tx:annotation-driven transaction-manager="transactionManager" />

Defining the MyBatis SqlSessionFactory and MapperScanner

<!-- define the SqlSessionFactory, notice that configLocation is not needed when you use MapperFactoryBean -->
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <property name="configLocation" value="WEB-INF/mybatis/sqlmap-config.xml" />
</bean>

<!-- scan for mappers and let them be autowired -->
<bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
    <property name="basePackage" value="${MapperInterfacePackage}" />
</bean>

The sqlSessionFactory bean will provide SessionFactory instances of MyBatis. To configure SqlSessionFactory, we need to define two properties: first the data source which MyBatis will use to create connections to the database, and second the MyBatis configuration file name used to configure the MyBatis environment.

MapperScannerConfigurer is used to publish the data service interfaces defined for MyBatis as Spring beans. We just have to provide the package in which the interfaces and their mapping XML files are defined; more than one package can be specified using comma or semicolon separation. After that, we will be able to get instances of UserService using the @Autowired annotation. We do not have to implement the interface, as MyBatis will provide a proxy implementation for it.
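The "proxy implementation" MyBatis hands to Spring works on the same principle as java.lang.reflect.Proxy: an object is synthesized at runtime that implements the mapper interface and routes each method call (identified by the method name, which matches the statement "id" in the mapping file) to the mapped SQL. The following self-contained sketch is not MyBatis code – the canned list stands in for real SQL execution – but it illustrates why no hand-written implementation class is needed:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.List;

public class MapperProxyDemo {

    // A stand-in for a mapper interface like UserService.
    interface GreetingMapper {
        List<String> getAllGreetings();
    }

    // The handler plays the role of MyBatis: it dispatches on the invoked
    // method name (the statement "id") and returns a canned result instead
    // of executing real SQL against a datasource.
    @SuppressWarnings("unchecked")
    static <T> T createMapper(Class<T> mapperInterface) {
        InvocationHandler handler = new InvocationHandler() {
            @Override
            public Object invoke(Object proxy, Method method, Object[] args) {
                if (method.getName().equals("getAllGreetings")) {
                    return Arrays.asList("hello", "world"); // pretend SELECT result
                }
                throw new UnsupportedOperationException(method.getName());
            }
        };
        return (T) Proxy.newProxyInstance(
                mapperInterface.getClassLoader(),
                new Class<?>[] { mapperInterface },
                handler);
    }

    public static void main(String[] args) {
        GreetingMapper mapper = createMapper(GreetingMapper.class);
        System.out.println(mapper.getAllGreetings());
    }
}
```

In the real integration, MapperScannerConfigurer registers one such proxy per scanned interface as a Spring bean, which is why @Autowired UserService "just works" without any implementation class.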
Spring configuration file all together

Here is our jdbc-context.xml:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/tx
           http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <context:property-placeholder location="/WEB-INF/jdbc.properties,/WEB-INF/mybatis/mybatis.properties" />

    <!-- Enable annotation style of managing transactions -->
    <tx:annotation-driven transaction-manager="transactionManager" />

    <!-- Declare a datasource that has pooling capabilities -->
    <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
          destroy-method="close"
          p:driverClass="${app.jdbc.driverClassName}"
          p:jdbcUrl="${app.jdbc.url}"
          p:user="${app.jdbc.username}"
          p:password="${app.jdbc.password}"
          p:acquireIncrement="10"
          p:idleConnectionTestPeriod="60"
          p:maxPoolSize="100"
          p:maxStatements="50"
          p:minPoolSize="10" />

    <!-- Declare a transaction manager -->
    <bean id="transactionManager"
          class="org.springframework.jdbc.datasource.DataSourceTransactionManager"
          p:dataSource-ref="dataSource" />

    <!-- define the SqlSessionFactory, notice that configLocation is not needed when you use MapperFactoryBean -->
    <bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
        <property name="dataSource" ref="dataSource" />
        <property name="configLocation" value="WEB-INF/mybatis/sqlmap-config.xml" />
    </bean>

    <!-- scan for mappers and let them be autowired -->
    <bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
        <property name="basePackage" value="${MapperInterfacePackage}" />
    </bean>
</beans>

The jdbc.properties file:

# database properties
app.jdbc.driverClassName=com.mysql.jdbc.Driver
app.jdbc.url=jdbc:mysql://localhost/mybatis-example
app.jdbc.username=root
app.jdbc.password=password

The mybatis.properties file:

MapperInterfacePackage=com.raistudies.persistence

Reference: Creating CRUD service using MyBatis 3 Mapping Framework – Part 1 & Integrating MyBatis 3 and Spring frameworks – Part 2 from our JCG partner Rahul Mondal at the Rai Studies blog....

JavaFX 2.0 Bar and Scatter Charts (and JavaFX 2.1 StackedBarCharts)

JavaFX 2.0 provides built-in capabilities for generating charts, a capability found within the javafx.scene.chart package. In this post, I look at creating bar charts and a scatter chart in using JavaFX 2.0. During the course of this post, I use Guava and some Java 7 features along the way. Before demonstrating the JavaFX 2.0 chart APIs, this first code listing shows the configuration of data to be used in the examples. In a more realistic scenario, I’d get this data from a data store, but in this case I simply include in directly in the source code for convenient access for the example. Although this code is not itself inherently related to the JavaFX 2.0 charting, there are some interesting things about it. The code makes use of Java 7’s underscores in numeric literals to make it easier to read the land size and populations of the sample states used in the data. The code listing also uses Guava‘s ImmutableMap class (covered in a previous post). Code that Configures the Sample Data /** Simple U.S. state representation. */ private enum State { ALASKA("Alaska"), CALIFORNIA("California"), COLORADO("Colorado"), NEW_YORK("New York"), RHODE_ISLAND("Rhode Island"), TEXAS("Texas"), WYOMING("Wyoming");private String stateName;State(final String newStateName) { this.stateName = newStateName; } }/** Simple Movie representation. */ private enum Movie { STAR_WARS("Star Wars"), EMPIRE_STRIKES_BACK("The Empire Strikes Back"), RAIDERS_OF_THE_LOST_ARK("Raiders of the Lost Ark"), INCEPTION("Inception"), CHRISTMAS_VACATION("Christmas Vacation"), CHRISTMAS_VACATION_2("Christmas Vacation 2"), FLETCH("Fletch");private String movieName;Movie(final String newMovieName) { this.movieName = newMovieName; } }/** Mapping of state name to area of state measured in kilometers. */ private final static Map<String, Long> statesLandSizeKm;/** Mapping of state name to estimated number of people living in that state. 
 */
private final static Map<String, Long> statesPopulation;

/** Normal audience movie ratings on Rotten Tomatoes. */
private final static Map<String, Double> movieRatingsNormal;

/** Critics movie ratings on Rotten Tomatoes. */
private final static Map<String, Double> movieRatingsCritics;

/** Dustin's movie ratings. */
private final static Map<String, Double> movieRatingsDustin;

/** Maximum population to be shown on bar charts of states' populations. */
private final static long POPULATION_RANGE_MAXIMUM = 40_000_000L;

/** Maximum land area (km) to be shown on bar charts of states' land areas. */
private final static long LAND_AREA_KM_MAXIMUM = 1_800_000L;

/** Maximum movie rating to be shown on bar charts. */
private final static double MOVIE_RATING_MAXIMUM = 10.0;

/** Width of chart. */
private final static int CHART_WIDTH = 750;

/** Height of chart. */
private final static int CHART_HEIGHT = 600;

/** Width of chart for Movie Ratings. */
private final static int MOVIE_CHART_WIDTH = CHART_WIDTH + 350;

/* Initialize final static variables.
 */
static
{
   statesLandSizeKm = ImmutableMap.<String, Long>builder()
      .put(State.ALASKA.stateName, 1_717_854L)
      .put(State.CALIFORNIA.stateName, 423_970L)
      .put(State.COLORADO.stateName, 269_601L)
      .put(State.NEW_YORK.stateName, 141_299L)
      .put(State.RHODE_ISLAND.stateName, 4_002L)
      .put(State.TEXAS.stateName, 695_621L)
      .put(State.WYOMING.stateName, 253_336L)
      .build();

   statesPopulation = ImmutableMap.<String, Long>builder()
      .put(State.ALASKA.stateName, 722_718L)
      .put(State.CALIFORNIA.stateName, 37_691_912L)
      .put(State.COLORADO.stateName, 5_116_769L)
      .put(State.NEW_YORK.stateName, 19_465_197L)
      .put(State.RHODE_ISLAND.stateName, 1_051_302L)
      .put(State.TEXAS.stateName, 25_674_681L)
      .put(State.WYOMING.stateName, 568_158L)
      .build();

   movieRatingsNormal = ImmutableMap.<String, Double>builder()
      .put(Movie.CHRISTMAS_VACATION.movieName, 8.3)
      .put(Movie.CHRISTMAS_VACATION_2.movieName, 1.3)
      .put(Movie.STAR_WARS.movieName, 9.3)
      .put(Movie.EMPIRE_STRIKES_BACK.movieName, 9.4)
      .put(Movie.RAIDERS_OF_THE_LOST_ARK.movieName, 9.3)
      .put(Movie.INCEPTION.movieName, 9.3)
      .put(Movie.FLETCH.movieName, 7.8)
      .build();

   movieRatingsCritics = ImmutableMap.<String, Double>builder()
      .put(Movie.CHRISTMAS_VACATION.movieName, 6.3)
      .put(Movie.CHRISTMAS_VACATION_2.movieName, 0.0)
      .put(Movie.STAR_WARS.movieName, 9.4)
      .put(Movie.EMPIRE_STRIKES_BACK.movieName, 9.7)
      .put(Movie.RAIDERS_OF_THE_LOST_ARK.movieName, 9.4)
      .put(Movie.INCEPTION.movieName, 8.6)
      .put(Movie.FLETCH.movieName, 7.5)
      .build();

   movieRatingsDustin = ImmutableMap.<String, Double>builder()
      .put(Movie.CHRISTMAS_VACATION.movieName, 7.0)
      .put(Movie.CHRISTMAS_VACATION_2.movieName, 0.0)
      .put(Movie.STAR_WARS.movieName, 9.5)
      .put(Movie.EMPIRE_STRIKES_BACK.movieName, 10.0)
      .put(Movie.RAIDERS_OF_THE_LOST_ARK.movieName, 10.0)
      .put(Movie.INCEPTION.movieName, 9.0)
      .put(Movie.FLETCH.movieName, 9.0)
      .build();
}

The next code listing demonstrates the boot-strapping of the sample application.
This includes the one-line main function that normally kicks off Java applications and the more interesting start(Stage) method that is overridden from the extended Application class. This code listing also makes use of Java 7's switch-on-Strings capability for a quick-and-dirty implementation of command-line argument parsing to run a particular chart generation demonstration. The example demonstrates that command-line arguments passed to Application.launch(String…) are available to the JavaFX application via the Application.getParameters() method, which returns an instance of nested Application.Parameters.

Code that Launches JavaFX 2.0 Charting Demonstration Example

/**
 * Start JavaFX application.
 *
 * @param stage First stage of JavaFX application.
 * @throws Exception
 */
@Override
public void start(final Stage stage) throws Exception
{
   final Parameters parameters = getParameters();  // command-line args
   final List<String> args = parameters.getUnnamed();
   final String firstArgument = !args.isEmpty() ? args.get(0) : "1";
   final int chartWidth = !firstArgument.equals("4") ? CHART_WIDTH : MOVIE_CHART_WIDTH;

   stage.setTitle("Building Bar Charts");
   final Group rootGroup = new Group();
   final Scene scene = new Scene(rootGroup, chartWidth, CHART_HEIGHT, Color.WHITE);
   stage.setScene(scene);

   switch (firstArgument)
   {
      case "1" :
         rootGroup.getChildren().add(buildVerticalLandAreaBarChart());
         break;
      case "2" :
         rootGroup.getChildren().add(buildVerticalPopulationBarChart());
         break;
      case "3" :
         rootGroup.getChildren().add(buildHorizontalPopulationBarChart());
         break;
      case "4" :
         rootGroup.getChildren().add(buildVerticalMovieRatingsBarChart());
         break;
      case "5" :
         rootGroup.getChildren().add(buildStatesLandSizePopulationScatterChart());
         break;
      default :
         rootGroup.getChildren().add(buildVerticalLandAreaBarChart());
   }
   stage.show();
}

/**
 * Main function for demonstrating JavaFX 2.0 bar chart and scatter chart.
 *
 * @param arguments Command-line arguments: none expected.
 */
public static void main(final String[] arguments)
{
   Application.launch(arguments);
}

With the data to populate the charts configured and the basic JavaFX application boot-strapping demonstrated, it's time to start looking at use of the actual JavaFX 2.0 chart APIs. As the code above shows, the first option ("1") leads to generation of a vertical bar chart depicting the sample states' relative land areas in kilometers. The methods executed for that example are shown next.

Generating Vertical Bar Chart with States' Land Areas

/**
 * Build ObservableList of XYChart.Series instances mapping state names to
 * land areas.
 *
 * @return ObservableList of XYChart.Series instances mapping state names to
 *    land areas.
 */
public ObservableList<XYChart.Series<String,Long>> buildStatesToLandArea()
{
   final ObservableList<XYChart.Data<String,Long>> statesToLandArea = FXCollections.observableArrayList();
   for (final State state : State.values())
   {
      final XYChart.Data<String,Long> stateAreaData =
         new XYChart.Data<String,Long>(state.stateName, statesLandSizeKm.get(state.stateName));
      statesToLandArea.add(stateAreaData);
   }
   final XYChart.Series<String, Long> landSeries =
      new XYChart.Series<String, Long>(statesToLandArea);
   final ObservableList<XYChart.Series<String, Long>> series = FXCollections.observableArrayList();
   landSeries.setName("State Land Size (km)");
   series.add(landSeries);
   return series;
}

/**
 * Provides a CategoryAxis instantiated with sample states' names.
 *
 * @return CategoryAxis with sample states' names.
 */
public CategoryAxis buildStatesNamesCategoriesAxis()
{
   final ObservableList<String> stateNames = FXCollections.observableArrayList();
   stateNames.addAll(
      State.ALASKA.stateName, State.CALIFORNIA.stateName, State.COLORADO.stateName,
      State.NEW_YORK.stateName, State.RHODE_ISLAND.stateName, State.TEXAS.stateName,
      State.WYOMING.stateName);
   final CategoryAxis categoryAxis = new CategoryAxis(stateNames);
   categoryAxis.setLabel("State");
   categoryAxis.setMinWidth(CHART_WIDTH);
   return categoryAxis;
}

/**
 * Build vertical bar chart comparing land areas of sample states.
 *
 * @return Vertical bar chart comparing land areas of sample states.
 */
public XYChart buildVerticalLandAreaBarChart()
{
   final ValueAxis landAreaAxis = new NumberAxis(0, LAND_AREA_KM_MAXIMUM, 50_000);
   final BarChart landAreaBarChart =
      new BarChart(buildStatesNamesCategoriesAxis(), landAreaAxis, buildStatesToLandArea());
   landAreaBarChart.setMinWidth(CHART_WIDTH);
   landAreaBarChart.setMinHeight(CHART_HEIGHT);
   landAreaBarChart.setTitle("Land Area (in km) of Select U.S. States");
   return landAreaBarChart;
}

The above snippet of code shows the three methods I used to generate a bar chart. The method at the bottom, buildVerticalLandAreaBarChart(), instantiates a NumberAxis for the chart's y-axis and uses that implementation of ValueAxis in instantiating a BarChart. The BarChart instantiation invokes the other two methods in the code snippet to create the x-axis with states' names and to prepare the data in ObservableList<XYChart.Series<String,Long>> format to be used in the chart generation. The generated chart is shown next. Similar code can lead to a similar chart for depicting the populations of the sample states. The code for doing this is shown next and is followed by the screen snapshot of the generated chart.
Generating Vertical Bar Chart with States' Populations

// method buildStatesNamesCategoriesAxis() was shown in previous code listing

/**
 * Build one or more series of XYChart Data representing state names as 'x'
 * portion and state populations as 'y' portion. This method is likely to be
 * used in vertical presentations where state names are desired on the x-axis
 * and population numbers are desired on the y-axis.
 *
 * @return Series of XYChart Data representing state names as 'x' portion and
 *    state populations as 'y' portion.
 */
public ObservableList<XYChart.Series<String,Long>> buildStatesToPopulation()
{
   final ObservableList<XYChart.Data<String,Long>> statesToPopulation = FXCollections.observableArrayList();
   for (final State state : State.values())
   {
      final XYChart.Data<String,Long> statePopulationData =
         new XYChart.Data<String,Long>(state.stateName, statesPopulation.get(state.stateName));
      statesToPopulation.add(statePopulationData);
   }
   final XYChart.Series<String, Long> populationSeries =
      new XYChart.Series<String, Long>(statesToPopulation);
   final ObservableList<XYChart.Series<String, Long>> series = FXCollections.observableArrayList();
   populationSeries.setName("State Population");
   series.add(populationSeries);
   return series;
}

/**
 * Build vertical bar chart comparing populations of sample states.
 *
 * @return Vertical bar chart comparing populations of sample states.
 */
public XYChart buildVerticalPopulationBarChart()
{
   final ValueAxis populationAxis = new NumberAxis(0, POPULATION_RANGE_MAXIMUM, 2_000_000);
   final BarChart populationBarChart =
      new BarChart(buildStatesNamesCategoriesAxis(), populationAxis, buildStatesToPopulation());
   populationBarChart.setMinWidth(CHART_WIDTH);
   populationBarChart.setMinHeight(CHART_HEIGHT);
   populationBarChart.setTitle("Population of Select U.S. States");
   return populationBarChart;
}

The previous two diagrams were vertical bar charts.
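As an aside, the two Java 7 language features leaned on above, underscores in numeric literals and switch on Strings, can be tried out in isolation. Here is a minimal sketch (plain Java, no JavaFX; the class and method names are mine, not from the post):

```java
/** Minimal demo of Java 7 underscores in literals and switch on Strings. */
public class Java7FeaturesDemo
{
   /** Underscores are purely visual separators: the compiled value is unchanged. */
   static long alaskaAreaKm()
   {
      return 1_717_854L;  // same value as 1717854L
   }

   /** Quick-and-dirty argument dispatch in the style of the post's start() method. */
   static String chartNameFor(final String firstArgument)
   {
      switch (firstArgument)
      {
         case "1" : return "vertical land-area bar chart";
         case "4" : return "movie-ratings bar chart";
         case "5" : return "land-area/population scatter chart";
         default  : return "vertical land-area bar chart";
      }
   }

   public static void main(final String[] arguments)
   {
      System.out.println(alaskaAreaKm() == 1717854L);
      System.out.println(chartNameFor("5"));
   }
}
```

Note that, as in the post's start() method, the default case makes any unrecognized argument fall back to the first chart.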
The next example uses the same state population sample data as the last example, but depicts it with a horizontal bar chart rather than a vertical one. Note that the same method is used for generating the axis with states' names as in the last two examples, but its result is passed as the second argument to the BarChart constructor rather than as the first argument. This change of argument order changes the chart from vertical to horizontal. In other words, having a CategoryAxis as the first argument and a ValueAxis as the second argument to the BarChart constructor leads to a vertical chart; switching the order of those two types of Axis leads to a horizontal chart. I also had to switch the order of the mapping of the data being charted so that the key portion was the population and the value portion was the state name. The code is followed by the output.

Generating Horizontal Bar Chart with States' Populations

// method buildStatesNamesCategoriesAxis() was shown in previous code listings

/**
 * Build one or more series of XYChart Data representing population as 'x'
 * portion and state names as 'y' portion. This method is likely to be used
 * in horizontal presentations where state names are desired on the y-axis
 * and population numbers are desired on the x-axis.
 *
 * @return Series of XYChart Data representing population as 'x' portion and
 *    state names as 'y' portion.
 */
public ObservableList<XYChart.Series<Long,String>> buildPopulationToStates()
{
   final ObservableList<XYChart.Data<Long,String>> statesToPopulation = FXCollections.observableArrayList();
   for (final State state : State.values())
   {
      final XYChart.Data<Long,String> statePopulationData =
         new XYChart.Data<Long,String>(statesPopulation.get(state.stateName), state.stateName);
      statesToPopulation.add(statePopulationData);
   }
   final XYChart.Series<Long, String> populationSeries =
      new XYChart.Series<Long, String>(statesToPopulation);
   final ObservableList<XYChart.Series<Long, String>> series = FXCollections.observableArrayList();
   populationSeries.setName("State Population");
   series.add(populationSeries);
   return series;
}

/**
 * Build horizontal bar chart comparing populations of sample states.
 *
 * @return Horizontal bar chart comparing populations of sample states.
 */
public XYChart buildHorizontalPopulationBarChart()
{
   final ValueAxis populationAxis = new NumberAxis(0, POPULATION_RANGE_MAXIMUM, 2_000_000);
   final BarChart populationBarChart =
      new BarChart(populationAxis, buildStatesNamesCategoriesAxis(), buildPopulationToStates());
   populationBarChart.setMinWidth(CHART_WIDTH);
   populationBarChart.setTitle("Population of Select U.S. States");
   return populationBarChart;
}

For all of these examples of generating a bar chart, I've made use of JavaFX's XYChart. It turns out that ScatterChart also extends XYChart, so its use is similar to that of BarChart. The big difference in this case (ScatterChart) is that two value-oriented axes exist. In other words, instead of using states' names for the x-axis (vertical) or for the y-axis (horizontal), each axis is based on values (land area for the x-axis and population for the y-axis). These types of charts are often used to visually determine the degree of correlation between the data. The code for generating this chart and the output it generates are shown next.
Generating Scatter Chart of State Population to State Land Size

/**
 * Build mapping of land area to population for each state.
 *
 * @return Mapping of land area to population for each sample state.
 */
public ObservableList<XYChart.Series<Long,Long>> buildAreaToPopulation()
{
   final ObservableList<XYChart.Data<Long,Long>> areaToPopulation = FXCollections.observableArrayList();
   for (final State state : State.values())
   {
      final XYChart.Data<Long,Long> areaPopulationData =
         new XYChart.Data<Long,Long>(
            statesLandSizeKm.get(state.stateName), statesPopulation.get(state.stateName));
      areaToPopulation.add(areaPopulationData);
   }
   final XYChart.Series<Long, Long> areaPopulationSeries =
      new XYChart.Series<Long, Long>(areaToPopulation);
   final ObservableList<XYChart.Series<Long, Long>> series = FXCollections.observableArrayList();
   areaPopulationSeries.setName("State Land Area and Population");
   series.add(areaPopulationSeries);
   return series;
}

/**
 * Build a Scatter Chart depicting correlation between land area and population
 * for each state.
 *
 * @return Scatter Chart depicting correlation between land area and population
 *    for each state.
 */
public XYChart buildStatesLandSizePopulationScatterChart()
{
   final ValueAxis xAxis = new NumberAxis(0, LAND_AREA_KM_MAXIMUM, 50_000);
   xAxis.setLabel("Land Area (km)");
   final ValueAxis yAxis = new NumberAxis(0, POPULATION_RANGE_MAXIMUM, 2_000_000);
   yAxis.setLabel("Population");
   final ScatterChart xyChart = new ScatterChart(xAxis, yAxis, buildAreaToPopulation());
   xyChart.setMinHeight(CHART_HEIGHT);
   return xyChart;
}

The scatter chart helps to visually determine whether there is any correlation between a state's land size and its population. Partially because Alaska and Wyoming are included in the set of sample states, there is not much of a correlation. There is much more that can be done to style the JavaFX ScatterChart. It is sometimes useful to see more than one series plotted against the same bar chart.
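Before moving on, the "not much of a correlation" reading from the scatter chart can also be checked numerically. Here is a minimal sketch (plain Java, no JavaFX; the class name and the Pearson helper are mine) computing Pearson's r over the same seven land-area/population pairs:

```java
/** Check the scatter chart's visual impression of weak correlation with Pearson's r. */
public class StateCorrelation
{
   /** Pearson correlation coefficient of two equal-length samples. */
   static double pearson(final double[] xs, final double[] ys)
   {
      final int n = xs.length;
      double sumX = 0, sumY = 0;
      for (int i = 0; i < n; i++) { sumX += xs[i]; sumY += ys[i]; }
      final double meanX = sumX / n, meanY = sumY / n;
      double cov = 0, varX = 0, varY = 0;
      for (int i = 0; i < n; i++)
      {
         final double dx = xs[i] - meanX, dy = ys[i] - meanY;
         cov += dx * dy;
         varX += dx * dx;
         varY += dy * dy;
      }
      return cov / Math.sqrt(varX * varY);
   }

   public static void main(final String[] args)
   {
      // Same sample data as the post: land area (km) and population per state,
      // in the order Alaska, California, Colorado, New York, Rhode Island, Texas, Wyoming.
      final double[] areaKm =
         {1_717_854, 423_970, 269_601, 141_299, 4_002, 695_621, 253_336};
      final double[] population =
         {722_718, 37_691_912, 5_116_769, 19_465_197, 1_051_302, 25_674_681, 568_158};
      System.out.printf("Pearson r = %.2f%n", pearson(areaKm, population));
   }
}
```

For this sample, |r| comes out well under 0.5, agreeing with the visual impression: large-but-sparse Alaska and Wyoming wash out any land-size/population relationship.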
To illustrate multiple series in the same bar chart, I'm going to change the data being used. Instead of using data on states and their sizes and populations, I'm going to use the data shown in the original code listing related to movie ratings. In particular, there are three series here: critics' ratings, "normal" audience members' ratings, and my own ratings. As with the previous examples, I show the code first (the most interesting part is the method buildRatingsToMovieTitle()), followed by the output.

Generating Movie Rating Bar Chart with Multiple Series (Multiple Rating Groups)

/**
 * Build one or more series of XYChart Data representing movie names as 'x'
 * portion and movie ratings as 'y' portion. This method is likely to be
 * used in vertical presentations where movie names are desired on the x-axis
 * and movie ratings are desired on the y-axis. This method illustrates
 * multiple series as ratings for both normal audience members and critics
 * are shown.
 *
 * @return Series of XYChart Data representing movie names as 'x' portion
 *    and movie ratings as 'y' portion.
 */
public ObservableList<XYChart.Series<String,Double>> buildRatingsToMovieTitle()
{
   final ObservableList<XYChart.Data<String,Double>> normalRatings = FXCollections.observableArrayList();
   final ObservableList<XYChart.Data<String,Double>> criticRatings = FXCollections.observableArrayList();
   final ObservableList<XYChart.Data<String,Double>> dustinRatings = FXCollections.observableArrayList();
   for (final Movie movie : Movie.values())
   {
      final XYChart.Data<String,Double> normalRatingsData =
         new XYChart.Data<String,Double>(movie.movieName, movieRatingsNormal.get(movie.movieName));
      normalRatings.add(normalRatingsData);
      final XYChart.Data<String,Double> criticRatingsData =
         new XYChart.Data<String,Double>(movie.movieName, movieRatingsCritics.get(movie.movieName));
      criticRatings.add(criticRatingsData);
      final XYChart.Data<String,Double> dustinRatingsData =
         new XYChart.Data<String,Double>(movie.movieName, movieRatingsDustin.get(movie.movieName));
      dustinRatings.add(dustinRatingsData);
   }
   final XYChart.Series<String, Double> normalSeries =
      new XYChart.Series<String, Double>(normalRatings);
   normalSeries.setName("Normal Audience");
   final XYChart.Series<String, Double> criticSeries =
      new XYChart.Series<String, Double>(criticRatings);
   criticSeries.setName("Critics");
   final XYChart.Series<String, Double> dustinSeries =
      new XYChart.Series<String, Double>(dustinRatings);
   dustinSeries.setName("Dustin");
   final ObservableList<XYChart.Series<String, Double>> series = FXCollections.observableArrayList();
   series.add(normalSeries);
   series.add(criticSeries);
   series.add(dustinSeries);
   return series;
}

/**
 * Build vertical bar chart comparing movie ratings to demonstrate multiple
 * series used in a single chart.
 *
 * @return Vertical bar chart comparing movie ratings.
 */
public XYChart buildVerticalMovieRatingsBarChart()
{
   final ValueAxis ratingAxis = new NumberAxis(0, MOVIE_RATING_MAXIMUM, 1.0);
   final BarChart ratingBarChart =
      new BarChart(buildMovieRatingsAxis(), ratingAxis, buildRatingsToMovieTitle());
   ratingBarChart.setMinWidth(MOVIE_CHART_WIDTH);
   ratingBarChart.setMinHeight(CHART_HEIGHT);
   ratingBarChart.setTitle("Movie Ratings");
   return ratingBarChart;
}

JavaFX 2.1 beta includes a couple of new charts, including the StackedBarChart. The stacked bar chart implies multiple series, so I will adapt the last example to use one of these. The stacked bar chart will show each of the three rating sources contributing to a single bar per movie rather than three bars per movie as in the last example.

Generating StackedBarChart of Movie Ratings

/**
 * Build one or more series of XYChart Data representing movie names as 'x'
 * portion and movie ratings as 'y' portion. This method is likely to be
 * used in vertical presentations where movie names are desired on the x-axis
 * and movie ratings are desired on the y-axis. This method illustrates
 * multiple series as ratings for both normal audience members and critics
 * are shown.
 *
 * @return Series of XYChart Data representing movie names as 'x' portion
 *    and movie ratings as 'y' portion.
 */
public ObservableList<XYChart.Series<String,Double>> buildRatingsToMovieTitle()
{
   final ObservableList<XYChart.Data<String,Double>> normalRatings = FXCollections.observableArrayList();
   final ObservableList<XYChart.Data<String,Double>> criticRatings = FXCollections.observableArrayList();
   final ObservableList<XYChart.Data<String,Double>> dustinRatings = FXCollections.observableArrayList();
   for (final Movie movie : Movie.values())
   {
      final XYChart.Data<String,Double> normalRatingsData =
         new XYChart.Data<String,Double>(movie.movieName, movieRatingsNormal.get(movie.movieName));
      normalRatings.add(normalRatingsData);
      final XYChart.Data<String,Double> criticRatingsData =
         new XYChart.Data<String,Double>(movie.movieName, movieRatingsCritics.get(movie.movieName));
      criticRatings.add(criticRatingsData);
      final XYChart.Data<String,Double> dustinRatingsData =
         new XYChart.Data<String,Double>(movie.movieName, movieRatingsDustin.get(movie.movieName));
      dustinRatings.add(dustinRatingsData);
   }
   final XYChart.Series<String, Double> normalSeries =
      new XYChart.Series<String, Double>(normalRatings);
   normalSeries.setName("Normal Audience");
   final XYChart.Series<String, Double> criticSeries =
      new XYChart.Series<String, Double>(criticRatings);
   criticSeries.setName("Critics");
   final XYChart.Series<String, Double> dustinSeries =
      new XYChart.Series<String, Double>(dustinRatings);
   dustinSeries.setName("Dustin");
   final ObservableList<XYChart.Series<String, Double>> series = FXCollections.observableArrayList();
   series.add(normalSeries);
   series.add(criticSeries);
   series.add(dustinSeries);
   return series;
}

/**
 * Build a Stacked Bar Chart depicting total ratings of each movie based on
 * contributions of three ratings groups.
 *
 * @return Stacked Bar Chart depicting three rating groups' contributions
 *    to overall movie rating.
 */
public XYChart buildStackedMovieRatingsBarChart()
{
   final ValueAxis ratingAxis = new NumberAxis(0, MOVIE_RATING_MAXIMUM*3, 2.5);
   final StackedBarChart ratingBarChart =
      new StackedBarChart(buildMovieRatingsAxis(), ratingAxis, buildRatingsToMovieTitle());
   ratingBarChart.setMinWidth(MOVIE_CHART_WIDTH);
   ratingBarChart.setMinHeight(CHART_HEIGHT);
   ratingBarChart.setTitle("Movie Ratings");
   return ratingBarChart;
}

The stacked bar chart is helpful because it provides a quick view of the overall composite rating of each movie along with how much each reviewer group contributed to that overall rating.

JavaFX 2.0 Charts Documentation

The Using JavaFX Charts tutorial covers code examples and corresponding generated chart images for different types of JavaFX 2.0 charts such as pie chart, line chart, area chart, bubble chart, scatter chart, and bar chart. This tutorial also provides sections on using CSS to style charts and information on preparing chart data and generating custom charts.

Conclusion

This post has demonstrated use of the JavaFX charts package to generate bar charts, scatter charts, and stacked bar charts. When JavaFX is accepted as a standard part of the Java SE SDK, it will bring a standard mechanism for generating charts in Java to the SDK.

Reference: JavaFX 2.0 Bar and Scatter Charts (and JavaFX 2.1 StackedBarCharts) from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Play framework on the cloud made easy: Openshift module

Just a couple of years ago, finding an affordable hosting solution for a Java web application was a hard task, and looking for a free one was an impossible mission. Not to mention that even thinking about things like auto-scaling, one-command deploy, continuous integration, and that sort of stuff was plain science fiction. This last year has witnessed a cloud revolution, and nowadays there's a really appealing number of alternatives to choose from. It seemed like every medium-to-large IT player had to come out with their own Platform as a Service (PaaS) cloud offering. In this scenario, an offering from Red Hat couldn't go unnoticed. Red Hat engineers really know a lot about managing servers, and, luckily for us, they also know a lot about Java web applications. Fortunately, they took the challenge, and what they have to offer certainly doesn't disappoint.

Isn't that panda bear cute?

So, here comes Openshift. Openshift is Red Hat's free, auto-scaling, cloud-based Platform as a Service for Java, Perl, PHP, Python, and Ruby applications. It's a quickly evolving platform that has managed to shape a vibrant and helpful community supporting it. Moreover, its free offering largely surpasses anything that the competition has to offer. Just by entering your email and choosing a password, you get five application namespaces, each of them with a git repository and half a GB of data (code + database) to use as you like. Add to that support for MySQL (with phpMyAdmin), PostgreSQL, MongoDB 2.0 (with MongoRock) and even a fully functional Jenkins instance to have a continuous integration environment. Deploying a Java web application to openshift is really easy: just git add, git commit, git push… and that's it. But we play developers, spoiled by our beloved framework as we are, would rather just type something like play rhc:deploy and forget about it. That's what the openshift module for play framework is about.
The short story

So you have everything set up to deploy a play framework application to openshift. That means you have installed JDK 1.6 or 1.5, play framework, ruby, ruby gems, and the openshift client tools, and that you have signed up at openshift and also created a domain. In that case, you just have to:

$ play install openshift

and then

$ play new <my app> --with openshift
$ cd <my app>
$ play rhc:deploy -o

… and that's it. Your application is ready… and running on Openshift!

Every time you want to deploy your changes to openshift, just issue play rhc:deploy -o once again. The -o parameter just tells the module to open your application in a web browser right after deployment.

From zero to the cloud

Just as a reminder to myself, here are the steps required to go from a bare linux installation to deployment on openshift:

1. Install Java JDK 1.6

On debian based linux distributions (like ubuntu, mint and others):

$ sudo apt-get install openjdk-6-jdk

On rpm based linux distributions (like fedora, red hat, centos, and others):

$ sudo yum install java-1.6.0-openjdk-devel.i686

2. Install play framework

Here's my quick and dirty list of commands to install play framework.

$ cd ~
$ mkdir dev
$ cd dev
$ wget http://download.playframework.org/releases/play-1.2.4.zip
$ unzip play-1.2.4.zip
$ echo "export PATH=$PATH:~/dev/play-1.2.4" >> ~/.profile
$ source ~/.profile

And then test it with:

$ play version
~        _            _
~  _ __ | | __ _ _  _| |
~ | '_ \| |/ _' | || |_|
~ |  __/|_|\____|\__ (_)
~ |_|            |__/
~
~ play! 1.2.4, http://www.playframework.org
~
1.2.4

Note: If you are running on fedora, you might need to issue sudo yum remove sox, because the sox package comes with its own play command that conflicts with play framework.

3. Sign up for openshift

Go to https://openshift.redhat.com/app/user/new/express, enter your email and choose a password.

4. Install git and ruby gems

On a Debian based linux distro:

$ sudo apt-get install git ruby rubygems

Rpm version:

$ sudo yum install git ruby rubygems

5. Install openshift client tools

Once you have installed ruby gems, installing red hat cloud tools is as easy as:

$ sudo gem install rhc

6. Create a domain

Your domain namespace is used to help identify your applications and as part of the URLs to your applications. It's unique to you across all of openshift. For example, let's say you have the namespace awesome; when you create a new app called wicked, you'll find it at http://wicked-awesome.rhcloud.com. When you create a new app called freakin, it'll be at http://freakin-awesome.rhcloud.com. So go to your openshift control panel at https://openshift.redhat.com/app/control_panel and click on edit in the NAMESPACE section. Then enter something like playdemo (well, that one is already taken) and click save.

7. Create and register your SSH keys

Now you'll have to create a pair of keys, a private and a public one, so that openshift can validate that it's really you trying to push something to the remote git repository. Just follow the steps at http://help.github.com/linux-set-up-git/. You just have to open a terminal and then:

$ cd ~/.ssh

If you get a No such file or directory error, don't worry; it means that you don't have any SSH key on your system. On the other hand, if you already have an SSH key, it would be a good idea to make a backup.

$ ssh-keygen -t rsa -C "<my email>"
Generating public/private rsa key pair.
Enter file in which to save the key (/home/sas/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/sas/.ssh/id_rsa.
Your public key has been saved in /home/sas/.ssh/id_rsa.pub.
The key fingerprint is:
22:7b:cd:f3:98:4f:92:de:80:1d:ad:d6:ea:73:20:c2 <my email>
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|        .        |
|  ..  . S .      |
|  Eo.*.=         |
|  ..o.@.o        |
|   . o.@.
|
|    .*++         |
+-----------------+

And then you can set up your username and email, like this:

$ git config --global user.name "<my name>"
$ git config --global user.email "<my email>"

Now you have to register this key at openshift. Just copy the content of id_rsa.pub (be careful not to copy the file id_rsa; it's your private key, and you should keep that to yourself) and add it as a new SSH KEY from your control panel. On Fedora it's pretty annoying having to enter your passphrase on every git operation. To avoid it, just run ssh-add and enter your passphrase for the last time. Alternatively, you can also use the following command:

$ rhc-create-domain -l <your email> -p <your password> -n <pick a domain>

and let openshift create a pair of private and public keys as libra_id_rsa and libra_id_rsa.pub in your .ssh/ directory. I had a couple of conflicts between my own SSH keys and the libra ones created by openshift, so I prefer to handle the SSH keys myself.

Note: You won't be able to push anything to your git repository unless you have a valid public key registered at openshift. Take into account that you can add as many keys as needed. Go to your control panel at https://openshift.redhat.com/app/control_panel to check that everything is right.

Going to the cloud

And now, yes, we are ready to deploy our play framework application to the cloud.

$ play install openshift
$ play new <my app> --with openshift
$ cd <my app>

Now, for every command, you'll have to enter at least your username and password. You can spare yourself this trouble by adding the following keys to your conf/application.conf file:

# Openshift module configuration
openshift.rhlogin=<my login>
openshift.password=<my password>

After that you should check that you have installed all the prerequisites. Just run:

$ play rhc:chk

It will check for a java 1.6 or 1.5 install, git, ruby, rubygems, and openshift client tools 0.84.15 or higher.
It will also check that the application exists on openshift (otherwise it will ask you to create it), and finally it will check that you have a local git repository pointing at the remote repository at openshift. Then you can deploy your app with:

$ play rhc:deploy -o

The first deploy will take quite some time, because the module has to upload all of the play framework libraries. After that initial deploy, subsequent commits will be much faster, because git is smart enough to send only the changed files. Moreover, the module will ask your permission to create the app on openshift, and also to create a local repo. If you just want the script to create everything without asking for permission, just add a --bypass or -b parameter to the command. Your application will now be available at http://<my app>-<my domain>.rhcloud.com.

If you have already deployed your application to openshift, and you just want to retrieve it from your remote git repository, just issue:

$ play rhc:fetch

Take into account that this is a destructive operation. It will completely remove your local application and replace it with the contents of your remote repository.

To have a look at your server logs, issue:

$ play rhc:logs

Having a look at openshift log files with "play rhc:logs"

To display information about your applications on openshift, run:

$ play rhc:info

which is just a short-hand for the rhc-domain-info command. You can open your application at openshift anytime by issuing:

$ play rhc:open

which is also a short-hand for opening a web browser at http://<my app>-<my domain>.rhcloud.com. Finally, if you want to remove your application from openshift, just run:

$ play rhc:destroy

Installing the openshift module

There are two ways to install the openshift module. One is just to issue play install openshift, which will install the module directly with your framework, at <play install folder>/modules/openshift-0.1.0.
That way it will be available to every app you create with $ play new my-app --with openshift The other way is to manually configure it as a dependency. Just add the following line to your conf/dependencies.yml file: # Application dependencies require: - play - play -> openshift 0.1.0 And then issue play deps Note: play keeps a cache of fetched dependencies at ~/.ivy2/cache. If you are having trouble with dependencies just clean that directory and try again. Along with the module there’s a sample application at <openshift module folder>/samples_and_tests/openshift-demo. Just go to that folder and issue play deps and then play run to see it running locally. It just displays the play configuration and the host environment variables to let you check that your app is running on openshift. (Openshift module demo application) Then run play rhc:chk to verify that you have installed all the prerequisites. After that, issue play rhc:deploy -o to create your remote application at openshift, create a local git repo, package your app as a war file, commit your new app, and deploy to openshift. Thanks to the -o parameter the module will open your openshift app in a web browser after deployment. Getting help You can have a look at the module’s commands by issuing: $ play help ~ play! 1.2.4, http://www.playframework.org ~ [...] ~ Modules commands: ~ ~~~~~~~~~~~~~~~~~ ~ rhc:chk Check openshift prerequisites, application and git repo. ~ rhc:deploy Deploys application on openshift. ~ rhc:destroy Destroys application on openshift. ~ rhc:fetch Fetches application from remote openshift repository. ~ rhc:info Displays information about user and configured applications. ~ rhc:logs Show the logs of the application on openshift. ~ rhc:open Opens the application deployed on openshift in web browser. 
~ ~ Also refer to documentation at http://www.playframework.org/documentation Then you can get more help about parameters with the -h or --help parameter: $ play rhc:chk -h ~ play! 1.2.4, http://www.playframework.org ~ Usage: play [options] Options: -h, --help show this help message and exit -a APP, --app=APP Application name (alphanumeric) (required) -s SUBDOMAIN, --subdomain=SUBDOMAIN Application subdomain, root by default (alphanumeric) (optional) -l RHLOGIN, --rhlogin=RHLOGIN Red Hat login (RHN or OpenShift login with OpenShift Express access) -p PASSWORD, --password=PASSWORD RHLogin password (optional, will prompt) -d, --debug Print Debug info -m MESSAGE, --message=MESSAGE Commit message --timeout=TIMEOUT Timeout, in seconds, for connection -o, --open Open site after deploying -b, --bypass Bypass warnings You can also specify these options in the conf/application.conf file with the following keys: openshift.rhlogin: Red Hat login (RHN or OpenShift login with OpenShift Express access) openshift.password: RHLogin password (optional, will prompt) openshift.application.name: Application name (alphanumeric) (required) openshift.application.subdomain: Application subdomain, root by default (alphanumeric) openshift.debug: Print Debug info openshift.timeout: Timeout, in seconds, for connection You can see all versions of the module at the openshift module’s page at http://www.playframework.org/modules/openshift. You can check the documentation at http://www.playframework.org/modules/openshift-0.1.0/home, or by running your app locally in dev mode with play run, and then going to http://localhost:9000/@documentation/modules/openshift/home. (Browsing module documentation locally) You can ask questions at the play framework discussion list at https://groups.google.com/group/play-framework, or you can try its Spanish cousin at https://groups.google.com/group/play-latam. 
Known issues Unfortunately, right now the openshift module doesn’t work on Windows. That’s because the module issues many git commands, and you can’t do that on Windows from the standard shell; it requires a special “git bash” prompt. Further steps In the next version I’ll be exploring the possibility of building a Java-only version of the module using openshift’s Java API. That way we won’t need git, ruby, or the rhc tools installation. Moreover, we should be able to use it all from Windows as well. Resources Play framework openshift module page: http://www.playframework.org/modules/openshift Latest version: http://www.playframework.org/modules/openshift-0.1.0/home Project at github: https://github.com/opensas/openshift Detailed tutorial about how to deploy a Play Framework application to openshift: https://github.com/opensas/play-demo/wiki/Step-12.5—deploy-to-openshift Excellent tutorial about deploying java applications to openshift: https://gist.github.com/1637464#file_tutorial.rst A couple of articles on jboss planet: http://planet.jboss.org/post/let_s_play_on_the_red_hat_cloud_using_the_play_framework_on_openshift_express_with_jboss_as_7 https://community.jboss.org/blogs/thomas.heute/2011/06/29/play-framework-on-jboss-as-7?_sscc=t Reference: Play framework on the cloud made easy: Openshift module from our JCG partner Sebastian Scarano at the Having fun with Play framework! blog. ...

Properties with Spring

1. Overview This tutorial will show how to set up and use Properties in Spring – either via XML or Java configuration. Before Spring 3.1, adding new properties files into Spring and using property values wasn’t as flexible and as robust as it could be. Starting with Spring 3.1, the new Environment and PropertySource abstractions simplify this process greatly. 2. Registering Properties via the XML namespace Using XML, new properties files can be made accessible to Spring via the following namespace element: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd"> <context:property-placeholder location="classpath:foo.properties" /> </beans> The foo.properties file should be placed under /src/main/resources so that it will be available on the classpath at runtime. 2.1. Multiple <property-placeholder> In case multiple <property-placeholder> elements are present in the Spring context, there are a few best practices that should be followed: the order attribute needs to be specified to fix the order in which these are processed by Spring; all property placeholders except the last one (highest order) should have ignore-unresolvable="true" to allow the resolution mechanism to pass to others in the context without throwing an exception. 3. Registering Properties via Java Annotations Spring 3.1 also introduces the new @PropertySource annotation, as a convenient mechanism for adding property sources to the environment. 
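The two best practices for multiple placeholders can be sketched as follows; a minimal illustration, where the second properties file, the order values, and the element layout are assumptions for the example, not from the original article:

```xml
<!-- Processed first (lower order); must ignore unresolvable keys
     so resolution can fall through to the next placeholder -->
<context:property-placeholder location="classpath:foo.properties"
    order="1" ignore-unresolvable="true" />

<!-- Highest order, processed last; allowed to fail on genuinely missing keys -->
<context:property-placeholder location="classpath:bar.properties"
    order="2" />
```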
This annotation is to be used in conjunction with Java-based configuration and the @Configuration annotation: @Configuration @PropertySource("classpath:foo.properties") public class PropertiesWithJavaConfig { @Bean public static PropertySourcesPlaceholderConfigurer propertySourcesPlaceholderConfigurer() { return new PropertySourcesPlaceholderConfigurer(); } } As opposed to the XML namespace element, the Java @PropertySource annotation does not automatically register a PropertySourcesPlaceholderConfigurer with Spring. Instead, the bean must be explicitly defined in the configuration to get the property resolution mechanism working. The reasoning behind this unexpected behavior is by design and documented on this issue. 4. Using Properties Both the older PropertyPlaceholderConfigurer and the new PropertySourcesPlaceholderConfigurer added in Spring 3.1 resolve ${…} placeholders within bean definition property values and @Value annotations. For example, to inject a property using the @Value annotation: @Value( "${jdbc.url}" ) private String jdbcUrl; A default value for the property can also be specified: @Value( "${jdbc.url:aDefaultUrl}" ) private String jdbcUrl; Using properties in Spring XML configuration: <bean id="dataSource"> <property name="url" value="${jdbc.url}" /> </bean> And lastly, obtaining properties via the new Environment APIs: @Autowired private Environment env; ... dataSource.setUrl(env.getProperty("jdbc.url")); A very important caveat here is that using <property-placeholder> will not expose the properties to the Spring Environment – this means that retrieving the value like this will not work – it will return null: env.getProperty("key.something") 4.1 Properties Search Precedence By default, in Spring 3.1, local properties are searched last, after all environment property sources, including properties files. 
This behavior can be overridden via the localOverride property of the PropertySourcesPlaceholderConfigurer, which can be set to true to allow local properties to override file properties. In Spring 3.0 and before, the old PropertyPlaceholderConfigurer also attempted to look for properties both in the manually defined sources as well as in the System properties. The lookup precedence was also customizable via the systemPropertiesMode property of the configurer: never – never check system properties; fallback (default) – check system properties if not resolvable in the specified properties files; override – check system properties first, before trying the specified properties files, which allows system properties to override any other property source. Finally, note that in case a property is defined in two or more files defined via @PropertySource – the last definition will win and override the previous ones. This makes the exact property value hard to predict, so if overriding is important, the PropertySource API can be used instead. 5. Behind the Scenes – the Spring Configuration 5.1. Before Spring 3.1 Spring 3.1 introduced the convenient option of defining property sources using annotations – but before that, XML Configuration was necessary for these. The <context:property-placeholder> XML element automatically registers a new PropertyPlaceholderConfigurer bean in the Spring Context. For backwards compatibility, this is also the case in Spring 3.1 if the XSD schemas are not yet upgraded to point to the new 3.1 XSD versions. 5.2. After Spring 3.1 From Spring 3.1 onward, the XML <context:property-placeholder> will no longer register the old PropertyPlaceholderConfigurer but the newly introduced PropertySourcesPlaceholderConfigurer. This replacement class was created to be more flexible and to better interact with the newly introduced Environment and PropertySource mechanism. For applications using Spring 3.1 or above, this should be considered the standard. 6. 
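For example, a minimal sketch of switching localOverride on in XML; the bean wiring mirrors the configurer class already shown elsewhere in this article, and the properties file name is just the running example:

```xml
<bean class="org.springframework.context.support.PropertySourcesPlaceholderConfigurer">
    <property name="locations" value="classpath:foo.properties" />
    <!-- let locally defined properties win over values from the files -->
    <property name="localOverride" value="true" />
</bean>
```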
Configuration using Raw Beans in Spring 3.0 – the PropertyPlaceholderConfigurer Besides the convenient methods of getting properties into Spring – annotations and the XML namespace – the property configuration bean can also be defined and registered manually. Working with the PropertyPlaceholderConfigurer gives us full control over the configuration, with the downside of being more verbose and, most of the time, unnecessary. 6.1. Java configuration @Bean public static PropertyPlaceholderConfigurer properties(){ PropertyPlaceholderConfigurer ppc = new PropertyPlaceholderConfigurer(); Resource[] resources = new ClassPathResource[] { new ClassPathResource( "foo.properties" ) }; ppc.setLocations( resources ); ppc.setIgnoreUnresolvablePlaceholders( true ); return ppc; } 6.2. XML configuration <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer"> <property name="locations"> <list> <value>classpath:foo.properties</value> </list> </property> <property name="ignoreUnresolvablePlaceholders" value="true"/> </bean> 7. Configuration using Raw Beans in Spring 3.1 – the PropertySourcesPlaceholderConfigurer Similarly, in Spring 3.1, the new PropertySourcesPlaceholderConfigurer can also be configured manually: 7.1. Java configuration @Bean public static PropertySourcesPlaceholderConfigurer properties(){ PropertySourcesPlaceholderConfigurer pspc = new PropertySourcesPlaceholderConfigurer(); Resource[] resources = new ClassPathResource[] { new ClassPathResource( "foo.properties" ) }; pspc.setLocations( resources ); pspc.setIgnoreUnresolvablePlaceholders( true ); return pspc; } 7.2. XML configuration <bean class="org.springframework.context.support.PropertySourcesPlaceholderConfigurer"> <property name="locations"> <list> <value>classpath:foo.properties</value> </list> </property> <property name="ignoreUnresolvablePlaceholders" value="true"/> </bean> 8. 
Conclusion This article showed several examples of working with properties and properties files in Spring, and discussed the older Spring 3.0 options as well as the new support for properties introduced in Spring 3.1. The implementation of all examples of registering properties files and using property values can be found in the github project – this is an Eclipse based project, so it should be easy to import and run as it is.   Reference: Properties with Spring from our JCG partner Eugen Paraschiv at the baeldung blog. ...

Agile’s Customer Problem

Agile methods like Scrum and XP both rely on a close and collaborative relationship and continual interaction with the customer – the people who are paying for the software and who are going to use the system. Rather than writing and reviewing detailed specifications and working through sign-offs and committees, the team works with someone who represents the interests of the customer to define business features and to decide what work needs to be done and when. One of the key problems in adopting these approaches is finding the right person to play the important role of the Customer (XP) or Product Owner (Scrum). I’ll use both terms interchangeably. The team reviews their work with the Customer, the Customer answers any questions that they have and sets the team’s direction and priorities and makes sure that the team is focused on what is important to the business, owns and manages the requirements backlog, writes acceptance tests and decides when the software really is done. A team’s success depends a lot on how good the Customer or Product Owner is, their knowledge and experience, their commitment to the project, how they make decisions and how good these decisions are. 
Mike Cohn in his book Succeeding with Agile explains that Product Owners have to be: committed to the team and available to answer questions, get the information that the team needs when the team needs it; an expert in the business: not only do they have to understand the domain, but they also need to understand what’s important strategically to the business and what’s important to the user community; a good communicator within the team and outside of the team; a good negotiator so that they can balance the needs of the team and the needs of different stakeholders; empowered – they must be able to make decisions on behalf of the customer – and they need to be willing to make tradeoffs and tough decisions when they have to. An effective Product Owner also needs to be detail-oriented so that they can understand and resolve fine details of functionality and write functional acceptance tests. They should understand the basics of software development, at least the boundaries of what is and is not possible, what is hard to do and what isn’t and why, so that they can appreciate technical dependencies and technical risks. They need to at least understand the rules of Scrum or XP – what’s expected of them, and how to play the game. And they should understand the basics of project management and risk management. Because in the end, the Product Owner is the person who decides what gets done and what doesn’t. The Product Owner is a product manager, project manager and business analyst all rolled into one. Oh, and… they also need to be “collaborative by choice”, “agile in all things”, and… “fun and reasonable”. Being in Two Places at Once The Product Owner has to work closely with the team, in touch with what the team is working on, to make sure that they can keep moving forward. But the Product Owner also has to stay involved in the business to understand what is going on and what is important. 
They have to be both inward-facing (working with the team, planning, holding reviews, attending meetings, prioritizing, helping to manage the backlog, defining requirements, answering questions and clarifying information) and outward-facing (working with the project’s sponsors and with the users of the system, making sure that the business understands what is happening in the project, making sure that they understand business priorities and competitive positioning and business trends and when any of this changes and how this could affect the project). They have to be in two places at once, which is physically impossible if the development team and the business aren’t co-located. The Product Owner and Politics There are political risks in the team’s relationship with the Product Owner, and in the Product Owner’s position and influence within their own organization. The Product Owner has to play politics with the team and inside the business, trying to promote the interests of the team and the project, reconciling conflicts between different stakeholders, trying to get and keep stakeholders onside, building coalitions. Their success, and the project’s success, depends on their ability to negotiate these issues. Scrum assumes that the Product Owner is not only committed and talented and in touch with the business’s strategic priorities and with the concerns and needs of front line workers; but it also assumes that the Product Owner will always put the interests of the business ahead of their own interests. But a good Product Owner is usually an ambitious Product Owner – they are interested in the project as an opportunity to advance their career. Projects effect change, and with every change there are winners and losers. 
There is the real risk that the Product Owner’s success may put them in conflict with other important stakeholders in the business – by focusing on making their Product Owner happy, the team may be making enemies somewhere else in the organization without knowing it. A Customer, or Many Customers? Not only does the Product Owner decide what is important and what is going to get done, they are responsible for the project’s budget, and they are accountable for the project’s success. According to Ken Schwaber’s Agile Software Development with Scrum (the original definition of Scrum) the Product Owner is “the person who is officially responsible for the project”, “the one throat to choke” if the project fails or goes off the rails. Expecting one person to take on all of this responsibility and this much work is unrealistic. It’s too much responsibility, too much risk, and too much work. For many Product Owners, it’s more than a full-time job – a direct contradiction to Agile values that put people first and emphasize realistic working hours and sustainable pace for the team. This has been a problem since the beginning of Agile: a few months after the launch of the initial phase of C3 (the first XP project), the Customer representative quit due to burnout and stress, and could not be replaced, and the project was eventually cancelled. Scrum still demands that the Product Owner role must be played by one person. This doesn’t make sense, given that another fundamental underlying Agile principle is that people working together collaboratively make better decisions than one person working alone (“the Wisdom of Crowds”). If true, then why are the critical decisions about prioritization and direction and vision for the project made by one person? The people behind XP eventually recognized that the simple idea of a Customer demanded too much from one person, that the workload and responsibility need to be shared by a Customer team. 
A Customer team means that you get the advantage of multiple perspectives, people with different specialties and experiences, and you have more help with answering questions and making decisions. It’s more sustainable and practical. But this comes with its own set of problems: The development team has to reconcile differences in abilities, differences in understanding, different priorities, biases and political conflicts and personal conflicts within the Customer team. More time has to be spent explicitly keeping the Customer team itself in synch. There are more chances of mistakes and misunderstandings and dropped balls. Somebody still has to be in charge, make the important decisions – what Mike Cohn calls “the-buck-stops-here” person. The development team has to know that Customer decisions will not be over-ridden within the Customer team so that they can commit to getting work done. The Customer in Maintenance Maintenance and enhancement, where most of us will spend a lot of our careers, shows other problems with the Product Owner idea. First, it’s hard enough to get someone with the talent and drive and selflessness to represent the customer on a high-profile, strategic development project. It’s much harder to get anything close to this same level of commitment and talent for smaller projects, or for ongoing maintenance work. People with this knowledge and ability are likely to be running or supporting some important part of the business, not answering questions and helping to prioritize issues for your maintenance team. And if you are lucky enough to find someone good, it’s hard to keep them – unlike a project, maintenance work doesn’t have a clear end date, so the team will need to get used to working with different Customers at different times, with different working styles, different agendas and different strengths. The Product Owner is supposed to act as the single voice for the customer. 
But for a production system you are more likely to have too many voices, too many different people with different priorities and demands, all working for different parts of the business, all talking to different people on your team, trying to get what they need or want done. Or for some old legacy systems you may run into the opposite problem – nobody wants to take ownership of the system or its data, nobody knows enough or wants to be responsible for making business decisions. Be your own Customer All of these challenges don’t mean that you can’t work with the Product Owner model – obviously lots of teams are following Scrum and XP in one way or another. But you need to recognize the limitations and risks of this approach, and be prepared to fill in. For example, many development and maintenance teams who work in a different location and especially a different timezone from the business fall back on a Customer Proxy: a business analyst or somebody else on the team who can help fill part of the Customer role, and work with people in the business to help answer questions and confirm requirements and priorities. It’s not as efficient as working directly with the business, but sometimes there isn’t a choice. Even strong Customers need help at times. Many ScrumMasters and Product Owners find ways to split up the responsibilities more equitably and help each other out. The Scrum Master or team lead or senior developers or senior testers, whoever is playing a technical leadership role on the team may have to step in and help fill in when the Customer isn’t available, when they are over-worked and can’t keep up, when they don’t understand, when they aren’t qualified to make a decision, or when they don’t care. To reconcile technical and business requirements and the needs of different stakeholders, and make technical and business trade-offs and long-term and short-term trade-offs. To communicate with business stakeholders. 
To follow up on outstanding issues and unanswered questions and try to get answers from someone in the business. To write the acceptance tests for the customer – a common problem on Agile teams is that nobody on the business side is willing to help write acceptance tests, but they are willing to jump on the team when something is done wrong. Be prepared to make more mistakes working this way – you will have to work with imperfect and incomplete information, sometimes you’ll have to make a best guess and go with it, and you will get requirements and priorities wrong. Test everything that you can, and get the product out to the business as quickly and as often as you can, and be prepared for negative feedback. You may not be able to build as close a relationship with the business as you could with a strong and committed Customer. It’s a compromise, but it’s a necessary compromise that many teams have no choice but to make. As Mike Cohn points out in The Fallacy of One Throat to Choke, in the end it’s the team that fails, not just the Customer. Reference: Agile’s Customer Problem from our JCG partner Jim Bird at the Building Real Software blog....

Apache Commons Lang StringUtils

So, thought it’d be good to talk about another Java library that I like. It’s been around for a while and is not perhaps the most exciting library, but it is very, very useful. I probably make use of it daily. org.apache.commons.lang.StringUtils StringUtils is part of Apache Commons Lang (http://commons.apache.org/lang/), and as the name suggests it provides some nice utilities for dealing with Strings, going beyond what is offered in java.lang.String. It consists of over 50 static methods, and I’m not going to cover every single one of them, just a selection of methods that I make the most use of. There are two different versions available, the newer org.apache.commons.lang3.StringUtils and the older org.apache.commons.lang.StringUtils. There are not really any significant differences between the two. lang3.StringUtils requires Java 5.0 and is probably the version you’ll want to use. public static boolean equals(CharSequence str1, CharSequence str2) Thought I’d start with one of the most straightforward methods: equals. This does exactly what you’d expect, it takes two Strings and returns true if they are identical, or false if they’re not. But java.lang.String already has a perfectly good equals method? Why on earth would I want to use a third party implementation? It’s a fair question. Let’s look at some code – can you see any problems? public void doStuffWithString(String stringParam) { if(stringParam.equals("MyStringValue")) { // do stuff } } That’s a NullPointerException waiting to happen! There are a couple of ways around this: public void safeDoStuffWithString1(String stringParam) { if(stringParam != null && stringParam.equals("MyStringValue")) { // do stuff } } public void safeDoStuffWithString2(String stringParam) { if("MyStringValue".equals(stringParam)) { // do stuff } } Personally I’m not a fan of either method. I think null checks pollute code, and to me “MyStringValue”.equals(stringParam) just doesn’t scan well, it looks wrong. 
This is where StringUtils.equals comes in handy, it’s null safe. It doesn’t matter what you pass it, it won’t NullPointer on you! So you could rewrite the simple method as follows: public void safeDoStuffWithString3(String stringParam) { if(StringUtils.equals(stringParam, "MyStringValue")) { // do stuff } } It’s personal preference, but I think this reads better than the first two examples. There’s nothing wrong with them, but I do think StringUtils.equals() is worth considering. isEmpty, isNotEmpty, isBlank, isNotBlank OK, these look pretty self explanatory, I’m guessing they’re all null safe? You’re probably spotting a pattern here. isEmpty is indeed a null safe replacement for java.lang.String.isEmpty(), and isNotEmpty is its inverse. So no more null checks: if(myString != null && !myString.isEmpty()) { // urghh // Do stuff with myString } if(StringUtils.isNotEmpty(myString)) { // much nicer // Do stuff with myString } So, why Blank and Empty? There is a difference, isBlank also returns true if the String just contains whitespace, i.e.… String someWhiteSpace = " \t \n"; StringUtils.isEmpty(someWhiteSpace); // false StringUtils.isBlank(someWhiteSpace); // true public static String[] split(String str, String separatorChars) Right, that looks just like String.split(), so this is just a null safe version of the built in Java method? Well, yes it certainly is null safe. Trying to split a null string results in null, and a null separator splits on whitespace. But there is another reason you should consider using StringUtils.split(…), and that’s the fact that java.lang.String.split takes a regular expression as a separator. For example the following may not do what you want: public void possiblyNotWhatYouWant() { String contrivedExampleString = "one.two.three.four"; String[] result = contrivedExampleString.split("."); System.out.println(result.length); // 0 } But all I have to do is put a couple of backslashes in front of the ‘.’ and it will work fine. 
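Both the null-safety point and the regex pitfall are easy to see with nothing but the JDK; a minimal sketch, using java.util.Objects.equals (the standard library’s own null-safe equals) as a stand-in, since commons-lang may not be on your classpath:

```java
import java.util.Objects;

public class Main {
    public static void main(String[] args) {
        String nullString = null;

        // Objects.equals (JDK 7+) is null safe, like StringUtils.equals
        System.out.println(Objects.equals(nullString, "MyStringValue")); // false, no NPE

        // String.split takes a regex, so an unescaped '.' matches every character
        // and the all-empty results are trimmed away...
        String contrived = "one.two.three.four";
        System.out.println(contrived.split(".").length);   // 0

        // ...while escaping it splits on the literal dot
        System.out.println(contrived.split("\\.").length); // 4
    }
}
```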
It’s not really a big deal is it? Perhaps not, but there’s one last advantage to using StringUtils.split, and that’s the fact that regular expressions are expensive. In fact when I tested splitting a String on a comma (a fairly common use case in my experience), StringUtils.split runs over four times faster! public static String join(Iterable iterable, String separator) Ah, finally something genuinely useful! Indeed I’ve never found an elegant way of concatenating strings with a separator, there’s always that annoying conditional required to check if you want to insert the separator or not. So it’s nice there’s a utility to do this for me. Here’s a quick example: String[] numbers = {"one", "two", "three"}; StringUtils.join(numbers, ","); // returns "one,two,three" There are also various overloaded versions of join that take Arrays and Iterators. Ok, I’m convinced. This looks like a pretty useful library, what else can it do? Quite a lot, but like I said earlier I won’t bother going through every single method available, I’d just end up repeating what’s said in the API documentation. I’d really recommend taking a closer look: http://commons.apache.org/lang/api-3.1/org/apache/commons/lang3/StringUtils.html So basically if you ever need to do something with a String that isn’t covered by Java’s core String library (and maybe even stuff that is), take a look at StringUtils. Reference: Apache Commons Lang StringUtils from our JCG partner Tom Jefferys at the Tom’s Programming Blog ....
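The “annoying conditional” the author mentions can be sketched with plain JDK code; note that String.join was added to the standard library in Java 8, after this article was written, and does the same job:

```java
public class Main {
    // Hand-rolled join: roughly what StringUtils.join saves you from writing
    static String join(String[] parts, String separator) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < parts.length; i++) {
            if (i > 0) {
                sb.append(separator); // only between elements, never before the first
            }
            sb.append(parts[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] numbers = {"one", "two", "three"};
        System.out.println(join(numbers, ","));        // one,two,three
        System.out.println(String.join(",", numbers)); // same result, built in since Java 8
    }
}
```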

Essential Attack Surface Management

To attack your system, to steal something or do something else nasty, the bad guys need to find a way in, and usually a way out as well. This is what Attack Surface Analysis is all about: mapping the ways in and out of your system, looking at the system from an attacker’s perspective, understanding what parts of the system are most vulnerable, where you need to focus testing and reviews. It’s part of design and it’s also part of risk management. Attack Surface Analysis is simple in concept. It’s like walking around your house, counting all of the doors and windows, and checking to see if they are open, or easy to force open. The fewer doors and windows you have, and the harder they are to open, the safer you are. The bigger a system’s attack surface, the bigger a security problem you have, and the more work that you have to put into your security program. For enterprise systems and web apps, the doors and windows include web URLs (every form, input field – including hidden fields, URL parameters and scripts), cookies, files and databases shared outside the app, open ports and sockets, external system calls and application APIs, admin user ids and functions. And any support backdoors into the app, if you allow that kind of thing. I’m not going to deal with minimizing the attack surface by turning off features or deleting code. It’s important to do this when you can of course, but most developers are paid to add new features and write more forms and other interfaces – to open up the Attack Surface. So it’s important to understand what this means in terms of security risk. Measuring the System’s Attack Surface Michael Howard at Microsoft and other researchers have developed a method for measuring the attack surface of an application, and to track changes to the attack surface over time, called the Relative Attack Surface Quotient (RSQ). 
Using this method you calculate an overall attack surface score for the system, and measure this score as changes are made to the system and to how it is deployed. Researchers at Carnegie Mellon built on this work to develop a formal way to calculate an Attack Surface Metric for large systems like SAP. They calculate the Attack Surface as the sum of all entry and exit points, channels (the different ways that clients or external systems connect to the system, including TCP/UDP ports, RPC end points, named pipes…) and untrusted data elements. Then they apply a damage potential/effort ratio to these Attack Surface elements to identify high-risk areas. Smaller teams building and maintaining smaller systems (which is most of us) and Agile teams trying to move fast don’t need to go this far. Managing a system’s attack surface can be done through a few straightforward steps that developers can understand and take ownership of.

Attack Surface: Where to Start?

Start with some kind of baseline if you can – at least a basic understanding of the system’s attack surface. Spend a few hours reviewing design and architecture documents from an attack surface perspective. For web apps you can use a tool like Arachni or Skipfish or w3af, or one of the many commercial dynamic testing and vulnerability scanning tools or services, to crawl your app and map the attack surface – at least the part of the system that is accessible over the web. Or better, get an appsec expert to review the application and pen test it so that you understand the attack surface and the real vulnerabilities.

Once you have a map of the attack surface, identify the high-risk areas. Focus on remote entry points – interfaces with outside systems and to the Internet – and especially where the system allows anonymous, public access. This is where you are most exposed to attack.
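The damage potential/effort weighting described above can be sketched as a toy calculation. To be clear, the element names, weights, and scoring formula below are invented for illustration; this is not the published Carnegie Mellon metric:

```java
import java.util.List;

public class AttackSurfaceSketch {

    // One attack surface element: an entry/exit point, channel,
    // or untrusted data item. All names and weights are made up.
    static class Element {
        final String name;
        final double damagePotential; // how bad a compromise would be (1-10)
        final double attackEffort;    // how hard it is to exploit (1-10)

        Element(String name, double damagePotential, double attackEffort) {
            this.name = name;
            this.damagePotential = damagePotential;
            this.attackEffort = attackEffort;
        }

        // More damage for less effort = higher-risk element.
        double risk() {
            return damagePotential / attackEffort;
        }
    }

    // Toy overall score: sum of per-element risk ratios.
    static double score(List<Element> surface) {
        return surface.stream().mapToDouble(Element::risk).sum();
    }

    public static void main(String[] args) {
        List<Element> surface = List.of(
                new Element("public login form", 8, 2), // anonymous, remote
                new Element("admin REST API", 9, 3),
                new Element("shared export file", 4, 4));
        // 8/2 + 9/3 + 4/4 = 8.0
        System.out.printf("attack surface score: %.1f%n", score(surface));
        surface.stream()
               .max((a, b) -> Double.compare(a.risk(), b.risk()))
               .ifPresent(e -> System.out.println("highest risk: " + e.name));
    }
}
```

Even a crude ranking like this is enough to tell you where to point your pen testing and code review effort first: at the high-damage, low-effort elements.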
Then understand what compensating controls you have in place – operational controls like network firewalls and application firewalls, and intrusion detection or prevention systems – to help protect your app. The attack surface model will be rough and incomplete to start, especially if you haven’t done any security work on the system before. Use what you have and fill in the holes as the team makes changes to the attack surface. But how do you know when you are changing the attack surface?

When are you Changing the Attack Surface?

According to The Official Guide to the CSSLP, “… it is important to understand that the moment a single line of code is written, the attack surface has increased.” But this overstates the risk of making code changes – there are lots of code changes (for example, to behind-the-scenes reporting and analytics, or changes to business logic) that don’t make the system more vulnerable to attack. Remember, the attack surface is the sum of entry and exit points and untrusted data elements in the system. Adding a new system interface, a new channel into the system, a new connection type, a new API, a new type of client, a new mobile app, or a new file or database table shared with the outside – these changes directly affect the Attack Surface and change the risk profile of your app.

The first web page that you create opens up the system’s attack surface significantly and introduces all kinds of new risks. If you add another field to that page, or another web page like it, while technically you have made the attack surface bigger, you haven’t increased the risk profile of the application in a meaningful way. Each of these incremental changes is more of the same, unless you follow a new design or use a new framework. Changes to session management and authentication and password management also affect the attack surface.
So do changes to authorization and access control logic, especially adding or changing role definitions, adding admin users, or adding admin functions with high privileges. Changes to the code that handles encryption and secrets. Changes to how data validation is done. And major architectural changes to layering and trust relationships, or fundamental changes in technical architecture – swapping out your web server or database platform, or changing the run-time OS.

Use a Risk-Based and Opportunity-Based Approach

Attack surface management can be done in an opportunistic way, driven by your ongoing development requirements. As they work on a piece of the system, the team reviews whether and how the changes affect the attack surface and what the risks are, and raises flags for deeper review. These red flags drive threat modeling and secure code reviews and additional testing. This means that developers can stay focused on delivering features, while still taking responsibility for security. Attack surface reviews become a part of design and QA and risk management, burned into how the team works, done when needed in each stage or phase or sprint.

The first time that you touch a piece of the system, it may take longer to finish a change because you need to go through more risk assessment. But over time, as you work on the same parts of the system or the same problems, and as you learn more about the application and more about security risks, it will get simpler and faster. Your understanding of the system’s attack surface will probably never be perfect or complete – but you will always be updating it and improving it. New risks and vulnerabilities will keep coming up. When this happens, you add new items to your risk checklists, and new red flags to watch for. As long as the system is being maintained, you’re never finished doing attack surface management.

Reference: Essential Attack Surface Management from our JCG partner Jim Bird at the Building Real Software blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.