Guava’s Strings Class

In the post Checking for Null or Empty or White Space Only String in Java, I demonstrated common approaches in the Java ecosystem (standard Java, Guava, Apache Commons Lang, and Groovy) for checking whether a String is null, empty, or white space only, similar to what C# supports with the String.IsNullOrWhiteSpace method. One of the approaches I showed was a Guava-based approach that made use of the Guava class Strings and its static isNullOrEmpty(String) method. In this post, I look at other useful functionality for working with Strings that is provided by Guava’s six ‘static utility methods pertaining to String’ that are bundled into the Strings class. Using Guava’s Strings class is straightforward because of its well-named methods. The following list enumerates the methods (all static) on the Strings class, with a brief description of what each does next to the method name (these descriptions are borrowed or adapted from the Javadoc documentation).

- isNullOrEmpty(String): ‘Returns true if the given string is null or is the empty string.’
- emptyToNull(String): ‘Returns the given string if it is nonempty; null otherwise.’
- nullToEmpty(String): ‘Returns the given string if it is non-null; the empty string otherwise.’
- padStart(String,int,char): Prepends to the provided String, if necessary, enough of the provided char to make the string the specified length.
- padEnd(String,int,char): Appends to the provided String, if necessary, enough of the provided char to make the string the specified length.
- repeat(String,int): ‘Returns a string consisting of a specific number of concatenated copies of an input string.’

isNullOrEmpty

Guava’s Strings.isNullOrEmpty(String) method makes it easy to build simple and highly readable conditional statements that check a String for null or emptiness before acting upon said String. As previously mentioned, I have briefly covered this method before. Another code demonstration of this method is shown next.
Code Sample Using Strings.isNullOrEmpty(String)

/**
 * Print to standard output a string message indicating whether the provided
 * String is null or empty or not null or empty. This method uses Guava's
 * Strings.isNullOrEmpty(String) method.
 *
 * @param string String to be tested for null or empty.
 */
private static void printStringStatusNullOrEmpty(final String string)
{
   out.println(
        "String \"" + string + "\" "
      + (Strings.isNullOrEmpty(string) ? "IS" : "is NOT")
      + " null or empty.");
}

/**
 * Demonstrate Guava Strings.isNullOrEmpty(String) method on some example
 * Strings.
 */
public static void demoIsNullOrEmpty()
{
   printHeader("Strings.isNullOrEmpty(String)");
   printStringStatusNullOrEmpty("Dustin");
   printStringStatusNullOrEmpty(null);
   printStringStatusNullOrEmpty("");
}

The output from running the above code is contained in the next screen snapshot. It shows that true is returned when either null or the empty String ("") is passed to Strings.isNullOrEmpty(String).

nullToEmpty and emptyToNull

There are multiple times when one may want to treat a null String as an empty String, or to present a null when an empty String exists. The following code snippets demonstrate use of Strings.nullToEmpty(String) and Strings.emptyToNull(String) for exactly these transformations between null and empty String.

nullToEmpty and emptyToNull

/**
 * Print to standard output a simple message indicating the provided original
 * String and the provided result/output String.
 *
 * @param originalString Original String.
 * @param resultString Output or result String created by operation.
 * @param operation The operation that acted upon the originalString to create
 *    the resultString.
 */
private static void printOriginalAndResultStrings(
   final String originalString, final String resultString, final String operation)
{
   out.println(
        "Passing \"" + originalString + "\" to " + operation
      + " produces \"" + resultString + "\"");
}

/** Demonstrate Guava Strings.emptyToNull() method on example Strings. */
public static void demoEmptyToNull()
{
   final String operation = "Strings.emptyToNull(String)";
   printHeader(operation);
   printOriginalAndResultStrings("Dustin", Strings.emptyToNull("Dustin"), operation);
   printOriginalAndResultStrings(null, Strings.emptyToNull(null), operation);
   printOriginalAndResultStrings("", Strings.emptyToNull(""), operation);
}

/** Demonstrate Guava Strings.nullToEmpty() method on example Strings. */
public static void demoNullToEmpty()
{
   final String operation = "Strings.nullToEmpty(String)";
   printHeader(operation);
   printOriginalAndResultStrings("Dustin", Strings.nullToEmpty("Dustin"), operation);
   printOriginalAndResultStrings(null, Strings.nullToEmpty(null), operation);
   printOriginalAndResultStrings("", Strings.nullToEmpty(""), operation);
}

The output from running the above code (shown in the next screen snapshot) proves that these methods work as we’d expect: converting null to empty String or converting empty String to null.

padStart and padEnd

Another common practice when dealing with Strings in Java (or any other language) is to pad a String to a certain length with a specified character. Guava supports this easily with its Strings.padStart(String,int,char) and Strings.padEnd(String,int,char) methods, which are demonstrated in the following code listing.

padStart and padEnd

/**
 * Demonstrate Guava Strings.padStart(String,int,char) method on example
 * Strings.
 */
public static void demoPadStart()
{
   final String operation = "Strings.padStart(String,int,char)";
   printHeader(operation);
   printOriginalAndResultStrings("Dustin", Strings.padStart("Dustin", 10, '_'), operation);
   /**
    * Do NOT call Strings.padStart(String,int,char) on a null String:
    * Exception in thread "main" java.lang.NullPointerException
    *    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
    *    at com.google.common.base.Strings.padStart(Strings.java:97)
    */
   //printOriginalAndResultStrings(null, Strings.padStart(null, 10, '_'), operation);
   printOriginalAndResultStrings("", Strings.padStart("", 10, '_'), operation);
}

/**
 * Demonstrate Guava Strings.padEnd(String,int,char) method on example
 * Strings.
 */
public static void demoPadEnd()
{
   final String operation = "Strings.padEnd(String,int,char)";
   printHeader(operation);
   printOriginalAndResultStrings("Dustin", Strings.padEnd("Dustin", 10, '_'), operation);
   /**
    * Do NOT call Strings.padEnd(String,int,char) on a null String:
    * Exception in thread "main" java.lang.NullPointerException
    *    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
    *    at com.google.common.base.Strings.padEnd(Strings.java:129)
    */
   //printOriginalAndResultStrings(null, Strings.padEnd(null, 10, '_'), operation);
   printOriginalAndResultStrings("", Strings.padEnd("", 10, '_'), operation);
}

When executed, the above code pads the provided Strings with underscore characters either before or after the provided String, depending on which method was called. In both cases, the length of the String was specified as ten. This output is shown in the next screen snapshot.

repeat

A final manipulation technique that Guava’s Strings class supports is the ability to easily repeat a given String a specified number of times. This is demonstrated in the next code listing and the corresponding screen snapshot with that code’s output. In this example, the provided String is repeated three times.
repeat

/** Demonstrate Guava Strings.repeat(String,int) method on example Strings. */
public static void demoRepeat()
{
   final String operation = "Strings.repeat(String,int)";
   printHeader(operation);
   printOriginalAndResultStrings("Dustin", Strings.repeat("Dustin", 3), operation);
   /**
    * Do NOT call Strings.repeat(String,int) on a null String:
    * Exception in thread "main" java.lang.NullPointerException
    *    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
    *    at com.google.common.base.Strings.repeat(Strings.java:153)
    */
   //printOriginalAndResultStrings(null, Strings.repeat(null, 3), operation);
   printOriginalAndResultStrings("", Strings.repeat("", 3), operation);
}

Wrapping Up

The above examples are simple because Guava’s Strings class is simple to use. The complete class containing the demonstration code shown earlier is now listed.

GuavaStrings.java

package dustin.examples;

import com.google.common.base.Strings;
import static java.lang.System.out;

/**
 * Simple demonstration of Guava's Strings class.
 *
 * @author Dustin
 */
public class GuavaStrings
{
   /**
    * Print to standard output a string message indicating whether the provided
    * String is null or empty or not null or empty. This method uses Guava's
    * Strings.isNullOrEmpty(String) method.
    *
    * @param string String to be tested for null or empty.
    */
   private static void printStringStatusNullOrEmpty(final String string)
   {
      out.println(
           "String \"" + string + "\" "
         + (Strings.isNullOrEmpty(string) ? "IS" : "is NOT")
         + " null or empty.");
   }

   /**
    * Demonstrate Guava Strings.isNullOrEmpty(String) method on some example
    * Strings.
    */
   public static void demoIsNullOrEmpty()
   {
      printHeader("Strings.isNullOrEmpty(String)");
      printStringStatusNullOrEmpty("Dustin");
      printStringStatusNullOrEmpty(null);
      printStringStatusNullOrEmpty("");
   }

   /**
    * Print to standard output a simple message indicating the provided original
    * String and the provided result/output String.
    *
    * @param originalString Original String.
    * @param resultString Output or result String created by operation.
    * @param operation The operation that acted upon the originalString to create
    *    the resultString.
    */
   private static void printOriginalAndResultStrings(
      final String originalString, final String resultString, final String operation)
   {
      out.println(
           "Passing \"" + originalString + "\" to " + operation
         + " produces \"" + resultString + "\"");
   }

   /** Demonstrate Guava Strings.emptyToNull() method on example Strings. */
   public static void demoEmptyToNull()
   {
      final String operation = "Strings.emptyToNull(String)";
      printHeader(operation);
      printOriginalAndResultStrings("Dustin", Strings.emptyToNull("Dustin"), operation);
      printOriginalAndResultStrings(null, Strings.emptyToNull(null), operation);
      printOriginalAndResultStrings("", Strings.emptyToNull(""), operation);
   }

   /** Demonstrate Guava Strings.nullToEmpty() method on example Strings. */
   public static void demoNullToEmpty()
   {
      final String operation = "Strings.nullToEmpty(String)";
      printHeader(operation);
      printOriginalAndResultStrings("Dustin", Strings.nullToEmpty("Dustin"), operation);
      printOriginalAndResultStrings(null, Strings.nullToEmpty(null), operation);
      printOriginalAndResultStrings("", Strings.nullToEmpty(""), operation);
   }

   /**
    * Demonstrate Guava Strings.padStart(String,int,char) method on example
    * Strings.
    */
   public static void demoPadStart()
   {
      final String operation = "Strings.padStart(String,int,char)";
      printHeader(operation);
      printOriginalAndResultStrings("Dustin", Strings.padStart("Dustin", 10, '_'), operation);
      /**
       * Do NOT call Strings.padStart(String,int,char) on a null String:
       * Exception in thread "main" java.lang.NullPointerException
       *    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
       *    at com.google.common.base.Strings.padStart(Strings.java:97)
       */
      //printOriginalAndResultStrings(null, Strings.padStart(null, 10, '_'), operation);
      printOriginalAndResultStrings("", Strings.padStart("", 10, '_'), operation);
   }

   /**
    * Demonstrate Guava Strings.padEnd(String,int,char) method on example
    * Strings.
    */
   public static void demoPadEnd()
   {
      final String operation = "Strings.padEnd(String,int,char)";
      printHeader(operation);
      printOriginalAndResultStrings("Dustin", Strings.padEnd("Dustin", 10, '_'), operation);
      /**
       * Do NOT call Strings.padEnd(String,int,char) on a null String:
       * Exception in thread "main" java.lang.NullPointerException
       *    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
       *    at com.google.common.base.Strings.padEnd(Strings.java:129)
       */
      //printOriginalAndResultStrings(null, Strings.padEnd(null, 10, '_'), operation);
      printOriginalAndResultStrings("", Strings.padEnd("", 10, '_'), operation);
   }

   /** Demonstrate Guava Strings.repeat(String,int) method on example Strings. */
   public static void demoRepeat()
   {
      final String operation = "Strings.repeat(String,int)";
      printHeader(operation);
      printOriginalAndResultStrings("Dustin", Strings.repeat("Dustin", 3), operation);
      /**
       * Do NOT call Strings.repeat(String,int) on a null String:
       * Exception in thread "main" java.lang.NullPointerException
       *    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:187)
       *    at com.google.common.base.Strings.repeat(Strings.java:153)
       */
      //printOriginalAndResultStrings(null, Strings.repeat(null, 3), operation);
      printOriginalAndResultStrings("", Strings.repeat("", 3), operation);
   }

   /**
    * Print a separation header to standard output.
    *
    * @param headerText Text to be placed in separation header.
    */
   public static void printHeader(final String headerText)
   {
      out.println("\n=========================================================");
      out.println("= " + headerText);
      out.println("=========================================================");
   }

   /**
    * Main function for demonstrating Guava's Strings class.
    *
    * @param arguments Command-line arguments: none anticipated.
    */
   public static void main(final String[] arguments)
   {
      demoIsNullOrEmpty();
      demoEmptyToNull();
      demoNullToEmpty();
      demoPadStart();
      demoPadEnd();
      demoRepeat();
   }
}

The methods for padding and for repeating Strings do not take kindly to null Strings being passed to them. Indeed, passing a null to these three methods leads to NullPointerExceptions being thrown. Interestingly, these are more examples of Guava using the Preconditions class in its own code.

Conclusion

Many Java libraries and frameworks provide String manipulation functionality in classes with names like StringUtil. Guava’s Strings class is one such example, and the methods it supplies can make Java manipulation of Strings easier and more concise. Indeed, as I use the methods of Guava’s Strings class, I feel almost like I’m using some of Groovy’s GDK String goodness.
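As a recap of the semantics demonstrated above, here is an illustrative plain-Java sketch of what the six Strings methods do. This is not Guava's actual source code, and the class name StringsSketch is invented for demonstration; it mirrors the documented behavior, including the NullPointerException that the padding and repeating methods raise via Preconditions.

```java
// Illustrative re-implementations of the six Strings methods
// (hypothetical class; NOT Guava's actual source code).
final class StringsSketch {

    static boolean isNullOrEmpty(String s) {
        return s == null || s.isEmpty();
    }

    static String nullToEmpty(String s) {
        return (s == null) ? "" : s;
    }

    static String emptyToNull(String s) {
        return (s == null || s.isEmpty()) ? null : s;
    }

    static String padStart(String s, int minLength, char padChar) {
        if (s == null) {
            // mirrors the NullPointerException Guava raises via Preconditions
            throw new NullPointerException("string must not be null");
        }
        StringBuilder sb = new StringBuilder(Math.max(minLength, s.length()));
        for (int i = s.length(); i < minLength; i++) {
            sb.append(padChar);           // prepend padding until minLength reached
        }
        return sb.append(s).toString();
    }

    static String padEnd(String s, int minLength, char padChar) {
        if (s == null) {
            throw new NullPointerException("string must not be null");
        }
        StringBuilder sb = new StringBuilder(Math.max(minLength, s.length())).append(s);
        while (sb.length() < minLength) {
            sb.append(padChar);           // append padding until minLength reached
        }
        return sb.toString();
    }

    static String repeat(String s, int count) {
        StringBuilder sb = new StringBuilder(s.length() * count);
        for (int i = 0; i < count; i++) {
            sb.append(s);
        }
        return sb.toString();
    }
}
```

Note how padStart("Dustin", 10, '_') yields "____Dustin" while a String already at least minLength long is returned unchanged, matching the behavior shown in the demos above.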
Reference: Guava’s Strings Class from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

MyBatis Tutorial – CRUD Operations and Mapping Relationships – Part 1

CRUD Operations

MyBatis is an SQL Mapper tool which greatly simplifies database programming when compared to using JDBC directly.

Step#1: Create a Maven project and configure MyBatis dependencies.

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>com.sivalabs</groupId>
  <artifactId>mybatis-demo</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>mybatis-demo</name>
  <url>http://maven.apache.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.3.2</version>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
          <encoding>${project.build.sourceEncoding}</encoding>
        </configuration>
      </plugin>
    </plugins>
  </build>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.10</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.mybatis</groupId>
      <artifactId>mybatis</artifactId>
      <version>3.1.1</version>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.21</version>
      <scope>runtime</scope>
    </dependency>
  </dependencies>
</project>

Step#2: Create the table USER and a Java domain object User as follows:

CREATE TABLE user (
  user_id int(10) unsigned NOT NULL auto_increment,
  email_id varchar(45) NOT NULL,
  password varchar(45) NOT NULL,
  first_name varchar(45) NOT NULL,
  last_name varchar(45) default NULL,
  PRIMARY KEY (user_id),
  UNIQUE KEY Index_2_email_uniq (email_id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

package com.sivalabs.mybatisdemo.domain;

public class User
{
  private Integer userId;
  private String emailId;
  private String password;
  private String firstName;
  private String lastName;

  @Override
  public String toString() {
    return "User [userId=" + userId + ", emailId=" + emailId
        + ", password=" + password + ", firstName=" + firstName
        + ", lastName=" + lastName + "]";
  }

  //setters and getters
}

Step#3: Create MyBatis configuration files.

a) Create jdbc.properties file in src/main/resources folder

jdbc.driverClassName=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://localhost:3306/mybatis-demo
jdbc.username=root
jdbc.password=admin

b) Create mybatis-config.xml file in src/main/resources folder

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN" "http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
  <properties resource="jdbc.properties"/>
  <typeAliases>
    <typeAlias type="com.sivalabs.mybatisdemo.domain.User" alias="User"></typeAlias>
  </typeAliases>
  <environments default="development">
    <environment id="development">
      <transactionManager type="JDBC"/>
      <dataSource type="POOLED">
        <property name="driver" value="${jdbc.driverClassName}"/>
        <property name="url" value="${jdbc.url}"/>
        <property name="username" value="${jdbc.username}"/>
        <property name="password" value="${jdbc.password}"/>
      </dataSource>
    </environment>
  </environments>
  <mappers>
    <mapper resource="com/sivalabs/mybatisdemo/mappers/UserMapper.xml"/>
  </mappers>
</configuration>

Step#4: Create an interface UserMapper.java in src/main/java folder in com.sivalabs.mybatisdemo.mappers package.

package com.sivalabs.mybatisdemo.mappers;

import java.util.List;
import com.sivalabs.mybatisdemo.domain.User;

public interface UserMapper
{
  public void insertUser(User user);
  public User getUserById(Integer userId);
  public List<User> getAllUsers();
  public void updateUser(User user);
  public void deleteUser(Integer userId);
}

Step#5: Create UserMapper.xml file in src/main/resources folder in com.sivalabs.mybatisdemo.mappers package.
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">

<mapper namespace="com.sivalabs.mybatisdemo.mappers.UserMapper">

  <select id="getUserById" parameterType="int" resultType="com.sivalabs.mybatisdemo.domain.User">
    SELECT user_id as userId, email_id as emailId, password,
           first_name as firstName, last_name as lastName
    FROM USER WHERE USER_ID = #{userId}
  </select>

  <!-- Instead of referencing fully qualified class names we can register
       aliases in mybatis-config.xml and use the alias names. -->
  <resultMap type="User" id="UserResult">
    <id property="userId" column="user_id"/>
    <result property="emailId" column="email_id"/>
    <result property="password" column="password"/>
    <result property="firstName" column="first_name"/>
    <result property="lastName" column="last_name"/>
  </resultMap>

  <select id="getAllUsers" resultMap="UserResult">
    SELECT * FROM USER
  </select>

  <insert id="insertUser" parameterType="User" useGeneratedKeys="true" keyProperty="userId">
    INSERT INTO USER(email_id, password, first_name, last_name)
    VALUES(#{emailId}, #{password}, #{firstName}, #{lastName})
  </insert>

  <update id="updateUser" parameterType="User">
    UPDATE USER
    SET PASSWORD = #{password}, FIRST_NAME = #{firstName}, LAST_NAME = #{lastName}
    WHERE USER_ID = #{userId}
  </update>

  <delete id="deleteUser" parameterType="int">
    DELETE FROM USER WHERE USER_ID = #{userId}
  </delete>

</mapper>

Step#6: Create MyBatisUtil.java to instantiate SqlSessionFactory.
package com.sivalabs.mybatisdemo.service;

import java.io.IOException;
import java.io.Reader;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

public class MyBatisUtil
{
  private static SqlSessionFactory factory;

  private MyBatisUtil() {
  }

  static {
    Reader reader = null;
    try {
      reader = Resources.getResourceAsReader("mybatis-config.xml");
    } catch (IOException e) {
      throw new RuntimeException(e.getMessage());
    }
    factory = new SqlSessionFactoryBuilder().build(reader);
  }

  public static SqlSessionFactory getSqlSessionFactory() {
    return factory;
  }
}

Step#7: Create UserService.java in src/main/java folder.

package com.sivalabs.mybatisdemo.service;

import java.util.List;
import org.apache.ibatis.session.SqlSession;
import com.sivalabs.mybatisdemo.domain.User;
import com.sivalabs.mybatisdemo.mappers.UserMapper;

public class UserService
{
  public void insertUser(User user) {
    SqlSession sqlSession = MyBatisUtil.getSqlSessionFactory().openSession();
    try {
      UserMapper userMapper = sqlSession.getMapper(UserMapper.class);
      userMapper.insertUser(user);
      sqlSession.commit();
    } finally {
      sqlSession.close();
    }
  }

  public User getUserById(Integer userId) {
    SqlSession sqlSession = MyBatisUtil.getSqlSessionFactory().openSession();
    try {
      UserMapper userMapper = sqlSession.getMapper(UserMapper.class);
      return userMapper.getUserById(userId);
    } finally {
      sqlSession.close();
    }
  }

  public List<User> getAllUsers() {
    SqlSession sqlSession = MyBatisUtil.getSqlSessionFactory().openSession();
    try {
      UserMapper userMapper = sqlSession.getMapper(UserMapper.class);
      return userMapper.getAllUsers();
    } finally {
      sqlSession.close();
    }
  }

  public void updateUser(User user) {
    SqlSession sqlSession = MyBatisUtil.getSqlSessionFactory().openSession();
    try {
      UserMapper userMapper = sqlSession.getMapper(UserMapper.class);
      userMapper.updateUser(user);
      sqlSession.commit();
    } finally {
      sqlSession.close();
    }
  }

  public void deleteUser(Integer userId) {
    SqlSession sqlSession = MyBatisUtil.getSqlSessionFactory().openSession();
    try {
      UserMapper userMapper = sqlSession.getMapper(UserMapper.class);
      userMapper.deleteUser(userId);
      sqlSession.commit();
    } finally {
      sqlSession.close();
    }
  }
}

Step#8: Create a JUnit Test class to test UserService methods.

package com.sivalabs.mybatisdemo;

import java.util.List;

import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.Test;

import com.sivalabs.mybatisdemo.domain.User;
import com.sivalabs.mybatisdemo.service.UserService;

public class UserServiceTest
{
  private static UserService userService;

  @BeforeClass
  public static void setup() {
    userService = new UserService();
  }

  @AfterClass
  public static void teardown() {
    userService = null;
  }

  @Test
  public void testGetUserById() {
    User user = userService.getUserById(1);
    Assert.assertNotNull(user);
    System.out.println(user);
  }

  @Test
  public void testGetAllUsers() {
    List<User> users = userService.getAllUsers();
    Assert.assertNotNull(users);
    for (User user : users) {
      System.out.println(user);
    }
  }

  @Test
  public void testInsertUser() {
    User user = new User();
    user.setEmailId("test_email_" + System.currentTimeMillis() + "@gmail.com");
    user.setPassword("secret");
    user.setFirstName("TestFirstName");
    user.setLastName("TestLastName");

    userService.insertUser(user);
    Assert.assertTrue(user.getUserId() != 0);

    User createdUser = userService.getUserById(user.getUserId());
    Assert.assertNotNull(createdUser);
    Assert.assertEquals(user.getEmailId(), createdUser.getEmailId());
    Assert.assertEquals(user.getPassword(), createdUser.getPassword());
    Assert.assertEquals(user.getFirstName(), createdUser.getFirstName());
    Assert.assertEquals(user.getLastName(), createdUser.getLastName());
  }

  @Test
  public void testUpdateUser() {
    long timestamp = System.currentTimeMillis();
    User user = userService.getUserById(2);
    user.setFirstName("TestFirstName" + timestamp);
    user.setLastName("TestLastName" + timestamp);
    userService.updateUser(user);

    User updatedUser = userService.getUserById(2);
    Assert.assertEquals(user.getFirstName(), updatedUser.getFirstName());
    Assert.assertEquals(user.getLastName(), updatedUser.getLastName());
  }

  @Test
  public void testDeleteUser() {
    User user = userService.getUserById(4);
    userService.deleteUser(user.getUserId());
    User deletedUser = userService.getUserById(4);
    Assert.assertNull(deletedUser);
  }
}

Now, I will explain how to perform CRUD operations using MyBatis annotation support, without the need to configure queries in XML mapper files.

Step#1: Create a table BLOG and a Java domain object Blog.

CREATE TABLE blog (
  blog_id int(10) unsigned NOT NULL auto_increment,
  blog_name varchar(45) NOT NULL,
  created_on datetime NOT NULL,
  PRIMARY KEY (blog_id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

package com.sivalabs.mybatisdemo.domain;

import java.util.Date;

public class Blog
{
  private Integer blogId;
  private String blogName;
  private Date createdOn;

  @Override
  public String toString() {
    return "Blog [blogId=" + blogId + ", blogName=" + blogName
        + ", createdOn=" + createdOn + "]";
  }

  //setters and getters
}

Step#2: Create BlogMapper.java interface with SQL queries in annotations.
package com.sivalabs.mybatisdemo.mappers;

import java.util.List;

import org.apache.ibatis.annotations.Delete;
import org.apache.ibatis.annotations.Insert;
import org.apache.ibatis.annotations.Options;
import org.apache.ibatis.annotations.Result;
import org.apache.ibatis.annotations.Results;
import org.apache.ibatis.annotations.Select;
import org.apache.ibatis.annotations.Update;

import com.sivalabs.mybatisdemo.domain.Blog;

public interface BlogMapper
{
  @Insert("INSERT INTO BLOG(BLOG_NAME, CREATED_ON) VALUES(#{blogName}, #{createdOn})")
  @Options(useGeneratedKeys=true, keyProperty="blogId")
  public void insertBlog(Blog blog);

  @Select("SELECT BLOG_ID AS blogId, BLOG_NAME as blogName, CREATED_ON as createdOn FROM BLOG WHERE BLOG_ID=#{blogId}")
  public Blog getBlogById(Integer blogId);

  @Select("SELECT * FROM BLOG")
  @Results({
    @Result(id=true, property="blogId", column="BLOG_ID"),
    @Result(property="blogName", column="BLOG_NAME"),
    @Result(property="createdOn", column="CREATED_ON")
  })
  public List<Blog> getAllBlogs();

  @Update("UPDATE BLOG SET BLOG_NAME=#{blogName}, CREATED_ON=#{createdOn} WHERE BLOG_ID=#{blogId}")
  public void updateBlog(Blog blog);

  @Delete("DELETE FROM BLOG WHERE BLOG_ID=#{blogId}")
  public void deleteBlog(Integer blogId);
}

Step#3: Configure BlogMapper in mybatis-config.xml

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE configuration PUBLIC "-//mybatis.org//DTD Config 3.0//EN" "http://mybatis.org/dtd/mybatis-3-config.dtd">
<configuration>
  <properties resource="jdbc.properties"/>
  <environments default="development">
    <environment id="development">
      <transactionManager type="JDBC"/>
      <dataSource type="POOLED">
        <!--
        <property name="driver" value="com.mysql.jdbc.Driver"/>
        <property name="url" value="jdbc:mysql://localhost:3306/mybatis-demo"/>
        <property name="username" value="root"/>
        <property name="password" value="admin"/>
        -->
        <property name="driver" value="${jdbc.driverClassName}"/>
        <property name="url" value="${jdbc.url}"/>
        <property name="username" value="${jdbc.username}"/>
        <property name="password" value="${jdbc.password}"/>
      </dataSource>
    </environment>
  </environments>
  <mappers>
    <mapper class="com.sivalabs.mybatisdemo.mappers.BlogMapper"/>
  </mappers>
</configuration>

Step#4: Create BlogService.java

package com.sivalabs.mybatisdemo.service;

import java.util.List;

import org.apache.ibatis.session.SqlSession;

import com.sivalabs.mybatisdemo.domain.Blog;
import com.sivalabs.mybatisdemo.mappers.BlogMapper;

public class BlogService
{
  public void insertBlog(Blog blog) {
    SqlSession sqlSession = MyBatisUtil.getSqlSessionFactory().openSession();
    try {
      BlogMapper blogMapper = sqlSession.getMapper(BlogMapper.class);
      blogMapper.insertBlog(blog);
      sqlSession.commit();
    } finally {
      sqlSession.close();
    }
  }

  public Blog getBlogById(Integer blogId) {
    SqlSession sqlSession = MyBatisUtil.getSqlSessionFactory().openSession();
    try {
      BlogMapper blogMapper = sqlSession.getMapper(BlogMapper.class);
      return blogMapper.getBlogById(blogId);
    } finally {
      sqlSession.close();
    }
  }

  public List<Blog> getAllBlogs() {
    SqlSession sqlSession = MyBatisUtil.getSqlSessionFactory().openSession();
    try {
      BlogMapper blogMapper = sqlSession.getMapper(BlogMapper.class);
      return blogMapper.getAllBlogs();
    } finally {
      sqlSession.close();
    }
  }

  public void updateBlog(Blog blog) {
    SqlSession sqlSession = MyBatisUtil.getSqlSessionFactory().openSession();
    try {
      BlogMapper blogMapper = sqlSession.getMapper(BlogMapper.class);
      blogMapper.updateBlog(blog);
      sqlSession.commit();
    } finally {
      sqlSession.close();
    }
  }

  public void deleteBlog(Integer blogId) {
    SqlSession sqlSession = MyBatisUtil.getSqlSessionFactory().openSession();
    try {
      BlogMapper blogMapper = sqlSession.getMapper(BlogMapper.class);
      blogMapper.deleteBlog(blogId);
      sqlSession.commit();
    } finally {
      sqlSession.close();
    }
  }
}

Step#5: Create JUnit Test for BlogService methods

package com.sivalabs.mybatisdemo;

import java.util.Date;
import java.util.List;

import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.Test;

import com.sivalabs.mybatisdemo.domain.Blog;
import com.sivalabs.mybatisdemo.service.BlogService;

public class BlogServiceTest
{
  private static BlogService blogService;

  @BeforeClass
  public static void setup() {
    blogService = new BlogService();
  }

  @AfterClass
  public static void teardown() {
    blogService = null;
  }

  @Test
  public void testGetBlogById() {
    Blog blog = blogService.getBlogById(1);
    Assert.assertNotNull(blog);
    System.out.println(blog);
  }

  @Test
  public void testGetAllBlogs() {
    List<Blog> blogs = blogService.getAllBlogs();
    Assert.assertNotNull(blogs);
    for (Blog blog : blogs) {
      System.out.println(blog);
    }
  }

  @Test
  public void testInsertBlog() {
    Blog blog = new Blog();
    blog.setBlogName("test_blog_" + System.currentTimeMillis());
    blog.setCreatedOn(new Date());

    blogService.insertBlog(blog);
    Assert.assertTrue(blog.getBlogId() != 0);

    Blog createdBlog = blogService.getBlogById(blog.getBlogId());
    Assert.assertNotNull(createdBlog);
    Assert.assertEquals(blog.getBlogName(), createdBlog.getBlogName());
  }

  @Test
  public void testUpdateBlog() {
    long timestamp = System.currentTimeMillis();
    Blog blog = blogService.getBlogById(2);
    blog.setBlogName("TestBlogName" + timestamp);
    blogService.updateBlog(blog);
    Blog updatedBlog = blogService.getBlogById(2);
    Assert.assertEquals(blog.getBlogName(), updatedBlog.getBlogName());
  }

  @Test
  public void testDeleteBlog() {
    Blog blog = blogService.getBlogById(4);
    blogService.deleteBlog(blog.getBlogId());
    Blog deletedBlog = blogService.getBlogById(4);
    Assert.assertNull(deletedBlog);
  }
}

Reference: MyBatis Tutorial: Part 1 – CRUD Operations and MyBatis Tutorial: Part 2 – CRUD Operations Using Annotations from our JCG partner Siva Reddy at the My Experiments on Technology blog.
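A footnote on the mappings shown in this tutorial: the column aliases (user_id as userId), the resultMap entries, and the @Results annotations all encode the same convention by hand, namely bridging snake_case column names to camelCase bean properties. A hypothetical helper makes that convention explicit; this is purely an illustration of the naming translation, not something MyBatis requires you to write.

```java
// Hypothetical helper illustrating the snake_case -> camelCase convention
// that the tutorial's column aliases and resultMaps encode by hand.
final class ColumnNames {

    static String toCamelCase(String columnName) {
        StringBuilder property = new StringBuilder();
        boolean upperNext = false;
        for (char c : columnName.toLowerCase().toCharArray()) {
            if (c == '_') {
                upperNext = true;            // drop the underscore, capitalize next char
            } else {
                property.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return property.toString();
    }
}
```

For example, toCamelCase("user_id") yields "userId" and toCamelCase("FIRST_NAME") yields "firstName", which is exactly the pairing written out manually in UserMapper.xml and in the @Result annotations above.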

Remote actors – discovering Akka

Assume our test application became a huge success and slowly a single server is no longer capable of handling the growing traffic. We are faced with two choices: replacing our server with a better one (scaling up) or buying a second one and building a cluster (scaling out). We’ve chosen to build a cluster as it’s easier to scale in the future. However, we quickly discover that our application no longer fulfils the very first requirement: The client application should call the URL […] at most from one thread – it’s forbidden to concurrently fetch random numbers using several HTTP connections. Obviously every node in the cluster is independent, having its own, separate instance of Akka, and thus a separate copy of the RandomOrgClient actor. In order to fix this issue we have a few options:

- having a global (cluster-wide!) lock (distributed monitor, semaphore, etc.) guarding multithreaded access; ordinary synchronized is not enough
- …or creating a dedicated node in the cluster to communicate with random.org, used by all other nodes via some API
- …or creating only one instance of RandomOrgClient on exactly one node and exposing it via some API (RMI, JMS…) to remote clients

Do you remember how much time we spent describing the difference between Actor and ActorRef? Now this distinction will become obvious. It turns out our solution will be based on the last suggestion; however, we don’t have to bother with an API, serialization, communication or transport layer. Even better, there is no dedicated API in Akka for handling remote actors. It’s enough to say in the configuration file: this particular actor is supposed to be created only on this node. All other nodes, instead of creating the same actor locally, will return a special proxy, which looks like a normal actor from the outside, while in reality it forwards all messages to the remote actor on the other node.
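The "transparent proxy" idea can be illustrated outside of Akka with a plain-Java sketch: the caller holds a reference to an interface that looks like a local collaborator, while the implementation behind it merely forwards every message elsewhere. All names here are invented for illustration only; Akka's actual RemoteActorRefProvider machinery is of course far more involved.

```java
import java.util.ArrayList;
import java.util.List;

// Invented names for illustration; NOT Akka's API.
interface MessageTarget {
    void tell(String message);
}

// A "real" local target that simply records what it receives.
class LocalTarget implements MessageTarget {
    final List<String> received = new ArrayList<>();

    public void tell(String message) {
        received.add(message);
    }
}

// A proxy satisfying the same interface, but forwarding every message to a
// target that may live elsewhere (in Akka: an actor on another node).
class ForwardingProxy implements MessageTarget {
    private final MessageTarget remote;

    ForwardingProxy(MessageTarget remote) {
        this.remote = remote;
    }

    public void tell(String message) {
        remote.tell(message);  // forward; the caller cannot tell the difference
    }
}
```

Callers program only against MessageTarget; whether they hold the real target or a forwarding proxy is decided elsewhere, which is exactly the property Akka's deployment configuration exploits so that application code stays unchanged.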
Let's say it again: we don't have to change anything in our code, it's enough to make some adjustments in the configuration file:

akka {
    actor {
        provider = "akka.remote.RemoteActorRefProvider"
        deployment {
            /buffer/client {
                remote = "akka://RandomOrgSystem@"
            }
        }
    }
    remote {
        transport = "akka.remote.netty.NettyRemoteTransport"
        log-sent-messages = on
        netty {
            hostname = ""
        }
    }
}

That's it! Each node in the cluster is identified by its server address and port. The key part of the configuration is the declaration that /buffer/client is supposed to be created only on that node. Every other instance (working on a different server or port), instead of creating a new copy of the actor, will build a special transparent proxy that calls the remote server. If you don't remember the architecture of our solution, the figure below demonstrates the message flow. As you can see, each node has its own copy of RandomOrgBuffer (otherwise each access of the buffer would result in a remote call, which defeats the purpose of the buffer altogether). However each node (except the middle one) has a remote reference to the RandomOrgClient actor (the node in the middle accesses this actor locally). The machine in the middle (JVM 1) runs on port 2552 and is the only machine that communicates with random.org. All the others (JVM 2 and 3, working on 2553 and 2554 respectively) communicate with this server indirectly via JVM 1. By the way, we can change the TCP/IP port used by each node either in the configuration file or with the -Dakka.remote.netty.port=2553 Java property. Before we announce premature success (again), we are faced with another problem. Or actually, we haven't really passed the original obstacle yet. Since RandomOrgClient is now accessed by multiple RandomOrgBuffer actors (distributed across the cluster), it can still initiate multiple concurrent HTTP connections to random.org, on behalf of every node in the cluster.
It's easy to imagine a situation where several RandomOrgBuffer instances send a FetchFromRandomOrg message at the same time, starting several concurrent HTTP connections. In order to avoid this situation we apply the already familiar technique of queueing requests inside the actor while one request is still in progress:

case class FetchFromRandomOrg(batchSize: Int)

case class RandomOrgServerResponse(randomNumbers: List[Int])

class RandomOrgClient extends Actor {

  val client = new AsyncHttpClient()
  val waitingForReply = new mutable.Queue[(ActorRef, Int)]

  override def postStop() {
    client.close()
  }

  def receive = LoggingReceive {
    case FetchFromRandomOrg(batchSize) =>
      waitingForReply += (sender -> batchSize)
      if (waitingForReply.tail.isEmpty) {
        sendHttpRequest(batchSize)
      }
    case response: RandomOrgServerResponse =>
      waitingForReply.dequeue()._1 ! response
      if (!waitingForReply.isEmpty) {
        sendHttpRequest(waitingForReply.front._2)
      }
  }

  private def sendHttpRequest(batchSize: Int) {
    val url = "https://www.random.org/integers/?num=" + batchSize + "&min=0&max=65535&col=1&base=10&format=plain&rnd=new"
    client.prepareGet(url).execute(new RandomOrgResponseHandler(self))
  }
}

private class RandomOrgResponseHandler(notifyActor: ActorRef) extends AsyncCompletionHandler[Unit]() {
  def onCompleted(response: Response) {
    val numbers = response.getResponseBody.lines.map(_.toInt).toList
    notifyActor ! RandomOrgServerResponse(numbers)
  }
}

This time pay attention to the waitingForReply queue. When a request to fetch random numbers from the remote web service comes in, we either initiate a new connection (if the queue contains no one but us), or, if other actors are already waiting, we politely put ourselves in the queue, remembering who requested how many numbers (waitingForReply += (sender -> batchSize)). When a reply arrives, we take the very first actor from the queue (the one that has waited the longest) and initiate another request on its behalf.
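The same single-in-flight queueing idea can be sketched in plain Java, independent of Akka and threads (the class and method names here are illustrative, not from the article; the "HTTP call" is a stand-in):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch: at most one "HTTP request" is in flight at any time; further
// requests wait in a FIFO queue until the previous reply has arrived.
public class SingleFlightQueue {
    private final Queue<Integer> waiting = new ArrayDeque<>();
    private final List<Integer> started = new ArrayList<>();

    // Called when a buffer asks for a batch of random numbers.
    public void fetch(int batchSize) {
        waiting.add(batchSize);
        if (waiting.size() == 1) {        // nothing else in flight
            sendHttpRequest(batchSize);
        }
    }

    // Called when the reply for the oldest request arrives.
    public void onResponse() {
        waiting.poll();                   // answer the longest-waiting requester
        Integer next = waiting.peek();
        if (next != null) {
            sendHttpRequest(next);        // start the next queued request
        }
    }

    private void sendHttpRequest(int batchSize) {
        started.add(batchSize);           // stand-in for the real HTTP call
    }

    public List<Integer> startedRequests() {
        return started;
    }
}
```

Just like the actor version, fetch() only initiates a request when the queue was empty beforehand, and onResponse() dequeues the oldest requester and starts the next one, so at most one connection is ever open.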
Unsurprisingly, there is no multithreading or synchronization code. However, it's important not to break encapsulation by accessing an actor's state outside of its receive method. I made this mistake by reading the waitingForReply queue inside the onCompleted() method. Because this method is called asynchronously by an HTTP client worker thread, we could potentially access the actor's state from two threads at the same time (if the actor was handling some message in receive at that moment). That's the reason why I decided to extract the HTTP reply callback into a separate class that has no implicit access to the actor. This is much safer, as access to the actor's state is guarded by the compiler itself. This is the last part of our Discovering Akka series. Remember that the complete source code is available on GitHub.   Reference: Remote actors – discovering Akka from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog. ...

Avoiding FORs – Anti-If Campaign

Have you ever wondered how FORs impact your code? How they limit your design and, more importantly, how they transform your code into a pile of lines without any human meaning? In this post we are going to see how to transform a simple example of a for loop (provided by Francesco Cirillo – anti-if campaign) into something more readable and better designed. So let's start with the original code using FOR:

public class Department {

    private List<Resource> resources = new ArrayList<Resource>();

    public void addResource(Resource resource) {
        this.resources.add(resource);
    }

    public void printSlips() {
        for (Resource resource : resources) {
            if (resource.lastContract().deadline().after(new Date())) {
                System.out.println(resource.name());
                System.out.println(resource.salary());
            }
        }
    }
}

See the printSlips method. Such a simple method, only 10 lines counting white lines, but it violates one of the most important rules: the method is doing more than one thing, mixing different levels of abstraction. As Robert C. Martin notes in his book: 'Functions should do one thing. They should do it well. They should do it only […]. If a function does only those steps that are one level below the stated name of the function, then the function is doing one thing […].' So, with this definition of how a method should look, let's revisit the previous method: how many things is printSlips doing? Concretely, four.

public void printSlips() {
    for (Resource resource : resources) {                           // Cycle
        if (resource.lastContract().deadline().after(new Date())) { // Selection
            System.out.println(resource.name());                    // Media + Content
            System.out.println(resource.salary());
        }
    }
}

The method is cycling, selecting resources, accessing the content, and accessing the media. Note that each of these belongs to a different level of abstraction: printing to the console should live at a different level than checking whether a resource's contract has expired. Let's see the solution proposed by Francesco.
The first thing to do is split the main function into three classes and two interfaces: one for iterating over resources, one to decide whether a resource has not expired yet, and one for printing a resource. With this approach we create a solution that is designed to be extended, and we improve readability too. And now it is time for the code. The Predicate interface will be used to check whether a resource meets an implemented condition:

public interface Predicate {
    boolean is(Resource each);
}

For example, in our case the implementation of the interface will look like:

public class InForcePredicate implements Predicate {
    public boolean is(Resource each) {
        return each.lastContract().deadline().after(new Date());
    }
}

We have moved the conditional into the InForcePredicate class. Note that if we wanted a class that checks whether a contract has expired, we would create a new class implementing Predicate with something like return each.lastContract().deadline().before(new Date()). The Block interface is the next interface, which implements the access to the media – in this case the console:

public interface Block {
    void evaluate(Resource resource);
}

And its implementation:

public class PrintSlip implements Block {
    public void evaluate(Resource resource) {
        System.out.println(resource.name());
        System.out.println(resource.salary());
    }
}

Again, note that changing where the information is sent (console, file, network, …) is simply a matter of implementing the Block interface.
And the last class is the one that contains an iterator over resources and also provides methods to call each interface created previously:

public class ResourceOrderedCollection {

    private Collection<Resource> resources = new ArrayList<Resource>();

    public ResourceOrderedCollection() {
        super();
    }

    public ResourceOrderedCollection(Collection<Resource> resources) {
        this.resources = resources;
    }

    public void add(Resource resource) {
        this.resources.add(resource);
    }

    public void forEachDo(Block block) {
        Iterator<Resource> iterator = resources.iterator();
        while (iterator.hasNext()) {
            block.evaluate(iterator.next());
        }
    }

    public ResourceOrderedCollection select(Predicate predicate) {
        ResourceOrderedCollection resourceOrderedCollection = new ResourceOrderedCollection();
        Iterator<Resource> iterator = resources.iterator();
        while (iterator.hasNext()) {
            Resource resource = iterator.next();
            if (predicate.is(resource)) {
                resourceOrderedCollection.add(resource);
            }
        }
        return resourceOrderedCollection;
    }
}

Note three important points. First, the constructor receives a list of resources. Second, the select method receives a predicate which is executed inside the iterator to decide whether a resource is eligible for printing, finally returning a new instance of ResourceOrderedCollection containing only the resources without an expired contract. Third, the forEachDo method receives a Block implementation which is called for every element of the resources list. And finally, the modified Department class using the previously developed classes:

public class Department {

    private List<Resource> resources = new ArrayList<Resource>();

    public void addResource(Resource resource) {
        this.resources.add(resource);
    }

    public void printSlips() {
        new ResourceOrderedCollection(this.resources).select(new InForcePredicate()).forEachDo(new PrintSlip());
    }
}

Observe that now the printSlips method contains a single readable line at a single level of abstraction.
Take notice that class and interface names are taken from Francesco's example, but if I were writing this myself I would choose more representative names. Cirillo's approach is good, but has some minor aspects to consider. For example, it has the 'vertical problem': the InForcePredicate implementation of the Predicate interface spends five lines of source code to encapsulate a single statement. We have explored two possible solutions to the problem, the last one being the one proposed by Cirillo. There are also many other possible and correct solutions, for example using the Template Method pattern, or mixing the use of Lambdaj with (or without) closures (Lambdaj syntax can be a bit confusing). All of them have their pros and cons, but all of them make your code readable and, more importantly, all functions do one thing, they do it well, and they do it only. As a final note, JDK 8 will provide support for closures natively, and will also provide many features that are now provided by Lambdaj. Until JDK 8 is stable (planned for mid-to-late 2013), or for your legacy code (from the point of view of JDK 8), Lambdaj is a really good fellow traveler. We keep learning.   Reference: Avoiding FORs – Anti-If Campaign from our JCG partner Alex Soto at the One Jar To Rule Them All blog. ...
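For illustration, here is how the same printSlips decomposition could look with the JDK 8 lambdas mentioned above, using the standard Stream API rather than Lambdaj (a sketch; the nested Resource class is a minimal stand-in for the article's domain classes):

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class Department8 {
    // Minimal stand-in for the article's Resource/Contract classes.
    public static class Resource {
        private final String name;
        private final long salary;
        private final Date deadline;
        public Resource(String name, long salary, Date deadline) {
            this.name = name; this.salary = salary; this.deadline = deadline;
        }
        public String name() { return name; }
        public long salary() { return salary; }
        public Date deadline() { return deadline; }
    }

    private final List<Resource> resources = new ArrayList<>();

    public void addResource(Resource resource) { resources.add(resource); }

    // Predicate and Block collapse into lambdas: one readable pipeline,
    // one level of abstraction per step.
    public List<String> slips() {
        List<String> out = new ArrayList<>();
        resources.stream()
                 .filter(r -> r.deadline().after(new Date()))       // selection
                 .forEach(r -> out.add(r.name() + " " + r.salary())); // content
        return out;
    }
}
```

The five-line InForcePredicate becomes a one-line lambda, which removes the 'vertical problem' while keeping each step at its own level of abstraction.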

Getting rid of null parameters with a simple spring aspect

What is the most hated and at the same time the most popular exception in the world? I bet it's the NullPointerException. A NullPointerException can mean anything, from a simple "oops, I didn't think that could be null" to hours and days of debugging third-party libraries (try using Dozer for complicated transformations, I dare you). The funny thing is, it's trivial to get rid of all the NullPointerExceptions in your code. This triviality is a side effect of a technique called "Design by Contract". I won't go into much detail about the theory – you can find everything you need on Wikipedia – but in a nutshell Design by Contract means: each method has a precondition (what it expects before being called); each method has a postcondition (what it guarantees, what is returned); each class has a constraint on its state (class invariant). So at the beginning of each method you check whether the preconditions are met, at the end whether the postconditions and the invariant are met, and if something's wrong you throw an exception saying what is wrong. Using Spring's internal static methods that throw appropriate exceptions (IllegalArgumentException), it can look something like this:

import static org.springframework.util.Assert.hasText;
import static org.springframework.util.Assert.notNull;

public class BranchCreator {

    public Story createNewBranch(Story story, User user, String title) {
        verifyParameters(story, user, title);
        Story branch = // ... the body of the method returning an object
        verifyReturnedValue(branch);
        return branch;
    }

    private void verifyParameters(Story story, User user, String title) {
        notNull(story);
        notNull(user);
        hasText(title);
    }

    private void verifyReturnedValue(Story branch) {
        notNull(branch);
    }
}

You can also use the Validate class from Apache Commons instead of Spring's notNull/hasText. Usually I just check preconditions and write tests for postconditions and constraints. But still, this is all boilerplate code.
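If you want no Spring or Commons dependency at all, the JDK itself (java.util.Objects, since Java 7) supports the same precondition style – a minimal sketch, with a made-up helper name mirroring the example above:

```java
import java.util.Objects;

public class TitlePreconditions {
    // Design-by-Contract style precondition checks, plain JDK only.
    // createTitle is a hypothetical method used purely for illustration.
    public static String createTitle(String title) {
        Objects.requireNonNull(title, "title must not be null");
        if (title.trim().isEmpty()) {
            throw new IllegalArgumentException("title must have text");
        }
        return title;
    }
}
```

Note that Objects.requireNonNull throws a NullPointerException rather than Spring's IllegalArgumentException, which is a reasonable fit since a null argument really is a null problem.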
To move it out of your class, you can use one of many Design by Contract libraries, for example SpringContracts or Contract4J. Either way you end up checking the preconditions on every public method. And guess what? Except for Data Transfer Objects and some setters, every public method I write expects its parameters NOT to be null. So to save us some writing of this boilerplate code, how about adding a simple aspect that makes it impossible, in the whole application, to pass null to anything other than DTOs and setters? Without any additional libraries (I assume you are already using the Spring Framework), annotations, and whatnot. Why would I want to disallow nulls in parameters? Because we have method overloading in modern languages. Seriously, how often do you want to see something like this:

Address address = AddressFactory.create(null, null, null, null);

And this is not much better either:

Microsoft.Office.Interop.Excel.Workbook theWorkbook = ExcelObj.Workbooks.Open(openFileDialog.FileName, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing);

The solution. So here is a simple solution: you add one class to your project and a few lines of Spring IoC configuration. The class (aspect) looks like this:

import org.aspectj.lang.JoinPoint;
import static org.springframework.util.Assert.notNull;

public class NotNullParametersAspect {
    public void throwExceptionIfParametersAreNull(JoinPoint joinPoint) {
        for (Object argument : joinPoint.getArgs()) {
            notNull(argument);
        }
    }
}

And the Spring configuration is here (remember to change the namespace to your project).
<aop:config proxy-target-class="true">
    <aop:aspect ref="notNullParametersAspect">
        <aop:pointcut id="allPublicApplicationOperationsExceptDtoAndSetters"
            expression="execution(public * eu.solidcraft.*..*.*(..)) &amp;&amp; !execution(public * eu.solidcraft.*..*Dto.*(..)) &amp;&amp; !execution(public * eu.solidcraft.*..*.set*(..))"/>
        <aop:before method="throwExceptionIfParametersAreNull"
            pointcut-ref="allPublicApplicationOperationsExceptDtoAndSetters"/>
    </aop:aspect>
</aop:config>

<bean class="eu.solidcraft.aspects.NotNullParametersAspect" id="notNullParametersAspect"/>

The '&amp;&amp;' is no error, it's just the && operator escaped in XML. If you don't understand the AspectJ pointcut definition syntax, here is a little cheat sheet. And here is a test telling us that the configuration is successful:

public class NotNullParametersAspectIntegrationTest extends AbstractIntegrationTest {

    @Resource(name = "userFeedbackFacade")
    private UserFeedbackFacade userFeedbackFacade;

    @Test(expected = IllegalArgumentException.class)
    public void shouldThrowExceptionIfParametersAreNull() {
        //when
        userFeedbackFacade.sendFeedback(null);

        //then exception is thrown
    }

    @Test
    public void shouldNotThrowExceptionForNullParametersOnDto() {
        //when
        UserBookmarkDto userBookmarkDto = new UserBookmarkDto();
        userBookmarkDto.withChapter(null);
        StoryAncestorDto ancestorDto = new StoryAncestorDto(null, null, null, null);

        //then no exception is thrown
    }
}

AbstractIntegrationTest is a simple class that starts the Spring test context. You can use AbstractTransactionalJUnit4SpringContextTests with @ContextConfiguration(..) instead. The catch. Ah yes, there is a catch. Since Spring AOP uses either J2SE dynamic proxies based on an interface or AspectJ CGLIB proxies, every class will either need an interface (for simple proxy-based aspect weaving) or a constructor without any parameters (for CGLIB weaving). The good news is that the constructor can be private.
Reference: Getting rid of null parameters with a simple spring aspect from our JCG partner Jakub Nabrdalik at the Solid Craft blog. ...

Introduction to Enterprise Integration Patterns

In this blog entry we will go through some of the Enterprise Integration Patterns. These are known design patterns that aim to solve integration challenges; after reading this, one will have a head start in designing integration solutions. EIPs (in short) are known design patterns that provide solutions to problems faced during application integration. The obvious question that comes to mind is: what are those problems that we need to deal with while integrating applications? Applications are heterogeneous in nature: they are developed using different languages, run on different OSes, and understand different data formats. Applications undergo a lot of change: they are subjected to upgrades, and their APIs change over time. And they need to exchange data over networks in a reliable and secure manner. EIPs are classified in the following categories; adjacent to each pattern, a short description is given.

Integration styles:

File Transfer – In this mode applications exchange information using files, shared at some common location.
Shared Database – Here applications use a common database schema.
Messaging – An entity mediates between applications that want to exchange data. It does the job of accepting messages from producers and then delivering them to the consumers. Messaging helps to achieve loose coupling while integrating applications, isolating the connected applications from the API changes/upgrades that happen over time.
RPC – In this style an application exposes its functionality using interfaces; the caller needs to be aware of those and invokes them using stubs.

Except for RPC, all three mechanisms above are asynchronous in nature. The next set of patterns talks about Messaging Systems:

Message – The structure of a message is well defined by the messaging system used. It usually contains header and body sections.
Message Channel – Channels are the mediums where messages are produced.
They are the usual queues and topics.

Pipes and Filters – This pattern is useful when one needs to process messages before they are delivered to the consumer applications.
Message Router – Used when the sender application does not know which channel the receiver is subscribed to. A router sits in between, delivering messages to the correct channel; it has routing rules that decide where the messages shall be delivered.
Message Translator – Translators are used to change the format of a message. The sender application might send CSV data while the receiver application understands XML; in that case we need a translator before the receiving application that does the job of CSV to XML conversion.
Message Endpoint – An endpoint is a component that helps an application interact with messaging systems. Endpoints have the protocol for communicating with the messaging system built in; they are the message producers and consumers.

Channel patterns: These patterns talk about attributes of messaging channels.

Peer 2 Peer – Channels that deliver a message to a single consumer. An example is a queue.
Publish Subscribe – Channels that broadcast a message to all subscribing consumers. Topics are of a pub-sub nature.
Dead Letter Channel – Channels used for moving messages that cannot be processed, e.g. when the consumer can't understand them or the messages have expired. This is important from the point of view of monitoring & management.
Messaging Bridge – These are the channel adapters that bridge different messaging systems. Consider a case where there are two enterprise systems, one using Microsoft's MQ while the other uses IBM's MQ server; there you need a bridge that can connect them.
Guaranteed Delivery – Persistent channels are used to guarantee message delivery. If the messaging system crashed, it would lose all messages present in memory, so channels are usually backed by a persistent store where all messages in the channel are stored.
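The Message Router described above can be sketched in a few lines of plain Java – the channel names and the routing rule here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a content-based message router: the router inspects each
// message and picks the channel, so senders never need to know which
// channel a receiver listens on.
public class MessageRouter {
    private final Map<String, List<String>> channels = new HashMap<>();

    public void route(String message) {
        // Illustrative routing rule: XML payloads go to one channel,
        // everything else to another.
        String channel = message.startsWith("<") ? "xml-orders" : "plain-orders";
        channels.computeIfAbsent(channel, k -> new ArrayList<>()).add(message);
    }

    public List<String> channel(String name) {
        return channels.getOrDefault(name, new ArrayList<>());
    }
}
```

In a real system the routing rules would typically be expressed declaratively (e.g. XPath over XML payloads, as mentioned under Content Based Routing below), but the shape is the same: inspect, decide, deliver.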
Message Construction Patterns: These patterns are used to specify the intent of the messages – what should the receiver do after getting the message?

Command Message – These specify a method or a function that the receiver should invoke. Consider the case where XML is being used: the root node may specify the method name while the child elements specify the arguments for the method invocation.
Document Message – When the sender passes data but does not know what the receiver should do with it.
Event Message – The sender sends notification messages about changes happening on its end; the receiver may choose to ignore or react to them.
Request Reply – In this the sender expects a reply back. The message might be composed of two parts: one contains the request and the other is populated by the receiver, i.e. the response.
Correlation Identifier – If the responses are received asynchronously, an id is used to correlate the responses with their corresponding requests.

Routing Patterns:

Content Based Routing – Messages are inspected to determine the correct channel. Where XML is used, rules are written in XPath.
Message Filter – When a receiver is only interested in messages having certain properties, it needs to apply a filter. This capability generally comes built in with messaging systems.
Splitter – In case messages arrive in a batch, a splitter is required to break the message into parts that can be processed individually.
Aggregator – An aggregator does the opposite job of a splitter: it correlates and combines related messages.

Transformation Patterns:

Content Enricher – An enricher does the job of adding extra information to a message. This is required in case not all the data needed to process the message is present.
Content Filter – The content filter does the opposite: it removes unwanted data from a message.
Normalizer – A normalizer does the job of converting messages arriving in different formats to a common format.
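The Splitter/Aggregator pair above can be sketched in a couple of lines of Java – the batch format (semicolon-separated parts) is invented purely for illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SplitterAggregator {
    // Splitter: break a batch message into individually processable parts.
    public static List<String> split(String batch) {
        return Arrays.asList(batch.split(";"));
    }

    // Aggregator: the reverse operation - correlate and recombine the parts.
    public static String aggregate(List<String> parts) {
        return parts.stream().collect(Collectors.joining(";"));
    }
}
```

A real aggregator also has to decide when a batch is complete (count, timeout, or a correlation identifier on each part); that bookkeeping is omitted here.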
Your application may need the ability to accept data in JSON, XML, CSV etc. while the processing logic only understands XML; in that case you need a normalizer.

Endpoint patterns:

Transaction Client – A transaction client helps in coordinating with external transaction services.
Message Dispatcher – A pattern where the receiver dispatches arriving messages to workers; the workers have the job of processing the messages.
Event Driven Consumer – In this the receiver registers an action with the messaging system; on receiving a message, the messaging system calls that action.

System Management Patterns: These specify ways to monitor and manage systems efficiently.

Detour – As the name suggests, the path of the message is changed for activities such as validation, logging etc. This extra processing is controlled by a switch, so it can be turned off for performance reasons.
Wire Tap – Here the message is copied to a channel and is later retrieved for inspection or analysis.
Message Store – As the message is passed on from the receiver to the processing unit, the whole message or parts of it (some properties from the header or message body) are stored in a central location.

To dive into more detail, check eaipatterns.com, which has elaborated these patterns in depth. In coming blog entries we will also look into Apache Camel, which provides implementations of many of these patterns.   Reference: Introduction to Enterprise Integration Patterns from our JCG partner Abhishek Jain at the NS.Infra blog. ...

Making Plain Old Java OSGi Compatible

Although OSGi is increasingly popular in the Java world, there are many Java applications and libraries that have not been designed to work in OSGi. Sometimes you may need to run such code inside an OSGi environment, either because you would like to take advantage of the benefits offered by OSGi itself, or because you need certain features only offered by this particular environment. Often, you can't afford to migrate entirely to OSGi, or at least you need a transition period during which your code works fine both in and outside OSGi. And surely, you would like to do this with minimum effort and without increasing the complexity of your software. Recently, our team at SAP faced a similar challenge. We have a rather large legacy plain Java application which has grown over the years to include a number of homegrown frameworks and custom solutions. We needed to offer a REST-based interface for this application, and so we had either to include a Web server, or run inside an environment which has it. We opted for using SAP Lean Java Server (LJS), the engine behind SAP NetWeaver Cloud, which includes Tomcat and other useful services. However, LJS is based on Equinox, an OSGi implementation by Eclipse, and therefore we needed to make sure our code is compatible with OSGi to ensure smooth interoperability. In the process, we learned a lot about this topic and so I would like to share our most interesting findings with you in this post. In order for plain Java code to run smoothly inside an OSGi environment, the following prerequisites should be met:

It is packaged as OSGi bundles, that is jar archives with valid OSGi manifests.
It adheres to the requirements and restrictions imposed by OSGi regarding loading classes dynamically.
All its packages are exported by only one bundle, i.e. there are no different bundles exporting the same package.

Also, in many cases you may need to create a new entry point for your application to be started from the OSGi console.
If you use Equinox, you should consider creating an Equinox application for this purpose. Note that meeting the above requirements means neither that you should migrate your code to OSGi so that it only runs in OSGi, nor that you should fundamentally change your development environment or process to be based on OSGi. On the contrary, our experience shows that it is quite possible to achieve compatibility with OSGi without losing the capability to run outside OSGi, and without dramatically changing your development approaches and tools, by addressing the above requirements in the following ways:

All OSGi manifests can be generated automatically using BND and other tools based on it. Outside OSGi, these manifests are not used, but they also don't hurt.
Loading classes dynamically based on Class.forName() and custom classloading can be replaced by nearly identical mechanisms that use native OSGi services under the hood. It is possible to switch dynamically between the original and the OSGi mechanisms based on whether your code executes in OSGi, with very little change to your existing code. Alternatively, you could get rid of dynamic classloading in OSGi altogether by using the OSGi services mechanism for dynamic registration and discovery of "named" implementations.
Identical packages exported by more than one bundle should simply be renamed. Obviously this works outside OSGi as well.
Dependencies on OSGi can be minimized by placing all OSGi-specific code in a limited number of bundles, which preferably don't contain code that should be executed outside OSGi as well.

The following sections provide more details on how this can be achieved. Packaging as OSGi Bundles. In order to work inside an OSGi environment, all Java code should be packaged as OSGi bundles. This applies not only to the archives generated by your build, but also to all their dependencies that are delivered as part of your software.
If your build uses Maven, you should strongly consider using the Maven Bundle Plugin (which internally uses BND) to generate valid OSGi manifests for all archives produced by the build. In most cases, the manifests generated by the default configuration of this plugin will work just fine. However, in some cases certain minor tweaks and additions could be needed to generate the right manifests, for example:

Adding additional import packages for classes that are only used via reflection, and therefore could not be found by BND.
Specifying service component XMLs for bundles that expose OSGi declarative services.
Specifying bundle activators for bundles that depend on custom activation.

In our project, the bundle plugin is configured in our parent POM as follows:

<properties>
    <classpath></classpath>
    <import-package>*</import-package>
    <export-package>{local-packages}</export-package>
    <bundle-directives></bundle-directives>
    <bundle-activator></bundle-activator>
    <bundle-activationpolicy></bundle-activationpolicy>
    <require-bundle></require-bundle>
    <service-component></service-component>
    ...
</properties>
...
<build>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.apache.felix</groupId>
                <artifactId>maven-bundle-plugin</artifactId>
                <version>2.3.4</version>
                <extensions>true</extensions>
                <configuration>
                    <encoding>${project.build.sourceEncoding}</encoding>
                    <archive>
                        <forced>true</forced>
                    </archive>
                    <instructions>
                        <Bundle-SymbolicName>${project.artifactId}${bundle-directives}</Bundle-SymbolicName>
                        <Bundle-Name>${project.artifactId}</Bundle-Name>
                        <_nouses>true</_nouses>
                        <Class-Path>${classpath}</Class-Path>
                        <Export-Package>${export-package}</Export-Package>
                        <Import-Package>${import-package}</Import-Package>
                        <Bundle-Activator>${bundle-activator}</Bundle-Activator>
                        <Bundle-ActivationPolicy>${bundle-activationpolicy}</Bundle-ActivationPolicy>
                        <Require-Bundle>${require-bundle}</Require-Bundle>
                        <Service-Component>${service-component}</Service-Component>
                    </instructions>
                </configuration>
                <executions>
                    <execution>
                        <id>bundle-manifest</id>
                        <phase>process-classes</phase>
                        <goals>
                            <goal>manifest</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            ...
        </plugins>
    </pluginManagement>
</build>

In the child POMs, we specify any of the properties that need to have a value different from the default. Such POMs are relatively few in our case. Most of our dependencies also don't have OSGi manifests, so we generate them as part of our build process. Currently, this is done by a Groovy script which uses the BND wrap command. For the majority of our dependencies, using a generic template for the manifest is sufficient. Currently, we use the following template, which is generated on the fly by the script:

Bundle-Name: ${artifactId}
Bundle-SymbolicName: ${artifactId}
Bundle-Version: ${version}
-nouses: true
Export-Package: com.sap.*;version=${version_space},*
Import-Package: com.sap.*;version="[${version_space},${version_space}]";resolution:=optional,*;resolution:=optional

In only a few cases, the manifest template has to contain information specific to the concrete jar.
We captured such specifics by submitting these templates to our SCM and using the submitted version instead of the default. Compliance with OSGi Classloading. Alternatives to Commonly Used Classloading Mechanisms. OSGi environments impose their own classloading mechanism, which is described in more detail in the following articles: OSGi Classloading; Classloading and Type Visibility in OSGi. However, some plain Java applications and libraries rely extensively on creating custom classloaders and loading classes via Class.forName() or ClassLoader.loadClass() in order to use reflection, and our application was one of them. This is problematic in OSGi, as described in more detail in OSGi Readiness – Loading Classes. The solutions proposed in that article, although valid, could not be directly applied in our case, as this would involve heavily changing a large amount of legacy code, something that we didn't want to do at this point. We found that it is possible to solve this issue in an elegant way, transparently for the major bulk of our legacy code, relying entirely on native OSGi mechanisms. Instead of Class.forName(), one can use the following sequence of calls:

Use FrameworkUtil.getBundle() to get hold of the current Bundle and its BundleContext.
Get the standard PackageAdmin service from the OSGi service registry via the bundle context obtained in the previous step.
Use PackageAdmin.getExportedPackage() and ExportedPackage.getExportingBundle() to find the Bundle which exports the package.
Finally, simply call Bundle.loadClass() to load the requested class.

In addition, although it is not possible to directly work with the low-level bundle classloader, the Bundle class itself provides classloading methods such as Bundle.loadClass() and Bundle.getResource(). Therefore, it is possible to create a custom classloader that wraps a bundle (or a number of bundles) and delegates to these methods.
To make the major bulk of your legacy code work in OSGi with only minor changes, it is sufficient to adapt it in the following way:

1. If the code executes in OSGi, instead of invoking Class.forName(), invoke a method which implements the sequence described above.
2. If the code executes in OSGi, instead of creating a custom classloader from a number of jar files, create a BundleClassLoader from the bundles corresponding to these jar files.

To make the above changes even more straightforward, in our application we introduced a new class named ClassHelper. It is a singleton which provides the following static helper methods that delegate to identical non-static methods of the single instance:

public static boolean isOsgi();
public static Object getBundleContext(Class<?> clazz);
public static Class<?> forName(String className, ClassLoader cl) throws ClassNotFoundException;
public static ClassLoader getBundleClassLoader(String[] bundleNames, ClassLoader cl);

The default implementations of these methods in the base ClassHelper class provide the non-OSGi behavior – isOsgi() returns false, getBundleContext() and getBundleClassLoader() return null, and forName() simply delegates to Class.forName(). The class OsgiClassHelper inherits from ClassHelper and in turn implements the proper OSGi behavior described above. We put this class in its own special bundle to make sure that the bundle which contains ClassHelper and a large amount of other utilities is free from OSGi dependencies. This special bundle has an Activator, which replaces the default ClassHelper instance with an OsgiClassHelper instance upon bundle activation. Since the activation code is only executed in OSGi, this ensures that the proper implementation is loaded in both cases. In the rest of our code, it was sufficient to simply replace invocations of Class.forName() with ClassHelper.forName(), and creation of custom classloaders with ClassHelper.getBundleClassLoader().
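The static/instance split described above can be sketched as follows. The method names follow the post, but the setInstance() swap hook is an assumption about how the special bundle’s Activator might install the OSGi-aware subclass:

```java
// Minimal sketch of the ClassHelper pattern: static helpers delegate to a
// single swappable instance whose default behavior is the non-OSGi one.
class ClassHelper {
    private static ClassHelper instance = new ClassHelper();

    /** Called from the OSGi bundle's Activator to install an OsgiClassHelper. */
    public static void setInstance(ClassHelper helper) { instance = helper; }

    // Static helpers delegating to the single instance:
    public static boolean isOsgi() { return instance.isOsgiImpl(); }
    public static Class<?> forName(String className, ClassLoader cl)
            throws ClassNotFoundException {
        return instance.forNameImpl(className, cl);
    }

    // Default, non-OSGi behavior; an OsgiClassHelper subclass would override these.
    protected boolean isOsgiImpl() { return false; }
    protected Class<?> forNameImpl(String className, ClassLoader cl)
            throws ClassNotFoundException {
        return (cl != null) ? Class.forName(className, true, cl) : Class.forName(className);
    }
}
```

Outside OSGi the default instance is never replaced, so callers transparently get plain Class.forName() behavior.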
Using OSGi Services

In many plain Java applications, certain implementations are loaded based on a string “handle”, either the class name itself or something else. ClassLoader.loadClass(), often in combination with custom classloading, is commonly used for this purpose. OSGi offers the OSGi services mechanism for registration and discovery of such “named” implementations, which would allow you to get rid of dynamic classloading altogether. This mechanism is native to OSGi and offers a very elegant alternative to the custom mechanisms mentioned above. The downside of this approach, compared to the approach presented in the previous section, is that it requires somewhat deeper changes to your code, especially if it should continue to work outside OSGi as well. You need to consider the following aspects:

1. Registering your interfaces and implementations in the OSGi service registry.
2. Discovering these implementations at runtime in the code that uses them.

Although you could register services programmatically, in most cases you would prefer using the OSGi declarative services approach, since it allows registering an existing implementation as an OSGi service in a purely declarative way. Regarding discovery, you could query the service registry directly via facilities provided by the BundleContext, or you could use the more powerful service tracker mechanism. There are many excellent tutorials on OSGi services and declarative services in particular, among them:

- OSGi Services – Tutorial by Lars Vogel.
- Getting Started with OSGi: Introducing Declarative Services by Neil Bartlett.

In our case, we didn’t want to change our codebase too dramatically, so we switched to OSGi services in only a few places where we felt the positive impact would justify the investment. For the time being, we declared our existing implementations as services by adding service component XMLs. Although this XML-based approach is standard and commonly used, we find it rather verbose and inconvenient.
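For the declarative approach, a minimal service component XML might look like the following sketch (the interface and implementation class names are hypothetical). Such a file typically lives under OSGI-INF/ and is referenced from the bundle manifest’s Service-Component header:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" name="com.example.greeter">
  <!-- The class that implements the service -->
  <implementation class="com.example.internal.GreeterImpl"/>
  <service>
    <!-- The interface under which the service is registered -->
    <provide interface="com.example.Greeter"/>
  </service>
</scr:component>
```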
An alternative approach would be to use annotations for specifying components and services, as described in the declarative services Wiki page and the OSGi Release 4 Draft Paper. These annotations are already supported by BND.

Additional Considerations

All Packages Exported by Only One Bundle

Exporting the same package from more than one bundle doesn’t work well in OSGi and so has to be avoided. If you have such cases in your code, you should rename these packages appropriately.

Exposing an OSGi Entry Point

Finally, you may need to provide a new entry point for starting your application from the OSGi console. If you use Equinox, one appropriate mechanism for this is creating an Equinox application, which involves implementing the org.eclipse.equinox.app.IApplication interface and providing one additional plugin.xml, as described in Getting started with Eclipse plug-ins: command-line applications. This application can be started from the Equinox OSGi console using the startApp command.

Conclusion

It is possible to make plain Java applications and libraries OSGi compatible with relatively little effort and manageable impact on your existing code, by following the guidelines and approaches described in this post. Do you have similar experience with making Java code compatible with OSGi? If yes, I would love to hear about it.   Reference: Making Plain Old Java OSGi Compatible from our JCG partner Stoyan Rachev at the Stoyan Rachev’s Blog blog. ...

JSON-Schema in WADL

In between other jobs I have recently been reviewing the WADL specification, with a view to fixing some documentation problems and producing an updated version. One of the things that became apparent was the lack of any grammar support for languages other than XML – yes, you can use a mapping from JSON<->XML Schema, but this would be less than pleasant for a JSON purist. So I began to look at how one would go about attaching a JSON-Schema grammar for a JSON document to a WADL description of a service. This isn’t a specification yet, but a proposal of how it might work consistently. Now I work with Jersey mostly, so let’s consider what Jersey currently generates for a service that returns both XML and JSON. The service here is implemented using JAXB bindings, so both representations share a similar structure, as defined by the XML Schema referenced by the include.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<application xmlns="http://wadl.dev.java.net/2009/02">
    <doc xmlns:jersey="http://jersey.java.net/" jersey:generatedBy="Jersey: 1.16-SNAPSHOT 10/26/2012 09:28 AM"/>
    <grammars>
        <include href="xsd0.xsd">
            <doc title="Generated" xml:lang="en"/>
        </include>
    </grammars>
    <resources base="http://localhost/">
        <resource path="/root">
            <method id="hello" name="PUT">
                <request>
                    <representation xmlns:m="urn:message" element="m:requestMessage" mediaType="application/json"/>
                    <representation xmlns:m="urn:message" element="m:requestMessage" mediaType="application/xml"/>
                </request>
                <response>
                    <representation xmlns:m="urn:message" element="m:responseMessage" mediaType="application/json"/>
                    <representation xmlns:m="urn:message" element="m:responseMessage" mediaType="application/xml"/>
                </response>
            </method>
        </resource>
    </resources>
</application>

So the first thing we considered was re-using the existing element property, which is defined as a QName, on the representation element to reference an imported JSON-Schema.
It is shown here both with and without an arbitrary namespace, so it can be told apart from XML elements without a namespace.

<grammars>
    <include href="xsd0.xsd"/>
    <include href="application.wadl/responseMessage"/>
</grammars>

<representation element="responseMessage" mediaType="application/json"/>

Or, with xmlns:json="http://wadl.dev.java.net/2009/02/json":

<representation element="json:responseMessage" mediaType="application/json"/>

The problem is that the JSON-Schema specification as it stands doesn’t have a concept of a “name” property, so each JSON-Schema is uniquely identified by its URI. Also, from my reading of the specification, each JSON-Schema contains the definition for at most one document – not the multiple types / documents that can be contained in XML-Schema. So the next best suggestion would be to just use the “filename” part of the URI as a proxy for the URI; but of course that won’t necessarily be unique. I could see, for example, the US government and Yahoo both publishing their own “address” micro format. The better solution to this problem is to introduce a new attribute (luckily the WADL spec was designed with this in mind) that is of type URI and can be used to directly reference the JSON-Schema definitions. So rather than the direct import in the previous example, we have a URI property on the element itself. The “describedby” attribute name comes from the JSON-Schema proposal and is consistent with the rel used on atom links in the spec.

xmlns:json="http://wadl.dev.java.net/2009/02/json-schema"
xmlns:m="urn:message"

<grammars>
    <include href="xsd0.xsd"/>
</grammars>

<representation mediaType="application/json" element="m:responseMessage" json:describedby="application.wadl/responseMessage"/>

This has the secondary advantage that the format is backwardly compatible with tooling that was relying on the XML-Schema grammar, although this is probably only of interest to people who work on tooling / testing tools like myself.
Once you have the JSON-Schema definition, some users are going to want to do away with the XML altogether, so finally here is a simple mapping of the WADL to a JSON document that contains just the JSON-Schema information. It has been suggested by Sergey Breyozkin that the JSON mapping should only show the JSON grammars, and I am coming around to that way of thinking. I would be interested to hear of a use case for the JSON mapping that would want access to the XML Schema.

{
    "doc" : {
        "@generatedBy" : "Jersey: 1.16-SNAPSHOT 10/26/2012 09:28 AM"
    },
    "resources" : {
        "@base" : "http://localhost/",
        "resource" : {
            "@path" : "/root",
            "method" : {
                "@id" : "hello",
                "@name" : "PUT",
                "request" : {
                    "representation" : [
                        {
                            "@mediaType" : "application/json",
                            "@describedby" : "application.wadl/requestMessage"
                        }
                    ]
                },
                "response" : {
                    "representation" : [
                        {
                            "@mediaType" : "application/json",
                            "@describedby" : "application.wadl/responseMessage"
                        }
                    ]
                }
            }
        }
    }
}

I am currently using the mime type of “application/vnd.sun.wadl+json” for this mapping to be consistent with the default WADL mime type. I suspect we would want to change this in the future; but it will do for starters. So this is all very interesting but you can’t play with it unless you have an example implementation. I have something working for both the server side and for a Java client generator, in Jersey and wadl2java respectively, and that will be the topic of my next post. I have been working with Pavel Bucek on the Jersey team on these implementations and the WADL proposal, thanks very much to him for putting up with me.   Reference: JSON-Schema in WADL from our JCG partner Gerard Davison at the Gerard Davison’s blog blog. ...

Iteratees for imperative programmers

When I first heard the word iteratee, I thought it was a joke. Turns out, it wasn’t a joke; in fact, there are also enumerators (that’s ok) and enumeratees (you’re killing me). If you’re an imperative programmer, or rather a programmer who feels more comfortable writing imperative code than functional code, then you may be a little overwhelmed by all the introductions to iteratees out there, because they all assume that you think from a functional perspective. Well, I just learnt iteratees, and although I’m feeling more and more comfortable with functional programming every day, I still think like an imperative programmer at heart. This made learning iteratees very difficult for me. So while I’m still in the imperative mindset, I thought this a very good opportunity to explain iteratees from an imperative programmer’s perspective, taking no functional knowledge for granted. If you’re an imperative programmer who wants to learn iteratees, this is the blog post for you. I’m going to specifically be looking at Play’s Iteratee API, but the concepts learnt here will apply to all Iteratees in general. So let’s start off with explaining what iteratees, and their counterparts, are trying to achieve. An iteratee is a method of reactively handling streams of data that is very easily composable. By reactive, I mean non-blocking, i.e. you react to data being available to read, and react to the opportunity to write data. By composable, I mean you write simple iteratees that do one small thing well, then you use those as the building blocks to write iteratees that do bigger things, and you use those as the building blocks to write iteratees to do even bigger things, and so on. At each stage, everything is simple and easy to reason about.

Reactive stream handling

If you’re looking for information about iteratees, then I’m guessing you already know a bit about what reactive stream handling is.
Let’s contrast it to synchronous IO code: trait InputStream { def read(): Byte } So this should be very familiar, if you want to read a byte, you call read. If no byte is currently available to be read, that call will block, and your thread will wait until a byte is available. With reactive streams, obviously it’s the other way around, you pass a callback to the stream you want to receive data from, and it will call that when it’s ready to give data to you. So typically you might implement a trait that looks like this: trait InputStreamHandler { def onByte(byte: Byte) } So before we go on, let’s look at how the same thing would be achieved in a pure functional world. At this point I don’t want you to ask why we want to do things this way, you will see that later on, but if you know anything about functional programming, you know that everything tends to be immutable, and functions have no side effects. The trait above has to have side effects, because unless you are ignoring the bytes passed to onByte, you must be changing your state (or something else’s state) somehow in that function. So, how do we handle data without changing our state? The answer is the same way other immutable data structures work, we return a copy of ourselves, updated with the new state. So if the InputStreamHandler were to be functional, it might look like this: trait InputStreamHandler { def onByte(byte: Byte): InputStreamHandler } And an example implementation of one, that reads input into a seq, might look like this: class Consume(data: Seq[Byte]) extends InputStreamHandler { def onByte(byte: Byte) = new Consume(data :+ byte) } So we now have imperative and functional traits that react to our input stream, and you might be thinking this is all there is to reactive streams. If that’s the case, you’re wrong. What if we’re not ready to handle data when the onByte method is called?
If we’re building structures in memory this will never be the case, but if for example we’re storing them to a file or to a database as we receive the data, then this very likely will be the case. So reactive streams are two way, it’s not just you, the stream consumer that is reacting to input, the stream producer must react to you being ready for input. Now this is possible to implement in an imperative world, though things do start looking much more functional. We simply start using futures: trait InputStreamHandler { def onByte(byte: Byte): Future[Unit] } So, when the stream we are consuming has a byte for us, it calls onByte, and then attaches a callback to the future we return, to pass the next byte, when it’s ready. If you have a look at Netty’s asynchronous channel APIs, you’ll see it uses exactly this pattern. We can also implement something similar for an immutable functional API: trait InputStreamHandler { def onByte(byte: Byte): Future[InputStreamHandler] } And so here we have a functional solution for reactive stream handling. But it’s not a very good one, for a start, there’s no way for the handlers to communicate to the code that uses them that they don’t want to receive any more input, or if they’ve encountered an error (exceptions are frowned upon in functional programming). We could add things to handle this, but very soon our interface would become quite complex, hard to break up into small pieces that can be composed, etc. I’m not going to justify this now, I think you’ll see it later when I show you just how easy iteratees are to compose. So, by this stage I hope you have understood two important points. Firstly, reactive stream handling means twofold reacting, both your code has to react to the stream being ready, and the stream has to react to you being ready. 
Secondly, when I say that we want a functional solution, I mean a solution where everything is immutable, and that is achieved by our stream handlers producing copies of themselves each time they receive/send data. If you’ve understood those two important points, then now we can move on to introducing iteratees.

Iteratees

There are a few things that our interface hasn’t yet addressed. The first is, how does the stream communicate to us that it is finished, that is, that it has no more data for us? To do this, instead of passing in a byte directly, we’re going to abstract our byte to be something of type Input[Byte], and that type can have three possible implementations, EOF, an element, or empty. Let’s not worry about why we need empty just yet, but assume for some reason we might want to pass empty. So this is what Input looks like:

sealed trait Input[+E]

object Input {
  case object EOF extends Input[Nothing]
  case object Empty extends Input[Nothing]
  case class El[+E](e: E) extends Input[E]
}

Updating our InputStreamHandler, we now get something that looks like this:

trait InputStreamHandler[E] {
  def onInput(in: Input[E]): Future[InputStreamHandler[E]]
}

Now updating our Consumer from before to handle this, it might look like this:

class Consume(data: IndexedSeq[Byte]) extends InputStreamHandler[Byte] {
  def onInput(in: Input[Byte]) = in match {
    case El(byte) => Future.successful(new Consume(data :+ byte))
    case _ => Future.successful(this)
  }
}

You can see that when we get EOF or Empty, there’s nothing for us to do to change our state, so we just return ourselves again. If we were writing to another stream, we might, when we receive EOF, close that stream (or rather, send it an EOF). The next thing we’re going to do is make it easier for our handler to consume input immediately without having to create a future.
To do this, rather than passing the byte directly, we’ll pass a function, that takes a function as a parameter, and that function will take the byte as a parameter. So, our handler, when it’s ready, will create a function to handle the byte, and then invoke the function that was passed to it, with that function. We’ll call the first function the cont function, which is short for continue, and means when you’re ready to continue receiving input invoke me. Too many functions? Let’s look at the code:

trait InputStreamHandler[E] {
  def onByte[B](cont: (Input[E] => InputStreamHandler[E]) => Future[B]): Future[B]
}

Now where did this Future[B] come from? B is just the mechanism that the stream uses to pass state back to itself. As the handler, we don’t have to worry about what it is, we just have to make sure that we eventually invoke the cont function, and eventually make sure that the B it returns makes it back to our caller. And what does this look like in our Consume iteratee? Let’s have a look:

class Consume(data: IndexedSeq[Byte]) extends InputStreamHandler[Byte] {
  def onByte[B](cont: (Input[Byte] => InputStreamHandler[Byte]) => Future[B]) = cont {
    case Input.El(byte) => new Consume(data :+ byte)
    case _ => this
  }
}

You can see in our simple case of being ready to handle input immediately, we just immediately invoke cont, we no longer need to worry about creating futures. If we want to handle the input asynchronously, it is a little more complex, but we’ll take a look at that later. Now we have one final step in producing our iteratee API. How does the handler communicate back to the stream that it is finished receiving data? There could be two reasons for this. One is that it’s finished receiving data. For example, if our handler is a JSON parser, it might have reached the end of the object it was parsing, and so doesn’t want to receive anymore.
The other reason is that it’s encountered an error, for a JSON parser, this might be a syntax error, or if it’s sending data through to another stream, it might be an IO error on that stream. To allow our iteratee to communicate with the stream, we’re going to create a trait that represents its state. We’ll call this trait Step, and the three states that the iteratee can be in will be Cont, Done and Error. Our Cont state is going to contain our Input[Byte] => InputStreamHandler function, so that the stream can invoke it. Our Done state will contain our result (in the case of Consume, a Seq[Byte]) and our Error state will contain an error message. In addition to this, both our Done and Error states need to contain the left over input that they didn’t consume. This will be important for when we are composing iteratees together, so that once one iteratee has finished consuming input from a stream, the next can pick up where the first left off. This is one reason why we need Input.Empty, because if we did consume all the input, then we need some way to indicate that. So, here’s our Step trait:

sealed trait Step[E, +A]

object Step {
  case class Done[+A, E](a: A, remaining: Input[E]) extends Step[E, A]
  case class Cont[E, +A](k: Input[E] => InputStreamHandler[E, A]) extends Step[E, A]
  case class Error[E](msg: String, input: Input[E]) extends Step[E, Nothing]
}

The type parameter E is the type of input our iteratee wants to accept, and A is what it’s producing.
So our handler trait now looks like this:

trait InputStreamHandler[E, A] {
  def onInput[B](step: Step[E, A] => Future[B]): Future[B]
}

And our consumer is implemented like this:

class Consume(data: Seq[Byte]) extends InputStreamHandler[Byte, Seq[Byte]] {
  def onInput[B](step: Step[Byte, Seq[Byte]] => Future[B]) = step(Step.Cont({
    case Input.El(byte) => new Consume(data :+ byte)
    case Input.EOF => new InputStreamHandler[Byte, Seq[Byte]] {
      def onInput[B](cont: Step[Byte, Seq[Byte]] => Future[B]) = cont(Step.Done(data, Input.Empty))
    }
    case Input.Empty => this
  }))
}

One big difference here that you will now notice is that when we receive EOF, we actually pass Done into the step function, to say we are done consuming the input. And so now we’ve built our iteratee interface. Our naming isn’t quite right though, so we’ll rename the trait obviously to Iteratee, and we’ll rename onInput to fold, since we are folding our state into one result. And so now we get our interface:

trait Iteratee[E, +A] {
  def fold[B](folder: Step[E, A] => Future[B]): Future[B]
}

Iteratees in practice

So far we’ve started with the requirements of a traditional imperative input stream, and described what an iteratee is in contrast to that. But looking at the above code, you might think that using them is really difficult. They seem like they are far more complex than they need to be, at least conceptually, to implement reactive streams. Well, it turns out that although so far we’ve shown the basics of the iteratee interface, there is a lot more that a full iteratee API has to offer, and once we start understanding this, and using it, you will start to see how powerful, simple and useful iteratees are. So remember how iteratees are immutable? And remember how iteratees can be in one of three states, cont, done and error, and depending on which state it’s in, it will pass its corresponding step class to the folder function?
Well, if an iteratee is immutable and it can be in one of three states, then it can only ever be in that state that it’s in, and therefore it will only ever pass that corresponding step to the folder function. If an iteratee is done, it’s done, it doesn’t matter how many times you call its fold function, it will never become cont or error, and its done value will never change, it will only ever pass the Done step to the folder function with the same A value and the same left over input. Because of this, there is only one implementation of a done iteratee that we’ll ever need, it looks like this:

case class Done[E, A](a: A, e: Input[E] = Input.Empty) extends Iteratee[E, A] {
  def fold[B](folder: Step[E, A] => Future[B]): Future[B] = folder(Step.Done(a, e))
}

This is the only done iteratee you’ll ever need to indicate that you’re done. In the Consume iteratee above, when we reached EOF, we created a done iteratee using an anonymous inner class, we didn’t need to do this, we could have just used the Done iteratee above. The exact same thing holds for error iteratees:

case class Error[E](msg: String, e: Input[E]) extends Iteratee[E, Nothing] {
  def fold[B](folder: Step[E, Nothing] => Future[B]): Future[B] = folder(Step.Error(msg, e))
}

You may be surprised to find out the exact same thing applies to cont iteratees too – a cont iteratee just passes a function to the folder, and that function, because the iteratee is immutable, is never going to change.
So consequently, the following iteratee will usually be good enough for your requirements:

case class Cont[E, A](k: Input[E] => Iteratee[E, A]) extends Iteratee[E, A] {
  def fold[B](folder: Step[E, A] => Future[B]): Future[B] = folder(Step.Cont(k))
}

So let’s rewrite our consume iteratee to use these helper classes:

def consume(data: Array[Byte]): Iteratee[Byte, Array[Byte]] = Cont {
  case Input.El(byte) => consume(data :+ byte)
  case Input.EOF => Done(data)
  case Input.Empty => consume(data)
}

A CSV parser

Now we’re looking a lot simpler, our code is focussed on just handling the different types of input we could receive, and returning the correct result. So let’s start writing some different iteratees. In fact, let’s write an iteratee that parses a CSV file from a stream of characters. Our CSV parser will support optionally quoting fields, and escaping quotes with a double quote. Our first step will be to write the building blocks of our parser. First up, we want to write something that skips some kinds of white space. So let’s write a general purpose drop while iteratee:

def dropWhile(p: Char => Boolean): Iteratee[Char, Unit] = Cont {
  case in @ Input.El(char) if !p(char) => Done((), in)
  case in @ Input.EOF => Done((), in)
  case _ => dropWhile(p)
}

Since we’re just dropping input, our result is actually Unit. We return Done if the predicate doesn’t match the current char, or if we reach EOF, and otherwise, we return ourselves again. Note that when we are done, we include the input that was passed into us as the remaining data, because this is going to be needed to be consumed by the next iteratee.
Using this iteratee we can now write an iteratee that drops white space:

def dropSpaces = dropWhile(c => c == ' ' || c == '\t' || c == '\r')

Next up, we’re going to write a take while iteratee, it’s going to be a mixture between our earlier consume iteratee, carrying state between each invocation, and the drop while iteratee:

def takeWhile(p: Char => Boolean, data: Seq[Char] = IndexedSeq[Char]()): Iteratee[Char, Seq[Char]] = Cont {
  case in @ Input.El(char) => if (p(char)) {
    takeWhile(p, data :+ char)
  } else {
    Done(data, in)
  }
  case in @ Input.EOF => Done(data, in)
  case _ => takeWhile(p, data)
}

We also want to write a peek iteratee, that looks at what the next input is, without actually consuming it:

def peek: Iteratee[Char, Option[Char]] = Cont {
  case in @ Input.El(char) => Done(Some(char), in)
  case in @ Input.EOF => Done(None, in)
  case Input.Empty => peek
}

Note that our peek iteratee must return an option, since if it encounters EOF, it can’t return anything. And finally, we want a take one iteratee:

def takeOne: Iteratee[Char, Option[Char]] = Cont {
  case in @ Input.El(char) => Done(Some(char))
  case in @ Input.EOF => Done(None, in)
  case Input.Empty => takeOne
}

Using the take one iteratee, we’ll build an expect iteratee, that mandates that a certain character must appear next otherwise it throws an error:

def expect(char: Char): Iteratee[Char, Unit] = takeOne.flatMap {
  case Some(c) if c == char => Done(())
  case Some(c) => Error("Expected " + char + " but got " + c, Input.El(c))
  case None => Error("Premature end of input, expected: " + char, Input.EOF)
}

Notice the use of flatMap here. If you haven’t come across it before, in the asynchronous world, flatMap basically means ‘and then’. It applies a function to the result of the iteratee, and returns a new iteratee. In our case we’re using it to convert the result to either a done iteratee, or an error iteratee, depending on whether the result is what we expected.
flatMap is one of the fundamental mechanisms that we’ll be using to compose our iteratees together. Now with our building blocks, we are ready to start building our CSV parser. The first part of it that we’ll write is an unquoted value parser. This is very simple, we just want to take all characters that aren’t a comma or new line, with one catch. We want the result to be a String, not a Seq[Char] like takeWhile produces. Let’s see how we do that:

def unquoted = takeWhile(c => c != ',' && c != '\n').map(v => v.mkString.trim)

As you can see, we’ve used the map function to transform the end result from a sequence of characters into a String. This is another key method on iteratees that you will find useful. Our next task is to parse a quoted value. Let’s start with an implementation that doesn’t take into account escaped quotes. To parse a quoted value, we need to expect a quote, and then we need to take any value that is not a quote, and then we need to expect a quote. Notice that during that sentence I said ‘and then’ 2 times? What method can we use to do an ‘and then’? That’s right, the flatMap method that I talked about before. Let’s see what our quoted value parser looks like:

def quoted = expect('"')
  .flatMap(_ => takeWhile(_ != '"'))
  .flatMap(value => expect('"')
    .map(_ => value.mkString))

So now you can probably start to see the usefulness of flatMap. In fact it is so useful, not just for iteratees, but many other things, that Scala has a special syntax for it, called for comprehensions. Let’s rewrite the above iteratee using that:

def quoted = for {
  _ <- expect('"')
  value <- takeWhile(_ != '"')
  _ <- expect('"')
} yield value.mkString

Now at this point I hope you are getting excited. What does the above code look like? It looks like ordinary imperative synchronous code. Read this value, then read this value, then read this value. Except it’s not synchronous, and it’s not imperative. It’s functional and asynchronous.
We’ve taken our building blocks, and composed them into a piece of very readable code that makes it completely clear exactly what we are doing. Now in case you’re not 100% sure about the above syntax, the values to the left of the <- signs are the results of the iteratees to the right. These are able to be used anywhere in any subsequent lines, including in the end yield statement. Underscores are used to say we’re not interested in the value, we’re using this for the expect iteratee since that just returns Unit anyway. The statement after the yield is a map function, which gives us the opportunity to take all the intermediate values and turn them into a single result. So now that we understand that, let’s rewrite our quoted iteratee to support escaped quotes. After reading our quote, we want to peek at the next character. If it’s a quote, then we want to append the value we just read, plus a quote to our cumulated value, and recursively invoke the quoted iteratee again. Otherwise, we’ve reached the end of the value.

def quoted(value: Seq[Char] = IndexedSeq[Char]()): Iteratee[Char, String] = for {
  _ <- expect('"')
  maybeValue <- takeWhile(_ != '"')
  _ <- expect('"')
  nextChar <- peek
  value <- nextChar match {
    case Some('"') => quoted(value ++ maybeValue :+ '"')
    case _ => Done[Char, String]((value ++ maybeValue).mkString)
  }
} yield value

Now we need to write an iteratee that can parse either a quoted or unquoted value. We choose which one by peeking at the first character, and then accordingly returning the right iteratee.

def value = for {
  char <- peek
  value <- char match {
    case Some('"') => quoted()
    case None => Error[Char]("Premature end of input, expected a value", Input.EOF)
    case _ => unquoted
  }
} yield value

Let’s now parse an entire line, reading until the end of line character.
def values(state: Seq[String] = IndexedSeq[String]()): Iteratee[Char, Seq[String]] = for {
  _        <- dropSpaces
  value    <- value
  _        <- dropSpaces
  nextChar <- takeOne
  values   <- nextChar match {
    case Some('\n') | None => Done[Char, Seq[String]](state :+ value)
    case Some(',')         => values(state :+ value)
    case Some(other)       => Error("Expected comma, newline or EOF, but found " + other, Input.El(other))
  }
} yield values

Enumeratees

Now, in a similar way to how we parsed the values, we could also parse each line of a CSV file until we reach EOF. But this time we’re going to do something a little different. We’ve seen how we can sequence iteratees using flatMap, but there are further possibilities for composing iteratees. Another concept in iteratees is enumeratees. Enumeratees adapt a stream to be consumed by an iteratee. The simplest enumeratees simply map the input values of the stream to something else. So, for example, here’s an enumeratee that converts a stream of strings to a stream of ints:

def toInt: Enumeratee[String, Int] = Enumeratee.map[String](_.toInt)

One of the methods on Enumeratee is transform. We can use this method to apply an enumeratee to an iteratee:

val someIteratee: Iteratee[Int, X] = ...
val adaptedIteratee: Iteratee[String, X] = toInt.transform(someIteratee)

This method is also aliased to an operator, &>>, so the code below is equivalent to the code above:

val adaptedIteratee: Iteratee[String, X] = toInt &>> someIteratee

We can also make an enumeratee out of another iteratee, and this is exactly what we’re going to do with our values iteratee. The Enumeratee.grouped method takes an iteratee and applies it to the stream over and over, the result of each application becoming an input to feed into the iteratee that will be transformed. Let’s have a look:

def csv = Enumeratee.grouped(values())

Now let’s get a little bit more creative with enumeratees. Let’s say that our CSV file is very big, so we don’t want to load it into memory.
Each line is a series of 3 integer columns, and we want to sum each column. So, let’s define an enumeratee that converts each set of values to integers:

def toInts = Enumeratee.map[Seq[String]](_.map(_.toInt))

And another enumeratee to convert the sequence to a 3-tuple:

def toThreeTuple = Enumeratee.map[Seq[Int]](s => (s(0), s(1), s(2)))

And finally an iteratee to sum them:

def sumThreeTuple(a: Int = 0, b: Int = 0, c: Int = 0): Iteratee[(Int, Int, Int), (Int, Int, Int)] = Cont {
  case Input.El((x, y, z)) => sumThreeTuple(a + x, b + y, c + z)
  case Input.Empty         => sumThreeTuple(a, b, c)
  case in @ Input.EOF      => Done((a, b, c), in)
}

Now to put them all together. There is another method on Enumeratee called compose, which, you guessed it, lets you compose enumeratees. It has an alias operator, ><>. Let’s use it:

val processCsvFile = csv ><> toInts ><> toThreeTuple &>> sumThreeTuple()

Enumerators

Finally, if an iteratee consumes a stream, what produces a stream? The answer is an enumerator. An enumerator can be applied to an iteratee using its apply method, which is also aliased to >>>. This will leave the iteratee in a cont state, ready to receive more input. If however the enumerator contains the entirety of the stream, then the run method can be used instead, which will send the iteratee an EOF once it’s finished. This is aliased to |>>>. The Play enumerator API makes it easy to create an enumerator by passing a sequence of inputs to the Enumerator companion object’s apply method. So, we can create an enumerator of characters using the following code:

val csvFile = Enumerator(
  """1,2,3
    |4,5,6""".stripMargin.toCharArray:_*)

And we can feed this into our iteratee like so:

val result = csvFile |>>> processCsvFile

And our result in this case will be a future that is eventually redeemed with (5, 7, 9).
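To sanity-check that (5, 7, 9) is indeed the expected result, here is the same column-sum computation written with plain Scala collections. Unlike the iteratee pipeline, this sketch is synchronous and loads everything into memory, but it computes the same answer:

```scala
// Parse the two CSV lines and sum each of the three integer columns,
// using ordinary collections instead of iteratees.
val csvText = "1,2,3\n4,5,6"
val rows: Seq[Seq[Int]] =
  csvText.split('\n').toSeq.map(_.split(',').toSeq.map(_.trim.toInt))
val sums: (Int, Int, Int) = rows.foldLeft((0, 0, 0)) {
  case ((a, b, c), row) => (a + row(0), b + row(1), c + row(2))
}
// sums == (5, 7, 9)
```

The iteratee version buys us incremental, asynchronous consumption of the stream; the fold above is only the in-memory reference computation.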
Conclusion

Well, it’s been a long journey, but hopefully, if you’re an imperative programmer, you now not only understand iteratees, but also the reasoning behind their design and how easily they compose. I also hope you have a better understanding of both functional and asynchronous programming in general. The functional mindset is quite different to the imperative mindset, and I’m still getting my head around it, but particularly after seeing how nice and simple iteratees can be to work with (once you understand them), I’m becoming convinced that functional programming is the way to go. If you are interested in downloading the code from this blog post, or if you want to see a more complex JSON parsing iteratee/enumeratee, check out this GitHub project, which has a few examples, including parsing byte/character streams in array chunks, rather than one character at a time.

Reference: Iteratees for imperative programmers from our JCG partner James Roper at the James and Beth Roper’s blogs blog.

Tomcat Clustering Series Part 1 : Simple Load Balancer

I am going to start a new series of posts about Tomcat clustering. In this post we will see what the problem is with a normal deployment on a single machine, what clustering is and why it is necessary, and how to set up a simple load balancer with the Apache httpd web server in front of a Tomcat server cluster.

Why do we need Clustering? (Tomcat Clustering)

In a normal production setup, the server runs on a single machine. If that machine fails due to a crash, hardware defects or an OutOfMemory error, then our web site can’t be accessed by anybody. So, how do we solve this problem? By adding more Tomcat instances that collectively (as a group/cluster) run as one production server, as opposed to a single Tomcat instance. Each Tomcat instance deploys the same web application, so any Tomcat instance can process a client request. If one Tomcat instance fails, another Tomcat in the cluster processes the request instead. But there is one big problem with that approach. Each Tomcat instance either runs on a dedicated physical machine, or many Tomcat instances run on a single machine (check my post about running multiple Tomcat instances on a single machine). So each Tomcat runs on a different port and possibly a different IP. The problem, from the client's perspective, is which Tomcat to send the request to, since there are lots of Tomcat instances in the cluster, each with its own IP and port combination. How do we solve this problem? By adding one server in front of all the Tomcat instances to accept all the requests and distribute them across the cluster. That server acts as a load balancer. There are lots of servers available with load balancing capabilities. Here we are going to use the Apache httpd web server as a load balancer with the mod_jk module. Now all clients access the load balancer (the Apache httpd web server) and don't have to bother about the Tomcat instances, so our URL is http://ramkitech.com/ (Apache runs on port 80).
Apache httpd Web Server

Here we are going to use the Apache httpd web server as a load balancer. To use the load balancing capabilities of the Apache httpd server we need to include either the mod_proxy module or the mod_jk module. Here we are using the mod_jk module. Before continuing, check my old post (Virtual Host Apache httpd server) about how to install the Apache httpd server and the mod_jk module, and how to configure mod_jk.

How to set up the Simple Load Balancer

For simplicity, I am going to run 3 Tomcat instances on a single machine (we could run them on dedicated machines as well) with the Apache httpd web server, and the same web application is deployed in all Tomcat instances. Here we use the mod_jk module as the load balancer. By default it uses the round robin algorithm to distribute the requests. Now we need to configure the workers.properties file, similar to the virtual host concept in the Apache httpd server.

worker.list=tomcat1,tomcat2,tomcat3

worker.tomcat1.type=ajp13
worker.tomcat1.port=8009
worker.tomcat1.host=localhost

worker.tomcat2.type=ajp13
worker.tomcat2.port=8010
worker.tomcat2.host=localhost

worker.tomcat3.type=ajp13
worker.tomcat3.port=8011
worker.tomcat3.host=localhost

Here I configured the 3 Tomcat instances in the workers.properties file. The type is ajp13, the port is the AJP port (not the HTTP connector port), and the host is the IP address of our machine. There are a couple of special workers we need to add to the workers.properties file. The first one is the load balancer worker; here its name is balancer (you can use any name you want).

worker.balancer.type=lb
worker.balancer.balance_workers=tomcat1,tomcat2,tomcat3

This worker’s type is lb, i.e. load balancer; it is a special type provided for load balancing. The other property is balance_workers, used to specify all the Tomcat instances, like tomcat1,tomcat2,tomcat3 (comma separated). The second one is the status worker. It’s optional, but from this worker we can get statistics of the load balancer.
worker.stat.type=status

Here we use the special type status. Now we modify the worker.list property.

worker.list=balancer,stat

So from the outside, there are 2 workers that are visible (balancer and stat). All requests go to the balancer, and the balancer worker manages all the Tomcat instances. The complete workers.properties:

worker.list=balancer,stat

worker.tomcat1.type=ajp13
worker.tomcat1.port=8009
worker.tomcat1.host=localhost

worker.tomcat2.type=ajp13
worker.tomcat2.port=8010
worker.tomcat2.host=localhost

worker.tomcat3.type=ajp13
worker.tomcat3.port=8011
worker.tomcat3.host=localhost

worker.balancer.type=lb
worker.balancer.balance_workers=tomcat1,tomcat2,tomcat3

worker.stat.type=status

Now the workers.properties configuration is finished. Next, we need to send all requests to the balancer worker, so we need to modify the httpd.conf file of the Apache httpd server.

LoadModule jk_module modules/mod_jk.so

JkWorkersFile conf/workers.properties

JkLogFile logs/mod_jk.log
JkLogLevel emerg
JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkRequestLogFormat "%w %V %T"

JkMount /status stat
JkMount / balancer

The above is mostly boilerplate. The 1st line loads the mod_jk module and the 2nd line specifies the worker file (the workers.properties file). Most of the others are just for logging purposes. But the last 2 lines are very important. JkMount /status stat means that any request matching /status is forwarded to the stat worker; it’s the status type worker, so it shows the status of the load balancer. JkMount / balancer matches all requests, so all requests are forwarded to the balancer worker. The balancer worker uses the round robin algorithm to distribute the requests to the Tomcat instances. That’s it. Now access the load balancer from the browser. Each and every request is distributed across the 3 Tomcat instances.
If one of the Tomcat instances fails, the load balancer dynamically detects this and stops forwarding requests to that failed Tomcat instance. The other Tomcat instances continue to work. If the failed Tomcat recovers from the failed state to the normal state, the load balancer adds it back to the cluster and forwards requests to that Tomcat as well (check the video). The big question here is: how does the load balancer know when a Tomcat instance fails, or when a Tomcat has just recovered from the failed state?

Answer: when one Tomcat instance fails, the load balancer doesn’t initially know that the instance failed, so it still tries to forward requests to all Tomcat instances. When the load balancer forwards a request to the failed Tomcat instance, it gets no response, so it marks the state of that instance as failed and forwards the same request to another Tomcat instance. From the client's perspective, we don’t notice that one Tomcat instance has failed. When a Tomcat recovers from the failed state, again the load balancer doesn’t know that the Tomcat is ready for processing; it is still marked as failed. At periodic intervals (by default 60 seconds), the load balancer checks the health status of all Tomcat instances, and after the health check it updates the status of a recovered instance to OK. Please share your thoughts in the comments.

Video: http://www.youtube.com/watch?feature=player_embedded&v=9gtpyqhd-NI

Reference: Tomcat Clustering Series Part 1 : Simple Load Balancer from our JCG partner Rama Krishnan at the Ramki Java Blog blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.