
What's New Here?


Spring 3.1 profiles and Tomcat configuration

Spring 3.1 introduced a very useful feature called profiles. Thanks to it, it is easy to build one package that can be deployed in all environments (development, test, production and so on). By defining the system property spring.profiles.active, Spring allows us to create different beans depending on the active profile name, using XML configuration or the @Profile annotation. As we all know, system properties can be used in Spring XML files, and we will take advantage of that. In this post I will show how to use Spring profiles to create one package for all environments and how to run it on Apache Tomcat.

Example architecture. I think the most common and desirable architecture is one where the applications deployed on dev, test and production differ only in the properties file containing the configuration. The WAR contains the configuration for all environments and the correct one is chosen at runtime. So it is best if the application resources contain files like:

src/main/resources
- config_dev.properties
- config_production.properties
...

Configuring the Spring property placeholder. In order to load properties files in Spring we use <context:property-placeholder /> or the @PropertySource annotation. In my example I will follow the XML configuration approach for loading the properties file:

<?xml version='1.0' encoding='UTF-8'?>
<beans xmlns='http://www.springframework.org/schema/beans'
       xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
       xmlns:context='http://www.springframework.org/schema/context'
       xsi:schemaLocation='http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd'>
    <context:property-placeholder location='classpath:config_${spring.profiles.active}.properties' />
</beans>

Configuring Tomcat. Now it is time to tell Tomcat which profile is active. There are at least two ways to do that:
- defining a context param in web.xml – this breaks the "one package for all environments" idea, so I don't recommend it
- defining the system property -Dspring.profiles.active=your-active-profile

I believe that defining a system property is the much better approach. So how do we define a system property for Tomcat? All over the internet the usual advice is "modify catalina.sh", because you will not find any configuration file for doing this kind of thing. Modifying catalina.sh is a dirty, unmaintainable solution. There is a better way: just create a file setenv.sh in Tomcat's bin directory with the content:

JAVA_OPTS="$JAVA_OPTS -Dspring.profiles.active=dev"

(double quotes, so that the existing $JAVA_OPTS value is expanded) and it will be picked up automatically when you run catalina.sh start or run.

Conclusion. Using Spring profiles we can create flexible applications that can be deployed in several environments. How is this different from the Maven profiles approach? With Maven, the person building the application had to decide at build time which environment it was meant to run in. With the approach described above, the environment itself decides whether it is development, testing or production. Thanks to that we can use exactly the same WAR file and deploy it everywhere. Reference: Spring 3.1 profiles and Tomcat configuration from our JCG partner Maciej Walkowiak at the Software Development Journey blog....
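As a side note to the profiles article above, the @Profile annotation it mentions can achieve the same per-environment selection without XML. A minimal sketch, assuming the same property files as in the article (the configuration class names are made up for illustration):

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;
import org.springframework.context.annotation.PropertySource;

// Registered only when the 'dev' profile is active
@Configuration
@Profile("dev")
@PropertySource("classpath:config_dev.properties")
class DevConfig {
}

// Registered only when the 'production' profile is active
@Configuration
@Profile("production")
@PropertySource("classpath:config_production.properties")
class ProductionConfig {
}

With -Dspring.profiles.active=dev set as described above, only DevConfig (and therefore config_dev.properties) is picked up at startup.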

Chain of Responsibility Pattern in Java

The Chain of Responsibility design pattern is needed when a few processors should exist for performing an operation and a particular order should be defined for those processors. The ability to change the order of the processors at runtime is also important. The UML representation of the pattern is as below:

Handler defines the general structure of the processor objects. 'HandleRequest' here is the abstract processor method. Handler also holds a reference of its own type, which represents the next handler; for this a public 'setNextHandler' method should be defined, and the Handler itself is an abstract class. ConcreteHandlers define the different implementations of the processors. Finally, the Client is responsible for creating the required handlers (processors) and defining a chain order between them.

Generally two different implementations may exist for this pattern. The difference lies in the location of the chain routing business logic: it may live either in the Handler abstract class, in the ConcreteHandler classes, or in both of them. Samples of the first two approaches are given below:

1. 'Handler' has the chain routing business logic:

public abstract class Processor {
    protected Processor next;
    protected int threshold;

    public void setNextProcessor(Processor p) {
        next = p;
    }

    public void process(String data, int value) {
        if (value <= threshold) {
            processData(data);
        }
        if (next != null) {
            next.process(data, value);
        }
    }

    abstract protected void processData(String data);
}

public class ProcessorA extends Processor {

    public ProcessorA(int threshold) {
        this.threshold = threshold;
    }

    protected void processData(String data) {
        System.out.println("Processing with A: " + data);
    }
}

public class ProcessorB extends Processor {

    public ProcessorB(int threshold) {
        this.threshold = threshold;
    }

    protected void processData(String data) {
        System.out.println("Processing with B: " + data);
    }
}

public class Client {
    public static void main(String[] args) {
        Processor p, p1, p2;
        p1 = p = new ProcessorA(2);
        p2 = new ProcessorB(1);
        p1.setNextProcessor(p2);

        // Handled by ProcessorA
        p.process("data1", 2);
        // Handled by ProcessorA and ProcessorB
        p.process("data2", 1);
    }
}

2. 'ConcreteHandler's have the chain routing business logic:

public abstract class Processor {
    protected Processor next;
    protected int threshold;

    public void setNextProcessor(Processor p) {
        next = p;
    }

    abstract public void processData(String data, int value);
}

public class ProcessorA extends Processor {

    public ProcessorA(int threshold) {
        this.threshold = threshold;
    }

    public void processData(String data, int value) {
        System.out.println("Processing with A: " + data);
        if (value >= threshold && next != null) {
            next.processData(data, value);
        }
    }
}

public class ProcessorB extends Processor {

    public ProcessorB(int threshold) {
        this.threshold = threshold;
    }

    public void processData(String data, int value) {
        System.out.println("Processing with B: " + data);
        if (value >= threshold && next != null) {
            next.processData(data, value);
        }
    }
}

public class Client {
    public static void main(String[] args) {
        Processor p, p1, p2;
        p1 = p = new ProcessorA(2);
        p2 = new ProcessorB(1);
        p1.setNextProcessor(p2);

        // Handled by ProcessorA
        p.processData("data1", 1);
        // Handled by ProcessorA and ProcessorB
        p.processData("data2", 2);
    }
}

Reference: 2 Implementations of "Chain of Responsibility" Pattern with Java from our JCG partner Cagdas Basaraner at the CodeBuild blog....

Concurrency – Executors and Spring Integration

Thread Pool/Executors Based Implementation

A better approach than the raw thread version is a thread pool based one, where an appropriate thread pool size is defined based on the system where the task is running – Number of CPUs / (1 - Blocking Coefficient of Task). Venkat Subramaniam's book has more details.

First I defined a custom task to generate the Report Part, given the Report Part Request; this is implemented as a Callable:

public class ReportPartRequestCallable implements Callable<ReportPart> {
    private final ReportRequestPart reportRequestPart;
    private final ReportPartGenerator reportPartGenerator;

    public ReportPartRequestCallable(ReportRequestPart reportRequestPart, ReportPartGenerator reportPartGenerator) {
        this.reportRequestPart = reportRequestPart;
        this.reportPartGenerator = reportPartGenerator;
    }

    @Override
    public ReportPart call() {
        return this.reportPartGenerator.generateReportPart(reportRequestPart);
    }
}

public class ExecutorsBasedReportGenerator implements ReportGenerator {
    private static final Logger logger = LoggerFactory.getLogger(ExecutorsBasedReportGenerator.class);

    private ReportPartGenerator reportPartGenerator;

    private ExecutorService executors = Executors.newFixedThreadPool(10);

    @Override
    public Report generateReport(ReportRequest reportRequest) {
        List<Callable<ReportPart>> tasks = new ArrayList<Callable<ReportPart>>();
        List<ReportRequestPart> reportRequestParts = reportRequest.getRequestParts();
        for (ReportRequestPart reportRequestPart : reportRequestParts) {
            tasks.add(new ReportPartRequestCallable(reportRequestPart, reportPartGenerator));
        }

        List<Future<ReportPart>> responseForReportPartList;
        List<ReportPart> reportParts = new ArrayList<ReportPart>();
        try {
            responseForReportPartList = executors.invokeAll(tasks);
            for (Future<ReportPart> reportPartFuture : responseForReportPartList) {
                reportParts.add(reportPartFuture.get());
            }
        } catch (Exception e) {
            logger.error(e.getMessage(), e);
            throw new RuntimeException(e);
        }
        return new Report(reportParts);
    }
    ......
}

Here a thread pool is created using the Executors.newFixedThreadPool(10) call, with a pool size of 10; a callable task is generated for each of the report request parts and handed over to the thread pool using the ExecutorService abstraction:

responseForReportPartList = executors.invokeAll(tasks);

This call returns a list of Futures, each of which supports a get() method that blocks until the response is available. This is clearly a much better implementation compared to the raw thread version; the number of threads is constrained to a manageable number under load.

Spring Integration Based Implementation

The approach that I personally like the most is using Spring Integration. The reason is that with Spring Integration I focus on the components doing the different tasks and leave it up to Spring Integration to wire the flow together, using an XML-based or annotation-based configuration. Here I will be using an XML-based configuration. The components in my case are:

1. The component to generate the report part, given the report part request, which I had shown earlier.
2. A component to split the report request into report request parts:

public class DefaultReportRequestSplitter implements ReportRequestSplitter {
    @Override
    public List<ReportRequestPart> split(ReportRequest reportRequest) {
        return reportRequest.getRequestParts();
    }
}
3. A component to assemble/aggregate the report parts into a whole report:

public class DefaultReportAggregator implements ReportAggregator {

    @Override
    public Report aggregate(List<ReportPart> reportParts) {
        return new Report(reportParts);
    }
}

And that is all the Java code that is required with Spring Integration; the rest is wiring. Here I have used a Spring Integration configuration file:

<?xml version='1.0' encoding='UTF-8'?>
<beans ....

<int:channel id='report.partsChannel'/>
<int:channel id='report.reportChannel'/>
<int:channel id='report.partReportChannel'>
    <int:queue capacity='50'/>
</int:channel>
<int:channel id='report.joinPartsChannel'/>

<int:splitter id='splitter' ref='reportsPartSplitter' method='split' input-channel='report.partsChannel' output-channel='report.partReportChannel'/>

<task:executor id='reportPartGeneratorExecutor' pool-size='10' queue-capacity='50' />

<int:service-activator id='reportsPartServiceActivator' ref='reportPartReportGenerator' method='generateReportPart' input-channel='report.partReportChannel' output-channel='report.joinPartsChannel'>
    <int:poller task-executor='reportPartGeneratorExecutor' fixed-delay='500'>
    </int:poller>
</int:service-activator>

<int:aggregator ref='reportAggregator' method='aggregate' input-channel='report.joinPartsChannel' output-channel='report.reportChannel' ></int:aggregator>

<int:gateway id='reportGeneratorGateway' service-interface='org.bk.sisample.springintegration.ReportGeneratorGateway' default-request-channel='report.partsChannel' default-reply-channel='report.reportChannel'/>

<bean name='reportsPartSplitter' class='org.bk.sisample.springintegration.processors.DefaultReportRequestSplitter'></bean>
<bean name='reportPartReportGenerator' class='org.bk.sisample.processors.DummyReportPartGenerator'/>
<bean name='reportAggregator' class='org.bk.sisample.springintegration.processors.DefaultReportAggregator'/>
<bean name='reportGenerator' class='org.bk.sisample.springintegration.SpringIntegrationBasedReportGenerator'/>
</beans>

SpringSource Tool Suite provides a great way of visualizing this file, and it matches perfectly with my original view of the user flow. In the Spring Integration version of the code, I have defined the different components to handle the different parts of the flow:

1. A splitter to convert a report request to report request parts:

<int:splitter id='splitter' ref='reportsPartSplitter' method='split' input-channel='report.partsChannel' output-channel='report.partReportChannel'/>

2. A service activator component to generate a report part from a report part request:

<int:service-activator id='reportsPartServiceActivator' ref='reportPartReportGenerator' method='generateReportPart' input-channel='report.partReportChannel' output-channel='report.joinPartsChannel'>
    <int:poller task-executor='reportPartGeneratorExecutor' fixed-delay='500'>
    </int:poller>
</int:service-activator>

3.
An aggregator to join the report parts back to a report, and is intelligent enough to correlate the original split report requests appropriately without any explicit coding required for it: <int:aggregator ref='reportAggregator' method='aggregate' input-channel='report.joinPartsChannel' output-channel='report.reportChannel' ></int:aggregator> What is interesting in this code is that, like in the executors based sample, the number of threads that services each of these components is completely configurable using the xml file, by using appropriate channels to connect the different components together and by using task executors with the thread pool size set as attribute of the executor. In this code, I have defined a queue channel where the report request parts come in: <int:channel id='report.partReportChannel'> <int:queue capacity='50'/> </int:channel> and is serviced by the service activator component, using a task executor with a thread pool of size 10, and a capacity of 50: <task:executor id='reportPartGeneratorExecutor' pool-size='10' queue-capacity='50' /> <int:service-activator id='reportsPartServiceActivator' ref='reportPartReportGenerator' method='generateReportPart' input-channel='report.partReportChannel' output-channel='report.joinPartsChannel'> <int:poller task-executor='reportPartGeneratorExecutor' fixed-delay='500'> </int:poller> </int:service-activator> All this through configuration! The entire codebase for this sample is available at this github location: https://github.com/bijukunjummen/si-sample Reference: Concurrency – Executors and Spring Integration from our JCG partner Biju Kunjummen at the all and sundry blog....
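One piece the configuration above references but the article does not show is the gateway's service interface. Below is a minimal sketch of what org.bk.sisample.springintegration.ReportGeneratorGateway could look like – the method name is an assumption here; the actual interface is in the linked GitHub repository:

package org.bk.sisample.springintegration;

// Report and ReportRequest are the domain types used throughout the article.
public interface ReportGeneratorGateway {
    // Spring Integration proxies this interface: the argument is sent to
    // report.partsChannel and the aggregated Report comes back from
    // report.reportChannel, as configured on the <int:gateway> element.
    Report generateReport(ReportRequest reportRequest);
}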

Bye, Bye, 5 * 60 * 1000 //Five Minutes, Bye, Bye

In this post I am going to talk about a class that was first introduced in Java 1.5, that I have used a lot, but which, talking with some people, I realized they didn't know exists. This class is TimeUnit. The TimeUnit class represents time durations at a given unit of granularity and also provides utility methods to convert between different units, and methods to perform timing delays. TimeUnit is an enum with seven levels of granularity: DAYS, HOURS, MICROSECONDS, MILLISECONDS, MINUTES, NANOSECONDS and SECONDS.

The first feature that I find useful is the convert method. With this method you can say goodbye to the typical:

private static final int FIVE_SECONDS_IN_MILLIS = 1000 * 5;

in favour of something like:

long duration = TimeUnit.MILLISECONDS.convert(5, TimeUnit.SECONDS);

Equivalent operations also exist as more readable methods. For example, the same conversion could be expressed as:

long duration = TimeUnit.SECONDS.toMillis(5);

The second really useful set of operations are those related to stopping the current thread. For example, you can sleep the current thread with:

TimeUnit.MINUTES.sleep(5);

instead of:

Thread.sleep(5*60*1000);

But you can also use it with join and wait with a timeout:

Thread t = new Thread();
TimeUnit.SECONDS.timedJoin(t, 5);

So as we can see, the TimeUnit class is designed with expressiveness in mind: you can do the same as before but in a more readable way. Notice that you can also use static imports and the code will be even more readable. Reference: Bye, Bye, 5 * 60 * 1000 //Five Minutes, Bye, Bye from our JCG partner Alex Soto at the One Jar To Rule Them All blog....
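As a small addition to the TimeUnit examples above, the wait variant mentioned there is timedWait, which wraps Object.wait(long) and must likewise be called while holding the object's monitor. A minimal sketch (the class and field names are made up):

import java.util.concurrent.TimeUnit;

class TimedWaitSample {
    private final Object lock = new Object();

    void awaitSignal() throws InterruptedException {
        synchronized (lock) {
            // equivalent to lock.wait(5 * 1000), but the time unit is explicit
            TimeUnit.SECONDS.timedWait(lock, 5);
        }
    }
}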

Using Redis with Spring

As NoSQL solutions are getting more and more popular for many kinds of problems, modern projects more often consider using some (or several) NoSQL stores instead of (or side-by-side with) a traditional RDBMS. I have already covered my experience with MongoDB in a few previous posts. In this post I would like to switch gears a bit towards Redis, an advanced key-value store. Aside from very rich key-value semantics, Redis also supports pub-sub messaging and transactions. In this post I am just going to touch the surface and demonstrate how simple it is to integrate Redis into your Spring application. As always, we will start with the Maven POM file for our project:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example.spring</groupId>
    <artifactId>redis</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring.version>3.1.0.RELEASE</spring.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.data</groupId>
            <artifactId>spring-data-redis</artifactId>
            <version>1.0.0.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>cglib</groupId>
            <artifactId>cglib-nodep</artifactId>
            <version>2.2</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.16</version>
        </dependency>
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.0.0</version>
            <type>jar</type>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>
</project>

Spring Data Redis is another project under the Spring Data umbrella which provides seamless integration of Redis into your application. There are several Redis clients for Java, and I have chosen Jedis as it is stable and recommended by the Redis team at the time of writing this post. We will start with a simple configuration and introduce the necessary components first. Then, as we move forward, the configuration will be extended a bit to demonstrate pub-sub capabilities.
Thanks to Java config support, we will create the configuration class and have all our dependencies strongly typed – no XML anymore:

package com.example.redis.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericToStringSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class AppConfig {
    @Bean
    JedisConnectionFactory jedisConnectionFactory() {
        return new JedisConnectionFactory();
    }

    @Bean
    RedisTemplate< String, Object > redisTemplate() {
        final RedisTemplate< String, Object > template = new RedisTemplate< String, Object >();
        template.setConnectionFactory( jedisConnectionFactory() );
        template.setKeySerializer( new StringRedisSerializer() );
        template.setHashValueSerializer( new GenericToStringSerializer< Object >( Object.class ) );
        template.setValueSerializer( new GenericToStringSerializer< Object >( Object.class ) );
        return template;
    }
}

That's basically everything we need, assuming we have a single Redis server up and running on localhost with the default configuration. Let's consider several common use cases: setting a key to some value, storing an object and, finally, a pub-sub implementation. Storing and retrieving a key/value pair is very simple:

@Autowired private RedisTemplate< String, Object > template;

public Object getValue( final String key ) {
    return template.opsForValue().get( key );
}

public void setValue( final String key, final String value ) {
    template.opsForValue().set( key, value );
}

Optionally, the key could be set to expire (yet another useful feature of Redis), e.g. let our keys expire in 1 second:

public void setValue( final String key, final String value ) {
    template.opsForValue().set( key, value );
    template.expire( key, 1, TimeUnit.SECONDS );
}

Arbitrary objects could be saved into Redis as hashes (maps); e.g. let's save an instance of some class User

public class User {
    private final Long id;
    private String name;
    private String email;

    // Setters and getters are omitted for simplicity
}

into Redis using the key pattern “user:<id>”:

public void setUser( final User user ) {
    final String key = String.format( "user:%s", user.getId() );
    final Map< String, Object > properties = new HashMap< String, Object >();

    properties.put( "id", user.getId() );
    properties.put( "name", user.getName() );
    properties.put( "email", user.getEmail() );

    template.opsForHash().putAll( key, properties);
}

Conversely, the object could easily be inspected and retrieved using the id:

public User getUser( final Long id ) {
    final String key = String.format( "user:%s", id );

    final String name = ( String )template.opsForHash().get( key, "name" );
    final String email = ( String )template.opsForHash().get( key, "email" );

    return new User( id, name, email );
}

There is much, much more that could be done using Redis; I highly encourage you to take a look at it. It surely is not a silver bullet, but it could solve many challenging problems very easily. Finally, let me show how to use pub-sub messaging with Redis.
Let's add a bit more configuration here (as part of the AppConfig class):

@Bean
MessageListenerAdapter messageListener() {
    return new MessageListenerAdapter( new RedisMessageListener() );
}

@Bean
RedisMessageListenerContainer redisContainer() {
    final RedisMessageListenerContainer container = new RedisMessageListenerContainer();

    container.setConnectionFactory( jedisConnectionFactory() );
    container.addMessageListener( messageListener(), new ChannelTopic( "my-queue" ) );

    return container;
}

The style of message listener definition should look very familiar to Spring users: generally, it is the same approach we follow to define JMS message listeners. The missing piece is our RedisMessageListener class definition:

package com.example.redis.impl;

import org.springframework.data.redis.connection.Message;
import org.springframework.data.redis.connection.MessageListener;

public class RedisMessageListener implements MessageListener {
    @Override
    public void onMessage(Message message, byte[] paramArrayOfByte) {
        System.out.println( "Received by RedisMessageListener: " + message.toString() );
    }
}

Now that we have our message listener, let's see how we could push some messages onto the channel using Redis. As always, it's pretty simple (note that the channel name must match the 'my-queue' topic the listener container is subscribed to):

@Autowired private RedisTemplate< String, Object > template;

public void publish( final String message ) {
    template.execute(
        new RedisCallback< Long >() {
            @SuppressWarnings( "unchecked" )
            @Override
            public Long doInRedis( RedisConnection connection ) throws DataAccessException {
                return connection.publish(
                    ( ( RedisSerializer< String > )template.getKeySerializer() ).serialize( "my-queue" ),
                    ( ( RedisSerializer< Object > )template.getValueSerializer() ).serialize( message ) );
            }
        }
    );
}

That's basically it for a very quick introduction, but definitely enough to fall in love with Redis. Reference: Using Redis with Spring from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog....

Apache Bigtop – Installing Hadoop

Ah!! The name is everywhere, carried with the wind. Apache Hadoop!! The BIG DATA crunching platform! We all know how alien it can be at the start too! Phew!! :o

From my personal experience: nearly 11 months ago I was trying to install HBase and I faced a few issues! The problem was version compatibility, e.g. "HBase some x.version" with "Hadoop some y.version". This is a real issue because you will never know which package of which version blends well with another, unless someone has tested it. This testing in turn depends on the environment where it has been set up, which could be another issue. There was a pressing demand for the management of distributions, and then came an open source project which attempts to create a fully integrated and tested Big Data management distribution: "Apache Bigtop".

Goals of Apache Bigtop:
- Packaging
- Deployment
- Integration testing
of all the sub-projects of Hadoop. This project aims at the system as a whole, rather than the individual projects.

I love the way Doug Cutting put it in his keynote back then, wherein he expressed the similarity between Hadoop and the Linux kernel, and the corresponding similarity between the big stack of Hadoop (Hive, HBase, Pig, Avro, etc.) and fully operational operating systems with their distributions (RedHat, Ubuntu, Fedora, Debian etc.). This is an awesome analogy! :)

Life is made easy with Bigtop: Bigtop Hadoop distribution artifacts won't make you feel that you live in an alien world! After installing, you will get a chance to blend a Hadoop cluster in any mode, with its sub-projects. It's all for you to garnish next! :)

Setup of Bigtop and installing Hadoop: It's time to welcome all your packages home. [I also mean /home/..] ;) I've tested this on Ubuntu 11.04 and here goes a quick and easy installation process.

Step 1: Install the GNU Privacy Guard key, a key management system to access all public key directories.
wget -O- http://www.apache.org/dist/incubator/bigtop/bigtop-0.3.0-incubating/repos/GPG-KEY-bigtop | sudo apt-key add -

Step 2: Get the repo file from the link http://www.apache.org/dist/incubator/bigtop/bigtop-0.3.0-incubating/repos/ubuntu/bigtop.list
sudo wget -O /etc/apt/sources.list.d/bigtop.list http://www.apache.org/dist/incubator/bigtop/bigtop-0.3.0-incubating/repos/ubuntu/bigtop.list
sudo gedit /etc/apt/sources.list.d/bigtop.list
Uncomment one of the mirror links nearby. The first link worked for me:
deb http://apache.01link.hk/incubator/bigtop/stable/repos/ubuntu/ bigtop contrib

Step 3: Update the apt cache
sudo apt-get update

Step 4: Check the artifacts
sudo apt-cache search hadoop
(Image: search in the apt cache)

Step 5: Set your JAVA_HOME
export JAVA_HOME=path_to_your_Java
Add the export to ~/.bashrc to make it permanent.

Step 6: Install the complete Hadoop stack
sudo apt-get install hadoop\*

Running Hadoop:

Step 1: Format the namenode
sudo -u hdfs hadoop namenode -format
(Image: formatting the namenode)

Step 2: Start the Namenode, Datanode, Jobtracker and Tasktracker of Hadoop
for i in hadoop-namenode hadoop-datanode hadoop-jobtracker hadoop-tasktracker ; do sudo service $i start ; done
Now, the cluster is up and running.
(Image: start all the services)

Step 3: Create a new directory in HDFS
sudo -u hdfs hadoop fs -mkdir /user/bigtop
bigtop is the directory name under /user; make the current user ($USER) its owner:
sudo -u hdfs hadoop fs -chown $USER /user/bigtop
(Image: create a directory in HDFS)

Step 4: List the directories in the file system
hadoop fs -lsr /
(Image: HDFS directories)

Step 5: Run the sample pi example
hadoop jar /usr/lib/hadoop/hadoop-examples.jar pi 10 1000
(Image: running a sample program)

Job completed! Enjoy your cluster! :) We shall see what more blending could be done with Hadoop (with Hive, HBase, etc.) in the next post! Until then, Happy Learning!! :):) Reference: Hadoop Hangover : Introduction To Apache Bigtop and Playing With It (Installing Hadoop)! from our JCG partner Swathi V at the * Techie(S)pArK * blog....

Findbugs Warnings By Sample

The FindBugs™ bug descriptions of the online documentation are concise and well written. Unfortunately, some parts of the descriptions are not easy to understand (even for experienced Java developers). It can be difficult to understand the exact root cause for a warning and/or to find the correct way of fixing. To be honest – at least I had problems with some warnings in the last years. Quite often I found no helping sample code in the web. The main weakness of the bug descriptions is, that it uses seldom sample code to demonstrate the wrong and correct situation. This is my motivation to publish in the next weeks a series of articles with simple code samples of selected warnings. The following 10 warnings are covered in this article:BC_IMPOSSIBLE_CAST BC_IMPOSSIBLE_DOWNCAST BC_IMPOSSIBLE_INSTANCEOF BC_IMPOSSIBLE_DOWNCAST_OF_TOARRAY DMI_BIGDECIMAL_CONSTRUCTED_FROM_DOUBLE ES_COMPARING_STRINGS_WITH_EQ VA_FORMAT_STRING_ILLEGAL RV_RETURN_VALUE_IGNORED NP_ALWAYS_NULL QBA_QUESTIONABLE_BOOLEAN_ASSIGNMENTThe following sample has been compiled with JDK 1.6.0_24 and Findbugs™ (Version 2.0.1-rc2) will show all warnings with the default settings of the Findbugs™ Eclipse Plugin (Version 2.0.1.20120511). package com.sprunck.samples;import java.math.BigDecimal; import java.util.ArrayList; import java.util.Collection; import java.util.UnknownFormatConversionException;@SuppressWarnings(value = { "null", "unused" }) public class FindbugsWarningsBySampleFirst {public static void main(final String[] args) {System.out.println("Findbugs Sample 001 for BC_IMPOSSIBLE_CAST"); // WRONG try { FindbugsWarningsBySampleFirst.bcImpossibleCastWRONG(); } catch (final ClassCastException e) { System.out.println(" - ERROR:" + e.getMessage()); } // CORRECT FindbugsWarningsBySampleFirst.bcImpossibleCastCORRECT();System.out.println("Findbugs Sample 002 for BC_IMPOSSIBLE_DOWNCAST"); // WRONG try { FindbugsWarningsBySampleFirst.bcImpossibleDowncastWRONG(); } catch (final ClassCastException e) { System.out.println(" - ERROR:" + e.getMessage()); } // CORRECT FindbugsWarningsBySampleFirst.bcImpossibleDowncastCORRECT();System.out.println("Findbugs Sample 003 for BC_IMPOSSIBLE_INSTANCEOF"); // WRONG FindbugsWarningsBySampleFirst.bcImpossibleInstanceOfWRONG(); // CORRECT FindbugsWarningsBySampleFirst.bcImpossibleInstanceOfCORRECT();System.out.println("Findbugs Sample 004 for BC_IMPOSSIBLE_DOWNCAST_OF_TOARRAY"); // WRONG try { FindbugsWarningsBySampleFirst.bcImpossibleDowncastOfArrayWRONG(); } catch (final ClassCastException e) { System.out.println(" - ERROR:" + e.getMessage()); } // CORRECT FindbugsWarningsBySampleFirst.bcImpossibleDowncastOfArrayCORRECT();System.out.println("Findbugs Sample 005 for DMI_BIGDECIMAL_CONSTRUCTED_FROM_DOUBLE"); // WRONG FindbugsWarningsBySampleFirst.dmiBigDecimalConstructedFromDoubleWRONG(); // CORRECT FindbugsWarningsBySampleFirst.dmiBigDecimalConstructedFromDoubleCORRECT();System.out.println("Findbugs Sample 006 for ES_COMPARING_STRINGS_WITH_EQ"); // WRONG FindbugsWarningsBySampleFirst.esComparingStringsWithEqWRONG(); // CORRECT FindbugsWarningsBySampleFirst.esComparingStringsWithEqCORRECT();System.out.println("Findbugs Sample 007 for VA_FORMAT_STRING_ILLEGAL"); // WRONG try { FindbugsWarningsBySampleFirst.vaFormatStringIllegalWRONG(); } catch (final UnknownFormatConversionException e) { System.out.println(" - ERROR:" + e.getMessage()); } // CORRECT FindbugsWarningsBySampleFirst.vaFormatStringIllegalCORRECT();System.out.println("Findbugs Sample 008 for RV_RETURN_VALUE_IGNORED"); // WRONG 
FindbugsWarningsBySampleFirst.rvReturnValueIgnoredWRONG(); // CORRECT FindbugsWarningsBySampleFirst.rvReturnValueIgnoredCORRECT();System.out.println("Findbugs Sample 009 for NP_ALWAYS_NULL"); // WRONG try { FindbugsWarningsBySampleFirst.npAlwaysNullWRONG(); } catch (final NullPointerException e) { System.out.println(" - ERROR:" + e.getMessage()); } // CORRECT FindbugsWarningsBySampleFirst.npAlwaysNullCORRECT();System.out.println("Findbugs Sample 010 for QBA_QUESTIONABLE_BOOLEAN_ASSIGNMENT"); // WRONG FindbugsWarningsBySampleFirst.qabQuestionableBooleanAssignmentWRONG(); // CORRECT FindbugsWarningsBySampleFirst.qabQuestionableBooleanAssignmentCORRECT();}private static void bcImpossibleCastWRONG() { final Object doubleValue = Double.valueOf(1.0); final Long value = (Long) doubleValue; System.out.println(" - " + value); }private static void bcImpossibleCastCORRECT() { final Object doubleValue = Double.valueOf(1.0); final Double value = (Double) doubleValue; System.out.println(" - " + value); }private static void bcImpossibleDowncastWRONG() { final Object exception = new RuntimeException("abc"); final SecurityException value = (SecurityException) exception; System.out.println(" - " + value.getMessage()); }private static void bcImpossibleDowncastCORRECT() { final Object exception = new RuntimeException("abc"); final RuntimeException value = (RuntimeException) exception; System.out.println(" - " + value.getMessage()); }private static void bcImpossibleInstanceOfWRONG() { final Object value = Double.valueOf(1.0); System.out.println(" - " + (value instanceof Long)); }private static void bcImpossibleInstanceOfCORRECT() { final Object value = Double.valueOf(1.0); System.out.println(" - " + (value instanceof Double)); }private static void bcImpossibleDowncastOfArrayWRONG() { final Collection<String> stringVector = new ArrayList<String>(); stringVector.add("abc"); stringVector.add("xyz"); final String[] stringArray = (String[]) stringVector.toArray(); System.out.println(" - " + stringArray.length); }private static void bcImpossibleDowncastOfArrayCORRECT() { final Collection<String> stringVector = new ArrayList<String>(); stringVector.add("abc"); stringVector.add("xyz"); final String[] stringArray = stringVector.toArray(new String[stringVector.size()]); System.out.println(" - " + stringArray.length); }private static void dmiBigDecimalConstructedFromDoubleWRONG() { final BigDecimal bigDecimal = new BigDecimal(3.1); System.out.println(" - " + bigDecimal.toString()); }private static void dmiBigDecimalConstructedFromDoubleCORRECT() { final BigDecimal bigDecimal = new BigDecimal("3.1"); System.out.println(" - " + bigDecimal.toString()); }private static void esComparingStringsWithEqWRONG() { final StringBuilder sb1 = new StringBuilder("1234"); final StringBuilder sb2 = new StringBuilder("1234"); final String string1 = sb1.toString(); final String string2 = sb2.toString(); System.out.println(" - " + (string1 == string2)); }private static void esComparingStringsWithEqCORRECT() { final StringBuilder sb1 = new StringBuilder("1234"); final StringBuilder sb2 = new StringBuilder("1234"); final String string1 = sb1.toString(); final String string2 = sb2.toString(); System.out.println(" - " + string1.equals(string2)); }private static void vaFormatStringIllegalWRONG() { System.out.println(String.format(" - %>s %s", "10", "9")); }private static void vaFormatStringIllegalCORRECT() { System.out.println(String.format(" - %s > %s", "10", "9")); }private static void rvReturnValueIgnoredWRONG() { final BigDecimal bigDecimal = 
BigDecimal.ONE; bigDecimal.add(BigDecimal.ONE); System.out.println(String.format(" - " + bigDecimal)); }private static void rvReturnValueIgnoredCORRECT() { final BigDecimal bigDecimal = BigDecimal.ONE; final BigDecimal newValue = bigDecimal.add(BigDecimal.ONE); System.out.println(String.format(" - " + newValue)); }private static void npAlwaysNullWRONG() { final String value = null; if (null != value & value.length() > 2) { System.out.println(String.format(" - " + value)); } else { System.out.println(String.format(" - value is invalid")); } }private static void npAlwaysNullCORRECT() { final String value = null; if (null != value && value.length() > 2) { System.out.println(String.format(" - " + value)); } else { System.out.println(String.format(" - value is invalid")); } }private static void qabQuestionableBooleanAssignmentWRONG() { boolean value = false; if (value = true) { System.out.println(String.format(" - value is true")); } else { System.out.println(String.format(" - value is false")); } }private static void qabQuestionableBooleanAssignmentCORRECT() { final boolean value = false; if (value == true) { System.out.println(String.format(" - value is true")); } else { System.out.println(String.format(" - value is false")); } }}The program should print the following output to the standard output: Findbugs Sample 001 for BC_IMPOSSIBLE_CAST - ERROR:java.lang.Double cannot be cast to java.lang.Long - 1.0 Findbugs Sample 002 for BC_IMPOSSIBLE_DOWNCAST - ERROR:java.lang.RuntimeException cannot be cast to java.lang.SecurityException - abc Findbugs Sample 003 for BC_IMPOSSIBLE_INSTANCEOF - false - true Findbugs Sample 004 for BC_IMPOSSIBLE_DOWNCAST_OF_TOARRAY - ERROR:[Ljava.lang.Object; cannot be cast to [Ljava.lang.String; - 2 Findbugs Sample 005 for DMI_BIGDECIMAL_CONSTRUCTED_FROM_DOUBLE - 3.100000000000000088817841970012523233890533447265625 - 3.1 Findbugs Sample 006 for ES_COMPARING_STRINGS_WITH_EQ - false - true Findbugs Sample 007 for VA_FORMAT_STRING_ILLEGAL - ERROR:Conversion = '>' - 10 > 9 Findbugs Sample 008 for RV_RETURN_VALUE_IGNORED - 1 - 2 Findbugs Sample 009 for NP_ALWAYS_NULL - ERROR:null - value is invalid Findbugs Sample 010 for QBA_QUESTIONABLE_BOOLEAN_ASSIGNMENT - value is true - value is falsePlease, do not hesitate to contact me if you have any ideas for improvement and/or you find a bug in the sample code. We continue with the following 5 warnings :DMI_CONSTANT_DB_PASSWORD (Security) DMI_EMPTY_DB_PASSWORD (Security) SQL_NONCONSTANT_STRING_PASSED_TO_EXECUTE (Security) OBL_UNSATISFIED_OBLIGATION (Experimental) OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE (Experimental)Motivation for the ‘Findbugs™ Warnings By Sample’ Series The FindBugs™ bug descriptions of the online documentation are concise and well written. Unfortunately, some parts of the descriptions are not easy to understand (even for experienced Java developers). It can be difficult to understand the exact root cause for a warning and/or to find the correct way of fixing. To be honest – at least I had problems with some warnings in the last years. Quite often I found no helping sample code in the web. The main weakness of the bug descriptions is, that it uses seldom sample code to demonstrate the wrong and correct situation. This is my motivation to publish in the next weeks a series of articles with simple code samples of selected warnings. Sample Code Findbugs™ (Version 2.0.1-rc2) will show all warnings with the activated settings ‘Security’ and ‘Experimental’ of the Findbugs™ Eclipse Plugin (Version 2.0.1.20120511). 
See the following figure:The sample has been compiled with JDK 1.6.0_24 and with derby.jar (Version 10.9.1.0) in the build path. package com.sprunck.samples;import java.sql.Connection; import java.sql.DriverManager; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import java.sql.Statement;public class FindbugsWarningsBySampleSecond {public static void main(final String[] args) {// Prepare database Statement createStatement = null; Connection connection = null; try { System.out.println("Findbugs Sample prepare small in memory database"); connection = getConnection_dmiConstantDbPasswordCORRECT(); createStatement = connection.createStatement(); createStatement.execute("create table T_ADVICE (answer varchar(255), " + "owner varchar(255))"); createStatement.execute("insert into T_ADVICE ( answer, owner ) values " + "('Don''t Panic', 'Joe')"); createStatement.execute("insert into T_ADVICE ( answer, owner ) values " + "('Keep Smiling', 'John')");System.out.println("\nFindbugs Sample for DMI_CONSTANT_DB_PASSWORD"); // WRONG FindbugsWarningsBySampleSecond.getConnection_dmiConstantDbPasswordWRONG(); // CORRECT FindbugsWarningsBySampleSecond.getConnection_dmiConstantDbPasswordCORRECT();System.out.println("\nFindbugs Sample for DMI_EMPTY_DB_PASSWORD"); // WRONG FindbugsWarningsBySampleSecond.getConnection_dmiEmptyDbPasswordWRONG(); // CORRECT FindbugsWarningsBySampleSecond.getConnection_dmiConstantDbPasswordCORRECT();System.out.println("\nFindbugs Sample for SQL_NONCONSTANT_STRING_PASSED_TO_EXECUTE"); // WRONG FindbugsWarningsBySampleSecond.sqlNonconstantStringPassedToExecuteWRONG("Joe"); // CORRECT FindbugsWarningsBySampleSecond.sqlNonconstantStringPassedToExecuteCORRECT("Joe");System.out.println("\nFindbugs Sample for OBL_UNSATISFIED_OBLIGATION"); // WRONG FindbugsWarningsBySampleSecond.oblUnsatisfiedObligationWRONG("Joe"); // CORRECT FindbugsWarningsBySampleSecond.oblUnsatisfiedObligationCORRECT("Joe");System.out.println("\nFindbugs Sample for OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE"); // WRONG FindbugsWarningsBySampleSecond.oblUnsatisfiedObligationExceptionEdgeWRONG("Joe"); // CORRECT FindbugsWarningsBySampleSecond.oblUnsatisfiedObligationExceptionEdgeCORRECT("Joe");} catch (final SQLException e) { System.out.println(" - ERROR:" + e.getMessage()); } finally { if (null != createStatement) { try { createStatement.close(); } catch (final SQLException e) { } } if (null != connection) { try { connection.close(); } catch (final SQLException e) { } }}}private static Connection getConnection_dmiConstantDbPasswordWRONG() throws SQLException {Connection connection = null; try { Class.forName("org.apache.derby.jdbc.EmbeddedDriver"); } catch (final ClassNotFoundException e) { System.out.println(" - ERROR:" + e.getMessage()); } connection = DriverManager.getConnection("jdbc:derby:memory:myDB;create=true", "APP", "my-secret-password"); System.out.println(" - DriverManager.getConnection(\"jdbc:derby:database;" + "create=true\", \"test\", \"admin\"))");return connection; }private static Connection getConnection_dmiEmptyDbPasswordWRONG() throws SQLException {Connection connection = null; try { Class.forName("org.apache.derby.jdbc.EmbeddedDriver"); } catch (final ClassNotFoundException e) { System.out.println(" - ERROR:" + e.getMessage()); } connection = DriverManager.getConnection("jdbc:derby:memory:myDB;create=true", "APP", ""); System.out.println(" - DriverManager.getConnection(\"jdbc:derby:database;create=true\"," + " \"test\", \"\"))"); return connection; }private 
static Connection getConnection_dmiConstantDbPasswordCORRECT() throws SQLException {Connection connection = null; try { Class.forName("org.apache.derby.jdbc.EmbeddedDriver"); } catch (final ClassNotFoundException e) { System.out.println(" - ERROR:" + e.getMessage()); } connection = DriverManager.getConnection("jdbc:derby:memory:myDB;create=true", "APP", getSecurePassword()); System.out.println(" - DriverManager.getConnection(\"jdbc:derby:database;" + "create=true\", \"test\", pwdDecoder()))"); return connection; }static String getSecurePassword() { // Here we should fetch and decode the password from a secured resource return "my-sec" + "ret-password"; }private static void sqlNonconstantStringPassedToExecuteWRONG(final String owner) {Statement statement = null; ResultSet resultSet = null; try { final String query = "SELECT answer FROM T_ADVICE WHERE owner= '" + owner + "'"; statement = getConnection_dmiConstantDbPasswordCORRECT().createStatement(); resultSet = statement.executeQuery(query); while (resultSet.next()) { System.out.println(" - Result (Statement):" + resultSet.getString("ANSWER")); } } catch (final SQLException e) { System.out.println(" - ERROR:" + e.getMessage()); } finally {if (null != resultSet) { try { resultSet.close(); } catch (final SQLException e) { } } if (null != statement) { try { statement.close(); } catch (final SQLException e) { } } } }private static void sqlNonconstantStringPassedToExecuteCORRECT(final String owner) { PreparedStatement statement = null; ResultSet resultSet = null; try { final String query = "SELECT answer FROM T_ADVICE WHERE owner = ?"; statement = getConnection_dmiConstantDbPasswordCORRECT().prepareStatement(query); statement.setString(1, owner); resultSet = statement.executeQuery(); while (resultSet.next()) { System.out.println(" - Result (PreparedStatement):" + resultSet.getString("ANSWER")); } } catch (final SQLException e) { System.out.println(" - ERROR:" + e.getMessage()); } finally { if (null != resultSet) { try { resultSet.close(); } catch (final SQLException e) { } } if (null != statement) { try { statement.close(); } catch (final SQLException e) { } } } }private static void oblUnsatisfiedObligationWRONG(final String owner) {PreparedStatement statement = null; ResultSet resultSet = null; try { final String query = "SELECT answer FROM T_ADVICE WHERE owner = ?"; statement = getConnection_dmiConstantDbPasswordCORRECT().prepareStatement(query); statement.setString(1, owner); resultSet = statement.executeQuery(); while (resultSet.next()) { System.out.println(" - Result (PreparedStatement):" + resultSet.getString("ANSWER")); } } catch (final SQLException e) { System.out.println(" - ERROR:" + e.getMessage()); } finally { if (null != statement) { try { statement.close(); } catch (final SQLException e) { } } } }private static void oblUnsatisfiedObligationCORRECT(final String owner) { PreparedStatement statement = null; ResultSet resultSet = null; try { final String query = "SELECT answer FROM T_ADVICE WHERE owner = ?"; statement = getConnection_dmiConstantDbPasswordCORRECT().prepareStatement(query); statement.setString(1, owner); resultSet = statement.executeQuery(); while (resultSet.next()) { System.out.println(" - Result (PreparedStatement):" + resultSet.getString("ANSWER")); } } catch (final SQLException e) { System.out.println(" - ERROR:" + e.getMessage()); } finally { if (null != resultSet) { try { resultSet.close(); } catch (final SQLException e) { } } if (null != statement) { try { statement.close(); } catch (final SQLException e) { } } } 
}private static void oblUnsatisfiedObligationExceptionEdgeWRONG(final String owner) { PreparedStatement statement = null; ResultSet resultSet = null; try { final String query = "SELECT answer FROM T_ADVICE WHERE owner = ?"; statement = getConnection_dmiConstantDbPasswordCORRECT().prepareStatement(query); statement.setString(1, owner); resultSet = statement.executeQuery(); while (resultSet.next()) { System.out.println(" - Result (PreparedStatement):" + resultSet.getString("ANSWER")); } if (null != statement) { try { statement.close(); } catch (final SQLException e) { } } } catch (final SQLException e) { System.out.println(" - ERROR:" + e.getMessage()); } finally { if (null != resultSet) { try { resultSet.close(); } catch (final SQLException e) { } } } }private static void oblUnsatisfiedObligationExceptionEdgeCORRECT(final String owner) { PreparedStatement statement = null; ResultSet resultSet = null; try { final String query = "SELECT answer FROM T_ADVICE WHERE owner = ?"; statement = getConnection_dmiConstantDbPasswordCORRECT().prepareStatement(query); statement.setString(1, owner); resultSet = statement.executeQuery(); while (resultSet.next()) { System.out.println(" - Result (PreparedStatement):" + resultSet.getString("ANSWER")); } if (null != statement) { try { statement.close(); } catch (final SQLException e) { } } } catch (final SQLException e) { System.out.println(" - ERROR:" + e.getMessage()); } finally { if (null != resultSet) { try { resultSet.close(); } catch (final SQLException e) { } } if (null != statement) { try { statement.close(); } catch (final SQLException e) { } } } } }The program should print the following output to the standard output: Findbugs Sample prepare small in memory database - DriverManager.getConnection("jdbc:derby:database;create=true", "test", pwdDecoder()))Findbugs Sample for DMI_CONSTANT_DB_PASSWORD - DriverManager.getConnection("jdbc:derby:database;create=true", "test", "admin")) - DriverManager.getConnection("jdbc:derby:database;create=true", "test", pwdDecoder()))Findbugs Sample for DMI_EMPTY_DB_PASSWORD - DriverManager.getConnection("jdbc:derby:database;create=true", "test", "")) - DriverManager.getConnection("jdbc:derby:database;create=true", "test", pwdDecoder()))Findbugs Sample for SQL_NONCONSTANT_STRING_PASSED_TO_EXECUTE - DriverManager.getConnection("jdbc:derby:database;create=true", "test", pwdDecoder())) - Result (Statement):Don't Panic - DriverManager.getConnection("jdbc:derby:database;create=true", "test", pwdDecoder())) - Result (PreparedStatement):Don't PanicFindbugs Sample for OBL_UNSATISFIED_OBLIGATION - DriverManager.getConnection("jdbc:derby:database;create=true", "test", pwdDecoder())) - Result (PreparedStatement):Don't Panic - DriverManager.getConnection("jdbc:derby:database;create=true", "test", pwdDecoder())) - Result (PreparedStatement):Don't PanicFindbugs Sample for OBL_UNSATISFIED_OBLIGATION_EXCEPTION_EDGE - DriverManager.getConnection("jdbc:derby:database;create=true", "test", pwdDecoder())) - Result (PreparedStatement):Don't Panic - DriverManager.getConnection("jdbc:derby:database;create=true", "test", pwdDecoder())) - Result (PreparedStatement):Don't PanicPlease, do not hesitate to contact me if you have any ideas for improvement and/or you find a bug in the sample code. Reference: Findbugs™ Warnings By Sample – Part I,  Findbugs™ Warnings By Sample – Part IIfrom our JCG partner Markus Sprunck at the Software Engineering Candies blog....

Apache Commons SCXML: Finite State Machine Implementation

This article is about Finite State Machines (FSMs), SCXML (State Chart XML) and Apache Commons' SCXML library. A basic ATM finite state machine code sample is also provided with the article.

Finite State Machines: You probably remember Finite State Machines from your Computer Science courses. FSMs are used to design computer programs or digital circuits. An FSM is simply an abstract machine that can be in one of a finite number of states. The machine is in only one state at a time; the state it is in at any given time is called the current state. It can change from one state to another when triggered by an event or condition; this is called a transition. A particular FSM is defined by a list of the possible transition states from each current state, and the triggering condition for each transition.

SCXML Language: A working draft called SCXML (State Machine Notation for Control Abstraction, published by the W3C) can be used to describe complex state machines. SCXML is a general-purpose XML-based state machine language. It is still a draft; the latest version is dated 16 February 2012. A five-minute introduction to SCXML documents is linked from the original post.

Apache Commons SCXML Library: Apache has an implementation aimed at creating and maintaining a Java SCXML engine capable of executing a state machine defined using an SCXML document, while abstracting out the environment interfaces. The latest stable version is 0.9.
Library website: http://commons.apache.org/scxml/index.html
Eclipse plugin: http://commons.apache.org/sandbox/gsoc/2010/scxml-eclipse/ (still under development)
Use cases: http://commons.apache.org/scxml/usecases.html

SCXML Editors: Apache's Eclipse plugin aims to provide a visual editor for SCXML files, but it is still under development. There is also scxmlgui ( http://code.google.com/p/scxmlgui/ ), which works very well. You can also check State Forge's visual State Machine Diagram: http://www.stateforge.com/StateMachineDiagram/StateMachineDiagram.html

Code Sample: In this part of the article, we will implement a basic ATM status state machine. As brief background, we assume an ATM can have the following statuses:
- idle: the ATM has no activity; it is simply closed
- loading: an idle ATM tries to connect to the ATM server, and its configuration and info start loading
- out-of-service: ATM loading fails or the ATM is shut down
- in-service: ATM loading is successful or the ATM is started up
- disconnected: the ATM is not connected to the network

Sorry for any missing or incorrect information about ATM statuses; this is just an example. Let's first draw our state machine using the scxmlgui program. One can write the SCXML file by hand, but scxmlgui does that ugly task for you.
Here is the state chart diagram which describes the status change of an ATM :And the output SCXML file describing the transitions in the diagram above: <scxml initial="idle" name="atm.connRestored" version="0.9" xmlns="http://www.w3.org/2005/07/scxml"> <state id="idle"> <transition event="atm.connected" target="loading"></transition> </state> <state id="loading"> <transition event="atm.loadSuccess" target="inService"></transition> <transition event="atm.connClosed" target="disconnected"></transition> <transition event="atm.loadFail" target="outOfService"></transition> </state> <state id="inService"> <transition event="atm.shutdown" target="outOfService"></transition> <transition event="atm.connLost" target="disconnected"></transition> </state> <state id="outOfService"> <transition event="atm.startup" target="inService"></transition> <transition event="atm.connLost" target="disconnected"></transition> </state> <state id="disconnected"> <transition event="atm.connRestored" target="inService"></transition> </state> </scxml> Our FSM implemantation is in AtmStatusFSM class.AtmStatusFSM class extends org.apache.commons.scxml.env.AbstractStateMachine. FSM is configured by giving the scxml file (atm_status.xml) path to super constructor. ATM state changes are controlled by events. When fireEvent method is called with related event name [e.g. fireEvent(‘atm.connected’)], FSM state is updated automatically. You can get current state whenever you want. You can also write public methods having the state names of our FSM. These methods are called when the corresponding state is activated.package net.javafun.example.atmstatusfsm;import java.util.Collection; import java.util.Set;import org.apache.commons.scxml.env.AbstractStateMachine; import org.apache.commons.scxml.model.State;/** * Atm Status Finite State Machine * * @see Apache Commons Scxml Library * @author ozkansari.com * */ public class AtmStatusFSM extends AbstractStateMachine {/** * State Machine uses this scmxml config file */ private static final String SCXML_CONFIG_ATM_STATUS = "net/javafun/example/atmstatusfsm/atm_status.xml";/** CONSTRUCTOR(S) */ public AtmStatusFSM() { super(AtmStatusFSM.class.getClassLoader().getResource(SCXML_CONFIG_ATM_STATUS)); } /** HELPER METHOD(S) *//** * Fire the event */ public void firePreDefinedEvent(AtmStatusEventEnum eventEnum){ System.out.println("EVENT: " + eventEnum); this.fireEvent(eventEnum.getEventName()); } public void callState(String name){ this.invoke(name); } /** * Get current state ID as string */ public String getCurrentStateId() { Set states = getEngine().getCurrentStatus().getStates(); State state = (State) states.iterator().next(); return state.getId(); } /** * Get current state as apache's State object */ public State getCurrentState() { Set states = getEngine().getCurrentStatus().getStates(); return ( (State) states.iterator().next()); } /** * Get events belongs to current status of the FSM */ public Collection getCurrentStateEvents() { return getEngine().getCurrentStatus().getEvents(); } /** STATES */ // Each method below is the activity corresponding to a state in the // SCXML document (see class constructor for pointer to the document).public void idle() { System.out.println("STATE: idle"); } public void loading() { System.out.println("STATE: loading"); } public void inService() { System.out.println("STATE: inService"); } public void outOfService() { System.out.println("STATE: outOfService"); } public void disconnected() { System.out.println("STATE: disconnected"); } } We have the 
following enum file to describe our events. You don’t have to code such a class, but it might help to define the events. You can also get those events dynamically using getEngine().getCurrentStatus().getEvents()code fragment. package net.javafun.example.atmstatusfsm;/** * Atm Status Change Events * * @author ozkansari.com * */ public enum AtmStatusEventEnum {CONNECT("atm.connected"), CONNECTION_CLOSED("atm.connClosed"), CONNECTION_LOST("atm.connLost"), CONNECTION_RESTORED("atm.connRestored"), LOAD_SUCCESS("atm.loadSuccess"), LOAD_FAIL("atm.loadFail"), SHUTDOWN("atm.shutdown"), STARTUP("atm.startup");private final String eventName;private AtmStatusEventEnum(String eventName) { this.eventName = eventName; } public String getEventName() { return eventName; } public static String getNamesAsCsv(){ StringBuilder sb = new StringBuilder(); for (AtmStatusEventEnum e : AtmStatusEventEnum.values()) { sb.append(e.name()); sb.append(","); } return sb.substring(0,sb.length()-2); } } You can see the basic GUI code below. The GUI first shows the possible events that can be fired. When an event is selected and submitted, current ATM status is displayed and event list is updated. package net.javafun.example.atmstatusfsm;import java.awt.BorderLayout; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.util.List;import javax.swing.JButton; import javax.swing.JComboBox; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JPanel;import org.apache.commons.scxml.model.Transition;/** * Atm Status Change GUI * * @author ozkansari.com * */ public class AtmDisplay extends JFrame implements ActionListener {private static final long serialVersionUID = -5083315372455956151L; private AtmStatusFSM atmStatusFSM;private JButton button; private JLabel state; private JComboBox eventComboBox = new JComboBox(); public static void main(String[] args) { new AtmDisplay(); } public AtmDisplay() { super("ATM Display Demo"); atmStatusFSM = new AtmStatusFSM(); setupUI(); }@SuppressWarnings("deprecation") private void setupUI() { JPanel panel = new JPanel(); panel.setLayout(new BorderLayout()); setContentPane(panel); button = makeButton("FIRE_EVENT", AtmStatusEventEnum.getNamesAsCsv(), "Submit" ); panel.add(button, BorderLayout.CENTER); state = new JLabel(atmStatusFSM.getCurrentStateId()); panel.add(state, BorderLayout.SOUTH); initEvents(); panel.add(eventComboBox, BorderLayout.NORTH); pack(); setLocation(200, 200); setResizable(false); setSize(300, 125); show(); setDefaultCloseOperation(EXIT_ON_CLOSE); }@SuppressWarnings("unchecked") private void initEvents() { eventComboBox.removeAllItems(); List transitionList = atmStatusFSM.getCurrentState().getTransitionsList(); for (Transition transition : transitionList) { eventComboBox.addItem(transition.getEvent() ); } }public void actionPerformed(ActionEvent e) { String command = e.getActionCommand(); if(command.equals("FIRE_EVENT")) { checkAndFireEvent(); } }private boolean checkAndFireEvent() { atmStatusFSM.fireEvent(eventComboBox.getSelectedItem().toString()); state.setText(atmStatusFSM.getCurrentStateId()); initEvents(); repaint(); return true; }private JButton makeButton(final String actionCommand, final String toolTipText, final String altText) { JButton button = new JButton(altText); button.setActionCommand(actionCommand); button.setToolTipText(toolTipText); button.addActionListener(this); button.setOpaque(false); return button; }} The output of our simple program :The project files (with required libraries) as shown in Eclipse are 
given in the following image: For full source code, visit https://github.com/ozkansari/atmstatemachine. Reference: Easy Finite State Machine Implementation with Apache Commons SCXML from our JCG partner Ozkan SARI at the Java Fun blog....
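One note on the listing above: the AtmStatusFSM class that AtmDisplay drives is not reproduced in this excerpt. The following is a hypothetical sketch of what such a wrapper can look like on top of Commons SCXML's AbstractStateMachine; the resource name, helper methods and overall shape are assumptions, and the real implementation is in the repository linked above.

package net.javafun.example.atmstatusfsm;

import java.util.Set;

import org.apache.commons.scxml.env.AbstractStateMachine;
import org.apache.commons.scxml.model.State;

/**
 * Hypothetical sketch of the state machine wrapper used by AtmDisplay.
 * The real implementation is in the GitHub repository linked above.
 */
public class AtmStatusFSM extends AbstractStateMachine {

    public AtmStatusFSM() {
        // Assumes the SCXML chart is a classpath resource; the actual file name may differ.
        super(AtmStatusFSM.class.getClassLoader().getResource("atm-status.xml"));
    }

    /** Returns the single active (atomic) state of a simple, non-parallel chart. */
    public State getCurrentState() {
        Set states = getEngine().getCurrentStatus().getStates();
        return (State) states.iterator().next();
    }

    public String getCurrentStateId() {
        return getCurrentState().getId();
    }

    // fireEvent(String) is inherited from AbstractStateMachine and used by the GUI as-is.
}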

Jelastic, cloud platform for Java

Who is behind Jelastic? That was my first question, so I took a look at the Jelastic web site. The best way to answer this is to look at the Jelastic Team section. Founders, advisers and special partners form a truly professional team. Among the special partners you will find the authors of MySQL (Michael “Monty” Widenius) and Nginx (Igor Sysoev). A special mention too for their evangelists (not mentioned on the web page). In my case, Judah Johns took the time to write me two personal emails simply to let me know about the Jelastic platform and the possibility of testing it for free. That’s a real evangelist. Registration Signing up for the service is really easy. Once you send the registration email you will receive a welcome email with an initial password to log in. First impression My first impression of Jelastic, from the web page to the service once logged in, was: Ough!!! I know design is subjective, what you love others can hate, but the first impression counts for 75%. Sorry Jelastic but, from my point of view, you need a redesign. That darker theme is absolutely dreadful. Environments After the first impression I started working on something more functional, which is what really matters for a developer. An environment is a concrete configuration of servers for load balancing, application logic and storage. Load balancing is achieved with the Nginx server. Application logic is implemented as a Java server-side application and can run on Tomcat 6, Tomcat 7, Jetty 6 or GlassFish 3 servers using JDK 6 or JDK 7. For storage we can use SQL or NoSQL solutions. For SQL we have the best-known open source projects: PostgreSQL 8.4, MySQL 5.5 and MariaDB 5.2. For NoSQL we can use MongoDB 2.0 or CouchDB 1.1. Creating a new environment is incredibly easy. We can choose whether to use a load balancer, define the number of application logic server instances, enable high availability (which means session replication) and select the storage service. Once created, the environment’s topology can be modified at any time. In practice this means you can scale your application by adding more application server instances or applying the high availability option, which replicates sessions. In addition you can change or add new storage services. Note: be careful if you change your relational or NoSQL server, because data will be lost. Deploying applications For testing purposes, Jelastic comes with a HelloWorld.war sample application. Deploying it is as easy as selecting it and deploying it on one of your previously created and configured environments. To deploy your own application you need to upload it first. Once uploaded, your application will be shown in the applications list and you can deploy it as described above. Server configuration Once the environment is created, you have access to the configuration files of your servers. I played a bit with a simple Tomcat+MySQL configuration and saw that you: have access to modify files like web.xml or server.xml; can change logging preferences; can upload new JAR files to, or remove them from, the lib folder; have access to the webapps folder; and have a shortened version of the my.cnf file you can edit. Log files and monitoring Jelastic monitors the servers of your environments and presents the results in a nice graphical way. In addition it also allows you to see the log files of the servers: Viewing log files in the browser is fun, but I would like a way (I didn’t find one) to download the log files to my local machine.
Looking for errors in a production environment with tons of lines isn’t easy to do in that text area. Resources Connecting your application to the storage service (relational or NoSQL database) is really easy. The documentation contains samples for all the databases Jelastic supports. The application logic servers have access to a home directory where you can create property files or upload whatever you want; your application can read them later via: System.getProperty("user.home") (a short example appears below). Conclusions Unlike Amazon AWS, Google App Engine and others, Jelastic is completely oriented to Java. If you are a Java developer and have ever worked with AWS or Google App Engine you will find Jelastic completely different and incredibly easy to use, really similar to your usual day-to-day work. While AWS is machine oriented, where you start as many EC2 instances as you require, with Jelastic you have the concept of a cloudlet and you can completely forget about managing machine instances and their resources. Note: a cloudlet is roughly equivalent to 128 MB RAM and a 200 MHz CPU core. I have written this post before dinner so, as you can see, it is nothing exhaustive, but a simple platform presentation. A great continuation would be to describe the experience of working with a real application: deployment operations and tweaking the running environment to achieve good performance with the lowest cloudlet consumption. If someone is interested, another great article could compare the cost of the same application running on Amazon AWS and Jelastic: where it runs with better performance and which one is cheaper. Related Posts: Sending emails with Java; Clinker, a software development ecosystem; Generating map tiles without a map server. GeoTools the GIS swissknife; How to Create a Cross-Platform Application with NASA WorldWind & NetBeans Platform; Downloading files from AEMET FTP server with Java and Apache Commons Net. Reference: JELASTIC, CLOUD PLATFORM FOR JAVA from our JCG partner Antonio Santiago at the A Curious Animal blog....
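As a small, concrete illustration of the home-directory tip above: the following is plain Java (nothing Jelastic-specific), and the file name and property key are invented for the example.

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class HomeDirConfig {

    public static void main(String[] args) throws IOException {
        // The platform exposes the application's home directory through the standard system property
        String home = System.getProperty("user.home");

        // "myapp.properties" is a made-up name: upload your own file to the home directory first
        Properties config = new Properties();
        FileInputStream in = new FileInputStream(new File(home, "myapp.properties"));
        try {
            config.load(in);
        } finally {
            in.close();
        }

        System.out.println("jdbc.url = " + config.getProperty("jdbc.url"));
    }
}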

Testing legacy code: Hard-wired dependencies

When pairing with some developers, I’ve noticed that one of the reasons they are not unit testing existing code is because, quite often, they don’t know how to overcome certain problems. The most common one is related to hard-wired dependencies – Singletons and static calls. Let’s look at this piece of code: public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException { List<Trip> tripList = new ArrayList<Trip>(); User loggedUser = UserSession.getInstance().getLoggedUser(); boolean isFriend = false; if (loggedUser != null) { for (User friend : user.getFriends()) { if (friend.equals(loggedUser)) { isFriend = true; break; } } if (isFriend) { tripList = TripDAO.findTripsByUser(user); } return tripList; } else { throw new UserNotLoggedInException(); } } Horrendous, isn’t it? The code above has loads of problems, but before we change it, we need to have it covered by tests. There are two challenges when unit testing the method above. They are: User loggedUser = UserSession.getInstance().getLoggedUser(); // Line 3 tripList = TripDAO.findTripsByUser(user); // Line 13 As we know, unit tests should test just one class and not its dependencies. That means that we need to find a way to mock the Singleton and the static call. In general we do that by injecting the dependencies, but we have a rule, remember? We can’t change any existing code if it is not covered by tests. The only exception is if we need to change the code to add unit tests, but in this case, only automated refactorings (via the IDE) are allowed. Besides that, many of the mocking frameworks are not able to mock static methods anyway, so injecting the TripDAO would not solve the problem. Overcoming the hard-dependencies problem NOTE: In real life I would be writing tests first and making the change just when I needed it, but in order to keep the post short and focused I will not go step by step here. First of all, let’s isolate the Singleton dependency in its own method. Let’s make it protected as well. But wait, this needs to be done via an automated “extract method” refactoring. Select just the following piece of code in TripService.java: UserSession.getInstance().getLoggedUser() Go to your IDE’s refactoring menu, choose extract method and give it a name. After this step, the code will look like this: public class TripService {public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException { ... User loggedUser = loggedUser(); ... }protected User loggedUser() { return UserSession.getInstance().getLoggedUser(); } } Doing the same thing for TripDAO.findTripsByUser(user), we will have: public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException { ... User loggedUser = loggedUser(); ... if (isFriend) { tripList = findTripsByUser(user); } ... } protected List<Trip> findTripsByUser(User user) { return TripDAO.findTripsByUser(user); } protected User loggedUser() { return UserSession.getInstance().getLoggedUser(); } In our test class, we can now extend the TripService class and override the protected methods we created, making them return whatever we need for our unit tests: private TripService createTripService() { return new TripService() { @Override protected User loggedUser() { return loggedUser; } @Override protected List<Trip> findTripsByUser(User user) { return user.trips(); } }; } And this is it. Our TripService is now testable. First we write all the tests we need to make sure the class/method is fully tested and all code branches are exercised. I use Eclipse’s eclEmma plugin for that and I strongly recommend it.
If you are not using Java and/or Eclipse, try to use a code coverage tool specific to your language/IDE while writing tests for existing code. It helps a lot. So here is my final test class: public class TripServiceTest { private static final User UNUSED_USER = null; private static final User NON_LOGGED_USER = null; private User loggedUser = new User(); private User targetUser = new User(); private TripService tripService; @Before public void initialise() { tripService = createTripService(); } @Test(expected=UserNotLoggedInException.class) public void shouldThrowExceptionWhenUserIsNotLoggedIn() throws Exception { loggedUser = NON_LOGGED_USER; tripService.getTripsByUser(UNUSED_USER); } @Test public void shouldNotReturnTripsWhenLoggedUserIsNotAFriend() throws Exception { List<Trip> trips = tripService.getTripsByUser(targetUser); assertThat(trips.size(), is(equalTo(0))); } @Test public void shouldReturnTripsWhenLoggedUserIsAFriend() throws Exception { User john = anUser().friendsWith(loggedUser) .withTrips(new Trip(), new Trip()) .build(); List<Trip> trips = tripService.getTripsByUser(john); assertThat(trips, is(equalTo(john.trips()))); } private TripService createTripService() { return new TripService() { @Override protected User loggedUser() { return loggedUser; } @Override protected List<Trip> findTripsByUser(User user) { return user.trips(); } }; } } Are we done? Of course not. We still need to refactor the TripService class. public class TripService {public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException { List<Trip> tripList = new ArrayList<Trip>(); User loggedUser = loggedUser(); boolean isFriend = false; if (loggedUser != null) { for (User friend : user.getFriends()) { if (friend.equals(loggedUser)) { isFriend = true; break; } } if (isFriend) { tripList = findTripsByUser(user); } return tripList; } else { throw new UserNotLoggedInException(); } } protected List<Trip> findTripsByUser(User user) { return TripDAO.findTripsByUser(user); } protected User loggedUser() { return UserSession.getInstance().getLoggedUser(); } } How many problems can you see? Take your time before reading the ones I found... :-) Refactoring NOTE: When I did this, I did it step by step, running the tests after every step. Here I’ll just summarise my decisions. The first thing I noticed is that the tripList variable does not need to be created when the logged user is null, since an exception is thrown and nothing else happens. I’ve decided to invert the outer if and extract the guard clause. public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException { User loggedUser = loggedUser(); validate(loggedUser); List<Trip> tripList = new ArrayList<Trip>(); boolean isFriend = false; for (User friend : user.getFriends()) { if (friend.equals(loggedUser)) { isFriend = true; break; } } if (isFriend) { tripList = findTripsByUser(user); } return tripList; } private void validate(User loggedUser) throws UserNotLoggedInException { if (loggedUser == null) throw new UserNotLoggedInException(); } Feature Envy When a class gets data from another class in order to do some calculation or comparison on that data, quite often it means that the client class envies the other class. This is called Feature Envy (a code smell) and it is a very common occurrence in long methods and is everywhere in legacy code. In OO, data and the operations on that data should be on the same object.
So, looking at the code above, clearly the whole thing about determining if a user is friends with another doesn’t belong to the TripService class. Let’s move it to the User class. First the unit test: @Test public void shouldReturnTrueWhenUsersAreFriends() throws Exception { User John = new User(); User Bob = new User(); John.addFriend(Bob); assertTrue(John.isFriendsWith(Bob)); } Now, let’s move the code to the User class. Here we can use the Java collections API a bit better and remove the whole for loop and the isFriend flag altogether. public class User { ... private List<User> friends = new ArrayList<User>(); public void addFriend(User user) { friends.add(user); } public boolean isFriendsWith(User friend) { return friends.contains(friend); } ... } After a few refactoring steps, here is the new code in the TripService: public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException { User loggedUser = loggedUser(); validate(loggedUser); return (user.isFriendsWith(loggedUser)) ? findTripsByUser(user) : new ArrayList<Trip>(); } Right. This is already much better but it is still not good enough. Layers and dependencies Some of you may still be annoyed by the protected methods we created in part one in order to isolate dependencies and test the class. Changes like that are meant to be temporary, that is, they are done so we can unit test the whole method. Once we have tests covering the method, we can start doing our refactoring and thinking about the dependencies we could inject. Many times we would think that we should just inject the dependency into the class. That sounds obvious. TripService should receive an instance of UserSession. Really? TripService is a service. That means it dwells in the service layer. UserSession knows about logged users and sessions. It probably talks to the MVC framework and/or HttpSession, etc. Should the TripService be dependent on this class (even if it were an interface instead of a Singleton)? Probably the whole check of whether the user is logged in should be done by the controller or whatever the client class may be. In order NOT to change that much (for now), I’ll make the TripService receive the logged user as a parameter and remove the dependency on the UserSession completely. I’ll need to do some minor changes and clean up in the tests as well. Naming No, unfortunately we are not done yet. What does this code do anyway? Return trips from a friend. Looking at the name of the method and parameters, or even the class name, there is no way to know that. The word “friend” is nowhere to be seen in the TripService’s public interface. We need to change that as well. So here is the final code: public class TripService {public List<Trip> getFriendTrips(User loggedUser, User friend) throws UserNotLoggedInException { validate(loggedUser); return (friend.isFriendsWith(loggedUser)) ? findTripsForFriend(friend) : new ArrayList<Trip>(); } private void validate(User loggedUser) throws UserNotLoggedInException { if (loggedUser == null) throw new UserNotLoggedInException(); } protected List<Trip> findTripsForFriend(User friend) { return TripDAO.findTripsByUser(friend); } } Better, isn’t it? We still have the issue with the other protected method, with the TripDAO static call, etc. But I’ll leave this last bit for another post on how to remove dependencies on static methods. I’ll park my refactoring for now. We can’t refactor the entire system in one day, right? We still need to deliver some features. :-)
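The “minor changes and clean up in the tests” mentioned above are not shown in the post. Purely as a sketch of where they could end up, and assuming the same User test-data builder (anUser()) from the earlier test class is still available, the adjusted test might look something like this; it is an illustration, not necessarily the author’s actual version:

import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

import java.util.List;

import org.junit.Test;

public class TripServiceTest {

    private static final User GUEST = null;        // nobody logged in
    private static final User UNUSED_USER = null;

    private User loggedUser = new User();

    private TripService tripService = new TripService() {
        @Override
        protected List<Trip> findTripsForFriend(User friend) {
            return friend.trips();   // keep the TripDAO static call out of the unit test
        }
    };

    @Test(expected = UserNotLoggedInException.class)
    public void shouldThrowExceptionWhenUserIsNotLoggedIn() throws Exception {
        tripService.getFriendTrips(GUEST, UNUSED_USER);
    }

    @Test
    public void shouldNotReturnTripsWhenLoggedUserIsNotAFriend() throws Exception {
        List<Trip> trips = tripService.getFriendTrips(loggedUser, new User());
        assertThat(trips.size(), is(equalTo(0)));
    }

    @Test
    public void shouldReturnTripsWhenLoggedUserIsAFriend() throws Exception {
        // anUser() is the same test-data builder used in the original test class
        User friend = anUser().friendsWith(loggedUser)
                              .withTrips(new Trip(), new Trip())
                              .build();
        List<Trip> trips = tripService.getFriendTrips(loggedUser, friend);
        assertThat(trips, is(equalTo(friend.trips())));
    }
}

Note that only findTripsForFriend() still needs the subclass-and-override trick; the logged user is now just a parameter.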
Conclusion This was just a toy example and may not even make sense. However, it represents many of the problems we find when working with legacy (existing) code. It’s amazing how many problems we can find in such a tiny piece of code. Now imagine all those classes and methods with hundreds, if not thousands, of lines. We need to keep refactoring our code mercilessly so we never get to a position where we don’t understand it any more and the whole business starts slowing down because we cannot adjust the software quickly enough. Refactoring is not just about extracting methods or making a few tweaks in the logic. We need to think about the dependencies, the responsibilities that each class and method should have, the architectural layers, the design of our application and also the names we give to every class, method, parameter and variable. We should try to have the business domain expressed in the code. We should treat our code base as if it was a big garden. If we want it to be pleasant and maintainable, we need to be constantly looking after it. If you want to give this code a go or find more details about the implementation, check: https://github.com/sandromancuso/testing_legacy_code Reference: Testing legacy: Hard-wired dependencies (part 1 and 2) from our JCG partner Sandro Mancuso at the Crafted Software blog....
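The follow-up post on removing the static TripDAO dependency is not part of this excerpt. Purely to sketch one common direction (the names below are invented for illustration, and this is not necessarily the approach the author takes), the static call could eventually be hidden behind an injectable collaborator, so tests can pass in a stub instead of subclassing the service:

import java.util.ArrayList;
import java.util.List;

// Hypothetical abstraction over trip persistence, introduced only for illustration.
interface TripRepository {
    List<Trip> tripsBy(User user);
}

// Production implementation: the only place still touching the static DAO call.
class DaoBackedTripRepository implements TripRepository {
    public List<Trip> tripsBy(User user) {
        return TripDAO.findTripsByUser(user);
    }
}

class TripService {
    private final TripRepository tripRepository;

    public TripService(TripRepository tripRepository) {
        this.tripRepository = tripRepository;
    }

    public List<Trip> getFriendTrips(User loggedUser, User friend) throws UserNotLoggedInException {
        validate(loggedUser);
        return friend.isFriendsWith(loggedUser)
                ? tripRepository.tripsBy(friend)
                : new ArrayList<Trip>();
    }

    private void validate(User loggedUser) throws UserNotLoggedInException {
        if (loggedUser == null) throw new UserNotLoggedInException();
    }
}

In the tests, a hand-rolled stub or a mock of TripRepository would then replace the subclass-and-override trick used above.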