

Java deadlock troubleshooting and resolution

One of the great things about the annual JavaOne conference is the number of technical and troubleshooting labs presented by subject matter experts. One of these labs especially captured my attention this year: “HOL6500 – Finding And Solving Java Deadlocks”, presented by Java Champion Heinz Kabutz. This is one of the best presentations I have seen on this subject. I recommend that you download, run and study the labs yourself. This article will revisit this classic thread problem and summarize the key troubleshooting and resolution techniques presented. I will also expand on the subject based on my own multi-threading troubleshooting experience.

Java deadlock: what is it?

A true Java deadlock can essentially be described as a situation where two or more threads are blocked forever, waiting for each other. This situation is very different from other more common “day-to-day” thread problem patterns such as lock contention, thread races, threads waiting on blocking IO calls etc. In the classic lock-ordering scenario, the attempt by Thread A and Thread B to acquire 2 locks in different orders is fatal: once the threads reach the deadlocked state, they can never recover, forcing you to restart the affected JVM process.

Heinz also describes another type of deadlock: the resource deadlock. This is by far the most common thread problem pattern I have seen in my experience with Java EE enterprise system troubleshooting. A resource deadlock is essentially a scenario where one or multiple threads are waiting to acquire a resource which will never be available, such as a depleted JDBC connection pool.

Lock-ordering deadlocks

You should know by now that I am a big fan of JVM thread dump analysis; it is a crucial skill to acquire for individuals involved in either Java/Java EE development or production support.
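Before diving into thread dumps, it helps to see how little code it takes to reach this state. Below is a minimal, self-contained sketch (class and lock names are mine, not from Heinz's labs) that deterministically reproduces the lock-ordering deadlock described above; the threads are marked as daemons only so the demo JVM can still exit:

```java
import java.util.concurrent.CountDownLatch;

public class DeadlockDemo {

    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) throws InterruptedException {
        // The latch guarantees both threads hold their first lock before
        // attempting the second one, making the deadlock deterministic.
        CountDownLatch firstLockHeld = new CountDownLatch(2);

        Thread threadA = new Thread(() -> {
            synchronized (LOCK_A) {                  // Thread A: LOCK_A first...
                firstLockHeld.countDown();
                awaitQuietly(firstLockHeld);
                synchronized (LOCK_B) { }            // ...then blocks forever on LOCK_B
            }
        }, "Thread-A");

        Thread threadB = new Thread(() -> {
            synchronized (LOCK_B) {                  // Thread B: LOCK_B first...
                firstLockHeld.countDown();
                awaitQuietly(firstLockHeld);
                synchronized (LOCK_A) { }            // ...then blocks forever on LOCK_A
            }
        }, "Thread-B");

        threadA.setDaemon(true);                     // let the JVM exit despite the deadlock
        threadB.setDaemon(true);
        threadA.start();
        threadB.start();

        threadA.join(1000);                          // times out: the thread will never finish
        System.out.println("Thread-A state: " + threadA.getState()); // prints BLOCKED
    }

    private static void awaitQuietly(CountDownLatch latch) {
        try {
            latch.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Capturing a thread dump of this process (e.g. via jstack) will show both threads as BLOCKED, each owning the monitor the other one is waiting for.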
The good news is that Java-level deadlocks can easily be identified out-of-the-box by most JVM thread dump formats (HotSpot, IBM VM…) since they contain a native deadlock detection mechanism which will show you the threads involved in a true Java-level deadlock scenario, along with their execution stack traces. A JVM thread dump can be captured via the tool of your choice, such as JVisualVM or jstack, or natively, such as kill -3 <PID> on a Unix-based OS. Running lab 1 produces a thread dump containing the JVM's Java-level deadlock detection section.

Now this is the easy part. The core of the root cause analysis effort is to understand why such threads are involved in a deadlock situation in the first place. Lock-ordering deadlocks could be triggered from your application code, but unless you are involved in high concurrency programming, chances are that the culprit code is a third-party API or framework that you are using, or the actual Java EE container itself, when applicable. Now let's review the lock-ordering deadlock resolution strategies presented by Heinz:

# Deadlock resolution by global ordering (see lab1 solution)
Essentially involves the definition of a global ordering for the locks that would always prevent deadlock.

# Deadlock resolution by TryLock (see lab2 solution)
Lock the first lock. Then try to lock the second lock. If you can lock it, you are good to go. If you cannot, wait and try again.

The above strategy can be implemented using the Java Lock & ReentrantLock API, which also gives you the flexibility to set up a wait timeout in order to prevent thread starvation in the event the first lock is held for too long.

public interface Lock {
    void lock();
    void lockInterruptibly() throws InterruptedException;
    boolean tryLock();
    boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException;
    void unlock();
    Condition newCondition();
}

If you look at the JBoss AS7 implementation, you will notice that Lock & ReentrantLock are widely used in core implementation
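Resolution strategy #2 can be sketched as follows; the Account class and the timeout values are hypothetical, for illustration only, but the pattern (lock the first, try the second, release everything and retry on failure) is the one described above:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockTransfer {

    // Hypothetical domain class, just to have two lockable resources.
    static class Account {
        final ReentrantLock lock = new ReentrantLock();
        int balance;
        Account(int balance) { this.balance = balance; }
    }

    // Acquire the first lock, then *try* the second; on failure release
    // everything and retry, so no thread can hold-and-wait forever.
    static void transfer(Account from, Account to, int amount) throws InterruptedException {
        while (true) {
            if (from.lock.tryLock(50, TimeUnit.MILLISECONDS)) {
                try {
                    if (to.lock.tryLock(50, TimeUnit.MILLISECONDS)) {
                        try {
                            from.balance -= amount;
                            to.balance += amount;
                            return;
                        } finally {
                            to.lock.unlock();   // always release in finally{}
                        }
                    }
                } finally {
                    from.lock.unlock();         // always release in finally{}
                }
            }
            Thread.sleep(10);                   // back off before retrying
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(100), b = new Account(100);
        transfer(a, b, 40);
        System.out.println(a.balance + " " + b.balance); // 60 140
    }
}
```

Two threads calling transfer(a, b, …) and transfer(b, a, …) concurrently can no longer deadlock: whichever thread fails to get its second lock backs out completely and retries.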
layers such as: the deployment service, the EJB3 implementation (widely used), clustering and session management, and internal caches & data structures (LRU, ConcurrentReferenceHashMap…).

Now, as per Heinz's point, deadlock resolution strategy #2 can be quite efficient, but proper care is also required, such as releasing all held locks via a finally{} block; otherwise you can transform your deadlock scenario into a livelock.

Resource deadlocks

Now let's move to resource deadlock scenarios. I'm glad that Heinz's lab #3 covered this since, from my experience, this is by far the most common “deadlock” scenario that you will see, especially if you are developing and supporting large distributed Java EE production systems. Now let's get the facts right.

Resource deadlocks are not true Java-level deadlocks. The JVM thread dump will not magically show you these types of deadlocks. This means more work for you to analyze and understand this problem as a starting point.

Thread dump analysis can be especially confusing when you are just starting to learn how to read thread dumps, since the threads will often show up in RUNNING state, vs. BLOCKED state for Java-level deadlocks. For now, it is important to keep in mind that thread state is not that important for this type of problem, e.g. RUNNING state != healthy state.

The analysis approach is very different than for Java-level deadlocks. You must take multiple thread dump snapshots and identify thread problem/wait patterns between each snapshot. You will be able to see threads not moving, e.g. threads waiting to acquire a resource from a pool, and other threads that already acquired such a resource and are hanging…

Thread dump analysis is not the only important data point/fact here. You will need to collect other facts, such as statistics on the resource(s) the threads are waiting for, overall middleware or environment health etc.
The combination of all these facts will allow you to conclude on the root cause, along with a resolution strategy which may or may not involve a code change. I will get back to you with more thread dump problem patterns, but first please ensure that you are comfortable with the basic principles of JVM thread dumps as a starting point.

Conclusion

I hope you had the chance to review, run and enjoy the labs from Heinz's presentation as much as I did. Concurrency programming and troubleshooting can be quite challenging, but I still recommend that you spend some time trying to understand these principles, since I'm confident you will face a situation in the near future that will force you to perform this deep dive and acquire those skills.

Reference: Java deadlock troubleshooting and resolution from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog.

Design Patterns: Mogwai or Gremlins?

The 1994 book Design Patterns: Elements of Reusable Object-Oriented Software introduced many software developers to the concept of ‘a catalog of simple and succinct solutions to commonly occurring design problems’ that nearly every object-oriented software developer knows of today as ‘design patterns.’ Like most technical concepts (whether real or hype or somewhere in between), ‘design patterns’ seemed to go through the normal stages of acceptance, rising rapidly from new idea to the prevalent way of thinking. As is always the case, this rapid rise in popularity led to backlash as design patterns were overused, abused, and otherwise used inappropriately. Today, design patterns seem to have become accepted as a useful tool when used correctly, but are generally recognized as dangerous in the wrong hands. I have generally avoided devoting an entire blog post to a discussion of the good, the bad, and the ugly of the use of design patterns, but a fellow software developer recently made up an analogy based on his observations of the use and misuse of design patterns that motivated me to write this post. Andy pointed out that there seems to be a tendency among some software developers to take an innocent and well-intentioned design pattern and turn it evil, like turning a Mogwai like Gizmo into a Gremlin. In this post, I look in more detail at why this is a particularly fitting movie-themed analogy for turning effective use of design patterns into misuse and abuse of design patterns.

The 1984 movie Gremlins begins with an inventor and father in Chinatown purchasing a Mogwai. Mr. Wing, the owner of the store, does not want to sell the Mogwai to the inventor/father because ‘with Mogwai comes much responsibility.’ However, Mr. Wing's grandson sneakily sells the Mogwai to the inventor/father while warning him of three important things related to the care of the Mogwai. These three things are:

‘Keep him out of the light. He hates bright light, especially sunlight. It will kill him.’
‘Keep him away from water. Don't get him wet.’
‘But the most important rule, the one you can never forget, no matter how much he cries or how much he begs: never, never feed him after midnight.’

The appropriate use of design patterns is not affected by bright light, water, or eating after midnight, but the effects of not taking care when applying design patterns can be similar to the effects of not taking care of a Mogwai properly.

Rapidly Spawning Design Patterns

When Mogwai or Gremlins get wet, spontaneous reproduction of more Mogwai or Gremlins occurs. The effect can be very similar for developers with design patterns. It is easy for a developer new to design patterns (or a developer who is excited about a new design pattern that he or she has recently learned) to apply too many design patterns to the same problem. If design patterns are good, more of them must be better. It is similarly easy for developers to fall into the trap of applying the same favorite design pattern to too many different, diverse, and unrelated problems (Maslow's Hammer).

Design Patterns Turned Evil

The Mogwai turn into mischievous Gremlins if they eat after midnight. Similarly, design patterns can be more evil than good if used inappropriately. A misapplied design pattern can obfuscate the intention of the code. Several design patterns used together can obscure the intent as well. Design patterns that are meant to facilitate better design can and often do lead to worse design when not used carefully. What is a design pattern in one situation might be an anti-pattern in a different situation. One of the benefits of cataloging the common design principles as patterns is the ability to aid communication among developers and designers. However, design patterns can have the exact opposite effect (hindering understanding and communication) when misapplied or overused.
I have also seen the case where a developer insists he or she is using a particular design pattern when he or she is really using a totally different design pattern or even an anti-pattern. In such cases, use of ‘design pattern’ terminology also confuses rather than clarifies.

Well-Intentioned But Ill-Conceived

These problems most commonly arise when developers apply design patterns because they believe they should, rather than because they truly understand their value in a particular situation. Rather than applying the same design pattern to every problem or shoe-horning a design pattern into a situation in which it does not fit well, the developer needs to understand the advantages and objectives of different design patterns, along with the trade-offs associated with them, to make an informed decision about their application.

Effective Use of Design Patterns

It seems to me that the best use of design patterns occurs when a developer applies them naturally based on experience when a need is observed, rather than forcing their use. After all, when the Gang of Four compiled their book on design patterns, they were cataloging existing design patterns that developers had already been using. Indeed, some of the patterns covered in their book became so popular that they were incorporated into Java language syntax, and into other newer languages. For example, Java provided the interface (which aids many of the design patterns covered in the original design patterns book), and newer languages such as Scala and Groovy have added their own pattern implementations.

Use of Design Patterns: Gizmo or Gremlin?

When used properly, design patterns are attractive and desirable, just as Gizmo the Mogwai is a desirable pet. However, when used inappropriately or applied without appropriate care and consideration, design patterns can turn into Gremlins, wreaking havoc on one's code base and hindering the ability to understand and maintain one's design.
Note that the design patterns themselves are not necessarily the problem; rather, the people entrusted with the proper use of design patterns determine whether they remain desirable like Gizmo or become undesirable like the Gremlins.

Reference: Design Patterns: Mogwai or Gremlins? from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Method Parameter Names and Spring

Continuing on the previous blog entry about constructor and method parameters and Java not retaining the parameter names at runtime – the previous entry was about constructors not retaining the parameter names and the implication of this for constructor injection in Spring – here I will cover a few more scenarios where parameter names not being retained has implications with Spring:

1. Consider a Spring MVC controller method with a parameter to bind to a request parameter which is passed in:

@RequestMapping(value='/members/find')
public String getMembersByName(@RequestParam String name){
    ...
    return 'list';
}

Here the parameter ‘name’ has a @RequestParam annotation associated with it, which is an indication to Spring MVC to bind a request parameter ‘name’ to this method parameter. Since the parameter name is not retained at runtime, it is likely that an exception will be thrown by Spring:

Request processing failed; nested exception is java.lang.IllegalArgumentException: Name for argument type not available, and parameter name information not found in class file either.

The fix here is simple: either compile with the debug option on, which will retain the parameter names at runtime, OR – a better one – simply indicate what the expected request parameter name is as an argument to the @RequestParam annotation:

@RequestMapping(value='/members/find')
public String getMembersByName(@RequestParam('name') String name){
    return 'list';
}

2.
Along the same lines, consider another Spring MVC controller method, this time supporting URI template patterns:

@RequestMapping(value='/members/{id}', method=RequestMethod.GET)
public @ResponseBody Member get(@PathVariable Integer id){
    return this.memberDB.get(id);
}

Here the expectation is that if a request comes in with a URI of /members/20, then the id parameter will get bound with a value of 20. However, since the parameter name ‘id’ is not retained at runtime, the fix, like in the previous case, is either to compile with debug on, or to explicitly mention in the @PathVariable annotation what the expected pattern name is:

@RequestMapping(value='/members/{id}', method=RequestMethod.GET)
public @ResponseBody Member get(@PathVariable('id') Integer id){

3. A third example is with caching support in Spring via the @Cacheable annotation. Consider a sample method annotated with @Cacheable:

@Cacheable(value='default', key='#param1.concat('-').concat(#param2)')
public String cachedMethod(String param1, String param2){
    return '' + new Random().nextInt();
}

Here the key is a Spring-EL expression which instructs the key generator to generate the key by combining the argument of the first parameter, named param1, with the argument of the second parameter, named param2. However, the problem, like before, is that these names are not available at runtime. One of the fixes, as before, is to compile with debug symbols turned on.
A second fix is to use placeholders that stand in for the parameter index – a0 OR p0 for the first parameter, a1 OR p1 for the second parameter, and so on. This way the @Cacheable key will look like this:

@Cacheable(value='default', key='#p0.concat('-').concat(#p1)')
public String cachedMethod(String param1, String param2){
    return '' + new Random().nextInt();
}

So in conclusion, a safe way to use Spring features that depend on method parameter names is to compile with debug on (the -g or -g:vars option of javac) or to explicitly pass in meta information that indicates what the parameter names are at runtime.

Reference: Method Parameter Names and Spring from our JCG partner Biju Kunjummen at the all and sundry blog.
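The underlying issue can be observed with plain reflection, no Spring required. On newer JDKs (Java 8+), the java.lang.reflect.Parameter API reports whether real parameter names made it into the class file. Note that this particular API relies on the -parameters compiler flag, while Spring can also read the local variable debug information produced by -g; the class below is a hypothetical illustration, not code from the article:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Parameter;

public class ParamNames {

    // Mirrors the controller method from the examples above.
    public String getMembersByName(String name) { return "list"; }

    public static void main(String[] args) throws Exception {
        Method m = ParamNames.class.getMethod("getMembersByName", String.class);
        for (Parameter p : m.getParameters()) {
            // Without -parameters (or -g debug info for Spring), the real
            // name is lost and reflection reports a synthetic "arg0".
            System.out.println(p.getName() + " (real name present: " + p.isNamePresent() + ")");
        }
    }
}
```

Compiled without the extra flags, this prints a synthetic name like arg0 with "real name present: false", which is exactly why Spring has to give up and throw the IllegalArgumentException shown earlier.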

Packing your Java application as one (or fat) JAR

This post will target an interesting but quite powerful concept: packing your application as a single, runnable JAR file, also known as a one-JAR or fat JAR. We are used to large WAR archives which contain all dependencies packed together under some common folder structure. With JAR-like packaging the story is a bit different: in order to make your application runnable (via java -jar), all dependencies should be provided over the classpath parameter or environment variable. Usually it means there would be some lib folder with all the dependencies, and some runnable script which does the job of constructing the classpath and running the JVM. The Maven Assembly plugin is well known for producing this kind of application distribution. A slightly different approach is to package all your application dependencies into the same JAR file and make it runnable without any additional parameters or scripting required. Sounds great, but it won't work unless you add some magic: meet the One-JAR project. Let's briefly outline the problem: we are writing a stand-alone Spring application which should be runnable just by typing java -jar <our-app.jar>.
As always, let's start with our POM file, which will be pretty simple:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>com.example</groupId>
  <artifactId>spring-one-jar</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>spring-one-jar</name>
  <url>http://maven.apache.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <org.springframework.version>3.1.1.RELEASE</org.springframework.version>
  </properties>

  <dependencies>
    <dependency>
      <groupId>cglib</groupId>
      <artifactId>cglib-nodep</artifactId>
      <version>2.2</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-core</artifactId>
      <version>${org.springframework.version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>${org.springframework.version}</version>
    </dependency>
  </dependencies>
</project>

Our sample application will bootstrap a Spring context, get a bean instance and call a method on it. Our bean is called SimpleBean and looks like:

package com.example;

public class SimpleBean {
    public void print() {
        System.out.println( 'Called from single JAR!' );
    }
}

Falling in love with Spring Java configuration, let us define our context as an annotated AppConfig POJO:

package com.example.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.example.SimpleBean;

@Configuration
public class AppConfig {
    @Bean
    public SimpleBean simpleBean() {
        return new SimpleBean();
    }
}

And finally, our application Starter with main():

package com.example;

import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

import com.example.config.AppConfig;

public class Starter {
    public static void main( final String[] args ) {
        ApplicationContext context = new AnnotationConfigApplicationContext( AppConfig.class );
        SimpleBean bean = context.getBean( SimpleBean.class );
        bean.print();
    }
}

Adding our main class to META-INF/MANIFEST.MF allows us to leverage Java's capability to run a JAR file without explicitly specifying the class with the main() method. The Maven JAR plugin can help us with that:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <configuration>
        <archive>
          <manifest>
            <mainClass>com.example.Starter</mainClass>
          </manifest>
        </archive>
      </configuration>
    </plugin>
  </plugins>
</build>

Trying to run java -jar spring-one-jar-0.0.1-SNAPSHOT.jar will print an exception to the console: java.lang.NoClassDefFoundError. The reason is pretty straightforward: even a simple application such as this one already requires the following libraries to be on the classpath:

aopalliance-1.0.jar
cglib-nodep-2.2.jar
commons-logging-1.1.1.jar
spring-aop-3.1.1.RELEASE.jar
spring-asm-3.1.1.RELEASE.jar
spring-beans-3.1.1.RELEASE.jar
spring-context-3.1.1.RELEASE.jar
spring-core-3.1.1.RELEASE.jar
spring-expression-3.1.1.RELEASE.jar

Let's see what One-JAR can do for us here. Thanks to the availability of the onejar-maven-plugin, we can add one to the plugins section of our POM file.
<plugin>
  <groupId>org.dstovall</groupId>
  <artifactId>onejar-maven-plugin</artifactId>
  <version>1.4.4</version>
  <executions>
    <execution>
      <configuration>
        <onejarVersion>0.97</onejarVersion>
        <classifier>onejar</classifier>
      </configuration>
      <goals>
        <goal>one-jar</goal>
      </goals>
    </execution>
  </executions>
</plugin>

Also, the pluginRepositories section should contain this repository in order to download the plugin:

<pluginRepositories>
  <pluginRepository>
    <id>onejar-maven-plugin.googlecode.com</id>
    <url>http://onejar-maven-plugin.googlecode.com/svn/mavenrepo</url>
  </pluginRepository>
</pluginRepositories>

As a result, there will be another artifact available in the target folder, postfixed with one-jar: spring-one-jar-0.0.1-SNAPSHOT.one-jar.jar. Running this one with java -jar spring-one-jar-0.0.1-SNAPSHOT.one-jar.jar will print to the console:

Called from single JAR!

A fully runnable Java application as a single, redistributable JAR file! One last comment: though our application looks pretty simple, One-JAR works perfectly for complex, large applications as well, without any issues. Please add it to your toolbox, it's a really useful tool to have. Thanks to the One-JAR guys!

Reference: Simple but powerful concept: packing your Java application as one (or fat) JAR from our JCG partner Andrey Redko at the Andriy Redko {devmind} blog.

Bloom Filter Implementation in Java on GitHub

A Bloom Filter is a type of set data structure. For those unaware, a set data structure only has one main method, contains. It's used only to determine if a specific element is included in a group of elements or not. Most data structures (like a hash map, linked list, or array) can provide this function fairly easily: you simply search the data structure for the specific element. However, these types of data structures can pose a problem when the number of elements in the set exceeds the amount of memory available, as they store all of the elements in memory.

This is where the Bloom Filter becomes interesting, as it doesn't actually store the elements of the set at all. Instead of placing each element into the data structure, the Bloom Filter only stores an array of bytes. For each element added to the Bloom Filter, k bits are set in its array. These bits are typically determined by a hashing function. To check if an element is within the set, you simply check if the bits that would be set to one for this item are actually one. If they are all one (instead of zero), then the item is considered to be within the set. If any of the bits are not one, then the item is definitely not within the set.

As with every data structure, there is a drawback to the Bloom Filter. Using the method above, the Bloom Filter can say an element is within the set when it actually isn't. False positives are possible in the set, and their probability depends on several factors, such as:

the size of the byte array
the number of bits (k) set per element
the number of items in the set

By tweaking the above values, you can easily get the false positive probability to respectable levels while still saving a large amount of space. After I discovered the Bloom Filter, I went looking for an implementation in Java. Sadly, a standard implementation doesn't exist! So, I wrote a quick and simple version of the Bloom Filter for Java. You can find the source code on GitHub.
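To illustrate the mechanics described above, here is a minimal, self-contained sketch of a Bloom Filter backed by java.util.BitSet. It derives the k bit positions from hashCode() rather than MD5 (which my GitHub implementation uses), so treat it as an illustration of the idea, not the real thing:

```java
import java.util.BitSet;

public class SimpleBloomFilter {

    private final BitSet bits;
    private final int size;   // m: number of bits in the array
    private final int hashes; // k: bits set per element

    public SimpleBloomFilter(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive the i-th bit position for an object from its hashCode();
    // a production filter would use a stronger hash such as MD5.
    private int position(Object o, int i) {
        int h = o.hashCode() * 31 + i * 0x9E3779B9;
        return Math.floorMod(h, size);
    }

    public void add(Object o) {
        for (int i = 0; i < hashes; i++) {
            bits.set(position(o, i));
        }
    }

    // May return a false positive, but never a false negative.
    public boolean mightContain(Object o) {
        for (int i = 0; i < hashes; i++) {
            if (!bits.get(position(o, i))) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        SimpleBloomFilter filter = new SimpleBloomFilter(1024, 5);
        filter.add("alice");
        filter.add("bob");
        System.out.println(filter.mightContain("alice")); // true
        System.out.println(filter.mightContain("bob"));   // true
    }
}
```

Note that membership checks for added elements are always true; it is only the negative case that can occasionally flip to a false positive as the bit array fills up.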
My implementation uses:

MD5 hashing: to add an Object, the set takes the value of the hashCode() method to compute the MD5 hash. For subsequent values of k, the filter uses the previously computed MD5 hash (converted to an int) to generate the new MD5 hash.
Backed by a simple byte array.
Implements the Set<Object> interface, although some of the methods in the interface will not work properly.

Note that the project also uses the SizeOf library to get the number of bytes used in memory. I also did a few quick experiments to compare the filter to a standard ArrayList in Java, plus a few performance checks:

the time required to add an element to the set using different k values
the size of the set versus the ArrayList at different levels

As is to be expected, the larger the number of elements required to be in the set, the more useful the Bloom Filter becomes. It does get a bit tricky when determining how large the Bloom Filter should be and what the optimal k value is for a given set, especially if the set is continually growing. For the tests, I simply added Objects (which have a size of 16 bytes) to each data structure, and I then used the SizeOf library to get the true amount of space used. The size comparison makes it easy to see that the Bloom Filter is much more efficient on size once the list grows beyond 100 objects. That trend continues at 1500 objects, with the Bloom Filter requiring 22808 bytes less than the ArrayList to store the same number of elements. The timing measurements (on an early 2012 iMac) show the time in seconds to add an element with different numbers of bits set (k). As k increases, the time increases fairly slowly up to 10 bits. However, anything past 10 becomes very costly, with 100 bits set requiring a full second to complete. Feel free to check out the source code for the tests and the Bloom Filter implementation itself on GitHub.
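As a rule of thumb for that sizing question (a standard Bloom filter result, not something from my experiments), the false positive probability after inserting n elements into a filter of m bits with k hash functions is approximately:

```latex
p \approx \left(1 - e^{-kn/m}\right)^k, \qquad k_{\text{opt}} \approx \frac{m}{n}\ln 2
```

The second expression gives the k that minimizes p for a fixed m and n, which matches the observation above that pushing k ever higher eventually stops paying off.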
Reference: Bloom Filter Implementation in Java on GitHub from our JCG partner Isaac Taylor at the Programming Mobile blog.

Debugging SQL query in MySQL

Recently I started writing SQL queries for analyzing and debugging production code. I was surprised to see that some queries take a much longer time to execute to achieve the same output. I did some research and found some interesting things about how to debug a SQL query. I have a very simple table whose definition is as follows. In the test environment, this table was populated with more than 1000K rows.

+-----------------------+--------------+------+-----+----------------+
| Field                 | Type         | Null | Key | Extra          |
+-----------------------+--------------+------+-----+----------------+
| id                    | bigint(20)   | NO   | PRI | auto_increment |
| dateCreated           | datetime     | NO   |     |                |
| dateModified          | datetime     | NO   |     |                |
| phoneNumber           | varchar(255) | YES  | MUL |                |
| version               | bigint(20)   | NO   |     |                |
| oldPhoneNumber        | varchar(255) | YES  |     |                |
+-----------------------+--------------+------+-----+----------------+

I executed a very simple query to find the tuple which contains 5107357058 as phoneNumber. It took almost 4 seconds to fetch the result:

select * from Device where phoneNumber = 5107357058; -- takes 4 sec

This simple query should have taken a few milliseconds. I noticed that the phoneNumber datatype is varchar, but in the query it is provided as a number. When I modified the query to match the datatype, it took a few milliseconds:

select * from Device where phoneNumber = '5107357058'; -- takes almost no time

After googling and reading posts on Stack Overflow, I found the EXPLAIN SQL clause, which helps in debugging the query. The EXPLAIN statement provides information about the execution plan of a SELECT statement. When I used it to get information about the two queries, I got the following results.
mysql> EXPLAIN select * from Device where phoneNumber = 5107357058;
+----+-------------+--------+------+------------------------------------+------+---------+------+---------+-------------+
| id | select_type | table  | type | possible_keys                      | key  | key_len | ref  | rows    | Extra       |
+----+-------------+--------+------+------------------------------------+------+---------+------+---------+-------------+
|  1 | SIMPLE      | Device | ALL  | phoneNumber,idx_Device_phoneNumber | NULL | NULL    | NULL | 6482116 | Using where |
+----+-------------+--------+------+------------------------------------+------+---------+------+---------+-------------+
1 row in set (0.00 sec)

mysql> EXPLAIN select * from Device where phoneNumber = '5107357058';
+----+-------------+--------+------+------------------------------------+-------------+---------+-------+------+-------------+
| id | select_type | table  | type | possible_keys                      | key         | key_len | ref   | rows | Extra       |
+----+-------------+--------+------+------------------------------------+-------------+---------+-------+------+-------------+
|  1 | SIMPLE      | Device | ref  | phoneNumber,idx_Device_phoneNumber | phoneNumber | 258     | const | 2    | Using where |
+----+-------------+--------+------+------------------------------------+-------------+---------+-------+------+-------------+
1 row in set (0.00 sec)

EXPLAIN gives you different query attributes. While analysing a query you should take care of the following attributes:

possible_keys : shows the indexes that apply to the query.
key : the key used to find the records. A NULL value shows that no key was used for the query and SQL searched linearly, which eventually takes a long time.
rows : a SQL query that scans fewer rows is more efficient. One should always try to improve the query and avoid generic query clauses. The difference in query performance is much more evident when executed on a large number of records.
type : is "The join type".
"ref" means all rows with matching index values are read from the table; "ALL" means a full table scan.

The two outputs of EXPLAIN clearly indicate the subtle difference. The latter query uses a string, which is the right datatype; it results in phoneNumber as the key and checks only two rows. The former uses an integer in the query, which is a different datatype, hence SQL converts the integer value to a string value and compares it with each record present in the table. This results in NULL as the key and 6482116 as the rows output. You can also see that the latter query's type value is ref and the former query's type value is ALL, which clearly indicates that the former is a bad query.

Reference: Debugging SQL query from our JCG partner Rakesh Cusat at the Code4Reference blog.

Your first message – discovering Akka

Akka is a platform (framework?) inspired by Erlang, promising easier development of scalable, multi-threaded and safe applications. While in most popular languages concurrency is based on memory shared between several threads, guarded by various synchronization methods, Akka offers a concurrency model based on actors. An actor is a lightweight object which you interact with simply by sending messages to it. Each actor can process at most one message at a time and can obviously send messages to other actors. Within one Java virtual machine, millions of actors can exist at the same time, building a hierarchical parent (supervisor) – children structure, where the parent monitors the behaviour of its children. If that's not enough, we can easily split our actors between several nodes in a cluster – without modifying a single line of code. Each actor can have internal state (a set of fields/variables), but communication can only occur through message passing, never through shared data structures (counters, queues). The combination of the features above leads to much safer, more stable and scalable code – for the price of a radical paradigm shift in the concurrent programming model.

So many buzzwords and promises; let's go forward with an example. And it's not going to be a "Hello, world" example – we are going to try to build a small but complete solution. In the next few articles we will implement integration with the random.org API. This web service allows us to fetch truly random numbers (as opposed to pseudo random generators), based on atmospheric noise (whatever that means). The API isn't really that complicated; please visit the following website and refresh it a couple of times: https://www.random.org/integers/?num=20&min=1000&max=10000&col=1&base=10&format=plain So where is the difficulty?
Reading the guidelines for automated clients we learn that:
- The client application should call the URL above from at most one thread – it's forbidden to concurrently fetch random numbers using several HTTP connections.
- We should load random numbers in batches, not one by one in every request. The request above takes num=20 numbers in one call.
- We are warned about latency; a response may arrive even after one minute.
- The client should periodically check the random number quota (the service is free only up to a given number of random bits per day).
All these requirements make integration with random.org non-trivial. In this series, which I have just begun, we will gradually improve our application, learning new Akka features step by step. We will soon realize that the quite steep learning curve pays off quickly once we understand the basic concepts of the platform. So, let's code! Today we will try to handle the first two requirements, that is no more than one connection at any given point in time and loading numbers in batches. For this step we don't really need Akka; simple synchronization and a buffer are just about enough:

import java.net.URL
import scala.collection.mutable.Queue
import scala.io.Source

val buffer = new Queue[Int]

def nextRandom(): Int = {
  this.synchronized {
    if (buffer.isEmpty) {
      buffer ++= fetchRandomNumbers(50)
    }
    buffer.dequeue()
  }
}

def fetchRandomNumbers(count: Int) = {
  val url = new URL("https://www.random.org/integers/?num=" + count + "&min=0&max=65535&col=1&base=10&format=plain&rnd=new")
  val connection = url.openConnection()
  val stream = Source.fromInputStream(connection.getInputStream)
  val randomNumbers = stream.getLines().map(_.toInt).toList
  stream.close()
  randomNumbers
}

This code works; this.synchronized { ... } is the Scala equivalent of the synchronized keyword in Java. The way nextRandom() works should be obvious: if the buffer is empty, fill it with 50 random numbers fetched from the server, then take and return the first value from the buffer. This code has several disadvantages, starting with the synchronized block in the first place.
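For readers more comfortable with Java, a rough analogue of the synchronized buffer above could look as follows. This is only a sketch of the locking pattern: fetchRandomNumbers is stubbed with sequential values rather than a real HTTP call, and the class name RandomBuffer is invented for illustration.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Hypothetical Java analogue of the synchronized Scala buffer above.
class RandomBuffer {
    private final Queue<Integer> buffer = new ArrayDeque<>();

    // Every caller funnels through one lock, mirroring this.synchronized in Scala.
    public synchronized int nextRandom() {
        if (buffer.isEmpty()) {
            buffer.addAll(fetchRandomNumbers(50)); // refill in one batch
        }
        return buffer.poll();
    }

    // Stub: the real implementation would perform a single HTTP call to random.org.
    private List<Integer> fetchRandomNumbers(int count) {
        List<Integer> numbers = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            numbers.add(i); // stand-in values, not truly random
        }
        return numbers;
    }
}
```

Just like the Scala version, the coarse lock guarantees a single connection per JVM, but every single caller pays for that lock.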
Rather costly synchronization for each and every call seems like overkill. And we aren't even in a cluster yet, where we would have to maintain one active connection per whole cluster, not only within one JVM! We shall begin by implementing one actor. An actor is basically a class extending the Actor trait and implementing the receive method. This method is responsible for receiving and handling one message. Let's reiterate what we already said: each and every actor can handle at most one message at a time, thus the receive method is never called concurrently. If the actor is already handling some message, the remaining messages are kept in a queue dedicated to each actor. Thanks to this rigorous rule, we can avoid any synchronization inside the actor, which is always thread-safe.

case object RandomRequest

class RandomOrgBuffer extends Actor {

  val buffer = new Queue[Int]

  def receive = {
    case RandomRequest =>
      if (buffer.isEmpty) {
        buffer ++= fetchRandomNumbers(50)
      }
      println(buffer.dequeue())
  }
}

The fetchRandomNumbers() method remains the same. Single-threaded access to random.org was achieved for free, since an actor can only handle one message at a time. Speaking of messages, in this case RandomRequest is our message – an empty object not conveying any information except its type. In Akka messages are almost always implemented using case classes or other immutable types. Thus, if we would like to support fetching an arbitrary number of random numbers, we would have to include that as part of the message:

case class RandomRequest(howMany: Int)

class RandomOrgBuffer extends Actor with ActorLogging {

  val buffer = new Queue[Int]

  def receive = {
    case RandomRequest(howMany) =>
      if (buffer.isEmpty) {
        buffer ++= fetchRandomNumbers(50)
      }
      for (_ <- 1 to (howMany min 50)) {
        println(buffer.dequeue())
      }
  }
}

Now we should try to send some message to our brand new actor. Obviously we cannot just call the receive method passing the message as an argument.
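The one-message-at-a-time guarantee can be approximated in plain Java with a single-threaded executor standing in for the actor's mailbox. This is only an illustrative sketch (MiniActor is a made-up name, not an Akka API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// A single-threaded executor plays the role of the actor's dedicated mailbox:
// submitted "messages" are processed strictly one at a time, in arrival order.
class MiniActor {
    private final ExecutorService mailbox = Executors.newSingleThreadExecutor();
    final List<Integer> processed = new ArrayList<>(); // touched by the mailbox thread only

    // Non-blocking send: enqueue the message and return immediately.
    void tell(int message) {
        mailbox.submit(() -> processed.add(message)); // the "receive" body
    }

    // Drain the mailbox and stop; for demonstration purposes only.
    void shutdownAndWait() {
        mailbox.shutdown();
        try {
            mailbox.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Because only the mailbox thread ever runs the receive body, processed needs no synchronization – the same reasoning that makes an Akka actor thread-safe.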
First we have to start the Akka platform and ask for an actor reference. This reference is later used to send a message using the (at first slightly counter-intuitive) ! method, dating back to Erlang days:

object Bootstrap extends App {
  val system = ActorSystem("RandomOrgSystem")
  val randomOrgBuffer = system.actorOf(Props[RandomOrgBuffer], "buffer")

  randomOrgBuffer ! RandomRequest(10) //sending a message

  system.shutdown()
}

After running the program we should see 10 random numbers on the console. Experiment a little bit with this simple application (full source code is available on GitHub, request-response tag). In particular notice that sending a message is non-blocking and the message itself is handled in a different thread (a big analogy to JMS). Try sending a message of a different type and fix the receive method so that it can handle more than one type. Our application is not very useful for now. We would like to access our random numbers somehow, rather than printing them (asynchronously!) to standard output. As you can probably guess, communication with an actor can only happen via asynchronous message passing (an actor cannot "return" a result, nor should it place it in any global, shared memory). Thus the actor will send the results back via a reply message sent directly to us (to the sender). But that will be part of the next article. This was a translation of my article "Poznajemy Akka: pierwszy komunikat" originally published on scala.net.pl.   Reference: Your first message – discovering Akka from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog. ...
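To make that reply idea concrete, here is a hedged plain-Java sketch: instead of returning a value, an actor-like worker completes a future, which plays the role of the reply message. ReplyingWorker and its doubling computation are invented for illustration; Akka's actual ask pattern works differently.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// The worker never "returns" a result from its receive logic; it completes a
// future, which stands in for the reply message sent back to the sender.
class ReplyingWorker {
    private final ExecutorService mailbox = Executors.newSingleThreadExecutor();

    CompletableFuture<Integer> ask(int howMany) {
        CompletableFuture<Integer> reply = new CompletableFuture<>();
        mailbox.submit(() -> reply.complete(howMany * 2)); // stand-in computation
        return reply;
    }

    void shutdown() {
        mailbox.shutdown();
    }
}
```

The caller stays non-blocked until it explicitly waits on the future, mirroring how a sender keeps working until the reply message arrives.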

Building Both Security and Quality In

One of the important things in a Security Development Lifecycle (SDL) is to feed back information about vulnerabilities to developers. This post relates that practice to the Agile practice of No Bugs.
The Security Incident Response
Even though we work hard to ship our software without security vulnerabilities, we never succeed 100%. When an incident is reported (hopefully responsibly), we execute our security response plan. We must be careful to fix the issue without introducing new problems. Next, we should also look for issues similar to the one reported. It's not unlikely that there are similar issues in other parts of the application. We should find and fix those as part of the same security update. Finally, we should do a root cause analysis to determine why this weakness slipped through the cracks in the first place. Armed with that knowledge, we can adapt our process to make sure that similar issues will not occur in the future.
From Security To Quality
The process outlined above works well for making our software ever more secure. But security weaknesses are essentially just bugs. Security issues may have more severe consequences than regular bugs, but most regular bugs are also expensive to fix once the software is deployed. So it actually makes sense to treat all bugs, security or otherwise, the same way. As the saying goes, an ounce of prevention is worth a pound of cure. Just as we need to build security in, we also need to build quality in.
Building Quality In Using Agile Methods
This has been known in the Agile and Lean communities for a long time. For instance, James Shore wrote about it in his excellent book The Art Of Agile Development, and Elisabeth Hendrickson thinks that there should be so few bugs that they don't need triaging. Some people object to the Zero Defects mentality, claiming that it's unrealistic. There is, however, clear evidence of much lower defect rates for Agile development teams.
Many Lean implementations also report successes in their quest for Zero Defects. So there is at least anecdotal evidence that a very significant reduction of defects is possible. This will require change, of course. Testers need to change and so do developers. And then everybody on the team needs to speak the same language and work together as a single team instead of in silos. If we do this well, we’ll become bug exterminators that delight our customers with software that actually works.   Reference: Building Both Security and Quality In from our JCG partner Remon Sinnema at the Secure Software Development blog. ...

Investigating Deadlocks – Part 2

One of the most important requirements when investigating deadlocks is actually having a deadlock to investigate. In my last blog I wrote some code called DeadlockDemo that used a bunch of threads to transfer random amounts between a list of bank accounts before grinding to a halt in a deadlock. This blog runs that code to demonstrate a few ways of obtaining a thread dump. A thread dump is simply a report showing the status of all your application's threads at a given point in time. The good thing about it is that it contains various bits of information that will allow you to figure out why you have a deadlock and, hopefully, to fix it, but more on that later.
kill SIGQUIT
If your Java application is running on a UNIX machine the first, and possibly easiest, way to grab hold of a thread dump is to use the UNIX kill command via a terminal. To do this, first get hold of your application's process identifier or PID using the ps and grep commands. For example if you type:

ps -e | grep java

…then you'll produce a list that looks something like this:

74941 ttys000 0:00.01 grep java
70201 ttys004 1:00.89 /usr/bin/java threads.deadlock.DeadlockDemo

The PID for DeadlockDemo is, in this case, 70201 and is taken from the output above. Note that different flavours of UNIX or different ps command line args can produce slightly different results, so check your man pages. Having got hold of your PID, use it to issue a kill SIGQUIT command:

kill -3 70201

The kill command is the UNIX command that disposes of unwanted processes. Although the -3 above is the SIGQUIT (equivalent to a keyboard ctrl-\) argument, if Java receives this signal it will not quit; it will display a thread dump on its associated terminal. You can then grab hold of this and copy it into a text file for further analysis.
jstack
If you're working in Windows then the UNIX command line isn't available. To counter this problem Java comes with a utility that performs the equivalent of kill.
This is called jstack and is available on both UNIX and Windows. It is used in the same way as the kill command demonstrated above:

jstack <PID>

Getting hold of a PID in Windows is a matter of opening the Windows Task Manager. Task Manager doesn't display PIDs by default, so you need to update its setup by using the View menu option and checking the PID (Process Identifier) option in the Select Columns dialogue box. Next, it's just a matter of examining the process list and finding the appropriate instance of java.exe. Read java.exe's PID and use it as a jstack argument as shown below:

jstack 3492

Once the command has completed you can grab hold of the output and copy it into a text file for further analysis.
jVisualVM
jVisualVM is the 'Rolls Royce' way of obtaining a thread dump. It's provided by Oracle as a tool that allows you to get hold of lots of different info about a Java VM. This includes heap dumps, CPU usage, memory profiling and much more. jVisualVM's actual program name is jvisualvm, or jvisualvm.exe on Windows. Once running you'll see something like this: To obtain a thread dump, find your application in the left-hand applications panel, then right click and select "Thread Dump". A thread dump is then displayed in jvisualvm's right-hand pane as shown below: Note that I have seen jvisualvm hang on several occasions when connecting to a local VM. When this happens, ensure that its proxy settings are set to No Proxy. Having obtained a thread dump, my next blog will use it to investigate what's going wrong with the example DeadlockDemo code. For more information see the other blogs in this series.   Reference: Investigating Deadlocks – Part 2: Obtaining the Thread Dump from our JCG partner Roger Hughes at the Captain Debug's Blog blog. ...
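As a complement to the external tools above, a JVM can also inspect its own threads programmatically via the java.lang.management API, including the same deadlock detection that appears at the bottom of a thread dump. A minimal sketch (ThreadDumpDemo is just an illustrative name):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

class ThreadDumpDemo {
    // Builds a thread-dump-like report: name, state and stack trace per thread.
    static String dump() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        StringBuilder report = new StringBuilder();
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            report.append(info.toString());
        }
        return report.toString();
    }

    // Mirrors the deadlock section of a thread dump; returns null when
    // no threads are currently deadlocked.
    static long[] findDeadlocks() {
        return ManagementFactory.getThreadMXBean().findDeadlockedThreads();
    }
}
```

This is handy for monitoring code that should raise an alarm as soon as findDeadlockedThreads() stops returning null.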

Spring MVC Form Validation (With Annotations)

This post provides a simple example of HTML form validation. It is based on the Spring MVC With Annotations example. The code is available on GitHub in the Spring-MVC-Form-Validation directory. Data For this example we will use a bean and JSR303 validation annotations:

public class MyUser {

 @NotNull @Size(min=1, max=20)
 private String name;

 @Min(0) @Max(120)
 private int age;

 public MyUser(String name, int age) {
  this.name = name;
  this.age = age;
 }

 public MyUser() {
  name = "";
  age = 0;
 }

 // Setters & Getters
}

Pages Our form will contain input elements, but also the possibility to display error messages: <%@page contentType='text/html' pageEncoding='UTF-8'%> <%@ taglib prefix='form' uri='http://www.springframework.org/tags/form' %> <!doctype html> <html> <head> <meta http-equiv='Content-Type' content='text/html; charset=UTF-8'> <title>My User Form!</title> </head> <body> <form:form method='post' action='myForm' commandName='myUser'> <table> <tr> <td>Name: <font color='red'><form:errors path='name' /></font></td> </tr> <tr> <td><form:input path='name' /></td> </tr> <tr> <td>Age: <font color='red'><form:errors path='age' /></font></td> </tr> <tr> <td><form:input path='age' /></td> </tr> <tr> <td><input type='submit' value='Submit' /></td> </tr> </table> </form:form> </body> </html> Our success page is: <%@page contentType='text/html' pageEncoding='UTF-8'%> <%@taglib prefix='form' uri='http://www.springframework.org/tags/form'%> <%@ taglib prefix='c' uri='http://java.sun.com/jsp/jstl/core' %> <!doctype html> <html> <head> <meta http-equiv='Content-Type' content='text/html; charset=UTF-8'> <title>Form Processed Successfully!</title> </head> <body> Form processed for <c:out value='${myUser.name}' /> ! 
<br /> <a href='<c:url value='/'/>'>Home</a> </body> </html> Our home page: <%@page contentType='text/html' pageEncoding='UTF-8'%> <%@ taglib prefix='c' uri='http://java.sun.com/jsp/jstl/core' %> <!doctype html> <html lang='en'> <head> <meta charset='utf-8'> <title>Welcome !!!</title> </head> <body> <h1> Spring Form Validation !!! </h1> <a href='<c:url value='/myForm'/>'>Go to the form!</a> </body> </html>

Controller Notice that we need to use @ModelAttribute to make sure an instance of MyUser is always available in the model. In validateForm(), we also need @ModelAttribute to move the content of the form into the MyUser object:

@Controller
public class MyController {

 @RequestMapping(value = "/")
 public String home() {
  return "index";
 }

 @ModelAttribute("myUser")
 public MyUser getLoginForm() {
  return new MyUser();
 }

 @RequestMapping(value = "/myForm", method = RequestMethod.GET)
 public String showForm(Map model) {
  return "myForm";
 }

 @RequestMapping(value = "/myForm", method = RequestMethod.POST)
 public String validateForm(
   @ModelAttribute("myUser") @Valid MyUser myUser,
   BindingResult result, Map model) {

  if (result.hasErrors()) {
   return "myForm";
  }

  model.put("myUser", myUser);

  return "success";
 }
}

Maven Dependencies We need the following dependencies. The Hibernate Validator dependency is necessary to process JSR303 annotations: <dependency> <groupId>javax.validation</groupId> <artifactId>validation-api</artifactId> <version>1.0.0.GA</version> <type>jar</type> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-validator</artifactId> <version>4.3.0.Final</version> </dependency> Running The Example Once compiled, the example can be run with mvn tomcat:run. Then browse: http://localhost:8383/spring-mvc-form-validation/. If the end user enters invalid values, error messages will be displayed:  Reference: Spring MVC Form Validation (With Annotations) from our JCG partner Jerome Versrynge at the Technical Notes blog. ...
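To make the constraints concrete, here is a hedged hand-rolled check mirroring what @NotNull/@Size and @Min/@Max enforce on MyUser. The validate helper and the MyUserChecks class are invented for illustration; in the real application Hibernate Validator performs these checks automatically:

```java
import java.util.ArrayList;
import java.util.List;

class MyUserChecks {
    // Hypothetical equivalent of the JSR303 constraints declared on MyUser.
    static List<String> validate(String name, int age) {
        List<String> errors = new ArrayList<>();
        // @NotNull @Size(min=1, max=20)
        if (name == null || name.isEmpty() || name.length() > 20) {
            errors.add("name must be between 1 and 20 characters");
        }
        // @Min(0) @Max(120)
        if (age < 0 || age > 120) {
            errors.add("age must be between 0 and 120");
        }
        return errors;
    }
}
```

An empty list corresponds to result.hasErrors() returning false in the controller; each entry corresponds to a message rendered by a form:errors tag.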
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.