VisualVM: Monitoring Remote JVM Over SSH (JMX Or Not)

VisualVM is a great tool for monitoring a JVM (5.0+): memory usage, threads, GC, MBeans etc. Let's see how to use it over SSH to monitor (or even profile, using its sampler) a remote JVM, either with JMX or without it. This post is based on Sun JVM 1.6 running on Ubuntu 10 and VisualVM 1.3.3.

1. Communication: jstatd vs. JMX

There are two modes of communication between VisualVM and the JVM: over the Java Management Extensions (JMX) protocol, or over jstatd.

jstatd

jstatd is a daemon distributed with the JDK. You start it from the command line on the target machine (it's likely necessary to run it as the user running the target JVM, or as root) and VisualVM will contact it to fetch information about the remote JVMs.

Advantages: Can connect to a running JVM; no need to start it with special parameters.
Disadvantages: Much more limited monitoring capabilities (e.g. no CPU usage monitoring; not possible to run the Sampler or take thread dumps).

Example:

bash> cat jstatd.all.policy
grant codebase "file:${java.home}/../lib/tools.jar" {
   permission java.security.AllPermission;
};
bash> sudo /path/to/JDK/bin/jstatd -J-Djava.security.policy=jstatd.all.policy
# You can specify the port with -p <number> and get more info with -J-Djava.rmi.server.logCalls=true

Note: Replace "${java.home}/../lib/tools.jar" with the absolute "/path/to/jdk/lib/tools.jar" if you have only copied but not installed the JDK. If you get the failure

Could not create remote object
access denied (java.util.PropertyPermission java.rmi.server.ignoreSubClasses write)
java.security.AccessControlException: access denied (java.util.PropertyPermission java.rmi.server.ignoreSubClasses write)
        at java.security.AccessControlContext.checkPermission(AccessControlContext.java:374)

then jstatd likely hasn't been started with the right java.security.policy file (try providing the fully qualified path to it).
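The monitoring views VisualVM offers (heap usage, thread counts, loaded classes) ultimately read the JVM's standard platform MXBeans. As a rough in-process sketch of the same data (the class name is mine, not from the article):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

public class JmxMetricsSketch {
    // Live thread count of this JVM, the same figure VisualVM's
    // Monitor tab shows for a remote JVM over a JMX connection.
    public static int liveThreads() {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        return threads.getThreadCount();
    }

    // Used heap in bytes, as plotted in VisualVM's heap graph.
    public static long usedHeapBytes() {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        return memory.getHeapMemoryUsage().getUsed();
    }

    public static void main(String[] args) {
        System.out.println("live threads: " + liveThreads());
        System.out.println("used heap: " + usedHeapBytes() + " bytes");
    }
}
```

Over JMX, VisualVM obtains the same MXBeans remotely, which is why a JMX connection unlocks far more detail than jstatd.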
More info about VisualVM and jstatd is available from Oracle.

JMX

Advantages: Using JMX gives you the full power of VisualVM.
Disadvantages: You need to start the JVM with some system properties.

You will generally want something like the following properties when starting the target JVM (though you could also enable SSL and/or require a username and password):

yourJavaCommand... -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=1098

See Remote JMX Connections.

2. Security: SSH

The easiest way to connect to remote JMX or jstatd over SSH is to use a SOCKS proxy, which standard ssh clients can set up.

2.1 Set Up the SSH Tunnel With SOCKS

ssh -v -D 9696 my_server.example.com

2.2 Configure VisualVM to Use the Proxy

Tools -> Options -> Network -> Manual Proxy Settings: check it and configure the SOCKS proxy at localhost, port 9696.

2.3 Connect VisualVM to the Target

File -> Add Remote Host… and type the IP or hostname of the remote machine.

jstatd connection: You should see logs both in the ssh window (thanks to its "-v", e.g. "debug1: Connection to port 9696 forwarding to socks port 0 requested." and "debug1: channel 3: free: direct-tcpip: listening port 9696 for port 1099, connect from port 61262, nchannels 6") and in the console where you started jstatd (many, e.g. "FINER: RMI TCP Connection(23)- …"). Wait a few minutes after adding the remote host; you should then see the JVMs running there. Available stats: JVM arguments; heap, classes and thread monitoring (but not CPU). The Sampler and MBeans require JMX.

JMX: Right-click on the remote host you have added, select Add JMX Connection…, and type the JMX port you have chosen. You should see logs similar to those with jstatd. Available stats: also CPU usage, system properties, a detailed threads report with access to stack traces, and CPU sampling (memory sampling is not supported).

Note: Sampler vs. Profiler. The VisualVM Sampler excludes time spent in Object.wait and Thread.sleep (e.g. waiting on I/O). Use the NetBeans Profiler to profile or sample a remote application if you want more control or want the possibility to include Object.wait and Thread.sleep time. It requires its Remote Pack (a Java agent, i.e. a JAR file) to be in the target JVM (NetBeans' Attach Wizard can generate the remote pack for you in step 4, Manual integration, and show you the options to pass to the target JVM to use it). You can run the profiler over SSH by forwarding its default port (5140) and attaching to the forwarded port at localhost. (NetBeans version 7.1.1.)

Reference: VisualVM: Monitoring Remote JVM Over SSH (JMX Or Not) from our JCG partner Jakub Holy at The Holy Java blog.

OutOfMemoryError: unable to create new native thread – Problem Demystified

As you may have seen from my previous tutorials and case studies, Java Heap Space OutOfMemoryError problems can be complex to pinpoint and resolve. One of the common problems I have observed in Java EE production systems is OutOfMemoryError: unable to create new native thread, thrown when the HotSpot JVM is unable to create a new Java thread. This article will revisit this HotSpot VM error and provide you with recommendations and resolution strategies. If you are not familiar with the HotSpot JVM, I first recommend that you look at a high-level view of its internal memory spaces. This knowledge is important in order for you to understand OutOfMemoryError problems related to the native (C-Heap) memory space.

OutOfMemoryError: unable to create new native thread – what is it?

Let's start with a basic explanation. This HotSpot JVM error is thrown when the internal JVM native code is unable to create a new Java thread. More precisely, it means that the JVM native code was unable to create a new "native" thread from the OS (Solaris, Linux, Mac, Windows…). This logic can be seen clearly in the OpenJDK 1.6 and 1.7 implementations. Unfortunately, at this point you won't get more detail than this error, with no indication of why the JVM is unable to create a new thread from the OS…

HotSpot JVM: 32-bit or 64-bit?

Before you go any further in the analysis, one fundamental fact you must determine about your Java or Java EE environment is which version of the HotSpot VM you are using: 32-bit or 64-bit. Why is this so important? What you will learn shortly is that this JVM problem is very often related to native memory depletion, either at the JVM process or OS level. For now, please keep in mind that:

- A 32-bit JVM process is in theory allowed to grow up to 4 GB (even much lower on some older 32-bit Windows versions). For a 32-bit JVM process, the C-Heap is in a race with the Java Heap and PermGen space, e.g.:
  C-Heap capacity = 2-4 GB – Java Heap size (-Xms, -Xmx) – PermGen size (-XX:MaxPermSize)
- A 64-bit JVM process is in theory allowed to use most of the OS virtual memory available, or up to 16 EB (16 million TB).

As you can see, if you allocate a large Java Heap (2 GB+) for a 32-bit JVM process, the native memory space capacity is reduced automatically, opening the door to JVM native memory allocation failures. For a 64-bit JVM process, your main concern, from a JVM C-Heap perspective, is the capacity and availability of the OS physical, virtual and swap memory.

OK, but how does native memory affect Java thread creation?

Now back to our primary problem. Another fundamental JVM aspect to understand is that Java threads created by the JVM require native memory from the OS. You should now start to see the source of your problem… The high-level thread creation process is as follows:

- A new Java thread is requested by the Java program & JDK.
- The JVM native code then attempts to create a new native thread from the OS.
- The OS attempts to create a new native thread as per its attributes, which include the thread stack size. Native memory is then allocated (reserved) from the OS to the Java process native memory space, assuming the process has enough address space (e.g. a 32-bit process) to honour the request.
- The OS will refuse any further native thread & memory allocation if the 32-bit Java process has depleted its memory address space, e.g. the 2 GB, 3 GB or 4 GB process size limit.
- The OS will also refuse any further thread & native memory allocation if the virtual memory of the OS is depleted (including Solaris swap space depletion), since thread access to the stack can generate a SIGBUS error, crashing the JVM (http://bugs.sun.com/view_bug.do?bug_id=6302804).

In summary:

- Java thread creation requires native memory available from the OS, for both 32-bit and 64-bit JVM processes.
- For a 32-bit JVM, Java thread creation also requires memory available from the C-Heap, i.e. the process address space.

Problem diagnostic

Now that you understand native memory and JVM thread creation a little better, it is time to look at your problem. As a starting point, I suggest that you follow the analysis approach below:

- Determine whether you are using a HotSpot 32-bit or 64-bit JVM.
- When the problem is observed, take a JVM thread dump and determine how many threads are active.
- Monitor closely the Java process size before and during the OOM problem replication.
- Monitor closely the OS virtual memory utilization before and during the OOM problem replication, including swap memory space utilization if using the Solaris OS.

Proper data gathering as per above will give you the data points needed for the first level of investigation. The next step is to look at the possible problem patterns and determine which one applies to your case.

Problem pattern #1 – C-Heap depletion (32-bit JVM)

From my experience, OutOfMemoryError: unable to create new native thread is quite common for 32-bit JVM processes. This problem is often observed when too many threads are created vs. the C-Heap capacity. JVM thread dump analysis and Java process size monitoring will allow you to determine if this is the cause.

Problem pattern #2 – OS virtual memory depletion (64-bit JVM)

In this scenario, the OS virtual memory is fully depleted. This could be due to a few 64-bit JVM processes taking a lot of memory (e.g. 10 GB+) and/or other high-memory-footprint rogue processes. Again, Java process size & OS virtual memory monitoring will allow you to determine if this is the cause.

Problem pattern #3 – OS virtual memory depletion (32-bit JVM)

The third scenario is less frequent but can still be observed. The diagnostic can be a bit more complex, but the key analysis point is to determine which processes are causing a full OS virtual memory depletion. Your 32-bit JVM processes could be either the source or the victim, e.g. rogue processes using most of the OS virtual memory and preventing your 32-bit JVM processes from reserving more native memory for thread creation. Please note that this problem can also manifest itself as a full JVM crash (as per the sample below) when running out of OS virtual memory or swap space on Solaris.

#
# A fatal error has been detected by the Java Runtime Environment:
#
# java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
#
#  Internal Error (allocation.cpp:166), pid=2290, tid=27
#  Error: ChunkPool::allocate
#
# JRE version: 6.0_24-b07
# Java VM: Java HotSpot(TM) Server VM (19.1-b02 mixed mode solaris-sparc )
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x003fa800):  JavaThread "CompilerThread1" daemon [_thread_in_native, id=27, stack(0x65380000,0x65400000)]

Stack: [0x65380000,0x65400000],  sp=0x653fd758,  free space=501k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
………………

Native memory depletion: symptom or root cause?

You now understand your problem and know which problem pattern you are dealing with. You are now ready to provide recommendations to address the problem… are you? Your work is not done yet; please keep in mind that this JVM OOM event is often just a symptom of the actual root cause of the problem.
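The 32-bit C-Heap arithmetic above can be sketched numerically. This is only a back-of-the-envelope estimate (the method and the numbers are illustrative, not from the article): each thread needs a native stack (-Xss) carved out of whatever address space the Java Heap and PermGen leave free.

```java
public class ThreadCapacitySketch {
    // Rough upper bound on thread count for a 32-bit JVM: whatever
    // address space is left after the Java Heap and PermGen must hold,
    // among other things, one native stack per thread. Real JVMs use
    // part of the C-Heap for other allocations too, so the true limit
    // is lower than this estimate.
    public static long maxThreads(long addressSpaceBytes, long javaHeapBytes,
                                  long permGenBytes, long threadStackBytes) {
        long cHeapBytes = addressSpaceBytes - javaHeapBytes - permGenBytes;
        if (cHeapBytes <= 0) return 0;
        return cHeapBytes / threadStackBytes;
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024 * 1024;
        long mb = 1024L * 1024;
        // 2 GB user address space, -Xmx1g, -XX:MaxPermSize=256m, -Xss512k
        System.out.println(maxThreads(2 * gb, 1 * gb, 256 * mb, 512 * 1024));
    }
}
```

With a 2 GB address space, a 1 GB heap, 256 MB PermGen and 512 KB stacks, the estimate comes out at 1536 threads; grow the heap and the ceiling drops, which is exactly the race described above.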
The root cause is typically much deeper, so before providing recommendations to your client I recommend that you perform deeper analysis. The last thing you want to do is simply address and mask the symptoms. Solutions such as increasing OS physical/virtual memory or upgrading all your JVM processes to 64-bit should only be considered once you have a good view of the root cause and the production environment capacity requirements.

The next fundamental question to answer is: how many threads were active at the time of the OutOfMemoryError? In my experience with Java EE production systems, the most common root cause is actually the application and/or Java EE container attempting to create too many threads at a given time when facing non-happy paths, such as threads stuck in a remote IO call, thread race conditions, etc. In this scenario, the Java EE container can start creating too many threads when attempting to honour incoming client requests, increasing the pressure on the C-Heap and native memory allocation. Bottom line: before blaming the JVM, please perform your due diligence and determine whether you are dealing with an application or Java EE container thread tuning problem as the root cause. Once you understand and address the root cause (the source of thread creation), you can then work on tuning your JVM and OS memory capacity to make it more fault tolerant and better survive these sudden thread surge scenarios.

Recommendations:

- First perform a JVM thread dump analysis and determine the source of all the active threads vs. an established baseline. Determine what is causing your Java application or Java EE container to create so many threads at the time of the failure.
- Ensure that your monitoring tools closely track both your JVM process size and OS virtual memory. This crucial data will be required in order to perform a full root cause analysis.
- Do not assume that you are dealing with an OS memory capacity problem. Look at all running processes and determine whether your JVM processes are actually the source of the problem or the victim of other processes consuming all the virtual memory.
- Revisit your Java EE container thread configuration and JVM thread stack size. Determine whether the Java EE container is allowed to create more threads than your JVM process and/or OS can handle.
- Determine whether the Java Heap size of your 32-bit JVM is too large, preventing the JVM from creating enough threads to fulfill your client requests. In this scenario, you will have to consider reducing your Java Heap size (if possible), vertical scaling, or upgrading to a 64-bit JVM.

Capacity planning analysis to the rescue

As you may have seen from my past article on the Top 10 Causes of Java EE Enterprise Performance Problems, lack of capacity planning analysis is often the source of the problem. Any comprehensive load and performance testing exercise should also properly determine the Java EE container thread, JVM and OS native memory requirements for your production environment, including impact measurements of non-happy paths. This approach will allow your production environment to stay away from this type of problem and lead to better system scalability and stability in the long run.

Reference: OutOfMemoryError: unable to create new native thread – Problem Demystified from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog.

Multiple versions of Java on OS X Mountain Lion

Before Mountain Lion, Java was bundled with OS X. It seems that during the upgrade, the Java 6 version I had on my machine was removed. Apparently Java was uninstalled during the upgrade because of a security issue in the Java runtime; this way you are forced to install the latest version, which fixes the security problem. So I went to /Applications/Utilities/, opened a Terminal and executed the following command:

java -version
==> "No Java runtime present …"

A window prompted me to install Java. Click "Install" to get the latest version. I installed it, but right after that I downloaded and installed JDK SE 7 from Oracle. After installation, open Java Preferences (Launchpad/Others) and you will see both versions listed. Now I knew I had two versions of Java, but which one am I using?

$ java -version
java version "1.6.0_35"
Java(TM) SE Runtime Environment (build 1.6.0_35-b10-428-11M3811)
Java HotSpot(TM) 64-Bit Server VM (build 20.10-b01-428, mixed mode)

So what if I want to use JDK SE 7 from Oracle? Then I just had to drag Java SE 7 to the first position in the list in the Java Preferences window. This time:

$ java -version
java version "1.7.0_05"
Java(TM) SE Runtime Environment (build 1.7.0_05-b06)
Java HotSpot(TM) 64-Bit Server VM (build 23.1-b03, mixed mode)

I said to myself, let's find out more about how Java is installed on OS X, so I dug deeper. There are some very useful commands: whereis, which and ls -l.

whereis java
==> /usr/bin/java
ls -l /usr/bin/java
==> /System/Library/Frameworks/JavaVM.framework/Versions/Current/Commands/java

When I saw this I was a little curious, so I listed the Versions directory:

cd /System/Library/Frameworks/JavaVM.framework/Versions
ls
==> 1.4  1.4.2  1.5  1.5.0  1.6  1.6.0  A  Current  CurrentJDK

Now why do I have these old versions of Java on my machine?
So I asked on Ask Different: http://apple.stackexchange.com/questions/57986/multiple-java-versions-support-on-os-x-and-java-home-location

$ sw_vers
ProductName:    Mac OS X
ProductVersion: 10.8.1
BuildVersion:   12B19

$ ls -l /System/Library/Frameworks/JavaVM.framework/Versions
total 64
lrwxr-xr-x  1 root wheel  10 Sep 16 15:55 1.4 -> CurrentJDK
lrwxr-xr-x  1 root wheel  10 Sep 16 15:55 1.4.2 -> CurrentJDK
lrwxr-xr-x  1 root wheel  10 Sep 16 15:55 1.5 -> CurrentJDK
lrwxr-xr-x  1 root wheel  10 Sep 16 15:55 1.5.0 -> CurrentJDK
lrwxr-xr-x  1 root wheel  10 Sep 16 15:55 1.6 -> CurrentJDK
lrwxr-xr-x  1 root wheel  10 Sep 16 15:55 1.6.0 -> CurrentJDK
drwxr-xr-x  7 root wheel 238 Sep 16 16:08 A
lrwxr-xr-x  1 root wheel   1 Sep 16 15:55 Current -> A
lrwxr-xr-x  1 root wheel  59 Sep 16 15:55 CurrentJDK -> /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents

It seems all the old version names are links to CurrentJDK, which is the Apple version, except A, and Current, which is linked to A. I read something about this in the question above. To me, A acts like a pointer: if in Java Preferences you put Java 6 from Apple in the first position, A will hold Java 6 from Apple; if you put Java SE 7 from Oracle in the first position, A will point to that version. Current points to A:

./java -version
java version "1.6.0_35"
Java(TM) SE Runtime Environment (build 1.6.0_35-b10-428-11M3811)
Java HotSpot(TM) 64-Bit Server VM (build 20.10-b01-428, mixed mode)

./java -version
java version "1.7.0_05"
Java(TM) SE Runtime Environment (build 1.7.0_05-b06)
Java HotSpot(TM) 64-Bit Server VM (build 23.1-b03, mixed mode)

So the Current directory points to the first Java version found in the Java Preferences list.
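As a side note, the facts `java -version` prints are also exposed to running code as system properties, so a program can confirm which of the installed JVMs actually launched it (the class name here is mine, just for illustration):

```java
public class WhichJavaSketch {
    // java.version and java.home identify the running JVM, so after
    // reordering Java Preferences you can verify which installation
    // a program is really using.
    public static String describe() {
        return System.getProperty("java.version")
                + " at " + System.getProperty("java.home");
    }

    public static void main(String[] args) {
        System.out.println(describe());
    }
}
```

On the machine above, this would print a 1.6.0_35 path under /System/Library/Java or a 1.7.0 path under /Library/Java, depending on which entry is first in Java Preferences.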
A very interesting piece of information is the following:

lrwxr-xr-x  1 root wheel  59 Sep 16 15:55 CurrentJDK -> /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents

This means Java from Apple is actually installed in /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/. What about Java SE 7? I could have searched the filesystem, but I found an easier way:

If Java SE 7 is in the first position in Java Preferences:
$ /usr/libexec/java_home
/Library/Java/JavaVirtualMachines/1.7.0.jdk/Contents/Home

If Java SE 6 (System) is in the first position in Java Preferences:
$ /usr/libexec/java_home
/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home

So Java on Mountain Lion (OS X) is most likely to be installed in one of these locations:

/System/Library/Java/JavaVirtualMachines
/Library/Java/JavaVirtualMachines
~/Library/Java/JavaVirtualMachines

What about /System/Library/Frameworks/JavaVM.framework/Versions? It seems that it is linked to the so-called "Java bridge"; this appears to be the native part of the Java installation on OS X.

Reference: Multiple versions of Java on OS X Mountain Lion from our JCG partner Cristian Chiovari at the Java Code Samples blog.

Resign Patterns: Eliminate them with Agile practices and Quality Metrics

This blog post is inspired by the article titled Resign Patterns by Michael Duell. I've included all the original text from that article, but for each anti-pattern I mention (at least) one agile practice that IMHO is helpful in eliminating it, and one or more quality metrics that would help you identify it very early.

1 Cremational Patterns

Below is a list of five cremational patterns.

1.1 Abject Poverty
The Abject Poverty Pattern is evident in software that is so difficult to test and maintain that doing so results in massive budget overruns.
Agile Practices: Refactoring, TDD
Quality Metrics: LCOM4, RFC, Cyclomatic Complexity

1.2 Blinder
The Blinder Pattern is an expedient solution to a problem without regard for future changes in requirements. It is unclear whether the Blinder is named for the blinders worn by the software designer during the coding phase, or the desire to gouge his eyes out during the maintenance phase.
Agile Practices: Simple Design, Program Intently and Expressively
Quality Metrics: LCOM4, Cyclomatic Complexity

1.3 Fallacy Method
The Fallacy Method is evident in the handling of corner cases. The logic looks correct, but if anyone actually bothers to test it, or if a corner case occurs, the fallacy of the logic will become known.
Agile Practices: Unit Testing, User Stories, Customer Collaboration to define acceptance criteria and precise requirements
Quality Metrics: Code Coverage (Line + Branch Coverage)

1.4 ProtoTry
The ProtoTry Pattern is a quick and dirty attempt to develop a working model of software. The original intent is to rewrite the ProtoTry, using lessons learned, but schedules never permit. The ProtoTry is also known as legacy code.
Agile Practices: Refactoring, Code in Increments
Quality Metrics: Code Coverage

1.5 Simpleton
The Simpleton Pattern is an extremely complex pattern used for the most trivial of tasks. The Simpleton is an accurate indicator of the skill level of its creator.
Agile Practices: Simple Design, Program Intently and Expressively
Quality Metrics: LCOM4, Cyclomatic Complexity

2 Destructural Patterns

Below is a list of seven destructural patterns.

2.1 Adopter
The Adopter Pattern provides a home for orphaned functions. The result is a large family of functions that don't look anything alike, whose only relation to one another is through the Adopter.
Agile Practices: Simple Design, Refactoring, Program Intently and Expressively
Quality Metrics: LCOM4, Cyclomatic Complexity, RFC

2.2 Brig
The Brig Pattern is a container class for bad software. Also known as a module.
Agile Practices: Refactoring, Code in Increments, Write Cohesive Code
Quality Metrics: Package Complexity, Package Size

2.3 Compromise
The Compromise Pattern is used to balance the forces of schedule vs. quality. The result is software of inferior quality that is still late.
Agile Practices: Continuous Integration and Continuous Inspection
Quality Metrics: Technical Debt

2.4 Detonator
The Detonator is extremely common, but often undetected. A common example is calculations based on a 2-digit year field. This bomb is out there, and waiting to explode!
Agile Practices: Code Reviews, Unit Testing
Quality Metrics: Code Violations

2.5 Fromage
The Fromage Pattern is often full of holes. Fromage consists of cheesy little software tricks that make portability impossible. The older this pattern gets, the riper it smells.
Agile Practices: Refactoring, Code in Increments
Quality Metrics: Technical Debt

2.6 Flypaper
The Flypaper Pattern is written by one designer and maintained by another. The designer maintaining the Flypaper Pattern finds herself stuck, and will likely perish before getting loose.
Agile Practices: Communicate in Code, Keep a solutions log
Quality Metrics: Documentation Density

2.7 ePoxy
The ePoxy Pattern is evident in tightly coupled software modules. As coupling between modules increases, there appears to be an epoxy bond between them.
Agile Practices: Refactoring, Simple Design, Code in Increments
Quality Metrics: Coupling, LCOM4

3 Misbehavioral Patterns

Below is a list of eleven misbehavioral patterns.

3.1 Chain of Possibilities
The Chain of Possibilities Pattern is evident in big, poorly documented modules. Nobody is sure of the full extent of its functionality, but the possibilities seem endless. Also known as Non-Deterministic.
Agile Practices: Communicate in Code, Keep a solutions log
Quality Metrics: Documentation Density

3.2 Commando
The Commando Pattern is used to get in and out quick, and get the job done. This pattern can break any encapsulation to accomplish its mission. It takes no prisoners.
Agile Practices: TDD, Unit Testing, Code in Increments, Write Cohesive Code
Quality Metrics: Couplings, Complexity, Code Coverage

3.3 Intersperser
The Intersperser Pattern scatters pieces of functionality throughout a system, making a function impossible to test, modify, or understand.
Agile Practices: Code in Increments, Write Cohesive Code
Quality Metrics: Complexity, Couplings

3.4 Instigator
The Instigator Pattern is seemingly benign, but wreaks havoc on other parts of the software system.
Agile Practices: Unit Testing, Continuous Integration
Quality Metrics: Code Coverage, Violations Density

3.5 Momentum
The Momentum Pattern grows exponentially, increasing size, memory requirements, complexity, and processing time.
Agile Practices: Code in Increments, Refactoring, Keep It Simple
Quality Metrics: Complexity, Size

3.6 Medicator
The Medicator Pattern is a real time hog that makes the rest of the system appear to be medicated with strong sedatives.
Agile Practices: Continuous Integration
Quality Metrics: Couplings

3.7 Absolver
The Absolver Pattern is evident in problem-ridden code developed by former employees. So many historical problems have been traced to this software that current employees can absolve their software of blame by claiming that the Absolver is responsible for any problem reported. Also known as It's-not-in-my-code.
Agile Practices: Practice Collective Ownership, Attack problems in isolation
Quality Metrics: Unit Testing, TDD

3.8 Stake
The Stake Pattern is evident in problem-ridden software written by designers who have since chosen the management ladder. Although fraught with problems, the manager's stake in this software is too high to allow anyone to rewrite it, as it represents the pinnacle of the manager's technical achievement.
Agile Practices: Practice Collective Ownership, Be a Mentor
Quality Metrics:

3.9 Eulogy
The Eulogy Pattern is eventually used on all projects employing the other 22 Resign Patterns. Also known as Post Mortem.
Agile Practices: Continuous Inspection, Code Reviews
Quality Metrics: ALL!!!

3.10 Tempest Method
The Tempest Method is used in the last few days before software delivery. The Tempest Method is characterized by a lack of comments and the introduction of several Detonator Patterns.
Agile Practices: Communicate in Code, Keep a solutions log
Quality Metrics: Documentation Density

3.11 Visitor From Hell
The Visitor From Hell Pattern is coincident with the absence of run-time bounds checking on arrays. Inevitably, at least one control loop per system will have a Visitor From Hell Pattern that will overwrite critical data.
Agile Practices: Code Reviews, Unit Testing
Quality Metrics: Violations Density

Reference: Resign Patterns – Eliminate them with Agile practices and Quality Metrics from our JCG partner Papapetrou P. Patroklos at the Only Software matters blog.

An unambiguous software version scheme

When people talk about software versioning schemes they often refer to the commonly used X.Y.Z numerical scheme. This is often read as major.minor.build, but these abstract terms are not useful, as they don't explicitly impart any meaning to each numerical component. This can lead to the simplest usage, where we just increment the last number for each release, so I've seen versions such as 1.0.35. Alternatively, versions become a time-consuming point of debate. This is a shame, as we could impart some clear and useful information with versions. I'm going to suggest that rather than thinking 'X.Y.Z' we think 'api.feature.bug'. What do I mean by this? You increment the appropriate number for what your release contains. For example, if you have only fixed bugs, you increment the last number. If you introduce even one new feature, you increment the middle number. If you change a published or documented API, be that the interface of a package, a SOAP or other XML API, or possibly the user interface (in a loose sense of the term 'API'), then you increment the first number. This system is unambiguous; there is no need for discussions about the numbering. You zero the digits to the right of any you increment, so if you fix a bug and introduce a new feature after version 5.3.6, the new version is 5.4.0. Unstated digits are assumed to be zero, so 5.4.0 is the same as 5.4. The version is not a number, and it does not have digits in the arithmetic sense: the version 5.261.67 is pretty unusual, but not invalid. Don't let it put you off. You might need to change an API due to a bug fix, but you'll need to be diligent, and cold to any politicking, about increasing the API digit. Otherwise the scheme loses value and you might as well just use a single number for versioning. What if you're on version 5 of the product and the product lead has told everyone version 6 will be something special, but you need to fix a bug that means an API change?
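The bump-and-zero rule is mechanical enough to express as code. A minimal sketch (the class and method names are mine, not part of the scheme):

```java
// api.feature.bug versioning: incrementing a component zeroes
// everything to its right, so the version always states exactly
// what kind of change the release contains.
public class ApiFeatureBugVersion {
    final int api, feature, bug;

    ApiFeatureBugVersion(int api, int feature, int bug) {
        this.api = api;
        this.feature = feature;
        this.bug = bug;
    }

    // Published/documented API changed: zero feature and bug.
    ApiFeatureBugVersion bumpApi()     { return new ApiFeatureBugVersion(api + 1, 0, 0); }
    // At least one new feature: zero bug.
    ApiFeatureBugVersion bumpFeature() { return new ApiFeatureBugVersion(api, feature + 1, 0); }
    // Bug fixes only.
    ApiFeatureBugVersion bumpBug()     { return new ApiFeatureBugVersion(api, feature, bug + 1); }

    @Override public String toString() { return api + "." + feature + "." + bug; }

    public static void main(String[] args) {
        // The example from the text: a bug fix plus a new feature after 5.3.6
        System.out.println(new ApiFeatureBugVersion(5, 3, 6).bumpFeature()); // 5.4.0
    }
}
```

Note that a combined release takes the highest applicable bump only: a bug fix plus a new feature after 5.3.6 is one bumpFeature, giving 5.4.0 as in the text.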
You need a hybrid version system, which consists of the external 'product version' and the internal 'software version'. What about branching for production support? Technically there are no new features, but quite possibly one branch per customer. CVS has a suitable system: take the version of the release and append two digits, the first to indicate the branch, the second for the fix number. For example, if you branch from 5.4.0, then the first release will be 5.4.0.1.1, and the next branch's second release would be 5.4.0.2.2.

Reference: An unambiguous software version scheme from our JCG partner Alex Collins at the Alex Collins blog.

Android: Level Two Image Cache

In the mobile world, it's very common to have scrollable lists of items that contain information and an image or two. To make these lists perform well, most apps follow a lazy loading approach, which grabs and displays images only as they are needed. This approach works great for getting images into the system initially. However, there are still a few problems with it: the app must re-download each image every time it needs to appear to the user in the list. This creates a pretty bad experience for the user. The first time the user sees the list, s/he has to wait several seconds (or minutes with a bad network connection) before seeing the complete list with images. But the real pain comes when the user scrolls to a different part of the list and then scrolls back: this action causes the entire image download process to restart!

We can remove this negative user experience by using an image cache. An image cache allows us to store recently downloaded images on the device. By storing them on the device, we can grab them from memory instead of asking the server for them again. This improves performance in several different ways, most notably:

- Images that have already been downloaded appear almost instantly, which makes the UI much snappier.
- Battery life is saved by not having to go to the network for the images.

There are some design considerations when using a cache. Since the cache uses memory on the device, it is fairly limited in space. This means we can only have a certain number of images in the cache, so it's really important to make sure we keep the correct images stored there. 'Correct' is a very relative term, which can mean several different things depending on the problem at hand; there are several different types of caching algorithms that attempt to define 'correct' for different problems. In our case, 'correct' means we want the most recently used images in the cache.
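Android's LruCache implements exactly this eviction policy. As a rough stand-in to illustrate the idea (this is not the Android class, just the same policy built on LinkedHashMap's access-order mode):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache: LinkedHashMap in access-order mode moves each
// accessed entry to the tail, so the head is always the least recently
// used entry, which is evicted once capacity is exceeded. Android's
// LruCache offers the same behavior, plus sizing by bytes rather than
// by entry count, which matters for images.
public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public SimpleLruCache(int maxEntries) {
        super(16, 0.75f, true); // true = order by access, not insertion
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

With a capacity of 2: insert images a and b, touch a, then insert c; b is the least recently used and is evicted, while a and c stay cached.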
Luckily for us, the type of cache we need is simple and one of the most commonly used. An LRU (least recently used) cache keeps the most recently used images in memory while discarding the images that have gone unused the longest. Even luckier, the Android SDK has an LruCache implementation, which was added in Honeycomb (it’s also available in the Support Library if you need to support older versions as well).

Using an LRU Cache Stored on Disk

An LRU cache allows you to serve images from memory instead of going to a server every time. This makes your app respond much more quickly and saves some battery life. One of the limits of the cache is the amount of memory you can use to actually store the images. This space is very constrained, especially on mobile devices. However, you do have access to one data store with considerably more space: the disk. The disk on a mobile device is usually much larger than the main memory. Although disk access is much slower than main memory access, it is still much faster than going to the network for an image, and you still get the battery life savings of staying off the network. For an excellent disk LRU cache implementation that works great with Android, check out Jake Wharton’s DiskLruCache on GitHub.

Combining Memory and Disk Caches

Although both of the previous caches (the memory LruCache and the disk LRU cache) work well independently, they work even better when combined. By using both caches at the same time, you get the best of both worlds: the loading speed of the main memory cache and the larger capacity of the disk cache. Combining the two caches is fairly straightforward. Google provides some excellent example code for both the memory and disk caches here. All you have to do now is take the two cache implementations and chain them together (check memory first, then disk, and only then the network) to create a Level 2 image cache in Android! Reference: Level Two Image Cache in Android from our JCG partner Isaac Taylor at the Programming Mobile blog....
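To make the combined lookup concrete, here is a minimal plain-Java sketch of the memory-then-disk-then-network flow. The class and method names are my own, not from the article; an access-order LinkedHashMap stands in for android.util.LruCache and raw files stand in for DiskLruCache so the sketch runs anywhere.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

// Level-two lookup sketch: memory LRU first, then disk, then the "network".
class TwoLevelImageCache {
    private final int maxMemoryEntries;
    private final Path diskDir;
    private final Map<String, byte[]> memory;

    TwoLevelImageCache(int maxMemoryEntries, Path diskDir) throws IOException {
        this.maxMemoryEntries = maxMemoryEntries;
        this.diskDir = Files.createDirectories(diskDir);
        // accessOrder=true keeps iteration order least-recently-used first,
        // so removeEldestEntry evicts the LRU entry once the cache is full
        this.memory = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
                return size() > TwoLevelImageCache.this.maxMemoryEntries;
            }
        };
    }

    byte[] get(String key, Supplier<byte[]> network) throws IOException {
        byte[] data = memory.get(key);            // 1. memory hit: fastest
        if (data != null) {
            return data;
        }
        Path file = diskDir.resolve(key);
        if (Files.exists(file)) {                 // 2. disk hit: slower, but no network
            data = Files.readAllBytes(file);
            memory.put(key, data);                // promote back into the memory cache
            return data;
        }
        data = network.get();                     // 3. miss: download once...
        Files.write(file, data);                  // ...then populate both levels
        memory.put(key, data);
        return data;
    }
}
```

A real Android implementation would also sanitize keys for use as file names and bound total disk usage, both of which DiskLruCache handles for you.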

Running RichFaces on WebLogic 12c

I initially thought I could write this post months back already, but I ended up being overwhelmed by different things. One of them was that I wasn’t able to simply fire up the RichFaces showcase like I did for the 4.0 release. With all the JMS magic and the different provider checks in the showcase, it has become something of a challenge to simply build and deploy it. Anyway, I was willing to give this a try, and here we go. If you want to get started with any of the JBoss technologies, it is a good idea to check with the JBoss Developer Framework first. That is a nice collection of different examples and quickstarts to get you started on Java EE and its technologies. One of them is the RichFaces-Validation example, which demonstrates how to use JSF 2.0, RichFaces 4.2, CDI 1.0, JPA 2.0 and Bean Validation 1.0 together.

The Example

The example consists of a Member entity which has some JSR-303 (Bean Validation) constraints on it. Usually those are checked in several places, beginning with the database, on to the persistence layer and finally the view layer in close interaction with the client. Even though this quick guide doesn’t contain a persistence layer, it starts with the entity, which reflects the real-life situation quite well. The application contains a view layer written using JSF and RichFaces and includes an AJAX wizard for new member registration. A newly registered member needs to provide a couple of pieces of information before he is actually ‘registered’: an e-mail address, a name and a phone number.

Getting Started

I’m not going to repeat what the excellent and detailed quickstart already shows you. So, if you want to run this on JBoss AS7 .. go there. We’re starting with a blank Maven web project, and the best and easiest way to do this is to fire up NetBeans 7.2 and create one. Let’s name it ‘richwls-web’. Open your pom.xml and start changing some stuff there. First remove the endorsed stuff; we don’t need it. 
Next is to add a little bit of dependencyManagement:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.jboss.bom</groupId>
      <artifactId>jboss-javaee-6.0-with-tools</artifactId>
      <version>1.0.0.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>org.richfaces</groupId>
      <artifactId>richfaces-bom</artifactId>
      <version>4.2.0.Final</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>

This adds the Bill of Materials (BOM) for both Java EE 6 and RichFaces to your project. A BOM specifies the versions of a ‘stack’ (or a collection) of artifacts. You’ll find one shipped with pretty much anything from the Red Hat guys, and it is considered ‘best practice’ to have one. In the end this makes your life easier, because it manages versions and dependencies for you. On to the lengthy list of actual dependencies:

<!-- Import the CDI API -->
<dependency>
  <groupId>javax.enterprise</groupId>
  <artifactId>cdi-api</artifactId>
  <scope>provided</scope>
</dependency>
<!-- Import the JPA API -->
<dependency>
  <groupId>javax.persistence</groupId>
  <artifactId>persistence-api</artifactId>
  <version>1.0.2</version>
  <scope>provided</scope>
</dependency>
<!-- JSR-303 (Bean Validation) implementation -->
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-validator</artifactId>
  <version>4.3.0.Final</version>
  <scope>provided</scope>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- Import the JSF API -->
<dependency>
  <groupId>javax.faces</groupId>
  <artifactId>jsf-api</artifactId>
  <version>2.1</version>
  <scope>provided</scope>
</dependency>
<!-- Import RichFaces runtime dependencies - these will be included as libraries in the WAR -->
<dependency>
  <groupId>org.richfaces.ui</groupId>
  <artifactId>richfaces-components-ui</artifactId>
</dependency>
<dependency>
  <groupId>org.richfaces.core</groupId>
  <artifactId>richfaces-core-impl</artifactId>
</dependency>

Except for the RichFaces dependencies, all others are provided by the runtime; in this case that will be WebLogic. In case you haven’t defined it elsewhere (settings.xml) you should also add the JBoss repository to your build section:

<repository>
  <id>jboss-public-repository-group</id>
  <name>JBoss Public Maven Repository Group</name>
  <url>https://repository.jboss.org/nexus/content/groups/public-jboss/</url>
</repository>

Copy the contents of the richfaces-validation directory from the source zip or check it out from GitHub. Be a little careful not to mess up the pom.xml we just created ;) Build it and get that stuff deployed.

Issues

First thing you are greeted with is a nice little Weld message:

WELD-000054 Producers cannot produce non-serializable instances for injection into non-transient fields of passivating beans [...] Producer Method [Logger] with qualifiers

We obviously have an issue here and need to declare the Logger field as transient:

@Inject
private transient Logger logger;

I don’t know why this works on AS7 without the change, but maybe I’ll find out someday :) Next iteration: change it, build, deploy.

java.lang.NoSuchMethodError: com.google.common.collect.ImmutableSet.copyOf(Ljava/util/Collection;)Lcom/google/common/collect/ImmutableSet;

That doesn’t look better. Fire up the WLS CAT at http://localhost:7001/wls-cat/ and try to find out what is going on. It seems Oracle is using Google magic inside the server itself. Ok, fine. We have no way to deploy RichFaces as a standalone WAR on WebLogic, because we need to resolve some classloading issues here, and the recommended way is to add a so-called filtering classloader. You do this by adding a weblogic-application.xml to your EAR. Yeah: let’s repackage everything, put the WAR into an empty EAR and add the magic to the weblogic-application.xml:

<prefer-application-packages>
  <package-name>com.google.common.*</package-name>
</prefer-application-packages>

Done? 
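For reference, embedded in a complete descriptor, that weblogic-application.xml might look roughly like this (the namespace URI is an assumption based on the WebLogic 12c schemas; check your server documentation):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- EAR-level descriptor: prefer-application-packages tells the WebLogic
     filtering classloader to load com.google.common.* from the application
     instead of from the server's own copy of the Guava classes. -->
<weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
  <prefer-application-packages>
    <package-name>com.google.common.*</package-name>
  </prefer-application-packages>
</weblogic-application>
```

The file goes into the EAR’s META-INF directory, next to application.xml.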
Another deployment and you finally see your application. Basically, RichFaces runs on WebLogic, but you have to package it into an EAR and turn the classloader around for the com.google.common.* classes. That is way easier with PrimeFaces, but … anyway, there are reasons why I tried this. One is that I do like the idea of being able to trigger Bean Validation on the client side. If you take a look at the example you see that <rich:validator event=’blur’ /> adds client-side validation for both Bean Validation constraints and standard JSF validators, without having to mess around with anything in JavaScript or duplicate logic. Happy coding and don’t forget to share! Reference: Running RichFaces on WebLogic 12c from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

How To Disrupt Technical Recruiting – Hire an Agent

A recent anti-recruiter rant posted to a newsgroup, and a subsequent commentary on Hacker News, got me thinking about the many ways that tech recruiting and the relationship between recruiters and the tech community are broken. I saw a few comments noting that the community always says how broken it is, but no one tries to fix it. Here are some ideas on how we got here and directions we can go.

Why is the recruiting industry the way it is?

The high demand and low supply for tech talent creates a very lucrative market for recruiters. Many technologists might not be aware of this, but successful recruiters probably all make over $100K (some earn much more), and as recruiting is a commission-based business, compensation has no ceiling. Recruiting is an easy field to enter. No formal training is required, although you will need some sales training and tech knowledge to truly make an impact. One can easily start with a computer, a phone line, and a basic website. So we have an industry that can be very lucrative (for some, much more lucrative than the tech industry itself) with almost no barriers to entry. Of course an industry with these characteristics will draw both talented, ethical professionals as well as carpetbaggers and bottom-feeders, just as the gold rush did.

What are the biggest complaints about recruiters (and how can we solve them)?

First, complaints from candidates (tech pros):

Too many cold calls. POSSIBLE SOLUTION: Without some widespread changes from all three parties in the industry (candidates, hiring firms, and recruiters), this one is probably impossible to solve. Simply mentioning that you do not wish to hear from recruiters is no guarantee that they won’t contact you, but if I see on a LinkedIn page that someone specifically doesn’t want to hear from recruiters, I won’t contact them, as it is clear they do not value the services I provide.

Dishonesty about the job description or salary. 
POSSIBLE SOLUTION: What if companies gave recruiters some form of ‘verified job spec‘ to share with candidates? Salary range, job description, location, whatever else might be helpful. A candidate could request this from the recruiter before agreeing to an interview. Being marketed/used without their knowledge. POSSIBLE SOLUTION: Companies could require a ‘right to represent‘ email giving a recruiter permission to submit his/her resume for any or all positions, which would at least eliminate some of this. Of course, recruiters will still send blinded resumes (contact info removed) to client prospects. A better idea may be for candidates to have a document that they ask recruiters to sign – a contract where the recruiter agrees not to send their resume in any form to any company without the express written consent (the ‘right to represent’) of the candidate. I’m not a lawyer, but I assume there could be some financial penalties/restitution allowed if you were to break that trust, as you may damage the candidate’s career. As a rule, if I want to market a candidate to a client, I always get their permission first. No feedback or follow-up. POSSIBLE SOLUTION: Unfortunately there is little value that a company gets by providing specific feedback about a candidate, and it actually exposes them to substantial risk (ageism, racism, etc.). Likewise, taking time to give rejected candidates details provides nothing to the recruiter except goodwill with the candidate. This one is difficult to solve, but probably not as big an issue as the other problems.And complaints from hiring firms:Too many resumes. POSSIBLE SOLUTION: If you provide a very good requirement to a good recruiter, he/she should be able and very willing to limit the resumes. Telling your recruiter that you want to see the best five available candidates should encourage them to limit submissions. Unqualified candidates. POSSIBLE SOLUTION: Same as above. Misrepresenting a candidate’s background. 
POSSIBLE SOLUTION: Well for starters, stop working with the recruiter and that agency entirely. If you want to make a positive change for the recruiting industry, contact the recruiter’s manager and tell your side of the story. Having liars in an industry is bad for everyone except the liars and those that profit off of them. Marketing cold calls. POSSIBLE SOLUTION: If you truly will not use recruiters for your searches, list that on your job specifications, both on your website and in the jobs you post publicly. I would rather not waste my time if a company has a policy against using recruiters, and if your policy changes perhaps you will be calling me. I will not call a company that specifically lists that they do not want to hear from recruiters, as it is clear they do not value the service I provide. Price gouging. POSSIBLE SOLUTION: This could be when recruiting agencies mark up their candidates’ hourly rates well beyond what is a reasonable margin, or when recruiters who receive permanent placement fees tied to salary will stretch every penny from the hiring company. Flat, transparent fees work very well for both of these problems (a flat hourly mark-up on contractors and a flat fee for permanent placements), although recruiters would particularly hate a flat fee structure for contractors. The recruiter’s ‘sale’ to a contractor is, “If I can get you $300/hr, do you care if I make $2/hr or $100/hr?“. The answer is usually ‘no’, which is all fine until the contractor finds out that you are billing your client $300/hr and only paying him/her maybe $50/hr. That is rare, but that is when things get ugly. Flat and transparent rates exposed to all three parties involved will solve that problem, but don’t expect recruiters to go for it.

To all the technology pros who claim they really want to disrupt the industry, I have one simple question. Would you be willing to hire, and pay for, an agent? 
I’ve heard the argument from some engineers that they would like recruiters to care more about the engineer’s career and not treat them like a commodity. Recruiters are traditionally paid for by the hiring companies, but only if they can both find the proper talent and get that talent to take the job (contingency recruiting). This can lead to a recruiter treating candidates like some homogenized commodity that all have similar value. If engineers want true representation of their best interests, having representation from a sole agent would be one obvious choice. As your agent, I could provide career advice at various times during the year, making suggestions on technologies that you may want to explore or giving inside information on which companies might have interest in you. You might come to me to discuss any thoughts on changing jobs, how to apply for promotion, or how to ask for a salary increase (which I could negotiate for you directly with your manager). When you do decide to explore new opportunities, the agent would help put together your resume, set a job search strategy, and possibly market your background to some hiring companies. As the agent is making his living by charging a fee to the candidates, the agent could charge a much smaller fee (or potentially even no fee) to the hiring company, which would make hiring you much less expensive than hiring through a traditional recruiter. If you were contacted by a recruiter from an agency or a hiring company, you would simply refer them to me for the first discussions and I would share the information with you (if appropriate) at a convenient time. You could even list my name on your LinkedIn, GitHub, and Twitter accounts. “If you are interested in hiring me, contact Dave at Fecak.com“ How good would that feel? How good would it feel to tell your friends that you have an agent? 
All of this assumes your agent would have some high degree of knowledge about the industry, the players, market rates, and a host of other things. Many recruiters don’t have this expertise, but some certainly do. An agent could probably represent and manage the career of perhaps 50-100 candidates very comfortably and make a good living doing it. Would you be willing to pay a percentage of your salary or a flat annual rate to an agent who provides you with these services? If the answer is ‘yes’, look me up and I’d be happy to discuss it with you further. But I’m guessing for many the answer is ‘no’ (or ‘yes, depending on the price’). My business model Most recruiters are contingency based, which means they only get a fee if they make a placement. If they search for months and don’t find a proper candidate, they just wasted months for no payment. This places 100% of the risk of a search on the recruiter and 100% of the control with the hiring company. Even if the recruiter finds a great fit, the company can walk away without making a hire. Contingency recruiting is cut-throat and causes desperation to make a placement, and this is where most of the problems arise for candidates. This is the ‘numbers game’ that tech pros talk about, where the recruiter’s main incentive is to get resumes and bodies in front of clients and see what sticks. Some recruiters are retained search, which means that basically all their fees are guaranteed up front regardless of their results. This is great for the recruiter but places 100% of the risk on the hiring company. The recruiter is working this search to save his/her reputation, which is obviously very important in getting future searches. This is not cut-throat, because it is not a competitive industry – recruiters have exclusive deals with a retained client for that particular job. The model I use combines contingency and retained search. 
I charge clients a relatively small flat fee upfront to initiate the search, which is non-refundable. When a placement is made, I charge my clients another flat fee (not tied to salary). When you combine the two fees, the percentage of salary is often about half what contingency recruiters would get for the same placement. So you think I’m an idiot for charging much less than my competition. Perhaps. I see it as creating a true partnership with companies that continue to come back with additional searches and repeat business, often referring me to their friends and partners. When a company gives you a fee upfront, they are putting their money where their mouth is and you can be sure they are serious about hiring. It takes some degree of trust on behalf of the hiring company, but once you have been in the business for a while the references are there and chances are we have some business connections in common. So far this model has worked well, with happy clients and lots of repeat business. I have already met my goal for 2012, and I’m hoping to double it in the coming months.

What else do I do differently?

I give it away (sometimes) – information, resume and interview advice, and any other kind of help you can think of are requested of me, and I rarely refuse a reasonable request. If I can’t help you find a job, I can at least take a look at your resume or evaluate how your applications look. I have known some engineers for over ten years without ever having made $0.05 in fees, and have helped them make career decisions for free. I’ll often introduce candidates to start-ups or one-man firms with limited budgets who may end up hiring without using my services, with the hopes that they will use me for future searches. I run a users’ group – I’ve run the local Java Users’ Group for almost 13 years. It is a volunteer job with no compensation, but it helps me stay in touch with the tech community and it also adds some credibility to my name. 
It is a lot of effort at times, but the success of the group is something that I’m quite proud of. I don’t recruit out of the group, but most of the group are aware of my services and come to me for my services when necessary. I specialize – Historically I focused both geographically and on one technology (Philadelphia Java market). I’ve opened that up a bit as many of those Java pros are now doing mobile, RoR, and alternative JVM lang work, and I’m a bit more regional now. Staying specialized in one geography and one technology forces a recruiter to be very careful about his/her reputation, as the degrees of Kevin Bacon are always low. Flat fees – A flat fee lets the company know that my goal is to fill the position and how much you pay the candidate is irrelevant. I inform candidates of this relationship so they are aware that my goal is to get them an offer that they will accept, and my client companies know that if I say I need $5K more in salary to close the deal that I am not trying to line my pockets.CONCLUSION Don’t expect my model to be adopted by any other firms, but I wanted to share it with readers as at least one alternative to the traditional contingency model that seems to be the biggest complaint for both candidates and hiring firms. And I believe the agent model would work quite nicely for all parties involved if anyone would like to inquire. If you truly want to disrupt the industry, let’s talk. Reference: How To Disrupt Technical Recruiting – Hire an Agent from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Advanced ZK: Asynchronous UI updates and background processing – part 1

Asynchronous UI updates are very useful, because they typically improve the responsiveness, usability and general feel of user interfaces. I’ll be focusing here on the ZK framework, but the same principles generally apply to desktop UIs too (Swing, SWT).

Long-running processing

Sometimes you might have a database query or an external web service call that takes a long time. Typically these jobs are synchronous, so there is a specific point in the code where the system has to wait for a result, blocking the thread that runs the code. If you end up running code like that in a UI thread, it will usually block the UI completely.

Real-time updates

Sometimes you don’t know in advance the exact time when something in the UI should be updated. For example, you could have a visual meter that shows the number of users in an application. When a new user enters the application, the UIs of the current users should be updated as soon as possible to reflect the new user count. You could use a timer-based mechanism to continuously check if the number of users has changed, but if there are many simultaneous users, the continuous checking will cause a very heavy load even when there is nothing to actually update in the UIs.

Basic concepts

Let’s first digest the title of this blog post: “Asynchronous UI updates and background processing”.

Background processing

In the long-running processing use case, the most obvious way to reduce UI blocking is to move expensive processing from the UI threads to background threads. It’s very important to understand what kind of thread will run the code in different parts of your application. For example, in ZK applications most code is executed by servlet threads, which are basically the servlet world’s equivalent of UI threads. In order to execute code in a background thread, we’ll need a thread pool. The easiest way is to use java.util.concurrent.ExecutorService, which was introduced in JDK 5. 
We can push Runnable objects to the ExecutorService, so we are basically asking it to run a specific block of code in some background thread. It is absolutely crucial to understand that frameworks that use ThreadLocals will have problems with this approach, because ThreadLocals that are set in the servlet thread will not be visible in the background thread. An example is Spring Security, which by default uses a ThreadLocal to store the security context (= user identity + other things).

Asynchronous UI updates

What does an asynchronous UI update mean in this context? Basically, once we have some information that we’d like to present in the UI, we inform the UI of the new data (= asynchronous) instead of directly updating the UI from the background thread (= synchronous). We cannot know in advance when the new information will be available, so we cannot ask for it from the client side (unless we use polling, which is expensive).

Server push in ZK

With ZK we have basically two different approaches for updating the UI once a background thread has new information. The name “server push” comes from the fact that the server has some new data that has to be pushed to the client, instead of the typical workflow (the client asks the server for information). Firstly, you can do synchronous updates by grabbing exclusive access to a desktop using Executions.activate/deactivate. Personally I discourage this, because while you hold exclusive access, UI threads have to wait until you deactivate the desktop. That’s why I won’t cover this method at all in this blog post. On the other hand, asynchronous updates are done using Executions.schedule, which conforms to the Event/EventListener model of normal event processing. The idea is that we can push normal ZK Event objects to EventListeners, and the client side will be informed of these events. 
After that, ZK does a normal AJAX request using JavaScript, and the Events will be processed by the EventListeners. This means that if we use asynchronous updates, all actual event processing is done by servlet threads and all ThreadLocals are available as usual. This makes the programming model very simple, because you only need normal event listener methods, without complex concurrent programming. Here’s a small example:

public class TestComposer extends GenericForwardComposer {
    private Textbox search;

    public void onClick$startButton() {
        if (!desktop.isServerPushEnabled()) {
            desktop.enableServerPush(true);
        }

        final String searchString = search.getValue();
        final EventListener el = this; // All GenericForwardComposers are also EventListeners

        // Don't do this in a real-world application. Use thread pools instead.
        Thread backgroundThread = new Thread() {
            public void run() {
                // In this part of the code the ThreadLocals are NOT available.
                // You must NOT touch any ZK related things (e.g. components, desktops).
                // If you need information from ZK, you need to get it before this code runs.
                // For example, here I've read searchString from a textbox, so I can use
                // the searchString variable without problems.
                String result = ... // Retrieve the result from somewhere
                Executions.schedule(desktop, el, new Event("onNewData", null, result));
            }
        };
        backgroundThread.start();
    }

    public void onNewData(Event event) {
        // In this part of the code the ThreadLocals ARE available.
        String result = (String) event.getData();
        // Do something with the result. You can touch any ZK stuff freely,
        // just like when a normal event is posted.
    }
}

In the next part I’ll show you how to use JDK5 ExecutorServices to run tasks without manually creating threads. If you truly want to understand ZK server push, you should also read the relevant ZK documentation. Happy coding and don’t forget to share! 
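The pool-based approach mentioned above can be sketched in plain Java, independently of ZK. The class name, pool size and placeholder result below are illustrative, not from the original post; the key point is that values needed from ZK (like searchString) are captured before the task is submitted, since servlet-thread ThreadLocals are not visible in the pool’s threads.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: run the blocking work on a shared pool instead of new Thread().
public class BackgroundSearch {
    private static final ExecutorService pool = Executors.newFixedThreadPool(4);

    public static Future<String> search(final String searchString) {
        // searchString is captured here, in the calling (servlet/UI) thread;
        // nothing inside call() touches ThreadLocals or ZK components.
        return pool.submit(new Callable<String>() {
            public String call() {
                // the long-running query or web service call would go here
                return "result for " + searchString;
            }
        });
    }

    public static void shutdown() {
        pool.shutdown(); // lets the JVM exit cleanly once tasks finish
    }
}
```

The returned Future would then be turned into a ZK Event via Executions.schedule once the result is ready, as in the composer example above.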
Reference: Advanced ZK: Asynchronous UI updates and background processing – part 1 from our JCG partner Joonas Javanainen at the Jawsy Solutions technical blog blog....

Provocateurs Gather the Best Requirements

Ask someone what they want, and they’ll tell you they want a faster horse. Provoke them, and they’ll tell you they have a ‘get there faster’ problem, an ‘equine waste disposal’ problem, and issues with total cost of ownership.

Thought Provoking

If your requirements elicitation session looks like the photo above, you’re doing it wrong. However, just asking people what they want and confirming that you heard what they said is also not enough. Active listening is important, but to capture great requirements, you also have to provoke thought about why someone is expressing a “requirement.” Adrian Reed wrote a great article this week (hat tip to Kevin Brennan) on asking provoking questions that leverage lateral thinking techniques to get better insight into the true requirements. Adrian presents eight questions, such as “Imagine if we fast-forward to 2 years after the implementation of this project, what will the organisation look like?” Some of his questions remind me a lot of the ideas behind Enthiosys’ Innovation Games (and Luke Hohmann’s Innovation Games book). The remember-the-future and product-box games immediately come to mind.

Unprovoked Thoughts

Most good subject matter experts I’ve met, when asked about the important problems to be solved, try to be really helpful and incorporate elements of solutions in their descriptions of problems. They will say things like “the system must integrate with [other system] to do X.” They may even ultimately be right that this particular system integration is a constraint, and that “X” is the only acceptable (by policy) way to achieve “Y.” But usually, neither constraint is a requirement; it is a solution approach. Subject matter experts who are not as good at having and sharing insights about their domain often confuse problem manifestations with their underlying problems. By analogy, it is like requesting treatment for a runny nose when the problem you have is the flu. 
You can dry up your nose and still feel horrible.

Provoking Questions Reveal Real Problems

Adrian’s questions are designed to help you understand that you’re treating the flu, and not just a runny nose. Requirements gathering is a lot like diagnosing a medical malady. You have to discover the real problems, the problems that people are willing to pay to solve. You have to uncover the latent problems that are “hidden” behind problem manifestations. In a (rare for me) American football analogy: the problem manifestation is that your quarterback is completing 1 of 20 forward passes. Replacing the quarterback and receivers will not solve the problem. The problem is that your offensive line is not able to give the quarterback sufficient time to throw higher-probability-of-success passes. Asking questions that force people to describe their objectives differently is a good way to bypass solution-design answers. It also creates chinks in the armor of problem manifestations. Completing more passes is not the future you’re looking for; winning more games is the goal. When you’re treating your flu, your goal is not to be sick but with a dry nose. Your goal is to be well. When you ask someone to remember the future, they will describe being well, not being merely dry-nosed. The product box will be a description of a winning team. Check out Adrian’s list of questions, and ask yourself: how do you get to the root causes? Ishikawa diagrams (also known as cause-and-effect or fishbone diagrams) provide a great visualization tool if you’re a spatial thinker or a whiteboard-talker. In the example below, you can quickly see that spending too much on fuel is part of the real problem (the cost of operation is too high). You can likewise see that under-inflated tires are a source of poor fuel economy. 
Check out the Ishikawa article for an explanation, or this article on providing context (with Ishikawa diagrams), and this article on buyer and user personas for more examples of problem decomposition.If you’ve got any examples of problem-statement-turned-problem, chime in below… Reference: Provocateurs Gather the Best Requirements from our JCG partner Scott Sehlhorst at the Business Analysis | Product Management | Software Requirements blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.