


Getting started with Scala and Scalatra – Part III

This post is the third in a series of articles I'm writing on Scalatra. In part I we created the initial environment, and in part II we created the first part of a REST API and added some tests. In this third part of the Scalatra tutorial we're going to look at the following topics:

- Persistence: we use ScalaQuery to persist elements from our model.
- Security: handle a security header containing an API key.

First we'll look at the persistence part. For this we'll be using ScalaQuery. Note that the code we show here is pretty much the same for ScalaQuery's successor Slick. Slick, however, requires Scala 2.10.0-M7, and this would mean we have to alter our complete Scala setup. So for this example we'll just use ScalaQuery (whose syntax is the same as Slick's). If you haven't done so already, install JRebel so your changes are reflected instantly without having to restart the service.

Persistence

I've used PostgreSQL for this example, but any of the databases supported by ScalaQuery can be used. The database model I've used is a very simple one:

CREATE TABLE sc_bid (
  id integer NOT NULL DEFAULT nextval('sc_bid_id_seq1'::regclass),
  "for" integer,
  min numeric,
  max numeric,
  currency text,
  bidder integer,
  date numeric,
  CONSTRAINT sc_bid_pkey1 PRIMARY KEY (id),
  CONSTRAINT sc_bid_bidder_fkey FOREIGN KEY (bidder)
    REFERENCES sc_user (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION,
  CONSTRAINT sc_bid_for_fkey FOREIGN KEY ("for")
    REFERENCES sc_item (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION
)

CREATE TABLE sc_item (
  id integer NOT NULL DEFAULT nextval('sc_bid_id_seq'::regclass),
  name text,
  price numeric,
  currency text,
  description text,
  owner integer,
  CONSTRAINT sc_bid_pkey PRIMARY KEY (id),
  CONSTRAINT sc_bid_owner_fkey FOREIGN KEY (owner)
    REFERENCES sc_user (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION
)

CREATE TABLE sc_user (
  id serial NOT NULL,
  username text,
  firstname text,
  lastname text,
  CONSTRAINT sc_user_pkey PRIMARY KEY (id)
)

As you can see, a simple model with a couple of foreign keys and autogenerated primary keys. We define a table for the users, one for the items and one for the bids. Note that the DEFAULT nextval clauses are database specific, so this DDL will only work for PostgreSQL. An additional note on PostgreSQL and ScalaQuery: ScalaQuery doesn't support schemas, which means we have to define the tables in the 'public' schema.

Before we can start working with ScalaQuery we first have to add it to our project. In build.sbt add the following dependencies:

"org.scalaquery" %% "scalaquery" % "0.10.0-M1",
"postgresql" % "postgresql" % "9.1-901.jdbc4"

After updating you'll have the ScalaQuery and Postgres jars you need. Let's look at one of the repositories: the BidRepository and the RepositoryBase trait.
// the trait
import org.scalaquery.session.Database

trait RepositoryBase {
  val db = Database.forURL("jdbc:postgresql://localhost/dutch_gis?user=jos&password=secret",
    driver = "org.postgresql.Driver")
}

// simple implementation of the bid repository
package org.smartjava.scalatra.repository

import org.smartjava.scalatra.model.Bid
import org.scalaquery.session._
import org.scalaquery.ql.basic.{BasicTable => Table}
import org.scalaquery.ql.TypeMapper._
import org.scalaquery.ql._
import org.scalaquery.ql.extended.PostgresDriver.Implicit._
import org.scalaquery.session.Database.threadLocalSession

class BidRepository extends RepositoryBase {

  object BidMapping extends Table[(Option[Long], Long, Double, Double, String, Long, Long)]("sc_bid") {
    def id = column[Option[Long]]("id", O PrimaryKey)
    def forItem = column[Long]("for", O NotNull)
    def min = column[Double]("min", O NotNull)
    def max = column[Double]("max", O NotNull)
    def currency = column[String]("currency")
    def bidder = column[Long]("bidder", O NotNull)
    def date = column[Long]("date", O NotNull)
    def noID = forItem ~ min ~ max ~ currency ~ bidder ~ date
    def * = id ~ forItem ~ min ~ max ~ currency ~ bidder ~ date
  }

  /**
   * Return an Option[Bid] if found or None otherwise
   */
  def get(bid: Long, user: String): Option[Bid] = {
    var result: Option[Bid] = None
    db withSession {
      // define the query and what we want as result
      val query = for (u <- BidMapping if u.id === bid)
        yield u.id ~ u.forItem ~ u.min ~ u.max ~ u.currency ~ u.bidder ~ u.date
      // map the results to a Bid object
      val inter = query mapResult {
        case (id, forItem, min, max, currency, bidder, date) =>
          Option(new Bid(id, forItem, min, max, currency, bidder, date))
      }
      // check if there is one in the list and return it, or None otherwise
      result = inter.list match {
        case _ :: tail => inter.first
        case Nil => None
      }
    }
    // return the found bid
    result
  }

  /**
   * Create a bid using ScalaQuery. This will always create a new bid
   */
  def create(bid: Bid): Bid = {
    var id: Long = -1
    // start a db session
    db withSession {
      // create a new bid
      val res = BidMapping.noID insert (bid.forItem.longValue, bid.minimum.doubleValue,
        bid.maximum.doubleValue, bid.currency, bid.bidder.toLong, System.currentTimeMillis())
      // get the autogenerated id
      val idQuery = Query(SimpleFunction.nullary[Long]("LASTVAL"))
      id = idQuery.list().head
    }
    // create a bid to return
    val createdBid = new Bid(Option(id), bid.forItem, bid.minimum, bid.maximum,
      bid.currency, bid.bidder, bid.date)
    createdBid
  }

  /**
   * Delete a bid
   */
  def delete(user: String, bid: Long): Option[Bid] = {
    // get the bid we're deleting
    val result = get(bid, user)
    // delete the bid
    val toDelete = BidMapping where (_.id === bid)
    db withSession {
      toDelete.delete
    }
    // return the deleted bid
    result
  }
}

Looks complex, right? Well, it isn't once you've got the hang of how ScalaQuery works. With ScalaQuery you create a table mapping. In this mapping you specify the type of fields you expect.
In this example our mapping table looks like this:

object BidMapping extends Table[(Option[Long], Long, Double, Double, String, Long, Long)]("sc_bid") {
  def id = column[Option[Long]]("id", O PrimaryKey)
  def forItem = column[Long]("for", O NotNull)
  def min = column[Double]("min", O NotNull)
  def max = column[Double]("max", O NotNull)
  def currency = column[String]("currency")
  def bidder = column[Long]("bidder", O NotNull)
  def date = column[Long]("date", O NotNull)
  def noID = forItem ~ min ~ max ~ currency ~ bidder ~ date
  def * = id ~ forItem ~ min ~ max ~ currency ~ bidder ~ date
}

Here we define the mapping of the table 'sc_bid'. For each field we define the name of the column and its type. If we want, we can add specific options that are taken into account when you generate your DDL from this mapping (not something I've used for this example). The last two defs define the 'constructors' for this mapping. The 'def *' is the default constructor, where we have all the fields available beforehand; 'def noID' is the one we'll use when we create a bid for the first time and don't have an id yet. Remember, the ids are autogenerated by the database. With this mapping we can start writing our repository functions. Let's start with the first one, get:

/**
 * Return an Option[Bid] if found or None otherwise
 */
def get(bid: Long, user: String): Option[Bid] = {
  var result: Option[Bid] = None
  db withSession {
    // define the query and what we want as result
    val query = for (u <- BidMapping if u.id === bid)
      yield u.id ~ u.forItem ~ u.min ~ u.max ~ u.currency ~ u.bidder ~ u.date
    // map the results to a Bid object
    val inter = query mapResult {
      case (id, forItem, min, max, currency, bidder, date) =>
        Option(new Bid(id, forItem, min, max, currency, bidder, date))
    }
    // check if there is one in the list and return it, or None otherwise
    result = inter.list match {
      case _ :: tail => inter.first
      case Nil => None
    }
  }
  // return the found bid
  result
}

Here you can see that we use the standard Scala for construct to create a query that iterates over the table mapped by BidMapping. To make sure we only get the rows we want, we apply a filter using the 'if u.id === bid' statement. In the yield statement we specify the fields we want to return. By using mapResult on the query we can process the results, convert them to our Bid object and add them to a list. We then check whether there really is something in the list and return an Option[Bid]. Note that this can be written more concisely, but this version nicely explains the steps you need to take. The next function is create:

def create(bid: Bid): Bid = {
  var id: Long = -1
  // start a db session
  db withSession {
    // create a new bid
    val res = BidMapping.noID insert (bid.forItem.longValue, bid.minimum.doubleValue,
      bid.maximum.doubleValue, bid.currency, bid.bidder.toLong, System.currentTimeMillis())
    // get the autogenerated id
    val idQuery = Query(SimpleFunction.nullary[Long]("LASTVAL"))
    id = idQuery.list().head
  }
  // create a bid to return
  val createdBid = new Bid(Option(id), bid.forItem, bid.minimum, bid.maximum,
    bid.currency, bid.bidder, bid.date)
  createdBid
}

We now use the custom BidMapping 'constructor' noID to generate an insert statement. If we didn't specify noID we would be required to supply an id ourselves. Now that we've inserted a new Bid object into the database, we need to return the just-created Bid, with the new id, to the user. For this we execute a simple query on PostgreSQL's 'LASTVAL' function, which returns the last autogenerated value.
In our case, this is the id of the bid that was created. From this information we create a new Bid, which we return. The last operation for our repository is the delete function. This function first checks whether the specified bid is present, and if it is, deletes it.

def delete(user: String, bid: Long): Option[Bid] = {
  // get the bid we're deleting
  val result = get(bid, user)
  // delete the bid
  val toDelete = BidMapping where (_.id === bid)
  db withSession {
    toDelete.delete
  }
  // return the deleted bid
  result
}

Here we use the 'where' filter to create the query we want to execute. When we call delete on this filter, all matching elements are deleted. And that's the most basic use of ScalaQuery for persistence. If you need more complex operations (like joins), look at the scalaquery.org website for examples.
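To connect this back to the REST API from part II, here is a minimal sketch of how the repository could be wired into a Scalatra route. The URL layout, the placeholder user and the response handling are my assumptions, not code from the original project:

import org.scalatra.ScalatraServlet

class BidServlet extends ScalatraServlet {

  val repo = new BidRepository

  // look up a single bid by its autogenerated id
  get("/bids/:id") {
    repo.get(params("id").toLong, "anonymous") match {
      case Some(bid) => bid                // rendered by whatever serialization you configured
      case None      => halt(status = 404) // unknown id
    }
  }
}

Note that the repository call is synchronous; every request borrows a session from the ScalaQuery Database defined in RepositoryBase.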
We now have functionality to create and delete bids, so it would also be nice to have some way to authenticate our users. For this tutorial we're going to create a very simple API-key based authentication scheme. For every request the user has to add a specific header with his API key. We can then use the information from this key to determine who the user is, and whether he can delete or access specific information.

Security

We'll start with the key generation part. When someone wants to use our API we require them to specify an application name and the hostname from which the requests will be made. We use this information to generate a key they have to use in each request. This key is just a simple HMAC hash.

package org.smartjava.scalatra.util

import javax.crypto.spec.SecretKeySpec
import javax.crypto.Mac
import org.apache.commons.codec.binary.Base64

object SecurityUtil {

  def calculateHMAC(secret: String, applicationName: String, hostname: String): String = {
    val signingKey = new SecretKeySpec(secret.getBytes(), "HmacSHA1")
    val mac = Mac.getInstance("HmacSHA1")
    mac.init(signingKey)
    val rawHmac = mac.doFinal((applicationName + "|" + hostname).getBytes())
    new String(Base64.encodeBase64(rawHmac))
  }

  def checkHMAC(secret: String, applicationName: String, hostname: String, hmac: String): Boolean = {
    calculateHMAC(secret, applicationName, hostname) == hmac
  }

  def main(args: Array[String]) {
    val hmac = SecurityUtil.calculateHMAC("The passphrase to calculate the secret with", "App 1", "localhost")
    println(hmac)
    println(SecurityUtil.checkHMAC("The passphrase to calculate the secret with", "App 1", "localhost", hmac))
  }
}

The above helper object is used to calculate the initial hash we send to the user, and can be used to validate an incoming hash. To use this in our REST API we need to intercept all the incoming requests and check these headers before invoking the specific route. With Scalatra we can do this by using the before() function:

package org.smartjava.scalatra.routes

import org.scalatra.ScalatraBase
import org.smartjava.scalatra.repository.KeyRepository

/**
 * When this trait is used, the incoming request
 * is checked for authentication based on the
 * X-API-Key header.
 */
trait Authentication extends ScalatraBase {

  val ApiHeader = "X-API-Key"
  val AppHeader = "X-API-Application"
  val KeyChecker = new KeyRepository

  /**
   * A simple interceptor that checks for the existence
   * of the correct headers
   */
  before() {
    // we check the host where the request is made
    val servername = request.serverName
    val header = Option(request.getHeader(ApiHeader))
    val app = Option(request.getHeader(AppHeader))
    List(header, app) match {
      case List(Some(x), Some(y)) => isValidHost(servername, x, y)
      case _ => halt(status = 401, headers = Map("WWW-Authenticate" -> "API-Key"))
    }
  }

  /**
   * Check whether the host is valid. This is done by checking the host against
   * a database with keys.
   */
  private def isValidHost(hostName: String, apiKey: String, appName: String): Boolean = {
    KeyChecker.validateKey(apiKey, appName, hostName)
  }
}

This trait, which we include in our main Scalatra servlet, gets the relevant information from the request and checks whether the supplied hash corresponds to the one generated by the code you saw previously. If it does, the request is passed on; if not, we halt the processing of the request and send back a 401 explaining how to authenticate with this API. A client that omits these headers gets a 401 response; a client that sends the correct headers gets the normal response.

That's it for this part. In the next part we'll look at Dependency Injection, CQRS, Akka and running this code in the cloud.

Reference: Tutorial: Getting started with scala and scalatra – Part III from our JCG partner Jos Dirksen at the Smart Java blog.

Fixing common Java security code violations in Sonar

This article aims to show you how to quickly fix the most common Java security code violations. It assumes that you are familiar with the concept of code rules and violations and how Sonar reports on them. However, if you haven't heard these terms before, you might take a look at Sonar Concepts or the forthcoming book about Sonar for a more detailed explanation. To get an idea: during a Sonar analysis, your project is scanned by many tools to ensure that the source code conforms to the rules you've created in your quality profile. Whenever a rule is violated… well, a violation is raised. With Sonar you can track these violations with the violations drill-down view or in the source code editor. There are hundreds of rules, categorized based on their importance. I'll try, in future posts, to cover as many as I can, but for now let's take a look at some common security rules / violations. There are two pairs of rules (all of them ranked as critical in Sonar) we are going to examine right now.

1. Array is Stored Directly (PMD) and Method returns internal array (PMD)

These violations appear when an internal array is stored or returned directly from a method. The following example illustrates a simple class that violates these rules:

public class CalendarYear {
    private String[] months;

    public String[] getMonths() {
        return months;
    }

    public void setMonths(String[] months) {
        this.months = months;
    }
}

To eliminate them you have to clone the array before storing / returning it, as shown in the following class implementation, so no one can modify or get the original data of your class but only a copy of it:

public class CalendarYear {
    private String[] months;

    public String[] getMonths() {
        return months.clone();
    }

    public void setMonths(String[] months) {
        this.months = months.clone();
    }
}

2. Nonconstant string passed to execute method on an SQL statement (FindBugs) and A prepared statement is generated from a nonconstant String (FindBugs)

Both rules are related to database access when using JDBC libraries. Generally there are two ways to execute an SQL command via a JDBC connection: Statement and PreparedStatement. There is a lot of discussion about the pros and cons, but that's out of the scope of this post. Let's see how the first violation is raised, based on the following source code snippet:

Statement stmt = conn.createStatement();
String sqlCommand = "Select * FROM customers WHERE name = '" + custName + "'";
stmt.execute(sqlCommand);

You've already noticed that the sqlCommand parameter passed to the execute method is dynamically created at run-time, which is not acceptable by this rule. A similar situation causes the second violation:

String sqlCommand = "insert into customers (id, name) values (?, ?)";
PreparedStatement stmt = conn.prepareStatement(sqlCommand);

You can overcome these problems in three different ways. You can use StringBuilder or the String.format method to create the values of the string variables. If applicable, you can define the SQL command as a constant in the class declaration, but that only covers the case where the SQL command does not need to change at runtime. Let's re-write the first code snippet using StringBuilder:

Statement stmt = conn.createStatement();
stmt.execute(new StringBuilder("Select * FROM customers WHERE name = '")
        .append(custName)
        .append("'").toString());

and using String.format:

Statement stmt = conn.createStatement();
String sqlCommand = String.format("Select * from customers where name = '%s'", custName);
stmt.execute(sqlCommand);

For the second example you can just declare the sqlCommand as follows:

private static final String SQLCOMMAND = "insert into customers (id, name) values (?, ?)";
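Since the constant-plus-placeholders version is the one that actually keeps user input out of the SQL text, here is a minimal sketch of how such a constant command can be used with parameter binding. The table layout and connection handling are assumptions for illustration, not part of the original article:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CustomerDao {

    // the SQL text is a compile-time constant, so FindBugs has nothing to flag
    private static final String INSERT_CUSTOMER =
            "insert into customers (id, name) values (?, ?)";

    public void insertCustomer(Connection conn, int id, String name) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement(INSERT_CUSTOMER);
        try {
            stmt.setInt(1, id);        // values are bound, never concatenated
            stmt.setString(2, name);
            stmt.executeUpdate();
        } finally {
            stmt.close();
        }
    }
}

Because the values travel as bound parameters rather than as part of the SQL string, this form also closes the SQL injection hole that plain string concatenation leaves open.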
There are more security rules, such as the blocker Hardcoded constant database password, but I assume that nobody still hardcodes passwords in source code files… In following articles I'm going to show you how to adhere to performance and bad practice rules. Until then I'm waiting for your comments or suggestions. Happy coding and don't forget to share!

Reference: Fixing common Java security code violations in Sonar from our JCG partner Papapetrou P. Patroklos at the Only Software matters blog.

VisualVM: Monitoring Remote JVM Over SSH (JMX Or Not)

VisualVM is a great tool for monitoring the JVM (5.0+) regarding memory usage, threads, GC, MBeans etc. Let's see how to use it over SSH to monitor (or even profile, using its sampler) a remote JVM, either with JMX or without it. This post is based on Sun JVM 1.6 running on Ubuntu 10 and VisualVM 1.3.3.

1. Communication: JStatD vs. JMX

There are two modes of communication between VisualVM and the JVM: either over the Java Management Extensions (JMX) protocol or over jstatd.

jstatd

jstatd is a daemon that is distributed with the JDK. You start it from the command line (it's likely necessary to run it as the user running the target JVM or as root) on the target machine, and VisualVM will contact it to fetch information about the remote JVMs.

- Advantages: Can connect to a running JVM; no need to start it with special parameters.
- Disadvantages: Much more limited monitoring capabilities (f.ex. no CPU usage monitoring, not possible to run the Sampler and/or take thread dumps).

Ex.:

bash> cat jstatd.all.policy
grant codebase "file:${java.home}/../lib/tools.jar" {
  permission java.security.AllPermission;
}
bash> sudo /path/to/JDK/bin/jstatd -J-Djava.security.policy=jstatd.all.policy
# You can specify port with -p number and get more info with -J-Djava.rmi.server.logCalls=true

Note: Replace "${java.home}/../lib/tools.jar" with the absolute "/path/to/jdk/lib/tools.jar" if you have only copied but not installed the JDK. If you get the failure

Could not create remote object
access denied (java.util.PropertyPermission java.rmi.server.ignoreSubClasses write)
java.security.AccessControlException: access denied (java.util.PropertyPermission java.rmi.server.ignoreSubClasses write)
 at java.security.AccessControlContext.checkPermission(AccessControlContext.java:374)

then jstatd likely hasn't been started with the right java.security.policy file (try to provide a fully qualified path to it). More info about VisualVM and jstatd is available from Oracle.

JMX

- Advantages: Using JMX will give you the full power of VisualVM.
- Disadvantages: Need to start the JVM with some system properties.

You will generally want to use something like the following properties when starting the target JVM (though you could also enable SSL and/or require username and password):

yourJavaCommand... -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.port=1098

See Remote JMX Connections.

2. Security: SSH

The easiest way to connect to the remote JMX or jstatd over SSH is to use a SOCKS proxy, which standard ssh clients can set up.

2.1 Set Up the SSH Tunnel With SOCKS

ssh -v -D 9696 my_server.example.com

2.2 Configure VisualVM to Use the Proxy

Tools -> Options -> Network -> Manual Proxy Settings: check it and configure the SOCKS proxy at localhost and port 9696.

2.3 Connect VisualVM to the Target

File -> Add Remote Host… and type the IP or hostname of the remote machine.

JStatD Connection

You should see logs both in the ssh window (thanks to its '-v', f.ex. 'debug1: Connection to port 9696 forwarding to socks port 0 requested.' and 'debug1: channel 3: free: direct-tcpip: listening port 9696 for 10.2.47.71 port 1099, connect from 127.0.0.1 port 61262, nchannels 6') and in the console where you started jstatd (many, f.ex. 'FINER: RMI TCP Connection(23)-10.2.47.71: …'). Wait a few minutes after having added the remote host; you should then see the JVMs running there. Available stats: JVM arguments; Monitor: heap, classes, threads (but not CPU). Sampler and MBeans require JMX.

JMX Connection

Right-click on the remote host you have added and select Add JMX Connection…, then type the JMX port you have chosen. You should see similar logs as with jstatd. Available stats: also CPU usage, system properties, a detailed Threads report with access to stack traces, and CPU sampling (memory sampling not supported).
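If VisualVM can't see the remote JVM, it can help to test the JMX endpoint with a few lines of plain Java first. This is a minimal sketch of my own, not part of the original post; it assumes the JMX properties above and routes through the same SOCKS proxy via the standard socksProxyHost/socksProxyPort system properties, which the RMI transport should honor since SOCKS support sits at the java.net socket layer:

import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxPing {
    public static void main(String[] args) throws Exception {
        // route traffic through the SSH SOCKS proxy from section 2.1
        System.setProperty("socksProxyHost", "localhost");
        System.setProperty("socksProxyPort", "9696");

        // same host and port as the com.sun.management.jmxremote.port setting
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://my_server.example.com:1098/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            System.out.println("Connected, MBean count: " + mbsc.getMBeanCount());
        } finally {
            connector.close();
        }
    }
}

If this prints an MBean count, the tunnel and the JMX settings are fine, and any remaining trouble is on the VisualVM side.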
Note: Sampler vs. Profiler

The VisualVM Sampler excludes time spent in Object.wait and Thread.sleep (f.ex. waiting on I/O). Use the NetBeans Profiler to profile or sample a remote application if you want more control or want the possibility to include Object.wait and Thread.sleep time. It requires its Remote Pack (a Java agent, i.e. a JAR file) to be in the target JVM (NetBeans' Attach Wizard can generate the remote pack for you in step 4, Manual integration, and show you the options to pass to the target JVM to use it). You can run the profiler over SSH by forwarding its default port (5140) and attaching to the forwarded port at localhost. (NetBeans version 7.1.1.) Don't forget to share!

Reference: VisualVM: Monitoring Remote JVM Over SSH (JMX Or Not) from our JCG partner Jakub Holy at the The Holy Java blog.

OutOfMemoryError: unable to create new native thread – Problem Demystified

As you may have seen from my previous tutorials and case studies, Java Heap Space OutOfMemoryError problems can be complex to pinpoint and resolve. One of the common problems I have observed in Java EE production systems is OutOfMemoryError: unable to create new native thread, thrown when the HotSpot JVM is unable to create a new Java thread. This article will revisit this HotSpot VM error and provide you with recommendations and resolution strategies. If you are not familiar with the HotSpot JVM, I first recommend that you look at a high level view of its internal memory spaces. This knowledge is important in order for you to understand OutOfMemoryError problems related to the native (C-Heap) memory space.

OutOfMemoryError: unable to create new native thread – what is it?

Let's start with a basic explanation. This HotSpot JVM error is thrown when the internal JVM native code is unable to create a new Java thread. More precisely, it means that the JVM native code was unable to create a new 'native' thread from the OS (Solaris, Linux, Mac, Windows…). We can clearly see this logic in the OpenJDK 1.6 and 1.7 implementations. Unfortunately, at this point you won't get more detail than this error, with no indication of why the JVM is unable to create a new thread from the OS…

HotSpot JVM: 32-bit or 64-bit?

Before you go any further in the analysis, one fundamental fact that you must determine from your Java or Java EE environment is which version of the HotSpot VM you are using, e.g. 32-bit or 64-bit. Why is it so important? What you will learn shortly is that this JVM problem is very often related to native memory depletion, either at the JVM process or OS level. For now please keep in mind that:

- A 32-bit JVM process is in theory allowed to grow up to 4 GB (even much lower on some older 32-bit Windows versions). For a 32-bit JVM process, the C-Heap is in a race with the Java Heap and PermGen space, e.g. C-Heap capacity = 2-4 GB – Java Heap size (-Xms, -Xmx) – PermGen size (-XX:MaxPermSize).
- A 64-bit JVM process is in theory allowed to use most of the available OS virtual memory, up to 16 EB (16 million TB).

As you can see, if you allocate a large Java Heap (2 GB+) for a 32-bit JVM process, the native memory space capacity is reduced automatically, opening the door for JVM native memory allocation failures. For a 64-bit JVM process, your main concern, from a JVM C-Heap perspective, is the capacity and availability of the OS physical, virtual and swap memory.

OK great, but how does native memory affect Java thread creation?

Now back to our primary problem. Another fundamental JVM aspect to understand is that Java threads created from the JVM require native memory from the OS. You should now start to understand the source of your problem… The high level thread creation process is as per below:

- A new Java thread is requested from the Java program & JDK
- The JVM native code then attempts to create a new native thread from the OS
- The OS then attempts to create a new native thread as per attributes, which include the thread stack size. Native memory is then allocated (reserved) from the OS to the Java process native memory space, assuming the process has enough address space (e.g. 32-bit process) to honour the request
- The OS will refuse any further native thread & memory allocation if the 32-bit Java process size has depleted its memory address space, e.g. the 2 GB, 3 GB or 4 GB process size limit
- The OS will also refuse any further thread & native memory allocation if the virtual memory of the OS is depleted (including Solaris swap space depletion, since thread access to the stack can generate a SIGBUS error, crashing the JVM: http://bugs.sun.com/view_bug.do?bug_id=6302804)

In summary:

- Java thread creation requires native memory available from the OS, for both 32-bit & 64-bit JVM processes
- For a 32-bit JVM, Java thread creation also requires memory available from the C-Heap or process address space
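The limit is easy to demonstrate. The following throwaway snippet is my illustration, not code from the original article; do not run it on a machine you care about. It keeps starting sleeping threads until the JVM fails with exactly this error, typically long before the Java heap is exhausted, and lowering the thread stack size (-Xss) raises the count, confirming that the ceiling is native memory rather than heap:

public class ThreadLimitDemo {
    public static void main(String[] args) {
        int count = 0;
        try {
            while (true) {
                // each started thread reserves a native stack from the OS
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE); // park the thread forever
                        } catch (InterruptedException ignored) {
                        }
                    }
                });
                t.setDaemon(true);
                t.start();
                count++;
            }
        } catch (OutOfMemoryError oome) {
            // typically "unable to create new native thread"
            System.out.println("Failed after " + count + " threads: " + oome.getMessage());
        }
    }
}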
Problem diagnostic

Now that you understand native memory and JVM thread creation a little better, it is time to look at your problem. As a starting point, I suggest that you follow the analysis approach below:

- Determine if you are using a HotSpot 32-bit or 64-bit JVM
- When the problem is observed, take a JVM Thread Dump and determine how many threads are active
- Monitor closely the Java process size utilization before and during the OOM problem replication
- Monitor closely the OS virtual memory utilization before and during the OOM problem replication, including the swap memory space utilization if using the Solaris OS

Proper data gathering as per above will allow you to collect the proper data points for the first level of investigation. The next step will be to look at the possible problem patterns and determine which one is applicable to your case.
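For the second bullet, a thread dump is the authoritative answer, but you can also log the thread count from inside the JVM while replicating the problem. A minimal sketch (my illustration, not code from the original article):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountLogger {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        while (true) {
            // live count plus the high-water mark since JVM start
            System.out.println("live=" + threads.getThreadCount()
                    + " peak=" + threads.getPeakThreadCount()
                    + " startedTotal=" + threads.getTotalStartedThreadCount());
            Thread.sleep(5000);
        }
    }
}

Comparing the peak value against an established baseline tells you quickly whether a thread surge is part of the picture.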
Problem pattern #1 – C-Heap depletion (32-bit JVM)

From my experience, OutOfMemoryError: unable to create new native thread is quite common for 32-bit JVM processes. This problem is often observed when too many threads are created vs. the C-Heap capacity. JVM Thread Dump analysis and Java process size monitoring will allow you to determine if this is the cause.

Problem pattern #2 – OS virtual memory depletion (64-bit JVM)

In this scenario, the OS virtual memory is fully depleted. This could be due to a few 64-bit JVM processes taking a lot of memory, e.g. 10 GB+, and / or other high memory footprint rogue processes. Again, Java process size & OS virtual memory monitoring will allow you to determine if this is the cause.

Problem pattern #3 – OS virtual memory depletion (32-bit JVM)

The third scenario is less frequent but can still be observed. The diagnostic can be a bit more complex, but the key analysis point will be to determine which processes are causing a full OS virtual memory depletion. Your 32-bit JVM processes could be either the source or the victim, such as rogue processes using most of the OS virtual memory and preventing your 32-bit JVM processes from reserving more native memory for their thread creation. Please note that this problem can also manifest itself as a full JVM crash (as per the sample below) when running out of OS virtual memory or swap space on Solaris.

#
# A fatal error has been detected by the Java Runtime Environment:
#
# java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?
#
# Internal Error (allocation.cpp:166), pid=2290, tid=27
# Error: ChunkPool::allocate
#
# JRE version: 6.0_24-b07
# Java VM: Java HotSpot(TM) Server VM (19.1-b02 mixed mode solaris-sparc )
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
#

--------------- T H R E A D ---------------

Current thread (0x003fa800): JavaThread "CompilerThread1" daemon [_thread_in_native, id=27, stack(0x65380000,0x65400000)]

Stack: [0x65380000,0x65400000], sp=0x653fd758, free space=501k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
…

Native memory depletion: symptom or root cause?

You now understand your problem and know which problem pattern you are dealing with. You are now ready to provide recommendations to address the problem… are you? Your work is not done yet. Please keep in mind that this JVM OOM event is often just a 'symptom' of the actual root cause of the problem. The root cause is typically much deeper, so before providing recommendations to your client I recommend that you really perform deeper analysis. The last thing you want to do is to simply address and mask the symptoms. Solutions such as increasing OS physical / virtual memory or upgrading all your JVM processes to 64-bit should only be considered once you have a good view of the root cause and the production environment capacity requirements.

The next fundamental question to answer is how many threads were active at the time of the OutOfMemoryError. In my experience with Java EE production systems, the most common root cause is actually the application and / or Java EE container attempting to create too many threads at a given time when facing non-happy paths such as a thread stuck in a remote IO call, thread race conditions etc. In this scenario, the Java EE container can start creating too many threads when attempting to honour incoming client requests, increasing pressure on the C-Heap and native memory allocation. Bottom line: before blaming the JVM, please perform your due diligence and determine if you are dealing with an application or Java EE container thread tuning problem as the root cause. Once you understand and address the root cause (the source of thread creation), you can then work on tuning your JVM and OS memory capacity in order to make it more fault tolerant and better 'survive' these sudden thread surge scenarios.

Recommendations:

- First perform a JVM Thread Dump analysis and determine the source of all the active threads vs. an established baseline. Determine what is causing your Java application or Java EE container to create so many threads at the time of the failure
- Ensure that your monitoring tools closely monitor both your Java VM process size & OS virtual memory. This crucial data will be required in order to perform a full root cause analysis
- Do not assume that you are dealing with an OS memory capacity problem. Look at all running processes and determine if your JVM processes are actually the source of the problem or the victim of other processes consuming all the virtual memory
- Revisit your Java EE container thread configuration & JVM thread stack size. Determine if the Java EE container is allowed to create more threads than your JVM process and / or OS can handle
- Determine if the Java Heap size of your 32-bit JVM is too large, preventing the JVM from creating enough threads to fulfill your client requests. In this scenario, you will have to consider reducing your Java Heap size (if possible), vertical scaling or an upgrade to a 64-bit JVM
Capacity planning analysis to the rescue

As you may have seen from my past article on the Top 10 Causes of Java EE Enterprise Performance Problems, lack of capacity planning analysis is often the source of the problem. Any comprehensive load and performance testing exercise should also properly determine the Java EE container thread, JVM & OS native memory requirements for your production environment, including impact measurements of 'non-happy' paths. This approach will allow your production environment to stay away from this type of problem and lead to better system scalability and stability in the long run. Don't forget to share!

Reference: OutOfMemoryError: unable to create new native thread – Problem Demystified from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog.

Multiple versions of Java on OS X Mountain Lion

Before Mountain Lion, Java was bundled with OS X. It seems that during the upgrade, the Java 6 version I had on my machine was removed. Apparently the reason for uninstalling Java during the upgrade process was a security issue in the Java runtime. This way you are forced to install the latest version, which fixed the security problem. So I went to /Applications/Utilities/, opened a Terminal and executed the following command:

java -version
==> 'No Java runtime present …'

A window prompted me to install Java. Click 'Install' and get the latest version. I installed it, but right after that I downloaded and installed JDK SE 7 from Oracle. After installation, open Java Preferences (Launchpad/Others) and you will see both versions listed. Now I knew I had two versions of Java, but which one am I using?

$ java -version
java version "1.6.0_35"
Java(TM) SE Runtime Environment (build 1.6.0_35-b10-428-11M3811)
Java HotSpot(TM) 64-Bit Server VM (build 20.10-b01-428, mixed mode)

So what if I want to use JDK SE 7 from Oracle? I just had to drag Java SE 7 to the first position in the list in the Java Preferences window. This time:

$ java -version
java version "1.7.0_05"
Java(TM) SE Runtime Environment (build 1.7.0_05-b06)
Java HotSpot(TM) 64-Bit Server VM (build 23.1-b03, mixed mode)

I said to myself, let's find out more about how Java is installed on OS X, so I dug deeper. There are some very useful commands: whereis, which and ls -l.

whereis java
==> /usr/bin/java
ls -l /usr/bin/java
==> /System/Library/Frameworks/JavaVM.framework/Versions/Current/Commands/java

When I saw this I was a little bit curious, so I went to list the Versions directory:

cd /System/Library/Frameworks/JavaVM.framework/Versions
ls
==> 1.4 1.5 1.6 A CurrentJDK 1.4.2 1.5.0 1.6.0 Current

Now why do I have these old versions of Java on my machine? So I asked on Ask Different: http://apple.stackexchange.com/questions/57986/multiple-java-versions-support-on-os-x-and-java-home-location

$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.8.1
BuildVersion: 12B19

$ ls -l /System/Library/Frameworks/JavaVM.framework/Versions
total 64
lrwxr-xr-x 1 root wheel 10 Sep 16 15:55 1.4 -> CurrentJDK
lrwxr-xr-x 1 root wheel 10 Sep 16 15:55 1.4.2 -> CurrentJDK
lrwxr-xr-x 1 root wheel 10 Sep 16 15:55 1.5 -> CurrentJDK
lrwxr-xr-x 1 root wheel 10 Sep 16 15:55 1.5.0 -> CurrentJDK
lrwxr-xr-x 1 root wheel 10 Sep 16 15:55 1.6 -> CurrentJDK
lrwxr-xr-x 1 root wheel 10 Sep 16 15:55 1.6.0 -> CurrentJDK
drwxr-xr-x 7 root wheel 238 Sep 16 16:08 A
lrwxr-xr-x 1 root wheel 1 Sep 16 15:55 Current -> A
lrwxr-xr-x 1 root wheel 59 Sep 16 15:55 CurrentJDK -> /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents

It seems all the old versions are links to CurrentJDK, which is the Apple version, except A and Current, where Current is linked to A. I read something about this in another question. For me, A acts like a temp variable: if in Java Preferences you put Java 6 from Apple in the first position, A will hold Java 6 from Apple; if you put Java SE 7 from Oracle in the first position, A will point to that version. Current points to A.

./java -version
java version "1.6.0_35"
Java(TM) SE Runtime Environment (build 1.6.0_35-b10-428-11M3811)
Java HotSpot(TM) 64-Bit Server VM (build 20.10-b01-428, mixed mode)

./java -version
java version "1.7.0_05"
Java(TM) SE Runtime Environment (build 1.7.0_05-b06)
Java HotSpot(TM) 64-Bit Server VM (build 23.1-b03, mixed mode)

So the Current directory will point to the first Java version found in Java Preferences.
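Independent of which symlink wins, a program can always confirm which JVM it actually runs under from inside Java. This little sketch is my addition, not from the original post; it prints the system properties that matter:

public class WhichJava {
    public static void main(String[] args) {
        // the runtime version, e.g. 1.6.0_35 or 1.7.0_05
        System.out.println("java.version = " + System.getProperty("java.version"));
        // the directory of the JRE actually in use
        System.out.println("java.home    = " + System.getProperty("java.home"));
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
    }
}

Running it with each JVM first in Java Preferences shows java.home switching between the Apple and Oracle install locations discussed below.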
A very interesting detail is the following entry:

lrwxr-xr-x 1 root wheel 59 Sep 16 15:55 CurrentJDK -> /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents

This means Java from Apple is actually installed in '/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/'. What about Java SE 7? I could have searched the filesystem, but I found an easier way. If Java SE 7 is in the first position in Java Preferences:

$ /usr/libexec/java_home
/Library/Java/JavaVirtualMachines/1.7.0.jdk/Contents/Home

If Java SE 6 (System) is in the first position in Java Preferences:

$ /usr/libexec/java_home
/System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home

So Java on Mountain Lion (OS X) is most likely to be installed in one of these locations:

- /System/Library/Java/JavaVirtualMachines
- /Library/Java/JavaVirtualMachines
- ~/Library/Java/JavaVirtualMachines

What about /System/Library/Frameworks/JavaVM.framework/Versions? It seems that it is linked to the so-called 'Java bridge'. This appears to be the native part of the Java-on-OS-X installation.

Reference: Multiple versions of Java on OS X Mountain Lion from our JCG partner Cristian Chiovari at the Java Code Samples blog.

Resign Patterns: Eliminate them with Agile practices and Quality Metrics

This blog post is inspired by the article titled Resign Patterns by Michael Duell. I've included all the original text from the above article, but for each anti-pattern I mention (at least) one agile practice that IMHO is helpful in eliminating it, and one or more quality metrics that would help you identify it very early.

1 Cremational Patterns

Below is a list of five cremational patterns.

1.1 Abject Poverty
The Abject Poverty Pattern is evident in software that is so difficult to test and maintain that doing so results in massive budget overruns.
Agile Practices: Refactoring, TDD
Quality Metrics: LCOM4, RFC, Cyclomatic Complexity

1.2 Blinder
The Blinder Pattern is an expedient solution to a problem without regard for future changes in requirements. It is unclear whether the Blinder is named for the blinders worn by the software designer during the coding phase, or the desire to gouge his eyes out during the maintenance phase.
Agile Practices: Simple Design, Program Intently and Expressively
Quality Metrics: LCOM4, Cyclomatic Complexity

1.3 Fallacy Method
The Fallacy Method is evident in the handling of corner cases. The logic looks correct, but if anyone actually bothers to test it, or if a corner case occurs, the fallacy of the logic will become known.
Agile Practices: Unit Testing, User Stories, Customer Collaboration to define acceptance criteria and precise requirements
Quality Metrics: Code Coverage (Line + Branch Coverage)

1.4 ProtoTry
The ProtoTry Pattern is a quick and dirty attempt to develop a working model of software. The original intent is to rewrite the ProtoTry, using lessons learned, but schedules never permit. The ProtoTry is also known as legacy code.
Agile Practices: Refactoring, Code in Increments
Quality Metrics: Code Coverage

1.5 Simpleton
The Simpleton Pattern is an extremely complex pattern used for the most trivial of tasks. The Simpleton is an accurate indicator of the skill level of its creator.
Agile Practices: Simple Design, Program Intently and Expressively
Quality Metrics: LCOM4, Cyclomatic Complexity

2 Destructural Patterns

Below is a list of seven destructural patterns.

2.1 Adopter
The Adopter Pattern provides a home for orphaned functions. The result is a large family of functions that don't look anything alike, whose only relation to one another is through the Adopter.
Agile Practices: Simple Design, Refactoring, Program Intently and Expressively
Quality Metrics: LCOM4, Cyclomatic Complexity, RFC

2.2 Brig
The Brig Pattern is a container class for bad software. Also known as module.
Agile Practices: Refactoring, Code in Increments, Write Cohesive Code
Quality Metrics: Package Complexity, Package Size

2.3 Compromise
The Compromise Pattern is used to balance the forces of schedule vs quality. The result is software of inferior quality that is still late.
Agile Practices: Continuous Integration and Continuous Inspection
Quality Metrics: Technical Debt

2.4 Detonator
The Detonator is extremely common, but often undetected. A common example is a calculation based on a 2 digit year field. This bomb is out there, and waiting to explode!
Agile Practices: Code Reviews, Unit Testing
Quality Metrics: Code Violations

2.5 Fromage
The Fromage Pattern is often full of holes. Fromage consists of cheesy little software tricks that make portability impossible. The older this pattern gets, the riper it smells.
Agile Practices: Refactoring, Code in Increments
Quality Metrics: Technical Debt

2.6 Flypaper
The Flypaper Pattern is written by one designer and maintained by another. The designer maintaining the Flypaper Pattern finds herself stuck, and will likely perish before getting loose.
Agile Practices: Communicate in Code, Keep a Solutions Log
Quality Metrics: Documentation Density

2.7 ePoxy
The ePoxy Pattern is evident in tightly coupled software modules. As coupling between modules increases, there appears to be an epoxy bond between them.
Agile Practices: Refactoring, Simple Design, Code in Increments
Quality Metrics: Coupling, LCOM4

3 Misbehavioral Patterns

Below is a list of eleven misbehavioral patterns.

3.1 Chain of Possibilities
The Chain of Possibilities Pattern is evident in big, poorly documented modules. Nobody is sure of the full extent of its functionality, but the possibilities seem endless. Also known as Non-Deterministic.
Agile Practices: Communicate in Code, Keep a Solutions Log
Quality Metrics: Documentation Density

3.2 Commando
The Commando Pattern is used to get in and out quick, and get the job done. This pattern can break any encapsulation to accomplish its mission. It takes no prisoners.
Agile Practices: TDD, Unit Testing, Code in Increments, Write Cohesive Code
Quality Metrics: Couplings, Complexity, Code Coverage

3.3 Intersperser
The Intersperser Pattern scatters pieces of functionality throughout a system, making a function impossible to test, modify, or understand.
Agile Practices: Code in Increments, Write Cohesive Code
Quality Metrics: Complexity, Couplings

3.4 Instigator
The Instigator Pattern is seemingly benign, but wreaks havoc on other parts of the software system.
Agile Practices: Unit Testing, Continuous Integration
Quality Metrics: Code Coverage, Violations Density

3.5 Momentum
The Momentum Pattern grows exponentially, increasing size, memory requirements, complexity, and processing time.
Agile Practices: Code in Increments, Refactoring, Keep It Simple
Quality Metrics: Complexity, Size

3.6 Medicator
The Medicator Pattern is a real time hog that makes the rest of the system appear to be medicated with strong sedatives.
Agile Practices: Continuous Integration
Quality Metrics: Couplings

3.7 Absolver
The Absolver Pattern is evident in problem-ridden code developed by former employees. So many historical problems have been traced to this software that current employees can absolve their own software of blame by claiming that the Absolver is responsible for any problem reported. Also known as It's-not-in-my-code.
Agile Practices: Practice Collective Ownership, Attack Problems in Isolation, Unit Testing, TDD

3.8 Stake
The Stake Pattern is evident in problem-ridden software written by designers who have since moved up the management ladder. Although fraught with problems, the manager's stake in this software is too high to allow anyone to rewrite it, as it represents the pinnacle of the manager's technical achievement.
Agile Practices: Practice Collective Ownership, Be a Mentor

3.9 Eulogy
The Eulogy Pattern is eventually used on all projects employing the other 22 Resign Patterns. Also known as Post Mortem.
Agile Practices: Continuous Inspection, Code Reviews
Quality Metrics: ALL!!!

3.10 Tempest Method
The Tempest Method is used in the last few days before software delivery. The Tempest Method is characterized by a lack of comments and the introduction of several Detonator Patterns.
Agile Practices: Communicate in Code, Keep a Solutions Log
Quality Metrics: Documentation Density

3.11 Visitor From Hell
The Visitor From Hell Pattern is coincident with the absence of run time bounds checking on arrays. Inevitably, at least one control loop per system will have a Visitor From Hell Pattern that will overwrite critical data.
Agile Practices: Code Reviews, Unit Testing
Quality Metrics: Violations Density

Don't forget to share! Reference: Resign Patterns – Eliminate them with Agile practices and Quality Metrics from our JCG partner Papapetrou P. Patroklos at the Only Software matters blog.

An unambiguous software version scheme

When people talk about software versioning schemes, they often refer to the commonly used X.Y.Z numerical scheme. This is often called major.minor.build, but these abstract terms are not useful as they don't explicitly impart any meaning to each numerical component. This can lead to the simplest usage, where we just increment the last number for each release; I've seen versions such as 1.0.35. Alternatively, versions become a time-consuming point of debate. This is a shame, as we could impart some clear and useful information with versions. I'm going to suggest that rather than thinking 'X.Y.Z' we think 'api.feature.bug'.

What do I mean by this? You increment the appropriate number for what your release contains. For example, if you have only fixed bugs, you increment the last number. If you introduce even one new feature, you increment the middle number. If you change a published or documented API, be that the interface of a package, a SOAP or other XML API, or possibly the user interface (in a loose sense of the term 'API'), then you increment the first number. This system is unambiguous: no need for discussions about the numbering. You zero the digits to the right of any digit you increment, so if you fix a bug and introduce a new feature after version 5.3.6, the new version is 5.4.0. Unstated digits are assumed to be zero, so 5.4.0 is the same as 5.4.0.0 and 5.4.0.0.0.0.0.0.0… The version is not a number, and its components are not decimal digits. The version 5.261.67 is pretty unusual, but not invalid. Don't let it put you off. You might need to change an API due to a bug fix, but you'll need to be diligent, and cold to any politicking, about increasing the API digit. Otherwise the scheme loses value and you might as well just use a single number for versioning.
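To make the zeroing rule concrete, here is a minimal sketch of the bump logic as a hypothetical helper class (my example, not from the original post):

public final class Version {
    private final int api;
    private final int feature;
    private final int bug;

    public Version(int api, int feature, int bug) {
        this.api = api;
        this.feature = feature;
        this.bug = bug;
    }

    // each bump zeroes every component to its right
    public Version bumpApi()     { return new Version(api + 1, 0, 0); }
    public Version bumpFeature() { return new Version(api, feature + 1, 0); }
    public Version bumpBug()     { return new Version(api, feature, bug + 1); }

    @Override
    public String toString() {
        return api + "." + feature + "." + bug;
    }

    public static void main(String[] args) {
        // a release after 5.3.6 containing a bug fix and a new feature
        System.out.println(new Version(5, 3, 6).bumpFeature()); // prints 5.4.0
    }
}

The feature bump subsumes the bug fix, which is exactly the 5.3.6 to 5.4.0 example above.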

What if you're on version 5 of the product, the product lead has told everyone version 6 will be something special, but you need to fix a bug that means an API change? You need a hybrid version system, which consists of the external 'product version' and the internal 'software version'.

What about branching for production support? Technically no new features, but quite possibly one branch per customer. CVS has a suitable system: take the version of the release and append two digits, the first to indicate the branch, the second for the fix number. For example, if you branch from 5.4.0 then the first release will be 5.4.1.0, and the next branch's second release would be 5.4.2.1.

Reference: An unambiguous software version scheme from our JCG partner Alex Collins at the Alex Collins blog.

Android: Level Two Image Cache

In the mobile world, it's very common to have scrollable lists of items that contain information and an image or two. To make these lists perform well, most apps follow a lazy loading approach, which simply grabs and displays images in these types of lists. This approach works great for getting images into the system initially. However, there are still a few problems with it. The app must re-download each image every time the image needs to appear to the user in the list. This creates a pretty bad experience. The first time the user sees the list, s/he has to wait several seconds (or minutes with a bad network connection) before seeing the complete list with images. But the real pain comes when the user scrolls to a different part of the list and then scrolls back: this causes the entire image download process to restart!

We can remove the negative user experience by using an image cache. An image cache allows us to store recently downloaded images on the device. By storing them on the device, we can grab them from memory instead of asking the server for them again. This improves performance in several different ways, most notably:

- Images that have already been downloaded appear almost instantly, which makes the UI much snappier
- Battery life is saved by not having to go to the network for the images

There are some design considerations when using a cache. Since the cache is using memory on the device, it is fairly limited in space. This means we can only keep a certain number of images in the cache, so it's really important to make sure we keep the correct images stored there. 'Correct' is a very relative term, which can mean several different things depending on the problem at hand. As you can see here, there are several different types of caching algorithms that attempt to define 'correct' for different problems. In our case, 'correct' means we want the most-used images in the cache. Luckily for us, the type of cache we need is simple and one of the most commonly used: the LRU cache keeps the most frequently used images in memory, while discarding the least used ones. And even luckier, the Android SDK has an LruCache implementation, which was added in Honeycomb (it's also available in the Support Library if you need to support older versions as well).

Using an LRU Cache that is Stored on the Disk

An LRU cache allows you to serve images from memory instead of going to a server every time. This allows your app to respond much quicker and saves some battery life. One of the limits of the cache is the amount of memory you can use to actually store the images. This space is very constrained, especially on mobile devices. However, you do have access to one data store that has considerably more space: the disk. The disk on a mobile device is usually much larger than the main memory. Although disk access is much slower than main memory, it is still much faster than going to the network for an image, and you still get the battery life savings by not going to the network. For an excellent disk LRU cache implementation that works great with Android, check out Jake Wharton's DiskLruCache on GitHub.

Combining Memory and Disk Caches

Although both of the previous caches (memory LRU cache and disk LRU cache) work well independently, they work even better when combined. By using both caches at the same time, you get the best of both worlds:

- the loading speed of the main memory cache
- the increased cache size of the disk cache

Combining the two caches is fairly straightforward: check memory first, fall back to disk, and only then go to the network, remembering to populate both caches on the way back.
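Here is a minimal sketch of that flow (my illustration; the disk-tier wrapper methods are assumed helpers, since Jake Wharton's DiskLruCache works with streams and editors rather than Bitmaps directly):

import android.graphics.Bitmap;
import android.util.LruCache;

public class ImageCache {

    // memory tier: android.util.LruCache sized in bytes
    private final LruCache<String, Bitmap> memoryCache =
            new LruCache<String, Bitmap>(4 * 1024 * 1024) { // 4 MB, tune for your app
                @Override
                protected int sizeOf(String key, Bitmap value) {
                    return value.getRowBytes() * value.getHeight();
                }
            };

    public Bitmap get(String url) {
        // 1. memory first: essentially free
        Bitmap bitmap = memoryCache.get(url);
        if (bitmap != null) {
            return bitmap;
        }
        // 2. then disk: slower, but much faster than the network
        bitmap = readBitmapFromDiskCache(url); // assumed wrapper around DiskLruCache
        if (bitmap != null) {
            memoryCache.put(url, bitmap);      // promote to the memory tier
        }
        return bitmap; // null means the caller must download the image
    }

    public void put(String url, Bitmap bitmap) {
        // populate both tiers after a download
        memoryCache.put(url, bitmap);
        writeBitmapToDiskCache(url, bitmap);   // assumed wrapper around DiskLruCache
    }

    private Bitmap readBitmapFromDiskCache(String url) { /* decode from a DiskLruCache snapshot */ return null; }
    private void writeBitmapToDiskCache(String url, Bitmap bitmap) { /* compress into a DiskLruCache editor */ }
}

A caller asks the cache first and only starts a download on a null result, calling put() when the bytes arrive.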
Google provides some excellent example code for both the memory and disk caches here. All you have to do now is take the two cache implementations and combine them as described above to create a Level 2 image cache in Android!

Reference: Level Two Image Cache in Android from our JCG partner Isaac Taylor at the Programming Mobile blog.

Running RichFaces on WebLogic 12c

I initially thought I could write this post months back already, but I ended up being overwhelmed by different things. One of them was that I wasn't able to simply fire up the RichFaces showcase like I did for the 4.0 release. With all the JMS magic and the different provider checks in the showcase, this has become something of a challenge to simply build and deploy. Anyway, I was willing to give this a try, and here we go. If you want to get started with any of the JBoss technologies, it is a good idea to check with the JBoss Developer Framework first. That is a nice collection of examples and quickstarts to get you started on Java EE and its technologies. One of them is the RichFaces-Validation example, which demonstrates how to use JSF 2.0, RichFaces 4.2, CDI 1.0, JPA 2.0 and Bean Validation 1.0 together.

The Example

The example consists of a Member entity which has some JSR-303 (Bean Validation) constraints on it. Usually those are checked in several places, beginning with the database, on to the persistence layer and finally the view layer in close interaction with the client. Even if this quickguide doesn't contain a persistence layer, it starts with the entity, which reflects the real life situation quite well. The application contains a view layer written using JSF and RichFaces and includes an AJAX wizard for new member registration. A newly registered member needs to provide a couple of pieces of information before he is actually 'registered'. This includes an e-mail address, a name and a phone number.

Getting Started

I'm not going to repeat what the excellent and detailed quickstart is already showing you. So, if you want to run this on JBoss AS7… go there. We're starting with a blank Maven web project, and the best and easiest way to do this is to fire up NetBeans 7.2 and create one. Let's name it 'richwls-web'. Open your pom.xml and start changing some stuff there. First remove the endorsed stuff; we don't need it. Next, add a little bit of dependencyManagement:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.jboss.bom</groupId>
      <artifactId>jboss-javaee-6.0-with-tools</artifactId>
      <version>1.0.0.Final</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
    <dependency>
      <groupId>org.richfaces</groupId>
      <artifactId>richfaces-bom</artifactId>
      <version>4.2.0.Final</version>
      <scope>import</scope>
      <type>pom</type>
    </dependency>
  </dependencies>
</dependencyManagement>

This adds the Bill of Materials (BOM) for both Java EE 6 and RichFaces to your project. A BOM specifies the versions of a 'stack' (or a collection) of artifacts. You find one with anything from the RedHat guys, and it is considered best practice to have one. In the end this makes your life easier, because it manages versions and dependencies for you.
On to the lengthy list of true dependencies:

<!-- Import the CDI API -->
<dependency>
  <groupId>javax.enterprise</groupId>
  <artifactId>cdi-api</artifactId>
  <scope>provided</scope>
</dependency>
<!-- Import the JPA API -->
<dependency>
  <groupId>javax.persistence</groupId>
  <artifactId>persistence-api</artifactId>
  <version>1.0.2</version>
  <scope>provided</scope>
</dependency>
<!-- JSR-303 (Bean Validation) Implementation -->
<dependency>
  <groupId>org.hibernate</groupId>
  <artifactId>hibernate-validator</artifactId>
  <version>4.3.0.Final</version>
  <scope>provided</scope>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<!-- Import the JSF API -->
<dependency>
  <groupId>javax.faces</groupId>
  <artifactId>jsf-api</artifactId>
  <version>2.1</version>
  <scope>provided</scope>
</dependency>
<!-- Import RichFaces runtime dependencies - these will be included as libraries in the WAR -->
<dependency>
  <groupId>org.richfaces.ui</groupId>
  <artifactId>richfaces-components-ui</artifactId>
</dependency>
<dependency>
  <groupId>org.richfaces.core</groupId>
  <artifactId>richfaces-core-impl</artifactId>
</dependency>

Except for the RichFaces dependencies, all others are provided by the runtime. In case you haven't defined it elsewhere (settings.xml), you should also add the JBoss repository to your build section:

<repository>
  <id>jboss-public-repository-group</id>
  <name>JBoss Public Maven Repository Group</name>
  <url>https://repository.jboss.org/nexus/content/groups/public-jboss/</url>
</repository>

Copy the contents of the richfaces-validation directory from the source zip, or check it out from GitHub. Be a little careful and don't mess up the pom.xml we created ;) Build it and get that stuff deployed.

Issues

The first thing you are greeted with is a nice little Weld message:

WELD-000054 Producers cannot produce non-serializable instances for injection into non-transient fields of passivating beans [...] Producer Method [Logger] with qualifiers

We obviously have an issue here and need to declare the Logger field as transient:

@Inject
private transient Logger logger;

I don't know why this works on AS7, but I might find out someday :) Next iteration: change it, build, deploy.

java.lang.NoSuchMethodError: com.google.common.collect.ImmutableSet.copyOf(Ljava/util/Collection;)Lcom/google/common/collect/ImmutableSet;

That doesn't look better. Fire up the WLS CAT at http://localhost:7001/wls-cat/ and try to find out about it. It seems Oracle is using Google magic inside the server. OK, fine. We have no way to deploy RichFaces as a standalone war on WebLogic, because we need to resolve some classloading issues here. The recommended way is to add a so-called filtering classloader. You do this by adding a weblogic-application.xml to your ear. Yes: let's repackage everything, put the war into an empty ear, and add the magic to the weblogic-application.xml:

<prefer-application-packages>
  <package-name>com.google.common.*</package-name>
</prefer-application-packages>

Done? After another deployment you finally see your application. Basically, RichFaces runs on WebLogic, but you have to package it into an ear and turn the classloader around for the com.google.common.* classes. That is way easier with PrimeFaces, but… anyway, there are reasons why I tried this. One is that I like the idea of being able to trigger the Bean Validation on the client side.
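For reference, constraints of this kind live directly on the entity. Here is a minimal sketch of what the Member class could look like; the exact fields and messages in the quickstart may differ, so treat this as an assumption-laden illustration of standard JSR-303 usage:

import javax.validation.constraints.Digits;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Pattern;
import javax.validation.constraints.Size;

public class Member {

    @NotNull
    @Size(min = 1, max = 25)
    @Pattern(regexp = "[A-Za-z ]*", message = "must contain only letters and spaces")
    private String name;

    @NotNull
    @Pattern(regexp = ".+@.+\\..+", message = "invalid e-mail address")
    private String email;

    @NotNull
    @Size(min = 10, max = 12)
    @Digits(fraction = 0, integer = 12)
    private String phoneNumber;

    // getters and setters omitted for brevity
}

These are the constraints that the view layer, discussed next, picks up automatically.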
If you take a look at the example, you see that <rich:validator event="blur" /> adds client-side validation for both Bean Validation constraints and standard JSF validators, without having to mess around with JavaScript or duplicate any logic. Happy coding and don't forget to share!

Reference: Running RichFaces on WebLogic 12c from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

How To Disrupt Technical Recruiting – Hire an Agent

A recent anti-recruiter rant posted to a newsgroup, and a subsequent commentary on HackerNews, got me thinking about the many ways that tech recruiting, and the relationship between recruiters and the tech community, is broken. I saw a few comments noting that the community always says how broken it is, but that no one tries to fix it. Here are some ideas on how we got here and directions we can go.

Why is the recruiting industry the way it is?
The high demand for and low supply of tech talent creates a very lucrative market for recruiters. Many technologists might not be aware of this, but successful recruiters probably all make over $100K (some earn much more), and since it is a commission-based business, compensation has no ceiling. Recruiting is also an easy field to enter. No formal training is required, although you will need some sales training and tech knowledge to truly make an impact. One can easily start with a computer, a phone line, and a basic website.

So we have an industry that can be very lucrative (for some, much more lucrative than the tech industry itself) with almost no barriers to entry. Of course an industry with these characteristics will draw both talented, ethical professionals and carpetbaggers and bottom-feeders, just as the gold rush did.

What are the biggest complaints about recruiters (and how can we solve them)?
First, complaints from candidates (tech pros):

Too many cold calls. POSSIBLE SOLUTION: Without some widespread changes from all three parties in the industry (candidates, hiring firms, and recruiters), this one is probably impossible to solve. Simply mentioning that you do not wish to hear from recruiters is no guarantee that they won't contact you, but if I see on a LinkedIn page that someone specifically doesn't want to hear from recruiters, I won't contact them, as it is clear they do not value the services I provide.

Dishonesty about the job description or salary. POSSIBLE SOLUTION: What if companies gave recruiters some form of 'verified job spec' to share with candidates? Salary range, job description, location, whatever else might be helpful. A candidate could request this from the recruiter before agreeing to an interview.

Being marketed/used without their knowledge. POSSIBLE SOLUTION: Companies could require a 'right to represent' email giving a recruiter permission to submit a candidate's resume for any or all positions, which would at least eliminate some of this. Of course, recruiters will still send blinded resumes (contact info removed) to client prospects. A better idea may be for candidates to have a document that they ask recruiters to sign: a contract where the recruiter agrees not to send their resume in any form to any company without the express written consent (the 'right to represent') of the candidate. I'm not a lawyer, but I assume there could be some financial penalties/restitution allowed if the recruiter were to break that trust, as it may damage the candidate's career. As a rule, if I want to market a candidate to a client, I always get their permission first.

No feedback or follow-up. POSSIBLE SOLUTION: Unfortunately there is little value for a company in providing specific feedback about a candidate, and it actually exposes them to substantial risk (ageism, racism, etc.). Likewise, taking time to give rejected candidates details provides nothing to the recruiter except goodwill with the candidate. This one is difficult to solve, but it is probably not as big an issue as the other problems.

And complaints from hiring firms:

Too many resumes.
POSSIBLE SOLUTION: If you provide a very good requirement to a good recruiter, he/she should be able and very willing to limit the resumes. Telling your recruiter that you want to see the best five available candidates should encourage them to limit submissions.

Unqualified candidates. POSSIBLE SOLUTION: Same as above.

Misrepresenting a candidate's background. POSSIBLE SOLUTION: For starters, stop working with the recruiter and that agency entirely. If you want to make a positive change for the recruiting industry, contact the recruiter's manager and tell your side of the story. Having liars in an industry is bad for everyone except the liars and those who profit off of them.

Marketing cold calls. POSSIBLE SOLUTION: If you truly will not use recruiters for your searches, say so in your job specifications, both on your website and in the jobs you post publicly. I would rather not waste my time if a company has a policy against using recruiters, and if your policy changes perhaps you will be calling me. I will not call a company that specifically states it does not want to hear from recruiters, as it is clear they do not value the service I provide.

Price gouging. This happens when recruiting agencies mark up their candidates' hourly rates well beyond a reasonable margin, or when recruiters who receive permanent placement fees tied to salary stretch every penny from the hiring company. POSSIBLE SOLUTION: Flat, transparent fees work very well for both of these problems (a flat hourly mark-up on contractors and a flat fee for permanent placements), although recruiters would particularly hate a flat fee structure for contractors. The recruiter's 'sale' to a contractor is, 'If I can get you your rate, do you care if I make $2/hr or $100/hr?' The answer is usually 'no', which is all fine until the contractor finds out that the agency is billing the client $300/hr and only paying him/her maybe $50/hr. That is rare, but that is when things get ugly. Flat and transparent rates exposed to all three parties involved would solve that problem, but don't expect recruiters to go for it.

To all the technology pros who claim they really want to disrupt the industry, I have one simple question: would you be willing to hire, and pay for, an agent? I've heard the argument from some engineers that they would like recruiters to care more about the engineer's career and not treat them like a commodity. Recruiters are traditionally paid by the hiring companies, but only if they can both find the proper talent and get that talent to take the job (contingency recruiting). This can lead to a recruiter treating candidates like some homogenized commodity in which everyone has similar value. If engineers want true representation of their best interests, having representation from a sole agent would be one obvious choice. As your agent, I could provide career advice at various times during the year, making suggestions on technologies you may want to explore or giving inside information on which companies might have an interest in you. You might come to me to discuss any thoughts on changing jobs, how to apply for a promotion, or how to ask for a salary increase (which I could negotiate for you directly with your manager). When you do decide to explore new opportunities, the agent would help put together your resume, set a job search strategy, and possibly market your background to some hiring companies.
As the agent makes his living by charging a fee to the candidates, he could charge a much smaller fee (or potentially even no fee) to the hiring company, which would make hiring you much less expensive than hiring through a traditional recruiter. If you were contacted by a recruiter from an agency or a hiring company, you would simply refer them to me for the first discussions, and I would share the information with you (if appropriate) at a convenient time. You could even list my name on your LinkedIn, GitHub, and Twitter accounts: 'If you are interested in hiring me, contact Dave at Fecak.com'. How good would that feel? How good would it feel to tell your friends that you have an agent? All of this assumes your agent has a high degree of knowledge about the industry, the players, market rates, and a host of other things. Many recruiters don't have this expertise, but some certainly do. An agent could probably represent and manage the careers of perhaps 50-100 candidates very comfortably and make a good living doing it. Would you be willing to pay a percentage of your salary, or a flat annual rate, to an agent who provides you with these services? If the answer is 'yes', look me up and I'd be happy to discuss it with you further. But I'm guessing that for many the answer is 'no' (or 'yes, depending on the price').

My business model
Most recruiters are contingency based, which means they only get a fee if they make a placement. If they search for months and don't find a proper candidate, they have just wasted months for no payment. This places 100% of the risk of a search on the recruiter and 100% of the control with the hiring company. Even if the recruiter finds a great fit, the company can walk away without making a hire. Contingency recruiting is cut-throat and breeds desperation to make a placement, and this is where most of the problems arise for candidates. This is the 'numbers game' that tech pros talk about, where the recruiter's main incentive is to get resumes and bodies in front of clients and see what sticks.

Some recruiters work on retained search, which means that basically all their fees are guaranteed up front, regardless of results. This is great for the recruiter but places 100% of the risk on the hiring company. The recruiter works such a search to protect his/her reputation, which is obviously very important in getting future searches. This is not cut-throat, because it is not competitive: the recruiter has an exclusive deal with the retained client for that particular job.

The model I use combines contingency and retained search. I charge clients a relatively small, non-refundable flat fee upfront to initiate the search. When a placement is made, I charge another flat fee (not tied to salary). When you combine the two fees, the percentage of salary is often about half of what contingency recruiters would get for the same placement.

So you think I'm an idiot for charging much less than my competition? Perhaps. I see it as creating a true partnership with companies that continue to come back with additional searches and repeat business, often referring me to their friends and partners. When a company gives you a fee upfront, they are putting their money where their mouth is, and you can be sure they are serious about hiring. It takes some degree of trust on behalf of the hiring company, but once you have been in the business for a while the references are there, and chances are we have some business connections in common.
So far this model has worked well, with happy clients and lots of repeat business. I have already met my goal for 2012, and I'm hoping to double it in the coming months.

What else do I do differently?

I give it away (sometimes). Information, resume and interview advice, and any other kind of help you can think of are requested of me, and I rarely refuse a reasonable request. If I can't help you find a job, I can at least take a look at your resume or evaluate how your applications look. I have known some engineers for over ten years without ever having made $0.05 in fees, and have helped them make career decisions for free. I'll often introduce candidates to start-ups or one-man firms with limited budgets who may end up hiring without using my services, in the hope that they will use me for future searches.

I run a users' group. I've run the local Java Users' Group for almost 13 years. It is a volunteer job with no compensation, but it helps me stay in touch with the tech community and adds some credibility to my name. It is a lot of effort at times, but the success of the group is something I'm quite proud of. I don't recruit out of the group, but most of its members are aware of what I do and come to me when they need my services.

I specialize. Historically I focused both geographically and on one technology (the Philadelphia Java market). I've opened that up a bit, as many of those Java pros are now doing mobile, RoR, and alternative JVM language work, and I'm a bit more regional now. Staying specialized in one geography and one technology forces a recruiter to be very careful about his/her reputation, as the degrees of Kevin Bacon are always low.

Flat fees. A flat fee lets the company know that my goal is to fill the position, and that how much they pay the candidate is irrelevant to me. I inform candidates of this relationship so they are aware that my goal is to get them an offer they will accept, and my client companies know that if I say I need $5K more in salary to close the deal, I am not trying to line my pockets.

CONCLUSION
Don't expect my model to be adopted by other firms, but I wanted to share it with readers as at least one alternative to the traditional contingency model, which seems to be the biggest source of complaints for both candidates and hiring firms. And I believe the agent model would work quite nicely for all parties involved. If you truly want to disrupt the industry, let's talk.

Reference: How To Disrupt Technical Recruiting – Hire an Agent from our JCG partner Dave Fecak at the Job Tips For Geeks blog....