

Benchmarking Scala against Java

A question recently came up at work about benchmarks between Java and Scala. Maybe you came across my blog post because you too want to know which is faster, Java or Scala. Well, I'm sorry to say this, but if that is you, you are asking the wrong question. In this post, I will show you that Scala is faster than Java. After that, I will show you why the question was the wrong question and why my results should be ignored. Then I will explain what question you should have asked.

The benchmark

Today we are going to benchmark a very simple algorithm, quick sort. I will provide implementations in both Scala and Java. Then with each I will sort a list of 100000 elements 100 times, and see how long each implementation takes. So let's start off with Java:

public static void quickSort(int[] array, int left, int right) {
    if (right <= left) {
        return;
    }
    int pivot = array[right];
    int p = left;
    int i = left;
    while (i < right) {
        if (array[i] < pivot) {
            if (p != i) {
                int tmp = array[p];
                array[p] = array[i];
                array[i] = tmp;
            }
            p += 1;
        }
        i += 1;
    }
    array[right] = array[p];
    array[p] = pivot;
    quickSort(array, left, p - 1);
    quickSort(array, p + 1, right);
}

Timing this, sorting a list of 100000 elements 100 times on my 2012 MacBook Pro with Retina Display, it takes 852ms. Now the Scala implementation:

def sortArray(array: Array[Int], left: Int, right: Int) {
  if (right <= left) {
    return
  }
  val pivot = array(right)
  var p = left
  var i = left
  while (i < right) {
    if (array(i) < pivot) {
      if (p != i) {
        val tmp = array(p)
        array(p) = array(i)
        array(i) = tmp
      }
      p += 1
    }
    i += 1
  }
  array(right) = array(p)
  array(p) = pivot
  sortArray(array, left, p - 1)
  sortArray(array, p + 1, right)
}

It looks very similar to the Java implementation: slightly different syntax, but in general the same. And the time for the same benchmark? 695ms. No benchmark is complete without a graph, so let's see what that looks like visually:

[chart: Java vs Scala benchmark times]

So there you have it.
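The timing setup itself isn't shown in the post; a minimal harness along these lines reproduces the benchmark (the element count and run count come from the text, while the random seed is an arbitrary choice of mine):

```java
import java.util.Random;

public class QuickSortBenchmark {
    // Same algorithm as the Java version in the post.
    public static void quickSort(int[] array, int left, int right) {
        if (right <= left) return;
        int pivot = array[right];
        int p = left;
        for (int i = left; i < right; i++) {
            if (array[i] < pivot) {
                int tmp = array[p];
                array[p] = array[i];
                array[i] = tmp;
                p++;
            }
        }
        array[right] = array[p];
        array[p] = pivot;
        quickSort(array, left, p - 1);
        quickSort(array, p + 1, right);
    }

    public static void main(String[] args) {
        int[] source = new Random(42).ints(100000).toArray();
        long start = System.nanoTime();
        for (int run = 0; run < 100; run++) {
            int[] copy = source.clone();         // fresh unsorted input each run
            quickSort(copy, 0, copy.length - 1);
        }
        System.out.printf("100 sorts of 100000 ints took %d ms%n",
                (System.nanoTime() - start) / 1_000_000);
    }
}
```

Absolute numbers will of course differ from machine to machine; it's the Java/Scala ratio the post cares about.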
Scala is about 20% faster than Java. QED and all that.

The wrong question

However, this is not the full story. No micro benchmark ever is. So let's start off with answering the question of why Scala is faster than Java in this case. Scala and Java both run on the JVM. Their source code both compiles to bytecode, and from the JVM's perspective, it doesn't know if one is Scala or one is Java; it's all just bytecode to the JVM. If we look at the bytecode of the compiled Scala and Java code above, we'll notice one key thing: in the Java code, there are two recursive invocations of the quickSort routine, while in Scala, there is only one. Why is this? The Scala compiler supports an optimisation called tail recursion elimination: if the last statement in a method is a recursive call, it can get rid of that call and replace it with an iterative solution. So that's why the Scala code is so much quicker than the Java code: it's this tail call optimisation. You can turn this optimisation off when compiling Scala code; when I do that it now takes 827ms, still a little bit faster, but not by much. I don't know why Scala is still faster without the tail call optimisation.

This brings me to my next point: apart from a couple of niche optimisations like this, Scala and Java both compile to bytecode, and hence have near identical performance characteristics for comparable code. In fact, when writing Scala code, you tend to use a lot of exactly the same libraries as in Java, because to the JVM it's all just bytecode. This is why benchmarking Scala against Java is the wrong question.

But this still isn't the full picture. My implementation of quick sort in Scala was not what we'd call idiomatic Scala code. It's implemented in an imperative fashion, very performance focussed – which it should be, being code that is used for a performance benchmark. But it's not written in a style that a Scala developer would write day to day.
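As an aside, the tail call optimisation mentioned earlier can be made concrete by applying it by hand to the Java version: the second recursive call is replaced by updating `left` and looping, which is roughly the transformation the Scala compiler performs automatically (my illustration, not code from the post):

```java
public class QuickSortIterativeTail {
    public static void quickSort(int[] array, int left, int right) {
        while (left < right) {
            int pivot = array[right];
            int p = left;
            for (int i = left; i < right; i++) {
                if (array[i] < pivot) {
                    int tmp = array[p];
                    array[p] = array[i];
                    array[i] = tmp;
                    p++;
                }
            }
            array[right] = array[p];
            array[p] = pivot;
            quickSort(array, left, p - 1); // first call: still truly recursive
            left = p + 1;                  // second (tail) call: became iteration
        }
    }
}
```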
Here is an implementation of quick sort in that idiomatic Scala style:

def sortList(list: List[Int]): List[Int] = list match {
  case Nil => Nil
  case head :: tail =>
    sortList(tail.filter(_ < head)) ::: head :: sortList(tail.filter(_ >= head))
}

If you're not familiar with Scala, this code may seem overwhelming at first, but trust me, after a few weeks of learning the language you would be completely comfortable reading this, and would find it far clearer and easier to maintain than the previous solution. So how does this code perform? Well, the answer is terribly: it takes 13951ms, 20 times longer than the other Scala code. Obligatory chart:

[chart: imperative vs idiomatic Scala benchmark times]

So am I saying that when you write Scala in the 'normal' way, your code's performance will always be terrible? Well, no – Scala developers don't write code like that all the time. They aren't dumb; they know the performance consequences of their code. The key thing to remember is that most problems developers solve are not quick sort; they are not computation heavy problems. A typical web application, for example, is concerned with moving data around, not doing complex algorithms. The piece of Java code a web developer might write to process a web request might take 1 microsecond of the entire request to run – that is, one millionth of a second. If the equivalent Scala code takes 20 microseconds, that's still only one fifty thousandth of a second. The whole request might take 20 milliseconds to process, including going to the database a few times. Using idiomatic Scala code would therefore increase the response time by 0.1%, which is practically nothing. So Scala developers, when they write code, will write it in the idiomatic way. As you can see above, the idiomatic way is clear and concise. It's easy to maintain, much easier than Java.
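For comparison, the same filter-and-recurse shape can be approximated in Java with streams. This is my sketch of that idiomatic style, not code from the post; like the Scala version, it allocates fresh lists at every level of recursion, which is exactly where the slowdown comes from:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FunctionalQuickSort {
    public static List<Integer> sort(List<Integer> list) {
        if (list.isEmpty()) {
            return list;
        }
        int head = list.get(0);
        List<Integer> tail = list.subList(1, list.size());
        // Partition the tail around the head, sort each side, concatenate.
        return Stream.of(
                sort(tail.stream().filter(x -> x < head).collect(Collectors.toList())),
                List.of(head),
                sort(tail.stream().filter(x -> x >= head).collect(Collectors.toList())))
            .flatMap(List::stream)
            .collect(Collectors.toList());
    }
}
```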
However, when they come across a problem that they know is computationally expensive, they will revert to writing in a style that is more like Java. This way, they have the best of both worlds: easy to maintain idiomatic Scala code for most of their code base, and well performing Java-like code where the performance matters.

The right question

So what question should you be asking when comparing Scala to Java in the area of performance? The answer is in Scala's name. Scala was built to be a 'scalable language'. As we've already seen, this scalability does not show up in micro benchmarks. So where does it show up? This is going to be the topic of a future blog post, where I will show some closer-to-real-world benchmarks of a Scala web application versus a Java web application. But to give you an idea, the answer lies in how the Scala syntax, and the libraries provided by the Scala ecosystem, are aptly suited to the paradigms of programming required to write scalable, fault tolerant systems. The exact equivalent bytecode could be implemented in Java, but it would be a monstrous nightmare of impossible-to-follow anonymous inner classes, with a constant fear of accidentally mutating the wrong shared state, and a good dose of race conditions and memory visibility issues. To put it more concisely, the question you should be asking is: 'How will Scala help me when my servers are falling over from unanticipated load?' This is a real world question that I'm sure any IT professional with real world experience would love an answer to. Stay tuned for my next blog post.

Reference: Benchmarking Scala against Java from our JCG partner James Roper at the James and Beth Roper's blogs blog.

Spring 3.1, Cloud Foundry and Local Development

This post will help you build a Spring 3.1 web application using MongoDB on Cloud Foundry. In addition to pushing to Cloud Foundry, you will also be able to develop in your local environment with a MongoDB instance.

Goals

The goals for this blog posting are to build the application locally, then publish it to your local Cloud Foundry instance. We will utilize the Cloud Foundry runtime and the new Spring profiles.

Setup

- Create an account with Cloud Foundry
- Follow the instructions to set up your own Micro Cloud (I use VMware's Player); verify with 'vmc info' that the micro cloud console matches
- Download MongoDB (at least version 2.0)
- Install and be familiar with Maven 3
- Familiarize yourself with Spring 3.1, Spring Data and Spring MongoDB
- Clone or download the source
- Run the app locally with: mvn clean package cargo:run -DskipTests
- Go to http://localhost:8080/home

Profiles

New in Spring 3.1 are the environment profiles, which allow a developer to activate groups of beans based on an environment parameter. There are several 'gotchas' that I've discovered, one being an undocumented ordering for beans using profiles. Take a look at data-services.xml. Notice how the MongoTemplate is defined before the MongoFactory. This is against my intuition, because the MongoTemplate takes a reference to the MongoFactory object, which is defined below the MongoTemplate definition. The second 'gotcha' was when and where to set the parameter that enables Spring's profiles. The documentation and blogs do not explicitly mention that the developer must specify which profile is active. The documentation implies that 'default' is active by default, but this is not true. In order for the default profile to be active, I added it as a system property in my cargo settings (as long as it is a system environment property, feel free to set it anywhere or any way you'd like).
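As a sketch of how the profile layout and the ordering gotcha look in a data-services.xml, something along these lines (bean ids and the service name 'second' follow the examples in the text; the namespace declarations are omitted, so treat this as illustrative rather than copy-paste configuration):

```xml
<beans> <!-- xmlns declarations for beans, mongo and cloud omitted for brevity -->

    <!-- Shared bean: note it is defined BEFORE the profile-specific factories,
         per the ordering gotcha described above. -->
    <bean id="mongoDbFactoryTemplate"
          class="org.springframework.data.mongodb.core.MongoTemplate">
        <constructor-arg ref="mongoDbFactory"/>
    </bean>

    <!-- Active locally; remember 'default' must still be activated explicitly. -->
    <beans profile="default">
        <mongo:db-factory id="mongoDbFactory" dbname="test"/>
    </beans>

    <!-- Active on Cloud Foundry, bound to the MongoDB service named 'second'. -->
    <beans profile="cloud">
        <cloud:mongo-db-factory id="mongoDbFactory" service-name="second"/>
    </beans>
</beans>
```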
Take a look at the pom.xml file around line 40 for the local Maven property, and then around line 253 for the environment variable to be set.

Local development vs. Cloud Development

One of the main goals I had for interacting with Cloud Foundry was a local development environment, to speed up and ease development and reduce debugging complexity. Notice that in data-services.xml there is a 'cloud' profile and a 'default' profile. The point of the 'default' profile is to hold the beans that are constructed in a local environment. You can see that there are two definitions of the MongoFactory: one using Spring Data MongoDB's XML namespace and one using Cloud Foundry Runtime's namespace. I am not going to cover why these work the way they do, so if you'd like more information, refer to the Spring Data MongoDB and Cloud Foundry Runtime documentation.

Pushing to Cloud Foundry

Now that you have a local running instance of the webapp, you will notice that the artifact is called 'first-cloud-app.war', which you can find in the '/target' folder. This is a problem when pushing to the Cloud Foundry instance, since the name cannot contain any non-alpha characters. Cloud Foundry's vmc tool is built from the VCAP open source project that is responsible for the open source PaaS services. Another PaaS service is App Fog, which lets you use basically the same commands, replacing 'vmc' with 'af'. Both services fall victim to the naming problem. To get around it, I have created a Maven profile, cloud, that builds the WAR artifact as 'mikeensor.war'. Please change this to match your application's name, since you won't have the user/password (or the DNS) to publish to my micro instance. The name will need to fit into the URL pattern http://<application name>.<your micro cloud domain>. To publish to your local Cloud Foundry micro instance, go to the root folder and type the following (this assumes your micro instance is running and there are no 'red' errors):
mvn clean package -Pcloud
vmc push <application name> -path target/

(If you have already pushed before, you will need to type: vmc update <application name> -path target/ instead.)

Note: It is possible to use the Maven Plugin for Cloud Foundry; however, I have still not been able to get it to work without changing the name of the artifact.

Enabling and connecting to services

You must create a service (or services) so that your application can bind to the data source. The VCAP (vmc) application handles the configuration when loading your application into Cloud Foundry. It does this via an environment variable that is consumed in the namespaced configuration element. In my example, I created a MongoDB service by typing:

vmc create-service mongodb --name <what you want to call your instance>

I named mine 'second' (because I had created a first), and you will see that in data-services.xml the cloud XML configuration refers to the name of the service. Note that if you have multiple MongoDB instances, you will need some extra Spring configuration (@Qualifier) when you want to use different instances; this is not covered by this blog posting. Now you will need to 'bind' the service to your application. This is done by typing:

vmc bind-service <name above> <application name>

Testing it out

You should be able to go to your application's URL. Congratulations! You have not only successfully deployed to Cloud Foundry (micro instance) and bound to a MongoDB instance, but you can run in your local environment too! As I get time, I will try to add more detailed features, such as multiple types of storage, and post other 'gotchas' as I find them.

Reference: Spring 3.1 + Cloud Foundry + Local Development from our JCG partner Mike at the Mike's site blog.

Hadoop + Amazon EC2 – An updated tutorial

There is an old tutorial on Hadoop's wiki page, but I had to follow it recently and noticed that it doesn't cover some new Amazon functionality. To follow this tutorial, it is recommended that you are already familiar with the basics of Hadoop; a very useful 'how to start' tutorial can be found on Hadoop's homepage. Also, you should be familiar with at least Amazon EC2 internals and instance definitions. When you register an account at Amazon AWS you receive 750 hours to run t1.micro instances but, unfortunately, you can't successfully run Hadoop on such machines. In the following steps, a command starting with $ should be executed on the local machine, and one starting with # on the EC2 instance.

Create an X.509 Certificate

Since we are going to use ec2-tools, our AWS account needs a valid X.509 certificate:

- Create the .ec2 folder:

$ mkdir ~/.ec2

- Log in at AWS, select 'Security Credentials' and, under 'Access Credentials', click on 'X.509 Certificates'. You have two options:
  - Create the certificate using the command line (this only works if your machine's date is correct):

$ cd ~/.ec2; openssl genrsa -des3 -out my-pk.pem 2048
$ openssl rsa -in my-pk.pem -out my-pk-unencrypt.pem
$ openssl req -new -x509 -key my-pk.pem -out my-cert.pem -days 1095

  - Create the certificate using the site and download the private key (remember to put it in ~/.ec2).
Setting up Amazon EC2-Tools

- Download and unpack ec2-tools.
- Edit your ~/.profile to export all variables needed by ec2-tools, so you don't have to do it every time you open a prompt. Here is an example of what should be appended to the ~/.profile file:

export JAVA_HOME=/usr/lib/jvm/java-6-sun
export EC2_HOME=~/ec2-api-tools-*
export PATH=$PATH:$EC2_HOME/bin
export EC2_CERT=~/.ec2/my-cert.pem

- To access an instance, you need to be authenticated (for obvious security reasons), so you have to create a key pair (public and private keys). In the AWS console, click on 'Key Pairs', or run the following commands:

$ ec2-add-keypair my-keypair | grep -v KEYPAIR > ~/.ec2/id_rsa-keypair
$ chmod 600 ~/.ec2/id_rsa-keypair

Setting up Hadoop

After downloading and unpacking Hadoop, you have to edit the EC2 configuration script present at src/contrib/ec2/bin/:

- Account variables: these are related to your AWS account (AWS_ACCOUNT_ID, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY); they can be found by logging into your account, under Security Credentials. The AWS_ACCOUNT_ID is your 12-digit account number.
- Security variables: the security variables (EC2_KEYDIR, KEY_NAME, PRIVATE_KEY_PATH) are related to launching and accessing an EC2 instance. You have to save the private key into your EC2_KEYDIR path.
- Select an AMI: depending on the Hadoop version you want to run (HADOOP_VERSION) and the instance type (INSTANCE_TYPE), you should use a proper image to deploy your instance. There are many public AMI images that you can use (they should suit the needs of most users); to list them, type:

$ ec2-describe-images -x all | grep hadoop

Or you can build your own image and upload it to an Amazon S3 bucket. After selecting the AMI you will use, there are basically three variables to edit: the bucket where the image you will use is placed (for example, hadoop-images); ARCH, the architecture of the AMI image you have chosen (i386 or x86_64); and BASE_AMI_IMAGE, the unique code that maps to an AMI image (for example, ami-2b5fba42).

Another configurable variable is JAVA_VERSION, where you can define which Java version will be installed along with the instance. You can also provide a link to where the binary is located (JAVA_BINARY_URL); for instance, if you have JAVA_VERSION=1.6.0_29, an option is to point JAVA_BINARY_URL at the corresponding download.

Running!

- You can add the content of src/contrib/ec2/bin to your PATH variable, so you will be able to run the commands independently of where the prompt is open.
- To launch an EC2 cluster and start Hadoop, use the following command. The arguments are the cluster name (hadoop-test) and the number of slaves (2). When the cluster boots, the public DNS name will be printed to the console.

$ hadoop-ec2 launch-cluster hadoop-test 2

- To log in to the master node of your 'cluster', type:

$ hadoop-ec2 login hadoop-test

- Once you are logged into the master node, you can start a job. For example, to test your cluster, you can run a pi calculation that is already provided by the hadoop-*-examples.jar:

# cd /usr/local/hadoop-*
# bin/hadoop jar hadoop-*-examples.jar pi 10 10000000

You can check your job's progress at http://MASTER_HOST:50030/, where MASTER_HOST is the host name returned after the cluster started. After your job has finished, the cluster remains alive. To shut it down, use the following command:

$ hadoop-ec2 terminate-cluster hadoop-test

Remember that in Amazon EC2 instances are charged per hour, so if you only wanted to run tests, you can play with the cluster for some more minutes.

Reference: Hadoop + Amazon EC2 – An updated tutorial from our JCG partner Rodrigo Duarte at the Thinking Bigger blog.

Sandboxing Java Code

In a previous post, we looked at securing mobile Java code. One of the options for doing so is to run the code in a cage or sandbox. This post explores how to set up such a sandbox for Java applications.

Security Manager

The security facility in Java that supports sandboxing is java.lang.SecurityManager. By default, Java runs without a SecurityManager, so you should add code to your application to enable one:

System.setSecurityManager(new SecurityManager());

You can use the standard SecurityManager, or a descendant. The SecurityManager has a bunch of checkXXX() methods that all forward to checkPermission(permission, context). This method calls upon the AccessController to do the actual work (see below). [The checkXXX() methods are a relic from Java 1.1.] If a requested access is allowed, checkPermission() returns quietly. If denied, a java.lang.SecurityException is thrown. Code that implements the sandbox should call a checkXXX method before performing a sensitive operation:

SecurityManager securityManager = System.getSecurityManager();
if (securityManager != null) {
    Permission permission = ...;
    securityManager.checkPermission(permission);
}

The JRE contains code just like that in many places.

Permissions

A permission represents access to a system resource. In order for such access to be allowed, the corresponding permission must be explicitly granted (see below) to the code attempting the access. Permissions derive from java.security.Permission. They have a name and an optional list of actions (in the form of comma-separated string values). Java ships with a bunch of predefined permissions, like FilePermission. You can also add your own permissions. The following is a permission to read the file /home/remon/thesis.pdf:

Permission readPermission = new java.io.FilePermission("/home/remon/thesis.pdf", "read");

You can grant a piece of code permission to do anything and everything by granting it AllPermission. This has the same effect as running it without a SecurityManager.
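Whether a granted permission covers a requested one is decided by Permission.implies(); here is a small runnable sketch (the paths and the "read" action are the example values from the text):

```java
import java.io.FilePermission;
import java.security.Permission;

public class PermissionDemo {
    public static void main(String[] args) {
        // "-" grants access to /home/remon and, recursively, everything below it.
        Permission granted = new FilePermission("/home/remon/-", "read");
        Permission readThesis = new FilePermission("/home/remon/thesis.pdf", "read");
        Permission writeThesis = new FilePermission("/home/remon/thesis.pdf", "write");

        System.out.println(granted.implies(readThesis));  // covered by the grant
        System.out.println(granted.implies(writeThesis)); // not covered: only "read" was granted
    }
}
```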
Policies

Permissions are granted using policies. A Policy is responsible for determining whether code has permission to perform a security-sensitive operation. The AccessController consults the Policy to see whether a Permission is granted. There can only be one Policy object in use at any given time. Application code can subclass Policy to provide a custom implementation. The default implementation of Policy uses configuration files to load grants. There is a single system-wide policy file, and a single (optional) user policy file. You can create additional policy configuration files using the PolicyTool program. Each configuration file must be encoded in UTF-8.

By default, code is granted no permissions at all. Every grant statement adds some permissions. Permissions that are granted cannot be revoked. The following policy fragment grants code that originates from the /home/remon/code/ directory read permission to the file /home/remon/thesis.pdf:

grant codeBase "file:/home/remon/code/-" {
    permission java.io.FilePermission "/home/remon/thesis.pdf", "read";
};

Note that the part following codeBase is a URL, so you should always use forward slashes, even on a Windows system. A codeBase with a trailing / matches all class files (not JAR files) in the specified directory. A codeBase with a trailing /* matches all files (both class and JAR files) contained in that directory. A codeBase with a trailing /- matches all files (both class and JAR files) in the directory and recursively all files in subdirectories contained in that directory. For paths in file permissions on Windows systems, you need to use double backslashes (\\), since the \ is an escape character:

grant codeBase "file:/C:/Users/remon/code/-" {
    permission java.io.FilePermission "C:\\Users\\remon\\thesis.pdf", "read";
};

For more flexibility, you can write grants with variable parts. We already saw the codeBase wildcards.
You can also substitute system properties:

grant codeBase "file:/${user.home}/code/-" {
    permission java.io.FilePermission "${user.home}${/}thesis.pdf", "read";
};

Note that ${/} is replaced with the path separator for your system. There is no need to use that in codeBase, since that's a URL.

Signed Code

Of course, we should make sure that the code we use is signed, so that we know it actually came from who we think it came from. We can test for signatures in our policies using the signedBy clause:

keystore "my.keystore";
grant signedBy "signer.alias", codeBase ... {
    ...
};

This policy fragment uses the keystore my.keystore to look up the public key certificate with alias signer.alias. It then verifies that the executing code was signed by the private key corresponding to the public key in the found certificate. There can be only one keystore entry. The combination of codeBase and signedBy clauses specifies a ProtectionDomain. All classes in the same ProtectionDomain have the same permissions.

Privileged Code

Whenever a resource access is attempted, all code on the stack must have permission for that resource access, unless some code on the stack has been marked as privileged. Marking code as privileged enables a piece of trusted code to temporarily enable access to more resources than are available directly to the code that called it. In other words, the security system will treat all callers as if they originated from the ProtectionDomain of the class that issues the privileged call, but only for the duration of the privileged call. You make code privileged by running it inside an AccessController.doPrivileged() call:

AccessController.doPrivileged(new PrivilegedAction<Object>() {
    public Object run() {
        // ...privileged code goes here...
        return null;
    }
});

Assembling the Sandbox

Now we have all the pieces we need to assemble our sandbox:

1. Install a SecurityManager
2. Sign the application jars
3. Grant all code signed by us AllPermission
4. Add permission checks in places that mobile code may call
5. Run the code after the permission checks in a doPrivileged() block

I've created a simple example on GitHub.

Reference: Sandboxing Java Code from our JCG partner Remon Sinnema at the Secure Software Development blog.
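A runnable sketch of the doPrivileged() pattern, where reading a system property stands in for a guarded resource access (this is my own example, not the one on GitHub):

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

public class PrivilegedDemo {
    static String readUserHome() {
        // Callers of readUserHome() do not need the PropertyPermission themselves:
        // for the duration of this call, only this class's ProtectionDomain
        // is consulted by the security system.
        return AccessController.doPrivileged(
                (PrivilegedAction<String>) () -> System.getProperty("user.home"));
    }

    public static void main(String[] args) {
        System.out.println(readUserHome());
    }
}
```

Note that on recent JDKs the Security Manager APIs are deprecated, so this compiles with a warning; the post predates that change.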

Why Future Generations Will Hate You for Using java.util.Stack

Before I kill you with some meaningless tautology, here is the gist:

- If your application is near real time or you are sending your code to Mars, you need to keep off the default Stack implementation in Java. Write your own version based on LinkedList.
- Again, if your application is mission critical and your Stack is expected to be manipulated by concurrent threads, then use a ConcurrentLinkedDeque or write your own Stack based on LinkedList – just make sure your add and remove operations are thread safe. While doing so, consider concurrency locks.
- If you just need raw power, are not bothered by occasional hiccups during the push process, AND your Stack is not manipulated by concurrent threads, then use an ArrayDeque or go ahead and write your own Stack based on an ArrayList. If multithreaded, then write your own Stack based on ArrayDeque and util.concurrent locks.
- If you refuse to read the Java Stack API and the Java Deque API and you are simply a crazy person, then use the default implementation. And I promise, no mercy will be shown to you when the bots take over the world.

Note: The truth is, unless for some reason you want to name your implementation class 'Stack', you are pretty much free to use any of the Deque implementations as a Stack directly.

Now that enough mud has been thrown at the default implementation and I have your attention for a couple of minutes, let me sum things up fast. We know that Stack in the Java Collections API extends Vector, which internally uses an array. In other words, Java uses an array based implementation for its Stack. So let's see why, between the two most popular Stack implementations – arrays and linked lists – Java chose arrays. Some answers were quite obvious, some weren't:

Fair Play

A cursory look over the add and remove methods of arrays and linked lists, which are the pillars of the push and pop methods of the Stack, shows constant time retrieval across the board.
Growth issues

It's no news that arrays are fixed size, and that growing an array is achieved by copying it to a bigger array. In the case of our default Stack implementation using Vector, the capacity simply doubles. That means if we are adding 80 elements to a stack, the internal array gets copied 4 times – at 10, 20, 40 and 80. So, say, when we are adding the 80th element, the push operation actually takes O(N) time, and since our N is 80 in this case, that is going to put at least a little pause in your program with that cruel deep copy – valuable cycles that you could save for some other ride. Too bad that, unlike with Vector, you won't be able to specify the initial size or the increment factor for java.util.Stack, because there are no overloaded constructors. On the other hand, though growth hiccups also affect an ArrayDeque, ArrayDeque has a sweet overloaded constructor for the initial capacity, which comes in handy if you have an approximate idea of how big your stack is going to be. Also, the default initial capacity is 16 for an ArrayDeque, as against 10 for a Vector.

Time and Place

To be fair to arrays, the objects stored in an array based stack are just references to the actual objects in the heap (in the case of objects) or actual values (in the case of primitives). With a LinkedList, on the other hand, there is a Node wrapper on top of each stored item. On average that should cost you ~40 bytes of extra heap space per stored object (including the Node inner class object, the link to the next Node, and the reference to the item itself).

So, ArrayDeque or LinkedList?

Arrays are preferred for most purposes because they offer much better speed of access, due to their unique advantage of occupying sequential memory: getting to the actual object is just pointer arithmetic. However, push and pop operations on the threshold item (the item that triggers a resize) take O(n) time.
However, on average, a push takes constant time (amortized constant time, if you will). With LinkedList, on the other hand, add operations are slower than with arrays due to the extra time taken to construct new nodes and link them in. Needless to mention, new nodes consume heap space beyond the space consumed by the actual object. However, since there is no resizing (or need for sequential memory) and it always has a reference to the first element, it has a worst case guarantee of constant time. Now, while you revisit the first part of this blog, feel free to say: Damn you, default implementation!!!

Related links: Reference: Why Future Generations Will Hate You for Using java.util.Stack from our JCG partner Arun Manivannan at the blog.
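A minimal sketch of the recommended alternative – ArrayDeque used as a stack, presized to dodge the resize hiccups (the capacity of 1000 is an arbitrary example value):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeAsStack {
    public static void main(String[] args) {
        // Presizing avoids the copy-on-growth pauses described above.
        Deque<Integer> stack = new ArrayDeque<>(1000);
        stack.push(1);
        stack.push(2);
        stack.push(3);
        System.out.println(stack.pop());  // 3 – LIFO, just like java.util.Stack
        System.out.println(stack.peek()); // 2
    }
}
```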

Basics about Servlets

Through this tutorial I will try to get you closer to the Java Servlet Model. Before we start examining the classes defined in the servlet specification, I will explain the basic things you need to know before you start developing web applications.

Understanding the Java Servlet Model

First of all, the Java Servlet Model is not defined only for web applications; it is a specification based on a request and response programming model. But it is mostly used in interaction with the HTTP protocol, so from now on we will discuss the use of the servlet model in HTTP applications. Basically, web applications are applications placed somewhere on the web that can be accessed through the net. If you need more information and a better definition of 'What is a web application', you can visit the following link: Web application.

If we want to create a web application, we need a web server; it can be any kind of HTTP server with an included web container, such as Tomcat. The HTTP server is in charge of handling client requests, security, serving content to the client, etc., but the HTTP server cannot dynamically create a response to the client: it is only able to serve static content. The solution to this problem is a web container. A container is able to host web applications; the HTTP server passes the request to the web container, which takes care of it. Usually there is only one container per server, and all web applications on the server are served by this container.

All communication inside the web container is realized through the web container interface. This means that one application cannot directly access another one. Also, components inside one application cannot directly access each other; all communication between components in the same application is realized using the web container interface.
This is very important for understanding how a web application works in a web container; it allows us to create filters and listeners, and it allows us to use the security features of the web container.

War Application Structure

By specification, a Java web application is packed into a war package. A war package is the same as a jar package, but when the web container finds a war file in the deploy folder, it will assume it is a web application and try to start it. Inside the war package we have one special directory called WEB-INF. The content of this folder is not directly served to the user. This folder contains the folders classes and lib, in which we can put classes used by the application (classes folder) and additional jars (lib folder). The content of these folders will be automatically read by the class loader without any additional class path settings. This folder also contains the web.xml file, which is called the deployment descriptor. This file is not required if the web application contains only jsp pages, but if the application needs servlets or filters, this file must be defined.

Servlet Life Cycle

During its existence, a servlet passes through five life cycle phases:

- loading
- instantiation
- initialization
- serving
- destroying

Loading is the phase in which the class loader loads the class. Every web application gets its own class loader instance, which is used for loading web components. This allows us to deploy two versions of the same application in the same container, where each application may have classes with the same fully qualified name. After loading, the web container will try to instantiate the class, i.e. create a new instance of it. Usually every web component is created just once, but this depends on the behavior of the web container; in some cases a web container can be set to create several instances of a component class in a pool, and serve requests with an instance from the pool. When the web container creates a new instance of a servlet, it uses the default constructor. Initialization is the life cycle phase in which the servlet is initialized.
In this phase the servlet is supposed to read configuration values and perform any additional actions and steps needed before it is able to serve client requests. The serving phase is the part of the servlet's life in which it serves client requests. The destruction phase is the last phase in the servlet's life, and it happens when the servlet is removed from service. Servlet Interface If we want to create a servlet, all we have to do is implement the Servlet interface. This interface provides the following three methods, which are called by the container: init(ServletConfig config), called during initialization; service(ServletRequest request, ServletResponse response), called to service a request; destroy(), called when the servlet is removed from service. This interface also provides two ancillary methods: ServletConfig getServletConfig() and String getServletInfo(). During initialization a ServletException may be thrown. Raising this exception in the init method signals the container that an error occurred; the container stops the initialization and marks the servlet instance as ready for garbage collection, and this will not cause the destroy method to be called. A ServletException or UnavailableException may also be thrown from the service method. These exceptions can be temporary or permanent. In the case of a temporary exception, the server will block calls to the service method for some time, but in the case of a permanent exception the destroy method will be called, the servlet will be ready for garbage collection, and every future call to this servlet will lead to a 404 response. GenericServlet Class The GenericServlet class is part of the javax.servlet package. It is an abstract class which implements the Servlet interface and provides a basic, protocol-independent implementation. 
This class introduces several new methods: init(), called by the init(ServletConfig config) method during the initialization phase; ServletContext getServletContext(), provides access to the ServletContext; String getInitParameter(String name), retrieves the value of the servlet init parameter defined in the deployment descriptor under the specified name; Enumeration getInitParameterNames(), returns an enumeration of all servlet init parameter names; String getServletName(), returns the name of the servlet. If we extend the GenericServlet class instead of implementing the Servlet interface, the only thing we have to do is implement the service method; all other methods are already implemented by the abstract class. HttpServlet Class This is also an abstract class, like GenericServlet, but this class is not protocol independent. It is tied to the HTTP protocol and introduces new methods related only to HTTP. Each of these new methods is responsible for processing client requests made with a particular HTTP method. The doXxx methods: doGet(HttpServletRequest request, HttpServletResponse response), processes HTTP GET requests; doPost(HttpServletRequest request, HttpServletResponse response), processes POST requests; doOptions(HttpServletRequest request, HttpServletResponse response), processes HTTP OPTIONS requests; doPut(HttpServletRequest request, HttpServletResponse response), processes HTTP PUT requests; doDelete(HttpServletRequest request, HttpServletResponse response), processes HTTP DELETE requests; doHead(HttpServletRequest request, HttpServletResponse response), processes HTTP HEAD requests; doTrace(HttpServletRequest request, HttpServletResponse response), processes HTTP TRACE requests. ServletContext Interface The ServletContext interface is the API which provides access to information about the application. Every application is executed inside its own context, and this interface provides access to that information. The implementation of this interface is provided by the server vendor, and we should not be concerned with the concrete implementation. 
When the application is deployed, the container first creates the ServletContext implementation class and fills it with the data provided by the deployment descriptor. The methods of this interface can be split into a few groups. Methods for accessing context attributes: Object getAttribute(String name), retrieves an object from the context; Enumeration getAttributeNames(), retrieves the attribute names; void removeAttribute(String name), removes an attribute from the context; void setAttribute(String name, Object value), adds a new object into the context and binds it to the specified name (if an object with the specified name already exists, it is overridden). Methods for obtaining context information: String getServletContextName(), retrieves the value defined by <display-name> in the deployment descriptor, or null if it does not exist; String getRealPath(String path), returns the real path for the given context-relative path, or null if the application is deployed as a WAR (i.e. not exploded into a folder); Set getResourcePaths(String path), retrieves the files inside the specified partial path, one level deep only; ServletContext getContext(String appURL), retrieves the ServletContext of another application deployed on the same server (the URL must start with '/'). Methods for accessing static resources: URL getResource(String path), retrieves the URL of the resource specified by the path (the path must start with '/'); InputStream getResourceAsStream(String path), retrieves an InputStream for the specified resource (the path can be context relative); String getMimeType(String path), returns the MIME type of the resource. Methods for obtaining a request dispatcher: RequestDispatcher getRequestDispatcher(String path), returns a RequestDispatcher for the specified resource, or null if the resource does not exist; RequestDispatcher getNamedDispatcher(String name), returns a RequestDispatcher for a resource named in the deployment descriptor. Methods for accessing context initialization parameters: String getInitParameter(String name), retrieves the value of the specified parameter defined in the deployment descriptor, or null if it does not exist. 
Enumeration getInitParameterNames(), returns the list of parameter names defined in the application's deployment descriptor. Context attributes are application-scoped attributes, which means that all clients share the same attributes; a change made to an attribute by one client is visible to every other client. ServletConfig Interface This is the API which provides methods for accessing the information defined for a servlet inside the deployment descriptor. The concrete object is created by the servlet container and passed to the servlet during the initialization phase. This interface defines the following methods: String getInitParameter(String name), gets the value of the init parameter defined for the servlet under the specified name, or null if there is no such parameter; Enumeration getInitParameterNames(), retrieves an enumeration of the servlet init parameter names; ServletContext getServletContext(), retrieves the servlet context; String getServletName(), retrieves the servlet name specified in web.xml. As you can see, ServletConfig provides only methods for reading init parameters; there is no method for changing or adding init parameters, because they cannot be changed or added at runtime. Servlet Deployment Descriptor If we want to use servlets, we need to define them inside the deployment descriptor:

<servlet>
    <description>This is a servlet</description>
    <display-name>First Servlet</display-name>
    <servlet-name>FirstServlet</servlet-name>
    <servlet-class>ba.codecentric.scwcd.FirstServlet</servlet-class>
    <init-param>
        <param-name>firstParam</param-name>
        <param-value>value</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>FirstServlet</servlet-name>
    <url-pattern>/FirstServlet</url-pattern>
</servlet-mapping>

Inside the servlet tag we define the servlet; within it we can use the init-param tag for defining initialization parameters, which are passed to the servlet during the initialization phase as part of the ServletConfig object. With the servlet-mapping tag we define the URL pattern which will be used for activating the specified servlet. 
Earlier in this tutorial I spoke about the ServletContext and mentioned context parameters. These parameters are also defined in the deployment descriptor, using the context-param tag:

<context-param>
    <param-name>contextParameter</param-name>
    <param-value>value</param-value>
</context-param>

  Reference: Basics about Servlets from our JCG partner Igor Madjeric at the Igor Madjeric blog. ...
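To tie these pieces together, here is a minimal sketch of what the FirstServlet class declared in the descriptor above might look like, reading both its servlet init parameter (via ServletConfig) and the context parameter (via ServletContext). This is an illustrative sketch only: it assumes the servlet API (javax.servlet) is on the classpath, and the class, package and parameter names simply mirror the descriptor snippets in this article.

```java
package ba.codecentric.scwcd;

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class FirstServlet extends HttpServlet {

    @Override
    public void init() throws ServletException {
        // Servlet-scoped parameter from <init-param>, read through ServletConfig
        String firstParam = getInitParameter("firstParam");
        // Application-scoped parameter from <context-param>, read through ServletContext
        String contextParam = getServletContext().getInitParameter("contextParameter");
        log("firstParam=" + firstParam + ", contextParameter=" + contextParam);
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/plain");
        PrintWriter out = response.getWriter();
        out.println("Hello from " + getServletName());
    }
}
```

Because the servlet is activated by the container through the URL pattern in the servlet-mapping, there is no main method here; the container calls init() once and then doGet() for each matching GET request.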

Trunk, Branch, Tag And Related Concepts

Trunk, Branch and Tag are concepts relevant to revision control (or version control) systems. These systems are typically implemented as repositories containing electronic documents and the changes made to those documents. Each set of changes is marked with a revision (or version) number. These numbers identify each set of modifications uniquely. Reminder A version control system like Subversion works with a central repository. Users can check out its content (i.e., the documents) to their local PC. They can modify these documents locally. Then, users can commit their changes back to the repository. If there is a conflict between modifications made to documents by different users committing simultaneously, Subversion will ask each user to resolve the conflicts locally first, before accepting their modifications back into the repository. This ensures continuity between the revisions (i.e., sets of modifications) made to the content of the repository. What is this good for? If the repository is used to store software development code files, each developer can check out these files locally and, after making modifications, make sure these compile properly before committing the modifications back to the repository. This guarantees that each revision compiles properly and that the code does not contain any incoherence. Trunk, Branches & Tags  Sometimes, software development requires working on large pieces of code. This can be experimental code too. Multiple software developers may be involved. These modifications are so large that one may want a temporary copy of the repository content to work on, without modifying the original content. This issue is addressed with trunk and branches. When using Subversion, the typical directory structure within the repository is made of three directories: trunk, branches and tags. The trunk contains the main line of development. To solve the issue raised above, a branch can be created. 
This branch is a copy of the main line of development (trunk) at a given revision number. Software developers can check the branch out, like they would with trunk content. They can perform modifications locally and commit content back to the branch. The trunk content will not be modified. Multiple software developers can work on this branch, like they would on trunk. Once all modifications have been made to the branch (or the experimental code is approved), these modifications can be merged back to trunk. Just like with a simple check-out, if there is any incoherence, Subversion will request that software developers resolve it at the branch level, before accepting to merge the branch back into trunk. Once the branch is merged, it is also closed. No further modification on it is accepted. Multiple branches can be created simultaneously from trunk. Branches can also be abandoned and deleted, in which case their modifications are not merged back to trunk. When the software development team has finished working on a project, and every modification has been committed or merged back to trunk, one may want to release a copy (or snapshot) of the code from trunk. This is called a tag. The code is copied into a directory within the tags directory. Usually, the version of the release is used as the directory name (for example 1.0.0). Contrary to branches, tags are not meant to receive further modifications. Further modifications should be performed only on trunk and branches. If a released tag needs modifications, a branch from the tag (not the trunk) should be created (for example, with the name 1.0.x). Later, an extra tag from that tag branch can be created with a minor release version (for example 1.0.1). Why work like this? Imagine a software application is released and put into production (version 1.0.0). The team carries on working on version 2.0.0 from trunk (or a branch from trunk). This makes sense regarding the continuity of the code line. Later, someone finds a bug in version 1.0.0. 
A code correction is required. It cannot be performed on trunk, since trunk already contains 2.0.0 code. Tag 1.0.0 must be used. Hence the need to create a branch from tag 1.0.0. But what about the code correction created for the 1.0.0 bug? Shouldn't it be included in version 2.0.0? Yes, it should, but since branch 1.0.x cannot be merged back to trunk (it comes from tag 1.0.0), another solution is required. Typically, one will create a patch containing the code correction from branch 1.0.x, and apply it locally to a check-out of trunk. Then, this code correction can be committed back to trunk and it will be part of version 2.0.0. Branches created from tag releases have a life of their own. They are called maintenance branches and remain alive as long as the released versions are maintained. Contrary to trunk branches, they are never merged back with their original tag. You could consider them like little trunks for tag releases.   Reference: Explain Trunk, Branch, Tag And Related Concepts from our JCG partner Jerome Versrynge at the Technical Notes blog. ...

Java 7: File Filtering using NIO.2 – Part 3

Hello all. This is Part 3 of the File Filtering using NIO.2 series. For those of you who haven't read Part 1 or Part 2, here's a recap. NIO.2 is a new API for I/O operations included in the JDK since Java 7. With this new API, you can perform the same operations you used to perform with the package, plus a lot of great functionality such as accessing file metadata and watching for directory changes, among others. Obviously, the package is not going to disappear, because of backward compatibility, but we are encouraged to start using NIO.2 for our new I/O requirements. In this post, we are going to see how easy it is to filter the contents of a directory using this API. There are 3 ways to do so; we already reviewed two similar ways in Part 1 and Part 2, but now we are going to see a more powerful approach. What you need NetBeans 7+ or any other IDE that supports Java 7 JDK 7+ Filtering the content of a directory is a common task in some applications, and NIO.2 makes it really easy. The classes and interfaces we are going to use are described next: java.nio.file.Path: interface whose objects may represent files or directories in a file system. It is the NIO.2 analogue of the class. Whatever I/O operation you want to perform, you need an instance of this interface. java.nio.file.DirectoryStream: interface whose objects iterate over the content of a directory. java.nio.file.DirectoryStream.Filter<T>: a nested interface whose objects decide whether an element of a directory should be filtered out or not. java.nio.file.Files: class with static methods that operate on files, directories, etc. The way we are going to filter the contents of a directory is by using objects that implement the java.nio.file.DirectoryStream.Filter<T> interface. This interface declares only one method, +accept(T):boolean, which, as the JavaDoc says, 'returns true if the directory entry should be accepted'. 
So it's up to you to implement this method and decide whether a directory entry should be accepted, based on whatever attribute you want to use: hidden status, size, owner, creation date, etc. This is important to remember: using this method you are no longer tied to filtering only by name; you can use any other attribute. If you only want directories, you can use the java.nio.file.Files class and its +isDirectory(Path, LinkOption…):boolean method when creating the filter:

//in a class...

/**
 * Creates a filter for directories only
 * @return Object which implements DirectoryStream.Filter
 * interface and that accepts directories only.
 */
public static DirectoryStream.Filter<Path> getDirectoriesFilter() {

    DirectoryStream.Filter<Path> filter = new DirectoryStream.Filter<Path>() {
        @Override
        public boolean accept(Path entry) throws IOException {
            return Files.isDirectory(entry);
        }
    };

    return filter;
}

Or if you only want hidden files, you can use the java.nio.file.Files class and its +isHidden(Path):boolean method when creating the filter:

//in a class...

/**
 * Creates a filter for hidden files only
 * @return Object which implements DirectoryStream.Filter
 * interface and that accepts hidden files only.
 */
public static DirectoryStream.Filter<Path> getHiddenFilesFilter() {

    DirectoryStream.Filter<Path> filter = new DirectoryStream.Filter<Path>() {
        @Override
        public boolean accept(Path entry) throws IOException {
            return Files.isHidden(entry);
        }
    };

    return filter;
}

Or if you want files belonging to a specific user, you have to ask for a user and compare it with the owner of the directory entry. To obtain the owner of a directory entry, you can use the java.nio.file.Files class and its +getOwner(Path, LinkOption…):UserPrincipal method (watch out, not all operating systems support this). 
To obtain a specific user of the filesystem, use the java.nio.file.FileSystem class and its +getUserPrincipalLookupService() method:

//in a class...

/**
 * Creates a filter for owners
 * @return Object which implements DirectoryStream.Filter
 * interface and that accepts files that belong to the
 * owner passed as parameter.
 */
public static DirectoryStream.Filter<Path> getOwnersFilter(String ownerName) throws IOException {

    UserPrincipalLookupService lookup =

    final UserPrincipal me = lookup.lookupPrincipalByName(ownerName);

    DirectoryStream.Filter<Path> filter = new DirectoryStream.Filter<Path>() {
        @Override
        public boolean accept(Path entry) throws IOException {
            return Files.getOwner(entry).equals(me);
        }
    };

    return filter;
}

The following piece of code defines a method which scans a directory using any of the previous filters:

//in a class...

/**
 * Scans the directory using the filter passed as parameter.
 * @param folder directory to scan
 * @param filter Object which decides whether a
 * directory entry should be accepted
 */
private static void scan(String folder, DirectoryStream.Filter<Path> filter) {
    //obtains the Images directory in the app directory
    Path dir = Paths.get(folder);
    //the Files class offers methods for validation
    if (!Files.exists(dir) || !Files.isDirectory(dir)) {
        System.out.println("No such directory!");
        return;
    }
    //validate the filter
    if (filter == null) {
        System.out.println("Please provide a filter.");
        return;
    }

    //Try with resources... so nice!
    try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir, filter)) {
        //iterate over the filtered content of the directory
        int count = 0;
        for (Path path : ds) {
            System.out.println(path.getFileName());
            count++;
        }
        System.out.println();
        System.out.printf("%d entries were accepted\n", count);
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}

We can execute the previous code passing the following parameters to the main method (check the source code at the end of this post): Directory to scan: C:\ or /, depending on your OS. Filter: hidden. When executing the code we get the expected listing of hidden entries. On a Windows machine, you can obtain the hidden files using the command dir /A:H, and notice that we get the same result. And on my Linux virtual machine, using the command ls -ald .* we get similar results. Again, write once, run everywhere! I hope you enjoyed the File Filtering using NIO.2 series. One last word: all the filtering methods we reviewed work on a single directory only; if you want to scan a complete tree of directories, you will have to make use of the java.nio.file.SimpleFileVisitor class. Click here to download the source code of this post.   Reference: Java 7: File Filtering using NIO.2 – Part 3 from our JCG partner Alexis Lopez at the Java and ME blog. ...
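As a taste of that last point, here is a small self-contained sketch (plain JDK 7+, no external dependencies; the class and method names are made up for this example) that uses Files.walkFileTree with a SimpleFileVisitor to count all regular files under a whole directory tree, something the single-directory DirectoryStream approach above cannot do on its own:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class TreeScan {

    /** Counts regular files in the whole tree rooted at start. */
    static long countFiles(Path start) throws IOException {
        final long[] count = {0};
        Files.walkFileTree(start, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                if (attrs.isRegularFile()) {
                    count[0]++;
                }
                return FileVisitResult.CONTINUE; // keep walking the tree
            }
        });
        return count[0];
    }

    public static void main(String[] args) throws IOException {
        // Build a small throwaway tree: <tmp>/sub/a.txt and <tmp>/b.txt
        Path tmp = Files.createTempDirectory("treescan");
        Files.createDirectory(tmp.resolve("sub"));
        Files.createFile(tmp.resolve("sub").resolve("a.txt"));
        Files.createFile(tmp.resolve("b.txt"));

        System.out.println(countFiles(tmp)); // expect 2: a.txt and b.txt
    }
}
```

The visitor's other callbacks (preVisitDirectory, visitFileFailed, postVisitDirectory) can be overridden in the same way to prune subtrees or handle unreadable entries.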

Collaborative Artifacts as Code

A software development project is a collaborative endeavor. Several team members work together and produce artifacts that evolve continuously over time, a process that Alberto Brandolini (@ziobrando) calls Collaborative Construction. Regularly, these artifacts are taken in their current state and transformed into something that becomes a release. Typically, source code is compiled and packaged into some executable. The idea of Collaborative Artifacts as Code is to acknowledge this collaborative construction phase and push it one step further, by promoting as many collaborative artifacts as possible into plain text files stored in the same source control, while everything else is generated, rendered and archived by the software factory. Collaborative artifacts are the artifacts the team works on and maintains over time thanks to the changes made by several people through a source control management system such as SVN, TFS or Git, with all their benefits like branching and versioning. Keep together what varies together The usual way of storing documentation is to put MS Office documents on a shared drive somewhere, or to write random stuff in a wiki that is hardly organized. Either way, this documentation will quickly get out of sync, because the code is continuously changing independently of the documents stored somewhere else, and as you know, "Out of sight, out of mind". We now have better alternatives Over the last few years, there have been changes in software development. Github has popularized the overview file written in Markdown. DevOps brought the principle of Infrastructure as Code. The BDD approach introduced the idea of text scenarios as living documentation and an alternative to both specifications and acceptance tests. New ways of planning what a piece of software is supposed to do have appeared, as in Impact Mapping. 
All this suggests that we could replace many informal documents with their more structured alternatives, and we could have all these files collocated within the source control, together with the source. In any given branch of the source control we would then have something like this: Source code (C#, Java, VB.Net, VB, C++) Basic documentation through plain and perhaps other .md files wherever useful, to give a high-level overview of the code SQL code as source code too, or through Liquibase-style configuration Living documentation: unit tests and BDD scenarios (SpecFlow/Cucumber/JBehave feature files) Impact maps (and every other mind map), which may be written as text and then rendered via tools like text2mindmap Any other kind of diagram (UML or general-purpose graphs), ideally defined in a plain text format and then rendered through tools (Graphviz, yUml) Dependency declarations as manifests (Maven, Nuget…) instead of documentation on how to set up and build manually Deployment code as scripts or Puppet manifests for automated deployment, instead of documentation on how to deploy manually  Plain Text Obsession is a good thing! Nobody creates software by directly editing the executable binary that the users actually run at the end, yet it is common to directly edit the MS Word document that will be shipped in a release. Collaborative Artifacts as Code suggests that every collaborative artifact should be text-based, to work nicely with source control and to be easy to compare and merge between versions. Text-based formats shall be preferred whenever possible, e.g. .csv over .xls, .rtf or .html over .doc; otherwise the usual big PPT files must go to another dedicated wiki where they can be safely forgotten and become instantly deprecated… Like a wiki, but generated and read-only My colleague Thomas Pierrain summed up the benefits of this approach for documentation: it is always up to date and versioned; easily diff-able (text files, e.g. 
with Markdown format); it respects the DRY principle (with the SCM as its golden source); it is easily browsable by everyone (DEV, QA, BA, support teams…) in the read-only and readable wiki-like web site; and it is easily modifiable by team members in a well-known and official location (as easy as creating or modifying a text file in an SCM)  What's next? This approach is nothing really new (think about LaTeX…), and many of the tools we need for it already exist (Markdown renderers, web sites to organize and display Gherkin scenarios…). However, I have never seen this approach fully applied in an actual project. Maybe your project is already doing this? Please share your feedback!   Reference: Collaborative Artifacts as Code from our JCG partner Cyrille Martraire at the Cyrille Martraire blog. ...

java.lang.ClassNotFoundException: How to resolve

This article is intended for Java beginners currently facing java.lang.ClassNotFoundException challenges. It will provide you with an overview of this common Java exception, a sample Java program to support your learning process, and resolution strategies. If you are interested in more advanced class loader related problems, I recommend that you review my article series on java.lang.NoClassDefFoundError, since these Java exceptions are closely related. java.lang.ClassNotFoundException: Overview As per the Oracle documentation, ClassNotFoundException is thrown following the failure of a class loading call using the class's string name, as per below: The Class.forName method The ClassLoader.findSystemClass method The ClassLoader.loadClass method In other words, it means that one particular Java class was not found or could not be loaded at "runtime" from your application's current context class loader. This problem can be particularly confusing for Java beginners. This is why I always recommend that Java developers learn and refine their knowledge of Java class loaders. Unless you are involved in dynamic class loading and using the Java Reflection API, chances are that the ClassNotFoundException error you are getting comes not from your application code but from a referencing API. Another common problem pattern is wrong packaging of your application code. We will get back to the resolution strategies at the end of the article. java.lang.ClassNotFoundException: Sample Java program Now find below a very simple Java program which simulates the 2 most common ClassNotFoundException scenarios via Class.forName() & ClassLoader.loadClass(). Please simply copy/paste and run the program with the IDE of your choice (the Eclipse IDE was used for this example). The Java program allows you to choose between problem scenario #1 and problem scenario #2 as per below. Simply change PROBLEM_SCENARIO to 1 or 2 depending on the scenario you want to study. 
# Class.forName()
private static final int PROBLEM_SCENARIO = 1;

# ClassLoader.loadClass()
private static final int PROBLEM_SCENARIO = 2;

# ClassNotFoundExceptionSimulator

package;

/**
 * ClassNotFoundExceptionSimulator
 * @author Pierre-Hugues Charbonneau
 */
public class ClassNotFoundExceptionSimulator {

   private static final String CLASS_TO_LOAD = "";
   private static final int PROBLEM_SCENARIO = 1;

   /**
    * @param args
    */
   public static void main(String[] args) {

      System.out.println("java.lang.ClassNotFoundException Simulator - Training 5");
      System.out.println("Author: Pierre-Hugues Charbonneau");
      System.out.println("");

      switch (PROBLEM_SCENARIO) {

         // Scenario #1 - Class.forName()
         case 1:
            System.out.println("\n** Problem scenario #1: Class.forName() **\n");
            try {
               Class<?> newClass = Class.forName(CLASS_TO_LOAD);
               System.out.println("Class " + newClass + " found successfully!");
            } catch (ClassNotFoundException ex) {
               ex.printStackTrace();
               System.out.println("Class " + CLASS_TO_LOAD + " not found!");
            } catch (Throwable any) {
               System.out.println("Unexpected error! " + any);
            }
            break;

         // Scenario #2 - ClassLoader.loadClass()
         case 2:
            System.out.println("\n** Problem scenario #2: ClassLoader.loadClass() **\n");
            try {
               ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
               Class<?> callerClass = classLoader.loadClass(CLASS_TO_LOAD);
               Object newClassAInstance = callerClass.newInstance();
               System.out.println("SUCCESS!: " + newClassAInstance);
            } catch (ClassNotFoundException ex) {
               ex.printStackTrace();
               System.out.println("Class " + CLASS_TO_LOAD + " not found!");
            } catch (Throwable any) {
               System.out.println("Unexpected error! " + any);
            }
            break;
      }

      System.out.println("\nSimulator done!");
   }
}

# ClassA

package;

/**
 * ClassA
 * @author Pierre-Hugues Charbonneau
 */
public class ClassA {

   private final static Class<ClassA> CLAZZ = ClassA.class;

   static {
      System.out.println("Class loading of " + CLAZZ + " from ClassLoader '"
            + CLAZZ.getClassLoader() + "' in progress...");
   }

   public ClassA() {
      System.out.println("Creating a new instance of " + ClassA.class.getName() + "...");
      doSomething();
   }

   private void doSomething() {
      // Nothing to do...
   }
}

If you run the program as is, you will see the output below for each scenario:

#Scenario 1 output (baseline)
java.lang.ClassNotFoundException Simulator – Training 5
Author: Pierre-Hugues Charbonneau
** Problem scenario #1: Class.forName() **
Class loading of class from ClassLoader 'sun.misc.Launcher$AppClassLoader@bfbdb0' in progress…
Class class found successfully!
Simulator done!

#Scenario 2 output (baseline)
java.lang.ClassNotFoundException Simulator – Training 5
Author: Pierre-Hugues Charbonneau
** Problem scenario #2: ClassLoader.loadClass() **
Class loading of class from ClassLoader 'sun.misc.Launcher$AppClassLoader@2a340e' in progress…
Creating a new instance of…
SUCCESS!:
Simulator done!

For the "baseline" run, the Java program is able to load ClassA successfully. Now let's voluntarily change the full name of ClassA and re-run the program for each scenario. The following output can be observed:

#ClassA changed to ClassB
private static final String CLASS_TO_LOAD = "";

#Scenario 1 output (problem replication)
java.lang.ClassNotFoundException Simulator – Training 5
Author: Pierre-Hugues Charbonneau
** Problem scenario #1: Class.forName() **
java.lang.ClassNotFoundException :
 at$ )
 at$ )
 at Native Method )
 at )
 at java.lang.ClassLoader.loadClass( )
 at sun.misc.Launcher$AppClassLoader.loadClass( )
 at java.lang.ClassLoader.loadClass( )
 at java.lang.Class.forName0( Native Method )
 at java.lang.Class.forName( )
 at )
Class not found!
Simulator done!

#Scenario 2 output (problem replication)
java.lang.ClassNotFoundException Simulator – Training 5
Author: Pierre-Hugues Charbonneau
** Problem scenario #2: ClassLoader.loadClass() **
java.lang.ClassNotFoundException :
 at$ )
 at$ )
 at Native Method )
 at )
 at java.lang.ClassLoader.loadClass( )
 at sun.misc.Launcher$AppClassLoader.loadClass( )
 at java.lang.ClassLoader.loadClass( )
 at )
Class not found!
Simulator done!

What happened? Well, since we changed the full class name, such a class was not found at runtime (it does not exist), causing both the Class.forName() and ClassLoader.loadClass() calls to fail. You can also replicate this problem by packaging each class of this program in its own JAR file and then omitting the JAR file containing ClassA.class from the main classpath. Please try this and see the results for yourself… (hint: NoClassDefFoundError) Now let's jump to the resolution strategies. java.lang.ClassNotFoundException: Resolution strategies Now that you understand this problem, it is time to resolve it. Resolution can be fairly simple or very complex, depending on the root cause. Don't jump to complex root causes too quickly; rule out the simplest causes first. First, review the java.lang.ClassNotFoundException stack trace as per the above and determine which Java class was not loaded properly at runtime, e.g. application code, a third-party API, the Java EE container itself, etc. Identify the caller, e.g. the Java class you see in the stack trace just before the Class.forName() or ClassLoader.loadClass() call; this will help you understand whether your application code is at fault vs. a third-party API. Determine whether your application code is packaged improperly, e.g. JAR file(s) missing from your classpath. If the missing Java class is not from your application code, then identify whether it belongs to a third-party API you are using as part of your Java application. 
Once you identify it, you will need to add the missing JAR file(s) to your runtime classpath or to your web application's WAR/EAR file. If you are still struggling after multiple resolution attempts, it could mean a more complex class loader hierarchy problem. In that case, please review my NoClassDefFoundError article series for more examples and resolution strategies. I hope this article has helped you to understand and revisit this common Java exception. Please feel free to post any comment or question if you are still struggling with your java.lang.ClassNotFoundException problem.   Reference: java.lang.ClassNotFoundException: How to resolve from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog. ...
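As a quick illustration of those first resolution steps, the self-contained sketch below (class and example names are made up for illustration) prints the classpath the JVM actually sees and probes for a class by name, which is usually enough to tell a simple packaging problem apart from a genuine class loader hierarchy issue:

```java
public class ClasspathCheck {

    /** Tries to load a class by name and reports whether it is visible. */
    static boolean isPresent(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException ex) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Step 1: print the runtime classpath the JVM was started with
        System.out.println("classpath: " + System.getProperty("java.class.path"));

        // Step 2: probe a class that must exist, and a deliberately bogus name
        System.out.println(isPresent("java.lang.String"));        // expect true
        System.out.println(isPresent("com.example.NoSuchClass")); // expect false
    }
}
```

If the probe returns false for a class you expect to be present, compare the printed classpath against the JAR that actually contains the class before suspecting anything more exotic.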
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.