Why Future Generations Will Hate You for Using java.util.Stack

Before I kill you with some meaningless tautology, here is the gist. If your application is near real time or you are sending your code to Mars, you need to keep off the default Stack implementation in Java; write your own version based on LinkedList. Again, if your application is mission critical and your Stack is expected to be manipulated by concurrent threads, then use a ConcurrentLinkedDeque or write your own Stack based on LinkedList – just make sure your add and remove operations are thread safe, and consider concurrency locks while doing so. If you just need raw power, are not bothered by occasional hiccups during the push process AND your Stack is not manipulated by concurrent threads, then use an ArrayDeque or go ahead and write your own Stack based on an ArrayList. If multithreaded, then write your own Stack based on an ArrayDeque and java.util.concurrent locks. If you refuse to read the Java Stack API and the Java Deque API and you are simply a crazy person, then use the default implementation. And I promise, no mercy will be shown to you when the bots take over the world.

Note: The truth is, unless for some reason you would want to name your implementation class 'Stack', you are pretty much free to use any of the Deque implementations as a Stack directly.

Now that enough mud has been thrown against the default implementation and I have your attention for a couple of minutes, let me sum things up fast. We know that Stack in the Java Collection API extends Vector, which internally uses an array. In other words, Java uses an array-based implementation for its Stack. So, let's see why, between the two most popular Stack implementations – arrays and linked lists – Java chose arrays. Some answers are quite obvious, some aren't.

Fair Play

A cursory look over the add and remove methods of arrays and linked lists, which are the pillars of the push and pop methods of the Stack, shows constant time retrieval across the board.
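As the note above points out, any Deque implementation can be used as a stack directly. A minimal sketch (class name and element values are mine, for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeAsStack {
    public static void main(String[] args) {
        // ArrayDeque offers the same push/pop/peek operations as java.util.Stack,
        // without the synchronization overhead inherited from Vector.
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first");
        stack.push("second");
        stack.push("third");

        System.out.println(stack.pop());  // third  (LIFO order)
        System.out.println(stack.peek()); // second (top element, not removed)
    }
}
```

If you have a rough idea of the eventual size, the `new ArrayDeque<>(expectedSize)` constructor lets you skip the intermediate array copies discussed below.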
Growth issues

It's no news that arrays are fixed size, and that the growth of an array is achieved by copying the array to a bigger array. In the case of our default implementation of Stack using Vector, the capacity is simply doubled. It just means that if we are adding 80 elements to a stack, the internal array gets copied 4 times – at 10, 20, 40 and 80. So, say, when we are adding the 80th element, the push operation actually takes O(N) time, and since our N is 80 in this case, that is going to put at least a little pause on your program with that cruel deep copy – those valuable little cycles that you could save for some other ride. Too bad, unlike Vector, you won't be able to specify the initial size or the increment factor for java.util.Stack, because there are no overloaded constructors. On the other hand, though growth hiccups affect an ArrayDeque too, ArrayDeque has a sweet overloaded constructor for initial capacity, which comes in handy if you have an approximate idea of how big your stack is going to be. Also, the default initial capacity is 16 for an ArrayDeque as against 10 for a Vector.

Time and Place, my friend. Time and Place

To be fair with arrays, the objects stored in an array-based stack are just references to the actual objects in the heap (in the case of objects) or actual values (in the case of primitives). On the other hand, in the case of LinkedList, there is a Node wrapper on top of each stored item. On average that should cost you ~40 bytes of extra heap space per stored object (including the Node inner class object, the link to the next Node and the reference to the item itself).

So, ArrayDeque or LinkedList?

Arrays are preferred for most purposes because they offer much better speed of access due to their unique advantage of occupying sequential memory, and getting to the actual object is just pointer arithmetic. However, push and pop operations on the threshold item (the item that triggers a resize) take O(n) time.
However, on average, it takes constant time (amortized constant time, if you will). On the other hand, with LinkedList, add operations are slower than with arrays due to the extra time taken to construct new nodes and point to them. Needless to say, new nodes consume heap space beyond the space consumed by the actual object. However, since there is no resizing (or need for sequential memory) and it always has a reference to the first element, it has a worst-case guarantee of constant time. Now, while you revisit the first part of this blog, feel free to say: Damn you, default implementation!!!

Related links:
http://onjava.com/pub/a/onjava/2001/10/23/optimization.html
http://www.javaworld.com/javatips/jw-javatip130.html?page=1
http://docs.oracle.com/javase/7/docs/api/java/util/RandomAccess.html
http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/Vector.java

Reference: Why Future Generations Will Hate You for Using java.util.Stack from our JCG partner Arun Manivannan at the Rerun.me blog.

Basics about Servlets

Through this tutorial I will try to get you closer to the Java Servlet Model. Before we start examining the classes defined within the servlet specification, I will explain the basic things you need to know before you start developing web applications.

Understanding the Java Servlet Model

First of all, the Java Servlet Model is not defined only for the web: it is a specification based on a request and response programming model. But it is mostly used in interaction with the HTTP protocol, so from now on we will be discussing the use of the servlet model in HTTP applications. Basically, web applications are applications placed somewhere on the web, which can be accessed through the net. If you need more information and a better definition of what a web application is, you can visit the following link: Web application.

If we want to create a web application, we need a web server; it can be any kind of HTTP server with an included web container, such as Tomcat. The HTTP server is in charge of handling client requests, security, serving content to the client, etc., but an HTTP server cannot dynamically create a response for the client; it is only able to serve static content. The solution to this problem is a web container. A container is able to host web applications; the HTTP server passes a request to the web container, which takes care of it. Usually there is only one container per server, and all web applications on the server are served by this container.

All communication inside the web container is realized through the web container interface. This means that one application cannot directly access another one. Also, components inside one application cannot directly access each other; all communication between components in the same application is realized using the web container interface.
This is very important for understanding how a web application works in a web container; it allows us to create filters and listeners, and it allows us to use the security features of the web container.

WAR Application Structure

By specification, a Java web application is packed into a WAR package. A WAR package is the same as a JAR package, but when the web container finds a WAR file in the deploy folder, it will assume that it is a web application and try to start it. Inside the WAR package we have one special directory called WEB-INF. The content of this folder is not directly served to the user. This folder contains the folders classes and lib, in which we can put classes used by the application (classes folder) and additional JARs (lib folder). The content of those folders will be automatically read by the class loader without any additional classpath settings. This folder also contains the web.xml file, which is called the deployment descriptor. This file is not required if the web application contains only JSP pages, but if the application needs servlets or filters, this file must be defined.

Servlet Life Cycle

During its existence, a servlet passes through five life cycle phases: loading, instantiation, initialization, serving and destroying.

Loading is the phase in which the class loader loads the class. Every web application gets its own class loader instance, which is used for loading its web components. This allows us to deploy two versions of the same application in the same container, where each application may have classes with the same (fully qualified) name. After loading, the web container will try to instantiate the class, i.e. create a new instance of it. Usually every web component is created just once, but this depends on the behavior of the web container; in some cases a web container can be set to create several instances of a component class in a pool, and serve requests with one of the instances from the pool. When the web container creates a new instance of a servlet, it uses the default constructor. Initialization is the life cycle phase in which the servlet is initialized.
In this phase the servlet is supposed to read some values and take additional steps before it is able to serve client requests. The serving phase is the part of the servlet's life in which it serves client requests. The destroy phase is the last phase of a servlet's life, and it happens when the servlet is removed from service.

Servlet Interface

If we want to create a servlet, all we need to do is implement the Servlet interface. This interface provides the following three methods, which are called by the container:

init(ServletConfig config), called during initialization
service(ServletRequest request, ServletResponse response), called to service a request
destroy(), called when the servlet is removed from service.

This interface also provides two ancillary methods:

ServletConfig getServletConfig()
String getServletInfo()

During initialization it is possible to get a ServletException. Raising this exception in the init method signals the container that an error occurred; the container will stop initialization and mark the servlet instance as ready for garbage collection, and this will not cause the destroy method to be called. Also, during the service method it is possible to get a ServletException or an UnavailableException. These exceptions can be temporary or permanent. In the case of a temporary exception, the server will block calls to the service method for some time, but in the case of a permanent exception, the destroy method will be called, the servlet will be ready for garbage collection, and every future call to this servlet will lead to a 404 response.

GenericServlet Class

The GenericServlet class is part of the javax.servlet package. It is an abstract class which implements the Servlet interface and provides a basic, protocol-independent implementation.
This class introduces the following new methods:

init(), called by the init(ServletConfig config) method during the initialization phase
ServletContext getServletContext(), provides access to the ServletContext
String getInitParameter(String name), retrieves the value of the servlet init parameter defined in the deployment descriptor for the specified name
Enumeration getInitParameterNames(), returns an enumeration of all servlet init parameters
String getServletName(), returns the name of the servlet.

If we extend the GenericServlet class instead of implementing the Servlet interface, the only thing we need to do is implement the service method; all other methods are already implemented by the abstract class.

HttpServlet Class

This is also an abstract class like GenericServlet, but this class is not protocol independent. It is tied to the HTTP protocol and introduces new methods which are specific to HTTP. Each of these new methods is responsible for processing client requests made with a particular HTTP method. The doXxx methods:

doGet(HttpServletRequest request, HttpServletResponse response), processes HTTP GET requests
doPost(HttpServletRequest request, HttpServletResponse response), processes HTTP POST requests
doOptions(HttpServletRequest request, HttpServletResponse response), processes HTTP OPTIONS requests
doPut(HttpServletRequest request, HttpServletResponse response), processes HTTP PUT requests
doDelete(HttpServletRequest request, HttpServletResponse response), processes HTTP DELETE requests
doHead(HttpServletRequest request, HttpServletResponse response), processes HTTP HEAD requests
doTrace(HttpServletRequest request, HttpServletResponse response), processes HTTP TRACE requests.

ServletContext Interface

The ServletContext interface is an API which provides access to information about the application. Every application is executed inside its own context, and this interface provides access to that information. The implementation of this interface is provided by the server vendor, and we should not be concerned with the concrete implementation.
When an application is deployed, the container will first create the ServletContext implementation class and fill it with data provided by the deployment descriptor. The methods of this interface can be split into a few groups.

Methods for accessing context attributes:

Object getAttribute(String name), retrieves an object from the context
Enumeration getAttributeNames(), retrieves the attribute names
void removeAttribute(String name), removes an attribute from the context
void setAttribute(String name, Object value), adds a new object into the context and binds it to the specified name. If an object with the specified name already exists, it will be overridden.

Methods for obtaining context information:

String getServletContextName(), retrieves the value defined by <display-name> in the deployment descriptor, or null if it does not exist.
String getRealPath(String path), the real path of the specified context-relative resource; null if the application is deployed as a WAR (i.e., not exploded into a folder).
Set getResourcePaths(String path), retrieves the resources inside the specified partial path, only one level deep.
ServletContext getContext(String appURL), retrieves the ServletContext of another application deployed on the same server. The URL must start with '/'.

Methods for accessing static resources:

URL getResource(String path), retrieves the URL of the resource specified by path. The path must start with '/'.
InputStream getResourceAsStream(String path), retrieves an InputStream for the specified resource. The path can be context relative.
String getMimeType(String path), returns the MIME type of the resource.

Methods for obtaining a request dispatcher:

RequestDispatcher getRequestDispatcher(String path), returns a RequestDispatcher for the specified resource, or null if the resource does not exist.
RequestDispatcher getNamedDispatcher(String name), returns a RequestDispatcher for a resource named inside the deployment descriptor.

Methods for accessing context initialization parameters:

String getInitParameter(String name), retrieves the value for the specified parameter defined in the deployment descriptor, or null if it does not exist.
Enumeration getInitParameterNames(), lists the parameter names defined in the application's deployment descriptor.

Context attributes are application scoped attributes, which means that all clients share the same attributes; a change to an attribute made by one client is visible to every other client.

ServletConfig Interface

This is an API which provides methods for accessing information defined inside the deployment descriptor. The concrete object is created by the servlet container and provided to the servlet during the initialization phase. This interface defines the following methods:

String getInitParameter(String name), gets the value of the init parameter defined for the servlet with the specified name, or null if there is no such parameter.
Enumeration getInitParameterNames(), retrieves an enumeration of the servlet init parameter names.
ServletContext getServletContext(), retrieves the servlet context.
String getServletName(), retrieves the servlet name specified in web.xml.

As you can see, ServletConfig provides only methods for reading init parameters; there is no method for changing or adding init parameters, because they can't be changed or added.

Servlets in the Deployment Descriptor

If we want to use servlets, we need to define them inside the deployment descriptor:

<servlet>
    <description>This is a servlet</description>
    <display-name>First Servlet</display-name>
    <servlet-name>FirstServlet</servlet-name>
    <servlet-class>ba.codecentric.scwcd.FirstServlet</servlet-class>
    <init-param>
        <param-name>firstParam</param-name>
        <param-value>value</param-value>
    </init-param>
</servlet>
<servlet-mapping>
    <servlet-name>FirstServlet</servlet-name>
    <url-pattern>/FirstServlet</url-pattern>
</servlet-mapping>

Inside the servlet tags we define a servlet; inside the servlet tag we can use the init-param tag for defining initialization parameters, which will be passed to the servlet during the initialization phase as part of the ServletConfig object. With the servlet-mapping tags we define the URL pattern which will be used for activating the specified servlet.
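URL patterns are not limited to exact paths. A hypothetical sketch (the servlet name and paths are mine, for illustration) showing the two wildcard forms the deployment descriptor supports, path prefixes and extensions:

```xml
<!-- Route every request under /reports/ and every *.pdf request
     to the same (hypothetical) servlet -->
<servlet-mapping>
    <servlet-name>ReportServlet</servlet-name>
    <url-pattern>/reports/*</url-pattern>
</servlet-mapping>
<servlet-mapping>
    <servlet-name>ReportServlet</servlet-name>
    <url-pattern>*.pdf</url-pattern>
</servlet-mapping>
```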
Also, earlier in this tutorial I spoke about the ServletContext and mentioned context parameters. These parameters are also defined in the deployment descriptor, using the context-param tag:

<context-param>
    <param-name>contextParameter</param-name>
    <param-value>value</param-value>
</context-param>

Reference: Basics about Servlets from our JCG partner Igor Madjeric at the Igor Madjeric blog.

Trunk, Branch, Tag And Related Concepts

Trunk, Branch and Tag concepts are relevant to revision control (or version control) systems. These systems are typically implemented as repositories containing electronic documents, and changes to these documents. Each set of changes to the documents is marked with a revision (or version) number. These numbers identify each set of modifications uniquely.

Reminder

A version control system, like Subversion, works with a central repository. Users can check out its content (i.e., the documents) to their local PC. They can perform modifications to these documents locally. Then, users can commit their changes back to the repository. If there is a conflict between modifications made to documents by different users committing simultaneously, Subversion will ask each user to resolve the conflicts locally first, before accepting their modifications back into the repository. This ensures continuity between the revisions (i.e., sets of modifications) made to the content of the repository.

What is this good for? If the repository is used to store software development code files, each developer can check out these files locally, and after making modifications, they can make sure these compile properly before committing their modifications back to the repository. This guarantees that each revision compiles properly and that the code does not contain any incoherence.

Trunk, Branches & Tags

Sometimes, software development requires working on large pieces of code. This can be experimental code too. Multiple software developers may be involved. These modifications are so large that one may want to have a temporary copy of the repository content to work on, without modifying the original content. This issue is addressed with trunk and branches. When using Subversion, the typical directory structure within the repository is made of three directories: trunk, branches and tags. The trunk contains the main line of development. To solve the issue raised above, a branch can be created.
This branch is a copy of the main line of development (trunk) at a given revision number. Software developers can check the branch out, like they would with trunk content. They can perform modifications locally and commit content back to the branch. The trunk content will not be modified. Multiple software developers can work on this branch, like they would on trunk. Once all modifications are made to the branch (or the experimental code is approved), these modifications can be merged back to trunk. Just like with a simple check-out, if there are any incoherences, Subversion will request that software developers solve them at the branch level, before accepting to merge the branch back into trunk. Once the branch is merged, it is also closed; no more modifications to it are accepted. Multiple branches can be created simultaneously from trunk. Branches can also be abandoned and deleted, in which case their modifications are not merged back to trunk.

When the software development team has finished working on a project, and every modification has been committed or merged back to trunk, one may want to release a copy (or snapshot) of the code from trunk. This is called a tag. The code is copied into a directory within the tags directory. Usually, the version of the release is used as the directory name (for example 1.0.0). Contrary to branches, tags are not meant to receive further modifications. Further modifications should be performed only on trunk and branches. If a released tag needs modifications, a branch from the tag (not the trunk) should be created (for example, with name 1.0.x). Later, an extra tag from that tag branch can be created with a minor release version (for example 1.0.1).

Why work like this? Imagine a software application is released and put into production (version 1.0.0). The team carries on working on version 2.0.0 from trunk (or a branch from trunk). This makes sense regarding the continuity of the code line. Later, someone finds a bug in version 1.0.0.
A code correction is required. It cannot be performed on trunk, since trunk already contains 2.0.0 code. Tag 1.0.0 must be used. Hence the need to create a branch from tag 1.0.0. But what about the code correction created for the 1.0.0 bug? Shouldn't it be included in version 2.0.0? Yes, it should, but since branch 1.0.x cannot be merged back to trunk (it comes from tag 1.0.0), another solution is required. Typically, one will create a patch containing the code correction from branch 1.0.x, and apply it locally to a check-out of trunk. Then, this code correction can be committed back to trunk and it will be part of version 2.0.0. Branches created from tag releases have a life of their own. They are called maintenance branches and remain alive as long as released versions are maintained. Contrary to trunk branches, they are never merged back with their original tag. You could consider them like little trunks for tag releases.

Reference: Explain Trunk, Branch, Tag And Related Concepts from our JCG partner Jerome Versrynge at the Technical Notes blog.

Java 7: File Filtering using NIO.2 – Part 3

Hello all. This is Part 3 of the File Filtering using NIO.2 series. For those of you who haven't read Part 1 or Part 2, here's a recap. NIO.2 is a new API for I/O operations included in the JDK since Java 7. With this new API, you can perform the same operations performed with java.io plus a lot of great functionalities, such as accessing file metadata and watching for directory changes, among others. Obviously, the java.io package is not going to disappear, because of backward compatibility, but we are encouraged to start using NIO.2 for our new I/O requirements. In this post, we are going to see how easy it is to filter the contents of a directory using this API. There are three ways to do so; we already reviewed two similar ways in Part 1 and Part 2, but now we are going to see a more powerful approach.

What you need

NetBeans 7+ or any other IDE that supports Java 7
JDK 7+

Filtering the contents of a directory is a common task in some applications, and NIO.2 makes it really easy. The classes and interfaces we are going to use are described next:

java.nio.file.Path: Interface whose objects may represent files or directories in a file system. It's like java.io.File, but in NIO.2. Whatever I/O operation you want to perform, you need an instance of this interface.
java.nio.file.DirectoryStream: Interface whose objects iterate over the content of a directory.
java.nio.file.DirectoryStream.Filter<T>: A nested interface whose objects decide whether an element in a directory should be filtered or not.
java.nio.file.Files: Class with static methods that operate on files, directories, etc.

The way we are going to filter the contents of a directory is by using objects that implement the java.nio.file.DirectoryStream.Filter<T> interface. This interface declares only one method, +accept(T):boolean, which, as the JavaDoc says, 'returns true if the directory entry should be accepted'.
So it's up to you to implement this method and decide whether a directory entry should be accepted based on whatever attribute you want to use: hidden status, size, owner, creation date, etc. This is important to remember: using this method, you are no longer tied to filtering only by name; you can use any other attribute. If you only want directories, you can use the java.nio.file.Files class and its +isDirectory(Path, LinkOption...):boolean method when creating the filter:

//in a class...

/**
 * Creates a filter for directories only
 * @return Object which implements DirectoryStream.Filter
 * interface and that accepts directories only.
 */
public static DirectoryStream.Filter<Path> getDirectoriesFilter() {
    DirectoryStream.Filter<Path> filter = new DirectoryStream.Filter<Path>() {
        @Override
        public boolean accept(Path entry) throws IOException {
            return Files.isDirectory(entry);
        }
    };
    return filter;
}

Or if you only want hidden files, you can use the java.nio.file.Files class and its +isHidden(Path):boolean method when creating the filter:

//in a class...

/**
 * Creates a filter for hidden files only
 * @return Object which implements DirectoryStream.Filter
 * interface and that accepts hidden files only.
 */
public static DirectoryStream.Filter<Path> getHiddenFilesFilter() {
    DirectoryStream.Filter<Path> filter = new DirectoryStream.Filter<Path>() {
        @Override
        public boolean accept(Path entry) throws IOException {
            return Files.isHidden(entry);
        }
    };
    return filter;
}

Or if you want files belonging to a specific user, you have to ask for a user and compare it with the owner of the directory entry. To obtain the owner of a directory entry, you can use the java.nio.file.Files class and its +getOwner(Path, LinkOption...):UserPrincipal method (watch out, not all operating systems support this).
To obtain a specific user on the file system, use the java.nio.file.FileSystem class and its +getUserPrincipalLookupService() method:

//in a class...

/**
 * Creates a filter for owners
 * @return Object which implements DirectoryStream.Filter
 * interface and that accepts files that belong to the
 * owner passed as parameter.
 */
public static DirectoryStream.Filter<Path> getOwnersFilter(String ownerName) throws IOException {
    UserPrincipalLookupService lookup = FileSystems.getDefault().getUserPrincipalLookupService();
    final UserPrincipal me = lookup.lookupPrincipalByName(ownerName);
    DirectoryStream.Filter<Path> filter = new DirectoryStream.Filter<Path>() {
        @Override
        public boolean accept(Path entry) throws IOException {
            return Files.getOwner(entry).equals(me);
        }
    };
    return filter;
}

The following piece of code defines a method which scans a directory using any of the previous filters:

//in a class...

/**
 * Scans the directory using the filter passed as parameter.
 * @param folder directory to scan
 * @param filter Object which decides whether a
 * directory entry should be accepted
 */
private static void scan(String folder, DirectoryStream.Filter<Path> filter) {
    //obtains the Images directory in the app directory
    Path dir = Paths.get(folder);
    //the Files class offers methods for validation
    if (!Files.exists(dir) || !Files.isDirectory(dir)) {
        System.out.println("No such directory!");
        return;
    }
    //validate the filter
    if (filter == null) {
        System.out.println("Please provide a filter.");
        return;
    }
    //Try with resources... so nice!
    try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir, filter)) {
        //iterate over the filtered content of the directory
        int count = 0;
        for (Path path : ds) {
            System.out.println(path.getFileName());
            count++;
        }
        System.out.println();
        System.out.printf("%d entries were accepted\n", count);
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}

We can execute the previous code passing the following parameters to the main method (check the source code at the end of this post):

Directory to scan: C:\ or /, depending on your OS.
Filter: hidden

On a Windows machine, you can obtain the hidden files using the command dir /AAH; notice that we get the same result. And on my Linux virtual machine, using the command ls -ald .* we get similar results. Again: write once, run everywhere!

I hope you enjoyed the File Filtering using NIO.2 series. One last word: all the filtering methods we reviewed work on one directory only; if you want to scan a complete tree of directories, you'll have to make use of the java.nio.file.SimpleFileVisitor class. Click here to download the source code of this post.

Reference: Java 7: File Filtering using NIO.2 – Part 3 from our JCG partner Alexis Lopez at the Java and ME blog.
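As a follow-up, scanning a whole directory tree with java.nio.file.SimpleFileVisitor can look like this. A minimal sketch (the starting directory and the printing logic are mine, for illustration):

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.attribute.BasicFileAttributes;

public class TreeScan {
    public static void main(String[] args) throws IOException {
        Path start = Paths.get(args.length > 0 ? args[0] : ".");
        // walkFileTree visits every entry under 'start', calling back our visitor
        Files.walkFileTree(start, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) {
                System.out.println(file);
                return FileVisitResult.CONTINUE; // keep walking the tree
            }
        });
    }
}
```

Filtering logic (hidden, owner, etc.) can be applied inside visitFile, since the visitor receives the file's BasicFileAttributes.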

Collaborative Artifacts as Code

A software development project is a collaborative endeavor. Several team members work together and produce artifacts that evolve continuously over time, a process that Alberto Brandolini (@ziobrando) calls Collaborative Construction. Regularly, these artifacts are taken in their current state and transformed into something that becomes a release. Typically, source code is compiled and packaged into some executable. The idea of Collaborative Artifacts as Code is to acknowledge this collaborative construction phase and push it one step further, by promoting as many collaborative artifacts as possible into plain text files stored in the same source control, while everything else is generated, rendered and archived by the software factory. Collaborative artifacts are the artifacts the team works on and maintains over time thanks to the changes made by several people through a source control management system such as SVN, TFS or Git, with all their benefits like branching and versioning.

Keep together what varies together

The usual way of storing documentation is to put MS Office documents on a shared drive somewhere, or to write random stuff in a wiki that is hardly organized. Either way, this documentation will quickly get out of sync because the code is continuously changing, independently of the documents stored somewhere else, and as you know, "Out of sight, out of mind".

We now have better alternatives

Over the last few years, there have been changes in software development. GitHub has popularized the README.md overview file written in Markdown. DevOps brought the principle of Infrastructure as Code. The BDD approach introduced the idea of text scenarios as living documentation and an alternative for both specifications and acceptance tests. New ways of planning what a piece of software is supposed to do have appeared, as in Impact Mapping.
All this suggests that we could replace many informal documents with their more structured alternatives, and we could have all these files collocated within the source control together with the source. In any given branch in the source control we would then have something like this:

Source code (C#, Java, VB.Net, VB, C++)
Basic documentation through a plain README.md, and perhaps other .md files wherever useful, to give a high-level overview of the code
SQL code as source code too, or through Liquibase-style configuration
Living documentation: unit tests and BDD scenarios (SpecFlow/Cucumber/JBehave feature files)
Impact maps (and every other kind of mind map), possibly done as text and then rendered via tools like text2mindmap
Any other kind of diagram (UML or general-purpose graphs), ideally defined in a plain text format, then rendered through tools (Graphviz, yUml)
Dependency declarations as manifests (Maven, NuGet...) instead of documentation on how to set up and build manually
Deployment code as scripts or Puppet manifests for automated deployment, instead of documentation on how to deploy manually

Plain Text Obsession is a good thing!

Nobody creates software by directly editing the executable binary that the users actually run, yet it is common to directly edit the MS Word document that will be shipped in a release. Collaborative Artifacts as Code suggests that every collaborative artifact should be text-based to work nicely with source control, and to be easy to compare and merge between versions. Text-based formats shall be preferred whenever possible, e.g. .csv over .xls, .rtf or .html over .doc; otherwise the usual big PPT files must go to another dedicated wiki where they can be safely forgotten and become instantly deprecated...

Like a wiki, but generated and read-only

My colleague Thomas Pierrain summed up the benefits of this approach for documentation:

always up-to-date and versioned
easily diff-able (text files, e.g. with Markdown format)
respects the DRY principle (with the SCM as its golden source)
easily browsable by everyone (DEV, QA, BA, support teams...) in the read-only and readable wiki-like web site
easily modifiable by team members in a well-known and official location (as easy as creating or modifying a text file in the SCM)

What's next?

This approach is nothing really new (think about LaTeX...), and many of the tools we need for it already exist (Markdown renderers, web sites to organize and display Gherkin scenarios...). However, I have never seen this approach fully applied in an actual project. Maybe your project is already doing that? Please share your feedback!

Reference: Collaborative Artifacts as Code from our JCG partner Cyrille Martraire at the Cyrille Martraire blog.

java.lang.ClassNotFoundException: How to resolve

This article is intended for Java beginners currently facing java.lang.ClassNotFoundException challenges. It provides an overview of this common Java exception, a sample Java program to support your learning process, and resolution strategies. If you are interested in more advanced class loader related problems, I recommend that you review my article series on java.lang.NoClassDefFoundError, since these Java exceptions are closely related.

java.lang.ClassNotFoundException: Overview

As per the Oracle documentation, ClassNotFoundException is thrown following the failure of a class loading call, using its string name, as per below:

- The Class.forName method
- The ClassLoader.findSystemClass method
- The ClassLoader.loadClass method

In other words, it means that one particular Java class was not found or could not be loaded at runtime from your application's current context class loader. This problem can be particularly confusing for Java beginners. This is why I always recommend that Java developers learn and refine their knowledge of Java class loaders. Unless you are involved in dynamic class loading and using the Java Reflection API, chances are that the ClassNotFoundException error you are getting comes not from your application code but from a referencing API. Another common problem pattern is wrong packaging of your application code. We will get back to the resolution strategies at the end of the article.

java.lang.ClassNotFoundException: Sample Java program

Now find below a very simple Java program which simulates the 2 most common ClassNotFoundException scenarios via Class.forName() & ClassLoader.loadClass(). Simply copy/paste and run the program with the IDE of your choice (the Eclipse IDE was used for this example). The Java program allows you to choose between problem scenario #1 and problem scenario #2 as per below. Simply change PROBLEM_SCENARIO to 1 or 2 depending on the scenario you want to study.
# Class.forName()
private static final int PROBLEM_SCENARIO = 1;

# ClassLoader.loadClass()
private static final int PROBLEM_SCENARIO = 2;

# ClassNotFoundExceptionSimulator

package org.ph.javaee.training5;

/**
 * ClassNotFoundExceptionSimulator
 * @author Pierre-Hugues Charbonneau
 */
public class ClassNotFoundExceptionSimulator {

   private static final String CLASS_TO_LOAD = "org.ph.javaee.training5.ClassA";
   private static final int PROBLEM_SCENARIO = 1;

   /**
    * @param args
    */
   public static void main(String[] args) {

      System.out.println("java.lang.ClassNotFoundException Simulator - Training 5");
      System.out.println("Author: Pierre-Hugues Charbonneau");
      System.out.println("http://javaeesupportpatterns.blogspot.com");

      switch (PROBLEM_SCENARIO) {

         // Scenario #1 - Class.forName()
         case 1:
            System.out.println("\n** Problem scenario #1: Class.forName() **\n");
            try {
               Class<?> newClass = Class.forName(CLASS_TO_LOAD);
               System.out.println("Class " + newClass + " found successfully!");
            } catch (ClassNotFoundException ex) {
               ex.printStackTrace();
               System.out.println("Class " + CLASS_TO_LOAD + " not found!");
            } catch (Throwable any) {
               System.out.println("Unexpected error! " + any);
            }
            break;

         // Scenario #2 - ClassLoader.loadClass()
         case 2:
            System.out.println("\n** Problem scenario #2: ClassLoader.loadClass() **\n");
            try {
               ClassLoader classLoader = Thread.currentThread().getContextClassLoader();
               Class<?> callerClass = classLoader.loadClass(CLASS_TO_LOAD);
               Object newClassAInstance = callerClass.newInstance();
               System.out.println("SUCCESS!: " + newClassAInstance);
            } catch (ClassNotFoundException ex) {
               ex.printStackTrace();
               System.out.println("Class " + CLASS_TO_LOAD + " not found!");
            } catch (Throwable any) {
               System.out.println("Unexpected error! " + any);
            }
            break;
      }

      System.out.println("\nSimulator done!");
   }
}

# ClassA

package org.ph.javaee.training5;

/**
 * ClassA
 * @author Pierre-Hugues Charbonneau
 */
public class ClassA {

   private final static Class<ClassA> CLAZZ = ClassA.class;

   static {
      System.out.println("Class loading of " + CLAZZ + " from ClassLoader '"
            + CLAZZ.getClassLoader() + "' in progress...");
   }

   public ClassA() {
      System.out.println("Creating a new instance of " + ClassA.class.getName() + "...");
      doSomething();
   }

   private void doSomething() {
      // Nothing to do...
   }
}

If you run the program as is, you will see the output below for each scenario:

#Scenario 1 output (baseline)

java.lang.ClassNotFoundException Simulator - Training 5
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com

** Problem scenario #1: Class.forName() **

Class loading of class org.ph.javaee.training5.ClassA from ClassLoader 'sun.misc.Launcher$AppClassLoader@bfbdb0' in progress...
Class class org.ph.javaee.training5.ClassA found successfully!

Simulator done!

#Scenario 2 output (baseline)

java.lang.ClassNotFoundException Simulator - Training 5
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com

** Problem scenario #2: ClassLoader.loadClass() **

Class loading of class org.ph.javaee.training5.ClassA from ClassLoader 'sun.misc.Launcher$AppClassLoader@2a340e' in progress...
Creating a new instance of org.ph.javaee.training5.ClassA...
SUCCESS!: org.ph.javaee.training5.ClassA@6eb38a

Simulator done!

For the "baseline" run, the Java program is able to load ClassA successfully. Now let's voluntarily change the full name of ClassA and re-run the program for each scenario.
The following output can be observed:

#ClassA changed to ClassB

private static final String CLASS_TO_LOAD = "org.ph.javaee.training5.ClassB";

#Scenario 1 output (problem replication)

java.lang.ClassNotFoundException Simulator - Training 5
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com

** Problem scenario #1: Class.forName() **

java.lang.ClassNotFoundException: org.ph.javaee.training5.ClassB
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:186)
	at org.ph.javaee.training5.ClassNotFoundExceptionSimulator.main(ClassNotFoundExceptionSimulator.java:29)
Class org.ph.javaee.training5.ClassB not found!

Simulator done!

#Scenario 2 output (problem replication)

java.lang.ClassNotFoundException Simulator - Training 5
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com

** Problem scenario #2: ClassLoader.loadClass() **

java.lang.ClassNotFoundException: org.ph.javaee.training5.ClassB
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
	at org.ph.javaee.training5.ClassNotFoundExceptionSimulator.main(ClassNotFoundExceptionSimulator.java:51)
Class org.ph.javaee.training5.ClassB not found!

Simulator done!

What happened? Since we changed the full class name to org.ph.javaee.training5.ClassB, that class was not found at runtime (it does not exist), causing both the Class.forName() and ClassLoader.loadClass() calls to fail. You can also replicate this problem by packaging each class of this program in its own JAR file and then omitting the JAR file containing ClassA.class from the main classpath. Please try this and see the results for yourself… (hint: NoClassDefFoundError). Now let's jump to the resolution strategies.

java.lang.ClassNotFoundException: Resolution strategies

Now that you understand this problem, it is time to resolve it. Resolution can be fairly simple or very complex depending on the root cause:

- Don't jump to complex root causes too quickly; rule out the simplest causes first.
- First review the java.lang.ClassNotFoundException stack trace as per the above and determine which Java class was not loaded properly at runtime, e.g. application code, a third-party API, the Java EE container itself etc.
- Identify the caller, e.g. the Java class you see in the stack trace just before the Class.forName() or ClassLoader.loadClass() call. This will help you understand whether your application code is at fault vs. a third-party API.
- Determine whether your application code is packaged properly, e.g. missing JAR file(s) from your classpath.
- If the missing Java class is not from your application code, identify whether it belongs to a third-party API used by your Java application. Once you identify it, you will need to add the missing JAR file(s) to your runtime classpath or web application WAR/EAR file.
- If you are still struggling after multiple resolution attempts, this could mean a more complex class loader hierarchy problem. In this case, please review my NoClassDefFoundError article series for more examples and resolution strategies.

I hope this article has helped you to understand and revisit this common Java exception.
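As a first diagnostic step when applying the resolution strategies above, it can help to print the runtime classpath and walk the class loader chain before attempting the load. Below is a minimal sketch of that idea; the class name ClasspathDiagnostics and the probed class com.acme.Missing are illustrative, not part of the original article:

```java
// ClasspathDiagnostics: a minimal sketch for debugging ClassNotFoundException.
public class ClasspathDiagnostics {

    // Returns true if the given class can be loaded by the current context class loader.
    // initialize = false avoids running static initializers during the probe.
    public static boolean isLoadable(String className) {
        try {
            Class.forName(className, false, Thread.currentThread().getContextClassLoader());
            return true;
        } catch (ClassNotFoundException ex) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 1. Print the classpath the JVM actually started with.
        System.out.println("Classpath: " + System.getProperty("java.class.path"));

        // 2. Walk the class loader chain, from the context loader up to the bootstrap loader.
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        while (loader != null) {
            System.out.println("Loader: " + loader);
            loader = loader.getParent();
        }

        // 3. Probe the class you expected to find (com.acme.Missing is a placeholder).
        System.out.println("java.lang.String loadable? " + isLoadable("java.lang.String"));
        System.out.println("com.acme.Missing loadable? " + isLoadable("com.acme.Missing"));
    }
}
```

Comparing the printed classpath against the JAR file that should contain the missing class usually narrows the problem down to packaging vs. class loader hierarchy.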
Please feel free to post any comment or question if you are still struggling with your java.lang.ClassNotFoundException problem.   Reference: java.lang.ClassNotFoundException: How to resolve from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog. ...

Invaluable books for an enterprise software engineer

I am again in the design phase of a very large project. The project's context is to provide a new solution for the core services and core integration infrastructure around the prepaid platform of the largest telecommunications organisation in Greece. This is the most intrinsic motivation for me: defining the essential architecture, spotting the tricky points of the requirements and providing a durable and efficient solution. There are two common ways to become a very skilled enterprise software engineer/architect. The first is to work hard with such skilled people, and the second is to read books that are really useful. I present here a list of books that have really helped me to construct/design enterprise-quality software. The order of the books is meaningless.

Development
- The Pragmatic Programmer: From Journeyman to Master. Authors: Andrew Hunt, David Thomas
- Java Concurrency in Practice. Authors: Brian Goetz with Tim Peierls, Joshua Bloch, Joseph Bowbeer, David Holmes, Doug Lea
- Effective Java Programming Language Guide. Author: Joshua Bloch
- Effective Java, 2nd Edition. Author: Joshua Bloch

Integration
- Patterns of Enterprise Application Architecture. Authors: Martin Fowler with Dave Rice, Matthew Foemmel, Edward Hieatt, Robert Mee, and Randy Stafford
- Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions. Authors: Gregor Hohpe, Bobby Woolf
- Design Patterns: Elements of Reusable Object-Oriented Software. Authors: Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides
- Service-Oriented Architecture: Concepts, Technology, and Design. Author: Thomas Erl (only a few chapters, which are really valuable)
- SOA Principles of Service Design. Author: Thomas Erl (only a few chapters, which are really valuable)

General
- The Mythical Man-Month. Author: Frederick Brooks

This is not a closed list, but it contains most of the books that I really like (for now).
I know that there are newer books, which are great too, and also new languages that are really very promising, but these are the books that really helped me become what I am. What am I really? Maybe the book I am reading now can help you and me with this definition. The book I am reading at the moment is:

- Synthetic Overview of the Collaborative Economy. Authors: Michel Bauwens, Franco Iacomella, Nicolas Mendoza, James Burke, Chris Pinchen, Antonin Leonard, Edwin Mootoosamy

I am a fan of the Free Software and Open Source ideas. These ideas are not restricted to software development, but extend to many areas outside the software field (check Open Source Ecology). I am not trying to advertise these books, their authors or their publishers. I just respect their work, and that is why I mention these books in this blog.   Reference: Invaluable books for an enterprise software engineer from our JCG partner Adrianos Dadis at the Java, Integration and the virtues of source blog. ...

Health Checks, Run-time Asserts and Monkey Armies

After going live, we started building health checks into the system: run-time checks on operational dependencies and status to ensure that the system is set up and running correctly. Over time we have continued to add more run-time checks and tests as we have run into problems, to help make sure that these problems don’t happen again.

This is more than pings and Nagios alerts. This is testing that we installed the right code and configuration across systems. Checking code build version numbers and database schema versions. Checking signatures and checksums on files. Checking that flags and switches that are supposed to be turned on or off are actually on or off. Checking in advance for expiry dates on licenses, keys and certs. Sending test messages through the system.

Checking alert and notification services, making sure that they are running, that other services that are supposed to be running are running, and that services that aren’t supposed to be running aren’t running. That ports that are supposed to be open are open, and ports that are supposed to be closed are closed. Checks to make sure that files and directories that are supposed to be there are there, that files and directories that aren’t supposed to be there aren’t, and that tables that are supposed to be empty are empty. That permissions are set correctly on control files and directories.

Checks on database status and configuration. Checks to make sure that production and test settings are production and test, not test and production. Checking that diagnostics and debugging code has been disabled. Checks for starting and ending record counts and sequence numbers. Checking artefacts from “jobs” (result files, control records, log file entries) and ensuring that cleanup and setup tasks completed successfully. Checks for run-time storage space.
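A few of the checks above can be sketched as plain Java methods that return unambiguous pass/fail results. This is a minimal, hypothetical sketch, not the author's code; the class name, the chosen checks and the version string are illustrative:

```java
import java.io.File;
import java.io.IOException;
import java.net.ServerSocket;

// HealthChecks: a sketch of startup checks with unambiguous results (ok or not ok).
public class HealthChecks {

    // A directory that is supposed to be there is there (and is a directory).
    public static boolean directoryExists(String path) {
        File dir = new File(path);
        return dir.exists() && dir.isDirectory();
    }

    // A port that is supposed to be closed is closed: if we can bind it, nothing is listening.
    public static boolean portIsFree(int port) {
        try (ServerSocket socket = new ServerSocket(port)) {
            return true;
        } catch (IOException ex) {
            return false;
        }
    }

    // The build version that is supposed to be deployed is the one actually deployed.
    public static boolean versionMatches(String expected, String actual) {
        return expected.equals(actual);
    }

    public static void main(String[] args) {
        boolean ok = directoryExists(System.getProperty("java.io.tmpdir"))
                && versionMatches("1.4.2", "1.4.2"); // expected vs. deployed (illustrative)
        // Fail loudly: a health check that needs interpretation is not a health check.
        System.out.println(ok ? "HEALTH CHECK PASSED" : "HEALTH CHECK FAILED");
    }
}
```

In practice each check would be one entry in a startup suite that blocks the release or pages someone on failure, which is exactly the checklist-replacement idea described above.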
We run these health checks at startup, or sometimes early before startup, after a release or upgrade, and after a failover, to catch mistakes, operational problems and environmental problems. These are tests that need to run quickly and return unambiguous results (things are ok or they’re not). They can be simple scripts that run in production, or internal checks and diagnostics in the application code, although scripts are easier to adapt and extend. Some require hooks to be added to the application, like JMX.

Run-time Asserts

Other companies like Etsy do something similar with run-time asserts, using a unit test approach to check for conditions that must be in place for the system to work properly. These tests can (and should) be run on development and test systems too, to make sure that the run-time environments are correct. The idea is to get away from checks being done by hand, operational checklists, calendar reminders and manual tests. Anything that has a dependency, anything that needs a manual check or test, anything in an operational checklist should have an automated run-time check instead.

Monkey Armies

The same ideas are behind Netflix’s over-hyped (though not always by Netflix) Simian Army, a set of robots that not only check for run-time conditions, but sometimes also take automatic action when run-time conditions are violated, or even violate run-time conditions to test that the system will still run correctly. The army includes Security Monkey, which checks for improperly configured security groups, firewall rules, expiring certs and so on; and Exploit Monkey, which automatically scans new instances for vulnerabilities when they are brought up. Run-time checking is taken to an extreme in Conformity Monkey, which shuts down services that don’t adhere to established policies, and the famous Chaos Monkey, which automatically forces random failures on systems, in test and in production.
It’s surprising how much attention Chaos Monkey gets – maybe it’s the cool name, or because Netflix has Open Sourced it along with some of their other monkeys. Sure it’s ballsy to test failover in production by actually killing off systems during the day, even if they are stateless VM instances which by design should failover without problems (although this is the point, to make sure that they really do failover without problems like they are supposed to). There’s more to Netflix’s success than run-time fault injection and the other monkeys. Still, automatically double-checking as much as you can at run-time is especially important in an engineering-driven, rapidly-changing Devops or Noops environment where developers are pushing code into production too fast to properly understand and verify in advance. But whether you are continuously deploying changes to production (like Etsy and Netflix) or not, getting developers and ops and infosec together to write automated health checks and run-time tests is an important part of getting control over what’s actually happening in the system and keeping it running reliably.   Reference: Health Checks, Run-time Asserts and Monkey Armies from our JCG partner Jim Bird at the Building Real Software blog. ...

become/unbecome – discovering Akka

Sometimes our actor needs to react differently based on its internal state. Typically, receiving some specific message causes a state transition which, in turn, changes the way subsequent messages should be handled. Another message restores the original state and thus the way messages were handled before. In the previous article we implemented the RandomOrgBuffer actor based on a waitingForResponse flag. It unnecessarily complicated the already complex message handling logic:

var waitingForResponse = false

def receive = {
  case RandomRequest =>
    preFetchIfAlmostEmpty()
    if (buffer.isEmpty) {
      backlog += sender
    } else {
      sender ! buffer.dequeue()
    }
  case RandomOrgServerResponse(randomNumbers) =>
    buffer ++= randomNumbers
    waitingForResponse = false
    while (!backlog.isEmpty && !buffer.isEmpty) {
      backlog.dequeue() ! buffer.dequeue()
    }
    preFetchIfAlmostEmpty()
}

private def preFetchIfAlmostEmpty() {
  if (buffer.size <= BatchSize / 4 && !waitingForResponse) {
    randomOrgClient ! FetchFromRandomOrg(BatchSize)
    waitingForResponse = true
  }
}

Wouldn’t it be simpler to have two distinct receive methods: one used while we are waiting for the external server response (waitingForResponse == true), and the other when the buffer is filled sufficiently and no request to random.org has yet been issued? In such circumstances the become() and unbecome() methods come in very handy. By default, the receive method is used to handle all incoming messages. However, at any time we can call become(), which accepts any method compliant with the receive signature as an argument. Every subsequent message will be handled by this new method. Calling unbecome() restores the original receive method.
Knowing this technique, we can refactor our solution above to the following:

def receive = {
  case RandomRequest =>
    preFetchIfAlmostEmpty()
    handleOrQueueInBacklog()
}

def receiveWhenWaiting = {
  case RandomRequest =>
    handleOrQueueInBacklog()
  case RandomOrgServerResponse(randomNumbers) =>
    buffer ++= randomNumbers
    context.unbecome()
    while (!backlog.isEmpty && !buffer.isEmpty) {
      backlog.dequeue() ! buffer.dequeue()
    }
    preFetchIfAlmostEmpty()
}

private def handleOrQueueInBacklog() {
  if (buffer.isEmpty) {
    backlog += sender
  } else {
    sender ! buffer.dequeue()
  }
}

private def preFetchIfAlmostEmpty() {
  if (buffer.size <= BatchSize / 4) {
    randomOrgClient ! FetchFromRandomOrg(BatchSize)
    context become receiveWhenWaiting
  }
}

We extracted the code responsible for handling messages while we wait for the random.org response into a separate receiveWhenWaiting method. Notice the become() and unbecome() calls: they replaced the no longer needed waitingForResponse flag. Instead we simply say: starting from the next message, please use this other method for handling (become a slightly different actor). Later we say: OK, let’s go back to the original state and receive messages as you used to (unbecome). But the most important change is the transition from one big method into two much smaller and better-named ones. The become() and unbecome() methods are actually much more powerful, since internally they maintain a stack of receiving methods. Every call to become() (with discardOld = false as a second parameter) pushes the current receiving method onto a stack, while unbecome() pops it and restores the previous one. Thus we can use become() to switch between several receiving methods and then gradually go back through all the changes. Moreover, Akka also supports the finite state machine pattern, but more on that maybe in the future. Source code for this article is available on GitHub in the become-unbecome tag. This was a translation of my article “Poznajemy Akka: become/unbecome”, originally published on scala.net.pl.   
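Outside of Akka, the same idea can be modelled in plain Java by keeping a stack of message handlers and always dispatching to the top one. This is a hypothetical sketch with no Akka dependency; all names (BecomeUnbecomeSketch, the "fetch"/"response" messages) are illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

// An Akka-free sketch of become()/unbecome(): the "actor" keeps a stack of
// message handlers and the current behavior is whatever sits on top.
public class BecomeUnbecomeSketch {

    private final Deque<Consumer<String>> handlers = new ArrayDeque<>();
    private final StringBuilder log = new StringBuilder();

    public BecomeUnbecomeSketch() {
        handlers.push(this::receive); // default handler, like Akka's receive
    }

    public void tell(String message) {
        handlers.peek().accept(message); // dispatch to the current behavior
    }

    private void become(Consumer<String> handler) {
        handlers.push(handler); // like become(..., discardOld = false)
    }

    private void unbecome() {
        if (handlers.size() > 1) {
            handlers.pop(); // restore the previous behavior
        }
    }

    private void receive(String message) {
        log.append("normal:").append(message).append(";");
        if (message.equals("fetch")) {
            become(this::receiveWhenWaiting); // switch state, no boolean flag needed
        }
    }

    private void receiveWhenWaiting(String message) {
        log.append("waiting:").append(message).append(";");
        if (message.equals("response")) {
            unbecome(); // back to the normal behavior
        }
    }

    public String log() {
        return log.toString();
    }
}
```

Sending "fetch", "x", "response", "y" in that order logs normal:fetch; waiting:x; waiting:response; normal:y; which is exactly the flag-free state transition the article describes (minus the mailbox, supervision and concurrency that a real actor gives you).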
Reference: become/unbecome – discovering Akka from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog. ...

JUnit4 Parameterized and Theories Examples

I always relied on TestNG to pass parameters to test methods in order to give a bit of flexibility to my tests or suites. However, the same flexibility can be achieved using JUnit4. Using it is simple:

package com.marco.test;

import java.util.Arrays;
import java.util.Collection;

import junit.framework.Assert;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class ParameterizedTest {

    @Parameters
    public static Collection data() {
        return Arrays.asList(new Object[][] {
            /* Sport            Nation       Year  TotWinners */
            { "basket",         "usa",       2002,  5 },
            { "soccer",         "argentina", 2003,  2 },
            { "tennis",         "spain",     2004, 10 },
            { "chess",          "ireland",   2005,  0 },
            { "eatingbananas",  "italy",     2006, 20 }
        });
    }

    private final String sport;
    private final String nation;
    private final int year;
    private final int totWinners;

    public ParameterizedTest(String sport, String nation, int year, int totWinners) {
        this.sport = sport;
        this.nation = nation;
        this.year = year;
        this.totWinners = totWinners;
    }

    @Test
    public void test() {
        Assert.assertTrue(isDataCorrect(sport, nation, year, totWinners));
    }

    private boolean isDataCorrect(String sport2, String nation2, int year2, int totWinners2) {
        return true;
    }
}

JUnit will create an instance of the ParameterizedTest class and run the test() method (or any method marked with @Test) for each row defined in the static collection.

Theories

This is another interesting feature of JUnit4 that I like.
You use Theories in JUnit4 to test combinations of inputs using the same test method:

package com.marco.test;

import static org.hamcrest.CoreMatchers.is;

import java.math.BigDecimal;

import org.junit.Assert;
import org.junit.Assume;
import org.junit.experimental.theories.DataPoint;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class TheoryTest {

    @DataPoint
    public static int MARKET_FIRST_GOALSCORERE_ID = 2007;

    @DataPoint
    public static int MARKET_WDW_ID = 2008;

    @DataPoint
    public static BigDecimal PRICE_BD = new BigDecimal(6664.0);

    @DataPoint
    public static double PRICE_1 = 0.01;

    @DataPoint
    public static double PRICE_2 = 100.0;

    @DataPoint
    public static double PRICE_3 = 13999.99;

    @Theory
    public void lowTaxRateIsNineteenPercent(int market_id, double price) {
        Assume.assumeThat(market_id, is(2008));
        Assume.assumeThat(price, is(100.0));
        // run your test
        Assert.assertThat(price, is(100.0));
    }

    @Theory
    public void highTaxRateIsNineteenPercent(int market_id, double price) {
        Assume.assumeThat(market_id, is(2007));
        Assume.assumeThat(price, is(13999.99));
        Assert.assertThat(price, is(13999.99));
    }

    @Theory
    public void highTaxRateIsNineteenPercent(int market_id, BigDecimal price) {
        Assume.assumeThat(market_id, is(2007));
        Assert.assertThat(price, is(BigDecimal.valueOf(6664)));
    }
}

This time you need to mark the test class with @RunWith(Theories.class) and use @DataPoint to define the properties that you want to test. JUnit will call the methods marked as @Theory using all the possible combinations, based on the data points provided and the type of each method parameter. The PRICE_BD data point will be used only in the last method, the only one accepting a BigDecimal in its parameters. Only parameters that satisfy the Assume.assumeThat() condition will make it through to the assert test. The combinations that don’t satisfy the Assume.assumeThat() condition are ignored silently.   
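Conceptually, the Theories runner just enumerates every combination of type-compatible data points and skips the ones that fail an assumption. A hypothetical plain-Java sketch of that enumeration (no JUnit involved; the class and data values mirror the example above but the code is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// A sketch of what the Theories runner does conceptually: try every
// combination of compatible data points, silently skip the ones that
// fail an "assumption", and run the test body on the rest.
public class TheoryCombinations {

    static final int[] MARKET_IDS = {2007, 2008};
    static final double[] PRICES = {0.01, 100.0, 13999.99};

    // Returns the (market_id, price) combinations that survive the assumptions
    // of the lowTaxRateIsNineteenPercent theory above.
    public static List<String> lowTaxCombinations() {
        List<String> accepted = new ArrayList<>();
        for (int marketId : MARKET_IDS) {
            for (double price : PRICES) {
                // Equivalent of Assume.assumeThat(market_id, is(2008))
                // and Assume.assumeThat(price, is(100.0)).
                if (marketId != 2008 || price != 100.0) {
                    continue; // ignored silently, like a failed assumption
                }
                accepted.add(marketId + ":" + price); // the test body would run here
            }
        }
        return accepted;
    }

    public static void main(String[] args) {
        // 2 market ids x 3 prices = 6 combinations tried, only 1 survives.
        System.out.println(lowTaxCombinations()); // prints [2008:100.0]
    }
}
```

This also explains why a theory with no surviving combinations silently passes: every invocation was skipped, and no assert ever ran.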
Reference: JUnit4 Parameterized and Theories from our JCG partner Marco Castigliego at the Remove duplication and fix bad names blog. ...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.