
Ensuring the order of execution for tasks

Sometimes it is necessary to impose a certain order on the tasks in a threadpool. Issue 206 of the JavaSpecialists newsletter presents one such case: we have multiple connections from which we read using NIO. We need to ensure that events from a given connection are executed in order, but events from different connections can be freely mixed.

I would like to present a similar but slightly different situation: we have N clients. We would like to execute events from a given client in the order they were submitted, but events from different clients can be mixed freely. Also, from time to time, there are 'rollup' tasks which involve more than one client. Such tasks should block the tasks for all involved clients (but not more!). Let's see a diagram of the situation:

As you can see, tasks from client A and client B are happily processed in parallel until a 'rollup' task comes along. At that point no more tasks of type A or B can be processed, but an unrelated task C can still be executed (provided that there are enough threads). The skeleton of such an executor is available in my repository. The centerpiece is the following interface:

public interface OrderedTask extends Runnable {
    boolean isCompatible(OrderedTask that);
}

Using this interface the threadpool decides whether two tasks may be run in parallel (A and B can run in parallel if A.isCompatible(B) && B.isCompatible(A)). These methods should be implemented in a fast, non-locking and time-invariant manner. The algorithm behind this threadpool is as follows:

- If the task to be added doesn't conflict with any existing tasks, add it to the thread with the fewest elements.
- If it conflicts with elements from exactly one thread, schedule it to be executed on that thread (and implicitly after the conflicting elements, which ensures that the order of submission is maintained).
- If it conflicts with multiple threads, add tasks (shown in red below) to all but the first of them, on which a task on the first thread will wait; after they complete, it will execute the original task.

More information about the implementation:

- The code is only a proof of concept; more work would be needed to make it production quality (it needs code for exception handling in tasks, proper shutdown, etc.).
- For maximum performance it uses lock-free* structures where available: each worker thread has an associated ConcurrentLinkedQueue. To achieve the sleep-until-work-is-available semantics, an additional Semaphore is used.**
- To be able to compare a new OrderedTask with currently executing ones, a copy of their references is kept. This list of copies is updated whenever new elements are enqueued (this has the potential for memory leaks, and if tasks are infrequent enough, alternatives – like an additional timer for weak references – should be investigated).
- Compared to the solution in the JavaSpecialists newsletter, this is more similar to a fixed thread pool executor, while the solution from the newsletter is similar to a cached thread pool executor. This implementation is ideal if (a) the tasks are (mostly) short and (mostly) uniform and (b) there are few (one or two) threads submitting new tasks, since multiple submissions are mutually exclusive (but submission and execution aren't).
- If, immediately after a 'rollup' is submitted (and before it can be executed), tasks of the same kind are submitted, they will unnecessarily be forced onto one thread. We could add code to rearrange tasks after the rollup task finishes if this becomes an issue.

Have fun with the source code! (Maybe some day I'll find the time to remove all the rough edges.)
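To make the contract concrete, here is one hypothetical way the OrderedTask interface could be implemented for the client scenario above; ClientTask and its client-id tagging are my own illustration, not code from the article's repository.

```java
import java.util.Collections;
import java.util.Set;

interface OrderedTask extends Runnable {
    boolean isCompatible(OrderedTask that);
}

// Hypothetical sketch: a task tagged with the set of client ids it touches.
// A normal task carries one id; a 'rollup' task carries several. Two tasks
// are compatible iff their client sets do not intersect, so a rollup
// automatically conflicts with every involved client's tasks.
class ClientTask implements OrderedTask {
    private final Set<String> clientIds;
    private final Runnable work;

    ClientTask(Set<String> clientIds, Runnable work) {
        this.clientIds = Set.copyOf(clientIds); // immutable: fast, thread-safe lookups
        this.work = work;
    }

    @Override
    public boolean isCompatible(OrderedTask that) {
        if (!(that instanceof ClientTask)) return false; // unknown task type: be conservative
        Set<String> other = ((ClientTask) that).clientIds;
        return Collections.disjoint(clientIds, other);   // no shared client => may run in parallel
    }

    @Override
    public void run() {
        work.run();
    }
}
```

Note the check is non-locking and time-invariant, as the article requires: it inspects only immutable state.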
* Somewhat of a misnomer, since there are still locks, only at a lower – CPU, not OS – level, but this is the accepted terminology.
** Benchmarking indicated this to be the most performant solution. It was inspired by the implementation of ThreadPoolExecutor.

Reference: Ensuring the order of execution for tasks from our JCG partner Attila-Mihaly Balazs at the Java Advent Calendar blog.

Can synchronization be optimised away?

Overview

There is a common misconception that, because the JIT is smart and synchronization can be eliminated for an object which is only local to a method, there is no performance impact.

A test comparing StringBuffer and StringBuilder

These two classes do basically the same thing, except one is synchronized (StringBuffer) and the other is not. It is also a class which is often used within a single method to build a String. The following test attempts to determine how much difference using one or the other can make.

static String dontOptimiseAway = null;
static String[] words = new String[100000];

public static void main(String... args) {
    for (int i = 0; i < words.length; i++)
        words[i] = Integer.toString(i);

    for (int i = 0; i < 10; i++) {
        dontOptimiseAway = testStringBuffer();
        dontOptimiseAway = testStringBuilder();
    }
}

private static String testStringBuffer() {
    long start = System.nanoTime();
    StringBuffer sb = new StringBuffer();
    for (String word : words) {
        sb.append(word).append(',');
    }
    String s = sb.substring(0, sb.length() - 1);
    long time = System.nanoTime() - start;
    System.out.printf("StringBuffer: took %d ns per word%n", time / words.length);
    return s;
}

private static String testStringBuilder() {
    long start = System.nanoTime();
    StringBuilder sb = new StringBuilder();
    for (String word : words) {
        sb.append(word).append(',');
    }
    String s = sb.substring(0, sb.length() - 1);
    long time = System.nanoTime() - start;
    System.out.printf("StringBuilder: took %d ns per word%n", time / words.length);
    return s;
}

At the end this prints, with -XX:+DoEscapeAnalysis using Java 7 update 10:

StringBuffer: took 69 ns per word
StringBuilder: took 32 ns per word
StringBuffer: took 88 ns per word
StringBuilder: took 26 ns per word
StringBuffer: took 62 ns per word
StringBuilder: took 25 ns per word

Testing with one million words doesn't change the results significantly.
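The misconception rests on lock elision: in principle, a monitor object that never escapes a method can have its synchronization removed entirely by escape analysis. A minimal sketch of the pattern in question (illustrative only; the class and method names are mine, not from the article):

```java
// Illustrative only: 'lock' is purely local and never published, so in
// principle the JVM's escape analysis could elide the synchronized block
// entirely. The measurements above show that, in practice, the cost of
// StringBuffer's synchronization remains measurable anyway.
public class LockElisionCandidate {
    static int counter;

    static void increment() {
        Object lock = new Object();  // never escapes this method
        synchronized (lock) {        // a textbook candidate for lock elision
            counter++;
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            increment();
        }
        System.out.println(counter);
    }
}
```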
Conclusion

While the cost of using synchronization is small, it is measurable, and if you can use StringBuilder it is preferred, as the Javadocs for the class state. In theory synchronization can be optimised away, but it has yet to happen even in simple cases like these.

Reference: Can synchronization be optimised away? from our JCG partner Peter Lawrey at the Vanilla Java blog.

JAXB – Representing Null and Empty Collections

Demo Code

The following demo code will be used for all the different versions of the Java model. It simply sets one collection to null, the second to an empty list, and the third to a populated list.

package blog.xmlelementwrapper;

import java.util.ArrayList;
import javax.xml.bind.*;

public class Demo {

    public static void main(String[] args) throws Exception {
        JAXBContext jc = JAXBContext.newInstance(Root.class);

        Root root = new Root();
        root.nullCollection = null;
        root.emptyCollection = new ArrayList<String>();
        root.populatedCollection = new ArrayList<String>();
        root.populatedCollection.add("foo");
        root.populatedCollection.add("bar");

        Marshaller marshaller = jc.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(root, System.out);
    }

}

Mapping #1 – Default

JAXB models do not require any annotations (see JAXB – No Annotations Required). First we will look at the default behaviour for collection properties.

package blog.xmlelementwrapper;

import java.util.List;
import javax.xml.bind.annotation.*;

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Root {

    List<String> nullCollection;

    List<String> emptyCollection;

    List<String> populatedCollection;

}

Examining the output we see that the output corresponding to the nullCollection and emptyCollection fields is the same. This means that with the default mapping we can't round-trip the instance. For the unmarshal use case, the values of the nullCollection and emptyCollection fields will be whatever the class initialized them to (null in this case).

<?xml version='1.0' encoding='UTF-8'?>
<root>
    <populatedCollection>foo</populatedCollection>
    <populatedCollection>bar</populatedCollection>
</root>

Mapping #2 – @XmlElementWrapper

The @XmlElementWrapper annotation is used to add a grouping element around the contents of a collection.
In addition to changing the appearance of the XML representation, it also allows us to distinguish between null and empty collections.

package blog.xmlelementwrapper;

import java.util.List;
import javax.xml.bind.annotation.*;

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Root {

    @XmlElementWrapper
    List<String> nullCollection;

    @XmlElementWrapper
    List<String> emptyCollection;

    @XmlElementWrapper
    List<String> populatedCollection;

}

The representation for the null collection remains the same: it is absent from the XML document. For an empty collection, only the grouping element is marshalled out. Since the representations for null and empty are different, we can round-trip this use case.

<?xml version='1.0' encoding='UTF-8'?>
<root>
    <emptyCollection/>
    <populatedCollection>
        <populatedCollection>foo</populatedCollection>
        <populatedCollection>bar</populatedCollection>
    </populatedCollection>
</root>

Mapping #3 – @XmlElementWrapper(nillable=true)

The nillable property on the @XmlElementWrapper annotation can be used to change the XML representation of null collections.

package blog.xmlelementwrapper;

import java.util.List;
import javax.xml.bind.annotation.*;

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Root {

    @XmlElementWrapper(nillable=true)
    List<String> nullCollection;

    @XmlElementWrapper(nillable=true)
    List<String> emptyCollection;

    @XmlElementWrapper(nillable=true)
    List<String> populatedCollection;

}

Now the grouping element is present for all three fields. The xsi:nil attribute is used to indicate that the nullCollection field was null. Like the previous mapping, this one can be round-tripped.
<?xml version='1.0' encoding='UTF-8'?>
<root>
    <nullCollection xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:nil='true'/>
    <emptyCollection/>
    <populatedCollection>
        <populatedCollection>foo</populatedCollection>
        <populatedCollection>bar</populatedCollection>
    </populatedCollection>
</root>

Reference: JAXB – Representing Null and Empty Collections from our JCG partner Blaise Doughan at the Java XML & JSON Binding blog.

A simple Groovy issue tracker using file system

It would be chaos not to track bugs and feature requests when you are developing software. Having even a simple issue tracker makes managing the project much more successful. Now I like simple stuff, and I think for a small project, having this tracker right inside the source control repository (especially with a DVCS like Mercurial/Git etc.) is not only doable, but very convenient as well. You don't have to go crazy with all the fancy features; just enough to track issues is fine. I would like to propose this layout for you. Let's say you have a project that looks like this:

project
+- src/main/java/Hello.java
+- issues/issue-001.md
+- pom.xml

All I need is a simple directory, issues, to get going. Now I have a place to track my issues! The first issue, issue-001.md, should be what your project is about. For example:

/id=issue-001
/createdon=2012-12-16 18:07:08
/type=bug
/status=new
/resolution=
/from=Zemian
/to=
/found=
/fixed=
/subject=A simple Java Hello program

# Updated on 2012-12-16 18:07:08

We want to create a Maven based Hello world program. It should print 'Hello World.'

I chose .md as the file extension, intending to write comments in Markdown format. Since it's a text file, you do what you want. To be more structured, I have added some header metadata for issue tracking. Let's define some here. I would propose these headers and formats:

/id=issue-<NUM>
/createdon=<TIMESTAMP>
/type=feature|bug|question
/status=new|reviewing|working|onhold|testing|resolved
/resolution=fixed|rejected|duplicated
/from=<REPORTER_FROM_NAME>
/to=<ASSIGNEE_TO_NAME>
/found=<VERSION_FOUND>
/fixed=<VERSION_FIXED>

That should cover most bug and feature development issues. It's not cool to write software without a history of changes, including these issue files, so let's use a source control. I highly recommend Mercurial (hg). You can create and initialize a new repository like this:
bash> cd project
bash> hg init
bash> hg add
bash> hg commit -m 'My hello world project'

Now your project is created and we have a place to track your issues. It's just simple text files, so use your favorite text editor and edit away. However, creating a new issue with those header tags is boring. It would be nice to have a script that manages it a little. I have a Groovy script, issue.groovy (see the end of this article), that lets you run reports and create new issues. You can add this script into your project/issues directory and instantly start creating new issues and querying reports! Here is an example output on my PC:

bash> cd project
bash> groovy scripts/issue.groovy

Searching for issues with /status!=resolved
Issue: /id=issue-001 /status=new /subject=A simple Java Hello program
1 issues found.

bash> groovy scripts/issue.groovy --new /type=feature /subject='Add a unit test.'

project/issues/issue-002.md created.
/id=issue-002
/createdon=2012-12-16 19:10:00
/type=feature
/status=new
/resolution=
/from=Zemian
/to=
/found=
/fixed=
/subject=Add a unit test.

bash> groovy scripts/issue.groovy

Searching for issues with /status!=resolved
Issue: /id=issue-001 /status=new /subject=A simple Java Hello program
Issue: /id=issue-002 /status=new /subject=Add a unit test.
2 issues found.

bash> groovy scripts/issue.groovy --details /id=002

Searching for issues with /id=002
Issue: /id=issue-002
    /createdon=2012-12-16 19:10:00 /found= /from=Zemian /resolution=
    /status=new /type=feature
    /subject=Add a unit test.
1 issues found.

bash> groovy scripts/issue.groovy --update /id=001 /status=resolved /resolution=fixed 'I fixed this thang.'

Updating issue /id=issue-001
Updating /status=resolved
Updating /resolution=fixed

Update issue-001 completed.

The script gives you a quick and consistent way to create, update and search issues. But they are just plain text files! You can just as well fire up your favorite text editor and change anything you want.
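The header block these files carry is trivial to parse programmatically. Here is a sketch in Java of the logic issue.groovy uses (the class name IssueHeaders is mine; the script itself does the equivalent in Groovy with a regex):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: read the leading '/name=value' header block of an
// issue file. Like the Groovy script, it stops at the first non-header line
// encountered after at least one header has been read.
public class IssueHeaders {
    public static Map<String, String> parse(String fileText) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (String line : fileText.split("\r?\n")) {
            if (line.matches("^/\\w+=.*$")) {
                int eq = line.indexOf('=');
                headers.put(line.substring(0, eq), line.substring(eq + 1));
            } else if (!headers.isEmpty()) {
                break; // header block is over
            }
        }
        return headers;
    }
}
```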
Save it and even commit it into your source repository; nothing will be lost. Here is my issue.groovy script:

#!/usr/bin/env groovy
//
// A groovy script to manage issue files and its metadata/headers.
//
// Created by Zemian Deng <saltnlight5@gmail.com> 12/2012 v1.0.1
//
// Usage:
//   bash> groovy [java_opts] issue.groovy [option] [/header_name=value...] [arguments]
//
// Examples:
//   # Report all issues that match headers (we support RegEx!)
//   bash> groovy issue /resolution=fixed
//   bash> groovy issue /status!=onhold
//   bash> groovy issue '/subject=Improve UI|service'
//   bash> groovy issue --details /status=resolved
//
//   # Create a new bug issue file.
//   bash> groovy issue --new /type=bug /to=zemian /found=v1.0.1 /subject='I found some problem.' 'More details here.'
//
//   # Update an issue
//   bash> groovy issue --update /id=issue-001 /status=resolved /resolution=fixed 'I fixed this issue with Z algorithm.'
//
// Be careful about the following notes:
//   * Ensure your filename issue id matches the /id header or your search may not work!
//   * You need to quote the entire header if it contains spaces, such as 'key=space value'
//
class issue {
    def ISSUES_HEADERS = ['/id', '/createdon', '/type', '/status', '/resolution', '/from', '/to', '/found', '/fixed', '/subject']
    def ISSUES_HEADERS_VALS = [
        '/type'       : ['feature', 'bug', 'question'] as Set,
        '/status'     : ['new', 'reviewing', 'working', 'onhold', 'testing', 'resolved'] as Set,
        '/resolution' : ['fixed', 'rejected', 'duplicated'] as Set
    ]
    def issuesDir = new File(System.getProperty("issuesDir", getDefaultIssuesDir()))
    def issuePrefix = System.getProperty("issuePrefix", 'issue')
    def arguments = []  // script arguments after parsing
    def options = [:]   // script options after parsing
    def headers = [:]   // user input issue headers

    static void main(String[] args) {
        new issue().run(args)
    }

    // Method declarations
    def run(String[] args) {
        // Parse and save options, arguments and headers vars
        args.each { arg ->
            def append = true
            if (arg =~ /^--{0,1}\w+/) {
                options[arg] = true
                append = false
            } else {
                def pos = arg.indexOf('=')
                if (pos >= 1 && arg.length() > pos) {
                    def name = arg.substring(0, pos)
                    def value = arg.substring(pos + 1)
                    headers.put(name, value)
                    append = false
                }
            }
            if (append) {
                arguments << arg
            }
        }

        // support short option flag
        if (options['-d']) options['--details'] = true

        // Run script depending on options passed
        if (options['--help'] || options['-h']) {
            printHelp()
        } else if (options['--new'] || options['-n']) {
            createIssue()
        } else if (options['--update'] || options['-u']) {
            updateIssue()
        } else {
            reportIssues()
        }
    }

    def printHelp() {
        new File(getClass().protectionDomain.codeSource.location.path).withReader { reader ->
            def done = false
            def line = null
            while (!done && (line = reader.readLine()) != null) {
                line = line.trim()
                if (line.startsWith("#") || line.startsWith("//")) println(line)
                else done = true
            }
        }
    }

    def validateHeaders() {
        def headersSet = ISSUES_HEADERS.toSet()
        headers.each { name, value ->
            if (!headersSet.contains(name))
                throw new Exception("ERROR: Unknown header name $name.")
            if (ISSUES_HEADERS_VALS[name] != null && !(ISSUES_HEADERS_VALS[name].contains(value)))
                throw new Exception("ERROR: Unknown header $name=$value. Allowed: ${ISSUES_HEADERS_VALS[name].join(', ')}")
        }
    }

    def getDefaultIssuesDir() {
        return new File(getClass().protectionDomain.codeSource.location.path).parentFile.path
    }

    def getIssueIds() {
        def issueIds = []
        def files = issuesDir.listFiles()
        if (files == null) return issueIds
        files.each { f ->
            def m = f.name =~ /^(\w+-\d+)\.md$/
            if (m) issueIds << m[0][1]
        }
        return issueIds
    }

    def getIssueFile(String issueid) {
        return new File(issuesDir, "${issueid}.md")
    }

    def reportIssues() {
        def userHeaders = new HashMap(headers)
        if (userHeaders.size() == 0) userHeaders['/status!'] = 'resolved'
        def headersLine = userHeaders.sort { a, b -> a.key <=> b.key }.collect { k, v -> "$k=$v" }.join(', ')
        println "Searching for issues with $headersLine"
        def count = 0
        getIssueIds().each { issueid ->
            def file = getIssueFile(issueid)
            def issueHeaders = [:]
            file.withReader { reader ->
                def done = false
                def line = null
                while (!done && (line = reader.readLine()) != null) {
                    if (line =~ /^\/\w+=.*$/) {
                        def words = line.split('=')
                        if (words.length >= 2) {
                            issueHeaders.put(words[0], words[1..-1].join('='))
                        }
                    } else if (issueHeaders.size() > 0) {
                        done = true
                    }
                }
            }
            def match = userHeaders.findAll { k, v ->
                if (k.endsWith("!"))
                    (issueHeaders[k.substring(0, k.length() - 1)] =~ /${v}/) ? false : true
                else
                    (issueHeaders[k] =~ /${v}/) ? true : false
            }
            if (match.size() == userHeaders.size()) {
                def line = "Issue: /id=${issueHeaders['/id']}"
                if (options['--details']) {
                    def col = 4
                    def issueHeadersKeys = issueHeaders.keySet().sort() - ['/id', '/subject']
                    issueHeadersKeys.collate(col).each { set ->
                        line += "\n    " + set.collect { k -> "$k=${issueHeaders[k]}" }.join(" ")
                    }
                    line += "\n    /subject=${issueHeaders['/subject']}"
                } else {
                    line += " /status=${issueHeaders['/status']}" +
                            " /subject=${issueHeaders['/subject']}"
                }
                println line
                count += 1
            }
        }
        println "$count issues found."
    }

    def createIssue() {
        validateHeaders()
        if (headers['/status'] == 'resolved' && headers['/resolution'] == null)
            throw new Exception("You must provide /resolution after resolved an issue.")

        def ids = getIssueIds().collect { issueid -> issueid.split('-')[1].toInteger() }
        def nextid = ids.size() > 0 ? ids.max() + 1 : 1
        def issueid = String.format("${issuePrefix}-%03d", nextid)
        def file = getIssueFile(issueid)
        def createdon = new Date().format('yyyy-MM-dd HH:mm:ss')
        def newHeaders = [
            '/id'         : issueid,
            '/createdon'  : createdon,
            '/type'       : 'bug',
            '/status'     : 'new',
            '/resolution' : '',
            '/from'       : System.properties['user.name'],
            '/to'         : '',
            '/found'      : '',
            '/fixed'      : '',
            '/subject'    : 'A bug report'
        ]
        // Override newHeaders from user inputs
        headers.each { k, v -> newHeaders.put(k, v) }

        // Output to file
        file.withWriter { writer ->
            ISSUES_HEADERS.each { k -> writer.println("$k=${newHeaders[k]}") }
            writer.println()
            writer.println("# Updated on ${createdon}")
            writer.println()
            arguments.each {
                writer.println(it)
                writer.println()
            }
            writer.println()
        }

        // Output issue headers to STDOUT
        println "$file created."
        ISSUES_HEADERS.each { k -> println("$k=${newHeaders[k]}") }
    }

    def updateIssue() {
        validateHeaders()
        if (headers['/status'] == 'resolved' && headers['/resolution'] == null)
            throw new Exception("You must provide /resolution after resolved an issue.")

        def userHeaders = new HashMap(headers)
        userHeaders.remove('/createdon') // we should not update this field

        def issueid = userHeaders.remove('/id') // We will not re-update /id
        if (issueid == null)
            throw new Exception("Failed to update issue: missing /id value.")
        if (!issueid.startsWith(issuePrefix)) issueid = "${issuePrefix}-${issueid}"
        println("Updating issue /id=${issueid}")

        def file = getIssueFile(issueid)
        def newFile = new File(file.parentFile, "${file.name}.update.tmp")
        def hasUpdate = false
        def issueHeaders = [:]

        if (!file.exists())
            throw new Exception("Failed to update issue: file not found for /id=${issueid}")

        // Read and update issue headers
        file.withReader { reader ->
            // Read all issue headers first
            def done = false
            def line = null
            while (!done && (line = reader.readLine()) != null) {
                if (line =~ /^\/\w+=.*$/) {
                    def words = line.split('=')
                    if (words.length >= 2) {
                        issueHeaders.put(words[0], words[1..-1].join('='))
                    }
                } else if (issueHeaders.size() > 0) {
                    done = true
                }
            }

            // Find issue headers differences
            userHeaders.each { k, v ->
                if (issueHeaders[k] != v) {
                    println("Updating $k=$v")
                    issueHeaders[k] = v
                    if (!hasUpdate) hasUpdate = true
                }
            }

            // Update issue file
            if (hasUpdate) {
                newFile.withWriter { writer ->
                    ISSUES_HEADERS.each { k -> writer.println("${k}=${issueHeaders[k] ?: ''}") }
                    writer.println()

                    // Write/copy the rest of the file.
                    done = false
                    while (!done && (line = reader.readLine()) != null) {
                        writer.println(line)
                    }
                    writer.println()
                }
            }
        } // reader

        if (hasUpdate) {
            // Rename the new file back to orig
            file.delete()
            newFile.renameTo(file)
        }

        // Append any arguments as user comments
        if (arguments.size() > 0) {
            file.withWriterAppend { writer ->
                writer.println()
                writer.println("# Updated on ${new Date().format('yyyy-MM-dd HH:mm:ss')}")
                writer.println()
                arguments.each { text ->
                    writer.println(text)
                    writer.println()
                }
            }
        }

        println("Update $issueid completed.")
    }
}

Reference: A simple Groovy issue tracker using file system from our JCG partner Zemian Deng at the A Programmer's Journal blog.

Under the JVM hood – Classloaders

Classloaders are a low-level and often ignored aspect of the Java language among many developers. At ZeroTurnaround, our developers have had to live, breathe, eat, drink and almost get intimate with classloaders to produce the JRebel technology, which interacts at a classloader level to provide live runtime class reloading, avoiding lengthy rebuild/repackage/redeploy cycles. Here are some of the things we've learnt about classloaders, including some debugging tips which will hopefully save you time and potential headdesking in the future.

A classloader is just a plain java object

Yes, it's nothing clever. Well, other than the system classloader in the JVM, a classloader is just a java object! It's an abstract class, ClassLoader, which can be implemented by a class you create. Here is the API:

public abstract class ClassLoader {

    public Class loadClass(String name);

    protected Class defineClass(byte[] b);

    public URL getResource(String name);

    public Enumeration getResources(String name);

    public ClassLoader getParent()

}

Looks pretty straightforward, right? Let's take a look method by method. The central method is loadClass, which just takes a String class name and returns you the actual Class object. This is the method which, if you've used classloaders before, is probably the most familiar, as it's the most used in day-to-day coding. defineClass is a final method in the JVM that takes a byte array from a file or a location on the network and produces the same outcome: a Class object. A classloader can also find resources from a classpath. It works in a similar way to the loadClass method. There are a couple of methods, getResource and getResources, which return a URL or an Enumeration of URLs pointing to the resource that matches the name passed as input. Every classloader has a parent; getParent returns the classloader's parent, which is not Java-inheritance related, but rather a linked-list-style connection.
We will look into this in a little more depth later on. Classloaders are lazy, so classes are only ever loaded when they are requested at runtime. Classes are loaded by the classloader of the resource which invokes the class, so a class, at runtime, could be loaded by multiple classloaders depending on where it is referenced from and which classloader loaded the referencing classes... oops, I've gone cross-eyed! Let's look at some code.

public class A {
    public void doSmth() {
        B b = new B();
        b.doSmthElse();
    }
}

Here we have class A calling the constructor of class B within its doSmth method. Under the covers this is what is happening:

A.class.getClassLoader().loadClass("B");

The classloader which originally loaded class A is invoked to load the class B.

Classloaders are hierarchical, but like children, they don't always ask their parents

Every classloader has a parent classloader. When a classloader is asked for a class, it will typically go straight to the parent classloader first, calling loadClass, which may in turn ask its parent, and so on. If two classloaders with the same parent are asked to load the same class, it would only be done once, by the parent. It gets very troublesome when two classloaders load the same class separately, as this can cause problems which we'll look at later. When the JEE spec was designed, the web classloader was designed to work the opposite way – great. Let's take a look at the figure below as our example.

Module WAR1 has its own classloader and prefers to load classes itself rather than delegate to its parent, the classloader scoped by App1.ear. This means different WAR modules, like WAR1 and WAR2, cannot see each other's classes. The App1.ear module has its own classloader and is parent to the WAR1 and WAR2 classloaders. The App1.ear classloader is used by the WAR1 and WAR2 classloaders when they need to delegate a request up the hierarchy, i.e. a class is required outside of the WAR classloader scope.
Effectively the WAR classes override the EAR classes where both exist. Finally, the EAR classloader's parent is the container classloader. The EAR classloader will delegate requests to the container classloader, but it does not do it in the same way as the WAR classloader, as the EAR classloader will actually prefer to delegate up rather than prefer local classes. As you can see, this is getting quite hairy and is different from the plain JSE class loading behaviour.

The flat classpath

We talked about how the system classloader looks to the classpath to find classes that have been requested. This classpath could include directories or JAR files, and the order in which they are searched is actually dependent on the JVM you are using. There may be multiple copies or versions of the class you require on the classpath, but you will always get the first instance of the class found on the classpath. It's essentially just a list of resources, which is why it's referred to as flat. As a result the classpath list can often be relatively slow to iterate through when looking for a resource. Problems can occur when applications using the same classpath want to use different versions of a class; let's use Hibernate as an example. When two versions of Hibernate JARs exist on the classpath, one version cannot be higher up the classpath for one application than it is for the other, which means both will have to use the same version. One way around this is to bloat the application (WAR) with all the libraries necessary, so that they use their local resources, but this leads to big applications which are hard to maintain. Welcome to JAR hell! OSGi provides a solution here, as it allows versioning of JAR files, or bundles, which results in a mechanism to wire to particular versions of JAR files, avoiding the flat classpath problems.

How do I debug my class loading errors?

NoClassDefFoundError/ClassNotFoundException?
So, you've got an error/exception like the ones above. Well, does the class actually exist? Don't bother looking in your IDE, as that's where you compiled your class; it must be there, otherwise you'd get a compile-time error. This is a runtime exception, so it's in the runtime where we want to look for the class it says we're missing... but where do you start? Consider the following piece of code...

Arrays.toString(((URLClassLoader) Test.class.getClassLoader()).getURLs());

This code returns an array of all JARs and directories on the classpath of the classloader the class Test is using. So now we can see if the JAR or location our mystery class should exist in is actually on the classpath. If it does not exist, add it! If it does exist, check the JAR/directory to make sure your class actually exists in that location and add it if it's missing. These are the two typical problems which result in this error case.

NoSuchMethodError/NoSuchFieldError/AbstractMethodError/IllegalAccessError?

Now it's getting interesting! These are all subclasses of IncompatibleClassChangeError. We know the classloader has found the class we want (by name), but clearly it hasn't found the right version. Here we have a class called Test which is making an invocation to another class, Util, but BANG – we get an exception! Let's look at the next snippet of code to debug:

Test.class.getClassLoader().getResource(Util.class.getName().replace('.', '/') + ".class");

We're calling getResource on the classloader of class Test. This returns us the URL of the Util resource. Notice we've replaced the '.' with a '/' and added ".class" at the end of the String. This changes the package and class name of the class we're looking for (from the perspective of the classloader) into a directory structure and filename on the file system – neat. This will show us the exact class we have loaded and we can make sure it's the correct version.
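For convenience, both lookups can be wrapped in a small self-contained program (the class name is mine; note the URLClassLoader cast assumes Java 8 or earlier, where the application classloader was a URLClassLoader):

```java
import java.net.URLClassLoader;
import java.util.Arrays;

// Self-contained variant of the debugging snippets above. From Java 9
// onwards the application classloader is no longer a URLClassLoader, so the
// cast is guarded; the resource lookup works on any JVM version.
public class WhereIsMyClass {
    public static void main(String[] args) {
        ClassLoader cl = WhereIsMyClass.class.getClassLoader();
        if (cl instanceof URLClassLoader) {
            // Java 8 and earlier: dump the classloader's search path
            System.out.println(Arrays.toString(((URLClassLoader) cl).getURLs()));
        }
        // Works everywhere: which file did a given class actually come from?
        System.out.println(ClassLoader.getSystemResource("java/util/ArrayList.class"));
    }
}
```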
We can use javap -private on the class at a command prompt to see the bytecode and check which methods and fields actually exist. You can easily see the structure of the class and validate whether it's you or the Java runtime that's going crazy! Believe me, at one stage or another you'll question both, and nearly every time it will be you!

LinkageError/ClassCastException/IllegalAccessError

These can occur if two different classloaders load the same class and they try to interact... ouch! Yes, it's now getting a bit hairy. This can cause problems as we do not know if they will load the classes from the same place. How can this happen? Let's look at the following code, still in the Test class:

Factory.instance().sayHello();

The code looks pretty clean and safe, and it's not clear how an error could emerge from this line. We're calling a static factory method to get us an instance of the Util class and are invoking a method on it. Let's look at this supporting image to see why an exception is being thrown.

Here we can see that a web classloader (which loaded the Test class) will prefer local classes, so when it makes reference to a class, it will be loaded by the web classloader, if possible. Fairly straightforward so far. The Test class uses the Factory class to get hold of an instance of the Util class, which is fairly typical practice in Java, but the Factory class doesn't exist in the WAR, as it is an external library. This is no problem, as the web classloader can delegate to the shared classloader, which can see the Factory class. Note that the shared classloader now loads its own version of the Util class, since when the Factory instantiates the class, it uses the shared classloader (as shown in the first example earlier).
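This "two copies of the same class" situation can be reproduced in isolation. The following is a hypothetical sketch (not from the article): a parent-less classloader that copies class bytes, so loading the same class through two such loaders yields two distinct Class objects, which is exactly why casting between them fails.

```java
import java.io.InputStream;

// Hypothetical demonstration: define the same class in two sibling
// classloaders. The resulting Class objects are distinct even though the
// bytes are identical, so instances of one cannot be cast to the other.
public class TwoLoaders {
    static class CopyingLoader extends ClassLoader {
        CopyingLoader() { super(null); } // no parent: forces a fresh definition
        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            try (InputStream in = TwoLoaders.class.getClassLoader()
                    .getResourceAsStream(name.replace('.', '/') + ".class")) {
                byte[] bytes = in.readAllBytes();
                return defineClass(name, bytes, 0, bytes.length);
            } catch (Exception e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }

    public static class Util { } // the class we will load twice

    public static void main(String[] args) throws Exception {
        String name = Util.class.getName();
        Class<?> a = new CopyingLoader().loadClass(name);
        Class<?> b = new CopyingLoader().loadClass(name);
        System.out.println(a == b);          // false: two distinct Class objects
        System.out.println(a == Util.class); // false again
    }
}
```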
The Factory class returns the Util object (created by the shared classloader) back to the WAR, which then tries to use the class, effectively casting it to a potentially different version of the same class (the Util class visible to the web classloader). BOOM! We can run the same code as before from within both places (the Factory.instance() method and the Test class) to see where each of our Util classes is being loaded from. Test.class.getClassLoader().getResource(Util.class.getName().replace('.', '/') + ".class"); Hopefully this has given you an insight into the world of classloading, and instead of not understanding the classloader, you can now appreciate it with a hint of fear and uncertainty! Thanks for reading and making it to the end. We’d all like to wish you a Merry Christmas and a happy new year from ZeroTurnaround! Happy coding!   Reference: Under the JVM hood – Classloaders from our JCG partner Simon Maple at the Java Advent Calendar blog. ...
agile-logo

Who Do You Promote Into Management?

I vividly remember my first promotion into management. I was looking for a promotion to be a senior engineer. I asked for a promotion. I got a promotion into management. Was I ready? Oh no! I remember asking for another promotion. I was told, “You’re too valuable where you are.” I decided to make myself less valuable and leave. When I made my last transition into management—the one where I did not transition back into development or testing—that was the one where there were two candidates for the position. One was a very technical guy who barely had any people skills and didn’t like managing people. How did I know? He said so. I was the other candidate. Now, you need to know that I have been working on my people skills my entire life. I’ve been given feedback that I’m too blunt and direct. I suspect that if I’d been born a man and 6 feet tall, I would have received kudos for being aggressive. I’m just too short and the wrong gender. On the other hand, I need to know how to phrase the information so the other people can hear it. Promoting people into management is one of those very difficult decisions. It should not be a decision you make on the spur of the moment. If you have one-on-ones with people, you can discover their career plans. You can help them, if they want it. Part of a manager’s job is succession planning. Do you plan to be in this job forever? I hope not. Even if you just take a vacation, you are not going to be in this job for the rest of your life. You need to think about who you promote into management. Who is the best person to promote? It might not be the person with the best technical skills. It might not be the person with the best people skills. It might be the person with some combination of the two. I don’t know. You should do an analysis of the value the job requires. Here’s what I do know. If you always take the best technical person, you deprive the team of someone who was doing great technical work. 
And, if that person does not want to do management work, you deprive the team of a potentially great manager. If you know of someone who falls into the trap of promoting the best technical person into management, have that person read my new myth, Management Myth #12: I Must Promote the Best Technical Person to Be a Manager. Remember before, when I said I asked for a promotion? I wanted to be a manager. Why? Because I was ready for the challenge of making the difficult management decisions. I saw the project portfolio decisions that were not being made and I wanted to make them. I saw the client decisions that were not being made and I wanted to make them. I knew there were difficult tradeoffs to make in the projects, and I was willing to make them. Those were management decisions. I was willing to take a stand and make them. They were not technical decisions. They were management decisions. So, think about who you promote into management. It should not be a spur-of-the-moment decision. Think about your succession planning. Discuss what people want out of their careers in your one-on-ones with your staff. Whatever you do, don’t fall prey to the Myth: I Must Promote the Best Technical Person to Be A Manager.   Reference: Who Do You Promote Into Management? from our JCG partner Johanna Rothman at the Managing Product Development blog. ...
spring-interview-questions-answers

Chunk Oriented Processing in Spring Batch

Processing big data sets is one of the most important problems in the software world. Spring Batch is a lightweight and robust batch framework for processing data sets. The Spring Batch Framework offers ‘TaskletStep Oriented’ and ‘Chunk Oriented’ processing styles. In this article, the Chunk Oriented Processing Model is explained. The TaskletStep Oriented Processing in Spring Batch article is also suggested for investigating how to develop TaskletStep oriented processing in Spring Batch.           The Chunk Oriented Processing feature arrived with Spring Batch v2.0. It refers to reading the data one item at a time and creating ‘chunks’ that will be written out within a transaction boundary. One item is read from an ItemReader, handed to an ItemProcessor, and aggregated. Once the number of items read equals the commit interval, the entire chunk is written out via the ItemWriter, and then the transaction is committed. Basically, this feature should be used when both reading and writing of data items are required; TaskletStep oriented processing can be used when only reading or only writing is required. The Chunk Oriented Processing model exposes three important interfaces, ItemReader, ItemProcessor and ItemWriter, via the org.springframework.batch.item package.

ItemReader : This interface is used for providing the data. It reads the data which will be processed.
ItemProcessor : This interface is used for item transformation. It processes an input object and transforms it to an output object.
ItemWriter : This interface is used for generic output operations. It writes the data transformed by the ItemProcessor. For example, the data can be written to a database, to memory or to an output stream (etc.). In this sample application, we will write to a database.

Let us take a look at how to develop the Chunk Oriented Processing Model. 
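The read/process/write cycle described above can be sketched in plain Java. This is a simplified illustration of the chunk loop, not Spring Batch’s actual internals; the helper method and names are my own:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;
import java.util.function.Supplier;

public class ChunkLoopSketch {

    // Read items until the reader returns null; every time the commit
    // interval is reached, "write" the whole chunk at once (standing in
    // for the ItemWriter call plus the transaction commit).
    static <I, O> List<List<O>> run(Supplier<I> reader, Function<I, O> processor, int commitInterval) {
        List<List<O>> committedChunks = new ArrayList<>();
        List<O> chunk = new ArrayList<>();
        for (I item = reader.get(); item != null; item = reader.get()) {
            chunk.add(processor.apply(item));            // ItemProcessor, one item at a time
            if (chunk.size() == commitInterval) {        // commit interval reached:
                committedChunks.add(new ArrayList<>(chunk)); // ItemWriter writes, tx commits
                chunk.clear();
            }
        }
        if (!chunk.isEmpty()) committedChunks.add(chunk); // final, possibly partial, chunk
        return committedChunks;
    }

    public static void main(String[] args) {
        // Three items with commit-interval 2, mirroring firstStep in this article.
        Iterator<String> it = Arrays.asList("firstname_0", "firstname_1", "firstname_2").iterator();
        Supplier<String> reader = () -> it.hasNext() ? it.next() : null;
        System.out.println(run(reader, String::toUpperCase, 2));
        // [[FIRSTNAME_0, FIRSTNAME_1], [FIRSTNAME_2]]
    }
}
```

With three items and a commit interval of 2, the first two items are written and committed together, and the remaining item goes out in a final, smaller chunk.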
Used Technologies : JDK 1.7.0_09 Spring 3.1.3 Spring Batch 2.1.9 Hibernate 4.1.8 Tomcat JDBC 7.0.27 MySQL 5.5.8 MySQL Connector 5.1.17 Maven 3.0.4 STEP 1 : CREATE MAVEN PROJECT A Maven project is created as below. (It can be created by using Maven or an IDE plug-in.) STEP 2 : CREATE USER TABLE A new USER table is created by executing the script below: CREATE TABLE ONLINETECHVISION.USER ( id int(11) NOT NULL AUTO_INCREMENT, name varchar(45) NOT NULL, surname varchar(45) NOT NULL, PRIMARY KEY (`id`) ); STEP 3 : LIBRARIES Firstly, dependencies are added to Maven's pom.xml. <properties> <spring.version>3.1.3.RELEASE</spring.version> <spring-batch.version>2.1.9.RELEASE</spring-batch.version> </properties><dependencies><dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>${spring.version}</version> </dependency><dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>${spring.version}</version> </dependency><dependency> <groupId>org.springframework</groupId> <artifactId>spring-tx</artifactId> <version>${spring.version}</version> </dependency><dependency> <groupId>org.springframework</groupId> <artifactId>spring-orm</artifactId> <version>${spring.version}</version> </dependency><dependency> <groupId>org.springframework.batch</groupId> <artifactId>spring-batch-core</artifactId> <version>${spring-batch.version}</version> </dependency><!-- Hibernate dependencies --> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>4.1.8.Final</version> </dependency><!-- Tomcat DBCP --> <dependency> <groupId>org.apache.tomcat</groupId> <artifactId>tomcat-jdbc</artifactId> <version>7.0.27</version> </dependency><!-- MySQL Java Connector library --> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.17</version> </dependency><!-- Log4j library --> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> 
<version>1.2.16</version> </dependency></dependencies> maven-compiler-plugin(Maven Plugin) is used to compile the project with JDK 1.7 <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>3.0</version> <configuration> <source>1.7</source> <target>1.7</target> </configuration> </plugin> The following Maven plugin can be used to create runnable-jar, <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <version>2.0</version><executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <configuration> <source>1.7</source> <target>1.7</target> </configuration> <transformers> <transformer implementation='org.apache.maven.plugins.shade.resource. ManifestResourceTransformer'> <mainClass>com.onlinetechvision.exe.Application</mainClass> </transformer> <transformer implementation='org.apache.maven.plugins.shade.resource. AppendingTransformer'> <resource>META-INF/spring.handlers</resource> </transformer> <transformer implementation='org.apache.maven.plugins.shade.resource. AppendingTransformer'> <resource>META-INF/spring.schemas</resource> </transformer> </transformers> </configuration> </execution> </executions> </plugin> STEP 4 : CREATE User ENTITY User Entity is created. This entity will be stored after processing. 
package com.onlinetechvision.user;import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.Table;/** * User Entity * * @author onlinetechvision.com * @since 10 Dec 2012 * @version 1.0.0 * */ @Entity @Table(name='USER') public class User {private int id; private String name; private String surname;@Id @GeneratedValue(strategy=GenerationType.AUTO) @Column(name='ID', unique = true, nullable = false) public int getId() { return id; }public void setId(int id) { this.id = id; }@Column(name='NAME', unique = true, nullable = false) public String getName() { return name; }public void setName(String name) { this.name = name; }@Column(name='SURNAME', unique = true, nullable = false) public String getSurname() { return surname; }public void setSurname(String surname) { this.surname = surname; }@Override public String toString() { StringBuffer strBuff = new StringBuffer(); strBuff.append('id : ').append(getId()); strBuff.append(', name : ').append(getName()); strBuff.append(', surname : ').append(getSurname()); return strBuff.toString(); } } STEP 5 : CREATE IUserDAO INTERFACE IUserDAO Interface is created to expose data access functionality. package com.onlinetechvision.user.dao;import java.util.List;import com.onlinetechvision.user.User;/** * User DAO Interface * * @author onlinetechvision.com * @since 10 Dec 2012 * @version 1.0.0 * */ public interface IUserDAO {/** * Adds User * * @param User user */ void addUser(User user);/** * Gets User List * */ List<User> getUsers(); } STEP 6 : CREATE UserDAO IMPL UserDAO Class is created by implementing IUserDAO Interface. 
package com.onlinetechvision.user.dao;import java.util.List;import org.hibernate.SessionFactory;import com.onlinetechvision.user.User;/** * User DAO * * @author onlinetechvision.com * @since 10 Dec 2012 * @version 1.0.0 * */ public class UserDAO implements IUserDAO {private SessionFactory sessionFactory;/** * Gets Hibernate Session Factory * * @return SessionFactory - Hibernate Session Factory */ public SessionFactory getSessionFactory() { return sessionFactory; }/** * Sets Hibernate Session Factory * * @param SessionFactory - Hibernate Session Factory */ public void setSessionFactory(SessionFactory sessionFactory) { this.sessionFactory = sessionFactory; }/** * Adds User * * @param User user */ @Override public void addUser(User user) { getSessionFactory().getCurrentSession().save(user); }/** * Gets User List * * @return List - User list */ @SuppressWarnings({ 'unchecked' }) @Override public List<User> getUsers() { List<User> list = getSessionFactory().getCurrentSession().createQuery('from User').list(); return list; }} STEP 7 : CREATE IUserService INTERFACE IUserService Interface is created for service layer. package com.onlinetechvision.user.service;import java.util.List;import com.onlinetechvision.user.User;/** * * User Service Interface * * @author onlinetechvision.com * @since 10 Dec 2012 * @version 1.0.0 * */ public interface IUserService {/** * Adds User * * @param User user */ void addUser(User user);/** * Gets User List * * @return List - User list */ List<User> getUsers(); } STEP 8 : CREATE UserService IMPL UserService Class is created by implementing IUserService Interface. 
package com.onlinetechvision.user.service;import java.util.List;import org.springframework.transaction.annotation.Transactional;import com.onlinetechvision.user.User; import com.onlinetechvision.user.dao.IUserDAO;/** * * User Service * * @author onlinetechvision.com * @since 10 Dec 2012 * @version 1.0.0 * */ @Transactional(readOnly = true) public class UserService implements IUserService {IUserDAO userDAO;/** * Adds User * * @param User user */ @Transactional(readOnly = false) @Override public void addUser(User user) { getUserDAO().addUser(user); }/** * Gets User List * */ @Override public List<User> getUsers() { return getUserDAO().getUsers(); }public IUserDAO getUserDAO() { return userDAO; }public void setUserDAO(IUserDAO userDAO) { this.userDAO = userDAO; } } STEP 9 : CREATE TestReader IMPL TestReader Class is created by implementing the ItemReader Interface. This class is called in order to read items. When the read method returns null, the reading operation is completed. The following steps explain in detail how firstJob is executed. The commit-interval value of firstJob is 2 and the following steps are executed : 1) firstTestReader is called to read first item(firstname_0, firstsurname_0) 2) firstTestReader is called again to read second item(firstname_1, firstsurname_1) 3) testProcessor is called to process first item(FIRSTNAME_0, FIRSTSURNAME_0) 4) testProcessor is called to process second item(FIRSTNAME_1, FIRSTSURNAME_1) 5) testWriter is called to write first item(FIRSTNAME_0, FIRSTSURNAME_0) to database 6) testWriter is called to write second item(FIRSTNAME_1, FIRSTSURNAME_1) to database 7) first and second items are committed and the transaction is closed. 8) firstTestReader is called to read third item(firstname_2, firstsurname_2) 9) the index of firstTestReader now exceeds its maxIndex value (2), so the read method returns null and the item reading operation is completed. 
10) testProcessor is called to process third item(FIRSTNAME_2, FIRSTSURNAME_2) 11) testWriter is called to write third item(FIRSTNAME_2, FIRSTSURNAME_2) to database 12) third item is committed and the transaction is closed. firstStep is completed with COMPLETED status and secondStep is started. secondJob and thirdJob are executed in the same way. package com.onlinetechvision.item;import org.springframework.batch.item.ItemReader; import org.springframework.batch.item.NonTransientResourceException; import org.springframework.batch.item.ParseException; import org.springframework.batch.item.UnexpectedInputException;import com.onlinetechvision.user.User;/** * TestReader Class is created to read items which will be processed * * @author onlinetechvision.com * @since 10 Dec 2012 * @version 1.0.0 * */ public class TestReader implements ItemReader<User> { private int index; private int maxIndex; private String namePrefix; private String surnamePrefix;/** * Reads items one by one * * @return User * * @throws Exception * @throws UnexpectedInputException * @throws ParseException * @throws NonTransientResourceException * */ @Override public User read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException { User user = new User(); user.setName(getNamePrefix() + '_' + index); user.setSurname(getSurnamePrefix() + '_' + index);if(index > getMaxIndex()) { return null; }incrementIndex();return user; }/** * Increments index which defines read-count * * @return int * */ private int incrementIndex() { return index++; }public int getMaxIndex() { return maxIndex; }public void setMaxIndex(int maxIndex) { this.maxIndex = maxIndex; }public String getNamePrefix() { return namePrefix; }public void setNamePrefix(String namePrefix) { this.namePrefix = namePrefix; }public String getSurnamePrefix() { return surnamePrefix; }public void setSurnamePrefix(String surnamePrefix) { this.surnamePrefix = surnamePrefix; }} STEP 10 : CREATE FailedCaseTestReader IMPL 
FailedCaseTestReader Class is created in order to simulate the failed job status. In this sample application, when thirdJob is processed at fifthStep, failedCaseTestReader is called and exception is thrown so its status will be FAILED. package com.onlinetechvision.item;import org.springframework.batch.item.ItemReader; import org.springframework.batch.item.NonTransientResourceException; import org.springframework.batch.item.ParseException; import org.springframework.batch.item.UnexpectedInputException;import com.onlinetechvision.user.User;/** * FailedCaseTestReader Class is created in order to simulate the failed job status. * * @author onlinetechvision.com * @since 10 Dec 2012 * @version 1.0.0 * */ public class FailedCaseTestReader implements ItemReader<User> { private int index; private int maxIndex; private String namePrefix; private String surnamePrefix;/** * Reads items one by one * * @return User * * @throws Exception * @throws UnexpectedInputException * @throws ParseException * @throws NonTransientResourceException * */ @Override public User read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException { User user = new User(); user.setName(getNamePrefix() + '_' + index); user.setSurname(getSurnamePrefix() + '_' + index);if(index >= getMaxIndex()) { throw new Exception('Unexpected Error!'); }incrementIndex();return user; }/** * Increments index which defines read-count * * @return int * */ private int incrementIndex() { return index++; }public int getMaxIndex() { return maxIndex; }public void setMaxIndex(int maxIndex) { this.maxIndex = maxIndex; }public String getNamePrefix() { return namePrefix; }public void setNamePrefix(String namePrefix) { this.namePrefix = namePrefix; }public String getSurnamePrefix() { return surnamePrefix; }public void setSurnamePrefix(String surnamePrefix) { this.surnamePrefix = surnamePrefix; }} STEP 11 : CREATE TestProcessor IMPL TestProcessor Class is created by implementing ItemProcessor 
Interface. This class is called to process items. User item is received from TestReader, processed and returned to TestWriter. package com.onlinetechvision.item;import java.util.Locale;import org.springframework.batch.item.ItemProcessor;import com.onlinetechvision.user.User;/** * TestProcessor Class is created to process items. * * @author onlinetechvision.com * @since 10 Dec 2012 * @version 1.0.0 * */ public class TestProcessor implements ItemProcessor<User, User> {/** * Processes items one by one * * @param User user * @return User * @throws Exception * */ @Override public User process(User user) throws Exception { user.setName(user.getName().toUpperCase(Locale.ENGLISH)); user.setSurname(user.getSurname().toUpperCase(Locale.ENGLISH)); return user; }} STEP 12 : CREATE TestWriter IMPL TestWriter Class is created by implementing ItemWriter Interface. This class is called to write items to DB, memory etc… package com.onlinetechvision.item;import java.util.List;import org.springframework.batch.item.ItemWriter;import com.onlinetechvision.user.User; import com.onlinetechvision.user.service.IUserService;/** * TestWriter Class is created to write items to DB, memory etc... * * @author onlinetechvision.com * @since 10 Dec 2012 * @version 1.0.0 * */ public class TestWriter implements ItemWriter<User> {private IUserService userService;/** * Writes items via list * * @throws Exception * */ @Override public void write(List<? extends User> userList) throws Exception { for(User user : userList) { getUserService().addUser(user); } System.out.println('User List : ' + getUserService().getUsers()); }public IUserService getUserService() { return userService; }public void setUserService(IUserService userService) { this.userService = userService; }} STEP 13 : CREATE FailedStepTasklet CLASS FailedStepTasklet is created by implementing Tasklet Interface. It illustrates business logic in failed step. 
package com.onlinetechvision.tasklet;import org.apache.log4j.Logger; import org.springframework.batch.core.StepContribution; import org.springframework.batch.core.scope.context.ChunkContext; import org.springframework.batch.core.step.tasklet.Tasklet; import org.springframework.batch.repeat.RepeatStatus;/** * FailedStepTasklet Class illustrates a failed job. * * @author onlinetechvision.com * @since 10 Dec 2012 * @version 1.0.0 * */ public class FailedStepTasklet implements Tasklet {private static final Logger logger = Logger.getLogger(FailedStepTasklet.class);private String taskResult;/** * Executes FailedStepTasklet * * @param StepContribution stepContribution * @param ChunkContext chunkContext * @return RepeatStatus * @throws Exception * */ public RepeatStatus execute(StepContribution stepContribution, ChunkContext chunkContext) throws Exception { logger.debug('Task Result : ' + getTaskResult()); throw new Exception('Error occurred!'); }public String getTaskResult() { return taskResult; }public void setTaskResult(String taskResult) { this.taskResult = taskResult; }} STEP 14 : CREATE BatchProcessStarter CLASS BatchProcessStarter Class is created to launch the jobs. Also, it logs their execution results. package com.onlinetechvision.spring.batch;import org.apache.log4j.Logger; import org.springframework.batch.core.Job; import org.springframework.batch.core.JobExecution; import org.springframework.batch.core.JobParametersBuilder; import org.springframework.batch.core.JobParametersInvalidException; import org.springframework.batch.core.launch.JobLauncher; import org.springframework.batch.core.repository.JobExecutionAlreadyRunningException; import org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException; import org.springframework.batch.core.repository.JobRepository; import org.springframework.batch.core.repository.JobRestartException;/** * BatchProcessStarter Class launches the jobs and logs their execution results. 
* * @author onlinetechvision.com * @since 10 Dec 2012 * @version 1.0.0 * */ public class BatchProcessStarter {private static final Logger logger = Logger.getLogger(BatchProcessStarter.class);private Job firstJob; private Job secondJob; private Job thirdJob; private JobLauncher jobLauncher; private JobRepository jobRepository;/** * Starts the jobs and logs their execution results. * */ public void start() { JobExecution jobExecution = null; JobParametersBuilder builder = new JobParametersBuilder();try { getJobLauncher().run(getFirstJob(), builder.toJobParameters()); jobExecution = getJobRepository().getLastJobExecution(getFirstJob().getName(), builder.toJobParameters()); logger.debug(jobExecution.toString());getJobLauncher().run(getSecondJob(), builder.toJobParameters()); jobExecution = getJobRepository().getLastJobExecution(getSecondJob().getName(), builder.toJobParameters()); logger.debug(jobExecution.toString());getJobLauncher().run(getThirdJob(), builder.toJobParameters()); jobExecution = getJobRepository().getLastJobExecution(getThirdJob().getName(), builder.toJobParameters()); logger.debug(jobExecution.toString());} catch (JobExecutionAlreadyRunningException | JobRestartException | JobInstanceAlreadyCompleteException | JobParametersInvalidException e) { logger.error(e); }}public Job getFirstJob() { return firstJob; }public void setFirstJob(Job firstJob) { this.firstJob = firstJob; }public Job getSecondJob() { return secondJob; }public void setSecondJob(Job secondJob) { this.secondJob = secondJob; }public Job getThirdJob() { return thirdJob; }public void setThirdJob(Job thirdJob) { this.thirdJob = thirdJob; }public JobLauncher getJobLauncher() { return jobLauncher; }public void setJobLauncher(JobLauncher jobLauncher) { this.jobLauncher = jobLauncher; }public JobRepository getJobRepository() { return jobRepository; }public void setJobRepository(JobRepository jobRepository) { this.jobRepository = jobRepository; }} STEP 15 : CREATE dataContext.xml jdbc.properties, 
is created. It defines data-source informations and is read via dataContext.xml jdbc.db.driverClassName=com.mysql.jdbc.Driver jdbc.db.url=jdbc:mysql://localhost:3306/onlinetechvision jdbc.db.username=root jdbc.db.password=root jdbc.db.initialSize=10 jdbc.db.minIdle=3 jdbc.db.maxIdle=10 jdbc.db.maxActive=10 jdbc.db.testWhileIdle=true jdbc.db.testOnBorrow=true jdbc.db.testOnReturn=true jdbc.db.initSQL=SELECT 1 FROM DUAL jdbc.db.validationQuery=SELECT 1 FROM DUAL jdbc.db.timeBetweenEvictionRunsMillis=30000 STEP 16 : CREATE dataContext.xml Spring Configuration file, dataContext.xml, is created. It covers dataSource, sessionFactory and transactionManager definitions. <?xml version='1.0' encoding='UTF-8'?> <beans xmlns='http://www.springframework.org/schema/beans' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:context='http://www.springframework.org/schema/context' xmlns:p='http://www.springframework.org/schema/p' xmlns:batch='http://www.springframework.org/schema/batch' xmlns:tx='http://www.springframework.org/schema/tx' xsi:schemaLocation='http://www.springframework.org/schema/beanshttp://www.springframework.org/schema/beans/spring-beans-3.0.xsdhttp://www.springframework.org/schema/contexthttp://www.springframework.org/schema/context/spring-context-3.0.xsdhttp://www.springframework.org/schema/batchhttp://www.springframework.org/schema/batch/spring-batch-2.1.xsdhttp://www.springframework.org/schema/txhttp://www.springframework.org/schema/tx/spring-tx-3.0.xsd'><context:property-placeholder location='classpath:jdbc.properties'/><!-- Enable the configuration of transactional behavior based on annotations --> <tx:annotation-driven transaction-manager='transactionManager'/><!-- Data Source Declaration --> <bean id='dataSource' class='org.apache.tomcat.jdbc.pool.DataSource' destroy-method='close' p:driverClassName='${jdbc.db.driverClassName}' p:url='${jdbc.db.url}' p:username='${jdbc.db.username}' p:password='${jdbc.db.password}' 
p:initialSize='${jdbc.db.initialSize}' p:minIdle='${jdbc.db.minIdle}' p:maxIdle='${jdbc.db.maxIdle}' p:maxActive='${jdbc.db.maxActive}' p:testWhileIdle='${jdbc.db.testWhileIdle}' p:testOnBorrow='${jdbc.db.testOnBorrow}' p:testOnReturn='${jdbc.db.testOnReturn}' p:initSQL='${jdbc.db.initSQL}' p:validationQuery='${jdbc.db.validationQuery}' p:timeBetweenEvictionRunsMillis='${jdbc.db.timeBetweenEvictionRunsMillis}'/><!-- Session Factory Declaration --> <bean id='sessionFactory' class='org.springframework.orm.hibernate4.LocalSessionFactoryBean'> <property name='dataSource' ref='dataSource' /> <property name='annotatedClasses'> <list> <value>com.onlinetechvision.user.User</value> </list> </property> <property name='hibernateProperties'> <props> <prop key='hibernate.dialect'>org.hibernate.dialect.MySQLDialect</prop> <prop key='hibernate.show_sql'>true</prop> </props> </property> </bean><!-- Transaction Manager Declaration --> <bean id='transactionManager' class='org.springframework.orm.hibernate4.HibernateTransactionManager'> <property name='sessionFactory' ref='sessionFactory'/> </bean></beans> STEP 17 : CREATE jobContext.xml Spring Configuration file, jobContext.xml, is created. It covers jobRepository, jobLauncher, item reader, item processor, item writer, tasklet and job definitions. 
<?xml version='1.0' encoding='UTF-8'?> <beans xmlns='http://www.springframework.org/schema/beans' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:batch='http://www.springframework.org/schema/batch' xsi:schemaLocation='http://www.springframework.org/schema/beanshttp://www.springframework.org/schema/beans/spring-beans-3.0.xsdhttp://www.springframework.org/schema/batchhttp://www.springframework.org/schema/batch/spring-batch-2.1.xsd'><import resource='dataContext.xml'/><!-- jobRepository Declaration --> <bean id='jobRepository' class='org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean'> <property name='transactionManager' ref='transactionManager' /> </bean><!-- jobLauncher Declaration --> <bean id='jobLauncher' class='org.springframework.batch.core.launch.support.SimpleJobLauncher' > <property name='jobRepository' ref='jobRepository'/> </bean><!-- Reader Bean Declarations --> <bean id='firstTestReader' class='com.onlinetechvision.item.TestReader'> <property name='maxIndex' value='2'/> <property name='namePrefix' value='firstname'/> <property name='surnamePrefix' value='firstsurname'/> </bean><bean id='secondTestReader' class='com.onlinetechvision.item.TestReader'> <property name='maxIndex' value='2'/> <property name='namePrefix' value='secondname'/> <property name='surnamePrefix' value='secondsurname'/> </bean><bean id='thirdTestReader' class='com.onlinetechvision.item.TestReader'> <property name='maxIndex' value='3'/> <property name='namePrefix' value='thirdname'/> <property name='surnamePrefix' value='thirdsurname'/> </bean><bean id='fourthTestReader' class='com.onlinetechvision.item.TestReader'> <property name='maxIndex' value='3'/> <property name='namePrefix' value='fourthname'/> <property name='surnamePrefix' value='fourthsurname'/> </bean><bean id='fifthTestReader' class='com.onlinetechvision.item.TestReader'> <property name='maxIndex' value='3'/> <property name='namePrefix' value='fifthname'/> <property name='surnamePrefix' 
value='fifthsurname'/> </bean><bean id='failedCaseTestReader' class='com.onlinetechvision.item.FailedCaseTestReader'> <property name='maxIndex' value='1'/> <property name='namePrefix' value='failedcasename'/> <property name='surnamePrefix' value='failedcasesurname'/> </bean><!-- Processor Bean Declaration --> <bean id='testProcessor' class='com.onlinetechvision.item.TestProcessor' /><!-- Writer Bean Declaration --> <bean id='testWriter' class='com.onlinetechvision.item.TestWriter' > <property name='userService' ref='userService'/> </bean><!-- Failed Step Tasklet Declaration --> <bean id='failedStepTasklet' class='com.onlinetechvision.tasklet.FailedStepTasklet'> <property name='taskResult' value='Error occurred!' /> </bean><!-- Batch Job Declarations --> <batch:job id='firstJob'> <batch:step id='firstStep' next='secondStep'> <batch:tasklet> <batch:chunk reader='firstTestReader' processor='testProcessor' writer='testWriter' commit-interval='2'/> </batch:tasklet> </batch:step> <batch:step id='secondStep'> <batch:tasklet> <batch:chunk reader='secondTestReader' processor='testProcessor' writer='testWriter' commit-interval='2'/> </batch:tasklet> </batch:step> </batch:job><batch:job id='secondJob'> <batch:step id='thirdStep'> <batch:tasklet> <batch:chunk reader='thirdTestReader' processor='testProcessor' writer='testWriter' commit-interval='2'/> </batch:tasklet> <batch:next on='*' to='fourthStep' /> <batch:next on='FAILED' to='firstFailedStep' /> </batch:step> <batch:step id='fourthStep'> <batch:tasklet> <batch:chunk reader='fourthTestReader' processor='testProcessor' writer='testWriter' commit-interval='2'/> </batch:tasklet> </batch:step> <batch:step id='firstFailedStep'> <batch:tasklet ref='failedStepTasklet' /> </batch:step> </batch:job><batch:job id='thirdJob'> <batch:step id='fifthStep'> <batch:tasklet> <batch:chunk reader='failedCaseTestReader' processor='testProcessor' writer='testWriter' commit-interval='2'/> </batch:tasklet> <batch:next on='*' to='sixthStep' /> 
<batch:next on='FAILED' to='secondFailedStep' /> </batch:step> <batch:step id='sixthStep'> <batch:tasklet> <batch:chunk reader='fifthTestReader' processor='testProcessor' writer='testWriter' commit-interval='2'/> </batch:tasklet> </batch:step> <batch:step id='secondFailedStep'> <batch:tasklet ref='failedStepTasklet' /> </batch:step> </batch:job></beans> STEP 18 : CREATE applicationContext.xml Spring Configuration file, applicationContext.xml, is created. It covers bean definitions. <?xml version='1.0' encoding='UTF-8'?> <beans xmlns='http://www.springframework.org/schema/beans' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns:batch='http://www.springframework.org/schema/batch' xsi:schemaLocation='http://www.springframework.org/schema/beanshttp://www.springframework.org/schema/beans/spring-beans-3.0.xsdhttp://www.springframework.org/schema/batchhttp://www.springframework.org/schema/batch/spring-batch-2.1.xsd'><import resource='jobContext.xml'/><!-- User DAO Declaration --> <bean id='userDAO' class='com.onlinetechvision.user.dao.UserDAO'> <property name='sessionFactory' ref='sessionFactory' /> </bean><!-- User Service Declaration --> <bean id='userService' class='com.onlinetechvision.user.service.UserService'> <property name='userDAO' ref='userDAO' /> </bean><!-- BatchProcessStarter Declaration --> <bean id='batchProcessStarter' class='com.onlinetechvision.spring.batch.BatchProcessStarter'> <property name='jobLauncher' ref='jobLauncher'/> <property name='jobRepository' ref='jobRepository'/> <property name='firstJob' ref='firstJob'/> <property name='secondJob' ref='secondJob'/> <property name='thirdJob' ref='thirdJob'/> </bean></beans> STEP 19 : CREATE Application CLASS Application Class is created to run the application. 
package com.onlinetechvision.exe;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.onlinetechvision.spring.batch.BatchProcessStarter;

/**
 * Application class that starts the application.
 *
 * @author onlinetechvision.com
 * @since 10 Dec 2012
 * @version 1.0.0
 */
public class Application {

    /**
     * Starts the application.
     *
     * @param args command-line arguments
     */
    public static void main(String[] args) {
        ApplicationContext appContext = new ClassPathXmlApplicationContext("applicationContext.xml");
        BatchProcessStarter batchProcessStarter = (BatchProcessStarter) appContext.getBean("batchProcessStarter");
        batchProcessStarter.start();
    }
}

STEP 20 : BUILD PROJECT

After the OTV_SpringBatch_Chunk_Oriented_Processing project is built, OTV_SpringBatch_Chunk_Oriented_Processing-0.0.1-SNAPSHOT.jar will be created.

STEP 21 : RUN PROJECT

After the created OTV_SpringBatch_Chunk_Oriented_Processing-0.0.1-SNAPSHOT.jar file is run, the following database and console output logs will be shown:

Database screenshot :

First Job’s console output :

16.12.2012 19:30:41 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=firstJob]] launched with the following parameters: [{}]
16.12.2012 19:30:41 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=0, version=0, startTime=null, endTime=null, lastUpdated=Sun Dec 16 19:30:41 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=0, version=0, JobParameters=[{}], Job=[firstJob]]
User List : [id : 181, name : FIRSTNAME_0, surname : FIRSTSURNAME_0, id : 182, name : FIRSTNAME_1, surname : FIRSTSURNAME_1, id : 183, name : FIRSTNAME_2, surname : FIRSTSURNAME_2, id : 184, name : SECONDNAME_0, surname : SECONDSURNAME_0, id : 185, name : SECONDNAME_1, surname : SECONDSURNAME_1, id : 186, name : SECONDNAME_2, surname : SECONDSURNAME_2]
16.12.2012 19:30:42 DEBUG (BatchProcessStarter.java:43) - JobExecution: id=0, version=2, startTime=Sun Dec 16 19:30:41 GMT 2012, endTime=Sun Dec 16 19:30:42 GMT 2012, lastUpdated=Sun Dec 16 19:30:42 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=0, version=0, JobParameters=[{}], Job=[firstJob]]

Second Job’s console output :

16.12.2012 19:30:42 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=secondJob]] launched with the following parameters: [{}]
16.12.2012 19:30:42 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=1, version=0, startTime=null, endTime=null, lastUpdated=Sun Dec 16 19:30:42 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=1, version=0, JobParameters=[{}], Job=[secondJob]]
User List : [id : 181, name : FIRSTNAME_0, surname : FIRSTSURNAME_0, id : 182, name : FIRSTNAME_1, surname : FIRSTSURNAME_1, id : 183, name : FIRSTNAME_2, surname : FIRSTSURNAME_2, id : 184, name : SECONDNAME_0, surname : SECONDSURNAME_0, id : 185, name : SECONDNAME_1, surname : SECONDSURNAME_1, id : 186, name : SECONDNAME_2, surname : SECONDSURNAME_2, id : 187, name : THIRDNAME_0, surname : THIRDSURNAME_0, id : 188, name : THIRDNAME_1, surname : THIRDSURNAME_1, id : 189, name : THIRDNAME_2, surname : THIRDSURNAME_2, id : 190, name : THIRDNAME_3, surname : THIRDSURNAME_3, id : 191, name : FOURTHNAME_0, surname : FOURTHSURNAME_0, id : 192, name : FOURTHNAME_1, surname : FOURTHSURNAME_1, id : 193, name : FOURTHNAME_2, surname : FOURTHSURNAME_2, id : 194, name : FOURTHNAME_3, surname : FOURTHSURNAME_3]
16.12.2012 19:30:42 DEBUG (BatchProcessStarter.java:47) - JobExecution: id=1, version=2, startTime=Sun Dec 16 19:30:42 GMT 2012, endTime=Sun Dec 16 19:30:42 GMT 2012, lastUpdated=Sun Dec 16 19:30:42 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=1, version=0, JobParameters=[{}], Job=[secondJob]]

Third Job’s console output :

16.12.2012 19:30:42 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=thirdJob]] launched with the following parameters: [{}]
16.12.2012 19:30:42 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=2, version=0, startTime=null, endTime=null, lastUpdated=Sun Dec 16 19:30:42 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=2, version=0, JobParameters=[{}], Job=[thirdJob]]
16.12.2012 19:30:42 DEBUG (TransactionTemplate.java:159) - Initiating transaction rollback on application exception
org.springframework.batch.repeat.RepeatException: Exception in batch process; nested exception is java.lang.Exception: Unexpected Error!
...
16.12.2012 19:30:43 DEBUG (BatchProcessStarter.java:51) - JobExecution: id=2, version=2, startTime=Sun Dec 16 19:30:42 GMT 2012, endTime=Sun Dec 16 19:30:43 GMT 2012, lastUpdated=Sun Dec 16 19:30:43 GMT 2012, status=FAILED, exitStatus=exitCode=FAILED;exitDescription=, job=[JobInstance: id=2, version=0, JobParameters=[{}], Job=[thirdJob]]

STEP 22 : DOWNLOAD

https://github.com/erenavsarogullari/OTV_SpringBatch_Chunk_Oriented_Processing

Resources: Chunk Oriented Processing in Spring Batch   Reference: Chunk Oriented Processing in Spring Batch from our JCG partner Eren Avsarogullari at the Online Technology Vision blog. ...
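The chunk-oriented model configured above (reader → processor → writer, with commit-interval='2') can be paraphrased in plain Java to show what the framework does per chunk. This is only an illustrative sketch of the read/process/write loop, independent of Spring Batch; the names `run`, `reader`, `processor` and `commitInterval` are made up for the example and are not Spring APIs:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.function.UnaryOperator;

public class ChunkLoop {

    // Read until the chunk is full (or input ends), process each item,
    // then hand the whole chunk to the writer in one commit.
    static <T> List<List<T>> run(Iterator<T> reader, UnaryOperator<T> processor, int commitInterval) {
        List<List<T>> writtenChunks = new ArrayList<>();
        List<T> chunk = new ArrayList<>();
        while (reader.hasNext()) {
            chunk.add(processor.apply(reader.next()));   // "ItemReader" + "ItemProcessor"
            if (chunk.size() == commitInterval) {
                writtenChunks.add(chunk);                // "ItemWriter" commits the chunk
                chunk = new ArrayList<>();
            }
        }
        if (!chunk.isEmpty()) {
            writtenChunks.add(chunk);                    // final, possibly partial, chunk
        }
        return writtenChunks;
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("a", "b", "c", "d", "e");
        System.out.println(run(input.iterator(), String::toUpperCase, 2)); // [[A, B], [C, D], [E]]
    }
}
```

With commit-interval='2', five items become three commits; in the real framework each commit is a transaction boundary, which is why a failure in `thirdJob` above rolls back only the current chunk.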
Java Object resurrection

Overview

After an object which overrides finalize() is collected, it is added to a finalization queue so it can be cleaned up after its finalize() method is called. But what happens if you resurrect the object?

When is finalize called?

The finalize method is called by a single-threaded system task which calls this method for each object that has been collected. Note: the nodes in the finalization queue are objects that notionally have finalize() methods too. Objects cannot be cleaned up until the GC after they have been finalized. Most objects (including the nodes in the finalization queue) don’t override finalize(), and the GC is smart enough to detect this and not add them to the queue. These objects can be cleaned up immediately. If you override the method, even with an empty one, it makes a difference.

What about resurrected objects?

In the finalize() method, you can resurrect the object by making something point to it, e.g. a static collection. The object can then no longer be collected by a GC (until it is discarded again). So what happens then? The object is flagged as having been finalized once and is not finalized repeatedly.

static final List<Zombies> ZOMBIES = new ArrayList<>();

static class Zombies {
    private int num;

    public Zombies(int num) {
        this.num = num;
    }

    @Override
    protected void finalize() throws Throwable {
        System.out.println("Resurrect " + num);
        ZOMBIES.add(this);
    }

    @Override
    public String toString() {
        return "Zombies{" + "num=" + num + '}';
    }
}

public static void main(String... args) throws InterruptedException {
    for (int i = 0; i < 3; i++)
        ZOMBIES.add(new Zombies(i));
    for (int j = 0; j < 5; j++) {
        System.out.println("Zombies: " + ZOMBIES);
        ZOMBIES.clear();
        System.gc();
        Thread.sleep(100);
    }
}

prints

Zombies: [Zombies{num=0}, Zombies{num=1}, Zombies{num=2}]
Resurrect 2
Resurrect 1
Resurrect 0
Zombies: [Zombies{num=2}, Zombies{num=1}, Zombies{num=0}]
Zombies: []
Zombies: []
Zombies: []

In this example, the Zombies are added once to the collection and resurrected once by the finalize method. When they are collected a second time, they have already been flagged as finalized and are not queued again.

Conclusion

While it’s a good idea to avoid using finalize(), it is a small comfort to know it will only be called once even if the object is resurrected.   Reference: Java Object resurrection from our JCG partner Peter Lawrey at the Vanilla Java blog. ...
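Since the conclusion above is to avoid finalize(), it is worth noting the usual replacement: a PhantomReference registered with a ReferenceQueue. Unlike finalize(), this mechanism cannot resurrect the object, because a phantom reference's get() always returns null. A minimal sketch (the class name is illustrative, and the GC behaviour after System.gc() is not guaranteed):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

public class PhantomCleanup {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object resource = new Object();
        PhantomReference<Object> ref = new PhantomReference<>(resource, queue);

        // Unlike finalize(), cleanup code can never leak the object back out:
        System.out.println(ref.get()); // null - phantom references hide their referent

        resource = null;   // drop the last strong reference
        System.gc();       // request collection (only a hint, hence the timeout below)

        Reference<?> enqueued = queue.remove(1000); // waits up to 1s for enqueueing
        System.out.println(enqueued == ref ? "cleaned up" : "not enqueued yet");
    }
}
```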
Don’t take the Technical Debt Metaphor too far

Because “technical debt” has the word “debt” in it, many people have decided that it makes sense to think and work with technical debt in monetary terms, and treat technical debt as a real financial cost. This is supposed to make it easier for technical people to explain technical debt to the business, and easier to make a business case for paying debt off. Putting technical debt into financial terms also allows consultants and vendors to try to scare business executives into buying their tools or their help – like Gartner calculating that worldwide “IT debt” costs will exceed $1.5 trillion in a couple more years, or CAST Software’s assessment that the average enterprise is carrying millions of dollars of technical debt.

Businesses understand debt. Businesses make a decision to take on debt, and they track it, account for it and manage it. The business always knows how much debt it has, why it took it on, and when it needs to pay it off. Businesses don’t accidentally take on debt – debt doesn’t just show up on the books one day.

We don’t know when we’re taking technical debt on

But developers accidentally take on debt all of the time – what Martin Fowler calls “inadvertent debt”, due to inexperience and misunderstandings, everything from “What’s layering?” to “Now we know how we should have done it” when looking at the design a year or two later.

‘The point is that while you’re programming, you are learning. It’s often the case that it can take a year of programming on a project before you understand what the best design approach should have been.’

Taking on this kind of debt is inevitable – and you’ll never know when you’re taking it on or how much, because you don’t know what you don’t know. Even when developers take on debt consciously, they don’t understand the costs at the time – the principal or the interest. Most teams don’t record when they make a trade-off in design or a shortcut in coding or test automation, never mind try to put a value on paying off their choice.

We don’t understand (or often even see) technical debt costs until long after we’ve taken them on: when we’re dealing with quality and stability problems; when we’re estimating a change and recognize that we made mistakes in the past, or took shortcuts that we didn’t realize before, or shortcuts that we did know about but that turned out to be much more expensive than expected; or once we understand that we chose the wrong architecture or the wrong technical platform. Or maybe you’ve just run a static analysis tool like CAST or SONAR which tells you that you have thousands of dollars of technical debt in your code base that you didn’t know about until now. Now try to explain to a business executive that you just realized or just remembered that you have put the company into debt for tens or hundreds of thousands of dollars. Businesses don’t and can’t run this way.

We don’t know how much technical debt is really costing us

By expressing everything in financial terms, we’re also pretending that technical debt costs are all hard costs to the business and that we actually know how much the principal and interest costs are: we’re $100,000 in debt and the interest rate is 3% per year. Assigning a monetary value to technical debt costs gives them a false sense of precision and accuracy. Let’s be honest. There aren’t clear and consistent criteria for costing technical debt and modelling technical debt repayment – we don’t even have a definition of what technical debt is that we can all agree on. Two people can come up with different technical debt assessments for the same system, because what I think technical debt is and what you think technical debt is aren’t the same.

And just because a tool says that technical debt costs are $100,000.00 for a code base doesn’t make the number true. Any principal and interest that you calculate (or some tool calculates for you) are made-up numbers, and the business will know this when you try to defend them – which you are going to have to do if you want to talk in financial terms with someone who does finance for a living. You’re going to be on shaky ground at best – at worst, they’ll understand that you’re not talking about real business debt and wonder what you’re trying to pull off.

The other problem that I see is “debt fatigue”. Everyone is overwhelmed by the global government debt crisis and the real estate debt crisis and the consumer debt crisis and the fiscal cliff and whatever comes next. Your business may already be fighting its own problems with managing its financial debt. Technical debt is one more argument about debt that nobody is looking forward to hearing.

We don’t need to talk about debt with the business

We don’t use the term “technical debt” with the business, or try to explain it in financial debt terms. If we need to rewrite code because it is unstable, we treat this like any other problem that needs to be solved – we cost it out, explain the risks, and prioritize this work with everything else. If we need to rewrite or restructure code in order to make upcoming changes easier, cheaper and less risky, we explain this as part of the work that needs to be done, and justify the costs. If we need to replace or upgrade a platform technology because we are getting poor support from the supplier, we consider this a business risk that needs to be understood and managed. And if code should be refactored or tests filled in, we don’t explain it, we just do it as part of day-to-day engineering work. We’re dealing with technical debt in terms that the business understands without using a phony financial model.
We’re not pretending that we’re carrying off-balance sheet debt that the company needs to rely on technologists to understand and manage. We’re leaving debt valuation and payment amortization arguments to the experts in finance and accounting where they belong, and focusing on solving problems in software, which is where we belong.   Reference: Don’t take the Technical Debt Metaphor too far from our JCG partner Jim Bird at the Building Real Software blog. ...
Java – far sight look at JDK 8

The world is changing slowly but surely. After the changes that gave Java a fresher look with JDK 7, the Java community is looking forward to the rest of the improvements that will come with JDK 8 and probably JDK 9. The targeted purpose of JDK 8 is to fill in the gaps left by the implementation of JDK 7 – the remaining puzzle pieces lacking from that implementation, which should be available to the broad audience in late 2013, are to improve and boost the language in three particular directions:

productivity
performance
modularity

So from next year, Java will run everywhere (mobile, cloud, desktop, server etc.), but in an improved manner. In what follows I will provide a short overview of what to expect from 2013 – just in time for New Year’s resolutions – and afterwards I will focus mainly on the productivity side, with emphasis on project lambda and how its introduction will affect the way we code.

Productivity

In regard to productivity, JDK 8 targets two main areas:
– collections – a more facile way to interact with collections through literal extensions brought to the language
– annotations – enhanced support for annotations, allowing them to be written in contexts where they are currently illegal (e.g. primitives)

Performance

The addition of the Fork/Join framework to JDK 7 was the first step that Java took in the direction of multicore CPUs. JDK 8 takes this road even further by bringing closures’ support to Java (lambda expressions, that is). Probably the most affected part of Java will be the collections part: the closures combined with the newly added interfaces and functionalities push the Java containers to the next level. Besides the more readable and shorter code to be written, by providing the collections a lambda expression that is executed internally, the platform can take advantage of multicore processors.
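The internal-iteration point from the Performance section can be made concrete. The sketch below uses the stream API as it eventually shipped in JDK 8 (the exact names were still in flux in the pre-release builds); because the library, not the caller, drives the loop, it is free to split the work across cores:

```java
import java.util.stream.LongStream;

public class ParallelSum {

    static long sum() {
        // Internal iteration: we hand the library a pipeline and it decides
        // how to iterate - here, in parallel across the available cores.
        return LongStream.rangeClosed(1, 1_000_000).parallel().sum();
    }

    public static void main(String[] args) {
        System.out.println(sum()); // 500000500000
    }
}
```

The caller never writes a loop, so switching between sequential and parallel execution is a one-call change (`parallel()`), which is exactly the kind of optimization external iteration rules out.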
Modularity

One of the most interesting pieces for the community was project jigsaw: ‘The goal of this Project is to design and implement a standard module system for the Java SE Platform, and to apply that system to the Platform itself and to the JDK.’ I am using the past tense because, for those of us that were hoping to get rid of classpaths and classloaders, we have to postpone our excitement until Java 9, as project jigsaw was postponed to that release. To have a clearer picture, here is the remaining Java roadmap for 2013:

2013/01/31 M6 Feature Complete
2013/02/21 M7 Developer Preview
2013/07/05 M8 Final Release Candidate
2013/09/09 GA General Availability

Besides project jigsaw, another big and exciting change that will come in this version is the support for closures. Provided through the help of lambda expressions, they will improve key points of the JDK.

Lambdas

Getting started

First of all one should get a lambda-enabled SDK. There are two ways to obtain one:
* the one intended for the brave ones: build it from the sources
* the convenient one: download an already compiled version of the SDK

Initially I started with building it from the sources, but due to the lack of time and too many warnings related to environment variables, I opted for the lazy approach and took the already existing JDK. The other important tool is a text editor to write the code. As it happened until now, typically first came the JDK release and after a period of time an enabled IDE came out. This time it is different, maybe also due to the transparency and the broad availability of the SDK through OpenJDK. Some days ago the first Java 8 enabled IDE was released by JetBrains, so IntelliJ IDEA version 12 is the first IDE to provide support for JDK 8. For testing purposes I used IntelliJ 12 Community Edition together with JDK 8 b68, on a Windows 7, x64 machine. For those of you that prefer NetBeans, a nightly build with lambda support is available for download.

Adjusting to the appropriate mindset

Before starting to write improved and cleaner code using the newly provided features, one must get a grasp on a couple of new concepts – I needed to, anyway.

What is a lambda expression?

The easiest way to see a lambda expression is just like a method: ‘it provides a list of formal parameters and a body – an expression or block – expressed in terms of those parameters’. The parameters of a lambda expression can be either declared or inferred; when the formal parameters have inferred types, these types are derived from the functional interface type targeted by the lambda expression. From the point of view of the returned value, a lambda expression can be void-compatible – it doesn’t return anything – or value-compatible – every execution path returns a value. Examples of lambda expressions:

(a) (int a, int b) -> a + b

(b) (int a, int b) -> {
        if (a > b) {
            return a;
        } else if (a == b) {
            return a * b;
        } else {
            return b;
        }
    }

What is a functional interface?

A functional interface is an interface that contains just one abstract method, and hence represents a single method contract. In some situations, the single method may have the form of multiple methods with override-equivalent signatures; in this case all the methods represent a single method. Besides the typical way of creating an interface instance by creating and instantiating a class, functional interface instances can also be created by usage of lambda expressions, method or constructor references.

Example of a custom-built functional interface:

public interface FuncInterface {
    public void invoke(String s1, String s2);
}

Examples of functional interfaces from the Java API:

java.lang.Comparable
java.lang.Runnable
java.util.concurrent.Callable
java.awt.event.ActionListener

So let’s see how the starting of a thread might change in the future:

OLD WAY:

new Thread(new Runnable() {
    @Override
    public void run() {
        for (int i = 0; i < 9; i++) {
            System.out.println(String.format("Message #%d from inside the thread!", i));
        }
    }
}).start();

NEW WAY:

new Thread(() -> {
    for (int i = 0; i < 9; i++) {
        System.out.println(String.format("Message #%d from inside the thread!", i));
    }
}).start();

Even if I haven’t written any Java Swing or AWT related functionality for some time, I have to admit that lambdas will give a breath of fresh air to Swing developers. Action listener addition:

JButton button = new JButton("Click");

// NEW WAY:
button.addActionListener((e) -> {
    System.out.println("The button was clicked!");
});

// OLD WAY:
button.addActionListener(new ActionListener() {
    @Override
    public void actionPerformed(ActionEvent e) {
        System.out.println("The button was clicked using old fashion code!");
    }
});

Who/What is SAM?

SAM stands for Single Abstract Method, so to cut some corners we can say that SAM == functional interface. Even if in the initial specification abstract classes with only one abstract method were also considered SAM types, some people found/guessed the reason why they were dropped.

Method/Constructor referencing

The lambdas sound all nice, but the need for a functional interface is to some extent restrictive – does this mean that I can use only interfaces that contain a single abstract method? Not really – JDK 8 provides an aliasing mechanism that allows ‘extraction’ of methods from classes or objects. This can be done by using the newly added :: operator. It can be applied on classes – for extraction of static methods – or on objects – for extraction of instance methods. The same operator can be used for constructors also. Referencing:

interface ConstructorReference<T> {
    T constructor();
}

interface MethodReference {
    void anotherMethod(String input);
}

public class ConstructorClass {
    String value;

    public ConstructorClass() {
        value = "default";
    }

    public static void method(String input) {
        System.out.println(input);
    }

    public void nextMethod(String input) {
        // operations
    }

    public static void main(String... args) {
        // constructor reference
        ConstructorReference<ConstructorClass> reference = ConstructorClass::new;
        ConstructorClass cc = reference.constructor();

        // static method reference
        MethodReference mr = ConstructorClass::method;

        // object method reference
        MethodReference mr2 = cc::nextMethod;

        System.out.println(cc.value);
    }
}

Default methods in interfaces

This means that from version 8, Java interfaces can contain method bodies, so to put it simply, Java will support multiple inheritance of behaviour without the headaches that usually come with it. Also, by providing default implementations for interface methods, one can ensure that adding a new method will not create chaos in the implementing classes. JDK 8 added default methods to interfaces like java.util.Collection and java.util.Iterator, and through this provided a mechanism to better use lambdas where they are really needed. Notable interfaces added:

java.util.stream.Streamable
java.util.stream.Stream

Improved collections’ interaction

In my opinion all the changes that come with project lambda are great additions to the language that will align it with current-day standards and make it simpler and leaner, but probably the change with the biggest productivity impact and the biggest cool + wow effect is definitely the revamping of the collections framework. No, there is no Collections 2 framework – we still have to cope with type erasure for now – but Java will make another important shift: from external to internal iteration. By doing so, it gives the developer a mechanism to filter and aggregate collections in an elegant manner and, besides this, to push for more efficiency: by providing a lambda expression that is executed internally, multicore processors can be used to their full power. Let’s consider the following scenarios:

a. Considering a list of strings, select all of them that are written in uppercase. How would this be written?

OLD WAY:

//.....
List<String> inputList = new LinkedList<>();
List<String> upper = new LinkedList<>();

// add elements

for (String currentValue : inputList) {
    if (currentValue != null && currentValue.matches("[A-Z0-9]*")) {
        upper.add(currentValue);
    }
}

System.out.println(upper);
//.....

NEW WAY:

//.....
inputList.stream().filter(x -> (x != null && x.matches("[A-Z0-9]*"))).into(upper);

b. Consider that you would like to change all the extracted characters to lowercase. Using the JDK 8 way this would look like this:

// .....
inputList.stream().filter(x -> (x != null && x.matches("[A-Z0-9]*"))).map(String::toLowerCase).into(upper);

c. And how about finding out the number of characters from the selected collection?

// .....
int sumX = inputList.stream().filter(x -> (x != null && x.matches("[A-Z0-9]*"))).map(String::length).reduce(0, Integer::sum);

Used methods:

default Stream<E> stream() // java.util.Collection
Stream<T> filter(Predicate<? super T> predicate) // java.util.stream.Stream
IntStream map(IntFunction mapper) // java.util.stream.Stream

d. What if I would like to take each element from a collection and print it?

// OLD WAY:
for (String current : list) {
    System.out.println(current);
}

// NEW WAY:
list.forEach(x -> System.out.println(x));

Besides the mentioned functionality, JDK 8 has other interesting news as well, but for brevity reasons I will stop here.
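The `into(...)` calls in the scenarios above come from the pre-release b68 build; in the stream API as it finally shipped, the same three operations are written with `collect` and `mapToInt`. A small runnable sketch of the shipped equivalents (the input list contents are made up for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class StreamScenarios {

    // scenario a: keep only the strings consisting of uppercase letters/digits
    static List<String> upper(List<String> in) {
        return in.stream()
                 .filter(x -> x != null && x.matches("[A-Z0-9]*"))
                 .collect(Collectors.toList());
    }

    // scenario b: same selection, mapped to lowercase
    static List<String> upperAsLower(List<String> in) {
        return in.stream()
                 .filter(x -> x != null && x.matches("[A-Z0-9]*"))
                 .map(String::toLowerCase)
                 .collect(Collectors.toList());
    }

    // scenario c: total number of characters in the selected strings
    static int totalLength(List<String> in) {
        return in.stream()
                 .filter(x -> x != null && x.matches("[A-Z0-9]*"))
                 .mapToInt(String::length)
                 .sum();
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("ABC", "def", "X1", null, "GHI4");
        System.out.println(upper(input));        // [ABC, X1, GHI4]
        System.out.println(upperAsLower(input)); // [abc, x1, ghi4]
        System.out.println(totalLength(input));  // 9
    }
}
```

The shape of the pipelines is unchanged; only the terminal operations differ from the pre-release examples.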
More information about it can be found on the JDK 8 Project Lambda site or the webpage of JSR 337. To conclude, Java is moving forward and I personally like the direction it is heading; another point of interest will be the point in time when library developers start adopting JDK 8 too. That will for sure be interesting. Thank you for your time and patience, I wish you a merry Christmas.

Resources

Brian Goetz resource folder: http://cr.openjdk.java.net/~briangoetz/lambda
Method/constructor references: http://doanduyhai.wordpress.com/2012/07/14/java-8-lambda-in-details-part-iii-method-and-constructor-referencing   Reference: Java – far sight look at JDK 8 from our JCG partner Olimpiu Pop at the Java Advent Calendar blog. ...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.