

Inferred exceptions in Java

It’s always nice to borrow and steal concepts and ideas from other languages. Scala’s Option is one idea I really like, so I wrote an implementation in Java. It wraps an object which may or may not be null, and provides some methods for working with it in a more kinda-sorta functional way. For example, the isDefined method adds an object-oriented way of checking if a value is null. It is then used in other places, such as the getOrElse method, which basically says “give me what you’re wrapping, or a fallback if it’s null”.

```java
public T getOrElse(T fallback) {
    return isDefined() ? get() : fallback;
}
```

In practice, this would replace traditional Java, such as

```java
public void foo() {
    String s = dao.getValue();
    if (s == null) {
        s = "bar";
    }
    System.out.println(s);
}
```

with the more concise and OO

```java
public void foo() {
    Option<String> s = dao.getValue();
    System.out.println(s.getOrElse("bar"));
}
```

However, what if I want to do something other than get a fallback value – say, throw an exception? More to the point, what if I want to throw a specific type of exception – that is, one specific in use but not hard-coded into Option? This requires a spot of cunning, and a splash of type inference. Because this is Java, we can start with a new factory – ExceptionFactory. This is a basic implementation that only creates exceptions constructed with a message, but you can of course expand the code as required.

```java
public interface ExceptionFactory<E extends Exception> {
    E create(String message);
}
```

Notice the <E extends Exception> – this is the key to how this works. Using the factory, we can now add a new method to Option:

```java
public <E extends Exception> T getOrThrow(ExceptionFactory<E> exceptionFactory,
                                          String message) throws E {
    if (isDefined()) {
        return get();
    } else {
        throw exceptionFactory.create(message);
    }
}
```

Again, notice the throws E – this is inferred from the exception factory. And that, believe it or not, is 90% of what it takes. The one irritation is the need to have exception factories.
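For reference, the snippets above hang off a small Option class whose full definition the article doesn’t show. Here is a minimal sketch consistent with the methods it uses (option, isDefined, get, getOrElse, getOrThrow); the exact original implementation may well differ:

```java
// Factory interface as defined in the article.
interface ExceptionFactory<E extends Exception> {
    E create(String message);
}

// Minimal sketch of the Option wrapper described above.
class Option<T> {
    private final T value;

    private Option(T value) {
        this.value = value;
    }

    // Static factory, as used in the article's tests: Option.option("foo")
    static <T> Option<T> option(T value) {
        return new Option<T>(value);
    }

    boolean isDefined() {
        return value != null;
    }

    T get() {
        return value;
    }

    T getOrElse(T fallback) {
        return isDefined() ? get() : fallback;
    }

    <E extends Exception> T getOrThrow(ExceptionFactory<E> exceptionFactory,
                                       String message) throws E {
        if (isDefined()) {
            return get();
        }
        throw exceptionFactory.create(message);
    }
}

class OptionDemo {
    public static void main(String[] args) {
        System.out.println(Option.option("foo").getOrElse("bar"));          // prints "foo"
        System.out.println(Option.option((String) null).getOrElse("bar"));  // prints "bar"
    }
}
```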
If you can stomach this, you’re all set. Let’s define a couple of custom exceptions to see this in action. First, ExceptionA:

```java
public class ExceptionA extends Exception {
    public ExceptionA(String message) {
        super(message);
    }

    public static ExceptionFactory<ExceptionA> factory() {
        return new ExceptionFactory<ExceptionA>() {
            @Override
            public ExceptionA create(String message) {
                return new ExceptionA(message);
            }
        };
    }
}
```

And the suspiciously similar ExceptionB:

```java
public class ExceptionB extends Exception {
    public ExceptionB(String message) {
        super(message);
    }

    public static ExceptionFactory<ExceptionB> factory() {
        return new ExceptionFactory<ExceptionB>() {
            @Override
            public ExceptionB create(String message) {
                return new ExceptionB(message);
            }
        };
    }
}
```

And finally, throw it all together:

```java
public class GenericExceptionTest {

    @Test(expected = ExceptionA.class)
    public void exceptionA_throw() throws ExceptionA {
        Option.option(null).getOrThrow(ExceptionA.factory(), "Some message pertinent to the situation");
    }

    @Test
    public void exceptionA_noThrow() throws ExceptionA {
        String s = Option.option("foo").getOrThrow(ExceptionA.factory(), "Some message pertinent to the situation");
        Assert.assertEquals("foo", s);
    }

    @Test(expected = ExceptionB.class)
    public void exceptionB_throw() throws ExceptionB {
        Option.option(null).getOrThrow(ExceptionB.factory(), "Some message pertinent to the situation");
    }

    @Test
    public void exceptionB_noThrow() throws ExceptionB {
        String s = Option.option("foo").getOrThrow(ExceptionB.factory(), "Some message pertinent to the situation");
        Assert.assertEquals("foo", s);
    }
}
```

The important thing to notice is that the exception declared in each test method’s signature is specific – it’s not a common ancestor (Exception or Throwable). This means you can now use Options in your DAO layer, your service layer, wherever, and throw specific exceptions where and how you need.
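A side note not in the original article: on Java 8 and later, the factory boilerplate the author finds irritating largely disappears, because ExceptionFactory has a single abstract method and a constructor reference can serve as the factory. A minimal standalone sketch (MissingValueException and the static getOrThrow helper here are illustrative stand-ins for the article’s classes):

```java
// Functional interface matching the article's factory shape.
interface ExceptionFactory<E extends Exception> {
    E create(String message);
}

class MissingValueException extends Exception {
    MissingValueException(String message) {
        super(message);
    }
}

class LambdaFactoryDemo {
    // Same idea as Option.getOrThrow, reduced to a static helper for brevity.
    static <T, E extends Exception> T getOrThrow(T value,
                                                 ExceptionFactory<E> factory,
                                                 String message) throws E {
        if (value != null) {
            return value;
        }
        throw factory.create(message);
    }

    public static void main(String[] args) throws Exception {
        // A constructor reference replaces the hand-written anonymous factory:
        String s = getOrThrow("foo", MissingValueException::new, "no value");
        System.out.println(s); // prints "foo"

        try {
            getOrThrow(null, MissingValueException::new, "no value");
        } catch (MissingValueException e) {
            System.out.println(e.getMessage()); // prints "no value"
        }
    }
}
```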
Download source: You can get the source code and tests from here – genex

Sidenote: One other interesting thing that came out of writing this was the observation that it’s possible to do this:

```java
public void foo() {
    throw null;
}

public void bar() {
    try {
        foo();
    } catch (NullPointerException e) {
        // ...
    }
}
```

It goes without saying that this is not a good idea.

Reference: Inferred exceptions in Java from our JCG partner Steve Chaloner at the Objectify blog.

Java Code Geeks Rebranded

Hello all, and happy new year! During the past few weeks you might have noticed some changes here at Java Code Geeks. We have recently finished rebranding and restyling our site! We have upgraded our infrastructure, introduced a new layout and adopted new logos. We believe the site now has a cleaner look and feel and is much smoother.

Classic Mistakes in Software Development and Maintenance

…the only difference between experienced and inexperienced developers is that the experienced ones realize when they’re making mistakes.
Jeff Atwood, Escaping from Gilligan’s Island

An important part of risk management, and of responsible management in general, is making sure that you aren’t doing anything obviously stupid. Steve McConnell’s list of Classic Mistakes is a place to start: a list of common basic mistakes in developing software and in managing development work, mistakes that are made so often, by so many people, that we all need to be aware of them.

McConnell originally created this list in 1996 for his book Rapid Development (still one of the best books on managing software development). The original list of 36 mistakes was updated in 2008 to a total of 42 common mistakes, based on a survey of more than 500 developers and managers. The mistakes with the highest impact, the ones most likely to lead to failure, are:

1. Unrealistic expectations
2. Weak personnel
3. Overly optimistic schedules
4. Wishful thinking
5. Shortchanged QA
6. Inadequate design
7. Lack of project sponsorship
8. Confusing estimates with targets
9. Excessive multi-tasking
10. Lack of user involvement

Most of the mistakes listed have not changed since 1996 (and were probably well known long before that). Either they’re fundamental, or as an industry we just aren’t learning, or we don’t care. Or we can’t find the time or secure a mandate to do things right, because of the relentless focus on short-term results:

Stakeholders won’t naturally take a long-term view: they tend to minimize the often extreme down-the-road headaches that result from the cutting of corners necessitated by the rush, rush, rush mentality. They’ll drive the car without ever changing the oil.
Peter Kretzman, Software development’s classic mistakes and the role of the CTO/CIO

The second most severe mistake that a development organization can make is to staff the team with weak personnel: hiring fast or cheap rather than holding out for people who have more experience and better skills, but who cost more. Although the impact of making this mistake is usually severe, it happens in only around half of projects – most companies aren’t stupid enough to staff a development team with weak developers, at least not on a big, high-profile project.

Classic Mistakes in Software Maintenance

But a lot of companies staff maintenance teams this way, with noobs and maybe a couple of burned-out old-timers who are putting in their time and willing to deal with the demands of maintenance until they retire.

You get stuck in maintenance only if you are not good enough to work on new projects. After spending millions of dollars and many developer-years of effort on creating an application, the project is entrusted to the care of the lowest of the low. Crazy!
Pete McBreen, Software Craftsmanship

Capers Jones (Geriatric Issues of Ageing Software 2007, Estimating Software Costs 2008) has found that staffing a maintenance team with inexperienced people destroys productivity and is one of the worst practices that any organization can follow:

Worst Practice | Effect on Productivity
Not identifying and cleaning up error-prone code – the 20% of code that contains 80% of bugs | -50%
Code with embedded data and hard-coded variables – which contributes to “mass update” problems when this data changes | -45%
Staffing maintenance teams with inexperienced people | -40%
High complexity code that is hard to understand and change (often the same code that is error-prone) | -30%
Lack of good tools for source code navigation and test coverage | -28%
Inefficient or nonexistent change control methods | -27%

Many of these mistakes come down to not recognizing and not dealing with basic code quality and technical debt issues: figuring out what code is causing you the most trouble and cleaning it up. The rest are basic, obvious management issues. Keep the people who built the system, who understand it and know how and why it works, working on it as long as you can. Make it worth their while to stay, give them meaningful things to work on, and make sure that they have good tools to work with. Find ways for them to work together efficiently, with each other, with other developers, with operations and with the customer. These simple, stupid mistakes add up over time to huge costs, when you consider that maintenance makes up between 40% and 80% of total software costs. Like the classic mistakes in new development, mistakes in maintenance are obvious and fixable. We know we shouldn’t do these things, we know what’s going to happen, and yet we keep doing them, over and again, and we’re surprised when things fail. Why?

Reference: Classic Mistakes in Software Development and Maintenance from our JCG partner Jim Bird at the Building Real Software blog.

Significant Software Development Developments of 2012

I have written before (2007, 2008, 2009, 2010, 2011) on my biased perspective of the most significant developments in software development for that year. This post is the 2012 version, with all my biases and skewed perspectives freely admitted.

10. Groovy 2.0

Groovy 2.0 has been an important version for Groovy. Groovy 2’s arguably most notable new features are its static type checking and static compilation capabilities.

9. Perl Turns 25

Perl celebrated its 25th anniversary in 2012. Love it or loathe it, Perl has definitely become the predominant scripting language, especially in non-Windows environments. Perl is not my favorite (I’d rather use Groovy, Python, or Ruby), but I find myself needing to use it at times, usually because I’m modifying or using an existing script or set of scripts already written in Perl. Happy Birthday, Perl!

8. Git and GitHub

Git is the trendy choice now in version control, and GitHub is equally trendy for hosting code. The post Why Would Anyone use Git in their Enterprise? states, ‘Git has a cult-like following in the development community.’ The book Pro Git (2009) is freely available for reading online and can be downloaded as a PDF, mobi, or ePub electronic book.

7. NoSQL and Polyglot Persistence

The NoSQL concept seems to be maturing, moving from unabated hype and hyperbole to an understanding of when it works well and when it doesn’t. In 7 hard truths about the NoSQL revolution, Peter Wayner writes: ‘NoSQL systems are far from a perfect fit and often rub the wrong way. The smartest developers knew this from the beginning. … the smart NoSQL developers simply noted that NoSQL stood for “Not Only SQL.” If the masses misinterpreted the acronym, that was their problem.’ Martin Fowler’s nosql page states: ‘The rise of NoSQL databases marks the end of the era of relational database dominance. But NoSQL databases will not become the new dominators. Relational will still be popular, and used in the majority of situations.
They, however, will no longer be the automatic choice.’ With this, Fowler introduced the concept of polyglot persistence (which he mentions was previously coined by Scott Leberknight in 2008) and explicitly compared it to the concept of polyglot programming. If we as a software development community believe that the advantages of using multiple languages in the same application (polyglot programming) are worth the costs, then it follows that we might also determine that the advantages of using multiple data stores within the same application (polyglot persistence) are worth the costs of doing so.

6. Mobile Development

Mobile development continued its rapid rise in 2012. The December 2012 write-up on the Tiobe Index states that Objective-C is likely to be the language of the year again in 2012 due to its rapid rise (third in December 2012, behind only C and Java and ahead of C++ and C#). The writers of that summary conclude about language ratings on the index, ‘In fact it seems that if you are not in the mobile phone market you are losing ground.’ Suzanne Kattau’s post Mobile development in the year 2012 succinctly summarizes the changes in popular mobile device platforms and operating systems in 2012.

5. Scala (and Typesafe Stack 2.0 with Play and Akka)

I have highlighted Scala multiple times in these year-end review posts, but this is my highest rating of Scala yet, because Scala had a tremendous year in 2012. On 23 August 2012, Cameron McKenzie asked, ‘Is Scala the new Spring framework?’ An answer to that question might be implied by the 1 October 2012 announcement that Spring founder Rod Johnson had joined Typesafe’s Board of Directors (Johnson left SpringSource in July). Scala sits at the intersection of again-trendy functional programming and widely popular, proven object-oriented programming, and it is part of the increasingly popular movement toward languages other than Java on the JVM. It’s not difficult to see why it had a big year in 2012.
The Typesafe Blog features a post called Why Learn Scala in 2013? that begins with the statement, ‘2012 was a big year for the Scala programming language – with monumental releases, adoption by major enterprises and social sites alike and a Scala Days conference that knocked the socks off years past.’ The post then goes on to list reasons to learn Scala in 2013, with liberal references to other recent positive posts regarding Scala. Ted Neward has predicted that in 2013 ‘Typesafe (and their Scala/Akka/Play! stack) will begin to make some serious inroads into the enterprise space, and start to give Groovy/Grails a serious run for their money.’ I am not calling out Play and Akka in this post as separate significant developments of 2012, but instead lump them together with Scala as part of justifying Scala taking the #5 spot for 2012. There is no question, however, that 2012 was a big year for Akka and Play. The year 2012 saw the release of Typesafe Stack 2.0 along with Play 2.0 and Akka 2.0.

4. Big Data

Big Data was big in 2012. AOL Government named Big Data its Best of 2012 in the technology category. Geoff Nunberg argues that ‘‘Big Data’ Should Be The Word Of The Year.’ Interest in the statistical computing language R has (not surprisingly) risen along with the surging interest in Big Data.

3. HTML5

2012 was another big year for HTML5. Although HTML5 continued to be evangelized as a standards-friendly favorite of developers, some hard truths (such as performance versus native code) about the current state of HTML5 also became more readily obvious. That being stated, I think HTML5 still has a very positive future ahead of it. Although it has certainly been over-hyped, with emphasis on what it might one day become rather than what it is today, it would also be foolhardy to ignore it or underestimate its usefulness. Two articles that remind us of this are FT exec: HTML5 is not dead and HTML5 myth busting.
The article ‘HTML5 is ready’ say creators of mobile HTML5 Facebook clone talks about attempts to prove that HTML5 is ready today from a performance standpoint.

2. Security

Awareness of security holes, risks, and vulnerabilities has been increasing for the past several years, largely due to high-profile incidents of lost sensitive data and new legal requirements. However, 2012 seemed to be a bigger year than most in terms of increasing awareness of security requirements and expectations in software architecture and design. Java seemed particularly hard hit by bad security news in 2012. Articles and posts that provide examples of this include 1 Billion computers at risk from Java exploit, Oracle’s Java Security Woes Mount As Researchers Spot A Bug In Its Critical Bug Fix, Java Vulnerability Affects 1 Billion Plug-ins, Another Week, Another Java Security Issue Found, Oracle and Apple Struggle to Deal with Java Security Issues, and Java still has a crucial role to play—despite security risks. The article Oracle to stop patching Java 6 in February 2013 suggests that users of Java should upgrade to Java 7 before February 2013, when Oracle will supply the last publicly available security patch for Java SE 6 outside of an Oracle support plan. Another article is called Oracle’s Java security update lacking, experts say.

1. Cloud Computing

It seemed like everybody wanted a cloud in 2012, even if they didn’t really need one. Archana Venkatraman put it this way: ‘2012 was the year cloud computing hit the mainstream.’ Steve Cerocke stated, ‘2012 will go down as the year of cloud computing.’ Other articles and posts on the biggest cloud stories of 2012 include The 10 Biggest Cloud Stories Of 2012 and Top five cloud computing news stories in 2012. Cloud computing is in the sweet spot many trendy technologies and approaches go through, when enthusiasm is much greater than negativism.
Charles Babcock’s Cloud Computing: Best And Worst News Of 2012 is more interesting to me than many cloud-focused publications because it highlights both the good and the bad of cloud computing in 2012.

Honorable Mention

I couldn’t fit everything that interested me about software development in 2012 into the Top Ten. Here are some others that barely missed my cut.

C

As mentioned earlier, the C programming language appears headed for #1 on the Tiobe Index for 2012. One of programming’s most senior languages is also one of its most popular. When one considers that numerous other languages are themselves built on C, and that many languages strive for C-like syntax, the power and influence of C is better appreciated. C’s popularity has remained strong for years, and 2012 was another very strong year for C. Another piece of evidence in C’s favor is the late 2012 O’Reilly publication of Ben Klemens’s book 21st Century C: C Tips from the New School. The author discusses this book and C today in the O’Reilly post New school C. Although I have not written C code for several years now, I’ve always had a fondness for the language. It was the language I used predominantly in college (with Pascal and C++ being the other languages I used to a lesser degree), and I wrote the code for my electrical engineering capstone project in C. I remember (now fondly) spending almost an entire Saturday on one of my first C assignments fighting a bug, only to realize that it was not working because I was using the assignment operator (=) rather than the equality operator (==). This lesson served me well as I learned other languages, both in terms of what is important to differentiate and in terms of how to better debug programs even when a debugger is not readily available. I think my C experience grounded me well for later professional development with C++ and Java.
Gradle 1.x

Using an expressive programming language rather than XML or archaic make syntax to build software seems like an obviously beneficial thing to do. However, make, Ant, and Maven have continued to dominate this area, though Groovy-based Gradle shows early signs of providing the alternative we’ve all been looking for. Gradle still has a long way to go in terms of acceptance and popularity, and many other build systems with some of Gradle’s ideals have failed, but Gradle did seem to capture significant attention in 2012 and can hopefully build upon that in future years. Gradle 1.0 was formally released in June 2012 and Gradle 1.3 was released in November 2012.

DevOps

Among others, Scott Ambler predicted that ‘DevOps will become the new IT buzzword’ in 2012. If it was not ‘the’ buzzword of 2012, it was not for lack of trying on the DevOps evangelists’ part. The DevOps movement continued to gain momentum in 2012. The DZone DevOps Zone sees one or more posts on the subject added each day. The only reason this did not make it into my Top Ten is that I still don’t hear the ‘typical everyday coder’ talking about it when I am away from the blogosphere, talking to in-the-trenches developers. Ambler’s concluding paragraph begins with this prediction: ‘Granted, there’s likely going to be a lot of talk and little action within most organizations due to the actual scope of DevOps, but a few years from now, we’ll look back on 2012 as the year when DevOps really started to take off.’ Only time will tell. There continue to be posts trying to explain what exactly DevOps is.

Departures of Noteworthy Development Personnel

There were some separations of key developers from their long-time organizations in 2012. As mentioned previously, Spring Framework founder Rod Johnson left VMware/SpringSource (and ultimately ended up on the Board of Directors of Scala company Typesafe).
Josh Bloch, perhaps still best known for his work at Sun on the JDK and for writing Effective Java, left Google in 2012 after working there for eight years.

Resurgence of Widely Popular but Aged Java Frameworks

Two very popular, long-in-the-tooth Java-based frameworks saw a resurgence in 2012. Tomek Kaczanowski recently posted JUnit Strikes Back, in which he cites several pieces of evidence indicating a resurgence of JUnit, arguably the most commonly used testing framework in Java (and, in many ways, the inspiration for numerous other xUnit-style unit testing frameworks). Christian Grobmeier’s recent post The new log4j 2.0 talks about many benefits of Log4j 2 and how it can be used with more recently popular logging frameworks such as SLF4J and even Apache Commons Logging.

Electronic Books (E-books)

Electronic books (e-books) are becoming widely popular in general, and specifically among software development books. This is not surprising, because e-books provide many general benefits but also have benefits particular to software development. In particular, it is nice to be able to search electronically for terms (overcoming the poor-index problem common to many printed programming books). Other advantages include the ability to have the book on laptops, mobile devices, e-readers, and other portable devices. This not only makes a particular book readily available, but makes it easy to carry many books on many different technical subjects when travelling. It is also less likely for an electronic book to be ‘borrowed’ unknowingly by others or to turn up missing. Perhaps the biggest advantage of electronic books is cost. It is fairly well known that technical books are generally not big profit makers for publishers. However, with printing and distribution costs being a significant portion of traditional publication costs, e-books make it easier for publishers to price these books lower than the printed equivalent.
The reduced cost to the publisher of an electronic book can be passed on to the consumer. I recently took advantage of an offer from Packt Publishing to purchase a total of eight of their books as electronic books for a total price of $40. Given that a single printed programming book can cost $40 or more, this was a bargain. I have also blogged on other good deals on e-books provided by other technical publishers such as O’Reilly and Manning. I have especially appreciated the Manning Early Access Program (MEAP). This program is only viable thanks to electronic books, and it allows readers to read a book as it is developed. Because technologies change so quickly, it is often helpful to get access to even portions of these books as quickly as possible. Finally, another advantage of e-books is their ultimate disposal. In reality, they take up such a small portion of even an old-fashioned CD or DVD that I can usually dig up a copy if I want to. However, I can remove them from my electronic devices when I no longer need them and need the space. There are no environmental or logistical concerns about their disposal. This is important because these books do tend to get outdated quickly, and sometimes an outdated programming book is worse than no book at all, because it can be very misleading.

PhoneGap / Cordova

Given the popularity of mobile development and HTML5 in 2012, it’s not surprising that PhoneGap and Cordova had big years in 2011/2012. In the web-versus-native debate, one of the advantages of web is the portability of web apps across multiple devices. The PhoneGap/Cordova approach brings some of this benefit for writing code but maintains some of the performance advantages of running native applications.

Objective-C

Objective-C looks set to win the Tiobe Index language of the year again in 2012.
This is yet another indicator of mobile development prevalence, as Objective-C’s popularity is almost exclusively tied to iPhone/iPad development, though Objective-C’s history is closely coupled with the NeXT workstations, and the language has even been called an inspiration for Java (as quoted by Patrick Naughton) rather than C++.

Kotlin

For several years now, it has been trendy for the ‘cool kids’ to post feedback on articles or blog posts about Java features, proclaiming that Groovy or Scala does anything covered in that post better than Java does. Many of the ‘cool kids’ (or maybe different ‘cool kids’ with the same modus operandi) now seem to be doing the same on Scala blog posts and articles, advocating the advantages of Kotlin over Scala. As Scala and Groovy still lag Java in terms of overall popularity and usage, Kotlin lags both Groovy and Scala in terms of adoption at this point. However, there are definitely some characteristics in Kotlin’s favor. First, the Kotlin web page describes the language as ‘a statically typed programming language that compiles to JVM byte codes and JavaScript.’ I can definitely see how ‘statically typed’ and ‘compiles … to JavaScript’ would be endearing to developers who must write JavaScript but prefer static compilation. Andrey Breslav, Kotlin Project Lead, believes that static languages compiling to ‘typed JavaScript’ will be a major development of 2013, and he cites Dart and TypeScript as other examples of this. Being able to run on the JVM can also be an advantage, though this is no different from Groovy or Scala. One major positive for Kotlin is its sponsor: JetBrains. It is likely that their IDE, IntelliJ IDEA, will provide elegant support for the Kotlin language. This is also a sponsor/owner with the resources (monetary and people) to improve its chances of success.
Because JetBrains is ‘planning to roll out a beta of Kotlin and start using it extensively in production,’ they are likely to continue investing resources in this new language. There was no way I could justify putting Kotlin in my top ten for 2012, but once it is released for production use, Kotlin may make another year’s Top Ten list if it is widely adopted.

Ceylon

Kotlin isn’t the only up-and-coming static language for the JVM; Ceylon is also relatively young in this space. I wrote about the JavaOne 2012 presentation Introduction to Ceylon, and that post provides brief overview and description information. The first milestone of the Ceylon IDE (an Eclipse plug-in) was released in early 2012 and was followed in March by the release of Ceylon M2 (‘Minitel’). This was followed by the Ceylon M3 (‘V2000’) release in June and Ceylon M4 (‘Analytical Engine’) in October. The newer JVM-friendly languages with seemingly the best chances of long-term viability are those with strong sponsors: Groovy has SpringSource, Scala has Typesafe, Kotlin has JetBrains, and Ceylon has Red Hat.

End of the Oracle/Google Android Lawsuit

The lawsuit between Oracle and Google over Android seems to have, for the most part, concluded in 2012. There still seems to be bad blood between the two companies, but the settlement probably allows for the continued success of the Android platform, and potentially for collaboration at some future point between the two companies on Java. It will be interesting to see if Google allows its employees to submit abstracts to JavaOne 2013.

Everyone a Programmer

When HTML first started to spread across universities and colleges in the early-to-mid 1990s, it seemed that everyone I knew was learning HTML. Most of us were ‘learning’ it by copying other people’s HTML source and editing it for our own use. Of course, everything then was static and fairly easy to pick up.
It probably also skewed my perspective that I was majoring in electrical engineering with an emphasis in computer science, and so was around people who had a tendency to adopt and use new technology. Perhaps for the first time since then, I have felt that there is an ever-growing interest in pushing everyone to learn how to program at a certain level. I don’t need to provide any supporting points for this, because I can instead reference Joab Jackson’s 2012: The year that coding became de rigueur. Not only does this post enumerate several examples of the debate about whether everyone should learn programming, but it also makes cool use of ‘de rigueur’ in its title.

Java

I did not include Java itself in my Top Ten in 2012. Perhaps this is an indication that I too felt that 2012 was a slow year for Java (and agree that this is not necessarily a bad thing). That being stated, Martijn Verburg has listed some ‘personal highlights of the year’ in the world of Java in 2012 in What will 2013 bring? Developers place their bets. These include the JVM’s entry into the cloud, Java/JVM community growth, OpenJDK, Java EE 7, and Mechanical Sympathy. It’s a small thing in many ways, but I think James Gosling returning to JavaOne and throwing out t-shirts was symbolic of a strengthening resurgence in an already strong Java development community.

Jelastic

Java on the cloud in general had a big year in 2012, and Jelastic had a particularly big year. Jelastic was prominently mentioned at JavaOne 2012 multiple times. Some of these mentions were due to its winning the Duke’s Choice Award for 2012. Other mentions resulted from James Gosling’s positive review of his use of Jelastic. As I blogged earlier, Gosling described himself as ‘a real Jelastic fan’ at the JavaOne 2012 Community Keynote.

Linux-based ARM Devices

Oracle announced the release of a Linux ARM JDK in 2012.
The ability to run even JavaFX on Linux ARM devices provides evidence of Oracle’s interest in supporting these devices. Given that Oracle is well known for investing in areas where the returns are often great, it follows that Oracle perhaps sees great potential in Linux ARM devices. An interesting article that looks into this is Java 8 on ARM: Oracle’s new shot against Android? One couldn’t go to a keynote presentation at JavaOne 2012 without hearing about one very trendy Linux ARM device, the Raspberry Pi. Similarly, the BeagleBoard and PandaBoard have also become very popular.

Improving Job Market for Software Developers

2012 seemed to be a good year for those with careers in software development, and this seems likely to continue. CNN Money ranked software developer #9 in its Best Jobs in America story (software architect was #3). Lauren Hepler has written that Software developers top 2013 job projection, citing a Forbes slide-based presentation. Perhaps more important than these stories are my anecdotal observations of a surging market for software developers. I have seen an uptick in the number of unsolicited employment queries from potential employers and clients. I am also seeing an increase in billboard advertising for developers, especially in areas of the United States with high concentrations of software development. This improving job market might be one of many reasons for increasing interest in programming in general.

Other Resources

There are other posts of potential interest. Katherine Slattery’s Takeaways from the Top Development News Stories of 2012 talks about the Node.js ‘hype cycle,’ open source hardware, native apps versus HTML5 apps, and the ‘‘learn to code’ craze.’ Ted Neward’s annual predictions (for 2013 in this case) and review of his prior-year (2012 in this case) predictions are an interesting read.
Conclusion 2012 was another big year in software development across many different areas and types of development. Many of the themes discussed in this post overlap and are somehow associated with mobile development, cloud computing, and greater availability of data.   Reference: Significant Software Development Developments of 2012 from our JCG partner Dustin Marx at the Inspired by Actual Events blog. ...

Couchbase 101: Create views (MapReduce) from your Java application

When you are developing a new application with Couchbase 2.0, you sometimes need to create views dynamically from your code. For example, you may need this when you are installing your application or writing tests, or when you are building frameworks and want to dynamically create views to query data. This post shows how to do it.

Prerequisites:
- Couchbase Server 2.0
- Couchbase Java Client Library 1.1.x
- Beer Sample dataset

If you are using Maven you can use the following information in your pom.xml to add the Java Client library:

<repositories>
  <repository>
    <id>couchbase</id>
    <name>Couchbase Maven Repository</name>
    <layout>default</layout>
    <url></url>
    <snapshots>
      <enabled>false</enabled>
    </snapshots>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>couchbase</groupId>
    <artifactId>couchbase-client</artifactId>
    <version>1.1.0</version>
    <type>jar</type>
  </dependency>
</dependencies>

See online at Create and Manage Views From Java. The full Maven project is available on Github.

Connect to Couchbase Cluster

The first thing to do when you want to create a view from Java is obviously to connect to the cluster.

import com.couchbase.client.CouchbaseClient;
...
List<URI> uris = new LinkedList<URI>();
CouchbaseClient client = null;
try {
    client = new CouchbaseClient(uris, "beer-sample", "");
    // put your code here
    client.shutdown();
} catch (Exception e) {
    System.err.println("Error connecting to Couchbase: " + e.getMessage());
    System.exit(0);
}
...

- Create a list of URIs to the different nodes of the cluster – lines 5-6. (In this example I am working on a single node.)
- Connect to the bucket, in our case beer-sample – line 9. You can include the password if the bucket is protected (this is not the case here, so I am sending an empty string).

If you are looking for more information about Couchbase and Java, you can read this article from DZone: Hello World with Couchbase and Java. 
Let’s now talk about Couchbase views. You use views/map-reduce functions to index and query data from Couchbase Server, based on the content of the JSON documents you store inside Couchbase. For more information about views you can look at the ‘view basics’ chapter of the Couchbase Server Manual.

Create Views from Java

Creating a view from Java is really easy: the Java Client Library contains all the classes and methods to do it. As a concrete use case we will use the application that is described in the Couchbase Java Tutorial. When you follow this tutorial, you need to manually create some views, as you can see here. In this example, we will create our map function directly in our Java code and then store it to Couchbase Server. The tutorial asks you to create the following artifacts: a view named ‘by_name’ in the design document named ‘dev_beer’ (development mode), and a map function which looks like the following:

function (doc, meta) {
  if(doc.type && doc.type == 'beer') {
    emit(, null);
  }
}

The following code allows you to do it from Java:

import com.couchbase.client.protocol.views.DesignDocument;
import com.couchbase.client.protocol.views.ViewDesign;
...
DesignDocument designDoc = new DesignDocument("dev_beer");

String viewName = "by_name";
String mapFunction =
    "function (doc, meta) {\n" +
    "  if(doc.type && doc.type == 'beer') {\n" +
    "    emit(, null);\n" +
    "  }\n" +
    "}";

ViewDesign viewDesign = new ViewDesign(viewName, mapFunction);
designDoc.getViews().add(viewDesign);
client.createDesignDoc(designDoc);
...

- Create a design document using the com.couchbase.client.protocol.views.DesignDocument class – line 4.
- Create a view using the com.couchbase.client.protocol.views.ViewDesign class with a name and the map function – line 14. 
- You can add this view to a design document – line 15.
- Finally, save the document into the cluster using the CouchbaseClient.createDesignDoc method.

If you need to use a reduce function (built-in or custom) you just need to pass it to the ViewDesign constructor as the 3rd parameter. When developing views, from Java or from any other tool/language, be sure you understand what the best practices are, and the life cycle of the index. This is why I am inviting you to take a look at the following chapters in the Couchbase documentation:

- View Writing Best Practice: for example, in the map function I do not emit any value. I only emit a key (the beer name).
- Views and Stored Data
- Development and Production Views: in the view above, I have created the view in the development environment (dev_ prefix), allowing me to test and use it on a subset of the data (cluster/index).

Using the view

First of all, the view that you just created is in ‘development mode’, and by default the Java client SDK will only access views that are in ‘production mode’. This means that when you are calling a view from your application, it will search for it in the production environment. So before connecting to the Couchbase cluster you need to set the viewmode to development. This is done using the viewmode property of the Java SDK, which can be set using one of the following methods:

- In your code, add this line before the client connects to the cluster: System.setProperty("viewmode", "development");
- At the command line: -Dviewmode=development
- In a properties file: viewmode=development

Once it is done you can call the view using the following code:

import com.couchbase.client.protocol.views.*;
...
System.setProperty("viewmode", "development"); // before the connection to Couchbase
... 
View view = client.getView("beer", "by_name");
Query query = new Query();
query.setIncludeDocs(true).setLimit(20);
query.setStale(Stale.FALSE);
ViewResponse result = client.query(view, query);
for (ViewRow row : result) {
    row.getDocument(); // deal with the document/data
}
...

This code queries the view you just created. This means Couchbase Server will generate an index based on your map function and will query the server for results. In this case, we specifically want to set a limit of 20 results and also get the most current results by setting Stale.FALSE.

- Set the viewmode to development – line 4.
- Get the view using the CouchbaseClient.getView() method – line 6. As you can see, I just use the name beer for the design document (and not dev_beer; Couchbase will know where to search since I am in development mode).
- Create a query, set a limit (20) and ask the SDK to return the document itself with setIncludeDocs(true) – line 8. The document will be returned from Couchbase Server in the most efficient way.
- Ask the system to update the index before returning the result using query.setStale(Stale.FALSE) – line 9. Once again, be careful when you use the setStale method. Just to be sure, here is the documentation about it: Index Updates and the stale Parameter.
- Execute the query – line 10.
- And use the result – lines 11-13.

Conclusion

In this article, you have learned how to:

- Create Couchbase views from Java
- Call this view from Java
- Configure development/production mode views from the Couchbase Java Client Library

This example is limited to the creation of a view; you can take a look at the other methods related to design documents and views if you want to manage your design documents: getDesignDocument(), deleteDesignDocument(), … .   Reference: Couchbase 101: Create views (MapReduce) from your Java application from our JCG partner Tugdual Grall at the Tug’s Blog blog. ...

Super Fast Tomcat Installation using FTP and Version Control

When talking about Continuous Delivery, one of the tests that both Martin Fowler and Jez Humble often mention is their ‘flame thrower’ test. It goes something like this: Jez will say ‘How long would it take you to get up and running if Martin and I went into your machine rooms armed with flame throwers and axes and started attacking your servers?’ The answer, of course, should be: ‘oh, about an hour – right after we’ve put the fire out, swept up the mess, found some new servers, plugged them in and contacted our top flight lawyers so that we can sue you for criminal damage’. Most of the time this isn’t the case, as deployment is all too often a manual process, with the guy doing the installation simply following a list of instructions written in a Word document. And what is a list of instructions? A computer program. Now, hands up everyone who likes writing Word documents. Okay, now hands up everyone who likes writing code… In your imagination, you should see a room full of people voting for writing code, so why is it that when there are opportunities for writing deployment scripts we prefer (or get lumbered with) writing Word documents? It must be more fun, productive and cost effective to write scripts that do our deployment for us in seconds rather than writing Word documents and then doing every deployment ourselves. Assuming that your machine room has been trashed, let’s consider what you need to get up and running again. For a start, you’ll need a machine with a few common things set up. I’m simplifying here; however, you’ll probably need:

- some user accounts
- a pinch of networking (knowledge of where your DNS is, etc.)
- the correct version of Java
- a Subversion (or other version control) client
- Tomcat (or other server)
- some configuration files
- your WAR file

Now, this should be as simple as getting hold of a basic drive image or virtual machine and turning it on, letting it boot up and running a deployment script. 
If you’ve ever read Jez Humble and David Farley’s book Continuous Delivery, you’ll know that one of the major points they make is that, when setting up your deployment process, you should store your configuration files in your version control system. This to me sounds like one of those really obvious and useful ideas that you only do when someone else points it out to you. This is usually taken to mean your application’s configuration files, but it should also refer to your server’s configuration files. The other thing you’ll notice about Continuous Delivery is that although it’s full of good ideas, it’s intentionally short on practical coding examples1. If you read my blog you may recall that I’ve mentioned updating most of my Tomcat’s config files at one point or another; for example, adding datasource details or an SSL configuration. With this in mind, the rest of this blog takes Jez’s and David’s sound ideas and demonstrates creating a simple install script for my Tomcat server. If you download your candidate version of Tomcat and expand the tar/zip file, you’ll notice that it has a conf directory, so the first thing to do is to add this directory to version control and delete it from the expanded Tomcat binaries. You can now check the conf files out and modify them, adding SSL, an admin user, a MySQL data source or whatever you like. Don’t forget to check them in again. The next thing to do is to put the remaining binaries directories on an FTP server2 in an accessible, safe place. What you now have is a basic setup with all the files held in two convenient places. These can now be recombined to create a functioning server. The big idea here is that although combining these two parts can easily be done manually, the best approach is to do it automatically using a simple script. 
#!/bin/sh
# echo Running Tomcat install Script

TOMCAT_VERSION=apache-tomcat-7.0.33-blog

# The FTP server holding the tomcat binaries
SERVER=<your server name>
TOMCAT_LOCATION=/Public/binaries/
SERVER_USER=<your FTP User Name>
SERVER_PASSWORD=<your FTP password>
CUT_DIRS=3

# The version control details
SVN_USER=<your Subversion username>
SVN_PASSWORD=<your Subversion Password>
SVN_URL=https:<the URL to your subversion repository>/Tomcat/apache-tomcat-7.0.33/conf

mkdir ../$TOMCAT_VERSION
pwd
echo changing directory
cd ../$TOMCAT_VERSION
pwd

wget -r -nH -nc --cut-dirs=$CUT_DIRS ftp://$SERVER_USER:$SERVER_PASSWORD@$SERVER$TOMCAT_LOCATION$TOMCAT_VERSION

echo ..
echo The directory looks like this:
ls
echo ..
echo Getting the config files from config..

svn --username=$SVN_USER --password=$SVN_PASSWORD co $SVN_URL

bin/

At this point hand over to the application deploy script. In my script you’ll see that all I do is use three simple commands. Firstly, I use wget to copy the files from my FTP server to the new server location. The second command is the svn co command, which checks out my Tomcat config files, and the final command simply starts the server. The rest of the script is all variables, comments and padding. Now, I’m guessing that one of the reasons that we write so many Word documents is for traceability, so the final trick is to let your script be the documentation by also storing it in version control. So, there you have it, a simple way of creating Tomcat server installations in seconds. Obviously the next step would be for the server install script to hand over to another script that installs your web app(s) on the server, but more on that later.   Notes 1I guess that the main reason why there are no concrete examples is that there is no fixed structure, such as the Maven directory structure, for creating servers or deploying code, as each organisation is different. Is this a good thing? 
Probably not; perhaps there should be a move to create a ‘standard server’ to match the ‘standard’ way we lay out code. 2This could also be a file server, SFTP or web server, and the Tomcat binaries could be zipped or unzipped.   Reference: Super Fast Tomcat Installation using FTP and Version Control from our JCG partner Roger Hughes at the Captain Debug’s Blog blog. ...

Software Transactional Memory (STM)

The Actor Model is based on the premise of small independent processes working in isolation, where the state can be updated only via message passing. The actors hold the state within themselves, but the asynchronous message passing means there is no guarantee that a stable view of the state can be provided to the calling components. For transactional systems like banking, where account deposits and withdrawals need to be atomic, this makes the Actor Model a bad fit. So, if your Akka applications need to implement a shared state model and provide a consensual, stable view of the state across the calling components, Software Transactional Memory (STM) provides the answer. STM provides a concurrency-control mechanism for managing access to shared memory. STM makes use of two concepts – optimism and transactions – to manage the shared concurrency control. Optimism means that we run multiple atomic blocks in parallel, assuming there will be no errors. When we are done, we check for any problems. If no problems are found, we update the state variables in the atomic block. If we find problems, we roll back and retry. Optimistic concurrency typically provides better scalability than the alternative approaches. Secondly, STM is modeled along the lines of database transaction handling. In the case of STM, the Java heap is the transactional data set, with begin/commit and rollback constructs. As the objects hold the state in memory, the transaction only implements the following characteristics – atomicity, consistency, and isolation. To manage multiple transactions running on separate threads as a single atomic block, the concept of a CommitBarrier is used. A CommitBarrier is a synchronization aid that is used as a single, common barrier point by all the transactions across multiple threads. Once the barrier is reached, all the transactions commit automatically. It is based on Java’s CountDownLatch. 
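The optimistic run-check-retry cycle described above can be sketched in plain Java using compare-and-set. This is an illustration of the concept only, not Akka's STM API: the update is computed privately against a snapshot, and the "commit" fails and retries if another thread changed the value in the meantime.

```java
import java.util.concurrent.atomic.AtomicLong;

public class OptimisticCounter {
    private final AtomicLong balance = new AtomicLong(0);

    // Run the update optimistically; if another thread changed the
    // value since we read it, compareAndSet fails and we retry.
    public long deposit(long amount) {
        while (true) {
            long current = balance.get();        // read a snapshot
            long updated = current + amount;     // compute privately
            if (balance.compareAndSet(current, updated)) {
                return updated;                  // "commit" succeeded
            }
            // conflict detected: discard the private result and retry
        }
    }

    public long get() {
        return balance.get();
    }

    public static void main(String[] args) {
        OptimisticCounter c = new OptimisticCounter();
        c.deposit(40);
        c.deposit(2);
        System.out.println(c.get()); // prints 42
    }
}
```

The retry loop never blocks other threads, which is where the scalability advantage over lock-based approaches comes from.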
Akka transactors are based on CommitBarrier, where the atomic blocks of each participating actor (member) are treated as one big, single unit. Each actor will block until everyone participating in the transaction has completed. This means that all the actions executed as part of the atomic blocks by members of the CommitBarrier will appear to occur as a single atomic action, even though the members may be spread across multiple threads. If any of the atomic blocks throws an exception or a conflict happens, all the CommitBarrier members will roll back. Akka provides a construct for coordinating transactions across actors called Coordinated, which is used to define the transaction boundary in terms of where the transaction starts, and the coordinated.coordinate() method is used to add all the members that will participate in the same transaction context.

Money transfer between two accounts

Let’s take an example and see how actors can participate in transactions. We will use the classic example of a transfer of funds between two bank accounts. We have an AccountActor that holds the account balance and account number information. It has two operations – credit (add money to the account) and debit (take money away from the account). In addition, we have the TransferActor object that holds the two AccountActor objects and then invokes the debit and credit operations on the account objects. To make sure that the money transfer happens in a synchronized way, we need to implement the following:

- In the account object, the state variable that needs to participate in the transaction should be of a transactional reference type.
- The credit and debit operations in the account object need to be atomic.
- In the transfer object, the transaction boundary needs to be defined and the account objects need to participate in the same transaction context. 
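The "debit and credit commit together or not at all" property can be illustrated without Akka at all: hold the whole shared state in one immutable snapshot and swap it with a single compare-and-set. This is a simplified, plain-Java stand-in for what the coordinated transaction achieves, not the actual Akka transactor code from the book:

```java
import java.util.concurrent.atomic.AtomicReference;

public class AtomicTransfer {
    // Immutable snapshot of both balances; always replaced as a unit.
    static final class Balances {
        final long from, to;
        Balances(long from, long to) { this.from = from; = to; }
    }

    private final AtomicReference<Balances> state =
            new AtomicReference<>(new Balances(100, 0));

    // The debit and the credit are committed together or not at all.
    public boolean transfer(long amount) {
        while (true) {
            Balances cur = state.get();
            if (cur.from < amount) {
                return false;                    // insufficient funds: abort
            }
            Balances next = new Balances(cur.from - amount, + amount);
            if (state.compareAndSet(cur, next)) {
                return true;                     // atomic commit of both sides
            }
            // conflict: another transfer won the race, retry
        }
    }

    public static void main(String[] args) {
        AtomicTransfer t = new AtomicTransfer();
        boolean ok = t.transfer(30);
        Balances b = t.state.get();
        System.out.println(ok + " " + b.from + " " +; // true 70 30
    }
}
```

No observer can ever see money debited from one account but not yet credited to the other, which is exactly the stable view of state the STM transaction guarantees.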
In addition, we define the supervisor policy in TransferActor and BankActor to handle the transaction exceptions. Parts of this post are excerpts from the book Akka Essentials. For more details on the example, please refer to the book.   Reference: Software Transactional Memory (STM) from our JCG partner Munish K Gupta at the Akka Essentials blog. ...

Executing a Command Line Executable From Java

In this post we’ll deal with a common need for Java developers: executing and managing an external process from within Java. Since this task is quite common, we set out to find a Java library to help us accomplish it. The requirements from such a library are:

1. Execute the process asynchronously.
2. Ability to abort the process execution.
3. Ability to wait for process completion.
4. On process output notifications.
5. Ability to kill the process in case it hangs.
6. Get the process exit code.

The native JDK does not help much. Fortunately, we have Apache Commons Exec. Indeed it is much easier, but still not as straightforward as we hoped. We wrote a small wrapper on top of it. Here is the method signature we expose:

public static Future<Long> runProcess(final CommandLine commandline, final ProcessExecutorHandler handler, final long watchdogTimeout) throws IOException;

It returns a Future<Long>. This covers requirements 1, 2, 3 and 6. An instance of ProcessExecutorHandler is passed to the function. This instance is actually a listener for any process output. This covers requirement 4. Last but not least, you supply a timeout. If the process execution takes more than said timeout, you assume the process hung and end it. In that case the error code returned by the process will be -999. That’s it! Here is the method implementation. Enjoy. 
import org.apache.commons.exec.*;
import org.apache.commons.exec.Executor;
import;
import java.util.concurrent.*;

public class ProcessExecutor {

    public static final Long WATCHDOG_EXIST_VALUE = -999L;

    public static Future<Long> runProcess(final CommandLine commandline, final ProcessExecutorHandler handler, final long watchdogTimeout) throws IOException {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        return executor.submit(new ProcessCallable(watchdogTimeout, handler, commandline));
    }

    private static class ProcessCallable implements Callable<Long> {

        private long watchdogTimeout;
        private ProcessExecutorHandler handler;
        private CommandLine commandline;

        private ProcessCallable(long watchdogTimeout, ProcessExecutorHandler handler, CommandLine commandline) {
            this.watchdogTimeout = watchdogTimeout;
            this.handler = handler;
            this.commandline = commandline;
        }

        @Override
        public Long call() throws Exception {
            Executor executor = new DefaultExecutor();
            executor.setProcessDestroyer(new ShutdownHookProcessDestroyer());
            ExecuteWatchdog watchDog = new ExecuteWatchdog(watchdogTimeout);
            executor.setWatchdog(watchDog);
            executor.setStreamHandler(new PumpStreamHandler(new MyLogOutputStream(handler, true), new MyLogOutputStream(handler, false)));
            Long exitValue;
            try {
                exitValue = new Long(executor.execute(commandline));
            } catch (ExecuteException e) {
                exitValue = new Long(e.getExitValue());
            }
            if (watchDog.killedProcess()) {
                exitValue = WATCHDOG_EXIST_VALUE;
            }
            return exitValue;
        }
    }

    private static class MyLogOutputStream extends LogOutputStream {

        private ProcessExecutorHandler handler;
        private boolean forwardToStandardOutput;

        private MyLogOutputStream(ProcessExecutorHandler handler, boolean forwardToStandardOutput) {
            this.handler = handler;
            this.forwardToStandardOutput = forwardToStandardOutput;
        }

        @Override
        protected void processLine(String line, int level) {
            if (forwardToStandardOutput) {
                handler.onStandardOutput(line);
            } else {
                handler.onStandardError(line);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        CommandLine cl = CommandLine.parse("test.bat");
        Future<Long> exitValue = runProcess(cl, new ProcessExecutorHandler() {
            @Override
            public void onStandardOutput(String msg) {
                System.out.println("output msg = " + msg);
            }

            @Override
            public void onStandardError(String msg) {
                System.out.println("error msg = " + msg);
            }
        }, 1);
        try {
            Long aVoid = exitValue.get();
            System.out.println("Finished with " + aVoid);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }
}

Reference: Executing a Command Line Executable From Java from our JCG partner Nadav Azaria at the DeveloperLife blog. ...
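For comparison, here is roughly what the bare-JDK route looks like using only ProcessBuilder. It covers synchronous execution and output capture, but the timeout, abort, and asynchronous-notification requirements above are left entirely to you, which is why the Commons Exec wrapper is attractive. This is a hypothetical sketch; it assumes a Unix-like system where `sh` is available, and `echo hello` is just a placeholder command:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class PlainJdkExec {

    // Runs a shell command, returns "output:exitCode" (or an error string).
    static String run(String command) {
        try {
            ProcessBuilder pb = new ProcessBuilder("sh", "-c", command);
            pb.redirectErrorStream(true);          // merge stderr into stdout
            Process process = pb.start();
            StringBuilder output = new StringBuilder();
            try (BufferedReader reader = new BufferedReader(
                    new InputStreamReader(process.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    output.append(line);           // capture each output line
                }
            }
            int exitCode = process.waitFor();      // block until completion
            return output + ":" + exitCode;
        } catch (Exception e) {
            return "error:" + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(run("echo hello")); // prints hello:0
    }
}
```

Notice there is no watchdog here: if the child process hangs, waitFor() hangs with it, which is exactly requirement 5 that the library-based approach solves.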

Apache Apollo REST API

Apache Apollo is a next-generation, high-performance, multi-protocol messaging broker built from the ground up to one day be a drop-in replacement for ActiveMQ 5.x. I have blogged about it in the past (Part I has already been published, with Part II on its way). Apollo’s non-blocking, asynchronous architecture allows it to be super fast and scale very well on multi-core systems using a minimal number of threads. The supported protocols include AMQP, STOMP, MQTT, and ActiveMQ’s native binary protocol, Openwire. Among all of the cool features implemented in Apollo, the one I want to briefly introduce is the REST API. Apollo will soon have a JMX API just like ActiveMQ, but in the meantime, the REST API is much more amenable to automated management or broker inspection. At the moment, there are three main sections to the API: Broker, Session, and Config.

Broker

With the broker API, you have access to the heart of Apollo and each individual Virtual Host. A Virtual Host is a grouping of store, authentication mechanisms, and destinations, useful for implementing multi-tenancy. You can manage each Virtual Host’s destinations (topics, queues) by inspecting existing destinations, deleting ones that should no longer be around, or creating new ones. Also available are details about the connectors (these are what allow Apollo to take incoming connections from clients) and existing connections. With the REST API, you can start and stop connectors, delete connections, or even bring down the entire broker.

Session

The session API is responsible for authenticating a user so that they have access to the API.

Config

Use the Config API to view existing configuration or change configuration, which takes effect immediately (no restart required). The REST API makes it easy to administer Apollo. 
You can view each REST endpoint and its details at http://localhost:61680/api/index.html using the default installation (the URL could be different depending on where you configured your Administration endpoint to be). The Apollo console is built on top of the REST API, and a new, improved UX console is on its way. I highly recommend taking a look at Apollo!   Reference: Apache Apollo REST API from our JCG partner Christian Posta at the Christian Posta Software blog. ...

Json deserialization with Jackson and Super type tokens

Datatables is a jQuery plugin to present tabular information – it can enhance a simple table or can use AJAX-based data and present the information in a tabular form. Datatables requires the data from the server to follow a specific JSON format for it to be displayed on screen. Consider the case where a list of Member entities is to be displayed; the expected JSON structure for Members then has to be along these lines:

{
  "aaData": [
    { "id": 1, "first": "one", "last": "one", "addresses": [], "version": 0 },
    { "id": 2, "first": "two", "last": "two", "addresses": [], "version": 0 }
  ],
  "iTotalRecords": 100,
  "iTotalDisplayRecords": 10,
  "success": true
}

A generic Java type can be defined which Jackson can use to generate JSON of the type shown above. Consider the following Java generic type:

package mvcsample.types;

import java.util.List;

public class ListWrapper<T> {
    private List<T> aaData;
    private int iTotalRecords;
    private int iTotalDisplayRecords;
    private Boolean success;

    public List<T> getAaData() {
        return aaData;
    }
    public void setAaData(List<T> aaData) {
        this.aaData = aaData;
    }
    public int getiTotalRecords() {
        return iTotalRecords;
    }
    public void setiTotalRecords(int iTotalRecords) {
        this.iTotalRecords = iTotalRecords;
    }
    public int getiTotalDisplayRecords() {
        return iTotalDisplayRecords;
    }
    public void setiTotalDisplayRecords(int iTotalDisplayRecords) {
        this.iTotalDisplayRecords = iTotalDisplayRecords;
    }
    public Boolean getSuccess() {
        return success;
    }
    public void setSuccess(Boolean success) {
        this.success = success;
    }
}

So, with this generic type, to generate a list of Members I would have a parameterized type defined as in this test:

List<Member> members = new ArrayList<>();
members.add(new Member("one", "one"));
members.add(new Member("two", "two"));
ListWrapper<Member> membersWrapper = new ListWrapper<>();
membersWrapper.setAaData(members);
membersWrapper.setiTotalDisplayRecords(10);
membersWrapper.setiTotalRecords(100);
ObjectMapper objectMapper = new ObjectMapper();
StringWriter w = new StringWriter();
objectMapper.writeValue(w, membersWrapper);
String json = w.toString();
System.out.println(json);

And similarly a JSON for any other type can be generated. However, what about the other way around – generating the Java type given the JSON? Again, consider a case where the JSON given in the beginning is to be converted to ListWrapper<Member>; I can try a deserialization this way:

ObjectMapper objectMapper = new ObjectMapper();
ListWrapper<Member> membersUpdated = objectMapper.readValue(json, ListWrapper.class);

Note that above I cannot refer to the class type as ListWrapper<Member>.class, I can only refer to it as ListWrapper.class. This however will not work and the resulting type will not be a wrapper around the Member class, as at runtime Jackson has no idea that it has to generate a ListWrapper<Member>. The fix is to somehow pass the information about the ListWrapper’s type to Jackson, and this is where Super Type Tokens fit in. The article explains how this works in great detail; the essence is that while type erasure does remove the type information from parameterized instances of a generic type, the type is retained in subclasses of generic classes. For eg. 
Consider the following StringList class which derives from ArrayList<String>; it is possible to find that the type parameter of the base class is String, as shown in the test below:

import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.ArrayList;

public class StringList extends ArrayList<String> {

    public static void main(String[] args) {
        StringList list = new StringList();
        Type superClassType = list.getClass().getGenericSuperclass();
        ParameterizedType parameterizedType = (ParameterizedType) superClassType;
        System.out.println(parameterizedType.getActualTypeArguments()[0]);
    }
}

This is also applicable where the subclass is defined as an anonymous class, this way:

ArrayList<String> list = new ArrayList<String>(){};
Type superClassType = list.getClass().getGenericSuperclass();
ParameterizedType parameterizedType = (ParameterizedType) superClassType;
System.out.println(parameterizedType.getActualTypeArguments()[0]);

This is what is used internally by the Super Type Tokens pattern to find the type of the parameterized type. Jackson’s com.fasterxml.jackson.core.type.TypeReference abstract class implements this, and using it the Jackson deserialization would work this way:

import com.fasterxml.jackson.core.type.TypeReference;
...
ListWrapper<Member> membersWrapper = objectMapper.readValue(json, new TypeReference<ListWrapper<Member>>() {});

ListWrapper<Address> addressWrapper = objectMapper.readValue(json, new TypeReference<ListWrapper<Address>>() {});

This way two different parameterized types can be deserialized given a generic type and a JSON representation.

Resources:
- Reflecting Generics:
- Neal Gafter’s Super type tokens:

Reference: Json deserialization with Jackson and Super type tokens from our JCG partner Biju Kunjummen at the all and sundry blog. ...
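To round off the article above: the same trick Jackson's TypeReference relies on can be written as a tiny standalone class. This is a minimal sketch of a super type token, not Jackson's actual implementation:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.List;

// Capturing T requires creating a subclass, hence the abstract class
// and the trailing {} at every call site.
public abstract class TypeToken<T> {
    private final Type type;

    protected TypeToken() {
        Type superClass = getClass().getGenericSuperclass();
        // getActualTypeArguments()[0] is T, preserved in the subclass
        this.type = ((ParameterizedType) superClass).getActualTypeArguments()[0];
    }

    public Type getType() {
        return type;
    }

    public static void main(String[] args) {
        // The {} creates an anonymous subclass, so List<String> survives erasure
        TypeToken<List<String>> token = new TypeToken<List<String>>() {};
        System.out.println(token.getType()); // prints java.util.List<java.lang.String>
    }
}
```

Forgetting the trailing {} makes the construct fail (an abstract class cannot be instantiated directly), which conveniently forces callers into the subclassing pattern the trick depends on.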
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.