Java 7 – The NIO File Revolution

Java 7 has been out since July of last year. The small language additions in this release (known collectively as “Project Coin”) are useful: try-with-resources, which has closeable resources handled automatically by try blocks; strings in switch statements; multi-catch for exceptions; and the diamond operator (‘<>’) for working with generics. The addition everyone was anticipating the most, closures, has been deferred to version 8. Surprisingly though, there was a small ‘revolution’ of sorts in the Java 7 release that, for the most part, went unnoticed, and in my opinion it could be the best part of Java 7. The change I’m referring to is the addition of the java.nio.file package, whose classes and interfaces make working with files and directories in Java much easier. First and foremost of these changes is the ability to copy or move files. I always found it frustrating that if you wanted to copy or move a file, you had to roll your own version of ‘copy’ or ‘move’. The utilities found in the Guava project’s com.google.common.io package provide these capabilities, but I feel that copy and move operations should be a core part of the language. Over the next few posts, I’ll be going into greater detail (with code examples) on the classes/interfaces discussed here and some others that have not been covered. This post serves as an introduction and overview of the new functionality in the java.nio.file package.

Breaking Out Responsibilities

If you take a look at the java.io package as it stands now, the vast majority of the classes are input streams, output streams, readers or writers. Only one class defines operations for working directly with the file system: the File class. Some of the other classes in java.io take a File object as a constructor argument, but all file and directory interaction goes through the File class. In the java.nio.file package, this functionality has been teased out into other classes/interfaces.
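To make the copy and move operations concrete, here is a minimal, self-contained sketch (the file names are invented for the demonstration; a temporary file stands in for a real source file):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class CopyMoveExample {
    public static void main(String[] args) throws IOException {
        // Create a temporary source file for the demonstration
        Path source = Files.createTempFile("nio-demo", ".txt");
        Files.write(source, "hello".getBytes());

        // Copy the file; REPLACE_EXISTING overwrites the target if it is already there
        Path copy = source.resolveSibling("nio-demo-copy.txt");
        Files.copy(source, copy, StandardCopyOption.REPLACE_EXISTING);

        // Move (rename) the copy to a new name
        Path moved = source.resolveSibling("nio-demo-moved.txt");
        Files.move(copy, moved, StandardCopyOption.REPLACE_EXISTING);

        System.out.println(Files.exists(moved)); // true
        System.out.println(Files.exists(copy));  // false - the copy was renamed away

        // Clean up the scratch files
        Files.deleteIfExists(moved);
        Files.deleteIfExists(source);
    }
}
```

No hand-rolled stream-copy loop in sight, which is exactly the point of the new API.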
The first ones to discuss are the Path interface and the Files class.

Path and Files

A Path object is somewhat analogous to a java.io.File object in that it can represent a file or directory on the file system. A Path object is more abstract, though: it is a sequence of names that represent a directory hierarchy (that may or may not include a file) on the file system. There are no methods in the Path interface for working with directories or files; the methods defined are for working with or manipulating Path objects only, such as resolving one Path against another. (There is one method that can be used to obtain a java.io.File object from a Path, toFile. Likewise, the java.io.File class now contains a toPath method.) To work with files and directories, Path objects are used in conjunction with the Files class. The Files class consists entirely of static methods for manipulating directories and files, including copy, move and functions for working with symbolic links. Another interesting method in the Files class is newDirectoryStream, which returns a DirectoryStream object that can iterate over all the entries in a directory. Although the java.io.File class has the listFiles method, where you provide a FileFilter instance, newDirectoryStream can take a String glob like '*.txt' to filter on.

FileStore

As previously mentioned, all interaction with the file system in the java.io package is through the File class. This includes getting information on used or available space in the file system. In java.nio.file there is a FileStore class that represents the storage for the files, whether it's a device, a partition or a concrete file system. The FileStore class defines methods for getting information about the file storage, such as getTotalSpace, getUsableSpace and getUnallocatedSpace. A FileStore can be obtained by calling the Files.getFileStore(Path path) method, which returns the FileStore for that particular file.
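A short sketch tying these pieces together — a glob-filtered DirectoryStream and a FileStore query (the directory and file names here are made up for the example):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;

public class PathFilesExample {
    public static void main(String[] args) throws IOException {
        // Create a scratch directory with a couple of files
        Path dir = Files.createTempDirectory("nio-demo");
        Files.createFile(dir.resolve("notes.txt"));
        Files.createFile(dir.resolve("data.csv"));

        // Iterate only the *.txt entries using a String glob filter
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir, "*.txt")) {
            for (Path entry : stream) {
                System.out.println(entry.getFileName()); // only notes.txt matches
            }
        }

        // Ask which FileStore backs the directory and whether space is available
        FileStore store = Files.getFileStore(dir);
        System.out.println(store.getUsableSpace() > 0);

        // Clean up the scratch directory
        Files.delete(dir.resolve("notes.txt"));
        Files.delete(dir.resolve("data.csv"));
        Files.delete(dir);
    }
}
```

Note that DirectoryStream is AutoCloseable, so it slots neatly into the new try-with-resources syntax mentioned at the start of this post.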
FileSystem and FileSystems

A FileSystem, as the name implies, provides access to the file system and is a factory for other objects in the file system. For example, the FileSystem class defines a getPath method that converts a path string (such as /foo/bar) into a system-dependent Path object that can be used for accessing a file or directory. The FileSystem class also provides a getFileStores method that returns an iterable of all the FileStores in the file system. The FileSystems class provides access to the default FileSystem object with the static FileSystems.getDefault method. There are also static methods for creating custom FileSystem objects.

Conclusion

This has been a fast, high-level view of the new functionality for working with files provided by the java.nio.file package. There is much more that has not been covered here, so take a look at the API docs. Hopefully this post has gotten the reader interested in the improved file handling in Java 7. Be sure to stick around as we begin to explore in more detail what the java.nio.file package has to offer. Reference: What’s new in Java 7 – The (Quiet) NIO File Revolution from our JCG partner Bill Bejeck at the Random Thoughts On Coding blog...

XML parsing using SaxParser with complete code

The SAX parser uses callback methods (via org.xml.sax.helpers.DefaultHandler) to inform clients of the XML document's structure. To parse XML you extend DefaultHandler and override a few methods:

* startDocument() and endDocument() – called at the start and end of an XML document.
* startElement() and endElement() – called at the start and end of a document element.
* characters() – called with the text contents in between the start and end tags of an XML document element.

The following example demonstrates the use of DefaultHandler to parse an XML document. It maps the XML to a model class and generates a list of objects.

Sample XML Document:

<?xml version="1.0" encoding="UTF-8"?>
<catalog>
  <book id="001" lang="ENG">
    <isbn>23-34-42-3</isbn>
    <regDate>1990-05-24</regDate>
    <title>Operating Systems</title>
    <publisher country="USA">Pearson</publisher>
    <price>400</price>
    <authors>
      <author>Ganesh Tiwari</author>
    </authors>
  </book>
  <book id="002">
    <isbn>24-300-042-3</isbn>
    <regDate>1995-05-12</regDate>
    <title>Distributed Systems</title>
    <publisher country="Nepal">Ekata</publisher>
    <price>500</price>
    <authors>
      <author>Mahesh Poudel</author>
      <author>Bikram Adhikari</author>
      <author>Ramesh Poudel</author>
    </authors>
  </book>
</catalog>

Model class for the Book object, for mapping XML to objects:

import java.util.ArrayList;
import java.util.Date;
import java.util.List;

/**
 * Book class stores book information, after parsing the xml
 * @author Ganesh Tiwari
 */
public class Book {
    String lang;
    String title;
    String id;
    String isbn;
    Date regDate;
    String publisher;
    int price;
    List<String> authors;

    public Book() {
        authors = new ArrayList<String>();
    }
    // getters and setters
}

Java code for XML parsing (SAX):

import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.List;

import javax.xml.parsers.ParserConfigurationException;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

public class MySaxParser extends DefaultHandler {
    List<Book> bookL;
    String bookXmlFileName;
    String tmpValue;
    Book bookTmp;
    SimpleDateFormat sdf = new SimpleDateFormat("yy-MM-dd");

    public MySaxParser(String bookXmlFileName) {
        this.bookXmlFileName = bookXmlFileName;
        bookL = new ArrayList<Book>();
        parseDocument();
        printDatas();
    }

    private void parseDocument() {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        try {
            SAXParser parser = factory.newSAXParser();
            parser.parse(bookXmlFileName, this);
        } catch (ParserConfigurationException e) {
            System.out.println("ParserConfig error");
        } catch (SAXException e) {
            System.out.println("SAXException : xml not well formed");
        } catch (IOException e) {
            System.out.println("IO error");
        }
    }

    private void printDatas() {
        for (Book tmpB : bookL) {
            System.out.println(tmpB.toString());
        }
    }

    @Override
    public void startElement(String s, String s1, String elementName, Attributes attributes) throws SAXException {
        // if the current element is book, create a new book and read its attributes
        if (elementName.equalsIgnoreCase("book")) {
            bookTmp = new Book();
            bookTmp.setId(attributes.getValue("id"));
            bookTmp.setLang(attributes.getValue("lang"));
        }
        // if the current element is publisher, read its country attribute
        // (note: this stores the country, not the publisher name, which matches the output below)
        if (elementName.equalsIgnoreCase("publisher")) {
            bookTmp.setPublisher(attributes.getValue("country"));
        }
    }

    @Override
    public void endElement(String s, String s1, String element) throws SAXException {
        // at the end of a book element, add the finished book to the list
        if (element.equals("book")) {
            bookL.add(bookTmp);
        }
        if (element.equalsIgnoreCase("isbn")) {
            bookTmp.setIsbn(tmpValue);
        }
        if (element.equalsIgnoreCase("title")) {
            bookTmp.setTitle(tmpValue);
        }
        if (element.equalsIgnoreCase("author")) {
            bookTmp.getAuthors().add(tmpValue);
        }
        if (element.equalsIgnoreCase("price")) {
            bookTmp.setPrice(Integer.parseInt(tmpValue));
        }
        if (element.equalsIgnoreCase("regDate")) {
            try {
                bookTmp.setRegDate(sdf.parse(tmpValue));
            } catch (ParseException e) {
                System.out.println("date parsing error");
            }
        }
    }

    @Override
    public void characters(char[] ac, int i, int j) throws SAXException {
        // note: SAX may deliver the text of a single element in multiple chunks;
        // for large documents, appending to a StringBuilder is safer than plain assignment
        tmpValue = new String(ac, i, j);
    }

    public static void main(String[] args) {
        new MySaxParser("catalog.xml");
    }
}

Output of parsing:

Book [lang=ENG, title=Operating Systems, id=001, isbn=23-34-42-3, regDate=Thu May 24 00:00:00 NPT 1990, publisher=USA, price=400, authors=[Ganesh Tiwari]]
Book [lang=null, title=Distributed Systems, id=002, isbn=24-300-042-3, regDate=Fri May 12 00:00:00 NPT 1995, publisher=Nepal, price=500, authors=[Mahesh Poudel, Bikram Adhikari, Ramesh Poudel]]

Reference: XML parsing using SaxParser with complete code from our JCG partner Ganesh Tiwari at the GT’s Blog...

Agile Before there was Agile: Egoless Programming and Step-by-Step

Two key ideas underlie modern Agile development practices. The first is that work can be done more effectively by Whole Teams in which people work together collaboratively to design and build systems: they share code, they review each other's work, they share ideas and problems and solutions, they share responsibility, and they work closely with each other and communicate constantly with each other and the customer. The second is that working software is designed, built and delivered incrementally in short time boxes.

Egoless Programming

The idea of developers working together collaboratively, sharing code and reviewing each other's work isn't new. It goes back to Egoless Programming, first described by Gerald Weinberg in the early 1970s in his book The Psychology of Computer Programming. In Egoless Programming teams, everyone works together to build the best possible software, in an open, respectful, democratic way, sharing ideas and techniques. People put personal feelings aside, accept criticism and look for opportunities to learn from each other. The important thing is to write the best possible code. Egoless programmers share code, review each other's work, improve code, find bugs and fix them. People work on what they can do best. Leadership of the team moves around and changes based on what problems the team is working on. The result is people who are more motivated, and code that is more understandable and more maintainable. Sounds a lot like how Agile teams are trying to work together today.

Step-by-Step

My first experience with “agile development” – or at least with iterative design, incremental time boxed development and egoless programming – came a long time before the famous meeting at Snowbird in 2001. In 1990 I joined the technical support team at a small software development company on the west coast of Canada. It was quite a culture shock joining the firm.
First, I was recruited while I was back-packing around the world on a shoestring for a year – coming back to Canada and back to work was a culture shock in itself. But this wasn't your standard corporate environment. The small development team all worked from their homes, while the rest of us worked at a horse ranch in the countryside, taking calls and solving problems while cooking spaghetti for lunch in the ranch house kitchen, with an attic stuffed full of expensive high-tech gear. We built and supported tools used by thousands of other programmers around the world to build software of their own. All of our software was developed following an incremental, time boxed method called Step-by-Step, created by Michel Kohon in the early 1980s. In Step-by-Step, requirements are broken down into incremental pieces and developers develop and deliver working software in regular time boxes (ideally two weeks long), building and designing as they go. You expect requirements to be incomplete or wrong, and you expect them to change, especially as you deliver software to the customer and they start to use it. Sounds a lot like today's Agile time boxed development, doesn't it? Even though the company was distributed (the company's President, who still did a lot of the programming, moved to a remote island off the west coast of Canada, and later to an even more remote island in the Caribbean), we all worked closely together and were in constant communication. We relied a lot on email (we wrote our own) and an excellent issue tracking system (we wrote that too), and we spent a lot of time on the phone with each other and with customers and partners. The programmers were careful and disciplined. All code changes were peer reviewed (I still remember going through my first code review, and how much I learned about how to write good code) and developers tested all their own code. Then the support team reviewed and tested everything again.
Each month we documented and packaged up a time boxed release and delivered it to beta test customers – customers who had reported a problem or asked for a new feature – and asked for their feedback. Once a year we put together a final release and distributed it to everyone around the world. We carefully managed technical debt – although of course we didn't know it was called technical debt back then; we just wrote good code to last. Some of that code is still being used today, more than 25 years after the products were first developed. After I left this company and started leading and managing development teams, I didn't appreciate how this way of working could be scaled up to bigger teams and bigger problems. It wasn't until years later, after I had more experience with Agile development practices, that I saw how what I learned 20 years ago could be applied to make the work that we do today better and simpler. Reference: Agile Before there was Agile: Egoless Programming and Step-by-Step from our JCG partner Jim Bird at the Building Real Software blog...

Best Of The Week – 2012 – W03

Hello guys,

Time for the “Best Of The Week” links for the week that just passed. Here are some links that drew Java Code Geeks attention:

* Best development tools in the cloud: In this article, the best cloud-based development tools are presented. Source Control, Agile project management, Collaboration, Continuous Integration and Automated Testing tools are discussed. Also check out Developing and Testing in the Cloud and Services, practices & tools that should exist in any software development house.

* JBoss Releases Hibernate 4.0: This article discusses the release of Hibernate 4.0 by JBoss. The new release includes features such as multi-tenancy support, introduction of the “Services” API, better logging with i18n support, preparation for OSGi support and various code cleanups.

* Writing Applications for Cloud Foundry Using Spring and MongoDB: A presentation on how to create Spring applications using Spring Data and MongoDB, applications deployed on Cloud Foundry. Also check out Using MongoDB with Morphia.

* How I Program Stuff: An interesting approach to programming that involves isolation from distraction, elimination of redundant stuff and ruthless coding.

* Android Essentials: Create a Mirror: This tutorial demonstrates how to create a mirror app for your Android phone using the built-in front-facing camera. Also check out Android HTTP Camera Live Preview Tutorial.

* Oracle and the Java Ecosystem: An overview of Oracle's current position in the Java ecosystem. Discusses topics like the Java Community Process (JCP), OpenJDK with its newly renewed interest, and how Android fits into all this.

* Just-In-Time Logging: An interesting article that discusses logging and proposes moving it from the province of the developer's discretion to the realm of architecture and design, resulting in just-in-time logging. On the same topic, also read 10 Tips for Proper Application Logging.
* How to get the most out of Spring and Google App Engine: This presentation will get you up and running building Spring apps on Google App Engine. It explains step-by-step how to build a real Spring app and covers not only the basics of App Engine, but more advanced topics such as integrating with Google's SQL Service and using App Engine's “Always On” feature. Also check out Spring MVC and REST at Google App Engine.

* How Many Hours Can a Programmer Program?: This article discusses programmer productivity and how working more hours than usual can affect one's life.

* Creating Your First Spring Web MVC Application: A detailed tutorial to get you started with a Spring Web MVC application. Also check out Spring MVC Development – Quick Tutorial and Spring MVC3 Hibernate CRUD Sample Application.

* How Do You Pick Open Source Libraries?: A nice article with pointers on how to pick open source libraries, like how well a library fits a specific scenario, how popular it is, what the code quality is, the location of the code and issue tracker, and the underlying license.

* Java verbosity, JEE and Lombok: This post discusses how to minimize some of Java's verbosity by using the Lombok project. More specifically, apart from the Java beans related code, the benefits of Lombok with JPA Entities and JAXB Content Objects are discussed.

* Is it time to get rid of the Linux OS model in the cloud?: An article that questions whether the traditional Linux OS model still makes sense for cloud-based deployments.

* JVM Performance Tuning (notes): An article with some quick and dirty JVM performance tuning tips focusing on memory management, object allocation and garbage collection. Also check out Profile your applications with Java VisualVM and Practical Garbage Collection, part 1 – Introduction.
* The Principles of Web API Usage: This article describes the basic principles of using a web API, such as reading the provided documentation, using the appropriate version, checking the change log, taking usage limits into consideration, caching data, using the appropriate data format, handling authentication etc.

That's all for this week. Stay tuned for more, here at Java Code Geeks. Cheers, Ilias Tsagklis

Related Articles:
Best Of The Week – 2012 – W02
Best Of The Week – 2012 – W01
Best Of The Week – 2011 – W53
Best Of The Week – 2011 – W52
Best Of The Week – 2011 – W51
Best Of The Week – 2011 – W50
Best Of The Week – 2011 – W49
Best Of The Week – 2011 – W48
Best Of The Week – 2011 – W47
Best Of The Week – 2011 – W46...

GWT – Pros and Cons

I love JavaScript. With the advent of jQuery and MooTools, my love for JavaScript has only increased plenty-fold. Given a choice I would use either of the aforementioned frameworks for any web application I develop. But being in the service industry, time and again I have to succumb to the client's pressure and work in their choice of technology – whether or not it is the right one (the one who pays the piper calls the tune, isn't it?). One such client exposed me to the world of GWT. I gave GWT a shot a couple of years back, on the day it was released. I didn't like it much then, so I dismissed it and never went back. But over the past six months working on this project I have formed a slightly different impression of the framework. I still cannot say that GWT is the next big thing since sliced bread, but at least it is not as bad as I thought it was. I have documented my observations, both good and bad, during the course of this project, and thought some fellow developer might find them useful while evaluating GWT.

Pros:

* If you are a Java veteran with experience in Swing or AWT, then choosing GWT should be a no-brainer. The learning curve is shortest with this background. Even if you are not experienced in Java GUI development, years of working on server-side Java will come in handy while developing GWT apps.

* You can create highly responsive web applications with the heavy lifting on the client side and reduced chattiness with the server side.

* Although there are numerous JavaScript libraries out in the wild and most of them are worth their salt, many conventional developers don't understand JavaScript's true power. Remember, a powerful language like JavaScript is a double-edged sword: if you don't know how to use it, even you won't be able to clean up the mess you create.

* You can migrate from a typical web application to a GWT application iteratively. It is not an all-or-nothing proposition.

* You can use a clever trick called JSNI to interact with the loads of JavaScript functions you already possess. But it is always better to move them to GWT sooner rather than later.

* The IDE support for GWT could not be better. Java IDEs have matured over the past decade into some of the best in the world, and GWT takes direct advantage of that.

* The integrated debugging beauty is something you could kill for. The excellent debugging support offered by the mature Java IDEs is one feature that could sway anybody's decision in favor of GWT.

* The built-in IDE support for refactoring Java code can be put directly to good use to maintain a simple design at all times. Doing this in JavaScript is not for the faint of heart.

* The IDE syntax highlighting, error checking, code completion shortcuts etc. are overwhelming, to say the least.

* GWT is being actively developed by Google. We know that the project is not going to die off anytime soon. Their commitment to the project so far says a lot about its future in the industry.

* The community behind the project is also a big PLUS. Discussions take place daily on Stack Overflow, discussion forums, wikis and personal blogs. A simple search with the right keyword could point you in the right direction.

* GWT is a well thought-out API, not something that was put together in a hurry. This helps you as a developer to quickly comprehend the abstractions and makes it really intuitive to use.

* You can use GWT's built-in protocol to transfer data between the client and the server without any additional knowledge of how the data is packaged and sent. If you prefer more control, you can always use XML, JSON or another proprietary format of your choice. Even then, while using JSON, you don't have to use an unintuitive Java JSON library; you can use JSNI to ‘eval’ the JSON using straight JavaScript. Cool, huh?

* You have the advantage of being able to use standard Java static code analyzers like FindBugs, CheckStyle, Detangler, PMD etc. to monitor code and design quality. This is very important when you are working in a big team with varying experience levels.

* You can use JUnit or TestNG for unit testing and JMock or another mock library for mocking dependencies. Following TDD is straightforward if you already practice it. Although there are JavaScript-based unit testing frameworks like JsUnit and QUnit, come on, tell me how many people already know them or are itching to use them.

* The GWT compiler generates cross-browser JavaScript code. Today, any marketing person who says this will probably be beaten; it has become a basic necessity, not a luxury.

* The GWT compiler optimizes the generated code, removes dead code and even obfuscates the JavaScript for you, all in one shot.

* Although the compilation process takes a lot of time, you don't have to go through it during development. There is a special hosted mode that uses a browser plug-in and direct Java byte-code to produce output. That is one of the main reasons you are able to use a Java debugger to debug client-side code.

* Rich third-party controls are available through quite a few projects like Smart GWT, Ext GWT etc. They are well designed, easy to use and themeable. So, if you have a requirement where the existing controls don't cut it, you should look into one of these projects; there is a good chance that one of those components will work out. Even if it doesn't, you can always roll your own.

* GWT emphasizes the concept of a stateful client and a stateless server. This results in extremely low load on the server, where many users have to co-exist, and high load on the client, where only one user is working.

* I18N and L10N are pretty straightforward with GWT. In fact, locale-based compilation is taken care of by the GWT compiler itself. The same cannot be said about regular client-only frameworks.

* GWT comes with built-in browser back-button support even while using AJAX. If you are an AJAX developer, I can almost feel your relief. This is priceless.

Cons:

* GWT is a fast-developing project, so there are a lot of versions floating around. Many functions, interfaces and events get deprecated, and keeping up with their pace is not much fun when you have other work to do.

* There were quite a few GWT books in the beginning; not so much these days. For example, I haven't found many books on the 2.0 version of GWT. This leaves us only with Google's documentation. I agree that the documentation is good, but nothing can beat a well-written book.

* GWT is not fun to work with. After all, it is Java, and Java is not a fun language to work with. Add the fact that entire layouts and custom controls have to be created in Java, and you can easily make a grown programmer cry. With the introduction of UiBinder starting in version 2.0, that problem is somewhat solved, but now you have a new syntax to learn.

* The Java-to-JavaScript compilation is fairly slow, which is a significant con if you choose GWT.

* I personally prefer defining structure in HTML and styling it using CSS. The concepts used in HTML are clean and straightforward, and I have years of experience doing just that. But in GWT, I am forced to use proprietary methods to do the same. That, combined with the fact that GWT doesn't solve the styling and alignment incompatibilities for me, compounds the problem. Writing layout code in GWT is something I despise – although with UiBinder and HTMLPanel from version 2.0 onwards, I feel I am back in my own territory.

* It requires some serious commitment to get into GWT, because after that a change in client-side technology could require a complete rewrite of your app; it is a radically different approach from other client-side frameworks.

* There is no well-defined way to approach application development using GWT. Should we use one module per app, one module per page, or somewhere in between? These design patterns are only slowly evolving now. Typically people develop everything in one module until the module size goes beyond what is acceptable and then refactor it into multiple modules. But if that happens too late, the refactoring may not be easy either.

* Mixing presentation and code doesn't sound right, although typical desktop GUI applications do just that. These days even desktop application frameworks like Flex and Silverlight have taken an XML-based declarative approach to separate presentation from logic. GWT 1.x had this disadvantage; with the introduction of UiBinder starting from version 2.0, this disadvantage can be written off, although it is yet another painful XML language to learn.

* You would often be punching in 3x to 5x more code than you would with other client libraries – like jQuery – to get simple things done.

* You should also remember that with GWT, the abstraction away from HTML isn't complete. You'll still need to understand the DOM structure your app is generating in order to style it, and GWT can make it harder to see this structure in the code.

* GWT is an advantage only for Java developers. Developers with a .NET or PHP background won't gain anything here.

* If you have tasted the power of JavaScript and know how to properly use it to your advantage, then you will feel crippled with an unexpressive language like Java.

I am sure many of you will have differences of opinion. Difference is good. So, if you think otherwise, feel free to leave a comment. We'll discuss… References: GWT – Pros and Cons from our JCG partner Ganeshji Marwaha at the Ganesh blog...

Domain-Driven Design Using Naked Objects

I just had a chance to read a newly released book, ‘Domain-Driven Design Using Naked Objects’ by Dan Haywood [http://www.pragprog.com/titles/dhnako], that provides an insight into the world of DDD. If nothing else, this book is for techies and management people alike. Although Naked Objects is covered as the implementation framework (to explain, practically, all aspects of DDD), the book serves as an excellent introductory text for all of us who are new to this concept of crafting an enterprise application around its domain. The book also deserves to be read because there has been considerable buzz around this ‘domain’ stuff in the recent past. According to the book, ‘Domain-driven design is an approach to building application software that focuses on the bit that matters in enterprise applications: the core business domain. Rather than putting all your effort into technical concerns, you work to identify the key concepts that you want the application to handle, you try to figure out how they relate, and you experiment with allocating responsibilities (functionality). Those concepts might be easy to see (Customers, Products, Orders, and so on), but often there are more subtle ones (Payable, ShippingRecipient, and RepeatingOrder) that won’t get spotted the first time around. So, you can use a team that consists of business domain experts and developers, and you work to make sure that each understands the other by using the common ground of the domain itself.’ Since becoming a domain expert is the natural next step for any developer as experience increases, there is a need for more insight into the business, not just the software. UML does a great job of explaining the object-oriented stuff, but ponder for a second: how is it really going to help a businessman who is interested in getting more profits?
The Naked Objects framework (http://www.nakedobjects.org), based on the design pattern of the same name (http://en.wikipedia.org/wiki/naked_objects), is an open source framework (for Java only; the .NET version is a commercial one) that automatically turns simple beans/components into an interface (read: multiple applications). Don't confuse this with prototyping, because DDD incorporates both the developer and the domain expert teams, and we are not just creating the UI.

DDD's two central premises, explained:

* A ubiquitous language for integrating and easing communication between domain experts and developers, instead of the two languages (such as code and UML) that are the existing norm.
* Model-driven design that aims to capture the model of the business process. This is done in code, rather than just visually, as was the case earlier.

Naked Objects

The Java-based Naked Objects (NO) framework is an evolutionary step forward from Rails (and its other avatars: Grails, Spring Roo, ASP.NET MVC, etc.) that focuses more on the M and V rather than the full MVC, and provides much more domain-specific applications, in turn resulting in flexibility for all. A typical NO application consists of multiple sub-projects – the core domain, fixture, service, command line and webapp projects – generated through a Maven archetype. The coolest thing is that NO automatically displays the domain objects in an object-oriented UI that offers a more flexible display than any other IDE. NO also challenges the common frontend-middleware-backend convention and instead applies the Hexagonal architecture (http://alistair.cockburn.us/Hexagonal+architecture), which keeps the bigger picture in mind. Development in this framework is POJO-centric and heavily based on annotations, which should be pretty much regular stuff for any JEE developer. Also, during my initial evaluation of the framework, the code being generated during development was of maintainable quality, which is virtually essential for maintenance and scaling in any enterprise application.
Hence this book and its field of study are highly recommended for any enterprise developer, team, manager or domain expert and, as is repeatedly mentioned, become even more important once one has more years of experience under one's belt. I am continuing my exploration of this and, if it proves really useful, will post some exercises here. Reference: Domain-Driven Design Using Naked Objects from our JCG partner Sumit Bisht at the Sumit Bisht blog....

WhateverOrigin – Combat the Same Origin Policy with Heroku and Play! Framework

A little while ago, while coding Bitcoin Pie, I found the need to overcome the notorious Same Origin Policy that limits the domains javascript running on a client's browser can access. Via Stack Overflow I found a site called Any Origin, which is basically the easiest way to defeat the Same Origin Policy without setting up a dedicated server. All was well, until about a week ago, when Any Origin stopped working for some (but not all) https requests. It just so happened that in that time I had gained some experience with Play! and Heroku, which enabled me to quickly build an open source clone of Any Origin called Whatever Origin (.org!) (on github). For those unfamiliar with Play! and Heroku, let me give a short introduction: Heroku is one of the leading PaaS providers. PaaS is just a fancy way of saying “Let us manage your servers, scalability, and security … you just focus on writing the application.” Heroku started as a Ruby shop, but they now support a variety of programming languages and platforms, including Python, Java, Scala and JavaScript/Node.js. What's extra cool about them is that they offer a huge set of addons, ranging from simple stuff like Custom Domains and Logging through scheduling, email and SMS, up to more powerful addons like Redis, Neo4j and Memcached. Now for the application part: I had recently found the Play! Framework. Play is a Java/Scala framework for writing web applications that borrows from the Ruby on Rails / Django ideas of providing you with a complete pre-built solution, letting you focus on writing your actual business logic, while allowing you to customize everything later if needed. I encourage you to watch the 12 minute video on Play!'s homepage; it shows how to achieve powerful capabilities from literally scratch. Play!
is natively supported at Heroku, so really all you need to do to get a production app running is:

play new
Write some business logic (Controllers/Views/whatnot)
git init … git commit
“heroku apps add” to create a new app (don't forget to add “–stack cedar” to use the latest generation Cedar stack)
“git push heroku master” to upload a new version of your app … it's automatically built and deployed.

Armed with these tools (which really took me only a few days to learn), I set out to build Whatever Origin. Handling JSONP requests is an IO-bound task – your server basically does an HTTP request, and when it completes, it sends the response to your client wrapped in some javascript/JSON magic. Luckily Play!'s support for Async IO is really sweet and simple. Just look at my single get method:

public static void get(final String url, final String callback) {
    F.Promise<WS.HttpResponse> remoteCall = WS.url(url).getAsync();
    await(remoteCall, new F.Action<WS.HttpResponse>() {
        public void invoke(WS.HttpResponse result) {
            // code for getResponseStr() not included in this snippet to hide some ugly irrelevant details
            // http://blog.altosresearch.com/supporting-the-jsonp-callback-protocol-with-jquery-and-java/
            String responseStr = getResponseStr(result, url);
            if ( callback != null ) {
                response.contentType = "application/x-javascript";
                responseStr = callback + "(" + responseStr + ")";
            } else {
                response.contentType = "application/json";
            }
            renderJSON(responseStr);
        }
    });
}

The first line initiates an async fetch of the requested URL, followed by registration to the completion event, and releasing the thread. You could almost think this is Node.js! What actually took me the longest time to develop and debug was JSONP itself. The information I found about it, and jQuery's client-side support, was a little tricky to find, and I spent a few hours struggling with overly escaped JSON and other fun stuff.
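The callback-wrapping part of the snippet above boils down to a small pure function. Here is a standalone sketch (class and method names are mine, not from the WhateverOrigin code): wrap the JSON payload in the callback only if one was requested, which also decides the content type (application/x-javascript when wrapped, application/json otherwise).

```java
public class Jsonp {
    // Wraps a JSON payload in a JSONP callback if one was requested;
    // otherwise returns the payload unchanged.
    public static String wrap(String json, String callback) {
        if (callback != null && !callback.isEmpty()) {
            return callback + "(" + json + ")";
        }
        return json;
    }
}
```

For example, `Jsonp.wrap("{\"price\":1}", "cb")` yields the script body `cb({"price":1})` that the browser executes as the JSONP response.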
After that was done, I simply pushed it to github, registered the whateverorigin.org domain for a measly $7 a year, and replaced anyorigin.com with whateverorigin.org in Bitcoin Pie’s code, and voila – the site was back online. I really like developing websites in 2011 – there are entire industries out there that have set out to make it easy for individuals / small startups to build amazing products. Reference: WhateverOrigin – Combat the Same Origin Policy with Heroku and Play! Framework from our JCG partner Ron Gross at the A Quantum Immortal blog...

if – else coding style best practices

The following post is going to be an advanced curly-braces discussion with no right or wrong answer, just a “matter of taste”. It is about whether to put “else” (and other keywords, such as “catch”, “finally”) on a new line or not. Some may write

if (something) {
    doIt();
} else {
    dontDoIt();
}

I, however, prefer

if (something) {
    doIt();
}
else {
    dontDoIt();
}

That looks silly, maybe. But what about comments? Where do they go? This somehow looks wrong to me:

// This is the case when something happens and blah
// blah blah, and then, etc...
if (something) {
    doIt();
} else {
    // This happens only 10% of the time, and then you
    // better think twice about not doing it
    dontDoIt();
}

Isn't the following much better?

// This is the case when something happens and blah
// blah blah, and then, etc...
if (something) {
    doIt();
}

// This happens only 10% of the time, and then you
// better think twice about not doing it
else {
    dontDoIt();
}

In the second case, I'm really documenting the “if” and the “else” case separately. I'm not documenting the call to “dontDoIt()”. This can go further:

// This is the case when something happens and blah
// blah blah, and then, etc...
if (something) {
    doIt();
}

// Just in case
else if (somethingElse) {
    doSomethingElse();
}

// This happens only 10% of the time, and then you
// better think twice about not doing it
else {
    dontDoIt();
}

Or with try-catch-finally:

// Let's try doing some business
try {
    doIt();
}

// IOExceptions don't really occur
catch (IOException ignore) {}

// SQLExceptions need to be propagated
catch (SQLException e) {
    throw new RuntimeException(e);
}

// Clean up some resources
finally {
    cleanup();
}

It looks tidy, doesn't it?
As opposed to this:

// Let's try doing some business
try {
    doIt();
} catch (IOException ignore) {
    // IOExceptions don't really occur
} catch (SQLException e) {
    // SQLExceptions need to be propagated
    throw new RuntimeException(e);
} finally {
    // Clean up some resources
    cleanup();
}

I'm curious to hear your thoughts… References: if – else coding style best practices from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

REST Pagination in Spring

This is the seventh of a series of articles about setting up a secure RESTful Web Service using Spring 3.1 and Spring Security 3.1 with Java based configuration. This article will focus on the implementation of pagination in a RESTful web service. The REST with Spring series:

Part 1 – Bootstrapping a web application with Spring 3.1 and Java based Configuration
Part 2 – Building a RESTful Web Service with Spring 3.1 and Java based Configuration
Part 3 – Securing a RESTful Web Service with Spring Security 3.1
Part 4 – RESTful Web Service Discoverability
Part 5 – REST Service Discoverability with Spring
Part 6 – Basic and Digest authentication for a RESTful Service with Spring Security 3.1

Page as resource vs Page as representation The first question when designing pagination in the context of a RESTful architecture is whether to consider the page an actual resource or just a representation of resources. Treating the page itself as a resource introduces a host of problems, such as no longer being able to uniquely identify resources between calls. This, coupled with the fact that, outside the RESTful context, the page cannot be considered a proper entity but a holder that is constructed when needed, makes the choice straightforward: the page is part of the representation. The next question in the pagination design in the context of REST is where to include the paging information:

in the URI path: /foo/page/1
in the URI query: /foo?page=1

Keeping in mind that a page is not a resource, encoding the page information in the URI path is no longer an option. Page information in the URI query Encoding paging information in the URI query is the standard way to solve this issue in a RESTful service.
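The query-encoded form chosen above is simple enough to capture in a one-line helper (the helper name is mine, for illustration; the article's actual code builds these URIs with Spring's UriComponentsBuilder):

```java
public class PageUris {
    // Encodes paging information in the URI query, e.g. /foo?page=1&size=10,
    // keeping the path itself free of paging concerns.
    public static String buildPageUri(String base, int page, int size) {
        return base + "?page=" + page + "&size=" + size;
    }
}
```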
This approach does, however, have one downside – it cuts into the query space for actual queries: /foo?page=1&size=10 The Controller Now, for the implementation – the Spring MVC Controller for pagination is straightforward:

@RequestMapping( value = "admin/foo", params = { "page", "size" }, method = GET )
@ResponseBody
public List< Foo > findPaginated( @RequestParam( "page" ) int page,
    @RequestParam( "size" ) int size, UriComponentsBuilder uriBuilder,
    HttpServletResponse response ){
    Page< Foo > resultPage = service.findPaginated( page, size );
    if( page > resultPage.getTotalPages() ){
        throw new ResourceNotFoundException();
    }
    eventPublisher.publishEvent( new PaginatedResultsRetrievedEvent< Foo >(
        Foo.class, uriBuilder, response, page, resultPage.getTotalPages(), size ) );
    return resultPage.getContent();
}

The two query parameters are defined in the request mapping and injected into the controller method via @RequestParam; the HTTP response and the Spring UriComponentsBuilder are injected into the Controller method to be included in the event, as both will be needed to implement discoverability. Discoverability for REST pagination Within the scope of pagination, satisfying the HATEOAS constraint of REST means enabling the client of the API to discover the next and previous pages based on the current page in the navigation. For this purpose, the Link HTTP header will be used, coupled with the official “next“, “prev“, “first” and “last” link relation types. In REST, Discoverability is a cross cutting concern, applicable not only to specific operations but to types of operations. For example, each time a Resource is created, the URI of that resource should be discoverable by the client. Since this requirement is relevant for the creation of ANY Resource, it should be dealt with separately and decoupled from the main Controller flow.
With Spring, this decoupling is achieved with events, as was thoroughly discussed in the previous article focusing on Discoverability of a RESTful service. In the case of pagination, the event – PaginatedResultsRetrievedEvent – is fired in the Controller, and discoverability is achieved in a listener for this event:

void addLinkHeaderOnPagedResourceRetrieval( UriComponentsBuilder uriBuilder,
    HttpServletResponse response, Class clazz, int page, int totalPages, int size ){
    String resourceName = clazz.getSimpleName().toLowerCase();
    uriBuilder.path( "/admin/" + resourceName );
    StringBuilder linkHeader = new StringBuilder();
    if( hasNextPage( page, totalPages ) ){
        String uriForNextPage = constructNextPageUri( uriBuilder, page, size );
        linkHeader.append( createLinkHeader( uriForNextPage, REL_NEXT ) );
    }
    if( hasPreviousPage( page ) ){
        String uriForPrevPage = constructPrevPageUri( uriBuilder, page, size );
        appendCommaIfNecessary( linkHeader );
        linkHeader.append( createLinkHeader( uriForPrevPage, REL_PREV ) );
    }
    if( hasFirstPage( page ) ){
        String uriForFirstPage = constructFirstPageUri( uriBuilder, size );
        appendCommaIfNecessary( linkHeader );
        linkHeader.append( createLinkHeader( uriForFirstPage, REL_FIRST ) );
    }
    if( hasLastPage( page, totalPages ) ){
        String uriForLastPage = constructLastPageUri( uriBuilder, totalPages, size );
        appendCommaIfNecessary( linkHeader );
        linkHeader.append( createLinkHeader( uriForLastPage, REL_LAST ) );
    }
    response.addHeader( HttpConstants.LINK_HEADER, linkHeader.toString() );
}

In short, the listener logic checks if the navigation allows for next, previous, first and last pages and, if it does, adds the relevant URIs to the Link HTTP Header. It also makes sure that the link relation type is the correct one – “next”, “prev”, “first” and “last”. This is the single responsibility of the listener (the full code here).
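The helper methods the listener relies on, createLinkHeader and appendCommaIfNecessary, are not shown in the snippet. A minimal sketch, assuming the standard RFC 5988 link-value syntax `<uri>; rel="next"` with comma-separated values, might look like this:

```java
public class LinkHeaderUtil {
    // Formats a single RFC 5988 link-value, e.g. </admin/foo?page=1&size=10>; rel="next"
    public static String createLinkHeader(String uri, String rel) {
        return "<" + uri + ">; rel=\"" + rel + "\"";
    }

    // Link header values are comma separated; append a separator only if
    // something has already been written to the header.
    public static void appendCommaIfNecessary(StringBuilder linkHeader) {
        if (linkHeader.length() > 0) {
            linkHeader.append(", ");
        }
    }
}
```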
Test Driving Pagination Both the main logic of pagination and discoverability should be extensively covered by small, focused integration tests; as in the previous article, the rest-assured library is used to consume the REST service and to verify the results. These are a few examples of pagination integration tests; for a full test suite, check out the github project (link at the end of the article):

@Test
public void whenResourcesAreRetrievedPaged_then200IsReceived(){
    Response response = givenAuth().get( paths.getFooURL() + "?page=1&size=10" );
    assertThat( response.getStatusCode(), is( 200 ) );
}
@Test
public void whenPageOfResourcesAreRetrievedOutOfBounds_then404IsReceived(){
    Response response = givenAuth().get( paths.getFooURL() + "?page=" + randomNumeric( 5 ) + "&size=10" );
    assertThat( response.getStatusCode(), is( 404 ) );
}
@Test
public void givenResourcesExist_whenFirstPageIsRetrieved_thenPageContainsResources(){
    restTemplate.createResource();
    Response response = givenAuth().get( paths.getFooURL() + "?page=1&size=10" );
    assertFalse( response.body().as( List.class ).isEmpty() );
}

Test Driving Pagination Discoverability Testing Discoverability of Pagination is relatively straightforward, although there is a lot of ground to cover.
The tests are focused on the position of the current page in navigation and the different URIs that should be discoverable from each position:

@Test
public void whenFirstPageOfResourcesAreRetrieved_thenSecondPageIsNext(){
    Response response = givenAuth().get( paths.getFooURL() + "?page=0&size=10" );
    String uriToNextPage = extractURIByRel( response.getHeader( LINK ), REL_NEXT );
    assertEquals( paths.getFooURL() + "?page=1&size=10", uriToNextPage );
}
@Test
public void whenFirstPageOfResourcesAreRetrieved_thenNoPreviousPage(){
    Response response = givenAuth().get( paths.getFooURL() + "?page=0&size=10" );
    String uriToPrevPage = extractURIByRel( response.getHeader( LINK ), REL_PREV );
    assertNull( uriToPrevPage );
}
@Test
public void whenSecondPageOfResourcesAreRetrieved_thenFirstPageIsPrevious(){
    Response response = givenAuth().get( paths.getFooURL() + "?page=1&size=10" );
    String uriToPrevPage = extractURIByRel( response.getHeader( LINK ), REL_PREV );
    assertEquals( paths.getFooURL() + "?page=0&size=10", uriToPrevPage );
}
@Test
public void whenLastPageOfResourcesIsRetrieved_thenNoNextPageIsDiscoverable(){
    Response first = givenAuth().get( paths.getFooURL() + "?page=0&size=10" );
    String uriToLastPage = extractURIByRel( first.getHeader( LINK ), REL_LAST );
    Response response = givenAuth().get( uriToLastPage );
    String uriToNextPage = extractURIByRel( response.getHeader( LINK ), REL_NEXT );
    assertNull( uriToNextPage );
}

These are just a few examples of integration tests consuming the RESTful service. Getting All Resources On the same topic of pagination and discoverability, the choice must be made whether a client is allowed to retrieve all the Resources in the system at once, or whether the client MUST ask for them paginated. If the choice is made that the client cannot retrieve all Resources with a single request, and pagination is not optional but required, then several options are available for the response to a get all request.
One option is to return a 404 (Not Found) and use the Link header to make the first page discoverable: Link=<http://localhost:8080/rest/api/admin/foo?page=0&size=10>; rel=”first“, <http://localhost:8080/rest/api/admin/foo?page=103&size=10>; rel=”last“ Another option is to return a redirect – 303 (See Other) – to the first page of the pagination. A third option is to return a 405 (Method Not Allowed) for the GET request. REST Paging with Range HTTP headers A relatively different way of doing pagination is to work with the HTTP Range headers – Range, Content-Range, If-Range, Accept-Ranges – and HTTP status codes – 206 (Partial Content), 413 (Request Entity Too Large), 416 (Requested Range Not Satisfiable). One view on this approach is that the HTTP Range extensions were not intended for pagination and should be managed by the Server, not by the Application. Implementing pagination based on the HTTP Range header extensions is nevertheless technically possible, although not nearly as common as the implementation discussed in this article. Conclusion This article covered the implementation of Pagination in a RESTful service with Spring, discussing how to implement and test Discoverability. For a full implementation of pagination, check out the github project. If you read this far, you should follow me on twitter here. Reference: REST Pagination in Spring from our JCG partner Eugen Paraschiv at the baeldung blog...

Public key infrastructure

Some time ago I was asked to create a presentation for my colleagues describing Public Key Infrastructure: its components, functions, how it generally works, etc. To create that presentation I collected some material on the topic, and it would be a waste to just throw it out. That presentation wasn't technical at all, and this post is not going to be technical either. It will give just a concept, a high-level picture, which, I believe, can be good base knowledge before starting to look at the details. I will start with cryptography itself. Why do we need it? There are at least three reasons – Confidentiality, Authentication and Integrity. Confidentiality is the most obvious one. It's crystal clear that we need cryptography to hide information from others. Authentication confirms that a message is sent by a subject that we can identify and that our claims about it are true. And finally, Integrity ensures that the message wasn't modified or corrupted during the transfer process. We may try to use Symmetric Cryptography to help us achieve these aims. It uses just one shared key, which is also called the secret. The secret is used both for encryption and for decryption of data. Let's have a look at how it can help us achieve our aims. Does it encrypt messages? Yes. Confidentiality is solved, as long as nobody else, except the communicating parties, knows the secret. Does it provide Authentication? Mmm… I would say no. If there are just two parties in the conversation it seems ok, but if there are hundreds, then there should be hundreds of secrets, which are hard to manage and distribute. What about Integrity? Yes, it works fine – it's very hard to modify an encrypted message. As you can guess, symmetric cryptography has one big problem, and that problem is the “shared secret”. These two words… they don't even fit one another. If something is known by more than one person, it is not a secret any more.
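The shared-secret scheme described above is available directly in the JDK via javax.crypto. This sketch uses AES with the provider's default transformation for brevity only; real code should specify an authenticated mode such as AES/GCM. The same key object performs both operations, which is exactly the "whoever holds the secret can do everything" property:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricDemo {
    // One method for both directions: the mode flag decides whether the
    // shared key encrypts or decrypts.
    public static byte[] crypt(int mode, SecretKey key, byte[] data) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(mode, key);
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        // The secret: a single key shared by both communicating parties.
        SecretKey secret = KeyGenerator.getInstance("AES").generateKey();
        byte[] encrypted = crypt(Cipher.ENCRYPT_MODE, secret, "top secret".getBytes());
        byte[] decrypted = crypt(Cipher.DECRYPT_MODE, secret, encrypted);
        System.out.println(new String(decrypted)); // prints: top secret
    }
}
```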
Moreover, to be shared, that secret somehow has to be transferred, and during that process there are too many ways for the secret to be stolen. This means that this type of cryptography hardly solves our problems. But it is still in use and works quite well for its purposes. It's very fast and can be used for encryption/decryption of big amounts of data, e.g. your hard drive. Also, since it is hundreds or even thousands of times faster than asymmetric cryptography, it's used in hybrid schemes (like TLS aka SSL), where asymmetric cryptography is used just for transferring the symmetric key and encryption/decryption is done by the symmetric algorithm. Let's have a look at Asymmetric Cryptography. It was invented relatively recently, about 40 years ago. The first paper (“New Directions in Cryptography”) was published in 1976 by Whitfield Diffie and Martin Hellman. Their work was influenced by Ralph Merkle, who is believed to be the one who created the idea of Public Key Cryptography in 1974 (http://www.merkle.com/1974/) and suggested it as a project to his mentor, Lance Hoffman, who rejected it. “New Directions in Cryptography” describes the key exchange algorithm known as “Diffie–Hellman key exchange”. An interesting fact is that the same key exchange algorithm was invented earlier, in 1974, at the Government Communications Headquarters, UK, by Malcolm J. Williamson, but that information was classified and the fact was only disclosed in 1997. Asymmetric Cryptography uses a pair of keys – one Private Key and one Public Key. The Private Key has to be kept secret and not shared with anybody. The Public Key can be available to the public; it doesn't need to be secret. Information encrypted with the public key can be decrypted only with the corresponding private key. Since the Private Key is not shared, there is no need to distribute it, and there is a reasonably small chance that it will be compromised. So this way of exchanging information can solve the Confidentiality problem. What about Authentication and Integrity?
These problems are solvable as well and utilise a mechanism called a Digital Signature. The simplest variant of a Digital Signature can use the following scenario – the subject creates a hash of the message, encrypts that hash with the Private Key and attaches it to the message. Now if the recipient wants to verify the subject who created the message, he will decrypt the attached hash using the subject's public key (that's Authentication) and compare it with the hash generated on the recipient's side (Integrity). In reality the hash is not exactly encrypted; instead it is used in a special signing algorithm, but the overall concept is the same. It's important to notice that in Asymmetric Cryptography each pair of keys serves just one purpose, e.g. if a pair is used for signing, it can't be used for encryption. The Digital Signature is also the basis for the Digital Certificate, AKA Public Key Certificate. A certificate is pretty much the same as your passport. It has identity information, which is similar to the name, date of birth, etc. in a passport. The owner of a certificate has to have the Private Key which matches the Public Key stored in the certificate, similar to how a passport has a photo of the owner, which matches the owner's face. And, finally, a certificate has a signature, and its meaning is the same as the meaning of the stamp in a passport. The signature proves that the certificate was issued by the organization which made that signature. In the Public Key Infrastructure world such organizations are called Certificate Authorities. If a system discovers that a Certificate is signed by a “trusted” Certificate Authority, that system will trust the information in the certificate. The last paragraph may not be obvious, especially the “trust” part of it. What does “trust” mean in this context? Let's have a look at a simple example. Every site on the Web which makes use of an encrypted connection does it via the TLS (SSL) protocol, which is based on Certificates. When you go to https://www.amazon.co.uk, it sends its certificate back to your browser.
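The sign-then-verify flow described above maps directly onto the JDK's java.security API, which takes care of the hashing internally (here SHA-256) before applying the signing algorithm:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {
    // The private key signs: internally the message is hashed and the
    // signing algorithm is applied to that hash.
    public static byte[] sign(KeyPair pair, byte[] message) throws Exception {
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(message);
        return signer.sign();
    }

    // The public key verifies, giving Authentication (only the private key
    // holder could have produced the signature) and Integrity (any change
    // to the message makes verification fail).
    public static boolean verify(KeyPair pair, byte[] message, byte[] sig) throws Exception {
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(message);
        return verifier.verify(sig);
    }
}
```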
In that certificate there is information about the website and a reference to the Certificate Authority which signed that certificate. First the browser will look at the name in the certificate – it has to be exactly the same as the website's domain name, in our case “www.amazon.co.uk”. Then the browser will verify that the certificate is signed by a Trusted Certificate Authority, which is VeriSign in the case of Amazon. Your browser already has a list of Certificate Authorities (this is just a list of certificates with public keys) which are known as trusted ones, so it can verify that the certificate is issued by one of them. There are some other verification steps, but these two are the most important ones. Assume in our case verification was successful (if it's not, the browser will show a big red warning message, like that one) – the certificate has the proper name in it and was signed by a Trusted Certificate Authority. What does this give us? Just one thing – we know that we are on www.amazon.co.uk and the server behind that name is an Amazon server, not some dodgy website which just looks like Amazon. When we enter our credit card details, we can be relatively sure that they will be sent to Amazon, and not to a hacker's database. Our hope here is based on the assumption that Certificate Authorities like VeriSign do not give out dodgy certificates and the Amazon server is not compromised. Well, better than nothing :) Another example is servers in an organization which use certificates to verify that they can trust one another. The scheme there is very similar to the browser one, except for two differences: mutual authentication – certificates are usually verified by both sides, not just by the client, so the client has to send his certificate to the server; and the Certificate Authority is hosted inside the company. When the CA is inside the company we can be almost sure that certificates are going to be issued only to properly validated subjects.
This gives some confidence that a hacker can't inject his server, even if he has access to the network infrastructure. An attack is possible only if the CA is compromised or some server's Private Key is compromised. As we already know, the Certificate Authority is the organization which issues certificates, and on the Internet an example of such an organization is VeriSign. If a certificate is created to be used just inside an organization (intranet), it can be issued by the Information Security Department, which can act as a Certificate Authority. When someone wants to have a certificate, he has to send a certificate request, which is called a Certificate Signing Request, to the Certificate Authority. That request consists of the subject's identity information, the subject's public key and a signature created by the subject's private key to ensure that the subject who sent the request has the appropriate private key. Before signing, the Certificate Authority passes the request to the Registration Authority, which verifies all details, ensures that the proper process is followed, etc. It's possible that the Certificate Authority also acts as the Registration Authority. After all that, if everything is ok, the Certificate Authority creates a new certificate signed by its private key and sends it back to the subject which requested the certificate. I've already mentioned the Certificate validation process. Here are some details of it, though it's worth mentioning they are still high-level. Validation consists of several steps which, broadly speaking, can be described as:

Certificate data validation – validity date, presence of required fields, their values, etc.
Verify that the certificate is issued by a Trusted Certificate Authority. If you are browsing the internet, that list is already built into your browser. If it's communication between two systems, each system has a list of trusted Certificate Authorities; usually that is just a file with certificates.
The certificate's signature is valid and made by the Certificate Authority which signed that certificate.
Verify that the certificate is not revoked.
Key verification – prove that the server can decode messages encrypted with the certificate's Public Key.

The certificate revocation mentioned above can happen for many reasons – the certificate could be compromised, or, in the corporate world, the employee who owned the certificate left the company, or the server which had the certificate was decommissioned, etc. In order to verify certificate revocation, the browser, or any other piece of software, has to use one or both of the following techniques:

Certificate Revocation List (CRL). That's just a file, which can be hosted on an http server. It contains a list of revoked certificate IDs. The method is simple and straightforward and doesn't require a lot of implementation effort, but it has three disadvantages – it's just a file, which means it's not real-time, it can use significant network traffic, and it's not checked by default by most browsers (I would even say by all browsers), even if the certificate has a link to the CRL.
Online Certificate Status Protocol (OCSP). This is the preferable solution, which utilizes a dedicated server implementing a protocol that returns the revocation status of a certificate by its id. If the browser (at least Firefox > v3.0) finds a link to such a server in the certificate, it will make a call to verify that the certificate is not revoked. The only disadvantage is that the OCSP server has to be very reliable and able to answer requests all the time.

On the internet a certificate usually contains links to the CRL or OCSP inside it. When certificates are used in a corporate network these links are usually known by all parties and there is no need to have them in the certificate. So, finally, what is Public Key Infrastructure? It's the infrastructure which supports everything described above and generally consists of the following elements:

Subscribers – users of certificates: clients and the ones who own certificates.
Certificates.
Certificate Authority and Registration Authority.
Certificate Revocation Infrastructure.
A server with the Certificate Revocation List, or an OCSP Server.
Certificate Policy and Practices documents. These describe the format of certificates, the format of certificate requests, when certificates have to be revoked, etc. – basically all procedures related to the infrastructure.
Hardware Security Modules, which are usually used to protect the Root CA's private key.

And that entire infrastructure supports the functions we've just discussed:

Public Key Cryptography.
Certificate issuance.
Certificate validation.
Certificate revocation.

And that's it. It appeared to be not such a big topic after all ;) References: Public key infrastructure from our JCG partner Stanislav Kobylansky at Stas's blog ....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd