What's New Here?


Java EE 7 is final. Thoughts, Insights and further Pointers.

It took us a little less than three years to get the next Java EE version out the door. On April 16th this year the JCP EC voted on JSR 342 and approved it. This is something of a success story, because the initial idea of having a cloud-ready platform was withdrawn at the last possible moment, in late August last year. As a member of the EG it is more or less easy to write about upcoming features. Even though the umbrella EG is only responsible for the platform-level work and not the individual contained JSRs, you need to know a little more about the details than I expected at first. But I'm not going to recap what has already been written by Arun or the Adopt-a-JSR members. Instead I would like to give you some behind-the-scenes impressions. First of all: a hearty "Thank you!" to all the hard-working EGs and contributors of the individual JSRs! It was a pleasure contributing as an individual, and I am thankful for the patience and respect I received for my views and ideas!

Platform Road-map

What started back in 1998 has been a tremendous success. The Java Enterprise Edition as we know it today started out with fewer than 10 individual specifications and grew over time to what it is today. Different topics started to shape the versions, beginning with what was called J2EE 1.4 in 2003. A more developer-centered view came with the re-branding towards Java EE (and yes: there is nothing named "JEE"! Never use that name! Please!). This was extended in the overly successful sixth version. Following that path, it seemed to me as if the "cloud" topic initially proposed for 7 came out of nowhere. Reading Linda's email about the possible re-alignment was something of a relief, and the only thing I have to add is that it probably came too late. The cloud topics will come up again in the next version, which will hopefully start sometime in the future.

What I Would Wish For

My personal wish would be a better and longer-term strategy here.
Knowing that we are talking about comparably long time-frames, this might remain a wish, but instead of adopting the latest industry trends wholesale and leaving it up to the individual JSRs to fill in the buzzwords, I would rather see a more platform-centered approach. Given the different categories in which each of the new EE versions emerges, this could look like the following: with a maximum of 25% for each of them, it would be a reasonable way to fulfill the needs of every stakeholder. 75% for standards-related work to keep the platform integrated, usable and up to date, and only 25% of the work to cautiously adopt new things. To me it feels as if this approach would invert the way it is done today. But someone with more insight might prove me wrong here. Further on, I would suggest that the "big ticket" items need some kind of visionary road-map, too. Let's say it might be related to Gartner's Emerging Technologies Hype Cycle. So my personal road-map for EE's next big-ticket topics would be the following:

Transparency, Community Contribution and Work in the EG

Even if I am complaining about the lack of transparency behind the overall planning, I have to note that overall transparency and community contribution rose to a new level in EE 7. Starting with the official survey which Linda launched at the EE BOF at JavaOne last year, on to the upgraded JCP version (JCP 2.8) which is in use for most of the EE JSRs, and the incredible number of people working in the Adopt-a-JSR program, this has been the most open EE specification effort of all time. For those willing to contribute further, I suggest that you get familiar with the Adopt-a-JSR program and start contributing. This is a great way to give feedback to the individual EGs. You're of course free to pick whatever specification you want and contribute on the user mailing lists. They are open, and the EGs monitor what is happening there. Further on, most of the EG members are publicly reachable and happy to receive feedback.
Generally I am pleased to say that working in the EE 7 Expert Group was a pleasant experience. I am incredibly honored to have had the chance to work with the brightest EE minds in the industry. This includes Bill and Pete and many others. Especially those who won this year's Star Spec Lead award are the ones I recall being open and responsive to every single question I had. Thank you.

Java Enterprise Edition 7 at a Glance

Enough of behind the scenes and crazy ideas. Here is what EE 7 looks like as of today: with four new specifications on board and four pruned ones (EJB Entity Beans, JAX-RPC 1.1, JAXR 1.0, and JSR-88 1.2) we're exactly where we were in EE 6 according to the numbers. The complete specification now contains 34 individual specifications:

Specification | JSR | Version | Project
Java Platform, Enterprise Edition | 342 | 7 | javaee-spec
Managed Beans | 342 | 1.0 |
Java EE Web Profile (Web Profile) | 342 | 1.0 |
Java API for RESTful Web Services (JAX-RS) | 339 | 2.0 | jax-rs-spec
Web Services for Java EE | 109 | 1.4 |
Java API for XML-Based Web Services (JAX-WS) | 224 | 2.2 | jax-ws
Java Architecture for XML Binding (JAXB) | 222 | 2.2 | jaxb
Web Services Metadata for the Java Platform | 181 | 2.1 |
Java API for XML-Based RPC (JAX-RPC) (Optional) | 101 | 1.1 | jax-rpc
Java API for XML Registries (JAXR) (Optional) | 93 | 1.0 |
Servlet | 340 | 3.1 |
JavaServer Faces (JSF) | 344 | 2.2 | javaserverfaces
JavaServer Pages (JSP) | 245 | 2.3 |
JavaServer Pages Expression Language (EL) | 341 | 3.0 | el-spec
A Standard Tag Library for JavaServer Pages (JSTL) | 52 | 1.2 | jstl
Debugging Support for Other Languages | 45 | 1.0 |
Contexts and Dependency Injection for the Java EE Platform (CDI) | 346 | 1.1 | github.com
Dependency Injection for Java (DI) | 330 | 1.0 |
Bean Validation | 349 | 1.1 | beanvalidation.org
Enterprise JavaBeans (EJB) | 345 | 3.2 | ejb-spec
Java EE Connector Architecture (JCA) | 322 | 1.7 |
Java Persistence (JPA) | 338 | 2.1 | jpa-spec
Common Annotations for the Java Platform | 250 | 1.2 |
Java Message Service API (JMS) | 343 | 2.0 |
Java Transaction API (JTA) | 907 | 1.2 | jta-spec
JavaMail | 919 | 1.5 | javamail
Java Authentication Service Provider Interface for Containers (JASPIC) | 196 | 1.1 | jaspic-spec
Java Authorization Contract for Containers (JACC) | 115 | 1.5 | jacc-spec
Java EE Application Deployment (Optional) | 88 | 1.2 |
Java Database Connectivity (JDBC) | 221 | 4.0 |
Java Management Extensions (JMX) | 255 | 2.0 | openjdk
JavaBeans Activation Framework (JAF) | 925 | 1.1 |
Streaming API for XML (StAX) | 173 | 1.0 | sjsxp
Java Authentication and Authorization Service (JAAS) | | 1.0 |
Interceptors | 318 | 1.2 |
Batch Applications for the Java Platform | 352 | 1.0 | jbatch
Java API for JSON Processing | 353 | 1.0 | json-processing-spec
Java API for WebSocket | 356 | 1.0 | websocket-spec
Concurrency Utilities for Java EE | 236 | 1.0 | concurrency-ee-spec

Free Online Launch Event for Java EE 7

If you're interested in first-hand information about all the new specs, register for the Java EE 7 Launch Webcast on June 12th. The introduction of Java EE 7 is a free online event where you can connect with Java users from all over the world as you learn about the power and capabilities of Java EE 7. Join Oracle for presentations from technical leaders and Java users from both large and small enterprises, deep dives into the new JSRs, and scheduled chats with Java experts:

- Business Keynote (Hasan Rizvi and Cameron Purdy)
- Technical Keynote (Linda DeMichiel)
- Breakout sessions on different JSRs by specification leads
- Live chat
- Lots of demos
- Community, partner, and customer video testimonials

Reference: Java EE 7 is final. Thoughts, Insights and further Pointers. from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

Mapping enums done right with @Convert in JPA 2.1

If you have ever worked with Java enums in JPA you are definitely aware of their limitations and traps. Using an enum as a property of your @Entity is often a very good choice; however, JPA prior to 2.1 didn't handle them very well. It gave you 2+1 choices:

- @Enumerated(EnumType.ORDINAL) (the default) will map enum values using Enum.ordinal(). Basically the first enumerated value will be mapped to 0 in the database column, the second to 1, etc. This is very compact and works great right up to the point when you want to modify your enum. Removing or adding a value in the middle, or rearranging values, will totally break existing records. Ouch! To make matters worse, unit and integration tests often work on a clean database, so they won't catch discrepancies in old data.
- @Enumerated(EnumType.STRING) is much safer because it stores the string representation of the enum. You can now safely add new values and move them around. However, renaming an enum constant in Java code will still break existing records in the DB. More importantly, such a representation is very verbose, unnecessarily consuming database resources.
- You can also use a raw representation (e.g. a single char or int) and map it manually back and forth in @PostLoad/@PrePersist/@PreUpdate events. Most flexible and safe from the database perspective, but quite ugly.

Luckily the Java Persistence API 2.1 (JSR 338), released a few days ago, provides a standardized mechanism of pluggable data converters. Such APIs have existed for ages in proprietary forms, and it's not exactly rocket science, but having it as part of JPA is a big improvement. To my knowledge EclipseLink is the only JPA 2.1 implementation available to date, so we will use it to experiment a bit. We will start from the sample Spring application developed as part of the "Poor man's CRUD: jqGrid, REST, AJAX, and Spring MVC in one house" article. That version had no persistence, so we will add a thin DAO layer on top of Spring Data JPA backed by EclipseLink.
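To make the ORDINAL pitfall concrete, here is a small standalone sketch. The CoverV1/CoverV2 enums and the OrdinalPitfall class are my own illustration (the article's real enum appears below); the point is that inserting a constant in the middle silently changes the meaning of ordinals that older rows were stored with:

```java
// Demonstrates why @Enumerated(EnumType.ORDINAL) is fragile.
public class OrdinalPitfall {

    // Version 1 of the enum, as originally persisted.
    enum CoverV1 { PAPERBACK, HARDCOVER, DUST_JACKET }

    // Version 2: someone inserts a new constant in the middle.
    enum CoverV2 { PAPERBACK, LEATHER, HARDCOVER, DUST_JACKET }

    public static void main(String[] args) {
        // A row written with version 1: HARDCOVER was stored as ordinal 1.
        int storedOrdinal = CoverV1.HARDCOVER.ordinal();
        System.out.println("stored: " + storedOrdinal);

        // Reading the same column with version 2 of the enum
        // resolves ordinal 1 to LEATHER, not HARDCOVER.
        CoverV2 reloaded = CoverV2.values()[storedOrdinal];
        System.out.println("reloaded as: " + reloaded);
    }
}
```

No exception is thrown here, which is exactly what makes the bug nasty: the data is silently reinterpreted.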
The only entity so far is Book:

```java
@Entity
public class Book {

    @Id
    @GeneratedValue(strategy = IDENTITY)
    private Integer id;

    //...

    private Cover cover;

    //...
}
```

Where Cover is an enum:

```java
public enum Cover {
    PAPERBACK, HARDCOVER, DUST_JACKET
}
```

Neither ORDINAL nor STRING is a good choice here. The former because rearranging the first three values in any way will break loading of existing records; the latter is too verbose. Here is where custom converters in JPA come into play:

```java
import javax.persistence.AttributeConverter;
import javax.persistence.Converter;

@Converter
public class CoverConverter implements AttributeConverter<Cover, String> {

    @Override
    public String convertToDatabaseColumn(Cover attribute) {
        switch (attribute) {
            case DUST_JACKET: return "D";
            case HARDCOVER:   return "H";
            case PAPERBACK:   return "P";
            default: throw new IllegalArgumentException("Unknown " + attribute);
        }
    }

    @Override
    public Cover convertToEntityAttribute(String dbData) {
        switch (dbData) {
            case "D": return Cover.DUST_JACKET;
            case "H": return Cover.HARDCOVER;
            case "P": return Cover.PAPERBACK;
            default: throw new IllegalArgumentException("Unknown " + dbData);
        }
    }
}
```

OK, I won't insult you, my dear reader, by explaining this: it converts the enum to whatever will be stored in the relational database, and vice versa. Theoretically the JPA provider should apply converters automatically if they are declared with:

```java
@Converter(autoApply = true)
```

It didn't work for me. Moreover, declaring the converter explicitly instead of @Enumerated in the @Entity class didn't work either:

```java
import javax.persistence.Convert;

//...
@Convert(converter = CoverConverter.class)
private Cover cover;
```

resulting in this exception:

```
Exception Description: The converter class [com.blogspot.nurkiewicz.CoverConverter] specified on the mapping attribute [cover] from the class [com.blogspot.nurkiewicz.Book] was not found. Please ensure the converter class name is correct and exists with the persistence unit definition.
```
Bug or feature? Either way, I had to mention the converter in orm.xml:

```xml
<?xml version="1.0"?>
<entity-mappings xmlns="" version="2.1">
    <converter class="com.blogspot.nurkiewicz.CoverConverter"/>
</entity-mappings>
```

And it flies! I now have the freedom to modify my Cover enum (adding, rearranging, renaming) without affecting existing records. One tip I would like to share with you is related to maintainability. Every time you have a piece of code mapping from or to an enum, make sure it's tested properly. And I don't mean testing every possible existing value manually. I am more after a test making sure that new enum values are reflected in the mapping code. Hint: the code below will fail (by throwing IllegalArgumentException) if you add a new enum value but forget to add the mapping code for it:

```java
for (Cover cover : Cover.values()) {
    new CoverConverter().convertToDatabaseColumn(cover);
}
```

Custom converters in JPA 2.1 are much more useful than what we just saw. If you combine JPA with Scala, you can use @Converter to map database columns directly to scala.math.BigDecimal, scala.Option or a small case class. In Java there will finally be a portable way of mapping Joda time. Last but not least, if you like a (very) strongly typed domain, you may wish to have a PhoneNumber class (with isInternational(), getCountryCode() and custom validation logic) instead of a String or long. This small addition in JPA 2.1 will surely improve the quality of domain objects significantly. If you wish to play a bit with this feature, the sample Spring web application is available on GitHub.

Reference: Mapping enums done right with @Convert in JPA 2.1 from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.

So, what’s in a story?

I don't know about you, but I always feel a little nervous when it comes to writing Agile stories. I often worry whether I'm the best person to write them and whether I've got them right. The reason for the first worry is that the agile gospel says that the best stories are written by the customers, whilst the reason for the second is that my stories will be reviewed by a number of people, including various project stakeholders. All of which got me thinking: writing a story, how hard can it be? So I took a good look around to see what advice others could offer. Some of this advice was rather vague, such as that in James Shore and Shane Warden's book The Art of Agile Development, which states that "Stories represent self-contained, individual elements of the project. They tend to correspond to individual features and typically represent one or two days work". The book goes on to give other advice, such as stories "represent customer value" and should have "clear completion criteria", which complies with the agile manifesto but is still all pretty vague.

Now, I guess that some of this vagueness can be overcome by using the well-known 'As a / I want / So that' convention for writing stories, which goes something like this:

Title: <some title>
As a <role>
I want <to obtain some goal>
So that <I get some benefit>

Wikipedia puts the development of this format down to the team at ad-server company Connextra. The idea of the format is that it defines the story from a user's perspective: who the user is, what they want to do, and what the expected benefit should be. But I'm still worried. How much detail should I include in my story? What should be the scope of the work my story encompasses? Fortunately, James Shore comes to the rescue here. Remember that stories should also be used for estimation purposes, and he says, in The Art of Agile Development, that stories should take about one to two days to implement.
So I've got to find a short and concise form of words, written from the user's perspective, with enough detail to provide one or two days' work. Does this seem a little contradictory to you? Then I had a revelation: it doesn't really matter too much if your stories don't contain enough detail, can't be implemented, or are just plain wrong. The whole point of stories is that they are a communication tool; they get talked about, reviewed, sized and estimated, and that means that they'll hopefully evolve. I guess that's why agile generally defines two types of story: epic and ready. An epic story is one that's "too big to implement in a single iteration and therefore needs to be disaggregated into smaller user stories". A ready story is one that a team can implement and a Product Owner can prioritise. This, to me, seems an over-simplification, as between epic and ready stories there are often fifty shades of grey, which is a different kind of story.

I thought I'd try to demonstrate the evolution of a story by using an example, and in the time-honoured tradition of this blog, my example has to be far-fetched and contrived. In this example, I'm going to take the case of Backwater Bank inc, a good old traditional bank where your money is safe. They're so traditional that they don't have an Internet offering; they've got wooden counters, high ceilings, human tellers and cheque books; imagine the Building and Loan Company of Bedford Falls in It's a Wonderful Life. But this is the 21st century, and their new CEO wants the bank to have a brand new website where customers can do everything they need to do without visiting their local branch.
The new CEO hires Agile Cowboys inc to develop his website, and Agile Cowboys inc's CEO gets together with the bank's CEO to come up with the following story:

As a banker
I want my bank to give customers an online service
So that I can compete with the big city banks and maximise my profit

This epic story defines the whole project; it's totally unimplementable, and even if it weren't, it's not very agile having just one story in your backlog. Agile Cowboys inc's CEO figures out there's a problem with this story and puts together an agile project team, passing the story on to the Product Owner before going out for a well-deserved game of golf. The Product Owner has a meeting with the dev team and some of the bank's brightest staff (AKA the stakeholders), who start grooming the backlog of one story, coming up with a whole bunch of stories including:

Title: Create a New Customer Account
As a person in the street
I want to create my online account
So that I can become a customer of the bank

Title: Move Existing Customer Accounts Online
As a banker
I want to convert my customers' accounts into internet accounts
So that they can access them online

Title: Paying Bills
As a customer
I want to access my online account
So that I can pay my bills

Title: Viewing Statements
As a customer
I want to access my online account
So that I can view my statements

Title: Account Transfers
As a customer
I want to access my online account
So that I can transfer money between accounts

Title: Order Cheque Book
As a customer
I want to access my online account
So that I can order a new cheque book

Title: Sales of New Products
As the banker
I want to sell the customer more products
So that I can buy myself a new house in the country

Title: Display Account Balance
As a customer
I want to access my online account
So that I can see the balance of my account(s)

These are just a few of the online banking stories that spring to mind for Backwater Bank inc, but they're still epics. How can you tell?
Conventional wisdom says that you apply the INVEST mnemonic. Take, for example, the Display Account Balance story. It seems pretty straightforward coding-wise: load a balance from the database and then display it on the screen; however, things aren't that simple. It's dependent upon either the Create a New Customer Account or Move Existing Customer Accounts Online stories, which must be completed before this one. Also, a whole bunch of questions spring to mind:

- What about security? How do we authenticate and authorise the customer to see their account balance?
- Are we coding for just one account balance, or is the customer going to be able to check the balance of any of their accounts?
- What about data access? Do we access Backwater Bank inc's existing database or create a new one?
- What about server technology? Java? .Net? Do we have to support all browsers, including IE6?
- What about today's account transactions? Are they included in the balance, given that some cheques won't have cleared?

Okay, so I've added a couple of implementation questions into the mix; after all, I'm a developer and I have to know what I'm dealing with before I can estimate things. However, getting back to the stories, these questions lead to the creation of new stories:

Title: User Logs In
As the chief of internet security
I want the customer to log in and be authenticated against those things he/she is allowed to see
So that the website is secure and a customer's data remains confidential

It may also lead to the revision/splitting of an existing story.
In this case, "So that I can see the balance of my account(s)" becomes "So that I can see the balance of my current account":

Title: Display Account Balance
As a customer
I want to access my online account
So that I can see the balance of my current account

…and another story, for completion at a later date, is added to the backlog:

Title: Display Balance of All Accounts
As a customer
I want to access my online account
So that I can see the balance of all my accounts

Now, assuming that the Move Existing Customer Accounts Online story is complete, that you have chosen your server technology, set up your development environment, and have a fair understanding of Backwater Bank's database, can you implement the Display Account Balance story? Is it ready? Almost… We now need to consider acceptance criteria, and for that we relate back to some of the earlier questions. For example:

Title: Display Account Balance
As a customer
I want to access my online account
So that I can see the balance of my current account

Acceptance Criteria
1) It only needs to work using Chrome; behaviour on other browsers is undefined.
2) The balance will only consider transactions up to midnight of the previous day.
3) Positive balances will be in black, for example $244.45.
4) Overdrawn balances will be in red and use a '(nnn.nn)' format, for example (134.87).

The final mark of the Display Account Balance story being ready is an attached estimate and a priority in the backlog. This is done at the backlog refinement meeting (AKA the product grooming meeting), attended by such notorieties as the Product Owner, stakeholders, dev team and scrum master. In this meeting, if its priority is high enough, it'll get added to the next sprint, and when that starts, the dev team will get busy. And the end of the story? Well, Agile Cowboys' online project was a success, and the bank's customers used the internet to access their accounts.
The bank's staff didn't have that much to do, so they spent their time lending money on the sub-prime market and paying themselves large bonuses. In 2008 the bank collapsed, owing billions of dollars, only to be saved by you and me, the taxpayer. As for me, maybe I should end by paraphrasing the Connextra story card:

Title: Writing Good Stories
As a developer
I want to know how to write good stories
So that I can submit cards to the planning game that are clear and will be accepted in the next iteration

Is this the start of another epic?

Reference: So, what's in a story? from our JCG partner Roger Hughes at the Captain Debug's Blog.

Various ways to run Scala code

To run the examples in this tutorial, make sure that you have the latest Java and Scala distributions installed on your machine, that the environment variable SCALA_HOME points to the base directory of the Scala installation, and that $SCALA_HOME/bin is added to the PATH variable.

Using the Scala REPL

Scala ships with a command-line interactive shell called the REPL, short for Read-Eval-Print-Loop. To start the Scala REPL, open a command prompt and simply type scala. After that, you will see a new scala prompt waiting for your input. You can now type any Scala expression or code at the prompt, hit enter, and get the output immediately.

Using the Scala interpreter to run a Scala script

You can save your Scala code in a file with the .scala extension (any file extension works, but .scala is preferred) and, to run it, provide the file name as a parameter to the scala command. Create the file HelloScala.scala with the following code:

```scala
val str = "Hello " + "Scala "
println("'str' contents : " + str)
```

Now we can run this file using the command shown below:

```
shell> scala HelloScala.scala
'str' contents : Hello Scala
```

As you can observe, we do not require any class definition or similar boilerplate. We can put the code straight into the file and we are ready to run it.

Using the Scala interpreter on compiled code

A typical Scala program contains lots of code spread across many files; to run such programs we need to go through two stages: compile the source code using the Scala compiler, then run the compiled bytecode using the Scala interpreter. Let's create a file named Hello.scala with the following code:

```scala
object Hello {
  def main(args: Array[String]): Unit = {
    println("Hello, Scala !! ")
  }
}
```

A little explanation of the above program: we created an object named Hello (an object is the way Scala represents static members), and inside it we have a main method taking an array of strings as its parameter and returning Unit, which is the same as void in Java. This main method is much like the one in Java, just the Scala version of it.
Compile the file using the Scala compiler, scalac, as shown below:

```
shell> scalac Hello.scala
```

It will create a couple of class files in the current directory. To run them, we use the scala interpreter (or the java interpreter, a little more on this later) by passing the class name (without the .scala or .class extension). In our case we do the following:

```
shell> scala Hello
Hello, Scala !!
```

Using the Java interpreter

As compiled Scala code is bytecode, we can run it with the Java interpreter, java, shipped with the standard Java JRE distribution. But for this we need to put an additional library on the classpath: scala-library.jar, which is located under $SCALA_HOME/lib. To run using the Java interpreter, we use the following command (the ';' classpath separator is for Windows; use ':' on Linux or macOS):

```
shell> java -cp $SCALA_HOME/lib/scala-library.jar;. Hello
```

Using the Scala worksheet

The Scala worksheet is part of the Scala IDE for Eclipse. It is like the REPL, but much more convenient and powerful. The following is an excerpt about the worksheet from the official GitHub repo wiki: "A worksheet is a Scala file that is evaluated on save, and the result of each expression is shown in a column to the right of your program. Worksheets are like a REPL session on steroids, and enjoy 1st class editor support: completion, hyperlinking, interactive errors-as-you-type, auto-format, etc."

To create a new Scala worksheet in the Scala IDE, first create a Scala project, then right-click on the project and go to New > Scala Worksheet. It will prompt for a name for the worksheet and the folder in which it should be created. Give it any name, accept the default folder, and hit enter. You can then write any code inside the object body and hit save, and the output appears to the right of your code.

Reference: Various ways to run Scala code from our JCG partner Abhijeet Sutar at the Another Java Duke blog.

A good, lazy way to write tests

Testing. I've been thinking a lot about testing recently. As part of code reviews I've done for various projects, I've seen thousands of lines of untested code. This is not just a case of test-coverage statistics pointing this out; it's more a case of there not being any tests at all in these projects. And the two reasons I keep hearing for this sad state of affairs? "We don't have time", swiftly followed by "We'll do it when we've finished the code".

What I present here is not a panacea for testing. It covers unit testing and, specifically, unit testing of interfaces. Interfaces are good things. Interfaces define contracts. Interfaces, regardless of how many implementations they have, can be tested easily and with very little effort. Let's see how, using this class structure as an example.

CustomerService is our interface. It has two methods, to keep the example simple, and is described below. Note the javadoc: this is where the contract is described.

```java
public interface CustomerService {

    /**
     * Retrieve the customer from somewhere.
     * @param userName the userName of the customer
     * @return a non-null Customer instance compliant with the userName
     * @throws CustomerNotFoundException if a customer with the given user name cannot be found
     */
    Customer get(String userName) throws CustomerNotFoundException;

    /**
     * Persist the customer.
     * @param customer the customer to persist
     * @return the customer as it now exists in its persisted form
     * @throws DuplicateCustomerException if a customer with the user name already exists
     */
    Customer create(Customer customer) throws DuplicateCustomerException;
}
```

As we can see from the diagram, we have two implementations of this interface, RemoteCustomerService and CachingCustomerService. The implementations themselves are not shown, because they don't matter. How can I say this? Simple: we are testing the contract. We write tests for every method in the interface, along with every permutation of the contract.
For example, for get() we need to test what happens when a customer with the given user name is present, and what happens when it isn't:

```java
import org.junit.Assert;
import org.junit.Test;

public abstract class CustomerServiceTest {

    @Test
    public void testCreate() throws Exception {
        CustomerService customerService = getCustomerService();
        Customer customer = customerService.create(new Customer("userNameA"));

        Assert.assertNotNull(customer);
        Assert.assertEquals("userNameA", customer.getUserName());
    }

    @Test(expected = DuplicateCustomerException.class)
    public void testCreate_duplicate() throws Exception {
        CustomerService customerService = getCustomerService();
        Customer customer = new Customer("userNameA");
        customerService.create(customer);
        customerService.create(customer);
    }

    @Test
    public void testGet() throws Exception {
        CustomerService customerService = getCustomerService();
        customerService.create(new Customer("userNameA"));
        Customer customer = customerService.get("userNameA");

        Assert.assertNotNull(customer);
        Assert.assertEquals("userNameA", customer.getUserName());
    }

    @Test(expected = CustomerNotFoundException.class)
    public void testGet_noUser() throws Exception {
        CustomerService customerService = getCustomerService();
        customerService.get("userNameA");
    }

    public abstract CustomerService getCustomerService();
}
```

We now have a test for the contract, and at no point have we mentioned any of the implementations. This means two things:

- We don't need to duplicate the tests for each and every implementation. This is a Very Good Thing.
- None of the implementations are being tested yet. We can correct this by adding one test class per implementation. Since each test class will be almost identical, I'll just show the test of RemoteCustomerService.

```java
public class RemoteCustomerServiceTest extends CustomerServiceTest {
    public CustomerService getCustomerService() {
        return new RemoteCustomerService();
    }
}
```

And that's it!
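To see the contract test earn its keep with a second implementation, here is a sketch of a map-backed CustomerService. Note that InMemoryCustomerService and the minimal stubs for Customer and the two exceptions are my own illustration so the sketch compiles on its own; they are not from the original post, which shows only the interface:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-ins for the article's types, just so this sketch is self-contained.
class Customer {
    private final String userName;
    Customer(String userName) { this.userName = userName; }
    String getUserName() { return userName; }
}

class CustomerNotFoundException extends Exception {}
class DuplicateCustomerException extends Exception {}

interface CustomerService {
    Customer get(String userName) throws CustomerNotFoundException;
    Customer create(Customer customer) throws DuplicateCustomerException;
}

// A map-backed implementation that honours the same contract as the "real" ones.
class InMemoryCustomerService implements CustomerService {
    private final Map<String, Customer> store = new HashMap<>();

    @Override
    public Customer get(String userName) throws CustomerNotFoundException {
        Customer customer = store.get(userName);
        if (customer == null) {
            throw new CustomerNotFoundException();
        }
        return customer;
    }

    @Override
    public Customer create(Customer customer) throws DuplicateCustomerException {
        if (store.containsKey(customer.getUserName())) {
            throw new DuplicateCustomerException();
        }
        store.put(customer.getUserName(), customer);
        return customer;
    }
}
```

Its test class would then be the same one-liner as RemoteCustomerServiceTest, just returning new InMemoryCustomerService().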
We now have a very simple way to test multiple implementations of any interface: put in the hard work up front, and reduce the effort of testing each new implementation to a single, simple method.

Reference: A good, lazy way to write tests from our JCG partner Steve Chaloner at the Objectify blog.

Java File Merging Goes Semantic

Talk to any programmer and ask him how a merge should work: "it should understand the code, parse it, and then merge based on the structure", he'll most likely say. And this is precisely what SemanticMerge for Java does: it parses the files to be merged (plus the ancestor, i.e. "how the files were before changing them") and acts based on that information.

Why all this buzz about merging?

Developing software is a collaborative process. If you work on a team, sooner or later you'll end up with two developers modifying the same file. Whenever that happens, you'll have to merge. In fact, merging is not bound to creating branches (as many will say) but to developers working on the same files, even if they do it on the same branch (if two people work on the same branch, on the same files, one of them will have to merge during check-in, or "commit" in Git jargon). The new wave of distributed version control systems (DVCS) do a much better job than the previous generation when it comes to merging. That's why many jumped to Git from SVN, CVS and older alternatives. The next step is not just a better algorithm for dealing with whole files; the next step is a better mechanism to merge "inside the file", and this is exactly what SemanticMerge is all about.

SemanticMerge is about reducing the cost of keeping the code clean

There are two graphics that we always keep in mind when developing merge tools: the cost-of-change curve by Barry Boehm back in 1981, and the same graph by Kent Beck 20 years later. Generations of developers were taught "Boehm's principle": make a change in production and it will cost you a fortune compared with the same change introduced during the analysis phase. Then Beck came up with something like: keep your code clean and the cost of change will remain constant, which is the cornerstone of the agile methods. And this is precisely the mantra behind SemanticMerge: keep your code clean. Why? Because it pays off.
More often than not, you see a class that needs to be rearranged: "put these two private methods down, move the public constructor up, move the private fields to the bottom…" But the reason you don't do it is that maybe someone else is touching the file and the merge is going to be hell. This is exactly what SemanticMerge solves: it "knows" you moved a method, so it won't be fooled by that. SemanticMerge in action Let's now look at a typical semantic merge case. Suppose you have a class with several methods. The first developer moves one of the methods to a different location within the class and also modifies it. Meanwhile, a second developer modifies the method in its original location. Check out the following figure. A regular, text-based merge tool would fail to handle this scenario, but SemanticMerge is able to identify what happened to the method and propose a resolution: it identifies that the method "onBuildHeaders" has been modified in parallel (check the "c" icon at both sides of the bar where the method name is printed) and also moved in one of the contributors (check the "m" icon). Now the developer doing the merge can go and run "merge" on the "onBuildHeaders" method, which will merge only the conflicting method, preserving the new location. How does SemanticMerge work? As you may guess, SemanticMerge first parses the code of the 3 files involved (the original file plus the two contributors) and then calculates the structure of each of them: a tree-like representation of the code. Once this is done, SemanticMerge starts working with the 3 trees: first it calculates the diffs between one of the contributors and the original version, then it repeats the process with the other contributor. Step three is the merge calculation itself: it walks the two pairs of diffs and checks whether they collide. If they do, then there is a merge conflict.
A conflict can happen if the same method has been moved or modified twice, and so on. The calculation is slightly more complex than that, because conflicts must be detected not only when the same method collides but also when there are conflicts in the containers (like a "divergent" rename between parent classes, and so on). It is also worth adding that in order to track methods (or fields, properties, and so on) when they are renamed, SemanticMerge calculates a "similarity index" to see how close the bodies of the methods are, and when the match is good, it assumes it is the same element. Some numbers We reran about 40 thousand merges after downloading close to 500 open source projects. This means we pulled the repositories, found all the merges, and ran them again through the SemanticMerge tool. Doing that, we found the following numbers: 23% of the current merges are "semantic" – which means they have something that is not a "changed-changed" conflict. It can be code being moved, more than one method being added at the same position, methods moved and changed, and many more. Out of these 40 thousand merges, we found that 1.54% go from manual to totally automatic. It is not a huge number yet, but it will grow as soon as teams start using SemanticMerge (these numbers are the result of rerunning merges done with current language-agnostic merge tools, so developers tend to avoid complex changes on files). We counted the number of lines involved in merge conflicts while running the code through both SemanticMerge and a conventional text-based merge tool, and we found that with SemanticMerge there are 97% fewer lines of code involved in conflicts… which means much less work to do!! Free for Open Source While testing SemanticMerge we pulled about 500 long-lived, frantically active Open Source repositories and we "replayed" all their merges.
In the list, there were repositories like hibernate, openjdk, apache-lucene, jboss, monodevelop, mono, monomac, monogame, nhibernate… and it was really helpful. So we decided to make SemanticMerge free for developers contributing to Open Source projects, because we believe in contributing back. You may check it out here!   Reference: Java File Merging Goes Semantic from our JCG partner Pablo Santos at the SemanticMerge blog. ...
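The three-way algorithm described above (parse the base and both contributors, diff each contributor against the base, then check whether the diffs collide) can be sketched in plain Java. This is a deliberately simplified, hypothetical model, not the tool's actual implementation: here a "file" is reduced to a map from method name to method body, so a moved method is handled for free (maps ignore order), while a "changed-changed" collision is reported as a conflict:

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

// Toy three-way semantic merge: a "file" is a map of method name -> body.
public class SemanticMergeSketch {

    // Merge two contributors against their common ancestor (base).
    // Throws only on a real "changed-changed" conflict.
    public static Map<String, String> merge(Map<String, String> base,
                                            Map<String, String> yours,
                                            Map<String, String> theirs) {
        Map<String, String> result = new LinkedHashMap<>();
        Set<String> names = new LinkedHashSet<>();
        names.addAll(base.keySet());
        names.addAll(yours.keySet());
        names.addAll(theirs.keySet());
        for (String name : names) {
            String b = base.get(name), y = yours.get(name), t = theirs.get(name);
            if (Objects.equals(y, t)) {            // identical change (or both deleted)
                if (y != null) result.put(name, y);
            } else if (Objects.equals(b, y)) {     // only "theirs" changed it
                if (t != null) result.put(name, t);
            } else if (Objects.equals(b, t)) {     // only "yours" changed it
                if (y != null) result.put(name, y);
            } else {                               // changed-changed: real conflict
                throw new IllegalStateException("Conflict on method: " + name);
            }
        }
        return result;
    }
}
```

In this toy model the scenario from the article resolves automatically: one contributor moving and editing a method while the other edits a different one produces no conflict, and both edits survive in the result.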

JVM Performance Magic Tricks

HotSpot, the JVM we all know and love, is the brain in which our Java and Scala juices flow. Over the years, it's been improved and tweaked by more than a handful of engineers, and with every iteration, the speed and efficiency of its code execution is nearing that of native compiled code. At its core lies the JIT ("Just-In-Time") compiler. The sole purpose of this component is to make your code run fast, and it is one of the reasons HotSpot is so popular and successful. What does the JIT compiler actually do? While your code is being executed, the JVM gathers information about its behavior. Once enough statistics are gathered about a hot method (10K invocations is the default threshold), the compiler kicks in, and converts that method's platform-independent "slow" bytecode into an optimized, lean, mean compiled version of itself. Some optimizations are obvious: simple method inlining, dead code removal, replacing library calls with native math operations, etc. The JIT compiler doesn't stop there, mind you. Here are some of the more interesting optimizations performed by it:

Divide and conquer

How many times have you used the following pattern:

StringBuilder sb = new StringBuilder("Ingredients: ");

for (int i = 0; i < ingredients.length; i++) {
    if (i > 0) {
        sb.append(", ");
    }
    sb.append(ingredients[i]);
}

return sb.toString();

Or perhaps this one:

boolean nemoFound = false;
for (int i = 0; i < fish.length; i++) {
    String curFish = fish[i];
    if (!nemoFound) {
        if (curFish.equals("Nemo")) {
            System.out.println("Nemo! There you are!");
            nemoFound = true;
            continue;
        }
    }
    if (nemoFound) {
        System.out.println("We already found Nemo!");
    } else {
        System.out.println("We still haven't found Nemo :(");
    }
}

What both these loops have in common is that in both cases the loop does one thing for a while, and then another thing from a certain point on. The compiler can spot these patterns, and split the loops into cases, or "peel" several iterations. Let's take the first loop for example.
The if (i > 0) check is false for the first iteration only, and from that point on it always evaluates to true. Why check the condition every time, then? The compiler would compile that code as if it were written like so:

StringBuilder sb = new StringBuilder("Ingredients: ");

if (ingredients.length > 0) {
    sb.append(ingredients[0]);
    for (int i = 1; i < ingredients.length; i++) {
        sb.append(", ");
        sb.append(ingredients[i]);
    }
}

return sb.toString();

This way, the redundant if (i > 0) is removed, even if some code might get duplicated in the process, as speed is what it's all about.

Living on the edge

Null checks are bread-and-butter. Sometimes null is a valid value for our references (e.g. indicating a missing value, or an error), but sometimes we add null checks just to be on the safe side, and some of these checks may never actually fail. One classic example is an assertion like this:

public static String l33tify(String phrase) {
    if (phrase == null) {
        throw new IllegalArgumentException("phrase must not be null");
    }
    return phrase.replace('e', '3');
}

If your code behaves well, and never passes null as an argument to l33tify, the assertion will never fail. After executing this code many, many times without ever entering the body of the if statement, the JIT compiler might make the optimistic assumption that this check is most likely unnecessary. It would then proceed to compile the method, dropping the check altogether, as if it were written like so:

public static String l33tify(String phrase) {
    return phrase.replace('e', '3');
}

This can result in a significant performance boost, which may be a pure win in most cases. But what if that happy-path assumption eventually proves to be wrong? Since the JVM is now executing native compiled code, a null reference would not result in a fuzzy NullPointerException, but rather in a real, harsh memory access violation.
The JVM, being the low-level creature that it is, would intercept the resulting segmentation fault, recover, and follow up with a de-optimization — the compiler can no longer assume that the null check is redundant: it recompiles the method, this time with the null check in place.

Virtual insanity

One of the main differences between the JVM's JIT compiler and other static ones such as C++ compilers is that the JIT compiler has dynamic runtime data on which it can rely when making decisions. Method inlining is a common optimization in which the compiler takes a complete method and inserts its code into another's, in order to avoid a method call. This gets a little tricky when dealing with virtual method invocations (or dynamic dispatch). Take the following code for example:

public class Main {
    public static void perform(Song s) {
        s.sing();
    }
}

public interface Song {
    void sing();
}

public class GangnamStyle implements Song {
    @Override
    public void sing() {
        System.out.println("Oppan gangnam style!");
    }
}

public class Baby implements Song {
    @Override
    public void sing() {
        System.out.println("And I was like baby, baby, baby, oh");
    }
}

// More implementations here

The method perform might be executed millions of times, and each time an invocation of the method sing takes place. Invocations are costly, especially ones such as this one, since it needs to dynamically select the actual code to execute each time according to the runtime type of s. Inlining seems like a distant dream at this point, doesn't it? Not necessarily! After executing perform a few thousand times, the compiler might decide, according to the statistics it gathered, that 95% of the invocations target an instance of GangnamStyle. In these cases, the HotSpot JIT can perform an optimistic optimization with the intent of eliminating the virtual call to sing.
In other words, the compiler will generate native code for something along these lines:

public static void perform(Song s) {
    if (s fastnativeinstanceof GangnamStyle) {
        System.out.println("Oppan gangnam style!");
    } else {
        s.sing();
    }
}

Since this optimization relies on runtime information, it can eliminate most of the invocations to sing, even though it is polymorphic. The JIT compiler has a lot more tricks up its sleeve, but these are just a few to give you a taste of what goes on under the hood when our code is executed and optimized by the JVM.

Can I help?

The JIT compiler is a compiler for straightforward people; it is built to optimize straightforward writing, and it searches for patterns which appear in everyday standard code. The best way to help your compiler is to not try so hard to help it — just write your code as you otherwise would.   Reference: JVM Performance Magic Tricks from our JCG partner Niv Steingarten at the Takipi blog. ...
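If you want to watch the compile threshold from this article in action, you can run a deliberately hot method and ask HotSpot to log its compilations. The sketch below is a minimal, self-contained example (the class name is made up for illustration); -XX:+PrintCompilation is a standard HotSpot diagnostic flag, and the 20,000 iterations comfortably exceed the default ~10K threshold mentioned above:

```java
// Run with: java -XX:+PrintCompilation HotMethodDemo
// Once sum() crosses the invocation threshold you should see it appear
// in the compilation log as HotSpot replaces its bytecode with native code.
public class HotMethodDemo {

    // A tiny, hot method: a prime candidate for JIT compilation and inlining.
    static int sum(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        long total = 0;
        // Invoke well past the default ~10K threshold so the JIT kicks in.
        for (int i = 0; i < 20_000; i++) {
            total += sum(i, 1);
        }
        System.out.println(total); // prints 200010000 (the sum 1 + 2 + ... + 20000)
    }
}
```

The exact lines in the log vary by JVM version and tiered-compilation settings, but on a typical HotSpot build the sum method shows up once the loop warms up.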

Java EE CDI dependency disambiguation example

In this tutorial we shall show you how to handle dependency disambiguation in CDI beans. In CDI we can inject multiple implementations of an interface into different clients in an application. The problem of dependency ambiguity is how a client can call a specific implementation among the available ones, without any errors occurring. To see how we can resolve this ambiguity when injecting beans into an application, we will create a simple service. We will create two implementations of the service and then we will inject both implementations into a servlet in our application. We will make use of a @Qualifier, as explained below. Our preferred development environment is Eclipse. We are using the Eclipse Juno (4.2) version, along with the Maven Integration plugin version 3.1.0. You can download Eclipse from here and the Maven Plugin for Eclipse from here. The installation of the Maven plugin for Eclipse is out of the scope of this tutorial and will not be discussed. Tomcat 7 is the application server used. Let's begin, 1. Create a new Maven project Go to File -> Project -> Maven -> Maven Project. In the "Select project name and location" page of the wizard, make sure that the "Create a simple project (skip archetype selection)" option is unchecked, and hit "Next" to continue with the default values. Here the Maven archetype for creating a web application must be added. Click on "Add Archetype" and add the archetype. Set the "Archetype Group Id" variable to "org.apache.maven.archetypes", the "Archetype Artifact Id" variable to "maven-archetype-webapp" and the "Archetype Version" to "1.0". Click on "OK" to continue. In the "Enter an artifact id" page of the wizard, you can define the name and main package of your project. Set the "Group Id" variable to "com.javacodegeeks.snippets.enterprise" and the "Artifact Id" variable to "cdibeans".
The aforementioned selections compose the main project package as "com.javacodegeeks.snippets.enterprise.cdibeans" and the project name as "cdibeans". Set the "Packaging" variable to "war", so that a war file will be created to be deployed to the Tomcat server. Hit "Finish" to exit the wizard and create your project. The Maven project structure is shown below. It consists of the following folders:

/src/main/java folder, which contains the source files for the dynamic content of the application,
/src/test/java folder, which contains all source files for unit tests,
/src/main/resources folder, which contains configuration files,
/target folder, which contains the compiled and packaged deliverables,
/src/main/webapp/WEB-INF folder, which contains the deployment descriptors for the Web application,
the pom.xml, the project object model (POM) file: the single file that contains all project related configuration.

2. Add all the necessary dependencies

You can add the dependencies in Maven's pom.xml file, by editing it at the "pom.xml" page of the POM editor, as shown below:

pom.xml:

<project xmlns="" xmlns:xsi="" xsi:schemaLocation="">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.javacodegeeks.snippets.enterprise.cdi</groupId>
  <artifactId>cdibeans</artifactId>
  <packaging>war</packaging>
  <version>0.0.1-SNAPSHOT</version>
  <name>cdibeans Maven Webapp</name>
  <url></url>
  <dependencies>
    <dependency>
      <groupId>org.jboss.weld.servlet</groupId>
      <artifactId>weld-servlet</artifactId>
      <version>1.1.10.Final</version>
    </dependency>
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>jstl</artifactId>
      <version>1.2</version>
    </dependency>
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>javax.servlet-api</artifactId>
      <version>3.0.1</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.glassfish</groupId>
      <artifactId>javax.faces</artifactId>
      <version>2.1.7</version>
    </dependency>
  </dependencies>
  <build>
    <finalName>cdibeans</finalName>
  </build>
</project>

As you can see, Maven
manages library dependencies declaratively. A local repository is created (by default under the {user_home}/.m2 folder) and all required libraries are downloaded and placed there from public repositories. Furthermore, intra-library dependencies are automatically resolved and manipulated.

3. Create a simple Service

We make use of a simple service that creates a greeting message for the application that uses it. The service is an interface with a method that produces the greeting message:

package com.javacodegeeks.snippets.enterprise.cdibeans;

public interface GreetingCard {

    void sayHello();
}

We create two implementations of the service. Each implementation produces a different message, as shown below:

package com.javacodegeeks.snippets.enterprise.cdibeans.impl;

import com.javacodegeeks.snippets.enterprise.cdibeans.GreetingCard;

public class GreetingCardImpl implements GreetingCard {

    public void sayHello() {
        System.out.println("Hello!!!");
    }
}

package com.javacodegeeks.snippets.enterprise.cdibeans.impl;

import com.javacodegeeks.snippets.enterprise.cdibeans.GreetingCard;

public class AnotherGreetingCardImpl implements GreetingCard {

    public void sayHello() {
        System.out.println("Have a nice day!!!");
    }
}

4. Use of the Service

In order to inject the service into another bean, we can make use of a @Qualifier. CDI allows us to create our own Java annotation, and then use it at the injection points of our application to get the correct implementation of the GreetingCard according to the GreetingType of the bean.
package com.javacodegeeks.snippets.enterprise.cdibeans;

import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import javax.inject.Qualifier;

@Qualifier
@Retention(RUNTIME)
@Target({ FIELD, TYPE, METHOD })
public @interface Greetings {

    GreetingType value();
}

The GreetingType is an enumeration, as shown below:

package com.javacodegeeks.snippets.enterprise.cdibeans;

public enum GreetingType {

    HELLO, HI;
}

Now, the service implementations use the annotation, as shown below:

package com.javacodegeeks.snippets.enterprise.cdibeans.impl;

import com.javacodegeeks.snippets.enterprise.cdibeans.GreetingCard;
import com.javacodegeeks.snippets.enterprise.cdibeans.GreetingType;
import com.javacodegeeks.snippets.enterprise.cdibeans.Greetings;

@Greetings(GreetingType.HELLO)
public class GreetingCardImpl implements GreetingCard {

    public void sayHello() {
        System.out.println("Hello!!!");
    }
}

package com.javacodegeeks.snippets.enterprise.cdibeans.impl;

import com.javacodegeeks.snippets.enterprise.cdibeans.GreetingCard;
import com.javacodegeeks.snippets.enterprise.cdibeans.GreetingType;
import com.javacodegeeks.snippets.enterprise.cdibeans.Greetings;

@Greetings(GreetingType.HI)
public class AnotherGreetingCardImpl implements GreetingCard {

    public void sayHello() {
        System.out.println("Have a nice day!!!");
    }
}

5.
Inject the service in a servlet

We create a simple servlet and inject both implementations of the service, using the @Inject annotation provided by CDI, as shown below:

package com.javacodegeeks.snippets.enterprise.cdibeans.servlet;

import java.io.IOException;
import java.io.PrintWriter;

import javax.inject.Inject;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.javacodegeeks.snippets.enterprise.cdibeans.GreetingCard;
import com.javacodegeeks.snippets.enterprise.cdibeans.GreetingType;
import com.javacodegeeks.snippets.enterprise.cdibeans.Greetings;

@WebServlet(name = "greetingServlet", urlPatterns = {"/sayHello"})
public class GreetingServlet extends HttpServlet {

    private static final long serialVersionUID = 2280890757609124481L;

    @Inject
    @Greetings(GreetingType.HELLO)
    private GreetingCard greetingCard;

    @Inject
    @Greetings(GreetingType.HI)
    private GreetingCard anotherGreetingCard;

    public void init() throws ServletException {
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<h1>Greetings printed to the server console!</h1>");
        // sayHello() returns void and prints to the console,
        // so it cannot be concatenated into the response directly.
        greetingCard.sayHello();
        anotherGreetingCard.sayHello();
    }

    public void destroy() {
    }
}

To run the example we must build the project with Maven, and then place the produced war file in the webapps folder of Tomcat. Then, we can hit: http://localhost:8080/cdibeans/sayHello and the result is the one shown below. Note that dependency ambiguity may also occur when using Producer methods to inject CDI beans, as shown in the Java EE CDI Producer methods tutorial.   This was a tutorial of Java EE CDI dependency disambiguation.   Download the source code of this tutorial: ...

Android Studio Tutorial: Getting started with the new Android IDE

On May 15th, during Google's I/O developer conference, a new developer suite called Android Studio was announced. It is a very powerful IDE based on the famous IntelliJ IDE. Android Studio offers a lot more options for Android developers on top of IntelliJ's fantastic features and deep code analysis, and it seems to be aimed at Android professionals who want to make the development process much faster and be more productive in general. In this example we are going to see how to install Android Studio, as well as how to create and launch a new Android project with it. 1. Download and Install Android Studio Android Studio comes with everything you need in order to start developing Android applications instantly. If you already have the Android SDK Manager and Android Virtual Devices (AVDs) installed, that's fine; it will work with those, no problem whatsoever. Go to the home page of the Android project to download the bundle for your platform. The installer will install Android Studio 0.1.1, and you can update it later from inside the application. When the download is finished, launch the installer and specify the installation folder. When the installation is finished, check the option to Start Android Studio and click Finish. 2. Create a new Android Project When Android Studio launches, it will ask whether you want to import settings from a previous installation (I guess this would apply to a previous installation of IntelliJ with or without the Android bundle). After that step, you will find yourself in the Android Studio Quick Start window. You can specify many settings and configuration options from there, but let's go ahead and select "New Project". Then you have to specify the project name, the main package name, the target platforms and also the theme of the application.
You can also choose to create a new activity and select a custom launcher icon, and then customize that launcher icon. Next, you can create a new blank activity for your project; you have to specify a name for the activity and the corresponding layout XML file name. All the above should seem familiar if you're using the Eclipse IDE for Android development. 4. Run the Application After clicking Finish you will find yourself in the IDE. You can select Help -> Check For Updates to update Android Studio to the latest available version. You can go ahead and click the Run button to run your application. You will be prompted to choose the device you want to install your application to. I have already installed AVDs on my system, so I can work with those. This should come in really handy when you have multiple real devices connected to your system and you want to install and run the application on them as well. 5. Live layout Editing One of the most impressive characteristics (at first glance) of this IDE is the new live layout editing, where you can preview your user interface on a number of real devices, both smartphones and tablets, on the fly as you are editing the code. You can also choose to preview your code in different orientations. It looks and works great. 6. Video This is a video with a small preview of Android Studio's numerous features. 7. Download the Android Studio Project Download the Android Studio Project of this tutorial: 8. Conclusion This was just an introduction that will get you up and running with Android Studio. I'm sure that there are tons of features to explore. Hope you are excited about Android Studio! ...

Design Patterns: Builder

Sometimes there is a need to create a complex object in an application. One solution for this is the Factory pattern; another is the Builder design pattern, and in some situations you can even combine the two. In this article I want to examine the Builder design pattern. The first thing I need to say is that it is a creational pattern. In what situations should you use the Builder design pattern? Definitely when creation of the object requires plenty of other independent objects; when you want to hide the creation process from the user; and when you can have different representations of the object at the end of the construction process. Let's proceed with a code example. UML scheme of the pattern: As I mentioned, the Builder pattern is a creational pattern. This implies the creation of some object (product) at the end of the process. The product is created with the help of a concrete builder; in its turn, the builder has some parent builder class or interface. The final piece of the pattern is a director class, which is responsible for driving the concrete builder for the appropriate product. The example will be based on a famous and epic computer game – StarCraft. The role of the product is played by a Zealot, a simple Protoss battle unit. The director's role is played by a Gateway, and the concrete builder is a ZealotBuilder.
All the code I will provide below. Abstract class for the game units:

public abstract class Unit {

    protected int hitPoints;
    protected int armor;
    protected int damage;

    public int getHitPoints() {
        return hitPoints;
    }

    public void setHitPoints(int hitPoints) {
        this.hitPoints = hitPoints;
    }

    public int getArmor() {
        return armor;
    }

    public void setArmor(int armor) {
        this.armor = armor;
    }

    public int getDamage() {
        return damage;
    }

    public void setDamage(int damage) {
        this.damage = damage;
    }
}

Class of the Zealot (product):

public class Zealot extends Unit {

    public String toString() {
        return "Zealot is ready!"
                + "\nHitPoints: " + getHitPoints()
                + "\nArmor: " + getArmor()
                + "\nDamage: " + getDamage();
    }
}

Interface of the builder:

public interface UnitBuilder {

    public void buildHitPoints();
    public void buildArmor();
    public void buildDamage();
    public Unit getUnit();
}

Implementation of the builder interface:

public class ZealotBuilder implements UnitBuilder {

    private Unit unit;

    public ZealotBuilder() {
        unit = new Zealot();
    }

    @Override
    public void buildHitPoints() {
        unit.setHitPoints(100);
    }

    @Override
    public void buildArmor() {
        unit.setArmor(50);
    }

    @Override
    public void buildDamage() {
        unit.setDamage(8);
    }

    @Override
    public Unit getUnit() {
        return unit;
    }
}

The Gateway (Director) class:

public class Gateway {

    public Unit constructUnit(UnitBuilder builder) {
        builder.buildHitPoints();
        builder.buildArmor();
        builder.buildDamage();
        return builder.getUnit();
    }
}

And now let's see how it all works together:

...
public static void main(String[] args) {
    UnitBuilder builder = new ZealotBuilder();
    Gateway director = new Gateway();
    Unit product = director.constructUnit(builder);
    System.out.println(product);
}
...

The result of the last code snippet is:

Zealot is ready!
HitPoints: 100
Armor: 50
Damage: 8

So as you can see, the Builder design pattern is really helpful in situations when you need to create complex objects.
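A popular variation worth knowing, not covered above, is the fluent builder that Joshua Bloch describes in Effective Java: the builder is a static nested class, each build step returns the builder itself so calls can be chained, and the product is immutable. Here is a sketch applying that style to the same (made-up) Zealot numbers; the class names are hypothetical:

```java
// Fluent variant of the Builder pattern: each setter returns "this" for chaining.
public class UnitStats {
    private final int hitPoints;
    private final int armor;
    private final int damage;

    private UnitStats(Builder b) {
        this.hitPoints = b.hitPoints;
        this.armor = b.armor;
        this.damage = b.damage;
    }

    public int getHitPoints() { return hitPoints; }
    public int getArmor() { return armor; }
    public int getDamage() { return damage; }

    public static class Builder {
        private int hitPoints;
        private int armor;
        private int damage;

        public Builder hitPoints(int hp) { this.hitPoints = hp; return this; }
        public Builder armor(int a) { this.armor = a; return this; }
        public Builder damage(int d) { this.damage = d; return this; }

        // The product is assembled in one step and is immutable afterwards.
        public UnitStats build() { return new UnitStats(this); }
    }
}
```

For example: UnitStats zealot = new UnitStats.Builder().hitPoints(100).armor(50).damage(8).build(). No director class is needed in this variant, and the product cannot be mutated after construction, which is handy when units are shared between threads.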
The example in this tutorial wasn't really hard, but now you can imagine in what situations you can apply this approach. You can find more articles about design patterns here.   Reference: Design Patterns: Builder from our JCG partner Alexey Zvolinskiy at the Fruzenshtein's notes blog. ...
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.