
What's New Here?

Dependency injection with Scala macros: auto-wiring

You can look at dependency injection as a fancy name for passing parameters to a function (or constructor arguments to a constructor). Usually, however, DI containers do much more than that. Among other things, one very nice feature is auto-wiring: instantiating the right objects with the right arguments. The most popular frameworks (Spring, Guice, CDI/Weld) accomplish this task at runtime using reflection.

[rant] Doing the wiring at runtime with reflection has its downsides, though. Firstly, there's no compile-time checking that each dependency is satisfied. Secondly, we lose some of the flexibility we would have when doing things by hand, as we have to obey the rules by which the objects are created "automatically". For example, if for some reason an object needs to be created manually, this requires a level of indirection (boilerplate), namely a factory. Finally, often the dependency injection is "global": there is a single container with all the objects, and it's hard to create local/parametrized "universes" (Guice is an exception here). Finally-finally, some frameworks do classpath scanning, which is slow and can sometimes give unexpected results. [/rant]

Way too magical for such a simple thing. But isn't what we really want just a way to have all the 'new' invocations with correct parameters generated for us? If you're using Scala and want code generation, the obvious answer is macros! To finally show some code, given:

class A
class B
class C(a: A, b: B)
class D(b: B, c: C)

it would be nice to have:

val a = wire[A]
val theB = wire[B] // 'theB', not 'b', just to show that we can use any name
val theC = wire[C]
val d = wire[D]

transformed to:

val a = new A()
val theB = new B()
val theC = new C(a, theB)
val d = new D(theB, theC)

Turns out it's possible, and not even very complicated. A proof-of-concept is available on GitHub. It's very primitive and currently supports only one specific way of defining classes/wirings, but it works. If a dependency is missing, there's a compile error. To check it out, simply clone the repo, run sbt and then invoke the task: run-main com.softwaremill.di.DiExampleRunner (implementation). During compilation, you should see some info messages regarding the generated code, e.g.:

[info] /Users/adamw/(...)/DiExample.scala:13: Generated code: new C(a, theB)
[info]     val c = wire[C]
[info]         ^

and then proof that the code was indeed generated correctly: when the code is executed, the instances are printed to stdout so that you can see the arguments.

The macro here is of course the wire method (implementation). What it does is first check what the parameters of the class's constructor are, and then, for each parameter, try to find a val of the desired type defined in the enclosing class (the findWiredOfType method; see also this StackOverflow question on why the search is limited to the enclosing class). Finally, it assembles a tree corresponding to invoking the constructor with the right arguments:

Apply(
  Select(New(Ident([class's type])), nme.CONSTRUCTOR),
  List(Ident([arg1]), Ident([arg2]), ...))

This concept can be extended in many ways. Firstly, by adding support for sub-typing (now only exact type matches will work). Then, there's the ability to define the wirings not only in a class, but also in methods; or extending the search to mixed-in traits, so that you could split the wire definitions among multiple traits ("modules"?). Notice that we could also have full flexibility in how we access the wired values; each could be a val, lazy val or a def.
There's also support for scoping, factories, singletons, configuration values, and more; for example (dependency injection of the future!):

// 'scopes'
val a = wire[X]
lazy val b = wire[Y]
def c = wire[Z]
val d = provided(manuallyCreatedInstance)

// override a single dependency
val a = wire[X].with(anotherYInstance)

// factories: p1, p2, ... are used in the constructor where needed
def e(p1: T, p2: U, ...) = wire[X]

// by-name binding for configuration parameters; whenever a class has a
// 'maxConnections' constructor argument, this value is used.
val maxConnections = conf(10)

A recent project by Guice's creator, Bob Lee, goes in the same direction: Dagger (mainly targeted at Android, as far as I know) uses an annotation processor to generate the wiring code; at runtime it's just plain constructor invocations, no reflection. It is similar here, with the difference that we use Scala's macros. What do you think of such an approach to DI?

Reference: Dependency injection with Scala macros: auto-wiring from our JCG partner Adam Warski at the Blog of Adam Warski blog.

Migrating from Hibernate 3 to 4 with Spring integration

This week it was time to upgrade our code base to the latest Hibernate 4.x. We had postponed our migration (still being on Hibernate 3.3) since the newer maintenance releases of the 3.x branch required some API changes which were apparently still in flux. An example is the UserType API, which was still showing flaws and was going to be finalized in Hibernate 4.

The migration went quite smoothly. Adapting the UserTypes to the new interface was pretty straightforward. There were some hiccups here and there, but nothing painful. The thing to watch out for is the Spring integration. If you have been using Spring with Hibernate before, you will be using the LocalSessionFactoryBean (or AnnotationSessionFactoryBean) for creating the SessionFactory. For Hibernate 4 there is a separate one in its own package: org.springframework.orm.hibernate4 instead of org.springframework.orm.hibernate3. The LocalSessionFactoryBean from the hibernate4 package will do for both mapping files as well as annotated entities, so you only need one for both flavors.

When the upgrade was done, all our tests were running and the applications were also running fine on Tomcat using the local Hibernate transaction manager. However, when running on Glassfish using JTA transactions (and Spring's JtaTransactionManager) we got the 'No Session found for current thread' exception when calling sessionFactory.getCurrentSession(). So it seemed I had missed something in relation to the JTA configuration.

As you normally do with the Spring-Hibernate integration, you let Spring drive the transactions. You specify a transaction manager and Spring makes sure all resources are registered with the transaction manager and eventually calls commit or rollback. Spring integrates with Hibernate, so it makes sure the session is flushed before a transaction commit. When using Hibernate 3 and the Hibernate 3 Spring integration, the session is bound to a thread local. This technique allows you to use sessionFactory.getCurrentSession() to obtain an open session anywhere inside the active transaction. This is the case both for the local HibernateTransactionManager and for the JtaTransactionManager. However, as of the Hibernate 4 integration, the Hibernate session is bound to the currently running JTA transaction instead. From a user point of view nothing changes, as sessionFactory.getCurrentSession() will still do its job. But when running JTA this means that Hibernate must be able to look up the transaction manager to be able to register the session with the currently running transaction. This is new if you are coming from Hibernate 3 with Spring; in fact, there you did not have to configure anything in regards to transactions in your Hibernate SessionFactory (or LocalSessionFactoryBean) configuration. As it turned out, with the Hibernate 4 Spring integration the transaction manager lookup configuration is effectively done by Hibernate and not by Spring's LocalSessionFactoryBean.

The solution was pretty simple; adding this to the Hibernate (LocalSessionFactoryBean) configuration solved our problems:

<prop key="hibernate.transaction.jta.platform">
  org.hibernate.service.jta.platform.internal.SunOneJtaPlatform
</prop>

'SunOneJtaPlatform' should then be replaced by the subclass that reflects your container. See the API docs for the available subclasses. What this class does is actually tell Hibernate how it can look up the transaction manager in your environment.
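For reference, a minimal Java-based Spring configuration carrying this property might look like the sketch below. This is my illustration rather than code from the original post: the bean name, the scanned package and the injected DataSource are assumptions.

import java.util.Properties;
import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.hibernate4.LocalSessionFactoryBean;

@Configuration
public class HibernateConfig {

    @Bean
    public LocalSessionFactoryBean sessionFactory(DataSource dataSource) {
        LocalSessionFactoryBean factory = new LocalSessionFactoryBean();
        factory.setDataSource(dataSource);
        // Hypothetical package; point this at your own entities.
        factory.setPackagesToScan("com.example.domain");

        Properties props = new Properties();
        // Tells Hibernate how to look up the container's transaction manager;
        // replace SunOneJtaPlatform with the subclass matching your container.
        props.setProperty("hibernate.transaction.jta.platform",
                "org.hibernate.service.jta.platform.internal.SunOneJtaPlatform");
        factory.setHibernateProperties(props);
        return factory;
    }
}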
If you don't configure this, there is nothing for Hibernate to bind the session to, hence the exception. There is also a property:

hibernate.current_session_context_class

which should point to org.springframework.orm.hibernate4.SpringSessionContext, but this is automatically done by the LocalSessionFactoryBean, so there is no need to specify it in the configuration.

While this solved my 'No Session found for current thread' problem, there was still another one. Changes made to the database inside a transaction were not visible after a successful transaction commit. After some research I found out that no one was calling session.flush(). With the Hibernate 3 integration there was a SpringSessionSynchronization registered which would call session.flush() prior to transaction commit (in the beforeCommit method). In the Hibernate 4 integration there is a SpringFlushSynchronization registered which, as its name says, will also perform a flush. However, this is only implemented in the actual "flush" method of the TransactionSynchronization, and this method never gets called.

I raised an issue for this on the Spring bug tracker, including two sample applications which illustrate the problem clearly. The first uses Hibernate 3 and the other is the exact same application, but this time using Hibernate 4. The second shows that no information is actually persisted to the database (both apps were tested on the latest Glassfish 3.1.2). Until then, the best workaround seemed to be creating a flushing aspect that wraps around @Transactional annotations. Using the order attribute you can order the transactional annotation to be applied before your flushing aspect. This way your aspect still runs inside the transaction and is able to flush the session. It can obtain the session the normal way by injecting the SessionFactory (one way or the other) and then calling sessionFactory.getCurrentSession().flush():

<tx:annotation-driven order="1"/>

<bean id="flushingAspect" class="...">
  <property name="order" value="2"/>
</bean>

or, if using the annotation configuration:

@EnableTransactionManagement(order=1)

Update: There was some feedback on the issue. As it turns out, it does not seem to be a bug in the Spring Hibernate integration, but a missing Hibernate configuration element. Apparently 'hibernate.transaction.factory_class' needs to be set to JTA; the default is JDBC, which depends on the Hibernate Transaction API for explicit transaction management. By setting this to JTA, the necessary synchronizations are registered by Hibernate, which will perform the flush. See the Spring issue: https://jira.springsource.org/browse/SPR-9404

Update 2: As it turns out, after correcting the configuration as proposed on the preceding issue, there was still a problem. I'm not going to repeat everything; you can find detailed information in the second bug entry I submitted here: https://jira.springsource.org/browse/SPR-9480. It basically comes down to the fact that in a JTA scenario with the JtaTransactionFactory configured, Hibernate does not detect that it is in a transaction and will therefore not execute intermediate flushes. With the JtaTransactionFactory configured, you are expected to control the transaction via the Hibernate API rather than via an external (Spring in our case) mechanism. One of the side effects is that you might be reading stale data in some cases.
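As an aside, the post doesn't show the flushing aspect mentioned above. Purely as an illustration of that workaround, it could look roughly like the sketch below. This is my own sketch, not the author's code: the class name and wiring are assumptions, and it presumes @AspectJ auto-proxying is enabled.

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.Ordered;

@Aspect
public class FlushingAspect implements Ordered {

    @Autowired
    private SessionFactory sessionFactory;

    // Runs around any method annotated with @Transactional. With order=2 here
    // and order=1 on <tx:annotation-driven/>, the transaction advice wraps
    // this aspect, so the flush still happens inside the active transaction.
    @Around("@annotation(org.springframework.transaction.annotation.Transactional)")
    public Object flushAfterInvocation(ProceedingJoinPoint pjp) throws Throwable {
        Object result = pjp.proceed();
        sessionFactory.getCurrentSession().flush();
        return result;
    }

    @Override
    public int getOrder() {
        return 2;
    }
}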
An example of the stale-read behaviour:

//[START TX1]
Query query = session.createQuery("from Person p where p.firstName = :firstName and p.lastName = :lastName");
Person johnDoe = (Person) query.setString("firstName", "john").setString("lastName", "doe").uniqueResult();
johnDoe.setFirstName("Jim");
Person jimDoe = (Person) query.setString("firstName", "jim").setString("lastName", "doe").uniqueResult();
//[END TX1]

When performing the second query at line 5, Hibernate should detect that it has to flush the previous update made to the attached entity on line 4 (updating the name from 'john' to 'jim'). However, because Hibernate is not aware that it is running inside an active transaction, the intermediate flushing doesn't work. It will only flush once, before the transaction commits. This results in stale data, as the second query will not find 'jim' and returns null instead.

The solution (see the reply from Juergen Hoeller in the issue) is configuring hibernate.transaction.factory_class to org.hibernate.transaction.CMTTransactionFactory instead. At first I was a bit sceptical, as CMT makes me think of EJB containers. However, if you read the Javadoc on CMTTransaction it does make sense:

/**
 * Implements a transaction strategy for Container Managed Transaction (CMT) scenarios. All work is done in
 * the context of the container managed transaction.
 *
 * The term 'CMT' is potentially misleading; the pertinent point simply being that the transactions are being
 * managed by something other than the Hibernate transaction mechanism.
 *
 * Additionally, this strategy does *not* attempt to access or use the {@link javax.transaction.UserTransaction} since
 * in the actual case CMT access to the {@link javax.transaction.UserTransaction} is explicitly disallowed. Instead
 * we use the JTA {@link javax.transaction.Transaction} object obtained from the {@link TransactionManager}

After that everything seems to work fine. So to conclude: if you want Hibernate to manage the JTA transaction via the UserTransaction, you should use JtaTransactionFactory; in that case you must use the Hibernate API to control the transaction. If someone else is managing the transaction (Spring, an EJB container, ...) you should use CMTTransactionFactory instead. Hibernate will then revert to registering synchronisations, checking for an active javax.transaction.Transaction using the javax.transaction.TransactionManager. If any other issue pops up I'll update this entry accordingly.

Reference: Migrating from Hibernate 3 to 4 with Spring integration from our JCG partner Koen Serneels at the Koen Serneels – Technology blog.

Effective Sprint Goals

Working with a sprint goal is a powerful agile practice. This post helps you understand what sprint goals are, why they matter, and how to write and track them.

The Sprint Goal Explained

A sprint goal summarises the desired outcome of an iteration. It provides a shared objective, and it states why it's worthwhile undertaking the sprint. Sample sprint goals are "Learn about the right user interaction for the registration feature" or "Provide the missing reporting functionality". Every sprint should have one shared goal. This ensures that everyone moves in the same direction. Once the goal has been selected, the team implements it. Stakeholder feedback is then used to understand if the goal has been met.

Sprint Goal Benefits

I have found that working with a sprint goal has five main benefits, particularly for new products and new features: it facilitates prioritisation and effective teamwork, it makes it easier to obtain and analyse feedback, and it helps with stakeholder communication.

Supports Prioritisation

A shared sprint goal facilitates prioritisation: it makes it easier to determine which stories should be worked on in the next cycle. Here is how I do it: I first select the goal. Then I explore which epics have to contribute to it, and I break out small detailed stories from the epics. Finally, I order the new ready stories based on their contribution to the goal.

Creates Focus and Facilitates Teamwork

Sprint goals create focus, facilitate teamwork, and provide the basis for an effective sprint planning session. A shared objective guides the development work, encourages creativity, and enables commitment. Teams don't commit to individual stories in Scrum; they commit to the sprint goal.

Helps Obtain Relevant Feedback

Employing a sprint goal makes it easier to collect the right feedback. If the goal is to evaluate the user experience, for instance, then it is desirable to collect feedback from actual target users. User representatives should therefore attend the sprint review meeting. But if the goal is to reduce technical risk by evaluating different object-relational mapping tools, then it is probably more appropriate to invite an experienced developer or architect from another team to discuss the solution.

Makes it Easier to Analyse the Feedback

Working with a sprint goal helps analyse the feedback obtained. If the team works on several unrelated stories in the same sprint, then it can be tricky to relate the feedback to the right user story. This makes it harder to understand if the right product with the right features is being built.

Supports Stakeholder Communication

Finally, imagine meeting the big boss in the elevator and being asked what you are working on. Chances are that without a sprint goal, the boss will be bored to death, jump onto a specific story, or will have left the elevator before you have finished listing all the things you do. Using a sprint goal helps you communicate the objective of the sprint to the stakeholders. This allows them to understand what the sprint is about and to decide if they should attend the next sprint review meeting.

Writing Great Sprint Goals

Like any operational goal, a sprint goal should be SMART: specific, measurable, attainable, relevant, and time-bound. As sprints are time-boxed iterations, every sprint goal is naturally time-bound: it has to be reached by the end of the sprint.
A relevant sprint goal helps you address the most important challenge, and it moves you closer towards your vision or release goal. For a new product or a bigger product update, the main challenge in the early sprints is to resolve uncertainty and to mitigate the key risks. To determine where the greatest risk currently is, I use the three innovation drivers: desirability, feasibility, and viability. A sample goal of an early sprint is to learn more about the desired user experience (a desirability aspect), the software architecture (feasibility), or the pricing model (viability). To pick the right goal, choose the risk that is likely to hurt you most if it is not addressed immediately.

When selecting your sprint goal, remember that trying out new things requires failure. Failure creates the empirical data required to make informed assumptions about what should and can be done next. Failing early helps you succeed in the long term.

After you have run a few sprints, the emphasis usually starts to shift from resolving uncertainty to completing features so that they can be released – at least to selected users. This allows you to gather quantitative data and to understand how users employ your product in its target environment. The shift should be reflected in your sprint goal, which now focuses on "getting stuff done" rather than testing ideas.

Employing a specific and measurable sprint goal allows you to determine success. For instance, don't just state "Create a prototype" as your sprint goal. Be explicit about the type and its purpose. Say instead: "Create a paper prototype of the user registration feature to test our user interaction ideas." The default mechanism in Scrum to determine success is to analyse the stakeholder feedback. Scrum suggests that the feedback should be obtained in the sprint review meeting by presenting the product increment. If this is not appropriate for you, then I suggest you make your approach explicit in your sprint goal. Write, for instance: "Test the user interaction design of the registration feature by conducting a user test in the sprint review meeting."

Carrying out sprint planning activities ensures that the sprint goal is attainable. Traditionally, this involves selecting user stories that are required to reach the goal until the team's capacity has been consumed. Sprint planning hence allows the product owner and the team to understand if the chosen goal can be reached. This helps you to invite the right stakeholders and be confident that they can provide meaningful feedback. Unrealistic sprint goals waste the stakeholders' time and undermine their willingness to participate in the process.

Visualising and Tracking the Sprint Goal

To ensure that the sprint goal is fully leveraged, I visualise it. My product canvas tool contains a ready section with items for the next sprint. I place the sprint goal at the top of the ready section, and I determine the stories required to reach the goal; these are then listed underneath the goal. As part of creating the task board or sprint backlog, I move the sprint goal from the product canvas onto the board. This helps ensure that the team keeps the goal in mind when implementing the individual stories.

Summary

"You've got to be very careful if you don't know where you are going, because you might not get there," says Yogi Berra. Employing a sprint goal increases the chances of getting where you want to go, of creating a successful product.
Reference: Effective Sprint Goals from our JCG partner Roman Pichler at the Pichler's blog blog.

The Product Demo as an Agile Market Research Method

This post helps you use your product demos as an effective agile market research tool: to collect relevant feedback in order to validate your ideas and improve your product. If you employ your demos to sign off user stories, then this article will show you how to get much more out of them.

In an agile approach like Scrum, the latest product increment is demoed to the stakeholders in the sprint review meeting. (Please see my post "The Scrum Cycle" for a detailed explanation.) While the product demo allows understanding which stories have been completed, I find that using it as a qualitative market research technique unleashes its real potential. Its primary goal is then to collect feedback from users and other stakeholders in order to validate ideas and improve the product. The demo is best done in person with everyone present in the same room, but you can also conduct it as a videoconference. The following tips should help you leverage your product demo as an effective market research method.

Be Clear on your Research Goal

Understand what questions you would like to get answers to, and what ideas you would like to validate, before conducting the demo. Your sprint goal should help you with this: if, for instance, your sprint goal is to test your user interface design ideas, then you should plan the demo accordingly: you may want to present different versions as mock-ups to the users to understand which one they prefer and why that's the case. Having one sprint and research goal helps you focus the presentation. It increases the likelihood of collecting relevant feedback, and it makes it easier to analyse the feedback.

Invite the Right People

Use your sprint goal to decide who can help you validate your ideas and improve the product, and who should therefore attend the demo. If the goal of the sprint is to establish the right software architecture decisions, then end users are probably not the right attendees. In the worst case, the demo could be a frustrating experience for them and prevent them from attending another review meeting. But if the goal is to better understand how users are likely to interact with the product, then end users should be present. Otherwise, you are in danger of collecting lots of interesting but irrelevant or misleading data.

Explain what the Product does for the User

Avoid listing features and functionality in your demo; describe what the product does for the user in order to receive meaningful feedback. A great way to do this is to use a scenario. If you develop a mobile banking application, for instance, you may want to say: "Imagine you are on the train on your way to work, and you remember you still need to pay your water bill. You open the banking app, log on, and then you would see the screen I am showing you now." If you employ my Product Canvas, then you should be able to use its scenarios and storyboards for your demos.

Engage in a Dialogue

An agile product demo should not be one-way communication or a sales event. Instead, its objective is to generate valuable feedback that allows you to gain new insights. Unfortunately, users and other stakeholders don't always provide helpful feedback straight away. You sometimes have to ask the right questions and create a dialogue. For instance, if the feedback you receive is "Great demo, I really like the product", then that's nice. But what does it actually mean? How does it help you, and what can you learn from it?
Dig deeper: ask why the individual likes the product, which aspects are particularly valuable, and which could be improved.

Take Notes

To be able to analyse the feedback afterwards, I recommend you record who provides the feedback and what you hear and see. Ask the team members to take notes too; this reduces the risk of overlooking feedback. I also suggest you record relevant background information about the attendees, including demographics and job role. The information will come in handy when you analyse the feedback.

Separate Research from Analysis

I prefer to separate collecting the feedback from analysing it. This allows me to listen to the users, and to decide afterwards what I can learn from the information gathered, by carefully considering if the feedback is relevant and how it is best acted upon. It also makes it possible to compare notes with the team members, thereby leveraging the collective wisdom of the team and mitigating cognitive biases. But I do suggest that you reject an idea or request immediately if you know that it does not make sense or that it is impossible to take it on.

Understand the Limits of the Product Demo

A product demo is a great tool for getting feedback, particularly in the early sprints when the product does not yet have enough functionality to be exposed to users in other ways. But it does have a drawback: users provide feedback based on what they see and hear. The demo does not validate how people actually use the product. I hence recommend you employ user tests and software releases once your product has progressed further. This allows you to understand better how the users interact with your product, and how well your product meets their needs.

Summary

A product demo is a great tool for collecting feedback, particularly in the early sprints. To fully leverage your demos, make sure that you understand your research goal, invite the right people, explain what the product does for the user, create a dialogue, record the feedback, do the analysis afterwards, and consider employing user tests and releases as soon as your product has progressed further.

Reference: The Product Demo as an Agile Market Research Method from our JCG partner Roman Pichler at the Pichler's blog blog.

Big Data 2013 Predictions

If you just invested a lot of money in a Big Data solution from any of the traditional BI vendors (Teradata, IBM, Oracle, SAS, EMC, HP, etc.), then you are likely to see a sub-optimal ROI in 2013. Several innovations will arrive in 2013 that will change the value of Big Data exponentially. Other technology innovations are just waiting for smart start-ups to put them to good use.

Real-Time Hadoop

The first major innovation will be Google Dremel-like solutions coming of age, such as Impala, Drill, etc. They will allow real-time queries on Big Data and be open source. So you will get a superior offering compared to what is currently available, for free.

Cloud-Based Big Data Solutions

The absolute market leader is Amazon with EMR. Elastic MapReduce is not so much about being able to run a MapReduce operation in the Cloud as about paying for what you use and no more. The traditional BI vendors are still getting their heads around usage-based licensing for the Cloud. Expect a lot of smart startups to come up with really innovative Big Data and Cloud solutions.

Big Data Appliances

You can buy some really expensive Big Data appliances, but here too disruptive players are likely to change the market. GPUs are relatively cheap: stack them into servers and use something like Virtual OpenCL to make your own GPU virtualization cluster solution. These types of home-made GPU clusters are already being used for security-related Big Data work. Also expect more hardware vendors to pack mobile ARM processors into server boxes; Dell, HP, etc. are already doing it. Imagine the potential for distributed MapReduce. Finally, Parallella will put a 16-core supercomputer into everybody's hands for $99. Their 2013 supercomputer challenge is definitely something to keep your eyes on, and their roadmap talks about 64- and 1000-core versions. If Adapteva can keep their promises and flood the market with Parallellas, then expect Parallella clusters to be the 2013 Big Data appliance.

Distributed Machine Learning

Mahout is a cool project, but MapReduce might not be the best possible architecture for running iterative distributed backpropagation or other machine learning algorithms. Jubatus looks promising. Also, algorithm innovations like HogWild could really change the dynamics of efficient distributed machine learning. This space is definitely ready for more ground-breaking innovations in 2013.

Easier Big Data Tools

This is still a big white spot in the open source field. Having open source, easy-to-use drag-and-drop tools for Big Data analytics would really accelerate adoption. We already have some good commercial examples (Radoop = RapidMiner + Mahout, Tableau, Datameer, etc.) but we are missing good open source tools.

I am currently looking for new challenges, so if you are active in the Big Data space and are looking for a knowledgeable senior executive, be sure to contact me at maarten at telruptive dot com.

Reference: Big Data 2013 Predictions from our JCG partner Maarten Ectors at the Telruptive blog.

Reasons for IntelliJ IDEA

Introduction

I often get the question why I use IntelliJ in favor of another IDE, in this case Eclipse. Most of the time I answer that question by demonstrating some features of IntelliJ and showing how integrated everything is. This got me thinking about the actual reasons I use it. This post will try to make those clear and help others decide if the switch is worth it or not.

Some background

I had been a long-time Eclipse user (7+ years) before I made the jump to IntelliJ. Before Eclipse, I worked with Rational Application Developer, WSAD, JBuilder and Visual Age for Java. Compared to these IDEs, Eclipse was a joy to use. I could, for example, generate getters and setters, which was not possible in one of the older IDEs (we are talking about more than 10 years ago). Although I quite liked Eclipse, I always thought there were some deficiencies, mainly in the following areas:

- Why was there no core functionality bundled with the standalone Eclipse variant? For example Subversion and Maven integration.
- Why was it always painful to set up an Eclipse version to your liking with all the required plugins? With every new version I spent nearly half a day setting up my IDE, which I find unacceptable. The more plugins and functionality, the harder it got.
- Updating to a new version was sometimes painful: plugins that stopped working, for example.
- I never quite liked the concept of a workspace. I already organize my projects on disk, so I do not need a workspace concept.
- I did not like the idea of different perspectives. Why do I have to think about the context I am working in? For example, when working with Java and Flex in one project: when I was in the Flex perspective, my Java code completion/refactoring did not work in Java files. Context should be file- or even fragment-driven.

Please note that the above are personal opinions and may vary between users. In spite of this I was quite productive in Eclipse and liked its performance. Also note that these observations are from a couple of versions back; things may have changed.

Around 2007/2008, a colleague of mine introduced me to IntelliJ; I think it was version 7 back then. My first reaction was that I didn't need another IDE. He showed me some features, like code inspections, and I said I would give it a try. My main obstacle back then was the price. That year I also gave a talk at the Dutch Java User Group conference, and every speaker received a free IntelliJ license from JetBrains. I then decided to give it a try. After the first two or three days I thought I would give up: I had to learn all new key bindings and I was less productive. I persisted, and after a week or so I began seeing the benefits. After version 7 I upgraded to 8 and 9 without any problem, so things could be different. At the moment I work with the latest version, 12.1 EAP. Below are some of the reasons why I do most (if not all) of my development work in IntelliJ.

Major features

- It is an integrated solution. I do a lot of different development work with a lot of different technologies, for example: Java, HTML/CSS/JavaScript, Android, Grails/Groovy, Flex, Subversion, Git, Maven, Ant etc. This is all possible with IntelliJ out of the box. There is no need to install separate plugins, which saves me a huge amount of setup time. Just download and install it and you're good to go.
- The editor itself. I invest heavily in knowing all the shortcuts; by knowing them I can code very fast. The instant code completion (not having to hit Ctrl-Space all the time) is a joy to work with. Just type a couple of characters and hit Tab to complete the code. When I generate code, the cursor almost always ends up in the correct position to begin typing again. No need to touch the mouse.
- Code inspections and analysis tooling built in. I find it important to keep my code clean and bug-free. The built-in inspections and the ability to auto-solve them are a really nice addition. Besides this, you also have a dependency matrix viewer to get a quick overview of the dependency structure of your application, and a duplicate code checker.
- Live templates. Live templates greatly increase coding speed. To make the most of them, I highly recommend creating your own templates. This is very easy: just select a piece of code and select Save as Live Template from the Tools menu. Press Ctrl/Cmd+J to view the live templates.
- Maven/Gradle integration out of the box. Just import a Maven project and IntelliJ knows the modules, dependencies etc. You can easily generate a dependency diagram from the Maven pom file to view all the dependencies at a glance. See figure 1 for an example of the Maven dependency viewer.
- Some handy tools. I often use the database editor and the RESTful web service test utility. The database editor has code completion in SQL and table creation. With the RESTful web service tester you can easily test HTTP services. The response can then be immediately saved and formatted as JSON or XML.
- Powerful refactorings and structural search & replace. IntelliJ knows a lot about my code. For example in Android: when I rename an image in the values/hdpi folder, it also renames the corresponding images in the mdpi and xhdpi folders, and updates my XML views and code references to that image.
- Tasks and contexts. I use IntelliJ in combination with YouTrack (there are more issue trackers that IntelliJ can integrate with). It is really easy to start working on an issue. IntelliJ creates a new context that tracks the files that belong to that specific issue. I can mark the issue in progress, and when I commit my changes it takes the comments from the context and uses them as the commit comments. It also changes the status of the issue to resolved when I'm done working on it. All from within the IDE itself, no need for context switching.

Smaller features

And then there are the smaller but just as important features which increase my productivity:

- Stacked clipboard. You can have multiple entries in your clipboard. Just hit Ctrl-Shift-V to show the clipboard stack.
- Column mode in the editor. This comes in handy when working with fixed-structure files like CSV, for example.
- Darcula theme. This is one of the best dark themes I have encountered. A dark theme is especially useful when coding in the evening with the lights dimmed; it is less stressful for the eyes, I think. See figure 2 for an example of the Darcula theme.
- Stack trace analyzer. Just copy a stack trace from the clipboard and IntelliJ analyses it and matches it with the code, to easily navigate to the problem at hand.
- Unit test and coverage integration.
- And many more.

Final thoughts

This article describes the reasons why I use IntelliJ as my primary development tool of choice. Please note that this is my personal opinion; also, this is obviously not an exhaustive list. I would like to hear from you why you choose IntelliJ.

Reference: Reasons for IntelliJ IDEA from our JCG partner Jamie Craane at the Jamie Craane's Blog blog.

JavaFX 2 with Spring

I'm going to start this one with a bold statement: I always liked Java Swing, or applets for that matter. There, I said it. If I perform some self-analysis, this admiration probably started when I got introduced to Java. Swing was (practically) the first thing I ever did with Java that gave some satisfactory result and made me able to do something with the language at the time. When I was younger we built home-brew fat clients to manage our 3.5" floppy/CD collection (written in VB and before that in Basic), which probably also played a role. Anyway, enough about my personal quirks.

Fact is that Swing has helped many build great applications, but as we all know Swing has its drawbacks. For starters, it hasn't evolved in, well, a long time. It also requires a lot of boilerplate code if you want to create high-quality code. It comes shipped with some quirky design 'flaws', lacks out-of-the-box patterns such as MVC, styling is a limitation since you have to fall back on the restricted L&F architecture, I18N is not built in by default, and so on. One could say that developing Swing these days is, well, basically going back in time.

Fortunately Oracle tried to change this some years ago by launching JavaFX. I recall getting introduced to JavaFX at Devoxx (or Javapolis as it was named back then). The nifty demos looked very promising, so I was glad to see that a Swing successor was finally on its way. This changed from the moment I saw its internals. One of its major drawbacks was that it was based on a completely new syntax (called JavaFX Script). In case you have never seen JavaFX Script: it looks like a bizarre breed between Java, JSON and JavaScript. Although it is compiled to Java bytecode, and you could use the Java APIs from it, integration with Java was never really good. The language itself (although pretty powerful) required you to spend a lot of time understanding the details, only to end up with, well, source code again, but this time less manageable and supported than plain Java code. As it turned out, I wasn't the only one who felt this way (for sure there were other reasons as well), and JavaFX never was a great success.

However, a while ago Oracle changed the tide by introducing JavaFX 2. First of all they got rid of JavaFX Script (which is no longer supported) and turned it into a real native Java SE API (JavaFX 2.2.3 is part of Java 7 SE update 6). The JavaFX API now looks more like the familiar Swing API, which is a good thing. It gives you layout manager lookalikes, event listeners, and all those other components you were so used to, but even better. So if you want to code JavaFX like you did Swing, you can, albeit with slightly different syntax and an improved architecture. It is also possible now to intermix existing Java Swing applications with JavaFX.

But there is more. They introduced an XML-based markup language that allows you to describe the view. This has some advantages. First of all, coding in XML works faster than in Java. XML can be generated more easily than Java, and the syntax for describing a view is simply more compact. It is also more intuitive to express a view using some kind of markup, especially if you ever did some web development before. So one can have the view described in FXML (that's how it's called), the application controllers separate from the view, both in Java, and your styling in CSS (yes, so no more L&F; CSS support is standard).
You can still embed Java (or other languages) directly in the FXML, but this is probably not what you want (scriptlet anti-pattern). Another nice thing is support for binding. You can bind each component in your view to the application controller by putting an fx:id attribute on the view component and an @FXML annotation on the instance variable in the application controller. The corresponding element will then be auto-injected, so you can change its data or behavior from inside your application controller. It also turns out that with a few lines of code you can painlessly integrate the DI framework of your choice; isn't that sweet?

And what about the tooling? Well, first of all there is a plug-in for Eclipse (fxclipse) which will render your FXML on the fly. You can install it via the Eclipse marketplace, and it will render any adjustment you make immediately. Note that you need at least JDK7u6 for this plug-in to work; if your JDK is too old you'll get an empty pane in Eclipse. Also, when I created a JavaFX project I needed to put the jfxrt.jar manually on my build classpath. You'll find this file in %JAVA_HOME%/jre/lib.

Up until now the plug-in doesn't help you visually (by drag & drop), but for that there is a separate IDE: Scene Builder. This builder is also integrated in NetBeans; AFAIK there is no support for Eclipse yet, so you'll have to run it separately if you want to use it. The builder lets you develop FXML the visual way, using drag & drop. Nice detail: Scene Builder is in fact written in JavaFX. There is also a separate application called Scenic View which does introspection on a running JavaFX application and shows how it is built up. You get a graph with the different nodes and their hierarchical structure, and for each node you can see its properties and so forth.

OK, so let's start with some code examples. The first thing I did was design my demo application in Scene Builder. I did this graphically by dragging and dropping the containers/controls onto the view. I also gave the controls that I want to bind an fx:id; you can do that via Scene Builder as well. For the buttons in particular I also added an onAction handler (the method that should be executed on the controller once the button is clicked).

Next I added the controller manually in the source view in Eclipse. There can only be one controller per FXML, and it should be declared in the top-level element. I made two FXMLs: one that represents the main screen and one that acts as the menu bar. You probably want a division of your logic over multiple controllers rather than stuffing too much into a single controller; single responsibility is a good design guideline here.
The first FXML is "search.fxml" and represents the search criteria and result view:

<?xml version="1.0" encoding="UTF-8"?>

<?import java.lang.*?>
<?import java.util.*?>
<?import javafx.scene.control.*?>
<?import javafx.scene.control.Label?>
<?import javafx.scene.control.cell.*?>
<?import javafx.scene.layout.*?>
<?import javafx.scene.paint.*?>

<StackPane id="StackPane" maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity" prefHeight="400.0" prefWidth="600.0" xmlns:fx="http://javafx.com/fxml" fx:controller="be.error.javafx.controller.SearchController">
  <children>
    <SplitPane dividerPositions="0.39195979899497485" focusTraversable="true" orientation="VERTICAL" prefHeight="200.0" prefWidth="160.0">
      <items>
        <GridPane fx:id="grid" prefHeight="91.0" prefWidth="598.0">
          <children>
            <fx:include source="/menu.fxml"/>
            <GridPane prefHeight="47.0" prefWidth="486.0" GridPane.columnIndex="1" GridPane.rowIndex="5">
              <children>
                <Button fx:id="clear" cancelButton="true" mnemonicParsing="false" onAction="#clear" text="Clear" GridPane.columnIndex="1" GridPane.rowIndex="1" />
                <Button fx:id="search" defaultButton="true" mnemonicParsing="false" onAction="#search" text="Search" GridPane.columnIndex="2" GridPane.rowIndex="1" />
              </children>
              <columnConstraints>
                <ColumnConstraints hgrow="SOMETIMES" maxWidth="338.0" minWidth="10.0" prefWidth="338.0" />
                <ColumnConstraints hgrow="SOMETIMES" maxWidth="175.0" minWidth="0.0" prefWidth="67.0" />
                <ColumnConstraints hgrow="SOMETIMES" maxWidth="175.0" minWidth="10.0" prefWidth="81.0" />
              </columnConstraints>
              <rowConstraints>
                <RowConstraints maxHeight="110.0" minHeight="10.0" prefHeight="10.0" vgrow="SOMETIMES" />
                <RowConstraints maxHeight="72.0" minHeight="10.0" prefHeight="40.0" vgrow="SOMETIMES" />
              </rowConstraints>
            </GridPane>
            <Label alignment="CENTER_RIGHT" prefHeight="21.0" prefWidth="101.0" text="Product name:" GridPane.columnIndex="0" GridPane.rowIndex="1" />
            <TextField fx:id="productName" prefWidth="200.0" GridPane.columnIndex="1" GridPane.rowIndex="1" />
            <Label alignment="CENTER_RIGHT" prefWidth="101.0" text="Min price:" GridPane.columnIndex="0" GridPane.rowIndex="2" />
            <Label alignment="CENTER_RIGHT" prefWidth="101.0" text="Max price:" GridPane.columnIndex="0" GridPane.rowIndex="3" />
            <TextField fx:id="minPrice" prefWidth="200.0" GridPane.columnIndex="1" GridPane.rowIndex="2" />
            <TextField fx:id="maxPrice" prefWidth="200.0" GridPane.columnIndex="1" GridPane.rowIndex="3" />
          </children>
          <columnConstraints>
            <ColumnConstraints hgrow="SOMETIMES" maxWidth="246.0" minWidth="10.0" prefWidth="116.0" />
            <ColumnConstraints fillWidth="false" hgrow="SOMETIMES" maxWidth="537.0" minWidth="10.0" prefWidth="482.0" />
          </columnConstraints>
          <rowConstraints>
            <RowConstraints maxHeight="64.0" minHeight="10.0" prefHeight="44.0" vgrow="SOMETIMES" />
            <RowConstraints maxHeight="68.0" minHeight="0.0" prefHeight="22.0" vgrow="SOMETIMES" />
            <RowConstraints maxHeight="68.0" minHeight="10.0" prefHeight="22.0" vgrow="SOMETIMES" />
            <RowConstraints maxHeight="68.0" minHeight="10.0" prefHeight="22.0" vgrow="SOMETIMES" />
            <RowConstraints maxHeight="167.0" minHeight="10.0" prefHeight="14.0" vgrow="SOMETIMES" />
            <RowConstraints maxHeight="167.0" minHeight="10.0" prefHeight="38.0" vgrow="SOMETIMES" />
          </rowConstraints>
        </GridPane>
        <StackPane prefHeight="196.0" prefWidth="598.0">
          <children>
            <TableView fx:id="table" prefHeight="200.0" prefWidth="200.0">
              <columns>
                <TableColumn prefWidth="120.0" resizable="true" text="OrderId">
                  <cellValueFactory>
                    <PropertyValueFactory property="orderId" />
                  </cellValueFactory>
                </TableColumn>
                <TableColumn prefWidth="120.0" text="CustomerId">
                  <cellValueFactory>
                    <PropertyValueFactory property="customerId" />
                  </cellValueFactory>
                </TableColumn>
                <TableColumn prefWidth="120.0" text="#products">
                  <cellValueFactory>
                    <PropertyValueFactory property="productsCount" />
                  </cellValueFactory>
                </TableColumn>
                <TableColumn prefWidth="120.0" text="Delivered">
                  <cellValueFactory>
                    <PropertyValueFactory property="delivered" />
                  </cellValueFactory>
                </TableColumn>
                <TableColumn prefWidth="120.0" text="Delivery days">
                  <cellValueFactory>
                    <PropertyValueFactory property="deliveryDays" />
                  </cellValueFactory>
                </TableColumn>
                <TableColumn prefWidth="150.0" text="Total order price">
                  <cellValueFactory>
                    <PropertyValueFactory property="totalOrderPrice" />
                  </cellValueFactory>
                </TableColumn>
              </columns>
            </TableView>
          </children>
        </StackPane>
      </items>
    </SplitPane>
  </children>
</StackPane>

On line 11 you can see that I configured the application controller class that should be used with the view. On line 17 you can see the import of the separate menu.fxml, which is shown here:

<?xml version='1.0' encoding='UTF-8'?>

<?import javafx.scene.control.*?>
<?import javafx.scene.layout.*?>
<?import javafx.scene.control.MenuItem?>

<Pane prefHeight='465.0' prefWidth='660.0' xmlns:fx='http://javafx.com/fxml' fx:controller='be.error.javafx.controller.FileMenuController'>
  <children>
    <MenuBar layoutX='0.0' layoutY='0.0'>
      <menus>
        <Menu mnemonicParsing='false' text='File'>
          <items>
            <MenuItem text='Exit' onAction='#exit' />
          </items>
        </Menu>
      </menus>
    </MenuBar>
  </children>
</Pane>

On line 7 you can see that it uses a different controller. In Eclipse, if you open the fxclipse view from the plug-in, you will get the same rendered view as in Scene Builder. This is quite convenient if you want to make small changes in the code and see them reflected directly.

The code for launching the application is pretty standard:

package be.error.javafx;

import javafx.application.Application;
import javafx.scene.Parent;
import javafx.scene.Scene;
import javafx.stage.Stage;

public class TestApplication extends Application {

    private static final SpringFxmlLoader loader = new SpringFxmlLoader();

    @Override
    public void start(Stage primaryStage) {
        Parent root = (Parent) loader.load("/search.fxml");
        Scene scene = new Scene(root, 768, 480);
        primaryStage.setScene(scene);
        primaryStage.setTitle("JavaFX demo");
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}

The only special thing to note is that we extend from Application. This is a bit of boilerplate code which will, for example, make sure that the creation of the UI happens on the JavaFX application thread. You might remember such stories from Swing, where every UI interaction needs to occur on the event dispatcher thread (EDT); this is the same with JavaFX. You are by default on the "right thread" when you are called back by the application (in, for example, action-listener-like methods). But if you start the application or perform long-running tasks in separate threads, you need to make sure you start UI interaction on the right thread. For Swing you would use SwingUtilities.invokeLater(); for JavaFX: Platform.runLater().
More special is our SpringFxmlLoader:

package be.error.javafx;

import java.io.IOException;
import java.io.InputStream;

import javafx.fxml.FXMLLoader;
import javafx.util.Callback;

import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class SpringFxmlLoader {

    private static final ApplicationContext applicationContext =
            new AnnotationConfigApplicationContext(SpringApplicationConfig.class);

    public Object load(String url) {
        try (InputStream fxmlStream = SpringFxmlLoader.class.getResourceAsStream(url)) {
            FXMLLoader loader = new FXMLLoader();
            // The controller factory decides how controllers are instantiated:
            // here we look them up from the Spring application context.
            loader.setControllerFactory(new Callback<Class<?>, Object>() {
                @Override
                public Object call(Class<?> clazz) {
                    return applicationContext.getBean(clazz);
                }
            });
            return loader.load(fxmlStream);
        } catch (IOException ioException) {
            throw new RuntimeException(ioException);
        }
    }
}

The interesting part is the custom controller factory. Without setting it, JavaFX will simply instantiate the class you specified as controller in the FXML, without anything special. In that case the class would not be Spring-managed (unless you were using CTW/LTW AOP). By specifying a custom factory we can define how the controller should be instantiated; in this case we look up the bean from the application context. Finally we have our two controllers, the SearchController:

package be.error.javafx.controller;

import java.math.BigDecimal;
import java.net.URL;
import java.util.ResourceBundle;

import javafx.collections.FXCollections;
import javafx.collections.ObservableList;
import javafx.fxml.FXML;
import javafx.fxml.Initializable;
import javafx.scene.control.Button;
import javafx.scene.control.TableView;
import javafx.scene.control.TextField;

import org.apache.commons.lang.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;

import be.error.javafx.model.Order;
import be.error.javafx.model.OrderSearchCriteria;
import be.error.javafx.model.OrderService;

public class SearchController implements Initializable {

    @Autowired
    private OrderService orderService;
    @FXML
    private Button search;
    @FXML
    private TableView<Order> table;
    @FXML
    private TextField productName;
    @FXML
    private TextField minPrice;
    @FXML
    private TextField maxPrice;

    @Override
    public void initialize(URL location, ResourceBundle resources) {
        table.setColumnResizePolicy(TableView.CONSTRAINED_RESIZE_POLICY);
    }

    public void search() {
        OrderSearchCriteria orderSearchCriteria = new OrderSearchCriteria();
        orderSearchCriteria.setProductName(productName.getText());
        orderSearchCriteria.setMaxPrice(StringUtils.isEmpty(maxPrice.getText()) ?
                null : new BigDecimal(maxPrice.getText()));
        orderSearchCriteria.setMinPrice(StringUtils.isEmpty(minPrice.getText()) ?
                null : new BigDecimal(minPrice.getText()));
        ObservableList<Order> rows = FXCollections.observableArrayList();
        rows.addAll(orderService.findOrders(orderSearchCriteria));
        table.setItems(rows);
    }

    public void clear() {
        table.setItems(null);
        productName.setText("");
        minPrice.setText("");
        maxPrice.setText("");
    }
}

The notable elements, in respective order:

- Auto-injection by Spring: our Spring-managed service which we will use to look up data.
- Auto-injection by JavaFX: the controls that we need to manipulate or read from in our controller.
- The special init method, used here to initialize our table so columns will auto-resize when the view is enlarged.
- The action-listener-style callback which is invoked when the search button is pressed.
- The action-listener-style callback which is invoked when the clear button is pressed.

Finally the FileMenuController, which does nothing special besides closing our app:

package be.error.javafx.controller;

import javafx.application.Platform;
import javafx.event.ActionEvent;

public class FileMenuController {

    public void exit(ActionEvent actionEvent) {
        Platform.exit();
    }
}

And finally the (not so exciting) result: the original post shows screenshots of the search view, the results after searching, the columns stretching along as the view is made wider, and the File > Exit menu.

After playing a bit with JavaFX 2 I was pretty impressed. There are also more and more controls coming (I believe there is already a browser control and such), so I think we are on the right track here.

Reference: JavaFX 2 with Spring from our JCG partner Koen Serneels at the Koen Serneels – Technology blog.

5 Strategies for Making Money with the Cloud

Everybody is hearing about Cloud Computing on television now. Operators will store your contacts in the Cloud. Hosting companies will host your website in the Cloud. Others will store your photos in the Cloud. But how do you make money with the Cloud?

The first thing is to forget about infrastructure and virtualization. If you think that in 2013 the world needs more IaaS providers, then you haven't seen what is currently on offer (Amazon, Microsoft, Google, Rackspace, Joyent, Verizon/Terremark, IBM, HP, etc.). So what are the alternative strategies?

1) Rocket Internet SaaS Cloning

Your best hope is SaaS and PaaS. The best markets are non-English-speaking markets. We have seen an explosion of SaaS in the USA, but most of it has not made it to the rest of the world yet. Only some bigger SaaS solutions (Webex, GoToMeeting, Office 365, etc.) and PaaS platforms (Salesforce, Workday, etc.) are available outside of the US and the UK. However, most SaaS and PaaS solutions are currently still English-only. So the quickest way to make money is to just copy, translate and paste a successful English-only SaaS product. If you do not know how to copy dotcoms, take a look at how the Rocket Internet team is doing it. Of course you should always be open to those annoying problems everybody has that could use a new innovative solution, and as such create your own SaaS.

2) SaaSification

During the gold rush, be the restaurant, hotel or tool shop. While everybody is looking for the SaaS gold, offer solutions that save gold diggers time and money. SaaSification allows others to focus on building their SaaS business, not on reinventing for the millionth time a web page, web store, email server, search, CRM, monthly subscription billing, reporting, BI, etc. Instead of "Use Shopify to create your online store", it should be "Use <YOUR PRODUCT> to create a SaaS business".

3) Mobile & Cloud

Everybody has, or is at least thinking about buying, a smartphone. However, there are very few really good mobile services that fully exploit the Cloud. I can get a shopping list app, but most are just glorified to-do lists. None recommends where to buy based on current promotions and comparisons with other buyers. None helps me find products inside a large supermarket. None learns from my shopping habits and suggests items for the list. None allows me to take a number at the seafood queue. These are just examples for one mobile + Cloud app; think about any other field and you are sure to find great ideas.

4) Specialized IaaS

I mentioned it before: IaaS is already overcrowded, but there is one exception: specialized IaaS. You can focus on specialized hardware, e.g. virtualized GPUs, DSPs or mobile ARM processors; on network virtualization like SDN and OpenFlow; on mobile and tablet virtualization; embedded device virtualization; machine learning IaaS; or car software virtualization.

5) Disruptive Innovations + Cloud

Sell disruptive innovations and offer them as Cloud services. Examples could be 3D printing services, wireless sensor networks / M2M, Big Data, wearable tech, open source hardware, etc. The Cloud will lower your costs and give you a globally, elastically scalable solution.

Reference: 5 Strategies for Making Money with the Cloud from our JCG partner Maarten Ectors at the Telruptive blog.

Most popular application servers

This is the second post in the series where we publish statistical data about Java installations. The dataset originates from free Plumbr installations out there, totalling 1,024 different environments collected during the past six months. The first post in the series analyzed the foundation: which OS the JVM runs on, whether it is a 32- or 64-bit infrastructure, and which JVM vendor and version were used. In this post we are going to focus on the application servers used.

It proved to be a more challenging task than originally expected. The best shot we had towards the goal was to extract the answer from the bootstrap classpath, with queries similar to "grep -i tomcat classpath.log" (a hypothetical sketch of this kind of classification follows at the end of this post). Which was easy. As opposed to discovering that, out of the 1,024 samples, 92 did not contain a reference to the bootstrap classpath at all. Which was our first surprise. Whether they were really run without any entries on the bootstrap classpath, or our statistics just do not record all the entries properly, we failed to trace. Nevertheless, this left us with 932 data points.

Out of the remaining 932 we were unable to link 256 reports to any application server known to mankind. Before jumping to the conclusion that approx. 27% of the JVMs out there are running client-side programs, we tried to dig further:

- 57 seemed to be launched using Maven plugins, which hide the actual runtime from us. But I can bet the vast majority of those are definitely not Swing applications.
- 11 environments were running on the Play Framework, which does not use Java EE containers.
- 6 environments were launched with the Scala runtime attached, so I assume these were also actually web applications.
- 54 had loaded either JGoodies or Swing libraries, which makes them good candidates for being desktop applications.
- 6 were running on Android. Which we don't even support. If you guys can shed some light on how you managed to launch Plumbr on Android, let us know.
- And the remaining 122 we just failed to categorize; they seemed to range from MQ solutions to batch processes to whatnot.

But 676 reports did contain a reference to the Java EE container used, and the results are visible in the following diagram:

The winner should not be a surprise to anyone: Apache Tomcat is used in 43% of the installations. The other places on the podium are a bit more surprising, with Jetty coming in second at 23% of the deployments and JBoss third at 16%. The expected result was exactly the reverse, but apparently the gears have shifted during the last years. The next group contains Glassfish, Geronimo and WebLogic with 7, 6 and 3% of the deployment base respectively. Which is also somewhat surprising: with just 20 WebLogic installations and WebSphere nowhere in sight at all, the remaining five containers altogether represent less than 2% of the installations. I guess the pragmatic-lean-KISS approach is finally starting to pay off and we are moving towards tools developers actually enjoy.

Reference: Most popular application servers from our JCG partner Vladimir Sor at the Plumbr Blog.
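To illustrate the kind of classification described above, here is a minimal sketch in the spirit of "grep -i tomcat classpath.log". This is not Plumbr's actual code; the marker-to-container mapping and class name are hypothetical:

import java.util.LinkedHashMap;
import java.util.Map;

public class ContainerClassifier {

    // Hypothetical substring markers for well-known containers.
    private static final Map<String, String> MARKERS = new LinkedHashMap<String, String>();
    static {
        MARKERS.put("tomcat", "Apache Tomcat");
        MARKERS.put("jetty", "Jetty");
        MARKERS.put("jboss", "JBoss");
        MARKERS.put("glassfish", "Glassfish");
        MARKERS.put("geronimo", "Geronimo");
        MARKERS.put("weblogic", "WebLogic");
    }

    // Returns the first container whose marker appears in the recorded
    // bootstrap classpath, or "unknown" for the Maven/Play/desktop cases.
    public static String classify(String classpath) {
        String cp = classpath.toLowerCase();
        for (Map.Entry<String, String> e : MARKERS.entrySet()) {
            if (cp.contains(e.getKey())) {
                return e.getValue();
            }
        }
        return "unknown";
    }
}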

Cryptography Using JCA – Services In Providers

The Java Cryptography Architecture (JCA) is an extensible framework that enables you to perform cryptographic operations. JCA also promotes implementation independence (a program should not care about who is providing the cryptographic service) and implementation interoperability (a program should not be tied to a specific provider of a particular cryptographic service). JCA allows numerous cryptographic services, e.g. ciphers, key generators and message digests, to be bundled up in a java.security.Provider class and registered declaratively in a special file (java.security) or programmatically via the java.security.Security class (method 'addProvider').

Although JCA is a standard, different JDKs implement it differently. Between the Sun/Oracle and IBM JDKs, the IBM JDK is somewhat more 'orderly' than Oracle's. For instance, IBM's uber provider (com.ibm.crypto.provider.IBMJCE) implements the following keystore formats: JCEKS, PKCS12KS (PKCS12) and JKS. The Oracle JDK 'spreads' the keystore format implementations across the following providers:

- sun.security.provider.Sun – JKS
- com.sun.crypto.provider.SunJCE – JCEKS
- com.sun.net.ssl.internal.ssl.Provider – PKCS12

Despite the popular recommendation to write applications that do not point to a specific Provider class, there are some use cases that require an application to know exactly which services a Provider class is offering. This requirement becomes more prevalent when supporting multiple application servers that may be tightly coupled with a particular JDK, e.g. WebSphere bundled with the IBM JDK. I usually use Tomcat + Oracle JDK for development (more lightweight, faster), but my testing/production setup is WebSphere + IBM JDK. To further complicate matters, my project needs a hardware security module (HSM), which uses the JCA API via the provider class com.ncipher.provider.km.nCipherKM. So, when I am at home (without access to the HSM), I want to continue writing code and at least get it tested against a JDK provider. I can then switch to the nCipherKM provider for another round of unit testing before committing the code to source control.

The usual assumption is that one Provider class is enough, e.g. IBMJCE for IBM JDKs, SunJCE for Oracle JDKs. So the usual solution is to implement a class that specifies one provider, using reflection to avoid compile errors due to 'class not found':

// For the nShield HSM
Class c = Class.forName("com.ncipher.provider.km.nCipherKM");
Provider provider = (Provider) c.newInstance();

// For the Oracle JDK
Class c = Class.forName("com.sun.crypto.provider.SunJCE");
Provider provider = (Provider) c.newInstance();

// For the IBM JDK
Class c = Class.forName("com.ibm.crypto.provider.IBMJCE");
Provider provider = (Provider) c.newInstance();

This design was OK, until I encountered a NoSuchAlgorithmException running some unit test cases on the Oracle JDK. And the algorithm I was using was RSA, a common algorithm! How can this be? The documentation says that RSA is supported! The same test cases worked fine on the IBM JDK. Upon further investigation I realised, much to my dismay, that the SunJCE provider does not have an implementation of the KeyPairGenerator service for RSA. An implementation is, however, found in the provider class sun.security.rsa.SunRsaSign. So the assumption of 'one provider to provide them all' is broken (a small sketch at the end of this post illustrates the difference). But thanks to JCA's open API, a Provider object can be passed in when requesting a Service instance, e.g.:
KeyGenerator kgen = KeyGenerator.getInstance("AES", provider);

To help with my inspection of the various Provider objects, I've furnished a JUnit test that pretty-prints the services of each registered Provider instance in a JDK:

package org.gizmo.jca;

import java.security.Provider;
import java.security.Provider.Service;
import java.security.Security;
import java.util.Comparator;
import java.util.SortedSet;
import java.util.TreeSet;

import javax.crypto.KeyGenerator;

import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.junit.Test;

public class CryptoTests {

    @Test
    public void testBouncyCastleProvider() throws Exception {
        Provider p = new BouncyCastleProvider();
        String info = p.getInfo();
        System.out.println(p.getClass() + " - " + info);
        printServices(p);
    }

    @Test
    public void testProviders() throws Exception {
        Provider[] providers = Security.getProviders();
        for (Provider p : providers) {
            String info = p.getInfo();
            System.out.println(p.getClass() + " - " + info);
            printServices(p);
        }
    }

    private void printServices(Provider p) {
        SortedSet<Service> services = new TreeSet<Service>(new ProviderServiceComparator());
        services.addAll(p.getServices());

        for (Service service : services) {
            String algo = service.getAlgorithm();
            System.out.println("==> Service: " + service.getType() + " - " + algo);
        }
    }

    /**
     * Sorts the various Services to make the output easier on the eyes...
     */
    private class ProviderServiceComparator implements Comparator<Service> {

        @Override
        public int compare(Service object1, Service object2) {
            String s1 = object1.getType() + object1.getAlgorithm();
            String s2 = object2.getType() + object2.getAlgorithm();
            return s1.compareTo(s2);
        }
    }
}

Anyway, if the algorithms you use are common and strong enough for your needs, the BouncyCastle provider can be used. It works well across JDKs (tested against IBM & Oracle). BouncyCastle does not support the JKS or JCEKS keystore formats, but if you are not fussy, the BC keystore format works just fine. BouncyCastle is also open source and can be freely included in your applications. Tip: JKS keystores cannot store SecretKeys; you can try it as your homework. I hope this post encourages you to explore JCA further, or at least makes you aware of the pitfalls of 'blissful ignorance' when working with JCA.

Reference: Cryptography Using JCA – Services In Providers from our JCG partner Allen Julia at the YK's Workshop blog.
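As a footnote to the RSA surprise described above, here is a minimal sketch (not from the original post) of how the provider choice changes the outcome on an Oracle JDK. It assumes "SunRsaSign" is the registered name of the sun.security.rsa.SunRsaSign provider the article mentions:

import java.security.KeyPairGenerator;

public class RsaProviderCheck {

    public static void main(String[] args) throws Exception {
        // Letting JCA search all registered providers succeeds...
        KeyPairGenerator any = KeyPairGenerator.getInstance("RSA");
        System.out.println("RSA KeyPairGenerator found in: " + any.getProvider().getName());

        // ...and pinning the lookup to SunRsaSign also works, whereas
        // getInstance("RSA", "SunJCE") would throw NoSuchAlgorithmException,
        // because SunJCE has no RSA KeyPairGenerator, as described above.
        KeyPairGenerator pinned = KeyPairGenerator.getInstance("RSA", "SunRsaSign");
        System.out.println("Pinned provider: " + pinned.getProvider().getName());
    }
}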