JavaOne 2012: What’s New in Scala 2.10

After getting lunch, I went to Hilton Golden Gate 6/7/8 to see Martin Odersky's (Typesafe) presentation 'What's New in Scala 2.10.' It is always worthwhile to hear a language's creator discuss the language he created. Odersky started by providing a brief history of Scala: 1996-1997, Pizza; 1998-2000, GJ, Java generics, and javac; 2003-2006, the Scala 'experiment'; 2006-2009, an industrial-strength programming language. Scala is a unifier between the worlds of functional and object-oriented programming, and a unifier of agile (lightweight-syntax) programming with safe, performant, strongly typed programming. Functional programming is on the rise: twelve years ago OOPSLA had ten times the attendance of ECOOP, but today ICFP has three times ECOOP's attendance and OOPSLA is no longer an independent conference. Functional programming is important now because of market drivers. Odersky stated, 'The world of software is changing because of hardware trends' and added, 'Moore's Law is now achieved by increasing the number of cores rather than clock cycles.' In this new world of clock-cycle limitations, we face a 'Triple Challenge': 'Parallel' (how to use multicore), 'Asynchronous' (how to deal with asynchronous events), and 'Distributed' (how to deal with failures). In this world, 'every piece of mutable state you have is a liability.' The 'problem in a nutshell' is 'non-determinism caused by concurrent threads accessing shared mutable state':

non-determinism = parallel processing + mutable state

Non-deterministic behavior is something 'we are uncomfortable with,' and 'avoiding mutable state means programming functionally.' Eliminating mutable state is the one thing we have control over and can change. This is why Odersky believes that functional programming is just now rising out of the academic world to be used in the business world. Functional programming has always been inherently tied to fancy machines.
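Odersky's 'problem in a nutshell' can be demonstrated with a few lines of plain Java (this sketch is mine, not from the talk): two threads incrementing a shared, unsynchronized counter can lose updates, so the final value varies from run to run.

```java
// Two threads bump a shared, unsynchronized counter. Each increment is a
// non-atomic read-modify-write, so updates can be lost and the result is
// non-deterministic: parallel processing + mutable state.
public class SharedStateDemo {
    static int counter = 0; // mutable state shared by both threads

    public static void main(String[] args) throws InterruptedException {
        Runnable bump = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter++; // not atomic: load, add, store
            }
        };
        Thread t1 = new Thread(bump);
        Thread t2 = new Thread(bump);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Rarely exactly 200000; the shortfall is the number of lost updates.
        System.out.println("counter = " + counter);
    }
}
```

Running it a few times makes the point viscerally: the answer changes even though the program does not.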
Odersky pointed out that writing functional programs is quite different from writing object-oriented programs. The 'thought process of functional programming is really quite different' because it requires thinking of things spatially rather than temporally. Having made the case for functional programming, Odersky asked whether we should all move to Haskell now. He answered that there are still important things learned from object-oriented development. His slide stated, 'What the industry learned about OO decomposition, analysis, and design stays valid.' We want to have both functions and objects in our code. Odersky quoted Grady Booch: 'Objects are characterized by state, identity, and behavior.' Odersky maintains we now want to 'eliminate, or at least reduce, mutable state' and get away from mutable state being the default. Odersky also believes we want to get away from referential equality and focus on structural equality. He listed simplicity, productivity, and fun as further reasons for functional programming, in addition to dealing with the hardware environment of parallel processors. Odersky moved from general functional programming discussion to specific Scala discussion. His bullet stated, 'Scala can be as simple or as complex as you like, but it's best when it's simple.' He cited Kojo as an example of Scala's alignment with simplicity. He also referenced 'Sadaj's great talks.' Odersky then began to work on some examples based on Ninety-Nine Scala Problems. Scala 2.10 is 'very close to RC1' and is in 'code freeze' while three remaining code blockers and several documentation blockers are worked. Odersky talked about the Scala Improvement Process (SIP) and showed the 'Pending SIPs' that are accepted, postponed, or rejected. SIP 12 ('String Interpolation') addresses the weaknesses and difficulties associated with concatenating Strings using the + operator. It is shorter and safer to use string interpolation.
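For readers coming from Java, the motivation behind SIP 12 can be approximated with the standard library (this comparison is my illustration, not from the talk): String.format plays roughly the role of Scala's 'f' interpolator, replacing noisy + concatenation with a single readable template.

```java
// Building the same message by + concatenation and by a format string.
// Scala 2.10's s"..." / f"..." interpolators address the same readability
// problem that String.format only partially solves in Java.
public class InterpolationAnalogy {
    // Concatenation: operator noise between every fragment, easy to
    // misplace a space or a +
    static String byConcatenation(String name, int version) {
        return "Welcome to " + name + " 2." + version + "!";
    }

    // A format string reads as one template, closer to an interpolator
    static String byFormat(String name, int version) {
        return String.format("Welcome to %s 2.%d!", name, version);
    }

    public static void main(String[] args) {
        System.out.println(byConcatenation("Scala", 10));
        System.out.println(byFormat("Scala", 10));
    }
}
```

Scala's interpolators go further than String.format by checking the embedded expressions at compile time.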
The biggest obstacle to adding string interpolation was that $ is 'already a legal character in strings.' Rather than using a 'special trick' of placing 's' in front of strings in which $ should be escaped and used for interpolation, Scala will use the more general solution of recognizing arbitrary IDs. The 'f' string processor will interpret $ and %. Developers can implement their own string processors, and Odersky showed examples that might be written for XML and JSON. Odersky stated that 'adding XML support in Scala is something that did not turn out so well.' He says there would now be an advantage to using an XML string processor instead and removing the XML-specific library dependency Scala currently has. Odersky addressed why Scala does not have the extension methods that 'other cool languages' have. He answered that Scala does not have extension methods because 'we can't abstract over them'; it is 'impossible to have them implement an interface.' Odersky introduced SIP 11 ('Implicit Classes'). One issue to think about here is 'runtime overhead.' SIP 15 ('Value Classes') introduces the idea that Scala classes could extend AnyVal instead of Object. Classes extending AnyVal are 'value classes' and are always unboxed. Odersky showed what code using these value classes would expand (be compiled) to. The implicit keyword in Scala enables 'very powerful' implicit conversions, but Odersky warns that 'they can be misused,' particularly if 'there are too many of them.' I like that Odersky likened them to chocolate: a little is very good, but too much leads to a stomach ache. You can turn off implicit conversion warnings by 'bringing the identifier scala.language.implicitConversions into scope (typically via import).'
Odersky briefly discussed SIP 18 (‘Language Imports’) and then covered ‘Better tools.’ I found Odersky’s goal for his language to be interesting: Odersky wants to ‘make Scala the language of choice for smart kids.’ To him, it’s the ideal of what the language should be. One attendee felt that boilerplate code is one of the best forms of simplicity and wondered whether Scala was missing its goal of simplicity by displacing boilerplate code. Odersky replied that boilerplate code would not be his preferred form of simplicity. Odersky stated that Scala is largely mature from the perspective of significant API changes and so forth. He pointed out that all languages reach the point where it is difficult to make wholesale fundamental changes and that Scala’s changes are smaller in nature now than they were in the late 2000s. In response to an audience member’s question, Odersky said that the best way to learn Scala is to read a book or take a course. He says many of the people he sees who try and struggle with Scala tried to learn it by studying its API documentation. When asked to compare Scala to Groovy and JRuby, Odersky said that Scala is about as complicated as JDK 7 and will be less complicated than JDK 8. He feels Groovy is more complicated because it is a superset of Java. This is an interestingly different perspective on complexity. I have found Groovy to be very simple to learn, but I do have years of Java experience. I wonder if someone without Java experience would find Scala less difficult to learn. For me, even with all of Scala’s great features, it cannot be easier to learn than Groovy was. Another audience member’s question led to Odersky mentioning some alternate IDE support (including Emacs and IntelliJ IDEA) for Scala in addition to ScalaIDE. There is a NetBeans plugin for Scala. Odersky pointed out something I like about Scala in response to another question. He stated that Scala allows the developer to easily specify a piece of data is mutable. 
I like that Scala 'does not prejudice' against mutable data and allows developers to choose it if they want to. I really don't like it when 'opinionated software' forces me to do what the framework's or language's author thinks is best. I really enjoyed this presentation. I had switched from a differently planned session to this one last night and am happy with the change (although the other likely would have been good as well). My only concern about this presentation had been that, given my lack of Scala familiarity, I'd be overwhelmed by discussion of new Scala 2.10 features without a proper foundation. Fortunately for me, Odersky did not get into Scala 2.10-specific features until halfway into the session. His coverage of functional programming and the basics and history of Scala was helpful to me in general and specifically prepared me to appreciate the new Scala 2.10 features. Between this presentation and the Scala Tricks presentation, my interest in trying out Scala is being renewed. Reference: JavaOne 2012: What's New in Scala 2.10 from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Customize PMD in Eclipse with your own rules

PMD is a very nice Java code scanner that helps you avoid potential programming problems. It can be easily extended to your needs, and this post will give you a simple example of custom PMD rules related to the usage of JPA's @Enumerated annotation. Before you continue reading, you should check one of my previous posts – JPA – @Enumerated default attribute. When you work with a group of people on a JPA project, it is almost certain that one of the developers will use the @Enumerated annotation without defining the EnumType, and if you don't use strict data validation on the DB level (like column-level constraints), you will run into deep trouble. What we would like to achieve is reporting an error when someone uses @Enumerated without the EnumType:

@Entity
@Table(name = "BENEFITS")
public class Benefit implements Serializable {
    ...
    @Column(name = "BENEFIT_TYPE")
    @Enumerated
    public BenefitType getType() {
        return type;
    }
    ...
}

and a warning if someone uses @Enumerated with the ORDINAL EnumType:

@Entity
@Table(name = "BENEFITS")
public class Benefit implements Serializable {
    ...
    @Column(name = "BENEFIT_TYPE")
    @Enumerated(EnumType.ORDINAL)
    public BenefitType getType() {
        return type;
    }
    ...
}

We can achieve our goal in two ways: either describing the PMD rule in Java, or using XPath – I'll focus on the second way in this post. Let's start from the beginning ;) – we have to download PMD first (I used version 4.2.5, pmd-bin-4.2.5.zip), unpack it somewhere, change the working directory to the unpacked PMD directory, and run the Rule Designer (it can be found in ./bin/designer.sh). Put the code we want to analyze into the source code panel and click the 'Go' button. In the middle of the Abstract Syntax Tree panel you will see an Annotation / MarkerAnnotation / Name structure corresponding to our @Enumerated annotation without a defined EnumType.
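To see why the default (ordinal) mapping is so dangerous, consider this plain-Java sketch (my illustration, not from the original post): as soon as the enum's declaration order changes, previously persisted ordinals silently decode to the wrong constant.

```java
// Why persisting enums by ordinal is fragile: ordinal() is just the
// declaration position, so inserting or reordering a constant silently
// changes the meaning of every value already stored in the database.
public class OrdinalPitfall {
    // Version 1 of the enum, as first shipped
    enum BenefitTypeV1 { HEALTH, DENTAL }

    // Version 2: a constant was inserted at the front in a later release
    enum BenefitTypeV2 { VISION, HEALTH, DENTAL }

    public static void main(String[] args) {
        int stored = BenefitTypeV1.DENTAL.ordinal(); // persisted as 1
        // After the enum evolves, the stored ordinal 1 decodes to HEALTH:
        BenefitTypeV2 decoded = BenefitTypeV2.values()[stored];
        System.out.println("stored=" + stored + " now decodes to " + decoded);
        // EnumType.STRING avoids this: names survive reordering
    }
}
```

With EnumType.STRING the column holds the constant's name, which is immune to reordering (though not to renaming).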
To match this annotation, we put the following XPath expression into the XPath Query panel:

//MarkerAnnotation/Name[@Image = 'Enumerated']

When you click the 'Go' button now, you will see in the bottom-right panel that a match was found – the XPath query is correct. Now that we have the XPath query, we have to define a rule using it. Open a new XML file, name it jpa-ruleset.xml, and put into it:

<ruleset name="JPA ruleset" xmlns="http://pmd.sf.net/ruleset/1.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://pmd.sf.net/ruleset/1.0.0 http://pmd.sf.net/ruleset_xml_schema.xsd"
         xsi:noNamespaceSchemaLocation="http://pmd.sf.net/ruleset_xml_schema.xsd">
    <description>JPA ruleset</description>
    <rule name="AvoidDefaultEnumeratedValue"
          message="By default @Enumerated will use the ordinal."
          class="net.sourceforge.pmd.rules.XPathRule">
        <priority>2</priority>
        <properties>
            <property name="xpath" value="//MarkerAnnotation/Name[@Image = 'Enumerated']" />
        </properties>
    </rule>
</ruleset>

As you see, we are using net.sourceforge.pmd.rules.XPathRule as the rule class and defining an xpath property for this rule holding our XPath query. Priority in the above example means: 1 – error, high priority; 2 – error, normal priority; 3 – warning, high priority; 4 – warning, normal priority; 5 – information. We will add another rule to our JPA ruleset, responsible for reporting a warning when @Enumerated is used with an explicit ORDINAL EnumType – it can be either @Enumerated(EnumType.ORDINAL) or @Enumerated(value = EnumType.ORDINAL), therefore we need an alternative of two XPath expressions now:

<rule name="EnumeratedAsOrdinal"
      message="Enumeration constants shouldn't be persisted using ordinal."
      class="net.sourceforge.pmd.rules.XPathRule">
    <priority>4</priority>
    <properties>
        <property name="xpath" value="//SingleMemberAnnotation/Name[@Image = 'Enumerated']/following-sibling::MemberValue//Name[@Image = 'EnumType.ORDINAL']
            | //NormalAnnotation/Name[@Image = 'Enumerated']/following-sibling::MemberValuePairs/MemberValuePair[@Image = 'value']//Name[@Image = 'EnumType.ORDINAL']" />
    </properties>
</rule>

Now that we have a ruleset holding those two rules, we will import it into the Eclipse IDE. At this point I'm assuming that you have already installed the PMD plugin for Eclipse (see: PMD – Integrations with IDEs). Open the Eclipse Preferences, find the PMD section, and expand it. Click 'Import rule set …', select the file holding the ruleset, and choose whether you want to import it by reference or by copy (in the latter case your ruleset name will be ignored and the name 'pmd-eclipse' will be used). You should then see our two rules added to the list. Perform the necessary build when asked by Eclipse, and before you start enjoying the new rules, check the project properties: the 'Enable PMD' option should be turned on to let PMD check your code on the fly, and our newly added rules should be active for this project (they will be by default). Now write some 'bad code' matching the first rule; when you hover over the red marker on the left, you will see the rule message as defined in the XML. The same goes for the second rule. A few links for dessert: How to write a PMD rule, XPath Rule Tutorial, How to make a new rule set. Reference: Customize PMD in Eclipse with your own rules from our JCG partner Michał Jaśtak at the Warlock's Thoughts blog.

Using Story Mapping For Defining Business Requirements

Story mapping is a lightweight and collaborative approach to defining and structuring user requirements. Story mapping involves describing the system as a list of features that provide a sequential story of requirements in a user-centric way. It supports iterative delivery: the story is divided into Features which can be prioritized and grouped by planned Releases/Minimum Marketable Features.

Approach

To come up with a Story Map, start out by identifying the user personas, the major categories of users that gain value from the system. Next, identify the business/user goals, the major objectives that the system must support. For each goal, determine the user activities, the sequential events that the user needs to do in order to get value. Finally, break the activities down into explicit system Features that have real, tangible business value. Once the overall narrative of the system is understood, the Story Map can be used to prioritize the Features. For each of the activities, a vertical dimension can be added to define the priority of the features with respect to each other. Each set of Features for a particular user activity is reshuffled and stacked vertically according to relative priority. The top row of Features on the story map represents the backbone of the product: the Minimum Marketable Features (MMFs) required in order to have a functional product. Features that may be delivered later (or sometimes never) are placed lower down. Once the overall prioritization of Features necessary to support each user activity is understood, the entire Story Map can be divided by planned Releases/Minimum Marketable Features. Horizontal lanes can be used to divide different MMFs from each other. The Story Map can then be used to tell a narrative for a particular MMF.
This is done by traversing the map from left to right and only looking at the Features within a particular lane. Story Mapping enables both a top-down and a bottom-up perspective of the system to emerge. It facilitates understanding of the system from the users' perspective. It also lends itself to iterative delivery in that prioritization and grouping of Features into Releases/Minimum Marketable Features is supported. Reference: Using Story Mapping For Defining Business Requirements from our JCG partner Alexis Hui at the Lean Transformation blog.

JavaOne 2012: Mastering Java Deployment

After grabbing an Italian Classic Combo for another JavaOne 2012 lunch, I headed to Hilton Imperial Ballroom B to see the presentation 'Mastering Java Deployment.' The speakers, both from Oracle, were Mark Howe and Igor Nekrestyanov. Howe stated that a goal of the deployment team is to help Java developers deploy their applications to platforms of choice. He started by discussing 'feature deprecation.' In some cases there are multiple ways to do the same thing; an example of this is jarjar and pack200. By deprecating redundant (especially older) approaches, the team doesn't have to spend as much time supporting and fixing bugs in these seldom-used things. Howe showed a table of features being deprecated and removed in JDK 7, JDK 8, and JDK 9. In general, anything being deprecated and/or removed has alternatives, and folks using the deprecated/removed features should start looking at which alternative works best for them. As of JRE 7 Update 6, a fully Oracle-supported JRE is issued for Mac OS X. Oracle's intention is to fix bugs and add features across JVMs for all deployment environments at the same time. JRE 7 is 'mostly compatible' with Apple's JRE 6. One change, intended to align more closely with Oracle's JVM support for other platforms, is to have Oracle's updater update the JRE on Mac OS X rather than using the Mac 'Software Update.' One caveat is that 'Chrome on Mac is currently unsupported (32-bit only).' Continuing the theme of cross-platform JVM feature parity, JavaFX is now delivered with the JRE for Linux. Howe's 'Convergence of Java and JavaFX' slide showed a table indicating the progress of converging Java and JavaFX versions. The goal is for JavaFX to be one of the core libraries in the Java specification.
Plans for JDK 8 include 'Java launcher able to run JavaFX applications' and 'jfxrt.jar on boot classpath for java and javac.' Howe introduced the Java Deployment Toolkit and described it as a 'tool to simplify deployment of Java content in the browser.' He contrasted deployJava.js ('original version') with dtjava.js ('better JavaFX support and portability'). The dtjava.js version 'supports all deployment scenarios on all platforms,' though there is no autostart on Mac or Linux. Howe talked about WebStart and explained that its 'user experience is not quite as nice as you'd like it to be.' He contrasted this with dtjava.js, which allows the developer to set parameters to control launching from JavaScript. It makes for more control and a better user experience, and also removes the need for a fixed codebase. The code shown in a slide for using dtjava.launch requires JRE 7 Update 6 or later. The goal of the packaging tools is to 'simplify deployment for application developers.' The command-line tool bin/javafxpackager (or the set of Ant tasks in lib/ant-javafx.jar) can be used with JDK 7 Update 6. The 'latest release of NetBeans' supports these. Howe covered several motivations for completely self-contained applications. A self-contained application contains 'all the artifacts necessary to run your application.' It has a private copy of the Java runtime and removes the dependency on an external JRE. Many of the motivations for self-contained applications revolved around issues of acquiring a current JRE to run the application. Benefits of self-contained applications include the feel of a native application, improved compatibility, easier deployment on a fresh system, optional administrative privileges, and support for newer distribution channels such as the Apple App Store.
The caveats of self-contained applications include larger size (JRE included), 'download and run' instead of WebStart's 'click and launch,' the need to build a package per platform, and other current limitations such as the package needing to be built on the target platform and application updates being the responsibility of the developer. To create a self-contained application, one needs JDK 7 Update 6 as well as optional third-party tools such as WiX to build an MSI on Windows. Howe showed a slide with Ant code for generating the self-contained application. The Developer Preview will allow a developer to select the target version of the JVM (current choices are JRE 7 Update 6 or JRE 7 Update 10). The Developer Preview is expected to be available with JRE 7 Update 10. JDK 7 Update 10 is also anticipated to include Mac App Store support. Like so many other presentations at JavaOne 2012, community feedback was solicited. In this case, the deployment team would like to know what people want and need for more effective web deployment of Java applications. Howe had a nice slide comparing executable JAR to WebStart to self-contained application. The Mac App Store does not allow applications to 'rely on optionally-installed technology.' Other requirements include the need for the application to 'run in a sandbox' and 'follow UI guidelines.' Certain APIs (FileChooser) should be avoided. See the JavaOne 2012 slides for 'Deploy Your Application with OpenJDK 7 on Mac OS X' and a future version of the JavaFX Deployment Guide for more details. Howe's 'key points to remember' include the merging of Java with JavaFX, new platforms for Java, new deployment options (self-contained application bundle and deployment to the Mac App Store), and deprecation of old deployment features. One of the attendees asked if there is a way to share a single JRE among multiple self-contained applications.
The answer is that there currently is not a way to do this, but a JRE can optionally not be included in the otherwise self-contained application. In response to another question, the speakers stated they are not aware of any plans to deprecate Swing. They also responded to yet another question that there is currently no Maven support for building self-contained applications (use Ant or NetBeans). There were several good slides shown in this presentation that I'd like to look at more closely in the future. Fortunately, Howe stated that these will be made available. Much of what was covered in this session will be open source, and audience members were encouraged to contribute to the open source projects. Reference: JavaOne 2012: Mastering Java Deployment from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Your first Juzu portlet on eXo platform

Juzu is a Buddhist prayer bead. One sentence in and I am sure you have already learnt something – impressive, no? OK, I won't speak about Buddhism here. Juzu is also a new framework for developing portlets (and, soon, standalone applications) very quickly. You can find all the information you need on the Juzu website. Now let's create our first portlet with Juzu!

Creating a new project

Juzu comes with a Maven archetype. We can use it to quickly create our first application:

mvn archetype:generate \
    -DarchetypeGroupId=org.juzu \
    -DarchetypeArtifactId=juzu-archetype \
    -DarchetypeVersion=0.5.1 \
    -DgroupId=org.example \
    -DartifactId=myapp \
    -Dversion=1.0.0-SNAPSHOT

This creates a Juzu project in a myapp folder.

Deploying the Juzu portlet

Before deploying the application, you need to build it. Simply run mvn clean package in the myapp folder. It will generate a myapp.war under your myapp/target folder. We are now ready to deploy the portlet in a portal container. We will use the latest GateIn release (3.4), the Tomcat bundle version. Once downloaded, install it by unzipping it to the location of your choice. The only thing you need to do is drop the myapp.war file in the webapps folder and start GateIn with bin/gatein.sh run. Once started, add your portlet to a page. Great! You just finished your first Juzu portlet! Let's explore the project before enhancing it.

Exploring the project

The mandatory web.xml is there. It does not contain anything.
portlet.xml

The archetype generates a basic portlet.xml with some Juzu init parameters:

<?xml version="1.0" encoding="UTF-8"?>
<portlet-app xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd"
             version="2.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd">
    <portlet>
        <portlet-name>SampleApplication</portlet-name>
        <display-name xml:lang="EN">Juzu Sample Application</display-name>
        <portlet-class>juzu.portlet.JuzuPortlet</portlet-class>
        <init-param>
            <name>juzu.run_mode</name>
            <value>prod</value>
        </init-param>
        <init-param>
            <name>juzu.inject</name>
            <value>weld</value>
            <!-- <value>spring</value> -->
        </init-param>
        <supports>
            <mime-type>text/html</mime-type>
        </supports>
        <portlet-info>
            <title>Sample Application</title>
        </portlet-info>
    </portlet>
</portlet-app>

The portlet-class is the generic Juzu portlet class juzu.portlet.JuzuPortlet. The portlet declares two init parameters:

juzu.run_mode – dev: changes made to source files are automatically hot-recompiled and reloaded, so you don't need to redeploy your application to test them (a real productivity boost while developing an application!); prod: the 'classic' mode, where you need to recompile and redeploy your application to apply your changes.

juzu.inject – defines the injection implementation. Two implementations are currently supported: weld (the CDI reference implementation) and spring.

The Juzu portlet class uses the package-info.java file to gather needed extra information. The portlet.xml file also contains basic information about the portlet: portlet-name, display-name, and portlet-info. You can change them or add others if needed.

package-info.java

This file contains all the configuration of the application. It lets you activate plugins, add JS/CSS resources, and more, but let's keep it simple for now.
The only mandatory configuration is the declaration of the application, via the @juzu.Application annotation. You have to declare the base package of your application – in our case, org.sample.

Controller.java

This class is a Juzu controller. It is composed of a view method index (annotated with @View) which renders the index template. The path of the index template is set with the @Path annotation. By default, Juzu uses the application's templates package as its root path, so in our case the template is located at org/sample/templates/index.gtmpl.

Switching to dev mode

Now that we know a little bit more about what a Juzu application is, let's improve our basic helloworld application a little. First of all, we will switch from prod to dev mode, in order to test our changes quickly. Edit your portlet.xml file and change the value of the init-param juzu.run_mode to dev. Then build your application and drop the war in the webapps folder of GateIn. Here you don't need to stop and start GateIn, as the webapp will be automatically redeployed. As we did not change anything in the source files of our application, you should see the same 'Hello World' message in your portlet. To test the dev mode, you can, for instance, rename the file webapps/myapp/WEB-INF/src/org/sample/templates/index.gtmpl to index2.gtmpl. After refreshing your page, you will get an error message. Now edit webapps/myapp/WEB-INF/src/org/sample/Controller.java and change

@Inject
@Path("index.gtmpl")
Template index;

to

@Inject
@Path("index2.gtmpl")
Template index;

and refresh your page once again. Everything is back to normal! Pretty cool, isn't it?

Forms, Actions, and type-safe template parameters

We will now create an application which displays the map of a location chosen by the user.
First, update your index.gtmpl template:

#{param name=location/}
#{param name=mapURL/}
Location :
<form action="@{updateLocation()}" method="post">
    <input type="text" name="location" value="${location}"/>
    <input type="submit"/>
</form>
<br/>
<%if(location) {%>
<div id="map">
</div>
<%}%>

#{param name=location/} and #{param name=mapURL/} declare two type-safe template parameters which will be used later in our Controller. The form contains a text input and submits to our Juzu controller action updateLocation. Finally, if a location is specified, the map is displayed. Now, let's update our Controller.java:

package org.sample;

import juzu.Action;
import juzu.Path;
import juzu.Resource;
import juzu.Response;
import juzu.View;
import juzu.template.Template;

import javax.inject.Inject;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class Controller {

    @Inject
    @Path("index.gtmpl")
    org.sample.templates.index index;

    @View
    public void index() throws IOException {
        index("", "");
    }

    @View
    public void index(String location, String mapURL) throws IOException {
        index.with().location(location).mapURL(mapURL).render();
    }

    @Action
    public Response updateLocation(String location) throws IOException {
        String mapURL = "https://maps.google.fr/maps?f=q&source=s_q&hl=en&geocode=&q=" + location
                + "&aq=&t=m&ie=UTF8&hq=&hnear=" + location + "&z=12&output=embed";
        return Controller_.index(location, mapURL);
    }
}

The index template field is now of type org.sample.templates.index. This class is generated thanks to the annotations and is a subclass of Template. Using this specific type allows us to leverage the declared template parameters – location and mapURL in our case. The default index view now calls a new index view which accepts the location and mapURL arguments. This new view uses the index template class and its fluent syntax (do you like it? Personally I do).
Thanks to the declaration of the location and mapURL parameters in the template, the org.sample.templates.index template class accepts a location method and a mapURL method to set their values. The updateLocation method is defined as an action with the @Action annotation. It is called by the form to retrieve the correct URL (building the map URL is a basic example; generally you would call your services here). It then redirects to the index view method in order to render the index template. Note the _ at the end of the controller name: the class Controller_ is the 'annotations processed' version of the Controller class. If you made all these changes in the deployed version of your application (in webapps/myapp), you just need to refresh, and you should be able to enter a location and then see the corresponding map.

Ajax

Juzu provides some helpers for using Ajax in your application. We will use them to avoid reloading the page when submitting a new location in our form. The Ajax plugin needs jQuery. We can add it to our application by simply dropping the jQuery js file into the project and declaring it in the package-info.java file with the Asset plugin (I dropped the jQuery js file in public/scripts):

@juzu.plugin.asset.Assets(
    scripts = {
        @juzu.plugin.asset.Script(
            id = "jquery",
            src = "public/scripts/jquery-1.7.1.min.js")
    }
)

We will now update our controller to add a new method which only provides the map URL:

@Ajax
@Resource
public Response.Content<Stream.Char> getMapURL(String location) throws IOException {
    String mapURL = "https://maps.google.fr/maps?f=q&source=s_q&hl=en&geocode=&q=" + location
            + "&aq=&t=m&ie=UTF8&hq=&hnear=" + location + "&z=12&output=embed";
    return Response.ok("{\"mapURL\": \"" + mapURL + "\"}").withMimeType("application/json");
}

Note that this new method is no longer annotated with @Action. Annotating a method with @Ajax makes it accessible for Ajax calls.
The @Resource annotation makes this method send the entire response to the client. That's what we want, as this method simply creates the new URL and sends it back to the client as a JSON response. Finally, we have to update our template file to add the Ajax call: #{param name=location/} #{param name=mapURL/}<script> function submitLocation(location) { $('#map').jzAjax({ url: 'Controller.getMapURL()', data: {'location': location} }).done(function(data) { $('#map > iframe').attr('src', data.mapURL); }); return false; } </script>Location : <form onsubmit='return submitLocation(this.location.value)'> <input type='text' name='location' value='${location}'/> <input type='submit'/> </form> <br/><div id='map'> </div> The submission of the form now calls the submitLocation JavaScript function. This function uses the Juzu Ajax function jzAjax (which uses the jQuery ajax function under the hood). This function calls the URL provided in the url param with the parameters provided in data. So here it will call the newly created method of our Controller and receive the new map URL as JSON: {"mapURL": "https://maps.google.fr/maps?f=q&source=s_q&hl=en&geocode=&q=nantes&aq=&t=m&ie=UTF8&hq=&hnear=nantes&z=12&output=embed"} Then we just use jQuery to update the map. Once again, simply refresh your page to see it in action! You can now learn more about Juzu by going to the website or watching the screencasts. Happy coding and don't forget to share! Reference: Your first Juzu portlet from our JCG partner Thomas Delhimenie at the T's blog blog....

Where do the stack traces come from?

I believe that reading and understanding stack traces is an essential skill every programmer should possess in order to effectively troubleshoot problems with every JVM language (see also: Filtering irrelevant stack trace lines in logs and Logging exceptions root cause first). So may we start with a little quiz? Given the following piece of code, which methods will be present in the stack trace? foo(), bar() or maybe both? public class Main {public static void main(String[] args) throws IOException { try { foo(); } catch (RuntimeException e) { bar(e); } }private static void foo() { throw new RuntimeException("Foo!"); }private static void bar(RuntimeException e) { throw e; } }In C# both answers would be possible, depending on how the original exception is re-thrown in bar(): throw e overwrites the original stack trace (originating in foo()) with the place where it was thrown again (in bar()). On the other hand, the bare throw keyword re-throws the exception, keeping the original stack trace. Java follows the second approach (using the syntax of the first one) and doesn't even allow the former approach directly. But what about this slightly modified version: public static void main(String[] args) throws IOException { final RuntimeException e = foo(); bar(e); }private static RuntimeException foo() { return new RuntimeException(); }private static void bar(RuntimeException e) { throw e; }foo() only creates the exception, but instead of throwing it, returns the exception object. This exception is then thrown from a completely different method. How will the stack trace look now? Surprise, it still points to foo(), just as if the exception was thrown from there, exactly the same as in the first example: Exception in thread "main" java.lang.RuntimeException at Main.foo(Main.java:7) at Main.main(Main.java:15)What's going on, you might ask? It looks like the stack trace is not generated when the exception is thrown, but when the exception object is created. 
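This is easy to check directly: ask an exception object for its trace without ever throwing it. A minimal sketch (class and method names are mine, not from the article):

```java
public class CreationSite {

    // The exception is created here, but never thrown here.
    static RuntimeException make() {
        return new RuntimeException("made in make()");
    }

    // Returns the method name of the top stack trace frame.
    public static String topFrame() {
        RuntimeException e = make();
        // The top frame points at make(), where the object was created,
        // not at topFrame() and not at any throw site (there is none).
        return e.getStackTrace()[0].getMethodName();
    }

    public static void main(String[] args) {
        System.out.println(topFrame()); // prints "make"
    }
}
```

No throw statement is ever executed, yet the trace is fully populated and points at the creation site.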
In the vast majority of situations these actions occur in the same place, so no one bothers. Many beginning Java programmers aren't even aware that one can create an exception object and assign it to a variable or field, or even pass it around. But where does the exception stack trace come from, really? The answer is quite simple: from the Throwable.fillInStackTrace() method! public class Throwable implements Serializable {public synchronized native Throwable fillInStackTrace();//... }Notice that this method is not final, which allows us to hack a little bit. Not only can we bypass stack trace creation and throw an exception without any context, we can even overwrite the stack trace completely! public class SponsoredException extends RuntimeException {@Override public synchronized Throwable fillInStackTrace() { setStackTrace(new StackTraceElement[]{ new StackTraceElement("ADVERTISEMENT", " If you don't ", null, 0), new StackTraceElement("ADVERTISEMENT", " want to see this ", null, 0), new StackTraceElement("ADVERTISEMENT", " exception ", null, 0), new StackTraceElement("ADVERTISEMENT", " please buy ", null, 0), new StackTraceElement("ADVERTISEMENT", " full version ", null, 0), new StackTraceElement("ADVERTISEMENT", " of the program ", null, 0) }); return this; } }public class ExceptionFromHell extends RuntimeException {public ExceptionFromHell() { super("Catch me if you can"); }@Override public synchronized Throwable fillInStackTrace() { return this; } }Throwing the exceptions above will result in the following errors printed by the JVM (seriously, try it!): Exception in thread "main" SponsoredException at ADVERTISEMENT. If you don't (Unknown Source) at ADVERTISEMENT. want to see this (Unknown Source) at ADVERTISEMENT. exception (Unknown Source) at ADVERTISEMENT. please buy (Unknown Source) at ADVERTISEMENT. full version (Unknown Source) at ADVERTISEMENT. of the program (Unknown Source)Exception in thread "main" ExceptionFromHell: Catch me if you canThat's right. 
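As a side note, since Java 7 a similar suppression is available without overriding fillInStackTrace() at all: Throwable has a protected four-argument constructor whose writableStackTrace flag skips the capture entirely. A minimal sketch (the class name is mine):

```java
public class StacklessException extends RuntimeException {

    public StacklessException(String message) {
        // Arguments: message, cause, enableSuppression, writableStackTrace.
        // Passing false for the last flag means fillInStackTrace() is never
        // invoked, so getStackTrace() returns an empty array.
        super(message, null, false, false);
    }

    public static void main(String[] args) {
        System.out.println(new StacklessException("no trace").getStackTrace().length); // prints 0
    }
}
```

This is the supported route if you really need trace-free exceptions, rather than relying on the overridable method.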
ExceptionFromHell is even more interesting. As it does not include the stack trace as part of the exception object, only the class name and message are available. The stack trace was lost, and neither the JVM nor any logging framework can do anything about it. Why on earth would you ever do that (and I am not talking about SponsoredException)? Generating a stack trace is considered expensive by some. It's a native method and it has to walk down the whole stack to build the StackTraceElements. Once in my life I saw a library using this technique to make throwing exceptions faster. So I wrote a quick Caliper benchmark to see the performance difference between throwing a normal RuntimeException and an exception without the stack trace filled in, vs. an ordinary method returning a value. I ran the tests with different stack trace depths using recursion: public class StackTraceBenchmark extends SimpleBenchmark {@Param({"1", "10", "100", "1000"}) public int threadDepth;public void timeWithoutException(int reps) throws InterruptedException { while(--reps >= 0) { notThrowing(threadDepth); } }private int notThrowing(int depth) { if(depth <= 0) return depth; return notThrowing(depth - 1); }//--------------------------------------public void timeWithStackTrace(int reps) throws InterruptedException { while(--reps >= 0) { try { throwingWithStackTrace(threadDepth); } catch (RuntimeException e) { } } }private void throwingWithStackTrace(int depth) { if(depth <= 0) throw new RuntimeException(); throwingWithStackTrace(depth - 1); }//--------------------------------------public void timeWithoutStackTrace(int reps) throws InterruptedException { while(--reps >= 0) { try { throwingWithoutStackTrace(threadDepth); } catch (RuntimeException e) { } } }private void throwingWithoutStackTrace(int depth) { if(depth <= 0) throw new ExceptionFromHell(); throwingWithoutStackTrace(depth - 1); }//--------------------------------------public static void main(String[] args) { Runner.main(StackTraceBenchmark.class, new String[]{"--trials", "1"}); }}Here are the results: We can clearly see that the longer the stack trace is, the longer it takes to throw an exception. We also see that for reasonable stack trace lengths throwing an exception should not take more than 100 μs (faster than reading 1 MiB of main memory). Finally, throwing an exception without a stack trace is 2-5 times faster. But honestly, if this is an issue for you, the problem is somewhere else. If your application throws exceptions so often that you actually have to optimize it, there is probably something wrong with your design. Do not fix Java then, it's not broken. Summary: the stack trace always shows the place where the exception (object) was created, not where it was thrown, although in 99% of the cases that's the same place; you have full control over the stack trace returned by your exceptions; generating a stack trace has some cost, but if it becomes a bottleneck in your application, you are probably doing something wrong. Reference: Where do the stack traces come from? from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....

5 things I learned from a Hacker attack

On Friday evening I got an e-mail from my provider. They told me my webspace was the subject of a hacker attack and they would shut it down until they had analysed its root cause. There was no more information and the only thing I could do was to wait. Fortunately they wrote me back on Saturday morning with some explanation and tips on how to clean my websites up. Here is what I have learned from the past night and from the attack of some script kiddies. And on a side note, I really dislike these idiots who browse the Internet and get on everybody else's nerves!1. Update! Yes, it's my fault. I had made a quick sample installation of WordPress for a potential customer. The customer did not want it and I forgot about it. The current WordPress version is 3.4.1 and my server had 3.1.4 installed. I have heard the WordPress developers are quick with security fixes, but if you don't update your installation it's your fault.2. Delete what you don't need. Now. As mentioned, I didn't need the WordPress instance but was too lazy to delete it right away and later forgot about it. I will not make this mistake again. If I don't need it, I will delete it instantly. In my defence, I have a pretty bad Internet connection and uploading takes me ages. This is why I have become lazy. But of course I could have moved it into an invisible folder. In addition, these websites are not my main business. Therefore I bought a standard hosting package and thought I could trust that nobody would find my old files. Of course this was an idiotic thing to think; I know it, and I knew it then.3. Check what happens. When I got the e-mail, the script kiddies had already been active for a while. I was unaware they were doing weird stuff. If I had known, I could have avoided the outage: I could have disabled all my websites, looked for the root cause and fixed the system before my provider took me offline for 12 hours. Therefore I decided to check more regularly what's going on. 
The following script helps me: find -newermt yesterday -ls | mail -s 'Changed Files Report' mail@example.com This runs as a cronjob. It mails me the files which changed yesterday. This way I can double-check the changes and have a higher chance of acting quickly (and hopefully quicker than my provider).4. Go static. A while ago I played with Jekyll. It's a nice Ruby tool which lets you generate static HTML pages, similar to Maven's site generation. It is great because it supports templates, Markdown and more features which help you use "dynamic power" to generate static pages. The projects I have started with it are not ready yet, but the Dartlang.org homepage is built with Jekyll itself. You can read on Seth Ladd's blog how it works. What I learned yesterday is that I will replace all dynamic web pages (mostly on WordPress) with static HTML pages generated by Jekyll, wherever I don't urgently need the dynamic power. Let's be honest: in some cases we use PHP just as a kind of templating mechanism. You can do templating with Jekyll. Even standard blogs can be done perfectly with it. In addition, you can commit the whole Jekyll project to Git and the project layout is very easy to understand. In my case, I have various web pages in mind which will now turn into Jekyll pages. And yes, I will take the performance bonus, as well as the fact that static HTML pages do not open security holes to script kiddies so easily. UPDATE: My colleague Torsten Curdt recommended awestruct to me for static site generation. Looks promising!5. Read exploit sites The idiots who thought it would be a good idea to break into my webspace and put up links for their trivial websites copied a PHP script to my web server which gave them a lot of information on my environment, like writable folders and such. The funny thing is, the script was GPLed and they stayed compliant with the licensing conditions. In the header was the original source of the script, which is exploit-db dot com. 
Tons of exploits are collected on this page. Script kiddies can download them from there and attack you. The website says its intention is to give people like us the chance to protect our work against hackers. I am not sure how many of us read such pages compared to script kiddies. But well, from now on I will look at that site from time to time and check if the software I use is vulnerable to a specific exploit which has not been fixed yet. Reference: 5 things I learned from a Hacker attack from our JCG partner Christian Grobmeier at the PHP und Java Entwickler blog....

Spring Security: Prevent brute force attack

Spring Security can do a lot of stuff for you: account blocking, password salting. But what about a brute force blocker? That you have to do yourself. Fortunately Spring is quite a flexible framework, so it is not a big deal to configure it. Let me show you a little guide on how to do this for a Grails application. First of all you have to enable the springSecurityEventListener in your Config.groovy: grails.plugins.springsecurity.useSecurityEventListener = true Then implement the listeners: in /src/bruteforce create these classes /** Registers all failed attempts to login. Main purpose is to count attempts for a particular account and block the user */ class AuthenticationFailureListener implements ApplicationListener<AuthenticationFailureBadCredentialsEvent> {LoginAttemptCacheService loginAttemptCacheService@Override void onApplicationEvent(AuthenticationFailureBadCredentialsEvent e) { loginAttemptCacheService.failLogin(e.authentication.name) } }Next we have to create a listener for successful logins in the same package /** Listener for successful logins. Used for resetting the number of unsuccessful logins for a specific account */ class AuthenticationSuccessEventListener implements ApplicationListener<AuthenticationSuccessEvent>{LoginAttemptCacheService loginAttemptCacheService@Override void onApplicationEvent(AuthenticationSuccessEvent e) { loginAttemptCacheService.loginSuccess(e.authentication.name) } }We did not put them in our grails-app folder, so we need to register these classes as Spring beans. Add the following lines to grails-app/conf/spring/resources.groovy: beans = { authenticationFailureListener(AuthenticationFailureListener) { loginAttemptCacheService = ref('loginAttemptCacheService') }authenticationSuccessEventListener(AuthenticationSuccessEventListener) { loginAttemptCacheService = ref('loginAttemptCacheService') } }You probably noticed the usage of LoginAttemptCacheService. Let's implement it. 
This would be a typical Grails service: package com.picsel.officeanywhereimport com.google.common.cache.CacheBuilder import com.google.common.cache.CacheLoader import com.google.common.cache.LoadingCacheimport java.util.concurrent.TimeUnit import org.apache.commons.lang.math.NumberUtils import javax.annotation.PostConstructclass LoginAttemptCacheService {private LoadingCache attempts; private int allowedNumberOfAttempts def grailsApplication@PostConstruct void init() { allowedNumberOfAttempts = grailsApplication.config.brutforce.loginAttempts.allowedNumberOfAttempts int time = grailsApplication.config.brutforce.loginAttempts.time log.info "account block configured for $time minutes" attempts = CacheBuilder.newBuilder() .expireAfterWrite(time, TimeUnit.MINUTES) .build({0} as CacheLoader); }/** * Triggers on each unsuccessful login attempt and increases the number of attempts in the local accumulator * @param login - username which is trying to login */ def failLogin(String login) { def numberOfAttempts = attempts.get(login) log.debug "fail login $login previous number of attempts $numberOfAttempts" numberOfAttempts++if (numberOfAttempts > allowedNumberOfAttempts) { blockUser(login) attempts.invalidate(login) } else { attempts.put(login, numberOfAttempts) } }/** * Triggers on each successful login attempt and resets the number of attempts in the local accumulator * @param login - username which is logging in */ def loginSuccess(String login) { log.debug "successful login for $login" attempts.invalidate(login) }/** * Disables the user account so it will not be able to login * @param login - username that has to be disabled */ private void blockUser(String login) { log.debug "blocking user: $login" def user = User.findByUsername(login) if (user) { user.accountLocked = true; user.save(flush: true) } } } We will be using CacheBuilder from the Google Guava library. 
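The core of this service is just an expiring per-account counter. For a plain Java project without Guava, the same idea can be sketched with a map of timestamps (the class name, window and limit values are illustrative, not from the tutorial):

```java
import java.util.HashMap;
import java.util.Map;

public class LoginAttemptCounter {

    private final int allowedAttempts;
    private final long windowMillis;
    // login -> {attempt count, time of first attempt in the current window}
    private final Map<String, long[]> attempts = new HashMap<>();

    public LoginAttemptCounter(int allowedAttempts, long windowMillis) {
        this.allowedAttempts = allowedAttempts;
        this.windowMillis = windowMillis;
    }

    /** Records a failed login; returns true when the account should be blocked. */
    public synchronized boolean failLogin(String login, long now) {
        long[] entry = attempts.get(login);
        if (entry == null || now - entry[1] > windowMillis) {
            entry = new long[]{0, now}; // start a fresh window, like cache expiry
        }
        entry[0]++;
        attempts.put(login, entry);
        if (entry[0] > allowedAttempts) {
            attempts.remove(login);     // mirrors attempts.invalidate(login)
            return true;                // caller locks the account here
        }
        return false;
    }

    /** A successful login resets the counter, like loginSuccess() above. */
    public synchronized void loginSuccess(String login) {
        attempts.remove(login);
    }
}
```

With allowedAttempts = 3, the fourth failure inside the window reports that the account should be blocked, matching the service's `numberOfAttempts > allowedNumberOfAttempts` check.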
So add the following line to BuildConfig.groovy: dependencies { runtime 'com.google.guava:guava:11.0.1' }And as the last step, add the service configuration to Config.groovy: brutforce { loginAttempts { time = 5 allowedNumberOfAttempts = 3 } }That's it, you are ready to run your application. For a typical Java project almost everything will be the same: same listeners and same services.More about Spring Security events. More about caching with Google Guava. Grails users can simply use this plugin: https://github.com/grygoriy/bruteforcedefender Happy coding and don't forget to share! Reference: Prevent brute force attack with Spring Security from our JCG partner Grygoriy Mykhalyuno at the Grygoriy Mykhalyuno's blog blog....

Android Dialog – Android Custom Dialog

In this tutorial I am going to describe how to create a custom Android dialog.Create Android Project AndroidDialog: File -> New -> Android ProjectAndroid Layoutactivity_android_dialog.xml <RelativeLayout xmlns:android='http://schemas.android.com/apk/res/android' xmlns:tools='http://schemas.android.com/tools' android:layout_width='match_parent' android:layout_height='match_parent' ><Button android:id='@+id/btn_launch' android:layout_width='wrap_content' android:layout_height='wrap_content' android:layout_alignParentTop='true' android:layout_centerHorizontal='true' android:layout_marginTop='115dp' android:text='Launch Dialog' /><TextView android:id='@+id/textView1' android:layout_width='wrap_content' android:layout_height='wrap_content' android:layout_alignParentLeft='true' android:layout_alignParentTop='true' android:layout_marginLeft='28dp' android:layout_marginTop='54dp' android:text='@string/app_desc' android:textAppearance='?android:attr/textAppearanceLarge' /> </RelativeLayout>Dialog Layoutdialog_layout.xml <?xml version='1.0' encoding='utf-8'?> <LinearLayout xmlns:android='http://schemas.android.com/apk/res/android' android:layout_width='fill_parent' android:layout_height='fill_parent' android:orientation='vertical' android:padding='10sp' ><EditText android:id='@+id/txt_name' android:layout_width='fill_parent' android:layout_height='wrap_content' android:hint='@string/dialog_uname' android:singleLine='true' ><requestFocus /> </EditText><EditText android:id='@+id/password' android:layout_width='match_parent' android:layout_height='wrap_content' android:ems='10' android:inputType='textPassword' > </EditText><RelativeLayout android:layout_width='match_parent' android:layout_height='wrap_content' ><Button android:id='@+id/btn_login' android:layout_width='120dp' android:layout_height='wrap_content' android:text='@string/dialog_submit' /><Button android:id='@+id/btn_cancel' android:layout_width='120dp' android:layout_height='wrap_content' 
android:layout_alignParentTop='true' android:layout_marginLeft='10dp' android:layout_toRightOf='@+id/btn_login' android:text='@string/dialog_cancel' /> </RelativeLayout></LinearLayout>AndroidDialog ActivityOverride both the onCreateDialog(int id) and onPrepareDialog(int id, Dialog dialog) methods and add the following code, which will create your custom Android dialog. import android.os.Bundle; import android.view.LayoutInflater; import android.view.View; import android.widget.Button; import android.widget.EditText; import android.widget.Toast; import android.app.Activity; import android.app.AlertDialog; import android.app.Dialog;public class AndroidDialog extends Activity {final private static int DIALOG_LOGIN = 1;@Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_android_dialog);Button launch_button = (Button) findViewById(R.id.btn_launch);launch_button.setOnClickListener(new View.OnClickListener() {@Override public void onClick(View v) { showDialog(DIALOG_LOGIN); } }); }@Override protected Dialog onCreateDialog(int id) {AlertDialog dialogDetails = null;switch (id) { case DIALOG_LOGIN: LayoutInflater inflater = LayoutInflater.from(this); View dialogview = inflater.inflate(R.layout.dialog_layout, null);AlertDialog.Builder dialogbuilder = new AlertDialog.Builder(this); dialogbuilder.setTitle("Login"); dialogbuilder.setView(dialogview); dialogDetails = dialogbuilder.create();break; }return dialogDetails; }@Override protected void onPrepareDialog(int id, Dialog dialog) {switch (id) { case DIALOG_LOGIN: final AlertDialog alertDialog = (AlertDialog) dialog; Button loginbutton = (Button) alertDialog .findViewById(R.id.btn_login); Button cancelbutton = (Button) alertDialog .findViewById(R.id.btn_cancel); final EditText userName = (EditText) alertDialog .findViewById(R.id.txt_name); final EditText password = (EditText) alertDialog .findViewById(R.id.password);loginbutton.setOnClickListener(new View.OnClickListener() {@Override public void onClick(View v) { alertDialog.dismiss(); Toast.makeText( AndroidDialog.this, "User Name : " + userName.getText().toString() + " Password : " + password.getText().toString(), Toast.LENGTH_LONG).show(); } });cancelbutton.setOnClickListener(new View.OnClickListener() {@Override public void onClick(View v) { alertDialog.dismiss(); } }); break; } } }Happy coding and don't forget to share! Reference: Android Dialog – Android Custom Dialog from our JCG partner Chathura Wijesinghe at the Java Sri Lankan Support blog....

Coding and Cynicism

We had no reasons to be anxious about this component. It had been running for about a year now. It used to handle around 1000 messages per day and email out an automated report twice every day. The solution was based on robust integration tools and technologies, i.e. TIBCO EMS for delivering messages and Spring Integration for reading and handling them. Everything was predictable, boring and nice. And one morning everything changed. This component froze with a null pointer exception. Nothing more, nothing less. There were no logs. They never are when you need them. Nothing had changed in the code or in the mode of delivery. There were no obvious miscreants. Business had found out about the break, as one of the automated reports had failed, and were demanding an estimated time of fix. It was a picture-perfect start for the firefighters of the product team, and they poured their first cup of coffee. So, the team swung into action. Half a day later, after multiple calls with business (none of them very pleasant, mind you), it was suggested that it might, just might, be that a couple of messages among the 1000 or so did not have a required field, which, by the way, was guaranteed by the business processes to be there. So we took these two messages off and switched on the component. Lo and behold, it crashed again. This time because there were many more messages than it could handle (remember, messages kept coming in while the team was troubleshooting the problem). I will not bore you with the multitude of calls that followed, and how a fix was arrived at and delivered. It suffices to say that too many man hours were spent on this for my comfort. And this led me to write down my thoughts on it. I am all for communications, meetings, workshops, and the creation of all sorts of requirements and design documents. I see the value in all of them. I really do, although I have been accused many a time of not doing so. 
But, at the end of the day, there is no substitute for a minimal amount of street smartness. A healthy amount of cynicism goes a long way in designing a resilient system. In this particular case, a couple of things had gone wrong. 1. We trusted the data quality of the feed coming in from a different system. And we should not have. No, this is not going to be written down in any book discussing integration patterns. It is just something that a seasoned developer would not do, but a new one, although as sharp as a tack, would slip up on. Folks had trusted the requirement document that guaranteed that certain fields would be populated. But the fact is, when the fields were not populated, it was not OK for our component to go down. A seasoned developer would have consulted the requirements document and developed to it, but would not have trusted the requirement document. He would have been cynical. 2. We trusted the data volume of the feed. And we should not have. Again, this was something written down in the document, and hence the code was technically correct. But if only the developer had said, 'Hang on, if you are saying 1000 is the top that you expect, fine, I will pull only 1000 at one go. If there are more, I will pull a second batch. And more batches if I need. But never more than 1000.' we would have been fine. We should not have pulled all data from the message queue, assuming it would be fewer than 1000, just because it was written down in the document. A seasoned developer would have been cynical of the document. The component is fixed and everything is back in business. It is no biggie. This was not the first time something like this happened and I am willing to wager that it will not be the last. The point that I am trying to make is that the business of software production is not, and perhaps will never be, like the production line of a hardware commodity. 
It is most unlikely to enjoy the stability, predictability, and repeatability of the production line of, say, a car. So the proliferation of processes, documents, and meetings is not going to be as successful in this business. Processes are fine. Documents are fine. Productivity-measuring tools and code quality metrics are great. Workshops are great. Peer reviews are a must. But they are quite unlikely to be a substitute for a person who loves coding, takes pride in it, and goes that extra mile to ensure that his code does not fail. These people will always be in short supply and in great demand. As an industry, sooner or later we will have to find a way to create, foster and retain these individuals. That's it for today. Don't forget to share! Reference: Coding and Cynicism. from our JCG partner Partho at the Tech for Enterprise blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.