The Architecture Spike Kata

Do you know how to apply coding practices to the technology stack that you use on a daily basis? Do you know how the technology stack works? For many programmers, it’s easy enough to use test-driven development with a trivial example, but it can be very hard to know how to apply it to the problems you face every day in your job. Java web+database applications are usually filled to the brim with technologies. Many of these are hard to test and many of these may not add value. In order to explore TDD and Java applications, I practiced the Java EE Spike Kata in 2010. Here’s a video of me and Anders Karlsen doing this kata at JavaZone 2010. A similar approach is likely to be useful for programmers using any technology. Therefore, I give you: the rules of the Architecture Spike Kata.

The problem

Create a web application that lets users register a Person with names and search for people. The Person objects should be saved in a data store similar to the technology you use daily (probably a relational database). The goal is to get a spike working as quickly as possible, so in the first iteration, the Person entity should probably contain only one field. You can add more fields and refactor the application later.

The rules

The most important rules are Robert Martin‘s three rules of test-driven development:

- No code without a test (that is, the code should never do something that isn’t required in order to get a test to pass)
- Only enough test to get to red (that is, the tests should run, give an error message, and that error message should be correct)
- Only enough code to get to green (that is, the tests should run and not give an error)
- (My addition: Refactor on green without adding functionality)

Secondly, the application should be driven from the outside in. That is, your first test should be a top-level acceptance test that tests through HTTP and HTML. It’s okay to comment out or @Ignore this test after it has run red for the first time.
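To make the "outside-in, minimal technology" idea concrete, here is a sketch of what a first iteration might look like in plain Java. It goes even below Servlets and uses only the JDK's built-in com.sun.net.httpserver. All names here (PersonServerSpike, the /people path, the q query parameter, POST-body-as-name) are my own inventions for illustration, not part of the kata:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// First-iteration spike: one field (the name), an in-memory "data store",
// and the two operations the kata asks for: register and search.
public class PersonServerSpike {
    private static final List<String> people = new ArrayList<>();

    static void register(String name) {
        people.add(name);
    }

    static String search(String query) {
        StringBuilder hits = new StringBuilder();
        for (String name : people) {
            if (name.contains(query)) {
                hits.append(name).append('\n');
            }
        }
        return hits.toString();
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/people", exchange -> {
            String response;
            if ("POST".equals(exchange.getRequestMethod())) {
                // Register: the request body is simply the person's name.
                String name = new String(exchange.getRequestBody().readAllBytes(),
                        StandardCharsets.UTF_8).trim();
                register(name);
                response = "Registered " + name;
            } else {
                // Search: GET /people?q=substring
                String rawQuery = exchange.getRequestURI().getQuery();
                String q = (rawQuery != null && rawQuery.startsWith("q="))
                        ? rawQuery.substring(2) : "";
                response = search(q);
            }
            byte[] bytes = response.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(bytes);
            }
        });
        server.start();
    }
}
```

In the second iteration, the in-memory list would be replaced with the data store you actually use daily, driven out by the same tests.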
Lastly, you should not introduce any technology before the pain of not doing so is blinding. The first time you do the kata in a language, don’t use a web framework beyond the language minimum (in Java, this means Servlets; in node.js it’s require('http'); in Ruby it means Rack). Don’t use an object-relational mapping framework. Don’t use a dependency injection framework. Most definitely don’t use an application generator like Rails scaffold, Spring Roo or Lift. These frameworks can be real time savers, but this kata is about understanding how the underlying technology works. As a second iteration, use the technologies you use on a daily basis, but this time set them up from scratch. For example, if your project uses Hibernate, try configuring the session factory by hand. By using frameworks in the simplest way possible, you’ll learn both more about what they bring to the table and how to use them properly. For complex technology like Hibernate, there’s no substitute for deeper understanding.

What to expect

So far, I’ve only done the Architecture Spike Kata in Java. On the other hand, I’ve done it around 50 times together with more than ten other developers. I’ve written about how to get started with the Java EE Spike Kata (in Norwegian) on my blog before. This is what I’ve learned about working with web applications in Java:

- Most Java web frameworks seem to harm more than they help
- Hibernate is a bitch to set up, but once it’s working, it saves a lot of hassle
- Using TDD with Hibernate helped me understand how to use Hibernate more effectively
- I’ve stopped using dependency injection frameworks (but kept on using dependency injection as a pattern)
- I have learned several ways to test web applications and database access, both independently and integrated
- I no longer have to expend mental energy to write tests for full-stack applications

The first time I do this kata with another developer, it takes around 3 to 5 hours, depending on the experience level of my pair.
After running through it a few times, most developers can complete the task in less than an hour. We get better through practice, and the Architecture Spike Kata is a way to practice TDD with the technologies that you use daily and get a better understanding of what’s going on.

Reference: The Architecture Spike Kata from our JCG partner Johannes Brodwall at the Thinking Inside a Bigger Box blog.

Red Hat OpenShift: Getting started – Java EE 6 in the Cloud

For a while now I’ve been looking into ‘the cloud’: looking into its features, what it can do, why we should switch to ‘the cloud’, going to talks, and talking to people like @maartenballiauw, who is a cloud specialist at RealDolmen. I’ve already deployed an application on Google App Engine (for Java) and I really liked the experience. Some new concepts come into play, like distributed data and so on. But in the recent chain of events, being more interested in the future of Java EE, I looked into OpenShift. OpenShift is a PaaS offered by Red Hat. The basic idea is to run Java EE 6 in the cloud, and that is exactly what we want to do. I’m using Ubuntu for this, so all my commands are based on the Ubuntu distro. Be sure to register for an account on openshift.redhat.com; you will need it to create a domain and application.

Starting off, we have to install the Ruby gems. The Ruby gems are the interface for managing our cloud domain, so first we install the prerequisites:

$ sudo apt-get install git ruby rubygems ruby1.8-dev

We need git to check out the code; the Ruby packages are needed to install the gems. Now we install the gems:

$ sudo gem install rhc

rhc (Red Hat Cloud, I presume) is the base for all the commands that will be used to manipulate our OpenShift domain. The gems are installed by default in the /var/lib/gems/1.8/gems/bin folder; we had best add it to our $PATH variable for easy access.

Now everything is ready to start working with OpenShift. First we want to create a domain. The domain is your work directory on OpenShift. Choose something unique and you will be able to access your applications via http://projectname-domainname.rhcloud.com. To create your domain we need the ‘rhc-create-domain’ command:

$ ./rhc-create-domain -n domainname -l loginid

You will be prompted for your password; just type it and you are done. Your domain is created. With the domain set up, we now want to create an application.
$ ./rhc-create-app -a applicationName -t jbossas-7.0

The -t parameter indicates we will be running the application on JBoss AS 7.0. The cool thing about creating an application on OpenShift is that we now have a fully set-up git repository. When we push, the application is pushed to OpenShift. To start off, I forked the seambooking example on github (https://github.com/openshift/seambooking-example). I did not really need to fork it, but it offers a good basic setup for an OpenShift project. Once I’ve added the code to my OpenShift git repository, I can simply do a git push:

$ git push

The sample app is running, running in the cloud… More information on http://openshift.redhat.com and https://github.com/openshift/seambooking-example

Reference: Red Hat Openshift: Getting started – Java EE6 in the Cloud from our JCG partner Jelle Victoor at the Styled Ideas blog.
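The whole session above, condensed into one sequence (a sketch of configuration commands, not run here: the domain, login and application names are placeholders, and the gem bin path is the one mentioned in the text):

```shell
# Install git and the Ruby toolchain, then the Red Hat Cloud gem
sudo apt-get install git ruby rubygems ruby1.8-dev
sudo gem install rhc

# Make the rhc-* commands reachable
export PATH=$PATH:/var/lib/gems/1.8/gems/bin

# Create a domain and a JBoss AS 7 application
# (you will be prompted for your openshift.redhat.com password)
rhc-create-domain -n mydomain -l my_account@example.com
rhc-create-app -a myapp -t jbossas-7.0

# Push your code; OpenShift deploys on push
git push
```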

JAXB, SAX, DOM Performance

This post investigates the performance of unmarshalling an XML document to Java objects using a number of different approaches. The XML document is very simple. It contains a collection of Person entities.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<persons>
    <person>
        <id>person0</id>
        <name>name0</name>
    </person>
    <person>
        <id>person1</id>
        <name>name1</name>
    </person>
    ...

There is a corresponding Person Java object for the Person entity in the XML:

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "", propOrder = { "id", "name" })
public class Person {
    private String id;
    private String name;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String value) { this.name = value; }
}

and a PersonList object to represent a collection of Persons:

@XmlAccessorType(XmlAccessType.FIELD)
@XmlRootElement(name = "persons")
public class PersonList {
    @XmlElement(name = "person")
    private List<Person> personList = new ArrayList<Person>();

    public List<Person> getPersons() { return personList; }
    public void setPersons(List<Person> persons) { this.personList = persons; }
}

The approaches investigated were:
- Various flavours of JAXB
- SAX
- DOM

In all cases, the objective was to get the entities in the XML document into the corresponding Java objects. The JAXB annotations on the Person and PersonList POJOs are used in the JAXB tests. The same classes can be used in the SAX and DOM tests (the annotations will just be ignored). Initially the reference implementations for JAXB, SAX and DOM were used. The Woodstox implementation of StAX parsing was then used; this would have been invoked in some of the JAXB unmarshalling tests. The tests were carried out on my Dell laptop, a Pentium Dual-Core CPU, 2.1 GHz running Windows 7.

Test 1 – Using JAXB to unmarshall a Java File
@Test
public void testUnMarshallUsingJAXB() throws Exception {
    JAXBContext jc = JAXBContext.newInstance(PersonList.class);
    Unmarshaller unmarshaller = jc.createUnmarshaller();
    PersonList obj = (PersonList) unmarshaller.unmarshal(new File(filename));
}

Test 1 illustrates how simple the programming model for JAXB is. It is very easy to go from an XML file to Java objects. There is no need to get involved with the nitty-gritty details of marshalling and parsing.

Test 2 – Using JAXB to unmarshall a StreamSource

Test 2 is similar to Test 1, except this time a StreamSource object wraps around a File object. The StreamSource object gives a hint to the JAXB implementation to stream the file.

@Test
public void testUnMarshallUsingJAXBStreamSource() throws Exception {
    JAXBContext jc = JAXBContext.newInstance(PersonList.class);
    Unmarshaller unmarshaller = jc.createUnmarshaller();
    StreamSource source = new StreamSource(new File(filename));
    PersonList obj = (PersonList) unmarshaller.unmarshal(source);
}

Test 3 – Using JAXB to unmarshall a StAX XMLStreamReader

Again similar to Test 1, except this time an XMLStreamReader instance wraps a FileReader instance, which is unmarshalled by JAXB.

@Test
public void testUnMarshallingWithStAX() throws Exception {
    FileReader fr = new FileReader(filename);
    JAXBContext jc = JAXBContext.newInstance(PersonList.class);
    Unmarshaller unmarshaller = jc.createUnmarshaller();
    XMLInputFactory xmlif = XMLInputFactory.newInstance();
    XMLStreamReader xmler = xmlif.createXMLStreamReader(fr);
    PersonList obj = (PersonList) unmarshaller.unmarshal(xmler);
}

Test 4 – Just use DOM

This test uses no JAXB and instead just uses the JAXP DOM approach. This means that, straight away, more code is required than in any JAXB approach.
@Test
public void testParsingWithDom() throws Exception {
    DocumentBuilderFactory domFactory = DocumentBuilderFactory.newInstance();
    DocumentBuilder builder = domFactory.newDocumentBuilder();
    Document doc = builder.parse(filename);
    List<Person> personsAsList = new ArrayList<Person>();
    NodeList persons = doc.getElementsByTagName("person");
    for (int i = 0; i < persons.getLength(); i++) {
        Element person = (Element) persons.item(i);
        NodeList children = person.getChildNodes();
        Person newperson = new Person();
        for (int j = 0; j < children.getLength(); j++) {
            Node child = children.item(j);
            if (child.getNodeName().equalsIgnoreCase("id")) {
                newperson.setId(child.getTextContent());
            } else if (child.getNodeName().equalsIgnoreCase("name")) {
                newperson.setName(child.getTextContent());
            }
        }
        personsAsList.add(newperson);
    }
}

Test 5 – Just use SAX

Test 5 uses no JAXB and uses SAX to parse the XML document. The SAX approach involves more code and more complexity than any JAXB approach. The developer has to get involved in the parsing of the document.
@Test
public void testParsingWithSAX() throws Exception {
    SAXParserFactory factory = SAXParserFactory.newInstance();
    SAXParser saxParser = factory.newSAXParser();
    final List<Person> persons = new ArrayList<Person>();
    DefaultHandler handler = new DefaultHandler() {
        boolean bpersonId = false;
        boolean bpersonName = false;

        public void startElement(String uri, String localName, String qName,
                Attributes attributes) throws SAXException {
            if (qName.equalsIgnoreCase("id")) {
                bpersonId = true;
                Person person = new Person();
                persons.add(person);
            } else if (qName.equalsIgnoreCase("name")) {
                bpersonName = true;
            }
        }

        public void endElement(String uri, String localName, String qName)
                throws SAXException {
        }

        public void characters(char ch[], int start, int length) throws SAXException {
            if (bpersonId) {
                String personID = new String(ch, start, length);
                bpersonId = false;
                Person person = persons.get(persons.size() - 1);
                person.setId(personID);
            } else if (bpersonName) {
                String name = new String(ch, start, length);
                bpersonName = false;
                Person person = persons.get(persons.size() - 1);
                person.setName(name);
            }
        }
    };
    saxParser.parse(filename, handler);
}

The tests were run 5 times each for 3 files containing collections of Person entities. The first file contained 100 Person entities and was 5K in size. The second contained 10,000 entities and was 500K in size, and the third contained 250,000 Person entities and was 15 MB in size. In no case was any XSD used or any validation performed. The results are given in result tables where the times for the different runs are comma-separated.
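Test documents of the three sizes described above are easy to regenerate. Here is a minimal sketch (the class and file names are my own inventions; the id/name pattern follows the sample XML at the top of the post):

```java
import java.io.FileWriter;
import java.io.IOException;
import java.io.Writer;

// Generates XML documents in the shape used by the benchmarks:
// a <persons> root holding <person> entries with <id> and <name> children.
public class PersonXmlGenerator {

    static String generate(int count) {
        StringBuilder xml = new StringBuilder(
                "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>\n<persons>\n");
        for (int i = 0; i < count; i++) {
            xml.append("  <person><id>person").append(i)
               .append("</id><name>name").append(i).append("</name></person>\n");
        }
        return xml.append("</persons>\n").toString();
    }

    public static void main(String[] args) throws IOException {
        // The three sizes used in the tests: 100, 10,000 and 250,000 persons.
        for (int n : new int[] {100, 10_000, 250_000}) {
            try (Writer w = new FileWriter("persons-" + n + ".xml")) {
                w.write(generate(n));
            }
        }
    }
}
```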
TEST RESULTS

The tests were first run using JDK 1.6.26 (32-bit), with the reference implementations of SAX, DOM and JAXB shipped with the JDK.

Unmarshall Type     | 100 Persons time (ms) | 10K Persons time (ms) | 250K Persons time (ms)
JAXB (Default)      | 48, 13, 5, 4, 4       | 78, 52, 47, 50, 50    | 1522, 1457, 1353, 1308, 1317
JAXB (StreamSource) | 11, 6, 3, 3, 2        | 44, 44, 48, 45, 43    | 1191, 1364, 1144, 1142, 1136
JAXB (StAX)         | 18, 2, 1, 1, 1        | 111, 136, 89, 91, 92  | 2693, 3058, 2495, 2472, 2481
DOM                 | 16, 2, 2, 2, 2        | 89, 50, 55, 53, 50    | 1992, 2198, 1845, 1776, 1773
SAX                 | 4, 2, 1, 1, 1         | 29, 34, 23, 26, 26    | 704, 669, 605, 589, 591

JDK 1.6.26 test comments:
- The first unmarshalling run is usually the longest.
- Memory usage for JAXB and SAX is similar: about 2 MB for the file with 10,000 persons and 36–38 MB for the file with 250,000. DOM memory usage is far higher: 6 MB for the 10,000-person file and greater than 130 MB for the 250,000-person file.
- The performance times for pure SAX are better, particularly for very large files.

The exact same tests were run again, using the same JDK (1.6.26) but this time with the Woodstox implementation of StAX parsing.

Unmarshall Type     | 100 Persons time (ms) | 10K Persons time (ms) | 250K Persons time (ms)
JAXB (Default)      | 168, 3, 5, 8, 3       | 294, 43, 46, 43, 42   | 2055, 1354, 1328, 1319, 1319
JAXB (StreamSource) | 11, 3, 3, 3, 4        | 43, 42, 47, 44, 42    | 1147, 1149, 1176, 1173, 1159
JAXB (StAX)         | 30, 0, 1, 1, 0        | 67, 37, 40, 37, 37    | 1301, 1236, 1223, 1336, 1297
DOM                 | 103, 1, 1, 1, 2       | 136, 52, 49, 49, 50   | 1882, 1883, 1821, 1835, 1822
SAX                 | 4, 2, 2, 1, 1         | 31, 25, 25, 38, 25    | 613, 609, 607, 595, 613

JDK 1.6.26 + Woodstox test comments:
- Again, the first unmarshalling run is usually proportionally longer.
- Again, memory usage for SAX and JAXB is very similar; both are far better than DOM. The results are very similar to the first run.
- The JAXB (StAX) time has improved considerably, because the Woodstox implementation of StAX parsing is now being used.
- The performance times for pure SAX are still the best.
Particularly for large files.

The exact same tests were run again, but this time using JDK 1.7.02 and the Woodstox implementation of StAX parsing.

Unmarshall Type     | 100 Persons time (ms) | 10,000 Persons time (ms) | 250,000 Persons time (ms)
JAXB (Default)      | 165, 5, 3, 3, 5       | 611, 23, 24, 46, 28      | 578, 539, 511, 511, 519
JAXB (StreamSource) | 13, 4, 3, 4, 3        | 43, 24, 21, 26, 22       | 678, 520, 509, 504, 627
JAXB (StAX)         | 21, 1, 0, 0, 0        | 300, 69, 20, 16, 16      | 637, 487, 422, 435, 458
DOM                 | 22, 2, 2, 2, 2        | 420, 25, 24, 23, 24      | 1304, 807, 867, 747, 1189
SAX                 | 7, 2, 2, 1, 1         | 169, 15, 15, 19, 14      | 366, 364, 363, 360, 358

JDK 7 + Woodstox test comments:
- The performance times for JDK 7 are much better overall. There are some anomalies: the first time the 100-person and the 10,000-person files are parsed.
- The memory usage is slightly higher. For SAX and JAXB it is 2–4 MB for the 10,000-person file and 45–49 MB for the 250,000-person file. For DOM it is higher again: 5–7.5 MB for the 10,000-person file and 136–143 MB for the 250,000-person file.

Notes applying to all tests:
- No memory analysis was done for the 100-person file; the memory usage was simply too small to give useful information.
- Initialising a JAXB context for the first time can take up to 0.5 seconds. This was not included in the test results, as the cost is only paid the very first time; after that, the JVM initialises a context very quickly (consistently < 5 ms). If you notice this behaviour with whatever JAXB implementation you are using, consider initialising the context at start-up.
- These tests use a very simple XML file. In reality there would be more object types and more complex XML. However, these tests should still provide guidance.

Conclusions:

The performance times for pure SAX are slightly better than JAXB, but only for very large files. Unless you are using very large files, the performance differences are not worth worrying about. The programming model advantages of JAXB win out over the complexity of the SAX programming model.
Don’t forget that JAXB also provides random access to the parsed model, like DOM does; SAX does not provide this. Performance times look a lot better with Woodstox if JAXB/StAX is being used. Performance times with the 64-bit JDK 7 look a lot better; memory usage looks slightly higher.

Reference: JAXB, SAX, DOM Performance from our JCG partner Alex Staveley at Dublin’s Tech Blog.

Spring MVC and REST at Google App Engine

Some time ago I wrote about how to implement your RESTful web API using Spring MVC; read my previous post to know about it. In that post, a simple REST example was developed. For testing the application, the file was copied into a web server (Tomcat, for example), and then accessing http://localhost:8080/RestServer/characters/1 returned the information for character 1. In the current post I am going to explain how to transform that application into a Google App Engine application and deploy it into Google’s infrastructure using Maven. Of course, in this case we are going to deploy a REST Spring MVC application, but the same approach can be used to migrate a Spring MVC web application (or any other application developed with another web framework) to GAE.

First of all, you should obviously create a Google account and register a new application (remember the name, because it will be used in the next step). After that you can start the migration. Three changes are required: create appengine-web.xml defining the application name; add a server tag to settings.xml with your Google account information; and modify pom.xml to add the GAE plugin and its dependencies.

Let’s start with appengine-web.xml. This file is used by GAE to configure the application and is created in the WEB-INF directory (at the same level as web.xml).

<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://appengine.google.com/ns/1.0 http://googleappengine.googlecode.com/svn/branches/1.2.1/java/docs/appengine-web.xsd">
    <application>alexsotoblog</application>
    <version>1</version>
    <system-properties>
        <property name="java.util.logging.config.file" value="WEB-INF/classes/logging.properties"/>
    </system-properties>
    <precompilation-enabled>false</precompilation-enabled>
    <sessions-enabled>true</sessions-enabled>
</appengine-web-app>

The most important field is the application tag.
This tag contains the name of our application (defined when you registered the new Google application). The other tags are the version, system properties and environment variables, and miscellaneous configuration, such as whether you want precompilation to enhance performance or whether your application requires sessions.

Your project should not need to be modified any further; from now on, only the Maven files will be touched. In settings.xml, the account information should be added:

<settings xmlns="http://maven.apache.org/SETTINGS/1.1.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.1.0 http://maven.apache.org/xsd/settings-1.1.0.xsd">
    <localRepository>/media/share/maven_repo</localRepository>
    <servers>
        <server>
            <id>appengine.google.com</id>
            <username>my_account@gmail.com</username>
            <password>my_password</password>
        </server>
    </servers>
</settings>

See that it is as easy as registering any other server in Maven. And finally, the most tedious part: modifying pom.xml. The first thing is adding new properties:

<gae.home>/media/share/maven_repo/com/google/appengine/appengine-java-sdk/1.5.5/appengine-java-sdk-1.5.5</gae.home>
<gaeApplicationName>alexsotoblog</gaeApplicationName>
<gaePluginVersion>0.9.0</gaePluginVersion>
<gae.version>1.5.5</gae.version>
<!-- Upload to http://test.latest.<applicationName>.appspot.com by default -->
<gae.application.version>test</gae.application.version>

The first line defines the Appengine Java SDK location. If you already have it installed, insert its location in this tag; if not, copy the same location as in this pom and simply change the Maven repository directory, in my case /media/share/maven_repo, to yours. Typically your Maven repository location will be /home/user/.m2/repository. Maven will download the SDK for you at deploy time. The next step is adding the Maven GAE repository.
<repositories>
    <repository>
        <id>maven-gae-plugin-repo</id>
        <url>http://maven-gae-plugin.googlecode.com/svn/repository</url>
        <name>maven-gae-plugin repository</name>
    </repository>
</repositories>

<pluginRepositories>
    <pluginRepository>
        <id>maven-gae-plugin-repo</id>
        <name>Maven Google App Engine Repository</name>
        <url>http://maven-gae-plugin.googlecode.com/svn/repository/</url>
    </pluginRepository>
</pluginRepositories>

Because our project is a dummy project, DataNucleus is not used. For more complex projects where database access is required using, for example, JDO, the following dependencies should be added:

<dependency>
    <groupId>javax.jdo</groupId>
    <artifactId>jdo2-api</artifactId>
    <version>2.3-eb</version>
    <exclusions>
        <exclusion>
            <groupId>javax.transaction</groupId>
            <artifactId>transaction-api</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.google.appengine.orm</groupId>
    <artifactId>datanucleus-appengine</artifactId>
    <version>1.0.6.final</version>
</dependency>
<dependency>
    <groupId>org.datanucleus</groupId>
    <artifactId>datanucleus-core</artifactId>
    <version>1.1.5</version>
    <scope>runtime</scope>
    <exclusions>
        <exclusion>
            <groupId>javax.transaction</groupId>
            <artifactId>transaction-api</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>geronimo-jta_1.1_spec</artifactId>
    <version>1.1.1</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>geronimo-jpa_3.0_spec</artifactId>
    <version>1.1.1</version>
    <scope>runtime</scope>
</dependency>

And in case you are using DataNucleus, the maven-datanucleus-plugin should be registered. Take care to configure it properly depending on your project.

<plugin>
    <groupId>org.datanucleus</groupId>
    <artifactId>maven-datanucleus-plugin</artifactId>
    <version>1.1.4</version>
    <configuration>
        <!-- Make sure this path contains your persistent classes!
        -->
        <mappingIncludes>**/model/*.class</mappingIncludes>
        <verbose>true</verbose>
        <enhancerName>ASM</enhancerName>
        <api>JDO</api>
    </configuration>
    <executions>
        <execution>
            <phase>compile</phase>
            <goals>
                <goal>enhance</goal>
            </goals>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>org.datanucleus</groupId>
            <artifactId>datanucleus-core</artifactId>
            <version>1.1.5</version>
            <exclusions>
                <exclusion>
                    <groupId>javax.transaction</groupId>
                    <artifactId>transaction-api</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.datanucleus</groupId>
            <artifactId>datanucleus-rdbms</artifactId>
            <version>1.1.5</version>
        </dependency>
        <dependency>
            <groupId>org.datanucleus</groupId>
            <artifactId>datanucleus-enhancer</artifactId>
            <version>1.1.5</version>
        </dependency>
    </dependencies>
</plugin>

Now the Google App Engine dependencies are added:

<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>appengine-api-1.0-sdk</artifactId>
    <version>${gae.version}</version>
</dependency>
<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>appengine-tools-api</artifactId>
    <version>1.3.7</version>
</dependency>

Then, if you want to test GAE functionalities (not used in our dummy project), the following GAE libraries are added:

<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>appengine-api-labs</artifactId>
    <version>${gae.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>appengine-api-stubs</artifactId>
    <version>${gae.version}</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>com.google.appengine</groupId>
    <artifactId>appengine-testing</artifactId>
    <version>${gae.version}</version>
    <scope>test</scope>
</dependency>

The next change is a modification of the maven-war-plugin to include appengine-web.xml in the generated package:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-war-plugin</artifactId>
    <configuration>
        <webResources>
            <resource>
                <directory>src/main/webapp</directory>
                <filtering>true</filtering>
                <includes>
                    <include>**/appengine-web.xml</include>
                </includes>
            </resource>
        </webResources>
    </configuration>
</plugin>

And finally, add the maven-gae-plugin and configure it to upload the application to appspot:

<plugin>
    <groupId>net.kindleit</groupId>
    <artifactId>maven-gae-plugin</artifactId>
    <version>${gaePluginVersion}</version>
    <configuration>
        <serverId>appengine.google.com</serverId>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>net.kindleit</groupId>
            <artifactId>gae-runtime</artifactId>
            <version>${gae.version}</version>
            <type>pom</type>
        </dependency>
    </dependencies>
</plugin>

See that the <serverId> tag contains the server name defined previously in the settings.xml file. Also, if you are using the maven-release-plugin, you can upload the application to appspot automatically during the release:perform goal:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-release-plugin</artifactId>
    <version>2.2.1</version>
    <configuration>
        <goals>gae:deploy</goals>
    </configuration>
</plugin>

Now run the gae:deploy goal. If you have already installed the Appengine Java SDK, your application will be uploaded to your GAE site. But if it is the first time you run the plugin, you will receive an error. Do not panic: this error occurs because the Maven plugin does not find the Appengine SDK in the directory you specified in the <gae.home> tag. If you have pointed gae.home into your local Maven repository, simply run the gae:unpack goal and the SDK will be installed correctly, so when you rerun gae:deploy your application will be uploaded to the Google infrastructure. In this post’s example, you can go to http://alexsotoblog.appspot.com/characters/1 and the character information in JSON format is displayed in your browser. As I noted at the beginning of the post, the same process can be used for any web application, not only Spring REST MVC.
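The deployment sequence described above boils down to two plugin goals (a sketch of the commands, run from the project root; they need network access and the credentials configured in settings.xml, so they are not runnable in isolation):

```shell
# First run only: download and unpack the Appengine SDK into the
# location configured in the <gae.home> property
mvn gae:unpack

# Build the war and upload it to appspot, using the
# appengine.google.com server credentials from settings.xml
mvn gae:deploy
```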
For teaching purposes, all modifications have been made in the application pom. My advice is to create a parent pom with the GAE-related tags, so that each project that must be uploaded to Google App Engine extends from the same pom file. I hope you have found this post useful. This week I am at Devoxx, meet me there ;) I will be speaking on Thursday 17 at 13:00 about Speeding Up Javascript & CSS Download Times With Aggregation and Minification. Full pom file: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>org.springframework</groupId> <artifactId>rest</artifactId> <name>Rest</name> <packaging>war</packaging> <version>1.0.0-BUILD-SNAPSHOT</version> <properties> <java-version>1.6</java-version> <org.springframework-version>3.0.4.RELEASE</org.springframework-version> <org.aspectj-version>1.6.9</org.aspectj-version> <org.slf4j-version>1.5.10</org.slf4j-version> <!-- Specify AppEngine version for your project. 
It should match SDK version pointed to by ${gae.home} property (Typically, one used by your Eclipse plug-in) -->
    <gae.home>/home/alex/.m2/repository/com/google/appengine/appengine-java-sdk/1.5.5/appengine-java-sdk-1.5.5</gae.home>
    <gaeApplicationName>alexsotoblog</gaeApplicationName>
    <gaePluginVersion>0.9.0</gaePluginVersion>
    <gae.version>1.5.5</gae.version>
    <!-- Upload to http://test.latest.<applicationName>.appspot.com by default -->
    <gae.application.version>test</gae.application.version>
  </properties>

  <dependencies>
    <!-- Rest -->
    <dependency>
      <groupId>com.sun.xml.bind</groupId>
      <artifactId>jaxb-impl</artifactId>
      <version>2.2.4-1</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-core-lgpl</artifactId>
      <version>1.8.5</version>
    </dependency>
    <dependency>
      <groupId>org.codehaus.jackson</groupId>
      <artifactId>jackson-mapper-lgpl</artifactId>
      <version>1.8.5</version>
    </dependency>
    <!-- GAE libraries for local testing as described here:
         http://code.google.com/appengine/docs/java/howto/unittesting.html -->
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-api-labs</artifactId>
      <version>${gae.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-api-stubs</artifactId>
      <version>${gae.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-testing</artifactId>
      <version>${gae.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-api-1.0-sdk</artifactId>
      <version>${gae.version}</version>
    </dependency>
    <dependency>
      <groupId>com.google.appengine</groupId>
      <artifactId>appengine-tools-api</artifactId>
      <version>1.3.7</version>
    </dependency>
    <!-- Spring -->
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>${org.springframework-version}</version>
      <exclusions>
        <!-- Exclude Commons Logging in favor of SLF4j -->
        <exclusion>
          <groupId>commons-logging</groupId>
          <artifactId>commons-logging</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-webmvc</artifactId>
      <version>${org.springframework-version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-oxm</artifactId>
      <version>${org.springframework-version}</version>
    </dependency>
    <!-- AspectJ -->
    <dependency>
      <groupId>org.aspectj</groupId>
      <artifactId>aspectjrt</artifactId>
      <version>${org.aspectj-version}</version>
    </dependency>
    <!-- Logging -->
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>${org.slf4j-version}</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>jcl-over-slf4j</artifactId>
      <version>${org.slf4j-version}</version>
      <scope>runtime</scope>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>${org.slf4j-version}</version>
      <scope>runtime</scope>
    </dependency>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.15</version>
      <exclusions>
        <exclusion>
          <groupId>javax.mail</groupId>
          <artifactId>mail</artifactId>
        </exclusion>
        <exclusion>
          <groupId>javax.jms</groupId>
          <artifactId>jms</artifactId>
        </exclusion>
        <exclusion>
          <groupId>com.sun.jdmk</groupId>
          <artifactId>jmxtools</artifactId>
        </exclusion>
        <exclusion>
          <groupId>com.sun.jmx</groupId>
          <artifactId>jmxri</artifactId>
        </exclusion>
      </exclusions>
      <scope>runtime</scope>
    </dependency>
    <!-- @Inject -->
    <dependency>
      <groupId>javax.inject</groupId>
      <artifactId>javax.inject</artifactId>
      <version>1</version>
    </dependency>
    <!-- Servlet -->
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
      <version>2.5</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>javax.servlet.jsp</groupId>
      <artifactId>jsp-api</artifactId>
      <version>2.1</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>jstl</artifactId>
      <version>1.2</version>
    </dependency>
    <!-- Test -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.7</version>
      <scope>test</scope>
    </dependency>
  </dependencies>

  <repositories>
    <!-- For testing against latest Spring snapshots -->
    <repository>
      <id>org.springframework.maven.snapshot</id>
      <name>Spring Maven Snapshot Repository</name>
      <url>http://maven.springframework.org/snapshot</url>
      <releases><enabled>false</enabled></releases>
      <snapshots><enabled>true</enabled></snapshots>
    </repository>
    <!-- For developing against latest Spring milestones -->
    <repository>
      <id>org.springframework.maven.milestone</id>
      <name>Spring Maven Milestone Repository</name>
      <url>http://maven.springframework.org/milestone</url>
      <snapshots><enabled>false</enabled></snapshots>
    </repository>
    <!-- GAE repositories -->
    <repository>
      <id>maven-gae-plugin-repo</id>
      <url>http://maven-gae-plugin.googlecode.com/svn/repository</url>
      <name>maven-gae-plugin repository</name>
    </repository>
  </repositories>

  <pluginRepositories>
    <pluginRepository>
      <id>maven-gae-plugin-repo</id>
      <name>Maven Google App Engine Repository</name>
      <url>http://maven-gae-plugin.googlecode.com/svn/repository/</url>
    </pluginRepository>
  </pluginRepositories>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>${java-version}</source>
          <target>${java-version}</target>
        </configuration>
      </plugin>
      <!-- Adding appengine-web into war -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <configuration>
          <webResources>
            <resource>
              <directory>src/main/webapp</directory>
              <filtering>true</filtering>
              <includes>
                <include>**/appengine-web.xml</include>
              </includes>
            </resource>
          </webResources>
          <warName>abc</warName>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-dependency-plugin</artifactId>
        <executions>
          <execution>
            <id>install</id>
            <phase>install</phase>
            <goals>
              <goal>sources</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>aspectj-maven-plugin</artifactId>
        <!-- Have to use version 1.2 since version 1.3 does not appear to work with ITDs -->
        <version>1.2</version>
        <dependencies>
          <!-- You must use Maven 2.0.9 or above or these are ignored (see MNG-2972) -->
          <dependency>
            <groupId>org.aspectj</groupId>
            <artifactId>aspectjrt</artifactId>
            <version>${org.aspectj-version}</version>
          </dependency>
          <dependency>
            <groupId>org.aspectj</groupId>
            <artifactId>aspectjtools</artifactId>
            <version>${org.aspectj-version}</version>
          </dependency>
        </dependencies>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>test-compile</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <outxml>true</outxml>
          <source>${java-version}</source>
          <target>${java-version}</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
          <junitArtifactName>junit:junit</junitArtifactName>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>tomcat-maven-plugin</artifactId>
        <version>1.0-beta-1</version>
      </plugin>
      <!-- The actual maven-gae-plugin. Type "mvn gae:run" to run project, "mvn gae:deploy" to upload to GAE. -->
      <plugin>
        <groupId>net.kindleit</groupId>
        <artifactId>maven-gae-plugin</artifactId>
        <version>${gaePluginVersion}</version>
        <configuration>
          <serverId>appengine.google.com</serverId>
        </configuration>
        <dependencies>
          <dependency>
            <groupId>net.kindleit</groupId>
            <artifactId>gae-runtime</artifactId>
            <version>${gae.version}</version>
            <type>pom</type>
          </dependency>
        </dependencies>
      </plugin>
    </plugins>
  </build>
</project>

Download Code.
Music: http://www.youtube.com/watch?v=Nba3Tr_GLZU Reference: Spring MVC and REST at Google App Engine from our JCG partner Alex Soto at the One Jar To Rule Them All blog. Related Articles :Develop Restful web services using Spring MVC Spring MVC Development – Quick Tutorial jqGrid, REST, AJAX and Spring MVC Integration Building a RESTful Web Service with Spring 3.1 and Java based Configuration, part 2 Spring MVC3 Hibernate CRUD Sample Application Multitenancy in Google AppEngine (GAE)...

Significant Software Development Developments of 2011

As I did in 2007, 2008, 2009, and 2010, I summarize some of the software development events of 2011 that I find most significant. All of the normal caveats still apply: this list is definitely shaped by personal experience, interests, and biases. 10. Functional Language Popularity I cannot recall a single day in 2011 when I browsed software development headlines without running across one touting the virtues of functional programming in general or a functional programming language in particular. The ability to enjoy the advantages of functional programming on the JVM has been one of the touted advantages of Scala (see the honorable mention section of this post for more on Scala‘s big year). Programming languages such as Haskell (or Jaskell on the JVM) and LISP (invented by John McCarthy, who is mentioned in item #9) are obviously known for their functional nature, but we have seen articles in 2011 about using functional aspects of other languages. These include Functional Programming in JavaScript (25 December 2011), the IBM developerWorks series Functional thinking: Functional features in Groovy (started 22 November 2011), and Functional Programming in Java (15 December 2011). The JVM space has seen a lot of recent activity related to functional programming. Besides languages such as Jaskell, Scala, and Clojure, there are also functional programming-oriented frameworks for Java such as Guava, fun4j, lambdaj, op4j, and Commons Functor. Regarding writing code in functional programming languages, Bruce Eckel writes, “For me, one of the best things about functional programming is the mental discipline that it produces. I find it helps me learn to break problems down into small, testable pieces, and clarifies my analysis. For that reason alone, it is a worthwhile practice.” Functional programming languages have been around for a long time and 2011 was not even the beginning of renewed interest in them. 
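Before Java 8's lambdas, libraries like those named above emulated function values with one-method interfaces and anonymous classes. A minimal sketch of that style using only the JDK (the Function interface and map helper here are illustrative, not taken from any of the named libraries):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FunctionalSketch {

    // A minimal function interface; hypothetical, standing in for the
    // Function types that Guava, lambdaj, and fun4j each provide.
    interface Function<A, B> {
        B apply(A input);
    }

    // map builds a new list instead of mutating the input (immutability).
    static <A, B> List<B> map(List<A> input, Function<A, B> f) {
        List<B> result = new ArrayList<B>();
        for (A a : input) {
            result.add(f.apply(a));
        }
        return result;
    }

    public static void main(String[] args) {
        // An anonymous class plays the role a lambda would play in Java 8.
        List<Integer> lengths = map(Arrays.asList("Scala", "Clojure", "Java"),
                new Function<String, Integer>() {
                    public Integer apply(String s) {
                        return s.length();
                    }
                });
        System.out.println(lengths); // [5, 7, 4]
    }
}
```

The verbosity of the anonymous class is exactly what the functional frameworks (and later Java 8) set out to reduce; the underlying discipline Eckel describes, small side-effect-free pieces, is the same either way.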
However, 2011 seemed to be the year that FP really has taken off in mainstream software development blogosphere and press. 9. Legends Lost This year has seen the deaths of multiple prominent technologists. Steve Jobs was probably the most well-known of these, but we also lost Dennis Ritchie and John McCarthy in the same year. Physical deaths were not the only losses our industry suffered in 2011. In October 2011, Mark Pilgrim removed his considerable online contributions. This was reminiscent of why the lucky stiff‘s similar removal of his online presence (with considerable Ruby contributions) in 2009. 8. C++: Another ‘Dead’ Language Makes a Comeback 2011 was a big year for C++. C++11 (AKA C++0x) was approved and published, the first new standardized version since C++03. C++ was back in the news again and seems to be experiencing an invigoration similar to that Java is experiencing. See C++ and Beyond 2011: Herb Sutter – Why C++? for an example of this. As mentioned in relation to concurrency, Microsoft is making overtures toward renewed interest in C++ with its C++ AMP (Accelerated Massive Parallelism). In addition, the July 2011 edition of MSDN Magazine featured an editorial called Why C++ Still Matters in which the author stated that “one of the things [MSDN readers have] consistently told us is that we need to not treat C++ like the crazy uncle in the attic.” Bjarne Stroustrup‘s FAQ has an interesting answer to the question Is C++ in Decline?: No, I don’t think so. C++ use appears to be declining in some areas and to be on an upswing in others. If I had to guess, I’d suspect a net decrease sometime during 2002-2004 and a net increase in 2005-2007 and again in 2010-2011, but I doubt anyone really knows. Most of the popular measures basically measures noise and ought to report their findings in decibel rather than “popularity.” Many of the major uses of C++ are in infrastructure (telecommunications, banking, embedded systems, etc.) 
where programmers don’t go to conferences or describe their code in public. Many of the most interesting and important C++ applications are not noticed, they are not for sale to the public as programming products, and their implementation language is never mentioned. Examples are Google and “800” phone numbers. Had I thought of a “C++ inside” logo in 1985, the programming world might have been different today. I think much of what Stroustrup says here about C++ and lack of coverage online has become true for general Java as well. It’s not as exciting or trendy to write on traditional Java as it is on newer languages, but one should be careful about assuming that percentage of writers on a topic equates to percentage of users. C++ is not dead yet. 7. Java Community: OpenJDK and Java Community Process 2011 continued to be a big year for OpenJDK. In late 2010, IBM joined OpenJDK and Apple joined OpenJDK shortly thereafter. In 2011, Twitter also joined OpenJDK and Apache Harmony retired to the attic. Other big news in the Java community involved the Java Community Process (JCP). JSR 348 (“Towards a new version of the Java Community Process”) is described in the post JCP.next, JSR 348 — Towards a New Version of the Java Community Process. This post concludes, “The success of the Java community depends upon an open and transparent JCP, so JCP.next is worthy of our praise and attention.” The post lists some early actions of JSR 348, including “greater transparency by requiring that all development is done on open mailing lists and issue trackers” and that “recruiting process for Expert Group members will be publicly viewable.” The aim is for “a more public, open, accessible and transparent JCP.” As additional evidence of the big year that 2011 was for the Java community, I cite the first-ever Java Community Keynote at JavaOne during JavaOne 2011. To me, the Java community seems more energized and enthusiastic than it has for several years. 6. 
JavaScript 2011 was a huge year for JavaScript. First, my citing of Dart, CoffeeScript, and Node.js as “honorable mention” developments (later in this post) and my citing of the year’s biggest winner as HTML5 are evidence in and of themselves of the influence of JavaScript in 2011. Oracle announced at JavaOne 2011 their intention to provide a new server-side JavaScript implementation (Project Nashorn) to illustrate and test non-Java language JVM support and for a high-quality server-side JavaScript implementation that runs on the JVM. jQuery‘s success (and its own 2011 growth) is also another example illustrating the rising prominence of JavaScript. 5. The Return to Java and the Return of Java A significant trend seen in 2011 was the return to Java by several prominent projects. Twitter, for example, joined the Java Community Process, after earlier moving their search architecture from Ruby on Rails to Java/Lucene. Another recent example has been Yammer moving part of their offering from Scala to Java. Other informative posts that provide evidence of resurgent interest in Java include Edd Dumbill‘s O’Reilly Radar posts in advance of OSCON Java 2011. Oracle Technology Network‘s Our Most Popular Tech Articles of 2011 is dominated by Java-related articles. Alex Handy writes about the recent positive direction of Java in his post Look what 2011 washed in: The return of Java. He writes, “After a long hiatus and seemingly endless dithering by Sun, Oracle has officially given Java the kick in the pants it needed. … Java is no longer standing still. ” Markus Eisele‘s post Moving Java Forward? A definition. A year in review. summarizes events in the world of Java in 2011. Eisele references the Oracle slogan “Moving Java Forward” and describes this as “probably the sentence of the year 2011.” Regarding 2011, Eisele states, “For me personally this was a powerful Java year.” One could argue that, outside of bloggers and consultants, most Java developers never really left Java. 
Evidence for this argument includes Java’s consistently high (almost always #1) ranking in the TIOBE Programming Community Index and in programming language ratings such as Top 10 Programming Languages of 2011. Java remains remarkably popular and vibrant for a “dead language.” Java.net editor Kevin Farnham sums up the year 2011 from a Java perspective: “It’s generally agreed that 2011 was a great year for Java and languages that run on the JVM.” 4. Mobile Devices Remain All the Rage Mobile device development was high on my list of software development developments last year and it seems to continue to dominate software development news and trends watching. This doesn’t mean that mobile development is the most commonly performed development, but simply means it gets the most attention. Mobile development should become even more significant with the proliferation of mobile devices. The Economist article Not Just Talk states, “Mobile phones are the world’s most widely distributed computers. Even in poor countries about two-thirds of people have access to one.” We don’t need any more evidence of mobile development popularity, but it is worth noting that Objective-C‘s high ranking in the TIOBE Programming Community Index is almost exclusively due to Apple mobile devices such as the iPhone, iPad, and iPod Touch. Mobile devices from a single company have almost single-handedly moved a programming language from obscurity into the top ten. 3. Cloud Computing I ranked cloud computing high on last year’s list as well. It continues to be the most trendy of software development buzzwords, but also boasts real-life advantages for consumers and developers. I believe that there is no silver bullet and cloud computing does not change that belief. However, even without silver bullets, we’ve made strides in the software development industry with less significant steps forward and cloud computing does seem to provide advantages in select situations. 
Nothing (including the cloud) can be best for everyone all the time and in all situations, but cloud computing definitely seems to benefit certain people and situations. A good read regarding state of the cloud in 2011 is 2011: When cloud computing shook the data center. This post highlights progress in 2011 in terms of both private and public clouds. One sentence in this article was of particular interest: “The road to simplicity seems paved with even more complexity.” An argument could be made (and I think Kevin Farnham has made it) that cloud computing received significant press in 2011, but really didn’t have any one easily identifiable single major development in 2011. Rather, many smaller developments and intense online conversation on the topic have added up to make it another big year for the cloud. 2. Making Concurrency Easier A major trend of 2011 appears to be that of making announcements about how various languages intend to make writing highly concurrent applications easier. This is one of the often advertised benefits of Scala and some of the other newer JVM languages. Even the JVM’s oldest language, Java itself, is expected to continue to enhance its concurrency support (after major improvements in J2SE 5 and Java SE 7) with lambda expressions in Java 8 (see Kevin Farnham‘s Java.net editorial The State of Java: Meeting the Multicore Challenge for a nice overview). Microsoft has shown renewed interest in C++ for developing concurrent applications on its platforms. 
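As a concrete taste of the Java SE 7 concurrency improvements mentioned above, the fork/join framework lets a task split itself recursively and join the partial results. A minimal sketch, with an illustrative threshold rather than tuning advice:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sum an array by recursively splitting the range until chunks are small
// enough to sum sequentially, then joining the halves (Java 7 fork/join).
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // illustrative cutoff
    private final long[] data;
    private final int from, to; // half-open range [from, to)

    SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        left.fork();                          // run left half asynchronously
        return right.compute() + left.join(); // compute right here, then join
    }

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        long sum = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(sum); // 5000050000
    }
}
```

Note that each task works on an immutable slice and returns a value rather than writing to shared state, which is the orientation Goetz advocates.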
Specifically, Herb Sutter and Daniel Moth have talked about how C++ Accelerated Massive Parallelism (AMP) “introduces a key new language feature to C++ and a minimal STL-like library that enables you to very easily work with large multidimensional arrays to express your data parallel algorithms in a manner that exposes massive parallelism on an accelerator, such as the GPU.” It is not a coincidence that the rising popularity of Scala and of functional programming is happening as there is a greater interest in making concurrency easier. Martin Odersky has presented (at OSCON Java 2011) on why it’s so difficult to get concurrency correct with shared state/memory. Bruce Eckel reminds us of the Brian Goetz presentation that provided significant evidence supporting the same proposition. Goetz talked about this and about greater support for this in Java 7 in a January 2011 interview: Not only do improved libraries provide pre-baked versions of common idioms, but developers can often sidestep the complexity of concurrency by limiting shared mutable state in their programs. If you don’t have shared state, you don’t have to coordinate access to that state, which is where the trouble comes from. Functional languages are built on this principle, but we don’t have to switch languages to get the benefit — we just have to adopt the orientation that immutable state is better than mutable state, and seek to increase immutability and decrease sharing. In Java SE 7, we’ll be adding a framework for fork-join decomposition, which makes it easier to write algorithms that parallelize automatically across a wide range of hardware configurations. 1. HTML5’s Ascendancy Continues In last year’s software development highlights post, I stated, “HTML5 finally seems to be gaining the traction it needs among the major browsers and among web developers and authors to be worth paying attention to.” Many people have been paying attention to HTML5 in 2011 as its popularity has increased rapidly. 
There are numerous signs of HTML5’s taking over the web development ecosystem as it makes inroads into mobile devices as well. Indeed, one could say that the desire to have a common platform on which to develop applications for the myriad of mobile devices has been the biggest motivator in the successful rise of HTML5. The victims of HTML5’s success are piling up. We learned in the first quarter of 2011 that Google Gears was being eliminated (one of many Google project prunings that occurred in 2011). The announcement of Gears’s demise explicitly acknowledged HTML5’s role: “we’ve shifted our effort towards bringing all of the Gears capabilities into web standards like HTML5.” Later in 2011, Adobe announced that it’d no longer develop a Flash Player for mobile devices and quickly followed that announcement with another: Adobe would attempt to donate Flex (which was already open source, but under Adobe’s stewardship) to the Apache Software Foundation. Many Flex developers, of course, hope that this isn’t just Adobe’s way of using open source as a dumping ground. In explaining the decision regarding Flex in Your Questions About Flex, the Flex blog acknowledges, “In the long-term, we believe HTML5 will be the best technology for enterprise application development.” In announcing the abandonment of the mobile Flash Player in favor of AIR and HTML5, Adobe stated, “HTML5 is now universally supported on major mobile devices, in some cases exclusively. This makes HTML5 the best solution for creating and deploying content in the browser across mobile platforms.” Microsoft provides more examples of the persuasive power of the HTML5 momentum. Early details of Windows 8 and Windows Phone 7 Metro have scared some in the Silverlight community as evidenced by posts such as Microsoft has Abandoned Silverlight and All Other Plugins in Metro IE, Did Microsoft Just kill Flash, Silverlight?, and Silverlight Developers Have the Smoothest Road to Metro. 
HTML5 is still not fully here (Five Things You Can’t Do With HTML5 (yet)), but it already seems to be the victor. Honorable Mention These are topics that did not make my Top Ten for 2011, but were very close. JavaFX Appears Here to Stay It was at JavaOne 2010 that Oracle announced that JavaFX Script would be deprecated and announced other plans regarding JavaFX 2.0. However, it was in 2011 (again at JavaOne) that Oracle reaffirmed JavaFX’s future by announcing plans to open source it and to make it part of standard Java via the Java Community Process. JavaFX 2.0 is now delivered with Oracle’s JDK 7 Update 2. Oracle’s statements and actions lead me to believe that JavaFX likely has a future after all. Now that the JavaFX SDK is included with Oracle’s JDK 7 Update 2 and the JavaFX runtime is included with Oracle’s Java 7 Update 2 JRE, JavaFX will be readily available on more machines. JavaFX includes some support for integration with HTML5 and is likely to add additional support for that integration, which should increase the adoption of JavaFX. 2011 may be the biggest year yet for JavaFX. The only reason it did not make my overall top ten is that I believe the interested portion of the development community is still relatively small, but that could change in the near future. Node.js The Node.js platform has generally enjoyed a great year in 2011. Although it was created in 2009, significant press attention seems to have been garnered in 2011. Besides numerous blog posts and articles on Node.js in 2011, several books have become available this year. These include Node Web Development (August 2011), Hands-On Node.js (May 2011), and The Node Beginner Book (October 2011). Two more books expected in 2012 are Node: Up and Running: Scalable Server-Side Code with JavaScript and Programming Node.js: Build Evented I/O Applications in JavaScript. Not everyone is enamored with Node.js, as shown by the post Node.js is Cancer and a mixed report in Is node.js best for Comet? 
Scala 2011 has been a big year for Scala. In May, Greylock Partners (venture capitalists) invested $3 million (U.S.) in newly created Typesafe, the Scala/Akka company founded by Scala creator Martin Odersky and Akka creator Jonas Bonér that same month. Scala has received considerable press in recent years and this has seemed to escalate in 2011. I often read comments to posts on Java or other JVM languages that have some enthusiastic Scala user pointing out that whatever task is discussed in the post would have been easier in Scala. Bruce Eckel wrote Scala: The Static Language that Feels Dynamic in 2011. There has also been some negative press for Scala recently, as documented in my post Recent Community Conversation on Scala. It is my belief that the very existence of such discussion and the fact that people actually cared about the discussion together imply that Scala is at a point where it’s in the process of moving into the mainstream. Barb Darrow‘s article on GigaOM (the same site that published Why Modern Applications Demand Modern Tools by two authors from Greylock Partners) called Scala sets sights on top-tier status among the Java faithful suggests that the folks at Typesafe feel similarly (and they are much closer to the situation than I am). Darrow’s article starts, “To hear Typesafe tell it, the Scala programming language is about to join the ranks of top-tier development tools such as Java, C++, Ruby, and PHP. A new Scala plugin for the Eclipse integrated development environment (IDE) should help pave the way.” Speaking of Scala and Eclipse, it was announced this month that Scala IDE V2.0 has been released. One of the most common complaints about Scala has been lack of tool support, so this should definitely be welcome. 
Other new developments for Scala in 2011 include the release of the Second Edition of Programming in Scala (release date listed differently in different locations and ranging from December 2010 to January 2011) and Cay Horstmann‘s announcement of his forthcoming book Scala for the Impatient (of which Typesafe has offered a free preview). Scala, as was the case for the past year or two, continues to “just miss” making my top ten software development developments for the year. I believe that 2012 will be a significant year for Scala because my perception of Scala is that it is at an important juncture in terms of adoption. It seems poised at this point either to really take off or to flatten out in terms of adoption, and which it will be may start to become clearer in 2012. Cloud and Service Outages There were some large and highly visible outages this year. The normally highly reliable Blogger (my blog’s host) was out for about 24 hours and Amazon’s EC2 also went down in 2011. See The 10 Biggest Cloud Outages Of 2011 (So Far) for a list of cloud outages in the first half of 2011 alone. Any architecture or infrastructure relying on a single point of failure is likely to end up suffering for that decision. These stories simply remind us that this applies to the cloud as well. Google Chrome and Firefox Adoption of Chrome as web browser of choice continues to rise. Chrome 15 has been reported (StatCounter) to be the most popular single web browser (by specific version). It’s surprisingly difficult to measure with certainty which browser is most popular because it depends on which site is doing the counting, but Chrome certainly has been catching up with Firefox in terms of adoption. I looked at the Blogger Stats for my own blog for the week between December 16 and December 23 and the break-out (not version specific) was 39% Firefox, 23% Chrome, and 20% Internet Explorer. 
A snapshot containing the statistics for these and other browsers is shown next. Chrome’s rapid rise in popularity seems to put Firefox at a disadvantage. However, reports of significant payments from Google to Mozilla for Google to be Firefox’s default search engine show that Firefox still has life in it. The post Chrome Engineer: Firefox Is A Partner, Not A Competitor references a Google+ post by Chrome team member Peter Kasting in which Kasting explains his perspective on why Google made the nearly $1 billion deal with Firefox: People never seem to understand why Google builds Chrome no matter how many times I try to pound it into their heads. It’s very simple: the primary goal of Chrome is to make the web advance as much and as quickly as possible. That’s it. It’s completely irrelevant to this goal whether Chrome actually gains tons of users or whether instead the web advances because the other browser vendors step up their game and produce far better browsers. Either way the web gets better. Job done. The end. Woody Leonhard writes that “Google needs Firefox now more than ever.” Leonhard also observes, “The money’s in search, not the browser.” Firefox enjoyed its own progress in 2011. The blog post Firefox: 2011 begins, “Firefox helped make the Web more awesome in 2011.” It highlights the release of Firefox 4, Firefox for Android, and how Firefox “introduced Do Not Track to the industry.” The post also includes the “Firefox: 2011” infographic (shown below with link to original). Dart In a move reminiscent of its release of the Go programming language, Google’s announcement of the Dart programming language has created significant discussion in 2011. 
Lars Bak’s announcement of Dart calls the language “a class-based optionally typed programming language for building web applications.” Bak goes on to state that Dart is intended to be a “structured yet flexible language for web programming.” In a separate interview, Bak states that “I think it’s an exaggeration” to call Dart “a JavaScript killer.” That stated, JavaScript certainly has its warts and Google has the clout to make web technologies successful (but some do fail). Dart can be either compiled into JavaScript (reminiscent of Google Web Toolkit‘s Java-to-JavaScript compilation) or can be run in a virtual machine in the web browser. Dart is an open source language and is available at http://www.dartlang.org/. CoffeeScript Speaking of compiling into JavaScript, CoffeeScript is described on its main page as a “little language that compiles into JavaScript.” That same main page adds, “CoffeeScript is an attempt to expose the good parts of JavaScript in a simple way.” A book on this language, CoffeeScript: Accelerated JavaScript Development, was published in 2011. NoSQL NoSQL has generated tremendous development buzz in recent years, but Oracle’s 2011 entrance into the world of NoSQL makes 2011 especially noteworthy for the movement. The announcement of CouchBase and SQLite collaborating on UnQL (Unstructured Data Query Language) could also lead to increased adoption of NoSQL databases. Big Data A post that articulates why 2011 was a Big Year for Big Data better than I could is The Biggest Big Data Stories of 2011. McKinsey Global Institute has called big data “the next frontier for innovation, competition, and productivity.” NetBeans The NetBeans Team’s End of Year Message 2011 reminds us of the major developments of 2011 for the NetBeans IDE. 
The message states that 2011 “has been a milestone year for NetBeans where we’ve hit the 1,000,000 active user mark.” The message adds, “We’ve also released NetBeans IDE 7.0 and NetBeans IDE 7.0.1, and underpinned the success of many other projects.” The message also states that NetBeans 7.1 (which is currently in Release Candidate 2) is expected to be released in early 2012. One of the major inclusions with NetBeans 7.1 is JavaFX 2.0 support. Tech Jobs Market Making a Comeback The availability of technical jobs, including software development positions, appears to be rising in 2011. Dan Tynan writes, “Now might be just the moment” to again say to oneself, “I’m glad I chose a career in tech.” Mono Mono, the “cross platform … open source implementation of Microsoft’s .Net Framework based on the ECMA standards for C# and the Common Language Runtime” had a big year in 2011. The post Mono in 2011 covers the year 2011 for Mono fairly exhaustively and states, “2011 was a year that showed an explosion of applications built with Mono.” Among several developments related to Mono, one that stood out to me was the creation of Xamarin by members of the laid-off former Attachmate/Novell Mono team. This is certainly welcome news for an open source project that some feared was doomed at that point. Today, Mono promises the sharing of code “between iOS, Android and Windows Phone 7.” Devops “Devops” continued to be a much-hyped concept in 2011. I still struggle to understand exactly how this movement has helped me or will help me, but I do understand the issues the movement/effort is intended to address. 
Wikipedia defines devops as “an emerging set of principles, methods and practices for communication, collaboration and integration between software development (application/software engineering) and IT operations (systems administration/infrastructure) professionals.” Neil McAllister has called devops “IT’s latest paper tiger.” Interconnectedness Most of the items cited in my top ten of 2011 list and in the honorable mention section are highly related to at least one other item in the lists and often to multiple items. For example, functional programming, Scala, making concurrency easier, and the loss of John McCarthy are all related. Similarly, JavaScript is related to HTML5, Dart, CoffeeScript, mobile devices, cloud development, and web browsers. Conclusion 2011 was yet another year that saw significant developments and advances in the software development industry. The lists compiled in this post indicate how broadly spread these advances were, affecting different programming languages, different deployment environments, and different stakeholders. When one considers that this is a list written by a single individual, it quickly becomes apparent that the actual breadth of advancement is much greater than these biased lists suggest. Reference: Significant Software Development Developments of 2011 from our JCG partner Dustin Marx at the Inspired by Actual Events blog. Related Articles :2011: The State of Software Security and Quality Diminishing Returns in software development and maintenance Services, practices & tools that should exist in any software development house, part 1 Java Tools: Source Code Optimization and Analysis The top 9+7 things every programmer or architect should know...

Product Related Classic Mistakes

In my last blog I looked at Process Related Classic Mistakes from Rapid Development: Taming Wild Software Schedules by Steve McConnell, which, although it has now been around for at least 10 years and times have changed, is still as relevant today as when it was written. As Steve’s book states, classic mistakes are classic mistakes because they’re mistakes that are made so often and by so many people. They have predictably bad results and, when you know them, they stick out like a sore thumb. The idea behind listing them here is that, once you know them, you can spot them and hopefully do something to remedy their effect. Classic mistakes can be divided into four types:

- People Related Mistakes
- Process Related Mistakes
- Product Related Mistakes
- Technology Related Mistakes

Today’s blog takes a quick look at the third of Steve’s categories of mistakes: Product Related Mistakes, which include:

- Requirements Gold-Plating
- Feature Creep
- Developer Gold-Plating
- Push-me, Pull-me Negotiation
- Research Orientated Development

Requirements Gold-Plating
Defining a luxury Rolls Royce of a product when an economical hatchback is all that the user requires. This is adding unnecessary features or function points. Users tend to be less interested in complex features than either Marketing or Development, and complex features add a disproportionate amount of time to the development schedule.

Feature Creep
The average project experiences about a 25% change in requirements over its lifetime. This will add at least 25% to the schedule.

Developer Gold-Plating
Developers like to play with new technology and are anxious to try new things out, even if it’s only as a means of bolstering their CVs. They also like adding features because they’re ‘neat’ or ‘cool’ and will take no time at all. There’s no such thing as a free lunch: all new features need to be tested, documented, supported, reviewed and so on.
Add these features in the next release after evaluating whether or not they’re really required.

Push-me, Pull-me Negotiation
This is bizarre! Some managers approve a schedule slip on a project that is progressing more slowly than expected and then add in additional new tasks after the schedule change, making the schedule wrong again.

Research Orientated Development
If your project pushes the limits of the state of the art, requiring new algorithms, new hardware designs or higher speeds, then you’re doing research, not development. Research schedules are very problematic and usually wrong. Beware: don’t expect to advance the state of the art too rapidly. Reference: Product Related Classic Mistakes from our JCG partner Roger Hughes at the Captain Debug’s Blog. Related Articles :Process Related Classic Mistakes People Related Classic Mistakes The top 9+7 things every programmer or architect should know Things Every Programmer Should Know Java Developer Most Useful Books...

Devops: How NOT to collect configuration management data

Hi all, Willie here. This time we’re going to step away from the keyboard and get architectural. But no ivory towers here. In my next two blog posts, I’m going to give you something that will get you out of lots of pointless meetings. Got your attention yet? Good! If you’re in devops, one of the things that you have to figure out is how to collect up all the information that will allow you to manage your configuration and also keep your apps up and running. This isn’t too hard when you have two apps sitting on a single server. It’s much harder when you have 400 apps deployed to thousands of servers across multiple environments and data centers. So how do you do that? Let’s start off by looking at some things that just don’t work. In my next post I’ll share with you an approach that does work.

SOME QUICK BACKGROUND
Probably owing to some psychological glitch, I’ve always been an information curator. Especially when I was in management, it was always important to me to understand exactly where my apps were deployed, which servers they were on, which services and databases they depended on, which servers those services and databases lived on, who the SMEs were for a given app, and so on. If you work in a small environment, you’re probably thinking, “what’s the big deal?” Well, I don’t work in a small environment, but the first time I undertook this task, I probably would have said something like, “no sweat.” (That’s the developer in me.) Anyway, for whatever reason, I decided that it was high time that somebody around the department could answer seemingly basic questions about our systems.

ATTEMPT #1: WIKI WHEELER
The first time I made the attempt (several years ago), I was a director over a dozen or so app teams. So I created a wiki template with all the information that I wanted, and I chased after my teams to fill it out. My zeal was such that, unbeknownst to me, I acquired the nickname “Wiki Wheeler”.
(One of my managers shared this heartwarming bit of information with me a couple of years later.) I guess other managers liked the approach, though, because they chased after their teams too. This approach started out decently well, since the teams involved could manage their own information, and since I was, well, Wiki Wheeler. But it didn’t last. Through different system redesigns and department reorgs, wiki spaces came and went, some spaces atrophied from neglect, and there was redundant but contradictory information everywhere. The QA team had its own list, the release guys had their list, the app teams had their list. The UX guys might have even gotten in on the act. Anyway, after a year or so it was a big mess, and to this day our wiki remains a big mess.

ATTEMPT #2: MY HUGE VISIO
The second time, my approach was to collect the information from all the app development teams. We had hundreds of developers, so to make things efficient, I sent out a department-wide e-mail telling everybody that I was putting together a huge Visio document with all of our systems, and I’d appreciate it if they could reply back with the requested information. And while I had to send out more nag e-mails than I would have liked, the end result was in fact a huge Visio diagram that had hundreds of boxes and lines going everywhere. I was very proud of this thing, and I printed out five or six copies on the department plotter and hung them on the walls. How long do you think it was before it was out of date? I have no idea. I seriously doubt that it was ever correct in the first place. Nobody (myself included) ever used the diagram to do actual work, and its chief utility was in making our workplace look somewhat more hardcore, since there were huge color engineering plots on the walls. There was one additional effect of this diagram, though I hesitate to call it utility: I acquired the wholly undeserved reputation for knowing what all these apps were and how they related to one another.
This went on for years through no fault of my own. I mention it because it’s relevant for the next attempt.

ATTEMPT #3: DISASTER RECOVERY
This one wasn’t my attempt, but it’s still worth describing, since I have to imagine that we aren’t the only ones who ever tried it. As the company grew rapidly, disaster recovery became an increasingly big deal for us several years back, and there was a mandate from up high to put a plan in place. This involved collecting up all the information that would allow us to keep the business running if our primary data center ever got cratered. The person in charge of the effort spent about two years meeting with app teams or else “experts” (me, along with others who, like me, had only the highest-level understanding of how things were connected up) and documenting it on a Sharepoint site where nobody could see it. This didn’t work at all. Most people were too busy to meet with her, so the quality of the information was poor. Apps were misnamed, app suites were mistaken for apps, and, to make a long story short, the result was neither correct nor maintainable.

ATTEMPT #4: OUR EXTERNAL CONSULTANTS’ EVEN BIGGER VISIO
Following a major reorg, we brought in some external consultants and they came up with their own Visio. By this time I had already figured out that interviewing teams and putting together huge Visios is a losing approach, so I was surprised that a bunch of highly-paid consultants would use it. Well, they did, and their diagram was, well, “impressive”. It was also (to put it gently) neither as helpful nor as maintainable as we would have liked.

ATTEMPT #5: HP UCMDB AUTODISCOVERY
We have the HP uCMDB tool, and so our IT Services group ran some automated discovery agents to populate it with data. It loaded the uCMDB up with tons of fine-grained detail that nobody cared about, and we couldn’t see the stuff that we actually did care about (like apps, web services, databases, etc.).
ATTEMPT #6: SERVICE CATALOG
Our IT Services group was pretty big on ITIL, so they went about putting together a service catalog. The person doing it collected service and app information into an Excel spreadsheet and posted it to a Sharepoint site. But this didn’t really get into the details of configuration management. It was mostly about figuring out which business services we offered and which systems support which services. They ended up printing out a glossy, full-color service catalog for the business, but nobody was really asking for it, so it was more of a curiosity than anything else. There were other attempts too (Paul led a couple that I haven’t even mentioned), but by now you get the picture.

SO WHY DID THESE ATTEMPTS FAIL?
To understand why these attempts failed, it helps to look at what they had in common:

- They generally involved trying to collect up information from SMEs and then putting it in some kind of document. This could be wiki pages, Visio diagrams, Word docs on Sharepoint or an Excel spreadsheet.
- Once the information went there, nobody used it. So the information, if it was ever correct and/or complete in the first place, was before long outdated, and there wasn’t any mechanism in place to bring it up to date.

In general, people saw the problem as a data collection problem rather than a data management problem. There needs to be a systematic, ongoing mechanism to discover and fix errors. None of the approaches took the management problem seriously at all; they all simply assumed that the organization would care about the data enough to keep it up to date. In the next installment, I’ll tell you about the approach we eventually figured out. And as promised, that’s also where I’ll explain how to get rid of some of your meetings. Reference: Devops: How NOT to collect configuration management data from our JCG partner Willie Wheeler at the Skydingo blog.
Related Articles :Devops has made Release and Deployment Cool GlassFish Response GZIP Compression in Production How to solve production problems Iterationless Development – the latest New New Thing...

Best Of The Week – 2011 – W52

Hello guys, Time for the “Best Of The Week” links for the week that just passed. Here are some links that drew Java Code Geeks’ attention:

* 7 habits I learned to become a more efficient programmer: This post summarizes some coding guidelines that might help developers write more maintainable source code. The tips include choosing good names, keeping the indentation consistent, adding meaningful comments, following design patterns etc. Also check out Things Every Programmer Should Know.

* Java EE 6 Testing with Arquillian Persistence Extension: This tutorial shows how to use the Arquillian Persistence Extension. Arquillian provides a simple test harness that abstracts away all container lifecycle and deployment, while the persistence extension allows you to test JPA-related code without filling up the database with test data (the data are populated via .yml files).

* How to use Domain-Driven Design to better understand the business: This article is an introduction to Domain-Driven Design, a software development approach for modeling complex business applications. It strongly focuses on modeling the core business concepts (the model), which is in general the most complex part of a business system. Also check out Using the State pattern in a Domain Driven Design and Domain Driven Design with Spring and AspectJ.

* Architecting Massively-Scalable Near-Real-Time Risk Analysis Solutions: This article discusses the architectural approach behind risk analysis solutions. Risk analysis is a compute-intensive and data-intensive process, a classic Big Data analytics problem. The effective architecture is a Big Data multi-tiered architecture, in which intraday data is cached in-memory, while historical data is kept in a database.

* Processing Huge JSON Files with Jackson: This article explains how to process large JSON files with Jackson, leveraging the JsonView concept and the Streaming API. Also check out Java JSON processing with Jackson and Android JSON Parsing with Gson Tutorial.
* How to manage the performance of 1000+ JVMs: This article gives some hints on how to monitor production systems where there is a huge number of applications and JVMs involved. Establishing health metrics, measuring application performance and monitoring for errors may all help toward that cause.

* Using JSON to Build Efficient Applications: This article is a nice introduction to JSON, a way to store information in an organized, easy-to-access way. It also provides a number of reasons to use JSON (e.g. it is lightweight, integrates easily with every programming language, is natively supported in JavaScript etc.). Also take a look at Java JSON processing with Jackson, Add JSON capabilities into your GWT application and Android JSON Parsing with Gson Tutorial.

* In Memory Data Grid Technologies: This article is an introduction to In Memory Data Grids (IMDG), i.e. software products where the data model is distributed across many servers in a single location or across multiple locations in a ‘shared nothing’ architecture. Also check out GWT Spring and Hibernate enter the world of Data Grids.

* 10 reasons why this is a great time to be a developer: This article presents some of the top reasons why now is a great time to be a developer. Reasons include the move to SaaS, low startup costs, mobile technology, the rise of “personal computing” devices, the increasingly prominent role of developers and others.

* How Twitter Stores 250 Million Tweets a Day Using MySQL: An article that sheds some light on Twitter’s data persistence layer. It discusses the transition from Twitter’s old way of storing tweets using temporal sharding to a more distributed approach using a new tweet store called T-bird, which is built on top of Gizzard, which in turn is built using MySQL.
* Third Party Content Management applied: Four steps to gain control of your Page Load Performance!: This article presents some best practices for integrating Third Party Content and for convincing your business that they will benefit from establishing Third Party Management. That’s all for this week. Stay tuned for more, here at Java Code Geeks. Cheers, Ilias Related Articles:Best Of The Week – 2011 – W51 Best Of The Week – 2011 – W50 Best Of The Week – 2011 – W49 Best Of The Week – 2011 – W48 Best Of The Week – 2011 – W47 Best Of The Week – 2011 – W46 Best Of The Week – 2011 – W45 Best Of The Week – 2011 – W44 Best Of The Week – 2011 – W43 Best Of The Week – 2011 – W42...

Arcane magic with the SQL:2003 MERGE statement

Every now and then, we feel awkward about having to distinguish INSERT from UPDATE for any of the following reasons:

- We have to issue at least two statements
- We have to think about performance
- We have to think about race conditions
- We have to choose between [UPDATE; IF UPDATE_COUNT = 0 THEN INSERT] and [INSERT; IF EXCEPTION THEN UPDATE]
- We have to do those statements once per updated / inserted record

All in all, this is a big source of error and frustration. Yet at the same time, it could have been so easy with the SQL MERGE statement!

A typical situation for MERGE
Among many other use-cases, the MERGE statement may come in handy when handling many-to-many relationships. Let’s say we have this schema:

CREATE TABLE documents (
  id NUMBER(7) NOT NULL,
  CONSTRAINT docu_id PRIMARY KEY (id)
);

CREATE TABLE persons (
  id NUMBER(7) NOT NULL,
  CONSTRAINT pers_id PRIMARY KEY (id)
);

CREATE TABLE document_person (
  docu_id NUMBER(7) NOT NULL,
  pers_id NUMBER(7) NOT NULL,
  flag    NUMBER(1) NULL,
  CONSTRAINT docu_pers_pk PRIMARY KEY (docu_id, pers_id),
  CONSTRAINT docu_pers_fk_docu
    FOREIGN KEY (docu_id) REFERENCES documents(id),
  CONSTRAINT docu_pers_fk_pers
    FOREIGN KEY (pers_id) REFERENCES persons(id)
);

The above tables are used to model which person has read (flag=1) / deleted (flag=2) which document. To make things simple, the “document_person” entity is usually OUTER JOINed to “documents”, such that the presence or absence of a “document_person” record may have the same semantics: “flag IS NULL” means the document is unread. Now when you want to mark a document as read, you have to decide whether you INSERT a new “document_person”, or whether you UPDATE the existing one. Same with deletion. Same with marking all documents as read, or deleting all documents.

Use MERGE instead
You can do it all in one statement! Let’s say you want to INSERT/UPDATE one record, in order to mark one document as read for a person:

-- The target table
MERGE INTO document_person dst
-- The data source. In this case, just a dummy record
USING (
  SELECT :docu_id as docu_id,
         :pers_id as pers_id,
         :flag    as flag
  FROM DUAL
) src
-- The merge condition (if true, then update, else insert)
ON (dst.docu_id = src.docu_id AND dst.pers_id = src.pers_id)
-- The update action
WHEN MATCHED THEN UPDATE SET
  dst.flag = src.flag
-- The insert action
WHEN NOT MATCHED THEN INSERT (dst.docu_id, dst.pers_id, dst.flag)
VALUES (src.docu_id, src.pers_id, src.flag)

This does the same job as MySQL’s more concise INSERT .. ON DUPLICATE KEY UPDATE statement, although in a considerably more verbose way.

Taking it to the extreme
But you can go further! As I said previously, you may also want to mark ALL documents as read for a given person. No problem with MERGE. The following statement does the same as the previous one if you specify :docu_id. If you leave it null, it will just mark all documents with :flag:

MERGE INTO document_person dst
-- The data source is now all "documents" (or just :docu_id) left outer
-- joined with the "document_person" mapping
USING (
  SELECT d.id     as docu_id,
         :pers_id as pers_id,
         :flag    as flag
  FROM documents d
  LEFT OUTER JOIN document_person d_p
    ON d.id = d_p.docu_id AND d_p.pers_id = :pers_id
  -- If :docu_id is set, select only that document
  WHERE (:docu_id IS NOT NULL AND d.id = :docu_id)
  -- Otherwise, select all documents
     OR (:docu_id IS NULL)
) src
-- If the mapping already exists, update. Else, insert
ON (dst.docu_id = src.docu_id AND dst.pers_id = src.pers_id)
-- The rest stays the same
WHEN MATCHED THEN UPDATE SET
  dst.flag = src.flag
WHEN NOT MATCHED THEN INSERT (dst.docu_id, dst.pers_id, dst.flag)
VALUES (src.docu_id, src.pers_id, src.flag)

MERGE support in jOOQ
MERGE is also fully supported in jOOQ. See the manual for more details (scroll to the bottom): http://www.jooq.org/manual/JOOQ/Query/

Happy merging! Reference: Arcane magic with the SQL:2003 MERGE statement from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.
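To see the matched / not-matched branching outside of SQL, here is a small plain-Java sketch that mimics what MERGE does for the document_person mapping, using an in-memory map keyed on (docu_id, pers_id). The class and method names are illustrative only, not from any library:

```java
import java.util.HashMap;
import java.util.Map;

public class MergeSemanticsDemo {

    // Key mimicking the composite primary key (docu_id, pers_id)
    record DocPers(int docuId, int persId) {}

    // WHEN MATCHED THEN UPDATE / WHEN NOT MATCHED THEN INSERT, in map form:
    // Map.put overwrites an existing entry (the "update" branch) or creates
    // a new one (the "insert" branch) -- the same single-operation branching
    // that MERGE gives you in one SQL statement.
    static void markFlag(Map<DocPers, Integer> documentPerson,
                         int docuId, int persId, int flag) {
        documentPerson.put(new DocPers(docuId, persId), flag);
    }

    public static void main(String[] args) {
        Map<DocPers, Integer> documentPerson = new HashMap<>();

        markFlag(documentPerson, 1, 42, 1); // not matched -> "insert": read
        markFlag(documentPerson, 1, 42, 2); // matched -> "update": deleted

        System.out.println(documentPerson.get(new DocPers(1, 42))); // prints 2
        System.out.println(documentPerson.size()); // prints 1
    }
}
```

The point of the sketch is that the caller never has to ask "does the row exist yet?"; the data structure, like MERGE, resolves that internally and atomically with respect to the single operation.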
Related Articles :Database schema navigation in Java Problems with ORMs SQL or NOSQL: That is the question? What is NoSQL ? GROUP BY ROLLUP / CUBE...

Migrating from JavaFX 1.3 to JavaFX 2.0

Some days ago I finished migrating the source code of Modellus from JavaFX 1.3 script to the JavaFX 2.0 Java API, so I thought it would be nice to write about what I’ve learned in the process. I’d like to point out that if you want to keep using JavaFX script with JavaFX 2.0, you can use Visage: http://code.google.com/p/visage/

- The CustomNode class doesn’t exist any more. Extend Group or Region to create “custom nodes”.

- No more blocksMouse. In JavaFX 2.0, mouse events are only received by the topmost node. There is also a new method on Node: setMouseTransparent(boolean). Mouse events on a node with mouseTransparent set to true will be ignored and captured by the topmost node below it.

- Use properties to bind values. JavaFX 2.0 has a set of classes you can use to bind values to each other. For each primitive type there is a class (SimpleBooleanProperty, SimpleDoubleProperty, etc.), and for reference types you use an object property instance; for instance, if you want to bind colors you can use SimpleObjectProperty<Color>.

- Not all variables from the API are “bindable”. In JavaFX 1.3 script you could bind to any variable of the API. For the JavaFX 2.0 Java API, that would mean that all variables from the API would need to be available as properties, but that is not the case: Bounds, LinearGradient and Stop are examples of classes that do not have properties, so you can’t bind directly to their fields. In these situations you’ll need to use other methods like low-level binding. For example, suppose you wanted to bind a variable to the width of the layout bounds of a node. Since the field width of Bounds is not available as a property, you would have to do something like this:

In JavaFX script:

float nameLabelXPosition = bind - nameLabel.layoutBounds.width / 2;

In JavaFX 2.0 Java:

nameLabelXPosition.bind(new DoubleBinding() {
    {
        super.bind(nameLabel.layoutBoundsProperty());
    }

    @Override
    protected double computeValue() {
        return nameLabel.getLayoutBounds().getWidth() / 2;
    }
});

- Where you used JavaFX script initializer blocks, you can now use JavaFX builders. However, in JavaFX script you could use binding in the initializer block, and you can’t do that with builders in Java. Only in JavaFX 3.0 (Lombard) will you be able to do that: http://javafx-jira.kenai.com/browse/RT-13680. So, wherever you used binding in JavaFX script initializer blocks, you can’t use builders in JavaFX 2.0 Java.

- No more language-level support for sequences in JavaFX 2.0 Java. Wherever you used sequences you will now use ObservableLists. To create ObservableLists you can use the FXCollections creator methods; there you’ll find all sorts of methods to create ObservableLists, even empty ones. Sequences present in the API have been converted to ObservableLists. If, for instance, you want to insert a node into a Group, you need to get its children ObservableList and then call the add method on it, like so: .getChildren().add(node)

- No more function types. Since only Java 8 will have support for closures, the Oracle team has relied on the use of SAM types instead: a class with only a single abstract method that you’ll have to override (Single Abstract Method). You can use the same strategy as Oracle and write SAM types wherever you used function objects.

- No more triggers. Replace triggers with change listeners. You can assign a change listener to a property, which is the same as assigning a trigger in JavaFX script.

- No more variable overrides in subclasses. For this one you won’t have a substitute in Java; the best thing you can do is reassign a value to the variable in a subclass. But it is not the same, since overridden variables were assigned their values before the initializer blocks of the superclass were invoked.

For further reading on this topic check out: http://weblogs.java.net/blog/opinali/archive/2011/05/28/javafx-20-beta-first-impressions If you have any more valuable tips on this topic which I don’t cover, please add them in the comments and I’ll insert them into the post. Reference: Migrating from javafx 1.3 to javafx 2.0 from our JCG partner Pedro Duque Vieira at the Pixel Duke blog. Related Articles :JavaFX 2.0 beta sample application and after thoughts JavaOne is Rebuilding Momentum Sometimes in Java, One Layout Manager Is Not Enough Xuggler Development Tutorials...
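The SAM-type substitution for function types mentioned in the migration notes above can be sketched in plain Java, with no JavaFX dependency. The interface and method names here are purely illustrative, not part of any JavaFX API:

```java
// A SAM type: a single abstract method stands in for a JavaFX script function type.
interface DoubleTransform {
    double apply(double input);
}

public class SamTypeDemo {

    // A method that accepts behavior via the SAM type, much like passing
    // a function object in JavaFX 1.3 script. It applies the transform to
    // each value and returns the sum of the results.
    static double transformAll(double[] values, DoubleTransform f) {
        double sum = 0;
        for (double v : values) {
            sum += f.apply(v);
        }
        return sum;
    }

    public static void main(String[] args) {
        // Until Java 8 closures, an anonymous class overrides the single
        // abstract method -- the same strategy the JavaFX 2.0 API uses for
        // its listeners and callbacks.
        double result = transformAll(new double[] {1, 2, 3}, new DoubleTransform() {
            @Override
            public double apply(double input) {
                return input / 2; // halve each value
            }
        });
        System.out.println(result); // prints 3.0
    }
}
```

The same anonymous-class pattern is what you write wherever JavaFX 1.3 script accepted a function value, e.g. for a ChangeListener replacing a trigger.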
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact