Hardware Acceleration in Android – Are You Using It?

Did you know Android has Hardware Acceleration? Did you also know you actually need to enable it for your app first? Surprisingly, you do! It is not on by default. Here's another little gem in Android that could have a major impact on your application. If you allow your app to run on Android versions above 3.0, you should probably enable Hardware Acceleration. By enabling Hardware Acceleration, the performance of your application's UI may improve considerably. To enable Hardware Acceleration on an application, simply add the android:hardwareAccelerated attribute to the manifest file. After adding that attribute to the application element, simply recompile and test your app. It is very important to fully test your app after you add this line. Although it's unlikely that Hardware Acceleration will negatively affect your app, it is certainly possible. It is a good idea to make sure all of the views and animations still work as you expect. If you find that certain screens seem to have problems with Hardware Acceleration, you can also disable it on a per-Activity basis if needed. To do so, simply add the attribute (set to false) to the activity tag within the manifest. This allows you to enable Hardware Acceleration for the entire application while removing it for certain parts. This also works in reverse: you can enable it only for specific Activities while leaving it off for the majority of the application. One other interesting feature in the IO Session (linked below) is the concept of a View Layer. By using this new Layer method, you are able to use the GPU within the device to speed up animations (e.g. ListView scrolling). Check out View.setLayerType for more information. For more details about Hardware Acceleration – and really some interesting information on how views are actually drawn in Android – check out this Google IO Session. And like most things Android-related, Google has a detailed page here on Hardware Acceleration.
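For reference, the manifest changes described above look roughly like the following sketch (the activity name is a placeholder, not from the original article):

```xml
<!-- Enable hardware acceleration for the whole application,
     then opt a single problematic Activity back out of it. -->
<application android:hardwareAccelerated="true">
    <activity android:name=".SomeProblematicActivity"
              android:hardwareAccelerated="false" />
</application>
```

The same attribute works the other way around: set it to false on the application element and to true on the individual activities you want accelerated.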
Reference: Hardware Acceleration in Android – Are You Using It? from our JCG partner Isaac Taylor at the Programming Mobile blog. ...

GlassFish 4 Promoted Build, Gradle and Embedded Application Server

Very recently, perhaps towards the end of last year, the GlassFish open source team released GlassFish 4.0 beta 72 as a promoted build. Arun Gupta posted an article on the Maven coordinates for GlassFish 4.0 beta 72 on his blog. This release was significant because the team published the artifacts into a Maven repository. This year, 2013, I am the author of an up-and-coming Java EE 7 user guide, so it is important that I investigate the latest GlassFish, especially since it is the reference implementation of the specification. I want to research and investigate how the latest Java Servlets 3.1, WebSockets and JAX-RS specifications behave in the server. Here is a Gradle build script that I wrote last night to execute a GlassFish Embedded application:

apply plugin: 'java'
apply plugin: 'maven'
apply plugin: 'eclipse'
apply plugin: 'idea'

group = 'com.javaeehandbook.book1'
archivesBaseName = 'ch06-servlets-basic'
version = '1.0'

repositories {
    mavenCentral()
    maven { url 'https://maven.java.net/content/groups/promoted' }
    maven { url 'http://repository.jboss.org/nexus/content/groups/public' }
}

dependencies {
    compile 'org.glassfish.main.extras:glassfish-embedded-all:4.0-b72'
    compile 'javax:javaee-api:7.0-b72'
    testCompile 'junit:junit:4.10'
}

// Override Gradle defaults - force an exploded JAR view
sourceSets {
    main {
        output.resourcesDir = 'build/classes/main'
        output.classesDir = 'build/classes/main'
    }
    test {
        output.resourcesDir = 'build/classes/test'
        output.classesDir = 'build/classes/test'
    }
}

task(run, dependsOn: 'classes', type: JavaExec) {
    description = 'Runs the main application'
    main = 'je7hb.common.webcontainer.embedded.glassfish.EmbeddedRunner'
    classpath = sourceSets.main.runtimeClasspath
}

The key to the build script is the order of the dependencies. I found that glassfish-embedded-all had to be the first dependency on the list, otherwise there would be a ValidationException from the Hibernate Validator (bean validation) jar not being found.
The exception message was 'javax.validation.ValidationException: Unable to load Bean Validation provider'. The Gradle build also references the GlassFish java.net repositories, which is the second key point. Here is the EmbeddedRunner, the Java application code:

package je7hb.common.webcontainer.embedded.glassfish;

import org.glassfish.embeddable.*;
import java.io.*;
import java.util.*;
import java.util.concurrent.atomic.AtomicBoolean;

public class EmbeddedRunner {

    private int port;
    private AtomicBoolean initialized = new AtomicBoolean();
    private GlassFish glassfish;

    public EmbeddedRunner(int port) {
        this.port = port;
    }

    public EmbeddedRunner init() throws Exception {
        if (initialized.get()) {
            throw new RuntimeException("runner was already initialized");
        }
        BootstrapProperties bootstrapProperties = new BootstrapProperties();
        GlassFishRuntime glassfishRuntime = GlassFishRuntime.bootstrap(bootstrapProperties);

        GlassFishProperties glassfishProperties = new GlassFishProperties();
        glassfishProperties.setPort("http-listener", port);
        String[] paths = System.getProperty("java.class.path").split(File.pathSeparator);
        for (int j = 0; j < paths.length; ++j) {
            System.out.printf("classpath[%d] = %s\n", j, paths[j]);
        }
        glassfish = glassfishRuntime.newGlassFish(glassfishProperties);
        initialized.set(true);
        return this;
    }

    private void check() {
        if (!initialized.get()) {
            throw new RuntimeException("runner was not initialized");
        }
    }

    public EmbeddedRunner start() throws Exception {
        check();
        glassfish.start();
        return this;
    }

    public EmbeddedRunner stop() throws Exception {
        check();
        glassfish.stop();
        return this;
    }

    public static void main(String[] args) throws Exception {
        EmbeddedRunner runner = new EmbeddedRunner(8080).init().start();
        Thread.sleep(1000);
        runner.stop();
    }
}

The class executes embedded GlassFish as the beginnings of a containerless build, a term that James Ward and others have coined. This class starts GlassFish, waits one second, and then shuts it down again.
The code works with Gradle, by invoking at the command line gradle run or through an IDE. I used the command gradle idea to generate IDEA project files. Here is sample output from IntelliJ IDEA 12: /Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/bin/java -Didea.launcher.port=7537 "-Didea.launcher.bin.path=/Applications/IntelliJ IDEA 11.app/bin" -Dfile.encoding=UTF-8 -classpath "/Users/Developer/Documents/IdeaProjects/javaee7-handbook/ch06/servlets-basic/out/production/servlets-basic:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/lib/ant-javafx.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/lib/dt.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/lib/javafx-doclet.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/lib/javafx-mx.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/lib/jconsole.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/lib/sa-jdi.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/lib/tools.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/charsets.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/deploy.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/htmlconverter.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/javaws.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/jce.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/jfr.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/jfxrt.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/JObjC.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/jsse.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/management-agent.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/plugin.jar:/Library/J
ava/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/resources.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/rt.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/ext/dnsns.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/ext/localedata.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/ext/sunec.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/ext/sunjce_provider.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/ext/sunpkcs11.jar:/Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/ext/zipfs.jar:/Users/Developer/.gradle/caches/artifacts-15/filestore/org.glassfish.main.extras/glassfish-embedded-all/4.0-b72/jar/942b982d5c005806a08843d2a1f411f278c04077/glassfish-embedded-all-4.0-b72.jar:/Users/Developer/.gradle/caches/artifacts-15/filestore/javax/javaee-api/7.0-b72/jar/56d50eaa8d21c2f70394f607efc1aa27c360141d/javaee-api-7.0-b72.jar:/Users/Developer/.gradle/caches/artifacts-15/filestore/javax.activation/activation/1.1/jar/e6cb541461c2834bdea3eb920f1884d1eb508b50/activation-1.1.jar:/Users/Developer/.gradle/caches/artifacts-15/filestore/com.sun.mail/javax.mail/1.4.6-rc1/jar/5c5de8592e570afb595a8be727b484d438b49d69/javax.mail-1.4.6-rc1.jar:/Applications/IntelliJ IDEA 11.app/lib/idea_rt.jar" com.intellij.rt.execution.application.AppMain je7hb.common.webcontainer.embedded.glassfish.EmbeddedRunner classpath[0] = /Users/Developer/Documents/IdeaProjects/javaee7-handbook/ch06/servlets-basic/out/production/servlets-basic classpath[1] = /Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/lib/ant-javafx.jar classpath[26] = /Library/Java/JavaVirtualMachines/jdk1.7.0_09.jdk/Contents/Home/jre/lib/ext/zipfs.jar classpath[27] = 
/Users/Developer/.gradle/caches/artifacts-15/filestore/org.glassfish.main.extras/glassfish-embedded-all/4.0-b72/jar/942b982d5c005806a08843d2a1f411f278c04077/glassfish-embedded-all-4.0-b72.jar classpath[28] = /Users/Developer/.gradle/caches/artifacts-15/filestore/javax/javaee-api/7.0-b72/jar/56d50eaa8d21c2f70394f607efc1aa27c360141d/javaee-api-7.0-b72.jar classpath[29] = /Users/Developer/.gradle/caches/artifacts-15/filestore/javax.activation/activation/1.1/jar/e6cb541461c2834bdea3eb920f1884d1eb508b50/activation-1.1.jar classpath[30] = /Users/Developer/.gradle/caches/artifacts-15/filestore/com.sun.mail/javax.mail/1.4.6-rc1/jar/5c5de8592e570afb595a8be727b484d438b49d69/javax.mail-1.4.6-rc1.jar classpath[31] = /Applications/IntelliJ IDEA 11.app/lib/idea_rt.jar Found populator: org.glassfish.kernel.embedded.EmbeddedDomainXml Jan 31, 2013 10:05:12 AM org.glassfish.security.services.impl.authorization.AuthorizationServiceImpl initialize INFO: Authorization Service has successfully initialized. Jan 31, 2013 10:05:12 AM org.hibernate.validator.internal.util.Version <clinit> INFO: HV000001: Hibernate Validator 5.0.0.Alpha1 Jan 31, 2013 10:05:13 AM com.sun.enterprise.config.modularity.StartupConfigBeanOverrider postConstruct INFO: Starting the config overriding procedure Jan 31, 2013 10:05:13 AM com.sun.enterprise.config.modularity.StartupConfigBeanOverrider postConstruct INFO: Finished the config overriding procedure Jan 31, 2013 10:05:13 AM com.sun.enterprise.v3.services.impl.GrizzlyProxy start INFO: Grizzly Framework 2.3 started in: 18ms - bound to [/,080] Jan 31, 2013 10:05:13 AM com.sun.enterprise.v3.services.impl.GrizzlyProxy start INFO: Grizzly Framework 2.3 started in: 3ms - bound to [/,081] Jan 31, 2013 10:05:13 AM com.sun.enterprise.v3.admin.adapter.AdminEndpointDecider setGuiContextRoot INFO: Admin Console Adapter: context root: /admin Jan 31, 2013 10:05:13 AM com.sun.enterprise.v3.admin.adapter.AdminEndpointDecider setGuiContextRoot INFO: Admin Console Adapter: 
context root: /admin
Jan 31, 2013 10:05:13 AM com.sun.enterprise.v3.admin.adapter.AdminEndpointDecider setGuiContextRoot INFO: Admin Console Adapter: context root: /admin
Jan 31, 2013 10:05:13 AM com.sun.enterprise.v3.server.AppServerStartup$StartupActivator awaitCompletion INFO: Undefined Product Name - define product and version info in config/branding 0.0.0 (0) startup time : Embedded (1,204ms), startup services(856ms), total(2,060ms)
Jan 31, 2013 10:05:13 AM org.glassfish.admin.mbeanserver.JMXStartupService$JMXConnectorsStarterThread run INFO: JMXStartupService has disabled JMXConnector system
Jan 31, 2013 10:05:13 AM com.sun.enterprise.connectors.jms.util.JmsRaUtil getInstalledMqVersion WARNING: RAR7000 : Check for a new version of MQ installation failed : /var/folders/kr/vj5fd5s91g76_t348ndnbtxr0000gn/T/gfembed883899172293116872tmp/lib/install/applications/jmsra/../imqjmsra.rar (No such file or directory):/var/folders/kr/vj5fd5s91g76_t348ndnbtxr0000gn/T/gfembed883899172293116872tmp/lib/install/applications/jmsra/imqjmsra.rar
Jan 31, 2013 10:05:14 AM org.glassfish.admin.mbeanserver.JMXStartupService shutdown INFO: JMXStartupService and JMXConnectors have been shut down.
JdbcRuntimeExtension, getAllSystemRAResourcesAndPools = [GlassFishConfigBean.org.glassfish.jdbc.config.JdbcResource, GlassFishConfigBean.org.glassfish.jdbc.config.JdbcResource, GlassFishConfigBean.org.glassfish.jdbc.config.JdbcConnectionPool, GlassFishConfigBean.org.glassfish.jdbc.config.JdbcConnectionPool]
Jan 31, 2013 10:05:15 AM com.sun.enterprise.v3.server.AppServerStartup stop INFO: Shutdown procedure finished
Process finished with exit code 0

You should see something like the above output in the IDE if you are doing it right. It would appear we will have to be on our guard until JDK 9 in order to avoid class path loading issues. My book, a user guide to Java EE 7, is scheduled for the Summer of 2013.
Reference: GlassFish 4 Promoted Build, Gradle and Embedded Application Server from our JCG partner Peter Pilgrim at Peter Pilgrim's blog. ...

“Why You No Train?”

It is a simple question. So why don't you get more training? Do you feel that you already operate effectively? Is there nothing more to learn? Do you think that you are already "good"? Sometimes, just when we are walking about and everything seems to be going smoothly, the bottom drops out of the bucket: our world of positivity, in our situation, our lives, family and friends, takes a nosedive to the other side. When our world changes to tragedy, conflict and controversy, those times, which we have all experienced, can be really depressing and shockingly awful. These are the times when we start to kick ourselves. We reproach ourselves with, "I should've done this. I could've done that." Well, the question at the beginning still stands: why do you not care about training, about saving for a rainy day, and could all of this tragedy have been avoided? They say prevention is better than the cure. We could all have done with some forewarning, some training. If you are expecting a company to train you in all things, then that is great, especially if you are a star performer on ye olde balanced scorecard, getting 100% in the 360 review; and if your boss is terrific, he or she will send you on training. If the company has the money to invest in you, and you are brilliant and the company thinks so as well, then they will continue to invest in you. For the rest of us mortal souls, though, we are not perfect creatures and will probably never be fortunate enough for the big golden handshake of continuous personal development and abundant budgets. Some companies do care about their employees and give them a fair crack of the training budget. Sadly, the training budgets for the common worker, developer and design team are shrinking month by month, and training is one of the first things to be cut in a downturn.
So if you are waiting for a company to send you on a splendid trip to a conference like JavaOne 2013, with all expenses, flights, hotels and tickets paid: good luck with that. In the next section I offer some insights into gaining personal training, divided into two parts. First, suppose the company does not want to give you the training you really want; what can you do as an alternative?

- Spend your own personal money and funds; if you believe in it, then you will do it.
- Negotiate with the training company; they might be able to cut you a special deal on an early bird booking that is far enough in the future.
- Find a user group that runs a coding group or practises the skills you desire.
- Find another organisation to work for; watch the job adverts for up-and-coming talent, and especially firms with attractive starting gifts, like those willing to throw in a Retina MacBook Pro; trade that in for training instead as a condition of joining.
- Find another job that pays more money and then personally fund the training you need.
- Trade skills with a pal. Suppose you have reasonably advanced knowledge of NoSQL databases, like MongoDB, and want to learn better JavaScript; then good old horse trading with a colleague might just be the way to get training on your side as well as theirs.
- Don't accept the classic answer from the boss: "How does X help the business?" If the training is relevant to you achieving the goal of being a much better software engineer and designer, then of course it is relevant. These types of answers are just excuses to keep you bedded down, to make you give in and accept the status quo, which, of course, is utter nonsense.
Perhaps the time has finally come to find a better job. Now let us suppose you are the boss: you are the line manager and you have a good team.

- Fight for your team and their training; fight for your team's budget and don't let senior management take it away.
- Give up your personal training for the entire year and suggest that they allocate the extra budget to training for your team members.
- Perhaps it is time to evaluate the relationship with the preferred supplier of training, if your company operates like this. Has your firm been getting decent value from the PSL (preferred supplier list)? No? Then try an independent trainer, a famous speaker, or find a lone runner who is much smaller than the big training businesses but can deliver bespoke training to your company. Procrastinating over a bad PSL is a waste of everyone's time, including your business's.
- Find alternatives to training, like brown-bag lunches, and collaborate with other businesses.
- Get on the old blower (the telephone) to the training company and use your managerial skills to negotiate a rate, especially for early birds far into the future.
- Take the sword for the team when your boss says the training budget has to be cut. Say that you will resign if the team's training budget is cut. They will probably think hard and fast: the cost of recruiting another person just like you and training somebody else up in your role equates to training for about four to six developers.
- Don't be an idiot and attempt to coach or mentor the training yourself, especially if you have no idea what you are talking about. Don't go cheap; go for quality training for your team members.
- Use so-called creative accounting in the budget, stick two fingers up to the human resources dictum, and find an independent trainer, not for you, but for your team. Use the magic entry in the budget to cover the training. Insist to HR afterwards that you wanted your team to be the best, to be productive, and to get the job done with higher quality.
If HR still doesn't like it, then perhaps it is time to be a manager in another firm, because your firm will have shown its idea of the value of people in the organisation. (If you are going to leave, think about taking your best pal with you.) Without commitment to training and learning new skills there can be no continuous improvement, which is one of the prime directives of Agile and Lean engineering. I hope that I have given you some great ideas. Everybody needs training and self-improvement; don't let the government or business tell you otherwise.   Reference: "Why You No Train?" from our JCG partner Peter Pilgrim at Peter Pilgrim's blog. ...

Using Google Guava’s Ordering API

We've been playing a bit more with Google's Guava library – what a great library! The most recent thing we used it for was to sort out the comparators for our domain objects. Here's how. Using Apache Isis' JDO Objectstore, it's good practice to make your classes implement java.lang.Comparable, and to use SortedSet for the collections. You can see this in Isis' quickstart archetype, where the ToDoItem has a recursive relationship to itself:

public class ToDoItem implements Comparable<ToDoItem> {
    ...
    private SortedSet<ToDoItem> dependencies = Sets.newTreeSet();
    ...
}

How best to implement the compareTo method, though? Here's the original implementation:

public int compareTo(final ToDoItem other) {
    if (isComplete() && !other.isComplete()) {
        return +1;
    }
    if (!isComplete() && other.isComplete()) {
        return -1;
    }
    if (getDueBy() == null && other.getDueBy() != null) {
        return +1;
    }
    if (getDueBy() != null && other.getDueBy() == null) {
        return -1;
    }
    if (getDueBy() == null && other.getDueBy() == null ||
        getDueBy().equals(other.getDueBy())) {
        return getDescription().compareTo(other.getDescription());
    }
    return getDueBy().compareTo(other.getDueBy());
}

Yuk! Basically it says:

* order the not-yet-completed objects before the completed objects
* where there's a tie, order by due date (put those without a due-by date last)
* where there's a tie, order by description.

Here's how to rewrite that using Guava's Ordering class.
First, let's create some Ordering instances for the scalar types:

public final class Orderings {

    public static final Ordering<Boolean> BOOLEAN_NULLS_LAST =
        Ordering.<Boolean>natural().nullsLast();
    public static final Ordering<LocalDate> LOCAL_DATE_NULLS_LAST =
        Ordering.<LocalDate>natural().nullsLast();
    public static final Ordering<String> STRING_NULLS_LAST =
        Ordering.<String>natural().nullsLast();

    private Orderings() {}
}

Now we can rewrite our ToDoItem's compareTo() method in a declarative fashion:

public class ToDoItem implements Comparable<ToDoItem> {
    ...
    public int compareTo(ToDoItem o) {
        return ORDERING_BY_COMPLETE
            .compound(ORDERING_BY_DUE_BY)
            .compound(ORDERING_BY_DESCRIPTION)
            .compare(this, o);
    }

    public static Ordering<ToDoItem> ORDERING_BY_COMPLETE = new Ordering<ToDoItem>() {
        public int compare(ToDoItem p, ToDoItem q) {
            return Orderings.BOOLEAN_NULLS_LAST.compare(p.isComplete(), q.isComplete());
        }
    };

    public static Ordering<ToDoItem> ORDERING_BY_DUE_BY = new Ordering<ToDoItem>() {
        public int compare(ToDoItem p, ToDoItem q) {
            return Orderings.LOCAL_DATE_NULLS_LAST.compare(p.getDueBy(), q.getDueBy());
        }
    };

    public static Ordering<ToDoItem> ORDERING_BY_DESCRIPTION = new Ordering<ToDoItem>() {
        public int compare(ToDoItem p, ToDoItem q) {
            return Orderings.STRING_NULLS_LAST.compare(p.getDescription(), q.getDescription());
        }
    };
}

Now, admittedly, this hardly warrants all that boilerplate for just a single method in a single class; of course not! But what we have here now is a little algebra that we can use to combine orderings across all the domain classes in our domain model. Other domain classes that use a ToDoItem can order themselves using the ToDoItem's natural ordering (accessed from Ordering.natural()), or they can create new orderings using the various ToDoItem.ORDERING_BY_xxx orderings.   Reference: Using Google Guava's Ordering API from our JCG partner Dan Haywood at the Dan Haywood blog. ...
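As a footnote for comparison only: the same three-level composition can also be expressed with the plain JDK 8 Comparator API, without Guava. The ToDoItem below is a minimal stand-in for illustration, not the Apache Isis class; a nullable String holds the due date for brevity.

```java
import java.util.Comparator;

public class ToDoItemOrdering {

    static class ToDoItem {
        final boolean complete;
        final String dueBy;        // ISO-8601 date string, may be null
        final String description;
        ToDoItem(boolean complete, String dueBy, String description) {
            this.complete = complete;
            this.dueBy = dueBy;
            this.description = description;
        }
    }

    // Incomplete items first, then by due date with nulls last, then by description.
    static final Comparator<ToDoItem> NATURAL =
        Comparator.comparing((ToDoItem t) -> t.complete)
                  .thenComparing((ToDoItem t) -> t.dueBy,
                                 Comparator.nullsLast(Comparator.<String>naturalOrder()))
                  .thenComparing((ToDoItem t) -> t.description);

    public static void main(String[] args) {
        ToDoItem pending = new ToDoItem(false, "2013-02-01", "buy milk");
        ToDoItem done = new ToDoItem(true, "2013-01-01", "file taxes");
        ToDoItem noDate = new ToDoItem(false, null, "water plants");
        System.out.println(NATURAL.compare(pending, done) < 0);   // true: incomplete sorts first
        System.out.println(NATURAL.compare(pending, noDate) < 0); // true: null due date sorts last
    }
}
```

The Guava version still has the edge here because each ORDERING_BY_xxx is a named, reusable value that other domain classes can compound into their own orderings.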

Java 8: From PermGen to Metaspace

As you may be aware, the JDK 8 Early Access is now available for download. This allows Java developers to experiment with some of the new language and runtime features of Java 8. One of these features is the complete removal of the Permanent Generation (PermGen) space, which has been announced by Oracle since the release of JDK 7. Interned strings, for example, have already been removed from the PermGen space since JDK 7. The JDK 8 release finalizes its decommissioning. This article will share the information that we have found so far on the PermGen successor: Metaspace. We will also compare the runtime behavior of HotSpot 1.7 vs. HotSpot 1.8 (b75) when executing a Java program "leaking" class metadata objects. The final specifications, tuning flags and documentation around Metaspace should be available once Java 8 is officially released.

Metaspace: A new memory space is born

The JDK 8 HotSpot JVM now uses native memory for the representation of class metadata; this space is called the Metaspace, similar to the Oracle JRockit and IBM JVMs. The good news is that this means no more java.lang.OutOfMemoryError: PermGen space problems and no need for you to tune and monitor this memory space anymore... not so fast. While this change is invisible by default, we will show you next that you will still need to worry about the class metadata memory footprint. Please also keep in mind that this new feature does not magically eliminate class and classloader memory leaks. You will need to track down these problems using a different approach and by learning the new naming convention. I recommend that you read the PermGen removal summary and comments from Jon on this subject. In summary:

PermGen space situation
- This memory space is completely removed. The PermSize and MaxPermSize JVM arguments are ignored and a warning is issued if present at start-up.

Metaspace memory allocation model
- Most allocations for the class metadata are now made out of native memory.
- The klasses that were used to describe class metadata have been removed.

Metaspace capacity
- By default, class metadata allocation is limited by the amount of available native memory (capacity will of course depend on whether you use a 32-bit or 64-bit JVM, along with OS virtual memory availability).
- A new flag is available (MaxMetaspaceSize), allowing you to limit the amount of native memory used for class metadata. If you don't specify this flag, the Metaspace will dynamically re-size depending on the application demand at runtime.

Metaspace garbage collection
- Garbage collection of dead classes and classloaders is triggered once the class metadata usage reaches MaxMetaspaceSize. Proper monitoring and tuning of the Metaspace will obviously be required in order to limit the frequency or delay of such garbage collections. Excessive Metaspace garbage collections may be a symptom of a class or classloader memory leak, or of inadequate sizing for your application.

Java heap space impact
- Some miscellaneous data has been moved to the Java heap space. This means you may observe an increase of the Java heap space following a future JDK 8 upgrade.

Metaspace monitoring
- Metaspace usage is available from the HotSpot 1.8 verbose GC log output.
- Jstat and JVisualVM have not been updated at this point, based on our testing with b75; the old PermGen space references are still present.

Enough theory now, let's see this new memory space in action via our leaking Java program...

PermGen vs. Metaspace runtime comparison

In order to better understand the runtime behavior of the new Metaspace memory space, we created a class-metadata-leaking Java program. You can download the source here. The following scenarios will be tested:

- Run the Java program using JDK 1.7 in order to monitor & deplete the PermGen memory space set at 128 MB.
- Run the Java program using JDK 1.8 (b75) in order to monitor the dynamic increase and garbage collection of the new Metaspace memory space.
- Run the Java program using JDK 1.8 (b75) in order to simulate the depletion of the Metaspace by setting the MaxMetaspaceSize value to 128 MB.

JDK 1.7 @64-bit – PermGen depletion

- Java program with 50K configured iterations
- Java heap space of 1024 MB
- Java PermGen space of 128 MB (-XX:MaxPermSize=128m)

As you can see from JVisualVM, the PermGen depletion was reached after loading about 30K+ classes. We can also see this depletion from the program and GC output.

Class metadata leak simulator
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com
ERROR: java.lang.OutOfMemoryError: PermGen space

Now let's execute the program using the HotSpot JDK 1.8 JRE.

JDK 1.8 @64-bit – Metaspace dynamic re-size

- Java program with 50K configured iterations
- Java heap space of 1024 MB
- Java Metaspace: unbounded (default)

As you can see from the verbose GC output, the JVM Metaspace expanded dynamically from 20 MB up to 328 MB of reserved native memory in order to honor the increased class metadata memory footprint from our Java program. We could also observe garbage collection events in the attempt by the JVM to destroy any dead class or classloader objects. Since our Java program is leaking, the JVM had no choice but to dynamically expand the Metaspace memory space. The program was able to run its 50K iterations with no OOM event and loaded 50K+ classes. Let's move to our last testing scenario.

JDK 1.8 @64-bit – Metaspace depletion

- Java program with 50K configured iterations
- Java heap space of 1024 MB
- Java Metaspace: 128 MB (-XX:MaxMetaspaceSize=128m)

As you can see from JVisualVM, the Metaspace depletion was reached after loading about 30K+ classes, very similar to the run with JDK 1.7. We can also see this from the program and GC output. Another interesting observation is that the native memory footprint reserved was twice as much as the maximum size specified.
This may indicate some opportunities to fine-tune the Metaspace re-size policy, if possible, in order to avoid native memory waste. Now find below the exception we got from the Java program output.

Class metadata leak simulator
Author: Pierre-Hugues Charbonneau
http://javaeesupportpatterns.blogspot.com
ERROR: java.lang.OutOfMemoryError: Metadata space
Done!

As expected, capping the Metaspace at 128 MB, as we did for the baseline run with JDK 1.7, did not allow us to complete the 50K iterations of our program. A new OOM error was thrown by the JVM. The above OOM event was thrown by the JVM from the Metaspace following a memory allocation failure (#metaspace.cpp).

Final words

I hope you appreciated this early analysis of, and experiment with, the new Java 8 Metaspace. The current observations definitely indicate that proper monitoring and tuning will be required in order to stay away from problems such as excessive Metaspace GC or the OOM conditions triggered in our last testing scenario. Future articles may include performance comparisons in order to identify potential performance improvements associated with this new feature.   Reference: Java 8: From PermGen to Metaspace from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog. ...
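For reference, the flags used in the three scenarios above boil down to command lines along these lines (MetadataLeakSimulator is a placeholder name, not the author's actual class; -XX:MaxPermSize and -XX:MaxMetaspaceSize are the real HotSpot flags):

```shell
# JDK 1.7 baseline: 1 GB heap, PermGen capped at 128 MB
java -Xmx1024m -XX:MaxPermSize=128m MetadataLeakSimulator

# JDK 1.8: same heap, Metaspace left unbounded (the default)
java -Xmx1024m MetadataLeakSimulator

# JDK 1.8: Metaspace capped at 128 MB to reproduce the OOM
java -Xmx1024m -XX:MaxMetaspaceSize=128m MetadataLeakSimulator
```

Note that on JDK 8, passing -XX:MaxPermSize only produces a start-up warning, as described in the summary above.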

History of Open Source

I gave a talk last year at Walmart HQ about Fuse open source integration and messaging, the benefits of using open source and the benefits of contributing to the Apache Software Foundation. You can find a sanitized version on slideshare.net. I also covered the history of open source, and like all histories there is no one 'right' story, but I tried to distill the timeline as best I could. On the chance this may be useful to someone else, I tell the story here. Are you sitting comfortably? Then I'll begin …

The IBM 704 was the first mass-produced computer with floating point arithmetic hardware, introduced to the fledgling computer market back in 1954. IBM managed to sell a whopping 123 of these beasts between 1955 and 1960. IBM bundled these machines with free software and source code – and in 1955 the organization SHARE was formed to allow like-minded individuals to share and swap code for the 704 machine. SHARE is the first public open source organization, and it is still around today. Everything was really rosy in open source land (though not good for IBM's competitors) – until, in 1969, the Department of Justice filed a suit alleging that IBM was monopolizing the computer market. The suit dragged on for 13 years, but the effect of the filing was that in 1969 IBM unbundled its software from distribution with its computers. Until this point, the software was free and the source code was distributed too. Now we diverge into the 70s and 80s, and the UNIX wars, which are only slightly less hard to follow than the Wars of the Roses – so I'll try to distill the salient points. UNIX started its development in the 1960s, growing out of the Multics project at Bell Labs, which was then part of AT&T. Originally written in assembly language, it was ported in 1972 to the C programming language, allowing it to become more portable. AT&T was banned from selling computers and computer software as a result of an antitrust case brought back in the 1950s.
UNIX was freely licensed to universities and educational establishments, and became the Berkeley Software Distribution (BSD) – first released by the University of California, Berkeley in 1984. On a tangent timeline, in 1983 Richard Stallman created GNU (GNU’s Not Unix), which would lead to the Free Software Foundation and the GPL (and LGPL) licenses. Jumping back to 1984, US regulators broke up AT&T into regional telcos (the ‘baby bells’), but also allowed AT&T to enter the computing and software market and to start selling its own distributions of AT&T UNIX, which it did aggressively. When AT&T sued Berkeley in the early 90s for license violations, the UNIX market was left in disarray. From every conflict there is opportunity, and Linus Torvalds’ development of Linux in 1991 started an open source fire, which ultimately led to the most successful open source company, Red Hat, coming into existence. The Apache HTTP server started in 1994 from the requirement to maintain the old NCSA HTTP daemon. This collaboration led to the Apache Software Foundation in 1999. The biggest revolution in open source has come about through the creation of GitHub (2008) – which just narrowly gets my vote as being the most important company in open source development, primarily because it uses social networking techniques to encourage collaboration – and in just a few short years it has gained over 1 million users and over 2 million repos. So we come full circle, with Gartner predicting that open source software will be included in the mission-critical software portfolios of 99% of the Global 2000 enterprises.   Reference: History of Open Source from our JCG partner Rob Davies at the Rob Davies on Open Source Integration blog. ...

Unix pipelines for basic spelling error detection

Introduction
We can of course write programs to do most anything we want, but often the Unix command line has everything we need to perform a series of useful operations without writing a line of code. In my Applied NLP class today, I show how one can get a high-confidence dictionary out of a body of raw text with a series of Unix pipes, and I’m posting that here so students can refer back to it later and see some pointers to other useful Unix resources. Note: for help with any of the commands, just type “man <command>” at the Unix prompt.
Checking for spelling errors
We are working on automated spelling correction as an in-class exercise, with a particular emphasis on the following sentence:
This Facebook app shows that she is there favorite acress in tonw
So, this has a contextual spelling error (there), an error that could be a valid English word but isn’t (acress) and an error that violates English sound patterns (tonw). One of the key ingredients for spelling correction is a dictionary of words known to be valid in the language. Let’s assume we are working with English here. On most Unix systems, you can pick up an English dictionary in /usr/share/dict/words, though the words you find may vary from one platform to another. If you can’t find anything there, there are many word lists available online, e.g. check out the Wordlist project for downloads and links. We can easily use the dictionary and Unix to check for words in the above sentence that don’t occur in the dictionary. First, save the sentence to a file.
$ echo 'This Facebook app shows that she is there favorite acress in tonw' > sentence.txt
Next, we need to get the unique word types (rather than tokens) in sorted lexicographic order. The following Unix pipeline accomplishes this.
$ cat sentence.txt | tr ' ' '\n' | sort | uniq > words.txt
To break it down:
The cat command spills the file to standard output.
The tr command “translates” all spaces to newlines. So, this gives us one word per line.
The sort command sorts the lines lexicographically.
The uniq command makes those lines uniq by making adjacent duplicates disappear. (This doesn’t do anything for this particular sentence, but I’m putting it in there in case you try other sentences that have multiple tokens of the type “the”, for example.)
You can see these effects by doing each in turn, building up the pipeline incrementally.
$ cat sentence.txt
This Facebook app shows that she is there favorite acress in tonw
$ cat sentence.txt | tr ' ' '\n'
This
Facebook
app
shows
that
she
is
there
favorite
acress
in
tonw
$ cat sentence.txt | tr ' ' '\n' | sort
Facebook
This
acress
app
favorite
in
is
she
shows
that
there
tonw
We can now use the comm command to compare the file words.txt and the dictionary. It produces three columns of output: the first gives the lines only in the first file, the second gives the lines only in the second file, and the third gives those in common. So, the first column has what we need, because those are words in our sentence that are not found in the dictionary. Here’s the command to get that.
$ comm -23 words.txt /usr/share/dict/words
Facebook
This
acress
app
shows
tonw
The -23 options indicate we should suppress columns 2 and 3 and show only column 1. If we just use -2, we get the words in the sentence with the non-dictionary words on the left and the dictionary words on the right (try it). The problem of course is that any word list will have gaps. This dictionary doesn’t have more recent terms like Facebook and app. It also doesn’t have upper-case This. You can ignore case with comm using the -i option and this goes away. It doesn’t have shows, which is not in the dictionary since it is an inflected form of the verb stem show. We could fix this with some morphological analysis, but instead of that, let’s go the lazy route and just grab a larger list of words. 
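For comparison, the same non-dictionary-word check can be sketched in plain Java. This is only a hypothetical equivalent of the tr | sort | uniq and comm -23 pipeline above; the tiny inline word list stands in for /usr/share/dict/words.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.SortedSet;
import java.util.TreeSet;

class SpellCheck {
    // Return the sorted set of unique tokens not present in the dictionary,
    // mirroring: tr ' ' '\n' | sort | uniq  followed by  comm -23
    static SortedSet<String> unknownWords(String sentence, Set<String> dictionary) {
        SortedSet<String> unknown = new TreeSet<>();
        for (String token : sentence.split("\\s+")) {
            if (!dictionary.contains(token)) {
                unknown.add(token);
            }
        }
        return unknown;
    }

    public static void main(String[] args) {
        // Tiny stand-in for /usr/share/dict/words
        Set<String> dict = new HashSet<>(Arrays.asList(
            "that", "she", "is", "there", "favorite", "in"));
        String sentence = "This Facebook app shows that she is there favorite acress in tonw";
        System.out.println(unknownWords(sentence, dict));
        // prints [Facebook, This, acress, app, shows, tonw]
    }
}
```

The TreeSet gives the same sorted, deduplicated view that sort | uniq produced, with upper-case words first because of their lower ASCII values.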
Extracting a high-confidence dictionary from a corpus
Raw text often contains spelling errors, but errors don’t tend to happen with very high frequency, so we can often get pretty good expanded word lists by computing frequencies of word types on lots of text and then applying reasonable cutoffs. (There are much more refined methods, but this will suffice for current purposes.) First, let’s get some data. The Open American National Corpus has just released v3.0.0 of its Manually Annotated Sub-Corpus (MASC), which you can get from this link: http://www.anc.org/masc/MASC-3.0.0.tgz
Do the following to get it and set things up for further processing:
$ mkdir masc
$ cd masc
$ wget http://www.anc.org/masc/MASC-3.0.0.tgz
$ tar xzf MASC-3.0.0.tgz
(If you don’t have wget, you can just download the MASC file in your browser and then move it over.) Next, we want all the text from the data/written directory. The find command is very handy for this.
$ find data/written -name '*.txt' -exec cat {} \; > all-written.txt
To see how much is there, use the wc command.
$ wc all-written.txt
43061 400169 2557685 all-written.txt
So, there are 43k lines and 400k tokens. That’s a bit small for what we are trying to do, but it will suffice for the example. Again, I’ll build up a Unix pipeline to extract the high-confidence word types from this corpus. I’ll use the head command to show just part of the output at each stage. Here are the raw contents.
$ cat all-written.txt | head
I can't believe I wrote all that last year.
Acephalous
Friday, 07 May 2010
Now, get one word per line.
$ cat all-written.txt | tr -cs 'A-Za-z' '\n' | head
I
can
t
believe
I
wrote
all
that
last
The tr translator is used very crudely: basically, anything that is not an ASCII letter character is turned into a newline. The -cs options indicate to take the complement (opposite) of the 'A-Za-z' argument and to squeeze duplicates (e.g. “A42,” becomes “A” with a single newline rather than three). 
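That crude tr -cs tokenization step has a near one-line analogue in Java, shown here purely for comparison (a sketch, not part of the original post):

```java
import java.util.Arrays;

class Tokenize {
    // Equivalent of: tr -cs 'A-Za-z' '\n' -- every maximal run of
    // non-letter characters becomes a single token boundary.
    static String[] tokens(String text) {
        return text.split("[^A-Za-z]+");
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(tokens("I can't believe I wrote all that last year.")));
        // prints [I, can, t, believe, I, wrote, all, that, last, year]
    }
}
```

The "+" in the regex does the same squeezing as the -s option: adjacent delimiters collapse into one boundary instead of producing empty tokens.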
Next, we sort and uniq, as above, except that we use the -c option to uniq so that it produces counts.
$ cat all-written.txt | tr -cs 'A-Za-z' '\n' | sort | uniq -c | head
   1
 737 A
  22 AA
   1 AAA
   1 AAF
   1 AAPs
  21 AB
   3 ABC
   1 ABDULWAHAB
   1 ABLE
Because the MASC corpus includes tweets and blogs and other unedited text, we don’t trust words that have low counts, e.g. four or fewer tokens of that type. We can use awk to filter those out.
$ cat all-written.txt | tr -cs 'A-Za-z' '\n' | sort | uniq -c | awk '{ if($1>4) print $2 }' | head
A
AA
AB
ACR
ADDRESS
ADP
ADPNP
AER
AIG
ALAN
Awk makes it easy to process lines of files, and gives you indexes into the first column ($1), second ($2), and so on. There’s much more you can do, but this shows how you can conditionally output some information from each line using awk. You can of course change the threshold. You can also turn all words to lower-case by inserting another tr call into the pipe, e.g.:
$ cat all-written.txt | tr 'A-Z' 'a-z' | tr -cs 'a-z' '\n' | sort | uniq -c | awk '{ if($1>8) print $2 }' | head
a
aa
ab
abandoned
abbey
ability
able
abnormal
abnormalities
aboard
It all comes down to what you need out of the text.
Combining and using the dictionaries
Let’s do the check on the sentence above, but using both the standard dictionary and the one derived from MASC. Run the following command first.
$ cat all-written.txt | tr -cs 'A-Za-z' '\n' | sort | uniq -c | awk '{ if($1>4) print $2 }' > /tmp/masc_vocab.txt
Then in the directory where you saved words.txt, do the following.
$ cat /usr/share/dict/words /tmp/masc_vocab.txt | sort | uniq > big_vocab.txt
$ comm -23 words.txt big_vocab.txt
acress
tonw
Ta-da! The MASC corpus provided us with enough examples of other words that This, Facebook, app, and shows are no longer detected as errors. Of course, detecting there as an error is much more difficult and requires language models and more.
Conclusion
Learn to use the Unix command line! 
This post is just a start into many cool things you can do with Unix pipes. Here are some other resources:
Unix for Poets (the classic resource by Ken Church)
Data-Intensive Linguistics, a book draft by Chris Brew and Marc Moens (never published, but it has a great section on using Unix for language processing)
Slides I did on using Unix to count words in texts and compute language model probabilities
Happy (Unix) hacking!   Reference: Unix pipelines for basic spelling error detection from our JCG partner Jason Baldridge at the Bcomposes blog. ...

Defensive API evolution with Java interfaces

API evolution is something absolutely non-trivial. Something that only few have to deal with. Most of us work on internal, proprietary APIs every day. Modern IDEs ship with awesome tooling to factor out, rename, pull up, push down, indirect, delegate, infer, generalise our code artefacts. These tools make refactoring our internal APIs a piece of cake. But some of us work on public APIs, where the rules change drastically. Public APIs, if done properly, are versioned. Every change – compatible or incompatible – should be published in a new API version. Most people will agree that API evolution should be done in major and minor releases, similar to what is specified in semantic versioning. In short: incompatible API changes are published in major releases (1.0, 2.0, 3.0), whereas compatible API changes / enhancements are published in minor releases (1.0, 1.1, 1.2). If you’re planning ahead, you’re going to foresee most of your incompatible changes a long time before actually publishing the next major release. A good tool in Java to announce such a change early is deprecation.
Interface API evolution
Now, deprecation is a good tool to indicate that you’re about to remove a type or member from your API. What if you’re going to add a method or a type to an interface’s type hierarchy? This means that all client code implementing your interface will break – at least as long as Java 8’s defender methods aren’t introduced yet. There are several techniques to circumvent / work around this problem:
1. Don’t care about it
Yes, that’s an option too. Your API is public, but maybe not so much used. Let’s face it: not all of us work on the JDK / Eclipse / Apache / etc. codebases. If you’re friendly, you’re at least going to wait for a major release to introduce new methods. But you can break the rules of semantic versioning if you really have to – if you can deal with the consequences of getting a mob of angry users. 
Note, though, that other platforms aren’t as backwards-compatible as the Java universe (often by language design, or by language complexity). E.g. with Scala’s various ways of declaring things as implicit, your API can’t always be perfect.
2. Do it the Java way
The “Java” way is not to evolve interfaces at all. Most API types in the JDK have been the way they are today forever. Of course, this makes APIs feel quite “dinosaury” and adds a lot of redundancy between various similar types, such as StringBuffer and StringBuilder, or Hashtable and HashMap. Note that some parts of Java don’t adhere to the “Java” way. Most specifically, this is the case for the JDBC API, which evolves according to the rules of section #1: “Don’t care about it”.
3. Do it the Eclipse way
Eclipse’s internals contain huge APIs. There are a lot of guidelines on how to evolve your own APIs (i.e. public parts of your plugin) when developing for / within Eclipse. One example of how the Eclipse guys extend interfaces is the IAnnotationHover type. By Javadoc contract, it allows implementations to also implement IAnnotationHoverExtension and IAnnotationHoverExtension2. Obviously, in the long run, such an evolved API is quite hard to maintain, test, and document, and ultimately, hard to use! (Consider ICompletionProposal and its 6 (!) extension types.)
4. Wait for Java 8
In Java 8, you will be able to make use of defender methods. This means that you can provide a sensible default implementation for your new interface methods, as can be seen in Java 1.8’s java.util.Iterator (an extract):

public interface Iterator<E> {

    // These methods are kept the same:
    boolean hasNext();
    E next();

    // This method is now made 'optional' (finally!)
    public default void remove() {
        throw new UnsupportedOperationException("remove");
    }

    // This method has been added compatibly in Java 1.8
    default void forEach(Consumer<? super E> consumer) {
        Objects.requireNonNull(consumer);
        while (hasNext())
            consumer.accept(next());
    }
}

Of course, you don’t always want to provide a default implementation. Often, your interface is a contract that has to be implemented entirely by client code.
5. Provide public default implementations
In many cases, it is wise to tell the client code that they may implement an interface at their own risk (due to API evolution), and that they should better extend a supplied abstract or default implementation instead. A good example for this is java.util.List, which can be a pain to implement correctly. For simple, not performance-critical custom lists, most users probably choose to extend java.util.AbstractList instead. The only methods left to implement are then get(int) and size(); the behaviour of all other methods can be derived from these two:

class EmptyList<E> extends AbstractList<E> {
    @Override
    public E get(int index) {
        throw new IndexOutOfBoundsException("No elements here");
    }

    @Override
    public int size() {
        return 0;
    }
}

A good convention to follow is to name your default implementation AbstractXXX if it is abstract, or DefaultXXX if it is concrete.
6. Make your API very hard to implement
Now, this isn’t really a good technique, but just a probable fact. If your API is very hard to implement (you have 100s of methods in an interface), then users are probably not going to do it. Note: probably. Never underestimate the crazy user. An example of this is jOOQ’s org.jooq.Field type, which represents a database field / column. In fact, this type is part of jOOQ’s internal domain-specific language, offering all sorts of operations and functions that can be performed upon a database column. Of course, having so many methods is an exception and – if you’re not designing a DSL – is probably a sign of a bad overall design. 7. 
Add compiler and IDE tricks
Last but not least, there are some nifty tricks that you can apply to your API to help people understand what they ought to do in order to correctly implement your interface-based API. Here’s a tough example that slaps the API designer’s intention straight into your face. Consider this extract of the org.hamcrest.Matcher API:

public interface Matcher<T> extends SelfDescribing {

    // This is what a Matcher really does.
    boolean matches(Object item);
    void describeMismatch(Object item, Description mismatchDescription);

    // Now check out this method here:

    /**
     * This method simply acts a friendly reminder not to implement
     * Matcher directly and instead extend BaseMatcher. It's easy to
     * ignore JavaDoc, but a bit harder to ignore compile errors.
     *
     * @see Matcher for reasons why.
     * @see BaseMatcher
     * @deprecated to make
     */
    @Deprecated
    void _dont_implement_Matcher___instead_extend_BaseMatcher_();
}

“Friendly reminder”, come on.
Other ways
I’m sure there are dozens of other ways to evolve an interface-based API. I’m curious to hear your thoughts!   Reference: Defensive API evolution with Java interfaces from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog. ...

High Performance Durable Messaging

Overview
While there are a good number of high performance messaging systems available for Java, most avoid quoting benchmarks which include durable messaging and serialization/deserialization of messages. This is done for a number of reasons: 1) you don’t always need or want durable messages; 2) you want the option of using your own serialization. One important reason they are avoided is that both of these can slow down messaging by as much as 10x, which doesn’t look so good. Most messaging benchmarks will highlight the performance of passing raw bytes around without durability, as this gives the highest numbers. Some also quote durable messaging numbers, but these are typically much slower. What if you need to serialize and deserialize real data efficiently, and you would like to record and replay messages, even if you have learnt to do without these?
Higher performance serialization and durability
I have written a library which attempts to solve more of the problem, as I see it, to give you a better overall solution. It is not the fastest messaging available, but it is durable and includes serialization and deserialization times. As I have noted already, these can cost 10x more than transporting the serialized data, so in real applications this solution can be much faster.
An example: sending prices
In this test, InProcessChronicleTest.testPricePublishing(), I send price events consisting of a long timestamp, a symbol, a bid price / volume and an ask price / volume. The time to write the data was 0.4 µs (0.0004 milliseconds) and the time to receive it over a TCP connection was 1.8 µs. Note: this connection is durable at both ends, so you can see what data has been queued to be sent and what has been received. If a connection is lost, it can continue from the place it was up to, or optionally replay any messages the client has already received, even if the server is not available, e.g. if the client is restarted. 
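For reference, a price event like the one in this test might be written as an Externalizable class along the following lines. This is only a sketch; the field names and the roundTrip helper are my assumptions, not the actual PriceUpdate class from the Chronicle test code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectInputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;

// Hypothetical sketch of a price event; Externalizable lets us hand-code the
// field I/O instead of relying on slower reflective default serialization.
class PriceEvent implements Externalizable {
    private long timestamp;
    private String symbol;
    private double bidPrice, askPrice;
    private int bidVolume, askVolume;

    public PriceEvent() {} // public no-arg constructor required by Externalizable

    public PriceEvent(long timestamp, String symbol,
                      double bidPrice, int bidVolume,
                      double askPrice, int askVolume) {
        this.timestamp = timestamp;
        this.symbol = symbol;
        this.bidPrice = bidPrice;
        this.bidVolume = bidVolume;
        this.askPrice = askPrice;
        this.askVolume = askVolume;
    }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeLong(timestamp);
        out.writeUTF(symbol);
        out.writeDouble(bidPrice);
        out.writeInt(bidVolume);
        out.writeDouble(askPrice);
        out.writeInt(askVolume);
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        timestamp = in.readLong();
        symbol = in.readUTF();
        bidPrice = in.readDouble();
        bidVolume = in.readInt();
        askPrice = in.readDouble();
        askVolume = in.readInt();
    }

    public String getSymbol() { return symbol; }
    public double getBidPrice() { return bidPrice; }

    // Serialize and deserialize through a byte buffer, as a serialization benchmark would.
    static PriceEvent roundTrip(PriceEvent event) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(event);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (PriceEvent) in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        PriceEvent copy = roundTrip(new PriceEvent(
            System.currentTimeMillis(), "EURUSD", 1.3256, 1_000_000, 1.3258, 2_000_000));
        System.out.println(copy.getSymbol() + " " + copy.getBidPrice()); // prints EURUSD 1.3256
    }
}
```

Hand-coding the reads and writes in a fixed field order is what makes Externalizable one of the faster built-in options: there is no per-field reflection at serialization time.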
To send and receive 5 million messages I used the -Xmx32m -verbosegc flags, which show that little heap is needed, and not a single GC occurs during this test. This means the library will have little impact on the rest of your application.
Comparing with Externalizable objects
To put this in context, I have compared this with the time it takes to serialize and deserialize objects containing the same data in InProcessChronicleTest.testSerializationPerformance(). The PriceUpdate object is Externalizable, and this benchmark for “Comparing various aspects of Serialization libraries on the JVM Platform” shows an Externalizable object can be one of the fastest serializations available. The time this took on the same machine was 2.7 µs to serialize and 7.5 µs to deserialize. Note: this doesn’t include messaging or persistence, just the serialization and deserialization.

Serialization    Transport             Time to write   Time to read
Java Chronicle   TCP and persistence   0.4 µs          1.8 µs
Externalizable   none                  2.7 µs          7.5 µs
Serializable     none                  3.8 µs          13.2 µs

Conclusion
When benchmarking messaging, you should include how long it takes to send and receive real messages, not just byte[], and include durability if that is desirable.   Reference: High Performance Durable Messaging from our JCG partner Peter Lawrey at the Vanilla Java blog. ...

Building Vaadin UI with Xtend

Today I have decided to say hello to Xtend. I wanted to learn a new programming language. The list of criteria for choosing one wasn’t so big: it must be a programming language running on the JVM, and it would be nice if I didn’t need to learn a completely new ecosystem for building applications. I checked a few options. The list of programming languages for the JVM is quite big, but I was deciding between the following ones: Groovy, Scala and Xtend. In the end I chose Xtend. Scala did not fit well into my criteria; on the other hand, Groovy fits my criteria, but it will be the next programming language I learn, after Xtend. It is hard to explain why I chose Xtend. I don’t even think that Xtend is a programming language; it’s more like an extension, but that’s my opinion.
What is Xtend
So here are a few words about the language. For more info, go to the Xtend web page. It’s a nice and simple language which modernizes Java. Instead of being compiled to byte code, Xtend is translated into pretty-printed Java classes, which makes it suitable for working with platforms which don’t work with byte code, like GWT. The code written in Xtend produces Java classes, as I already mentioned, so there is no limitation on the usage of any existing Java framework. The language is created with Xtext, so it comes with an already prepared Eclipse, and there are Maven plugins for the language, so using it outside of Eclipse will not be a problem.
Learning
Learning Xtend is not hard. There are a few syntax changes and a few new semantic concepts which are currently missing in Java. There is nothing revolutionary compared to other programming languages; Xtend just extends Java with new features which will allow you to create nicer, shorter classes. The features which got most of my attention were closures, lambda expressions and extensions. These allow you to create really nice builder classes. 
You can easily create a UI builder API, which will allow you to create simpler views (not in the context of functionality, but in the context of understanding the code).
Engaging Xtend
I’ve already mentioned that Xtend is built with Xtext, which means that Eclipse is already able to correctly handle the Xtend language. After creating a new Xtend class, Eclipse will complain about missing libs and offer to add them to the class path, if you do not use Maven for getting dependencies. The goal of this blog post is to show how Xtend can improve the way of building a UI. I’ve found nice examples for JavaFX, GWT … but I didn’t find anything for Vaadin, so I have decided to build a simple class for building a Vaadin UI. Or, to be more precise, just a segment of it. The following example is not fully implemented and it can build just some parts of a UI, but it can be easily extended. Vaadin’s UI is an example of an imperative UI written in Java. The process of building the UI is similar to building an imperative UI in GWT or SWT. 
Here is a simple example of how it looks:

package org.pis.web.application;

import org.eclipse.xtext.xbase.lib.InputOutput;

import com.vaadin.Application;
import com.vaadin.ui.Button;
import com.vaadin.ui.Button.ClickEvent;
import com.vaadin.ui.Button.ClickListener;
import com.vaadin.ui.HorizontalLayout;
import com.vaadin.ui.Panel;
import com.vaadin.ui.Window;

@SuppressWarnings("serial")
public class MainWindow extends Application {

    public void init() {
        Window main = new Window();
        HorizontalLayout hl = new HorizontalLayout();
        Panel panel = new Panel();

        final Button button = new Button("First button");
        button.addListener(new ClickListener() {
            @Override
            public void buttonClick(ClickEvent event) {
                sayHello("Hello First Button");
                button.setCaption("First button clicked");
            }
        });
        panel.addComponent(button);

        Button button2 = new Button("Second button");
        button2.addListener(new ClickListener() {
            @Override
            public void buttonClick(ClickEvent event) {
                sayHello("Hello Second Button");
            }
        });
        panel.addComponent(button2); // missing in the original listing
        hl.addComponent(panel);      // missing in the original listing

        main.addComponent(hl);
        setMainWindow(main);
    }

    public void sayHello(final String string) {
        InputOutput.<String> println(string);
    }
}

The example above is a typical implementation of a Vaadin UI, and my goal is to make it easier and more readable. To do that I will start with the builder class.
Making a UI builder API
To get a nicer way of creating the UI, I will create a component builder first. This is not a standard implementation of the builder pattern like we would do in pure Java. Actually, we are building an extension class. This class contains extension methods which will extend existing classes with new methods. Here is the implementation of the class. 
package org.pis.web.application

import com.vaadin.ui.Window
import com.vaadin.ui.Button
import com.vaadin.ui.Panel
import com.vaadin.ui.HorizontalLayout
import com.vaadin.ui.ComponentContainer

class ComponentBuilder {

    def window((Window) => void initializer) {
        new Window().init(initializer)
    }

    def panel(ComponentContainer it, (Panel) => void initializer) {
        val panel = new Panel().init(initializer)
        it.addComponent(panel)
        return panel
    }

    def horizontalLayout(ComponentContainer it, (HorizontalLayout) => void initializer) {
        val hl = new HorizontalLayout().init(initializer)
        it.addComponent(hl)
        return hl
    }

    def button(ComponentContainer it, (Button) => void initializer) {
        println('Button in panel creation')
        val that = new Button().init(initializer)
        it.addComponent(that)
        return that
    }

    def private <T> T init(T obj, (T) => void init) {
        init?.apply(obj)
        return obj
    }
}

The builder class alone cannot do much. It has basic functionality like building a window and adding different kinds of panels and buttons, and if you are familiar with Vaadin you know that there are a lot more components built into the framework. Almost all methods in the builder have two parameters. The first parameter represents the container class which will handle the new component, and the second parameter is a closure which contains the code for the component’s initialization. 
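For readers more comfortable in Java, the idea behind these extension methods can be sketched with Java 8 lambdas. The Component and Container classes below are bare stand-ins, not the real com.vaadin.ui classes; the point is only to illustrate the create/initialize/attach pattern.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal stand-ins for Vaadin components (assumptions, not the real classes).
class Component { String caption; }
class Container extends Component { final List<Component> children = new ArrayList<>(); }

// Each builder method creates a component, hands it to an initializer closure,
// attaches it to its parent container, and returns it -- the same shape as the
// Xtend ComponentBuilder.
class JavaComponentBuilder {
    static Container container(Consumer<Container> initializer) {
        Container c = new Container();
        initializer.accept(c);
        return c;
    }

    static Component button(Container parent, Consumer<Component> initializer) {
        Component button = new Component();
        initializer.accept(button);
        parent.children.add(button);
        return button;
    }

    public static void main(String[] args) {
        Container root = container(c -> {
            button(c, b -> b.caption = "First button");
            button(c, b -> b.caption = "Second button");
        });
        System.out.println(root.children.size()); // prints 2
    }
}
```

What Xtend adds on top of this is the implicit `it` receiver and the trailing-bracket lambda syntax, which is what makes the nested layout read almost like a declarative markup language.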
More about Xtend lambda expressions can be found in Xtend’s documentation.

package org.pis.web.application

import com.vaadin.Application
import com.vaadin.ui.Button

class MainWindowXtend extends Application {

    extension ComponentBuilder = new ComponentBuilder

    override init() {
        mainWindow = window [
            horizontalLayout [
                panel [
                    button [
                        caption = "First button"
                        it.addListener() [
                            sayHello('Hello First Button')
                            (component as Button).caption = 'First button clicked'
                        ]
                    ]
                    button [
                        caption = "Second button"
                        it.addListener() [
                            sayHello('Hello')
                        ]
                    ]
                ]
            ]
        ];
    }

    def void sayHello(String string) {
        println(string)
    }
}

Conclusion
So this is a really nice language, and the learning process takes just a few hours. The documentation is well written, and the main language concepts are shown in about 50 pages. After a few hours you are ready to improve your application. This is something like how Java should look. In short, playing with Xtend was fun and it is worth investing time in.   Reference: Building Vaadin UI with Xtend from our JCG partner Igor Madjeric at the Igor Madjeric blog. ...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.