Implicit Conversions in Scala

Following on from the previous post on operator overloading, I'm going to be looking at implicit conversions, and how we can combine them with operator overloading to do some really neat things, including one way of creating a multi-parameter conversion.

So what's an "Implicit Conversion" when it's at home?

Let's start with some basic Scala syntax. If you've spent any time with Scala you've probably noticed it allows you to do things like:

(1 to 4).foreach(println) // print out 1 2 3 4

Ever wondered how it does this? Let's make things more explicit. You could rewrite the above code as:

val a : Int = 1
val b : Int = 4
val myRange : Range = a to b
myRange.foreach(println)

Scala is creating a Range object directly from two Ints, and a method called to.

So what's going on here? Is this just a sprinkling of syntactic sugar to make writing loops easier? Is to just a keyword, like def or val?

The answer to all of this is no; there's nothing special going on here. to is simply a method defined in the RichInt class, which takes a parameter and returns a Range object (specifically a subclass of Range called Inclusive). You could rewrite it as the following if you really wanted to:

val myRange : Range = a.to(b)

Hang on though, RichInt may have a "to" method but Int certainly doesn't. In your example you're even explicitly typing your numbers as Ints!

Which brings me nicely on to the subject of this post, implicit conversions. This is how Scala does this. Implicit conversions are a set of methods that Scala tries to apply when it encounters an object of the wrong type being used. In the case of the to example there's a method defined and included by default that will convert Ints into RichInts. So when Scala sees 1 to 4 it first runs the implicit conversion on the 1, converting it from an Int primitive into a RichInt. It can then call the to method on the new RichInt object, passing in the second Int (4) as the parameter.

Hmm, think I understand. How's about another example?

Certainly. Let's try to improve the Complex number class we created in the previous post. Using operator overloading we were able to support adding two complex numbers together using the + operator, e.g.:

class Complex(val real : Double, val imag : Double) {
    def +(that: Complex) = new Complex(this.real + that.real, this.imag + that.imag)
    def -(that: Complex) = new Complex(this.real - that.real, this.imag - that.imag)
    override def toString = real + " + " + imag + "i"
}

object Complex {
    def main(args : Array[String]) : Unit = {
        var a = new Complex(4.0,5.0)
        var b = new Complex(2.0,3.0)
        println(a)     // 4.0 + 5.0i
        println(a + b) // 6.0 + 8.0i
        println(a - b) // 2.0 + 2.0i
    }
}

But what if we want to support adding a normal number to a complex number? How would we do that? We could certainly overload our "+" method to take a Double argument, i.e. something like…

def +(n: Double) = new Complex(this.real + n, this.imag)

Which would allow us to do…

val sum = myComplexNumber + 8.5

…but it'll break if we try…

val sum = 8.5 + myComplexNumber

To get around this we could use an implicit conversion. Here's how we create one:

object ComplexImplicits {
    implicit def Double2Complex(value : Double) = new Complex(value,0.0)
}

Simple! Although you do need to be careful to import the ComplexImplicits methods before they can be used. You need to make sure you add the following to the top of your file (even if your Implicits object is in the same file)…

import ComplexImplicits._

And that's the problem solved. You can now write val sum = 8.5 + myComplexNumber and it'll do what you expect!

Nice. Is there anything else I can do with them?

One other thing I've found them good for is creating easy ways of instantiating objects. Wouldn't it be nice if there were a simpler way of creating one of our complex numbers than with new Complex(3.0,5.0)? Sure, you could get rid of the new by making it a case class, or by implementing an apply method. But we can do better. How's about just (3.0,5.0)?

Awesome, but I'd need some sort of multi-parameter implicit conversion, and I don't really see how that's possible!?

The thing is, ordinarily (3.0,5.0) would create a Tuple. So we can just use that tuple as the parameter for our implicit conversion and convert it into a Complex. Here's how we might go about doing this…

implicit def Tuple2Complex(value : Tuple2[Double,Double]) = new Complex(value._1,value._2)

And there we have it, a simple way to instantiate our Complex objects. For reference, here's what the entire Complex code looks like now:

import ComplexImplicits._

object ComplexImplicits {
    implicit def Double2Complex(value : Double) = new Complex(value,0.0)
    implicit def Tuple2Complex(value : Tuple2[Double,Double]) = new Complex(value._1,value._2)
}

class Complex(val real : Double, val imag : Double) {
    def +(that: Complex) : Complex = (this.real + that.real, this.imag + that.imag)
    def -(that: Complex) : Complex = (this.real - that.real, this.imag - that.imag)
    def unary_~ = Math.sqrt(real * real + imag * imag)
    override def toString = real + " + " + imag + "i"
}

object Complex {
    val i = new Complex(0,1)
    def main(args : Array[String]) : Unit = {
        var a : Complex = (4.0,5.0)
        var b : Complex = (2.0,3.0)
        println(a)     // 4.0 + 5.0i
        println(a + b) // 6.0 + 8.0i
        println(a - b) // 2.0 + 2.0i
        println(~b)    // 3.60555

        var c = 4.0 + b
        println(c)     // 6.0 + 3.0i

        var d = (1.0,1.0) + c
        println(d)     // 7.0 + 4.0i
    }
}

Reference: Implicit Conversions in Scala from our JCG partner Tom Jefferys at Tom's Programming Blog....

High performance libraries in Java

There is an increasing number of libraries which are described as high performance and have benchmarks to back that claim up. Here is a selection that I am aware of.

Disruptor library - http://code.google.com/p/disruptor/

LMAX aims to be the fastest trading platform in the world. Clearly, in order to achieve this we needed to do something special to achieve very low latency and high throughput with our Java platform. Performance testing showed that using queues to pass data between stages of the system was introducing latency, so we focused on optimising this area. The Disruptor is the result of our research and testing. We found that cache misses at the CPU level, and locks requiring kernel arbitration, are both extremely costly, so we created a framework which has "mechanical sympathy" for the hardware it's running on, and that's lock-free. The 6 million TPS benchmark was measured on a 3 GHz dual-socket quad-core Nehalem-based Dell server with 32 GB RAM.

http://martinfowler.com/articles/lmax.html

Java Chronicle - https://github.com/peter-lawrey/Java-Chronicle

This library is an ultra-low-latency, high-throughput, persisted, messaging and event-driven in-memory database. The typical latency is as low as 16 nanoseconds, and it supports throughputs of 5-20 million messages/record updates per second. It uses almost no heap, has trivial GC impact, can be much larger than your physical memory size (only limited by the size of your disk), and can be shared between processes with better than 1/10th the latency of using Sockets over loopback. It can change the way you design your system, because it allows you to have independent processes which can be running or not at the same time (as no messages are lost). This is useful for restarting services and testing your services from canned data, e.g. sub-microsecond durable messaging. You can attach any number of readers, including tools to see the exact state of the data externally; e.g. you can use od -t cx1 {file} to see the current state.

Colt Matrix library - http://acs.lbl.gov/software/colt/

Scientific and technical computing, as, for example, carried out at CERN, is characterized by demanding problem sizes and a need for high performance at reasonably small memory footprint. There is a perception by many that the Java language is unsuited for such work. However, recent trends in its evolution suggest that it may soon be a major player in performance sensitive scientific and technical computing. For example, IBM Watson's Ninja project showed that Java can indeed perform BLAS matrix computations up to 90% as fast as optimized Fortran. The Java Grande Forum Numerics Working Group provides a focal point for information on numerical computing in Java. With the performance gap steadily closing, Java has recently found increased adoption in the field. The reasons include ease of use, cross-platform nature, built-in support for multi-threading, network friendly APIs and a healthy pool of available developers. Still, these efforts are to a significant degree hindered by the lack of foundation toolkits broadly available and conveniently accessible in C and Fortran. The latest stable Colt release breaks the 1.9 Gflop/s barrier on JDK ibm-1.4.1, RedHat 9.0, 2x IntelXeon@2.8 GHz.

Javolution - http://javolution.org/

Javolution's real-time goals are simple: to make your application faster and more time predictable! That is accomplished through:

- High performance and time-deterministic (real-time) util / lang / text / io / xml base classes.
- Context programming in order to achieve true separation of concerns (logging, performance, etc).
- A testing framework addressing not only unit tests but also performance and regression tests as well.
- Straightforward and low-level parallel computing capabilities with ConcurrentContext.
- Struct and Union base classes for direct interfacing with native applications (e.g. C/C++).
- World's fastest and first hard real-time XML marshalling/unmarshalling facility.
- Simple yet flexible configuration management of your application.

Trove collections for primitives - http://trove.starlight-systems.com/

The Trove library provides high speed regular and primitive collections for Java. The GNU Trove library has two objectives:

- Provide "free" (as in "free speech" and "free beer"), fast, lightweight implementations of the java.util Collections API. These implementations are designed to be pluggable replacements for their JDK equivalents.
- Provide primitive collections with similar APIs to the above. This gap in the JDK is often addressed by using the "wrapper" classes (java.lang.Integer, java.lang.Float, etc.) with Object-based collections. For most applications, however, collections which store primitives directly will require less space and yield significant performance gains.

MG4J: Managing Gigabytes for Java™ - http://mg4j.dsi.unimi.it/

MG4J (Managing Gigabytes for Java) is a free full-text search engine for large document collections written in Java. MG4J is a highly customisable, high-performance, full-fledged search engine providing state-of-the-art features (such as BM25/BM25F scoring) and new research algorithms.

Other links:

Overview of 8 performance libraries - http://www.dzone.com/links/r/8_best_open_source_high_performance_java_collecti.html
Sometimes the collection classes in the JDK may not be sufficient. We may require some high performance hashtable, big arrays etc. Check out this list of open source high performance collection libraries.

Serialization benchmark - http://code.google.com/p/thrift-protobuf-compare/wiki/Benchmarking
This is a comparison of some serialization libraries. It's still hard to beat hand-coded serialization: http://vanillajava.blogspot.com/2011/10/serialization-using-bytebuffer-and.html

Reference: High performance libraries in Java from our JCG partner Peter Lawrey at the Vanilla Java blog....
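As a quick addendum to the Trove entry above, here is a minimal sketch of counting with a primitive map, with no boxing to java.lang.Integer involved. A rough illustration only: the package layout shown is from Trove 3.x, while older 2.x releases used the flat gnu.trove package.

import gnu.trove.map.hash.TIntIntHashMap;

public class PrimitiveCounts {
    public static void main(String[] args) {
        // int keys and int values are stored directly,
        // unlike a HashMap<Integer, Integer>
        TIntIntHashMap counts = new TIntIntHashMap();
        int[] data = {1, 2, 2, 3, 3, 3};
        for (int value : data) {
            // adjust by 1 if the key is present, otherwise put 1
            counts.adjustOrPutValue(value, 1, 1);
        }
        System.out.println(counts); // e.g. {1=1, 2=2, 3=3}
    }
}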

Big Data analytics with Hive and iReport

Each episode of J.J. Abrams' TV series Person of Interest starts with the following narration from Mr. Finch, one of the leading characters: "You are being watched. The government has a secret system–a machine that spies on you every hour of every day. I know because…I built it."

Of course, we technical people know better. It would take a huge team of electrical and software engineers many years to build such a high-performing machine, and the budget would be unimaginable… or would it? Wait a second, we have Hadoop! Now every one of us can be Mr. Finch on a modest budget, thanks to Hadoop.

In the JCG article "Hadoop Modes Explained – Standalone, Pseudo Distributed, Distributed" JCG partner Rahul Patodi explained how to set up Hadoop. The Hadoop project has produced a lot of tools for analyzing semi-structured data, but Hive is perhaps the most intuitive among them, as it allows anyone with an SQL background to submit MapReduce jobs described as SQL queries. Hive can be executed from a command line interface, as well as run in a server mode with a Thrift client acting as a JDBC/ODBC interface, giving access to data analysis and reporting applications.

In this article we will set up a Hive server, create a table, load it with data from a text file and then create a Jasper Report using iReport. The Jasper Report executes an SQL query on the Hive server that is then translated to a MapReduce job executed by Hadoop.

Note: I used Hadoop version 0.20.205, Hive version 0.7.1 and iReport version 4.5, running OpenSuSE 12.1 Linux with MySQL 5.5 installed.

Assuming you have already installed Hadoop, download and install Hive following the Hive Getting Started wiki instructions. By default Hive is installed in CLI mode, running on standalone Hadoop mode.

Making a multiuser Hive metastore

The default Hive install uses an embedded Derby database as its metastore. The metastore is where Hive maintains descriptions of the data we want to access via SQL. For the metastore to be accessible by many users simultaneously, it must be moved into a standalone database. Here is how to install a MySQL metastore:

- Copy the MySQL JDBC driver jar file to the ~/hive-0.7.1-bin/lib directory.

- Change the following properties in the file hive-default.xml, found in the ~/hive-0.7.1-bin/conf directory:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://hyperion/metastore?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>foo</value>
  <description>Username to connect to the database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>bar</value>
  <description>Password to connect to the database</description>
</property>

- Use MySQL Workbench or the MySQL command line utility to create a schema using the latin1 character set. If Hive does not find a schema, it will create it on its own using the default character set for MySQL. In my case that was UTF-8, and that generated JDBC errors. If you want to use the command line utility, just type:

mysql> CREATE DATABASE IF NOT EXISTS `metastore` DEFAULT CHARACTER SET latin1 COLLATE latin1_bin;

- Type hive at your command prompt to enter the Hive CLI and type:

hive> SHOW TABLES;
OK
testlines
Time taken: 3.654 seconds
hive>

That will populate your newly created metastore schema. If you see any errors, check your hive-default.xml configuration and make sure your database schema is named 'metastore' with latin1 as the default character set.

Now let's populate Hive with some data

Let's just create two text files named file01 and file02, each containing:

file01:
Hello World Bye World
Hello Everybody Bye Everybody

file02:
Hello Hadoop Goodbye Hadoop
Hello Everybody Goodbye Everybody

Copy these files from your local filesystem to HDFS:

$ hadoop fs -mkdir HiveExample
$ hadoop fs -copyFromLocal ~/file* /user/ssake/HiveExample

Go to the Hive CLI and create a table named testlines that will contain each line's words in an array of strings:

hive> create table testlines (line array<string>) row format delimited collection items terminated by ' ';

Load the text files into Hive:

hive> load data inpath "/user/ssake/HiveExample/file01" INTO table testlines;
hive> load data inpath "/user/ssake/HiveExample/file02" INTO table testlines;

Check that testlines now contains each line's words:

hive> select * from testlines;
OK
["Hello","World","Bye","World"]
["Hello","Everybody","Bye","Everybody"]
["Hello","Hadoop","Goodbye","Hadoop"]
["Hello","Everybody","Goodbye","Everybody"]
Time taken: 0.21 seconds

Now that we have Hive populated with data, we can run it as a server on port 10000, which is the typical port for a Hive server:

$ export HIVE_PORT=10000
$ hive --service hiveserver

With this setup it is possible to have several Thrift clients accessing our Hive server. However, according to the Apache Hive blog, the multithreaded Hive features are not thoroughly tested, and thus it is safer to use a separate port and Hive instance per Thrift client.

Create a "word count" report

iReport 4.5 supports Hive data sources, so let's use it to create a report that runs with our Hive server as its data source:

1. Create a data source connecting to the Hive server.

2. Use the report wizard to generate your report.

3. Type the following in the HiveQL Query input box:

select word, count(word) from testlines lateral view explode(line) words as word group by word

Let's briefly explain what the above query does: our source table, the "testlines" table, has a single column named "line", which contains data in the form of arrays of strings. Each array of strings represents the words in a sentence as found in the imported files "file01" and "file02". In order to properly count the occurrences of each distinct word in all of the input files, we have to "explode" the arrays of strings from our source table into a new table that contains every individual word. To do so we use "lateral view" in conjunction with the "explode()" HiveQL command, as shown above. Issuing the HiveQL query above, we create a new virtual table named "words" that has a single column named "word", containing all the words found in every string array from our "testlines" table.

4. Click the … button to select all fields and click next.

5. When you are at the designer view, click the Preview tab to execute your HiveQL report.

And here is our report:

Now you are all set to build applications that access your Hadoop data using the familiar JDBC interface!

Reference: Big Data analytics with Hive and iReport from our W4G partner Spyros Sakellariou....
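As an addendum, since the Hive server speaks JDBC over Thrift, the same word-count query can be run from plain Java as well. A minimal sketch, assuming the Hive 0.7 JDBC driver and its Hadoop/Thrift dependencies are on the classpath, and with localhost standing in for wherever your Hive server runs:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveWordCount {
    public static void main(String[] args) throws Exception {
        // The HiveServer1 JDBC driver shipped with Hive 0.7.x
        Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
        Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        // The same query the report runs; Hive turns it into a MapReduce job
        ResultSet rs = stmt.executeQuery("select word, count(word) from testlines "
                + "lateral view explode(line) words as word group by word");
        while (rs.next()) {
            System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
        }
        con.close();
    }
}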

Embedded Jetty, Vaadin and Weld

When I develop web applications I like to be able to quickly start them from Eclipse without having to rely on all kinds of heavyweight Tomcat or GlassFish plugins. So what I often do is just create a simple Java-based Jetty launcher I can run directly from Eclipse. This launcher starts up within a couple of seconds, so this makes developing a lot more pleasant. However, sometimes getting everything set up correctly is a bit hard. So in this article I'll give you a quick overview of how you can set up Jetty together with Weld for CDI and Vaadin as the web framework. To get everything set up correctly we'll need to take the following steps:

- Set up the Maven pom for the required dependencies
- Create a Java-based Jetty launcher
- Set up the web.xml
- Add the Weld placeholders

Setup the maven pom for the required dependencies

I use the following pom.xml file. You might not need everything — if you, for instance, don't use custom components — but it should serve as a good reference of what should be in there.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>group.id</groupId>
  <artifactId>artifact.id</artifactId>
  <packaging>war</packaging>
  <version>1.0</version>
  <name>Vaadin Web Application</name>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <vaadin.version>6.7.1</vaadin.version>
    <gwt.version>2.3.0</gwt.version>
    <gwt.plugin.version>2.2.0</gwt.plugin.version>
  </properties>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.5</source>
          <target>1.5</target>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>gwt-maven-plugin</artifactId>
        <version>${gwt.plugin.version}</version>
        <configuration>
          <webappDirectory>${project.build.directory}/${project.build.finalName}/VAADIN/widgetsets</webappDirectory>
          <extraJvmArgs>-Xmx512M -Xss1024k</extraJvmArgs>
          <runTarget>cvgenerator-web</runTarget>
          <hostedWebapp>${project.build.directory}/${project.build.finalName}</hostedWebapp>
          <noServer>true</noServer>
          <port>8080</port>
          <compileReport>false</compileReport>
        </configuration>
        <executions>
          <execution>
            <goals>
              <goal>resources</goal>
              <goal>compile</goal>
            </goals>
          </execution>
        </executions>
        <dependencies>
          <dependency>
            <groupId>com.google.gwt</groupId>
            <artifactId>gwt-dev</artifactId>
            <version>${gwt.version}</version>
          </dependency>
          <dependency>
            <groupId>com.google.gwt</groupId>
            <artifactId>gwt-user</artifactId>
            <version>${gwt.version}</version>
          </dependency>
        </dependencies>
      </plugin>
      <plugin>
        <groupId>com.vaadin</groupId>
        <artifactId>vaadin-maven-plugin</artifactId>
        <version>1.0.2</version>
        <executions>
          <execution>
            <configuration>
            </configuration>
            <goals>
              <goal>update-widgetset</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
  <!-- extra repositories for Vaadin extensions -->
  <repositories>
    <repository>
      <id>vaadin-snapshots</id>
      <url>http://oss.sonatype.org/content/repositories/vaadin-snapshots/</url>
      <releases>
        <enabled>false</enabled>
      </releases>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
    </repository>
    <repository>
      <id>vaadin-addons</id>
      <url>http://maven.vaadin.com/vaadin-addons</url>
    </repository>
  </repositories>
  <!-- repositories for the plugins -->
  <pluginRepositories>
    <pluginRepository>
      <id>codehaus-snapshots</id>
      <url>http://nexus.codehaus.org/snapshots</url>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
      <releases>
        <enabled>false</enabled>
      </releases>
    </pluginRepository>
    <pluginRepository>
      <id>vaadin-snapshots</id>
      <url>http://oss.sonatype.org/content/repositories/vaadin-snapshots/</url>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
      <releases>
        <enabled>false</enabled>
      </releases>
    </pluginRepository>
  </pluginRepositories>
  <!-- minimal set of dependencies -->
  <dependencies>
    <dependency>
      <groupId>com.vaadin</groupId>
      <artifactId>vaadin</artifactId>
      <version>${vaadin.version}</version>
    </dependency>
    <dependency>
      <groupId>org.vaadin.addons</groupId>
      <artifactId>stepper</artifactId>
      <version>1.1.0</version>
    </dependency>
    <!-- the jetty version we'll use -->
    <dependency>
      <groupId>org.eclipse.jetty.aggregate</groupId>
      <artifactId>jetty-all-server</artifactId>
      <version>8.0.4.v20111024</version>
      <type>jar</type>
      <scope>compile</scope>
      <exclusions>
        <exclusion>
          <artifactId>mail</artifactId>
          <groupId>javax.mail</groupId>
        </exclusion>
      </exclusions>
    </dependency>
    <!-- vaadin custom field addon -->
    <dependency>
      <groupId>org.vaadin.addons</groupId>
      <artifactId>customfield</artifactId>
      <version>0.9.3</version>
    </dependency>
    <!-- with cdi utils plugin you can use Weld -->
    <dependency>
      <groupId>org.vaadin.addons</groupId>
      <artifactId>cdi-utils</artifactId>
      <version>0.8.6</version>
    </dependency>
    <!-- we'll use this version of Weld -->
    <dependency>
      <groupId>org.jboss.weld.servlet</groupId>
      <artifactId>weld-servlet</artifactId>
      <version>1.1.5.Final</version>
      <type>jar</type>
      <scope>compile</scope>
    </dependency>
    <!-- normally the following are provided, but not if you run within jetty -->
    <dependency>
      <groupId>javax.servlet</groupId>
      <artifactId>servlet-api</artifactId>
      <version>2.5</version>
      <type>jar</type>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>javax.servlet.jsp</groupId>
      <artifactId>jsp-api</artifactId>
      <version>2.2</version>
      <type>jar</type>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <artifactId>el-api</artifactId>
      <groupId>javax.el</groupId>
      <version>2.2</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
</project>

Create the java launcher

With this pom we have all the dependencies we need to run Jetty, Vaadin and Weld together. Let's look at the Jetty launcher.

import javax.naming.InitialContext;
import javax.naming.Reference;

import org.eclipse.jetty.plus.jndi.Resource;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.webapp.WebAppContext;

/**
 * Simple jetty launcher, which launches the webapplication from the local
 * resources and reuses the projects classpath.
 *
 * @author jos
 */
public class Launcher {

    /** run under root context */
    private static String contextPath = "/";
    /** location where resources should be provided from for VAADIN resources */
    private static String resourceBase = "src/main/webapp";
    /** port to listen on */
    private static int httpPort = 8081;

    private static String[] __dftConfigurationClasses = {
        "org.eclipse.jetty.webapp.WebInfConfiguration",
        "org.eclipse.jetty.webapp.WebXmlConfiguration",
        "org.eclipse.jetty.webapp.MetaInfConfiguration",
        "org.eclipse.jetty.webapp.FragmentConfiguration",
        "org.eclipse.jetty.plus.webapp.EnvConfiguration",
        "org.eclipse.jetty.webapp.JettyWebXmlConfiguration" };

    /**
     * Start the server, and keep waiting.
     */
    public static void main(String[] args) throws Exception {
        System.setProperty("java.naming.factory.url.pkgs", "org.eclipse.jetty.jndi");
        System.setProperty("java.naming.factory.initial", "org.eclipse.jetty.jndi.InitialContextFactory");

        InitialContext ctx = new InitialContext();
        ctx.createSubcontext("java:comp");

        Server server = new Server(httpPort);

        WebAppContext webapp = new WebAppContext();
        webapp.setConfigurationClasses(__dftConfigurationClasses);
        webapp.setDescriptor("src/main/webapp/WEB-INF/web.xml");
        webapp.setContextPath(contextPath);
        webapp.setResourceBase(resourceBase);
        webapp.setClassLoader(Thread.currentThread().getContextClassLoader());

        server.setHandler(webapp);
        server.start();

        new Resource("BeanManager", new Reference("javax.enterprise.inject.spi.BeanManager",
                "org.jboss.weld.resources.ManagerObjectFactory", null));

        server.join();
    }
}

This code will start a Jetty server that uses the web.xml from the project to launch the Vaadin web app. Take note that we explicitly use the setConfigurationClasses operation. This is needed to make sure we have a JNDI context we can use to register the Weld BeanManager in.

Setup the web.xml

Next we'll look at the web.xml. The one I use in this example is shown next:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         id="WebApp_ID" version="2.5">
  <display-name>Vaadin Web Application</display-name>
  <context-param>
    <description>Vaadin production mode</description>
    <param-name>productionMode</param-name>
    <param-value>false</param-value>
  </context-param>
  <servlet>
    <servlet-name>example</servlet-name>
    <servlet-class>ServletSpecifiedByTheCDIVaadinPlugin</servlet-class>
    <init-param>
      <description>Vaadin application class to start</description>
      <param-name>application</param-name>
      <param-value>VaadinApplicationClassName</param-value>
    </init-param>
    <init-param>
      <param-name>widgetset</param-name>
      <param-value>customwidgetsetnameifyouuseit</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>example</servlet-name>
    <url-pattern>/example/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
  </welcome-file-list>
  <listener>
    <listener-class>org.jboss.weld.environment.servlet.Listener</listener-class>
  </listener>
  <resource-env-ref>
    <description>Object factory for the CDI Bean Manager</description>
    <resource-env-ref-name>BeanManager</resource-env-ref-name>
    <resource-env-ref-type>javax.enterprise.inject.spi.BeanManager</resource-env-ref-type>
  </resource-env-ref>
</web-app>

At the bottom of the web.xml you can see the resource-env entry we define for Weld and the required listener to make sure Weld is started and our beans are injected. You can also see that we specified a different servlet instead of the normal Vaadin servlet. For the details on this, see the CDI plugin page: https://vaadin.com/directory#addon/cdi-utils

The main steps are (taken from that page):

- Add an empty beans.xml file (CDI marker file) to your project under the WEB-INF dir
- Add cdiutils*.jar to your project under WEB-INF/lib
- Create your Application class by extending AbstractCdiApplication
- Extend AbstractCdiApplicationServlet and annotate it with @WebServlet(urlPatterns = "/*")
- Deploy to a Java EE / Web Profile-compatible container (CDI apps can also be run on servlet containers etc., but some further configuration is required)

Add weld placeholder

At this point we have all the dependencies, we created a launcher that can be used directly from Eclipse, and we made sure Weld is loaded on startup. We've also configured the CDI plugin for Vaadin. At this point we're almost done. We only need to add empty beans.xml files in the locations we want to be included in Weld's bean discovery.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://java.sun.com/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/beans_1_0.xsd">
</beans>

I had to add these to the src/main/java/META-INF directory and to the WEB-INF directory for Weld to pick up all the annotated beans. And that's it. You can now start the launcher and you should see all kinds of Weld and Vaadin logging appear.

Reference: Embedded Jetty, Vaadin and Weld from our JCG partner Jos Dirksen at the Smart Java blog....
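As an addendum, here is a minimal sketch of what the CDI side can then look like — a Weld-managed bean injected into the application class. The GreetingService bean is made up for illustration, and it assumes AbstractCdiApplication (from the cdi-utils addon) extends the regular Vaadin Application class and leaves init() to you:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import com.vaadin.ui.Label;
import com.vaadin.ui.Window;

@ApplicationScoped
class GreetingService {
    String greet(String name) {
        return "Hello " + name;
    }
}

public class MyApplication extends AbstractCdiApplication {

    // Injected by Weld, because beans.xml marks this archive for bean discovery
    @Inject
    private GreetingService greetingService;

    @Override
    public void init() {
        Window main = new Window("CDI demo");
        main.addComponent(new Label(greetingService.greet("Weld")));
        setMainWindow(main);
    }
}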

Problems with ORMs Part 2 – Queries

In my previous post on Object-Relational Mapping tools (ORMs), I discussed various issues that I've faced dealing with the common ORMs out there today, including Hibernate. This included issues related to generating a schema from POJOs, and real-world performance and maintenance problems that crop up. Essentially, the conclusion is that ORMs get you most of the way there, but a balanced approach is needed, and sometimes you just want to avoid using your ORM's toolset, so you should be able to bypass it when desired.

One huge flaw in modern ORMs that I see, though, is that they really want to help you solve all your SQL problems. What do I mean by this, and why would I say this is a fault? Well, I believe that Hibernate et al. just try too hard and end up providing features that actually hurt developers more than they help. The main thing I have in mind when I say this is query support. Actual support for complex queries that are easily maintained is seriously lacking in ORMs — and not because they've omitted things; it's just because the tools they provide don't use SQL, which was designed from the ground up for exactly this purpose.

Experiences in Hibernate

It's been my experience that when you use features like HQL, frequently you're thinking about saving yourself a few minutes up front, and there's nothing wrong with this in itself, but it can cause serious problems. It's my experience that frequently you end up wanting or needing to replace HQL with something more flexible, either because of a bug fix or an enhancement, and this is where the trouble starts. I consider myself an experienced developer and I pride myself on (usually) not breaking things — to me, that is one of the hallmarks of a good developer. When you're faced with ripping out a piece of code and replacing it wholesale, such as replacing HQL with SQL, you're basically replacing code that has had a history that includes bug fixes, enhancements and performance tweaks. You are now responsible for duplicating every change to this code that's ever been made, and it's quite possible you don't understand the full scope of the changes or the niggling problems that were corrected in the past. Note that this also applies to all the other query methods that Hibernate provides, including the Query API, and by extension, query support within JPA.

The issue is that you don't want a solution that is so brittle or limited that it has to be fully replaced later. This means that if you need to revert to SQL to get things done, there's a good chance you should just do that in the first place. This same concept applies to all areas of software development. So what do we aim for if the basic querying support in ORMs like Hibernate isn't good enough?

Criteria for a Solid ORM

Basically, my personal requirements for an ORM come down to the following:

- Schema first – generate your model from a database, not the other way around. If you have a platform-agnostic way of specifying DDL for the database, great, but it's not a deal-breaker. Generating a database from some other domain-specific language or format helps nobody and results in a poorly designed schema.
- SQL only – if you want to help me avoid writing code, then generate/expose key-based, etc. lookups for me. Don't ask me to use your query API or some new query language. SQL was invented for queries, so let me use the right tool. Give me easy ways to populate my domain objects from queries. This gives me 99% of what I'll ever need, while giving me flexibility.
- Allow me to populate arbitrary Java beans with query results – don't tie me into your registry of known types.
- Don't force me into using a typical transaction container like the one Hibernate or Spring provides – they are a disaster and I've never seen a practical use for them that made any sense. Let me handle where connections/transactions are acquired and released in my application – typically this only happens in a few places with clear semantics anyway. This can be some abstracted version of JDBC, but let me control it.
- No clever/magic behaviour in my domain objects – when working with Hibernate, I spend a good deal of time solving the same old proxy and lazy-loading issues. They never end and can't be solved once and for all, which indicates a serious design issue.

Though these points seem completely reasonable to me, I've not encountered any ORMs that really meet my expectations, so at Carfey we've rolled our own little ORM, and I have to say that weekend projects and just general development with what we have are far easier and faster than with Hibernate or the other ORMs I've used. What does it provide?

A Simple Utilitarian ORM

- Java domain classes are generated from a DB schema. There's no platform-agnostic DDL yet, but it's on our TODO list.
- Beans include support for child collections and FK references, but it's all lazy and optional – the beans support it, but if you don't use it, there's no impact. Use IDs directly if you want, or the domain objects themselves.
- Persistence handles persisting dirty objects only, and saves are only done when requested – no magic flush behaviour.
- Generated domain classes are for persistence only! Stick your business logic, etc. elsewhere.
- SQL is used for all lookups, including primary key fetches and foreign key relationships. If you need to enhance a lookup, just steal the generated SQL and build on it.
- Methods and SQL are generated automatically from any indexed column, so they are all provided for you automatically and are typesafe. This also provides a warning to the developer – if a lookup is not available in your domain class, it will likely perform poorly since no index exists.
- Any domain class can be populated from a custom query in a typesafe manner – it's flexible but easy to use.
- Improved classes hide the standard JDBC types such as Connection and Statement for ease of use, but we don't force any transaction semantics on you, and you can always fall back to things like direct result set handling.
- Some basic required features like a connection pool, database metadata, and soon, database slave failover.

We at Carfey don't believe we've created some incredible new ORM that surpasses every other effort out there, and there are many features we'd have to add if this were a public project, but what we have works for us, and I think we have the correct approach. And at the very least, hopefully our experience can help you choose how to use your preferred ORM wisely and not spend too much time serving the tool instead of delivering software.

As a final note, if you have experience with an ORM that meets my list of requirements above and you've had good experiences with it, I'd love to hear about it and would consider it for future Carfey projects.

Reference: Problems with ORMs Part 2 – Queries from our JCG partner Craig Flichel at the Carfey Software Blog....
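To illustrate the "populate arbitrary Java beans with query results" requirement, here is roughly what that looks like when done by hand with plain JDBC. The Order bean and the query are made up for illustration; a real library would generalize the row-to-bean mapping step:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class OrderLookup {

    public static class Order {
        public long id;
        public String status;
    }

    // The SQL stays front and centre; the only "ORM" work is mapping rows to beans
    public List<Order> findByCustomer(Connection con, long customerId) throws SQLException {
        PreparedStatement ps = con.prepareStatement(
                "select id, status from orders where customer_id = ?");
        ps.setLong(1, customerId);
        ResultSet rs = ps.executeQuery();
        List<Order> orders = new ArrayList<Order>();
        while (rs.next()) {
            Order o = new Order();
            o.id = rs.getLong("id");
            o.status = rs.getString("status");
            orders.add(o);
        }
        rs.close();
        ps.close();
        return orders;
    }
}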

What’s Cooking in Java 8 – Project Lambda

What is project lambda: Project Lambda is the project to enable lambda expressions in the Java language syntax. Lambda expressions are a major piece of syntax in functional programming languages like Lisp. Groovy would be the closest relative of Java that has support for lambda expressions, also known as closures.

So what is a lambda expression? It is a block of code that can be assigned to a variable, or passed as an argument to a method, or passed as an argument to another lambda expression, just like any other data. The code can also be invoked whenever required. The main motivation behind supporting this in Java is to remove a lot of boilerplate code when using certain APIs that require some code from the API user, but end up using inner classes just because of Java's syntax requirements. The most common such API is the Java threading API, where we need to be able to tell the API which code to execute in a new thread, but end up implementing Runnable. The specification is still under development and is continuously changing. This article just gives an idea as to what to expect.

Functional interfaces: The Java specification developers hardly ever want to modify the JVM specification, and this case is no exception. So they are making the specification in a way that lambda can be implemented without any modification to the JVM. So you can easily compile a class with source version 1.8 and target version 1.5. The lambda code will be kept in an implementation of an anonymous class implementing an interface that has only one method. Well, not exactly: the interface can have more than one method, but it must be implementable by a class that defines only one method. We will call such an interface a functional interface. The following are some examples of functional interfaces.

//A simple case, only one method
//This is a functional interface
public interface Test1{
    public void doSomething(int x);
}

//Defines two methods, but toString() is already there in
//any object by virtue of being a subclass of java.lang.Object
//This is a functional interface
public interface Test2{
    public void doSomething(int x);
    public String toString();
}

//The method in Test3 is override compatible with
//the method in Test1, so the interface is still
//functional
public interface Test3 extends Test1{
    public void doSomething(int x);
}

//Not functional, the implementation must
//explicitly implement two methods.
public interface Test4 extends Test1{
    public void doSomething(long x);
}

Lambda expressions: In Java 8, lambda expressions are just a different syntax to implement functional interfaces using anonymous classes. The syntax is indeed much simpler than that for creating anonymous classes. The syntax mainly is of this form:

argumentList -> body

The argumentList is just like a Java method argument list – comma separated and enclosed in parentheses – with one exception: if there is only one argument, the parentheses are optional. It is also optional to mention the types of the arguments. In case the types are not specified, they are inferred.

The body can be of two types – an expression body and a code block body. An expression body is just a valid Java expression that returns a value. A code block body contains a code block just like a method body, with the same syntax including the mandatory pair of braces. The following example shows how a new thread is implemented using lambda syntax.
//The thread will keep printing "Hello"
new Thread(() -> {
    while(true){
        System.out.println("Hello");
    }
}).start();

The expression syntax is shown in the following example:

public interface RandomLongs{
    public long randomLong();
}

RandomLongs randomLongs = () -> ((long)(Math.random()*Long.MAX_VALUE));
System.out.println(randomLongs.randomLong());

Generics and lambda: But what if we want to implement a generic method using lambda? The specification developers have come up with a nice syntax: the type parameters are declared before the argument list. The following shows an example –

public interface NCopies{
    public <T extends Cloneable> List<T> getCopies(T seed, int num);
}

//Inferred types for arguments are also supported for generic methods
NCopies nCopies = <T extends Cloneable> (seed, num) -> {
    List<T> list = new ArrayList<>();
    for(int i=0; i<num; i++)
        list.add(seed.clone());
    return list;
};

A point to note: The actual interface and method implemented by a lambda expression depend on the context in which it is used. The context can be set up by the existence of either an assignment operation or the passing of a parameter in a method invocation. Without a context, the lambda is meaningless, so it's not correct to simply call a method directly on a lambda expression. For example, the following will give a compilation error –

public interface NCopies{
    public <T extends Cloneable> List<T> getCopies(T seed, int num);
}

//This code will give a compilation error,
//as the lambda is meaningless without a context
(<T extends Cloneable> (seed, num) -> {
    List<T> list = new ArrayList<>();
    for(int i=0; i<num; i++)
        list.add(seed.clone());
    return list;
}).getCopies(new CloneableClass(), 5);

However, the following would be perfectly alright, because there is an assignment context for the lambda.

NCopies nCopies = <T extends Cloneable> (seed, num) -> {
    List<T> list = new ArrayList<>();
    for(int i=0; i<num; i++)
        list.add(seed.clone());
    return list;
};
nCopies.getCopies(new CloneableClass(), 5);

The stripped down lambda: Lisp's support for lambda is much more flexible than this. The whole Lisp language is based on lambda. However, Java has to restrict the syntax to fit into its own syntax. Besides, Lisp is an interpreted language, which has the advantage of doing things at runtime when all information is available. Java being a compiled language, it has to stick to much more stringent rules for types and control flow etc., so as to avoid surprises at runtime. Considering this, the stripped down lambda in Java 8 does not look that bad.

Reference: What's Cooking in Java 8 – Project Lambda from our JCG partner Debasish Ray Chawdhuri at the Geeky Articles blog....
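As an addendum, it is worth contrasting the thread example above with the pre-lambda equivalent — the anonymous-inner-class boilerplate the proposal is designed to remove:

//The same thread with an anonymous Runnable, as required today
new Thread(new Runnable() {
    @Override
    public void run() {
        while (true) {
            System.out.println("Hello");
        }
    }
}).start();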

Growing hairy software, guided by tests

Software grows organically. One line at a time, one change at a time. These changes soon add up. In an ideal world, they add up to a coherent architecture with an intention-revealing design. But sometimes software just grows hairy – full of little details that obscure the underlying logic. What makes software hairy, and how can we stop it?

Hairy code

Generally code starts out clean – brand new, shiny code. But each time you make a change that doesn't quite fit the original design you add a hair – a small, subtle detail. It doesn't detract from the overall purpose of the code, it just covers a specific detail that wasn't thought of originally. One hair on its own is fine. But then you add another, and another, and another. Before you know it, your clean, shiny code is covered in little hairs. Eventually code becomes so hairy you can't even see the underlying design any more.

Let's face it, we're all basically maintenance programmers. How many of us actually work on a genuinely greenfield project? And anyway, soon after starting a greenfield project, you're changing what went before and you're back in maintenance land. We spend most of our time changing existing code. If we're not careful, we spend most of our time adding new hairs.

The simplest thing

When changing existing code, there's a temptation to make the smallest change that could possibly work. Generally, it's a good approach. Christ, TDD is great at keeping you focused on this. Write a test, make it pass. Write a test, make it pass. Do the simplest thing that could possibly work. But you have to do the refactor step. "Red, green, refactor", people. If you're not refactoring, your code's getting hairy. If you're not refactoring, what you just added is a kludge. Sure, it's a well tested, beautifully written kludge; but it's still a kludge.

But it's just a little if statement

It's just one little change. In this specific case we want to do something subtly different. It may not look like it, but it's a kludge. You've described the logic of the change but not the reason. You've described how the behaviour is different, but not why. Congratulations, you just grew a new hair.

An example

Perhaps an example would help right about now. Let's imagine we work for an online retailer. When we fulfill an order, we take each item and attempt to ship it. Those that are out of stock we add to a queue, to ship as soon as we get new stock.

public class OrderItem {
    public void shipIt() {
        if (stockSystem.inStock(getItem()) > getQuantity()) {
            warehouse.shipItem(getItem(), getQuantity(), getCustomer());
        } else {
            warehouse.addQueuedItem(getItem(), getQuantity(), getCustomer());
        }
    }
}

As happens with online retailers, we're slowly taking over the universe: now we're expanding into shipping digital items as well as physical stuff. This means that some orders will be for items that don't need physical shipment. Each item knows whether it's a digital product or a physical product; the rights management team have created an electronic shipment management system (email, to you and me) – so all we need to do is make sure we don't try to post digital items but email them instead.
Well, the simplest thing that could possibly work is:

public class OrderItem {
    public void shipIt() {
        if (getItem().isDigitalDelivery()) {
            email.shipItem(getItem(), getCustomer());
        } else if (stockSystem.inStock(getItem()) > getQuantity()) {
            warehouse.shipItem(getItem(), getQuantity(), getCustomer());
        } else {
            warehouse.addQueuedItem(getItem(), getQuantity(), getCustomer());
        }
    }
}

After all, it's just a little "if", right? This is all fine and dandy, until in UAT we realise that we're showing delivery in 3 days for digital items. That's not right, so we get a request to show immediate delivery for digital items. There's a method on Item that calculates the estimated delivery date:

public class Item {
    private static final int STANDARD_POST_DAYS = 3;

    public int getEstimatedDaysToDelivery() {
        if (stockSystem.inStock(this) > 0) {
            return STANDARD_POST_DAYS;
        } else {
            return stockSystem.getEstArrivalDays(this) + STANDARD_POST_DAYS;
        }
    }
}

Well, it's easy enough – each item knows whether it's for digital delivery or not, so we can just add another if:

public class Item {
    private static final int STANDARD_POST_DAYS = 3;

    public int getEstimatedDaysToDelivery() {
        if (isDigitalDelivery()) {
            return 0;
        } else if (stockSystem.inStock(this) > 0) {
            return STANDARD_POST_DAYS;
        } else {
            return stockSystem.getEstArrivalDays(this) + STANDARD_POST_DAYS;
        }
    }
}

After all, it's just one more if, right? Where's the harm? But little by little the code is getting hairier and hairier. The trouble is you get lots of little related hairs smeared across the code. You get a hair here, another one over there. You know they're related – they were done as part of the same set of changes. But will someone else looking at this code in 6 months' time? What if we need to make a change so users can select electronic and/or physical delivery for items that support both? Now I need to find all the places that were affected by our original change and make more changes. But they're not grouped together; they've been spread all over. Sure, I can be methodical and find them. But maybe if I'd built it better in the first place it would be easier?

A better way

This all started with a little boolean flag – that was the first smell. Then we find ourselves checking the state of the flag and switching behaviour based on it. It's almost like there's a new domain concept here of a delivery method. Say, instead, I create a DeliveryMethod interface – so each Item can have a DeliveryMethod:

public interface DeliveryMethod {
    void shipItem(Item item, int quantity, Customer customer);
    int getEstimatedDaysToDelivery(Item item);
}

I then create two concrete implementations of this:

public class PostalDelivery implements DeliveryMethod {
    private static final int STANDARD_POST_DAYS = 3;

    @Override
    public void shipItem(Item item, int quantity, Customer customer) {
        if (stockSystem.inStock(item) > quantity) {
            warehouse.shipItem(item, quantity, customer);
        } else {
            warehouse.addQueuedItem(item, quantity, customer);
        }
    }

    @Override
    public int getEstimatedDaysToDelivery(Item item) {
        if (stockSystem.inStock(item) > 0) {
            return STANDARD_POST_DAYS;
        } else {
            return stockSystem.getEstArrivalDays(item) + STANDARD_POST_DAYS;
        }
    }
}

public class DigitalDelivery implements DeliveryMethod {
    @Override
    public void shipItem(Item item, int quantity, Customer customer) {
        email.shipItem(item, customer);
    }

    @Override
    public int getEstimatedDaysToDelivery(Item item) {
        return 0;
    }
}

Now all the logic about how different delivery methods work is local to the DeliveryMethod classes.
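The Item class then simply delegates — a minimal sketch of the wiring, which the post itself doesn't show, so the field and constructor here are assumptions:

public class Item {
    // Assumed wiring: each item is constructed knowing its delivery method
    private final DeliveryMethod deliveryMethod;

    public Item(DeliveryMethod deliveryMethod) {
        this.deliveryMethod = deliveryMethod;
    }

    public DeliveryMethod getDeliveryMethod() {
        return deliveryMethod;
    }

    public int getEstimatedDaysToDelivery() {
        // The polymorphic call replaces the digital/physical flag check
        return deliveryMethod.getEstimatedDaysToDelivery(this);
    }
}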
This groups related changes together; if we later need to make a change to delivery rules, we know exactly where they'll be.

Discipline

Ultimately writing clean code is all about discipline. TDD is a great discipline – it keeps you focused on the task at hand, only adding code that is needed right now, all the while ensuring you have near complete test coverage. However, avoiding hairy code needs yet more discipline. We need to remember to describe the intention of our change, not just the implementation. Code is primarily to be read by humans, so expressing the reason the code does what it does is much more important than expressing the logic. The tests only ensure your logic is correct; you also need to make sure your code reveals its reasoning.

References: Growing hairy software, guided by tests from our JCG partner David Green at the Actively Lazy blog. ...

Apache Camel 2.9 Released – Top 10 Changes

On the last day of 2011 the Apache Camel artifacts just managed to be pushed to the central Maven repo, just shy of 1.5 hours before champagne bottles were cracked and we entered 2012. The 2.9 release is a record-breaking release, with about 500 JIRA tickets resolved since the 2.8 release 5 months ago. Here is a breakdown of 10 of the most noticeable improvements and new features:

1. JAR dependencies reduced. The camel-core JAR now only depends on the slf4j API. On top of that, about 15 components no longer depend on Spring JARs. I have previously blogged about this.

2. The Simple language has been overhauled and has a much improved syntax parser, which gives precise error details about what is wrong. You can now also have embedded functions inside functions. And we have unary operators, such as ++ to easily increment counters. I also started experimenting with ternary operators, so expect Conditional and the Elvis operator to be introduced in the future :) I have previously blogged about this.

3. The Bean component has been much improved as well. Now you can define bindings explicitly in the method name option, to fully 100% decouple your bean code from Camel when using more complicated bindings. Likewise, you can pass in values such as literals, numbers, booleans etc. as well. The bean component can now also invoke static methods directly, as well as invoke private class beans if an interface exists. I have previously blogged about this.

4. Splitting big XML files in streaming mode with a low memory footprint is now possible. There is a tokenizer solution that is purely String based, scanning tokens, and another solution that uses the StAX and JAXB APIs. The former requires no JAXB bindings, unlike the latter. I have previously blogged about these two solutions [1] and [2].

5. More cloud components. We now have 2 new AWS components, for Simple Email Service and SimpleDB. There is also a new JClouds component.

6. Using request-reply over JMS with fixed reply queues now supports a new exclusive option, which performs faster than the default assumed shared queue. Likewise, the JMS consumer supports a new asyncConsumer option, to allow the JMS consumer to leverage the asynchronous non-blocking routing engine. All good stuff that, if enabled, can make JMS go faster under certain use cases.

7. Added a number of new JMX annotations to allow custom components to easily expose custom JMX attributes and operations. We also have JMX load statistics on the ManagedCamelContext MBean, which is similar to the Unix top command, with average load stats for the last 1 minute, 5 minutes, and 15 minutes.

8. The camel-cxf component now supports OSGi blueprint configuration for CXF-RS as well.

9. There is a number of new Apache Karaf Camel commands for further managing your Camel applications from the command shell.

10. And as usual there are a lot of minor improvements and bug fixes as well. For example, the file/ftp components now support the sendEmptyMessageWhenIdle option to… yeah, send an empty message when there were no files to poll. Likewise, the script and language components now more easily allow loading scripts from the file system or classpath. And the Camel Test Kit now has more juice for swapping endpoints before unit testing, which makes it easier to swap real endpoints with mocks and whatnot without touching your route code in the tests.
And we have as usual upgraded to the latest and greatest 3rd party libraries, such as Apache CXF 2.5.1, Groovy 1.8.5, Jackson 1.9.2, AWS 1.2.12, Spring 3.0.6, and JPA2, etc. You can see more details in the 2.9 release notes, such as details about other improvements and bug fixes.

Reference: Apache Camel 2.9 Released – Top 10 Changes from our JCG partner Claus Ibsen at the Claus Ibsen riding the Apache Camel blog....
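As a small taste of item 10, a route using the new sendEmptyMessageWhenIdle option might look like the following sketch — the directory and log endpoint names are placeholders:

import org.apache.camel.builder.RouteBuilder;

public class IdleAwareRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // When a poll finds no files, an empty message is sent
        // instead of nothing, so downstream steps still run
        from("file:target/inbox?sendEmptyMessageWhenIdle=true")
            .to("log:poll");
    }
}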

What is behind System.nanoTime()?

In the Java world there is a very good perception of System.nanoTime(). There are always some guys who say that it is fast and reliable and, whenever possible, should be used for timings instead of System.currentTimeMillis(). Overall, they are not entirely wrong — it is not bad at all — but there are some drawbacks the developer should be aware of. Also, although the implementations have a lot in common, these drawbacks are usually platform-specific.

WINDOWS

The functionality is implemented using the QueryPerformanceCounter API, which is known to have some issues. There is a possibility that it can leap forward, and some people report that it can be extremely slow on multiprocessor machines, etc. I spent some time on the net trying to find out how exactly QueryPerformanceCounter works and what it does. There is no clear conclusion on that topic, but there are some posts which can give a brief idea of how it works; I would say the most useful are probably that one and that one. Sure, one can find more by searching a little, but the info will be more or less the same.

So, it looks like the implementation uses the HPET, if it is available. If not, it uses the TSC with some kind of synchronization of the value among CPUs. Interestingly, QueryPerformanceCounter promises to return a value which increases with a constant frequency. It means that in the case of using the TSC and several CPUs it may have some difficulties, not just with the fact that CPUs may have different TSC values, but also that they may have different frequencies. Keeping all that in mind, Microsoft recommends using SetThreadAffinityMask to stick the thread which calls QueryPerformanceCounter to a single processor — which, obviously, is not happening in the JVM.

LINUX

Linux is very similar to Windows, apart from the fact that it is much more transparent (I managed to download the sources :) ). The value is read from clock_gettime with the CLOCK_MONOTONIC flag (for the real man, the source is available in vclock_gettime.c from the Linux source), which uses either the TSC or the HPET. The only difference from Windows is that Linux does not even try to sync the TSC values read from different CPUs; it just returns them as they are. It means the value can leap back or jump forward depending on the CPU where it is read. Also, in contrast to Windows, Linux does not keep the update frequency constant. On the other hand, this should definitely improve performance.

SOLARIS

Solaris is simple. I believe that via gethrtime it goes to more or less the same implementation of clock_gettime as Linux does. The difference is that Solaris guarantees that the counter will not leap back, which is possible on Linux, although it is possible that the same value will be returned twice. That guarantee, as can be observed from the source code, is implemented using CAS, which requires a sync with main memory and can be relatively expensive on multiprocessor machines. As on Linux, the rate of change can vary.

CONCLUSION

The conclusion is kind of cloudy. Developers have to be aware that the function is not perfect: it can leap back or jump forward. It may not change monotonically, and the rate of change can vary with the CPU clock speed. Also, it is not as fast as many may think. On my Windows 7 machine, in a single-threaded test, it is just about 10% faster than System.currentTimeMillis(); in a multi-threaded test, where the number of threads is the same as the number of CPUs, it is just the same. So, overall, all it gives is an increase in resolution, which may be important for some cases.
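If you want to check the relative cost on your own hardware, a naive loop like the one below gives a first impression — a rough sketch only, since a proper benchmark would need warm-up and more careful dead-code protection:

public class TimerCost {
    public static void main(String[] args) {
        final int iterations = 10000000;
        long sink = 0;

        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += System.currentTimeMillis();
        }
        long millisCost = System.nanoTime() - start;

        start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += System.nanoTime();
        }
        long nanoCost = System.nanoTime() - start;

        System.out.println("currentTimeMillis(): " + millisCost / iterations + " ns/call");
        System.out.println("nanoTime():          " + nanoCost / iterations + " ns/call");
        System.out.println(sink); // keeps the JIT from eliminating the loops
    }
}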
And as a final note: even when the CPU frequency is not changing, do not think that you can reliably map that value to the system clock; see the details here.

APPENDIX

The appendix contains the implementations of the function for the different OSes. The source code is from OpenJDK v7.

Solaris

// gethrtime can move backwards if read from one cpu and then a different cpu
// getTimeNanos is guaranteed to not move backward on Solaris
inline hrtime_t getTimeNanos() {
  if (VM_Version::supports_cx8()) {
    const hrtime_t now = gethrtime();
    // Use atomic long load since 32-bit x86 uses 2 registers to keep long.
    const hrtime_t prev = Atomic::load((volatile jlong*)&max_hrtime);
    if (now <= prev) return prev;   // same or retrograde time;
    const hrtime_t obsv = Atomic::cmpxchg(now, (volatile jlong*)&max_hrtime, prev);
    assert(obsv >= prev, "invariant");   // Monotonicity
    // If the CAS succeeded then we're done and return "now".
    // If the CAS failed and the observed value "obs" is >= now then
    // we should return "obs". If the CAS failed and now > obs > prv then
    // some other thread raced this thread and installed a new value, in which case
    // we could either (a) retry the entire operation, (b) retry trying to install now
    // or (c) just return obs. We use (c). No loop is required although in some cases
    // we might discard a higher "now" value in deference to a slightly lower but freshly
    // installed obs value. That's entirely benign -- it admits no new orderings compared
    // to (a) or (b) -- and greatly reduces coherence traffic.
    // We might also condition (c) on the magnitude of the delta between obs and now.
    // Avoiding excessive CAS operations to hot RW locations is critical.
    // See http://blogs.sun.com/dave/entry/cas_and_cache_trivia_invalidate
    return (prev == obsv) ? now : obsv;
  } else {
    return oldgetTimeNanos();
  }
}

Linux

jlong os::javaTimeNanos() {
  if (Linux::supports_monotonic_clock()) {
    struct timespec tp;
    int status = Linux::clock_gettime(CLOCK_MONOTONIC, &tp);
    assert(status == 0, "gettime error");
    jlong result = jlong(tp.tv_sec) * (1000 * 1000 * 1000) + jlong(tp.tv_nsec);
    return result;
  } else {
    timeval time;
    int status = gettimeofday(&time, NULL);
    assert(status != -1, "linux error");
    jlong usecs = jlong(time.tv_sec) * (1000 * 1000) + jlong(time.tv_usec);
    return 1000 * usecs;
  }
}

Windows

jlong os::javaTimeNanos() {
  if (!has_performance_count) {
    return javaTimeMillis() * NANOS_PER_MILLISEC; // the best we can do.
  } else {
    LARGE_INTEGER current_count;
    QueryPerformanceCounter(&current_count);
    double current = as_long(current_count);
    double freq = performance_frequency;
    jlong time = (jlong)((current/freq) * NANOS_PER_SEC);
    return time;
  }
}

Reference: What is behind System.nanoTime()? from our JCG partner Stanislav Kobylansky at Stas’s blog.

Related links:
  • Inside the Hotspot VM: Clocks, Timers and Scheduling Events
  • Beware of QueryPerformanceCounter()
  • Implement a Continuously Updating, High-Resolution Time Provider for Windows
  • Game Timing and Multicore Processors
  • High Precision Event Timer (Wikipedia)
  • Time Stamp Counter (Wikipedia)

PopupMenu in JavaFX 2

Creating Popup Menus

To create a popup menu in JavaFX you can use the ContextMenu class. You add MenuItems to it, and you can also create visual separators using SeparatorMenuItem. In the example below I've opted to subclass ContextMenu and add the MenuItems in its constructor.

public class AnimationPopupMenu extends ContextMenu {
    public AnimationPopupMenu() {
        (...)

        getItems().addAll(
            MenuItemBuilder.create()
                .text(ADD_PARTICLE)
                .graphic(createIcon(...))
                .onAction(new EventHandler<ActionEvent>() {
                    @Override
                    public void handle(ActionEvent actionEvent) {
                        // some code that gets called when the user clicks the menu item
                    }
                })
                .build(),
            (...)

            SeparatorMenuItemBuilder.create().build(),
            MenuItemBuilder.create()
                .text(ADD_DISTANCE_MEASURER)
                .onAction(new EventHandler<ActionEvent>() {
                    @Override
                    public void handle(ActionEvent actionEvent) {
                        // Some code that will get called when the user clicks the menu item
                    }
                })
                .graphic(createIcon(...))
                .build(),
            (...)
        );
    }

Line 5: gets the collection of children of the ContextMenu and calls addAll to add the MenuItems;
Line 6: uses the MenuItemBuilder to create a MenuItem;
Line 7: passes in the text of the menu item (the variable ADD_PARTICLE is equal to “Add Particle”);
Line 8: calls graphic, which receives the menu item icon returned by createIcon:

ImageView createIcon(URL iconURL) {
    return ImageViewBuilder.create()
        .image(new Image(iconURL.toString()))
        .build();
}

Line 9: onAction receives the event handler that will be called when the user clicks the menu item;
Line 15: finally, the MenuItem gets created by executing build() on the MenuItemBuilder;
Line 18: creates the separator you can see in the figure at the start of this post, the dotted line between “Add Origin” and “Add Distance Measurer”.
The other lines of code just repeat the same process to create the rest of the menu items.

Using JavaFX Popup Menus inside JFXPanel

If you're embedding a JavaFX scene in a Swing app, you'll have to take some extra steps manually; if you don't, there won't be hover animations on the popup menu and it won't get dismissed automatically when the user clicks outside of it. There is a fix targeted at JavaFX 3.0 for this – http://javafx-jira.kenai.com/browse/RT-14899

First, you'll have to request the focus on the JavaFX container so that the popup gets hover animations and gets dismissed when you click outside your app window. In my case I pass a reference to the JavaFX Swing container in the constructor of the popup menu; then I've overridden the show method of ContextMenu so as to request the focus on the Swing container before actually showing the popup:

public void show(Node anchor, MouseEvent event) {
    wrapper.requestFocusInWindow();
    super.show(anchor, event.getScreenX(), event.getScreenY());
}

And lastly, you'll also have to dismiss the popup when the user clicks inside the JavaFX scene but outside of the popup, by calling hide(). I almost forgot... thanks to Martin Sladecek (Oracle JavaFX team) for giving me some pointers. Reference: PopupMenu in JavaFX 2 from our JCG partner Pedro Duque Vieira at the Pixel Duke blog....
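As a closing aside, below is a minimal, self-contained sketch of the same idea. The class name PopupMenuDemo and the menu wiring are mine, not from the article; it pops up a ContextMenu on a right-click over a plain scene using the constructor/setter API instead of the builder classes shown above, so it should also compile on later JavaFX releases, where the builders were deprecated.

import javafx.application.Application;
import javafx.event.ActionEvent;
import javafx.event.EventHandler;
import javafx.scene.Scene;
import javafx.scene.control.ContextMenu;
import javafx.scene.control.MenuItem;
import javafx.scene.control.SeparatorMenuItem;
import javafx.scene.input.ContextMenuEvent;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;

public class PopupMenuDemo extends Application {

    @Override
    public void start(Stage stage) {
        MenuItem addParticle = new MenuItem("Add Particle");
        addParticle.setOnAction(new EventHandler<ActionEvent>() {
            @Override
            public void handle(ActionEvent event) {
                System.out.println("Add Particle clicked");
            }
        });

        MenuItem addMeasurer = new MenuItem("Add Distance Measurer");
        addMeasurer.setOnAction(new EventHandler<ActionEvent>() {
            @Override
            public void handle(ActionEvent event) {
                System.out.println("Add Distance Measurer clicked");
            }
        });

        // Items are created up front and handed to the ContextMenu constructor,
        // with a SeparatorMenuItem between them, mirroring the builder version.
        final ContextMenu popup =
                new ContextMenu(addParticle, new SeparatorMenuItem(), addMeasurer);

        final StackPane root = new StackPane();
        // Show the popup at the mouse position on a right-click (context menu request).
        root.setOnContextMenuRequested(new EventHandler<ContextMenuEvent>() {
            @Override
            public void handle(ContextMenuEvent event) {
                popup.show(root, event.getScreenX(), event.getScreenY());
            }
        });

        stage.setScene(new Scene(root, 400, 300));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}

Note that in a pure JavaFX window no Swing focus workaround is needed; attaching the handler to the scene root's onContextMenuRequested event replaces the custom show method from the JFXPanel case.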