

The Java EE 6 Example – Galleria

Have you ever wondered where to find a good end-to-end example built with Java EE 6? I have. Most of the material you find on the net is very basic and doesn't address real-world problems. This is true of the Java EE 6 tutorial. Most of the rest, like much of what Adam Bien publishes, consists of tightly scoped examples which also don't point you toward a more complete solution. So I was very pleased to stumble over a more complex example done by Vineet Reynolds. It's called "Java EE 6 Galleria" and you can download the source code from Bitbucket. Vineet is a software engineer contributing to the Arquillian project; more specifically, he contributed bug fixes and worked on a few feature requests for Arquillian Core and the GlassFish, WebLogic and Tomcat integrations for Arquillian. This is where I first came across his name, and following the Arquillian guys and him a little more closely led me directly to this example. A big thanks to Vineet for a helping hand during my first tries to get this up and running. Follow him on Twitter at @VineetReynolds if you like. Here is a brief explanation of its background; this also kicks off a series about running it in different settings and pointing you to a few more details under the hood. This is the basic introduction.

About the Galleria

The high-level description of the project is the following: the Java EE 6 Galleria is a demo application demonstrating the use of JSF 2.0 and JPA 2.0 in a Java EE project using Domain Driven Design (DDD). It was written to serve as a showpiece for domain-driven design in Java EE 6. The domain model of the application is not anemic; it is made up of JPA entities. The entities are then used in session EJBs that act as the application layer. JSF facelets are used in the presentation tier, using Mojarra and PrimeFaces. The project seeks to achieve comprehensive coverage through the use of both unit and integration tests, written in JUnit 4.
The unit and integration tests for the EJBs and the domain model rely on the EJB 3.1 container API. The integration tests for the presentation layer rely on the Arquillian project and its Drone extension (for executing Selenium tests).

Domain-driven design using Java EE 6

DDD as an architectural approach is feasible in Java EE 6, primarily thanks to the changes made in EJB 3.x and the introduction of JPA. The improvements made in the EJB 3.x and JPA specifications allow a domain and application layer to be modeled in Java EE 6 using DDD. The basic idea is to design the application so that persistence services are injected into the application layer and used to access and persist the entities within a transactional context established by the application layer.

Domain layer

The application currently contains four domain entities – User, Group, Album and Photo – which are the same as the JPA entities in the logical data model.

Repository layer

On top of the logical data model you find four repositories – UserRepository, GroupRepository, AlbumRepository and PhotoRepository – one for each of the four domain entities. Even though DDD calls for repositories only for aggregate roots, and not for all domain entities, the project is designed this way to allow the application layer to access the Album and Photo domain entities without having to navigate to them via the UserRepository. The repositories are stateless session beans with a no-interface view and are built following the Generic CRUD Service pattern published by Adam Bien.

Application layer

The application layer exposes services to be consumed by the presentation layer. It is also responsible for transaction management and serves as a fault barrier for the layers below it. The application layer coordinates with the domain repositories and the domain objects to fulfill the desired objectives of the exposed services.
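To make the layering concrete, here is a minimal sketch of how an application-layer service might coordinate with one of the repositories. The class and method names are illustrative only, not taken from the Galleria sources; the real project uses @Stateless EJBs with an injected EntityManager and container-managed transactions, which a plain Map stands in for here.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative domain entity; in the project this is a JPA @Entity.
class Album {
    private final Long id;
    private final String name;

    Album(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    Long getId() { return id; }
    String getName() { return name; }
}

// Repository in the spirit of the Generic CRUD Service pattern; a Map
// replaces the EntityManager so the sketch is self-contained.
class AlbumRepository {
    private final Map<Long, Album> store = new HashMap<>();

    Album create(Album album) {
        store.put(album.getId(), album);
        return album;
    }

    Album findById(Long id) {
        return store.get(id);
    }
}

// Application-layer service: validates input before delegating to the
// repository. In the project, validation is driven by JSR-303 constraints
// and the transactional context is established by the EJB container.
class AlbumService {
    private final AlbumRepository albums = new AlbumRepository();

    Album createAlbum(Long id, String name) {
        if (id == null || name == null || name.trim().isEmpty()) {
            throw new IllegalArgumentException("id and name are required");
        }
        return albums.create(new Album(id, name));
    }

    Album findAlbum(Long id) {
        return albums.findById(id);
    }
}
```

The point of the layering is that the presentation tier only ever talks to the service; the repository and the entities stay behind the application layer.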
In a way, this layer is the equivalent of a service layer in traditional applications. The application layer exposes its services through the UserService, GroupService, AlbumService and PhotoService interfaces, and is also responsible for validating the domain objects provided by the layers above before coordinating actions among objects in the domain layer. This is done via JSR-303 constraints on the domain objects.

What it looks like

And this is how the example looks if you run it on the latest GlassFish 3.1.2. Curious to set this up yourself? Wait for the next post or give it a try yourself ;) Below we are going to set up the example as-is with the latest GlassFish 3.1.2, Hibernate and Derby.

Preparation

Get yourself in the mood for some configuration. Grab the latest NetBeans 7.1 (the Java EE edition already includes the needed GlassFish 3.1.2) and install it. I also assume you have a decent Java SDK 7 (6 would do the job, too) in place somewhere. Depending on your development strategy you also need a Mercurial client and Maven. Maven at least is also included in NetBeans, so ... I mean ... why make your life harder than it already is? ;)

Environments

A few more words about the environments. This example is set up to support different environments. Besides the plain "development" environment you also need to configure a "test" and, last but not least, a "production" environment. The different environments are handled by Maven profiles, so you will have to configure a bit during the following minutes.

Create the database instances

The first thing to do is decide where to put all your stuff. The example uses Derby out of the box, so you should either have Java DB installed (part of the JDK) or use the GlassFish Derby instance which comes preconfigured with NetBeans. Let's make it harder here and assume we use the Java DB installation that comes with your JDK.
Go ahead, open a command prompt, navigate to your %JAVA_HOME% folder and further down into db/bin. Execute the "startNetworkServer" script and watch the Derby instance start. Now open another command prompt, also navigate to db/bin, and execute the "ij" script. This should come up with an "ij>" prompt. Now enter the following connect string:

connect 'jdbc:derby://localhost:1527/GALLERIATEST;create=true';

This command connects you to the Derby instance and creates a GALLERIATEST database if it doesn't already exist. The Galleria example uses a handy little tool called dbdeploy as a database change management tool. It lets you apply incremental updates to the physical database model, which are tracked in a changelog table. (More about this later in the series.) You have to create the changelog table yourself:

CREATE TABLE changelog (
  change_number DECIMAL(22,0) NOT NULL,
  complete_dt   TIMESTAMP     NOT NULL,
  applied_by    VARCHAR(100)  NOT NULL,
  description   VARCHAR(500)  NOT NULL
);

ALTER TABLE changelog ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number);

You can repeat these steps for any other instance you need (production, etc.) by simply changing the database name in the connect statement. And don't forget to create the changelog table in every instance. If you don't like this approach, fire up NetBeans, switch to the Services tab, select "New Connection" and add a new Java DB network connection with host localhost, port 1527 and database GALLERIATEST;create=true. Set both user and password to "APP", click "Test Connection" and select APP as the schema for your new DB. And you are done!

Create the GlassFish domain

We are running this on the latest GlassFish, so the first thing to do is create a new domain. Navigate to your GlassFish installation directory, go to glassfish3/bin and execute the following:

asadmin create-domain --portbase 10000 --nopassword test-domain

This creates a new test-domain for you.
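As a side note on the database step above: if you prefer doing it from code instead of the ij tool, the same connect-and-create sequence can be sketched with plain JDBC. This is an illustration, not part of the Galleria project; it assumes the Derby network server is running and the Derby client driver is on the classpath.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Programmatic equivalent of the ij session: connect to the Derby network
// server and create the dbdeploy changelog table. URL and DDL are built as
// plain strings so they can be reused for other instances (production, ...)
// by changing the database name.
class GalleriaDb {

    static String connectionUrl(String dbName) {
        return "jdbc:derby://localhost:1527/" + dbName + ";create=true";
    }

    static String changelogDdl() {
        return "CREATE TABLE changelog ("
             + "change_number DECIMAL(22,0) NOT NULL, "
             + "complete_dt TIMESTAMP NOT NULL, "
             + "applied_by VARCHAR(100) NOT NULL, "
             + "description VARCHAR(500) NOT NULL)";
    }

    // Requires the Derby client driver on the classpath and the network
    // server started as described above.
    static void createChangelog(String dbName) throws SQLException {
        try (Connection con = DriverManager.getConnection(connectionUrl(dbName), "APP", "APP");
             Statement st = con.createStatement()) {
            st.executeUpdate(changelogDdl());
            st.executeUpdate("ALTER TABLE changelog "
                    + "ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number)");
        }
    }
}
```

Calling GalleriaDb.createChangelog("GALLERIATEST") reproduces the ij steps for the test instance; pass a different name for the other environments.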
Now navigate to that domain folder ("glassfish3/glassfish/domains/test-domain") and open the config/domain.xml file. We are now going to add the created Derby database as a connection pool to your newly created GlassFish domain. Navigate to the <resources> element and add the following connection pool and jdbc-resource after the last closing </jdbc-connection-pool> element:

<jdbc-connection-pool driver-classname="" datasource-classname="org.apache.derby.jdbc.ClientDataSource40" res-type="javax.sql.DataSource" description="" name="GalleriaPool" ping="true">
  <property name="User" value="APP"></property>
  <property name="DatabaseName" value="GALLERIATEST"></property>
  <property name="RetrieveMessageText" value="true"></property>
  <property name="Password" value="APP"></property>
  <property name="ServerName" value="localhost"></property>
  <property name="Ssl" value="off"></property>
  <property name="SecurityMechanism" value="4"></property>
  <property name="TraceFileAppend" value="false"></property>
  <property name="TraceLevel" value="-1"></property>
  <property name="PortNumber" value="1527"></property>
  <property name="LoginTimeout" value="0"></property>
</jdbc-connection-pool>
<jdbc-resource pool-name="GalleriaPool" description="" jndi-name="jdbc/galleriaDS"></jdbc-resource>

Now find the <config name="server-config"> element and inside it look for the last <resource-ref> entry. Add the following line there:

<resource-ref ref="jdbc/galleriaDS"></resource-ref>

One last thing to do before we are ready to fire up our instance: we need to add the JDBC realm for the Galleria example. Again, find the <config name="server-config"> element and, inside it, look for a closing </auth-realm> tag.
Under it, put the following:

<auth-realm classname="com.sun.enterprise.security.auth.realm.jdbc.JDBCRealm" name="GalleriaRealm">
  <property name="jaas-context" value="jdbcRealm"></property>
  <property name="encoding" value="Hex"></property>
  <property name="password-column" value="PASSWORD"></property>
  <property name="datasource-jndi" value="jdbc/galleriaDS"></property>
  <property name="group-table" value="USERS_GROUPS"></property>
  <property name="charset" value="UTF-8"></property>
  <property name="user-table" value="USERS"></property>
  <property name="group-name-column" value="GROUPID"></property>
  <property name="digest-algorithm" value="SHA-512"></property>
  <property name="user-name-column" value="USERID"></property>
</auth-realm>

Be sure not to put the new realm under the default-config; that will not work. Fine. Let's get the sources :)

Getting the source and opening it in NetBeans

Vineet hosts the Galleria example on bitbucket.org, so go there and visit the java-ee-6-galleria project. There are three ways to bring the sources to your local disk. Either via the hg command line:

hg clone https://bitbucket.org/VineetReynolds/java-ee-6-galleria

or via the website download (upper right, "get sources"), or directly via NetBeans. You need a Mercurial client for your OS for the first and the third option. I am using TortoiseHg for Windows. You should have it installed and configured with NetBeans before doing the following. Let's try the last alternative here. Select "Team > Clone Other". Enter the repository URL and leave user/password empty. Click "Next" two times (we don't need to change the default paths ;)) and select a parent directory to put the sources in. Click "Finish" and let the Mercurial client do the work. You are asked to open the found projects after it finishes. This should look similar to the picture on the right.
If you run into connection trouble, make sure to update your proxy settings accordingly. If you try to build the project now, you will run into trouble: it is still missing some configuration, which we are going to add next.

Adding a development profile

Next we add some settings to the Maven pom.xml of the galleria-ejb project. Open it and scroll down to the <profiles> section. You will find two profiles there (sonar and production). We are going to add a development profile by adding the following lines (make sure to adjust the GlassFish paths to your environment):

<profile>
  <id>development</id>
  <activation>
    <activeByDefault>true</activeByDefault>
  </activation>
  <properties>
    <galleria.derby.testInstance.jdbcUrl>jdbc:derby://localhost:1527/GALLERIATEST</galleria.derby.testInstance.jdbcUrl>
    <galleria.derby.testInstance.user>APP</galleria.derby.testInstance.user>
    <galleria.derby.testInstance.password>APP</galleria.derby.testInstance.password>
    <galleria.glassfish.testDomain.user>admin</galleria.glassfish.testDomain.user>
    <galleria.glassfish.testDomain.passwordFile>D:/glassfish-3.1.2-b22/glassfish3/glassfish/domains/test-domain/config/local-password</galleria.glassfish.testDomain.passwordFile>
    <galleria.glassfish.testDomain.glassfishDirectory>D:/glassfish-3.1.2-b22/glassfish3/glassfish/</galleria.glassfish.testDomain.glassfishDirectory>
    <galleria.glassfish.testDomain.domainName>test-domain</galleria.glassfish.testDomain.domainName>
    <galleria.glassfish.testDomain.adminPort>10048</galleria.glassfish.testDomain.adminPort>
    <galleria.glassfish.testDomain.httpPort>10080</galleria.glassfish.testDomain.httpPort>
    <galleria.glassfish.testDomain.httpsPort>10081</galleria.glassfish.testDomain.httpsPort>
  </properties>
</profile>

OK. As you can see, a number of properties are defined here, and the profile is activated by default. That's it for now.

Testing the galleria-ejb project

Let's try to run the test cases in the galleria-ejb project.
Right-click it and issue a "clean and build". Follow the console output to see what is actually going on. We are going to investigate this a little further in one of the next posts; today we are only doing this to make sure you have everything set up the right way. It should finish with:

Tests run: 49, Failures: 0, Errors: 0, Skipped: 0
BUILD SUCCESS

That is a "green bar" :-) Congratulations!

Build and deploy the project

Now go to NetBeans "Tools > Options > Miscellaneous > Maven" and check the box that says "Skip Tests for any build executions not directly related to testing". Return to the main window, right-click the Galleria project and do a clean and build there.

Reactor Summary:
Galleria ................................. SUCCESS [0.431s]
galleria-ejb ............................. SUCCESS [5.302s]
galleria-jsf ............................. SUCCESS [4.486s]
Galleria EAR ............................. SUCCESS [1.308s]
------------------------------------------------------------
BUILD SUCCESS
------------------------------------------------------------
Total time: 11.842s

Fine. Now let's fire up the GlassFish domain. Switch to your GlassFish installation and find the glassfish3/bin folder. Open a command line prompt there and run:

asadmin start-domain test-domain

You can see the domain starting up. Now open a browser and navigate to http://localhost:10048/. After a few seconds this shows you the admin console of your GlassFish server. Now you need to install Hibernate. Select the "Update Tool" (bottom left) and switch to the "Available Add-Ons" tab. Select "hibernate" and click install (top right). Stop the server after the installation and restart it with the command above. Open the admin console again and click on "Applications". Click the little "Deploy" button on top and browse for "java-ee-6-galleria/galleria-ear/target/galleria-ear-0.0.1-SNAPSHOT.ear". Click "OK" (top right). You are done after a few seconds.
Now switch to http://localhost:10080/Galleria/ and you will see the welcome screen. Congratulations, you have set up the Galleria example on GlassFish! Sign up, log in and play with the application a bit. The next parts of the series will dive into the details of the application. I am going to cover the tests and the overall concepts, and we are also going to change both the JPA provider and the database in a future post. Want to know what it takes to get it up and running on the latest WebLogic 12c? Read on!

Reference: The Java EE 6 Example – Galleria – Part 1 & The Java EE 6 Example – Running Galleria on GlassFish 3.1.2 – Part 2 from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

FindBugs and JSR-305

Suppose that a group of developers works in parallel on parts of a big project – some developers are working on the service implementation, while others are working on code using this service. Both groups agreed on the service API and started working separately, keeping the API assumptions in mind... Do you think this story will have a happy end? Well... maybe :) – there are tools which can help achieve it :) – one of them is FindBugs, supported by JSR-305 (annotations for software defect detection). Let's take a look at the service API contract:

package com.blogspot.vardlokkur.services;

import java.util.List;

import javax.annotation.CheckForNull;
import javax.annotation.Nonnull;

import com.blogspot.vardlokkur.entities.domain.Employer;

/**
 * Defines the API contract for the employer service.
 *
 * @author Warlock
 * @since 1.0
 */
public interface EmployerService {

    /**
     * @param identifier the employer's identifier
     * @return the employer having specified {@code identifier}, {@code null} if not found
     */
    @CheckForNull
    Employer withId(@Nonnull Long identifier);

    /**
     * @param specification defines which employers should be returned
     * @return the list of employers matching specification
     */
    @Nonnull
    List<Employer> thatAre(@Nonnull Specification specification);
}

As you see, annotations like @Nonnull and @CheckForNull have been added to the service method signatures. Their purpose is to define the requirements for the method parameters (e.g. the identifier parameter cannot be null) and the expectations for the values returned by the methods (e.g. the service method result can be null and you should check for it in your code). So what? – you may ask – should I check them in the code by myself, or trust my co-workers to follow the guidelines defined by those annotations? Of course not :) – trust no one; use tools which will verify the API assumptions, like FindBugs.
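Before handing the contract over to FindBugs, it is worth seeing what honouring it looks like in calling code. The sketch below uses stand-in types (not the ones from the article) to show the two obligations the annotations encode: never pass null for a @Nonnull parameter, and always check a @CheckForNull result before dereferencing it.

```java
import java.util.Objects;

// Stand-in for the Employer entity referenced by the service contract.
class Employer {
    private final String businessName;

    Employer(String businessName) {
        this.businessName = businessName;
    }

    String getBusinessName() { return businessName; }
}

class EmployerClient {
    // Mirrors the contract of EmployerService#withId: the identifier is
    // required (@Nonnull), the lookup result may be absent (@CheckForNull).
    static String businessNameOf(Employer lookupResult, Long identifier) {
        // Runtime counterpart of @Nonnull; FindBugs flags violations statically.
        Objects.requireNonNull(identifier, "identifier must not be null");
        // Counterpart of @CheckForNull: test before dereferencing.
        return lookupResult == null ? "<not found>" : lookupResult.getBusinessName();
    }
}
```

The annotations let FindBugs prove these checks are present at compile time; the runtime checks simply make the same contract fail fast if the static analysis step was skipped.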
Suppose that we have the following service API usage:

package com.blogspot.vardlokkur.test;

import org.junit.Before;
import org.junit.Test;

import com.blogspot.vardlokkur.services.EmployerService;
import com.blogspot.vardlokkur.services.impl.DefaultEmployerService;

/**
 * Employer service test.
 *
 * @author Warlock
 * @since 1.0
 */
public class EmployerServiceTest {

    private EmployerService employers;

    @Before
    public void before() {
        employers = new DefaultEmployerService();
    }

    @Test
    public void test01() {
        Long identifier = null;
        employers.withId(identifier);
    }

    @Test
    public void test02() {
        employers.withId(Long.valueOf(1L)).getBusinessName();
    }

    @Test
    public void test03() {
        employers.thatAre(null);
    }
}

Let's try to verify this code against the service API assumptions. FindBugs will analyze it and switch to the FindBugs perspective showing the potential problems:

Null passed for nonnull parameter
Possible null pointer dereference

In a similar way, the guys writing the service code can verify their work against the defined API assumptions. For example, if you run FindBugs on a very early version of the service implementation:

package com.blogspot.vardlokkur.services.impl;

import java.util.List;

import com.blogspot.vardlokkur.entities.domain.Employer;
import com.blogspot.vardlokkur.services.EmployerService;
import com.blogspot.vardlokkur.services.Specification;

/**
 * Default implementation of {@link EmployerService}.
 *
 * @author Warlock
 * @since 1.0
 */
public class DefaultEmployerService implements EmployerService {

    /**
     * {@inheritDoc}
     */
    public Employer withId(Long identifier) {
        return null;
    }

    /**
     * {@inheritDoc}
     */
    public List<Employer> thatAre(Specification specification) {
        return null;
    }
}

FindBugs will flag that these methods are declared @Nonnull but may return null. As you see, nothing can hide from FindBugs and his ally, JSR-305 ;) A few links for dessert:

JSR-305: Annotations for Software Defect Detection
JSR 305: a silver bullet or not a bullet at all?

Reference: FindBugs and JSR-305 from our JCG partner Michał Jaśtak at the Warlock's Thoughts blog.

Web Services with JAX-WS on Tomcat

Let us assume an enterprise maintains user authentication details in a centralized system. We need to create an AuthenticationService which takes credentials, validates them and returns the status. The rest of the applications will use the AuthenticationService to authenticate users. Create the AuthenticationService interface as follows:

package com.sivalabs.caas.services;

import javax.jws.WebService;

import com.sivalabs.caas.domain.AuthenticationStatus;
import com.sivalabs.caas.domain.Credentials;
import com.sivalabs.caas.exceptions.AuthenticationServiceException;

@WebService
public interface AuthenticationService {
    public AuthenticationStatus authenticate(Credentials credentials) throws AuthenticationServiceException;
}

package com.sivalabs.caas.domain;

/**
 * @author siva
 */
public class Credentials {
    private String userName;
    private String password;

    public Credentials() {
    }

    public Credentials(String userName, String password) {
        super();
        this.userName = userName;
        this.password = password;
    }

    //setters and getters
}

package com.sivalabs.caas.domain;

/**
 * @author siva
 */
public class AuthenticationStatus {
    private String statusMessage;
    private boolean success;

    //setters and getters
}

package com.sivalabs.caas.exceptions;

/**
 * @author siva
 */
public class AuthenticationServiceException extends RuntimeException {
    private static final long serialVersionUID = 1L;

    public AuthenticationServiceException() {
    }

    public AuthenticationServiceException(String msg) {
        super(msg);
    }
}

Now let us implement the AuthenticationService.
package com.sivalabs.caas.services;

import java.util.HashMap;
import java.util.Map;

import javax.jws.WebService;

import com.sivalabs.caas.domain.AuthenticationStatus;
import com.sivalabs.caas.domain.Credentials;
import com.sivalabs.caas.exceptions.AuthenticationServiceException;

/**
 * @author siva
 */
@WebService(endpointInterface="com.sivalabs.caas.services.AuthenticationService",
            serviceName="AuthenticationService",
            targetNamespace="http://sivalabs.blogspot.com/services/AuthenticationService")
public class AuthenticationServiceImpl implements AuthenticationService {

    private static final Map<String, String> CREDENTIALS = new HashMap<String, String>();

    static {
        CREDENTIALS.put("admin", "admin");
        CREDENTIALS.put("test", "test");
    }

    @Override
    public AuthenticationStatus authenticate(Credentials credentials) throws AuthenticationServiceException {
        if (credentials == null) {
            throw new AuthenticationServiceException("Credentials is null");
        }
        AuthenticationStatus authenticationStatus = new AuthenticationStatus();
        String userName = credentials.getUserName();
        String password = credentials.getPassword();
        if (userName == null || userName.trim().length() == 0
                || password == null || password.trim().length() == 0) {
            authenticationStatus.setStatusMessage("UserName and Password should not be blank");
            authenticationStatus.setSuccess(false);
        } else {
            if (CREDENTIALS.containsKey(userName) && password.equals(CREDENTIALS.get(userName))) {
                authenticationStatus.setStatusMessage("Valid UserName and Password");
                authenticationStatus.setSuccess(true);
            } else {
                authenticationStatus.setStatusMessage("Invalid UserName and Password");
                authenticationStatus.setSuccess(false);
            }
        }
        return authenticationStatus;
    }
}

Here, for simplicity, we check the credentials against static data stored in a HashMap. In a real application this check would be done against a database. Now we are going to publish the web service.
package com.sivalabs.caas.publisher;

import javax.xml.ws.Endpoint;

import com.sivalabs.caas.services.AuthenticationServiceImpl;

public class EndpointPublisher {
    public static void main(String[] args) {
        Endpoint.publish("http://localhost:8080/CAAS/services/AuthenticationService",
                new AuthenticationServiceImpl());
    }
}

Run this standalone class to publish the AuthenticationService. To check whether the service was published successfully, point your browser at http://localhost:8080/CAAS/services/AuthenticationService?wsdl. If the service was published successfully you will see the WSDL content. Now let us create a standalone test client for the web service:

package com.sivalabs.caas.client;

import java.net.URL;

import javax.xml.namespace.QName;
import javax.xml.ws.Service;

import com.sivalabs.caas.domain.AuthenticationStatus;
import com.sivalabs.caas.domain.Credentials;
import com.sivalabs.caas.services.AuthenticationService;

/**
 * @author siva
 */
public class StandaloneClient {

    public static void main(String[] args) throws Exception {
        URL wsdlUrl = new URL("http://localhost:8080/CAAS/services/AuthenticationService?wsdl");
        QName qName = new QName("http://sivalabs.blogspot.com/services/AuthenticationService",
                "AuthenticationService");
        Service service = Service.create(wsdlUrl, qName);
        AuthenticationService port = service.getPort(AuthenticationService.class);

        Credentials credentials = new Credentials();
        credentials.setUserName("admin1");
        credentials.setPassword("admin");
        AuthenticationStatus authenticationStatus = port.authenticate(credentials);
        System.out.println(authenticationStatus.getStatusMessage());

        credentials.setUserName("admin");
        credentials.setPassword("admin");
        authenticationStatus = port.authenticate(credentials);
        System.out.println(authenticationStatus.getStatusMessage());
    }
}

Instead of writing the StandaloneClient ourselves, we can generate a client using the wsimport command line tool, which is located in the JDK's bin directory.
Go to your project's src directory and execute the following command:

wsimport -keep -p com.sivalabs.caas.client http://localhost:8080/CAAS/services/AuthenticationService?wsdl

It will generate the following Java (and class) files in the com.sivalabs.caas.client package:

Authenticate.java
AuthenticateResponse.java
AuthenticationService_Service.java
AuthenticationService.java
AuthenticationServiceException_Exception.java
AuthenticationServiceException.java
AuthenticationStatus.java
Credentials.java
ObjectFactory.java
package-info.java

Now you can use the generated Java files to test the service:

public static void main(String[] args) throws Exception {
    AuthenticationService_Service service = new AuthenticationService_Service();
    com.sivalabs.caas.client.AuthenticationService port = service.getAuthenticationServiceImplPort();

    com.sivalabs.caas.client.Credentials credentials = new com.sivalabs.caas.client.Credentials();
    credentials.setUserName("admin1");
    credentials.setPassword("admin");
    com.sivalabs.caas.client.AuthenticationStatus authenticationStatus = port.authenticate(credentials);
    System.out.println(authenticationStatus.getStatusMessage());

    credentials.setUserName("admin");
    credentials.setPassword("admin");
    authenticationStatus = port.authenticate(credentials);
    System.out.println(authenticationStatus.getStatusMessage());
}

Now we are going to see how to deploy a JAX-WS web service on a Tomcat server. We are going to deploy the AuthenticationService developed above (originally published at http://sivalabs.blogspot.com/2011/09/developing-webservices-using-jax-ws.html) on apache-tomcat-6.0.32. To deploy our AuthenticationService we need to add the following configuration.
1. web.xml:

<web-app>
  <listener>
    <listener-class>com.sun.xml.ws.transport.http.servlet.WSServletContextListener</listener-class>
  </listener>
  <servlet>
    <servlet-name>authenticationService</servlet-name>
    <servlet-class>com.sun.xml.ws.transport.http.servlet.WSServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>authenticationService</servlet-name>
    <url-pattern>/services/AuthenticationService</url-pattern>
  </servlet-mapping>
</web-app>

2. Create a new file WEB-INF/sun-jaxws.xml:

<?xml version="1.0" encoding="UTF-8"?>
<endpoints xmlns="http://java.sun.com/xml/ns/jax-ws/ri/runtime" version="2.0">
  <endpoint
      name="AuthenticationService"
      implementation="com.sivalabs.caas.services.AuthenticationServiceImpl"
      url-pattern="/services/AuthenticationService"/>
</endpoints>

3. Download the JAX-WS reference implementation from http://jax-ws.java.net/ and copy all the jar files from the jaxws-ri/lib folder to WEB-INF/lib.

Now deploy the application on the Tomcat server. You don't need to publish the service yourself as we did with the EndpointPublisher. Once Tomcat is up and running, see the generated WSDL at http://localhost:8080/CAAS/services/AuthenticationService?wsdl. If you now test the AuthenticationService using the standalone client, it will work fine.
public static void testAuthenticationService() throws Exception {
    URL wsdlUrl = new URL("http://localhost:8080/CAAS/services/AuthenticationService?wsdl");
    QName qName = new QName("http://sivalabs.blogspot.com/services/AuthenticationService",
            "AuthenticationService");
    Service service = Service.create(wsdlUrl, qName);
    AuthenticationService port = service.getPort(AuthenticationService.class);

    Credentials credentials = new Credentials();
    credentials.setUserName("admin");
    credentials.setPassword("admin");
    AuthenticationStatus authenticationStatus = port.authenticate(credentials);
    System.out.println(authenticationStatus.getStatusMessage());
}

But if you try to test with the client code generated by the wsimport tool, make sure you don't have the JAX-WS RI jars on the client classpath. Otherwise you will get the error below:

Exception in thread "main" java.lang.NoSuchMethodError: javax.xml.ws.WebFault.messageName()Ljava/lang/String;
    at com.sun.xml.ws.model.RuntimeModeler.processExceptions(RuntimeModeler.java:1162)
    at com.sun.xml.ws.model.RuntimeModeler.processDocWrappedMethod(RuntimeModeler.java:898)

Reference: Developing WebServices using JAX-WS & Deploying JAX-WS WebService on Tomcat-6 from our JCG partner Siva Reddy at the My Experiments on Technology blog.

Key accomplishments of Eclipse over last 10 years

As I have written, Eclipse is celebrating 10 years of open source and community during the month of November. There have been a number of milestones that have shaped the Eclipse community, but what have been the major accomplishments? What has Eclipse done to actually change the software industry? Here is what I see as some of the key accomplishments of Eclipse.

1. Dominant Java IDE. Eclipse started out as a really great Java IDE and continues today to be the market leader for Java IDEs. If you think back to the late 1990s and early 2000s, the Java IDE market was a dog-fight between Borland JBuilder, Visual Cafe and IBM VisualAge for Java. Eclipse is now the clear leader, with approximately 65% market share in the Java IDE market.

2. De-facto Solution for C/C++ Tools. If you are building a tooling solution for C/C++ developers, there is a very good chance you are using Eclipse CDT as the platform. In the realtime operating system and embedded development market, Eclipse CDT has become the de-facto standard. There are at least 50 companies building their developer tools solutions based on CDT.

3. A large and innovative modeling community. I am not sure how to quantify it, but I believe the Eclipse modeling community has grown to become one of the largest and most innovative communities at Eclipse. If you are doing modeling, chances are you are using the Eclipse Modeling Framework (EMF). However, EMF is just the core that has created a really amazing community of innovation and diversity at Eclipse Modeling. There are over 70 modeling projects at Eclipse, and I know of a lot more not hosted at Eclipse. It is a great success.

4. Integrating ALM Tools. The Mylyn project has grown into the industry hub for integrating tools across the application lifecycle. There are now over 70 different Mylyn connectors that integrate different tools into the developer desktop.

5. Modular runtimes.
Equinox and the EclipseRT projects demonstrate how modularity can work on a large scale. Everything at Eclipse is based on Equinox, since it is the OSGi runtime. The Equinox and EclipseRT top-level projects have also spawned an industry around Eclipse RCP and server-side OSGi. The range of applications being built on RCP is impressive, including the NASA Mars Rover, financial institutions, aircraft design, genome decoding, etc. On the server side, Equinox is used by most enterprise Java application servers, and Virgo is emerging as a new Equinox-based platform.

6. Eclipse Release Train. The Eclipse release trains have demonstrated that open source communities can be predictable and scale to large distributed teams. This is incredibly important as larger, more conservative companies become involved in open source. Very few other organizations can claim a track record of predictability and scale that compares to the Eclipse community's.

7. Eclipse Ecosystem. Eclipse has become one of the two major developer tools platforms in the industry, MS Visual Studio being the other. No matter what language you are using, there is most likely an Eclipse IDE for you. No matter what developer tool you are using, there is probably an Eclipse plugin. No other platform has been able to create such a diverse and large ecosystem. It has actually made building and integrating developer tools a lot easier!

It has been an incredible 10 years, and the community should be proud of what has been accomplished. I wasn't around at the start of Eclipse, but I have to believe the accomplishments of Eclipse have been more than anyone expected. I know this is not a complete list and I am sure I am missing some projects; feel free to add suggestions in the comments.

Reference: Key accomplishments of Eclipse over last 10 years from our JCG partner Ian Skerrett at the Ian Skerrett's blog.

Here Is The Main Reason Why You Suck At Interviews

I've talked about interviews from one perspective or another on several occasions; you might even say it is a pet subject of mine. It's fascinating because most people are no good at interviews, and when it comes to developer interviews – well, let's just say there is a whole new dimension for us to suck at, with coding questions, whiteboards and whatnot. Of course, the other side of the equation is not pristine here; the interviewer can be just as much to blame for a terrible interview, either through lack of training, empathy, preparation or a host of other reasons, but that's a whole separate discussion. So, why are we so bad at interviews? You can probably think of quite a few reasons straight away:

- it is a high pressure situation, you were nervous
- you just didn't "click" with the interviewer
- you were asked all the "wrong" questions
- sometimes you just have a bad day

In fact, you can often work yourself into a self-righteous frenzy after a bad interview, thinking how every circumstance seemed to conspire against you, it was beyond your control, there was nothing you could do – hell, you didn't even want to work for that stupid company anyway! But, deep down, we all know that those excuses are just so much bullshit. The truth is there were many things we could have done, but by the time the interview started it was much too late. I'll use a story to demonstrate. The least stressful exam I've ever had was a computing theory exam in the second year of my computer science degree. I never really got "into it" during the semester, but a few weeks before exam time – for some inexplicable reason – I suddenly found the subject fascinating. I spent hours reading and hours more playing with grammars and automata. Long story short, when exam time rolled around I knew the material backwards – I grokked it. There was some anxiety (you can't eliminate it fully), but I went into the exam reasonably confident I'd blitz it (which I did).
Practice and preparation made all the difference. Of course, this is hardly a revelation; everyone knows that if you study and practice you'll do well (your parents wouldn't shut up about it for years :)). Interviews are no different from any other skill/subject in this respect: preparation and practice are key.

Can You Wing It?

The one difference with interviews is that they are an occasional skill, almost meaningless most of the time, but of paramount importance once in a long while. It just doesn't seem like it's worth the effort to get good at them, especially if you happen to already have a job at the time (who knows, you may not need those skills for years). There are plenty of other subjects clamouring for your attention, and anyway, every interview is different; you can never predict what's gonna happen, so it would be stupid to waste your time trying. No – good communication skills and decent software development knowledge will see you through, right? Except they don't, and it won't. Sure, you might be able to stave off total disaster, but without preparation and practice, you're mostly relying on luck. Things "click"; you get asked the "right" questions and are able to think of decent answers just in time. This is how most people get hired. As soon as the process gets a little more rigorous/scientific, so many candidates get weeded out that companies like Google, Facebook, Twitter etc. find themselves trying to steal people from each other, since they know that those who have passed the rigorous interview processes of their competitors must be alright. The interesting thing is that the rejected candidates are not necessarily worse; they are often simply a lot less prepared and a little less lucky. Over the last few years presentation skills have seen quite a lot of press. Many a blog post and much literature has come out (e.g. Presentation Zen and Confessions of a Public Speaker are both great books).
Many people have decent knowledge of their subject area and good communication skills, so they think they are excellent presenters – they are wrong. They put together some slides in a few hours and think their innate abilities will carry them through, but inevitably their presentations end up disjointed, mistargeted, boring or amateurish. Sometimes they sail through on luck – circumstances conspire and the presentation works – but these situations are not common. Malcolm Gladwell is a master presenter; he is one of the most highly sought after and highly paid speakers in the world (and has written a bunch of awesome books to boot) – this is not by chance. Without doubt he knows his stuff and has better communication skills than the majority of speakers out there, and yet all his talks are rigorously prepared for and practiced. To my mind, the situation with interviews is similar to that of presentations, except the deluge of literature about interviews goes almost unnoticed since they are old-hat. The digital world hasn't changed the interview process too significantly (unlike the public speaking process), except the internet age brings all the old advice together in one place for us, and all of that advice is still surprisingly relevant.

The Old-School Advice

Everyone (and I mean everyone) always says that you should research the company you'll be interviewing with beforehand. You would think people would have this one down by now, especially developers, cause we're smart, right? Nope, no such luck; just about everyone who rocks up for an interview knows next to nothing about the company they are trying to get a job at, unless the company is famous, in which case people are just full of hearsay.
But hearsay is no substitute for a bit of research, and it is so easy. I am reminded of an article Shoemoney wrote about getting press (well worth a read by the way, if you're trying to promote a product/service) – there is a tremendous amount of info you can find out about a person by trawling the web for a bit, and it is just as easy to learn something about a company. I mean, we do work in software, so any company you may want to work for should have a web presence and a half. And even if web info is sparse, there is your social network, or picking up the phone and seeing if someone will trade a coffee for some info. Yeah, you go to a bit of trouble, but the fact that you did will be apparent in an interview. I mean, there is a reason why I work where I work and I'd prefer to work with other people who give a shit (everyone would) – you savvy? Of course, if you do go to the trouble to find out what skills/tech/knowledge/processes a company is looking for/values, you may be able to anticipate where an interview might head; the value there should be self-evident. Which leads me to interview questions. When it comes to developers, there are three types of questions we struggle with/despise:

- the behavioural/culture fit question
- the coding question
- the Mount Fuji question

With a bit of practice you can blitz all of these. The Fujis are the hardest, but even they can be prepared for – I'll get back to those shortly.

Behavioural Questions

The behavioural questions seem annoyingly difficult but are actually the easiest. You know the ones: "Give me an example of a time when you demonstrated leadership/communication skills/problem solving/conflict resolution". It is always the same question, with a different attribute of yours that you have to spruik. Being in essence the same question, you can address all of them with what amounts to the same answer, once again substituting a different attribute.
These are actually difficult to handle on the spot, but if you have something prepared it is a breeze. Example:

"There was a time, when Bill the developer was being an obstinate bastard and wouldn't buy in to the awesome that the rest of us were peddling, but I took charge of the situation and was able to convince Bill blah, blah …." – leadership

"Bill the contrary developer just wouldn't agree with the way we were dishing out the awesome, but I took Bill out for coffee and we hashed it out one on one blah, blah …" – communication

"There was insufficient awesome to go around and Bill and Joe just couldn't share it and were coming to blows, but I stepped in and took them both to a whiteboard, we had a long chat blah, blah …" – conflict resolution

As long as you think up a situation beforehand, you can adapt it to answer any behavioural question. The situation can even be fictitious, but you do need to think through it carefully for 10-15 minutes to be able to come up with something coherent. You will never have time to frame a coherent reply in the interview itself. Of course, it is best to have a few situations prepared just so you can change it up a bit; variety never hurt anyone. If the company you're interviewing with is enlightened they won't ask you these questions, but will instead focus on your dev skills. But there are many companies and few are enlightened, so you might as well prepare for the worst case and be pleasantly surprised if the best case happens.

Coding Questions

Talking about dev skills, the one thing that just about every company that hires developers will do is ask a coding question at some stage. These usually take the form of a sandbox question or what I call a Uni question. You know the ones, "Reverse a list", "Implement a linked list" – it's as if they are under the impression you studied computer science at some point, go figure :). People struggle here, because you just don't come across this kind of question in your day-to-day work.
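To make that concrete, here is a sketch of the classic "Reverse a list" exercise as an in-place singly linked list reversal in Java. The Node class and method names are my own illustration of the kind of answer an interviewer might expect, not taken from any particular interview:

```java
// A classic "Uni question": reverse a singly linked list in place.
public class ReverseList {

    static class Node {
        int value;
        Node next;

        Node(int value, Node next) {
            this.value = value;
            this.next = next;
        }
    }

    // Walk the list once, re-pointing each node at its predecessor.
    // O(n) time, O(1) extra space.
    static Node reverse(Node head) {
        Node prev = null;
        while (head != null) {
            Node next = head.next; // remember the rest of the list
            head.next = prev;      // flip the pointer
            prev = head;
            head = next;
        }
        return prev;               // prev is the new head
    }

    // Helper to render a list as "a->b->c" for easy checking.
    static String render(Node head) {
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next) {
            if (sb.length() > 0) sb.append("->");
            sb.append(n.value);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Node list = new Node(1, new Node(2, new Node(3, null)));
        System.out.println(render(reverse(list))); // prints 3->2->1
    }
}
```

Ten minutes of practice on a handful of these and the pattern (track prev, current, next) sticks, which is exactly the point the article is making.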
If they asked you to implement a Twitter or Facebook clone, you could really show them your chops, but balancing a binary tree – who the hell remembers how to do that? And that's the rub: you probably could dredge the information from the depths of your grey matter, but by the time you do, the interview will have been long over. Because you don't do this kind of question daily, your brain has dumped the info you need to tape and sent it to Switzerland (cause backups should be kept off-premises). The answer is simple; you gotta practice these questions well in advance of the interview. Preparation baby, that's the ticket. Preferably you should be doing them regularly regardless of your employment status, cause those questions are fun and bite-sized and give your brain a bit of a workout – you'll be a better developer for it. The most interesting thing though is this: do enough of them and you won't really encounter anything new in an interview. There are really not that many themes when it comes to these questions; it will be the same formulae, you'll just need to plug in different numbers (a simplification, but not too far from reality). I really gotta follow my own advice here. If you seriously want a leg up, there are books specifically about this – I prefer Cracking the Coding Interview but there is also Programming Interviews Exposed – read them, do the questions.

Puzzles

Mount Fuji questions are the most controversial, but regardless of whether you hate them or not, even they can be practiced. Yeah, alright, maybe you're willing to walk out if any company ever dares to ask you about manhole covers, but I'd much rather answer the questions and then walk out in self-righteous satisfaction, rejecting the offers they'll be throwing at my feet, than storm out in self-righteous anger knowing deep down that I was a wuss for doing so.
And anyway, I'd like to think that I've already demonstrated that not all Mount Fuji questions are created equal: some are coding problems, others deal with concurrency, and others still might be back-of-the-envelope questions (the story of how these originated is actually pretty interesting; I am writing it up as we speak). The subset of questions that are simply a useless puzzle is a lot smaller than you might think. Doing these kinds of questions in your spare time is just another way to build your skills, with the side benefit that you become an interview-proof individual. Of course, many people also enjoy an occasional puzzle – I'm just saying. There is still lots more to say – I haven't even begun to talk about attitude – but this is already a tl;dr candidate, so I'll leave that discussion for another time. Let's sum up: if you feel that you suck at interviews, it's because you didn't prepare well enough and haven't had enough practice – it is that simple. As an interviewer it is most disheartening to see an unprepared candidate (there are just so many of them); on the other hand, a person who is clearly on the ball is just awesome. And I am well aware that practicing interviews is hard – there is no Toastmasters equivalent for that particular skill – but thorough preparation can significantly mitigate that issue. Even a few minutes of preparation will put you head and shoulders above most other people, since the vast majority don't bother at all. So, take the time to practice/prepare if you have an interview on the horizon; be smart about it and it is bound to pay off, not to mention being a better experience for the interviewer as well – more fun and less stress for everyone. Reference: Here Is The Main Reason Why You Suck At Interviews from our JCG partner Alan Skorkin at the Developing in the Google Cloud blog....

Apache Thrift Quickstart Tutorial

Thrift is a cross-language RPC framework initially developed at Facebook, now open sourced as an Apache project. This post will describe how to write a Thrift service and client in different modes such as blocking, non-blocking and asynchronous. (I felt the latter two modes are less documented and needed some tutorial-type introduction, hence the motivation for this post). To easily follow the tutorial, it's beneficial to have a basic understanding of the Thrift architecture, consisting of Transports, Protocols and Processors. (A good paper can be found at [1]). Here I will be using Thrift version 0.7 and Thrift's Java binding.

Thrift Installation

Installation instructions can be found at http://wiki.apache.org/thrift/ThriftInstallation. To sum up the Ubuntu installation steps:

1. Install the required dependencies:
   $ sudo apt-get install libboost-dev libboost-test-dev libboost-program-options-dev libevent-dev automake libtool flex bison pkg-config g++ libssl-dev
2. Go to the installation root directory.
3. $ ./configure
4. $ make
5. Become super user and
   $ make install

Now let's get on with creating the service and consuming it.

Service Definition

Here a service with simple arithmetic operations is defined. Note the use of the typedef directive to declare alternative names for the base types i64 and i32. Add the following in a file named 'arithmetic.thrift'.

namespace java tutorial.arithmetic.gen  // define namespace for java code

typedef i64 long
typedef i32 int

service ArithmeticService {  // defines simple arithmetic service
    long add(1:int num1, 2:int num2),
    long multiply(1:int num1, 2:int num2),
}

Code will be generated under the 'tutorial.arithmetic.gen' package. Now generate the Java code using the following command line:

$ thrift --gen java arithmetic.thrift

The source tutorial.arithmetic.gen.ArithmeticService.java will be generated.

Blocking Mode

Let's create a blocking-mode server and a client to consume the service. First we need to implement the service using the generated service skeleton.
The interface to implement is ArithmeticService.Iface.

public class ArithmeticServiceImpl implements ArithmeticService.Iface {

    public long add(int num1, int num2) throws TException {
        return num1 + num2;
    }

    public long multiply(int num1, int num2) throws TException {
        return num1 * num2;
    }
}

With that done, let's create the Thrift server which will serve requests for this service. Remember this is a blocking server, so the server threads doing I/O will wait.

public class Server {

    private void start() {
        try {
            TServerSocket serverTransport = new TServerSocket(7911);
            ArithmeticService.Processor processor =
                    new ArithmeticService.Processor(new ArithmeticServiceImpl());
            TServer server = new TThreadPoolServer(
                    new TThreadPoolServer.Args(serverTransport).processor(processor));
            System.out.println("Starting server on port 7911 ...");
            server.serve();
        } catch (TTransportException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        Server srv = new Server();
        srv.start();
    }
}

Here the TThreadPoolServer implementation is used, which utilizes a thread pool to serve incoming requests. Now let's write the client.

public class ArithmeticClient {

    private void invoke() {
        TTransport transport;
        try {
            transport = new TSocket("localhost", 7911);
            TProtocol protocol = new TBinaryProtocol(transport);
            ArithmeticService.Client client = new ArithmeticService.Client(protocol);
            transport.open();

            long addResult = client.add(100, 200);
            System.out.println("Add result: " + addResult);
            long multiplyResult = client.multiply(20, 40);
            System.out.println("Multiply result: " + multiplyResult);

            transport.close();
        } catch (TTransportException e) {
            e.printStackTrace();
        } catch (TException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        ArithmeticClient c = new ArithmeticClient();
        c.invoke();
    }
}

TBinaryProtocol is used for encoding the data transferred between server and client. Now start the server and invoke the service using the client to see the results.
Non Blocking Mode

Now let's create a non-blocking server, which uses Java non-blocking I/O underneath. We can use the same service implementation as before (ArithmeticServiceImpl).

public class NonblockingServer {

    private void start() {
        try {
            TNonblockingServerTransport serverTransport = new TNonblockingServerSocket(7911);
            ArithmeticService.Processor processor =
                    new ArithmeticService.Processor(new ArithmeticServiceImpl());
            TServer server = new TNonblockingServer(
                    new TNonblockingServer.Args(serverTransport).processor(processor));
            System.out.println("Starting server on port 7911 ...");
            server.serve();
        } catch (TTransportException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        NonblockingServer srv = new NonblockingServer();
        srv.start();
    }
}

Here TNonblockingServerSocket is used, which encapsulates a ServerSocketChannel. The code for the non-blocking client is as follows.

public class NonblockingClient {

    private void invoke() {
        TTransport transport;
        try {
            transport = new TFramedTransport(new TSocket("localhost", 7911));
            TProtocol protocol = new TBinaryProtocol(transport);
            ArithmeticService.Client client = new ArithmeticService.Client(protocol);
            transport.open();

            long addResult = client.add(100, 200);
            System.out.println("Add result: " + addResult);
            long multiplyResult = client.multiply(20, 40);
            System.out.println("Multiply result: " + multiplyResult);

            transport.close();
        } catch (TTransportException e) {
            e.printStackTrace();
        } catch (TException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        NonblockingClient c = new NonblockingClient();
        c.invoke();
    }
}

Note the usage of TFramedTransport wrapping the normal TSocket transport. The non-blocking server requires the client to use TFramedTransport, which frames the data sent over the wire. Fire up the server and send a request using the client. You will see the same results as before, this time using non-blocking mode.

Asynchronous Mode

We can write asynchronous clients to call a Thrift service.
A callback needs to be registered, which will get invoked on successful completion of the request. The blocking-mode server didn't work with the asynchronous client (the method invocation returns with an empty response). Maybe it's because we are using TNonblockingSocket at the client side – see the construction of ArithmeticService.AsyncClient – so this may be the proper behaviour. The non-blocking server seems to work without an issue, so you can use the non-blocking server from earlier with the client shown below.

public class AsyncClient {

    private void invoke() {
        try {
            ArithmeticService.AsyncClient client = new ArithmeticService.AsyncClient(
                    new TBinaryProtocol.Factory(), new TAsyncClientManager(),
                    new TNonblockingSocket("localhost", 7911));

            client.add(200, 400, new AddMethodCallback());

            client = new ArithmeticService.AsyncClient(
                    new TBinaryProtocol.Factory(), new TAsyncClientManager(),
                    new TNonblockingSocket("localhost", 7911));
            client.multiply(20, 50, new MultiplyMethodCallback());

        } catch (TTransportException e) {
            e.printStackTrace();
        } catch (TException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        AsyncClient c = new AsyncClient();
        c.invoke();
    }

    class AddMethodCallback
            implements AsyncMethodCallback<ArithmeticService.AsyncClient.add_call> {

        public void onComplete(ArithmeticService.AsyncClient.add_call add_call) {
            try {
                long result = add_call.getResult();
                System.out.println("Add from server: " + result);
            } catch (TException e) {
                e.printStackTrace();
            }
        }

        public void onError(Exception e) {
            System.out.println("Error : ");
            e.printStackTrace();
        }
    }

    class MultiplyMethodCallback
            implements AsyncMethodCallback<ArithmeticService.AsyncClient.multiply_call> {

        public void onComplete(ArithmeticService.AsyncClient.multiply_call multiply_call) {
            try {
                long result = multiply_call.getResult();
                System.out.println("Multiply from server: " + result);
            } catch (TException e) {
                e.printStackTrace();
            }
        }

        public void onError(Exception e) {
            System.out.println("Error : ");
            e.printStackTrace();
        }
    }
}

Two callbacks have been defined, corresponding to each operation of the service. Note the usage of two client instances for the two invocations. Each invocation needs a separate client instance, or otherwise the client will fail with the following exception: "Exception in thread "main" java.lang.IllegalStateException: Client is currently executing another method: tutorial.arithmetic.gen.ArithmeticService$AsyncClient$add_call" So this wraps up my quick start on Thrift with different modes of operation. Hope somebody may find this useful. For any suggestions or corrections do not hesitate to comment. [1] http://thrift.apache.org/static/thrift-20070401.pdf Reference: Apache Thrift Quickstart Tutorial from our JCG partner Buddhika Chamith at the Source Open blog. ...

Building Rich Clients with JacpFX and JavaFX2

Creating fast and scalable desktop clients is always a challenge, particularly when dealing with large amounts of data and long-running tasks. Although Eclipse RCP and Netbeans RCP are established platforms, the idea was to build a lightweight framework that handles components asynchronously, similar to web components. Developers should spend less effort on threading topics and should be able to model the application's message flow on their own. These, and many other ideas, resulted in the JacpFX project.

JacpFX

The JacpFX (Java Asynchronous Client Platform) project is a framework to create Rich Clients (desktop and/or web) in MVC style with JavaFX 2, Spring and an actor-like component approach. It provides a simple API to create a workspace, perspectives and components, to communicate with all parts, and to compose your client application easily.

What does JacpFX offer to you?

- Implement scaling and responsive Rich Clients against a small and simple API (~120Kb) in Java
- Fully integrated with Spring and JavaFX 2
- Add/move/remove defined Components at run-time in your UI
- Create your basic layout in Perspectives and define "placeholders" for your Component UI
- Handle Components outside the FX application thread
- "handle" methods are executed in a worker thread and a "postHandle" method is executed by the FX application thread
- Stateful and stateless callback non-UI components
- Outsource long-running processes and computation
- Handle asynchronous processes easily
- No need for explicit threading or things like Runtime.invoke()
- Communicate through asynchronous messages
- No shared data between Components
- Immutable messages

The following article will give you a short introduction to JacpFX and how to create JavaFX 2 based Rich Clients with it.
The example client shown here is a (pseudo) contact manager that you can try out here: http://developer.ahcp.de/demo/JACPFX2Demo.html. The complete source code can be downloaded here: http://code.google.com/p/jacp/downloads/list

Pre-requirements: A valid JavaFX2 runtime must be installed to run the demo application (currently available on Windows only). To compile the attached demo code, a JavaFX2 SDK (also available for Mac and Linux) and Apache Maven are assumed to be installed. See the detailed instructions at the end of this article.

Image 1: Select a category for the first time and the client will ask you to generate 250,000 contacts. While those contacts are generated and incrementally added to the table view, you can select the next category, browse to the next table page, view the consumer chart data or just edit a contact. Keep in mind that no explicit threading was used in the demo-client code.

Application – Structure

A JacpFX application is composed of an application launcher, a workbench, at least one perspective and at least one component. The example client uses three UI components and three non-UI (callback) components to create the data and to simulate heavy data access.

The Application – Launcher (AFX2SpringLauncher)

The application launcher is the main class of your application, where you define the Spring context. A JacpFX application uses XML declarations to define the hierarchy of the application and all the metadata, like "component-id" and "execution-target".
The Spring main.xml is located in the resources directory and is declared in the launcher's constructor:

public class ContactMain extends AFX2SpringLauncher {

    public ContactMain() {
        super("main.xml");
    }

    public static void main(String[] args) {
        Application.launch(args);
    }

    @Override
    public void postInit(Stage stage) {
        // define your css and other config stuff here
    }
}

Listing 1: application launcher

The Application – Workbench (AFX2Workbench)

The workbench is the UI root node of a JacpFX application. Here you configure the application, set the resolution, define toolbars and menus, and have a reference to the JavaFX stage.

public class ContactWorkbench extends AFX2Workbench {

    @Override
    public void handleInitialLayout(IAction<Event, Object> action,
            IWorkbenchLayout<Node> layout, Stage stage) {
        layout.setWorkbenchXYSize(1024, 768);
        layout.registerToolBar(ToolbarPosition.NORTH);
        layout.setMenuEnabled(true);
    }

    @Override
    public void postHandle(FX2ComponentLayout layout) {
        final MenuBar menu = layout.getMenu();
        final Menu menuFile = new Menu("File");
        final MenuItem itemHelp = new MenuItem("Help");
        // add the event listener and show an option-pane with some help text
        menuFile.getItems().add(itemHelp);
        menu.getMenus().addAll(menuFile);
    }
}

Listing 2: Workbench

The Perspective (AFX2Perspective)

The task of a perspective is to provide the layout of the current view and to register the root and all leaf nodes of the view. The leaf nodes are the "execution-targets" (containers) for all components associated with (injected into) this perspective.
The demo perspective basically defines two SplitPanes and registers them as "execution-targets" to display the content of the ContactTreeView on the left, the ContactTableView at top right and the ContactChartView at bottom right.

Image 2: Three targets defined by the perspective in the demo

public class ContactPerspective extends AFX2Perspective {

    @Override
    public void onStartPerspective(FX2ComponentLayout layout) {
        // create button in toolbar; button should switch top and bottom id's
        ToolBar north = layout.getRegisteredToolBar(ToolbarPosition.NORTH);
        ...
        north.getItems().add(new Button("switch view"));
    }

    @Override
    public void onTearDownPerspective(FX2ComponentLayout layout) {
    }

    @Override
    public void handlePerspective(IAction<Event, Object> action,
            FX2PerspectiveLayout perspectiveLayout) {
        if (action.getLastMessage().equals(MessageUtil.INIT)) {
            createPerspectiveLayout(perspectiveLayout);
        }
    }

    private void createPerspectiveLayout(FX2PerspectiveLayout perspectiveLayout) {
        // define your UI layout ...
        // register root component
        perspectiveLayout.registerRootComponent(mainLayout);
        // register left menu
        perspectiveLayout.registerTargetLayoutComponent("PleftMenu", leftMenu);
        // register main content top
        perspectiveLayout.registerTargetLayoutComponent("PmainContentTop", mainContentTop);
        // register main content bottom
        perspectiveLayout.registerTargetLayoutComponent("PmainContentBottom", mainContentBottom);
    }
}

Listing 3: Perspective

The UI-Components (AFX2Component)

AFX2Components are the actual UI components in a JacpFX application, rendered as JavaFX components. The demo defines a left (ContactTreeView), a main-top (ContactTableView) and a main-bottom (ContactDemoChartView) AFX2Component. A JacpFX component has four methods to implement: "onStartComponent" and "onTearDownComponent" as well as "handleAction" and "postHandleAction".
While "handleAction" is executed in a worker thread, "postHandleAction" runs in the FX application thread.

Image 3: Component lifecycle of an AFX2Component

You can use lazy initialization of components and shut down or restart components if you want.

public class ContactTreeView extends AFX2Component {

    private ObservableList<Contact> contactList;
    ...

    @Override
    public Node handleAction(final IAction<Event, Object> action) {
        if (action.getLastMessage().equals(MessageUtil.INIT)) {
            this.pane = new ScrollPane();
            ...
            return this.pane;
        }
        return null;
    }

    @Override
    public Node postHandleAction(final Node node, final IAction<Event, Object> action) {
        if (action.getLastMessage() instanceof Contact) {
            this.contactList.addAll((Contact) action.getLastMessage());
        }
        return this.pane;
    }

    @Override
    public void onStartComponent(final FX2ComponentLayout layout) {
        final ToolBar north = layout.getRegisteredToolBar(ToolbarPosition.NORTH);
        ...
        north.add(new Button("add category"));
    }

    @Override
    public void onTearDownComponent(final FX2ComponentLayout layout) {
    }
    ...
}

Listing 4: ContactTreeView (the left view)

The "handleAction" method in Listing 4 is used to initialize the component UI. In the "postHandleAction" method of the demo, new contacts are added; the contactList is bound to the existing TreeView, so you can't update it outside the FX application thread and must therefore use the "postHandleAction" method.

The Callback Components

JacpFX callback components are non-UI/service components. Similar to an AFX2Component, they have a method called "handle" that is executed in a worker thread. The result can be any type of object and will be automatically delivered back to the calling component or redirected to any other component. In these types of components you execute long-running tasks, invoke service calls or simply retrieve data from storage. JacpFX provides two types of callback components, an "AStatelessCallbackComponent" and a (stateful) "ACallbackComponent".
The demo client uses an "ACallbackComponent" to generate the random chart data for a contact selected in the table. To generate the large amount of contacts, two "AStatelessCallbackComponents" are involved: one component divides the total amount into chunks, and the second one creates the contacts. The result is sent to the UI component directly and added to the table.

Messaging

Messaging is the essence of the JacpFX framework. JacpFX uses asynchronous messaging to notify components and perspectives of your application. The message flow to create the initial amount of data for a category in the demo application looks like this:

Image 4: Message flow to create the initial amount of data in the demo application

The state of a JacpFX component is always changed by messages. If a task should be handled in a background thread, you simply send a message to a component and process the work in the "handle" method. The result can be sent back to the caller component or be processed by the FX application thread in the "postHandle" method (in case of UI components). You should always avoid executing long-running tasks on the FX application thread; instead, call your service or DB operation inside the "handle" method. JacpFX distinguishes between two message types, local and global messages.

Local messages

To trigger a local message, simply get a listener

IActionListener<EventHandler<Event>, Event, Object> listener = this.getActionListener("message");

with only one argument – the message itself. This listener can be assigned to any JavaFX EventHandler (like onMouseEvent, etc.) or it can be fired by invoking

listener.performAction(event);

Global messages

With global messages you communicate with other registered components. Callback components respond to a message by default, so you don't need to create a response message explicitly – though you could. Messaging is the preferred way to exchange data between components and to trigger tasks.
Similar to local messages, you create a listener instance, but with an explicit target id:

IActionListener<EventHandler<Event>, Event, Object> listener = this.getActionListener("id", "message");

Build the demo application from source

Register JavaFX in your local Maven repository

All JacpFX projects are Maven projects and require JavaFX 2 (jfxrt.jar). Some include JavaFX 2 as a system dependency, but we prefer to register the jfxrt.jar in the local repository. To create deployable files (jnlp and html) you additionally need to register the JavaFX Ant tasks (ant-javafx.jar). To do so, change to your ${SDK-home}/rt/lib and type:

mvn install:install-file -Dfile=jfxrt.jar -DgroupId=com.oracle -DartifactId=javafx-runtime -Dpackaging=jar -Dversion=2.0

Then copy the “bin” directory (on Linux i386) to your .m2\repository\com\oracle\javafx-runtime folder. Next, change to your ${SDK-home}/tools directory and type:

mvn install:install-file -Dfile=ant-javafx.jar -DgroupId=com.oracle -DartifactId=ant-javafx -Dpackaging=jar -Dversion=2.0

Build the project

To build the project, simply unzip the project folder and type mvn package. The jar, jnlp and html files are created in ${projectHome}/target/deploy. To create an Eclipse project, simply type mvn eclipse:eclipse.

What’s coming next?

JacpFX is currently at version 1.0; after many years of prototyping with Echo3 and Swing, JacpFX is the first stable release based on JavaFX 2 and has a defined release plan. You can find detailed documentation on the project’s Wiki page (http://code.google.com/p/jacp/wiki/Documentation). Our release plan (also located at the Wiki) schedules version 1.1 for June this year. Major changes will be annotation support and official FXML support (although you can already use FXML). Feedback is always welcome, so feel free to contact us.

Reference: Building Rich Clients with JacpFX and JavaFX2 from our W4G partner Andy Moncsek....

Integration Framework Comparison – Spring Integration, Mule ESB or Apache Camel

Data exchange between companies is increasing a lot, and so is the number of applications which must be integrated. The interfaces use different technologies, protocols and data formats. Nevertheless, the integration of these applications shall be modeled in a standardized way, realized efficiently and supported by automatic tests. Three integration frameworks are available in the JVM environment which fulfil these requirements: Spring Integration, Mule ESB and Apache Camel. They implement the well-known Enterprise Integration Patterns (EIP, http://www.eaipatterns.com) and therefore offer a standardized, domain-specific language to integrate applications. These integration frameworks can be used in almost every integration project within the JVM environment – no matter which technologies, transport protocols or data formats are used. All integration projects can be realized in a consistent way without redundant boilerplate code. This article compares all three alternatives and discusses their pros and cons. If you want to know when to use a more powerful Enterprise Service Bus (ESB) instead of one of these lightweight integration frameworks, you should read this blog post: http://www.kai-waehner.de/blog/2011/06/02/when-to-use-apache-camel/ (it explains when to use Apache Camel, but the title could also be „When to use a lightweight integration framework“).

Comparison Criteria

Several criteria can be used to compare these three integration frameworks:

- Open source
- Basic concepts / architecture
- Testability
- Deployment
- Popularity
- Commercial support
- IDE support
- Error handling
- Monitoring
- Enterprise readiness
- Domain-specific language (DSL)
- Number of components for interfaces, technologies and protocols
- Expandability

Similarities

All three frameworks have many similarities. Therefore, many of the above comparison criteria come out even! All implement the EIPs and offer a consistent model and messaging architecture to integrate several technologies.
No matter which technologies you have to use, you always do it the same way, i.e. same syntax, same API, same automatic tests. The only difference is the configuration of each endpoint (e.g. JMS needs a queue name while JDBC needs a database connection URL). IMO, this is the most significant feature. Each framework uses different names, but the idea is the same. For instance, „Camel routes“ are equivalent to „Mule flows“, and „Camel components“ are called „adapters“ in Spring Integration. Besides, several other similarities exist which differentiate them from heavyweight ESBs. You just have to add some libraries to your classpath. Therefore, you can use each framework everywhere in the JVM environment. It does not matter if your project is a Java SE standalone application, or if you want to deploy it to a web container (e.g. Tomcat), a JEE application server (e.g. Glassfish), an OSGi container or even to the cloud. Just add the libraries, do some simple configuration, and you are done. Then you can start implementing your integration stuff (routing, transformation, and so on). All three frameworks are open source and offer familiar public features such as source code, forums, mailing lists, issue tracking and voting for new features. Good communities write documentation, blogs and tutorials (IMO Apache Camel has the most noticeable community). Only the number of released books could be better for all three. Commercial support is available via different vendors:

- Spring Integration: SpringSource (http://www.springsource.com)
- Mule ESB: MuleSoft (http://www.mulesoft.org)
- Apache Camel: FuseSource (http://fusesource.com) and Talend (http://www.talend.com)

IDE support is very good; visual designers are even available for all three alternatives to model integration problems (and let them generate the code). Each of the frameworks is enterprise ready, because all offer the required features such as error handling, automatic testing, transactions, multithreading, scalability and monitoring.
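The “same API, only the endpoint configuration differs” idea can be illustrated with a framework-free sketch. None of this is code from Spring Integration, Mule or Camel — the Endpoint interface and the jms/file factory names are invented for the example — but it shows the shared design: the routing logic never changes, only the endpoint wiring does.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustration of the idea shared by all three frameworks: the route
// stays identical, only the endpoint configuration differs.
public class EndpointSketch {
    final List<String> delivered = new ArrayList<>(); // records what endpoints received

    // A minimal stand-in for a framework endpoint: something that consumes
    // a message. Real endpoints differ only in configuration (a JMS queue
    // name, a JDBC URL, a file directory, ...).
    interface Endpoint extends Consumer<String> {}

    Endpoint jms(String queueName) {
        return msg -> delivered.add("JMS[" + queueName + "] <- " + msg);
    }

    Endpoint file(String directory) {
        return msg -> delivered.add("FILE[" + directory + "] <- " + msg);
    }

    // The "route" is identical regardless of the target endpoint.
    static void route(String message, Endpoint target) {
        target.accept(message.trim()); // same transformation step for every endpoint
    }

    public static void main(String[] args) {
        EndpointSketch s = new EndpointSketch();
        route(" order-42 ", s.jms("videogameOrdersQueue"));
        route(" order-43 ", s.file("dvdOrders"));
        s.delivered.forEach(System.out::println);
    }
}
```

Swapping the JMS endpoint for the file endpoint changes one argument, not the route — which is exactly the uniformity the paragraph above describes.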
Differences

If you know one of these frameworks, you can learn the others very easily due to their shared concepts and many other similarities. Next, let’s discuss their differences to be able to decide when to use which one. The two most important differences are the number of supported technologies and the DSL(s) used. Thus, I will concentrate especially on these two criteria in the following. I will use code snippets implementing the well-known EIP „Content-based Router“ in all examples. Judge for yourself which one you prefer.

Spring Integration

Spring Integration is based on the well-known Spring project and extends the programming model with integration support. You can use Spring features such as dependency injection, transactions or security as you do in other Spring projects. Spring Integration is awesome if you already have a Spring project and need to add some integration stuff. It is almost no effort to learn Spring Integration if you know Spring itself. Nevertheless, Spring Integration only offers very rudimentary support for technologies – just „basic stuff“ such as File, FTP, JMS, TCP, HTTP or Web Services. Mule and Apache Camel offer many, many further components!
Integrations are implemented by writing a lot of XML code (without a real DSL), as you can see in the following code snippet:

<file:inbound-channel-adapter id="incomingOrders" directory="file:incomingOrders"/>

<payload-type-router input-channel="incomingOrders">
    <mapping type="com.kw.DvdOrder" channel="dvdOrders" />
    <mapping type="com.kw.VideogameOrder" channel="videogameOrders" />
    <mapping type="com.kw.OtherOrder" channel="otherOrders" />
</payload-type-router>

<file:outbound-channel-adapter id="dvdOrders" directory="dvdOrders"/>

<jms:outbound-channel-adapter id="videogameOrders" destination="videogameOrdersQueue" channel="videogameOrders"/>

<logging-channel-adapter id="otherOrders" level="INFO"/>

You can also use Java code and annotations for some stuff, but in the end, you need a lot of XML. Honestly, I do not like too much XML declaration. It is fine for configuration (such as JMS connection factories), but not for complex integration logic. At the least, it should be a DSL with better readability, but more complex Spring Integration examples are really tough to read. Besides, the visual designer for Eclipse (called integration graph) is ok, but not as good and intuitive as its competitors’. Therefore, I would only use Spring Integration if I already have an existing Spring project and must just add some integration logic requiring only „basic technologies“ such as File, FTP, JMS or JDBC.

Mule ESB

Mule ESB is – as the name suggests – a full ESB including several additional features instead of just an integration framework (you can compare it to Apache ServiceMix, which is an ESB based on Apache Camel). Nevertheless, Mule can be used as a lightweight integration framework, too – by just not adding or using any additional features besides the EIP integration stuff. Like Spring Integration, Mule only offers an XML DSL. At least it is much easier to read than Spring Integration, in my opinion. Mule Studio offers a very good and intuitive visual designer.
Compare the following code snippet to the Spring Integration code from above. It is more like a DSL than Spring Integration. This matters when the integration logic is more complex.

<flow name="muleFlow">
    <file:inbound-endpoint path="incomingOrders"/>
    <choice>
        <when expression="payload instanceof com.kw.DvdOrder" evaluator="groovy">
            <file:outbound-endpoint path="incoming/dvdOrders"/>
        </when>
        <when expression="payload instanceof com.kw.VideogameOrder" evaluator="groovy">
            <jms:outbound-endpoint queue="videogameOrdersQueue"/>
        </when>
        <otherwise>
            <logger level="INFO"/>
        </otherwise>
    </choice>
</flow>

The major advantage of Mule is some very interesting connectors to important proprietary interfaces such as SAP, Tibco Rendezvous, Oracle Siebel CRM, Paypal or IBM’s CICS Transaction Gateway. If your integration project requires some of these connectors, then I would probably choose Mule! A disadvantage for some projects might be that Mule says no to OSGi: http://blogs.mulesoft.org/osgi-no-thanks/

Apache Camel

Apache Camel is almost identical to Mule. It offers many, many components (even more than Mule) for almost every technology you could think of. If there is no component available, you can create your own component very easily, starting with a Maven archetype! If you are a Spring guy: Camel has awesome Spring integration, too. Like the other two, it offers an XML DSL:

<route>
    <from uri="file:incomingOrders"/>
    <choice>
        <when>
            <simple>${in.header.type} is 'com.kw.DvdOrder'</simple>
            <to uri="file:incoming/dvdOrders"/>
        </when>
        <when>
            <simple>${in.header.type} is 'com.kw.VideogameOrder'</simple>
            <to uri="jms:videogameOrdersQueue"/>
        </when>
        <otherwise>
            <to uri="log:OtherOrders"/>
        </otherwise>
    </choice>
</route>

Readability is better than Spring Integration and almost identical to Mule. Besides, a very good (but commercial) visual designer called Fuse IDE is available from FuseSource – it generates XML DSL code.
Nevertheless, it is a lot of XML, no matter if you use a visual designer or just your XML editor. Personally, I do not like this. Therefore, let me show you another awesome feature: Apache Camel also offers DSLs for Java, Groovy and Scala, so you do not have to write so much ugly XML. Personally, I prefer using one of these fluent DSLs instead of XML for integration logic. I only do configuration stuff such as JMS connection factories or JDBC properties using XML. Here you can see the same example using a Java DSL code snippet:

from("file:incomingOrders")
    .choice()
        .when(body().isInstanceOf(com.kw.DvdOrder.class))
            .to("file:incoming/dvdOrders")
        .when(body().isInstanceOf(com.kw.VideogameOrder.class))
            .to("jms:videogameOrdersQueue")
        .otherwise()
            .to("mock:OtherOrders");

The fluent programming DSLs are very easy to read (even in more complex examples). Besides, these programming DSLs have better IDE support than XML (code completion, refactoring, etc.). Due to these awesome fluent DSLs, I would always use Apache Camel if I do not need some of Mule’s excellent connectors to proprietary products. Due to its very good integration with Spring, I would even prefer Apache Camel to Spring Integration in most use cases. By the way: Talend offers a visual designer generating Java DSL code, but it generates a lot of boilerplate code and does not allow round-trip editing (i.e. you cannot edit the generated code). This is a no-go criterion and has to be fixed soon (hopefully)!

And the winner is…

… all three integration frameworks, because they are all lightweight and easy to use – even for complex integration projects. It is awesome to integrate several different technologies by always using the same syntax and concepts – including very good testing support. My personal favorite is Apache Camel, due to its awesome Java, Groovy and Scala DSLs combined with many supported technologies. I would only use Mule if I need some of its unique connectors to proprietary products.
I would only use Spring Integration in an existing Spring project, and only if I need to integrate „basic technologies“ such as FTP or JMS. Nevertheless: no matter which of these lightweight integration frameworks you choose, you will have a lot of fun realizing complex integration projects easily and with little effort. Remember: often, a fat ESB has too much functionality, and therefore too much unnecessary complexity and effort. Use the right tool for the right job!

Reference: Spoilt for Choice: Which Integration Framework to use – Spring Integration, Mule ESB or Apache Camel? from our JCG partner Kai Wahner at the Blog about Java EE / SOA / Cloud Computing blog....
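As a closing illustration, the content-based router that all three snippets above declare boils down to a small dispatch table. The following framework-free Java sketch is not code from any of the three frameworks; the ContentBasedRouter class and the DvdOrder/VideogameOrder stand-ins (mirroring the com.kw types in the examples) are invented to show the pattern itself:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Framework-free sketch of the content-based router EIP: each message is
// routed to a channel based on its payload type, with a default
// "otherwise" channel - exactly what the XML and Java DSLs above declare.
public class ContentBasedRouter {
    private final Map<Class<?>, Consumer<Object>> routes = new LinkedHashMap<>();
    private Consumer<Object> otherwise = msg -> {};

    ContentBasedRouter when(Class<?> type, Consumer<Object> channel) {
        routes.put(type, channel);
        return this;
    }

    ContentBasedRouter otherwise(Consumer<Object> channel) {
        this.otherwise = channel;
        return this;
    }

    void route(Object message) {
        routes.entrySet().stream()
              .filter(e -> e.getKey().isInstance(message)) // first matching payload type wins
              .findFirst()
              .map(Map.Entry::getValue)
              .orElse(otherwise)
              .accept(message);
    }

    // Stand-ins for the com.kw order types used in the article's examples
    static class DvdOrder { final String title; DvdOrder(String t) { title = t; } }
    static class VideogameOrder { final String title; VideogameOrder(String t) { title = t; } }

    public static void main(String[] args) {
        List<String> dvdChannel = new ArrayList<>();
        List<String> jmsChannel = new ArrayList<>();
        List<Object> logChannel = new ArrayList<>();
        ContentBasedRouter router = new ContentBasedRouter()
            .when(DvdOrder.class, m -> dvdChannel.add(((DvdOrder) m).title))
            .when(VideogameOrder.class, m -> jmsChannel.add(((VideogameOrder) m).title))
            .otherwise(logChannel::add);
        router.route(new DvdOrder("Heat"));
        router.route(new VideogameOrder("Doom"));
        router.route("unknown");
        System.out.println(dvdChannel + " " + jmsChannel + " " + logChannel);
    }
}
```

What the frameworks add on top of this dozen-line core is exactly what the article compares: hundreds of ready-made endpoints, testing support, error handling and monitoring.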

Java: Improvements to the client and desktop parts of Java SE 6 and Java SE 7 !!!

Client-Side Improvements in Java 6 and Java 7

Learn about the improvements to the client and desktop parts of Java SE 6 and Java SE 7, including the new applet plug-in, the Java Deployment Toolkit, shaped and translucent windows, heavyweight-lightweight mixing, and Java Web Start.

Introduction

Since the release of Java Platform, Standard Edition 6 (Java SE 6) in December of 2006, a lot of improvements have been made to the client and desktop parts of Java. In this article, we take a tour of Swing, and then we dive into some of the support technologies that let developers make great client apps, such as installation, Java Web Start, applets, and graphics improvements. Then we will take a peek at Java 7.

Read more at Client-Side Improvements in Java 6 and Java 7 from Josh Marinacci....

Setup Git server with read/write HTTPS on Debian

Three months ago we decided to move our projects to Git. I guess you already know the advantages of Git over Subversion, as there are too many discussions on this subject. I will describe here a fast and minimal configuration of how to turn a Debian GNU/Linux box into a Git server that supports read and write actions via HTTP(S). Git supports 4 network protocols to transfer data: SSH, Local, Git, and HTTP(S). Many people are behind a firewall, which most of the time permits access only to HTTP(S). Until recently, Git HTTP(S) was working only for read actions (clone, fetch, pull, …). Almost 2 years ago Smart HTTP Transport came up. Using these clever CGI scripts, which are now part of stable Git, we can easily set up a Git server that supports read/write via HTTP(S).

Configuration steps

1) Install all required packages on Debian:

$> su -
$> apt-get install git apache2 apache2-utils openssl

2) As we need many Git repositories, we create a base directory that will contain all repositories.

$> mkdir -p /opt/allgit/repos

3) Create a bare Git repository, where developers can pull and push code.

$> cd /opt/allgit/repos
$> mkdir repo1.git
$> cd repo1.git
$> git init --bare

4) We will use the Apache 2 HTTP server to support HTTP(S) for Git. On Debian, when you install the apache2 package, the www-data user is created and is the user Apache runs as. As we need Apache to execute the Git actions, the www-data user must have read and write permissions on each Git repository.

$> chown -R www-data:www-data repo1.git/

5) The www-data user is used only by Apache and not by developers, so we need to define our Git users somewhere else. We can use the HTTP authentication provided by Apache, via htpasswd, an Apache tool that produces an encrypted password from a plain-text password for a user. These users are unrelated to the Unix users of the OS, so using this strategy we can define as many Git users as we need without creating Unix users.
Create a specific directory and file, with privileges restricted to the www-data user, in order to keep the Git users and passwords.

$> mkdir /etc/gitdata
$> cd /etc/gitdata
$> htpasswd -c gitusers.passwd gituser1
$> htpasswd gitusers.passwd gituser2

6) I chose to use secure HTTP for Git communication, so I created a self-signed SSL certificate using openssl. In the same directory, create the certificate and private key pair. Fill in the fields that the next command asks for as you like.

$> openssl req -x509 -nodes -days 999 -newkey rsa:2048 -keyout mygit_ssl.key -out mygit_ssl.crt

7) We also need a few modules to be enabled in Apache, so make sure mod_ssl, mod_cgi, mod_alias, and mod_env are enabled. This can easily be done on Debian with the a2enmod command, which creates symbolic links in the following directory.

$> a2enmod <apache_module>
$> ls /etc/apache2/mods-enabled/

8) Now you only need to configure the Apache site and you are ready. Assume you already have a domain name (e.g. mygitdomain.com) to use for your server. A fast way to configure your Apache site (and test it immediately) is to copy the existing Apache configuration and make the required changes.

$> cd /etc/apache2/sites-available
$> cp default-ssl mygitdomain.com

9) Add to (or alter) the existing configuration in the mygitdomain.com file, which you just created, based on the following configuration.
ServerName mygitdomain.com
ServerAdmin webmaster@localhost
ErrorLog /var/log/apache2/git_apache_error.log

SetEnv GIT_PROJECT_ROOT /opt/allgit
SetEnv GIT_HTTP_EXPORT_ALL
SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER

ScriptAlias /allgit/ /usr/lib/git-core/git-http-backend/

<Directory "/usr/lib/git-core/">
    AllowOverride None
    Options +ExecCGI -Includes
    Order allow,deny
    Allow from all
</Directory>

<LocationMatch "^/allgit/repos/.*$">
    AuthType Basic
    AuthName "My Git Repositories"
    AuthUserFile /etc/gitdata/gitusers.passwd
    Require valid-user
</LocationMatch>

SSLEngine on
SSLCertificateFile /etc/gitdata/mygit_ssl.crt
SSLCertificateKeyFile /etc/gitdata/mygit_ssl.key

Test your Git server

1) Configuration is finished. Enable this site in your Apache server and then restart Apache.

$> a2ensite mygitdomain.com
$> /etc/init.d/apache2 restart

2) Try to clone your empty Git repository.

$> git clone https://mygitdomain.com/allgit/repos/repo1.git repo1

Certificate verification error

error: server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none while accessing https://mygitdomain.com/allgit/repos/repo1.git/info/refs
fatal: HTTP request failed

You have 3 options to solve this:

1. Set this variable in the current shell (it will apply to all following commands):

$> export GIT_SSL_NO_VERIFY=true

2. Set the same variable before each git command:

$> env GIT_SSL_NO_VERIFY=true git clone https://mygitdomain.com/allgit/repos/repo1.git repo1

3. To treat your certificate as trusted, even though it is a self-signed certificate, get the server’s certificate and install it into one of the following directories:

/etc/ssl/certs/
/usr/share/ca-certificates/

Notes

This is just a quick configuration that I did 3 months ago in order to evaluate Git. There are many points that you can improve (stronger security) in order to have a better configuration. All the configuration is done by the root user.

Reference: Setup Git server with read/write HTTPS on Debian from our JCG partner Adrianos Dadis at the Java, Integration and the virtues of source blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.