

Apache Commons SCXML: Finite State Machine Implementation

This article covers Finite State Machines (FSM), SCXML (State Chart XML) and the Apache Commons SCXML library. A basic ATM finite state machine sample is also provided with the article.

Finite State Machines: You probably remember Finite State Machines from your Computer Science courses. FSMs are used to design computer programs and digital circuits. An FSM is simply an abstract machine that can be in one of a finite number of states. The machine is in only one state at a time; the state it is in at any given time is called the current state. It can change from one state to another when triggered by an event or condition; this is called a transition. A particular FSM is defined by the list of possible transitions from each state and the triggering condition for each transition.

SCXML Language: A W3C working draft called SCXML (State Chart XML: State Machine Notation for Control Abstraction) can be used to describe complex state machines. SCXML is a general-purpose XML-based state machine language. It is still a draft; the latest version is dated 16 February 2012. The W3C site offers a five-minute introduction to SCXML documents.

Apache Commons SCXML Library: Apache provides an implementation aimed at creating and maintaining a Java SCXML engine capable of executing a state machine defined in an SCXML document, while abstracting out the environment interfaces. The latest stable version is 0.9.
Library website: http://commons.apache.org/scxml/index.html
Eclipse plugin: http://commons.apache.org/sandbox/gsoc/2010/scxml-eclipse/ (still under development)
Use cases: http://commons.apache.org/scxml/usecases.html

SCXML Editors: Apache's Eclipse plugin aims to provide a visual editor for SCXML files, but it is still under development. There is also scxmlgui ( http://code.google.com/p/scxmlgui/ ), which works very well.
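Stepping back to the FSM definition above: before reaching for a library, a machine with a finite set of states, one current state, and event-triggered transitions can be sketched in plain Java as a transition table. This is a hypothetical, minimal illustration (all names are mine, not Commons SCXML API):

```java
import java.util.HashMap;
import java.util.Map;

// A minimal FSM: states, events and a transition table (state, event) -> state.
class SimpleFsm {
    enum State { IDLE, LOADING, IN_SERVICE }
    enum Event { CONNECTED, LOAD_SUCCESS }

    private final Map<State, Map<Event, State>> transitions = new HashMap<>();
    private State current = State.IDLE; // the machine is in exactly one state

    SimpleFsm() {
        transitions.put(State.IDLE, Map.of(Event.CONNECTED, State.LOADING));
        transitions.put(State.LOADING, Map.of(Event.LOAD_SUCCESS, State.IN_SERVICE));
    }

    // A triggering event moves the machine to a new state, if a transition exists.
    State fire(Event event) {
        Map<Event, State> row = transitions.getOrDefault(current, Map.of());
        current = row.getOrDefault(event, current); // no matching transition: stay put
        return current;
    }

    State current() { return current; }
}
```

Firing CONNECTED from IDLE moves the machine to LOADING; an event with no matching transition simply leaves the current state unchanged. Libraries like Commons SCXML add the externalised XML definition, listeners and entry/exit actions on top of this core idea.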
You can also check StateForge's visual State Machine Diagram: http://www.stateforge.com/StateMachineDiagram/StateMachineDiagram.html

Code Sample: In this part of the article, we will implement a basic ATM status state machine. Briefly, we assume an ATM can have the following statuses:

idle: the ATM has no activity; it is simply closed
loading: an idle ATM is trying to connect to the ATM server, and configuration and info start loading
outOfService: ATM loading has failed, or the ATM has been shut down
inService: ATM loading was successful, or the ATM has been started up
disconnected: the ATM is not connected to the network

Apologies for any missing or incorrect information about ATM statuses; this is just an example. Let's first draw our state machine using the scxmlgui program. One can write one's own SCXML file, but scxmlgui does that ugly task for you. Here is the state chart diagram which describes the status changes of an ATM, and the output SCXML file describing the transitions in the diagram:

<scxml initial="idle" name="atm.connRestored" version="0.9"
       xmlns="http://www.w3.org/2005/07/scxml">
    <state id="idle">
        <transition event="atm.connected" target="loading"/>
    </state>
    <state id="loading">
        <transition event="atm.loadSuccess" target="inService"/>
        <transition event="atm.connClosed" target="disconnected"/>
        <transition event="atm.loadFail" target="outOfService"/>
    </state>
    <state id="inService">
        <transition event="atm.shutdown" target="outOfService"/>
        <transition event="atm.connLost" target="disconnected"/>
    </state>
    <state id="outOfService">
        <transition event="atm.startup" target="inService"/>
        <transition event="atm.connLost" target="disconnected"/>
    </state>
    <state id="disconnected">
        <transition event="atm.connRestored" target="inService"/>
    </state>
</scxml>

Our FSM implementation is in the AtmStatusFSM class. AtmStatusFSM extends
org.apache.commons.scxml.env.AbstractStateMachine. The FSM is configured by passing the SCXML file (atm_status.xml) path to the super constructor. ATM state changes are driven by events: when the fireEvent method is called with an event name [e.g. fireEvent("atm.connected")], the FSM state is updated automatically. You can query the current state whenever you want. You can also write public methods named after the states of the FSM; these methods are called when the corresponding state is activated.

package net.javafun.example.atmstatusfsm;

import java.util.Collection;
import java.util.Set;

import org.apache.commons.scxml.env.AbstractStateMachine;
import org.apache.commons.scxml.model.State;

/**
 * ATM Status Finite State Machine
 *
 * @see Apache Commons SCXML Library
 * @author ozkansari.com
 */
public class AtmStatusFSM extends AbstractStateMachine {

    /** The state machine uses this SCXML config file */
    private static final String SCXML_CONFIG_ATM_STATUS =
            "net/javafun/example/atmstatusfsm/atm_status.xml";

    /* CONSTRUCTOR(S) */

    public AtmStatusFSM() {
        super(AtmStatusFSM.class.getClassLoader().getResource(SCXML_CONFIG_ATM_STATUS));
    }

    /* HELPER METHOD(S) */

    /**
     * Fire a predefined event.
     */
    public void firePreDefinedEvent(AtmStatusEventEnum eventEnum) {
        System.out.println("EVENT: " + eventEnum);
        this.fireEvent(eventEnum.getEventName());
    }

    public void callState(String name) {
        this.invoke(name);
    }

    /**
     * Get the current state ID as a string.
     */
    public String getCurrentStateId() {
        Set states = getEngine().getCurrentStatus().getStates();
        State state = (State) states.iterator().next();
        return state.getId();
    }

    /**
     * Get the current state as Apache's State object.
     */
    public State getCurrentState() {
        Set states = getEngine().getCurrentStatus().getStates();
        return (State) states.iterator().next();
    }

    /**
     * Get the events belonging to the current status of the FSM.
     */
    public Collection getCurrentStateEvents() {
        return getEngine().getCurrentStatus().getEvents();
    }

    /* STATES */
    // Each method below is the activity corresponding to a state in the
    // SCXML document (see the class constructor for a pointer to the document).

    public void idle() {
        System.out.println("STATE: idle");
    }

    public void loading() {
        System.out.println("STATE: loading");
    }

    public void inService() {
        System.out.println("STATE: inService");
    }

    public void outOfService() {
        System.out.println("STATE: outOfService");
    }

    public void disconnected() {
        System.out.println("STATE: disconnected");
    }
}

We have the following enum to describe our events. You don't have to code such a class, but it helps to define the events in one place. You can also obtain the available events dynamically using the getEngine().getCurrentStatus().getEvents() code fragment.

package net.javafun.example.atmstatusfsm;

/**
 * ATM Status Change Events
 *
 * @author ozkansari.com
 */
public enum AtmStatusEventEnum {

    CONNECT("atm.connected"),
    CONNECTION_CLOSED("atm.connClosed"),
    CONNECTION_LOST("atm.connLost"),
    CONNECTION_RESTORED("atm.connRestored"),
    LOAD_SUCCESS("atm.loadSuccess"),
    LOAD_FAIL("atm.loadFail"),
    SHUTDOWN("atm.shutdown"),
    STARTUP("atm.startup");

    private final String eventName;

    private AtmStatusEventEnum(String eventName) {
        this.eventName = eventName;
    }

    public String getEventName() {
        return eventName;
    }

    public static String getNamesAsCsv() {
        StringBuilder sb = new StringBuilder();
        for (AtmStatusEventEnum e : AtmStatusEventEnum.values()) {
            sb.append(e.name());
            sb.append(",");
        }
        // Drop only the trailing comma (the original length() - 2 also cut
        // the last character of the final event name).
        return sb.substring(0, sb.length() - 1);
    }
}

You can see the basic GUI code below. The GUI first shows the possible events that can be fired. When an event is selected and submitted, the current ATM status is displayed and the event list is updated.
package net.javafun.example.atmstatusfsm;

import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.List;

import javax.swing.JButton;
import javax.swing.JComboBox;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;

import org.apache.commons.scxml.model.Transition;

/**
 * ATM Status Change GUI
 *
 * @author ozkansari.com
 */
public class AtmDisplay extends JFrame implements ActionListener {

    private static final long serialVersionUID = -5083315372455956151L;

    private AtmStatusFSM atmStatusFSM;

    private JButton button;
    private JLabel state;
    private JComboBox eventComboBox = new JComboBox();

    public static void main(String[] args) {
        new AtmDisplay();
    }

    public AtmDisplay() {
        super("ATM Display Demo");
        atmStatusFSM = new AtmStatusFSM();
        setupUI();
    }

    @SuppressWarnings("deprecation")
    private void setupUI() {
        JPanel panel = new JPanel();
        panel.setLayout(new BorderLayout());
        setContentPane(panel);
        button = makeButton("FIRE_EVENT", AtmStatusEventEnum.getNamesAsCsv(), "Submit");
        panel.add(button, BorderLayout.CENTER);
        state = new JLabel(atmStatusFSM.getCurrentStateId());
        panel.add(state, BorderLayout.SOUTH);
        initEvents();
        panel.add(eventComboBox, BorderLayout.NORTH);
        pack();
        setLocation(200, 200);
        setResizable(false);
        setSize(300, 125);
        show();
        setDefaultCloseOperation(EXIT_ON_CLOSE);
    }

    @SuppressWarnings("unchecked")
    private void initEvents() {
        eventComboBox.removeAllItems();
        List<Transition> transitionList = atmStatusFSM.getCurrentState().getTransitionsList();
        for (Transition transition : transitionList) {
            eventComboBox.addItem(transition.getEvent());
        }
    }

    public void actionPerformed(ActionEvent e) {
        String command = e.getActionCommand();
        if (command.equals("FIRE_EVENT")) {
            checkAndFireEvent();
        }
    }

    private boolean checkAndFireEvent() {
        atmStatusFSM.fireEvent(eventComboBox.getSelectedItem().toString());
        state.setText(atmStatusFSM.getCurrentStateId());
        initEvents();
        repaint();
        return true;
    }

    private JButton makeButton(final String actionCommand, final String toolTipText,
            final String altText) {
        JButton button = new JButton(altText);
        button.setActionCommand(actionCommand);
        button.setToolTipText(toolTipText);
        button.addActionListener(this);
        button.setOpaque(false);
        return button;
    }
}

The output of our simple program and the project files (with the required libraries), as shown in Eclipse, are given in the article's images. For the full source code, visit https://github.com/ozkansari/atmstatemachine

Reference: Easy Finite State Machine Implementation with Apache Commons SCXML from our JCG partner Ozkan SARI at the Java Fun blog.

Jelastic, cloud platform for Java

Who is behind Jelastic? That was my first question, so I took a look at the Jelastic web site. The best way to answer it is by looking at the Jelastic Team section. Founders, advisers and special partners make up a real professional team. Among the special partners you will find the authors of MySQL (Michael "Monty" Widenius) and Nginx (Igor Sysoev). Special mention too for their evangelists (not mentioned on the web page): in my case, Judah Johns took the time to write me two personal emails simply to let me know about the Jelastic platform and the possibility of testing it for free. That's a real evangelist.

Registration: Signing up with the service is really easy. Once you have sent the registration email, you will receive a welcome email with an initial password to log in.

First impression: My first impression of Jelastic, from the web page to the service once logged in, was: ouch! I know design is subjective, and what one person loves another can hate, but first impressions count for a lot. Sorry Jelastic, but from my point of view you need a redesign; that dark theme is dreadful.

Environments: After the first impression I started working on something more functional, which is what really matters for a developer. An environment is a concrete configuration of servers for load balancing, application logic and storage. Load balancing is achieved with an Nginx server. Application logic is implemented as a Java server-side application and can run on Tomcat 6, Tomcat 7, Jetty 6 or GlassFish 3 servers using JDK 6 or JDK 7. For storage we can use SQL or NoSQL solutions: for SQL we have the best-known open source projects (PostgreSQL 8.4, MySQL 5.5 and MariaDB 5.2), and for NoSQL we can use MongoDB 2.0 or CouchDB 1.1. Creating a new environment is incredibly easy. We can choose whether to use a load balancer, define the number of application logic server instances, enable high availability (which means session replication) and pick the storage service.
Once created, the environment's topology can be modified at any time. In practice this means you can scale your application by adding more application server instances or applying the high availability option, which replicates sessions. In addition, you can change the storage service or add a new one. Note: be careful when changing your relational or NoSQL server, because data will be lost.

Deploying applications: For testing purposes, Jelastic comes with a HelloWorld.war sample application. Deploying it is as easy as selecting it and deploying it on one of your previously created and configured environments. To deploy your own application you need to upload it first. Once uploaded, your application will be shown in the applications list and you can deploy it as described above.

Server configuration: Once the environment is created, you have access to the configuration files of your servers. I played a bit with a simple Tomcat+MySQL configuration and saw that you:

have access to modify files like web.xml or server.xml
can change logging preferences
can upload new JAR files to, or remove them from, the lib folder
have access to the webapps folder
have a shortened version of the my.cnf file you can edit.

Log files and monitoring: Jelastic monitors the servers of your environments and presents the results in a nice graphical way. In addition, it allows you to see the log files of the servers. Viewing log files in the browser is nice, but I would like a way (I didn't find one) to download the log files to my local machine; looking for errors in a production environment with tons of lines isn't easy to do in that text area.

Resources: Connecting your application to the storage service (relational or NoSQL database) is really easy. The documentation contains samples for all the databases Jelastic supports.
The application logic servers have access to a home directory where you can create property files or upload whatever you want; your application can find it later using System.getProperty("user.home").

Conclusions: In contrast to Amazon AWS, Google App Engine and others, Jelastic is completely oriented to Java. If you are a Java developer and have ever worked with AWS or Google App Engine, you will find Jelastic completely different and incredibly easy to use, really similar to your usual day-to-day work. While AWS is machine oriented, where you start as many EC2 instances as you require, with Jelastic you have the concept of a cloudlet and you can completely forget about managing machine instances and their resources. Note: a cloudlet is roughly equivalent to 128 MB RAM and a 200 MHz CPU core. I wrote this post before dinner so, as you can see, it is nothing exhaustive, just a simple platform presentation. A good continuation would be to describe the experience of working with a real application: deployment operations and tweaking the running environment to achieve good performance with the lowest cloudlet consumption. If someone is interested, another good article could compare the cost of the same application running on Amazon AWS and on Jelastic: which runs with better performance, and which is cheaper.

Reference: JELASTIC, CLOUD PLATFORM FOR JAVA from our JCG partner Antonio Santiago at the A Curious Animal blog.

Testing legacy code: Hard-wired dependencies

When pairing with some developers, I've noticed that one of the reasons they are not unit testing existing code is that, quite often, they don't know how to overcome certain problems. The most common one is related to hard-wired dependencies: Singletons and static calls. Let's look at this piece of code:

public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException {
    List<Trip> tripList = new ArrayList<Trip>();
    User loggedUser = UserSession.getInstance().getLoggedUser();
    boolean isFriend = false;
    if (loggedUser != null) {
        for (User friend : user.getFriends()) {
            if (friend.equals(loggedUser)) {
                isFriend = true;
                break;
            }
        }
        if (isFriend) {
            tripList = TripDAO.findTripsByUser(user);
        }
        return tripList;
    } else {
        throw new UserNotLoggedInException();
    }
}

Horrendous, isn't it? The code above has loads of problems, but before we change it, we need to have it covered by tests. There are two challenges when unit testing the method above:

User loggedUser = UserSession.getInstance().getLoggedUser(); // Line 3
tripList = TripDAO.findTripsByUser(user); // Line 13

As we know, unit tests should test just one class and not its dependencies. That means we need to find a way to mock the Singleton and the static call. In general we do that by injecting the dependencies, but we have a rule, remember? We can't change any existing code that is not covered by tests. The only exception is when we need to change the code in order to add unit tests, and in that case only automated refactorings (via the IDE) are allowed. Besides that, many mocking frameworks are not able to mock static methods anyway, so injecting the TripDAO would not solve the problem.

Overcoming the hard-wired dependencies problem. NOTE: In real life I would be writing tests first and making each change just when I needed it, but in order to keep the post short and focused I will not go step by step here. First of all, let's isolate the Singleton dependency in its own method, and let's make that method protected as well.
But wait, this needs to be done via an automated "extract method" refactoring. Select just the following piece of code in TripService.java:

UserSession.getInstance().getLoggedUser()

Go to your IDE's refactoring menu, choose "extract method" and give it a name. After this step, the code will look like this:

public class TripService {

    public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException {
        ...
        User loggedUser = loggedUser();
        ...
    }

    protected User loggedUser() {
        return UserSession.getInstance().getLoggedUser();
    }
}

Doing the same thing for TripDAO.findTripsByUser(user), we will have:

public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException {
    ...
    User loggedUser = loggedUser();
    ...
    if (isFriend) {
        tripList = findTripsByUser(user);
    }
    ...
}

protected List<Trip> findTripsByUser(User user) {
    return TripDAO.findTripsByUser(user);
}

protected User loggedUser() {
    return UserSession.getInstance().getLoggedUser();
}

In our test class, we can now extend the TripService class and override the protected methods we created, making them return whatever we need for our unit tests:

private TripService createTripService() {
    return new TripService() {
        @Override
        protected User loggedUser() {
            return loggedUser;
        }
        @Override
        protected List<Trip> findTripsByUser(User user) {
            return user.trips();
        }
    };
}

And that is it. Our TripService is now testable. First we write all the tests we need to make sure the class/method is fully tested and all code branches are exercised. I use Eclipse's EclEmma plugin for that and I strongly recommend it. If you are not using Java and/or Eclipse, try a code coverage tool specific to your language/IDE while writing tests for existing code. It helps a lot.
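The subclass-and-override seam can also be shown in a self-contained form, independent of the article's TripService. In this hypothetical sketch (all names are mine), a static call is wrapped in a protected method, and a test subclass overrides that method to control the dependency:

```java
// A legacy-style hard-wired static dependency.
class LegacyClock {
    static long now() { return System.currentTimeMillis(); }
}

// Production class: the static call is isolated behind a protected "seam"
// method, which could be produced by an automated extract-method refactoring.
class GreetingService {
    public String greet() {
        return hour() < 12 ? "good morning" : "good afternoon";
    }

    // Protected seam wrapping the hard-wired static call.
    protected int hour() {
        return (int) ((LegacyClock.now() / 3_600_000L) % 24);
    }
}

// In a test, extend the class and override the seam, making it return
// whatever the test needs; no production code path is otherwise changed.
class FixedHourGreetingService extends GreetingService {
    private final int hour;
    FixedHourGreetingService(int hour) { this.hour = hour; }
    @Override protected int hour() { return hour; }
}
```

With the seam in place, both branches of greet() can be exercised deterministically, which is exactly what the TripService tests above rely on.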
So here is my final test class:

public class TripServiceTest {

    private static final User UNUSED_USER = null;
    private static final User NON_LOGGED_USER = null;
    private User loggedUser = new User();
    private User targetUser = new User();
    private TripService tripService;

    @Before
    public void initialise() {
        tripService = createTripService();
    }

    @Test(expected = UserNotLoggedInException.class)
    public void shouldThrowExceptionWhenUserIsNotLoggedIn() throws Exception {
        loggedUser = NON_LOGGED_USER;
        tripService.getTripsByUser(UNUSED_USER);
    }

    @Test
    public void shouldNotReturnTripsWhenLoggedUserIsNotAFriend() throws Exception {
        List<Trip> trips = tripService.getTripsByUser(targetUser);
        assertThat(trips.size(), is(equalTo(0)));
    }

    @Test
    public void shouldReturnTripsWhenLoggedUserIsAFriend() throws Exception {
        User john = anUser().friendsWith(loggedUser)
                            .withTrips(new Trip(), new Trip())
                            .build();
        List<Trip> trips = tripService.getTripsByUser(john);
        assertThat(trips, is(equalTo(john.trips())));
    }

    private TripService createTripService() {
        return new TripService() {
            @Override
            protected User loggedUser() {
                return loggedUser;
            }
            @Override
            protected List<Trip> findTripsByUser(User user) {
                return user.trips();
            }
        };
    }
}

Are we done? Of course not. We still need to refactor the TripService class:

public class TripService {

    public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException {
        List<Trip> tripList = new ArrayList<Trip>();
        User loggedUser = loggedUser();
        boolean isFriend = false;
        if (loggedUser != null) {
            for (User friend : user.getFriends()) {
                if (friend.equals(loggedUser)) {
                    isFriend = true;
                    break;
                }
            }
            if (isFriend) {
                tripList = findTripsByUser(user);
            }
            return tripList;
        } else {
            throw new UserNotLoggedInException();
        }
    }

    protected List<Trip> findTripsByUser(User user) {
        return TripDAO.findTripsByUser(user);
    }

    protected User loggedUser() {
        return UserSession.getInstance().getLoggedUser();
    }
}

How many problems can you see? Take your time before reading the ones I found.
:-) Refactoring. NOTE: When I did it, I did it step by step, running the tests after every step. Here I'll just summarise my decisions. The first thing I noticed is that the tripList variable does not need to be created when the logged user is null, since an exception is thrown and nothing else happens. I decided to invert the outer if and extract a guard clause:

public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException {
    User loggedUser = loggedUser();
    validate(loggedUser);
    List<Trip> tripList = new ArrayList<Trip>();
    boolean isFriend = false;
    for (User friend : user.getFriends()) {
        if (friend.equals(loggedUser)) {
            isFriend = true;
            break;
        }
    }
    if (isFriend) {
        tripList = findTripsByUser(user);
    }
    return tripList;
}

private void validate(User loggedUser) throws UserNotLoggedInException {
    if (loggedUser == null) throw new UserNotLoggedInException();
}

Feature Envy: When a class gets data from another class in order to do some calculation or comparison on that data, quite often it means that the client class envies the other class. This is called Feature Envy (a code smell); it is very common in long methods and is everywhere in legacy code. In OO, data and the operations on that data should live on the same object. Looking at the code above, the whole business of determining whether a user is friends with another clearly doesn't belong in the TripService class. Let's move it to the User class. First the unit test:

@Test
public void shouldReturnTrueWhenUsersAreFriends() throws Exception {
    User john = new User();
    User bob = new User();

    john.addFriend(bob);

    assertTrue(john.isFriendsWith(bob));
}

Now, let's move the code to the User class. Here we can use the Java collections API a bit better and remove the whole for loop and the isFriend flag altogether.
public class User {

    ...

    private List<User> friends = new ArrayList<User>();

    public void addFriend(User user) {
        friends.add(user);
    }

    public boolean isFriendsWith(User friend) {
        return friends.contains(friend);
    }

    ...
}

After a few refactoring steps, here is the new code in the TripService:

public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException {
    User loggedUser = loggedUser();
    validate(loggedUser);
    return (user.isFriendsWith(loggedUser))
            ? findTripsByUser(user)
            : new ArrayList<Trip>();
}

Right. This is already much better, but it is still not good enough.

Layers and dependencies: Some of you may still be annoyed by the protected methods we created in part one in order to isolate dependencies and test the class. Changes like that are meant to be temporary; that is, they are done so we can unit test the whole method. Once we have tests covering the method, we can start doing our refactoring and thinking about the dependencies we could inject. Many times we would think that we should just inject the dependency into the class. That sounds obvious: TripService should receive an instance of UserSession. Really? TripService is a service; it dwells in the service layer. UserSession knows about logged users and sessions; it probably talks to the MVC framework and/or HttpSession, etc. Should TripService depend on this class (even if it were an interface instead of a Singleton)? Probably the whole check for whether the user is logged in should be done by the controller, or whatever the client class may be. In order not to change that much (for now), I'll make TripService receive the logged user as a parameter and remove the dependency on UserSession completely. I'll need to make some minor changes and clean up the tests as well.

Naming: No, unfortunately we are not done yet. What does this code do anyway? It returns trips from a friend. Looking at the name of the method and its parameters, or even the class name, there is no way to know that.
The word "friend" is nowhere to be seen in TripService's public interface. We need to change that as well. So here is the final code:

public class TripService {

    public List<Trip> getFriendTrips(User loggedUser, User friend)
            throws UserNotLoggedInException {
        validate(loggedUser);
        return (friend.isFriendsWith(loggedUser))
                ? findTripsForFriend(friend)
                : new ArrayList<Trip>();
    }

    private void validate(User loggedUser) throws UserNotLoggedInException {
        if (loggedUser == null) throw new UserNotLoggedInException();
    }

    protected List<Trip> findTripsForFriend(User friend) {
        return TripDAO.findTripsByUser(friend);
    }
}

Better, isn't it? We still have the issue with the other protected method, with the static TripDAO call, etc. I'll leave that last bit for another post on how to remove dependencies on static methods, and park my refactoring for now. We can't refactor the entire system in one day, right? We still need to deliver some features. :-)

Conclusion: This was just a toy example and may not even make sense. However, it represents many of the problems we find when working with legacy (existing) code. It's amazing how many problems we can find in such a tiny piece of code. Now imagine all those classes and methods with hundreds, if not thousands, of lines. We need to keep refactoring our code mercilessly so we never get to a position where we don't understand it any more and the whole business starts slowing down because we cannot adapt the software quickly enough. Refactoring is not just about extracting methods or making a few tweaks to the logic. We need to think about the dependencies, the responsibilities each class and method should have, the architectural layers, the design of our application, and also the names we give to every class, method, parameter and variable. We should try to have the business domain expressed in the code. We should treat our code base as if it were a big garden.
If we want it to be pleasant and maintainable, we need to be constantly looking after it. If you want to give this code a go or find more details about the implementation, check: https://github.com/sandromancuso/testing_legacy_code

Reference: Testing legacy: Hard-wired dependencies (parts 1 and 2) from our JCG partner Sandro Mancuso at the Crafted Software blog.

Hacking Maven

We don't use M2Eclipse to integrate Maven and Eclipse, partly because we had some bad experiences a couple of years ago when we were still using Eclipse 3.4, and partly because some of our developers use IBM's RAD, which isn't (or at least wasn't) compatible. Our process is to update our local sources from SVN and then run Maven's eclipse:eclipse locally with the relevant workspace setting, so that our .project and .classpath files are generated based on the dependencies and other information in the POMs. It works well enough, but the plan is indeed to move to M2Eclipse at some stage soon.

We have parent POMs for each sub-system (each of which is made up of several Eclipse projects). Some of us pull multiple sub-systems into our workspaces, which is useful when you refactor interfaces between sub-systems: Eclipse can refactor the code which calls the interfaces at the same time as you refactor the interfaces themselves, saving lots of work. Maven generates .classpath files for Eclipse which reference everything in the workspace as a source project rather than as a JAR out of the local Maven repo. That is important, because if Maven created JAR references instead of project references, Eclipse's refactoring wouldn't adjust the code calling the refactored code.

Ten days ago we switched from a build process based on Continuum and Archiva to Jenkins and Nexus. All of a sudden we lost some of the source references, which were replaced with JAR references. If a project was referred to in a parent POM, it was referenced as a project in Eclipse; but if it belonged to a sub-system built using a second parent POM, Eclipse referred to it as a JAR from the local Maven repo. That was bad news for our refactoring!
Here is an example of what used to happen:

subsystem-a-project-1
subsystem-a-project-2 references subsystem-a-project-1 as a project
subsystem-b-project-1 references subsystem-a-project-1 as a project

Here is what was happening once we used Jenkins and Nexus:

subsystem-a-project-1
subsystem-a-project-2 references subsystem-a-project-1 as a project
subsystem-b-project-1 references subsystem-a-project-1 as a JAR!!!

At the same time as moving to Jenkins and Nexus, we upgraded a few libraries and our organisational POM. We also moved our parent POMs from the root directory of each sub-system into a project of their own (in preparation for using M2Eclipse, which prefers having the parent POM in the workspace). The question was, where do we start looking? Some people suggested it was because we'd moved the parent POMs, others thought it was because we'd updated the version of a library on which eclipse:eclipse might have a dependency, and others were of the opinion we had to return to Continuum and Archiva.

One of our sharp-eyed developers noticed that we were getting the following output on the command line during the local Maven build:

Artifact subsystem-a-project-1 already available as a workspace project, but with different version...

That was the single find which led to us solving the problem quickly. I downloaded the source of eclipse:eclipse from Maven Central, under org/apache/maven/plugins/maven-eclipse-plugin, and created a dummy Java project in Eclipse. Next, I added the following environment variable to the command line from which I run Maven:

set MAVEN_OPTS=-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=28000

That tells the JRE to open a debug port and wait until a debugger is connected before running the app. Running a Java program with those args makes it output "Listening for transport dt_socket at address: 28000". In Eclipse I went to the debug run configurations and added a remote connection to localhost on port 28000.
It needs a project, so I added the dummy project I'd just created. I connected, and Maven continued to run now that the debugger was attached. The next step was to add the Maven plugin source code I'd downloaded from Maven Central. By right clicking on the tree in the debug view of the debug perspective, it is possible to add/update source attachments, and I added the sources JAR I'd downloaded. The last bit was to find a suitable breakpoint. I extracted the sources from the downloaded ZIP and searched the files for the text Maven was outputting, which hinted at the problem ("already available as a workspace project"). EclipsePlugin was the suspect class!

I added the eclipse:eclipse JAR to my dummy project so that I could open the class in Eclipse using Ctrl+Shift+T. Eclipse opened the class, but hadn't worked out that the source from the source attachment belonged to that class. There is a button in the editor to attach the source by locating the JAR downloaded from Maven Central; Eclipse then showed the source code and I was able to add a breakpoint. By this time Maven had finished, so I restarted it, and this time I hit the breakpoint and the problem became quite clear. For some reason, rather than being a simple SNAPSHOT version, the version contained a timestamp. The eclipse:eclipse plugin was of the opinion that I didn't have the correct code in the workspace, and so rather than create a project reference, it created a JAR reference based on the JAR in the local Maven repo.

The internet is full of information about how Maven 3 now uses unique timestamps for each build artefact deployed to the repo. There are references saying you can disable this for Maven 2 (which we still use), but when you move to Maven 3 you can't disable it.
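The decision that bit us can be pictured, in heavily simplified and hypothetical form, as a version check between the workspace project and the dependency being resolved (the real EclipsePlugin logic is more involved; names below are mine):

```java
// Hypothetical sketch of eclipse:eclipse workspace resolution: a project
// reference is only emitted when the dependency's version matches the
// version of the project open in the workspace.
class WorkspaceResolution {
    static String classpathEntry(String artifactId,
                                 String dependencyVersion,
                                 String workspaceVersion) {
        if (dependencyVersion.equals(workspaceVersion)) {
            // Source project reference: Eclipse refactoring works across projects.
            return "project:/" + artifactId;
        }
        // A timestamped snapshot (e.g. 1.0-20120101.123456-1) no longer equals
        // the workspace's 1.0-SNAPSHOT, so a JAR reference is produced instead.
        return "jar:" + artifactId + "-" + dependencyVersion + ".jar";
    }
}
```

Once the deployed artefacts carried unique timestamps, the equality check failed for sub-systems built under a different parent POM, which is exactly the project-versus-JAR symptom we were seeing.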
I thought about submitting a bug report to Codehaus (who supply the eclipse:eclipse plugin), but it occurred to me that we were still referencing version 2.8 in our organisational POM and I'd spotted a 2.9 version when I was at Maven Central. So I updated the organisational POM to use version 2.9 of eclipse:eclipse and gave that a shot. Hooray! 23:17 and I'd fixed our problem. Shame I had a 06:30 start the next day for a meeting :-/ This isn't the first time that we've made the mistake of doing too much in a migration. We could have stuck with Continuum and Archiva when we updated our org-POM, then migrated step by step to Jenkins (still using Archiva) and finally moved over to Nexus. Had we done that, we might not have had the problems we did; but the effort of a slow migration might also have been larger. For me, the point to take away is that it is easier to debug Maven and go straight to the source of the problem than it is to attempt to fix the problem by trial and error, as we were doing before I debugged Maven. Debugging Maven might sound like it's advanced or complicated, but it's damn fun – it's real hacking, and the feeling of solving a puzzle in this way makes the effort worth it. I fully recommend it, if for no other reason than that you get exposed to reading other people's (open source) code. Reference: Hacking Maven from our JCG partner Ant Kutschera at The Kitchen in the Zoo blog....

ActiveMQ Network Connectors

This post is more for me and any ActiveMQ contributors that may be interested in how the Network Connectors work for ActiveMQ. I recently spent some time looking at the code and thought that it would be good to draw up some quick diagrams to help me remember what I learned and help to identify where to debug in the future if there are issues I am researching. If I make a mistake and you'd like to add clarification, please do so in the comments. First, you set up your network connectors by configuring them in the ActiveMQ configuration file. This configuration gets mapped to the corresponding ActiveMQ beans using the xbean library, for which I have a separate blog post explaining exactly how this is done. To specify network connectors, you add the <networkConnectors/> element to your configuration file and add a <networkConnector/>, <multicastNetworkConnector/>, or <ldapNetworkConnector/>. These three different types of network connectors can be used to establish a network of brokers, with <networkConnector/> being the most common. Here's how the three map to Java classes:

<networkConnector/> maps to org.apache.activemq.network.DiscoveryNetworkConnector
<multicastNetworkConnector/> maps to org.apache.activemq.network.MulticastNetworkConnector
<ldapNetworkConnector/> maps to org.apache.activemq.network.LdapNetworkConnector

Each of those inherits from the org.apache.activemq.network.NetworkConnector super type as depicted in this diagram: So when you have a configuration like this:

<networkConnector uri="static://(tcp://localhost:61626,tcp://localhost:61627)" />

a new DiscoveryNetworkConnector will be configured, instantiated, and added as a connector to the BrokerService (which is the main class where a lot of the ActiveMQ broker details are handled). While the DiscoveryNetworkConnector is being assembled from the configuration, the URI that you specify is used to create a DiscoveryAgent. 
The discovery agent is in charge of assembling the connection and handling failover events that are packaged as DiscoveryEvents. Determining which DiscoveryAgent is picked depends on the DiscoveryAgentFactory and the URI specified. In the case of "static", the SimpleDiscoveryAgent is used. Each URI in the list of possible URIs is treated differently and is assigned its own Transport (more on this in a sec). This means that for each URI you list, a new socket will be established and the broker will attempt to establish a network connector over each of the sockets. You may be wondering how to best implement failover then? In the case described above, you will have multiple connections, and if one of those connections is to a slave that isn't listening, you will see that the connection fails and the discovery agent tries to establish the connection again. This could go on infinitely, which consumes resources. Another approach is to use just one URI for the static discovery agent that utilizes the failover() logic:

<networkConnector uri="static:failover:(tcp://localhost:61626,tcp://localhost:61627)" />

In this case, only one transport will be created, and the failover logic will wrap it and know about both URIs. If one is not available, it won't keep retrying needlessly. Instead it will connect to whichever one it can and only reconnect to the failover URL if the current connection goes down. Note this approach had a bug in it before ActiveMQ version 5.5.1-fuse-00-06. The discovery agent is in charge of creating the bridge, but it delegates that responsibility to a DiscoveryListener. In the example from above, the DiscoveryListener interface is implemented by DiscoveryNetworkConnector, whose onServiceAdd() method creates the bridge. To establish the bridge, a transport is opened up for both the local broker (using VM) and the remote broker (using the specified protocol, in this case TCP). 
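As a rough mental model of that scheme-based selection, here is a purely illustrative sketch — not ActiveMQ's actual factory code; only the agent class names are borrowed — of mapping a connector URI's scheme to a discovery agent:

```java
import java.util.Map;

// Illustrative-only sketch of picking a discovery agent by URI scheme.
// The agent names mirror ActiveMQ's classes, but this is not its code.
public class AgentLookup {

    private static final Map<String, String> AGENT_BY_SCHEME = Map.of(
            "static", "SimpleDiscoveryAgent",
            "multicast", "MulticastDiscoveryAgent");

    public static String agentFor(String connectorUri) {
        // The scheme is everything before the first ':' of the connector URI.
        int colon = connectorUri.indexOf(':');
        String scheme = connectorUri.substring(0, colon);
        String agent = AGENT_BY_SCHEME.get(scheme);
        if (agent == null) {
            throw new IllegalArgumentException("no discovery agent for scheme: " + scheme);
        }
        return agent;
    }

    public static void main(String[] args) {
        System.out.println(agentFor("static://(tcp://localhost:61626,tcp://localhost:61627)"));
    }
}
```

The real factory is more involved, but the idea is the same: the leading scheme of the connector URI decides which agent implementation drives discovery.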
Once the local and remote transports are created, the bridge can be assembled in the DiscoveryNetworkConnector.createBridge(…) method. This method uses the Factory pattern again to find which bridge to use. The possible bridge implementations are shown below: By default, with conduitSubscriptions=true, the DurableConduitBridge is used. Conduit subscriptions establish a single flow of messages to a remote broker to reduce duplicates that can happen when remote topics have multiple consumers. This works great by default, but if you want to load balance your messages across all consumers, then you will want to set conduit subscriptions to false (see the documentation for conduit subscriptions at FuseSource's documentation on Fuse Message Broker). When set to false, the DemandForwardingBridge is used. Once the bridge is assembled, it is configured in the NetworkConnector.configureBridge(…) method. Once everything is assembled and configured on the bridge, it's started. Once started, it begins sending broker Command objects to the remote broker to identify itself, set up a session, and demand consumer info from it. This is in the DemandForwardingBridgeSupport.startRemoteBridge() super class method as seen in the diagram. If you're debugging errors with the network connectors, hopefully this helps identify possible locations for where errors can take place. Reference: From inside the code: ActiveMQ Network Connectors from our JCG partner Christian Posta at the Christian Posta Software blog....

Connect Glassfish 3 to external ActiveMQ 5 broker

Introduction

Here at ONVZ we're using Glassfish 3 as our development and production application server, and we're quite happy with its performance and stability, as well as the large community surrounding it. I rarely run into a problem that does not have a matching solution on stackoverflow or java.net. As part of our open source strategy we also run a customized ActiveMQ cluster called "ONVZ Message Bus". To enable Message Driven Beans and other EJBs to consume and produce messages to and from the ActiveMQ message brokers, ignoring the internal OpenMQ broker that comes shipped with Glassfish, an ActiveMQ Resource Adapter has to be installed. Luckily for me Sven Hafner wrote a blog post about running an embedded ActiveMQ 5 broker in Glassfish 3, and I was able to distill the information I needed to connect to an external broker instead. This blog post describes what I did to get it to work.

Install the ActiveMQ Resource Adapter

Before you start Glassfish, copy the following libraries from an ActiveMQ installation directory or elsewhere to Glassfish:

Copy "slf4j-api-1.5.11.jar" from the ActiveMQ "lib" directory to the Glassfish "lib" directory.
Copy "slf4j-log4j12-1.5.11.jar" and "log4j-1.2.14.jar" from the ActiveMQ "lib/optional" directory to the Glassfish "lib" directory. Note: Instead of these two you can also download "slf4j-jdk14-1.5.11.jar" from the maven repo to the Glassfish "lib" directory.

Download the resource adapter (activemq-rar-5.5.1.rar) from the following location.

Deploy the resource adapter in Glassfish:

In the Glassfish Admin Console, go to "Applications", and click on "Deploy".
Click "Choose file" and select the rar file you just downloaded. 
Notice how the page recognized the selected rar file and automatically selected the correct Type and Application Name; finally click "Ok".

Create the Resource Adapter Config:

In the Glassfish Admin Console, go to "Resources", and click on "Resource Adapter Configs".
Click "New", select the ActiveMQ Resource Adapter we just deployed, and select a Thread Pool ("thread-pool-1" for instance).
Set the properties "ServerUrl", "UserName" and "Password", leave the rest untouched and click "OK".

Create the Connector Connection Pool:

In the Glassfish Admin Console, go to "Resources", "Connectors", "Connector Connection Pools".
Click "New", fill in a pool name like "jms/connectionFactory" and select the ActiveMQ Resource Adapter. The Connection Definition will default to "javax.jms.ConnectionFactory", which is correct, so click "Next".
Enable the "Ping" checkbox and click "Finish".

Create the Admin Object Resource:

In the Glassfish Admin Console, go to "Resources", "Connectors", "Admin Object Resources".
Click "New", set a JNDI Name such as "jms/queue/incoming" and select the ActiveMQ Resource Adapter.
Again, the other fields don't need to be changed, so click "OK".

We now have everything in place (in JNDI actually) to start processing messages using a standard Java EE Message Driven Bean. The "Connector Connection Pool" you just created has resulted in a ConnectionFactory being registered in JNDI, and the "Admin Object Resource" resulted in a JMS Destination. You can find these objects in the admin console when you go to "Resources", "JMS Resources". In the Glassfish version I'm using (3.1.1) the admin console has a bug which results in the connection factory and destinations being only visible in the menu, and not on the right side of the page.  
Create and deploy a Message Driven Bean

Create a new Java Enterprise project in your favorite IDE, and create a Message Driven Bean with the following contents:

package com.example.activemq.glassfish;

import javax.ejb.*;
import javax.jms.*;

@MessageDriven(
    activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/queue/incoming")
    }
)
public class ExampleMessageBean implements MessageListener {

    public void onMessage(Message message) {
        try {
            System.out.println("We've received a message: " + message.getJMSMessageID());
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}

Glassfish will hook up your bean to the configured queue, but it will try to do so with the default ConnectionFactory, which connects to the embedded OpenMQ broker. This is not what we want, so we'll instruct Glassfish which ConnectionFactory to use. Add a file called glassfish-ejb-jar.xml to the META-INF folder, and insert the following contents:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE glassfish-ejb-jar PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 EJB 3.1//EN" "http://glassfish.org/dtds/glassfish-ejb-jar_3_1-1.dtd">
<glassfish-ejb-jar>
  <enterprise-beans>
    <ejb>
      <ejb-name>ExampleMessageBean</ejb-name>
      <mdb-connection-factory>
        <jndi-name>jms/connectionFactory</jndi-name>
      </mdb-connection-factory>
      <mdb-resource-adapter>
        <resource-adapter-mid>activemq-rar-5.5.1</resource-adapter-mid>
      </mdb-resource-adapter>
    </ejb>
  </enterprise-beans>
</glassfish-ejb-jar>

Deploy the MDB to Glassfish. Glassfish now uses the ActiveMQ ConnectionFactory and all is well. Use the ActiveMQ web console to send a message to a queue called "jms/queue/incoming", or use some other tool to send a message. Glassfish catches all the sysout statements and prints those in its default glassfish log file. 
Reference: How to connect Glassfish 3 to an external ActiveMQ 5 broker from our JCG partner Geert Schuring at the Geert Schuring blog....

Test-driving Builders with Mockito and Hamcrest

A lot of people have asked me in the past if I test getters and setters (properties, attributes, etc). They also asked me if I test my builders. The answer, in my case, is it depends. When working with legacy code, I wouldn't bother to test data structures, that means, objects with just getters and setters, maps, lists, etc. One of the reasons is that I never mock them. I use them as they are when testing the classes that use them. For builders, when they are used just by test classes, I also don't unit test them since they are used as "helpers" in many other tests. If they have a bug, the tests will fail. In summary, if these data structures and builders already exist, I wouldn't bother retrofitting tests for them. But now let's talk about doing TDD and assume you need a new object with getters and setters. In this case, yes, I would write tests for the getters and setters since I need to justify their existence writing my tests first. In order to have a rich domain model, I normally tend to have business logic associated with the data, giving a richer domain model. Let's see the following example. In real life, I would be writing one test at a time, making it pass and refactoring. For this post, I'll just give you the full classes for clarity's sake. 
First let's write the tests:

package org.craftedsw.testingbuilders;

import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;
import static org.mockito.Matchers.anyString;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class TradeTest {

    private static final String INBOUND_XML_MESSAGE = "<message >";
    private static final boolean REPORTABILITY_RESULT = true;

    private Trade trade;

    @Mock private ReportabilityDecision reportabilityDecision;

    @Before
    public void initialise() {
        trade = new Trade();
        when(reportabilityDecision.isReportable(anyString()))
            .thenReturn(REPORTABILITY_RESULT);
    }

    @Test
    public void should_contain_the_inbound_xml_message() {
        trade.setInboundMessage(INBOUND_XML_MESSAGE);

        assertThat(trade.getInboundMessage(), is(INBOUND_XML_MESSAGE));
    }

    @Test
    public void should_tell_if_it_is_reportable() {
        trade.setInboundMessage(INBOUND_XML_MESSAGE);
        trade.setReportabilityDecision(reportabilityDecision);

        boolean reportable = trade.isReportable();

        verify(reportabilityDecision).isReportable(INBOUND_XML_MESSAGE);
        assertThat(reportable, is(REPORTABILITY_RESULT));
    }
}

Now the implementation:

package org.craftedsw.testingbuilders;

public class Trade {

    private String inboundMessage;
    private ReportabilityDecision reportabilityDecision;

    public String getInboundMessage() {
        return this.inboundMessage;
    }

    public void setInboundMessage(String inboundXmlMessage) {
        this.inboundMessage = inboundXmlMessage;
    }

    public boolean isReportable() {
        return reportabilityDecision.isReportable(inboundMessage);
    }

    public void setReportabilityDecision(ReportabilityDecision reportabilityDecision) {
        this.reportabilityDecision = reportabilityDecision;
    }
}

This case is interesting since the Trade object has one property called inboundMessage with 
respective getters and setters, and also uses a collaborator (reportabilityDecision, injected via setter) in its isReportable business method. A common approach that I've seen many times to "test" the setReportabilityDecision method is to introduce a getReportabilityDecision method returning the reportabilityDecision (collaborator) object. This is definitely the wrong approach. Our objective should be to test how the collaborator is used, that means, if it is invoked with the right parameters and if whatever it returns (if it returns anything) is used. Introducing a getter in this case does not make sense, since it does not guarantee that the object, after having the collaborator injected via the setter, is interacting with the collaborator as we intended. As an aside, when we write tests about how collaborators are going to be used, defining their interfaces, we are using TDD as a design tool and not simply as a testing tool. I'll cover that in a future blog post. OK, now imagine that this trade object can be created in different ways, that means, with different reportability decisions. We also would like to make our code more readable, so we decide to write a builder for the Trade object. Let's also assume, in this case, that we want the builder to be used in the production and test code as well. In this case, we want to test-drive our builder. Here is an example that I normally find when developers are test-driving a builder implementation. 
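The interaction-over-getter idea can also be shown entirely framework-free. This is a self-contained sketch with a hand-rolled recording stub standing in for the Mockito mock; the class shapes are simplified from the article's:

```java
// Self-contained sketch: verifying collaborator interaction without a getter.
public class InteractionDemo {

    public interface ReportabilityDecision {
        boolean isReportable(String message);
    }

    // A hand-rolled recording stub standing in for a Mockito mock.
    public static class RecordingDecision implements ReportabilityDecision {
        public String receivedMessage;

        public boolean isReportable(String message) {
            receivedMessage = message;
            return true;
        }
    }

    public static class Trade {
        private String inboundMessage;
        private ReportabilityDecision reportabilityDecision;

        public void setInboundMessage(String m) { inboundMessage = m; }
        public void setReportabilityDecision(ReportabilityDecision d) { reportabilityDecision = d; }
        public boolean isReportable() { return reportabilityDecision.isReportable(inboundMessage); }
    }

    public static void main(String[] args) {
        RecordingDecision decision = new RecordingDecision();
        Trade trade = new Trade();
        trade.setInboundMessage("<message >");
        trade.setReportabilityDecision(decision);

        boolean reportable = trade.isReportable();

        // The check is about *how* the collaborator was used -- no getter needed.
        System.out.println(reportable && "<message >".equals(decision.receivedMessage));
    }
}
```

Mockito's verify() automates exactly this recording-and-checking, which is why no getReportabilityDecision getter ever becomes necessary.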
package org.craftedsw.testingbuilders;

import static org.craftedsw.testingbuilders.TradeBuilder.aTrade;
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class TradeBuilderTest {

    private static final String TRADE_XML_MESSAGE = "<message >";

    @Mock private ReportabilityDecision reportabilityDecision;

    @Test
    public void should_create_a_trade_with_inbound_message() {
        Trade trade = aTrade()
                          .withInboundMessage(TRADE_XML_MESSAGE)
                          .build();

        assertThat(trade.getInboundMessage(), is(TRADE_XML_MESSAGE));
    }

    @Test
    public void should_create_a_trade_with_a_reportability_decision() {
        Trade trade = aTrade()
                          .withInboundMessage(TRADE_XML_MESSAGE)
                          .withReportabilityDecision(reportabilityDecision)
                          .build();

        trade.isReportable();

        verify(reportabilityDecision).isReportable(TRADE_XML_MESSAGE);
    }
}

Now let's have a look at these tests. The good news is, the tests were written in the way developers want to read them. That also means that they were "designing" the TradeBuilder public interface (public methods). The bad news is how they are testing it. If you look closer, the tests for the builder are almost identical to the tests in the TradeTest class. You may say that it is OK since the builder is creating the object and the tests should be similar. The only difference is that in TradeTest we instantiate the object by hand and in TradeBuilderTest we use the builder to instantiate it, but the assertions should be the same, right? For me, firstly, we have duplication. Secondly, the TradeBuilderTest doesn't show its real intent. 
After many refactorings and exploring different ideas, while pair-programming with one of the guys in my team, we came up with this approach:

package org.craftedsw.testingbuilders;

import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Spy;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class TradeBuilderTest {

    private static final String TRADE_XML_MESSAGE = "<message >";

    @Mock private ReportabilityDecision reportabilityDecision;
    @Mock private Trade trade;

    @Spy @InjectMocks TradeBuilder tradeBuilder;

    @Test
    public void should_create_a_trade_with_all_specified_attributes() {
        given(tradeBuilder.createTrade()).willReturn(trade);

        tradeBuilder
            .withInboundMessage(TRADE_XML_MESSAGE)
            .withReportabilityDecision(reportabilityDecision)
            .build();

        verify(trade).setInboundMessage(TRADE_XML_MESSAGE);
        verify(trade).setReportabilityDecision(reportabilityDecision);
    }
}

So now the TradeBuilderTest expresses what is expected from the TradeBuilder, that means, the side effect when the build method is called. We want it to create a Trade and set its attributes. There are no duplications with the TradeTest. It is left to the TradeTest to guarantee the correct behavior of the Trade object. 
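The trick that makes this test possible is the overridable createTrade() factory method, which @Spy plus given(tradeBuilder.createTrade()) exploits. Here is a framework-free sketch of that seam (illustrative, simplified shapes — not the article's final code):

```java
// Framework-free sketch of the overridable-factory seam used by the spy.
public class BuilderSeamDemo {

    public static class Trade {
        public String inboundMessage;
    }

    public static class TradeBuilder {
        private String inboundMessage;

        public TradeBuilder withInboundMessage(String m) {
            this.inboundMessage = m;
            return this;
        }

        public Trade build() {
            Trade trade = createTrade();      // the seam a test can override
            trade.inboundMessage = inboundMessage;
            return trade;
        }

        protected Trade createTrade() {
            return new Trade();
        }
    }

    public static void main(String[] args) {
        // A "test" hands the builder a canned instance by overriding the seam --
        // the same effect that @Spy + given(createTrade()) automates with Mockito.
        final Trade canned = new Trade();
        Trade built = new TradeBuilder() {
            @Override protected Trade createTrade() { return canned; }
        }.withInboundMessage("<message >").build();

        System.out.println(built == canned);   // the builder populated our instance
    }
}
```

Keeping the factory method package-private (or protected) keeps the seam invisible to production callers while leaving it open to the test.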
For completion's sake, here is the final TradeBuilder class:

package org.craftedsw.testingbuilders;

public class TradeBuilder {

    private String inboundMessage;
    private ReportabilityDecision reportabilityDecision;

    public static TradeBuilder aTrade() {
        return new TradeBuilder();
    }

    public TradeBuilder withInboundMessage(String inboundMessage) {
        this.inboundMessage = inboundMessage;
        return this;
    }

    public TradeBuilder withReportabilityDecision(ReportabilityDecision reportabilityDecision) {
        this.reportabilityDecision = reportabilityDecision;
        return this;
    }

    public Trade build() {
        Trade trade = createTrade();
        trade.setInboundMessage(inboundMessage);
        trade.setReportabilityDecision(reportabilityDecision);
        return trade;
    }

    Trade createTrade() {
        return new Trade();
    }
}

The combination of Mockito and Hamcrest is extremely powerful, allowing us to write better and more readable tests. Reference: Test-driving Builders with Mockito and Hamcrest from our JCG partner Sandro Mancuso at the Crafted Software blog....

A First Look at MVVM in ZK 6

MVVM vs. MVC

In a previous post we've seen how the Ajax framework ZK adopts a CSS-selector-inspired Controller for wiring UI components in the View and listening to their events. Under this ZK MVC pattern, the UI components in the View need not be bound to any Controller methods or data objects. The flexibility of using selector patterns as a means to map View states and events to the Controller makes code more adaptive to change. MVVM approaches separation of concerns in the reverse direction. Under this pattern, a View-Model and a binder mechanism take the place of the Controller. The binder maps requests from the View to action logic in the View-Model and updates any value (data) on both sides, allowing the View-Model to be independent of any particular View.

Anatomy of MVVM in ZK 6

Below is a schematic diagram of ZK 6's MVVM pattern. Here are some additional points that are not conveyed in the diagram:

BindComposer: implements ZK's standard controller interfaces (Composer & ComposerExt); the default implementation is sufficient, no modifications necessary.
View: informs the binder which method to call and what properties to update on the View-Model.
View-Model: just a POJO; communication with the binder is carried out via Java annotations.

MVVM in Action

Consider the task of displaying a simplified inventory without knowledge of the exact UI markup. An inventory is a collection of items, so we have the object representation of such:

public class Item {

    private String ID;
    private String name;
    private int quantity;
    private BigDecimal unitPrice;

    // getters & setters
}

It also makes sense to expect that an item on the list can be selected and operated on. Thus, based on our knowledge and assumptions so far, we can go ahead and implement the View-Model. 
public class InventoryVM {

    ListModelList<Item> inventory;
    Item selectedItem;

    public ListModelList<Item> getInventory() {
        inventory = new ListModelList<Item>(InventoryDAO.getInventory());
        return inventory;
    }

    public Item getSelectedItem() {
        return selectedItem;
    }

    public void setSelectedItem(Item selectedItem) {
        this.selectedItem = selectedItem;
    }
}

Here we have a typical POJO for the View-Model implementation: data with their getters and setters.

View Implementation, 'Take One'

Now suppose we later learned the requirements for the View are just a simple tabular display. A possible markup to achieve the UI as indicated above is:

<window title="Inventory" border="normal" apply="org.zkoss.bind.BindComposer"
  viewModel="@id('vm') @init('lab.zkoss.mvvm.ctrl.InventoryVM')" >
  <listbox model="@load(vm.inventory)" width="600px" >
    <auxhead><auxheader label="Inventory Summary" colspan="5" align="center"/>
    </auxhead>
    <listhead>
      <listheader width="15%" label="Item ID" sort="auto(ID)"/>
      <listheader width="20%" label="Name" sort="auto(name)"/>
      <listheader width="20%" label="Quantity" sort="auto(quantity)"/>
      <listheader width="20%" label="Unit Price" sort="auto(unitPrice)"/>
      <listheader width="25%" label="Net Value"/> </listhead>
    <template name="model" var="item">
      <listitem>
        <listcell><label value="@load(item.ID)"/></listcell>
        <listcell><label value="@load(item.name)"/></listcell>
        <listcell><label value="@load(item.quantity)"/></listcell>
        <listcell><label value="@load(item.unitPrice)"/></listcell>
        <listcell><label value="@load(item.unitPrice * item.quantity)"/></listcell>
      </listitem>
    </template>
  </listbox>
</window>

Let's elaborate a bit on the markup here. At line 1, we apply the default BindComposer to the Window component, which makes all children components of the Window subject to the BindComposer's effect. On the following line, we instruct the BindComposer which View-Model class to instantiate, and we give the View-Model instance an ID so we can make reference to it. 
Since we're loading a collection of data onto the Listbox, at line 3 we assign the property 'inventory' of our View-Model instance, which is a collection of Item objects, to the Listbox's 'model' attribute. At line 12, we then make use of the model in our Template component. Template iterates its enclosed components according to the model it receives. In this case, we have 5 Listcells which make up a row in the Listbox. In each Listcell, we load the properties of each object and display them in Labels. Via ZK's binding system, we were able to access data in our View-Model instance and load it in the View using annotations.

View Implementation, 'Take Two'

Suppose later in development it's agreed that the current tabular display takes too much space in our presentation, and we're now asked to show the details of an item only when the item is selected in a Combobox, as shown below. Though both the presentation and behaviour (detail is shown only upon user's selection) differ from our previous implementation, the View-Model class need not be heavily modified. Since an item's detail will be rendered only when it is selected in the Combobox, it's obvious that we'd need to handle the 'onSelect' event, so let's add a new method doSelect:

public class InventoryVM {

    ListModelList<Item> inventory;
    Item selectedItem;

    @NotifyChange("selectedItem")
    @Command
    public void doSelect() {
    }

    // getters & setters
}

A method annotated with @Command makes it eligible to be called from our markup by its name, in our case:

<combobox onSelect="@command('doSelect')" >

The annotation @NotifyChange("selectedItem") allows the property selectedItem to be updated automatically whenever the user selects a new Item from the Combobox. For our purposes, no additional implementation is needed for the method doSelect. 
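Under the hood, a binder resolves @Command names to view-model methods reflectively. The following is a toy illustration of that mechanism in plain Java — it borrows ZK's annotation and class names for flavour but is in no way ZK's implementation:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Toy sketch of what a binder does with @Command: find the view-model method
// published under a command name and invoke it via reflection.
public class MiniBinder {

    @Retention(RetentionPolicy.RUNTIME)
    public @interface Command {
        String value();
    }

    public static class InventoryVM {
        public boolean selected;

        @Command("doSelect")
        public void doSelect() {
            selected = true;
        }
    }

    public static void sendCommand(Object viewModel, String commandName) {
        for (Method m : viewModel.getClass().getMethods()) {
            Command c = m.getAnnotation(Command.class);
            if (c != null && c.value().equals(commandName)) {
                try {
                    m.invoke(viewModel);
                    return;
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        throw new IllegalArgumentException("no @Command named " + commandName);
    }

    public static void main(String[] args) {
        InventoryVM vm = new InventoryVM();
        sendCommand(vm, "doSelect");   // roughly what onSelect="@command('doSelect')" triggers
        System.out.println(vm.selected);
    }
}
```

This is why the View-Model stays a plain POJO: the binder discovers everything it needs from annotations at runtime, and no View reference ever leaks into the View-Model.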
With this bit of change done, we can now see how this slightly-modified View-Model would work with our new markup:

<window title="Inventory" border="normal" apply="org.zkoss.bind.BindComposer"
  viewModel="@id('vm') @init('lab.zkoss.mvvm.ctrl.InventoryVM')" width="600px">
  ...
  <combobox model="@load(vm.inventory)"
            selectedItem="@bind(vm.selectedItem)"
            onSelect="@command('doSelect')" >
    <template name="model" var="item">
      <comboitem label="@load(item.ID)"/>
    </template>
    <comboitem label="Test"/>
  </combobox>
  <listbox visible="@load(not empty vm.selectedItem)" width="240px">
    <listhead>
      <listheader></listheader>
      <listheader></listheader>
    </listhead>
    <listitem>
      <listcell><label value="Item Name: "/></listcell>
      <listcell><label value="@load(vm.selectedItem.name)"/></listcell>
    </listitem>
    <listitem>
      <listcell><label value="Unit Price: "/></listcell>
      <listcell><label value="@load(vm.selectedItem.unitPrice)"/></listcell>
    </listitem>
    <listitem>
      <listcell><label value="Units in Stock: "/></listcell>
      <listcell><label value="@load(vm.selectedItem.quantity)"/></listcell>
    </listitem>
    <listitem>
      <listcell><label value="Net Value: "/></listcell>
      <listcell><label value="@load(vm.selectedItem.unitPrice * vm.selectedItem.quantity)"/></listcell>
    </listitem>
  </listbox>
  ...
</window>

At line 4, we load the data collection inventory into the Combobox's model attribute so it can iteratively display the ID of each Item object in the data model using the Template component declared on line 7. At line 5, the selectedItem attribute points to the most recently selected Item on that list of Item objects. At line 6, we've mapped the onSelect event to the View-Model's doSelect method. At line 12, we make the Listbox containing an Item's detail visible only if the selectedItem property in the View-Model is not empty (selectedItem remains empty until an item is selected in the Combobox). 
The selectedItem's properties are then loaded to fill out the Listbox.

Recap

Under the MVVM pattern, our View-Model class exposes its data and methods to the binder; there's no reference made to any particular View component. The View implementations access data or invoke event handlers via the binder. In this post, we've only been exposed to the fundamental workings of ZK's MVVM mechanisms. The binder is obviously not restricted to just loading data from the View-Model. In addition to saving data from the View to the View-Model, we can also inject data converters and validators into the mix of View to View-Model communications. The MVVM pattern may also work in conjunction with the MVC model. That is, we can also wire components and listen to fired events via the MVC Selector mechanism if we wish to do so. We'll dig into some of these topics at a later time. Reference: A First Look at MVVM in ZK 6 from our JCG partner Lance Lu at the Tech Dojo blog....

JMX : Some Introductory Notes

JMX (Java Management Extensions) is a J2SE technology which enables management and monitoring of Java applications. The basic idea is to implement a set of management objects and register the implementations to a platform server, from where these implementations can be invoked either locally or remotely to the JVM using a set of connectors or adapters. A management/instrumentation object is called an MBean (stands for Managed Bean). Once instantiated, an MBean will be registered with a unique ObjectName with the platform MBeanServer. The MBeanServer acts as a repository of MBeans, enabling the creation, registering, accessing and removing of MBeans. However, the MBeanServer does not persist the MBean information, so with a restart of the JVM you would lose all the MBeans in it. The MBeanServer is normally accessed through its MBeanServerConnection API, which works both locally and remotely. The management interface of an MBean would typically consist of [1]:

Named and typed attributes that can be read/written
Named and typed operations that can be invoked
Typed notifications that can be emitted by the MBean

For example, say it is required to manage the thread pool parameters of one of your applications at runtime. With JMX it's a matter of writing an MBean with logic related to setting and getting these parameters and registering it to the MBeanServer. Now the next step is to expose these MBeans to the outside world so that remote clients can invoke them to manage your application. This can be done via various protocols implemented via protocol connectors and protocol adapters. A protocol connector basically exposes MBeans as they are, so that the remote client sees the same interface (the JMX RMI Connector is a good example). So basically the client or the remote management application should be enabled for JMX technology. 
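Sticking with the thread-pool example, here is a minimal runnable sketch — the class, attribute and ObjectName are illustrative, not from any real application — of registering a standard MBean with the platform MBeanServer and reading it back through the MBeanServerConnection API:

```java
import java.io.IOException;
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.JMException;
import javax.management.MBeanServer;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class PoolConfigDemo {

    // Standard MBean convention: the interface must be named <Class>MBean.
    public interface PoolConfigMBean {
        int getPoolSize();
        void setPoolSize(int size);
    }

    public static class PoolConfig implements PoolConfigMBean {
        private volatile int poolSize = 10;
        public int getPoolSize() { return poolSize; }
        public void setPoolSize(int size) { poolSize = size; }
    }

    // Register a PoolConfig MBean, update its attribute, and read it back.
    public static int registeredPoolSize(int newSize) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // Unique name per call so repeated runs don't collide.
            ObjectName name = new ObjectName("demo:type=PoolConfig,id=" + System.nanoTime());
            server.registerMBean(new PoolConfig(), name);

            // A local JMX client: MBeanServer extends MBeanServerConnection.
            MBeanServerConnection conn = server;
            conn.setAttribute(name, new Attribute("PoolSize", newSize));
            return (Integer) conn.getAttribute(name, "PoolSize");
        } catch (JMException | IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(registeredPoolSize(25));
    }
}
```

Because the platform MBeanServer ships with every JVM, this runs with no extra dependencies; a remote management client would perform the same setAttribute/getAttribute calls over an MBeanServerConnection obtained through a connector instead.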
A protocol adapter (e.g. HTML, SNMP) adapts the results according to the protocol the client is expecting (e.g. for a browser-based client, it sends the results in HTML over HTTP). Now that MBeans are properly exposed to the outside, we need some clients to access these MBeans to manage our applications. There are basically two categories of clients available, according to whether they use connectors or adapters. JMX Clients use JMX APIs to connect to the MBeanServer and invoke MBeans. Generally, JMX Clients use an MBeanServerConnection to connect to the MBeanServer and invoke MBeans through it by providing the MBean ID (ObjectName) and required parameters. There are basically three types of JMX Clients.

Local JMX Client: A client that runs in the same JVM as the MBeanServer. These clients can also use the MBeanServer API itself, since they are running inside the same JVM.

Agent: The agent is a local JMX Client which manages the MBeanServer itself. Remember that the MBeanServer does not persist MBean information, so we can use an Agent to provide this logic, encapsulating the MBeanServer with the additional functionality. The Agent is responsible for initializing and managing the MBeanServer itself.

Remote JMX Client: A remote client differs from a local client only in that it needs to instantiate a Connector for connecting to a Connector server in order to get an MBeanServerConnection. And of course it would be running in a remote JVM, as the name suggests.

The next type of client is the Management Client, which uses protocol adapters to connect to the MBeanServer. For these to work, the respective adapter should be present and running in the JVM being managed. For example, an HTML adapter should be present in the JVM for a browser-based client to connect to it and invoke MBeans. The diagram below summarizes the concepts described so far. This concludes my quick notes on JMX. An extremely good read on the main JMX concepts can be found at [2]. 
Also, the JMX learning trail at Oracle is a good starting point for getting up to speed with JMX.

[1] http://docs.oracle.com/javase/6/docs/technotes/guides/jmx/overview/instrumentation.html#wp998816
[2] http://pub.admc.com/howtos/jmx/architecture-chapt.html

Reference: JMX: Some Introductory Notes from our JCG partner Buddhika Chamith at the Source Open blog.

Serving Files with Puppet Standalone in Vagrant

If you use Puppet in the client-server mode to configure your production environment, then you might want to be able to copy & paste from the prod configuration into Vagrant's standalone Puppet configuration to test stuff. One of the key features necessary for that is enabling file serving via "source => 'puppet:///path/to/file'". In the client-server mode the files are served by the server; in the standalone mode you can configure Puppet to read from a local (likely shared) folder. We will see how to do this.

Credits: This post is based heavily on Akumria's answer at StackOverflow: how to source a file in puppet manifest from module.

Enabling Puppet Standalone in Vagrant to Resolve puppet:///…

Quick overview:

  1. Make the directory with the files to be served available to the Vagrant VM
  2. Create fileserver.conf to inform Puppet about the directory
  3. Tell Puppet about the fileserver.conf
  4. Use it

1. Make the directory with the files to be served available to the Vagrant VM

For example as a shared folder:

  # Snippet of <vagrant directory>/Vagrantfile
  config.vm.share_folder "PuppetFiles", "/etc/puppet/files", "./puppet-files-symlink"

(In my case this is actually a symlink to the actual folder in our Puppet git repository. Beware that symlinks inside shared folders often don't work and thus it's better to use the symlink as a standalone shared folder root.) Notice you don't need to declare a shared folder.

2. Create fileserver.conf to inform Puppet about the directory

You need to tell Puppet that the source "puppet:///files/" should be served from /etc/puppet/files/:

  # <vagrant directory>/fileserver.conf
  [files]
    path /etc/puppet/files
    allow *

3. Tell Puppet about the fileserver.conf

Puppet needs to know that it should read the fileserver.conf file:

  # Snippet of <vagrant directory>/Vagrantfile
  config.vm.provision :puppet,
      :options => ["--fileserverconfig=/vagrant/fileserver.conf"],
      :facter => { "fqdn" => "vagrant.vagrantup.com" } do |puppet|
    ...
  end
4. Use it

  vagrant_dir$ echo "dummy content" > ./puppet-files-symlink/example-file.txt

  # Snippet of <vagrant directory>/manifests/<my manifest>.pp
  ...
  file { '/tmp/example-file.txt':
    ensure => file,
    source => 'puppet:///files/example-file.txt',
  }
  ...

Caveats: URLs with a server name (puppet://puppet/…) don't work

URLs like puppet://puppet/files/path/to/file don't work; you must use puppet:///files/path/to/file instead (empty, i.e. implicit, server name => three slashes). The reason is, I believe, that if you state the server name explicitly then Puppet will try to find that server and get the files from there (which might be a desirable behavior if you run Puppet Master locally or elsewhere; in that case just add the server name to /etc/hosts in the Vagrant VM or make sure the DNS server used can resolve it). On the other hand, if you leave the server name out and rely on the implicit value, then Puppet in the standalone mode will consult its fileserver.conf and behave accordingly. (Notice that in the server-client mode the implicit server name equals the Puppet master, i.e. puppet:/// works perfectly well there.)

If you use puppet://puppet/files/… then you'll get an error like this:

  err: /Stage[main]/My_example_class/File[fetch_cdn_logs.py]: Could not evaluate: getaddrinfo: Name or service not known Could not retrieve file metadata for puppet://puppet/files/analytics/fetch_cdn_logs.py: getaddrinfo: Name or service not known at /tmp/vagrant-puppet/manifests/analytics_dev.pp:283

Environment: Puppet 2.7.14, Vagrant 1.0.2

Reference: Serving Files with Puppet Standalone in Vagrant from our JCG partner Jakub Holy at The Holy Java blog.
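For reference, the two Vagrantfile snippets above can be combined into one minimal file. This is a sketch, not the author's exact setup: the box name and manifest file name are placeholders, and the Vagrant::Config.run syntax matches the Vagrant 1.0.x line used in this post (it is a config fragment, so the inner "puppet" block settings are illustrative).

```ruby
# <vagrant directory>/Vagrantfile -- consolidated sketch of steps 1 and 3
Vagrant::Config.run do |config|
  config.vm.box = "precise64"  # placeholder box name

  # Step 1: expose the directory with the files to be served
  # inside the VM at /etc/puppet/files.
  config.vm.share_folder "PuppetFiles", "/etc/puppet/files", "./puppet-files-symlink"

  # Step 3: point standalone Puppet at our fileserver.conf so that
  # puppet:///files/... URLs resolve against /etc/puppet/files.
  config.vm.provision :puppet,
      :options => ["--fileserverconfig=/vagrant/fileserver.conf"],
      :facter  => { "fqdn" => "vagrant.vagrantup.com" } do |puppet|
    puppet.manifests_path = "manifests"   # placeholder paths
    puppet.manifest_file  = "site.pp"
  end
end
```

Together with the fileserver.conf from step 2, this is everything Vagrant needs; running "vagrant provision" should then resolve puppet:///files/… sources from the shared folder.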
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
