


Testing legacy code: Hard-wired dependencies

When pairing with some developers, I’ve noticed that one of the reasons they are not unit testing existing code is that, quite often, they don’t know how to overcome certain problems. The most common one is related to hard-wired dependencies: Singletons and static calls. Let’s look at this piece of code:

public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException {
    List<Trip> tripList = new ArrayList<Trip>();
    User loggedUser = UserSession.getInstance().getLoggedUser();
    boolean isFriend = false;
    if (loggedUser != null) {
        for (User friend : user.getFriends()) {
            if (friend.equals(loggedUser)) {
                isFriend = true;
                break;
            }
        }
        if (isFriend) {
            tripList = TripDAO.findTripsByUser(user);
        }
        return tripList;
    } else {
        throw new UserNotLoggedInException();
    }
}

Horrendous, isn’t it? The code above has loads of problems, but before we change it, we need to have it covered by tests. There are two challenges when unit testing the method above:

User loggedUser = UserSession.getInstance().getLoggedUser(); // Line 3
tripList = TripDAO.findTripsByUser(user);                    // Line 13

As we know, unit tests should test just one class and not its dependencies. That means we need to find a way to mock the Singleton and the static call. In general we do that by injecting the dependencies, but we have a rule, remember? We can’t change any existing code that is not covered by tests. The only exception is when we need to change the code in order to add unit tests, and in that case only automated refactorings (via the IDE) are allowed. Besides that, many mocking frameworks are not able to mock static methods anyway, so injecting the TripDAO would not solve the problem.

Overcoming the hard-wired dependencies problem

NOTE: In real life I would be writing tests first and making each change only when I needed it, but in order to keep the post short and focused I will not go step by step here.

First of all, let’s isolate the Singleton dependency in its own method. Let’s make it protected as well.
But wait, this needs to be done via an automated “extract method” refactoring. Select just the following piece of code in TripService.java:

UserSession.getInstance().getLoggedUser()

Go to your IDE’s refactoring menu, choose “extract method” and give it a name. After this step, the code will look like this:

public class TripService {

    public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException {
        ...
        User loggedUser = loggedUser();
        ...
    }

    protected User loggedUser() {
        return UserSession.getInstance().getLoggedUser();
    }
}

Doing the same thing for TripDAO.findTripsByUser(user), we will have:

public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException {
    ...
    User loggedUser = loggedUser();
    ...
    if (isFriend) {
        tripList = findTripsByUser(user);
    }
    ...
}

protected List<Trip> findTripsByUser(User user) {
    return TripDAO.findTripsByUser(user);
}

protected User loggedUser() {
    return UserSession.getInstance().getLoggedUser();
}

In our test class, we can now extend the TripService class and override the protected methods we created, making them return whatever we need for our unit tests:

private TripService createTripService() {
    return new TripService() {
        @Override protected User loggedUser() {
            return loggedUser;
        }
        @Override protected List<Trip> findTripsByUser(User user) {
            return user.trips();
        }
    };
}

And that’s it. Our TripService is now testable. First we write all the tests we need to make sure the class/method is fully tested and all code branches are exercised. I use Eclipse’s EclEmma plugin for that and I strongly recommend it. If you are not using Java and/or Eclipse, try to use a code coverage tool specific to your language/IDE while writing tests for existing code. It helps a lot.
So here is my final test class:

public class TripServiceTest {

    private static final User UNUSED_USER = null;
    private static final User NON_LOGGED_USER = null;

    private User loggedUser = new User();
    private User targetUser = new User();
    private TripService tripService;

    @Before
    public void initialise() {
        tripService = createTripService();
    }

    @Test(expected = UserNotLoggedInException.class)
    public void shouldThrowExceptionWhenUserIsNotLoggedIn() throws Exception {
        loggedUser = NON_LOGGED_USER;
        tripService.getTripsByUser(UNUSED_USER);
    }

    @Test
    public void shouldNotReturnTripsWhenLoggedUserIsNotAFriend() throws Exception {
        List<Trip> trips = tripService.getTripsByUser(targetUser);
        assertThat(trips.size(), is(equalTo(0)));
    }

    @Test
    public void shouldReturnTripsWhenLoggedUserIsAFriend() throws Exception {
        User john = anUser().friendsWith(loggedUser)
                            .withTrips(new Trip(), new Trip())
                            .build();
        List<Trip> trips = tripService.getTripsByUser(john);
        assertThat(trips, is(equalTo(john.trips())));
    }

    private TripService createTripService() {
        return new TripService() {
            @Override protected User loggedUser() {
                return loggedUser;
            }
            @Override protected List<Trip> findTripsByUser(User user) {
                return user.trips();
            }
        };
    }
}

Are we done? Of course not. We still need to refactor the TripService class:

public class TripService {

    public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException {
        List<Trip> tripList = new ArrayList<Trip>();
        User loggedUser = loggedUser();
        boolean isFriend = false;
        if (loggedUser != null) {
            for (User friend : user.getFriends()) {
                if (friend.equals(loggedUser)) {
                    isFriend = true;
                    break;
                }
            }
            if (isFriend) {
                tripList = findTripsByUser(user);
            }
            return tripList;
        } else {
            throw new UserNotLoggedInException();
        }
    }

    protected List<Trip> findTripsByUser(User user) {
        return TripDAO.findTripsByUser(user);
    }

    protected User loggedUser() {
        return UserSession.getInstance().getLoggedUser();
    }
}

How many problems can you see? Take your time before reading the ones I found.
:-)

Refactoring

NOTE: When I did this, I did it step by step, running the tests after every step. Here I’ll just summarise my decisions.

The first thing I noticed is that the tripList variable does not need to be created when the logged user is null, since an exception is thrown and nothing else happens. I decided to invert the outer if and extract a guard clause:

public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException {
    User loggedUser = loggedUser();
    validate(loggedUser);
    List<Trip> tripList = new ArrayList<Trip>();
    boolean isFriend = false;
    for (User friend : user.getFriends()) {
        if (friend.equals(loggedUser)) {
            isFriend = true;
            break;
        }
    }
    if (isFriend) {
        tripList = findTripsByUser(user);
    }
    return tripList;
}

private void validate(User loggedUser) throws UserNotLoggedInException {
    if (loggedUser == null) throw new UserNotLoggedInException();
}

Feature Envy

When a class gets data from another class in order to do some calculation or comparison on that data, it quite often means that the client class envies the other class. This is the Feature Envy code smell; it is very common in long methods and is everywhere in legacy code. In OO, data and the operations on that data should live on the same object. Looking at the code above, clearly the whole business of determining whether a user is friends with another does not belong in the TripService class. Let’s move it to the User class. First the unit test:

@Test
public void shouldReturnTrueWhenUsersAreFriends() throws Exception {
    User john = new User();
    User bob = new User();

    john.addFriend(bob);

    assertTrue(john.isFriendsWith(bob));
}

Now, let’s move the code to the User class. Here we can use the Java collections API a bit better and remove the whole for loop and the isFriend flag altogether.
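The reason the whole loop can go away is that List.contains already performs the equals-based scan the loop was doing by hand. A tiny, self-contained sketch of that idea (Person here is a hypothetical stand-in for the article’s User class):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the article's User class (hypothetical name).
class Person {
    private final List<Person> friends = new ArrayList<Person>();

    void addFriend(Person p) {
        friends.add(p);
    }

    // List.contains uses equals(), so the manual for-loop with an
    // isFriend flag collapses into a single call.
    boolean isFriendsWith(Person p) {
        return friends.contains(p);
    }
}

public class ContainsDemo {
    public static void main(String[] args) {
        Person john = new Person();
        Person bob = new Person();
        john.addFriend(bob);
        System.out.println(john.isFriendsWith(bob));  // true
        System.out.println(bob.isFriendsWith(john));  // false: friendship is one-way here
    }
}
```

Note that contains relies on the elements’ equals implementation (identity by default), so if User overrides equals, the refactored code keeps exactly the behaviour of the original friend.equals(loggedUser) loop.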
public class User {

    ...

    private List<User> friends = new ArrayList<User>();

    public void addFriend(User user) {
        friends.add(user);
    }

    public boolean isFriendsWith(User friend) {
        return friends.contains(friend);
    }

    ...
}

After a few refactoring steps, here is the new code in the TripService:

public List<Trip> getTripsByUser(User user) throws UserNotLoggedInException {
    User loggedUser = loggedUser();
    validate(loggedUser);
    return (user.isFriendsWith(loggedUser))
               ? findTripsByUser(user)
               : new ArrayList<Trip>();
}

Right. This is already much better, but it is still not good enough.

Layers and dependencies

Some of you may still be annoyed by the protected methods we created in part one in order to isolate dependencies and test the class. Changes like that are meant to be temporary; they are done so we can unit test the whole method. Once we have tests covering the method, we can start doing our refactoring and thinking about the dependencies we could inject. Many times we would think that we should just inject the dependency into the class. That sounds obvious: TripService should receive an instance of UserSession. Really? TripService is a service; it dwells in the service layer. UserSession knows about logged users and sessions. It probably talks to the MVC framework and/or HttpSession, etc. Should the TripService depend on this class (even if it were an interface instead of a Singleton)? Probably the whole check of whether the user is logged in should be done by the controller, or whatever the client class may be. In order NOT to change too much (for now), I’ll make the TripService receive the logged user as a parameter and remove the dependency on the UserSession completely. I’ll need to do some minor changes and clean-up in the tests as well.

Naming

No, unfortunately we are not done yet. What does this code do anyway? It returns trips from a friend. Looking at the name of the method and parameters, or even the class name, there is no way to know that.
The word “friend” is nowhere to be seen in the TripService’s public interface. We need to change that as well. So here is the final code:

public class TripService {

    public List<Trip> getFriendTrips(User loggedUser, User friend) throws UserNotLoggedInException {
        validate(loggedUser);
        return (friend.isFriendsWith(loggedUser))
                   ? findTripsForFriend(friend)
                   : new ArrayList<Trip>();
    }

    private void validate(User loggedUser) throws UserNotLoggedInException {
        if (loggedUser == null) throw new UserNotLoggedInException();
    }

    protected List<Trip> findTripsForFriend(User friend) {
        return TripDAO.findTripsByUser(friend);
    }
}

Better, isn’t it? We still have the issue with the other protected method and the TripDAO static call, but I’ll leave that last bit for another post on how to remove dependencies on static methods. I’ll park my refactoring for now. We can’t refactor the entire system in one day, right? We still need to deliver some features. :-)

Conclusion

This was just a toy example and may not even make sense. However, it represents many of the problems we find when working with legacy (existing) code. It’s amazing how many problems we can find in such a tiny piece of code. Now imagine all those classes and methods with hundreds, if not thousands, of lines. We need to keep refactoring our code mercilessly so that we never get to a position where we don’t understand it any more and the whole business starts slowing down because we cannot adjust the software quickly enough. Refactoring is not just about extracting methods or making a few tweaks to the logic. We need to think about the dependencies, the responsibilities that each class and method should have, the architectural layers, the design of our application, and also the names we give to every class, method, parameter and variable. We should try to have the business domain expressed in the code. We should treat our code base as if it were a big garden.
If we want it to be pleasant and maintainable, we need to be constantly looking after it. If you want to give this code a go or find more details about the implementation, check: https://github.com/sandromancuso/testing_legacy_code

Reference: Testing legacy: Hard-wired dependencies (part 1 and 2) from our JCG partner Sandro Mancuso at the Crafted Software blog.

Hacking Maven

We don’t use M2Eclipse to integrate Maven and Eclipse, partly because we had some bad experiences a couple of years ago when we were still using Eclipse 3.4, and partly because some of our developers use IBM’s RAD to develop, and it isn’t (or at least wasn’t) compatible. Our process is to update our local sources from SVN and then to run Maven’s eclipse:eclipse locally with the relevant workspace setting, so that our .project and .classpath files are generated based on the dependencies and other information in the POMs. It works well enough, but the plan is indeed to move to M2Eclipse at some stage soon.

We have parent POMs for each sub-system (each of which is made of several Eclipse projects). Some of us pull multiple sub-systems into our workspaces, which is useful when you refactor interfaces between sub-systems, because Eclipse can refactor the code which calls the interfaces at the same time as you refactor the interfaces, saving lots of work. Maven generates .classpath files for Eclipse which reference everything in the workspace as a source project rather than as a JAR out of the local Maven repo. That is important because if Maven created JAR references rather than project references, Eclipse’s refactoring wouldn’t adjust the code calling the refactored code.

Ten days ago we switched from a build process based on Continuum and Archiva to Jenkins and Nexus. All of a sudden we lost some of the source references, which were replaced with JAR references. If a project was referred to in a parent POM then it was referenced as a project in Eclipse, but if it was in a sub-system built using a second parent POM, then Eclipse referred to the project as a JAR from the local Maven repo. That was bad news for our refactoring!
Here is an example of what used to happen:

subsystem-a-project-1
subsystem-a-project-2 references subsystem-a-project-1 as a project
subsystem-b-project-1 references subsystem-a-project-1 as a project

Here is what was happening once we used Jenkins and Nexus:

subsystem-a-project-1
subsystem-a-project-2 references subsystem-a-project-1 as a project
subsystem-b-project-1 references subsystem-a-project-1 as a JAR!!!

At the same time as moving to Jenkins and Nexus, we upgraded a few libraries and our organisational POM. We also moved our parent POMs from the root directory of the sub-system into a project for the parent (in preparation for using M2Eclipse, which prefers having the parent POM in the workspace). The question was, where do we start looking? Some people suggested it was because we’d moved the parent POM, others thought it was because we’d updated a version of a library on which eclipse:eclipse might have a dependency, and others were of the opinion we had to return to Continuum and Archiva.

One of our sharp-eyed developers noticed that we were getting the following output on the command line during the local Maven build:

Artifact subsystem-a-project-1 already available as a workspace project, but with different version...

That was the single find which led to us solving the problem quickly. I downloaded the source of eclipse:eclipse from Maven Central, under org/apache/maven/plugins/maven-eclipse-plugin. I created a dummy Java project in Eclipse. Next, I added the following environment variable to the command line from where I run Maven:

set MAVEN_OPTS=-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=28000

That tells the JRE to open a debug port and wait until a debugger is connected before running the app. Running a Java program with those args makes it output "Listening for transport dt_socket at address: 28000". In Eclipse I went to the debug run configurations and added a remote connection at localhost on port 28000.
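The `set` command above is the Windows form; on a Unix-like shell the equivalent setup might look like this sketch (running `mvn eclipse:eclipse` afterwards would then block until the debugger attaches):

```shell
# Export the JDWP options so the Maven JVM opens a debug port on 28000
# and suspends until a debugger attaches (suspend=y).
export MAVEN_OPTS="-Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=28000"

# Sanity-check the options are in place before launching Maven.
echo "$MAVEN_OPTS" | grep -o 'address=28000'
```

With `suspend=n` instead, the JVM would not wait for the debugger, which is handier when you only want to attach at a later point in the build.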
It needs a project, so I added the dummy project I’d just created. I connected, and Maven continued to run now that I was attached. The next step was to add the Maven plugin source code which I’d downloaded from Maven Central. By right-clicking on the tree in the debug view of the debug perspective, it is possible to add/update source attachments, and I added the sources JAR that I’d downloaded. The last bit was to find a suitable breakpoint. I extracted the sources from the downloaded ZIP and searched the files for the text which Maven was outputting and which hinted at the problem (“already available as a workspace project”). EclipsePlugin was the suspect class!

I added the eclipse:eclipse JAR to my dummy project so that I could open the class in Eclipse using Ctrl+Shift+T. Eclipse opened the class, but hadn’t worked out that the source from the source attachment belonged to that class. There is a button in the editor to attach the source by locating the JAR downloaded from Maven Central; Eclipse then showed the source code and I was able to add a breakpoint. By this time Maven had finished, so I restarted it, and this time I hit the breakpoint and the problem became quite clear. For some reason, rather than the version being a simple SNAPSHOT version, it contained a timestamp. The eclipse:eclipse plugin was of the opinion that I didn’t have the correct code in the workspace, and so rather than create a project reference, it created a JAR reference based on the JAR in the local Maven repo.

The internet is full of information about how Maven 3 now uses unique timestamps for each build artefact deployed to the repo. There were some references saying you could disable it for Maven 2 (which we still use), but when you move to Maven 3 you can’t disable it.
I thought about submitting a bug report to Codehaus (who supply the eclipse:eclipse plugin), but it occurred to me that we were still referencing version 2.8 in our organisational POM, and I’d spotted a 2.9 version when I was at Maven Central. So I updated the organisational POM to use version 2.9 of eclipse:eclipse and gave that a shot.

Hooray! At 23:17 I’d fixed our problem. Shame I had a 06:30 start the next day for a meeting :-/

This isn’t the first time that we’ve made the mistake of doing too much in a migration. We could have stuck with Continuum and Archiva when we updated our org-POM, then migrated step by step to Jenkins (still using Archiva), and only then moved over to Nexus. Had we done that, we might not have had the problems we did; but the effort of a slow migration might also have been larger.

For me, the point to take away is that it is easier to debug Maven and go straight to the source of the problem than it is to attempt to fix the problem by trial and error, as we were doing before I debugged Maven. Debugging Maven might sound advanced or complicated, but it’s damn fun – it’s real hacking, and the feeling of solving a puzzle in this way makes the effort worth it. I fully recommend it, if for no other reason than that you get exposed to reading other people’s (open source) code.

Reference: Hacking Maven from our JCG partner Ant Kutschera at The Kitchen in the Zoo blog.
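The version bump in the organisational POM would look roughly like the following sketch. The plugin coordinates are the real ones for eclipse:eclipse; the surrounding POM structure is illustrative:

```xml
<build>
  <pluginManagement>
    <plugins>
      <!-- Pin eclipse:eclipse to 2.9, which copes with Maven 3's
           timestamped SNAPSHOT versions when matching workspace projects. -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-eclipse-plugin</artifactId>
        <version>2.9</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>
```

Declaring it under pluginManagement in the parent means every child module picks up 2.9 the next time eclipse:eclipse runs, without each POM having to repeat the version.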

ActiveMQ Network Connectors

This post is more for me and any ActiveMQ contributors who may be interested in how network connectors work in ActiveMQ. I recently spent some time looking at the code and thought it would be good to draw up some quick diagrams to help me remember what I learned and to identify where to debug in the future if there are issues I am researching. If I make a mistake and you’d like to add clarification, please do so in the comments.

First, you set up your network connectors by configuring them in the ActiveMQ configuration file. This configuration gets mapped to the corresponding ActiveMQ beans using the xbean library, for which I have a separate blog post that explains exactly how this is done. To specify network connectors, you add the <networkConnectors/> element to your configuration file and add a <networkConnector/>, <multicastNetworkConnector/>, or <ldapNetworkConnector/>. These three different types of network connectors can be used to establish a network of brokers, with <networkConnector/> being the most common. Here’s how the three map to Java classes:

<networkConnector/> maps to org.apache.activemq.network.DiscoveryNetworkConnector
<multicastNetworkConnector/> maps to org.apache.activemq.network.MulticastNetworkConnector
<ldapNetworkConnector/> maps to org.apache.activemq.network.LdapNetworkConnector

Each of those inherits from the org.apache.activemq.network.NetworkConnector super type, as depicted in this diagram. So when you have a configuration like this:

<networkConnector uri="static://(tcp://localhost:61626,tcp://localhost:61627)" />

a new DiscoveryNetworkConnector will be configured, instantiated, and added as a connector to the BrokerService (the main class where a lot of the ActiveMQ broker details are handled). While the DiscoveryNetworkConnector is being assembled from the configuration, the URI that you specify is used to create a DiscoveryAgent.
The discovery agent is in charge of assembling the connection and handling failover events, which are packaged as DiscoveryEvents. Which DiscoveryAgent is picked depends on the DiscoveryAgentFactory and the URI specified. In the case of “static”, the SimpleDiscoveryAgent is used. Each URI in the list of possible URIs is treated separately and is assigned its own Transport (more on this in a sec). That means that for each URI you list, a new socket will be established and the broker will attempt to establish a network connector over each of the sockets. You may be wondering how best to implement failover, then. In the case described above, you will have multiple connections, and if one of those connections is to a slave that isn’t listening, you will see that the connection fails and the discovery agent tries to establish the connection again. This could go on indefinitely, which consumes resources. Another approach is to use just one URI for the static discovery agent that utilises the failover() logic:

<networkConnector uri="static:failover:(tcp://localhost:61626,tcp://localhost:61627)" />

In this case, only one transport will be created, and the failover logic will wrap it and know about both URIs. If one is not available, it won’t keep retrying needlessly. Instead it will connect to whichever one it can, and only reconnect to the failover URL if the current connection goes down. Note that this approach had a bug in it before ActiveMQ version 5.5.1-fuse-00-06.

The discovery agent is in charge of creating the bridge, but it delegates that responsibility to a DiscoveryListener. In the example above, the DiscoveryListener interface is implemented by DiscoveryNetworkConnector, in its onServiceAdd() method. To establish the bridge, a transport is opened up for both the local broker (using VM) and the remote broker (using the specified protocol, in this case TCP).
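Pulled into a full broker configuration, the single-transport failover variant might look like the following sketch; the broker name, connector name, and ports are illustrative, not from the original post:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
  <networkConnectors>
    <!-- One transport wrapped in failover logic: it connects to whichever
         URI is reachable and only reconnects when that connection drops,
         instead of holding a socket open per URI. -->
    <networkConnector name="bridge-to-b"
                      uri="static:failover:(tcp://localhost:61626,tcp://localhost:61627)" />
  </networkConnectors>
</broker>
```

Giving the connector an explicit name also makes it easier to pick out in the broker logs and in JMX when debugging bridge establishment.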
Once the local and remote transports are created, the bridge can be assembled in the DiscoveryNetworkConnector.createBridge(...) method. This method uses the Factory pattern again to find which bridge to use. The possible bridge implementations are shown below.

By default, with conduitSubscriptions=true, the DurableConduitBridge is used. Conduit subscriptions establish a single flow of messages to a remote broker to reduce the duplicates that can happen when remote topics have multiple consumers. This works great by default, but if you want to load-balance your messages across all consumers, then you will want to set conduit subscriptions to false (see the documentation for conduit subscriptions in FuseSource’s documentation on Fuse Message Broker). When set to false, the DemandForwardingBridge is used. Once the bridge is assembled, it is configured in the NetworkConnector.configureBridge(...) method.

Once everything is assembled and configured on the bridge, it is started. Once started, it begins sending broker Command objects to the remote broker to identify itself, set up a session, and demand consumer info from it. This happens in the DemandForwardingBridgeSupport.startRemoteBridge() super class method, as seen in the diagram. If you’re debugging errors with the network connectors, hopefully this helps identify possible locations where errors can take place.

Reference: From inside the code: ActiveMQ Network Connectors from our JCG partner Christian Posta at the Christian Posta Software blog.

Connect Glassfish 3 to external ActiveMQ 5 broker

Introduction

Here at ONVZ we’re using Glassfish 3 as our development and production application server, and we’re quite happy with its performance and stability, as well as with the large community surrounding it. I rarely run into a problem that does not have a matching solution on Stack Overflow or java.net. As part of our open source strategy we also run a customized ActiveMQ cluster called “ONVZ Message Bus”. To enable Message Driven Beans and other EJBs to consume and produce messages to and from the ActiveMQ message brokers, ignoring the internal OpenMQ broker that ships with Glassfish, an ActiveMQ resource adapter has to be installed. Luckily for me, Sven Hafner wrote a blog post about running an embedded ActiveMQ 5 broker in Glassfish 3, and I was able to distill the information I needed to connect to an external broker instead. This blog post describes what I did to get it to work.

Install the ActiveMQ Resource Adapter

Before you start Glassfish, copy the following libraries from an ActiveMQ installation directory (or elsewhere) to Glassfish:

Copy “slf4j-api-1.5.11.jar” from the ActiveMQ “lib” directory to the Glassfish “lib” directory.
Copy “slf4j-log4j12-1.5.11.jar” and “log4j-1.2.14.jar” from the ActiveMQ “lib/optional” directory to the Glassfish “lib” directory. Note: instead of these two you can also download “slf4j-jdk14-1.5.11.jar” from the Maven repo to the Glassfish “lib” directory.

Then download the resource adapter (activemq-rar-5.5.1.rar) from the following location and deploy it in Glassfish:

In the Glassfish Admin Console, go to “Applications”, and click on “Deploy”.
Click “Choose file” and select the rar file you just downloaded.
Notice how the page recognises the selected rar file and automatically selects the correct Type and Application Name, then click “OK”.

Create the Resource Adapter Config

In the Glassfish Admin Console, go to “Resources”, and click on “Resource Adapter Configs”.
Click “New”, select the ActiveMQ resource adapter we just deployed, and select a thread pool (“thread-pool-1” for instance).
Set the properties “ServerUrl”, “UserName” and “Password”, leave the rest untouched, and click “OK”.

Create the Connector Connection Pool

In the Glassfish Admin Console, go to “Resources”, “Connectors”, “Connector Connection Pools”.
Click “New”, fill in a pool name like “jms/connectionFactory” and select the ActiveMQ resource adapter. The Connection Definition defaults to “javax.jms.ConnectionFactory”, which is correct, so click “Next”.
Enable the “Ping” checkbox and click “Finish”.

Create the Admin Object Resource

In the Glassfish Admin Console, go to “Resources”, “Connectors”, “Admin Object Resources”.
Click “New”, set a JNDI name such as “jms/queue/incoming” and select the ActiveMQ resource adapter.
Again, the other fields don’t need to be changed, so click “OK”.

We now have everything in place (in JNDI, actually) to start processing messages using a standard Java EE Message Driven Bean. The “Connector Connection Pool” you just created has resulted in a ConnectionFactory being registered in JNDI, and the “Admin Object Resource” resulted in a JMS Destination. You can find these objects in the admin console when you go to “Resources”, “JMS Resources”. In the Glassfish version I’m using (3.1.1) the admin console has a bug which results in the connection factory and destinations being visible only in the menu, and not on the right side of the page.
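The same provisioning can be scripted with the asadmin CLI instead of clicked through the console. The sketch below mirrors the console steps above; treat the exact property names and escaping as assumptions to verify against your adapter and Glassfish version (broker URL and credentials are placeholders):

```shell
# Deploy the resource adapter
asadmin deploy --name activemq-rar-5.5.1 activemq-rar-5.5.1.rar

# Resource adapter config pointing at the external broker
# (colons inside property values must be escaped for asadmin)
asadmin create-resource-adapter-config \
  --threadpoolid thread-pool-1 \
  --property ServerUrl=tcp\\://activemq-host\\:61616:UserName=admin:Password=admin \
  activemq-rar-5.5.1

# Connection pool, plus the connection factory resource it backs
asadmin create-connector-connection-pool \
  --raname activemq-rar-5.5.1 \
  --connectiondefinition javax.jms.ConnectionFactory \
  --ping true \
  jms/connectionFactoryPool
asadmin create-connector-resource --poolname jms/connectionFactoryPool jms/connectionFactory

# The queue as an admin object resource
asadmin create-admin-object \
  --raname activemq-rar-5.5.1 \
  --restype javax.jms.Queue \
  jms/queue/incoming
```

Scripting this is mainly useful for rebuilding development domains repeatably; the console route above achieves the same end state.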
Create and deploy a Message Driven Bean

Create a new Java Enterprise project in your favourite IDE, and create a Message Driven Bean with the following contents:

package com.example.activemq.glassfish;

import javax.ejb.*;
import javax.jms.*;

@MessageDriven(
    activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/queue/incoming")
    }
)
public class ExampleMessageBean implements MessageListener {

    public void onMessage(Message message) {
        try {
            System.out.println("We've received a message: " + message.getJMSMessageID());
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}

Glassfish will hook your bean up to the configured queue, but it will try to do so with the default ConnectionFactory, which connects to the embedded OpenMQ broker. This is not what we want, so we’ll instruct Glassfish which ConnectionFactory to use. Add a file called glassfish-ejb-jar.xml to the META-INF folder, and insert the following contents:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE glassfish-ejb-jar PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 EJB 3.1//EN" "http://glassfish.org/dtds/glassfish-ejb-jar_3_1-1.dtd">
<glassfish-ejb-jar>
  <enterprise-beans>
    <ejb>
      <ejb-name>ExampleMessageBean</ejb-name>
      <mdb-connection-factory>
        <jndi-name>jms/connectionFactory</jndi-name>
      </mdb-connection-factory>
      <mdb-resource-adapter>
        <resource-adapter-mid>activemq-rar-5.5.1</resource-adapter-mid>
      </mdb-resource-adapter>
    </ejb>
  </enterprise-beans>
</glassfish-ejb-jar>

Deploy the MDB to Glassfish. Glassfish now uses the ActiveMQ ConnectionFactory and all is well. Use the ActiveMQ web console to send a message to the queue called “jms/queue/incoming”, or use some other tool to send a message. Glassfish catches all the sysout statements and prints them in its default log file.
Reference: How to connect Glassfish 3 to an external ActiveMQ 5 broker from our JCG partner Geert Schuring at the Geert Schuring blog.

Test-driving Builders with Mockito and Hamcrest

A lot of people have asked me in the past whether I test getters and setters (properties, attributes, etc.). They also ask whether I test my builders. The answer, in my case, is: it depends. When working with legacy code, I wouldn’t bother to test data structures, that is, objects with just getters and setters, maps, lists, etc. One of the reasons is that I never mock them; I use them as they are when testing the classes that use them. For builders, when they are used just by test classes, I also don’t unit test them, since they are used as “helpers” in many other tests; if they have a bug, the tests will fail. In summary, if these data structures and builders already exist, I wouldn’t bother retrofitting tests for them.

But now let’s talk about doing TDD, and assume you need a new object with getters and setters. In this case, yes, I would write tests for the getters and setters, since I need to justify their existence by writing my tests first. In order to have a richer domain model, I normally tend to keep business logic associated with the data. Let’s look at the following example. In real life, I would be writing one test at a time, making it pass, and refactoring. For this post, I’ll just give you the full classes for clarity’s sake.
First let’s write the tests:

package org.craftedsw.testingbuilders;

import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;
import static org.mockito.Matchers.anyString;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class TradeTest {

    private static final String INBOUND_XML_MESSAGE = "<message >";
    private static final boolean REPORTABILITY_RESULT = true;

    private Trade trade;
    @Mock private ReportabilityDecision reportabilityDecision;

    @Before
    public void initialise() {
        trade = new Trade();
        when(reportabilityDecision.isReportable(anyString()))
            .thenReturn(REPORTABILITY_RESULT);
    }

    @Test
    public void should_contain_the_inbound_xml_message() {
        trade.setInboundMessage(INBOUND_XML_MESSAGE);

        assertThat(trade.getInboundMessage(), is(INBOUND_XML_MESSAGE));
    }

    @Test
    public void should_tell_if_it_is_reportable() {
        trade.setInboundMessage(INBOUND_XML_MESSAGE);
        trade.setReportabilityDecision(reportabilityDecision);

        boolean reportable = trade.isReportable();

        verify(reportabilityDecision).isReportable(INBOUND_XML_MESSAGE);
        assertThat(reportable, is(REPORTABILITY_RESULT));
    }
}

Now the implementation:

package org.craftedsw.testingbuilders;

public class Trade {

    private String inboundMessage;
    private ReportabilityDecision reportabilityDecision;

    public String getInboundMessage() {
        return this.inboundMessage;
    }

    public void setInboundMessage(String inboundXmlMessage) {
        this.inboundMessage = inboundXmlMessage;
    }

    public boolean isReportable() {
        return reportabilityDecision.isReportable(inboundMessage);
    }

    public void setReportabilityDecision(ReportabilityDecision reportabilityDecision) {
        this.reportabilityDecision = reportabilityDecision;
    }
}

This case is interesting, since the Trade object has one property called inboundMessage with
respective getters and setters and also uses a collaborator (reportabilityDecision, injected via setter) in its isReportable business method. A common approach that I’ve seen many times to “test” the setReportabilityDecision method is to introduce a getReportabilityDecision method returning the reportabilityDecision (collaborator) object. This is definitely the wrong approach. Our objective should be to test how the collaborator is used, that is, whether it is invoked with the right parameters and whether whatever it returns (if it returns anything) is used. Introducing a getter in this case does not make sense, since it does not guarantee that the object, after having had the collaborator injected via setter, is interacting with the collaborator as we intended. As an aside, when we write tests that are about how collaborators are going to be used, defining their interfaces, we are using TDD as a design tool and not simply as a testing tool. I’ll cover that in a future blog post. OK, now imagine that this Trade object can be created in different ways, that is, with different reportability decisions. We would also like to make our code more readable, so we decide to write a builder for the Trade object. Let’s also assume, in this case, that we want the builder to be used in both production and test code. So we want to test-drive our builder. Here is an example that I normally find when developers are test-driving a builder implementation.
package org.craftedsw.testingbuilders;

import static org.craftedsw.testingbuilders.TradeBuilder.aTrade;
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class TradeBuilderTest {

    private static final String TRADE_XML_MESSAGE = "<message />";

    @Mock
    private ReportabilityDecision reportabilityDecision;

    @Test
    public void should_create_a_trade_with_inbound_message() {
        Trade trade = aTrade()
                          .withInboundMessage(TRADE_XML_MESSAGE)
                          .build();

        assertThat(trade.getInboundMessage(), is(TRADE_XML_MESSAGE));
    }

    @Test
    public void should_create_a_trade_with_a_reportability_decision() {
        Trade trade = aTrade()
                          .withInboundMessage(TRADE_XML_MESSAGE)
                          .withReportabilityDecision(reportabilityDecision)
                          .build();

        trade.isReportable();

        verify(reportabilityDecision).isReportable(TRADE_XML_MESSAGE);
    }
}

Now let’s have a look at these tests. The good news is that the tests were written the way developers want to read them. That also means they were “designing” the TradeBuilder public interface (its public methods). The bad news is how they are testing it. If you look closer, the tests for the builder are almost identical to the tests in the TradeTest class. You may say that is OK, since the builder is creating the object and the tests should be similar. The only difference is that in TradeTest we instantiate the object by hand and in TradeBuilderTest we use the builder to instantiate it, but the assertions should be the same, right? For me, firstly, we have duplication. Secondly, the TradeBuilderTest doesn’t show its real intent.
After many refactorings and exploring different ideas, while pair-programming with one of the guys in my team we came up with this approach:

package org.craftedsw.testingbuilders;

import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Spy;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class TradeBuilderTest {

    private static final String TRADE_XML_MESSAGE = "<message />";

    @Mock
    private ReportabilityDecision reportabilityDecision;

    @Mock
    private Trade trade;

    @Spy
    @InjectMocks
    TradeBuilder tradeBuilder;

    @Test
    public void should_create_a_trade_with_all_specified_attributes() {
        given(tradeBuilder.createTrade()).willReturn(trade);

        tradeBuilder
            .withInboundMessage(TRADE_XML_MESSAGE)
            .withReportabilityDecision(reportabilityDecision)
            .build();

        verify(trade).setInboundMessage(TRADE_XML_MESSAGE);
        verify(trade).setReportabilityDecision(reportabilityDecision);
    }
}

So now the TradeBuilderTest expresses what is expected from the TradeBuilder, that is, the side effect when the build method is called. We want it to create a Trade and set its attributes. There is no duplication with the TradeTest. It is left to the TradeTest to guarantee the correct behaviour of the Trade object.
For completeness’ sake, here is the final TradeBuilder class:

package org.craftedsw.testingbuilders;

public class TradeBuilder {

    private String inboundMessage;
    private ReportabilityDecision reportabilityDecision;

    public static TradeBuilder aTrade() {
        return new TradeBuilder();
    }

    public TradeBuilder withInboundMessage(String inboundMessage) {
        this.inboundMessage = inboundMessage;
        return this;
    }

    public TradeBuilder withReportabilityDecision(ReportabilityDecision reportabilityDecision) {
        this.reportabilityDecision = reportabilityDecision;
        return this;
    }

    public Trade build() {
        Trade trade = createTrade();
        trade.setInboundMessage(inboundMessage);
        trade.setReportabilityDecision(reportabilityDecision);
        return trade;
    }

    Trade createTrade() {
        return new Trade();
    }
}

The combination of Mockito and Hamcrest is extremely powerful, allowing us to write better and more readable tests. Reference: Test-driving Builders with Mockito and Hamcrest from our JCG partner Sandro Mancuso at the Crafted Software blog....

A First Look at MVVM in ZK 6

MVVM vs. MVC

In a previous post we’ve seen how the Ajax framework ZK adopts a CSS-selector-inspired Controller for wiring UI components in the View and listening to their events. Under this ZK MVC pattern, the UI components in the View need not be bound to any Controller methods or data objects. The flexibility of using selector patterns as a means to map View states and events to the Controller makes code more adaptive to change. MVVM approaches separation of concerns in the reverse direction. Under this pattern, a View-Model and a binder mechanism take the place of the Controller. The binder maps requests from the View to action logic in the View-Model and updates any value (data) on both sides, allowing the View-Model to be independent of any particular View.

Anatomy of MVVM in ZK 6

Below is a schematic diagram of ZK 6’s MVVM pattern. Here are some additional points that are not conveyed in the diagram:

BindComposer:
- implements ZK’s standard controller interfaces (Composer & ComposerExt)
- the default implementation is sufficient; no modifications necessary

View:
- informs the binder which method to call and what properties to update on the View-Model

View-Model:
- just a POJO
- communication with the binder is carried out via Java annotations

MVVM in Action

Consider the task of displaying a simplified inventory without knowledge of the exact UI markup. An inventory is a collection of items, so we have the object representation of such:

public class Item {

    private String ID;
    private String name;
    private int quantity;
    private BigDecimal unitPrice;

    // getters & setters
}

It also makes sense to expect that an item on the list can be selected and operated on. Thus, based on our knowledge and assumptions so far, we can go ahead and implement the View-Model.
public class InventoryVM {

    ListModelList<Item> inventory;
    Item selectedItem;

    public ListModelList<Item> getInventory() {
        inventory = new ListModelList<Item>(InventoryDAO.getInventory());
        return inventory;
    }

    public Item getSelectedItem() {
        return selectedItem;
    }

    public void setSelectedItem(Item selectedItem) {
        this.selectedItem = selectedItem;
    }
}

Here we have a typical POJO for the View-Model implementation: data with its getters and setters.

View Implementation, ‘Take One’

Now suppose we later learn that the requirement for the View is just a simple tabular display. A possible mark-up to achieve such a UI is:

<window title="Inventory" border="normal" apply="org.zkoss.bind.BindComposer"
        viewModel="@id('vm') @init('lab.zkoss.mvvm.ctrl.InventoryVM')">
    <listbox model="@load(vm.inventory)" width="600px">
        <auxhead><auxheader label="Inventory Summary" colspan="5" align="center"/></auxhead>
        <listhead>
            <listheader width="15%" label="Item ID" sort="auto(ID)"/>
            <listheader width="20%" label="Name" sort="auto(name)"/>
            <listheader width="20%" label="Quantity" sort="auto(quantity)"/>
            <listheader width="20%" label="Unit Price" sort="auto(unitPrice)"/>
            <listheader width="25%" label="Net Value"/>
        </listhead>
        <template name="model" var="item">
            <listitem>
                <listcell><label value="@load(item.ID)"/></listcell>
                <listcell><label value="@load(item.name)"/></listcell>
                <listcell><label value="@load(item.quantity)"/></listcell>
                <listcell><label value="@load(item.unitPrice)"/></listcell>
                <listcell><label value="@load(item.unitPrice * item.quantity)"/></listcell>
            </listitem>
        </template>
    </listbox>
</window>

Let’s elaborate a bit on the mark-up. At line 1, we apply the default BindComposer to the Window component, which makes all child components of the Window subject to the BindComposer’s effect. On the following line, we instruct the BindComposer which View-Model class to instantiate, and we give the View-Model instance an ID so we can make reference to it.
Since we’re loading a collection of data into the Listbox, at line 3 we assign the property ‘inventory’ of our View-Model instance, which is a collection of Item objects, to the Listbox’s ‘model’ attribute. At line 12, we then make use of the model in our Template component. Template iterates its enclosed components according to the model it receives. In this case, we have five Listcells, which make up a row in the Listbox. In each Listcell, we load the properties of each object and display them in Labels. Via ZK’s binding system, we are able to access data in our View-Model instance and load it into the View using annotations.

View Implementation, ‘Take Two’

Suppose that later in development it’s agreed that the tabular display takes too much space in our presentation, and we’re now asked to show the details of an item only when the item is selected in a Combobox. Though both the presentation and the behaviour (detail is shown only upon the user’s selection) differ from our previous implementation, the View-Model class need not be heavily modified. Since an item’s detail will be rendered only when it is selected in the Combobox, it’s obvious that we’d need to handle the ‘onSelect’ event, so let’s add a new method, doSelect:

public class InventoryVM {

    ListModelList<Item> inventory;
    Item selectedItem;

    @NotifyChange("selectedItem")
    @Command
    public void doSelect() {
    }

    // getters & setters
}

A method annotated with @Command becomes eligible to be called from our mark-up by its name, in our case:

<combobox onSelect="@command('doSelect')">

The annotation @NotifyChange("selectedItem") allows the property selectedItem to be updated automatically whenever the user selects a new Item from the Combobox. For our purposes, no additional implementation is needed in the doSelect method.
With this bit of change done, we can now see how this slightly modified View-Model works with our new mark-up:

<window title="Inventory" border="normal" apply="org.zkoss.bind.BindComposer"
        viewModel="@id('vm') @init('lab.zkoss.mvvm.ctrl.InventoryVM')" width="600px">
    ...
    <combobox model="@load(vm.inventory)"
              selectedItem="@bind(vm.selectedItem)"
              onSelect="@command('doSelect')">
        <template name="model" var="item">
            <comboitem label="@load(item.ID)"/>
        </template>
        <comboitem label="Test"/>
    </combobox>
    <listbox visible="@load(not empty vm.selectedItem)" width="240px">
        <listhead>
            <listheader/>
            <listheader/>
        </listhead>
        <listitem>
            <listcell><label value="Item Name: "/></listcell>
            <listcell><label value="@load(vm.selectedItem.name)"/></listcell>
        </listitem>
        <listitem>
            <listcell><label value="Unit Price: "/></listcell>
            <listcell><label value="@load(vm.selectedItem.unitPrice)"/></listcell>
        </listitem>
        <listitem>
            <listcell><label value="Units in Stock: "/></listcell>
            <listcell><label value="@load(vm.selectedItem.quantity)"/></listcell>
        </listitem>
        <listitem>
            <listcell><label value="Net Value: "/></listcell>
            <listcell><label value="@load(vm.selectedItem.unitPrice * vm.selectedItem.quantity)"/></listcell>
        </listitem>
    </listbox>
    ...
</window>

At line 4, we load the data collection inventory into the Combobox’s model attribute, so it can iteratively display the ID of each Item object in the data model using the Template component declared on line 7. At line 5, the selectedItem attribute points to the most recently selected Item in that list of Item objects. At line 6, we map the onSelect event to the View-Model’s doSelect method. At line 12, we make the Listbox containing an Item’s detail visible only if the selectedItem property in the View-Model is not empty (selectedItem remains empty until an item is selected in the Combobox).
The selectedItem’s properties are then loaded to fill out the Listbox.

Recap

Under the MVVM pattern, our View-Model class exposes its data and methods to the binder; no reference is made to any particular View component. The View implementations access data and invoke event handlers via the binder. In this post, we’ve only been exposed to the fundamental workings of ZK’s MVVM mechanism. The binder is obviously not restricted to just loading data from the View-Model. In addition to saving data from the View to the View-Model, we can also inject data converters and validators into the mix of View-to-View-Model communication. The MVVM pattern may also work in conjunction with the MVC model; that is, we can still wire components and listen to fired events via the MVC Selector mechanism if we wish to do so. We’ll dig into some of these topics at a later time. Reference: A First Look at MVVM in ZK 6 from our JCG partner Lance Lu at the Tech Dojo blog....

JMX: Some Introductory Notes

JMX (Java Management Extensions) is a J2SE technology which enables management and monitoring of Java applications. The basic idea is to implement a set of management objects and register the implementations with a platform server, from where these implementations can be invoked either locally or remotely from the JVM using a set of connectors or adapters. A management/instrumentation object is called an MBean (which stands for Managed Bean). Once instantiated, an MBean is registered under a unique ObjectName with the platform MBeanServer. The MBeanServer acts as a repository of MBeans, enabling the creation, registration, access and removal of MBeans. However, the MBeanServer does not persist the MBean information, so with a restart of the JVM you would lose all the MBeans in it. The MBeanServer is normally accessed through its MBeanServerConnection API, which works both locally and remotely. The management interface of an MBean typically consists of [1]:

- Named and typed attributes that can be read/written
- Named and typed operations that can be invoked
- Typed notifications that can be emitted by the MBean

For example, say it is required to manage the thread pool parameters of one of your applications at runtime. With JMX it’s a matter of writing an MBean with logic related to setting and getting these parameters and registering it with the MBeanServer. Now the next step is to expose these MBeans to the outside world so that remote clients can invoke them to manage your application. This can be done via various protocols implemented by protocol connectors and protocol adapters. A protocol connector basically exposes MBeans as they are, so that the remote client sees the same interface (the JMX RMI connector is a good example). So basically the client, or the remote management application, should be enabled for JMX technology.
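As a minimal sketch of the thread-pool idea above (the class, attribute and ObjectName here are illustrative assumptions, not from any particular application), a Standard MBean and its registration could look like this:

```java
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxDemo {

    // Standard MBean contract: the management interface must be named
    // after the implementation class, with an "MBean" suffix.
    public interface ThreadPoolConfigMBean {
        int getPoolSize();
        void setPoolSize(int size);
    }

    public static class ThreadPoolConfig implements ThreadPoolConfigMBean {
        private volatile int poolSize = 10;

        public int getPoolSize() { return poolSize; }

        public void setPoolSize(int size) { this.poolSize = size; }
    }

    public static void main(String[] args) throws Exception {
        // Register the MBean under a unique ObjectName
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=ThreadPoolConfig");
        server.registerMBean(new ThreadPoolConfig(), name);

        // Read and write the attribute through the MBeanServer, as a client would
        server.setAttribute(name, new Attribute("PoolSize", 25));
        System.out.println(server.getAttribute(name, "PoolSize")); // prints 25
    }
}
```

Any JMX-enabled client attached to this JVM (jconsole, for instance) would now see the MBean and could change PoolSize at runtime.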
A protocol adapter (e.g. HTML, SNMP) adapts the results according to the protocol the client expects (e.g. for a browser-based client it sends the results as HTML over HTTP). Now that the MBeans are properly exposed to the outside, we need some clients to access them to manage our applications. There are basically two categories of clients, according to whether they use connectors or adapters. JMX clients use the JMX APIs to connect to the MBeanServer and invoke MBeans. Generally a JMX client uses an MBeanServerConnection to connect to the MBeanServer and invokes MBeans through it by providing the MBean ID (ObjectName) and the required parameters. There are basically three types of JMX clients:

Local JMX client: a client that runs in the same JVM as the MBeanServer. These clients can also use the MBeanServer API itself, since they are running inside the same JVM.

Agent: the agent is a local JMX client which manages the MBeanServer itself. Remember that the MBeanServer does not persist MBean information, so we can use an agent to provide this logic, encapsulating the MBeanServer with the additional functionality. The agent is thus responsible for initializing and managing the MBeanServer itself.

Remote JMX client: a remote client differs from a local client only in that it needs to instantiate a connector for connecting to a connector server in order to get an MBeanServerConnection. And of course it runs in a remote JVM, as the name suggests.

The next type of client is the management client, which uses protocol adapters to connect to the MBeanServer. For these to work, the respective adapter should be present and running in the JVM being managed. For example, the HTML adapter should be present in the JVM for a browser-based client to connect to it and invoke MBeans. The diagram below summarizes the concepts described so far. This concludes my quick notes on JMX. An extremely good read on the main JMX concepts can be found at [2].
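As a sketch of the remote-client flow described above (the port and service URL are illustrative assumptions), a remote JMX client obtains an MBeanServerConnection through a connector. Here we also start an RMI connector server in-process so the client has something to connect to; in real life the server side would live in the managed JVM:

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteJmxClientDemo {
    public static void main(String[] args) throws Exception {
        // "Server side": export the platform MBeanServer over the JMX RMI connector
        LocateRegistry.createRegistry(11099);
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:11099/jmxrmi");
        JMXConnectorServer connectorServer = JMXConnectorServerFactory
                .newJMXConnectorServer(url, null, ManagementFactory.getPlatformMBeanServer());
        connectorServer.start();

        // "Client side": connect and obtain an MBeanServerConnection
        JMXConnector connector = JMXConnectorFactory.connect(url);
        MBeanServerConnection connection = connector.getMBeanServerConnection();
        System.out.println("MBeans visible remotely: " + connection.getMBeanCount());

        connector.close();
        connectorServer.stop();
    }
}
```

From the MBeanServerConnection the client can call getAttribute, setAttribute and invoke exactly as a local client would.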
Also, the JMX learning trail at Oracle is a good starting point for getting to grips with JMX.

[1] http://docs.oracle.com/javase/6/docs/technotes/guides/jmx/overview/instrumentation.html#wp998816
[2] http://pub.admc.com/howtos/jmx/architecture-chapt.html

Reference: JMX: Some Introductory Notes from our JCG partner Buddhika Chamith at the Source Open blog....

Serving Files with Puppet Standalone in Vagrant

If you use Puppet in the client-server mode to configure your production environment, then you might want to be able to copy & paste from the production configuration into Vagrant’s standalone Puppet configuration to test stuff. One of the key features necessary for that is enabling file serving via “source => ‘puppet:///path/to/file’”. In the client-server mode the files are served by the server; in the standalone mode you can configure Puppet to read from a local (likely shared) folder. We will see how to do this. Credits: this post is based heavily on Akumria’s answer at StackOverflow: how to source a file in puppet manifest from module.

Enabling Puppet Standalone in Vagrant to Resolve puppet:///…

Quick overview:

1. Make the directory with the files to be served available to the Vagrant VM
2. Create fileserver.conf to inform Puppet about the directory
3. Tell Puppet about the fileserver.conf
4. Use it

1. Make the directory with the files to be served available to the Vagrant VM

For example as a shared folder:

# Snippet of <vagrant directory>/Vagrantfile
config.vm.share_folder "PuppetFiles", "/etc/puppet/files", "./puppet-files-symlink"

(In my case this is actually a symlink to the actual folder in our Puppet git repository. Beware that symlinks inside shared folders often don’t work, and thus it’s better to use the symlink as a standalone shared-folder root.) Notice you don’t need to declare a shared folder.

2. Create fileserver.conf to inform Puppet about the directory

You need to tell Puppet that the source “puppet:///files/” should be served from /etc/puppet/files/:

# <vagrant directory>/fileserver.conf
[files]
  path /etc/puppet/files
  allow *

3. Tell Puppet about the fileserver.conf

Puppet needs to know that it should read the fileserver.conf file:

# Snippet of <vagrant directory>/Vagrantfile
config.vm.provision :puppet,
    :options => ["--fileserverconfig=/vagrant/fileserver.conf"],
    :facter => { "fqdn" => "vagrant.vagrantup.com" } do |puppet|
  ...
end

4.
Use it

vagrant_dir$ echo "dummy content" > ./puppet-files-symlink/example-file.txt

# Snippet of <vagrant directory>/manifests/<my manifest>.pp
...
file { '/tmp/example-file.txt':
  ensure => file,
  source => 'puppet:///files/example-file.txt',
}
...

Caveats: URLs with a server name (puppet://puppet/…) don’t work

URLs like puppet://puppet/files/path/to/file don’t work; you must use puppet:///files/path/to/file instead (empty, i.e. implicit, server name => three slashes). The reason is, I believe, that if you state the server name explicitly then Puppet will try to find that server and get the files from there (which might be desirable behaviour if you run Puppet Master locally or elsewhere; in that case just add the server name to /etc/hosts in the Vagrant VM or make sure the DNS server used can resolve it). On the other hand, if you leave the server name out and rely on the implicit value, then Puppet in the standalone mode will consult its fileserver.conf and behave accordingly. (Notice that in the server-client mode the implicit server name equals the Puppet master, i.e. puppet:/// works perfectly well there.) If you use puppet://puppet/files/… then you’ll get an error like this:

err: /Stage[main]/My_example_class/File[fetch_cdn_logs.py]: Could not evaluate: getaddrinfo: Name or service not known
Could not retrieve file metadata for puppet://puppet/files/analytics/fetch_cdn_logs.py: getaddrinfo: Name or service not known at /tmp/vagrant-puppet/manifests/analytics_dev.pp:283

Environment: Puppet 2.7.14, Vagrant 1.0.2

Reference: Serving Files with Puppet Standalone in Vagrant from the puppet:// URIs from our JCG partner Jakub Holy at The Holy Java blog....

An agile methodology for orthodox environments

My company designs and develops mobile and web-based banking solutions. Our customers (banks for the most part) are highly bureaucratized, orthodox (i.e. they like to have everything pre-defined and pre-approved) and risk-averse, and therefore change and the disruption of the status quo are not a normal sight within most of them. Most banking IT departments are used to the good old waterfall development cycle (believe it or not). Additionally, when they purchase a tailor-made system (or a highly customizable product-based deployment) they prefer to know in advance exactly what the system will do, how it will do it, and how long it will take to deploy (even if they don’t know what they want themselves). I believe this happens a lot in provider/customer relationships, and not only in the financial sector. But during real-life software development projects at banks, as happens on almost all software projects:

- Changes are inevitable
- Users don’t realize what they want until they see the system working
- Developers don’t understand what the user needs until they see the user’s face looking at the actual system

So an agile methodology seems to be in order, right?
But how to couple both worlds? What we decided to do is take the bureaucratic items that we think are absolutely necessary for our customers to feel at ease (and to actually buy our projects), and build the most agile methodology possible with these items as axioms. These undesired but unavoidable items are:

- Pre-defined initial scope
- Formal customer approval of user stories (or requirements specifications)
- Acceptance testing with a formal approval done by personnel appointed by the customer (be it from the actual customer’s staff, or sometimes from a third party)
- Documented and pre-approved change requests

We took elements from several agile methodologies and the personal experience of our staff, with a lot of influence from Scrum, and defined the following:

Sprint zero, lasting 1 to 5 weeks:
- General look & feel design
- General HTML template development
- List of all user stories compiled and prioritized
- System architecture definition
- External systems interface design

Regular sprints, lasting 5 to 8 weeks:
- Write user stories
- HTML development of relevant pages/widgets
- Validate user stories and HTML items with the customer
- Development (up to 2 user stories per developer per sprint)
- Internal testing and rework
- Validation testing and rework (with the customer)
- Testing/pre-production deployment of the new version

Regular sprints after sprint number one should have a lower assignment load per developer than sprint one, to make room for rework/changes from previous sprints and for validation testing. The assignment of user stories to each sprint is done using the prioritized list and the availability of human and system resources from the customer. We believe both our customers and our company are benefiting from this method: requirements elicitation and validation is performed progressively and during most of the project’s duration, motivating greater involvement from the customer. The customer can see
a working system very soon (7-10 weeks after project start for the first version, and then a new version every 4-6 weeks). Including rework as a natural part of each sprint, together with the iterative nature of the method, smooths the customer/provider relationship. In our experience, using a rigid cyclic methodology implies the use of strict change requests, and those tend to increase the number of hard negotiations and damage the image of the provider in the eyes of the customer. I’ll post a follow-up with real-life experiences and results of our methodology in action. Reference: Defining an agile methodology for orthodox environments from our JCG partner Ricardo Zuasti at Ricardo Zuasti’s blog....

Four laws of robust software systems

Murphy’s Law (“If anything can go wrong, it will”) was born at Edwards Air Force Base in 1949 at North Base. It was named after Capt. Edward A. Murphy, an engineer working on Air Force Project MX981, a project designed to see how much sudden deceleration a person can stand in a crash. One day, after finding that a transducer was wired wrong, he cursed the technician responsible and said, “If there is any way to do it wrong, he’ll find it.” For that reason it may be good to put a quality assurance process in place. I could also call this blog “the four laws of steady software quality”. It’s about some fundamental techniques that can help to achieve superior quality over a longer distance. This is particularly important if you’re developing a central component that will cause serious damage if it fails in production. OK, here is my (never final and not holistic) list of practical quality assurance tips.

Law 1: facilitate change

There is nothing permanent except change. If a system isn’t designed in accordance with this supremely important reality, then the probability of failure may increase above average. A widely used technique to facilitate change is the development of a sufficient set of unit tests. Unit testing enables you to uncover regressions in existing functionality after changes have been made to a system. It also encourages you to really think about the desired functionality and required design of the component under development.

Law 2: don’t rush through the functional testing phase

In economics, the marginal utility of a good is the gain (or loss) from an increase (or decrease) in the consumption of that good. The law of diminishing marginal utility says that the marginal utility of each (homogeneous) unit decreases as the supply of units increases (and vice versa). The first functional test cases often walk through the main scenarios covering the main paths of the software under consideration. None of the code tested has been executed before.
These test cases have a very high marginal utility. Subsequent test cases may walk through the same code ranges, except for specific side paths at specific validation conditions, for instance. These test cases may cover three or four additional lines of code in your application. As a result, they will have a smaller marginal utility than the first test cases. My law about functional testing suggests: as long as the execution of the next test case yields significant utility, the following applies: the more time you invest in testing, the better the outcome! So don’t rush through a functional testing phase and miss out on a useful test case (this assumes the special case in which usefulness can be quantified). Try to find the useful test cases that promise a significant gain in perceptible quality. On the other hand, if you’re executing test cases with a negative marginal utility, you’re actually investing more effort than you gain in terms of perceptible quality. There is this special (but not uncommon) situation where the client does not run functional tests on a systematic basis. This law will then suggest: the longer the application is in the test environment, the better the outcome.

Law 3: run (non-functional) benchmark tests

Another piece of good permanent software quality is a regular load test. To make results usable, load tests need a defined, steady environment and a baseline of measured values (a benchmark). These values are at least: CPU, response time, memory footprint. Load tests of new releases can be compared to the load tests of older releases. That way we can also bypass the often-stated requirement that the load test environment needs to have the same capacity parameters as the production environment. In many cases it is possible to see the real big issues with a relatively small set of parallel users (e.g. 50 users). It makes limited sense to do load testing if single-user profiling results are bad.
Therefore it’s a good idea to perform repeatable profiling test cases with every release. This way profiling results can be compared to each other (again: the benchmark idea). We do CPU and elapsed-time profiling as well as memory profiling. Profiling is an activity that runs in parallel to actual development. It makes sense to focus on the main scenarios used regularly in production.

Law 4: avoid dependency lock-in

The difference between trouble and severe crisis is the time it takes to fix the problem that causes the trouble. For this reason you may always need a way back to your previous release; you need a fallback scenario to avoid a production crisis with severe business impact. You enable rollback by avoiding dependency lock-in. Runtime dependencies of your application on neighbouring systems may arise from joint interface or contract changes during development. If you implemented requirements that resulted in changed interfaces and contracts, then you cannot simply roll back; that’s obvious. Therefore you need to avoid too many interface and contract changes. Small release cycles help to reduce dependencies between application versions in one release because fewer changes are rolled to production. Another counteraction against dependency lock-in is to let neighbouring systems be downwards compatible for one version. That’s it in terms of robust systems. Reference: “5′ on IT-Architecture: four laws of robust software systems” from our JCG partner Niklas....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.