Connect Glassfish 3 to external ActiveMQ 5 broker

Introduction

Here at ONVZ we're using Glassfish 3 as our development and production application server, and we're quite happy with its performance and stability, as well as the large community surrounding it. I rarely run into a problem that does not have a matching solution on Stack Overflow or java.net. As part of our open source strategy we also run a customized ActiveMQ cluster called "ONVZ Message Bus". To enable Message Driven Beans and other EJBs to consume and produce messages to and from the ActiveMQ message brokers, bypassing the internal OpenMQ broker that ships with Glassfish, an ActiveMQ Resource Adapter has to be installed. Luckily for me, Sven Hafner wrote a blog post about running an embedded ActiveMQ 5 broker in Glassfish 3, and I was able to distill the information I needed to connect to an external broker instead. This blog post describes what I did to get it to work.

Install the ActiveMQ Resource Adapter

Before you start Glassfish, copy the following libraries from an ActiveMQ installation directory (or elsewhere) to Glassfish:

- Copy "slf4j-api-1.5.11.jar" from the ActiveMQ "lib" directory to the Glassfish "lib" directory.
- Copy "slf4j-log4j12-1.5.11.jar" and "log4j-1.2.14.jar" from the ActiveMQ "lib/optional" directory to the Glassfish "lib" directory. Note: instead of these two you can also download "slf4j-jdk14-1.5.11.jar" from the Maven repo to the Glassfish "lib" directory.
- Download the resource adapter (activemq-rar-5.5.1.rar).

Deploy the resource adapter in Glassfish

- In the Glassfish Admin Console, go to "Applications" and click "Deploy".
- Click "Choose file" and select the rar file you just downloaded.
Notice how the page recognizes the selected rar file and automatically selects the correct Type and Application Name; finally, click "OK".

Create the Resource Adapter Config

- In the Glassfish Admin Console, go to "Resources" and click "Resource Adapter Configs".
- Click "New", select the ActiveMQ Resource Adapter we just deployed, and select a Thread Pool ("thread-pool-1" for instance).
- Set the "ServerUrl", "UserName" and "Password" properties, leave the rest untouched, and click "OK".

Create the Connector Connection Pool

- In the Glassfish Admin Console, go to "Resources", "Connectors", "Connector Connection Pools".
- Click "New", fill in a pool name like "jms/connectionFactory" and select the ActiveMQ Resource Adapter. The Connection Definition will default to "javax.jms.ConnectionFactory", which is correct, so click "Next".
- Enable the "Ping" checkbox and click "Finish".

Create the Admin Object Resource

- In the Glassfish Admin Console, go to "Resources", "Connectors", "Admin Object Resources".
- Click "New", set a JNDI Name such as "jms/queue/incoming" and select the ActiveMQ Resource Adapter. Again, the other fields don't need to be changed, so click "OK".

We now have everything in place (in JNDI, actually) to start processing messages using a standard Java EE Message Driven Bean. The "Connector Connection Pool" you just created has resulted in a ConnectionFactory being registered in JNDI, and the "Admin Object Resource" resulted in a JMS Destination. You can find these objects in the admin console under "Resources", "JMS Resources". In the Glassfish version I'm using (3.1.1) the admin console has a bug which results in the connection factory and destinations being visible only in the menu, and not on the right side of the page.
Create and deploy a Message Driven Bean

Create a new Java Enterprise project in your favorite IDE, and create a Message Driven Bean with the following contents:

```java
package com.example.activemq.glassfish;

import javax.ejb.*;
import javax.jms.*;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "jms/queue/incoming")
})
public class ExampleMessageBean implements MessageListener {

    public void onMessage(Message message) {
        try {
            System.out.println("We've received a message: " + message.getJMSMessageID());
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}
```

Glassfish will hook up your bean to the configured queue, but it will try to do so with the default ConnectionFactory, which connects to the embedded OpenMQ broker. This is not what we want, so we'll instruct Glassfish which ConnectionFactory to use. Add a file called glassfish-ejb-jar.xml to the META-INF folder, and insert the following contents:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE glassfish-ejb-jar PUBLIC
    "-//GlassFish.org//DTD GlassFish Application Server 3.1 EJB 3.1//EN"
    "http://glassfish.org/dtds/glassfish-ejb-jar_3_1-1.dtd">
<glassfish-ejb-jar>
  <enterprise-beans>
    <ejb>
      <ejb-name>ExampleMessageBean</ejb-name>
      <mdb-connection-factory>
        <jndi-name>jms/connectionFactory</jndi-name>
      </mdb-connection-factory>
      <mdb-resource-adapter>
        <resource-adapter-mid>activemq-rar-5.5.1</resource-adapter-mid>
      </mdb-resource-adapter>
    </ejb>
  </enterprise-beans>
</glassfish-ejb-jar>
```

Deploy the MDB to Glassfish. Glassfish now uses the ActiveMQ ConnectionFactory and all is well. Use the ActiveMQ web console, or some other tool, to send a message to a queue called "jms/queue/incoming". Glassfish catches all the sysout statements and prints them in its default log file.
Reference: How to connect Glassfish 3 to an external ActiveMQ 5 broker from our JCG partner Geert Schuring at the Geert Schuring blog....

Test-driving Builders with Mockito and Hamcrest

A lot of people have asked me in the past if I test getters and setters (properties, attributes, etc.). They also asked me if I test my builders. The answer, in my case, is: it depends. When working with legacy code, I wouldn't bother to test data structures, that is, objects with just getters and setters, maps, lists, etc. One of the reasons is that I never mock them; I use them as they are when testing the classes that use them. For builders, when they are used just by test classes, I also don't unit test them, since they are used as "helpers" in many other tests; if they have a bug, those tests will fail. In summary, if these data structures and builders already exist, I wouldn't bother retrofitting tests for them. But now let's talk about doing TDD and assume you need a new object with getters and setters. In this case, yes, I would write tests for the getters and setters, since I need to justify their existence by writing my tests first. I also normally tend to associate business logic with the data, leading to a richer domain model. Let's look at the following example. In real life I would be writing one test at a time, making it pass and refactoring. For this post, I'll just give you the full classes for clarity's sake.
First let's write the tests:

```java
package org.craftedsw.testingbuilders;

import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;
import static org.mockito.Matchers.anyString;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class TradeTest {

    private static final String INBOUND_XML_MESSAGE = "<message />";
    private static final boolean REPORTABILITY_RESULT = true;

    private Trade trade;

    @Mock private ReportabilityDecision reportabilityDecision;

    @Before
    public void initialise() {
        trade = new Trade();
        when(reportabilityDecision.isReportable(anyString()))
                .thenReturn(REPORTABILITY_RESULT);
    }

    @Test
    public void should_contain_the_inbound_xml_message() {
        trade.setInboundMessage(INBOUND_XML_MESSAGE);

        assertThat(trade.getInboundMessage(), is(INBOUND_XML_MESSAGE));
    }

    @Test
    public void should_tell_if_it_is_reportable() {
        trade.setInboundMessage(INBOUND_XML_MESSAGE);
        trade.setReportabilityDecision(reportabilityDecision);

        boolean reportable = trade.isReportable();

        verify(reportabilityDecision).isReportable(INBOUND_XML_MESSAGE);
        assertThat(reportable, is(REPORTABILITY_RESULT));
    }
}
```

Now the implementation:

```java
package org.craftedsw.testingbuilders;

public class Trade {

    private String inboundMessage;
    private ReportabilityDecision reportabilityDecision;

    public String getInboundMessage() {
        return this.inboundMessage;
    }

    public void setInboundMessage(String inboundXmlMessage) {
        this.inboundMessage = inboundXmlMessage;
    }

    public boolean isReportable() {
        return reportabilityDecision.isReportable(inboundMessage);
    }

    public void setReportabilityDecision(ReportabilityDecision reportabilityDecision) {
        this.reportabilityDecision = reportabilityDecision;
    }
}
```

This case is interesting: the Trade object has one property called inboundMessage with
respective getters and setters, and also uses a collaborator (reportabilityDecision, injected via setter) in its isReportable business method. A common approach that I've seen many times to "test" the setReportabilityDecision method is to introduce a getReportabilityDecision method returning the reportabilityDecision (collaborator) object. This is definitely the wrong approach. Our objective should be to test how the collaborator is used, that is, whether it is invoked with the right parameters and whether whatever it returns (if it returns anything) is used. Introducing a getter in this case does not make sense, since it does not guarantee that the object, after having the collaborator injected via setter, is interacting with the collaborator as we intended. As an aside, when we write tests about how collaborators are going to be used, defining their interfaces, we are using TDD as a design tool and not simply as a testing tool. I'll cover that in a future blog post. OK, now imagine that this trade object can be created in different ways, that is, with different reportability decisions. We would also like to make our code more readable, so we decide to write a builder for the Trade object. Let's also assume, in this case, that we want the builder to be used in production and test code alike. In this case, we want to test-drive our builder. Here is an example that I normally find when developers are test-driving a builder implementation.
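The point about verifying how the collaborator is used, rather than exposing it through a getter, can be illustrated even without Mockito. The sketch below uses a hand-rolled "recording" fake; the RecordingDecision class and the "<message/>" literal are mine for illustration, while the Trade and ReportabilityDecision shapes mirror the article:

```java
// A minimal, self-contained sketch: a recording fake captures how the
// collaborator was invoked, which a getter on Trade could never prove.
public class CollaboratorInteractionDemo {

    interface ReportabilityDecision {
        boolean isReportable(String message);
    }

    static class Trade {
        private String inboundMessage;
        private ReportabilityDecision reportabilityDecision;

        void setInboundMessage(String inboundMessage) { this.inboundMessage = inboundMessage; }
        void setReportabilityDecision(ReportabilityDecision d) { this.reportabilityDecision = d; }
        boolean isReportable() { return reportabilityDecision.isReportable(inboundMessage); }
    }

    // Hand-rolled fake: records the argument it was called with, returns a canned result.
    static class RecordingDecision implements ReportabilityDecision {
        String receivedMessage;

        public boolean isReportable(String message) {
            receivedMessage = message;
            return true;
        }
    }

    public static void main(String[] args) {
        Trade trade = new Trade();
        RecordingDecision decision = new RecordingDecision();
        trade.setInboundMessage("<message/>");
        trade.setReportabilityDecision(decision);

        boolean reportable = trade.isReportable();

        // We assert on the *interaction*, not on a getter for the collaborator:
        if (!"<message/>".equals(decision.receivedMessage)) throw new AssertionError();
        if (!reportable) throw new AssertionError();
        System.out.println("collaborator invoked with: " + decision.receivedMessage);
    }
}
```

This is what verify(reportabilityDecision).isReportable(INBOUND_XML_MESSAGE) buys us in the Mockito version, with far less ceremony.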
```java
package org.craftedsw.testingbuilders;

import static org.craftedsw.testingbuilders.TradeBuilder.aTrade;
import static org.hamcrest.Matchers.is;
import static org.junit.Assert.assertThat;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class TradeBuilderTest {

    private static final String TRADE_XML_MESSAGE = "<message />";

    @Mock private ReportabilityDecision reportabilityDecision;

    @Test
    public void should_create_a_trade_with_inbound_message() {
        Trade trade = aTrade()
                .withInboundMessage(TRADE_XML_MESSAGE)
                .build();

        assertThat(trade.getInboundMessage(), is(TRADE_XML_MESSAGE));
    }

    @Test
    public void should_create_a_trade_with_a_reportability_decision() {
        Trade trade = aTrade()
                .withInboundMessage(TRADE_XML_MESSAGE)
                .withReportabilityDecision(reportabilityDecision)
                .build();

        trade.isReportable();

        verify(reportabilityDecision).isReportable(TRADE_XML_MESSAGE);
    }
}
```

Now let's have a look at these tests. The good news is that the tests were written the way developers want to read them. That also means they were "designing" the TradeBuilder public interface (its public methods). The bad news is how they are testing it. If you look closer, the tests for the builder are almost identical to the tests in the TradeTest class. You may say that is OK, since the builder is creating the object and the tests should be similar. The only difference is that in TradeTest we instantiate the object by hand, whereas in TradeBuilderTest we use the builder to instantiate it, but the assertions should be the same, right? For me, firstly, we have duplication. Secondly, the TradeBuilderTest doesn't show its real intent.
After many refactorings and exploring different ideas, while pair-programming with one of the guys in my team, we came up with this approach:

```java
package org.craftedsw.testingbuilders;

import static org.mockito.BDDMockito.given;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.Spy;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class TradeBuilderTest {

    private static final String TRADE_XML_MESSAGE = "<message />";

    @Mock private ReportabilityDecision reportabilityDecision;
    @Mock private Trade trade;

    @Spy @InjectMocks TradeBuilder tradeBuilder;

    @Test
    public void should_create_a_trade_with_all_specified_attributes() {
        given(tradeBuilder.createTrade()).willReturn(trade);

        tradeBuilder
                .withInboundMessage(TRADE_XML_MESSAGE)
                .withReportabilityDecision(reportabilityDecision)
                .build();

        verify(trade).setInboundMessage(TRADE_XML_MESSAGE);
        verify(trade).setReportabilityDecision(reportabilityDecision);
    }
}
```

So now the TradeBuilderTest expresses what is expected from the TradeBuilder, that is, the side effect when the build method is called: we want it to create a Trade and set its attributes. There is no duplication with TradeTest; it is left to TradeTest to guarantee the correct behavior of the Trade object.
For completion's sake, here is the final TradeBuilder class:

```java
package org.craftedsw.testingbuilders;

public class TradeBuilder {

    private String inboundMessage;
    private ReportabilityDecision reportabilityDecision;

    public static TradeBuilder aTrade() {
        return new TradeBuilder();
    }

    public TradeBuilder withInboundMessage(String inboundMessage) {
        this.inboundMessage = inboundMessage;
        return this;
    }

    public TradeBuilder withReportabilityDecision(ReportabilityDecision reportabilityDecision) {
        this.reportabilityDecision = reportabilityDecision;
        return this;
    }

    public Trade build() {
        Trade trade = createTrade();
        trade.setInboundMessage(inboundMessage);
        trade.setReportabilityDecision(reportabilityDecision);
        return trade;
    }

    Trade createTrade() {
        return new Trade();
    }
}
```

The combination of Mockito and Hamcrest is extremely powerful, allowing us to write better and more readable tests.

Reference: Test-driving Builders with Mockito and Hamcrest from our JCG partner Sandro Mancuso at the Crafted Software blog....

A First Look at MVVM in ZK 6

MVVM vs. MVC

In a previous post we've seen how the Ajax framework ZK adopts a CSS-selector-inspired Controller for wiring UI components in the View and listening to their events. Under this ZK MVC pattern, the UI components in the View need not be bound to any Controller methods or data objects. The flexibility of using selector patterns as a means to map View states and events to the Controller makes code more adaptive to change. MVVM approaches separation of concerns in the reverse direction. Under this pattern, a View-Model and a binder mechanism take the place of the Controller. The binder maps requests from the View to action logic in the View-Model and updates any value (data) on both sides, allowing the View-Model to be independent of any particular View.

Anatomy of MVVM in ZK 6

Below is a schematic diagram of ZK 6's MVVM pattern. Here are some additional points that aren't conveyed in the diagram:

- BindComposer: implements ZK's standard controller interfaces (Composer & ComposerExt); the default implementation is sufficient, no modifications necessary.
- View: informs the binder which method to call and what properties to update on the View-Model.
- View-Model: just a POJO; communication with the binder is carried out via Java annotations.

MVVM in Action

Consider the task of displaying a simplified inventory without knowledge of the exact UI markup. An inventory is a collection of items, so we have an object representation of one:

```java
public class Item {

    private String ID;
    private String name;
    private int quantity;
    private BigDecimal unitPrice;

    // getters & setters
}
```

It also makes sense to expect that an item on the list can be selected and operated on. Thus, based on our knowledge and assumptions so far, we can go ahead and implement the View-Model.
```java
public class InventoryVM {

    ListModelList<Item> inventory;
    Item selectedItem;

    public ListModelList<Item> getInventory() {
        inventory = new ListModelList<Item>(InventoryDAO.getInventory());
        return inventory;
    }

    public Item getSelectedItem() {
        return selectedItem;
    }

    public void setSelectedItem(Item selectedItem) {
        this.selectedItem = selectedItem;
    }
}
```

Here we have a typical POJO for the View-Model implementation: data with their getters and setters.

View Implementation, 'Take One'

Now suppose we later learned that the requirement for the View is just a simple tabular display. A possible markup to achieve such a UI is:

```xml
<window title="Inventory" border="normal"
        apply="org.zkoss.bind.BindComposer"
        viewModel="@id('vm') @init('lab.zkoss.mvvm.ctrl.InventoryVM')">
    <listbox model="@load(vm.inventory)" width="600px">
        <auxhead>
            <auxheader label="Inventory Summary" colspan="5" align="center"/>
        </auxhead>
        <listhead>
            <listheader width="15%" label="Item ID" sort="auto(ID)"/>
            <listheader width="20%" label="Name" sort="auto(name)"/>
            <listheader width="20%" label="Quantity" sort="auto(quantity)"/>
            <listheader width="20%" label="Unit Price" sort="auto(unitPrice)"/>
            <listheader width="25%" label="Net Value"/>
        </listhead>
        <template name="model" var="item">
            <listitem>
                <listcell><label value="@load(item.ID)"/></listcell>
                <listcell><label value="@load(item.name)"/></listcell>
                <listcell><label value="@load(item.quantity)"/></listcell>
                <listcell><label value="@load(item.unitPrice)"/></listcell>
                <listcell><label value="@load(item.unitPrice * item.quantity)"/></listcell>
            </listitem>
        </template>
    </listbox>
</window>
```

Let's elaborate a bit on the markup here. On the Window component we apply the default BindComposer, which makes all children of the Window subject to the BindComposer's effect. On the following line, we instruct the BindComposer which View-Model class to instantiate, and we give the View-Model instance an ID so we can refer to it.
Since we're loading a collection of data onto the Listbox, we assign the 'inventory' property of our View-Model instance, a collection of Item objects, to the Listbox's 'model' attribute. We then make use of that model in our Template component. Template iterates its enclosed components according to the model it receives; in this case, five Listcells make up a row in the Listbox. In each Listcell, we load a property of the current object and display it in a Label. Via ZK's binding system, we were able to access data in our View-Model instance and load it in the View using annotations.

View Implementation, 'Take Two'

Suppose later in development it's agreed that the tabular display takes too much space in our presentation, and we're now asked to show the details of an item only when the item is selected in a Combobox. Though both the presentation and the behaviour (detail is shown only upon the user's selection) differ from our previous implementation, the View-Model class need not be heavily modified. Since an item's detail will be rendered only when it is selected in the Combobox, we obviously need to handle the 'onSelect' event, so let's add a new method, doSelect:

```java
public class InventoryVM {

    ListModelList<Item> inventory;
    Item selectedItem;

    @NotifyChange("selectedItem")
    @Command
    public void doSelect() {
    }

    // getters & setters
}
```

A method annotated with @Command becomes eligible to be called from our markup by its name, in our case:

```xml
<combobox onSelect="@command('doSelect')">
```

The annotation @NotifyChange("selectedItem") allows the selectedItem property to be updated automatically whenever the user selects a new Item in the Combobox. For our purposes, no additional implementation is needed for the doSelect method.
With this bit of change done, we can now see how this slightly modified View-Model works with our new markup:

```xml
<window title="Inventory" border="normal"
        apply="org.zkoss.bind.BindComposer"
        viewModel="@id('vm') @init('lab.zkoss.mvvm.ctrl.InventoryVM')"
        width="600px">
    ...
    <combobox model="@load(vm.inventory)"
              selectedItem="@bind(vm.selectedItem)"
              onSelect="@command('doSelect')">
        <template name="model" var="item">
            <comboitem label="@load(item.ID)"/>
        </template>
        <comboitem label="Test"/>
    </combobox>
    <listbox visible="@load(not empty vm.selectedItem)" width="240px">
        <listhead>
            <listheader></listheader>
            <listheader></listheader>
        </listhead>
        <listitem>
            <listcell><label value="Item Name: "/></listcell>
            <listcell><label value="@load(vm.selectedItem.name)"/></listcell>
        </listitem>
        <listitem>
            <listcell><label value="Unit Price: "/></listcell>
            <listcell><label value="@load(vm.selectedItem.unitPrice)"/></listcell>
        </listitem>
        <listitem>
            <listcell><label value="Units in Stock: "/></listcell>
            <listcell><label value="@load(vm.selectedItem.quantity)"/></listcell>
        </listitem>
        <listitem>
            <listcell><label value="Net Value: "/></listcell>
            <listcell><label value="@load(vm.selectedItem.unitPrice * vm.selectedItem.quantity)"/></listcell>
        </listitem>
    </listbox>
    ...
</window>
```

In the Combobox, we load the inventory collection into the 'model' attribute so the Combobox can iteratively display the ID of each Item in the data model, using the Template component declared inside it. The 'selectedItem' attribute points to the most recently selected Item on that list, and the onSelect event is mapped to the View-Model's doSelect method. Finally, we make the Listbox containing an Item's detail visible only if the selectedItem property in the View-Model is not empty (selectedItem remains empty until an item is selected in the Combobox).
The selectedItem's properties are then loaded to fill out the Listbox.

Recap

Under the MVVM pattern, our View-Model class exposes its data and methods to the binder; no reference is made to any particular View component. The View implementations access data and invoke event handlers via the binder. In this post, we've only covered the fundamental workings of ZK's MVVM mechanisms. The binder is obviously not restricted to loading data from the View-Model: in addition to saving data from the View to the View-Model, we can also inject data converters and validators into the mix of View to View-Model communication. The MVVM pattern may also work in conjunction with the MVC model; that is, we can still wire components and listen to fired events via the MVC Selector mechanism if we wish to do so. We'll dig into some of these topics at a later time.

Reference: A First Look at MVVM in ZK 6 from our JCG partner Lance Lu at the Tech Dojo blog....

JMX : Some Introductory Notes

JMX (Java Management Extensions) is a J2SE technology which enables management and monitoring of Java applications. The basic idea is to implement a set of management objects and register the implementations with a platform server, from where these implementations can be invoked either locally or remotely from the JVM using a set of connectors or adapters.

A management/instrumentation object is called an MBean (short for Managed Bean). Once instantiated, an MBean is registered with a unique ObjectName with the platform MBeanServer. The MBeanServer acts as a repository of MBeans, enabling the creation, registration, access and removal of MBeans. However, the MBeanServer does not persist MBean information, so with a restart of the JVM you would lose all the MBeans in it. The MBeanServer is normally accessed through its MBeanServerConnection API, which works both locally and remotely.

The management interface of an MBean typically consists of [1]:

- named and typed attributes that can be read/written
- named and typed operations that can be invoked
- typed notifications that can be emitted by the MBean

For example, say it is required to manage the thread pool parameters of one of your applications at runtime. With JMX it's a matter of writing an MBean with logic related to setting and getting these parameters and registering it with the MBeanServer.

The next step is to expose these MBeans to the outside world so that remote clients can invoke them to manage your application. This can be done via various protocols implemented by protocol connectors and protocol adapters. A protocol connector exposes MBeans as they are, so that a remote client sees the same interface (the JMX RMI connector is a good example); the client, or the remote management application, must itself be enabled for JMX technology.
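To make the thread-pool example concrete, here is a minimal standard-MBean sketch; the ThreadPoolConfig names and the "com.example" domain are illustrative choices, not from any real library:

```java
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxThreadPoolDemo {

    // By the standard MBean convention, the management interface must be public
    // and named <ImplementationClass> + "MBean".
    public interface ThreadPoolConfigMBean {
        int getPoolSize();
        void setPoolSize(int size);
    }

    public static class ThreadPoolConfig implements ThreadPoolConfigMBean {
        private volatile int poolSize = 10;
        public int getPoolSize() { return poolSize; }
        public void setPoolSize(int size) { this.poolSize = size; }
    }

    public static void main(String[] args) throws Exception {
        // Register the MBean with the platform MBeanServer under a unique ObjectName
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=ThreadPoolConfig");
        server.registerMBean(new ThreadPoolConfig(), name);

        // A local client can now read and write the attribute through the server
        server.setAttribute(name, new Attribute("PoolSize", 20));
        System.out.println("PoolSize = " + server.getAttribute(name, "PoolSize"));
    }
}
```

A tool like JConsole attached to the same JVM would show the same attribute under the com.example domain.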
A protocol adapter (e.g. HTML, SNMP) adapts the results according to the protocol the client is expecting (e.g. for a browser-based client it sends the results as HTML over HTTP).

Now that the MBeans are properly exposed to the outside, we need some clients to access them to manage our applications. There are two categories of clients, according to whether they use connectors or adapters.

JMX clients use the JMX APIs to connect to the MBeanServer and invoke MBeans. Generally a JMX client uses an MBeanServerConnection to connect to the MBeanServer and invokes MBeans through it by providing the MBean ID (ObjectName) and any required parameters. There are three types of JMX clients:

- Local JMX client: a client that runs in the same JVM as the MBeanServer. These clients can also use the MBeanServer API itself, since they are running inside the same JVM.
- Agent: a local JMX client which manages the MBeanServer itself. Remember that the MBeanServer does not persist MBean information, so we can use an agent to provide this logic, encapsulating the MBeanServer with the additional functionality. The agent is responsible for initializing and managing the MBeanServer itself.
- Remote JMX client: a remote client differs from a local client only in that it needs to instantiate a connector to connect to a connector server in order to get an MBeanServerConnection. And of course it runs in a remote JVM, as the name suggests.

The next type of client is the management clients, which use protocol adapters to connect to the MBeanServer. For these to work, the respective adapter should be present and running in the JVM being managed. For example, the HTML adapter should be present in the JVM for a browser-based client to connect to it and invoke MBeans. The diagram below summarizes the concepts described so far.

This concludes my quick notes on JMX. An extremely good read on the main JMX concepts can be found at [2].
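The connector-based remote-client flow can be sketched with JDK classes alone by running the connector server and the client in one JVM; the port 9999 and the service URL here are arbitrary choices for the example:

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class JmxRemoteDemo {

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Expose the MBeanServer through an RMI connector server
        LocateRegistry.createRegistry(9999);
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnectorServer connectorServer =
                JMXConnectorServerFactory.newJMXConnectorServer(url, null, server);
        connectorServer.start();

        // The "remote" JMX client: connect and obtain an MBeanServerConnection
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            System.out.println("MBeans visible to the client: " + connection.getMBeanCount());
        } finally {
            connector.close();
            connectorServer.stop();
        }
    }
}
```

In a real deployment the client half would run in a separate JVM, with only the JMXServiceURL shared between the two sides.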
The JMX learning trail at Oracle is also a good starting point for getting to grips with JMX.

[1] http://docs.oracle.com/javase/6/docs/technotes/guides/jmx/overview/instrumentation.html#wp998816
[2] http://pub.admc.com/howtos/jmx/architecture-chapt.html

Reference: JMX : Some Introductory Notes from our JCG partner Buddhika Chamith at the Source Open blog....

Serving Files with Puppet Standalone in Vagrant

If you use Puppet in client-server mode to configure your production environment, then you might want to be able to copy & paste from the production configuration into Vagrant's standalone Puppet configuration to test things. One of the key features necessary for that is enabling file serving via "source => 'puppet:///path/to/file'". In client-server mode the files are served by the server; in standalone mode you can configure Puppet to read from a local (likely shared) folder. We will see how to do this.

Credits: This post is based heavily on Akumria's answer at StackOverflow: how to source a file in a puppet manifest from a module.

Enabling Puppet Standalone in Vagrant to Resolve puppet:///…

Quick overview:

1. Make the directory with the files to be served available to the Vagrant VM
2. Create fileserver.conf to inform Puppet about the directory
3. Tell Puppet about the fileserver.conf
4. Use it

1. Make the directory with the files to be served available to the Vagrant VM

For example as a shared folder:

```ruby
# Snippet of <vagrant directory>/Vagrantfile
config.vm.share_folder "PuppetFiles", "/etc/puppet/files", "./puppet-files-symlink"
```

(In my case this is actually a symlink to the actual folder in our Puppet git repository. Beware that symlinks inside shared folders often don't work, and thus it's better to use the symlink as a standalone shared folder root.) Notice you don't need to declare a shared folder.

2. Create fileserver.conf to inform Puppet about the directory

You need to tell Puppet that the source "puppet:///files/" should be served from /etc/puppet/files/:

```
# <vagrant directory>/fileserver.conf
[files]
  path /etc/puppet/files
  allow *
```

3. Tell Puppet about the fileserver.conf

Puppet needs to know that it should read the fileserver.conf file:

```ruby
# Snippet of <vagrant directory>/Vagrantfile
config.vm.provision :puppet,
    :options => ["--fileserverconfig=/vagrant/fileserver.conf"],
    :facter => { "fqdn" => "vagrant.vagrantup.com" } do |puppet|
  ...
end
```
4. Use it

```
vagrant_dir$ echo "dummy content" > ./puppet-files-symlink/example-file.txt
```

```puppet
# Snippet of <vagrant directory>/manifests/<my manifest>.pp
file { '/tmp/example-file.txt':
  ensure => file,
  source => 'puppet:///files/example-file.txt',
}
```

Caveats: URLs with a server name (puppet://puppet/) don't work

URLs like puppet://puppet/files/path/to/file don't work; you must use puppet:///files/path/to/file instead (an empty, i.e. implicit, server name => three slashes). The reason is, I believe, that if you state the server name explicitly then Puppet will try to find the server and get the files from there (which might be desirable behavior if you run Puppet Master locally or elsewhere; in that case just add the server name to /etc/hosts in the Vagrant VM or make sure the DNS server used can resolve it). On the other hand, if you leave the server name out and rely on the implicit value, then Puppet in standalone mode will consult its fileserver.conf and behave accordingly. (Notice that in server-client mode the implicit server name equals the Puppet master, i.e. puppet:/// works perfectly well there.) If you use puppet://puppet/files/… then you'll get an error like this:

```
err: /Stage[main]/My_example_class/File[fetch_cdn_logs.py]: Could not evaluate: getaddrinfo: Name or service not known
Could not retrieve file metadata for puppet://puppet/files/analytics/fetch_cdn_logs.py: getaddrinfo: Name or service not known at /tmp/vagrant-puppet/manifests/analytics_dev.pp:283
```

Environment: Puppet 2.7.14, Vagrant 1.0.2

Reference: Serving Files with Puppet Standalone in Vagrant from our JCG partner Jakub Holy at The Holy Java blog....

An agile methodology for orthodox environments

My company designs and develops mobile and web-based banking solutions. Our customers (banks for the most part) are highly bureaucratized, orthodox (i.e. they like to have everything pre-defined and pre-approved) and risk-averse, and therefore change and the disruption of the status quo are not a normal sight within most of them. Most banking IT departments are used to the good old waterfall development cycle (believe it or not). Additionally, when they purchase a tailor-made system (or a highly customizable product-based deployment) they prefer to know in advance exactly what the system will do, how it will do it and how long it will take to deploy (even if they don't know what they want themselves). I believe this happens a lot in provider/customer relationships, and not only in the financial sector. But during real-life software development projects at banks, as on almost all software projects:

- Changes are inevitable
- Users don't realize what they want until they see the system working
- Developers don't understand what the user needs until they see the user's face looking at the actual system

So an agile methodology seems to be in order, right?
But how to couple both worlds? What we decided to do is take the bureaucratic items that we think are absolutely necessary for our customers to feel at ease (and to actually buy our projects), and build the most agile methodology possible with these items as axioms. These undesired but unavoidable items are:

- Pre-defined initial scope
- Formal customer approval of user stories (or requirements specifications)
- Acceptance testing with a formal approval done by personnel appointed by the customer (be it from the actual customer's staff, or sometimes from a third party)
- Documented and pre-approved change requests

We took elements from several agile methodologies and the personal experience of our staff, with a lot of influence from Scrum, and defined the following.

Sprint zero, lasting 1 to 5 weeks:

- General look & feel design
- General HTML template development
- List of all user stories compiled and prioritized
- System architecture definition
- External systems interface design

Regular sprints, lasting 5 to 8 weeks:

- Write user stories
- HTML development of relevant pages/widgets
- Validate user stories and HTML items with the customer
- Development (up to 2 user stories per developer per sprint)
- Internal testing and rework
- Validation testing and rework (with the customer)
- Testing/pre-production deployment of the new version

Regular sprints after sprint number one should have a lower assignment load per developer than sprint one, to make room for rework/changes from previous sprints and for validation testing. The assignment of user stories to each sprint is done using the prioritized list and the availability of human and system resources from the customer. We believe both our customers and our company are benefiting from this method:

- Requirements elicitation and validation are performed progressively and during most of the project's duration, motivating greater involvement from the customer.
- The customer can see a working system very soon (7-10 weeks after project start for the first version, and then a new version every 4-6 weeks).
- Including rework as a natural part of each sprint, together with the iterative nature of the method, smooths the customer/provider relationship. In our experience, using a rigid cyclic methodology implies the use of strict change requests, and those tend to increase the number of hard negotiations and damage the image of the provider in the eyes of the customer.

I'll post a follow-up with real-life experiences and results of our methodology in action. Reference: Defining an agile methodology for orthodox environments from our JCG partner Ricardo Zuasti at the Ricardo Zuasti's blog....

Four laws of robust software systems

Murphy's Law ("If anything can go wrong, it will") was born at Edwards Air Force Base in 1949 at North Base. It was named after Capt. Edward A. Murphy, an engineer working on Air Force Project MX981, a project designed to see how much sudden deceleration a person can stand in a crash. One day, after finding that a transducer was wired wrong, he cursed the technician responsible and said, "If there is any way to do it wrong, he'll find it."

For that reason it may be good to put some quality assurance process in place. I could also have called this blog "the four laws of steady software quality". It's about some fundamental techniques that can help to achieve superior quality over a longer distance. This is particularly important if you're developing a central component that will cause serious damage if it fails in production. OK, here is my (never final and not holistic) list of practical quality assurance tips.

Law 1: facilitate change

There is nothing permanent except change. If a system isn't designed in accordance with this supremely important reality, then the probability of failure may increase above average. A widely used technique to facilitate change is the development of a sufficient set of unit tests. Unit testing makes it possible to uncover regressions in existing functionality after changes have been made to a system. It also encourages you to really think about the desired functionality and required design of the component under development.

Law 2: don't rush through the functional testing phase

In economics, the marginal utility of a good is the gain (or loss) from an increase (or decrease) in the consumption of that good. The law of diminishing marginal utility says that the marginal utility of each (homogeneous) unit decreases as the supply of units increases (and vice versa). The first functional test cases often walk through the main scenarios, covering the main paths of the considered software. None of the code under test has been executed before.
These test cases have a very high marginal utility. Subsequent test cases may walk through the same code ranges, except for specific side paths at specific validation conditions, for instance. Such test cases may cover only three or four additional lines of code in your application, and as a result they have a smaller marginal utility than the first test cases. My law about functional testing suggests: as long as the execution of the next test case yields significant utility, the more time you invest into testing, the better the outcome! So don't rush through the functional testing phase and miss out on useful test cases (this assumes the special case in which usefulness can be quantified). Try to find the useful test cases that promise a significant gain in perceptible quality. On the other hand, if you're executing test cases with a negative marginal utility, you're actually investing more effort than you gain in terms of perceptible quality. There is a special (but not uncommon) situation in which the client does not run functional tests on a systematic basis. The law then suggests: the longer the application is in the test environment, the better the outcome.

Law 3: run (non-functional) benchmark tests

Another piece of steady software quality is a regular load test. To make results usable, load tests need a defined, steady environment and a baseline of measured values (a benchmark). These values are, at a minimum: CPU, response time and memory footprint. Load tests of new releases can be compared to load tests of older releases. That way we can also bypass the often-stated requirement that the load test environment needs the same capacity parameters as the production environment. In many cases it is possible to see the really big issues with a relatively small set of parallel users (e.g. 50 users). It makes limited sense to do load testing if single-user profiling results are bad.
Therefore it's a good idea to perform repeatable profiling test cases with every release. This way profiling results can be compared to each other (again: the benchmark idea). We do CPU and elapsed-time profiling as well as memory profiling. Profiling is an activity that runs in parallel to actual development. It makes sense to focus on the main scenarios used regularly in production.

Law 4: avoid dependency lock-in

The difference between trouble and a severe crisis is the time it takes to fix the problem that causes the trouble. For this reason you may always need a way back to your previous release; you need a fallback scenario to avoid a production crisis with severe business impact. You enable rollback by avoiding dependency lock-in. Runtime dependencies of your application on neighbouring systems may arise through joint interface or contract changes during development. If you implemented requirements that resulted in changed interfaces and contracts, then you cannot simply roll back; that's obvious. Therefore you need to avoid too many interface and contract changes. Small release cycles help to reduce dependencies between application versions in one release, because fewer changes are rolled to production. Another counteraction against dependency lock-in is to keep neighbouring systems downwards compatible for one version. That's it in terms of robust systems. Reference: "5' on IT-Architecture: four laws of robust software systems" from our JCG partner Niklas....
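Law 1's point about unit tests uncovering regressions can be sketched in a few lines of Java. The account-masking component below is a made-up example, not from the original post, and plain asserts stand in for a test framework such as JUnit; the idea is that the assertions pin down the desired behaviour, so a later change that breaks it fails immediately:

```java
public class Law1UnitTestSketch {

    // A tiny component whose behaviour we want to protect against regressions:
    // mask an account number so only the last four characters stay visible.
    static String maskAccountNumber(String account) {
        int visible = 4;
        if (account == null || account.length() <= visible) {
            return account;
        }
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < account.length() - visible; i++) {
            sb.append('*');
        }
        return sb.append(account.substring(account.length() - visible)).toString();
    }

    public static void main(String[] args) {
        // These checks document the desired functionality; if a later "improvement"
        // changes the masking rule, they fail and uncover the regression.
        check("******3456", maskAccountNumber("1234123456"));
        check("123", maskAccountNumber("123"));
        System.out.println("all checks passed");
    }

    static void check(String expected, String actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```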

OpenShift Express Web Management Console: Getting started

This week the newest release of OpenShift brought two really great features to an already awesome PaaS Cloud provider. First, JBoss AS has been upgraded from 7.0 to 7.1, and the all-new Express Web Management Console has been released as a preview. In this article we examine how to use this new console and help you create and then destroy an application.

Overview

Figure 1: follow link to launch the Express Console

In this section we assume you have already registered as an OpenShift user and are logged into the OpenShift Express start page. In figure 1 the Express landing page is shown; if you follow the Express Console link you will be brought to a page that currently shows the old administration console and includes a link to Preview the new OpenShift Management Console. Follow this link to the preview as shown for my user in figure 2.

Figure 2: preview Express management console

It provides an overview of the user's existing applications, with a Details button for each application. My user has two applications already created: a jBPM web editor project based on JBoss, and a PHP Twitter project that makes use of mongodb as a backend, see figure 2. At the top of the applications list, you have a button to Create a New Application. We will be using this button to create an existing project called kitchensinkhtml5, a mobile application from the JBoss project Aerogear. The nice thing about this demo project is that you can view it both in your desktop browser and on your mobile devices.

Figure 3: choose a type of application

Create application

Since this user has already created a domain and has existing applications set up, we just need to start by using the Create a New Application button.
This takes us to the first of three steps, where we choose a type of application; this will be JBoss Application Server 7.1, chosen via the Select button shown in figure 3.

Figure 4: create application

The next step is to configure and deploy the application, done by filling in an application name in the provided text box and clicking on the Create Application button. We will be calling this application kitchensinkhtml5, so we fill in that name in the text box and submit to create our new application, as shown in figure 4.

Figure 5: next steps

Once we submit our creation request, the OpenShift Express magic starts to set up our new instance with JBoss AS 7.1 running. We are presented with a final screen labeled Next Steps, which provides information on accessing your application, making code changes, managing your application and starting to add capabilities. As shown in figure 5, we will be pulling in a git clone of our Express application repository so that we can set up our kitchensink application code. As stated in the section on making code changes, we clone the repository locally from a shell command line:

git clone ssh://8df3de8e983c4b058db372e51bfe5254@kitchensinkhtml5-inthe.rhcloud.com/~/git/kitchensinkhtml5.git/
cd kitchensinkhtml5/

Once that is done we need to pull in our existing kitchensink code base:

git remote add upstream -m master git://github.com/eschabell/kitchensink-html5-mobile-example.git
git pull -s recursive -X theirs upstream master

Finally, we push this back upstream to our Express instance as follows:

git push

We can now view the application at the URL assigned to our Express instance:

http://kitchensinkhtml5-{$domainname}.rhcloud.com

You should see the mobile member registration application as shown here in figure 6.

Figure 6: mobile application

Destroy application

A final action that you can take with the new OpenShift Express Web Management Console is to destroy your application.
As we only get five instances at a time, you will soon find yourself creating and destroying Express instances with ease.

Figure 7: delete application

After logging in as described above and starting the preview of the web management console, you will see your list of existing applications. By selecting an application's Details button you will be shown an overview of the application; see figure 7 for the example editor application we will be destroying.

Figure 8: application deleted

You will notice a Delete button in the top right corner of the application overview screen, see figure 7. When it is selected, you will be asked to confirm that you really want to destroy this application. If you confirm this decision by clicking on the Delete button, your application and Express instance will be cleaned up. You will be returned to the application overview screen, see figure 8, ready for your next interaction with the Express Web Administration Console.

Summary

In this article we have covered the very basics of the newly released OpenShift Express Web Administration Console. We have shown you how to view your applications, create a new application, and free up an Express instance by destroying one of your applications. Reference: Getting started with the OpenShift Express Web Management Console from our JCG partner Eric D. Schabell at the Thoughts on Middleware, Linux, software, cycling and other news… blog....

Google Protocol Buffers in Java

Overview

Protocol Buffers is an open source encoding mechanism for structured data. Developed at Google, it was designed to be language/platform neutral and extensible. In this post, my aim is to cover the basic use of protocol buffers in the context of the Java platform. Protobuffs are faster and simpler than XML and more compact than JSON. Currently, there is official support for C++, Java, and Python. However, other platforms are supported (not by Google) as open source projects. I tried a PHP implementation, but it wasn't fully developed, so I stopped using it; nonetheless, support is catching on. With Google announcing support for PHP in Google App Engine, I believe they will take this to the next level.

Basically, you define how you want your data to be structured once, using a .proto specification file. This is analogous to an IDL file or a specification language describing a software component. This file is consumed by the protocol buffer compiler (protoc), which will generate supporting methods so that you can write and read objects to and from a variety of streams. The message format is very straightforward. Each message type has one or more uniquely numbered fields (we'll see why this is later). Nested message types have their own set of uniquely numbered fields. Value types can be numbers, booleans, strings, bytes, collections and enumerations (inspired by the Java enum). Also, you can nest other message types, allowing you to structure your data hierarchically in much the same way JSON allows you to. Fields can be specified as optional, required, or repeated. Don't let the type of the field (e.g. enum, int32, float, string, etc.) confuse you when implementing protocol buffers in Python. The types in the field are just hints to protoc about how to serialize a field's value and produce the encoded format of your message (more on this later). The encoded format looks like a flattened and compressed representation of your object.
You would write this specification the exact same way whether you are using protocol buffers in Python, Java, or C++. Protobuffs are extensible: you can update the structure of your objects at a later time without breaking programs that used the old format. If you wanted to send data over the network, you would encode the data using the Protocol Buffer API and then serialize the resulting string. This notion of extensibility is a rather important one, since Java, and many other serialization mechanisms for that matter, can have issues with interoperability and backwards compatibility. With this approach, you don't have to worry about maintaining a serialVersionUID field in your code that represents the structure of an object. Maintaining this field is essential, as Java's serialization mechanism will use it as a quick checksum when deserializing objects. As a result, once you have serialized your objects into some file system, or perhaps a blob store, it is risky to make drastic changes to your object structure at a later time. Protocol Buffers suffer less from this: so long as you only add optional fields to your objects, you will be able to deserialize old types, at which point you will probably upgrade them. Furthermore, you can define a package name for your .proto files with the java_package keyword. This is nice to avoid name collisions in the generated code. Another alternative is to specifically name the generated class file, as I did in my example below: I prefixed my generated classes with "Proto" to indicate this was a generated class.
Here's a simple message specification describing a User with an embedded Address message, User.proto:

option java_outer_classname = "ProtoUser";

message User {
  required int32 id = 1;        // DB record ID
  required string name = 2;
  required string firstname = 3;
  required string lastname = 4;
  required string ssn = 5;

  // Embedded Address message spec
  message Address {
    required int32 id = 1;
    required string country = 2 [default = "US"];
    optional string state = 3;
    optional string city = 4;
    optional string street = 5;
    optional string zip = 6;

    enum Type {
      HOME = 0;
      WORK = 1;
    }

    optional Type addrType = 7 [default = HOME];
  }

  repeated Address addr = 16;
}

Let's talk a bit about the tag numbers you see to the right of each property, since they are very important. These tags identify the fields of your message in the binary representation of an object of this specification. Tag values 1 – 15 will be stored as 1 byte, whereas fields tagged with values 16 – 2047 take 2 bytes to encode (not quite sure why they do this). Google recommends you use tags 1 – 15 for very frequently occurring data, and also reserve some tag values in this range for future updates. Note: you cannot use the numbers 19000 through 19999; they are reserved for the protobuf implementation. Also, you can define fields to be required, repeated, or optional. From the Google documentation:

- required: a well-formed message must have exactly one of this field; trying to build a message with a required field uninitialized will throw a RuntimeException.
- optional: a well-formed message can have zero or one of this field (but not more than one).
- repeated: this field can be repeated any number of times (including zero) in a well-formed message. The order of the repeated values will be preserved.

The documentation warns developers to be cautious about using required, as these types of fields will cause problems if you ever decide to deprecate one.
This is a classical backwards compatibility problem that all serialization mechanisms suffer from. Google engineers even recommend using optional for everything. Furthermore, I specified a nested message specification, Address. I could just as easily have placed this definition outside the User object in the same proto file; for related message definitions it makes sense to have them all in the same .proto file. Even though the Address message type is not a very good example of this, I would go with a nested type when a message type does not make sense outside of its 'parent' object. For instance, if you wanted to serialize a Node of a LinkedList, Node would in this case be an embedded message definition. It's up to you and your design.

Optional message properties take on default values when they are left out. In particular, a type-specific default value is used: for strings, the default value is the empty string; for bools, the default value is false; for numeric types, the default value is zero; for enums, the default value is the first value listed in the enum's type definition (this is pretty cool but not so obvious).

Enumerations are pretty nice. They work cross-platform in much the same way as enum works in Java. The value of an enum field can only be a single value. You can declare enumerations inside the message definition, or outside as if it were its own independent entity. If specified inside a message type, you can expose it to another message type via [Message-name].[enum-name].

Protoc

When running the protocol buffer compiler against a .proto file, the compiler will generate code for the chosen language. It will convert your message types into augmented classes providing, among other things, getters and setters for your properties. The compiler also generates convenience methods to serialize messages to and from output streams and strings.
In the case of an enum type, the generated code will have a corresponding enum for Java or C++, or a special EnumDescriptor class for Python that is used to create a set of symbolic constants with integer values in the runtime-generated class. For Java, the compiler generates .java files with a fluent Builder class for each message type to streamline object creation and initialization. The message classes generated by the compiler are immutable; once built, they cannot be changed. You can read about other platforms (Python, C++) in the resources section, with details on field encodings here: https://developers.google.com/protocol-buffers/docs/reference/overview. For our example, we will invoke protoc with the --java_out command line flag. This flag indicates to the compiler the output directory for the generated Java classes (one Java class for each proto file).

API

The generated API provides support for the following convenience methods:

- isInitialized()
- toString()
- mergeFrom(…)
- clear()

For parsing and serialization:

- byte[] toByteArray()
- parseFrom()
- writeTo(OutputStream), used in the sample code to encode
- parseFrom(InputStream), used in the sample code to decode

Sample Code

Let's set up a simple project. I like to follow the Maven default archetype:

protobuff-example/src/main/java/ [Application Code]
protobuff-example/src/main/java/gen [Generated Proto Classes]
protobuff-example/src/main/proto [Proto file definitions]

To generate the protocol buffer classes, I will execute the following command:

# protoc --proto_path=/home/user/workspace/eclipse/trunk/protobuff/ --java_out=/home/user/workspace/eclipse/trunk/protobuff/src/main/java /home/user/workspace/eclipse/trunk/protobuff/src/main/proto/User.proto

I will show some pieces of the generated code and speak about them briefly. The generated class is quite large, but it's straightforward to understand. It will provide builders to create instances of User and Address.
public final class ProtoUser {

  public interface UserOrBuilder extends com.google.protobuf.MessageOrBuilder {
    ...
  }

  public interface AddressOrBuilder extends com.google.protobuf.MessageOrBuilder {
    ...
  }

The generated class contains Builder interfaces that make for really fluent object creation. These builder interfaces have getters and setters for each property specified in our proto file, such as:

public String getCountry() {
  java.lang.Object ref = country_;
  if (ref instanceof String) {
    return (String) ref;
  } else {
    com.google.protobuf.ByteString bs = (com.google.protobuf.ByteString) ref;
    String s = bs.toStringUtf8();
    if (com.google.protobuf.Internal.isValidUtf8(bs)) {
      country_ = s;
    }
    return s;
  }
}

Since this is a custom encoding mechanism, logically all of the fields have custom byte wrappers. Our simple String field, when stored, is compacted into a ByteString, which then gets deserialized into a UTF-8 string.

// required int32 id = 1;
public static final int ID_FIELD_NUMBER = 1;
private int id_;
public boolean hasId() {
  return ((bitField0_ & 0x00000001) == 0x00000001);
}

In this code we see the importance of the tag numbers we spoke of at the beginning: the bit masks on bitField0_ track which fields of the message have been set, and the tag numbers appear as the first argument of the write calls below. Next, we see snippets of the write and read methods I mentioned earlier. Writing an instance to an output stream:

public void writeTo(com.google.protobuf.CodedOutputStream output) throws java.io.IOException {
  getSerializedSize();
  if (((bitField0_ & 0x00000001) == 0x00000001)) {
    output.writeInt32(1, id_);
  }
  if (((bitField0_ & 0x00000002) == 0x00000002)) {
    output.writeBytes(2, getCountryBytes());
  }
  ....
}

Reading from an input stream:

public static ProtoUser.User parseFrom(java.io.InputStream input) throws java.io.IOException {
  return newBuilder().mergeFrom(input).buildParsed();
}

This class is about 2000 lines of code.
There are other details, such as how enum types are mapped and how repeated types are stored. Hopefully, the snippets I provided give you a high-level idea of the structure of this class. Let's take a look at some application-level code for using the generated class. To persist the data, we can simply do:

// Create instance of Address
Address addr = ProtoUser.User.Address.newBuilder()
    .setAddrType(Address.Type.HOME)
    .setCity("Weston")
    .setCountry("USA")
    .setId(1)
    .setState("FL")
    .setStreet("123 Lakeshore")
    .setZip("90210")
    .build();

// Serialize instance of User
User user = ProtoUser.User.newBuilder()
    .setId(1)
    .setFirstname("Luis")
    .setLastname("Atencio")
    .setName("luisat")
    .setSsn("555-555-5555")
    .addAddr(addr)
    .build();

// Write file
FileOutputStream output = new FileOutputStream("target/user.ser");
user.writeTo(output);
output.close();

Once persisted, we can read it back as such:

User user = User.parseFrom(new FileInputStream("target/user.ser"));
System.out.println(user);

To run the sample code, use:

java -cp .:../lib/protobuf-java-2.4.1.jar app.Serialize ../target/user.ser

Protobuff vs XML

Google claims that protocol buffers are 20 to 100 times faster (in nanoseconds) than XML and 3 to 10 times smaller with whitespace removed. However, until there is support and adoption on all platforms (not just the aforementioned three), XML will continue to be a very popular serialization mechanism. In addition, not everyone has the performance requirements and expectations that Google users have. An alternative to XML is JSON.

Protobuff vs JSON

I did some comparison testing to evaluate using protocol buffers over JSON. The results were quite dramatic; a simple test reveals that protobuffs are 50%+ more efficient in terms of storage. I created a simple POJO version of my User-Address classes and used the GSON library to encode an instance with the same state as the example above (I will omit implementation details; please check the gson project referenced below).
Encoding the same user data, I got:

-rw-rw-r-- 1 luisat luisat 206 May 30 09:47 json-user.ser
-rw-rw-r-- 1 luisat luisat  85 May 30 09:42 user.ser

Which is remarkable. I also found a similar comparison in another blog (see resources below); it's definitely worth a read.

Conclusion and Further Remarks

Protocol buffers can be a good solution to cross-platform data encoding. With clients written in Java, Python, C++ and many others, storing/sending compressed data is really straightforward. One tricky point to make is: "Remember REQUIRED is forever." If you go crazy and make every single field of your .proto file required, then it will be extremely difficult to delete or edit those fields. As a bit of incentive, protobuffs are used across Google's data stores: there are 48,162 different message types defined in the Google code tree, across 12,183 .proto files. Protocol Buffers promote good object-oriented design, since .proto files are basically dumb data holders (like structs in C++). According to the Google documentation, if you want to add richer behavior to a generated class, or you don't have control over the design of the .proto file, the best way to do this is to wrap the generated protocol buffer class in an application-specific class. Finally, remember that you should never add behaviour to the generated classes by inheriting from them. This will break internal mechanisms and is not good object-oriented practice anyway. A lot of the information presented here comes from personal experience, other resources, and most importantly Google developer documentation. Please check out the documentation in the resources section.
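The earlier claim about tag numbers (fields 1 – 15 need one byte for the key, fields 16 – 2047 need two) follows from the fact that protobuf encodes each field key as a varint of (field_number << 3) | wire_type. This can be verified with a small self-contained Java sketch; the method names here are mine for illustration, not part of the protobuf API:

```java
public class TagKeySize {

    // Number of bytes a varint needs for the given unsigned value
    // (7 payload bits per byte).
    static int varintSize(long value) {
        int bytes = 1;
        while ((value >>>= 7) != 0) {
            bytes++;
        }
        return bytes;
    }

    // Size of the encoded key for a field number, using wire type 0 (varint).
    static int keySize(int fieldNumber) {
        int wireType = 0;
        return varintSize(((long) fieldNumber << 3) | wireType);
    }

    public static void main(String[] args) {
        System.out.println("field 1    -> " + keySize(1) + " byte(s)");    // 1
        System.out.println("field 15   -> " + keySize(15) + " byte(s)");   // 1
        System.out.println("field 16   -> " + keySize(16) + " byte(s)");   // 2
        System.out.println("field 2047 -> " + keySize(2047) + " byte(s)"); // 2
        System.out.println("field 2048 -> " + keySize(2048) + " byte(s)"); // 3
    }
}
```

Shifting the field number left by three bits is exactly why the one-byte range ends at 15: 15 << 3 = 120 still fits in the 7 payload bits of a single varint byte, while 16 << 3 = 128 does not.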
Resources

- https://developers.google.com/protocol-buffers/docs/overview
- https://developers.google.com/protocol-buffers/docs/proto
- https://developers.google.com/protocol-buffers/docs/reference/java-generated
- https://developers.google.com/protocol-buffers/docs/reference/overview
- http://code.google.com/p/google-gson/
- http://afrozahmad.hubpages.com/hub/protocolbuffers

Reference: Java Protocol Buffers from our JCG partner Luis Atencio at the Reflective Thought blog....

Java concurrency – Feedback from tasks

Picking up from where I left off in my last post about the java.util.concurrent package, it's interesting and sometimes mandatory to get feedback from concurrent tasks after they are started. For example, imagine an application that has to send email batches; besides using a multi-threaded mechanism, you want to know how many of the intended emails were successfully dispatched and, during the actual sending process, the real-time progress of the whole batch. To implement this kind of multi-threading with feedback we can use the Callable interface. This interface works mostly the same way as Runnable, but the execution method (call()) returns a value that should reflect the outcome of the performed computation. Let's first define the class that will perform the actual task:

package com.ricardozuasti;

import java.util.concurrent.Callable;

public class FictionalEmailSender implements Callable<Boolean> {

    public FictionalEmailSender(String to, String subject, String body) {
        this.to = to;
        this.subject = subject;
        this.body = body;
    }

    @Override
    public Boolean call() throws InterruptedException {
        // Simulate that sending the email takes between 0 and 0.5 seconds
        Thread.sleep(Math.round(Math.random() * 0.5 * 1000));

        // Let's say we have an 80% chance of successfully sending our email
        if (Math.random() > 0.2) {
            return true;
        } else {
            return false;
        }
    }

    private String to;
    private String subject;
    private String body;
}

Notice that your Callable can use any return type, so your task can return whatever info you need. Now we can use a thread pool ExecutorService to send our emails, and since our task is implemented as a Callable, we get a Future reference for each new task we submit for execution. Note that we will create our ExecutorService using a direct constructor instead of a utility method from Executors; this is because using the specific class (ThreadPoolExecutor) provides some methods that will come in handy (not present in the ExecutorService interface).
package com.ricardozuasti;

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class Concurrency2 {

    public static void main(String[] args) {
        try {
            ThreadPoolExecutor executor = new ThreadPoolExecutor(30, 30, 1, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<Runnable>());

            List<Future<Boolean>> futures = new ArrayList<Future<Boolean>>(9000);

            // Let's spam every 4-digit numeric user on that silly domain
            for (int i = 1000; i < 10000; i++) {
                futures.add(executor.submit(new FictionalEmailSender(
                        i + "@wesellnumericusers.com",
                        "Knock, knock, Neo",
                        "The Matrix has you...")));
            }

            // All tasks have been submitted, we can begin the shutdown of our executor
            System.out.println("Starting shutdown...");
            executor.shutdown();

            // Every second we print our progress
            while (!executor.isTerminated()) {
                executor.awaitTermination(1, TimeUnit.SECONDS);
                int progress = Math.round((executor.getCompletedTaskCount() * 100)
                        / executor.getTaskCount());
                System.out.println(progress + "% done (" + executor.getCompletedTaskCount()
                        + " emails have been sent).");
            }

            // Now that we are finished sending all the emails, we can review the futures
            // and see how many were successfully sent
            int errorCount = 0;
            int successCount = 0;
            for (Future<Boolean> future : futures) {
                if (future.get()) {
                    successCount++;
                } else {
                    errorCount++;
                }
            }

            System.out.println(successCount + " emails were successfully sent, but "
                    + errorCount + " failed.");
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}

After all tasks are submitted to the ExecutorService, we begin its shutdown (preventing new tasks from being submitted) and use a loop (in a real-life scenario you should continue doing something else if possible) to wait until all tasks are finished, calculating and printing the progress made so far on each iteration.
Note that you could store the executor reference and query it from other threads at any time to calculate and report the process progress. Finally, using the collection of Future references we got for each Callable submitted to the ExecutorService, we can report the number of emails successfully sent and the number that failed. This infrastructure is not only easy to use but also promotes a clear separation of concerns, providing a pre-defined communication mechanism between the dispatcher program and the actual tasks. Reference: Java concurrency examples – Getting feedback from concurrent tasks from our JCG partner Ricardo Zuasti at the Ricardo Zuasti's blog....
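As a side note, when you only need the final results and not the live progress reporting shown above, ExecutorService.invokeAll offers a more compact pattern: it blocks until every Callable in a collection has completed and returns their Futures in one call. A small self-contained sketch (the simulated task and its deterministic failure rule are mine, not from the original post):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class InvokeAllExample {

    // Submits n simulated email tasks, blocks until all of them finish,
    // and returns the number of successful sends.
    static int sendBatch(int n) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(4);
        List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
        for (int i = 0; i < n; i++) {
            final int id = i;
            tasks.add(new Callable<Boolean>() {
                @Override
                public Boolean call() throws Exception {
                    Thread.sleep(5);       // simulate the send taking some time
                    return id % 5 != 0;    // pretend every fifth send fails
                }
            });
        }

        // invokeAll blocks until every task has completed, then returns the Futures
        List<Future<Boolean>> results = executor.invokeAll(tasks);
        executor.shutdown();

        int success = 0;
        for (Future<Boolean> f : results) {
            if (f.get()) {
                success++;
            }
        }
        return success;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sendBatch(20) + " of 20 emails were sent.");
    }
}
```

With the deterministic failure rule above (every fifth task fails), this prints "16 of 20 emails were sent." The trade-off versus the ThreadPoolExecutor approach in the article is that invokeAll gives you no visibility into the batch while it is running.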
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.