
Mule ESB, ActiveMQ and the DLQ

In this post I will show a simple Mule ESB flow to see the DLQ feature of Active MQ in action. I assume you have a running Apache ActiveMQ instance available (if not you can download a version here). In this example I make use of Mule ESB 3.4.2 and ActiveMQ 5.9.0. We can create a simple Mule project based on the following pom file:       <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"><modelVersion>4.0.0</modelVersion> <groupId>net.pascalalma.demo</groupId> <artifactId>activemq-test-flow</artifactId> <packaging>mule</packaging> <name>${project.artifactId}</name> <version>1.0.0-SNAPSHOT</version> <properties> <mule.version>3.4.2</mule.version> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <jdk.version>1.7</jdk.version> <junit.version>4.9</junit.version> <activemq.version>5.9.0</activemq.version> </properties> <dependencies> <!-- Mule Dependencies --> <dependency> <groupId>org.mule</groupId> <artifactId>mule-core</artifactId> <version>${mule.version}</version> </dependency> <!-- Mule Transports --> <dependency> <groupId>org.mule.transports</groupId> <artifactId>mule-transport-jms</artifactId> <version>${mule.version}</version> </dependency> <dependency> <groupId>org.mule.transports</groupId> <artifactId>mule-transport-vm</artifactId> <version>${mule.version}</version> </dependency> <!-- Mule Modules --> <dependency> <groupId>org.mule.modules</groupId> <artifactId>mule-module-client</artifactId> <version>${mule.version}</version> </dependency> <dependency> <groupId>org.mule.modules</groupId> <artifactId>mule-module-scripting</artifactId> <version>${mule.version}</version> </dependency> <!-- for testing --> <dependency> <groupId>org.mule.tests</groupId> <artifactId>mule-tests-functional</artifactId> <version>${mule.version}</version> </dependency> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>${junit.version}</version> </dependency> <dependency> <groupId>org.apache.activemq</groupId> <artifactId>activemq-client</artifactId> <version>${activemq.version}</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.3.2</version> <configuration> <source>${jdk.version}</source> <target>${jdk.version}</target> <encoding>${project.build.sourceEncoding}</encoding> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-resources-plugin</artifactId> <version>2.5</version> <configuration> <encoding>${project.build.sourceEncoding}</encoding> </configuration> </plugin> <plugin> <groupId>org.mule.tools</groupId> <artifactId>maven-mule-plugin</artifactId> <version>1.9</version> <extensions>true</extensions> <configuration> <copyToAppsDirectory>false</copyToAppsDirectory> </configuration> </plugin> </plugins> </build> </project> There is not much special here. Besides the necessary dependencies I have added the maven-mule-plugin so I can create a ‘mule’ packaging type and run Mule from my IDE. With this Maven pom in place we can create the following two Mule configurations. 
One for the Mule flow to test our transaction: <?xml version="1.0" encoding="UTF-8"?> <mule xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:scripting="http://www.mulesoft.org/schema/mule/scripting" version="EE-3.4.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd http://www.mulesoft.org/schema/mule/scripting http://www.mulesoft.org/schema/mule/scripting/current/mule-scripting.xsd"> <flow name="MainFlow"> <inbound-endpoint ref="event-queue" /> <logger category="net.pascalalma.demo.MainFlow" level="INFO" message="Received message from activeMQ" /> <scripting:component> <scripting:script engine="Groovy"> throw new Exception('Soap Fault Response detected') </scripting:script> </scripting:component> <outbound-endpoint ref="result-queue" /> </flow> </mule> In this flow we receive a message from the inbound endpoint, log a message and throw an exception before the message is put on the next queue. As we can see I didn’t add any exception handler. The configuration of the endpoints and connectors look like this: <?xml version="1.0" encoding="UTF-8"?><mule xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:jms="http://www.mulesoft.org/schema/mule/jms" xmlns:spring="http://www.springframework.org/schema/beans" version="EE-3.4.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd http://www.mulesoft.org/schema/mule/jms http://www.mulesoft.org/schema/mule/jms/current/mule-jms.xsd"><spring:bean id="redeliveryPolicy" class="org.apache.activemq.RedeliveryPolicy"> <spring:property name="maximumRedeliveries" value="5"/> <spring:property name="initialRedeliveryDelay" value="500"/> <spring:property name="maximumRedeliveryDelay" value="10000"/> <spring:property name="useExponentialBackOff" value="false"/> <spring:property name="backOffMultiplier" value="3"/> </spring:bean> <!-- ActiveMQ Connection factory --> <spring:bean id="amqFactory" class="org.apache.activemq.ActiveMQConnectionFactory" lazy-init="true"> <spring:property name="brokerURL" value="tcp://localhost:61616" /> <spring:property name="redeliveryPolicy" ref="redeliveryPolicy" /> </spring:bean> <jms:activemq-connector name="activeMqConnector" connectionFactory-ref="amqFactory" persistentDelivery="true" numberOfConcurrentTransactedReceivers="2" specification="1.1" /> <jms:endpoint name="event-queue" connector-ref="activeMqConnector" queue="event-queue" > <jms:transaction action="ALWAYS_BEGIN" /> </jms:endpoint> <jms:endpoint name="result-queue" connector-ref="activeMqConnector" queue="result-queue" > <jms:transaction action="ALWAYS_JOIN" /> </jms:endpoint> </mule> I defined a Spring bean for an ActiveMQ connection factory and one for the redelivery policy of this factory. With this redelivery policy we can configure how often Mule should retry to process a message from the queue when the original attempt failed. A nice feature in the redelivery policy is the ‘backOffMultiplier’ and ‘useExponentialBackOff’ combination. With these options you can have the period between two redelivery attempts increase exponentially until ‘maximumRedeliveryDelay’ is reached. In that case Mule will wait the ‘maximumRedeliveryDelay’ for the next attempt. So with these configurations we can create a Mule test class and run it. 
The test class would look something like this: package net.pascalalma.demo;import org.junit.Test; import org.mule.DefaultMuleMessage; import org.mule.api.MuleMessage; import org.mule.module.client.MuleClient; import org.mule.tck.junit4.FunctionalTestCase;public class TransactionFlowTest extends FunctionalTestCase {@Override protected String getConfigResources() { return "app/test-flow.xml, app/test-endpoints.xml"; }@Test public void testError() throws Exception { MuleClient client = new MuleClient(muleContext); MuleMessage inMsg = new DefaultMuleMessage("<txt>Some message</txt>", muleContext); client.dispatch("event-queue", inMsg);// Give Mule the chance to redeliver the message Thread.sleep(4000); } } If we run this test you will see messages in the logging like: Exception stack is: 1. "Message with id "ID:Pascals-MacBook-Pro-2.local-59158-1406440948059-1:1:3:1:1" has been redelivered 3 times on endpoint "jms://event-queue", which exceeds the maxRedelivery setting of 0 on the connector "activeMqConnector". Message payload is of type: ActiveMQTextMessage (org.mule.transport.jms.redelivery.MessageRedeliveredException) org.mule.transport.jms.redelivery.JmsXRedeliveryHandler:87 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/transport/jms/redelivery/MessageRedeliveredException.html) If we now switch to the ActiveMQ console which can be reached at http://localhost:8161 for the default local installation we can see the following queues:  As expected we see two queues being created, the event-queue which is empty and the default ActiveMQ.DLQ which contains our message:  As you can image it might be handy to have a specific DLQ for each queue instead of one DLQ which will contain all kinds of undeliverable messages. Luckily this is easy to configure in ActiveMQ. Just put the following in the ‘activemq.xml’ file that can be found in ‘$ACTIVEMQ_HOME/conf’ folder. <!-- Set the following policy on all queues using the '>' wildcard --> <policyEntry queue=">"> <deadLetterStrategy> <individualDeadLetterStrategy queuePrefix="DLQ." useQueueForQueueMessages="true" /> </deadLetterStrategy> </policyEntry> If we now restart ActiveMQ, remove the existing queues and rerun our test we see the following result:  So with this setup each queue has its own DLQ. For more options regarding these ActieMQ settings see here. With the Mule flow created in this post it is easy to test and play with these settings.Reference: Mule ESB, ActiveMQ and the DLQ from our JCG partner Pascal Alma at the The Pragmatic Integrator blog....

Developing Eclipse plugins

Recently I started working with a team on an Eclipse plugin. The team had developed an awesome plugin that serves its intended purpose. So I checked out the source and tried building it. The project source contained all the required libraries and could only be built in Eclipse. In today's world of continuous delivery this is a major impediment, as such a project cannot be built on Jenkins. The project not only contained the required libraries, but the complete Eclipse settings were also kept as part of the source, so I thought of improving this first. I created a pom.xml in the project and deleted the settings and libs. The build worked fine, but as soon as I opened the project in Eclipse it was a mess. Nothing worked there! It took some time to realize that Eclipse and Maven are two different worlds that do not converge easily. Even the smallest of things, like the artifact version and the bundle version, do not line up: in Maven anything can be the version, e.g. 21-snapshot, but in Eclipse there are standards and it has to be named [number].[number].[number].qualifier, e.g. 1.1.21.qualifier.

Eclipse-Tycho

In order to bridge the gap between the two worlds, Sonatype has contributed Tycho to the Eclipse ecosystem. Add the plugin together with the Eclipse repository: <repository> <id>juno</id> <layout>p2</layout> <url>http://download.eclipse.org/releases/juno</url> </repository> <plugin> <groupId>org.eclipse.tycho</groupId> <artifactId>tycho-versions-plugin</artifactId> <version>0.18.1</version> </plugin> <plugin> <groupId>org.eclipse.tycho</groupId> <artifactId>target-platform-configuration</artifactId> <version>0.18.1</version> <configuration> <pomDependencies>consider</pomDependencies> <environments> <environment> <os>linux</os> <ws>gtk</ws> <arch>x86_64</arch> </environment> </environments> </configuration> </plugin>

There are a few points to note here: if the plugin is built for a specific Eclipse platform, the repository for that platform should be added; and the plugin can take its dependencies either from the POM or from the MANIFEST.MF; if the dependencies are to be used from the POM, set pomDependencies. The Tycho plugin also brings along a set of plugins for version updates, surefire tests etc. The plugins can be invoked individually to perform different goals, e.g. the versions plugin can be used in the following manner to set versions: mvn tycho-versions:set-version -DnewVersion=1.1.1-SNAPSHOT This will set the 1.1.1-SNAPSHOT version in the POM and 1.1.1.qualifier in the MANIFEST.MF. While the plugins offer a lot, there are a few limitations as well. The plugin cannot generate proper Eclipse settings for PDE, so if we do not keep these settings we need to generate them again. A few other limitations are listed on the plugin page. After this we were able to bridge the two worlds in some sense: Maven builds which generate an Eclipse plugin were now possible.

Plugin Classloaders

In Eclipse PDE there are plugins and fragments. Plugins are complete modules that offer some functionality, while a fragment is a module that attaches itself to a parent plugin, enhancing its capability. A plugin can thus attach any number of fragments, which enhance it at runtime. We had a base plugin which offered some basic features, and a fragment was built on top of it to use Hadoop 1.x in the plugin. After some time the requirement came up to support Hadoop 2.x as well. Now the two libraries are not compatible with each other.
Thus some workaround was required to enable this. Fortunately, Eclipse, being OSGi based, has a different mechanism of loading classes compared to other Java applications. Usually there is a single classloader (or a hierarchy of classloaders) that loads the complete application. In such a case, if two incompatible jars are bundled, only one of them will be loaded. But in Eclipse each plugin has its own classloader, which can load its own classes. This offers a couple of opportunities, like supporting different versions of the same library. This feature is available to plugins only, not to fragments: fragments do not have their own classloaders and use the parent plugin's classloader. We could have used the plugin classloader support, but the Hadoop libs were loaded by the fragment instead of the plugin. So we converted the fragment into a plugin, which required completely refactoring the existing codebase. After the Hadoop 1.x based plugin was formed, we could create more plugins for Hadoop 2.x. Each plugin loads its own set of classes. Now the only requirement is to have more PermGen space, as the complete set of plugins cannot be loaded into the default PermGen space. Reference: Developing Eclipse plugins from our JCG partner Rahul Sharma at the The road so far… blog....

Smart Auto-PPR Change Event Policy

There is a common belief among ADF developers that setting the iterator binding change event policy to ppr is not a good thing in terms of performance, because this policy forces the framework to refresh all attribute bindings that are bound to this iterator on each request. That's not true! The framework refreshes only attributes that have been changed during the request and attributes that depend on the changed attributes. Let's consider a simple use case. There is a form: The iterator's change event policy is set to ppr, which is the default in JDeveloper 11gR2 and 12c. The "First Name" and the "Last Name" fields are auto-submitted. The "Full Name" field is going to be calculated by concatenating the first and last names. So, in the setters of the first and last names we have a corresponding method call: public void setLastname(String value) {   setAttributeInternal(LASTNAME, value);  setFullname(getFirstname() + " " + getLastname()); } Let's have a look at the response content generated by the framework once the "Last Name" has been entered: In response to the modified last name the framework is going to partially refresh only two input components – the last name and the full name. The full name is going to be refreshed because its value has been changed during the request. The rest of the components on the form don't participate in the partial request. Let's consider a bit more complicated use case. We are going to show the value of the "Title" field as the label of the "Full Name" field on the form: <af:inputText label="#{bindings.Title.inputValue}"               value="#{bindings.Fullname.inputValue}"               required="#{bindings.Fullname.hints.mandatory}"               columns="#{bindings.Fullname.hints.displayWidth}"               maximumLength="#{bindings.Fullname.hints.precision}"               shortDesc="#{bindings.Fullname.hints.tooltip}" id="itFullName"> </af:inputText> So, the label of the "Full Name" should be updated every time we make a selection of the title. For sure, the "Title" field is auto-submitted. And let's have a look at the response content: Even though the value of the "Full Name" has not been changed during the request, the input component is going to be refreshed because its label property points to the value of a changed field. And again only these two fields are going to be refreshed during the partial request. That's it! Reference: Smart Auto-PPR Change Event Policy from our JCG partner Eugene Fedorenko at the ADF Practice blog....

Test your Dependencies with Degraph

I wrote before about (anti)patterns in package dependencies. And of course the regular reader of my blog knows about Degraph, my private project to provide a visualization for package dependencies which can help a lot when you try to identify and fix such antipatterns. But instead of fixing a problem we all probably prefer preventing the problem in the first place. Therefore in the latest version Degraph got a new feature: A DSL for testing Dependencies. You can write tests either in Scala or in Java, whatever fits better into your project. A typical test written with ScalaTest looks like this:   classpath // analyze everything found in the current classpath .including("de.schauderhaft.**") // only the classes that start with "de.schauderhaft." .withSlicing("module", "de.schauderhaft.(*).**") // use the third part of the package name as the module name, and make sure the modules don't have cycles .withSlicing("layer", ("persistence","de.schauderhaft.legacy.db.**"), // consider everything in the package de.schauderhaft.legacy.db and subpackages as part of the layer "persistence" "de.schauderhaft.*.(*).**") // for everything else use the fourth part of the package name as the name of the layer ) should be(violationFree) // check for violations (i.e. dependency circles)The equivalent test code in Java and JUnit looks like this: assertThat( classpath() // analyze everything found in the current classpath .including("de.schauderhaft.**") // only the classes that start with "de.schauderhaft." .withSlicing("module", "de.schauderhaft.(*).**") // use the third part of the package name as the module name, and make sure the modules don't have cycles .withSlicing("layer", new NamedPattern("persistence","de.schauderhaft.legacy.db.**"), // consider everything in the package de.schauderhaft.legacy.db and subpackages as part of the layer "persistence" "de.schauderhaft.*.(*).**") // for everything else use the fourth part of the package name as the name of the layer ), is(violationFree()) );You can also constrain the ways different slices depend on each other. For example: … .withSlicing("module", "de.schauderhaft.(*).**").allow(oneOf("order", "reporting"), "customer", "core") …Means:stuff in de.schauderhaft.order may depend on de.schauderhaft.customer and de.schauderhaft.core the same is true for de.schauderhaft.reporting de.schauderhaft.customer may depend on de.schauderhaft.core all other dependencies between those packages are disallowed packages from and to other packages are allowedIf you also want to allow dependencies between the order slice and the reporting slice replace oneOf with anyOf. If you want to disallow dependencies from reporting or order tocore you can replace allow with allowDirect. See the official documentation for more details, especially all the options the DSL offers, the imports needed and how to set up Degraph for testing. I’m trying to get Degraph into maven central to make usage inside projects easier. I also have some changes to the testing DSL on my to-do list. And finally I’m working on a HTML5 based front end. So stay tuned.Reference: Test your Dependencies with Degraph from our JCG partner Jens Schauder at the Schauderhaft blog....

Replica Set Members in Mongodb

In the previous articles we have discussed many aspects of replica sets in MongoDB, and in those articles we have said a lot about members. So, what are these members? What is their purpose? Let us discuss these things in this article.

What are members in MongoDB? In short, the members in MongoDB are the mongod processes that run as part of a replica set. In general there are two kinds of members:

Primary: As per mongodb.org, the primary member receives all write operations.
Secondary: Secondaries replicate operations from the primary to maintain an identical data set. Secondaries may have additional configurations for special usage profiles.

We can use a maximum of 12 members in a replica set, of which only 7 can vote. So now the question arises: why do members need to vote?

Selection of the primary member: Whenever a replica set is initiated or the primary member is unreachable (in simple terms, whenever there is no primary member present), an election is held to choose a new primary from among the secondary members. There are a few more member types than the two above; we will talk about them later.

Primary member: The primary member is the only member in the replica set that receives write operations. MongoDB applies write operations on the primary and then records the operations in the primary's oplog. All members of the replica set can accept read operations, but by default an application directs its read operations to the primary member. The replica set can have at most one primary. In the following three-member replica set, the primary accepts all write operations, and the secondaries replicate the oplog to apply the operations to their own data sets.

Secondary member: A secondary member maintains a copy of the primary's data set. To replicate data, a secondary applies operations from the primary's oplog to its own data set in an asynchronous process. A replica set can have one or more secondary members. Data can't be written to a secondary, but data can be read from secondary members. In the primary member's absence, a secondary member can become primary through an election. In the following three-member replica set, each secondary member copies the primary member.

Hidden members: Besides the two member types above, there are other kinds of members in a replica set. One of them is the hidden member. Hidden members cannot become primary and are invisible to client applications, but they do vote in elections. Hidden members are good for workloads with different usage patterns from the other members in the replica set. They must always be priority 0 members, which is why they cannot become primary. The most common use of hidden nodes is to support delayed members. To configure a secondary member as hidden, set its priority value to 0 and its hidden value to true in its member configuration. To do so, use the following sequence in a mongo shell connected to the primary, specifying the member to configure by its array index in the members array: c.members[0].priority = 0 c.members[0].hidden = true

Delayed member: Another member type is the delayed member. Delayed members also copy data from the primary's data set, but, as the name suggests, the copied data set lags behind the actual data by a configured delay. For example, if the current time is 09:00 and a member has a delay of an hour, the delayed member will have no operation more recent than 08:00.
Because delayed members are a "rolling backup" or a running "historical" snapshot of the data set, they may help you recover from various kinds of human error. Delayed members apply operations from the oplog on a delay. The delay must be equal to or greater than your maintenance window, and must be smaller than the capacity of the oplog; see the MongoDB documentation for more information on oplog size. To configure a delayed secondary member, set its priority value to 0, its hidden value to true, and its slaveDelay value to the number of seconds to delay: c.members[0].priority = 0 c.members[0].hidden = true c.members[0].slaveDelay = 1200

Non-voting members: There is a lot of talk about elections in replica sets. In an election, a few of the members participate and cast votes to determine the primary member. But there are also members that do not participate in voting; these are called non-voting members. Non-voting members allow you to add additional members for read distribution beyond the maximum of seven voting members. To configure a member as non-voting, set its votes value to 0: c.members[5].votes = 0 c.members[4].votes = 0

Priority in members: These are the basic member types we have to keep in mind for replica sets. Along the way we have used the priority setting several times, so priority is well worth discussing. The priority settings of replica set members affect the outcome of elections for primary: the value of a member's priority setting determines its priority in elections, and the higher the number, the higher the priority.

Configuring member priority: To modify priorities, we have to update the members array in the replica configuration object. The value of priority can be any floating point number between 0 and 1000; the default value for the priority field is 1. Adjust priority during a scheduled maintenance window, because reconfiguring priority can force the current primary to step down, leading to an election. To block a member from seeking election as primary, set that member's priority to 0. We can configure priority in 3 simple steps:

1. Copy the replica set configuration to a variable. In the mongo shell, use rs.conf() to retrieve the replica set configuration and assign it to a variable. For example: c = rs.conf()
2. Change each member's priority value, as configured in the members array: c.members[0].priority = 0.5 c.members[1].priority = 2 This sequence of operations modifies the value of c to set the priority for the first two members defined in the members array.
3. Assign the replica set the new configuration. Use rs.reconfig() to apply the new configuration: rs.reconfig(c)

In this article we have discussed members in a replica set. We can now say that members are the very basis of replica sets, and the better we get to know them, the more smoothly we will handle data in MongoDB. Reference: Replica Set Members in Mongodb from our JCG partner Biswadeep Ghosh at the Phlox Blog blog....

Hibernate hidden gem: the pooled-lo optimizer

Introduction

In this post we'll uncover a sequence identifier generator combining identifier assignment efficiency with interoperability with other external systems (concurrently accessing the underlying database system). Traditionally there have been two sequence identifier strategies to choose from:

- The sequence identifier, always hitting the database for every new value assignment. Even with database sequence preallocation we have a significant database round-trip cost.
- The seqhilo identifier, using the hi/lo algorithm. This generator calculates some identifier values in memory, therefore reducing the number of database round-trips. The problem with this optimization technique is that the current database sequence value no longer reflects the current highest in-memory generated value. The database sequence is used as a bucket number, making it difficult for other systems to interoperate with the database table in question. Other applications must know the inner workings of the hi/lo identifier strategy to properly generate non-clashing identifiers.

The enhanced identifiers

Hibernate offers a new class of identifier generators, addressing many shortcomings of the original ones. The enhanced identifier generators don't come with a fixed identifier allocation strategy: the optimization strategy is configurable and we can even supply our own optimization implementation. By default Hibernate comes with the following built-in optimizers:

- none: every identifier is fetched from the database, so it's equivalent to the original sequence generator.
- hi/lo: it uses the hi/lo algorithm and it's equivalent to the original seqhilo generator.
- pooled: this optimizer uses a hi/lo optimization strategy, but the current in-memory identifiers' highest boundary is extracted from an actual database sequence value.
- pooled-lo: it's similar to the pooled optimizer, but the database sequence value is used as the current in-memory lowest boundary.

In the official release announcement, the pooled optimizers are advertised as being interoperable with other external systems: "Even if other applications are also inserting values, we'll be perfectly safe because the SEQUENCE itself will handle applying this increment_size." This is actually what we are looking for: an identifier generator that's both efficient and doesn't clash when other external systems are concurrently inserting rows into the same database tables.

Testing time

The following test is going to check how the new optimizers get along with other external inserts into the same database table. In our case the external system will be some native JDBC insert statements on the same database table/sequence.
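The insertNewRow helper invoked from the test below is not shown in the post, so here is a minimal sketch of what such a native JDBC insert could look like; the HSQLDB call syntax and the sequence name hibernate_sequence are assumptions made purely for illustration:

// Hypothetical helper, illustration only: simulates an external application that
// takes its identifier straight from the database sequence and inserts via plain JDBC.
// Requires imports: java.sql.PreparedStatement, java.sql.ResultSet, java.sql.Statement, org.hibernate.Session
private void insertNewRow(Session session) {
    session.doWork(connection -> {
        try (Statement statement = connection.createStatement()) {
            // assumption: HSQLDB syntax and a sequence named hibernate_sequence
            ResultSet resultSet = statement.executeQuery("call next value for hibernate_sequence");
            resultSet.next();
            long id = resultSet.getLong(1);
            try (PreparedStatement insert = connection.prepareStatement(
                    "insert into sequenceIdentifier (id) values (?)")) {
                insert.setLong(1, id);
                insert.executeUpdate();
            }
        }
    });
}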
doInTransaction(new TransactionCallable<Void>() { @Override public Void execute(Session session) { for (int i = 0; i < 8; i++) { session.persist(newEntityInstance()); } session.flush(); assertEquals(8, ((Number) session.createSQLQuery("SELECT COUNT(*) FROM sequenceIdentifier").uniqueResult()).intValue()); insertNewRow(session); insertNewRow(session); insertNewRow(session); assertEquals(11, ((Number) session.createSQLQuery("SELECT COUNT(*) FROM sequenceIdentifier").uniqueResult()).intValue()); List<Number> ids = session.createSQLQuery("SELECT id FROM sequenceIdentifier").list(); for (Number id : ids) { LOGGER.debug("Found id: {}", id); } for (int i = 0; i < 3; i++) { session.persist(newEntityInstance()); } session.flush(); return null; } }); The pooled optimizer We’ll first use the pooled optimizer strategy: @Entity(name = "sequenceIdentifier") public static class PooledSequenceIdentifier {@Id @GenericGenerator(name = "sequenceGenerator", strategy = "enhanced-sequence", parameters = { @org.hibernate.annotations.Parameter(name = "optimizer", value = "pooled"), @org.hibernate.annotations.Parameter(name = "initial_value", value = "1"), @org.hibernate.annotations.Parameter(name = "increment_size", value = "5") } ) @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "sequenceGenerator") private Long id; } Running the test ends-up throwing the following exception: DEBUG [main]: n.t.d.l.SLF4JQueryLoggingListener - Name: Time:0 Num:1 Query:{[insert into sequenceIdentifier (id) values (?)][9]} DEBUG [main]: n.t.d.l.SLF4JQueryLoggingListener - Name: Time:0 Num:1 Query:{[insert into sequenceIdentifier (id) values (?)][10]} DEBUG [main]: n.t.d.l.SLF4JQueryLoggingListener - Name: Time:0 Num:1 Query:{[insert into sequenceIdentifier (id) values (?)][26]} WARN [main]: o.h.e.j.s.SqlExceptionHelper - SQL Error: -104, SQLState: 23505 ERROR [main]: o.h.e.j.s.SqlExceptionHelper - integrity constraint violation: unique constraint or index violation; SYS_PK_10104 table: SEQUENCEIDENTIFIER ERROR [main]: c.v.h.m.l.i.PooledSequenceIdentifierTest - Pooled optimizer threw org.hibernate.exception.ConstraintViolationException: could not execute statement at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:72) ~[hibernate-core-4.3.5.Final.jar:4.3.5.Final] Caused by: java.sql.SQLIntegrityConstraintViolationException: integrity constraint violation: unique constraint or index violation; SYS_PK_10104 table: SEQUENCEIDENTIFIER at org.hsqldb.jdbc.JDBCUtil.sqlException(Unknown Source) ~[hsqldb-2.3.2.jar:2.3.2] I am not sure if this is a bug or just a design limitation, but the pooled optimizer doesn’t meet the interoperability requirement. To visualize what happens I summarized the sequence calls in the following diagram:When the pooled optimizer retrieves the current sequence value, it uses it to calculate the lowest in-memory boundary. The lowest value is the actual previous sequence value and this value might have been already used by some other external INSERT statement. The pooled-lo optimizer Fortunately, there is one more optimizer(not mentioned in the reference documentation) to be tested. 
The pooled-lo optimizer uses the current database sequence value as the lowest in-memory boundary, so other systems may freely use the next sequence values without risking identifier clashing: @Entity(name = "sequenceIdentifier") public static class PooledLoSequenceIdentifier {@Id @GenericGenerator(name = "sequenceGenerator", strategy = "enhanced-sequence", parameters = { @org.hibernate.annotations.Parameter(name = "optimizer", value = "pooled-lo" ), @org.hibernate.annotations.Parameter(name = "initial_value", value = "1"), @org.hibernate.annotations.Parameter(name = "increment_size", value = "5") } ) @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "sequenceGenerator") private Long id; } To better understand the inner-workings of this optimizer, the following diagram summarizes the identifier assignment process:Conclusion A hidden gem is one of those great features that most don’t even know of its existence. The pooled-lo optimizer is extremely useful, yet most people don’t even know of its existence.Code available on GitHub.Reference: Hibernate hidden gem: the pooled-lo optimizer from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....

A JUnit Rule to Run a Test in Its Own Thread

Occasionally it would be helpful to be able to run a JUnit test in a separate thread. In particular when writing integration tests that interact with encapsulated ThreadLocals or the like this could come in handy. A separate thread would implicitly ensure that the thread related reference of the threadlocal is uninitialized for each test run. This post introduces a JUnit Rule that provides such a functionality and explains how to use it. To begin with take a look at the following example. It depicts a test case that produces intermittent failures of testB. The reason for this is that the outcome depends on the execution order of all tests due to side effects1. More precisely Display.getDefault() in principle returns a lazily instantiated singleton, while Display.getCurrent() is a simple accessor of this singleton. As a consequence testB fails if it runs after testA2. public class FooTest {@Test public void testA() { Display actual = Display.getDefault();assertThat( actual ).isNotNull(); }@Test public void testB() { Display actual = Display.getCurrent();assertThat( actual ).isNull(); } } To avoid some behind-the-scene-magic, which bears the risk to make code less understandable, we could ensure that an existing display is disposed before the actual test execution takes place3. @Before public void setUp() { if( Display.getCurrent() != null ) { Display.getCurrent().dispose(); } } Unfortunately this approach cannot be used within an integration test suite that runs PDE tests for example. The PDE runtime creates a single Display instance, whose lifetime spans all test runs. So display disposal would not be an option and testB would fail within PDE test suite execution all the time4. At this point it is important to remember that the Display singleton is bound to its creation thread (quasi ThreadLocal)5. Because of this testB should run reliable, if executed in its own thread. However thread handling is usually somewhat cumbersome at best and creates a lot of clutter, reducing the readability of test methods. This gave me the idea to create a TestRule implementation that encapsulates the thread handling and keeps the test code clean: public class FooTest {@Rule public RunInThreadRule runInThread = new RunInThreadRule();@Test public void testA() { Display actual = Display.getDefault();assertThat( actual ).isNotNull(); }@Test @RunInThread public void testB() { Display actual = Display.getCurrent();assertThat( actual ).isNull(); } } The RunInThreadRule class allows to run a single test method in its own thread. It takes care of the demon thread creation, test execution, awaiting of thread termination and forwarding of the test outcome to the main thread. To mark a test to be run in a separate thread the test method has to be annotated with @RunInThread as shown above. With this in place testB is now independent from the execution order of the tests and succeeds reliable. But one should be careful not to overuse RunInThreadRule. Although the @RunInThread annotation signals that a test runs in a separate thread it does not explain why. This may easily obfuscate the real scope of such a test. Hence I use this usually only as a last resort solution. E.g. it may be reasonable in case when a third party library relies on an encapsulated ThreadLocal that cannot be cleared or reset by API functionality. 
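For illustration only, a minimal sketch of how such a rule could be implemented might look like the following; the author's actual implementation is the one linked in the gist below:

// Sketch, not the original implementation: a marker annotation plus a TestRule that
// runs annotated test methods in a fresh daemon thread and rethrows any failure.
// Requires imports: java.lang.annotation.*, org.junit.rules.TestRule,
// org.junit.runner.Description, org.junit.runners.model.Statement
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface RunInThread {}

public class RunInThreadRule implements TestRule {

  @Override
  public Statement apply(final Statement base, Description description) {
    // only tests annotated with @RunInThread get their own thread
    if (description.getAnnotation(RunInThread.class) == null) {
      return base;
    }
    return new Statement() {
      @Override
      public void evaluate() throws Throwable {
        final Throwable[] failure = new Throwable[1];
        Thread thread = new Thread(new Runnable() {
          @Override
          public void run() {
            try {
              base.evaluate();
            } catch (Throwable problem) {
              failure[0] = problem;
            }
          }
        });
        thread.setDaemon(true);
        thread.start();
        thread.join();
        if (failure[0] != null) {
          // forward the test outcome to the main thread
          throw failure[0];
        }
      }
    };
  }
}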
For those who like to check out the RunInThreadRule implementation I have created a GitHub gist: https://gist.github.com/fappel/65982e5ea7a6b2fde5a3 For a real-world usage you might also have a look at the PgmResourceBundlePDETest implementation of our Gonsole project hosted at: https://github.com/rherrmann/gonsole.

1. Note that JUnit sorts the test methods in a deterministic, but not predictable, order by default.
2. Consider also the possibility that testA might be in another test case and the problem only occurs when running a large suite.
3. Then again I don't like this kind of practice either, so for a more sophisticated solution you may have a look at the post A JUnit Rule to Ease SWT Test Setup.
4. In the meanwhile you have probably recognized that the simplistic example test case is not very useful, but I hope it is sufficient to get the motivation explained.
5. This makes such a thread the user interface thread in SWT. SWT implements a single-threaded UI model often called apartment threading.

Reference: A JUnit Rule to Run a Test in Its Own Thread from our JCG partner Frank Appel at the Code Affine blog....

Drools Executable Model

The Executable Model is a re-design of the Drools lowest level model handled by the engine. In the current series (up to 6.x) the executable model has grown organically over the last 8 years, and was never really intended to be targeted by end users. Those wishing to programmatically write rules were advised to do it via code generation and target drl; which was no ideal. There was never any drive to make this more accessible to end users, because extensive use of anonymous classes in Java was unwieldy. With Java 8 and Lambda’s this changes, and the opportunity to make a more compelling model that is accessible to end users becomes possible. This new model is generated during the compilation process of higher level languages, but can also be used on its own. The goal is for this Executable Model to be self contained and avoid the need for any further byte code munging (analysis, transformation or generation); From this model’s perspective, everything is provided either by the code or by higher level language layers. For example indexes etc must be provided by arguments, which the higher level language generates through analysis, when it targets the Executable model. It is designed to map well to a Fluent level builders, leveraging Java 8′s lambdas. This will make it more appealing to java developers, and language developers. Also this will allow low level engine feature design and testing, independent of any language. Which means we can innovate at an engine level, without having to worry about the language layer. The Executable Model should be generic enough to map into multiple domains. It will be a low level dataflow model in which you can address functional reactive programming models, but still usable to build a rule based system out of it too. The following example provides a first view of the fluent DSL used to build the executable model: DataSource persons = sourceOf(new Person("Mark", 37), new Person("Edson", 35), new Person("Mario", 40)); Variable<Person> markV = bind(typeOf(Person.class));Rule rule = rule("Print age of persons named Mark") .view( input(markV, () -> persons), expr(markV, person -> person.getName().equals("Mark")) ) .then( on(markV).execute(mark -> System.out.println(mark.getAge()) ) ); The previous code defines a DataSource containing a few person instances and declares the Variable markV of type Person. The rule itself contains the usual two parts: the LHS is defined by the set of inputs and expressions passed to the view() method, while the RHS is the action defined by the lambda expression passed to the then() method. Analyzing the LHS in more detail, the statement: input(markV, () -> persons) binds the objects from the persons DataSource to the markV variable, pattern matching by the object class. In this sense the DataSource can be thought as the equivalent of a Drools entry-point. Conversely the expression: expr(markV, person -> person.getName().equals("Mark")) uses a Predicate to define a condition that the object bound to the markV Variable has to satisfy in order to be successfully matched by the engine. Note that, as anticipated, the evaluation of the pattern matching is not performed by a constraint generated as a result of any sort of analysis or compilation process, but it’s merely executed by applying the lambda expression implementing the predicate ( in this case, person -> person.getName().equals(“Mark”) ) to the object to be matched. 
In other terms the former DSL produces the executable model of a rule that is equivalent to the one resulting from the parsing of the following drl. rule "Print age of persons named Mark" when markV : Person( name == "Mark" ) from entry-point "persons" then System.out.println(markV.getAge()); end It is also under development a rete builder that can be fed with the rules defined with this DSL. In particular it is possible to add these rules to a CanonicalKieBase and then to create KieSessions from it as for any other normal KieBase. CanonicalKieBase kieBase = new CanonicalKieBase(); kieBase.addRules(rule);KieSession ksession = kieBase.newKieSession(); ksession.fireAllRules(); Of course the DSL also allows to define more complex conditions like joins: Variable<Person> markV = bind(typeOf(Person.class)); Variable<Person> olderV = bind(typeOf(Person.class));Rule rule = rule("Find persons older than Mark") .view( input(markV, () -> persons), input(olderV, () -> persons), expr(markV, mark -> mark.getName().equals("Mark")), expr(olderV, markV, (older, mark) -> older.getAge() > mark.getAge()) ) .then( on(olderV, markV) .execute((p1, p2) -> System.out.println(p1.getName() + " is older than " + p2.getName()) ) ); or existential patterns: Variable<Person> oldestV = bind(typeOf(Person.class)); Variable<Person> otherV = bind(typeOf(Person.class));Rule rule = rule("Find oldest person") .view( input(oldestV, () -> persons), input(otherV, () -> persons), not(otherV, oldestV, (p1, p2) -> p1.getAge() > p2.getAge()) ) .then( on(oldestV) .execute(p -> System.out.println("Oldest person is " + p.getName()) ) ); Here the not() stands for the negation of any expression, so the form used above is actually only a shortcut for: not( expr( otherV, oldestV, (p1, p2) -> p1.getAge() > p2.getAge() ) ) Also accumulate is already supported in the following form: Variable<Person> person = bind(typeOf(Person.class)); Variable<Integer> resultSum = bind(typeOf(Integer.class)); Variable<Double> resultAvg = bind(typeOf(Double.class));Rule rule = rule("Calculate sum and avg of all persons having a name starting with M") .view( input(person, () -> persons), accumulate(expr(person, p -> p.getName().startsWith("M")), sum(Person::getAge).as(resultSum), avg(Person::getAge).as(resultAvg)) ) .then( on(resultSum, resultAvg) .execute((sum, avg) -> result.value = "total = " + sum + "; average = " + avg) ); To provide one last more complete use case, the executable model of the classical fire and alarm example can be defined with this DSL as it follows. 
Variable<Room> room = any(Room.class); Variable<Fire> fire = any(Fire.class); Variable<Sprinkler> sprinkler = any(Sprinkler.class); Variable<Alarm> alarm = any(Alarm.class);Rule r1 = rule("When there is a fire turn on the sprinkler") .view( input(fire), input(sprinkler), expr(sprinkler, s -> !s.isOn()), expr(sprinkler, fire, (s, f) -> s.getRoom().equals(f.getRoom())) ) .then( on(sprinkler) .execute(s -> { System.out.println("Turn on the sprinkler for room " + s.getRoom().getName()); s.setOn(true); }) .update(sprinkler, "on") );Rule r2 = rule("When the fire is gone turn off the sprinkler") .view( input(sprinkler), expr(sprinkler, Sprinkler::isOn), input(fire), not(fire, sprinkler, (f, s) -> f.getRoom().equals(s.getRoom())) ) .then( on(sprinkler) .execute(s -> { System.out.println("Turn off the sprinkler for room " + s.getRoom().getName()); s.setOn(false); }) .update(sprinkler, "on") );Rule r3 = rule("Raise the alarm when we have one or more fires") .view( input(fire), exists(fire) ) .then( execute(() -> System.out.println("Raise the alarm")) .insert(() -> new Alarm()) );Rule r4 = rule("Lower the alarm when all the fires have gone") .view( input(fire), not(fire), input(alarm) ) .then( execute(() -> System.out.println("Lower the alarm")) .delete(alarm) );Rule r5 = rule("Status output when things are ok") .view( input(alarm), not(alarm), input(sprinkler), not(sprinkler, Sprinkler::isOn) ) .then( execute(() -> System.out.println("Everything is ok")) );CanonicalKieBase kieBase = new CanonicalKieBase(); kieBase.addRules(r1, r2, r3, r4, r5);KieSession ksession = kieBase.newKieSession();// phase 1 Room room1 = new Room("Room 1"); ksession.insert(room1); FactHandle fireFact1 = ksession.insert(new Fire(room1)); ksession.fireAllRules();// phase 2 Sprinkler sprinkler1 = new Sprinkler(room1); ksession.insert(sprinkler1); ksession.fireAllRules();assertTrue(sprinkler1.isOn());// phase 3 ksession.delete(fireFact1); ksession.fireAllRules(); In this example it’s possible to note a few more things:Some repetitions are necessary to bind the parameters of an expression to the formal parameters of the lambda expression evaluating it. Hopefully it will be possible to overcome this issue using the -parameters compilation argument when this JDK bug will be resolved. any(Room.class) is a shortcut for bind(typeOf(Room.class)) The inputs don’t declare a DataSource. This is a shortcut to state that those objects come from a default empty DataSource (corresponding to the Drools default entry-point). In fact in this example the facts are programmatically inserted into the KieSession. Using an input without providing any expression for that input is actually a shortcut for input(alarm), expr(alarm, a -> true) In the same way an existential pattern without any condition like not(fire) is another shortcut for not( expr( fire, f -> true ) ) Java 8 syntax also allows to define a predicate as a method reference accessing a boolean property of a fact like in expr(sprinkler, Sprinkler::isOn) The RHS, together with the block of code to be executed, also provides a fluent interface to define the working memory actions (inserts/updates/deletes) that have to be performed when the rule is fired. In particular the update also gets a varargs of Strings reporting the name of the properties changed in the updated fact like in update(sprinkler, “on”). 
Once again this information has to be explicitly provided because the executable model has to be created without the need of any code analysis. Reference: Drools Executable Model from our JCG partner Mario Fusco at the Drools & jBPM blog....

Template Method Pattern Example Using Java Generics

If you find that a lot of your routines are exactly the same except for certain sections, you might want to consider the Template Method to eliminate error-prone code duplication. Here’s an example: Below are two classes that do similar things:                Instantiate and initialize a Reader to read from a CSV file. Read each line and break it up into tokens. Unmarshal the tokens from each line into an entity, either a Product or a Customer. Add each entity into a Set. Return the Set.As you can see, it’s only in the third step that there’s a difference – unmarshalling to one entity or another. All other steps are the same. I’ve highlighted the line where the code is different in each of the snippets. ProductCsvReader.java public class ProductCsvReader { Set<Product> getAll(File file) throws IOException { Set<Product> returnSet = new HashSet<>(); try (BufferedReader reader = new BufferedReader(new FileReader(file))){ String line = reader.readLine(); while (line != null && !line.trim().equals("")) { String[] tokens = line.split("\\s*,\\s*"); Product product = new Product(Integer.parseInt(tokens[0]), tokens[1], new BigDecimal(tokens[2])); returnSet.add(product); line = reader.readLine(); } } return returnSet; } } CustomerCsvReader.java public class CustomerCsvReader { Set<Customer> getAll(File file) throws IOException { Set<Customer> returnSet = new HashSet<>(); try (BufferedReader reader = new BufferedReader(new FileReader(file))){ String line = reader.readLine(); while (line != null && !line.trim().equals("")) { String[] tokens = line.split("\\s*,\\s*"); Customer customer = new Customer(Integer.parseInt(tokens[0]), tokens[1], tokens[2], tokens[3]); returnSet.add(customer); line = reader.readLine(); } } return returnSet; } } For this example, there are only two entities, but a real system might have dozens of entities, so that’s a lot of error-prone duplicate code. You might find a similar situation with DAOs, where the select, insert, update, and delete operations of each DAO would do the same thing, only work with different entities and tables. Let’s start refactoring this troublesome code. According to one of the design principles found in the first part of the GoF Design Patterns book, we should “Encapsulate the concept that varies.” Between ProductCsvReader and CustomerCsvReader, what varies is the highlighted code. So our goal is to encapsulate what varies into separate classes, while moving what stays the same into a single class. Let’s start editing just one class first, ProductCsvReader. We use Extract Method to extract the line into its own method: ProductCsvReader.java after Extract Method public class ProductCsvReader { Set<Product> getAll(File file) throws IOException { Set<Product> returnSet = new HashSet<>(); try (BufferedReader reader = new BufferedReader(new FileReader(file))){ String line = reader.readLine(); while (line != null && !line.trim().equals("")) { String[] tokens = line.split("\\s*,\\s*"); Product product = unmarshall(tokens); returnSet.add(product); line = reader.readLine(); } } return returnSet; }Product unmarshall(String[] tokens) { Product product = new Product(Integer.parseInt(tokens[0]), tokens[1], new BigDecimal(tokens[2])); return product; } } Now that we have separated what varies with what stays the same, we will create a parent class that will hold the code that stays the same for both classes. Let’s call this parent class AbstractCsvReader. Let’s make it abstract since there’s no reason for the class to be instantiated on its own. 
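As a starting point the new parent class can simply be an empty abstract class; this intermediate step is only implied by the post, so the snippet below is merely an illustration:

// Illustration of the intermediate step: the empty parent class, before anything is pulled up.
abstract class AbstractCsvReader {
    // the shared getAll(...) logic will be moved here in the next step
}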
We’ll then use the Pull Up Method refactoring to move the method that stays the same to this parent class. AbstractCsvReader.java abstract class AbstractCsvReader {Set<Product> getAll(File file) throws IOException { Set<Product> returnSet = new HashSet<>(); try (BufferedReader reader = new BufferedReader(new FileReader(file))){ String line = reader.readLine(); while (line != null && !line.trim().equals("")) { String[] tokens = line.split("\\s*,\\s*"); Product product = unmarshall(tokens); returnSet.add(product); line = reader.readLine(); } } return returnSet; } } ProductCsvReader.java after Pull Up Method public class ProductCsvReader extends AbstractCsvReader {Product unmarshall(String[] tokens) { Product product = new Product(Integer.parseInt(tokens[0]), tokens[1], new BigDecimal(tokens[2])); return product; } } This class won’t compile since it calls an “unmarshall” method that’s found in the subclass, so we need to create an abstract method called unmarshall. AbstractCsvReader.java with abstract unmarshall method abstract class AbstractCsvReader {Set<Product> getAll(File file) throws IOException { Set<Product> returnSet = new HashSet<>(); try (BufferedReader reader = new BufferedReader(new FileReader(file))){ String line = reader.readLine(); while (line != null && !line.trim().equals("")) { String[] tokens = line.split("\\s*,\\s*"); Product product = unmarshall(tokens); returnSet.add(product); line = reader.readLine(); } } return returnSet; }abstract Product unmarshall(String[] tokens); } Now at this point, AbstractCsvReader will make a great parent for ProductCsvReader, but not for CustomerCsvReader. CustomerCsvReader will not compile if you extend it from AbstractCsvReader. To fix this, we use Generics. AbstractCsvReader.java with Generics abstract class AbstractCsvReader<T> {Set<T> getAll(File file) throws IOException { Set<T> returnSet = new HashSet<>(); try (BufferedReader reader = new BufferedReader(new FileReader(file))){ String line = reader.readLine(); while (line != null && !line.trim().equals("")) { String[] tokens = line.split("\\s*,\\s*"); T element = unmarshall(tokens); returnSet.add(product); line = reader.readLine(); } } return returnSet; }abstract T unmarshall(String[] tokens); } ProductCsvReader.java with Generics public class ProductCsvReader extends AbstractCsvReader<Product> {@Override Product unmarshall(String[] tokens) { Product product = new Product(Integer.parseInt(tokens[0]), tokens[1], new BigDecimal(tokens[2])); return product; } } CustomerCsvReader.java with Generics public class CustomerCsvReader extends AbstractCsvReader<Customer> {@Override Customer unmarshall(String[] tokens) { Customer customer = new Customer(Integer.parseInt(tokens[0]), tokens[1], tokens[2], tokens[3]); return customer; } } And that’s it! No more duplicate code! The method in the parent class is the “template”, which holds the code that stays the same. The things that change are left as abstract methods, which are implemented in the child classes. Remember that when you refactor, you should always have automated Unit Tests to make sure you don’t break your code. I used JUnit for mine. You can find the code I’ve posted here, as well as a few other Design Patterns examples, at this Github repository. Before I go, I’d like to leave a quick note on the disadvantage of the Template Method. The Template Method relies on inheritance, which suffers from the the Fragile Base Class Problem. 
In a nutshell, the Fragile Base Class Problem describes how changes in base classes get inherited by subclasses, often causing undesired effects. In fact, one of the underlying design principles found at the beginning of the GoF book is "favor composition over inheritance", and many of the other design patterns show how to avoid code duplication, complexity or other error-prone code with less dependence on inheritance. Please give me feedback so I can continue to improve my articles. Reference: Template Method Pattern Example Using Java Generics from our JCG partner Calen Legaspi at the Calen Legaspi blog....

Camel on JBoss EAP with Custom Modules

Apache Camel — the best open source integration library

Apache Camel is an awesome, open-source integration library that can be used as the backbone of an ESB, or in standalone applications to do routing, transformation, or mediation of systems (read: integrating multiple systems). Camel is quite versatile and does not force users to deploy into any particular container or JVM technology. Deploy into OSGi for flexible modularity, deploy into Java EE when you use the Java EE stack, or deploy into Plain Jane Java Main if you're doing lightweight microservices style deployments.

Running Camel on EAP

I've had a few people ask questions recently about running Camel on JBoss Enterprise Application Platform, and I can usually say "well look at this awesome blog someone did about doing just that." However, for some of the folks at large companies that prefer to curate their usage of third-party libraries and prefer to put them into a globally accessible classpath, packaging the Camel libs into their WAR/EAR is not an option. Here are some reasons why you might want to package Camel on EAP as a global library:

- golden image, curated list
- reduce bloated WAR deployments
- can patch/update libs at a single source location
- assure all applications are using the approved versions

Why you might NOT want to do this:

- Java EE containers are intended to be multi-tenant
- not flexible in deployment options/versions
- possible classpath issues/collisions depending on the third party library and transitive dependencies
- complicates the management of the Java EE container

EAP Modules

Regardless of the pro/con approaches, what's the best way to go about getting Camel packaged as a module on JBoss EAP so that you can use it from the global classpath? The answer is to use JBoss EAP's native modular system called, fittingly, "Modules." We can create custom modules for EAP and enable them for our skinny WARs.

Step by Step

For this blog, I'll use the previously created Camel example deployed as a simple WAR project. However, instead of including all of the Camel jars as <scope>compile</scope> we will change the scope to provided: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-core</artifactId> <version>${camel.version}</version> <scope>provided</scope> </dependency> Just as a refresher, the Maven scope options help you finely control how your dependencies are packaged and presented to the classpath:

- compile — default scope, used for compiling the project; the dependency is packaged onto the classpath as part of the package phase
- provided — the dependency is required at compile time, but is NOT packaged in the artifact produced by the build in the package phase
- runtime — the dependency must be on the classpath when the application runs, but is not required for compilation and is also not packaged

There are a couple others, but you may wish to check the docs to get a complete understanding.
So now that we've changed the scope to provided, if we do a build, we should be able to inspect our WAR and verify there are no Camel jars.

Build the project from $SOURCE_ROOT:

ceposta@postamachat$ mvn clean install
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 3.324s
[INFO] Finished at: Wed Jul 16 14:16:53 MST 2014
[INFO] Final Memory: 29M/310M
[INFO] ------------------------------------------------------------------------

List the contents of the WAR:

ceposta@postamachat$ unzip -l target/camel-cxf-contract-first-1.0.0-SNAPSHOT.war
Archive:  target/camel-cxf-contract-first-1.0.0-SNAPSHOT.war
  Length     Date   Time    Name
 --------    ----   ----    ----
        0  07-16-14 14:15   META-INF/
      132  07-16-14 14:15   META-INF/MANIFEST.MF
        0  07-16-14 14:15   WEB-INF/
        0  07-16-14 14:15   WEB-INF/classes/
        0  07-16-14 14:15   WEB-INF/classes/camelinaction/
        0  07-16-14 14:15   WEB-INF/classes/camelinaction/order/
        0  07-16-14 14:15   WEB-INF/classes/META-INF/
        0  07-16-14 14:15   WEB-INF/classes/META-INF/spring/
        0  07-16-14 14:15   WEB-INF/classes/wsdl/
     1927  07-16-14 14:15   WEB-INF/classes/camelinaction/order/ObjectFactory.class
      992  07-16-14 14:15   WEB-INF/classes/camelinaction/order/OrderEndpoint.class
     1723  07-16-14 14:15   WEB-INF/classes/camelinaction/order/OrderEndpointImpl.class
     2912  07-16-14 14:15   WEB-INF/classes/camelinaction/order/OrderEndpointService.class
      604  07-16-14 14:15   WEB-INF/classes/log4j.properties
     1482  07-16-14 14:15   WEB-INF/classes/META-INF/spring/camel-cxf.xml
     1935  07-16-14 14:15   WEB-INF/classes/META-INF/spring/camel-route.xml
     3003  07-16-14 14:15   WEB-INF/classes/wsdl/order.wsdl
     1193  05-23-14 04:22   WEB-INF/web.xml
        0  07-16-14 14:15   META-INF/maven/
        0  07-16-14 14:15   META-INF/maven/com.redhat.demos/
        0  07-16-14 14:15   META-INF/maven/com.redhat.demos/camel-cxf-contract-first/
     8070  07-16-14 14:03   META-INF/maven/com.redhat.demos/camel-cxf-contract-first/pom.xml
      134  07-16-14 14:15   META-INF/maven/com.redhat.demos/camel-cxf-contract-first/pom.properties
 --------                   -------
    24107                   23 files

If we try to deploy this project to EAP, we would surely run into classpath issues, because Camel is not included on EAP's classpath by default. So let's build the modules ourselves.

First, get access to EAP by downloading it from the Red Hat support portal. (Note: these steps may work on WildFly, but I'm using EAP for this discussion.)

NOTE: I will use JBoss EAP 6.2 for this example, as well as the Red Hat distribution of Apache Camel, which comes from JBoss Fuse 6.1.

For each of the dependencies in your pom that you'd like to create a custom module for, you'll have to repeat these steps (note: these steps are formalized in the EAP knowledge base on the Red Hat support portal).

Create a folder under $EAP_HOME/modules to store your new module:

ceposta@postamachat(jboss-eap-6.2) $ cd modules
ceposta@postamachat(modules) $ mkdir -p org/apache/camel/core

Create a folder named main under the module folder, as this is where we'll place the jars for the module:

ceposta@postamachat(modules) $ mkdir org/apache/camel/core/main

Now we'll need to find out which dependencies/jars need to go into this module. If you use Maven's Dependency Plugin, this should help out tremendously.

NOTE: these steps are a one-time effort; however, it's probably worth a little bit of time to automate them with a perl/python/bash script.
For this demo I didn't create a script, but if you do, I'd appreciate you sharing it with everyone: either let me know on Twitter @christianposta or send a pull request on the GitHub project associated with this blog. Thanks!

Show the dependencies for the project and each artifact:

ceposta@postamachat$ mvn dependency:tree
[INFO] ------------------------------------------------------------------------
[INFO] Building [TODO]Camel CXF Contract First Example 1.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ camel-cxf-contract-first ---
[INFO] com.redhat.demos:camel-cxf-contract-first:war:1.0.0-SNAPSHOT
[INFO] +- org.apache.camel:camel-core:jar:2.12.0.redhat-610379:provided
[INFO] | \- com.sun.xml.bind:jaxb-impl:jar:2.2.6:provided
[INFO] +- org.apache.camel:camel-cxf:jar:2.12.0.redhat-610379:provided
[INFO] | +- org.apache.camel:camel-spring:jar:2.12.0.redhat-610379:provided
[INFO] | | \- org.springframework:spring-tx:jar:3.2.8.RELEASE:provided
[INFO] | +- org.apache.camel:camel-cxf-transport:jar:2.12.0.redhat-610379:provided
[INFO] | +- org.apache.cxf:cxf-rt-frontend-jaxrs:jar:2.7.0.redhat-610379:provided
[INFO] | | +- javax.ws.rs:javax.ws.rs-api:jar:2.0-m10:provided
[INFO] | | \- org.apache.cxf:cxf-rt-bindings-xml:jar:2.7.0.redhat-610379:provided
[INFO] | +- org.apache.cxf:cxf-rt-frontend-jaxws:jar:2.7.0.redhat-610379:provided
[INFO] | | +- xml-resolver:xml-resolver:jar:1.2:provided
[INFO] | | +- asm:asm:jar:3.3.1:provided
[INFO] | | +- org.apache.cxf:cxf-rt-frontend-simple:jar:2.7.0.redhat-610379:provided
[INFO] | | \- org.apache.cxf:cxf-rt-ws-addr:jar:2.7.0.redhat-610379:provided
[INFO] | |    \- org.apache.cxf:cxf-rt-ws-policy:jar:2.7.0.redhat-610379:provided
[INFO] | |       \- org.apache.neethi:neethi:jar:3.0.3:provided
[INFO] | +- org.springframework:spring-core:jar:3.2.8.RELEASE:provided
[INFO] | | \- commons-logging:commons-logging:jar:1.1.3:provided
[INFO] | +- org.springframework:spring-beans:jar:3.2.8.RELEASE:provided
[INFO] | +- org.springframework:spring-context:jar:3.2.8.RELEASE:provided
[INFO] | | \- org.springframework:spring-expression:jar:3.2.8.RELEASE:provided
[INFO] | +- org.apache.cxf:cxf-rt-features-clustering:jar:2.7.0.redhat-610379:provided
[INFO] | \- org.apache.cxf:cxf-rt-bindings-soap:jar:2.7.0.redhat-610379:provided
[INFO] |    \- org.apache.cxf:cxf-rt-databinding-jaxb:jar:2.7.0.redhat-610379:provided
[INFO] +- log4j:log4j:jar:1.2.16:provided
[INFO] +- org.slf4j:slf4j-api:jar:1.6.6:provided
[INFO] +- org.slf4j:slf4j-log4j12:jar:1.6.6:provided
[INFO] +- org.apache.cxf:cxf-rt-transports-http-jetty:jar:2.7.0.redhat-610379:provided
[INFO] | +- org.apache.cxf:cxf-api:jar:2.7.0.redhat-610379:provided
[INFO] | | +- org.codehaus.woodstox:woodstox-core-asl:jar:4.2.0:provided
[INFO] | | | \- org.codehaus.woodstox:stax2-api:jar:3.1.1:provided
[INFO] | | +- org.apache.ws.xmlschema:xmlschema-core:jar:2.1.0:provided
[INFO] | | +- org.apache.geronimo.specs:geronimo-javamail_1.4_spec:jar:1.7.1:provided
[INFO] | | +- wsdl4j:wsdl4j:jar:1.6.3:provided
[INFO] | | \- org.osgi:org.osgi.compendium:jar:4.2.0:provided
[INFO] | +- org.apache.cxf:cxf-rt-transports-http:jar:2.7.0.redhat-610379:provided
[INFO] | +- org.apache.cxf:cxf-rt-core:jar:2.7.0.redhat-610379:provided
[INFO] | +- org.eclipse.jetty:jetty-server:jar:8.1.14.v20131031:provided
[INFO] | | +- org.eclipse.jetty:jetty-continuation:jar:8.1.14.v20131031:provided
[INFO] | | \- org.eclipse.jetty:jetty-http:jar:8.1.14.v20131031:provided
[INFO] | |    \- org.eclipse.jetty:jetty-io:jar:8.1.14.v20131031:provided
[INFO] | |       \- org.eclipse.jetty:jetty-util:jar:8.1.14.v20131031:provided
[INFO] | +- org.eclipse.jetty:jetty-security:jar:8.1.14.v20131031:provided
[INFO] | \- org.apache.geronimo.specs:geronimo-servlet_3.0_spec:jar:1.0:provided
[INFO] +- org.apache.camel:camel-test-spring:jar:2.12.0.redhat-610379:provided
[INFO] | +- org.apache.camel:camel-test:jar:2.12.0.redhat-610379:provided
[INFO] | \- org.springframework:spring-test:jar:3.2.8.RELEASE:provided
[INFO] +- junit:junit:jar:4.11:test
[INFO] | \- org.hamcrest:hamcrest-core:jar:1.3:test
[INFO] \- org.springframework:spring-web:jar:3.2.5.RELEASE:provided
[INFO]    +- aopalliance:aopalliance:jar:1.0:provided
[INFO]    \- org.springframework:spring-aop:jar:3.2.5.RELEASE:provided
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.450s
[INFO] Finished at: Wed Jul 16 15:03:08 MST 2014
[INFO] Final Memory: 17M/310M
[INFO] ------------------------------------------------------------------------

This gives you the complete list of dependencies for your project, including the top-level and transitive dependencies. Now you know which jars should go into each module. The next step is to download all of these jars to make it easy to copy them to the module folder.

Copy all project dependencies to target/dependency:

ceposta@postamachat$ mvn dependency:copy-dependencies
ceposta@postamachat$ ls -l target/dependency
total 32072
-rw-r--r-- 1 ceposta staff    4467 Jul 16 14:50 aopalliance-1.0.jar
-rw-r--r-- 1 ceposta staff   43581 Jul 16 14:50 asm-3.3.1.jar
-rw-r--r-- 1 ceposta staff 2592519 Jul 16 14:50 camel-core-2.12.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff  207482 Jul 16 14:43 camel-cxf-2.12.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff   64726 Jul 16 14:50 camel-cxf-transport-2.12.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff  244731 Jul 16 14:50 camel-spring-2.12.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff   43947 Jul 16 14:50 camel-test-2.12.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff   71455 Jul 16 14:50 camel-test-spring-2.12.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff   62050 Jul 16 14:50 commons-logging-1.1.3.jar
-rw-r--r-- 1 ceposta staff 1115924 Jul 16 14:50 cxf-api-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff  204287 Jul 16 14:50 cxf-rt-bindings-soap-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff   38847 Jul 16 14:50 cxf-rt-bindings-xml-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff  408403 Jul 16 14:50 cxf-rt-core-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff  129306 Jul 16 14:50 cxf-rt-databinding-jaxb-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff   34276 Jul 16 14:50 cxf-rt-features-clustering-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff  654099 Jul 16 14:50 cxf-rt-frontend-jaxrs-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff  388669 Jul 16 14:50 cxf-rt-frontend-jaxws-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff   67426 Jul 16 14:50 cxf-rt-frontend-simple-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff  260274 Jul 16 14:50 cxf-rt-transports-http-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff   97071 Jul 16 14:50 cxf-rt-transports-http-jetty-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff   80014 Jul 16 14:50 cxf-rt-ws-addr-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff  207480 Jul 16 14:50 cxf-rt-ws-policy-2.7.0.redhat-610379.jar
-rw-r--r-- 1 ceposta staff  223298 Jul 16 14:50 geronimo-javamail_1.4_spec-1.7.1.jar
-rw-r--r-- 1 ceposta staff   96323 Jul 16 14:50 geronimo-servlet_3.0_spec-1.0.jar
-rw-r--r-- 1 ceposta staff   45024 Jul 16 14:50 hamcrest-core-1.3.jar
-rw-r--r-- 1 ceposta staff  110928 Jul 16 14:50 javax.ws.rs-api-2.0-m10.jar
-rw-r--r-- 1 ceposta staff 1112659 Jul 16 14:50 jaxb-impl-2.2.6.jar
-rw-r--r-- 1 ceposta staff   21162 Jul 16 14:50 jetty-continuation-8.1.14.v20131031.jar
-rw-r--r-- 1 ceposta staff   96122 Jul 16 14:50 jetty-http-8.1.14.v20131031.jar
-rw-r--r-- 1 ceposta staff  104219 Jul 16 14:50 jetty-io-8.1.14.v20131031.jar
-rw-r--r-- 1 ceposta staff   89923 Jul 16 14:50 jetty-security-8.1.14.v20131031.jar
-rw-r--r-- 1 ceposta staff  357704 Jul 16 14:50 jetty-server-8.1.14.v20131031.jar
-rw-r--r-- 1 ceposta staff  287680 Jul 16 14:50 jetty-util-8.1.14.v20131031.jar
-rw-r--r-- 1 ceposta staff  245039 Jul 16 14:50 junit-4.11.jar
-rw-r--r-- 1 ceposta staff  481535 Jul 16 14:50 log4j-1.2.16.jar
-rw-r--r-- 1 ceposta staff   71487 Jul 16 14:50 neethi-3.0.3.jar
-rw-r--r-- 1 ceposta staff  614152 Jul 16 14:50 org.osgi.compendium-4.2.0.jar
-rw-r--r-- 1 ceposta staff   26176 Jul 16 14:50 slf4j-api-1.6.6.jar
-rw-r--r-- 1 ceposta staff    9711 Jul 16 14:50 slf4j-log4j12-1.6.6.jar
-rw-r--r-- 1 ceposta staff  335679 Jul 16 14:50 spring-aop-3.2.5.RELEASE.jar
-rw-r--r-- 1 ceposta staff  612569 Jul 16 14:50 spring-beans-3.2.8.RELEASE.jar
-rw-r--r-- 1 ceposta staff  866273 Jul 16 14:50 spring-context-3.2.8.RELEASE.jar
-rw-r--r-- 1 ceposta staff  873608 Jul 16 14:50 spring-core-3.2.8.RELEASE.jar
-rw-r--r-- 1 ceposta staff  196367 Jul 16 14:50 spring-expression-3.2.8.RELEASE.jar
-rw-r--r-- 1 ceposta staff  457987 Jul 16 14:50 spring-test-3.2.8.RELEASE.jar
-rw-r--r-- 1 ceposta staff  242436 Jul 16 14:50 spring-tx-3.2.8.RELEASE.jar
-rw-r--r-- 1 ceposta staff  627339 Jul 16 14:50 spring-web-3.2.5.RELEASE.jar
-rw-r--r-- 1 ceposta staff  182112 Jul 16 14:50 stax2-api-3.1.1.jar
-rw-r--r-- 1 ceposta staff  482245 Jul 16 14:50 woodstox-core-asl-4.2.0.jar
-rw-r--r-- 1 ceposta staff  186758 Jul 16 14:50 wsdl4j-1.6.3.jar
-rw-r--r-- 1 ceposta staff   84091 Jul 16 14:50 xml-resolver-1.2.jar
-rw-r--r-- 1 ceposta staff  165787 Jul 16 14:50 xmlschema-core-2.1.0.jar

Now we find which jars belong to which module and create the modules. For example, looking above we see camel-core has a dependency on com.sun.xml.bind:jaxb-impl:jar:2.2.6. Luckily, that's its only dependency, and it's a system dependency that JBoss EAP already provides. So all we need to copy into our JBoss module directory is the org.apache.camel:camel-core:jar:2.12.0.redhat-610379 dependency. But where do we get that? Well, since we used dependency:copy-dependencies, it should already be in your target/dependency folder. But the official answer is that the Camel jars Red Hat curates are shipped as part of JBoss Fuse. So if you download the distribution for JBoss Fuse and unpack it, you should see an /extras folder in that distribution. Inside that distribution is an archive file named apache-camel-2.12.0.redhat-610379.zip. If you unpack this archive and check the /lib folder, you will have all of the Camel components and jars that Red Hat supports.
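Since the create-directory, copy-jar, and write-descriptor steps repeat for every module you want to expose, they lend themselves to the kind of automation suggested earlier. Here is a minimal bash sketch of that idea; it is not a script from the original post, it assumes $EAP_HOME is set and the jars are already in target/dependency, and it generates the module.xml descriptor whose format is shown in the next step:

#!/bin/bash
# Usage: ./make-module.sh org.apache.camel.core camel-core-2.12.0.redhat-610379.jar
# Creates the EAP module directory, copies the jar out of target/dependency,
# and writes a minimal module.xml for it.
set -e

MODULE_NAME=$1    # e.g. org.apache.camel.core
JAR=$2            # e.g. camel-core-2.12.0.redhat-610379.jar
MODULE_DIR="$EAP_HOME/modules/$(echo "$MODULE_NAME" | tr '.' '/')/main"

mkdir -p "$MODULE_DIR"
cp "target/dependency/$JAR" "$MODULE_DIR/"

cat > "$MODULE_DIR/module.xml" <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.1" name="$MODULE_NAME">
    <resources>
        <resource-root path="$JAR"/>
    </resources>
</module>
EOF

echo "Created module $MODULE_NAME in $MODULE_DIR"

A sketch like this only handles a single jar per module; for modules that bundle several jars you would loop over the jar arguments and emit one resource-root entry per jar.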
Now that we know camel-core is the only jar we'll need for the camel-core module, let's copy it over to our module folder on EAP.

Copy the dependencies (and transitive dependencies) to the module folder:

ceposta@postamachat(contract-first-camel-eap) $ cp target/dependency/camel-core-2.12.0.redhat-610379.jar $EAP_HOME/modules/org/apache/camel/core/main/

Create module.xml

Now we'll need to add a simple XML descriptor to let EAP know this is a valid module:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.1" name="org.apache.camel.core">
    <resources>
        <resource-root path="camel-core-2.12.0.redhat-610379.jar"/>
    </resources>
</module>

And now you have a camel-core EAP module! If the module depends on other modules, you can declare them like this (not necessary for the camel-core module; it's just a sample of what it would look like for other modules that need it):

<dependencies>
    <module name="org.apache.commons.lang"/>
    <module name="org.apache.commons.logging"/>
    <module name="org.apache.commons.collections"/>
    <module name="org.apache.commons.io"/>
    <module name="org.apache.commons.configuration"/>
</dependencies>

Enable the camel-core module

The last thing to do is to enable the module on the global classpath. To do this, find the standalone configuration file and add the module to the <global-modules> section of the EE subsystem:

.... bunch of other stuff here....
<subsystem xmlns="urn:jboss:domain:ee:1.1">
    <global-modules>
        <module name="org.apache.camel.core" slot="main" />
    </global-modules>
</subsystem>
.... bunch of other stuff here....

Now do the same for the camel-cxf component (hint: these are the jars). Or, if you already have some custom modules and you want to split things further into reusable modules, split them by technology (spring, cxf, cxf-transport, etc.):

[INFO] +- org.apache.camel:camel-cxf:jar:2.12.0.redhat-610379:provided
[INFO] | +- org.apache.camel:camel-spring:jar:2.12.0.redhat-610379:provided
[INFO] | | \- org.springframework:spring-tx:jar:3.2.8.RELEASE:provided
[INFO] | +- org.apache.camel:camel-cxf-transport:jar:2.12.0.redhat-610379:provided
[INFO] | +- org.apache.cxf:cxf-rt-frontend-jaxrs:jar:2.7.0.redhat-610379:provided
[INFO] | | +- javax.ws.rs:javax.ws.rs-api:jar:2.0-m10:provided
[INFO] | | \- org.apache.cxf:cxf-rt-bindings-xml:jar:2.7.0.redhat-610379:provided
[INFO] | +- org.apache.cxf:cxf-rt-frontend-jaxws:jar:2.7.0.redhat-610379:provided
[INFO] | | +- xml-resolver:xml-resolver:jar:1.2:provided
[INFO] | | +- asm:asm:jar:3.3.1:provided
[INFO] | | +- org.apache.cxf:cxf-rt-frontend-simple:jar:2.7.0.redhat-610379:provided
[INFO] | | \- org.apache.cxf:cxf-rt-ws-addr:jar:2.7.0.redhat-610379:provided
[INFO] | |    \- org.apache.cxf:cxf-rt-ws-policy:jar:2.7.0.redhat-610379:provided
[INFO] | |       \- org.apache.neethi:neethi:jar:3.0.3:provided
[INFO] | +- org.springframework:spring-core:jar:3.2.8.RELEASE:provided
[INFO] | | \- commons-logging:commons-logging:jar:1.1.3:provided
[INFO] | +- org.springframework:spring-beans:jar:3.2.8.RELEASE:provided
[INFO] | +- org.springframework:spring-context:jar:3.2.8.RELEASE:provided
[INFO] | | \- org.springframework:spring-expression:jar:3.2.8.RELEASE:provided
[INFO] | +- org.apache.cxf:cxf-rt-features-clustering:jar:2.7.0.redhat-610379:provided
[INFO] | \- org.apache.cxf:cxf-rt-bindings-soap:jar:2.7.0.redhat-610379:provided
[INFO] |    \- org.apache.cxf:cxf-rt-databinding-jaxb:jar:2.7.0.redhat-610379:provided

Note: you may want to split the different third-party dependencies here into their own modules (for example, Spring Framework, Camel Spring, etc.).
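For a concrete picture, here is a rough sketch of what the descriptor for such a camel-cxf module could look like. This is not from the original post: the module name, the jar list, and the module dependency are illustrative and must match whatever jars you actually copy into that module's main folder:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.1" name="org.apache.camel.cxf">
    <resources>
        <!-- jars copied from target/dependency into this module's main/ folder -->
        <resource-root path="camel-cxf-2.12.0.redhat-610379.jar"/>
        <resource-root path="camel-cxf-transport-2.12.0.redhat-610379.jar"/>
        <resource-root path="cxf-api-2.7.0.redhat-610379.jar"/>
        <resource-root path="cxf-rt-core-2.7.0.redhat-610379.jar"/>
        <!-- ...and the remaining jars from the camel-cxf dependency tree above... -->
    </resources>
    <dependencies>
        <!-- reuse the camel-core module created earlier instead of duplicating the jar -->
        <module name="org.apache.camel.core"/>
    </dependencies>
</module>

If you register a module like this, remember to add its name to the <global-modules> list in the EE subsystem as well.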
Deploy our project to EAP

Now, from the command line, go to the root of the source code for the sample project and do a build and deploy:

ceposta@postamachat$ mvn clean install
ceposta@postamachat$ mvn jboss-as:deploy-only

Where to go next? If you have issues with the above I'd be happy to assist, or contact Red Hat Support for a quicker response! Reference: Camel on JBoss EAP with Custom Modules from our JCG partner Christian Posta at the Christian Posta – Software Blog blog....