As discussed in my previous post, Spring Integration (SI) is a routing framework built on top of the Spring Framework that lets you apply proven enterprise integration patterns to solve system-integration problems via messaging. Once you've got SI configured and performing your routing and mediation logic, you may want to take the next step and add more robustness to your solution.
You may wish to distribute some of your routing, mediation, or service logic across multiple hosts; add reliability to the messages transmitted through your SI channels; or scale out your services beyond what a traditional client-server architecture allows. One way to achieve these goals is to back your SI routes with a message broker. SI provides abstractions for both AMQP brokers and JMS brokers. In this post, I'll use the Cafe sample from the Spring Integration Samples project to illustrate how to back your SI routes with JMS using the popular ActiveMQ message broker.
JMS is a good way to integrate your existing Java solutions with messaging. Because the JMS spec is an API, you can rely on its interfaces regardless of which broker you're using: ActiveMQ, WebSphere MQ, or any other JMS-compliant message broker. I chose ActiveMQ for this example because of its maturity, robustness, and ubiquity in industry, and because it is open source from the Apache Software Foundation under the Apache license. It fully implements JMS 1.1, provides high availability, and can scale horizontally through a network of brokers. If you're integrating Java applications, stick with JMS. ActiveMQ also provides bindings for C++, C#, Ruby, Python, Erlang, and many other languages (see their website for the full list).
Note that AMQP is a viable alternative too. AMQP specifies a wire-level protocol that allows messaging systems built on different platforms and/or in heterogeneous languages to interoperate with each other (not just Java/JVM applications, which can use the JMS API). The Cafe demo already has an AMQP implementation for use with RabbitMQ (a popular open-source AMQP broker that is part of the Spring portfolio).
Backing your channels with point-to-point or publish-subscribe JMS destinations
In my example, I opted to use an embedded broker. Since ActiveMQ is a pure Java solution, you can embed the broker in a Java application and use it internally while still allowing external clients to connect and participate in the messaging. Doing so does not limit your ability to configure ActiveMQ in any way. It can be easier to deploy a full integration solution with its own embedded broker than to rely on an external instance being set up (by another group?) or configured externally.
Two configuration files handle backing the SI channels with JMS destinations. The first, cafeDemo-amq-config.xml, configures the connection to the ActiveMQ broker. The name of the connection factory, in this case "connectionFactory", is significant because SI will by default look for a bean of that name when configuring the destinations later used by the JMS-backed channels. The second file looks very similar to the non-broker implementation of the cafe sample (cafeDemo-xml.xml), except that the channels have been converted to their JMS-backed versions and the ActiveMQ broker is embedded alongside the rest of the configuration. Note that the method used for embedding the broker allows complete configuration right within the Spring file; for this example, there is no dependency on an externally running broker. The configuration for this small example sets up only one transport connector (at the default port, 61616... we could have used the vm:// transport, but I wanted to show an example using TCP) and does not configure broker security, destination policies, etc. It does, however, take advantage of the out-of-the-box configuration details, including the JMX management MBeans, as well as message persistence via the recommended and highly optimized KahaDB store. See the ActiveMQ documentation for more.
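As a rough sketch, the embedded broker and connection factory configuration might look something like the following (the amq namespace comes from ActiveMQ's XBean support; bean names other than "connectionFactory" are illustrative):

```xml
<!-- Embedded ActiveMQ broker, configured entirely within the Spring file;
     one TCP transport connector on the default port -->
<amq:broker useJmx="true" persistent="true">
    <amq:transportConnectors>
        <amq:transportConnector uri="tcp://localhost:61616"/>
    </amq:transportConnectors>
</amq:broker>

<!-- SI looks for a bean named "connectionFactory" by default -->
<bean id="connectionFactory"
      class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://localhost:61616"/>
</bean>
```

Because the broker bean lives in the same application context as the routes, starting the context starts the broker, and no external setup is required.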
The channels used for the "coldDrinks" and "hotDrinks" were set up as polling channels in the original configuration. To accomplish that with JMS destinations, set the "message-driven" attribute on the channel to "false". In this case we didn't need to declare the destination names ahead of time, but if you'd like to add extra security and authorization properties around the destinations, you may wish to create them ahead of time, either on the broker or in the SI configuration. The main class for running this sample is org.springframework.integration.samples.cafe.xml.CafeDemoActiveMQBackedChannels.
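A minimal sketch of such a JMS-backed polling channel, using the int-jms namespace from Spring Integration (the queue name here is illustrative):

```xml
<!-- JMS-backed channel; message-driven="false" means consumers poll the
     underlying queue rather than receiving via an async listener -->
<int-jms:channel id="hotDrinks"
                 queue-name="cafe.hotDrinks"
                 message-driven="false"/>
```

Flipping message-driven to "true" (the default) would instead deliver messages to subscribers via a listener container, with no poller involved.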
The best way to observe that ActiveMQ is indeed being used is to run the sample and use JConsole to review the MBeans in the JMX server. From JConsole, you can see that messages are indeed being enqueued and dequeued through the queues and/or consumed from the topics. To test the robustness gained by using ActiveMQ, try running the sample and aborting it halfway through. Then comment out the line in the main class that adds orders to the system and restart the sample. It will continue processing where it left off when it was abnormally terminated. And there you have reliability and recovery just by changing a few lines of channel configuration.
What about running different parts of your routes on different servers or at least outside of the same JVM?
Distributing the route lets you add more instances of a particular part of the route to improve throughput and scalability without making any code changes (among other advantages): just hook up more consumers to a queue or topic. Both concepts are available within the SI process (using plain SI channels) as well as across processes (with JMS).
To demonstrate that, we'll use the JMS inbound/outbound gateways and channel adapters provided by SI. With the JMS gateways we can achieve a request-reply message exchange, while the channel adapters let us fire and forget with asynchronous semantics.
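To illustrate the difference, here is a sketch of the two styles on the sending side, using the int-jms namespace (channel and destination names are illustrative, not the sample's exact configuration):

```xml
<!-- Request-reply: the gateway sends the order over JMS and waits for a
     reply message before passing it downstream -->
<int-jms:outbound-gateway request-channel="coldDrinks"
                          request-destination-name="cafe.coldDrinks"
                          reply-channel="preparedDrinks"/>

<!-- Fire-and-forget: the adapter sends the message and returns
     immediately, with no reply expected -->
<int-jms:outbound-channel-adapter channel="orders"
                                  destination-name="cafe.orders"/>
```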
The example is set up the same way as the AMQP sample, and it relies on an externally running broker (although we could have embedded it as above). Start by running the consumers (CafeDemoAppBaristaColdActiveMQ, CafeDemoAppBaristaHotActiveMQ) that listen for cold or hot drink orders. Next, start the process responsible for the main flow and orchestration (CafeDemoAppOperationsActiveMQ). This orchestration flow takes orders, splits them, routes them to the appropriate services (the cold and hot drink baristas from above), and then handles the responses, aggregating them to be delivered by the waiter. In it you'll see the JMS gateways set up appropriately. Finally, run the process that actually initiates the orders by sending them to an order queue (CafeDemoAppActiveMQ).
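On the barista side, the consumer wiring might be sketched roughly like this (destination and channel names are illustrative; the service-activator points at the sample's Barista bean):

```xml
<!-- Receive cold-drink requests over JMS and hand them to a local channel;
     the gateway sends the service's return value back as the JMS reply -->
<int-jms:inbound-gateway request-destination-name="cafe.coldDrinks"
                         request-channel="coldDrinkOrders"/>

<!-- Invoke the barista service for each incoming order -->
<int:service-activator input-channel="coldDrinkOrders"
                       ref="barista"
                       method="prepareColdDrink"/>
```

Adding a second instance of this process pointed at the same queue would let the two baristas share the load, with the broker distributing messages between them.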
All four of these processes run independently of each other and could run on separate machines if necessary. They have their own application contexts and are visible to each other only through the ActiveMQ message broker. This is a highly modular and decoupled solution that uses a message broker for reliable communication. The broker, as mentioned above, can be configured for high availability so that it is not a single point of failure.
Advantages of this type of architecture:
- Message reliability – the message broker stores and forwards messages. With persistent delivery, messages survive a broker failure: previously unacknowledged messages are kept and redelivered to consumers that didn't get them.
- Flexibility – with the components decoupled and relying on EIP, you can maintain each one independently of the others, including deployment, enhancements, etc.
- Throttle or increase message processing – with components running in their own processes, on separate boxes, or in different parts of the world, you can configure each component to consume or throttle messages depending on how much its environment can handle.
- Scaling – to handle higher throughput, just add more instances of a component listening on a JMS destination.

Tradeoffs to be aware of:
- Complexity – maintaining multiple components is more complicated than packing everything into one process.
- Debugging – along with increased complexity comes difficulty debugging; asynchronous processes are inherently harder to debug.