Spring Integration Java DSL sample

A new Java-based DSL has been introduced for Spring Integration, which makes it possible to define Spring Integration message flows using pure Java-based configuration instead of Spring XML-based configuration. I tried the DSL for a sample integration flow that I have – I call it the Rube Goldberg flow, for it follows a convoluted path in trying to capitalize a string passed in as input. The flow does some crazy things to perform a simple task:

1. It takes in a message of this type – "hello from spring integ"
2. splits it up into individual words (hello, from, spring, integ)
3. sends each word to an ActiveMQ queue
4. from the queue the word fragments are picked up by an enricher to capitalize each word
5. placing the response back into a response queue
6. it is picked up, resequenced based on the original sequence of the words
7. aggregated back into a sentence ("HELLO FROM SPRING INTEG")
8. and returned back to the application.

To start with Spring Integration Java DSL, a simple XML-based configuration to capitalize a String would look like this:

<channel id="requestChannel"/>

<gateway id="echoGateway" service-interface="rube.simple.EchoGateway" default-request-channel="requestChannel" />

<transformer input-channel="requestChannel" expression="payload.toUpperCase()" />

There is not much going on here: a messaging gateway takes in the message passed in from the application, capitalizes it in a transformer, and this is returned back to the application.
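Stripped of all the messaging infrastructure, the net effect of the Rube Goldberg flow is easy to state in plain Java – a sketch for orientation only, since the point of the exercise is the integration plumbing around it:

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class RubeGoldbergByHand {
    // what the whole flow computes, minus queues, resequencing and gateways:
    // split into words, capitalize each, join back in the original order
    public static String echo(String input) {
        return Arrays.stream(input.split("\\s"))
                .map(String::toUpperCase)
                .collect(Collectors.joining(" "));
    }

    public static void main(String[] args) {
        System.out.println(echo("hello from spring integ")); // HELLO FROM SPRING INTEG
    }
}
```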
Expressing this in Spring Integration Java DSL:

@Configuration
@EnableIntegration
@IntegrationComponentScan
@ComponentScan
public class EchoFlow {

    @Bean
    public IntegrationFlow simpleEchoFlow() {
        return IntegrationFlows.from("requestChannel")
                .transform((String s) -> s.toUpperCase())
                .get();
    }
}

@MessagingGateway
public interface EchoGateway {
    @Gateway(requestChannel = "requestChannel")
    String echo(String message);
}

Do note that the @MessagingGateway annotation is not a part of Spring Integration Java DSL; it is an existing component in Spring Integration and serves the same purpose as the gateway component in the XML-based configuration. I like the fact that the transformation can be expressed using typesafe Java 8 lambda expressions rather than a Spring-EL expression. Note that the transformation expression could have been coded in quite a few alternate ways:

.transform((String s) -> s.toUpperCase())

Or:

.<String, String>transform(s -> s.toUpperCase())

Or using method references:

.<String, String>transform(String::toUpperCase)

Moving on to the more complicated Rube Goldberg flow to accomplish the same task, again starting with the XML-based configuration.
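As a quick aside, the three equivalent forms above behave identically when expressed as plain java.util.function.Function instances – a stdlib-only check with no Spring Integration classes involved:

```java
import java.util.function.Function;

public class TransformForms {
    // explicit parameter type, mirroring .transform((String s) -> s.toUpperCase())
    static final Function<String, String> explicitLambda = (String s) -> s.toUpperCase();
    // inferred parameter type, mirroring .<String, String>transform(s -> s.toUpperCase())
    static final Function<String, String> inferredLambda = s -> s.toUpperCase();
    // method reference, mirroring .<String, String>transform(String::toUpperCase)
    static final Function<String, String> methodRef = String::toUpperCase;

    public static void main(String[] args) {
        System.out.println(explicitLambda.apply("hello")); // HELLO
        System.out.println(inferredLambda.apply("hello")); // HELLO
        System.out.println(methodRef.apply("hello"));      // HELLO
    }
}
```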
There are two configurations to express this flow. rube-1.xml takes care of steps 1, 2, 3, 6, 7, 8:

<channel id="requestChannel"/>

<!--Step 1, 8-->
<gateway id="echoGateway" service-interface="rube.complicated.EchoGateway" default-request-channel="requestChannel" default-reply-timeout="5000"/>

<channel id="toJmsOutbound"/>

<!--Step 2-->
<splitter input-channel="requestChannel" output-channel="toJmsOutbound" expression="payload.split('\s')" apply-sequence="true"/>

<channel id="sequenceChannel"/>

<!--Step 3-->
<int-jms:outbound-gateway request-channel="toJmsOutbound" reply-channel="sequenceChannel" request-destination="amq.outbound" extract-request-payload="true"/>

<!--On the way back from the queue-->
<channel id="aggregateChannel"/>

<!--Step 6-->
<resequencer input-channel="sequenceChannel" output-channel="aggregateChannel" release-partial-sequences="false"/>

<!--Step 7-->
<aggregator input-channel="aggregateChannel" expression="T(com.google.common.base.Joiner).on(' ').join(![payload])"/>

and rube-2.xml takes care of steps 4 and 5:

<channel id="enhanceMessageChannel"/>

<int-jms:inbound-gateway request-channel="enhanceMessageChannel" request-destination="amq.outbound"/>

<transformer input-channel="enhanceMessageChannel" expression="(payload + '').toUpperCase()"/>

Now, expressing this Rube Goldberg flow using Spring Integration Java DSL, the configuration looks like this, again in two parts. EchoFlowOutbound.java:

@Bean
public DirectChannel sequenceChannel() {
    return new DirectChannel();
}

@Bean
public DirectChannel requestChannel() {
    return new DirectChannel();
}

@Bean
public IntegrationFlow toOutboundQueueFlow() {
    return IntegrationFlows.from(requestChannel())
            .split(s -> s.applySequence(true).get().getT2().setDelimiters("\\s"))
            .handle(jmsOutboundGateway())
            .get();
}

@Bean
public IntegrationFlow flowOnReturnOfMessage() {
    return IntegrationFlows.from(sequenceChannel())
            .resequence()
            .aggregate(aggregate ->
                    aggregate.outputProcessor(g ->
                            Joiner.on(" ").join(g.getMessages()
                                    .stream()
                                    .map(m -> (String) m.getPayload())
                                    .collect(toList()))), null)
            .get();
}

and EchoFlowInbound.java:

@Bean
public JmsMessageDrivenEndpoint jmsInbound() {
    return new JmsMessageDrivenEndpoint(listenerContainer(), messageListener());
}

@Bean
public IntegrationFlow inboundFlow() {
    return IntegrationFlows.from(enhanceMessageChannel())
            .transform((String s) -> s.toUpperCase())
            .get();
}

Again the code is completely typesafe and is checked for errors at development time rather than at runtime, as with the XML-based configuration. And again I like the fact that transformation and aggregation statements can be expressed concisely using Java 8 lambda expressions, as opposed to Spring-EL expressions. What I have not shown here is some of the supporting code to set up the ActiveMQ test infrastructure; this configuration remains as XML and I have included it in a sample GitHub project.
All in all, I am very excited to see this new way of expressing the Spring Integration messaging flow using pure Java, and I am looking forward to seeing its continuing evolution and maybe even trying to participate in that evolution in small ways. Here is the entire working code in a GitHub repo: https://github.com/bijukunjummen/rg-si

Resources and acknowledgements:

- Spring Integration Java DSL introduction blog article by Artem Bilan: https://spring.io/blog/2014/05/08/spring-integration-java-dsl-milestone-1-released
- Spring Integration Java DSL website and wiki: https://github.com/spring-projects/spring-integration-extensions/wiki/Spring-Integration-Java-DSL-Reference. A lot of code has been shamelessly copied over from this wiki by me! Also, a big thanks to Artem for guidance on a question that I had.
- Webinar by Gary Russell on Spring Integration 4.0, in which Spring Integration Java DSL is covered in great detail.

Reference: Spring Integration Java DSL sample from our JCG partner Biju Kunjummen at the all and sundry blog....

Java 8 StampedLocks vs. ReadWriteLocks and Synchronized

Synchronized sections are kind of like visiting your parents-in-law. You want to be there as little as possible. When it comes to locking the rules are the same – you want to spend the shortest amount of time acquiring the lock and within the critical section, to prevent bottlenecks from forming. The core language idiom for locking has always been the synchronized keyword, for methods and discrete blocks. This keyword is really hardwired into the HotSpot JVM. Each object we allocate in our code, be it a String, Array or a full-blown JSON document, has locking capabilities built right into its header at the native GC level. The same goes for the JIT compiler that compiles and re-compiles bytecode depending on the specific state and contention levels for a specific lock. The problem with synchronized blocks is that they're all or nothing – you can't have more than one thread inside a critical section. This is especially a bummer in consumer/producer scenarios, where some threads are trying to edit some data exclusively, while others are only trying to read it and are fine with sharing access. ReadWriteLocks were meant to be the perfect solution for this. You can specify which threads block everyone else (writers), and which ones play well with others for consuming content (readers). A happy ending? Afraid not. Unlike synchronized blocks, RW locks are not built into the JVM and have the same capabilities as mere mortal code. Still, to implement a locking idiom you need to instruct the CPU to perform specific operations atomically, or in a specific order, to avoid race conditions. This is traditionally done through the magical portal-hole into the JVM – the Unsafe class. RW locks use Compare-And-Swap (CAS) operations to set values directly into memory as part of their thread queuing algorithm. Even so, RWLocks are just not fast enough, and at times prove to be really darn slow, to the point of not being worth bothering with.
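That reader/writer split looks like this in code – a minimal sketch of a counter guarded by a ReentrantReadWriteLock (the counter is just an illustrative stand-in, not code from the benchmark below):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCounter {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private long value;

    // a writer blocks everyone else
    public void increment() {
        rw.writeLock().lock();
        try {
            value++;
        } finally {
            rw.writeLock().unlock();
        }
    }

    // readers share access with other readers
    public long get() {
        rw.readLock().lock();
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }
}
```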
However, help is on the way, with the good folks at the JDK not giving up; they are now back with the new StampedLock. This RW lock employs a new set of algorithms and memory fencing features added to the Java 8 JDK to help make this lock faster and more robust. Does it deliver on its promise? Let's see.

Using the lock

On the face of it, StampedLocks are more complex to use. They employ a concept of stamps, which are long values that serve as tickets used by any lock/unlock operation. This means that to unlock an R/W operation you need to pass it its correlating lock stamp. Pass the wrong stamp, and you're risking an exception, or worse – unexpected behavior. Another key piece to be really mindful of is that, unlike RWLocks, StampedLocks are not reentrant. So while they may be faster, they have the downside that threads can now deadlock against themselves. In practice, this means that more than ever, you should make sure that locks and stamps do not escape their enclosing code blocks.

long stamp = lock.writeLock();  // blocking lock, returns a stamp

try {
    write(stamp); // this is a bad move, you're letting the stamp escape
} finally {
    lock.unlock(stamp); // release the lock in the same block - way better
}

Another pet peeve I have with this design is that stamps are served as long values that don't really mean anything to you. I would have preferred lock operations to return an object which describes the stamp – its type (R/W), lock time, owner thread, etc. This would have made debugging and logging easier. This is probably intentional though, and is meant to prevent developers from passing stamps between different parts of the code, and also to save on the cost of allocating an object.

Optimistic locking

The most important piece in terms of new capabilities for this lock is the new optimistic locking mode. Research and practical experience show that read operations are for the most part not contended with write operations.
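Keeping the stamp local, a complete toy counter using StampedLock's blocking read and write modes might look like this (an illustrative sketch, not the article's benchmark code):

```java
import java.util.concurrent.locks.StampedLock;

public class StampedCounter {
    private final StampedLock lock = new StampedLock();
    private long count;

    public void increment() {
        long stamp = lock.writeLock();   // blocking; returns the ticket for this lock
        try {
            count++;                     // the stamp never leaves this block
        } finally {
            lock.unlockWrite(stamp);     // must release with the same stamp
        }
    }

    public long read() {
        long stamp = lock.readLock();
        try {
            return count;
        } finally {
            lock.unlockRead(stamp);
        }
    }
}
```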
As a result, acquiring a full-blown read lock may prove to be overkill. A better approach may be to go ahead and perform the read, and at the end of it see whether the value has actually been modified in the meanwhile. If that was the case, you can retry the read, or upgrade to a heavier lock.

long stamp = lock.tryOptimisticRead(); // non-blocking

read();

if (!lock.validate(stamp)) { // if a write occurred, try again with a read lock
    stamp = lock.readLock();
    try {
        read();
    } finally {
        lock.unlock(stamp);
    }
}

One of the biggest hassles in picking a lock is that its actual behavior in production will differ depending on application state. This means that the choice of a lock idiom cannot be made in a vacuum, and must take into consideration the real-world conditions under which the code will execute. The number of concurrent reader vs. writer threads will change which lock you should use – a synchronized section or a RW lock. This gets harder as these numbers can change during the lifecycle of the JVM, depending on application state and thread contention. To illustrate this, I stress-tested four modes of locking – synchronized, RW lock, stamped RW lock and RW optimistic locking – under different contention levels and R/W thread combinations. Reader threads consume the value of a counter, while writer threads increment it from 0 to 1M.

5 readers vs. 5 writers: Stacking up five concurrent reader and five writer threads, we see that the stamped lock shines, performing much better than synchronized by a factor of 3X. The RW lock also performed well. The strange thing here is that optimistic locking, which on the surface of things should be the fastest, is actually the slowest here.

10 readers vs. 10 writers: Next, I increased the level of contention to ten writer and ten reader threads. Here things start to materially change. The RW lock is now an order of magnitude slower than the stamped and synchronized locks, which perform at the same level.
Notice that optimistic locking is surprisingly still slower than stamped RW locking.

16 readers vs. 4 writers: Next, I maintained a high level of contention while tilting the balance in favor of reader threads: sixteen readers vs. four writers. The RW lock continues to demonstrate the reason why it's essentially being replaced – it's a hundred times slower. Stamped and optimistic perform well, with synchronized not that far behind.

19 readers vs. 1 writer: Last, I looked at how a single writer thread does against nineteen readers. Notice that the results are much slower, as the single thread takes longer to complete the work. Here we get some pretty interesting results. Not surprisingly, the RW lock takes infinity to complete. Stamped locking isn't doing much better though... Optimistic locking is the clear winner here, beating the RW lock by a factor of 100. Even so, keep in mind that this locking mode may fail you, as a write may occur during that time. Synchronized, our old faithful, continues to deliver solid results.

The full results can be found here... Hardware: MBP quad-core i7. The benchmark code can be found here.

Conclusions

It seems that on average the best overall performance is still delivered by the intrinsic synchronized lock. Even so, the point here is not to say that it will perform the best in all situations. It's mainly to show that your choice of locking idiom should be made based on testing both the expected level of contention and the division between reader and writer threads before you take your code to production. Otherwise you run the risk of some serious production debugging pain. Additional reading about StampedLocks here. Questions, comments, notes on the benchmark? Let me know!

Reference: Java 8 StampedLocks vs. ReadWriteLocks and Synchronized from our JCG partner Tal Weiss at the Takipi blog....

InterruptedException and interrupting threads explained

If InterruptedException weren't a checked exception, probably no one would even notice it – which would actually have prevented a couple of bugs throughout these years. But since it has to be handled, many handle it incorrectly or thoughtlessly. Let's take a simple example of a thread that periodically does some clean-up, but in between sleeps most of the time.

class Cleaner implements Runnable {

    Cleaner() {
        final Thread cleanerThread = new Thread(this, "Cleaner");
        cleanerThread.start();
    }

    @Override
    public void run() {
        while (true) {
            cleanUp();
            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }

    private void cleanUp() {
        //...
    }
}

This code is wrong on so many layers!

- Starting a Thread in a constructor might not be a good idea in some environments; e.g. some frameworks like Spring will create a dynamic subclass to support method interception. We will end up with two threads running from two instances.
- InterruptedException is swallowed, and the exception itself is not logged properly.
- This class starts a new thread for every instance; it should use a ScheduledThreadPoolExecutor instead, shared among many instances (more robust and memory-effective).
- Also, with ScheduledThreadPoolExecutor we could avoid coding the sleeping/working loop ourselves, and could switch to fixed-rate as opposed to the fixed-delay behaviour presented here.
- Last but not least, there is no way to get rid of this thread, even when the Cleaner instance is no longer referenced by anything else.

All problems are valid, but swallowing InterruptedException is its biggest sin. Before we understand why, let us think for a while about what this exception means and how we can take advantage of it to interrupt threads gracefully. Many blocking operations in the JDK declare throwing InterruptedException, including:

- Object.wait()
- Thread.sleep()
- Process.waitFor()
- AsynchronousChannelGroup.awaitTermination()
- Various blocking methods in java.util.concurrent.*, e.g.
  ExecutorService.awaitTermination(), Future.get(), BlockingQueue.take(), Semaphore.acquire(), Condition.await(), and many, many others
- SwingUtilities.invokeAndWait()

Notice that blocking I/O does not throw InterruptedException (which is a shame). If all these classes declare InterruptedException, you might be wondering when this exception is ever thrown:

- When a thread is blocked on some method declaring InterruptedException and you call Thread.interrupt() on such a thread, most likely the blocked method will immediately throw InterruptedException.
- If you submitted a task to a thread pool (ExecutorService.submit()) and you call Future.cancel(true) while the task is being executed. In that case the thread pool will try to interrupt the thread running the task for you, effectively interrupting your task.

Knowing what InterruptedException actually means, we are well equipped to handle it properly. If someone tries to interrupt our thread and we discover it by catching InterruptedException, the most reasonable thing to do is to let said thread finish, e.g.:

class Cleaner implements Runnable, AutoCloseable {

    private final Thread cleanerThread;

    Cleaner() {
        cleanerThread = new Thread(this, "Cleaner");
        cleanerThread.start();
    }

    @Override
    public void run() {
        try {
            while (true) {
                cleanUp();
                TimeUnit.SECONDS.sleep(1);
            }
        } catch (InterruptedException ignored) {
            log.debug("Interrupted, closing");
        }
    }

    //...

    @Override
    public void close() {
        cleanerThread.interrupt();
    }
}

Notice that the try-catch block now surrounds the while loop. This way, if sleep() throws InterruptedException, we break out of the loop. You might argue that we should log InterruptedException's stack trace. This depends on the situation; in this case interrupting the thread is something we really expect, not a failure. But it's up to you. The bottom line is that if sleep() is interrupted by another thread, we quickly escape from run() altogether.
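The behavior described above – interrupt() making a blocked sleep() throw immediately – can be observed directly; a small self-contained demo (the helper names here are mine, not from the article's code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class InterruptDemo {
    // returns true if the sleeping worker observed InterruptedException after interrupt()
    public static boolean interruptSleepingThread() {
        CountDownLatch observed = new CountDownLatch(1);
        Thread worker = new Thread(() -> {
            try {
                TimeUnit.SECONDS.sleep(60);  // would block for a minute...
            } catch (InterruptedException e) {
                observed.countDown();        // ...but interrupt() wakes it immediately
            }
        }, "Cleaner");
        worker.start();
        // even if this races ahead of sleep(), the interrupt flag is already set,
        // so sleep() throws InterruptedException right away
        worker.interrupt();
        try {
            return observed.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            return false;
        }
    }
}
```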
If you are very careful, you might ask what happens if we interrupt a thread while it's in the cleanUp() method rather than sleeping? Often you'll come across a manual flag like this:

private volatile boolean stop = false;

@Override
public void run() {
    while (!stop) {
        cleanUp();
        try {
            TimeUnit.SECONDS.sleep(1);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the flag so the signal isn't lost
        }
    }
}

@Override
public void close() {
    stop = true;
}

However notice that the stop flag (it has to be volatile!) won't interrupt blocking operations; we have to wait until sleep() finishes. On the other side, one might argue that an explicit flag gives us better control since we can monitor its value at any time. It turns out thread interruption works the same way. If someone interrupts a thread while it is doing a non-blocking computation (e.g. inside cleanUp()), such computations aren't interrupted immediately. However the thread is marked as interrupted, and every subsequent blocking operation (e.g. sleep()) will simply throw InterruptedException immediately – so we won't lose that signal. We can also take advantage of that fact if we write a non-blocking thread that still wants to use the thread interruption facility. Instead of relying on InterruptedException we simply have to check Thread.isInterrupted() periodically:

public void run() {
    while (!Thread.currentThread().isInterrupted()) {
        someHeavyComputations();
    }
}

Above, if someone interrupts our thread, we will abandon the computation as soon as someHeavyComputations() returns. If it runs for too long or infinitely, we will never discover the interruption flag. Interestingly, the interrupted flag is not a one-time pad. We can call Thread.interrupted() instead of isInterrupted(), which will reset the interrupted flag and we can continue. Occasionally you might want to ignore the interrupted flag and continue running. In that case interrupted() might come in handy. BTW I (imprecisely) call "getters" that change the state of the object being observed "Heisengetters".
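Both points – the polling loop on isInterrupted() and the flag-clearing behavior of Thread.interrupted() – can be sketched in a few lines (the helper names are mine, for illustration):

```java
public class InterruptionFlags {
    // a non-blocking worker that polls the interruption flag, as described above
    public static boolean runAndStopPollingWorker() {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // stands in for someHeavyComputations(); each pass re-checks the flag
            }
        });
        worker.start();
        worker.interrupt();                  // flag is noticed on the next loop pass
        try {
            worker.join(5_000);
        } catch (InterruptedException e) {
            return false;
        }
        return !worker.isAlive();            // worker exited its loop
    }

    // Thread.interrupted() reads AND clears the flag; isInterrupted() only reads it
    public static boolean[] interruptedClearsFlag() {
        Thread.currentThread().interrupt();  // set our own interrupt flag
        boolean first = Thread.interrupted();   // true, and resets the flag
        boolean second = Thread.interrupted();  // false, flag already cleared
        return new boolean[] { first, second };
    }
}
```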
Note on Thread.stop()

If you are an old-school programmer, you may recall the Thread.stop() method, which has been deprecated for 10 years now. In Java 8 there were plans to "de-implement" it, but in 1.8u5 it's still there. Nevertheless, don't use it, and refactor any code using Thread.stop() to use Thread.interrupt().

Uninterruptibles from Guava

Rarely, you might want to ignore InterruptedException altogether. In that case have a look at Uninterruptibles from Guava. It has plenty of utility methods like sleepUninterruptibly() or awaitUninterruptibly(CountDownLatch). Just be careful with them. I know they don't declare InterruptedException (which might be handy), but they also completely prevent the current thread from being interrupted – which is quite unusual.

Summary

By now I hope you have some understanding of why certain methods throw InterruptedException. The main takeaways are:

- A caught InterruptedException should be handled properly – most of the time it means breaking out of the current task/loop/thread entirely.
- Swallowing InterruptedException is rarely a good idea.
- If the thread was interrupted while it wasn't in a blocking call, use isInterrupted(). Also, entering a blocking method when the thread is already interrupted should immediately throw InterruptedException.

Reference: InterruptedException and interrupting threads explained from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....

Law of Demeter in Java – Principle of least Knowledge – Real life Example

Law of Demeter, also known as the principle of least knowledge, is a coding principle which says that a module should not know about the inner details of the objects it manipulates. If code depends upon the internal details of a particular object, there is a good chance that it will break as soon as the internals of that object change. Since Encapsulation is all about hiding the internal details of an object and exposing only operations, it also asserts the Law of Demeter. One mistake many Java programmers make is exposing internal details of an object using getter methods, and this is where the principle of least knowledge alerts you. I first came to know about this principle while reading one of the must-read programming books, Robert C. Martin's Clean Code. Apart from the many good things the book teaches you, the "principle of least knowledge" is one principle which I still remember. Like many bad things, you will be tempted to violate the Law of Demeter because of the beautiful chaining of methods written in fluent style. On the surface it looks pretty good, but as soon as you think about the principle of least knowledge, you start seeing the real picture. In this article, we will see the formal definition of the Law of Demeter and explore a code snippet which violates this principle.

Law of Demeter

According to the Law of Demeter, a method M of object O should only call the following types of methods:

- Methods of object O itself
- Methods of objects passed as an argument
- Methods of objects held in an instance variable
- Methods of any object created locally in method M

More importantly, a method should not invoke methods on objects that are returned by any of the method calls specified above; as Clean Code says, "talk to friends, not to strangers". Apart from knowing the basic object-oriented programming concepts, e.g. Abstraction, Polymorphism and Inheritance, and the SOLID design principles, it's also worth knowing useful principles like this one, which have found their way in via experience.
In the following example, we will see how a method can violate the above rules and hence the Law of Demeter.

public class LawOfDemeterDemo {

    /**
     * This method shows two violations of the "Law of Demeter" or "principle of least knowledge".
     */
    public void process(Order o) {

        // as per rule 1, this method invocation is fine, because o is an argument of the process() method
        Message msg = o.getMessage();

        // this method call is a violation, as we are using msg, which we got from Order.
        // We should ask Order to normalize the message, e.g. "o.normalizeMessage();"
        msg.normalize();

        // this is also a violation; instead of using a temporary variable it uses a method chain.
        o.getMessage().normalize();

        // this is OK, a constructor call, not a method call.
        Instrument symbol = new Instrument();

        // as per rule 4, this method call is OK, because the instance of Instrument is created locally.
        symbol.populate();
    }
}

You can see that when we get the internals of the Order class and call a method on that object, we violate the Law of Demeter, because now this method knows about the Message class. On the other hand, calling a method on the Order object is fine because it's passed to the method as a parameter. This image nicely explains what you need to do to follow the Law of Demeter. Let's see another example of code which violates the Law of Demeter, and how it affects code quality.

public class XMLUtils {
    public BookCategory getFirstBookCategoryFromXML(XMLMessage xml) {
        return xml.getXML().getBooks().getBookArray(0).getBookHeader().getBookCategory();
    }
}

This code is now dependent upon a lot of classes:

- XMLMessage
- XML
- Book
- BookHeader
- BookCategory

This means this function knows about XMLMessage, XML, Book, BookHeader and BookCategory. It knows that XML has a list of Books, which in turn have a BookHeader, which internally has a BookCategory – that's a lot of information. If any of the intermediate classes or accessor methods in this chained method call changes, then this code will break. This code is highly coupled and brittle.
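For contrast, here is one way such code could be restructured to respect the principle – the class names follow the article's example, but the delegating methods are hypothetical sketches, not a real XML API:

```java
// Each class answers the questions it can, instead of leaking its internals.
// (Class and method names are illustrative only.)
class BookCategory {
    private final String name;
    BookCategory(String name) { this.name = name; }
    String getName() { return name; }
}

class Book {
    private final BookCategory category;
    Book(BookCategory category) { this.category = category; }
    // Book hides BookHeader entirely and answers directly
    BookCategory getCategory() { return category; }
}

class XMLMessage {
    private final Book firstBook;
    XMLMessage(Book firstBook) { this.firstBook = firstBook; }
    // the single one-step call a client is allowed to make
    BookCategory getFirstBookCategory() { return firstBook.getCategory(); }
}

public class BookUtils {
    // depends only on XMLMessage, its direct collaborator (rule 2)
    public static BookCategory getFirstBookCategory(XMLMessage xml) {
        return xml.getFirstBookCategory();
    }
}
```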
It's much better to put the responsibility of finding internal data into the object which owns it. If we look closely, we should only call the getXML() method, because it is a method of the XMLMessage class, which is passed to the method as an argument. Instead of putting all this code in XMLUtils, it should go into BookUtils or something similar, which can still follow the Law of Demeter and return the required information.

Reference: Law of Demeter in Java – Principle of least Knowledge – Real life Example from our JCG partner Javin Paul at the Javarevisited blog....

Meet Fabric8: An open-source integration platform based on Camel and ActiveMQ

Fabric8

Fabric8 is an Apache 2.0 licensed upstream community for the JBoss Fuse product from Red Hat. It is an integration platform based on Apache ActiveMQ, Camel, CXF, Karaf, HawtIO and others. It provides automated configuration and deployment management to help make deployments easy, reproducible, and less human-error prone. The latest GA version of JBoss Fuse (v6.1) was recently released and is based on v1.0 of Fabric8. Fabric8 unifies and packages those open-source projects to help you build integrations between systems and also tackle non-functional requirements like managing your deployments, service discovery, failover, load balancing, centralized configuration, automation, and more! It also gives a clear path to cloud deployments, such as on a PaaS. The best part is it's familiar to people who already use Camel or ActiveMQ, which are the most popular open-source integration library and messaging platform respectively. You can get more info from the community docs, chat with the developers on IRC on freenode, and the mailing list at google-groups.

Awesome, so what does Fabric8 give me?

Fabric8 provides a LOT of functionality... but a couple of key pieces of functionality that I'd like to mention in this blog post, pieces that you'd otherwise have to build out yourself if you use the constituent projects directly, are:

- Automated deployment and provisioning
- Polycontainer support
- Centralized management
- Service discovery
- Load balancing
- High availability
- Master/slave failover coordination

With Fabric8, you build your integration pieces, deploy them and manage them (together this creates a "fabric") where nodes represent containers with provisioned pieces of your software (deployments), and the endpoints (HTTP, MQ, SOAP/REST) are registered in a repository for dynamic lookup.
A DevOpsy story

Think for a moment about what your current build and release process looks like. For Java shops you probably have Maven to build your source code, Subversion or Git to provide version control and change management around your source code, and maybe Jenkins for managing your builds, right? And that's a very powerful set of tools for Java developers. But a build and release process is more than using a few tools, regardless of how powerful they are. Getting your code to production involves a lot more on the operations side that developers either don't get or are oblivious to. What containers does your code run in? What operating systems? What supporting software needs to be around? Are these environments carefully crafted and manually configured, with behemoth containers that are brittle to change and different depending on which environment they run in (DEV/QA/UAT/PROD, etc.)? Successful IT shops embrace the DevOps movement and its principles of communication and automation to create an environment that is easily scripted/automated, reproducible, and removes as much human and manual configuration as possible. A dev person thinks in terms of code and app servers. An ops person might be thinking in terms of managing VMs, servers, OSs, network, etc. But therein lies a gap. What tools do developers have to automate deploying containers, provisioning their applications, configuring those apps, and visualizing/managing all of this from a central location? Ops folks are familiar with Puppet/Chef/Ansible/MCollective/Capistrano, and using these tools in concert with Fabric8 will give you a very deep and powerful stack for automation and configuration management, to help you achieve consistent and reproducible deployments to production and implement a continuous delivery model. So what's the value that Fabric8 adds?
Consistency across containers

A consistent way of configuring your deployments with Profiles that works across Java containers (Karaf, Tomcat, Wildfly, TomEE), micro-service frameworks (Dropwizard, Spring Boot, Vert.x), and plain-jane Java Main (PJJM, TM) based apps.

Visualizations

A unified web console based on HawtIO to manage your profiles, deployments, brokers, services, etc. There are even rich visualizations for your Camel routes, with debugging and tracing when there are problems.

Discovery

For all the deployments within a Fabric, Fabric8 can not only manage them but also register them into a run-time registry that clients can use to automatically find the set of HTTP endpoints (SOAP/REST, etc.) they need, or MQ services (brokers, master/slave pairs, networks of brokers, etc.). Additionally, external clients can also use the registry to discover services.

Deep understanding about your running services

While the familiar Ops tools mentioned above are great at getting software onto disk for sets of machines, they cannot give a rich understanding of the services running. For example, with the Camel plugin for Fabric8, you can track the number of exchanges completed, those failed, the amount of time an endpoint is taking to complete exchanges, etc. With the ActiveMQ plugin you can visualize your queues/producers/consumers, send messages to queues, move messages from the DLQ, etc. Additionally, there are plugins for ElasticSearch/Kibana for an even deeper understanding of the business/integration processes implemented by your code/Camel routes.

Familiarity

Fabric8 uses tools that are already familiar to Java developers writing distributed integration services or applications. For example, all of the configurations (Profiles) are stored in git, the provisioning mechanisms use Maven, the coordination services use Apache ZooKeeper, etc.
Manage deployments in the cloud or across hybrid clouds Fabric8 has built-in support for deploying and provisioning to IaaS or PaaS out of the box. There’s even support for Docker based containers which you can then ship and use in any environment! What about ServiceMix? ServiceMix is also an open-source ESB based on Apache Camel and ActiveMQ. So how does this relate to Fabric8? ServiceMix is the genesis of the current JBoss Fuse/Fabric8. It started off 9 or so years ago as an implementation of an Enterprise Service Bus (ESB) based on the Java Business Integration (JBI) spec. Its goal was to provide a pluggable component architecture with a normalized messaging backbone that would adhere to standard interfaces and canonical XML data formats. ServiceMix gained a lot of popularity, despite JBI being an overly ceremonious spec (lots and lots of XML descriptors, packaging demands, etc.). But, despite most products/projects offering integration services as a large, complex container, the need for routing, transformation, integrating with external systems, etc. shows up outside of that complex “ESB” environment as well! Around the SMX 3.x and 4.x timeframe, the project underwent some major refactoring. The JBI implementation was ripped out and simplified into the routing/mediation DSL that would later become Apache Camel. This way the “heart” of the “ESB” could be used in other projects (ActiveMQ, stand alone, etc.). Additionally, the core container also moved away from JBI and toward OSGi. Still later, the actual OSGi container was refactored out into its own project, now known as Karaf. So ServiceMix became less its own project and really a packaging of other projects like ActiveMQ, Karaf (which used to be core SMX) and Camel (which used to be core SMX). The older versions of JBoss Fuse (Fuse ESB/Fuse Enterprise) were basically a hardening of SMX, which was already a repackaging of some Apache projects.
Additionally, a lot of the core developers working on SMX also moved toward contributing to the constituent pieces and not necessarily the core SMX. Fabric8 takes the “ESB” or “integration” spirit of ServiceMix, adds a nice management UI (HawtIO) and all of the DevOpsy stuff I mentioned above, and paints a clear path toward large-scale deployments and even moving to cloud/hybrid cloud architectures. If you want more info from the community, Claus Ibsen wrote a nice blog post, and there is a rather long discussion in the SMX community as well. Next steps If you develop systems/enterprise integrations with Camel, CXF, or ActiveMQ and deploy into OSGi (Karaf), Servlet (Tomcat), Java EE (Wildfly) or stand alone (Vert.x, Spring Boot, Dropwizard), then you should definitely take a look at Fabric8. Start by downloading the latest release and give us your feedback! In subsequent posts, I’ll continue to dive into the functionality of Fabric8 and how you can use it to build robust, scalable integrations AND have a consistent and reproducible environment for deploying your integrations.Reference: Meet Fabric8: An open-source integration platform based on Camel and ActiveMQ from our JCG partner Christian Posta at the Christian Posta – Software Blog blog....

How I Learned To Appreciate Job Hoppers

The possibility of being labeled a job hopper is still a concern for many in the technology world. This fear is often unreasonable and is primarily a function of traditional and antiquated employment concepts being extended into an economy where they likely don’t belong. In other words, don’t take career advice from your parents. When I first started recruiting software engineers during the late ’90s dot-com boom, I was advised by more senior coworkers to avoid wasting time speaking with job hoppers. People who frequently switched employers were perceived as high risk for companies that needed to invest significant time and money into making new hires effective, only to lose them shortly thereafter. Recruiters were afraid a placement might not even reach their guarantee period. The economy and workforce as a whole were undergoing rapid and dramatic changes, but long-established preconceptions about employment were somewhat slower to adapt. Linear career paths were the expectation, and the prevailing practice was still to promote our best engineers into management without regard for their leadership potential or soft skills. Company loyalty was still a strong emotional factor that resulted in long tenures. The new, well-funded software startup ecosystem siphoned talent from large and established companies, with engineers leaving behind stability and pensions for modest salaries and stock options that provided get-rich-quick possibilities. Once the big firm pool dried up, the startups cannibalized each other and created a class that appeared to many as mercenary startup engineers. This was a class of job hoppers, but their motivations (and subsequently their character) were often misrepresented or misunderstood. Were they mercenaries? There were (and still are) a fair share of individuals who chase short-term gains by making job decisions based almost exclusively on numbers.
Accepting offers from the highest bidder every time will work for some, but to maximize lifetime earnings (ignoring job satisfaction here) the top offer may not always be the best path. Was this new class of startup job hoppers driven primarily by financial gain? For most, I believe the answer was no. Due to the competitive nature of the industry and basic economic principles of supply and demand, most job changes result in at least some small increase in compensation. It would be easy to assume that engineers accepted these offers based on the higher package, but for most the desire for change was probably attributed to the need to build new things. It’s no coincidence that we find many of the startup job hoppers went on to become independent consultants and contractors, where there is no stigma attached to short stints. We could again make the argument that they were driven to consulting by high rates, and some certainly were, but many point to their preference to finish projects and then move along. Engineers left many of their startup jobs after a year or two because they had built what they were hired to build, were drawn to the job based on the opportunity to create something, and were much less enthusiastic about maintaining it. This desire to build is a characteristic valued by managers who emphasize getting things done, so they can hardly fault them for leaving when there is little left to do. Job hopping vs the alternative Of course, the opposite of job hoppers are employees who remain in the same job for inordinate tenures. Most industries have historically interpreted a long stay with the same employer as a positive sign and an asset for one’s candidacy. In technology this is typically no longer the case. In recent years the adage “Do you have ten years of experience or one year of experience ten times?” has been applied to those who seem more driven by company loyalty and stability than career self-interest. 
There was a time when it was more difficult to find new work without a highly stable work history. In today’s technology market, I would make the argument that a career characterized by one or two lengthy employment stints is actually less marketable to the majority of tech employers than your standard job hopper. Discrimination against those with long tenures is often wrongfully attributed to ageism or an overqualified candidate, where the root of the discrimination is a belief that work variety produces better engineers. As a recruiter, removing the appearance of dust or stagnation is a major challenge when working with candidates coming off a long tenure. Positive vs negative hopping It’s important to note that there are different kinds of job hoppers, and the picture painted thus far has been mostly of those viewed favorably by the industry. These are people who make self-interested decisions to move once they have maximized their career gain from an opportunity. Usually this meant the ability to gain a new experience, such as learning a skill or building a product. To lose any potential negative stigma associated with job hopping, one should have a list of accomplishments and projects that were seen to completion. That doesn’t mean there can’t be failings along the way, but successful job hoppers have a track record of being hired for a purpose and meeting or exceeding the expectations of the employer. They should also be able to explain the motivations behind each move and why it was right for their career at that time. The job hoppers that are likely viewed in a negative light often lack both accomplishments and justifications for their transitions, and can often have résumé gaps that aren’t easily explained. They may have a history of abandoning efforts before completion, or are consistently wooed by new employers regardless of current project status. Unemployed job hoppers with these backgrounds eventually have a difficult time in job search. 
Conclusion Attitudes towards those that frequently change jobs have transitioned as the economy has changed, and companies have more realistic expectations about their employees acting in their own self-interests. The stigma around job hopping in technology has almost been eliminated at smaller companies, particularly for candidates who have a solid list of accomplishments and are able to articulate a history of positive career choices.Reference: How I Learned To Appreciate Job Hoppers from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Using IntelliJ..for 2 weeks, so far so good

It’s been almost 2 weeks since I completely switched over to IntelliJ as my main Java IDE at home and at work. So far so good; here are my initial findings. Migration: It took me a couple of hours to migrate my projects over. If your project is already Mavenized, things are simple and there is no risk involved. Maven: As many people say, IntelliJ currently treats Maven-ized projects better than Eclipse Kepler and its internal plugin. The integration is not perfect, but I don’t think there is such a thing. Profiles work, Maven options work, and the IDE seems to refresh its state along with the Maven one, especially during clean and package. This is what I wanted, so I am very happy about it. Key Bindings: At first I had selected the Eclipse key map, but soon realized that most of the examples out there were based on the IntelliJ key bindings (especially when browsing help material). At the same time, some of the most exotic and clever functionality was not bound by default to an Eclipse combo, so I felt I was missing some magic. During the second week I decided to change my settings to the IntelliJ defaults, and I was surprised that after a day or so, with the help of the documentation and Cmd+Shift+A, I found my way around. Crashes: No crashes, ooh yes, this is so good. No crashes. Enterprise Features / Facets: I tried the Enterprise version with all the extra features. It makes sense if you are a JavaEE developer BUT, like Eclipse, when the IDE activates all these enterprise wizards and facets it becomes slow. So I think I can live without them, despite the fact that they might save you some time in a configuration or special annotation.
Maybe for less experienced developers these wizards can save some time; for the time being I can still work without the JavaEE/JSF wizards. Java Refactorings: It seems that the tool is more ‘clever’ the Java way; it spots common programming errors on the fly and provides on-the-spot suggestions. I have never seen a tool with such correct suggestions and scanning. Well done JetBrains team, well done. Searching stuff: Most of the time in a fairly large project, finding a class or a resource is a major, repetitive, time-consuming task. I think that IntelliJ builds on top of the Eclipse legacy, which introduced back in the day fast and smart searching, and does it better. Ohh yes, I loved the (Shift+Shift) combo. Quality: As I’ve already said, the built-in Java language scanning is very good, which means that the tool helps you write better code. The standard ‘Analyze’ functionality provides a variety of suggestions, most of them to the point. I have also installed the PMD, FindBugs and Checkstyle plugins, so I am very happy there is already integration with these very important tools for every Java developer. Text editor: Smart cursors, easy renames and smart support for many different files, things I am now slowly trying to use and explore. App server support: Currently I am using WebSphere (bliah); the standard plugin is actually quite good, though I cannot fully evaluate it since WebSphere cannot run on MacOSX, so most of the features are of no use to me. Others in the team, though, are successfully using ‘hot swap’ and local deploy with no problem. I guess the tool supports all the major app servers; if it managed to do it properly with WebSphere then the others must be easier. Arquillian + JUnit: This is the one thing that I have not managed to make work. The JUnit runner in Eclipse was most probably capable of understanding my configuration and successfully starting Arquillian with GlassFish on JUnit tests.
For the time being, when I try to do the same on IntelliJ I fail miserably; maybe some configuration is missing on my side, I don’t know. This is the only reason I keep Eclipse on standby: sometimes I like to debug while I unit test, and currently I cannot do that on IntelliJ. So far so good, with some small problems that I can live with. It seems that our small team at work is slowly migrating over to IntelliJ (Community Edition).Reference: Using IntelliJ..for 2 weeks, so far so good from our JCG partner Paris Apostolopoulos at the Papo’s log blog....

Neo4j 2.1: Passing around node ids vs UNWIND

When Neo4j 2.1 is released we’ll have the UNWIND clause, which makes working with collections of things easier. In my blog post about creating adjacency matrices we wanted to show how many people were members of the first 5 meetup groups ordered alphabetically and then check how many were members of each of the other groups. Without the UNWIND clause we’d have to do this:

MATCH (g:Group)
WITH g
ORDER BY g.name
LIMIT 5

WITH COLLECT(id(g)) AS groups

MATCH (g1) WHERE id(g1) IN groups
MATCH (g2) WHERE id(g2) IN groups

OPTIONAL MATCH path = (g1)<-[:MEMBER_OF]-()-[:MEMBER_OF]->(g2)

RETURN g1.name, g2.name, CASE WHEN path is null THEN 0 ELSE COUNT(path) END AS overlap

Here we get the first 5 groups, put their IDs into a collection and then create a cartesian product of groups by doing back-to-back MATCHes with a node id lookup. If instead of passing around node ids in ‘groups’ we passed around nodes and then used those in the MATCH step, we’d end up doing a full node scan, which becomes very slow as the store grows. e.g.
this version would be very slow:

MATCH (g:Group)
WITH g
ORDER BY g.name
LIMIT 5

WITH COLLECT(g) AS groups

MATCH (g1) WHERE g1 IN groups
MATCH (g2) WHERE g2 IN groups

OPTIONAL MATCH path = (g1)<-[:MEMBER_OF]-()-[:MEMBER_OF]->(g2)

RETURN g1.name, g2.name, CASE WHEN path is null THEN 0 ELSE COUNT(path) END AS overlap

This is the output from the original query:

+-------------------------------------------------------------------------------------------------------------+
| g1.name                                         | g2.name                                         | overlap |
+-------------------------------------------------------------------------------------------------------------+
| "Big Data Developers in London"                 | "Big Data / Data Science / Data Analytics Jobs" | 17      |
| "Big Data Jobs in London"                       | "Big Data London"                               | 190     |
| "Big Data London"                               | "Big Data Developers in London"                 | 244     |
| "Cassandra London"                              | "Big Data / Data Science / Data Analytics Jobs" | 16      |
| "Big Data Jobs in London"                       | "Big Data Developers in London"                 | 52      |
| "Cassandra London"                              | "Cassandra London"                              | 0       |
| "Big Data London"                               | "Big Data / Data Science / Data Analytics Jobs" | 36      |
| "Big Data London"                               | "Cassandra London"                              | 422     |
| "Big Data Jobs in London"                       | "Big Data Jobs in London"                       | 0       |
| "Big Data / Data Science / Data Analytics Jobs" | "Big Data / Data Science / Data Analytics Jobs" | 0       |
| "Big Data Jobs in London"                       | "Cassandra London"                              | 74      |
| "Big Data Developers in London"                 | "Big Data London"                               | 244     |
| "Cassandra London"                              | "Big Data Jobs in London"                       | 74      |
| "Cassandra London"                              | "Big Data London"                               | 422     |
| "Big Data / Data Science / Data Analytics Jobs" | "Big Data London"                               | 36      |
| "Big Data Jobs in London"                       | "Big Data / Data Science / Data Analytics Jobs" | 20      |
| "Big Data Developers in London"                 | "Big Data Jobs in London"                       | 52      |
| "Cassandra London"                              | "Big Data Developers in London"                 | 69      |
| "Big Data / Data Science / Data Analytics Jobs" | "Big Data Jobs in London"                       | 20      |
| "Big Data Developers in London"                 | "Big Data Developers in London"                 | 0       |
| "Big Data Developers in London"                 | "Cassandra London"                              | 69      |
| "Big Data / Data Science / Data Analytics Jobs" | "Big Data Developers in London"                 | 17      |
| "Big Data London"                               | "Big Data Jobs in London"                       | 190     |
| "Big Data / Data Science / Data Analytics Jobs" | "Cassandra London"                              | 16      |
| "Big Data London"                               | "Big Data London"                               | 0       |
+-------------------------------------------------------------------------------------------------------------+
25 rows

If we use UNWIND we don’t need to pass around node ids anymore; instead we can collect up the nodes into a collection and then explode them out into a cartesian product:

MATCH (g:Group)
WITH g
ORDER BY g.name
LIMIT 5

WITH COLLECT(g) AS groups

UNWIND groups AS g1
UNWIND groups AS g2

OPTIONAL MATCH path = (g1)<-[:MEMBER_OF]-()-[:MEMBER_OF]->(g2)

RETURN g1.name, g2.name, CASE WHEN path is null THEN 0 ELSE COUNT(path) END AS overlap

There’s not significantly less code, but I think the intent of the query is a bit clearer using UNWIND. I’m looking forward to seeing the innovative uses of UNWIND people come up with once 2.1 is GA.Reference: Neo4j 2.1: Passing around node ids vs UNWIND from our JCG partner Mark Needham at the Mark Needham Blog blog....

Testing effectively

Recently, there was a heated debate regarding TDD, started by DHH when he claimed that TDD is dead. This ongoing debate managed to capture the attention of the developer world, including us. Some mini debates have happened in our office regarding the right practices for testing. In this article, I will present my own view. How many kinds of tests have you seen? From the time I joined the industry, here are the kinds of tests that I have worked on:

Unit Test
System/Integration/Functional Test
Regression Test
Test Harness/Load Test
Smoke Test/Spider Test

The above test categories are not necessarily mutually exclusive. For example, you can create a set of automated functional tests or Smoke Tests to be used as regression tests. For the benefit of newbies, let’s do a quick review of these old concepts. Unit Test Unit Test aims to test the functionality of a unit of code/component. In the Java world, the unit of code is the class, and each Java class is supposed to have a unit test. The philosophy of Unit Test is simple: when all the components are working, the system as a whole should work. A component rarely works alone. Rather, it normally interacts with other components. Therefore, in order to write Unit Tests, developers need to mock other components. This is the problem that DHH and James O. Coplien criticize Unit Test for: huge effort that gains little benefit. System/Integration/Functional Test There is no concrete naming, as people often use different terms to describe similar things. In contrast to Unit Test, for functional tests developers aim to test a system function as a whole, which may involve multiple components. Normally, for a functional test, the data is retrieved from and stored to the test database. Of course, there should be a pre-step to set up test data before running. DHH likes this kind of test. It helps developers test all the functions of the system without huge effort to set up mock objects. Functional tests may involve asserting web output.
In the past, this was mostly done with HtmlUnit, but with the recent improvement of Selenium Grid, Selenium became the preferred choice. Regression Test In this industry, you may end up spending more time maintaining a system than developing a new one. Software changes all the time and it is hard to avoid risk whenever you make changes. Regression Tests are supposed to capture any defect caused by changes. In the past, a software house would have an army of testers, but the current trend is automated testing. It means that developers deliver software with a full set of tests that are supposed to break whenever a function is spoiled. Whenever a bug is detected, a new test case should be added to cover the new bug. Developers create the test, let it fail, and fix the bug to make it pass. This practice is called Test Driven Development. Test Harness/Load Test Normal test cases do not capture system performance. Therefore, we need to develop another set of tests for this purpose. In the simplest form, we can set a timeout for the functional tests that run on the continuous integration server. The tricky part in this kind of test is that it is very system dependent and may fail if the system is overloaded. The more popular solution is to run load tests manually using a profiling tool like JMeter, or to create our own load test app. Smoke Test/Spider Test Smoke Test and Spider Test are two special kinds of tests that may be more relevant to us. WDS provides KAAS (Knowledge as a Service) for the wireless industry. Therefore, our applications are refreshed every day with data changes rather than business logic changes. It is specific to us that system failure may come from data change rather than business logic. Smoke Tests are a set of pre-defined test cases run on the integration server with production data. They help us to find out any potential issues for the daily LIVE deployment.
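The timeout idea mentioned above under Test Harness/Load Test can be sketched in plain Java. This is only an illustrative sketch: the SearchService class and the one-second budget are made-up examples, not WDS code, and as noted above such wall-clock checks are brittle on a loaded CI server, which is why a dedicated tool like JMeter is usually the better choice.

```java
import java.time.Duration;
import java.time.Instant;

// Toy "system under test": the operation whose latency we want to bound.
class SearchService {
    int search(String query) {
        int hits = 0;
        for (int i = 0; i < 1_000; i++) {
            if (Integer.toString(i).contains(query)) {
                hits++;
            }
        }
        return hits;
    }
}

public class CrudeLoadCheck {
    // Runs the operation once and reports whether it finished within the budget.
    static boolean runsWithin(Runnable operation, Duration budget) {
        Instant start = Instant.now();
        operation.run();
        return Duration.between(start, Instant.now()).compareTo(budget) <= 0;
    }

    public static void main(String[] args) {
        SearchService service = new SearchService();
        boolean ok = runsWithin(() -> service.search("42"), Duration.ofSeconds(1));
        System.out.println(ok ? "within budget" : "too slow");
    }
}
```

A real performance suite would run the operation many times under concurrent load and assert on percentiles rather than a single wall-clock measurement.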
Similar to a Smoke Test, a Spider Test runs with real data, but it works like a crawler that randomly clicks on any link or button available. One of our systems contains so many combinations of inputs that it is not possible for it to be tested by a human (close to 100,000 combinations of inputs). Our Spider Test randomly chooses some combinations of data to test. If it manages to run for a few hours without any defect, we will proceed with our daily/weekly deployment. The Test Culture in our environment To make it short, WDS is a TDD temple. If you create the implementation before writing test cases, better be quiet about it. If you look at the WDS self-introduction, TDD is mentioned only after Agile and XP: “We are:- agile & XP, TDD & pairing, Java & JavaScript, git & continuous deployment, Linux & AWS, Jeans & T-shirts, Tea & cake”. Many high-level executives in WDS started their careers as developers. That helps foster our culture as an engineering-oriented company. Requesting resources to improve test coverage or infrastructure is common here. We do not have QA. In the worst case, the Product Owner or customers detect bugs. In the best case, we detect bugs with test cases or thanks to team mates during the peer review stage. Regarding our Singapore office, most of our team members grew up absorbing Kent Beck and Martin Fowler books and philosophy. That’s why most of them are hardcore TDD worshipers. One member of our team is even Martin Fowler’s neighbour. The focus on testing in our working environment did bear fruit. The WDS production defect rate is relatively low. My own experience and personal view of testing That is enough self-appraisal. Now, let me share my experience with testing. Generally, Automated Testing works better than QA Comparing the output of a traditional software house that is packed with an army of QA with a modern Agile team that delivers fully test-covered products, the latter normally outperforms in terms of quality and may even be more cost effective.
Should QA jobs be extinct soon? Over-monitoring may hint at a lack of quality It sounds strange, but over the years I developed an insecure feeling whenever I saw a project that had too many layers of monitoring. Over-monitoring may hint at a lack of confidence and indeed, these systems crash very often for unknown reasons. Writing test cases takes more time than developing features DHH is definitely right on this. Writing test cases means that you need to mock input and assert lots of things. Unless you keep writing spaghetti code, developing features takes much less time compared to writing tests. UI Testing with javascript is painful You knew it when you did it. Life would be much better if you only needed to test Restful APIs or static html pages. Unfortunately, the trend of modern web application development involves lots of javascript on the client side. For UI Testing, asynchronous behaviour is evil. Whether you go with a full-control testing framework like HtmlUnit or a more practical, generic one like Selenium, it will be a great surprise to me if you never encounter random failures. I guess every developer knows the feeling of failing to get the build to pass at the end of the week due to randomly failing test cases. Developers always over-estimate their software quality This applies to me as well because I am an optimistic person. We tend to think that our implementation is perfect until the tests fail or someone helps to point out a bug. Sometimes, we change our code to make writing test cases easier Want it or not, we must agree with DHH on this point. In the Java world, I have seen people exposing internal variables and creating dummy wrappers for framework objects (like HttpSession, HttpRequest,…) so that it is easier to write Unit Tests. DHH finds this so uncomfortable that he chose to walk away from Unit Testing. On this part, I half agree and half disagree with him. From my own view, altering design and implementation for the sake of testing is not favourable.
It is better if developers can write code without any concern about mocking input. However, abandoning Unit Testing for the sake of a simple and convenient life is too extreme. The right solution should be designing the system in such a way that business logic is not so tightly coupled to framework or infrastructure. This is what is called Domain Driven Design. Domain Driven Design For a newbie, Domain Driven Design gives us a system with the following layers. If you notice, the above diagram has more abstract layers than Rails or the Java adoptions of Rails such as the Play framework. I understand that creating more abstract layers can cause a bloated system, but for DDD it is a reasonable compromise. Let’s elaborate further on the content of each layer: Infrastructure This layer is where you store your repository implementations or any other environment-specific concerns. For infrastructure, keep the API as simple and dumb as possible and avoid having any business logic implemented here. For this layer, Unit Test is a joke. If there is anything to write, it should be an integration test, which works with a real database. Domain The Domain layer is the most important layer. It contains all the system business logic without any framework, infrastructure or environment concern. Your implementation should look like a direct translation of user requirements. Any input, output or parameter is a POJO only. The Domain layer should be the first layer to be implemented. To fully complete the logic, you may need the interface/API of the infrastructure layer. It is a best practice to keep the API in the Domain layer and the concrete implementation in the Infrastructure layer. The best kind of test case for the Domain layer is the Unit Test, as your concern is not the system UI or environment. Therefore, it helps developers avoid the dirty work of mocking framework objects.
For mocking the internal state of an object, my preferred choice is using a Reflection utility to set up objects rather than exposing internal variables through setters. Application Layer/User Interface The Application Layer is where you start thinking about how to present your business logic to the customer. If the logic is complex or involves many consecutive requests, it is possible to create Facades. Reaching this point, developers should think more about clients than about the system. The major concerns should be the customer’s devices, UI responsiveness, load balancing, stateless or stateful sessions, and Restful APIs. This is the place for developers to showcase framework talent and knowledge. For this layer, the better kind of test case is the functional/integration test. As above, try your best to avoid having any business logic in the Application Layer. Why is it hard to write Unit Tests in Rails? Now, if you look back at Rails or the Play framework, there is no clear separation of layers like the above. The Controllers render inputs and outputs and may contain business logic as well. Similar behaviour applies if you use the Servlet API without adding any additional layer. The Domain object in Rails is an active record and is tightly coupled to the database schema. Hence, for whatever unit of code developers want to write test cases, the inputs and outputs are not POJOs. This makes writing Unit Tests tough. We should not blame DHH for this design, as he follows another philosophy of software development with many benefits like simple design, low development effort and quick feedback. However, I myself do not follow and adopt all of his ideas for developing enterprise applications. Some of his ideas, like convention over configuration, are great and did cause a major mindset change in the developer world, but other ideas end up as trade-offs. Being able to quickly bring up a website may later turn into trouble implementing features that Rails/Play do not support.
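The reflection-based setup mentioned above can be sketched in a few lines of plain Java. This is only an illustrative sketch: the Order class, its status field and the TestFieldUtil helper are hypothetical examples rather than code from any real project, and in practice you might reach for an existing utility such as Spring's ReflectionTestUtils instead of rolling your own.

```java
import java.lang.reflect.Field;

// Hypothetical domain object whose internal state we want to arrange in a
// test without adding setters that exist only for testing.
class Order {
    private String status = "NEW";

    boolean isShippable() {
        return "PAID".equals(status);
    }
}

// Minimal reflection helper: writes a private field directly.
final class TestFieldUtil {
    static void setField(Object target, String fieldName, Object value) {
        try {
            Field field = target.getClass().getDeclaredField(fieldName);
            field.setAccessible(true);
            field.set(target, value);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}

public class ReflectionSetupDemo {
    public static void main(String[] args) {
        Order order = new Order();
        TestFieldUtil.setField(order, "status", "PAID");
        System.out.println(order.isShippable()); // prints "true"
    }
}
```

The helper keeps the domain class free of test-only setters while still letting a unit test arrange whatever internal state a scenario needs.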
Conclusion Unit Tests are hard to write if your business logic is tightly coupled to a framework. Focusing on and developing business logic first may help you create a better design. Each kind of component suits a different kind of test case. This is my own view of testing. If you have any other opinions, please provide some comments.Reference: Testing effectively from our JCG partner Tony Nguyen at the Developers Corner blog....

Connecting to Cassandra from Java

In my post Hello Cassandra, I looked at downloading the Cassandra NoSQL database and using cqlsh to connect to a Cassandra database. In this post, I look at the basics of connecting to a Cassandra database from a Java client. Although there are several frameworks available for accessing the Cassandra database from Java, I will use the DataStax Java Client JAR in this post. The DataStax Java Driver for Apache Cassandra is available on GitHub. The datastax/java-driver GitHub project page states that it is a “Java client driver for Apache Cassandra” that “works exclusively with the Cassandra Query Language version 3 (CQL3)” and is “licensed under the Apache License, Version 2.0.”

The Java Driver 2.0 for Apache Cassandra page provides a high-level overview and architectural details about the driver. Its Writing Your First Client section provides code listings and explanations regarding connecting to Cassandra with the Java driver and executing CQL statements from Java code. The code listings in this post are adaptations of those examples applied to my example cases.

The Cassandra Java Driver has several dependencies. The Java Driver 2.0 for Apache Cassandra documentation includes a page called Setting up your Java development environment that outlines the Java Driver 2.0's dependencies: cassandra-driver-core-2.0.1.jar (datastax/java-driver 2.0), netty-3.9.0-Final.jar (netty direct), guava-16.0.1.jar (Guava 16 direct), metrics-core-3.0.2.jar (Metrics Core), and slf4j-api-1.7.5.jar (slf4j direct). I also found that I needed to place the lz4 library (which provides the LZ4Factory class) and snappy-java on the classpath.

The next code listing is of a simple class called CassandraConnector.

CassandraConnector.java

package com.marxmart.persistence;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;
import com.datastax.driver.core.Metadata;
import com.datastax.driver.core.Session;

import static java.lang.System.out;

/**
 * Class used for connecting to Cassandra database.
 */
public class CassandraConnector
{
   /** Cassandra Cluster. */
   private Cluster cluster;

   /** Cassandra Session. */
   private Session session;

   /**
    * Connect to Cassandra Cluster specified by provided node IP
    * address and port number.
    *
    * @param node Cluster node IP address.
    * @param port Port of cluster host.
    */
   public void connect(final String node, final int port)
   {
      this.cluster = Cluster.builder().addContactPoint(node).withPort(port).build();
      final Metadata metadata = cluster.getMetadata();
      out.printf("Connected to cluster: %s\n", metadata.getClusterName());
      for (final Host host : metadata.getAllHosts())
      {
         out.printf("Datacenter: %s; Host: %s; Rack: %s\n",
            host.getDatacenter(), host.getAddress(), host.getRack());
      }
      session = cluster.connect();
   }

   /**
    * Provide my Session.
    *
    * @return My session.
    */
   public Session getSession()
   {
      return this.session;
   }

   /** Close cluster. */
   public void close()
   {
      cluster.close();
   }
}

The above connecting class could be invoked as shown in the next code listing.

Code Using CassandraConnector

/**
 * Main function for demonstrating connecting to Cassandra with host and port.
 *
 * @param args Command-line arguments; first argument, if provided, is the
 *    host and second argument, if provided, is the port.
 */
public static void main(final String[] args)
{
   final CassandraConnector client = new CassandraConnector();
   final String ipAddress = args.length > 0 ? args[0] : "localhost";
   final int port = args.length > 1 ? Integer.parseInt(args[1]) : 9042;
   out.println("Connecting to IP Address " + ipAddress + ":" + port + "...");
   client.connect(ipAddress, port);
   client.close();
}

The example code in that last code listing specifies a default node of localhost and a default port of 9042. This port number is specified in the cassandra.yaml file located in the apache-cassandra/conf directory.
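One detail worth noting before looking at the table used in this post's examples: the later code listings qualify the table name with a keyspace called movies_keyspace, which is never created in the listings shown here. Assuming that keyspace does not already exist, it could be created in cqlsh before creating the table; the replication settings below are an assumption suitable only for a single-node development setup, not something prescribed by this post.

```sql
-- Hypothetical keyspace creation for the movies_keyspace referenced later;
-- SimpleStrategy with replication_factor 1 is a development-only assumption.
CREATE KEYSPACE movies_keyspace
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};
```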
The Cassandra 1.2 documentation has a page on The cassandra.yaml configuration file, which describes the cassandra.yaml file as “the main configuration file for Cassandra.” Incidentally, another important configuration file in that same directory is cassandra-env.sh, which defines numerous JVM options for the Java-based Cassandra database.

For the examples in this post, I will be using a MOVIES table created with the following Cassandra Query Language (CQL):

createMovie.cql

CREATE TABLE movies
(
   title varchar,
   year int,
   description varchar,
   mmpa_rating varchar,
   dustin_rating varchar,
   PRIMARY KEY (title, year)
);

The above file can be executed within cqlsh with the command source 'C:\cassandra\cql\examples\createMovie.cql' (assuming that the file is placed in the specified directory, of course), and this is demonstrated in the next screen snapshot. One thing worth highlighting here is that the columns that were created as varchar datatypes are described as text datatypes by the cqlsh describe command.

Although I created this table directly via cqlsh, I also could have created the table in Java as shown in the next code listing and associated screen snapshot that follows the code listing.

Creating Cassandra Table with Java Driver

final String createMovieCql =
     "CREATE TABLE movies_keyspace.movies (title varchar, year int, description varchar, "
   + "mmpa_rating varchar, dustin_rating varchar, PRIMARY KEY (title, year))";
client.getSession().execute(createMovieCql);

The above code accesses an instance variable named client. A shell of the class in which that instance variable might exist is shown next.

Shell of MoviePersistence.java

package dustin.examples.cassandra;

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;

import java.util.Optional;

import static java.lang.System.out;

/**
 * Handles movie persistence access.
 */
public class MoviePersistence
{
   private final CassandraConnector client = new CassandraConnector();

   public MoviePersistence(final String newHost, final int newPort)
   {
      out.println("Connecting to IP Address " + newHost + ":" + newPort + "...");
      client.connect(newHost, newPort);
   }

   /**
    * Close my underlying Cassandra connection.
    */
   private void close()
   {
      client.close();
   }
}

With the MOVIES table created as shown above (either by cqlsh or with Java client code), the next steps are to manipulate data related to this table. The next code listing shows a method that could be used to write new rows to the MOVIES table.

/**
 * Persist provided movie information.
 *
 * @param title Title of movie to be persisted.
 * @param year Year of movie to be persisted.
 * @param description Description of movie to be persisted.
 * @param mmpaRating MMPA rating.
 * @param dustinRating Dustin's rating.
 */
public void persistMovie(
   final String title, final int year, final String description,
   final String mmpaRating, final String dustinRating)
{
   client.getSession().execute(
      "INSERT INTO movies_keyspace.movies (title, year, description, mmpa_rating, dustin_rating) VALUES (?, ?, ?, ?, ?)",
      title, year, description, mmpaRating, dustinRating);
}

With the data inserted into the MOVIES table, we need to be able to query it. The next code listing shows one potential implementation for querying a movie by title and year.

Querying with Cassandra Java Driver

/**
 * Returns movie matching provided title and year.
 *
 * @param title Title of desired movie.
 * @param year Year of desired movie.
 * @return Desired movie if match is found; Optional.empty() if no match is found.
 */
public Optional<Movie> queryMovieByTitleAndYear(final String title, final int year)
{
   final ResultSet movieResults = client.getSession().execute(
      "SELECT * from movies_keyspace.movies WHERE title = ? AND year = ?", title, year);
   final Row movieRow = movieResults.one();
   final Optional<Movie> movie =
        movieRow != null
      ? Optional.of(new Movie(
           movieRow.getString("title"),
           movieRow.getInt("year"),
           movieRow.getString("description"),
           movieRow.getString("mmpa_rating"),
           movieRow.getString("dustin_rating")))
      : Optional.empty();
   return movie;
}

If we need to delete data already stored in the Cassandra database, this is easily accomplished as shown in the next code listing.

Deleting with Cassandra Java Driver

/**
 * Deletes the movie with the provided title and release year.
 *
 * @param title Title of movie to be deleted.
 * @param year Year of release of movie to be deleted.
 */
public void deleteMovieWithTitleAndYear(final String title, final int year)
{
   final String deleteString = "DELETE FROM movies_keyspace.movies WHERE title = ? and year = ?";
   client.getSession().execute(deleteString, title, year);
}

As the examples in this blog post have shown, it's easy to access Cassandra from Java applications using the Java Driver. It is worth noting that Cassandra is written in Java. The advantage of this for Java developers is that many of Cassandra's configuration values are JVM options that Java developers are already familiar with. The cassandra-env.sh file in the Cassandra conf directory allows one to specify standard JVM options used by Cassandra (such as the heap sizing parameters -Xms, -Xmx, and -Xmn), HotSpot-specific JVM options (such as -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath, garbage collection tuning options, and garbage collection logging options), enabling assertions (-ea), and exposing Cassandra for remote JMX management.

Speaking of Cassandra and JMX, Cassandra can be monitored via JMX as discussed in the “Monitoring using JConsole” section of Monitoring a Cassandra cluster. The book excerpt The Basics of Monitoring Cassandra also discusses using JMX to monitor Cassandra. Because Java developers are more likely to be familiar with JMX clients such as JConsole and VisualVM, this is an intuitive approach to monitoring Cassandra for Java developers.
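The queryMovieByTitleAndYear method shown earlier constructs instances of a Movie class whose source is not shown in this post. A minimal immutable sketch consistent with the values passed to its constructor might look like the following; the class shape, field names, and accessor names are all assumptions, not the original author's code.

```java
/**
 * Simple immutable holder for a row of the MOVIES table.
 * This class does not appear in the original post; its shape is
 * inferred from the constructor arguments used in the query listing.
 */
public class Movie
{
   private final String title;
   private final int year;
   private final String description;
   private final String mmpaRating;
   private final String dustinRating;

   public Movie(final String newTitle, final int newYear, final String newDescription,
                final String newMmpaRating, final String newDustinRating)
   {
      this.title = newTitle;
      this.year = newYear;
      this.description = newDescription;
      this.mmpaRating = newMmpaRating;
      this.dustinRating = newDustinRating;
   }

   public String getTitle() { return this.title; }
   public int getYear() { return this.year; }
   public String getDescription() { return this.description; }
   public String getMmpaRating() { return this.mmpaRating; }
   public String getDustinRating() { return this.dustinRating; }

   @Override
   public String toString()
   {
      return this.title + " (" + this.year + "): " + this.description;
   }
}
```

Keeping the class immutable (final fields set only in the constructor) is a natural fit here because each instance simply mirrors one row returned by the driver.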
Another advantage of Cassandra's Java roots is that Java classes used by Cassandra can be extended and Cassandra can be customized via Java. For example, custom data types can be implemented by extending the AbstractType class.

Conclusion

The Cassandra Java Driver makes it easy to access Cassandra from Java applications. Cassandra also features significant Java-based configuration and monitoring and can even be customized with Java.

Reference: Connecting to Cassandra from Java from our JCG partner Dustin Marx at the Inspired by Actual Events blog.