When to use Apache Camel?

Apache Camel is one of my favorite open source frameworks in the JVM / Java environment. It enables easy integration of different applications which use several protocols and technologies. This article shows when to use Apache Camel and when to use other alternatives.

The Problem: Enterprise Application Integration (EAI)

Enterprise application integration is necessary in almost every company, as new products and applications keep arriving. Integrating these applications creates several problems. New paradigms come up every decade, for example client / server communication, Service-oriented Architecture (SOA) or Cloud Computing. Besides, different interfaces, protocols and technologies emerge. Where data used to be stored in files many years ago, SQL databases are common today, and some use cases even call for NoSQL databases. Synchronous remote procedure calls or asynchronous messaging are used to communicate via technologies such as RMI, SOAP Web Services, REST or JMS. A lot of software silos exist. Nevertheless, all applications and products of these decades have to communicate with each other to work together perfectly.

Enterprise Integration Patterns (EIP)

Of course, you could reinvent the wheel for each problem, write some spaghetti code and let the applications work together. Unfortunately, your management will not like the long-term perspective of this solution. Enterprise Integration Patterns (www.eaipatterns.com) help to break problems into smaller pieces and to integrate applications in standardized ways. Using them, you always apply the same concepts to transform and route messages, so it is a good idea to stop reinventing the wheel each time you face a new problem.

Alternatives for integrating Systems

Three alternatives exist for integrating applications. EIPs can be used in each solution.

Solution 1: Own custom Solution

Implement an individual solution that works for your problem without separating the problem into little pieces.
This works and is probably the fastest alternative for small use cases. You have to code everything yourself, and maintenance will probably be high if team members change.

Solution 2: Integration Framework

Use a framework which helps to integrate applications in a standardized way using several integration patterns. It reduces effort a lot. Every developer will easily understand what you did (if he knows the framework used).

Solution 3: Enterprise Service Bus (ESB)

Use an enterprise service bus to integrate your applications. Under the hood, the ESB also uses an integration framework, but there is much more functionality, such as business process management, a registry or business activity monitoring. You can usually configure routing and the like within a graphical user interface – you have to decide for yourself whether that reduces complexity and effort. Usually, an ESB is a complex product and the learning curve is much higher. But in return you get a very powerful tool which should cover all your needs.

What is Apache Camel?

Apache Camel is a lightweight integration framework which implements all EIPs. Thus, you can easily integrate different applications using the required patterns. You can use Java, Spring XML, Scala or Groovy. Almost every technology you can imagine is available, for example HTTP, FTP, JMS, EJB, JPA, RMI, JMX, LDAP, Netty, and many, many more (of course most ESBs also offer support for them). Besides, custom components can be created very easily. You can deploy Apache Camel as a standalone application, in a web container (e.g. Tomcat or Jetty), in a JEE application server (e.g. JBoss AS or WebSphere AS), in an OSGi environment or in combination with a Spring container. If you need more information about Apache Camel, its web site is a good starting point: http://camel.apache.org. This article is not a technical introduction. :-)

When to use Apache Camel?
Apache Camel is awesome if you want to integrate several applications with different protocols and technologies. Why? There is one feature (besides supporting so many technologies and different programming languages) which I really appreciate a lot: every integration uses the same concepts! No matter which protocol you use. No matter which technology you use. No matter which domain specific language (DSL) you use – it can be Java, Scala, Groovy or Spring XML. You do it the same way. Always! There is a producer, there is a consumer, there are endpoints, there are EIPs, there are custom processors / beans (e.g. for custom transformation) and there are parameters (e.g. for credentials).

Here is one example which contains all of these concepts using the Java DSL:

from("activemq:orderQueue").transacted().log("processing order").to("mock:notYetExistingInterface")

Now let's look at another example using the Scala DSL:

"file:incomingOrders?noop=true" process(new TransformationProcessor) to "jdbc:orderDatastore"

If you are a developer, you should be able to recognize what these routes do, shouldn't you?

Two other very important features are the support for error handling (e.g. using a dead letter queue) and automatic testing. You can test EVERYTHING very easily using a Camel extension of JUnit! And again, you always use the same concepts, no matter which technology you have to support. Apache Camel is mature and production ready. It offers scalability, transaction support, concurrency and monitoring. Commercial support is available from FuseSource: http://fusesource.com/products/enterprise-camel

When NOT to use Apache Camel?

Well, yes, there exist some use cases where I would not use Apache Camel. I have illustrated this in the following graphic (remember the three alternatives I mentioned above: own custom integration, integration framework, enterprise service bus). If you have to integrate just one or two technologies, e.g.
reading a file or sending a JMS message, it is probably much easier and faster to use some well known libraries such as Apache Commons IO or Spring's JmsTemplate. But please do always use these helper classes; pure File or JMS integration with try-catch-error is soooo ugly!

Although FuseSource offers commercial support, I would not use Apache Camel for very large integration projects. An ESB is the right tool for this job in most cases. It offers many additional features such as BPM or BAM. Of course, you could also use several single frameworks or products and 'create' your own ESB, but this is a waste of time and money (in my opinion). Several production-ready ESBs are already available. Usually, open source solutions are more lightweight than commercial products such as WebSphere Message Broker (you probably need a day or two just to install the evaluation version of this product)! Well-known open source ESBs are Apache ServiceMix, Mule ESB and WSO2 ESB. By the way: did you know that some ESBs are based on the Apache Camel framework (e.g. Apache ServiceMix and the Talend ESB)? Thus, if you like Apache Camel, you could also use Apache ServiceMix or the commercial Fuse ESB which is based on ServiceMix.

Conclusion

Apache Camel is an awesome framework to integrate applications with different technologies. The best thing is that you always use the same concepts. Besides, support for many, many technologies, good error handling and easy automatic testing make it ready for integration projects. Because the number of applications and technologies in each company will increase further, Apache Camel has a great future. Today we have application silos; in ten years we will probably have cloud silos which are deployed in Google App Engine, Cloud Foundry, Amazon EC2, or any other cloud service. So I hope that Apache Camel will not sleep through the cloud era (e.g. by offering components to connect to cloud frameworks easily).

But that's the future. At the moment you really should try this framework out if you have to integrate applications in the JVM / Java environment. By the way: I know that I praise Camel in this article, but I am neither a Camel committer nor working for FuseSource. I just really like this framework. Best regards!

Reference: When to use Apache Camel? from our JCG partner Kai Wahner at the Blog about Java EE / SOA / Cloud Computing blog....

It’s About Confidentiality and Integrity (not so much Availability)

Everyone knows the C-I-A triad for information security: security is about protecting the Confidentiality, Integrity and Availability of systems and data. In a recent post, Warren Axelrod argues that Availability is the most important of these factors for security, more important than Integrity and Confidentiality – that C-I-A should be A-I-C. I don't agree.

Protecting the Confidentiality of customer data, sensitive business data and system resources is a critical priority for information security. It's what you should think about first in security. And protecting the Integrity of data and systems – through firewalls, network engineering and operational hardening, access control, data validation and encoding, auditing, digital signatures and so on – is the other critical priority for information security.

Availability is a devops problem, not a security problem

Axelrod makes the point that it doesn't matter if the Confidentiality or Integrity of data is protected if the system isn't available, which is true. But Availability is already the responsibility of application architects and operations engineers. Their job is designing and building scalable applications that can handle load surges and failures, and architecting and operating technical infrastructure that is equally resilient to load and failures. Availability of systems is one of the key ways that their success is measured and one of the main things that they get paid to take care of. Availability of systems and data is a devops problem that requires application developers, architects and operations engineers to work together. I don't see where security experts add value in ensuring Availability – with the possible exception of helping architects and engineers understand how to protect themselves from DDoS attacks. When it comes to Availability, I look to people like James Hamilton and Michael Nygard and John Allspaw for guidance and solutions, not to application security experts.
In information security, C-I-A should be C and I, with a little bit of A – not the other way around. Reference: It’s About Confidentiality and Integrity (not so much Availability) from our JCG partner Jim Bird at the Building Real Software blog....

Plan B? That is Plan N … Nothing. Jigsaw follows in 2015

What a day. When the typical European is winding down, people in the States are starting with coffee. This is why I had a good night's sleep over the recent news by Mark Reinhold. In a post titled 'Project Jigsaw: Late for the train' he proposes to 'defer Project Jigsaw to the next release, Java 9.' With the modularization efforts being one of the key topics of Java's future at recent conferences and in blog posts, this was a quite surprising move. Yesterday everybody was speculating about whether there would be a JSR for Jigsaw or not. Today we know why this didn't happen. And I am disappointed about that. Here is why.

Early notification? No – it's salami slicing! Or?

My first impression was: hey, you guys don't get it. Dropping features late in the timeline isn't good for the community. But Donald made me realize that Java 8 is scheduled for May 2013.

@myfear @jponge @alexismp Again, I'm truly sorry that an 18 month in advance proposal isn't good enough for you. — DonaldOJDK (@DonaldOJDK) July 17, 2012

That basically means we are informed 18 months ahead. But you guessed right: the reason for me being disappointed isn't the timing. It's about the way the future of Java has been communicated and used for marketing. Bert Ertman nailed it for me with his tweet:

Plan B was promised for fall '12. Then became fall '13 and now one of its key features becomes fall '15. Boy, what a mess! #jigsaw — Bert Ertman (@BertErtman) July 17, 2012

There seems to be a pattern here: slicing everything until nothing relevant remains. But wait. Haven't we all seen the safe harbor slides? Have we been ignoring them? Or aren't we aware of their real importance? Could all this be an agile planning process which simply isn't communicated in the right way? The community, as the most important stakeholder (beside Oracle internal interests), obviously wasn't aware of the true reliability of the statements and plans. I have seen that before. And struggled with the same approach.
Outlining the planning a bit more, or even adding a burn down chart for the progress, would be a very helpful instrument for a sneak peek into what's actually happening with the development. No, I'm not proposing to see all the little numbers, but I would love to have an indicator about stuff that is working as planned and stuff that is … being postponed. I don't want to miss the chance to say thanks to Donald and Mark and also Dalibor and the many others from the OpenJDK/Oracle team for listening to the community. I am thankful to see them on Twitter, email, blogs, forums and everywhere around, gathering feedback and trying to improve the way Oracle communicates proposals and decisions.

The real reasons?

Are there any more reasons behind this than the ones Mark expressed in his blog? 'Some significant technical challenges remain' and there is 'not enough time left for the broad evaluation, review, and feedback which such a profound change to the Platform demands.' Following Mark's Twitter stream also reveals some more insights here: 'Started on a shoestring at Sun, barely survived integration into Oracle, only fully staffed about a year ago …' (@mreinhold). To an outsider the news sounded like: wow, that stuff was started years ago and nobody was actually coding there? With these insights from Mark (I hope he writes another blog post about this) it actually sounds a little different. It might be that the truth is much simpler here. And it also would be good to know what the community can do to help. Mark: Go on! Keep lifting the formerly secret parts and try to facilitate what the community has to offer!

Dreams of Java on iOS over?

Do you remember what was said at the last JavaOne? The iOS and Android versions of JavaFX? Mobile goodness is back with Java, since Java ME never really lifted off? Awesome. One of the most prominent requirements for that to happen was the ability to repackage the JDK to the right size for the job.
Jigsaw was the idea behind that. As of today Mark proposes to introduce 'one or more compact Profiles in the Java SE 8 spec' (http://mail.openjdk.java.net/pipermail/java-se-8-spec-observers/2012-July/000001.html) to make up for the missing module system. This in fact wouldn't be a 'module' system but simply 'just different ways to build the JDK, resulting in different-sized JREs' (@mreinhold). Yeah. Ok. And asked for the implications that might have, the answer was: 'We've already been preparing for the complexity of building and testing a modular platform.' (@mreinhold) It seems as if the building blocks of that proposal are in place and no additional overhead is needed to get the mobile promises on the road. So we will not have to fear >100 MB downloads for JavaFX based apps. I don't know if they will meet the proposed distribution size starting at 10 MB, but anyway I expect it to be of a reasonable size.

We don't need Jigsaw!? Really?

We already have OSGi, JBoss Modules, the HK2 kernel abstraction. A lot of stuff is in place, and Jigsaw would only have helped the JDK. Really? I'm looking at it from a slightly different perspective. Even if it is true that a module system would have helped the JDK in the first place, the dependent platform specifications (like Java EE) are also in big need of a module system. And Java simply hasn't anything to offer here. At least nothing that is within the reach of the JCP. So, looking for modularization approaches as of today would mean embracing non-JCP technologies. And we all know that this will not happen. So, looking at Java EE 7 and beyond, we can be quite sure that this proposal is putting a lot of pressure on the internal discussions. Not to forget the additional years the competitors gain in entering and deciding the field. If you ask me, the worst thing that could happen is that Jigsaw ends up being used JDK-internally only. There is a good chance of exactly that happening.

What is left of Java 8?
With Jigsaw stripped out of the Java 8 timeframe, the most important question is what is left. Still under the safe harbor statements, that's basically:

- Project Lambda (JSR 335) will bring closures to the Java programming language.
- New Date/Time API (JSR 310)
- Type Annotations (JSR 308)
- A couple of smaller features

With the new scope Java 8 will ship on time, around September 2013, according to Mark. Feeling better now? I don't know. Even a good night's sleep didn't bring back that comfy feeling I had a few days ago talking about modularization with Java. But I think I have to get over it, and this is still another one of those 'bad-hair' days which don't have a real reason for feeling sad. Seems as if I personally have to look at the alternatives. Waiting until 2015 is not an option. OSGi, JBoss Modules … here I come.

Update 20.07.12

Alexis has put up an interesting piece about motivation and the true debacle behind Jigsaw: 'As I wrote above, Oracle has the resources to declare Jigsaw a strategic goal. I can agree that it may be hard to deliver by late 2013 but waiting for 2016 is effectively killing Jigsaw and encouraging everyone to look at alternatives which will jeopardize yet even more Jigsaw's chances of ever seeing the light of day. In fact, even Oracle is considering profiles in Java 8, an ugly band-aid if you ask me. One you'll need to painfully tear off to get proper modularity in the platform. Jigsaw really shouldn't be seen as "a new feature", to me it's really the Java reboot some people have been calling for a long time. Only a compatible one.'

Reference: Plan B? That is Plan N … Nothing. Jigsaw follows in 2015 from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

Java Lock Implementations

We all use 3rd party libraries as a normal part of development. Generally, we have no control over their internals. The libraries provided with the JDK are a typical example. Many of these libraries employ locks to manage contention.

JDK locks come with two implementations. One uses atomic CAS style instructions to manage the claim process. CAS instructions tend to be the most expensive type of CPU instructions and on x86 have memory ordering semantics. Often locks are un-contended, which gives rise to a possible optimisation whereby a lock can be biased to the un-contended thread using techniques to avoid the use of atomic instructions. This biasing allows a lock, in theory, to be quickly reacquired by the same thread. If the lock turns out to be contended by multiple threads, the algorithm will revert from being biased and fall back to the standard approach using atomic instructions. Biased locking became the default lock implementation with Java 6.

When respecting the single writer principle, biased locking should be your friend. Lately, when using the sockets API, I decided to measure the lock costs and was surprised by the results. I found that my un-contended thread was incurring a bit more cost than I expected from the lock. I put together the following test to compare the cost of the current lock implementations available in Java 6.

The Test

For the test I shall increment a counter within a lock, and increase the number of contending threads on the lock. This test will be repeated for the 3 major lock implementations available to Java:

- Atomic locking on Java language monitors
- Biased locking on Java language monitors
- ReentrantLock introduced with the java.util.concurrent package in Java 5.

I'll also run the tests on the 3 most recent generations of the Intel CPU. For each CPU I'll execute the tests up to the maximum number of concurrent threads the core count will support. The tests are carried out with 64-bit Linux (Fedora Core 15) and Oracle JDK 1.6.0_29.
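As a brief aside on the CAS instructions mentioned above: the JDK exposes them directly through the java.util.concurrent.atomic package. The following minimal sketch (my own illustration, not part of the original test) shows the compare-and-set semantics that un-biased monitor locks rely on under the hood:

```java
import java.util.concurrent.atomic.AtomicLong;

public class CasDemo {
    public static void main(String[] args) {
        AtomicLong counter = new AtomicLong(0L);

        // compareAndSet maps to the CPU's CAS instruction: it writes the new
        // value only if the current value still matches the expected one.
        boolean first = counter.compareAndSet(0L, 1L);  // succeeds: value was 0
        boolean second = counter.compareAndSet(0L, 2L); // fails: value is now 1

        System.out.println(first + " " + second + " " + counter.get());
        // prints: true false 1
    }
}
```

A failed CAS simply returns false rather than blocking, which is why lock-free algorithms built on it can loop and retry instead of paying for kernel arbitration.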
The Code

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.CyclicBarrier;

import static java.lang.System.out;

public final class TestLocks implements Runnable
{
    public enum LockType { JVM, JUC }
    public static LockType lockType;

    public static final long ITERATIONS = 500L * 1000L * 1000L;
    public static long counter = 0L;

    public static final Object jvmLock = new Object();
    public static final Lock jucLock = new ReentrantLock();

    private static int numThreads;
    private static CyclicBarrier barrier;

    public static void main(final String[] args) throws Exception
    {
        lockType = LockType.valueOf(args[0]);
        numThreads = Integer.parseInt(args[1]);

        runTest(numThreads); // warm up
        counter = 0L;

        final long start = System.nanoTime();
        runTest(numThreads);
        final long duration = System.nanoTime() - start;

        out.printf("%d threads, duration %,d (ns)\n", numThreads, duration);
        out.printf("%,d ns/op\n", duration / ITERATIONS);
        out.printf("%,d ops/s\n", (ITERATIONS * 1000000000L) / duration);
        out.println("counter = " + counter);
    }

    private static void runTest(final int numThreads) throws Exception
    {
        barrier = new CyclicBarrier(numThreads);
        Thread[] threads = new Thread[numThreads];

        for (int i = 0; i < threads.length; i++)
        {
            threads[i] = new Thread(new TestLocks());
        }

        for (Thread t : threads)
        {
            t.start();
        }

        for (Thread t : threads)
        {
            t.join();
        }
    }

    public void run()
    {
        try
        {
            barrier.await();
        }
        catch (Exception e)
        {
            // don't care
        }

        switch (lockType)
        {
            case JVM: jvmLockInc(); break;
            case JUC: jucLockInc(); break;
        }
    }

    private void jvmLockInc()
    {
        long count = ITERATIONS / numThreads;
        while (0 != count--)
        {
            synchronized (jvmLock)
            {
                ++counter;
            }
        }
    }

    private void jucLockInc()
    {
        long count = ITERATIONS / numThreads;
        while (0 != count--)
        {
            jucLock.lock();
            try
            {
                ++counter;
            }
            finally
            {
                jucLock.unlock();
            }
        }
    }
}

Script to run the tests:

set -x
for i in {1..8}; do java -XX:-UseBiasedLocking TestLocks JVM $i; done
for i in {1..8}; do java -XX:+UseBiasedLocking TestLocks JVM $i; done
for i in {1..8}; do java TestLocks JUC $i; done

The Results

(Figures 1–3: throughput charts, one per CPU generation; images not reproduced here.)

Biased locking should no longer be the default lock implementation on modern Intel processors. I recommend you measure your applications and experiment with the -XX:-UseBiasedLocking JVM option to determine if you can benefit from using an atomic lock based algorithm for the un-contended case.

Observations

- Biased locking, in the un-contended case, is ~10% more expensive than atomic locking. It seems that for recent CPU generations the cost of atomic instructions is less than the necessary housekeeping for biased locks.
- Previous to Nehalem, lock instructions would assert a lock on the memory bus to perform these atomic operations, and each would cost more than 100 cycles. Since Nehalem, atomic instructions can be handled local to a CPU core, and typically cost only 10-20 cycles if they do not need to wait on the store buffer to empty while enforcing memory ordering semantics.
- As contention increases, language monitor locks quickly reach a throughput limit regardless of thread count.
- ReentrantLock gives the best un-contended performance and scales significantly better with increasing contention compared to language monitors using synchronized.
- ReentrantLock has an odd characteristic of reduced performance when 2 threads are contending. This deserves further investigation.
- Sandybridge suffers from the increased latency of atomic instructions I detailed in a previous article when contended thread count is low.
As contended thread count continues to increase, the cost of the kernel arbitration tends to dominate, and Sandybridge shows its strength with increased memory throughput.

Conclusion

When developing your own concurrent libraries I would recommend ReentrantLock rather than the synchronized keyword, due to the significantly better performance on x86, if a lock-free alternative algorithm is not a viable option.

Update 20-Nov-2011

Dave Dice has pointed out that biased locking is not implemented for locks created in the first few seconds of JVM startup. I'll re-run my tests this week and post the results. I've had some more quality feedback that suggests my results could be potentially invalid. Micro benchmarks can be tricky, but the advice of measuring your own application in the large still stands. A re-run of the tests can be seen in this follow-on blog taking account of Dave's feedback.

Reference: Java Lock Implementations from our JCG partner Martin Thompson at the Mechanical Sympathy blog....
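A side note on the ReentrantLock recommendation above: beyond raw performance, ReentrantLock also offers capabilities that synchronized blocks lack, such as non-blocking and timed lock acquisition. The class below is my own minimal sketch, not part of the original benchmark:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static long counter = 0L;

    // Returns true if the increment happened, false if the lock was busy.
    static boolean tryIncrement() {
        if (lock.tryLock()) { // acquire only if free: never blocks
            try {
                counter++;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // caller can back off, retry, or do other work
    }

    public static void main(String[] args) throws InterruptedException {
        // Uncontended: tryLock succeeds immediately.
        System.out.println(tryIncrement()); // prints "true"

        // Timed acquisition: wait up to 10 ms before giving up.
        if (lock.tryLock(10, TimeUnit.MILLISECONDS)) {
            try {
                counter++;
            } finally {
                lock.unlock();
            }
        }
        System.out.println(counter); // prints "2"
    }
}
```

Neither form is possible with a language monitor, which only offers unconditional blocking acquisition.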

Database Usage Practices

After a long period of intense thinking, research and brainstorming sessions, I have finally made up my mind on a few best practices that can be considered while using a database. All database servers today have capabilities to optimize the queries sent to the server; ironically, they lack the capability to alter a query in such a way that throughput is maximized. It is up to the developer to analyze and optimize the queries to attain maximum throughput. In this section we will look at some useful tips to optimize queries to produce the optimal output.

- Prefer prepared statements over create statements. A prepared statement is syntax-checked only once, while create statements compile the query every time they are executed. Considering today's servers and application requirements this might not be a critical factor, since the benefit is only a few milliseconds or less. A more important benefit of using prepared statements is the use of bind variables: quotes and other characters used for SQL injection and corrupting the database are taken care of by the driver itself, improving the security of the data and the database server.
- Use tools like EXPLAIN and ANALYZE. These are the most important tools for DBAs and developers to optimize their queries (in PostgreSQL). These commands give a real-time analysis of how queries perform. The EXPLAIN command shows the query execution plan, so we can identify bottlenecks in the plan and take appropriate measures to eliminate them.
- Take advantage of indexes. A database uses its indexing capabilities wherever it can. Some databases even rearrange the where clause so that results are returned faster. It is always best to put conditions on the indexed columns first in the where clause, followed by the other conditions.
- Maintain a cache for read-only queries and master tables. Master tables are tables that are very rarely modified. Every query on these tables results in I/O, which is the slowest process on any machine. It is a best practice to cache the master records in the application so that they are held in RAM and can be fetched much faster. If the master table itself is huge, this is a concern, since it might eat up your RAM and result in swapping, which is even worse. In such scenarios, it is better to extract the master records out of the main application and deploy a separate caching service for retrieving them.
- Use VACUUM frequently on transaction tables. Frequent updates/deletes of records in a table can result in lots of dead tuples. When a record is deleted, the memory allocated for it is not actually freed but marked as dead; this dead space can be reused to store another record that fits into it. Too many dead tuples can increase query execution time and bloat the table's indexes. The VACUUM command makes this dead space available for reuse, while VACUUM FULL additionally reclaims the space for the operating system.
- Prefer BETWEEN rather than <= and >=. Almost all major databases will optimize queries with <= and >=, but by using a BETWEEN clause we explicitly hand an already-optimized query to the driver. Even though this does not result in a huge difference in response time, it can be a good point to consider in complex queries.
- Use LIMIT 1 when fetching a unique row or when just one record is required for processing.
- Use LIMIT and OFFSET clauses when querying huge tables. This reduces query time since the database server doesn't have to scan the entire table.
- Always specify the required field names when executing SELECT/INSERT queries. This prevents the application from breaking when the schema of the table changes.
- In the where clause, place the predicate that will eliminate the greatest number of rows first.

Reference: Database Usage Practices from our JCG partner George Janeve at the Janeve.Me blog....
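To illustrate the prepared-statement tip above: the class, table and query below are hypothetical examples of my own, not from the original article. Naive string concatenation lets attacker input escape the SQL literal, while a PreparedStatement placeholder sends the value as data:

```java
// Hypothetical illustration: why bind variables beat string concatenation.
public class InjectionDemo {

    // Naive: attacker-controlled input is spliced straight into the SQL text.
    static String naiveQuery(String name) {
        return "SELECT id FROM users WHERE name = '" + name + "'";
    }

    public static void main(String[] args) {
        String attack = "x' OR '1'='1";
        // The quote in the input terminates the literal and changes the query:
        System.out.println(naiveQuery(attack));
        // -> SELECT id FROM users WHERE name = 'x' OR '1'='1'

        // With a PreparedStatement the driver binds the value and never
        // re-parses it as SQL (sketch; 'conn' would be a live java.sql.Connection):
        //   PreparedStatement ps =
        //       conn.prepareStatement("SELECT id FROM users WHERE name = ?");
        //   ps.setString(1, attack); // sent as data; the OR clause never executes
    }
}
```

The concatenated version matches every row in the hypothetical users table; the bound version matches none, because the whole attack string is compared as a single value.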

Logback: Logging revisited

Hi, I am back again with my rant about logging as an inherent part of any application design and development. I am a big fan of strong basics, and in my humble opinion logging is one of those often overlooked but basic, critical elements of any enterprise grade application. I have written about this before here. It is not really mandatory to read that article in order to make sense of the current one, but it might help to give it a cursory look, to set the context for this article.

In the first article, I introduced logging as a high benefit, low cost alternative to the omnipresent System.out.println() that all Java folks love so much. I used log4j in that article. Log4j is a solid framework and delivers on its promise. In all the years that I have used it, it has never let me down, and I can wholeheartedly recommend it. Having said that, there are a few alternatives which have been around in the market for a while, and I am happy to say that at least one of them seems to be challenging log4j on its own turf. I am talking about Logback.

It is certainly not the new kid on the block – and that is one of the reasons I am suggesting you consider it for enterprise grade applications to start with. A quick look at Maven Central suggests that the first version was published way back in 2006. Between 2006 and 8 June 2012 – which is when the latest version was pushed to Maven Central – there have been 46 versions. Compare this with log4j: the first version was pushed to Maven Central in 2005 and the last on 26 May 2012, and between these there have been a total of 14 different versions. I do not mean to use this data to compare the two frameworks; the only intent is to assure the reader that Logback has been around long enough, and is current enough, to be taken seriously.

Being around is one thing; making your mark is another.
As far as ambition and intent go, Logback makes it pretty clear that it intends to be the successor of log4j – and says so in clear words on its homepage. Of course there is an exhaustive list of features / benefits that Logback claims over log4j. You can read about them at this link. That's it really. The point of this article is that I am suggesting that, while designing and developing enterprise grade Java based applications, you look at logging a bit more carefully and also consider using Logback.

A few of the audience at this point, I am hoping, will like to roll up their sleeves, fire up their favorite editor and take Logback out for a spin. If you are one of them, then you and I have something in common. You might want to read on.

The very first thing that Logback promises is a faster implementation (at this link). Really? I would like to check that claim. I start by creating a vanilla Java application using Maven.

File: MavenCommands.bat

call mvn archetype:create ^
  -DarchetypeGroupId=org.apache.maven.archetypes ^
  -DgroupId=org.academy ^
  -DartifactId=logger

This unfortunately is preloaded with JUnit 3. I set up JUnit 4 and also add ContiPerf, so that I can run the tests multiple times – something that will come in handy for checking performance.

File: /logger/pom.xml

[...]
<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <junit.version>4.10</junit.version>
  <contiperf.version>2.2.0</contiperf.version>
</properties>
[...]

[...]
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>${junit.version}</version>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.databene</groupId>
  <artifactId>contiperf</artifactId>
  <version>${contiperf.version}</version>
  <scope>test</scope>
</dependency>

Also, I like to explicitly control the Java version that is being used to compile and execute my code.

File: /logger/pom.xml

[...]
<maven-compiler-plugin.version>2.0.2</maven-compiler-plugin.version>
<java.version>1.7</java.version>
[...]
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <version>${maven-compiler-plugin.version}</version>
  <configuration>
    <source>${java.version}</source>
    <target>${java.version}</target>
  </configuration>
</plugin>

Last of the configuration – for the time being. Slap on Surefire to run the unit tests.

File: /logger/pom.xml

[...]
<maven-surefire-plugin.version>2.12</maven-surefire-plugin.version>
[...]
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>${maven-surefire-plugin.version}</version>
      <dependencies>
        <dependency>
          <groupId>org.apache.maven.surefire</groupId>
          <artifactId>surefire-junit47</artifactId>
          <version>${maven-surefire-plugin.version}</version>
        </dependency>
      </dependencies>
      <configuration>
        <argLine>-XX:-UseSplitVerifier</argLine>
      </configuration>
    </plugin>

Please note, I have taken the pains of adding all these dependencies to this article with their versions, just to ensure that should you try this yourself, you know exactly what the software configuration of my test was. Now, let us finally add the unit test.

File: /logger/src/test/java/org/academy/AppTest.java

    public class AppTest {
        private final static Logger logger = LoggerFactory.getLogger(AppTest.class);

        @Rule
        public ContiPerfRule i = new ContiPerfRule();

        @Test
        @PerfTest(invocations = 10, threads = 1)
        @Required(max = 1200, average = 1000)
        public void test() {
            for (int i = 0; i < 10000; i++) {
                logger.debug("Hello {}", "world.");
            }
        }
    }

So, we have used the logger in the unit test but have not added an implementation of a logger. What I intend to do is to add log4j (with slf4j) and Logback (with its inherent support for slf4j) one by one, and run this simple test multiple times to compare performance. To add log4j I used this setting.

File: /logger/pom.xml

    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>${slf4j.version}</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>jcl-over-slf4j</artifactId>
      <version>${slf4j.version}</version>
      <scope>runtime</scope>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>${slf4j.version}</version>
      <scope>runtime</scope>
    </dependency>

and for Logback I used this setting.

File: /logger/pom.xml

    <dependency>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <version>${logback.version}</version>
    </dependency>

with the following versions.

File: /logger/pom.xml

    <slf4j.version>1.6.1</slf4j.version>
    <logback.version>1.0.6</logback.version>

For either of these logger frameworks to actually log anything, you have to add a file telling the logger what to log and where.

File: src/main/resources/log4j.properties

    # Set root logger level to DEBUG and its only appender to A1.
    log4j.rootLogger=DEBUG, A1

    # configure A1 to spit out data in console
    log4j.appender.A1=org.apache.log4j.ConsoleAppender
    log4j.appender.A1.layout=org.apache.log4j.PatternLayout
    log4j.appender.A1.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

Finally, for the moment of truth. I ran the tests thrice with each framework i.e.
Logback and log4j. Essentially I log.debug() a string 100,000 times in each test (10 invocations of a loop of 10,000 log statements) and timed them. And this is how the final figures came out.

    Framework | 1st run       | 2nd run       | 3rd run
    Logback   | 0.375 seconds | 0.375 seconds | 0.406 seconds
    Log4j     | 0.454 seconds | 0.453 seconds | 0.454 seconds

As far as this little experiment goes, Logback clearly performs faster than Log4j. Of course this is an overly simplistic experiment, and many valid scenarios have not been considered. For example, we have not really used vanilla log4j; we have used log4j in conjunction with the slf4j API, which is not quite the same thing. Also, being faster is not the only consideration. Log4j works asynchronously (read here and here) whereas, as far as I know, Logback does not. Logback has quite a few nifty features that Log4j does not.

So, in isolation this little bit of code does not really prove anything. If anything, it brings me back to the first point that I made – Logback is a serious contender and worth a good look if you are designing / coding an enterprise-grade Java application. That is all for this article. Happy coding.

Want to read on? May I suggest … The first article of this series. 10 tips for application logging. How to exclude commons logging easily. A nice tip on boosting performance of NOT logging.

Reference: Logging revisited from our JCG partner Partho at the Tech for Enterprise blog....
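The parameterized style used in the benchmark above – `logger.debug("Hello {}", "world.")` – is a big part of why these frameworks are fast. Here is a framework-free sketch of the idea (the `LazyLogger` class is invented for illustration; it is not slf4j): the `{}` placeholder is only ever expanded after the level check passes, so a disabled logger pays almost nothing per call.

```java
// Minimal sketch (not slf4j itself) of why parameterized logging is cheap:
// the message is only formatted when the level is actually enabled.
public class LazyLogger {
    private final boolean debugEnabled;
    public int formatCount = 0; // counts how often we paid the formatting cost

    public LazyLogger(boolean debugEnabled) {
        this.debugEnabled = debugEnabled;
    }

    public void debug(String pattern, Object arg) {
        if (!debugEnabled) {
            return; // the "{}" placeholder is never expanded
        }
        formatCount++;
        System.out.println(pattern.replace("{}", String.valueOf(arg)));
    }

    public static void main(String[] args) {
        LazyLogger off = new LazyLogger(false);
        for (int i = 0; i < 10000; i++) {
            off.debug("Hello {}", "world."); // no formatting work at all
        }
        System.out.println("formatted " + off.formatCount + " messages");
    }
}
```

Contrast this with `logger.debug("Hello " + name)`, where the concatenation happens before the call, whether or not debug is enabled.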

JUnit, Logback, Maven with Spring 3

In this series we have already learnt to set up a basic Spring MVC application and learnt how to handle forms in Spring MVC. Now it is time to take on some more involved topics. However, before we venture into deeper waters, let’s get some basics set up.

Unit testing

I am no TDD evangelist. There, I said it. I have never been able to write any software where, for every piece of code, I have written a test first and then the code. If you have done so and are gainfully employed by coding, please do let me know. I would seriously like to know you better. Seriously. My difference in opinion with TDD ends there. Apart from writing tests before code – which somehow I simply can’t get my brain to work with – I am a huge supporter of unit testing. I am a firm believer in using JUnit to test all functionality (public, non-getter/setter methods). I am a huge fan of using Cobertura to report on code coverage. I am a huge fan of Maven, which allows me to bring this all together in a nice HTML report with just one command. I will use JUnit 4 for this series. Let’s add the dependencies.

File: \pom.xml

<properties>
  <junit.version>4.10</junit.version>
</properties>

<!-- Unit testing framework. -->
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <version>${junit.version}</version>
  <scope>test</scope>
</dependency>

And let’s add a dumb class to demonstrate testing.

File: /src/main/java/org/academy/HelloWorld.java

package org.academy;

public class HelloWorld {
    private String message = "Hello world. Default setting.";

    public String greet() {
        return message;
    }

    public String getMessage() {
        return message;
    }

    public void setMessage(String message) {
        this.message = message;
    }
}

And finally the JUnit test to exercise it.
File: src/test/java/org/academy/HelloWorldTest.java

package org.academy;

import static org.junit.Assert.*;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class HelloWorldTest {

    @Autowired
    HelloWorld helloWorld;

    private final static Logger logger = LoggerFactory.getLogger(HelloWorldTest.class);

    @Test
    public void test() {
        logger.debug(helloWorld.greet());
        assertEquals("Hello world, from Spring.", helloWorld.greet());
    }
}

You will have noticed that the helloWorld field within the unit test is never initialized in the code. This is the bit of IoC magic from Spring. To make this work, we have used @RunWith, @ContextConfiguration and @Autowired, and I have also given Spring enough information to be able to create an instance of HelloWorld and then inject it into HelloWorldTest.helloWorld. Also, the assertEquals is checking for a very different message than what is actually hard-coded in the HelloWorld class. This was done in an XML file mentioned below. Please do note the location of the file within the Maven structure.
File: /src/test/resources/org/academy/HelloWorldTest-context.xml

<?xml version='1.0' encoding='UTF-8'?>
<beans xmlns='http://www.springframework.org/schema/beans'
  xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
  xmlns:p='http://www.springframework.org/schema/p'
  xmlns:context='http://www.springframework.org/schema/context'
  xsi:schemaLocation='http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd'>

  <bean id='helloWorld' class='org.academy.HelloWorld'>
    <property name='message' value='Hello world, from Spring.' />
  </bean>
</beans>

There are multiple ways I could have provided this configuration file to the unit test. @RunWith(SpringJUnit4ClassRunner.class) is a nice thing to add but is not mandatory. What I have provided here is just the vanilla approach that works in most cases, but I encourage you to experiment.

Unit test coverage / code coverage

I don’t feel enough is said about the importance of an automated / semi-automated / easy way of reporting on code coverage – both for individual developers and technical heads. Unless you are practising TDD religiously (which, as I have mentioned before, I personally have never been able to do), it is absolutely impossible for even an individual developer to know whether all logic branches of the code are covered by unit tests. I am not even going to talk about how the technical head of a team / organization is going to ensure that his products are sufficiently unit tested. I personally believe any software product which is not sufficiently unit tested, with test coverage reported, is an unacceptable risk. Period. Admittedly a bit of a hard stance, but that’s how it is. A bit of my conviction for the hard stance comes from the fact that it is so darn easy to report on test coverage. I will use Cobertura in this example.
You need to add Cobertura to the Maven pom.

File: pom.xml

<!-- Reporting -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-site-plugin</artifactId>
  <version>3.0</version>
  <configuration>
    <reportPlugins>
      <!-- Reporting on success / failure of unit tests -->
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-report-plugin</artifactId>
        <version>2.6</version>
      </plugin>
      <!-- Reporting on code coverage by unit tests. -->
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>cobertura-maven-plugin</artifactId>
        <version>2.5.1</version>
        <configuration>
          <formats>
            <format>xml</format>
            <format>html</format>
          </formats>
        </configuration>
      </plugin>
    </reportPlugins>
  </configuration>
</plugin>

And once you have done this, added JUnit, and added an actual JUnit test, you just need to run mvn -e clean install site to create a nice-looking HTML-based code coverage report. This report allows you to click through the source code under test and gives you nice green-coloured patches for unit-tested code and red-coloured patches for code that slipped through the cracks.

Logging

Log4j is good; Logback is better. Just don’t use System.out.println() for logging. You could go a long way without proper logging. However, I have spent far too many weekends and nights chasing down production issues, with the business breathing down my neck, wishing there was some way to know what was happening in the app rather than having to guess all the way. Nowadays, with a mature API like slf4j and a stable implementation like Logback, a developer needs to add just one extra line per class to take advantage of enterprise-grade logging infrastructure. It just does not make sense not to use proper logging right from the beginning of any project. Add slf4j and Logback to the Maven dependencies.

File: \pom.xml
<!-- Logging -->
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <version>${logback.version}</version>
</dependency>

Ensure that Spring’s default logging, i.e. commons-logging, is excluded. If you are wondering – if Logback is really as good as I claim it to be, why did Spring not opt for it to start with? – here is a link to Spring’s official blog where they say ‘If we could turn back the clock and start Spring now as a new project it would use a different logging dependency. Probably the first choice would be the Simple Logging Facade for Java (SLF4J),…’

File: \pom.xml

<!-- Support for testing Spring applications with tools such as JUnit and TestNG.
     This artifact is generally always defined with test scope. -->
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-test</artifactId>
  <version>${org.springframework.version}</version>
  <scope>test</scope>
  <exclusions>
    <exclusion>
      <groupId>commons-logging</groupId>
      <artifactId>commons-logging</artifactId>
    </exclusion>
  </exclusions>
</dependency>

Provide a configuration for Logback.

File: /src/main/resources/logback.xml

<?xml version='1.0' encoding='UTF-8'?>
<configuration>
  <appender name='CONSOLE' class='ch.qos.logback.core.ConsoleAppender'>
    <encoder>
      <pattern>%d %5p | %t | %-55logger{55} | %m %n</pattern>
    </encoder>
  </appender>

  <logger name='org.springframework'>
    <level value='INFO' />
  </logger>

  <root>
    <level value='DEBUG' />
    <appender-ref ref='CONSOLE' />
  </root>
</configuration>

Finally, add the magic one-liner at the beginning of each class that needs logging (that ought to be all classes).

File: src/test/java/org/academy/HelloWorldTest.java

[...]
private final static Logger logger = LoggerFactory.getLogger(HelloWorldTest.class);
[...]
logger.debug(helloWorld.greet());
[...]

There you are, all set up. Now is the time to wade deeper into Spring. Happy coding. Want to read more? Here are the links to earlier articles in this series.
Hello World with Spring 3 MVC
Handling Forms with Spring 3 MVC

And, of course, these are highly recommended:

Spring 3 Testing with JUnit 4
Running unit tests with the Spring Framework
@RunWith JUnit4 with BOTH SpringJUnit4ClassRunner and Parameterized
Issue with JUnit and Spring

Reference: JUnit, Logback, Maven with Spring 3 from our JCG partner Partho at the Tech for Enterprise blog....
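For readers who find the @Autowired wiring in the test above a bit magical: stripped of everything Spring actually does (bean lifecycles, proxies, context caching), annotation-driven field injection boils down to reflection. The toy `MiniInjector` below is invented purely for illustration – it is in no way Spring’s implementation – but it shows the mechanism of finding annotated fields and setting them from a registry of beans.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// A toy illustration (not Spring's actual code) of what annotation-driven
// field injection boils down to: find annotated fields, look up a matching
// bean by type, and set the field reflectively.
public class MiniInjector {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Inject {}

    public static void inject(Object target, Map<Class<?>, Object> beans)
            throws IllegalAccessException {
        for (Field field : target.getClass().getDeclaredFields()) {
            if (field.isAnnotationPresent(Inject.class)) {
                field.setAccessible(true);
                // look up by field type, analogous to by-type autowiring
                field.set(target, beans.get(field.getType()));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        class Greeting { @Inject String message; }
        Greeting g = new Greeting();
        Map<Class<?>, Object> beans = new HashMap<>();
        beans.put(String.class, "Hello world, from Spring.");
        inject(g, beans);
        System.out.println(g.message);
    }
}
```

Spring layers a great deal on top of this (configuration parsing, scopes, qualifiers, proxying), but the core trick the unit test relies on is no more mysterious than this sketch.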

Developing a plugin for IntelliJ IDEA – some useful tips and links

When I started thinking about writing my first plugin for IntelliJ IDEA, the biggest problem was the lack of good and comprehensive guides on how to do it, and do it well, gathered in one place. So this post will be a collection of my personal experiences, links and other resources I found useful and noteworthy on my road to releasing the Share with Pastie plugin to the public IntelliJ repository.

Links

Before starting to write your mind-blowing and super-extra-useful plugin you should first do some reading. Below is a list of places worth visiting:

Getting Started – the place where you should start. The most basic introduction to creating new plugins.
Basics of plugin development – a second page similar to the previous one, but some interesting knowledge can be found there.
Plugin Development FAQ – the best place for ready solutions and answers to your questions and problems.
Open API forum – the place to go if you are really stuck with your problem.

Show me the code

The resources in the above links are helpful, but sooner or later you will have to dive into code to check how something works or how you could implement some feature. There are two ways of analysing the source code of IntelliJ IDEA itself:

open your console and type: git clone git://git.jetbrains.org/idea/community.git idea
or check the IntelliJ IDEA 10 source code at GrepCode

I have used both: the Git repository to check out the whole project, open it in IDEA and find usages, etc., and GrepCode to quickly find what a single code fragment looks like.

Learn from others

Sometimes you might want to add a feature which is very similar to one you saw in a plugin written by someone else. If you are lucky, the source code of this plugin is available. The only thing you need to do is visit http://plugins.intellij.net/?idea, find the plugin with the similar feature and check whether its source code is publicly available. That’s how I found the way to add a green balloon with a message that the selected code fragment was successfully shared with Pastie.
Some useful code samples

There are a few elements which probably exist in the source code of almost every plugin. To ease your pain of googling them or trying to figure them out, here are two short samples I consider the most popular.

Getting the current project, current file, etc.

Your plugin action should extend the AnAction abstract class from IntelliJ OpenAPI. The only parameter passed to the actionPerformed method is an AnActionEvent, and from this object you can access various places:

Project currentProject = DataKeys.PROJECT.getData(actionEvent.getDataContext());
VirtualFile currentFile = DataKeys.VIRTUAL_FILE.getData(actionEvent.getDataContext());
Editor editor = DataKeys.EDITOR.getData(actionEvent.getDataContext());
// and so on...

A list of all places available in this way can be found in the constants of the DataKeys class (and its parents).

Balloon with an info or error message

This kind of popup is very useful for communicating feedback messages to the user. It can be an info message or an error / warning. The default colours are green for info, red for errors and orange for warnings. In Share With Pastie I am using it to inform the user that the selected text was successfully sent to Pastie and a link is waiting in the clipboard. But before we show our balloon, we need to specify the place where it will be located. In my plugin it is the StatusBar (the lowest element in the IDEA GUI):

StatusBar statusBar = WindowManager.getInstance()
    .getStatusBar(DataKeys.PROJECT.getData(actionEvent.getDataContext()));

and then we can prepare the balloon and display it:

JBPopupFactory.getInstance()
    .createHtmlTextBalloonBuilder(htmlText, messageType, null)
    .setFadeoutTime(7500)
    .createBalloon()
    .show(RelativePoint.getCenterOf(statusBar.getComponent()), Balloon.Position.atRight);

A few trivia

Plugin DevKit

The most important thing is to download or activate the plugin named Plugin DevKit, which allows you to run and debug your own plugin during the development process.
This seems extremely trivial, but you might have deactivated this plugin (like me) or removed it to speed up start time.

UI Designer

If you are planning to develop a plugin with additional windows, popups, etc., the plugin named UI Designer is very handy. It’s something very similar to the GUI builder from NetBeans and allows you to create Swing panels by dragging, dropping and re-sizing components.

Group and Action IDs

When I was searching for a proper place to show my action in one of the menus available inside IntelliJ IDEA, I encountered a page with a list of group and action IDs which could help with configuring my plugin. But this page appeared to be really outdated, so I tried to find another way to determine the proper values of those IDs. And of course, the solution was lying just in front of me. If we press Alt + Insert in the project view, we see a menu allowing us to create several new objects, and one of them is Action. After clicking it, we see a very friendly action creator which looks like the one below. And of course it contains a list of available groups and actions, so we can place our plugin menu item next to them.

Proper VM memory settings

The next thing is to configure your run/debug configuration to work properly, because depending on your hardware setup (mainly available memory) and IntelliJ IDEA settings, you might encounter freezes when starting your plugin in development mode (most frequently these are crashes during index rebuilding). To prevent such problems you should configure memory settings in the “Run/Debug Configurations” window and add proper Virtual Machine parameters. For me -Xms256m -XX:PermSize=128m -XX:MaxPermSize=512m worked well.

The end

So this is the list of my experiences, tips and thoughts about creating your own IntelliJ IDEA plugin.
I hope you find it useful and that your plugin will let us (developers) be even more productive and deliver code faster and with better quality. My friend said, “If I knew how to create plugins for IDEA, I would build a new one every three weeks because I am constantly having new ideas on how to improve my workflow”, so maybe after reading this post at least the start of the plugin development process will be easier.

Reference: Developing a plugin for IntelliJ IDEA – some useful tips and links from our JCG partner Tomasz Dziurko at the Code Hard Go Pro blog....

Teaser: Bare-knuckle SOA

I’m working on this idea, and I don’t know if it appeals to you guys. I’d like your input on whether this is something to explore further. Here’s the deal: I’ve encountered teams who, when working with SOA technologies, have been dragged into the mud by the sheer complexity of their tools. I’ve only seen this in Java, but I’ve heard from some C# developers that they recognize the phenomenon there as well. I’d like to explore an alternative approach.

This approach requires more hard work than adding a WSDL (web service definition language. Hocus pocus) file to your project and automatically generating stuff. But it comes with added understanding and increased testability. In the end, I’ve experienced that this has made me able to complete my tasks quicker, despite the extra manual labor.

The purpose of this blog post (and, if you like it, its expansions) is to explore a more bare-bones approach to SOA in general and to web services specifically. I’m illustrating these principles using a concrete example: let users be notified when their currency drops below a threshold relative to the US dollar. In order to make the service technologically interesting, I will be using the IP address of the subscriber to determine their currency.

Step 1: Create your active services by mocking external interactions

Mocking the activity of your own services can help you construct the interfaces that define your interaction with external services.
Teaser:

public class CurrencyPublisherTest {

    private SubscriptionRepository subscriptionRepository = mock(SubscriptionRepository.class);
    private EmailService emailService = mock(EmailService.class);
    private CurrencyPublisher publisher = new CurrencyPublisher();
    private CurrencyService currencyService = mock(CurrencyService.class);
    private GeolocationService geolocationService = mock(GeolocationService.class);

    @Test
    public void shouldPublishCurrency() throws Exception {
        Subscription subscription = TestDataFactory.randomSubscription();
        String location = TestDataFactory.randomCountry();
        String currency = TestDataFactory.randomCurrency();
        double exchangeRate = subscription.getLowLimit() * 0.9;

        when(subscriptionRepository.findPendingSubscriptions()).thenReturn(Arrays.asList(subscription));
        when(geolocationService.getCountryByIp(subscription.getIpAddress())).thenReturn(location);
        when(currencyService.getCurrency(location)).thenReturn(currency);
        when(currencyService.getExchangeRateFromUSD(currency)).thenReturn(exchangeRate);

        publisher.runPeriodically();

        verify(emailService).publishCurrencyAlert(subscription, currency, exchangeRate);
    }

    @Before
    public void setupPublisher() {
        publisher.setSubscriptionRepository(subscriptionRepository);
        publisher.setGeolocationService(geolocationService);
        publisher.setCurrencyService(currencyService);
        publisher.setEmailService(emailService);
    }
}

Spoiler: I’ve recently started using random test data generation for my tests, with great effect. The Publisher has a number of services that it uses. Let us focus on one service for now: the GeolocationService.

Step 2: Create a test and a stub for each service – starting with the GeolocationService

The top-level test shows what we need from each external service. Informed by this, and by reading (yeah!) the WSDL for a service, we can test-drive a stub for the service. In this example, we actually run the test over HTTP by starting Jetty embedded inside the test.
Teaser:

public class GeolocationServiceStubHttpTest {

    @Test
    public void shouldAnswerCountry() throws Exception {
        GeolocationServiceStub stub = new GeolocationServiceStub();
        stub.addLocation("", "Norway");

        Server server = new Server(0);
        ServletContextHandler context = new ServletContextHandler();
        context.addServlet(new ServletHolder(stub), "/GeoService");
        server.setHandler(context);
        server.start();

        String url = "http://localhost:" + server.getConnectors()[0].getLocalPort();

        GeolocationService wsClient = new GeolocationServiceWsClient(url + "/GeoService");
        String location = wsClient.getCountryByIp("");

        assertThat(location).isEqualTo("Norway");
    }
}

Validate and create the XML payload

This is the first “bare-knuckled” bit. Here, I create the XML payload without using a framework (the groovy “$”-syntax is courtesy of the JOOX library, a thin wrapper on top of the built-in JAXP classes). I add the XSD (more hocus pocus) for the actual service to the project, along with code to validate the message. Then I start building the XML payload by following the validation errors.
Teaser:

public class GeolocationServiceWsClient implements GeolocationService {

    private Validator validator;
    private UrlSoapEndpoint endpoint;

    public GeolocationServiceWsClient(String url) throws Exception {
        this.endpoint = new UrlSoapEndpoint(url);
        validator = createValidator();
    }

    @Override
    public String getCountryByIp(String ipAddress) throws Exception {
        Element request = createGeoIpRequest(ipAddress);
        Document soapRequest = createSoapEnvelope(request);
        validateXml(soapRequest);
        Document soapResponse = endpoint.postRequest(getSOAPAction(), soapRequest);
        validateXml(soapResponse);
        return parseGeoIpResponse(soapResponse);
    }

    private void validateXml(Document soapMessage) throws Exception {
        validator.validate(toXmlSource(soapMessage));
    }

    protected Validator createValidator() throws SAXException {
        SchemaFactory schemaFactory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = schemaFactory.newSchema(new Source[] {
            new StreamSource(getClass().getResource("/geoipservice.xsd").toExternalForm()),
            new StreamSource(getClass().getResource("/soap.xsd").toExternalForm()),
        });
        return schema.newValidator();
    }

    private Document createSoapEnvelope(Element request) throws Exception {
        return $("S:Envelope", $("S:Body", request)).document();
    }

    private Element createGeoIpRequest(String ipAddress) throws Exception {
        return $("wsx:GetGeoIP", $("wsx:IPAddress", ipAddress)).get(0);
    }

    private String parseGeoIpResponse(Document response) {
        // TODO
        return null;
    }

    private Source toXmlSource(Document document) throws Exception {
        return new StreamSource(new StringReader($(document).toString()));
    }
}

In this example, I get a little help (and a little pain) from the JOOX library for XML manipulation in Java. As XML libraries for Java are insane, I’m giving up on the checked exceptions, too. Spoiler: I’m generally very unhappy with the handling of namespaces, validation, XPath and checked exceptions in all the XML libraries that I’ve found so far.
So I’m thinking about creating my own. Of course, you can use the same approach with classes that are automatically generated from the XSD, but I’m not convinced that it would really help much.

Stream the XML over HTTP

Java’s built-in HttpURLConnection is a clunky but serviceable way to get the XML to the server (as long as you’re not doing advanced HTTP authentication).

Teaser:

public class UrlSoapEndpoint {

    private final String url;

    public UrlSoapEndpoint(String url) {
        this.url = url;
    }

    public Document postRequest(String soapAction, Document soapRequest) throws Exception {
        URL httpUrl = new URL(url);
        HttpURLConnection connection = (HttpURLConnection) httpUrl.openConnection();
        connection.setDoInput(true);
        connection.setDoOutput(true);
        connection.addRequestProperty("SOAPAction", soapAction);
        connection.addRequestProperty("Content-Type", "text/xml");
        $(soapRequest).write(connection.getOutputStream());

        int responseCode = connection.getResponseCode();
        if (responseCode != 200) {
            throw new RuntimeException("Something went terribly wrong: " + connection.getResponseMessage());
        }
        return $(connection.getInputStream()).document();
    }
}

Spoiler: This code should be expanded with logging and error handling, and the validation should be moved into a decorator. By taking control of the HTTP handling, we can solve most of what people buy an ESB to solve.

Create the stub and parse the XML

The stub uses XPath to find the location in the request.
It generates the response in much the same way as the WS client generated the request (not shown).

public class GeolocationServiceStub extends HttpServlet {

    private Map<String, String> locations = new HashMap<String, String>();

    public void addLocation(String ipAddress, String country) {
        locations.put(ipAddress, country);
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
        try {
            String ipAddress = $(req.getReader()).xpath("/Envelope/Body/GetGeoIP/IPAddress").text();
            String location = locations.get(ipAddress);
            createResponse(location).write(resp.getOutputStream());
        } catch (Exception e) {
            throw new RuntimeException("Exception at server " + e);
        }
    }
}

Spoiler: The stubs can be expanded with a web page that lets me test my system without real integration to any external service.

Validate and parse the response

The WS client can now validate that the response from the stub complies with the XSD, and parse the response. Again, this is done using XPath. I’m not showing the code, as it’s just more of the same.

The real thing!

The code now verifies that the XML payload conforms to the XSD. This means that the WS client should be usable with the real thing. Let’s write a separate test to check it:

public class GeolocationServiceLiveTest {

    @Test
    public void shouldFindLocation() throws Exception {
        GeolocationService wsClient = new GeolocationServiceWsClient("http://www.webservicex.net/geoipservice.asmx");
        assertThat(wsClient.getCountryByIp("")).isEqualTo("Norway");
    }
}

Yay! It works! Actually, it failed the first time I tried it, as I didn’t have the correct country name for the IP address that I tested with. This sort of point-to-point integration test is slower and less robust than my other unit tests. However, I don’t make too big a deal out of that fact. I filter the test from my Infinitest config and I don’t care much beyond that.
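The stub's XPath trick can be shown in a self-contained way with nothing but the JDK's javax.xml.xpath package. The element names below match the article's stub; the `urn:example` namespace and the sample IP are invented for illustration. Using `local-name()` sidesteps namespace-prefix configuration entirely, which is the same shortcut the stub's simple `/Envelope/Body/...` path relies on.

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathExpressionException;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

// Plain-JAXP sketch of the stub's parsing step: pull the IPAddress element out
// of an incoming SOAP request with XPath. local-name() makes the expression
// work regardless of which namespace prefixes the client happened to use.
public class SoapRequestParser {

    public static String extractIpAddress(String soapXml) throws XPathExpressionException {
        XPath xpath = XPathFactory.newInstance().newXPath();
        String expr = "/*[local-name()='Envelope']/*[local-name()='Body']"
                + "/*[local-name()='GetGeoIP']/*[local-name()='IPAddress']";
        return xpath.evaluate(expr, new InputSource(new StringReader(soapXml)));
    }

    public static void main(String[] args) throws Exception {
        String request = "<S:Envelope xmlns:S='http://schemas.xmlsoap.org/soap/envelope/'>"
                + "<S:Body><wsx:GetGeoIP xmlns:wsx='urn:example'>"
                + "<wsx:IPAddress>192.0.2.1</wsx:IPAddress>"
                + "</wsx:GetGeoIP></S:Body></S:Envelope>";
        System.out.println(extractIpAddress(request));
    }
}
```

For a stub this is perfectly adequate; a production client would want proper NamespaceContext handling, which is exactly the pain point the article complains about.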
Fleshing out all the services

The SubscriptionRepository, CurrencyService and EmailService need to be fleshed out in the same way as the GeolocationService. However, since we know that we only need very specific interactions with each of these services, we don’t need to worry about everything that could possibly be sent or received as part of the SOAP services. As long as we can do the job that the business logic (CurrencyPublisher) needs, we’re good to go!

Demonstration and value chain testing

If we create a web UI for the stubs, we can now demonstrate the whole value chain of this service to our customers. In my SOA projects, some of the services we depend on will only come online late in the project. In this case, we can use our stubs to show that our service works.

Spoiler: As I get tired of verifying that the manual value chain test works, I may end up creating a test that uses WebDriver to set up the stubs and verify that the test ran okay, just like I would in the manual test.

Taking the gloves off when fighting in an SOA arena

In this article, I’ve shown and hinted at more than half a dozen techniques for working with tests, HTTP, XML and validation that don’t involve frameworks, ESBs or code generation. The approach gives the programmer 100% control over their place in the SOA ecosystem. Each of the areas has a lot more depth to explore. Let me know if you’d like to see it explored. Oh, and I’d also like ideas for better web services to use, as the geolocated currency email is pretty hokey.

Reference: Teaser: Bare-knuckle SOA from our JCG partner Johannes Brodwall at the Thinking Inside a Bigger Box blog....
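The CurrencyPublisherTest earlier leans on a TestDataFactory whose source is never shown. A minimal sketch of what such a factory might look like is below; the value pools (countries, currencies) are invented for illustration, not taken from the original project. The point of random test data is that any hidden dependency on a specific value will eventually surface as a flickering test.

```java
import java.util.List;
import java.util.Random;

// A sketch of the kind of TestDataFactory the publisher test relies on.
// The value pools here are invented for illustration only.
public class TestDataFactory {

    private static final Random random = new Random();
    private static final List<String> COUNTRIES = List.of("Norway", "Sweden", "Denmark");
    private static final List<String> CURRENCIES = List.of("NOK", "SEK", "DKK");

    public static String randomCountry() {
        return COUNTRIES.get(random.nextInt(COUNTRIES.size()));
    }

    public static String randomCurrency() {
        return CURRENCIES.get(random.nextInt(CURRENCIES.size()));
    }
}
```

Because the test only asserts that the same random value flows from the mocked service to the verified email call, it stays valid no matter which value the factory hands out.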

Read-only ViewObject and Declarative SQL mode

Introduction

The declarative SQL mode is considered to be one of the most valuable advantages of entity-based view objects. In this mode the VO’s SQL is generated at runtime depending on the attributes shown in the UI. For example, if some page contains a table with only two columns, EmployeeId and FirstName, then the query will be generated as “select Employee_ID, First_Name from Employees”. This feature can significantly improve the performance of an ADF application.

But what about read-only or SQL-based view objects? JDeveloper doesn’t allow you to choose the SQL mode for SQL-based VOs. Only “Expert” mode can be used, with no chance of having the query generated on the fly. But everything is possible. In this post we have an example of a SQL-based view object, VEmployees. Let’s generate the View Object Definition class. We’re going to override some methods:

@Override
public boolean isRuntimeSQLGeneration() {
    return true;
}

@Override
public boolean isFullSql() {
    return false;
}

@Override
// In our case we know exactly the FROM clause
public String buildDefaultFrom(AttributeDef[] attrDefs, SQLBuilder builder, BaseViewCriteriaManagerImpl vcManager) {
    return "Employees";
}

@Override
// Setting the "Selected in Query" property for each attribute except the PK
protected void createDef() {
    for (AttributeDef at : getAttributeDefs())
        if (!at.isPrimaryKey())
            ((AttributeDefImpl) at).setSelected(false);
}

Actually, that’s it! Let’s test it. For a page showing the full set of attributes, the generated query (I use the ODL analyzer to inspect it) selects all the columns; for a page with only two attributes, the query selects only those two columns. The sample application for this post requires JDeveloper and the standard HR schema.

Reference: Read-only ViewObject and Declarative SQL mode from our JCG partner Eugene Fedorenko at the ADF Practice blog....
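Stripped of the ADF machinery, the idea behind declarative SQL mode is simple: build the SELECT list at runtime from the attributes that are actually in use, always keeping the primary key. The `QueryGenerator` class below is a framework-free illustration of that idea – the class and method names are invented; the real work happens inside ADF's view object definition, as shown above.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Framework-free illustration of the idea behind declarative SQL mode:
// generate the SELECT list at runtime from the attributes the UI actually
// uses, with the primary key always included.
public class QueryGenerator {

    private final String table;
    private final Set<String> selectedColumns = new LinkedHashSet<>();

    public QueryGenerator(String table, String primaryKey) {
        this.table = table;
        selectedColumns.add(primaryKey); // PK stays selected, as in the article
    }

    public void select(String column) {
        selectedColumns.add(column);
    }

    public String buildQuery() {
        return "select " + String.join(", ", selectedColumns) + " from " + table;
    }

    public static void main(String[] args) {
        // A page showing just two columns yields a two-column query.
        QueryGenerator query = new QueryGenerator("Employees", "Employee_ID");
        query.select("First_Name");
        System.out.println(query.buildQuery());
    }
}
```

This mirrors the article's example: a page binding only EmployeeId and FirstName ends up issuing "select Employee_ID, First_Name from Employees" instead of fetching every column of the table.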
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.