

Play 2 – modules, plugins, what’s the difference?

There seems to be some confusion regarding Play 2 modules and plugins. I imagine this is because the two terms are often used synonymously. In Play (both versions – 1 and 2) there are distinct differences. In this post, I'm going to look at what a plugin is, how to implement one in Java and Scala, and how to import plugins from modules.

Plugins

A Play 2 plugin is a class that extends the Java class play.Plugin or has the Scala trait play.api.Plugin. This class may be something you have written in your own application, or it may be a plugin from a module.

Writing a plugin in Java

Create a new class and have it extend play.Plugin. There are three methods available to override – onStart(), onStop() and enabled(). You can also add a constructor that takes a play.Application argument. To have some functionality occur when the application starts, override onStart(). To have functionality occur when the application stops, override onStop(). It's that simple! Here's an example implementation which doesn't override enabled().

package be.objectify.example;

import play.Application;
import play.Configuration;
import play.Logger;
import play.Plugin;

/**
 * An example Play 2 plugin written in Java.
 */
public class MyExamplePlugin extends Plugin {
    private final Application application;

    public MyExamplePlugin(Application application) {
        this.application = application;
    }

    @Override
    public void onStart() {
        Configuration configuration = application.configuration();
        // you can now access the application.conf settings, including any custom ones you have added
        Logger.info("MyExamplePlugin has started");
    }

    @Override
    public void onStop() {
        // you may want to tidy up resources here
        Logger.info("MyExamplePlugin has stopped");
    }
}

Writing a plugin in Scala

Create a new Scala class and have it extend play.api.Plugin. Just as in the Java version, there are onStart(), onStop() and enabled() methods along with a play.api.Application constructor argument. Here's the Scala implementation:

package be.objectify.example

import play.api.{Logger, Application, Plugin}

/**
 * An example Play 2 plugin written in Scala.
 */
class MyExamplePlugin(application: Application) extends Plugin {

  override def onStart() {
    val configuration = application.configuration
    // you can now access the application.conf settings, including any custom ones you have added
    Logger.info("MyExamplePlugin has started")
  }

  override def onStop() {
    // you may want to tidy up resources here
    Logger.info("MyExamplePlugin has stopped")
  }
}

Hooking a plugin into your application

Regardless of the implementation language, plugins are invoked directly by Play once you have added them to the conf/play.plugins file. This file isn't created when you start a new application, so you need to add it yourself. The syntax is <priority>:<classname>. For example, to add the example plugin to your project, you would use

10000:be.objectify.example.MyExamplePlugin

The class name is that of your plugin. The priority determines the order in which plugins start up, and just needs to be a number that is larger or smaller than that of another plugin. If you have several plugins, you can explicitly order them:

5000:be.objectify.example.MyExamplePlugin
10000:be.objectify.example.MyOtherExamplePlugin

Modules

A module can be thought of as a reusable application that you can include in your own app. It's analogous to a third-party library that adds specific functionality. A module can contain plugins, which you can hook into your app using the conf/play.plugins file.
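As a quick aside that is not part of the original post: before a module's plugin can be referenced in conf/play.plugins, the module itself has to be on your application's classpath. In Play 2 this is just a library dependency, declared in project/Build.scala. The sketch below shows roughly what that looks like for a Java project; the module coordinates are placeholders, so substitute the group, artifact and version actually published by the module you want to use.

import sbt._
import Keys._
import PlayProject._

object ApplicationBuild extends Build {

  val appName    = "my-app"
  val appVersion = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    // hypothetical coordinates - use the ones published by the module you need
    "com.example" %% "some-play2-module" % "1.0.0"
  )

  val main = PlayProject(appName, appVersion, appDependencies, mainLang = JAVA)
}

Once the dependency resolves, the module's plugin class is available to list in conf/play.plugins exactly as described above.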
For example, if you're using Deadbolt 2 you would need to add the following to your play.plugins file:

10000:be.objectify.deadbolt.DeadboltPlugin

A list of Play 2 modules can be found on the Play 2 GitHub wiki. You can read more on creating modules for Play 2 here and here.

Reference: Play 2 – modules, plugins, what's the difference? from our JCG partner Steve Chaloner at the Objectify blog....

Apache Camel Tutorial – Introduction to EIP, Routes, Components, Testing and other Concepts

Data exchange between companies is increasing a lot, and so is the number of applications that must be integrated. The interfaces use different technologies, protocols and data formats. Nevertheless, the integration of these applications should be modeled in a standardized way, realized efficiently and supported by automatic tests. Such a standard exists with the Enterprise Integration Patterns (EIP) [1], which have become the industry standard for describing, documenting and implementing integration problems. Apache Camel [2] implements the EIPs and offers a standardized, internal domain-specific language (DSL) [3] to integrate applications. This article gives an introduction to Apache Camel including several code examples.

Enterprise Integration Patterns

EIPs can be used to split integration problems into smaller pieces and model them using standardized graphics. Everybody can understand these models easily. Besides, there is no need to reinvent the wheel every time for each integration problem. Using EIPs, Apache Camel closes a gap between modeling and implementation. There is almost a one-to-one relation between EIP models and the DSL of Apache Camel. This article explains the relation of EIPs and Apache Camel using an online shop example.

Use Case: Handling Orders in an Online Shop

The main concepts of Apache Camel are introduced by implementing a small use case. Starting your own project should be really easy after reading this article. The easiest way to get started is using a Maven archetype [4]. This way, you can rebuild the following example within minutes. Of course, you can also download the whole example at once [5]. Figure 1 shows the example from the EIP perspective. The task is to process orders of an online shop. Orders arrive in CSV format. At first, the orders have to be transformed to the internal format. Order items of each order must be split because the shop only sells DVDs and CDs. Other order items are forwarded to a partner.

Figure 1: EIP Perspective of the Integration Problem

This example shows the advantages of EIPs: the integration problem is split into several small, recurring subproblems. These subproblems are easy to understand and are solved the same way each time. After describing the use case, we will now look at the basic concepts of Apache Camel.

Basic Concepts

Apache Camel runs on the Java Virtual Machine (JVM). Most components are realized in Java, though this is not a requirement for new components. For instance, the camel-scala component is written in Scala. The Spring framework is used in some parts, e.g. for transaction support. However, Spring dependencies were reduced to a minimum in release 2.9 [6]. The core of Apache Camel is very small and just contains commonly used components (i.e. connectors to several technologies and APIs) such as Log, File, Mock or Timer. Further components can be added easily due to the modular structure of Apache Camel. Maven is recommended for dependency management, because most technologies require additional libraries. Of course, libraries can also be downloaded manually and added to the classpath.

The core functionality of Apache Camel is its routing engine. It dispatches messages based on the related routes. A route contains flow and integration logic. It is implemented using EIPs and a specific DSL. Each message contains a body, several headers and optional attachments. The messages are sent from a provider to a consumer. In between, the messages may be processed, e.g. filtered or transformed.
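To make the message model a little more concrete before the detailed sections that follow, here is a small sketch of my own (it is not taken from the article, and the endpoint and header names are made up): a processing step can read the message body and work with headers through the Exchange. The Processor interface used here is introduced properly further below.

import org.apache.camel.{Exchange, Processor}

// hypothetical processing step: read the body, inspect one header, set another
class AuditProcessor extends Processor {

  def process(exchange: Exchange) {
    val in = exchange.getIn                                          // the incoming message
    val body = in.getBody(classOf[String])                           // body, converted to String on demand
    val fileName = in.getHeader(Exchange.FILE_NAME, classOf[String]) // header set by the file component
    println("Received " + fileName + " (" + body.length + " characters)")
    in.setHeader("audit.bodyLength", body.length.toString)           // custom header for later route steps
  }
}

A step like this could then be dropped into a route with .process(new AuditProcessor()), just as the route examples below do with their own processors.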
Figure 1 shows how the messages can change within a route. Messages between a provider and a consumer are managed by a message exchange container, which contains an unique message id, exception information, incoming and outgoing messages (i.e. request and response), and the used message exchange pattern (MEP). „In Only“ MEP is used for one-way messages such as JMS whereas „In Out“ MEP executes request-response communication such as a client side HTTP based request and its response from the server side. After shortly explaining the basic concepts of Apache Camel, the following sections will give more details and code examples. Let’s begin with the architecture of Apache Camel. Architecture Figure 2 shows the architecture of Apache Camel. A CamelContext provides the runtime system. Inside, processors handle things in between endpoints like routing or transformation. Endpoints connect several technologies to be integrated. Apache Camel offers different DSLs to realize the integration problems.Figure 2: Architecture of Apache Camel CamelContext The CamelContext is the runtime system of Apache Camel and connects its different concepts such as routes, components or endpoints. The following code snipped shows a Java main method, which starts the CamelContext and stops it after 30 seconds. Usually, the CamelContext is started when loading the application and stopped at shutdown. public class CamelStarter {public static void main(String[] args) throws Exception { CamelContext context = new DefaultCamelContext();context.addRoutes(new IntegrationRoute()); context.start();Thread.sleep(30000);context.stop();}}The runtime system can be included anywhere in the JVM environment, including web container (e.g. Tomcat), JEE application server (e.g. IBM WebSphere AS), OSGi container, or even in the cloud. Domain Specific Languages DSLs facilitate the realization of complex projects by using a higher abstraction level. Apache Camel offers several different DSLs. Java, Groovy and Scala use object-oriented concepts and offer a specific method for most EIPs. On the other side, Spring XML DSL is based on the Spring framework and uses XML configuration. Besides, OSGi blueprint XML is available for OSGi integration. Java DSL has best IDE support. Groovy and Scala DSL are similar to Java DSL, in addition they offer typical features of modern JVM languages such as concise code or closures. Contrary to these programming languages, Spring XML DSL requires a lot of XML. Besides, it offers very powerful Spring-based dependency injection mechanism and nice abstractions to simplify configurations (such as JDBC or JMS connections). The choice is purely a matter of taste in most use cases. Even a combination is possible. Many developer use Spring XML for configuration whilst routes are realized in Java, Groovy or Scala. Routes Routes are a crucial part of Apache Camel. The flow and logic of an integration is specified here. The following example shows a route using Java DSL: public class IntegrationRoute extends RouteBuilder {@Overridepublic void configure() throws Exception {from(“file:target/inbox”).process(new LoggingProcessor()).bean(new TransformationBean(),“makeUpperCase”).to(“file:target/outbox/dvd”); }}The DSL is easy to use. Everybody should be able to understand the above example without even knowing Apache Camel. The route realizes a part of the described use case. Orders are put in a file directory from an external source. The orders are processed and finally moved to the target directory. 
Routes have to extend the „RouteBuilder“ class and override the „configure“ method. The route itself begins with a „from“ endpoint and finishes at one or more „to“ endpoints. In between, all necessary process logic is implemented. Any number of routes can be implemented within one „configure“ method. The following snippet shows the same route realized via Spring XML DSL: <beans … > <bean class=”mwea.TransformationBean” id=”transformationBean”/> <bean class=”mwea.LoggingProcessor” id=”loggingProcessor”/><camelContext xmlns=”http://camel.apache.org/schema/spring”> <package>mwea</package> <route> <from uri=”file:target/inbox”/><process ref=”loggingProcessor”/><bean ref=”transformationBean”/><to uri=”file:target/outbox”/> </route> </camelContext> </beans> Besides routes, another important concept of Apache Camel is its components. They offer integration points for almost every technology. Components In the meantime, over 100 components are available. Besides widespread technologies such as HTTP, FTP, JMS or JDBC, many more technologies are supported, including cloud services from Amazon, Google, GoGrid, and others. New components are added in each release. Often, also the community builds new custom components because it is very easy. The most amazing feature of Apache Camel is its uniformity. All components use the same syntax and concepts. Every integration and even its automatic unit tests look the same. Thus, complexity is reduced a lot. Consider changing the above example: If orders should be sent to a JMS queue instead of a file directory, just change the „to“ endpoint from „file:target/outbox“ to „jms:queue:orders“. That’s it! (JMS must be configured once within the application before, of course) While components offer the interface to technologies, Processors and Beans can be used to add custom integration logic to a route. Processors and Beans Besides using EIPs, you have to add individual integration logic, often. This is very easy and again uses the same concepts always: Processors or Beans. Both were used in the route example above. Processor is a simple Java interface with one single method: „process“. Inside this method, you can do whatever you need to solve your integration problem, e.g. transform the incoming message, call other services, and so on. public class LoggingProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { System.out.println(“Received Order: ” + exchange.getIn().getBody(String.class)); } } The „exchange“ parameter contains the Messsage Exchange with the incoming message, the outgoing message, and other information. Due to implementing the Processor interface, you have got a dependency to the Camel API. This might be a problem sometimes. Maybe you already have got existing integration code which cannot be changed (i.e. you cannot implement the Processor interface)? In this case, you can use Beans, also called POJOs (Plain Old Java Object). You get the incoming message (which is the parameter of the method) and return an outgoing message, as shown in the following snipped: public class TransformationBean {public String makeUpperCase(String body) {String transformedBody = body.toUpperCase();return transformedBody; } }The above bean receives a String, transforms it, and finally sends it to the next endpoint. Look at the route above again. The incoming message is a File. You may wonder why this works? Apache Camel offers another powerful feature: More than 150 automatic type converters are included from scratch, e.g. 
FileToString, CollectionToObject[] or URLtoInputStream. By the way: Further type converters can be created and added to the CamelContext easily [7]. If a Bean only contains one single method, it even can be omitted in the route. The above call therefore could also be .bean(new TransformationBean()) instead of .bean(new TransformationBean(), “makeUpperCase”). Adding some more Enterprise Integration Patterns The above route transforms incoming orders using the Translator EIP before processing them. Besides this transformation, some more work is required to realize the whole use case. Therefore, some more EIPs are used in the following example: public class IntegrationRoute extends RouteBuilder { @Override public void configure() throws Exception { from(“file:target/inbox”).process(new LoggingProcessor()) .bean(new TransformationBean()).unmarshal().csv() .split(body().tokenize(“,”)) .choice() .when(body().contains(“DVD”)) .to(“file:target/outbox/dvd”) .when(body().contains(“CD”)) .to(“activemq:CD_Orders”) .otherwise().to(“mock:others”);}}Each csv file illustrates one single order containing one or more order items. The camel-csv component is used to convert the csv message. Afterwards, the Splitter EIP separates each order item of the message body. In this case, the default separator (a comma) is used. Though, complex regular expressions or scripting languages such as XPath, XQuery or SQL can also be used as splitter. Each order item has to be sent to a specific processing unit (remember: there are dvd orders, cd orders, and other orders which are sent to a partner). The content-based router EIP solves this problem without any individual coding efforts. Dvd orders are processed via a file directory whilst cd orders are sent to a JMS queue. ActiveMQ is used as JMS implementation in this example. To add ActiveMQ support to a Camel application, you only have to add the related maven dependency for the camel-activemq component or add the JARs to the classpath manually. That’s it. Some other components need a little bit more configuration, once. For instance, if you want to use WebSphere MQ or another JMS implementation instead of ActiveMQ, you have to configure the JMS provider. All other order items besides dvds and cds are sent to a partner. Unfortunately, this interface is not available, yet. The Mock component is used instead to simulate this interface momentarily. The above example shows impressively how different interfaces (in this case File, JMS, and Mock) can be used within one route. You always apply the same syntax and concepts despite very different technologies. Automatic Unit and Integration Tests Automatic tests are crucial. Nevertheless, it usually is neglected in integration projects. The reason is too much efforts and very high complexity due to several different technologies. Apache Camel solves this problem: It offers test support via JUnit extensions. The test class must extend CamelTestSupport to use Camel’s powerful testing capabilities. Besides additional assertions, mocks are supported implicitly. No other mock framework such as EasyMock or Mockito is required. You can even simulate sending messages to a route or receiving messages from it via a producer respectively consumer template. All routes can be tested automatically using this test kit. It is noteworthy to mention that the syntax and concepts are the same for every technology, again. 
The following code snipped shows a unit test for our example route: public class IntegrationTest extends CamelTestSupport {@Beforepublic void setup() throws Exception {super.setUp(); context.addRoutes(new IntegrationRoute());}@Testpublic void testIntegrationRoute() throws Exception { // Body of test message containing several order itemsString bodyOfMessage = “Harry Potter / dvd, Metallica / cd, Claus Ibsen –Camel in Action / book “;// Initialize the mock and set expected results MockEndpoint mock = context.getEndpoint(“mock:others”, MockEndpoint.class); mock.expectedMessageCount(1);mock.setResultWaitTime(1000);// Only the book order item is sent to the mock// (because it is not a cd or dvd)String bookBody = “Claus Ibsen – Camel in Action / book”.toUpperCase();mock.expectedBodiesReceived(bookBody);// ProducerTemplate sends a message (i.e. a File) to the inbox directorytemplate.sendBodyAndHeader(“file://target/inbox”, bodyOfMessage, Exchange.FILE_NAME, “order.csv”);Thread.sleep(3000); // Was the file moved to the outbox directory?File target = new File(“target/outbox/dvd/order.csv”);assertTrue(“File not moved!”, target.exists());// Was the file transformed correctly (i.e. to uppercase)?String content = context.getTypeConverter().convertTo(String.class, target);String dvdbody = “Harry Potter / dvd”.toUpperCase(); assertEquals(dvdbody, content); // Was the book order (i.e. „Camel in action“ which is not a cd or dvd) sent to the mock?mock.assertIsSatisfied(); }}The setup method creates an instance of CamelContext (and does some additional stuff). Afterwards, the route is added such that it can be tested. The test itself creates a mock and sets its expectations. Then, the producer template sends a message to the „from“ endpoint of the route. Finally, some assertions validate the results. The test can be run the same way as each other JUnit test: directly within the IDE or inside a build script. Even agile Test-driven Development (TDD) is possible. At first, the Camel test has to be written, before implementing the corresponding route. If you want to learn more about Apache Camel, the first address should be the book „Camel in Action“ [8], which describes all basics and many advanced features in detail including working code examples for each chapter. After whetting your appetite, let’s now discuss when to use Apache Camel… Alternatives for Systems Integration Figure 3 shows three alternatives for integrating applications:Own custom Solution: Implement an individual solution that works for your problem without separating problems into little pieces. This works and is probably the fastest alternative for small use cases. You have to code all by yourself.Integration Framework: Use a framework, which helps to integrate applications in a standardized way using several integration patterns. It reduces efforts a lot. Every developer will easily understand what you did. You do not have to reinvent the wheel each time.Enterprise Service Bus (ESB): Use an ESB to integrate your applications. Under the hood, the ESB often also uses an integration framework. But there is much more functionality, such as business process management, a registry or business activity monitoring. You can usually configure routing and such stuff within a graphical user interface (you have to decide at your own if that reduces complexity and efforts). Usually, an ESB is a complex product. The learning curve is much higher than using a lightweight integration framework. 
On the other hand, you get a very powerful tool which should fulfill all your requirements in large integration projects.

If you decide to use an integration framework, you still have three good alternatives in the JVM environment: Spring Integration [9], Mule [10], and Apache Camel. They are all lightweight, easy to use and implement the EIPs. Therefore, they offer a standardized way to integrate applications and can be used even in very complex integration projects. A more detailed comparison of these three integration frameworks can be found at [11]. My personal favorite is Apache Camel due to its awesome Java, Groovy and Scala DSLs, combined with many supported technologies. Spring Integration and Mule only offer XML configuration. I would only use Mule if I needed some of its unique connectors to proprietary products (such as SAP, Tibco Rendezvous, Oracle Siebel CRM, Paypal or IBM's CICS Transaction Gateway). I would only use Spring Integration in an existing Spring project and if I only needed to integrate widespread technologies such as FTP, HTTP or JMS. In all other cases, I would use Apache Camel. Nevertheless: no matter which of these lightweight integration frameworks you choose, you will have much fun realizing complex integration projects easily and with low effort. Remember: often, a fat ESB has too much functionality, and therefore too much unnecessary complexity and effort. Use the right tool for the right job!

Apache Camel is ready for Enterprise Integration Projects

Apache Camel already celebrated its fourth birthday in July 2011 [12] and represents a very mature and stable open source project. It supports all requirements to be used in enterprise projects, such as error handling, transactions, scalability, and monitoring. Commercial support is also available. Its most important gains are its DSLs, the many components for almost every thinkable technology, and the fact that the same syntax and concepts can always be used – even for automatic tests – no matter which technologies have to be integrated. Therefore, Apache Camel should always be evaluated as a lightweight alternative to heavyweight ESBs. Get started by downloading the example of this article. If you need any help or further information, there is a great community and a well-written book available.
Sources:

[1] "Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions", ISBN: 0321200683, Gregor Hohpe, Bobby Woolf
[2] Apache Camel: http://camel.apache.org
[3] Internal DSL: http://martinfowler.com/bliki/DomainSpecificLanguage.html
[4] Camel Archetypes: http://camel.apache.org/camel-maven-archetypes.html
[5] Example Code for this Article at GitHub: https://github.com/megachucky/camel-infoq
[6] Reduced Dependency on Spring JARs: http://davsclaus.blogspot.com/2011/08/apache-camel-29-reduced-dependency-on.html
[7] Camel Type Converter: http://camel.apache.org/type-converter.html
[8] "Camel in Action", ISBN: 1935182366, Claus Ibsen, Jonathan Anstey, Hadrian Zbarcea
[9] Spring Integration: www.springsource.org/spring-integration
[10] Mule ESB: http://www.mulesoft.org
[11] Comparison of Apache Camel, Mule ESB and Spring Integration: http://www.kai-waehner.de/blog/2012/01/10/spoilt-for-choice-which-integration-framework-to-use-spring-integration-mule-esb-or-apache-camel
[12] Fourth Birthday of Apache Camel: http://camel.apache.org/2011/07/07/happy-birthday-camel.html

Reference: Apache Camel Tutorial – Introduction to EIP, Routes, Components, Testing, and other Concepts from our JCG partner Kai Wahner at the Blog about Java EE / SOA / Cloud Computing blog....

Code Forensics

How do you know if using code metrics really does help to produce code with fewer bugs? I am convinced they do, but how can I possibly prove it?

All projects have historic data. This is usually stored in your bug tracking and source code control tools. We can use the data stored in these systems to perform 'code forensics'. We use the historic data from real issues to see if they could have been avoided. This can all be done without affecting any of your existing code or adding any risk to your project. Surely that's a useful software engineering technique?

Disclaimer

Firstly, I realize that most bugs you find in a standard project are not caused by code quality – it's probably only a small percentage. However, the ones that exist are avoidable. It is these avoidable quality issues that I want to concentrate on. I want to be able to determine when exceeding a metric threshold is likely to result in a problem. It's possible that if enough code forensics are run on my individual code base, I may be able to come up with some numbers that are useful to me in the future. In the long term it may be possible for someone to do a large study and come up with better guidelines.

Process

The process is quite straightforward.

1. Query your bug tracking tool for all the issues that required a code fix.
2. Assess the defects.
3. Identify the code.
4. Get the root cause.

Query your bug tracking tool

First you need to identify all your recent bugs, let's say for the last month. Do a simple query to bring back all of the bugs during that period. This should be easy – otherwise you're using the wrong tool! Now you have a full list of all of the defects that you are potentially interested in.

Assess the defects

You now need to go through each of the bugs and assess whether the issue really was a code issue. Other things it might be include:

• A requirements issue.
• An issue with the deployment environment.
• Configuration issues.

What you are left with is a list of issues that were really caused by bad code.

Identify the problematic code

You now need to map your list of issues back to the relevant source. You will not be able to do this unless you have been disciplined with your check-in comments. In most places I have worked, when checking in a bug fix you always start the comment with a reference to the problem it fixes. Assuming you have been commenting your commits with the reference, you can do a simple query to see which code was affected. This can be done in FishEye, Tortoise etc. to get the required code.

Get to the root cause

Finally you have something to look at, so what do you do with it? Well, first you have to understand how the fix works and decide if it was a code quality issue. Perhaps the issue was a simple error rather than something a metric would have caught. However, you might open the code and find something like this: the average complexity in our system is 10, and this piece of code has a complexity of 106! This was an accident waiting to happen! Clearly the bug would have been more likely to have been caught had we failed the build because the code did not meet the expected quality standards. This is a potentially avoidable error.

Another angle

Another way to try and establish a link between poor code quality and defects is to take advantage of something such as the Sonar hotspot view to see the most complex classes in your system. You can then work backwards and examine the history of those files to see if those classes are causing issues in your code base. The trouble is that it is not that simple.
High complexity files which are used infrequently are less likely to cause you trouble than those which are more frequently used but of a lower complexity.

Automating the process

For this to be any use it probably needs to be automated so that a large sample of data can be examined. Some tools already make the link between defects and the related fix source code. The next step is to pull that data back and run your metrics analysis on the files.

Summary

None of this is conclusive, however I still think it's a useful technique. What it is most likely to prove is that you have had past problems which you could have avoided with metrics. It should also give you an idea of which metrics to use. It's likely also to show that most problems are not caused by poor code quality, but by other factors instead.

Reference: Code Forensics from our JCG partner John Dobie at the Agile Engineering Techniques blog....

ServletRequest startAsync() limited usefulness

Some time ago I came across the "What's the purpose of AsyncContext.start(…) in Servlet 3.0?" question. Quoting the Javadoc of the aforementioned method:

Causes the container to dispatch a thread, possibly from a managed thread pool, to run the specified Runnable.

To remind all of you, AsyncContext is a standard way defined in the Servlet 3.0 specification to handle HTTP requests asynchronously. Basically the HTTP request is no longer tied to an HTTP thread, allowing us to handle it later, possibly using fewer threads. It turned out that the specification provides an API to handle asynchronous threads in a different thread pool out of the box. First we will see how this feature is completely broken and useless in Tomcat and Jetty – and then we will discuss why its usefulness is questionable in general.

Our test servlet will simply sleep for a given amount of time. This is a scalability killer in normal circumstances because even though a sleeping servlet is not consuming CPU, the sleeping HTTP thread tied to that particular request consumes memory – and no other incoming request can use that thread. In our test setup I limited the number of HTTP worker threads to 10, which means only 10 concurrent requests completely block the application (it is unresponsive from the outside) even though the application itself is almost completely idle. So clearly sleeping is an enemy of scalability.

@WebServlet(urlPatterns = Array("/*"))
class SlowServlet extends HttpServlet with Logging {

  protected override def doGet(req: HttpServletRequest, resp: HttpServletResponse) {
    logger.info("Request received")
    val sleepParam = Option(req.getParameter("sleep")) map {_.toLong}
    TimeUnit.MILLISECONDS.sleep(sleepParam getOrElse 10)
    logger.info("Request done")
  }
}

Benchmarking this code reveals that the average response times are close to the sleep parameter as long as the number of concurrent connections is below the number of HTTP threads. Unsurprisingly the response times begin to grow the moment we exceed the HTTP thread count. The eleventh connection has to wait for some other request to finish and release a worker thread. When the concurrency level exceeds 100, Tomcat begins to drop connections – too many clients are already queued.

So what about the fancy AsyncContext.start() method (not to be confused with ServletRequest.startAsync())? According to the Javadoc I can submit any Runnable and the container will use some managed thread pool to handle it. This will help partially, as I no longer block HTTP worker threads (but still another thread somewhere in the servlet container is used). Quickly switching to an asynchronous servlet:

@WebServlet(urlPatterns = Array("/*"), asyncSupported = true)
class SlowServlet extends HttpServlet with Logging {

  protected override def doGet(req: HttpServletRequest, resp: HttpServletResponse) {
    logger.info("Request received")
    val asyncContext = req.startAsync()
    asyncContext.setTimeout(TimeUnit.MINUTES.toMillis(10))
    asyncContext.start(new Runnable() {
      def run() {
        logger.info("Handling request")
        val sleepParam = Option(req.getParameter("sleep")) map {_.toLong}
        TimeUnit.MILLISECONDS.sleep(sleepParam getOrElse 10)
        logger.info("Request done")
        asyncContext.complete()
      }
    })
  }
}

We are first enabling asynchronous processing and then simply moving sleep() into a Runnable and, hopefully, a different thread pool, releasing the HTTP thread pool. A quick stress test reveals slightly unexpected results (here: response times vs. number of concurrent connections):

Guess what, the response times are exactly the same as with no asynchronous support at all (!) After closer examination I discovered that when AsyncContext.start() is called, Tomcat submits the given task back to… the HTTP worker thread pool, the same one that is used for all HTTP requests! This basically means that we have released one HTTP thread just to utilize another one milliseconds later (maybe even the same one). There is absolutely no benefit of calling AsyncContext.start() in Tomcat. I have no idea whether this is a bug or a feature. On one hand this is clearly not what the API designers intended. The servlet container was supposed to manage a separate, independent thread pool so that the HTTP worker thread pool is still usable. I mean, the whole point of asynchronous processing is to escape the HTTP pool. Tomcat pretends to delegate our work to another thread, while it still uses the original worker thread pool.

So why do I consider this to be a feature? Because Jetty is "broken" in exactly the same way… No matter whether this works as designed or is only a poor API implementation, using AsyncContext.start() in Tomcat and Jetty is pointless and only unnecessarily complicates the code. It won't give you anything; the application works exactly the same under high load as if there was no asynchronous logic at all.

But what about using this API feature on correct implementations like IBM WAS? It is better, but still the API as-is doesn't give us much in terms of scalability. To explain again: the whole point of asynchronous processing is the ability to decouple the HTTP request from an underlying thread, preferably by handling several connections using the same thread.

AsyncContext.start() will run the provided Runnable in a separate thread pool. Your application is still responsive and can handle ordinary requests while the long-running requests that you decided to handle asynchronously are processed in a separate thread pool. It is better; unfortunately the thread pool and the thread-per-connection idiom are still a bottleneck. For the JVM it doesn't matter what type of threads are started – they still occupy memory. So we are no longer blocking HTTP worker threads, but our application is not more scalable in terms of the number of concurrent long-running tasks we can support.

In this simple and unrealistic example with a sleeping servlet we can actually support thousands of concurrent (waiting) connections using Servlet 3.0 asynchronous support with only one extra thread – and without AsyncContext.start(). Do you know how? Hint: ScheduledExecutorService (one possible sketch follows at the end of this post).

Postscriptum: Scala goodness

I almost forgot. Even though the examples were written in Scala, I haven't used any cool language features yet. Here is one: implicit conversions. Make this available in your scope:

implicit def blockToRunnable[T](block: => T) = new Runnable {
  def run() {
    block
  }
}

And suddenly you can use a code block instead of instantiating Runnable manually and explicitly:

asyncContext start {
  logger.info("Handling request")
  val sleepParam = Option(req.getParameter("sleep")) map { _.toLong}
  TimeUnit.MILLISECONDS.sleep(sleepParam getOrElse 10)
  logger.info("Request done")
  asyncContext.complete()
}

Sweet!

Reference: javax.servlet.ServletRequest.startAsync() limited usefulness from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....
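Picking up the ScheduledExecutorService hint above, here is a minimal sketch of my own (not taken from the original post) of what the sleeping servlet could look like when every waiting request is parked and later completed by a single scheduler thread; class and parameter names are illustrative only.

import java.util.concurrent.{Executors, TimeUnit}
import javax.servlet.annotation.WebServlet
import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}

@WebServlet(urlPatterns = Array("/*"), asyncSupported = true)
class ScheduledSlowServlet extends HttpServlet {

  // one thread is enough to complete any number of parked requests
  private val scheduler = Executors.newSingleThreadScheduledExecutor()

  protected override def doGet(req: HttpServletRequest, resp: HttpServletResponse) {
    val asyncContext = req.startAsync()
    asyncContext.setTimeout(TimeUnit.MINUTES.toMillis(10))
    val sleepParam = Option(req.getParameter("sleep")) map {_.toLong} getOrElse 10L

    // no thread sleeps here: the request is completed later by the scheduler thread
    scheduler.schedule(new Runnable {
      def run() {
        asyncContext.getResponse.getWriter.println("Done")
        asyncContext.complete()
      }
    }, sleepParam, TimeUnit.MILLISECONDS)
  }

  override def destroy() {
    scheduler.shutdown()
  }
}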

Processing JSON in Scala with Jerkson

Introduction The previous tutorial covered basic XML processing in Scala, but as I noted, XML is not the primary choice for data serialization these days. Instead, JSON (JavaScript Object Notation) is more widely used for data interchange, in part because it is less verbose and better captures the core data structures (such as lists and maps) that are used in defining many objects. It was originally designed for working with JavaScript, but turned out to be quite effective as a language neutral format. A very nice feature of it is that it is straightforward to translate objects as defined in languages like Java and Scala into JSON and back again, as I’ll show in this tutorial. If the class definitions and the JSON structures are appropriately aligned, this transformation turns out to be entirely trivial to do — given a suitable JSON processing library. In this tutorial, I cover basic JSON processing in Scala using the Jerkson library, which itself is essentially a Scala wrapper around the Jackson library (written in Java). Note that other libraries like lift-json are perfectly good alternatives, but Jerkson seems to have some efficiency advantages for streaming JSON due to Jackson’s performance. Of course, since Scala plays nicely with Java, you can directly use whichever JVM-based JSON library you like, including Jackson. This post also shows how to do a quick start with SBT that will allow you to easily access third-party libraries as dependencies and start writing code that uses them and can be compiled with SBT. Note: As a “Jason” I insist that JSON should be pronounced Jay-SAHN (with stress on the second syllable) to distinguish it from the name. :) Getting set up An easy way to use the Jerkson library in the context of a tutorial like this is for the reader to set up a new SBT project, declare Jerkson as a dependency, and then fire up the Scala REPL using SBT’sconsole action. This sorts out the process of obtaining external libraries and setting up the classpath so that they are available in an SBT-initiated Scala REPL. Follow the instructions in this section to do so. Note: if you have already been working with Scalabha version 0.2.5 (or later), skip to the bottom of this section to see how to run the REPL using Scalabha’s build. Alternatively, if you have an existing project of your own, you can of course just add Jerkson as a dependency, import its classes as necessary and use it in your normal programming setup. The examples below will then help as some straightforward recipes for using it in your project. First, create a directory to work in and download the SBT launch jar. $ mkdir ~/json-tutorial $ cd ~/json-tutorial/ $ wget http://typesafe.artifactoryonline.com/typesafe/ivy-releases/org.scala-sbt/sbt-launch/0.11.3/sbt-launch.jar Note: If you don’t have wget installed on your machine, you can download the above sbt-launch.jar file in your browser and move it to the ~/json-tutorial directory. Now, save the following as the file ~/json-tutorial/build.sbt. Be aware that it is important to keep the empty lines between each of the declarations. name := 'json-tutorial'version := '0.1.0 'scalaVersion := '2.9.2'resolvers += 'repo.codahale.com' at 'http://repo.codahale.com'libraryDependencies += 'com.codahale' % 'jerkson_2.9.1' % '0.5.0'Then save the following in the file ~/json-tutorial/runSbt. 
java -Xms512M -Xmx1536M -Xss1M -XX:+CMSClassUnloadingEnabled -XX:MaxPermSize=384M -jar `dirname $0`/sbt-launch.jar '$@'Make that file executable and run it, which will show SBT doing a bunch of work and then leave you with the SBT prompt. $ cd ~/json-tutorial $ chmod a+x runSbt $ ./runSbt update Getting org.scala-sbt sbt_2.9.1 0.11.3 ... downloading http://repo.typesafe.com/typesafe/ivy-releases/org.scala-sbt/sbt_2.9.1/0.11.3/jars/sbt_2.9.1.jar ... [SUCCESSFUL ] org.scala-sbt#sbt_2.9.1;0.11.3!sbt_2.9.1.jar (307ms) ... ... more stuff including getting the the Jerkson library ... ... [success] Total time: 25 s, completed May 11, 2012 10:22:42 AM $You should be back in the Unix shell at this point, and now we are ready to run the Scala REPL using SBT. The important thing is that this instance of the REPL will have the Jerkson library and its dependencies in the classpath so that we can import the classes we need. ./runSbt console [info] Set current project to json-tutorial (in build file:/Users/jbaldrid/json-tutorial/) [info] Starting scala interpreter... [info] Welcome to Scala version 2.9.2 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_31). Type in expressions to have them evaluated. Type :help for more information.scala> import com.codahale.jerkson.Json._ import com.codahale.jerkson.Json._If nothing further is output, then you are all set. If things are amiss (or if you are running in the default Scala REPL), you’ll instead see something like the following. scala> import com.codahale.jerkson.Json._ <console>:7: error: object codahale is not a member of package com import com.codahale.jerkson.Json._ If this is what you got, try to follow the instructions above again to make sure that your setup is exactly as above. However, if you continue to experience problems, an alternative is to get version 0.2.5 of Scalabha (which already has Jerkson as a dependency), follow the instructions for setting it up and then run the following commands. $ cd $SCALABHA_DIR $ scalabha build consoleIf you just want to see some examples of using Jerkson as an API and not use it interactively, then it is entirely unnecessary to do the SBT setup — just read on and adapt the examples as necessary.Processing a simple JSON example As usual, let’s begin with a very simple example that shows some of the basic properties of JSON. {‘foo’: 42 ‘bar’: ['a','b','c'], ‘baz’: { ‘x’: 1, ‘y’: 2 }} This describes a data structure with three fields, foo, bar and baz. The field foo‘s value is the integer 42, bar‘s value is a list of strings, and baz‘s value is a map from strings to integers. These are language neutral (but universal) types. Let’s first consider deserializing each of these values individually as Scala objects, using Jerkson’s parse method. Keep in mind that JSON in a file is a string, so the inputs in all of these cases are strings (at times I’ll use triple-quoted strings when there are quotes themselves in the JSON). In each case, we tell the parse method what type we expect by providing a type specification before the argument. scala> parse[Int]('42') res0: Int = 42scala> parse[List[String]]('''['a','b','c']''') res1: List[String] = List(a, b, c)scala> parse[Map[String,Int]]('''{ 'x': 1, 'y': 2 }''') res2: Map[String,Int] = Map(x -> 1, y -> 2)So, in each case, the string representation is turned into a Scala object of the appropriate type. If we aren’t sure what the type is or if we know for example that a List is heterogeneous, we can use Any as the expected type. 
scala> parse[Any]("42")
res3: Any = 42

scala> parse[List[Any]]("""["a",1]""")
res4: List[Any] = List(a, 1)

If you give an expected type that can't be parsed as such, you'll get an error.

scala> parse[List[Int]]("""["a",1]""")
com.codahale.jerkson.ParsingException: Can not construct instance of int from String value 'a': not a valid Integer value
 at [Source: java.io.StringReader@2bc5aea; line: 1, column: 2]
<...many more lines of stack trace...>

How about parsing all of the attributes and values together? Save the whole thing in a variable simpleJson as follows.

scala> :paste
// Entering paste mode (ctrl-D to finish)

val simpleJson = """{"foo": 42, "bar": ["a","b","c"], "baz": { "x": 1, "y": 2 }}"""

// Exiting paste mode, now interpreting.

simpleJson: java.lang.String = {"foo": 42, "bar": ["a","b","c"], "baz": { "x": 1, "y": 2 }}

Since it is a Map from Strings to different types of values, the best we can do is deserialize it as a Map[String, Any].

scala> val simple = parse[Map[String,Any]](simpleJson)
simple: Map[String,Any] = Map(bar -> [a, b, c], baz -> {x=1, y=2}, foo -> 42)

To get these out as more specific types than Any, you need to cast them to the appropriate types.

scala> val fooValue = simple("foo").asInstanceOf[Int]
fooValue: Int = 42

scala> val barValue = simple("bar").asInstanceOf[java.util.ArrayList[String]]
barValue: java.util.ArrayList[String] = [a, b, c]

scala> val bazValue = simple("baz").asInstanceOf[java.util.LinkedHashMap[String,Int]]
bazValue: java.util.LinkedHashMap[String,Int] = {x=1, y=2}

Of course, you might want to be working with Scala types, which is easy if you import the implicit conversions from Java types to Scala types.

scala> import scala.collection.JavaConversions._
import scala.collection.JavaConversions._

scala> val barValue = simple("bar").asInstanceOf[java.util.ArrayList[String]].toList
barValue: List[String] = List(a, b, c)

scala> val bazValue = simple("baz").asInstanceOf[java.util.LinkedHashMap[String,Int]].toMap
bazValue: scala.collection.immutable.Map[String,Int] = Map(x -> 1, y -> 2)

Voila! When you are working with Java libraries in Scala, the JavaConversions usually prove to be extremely handy.

Deserializing into user-defined types

Though we were able to parse the simple JSON expression above and even cast values into appropriate types, things were still a bit clunky. Fortunately, if you have defined your own case class with the appropriate fields, you can provide that as the expected type instead. For example, here's a simple case class that will do the trick.

case class Simple(val foo: String, val bar: List[String], val baz: Map[String,Int])

Clearly this has all the right fields (with variables named the same as the fields in the JSON example), and the variables have the types we'd like them to have. Unfortunately, due to class loading issues with SBT, we cannot carry on the rest of this exercise solely in the REPL and must define this class in code. This code can be compiled and then used in the REPL or by other code. To do this, save the following as ~/json-tutorial/Simple.scala.

case class Simple(val foo: String, val bar: List[String], val baz: Map[String,Int])

object SimpleExample {
  def main(args: Array[String]) {
    import com.codahale.jerkson.Json._
    val simpleJson = """{"foo":42, "bar":["a","b","c"], "baz":{"x":1,"y":2}}"""
    val simpleObject = parse[Simple](simpleJson)
    println(simpleObject)
  }
}

Then exit the Scala REPL session you were in for the previous section using the command :quit, and do the following. (If anything has gone amiss you can restart SBT (with runSbt) and do the following commands.)
> compile [info] Compiling 1 Scala source to /Users/jbaldrid/json-tutorial/target/scala-2.9.2/classes... [success] Total time: 2 s, completed May 11, 2012 9:24:00 PM > run [info] Running SimpleExample SimpleExample Simple(42,List(a, b, c),Map(x -> 1, y -> 2)) [success] Total time: 1 s, completed May 11, 2012 9:24:03 PM You can make changes to the code in Simple.scala, compile it again (you don’t need to exit SBT to do so), and run it again. Also, now that you’ve compiled, if you start up the Scala REPL using the console action, then the Simple class is now available to you and you can carry on working in the REPL. For example, here are the same statements that are used in the SimpleExample main method given previously.scala> import com.codahale.jerkson.Json._ import com.codahale.jerkson.Json._scala> val simpleJson = '''{'foo':42, 'bar':['a','b','c'], 'baz':{'x':1,'y':2}}''' simpleJson: java.lang.String = {'foo':42, 'bar':['a','b','c'], 'baz':{'x':1,'y':2}}scala> val simpleObject = parse[Simple](simpleJson) simpleObject: Simple = Simple(42,List(a, b, c),Map(x -> 1, y -> 2))scala> println(simpleObject) Simple(42,List(a, b, c),Map(x -> 1, y -> 2))Another nice feature of JSON serialization is that if the JSON string has more information than you need to construct the object want to build from it, it is ignored. For example, consider deserializing the following example, which has an extra field eca in the JSON representation. scala> val ecaJson = '''{'foo':42, 'bar':['a','b','c'], 'baz':{'x':1,'y':2}, 'eca': true}''' ecaJson: java.lang.String = {'foo':42, 'bar':['a','b','c'], 'baz':{'x':1,'y':2}, 'eca': true}scala> val noEcaSimpleObject = parse[Simple](ecaJson) noEcaSimpleObject: Simple = Simple(42,List(a, b, c),Map(x -> 1, y -> 2)) The eca information silently slips away and we still get a Simple object with all the information we need. This property is very handy for ignoring irrelevant information, which I’ll show to be quite useful in a follow-up post on processing JSON formatted tweets from Twitter’s API. Another thing to note about the above example is that the Boolean values true and false are valid JSON (they are not quoted strings, but actual Boolean values). Parsing a Boolean is even quite forgiving as Jerkson will give you a Boolean even when it is defined as a String. scala> parse[Map[String,Boolean]]('''{'eca':true}''') res0: Map[String,Boolean] = Map(eca -> true)scala> parse[Map[String,Boolean]]('''{'eca':'true'}''') res1: Map[String,Boolean] = Map(eca -> true)And it will convert a Boolean into a String if you happen to ask it to do so.scala> parse[Map[String,String]]('''{'eca':true}''')res2: Map[String,String] = Map(eca -> true)But it (sensibly) won’t convert any String other than true or false into a Boolean. scala> parse[Map[String,Boolean]]('''{'eca':'brillig'}''') com.codahale.jerkson.ParsingException: Can not construct instance of boolean from String value 'brillig': only 'true' or 'false' recognized at [Source: java.io.StringReader@6b2739b8; line: 1, column: 2] <...stacktrace...>And it doesn’t admit unquoted values other than a select few, including true and false.scala> parse[Map[String,String]]('''{'eca':brillig}''')com.codahale.jerkson.ParsingException: Malformed JSON. Unexpected character ('b' (code 98)): expected a valid value (number, String, array, object, 'true', 'false' or 'null') at character offset 7. <...stacktrace...> In other words, your JSON needs to be grammatical. 
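One small addition of my own before moving on (the same mechanism is what the music example below relies on): a JSON array of such objects can be parsed directly into a Scala collection simply by giving parse a collection type.

// my own example, not from the tutorial: a JSON array parses straight into a List
val twoSimpleJson = """[{"foo":42, "bar":["a"], "baz":{"x":1}}, {"foo":7, "bar":["b"], "baz":{"y":2}}]"""
val simpleObjects = parse[List[Simple]](twoSimpleJson)
// simpleObjects is a List[Simple]; each element looks like Simple(42,List(a),Map(x -> 1))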
Generating JSON from an object If you have an object in hand, it is very easy to create JSON from it (serialize) using the generate method. scala> val simpleJsonString = generate(simpleObject) simpleJsonString: String = {'foo':'42','bar':['a','b','c'],'baz':{'x':1,'y':2}} This is much easier than the XML solution, which required explicitly declaring how an object was to be turned into XML elements. The restriction is that any such objects must be instances of a case class. If you don’t have a case class, you’ll need to do some special handling (not discussed in this tutorial).A richer JSON example In the vein of the previous tutorial on XML, I’ve created the JSON corresponding to the music XML example used there. You can find it as the Github gist music.json: https://gist.github.com/2668632 Save that file as /tmp/music.json. Tip: you can easily format condensed JSON to be more human-readable by using the mjson tool in Python. $ cat /tmp/music.json | python -mjson.tool [ { 'albums': [ { 'description': '\n\tThe King of Limbs is the eighth studio album by English rock band Radiohead, produced by Nigel Godrich. It was self-released on 18 February 2011 as a download in MP3 and WAV formats, followed by physical CD and 12\' vinyl releases on 28 March, a wider digital release via AWAL, and a special \'newspaper\' edition on 9 May 2011. The physical editions were released through the band's Ticker Tape imprint on XL in the United Kingdom, TBD in the United States, and Hostess Entertainment in Japan.\n ', 'songs': [ { 'length': '5:15', 'title': 'Bloom' }, <...etc...>Next, save the following code as ~/json-tutorial/MusicJson.scala. package music {case class Song(val title: String, val length: String) { @transient lazy val time = { val Array(minutes, seconds) = length.split(':') minutes.toInt*60 + seconds.toInt } }case class Album(val title: String, val songs: Seq[Song], val description: String) { @transient lazy val time = songs.map(_.time).sum @transient lazy val length = (time / 60)+':'+(time % 60) }case class Artist(val name: String, val albums: Seq[Album]) }object MusicJson { def main(args: Array[String]) { import com.codahale.jerkson.Json._ import music._ val jsonInput = io.Source.fromFile('/tmp/music.json').mkString val musicObj = parse[List[Artist]](jsonInput) println(musicObj) } } A couple of quick notes. The Song, Album, and Artist classes are the same as I used in the previous tutorial on XML processing, with two changes. The first is that I’ve wrapped them in a package music. This is only necessary to get around an issue with running Jerkson in SBT as we are doing here. The other is that the fields that are not in the constructor are marked as @transient: this ensures that they are not included in the output when we generate JSON from objects of these classes. An example showing how this matters is the way that I created the music.json file: I read in the XML as in the previous tutorial and then use Jerkson to generate the JSON — without the @transient annotation, those fields are included in the output. For reference, here’s the code to do the conversion from XML to JSON (which you can add to MusicJson.scala if you like). 
object ConvertXmlToJson { def main(args: Array[String]) { import com.codahale.jerkson.Json._ import music._ val musicElem = scala.xml.XML.loadFile('/tmp/music.xml')val artists = (musicElem \ 'artist').map { artist => val name = (artist \ '@name').text val albums = (artist \ 'album').map { album => val title = (album \ '@title').text val description = (album \ 'description').text val songList = (album \ 'song').map { song => Song((song \ '@title').text, (song \ '@length').text) } Album(title, songList, description) } Artist(name, albums) }val musicJson = generate(artists) val output = new java.io.BufferedWriter(new java.io.FileWriter(new java.io.File('/tmp/music.json'))) output.write(musicJson) output.flush output.close } }There are other serialization strategies (e.g. binary serialization of objects), and the @transient annotation is similarly respected by them. Given the code in MusicJson.scala, we can now compile and run it. In SBT, you can either do run or run-main. If you choose run and there are more than one main methods in your project, SBT will give you a choice. > runMultiple main classes detected, select one to run:[1] SimpleExample [2] MusicJson [3] ConvertXmlToJsonEnter number: 2[info] Running MusicJson List(Artist(Radiohead,List(Album(The King of Limbs,List(Song(Bloom,5:15), Song(Morning Mr Magpie,4:41), Song(Little by Little,4:27), Song(Feral,3:13), Song(Lotus Flower,5:01), Song(Codex,4:47), Song(Give Up the Ghost,4:50), Song(Separator,5:20)), The King of Limbs is the eighth studio album by English rock band Radiohead, produced by Nigel Godrich. It was self-released on 18 February 2011 as a download in MP3 and WAV formats, followed by physical CD and 12' vinyl releases on 28 March, a wider digital release via AWAL, and a special 'newspaper' edition on 9 May 2011. The physical editions were released through the band's Ticker Tape imprint on XL in the United Kingdom, TBD in the United States, and Hostess Entertainment in Japan. ), Album(OK Computer,List(Song(Airbag,4:44), Song(Paranoid <...more printed output...> [success] Total time: 3 s, completed May 12, 2012 11:52:06 AMWith run-main, you just explicitly provide the name of the object whose main method you wish to run.> run-main MusicJson[info] Running MusicJson <...same output as above...>So, either way, we have successfully de-serialized the JSON description of the music data. (You can also get the same result by entering the code of the main method of MusicJson into the REPL when you run it from the SBT console.)Conclusion This tutorial has shown how easy it is to serialize (generate) and deserialize (parse) objects to and from JSON format. Hopefully, this has demonstrated the relative ease of doing this with the Jerkson library and Scala, and especially the relative ease in comparison with working with XML for similar purposes. In addition to this ease, JSON is generally more compact than the equivalent XML. However, it still is far from being a truly compressed format, and there is a lot of obvious “waste”, like having the field names repeated again and again for each object. This matters a lot when data is represented as JSON strings and is being sent over networks and/or used in distributed processing frameworks like Hadoop. The Avro file format is an evolution of JSON that performs such compression: it includes a schema with each file and then each object is represented in a binary format that only specifies the data and not the field names. 
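As a tiny addition of my own: once the JSON has been deserialized, the resulting objects are ordinary Scala values, so the lazy vals defined on the case classes above work as usual. For example, the total playing time across all parsed albums could be computed like this:

// musicObj is the List[Artist] parsed in MusicJson above
val totalSeconds = musicObj.flatMap(_.albums).map(_.time).sum
val totalLength = (totalSeconds / 60) + ":" + (totalSeconds % 60)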
In addition to being more compact, it retains the property of being easily splittable, which matters a great deal for processing large files in Hadoop. Reference: Processing JSON in Scala with Jerkson from our JCG partner Jason Baldridge at the Bcomposes blog....
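As a purely illustrative aside on that last point, an Avro schema for the Song records used in this tutorial might look like the following (hand-written here for illustration, not generated by any of the code above):

{
  "type": "record",
  "name": "Song",
  "fields": [
    {"name": "title",  "type": "string"},
    {"name": "length", "type": "string"}
  ]
}

The field names "title" and "length" appear once, in the schema; each record in the file then carries only the two values in a compact binary encoding, instead of repeating the field names for every song the way plain JSON does.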
spring-logo

Spring Integration with reCAPTCHA

Sometimes we just need CAPTCHA, that’s a sad fact. Today we will learn how to integrate with reCAPTCHA. Because the topic itself isn’t particularly interesting or advanced, we will overengineer a bit (?) by using Spring Integration to handle the low-level details. The decision to use Google's reCAPTCHA was dictated by two factors: (1) it is a moderately good CAPTCHA implementation with decent images and built-in support for visually impaired people, and (2) outsourcing CAPTCHA allows us to remain stateless on the server side. Not to mention we help in digitizing books.

The second reason is actually quite important. Typically you have to generate the CAPTCHA on the server side and store the expected result, e.g. in the user session. When the response comes back you compare the expected and entered CAPTCHA solutions. Sometimes we don’t want to store any state on the server side, not to mention that implementing CAPTCHA isn’t a particularly rewarding task. So it is nice to have something ready-made and acceptable.

The full source code is as always available; we are starting from a simple Spring MVC web application without any CAPTCHA. reCAPTCHA is free but requires registration, so the first step is to sign up, generate your public/private keys and fill in the app.properties configuration file in our sample project.

To display reCAPTCHA on your form, all you have to do is add the JavaScript library:

<div id="recaptcha"> </div>
...
<script src="http://www.google.com/recaptcha/api/js/recaptcha_ajax.js"></script>

And place the reCAPTCHA widget anywhere you like:

Recaptcha.create("${recaptcha_public_key}", "recaptcha", { theme: "white", lang : 'en' } );

The official documentation is very concise and descriptive, so I am not diving into the details of that. When you include this widget inside your <form/> you will receive two extra fields when the user submits: recaptcha_response_field and recaptcha_challenge_field. The first is the actual text typed by the user and the second is a hidden token generated per request. It is probably used by the reCAPTCHA servers as a session key, but we don’t care; all we have to do is pass these fields on to the reCAPTCHA server.
I will use HttpClient 4 to perform HTTP request to external server and some clever pattern matching in Scala to parse the response:trait ReCaptchaVerifier { def validate(reCaptchaRequest: ReCaptchaSecured): Boolean}@Service class HttpClientReCaptchaVerifier @Autowired()( httpClient: HttpClient, servletRequest: HttpServletRequest, @Value("${recaptcha_url}") recaptchaUrl: String, @Value("${recaptcha_private_key}") recaptchaPrivateKey: String ) extends ReCaptchaVerifier {def validate(reCaptchaRequest: ReCaptchaSecured): Boolean = { val post = new HttpPost(recaptchaUrl) post.setEntity(new UrlEncodedFormEntity(List( new BasicNameValuePair("privatekey", recaptchaPrivateKey), new BasicNameValuePair("remoteip", servletRequest.getRemoteAddr), new BasicNameValuePair("challenge", reCaptchaRequest.recaptchaChallenge), new BasicNameValuePair("response", reCaptchaRequest.recaptchaResponse))) ) val response = httpClient.execute(post) isReCaptchaSuccess(response.getEntity.getContent) }private def isReCaptchaSuccess(response: InputStream) = { val responseLines = Option(response) map { Source.fromInputStream(_).getLines().toList } getOrElse Nil responseLines match { case "true" :: _ => true case "false" :: "incorrect-captcha-sol" :: _=> false case "false" :: msg :: _ => throw new ReCaptchaException(msg) case resp => throw new ReCaptchaException("Unrecognized response: " + resp.toList) } }}class ReCaptchaException(msg: String) extends RuntimeException(msg)The only missing piece is the ReCaptchaSecured trait encapsulating two reCAPTCHA fields mentioned earlier. In order to secure any web form with reCAPTCHA I am simply extending this model:trait ReCaptchaSecured { @BeanProperty var recaptchaChallenge = "" @BeanProperty var recaptchaResponse = "" }class NewComment extends ReCaptchaSecured { @BeanProperty var name = "" @BeanProperty var contents = "" }The whole CommentsController.scala is not that relevant. But the result is! So it works, but obviously it wasn’t really spectacular. What would you say about replacing the low-level HttpClient call with Spring Integration? The ReCaptchaVerifier interface (trait) remains the same so the client code doesn’t have to be changed. But we refactor HttpClientReCaptchaVerifier into two separate, small, relatively high-level and abstract classes:@Service class ReCaptchaFormToHttpRequest @Autowired() (servletRequest: HttpServletRequest, @Value("${recaptcha_private_key}") recaptchaPrivateKey: String) {def transform(form: ReCaptchaSecured) = Map( "privatekey" -> recaptchaPrivateKey, "remoteip" -> servletRequest.getRemoteAddr, "challenge" -> form.recaptchaChallenge, "response" -> form.recaptchaResponse).asJava}@Service class ReCaptchaServerResponseToResult {def transform(response: String) = { val responseLines = response.split('\n').toList responseLines match { case "true" :: _ => true case "false" :: "incorrect-captcha-sol" :: _=> false case "false" :: msg :: _ => throw new ReCaptchaException(msg) case resp => throw new ReCaptchaException("Unrecognized response: " + resp.toList) } }}Note that we no longer have to implement ReCaptchaVerifier, Spring Integration will do it for us. We only have to tell how is the framework suppose to use building blocks we have extracted above. I think I haven’t yet described what Spring Integration is and how it works. In few words it is a very pure implementation of enterprise integration patterns (some may call it ESB). 
The message flows are described using XML and can be embedded inside standard Spring XML configuration:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns:beans="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://www.springframework.org/schema/integration"
    xmlns:http="http://www.springframework.org/schema/integration/http"
    xsi:schemaLocation="http://www.springframework.org/schema/integration http://www.springframework.org/schema/integration/spring-integration.xsd
        http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/integration/http http://www.springframework.org/schema/integration/http/spring-integration-http.xsd">

<!-- configuration here -->

</beans:beans>

In our case we will describe a message flow from the ReCaptchaVerifier Java interface/Scala trait to the reCAPTCHA server and back. On the way the ReCaptchaSecured object must be translated into an HTTP request, and the HTTP response should be translated into a meaningful result, returned transparently from the interface.

<gateway id="ReCaptchaVerifier" service-interface="com.blogspot.nurkiewicz.recaptcha.ReCaptchaVerifier" default-request-channel="reCaptchaSecuredForm"/>

<channel id="reCaptchaSecuredForm" datatype="com.blogspot.nurkiewicz.web.ReCaptchaSecured"/>

<transformer input-channel="reCaptchaSecuredForm" output-channel="reCaptchaGoogleServerRequest" ref="reCaptchaFormToHttpRequest"/>

<channel id="reCaptchaGoogleServerRequest" datatype="java.util.Map"/>

<http:outbound-gateway request-channel="reCaptchaGoogleServerRequest" reply-channel="reCaptchaGoogleServerResponse" url="${recaptcha_url}" http-method="POST" extract-request-payload="true" expected-response-type="java.lang.String"/>

<channel id="reCaptchaGoogleServerResponse" datatype="java.lang.String"/>

<transformer input-channel="reCaptchaGoogleServerResponse" ref="reCaptchaServerResponseToResult"/>

Despite the amount of XML, the overall message flow is quite simple. First we define the gateway, which is a bridge between the Java interface and the Spring Integration message flow. The argument of ReCaptchaVerifier.validate() later becomes a message that is sent to the reCaptchaSecuredForm channel. From that channel the ReCaptchaSecured object is passed to the ReCaptchaFormToHttpRequest transformer. The purpose of the transformer is to translate the ReCaptchaSecured object into a Java map representing a set of key-value pairs. Later this map is passed (through the reCaptchaGoogleServerRequest channel) to the http:outbound-gateway. The responsibility of this component is to translate the previously created map into an HTTP request and send it to the specified address. When the response comes back, it is sent to the reCaptchaGoogleServerResponse channel. There the ReCaptchaServerResponseToResult transformer takes action, translating the HTTP response into a business result (boolean). Finally the transformer result is routed back to the gateway. Everything happens synchronously by default, so we can still use a simple Java interface for reCAPTCHA validation.
Wonderful?

Architect’s Dream or Developer’s Nightmare?

Let me summarize our efforts by quoting the conclusion from the presentation above: balance architectural benefits with development effectiveness. Spring Integration is capable of receiving data from various heterogeneous sources like JMS, relational databases or even FTP, aggregating, splitting, parsing and filtering messages in multiple ways, and finally sending them on using the most exotic protocols. Coding all this by hand is a really tedious and error-prone task. On the other hand, sometimes we just don’t need all the fanciness, and getting our hands dirty (e.g. by doing a manual HTTP request and parsing the response) is much simpler and easier to understand. Before you blindly base your whole architecture either on very high-level abstractions or on hand-coded low-level procedures: think about the consequences and the balance. No solution fits all problems. Which version of the reCAPTCHA integration do you find better?

Reference: Integrating with reCAPTCHA using… Spring Integration from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....
java-logo

The Visitor Pattern Re-visited

The visitor pattern is one of the most overrated and yet underestimated patterns in object-oriented design. Overrated, because it is often chosen too quickly (possibly by an architecture astronaut), and then bloats an otherwise very simple design, when added in the wrong way. Underestimated, because it can be very powerful, if you don’t follow the school-book example. Let’s have a look in detail. Problem #1: The naming Its biggest flaw (in my opinion) is its naming itself. The “visitor” pattern. When we google it, we most likely find ourselves on the related Wikipedia article, showing funny images like this one:Wikipedia Visitor Pattern example Right. For the 98% of us thinking in wheels and engines and bodies in their every day software engineering work, this is immediately clear, because we know that the mechanic billing us several 1000$ for mending our car will first visit the wheels, then the engine, before eventually visiting our wallet and accepting our cash. If we’re unfortunate, he’ll also visit our wife while we’re at work, but she’ll never accept, that faithful soul. But what about the 2% that solve other problems in their worklife? Like when we code complex data structures for E-Banking systems, stock exchange clients, intranet portals, etc. etc. Why not apply a visitor pattern to a truly hierarchical data structure? Like folders and files? (ok, not so complex after all) OK, so we’ll “visit” folders and every folder is going to let its files “accept” a “visitor” and then we’ll let the visitor “visit” the files, too. What?? The car lets its parts accept the visitor and then let the visitor visit itself? The terms are misleading. They’re generic and good for the design pattern. But they will kill your real-life design, because no one thinks in terms of “accepting” and “visiting”, when in fact, you read/write/delete/modify your file system. Problem #2: The polymorphism This is the part that causes even more headache than the naming, when applied to the wrong situation. Why on earth does the visitor know everyone else? Why does the visitor need a method for every involved element in the hierarchy? Polymorphism and encapsulation claim that the implementation should be hidden behind an API. The API (of our data structure) probably implements the composite pattern in some way, i.e. its parts inherit from a common interface. OK, of course, a wheel is not a car, neither is my wife a mechanic. But when we take the folder/file structure, aren’t they all java.util.File objects? Understanding the problem The actual problem is not the naming and horrible API verbosity of visiting code, but the mis-understanding of the pattern. It’s not a pattern that is best suited for visiting large and complex data structures with lots of objects of different types. It’s the pattern that is best suited for visiting simple data structures with few different types, but visiting them with hundreds of visitors. Take files and folders. That’s a simple data structure. You have two types. One can contain the other, both share some properties. Various visitors could be:CalculateSizeVisitor FindOldestFileVisitor DeleteAllVisitor FindFilesByContentVisitor ScanForVirusesVisitor … you name itI still dislike the naming, but the pattern works perfectly in this paradigm. So when is the visitor pattern “wrong”? I’d like to give the jOOQ QueryPart structure as an example. There are a great many of them, modelling various SQL query constructs, allowing jOOQ to build and execute SQL queries of arbitrary complexity. 
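Before digging into the jOOQ example below, here is a minimal sketch of the classic, school-book visitor applied to the folder/file structure discussed above. The type and visitor names are illustrative only; this is the generic shape of the pattern, not code from jOOQ or from any particular library.

interface FileSystemNode {
    void accept(FileSystemVisitor visitor);
}

interface FileSystemVisitor {
    void visit(PlainFile file);
    void visit(Folder folder);
}

class PlainFile implements FileSystemNode {
    final long sizeInBytes;

    PlainFile(long sizeInBytes) {
        this.sizeInBytes = sizeInBytes;
    }

    @Override
    public void accept(FileSystemVisitor visitor) {
        visitor.visit(this);
    }
}

class Folder implements FileSystemNode {
    final java.util.List<FileSystemNode> children = new java.util.ArrayList<FileSystemNode>();

    @Override
    public void accept(FileSystemVisitor visitor) {
        visitor.visit(this);

        // The composite delegates to its children, and each child dispatches
        // back to the visitor: "accept" and "visit" all over again.
        for (FileSystemNode child : children) {
            child.accept(visitor);
        }
    }
}

class CalculateSizeVisitor implements FileSystemVisitor {
    private long total;

    @Override
    public void visit(PlainFile file) {
        total += file.sizeInBytes;
    }

    @Override
    public void visit(Folder folder) {
        // Folders themselves contribute nothing to the total size.
    }

    public long total() {
        return total;
    }
}

With only two node types, adding a FindOldestFileVisitor, a DeleteAllVisitor and so on is cheap, which is exactly the situation the pattern handles well. Now back to the jOOQ QueryPart structure, where the ratio is reversed.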
Let’s name a few examples:ConditionCombinedCondition NotCondition InCondition BetweenConditionFieldTableField Function AggregateFunction BindValueFieldListThere are many more. Each one of them must be able to perform two actions: render SQL and bind variables. That would make two visitors each one knowing more than… 40-50 types…? Maybe in the faraway future, jOOQ queries will be able to render JPQL or some other query type. That would make 3 visitors against 40-50 types. Clearly, here, the classic visitor pattern is a bad choice. But I still want to “visit” the QueryParts, delegating rendering and binding to lower levels of abstraction. How to implement this, then? It’s simple: Stick with the composite pattern! It allows you to add some API elements to your data structure, that everyone has to implement. So by intuition, step 1 would be this interface QueryPart { // Let the QueryPart return its SQL String getSQL();// Let the QueryPart bind variables to a prepared // statement, given the next bind index, returning // the last bind index int bind(PreparedStatement statement, int nextIndex); } With this API, we can easily abstract a SQL query and delegate the responsibilities to lower-level artefacts. A BetweenCondition for instance. It takes care of correctly ordering the parts of a [field] BETWEEN [lower] AND [upper] condition, rendering syntactically correct SQL, delegating parts of the tasks to its child-QueryParts: class BetweenCondition { Field field; Field lower; Field upper;public String getSQL() { return field.getSQL() + ' between ' + lower.getSQL() + ' and ' + upper.getSQL(); }public int bind(PreparedStatement statement, int nextIndex) { int result = nextIndex;result = field.bind(statement, result); result = lower.bind(statement, result); result = upper.bind(statement, result);return result; } } Whereas BindValue on the other hand, would mainly take care of variable binding class BindValue { Object value;public String getSQL() { return '?'; }public int bind(PreparedStatement statement, int nextIndex) { statement.setObject(nextIndex, value); return nextIndex + 1; } } Combined, we can now easily create conditions of this form: ? BETWEEN ? AND ?. When more QueryParts are implemented, we could also imagine things like MY_TABLE.MY_FIELD BETWEEN ? AND (SELECT ? FROM DUAL), when appropriate Field implementations are available. That’s what makes the composite pattern so powerful, a common API and many components encapsulating behaviour, delegating parts of the behaviour to sub-components. Step 2 takes care of API evolution The composite pattern that we’ve seen so far is pretty intuitive, and yet very powerful. But sooner or later, we will need more parameters, as we find out that we want to pass state from parent QueryParts to their children. For instance, we want to be able to inline some bind values for some clauses. Maybe, some SQL dialects do not allow bind values in the BETWEEN clause. How to handle that with the current API? Extend it, adding a “boolean inline” parameter? No! That’s one of the reasons why the visitor pattern was invented. To keep the API of the composite structure elements simple (they only have to implement “accept”). 
But in this case, much better than implementing a true visitor pattern is to replace parameters by a “context”:

interface QueryPart {
    // The QueryPart now renders its SQL to the context
    void toSQL(RenderContext context);

    // The QueryPart now binds its variables to the context
    void bind(BindContext context);
}

The above contexts would contain properties like these (setters and render methods return the context itself, to allow for method chaining):

interface RenderContext {
    // Whether we're inlining bind variables
    boolean inline();
    RenderContext inline(boolean inline);

    // Whether fields should be rendered as a field declaration
    // (as opposed to a field reference). This is used for aliased fields
    boolean declareFields();
    RenderContext declareFields(boolean declare);

    // Whether tables should be rendered as a table declaration
    // (as opposed to a table reference). This is used for aliased tables
    boolean declareTables();
    RenderContext declareTables(boolean declare);

    // Whether we should cast bind variables
    boolean cast();

    // Render methods
    RenderContext sql(String sql);
    RenderContext sql(char sql);
    RenderContext keyword(String keyword);
    RenderContext literal(String literal);

    // The context's 'visit' method
    RenderContext sql(QueryPart sql);
}

The same goes for the BindContext. As you can see, this API is quite extensible: new properties can be added, and other common means of rendering SQL can be added, too. But the BetweenCondition does not have to surrender its encapsulated knowledge about how to render its SQL, and whether bind variables are allowed or not. It’ll keep that knowledge to itself:

class BetweenCondition {
    Field field;
    Field lower;
    Field upper;

    // The QueryPart now renders its SQL to the context
    public void toSQL(RenderContext context) {
        context.sql(field).keyword(" between ")
               .sql(lower).keyword(" and ")
               .sql(upper);
    }

    // The QueryPart now binds its variables to the context
    public void bind(BindContext context) {
        context.bind(field).bind(lower).bind(upper);
    }
}

Whereas BindValue, on the other hand, would mainly take care of variable binding:

class BindValue {
    Object value;

    public void toSQL(RenderContext context) {
        context.sql('?');
    }

    public void bind(BindContext context) {
        context.statement().setObject(context.nextIndex(), value);
    }
}

Conclusion: Name it Context-Pattern, not Visitor-Pattern

Be careful when jumping quickly to the visitor pattern. In many many cases, you’re going to bloat your design, making it utterly unreadable and difficult to debug. Here are the rules to remember, summed up:

If you have many many visitors and a relatively simple data structure (few types), the visitor pattern is probably OK.
If you have many many types and a relatively small set of visitors (few behaviours), the visitor pattern is overkill; stick with the composite pattern.
To allow for simple API evolution, design your composite objects to have methods taking a single context parameter.

All of a sudden, you will find yourself with an “almost-visitor” pattern again, where context = visitor, and “visit” and “accept” = your proprietary method names. The “Context Pattern” is at the same time intuitive like the “Composite Pattern”, and powerful as the “Visitor Pattern”, combining the best of both worlds. Reference: The Visitor Pattern Re-visited from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
mockito-logo

Mocks And Stubs – Understanding Test Doubles With Mockito

Introduction A common thing I come across is that teams using a mocking framework assume they are mocking. They are not aware that Mocks are just one of a number of ‘Test Doubles’ which Gerard Meszaros has categorised at xunitpatterns.com. It’s important to realise that each type of test double has a different role to play in testing. In the same way that you need to learn different patterns or refactoring’s, you need to understand the primitive roles of each type of test double. These can then be combined to achieve your testing needs. I’ll cover a very brief history of how this classification came about, and how each of the types differs. I’ll do this using some short, simple examples in Mockito. A Very Brief History For years people have been writing lightweight versions of system components to help with testing. In general it was called stubbing. In 2000′ the article ‘Endo-Testing: Unit Testing with Mock Objects’ introduced the concept of a Mock Object. Since then Stubs, Mocks and a number of other types of test objects have been classified by Meszaros as Test Doubles. This terminology has been referenced by Martin Fowler in ‘Mocks Aren’t Stubs’ and is being adopted within the Microsoft community as shown in ‘Exploring The Continuum of Test Doubles’ A link to each of these important papers are shown in the reference section. Categories of test doublesThe diagram above shows the commonly used types of test double. The following URL gives a good cross reference to each of the patterns and their features as well as alternative terminology. http://xunitpatterns.com/Test%20Double.html Mockito Mockito is a test spy framework and it is very simple to learn. Notable with Mockito is that expectations of any mock objects are not defined before the test as they sometimes are in other mocking frameworks. This leads to a more natural style(IMHO) when beginning mocking. The following examples are here purely to give a simple demonstration of using Mockito to implement the different types of test doubles. There are a much larger number of specific examples of how to use Mockito on the website. http://docs.mockito.googlecode.com/hg/latest/org/mockito/Mockito.html Test Doubles with Mockito Below are some basic examples using Mockito to show the role of each test double as defined by Meszaros. I’ve included a link to the main definition for each so you can get more examples and a complete definition. Dummy Object http://xunitpatterns.com/Dummy%20Object.html This is the simplest of all of the test doubles. This is an object that has no implementation which is used purely to populate arguments of method calls which are irrelevant to your test. For example, the code below uses a lot of code to create the customer which is not important to the test. The test couldn’t care less which customer is added, as long as the customer count comes back as one. public Customer createDummyCustomer() { County county = new County('Essex'); City city = new City('Romford', county); Address address = new Address('1234 Bank Street', city); Customer customer = new Customer('john', 'dobie', address); return customer; }@Test public void addCustomerTest() { Customer dummy = createDummyCustomer(); AddressBook addressBook = new AddressBook(); addressBook.addCustomer(dummy); assertEquals(1, addressBook.getNumberOfCustomers()); }We actually don’t care about the contents of customer object – but it is required. We can try a null value, but if the code is correct you would expect some kind of exception to be thrown. 
@Test(expected=Exception.class)
public void addNullCustomerTest() {
    Customer dummy = null;
    AddressBook addressBook = new AddressBook();
    addressBook.addCustomer(dummy);
}

To avoid this we can use a simple Mockito dummy to get the desired behaviour.

@Test
public void addCustomerWithDummyTest() {
    Customer dummy = mock(Customer.class);
    AddressBook addressBook = new AddressBook();
    addressBook.addCustomer(dummy);
    Assert.assertEquals(1, addressBook.getNumberOfCustomers());
}

It is this simple code which creates a dummy object to be passed into the call.

Customer dummy = mock(Customer.class);

Don’t be fooled by the mock syntax – the role being played here is that of a dummy, not a mock. It’s the role of the test double that sets it apart, not the syntax used to create one. This class works as a simple substitute for the customer class and makes the test very easy to read.

Test stub http://xunitpatterns.com/Test%20Stub.html

The role of the test stub is to return controlled values to the object being tested. These are described as indirect inputs to the test. Hopefully an example will clarify what this means. Take the following code:

public class SimplePricingService implements PricingService {

    PricingRepository repository;

    public SimplePricingService(PricingRepository pricingRepository) {
        this.repository = pricingRepository;
    }

    @Override
    public Price priceTrade(Trade trade) {
        return repository.getPriceForTrade(trade);
    }

    @Override
    public Price getTotalPriceForTrades(Collection<Trade> trades) {
        Price totalPrice = new Price();
        for (Trade trade : trades) {
            Price tradePrice = repository.getPriceForTrade(trade);
            totalPrice = totalPrice.add(tradePrice);
        }
        return totalPrice;
    }
}

The SimplePricingService has one collaborating object, which is the trade repository. The trade repository provides trade prices to the pricing service through the getPriceForTrade method. For us to test the business logic in the SimplePricingService, we need to control these indirect inputs, i.e. inputs we never passed into the test. This is shown below. In the following example we stub the PricingRepository to return known values which can be used to test the business logic of the SimplePricingService.

@Test
public void testGetHighestPricedTrade() throws Exception {
    Price price1 = new Price(10);
    Price price2 = new Price(15);
    Price price3 = new Price(25);

    PricingRepository pricingRepository = mock(PricingRepository.class);
    when(pricingRepository.getPriceForTrade(any(Trade.class)))
        .thenReturn(price1, price2, price3);

    PricingService service = new SimplePricingService(pricingRepository);
    Price highestPrice = service.getHighestPricedTrade(getTrades());

    assertEquals(price3.getAmount(), highestPrice.getAmount());
}

Saboteur Example

There are two common variants of Test Stubs: Responders and Saboteurs. Responders are used to test the happy path, as in the previous example. A saboteur is used to test exceptional behaviour, as below.

@Test(expected=TradeNotFoundException.class)
public void testInvalidTrade() throws Exception {
    Trade trade = new FixtureHelper().getTrade();
    TradeRepository tradeRepository = mock(TradeRepository.class);

    when(tradeRepository.getTradeById(anyLong()))
        .thenThrow(new TradeNotFoundException());

    TradingService tradingService = new SimpleTradingService(tradeRepository);
    tradingService.getTradeById(trade.getId());
}

Mock Object http://xunitpatterns.com/Mock%20Object.html

Mock objects are used to verify object behaviour during a test.
By object behaviour I mean we check that the correct methods and paths are excercised on the object when the test is run. This is very different to the supporting role of a stub which is used to provide results to whatever you are testing. In a stub we use the pattern of defining a return value for a method. when(customer.getSurname()).thenReturn(surname);In a mock we check the behaviour of the object using the following form. verify(listMock).add(s);Here is a simple example where we want to test that a new trade is audited correctly. Here is the main code. public class SimpleTradingService implements TradingService{TradeRepository tradeRepository; AuditService auditService; public SimpleTradingService(TradeRepository tradeRepository, AuditService auditService) { this.tradeRepository = tradeRepository; this.auditService = auditService; }public Long createTrade(Trade trade) throws CreateTradeException { Long id = tradeRepository.createTrade(trade); auditService.logNewTrade(trade); return id; }The test below creates a stub for the trade repository and mock for the AuditService We then call verify on the mocked AuditService to make sure that the TradeService calls it’s logNewTrade method correctly @Mock TradeRepository tradeRepository; @Mock AuditService auditService; @Test public void testAuditLogEntryMadeForNewTrade() throws Exception { Trade trade = new Trade('Ref 1', 'Description 1'); when(tradeRepository.createTrade(trade)).thenReturn(anyLong()); TradingService tradingService = new SimpleTradingService(tradeRepository, auditService); tradingService.createTrade(trade); verify(auditService).logNewTrade(trade); }The following line does the checking on the mocked AuditService. verify(auditService).logNewTrade(trade); This test allows us to show that the audit service behaves correctly when creating a trade. Test Spy http://xunitpatterns.com/Test%20Spy.html It’s worth having a look at the above link for the strict definition of a Test Spy. However in Mockito I like to use it to allow you to wrap a real object and then verify or modify it’s behaviour to support your testing. Here is an example were we check the standard behaviour of a List. Note that we can both verify that the add method is called and also assert that the item was added to the list. @Spy List listSpy = new ArrayList();@Test public void testSpyReturnsRealValues() throws Exception { String s = 'dobie'; listSpy.add(new String(s));verify(listSpy).add(s); assertEquals(1, listSpy.size()); }Compare this with using a mock object where only the method call can be validated. Because we only mock the behaviour of the list, it does not record that the item has been added and returns the default value of zero when we call the size() method. @Mock List listMock = new ArrayList ();@Test public void testMockReturnsZero() throws Exception { String s = 'dobie';listMock.add(new String(s));verify(listMock).add(s); assertEquals(0, listMock.size()); } Another useful feature of the testSpy is the ability to stub return calls. When this is done the object will behave as normal until the stubbed method is called. In this example we stub the get method to always throw a RuntimeException. The rest of the behaviour remains the same. 
@Test(expected=RuntimeException.class)
public void testSpyReturnsStubbedValues() throws Exception {
    listSpy.add(new String("dobie"));
    assertEquals(1, listSpy.size());

    when(listSpy.get(anyInt())).thenThrow(new RuntimeException());
    listSpy.get(0);
}

In this example we again keep the core behaviour but change the size() method to return 1 initially and 5 for all subsequent calls.

@Test
public void testSpyReturnsStubbedValues2() throws Exception {
    int size = 5;
    when(listSpy.size()).thenReturn(1, size);

    int mockedListSize = listSpy.size();
    assertEquals(1, mockedListSize);

    mockedListSize = listSpy.size();
    assertEquals(5, mockedListSize);

    mockedListSize = listSpy.size();
    assertEquals(5, mockedListSize);
}

This is pretty Magic!

Fake Object http://xunitpatterns.com/Fake%20Object.html

Fake objects are usually hand-crafted or lightweight objects only used for testing and not suitable for production. A good example would be an in-memory database or a fake service layer. They tend to provide much more functionality than standard test doubles and as such are probably not usually candidates for implementation using Mockito. That’s not to say that they couldn’t be constructed as such, just that it’s probably not worth implementing this way. Reference: Mocks And Stubs – Understanding Test Doubles With Mockito from our JCG partner John Dobie at the Agile Engineering Techniques blog....
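Since the Fake Object is the one category above without a code sample, here is a minimal hand-crafted fake. The repository interface and names are purely illustrative and are not taken from the examples in this article:

// A hypothetical repository used by some code under test.
interface PriceRepository {
    void save(String tradeRef, int price);
    Integer findPrice(String tradeRef);
}

// The fake: a real, working implementation backed by an in-memory map.
// It is fully functional for tests, but clearly not suitable for production.
class InMemoryPriceRepository implements PriceRepository {

    private final java.util.Map<String, Integer> prices = new java.util.HashMap<String, Integer>();

    @Override
    public void save(String tradeRef, int price) {
        prices.put(tradeRef, price);
    }

    @Override
    public Integer findPrice(String tradeRef) {
        return prices.get(tradeRef);
    }
}

A test can exercise real business logic against such a fake without any Mockito involvement at all, which is exactly why fakes sit somewhat apart from the other test doubles.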
android-logo

Android: Facebook’s Notification Widget

Have you ever checked out the Facebook app? When you click on the Notifications button at the top, the app creates a nice overlaid window that contains a scrolling list view of all your info. It doesn’t dim out the background, and it also disappears if you click anywhere on the screen that isn’t in the overlaid window itself. The overlay is a great idea to add some polish to a UI.

The other day I began trying to figure out how Facebook does this overlay. A good way to reproduce this type of overlay is to actually use a Transparent Activity. You can define a Transparent Activity in the Android Manifest as follows:

<activity android:name=".OverlayActivity" android:theme="@android:style/Theme.Translucent.NoTitleBar" > </activity>

This gives you an empty activity. So when this activity is started, it looks as if nothing has happened, and the user can no longer click on the UI elements from the previous activity. Now that we have an activity that lets the background of the last activity be visible, we need to design the overlay itself in XML. In Facebook’s case, this would be a ListView and the rectangle graphic. The layout would also need to be positioned correctly. Notice in the Facebook image above how the image is positioned in such a way that the rectangle pointer is directly below the button that activated it.

The beauty of using this implementation is that the overlay’s functionality is completely segregated because it is housed in a new activity. In the OverlayActivity, we simply generate our list and set up the click handlers for the list. There are just a few more tricks left to get all of the functionality that Facebook has added.

To make the overlay disappear when any area outside of the overlay is clicked, some work needs to be done in the overlay’s layout. This involves setting up a click handler to cover all of the area that the overlay’s visible widgets aren’t using. By setting the click handler in the root layout to close the activity on click, we have our desired functionality. This works well in Android because any click handlers set up on other views (in Facebook’s case, the ListView) would override the root’s click handler.

To go one step beyond Facebook’s implementation, animation could be added to provide transitions for when the overlay is shown and removed. For example, we could easily add fade-in and fade-out animations that would make the overlay look even sharper. Since we have an activity, it’s very easy to add any kind of animation to the overlay. For the animation fun, check out the Alpha animation in Android.

Reference: Implementing Facebook’s Notification Widget in Android from our JCG partner Isaac Taylor at the Programming Mobile blog....
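Putting the tricks above together, here is a minimal sketch of what such an OverlayActivity could look like. The layout and view ids (R.layout.overlay, R.id.overlay_root) are made-up placeholders, and the fade-out uses one of the stock Android animations rather than a custom Alpha animation:

import android.app.Activity;
import android.os.Bundle;
import android.view.View;

public class OverlayActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The root layout fills the screen; the notification widget sits inside it.
        setContentView(R.layout.overlay);

        // Clicking anywhere outside the widget closes the overlay. Views inside
        // the widget (the ListView, for example) register their own listeners,
        // which take precedence over this one.
        findViewById(R.id.overlay_root).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                finish();
            }
        });
    }

    @Override
    public void finish() {
        super.finish();
        // Fade the transparent activity out instead of the default transition.
        overridePendingTransition(0, android.R.anim.fade_out);
    }
}

The calling activity could similarly pass android.R.anim.fade_in to overridePendingTransition() right after startActivity() to fade the overlay in.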
mongodb-logo

MongoDB performance testing

So, this morning I was hacking around in the mongo shell. I had come up with three different ways to aggregate the data I wanted, but wasn’t sure about which one I should subsequently port to code to use within my application. So how would I decide on which method to implement? Well, let’s just choose the one that performs the best. OK, how do I do that? Hmmm. I could download and install some of the tools out there, or I could just wrap the shell code in a function and add some timings. OR, I could use the same tool that I use to performance test everything else: JMeter. To me it was a no-brainer.

So how do we do it? There is a full tutorial here. Simply put, you need to do the following:

Create a Sampler class (a rough sketch is shown below).
Create a BeanInfo class.
Create a properties file.
Bundle up into a jar and drop it into the apache-jmeter-X.X\lib\ext folder.
Update search_paths=../lib/ext/mongodb.jar in jmeter.properties if you place the jar anywhere else.

How I did it

I tend to have a scratch pad project set up in my IDE, so I decided just to go with that. Just to be on the safe side, I imported all the dependencies from:

apache-jmeter-X.X\lib
apache-jmeter-X.X\lib\ext
apache-jmeter-X.X\lib\junit

I then created the two classes and the properties file. I then exported the jar to apache-jmeter-X.X\lib\ext, and fired up JMeter. Go through the normal steps to set the test plan up:

Right click Test Plan and add a Thread Group.
Right click the Thread Group and add a Sampler, in this case a MongoDB Script Sampler.
Add your script to the textarea; db.YOUR_COLLECTION_NAME.insert({"jan" : "thinks he is great"})
Run the test

Happy days. You can then use JMeter as you would for any other sampler.

Future enhancements

This is just a hack that took me 37 minutes to get running, plus 24 minutes if you include this post. This can certainly be extended, for instance to allow you to enter the replica set config details, and to pull the creation of the connection out so we’re not initiating it each time we run a test.

Reference: Performance testing MongoDB from our JCG partner Jan Ettles at the Exceptionally exceptional exceptions blog....
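To make the "Create a Sampler class" step a little more concrete, here is a rough sketch of what such a class could look like. The class name, the script property, the use of the server-side eval command and the legacy MongoDB Java driver calls are all illustrative assumptions, not the exact code from the tutorial linked above:

import org.apache.jmeter.samplers.AbstractSampler;
import org.apache.jmeter.samplers.Entry;
import org.apache.jmeter.samplers.SampleResult;
import org.apache.jmeter.testbeans.TestBean;

import com.mongodb.BasicDBObject;
import com.mongodb.CommandResult;
import com.mongodb.DB;
import com.mongodb.MongoClient;

public class MongoScriptSampler extends AbstractSampler implements TestBean {

    // Populated by JMeter from the textarea that the BeanInfo class exposes.
    private String script;

    public String getScript() {
        return script;
    }

    public void setScript(String script) {
        this.script = script;
    }

    @Override
    public SampleResult sample(Entry entry) {
        SampleResult result = new SampleResult();
        result.setSampleLabel(getName());
        result.sampleStart();
        try {
            // Opening a connection per sample is the shortcut the post mentions
            // pulling out as a future enhancement.
            MongoClient mongo = new MongoClient("localhost", 27017);
            try {
                DB db = mongo.getDB("test");
                // Run the shell-style script through the server-side eval command.
                CommandResult commandResult = db.command(new BasicDBObject("eval", getScript()));
                result.setSuccessful(commandResult.ok());
                result.setResponseMessage(commandResult.toString());
            } finally {
                mongo.close();
            }
        } catch (Exception e) {
            result.setSuccessful(false);
            result.setResponseMessage(e.toString());
        } finally {
            result.sampleEnd();
        }
        return result;
    }
}

The matching BeanInfo class and properties file from the other steps then take care of rendering the script textarea and its labels in the JMeter GUI.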