
What's New Here?


Everybody Knows About MVC…

From a recent blog post, you may have gathered that I've recently been conducting some interviews, and as they were for web application developers, a question I asked was "can you explain what the MVC pattern is?" To their credit, every candidate knew the answer. For those of you who don't know, MVC stands for Model, View, Controller and is a design pattern used to separate the business, data and presentation logic of an application into discrete components. There are many definitions of the MVC pattern's components on the web, so at the risk of confusing things even more, here are mine:

- Model: the model represents the data or knowledge within a system. It usually comes from, but is not limited to, the data in the database and may include business logic. To my mind, it's really the information that the user wants to see on their screen.
- View: the view is responsible for displaying the model on the screen. In the case of a web app it's presented by the browser, and in the Java world it's commonly built using JSPs.
- Controller: the controller links the user, model and view together, taking a user's request, marrying it with the appropriate model and combining the model with the appropriate view.

The usual diagram shows the controller receiving the user's request and mediating between the model and the view. The benefits of this separation include reusability, for example using the same controller to talk to both a web browser and a phone; maintainability, as it's easier to find, fix and enhance things; and testability, as you can test each component separately.

The MVC pattern was invented by Trygve Reenskaug and has been around since about 1978. Trygve Reenskaug both has his own page on Wikipedia and maintains his own web page detailing MVC. So far as web apps go, there seem to be as many versions and definitions of MVC as there are grains of sand on a beach, with various debates about what constitutes a model and a view. For example, in a web app, does the view include the HTML or is it just the CSS?

Hopefully I'm not being contentious when I say that web apps generally employ a variation of MVC known as the Front Controller pattern. In this pattern there is usually a servlet that receives requests from a browser. This servlet examines the request and then delegates it to another object which acts as a sub-controller, tying together the view and model for that particular request (a minimal sketch of this servlet-plus-sub-controller shape appears below).

Early implementations of the Front Controller often used what is known as the JSP Front Strategy, whereby the JSP for a particular request acted as the sub-controller. In using this strategy you are often faced with the task of writing a whole bunch of custom tag libraries for inclusion in each page. These are responsible both for marshalling the model and for determining how it is presented in the view. From experience, this leads to a breakdown in the separation of concerns, with bits of controller, model and view all mixed together in one place, usually demonstrated by JSPs within JSPs, containing custom tags for presentation logic, mixed with other custom tags for data access, all merged with Java scriptlets, HTML, JavaScript and a general air of developer confusion. When the separation of concerns breaks down, MVC breaks down, and several anti-patterns rear their ugly heads, including Functional Decomposition, the Monster Object and the Big Ball of Mud. Sun (now Oracle), in their J2EE Core Patterns, do not recommend using the JSP Front Strategy. From experience, this is something I definitely agree with.
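To make the Front Controller shape concrete, here is a minimal sketch of a front controller servlet delegating to pure Java sub-controllers. This is my own illustration, not code from the original post or from any particular framework; the SubController interface and the /accounts mapping are hypothetical.

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The front controller: one servlet receives every request and delegates
// to a sub-controller chosen from the request path.
public class FrontControllerServlet extends HttpServlet {

    // Hypothetical sub-controller abstraction: marry the request with a
    // model and return the view to render.
    interface SubController {
        String handle(HttpServletRequest request);
    }

    private final Map<String, SubController> controllers = new HashMap<String, SubController>();

    @Override
    public void init() {
        controllers.put("/accounts", new SubController() {
            public String handle(HttpServletRequest request) {
                request.setAttribute("model", "account data"); // the model
                return "/WEB-INF/jsp/accounts.jsp";            // the view
            }
        });
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Assumes the servlet is mapped to a wildcard such as /app/*,
        // so getPathInfo() yields the sub-controller key, e.g. "/accounts".
        SubController controller = controllers.get(req.getPathInfo());
        if (controller == null) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        String view = controller.handle(req);
        req.getRequestDispatcher(view).forward(req, resp);
    }
}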
More recent implementations, steering well clear of the JSP Front Strategy, delegate to a pure Java sub-controller, as in the sketch above, leaving the JSP solely responsible for sorting out the presentation. The sub-controller's responsibility is to grab the data from the model and poke it into the JSP for rendering. This approach has been hugely successful, having been adopted by many of the web application frameworks such as Struts, which uses Action classes, and Spring MVC, which uses handler classes in version 2.x and its @Controller annotation in version 3. There must be some pitfalls in using this technique, but no serious ones, such as the breakdown of the separation of concerns, come to mind. If you know of any, please let me know…
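For comparison, here is roughly what such a sub-controller looks like with the Spring MVC 3 @Controller annotation mentioned above. This is a sketch of my own; the request mapping and view name are assumptions, not code from the original post.

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class AccountController {

    // Spring's DispatcherServlet (its front controller) routes /accounts here.
    @RequestMapping("/accounts")
    public String listAccounts(Model model) {
        model.addAttribute("accounts", "account data"); // the model
        return "accounts"; // logical view name, resolved e.g. to accounts.jsp
    }
}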
Reference: Everybody Knows About MVC… from our JCG partner Roger Hughes at the Captain Debug's blog.

Scala Tutorial – scripting, compiling, main methods, return values of functions

Preface

This is part 10 of tutorials for first-time programmers getting into Scala. Other posts are on this blog, and you can get links to those and other resources on the links page of the Computational Linguistics course I'm creating these for. Additionally, you can find this and other tutorial series on the JCG Java Tutorials page.

The tutorials up to this point have been based on working with the Scala REPL or running basic scripts from the command line. The latter is called "scripting" and is usually done for fairly simple, self-contained coding tasks. For more involved tasks that require a number of different modules and access to libraries produced by others, it is necessary to work with a build system that brings together your code and others' code, and allows you to compile it, test it, and package it so that you can use it as an application. This tutorial takes you from running Scala scripts to compiling Scala programs to create byte code that can be shared by different applications. This will act as a bridge to set you up for the next step of using a build system. Along the way, some points will be made about objects, extending some of the ideas from the previous tutorial about object-oriented programming. At a high level, the relevance of objects to a larger, modularized code base should be pretty clear: objects encapsulate data and functions that can be used by other objects, and we need to be able to organize them so that objects know how to find other objects and class definitions. Build systems, which we'll look at in the next tutorial, will make this straightforward.

Running Scala scripts

In the beginning, you started with the REPL.

scala> println("Hello, World!")
Hello, World!

Of course, the REPL is just a (very useful) playground for trying out snippets of Scala code, not for doing real work. So, you saw that you could put code like println("Hello, World!") into a file called Hello.scala and run it from the command line.

$ scala Hello.scala
Hello, World!

The homeworks and tutorials done so far have worked in this way, though they are a bit more complex. We can even include class definitions and objects created from a class. For example, using the Person class from the previous tutorial, we can put all the code into a file called People.scala (btw, this name doesn't matter; it could as well be Blurglecruncheon.scala).

class Person (
  val firstName: String,
  val lastName: String,
  val age: Int,
  val occupation: String
) {
  def fullName: String = firstName + " " + lastName

  def greet (formal: Boolean): String = {
    if (formal)
      "Hello, my name is " + fullName + ". I'm a " + occupation + "."
    else
      "Hi, I'm " + firstName + "!"
  }
}

val johnSmith = new Person("John", "Smith", 37, "linguist")
val janeDoe = new Person("Jane", "Doe", 34, "computer scientist")
val johnDoe = new Person("John", "Doe", 43, "philosopher")
val johnBrown = new Person("John", "Brown", 28, "mathematician")

val people = List(johnSmith, janeDoe, johnDoe, johnBrown)
people.foreach(person => println(person.greet(true)))

This can now be run from the command line, producing the expected result.

$ scala People.scala
Hello, my name is John Smith. I'm a linguist.
Hello, my name is Jane Doe. I'm a computer scientist.
Hello, my name is John Doe. I'm a philosopher.
Hello, my name is John Brown. I'm a mathematician.

However, suppose you wanted to use the Person class from a different application (e.g. one that is defined in a different file).
You might think you could save the following in the file Radiohead.scala and then run it with Scala.

val thomYorke = new Person("Thom", "Yorke", 43, "musician")
val johnnyGreenwood = new Person("Johnny", "Greenwood", 39, "musician")
val colinGreenwood = new Person("Colin", "Greenwood", 41, "musician")
val edObrien = new Person("Ed", "O'Brien", 42, "musician")
val philSelway = new Person("Phil", "Selway", 44, "musician")

val radiohead = List(thomYorke, johnnyGreenwood, colinGreenwood, edObrien, philSelway)
radiohead.foreach(bandmember => println(bandmember.greet(false)))

However, if you do "scala Radiohead.scala" you'll see five errors, each one complaining that the type Person wasn't found. How could Radiohead.scala know about the Person class and where to find its definition? I'm not aware of a way to do this with scripting-style Scala programming, and even though I suspect there may be a way to do something this simple, I don't even care to know it. Let's just get straight to compiling.

Compiling

The usual thing we do with Scala is to compile our programs to byte code. We won't go into the details of that, but it basically means that Scala turns the text of a Scala program into a compiled set of instructions that can be run by the Java Virtual Machine. (It actually compiles to Java byte code, which is one reason it is pretty straightforward to use Java code when coding in Scala.) So, what does compilation look like? We need to start by changing the code above a bit. Make a directory that has nothing in it, say /tmp/tutorial. Then save the following as PersonApp.scala in that directory.

class Person (
  val firstName: String,
  val lastName: String,
  val age: Int,
  val occupation: String
) {
  def fullName: String = firstName + " " + lastName

  def greet (formal: Boolean): String = {
    if (formal)
      "Hello, my name is " + fullName + ". I'm a " + occupation + "."
    else
      "Hi, I'm " + firstName + "!"
  }
}

object PersonApp {
  def main (args: Array[String]) {
    val johnSmith = new Person("John", "Smith", 37, "linguist")
    val janeDoe = new Person("Jane", "Doe", 34, "computer scientist")
    val johnDoe = new Person("John", "Doe", 43, "philosopher")
    val johnBrown = new Person("John", "Brown", 28, "mathematician")

    val people = List(johnSmith, janeDoe, johnDoe, johnBrown)
    people.foreach(person => println(person.greet(true)))
  }
}

Notice that the code looks pretty similar to the script above, but now we have a PersonApp object with a main method. The main method contains all the stuff that the original script had after the Person definition. Notice also that there is an args argument to the main method, which should look familiar by now. What you are seeing is that a Scala script is basically just a simplified view of an object with a main method. Such scripts use the convention that the Array[String] provided to the method is called args.

Okay, so now consider what happens if you run "scala PersonApp.scala": nothing at all. That's because there is no executable code available outside of the object and class definitions. Instead, we need to compile the code and then run the main method of specific objects. The next step is to run scalac (N.B. "scalac" with a "c", not "scala") on PersonApp.scala. The name scalac is short for Scala compiler. Do the following steps in the /tmp/tutorial directory.

$ scalac PersonApp.scala
$ ls
Person.class
PersonApp$$anonfun$main$1.class
PersonApp$.class
PersonApp.class
PersonApp.scala

Notice that a number of *.class files have been generated.
These are byte code files that the scala application is able to run. A nice thing here is that all the compilation is already done: when in the past you ran "scala" on your programs (scripts), it had to first compile the instructions and then run the program. Now we are separating these steps into a compilation phase and a running phase. Having generated the class files, we can run any object that has a main method, like PersonApp.

$ scala PersonApp
Hello, my name is John Smith. I'm a linguist.
Hello, my name is Jane Doe. I'm a computer scientist.
Hello, my name is John Doe. I'm a philosopher.
Hello, my name is John Brown. I'm a mathematician.

Try running "scala Person" to see the error message it gives you. Next, move the Radiohead.scala script that you saved earlier into this directory and run it.

$ scala Radiohead.scala
Hi, I'm Thom!
Hi, I'm Johnny!
Hi, I'm Colin!
Hi, I'm Ed!
Hi, I'm Phil!

This is the same script, but now it is in a directory that contains the Person.class file, which tells Scala everything that Radiohead.scala needs to construct objects of the Person class. Scala makes available any class file that is on the CLASSPATH, an environment variable that by default includes the current working directory.

Despite this success, we're moving away from script land with this post, so change the contents of Radiohead.scala to the following.

object RadioheadGreeting {
  def main (args: Array[String]) {
    val thomYorke = new Person("Thom", "Yorke", 43, "musician")
    val johnnyGreenwood = new Person("Johnny", "Greenwood", 39, "musician")
    val colinGreenwood = new Person("Colin", "Greenwood", 41, "musician")
    val edObrien = new Person("Ed", "O'Brien", 42, "musician")
    val philSelway = new Person("Phil", "Selway", 44, "musician")

    val radiohead = List(thomYorke, johnnyGreenwood, colinGreenwood, edObrien, philSelway)
    radiohead.foreach(bandmember => println(bandmember.greet(false)))
  }
}

Then run scalac on all of the *.scala files in the directory. There are now more class files, corresponding to the RadioheadGreeting object we defined.

$ scalac *.scala
$ ls
Person.class
PersonApp$$anonfun$main$1.class
PersonApp$.class
PersonApp.class
PersonApp.scala
Radiohead.scala
RadioheadGreeting$$anonfun$main$1.class
RadioheadGreeting$.class
RadioheadGreeting.class

You can now run "scala RadioheadGreeting" to get the greeting from the band members. Notice that the file RadioheadGreeting was saved in is called Radiohead.scala, and that no class files named Radiohead.class or the like were generated. Again, the file could have been named something entirely different, like Turlingdrome.scala. (Embrace your inner Vogon.)

Multiple objects in the same file

There is no problem having multiple objects with main methods in the same file. When you compile the file with scalac, each object generates its own set of class files, and you call scala on whichever class contains the definition for the main method you want to run. As an example, save the following as Greetings.scala.

object Hello {
  def main (args: Array[String]) {
    println("Hello, world!")
  }
}

object Goodbye {
  def main (args: Array[String]) {
    println("Goodbye, world!")
  }
}

object SayIt {
  def main (args: Array[String]) {
    args.foreach(println)
  }
}

Next compile the file, and then you can run any of the generated classes (since they all have main methods).

$ scalac Greetings.scala
$ scala Hello
Hello, world!
$ scala Goodbye
Goodbye, world!
$ scala Goodbye many useless arguments
Goodbye, world!
$ scala SayIt "Oh freddled gruntbuggly" "thy micturations are to me" "As plurdled gabbleblotchits on a lurgid bee."
Oh freddled gruntbuggly
thy micturations are to me
As plurdled gabbleblotchits on a lurgid bee.

In case you missed it earlier, the args array is where the command line arguments go, and you can thus make use of them (or not, as in the case of the Hello and Goodbye objects).

Functions with return values versus those without

Some functions return a value while others do not. As a simple example, consider the following pair of functions.

scala> def plusOne (x: Int) = x+1
plusOne: (x: Int)Int

scala> def printPlusOne (x: Int) = println(x+1)
printPlusOne: (x: Int)Unit

The first takes an Int argument and returns an Int, which is a value. The other takes an Int and returns Unit, which is to say it doesn't return a value. Notice the difference in behavior between the two following uses of the functions.

scala> val foo = plusOne(2)
foo: Int = 3

scala> val bar = printPlusOne(2)
3
bar: Unit = ()

Scala uses a slightly subtle distinction in function definitions to distinguish functions that return values from those that return Unit (no value): if you don't use an equals sign in the definition, it means that the function returns Unit.

scala> def plusOneNoEquals (x: Int) { x+1 }
plusOneNoEquals: (x: Int)Unit

scala> def printPlusOneNoEquals (x: Int) { println(x+1) }
printPlusOneNoEquals: (x: Int)Unit

Notice that the above definition of plusOneNoEquals returns Unit, even though it looks almost identical to plusOne defined earlier. Check it out.

scala> val foo = plusOneNoEquals(2)
foo: Unit = ()

Now look back at the main methods given earlier. No equals sign. Yep, they don't have a return value. They are the entry point into your code, and any effects of running the code must be output to the console (e.g. with println or via a GUI) or written to the file system (or the internet somewhere). The outputs of such functions (ones which do not return a value) are called side effects. You need them for the main methods. However, in many styles of programming, a great deal of work is done with side effects. I've been trying to gently lead the readers of this tutorial to adopt a more functional approach that tries to avoid them. I've found it a more effective style in my own coding, so I'm hoping it will serve you all better to start from that point. (Note that Scala supports many styles of programming, which is nice because you have choice and can go with what you find most suitable.)

Cleaning up

You may have noticed that the directory you are working in becomes quite littered with class files as you run scalac on your Scala files. For example, here's what the state of the code directory worked with in this tutorial looks like after compiling all files.

$ ls
Goodbye$.class
Goodbye.class
Greetings.scala
Hello$.class
Hello.class
Person.class
PersonApp$$anonfun$main$1.class
PersonApp$.class
PersonApp.class
PersonApp.scala
Radiohead.scala
RadioheadGreeting$$anonfun$main$1.class
RadioheadGreeting$.class
RadioheadGreeting.class
SayIt$$anonfun$main$1.class
SayIt$.class
SayIt.class

A mess, right? Generally, one would rarely develop a Scala application by compiling it directly in this way. Instead, a build system is used to manage the compilation process, organize the files, and allow one to easily access software libraries created by other developers. The next tutorial will cover this, using SBT (the Simple Build Tool).
Reference: First steps in Scala for beginning programmers, Part 10 from our JCG partner Jason Baldridge at the Bcomposes blog.

Java RESTful API integration testing

This post focuses on the basic principles and mechanics of writing Java integration tests for a RESTful API (with a JSON payload). The goal is to provide an introduction to the technologies and to write some tests for basic correctness. The examples consume the latest version of the GitHub REST API. For an internal application, this kind of testing will usually run as a late step in a Continuous Integration process, consuming the REST API after it has already been deployed.

When testing a REST resource, there are usually a few orthogonal responsibilities the tests should focus on:

- the HTTP response code
- other HTTP headers in the response
- the payload (JSON, XML)

Each test should only focus on a single responsibility and include a single assertion. Focusing on a clear separation always has benefits, but when doing this kind of black-box testing it's even more important, as the general tendency is to write complex test scenarios from the very beginning. Another important aspect of the integration tests is adherence to the Single Level of Abstraction Principle: the logic within a test should be written at a high level. Details such as creating the request, sending the HTTP request to the server, dealing with IO, etc. should not be done inline but via utility methods.

Testing the HTTP response code

@Test
public void givenUserDoesNotExists_whenUserInfoIsRetrieved_then404IsReceived()
        throws ClientProtocolException, IOException {
    // Given
    String name = randomAlphabetic(8);
    HttpUriRequest request = new HttpGet("https://api.github.com/users/" + name);

    // When
    HttpResponse httpResponse = httpClient.execute(request);

    // Then
    RestAssert.assertResponseCodeIs(httpResponse, 404);
}

This is a rather simple test, which verifies that a basic happy path is working, without adding too much complexity to the test suite. If, for whatever reason, it fails, then there is no need to look at any other test for this URL until it is fixed. Because verifying the response code is one of the most common assertions of the integration testing suite, a custom assertion is used.

public static void assertResponseCodeIs(final HttpResponse response, final int expectedCode) {
    final int statusCode = response.getStatusLine().getStatusCode();
    assertEquals(expectedCode, statusCode);
}

Testing other headers of the HTTP response

@Test
public void givenRequestWithNoAcceptHeader_whenRequestIsExecuted_thenDefaultResponseContentTypeIsJson()
        throws ClientProtocolException, IOException {
    // Given
    String jsonMimeType = "application/json";
    HttpUriRequest request = new HttpGet("https://api.github.com/users/eugenp");

    // When
    HttpResponse response = this.httpClient.execute(request);

    // Then
    String mimeType = EntityUtils.getContentMimeType(response.getEntity());
    assertEquals(jsonMimeType, mimeType);
}

This ensures that the response when requesting the details of the user is actually JSON. There is a logical progression of the functionality under test: first the response code, to ensure that the request was OK, then the mime type of the response, and only then the verification that the actual JSON is correct.
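Following the same progression, a natural companion test (a sketch of my own, not from the original post) sets the Accept header explicitly and verifies that the server honors it:

@Test
public void givenAcceptHeaderIsJson_whenRequestIsExecuted_thenResponseContentTypeIsJson()
        throws ClientProtocolException, IOException {
    // Given - the client explicitly asks for a JSON representation
    HttpUriRequest request = new HttpGet("https://api.github.com/users/eugenp");
    request.setHeader("Accept", "application/json");

    // When
    HttpResponse response = this.httpClient.execute(request);

    // Then - the server should honor the requested representation
    String mimeType = EntityUtils.getContentMimeType(response.getEntity());
    assertEquals("application/json", mimeType);
}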
Testing the JSON payload of the HTTP response

@Test
public void givenUserExists_whenUserInformationIsRetrieved_thenRetrievedResourceIsCorrect()
        throws ClientProtocolException, IOException {
    // Given
    HttpUriRequest request = new HttpGet("https://api.github.com/users/eugenp");

    // When
    HttpResponse response = new DefaultHttpClient().execute(request);

    // Then
    GitHubUser resource = RetrieveUtil.retrieveResourceFromResponse(response, GitHubUser.class);
    assertThat(resource.getLogin(), Matchers.is("eugenp"));
}

In this case, I know the default representation of GitHub resources is JSON, but usually the Content-Type header of the response should be tested alongside the Accept header of the request: the client asks for a particular type of representation via Accept, which the server should honor.

The utilities for testing

Here are the utilities that enable the tests to remain at a higher level of abstraction.

Decorate the HTTP request with the JSON payload (or directly with the POJO):

public static <T> HttpEntityEnclosingRequest decorateRequestWithResource(
        final HttpEntityEnclosingRequest request, final T resource) throws IOException {
    Preconditions.checkNotNull(request);
    Preconditions.checkNotNull(resource);
    final String resourceAsJson = JsonUtil.convertResourceToJson(resource);
    return JsonUtil.decorateRequestWithJson(request, resourceAsJson);
}

public static HttpEntityEnclosingRequest decorateRequestWithJson(
        final HttpEntityEnclosingRequest request, final String json) throws UnsupportedEncodingException {
    Preconditions.checkNotNull(request);
    Preconditions.checkNotNull(json);
    request.setHeader(HttpConstants.CONTENT_TYPE_HEADER, "application/json");
    request.setEntity(new StringEntity(json));
    return request;
}

Retrieve the JSON payload (or directly the POJO) from the HTTP response:

public static String retrieveJsonFromResponse(final HttpResponse response) throws IOException {
    Preconditions.checkNotNull(response);
    return IOUtils.toString(response.getEntity().getContent());
}

public static <T> T retrieveResourceFromResponse(final HttpResponse response, final Class<T> clazz)
        throws IOException {
    Preconditions.checkNotNull(response);
    Preconditions.checkNotNull(clazz);
    final String jsonFromResponse = retrieveJsonFromResponse(response);
    return ConvertUtil.convertJsonToResource(jsonFromResponse, clazz);
}

Conversion utilities between Java objects (POJOs) and JSON:

public static <T> String convertResourceToJson(final T resource) throws IOException {
    Preconditions.checkNotNull(resource);
    return new ObjectMapper().writeValueAsString(resource);
}

public static <T> T convertJsonToResource(final String json, final Class<T> clazzOfResource)
        throws IOException {
    Preconditions.checkNotNull(json);
    Preconditions.checkNotNull(clazzOfResource);
    return new ObjectMapper().readValue(json, clazzOfResource);
}

Dependencies

The utilities and tests make use of the following libraries, all available in Maven Central:

- Apache HttpCore and HttpClient
- Apache Commons IO
- Apache Commons Lang
- Jackson
- Guava
- Hamcrest

Conclusion

This is only one part of what the complete integration testing suite should be. The tests focus on ensuring basic correctness for the REST API, without going into more complex scenarios: discoverability of the API, consumption of different representations for the same resource, or other more advanced areas. I will address these in a further post; in the meantime, check out the full project on GitHub.
Reference: Introduction to Java integration testing for a RESTful API from our JCG partner Eugen Paraschiv at the baeldung blog.

Spring & Quartz Integration with Custom Annotation, the SPANN way

In a previous post, we demonstrated how to create and configure Quartz jobs with annotations in a Spring container. We used a class-level annotation to add some metadata to a bean which implements Quartz's Job; the annotation defines the job's name, group and cron expression. Later, a big portion of the code is dedicated to handling that annotation: find the beans, read the annotation, create the JobDetail and the CronTrigger, apply their properties and pass them over to the scheduler.

If you are working on an average to big-size Spring project, you will probably soon enough start to see boilerplate configuration and code which can often be refactored by encapsulating it in annotations; the @QuartzJob annotation is a good example. At masetta we tried to use the Polyforms project to implement DAO methods with annotations (such methods usually consist of some boilerplate code around a JPA query). Soon enough we found it was not as configurable and extendable as we needed, had problems handling named query parameters, and had initialization-order problems (because of how Polyforms uses aspects to implement abstract methods). In addition, we used custom annotations and handled them "manually", but they were getting too many… What we came up with is spann.

Spann is an open source extension for the Spring framework which allows advanced configuration of Spring beans using annotations. To give a peek at one of spann's features, I will rely on our previous post and implement similar functionality. Instead of coding, I will use spann. As you will see, the implementation is very brief.

Overview

The code uses Spring's native Quartz scheduling support (as explained in the Spring reference). Spring's MethodInvokingJobDetailFactoryBean is used to create a JobDetail bean that delegates the job execution to another bean's method. As a Trigger I use Spring's implementation of CronTrigger. To create and configure the JobDetail and the CronTrigger beans I will create method-level annotations using spann's @BeanConfig annotation.

The code

The example code can be checked out as a Maven project from the spann trunk using

svn co http://spann.googlecode.com/svn/trunk/spann-quartz-example

It includes a pom with all needed dependency coordinates and a functional test case.

1. Create an annotation to configure a MethodInvokingJobDetailFactoryBean

package com.masetta.spann.quartzexample.annotations;

import java.lang.annotation.*;
import org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean;
import com.masetta.spann.metadata.common.Artifact;
import com.masetta.spann.spring.base.beanconfig.*;
import com.masetta.spann.spring.base.beanconfig.factories.*;

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.METHOD})
@BeanConfig(
    create=MethodInvokingJobDetailFactoryBean.class,
    attached=@Attached(role="quartzJob",scope=Artifact.METHOD),
    explicit=true,
    wire={
        @WireMeta(property="targetObject",scope=Artifact.CLASS,factory=BeanReferenceFactory.class),
        @WireMeta(property="targetMethod",scope=Artifact.METHOD,factory=MetadataNameFactory.class)
    })
public @interface QuartzJob {
    String name();
    String group();
    boolean concurrent() default true;
}

The @BeanConfig annotation creates and configures a MethodInvokingJobDetailFactoryBean using the QuartzJob annotation's attributes (name, group and concurrent). The configured bean is "attached" to the annotated method with the 'quartzJob' role. This will be used later to inject the JobDetail bean into the trigger. "Attaching" is an internal spann concept.
It allows referencing beans by specifying an artifact (e.g. a class or a method) and a semantic role (here 'quartzJob') instead of a name. This enables annotation composition, spann's most powerful feature, which is also demonstrated here. The wire attribute sets the targetObject and targetMethod properties with values populated from the current artifact's metadata (in this case MethodMetadata), ScanContext and annotation, using the given factories.

2. Create a cron trigger annotation

package com.masetta.spann.quartzexample.annotations;

import java.lang.annotation.*;
import org.springframework.scheduling.quartz.CronTriggerBean;
import com.masetta.spann.metadata.common.Artifact;
import com.masetta.spann.spring.base.beanconfig.*;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@BeanConfig(
    create=CronTriggerBean.class,
    attached=@Attached(role="quartzTrigger",scope=Artifact.METHOD),
    explicit=true,
    references=@SpannReference(property="jobDetail",role="quartzJob",scope=Artifact.METHOD)
)
public @interface Cron {
    String cronExpression();
    String timeZone() default "";
    String misfireInstructionName() default "";
    String[] triggerListenerNames() default {};
}

Again I use the @BeanConfig annotation, this time creating and configuring a CronTriggerBean. The explicit attribute indicates how to handle default annotation-attribute values. When explicit is true, default attribute values are ignored. For example, the timeZone, misfireInstructionName and triggerListenerNames properties of the CronTriggerBean will only be set if the corresponding annotation-attribute value is set; the default value will be silently ignored. Using the references attribute, the jobDetail property is set to the bean created in step 1: spann will look up the bean attached to the annotated method with the 'quartzJob' role.

Note that the timeZone annotation-attribute type is String, while the type of CronTriggerBean's timeZone property is TimeZone. The value is handled natively by Spring, transparently converted to TimeZone using Spring's PropertyEditor facility. You can even use Spring's ${…} syntax for expression substitution. The checked-in code contains a third annotation to create an interval trigger, used later in this example.

3. Configure spann and Spring's SchedulerFactoryBean

Our applicationContext.xml is very brief:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:spann="http://os.masetta.com/spann/schema/spann-1.0"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
           http://os.masetta.com/spann/schema/spann-1.0 http://os.masetta.com/spann/schema/spann-1.0.xsd">

    <context:component-scan base-package="com.masetta.spann.quartzexample"/>
    <spann:scan base-package="com.masetta.spann.quartzexample"/>
    <bean class="org.springframework.scheduling.quartz.SchedulerFactoryBean" autowire="byType"/>

</beans>

If you know Spring, there should not be any magic for you here: I configure a Spring component scan, a spann scan and the SchedulerFactoryBean, as described in the Spring reference; only here I let Spring autowire all trigger beans to the corresponding property, hence autowire="byType".
4. Using the annotations

package com.masetta.spann.quartzexample.test;

import java.util.concurrent.atomic.AtomicInteger;

import org.springframework.stereotype.Component;

import com.masetta.spann.quartzexample.annotations.*;
import com.masetta.spann.spring.core.annotations.VisitMethods;

@Component
@VisitMethods
public class Savana {

    private AtomicInteger newElemphants = new AtomicInteger();
    private AtomicInteger newZebras = new AtomicInteger();

    @QuartzJob(name="zebraBorn",group="savana")
    @Interval(repeatInterval=200)
    public void zebraBorn() {
        newZebras.incrementAndGet();
    }

    @QuartzJob(name="elephantBorn",group="savana")
    @Cron(cronExpression="0/2 * * ? * * *")
    public void elephantBorn() {
        newElemphants.incrementAndGet();
    }

    public int getNewZebras() {
        return newZebras.get();
    }

    public int getNewElephants() {
        return newElemphants.get();
    }
}

The bean is configured via Spring's @Component annotation. It's a normal Spring bean, and any Spring or aspect annotation (@Autowired, @Resource, @Transactional) will be natively processed by Spring. By default, spann processes only class-level annotations; @VisitMethods instructs spann to also visit this class' methods and process their annotations, if present. The use of the new annotations is straightforward: each scheduled method is annotated with both @QuartzJob (to create the delegating JobDetail) and either the @Cron or the @Interval annotation (not shown here but available in svn) to create the trigger. This also demonstrates spann's annotation composition, which allows annotations to be granular and pluggable: @QuartzJob can be used with any annotation which configures a Trigger bean, while @Cron and @Interval can be used with any annotation which configures a JobDetail bean.

Summary

Spann is an open source extension for the Spring framework which allows advanced bean configuration using annotations. The code demonstrates the use of spann's @BeanConfig annotation to create Quartz-scheduled jobs using annotations. The example uses spann's high-level API, namely the @BeanConfig annotation, implemented in the spann project itself. Spann's high-level API includes other annotations that allow method replacement (for implementing abstract methods at runtime, internally using cglib), synthetic-adapter creation and comprehensive JPA Query support.

Spann's integration with Spring is very tight: it creates "plain old Spring beans", just like the ones defined in XML or by the @Component annotation. This allows you to leverage all of Spring's bean features: the beans can be retrieved via Spring's ApplicationContext, have the normal bean lifecycle, can be post-processed (e.g. for expression substitution), autowired, intercepted using aspects, managed via JMX and so on. You don't need hacks and workarounds, and don't need to reimplement or copy and adjust existing Spring code. In addition, you have less boilerplate code and less boilerplate configuration.

As flexible as @BeanConfig and spann's other annotations are, there are use cases they do not cover. But spann's low-level API allows creating new annotations from scratch, giving developers fine-grained control over the creation and configuration of bean definitions. You can even use spann to process any other class metadata by implementing your own MetadataVisitor, optionally ignoring annotations altogether.

Reference: Spring & Quartz Integration with Custom Annotation, the SPANN way from our W4G partner Ron Piterman.

Log4j, Stat4j, SMTPAppender Integration – Aggregating Error Logs to Send Email When Too Many

Our development team wanted to be notified as soon as something goes wrong in our production system, a critical Java web application serving thousands of customers daily. The idea was to let it send us an email when there are too many errors, usually indicating a problem with a database, an external web service, or something really bad with the application itself. In this post I want to present a simple solution we have implemented using a custom Log4J appender based on Stat4J, together with an SMTPAppender (which is more difficult to configure and troubleshoot than you might expect).

The Challenge

We faced the following challenges with the logs:

- It's unfortunately normal to have a certain number of exceptions (customers select search criteria yielding no results; temporary, unimportant outages of external services; etc.) and we certainly don't want to be spammed because of that. So the solution must have a configurable threshold and only send an alert when it is exceeded.
- The failure rate should be computed over a configurable period (long enough not to trigger an alert because of a few-minutes outage, yet short enough for the team to be informed ASAP when something serious happens).
- Once an alert is sent, no further alerts should be sent for some time (ideally until the original problem is fixed); we don't want to be spammed because of a problem we already know about.

The Solution

We've based our solution on Lara D'Abreo's Stat4J, which provides a custom Log4J appender that uses the logs to compute configurable measures and triggers alerts when they exceed their warning or critical thresholds. It is a couple of years old, alpha-quality (regarding its generality and flexibility) open-source library, which is fortunately simple enough to be modified easily for one's needs. So we have tweaked Stat4J to produce alerts when the number of errors exceeds the thresholds and keep quiet thereafter, and combined that with a Log4J SMTPAppender that listens for the alerts and sends them via e-mail to the team.

Stat4J Tweaking

The key components of Stat4J are the Stat4jAppender for Log4J itself; calculators (measures) that aggregate the individual logs (e.g. by counting them or extracting some number from them); statistics that define which logs to consider via regular expressions and how to process them by referencing a calculator; and finally alerts that log a warning when the value of a statistic exceeds its limits. You can learn more in an article that introduces Stat4J. We have implemented a custom measure calculator, RunningRate (to count the number of failures in the last N minutes), and modified Stat4J as follows:

- We've enhanced Alert to support a new attribute, quietperiod, so that once triggered, subsequent alerts will be ignored for that duration (unless the previous alert was just a warning while the new one is a critical one); this behaviour is sketched in the snippet below.
- We've modified the appender to include the log's Throwable together with the log message, which is then passed to the individual statistics calculators, so that we can filter more precisely what we want to count.
- We've modified Alert to log alerts as errors instead of warnings, so that the SMTPAppender doesn't ignore them.

Get our modified Stat4j from GitHub (sources or a compiled jar). Disclaimer: it is one day's hack and I'm not proud of the code.
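To make the quietperiod semantics concrete, here is a minimal sketch of the idea in plain Java. This is my own illustration and assumes nothing about Stat4J's actual class structure:

// Illustration only: a rate-limited alert gate that stays quiet for
// quietPeriodMs after firing, but lets a critical alert break through
// the quiet period started by a mere warning.
public class QuietPeriodGate {
    private final long quietPeriodMs;
    private long lastFiredAt = Long.MIN_VALUE;
    private boolean lastWasCritical = false;

    public QuietPeriodGate(long quietPeriodMs) {
        this.quietPeriodMs = quietPeriodMs;
    }

    public synchronized boolean shouldFire(boolean critical, long nowMs) {
        boolean quiet = nowMs - lastFiredAt < quietPeriodMs;
        // Suppress the alert during the quiet period, unless it escalates
        // from a warning to a critical.
        if (quiet && !(critical && !lastWasCritical)) {
            return false;
        }
        lastFiredAt = nowMs;
        lastWasCritical = critical;
        return true;
    }
}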
Stat4J Configuration

Take the example stat4j.properties and put it on the classpath. It is already configured with the correct calculator, statistic and alert. See this part:

### JAKUB HOLY - MY CONFIG
calculator.minuteRate.classname=net.sourceforge.stat4j.calculators.RunningRate
# Period is in [ms] 1000 * 60 * 10 = 10 min:
calculator.minuteRate.period=600000

statistic.RunningErrorRate.description=Errors per 10 minutes
statistic.RunningErrorRate.calculator=minuteRate
# Regular expression to match "<throwable.toString> <- <original log message>"
statistic.RunningErrorRate.first.match=.*Exception.*

# Error Rate
alert.TooManyErrorsRecently.description=Too many errors in the log
alert.TooManyErrorsRecently.statistic=RunningErrorRate
alert.TooManyErrorsRecently.warn= >=3
alert.TooManyErrorsRecently.critical= >=10
alert.TooManyErrorsRecently.category=alerts
# Ignore following warnings (or criticals, after the first critical) for the given amount of time:
# 1000 * 60 * 100 = 100 min
alert.TooManyErrorsRecently.quietperiod=6000000

The important config params are:

- calculator.minuteRate.period (in ms): count errors over this period, resetting the count at its end; a reasonable value may be 10 minutes.
- alert.TooManyErrorsRecently.warn and alert.TooManyErrorsRecently.critical: trigger the alert when this many errors have been encountered in the period; reasonable values depend on your application's normal error rate.
- alert.TooManyErrorsRecently.quietperiod (in ms): don't send further alerts for this period, so as not to spam in a persistent failure situation; the reasonable value depends on how quickly you usually fix problems, one hour would seem OK to me.

Log4J Configuration

Now we need to tell Log4J to use the Stat4j appender to count error occurrences and to send alerts via email:

log4j.rootCategory=DEBUG, Console, FileAppender, Stat4jAppender
...
### Stat4jAppender & EmailAlertsAppender ###
# Collects statistics about logs and sends alerts when there
# were too many failures in cooperation with the EmailAlertsAppender

## Stat4jAppender
log4j.appender.Stat4jAppender=net.sourceforge.stat4j.log4j.Stat4jAppender
log4j.appender.Stat4jAppender.Threshold=ERROR
# For configuration see stat4j.properties

## EmailAlertsAppender
# BEWARE: SMTPAppender ignores its Threshold and only ever sends ERROR or higher messages
log4j.category.alerts=ERROR, EmailAlertsAppender
log4j.appender.EmailAlertsAppender=org.apache.log4j.net.SMTPAppender
log4j.appender.EmailAlertsAppender.To=dummy@example.com
# BEWARE: The address below must have a valid domain or some receivers will reject it (e.g. GMail)
log4j.appender.EmailAlertsAppender.From=noreply-stat4j@google.no
log4j.appender.EmailAlertsAppender.SMTPHost=172.20.20.70
log4j.appender.EmailAlertsAppender.BufferSize=1
log4j.appender.EmailAlertsAppender.Subject=[Stat4j] Too many exceptions in log
log4j.appender.EmailAlertsAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.EmailAlertsAppender.layout.ConversionPattern=%d{ISO8601} %-5p %X{clientIdentifier} %c %x - %m%n

Comments on the configuration:

- The Stat4jAppender is added to the root category so that it sees all logs.
- Threshold=ERROR: only ERRORs are sent to Stat4J; we are not interested in less serious exceptions.
- "alerts" is the log category used by the Stat4jAppender to log alerts (the same one you would create via Logger.getLogger("alerts")); as mentioned, the SMTPAppender will, regardless of its configuration, only process ERRORs and higher.

Issues with the SMTPAppender

It is quite tricky to get the SMTPAppender working. Some pitfalls:

- SMTPAppender ignores all logs that are not ERROR or higher, no matter how you set its threshold.
- If you specify a non-existent From domain, then some recipients' mail servers may simply delete the email as spam (e.g. GMail).
- To send emails, you of course need mail.jar (and for older JVMs also activation.jar); here are instructions for Tomcat.

And one $100 tip: to debug it, run your application in debug mode and set a method breakpoint on javax.mail.Transport#send (you don't need the source code) and, when there, set this.session.debug to true to get a very detailed log of the following SMTP communication in the server log.
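A lighter-weight variant of the same trick, an assumption of mine rather than something from the original post: JavaMail prints the same detailed SMTP conversation when its session debug flag is on, and since log4j's SMTPAppender builds its mail Session from system properties, this can often be switched on without a debugger.

// Hypothetical alternative to the breakpoint trick: enable JavaMail's
// debug output before the first email is sent, e.g. early in startup.
System.setProperty("mail.debug", "true");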
Sidenote

The fact that this article is based on Log4J doesn't mean I'd personally choose it; it just came with the project. I'd at least consider using the newer and shiny Logback instead :-) .

Conclusion

Stat4j + SMTPAppender are a very good base for a rather flexible do-it-yourself alerting system based on logs and e-mail. You can also achieve the same thing out-of-the-box with Hyperic HQ.

Reference: Aggregating Error Logs to Send a Warning Email When Too Many of Them – Log4j, Stat4j, SMTPAppender from our JCG partner Jakub Holý at "The Holy Java" Blog.

What are procedures and functions after all?

Many RDBMS support the concept of "routines", usually calling them procedures and/or functions. These concepts have been around in programming languages for a while, also outside of databases. Famous languages distinguishing procedures from functions are:

- Ada
- BASIC
- Pascal
- etc…

The general distinction between (stored) procedures and (stored) functions can be summarized like this:

Procedures:
- are called using a JDBC CallableStatement
- have no return value
- usually support OUT parameters

Functions:
- can be used in SQL statements
- have a return value
- usually don't support OUT parameters

In JDBC terms, the distinction looks roughly like the sketch below.
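This sketch is purely illustrative: the routine names (count_books, count_books_fn), the authors table and the connection URL are hypothetical, and as the exceptions discussed next show, the exact syntax varies by database.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

public class RoutinesSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:yourdb://localhost/test")) {

            // A procedure: called via CallableStatement using the JDBC escape
            // syntax; no return value, results come back through OUT parameters.
            try (CallableStatement proc = conn.prepareCall("{ call count_books(?, ?) }")) {
                proc.setString(1, "Orwell");                 // IN parameter
                proc.registerOutParameter(2, Types.INTEGER); // OUT parameter
                proc.execute();
                System.out.println("Books: " + proc.getInt(2));
            }

            // A function: has a return value, so it can be embedded in SQL.
            try (PreparedStatement stmt =
                     conn.prepareStatement("SELECT count_books_fn(author) FROM authors")) {
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println("Books: " + rs.getInt(1));
                    }
                }
            }
        }
    }
}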
But there are exceptions to these rules:

- DB2, H2 and HSQLDB don't allow the JDBC escape syntax when calling functions; functions must be used in a SELECT statement.
- H2 only knows functions (without OUT parameters).
- Oracle functions may have OUT parameters.
- Oracle knows functions that mustn't be used in SQL statements for transactional reasons.
- Postgres only knows functions (with all features combined). OUT parameters can also be interpreted as return values, which is quite elegant/freaky, depending on your taste.
- The Sybase jconn3 JDBC driver doesn't handle null values correctly when using the JDBC escape syntax on functions.

In general, it can be said that the field of routines (procedures/functions) is far from being standardised in modern RDBMS. Every database has its own ways, and JDBC provides only a little abstraction over the great variety of procedure/function implementations, especially when advanced data types such as cursors, UDTs or arrays are involved.

Reference: What are procedures and functions after all? from our JCG partner Lukas Eder at the "Java, SQL, and jOOQ" Blog.

The new Java Caching Standard (javax.cache)

This post explores the new Java caching standard: javax.cache.

How it Fits into the Java Ecosystem

This standard is being developed by JSR107, of which the author is co-spec lead. JSR107 is included in Java EE 7, which is being developed by JSR342. Java EE 7 is due to be finalised at the end of 2012, but in the meantime javax.cache will work in Java SE 6 and higher and Java EE 6 environments, as well as with Spring and other popular environments. JSR107 has draft status. We are currently at release 0.3 of the API, the reference implementation and the TCK. The code samples in this article work against this version.

Adoption

Vendors who are either active members of the expert group or have expressed interest in implementing the specification are:

- Terracotta – Ehcache
- Oracle – Coherence
- JBoss – Infinispan
- IBM – ExtremeScale
- SpringSource – Gemfire
- GridGain
- TMax
- Google App Engine Java

Terracotta will be releasing a module for Ehcache to coincide with the final draft and will then update it if required for the final version.

Features

From a design point of view, the basic concepts are a CacheManager that holds and controls a collection of Caches, and Caches, which have entries. The basic API can be thought of as map-like, with the following additional features:

- atomic operations, similar to java.util.ConcurrentMap
- read-through caching
- write-through caching
- cache event listeners
- statistics
- transactions, including all isolation levels
- caching annotations
- generic caches which hold a defined key and value type
- definition of storage by reference (applicable to on-heap caches only) and storage by value

Optional Features

Rather than split the specification into a number of editions targeted at different user constituencies, such as Java SE and Spring/EE, we have taken a different approach. Firstly, for Java SE style caching there are no dependencies. And for Spring/EE, where you might want to use annotations and/or transactions, the dependencies will be satisfied by those frameworks. Secondly, we have a capabilities API via ServiceProvider.isSupported(OptionalFeature feature), so that you can determine at runtime what the capabilities of the implementation are. Optional features are:

- storeByReference (storeByValue is the default)
- transactional
- annotations

This makes it possible for an implementation to support the specification without necessarily supporting all the features, and allows end users and frameworks to discover what the features are so they can dynamically configure appropriate usage.

Good for Standalone and Distributed Caching

While the specification does not mandate a particular distributed cache topology, it is cognizant that caches may well be distributed. We have one API that covers both usages, but it is sensitive to distributed concerns. For example, CacheEntryListener has a NotificationScope of events it listens for, so that events can be restricted to local delivery. We do not have high network cost map-like methods such as keySet() and values(), and we generally prefer zero or low cost return types. So while Map has V put(K key, V value), javax.cache.Cache has void put(K key, V value).

Classloading

Caches contain data shared by multiple threads, which may themselves be running in different container applications or OSGi bundles within one JVM, and which might be distributed across multiple JVMs in a cluster. This makes classloading tricky. We have addressed this problem. When a CacheManager is created, a classloader may be specified. If none is specified, the implementation provides a default.
Either way, object de-serialization will use the CacheManager's classloader. This is a big improvement over the approach taken by caches like Ehcache, which use a fall-back approach: first the thread's context classloader is used, and if that fails, another classloader is tried. This can be made to work in most scenarios, but is a bit hit and miss and varies considerably by implementation.

Getting the Code

The spec is in Maven Central. The Maven snippet is:

<dependency>
    <groupId>javax.cache</groupId>
    <artifactId>cache-api</artifactId>
    <version>0.3</version>
</dependency>

A Cook's Tour of the API

Creating a CacheManager

We support the Java 6 java.util.ServiceLoader creational approach. It will automatically detect a cache implementation on your classpath. You then create a CacheManager with:

CacheManager cacheManager = Caching.getCacheManager();

which returns a singleton CacheManager called "__default__". Subsequent calls return the same CacheManager. CacheManagers can have names and classloaders configured in, e.g.

CacheManager cacheManager = Caching.getCacheManager("app1", Thread.currentThread().getContextClassLoader());

Implementations may also support direct creation with new, for maximum flexibility:

CacheManager cacheManager = new RICacheManager("app1", Thread.currentThread().getContextClassLoader());

Or, to do the same thing without adding a compile-time dependency on any particular implementation:

String className = "javax.cache.implementation.RIServiceProvider";
Class<ServiceProvider> clazz = (Class<ServiceProvider>) Class.forName(className);
ServiceProvider provider = clazz.newInstance();
return provider.createCacheManager(Thread.currentThread().getContextClassLoader(), "app1");

We expect implementations to have their own well-known configuration files which will be used to configure the CacheManager. The name of the CacheManager can be used to distinguish the configuration file. For Ehcache, this will be the familiar ehcache.xml placed at the root of the classpath, with a hyphenated prefix for the name of the CacheManager. So the default CacheManager will simply be ehcache.xml, and "myCacheManager" will be myCacheManager-ehcache.xml.

Creating a Cache

The API supports programmatic creation of caches. This complements the usual convention of configuring caches declaratively, which is left to each vendor. To programmatically configure a cache named "testCache" which is set for read-through:

cacheManager = getCacheManager();
CacheConfiguration cacheConfiguration = cacheManager.createCacheConfiguration();
cacheConfiguration.setReadThrough(true);
Cache testCache = cacheManager.createCacheBuilder("testCache")
    .setCacheConfiguration(cacheConfiguration).build();

Getting a reference to a Cache

You get caches from the CacheManager. To get a cache called "testCache":

Cache<Integer, Date> cache = cacheManager.getCache("testCache");

Basic Cache Operations

To put to a cache:

Cache<Integer, Date> cache = cacheManager.getCache(cacheName);
Date value1 = new Date();
Integer key = 1;
cache.put(key, value1);

To get from a cache:

Cache<Integer, Date> cache = cacheManager.getCache(cacheName);
Date value2 = cache.get(key);

To remove from a cache:

Cache<Integer, Date> cache = cacheManager.getCache(cacheName);
Integer key = 1;
cache.remove(key);
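Putting those pieces together, a complete minimal program might look like this. It is a sketch assembled from the snippets above against the 0.3 draft API used in this article; the API may well change before the final release.

import java.util.Date;
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;

public class BasicCacheExample {
    public static void main(String[] args) {
        // Obtain the default CacheManager via the ServiceLoader mechanism.
        CacheManager cacheManager = Caching.getCacheManager();

        // Create the cache programmatically, then look it up typed.
        cacheManager.createCacheBuilder("testCache").build();
        Cache<Integer, Date> cache = cacheManager.getCache("testCache");

        cache.put(1, new Date()); // store an entry
        Date hit = cache.get(1);  // read it back
        System.out.println("Cached value: " + hit);
        cache.remove(1);          // evict it again
    }
}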
Annotations

JSR107 introduces a standardized set of caching annotations, which do method-level caching interception on annotated classes running in dependency injection containers. Caching annotations are becoming increasingly popular, starting with Ehcache Annotations for Spring, which then influenced Spring 3's caching annotations. The JSR107 annotations cover the most common cache operations, including:

- @CacheResult – use the cache
- @CachePut – put into the cache
- @CacheRemoveEntry – remove a single entry from the cache
- @CacheRemoveAll – remove all entries from the cache

When the required cache name, key and value can be inferred, they are not required. See the JavaDoc for the details. To allow greater control, you can specify all these and more. In the following example, the cacheName attribute is specified to be "domainCache", index is specified as the key and domain as the value.

public class DomainDao {
    @CachePut(cacheName="domainCache")
    public void updateDomain(String domainId, @CacheKeyParam int index, @CacheValue Domain domain) {
        ...
    }
}

The reference implementation includes an implementation for both Spring and CDI. CDI is the standardised container-driven injection introduced in Java EE 6. The implementation is nicely modularised for reuse and uses an Apache license, so we expect several open source caches to reuse it. While we have not done an implementation for Guice, this could easily be done.

Annotation Example

This example shows how to use annotations to keep a cache in sync with an underlying data structure, in this case a blog manager, and also how to use the cache to speed up responses, done with @CacheResult.

public class BlogManager {

    @CacheResult(cacheName="blogManager")
    public Blog getBlogEntry(String title) {...}

    @CacheRemoveEntry(cacheName="blogManager")
    public void removeBlogEntry(String title) {...}

    @CacheRemoveAll(cacheName="blogManager")
    public void removeAllBlogs() {...}

    @CachePut(cacheName="blogManager")
    public void createEntry(@CacheKeyParam String title, @CacheValue Blog blog) {...}

    @CacheResult(cacheName="blogManager")
    public Blog getEntryCached(String randomArg, @CacheKeyParam String title) {...}
}

Wiring Up Spring

For Spring, the key is the following config line, which adds the caching annotation interceptors into the Spring context:

<jcache-spring:annotation-driven proxy-target-class="true"/>

A full example is:

<beans>
    <context:annotation-config/>
    <jcache-spring:annotation-driven proxy-target-class="true"/>
    <bean id="cacheManager" factory-method="getCacheManager" />
</beans>

Spring has its own caching annotations based on earlier work from JSR107 contributor Eric Dalquist. Those annotations and JSR107 will happily coexist.

Wiring Up CDI

First create an implementation of javax.cache.annotation.BeanProvider, and then tell CDI where to find it by declaring a resource named javax.cache.annotation.BeanProvider in the classpath at /META-INF/services/. For an example using the Weld implementation of CDI, see the CdiBeanProvider in our CDI test harness.

Further Reading

For further reading, visit the JSR's home page at https://github.com/jsr107/jsr107spec.

Reference: javax.cache: The new Java Caching Standard from our JCG partner Greg Luck at Greg Luck's Blog.
Wiring Up CDI

First create an implementation of javax.cache.annotation.BeanProvider, and then tell CDI where to find it by declaring a resource named javax.cache.annotation.BeanProvider in the classpath at /META-INF/services/. For an example using the Weld implementation of CDI, see the CdiBeanProvider in our CDI test harness.

Further Reading

For further reading, visit the JSR's home page at https://github.com/jsr107/jsr107spec.

Reference: javax.cache: The new Java Caching Standard from our JCG partner Greg Luck at Greg Luck's Blog.

Related Articles: Spring 3.1 Cache Abstraction Tutorial Java EE6 CDI, Named Components and Qualifiers JBoss 4.2.x Spring 3 JPA Hibernate Tutorial JBoss 4.2.x Spring 3 JPA Hibernate Tutorial Part #2 Java Tutorials and Android Tutorials list...

Java Secret: Loading and unloading static fields

OVERVIEW

To start with, it is natural to assume that static fields have a special life cycle and live for the life of the application. You could assume that they live in a special place in memory, like the start of memory in C, or in the perm gen with the class meta information. However, it may be surprising to learn that static fields live on the heap, can have any number of copies and are cleaned up by the GC like any other object. This follows on from a previous discussion: Are static blocks interpreted?

LOADING STATIC FIELDS

When a class is obtained for linking, it may not result in the static block being initialised. A simple example:

public class ShortClassLoadingMain {
    public static void main(String... args) {
        System.out.println("Start");
        Class aClass = AClass.class;
        System.out.println("Loaded");
        String s = AClass.ID;
        System.out.println("Initialised");
    }
}

class AClass {
    static final String ID;
    static {
        System.out.println("AClass: Initialising");
        ID = "ID";
    }
}

prints

Start
Loaded
AClass: Initialising
Initialised

You can see that you can obtain a reference to a class before it has been initialised; only when it is used does it get initialised.

LOADING MULTIPLE COPIES OF A STATIC FIELD

Each class loader which loads a class has its own copy of static fields. If you load a class in two different class loaders, these classes can have static fields with different values.

UNLOADING STATIC FIELDS

Static fields are unloaded when the Class' ClassLoader is unloaded. The ClassLoader is unloaded when a GC is performed and there are no strong references to it from the threads' stacks.

PUTTING THESE TWO CONCEPTS TOGETHER

Here is an example where a class prints a message when it is initialised and when its fields are finalized:

class UtilityClass {
    static final String ID = Integer.toHexString(System.identityHashCode(UtilityClass.class));
    private static final Object FINAL = new Object() {
        @Override
        protected void finalize() throws Throwable {
            super.finalize();
            System.out.println(ID + " Finalized.");
        }
    };

    static {
        System.out.println(ID + " Initialising");
    }
}

By loading this class repeatedly, twice at a time:

for (int i = 0; i < 2; i++) {
    cl = new CustomClassLoader(url);
    clazz = cl.loadClass(className);
    loadClass(clazz);

    cl = new CustomClassLoader(url);
    clazz = cl.loadClass(className);
    loadClass(clazz);
    triggerGC();
}
triggerGC();

you can see an output like this:

1b17a8bd Initialising
2f754ad2 Initialising
-- Starting GC
1b17a8bd Finalized.
-- End of GC
6ac2a132 Initialising
eb166b5 Initialising
-- Starting GC
6ac2a132 Finalized.
2f754ad2 Finalized.
-- End of GC
-- Starting GC
eb166b5 Finalized.
-- End of GC

In this log, two copies of the class are loaded first. The references to the first class/classloader are overwritten by references to the second class/classloader. The first one is cleaned up on a GC, the second one is retained. On the second loop, two more copies are initialised. The fourth one is retained, and the second and third are cleaned up on a GC. Finally, the fourth copy of the static fields is cleaned up on a GC when it is no longer referenced.
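The loop above relies on helpers that the original post links to rather than inlines. For readers who want to try it, the sketch below shows one plausible shape for them; CustomClassLoader, loadClass and triggerGC here are guesses at those helpers, not the author's actual code.

import java.net.URL;
import java.net.URLClassLoader;

public class LoadAndUnloadSketch {

    // A classloader with no parent delegation for application classes,
    // so each instance loads its own copy of the class under test.
    static class CustomClassLoader extends URLClassLoader {
        CustomClassLoader(URL url) {
            super(new URL[]{url}, null); // null parent: only bootstrap classes are shared
        }
    }

    // Force initialisation so the static block runs.
    static void loadClass(Class<?> clazz) throws Exception {
        Class.forName(clazz.getName(), true, clazz.getClassLoader());
    }

    // Encourage the JVM to collect unreachable classloaders. System.gc()
    // is only a hint, so this may need several attempts in practice.
    static void triggerGC() throws InterruptedException {
        System.out.println("-- Starting GC");
        System.gc();
        Thread.sleep(100);
        System.out.println("-- End of GC");
    }
}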
THE CODE

The first example – ShortClassLoadingMain
The second example – LoadAndUnloadMain

Reference: Java Secret: Loading and unloading static fields from our JCG partner Peter Lawrey at the Vanilla Java blog.

Related Articles: Things Every Programmer Should Know 10 Tips for Proper Application Logging Laws of Software Design Java Best Practices Series 9 Tips on Surviving the Wild West Development Process...

Weird Funny Java!

Sometimes we can do really weird and funny things with Java; some other times we are just being creative! Take a look at the following three examples and you will find out what I mean! Have fun!

Strine translator

Translating to Strine ;)

public static void main(String... args) {
    System.out.println("Hello World");
}

static {
    try {
        Field value = String.class.getDeclaredField("value");
        value.setAccessible(true);
        value.set("Hello World", value.get("G'Day Mate."));
    } catch (Exception e) {
        throw new AssertionError(e);
    }
}

prints

G'Day Mate.

BTW: Strine is the Australian dialect of English.

Randomly not so random

In a random sequence, all sequences are equally likely, even not so random ones.

Random random = new Random(441287210);
for (int i = 0; i < 10; i++)
    System.out.print(random.nextInt(10) + " ");

prints

1 1 1 1 1 1 1 1 1 1

and

Random random = new Random(-6732303926L);
for (int i = 0; i < 10; i++)
    System.out.print(random.nextInt(10) + " ");

prints

0 1 2 3 4 5 6 7 8 9

Lastly:

public static void main(String... args) {
    System.out.println(randomString(-229985452) + ' ' + randomString(-147909649));
}

public static String randomString(int seed) {
    Random rand = new Random(seed);
    StringBuilder sb = new StringBuilder();
    for (int i = 0; ; i++) {
        int n = rand.nextInt(27);
        if (n == 0) break;
        sb.append((char) ('`' + n));
    }
    return sb.toString();
}

prints

hello world

Java plus

A confusing piece of code here for you to parse. ;)

int i = (byte) + (char) - (int) + (long) - 1;
System.out.println(i);

prints

1
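If you are wondering where magic seeds like -229985452 in the randomString example above come from, they can be found by brute force over the int range. The sketch below is an added illustration, not from the original article; it reuses the same decoding logic to hunt for a seed spelling a target word. Searching the full range can take a long while, so treat it as a toy.

import java.util.Random;

public class SeedFinder {

    // Same decoding as randomString above: 1..26 map to 'a'..'z', 0 terminates.
    static String randomString(int seed) {
        Random rand = new Random(seed);
        StringBuilder sb = new StringBuilder();
        while (true) {
            int n = rand.nextInt(27);
            if (n == 0) break;
            sb.append((char) ('`' + n));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String target = "hello";
        // Scan the int range for a seed that produces the target word.
        for (long seed = Integer.MIN_VALUE; seed <= Integer.MAX_VALUE; seed++) {
            if (randomString((int) seed).equals(target)) {
                System.out.println("Found seed: " + (int) seed);
                return;
            }
        }
        System.out.println("No seed found for " + target);
    }
}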
Reference: Java plus, Randomly not so random and Strine translator from our JCG partner Peter Lawrey at the Vanilla Java blog.

Related Articles: Funny Source Code Comments Funny Computer Programming Quotes Things Every Programmer Should Know 10 Tips for Proper Application Logging Laws of Software Design Java Best Practices Series 9 Tips on Surviving the Wild West Development Process...

Testing GWT Apps with Selenium or WebDriver

Good functional testing is one of the most difficult tasks for web application developers and their teams. It is a challenge to develop tests that are cheap to maintain and yet provide good test coverage, which helps reduce QA costs and increase quality. Both Selenium and WebDriver (which is essentially now the successor to Selenium) provide a good way to functionally test web applications in multiple target environments without manual work.

In the past, web UIs were built using page navigation to allow users to submit forms, etc. These days, more and more web applications use Ajax and therefore act and look a lot more like desktop applications. However, this poses problems for testing: Selenium and WebDriver are designed to work with user interactions resulting in page navigation, and don't play well with Ajax apps out of the box. GWT-based applications in particular have this problem, but there are some ways I've found to develop useful and effective tests. GWT also poses other issues in regards to simulating user input and locating DOM elements, and I discuss those below. Note that my code examples use Groovy to make them concise, but they can be pretty easily converted to Java code.

Problem 1: Handling Asynchronous Changes

One issue that developers face pretty quickly when testing applications based on GWT is detecting and waiting for a response to user interaction. For example, a user may click a button which results in an AJAX call which would either succeed and close a window or, alternatively, show an error message. What we need is a way to block until we see the expected changes, with a timeout so we can fail if we don't see the expected changes.

Solution: Use WebDriverWait

The easiest way to do this is by taking advantage of WebDriverWait (or Selenium's Wait). This allows you to wait on a condition and proceed when it evaluates to true. Below I use Groovy code for the conciseness of using closures, but the same can be done in Java, though with a bit more code due to the need for anonymous classes.

def waitForCondition(Closure closure) {
    int timeout = 20
    WebDriverWait w = new WebDriverWait(driver, timeout)
    w.until({
        closure() // wait until this closure evaluates to true
    } as ExpectedCondition)
}

def waitForElement(By finder) {
    waitForCondition { driver.findElements(finder).size() > 0 }
}

def waitForElementRemoval(By finder) {
    waitForCondition { driver.findElements(finder).size() == 0 }
}

// now some sample test code

submitButton.click() // submit a form

// wait for the expected error summary to show up
waitForElement(By.xpath("//div[@class='error-summary']"))
// maybe some more verification here to check the expected errors

// ... correct error and resubmit

submitButton.click()
waitForElementRemoval(By.xpath("//div[@class='error-summary']"))
waitForElementRemoval(By.id("windowId"))

As you can see from the example, your code can focus on the actual test logic while handling the asynchronous nature of GWT applications seamlessly.
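Since the Groovy closures translate to anonymous classes in Java, here is one way the same wait helpers might look in plain Java. This is a sketch added for illustration, not code from the original article; the WaitHelper class name is made up.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedCondition;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitHelper {

    private final WebDriver driver;

    public WaitHelper(WebDriver driver) {
        this.driver = driver;
    }

    // Block until at least one element matches, or fail after 20 seconds.
    public void waitForElement(final By finder) {
        new WebDriverWait(driver, 20).until(new ExpectedCondition<Boolean>() {
            public Boolean apply(WebDriver d) {
                return d.findElements(finder).size() > 0;
            }
        });
    }

    // Block until no element matches, or fail after 20 seconds.
    public void waitForElementRemoval(final By finder) {
        new WebDriverWait(driver, 20).until(new ExpectedCondition<Boolean>() {
            public Boolean apply(WebDriver d) {
                return d.findElements(finder).isEmpty();
            }
        });
    }
}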
Problem 2: Locating Elements when you have little control over the DOM

In web applications that use templating (JSPs, Velocity, JSF, etc.), you have good control and easy visibility into the DOM structure that your pages will have. With GWT, this isn't always the case. Often, you're dealing with nested elements that you can't control at a fine level. With WebDriver and Selenium, you can target elements using a few methods, but the most useful are by DOM element ID and by XPath. How can we leverage these to get maintainable tests that don't break with minor layout changes?

Solution: Use XPath combined with IDs to limit scope

In my experience, to develop functional GWT tests in WebDriver, you should use somewhat loose XPath as your primary means of locating elements, and supplement it by scoping these calls by DOM ID where applicable. In particular, use IDs at top-level elements like windows or tabs that are unique in your application and won't exist more than once in a page. These can help scope your XPath expressions, which can look for window or form titles, field labels, etc. Here are some examples to get you going. Note that we use // and * in our XPath to keep our expressions flexible, so that layout changes do not break our tests unless they are major.

By byUserName = By.xpath("//*[@id='userTab']//*/..//input")
WebElement userNameField = webDriver.findElement(byUserName)
userNameField.sendKeys("my new user")

// maybe a user click and then wait for the window to disappear
By submitLocator = By.xpath("//*[@id='userTab']//input[@type='submit']")
WebElement submit = webDriver.findElement(submitLocator)
submit.click()

// use our helper method from Problem 1
waitForElementRemoval By.id("userTab")

Problem 3: Normal element interaction methods don't work!

GWT and derivatives (Vaadin, GXT, etc.) often do some magic behind the scenes as far as managing the state of the DOM goes. To the developer, this means you're not always dealing with a plain <input> or <select>, etc. Simply setting the value of a field through normal means may not work, and using WebDriver or Selenium's click methods may not work either. WebDriver has improved in this regard, but issues still persist.

Solution: Unfortunately, just some workarounds

The main problems you're likely to encounter relate to typing into fields and clicking elements. Here are some variants that I have found necessary in the past to get around clicks not working as expected. Try them if you are hitting issues. The examples are in Selenium, but they can be adapted to the corresponding calls in WebDriver if you require them. You may also use the Selenium adapter for WebDriver (WebDriverBackedSelenium) if you want to use the examples directly.

CLICK ISSUES

Sometimes elements won't respond to a click() call in Selenium or WebDriver. In these cases, you usually have to simulate events in the browser. This was true more of Selenium before 2.0 than WebDriver.

// Selenium's click sometimes has to be simulated with events.
def fullMouseClick(String locator) {
    selenium.mouseOver locator
    selenium.mouseDown locator
    selenium.mouseUp locator
}

// In some cases you need only mouseDown, as mouseUp may be
// handled the same as mouseDown.
// For example, this could result in a table row being selected, then deselected.
def mouseOverAndDown(String locator) {
    selenium.mouseOver locator
    selenium.mouseDown locator
}
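On the WebDriver side, the closest equivalent to this event simulation is the Actions class, which synthesizes low-level mouse events. The sketch below is an added illustration under that assumption, not code from the original article.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.interactions.Actions;

public class ClickWorkarounds {

    // Rough equivalent of fullMouseClick: move over the element, press, release.
    public static void fullMouseClick(WebDriver driver, WebElement element) {
        new Actions(driver)
                .moveToElement(element) // fires mouseover
                .clickAndHold(element)  // fires mousedown
                .release(element)       // fires mouseup
                .perform();
    }

    // Rough equivalent of mouseOverAndDown: press without releasing.
    public static void mouseOverAndDown(WebDriver driver, WebElement element) {
        new Actions(driver)
                .moveToElement(element)
                .clickAndHold(element)
                .perform();
    }
}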
TYPING ISSUES

These are the roundabout methods of typing I have been able to use successfully in the past when GWT doesn't recognize typed input.

// fires only key events (works for most GWT inputs)
// Useful if WebDriver sendKeys() or Selenium type() aren't cooperating.
def typeWithEvents(String locator, String text) {
    def keyEvents = ["keydown", "keypress", "keyup"]
    typeWithEvents(locator, text, keyEvents)
}

// fires key events, plus blur and focus for really picky cases
def typeWithFullEvents(String locator, String text) {
    def fullEvents = ["keydown", "keypress", "keyup", "blur", "focus"]
    typeWithEvents(locator, text, fullEvents)
}

// use this directly to customize which events are fired
def typeWithEvents(String locator, String text, def events) {
    text.eachWithIndex { ch, i ->
        selenium.type locator, text.substring(0, i + 1)
        events.each { event ->
            selenium.fireEvent locator, event
        }
    }
}

Note that the exact method that works will have to be figured out by trial and error, and in some cases you may get different behaviour in different browsers. So if you run your functional tests against different environments, you'll have to ensure your method works for all of them.
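For teams working in plain Java, the per-character typing trick translates almost mechanically. The sketch below is an added translation of the Groovy above, not the author's code, and assumes the same Selenium RC selenium object.

import java.util.Arrays;
import java.util.List;

import com.thoughtworks.selenium.Selenium;

public class TypingWorkarounds {

    // Type one character at a time, firing the given events after each one,
    // so GWT's key listeners see the input as it arrives.
    public static void typeWithEvents(Selenium selenium, String locator,
                                      String text, List<String> events) {
        for (int i = 0; i < text.length(); i++) {
            selenium.type(locator, text.substring(0, i + 1));
            for (String event : events) {
                selenium.fireEvent(locator, event);
            }
        }
    }

    // Fires only key events; works for most GWT inputs.
    public static void typeWithEvents(Selenium selenium, String locator, String text) {
        typeWithEvents(selenium, locator, text,
                Arrays.asList("keydown", "keypress", "keyup"));
    }
}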
Conclusion

Hopefully some of you find these tips useful. There are similar tips out there, but I wanted to compile a good set of examples and workarounds so that others in similar situations don't hit dead-ends or waste time on problems that require lots of guessing.

Reference: Testing GWT Apps with Selenium or WebDriver from our JCG partners at the Carfey Software blog.

Related Articles: Services, practices & tools that should exist in any software development house, part 2 Why Automated Tests Boost Your Development Speed Not doing Code Reviews? What's your excuse? Lessons in Software Reliability This comes BEFORE your business logic! Code coverage with unit & integration tests...