JUnit’s Built-in Hamcrest Core Matcher Support

In the post Improving On assertEquals with JUnit and Hamcrest, I briefly discussed Hamcrest “core” matchers being “baked in” with modern versions of JUnit. In that post, I focused particularly on use of JUnit’s assertThat(T, Matcher) static method coupled with the Hamcrest core is() matcher that is automatically included in later versions of JUnit. In this post, I look at additional Hamcrest “core” matchers that are bundled with recent versions of JUnit. Two of the advantages of JUnit including Hamcrest “core” matchers out-of-the-box are that there is no need to specifically download Hamcrest and there is no need to include it explicitly on the unit test classpaths.

Before looking at more of the handy Hamcrest “core” matchers, it is important to point out that I am intentionally and repeatedly referring to “core” Hamcrest matchers, because recent versions of JUnit only provide “core” (and not all) Hamcrest matchers automatically. Any Hamcrest matchers outside of the core matchers still need to be downloaded separately and specified explicitly on the unit test classpath. One way to get an idea of what is Hamcrest “core” (and thus what matchers are available by default in recent versions of JUnit) is to look at that package’s Javadoc-based API documentation. From this JUnit-provided documentation for the org.hamcrest.core package, we see that the following matchers (with their descriptions) are available, along with an indication of whether each is covered in this post:

- AllOf<T>: calculates the logical conjunction of two matchers. Covered here.
- AnyOf<T>: calculates the logical disjunction of two matchers. Covered here.
- DescribedAs<T>: provides a custom description to another matcher. Covered here.
- Is<T>: decorates another Matcher, retaining the behavior but allowing tests to be slightly more expressive. Covered here again (and in the earlier post).
- IsAnything<T>: a matcher that always returns true. Not covered here.
- IsEqual<T>: is the value equal to another value, as tested by the Object.equals(java.lang.Object) method? Covered here.
- IsInstanceOf: tests whether the value is an instance of a class. Covered here.
- IsNot<T>: calculates the logical negation of a matcher. Covered here.
- IsNull<T>: is the value null? Covered here.
- IsSame<T>: is the value the same object as another value? Covered here.

In my previous post demonstrating the Hamcrest is() matcher used in conjunction with JUnit’s assertThat(), I used an IntegerArithmetic implementation as test fodder. I’ll use it again here to demonstrate some of the other Hamcrest core matchers. For convenience, that class is reproduced below.

IntegerArithmetic.java

```java
package dustin.examples;

/**
 * Simple class supporting integer arithmetic.
 *
 * @author Dustin
 */
public class IntegerArithmetic
{
   /**
    * Provide the product of the provided integers.
    *
    * @param firstInteger First integer to be multiplied.
    * @param secondInteger Second integer to be multiplied.
    * @param integers Integers to be multiplied together for a product.
    * @return Product of the provided integers.
    * @throws ArithmeticException Thrown if the product is too small or too large
    *    to be properly represented by a Java integer.
    */
   public int multiplyIntegers(
      final int firstInteger, final int secondInteger, final int ... integers)
   {
      int returnInt = firstInteger * secondInteger;
      for (final int integer : integers)
      {
         returnInt *= integer;
      }
      return returnInt;
   }
}
```

In the Improving On assertEquals with JUnit and Hamcrest post, I relied largely on is() to compare expected results to actual results for the integer multiplication being tested. Another option would have been to use the equalTo matcher as shown in the next code listing.

Using Hamcrest equalTo()

```java
/**
 * Test of multiplyIntegers method, of class IntegerArithmetic, using core
 * Hamcrest matcher equalTo.
 */
@Test
public void testWithJUnitHamcrestEqualTo()
{
   final int[] integers = {4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
   final int expectedResult =
      2 * 3 * 4 * 5 * 6 * 7 * 8 * 9 * 10 * 11 * 12 * 13 * 14 * 15;
   final int result = this.instance.multiplyIntegers(2, 3, integers);
   assertThat(result, equalTo(expectedResult));
}
```

Although not necessary, some developers like to use is and equalTo together because it feels more fluent to them. This is the very reason for is‘s existence: to make the use of other matchers more fluent. I often use is() by itself (implying equalTo()) as discussed in Improving On assertEquals with JUnit and Hamcrest. The next example demonstrates the is() matcher in conjunction with the equalTo matcher.

Using Hamcrest equalTo() with is()

```java
/**
 * Test of multiplyIntegers method, of class IntegerArithmetic, using core
 * Hamcrest matcher equalTo with the "is" matcher.
 */
@Test
public void testWithJUnitHamcrestEqualToAndIsMatchers()
{
   final int[] integers = {4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
   final int expectedResult =
      2 * 3 * 4 * 5 * 6 * 7 * 8 * 9 * 10 * 11 * 12 * 13 * 14 * 15;
   final int result = this.instance.multiplyIntegers(2, 3, integers);
   assertThat(result, is(equalTo(expectedResult)));
}
```

The equalTo Hamcrest matcher performs a comparison similar to calling Object.equals(Object). Indeed, its comparison functionality relies on the underlying object’s equals(Object) implementation. This means that the last two examples pass because the numbers being compared are logically equivalent. When one wants to assert an even stronger identity equality (actually the same object, not just the same logical content), one can use the Hamcrest sameInstance matcher as shown in the next code listing. The not matcher is also applied, because the assertion is true, and the test passes, only with the “not” in place: the expected and actual results happen to NOT be the same instances!
Using Hamcrest sameInstance() with not()

```java
/**
 * Test of multiplyIntegers method, of class IntegerArithmetic, using core
 * Hamcrest matchers not and sameInstance.
 */
@Test
public void testWithJUnitHamcrestNotSameInstance()
{
   final int[] integers = {4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
   final int expectedResult =
      2 * 3 * 4 * 5 * 6 * 7 * 8 * 9 * 10 * 11 * 12 * 13 * 14 * 15;
   final int result = this.instance.multiplyIntegers(2, 3, integers);
   assertThat(result, not(sameInstance(expectedResult)));
}
```

It is sometimes desirable to control the text that is output from the assertion of a failed unit test. JUnit includes the core Hamcrest matcher describedAs() to support this. A code example is shown in the next listing.

Using Hamcrest describedAs() with sameInstance()

```java
/**
 * Test of multiplyIntegers method, of class IntegerArithmetic, using core
 * Hamcrest matchers sameInstance and describedAs. This one will assert a
 * failure so that describedAs can be demonstrated (don't do this with
 * your unit tests at home)!
 */
@Test
public void testWithJUnitHamcrestSameInstanceDescribedAs()
{
   final int[] integers = {4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15};
   final int expectedResult =
      2 * 3 * 4 * 5 * 6 * 7 * 8 * 9 * 10 * 11 * 12 * 13 * 14 * 15;
   final int result = this.instance.multiplyIntegers(2, 3, integers);
   assertThat(result, describedAs(
      "Not same object (different identity reference)",
      sameInstance(expectedResult)));
}
```

Use of describedAs() allows the reporting of a more meaningful message when the associated unit test assertion fails. I am now going to use another contrived class to help illustrate additional core Hamcrest matchers available with recent versions of JUnit. This class that “needs testing” is shown next.
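The distinction between logical equality (equalTo) and reference identity (sameInstance) can be seen with nothing but the JDK's boxed Integer type. The following standalone snippet is my own illustration, not from the original post; it shows why assertThat sees two logically equal results as distinct instances once the primitive ints are autoboxed:

```java
public class IdentityVersusEquality
{
   public static void main(final String[] arguments)
   {
      // Autoboxing two int values well outside the Integer cache range
      // (-128 to 127) produces two distinct Integer objects.
      final Integer expected = 1307674368;
      final Integer actual = 1307674368;
      System.out.println("equals: " + expected.equals(actual));      // logical equality
      System.out.println("same instance: " + (expected == actual));  // reference identity
   }
}
```

This prints `equals: true` and `same instance: false`, which mirrors why not(sameInstance(expectedResult)) passes in the test above while equalTo(expectedResult) also passes.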
SetFactory.java

```java
package dustin.examples;

import java.lang.reflect.InvocationTargetException;
import java.util.*;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
 * A Factory that provides an implementation of a Set interface based on
 * supplied SetType that indicates desired type of Set implementation.
 *
 * @author Dustin
 */
public class SetFactory<T extends Object>
{
   public enum SetType
   {
      ENUM(EnumSet.class),
      HASH(HashSet.class),
      SORTED(SortedSet.class),  // SortedSet is an interface, not implementation
      TREE(TreeSet.class),
      RANDOM(Set.class);        // Set is an interface, not a concrete collection

      private Class setTypeImpl = null;

      SetType(final Class newSetType)
      {
         this.setTypeImpl = newSetType;
      }

      public Class getSetImplType()
      {
         return this.setTypeImpl;
      }
   }

   private SetFactory() {}

   public static SetFactory newInstance()
   {
      return new SetFactory();
   }

   /**
    * Creates a Set using implementation corresponding to the provided Set Type
    * that has a generic parameterized type of that specified.
    *
    * @param setType Type of Set implementation to be used.
    * @param parameterizedType Generic parameterized type for the new set.
    * @return Newly constructed Set of provided implementation type and using
    *    the specified generic parameterized type; null if either of the provided
    *    parameters is null.
    * @throws ClassCastException Thrown if the provided SetType is SetType.ENUM,
    *    but the provided parameterizedType is not an Enum.
    */
   public Set<T> createSet(
      final SetType setType, final Class<T> parameterizedType)
   {
      if (setType == null || parameterizedType == null)
      {
         return null;
      }
      Set<T> newSet = null;
      try
      {
         switch (setType)
         {
            case ENUM:
               if (parameterizedType.isEnum())
               {
                  newSet = EnumSet.noneOf((Class<Enum>) parameterizedType);
               }
               else
               {
                  throw new ClassCastException(
                       "Provided SetType of ENUM being supplied with "
                     + "parameterized type that is not an enum ["
                     + parameterizedType.getName() + "].");
               }
               break;
            case RANDOM:
               newSet = LinkedHashSet.class.newInstance();
               break;
            case SORTED:
               newSet = TreeSet.class.newInstance();
               break;
            default:
               newSet = (Set<T>) setType.getSetImplType().getConstructor().newInstance();
               break;
         }
      }
      catch (  InstantiationException
             | IllegalAccessException
             | IllegalArgumentException
             | InvocationTargetException
             | NoSuchMethodException ex)
      {
         Logger.getLogger(SetFactory.class.getName()).log(Level.SEVERE, null, ex);
      }
      return newSet;
   }
}
```

The contrived class whose code was just shown provides opportunities to use additional Hamcrest “core” matchers. As described above, it’s possible to use all of these matchers with the is matcher to improve the fluency of the statement. Two useful “core” matchers are nullValue() and notNullValue(), both of which are demonstrated in the next JUnit-based code listing (and is is used in conjunction in one case).

Using Hamcrest nullValue() and notNullValue()

```java
/**
 * Test of createSet method, of class SetFactory, with null SetType passed.
 */
@Test
public void testCreateSetNullSetType()
{
   final SetFactory factory = SetFactory.newInstance();
   final Set<String> strings = factory.createSet(null, String.class);
   assertThat(strings, nullValue());
}

/**
 * Test of createSet method, of class SetFactory, with null parameterized type
 * passed.
 */
@Test
public void testCreateSetNullParameterizedType()
{
   final SetFactory factory = SetFactory.newInstance();
   final Set<String> strings = factory.createSet(SetType.TREE, null);
   assertThat(strings, is(nullValue()));
}

@Test
public void testCreateTreeSetOfStringsNotNullIfValidParams()
{
   final SetFactory factory = SetFactory.newInstance();
   final Set<String> strings = factory.createSet(SetType.TREE, String.class);
   assertThat(strings, notNullValue());
}
```

The Hamcrest matcher instanceOf is also useful, and is demonstrated in the next code listing (one example using instanceOf by itself and one example using it in conjunction with is).

Using Hamcrest instanceOf()

```java
@Test
public void testCreateTreeSetOfStringsIsTreeSet()
{
   final SetFactory factory = SetFactory.newInstance();
   final Set<String> strings = factory.createSet(SetType.TREE, String.class);
   assertThat(strings, is(instanceOf(TreeSet.class)));
}

@Test
public void testCreateEnumSet()
{
   final SetFactory factory = SetFactory.newInstance();
   final Set<RoundingMode> roundingModes =
      factory.createSet(SetType.ENUM, RoundingMode.class);
   roundingModes.add(RoundingMode.UP);
   assertThat(roundingModes, instanceOf(EnumSet.class));
}
```

Many of the Hamcrest core matchers covered so far increase fluency and readability, but I like the next two for even more reasons. The Hamcrest hasItem() matcher checks for the existence of a prescribed item in a collection, and the even more useful hasItems() matcher checks for the existence of multiple prescribed items in a collection. It is easier to see this in code, and the following code demonstrates these in action.
Using Hamcrest hasItem() and hasItems()

```java
@Test
public void testCreateTreeSetOfStringsHasOneOfAddedStrings()
{
   final SetFactory factory = SetFactory.newInstance();
   final Set<String> strings = factory.createSet(SetType.TREE, String.class);
   strings.add("Tucson");
   strings.add("Arizona");
   assertThat(strings, hasItem("Tucson"));
}

@Test
public void testCreateTreeSetOfStringsHasAllOfAddedStrings()
{
   final SetFactory factory = SetFactory.newInstance();
   final Set<String> strings = factory.createSet(SetType.TREE, String.class);
   strings.add("Tucson");
   strings.add("Arizona");
   assertThat(strings, hasItems("Tucson", "Arizona"));
}
```

It is sometimes desirable to test the result of a certain tested method to ensure that it meets a wide variety of expectations. This is where the Hamcrest allOf matcher comes in handy. This matcher ensures that all conditions (expressed themselves as matchers) are true. This is illustrated in the following code listing, which tests with a single assertion that a generated Set is not null, has two specific Strings in it, and is an instance of TreeSet.

Using Hamcrest allOf()

```java
@Test
public void testCreateSetAllKindsOfGoodness()
{
   final SetFactory factory = SetFactory.newInstance();
   final Set<String> strings = factory.createSet(SetType.TREE, String.class);
   strings.add("Tucson");
   strings.add("Arizona");
   assertThat(
      strings,
      allOf(
         notNullValue(),
         hasItems("Tucson", "Arizona"),
         instanceOf(TreeSet.class)));
}
```

To demonstrate the Hamcrest core anyOf matcher provided out-of-the-box with newer versions of JUnit, I am going to use yet another ridiculously contrived Java class that is in need of a unit test.

Today.java

```java
package dustin.examples;

import java.util.Calendar;
import java.util.Locale;

/**
 * Provide what day of the week today is.
 *
 * @author Dustin
 */
public class Today
{
   /**
    * Provide the day of the week of today's date.
    *
    * @return Integer representing today's day of the week, corresponding to
    *    static fields defined in Calendar class.
    */
   public int getTodayDayOfWeek()
   {
      return Calendar.getInstance(Locale.US).get(Calendar.DAY_OF_WEEK);
   }
}
```

Now I need to test that the sole method in the class above correctly returns a valid integer representing a day of the week. I’d like my test(s) to ensure that a valid integer representing a day Sunday through Saturday is returned, but the method being tested is such that the same day of the week may not be returned on any given test run. The code listing below shows how this can be tested with the JUnit-included Hamcrest anyOf matcher.

Using Hamcrest anyOf()

```java
/**
 * Test of getTodayDayOfWeek method, of class Today.
 */
@Test
public void testGetTodayDayOfWeek()
{
   final Today instance = new Today();
   final int todayDayOfWeek = instance.getTodayDayOfWeek();
   assertThat(todayDayOfWeek, describedAs(
      "Day of week not in range.",
      anyOf(is(Calendar.SUNDAY), is(Calendar.MONDAY), is(Calendar.TUESDAY),
            is(Calendar.WEDNESDAY), is(Calendar.THURSDAY), is(Calendar.FRIDAY),
            is(Calendar.SATURDAY))));
}
```

While Hamcrest’s allOf requires all conditions to match for the assertion to pass, with anyOf the satisfaction of any one condition is sufficient for the assertion to pass.

My favorite way of determining which core Hamcrest matchers are available with JUnit is to use import completion in my Java IDE. When I statically import the org.hamcrest.CoreMatchers.* contents, all of the available matchers are displayed; I can look in the IDE to see what the * represents and thus what matchers are available to me. It is nice to have Hamcrest “core” matchers included with JUnit, and this post has attempted to demonstrate the majority of these. Hamcrest offers many useful matchers outside of the “core” as well. More details on these are available in the Hamcrest Tutorial.

Reference: JUnit’s Built-in Hamcrest Core Matcher Support from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Calling private Java methods publicly?

We Java developers know four access levels in Java: private, protected, public, and package-private (the default). Except for private, members with the last three can be reached from outside of the class, via inheritance, the same package, or an instance. Now, the common question: can a private member be called publicly (from outside the class)? The answer is NO and YES. No when you use the ‘usual’ way to access it, and YES when you ‘hack’ into it using the Reflection API provided by Java itself. Okay, now let’s write the code that we will hack into. I call it “TheVictim“.

```java
package com.namex.hack;

public class TheVictim
{
   private void hackTest()
   {
      System.out.println("hackTest called");
   }

   private static void hackTestStatic()
   {
      System.out.println("hackTestStatic called");
   }
}
```

Now just follow my code and try to run it. I guarantee that if you followed it right, you will get TheVictim to call both hackTest and hackTestStatic, and you can see the output on your screen.

```java
package com.namex.hack;

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class HackTest
{
   public static void main(String[] args) throws IllegalArgumentException,
      IllegalAccessException, InvocationTargetException
   {
      Class c = TheVictim.class;
      Method[] ms = c.getDeclaredMethods();
      for (Method each : ms)
      {
         each.setAccessible(true); // this is the key
         if (Modifier.isPrivate(each.getModifiers()))
         {
            if (Modifier.isStatic(each.getModifiers()))
            {
               // static doesn't require an instance to call it
               each.invoke(TheVictim.class, new Object[] {});
            }
            else
            {
               each.invoke(new TheVictim(), new Object[] {});
            }
         }
      }
   }
}
```

Output example:

hackTestStatic called
hackTest called

Okay, this tutorial has met its purpose. Now you know the Reflection API of Java is a very powerful feature of the language, and it’s all up to you to use it, or even extend this technique, for your own purposes. Have fun with Java. Reference: Calling private methods publicly? from our JCG partner Ronald Djunaedi at the Naming Exception blog.
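The same setAccessible trick works for private fields, not just methods. The following standalone sketch is my own addition (not from the original article); the Holder class and its secret field are made up for illustration:

```java
import java.lang.reflect.Field;

public class FieldHackTest
{
   static class Holder
   {
      private String secret = "original";
   }

   public static void main(String[] args) throws Exception
   {
      Holder holder = new Holder();
      // Obtain the private field and disable Java language access checks.
      Field field = Holder.class.getDeclaredField("secret");
      field.setAccessible(true);
      System.out.println(field.get(holder));   // read the private value
      field.set(holder, "overwritten");        // write a new value
      System.out.println(holder.secret);
   }
}
```

This prints "original" followed by "overwritten". Note that on modern JDKs this works for your own classes; reflective access into JDK internals may be blocked by the module system.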

Using Servlet 3.0 Async Features in Grails 2.0

I was talking to someone last week about the new support for Servlet 3.0 async features in Grails 2 and realized I didn’t know that much about what was available. So I thought I’d try it out and share some examples. The documentation is a little light on the subject, so first some background information.

The primary hook for doing asynchronous work in the 3.0 spec is the new startAsync method in the javax.servlet.ServletRequest class. This returns an instance of the javax.servlet.AsyncContext interface, which has lifecycle methods such as dispatch and complete, gives you a hook back to the request and response, and lets you register a javax.servlet.AsyncListener. You call the start method passing in a Runnable to do the asynchronous work. Using this approach frees up server resources instead of blocking, which increases scalability since you can handle more concurrent requests.

In order to use this, however, the servlet that handles the request must support async, and all applied filters in the filter chain must too. The main Grails servlet (GrailsDispatcherServlet) is registered in the 3.0 version of the web.xml template with the async-supported attribute set to true, and Servlet3AsyncWebXmlProcessor adds <async-supported>true</async-supported> to all filter declarations in web.xml after it’s generated. So that’s covered for you; there is no required web.xml configuration on your part.

You also have to be configured to use servlet API 3.0. This is simple to do; just change the value of grails.servlet.version to "3.0" from the default value of "2.5". Note that there is a legacy setting in application.properties with the name app.servlet.version; you should delete this line from your application.properties file, since its value is ignored and overridden at runtime by the value from BuildConfig.groovy.

You don’t call startAsync on the request from a controller though; call startAsync directly on the controller.
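Under the covers this is the plain Servlet 3.0 pattern: obtain an async context, return the request thread to the container, and complete the response later from a worker thread. Since the servlet API jar is not part of a stock JDK, here is the same hand-off pattern sketched with the JDK's built-in HttpServer. This is my illustration of the concept only, not Grails or servlet API code, and the /stock path and response body are made-up placeholders:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncHandoffSketch
{
   public static void main(String[] args) throws Exception
   {
      ExecutorService worker = Executors.newFixedThreadPool(2);
      HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
      server.createContext("/stock", exchange ->
         // Hand the exchange off to a worker thread; the handler returns
         // immediately, analogous to startAsync() freeing the request thread.
         worker.submit(() -> {
            try
            {
               byte[] body = "GOOG: 123.45".getBytes(StandardCharsets.UTF_8);
               exchange.sendResponseHeaders(200, body.length);
               try (OutputStream out = exchange.getResponseBody())
               {
                  out.write(body);  // completing the response ends the exchange
               }
            }
            catch (Exception e)
            {
               e.printStackTrace();
            }
         }));
      server.start();

      // Exercise it once, then shut down.
      int port = server.getAddress().getPort();
      try (InputStream in =
              new URL("http://localhost:" + port + "/stock").openStream())
      {
         System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
      }
      server.stop(0);
      worker.shutdown();
   }
}
```

The key point, as with AsyncContext, is that the thread that accepted the request is free as soon as the handler returns; the response is finished later by whichever thread completes it.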
This method is added as a controller method (wired in as part of the controllers’ AST transforms from ControllersAsyncApi, by ControllerAsyncTransformer if you’re curious). It’s important to call the controller’s startAsync method because it does all of the standard work but also adds Grails integration. This includes adding the logic to integrate all registered PersistenceContextInterceptor instances, e.g. to bind a Hibernate Session to the thread, flush when finished, etc., and it also integrates with Sitemesh. This is implemented by returning an instance of GrailsAsyncContext, which adds the extra behavior and delegates to the real instance provided by the container (e.g. org.apache.catalina.core.AsyncContextImpl in Tomcat) for the rest. There are a few other new async-related methods available in the request; they include boolean isAsyncStarted() and AsyncContext getAsyncContext().

I’ve attached a sample application (see below for the link) to demonstrate these features. There are two parts; a simple controller that looks up stock prices asynchronously, and a chat application. StockController is very simple. It just has a single action and suspends to look up the current stock price for the requested stock ticker. It does this asynchronously but it’s typically very fast, so you probably won’t see a real difference from the serial approach. But this pattern can be generalized to doing more time-consuming tasks. Call http://localhost:8080/asynctest/stock/GOOG, http://localhost:8080/asynctest/stock/AAPL, http://localhost:8080/asynctest/stock/VMW, etc. to test it.

The second example is more involved and is based on the “async-request-war” example from the Java EE 6 SDK. This implements a chat application (it was previously implemented with Comet).
The SDK example is one large servlet; I split it up into a controller to do the standard request work and the ChatManager class (registered as a Spring bean in resources.groovy) to handle client registration, message queueing and dispatching, and associated error handling. The implementation uses a hidden iframe which initiates a long-running request. This never completes and is used to send messages back to each registered client. When you “login” or send a message, the controller handles the request and queues a response message. ChatManager then cycles through each registered AsyncContext and sends JSONP to the iframe, which updates a text area in the main page with incoming messages.

One thing that hung me up for quite a while was that things worked fine with the SDK example but not mine. Everything looked good but messages weren’t being received by the iframe. It turns out this is due to the optimizations that are in place to make response rendering as fast as possible. Unfortunately this resulted in flush() calls on the response writer being ignored. Since we need responsive updates and aren’t rendering a large page of HTML, I added code to find the real response that’s wrapped by the Grails code and send directly to that.

Try it out by opening http://localhost:8080/asynctest/ in two browsers. Once you’re “logged in” to both, messages sent will be displayed in both browsers.

Some notes about the test application:

- All of the client logic is in web-app/js/chat.js
- grails-app/views/chat/index.gsp is the main page; it creates the text area to display messages and the hidden iframe to stay connected and listen for messages
- This requires a servlet container that implements the 3.0 spec. The version of Tomcat provided by the tomcat plugin and used by run-app does, and all 7.x versions of Tomcat do.
- I ran install-templates and edited web.xml to add metadata-complete="true" to keep Tomcat from scanning all jar files for annotated classes; this can cause an OOME due to a bug that’s fixed in version 7.0.26 (currently unreleased)
- Since the chat part is based on older code it uses Prototype, but it could easily use jQuery

You can download the sample application code here. Reference: Using Servlet 3.0 Async Features in Grails 2.0 from our JCG partner Burt Beckwith at the An Army of Solipsists blog.

The Twitter API Management Model

The objective of this blog post is to explore in detail the patterns and practices Twitter has used in its API management. Twitter comes with a comprehensive set of REST APIs to let client apps talk to Twitter. Let’s take a few examples.

If you issue the following with curl, it returns the 20 most recent statuses, including retweets if they exist, from non-protected users. The public timeline is cached for 60 seconds. Requesting more frequently than that will not return any more data, and will count against your rate limit usage.

curl https://api.twitter.com/1/statuses/public_timeline.json

The example above is an open API, which requires no authentication from the client who accesses it. But keep in mind that it still has a throttling policy associated with it: the rate limit. For example, the throttling policy associated with the statuses/public_timeline.json API could say: only allow a maximum of 20 API calls from the same IP address. This is a global policy for the API.

1. Twitter has open APIs that anonymous users can access.
2. Twitter has globally defined policies per API.

Let’s take another sample API: statuses/retweeted_by_user returns the 20 most recent retweets posted by the specified user, given that the user’s timeline is not protected. This is also an open API. But what if I want to post to my own Twitter account? I could use the API statuses/update. This updates the authenticating user’s status, and it is not an open API; only authenticated users can access it.

How do we authenticate ourselves to access the Twitter API? Twitter supported two methods. One way was to use BasicAuth over HTTPS, and the other was OAuth 1.0a. BasicAuth support was removed recently, and now the only remaining way is OAuth 1.0a. As of this writing Twitter doesn’t support OAuth 2.0.
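A global per-IP rate limit like the one described above can be sketched as a simple fixed-window counter. This is my own illustration of the idea; Twitter's actual throttling implementation is not public, and the limit, window, and IP address below are made-up values:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Fixed-window rate limiter: at most `limit` calls per client key per window.
public class RateLimiter
{
   private final int limit;
   private final long windowMillis;
   // per-client state: [0] = window start time, [1] = calls in this window
   private final Map<String, long[]> counters = new ConcurrentHashMap<>();

   public RateLimiter(int limit, long windowMillis)
   {
      this.limit = limit;
      this.windowMillis = windowMillis;
   }

   public synchronized boolean allow(String clientKey, long nowMillis)
   {
      long[] state = counters.computeIfAbsent(clientKey, k -> new long[] {nowMillis, 0});
      if (nowMillis - state[0] >= windowMillis)
      {
         state[0] = nowMillis;  // start a new window
         state[1] = 0;
      }
      return ++state[1] <= limit;
   }

   public static void main(String[] args)
   {
      // e.g. at most 20 API calls per minute from the same IP address
      RateLimiter limiter = new RateLimiter(20, 60_000L);
      int allowed = 0;
      for (int i = 0; i < 25; i++)
      {
         if (limiter.allow("203.0.113.7", 0L))
         {
            allowed++;
         }
      }
      System.out.println("allowed: " + allowed);  // first 20 pass, last 5 are throttled
   }
}
```

A production gateway would more likely use a sliding window or token bucket to avoid bursts at window boundaries, but the enforcement point is the same: the policy is keyed on something global (here the caller's IP), not on any authenticated identity.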
The reason I need the APIs exposed by Twitter is that I have external applications that want to talk to Twitter, and these applications use the Twitter APIs for communication. If I am the application developer – the following are the steps I need to follow to build my application to access protected APIs from Twitter. First the application developer needs to log in to Twitter and create an Application. Here, the Application is an abstraction for a set of protected APIs Twitter exposes outside. Each Application you create needs to define the level of access it needs to those underlying APIs. There are three values to pick from. – Read only – Read and Write – Read, Write and Access direct messages Let’s see what these values mean… If you pick ‘Read only’ – that means a user who is going to use your Application needs to give it the permission to read. In other words – the user will be giving it access to invoke the APIs defined here which start with GET, against his Twitter account. The only exception is the Direct Messages APIs – with Read only your Application won’t have access to a given user’s Direct Messages – even GETs. The same applies to you as the application developer: if you want to give your application access to your own Twitter account, you too must grant it the required rights. If you pick Read and Write – that means a user who is going to use your application needs to give it the permission to read and write. In other words – the user will be giving it access to invoke the APIs defined here which start with GET or POST, against his Twitter account. The only exception is the Direct Messages APIs – with Read and Write, your application won’t have access to a given user’s Direct Messages – even GETs or POSTs. 3. Twitter has an Application concept that groups APIs together. 4. Each API declares the actions it supports: GET or POST. 5. 
Each Application has a required access level for it to function [Read only, Read and Write, Read Write and Direct Messages]. Now let’s dig into the run-time aspects of this. I am going to skip OAuth-related details here on purpose, for clarity. For our Application to access the Twitter APIs – it needs a key. Let’s name it API_KEY [if you know OAuth – this is equivalent to the access_token]. Say I want to use this Application. First I need to go to Twitter and generate an API_KEY to access this Application. Although there are multiple APIs wrapped in the Application – I only need a single API_KEY. 6. The API_KEY is per user, per Application [a collection of APIs]. When I generate the API_KEY – I can specify what level of access I am going to give to that API_KEY – Read Only, Read & Write or Read, Write & Direct Message. Based on the access level I can generate my API_KEY. 7. The API_KEY carries permissions to the underlying APIs. Now I give my API_KEY to the Application. Say it tries to POST to my Twitter time-line. That request should also include the API_KEY. Once Twitter gets the request – looking at the API_KEY it will identify that the Application is trying to POST to ‘my’ time-line. It will also check whether the API_KEY has Read & Write permissions – if so, it will let the Application post to my Twitter time-line. If the Application tries to read my Direct Messages using the same API_KEY I gave it – then Twitter will detect that the API_KEY doesn’t have the Read, Write & Direct Message permission and the request will fail. Even in the above case, if the Application tries to post to the Application Developer’s Twitter account – there too it needs an API_KEY – from the Application Developer, which he can get from Twitter. Once a user grants access to an Application via its API_KEY – during the entire lifetime of the key, the application can access his account. But, if the user wants to revoke the key, Twitter provides a way to do that as well. 
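The access-level comparison in that flow can be modeled as a simple ordering of the three levels. Again, this is just an illustration of the rule with hypothetical names – Twitter's internals are not public:

```java
// Illustration of the rule described above: an API_KEY's granted access
// level must be at least the level an operation requires.
// Class and enum names are hypothetical.
public class AccessCheck {

    // Ordered from least to most privileged.
    public enum Access { READ, READ_WRITE, READ_WRITE_DM }

    // A key granted 'granted' may perform an operation requiring 'required'
    // when its level is equal or higher in the ordering above.
    public static boolean permits(Access granted, Access required) {
        return granted.ordinal() >= required.ordinal();
    }
}
```

This also captures the Direct Messages exception: a Direct Messages call (even a GET) would require READ_WRITE_DM, which neither READ nor READ_WRITE satisfies.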
Basically when you go here, it displays all the Applications you have given permission to access your Twitter account – if you want, you can revoke access from there. 8. Twitter lets users revoke API_KEYs. Another interesting thing is how Twitter does API versioning. If you look carefully at the URLs, you will notice that the version number is included in the URL itself – https://api.twitter.com/1/statuses/public_timeline.json. But it does not let Application developers pick which versions of the APIs they want to use. 9. Twitter tracks API versions at runtime. 10. Twitter does not let Application developers pick API versions at design time. Twitter also has a way of monitoring the status of the API. The following shows a screenshot of it. 11. Twitter does API monitoring at runtime. Reference: The Twitter API Management Model from our JCG partner Prabath Siriwardena at the Facile Login blog. ...
spring-interview-questions-answers

JAXB Custom Binding – Java.util.Date / Spring 3 Serialization

JAXB can handle java.util.Date serialization, but it expects the following format: “yyyy-MM-ddTHH:mm:ss“. What if you need to format the date object in another format? I had the same issue when I was working with Spring MVC 3 and the Jackson JSON Processor, and recently, I faced the same issue working with Spring MVC 3 and JAXB for XML serialization. Let’s dig into the issue: Problem: I have the following Java bean which I want to serialize to XML using Spring MVC 3:

package com.loiane.model;

import java.util.Date;

public class Company {

    private int id;
    private String company;
    private double price;
    private double change;
    private double pctChange;
    private Date lastChange;

    //getters and setters
}

And I have another object which is going to wrap the POJO above:

package com.loiane.model;

import java.util.List;

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name="companies")
public class Companies {

    @XmlElement(required = true)
    private List<Company> list;

    public void setList(List<Company> list) {
        this.list = list;
    }
}

In my Spring controller, I’m going to return a list of Company through the @ResponseBody annotation – which is going to serialize the object automatically with JAXB:

@RequestMapping(value="/company/view.action")
public @ResponseBody Companies view() throws Exception {}

When I call the controller method, this is what it returns to the view:

<companies>
  <list>
    <change>0.02</change>
    <company>3m Co</company>
    <id>1</id>
    <lastChange>2011-09-01T00:00:00-03:00</lastChange>
    <pctChange>0.03</pctChange>
    <price>71.72</price>
  </list>
  <list>
    <change>0.42</change>
    <company>Alcoa Inc</company>
    <id>2</id>
    <lastChange>2011-09-01T00:00:00-03:00</lastChange>
    <pctChange>1.47</pctChange>
    <price>29.01</price>
  </list>
</companies>

Note the date format. It is not the format I expect it to return. 
I need to serialize the date in the following format: “MM-dd-yyyy“. Solution: I need to create a class extending XmlAdapter and override the marshal and unmarshal methods, and in these methods I am going to format the date as I need to:

package com.loiane.util;

import java.text.SimpleDateFormat;
import java.util.Date;

import javax.xml.bind.annotation.adapters.XmlAdapter;

public class JaxbDateSerializer extends XmlAdapter<String, Date> {

    private SimpleDateFormat dateFormat = new SimpleDateFormat("MM-dd-yyyy");

    @Override
    public String marshal(Date date) throws Exception {
        return dateFormat.format(date);
    }

    @Override
    public Date unmarshal(String date) throws Exception {
        return dateFormat.parse(date);
    }
}

And in my Java bean class, I simply need to add the @XmlJavaTypeAdapter annotation to the get method of the date property.

package com.loiane.model;

import java.util.Date;

import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;

import com.loiane.util.JaxbDateSerializer;

public class Company {

    private int id;
    private String company;
    private double price;
    private double change;
    private double pctChange;
    private Date lastChange;

    @XmlJavaTypeAdapter(JaxbDateSerializer.class)
    public Date getLastChange() {
        return lastChange;
    }

    //getters and setters
}

If we try to call the controller method again, it is going to return the following XML:

<companies>
  <list>
    <change>0.02</change>
    <company>3m Co</company>
    <id>1</id>
    <lastChange>09-01-2011</lastChange>
    <pctChange>0.03</pctChange>
    <price>71.72</price>
  </list>
  <list>
    <change>0.42</change>
    <company>Alcoa Inc</company>
    <id>2</id>
    <lastChange>09-01-2011</lastChange>
    <pctChange>1.47</pctChange>
    <price>29.01</price>
  </list>
</companies>

Problem solved! Happy Coding! Reference: JAXB Custom Binding – Java.util.Date / Spring 3 Serialization from our JCG partner Loiane Groner at the Loiane Groner blog....
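As a footnote to the solution above, the adapter's formatting logic can be sanity-checked in isolation with plain JDK classes. The class below is a standalone sketch of the same MM-dd-yyyy round trip, independent of any JAXB runtime; note it creates a SimpleDateFormat per call because SimpleDateFormat is not thread-safe:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// Standalone sketch of the adapter's marshal/unmarshal logic using only the
// JDK, so the format can be verified without JAXB.
public class DateFormatSketch {

    private static final String PATTERN = "MM-dd-yyyy";

    // Equivalent of the adapter's marshal(Date)
    public static String marshal(Date date) {
        return new SimpleDateFormat(PATTERN).format(date);
    }

    // Equivalent of the adapter's unmarshal(String)
    public static Date unmarshal(String date) {
        try {
            return new SimpleDateFormat(PATTERN).parse(date);
        } catch (ParseException e) {
            throw new IllegalArgumentException(date, e);
        }
    }
}
```

In the real adapter the shared dateFormat field is only safe as long as JAXB doesn't invoke one adapter instance from multiple threads concurrently; a per-call instance (as above) or a ThreadLocal sidesteps that.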
apache-solr-logo

Solr: Creating a spellchecker

In a previous post I talked about how the Solr spellchecker works and then I showed you some test results of its performance. Now we are going to see another approach to spellchecking. This method, like many others, uses a two-step procedure: a rather fast “candidate word” selection, and then a scoring of those words. We are going to select different methods from the ones that Solr uses and test their performance. Our main objective will be effectiveness of the correction and, secondarily, speed of the results. We can tolerate slightly slower performance considering that we are gaining in correctness of the results. Our strategy will be to use a special Lucene index, and query it using fuzzy queries to get a candidate list. Then we are going to rank the candidates with a Python script (which can easily be transformed into a Solr spellchecker subclass if we get better results). Candidate selection Fuzzy queries have historically been considered slow compared with other query types but, as they were optimized in version 1.4, they are a good choice for the first part of our algorithm. So, the idea is very simple: we are going to construct a Lucene index where every document is a dictionary word. When we have to correct a misspelled word we do a simple fuzzy query for that word and get a list of results. The results will be words similar to the one we provided (i.e. with a small edit distance). I found that with approximately 70 candidates we can get excellent results. With fuzzy queries we are covering all the typos because, as I said in the previous post, most typos are within edit distance 1 of the correct word. 
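Edit distance here is the number of single-character inserts, deletes, substitutions and adjacent transpositions needed to turn one word into the other. A minimal Damerau-Levenshtein (optimal string alignment) sketch, in Java for illustration:

```java
// Optimal string alignment variant of Damerau-Levenshtein distance:
// allowed edits are insert, delete, substitute, and transposition of
// two adjacent characters.
public class DamerauLevenshtein {

    public static int distance(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i; // delete everything
        for (int j = 0; j <= b.length(); j++) d[0][j] = j; // insert everything

        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(
                    Math.min(d[i - 1][j] + 1,      // deletion
                             d[i][j - 1] + 1),     // insertion
                    d[i - 1][j - 1] + cost);       // substitution / match
                // transposition of adjacent characters
                if (i > 1 && j > 1
                        && a.charAt(i - 1) == b.charAt(j - 2)
                        && a.charAt(i - 2) == b.charAt(j - 1)) {
                    d[i][j] = Math.min(d[i][j], d[i - 2][j - 2] + 1);
                }
            }
        }
        return d[a.length()][b.length()];
    }
}
```

So ‘houze’ is distance 1 from ‘house’ (one substitution), and a transposed pair like ‘hosue’ is also distance 1, which is exactly why the fuzzy query covers most typos.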
But although this is the most common error people make while typing, there are other kinds of errors. We can find three types of misspellings [Kukich]: Typographic errors, Cognitive errors, Phonetic errors. Typographic errors are the typos, where people know the correct spelling but make a motor coordination slip when typing. Cognitive errors are those caused by a lack of knowledge on the part of the person. Finally, phonetic errors are a special case of cognitive errors: words that sound correct but are orthographically incorrect. We already covered typographic errors with the fuzzy query, but we can also do something for the phonetic errors. Solr has a Phonetic Filter in its analysis package that, among others, implements the double metaphone algorithm. In the same way we perform a fuzzy query to find similar words, we can index the metaphone equivalent of the word and perform a fuzzy query on it. We must manually obtain the metaphone equivalent of the word (because the Lucene query parser doesn’t analyze fuzzy queries) and construct a fuzzy query with that word. In a few words, for the candidate selection we construct an index with the following Solr schema:

<fieldType name="spellcheck_text" class="solr.TextField" positionIncrementGap="100" autoGeneratePhraseQueries="true">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PhoneticFilterFactory" encoder="DoubleMetaphone" maxCodeLength="20" inject="false"/>
  </analyzer>
</fieldType>

<field name="original_word" type="string" indexed="true" stored="true" multiValued="false"/>
<field name="analyzed_word" type="spellcheck_text" indexed="true" stored="true" multiValued="false"/>
<field name="freq" type="tfloat" stored="true" multiValued="false"/>

As you can see the analyzed_word field contains the “sounds like” of the word. The freq field will be used in the next phase of the algorithm. It is simply the frequency of the term in the language. 
How can we estimate the frequency of a word in a language? By counting the frequency of the word in a big text corpus. In this case the source of the terms is Wikipedia, and we are using the TermsComponent of Solr to count how many times each term appears in it. But Wikipedia is written by ordinary people who make errors! How can we trust it as a “correct dictionary”? We make use of the “collective knowledge” of the people who write Wikipedia. This dictionary of terms extracted from Wikipedia has a lot of terms! Over 1.800.00, and most of them aren’t even words. It is likely that words with a high frequency are correctly spelled in Wikipedia. This approach of building a dictionary from a big corpus of words and considering the most frequent ones correct isn’t new. In [Cucerzan] they use the same concept but with query logs to build the dictionary. It appears that Google’s “Did you mean” uses a similar concept. We can add little optimizations here. I have found that we can remove some words and still get good results. For example, I removed words with frequency 1, and words that begin with numbers. We could continue removing words based on other criteria, but we’ll leave it at that. So the procedure for building the index is simple: we extract all the terms from the Wikipedia index via the TermsComponent of Solr along with their frequencies, and then create an index in Solr using SolrJ. Candidate ranking Now the ranking of the candidates. For the second phase of the algorithm we are going to make use of information theory, in particular the noisy channel model. The noisy channel model applied to this case assumes that the human knows the correct spelling of a word but some noise in the channel introduces an error, and as the result we get another, misspelled, word. We intuitively know that it is very unlikely that we get ‘sarasa’ when trying to type ‘house’, so the noisy channel model introduces some formality for finding how probable an error was. 
For example, say we have misspelled ‘houze’ and we want to know which is the most likely word that we wanted to type. To accomplish that we have a big dictionary of possible words, but not all of them are equally probable. We want to obtain the word with the highest probability of having been the one we intended to type. In mathematics that is called conditional probability: given that we typed ‘houze’, how high is the probability of each of the correct words being the word that we intended? The notation of conditional probability is P(‘house’|’houze’), which stands for the probability of ‘house’ given ‘houze’. This problem can be seen from two perspectives: we may think that the most common words are more probable, for example ‘house’ is more probable than ‘hose’ because the former is a more common word. On the other hand, we also intuitively think that ‘house’ is more probable than ‘photosynthesis’ because of the big difference between the two words. Both of these aspects are formally deduced by Bayes’ theorem:

P(‘house’|’houze’) = P(‘houze’|’house’) P(‘house’) / P(‘houze’)

We have to maximize this probability, and to do that we only have one parameter: the correct candidate word (‘house’ in the case shown). For that reason the probability of the misspelled word, P(‘houze’), is constant and we are not interested in it. The formula reduces to maximizing

P(‘houze’|’house’) P(‘house’)

And to add more structure to this, scientists have given names to these two factors. The P(‘houze’|’house’) factor is the error model (or channel model) and relates to how probable it is that the channel introduces this particular misspelling when trying to write the second word. The second term, P(‘house’), is called the language model and gives us an idea of how common a word is in the language. Up to this point, I have only introduced the mathematical aspects of the model. Now we have to come up with a concrete model of these two probabilities. For the language model we can use the frequency of the term in the text corpus. 
I have found empirically that it works much better to use the logarithm of the frequency rather than the frequency alone. Maybe this is because we want to reduce the weight of the very frequent terms more than the less frequent ones, and the logarithm does just that. There is not only one way to construct a channel model; many different ideas have been proposed. We are going to use a simple one based on the Damerau-Levenshtein distance. But I also found that the fuzzy query of the first phase does a good job of finding the candidates; it gives the correct word in first place in more than half of the test cases with some datasets. So the channel model will be a combination of the Damerau-Levenshtein distance and the score that Lucene computed for the terms of the fuzzy query. The ranking formula will be:

score = distance / (fuzzy score × log(frequency))

where a lower score is better. I programmed a small Python script that does all that was previously said:

from urllib import urlopen
import doubleMethaphone
import levenshtain
import json

server = "http://benchmarks:8983/solr/testSpellMeta/"

def spellWord(word, candidateNum = 70):
    #fuzzy + soundslike
    metaphone = doubleMethaphone.dm(word)
    query = "original_word:%s~ OR analyzed_word:%s~" % (word, metaphone[0])

    if metaphone[1] != None:
        query = query + " OR analyzed_word:%s~" % metaphone[1]

    doc = urlopen(server + "select?rows=%d&wt=json&fl=*,score&omitHeader=true&q=%s" % (candidateNum, query)).read()
    response = json.loads(doc)
    suggestions = response['response']['docs']

    if len(suggestions) > 0:
        #score
        scores = [(sug['original_word'], scoreWord(sug, word)) for sug in suggestions]
        scores.sort(key=lambda candidate: candidate[1])
        return scores
    else:
        return []

def scoreWord(suggestion, misspelled):
    distance = float(levenshtain.dameraulevenshtein(suggestion['original_word'], misspelled))
    if distance == 0:
        distance = 1000
    fuzzy = suggestion['score']
    logFreq = suggestion['freq']

    return distance/(fuzzy*logFreq)

From the previous listing I have to make some remarks. 
In lines 2 and 3 we use third-party libraries for the Levenshtein distance and metaphone algorithms. In line 8 we are collecting a list of 70 candidates. This particular number was found empirically: with more candidates the algorithm is slower, and with fewer it is less effective. We are also excluding the misspelled word itself from the candidates list in line 30. As we used Wikipedia as our source it is common for the misspelled word to be found in the dictionary, so if the Levenshtein distance is 0 (same word) we set its distance to 1000. Tests I ran some tests with this algorithm. The first one uses the dataset that Peter Norvig used in his article. I found the correct suggestion of the word in the first position approximately 80% of the time!!! That is a really good result. Norvig, with the same dataset (but a different algorithm and training set), got 67%. Now let’s repeat some of the tests of the previous post to see the improvement. In the following table I show you the results.

Test set         | % Solr | % new  | Solr time [seconds] | New time [seconds] | Improvement | Time loss
FAWTHROP1DAT.643 | 45,61% | 81,91% | 31,50               | 74,19              | 79,58%      | 135,55%
batch0.tab       | 28,70% | 56,34% | 21,95               | 47,05              | 96,30%      | 114,34%
SHEFFIELDDAT.643 | 60,42% | 86,24% | 19,29               | 35,12              | 42,75%      | 82,06%

We can see that we get very good improvements in the effectiveness of the correction, but it takes about twice the time. Future work How can we improve this spellchecker? Well, studying the candidates list it can be seen that the correct word is generally (95% of the time) contained in it. So all our efforts should be aimed at improving the scoring algorithm. We have many ways of improving the channel model; several papers show that calculating more sophisticated distances, weighting the different letter transformations according to language statistics, can give us a better measure. 
For example, we know that writing ‘houpe’ is less probable than writing ‘houze’. For the language model, great improvements can be obtained by adding more context to the word. For example, if we misspelled ‘nouse’ it is very difficult to tell whether the correct word is ‘house’ or ‘mouse’. But if we add more words – “paint my nouse” – it is evident that the word we were looking for was ‘house’ (unless you have strange habits involving rodents). These are also called ngrams (of words in this case, instead of letters). Google has offered a big collection of ngrams for download, with their frequencies. Last but not least, the performance can be improved by programming the script in Java; part of the algorithm was in Python. Bye! As an update for all of you interested, Robert Muir told me on the Solr User list that there is a new spellchecker, DirectSpellChecker, that was in the trunk then and should now be part of Solr 3.1. It uses a similar technique to the one I presented in this entry without the performance losses. References [Kukich] Karen Kukich – Techniques for automatically correcting words in text – ACM Computing Surveys – Volume 24, Issue 4, Dec. 1992 [Cucerzan] S. Cucerzan and E. Brill – Spelling correction as an iterative process that exploits the collective knowledge of web users. July 2004 Peter Norvig – How to Write a Spelling Corrector Reference: Creating a spellchecker with Solr from our JCG partner Emmanuel Espina at the emmaespina blog....
apache-openjpa-logo

Registering entity types with OpenJPA programmatically

I’ve just started work on an OpenJPA object store for Isis. In the normal scheme of things, one would register the entity types within the persistence.xml file. However, Isis is a framework that builds its own metamodel, and can figure out for itself which classes constitute entities. I therefore didn’t want to force the developer to repeat themselves, so the puzzle became how to register the entity types programmatically within the Isis code. It turns out to be pretty simple, if a little ugly. OpenJPA allows implementations of certain key components to be defined programmatically; these are specified in a properties map that is then passed through to javax.persistence.Persistence.createEntityManagerFactory(null, props). But it also supports a syntax that can be used to initialize those components through setter injection. In my case the component of interest is the openjpa.MetaDataFactory. At one point I thought I’d be writing my own implementation; but it turns out that the standard implementation does what I need, because it allows the types to be injected through its setTypes(List<String>) mutator. The list of strings is passed into that property as a ;-delimited list. So, here’s what I’ve ended up with:

final Map<String, String> props = Maps.newHashMap();

final String typeList = entityTypeList();
props.put("openjpa.MetaDataFactory",
    "org.apache.openjpa.persistence.jdbc.PersistenceMappingFactory(types=" + typeList + ")");

// ... then add in regular properties such as
// openjpa.ConnectionURL, openjpa.ConnectionDriverName etc... 
entityManagerFactory = Persistence.createEntityManagerFactory(null, props);

where entityTypeList() in my case looks something like:

private String entityTypeList() {
    final StringBuilder buf = new StringBuilder();
    // loop thru Isis' metamodel looking for types that have been annotated using @Entity
    final Collection<ObjectSpecification> allSpecifications =
        getSpecificationLoader().allSpecifications();
    for (ObjectSpecification objSpec : allSpecifications) {
        if (objSpec.containsFacet(JpaEntityFacet.class)) {
            final String fqcn = objSpec.getFullIdentifier();
            buf.append(fqcn).append(";");
        }
    }
    final String typeList = buf.toString();
    return typeList;
}

Comments welcome, as ever. Reference: Registering entity types with OpenJPA programmatically from our JCG partner Dan Haywood at the Dan Haywood blog....
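The "plugin string" syntax being relied on here – an implementation class name followed by parenthesised setter values – can be assembled like this. A standalone sketch (the helper class name is hypothetical; the factory class name and the ;-delimited types list are as described above):

```java
import java.util.Arrays;
import java.util.List;

// Sketch: building OpenJPA's plugin-string property value, where a property
// holds an implementation class name plus setter values in parentheses.
// The 'types' setter takes a ;-delimited list of entity class names.
public class PluginStringBuilder {

    public static String metaDataFactory(List<String> entityClassNames) {
        return "org.apache.openjpa.persistence.jdbc.PersistenceMappingFactory(types="
            + String.join(";", entityClassNames) + ")";
    }
}
```

The snippet in the post builds the same string by concatenation (with a trailing semicolon, which OpenJPA tolerates); the point is only that the whole configuration stays in an ordinary String-to-String map.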
netbeans-logo

NetBeans 7.2 Introduces TestNG

One of the advantages of code generation is the ability to see how a specific language feature or framework is used. As I discussed in the post NetBeans 7.2 beta: Faster and More Helpful, NetBeans 7.2 beta provides TestNG integration. I did not elaborate further in that post, other than a single reference to the feature, because I wanted to devote this post to the subject. I use this post to demonstrate how NetBeans 7.2 can help a developer new to TestNG start using this alternative (to JUnit) test framework. NetBeans 7.2’s New File wizard makes it easy to create an empty TestNG test case. This is demonstrated in the following screen snapshots, kicked off by using New File | Unit Tests (note that “New File” is available under the “File” drop-down menu or by right-clicking in the Projects window). Running the TestNG test case creation as shown above leads to the following generated test code. TestNGDemo.java (Generated by NetBeans 7.2)

package dustin.examples;

import org.testng.annotations.AfterMethod;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import org.testng.Assert;

/**
 *
 * @author Dustin
 */
public class TestNGDemo {

    public TestNGDemo() {
    }

    @BeforeClass
    public void setUpClass() {
    }

    @AfterClass
    public void tearDownClass() {
    }

    @BeforeMethod
    public void setUp() {
    }

    @AfterMethod
    public void tearDown() {
    }

    // TODO add test methods here.
    // The methods must be annotated with annotation @Test. For example:
    //
    // @Test
    // public void hello() {}
}

The test generated by NetBeans 7.2 includes comments indicating how test methods are added and annotated (similar to modern versions of JUnit). The generated code also shows some annotations for overall test case set up and tear down and for per-test set up and tear down (the annotations are similar to JUnit’s). 
NetBeans identifies import statements that are not yet used at this point (import org.testng.annotations.Test; and import org.testng.Assert;), but are likely to be used and so have been included in the generated code. I can add a test method easily to this generated test case. The following code snippet is a test method using TestNG. testIntegerArithmeticMultiplyIntegers()

@Test
public void testIntegerArithmeticMultiplyIntegers() {
    final IntegerArithmetic instance = new IntegerArithmetic();
    final int[] integers = {4, 5, 6};
    final int expectedProduct = 2 * 3 * 4 * 5 * 6;
    final int product = instance.multiplyIntegers(2, 3, integers);
    assertEquals(product, expectedProduct);
}

This, of course, looks very similar to the JUnit equivalent I used against the same IntegerArithmetic class in the posts Improving On assertEquals with JUnit and Hamcrest and JUnit’s Built-in Hamcrest Core Matcher Support. The following screen snapshot shows the output in NetBeans 7.2 beta from right-clicking on the test case class and selecting “Run File” (Shift+F6). The text output of the TestNG run provided in the NetBeans 7.2 beta is reproduced next. 
[TestNG] Running:
  Command line suite

[VerboseTestNG] RUNNING: Suite: "Command line test" containing "1" Tests (config: null)
[VerboseTestNG] INVOKING CONFIGURATION: "Command line test" - @BeforeClass dustin.examples.TestNGDemo.setUpClass()
[VerboseTestNG] PASSED CONFIGURATION: "Command line test" - @BeforeClass dustin.examples.TestNGDemo.setUpClass() finished in 33 ms
[VerboseTestNG] INVOKING CONFIGURATION: "Command line test" - @BeforeMethod dustin.examples.TestNGDemo.setUp()
[VerboseTestNG] PASSED CONFIGURATION: "Command line test" - @BeforeMethod dustin.examples.TestNGDemo.setUp() finished in 2 ms
[VerboseTestNG] INVOKING: "Command line test" - dustin.examples.TestNGDemo.testIntegerArithmeticMultiplyIntegers()
[VerboseTestNG] PASSED: "Command line test" - dustin.examples.TestNGDemo.testIntegerArithmeticMultiplyIntegers() finished in 12 ms
[VerboseTestNG] INVOKING CONFIGURATION: "Command line test" - @AfterMethod dustin.examples.TestNGDemo.tearDown()
[VerboseTestNG] PASSED CONFIGURATION: "Command line test" - @AfterMethod dustin.examples.TestNGDemo.tearDown() finished in 1 ms
[VerboseTestNG] INVOKING CONFIGURATION: "Command line test" - @AfterClass dustin.examples.TestNGDemo.tearDownClass()
[VerboseTestNG] PASSED CONFIGURATION: "Command line test" - @AfterClass dustin.examples.TestNGDemo.tearDownClass() finished in 1 ms
[VerboseTestNG]
[VerboseTestNG] ===============================================
[VerboseTestNG] Command line test
[VerboseTestNG] Tests run: 1, Failures: 0, Skips: 0
[VerboseTestNG] ===============================================

===============================================
Command line suite
Total tests run: 1, Failures: 0, Skips: 0
===============================================

Deleting directory C:\Users\Dustin\AppData\Local\Temp\dustin.examples.TestNGDemo
test:
BUILD SUCCESSFUL (total time: 2 seconds)

The above example shows how easy it is to start using TestNG, especially if one is moving to TestNG from JUnit and is using NetBeans 7.2 beta. 
Of course, there is much more to TestNG than this, but learning a new framework is typically most difficult at the very beginning, and NetBeans 7.2 gets one off to a fast start. Reference: NetBeans 7.2 Introduces TestNG from our JCG partner Dustin Marx at the Inspired by Actual Events blog....
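For completeness: the post never shows IntegerArithmetic itself. A plausible implementation consistent with the test method above (an assumption on my part, not Dustin's original class) would be:

```java
// Hypothetical reconstruction of the class under test: multiplies two ints
// plus any number of additional ints, matching the call
// instance.multiplyIntegers(2, 3, integers) in the test method above.
public class IntegerArithmetic {

    public int multiplyIntegers(int first, int second, int... remaining) {
        int product = first * second;
        for (int factor : remaining) {
            product *= factor;
        }
        return product;
    }
}
```

With this implementation, multiplyIntegers(2, 3, {4, 5, 6}) yields 2 * 3 * 4 * 5 * 6 = 720, which is exactly the expectedProduct the TestNG test asserts.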
spring-interview-questions-answers

Make your Spring Security @Secured annotations more DRY

Recently a user on the Grails User mailing list wanted to know how to reduce repetition when defining @Secured annotations. The rules for specifying attributes in Java annotations are pretty restrictive, so I couldn’t see a direct way to do what he was asking. Using Groovy doesn’t really help here since, for the most part, annotations in a Groovy class are pretty much the same as in Java (except for the syntax for array values). Of course Groovy now supports closures in annotations, but this would require a code change in the plugin. But then I thought about some work Jeff Brown did recently in the cache plugin. Spring’s cache abstraction API includes three annotations: @Cacheable, @CacheEvict, and @CachePut. We were thinking ahead about supporting more configuration options than these annotations allow, but since you can’t subclass annotations we decided to use an AST transformation to find our versions of these annotations (currently with the same attributes as the Spring annotations) and convert them to valid Spring annotations. So I looked at Jeff’s code and it ended up being the basis for a fix for this problem. It’s not possible to use code to externalize the authority lists because you can’t control the compilation order, so I ended up with a solution that isn’t perfect but works – I look for a properties file in the project root (roles.properties). The format is simple – the keys are names for each authority list and the values are the lists of authority names, comma-delimited. 
Here’s an example:

admins=ROLE_ADMIN, ROLE_SUPERADMIN
switchUser=ROLE_SWITCH_USER
editors=ROLE_EDITOR, ROLE_ADMIN

These keys are the values you use for the new @Authorities annotation:

package grails.plugins.springsecurity.annotation;

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import org.codehaus.groovy.transform.GroovyASTTransformationClass;

/**
 * @author Burt Beckwith
 */
@Target({ElementType.FIELD, ElementType.METHOD, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Inherited
@Documented
@GroovyASTTransformationClass(
    "grails.plugins.springsecurity.annotation.AuthoritiesTransformation")
public @interface Authorities {
    /**
     * The property file key; the property value will be a
     * comma-delimited list of role names.
     * @return the key
     */
    String value();
}

For example, here’s a controller using the new annotation:

@Authorities('admins')
class SecureController {

    @Authorities('editors')
    def someAction() {
        ...
    }
}

This is the equivalent of this controller (and if you decompile the one with @Authorities you’ll see both annotations):

@Secured(['ROLE_ADMIN', 'ROLE_SUPERADMIN'])
class SecureController {

    @Secured(['ROLE_EDITOR', 'ROLE_ADMIN'])
    def someAction() {
        ... 
} } The AST transformation class looks for @Authorities annotations, loads the properties file, and adds a new @Secured annotation (the @Authorities annotation isn’t removed) using the role names specified in the properties file: package grails.plugins.springsecurity.annotation;import grails.plugins.springsecurity.Secured;import java.io.File; import java.io.FileReader; import java.io.IOException; import java.util.ArrayList; import java.util.List; import java.util.Properties;import org.codehaus.groovy.ast.ASTNode; import org.codehaus.groovy.ast.AnnotatedNode; import org.codehaus.groovy.ast.AnnotationNode; import org.codehaus.groovy.ast.ClassNode; import org.codehaus.groovy.ast.expr.ConstantExpression; import org.codehaus.groovy.ast.expr.Expression; import org.codehaus.groovy.ast.expr.ListExpression; import org.codehaus.groovy.control.CompilePhase; import org.codehaus.groovy.control.SourceUnit; import org.codehaus.groovy.transform.ASTTransformation; import org.codehaus.groovy.transform.GroovyASTTransformation; import org.springframework.util.StringUtils;/** * @author Burt Beckwith */ @GroovyASTTransformation(phase=CompilePhase.CANONICALIZATION) public class AuthoritiesTransformation implements ASTTransformation {protected static final ClassNode SECURED = new ClassNode(Secured.class);public void visit(ASTNode[] astNodes, SourceUnit sourceUnit) { try { ASTNode firstNode = astNodes[0]; ASTNode secondNode = astNodes[1]; if (!(firstNode instanceof AnnotationNode) || !(secondNode instanceof AnnotatedNode)) { throw new RuntimeException("Internal error: wrong types: " + firstNode.getClass().getName() + " / " + secondNode.getClass().getName()); }AnnotationNode rolesAnnotationNode = (AnnotationNode) firstNode; AnnotatedNode annotatedNode = (AnnotatedNode) secondNode;AnnotationNode secured = createAnnotation(rolesAnnotationNode); if (secured != null) { annotatedNode.addAnnotation(secured); } } catch (Exception e) { // TODO e.printStackTrace(); } }protected AnnotationNode 
createAnnotation(AnnotationNode rolesNode) throws IOException { Expression value = rolesNode.getMembers().get("value"); if (!(value instanceof ConstantExpression)) { // TODO System.out.println( "annotation @Authorities value isn't a ConstantExpression: " + value); return null; }String fieldName = value.getText(); String[] authorityNames = getAuthorityNames(fieldName); if (authorityNames == null) { return null; }return buildAnnotationNode(authorityNames); }protected AnnotationNode buildAnnotationNode(String[] names) { AnnotationNode securedAnnotationNode = new AnnotationNode(SECURED); List<Expression> nameExpressions = new ArrayList<Expression>(); for (String authorityName : names) { nameExpressions.add(new ConstantExpression(authorityName)); } securedAnnotationNode.addMember("value", new ListExpression(nameExpressions)); return securedAnnotationNode; }protected String[] getAuthorityNames(String fieldName) throws IOException {Properties properties = new Properties(); File propertyFile = new File("roles.properties"); if (!propertyFile.exists()) { // TODO System.out.println("Property file roles.properties not found"); return null; }properties.load(new FileReader(propertyFile));Object value = properties.getProperty(fieldName); if (value == null) { // TODO System.out.println("No value for property '" + fieldName + "'"); return null; }List<String> names = new ArrayList<String>(); String[] nameArray = StringUtils.commaDelimitedListToStringArray( value.toString()) for (String auth : nameArray) { auth = auth.trim(); if (auth.length() > 0) { names.add(auth); } }return names.toArray(new String[names.size()]); } } I’ll probably include this in the plugin at some point – I created a JIRA issue as a reminder – but for now you can just copy these two classes into your application’s src/java folder and create a roles.properties file in the project root. 
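The lookup in getAuthorityNames boils down to a Properties load plus comma splitting with trimming. Here’s a minimal standalone sketch of just that parsing step, using plain String.split instead of Spring’s StringUtils so it has no dependencies (the class and method names here are mine, not the plugin’s):

```java
import java.util.ArrayList;
import java.util.List;

public class AuthorityParser {

    // Splits a comma-delimited role list, trimming whitespace and dropping
    // empty entries, mirroring what getAuthorityNames does via StringUtils.
    static String[] parseAuthorities(String value) {
        List<String> names = new ArrayList<String>();
        for (String auth : value.split(",")) {
            auth = auth.trim();
            if (auth.length() > 0) {
                names.add(auth);
            }
        }
        return names.toArray(new String[names.size()]);
    }

    public static void main(String[] args) {
        for (String role : parseAuthorities("ROLE_ADMIN, ROLE_SUPERADMIN")) {
            System.out.println(role);
        }
    }
}
```

Note that stray whitespace around the commas in roles.properties (as in the `admins` example above) is harmless, since each entry is trimmed before it becomes a role name in the generated @Secured annotation.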
Any time you want to add or remove an entry, or add or remove a role name from an entry, update the properties file and run grails clean and grails compile to be sure the latest values are used.

Reference: Make your Spring Security @Secured annotations more DRY from our JCG partner Burt Beckwith at the An Army of Solipsists blog.
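Because the transform adds @Secured alongside the original @Authorities rather than replacing it, both annotations are retained on the compiled class, which is why decompiling shows the pair. A self-contained sketch of checking that with reflection – the two annotation types here are simplified local stand-ins for illustration, not the real Grails/Spring ones:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class BothAnnotationsDemo {

    // Simplified stand-ins for the real annotations, for illustration only.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Authorities { String value(); }

    @Retention(RetentionPolicy.RUNTIME)
    @interface Secured { String[] value(); }

    // Roughly what the compiled class looks like after the transform runs:
    // the original annotation plus the generated @Secured.
    @Authorities("admins")
    @Secured({"ROLE_ADMIN", "ROLE_SUPERADMIN"})
    static class SecureController { }

    public static void main(String[] args) {
        // Both annotations have runtime retention, so both are visible.
        System.out.println(
            SecureController.class.isAnnotationPresent(Authorities.class)); // true
        System.out.println(
            SecureController.class.isAnnotationPresent(Secured.class));     // true
    }
}
```

This also means Spring Security’s annotation scanning finds the generated @Secured values exactly as if they had been written by hand.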
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact