What's New Here?


Canary Tests

Canary Tests are minimal tests that quickly and automatically verify that everything you depend on is ready. You run canary tests before other, more time-consuming tests, and before wasting time investigating your code when the other tests are red. If a canary test fails, you know you have to fix something in the environment first. This idea of a canary test is different from a Canary Deployment, where you deploy to a small fraction of your users to check everything's fine before rolling out to more users.

Save time by checking what should always be OK

Canary tests check for the obvious and frequent sources of issues, such as:

- network connectivity: firewall rules ok, ports open, proxy working fine, NAT, ping below a good threshold
- databases and middleware are up
- disk quota for logs is not almost full
- every needed login and password is valid
- installed software is available in the right version: dlls installed, registry set up, environment variables set, user directories all exist, the framework and OS versions are fit, timezone and locale are as expected
- reference data integrity and consistency (dates, valuations…) are ok
- database schema and the audit of applied scripts are as expected
- licences are not expired (there is usually a way to check that automatically)

Canary tests should run regularly, ideally before any expensive tests like end-to-end tests. Of course you want to run them whenever there is trouble somewhere, before wasting time on manual investigation of your code when the expected environment is not fully available. Even at the code level, a canary test is just a trivial test to verify that the testing framework works correctly, as mentioned by Marcus on his blog: assertTrue(true). Don't forget to verify that your tests can fail too!

Simple and low-maintenance

The canary test tools should not assume much about the application. They must be independent from new developments to be as stable as possible.
They should require little to no maintenance at all. One way to do that in practice is to simply scan configuration files for every URL and password, and ping them one by one against a predefined time threshold. Any log path mentioned in the configuration files can be scanned and checked for the required write permissions and available disk space. Any login and password can be checked, even though this may be more complicated.

Canary tests are documentation too

Doing canary tests may require explicit declarations of expectations, e.g. an annotation AssumedPermission('777') to declare the permissions required on the files referenced in the configuration files. Alternatively you may rely on a Convention over Configuration principle. For example, every log.*.path variable is assumed to be a log path to check against some predefined expectations, like being writable and within disk quota. When you add canary tests, this automation itself is a form of documentation that makes assumptions more explicit. You could export a report of every canary test that has been run into a readable form that can become part of your Living Documentation.

Reference: Canary Tests from our JCG partner Cyrille Martraire at the Cyrille Martraire's blog. ...
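As a footnote to this entry: a couple of the environment checks listed above can be sketched in plain Java with no test framework at all. The hostname, port and thresholds below are hypothetical examples, not from the original post:

```java
import java.io.File;
import java.net.InetSocketAddress;
import java.net.Socket;

/** Minimal canary checks -- hostnames, ports and thresholds are examples only. */
public class CanaryChecks {

    /** Returns true if a TCP connection to host:port succeeds within timeoutMs. */
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    /** Returns true if the directory has at least minFreeBytes of usable space. */
    static boolean hasDiskSpace(String path, long minFreeBytes) {
        return new File(path).getUsableSpace() >= minFreeBytes;
    }

    public static void main(String[] args) {
        // Fail fast: run cheap environment checks before any expensive test suite.
        System.out.println("db reachable: " + reachable("db.example.com", 5432, 500));
        System.out.println("log disk ok:  "
                + hasDiskSpace(System.getProperty("java.io.tmpdir"), 1024 * 1024));
    }
}
```

If either line reports false, fix the environment first instead of reading red end-to-end tests as code failures.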

Redis sort with Jedis

In this post we will talk about the Redis SORT command. Redis provides the SORT command to retrieve or store sorted values from a LIST, SET or ZSET. In its simplest form, we can run the command against a KEY, like the example below:

SORT numbers_list

This will sort the values contained in the key and return them. The command sorts the values as numbers. So, let's say we have a list with the following values:

1, 110, 5

The command above will return:

1 5 110

We can tell Redis to sort the values alphabetically using the ALPHA modifier. There are a number of modifiers; we will take a look at some of them in the examples below. The examples will use the Jedis API. For our examples, let's consider that we have an idea management system. We have a list containing all the usernames in the system:

all:users [junior, francisco, ribeiro, user4]

And for every username there will be a hash containing the user's information:

user:junior - name: "Junior User" - num_ideas: "5" - email: "fjunior@email.com"
user:francisco - name: "Francisco User" - num_ideas: "4" - email: "francisco@email.com"
...
Here is a class that will populate Redis for our example:

package br.com.xicojunior.redistest;

import java.util.HashMap;
import java.util.Map;

import redis.clients.jedis.Jedis;

public class App {

    public static Jedis jedis = new Jedis("localhost");

    public static void main(String[] args) {
        String names[] = new String[]{"junior", "francisco", "ribeiro", "user4"};
        for (String name : names) {
            jedis.lpush("all:users", name);
        }
        addUserHash(names[0], "Junior User", "junior@junior.com", "5");
        addUserHash(names[1], "Francisco User", "francisco@francisco.com", "4");
        addUserHash(names[2], "Ribeiro User", "ribeiro@ribeiro.com", "3");
        addUserHash(names[3], "User 4", "user@user.com", "2");

        for (String name : names) {
            System.out.println(jedis.hgetAll("user:".concat(name)));
        }

        System.out.println(jedis.lrange("all:users", 0, -1));
    }

    public static void addUserHash(String username, String name, String email, String numberOfIdeas) {
        Map<String, String> userProp = new HashMap<String, String>();
        userProp.put("name", name);
        userProp.put("email", email);
        userProp.put("num_ideas", String.valueOf(numberOfIdeas));

        jedis.hmset("user:".concat(username), userProp);
    }
}

Let's take a look at the code example below:

package br.com.xicojunior.redistest;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.SortingParams;

public class SortTest {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost");

        // [1] sorting the usernames
        System.out.println(jedis.sort("all:users"));
        // [ribeiro, francisco, junior, user4]

        // [2] sorting the usernames alphabetically
        // jedis' sort method receives a SortingParams instance for modifiers
        System.out.println(jedis.sort("all:users", new SortingParams().alpha()));
        // [francisco, junior, ribeiro, user4]
    }
}

In the example above we sort the key "all:users". The first call doesn't seem to sort correctly, because the default sorting treats the values as numbers. In the second call, we use the ALPHA modifier via the overloaded version of the sort method.
It receives an instance of the SortingParams class, and this time we see the usernames sorted correctly. One nice feature of the SORT command is that we can sort the list using external values, i.e. values stored in other keys. In the example below we will sort the all:users key by the number of ideas each user gave. This is done with the BY modifier, which receives a pattern for the keys to be used. Let's see the example:

package br.com.xicojunior.redistest;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.SortingParams;

public class SortTest {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost");

        // [1] sorting the usernames by the number of ideas
        System.out.println(jedis.sort("all:users", new SortingParams().by("user:*->num_ideas")));
        // [user4, ribeiro, francisco, junior]

        // [2] sorting the usernames by the number of ideas DESC
        System.out.println(jedis.sort("all:users", new SortingParams().by("user:*->num_ideas").desc()));
    }
}

In this second example, we sort the usernames by an external value, in our case by the hash field "num_ideas". Since we are sorting by a hash field, we use the pattern "user:*->num_ideas". This pattern tells Redis to look for the key "user:*", where "*" is replaced by each value from the list. As it is a hash, we need to name the field, which we do with the "->fieldname" suffix. If we were sorting by plain string keys we could use a pattern like "num_ideas_*", assuming there was one key storing the number of ideas for each user. The first call retrieves the values sorted ASC; we can also tell Redis to sort DESC using the DESC modifier. With Jedis, BY and DESC are methods of SortingParams. Since each method returns the instance, we can chain the calls, which makes the code easier to read. With the SORT command we can also retrieve values from an external key, or a field from an external hash.
We can do this using the GET modifier, which can be used multiple times. Let's see some examples:

package br.com.xicojunior.redistest;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.SortingParams;

public class SortTest {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost");

        // [1] sorting the usernames by the number of ideas and retrieving the user name
        System.out.println(jedis.sort("all:users",
                new SortingParams().by("user:*->num_ideas").get("user:*->name")));
        // [User 4, Ribeiro User, Francisco User, Junior User]

        // [2] retrieving the name and email
        System.out.println(jedis.sort("all:users",
                new SortingParams().by("user:*->num_ideas").get("user:*->name", "user:*->email")));
        // [User 4, user@user.com, Ribeiro User, ribeiro@ribeiro.com, Francisco User,
        //  francisco@francisco.com, Junior User, junior@junior.com]

        // [3] retrieving the value of the key being sorted - special pattern #
        System.out.println(jedis.sort("all:users",
                new SortingParams().by("user:*->num_ideas").get("user:*->name", "user:*->email", "#")));
        // [User 4, user@user.com, user4, Ribeiro User, ribeiro@ribeiro.com, ribeiro, Francisco User,
        //  francisco@francisco.com, francisco, Junior User, junior@junior.com, junior]
    }
}

In the code above we can see the use of the GET modifier. To return a hash field, we use a pattern similar to the one used with BY. In the first call we simply return the name; since GET can be repeated, in the second call we retrieve both the name and the email of each user. We can also retrieve the value of the key being sorted itself, using the special pattern "#". The get method takes varargs, so we can pass all the external keys we want values from. Another thing we can do is store the result of the sort in a key. This is useful when we want to cache the sort result: we can specify a destination key for the SORT command, and the result will be stored as a LIST.
package br.com.xicojunior.redistest;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.SortingParams;

public class SortTest {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost");

        jedis.sort("all:users", "dest_key1");
        System.out.println(jedis.lrange("dest_key1", 0, -1));
        // [ribeiro, francisco, junior, user4]

        jedis.sort("all:users", new SortingParams().alpha().desc(), "dest_key2");
        System.out.println(jedis.lrange("dest_key2", 0, -1));
        // [user4, ribeiro, junior, francisco]
    }
}

One very useful feature of the SORT command is that we can use it purely to fetch values from related keys, with no sorting at all, via the NOSORT modifier:

package br.com.xicojunior.redistest;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.SortingParams;

public class SortTest {

    public static void main(String[] args) {
        Jedis jedis = new Jedis("localhost");

        System.out.println(jedis.sort("all:users",
                new SortingParams().get("user:*->name", "user:*->email").nosort()));
        // [User 4, user@user.com, Ribeiro User, ribeiro@ribeiro.com, Francisco User,
        //  francisco@francisco.com, Junior User, junior@junior.com]
    }
}

This piece of code retrieves the name and email of all users. Without the SORT command, we would need at least two commands to do the same:

LRANGE all:users 0 -1

to get all the usernames, and then, for each username, an HMGET like:

HMGET user:junior name email

to get that user's name and email. The full command documentation can be found on the Redis site.

Reference: Redis sort with Jedis from our JCG partner Francisco Ribeiro Junior at the XICO JUNIOR'S WEBLOG blog. ...

Java 8 Friday Goodies: Lambdas and Sorting

At Data Geekery, we love Java. And as we're really into jOOQ's fluent API and query DSL, we're absolutely thrilled about what Java 8 will bring to our ecosystem. We have blogged a couple of times about some nice Java 8 goodies, and now we feel it's time to start a new blog series, the…

Java 8 Friday

Every Friday, we're showing you a couple of nice new tutorial-style Java 8 features, which take advantage of lambda expressions, extension methods, and other great stuff. You'll find the source code on GitHub.

Java 8 Goodie: Lambdas and Sorting

Sorting arrays and collections is an awesome use case for Java 8's lambda expressions, for the simple reason that Comparator has effectively been a @FunctionalInterface ever since its introduction in JDK 1.2. We can now supply Comparators in the form of a lambda expression to the various sort() methods. For the following examples, we're going to use this simple Person class:

static class Person {
    final String firstName;
    final String lastName;

    Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    @Override
    public String toString() {
        return "Person{" +
               "firstName='" + firstName + '\'' +
               ", lastName='" + lastName + '\'' +
               '}';
    }
}

Obviously, we could also add natural ordering to Person by letting it implement Comparable, but let's focus on external Comparators. Consider the following list of Person, whose names were generated with some online random name generator:

List<Person> people =
    Arrays.asList(
        new Person("Jane", "Henderson"),
        new Person("Michael", "White"),
        new Person("Henry", "Brighton"),
        new Person("Hannah", "Plowman"),
        new Person("William", "Henderson")
    );

We probably want to sort them by last name and then by first name.
Sorting with Java 7

A "classic" Java 7 example of such a Comparator is this:

people.sort(new Comparator<Person>() {
    @Override
    public int compare(Person o1, Person o2) {
        int result = o1.lastName.compareTo(o2.lastName);

        if (result == 0)
            result = o1.firstName.compareTo(o2.firstName);

        return result;
    }
});
people.forEach(System.out::println);

And the above would yield:

Person{firstName='Henry', lastName='Brighton'}
Person{firstName='Jane', lastName='Henderson'}
Person{firstName='William', lastName='Henderson'}
Person{firstName='Hannah', lastName='Plowman'}
Person{firstName='Michael', lastName='White'}

Sorting with Java 8

Now, let's translate the above to equivalent Java 8 code:

Comparator<Person> c = (p, o) -> p.lastName.compareTo(o.lastName);

c = c.thenComparing((p, o) -> p.firstName.compareTo(o.firstName));

people.sort(c);
people.forEach(System.out::println);

The result is obviously the same. How to read the above? First, we assign a lambda expression to a local Person Comparator variable:

Comparator<Person> c = (p, o) -> p.lastName.compareTo(o.lastName);

Unlike Scala, C#, or Ceylon, which know type inference from an expression towards a local variable declaration through a val keyword (or similar), Java performs type inference from a variable (or parameter, member) declaration towards the expression that is being assigned. In other, more informal words, type inference is performed from "left to right", not from "right to left". This makes chaining Comparators a bit cumbersome, as the Java compiler cannot delay type inference for lambda expressions until you pass the comparator to the sort() method. Once we have assigned a Comparator to a variable, however, we can fluently chain other comparators through thenComparing():

c = c.thenComparing((p, o) -> p.firstName.compareTo(o.firstName));

And finally, we pass it to the List's new sort() method, which is a default method implemented directly on the List interface:

default void sort(Comparator<? super E> c) {
    Collections.sort(this, c);
}

Workaround for the above limitation

While Java's type inference "limitations" can turn out to be a bit frustrating, we can work around them by creating a generic identity Comparator:

class Utils {
    static <E> Comparator<E> compare() {
        return (e1, e2) -> 0;
    }
}

With the above compare() method, we can write the following fluent comparator chain:

people.sort(
    Utils.<Person>compare()
         .thenComparing((p, o) -> p.lastName.compareTo(o.lastName))
         .thenComparing((p, o) -> p.firstName.compareTo(o.firstName))
);

people.forEach(System.out::println);

Extracting keys

This can get even better. Since we're usually comparing the same POJO / DTO value from both Comparator arguments, we can provide it to the new APIs through a "key extractor" function. This is how it works:

people.sort(
    Utils.<Person>compare()
         .thenComparing(p -> p.lastName)
         .thenComparing(p -> p.firstName));
people.forEach(System.out::println);

So, given a Person p, we provide the API with a function extracting, for instance, p.lastName. And in fact, once we use key extractors, we can omit our own utility method, as the libraries also have a comparing() method to initiate the whole chain:

people.sort(
    Comparator.comparing((Person p) -> p.lastName)
              .thenComparing(p -> p.firstName));
people.forEach(System.out::println);

Again, we need to help the compiler, as it cannot infer all the types, even if in principle the sort() method would provide enough information in this case. To learn more about Java 8's generalized type inference, see our previous blog post.

Conclusion

As with Java 5, the biggest improvements of the upgrade can be seen in the JDK libraries. Where Java 5 brought type safety to Comparators, Java 8 makes them easy to read and write (give or take the odd type inference quirk). Java 8 is going to revolutionise the way we program, and next week, we will see how Java 8 impacts the way we interact with SQL.
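For convenience, the snippets above combine into this self-contained, runnable version of the final key-extractor example:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SortPeople {

    static class Person {
        final String firstName;
        final String lastName;

        Person(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }

        @Override
        public String toString() {
            return firstName + " " + lastName;
        }
    }

    public static void main(String[] args) {
        List<Person> people = new ArrayList<>(Arrays.asList(
            new Person("Jane", "Henderson"),
            new Person("Michael", "White"),
            new Person("Henry", "Brighton"),
            new Person("Hannah", "Plowman"),
            new Person("William", "Henderson")));

        // Key extractors keep the comparator chain short and readable.
        people.sort(Comparator.comparing((Person p) -> p.lastName)
                              .thenComparing(p -> p.firstName));

        // Prints: Henry Brighton, Jane Henderson, William Henderson,
        //         Hannah Plowman, Michael White (one per line)
        people.forEach(System.out::println);
    }
}
```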
Reference: Java 8 Friday Goodies: Lambdas and Sorting from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog. ...

Optimising Your ApplicationContext

There's a problem with Spring; it's been there for some time and I've come across it in a number of projects. It's nothing to do with Spring, or the guys at Spring, it's down to Spring's users like you and me. Let me explain… In the old days of Spring 2 you had to configure your application context by hand, manually creating an XML configuration file that contained all your bean definitions. The downside of this technique was that it was time-consuming to create these XML files, and then you had the headache of maintaining an increasingly complex file. I seem to remember that at the time it was known as "Spring Config Hell". On the upside, at least you had a central record of everything that was loaded into the context. Bowing to demand and the popular notion that annotations were the way to go, Spring 3 introduced a whole raft of stereotype annotations such as @Service, @Component, @Controller and @Repository, together with an addition to the XML configuration file: the <context:component-scan/> element. This made things a lot simpler from a programming point of view and is a hugely popular way of constructing Spring contexts. There is, however, a downside to using Spring annotations with wild abandon and peppering everything with @Service, @Component, @Controller or @Repository, one that becomes especially troublesome in large codebases. The problem is that your context becomes polluted with stuff that just doesn't need to be there, and that's a problem because:

- You unnecessarily use up your perm gen space, leading to the risk of more "out of perm gen space" errors.
- You unnecessarily use up your heap space.
- Your application can take a lot longer to load.
- Unwanted objects can "just do stuff", especially if they're multithreaded, have a start() method or implement InitializingBean.
- Unwanted objects can simply stop your application from working.

In small applications I guess it doesn't really matter if you have a couple of extra objects in your Spring context but, as I said above, this can be particularly troublesome if your application is large, processor intensive or memory hungry. At that point it's worth sorting the situation out, and to do that you first have to figure out exactly what you're loading into your Spring context. One way of doing this is to enable debug logging on the com.springsource package by adding something like the following to your log4j properties:

log4j.logger.com.springsource=DEBUG

In adding the above to your log4j properties (log4j 1.x in this case) you'll get lots of information on your Spring context – and I mean lots. This is really only something you'd need to do if you're one of the guys at Spring working on the Spring source code. Another, more succinct approach is to add a class to your application that'll report exactly what's being loaded into your Spring context. You can then examine the report and make any appropriate changes. This blog's sample code consists of one class, and it's something that I've written two or three times before, working on different projects for different companies. It relies on a couple of Spring features: namely, that Spring can call a method in your class after the context has loaded, and that Spring's ApplicationContext interface contains a few methods that'll tell you all about its internals.
@Service
public class ApplicationContextReport implements ApplicationContextAware, InitializingBean {

    private static final String LINE = "====================================================================================================\n";

    private static final Logger logger = LoggerFactory.getLogger("ContextReport");

    private ApplicationContext applicationContext;

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        this.applicationContext = applicationContext;
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        report();
    }

    public void report() {
        StringBuilder sb = new StringBuilder("\n" + LINE);
        sb.append("Application Context Report\n");
        sb.append(LINE);

        createHeader(sb);
        createBody(sb);
        sb.append(LINE);

        logger.info(sb.toString());
    }

    private void createHeader(StringBuilder sb) {
        addField(sb, "Application Name: ", applicationContext.getApplicationName());
        addField(sb, "Display Name: ", applicationContext.getDisplayName());

        String startupDate = getStartupDate(applicationContext.getStartupDate());
        addField(sb, "Start Date: ", startupDate);

        Environment env = applicationContext.getEnvironment();
        String[] activeProfiles = env.getActiveProfiles();
        if (activeProfiles.length > 0) {
            addField(sb, "Active Profiles: ", activeProfiles);
        }
    }

    private void addField(StringBuilder sb, String name, String... values) {
        sb.append(name);
        for (String val : values) {
            sb.append(val);
            sb.append(", ");
        }
        sb.setLength(sb.length() - 2);
        sb.append("\n");
    }

    private String getStartupDate(long startupDate) {
        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
        return df.format(new Date(startupDate));
    }

    private void createBody(StringBuilder sb) {
        addColumnHeaders(sb);
        addColumnValues(sb);
    }

    private void addColumnHeaders(StringBuilder sb) {
        sb.append("\nBean Name\tSimple Name\tSingleton\tFull Class Name\n");
        sb.append(LINE);
    }

    private void addColumnValues(StringBuilder sb) {
        String[] beanNames = applicationContext.getBeanDefinitionNames();

        for (String name : beanNames) {
            addRow(name, sb);
        }
    }

    private void addRow(String name, StringBuilder sb) {
        Object obj = applicationContext.getBean(name);

        String fullClassName = obj.getClass().getName();
        if (!fullClassName.contains("org.springframework")) {
            sb.append(name);
            sb.append("\t");
            String simpleName = obj.getClass().getSimpleName();
            sb.append(simpleName);
            sb.append("\t");
            boolean singleton = applicationContext.isSingleton(name);
            sb.append(singleton ? "YES" : "NO");
            sb.append("\t");
            sb.append(fullClassName);
            sb.append("\n");
        }
    }
}

The first thing to note is that this version of the code implements Spring's InitializingBean interface. Spring checks for this interface when it loads a class into the context; if it finds it, it'll call the afterPropertiesSet() method. This is not the only way of getting Spring to call your class on start-up; see: Three Spring Bean Lifecycle Techniques and Using JSR-250's @PostConstruct Annotation to Replace Spring's InitializingBean. The next thing to note is that this report class implements Spring's ApplicationContextAware interface.
This is another useful Spring workhorse interface that you'll rarely need on a daily basis. Its raison d'être is to give your class access to your application's ApplicationContext. It contains a single method, setApplicationContext(...), which is called by Spring to inject the ApplicationContext into your class. In this case, I'm simply saving the ApplicationContext argument as an instance variable. The main report generation is done by the report() method (called by afterPropertiesSet()). All the report() method does is create a StringBuilder and then append lots of information to it. I won't go through each line individually, as this kind of code is rather linear and really boring. The highlight is the addColumnValues(...) method, called by createBody(...):

private void addColumnValues(StringBuilder sb) {
    String[] beanNames = applicationContext.getBeanDefinitionNames();

    for (String name : beanNames) {
        addRow(name, sb);
    }
}

This method calls applicationContext.getBeanDefinitionNames() to get hold of an array containing the names of all the beans loaded by this context. Once I have this information I loop through the array, calling applicationContext.getBean(...) on each bean name. Once you have the bean itself, you can add its class details to the StringBuilder as a row in the report. Having created the report, there's not much point in writing your own file-handling code to save the contents of the StringBuilder to disk. That sort of code has been written many times before.
In this case I've chosen to leverage Log4j (via slf4j) by adding this logger line to the Java code above:

private static final Logger logger = LoggerFactory.getLogger("ContextReport");

…and by adding the following to my log4j XML config file:

<appender name="fileAppender" class="org.apache.log4j.RollingFileAppender">
    <param name="Threshold" value="INFO" />
    <param name="File" value="/tmp/report.log"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d %-5p [%c{1}] %m %n" />
    </layout>
</appender>

<logger name="ContextReport" additivity="false">
    <level value="info"/>
    <appender-ref ref="fileAppender"/>
</logger>

Note that if you're using log4j 2.x the XML would be different, but that's beyond the scope of this blog. The thing to note here is that I use a RollingFileAppender, which writes a file called report.log to /tmp, though this file could obviously be located anywhere. The other config point to notice is the ContextReport logger. This directs all its log output to the fileAppender and, because of the additivity="false" attribute, to the fileAppender only and nowhere else. The only other chunk of config to remember is to add the report package to Spring's component-scan element so that Spring will detect the @Service annotation and load the class:

<context:component-scan base-package="com.captaindebug.report" />

To prove that it works, I've also created a JUnit test case, as shown below:

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration({ "file:src/main/webapp/WEB-INF/spring/appServlet/servlet-context.xml" })
public class ApplicationContextReportTest {

    @Autowired
    private ApplicationContextReport instance;

    @Test
    public void testReport() {
        System.out.println("The report should now be in /tmp");
    }
}

This uses the SpringJUnit4ClassRunner and the @ContextConfiguration annotation to load the application's live servlet-context.xml file.
I've also included the @WebAppConfiguration annotation to tell Spring that this is a web app. If you run the JUnit test you'll get a report.log that contains something like this:

2014-01-26 18:30:25,920 INFO [ContextReport]
====================================================================================================
Application Context Report
====================================================================================================
Application Name:
Display Name: org.springframework.web.context.support.GenericWebApplicationContext@74607cd0
Start Date: 2014-01-26T18:30:23.552+0000

Bean Name	Simple Name	Singleton	Full Class Name
====================================================================================================
deferredMatchUpdateController	DeferredMatchUpdateController	YES	com.captaindebug.longpoll.DeferredMatchUpdateController
homeController	HomeController	YES	com.captaindebug.longpoll.HomeController
DeferredService	DeferredResultService	YES	com.captaindebug.longpoll.service.DeferredResultService
SimpleService	SimpleMatchUpdateService	YES	com.captaindebug.longpoll.service.SimpleMatchUpdateService
shutdownService	ShutdownService	YES	com.captaindebug.longpoll.shutdown.ShutdownService
simpleMatchUpdateController	SimpleMatchUpdateController	YES	com.captaindebug.longpoll.SimpleMatchUpdateController
applicationContextReport	ApplicationContextReport	YES	com.captaindebug.report.ApplicationContextReport
the-match	Match	YES	com.captaindebug.longpoll.source.Match
theQueue	LinkedBlockingQueue	YES	java.util.concurrent.LinkedBlockingQueue
BillSykes	MatchReporter	YES	com.captaindebug.longpoll.source.MatchReporter
====================================================================================================

The report contains a header with info such as the Display Name and Start Date, followed by the main body.
The body is a tab-separated table containing the following columns: the bean name, the simple class name, whether the bean is a singleton or a prototype, and the full class name. You can now use this report to spot classes that you don't want loaded into your Spring context. For example, if you decided that you didn't want to load the BillSykes instance of com.captaindebug.longpoll.source.MatchReporter, then you have the following options. Firstly, it's probably the case that the BillSykes bean has been loaded because it's in the wrong package. This usually happens when you organise project structures along class-type lines, for example putting all services together in a service package and all controllers together in a controller package; including the service module in your application then loads ALL the service classes, even the ones you don't need, and that can cause you problems. It's usually better to organise along functional lines, as described in How Do You Organise Maven Sub-Modules?. Unfortunately, reorganising your entire project is particularly costly and will not yield much revenue. The other, cheaper way of solving the problem is to adjust the Spring context:component-scan element and exclude the classes that are causing problems.
<context:component-scan base-package="com.captaindebug.longpoll">
    <context:exclude-filter type="regex" expression="com\.captaindebug\.longpoll\.source\.MatchReporter"/>
</context:component-scan>

…or all classes from a given package:

<context:component-scan base-package="com.captaindebug.longpoll">
    <context:exclude-filter type="regex" expression="com\.captaindebug\.longpoll\.source\..*"/>
</context:component-scan>

Using exclude-filter is a useful technique, and a lot has been written about it together with its counterpart, include-filter, so a full explanation of this XML config is beyond the scope of this blog; maybe I'll cover it at a later date. The code for this blog is available on GitHub as part of the long poll project at: https://github.com/roghughe/captaindebug/tree/master/long-poll

Reference: Optimising Your ApplicationContext from our JCG partner Roger Hughes at the Captain Debug's Blog. ...
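As a footnote to this entry: for Java-configuration projects, the same package exclusion could be sketched with Spring's @ComponentScan annotation. This variant is my addition, not from the original post; the class name AppConfig is hypothetical, the package names are the ones used in the article:

```java
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.ComponentScan.Filter;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.FilterType;

// Scans the longpoll package but keeps the source sub-package out of the context,
// mirroring the XML exclude-filter shown above.
@Configuration
@ComponentScan(
    basePackages = "com.captaindebug.longpoll",
    excludeFilters = @Filter(
        type = FilterType.REGEX,
        pattern = "com\\.captaindebug\\.longpoll\\.source\\..*"))
public class AppConfig {
}
```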

Fail SAFe

Last week I went to a presentation about the Scaled Agile Framework – SAFe. I’d read a bit about it before, but this was a broader introduction to the topic. It’s going to be a success. When I talk about why Scrum succeeded in crossing the chasm from the developer world to the business world, the main reason I see is that it dropped the developer jargon and talked in business language. SAFe goes the extra mile and talks entirely in business language. But it does something Scrum never did: it offers ALL the answers. SAFe is detailed. Very detailed. It has details on all the needed roles and all the processes, and on how to roll it out, all specified and quantified. It has all the information up front, before you even ask. You can get all the information on the site. Answers are good; it seems everyone’s looking for them these days. And SAFe not only has them, it makes sense too. All the pieces fit together; it’s a combination of tried-and-true rules and processes. You know why? Because we live in a complex world, where we don’t know all the answers. And we’re ready to pay anyone who can help us. SAFe is the first methodological framework to tie all team, project and program information into a whole-organization solution. And being first is probably why it will stick around for the next few years. There’s just one thing. Snake Oil Alert The presentation was in the context of the Lean-Kanban group, so you’d expect the audience to be a pretty agile group. Most of the questions were “How do we do X in SAFe?”. While these were honest questions, I couldn’t help noticing this weird thing: the agile crowd focused on “processes and tools”. Isn’t the whole point of agile dealing with an ever-changing reality, where prescribed recipes don’t work? Thirteen years after the agile manifesto, and twenty years of practice, you’d guess people would actually get the picture, at least this bunch. And nobody wonders how, in this very short time of actual agile practice, we now have all the answers. 
And that the complexity of life can now be reduced to a couple of ceremonies. Reality is winning. We’re confounded and continue to look for answers. The more desperate we are for answers, the more gladly we’ll believe the truth is out there. And that a few consultants have that knowledge and are happy to download it to us, and give us certificates that prove that we now know. Most people still believe in the silver bullet. It’s a safe bet (pun intended) that SAFe looks the part. In a few years, in the post-SAFe era, when people start questioning the holes left in the process, there’ll be another solution. Which we’ll gladly buy into. Agile is not declining. We’re simply riding a sine wave.   Reference: Fail SAFe from our JCG partner Gil Zilberfeld at the Geek Out of Water blog. ...

ObjectStreamClass: Peeking at a Java Object’s Serialization

ObjectStreamClass can be a useful class for analyzing the serialization characteristics of a serializable class loaded in the JVM. This post looks at some of the information this class provides about a loaded serializable class. ObjectStreamClass provides two static methods for looking up a class: lookup(Class) and lookupAny(Class). The first, lookup(Class), will only return an instance of ObjectStreamClass when the provided class is serializable, and returns null if the provided class is not serializable. The second, lookupAny(Class), returns an instance of ObjectStreamClass for the provided class regardless of whether it’s serializable or not. Once an instance of ObjectStreamClass is obtained via the static “lookup” methods, that instance can be queried for the class name, the serial version UID, and the serializable fields. To demonstrate the use of ObjectStreamClass, I first list the code for two simple classes that will be part of the demonstration. One class, Person, is Serializable but has a transient field. The other class, UnserializablePerson, is nearly identical, but it is not Serializable.

Person.java

package dustin.examples.serialization;

import java.io.Serializable;

/**
 * Person class intended for demonstration of ObjectStreamClass.
 *
 * @author Dustin
 */
public class Person implements Serializable
{
   private final String lastName;
   private final String firstName;
   transient private final String fullName;

   public Person(final String newLastName, final String newFirstName)
   {
      this.lastName = newLastName;
      this.firstName = newFirstName;
      this.fullName = this.firstName + " " + this.lastName;
   }

   public String getFirstName()
   {
      return this.firstName;
   }

   public String getLastName()
   {
      return this.lastName;
   }

   public String getFullName()
   {
      return this.fullName;
   }

   @Override
   public String toString()
   {
      return this.fullName;
   }
}

UnserializablePerson.java

package dustin.examples.serialization;

/**
 * Person class intended for demonstration of ObjectStreamClass.
 *
 * @author Dustin
 */
public class UnserializablePerson
{
   private final String lastName;
   private final String firstName;
   private final String fullName;

   public UnserializablePerson(final String newLastName, final String newFirstName)
   {
      this.lastName = newLastName;
      this.firstName = newFirstName;
      this.fullName = this.firstName + " " + this.lastName;
   }

   public String getFirstName()
   {
      return this.firstName;
   }

   public String getLastName()
   {
      return this.lastName;
   }

   public String getFullName()
   {
      return this.fullName;
   }

   @Override
   public String toString()
   {
      return this.fullName;
   }
}

With the two classes in place to use in conjunction with ObjectStreamClass, it’s now time to look at a simple demonstration application that shows its use.

ObjectStreamClassDemo.java

package dustin.examples.serialization;

import static java.lang.System.out;

import java.io.ObjectStreamClass;
import java.io.ObjectStreamField;

/**
 * Demonstrates use of ObjectStreamClass.
 *
 * @author Dustin
 */
public class ObjectStreamClassDemo
{
   /**
    * Displays the class name, serial version UID, and serializable fields as
    * indicated by the provided instance of ObjectStreamClass.
    *
    * @param serializedClass
    */
   public static void displaySerializedClassInformation(final ObjectStreamClass serializedClass)
   {
      final String serializedClassName = serializedClass.getName();
      out.println("Class Name: " + serializedClassName);
      final long serializedVersionUid = serializedClass.getSerialVersionUID();
      out.println("serialversionuid: " + serializedVersionUid);
      final ObjectStreamField[] fields = serializedClass.getFields();
      out.println("Serialized Fields:");
      for (final ObjectStreamField field : fields)
      {
         out.println("\t" + field.getTypeString() + " " + field.getName());
      }
   }

   /**
    * Main function that demonstrates use of ObjectStreamClass.
    *
    * @param arguments Command line arguments; none expected.
    */
   public static void main(String[] arguments)
   {
      // Example 1: ObjectStreamClass.lookup(Class) on a Serializable class
      out.println("\n=== ObjectStreamClass.lookup(Serializable) ===");
      final ObjectStreamClass serializedClass = ObjectStreamClass.lookup(Person.class);
      displaySerializedClassInformation(serializedClass);

      // Example 2: ObjectStreamClass.lookup(Class) on a class that is not
      //    Serializable (which will result in a NullPointerException
      //    when trying to access the null returned from 'lookup')
      out.println("\n=== ObjectStreamClass.lookup(Unserializable) ===");
      try
      {
         final ObjectStreamClass unserializedClass = ObjectStreamClass.lookup(UnserializablePerson.class);
         displaySerializedClassInformation(unserializedClass);
      }
      catch (NullPointerException npe)
      {
         out.println("NullPointerException: Unable to lookup unserializable class with ObjectStreamClass.lookup.");
      }

      // Example 3: ObjectStreamClass.lookupAny(Class) works without the
      //    NullPointerException, but only provides the name of the class, as
      //    Serial Version UID and serialized fields do not apply in the
      //    case of a class that is not serializable.
      out.println("\n=== ObjectStreamClass.lookupAny(Unserializable) ===");
      final ObjectStreamClass unserializedClass = ObjectStreamClass.lookupAny(UnserializablePerson.class);
      displaySerializedClassInformation(unserializedClass);
   }
}

The comments in the source code above indicate what is being demonstrated. The output from running this class is shown in the next screen snapshot. When the output shown above is correlated with the code before it, we can make several observations related to ObjectStreamClass. These include the fact that the transient field of a serializable class is not returned as one of the serializable fields. We also see that the ObjectStreamClass.lookup(Class) method returns null if the class provided to it is not serializable. 
ObjectStreamClass.lookupAny(Class) returns an instance of ObjectStreamClass for classes that are not serializable, but only the class’s name is available in that case. The code above showed a Serial Version UID for Person.java of 1940442894442614965. When serialver is run on the command line, the same Serial Version UID is generated and displayed. What’s nice about the ability to programmatically calculate the same Serial Version UID as would be calculated by the serialver tool that comes with the Oracle JDK is that one could explicitly add the same Serial Version UID to generated code as would be implicitly added anyway. Any JVM-friendly script or tool (such as one written in Groovy) that needs to know the implicit Serial Version UID of a class could use ObjectStreamClass to obtain that Serial Version UID.   Reference: ObjectStreamClass: Peeking at a Java Object’s Serialization from our JCG partner Dustin Marx at the Inspired by Actual Events blog. ...
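As a minimal sketch of that scripting use case (the Sample class and the uidOf helper are illustrative, not from the original post), a tool only needs a couple of lines to obtain the same value serialver would report:

```java
import java.io.ObjectStreamClass;
import java.io.Serializable;

public class SerialVersionUidLookup {

    // Hypothetical Serializable class standing in for any class of interest.
    static class Sample implements Serializable {
        private String name;
    }

    // Returns the serial version UID that the 'serialver' tool would also compute.
    static long uidOf(Class<?> clazz) {
        final ObjectStreamClass osc = ObjectStreamClass.lookup(clazz);
        if (osc == null) {
            // lookup returns null for non-serializable classes
            throw new IllegalArgumentException(clazz + " is not serializable");
        }
        return osc.getSerialVersionUID();
    }

    public static void main(String[] arguments) {
        System.out.println("serialVersionUID of Sample: " + uidOf(Sample.class));
    }
}
```

The null check mirrors the lookup/lookupAny distinction discussed above: guarding against the null return avoids the NullPointerException from Example 2.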

Android: location based services

Introduction Developing applications for mobile devices gives us many more opportunities for context-based information than a traditional web application. One of the inputs for context-sensitive information is the user’s current location. This post describes several ways an Android application can obtain the user’s current location. Location APIs In previous versions of the Android SDK you had to manually implement a location service which abstracted away the underlying location providers (GPS or cellular based). This was not ideal since, as the developer of an application, you are probably not concerned with the implementation details of obtaining a user’s location. Fortunately, Google’s Location APIs provide a much better way of working with location data. The Location APIs provide the following functionality: a fused location provider, which abstracts away the underlying location providers; geofencing, which lets your application set up geographic boundaries around specific locations and then receive notifications when the user enters or leaves those areas; and activity recognition (is the user walking or in a car?). Check for Google Play services Working with the Location APIs requires the presence of the Google Play services application on the device. It is good practice to test for the presence of Google Play services before using the API. This can be done with the following code:

protected boolean testPlayServices()
{
   int checkGooglePlayServices = GooglePlayServicesUtil.isGooglePlayServicesAvailable(getActivity());
   if (checkGooglePlayServices != ConnectionResult.SUCCESS)
   {
      // Google Play services is missing!
      /*
       * Returns a status code indicating whether there was an error.
       * Can be one of the following in ConnectionResult: SUCCESS, SERVICE_MISSING,
       * SERVICE_VERSION_UPDATE_REQUIRED, SERVICE_DISABLED, SERVICE_INVALID.
       */
      GooglePlayServicesUtil.getErrorDialog(checkGooglePlayServices, getActivity(), 1122).show();
      return false;
   }
   return true;
}

Code listing 1

Code listing 1 shows how to check for the presence of Google Play services. If Google Play services is not present, a dialog is displayed giving the user the opportunity to download and install the Google Play services application. This method returns false if the services are not found, and can be placed around any code requiring Play services. Obtaining the user’s location The primary class for using the Location APIs is the LocationClient. The first thing to do is instantiate the LocationClient, passing the required listeners. See the following code, which is usually called from onCreate within an activity, or from onActivityCreated if the LocationClient is instantiated within a fragment:

locationClient = new LocationClient(getActivity(), this, this);

The parameters are: the Context; ConnectionCallbacks, which defines the onConnected() and onDisconnected() methods; and OnConnectionFailedListener, which defines the onConnectionFailed() method. When the LocationClient is instantiated, the next thing to do is call its connect() method. This is typically done in the onResume method. In the onPause method, the LocationClient’s disconnect() method is called. This ensures the LocationClient is only active when the activity is running. Should you need constant tracking of the user’s location while the app is in the background, it is better to create a background service for this. When the connect() method is successful, the onConnected() callback is called. In this method you can obtain the user’s last known location using the following call:

locationClient.getLastLocation();

Periodic location updates Registering for periodic location updates involves slightly more work. The first thing to do is create a new LocationRequest object. This object specifies the quality of service for receiving location updates. 
The following code demonstrates this:

private static LocationRequest createLocationRequest()
{
   final LocationRequest locationRequest = new LocationRequest();
   locationRequest.setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY);
   // The rate at which the application actively requests location updates.
   locationRequest.setInterval(60 * MILLISECONDS_IN_SECOND);
   // The fastest rate at which the application receives location updates; for example, when
   // another application has requested a location update, this application also receives
   // that event.
   locationRequest.setFastestInterval(10 * MILLISECONDS_IN_SECOND);
   return locationRequest;
}

(MILLISECONDS_IN_SECOND is assumed to be a constant equal to 1000.) After the LocationRequest is created and the connect() method of the LocationClient is successful, the onConnected method is called. In this method the LocationClient is instructed to send periodic location updates to the application using the following code:

locationClient.requestLocationUpdates(locationRequest, this);

The parameters are: locationRequest, which specifies the quality of service of the location updates; and a LocationListener, which defines several callback methods, including onLocationChanged, which is called when a new location is available. Required dependencies To use Google Play services in your application you have to define the correct dependencies in the build.gradle. There are two versions of the API: one for Android 2.3 and higher, and one for Android 2.2. Use the SDK manager to install the required packages. For Android 2.3 these are: Google Play services and the Google repository. For Android 2.2 these are: Google Play services for Froyo and the Google repository. So if your application targets Android 2.2 you must use the Google Play services for Froyo library. 
In your build.gradle specify the following dependency. For Android 2.3:

dependencies {
    compile 'com.google.android.gms:play-services:4.0.30'
}

For Android 2.2:

dependencies {
    compile 'com.google.android.gms:play-services:3.2.65'
}

Testing with mock locations To test with mock locations you have to do the following: enable mock locations in the developer options; download the sample LocationProvider example app: http://developer.android.com/training/location/location-testing.html; modify the LocationUtils class with an array of locations you want to test with; install the LocationProvider sample app on your device; start the LocationProvider sample app; and start the application whose location functionality you want to test. A handy website for getting the latitude and longitude of an address for testing purposes is: http://www.itouchmap.com/latlong.html Conclusion Working with location data in your mobile application can add a new dimension to the user experience. This article explained the steps needed to use the Google Location APIs for obtaining the user’s current location.   Reference: Android: location based services from our JCG partner Jamie Craane at Jamie Craane’s blog. ...

5 tools for Java developers

A way to improve the Java code we write is to work with the best tools. So, let’s check out the five most-used tools that IDR Solutions suggests to help Java developers write better code. FindBugs FindBugs is an open source program, distributed under the terms of the Lesser GNU Public License, which operates on Java bytecode rather than source code. It can identify potential errors in the code of Java programs, such as null pointer dereferences, infinite recursive loops, bad uses of the Java libraries, and deadlocks. FindBugs is mainly used for identifying serious defects in large applications; it is capable of determining the severity of potential errors, which are classified in ranks (scariest, scary, troubling, of concern). It is available as a plug-in for Eclipse, NetBeans and IntelliJ IDEA, and can be used from the command line or within Ant, Eclipse, Maven, NetBeans and Emacs. Apache Ant Apache Ant is an open source Apache project, released under the Apache Software License. It uses XML, is implemented in Java, and is mainly used for Java projects. It consists of built-in tasks that allow developers to compile, assemble, test and run Java applications. Ant can also be used for building non-Java applications, such as C or C++ applications, and generally for any type of process which can be described in terms of targets and tasks. It is flexible and does not put restrictions on coding conventions or directory layouts for Java projects. It is available for Eclipse, NetBeans and IntelliJ IDEA. JProfiler JProfiler is a commercially licensed Java profiling tool developed by ej-technologies GmbH, mainly designed for use with Java EE and Java SE applications. It can be very useful when developers need to analyze performance bottlenecks, memory leaks, CPU loads and threading issues. It supports both local and remote profiling, that is, analysis of applications running on the same machine or on remote machines. 
It can profile the information in both cases, so users can see, live, through a visual representation, the load in terms of active and total bytes, instances, threads, classes, and garbage collector activities. JProfiler can be either a stand-alone application or a plug-in for the Eclipse, NetBeans, IntelliJ IDEA and Oracle JDeveloper software development environments. It is also available as part of the application server integration in Adobe’s ColdFusion and GlassFish. Bash Bash is a UNIX shell, or command language interpreter, written for the GNU Project as a free software replacement for the Bourne shell. It is used as a command processor, typically running in a text window, and allows users to type commands which cause actions. It can also read commands from a file, and supports filename wildcarding, piping, command substitution and variables, as well as control structures for condition testing and iteration. It is particularly useful as it allows for the automation of repetitive tasks using Bash scripts. SonarQube SonarQube is an open source platform that has become a world leader in code quality management systems, and is well known for its continuous inspection of code quality. Apart from Java it also supports C/C++, C#, PHP, Flex, Groovy, JavaScript, Python, PL/SQL, and COBOL, and it can be used as part of Android development. It integrates with Maven, Ant, Gradle and other continuous integration tools. It reports on duplicated code, coding standards, unit tests, code coverage, complex code, potential bugs, comments, and design and architecture. ...
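To make the Bash automation point concrete, here is a small illustrative script (the task and directory layout are assumptions for the example, not from the article): it sums the line counts of all Java sources under a directory, the kind of chore a developer might script once rather than repeat by hand.

```shell
#!/usr/bin/env bash
# count_java_lines DIR - sum the line counts of all .java files under DIR.
# Illustrative automation only; the default 'src' path is an assumption.
count_java_lines() {
  local dir="${1:-src}"
  local total=0 lines f
  while IFS= read -r f; do
    lines=$(wc -l < "$f")
    total=$((total + lines))
  done < <(find "$dir" -type f -name '*.java')
  echo "$total"
}

# Example invocation:
# count_java_lines src
```

Combined with wildcarding, piping and command substitution (all mentioned above), even a short function like this can be dropped into a build or CI step.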

Access private fields in unit tests

First of all, let me say it out loud: you need to design your code to be testable, so that you test your private fields through your public methods. But (“buts” are the reason why humans are still programming instead of the computer itself, so be happy here) sometimes you want to, and should, alter some private fields in order to test all the possible boundaries. Often private fields can be modified through public getters and setters or via the class constructor, and in those cases the tests are easy to create and everybody is happy. But when you use external frameworks like Spring, it may be that you do not have control over injected private fields. I already explained how to mock Spring components in your tests, without the need of creating and maintaining ad-hoc test Spring configurations, in a previous post; here I will show you how to modify a private variable for your tests. Let the code speak:

import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import com.google.common.collect.ImmutableSet;

@Service
public class SomeService {

    @Value("${whitelist.api.users:A,B,C}")
    private String apiUsers;

    private ImmutableSet<String> acceptableAPIUsers;

    @PostConstruct
    public void init() {
        acceptableAPIUsers = ImmutableSet.copyOf(apiUsers.replaceAll(" ", "").split(","));
    }

    public boolean isAnAcceptableUser(String user) {
        return user == null ? false : acceptableAPIUsers.contains(user.toUpperCase());
    }
}

We do not have control over the apiUsers String, so we have a couple of straightforward options: one is to create a Spring configuration for your test, modify the Spring context and mock the property; two is to create a setter to change the value of the property from your test. 
I discourage creating public accessors only for your tests: it is confusing for other people looking at your code, and creating and maintaining Spring configurations for your tests can be a pain. I know what you are thinking: “if I cannot do either of the above I’m going to get fired, my girlfriend will leave me and my life is finished”. But don’t you worry, I’m here to show you another option! You can create a Groovy class with a static method to access your private field in your test:

import groovy.transform.CompileStatic

@CompileStatic
class SomeServiceAccessor {

    public static void setApiUsers(SomeService someService, String apiUsers) {
        someService.@apiUsers = apiUsers
    }
}

And use it in your unit test (note that the users are set in upper case, since isAnAcceptableUser compares against the upper-cased input):

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

import org.junit.Before;
import org.junit.Test;

public class SomeServiceTest {

    private SomeService service;

    @Before
    public void setUp() {
        service = new SomeService();
        SomeServiceAccessor.setApiUsers(service, "PIPPO,PLUTO,BUNGABUNGA");
        service.init();
    }

    @Test
    public void testIsNotAnAcceptableUser() {
        assertThat(service.isAnAcceptableUser(""), is(false));
        assertThat(service.isAnAcceptableUser(null), is(false));
        assertThat(service.isAnAcceptableUser("random"), is(false));
    }

    @Test
    public void testIsAnAcceptableUser() {
        assertThat(service.isAnAcceptableUser("pippo"), is(true));
        assertThat(service.isAnAcceptableUser("PIPPO"), is(true));
        assertThat(service.isAnAcceptableUser("pluto"), is(true));
        assertThat(service.isAnAcceptableUser("bungabunga"), is(true));
    }
}

Of course you can do the same in Java by changing the visibility of the field with reflection, but I think the Groovy solution is a cleaner and easier way. 
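For completeness, here is a sketch of that reflection-based Java alternative (the FakeService class and the setPrivateField helper are illustrative stand-ins, not from the original post):

```java
import java.lang.reflect.Field;

public class PrivateFieldSetter {

    // Illustrative stand-in for a Spring bean with an injected private field.
    static class FakeService {
        private String apiUsers = "A,B,C";

        String getApiUsers() {
            return apiUsers;
        }
    }

    // Sets a private field by name; intended for use in tests only.
    static void setPrivateField(Object target, String name, Object value) {
        try {
            Field field = target.getClass().getDeclaredField(name);
            field.setAccessible(true); // bypass the 'private' modifier
            field.set(target, value);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("Cannot set field '" + name + "'", e);
        }
    }

    public static void main(String[] args) {
        FakeService service = new FakeService();
        setPrivateField(service, "apiUsers", "pippo,pluto");
        System.out.println(service.getApiUsers()); // pippo,pluto
    }
}
```

The same caveat applies as with the Groovy accessor: keep such helpers in the test sources, not in production code.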
Now, I’ll finish this post with the following recommendation: do not use this solution unless you really, really, really need to modify private variables to unit test your class!    Reference: Access private fields in unit tests from our JCG partner Marco Castigliego at the Remove duplication and fix bad names blog. ...

Selecting level of detail returned by varying the content type, part II

In my previous entry, we looked at using a feature of MOXy to control the level of data output for a particular entity. This post looks at an abstraction provided by Jersey 2.x that allows you to define a custom set of annotations to have the same effect. As before, we have an almost trivial resource that returns an object that Jersey will convert to JSON for us. Note that for the moment there is nothing in this code to do the filtering – I am not going to pass in annotations to the Response object as in the Jersey examples:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("hello")
public class SelectableHello {

   @GET
   @Produces({ "application/json; level=detailed", "application/json; level=summary", "application/json; level=normal" })
   public Message hello() {
      return new Message();
   }
}

In my design I am going to define four annotations: NoView, SummaryView, NormalView and DetailedView. All root objects have to carry the NoView annotation to prevent un-annotated fields from being exposed – you might not feel this is necessary in your design. All of these classes look the same, so I am only going to show one. Note that the factory method creating an AnnotationLiteral has to be used in preference to a factory that would create a dynamic proxy to have the same effect. There is code in 2.5 that will ignore any annotation implemented by a java.lang.reflect.Proxy object; this includes any annotations you may have retrieved from a class. I am working on submitting a fix for this. 
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.enterprise.util.AnnotationLiteral;

import org.glassfish.jersey.message.filtering.EntityFiltering;

@Target({ ElementType.TYPE, ElementType.METHOD, ElementType.FIELD })
@Retention(RetentionPolicy.RUNTIME)
@Documented
@EntityFiltering
public @interface NoView {

   /**
    * Factory class for creating instances of the annotation.
    */
   public static class Factory extends AnnotationLiteral<NoView> implements NoView {

      private Factory() {
      }

      public static NoView get() {
         return new Factory();
      }
   }
}

Now we can take a quick look at our Message bean. This is slightly more complicated than my previous example, showing filtering of subgraphs in a very simple form. As I said before, the class is annotated with a NoView annotation at the root – this means that privateData is never returned to the client, as it is not specifically annotated.

import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
@NoView
public class Message {

   private String privateData;
   @SummaryView
   private String summary;
   @NormalView
   private String message;
   @DetailedView
   private String subtext;
   @DetailedView
   private SubMessage submessage;

   public Message() {
      summary = "Some simple summary";
      message = "This is indeed the message";
      subtext = "This is the deep and meaningful subtext";
      submessage = new SubMessage();
      privateData = "The fox is flying tonight";
   }

   // Getters and setters not shown
}

public class SubMessage {

   private String message;

   public SubMessage() {
      message = "Some sub messages";
   }

   // Getters and setters not shown
}

As noted before, there is no code in the resource class to deal with filtering – I consider this to be a cross-cutting concern, so I have abstracted it into a WriterInterceptor. Note the exception thrown if an entity is used that doesn’t have the NoView annotation on it. 
import java.io.IOException;
import java.lang.annotation.Annotation;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

import javax.ws.rs.ServerErrorException;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;
import javax.ws.rs.ext.WriterInterceptor;
import javax.ws.rs.ext.WriterInterceptorContext;

@Provider
public class ViewWriteInterceptor implements WriterInterceptor {

   private HttpHeaders httpHeaders;

   public ViewWriteInterceptor(@Context HttpHeaders httpHeaders) {
      this.httpHeaders = httpHeaders;
   }

   @Override
   public void aroundWriteTo(WriterInterceptorContext writerInterceptorContext)
         throws IOException, WebApplicationException {

      // I assume this case will never happen, just to be sure
      if (writerInterceptorContext.getEntity() == null) {
         writerInterceptorContext.proceed();
         return;
      } else {
         Class<?> entityType = writerInterceptorContext.getEntity().getClass();
         String entityTypeString = entityType.getName();

         // Ignore any Jersey system classes, for example wadl
         //
         if (entityType == String.class
               || entityType.isArray()
               || entityTypeString.startsWith("com.sun")
               || entityTypeString.startsWith("org.glassfish")) {
            writerInterceptorContext.proceed();
            return;
         }
         // Fail if the class doesn't have the default NoView annotation;
         // this prevents any unannotated fields from showing up
         //
         else if (!entityType.isAnnotationPresent(NoView.class)) {
            throw new ServerErrorException("Entity type should be tagged with @NoView annotation "
                  + entityType, Response.Status.INTERNAL_SERVER_ERROR);
         }
      }

      // Get hold of the return media type:
      //
      MediaType mt = writerInterceptorContext.getMediaType();
      String level = mt.getParameters().get("level");

      // Get the annotations and modify as required
      //
      Set<Annotation> current = new LinkedHashSet<>();
      current.addAll(Arrays.asList(
            writerInterceptorContext.getAnnotations()));

      // Deliberate fall-through: the "detailed" level also adds the
      // normal and summary annotations.
      switch (level != null ? level : "") {
         default:
         case "detailed":
            current.add(com.example.annotation.DetailedView.Factory.get());
         case "normal":
            current.add(com.example.annotation.NormalView.Factory.get());
         case "summary":
            current.add(com.example.annotation.SummaryView.Factory.get());
      }

      writerInterceptorContext.setAnnotations(current.toArray(new Annotation[current.size()]));

      writerInterceptorContext.proceed();
   }
}

Finally, you have to enable the EntityFilteringFeature manually; to do this you can simply register it in your Application class:

import java.lang.annotation.Annotation;

import javax.ws.rs.ApplicationPath;

import org.glassfish.jersey.message.filtering.EntityFilteringFeature;
import org.glassfish.jersey.server.ResourceConfig;

@ApplicationPath("/resources/")
public class SelectableApplication extends ResourceConfig {

   public SelectableApplication() {
      packages("...");

      // Set entity-filtering scope via configuration.
      property(EntityFilteringFeature.ENTITY_FILTERING_SCOPE,
            new Annotation[] { NormalView.Factory.get(), DetailedView.Factory.get(),
                  NoView.Factory.get(), SummaryView.Factory.get() });
      register(EntityFilteringFeature.class);
   }
}

Once you have this all up and running, the application will respond as before:

GET .../hello
Accept: application/json; level=detailed or application/json

{
   "message" : "This is indeed the message",
   "submessage" : {
      "message" : "Some sub messages"
   },
   "subtext" : "This is the deep and meaningful subtext",
   "summary" : "Some simple summary"
}

GET .../hello
Accept: application/json; level=normal

{
   "message" : "This is indeed the message",
   "summary" : "Some simple summary"
}

GET .../hello
Accept: application/json; level=summary

{
   "summary" : "Some simple summary"
}

This, I feel, is a better alternative to using the MOXy annotations directly – using custom annotations should make it much easier to port your application to another implementation, even if you have to provide your own filter. 
Finally, it is also worth exploring the Jersey extension to this that allows role-based filtering, which I can see being useful from a security perspective.   Reference: Selecting level of detail returned by varying the content type, part II from our JCG partner Gerard Davison at Gerard Davison’s blog. ...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
