
User Stories are Rarely Appropriate

All tools are useful when used appropriately, and User Stories are no different. User stories are fantastic when used by small, co-located teams on small projects with easy access to customers. User stories can quickly fall apart in any of the following situations:

- the team or project is not small
- the team is not in a single location
- customers are hard to access
- the project end date must be relatively fixed

User stories were introduced as a core part of Extreme Programming (XP). Extreme Programming assumes you have small, co-located teams; if you relax (or abandon) any of these constraints you will probably end up with a process out of control. XP, and hence user stories, works in high-intensity environments where there are strong feedback loops inside the team and with customers:

- Individuals and Interactions over Processes and Tools
- Customer Collaboration over Contract Negotiation

User stories need intense intra-team and customer communication to succeed. User stories are a light-weight methodology that facilitates intense interactions between customers and developers and puts the emphasis on the creation of code, not documentation. Their simplicity makes it easy for customers to help write them, but they must be complemented with timely interactions so that issues can be clarified. Large teams dilute interactions between developers; infrequent communication leads to a lack of team synchronization. Most organizations break larger teams into smaller groups where communication is primarily via email or managers; this kills communication and interaction. Larger projects have non-trivial architectures. Building a non-trivial architecture by only looking at the end-user requirements is impossible; it is like having only the leaves of a tree and thinking you can quickly determine where the branches and the trunk must be. User stories don't work with teams where intense interaction is not possible. Teams distributed over multiple locations or time zones do not allow intense interaction. You are delusional if you think regular conference calls constitute intense interaction; most stand-up calls done via conference degrade into design or defect sessions. When the emphasis is on writing code, it is critical that customers can be accessed in a timely fashion. If your customers are only indirectly accessible through product managers or account representatives every few days, then you will end up with tremendous latency. Live weekly demos with customers are necessary to flush out misunderstandings quickly and keep everyone on the same page. User stories are virtually impossible to estimate. Often we use user stories because there is a high degree of requirements uncertainty, either because the requirements are unknown or because it is difficult to get consistent requirements from customers. Since user stories are difficult to estimate, especially since you don't know all the requirements, project end dates are impossible to predict with accuracy. To summarize, intense interactions between customers and developers are critical for user stories to be effective because they do several things:

- they keep all the customers and developers on the same page
- they flush out misunderstandings as quickly as possible

All of the issues listed initially dilute the intensity of communication, either between the team members or between developers and customers. Each issue that increases the latency of communication will increase misunderstandings and increase the time it takes to find and remove defects.
So if you have any of the following:

- large or distributed teams
- a project with non-trivial architecture
- difficult access to customers, i.e. high latency
- high requirements uncertainty but a fixed project end date

then user stories are probably not your best choice of requirements methodology. At best you may be able to complement your user stories with storyboards; at worst you may need some light-weight form of use case.

Light-weight use case tutorial:
- (1 of 4) A use case is a dialog
- (2 of 4) Use case diagrams (UML)
- (3 of 4) Adding screens and reports
- (4 of 4) Adding minimal execution context

Other requirements articles:
- Shift Happens (long)
- Don't manage enhancements in the Bug Tracker
- When BA means B∪ll$#!t Artist

Reference: User Stories are Rarely Appropriate from our JCG partner Dalip Mahal at the Accelerated Development blog.

The Knapsack problem

I found the Knapsack problem tricky and interesting at the same time. I am sure if you are visiting this page, you already know the problem statement, but just for the sake of completeness:

Problem: Given a Knapsack of a maximum capacity of W and N items, each with its own value and weight, throw items into the Knapsack such that the final contents have the maximum value. Yikes!!! (Link to the problem page in wiki.)

Here's the general way the problem is explained: consider a thief who gets into a home to rob and carries a knapsack. There is a fixed number of items in the home, each with its own weight and value: jewellery, with low weight and high value, versus tables, with less value but a lot of weight. To add fuel to the fire, the thief has an old knapsack which has limited capacity. Obviously, he can't split the table into half or the jewellery into 3/4ths. He either takes it or leaves it.

Example:
Knapsack max weight: W = 10 (units)
Total items: N = 4
Values of items: v[] = {10, 40, 30, 50}
Weights of items: w[] = {5, 4, 6, 3}

A cursory look at the example data tells us that the max value we could accommodate within the limit of max weight 10 is 50 + 40 = 90, with a weight of 7.

Approach: The way this is optimally solved is using dynamic programming: solving smaller knapsack problems and then expanding them for the bigger problem. Let's build an Item x Weight array called V (the Value array): V[N][W] = 4 rows * 10 columns (plus a row 0 and a column 0 for the base cases). Each of the values in this matrix represents a smaller Knapsack problem.

Base case 1: Let's take the case of the 0th column. It just means that the knapsack has 0 capacity. What can you hold in it? Nothing. So let's fill it all with 0s.

Base case 2: Let's take the case of row 0. It just means that there are no items in the house. What do you hold in your knapsack if there are no items? Nothing again!!! All zeroes.

Solution: Now, let's start filling in the array row-wise. What do row 1 and column 1 mean? That given the first item (row), can you accommodate it in the knapsack with capacity 1 (column)? Nope, the weight of the first item is 5, so let's fill in 0. In fact, we wouldn't be able to fill in anything until we reach column 5 (weight 5). Once we reach column 5 (which represents weight 5) on the first row, it means that we can accommodate item 1. Let's fill in 10 there (remember, this is a Value array). Moving on, for weight 6 (column 6), can we accommodate anything else with the remaining weight of 1 (weight - weight of this item => 6 - 5)? Remember, we are on the first item, so it is intuitive that the rest of the row will just hold the same value, since we are unable to add any other item for the extra weight we have got.

The next interesting thing happens when we reach column 4 in the row for Item 2. The current running weight is 4. We should check the following cases:

- Can we accommodate Item 2? Yes, we can. Item 2's weight is 4.
- Is the value for the current weight higher without Item 2? Check the previous row* for the same weight. Nope, the previous row has 0 in it, since we were not able to accommodate Item 1 at weight 4.
- Can we accommodate two items at the same weight so that we could maximize the value? Nope. The remaining weight after deducting Item 2's weight is 0.

*Why the previous row? Simply because the previous row at weight 4 is itself a smaller knapsack solution which gives the max value that could be accumulated for that weight up to that point (traversing through the items).
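Stated once in general terms, this is the same rule the worked example below applies, written with the array names used in the implementation at the end of the post:

// Recurrence applied to every cell beyond the base cases:
if (wt[item - 1] <= weight)
    V[item][weight] = Math.max(V[item - 1][weight],                                   // leave the item out
                               val[item - 1] + V[item - 1][weight - wt[item - 1]]);   // take it
else
    V[item][weight] = V[item - 1][weight];   // item doesn't fit, carry the value forward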
Exemplifying:

- The value of the current item = 40
- The weight of the current item = 4
- The weight that is left over = 4 - 4 = 0
- Check the row above (the item above in the case of Item 1, or the cumulative max value in the case of the other rows). For the remaining weight 0, are we able to accommodate Item 1? Simply put, is there any value at all in the row above for the given weight?

The calculation goes like so:

- Take the max value for the same weight without this item: previous row, same weight = 0
  => V[item-1][weight]
- Take the value of the current item + the value we could accommodate with the remaining weight: value of the current item + value in the previous row at the remaining weight 0 (total weight until now (4) - weight of the current item (4))
  => val[item-1] + V[item-1][weight-wt[item-1]]

The max among the two is 40 (0 and 40).

The next and most important event happens at column 9 and row 2, meaning we have a weight of 9 and two items. Looking at the example data we could accommodate the first two items. Here we consider a few things:

1. The value of the current item = 40
2. The weight of the current item = 4
3. The weight that is left over = 9 - 4 = 5
4. Check the row above. At the remaining weight 5, are we able to accommodate Item 1?

So the calculation is:

- Take the max value for the same weight without this item: previous row, same weight = 10
- Take the value of the current item + the value we could accumulate with the remaining weight: value of the current item (40) + value in the previous row at weight 5 (total weight until now (9) - weight of the current item (4)) = 40 + 10 = 50

10 vs 50 => 50.

At the end of solving all these smaller problems, we just need to return the value at V[N][W] - Item 4 at Weight 10.

Complexity: Analyzing the complexity of the solution is pretty straightforward. We just have a loop over W within a loop over N => O(NW).

Implementation: Here comes the obligatory implementation code in Java:

class Knapsack {

    public static void main(String[] args) throws Exception {
        int val[] = {10, 40, 30, 50};
        int wt[] = {5, 4, 6, 3};
        int W = 10;

        System.out.println(knapsack(val, wt, W));
    }

    public static int knapsack(int val[], int wt[], int W) {

        // Get the total number of items.
        // Could be wt.length or val.length. Doesn't matter
        int N = wt.length;

        // Create a matrix.
        // Items are in rows and weights are in columns, +1 on each side
        int[][] V = new int[N + 1][W + 1];

        // What if there are no items at home?
        // Set all columns at row 0 to be 0
        for (int col = 0; col <= W; col++) {
            V[0][col] = 0;
        }

        // What if the knapsack's capacity is 0?
        // Fill the first column with 0
        for (int row = 0; row <= N; row++) {
            V[row][0] = 0;
        }

        for (int item = 1; item <= N; item++) {

            // Let's fill the values row by row
            for (int weight = 1; weight <= W; weight++) {

                // Is the current item's weight less
                // than or equal to the running weight?
                if (wt[item - 1] <= weight) {

                    // Given a weight, check if the value of the current
                    // item + the value of the item that we could afford
                    // with the remaining weight is greater than the value
                    // without the current item itself
                    V[item][weight] = Math.max(val[item - 1] + V[item - 1][weight - wt[item - 1]],
                                               V[item - 1][weight]);
                } else {
                    // If the current item's weight is more than the
                    // running weight, just carry forward the value
                    // without the current item
                    V[item][weight] = V[item - 1][weight];
                }
            }
        }

        // Printing the matrix
        for (int[] rows : V) {
            for (int col : rows) {
                System.out.format("%5d", col);
            }
            System.out.println();
        }

        return V[N][W];
    }
}

Reference: The Knapsack problem from our JCG partner Arun Manivannan at the Rerun.me blog.
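The implementation above returns only the maximum value. If you also want to know which items produced it, one common approach (a small sketch, not part of the original code, assuming the filled V matrix plus val, wt, N and W are still in scope) is to walk the table backwards:

// Sketch: after V has been filled, walk back from V[N][W]
// to list the items that make up the optimal value.
int remaining = W;
for (int item = N; item > 0; item--) {
    // If the value differs from the row above, this item was taken.
    if (V[item][remaining] != V[item - 1][remaining]) {
        System.out.println("Item " + item + " (value " + val[item - 1]
                + ", weight " + wt[item - 1] + ") is in the knapsack");
        remaining -= wt[item - 1];
    }
}

For the example data this reports Item 4 and Item 2, matching the expected maximum value of 90.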

An Introduction to Generics in Java – Part 6

This is a continuation of an introductory discussion on Generics, previous parts of which can be found here. In the last article we were discussing recursive bounds on type parameters. We saw how a recursive bound helped us reuse the vehicle comparison logic. At the end of that article, I suggested that a possible type mixing may occur when we are not careful enough. Today we will see an example of this. The mixing can occur if someone mistakenly creates a subclass of Vehicle in the following way:

/**
 * Definition of Vehicle
 */
public abstract class Vehicle<E extends Vehicle<E>> implements Comparable<E> {
    // other methods and properties

    public int compareTo(E vehicle) {
        // method implementation
    }
}

/**
 * Definition of Bus
 */
public class Bus extends Vehicle<Bus> {}

/**
 * BiCycle, new subtype of Vehicle
 */
public class BiCycle extends Vehicle<Bus> {}

/**
 * Now this class's compareTo method will take a Bus type
 * as its argument. As a result, you will not be able to compare
 * a BiCycle with another BiCycle, but with a Bus.
 */
cycle.compareTo(anotherCycle);  // This will generate a compile time error
cycle.compareTo(bus);           // but you will be able to do this without any error

This type of mix-up does not occur with Enums because the JVM takes care of subclassing and creating instances for enum types, but if we use this style in our code then we have to be careful.

Let's talk about another interesting application of recursive bounds. Consider the following class:

public class MyClass {
    private String attrib1;
    private String attrib2;
    private String attrib3;
    private String attrib4;
    private String attrib5;

    public MyClass() {}

    public String getAttrib1() { return attrib1; }
    public void setAttrib1(String attrib1) { this.attrib1 = attrib1; }

    public String getAttrib2() { return attrib2; }
    public void setAttrib2(String attrib2) { this.attrib2 = attrib2; }

    public String getAttrib3() { return attrib3; }
    public void setAttrib3(String attrib3) { this.attrib3 = attrib3; }

    public String getAttrib4() { return attrib4; }
    public void setAttrib4(String attrib4) { this.attrib4 = attrib4; }

    public String getAttrib5() { return attrib5; }
    public void setAttrib5(String attrib5) { this.attrib5 = attrib5; }
}

If we want to create an instance of this class, then we can do this:

MyClass mc = new MyClass();
mc.setAttrib1("Attribute 1");
mc.setAttrib2("Attribute 2");

The above code creates an instance of the class and initializes the properties. If we could use Method Chaining here, then we could have written:

MyClass mc = new MyClass().setAttrib1("Attribute 1")
        .setAttrib2("Attribute 2");

which obviously looks much better than the first version.
However, to enable this type of method chaining, we need to modify MyClass in the following way:

public class MyClass {
    private String attrib1;
    private String attrib2;
    private String attrib3;
    private String attrib4;
    private String attrib5;

    public MyClass() {}

    public String getAttrib1() { return attrib1; }
    public MyClass setAttrib1(String attrib1) { this.attrib1 = attrib1; return this; }

    public String getAttrib2() { return attrib2; }
    public MyClass setAttrib2(String attrib2) { this.attrib2 = attrib2; return this; }

    public String getAttrib3() { return attrib3; }
    public MyClass setAttrib3(String attrib3) { this.attrib3 = attrib3; return this; }

    public String getAttrib4() { return attrib4; }
    public MyClass setAttrib4(String attrib4) { this.attrib4 = attrib4; return this; }

    public String getAttrib5() { return attrib5; }
    public MyClass setAttrib5(String attrib5) { this.attrib5 = attrib5; return this; }
}

and then we will be able to use method chaining for instances of this class. However, if we want to use method chaining where inheritance is involved, things kind of get messy:

public abstract class Parent {
    private String attrib1;
    private String attrib2;
    private String attrib3;
    private String attrib4;
    private String attrib5;

    public Parent() {}

    public String getAttrib1() { return attrib1; }
    public Parent setAttrib1(String attrib1) { this.attrib1 = attrib1; return this; }

    public String getAttrib2() { return attrib2; }
    public Parent setAttrib2(String attrib2) { this.attrib2 = attrib2; return this; }

    public String getAttrib3() { return attrib3; }
    public Parent setAttrib3(String attrib3) { this.attrib3 = attrib3; return this; }

    public String getAttrib4() { return attrib4; }
    public Parent setAttrib4(String attrib4) { this.attrib4 = attrib4; return this; }

    public String getAttrib5() { return attrib5; }
    public Parent setAttrib5(String attrib5) { this.attrib5 = attrib5; return this; }
}

public class Child extends Parent {
    private String attrib6;
    private String attrib7;

    public Child() {}

    public String getAttrib6() { return attrib6; }
    public Child setAttrib6(String attrib6) { this.attrib6 = attrib6; return this; }

    public String getAttrib7() { return attrib7; }
    public Child setAttrib7(String attrib7) { this.attrib7 = attrib7; return this; }
}

/**
 * Now try using method chaining for instances of Child
 * in the following way; you will get compile time errors.
 */
Child c = new Child().setAttrib1("Attribute 1").setAttrib6("Attribute 6");

The reason for this is that even though Child inherits all the setters from its parent, the return type of all those setter methods is Parent, not Child. So the first setter will return a reference of type Parent, and calling setAttrib6 on it will result in a compilation error, because Parent does not have any such method. We can resolve this problem by introducing a generic type parameter on Parent and defining a recursive bound on it.
All of its children will pass themselves as the type argument when they extend from it, ensuring that the setter methods will return references of their own type:

public abstract class Parent<T extends Parent<T>> {
    private String attrib1;
    private String attrib2;
    private String attrib3;
    private String attrib4;
    private String attrib5;

    public Parent() {}

    public String getAttrib1() { return attrib1; }

    @SuppressWarnings("unchecked")
    public T setAttrib1(String attrib1) { this.attrib1 = attrib1; return (T) this; }

    public String getAttrib2() { return attrib2; }

    @SuppressWarnings("unchecked")
    public T setAttrib2(String attrib2) { this.attrib2 = attrib2; return (T) this; }

    public String getAttrib3() { return attrib3; }

    @SuppressWarnings("unchecked")
    public T setAttrib3(String attrib3) { this.attrib3 = attrib3; return (T) this; }

    public String getAttrib4() { return attrib4; }

    @SuppressWarnings("unchecked")
    public T setAttrib4(String attrib4) { this.attrib4 = attrib4; return (T) this; }

    public String getAttrib5() { return attrib5; }

    @SuppressWarnings("unchecked")
    public T setAttrib5(String attrib5) { this.attrib5 = attrib5; return (T) this; }
}

public class Child extends Parent<Child> {
    private String attrib6;
    private String attrib7;

    public String getAttrib6() { return attrib6; }
    public Child setAttrib6(String attrib6) { this.attrib6 = attrib6; return this; }

    public String getAttrib7() { return attrib7; }
    public Child setAttrib7(String attrib7) { this.attrib7 = attrib7; return this; }
}

Notice that we have to explicitly cast this to type T because the compiler does not know whether or not this conversion is possible, even though it is, because T is by definition bounded by Parent<T>. Also, since we are casting an object reference to T, an unchecked warning will be issued by the compiler. To suppress it we used @SuppressWarnings("unchecked") above the setters. With the above modifications, it's perfectly valid to do this:

Child c = new Child().setAttrib1("Attribute 1")
        .setAttrib6("Attribute 6");

When writing setters this way, we should be careful not to use recursive bounds for any other purposes, like accessing children's state from the parent, because that would expose the parent to the internal details of its subclasses and eventually break the encapsulation. With this post I finish the basic introduction to Generics. There are so many things that I did not discuss in this series, because I believe they are beyond the introductory level. Until next time.

Reference: An Introduction to Generics in Java – Part 6 from our JCG partner Sayem Ahmed at the Random Thoughts blog.

Event Tracking with Analytics API v4 for Android

As I've learned from developing my own mileage tracking app for cyclists and commuters, getting ratings and feedback from users can be challenging and time consuming. Event tracking can help by enabling you to develop a sense of how popular a particular feature is and how often it's getting used by users of your app. In Android, Google Play Services' Analytics API v4 can be used to gather statistics on the user events that occur within your app. In this post I'll quickly show you how to use this API to accomplish simple event tracking.

Getting started

It's important to say at this point that all of these statistics are totally anonymous. App developers who use analytics have no idea who is using each feature or generating each event, only that an event occurred. Assuming you've set up Google Analytics v4 in your app as per my last post, tracking app events is fairly simple. The first thing you need is your analytics application Tracker (obtained in my case by calling getApplication() as per the previous post). Bear in mind that this method is only available in an object that extends Android's Activity or Service class, so you can't use it everywhere without some messing about. Once you have your application Tracker, you should use an analytics EventBuilder to build() an event and use the send() method on the Tracker to send it to Google. Building an event is easy: you simply create a new HitBuilders.EventBuilder, setting a 'category' and an 'action' for your new event.

Sample code

The sample code below shows how I track the user's manual use of the 'START' button in Trip Computer. I have similar tracking events for STOP and also for the use of key settings and features like the activation of the app's unique 'battery-saver' mode (which I understand is quite popular with cyclists).

// Get an Analytics Event tracker.
Tracker myTracker = ((TripComputerApplication) getApplication())
        .getTracker(TripComputerApplication.TrackerName.APP_TRACKER);

// Build and send the Analytics Event.
myTracker.send(new HitBuilders.EventBuilder()
        .setCategory("Journey Events")
        .setAction("Pressed Start Button")
        .build());

Once the events are reported back to Google, the Analytics console will display them in the 'Behaviour > Events > Overview' panel and show a simple count of how many times each event was raised within the tracking period. You can also further subdivide the actions by setting a 'label' or by providing a 'value' (but neither of these is actually required).

More information

For more information see the following articles:

- https://support.google.com/analytics/answer/1033068?hl=en
- https://developers.google.com/analytics/devguides/collection/gajs/eventTrackerGuide#Anatomy

Reference: Event Tracking with Analytics API v4 for Android from our JCG partner Ben Wilcock at the Ben Wilcock blog.
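To illustrate the optional 'label' and 'value' mentioned above, the same event could also be reported with both fields set. This is just a sketch; the label text and the value below are placeholders rather than anything Trip Computer actually sends:

// Same event as before, with the optional label and value added.
myTracker.send(new HitBuilders.EventBuilder()
        .setCategory("Journey Events")
        .setAction("Pressed Start Button")
        .setLabel("Manual start")   // optional 'label'
        .setValue(1L)               // optional 'value' (a long)
        .build());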

Daemonizing JVM-based applications

Deployment architecture design is a vital part of any custom-built server-side application development project. Due to its significance, deployment architecture design should commence early and proceed in tandem with other development activities. The complexity of deployment architecture design depends on many aspects, including scalability and availability targets of the provided service, rollout processes, as well as technical properties of the system architecture. Serviceability and operational concerns, such as deployment security, monitoring, backup/restore etc., relate to the broader topic of deployment architecture design. These concerns are cross-cutting in nature and may need to be addressed on different levels, ranging from service rollout processes to the practical system management details. On the system management detail level, the following challenges often arise when using a pure JVM-based application deployment model (on Unix-like platforms):

- How to securely shut down the app server or application? Often a TCP listener thread listening for shutdown requests is used. If you have many instances of the same app server deployed on the same host, it's sometimes easy to confuse the instances and shut down the wrong one. Also, you'll have to prevent unauthorized access to the shutdown listener.
- Creating init scripts that integrate seamlessly with system startup and shutdown mechanisms (e.g. Sys-V init, systemd, Upstart etc.)
- How to automatically restart the application if it dies?
- Log file management. Application logs can be managed (e.g. rotated, compressed, deleted) by a log library. App server or platform logs can sometimes also be managed using a log library, but occasionally integration with OS level tools (e.g. logrotate) may be necessary.

There are a couple of solutions to these problems that enable tighter integration between the operating system and the application / application server. One widely used and generic solution is the Java Service Wrapper. The Java Service Wrapper is good at addressing the above challenges and is released under a proprietary license; a GPL v2 based community licensing option is available as well. Apache commons daemon is another option. It has its roots in Apache Tomcat and integrates well with the app server, but it's much more generic than that, and in addition to Java, commons daemon can be used with other JVM-based languages such as Scala. As the name implies, commons daemon is Apache licensed. Commons daemon includes the following features:

- automatically restart the JVM if it dies
- enable secure shutdown of the JVM process using standard OS mechanisms (Tomcat's TCP based shutdown mechanism is error-prone and insecure)
- redirect STDERR/STDOUT and set the JVM process name
- allow integration with OS init script mechanisms (record the JVM process pid)
- detach the JVM process from the parent process and console
- run the JVM and application with reduced OS privileges
- allow coordinating log file management with OS tools such as logrotate (reopen log files with the SIGUSR1 signal)

Deploying commons daemon

From an application developer point of view, commons daemon consists of two parts: the jsvc binary used for starting applications and the commons daemon Java API. During startup, the jsvc binary bootstraps the application through lifecycle methods implemented by the application and defined by the commons daemon Java API. Jsvc creates a control process for monitoring and restarting the application upon abnormal termination.
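The lifecycle methods in question come from the org.apache.commons.daemon.Daemon interface. A minimal bootstrap class might look like the sketch below; MyServer is just a placeholder for whatever object your application actually starts and stops:

import org.apache.commons.daemon.Daemon;
import org.apache.commons.daemon.DaemonContext;

public class MyServiceDaemon implements Daemon {

    // Placeholder for the application's real server/service object.
    private MyServer server;

    @Override
    public void init(DaemonContext context) throws Exception {
        // Called once by jsvc, typically before privileges are dropped:
        // parse the arguments and create, but don't yet start, the server.
        server = new MyServer(context.getArguments());
    }

    @Override
    public void start() throws Exception {
        server.start();
    }

    @Override
    public void stop() throws Exception {
        server.stop();
    }

    @Override
    public void destroy() {
        // Last chance to free resources before the JVM exits.
        server = null;
    }
}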
Here's an outline for deploying commons daemon with your application:

- Implement the commons daemon API lifecycle methods in an application bootstrap class (see "Using jsvc directly").
- Compile and install jsvc. (Note that it's usually not good practice to install a compiler toolchain on production or QA servers.)
- Place the commons-daemon API in the application classpath.
- Figure out the command line arguments for running your app through jsvc. Check out bin/daemon.sh in the Tomcat distribution for reference.
- Create a proper init script based on the previous step. Tomcat can be installed via the package manager on many Linux distributions, and the package typically comes with an init script that can be used as a reference.

Practical experiences

The Tomcat distribution includes "daemon.sh", a generic wrapper shell script that can be used as a basis for creating a system-specific init script variant. One of the issues I encountered was that the wait configuration parameter default value couldn't be overridden by the invoker of the wrapper script. In some cases Tomcat random number generator initialization could exceed the maximum wait time, resulting in the init script reporting a failure, even if the app server would eventually get started. This seems to be fixed now. Another issue was that the wrapper script doesn't allow passing JVM parameters with spaces in them. This can be handy e.g. in conjunction with the JVM "-XX:OnOutOfMemoryError" & co. parameters. Using the wrapper script is optional, and it can also be changed easily, but since it includes some pieces of useful functionality, I'd rather reuse than duplicate it, so I created a feature request and proposed a tiny patch for this (#55104).

While figuring out the correct command line arguments for getting jsvc to bootstrap your application, the "-debug" argument can be quite useful for troubleshooting purposes. Also, by default jsvc changes the working directory to /, in which case absolute paths should typically be used with other options. The "-cwd" option can be used for overriding the default working directory value.

Daemonizing Jetty

In addition to Tomcat, Jetty is another servlet container I often use. Using commons daemon with Tomcat poses no challenge since the integration already exists, so I decided to see how things would work with an app server that doesn't support commons daemon out-of-the-box. To implement the necessary changes in Jetty, I cloned the Jetty source code repository, added jsvc lifecycle methods in the Jetty bootstrap class and built Jetty. After that, I started experimenting with jsvc command line arguments for bootstrapping Jetty. Jetty comes with a jetty.sh startup script that has an option called "check" for outputting various pieces of information related to the installation. Among other things it outputs the command line arguments that would be used with the JVM. This provided quite a good starting point for the jsvc command line. These are the command lines I ended up with:

export JH=$HOME/jetty-9.2.2-SNAPSHOT
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
jsvc -debug -pidfile $JH/jetty.pid -outfile $JH/std.out -errfile $JH/std.err -Djetty.logs=$JH/logs -Djetty.home=$JH -Djetty.base=$JH -Djava.io.tmpdir=/var/folders/g6/zmr61rsj11q5zjmgf96rhvy0sm047k/T/ -classpath $JH/commons-daemon-1.0.15.jar:$JH/start.jar org.eclipse.jetty.start.Main jetty.state=$JH/jetty.state jetty-logging.xml jetty-started.xml

This could be used as a starting point for a proper production grade init script for starting and shutting down Jetty.
I submitted my code changes as issue #439672 in the Jetty project issue tracker and just received word that the change has been merged with the upstream code base, so you should be able to daemonize Jetty with Apache commons daemon jsvc out-of-the-box in the future.

Reference: Daemonizing JVM-based applications from our JCG partner Marko Asplund at the practicing techie blog.

Grails Goodness: Using Converter Named Configurations with Default Renderers

Sometimes we want to support different levels of detail in the output of our RESTful API for the same resource: for example, a default output with the basic fields and a more detailed output with all fields for a resource. The client of our API can then choose whether the default or the detailed output is needed. One of the ways to implement this in Grails is using converter named configurations.

Grails converters, like JSON and XML, support named configurations. First we need to register a named configuration with the converter. Then we can invoke the use method of the converter with the name of the configuration and a closure with statements to generate output. The code in the closure is executed in the context of the named configuration. The default renderers in Grails, for example DefaultJsonRenderer, have a property namedConfiguration. If the property is set, the renderer will render the output in the context of the configured named configuration. Let's configure the appropriate renderers and register named configurations to support the named configuration in the default renderers.

In our example we have a User resource with some properties. We want to support short and detailed output, where different properties are included in the resulting format. First we have our resource:

// File: grails-app/domain/com/mrhaki/grails/User.groovy
package com.mrhaki.grails

import grails.rest.*

// Set URI for resource to /users.
// The first format in formats is the default
// format. So we could use short if we wanted
// the short compact format as default for this
// resource.
@Resource(uri = '/users', formats = ['json', 'short', 'details'])
class User {

    String username
    String lastname
    String firstname
    String email

    static constraints = {
        email email: true
        lastname nullable: true
        firstname nullable: true
    }

}

Next we register two named configurations for this resource in our Bootstrap:

// File: grails-app/conf/Bootstrap.groovy
class Bootstrap {

    def init = { servletContext ->
        ...
        JSON.createNamedConfig('details') {
            it.registerObjectMarshaller(User) { User user ->
                final String fullname = [user.firstname, user.lastname].join(' ')
                final userMap = [
                    id      : user.id,
                    username: user.username,
                    email   : user.email,
                ]
                if (fullname) {
                    userMap.fullname = fullname
                }
                userMap
            }
            // Add a marshaller for other resources within the
            // named configuration.
        }

        JSON.createNamedConfig('short') {
            it.registerObjectMarshaller(User) { User user ->
                final userMap = [
                    id      : user.id,
                    username: user.username
                ]
                userMap
            }
            // Add a marshaller for other resources within the
            // named configuration.
        }
        ...
    }
    ...
}

Now we must register custom renderers for the User resource as Spring components in resources:

// File: grails-app/conf/spring/resources.groovy
import com.mrhaki.grails.User
import grails.rest.render.json.JsonRenderer
import org.codehaus.groovy.grails.web.mime.MimeType

beans = {
    // Register JSON renderer for User resource with detailed output.
    userDetailsRenderer(JsonRenderer, User) {
        // Grails will compare the name of the MimeType
        // to determine which renderer to use. So we
        // use our own custom name here.
        // The second argument, 'details', specifies the
        // supported extension. We can now use
        // the request parameter format=details to use
        // this renderer for the User resource.
        mimeTypes = [new MimeType('application/vnd.com.mrhaki.grails.details+json', 'details')]

        // Here we specify the named configuration
        // that must be used by an instance
        // of this renderer. See Bootstrap.groovy
        // for the available registered named configurations.
        namedConfiguration = 'details'
    }

    // Register second JSON renderer for User resource with compact output.
    userShortRenderer(JsonRenderer, User) {
        mimeTypes = [new MimeType('application/vnd.com.mrhaki.grails.short+json', 'short')]

        // Named configuration is different for short
        // renderer compared to details renderer.
        namedConfiguration = 'short'
    }

    // Default JSON renderer as fallback.
    userRenderer(JsonRenderer, User) {
        mimeTypes = [new MimeType('application/json', 'json')]
    }
}

We have defined some new mime types in grails-app/conf/spring/resources.groovy, but we must also add them to our grails-app/conf/Config.groovy file:

// File: grails-app/conf/Config.groovy
...
grails.mime.types = [
    ...
    short  : ['application/vnd.com.mrhaki.grails.short+json', 'application/json'],
    details: ['application/vnd.com.mrhaki.grails.details+json', 'application/json'],
]
...

Our application is now ready and configured. We mostly rely on Grails content negotiation to get the correct renderer for generating our output when we request a resource. Grails content negotiation can use the value of the request parameter format to find the correct mime type and then the correct renderer. Grails can also check the Accept request header or the URI extension, but for our RESTful API we want to use the format request parameter. If we invoke our resource with different formats we see the following results:

$ curl -X GET http://localhost:8080/rest-sample/users/1
{
    "class": "com.mrhaki.grails.User",
    "id": 1,
    "email": "hubert@mrhaki.com",
    "firstname": "Hubert",
    "lastname": "Klein Ikkink",
    "username": "mrhaki"
}

$ curl -X GET http://localhost:8080/rest-sample/users/1?format=short
{
    "id": 1,
    "username": "mrhaki"
}

$ curl -X GET http://localhost:8080/rest-sample/users/1?format=details
{
    "id": 1,
    "username": "mrhaki",
    "email": "hubert@mrhaki.com",
    "fullname": "Hubert Klein Ikkink"
}

Code written with Grails 2.4.2.

Reference: Grails Goodness: Using Converter Named Configurations with Default Renderers from our JCG partner Hubert Ikkink at the JDriven blog.

Examining Red Hat JBoss BRMS deployment architectures for rules and events (part II)

(Article guest authored together with John Hurlocker, Senior Middleware Consultant at Red Hat in North America)

In this week's tips & tricks we will be slowing down and taking a closer look at possible Red Hat JBoss BRMS deployment architectures. When we talk about deployment architectures we are referring to the options you have to deploy a rules and/or events project in your enterprise. This is the actual runtime architecture that you need to plan for at the start of your design phases, determining for your enterprise and infrastructure what the best way would be to deploy your upcoming application. It will also most likely have an effect on how you design the actual application that you want to build, so being aware of your options should help make your projects a success. This will be a multi-part series that will introduce the deployment architectures in phases. You can catch up on last week's article before continuing with this week's look at the next two architectures.

The possibilities

A rule administrator or architect works with the application team(s) to design the runtime architecture for rules, and depending on the organization's needs the architecture could be any one of the following architectures or a hybrid of the designs below. In this series we will present four different deployment architectures and discuss one design time architecture, providing the pros and cons for each one to allow for evaluation against your own needs. The basic components in these architectures, shown in the accompanying illustrations, are:

- JBoss BRMS server
- Rules developer / business analyst
- Version control (GIT)
- Deployment servers (JBoss EAP)
- Clients using your application

Rules execution server

What you are doing in this architectural scenario is deploying JBoss BRMS as an application in its own environment. You can then expose it as a service (e.g. JMS, SOAP, etc.) so that any application in your enterprise architecture can remotely execute rules and events.

Illustration 1: Rules execution server

This deployment architecture completely externalizes the entire JBoss BRMS rules and events component from your application development process, as shown in illustration 1. It then only requires an application to make an external call for rules or event decisions.

Pros
- Completely decoupled architecture
- Common implementation to set up and execute rules
- Upgrades to BRMS versions become easier with a single point of focus in your enterprise

Cons
- Possible performance implications due to the externalized component relative to your applications
- The execution server could be used by multiple applications, so a team will need to take ownership of this application and maintain it

Hybrid of the rules execution server

As a final example we present a hybrid architecture that leverages the previous basic rules execution server architecture and adds in the previously discussed (in part I) KieScanner component.

Illustration 2: Hybrid architecture

With this architecture you have the ability to develop applications that just leverage a remote call to execute rules and events decisions, but add in the mix of being able to update rules and events packages without changing the execution server service structure. As a refresher, remember that the JBoss BRMS API contains a KieScanner that monitors the rules repository for new rule package versions. Once a new version is available it will be picked up by the KieScanner and loaded into your application.
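For reference, wiring a KieScanner into a client application looks roughly like the sketch below; the Maven coordinates and the polling interval are placeholders, not values taken from the article:

import org.kie.api.KieServices;
import org.kie.api.builder.KieScanner;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

public class RulesClient {

    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();

        // Placeholder coordinates for the rules kjar deployed to the repository.
        ReleaseId releaseId = ks.newReleaseId("com.example", "my-rules", "LATEST");
        KieContainer kieContainer = ks.newKieContainer(releaseId);

        // Poll the repository every 10 seconds for a newer version of the package.
        KieScanner kieScanner = ks.newKieScanner(kieContainer);
        kieScanner.start(10000L);

        // Sessions created from the container after an update use the new rules.
        KieSession kieSession = kieContainer.newKieSession();
        // ... insert facts and call kieSession.fireAllRules() as usual ...
    }
}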
The Cool Store demo project provides an example that demonstrates the usage of the JBoss BRMS KieScanner, with an example implementation showing how to scan your rule repository for the last freshly built package. Illustration 2 shows how the rules execution server now hosts a KieScanner-based component to monitor the rules and events packages for updates, which would then automatically be picked up for the next application call.

Pros
- Completely decoupled architecture
- Common implementation to set up and execute rules
- Upgrades to BRMS versions become easier with a single point of focus in your enterprise
- Less maintenance for the execution server component

Cons
- Possible performance implications due to the externalized component relative to your applications

Next up

Next time we will take a look at the design time architecture and the options for you to deploy your rules and events into your architecture.

Reference: Examining Red Hat JBoss BRMS deployment architectures for rules and events (part II) from our JCG partner Eric Schabell at the Eric Schabell blog.

Introduction to writing custom collectors in Java 8

Java 8 introduced the concept of collectors. Most of the time we barely use factory methods from the Collectors class, e.g. collect(toList()), toSet() or maybe something more fancy like counting() or groupingBy(). Not many of us actually bother to look at how collectors are defined and implemented. Let's start by analysing what Collector<T, A, R> really is and how it works.

Collector<T, A, R> works as a "sink" for streams - the stream pushes items (one after another) into the collector, which should produce some "collected" value in the end. Most of the time that means building a collection (like toList()) by accumulating elements, or reducing the stream into something smaller (e.g. the counting() collector that barely counts elements). Every collector accepts items of type T and produces an aggregated (accumulated) value of type R (e.g. R = List<T>). The generic type A simply defines the type of the intermediate mutable data structure that we are going to use to accumulate items of type T in the meantime. The type A can, but doesn't have to, be the same as R - in simple words, the mutable data structure that we use to collect items from the input Stream<T> can be different than the actual output collection/value. That being said, every collector must implement the following methods:

interface Collector<T, A, R> {
    Supplier<A> supplier();
    BiConsumer<A, T> accumulator();
    BinaryOperator<A> combiner();
    Function<A, R> finisher();
    Set<Characteristics> characteristics();
}

- supplier() returns a function that creates an instance of the accumulator - the mutable data structure that we will use to accumulate input elements of type T.
- accumulator() returns a function that will take an accumulator and one item of type T, mutating the accumulator.
- combiner() is used to join two accumulators together into one. It is used when the collector is executed in parallel, splitting the input Stream<T> and collecting the parts independently first.
- finisher() takes an accumulator A and turns it into a result value, e.g. a collection, of type R.

All of this sounds quite abstract, so let's do a simple example. Obviously Java 8 doesn't provide a built-in collector for ImmutableSet<T> from Guava. However, creating one is very simple. Remember that in order to iteratively build an ImmutableSet we use ImmutableSet.Builder<T> - this is going to be our accumulator.

import com.google.common.collect.ImmutableSet;

public class ImmutableSetCollector<T> implements Collector<T, ImmutableSet.Builder<T>, ImmutableSet<T>> {

    @Override
    public Supplier<ImmutableSet.Builder<T>> supplier() {
        return ImmutableSet::builder;
    }

    @Override
    public BiConsumer<ImmutableSet.Builder<T>, T> accumulator() {
        return (builder, t) -> builder.add(t);
    }

    @Override
    public BinaryOperator<ImmutableSet.Builder<T>> combiner() {
        return (left, right) -> {
            left.addAll(right.build());
            return left;
        };
    }

    @Override
    public Function<ImmutableSet.Builder<T>, ImmutableSet<T>> finisher() {
        return ImmutableSet.Builder::build;
    }

    @Override
    public Set<Characteristics> characteristics() {
        return EnumSet.of(Characteristics.UNORDERED);
    }
}

First of all, look carefully at the generic types. Our ImmutableSetCollector takes input elements of type T, so it works for any Stream<T>. In the end it will produce ImmutableSet<T> - as expected. ImmutableSet.Builder<T> is going to be our intermediate data structure.

- supplier() returns a function that creates a new ImmutableSet.Builder<T>. If you are not that familiar with lambdas in Java 8, ImmutableSet::builder is a shorthand for () -> ImmutableSet.builder().
- accumulator() returns a function that takes a builder and one element of type T.
It simply adds said element to the builder.
- combiner() returns a function that will accept two builders and turn them into one by adding all elements from one of them into the other - and returning the latter.
- finisher() returns a function that will turn an ImmutableSet.Builder<T> into an ImmutableSet<T>. Again, this is a shorthand syntax for: builder -> builder.build().
- Last but not least, characteristics() informs the JDK what capabilities our collector has. For example, if ImmutableSet.Builder<T> were thread-safe (it isn't), we could say Characteristics.CONCURRENT as well.

We can now use our custom collector everywhere using collect():

final ImmutableSet<Integer> set = Arrays
        .asList(1, 2, 3, 4)
        .stream()
        .collect(new ImmutableSetCollector<>());

However, creating a new instance is slightly verbose, so I suggest creating a static factory method, similar to what the JDK does:

public class ImmutableSetCollector<T> implements Collector<T, ImmutableSet.Builder<T>, ImmutableSet<T>> {

    //...

    public static <T> Collector<T, ?, ImmutableSet<T>> toImmutableSet() {
        return new ImmutableSetCollector<>();
    }
}

From now on we can take full advantage of our custom collector by simply typing: collect(toImmutableSet()). In the second part we will learn how to write more complex and useful collectors.

Update

@akarazniewicz pointed out that collectors are just a verbose implementation of folding. With my love and hate relationship with folds, I have to comment on that. Collectors in Java 8 are basically an object-oriented encapsulation of the most complex type of fold found in Scala, namely GenTraversableOnce.aggregate[B](z: ⇒ B)(seqop: (B, A) ⇒ B, combop: (B, B) ⇒ B): B. aggregate() is like fold(), but requires an extra combop to combine two accumulators (of type B) into one. Comparing this to collectors, the parameter z comes from the supplier(), the seqop reduction operation is the accumulator() and combop is the combiner(). In pseudo-code we can write:

finisher(
    seq.aggregate(collector.supplier())
        (collector.accumulator(), collector.combiner()))

GenTraversableOnce.aggregate() is used when concurrent reduction is possible - just like with collectors.

Reference: Introduction to writing custom collectors in Java 8 from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.
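On a related note (not covered in the original post, just a sketch), the JDK also ships a Collector.of(...) factory method that accepts the same four functions plus the characteristics, so the Guava collector above can be expressed without a dedicated class:

import java.util.stream.Collector;
import com.google.common.collect.ImmutableSet;

public class ImmutableSetCollectors {

    // Equivalent collector built with the JDK factory method instead of a
    // hand-written Collector implementation.
    public static <T> Collector<T, ?, ImmutableSet<T>> toImmutableSet() {
        return Collector.of(
                ImmutableSet.Builder<T>::new,                 // supplier
                (builder, element) -> builder.add(element),   // accumulator
                (left, right) -> left.addAll(right.build()),  // combiner
                ImmutableSet.Builder<T>::build,               // finisher
                Collector.Characteristics.UNORDERED);
    }
}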

Agile Myth #2: “Agile is About Project Management”

This is my third post in a multi-part series entitled "Agile Myths and Misconceptions". It's based on the talk I gave at the first PSIA Softech Philippine Software Engineering Conference. I am striving to correct 12 common misconceptions about Agile Software Development.

Agile has equal or more emphasis on engineering than on project management. Just check out the authors of the Agile Manifesto - a large number of them are thought leaders in nuts-and-bolts software engineering: Kent Beck is the creator of JUnit and Test-Driven Development, Martin Fowler has written several books on design patterns and refactoring, Robert Martin has written several books on object-oriented design and code quality... If you check out Extreme Programming, which was the first Agile process to gain popularity and precedes Agile itself, it leans more towards engineering practices, such as TDD, Continuous Integration, Coding Standards and Pair Programming. If you attended an early Agile training course, it was actually a programming workshop where you paired up with another programmer around a workstation and wrote code!

Most of the Agile authors were aware that even if you do the project management side well, if you ignore the engineering side, a project will eventually be in a quagmire. Consider what Martin Fowler said in his blog:

There's a mess I've heard about with quite a few projects recently. It works out like this:

- They want to use an agile process, and pick Scrum
- They adopt the Scrum practices, and maybe even the principles
- After a while progress is slow because the code base is a mess

What's happened is that they haven't paid enough attention to the internal quality of their software. If you make that mistake you'll soon find your productivity dragged down because it's much harder to add new features than you'd like. You've taken on a crippling Technical Debt and your scrum has gone weak at the knees.

Here's an illustration of what Martin Fowler is saying: initially, a team's velocity increases with each iteration, but as the project progresses, the team's velocity starts to steadily degrade, even if people are working just as hard or harder. Here's a breakdown of what's happening to the team. Let's pretend that a team's capacity is like a pipe. Requirements go in one end, some development happens inside, and then features come out the other end. Just like any pipe, the team's capacity is finite. Initially, all of the team's capacity is used to turn requirements into features, but later on, naturally, some bugs are found, so part of the team's capacity is now taken up by bug fixing. That situation is still fine, but if the team neglects the engineering practices, you end up with Technical Debt. In a nutshell, Technical Debt is difficulty in working with code where the design and code quality are bad. The code is very difficult to understand, it's hard to find bugs, a lot of things need to be modified for every change, and the code is very fragile. It's a lot like real debt: when you don't pay your credit card bill, the interest payments eat up your capacity to pay for day-to-day things like food and rent. With technical debt, if the team doesn't take the effort to clean up their code, write automated tests, integrate often, apply proper design approaches, measure code quality and follow other engineering practices, eventually the code becomes harder and harder to work with, and just wrestling with the code takes up more and more of the team's capacity, just like interest from debt takes up one's capacity to spend.
What's more, since the code is more fragile, the project has more bugs, and so even more of the team's capacity is used up in bug fixing. Just like real debt, if technical debt is not addressed, it just grows: the more code you add to the system, the harder and harder it gets to work with. Eventually, the majority of the team's capacity is taken up by just technical debt and bug fixing. Very little actual creation of new features comes out of the development team, if any at all. I'm not exaggerating in any way. One of the clients I consulted for had development teams in three countries, with none of them producing any new features for months! The teams were bogged down with such a bad codebase that they couldn't change any code without creating a large number of bugs.

Again, the way you avoid this situation is through proper engineering practices. If you're just starting out with engineering practices, I think a good place to start would be coding practices. If you're a Java programmer, a good book for this is Robert Martin's "Clean Code". After that, you might want to move on to learning Test-Driven Development. My recommended TDD book for Java developers is "Test-Driven" by Lasse Koskela. After that, read up on proper practices for version control and Continuous Integration. For those of us working on legacy code, which is the majority I would think, a good reference is "Working Effectively with Legacy Code" by Michael Feathers. For those further along, you might want to start exploring approaches to design. Domain-model design is a good place to start. Some recommended references are "Applying UML and Patterns" by Craig Larman, "The Data Model Resource Book" by Len Silverston, "Domain-Driven Design" by Eric Evans, and "Implementing Domain-Driven Design" by Vaughn Vernon. For the other areas of design, you can check out the various design patterns books around, such as Head First Design Patterns, Patterns of Enterprise Application Architecture, Enterprise Integration Patterns and Service Design Patterns. The next myth I will tackle is Agile Myth #3: "Agile is Short Milestones".

Reference: Agile Myth #2: "Agile is About Project Management" from our JCG partner Calen Legaspi at the Calen Legaspi blog.

Complexity is the Excuse

When I speak to people about how it is possible to continuously deliver customer value with near zero issues, I usually get laughed at. After I tell them that there is nothing to laugh at, people start challenging me on how I integrate with other systems, how I manage defects, how I deal with changing requirements, how I manage regression issues, and how I cope with context switching and other similar issues. Luckily enough I have empirical data, and my answers are based on experience and not some book or model I read about somewhere, so after a while I manage to get people moving from laughing at me to at least starting to be curious. It has to be said, people become curious but deep inside they still don't believe me and still think I am a big fool. How could I blame them? I would have done the same a few years back. I know that people need to prove it for themselves to be able to believe it; I have no problem with that. While I manage to explain my way of dealing with defects, changing requirements, regression, and context switching, until now I haven't been able to answer the biggest question of them all, the one that every conversation ends up with eventually: how do you deal with extremely complex systems that need to scale up?

I have been thinking about this for a while now, and the more I think about it the more I become convinced that Complexity is the Excuse. Complexity exists when we are not able to prioritise the value to deliver (we don't know what we really want). Complexity exists when we are not able to understand and describe the system we are building. And finally, Complexity is a nice excuse for not doing our job properly as software engineers and having something to blame.

Reduce Complexity and stop taking excuses:

1. You want to deliver customer value, OK. You don't need to deliver everything on day 1. Sit down with your business partners and identify the highest value added feature and focus on that one. If asked for bells and whistles, say "we will do it later and only if we really need to"; chances are that when you are finished doing the first feature you will have learned that you need something different anyway.
2. When deciding how to implement such a feature, look for the simplest and cheapest solution, do NOT future proof. By future proofing you will be adding complexity, and guess what? YAGNI. Measure the success of your feature and feel free to change direction; failure is learning.
3. Once you have identified the value to be delivered, make sure you break down its own complexity. If a user story or unit of work or whatever you call it has more than one happy path, then it is too complex; break it down into 2 or more units of work. If you start working on something and you discover it is more complex than you had thought, then stop and break it down into less complex units. If you keep on going saying nothing, you will hide complexity and sooner or later you are bound to mess it up.
4. Scaling up is the wrong answer to the false complexity question. Chances are you don't need to scale up at all: read 1 and 2 again and again, and you will find out that you don't need as many resources as you thought you would. Scaling up, most of the time, is the easiest, most expensive and laziest approach to fighting complexity.

For doing 1 and 2 in a structured manner I strongly recommend an approach called Impact Mapping, devised by Gojko Adzic; it works. For doing 3, click here. For doing 4, use your head.
TL;DR: stop blaming complexity when you don't understand what you are building.

Reference: Complexity is the Excuse from our JCG partner Augusto Evangelisti at the mysoftwarequality blog.