Property-based testing with Spock

Property-based testing is an alternative approach to testing, complementing example-based testing. The latter is what we’ve been doing all our lives: exercising production code against “examples” – inputs we think are representative. Picking these examples is an art of its own: “ordinary” inputs, edge cases, malformed inputs, etc. But why are we limiting ourselves to just a few examples? Why not test hundreds, millions… ALL inputs? There are at least two difficulties with that approach:

1. Scale. A pure function taking just one int input would require 4 billion tests. This means a few hundred gigabytes of test source code and several months of execution time. Square it if a function takes two ints. For String it practically goes to infinity.
2. Assume we have these tests, executed on a quantum computer or something. How do you know the expected result for each particular input? You either enter it by hand (good luck) or generate the expected output. By generate I mean write a program that produces the expected value for every input. But aren’t we testing such a program already in the first place? Are we supposed to write a better, error-free version of the code under test just to test it? This is also known as the ugly mirror antipattern.

So you understand that testing every single input, although ideal, is just a mental experiment, impossible to implement. That being said, property-based testing tries to get as close as possible to this testing nirvana. Issue #1 is solved by slamming the code under test with hundreds or thousands of random inputs. Not all of them, not even a fraction. But a good, random representation. Issue #2 is surprisingly harder. Property-based testing can generate random arguments, but it can’t figure out what the expected outcome for a random input should be. Thus we need a different mechanism, which gives the name to the whole philosophy. We have to come up with properties (invariants, behaviours) that the code under test exhibits no matter what the input is.
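Hand-rolled, such a property check over random inputs can be as simple as this plain-Java sketch (Base64 stands in here as an arbitrary symmetric encoding; no testing framework involved):

```java
import java.util.Arrays;
import java.util.Base64;
import java.util.Random;

public class RoundTripProperty {
    public static void main(String[] args) {
        // Property: decoding what we encoded must yield the original bytes back,
        // no matter what the input is. Check it against 1000 random inputs.
        Random random = new Random();
        for (int i = 0; i < 1000; i++) {
            byte[] input = new byte[random.nextInt(64)];
            random.nextBytes(input);
            byte[] roundTripped = Base64.getDecoder().decode(Base64.getEncoder().encode(input));
            if (!Arrays.equals(input, roundTripped)) {
                throw new AssertionError("Round-trip property violated for input #" + i);
            }
        }
        System.out.println("Property held for 1000 random inputs");
    }
}
```

The dedicated libraries discussed below do essentially this, but with better generators, shrinking of failing inputs and far less boilerplate.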
This sounds very theoretical, but there are many such properties in various scenarios:

- The absolute value of any number should never be negative
- Encoding and decoding any String should yield the same String back, for every symmetric encoding
- An optimized version of some old algorithm should produce the same result as the old one for any input
- The total money in a bank should remain the same after an arbitrary number of intra-bank transactions in any order

As you can see there are many properties we can think of that do not mention specific example inputs. This is not exhaustive, strict testing. It’s more like sampling and making sure the samples are “sane”. There are many, many libraries supporting property-based testing for virtually every language. In this article we will explore Spock; ScalaCheck will come later.

Spock + custom data generators

Spock does not support property-based testing out of the box. However, with help from data driven testing and 3rd-party data generators we can go quite far. Data tables in Spock can be generalized into so-called data pipes:

def 'absolute value of #value should not be negative'() {
    expect:
        value.abs() >= 0
    where:
        value << randomInts(100)
}

private static List<Integer> randomInts(int count) {
    final Random random = new Random()
    (1..count).collect { random.nextInt() }
}

The code above will generate 100 random integers and make sure .abs() is non-negative for all of them. You might think this test is quite dumb, but to a great surprise it actually discovers one bug! But first let’s kill some boilerplate code. Generating random inputs, especially more complex ones, is cumbersome and boring. I found two libraries that can help us. spock-genesis:

import spock.genesis.Gen

def 'absolute value of #value should not be negative'() {
    expect:
        value.abs() >= 0
    where:
        value << Gen.int.take(100)
}

Looks great, but if you want to generate e.g.
lists of random integers, net.java.quickcheck has a nicer API and is not Groovy-specific:

import static net.java.quickcheck.generator.CombinedGeneratorsIterables.someLists
import static net.java.quickcheck.generator.PrimitiveGenerators.integers

def 'sum of non-negative numbers from #list should not be negative'() {
    expect:
        list.findAll { it >= 0 }.sum() >= 0
    where:
        list << someLists(integers(), 100)
}

This test is interesting. It makes sure the sum of non-negative numbers is never negative – by generating 100 lists of random ints. Sounds reasonable. However, multiple tests are failing. First of all, due to integer overflow, sometimes two positive ints add up to a negative one. Duh! Another type of failure that was discovered is actually frightening. While [1,2,3].sum() is 6, obviously, [].sum() is… null (WAT?) As you can see, even the silliest and most basic property-based tests can be useful in finding unusual corner cases in your data. But wait, I said testing the absolute value of an int discovered one bug. Actually it didn’t, because of poor (too “random”) data generators, not returning known edge values in the first place. We will fix that in the next article.

Reference: Property-based testing with Spock from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog.

Maven Common Problems and Pitfalls

Love it or hate it (and a lot of people seem to hate it), Maven is a tool widely used by 64% of Java developers (source – Java Tools and Technologies Landscape for 2014). Most experienced developers have already had their share of Maven headaches, usually the hard way, banging their head against a brick wall. Unfortunately, I feel that new developers are going through the same hard learning process. Looking at the main Java conferences around the world, you cannot find any Maven-related sessions that guide you through the fundamentals. Maybe the community assumes that you should already know them, like the Java language itself. Still, recycling this knowledge could be a win-win situation for everyone. How much time do you or your teammates waste by not knowing how to deal with Maven particularities? If you are reading this, I’m also going to assume that you grasp the Maven basics. If not, have a look at the following articles:

- Maven in 5 Minutes
- Introduction to the Build Lifecycle

There are a lot of other articles. I see no value in adding my own, repeating the same stuff, but if I feel the need I may write one. Let me know if you support it! Anyway, I think I can add some value by pointing out the main issues that teams come across when using Maven, explaining them and how to fix them.

Why is this jar in my build?

Due to Maven’s transitive dependencies mechanism, the graph of included libraries can quickly grow quite large. If you see something in your classpath and you didn’t put it there, it is most likely there because of a transitive dependency. You might need it, or maybe not. Maybe the part of the library’s code that you’re using does not require all those extra jars. It feels like a gamble here, but you can get a rough idea if you use mvn dependency:analyze. This command will tell you which dependencies are actually in use by your project. I mostly do trial and error here: exclude what I think I don’t need and run the code to see if everything is OK.
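For reference, an exclusion is declared on the dependency that drags the unwanted jar in; a sketch (the coordinates below are purely illustrative):

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>some-library</artifactId>
  <version>1.0</version>
  <exclusions>
    <!-- cut the transitive dependency we believe is unused -->
    <exclusion>
      <groupId>com.example</groupId>
      <artifactId>unwanted-transitive</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

After adding an exclusion, rebuild and run the tests to confirm nothing actually needed the excluded jar.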
Unfortunately, this command doesn’t go as far as telling you whether the transitive dependencies are really needed by the dependencies that you are using. Hey, if someone knows a better way, let me know!

I can’t see my changes!

This can happen for multiple reasons. Let’s look into the most common:

Dependencies are not built in the local repository

You may have Module A and Module B, where Module B has a dependency on Module A. The changes you made to Module A are not visible in Module B. This happens because Maven looks into its own local jar repository for artifacts to include in the classpath. If you make any changes, you need to place a copy of the new jar into the local repository. You do that by running mvn install in the changed project.

Dependency version is not correct

This can be as simple as changing the version of the dependency that you are using, or a real pain to figure out. When Maven performs the dependency lookup, it uses the rule Nearest Definition First. This means that the version used will be the one closest to your project in the tree of dependencies. Confused? So was I. Let’s try an example. You want to use dependency Dv1 in your project A, but you’re getting Dv2, and you have the following dependency tree:

A -> B -> C -> Dv1
A -> E -> Dv2

Which version of D is included, Dv1 or Dv2? In this case Dv2, because of the Nearest Definition First rule. If two dependency versions are at the same depth in the dependency tree, it’s the order of declaration that counts. To fix this problem you could explicitly add a dependency on Dv1 in A to force the use of Dv1, or just exclude Dv2. The command mvn dependency:tree will output a tree with all the dependencies and versions for the project. This is very helpful for debugging this kind of problem.

Remote repository has overwritten your changes

It’s usual for companies to have an internal Maven repository to cache artifacts, store releases or serve the latest changes of the project you are working on.
This works great most of the time, but when you’re working with SNAPSHOT versions, Maven always tries to pick up the latest changes to that dependency. Now, you are happily working on your changes to Project B, which Project A depends on. You build everything locally and proceed to integrate the changes into Project A. Then someone or something uploads a new SNAPSHOT version of Project B. Remember, your changes are not visible yet, since you have everything locally and did not commit to VCS yet. The next build you make of Project A is going to pick up Project B from the company repository, and not the one in your local repository.

The jar is not included in the distribution!

To add a little more confusion, let’s talk about scopes. Maven has four main scopes: compile, provided, runtime and test. Each dependency has a scope, and the scope defines a different classpath for your application. If you are missing something, and assuming that you have the dependency defined correctly, the problem is most likely in the scope. Use the compile scope (the default) to be on the safe side. The commands mvn dependency:analyze and mvn dependency:tree can also help you here.

The artifact was not found!

Ahh, the dreaded “Could not resolve dependencies … Could not find artifact”. This is like the Java NPE! There are many reasons why this happens, a few more evident than others, but a pain to debug anyway. I usually follow this checklist to try to fix the problem:

- Check that the dependency is defined correctly
- Check if you are pointing to the correct remote repositories that store the dependency
- Check if the remote repository actually holds the dependency!
- Check if you have the most recent pom.xml files
- Check if the jar is corrupted
- Check if the company repository is caching the internet repositories and didn’t issue a request to get the new libraries
- Check if the dependency definition is being overridden by something.
- Use mvn help:effective-pom to see the actual Maven settings used to build the project
- Don’t use -o (offline mode)

Conclusion

Maven is not a perfect tool, but if you learn a few tricks it will help you and save you time debugging build problems. There are other tools that fix a few of these problems, but I don’t have enough knowledge of them to voice an opinion. Anyway, a big chunk of projects use Maven as their build tool, and I believe that developers should know their build tool well to be able to perform better in their everyday work. Hopefully this post can be useful to you. Feel free to post any other problem not covered here. Unfortunately, Maven sometimes seems like a box full of surprises.

One last piece of advice: Never trust the IDE! If it works on the command line, then it’s an IDE problem!

Reference: Maven Common Problems and Pitfalls from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog blog.

Processing Java Annotations Using Reflection

In my previous article covering Java Annotations, I outlined a recent use case and provided you with some examples of custom annotations and how they might be used. In this article, I’m going to take that a step further and give you a few examples of custom annotations and how you would process these custom annotations using the Java Reflection API. Once you have gone through this tutorial, you should come away with a better understanding of the simplicity and flexibility that custom annotations can provide. So let’s dig into the code!

Custom Annotation Listings

I have created three different annotations for the example code today: the DoItLikeThis, DoItLikeThat and DoItWithAWhiffleBallBat annotations. Each annotation targets a different element type and has slightly different properties, so that I can show you how to look for and process them accordingly.

DoItLikeThis Annotation

The DoItLikeThis annotation is targeted at the ElementType TYPE, which makes it available only for Java types. This annotation has three optional elements: description, action, and a boolean element shouldDoItLikeThis. If you don’t provide any values for these elements when using this annotation, they default to the values specified.

package com.keyhole.jonny.blog.annotations;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Annotation created for doing it like this.
 */
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface DoItLikeThis {

    /**
     * @return - The description.
     */
    String description() default "";

    /**
     * @return - The action.
     */
    String action() default "";

    /**
     * @return - Should we be doing it like this.
     */
    boolean shouldDoItLikeThis() default false;
}

DoItLikeThat Annotation

The DoItLikeThat annotation is targeted at Java fields only.
This annotation also has a similar boolean element, named shouldDoItLikeThat, which doesn’t specify a default value and is therefore a required element when using the annotation. The annotation also contains an element defined as a String array, which will contain a list of user roles that should be checked.

package com.keyhole.jonny.blog.annotations;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Annotation created for doing it like that
 * instead of like this.
 */
@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
public @interface DoItLikeThat {

    /**
     * @return - Should we be doing it like that.
     */
    boolean shouldDoItLikeThat();

    /**
     * @return - List of user roles that can do it like that.
     */
    String[] roles() default {};
}

DoItWithAWhiffleBallBat Annotation

The DoItWithAWhiffleBallBat annotation is targeted for use with methods only and is similar to the other annotations. It also has a similar boolean element, this one named shouldDoItWithAWhiffleBallBat. There is also another element defined which makes use of a WhiffleBallBat enum that defines the different types of whiffle ball bats available for use, defaulting to the classic yellow plastic whiffle ball bat.

package com.keyhole.jonny.blog.annotations;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * When you can't do it like this or do it like that,
 * do it with a whiffle ball bat.
 */
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface DoItWithAWhiffleBallBat {

    /**
     * @return - Should we be doing it with a whiffle ball bat.
     */
    boolean shouldDoItWithAWhiffleBallBat() default false;

    /**
     * @return - Sweet, which type of whiffle ball bat?
     */
    WhiffleBallBat batType() default WhiffleBallBat.YELLOW_PLASTIC;
}

Annotated Classes

Now that we have our annotations defined for our example, we need a couple of classes to annotate. Each class provides example uses of the annotations, with elements specified as well as relying on the default values. There are also additional fields and methods included that are not annotated and therefore should not be processed by the annotation processor. Here is the source code for the two example classes:

AnnotatedOne Class

package com.keyhole.jonny.blog.annotations;

import java.util.Date;

@DoItLikeThis
public class AnnotatedOne implements AnnotatedClass {

    @DoItLikeThat(shouldDoItLikeThat = false)
    private String field1;

    @DoItLikeThat(shouldDoItLikeThat = true, roles = { "admin", "root" })
    private String field2;

    private String field3;
    private Date dateDoneLikeThis;

    /* setters and getters removed for brevity */

    @DoItWithAWhiffleBallBat(batType = WhiffleBallBat.BLACK_PLASTIC, shouldDoItWithAWhiffleBallBat = true)
    public void doWhateverItIs() {
        // method implementation
    }

    public void verifyIt() {
        // method implementation
    }
}

AnnotatedTwo Class

package com.keyhole.jonny.blog.annotations;

import java.util.Date;

@DoItLikeThis(action = "PROCESS", shouldDoItLikeThis = true, description = "Class used for annotation example.")
public class AnnotatedTwo implements AnnotatedClass {

    @DoItLikeThat(shouldDoItLikeThat = true)
    private String field1;

    @DoItLikeThat(shouldDoItLikeThat = true, roles = { "web", "client" })
    private String field2;

    private String field3;
    private Date dateDoneLikeThis;

    /* setters and getters removed for brevity */

    @DoItWithAWhiffleBallBat(shouldDoItWithAWhiffleBallBat = true)
    public void doWhateverItIs() {
        // method implementation
    }

    public void verifyIt() {
        // method implementation
    }
}

Processing Annotations

Processing annotations using reflection is actually quite simple.
For each of the element types that you can create and apply annotations to, there are methods on those elements for working with annotations. The first thing you will need to do is inspect the element to determine if there are any annotations, or check whether a particular annotation exists on the element. The element types Class, Field, and Method all implement the interface AnnotatedElement, which has the following methods defined:

- getAnnotations() – Returns all annotations present on this element, including any that are inherited.
- getDeclaredAnnotations() – Returns only the annotations directly present on this element.
- getAnnotation(Class<A> annotationClass) – Returns the element’s annotation for the specified annotation type; if not found, this returns null.
- isAnnotation() – Returns true if the element being inspected is itself an annotation.
- isAnnotationPresent(Class<? extends Annotation> annotationClass) – Returns true if the specified annotation exists on the element being checked.

When processing our annotations, the first thing we will want to do is check whether the annotation is present. To do this, we’ll wrap our annotation processing with the following check:

if (ac.getClass().isAnnotationPresent(DoItLikeThis.class)) {
    // process the annotation, "ac" being the instance of the object we are inspecting
}

Once we have found the annotation we are looking for, we will grab it and do whatever processing we want for that annotation. At this point, we’ll have access to the annotation’s elements and their values. Notice there are no getters or setters for accessing the elements of the annotation.

DoItLikeThis anno = ac.getClass().getAnnotation(DoItLikeThis.class);
System.out.println("Action: " + anno.action());
System.out.println("Description: " + anno.description());
System.out.println("DoItLikeThis:" + anno.shouldDoItLikeThis());

For fields and methods, checking for present annotations is slightly different. For these element types, we’ll need to loop through all of the fields or methods to determine if the annotation exists on the element. You will need to get all of the fields or methods from the Class, loop through the Field or Method array, and then determine if the annotation is present on each element. That should look something like this:

Field[] fields = ac.getClass().getDeclaredFields();
for (Field field : fields) {
    if (field.isAnnotationPresent(DoItLikeThat.class)) {
        DoItLikeThat fAnno = field.getAnnotation(DoItLikeThat.class);
        System.out.println("Field: " + field.getName());
        System.out.println("DoItLikeThat:" + fAnno.shouldDoItLikeThat());
        for (String role : fAnno.roles()) {
            System.out.println("Role: " + role);
        }
    }
}

Conclusion

As you can see, creating your own annotations and processing them is fairly simple. In the examples I have provided, we are simply outputting the values of the elements to the console or log. Hopefully you can see the potential uses and might actually consider creating your own annotations in the future. Some of the best uses I’ve seen for annotations are where they replace configuration code or common code that gets used often, such as validating the value of a field or mapping a business object to a web form.
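One reflection caveat worth keeping in mind for these loops (an observation about the reflection API, not something covered in the article above): Class.getMethods() returns only public methods, including inherited ones, while getDeclaredMethods() returns every method declared directly on the class, including private ones. An annotated private method would therefore be missed when iterating getMethods(). A small standalone sketch:

```java
import java.lang.reflect.Method;

public class MethodLookupDemo {

    public void visible() { }

    private void hidden() { }

    public static void main(String[] args) {
        // getDeclaredMethods(): everything declared on this class, any visibility
        // (here: visible, hidden and main)
        System.out.println("declared: " + MethodLookupDemo.class.getDeclaredMethods().length);

        // getMethods(): public methods only, plus those inherited from Object;
        // filtering by declaring class leaves just visible and main
        int declaredHere = 0;
        for (Method m : MethodLookupDemo.class.getMethods()) {
            if (m.getDeclaringClass() == MethodLookupDemo.class) {
                declaredHere++;
            }
        }
        System.out.println("public, declared here: " + declaredHere);
    }
}
```

Which lookup to use depends on whether your processor should honor private annotated members.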
And finally, here is the full source code, along with a simple Java main class to execute the code:

AnnotatedClassProcessor

package com.keyhole.jonny.blog.annotations;

import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class AnnotatedClassProcessor {

    public void processClass(AnnotatedClass ac) {
        System.out.println("------Class Processing Begin---------");
        System.out.println("Class: " + ac.getClass().getName());
        if (ac.getClass().isAnnotationPresent(DoItLikeThis.class)) {
            // process the annotation, "ac" being the instance of the object we are inspecting
            DoItLikeThis anno = ac.getClass().getAnnotation(DoItLikeThis.class);
            System.out.println("Action: " + anno.action());
            System.out.println("Description: " + anno.description());
            System.out.println("DoItLikeThis:" + anno.shouldDoItLikeThis());

            System.out.println("------Field Processing---------");
            Field[] fields = ac.getClass().getDeclaredFields();
            for (Field field : fields) {
                if (field.isAnnotationPresent(DoItLikeThat.class)) {
                    DoItLikeThat fAnno = field.getAnnotation(DoItLikeThat.class);
                    System.out.println("Field: " + field.getName());
                    System.out.println("DoItLikeThat:" + fAnno.shouldDoItLikeThat());
                    for (String role : fAnno.roles()) {
                        System.out.println("Role: " + role);
                    }
                }
            }

            System.out.println("------Method Processing---------");
            Method[] methods = ac.getClass().getMethods();
            for (Method method : methods) {
                if (method.isAnnotationPresent(DoItWithAWhiffleBallBat.class)) {
                    DoItWithAWhiffleBallBat mAnno = method.getAnnotation(DoItWithAWhiffleBallBat.class);
                    System.out.println("Use WhiffleBallBat? " + mAnno.shouldDoItWithAWhiffleBallBat());
                    System.out.println("Which WhiffleBallBat? " + mAnno.batType());
                }
            }
        }
        System.out.println("------Class Processing End---------");
    }
}

RunProcessor

package com.keyhole.jonny.blog.annotations;

public class RunProcessor {

    /**
     * @param args
     */
    public static void main(String[] args) {
        AnnotatedClassProcessor processor = new AnnotatedClassProcessor();
        processor.processClass(new AnnotatedOne());
        processor.processClass(new AnnotatedTwo());
    }
}

Reference: Processing Java Annotations Using Reflection from our JCG partner Jonny Hackett at the Keyhole Software blog.

AngularJS: Different ways of using Array Filters

AngularJS provides a filter feature which can be used to format an input value or to filter an Array with given matching criteria. For example, you can use the ‘date’ filter to format a Date value into a human-readable representation like MM-DD-YYYY, as in {{dob | date}}. On the other hand, there is the Array filtering feature, which is very useful for filtering data from an Array of JavaScript objects. Array filtering is very commonly used with a Table along with the ng-repeat directive. For example, we can have a list of Todos which we display in a Table using ng-repeat, and a text field to search todos matching any one of the data properties of the Todo object, as follows:

$scope.todos = [
    {id: 1, title: 'Learn AngularJS', description: 'Learn AngularJS', done: true, date: new Date()},
    {id: 2, title: 'Explore ui-router', description: 'Explore and use ui-router instead of ngRoute', done: true, date: new Date()},
    {id: 3, title: 'Play with Restangular', description: 'Restangular seems better than $resource, have a look', done: false, date: new Date()},
    {id: 4, title: 'Try yeoman', description: 'No more labour work..use Yeoman', done: false, date: new Date()},
    {id: 5, title: 'Try MEANJS', description: 'Aah..MEANJS stack seems cool..why dont u try once', done: false, date: new Date()}
];

<input type="text" ng-model="searchTodos">
<table class="table table-striped table-bordered">
    <thead>
        <tr>
            <th>#</th>
            <th>Title</th>
            <th>Description</th>
            <th>Done?</th>
            <th>Date</th>
        </tr>
    </thead>
    <tbody>
        <tr ng-repeat="todo in todos | filter: searchTodos">
            <td>{{$index + 1}}</td>
            <td>{{todo.title}}</td>
            <td>{{todo.description}}</td>
            <td>{{todo.done}}</td>
            <td>{{todo.date | date}}</td>
        </tr>
    </tbody>
</table>

Observe that our search input field’s ng-model attribute is set to ‘searchTodos’, which we have used as the filter in the ng-repeat attribute. As you type in the search input field, the $scope.todos array will be filtered and only matching records will be shown.
This is a “match anything” type filter, meaning the search criteria will be checked against all properties (id, title, description, date) of the Todo object. If you want to search on only one field, say ‘description’, you can apply the filter as follows:

<tr ng-repeat="todo in todos | filter: {description: searchTodos}">

If you want to display only Todos which aren’t done yet, you can do it as follows:

<tr ng-repeat="todo in todos | filter: {description: searchTodos, done: false}">

Note that here the two conditions are combined with AND. If you want to display only Todos which aren’t done yet, and you want to search on all fields, not just on ‘description’, you can do it as follows:

<tr ng-repeat="todo in todos | filter: {$: searchTodos, done: false}">

Here $ means all fields. So far so good, as these are simple and straightforward cases. But what about having nested objects in our Array objects, where we want to search based on a nested object property? Let’s look at that type of scenario. To explain these scenarios I am using some code examples from my ebuddy application. In my ebuddy application I have an ExpenseManager module where I keep track of my expenses as follows:

- I have a list of Accounts such as Cash, Savings Bank Account, CreditCard etc. with current balance details.
- I have a list of Payees such as HouseRent, PowerBill, Salary etc. which fall into INCOME or EXPENDITURE categories.
- I record all my transactions by picking one of the accounts and a Payee, together with the amount.

This application simply records my financial transactions so that I can see monthly reports by Account or by Payee. I hope you get an idea of the domain model. Now let us create a simple AngularJS application and set up some sample data.
<!DOCTYPE html>
<html ng-app="myApp">
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <title>My AngularJS App</title>
    <meta name="description" content="">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link href="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.2.0/css/bootstrap.min.css" rel="stylesheet" type="text/css"/>
    <script src="//cdnjs.cloudflare.com/ajax/libs/angular.js/1.2.20/angular.min.js"></script>
    <script src="//cdnjs.cloudflare.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
    <script src="//cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.2.0/js/bootstrap.min.js"></script>
    <script>
        var myApp = angular.module('myApp', []);
        myApp.controller('SampleController', function($scope) {
            $scope.accounts = [
                {id: 1, name: 'Cash'},
                {id: 2, name: 'Bank Savings'}
            ];
            $scope.payees = [
                {id: '1', name: 'HouseRent', txnType: 'EXPENDITURE'},
                {id: '2', name: 'InternetBill', txnType: 'EXPENDITURE'},
                {id: '3', name: 'PowerBill', txnType: 'EXPENDITURE'},
                {id: '4', name: 'Salary', txnType: 'INCOME'}
            ];
            $scope.transactions = [
                {id: '1', txnType: 'EXPENDITURE', amount: 1000, account: $scope.accounts[0], payee: $scope.payees[0]},
                {id: '2', txnType: 'EXPENDITURE', amount: 500, account: $scope.accounts[1], payee: $scope.payees[1]},
                {id: '3', txnType: 'EXPENDITURE', amount: 1200, account: $scope.accounts[0], payee: $scope.payees[1]},
                {id: '4', txnType: 'INCOME', amount: 5000, account: $scope.accounts[1], payee: $scope.payees[3]},
                {id: '5', txnType: 'EXPENDITURE', amount: 200, account: $scope.accounts[0], payee: $scope.payees[2]}
            ];
        });
    </script>
</head>
<body ng-controller="SampleController">
<br/>
<div class="col-md-8 col-md-offset-2">
    <h3>Transaction Details</h3>
    <table class="table table-striped table-bordered">
        <thead>
        <tr>
            <th>#</th>
            <th>Account</th>
            <th>Type</th>
            <th>Payee</th>
            <th>Amount</th>
        </tr>
        </thead>
        <tbody>
        <tr ng-repeat="txn in transactions">
            <td>{{$index + 1}}</td>
            <td>{{txn.account.name}}</td>
            <td>{{txn.txnType}}</td>
            <td>{{txn.payee.name}}</td>
            <td>{{txn.amount}}</td>
        </tr>
        </tbody>
    </table>
</div>
</body>
</html>

This is a very simple AngularJS page displaying a list of transactions in a table. Observe that the transactions contain nested objects (account, payee) and we are displaying nested properties (txn.account.name, txn.payee.name) in our table. Now we want to filter the transactions in a variety of ways, so let’s look at them case by case.

Case#1: Search by Payee Name

In our transaction object we have a nested payee object which contains the name property on which we want to search. Let us create a form, placed before the transactions table, that will contain all our filters. The first thought that came to my mind for searching on a nested property was to use the nested property path in the filter, as follows:

<input type="text" ng-model="payeeName">
...
<tr ng-repeat="txn in transactions | filter: {payee.name : payeeName}">

But THIS WON’T WORK. To search on a nested property, we can instead name our input field’s ng-model to match the target property path and use the root object name as the filter, as follows:

<div class="col-md-8 col-md-offset-2">
    <form class="form-horizontal" role="form">
        <div class="form-group">
            <label for="input1" class="col-sm-4 control-label">Search by Payee</label>
            <div class="col-sm-6">
                <input type="text" class="form-control" id="input1" placeholder="Payee Name" ng-model="filterTxn.payee.name">
            </div>
        </div>
        <!-- additional filters will come here -->
    </form>
    <h3>Transaction Details</h3>
    <table class="table table-striped table-bordered">
        ...
        <tbody>
        <tr ng-repeat="txn in transactions | filter: filterTxn">
            ...
        </tr>
        </tbody>
    </table>
</div>

Observe that we have bound the input field’s ng-model to "filterTxn.payee.name" and used filter: filterTxn as the filter. So txn.payee.name will be matched against filterTxn.payee.name.

Case#2: Filter by Accounts Dropdown

We would like to filter the transactions by using an Accounts select dropdown.
First we need to populate a select dropdown using $scope.accounts and use it as a filter. Add the following filter after our first filter:

<div class="form-group">
    <label for="input2" class="col-sm-4 control-label">Search By Account</label>
    <div class="col-sm-6">
        <select id="input2" class="form-control" ng-model="filterTxn.account">
            <option value="">All Accounts</option>
            <option ng-repeat="item in accounts" value="{{item.id}}">{{item.name}}</option>
        </select>
    </div>
</div>

Here we are populating a <select> field with the $scope.accounts array, displaying the account name and using id as the value. The key part is that we have bound ng-model to filterTxn.account. When we select an account, the selected account’s id will be stored in filterTxn.account. As we already have filterTxn as the filter, the account filter will be applied along with the payee name filter. Also note that the first option, “All Accounts”, has an empty value (""), which AngularJS treats as no criteria, so when “All Accounts” is selected no account filter will be applied.

Case#3: Search By Transaction Type

We want to filter the transactions by transaction type (INCOME or EXPENDITURE). Add the following filter after the second filter:

<div class="form-group">
    <label for="input3" class="col-sm-4 control-label">Search By Type</label>
    <div class="col-sm-6">
        <select id="input3" class="form-control" ng-model="filterTxn.txnType">
            <option value="">All Types</option>
            <option value="EXPENDITURE">EXPENDITURE</option>
            <option value="INCOME">INCOME</option>
        </select>
    </div>
</div>

I hope no further explanation is needed for this!

Case#4: Search by Payees of Expenditure type

Aaah.. this is interesting! We want to search by Payee names, but only among EXPENDITURE type payees. We can’t simply apply a filter like “filter: expPayeeFilter | filter: {txnType: ‘EXPENDITURE’}” because it will always filter by EXPENDITURE.
So we will create a custom filter that performs “search by payee name in EXPENDITURE type payees, only when some filter text is entered”, as follows: myApp.filter('expenditurePayeeFilter', [function() { return function(inputArray, searchCriteria, txnType){ if(!angular.isDefined(searchCriteria) || searchCriteria == ''){ return inputArray; } var data=[]; angular.forEach(inputArray, function(item){ if(item.txnType == txnType){ if(item.payee.name.toLowerCase().indexOf(searchCriteria.toLowerCase()) != -1){ data.push(item); } } }); return data; }; }]); We have created a custom filter using myApp.filter() and inside it we have used angular.forEach() to iterate over the input array; the rest is plain JavaScript..no magic. Now we will apply this custom filter as follows: <tr ng-repeat="txn in transactions| filter: filterTxn | expenditurePayeeFilter:searchCriteria:'EXPENDITURE'"> <td>{{$index + 1}}</td> <td>{{txn.account.name}}</td> <td>{{txn.txnType}}</td> <td>{{txn.payee.name}}</td> <td>{{txn.amount}}</td> </tr> Observe the syntax: customFilterName:param1:param2:..:paramN. These parameters will be passed as arguments to the function inside our custom filter. We have seen a few interesting options on how to use AngularJS array filtering features. You can find the complete page at https://gist.github.com/sivaprasadreddy/fbee047803d14631fafd Hope it helps! Reference: AngularJS: Different ways of using Array Filters from our JCG partner Siva Reddy at the My Experiments on Technology blog....

JUnit in a Nutshell: Unit Test Assertion

This chapter of JUnit in a Nutshell covers various unit test assertion techniques. It elaborates on the pros and cons of the built-in mechanism, Hamcrest matchers and AssertJ assertions. The ongoing example enlarges upon the subject and shows how to create and use custom matchers/assertions. Unit Test Assertion Trust, but verify – Ronald Reagan The post Test Structure explained why unit tests are usually arranged in phases. It clarified that the real testing, a.k.a. the outcome verification, takes place in the third phase. But so far we have only seen some simple examples for this, using mostly the built-in mechanism of JUnit. As shown in Hello World, verification is based on the error type AssertionError. This is the basis for writing so-called self-checking tests. A unit test assertion evaluates predicates to true or false. In case of false an AssertionError is thrown. The JUnit runtime captures this error and reports the test as failed. The following sections will introduce three of the more popular unit test assertion variants. Assert The built-in assertion mechanism of JUnit is provided by the class org.junit.Assert. It offers a couple of static methods to ease test verification. The following snippet outlines the usage of the available method patterns: fail(); fail( "Houston, We've Got a Problem." );assertNull( actual ); assertNull( "Identifier must not be null.", actual );assertTrue( counter.hasNext() ); assertTrue( "Counter should have a successor.", counter.hasNext() );assertEquals( LOWER_BOUND, actual ); assertEquals( "Number should be lower bound value.", LOWER_BOUND, actual );Assert#fail() throws an assertion error unconditionally. This can be helpful to mark an incomplete test or to ensure that an expected exception has been thrown (see also the Expected Exceptions section in Test Structure). Assert#assertXXX(Object) is used to verify the initialization state of a variable. 
For this purpose there exist two methods, assertNull(Object) and assertNotNull(Object). Assert#assertXXX(boolean) methods test expected conditions passed via the boolean parameter. Invocation of assertTrue(boolean) expects the condition to be true, whereas assertFalse(boolean) expects the opposite. Assert#assertXXX(Object,Object) and Assert#assertXXX(value,value) methods are used for comparison verifications of values, objects and arrays. Although it makes no difference in the result, it is common practice to pass the expected value as the first parameter and the actual as the second. All these types of methods provide an overloaded version that takes a String parameter. In case of a failure this argument gets incorporated into the assertion error message. Many people consider this helpful for specifying the failure reason more clearly. Others perceive such messages as clutter, making tests harder to read. This kind of unit test assertion seems intuitive at first sight, which is why I used it in the previous chapters for getting started. Besides, it is still quite popular and tools support failure reporting well. However it is also somewhat limited with respect to the expressiveness of assertions that require more complex predicates. Hamcrest A library that aims to provide an API for creating flexible expressions of intent is Hamcrest. The utility offers nestable predicates called Matchers. These allow writing complex verification conditions in a way many developers consider easier to read than boolean operator expressions. Unit test assertion is supported by the class MatcherAssert. For this purpose it offers the static assertThat(T, Matcher) method. The first argument passed is the value or object to verify. The second is the predicate used to evaluate the first one. assertThat( actual, equalTo( IN_RANGE_NUMBER ) ); As you can see, the matcher approach mimics the flow of a natural language to improve readability. 
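Conceptually there is no magic behind assertThat: it evaluates the matcher's predicate against the actual value and raises an AssertionError with the matcher's description on mismatch. The following is a minimal sketch of that idea (hypothetical SimpleMatcher and MiniMatcherAssert types, not Hamcrest's actual source):

```java
// Minimal sketch (not Hamcrest's actual source) of what a matcher-based
// assertion boils down to: evaluate the predicate, and throw an
// AssertionError carrying the matcher's description on mismatch.
interface SimpleMatcher<T> {
    boolean matches( T item );
    String describe();
}

class MiniMatcherAssert {

    static <T> void assertThat( T actual, SimpleMatcher<T> matcher ) {
        if( !matcher.matches( actual ) ) {
            throw new AssertionError( "Expected: " + matcher.describe() + " but was: <" + actual + ">" );
        }
    }

    // A tiny equalTo matcher, analogous in spirit to Hamcrest's equalTo.
    static SimpleMatcher<Integer> equalTo( int expected ) {
        return new SimpleMatcher<Integer>() {
            public boolean matches( Integer item ) { return item == expected; }
            public String describe() { return "equal to <" + expected + ">"; }
        };
    }

    public static void main( String[] args ) {
        assertThat( 42, equalTo( 42 ) );  // passes silently
        try {
            assertThat( 7, equalTo( 42 ) );
        } catch( AssertionError expected ) {
            System.out.println( expected.getMessage() );
        }
    }
}
```

The real library adds niceties on top of this skeleton – null handling, type safety and mismatch descriptions – but the core control flow is just this.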
The intention is made even clearer by the following snippet, which uses the is(Matcher) method to decorate the actual expression. assertThat( actual, is( equalTo( IN_RANGE_NUMBER ) ) ); MatcherAssert.assertThat(...) exists with two more signatures. First, there is a variant that takes a boolean parameter instead of the Matcher argument. Its behavior correlates to Assert.assertTrue(boolean). The second variant passes an additional String to the method. This can be used to improve the expressiveness of failure messages: assertThat( "Actual number must not be equals to lower bound value.", actual, is( not( equalTo( LOWER_BOUND ) ) ) ); In case of failure the error message for the given verification would look somewhat like this:Hamcrest comes with a set of useful matchers. The most important ones are listed in the tour of common matchers section of the library's online documentation. But for domain specific problems the readability of a unit test assertion could often be improved if an appropriate matcher were available. For that reason the library allows writing custom matchers. Let us return to the tutorial's example for a discussion of this topic. First we adjust the scenario to be more reasonable for this chapter. Assume that NumberRangeCounter.next() returns the type RangeNumber instead of a simple int value: public class RangeNumber { private final String rangeIdentifier; private final int value;RangeNumber( String rangeIdentifier, int value ) { this.rangeIdentifier = rangeIdentifier; this.value = value; } public String getRangeIdentifier() { return rangeIdentifier; } public int getValue() { return value; } } We could use a custom matcher to check that the return value of NumberRangeCounter#next() is within the counter's defined number range: RangeNumber actual = counter.next();assertThat( actual, is( inRangeOf( LOWER_BOUND, RANGE ) ) ); An appropriate custom matcher could extend the abstract class TypeSafeMatcher<T>. 
This base class handles null checks and type safety. A possible implementation is shown below. Note how it adds the factory method inRangeOf(int,int) for convenient usage: public class InRangeMatcher extends TypeSafeMatcher<RangeNumber> {private final int lowerBound; private final int upperBound;InRangeMatcher( int lowerBound, int range ) { this.lowerBound = lowerBound; this.upperBound = lowerBound + range; } @Override public void describeTo( Description description ) { String text = format( "between <%s> and <%s>.", lowerBound, upperBound ); description.appendText( text ); } @Override protected void describeMismatchSafely( RangeNumber item, Description description ) { description.appendText( "was " ).appendValue( item.getValue() ); }@Override protected boolean matchesSafely( RangeNumber toMatch ) { return lowerBound <= toMatch.getValue() && upperBound > toMatch.getValue(); } public static Matcher<RangeNumber> inRangeOf( int lowerBound, int range ) { return new InRangeMatcher( lowerBound, range ); } } The effort may be a bit exaggerated for the given example. But it shows how the custom matcher can be used to eliminate the somewhat magical IN_RANGE_NUMBER constant of the previous posts. Besides, the new type enforces compile-time type safety of the assertion statement. This means e.g. a String parameter would not be accepted for verification. The following picture shows what a failing test result would look like with our custom matcher:It is easy to see in which way the implementation of describeTo and describeMismatchSafely influences the failure message. It expresses that the expected value should have been between the specified lower bound and the (calculated) upper bound1 and is followed by the actual value. It is a little unfortunate that JUnit expands the API of its Assert class to provide a set of assertThat(…) methods. These methods actually duplicate API provided by MatcherAssert. 
In fact the implementation of those methods delegates to the corresponding methods of this type. Although this might look like a minor issue, I think it is worth mentioning. Due to this approach JUnit is firmly tied to the Hamcrest library. This dependency leads to problems now and then, in particular when used with other libraries that do even worse by incorporating a copy of their own Hamcrest version… Unit test assertion à la Hamcrest is not without competition. While the discussion about one-assert-per-test vs. single-concept-per-test [MAR] is out of scope for this post, supporters of the latter opinion might perceive the library's verification statements as too noisy, especially when a concept needs more than one assertion. Which is why I have to add another section to this chapter! AssertJ In the post Test Runners one of the example snippets uses two assertXXX statements. These verify that an expected exception is an instance of IllegalArgumentException and provides a certain error message. The passage looks similar to this: Throwable actual = ...assertTrue( actual instanceof IllegalArgumentException ); assertEquals( EXPECTED_ERROR_MESSAGE, actual.getMessage() ); The previous section taught us how to improve the code using Hamcrest. But if you happen to be new to the library you may wonder which expression to use, or typing may feel a bit uncomfortable. At any rate the multiple assertThat statements would add to the clutter. The library AssertJ strives to improve this by providing fluent assertions for Java. The intention of the fluent interface API is to provide an easy-to-read, expressive programming style that reduces glue code and simplifies typing. So how can this approach be used to refactor the code above? import static org.assertj.core.api.Assertions.assertThat; Similar to the other approaches AssertJ provides a utility class that offers a set of static assertThat methods. 
But those methods return a particular assertion implementation for the given parameter type. This is the starting point for the so-called statement chaining. Throwable actual = ...assertThat( actual ) .isInstanceOf( IllegalArgumentException.class ) .hasMessage( EXPECTED_ERROR_MESSAGE ); While readability is to some extent in the eye of the beholder, at any rate assertions can be written in a more compact style. See how the various verification aspects relevant for the specific concept under test are added fluently. This programming method supports efficient typing, as the IDE's content assist can provide a list of the available predicates for a given value type. So you want to provide an expressive failure message to the after-world? One possibility is to use describedAs as the first link in the chain to comment the whole block: Throwable actual = ...assertThat( actual ) .describedAs( "Expected exception does not match specification." ) .hasMessage( EXPECTED_ERROR_MESSAGE ) .isInstanceOf( NullPointerException.class ); The snippet expects an NPE, but assume that an IAE is thrown at runtime. Then the failing test run would provide a message like this:Maybe you want your message to be more nuanced according to a given failure reason. In this case you may add a describedAs statement before each verification specification: Throwable actual = ...assertThat( actual ) .describedAs( "Message does not match specification." ) .hasMessage( EXPECTED_ERROR_MESSAGE ) .describedAs( "Exception type does not match specification." ) .isInstanceOf( NullPointerException.class ); There are many more AssertJ capabilities to explore. But to keep this post in scope, please refer to the utility's online documentation for more information. However, before coming to the end, let us have a look at the in-range verification example again. 
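As an aside, the chaining mechanism itself is simple: every verification method performs its check and then returns the assertion instance, so the next call can attach to it. A stripped-down sketch of the idea (hypothetical FluentThrowableAssert class, not AssertJ's actual source):

```java
// Minimal sketch of the fluent-chaining idea behind AssertJ
// (hypothetical FluentThrowableAssert, not the library's source).
class FluentThrowableAssert {

    private final Throwable actual;

    private FluentThrowableAssert( Throwable actual ) {
        this.actual = actual;
    }

    static FluentThrowableAssert assertThat( Throwable actual ) {
        return new FluentThrowableAssert( actual );
    }

    FluentThrowableAssert isInstanceOf( Class<?> type ) {
        if( !type.isInstance( actual ) ) {
            throw new AssertionError( "Expected instance of " + type.getName() );
        }
        return this; // returning 'this' is what enables the chaining
    }

    FluentThrowableAssert hasMessage( String expected ) {
        if( !expected.equals( actual.getMessage() ) ) {
            throw new AssertionError( "Expected message <" + expected + "> but was <" + actual.getMessage() + ">" );
        }
        return this;
    }

    public static void main( String[] args ) {
        Throwable actual = new IllegalArgumentException( "lower bound must not be negative" );
        assertThat( actual )
            .isInstanceOf( IllegalArgumentException.class )
            .hasMessage( "lower bound must not be negative" );
    }
}
```

The first failing check in the chain throws, so later links are simply never reached – which is why the order of the describedAs calls in the snippets above matters.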
This is how it can be solved with a custom assertion: public class RangeCounterAssertion extends AbstractAssert<RangeCounterAssertion, RangeCounter> {private static final String ERR_IN_RANGE_OF = "Expected value to be between <%s> and <%s>, but was <%s>"; private static final String ERR_RANGE_ID = "Expected range identifier to be <%s>, but was <%s>"; public static RangeCounterAssertion assertThat( RangeCounter actual ) { return new RangeCounterAssertion( actual ); } public RangeCounterAssertion hasRangeIdentifier( String expected ) { isNotNull(); if( !actual.getRangeIdentifier().equals( expected ) ) { failWithMessage( ERR_RANGE_ID, expected, actual.getRangeIdentifier() ); } return this; } public RangeCounterAssertion isInRangeOf( int lowerBound, int range ) { isNotNull(); int upperBound = lowerBound + range; if( !isInInterval( lowerBound, upperBound ) ) { int actualValue = actual.getValue(); failWithMessage( ERR_IN_RANGE_OF, lowerBound, upperBound, actualValue ); } return this; }private boolean isInInterval( int lowerBound, int upperBound ) { return actual.getValue() >= lowerBound && actual.getValue() < upperBound; }private RangeCounterAssertion( RangeCounter actual ) { super( actual, RangeCounterAssertion.class ); } } It is common practice for custom assertions to extend AbstractAssert. The first generic parameter is the assertion's type itself. It is needed for the fluent chaining style. The second is the type on which the assertion operates. The implementation provides two additional verification methods that can be chained as in the example below. Because of this the methods return the assertion instance itself. Note how the call of isNotNull() ensures that the actual RangeCounter we want to make assertions on is not null. The custom assertion is incorporated by its factory method assertThat(RangeCounter). Since it inherits the available base checks, the assertion can verify quite complex specifications out of the box. RangeCounter first = ... 
RangeCounter second = ...assertThat( first ) .isInRangeOf( LOWER_BOUND, RANGE ) .hasRangeIdentifier( EXPECTED_RANGE_ID ) .isNotSameAs( second ); For completeness here is how the RangeCounterAssertion looks in action:Unfortunately it is not possible to use two different assertion types with static imports within the same test case – assuming, of course, that those types follow the assertThat(...) naming convention. To circumvent this the documentation recommends extending the utility class Assertions. Such an extension can be used to provide static assertThat methods as an entry point to all of a project's custom assertions. By using this custom utility class throughout the project no import conflicts can occur. A detailed description can be found in the section Providing a single entry point for all assertions : yours + AssertJ ones of the online documentation about custom assertions. Another problem with the fluent API is that single-line chained statements may be more difficult to debug. That is because debuggers may not be able to set breakpoints within the chain. Furthermore it may not be clear which of the method calls may have caused an exception. But as stated by Wikipedia on fluent interfaces, these issues can be overcome by breaking statements into multiple lines as shown in the examples above. This way the user can set breakpoints within the chain and easily step through the code line by line. Conclusion This chapter of JUnit in a Nutshell introduced different unit test assertion approaches like the tool's built-in mechanism, Hamcrest matchers and AssertJ assertions. It outlined some pros and cons and enlarged upon the subject by means of the tutorial's ongoing example. Additionally it was shown how to create and use custom matchers and assertions. While the Assert based mechanism surely is somewhat dated and less object-oriented, it still has its advocates. 
Hamcrest matchers provide a clean separation of assertion and predicate definition, whereas AssertJ assertions score with a compact and easy to use programming style. So now you are spoilt for choice… Please note that this will be the last chapter of my tutorial about JUnit testing essentials. Which does not mean that there is nothing more to say. Quite the contrary! But this would go beyond the scope this mini-series is tailored to. And you know what they say: always leave them wanting more… 1 hm, I wonder if interval boundaries would be more intuitive than lower bound and range… Reference: JUnit in a Nutshell: Unit Test Assertion from our JCG partner Frank Appel at the Code Affine blog....

Garbage Collection: increasing the throughput

The inspiration for this post came after stumbling upon the “Pig in the Python” definition in the memory management glossary. Apparently, this term is used to explain the situation where GC repeatedly promotes large objects from generation to generation. The effect of doing so is supposedly similar to that of a python swallowing its prey whole, only to become immobilised during digestion. For the next 24 hours I just could not get the picture of choking pythons out of my head. As the psychiatrists say, the best way to let go of your fears is to speak about them. So here we go. But instead of the pythons, the rest of the story will be about garbage collection tuning. I promise. Garbage Collection pauses are well known for their potential of becoming a performance bottleneck. Modern JVMs do ship with advanced garbage collectors, but as I have experienced, finding an optimal configuration for a particular application is still darn difficult. To even stand a chance of manually approaching the issue, one would need to understand the exact mechanics of the garbage collection algorithms. This post might be able to help you in this regard, as I am going to use an example to demonstrate how small changes in JVM configuration can affect the throughput of your application. Example The application we use to demonstrate the GC impact on throughput is a simple one. It consists of just two threads:PigEater – simulating a situation where the python keeps eating one pig after another. The code achieves this via adding 32MB of bytes into a java.util.List and sleeping 100ms after each attempt. PigDigester – simulating an asynchronous digesting process. The code implements digestion by just nullifying that list of pigs. As this is a rather tiring process, this thread sleeps for 2000ms after each reference cleaning.Both threads will run in a while loop, continuing to eat and digest until the snake is full. This happens at around 5,000 pigs eaten. 
package eu.plumbr.demo;import java.util.ArrayList; import java.util.List;public class PigInThePython { static volatile List<byte[]> pigs = new ArrayList<>(); static volatile int pigsEaten = 0; static final int ENOUGH_PIGS = 5000;public static void main(String[] args) throws InterruptedException { new PigEater().start(); new PigDigester().start(); }static class PigEater extends Thread {@Override public void run() { while (true) { pigs.add(new byte[32 * 1024 * 1024]); //32MB per pig if (pigsEaten > ENOUGH_PIGS) return; takeANap(100); } } }static class PigDigester extends Thread { @Override public void run() { long start = System.currentTimeMillis();while (true) { takeANap(2000); pigsEaten+=pigs.size(); pigs = new ArrayList<>(); if (pigsEaten > ENOUGH_PIGS) { System.out.format("Digested %d pigs in %d ms.%n",pigsEaten, System.currentTimeMillis()-start); return; } } } }static void takeANap(int ms) { try { Thread.sleep(ms); } catch (Exception e) { e.printStackTrace(); } } } Now let's define the throughput of this system as the “number of pigs digested per second”. Taking into account that the pigs are stuffed into the python every 100ms, we see that the theoretical maximum throughput of this system is 10 pigs/second. Configuring the GC example Let's see how the system behaves using two different configurations. In all situations, the application was run using a dual-core Mac (OS X 10.9.3) with 8G of physical memory. 
First configuration:4G of heap (-Xms4g –Xmx4g) Using CMS to clean the old generation (-XX:+UseConcMarkSweepGC) and Parallel to clean the young generation (-XX:+UseParNewGC) Has allocated 12.5% of the heap (-Xmn512m) to the young generation, further restricting the sizes of the Eden and Survivor spaces to be equally sized.Second configuration is a bit different:2G of heap (-Xms2g –Xmx2g) Using Parallel GC to conduct garbage collection both in the young and tenured generations (-XX:+UseParallelGC) Has allocated 75% of the heap to the young generation (-Xmn1536m)Now it is time to place bets: which of the configurations performed better in terms of throughput (pigs digested per second, remember?)  Those of you laying your money on the first configuration, I must disappoint you. The results are exactly reversed:First configuration (large heap, large old space, CMS GC) is capable of digesting 8.2 pigs/second Second configuration (2x smaller heap, large young space, Parallel GC) is capable of digesting 9.2 pigs/secondNow, let me put the results in perspective. Allocating 2x fewer resources (memory-wise) we achieved 12% better throughput. This is something so contrary to common knowledge that it might require some further clarification on what was actually happening. Interpreting the GC results The reason for what you see here is not too complex, and the answer is staring right at you when you take a closer look at what the GC is doing during the test run. 
For this, you can use the tool of your choice; I peeked under the hood with the help of jstat, similar to the following: jstat -gc -t -h20 PID 1s Looking at the data, I noticed that the first configuration went through 1,129 garbage collection cycles (YGC+FGC), which in total took 63.723 seconds:
Timestamp S0C S1C S0U S1U EC EU OC OU PC PU YGC YGCT FGC FGCT GCT
594.0 174720.0 174720.0 163844.1 0.0 174848.0 131074.1 3670016.0 2621693.5 21248.0 2580.9 1006 63.182 116 0.236 63.419
595.0 174720.0 174720.0 163842.1 0.0 174848.0 65538.0 3670016.0 3047677.9 21248.0 2580.9 1008 63.310 117 0.236 63.546
596.1 174720.0 174720.0 98308.0 163842.1 174848.0 163844.2 3670016.0 491772.9 21248.0 2580.9 1010 63.354 118 0.240 63.595
597.0 174720.0 174720.0 0.0 163840.1 174848.0 131074.1 3670016.0 688380.1 21248.0 2580.9 1011 63.482 118 0.240 63.723
The second configuration paused a total of 168 times (YGC+FGC) for just 11.409 seconds:
Timestamp S0C S1C S0U S1U EC EU OC OU PC PU YGC YGCT FGC FGCT GCT
539.3 164352.0 164352.0 0.0 0.0 1211904.0 98306.0 524288.0 164352.2 21504.0 2579.2 27 2.969 141 8.441 11.409
540.3 164352.0 164352.0 0.0 0.0 1211904.0 425986.2 524288.0 164352.2 21504.0 2579.2 27 2.969 141 8.441 11.409
541.4 164352.0 164352.0 0.0 0.0 1211904.0 720900.4 524288.0 164352.2 21504.0 2579.2 27 2.969 141 8.441 11.409
542.3 164352.0 164352.0 0.0 0.0 1211904.0 1015812.6 524288.0 164352.2 21504.0 2579.2 27 2.969 141 8.441 11.409
Considering that the work to be carried out was equivalent in both cases – with no long-lived objects in sight, the duty of the GC in this pig-eating exercise is just to get rid of everything as fast as possible – the first configuration forces the GC to run ~6.7x more often, resulting in ~5.6x longer total pause times. So the story fulfilled two purposes. First and foremost, I hope I got the picture of a choking python out of my head. 
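For reference, the ~6.7x and ~5.6x figures above follow directly from the jstat totals (YGC + FGC collection counts and the GCT column in seconds); a quick sketch of the arithmetic:

```java
// Quick arithmetic behind the quoted ratios, taken from the last
// jstat rows above (YGC + FGC collection counts, GCT total seconds).
class GcRatios {
    public static void main( String[] args ) {
        int cmsCycles = 1011 + 118;        // first configuration: 1,129 cycles
        double cmsPauseSeconds = 63.723;
        int parallelCycles = 27 + 141;     // second configuration: 168 cycles
        double parallelPauseSeconds = 11.409;

        System.out.printf( "cycle ratio: ~%.1fx%n", (double) cmsCycles / parallelCycles );   // prints ~6.7x
        System.out.printf( "pause ratio: ~%.1fx%n", cmsPauseSeconds / parallelPauseSeconds ); // prints ~5.6x
    }
}
```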
Another and more significant take-away is that tuning GC is a tricky exercise at best, requiring deep understanding of several underlying concepts. Even with the truly trivial application used in this blog post, the results you face can have a significant impact on your throughput and capacity planning. In real-world applications the differences are even more staggering. So the choice is yours: you can either master the concepts, or focus on your day-to-day work and let Plumbr find a suitable GC configuration according to your needs.Reference: Garbage Collection: increasing the throughput from our JCG partner Nikita Salnikov Tarnovski at the Plumbr Blog blog....

Brand new JSF components in PrimeFaces Extensions

The PrimeFaces Extensions team is glad to announce several new components for the upcoming 3.0.0 main release. Our new committer Francesco Strazzullo gave a “Turbo Boost” to the project and brought at least 6 JSF components which have been successfully integrated! The current development state is deployed on OpenShift – please look at the showcase. Below is a short overview of the added components, with screenshots.      Analog Clock. This is a component similar to the digital PrimeFaces Clock, but as an analog variant, enhanced with advanced settings.      Countdown. It simulates a countdown and fires a JSF listener after a customizable interval. You can start, stop and pause the countdown.DocumentViewer. This is a JSF wrapper of the Mozilla Foundation project PDF.js – a full HTML PDF reader.GChart. This is a JSF wrapper of the Google Charts API. It's the same chart library used by Google Analytics and other Google services. Please look at Organizational Chart and Geo Chart.A small note from me: charts can be built completely from a model in Java. There is only one GChartModel which allows you to add any options you want programmatically. I have used the same approach for my chart library based on Flotcharts (thinking right now about adding it to the PF Extensions). There is only one generic model with generic setters to set options (the options are then serialized to JSON). Advantage: you can export a chart on the server-side, e.g. with PhantomJS. This is a different approach from PrimeFaces' charts, where each chart type has a separate model class and hard-coded methods for option settings. Gravatar. This is a component for Gravatar services.Knob. This is a nice theme-aware component to insert numeric values in a range. It has many settings for visual customization, an AJAX listener and more.Last but not least: we plan to deploy current SNAPSHOTs on OpenShift in the future. More new components are coming soon. I intend to bring a component called pe:typeahead to 3.0.0 too. 
It is based on Twitter’s Typeahed. In the next post, I will explain how I have added an excellent WAI ARIA support to this great autocomplete widget. Stay tuned!Reference: Brand new JSF components in PrimeFaces Extensions from our JCG partner Oleg Varaksin at the Thoughts on software development blog....

The Measure Of Success

What makes a successful project? Waterfall project management tells us it’s about meeting scope, time and cost goals. Do these success metrics also hold true to agile projects? Let’s see.  In an agile project we learn new information all the time. It’s likely that the scope will change over time, because we find out things we assumed the customer wanted were wrong, while features we didn’t even think of are actually needed. We know that we don’t know everything when we estimate scope, time and budget. This is true for both kinds of projects, but in agile projects we admit that, and therefore do not lock those as goals. The waterfall project plan is immune to feedback. In agile projects, we put feedback cycles into the plan so we will be able to introduce changes. We move from “we know what we need to do” to “let’s find out if what we’re thinking is correct” view. In waterfall projects, there’s an assumption of no variability, and that the plan covers any possible risk. In fact, one small shift in the plan can have disastrous (or wonderful) effects on product delivery. Working from a prioritized backlog in an agile project, means the project can end “prematurely”. If we have a happy customer with half the features, why not stop there? If we deliver a smaller scope, under-budget and before the deadline, has the project actually failed? Some projects are so long, that the people who did the original estimation are long gone. The assumptions they relied on are no longer true, technology has changed and the market too. Agile projects don’t plan that long into the future, and therefore cannot be measured according to the classic metrics. Quality is not part of the scope, time and cost trio, and usually not set as a goal. Quality is not easily measured, and suffers from the pressure of the other three. 
In agile projects quality is considered a first-class citizen, because we know it supports not only customer satisfaction, but also the ability of the team to deliver at a consistent pace.All kinds of differences. But they don't answer a very simple question: What is success? In any kind of project, success has an impact. It creates happy customers. It creates a new market. It changes how people think and feel about the company. And it also changes how people inside the company view themselves. This impact is what makes a successful project. This is what we should be measuring. The problem with all of those is that they cannot be measured at the delivery date, if at all. Cost, budget and scope may be measurable at the delivery date, including against the initial estimation, but they are not really indicative of success. In fact, there's a destructive force within the scope, time and cost goals: they come at the expense of others, like quality and customer satisfaction. If a deadline is important, quality suffers. We've all been there. The cool thing about an agile project is that we can gain confidence we're on the right track, if customers were part of the process, and if the people developing the product were aligned with the customer's feedback. The feedback tells us early on if the project is going to be successful, according to real-life parameters. And if we're wrong, that's good too. We can cut our losses and turn to another opportunity. So agile is better, right? Yes, I'm pro-agile. No, I don't think agile works every time. I ask that you define your success goals for your product and market, not based on a methodology, but on what impact it will make. Only then can you actually measure success.Reference: The Measure Of Success from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Load-Testing Guidelines

Load-testing is not trivial. It's often not just about downloading JMeter or Gatling, recording some scenarios and then running them. Well, it might be just that, but you are lucky if it is. And at the risk of sounding like Captain Obvious, it's good to be reminded of some things that can potentially waste time. When you run the tests, you will eventually hit a bottleneck, and then you'll have to figure out where it is. It can be:

- client bottleneck – if your load-testing tool uses HttpURLConnection, the number of requests sent by the client is quite limited. You have to start from that and make sure enough requests are leaving your load-testing machine(s)
- network bottleneck – check whether your outbound connection allows the desired number of requests to reach the server
- server machine bottleneck – check the number of open files that your (most probably) Linux server allows. For example, if the default is 1024, then you can have at most 1024 concurrent connections, so increase it (limits.conf)
- application server bottleneck – if the thread pool that handles requests is too small, requests may be kept waiting. If some other tiny configuration switch (e.g. whether to use NIO, which is worth a separate article) has the wrong value, that may reduce performance. You'd have to be familiar with the performance-related configuration of your server
- database bottleneck – check the CPU usage and response times of your database to see if it isn't the one slowing the requests. A misconfigured database, or too small/too few DB servers, can obviously be a bottleneck
- application bottleneck – these you'd have to investigate yourself, possibly using a performance monitoring tool (but be careful when choosing one, as there are many "new and cool" but unstable and useless ones). We can divide this type in two:
  - framework bottleneck – a framework you are using has problems. This might be a web framework, a dependency injection framework, an actor system, an ORM, or even a JSON serialization tool
  - application code bottleneck – you are misusing a tool/framework, have blocking code, or just wrote horrible code with unnecessarily high computational complexity

You'd have to constantly monitor the CPU, memory, network and disk I/O usage of the machines in order to understand when you've hit a hardware bottleneck.

One important aspect is being able to bombard your servers with enough requests. A single machine may well be insufficient, especially if you are a big company whose product is likely to attract a lot of customers at launch, and/or making a request needs some processing power as well, e.g. for encryption. So you may need a cluster of machines to run your load tests. The tool you are using may not support that, so you may have to coordinate the cluster manually.

As a result of your load tests, you'd have to decide how long it makes sense to keep connections waiting, and when to reject them. That is controlled by the connect timeout on the client and the registration timeout (or pool borrow timeout) on the server. Also keep that in mind when viewing the results – a too-slow response and a rejected connection are practically the same thing: your server is not able to service the request.

If you are on AWS, there are some specifics. Leaving auto-scaling aside (you should probably disable it for at least some of the runs), you need to keep in mind that the ELB needs warming up. Run the tests a couple of times to warm it up (many requests will fail until it's fine). Also, when using a load balancer and long-lived connections are left open (or you use WebSockets, for example), the load balancer may leave its own connections to the servers behind it open forever and reuse them when a new request for a long-lived connection comes in.
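To make the client-side points concrete – avoiding one HttpURLConnection per request, and setting an explicit connect timeout – here is a minimal sketch of a concurrent request generator using `java.net.http.HttpClient` (Java 11+). The stub server, the `MiniLoadTest` class name and the request count are illustrative assumptions, not something from a real load-testing tool:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class MiniLoadTest {

    static final int TOTAL_REQUESTS = 100;

    // Fires TOTAL_REQUESTS concurrent requests at a local stub server
    // and returns how many came back with HTTP 200.
    static long run() throws Exception {
        // Tiny local server standing in for the system under test
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/ping", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.setExecutor(Executors.newFixedThreadPool(8));
        server.start();
        int port = server.getAddress().getPort();

        // HttpClient pools and reuses connections, unlike the
        // one-connection-per-request HttpURLConnection. The connect
        // timeout is the client-side half of the timeout tuning above.
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(2))
                .build();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:" + port + "/ping"))
                .timeout(Duration.ofSeconds(5)) // per-response timeout
                .build();

        // Send all requests asynchronously, then wait for completion
        List<CompletableFuture<HttpResponse<String>>> inFlight =
                IntStream.range(0, TOTAL_REQUESTS)
                        .mapToObj(i -> client.sendAsync(
                                request, HttpResponse.BodyHandlers.ofString()))
                        .collect(Collectors.toList());

        long ok = inFlight.stream()
                .map(CompletableFuture::join)
                .filter(r -> r.statusCode() == 200)
                .count();
        server.stop(0);
        return ok;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("successful: " + run() + "/" + TOTAL_REQUESTS);
    }
}
```

A real run would of course replace the stub server with your actual endpoint and track latencies per request, not just success counts; the point here is the pooled client and the explicit timeouts.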
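For the server-machine bottleneck above, the open-file limit is quick to inspect. A sketch for a typical Linux box – the exact config path and the `appuser` account name are assumptions, check your distribution:

```shell
# Current soft and hard per-process open-file limits
ulimit -Sn
ulimit -Hn

# System-wide ceiling on open file handles (Linux)
cat /proc/sys/fs/file-max

# To raise the per-process limit permanently, add lines like these to
# /etc/security/limits.conf (assuming the server runs as "appuser"),
# then re-login or restart the service:
#   appuser  soft  nofile  65536
#   appuser  hard  nofile  65536
```

Remember that each concurrent connection consumes a file descriptor, so the soft `nofile` limit effectively caps your concurrent connections.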
Overall, load (performance) testing and analysis is not straightforward and there are many possible problems, but it is something you must do before release. Well, unless you don't expect more than 100 users. And the next time I do it, I will use my own article as a reference, to make sure I'm not missing something.

Reference: Load-Testing Guidelines from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog blog....

Continuous Delivery with Docker, Jenkins, JBoss Fuse and OpenShift PaaS

I recently put together an end-to-end demo showing step-by-step how to set up a Continuous Delivery pipeline to help automate your deployments and shorten your cycle times for getting code from development to production. Establishing a proper continuous delivery pipeline is a discipline that requires more than just tools and automation, but the value of good tools and a head start on setting this up can't be overstated. This project has two goals:

- Show how you'd do CD with JBoss Fuse and OpenShift
- Create a scripted, repeatable, pluggable and versioned demo so we can swap out pieces (like use JBoss Fuse 6.2/Fabric8/Docker/Kubernetes, or OpenStack, or VirtualBox, or go.cd, or travis-ci, or other code review systems)

We use Docker containers to set up all of the individual pieces, which makes it easier to script and version them. See the videos of me doing the demo below, or check out the setup steps and follow the script to recreate the demo yourself!

Part I: Continuous Delivery with JBoss Fuse on OpenShift Enterprise from Christian Posta on Vimeo.

Part II: Continuous Delivery with JBoss Fuse on OpenShift Enterprise Part II from Christian Posta on Vimeo.

Part III: Continuous Delivery with JBoss Fuse on OpenShift Enterprise part III from Christian Posta on Vimeo.

Reference: Continuous Delivery with Docker, Jenkins, JBoss Fuse and OpenShift PaaS from our JCG partner Christian Posta at the Christian Posta – Software Blog blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.