From JPA to Hibernate’s legacy and enhanced identifier generators

JPA identifier generators

JPA defines the following identifier strategies:

AUTO: The persistence provider picks the most appropriate identifier strategy supported by the underlying database.
IDENTITY: Identifiers are assigned by a database IDENTITY column.
SEQUENCE: The persistence provider uses a database sequence for generating identifiers.
TABLE: The persistence provider uses a separate database table to emulate a sequence object.

In my previous post I examined the pros and cons of all these surrogate identifier strategies.

Identifier optimizers

While there's not much application-side IDENTITY generator optimization (other than configuring database identity preallocation), sequence identifiers offer much more flexibility in this regard. One of the most common optimization strategies is based on the hi/lo allocation algorithm. For this Hibernate offers:

SequenceHiLoGenerator: Uses a database sequence to generate the hi value, while the lo value is incremented according to the hi/lo algorithm.
TableHiLoGenerator: A database table is used for generating the hi values. This generator is deprecated in favour of the MultipleHiLoPerTableGenerator, the enhanced TableGenerator or the SequenceStyleGenerator.
MultipleHiLoPerTableGenerator: A hi/lo table generator capable of using a single database table even for multiple identifier sequences.
SequenceStyleGenerator: An enhanced version of the previous sequence generator. It uses a sequence if the underlying database supports sequences; if the current database doesn't support them, it switches to using a table for generating sequence values. While the previous generators had a predefined optimization algorithm, the enhanced generators can be configured with an optimizer strategy:

- none: no optimizing strategy is applied, so every identifier is fetched from the database
- hi/lo: uses the original hi/lo algorithm. This strategy makes it difficult for other systems to share the same identifier sequence, requiring those systems to implement the same identifier generation logic.
- pooled: uses a hi/lo optimization strategy, but instead of saving the current hi value it stores the current range upper boundary (or lower boundary, with hibernate.id.optimizer.pooled.prefer_lo). Pooled is the default optimizer strategy.

TableGenerator: Like MultipleHiLoPerTableGenerator it may use one single table for multiple identifier generators, while offering configurable optimizer strategies. Pooled is the default optimizer strategy here as well.

JPA to Hibernate identifier mapping

With such an abundant offer of generators, we cannot help asking which of them are used as the default JPA generators. While the JPA specification doesn't imply any particular optimization, Hibernate will prefer an optimized generator over one that always hits the database for every new identifier.

The JPA SequenceGenerator

We'll define one entity configured with the SEQUENCE JPA identifier generator. A unit test is going to persist five such entities.
@Entity(name = "sequenceIdentifier")
public static class SequenceIdentifier {
    @Id
    @GeneratedValue(generator = "sequence", strategy = GenerationType.SEQUENCE)
    @SequenceGenerator(name = "sequence", allocationSize = 10)
    private Long id;
}

@Test
public void testSequenceIdentifierGenerator() {
    LOGGER.debug("testSequenceIdentifierGenerator");
    doInTransaction(new TransactionCallable<Void>() {
        @Override
        public Void execute(Session session) {
            for (int i = 0; i < 5; i++) {
                session.persist(new SequenceIdentifier());
            }
            session.flush();
            return null;
        }
    });
}

Running this test gives us the following output:

Query:{[call next value for hibernate_sequence][]}
Generated identifier: 10, using strategy: org.hibernate.id.SequenceHiLoGenerator
Generated identifier: 11, using strategy: org.hibernate.id.SequenceHiLoGenerator
Generated identifier: 12, using strategy: org.hibernate.id.SequenceHiLoGenerator
Generated identifier: 13, using strategy: org.hibernate.id.SequenceHiLoGenerator
Generated identifier: 14, using strategy: org.hibernate.id.SequenceHiLoGenerator
Query:{[insert into sequenceIdentifier (id) values (?)][10]}
Query:{[insert into sequenceIdentifier (id) values (?)][11]}
Query:{[insert into sequenceIdentifier (id) values (?)][12]}
Query:{[insert into sequenceIdentifier (id) values (?)][13]}
Query:{[insert into sequenceIdentifier (id) values (?)][14]}

Hibernate chooses the legacy SequenceHiLoGenerator for backward compatibility with all those applications that were developed prior to the release of the enhanced generators. Migrating a legacy application to the new generators is not an easy process, but the enhanced generators are a better alternative for new applications. Hibernate prefers the "seqhilo" generator by default, which is not an intuitive choice, since many might expect the raw "sequence" generator (always calling the database sequence for every new identifier value).
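The identifier values in the log above follow the classic hi/lo arithmetic: a single sequence call leases a whole block of allocationSize identifiers, which are then assigned in memory. The following is a minimal illustrative sketch of that arithmetic, not Hibernate's actual implementation; the in-memory sequence counter stands in for the database round-trip.

```java
// Simplified sketch of the legacy hi/lo algorithm: the sequence value is
// multiplied by the allocation size to obtain the "hi" block base, and a
// "lo" counter is incremented in memory until the block is exhausted.
public class HiLoSketch {
    private final int maxLo;
    private long sequence = 0; // stands in for the database sequence (assumption)
    private long hi;
    private int lo;
    int sequenceCalls = 0;     // counts the simulated database round-trips

    public HiLoSketch(int maxLo) {
        this.maxLo = maxLo;
        this.lo = maxLo;       // force a sequence call on first use
    }

    public long nextId() {
        if (lo >= maxLo) {
            sequenceCalls++;
            hi = (++sequence) * maxLo; // one database call per maxLo identifiers
            lo = 0;
        }
        return hi + lo++;
    }

    public static void main(String[] args) {
        HiLoSketch generator = new HiLoSketch(10);
        for (int i = 0; i < 5; i++) {
            System.out.println("Generated identifier: " + generator.nextId());
        }
        System.out.println("Sequence calls: " + generator.sequenceCalls);
    }
}
```

With maxLo = 10 and a first sequence value of 1, this produces identifiers 10 through 14 from a single sequence round-trip, matching the five identifiers and single sequence call in the log.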
To enable the enhanced generators we need to set the following Hibernate property:

properties.put("hibernate.id.new_generator_mappings", "true");

giving us the following output:

Query:{[call next value for hibernate_sequence][]}
Query:{[call next value for hibernate_sequence][]}
Generated identifier: 1, using strategy: org.hibernate.id.enhanced.SequenceStyleGenerator
Generated identifier: 2, using strategy: org.hibernate.id.enhanced.SequenceStyleGenerator
Generated identifier: 3, using strategy: org.hibernate.id.enhanced.SequenceStyleGenerator
Generated identifier: 4, using strategy: org.hibernate.id.enhanced.SequenceStyleGenerator
Generated identifier: 5, using strategy: org.hibernate.id.enhanced.SequenceStyleGenerator
Query:{[insert into sequenceIdentifier (id) values (?)][1]}
Query:{[insert into sequenceIdentifier (id) values (?)][2]}
Query:{[insert into sequenceIdentifier (id) values (?)][3]}
Query:{[insert into sequenceIdentifier (id) values (?)][4]}
Query:{[insert into sequenceIdentifier (id) values (?)][5]}

The new SequenceStyleGenerator generates different identifier values than the legacy SequenceHiLoGenerator. The reason the insert statements differ between the old and the new generators is that the new generators' default optimizer strategy is "pooled", while the old generators can only use the "hi/lo" strategy.
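The pooled optimizer explains the different identifier values: each database call leases a range of allocationSize identifiers whose upper boundary is the returned sequence value, and identifiers within the range are handed out in memory. Below is a deliberately simplified sketch of that idea, not Hibernate's implementation; it ignores Hibernate's bootstrap special case (which is why the log above shows two initial sequence calls), and the in-memory counter stands in for a database sequence defined with INCREMENT BY allocationSize.

```java
// Simplified sketch of the pooled optimizer: each simulated database call
// leases a range whose upper boundary is the returned sequence value;
// identifiers within the range are assigned without touching the database.
public class PooledSketch {
    private final int incrementSize;
    private long sequence = 0; // stands in for a sequence with INCREMENT BY incrementSize (assumption)
    private long hiValue = 0;  // upper boundary (inclusive) of the currently leased range
    private long next = 1;
    int sequenceCalls = 0;

    public PooledSketch(int incrementSize) {
        this.incrementSize = incrementSize;
    }

    public long nextId() {
        if (next > hiValue) {
            sequenceCalls++;
            sequence += incrementSize;         // one round-trip leases incrementSize identifiers
            hiValue = sequence;
            next = hiValue - incrementSize + 1;
        }
        return next++;
    }

    public static void main(String[] args) {
        PooledSketch generator = new PooledSketch(10);
        for (int i = 0; i < 11; i++) {
            System.out.println("Generated identifier: " + generator.nextId());
        }
        System.out.println("Sequence calls: " + generator.sequenceCalls);
    }
}
```

With incrementSize = 10, identifiers 1 through 10 cost a single round-trip, and the 11th identifier triggers the second one.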
The JPA TableGenerator

@Entity(name = "tableIdentifier")
public static class TableSequenceIdentifier {
    @Id
    @GeneratedValue(generator = "table", strategy = GenerationType.TABLE)
    @TableGenerator(name = "table", allocationSize = 10)
    private Long id;
}

Running the following test:

@Test
public void testTableSequenceIdentifierGenerator() {
    LOGGER.debug("testTableSequenceIdentifierGenerator");
    doInTransaction(new TransactionCallable<Void>() {
        @Override
        public Void execute(Session session) {
            for (int i = 0; i < 5; i++) {
                session.persist(new TableSequenceIdentifier());
            }
            session.flush();
            return null;
        }
    });
}

generates the following SQL statement output:

Query:{[select sequence_next_hi_value from hibernate_sequences where sequence_name = 'tableIdentifier' for update][]}
Query:{[insert into hibernate_sequences(sequence_name, sequence_next_hi_value) values('tableIdentifier', ?)][0]}
Query:{[update hibernate_sequences set sequence_next_hi_value = ? where sequence_next_hi_value = ? and sequence_name = 'tableIdentifier'][1,0]}
Generated identifier: 1, using strategy: org.hibernate.id.MultipleHiLoPerTableGenerator
Generated identifier: 2, using strategy: org.hibernate.id.MultipleHiLoPerTableGenerator
Generated identifier: 3, using strategy: org.hibernate.id.MultipleHiLoPerTableGenerator
Generated identifier: 4, using strategy: org.hibernate.id.MultipleHiLoPerTableGenerator
Generated identifier: 5, using strategy: org.hibernate.id.MultipleHiLoPerTableGenerator
Query:{[insert into tableIdentifier (id) values (?)][1]}
Query:{[insert into tableIdentifier (id) values (?)][2]}
Query:{[insert into tableIdentifier (id) values (?)][3]}
Query:{[insert into tableIdentifier (id) values (?)][4]}
Query:{[insert into tableIdentifier (id) values (?)][5]}

As with the previous SEQUENCE example, Hibernate uses the MultipleHiLoPerTableGenerator to maintain backward compatibility.
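The select ... for update followed by update ... where sequence_next_hi_value = ? pattern in the log is effectively a compare-and-set on the table row: the update succeeds only if the value read is still current, so concurrent nodes can never lease the same block. A sketch of that round-trip follows, with an AtomicLong standing in for the database row; the class and method names are illustrative, not Hibernate's.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the table-based allocation round-trip. The AtomicLong stands in
// for the value column of the hibernate_sequences row; compareAndSet mimics
// "update hibernate_sequences set ... = ? where ... = ?", which succeeds only
// if no other node has changed the row since we read it. A failed update
// simply retries with the freshly read value.
public class TableValueLease {
    private final AtomicLong column = new AtomicLong(0);

    public long lease(int incrementSize) {
        while (true) {
            long current = column.get();                           // select ... for update
            if (column.compareAndSet(current, current + incrementSize)) {
                return current;                                    // value our block starts from
            }
            // another node won the race: loop and re-read
        }
    }

    public static void main(String[] args) {
        TableValueLease hiColumn = new TableValueLease();
        // With an increment of 1 this mirrors the legacy generator's hi-value
        // bump (0 -> 1 in the log above); larger increments lease whole ranges.
        System.out.println("hi value: " + hiColumn.lease(1)); // 0
        System.out.println("hi value: " + hiColumn.lease(1)); // 1
    }
}
```

Each lease costs one such read/update pair, regardless of how many identifiers the leased block covers.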
Switching to the enhanced id generators:

properties.put("hibernate.id.new_generator_mappings", "true");

gives us the following output:

Query:{[select tbl.next_val from hibernate_sequences tbl where tbl.sequence_name=? for update][tableIdentifier]}
Query:{[insert into hibernate_sequences (sequence_name, next_val) values (?,?)][tableIdentifier,1]}
Query:{[update hibernate_sequences set next_val=? where next_val=? and sequence_name=?][11,1,tableIdentifier]}
Query:{[select tbl.next_val from hibernate_sequences tbl where tbl.sequence_name=? for update][tableIdentifier]}
Query:{[update hibernate_sequences set next_val=? where next_val=? and sequence_name=?][21,11,tableIdentifier]}
Generated identifier: 1, using strategy: org.hibernate.id.enhanced.TableGenerator
Generated identifier: 2, using strategy: org.hibernate.id.enhanced.TableGenerator
Generated identifier: 3, using strategy: org.hibernate.id.enhanced.TableGenerator
Generated identifier: 4, using strategy: org.hibernate.id.enhanced.TableGenerator
Generated identifier: 5, using strategy: org.hibernate.id.enhanced.TableGenerator
Query:{[insert into tableIdentifier (id) values (?)][1]}
Query:{[insert into tableIdentifier (id) values (?)][2]}
Query:{[insert into tableIdentifier (id) values (?)][3]}
Query:{[insert into tableIdentifier (id) values (?)][4]}
Query:{[insert into tableIdentifier (id) values (?)][5]}

You can see that the new enhanced TableGenerator was used this time. For more about these optimization strategies you can read the original release note. Code available on GitHub.

Reference: From JPA to Hibernate's legacy and enhanced identifier generators from our JCG partner Vlad Mihalcea at the Vlad Mihalcea's Blog blog....

How to Use Projection in MongoDB?

In MongoDB, projection means selecting only the necessary data rather than the whole data of a document. If a document has 5 fields and you need to show only 3, then select only 3 fields from it. MongoDB provides a few projection operators that help us reach that goal. Let us discuss those operators in detail below.

$: First we are going to talk about the $ operator. This operator limits the contents of the array field included in the query results to contain the first matching element. The array field must appear in the query document. Only one positional $ operator may appear in the projection document, and only one array field may appear in the query document. Now let us see a basic example of this operator:

db.collection.find( { <array>: <value> ... }, { "<array>.$": 1 } )

In the above example, as you can see, we have used find() on a collection. The projection holds only 1 for the array field. It specifies that in a single query only one value can be retrieved from the array, depending on the position.

Array field limitation: Since only one array field can appear in the query document, if the array contains documents, what can we use to specify criteria on multiple fields of those documents?

$elemMatch: In MongoDB the $elemMatch projection operator is used to limit the contents of an array field included in the query results to contain only the first element matching the specified condition. The elements of the array are documents. If multiple elements match the $elemMatch condition, the operator returns the first matching element in the array. The $elemMatch projection operator is similar to the positional $ projection operator. To describe an example of this operator we need a database containing a lot of document-type data. In my opinion a student mark-sheet is appropriate for this. Let us see the query.
db.grades.find( { records: { $elemMatch: { student: "stud1", grade: { $gt: 85 } } } } );

This example returns all documents in the grades collection where the records array contains at least one element with both student equal to "stud1" and grade greater than 85. But what happens if there are two conditions stating two different grades of the same student? Like this:

db.grades.find( { records: { $elemMatch: { student: "stud1", grade: { $gt: 85 } } , : { student: "stud1", grade: { $gt: 90 } } } } );

Because one of the condition sets is matched in the above query, the output will show correctly: any one of them has to match. However, in the next case:

db.grades.find( { records: { $elemMatch: { student: "stud1", grade: { $gt: 85 } } , : { student: "stud2", grade: { $gt: 90 } } } } );

it would not match, because no embedded document meets the specified criteria.

The differences between the projection operators:

The positional ($) projection operator:
- limits the contents of an array field included in the query results to contain the first element that matches the query document
- requires that the matching array field is included in the query criteria
- can only be used if a single array field appears in the query criteria
- can only be used once in a projection

The $elemMatch projection operator:
- limits the contents of an array field included in the query results to contain only the first array element that matches the $elemMatch condition
- does not require the matching array to be in the query criteria
- can be used to match multiple conditions for array elements that are embedded documents

$slice: The $slice operator controls the number of items of an array that a query returns. For information on limiting the size of an array during an update with $push, see the $slice modifier instead.
Let us see a basic query:

db.collection.find( { field: value }, { array: { $slice: count } } );

This operation selects the documents identified by a field named field that holds value, and returns the number of elements specified by the value of count from the array stored in the array field. If count is greater than the number of elements in array, the query returns all elements of the array. $slice accepts arguments in a number of formats, including negative values and arrays. Having seen the basic form, let us see how we can retrieve a set of comments from an array:

db.posts.find( {}, { comments: { $slice: 5 } } )

Here, $slice selects the first five items in the array in the comments field.

db.posts.find( {}, { comments: { $slice: -5 } } )

This operation returns the last five items in the array. So we have the first and the last five comments. But what happens if we need a certain number of comments from the middle of the array? We have to modify the above code a little bit, as follows:

db.collection.find( { field: value }, { array: { $slice: [skip, limit] } } );

In the above code we can see a pair of parameters, which tells us how to select data from the middle of the array. How? The first parameter, skip, tells us how many positions in the array have to be skipped before the count starts, and the second, limit, tells us where the count will stop. Let us modify the above example for better understanding:

db.posts.find( {}, { comments: { $slice: [5,10] } } )

In the above example, instead of showing the first 5 comments, we have skipped them. After skipping, the count starts from index 5 (because array positioning starts from 0) and returns the next 10 comments.

$meta: The $meta projection operator returns, for each matching document, the metadata (e.g. "textScore") associated with the query. The $meta expression can be part of the projection document as well as of a sort() expression.
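Stepping back to the $slice examples above, their selection rules (first n, last n, or [skip, limit]) can be emulated outside the database as well. The following is an illustrative Java sketch of those semantics based on the behaviour described above, with a plain list standing in for the array field; it is not part of any MongoDB driver API.

```java
import java.util.List;

// Emulates MongoDB's $slice projection semantics on a plain list:
// a positive count takes the first n items, a negative count the last n,
// and the [skip, limit] form skips 'skip' items (counted from the end if
// skip is negative) and then returns up to 'limit' items.
public class SliceSketch {

    public static <T> List<T> slice(List<T> array, int count) {
        int size = array.size();
        if (count >= 0) {
            return array.subList(0, Math.min(count, size));    // first n items
        }
        return array.subList(Math.max(size + count, 0), size); // last n items
    }

    public static <T> List<T> slice(List<T> array, int skip, int limit) {
        int size = array.size();
        int from = skip < 0 ? Math.max(size + skip, 0) : Math.min(skip, size);
        int to = Math.min(from + limit, size);
        return array.subList(from, to);
    }

    public static void main(String[] args) {
        List<Integer> comments = List.of(0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
        System.out.println(slice(comments, 5));     // first five comments
        System.out.println(slice(comments, -5));    // last five comments
        System.out.println(slice(comments, 5, 10)); // skip five, take up to ten
    }
}
```

Note how the [5, 10] form on a ten-element array returns only five items: after skipping five, fewer than ten remain, mirroring $slice's behaviour when the limit overshoots the array.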
A $meta expression has the following syntax:

{ <projectedFieldName>: { $meta: <metaDataKeyword> } }

The textScore keyword returns the score associated with the corresponding $text query for each matching document. The text score signifies how well the document matched the stemmed term or terms. When sorting by text score, the default order is descending. As a basic example we can see the following:

db.collection.find( <query>, { score: { $meta: "textScore" } } )

The $meta expression can also be part of a sort() expression. We will get into detail later.

db.collection.find( <query>, { score: { $meta: "textScore" } } ).sort( { score: { $meta: "textScore" } } )

Summary: Finding information is a very important aspect of databases, and the same can be said about projection in MongoDB. In this article we have only scratched the surface of projection; still, you can see that there are many other ways we can use find() to project a certain result in MongoDB.

Reference: How to Use Projection in MongoDB? from our JCG partner Piyas De at the Phlox Blog blog....

The Java Origins of Angular JS: Angular vs JSF vs GWT

A superheroic Javascript framework needs a good origin story. Let's try to patch it together, while going over the use of Angular JS in the enterprise Java world and the Angular take on MVC. This post will go over the following topics, and end with an example:

- The Java Origins of Angular JS
- Angular vs JSF
- Angular vs GWT
- Angular vs jQuery
- The Angular take on MVC (or MVW)
- The M in MVC – Scopes
- The V in MVC – Directives
- The C in MVC – Controllers

The Origins of Angular JS

Angular is becoming a framework of choice for developing web applications in enterprise settings, where traditionally the backend is built in Java and the frontend is built in a Java/XML based framework such as JSF or GWT. As Java developers often living in the Spring/Hibernate world, we might wonder how a dependency-injection, dirty-checking based MVC framework ever managed to jump from the server into our browsers, and find that to be an interesting coincidence.

The Story Behind Angular

It turns out that the similarities are likely not a coincidence, because at its roots Angular was built by Java developers at Google who felt that they were not being productive building frontend applications in Java, specifically GWT. These are some important quotes from the Angular developers about the origins of Angular, recently on the Javascript Jabber Podcast (transcript link here):

we were building something in GWT and was getting really frustrated just how unproductive I was being.

we could build the application (Google Feedback) much faster than we could build it in GWT.

So this means Angular was effectively created by full-time Java GWT developers, as a response to how they felt that Java frameworks limited their frontend development productivity.

Is JSF or GWT still the way to go?

Although with two very different approaches, one of the main goals of both JSF and GWT is to abstract at least part of the web away, by allowing web development to be done in the Java/XML world.
But it seems that in this day and age of HTML5 browsers, frameworks like JSF/GWT are much more complex than the underlying platform that they are trying to abstract away in the first place. Although they can be made to work fine, the question is: at what cost? Often the underlying browser technologies leak through to the developer, who ends up having to know HTML, CSS and Javascript anyway in order to implement many real-world requirements. This leaves the developer wondering why browser technologies can't be used directly without so many constraints and intermediate layers of abstraction, because in the end there is really no escape from them. Browser technologies are actually simpler, more widespread and far better documented than any Java framework could ever be.

Historical context of JSF and GWT

It's important to realize how JSF/GWT came to be in the first place: they were created to be used in scenarios where an enterprise backend already existed built in Java/XML, and a need existed to reuse that same team of enterprise developers to build the frontend as well. From a project management point of view, at first look and still today, this makes a lot of sense. Also from a historical point of view, JSF/GWT were created in a context where the browser was a much quirkier platform than it is today, with far fewer developer tools available. So the goal of the framework was to abstract at least some of the browser technologies away, enabling it to be used by a wider developer base.

Angular vs JSF

JSF came more or less at the same time as Ajax exploded onto the web development scene a decade ago. The initial version of JSF was not designed with Ajax in mind, but was instead meant as a full page request/response model. In this model, a DOM-like tree of components representing the user interface exists in memory, but this tree exists only on the server side.
The server View then gets converted back and forth to HTML, CSS and Javascript, treating the browser mostly as a rendering platform with no state and limited control over what is going on. Pages are generated by converting the server View representation to HTML, CSS and Javascript via a set of special classes called Renderers, before sending the page to the user.

How does JSF work?

The user then interacts with the page and sends back an action, typically via an HTTP POST, and a server-side lifecycle is triggered via the JSF Controller, which restores the view tree, applies the new values to the view and validates them, updates the domain model, invokes the business logic and renders back a new view. The framework was then evolved in JSF 2 with native Ajax support and stateless web development, but the main approach of generating the HTML for the browser from a server-side model remained.

How does Angular compare to JSF?

The main design difference is that in Angular the Model, the View and the Controller were moved from the server into the browser itself. In Angular, the browser technologies are not seen as something to be avoided or hidden, but something to be used to the full extent of their capabilities, to build something much more similar to a Swing fat client than to a web page. Angular does not mandate this, but the server typically has very little to no state and serves mostly JSON via REST services.

How important is Javascript in JSF?

The take of JSF on Javascript seems to be that the language is something that JSF library developers need to know, but usually not the application developers. The most widespread JSF library, Primefaces, internally contains thousands of lines of Javascript code for its jQuery-based frontend widgets, but Primefaces-based projects often have very little to no Javascript in the application code base itself.
Still, in order to do custom component development in Primefaces, it's important to know Javascript and jQuery, but usually only a small part of the application team needs to know them.

Angular vs GWT

A second-generation take on Java web development in the browser came with the arrival of GWT. In the GWT approach, the Model, View and Controller are also moved to the browser, just like in Angular. The main difference is the way that Javascript is handled: GWT provides a Java-to-Javascript compiler that treats Javascript as a client-side bytecode execution engine. In this model the development is done entirely in Java, and with a build process the code gets compiled down to Javascript and executed in the browser.

The GWT take on HTML and CSS

In GWT, HTML and CSS are not meant to be completely hidden from the developer, although XML namespaces are provided to lay out at least some of the page's major blocks. When getting down to the level of forms, an HtmlPanel is provided to allow building pages in HTML and CSS directly. This is, by the way, also possible in JSF, although in the case of both frameworks developers typically try to avoid HTML and CSS as much as possible, by using the XML namespaces to their maximum possible extent.

Why the Javascript transpilation approach?

GWT is not so different from Angular in a certain way: it's MVC in the browser, with Javascript being a transpilation target rather than the application development language. The main goal of that transpilation is again to reuse the same developer team that builds the backend, and to abstract away browser quirks.

Does the GWT object-oriented approach help?

The GWT programming model means that the web page is viewed from an object-oriented point of view: the page is seen in the program as a network of interconnected objects instead of a document.
The notions of document and elements are hidden away by the framework, but it turns out that this extra level of indirection, although familiar, ends up not being that helpful and often gets in the way of the developer more than anything else.

Is the extra layer of abstraction needed?

The fact is that the notions of page and elements are already simple and powerful enough that they don't need an extra layer of abstraction around them. With the object-oriented abstraction of the page, the developer often ends up having to debug through a myriad of classes for simple things like finding where to add or remove a simple CSS class, or where to wrap an element in a div. Super Dev Mode helps, but it feels like the whole GWT hierarchy of objects, the Java-to-Javascript compiler and the ecosystem of debug modes, browser and IDE plugins are all together far more complex than what they are trying to hide away in the first place: the web.

Angular vs jQuery

Meanwhile, and in parallel, a new approach came along in the Javascript world for tackling browser differences: the idea that a Javascript library can provide a common API that works well in all browsers. The library detects the browser at runtime and internally adapts the code used, so that the same results occur in all browsers. Such a library is much simpler to use, as it does not require browser-quirk knowledge, and can appeal to a wider developer base. The most successful of those libraries is jQuery, which is mostly a page manipulation library and is not meant to be an MVC framework.

jQuery in the Java World

Still, jQuery is the client-side basis of the most popular JSF framework: Primefaces. The main difference between Angular and jQuery is that in jQuery there is no notion of Model or Controller; the document is instead manipulated directly.
A lot of code like this is written when using jQuery (example from the Primefaces Javascript autocomplete widget):

this.itemtip = $('<div id="' + this.id + '_itemtip" class="ui-autocomplete-itemtip ui-state-highlight ui-widget ui-corner-all ui-shadow"></div>')
.appendTo(document.body);

As we can see, the Primefaces developers themselves need to know HTML, CSS and Javascript, although many application developers use the provided XML tags that wrap the frontend widgets and treat them as a black box. This type of code is reminiscent of the code initially written in Java web development, when the Servlet API came along but there weren't yet any JSPs:

out.println("<h1>" + message + "</h1>");

What Angular allows is to decouple the Model from the View, and loosely glue the two together with a Controller.

The Angular JS take on MVC (or MVW)

Angular positions itself as an MVW framework – Model, View, Whatever. This means that it acknowledges the clear separation of a Model, which can be a view-specific model and not necessarily a domain model. In Angular the Model is just a POJO – a Plain Old Javascript Object. Angular also acknowledges the existence of a View, which is bound declaratively to the Model. The View is just HTML with a special expression language for Model and user-interaction binding, plus a reusable component-building mechanism known as Directives. It also acknowledges the need for something to glue the Model and the View together, but it does not name this element, hence the "Whatever". In MVC this element is the Controller, in MVP it's the Presenter, etc.

Minimal Angular Example

Let's go over the three elements of MVC and see what they correspond to in Angular, using a minimal interactive multiplication example; here it is working in a jsFiddle. As you can see, the result is updated immediately once the two factors change. Doing this in something like JSF or GWT would be a far larger amount of work.

What would this look like in JSF and GWT?
In JSF, for example in Primefaces, this would mean having to write a small jQuery plugin or routine to add the interactive multiplication feature, create a facelet template, declare a facelet tag and add it to the tag library, etc. In GWT this would mean bootstrapping a sample app, creating a UI binder template, adding listeners to the two fields or setting up the editor framework, etc.

Enhanced Developer Productivity

We can see what the Angular JS developers meant by enhanced productivity, as the complete Angular version is the following, written in a few minutes:

<div ng-app="Calculator" ng-controller="CalculatorCtrl">
    <input type="text" ng-model="model.left">
    *
    <input type="text" ng-model="model.right">
    =
    <span>{{multiply()}}</span>
</div>

angular.module('Calculator', [])
    .controller('CalculatorCtrl', function($scope) {
        $scope.model = { left: 10, right: 10 };
        $scope.multiply = function() {
            return $scope.model.left * $scope.model.right;
        }
    });

So let's go over the MVC setup of this sample code, starting with the M.

The M in MVC – Angular Scopes

The Model in Angular is just a simple Javascript object. This is the model object, being injected into the scope:

$scope.model = { left: 10, right: 10 };

Injecting the model into the scope makes it dirty-checked, so that any changes in the model are reflected immediately back to the view. In the case of the example above, editing the factor input boxes triggers dirty checking, which triggers the recalculation of the multiplication, which gets instantly reflected in the result.

The V in MVC – Enhanced HTML

The view in Angular is just HTML annotated with a special expression language, such as the call to multiply(). The HTML is really acting in this case as a client-side template, which could be split into reusable HTML components called Directives.

The C in MVC – Angular Controllers

CalculatorCtrl is the controller of the example application.
It initializes the model before the view gets rendered, and acts as the glue between the view and the model by defining the multiply function. The controller typically defines observers on the model that trigger event-driven code.

Conclusions

It seems that polyglot development in both Java and Javascript is a viable option for the future of enterprise development, and that Angular is a major part of that view on how to build enterprise apps. The simplicity and speed of development that it brings are attractive to frontend Java developers, who to one degree or another already need to deal with HTML, CSS and often Javascript anyway. So an attractive option seems to be that a portion of enterprise application code will start being written in Javascript using Angular instead of Java, but only the next few years will tell.

An alternative way of using Angular

Another possibility is that Angular gets used internally by frameworks such as JSF as an internal implementation mechanism. See for example this post from the lead of the Primefaces project:

I have plans to add built-in js mvc framework support, probably it will be angular.

So it's possible that Angular will be used as an implementation mechanism by technologies that follow the approach of keeping the application developer experience Java- and XML-based as much as possible. One thing seems sure: Angular, either as an application MVC framework or as an internal detail of a Java/XML based framework, seems to be slowly but surely making its way into the enterprise Java world.

Related Links: A great online resource for Angular: the egghead.io Angular lessons, a series of minimal 5-minute video lectures by John Lindquist (@johnlindquist).

Reference: The Java Origins of Angular JS: Angular vs JSF vs GWT from our JCG partner Aleksey Novik at the JHades Blog blog....

Configuring chef Part-2

Let’s recap what we did in the last blog: Set up the workstation and chef-repo. Registered on Chef to use hosted Chef as the chef-server. Bootstrapped a node to be managed by the chef-server. Downloaded the “apache” cookbook into our chef-repo. Uploaded the “apache” cookbook to the chef-server. Added recipe[apache] to the run-list of the node. Ran the chef-client on the client to apply the cookbook. Now let’s continue and try to understand some more concepts around Chef and see them in action. Node Object The beauty of Chef is that it gives an object-oriented approach to the entire configuration management. The Node Object, as the name suggests, is an object of the class Node (http://rubydoc.info/gems/chef/Chef/Node). The node object consists of the run-list and node attributes, stored as JSON on the Chef server. The chef-client gets a copy of the node object from the Chef server and maintains the state of the node. Attributes: An attribute is a specific detail about a node, such as an IP address, a host name, a list of loaded kernel modules, etc. Data-bags: Data bags are JSON files used to store data that is needed across all nodes and is not tied to a particular cookbook. They can be accessed inside cookbooks and attribute files using search; examples include user profiles, groups and users. They are also used by roles and environments, providing persistence available across all nodes. Now, let’s explore the node object and see the attributes and data bags. We will also see how we can modify and set them. First let’s see which nodes are registered with the chef-server: Anirudhs-MacBook-Pro:chef-repo anirudh$ knife node list aws-linux-node aws-node-ubuntu awsnode Now let’s see the details of the node awsnode.
Anirudhs-MacBook-Pro:chef-repo anirudh$ knife node show awsnode Node Name: awsnode Environment: _default FQDN: ip-172-31-36-73.us-west-2.compute.internal IP: Run List: recipe[apache] Roles: Recipes: apache, apache::default Platform: redhat 7.0 Tags: Finding specific attributes: You can find the fqdn of the AWS node. Anirudhs-MacBook-Pro:chef-repo anirudh$ knife node show awsnode -a fqdn awsnode: fqdn: ip-172-31-36-73.us-west-2.compute.internal Search: Search is one of the best features of Chef. The Chef server uses Solr for searching the node objects, so we can provide Solr-style queries to search the JSON node object attributes and data bags. Let’s see how we can search all the nodes and see their fqdn (fully qualified domain name): Anirudhs-MacBook-Pro:chef-repo anirudh$ knife search node "*:*" -a fqdn 2 items found node1: fqdn: centos63.example.com awsnode: fqdn: ip-172-31-36-73.us-west-2.compute.internal Changing the defaults using attributes Let’s try to change some defaults in our apache cookbook using the attributes. In the /chef-repo/cookbooks/apache/attributes folder we can find the file default.rb (create it if it doesn’t exist). Add the following: default["apache"]["indexfile"]="index1.html" Now go to the folder cookbooks/apache/files/default and create a file index1.html: <html> <body> <h1> Dude!! This is index1.html, it has been changed by chef!</h1> </body> </html> The last thing we need to do to get this working is to change the recipe and tell it to pick the default index file from the node attribute ‘indexfile’ which we have just set.
So, open the file ‘cookbooks/apache/recipes/default.rb’ and append this: cookbook_file "/var/www/index.html" do source node["apache"]["indexfile"] mode "0644" end Now upload the cookbook to the Chef server using the command: Anirudhs-MacBook-Pro:chef-repo anirudh$ knife cookbook upload apache And then go to the node and run the chef-client: opscode@awsnode:~$ sudo chef-client Now, hit the external IP of the node in the browser, and we can see the change. So, we just used an attribute to change the default index page of the Apache server. An important thing to note here is the precedence of attribute settings. Defaults set in a recipe take precedence over those in attribute files, and a role takes precedence over recipes. The order of precedence is as follows: Ohai > Role > Environment > Recipe > Attribute file Roles: A role tells us what a particular node is acting as, the type of the node: is it a “web server”, a “database”, etc.? The use of this feature is that we can associate a run_list with it. So, instead of providing recipes as the run_list of the node, we associate the run_list with a role and then apply that role to the node. Creating a role: knife role create webserver Check if the role is created: Anirudhs-MacBook-Pro:chef-repo anirudh$ knife role show webserver chef_type: role default_attributes: apache: sites: admin: port: 8000 description: Web Server env_run_lists: json_class: Chef::Role name: webserver override_attributes: run_list: recipe[apache] The role we just created has the apache recipe in its run_list. Assign this role to the node “awsnode”: Anirudhs-MacBook-Pro:chef-repo anirudh$ knife node run_list add awsnode 'role[webserver]' awsnode: run_list: recipe[apache] role[webserver] Upload this role to the chef-server: Anirudhs-MacBook-Pro:chef-repo anirudh$ knife role from file webserver.rb Now run the chef-client on the node. Environment: An environment means a QA, dev or production environment.
We can assign a node to any environment and then apply some environment-specific attributes. It is a mere tagging of nodes; environment attributes DO NOT supersede role attributes. In the coming blogs we will see how we can define dev, QA and production environments, apply different roles to nodes, configure attributes and data bags, and build a complete ecosystem.Reference: Configuring chef Part-2 from our JCG partner Anirudh Bhatnagar at the anirudh bhatnagar blog....

User Stories are Rarely Appropriate

All tools are useful when used appropriately, and user stories are no different. User stories are fantastic when used in small teams on small projects where the team is co-located and has easy access to customers. User stories can quickly fall apart under any of the following situations: the team or project is not small; the team is not in a single location; customers are hard to access; the project end date must be relatively fixed. User stories were introduced as a core part of Extreme Programming (XP). Extreme Programming assumes you have small co-located teams; if you relax (or abandon) any of these constraints, you will probably end up with a process out of control. XP, and hence user stories, works in high-intensity environments where there are strong feedback loops inside the team and with customers: Individuals and Interactions over Processes and Tools; Customer Collaboration over Contract Negotiation. User stories need intense intra-team / customer communication to succeed. User stories are a light-weight methodology that facilitates intense interactions between customers and developers and puts the emphasis on the creation of code, not documentation. Their simplicity makes it easy for customers to help write them, but they must be complemented with timely interactions so that issues can be clarified. Large teams dilute interactions between developers; infrequent communication leads to a lack of team synchronization. Most organizations break larger teams into smaller groups where communication is primarily via email or managers, which kills communication and interaction. Larger projects have non-trivial architectures. Building a non-trivial architecture by only looking at the end-user requirements is impossible. This is like having only the leaves of a tree and thinking you can quickly determine where the branches and the trunk must be. User stories don’t work with teams where intense interaction is not possible.
Teams distributed over multiple locations or time zones do not allow intense interaction. You are delusional if you think regular conference calls constitute intense interaction; most stand-up calls done via conference degrade into design or defect sessions. When the emphasis is on the writing of code, it is critical that customers can be accessed in a timely fashion. If your customers are only indirectly accessible through product managers or account representatives every few days, then you will end up with tremendous latency. Live weekly demos with customers are necessary to flush out misunderstandings quickly and keep you on the same page. User stories are virtually impossible to estimate. Often, we use user stories because there is a high degree of requirements uncertainty, either because the requirements are unknown or because it is difficult to get consistent requirements from customers. Since user stories are difficult to estimate, especially when you don’t know all the requirements, project end dates are impossible to predict with accuracy. To summarize, intense interaction between customers and developers is critical for user stories to be effective because it does several things: it keeps all the customers and developers on the same page; it flushes out misunderstandings as quickly as possible. All of the issues listed initially dilute the intensity of communication either between the team members or between the developers and customers. Each issue that increases the latency of communication will increase misunderstandings and increase the time it takes to find and remove defects. So if you have any of the following: large or distributed teams; a project with non-trivial architecture; difficult access to customers, i.e. high latency; high requirements uncertainty but a need for a fixed project end date, then user stories are probably not your best choice of requirements methodology.
At best you may be able to complement your user stories with storyboards; at worst you may need some light-weight form of use case. Light-weight use case tutorial: (1 of 4) A use case is a dialog; (2 of 4) Use case diagrams (UML); (3 of 4) Adding screens and reports; (4 of 4) Adding minimal execution context. Other requirements articles: Shift Happens (long); Don’t manage enhancements in the Bug Tracker; When BA means B∪ll$#!t Artist.Reference: User Stories are Rarely Appropriate from our JCG partner Dalip Mahal at the Accelerated Development blog....

The Knapsack problem

I found the Knapsack problem tricky and interesting at the same time. I am sure if you are visiting this page, you already know the problem statement, but just for the sake of completeness: Problem: Given a knapsack of a maximum capacity of W and N items, each with its own value and weight, throw items into the knapsack such that the final contents have the maximum value. Yikes!!! Link to the problem page on Wikipedia. Here’s the general way the problem is explained: consider a thief who gets into a home to rob, carrying a knapsack. There is a fixed number of items in the home, each with its own weight and value: jewellery, with low weight and high value, versus tables, with low value but a lot of weight. To add fuel to the fire, the thief has an old knapsack which has limited capacity. Obviously, he can’t split the table in half or the jewellery into 3/4ths. He either takes it or leaves it. Example: Knapsack max weight: W = 10 (units). Total items: N = 4. Values of items: v[] = {10, 40, 30, 50}. Weights of items: w[] = {5, 4, 6, 3}. A cursory look at the example data tells us that the max value that we could accommodate within the limit of max weight 10 is 50 + 40 = 90, with a weight of 7. Approach: The way this is optimally solved is using dynamic programming: solving smaller knapsack problems and then expanding them for the bigger problem. Let’s build an Item x Weight array called V (the value array): V[N][W] = 4 rows * 10 columns (plus a 0th row and a 0th column for the base cases). Each of the values in this matrix represents a smaller knapsack problem. Base case 1: Let’s take the case of the 0th column. It just means that the knapsack has 0 capacity. What can you hold in it? Nothing. So, let’s fill it all up with 0s. Base case 2: Let’s take the case of the 0th row. It just means that there are no items in the house. What do you hold in your knapsack if there are no items? Nothing again!!! All zeroes. Solution: Now, let’s start filling in the array row-wise. What do row 1 and column 1 mean?
That given the first item (row), can you accommodate it in the knapsack with capacity 1 (column)? Nope. The weight of the first item is 5. So, let’s fill in 0. In fact, we wouldn’t be able to fill in anything until we reach column 5 (weight 5). Once we reach column 5 (which represents weight 5) on the first row, it means that we can accommodate item 1. Let’s fill in 10 there (remember, this is a value array):  Moving on, for weight 6 (column 6), can we accommodate anything else with the remaining weight of 1 (weight – weight of this item => 6 – 5)? Hey, remember, we are on the first item. So, it is kind of intuitive that the rest of the row will just hold the same value too, since we are unable to add any other item for the extra weight that we have got.  So, the next interesting thing happens when we reach column 4 in the row for Item 2. The current running weight is 4. We should check the following cases. Can we accommodate Item 2? Yes, we can. Item 2’s weight is 4. Is the value for the current weight higher without Item 2? Check the previous row for the same weight. Nope, the previous row has 0 in it, since we were not able to accommodate Item 1 in weight 4. Can we accommodate two items in the same weight so that we could maximize the value? Nope. The remaining weight after deducting Item 2’s weight is 0. Why the previous row? Simply because the previous row at weight 4 is itself a smaller knapsack solution which gives the max value that could be accumulated for that weight until that point (traversing through the items). To exemplify: The value of the current item = 40. The weight of the current item = 4. The weight that is left over = 4 – 4 = 0. Check the row above (the item above in the case of Item 1, or the cumulative max value in the case of the rest of the rows). For the remaining weight 0, are we able to accommodate Item 1?
Simply put, is there any value at all in the row above for the given weight? The calculation goes like so: Take the max value for the same weight without this item: previous row, same weight = 0 => V[item-1][weight]. Take the value of the current item + the value that we could accommodate with the remaining weight: value of the current item + value in the previous row at weight 0 (total weight until now (4) - weight of the current item (4)) => val[item-1] + V[item-1][weight-wt[item-1]]. The max among the two (0 and 40) is 40. The next and most important event happens at column 9 in the row for Item 2. Meaning we have a weight of 9 and we have two items. Looking at the example data, we could accommodate the first two items. Here, we consider a few things: 1. The value of the current item = 40 2. The weight of the current item = 4 3. The weight that is left over = 9 - 4 = 5 4. Check the row above. At the remaining weight 5, are we able to accommodate Item 1?  So, the calculation is: Take the max value for the same weight without this item: previous row, same weight = 10. Take the value of the current item + the value that we could accumulate with the remaining weight: value of the current item (40) + value in the previous row at weight 5 (total weight until now (9) - weight of the current item (4)) = 40 + 10 = 50. 10 vs 50 => 50. At the end of solving all these smaller problems, we just need to return the value at V[N][W] – Item 4 at weight 10. Complexity Analyzing the complexity of the solution is pretty straightforward. We just have a loop over W within a loop over N => O(NW). Implementation: Here comes the obligatory implementation code in Java:

class Knapsack {

    public static void main(String[] args) throws Exception {
        int val[] = {10, 40, 30, 50};
        int wt[] = {5, 4, 6, 3};
        int W = 10;

        System.out.println(knapsack(val, wt, W));
    }

    public static int knapsack(int val[], int wt[], int W) {

        //Get the total number of items.
        //Could be wt.length or val.length. Doesn't matter
        int N = wt.length;

        //Create a matrix.
        //Items are in rows, weights in columns; +1 on each side
        int[][] V = new int[N + 1][W + 1];

        //What if there are no items at home -
        //fill all columns of row 0 with 0
        for (int col = 0; col <= W; col++) {
            V[0][col] = 0;
        }

        //What if the knapsack's capacity is 0 -
        //fill column 0 of every row with 0
        for (int row = 0; row <= N; row++) {
            V[row][0] = 0;
        }

        for (int item = 1; item <= N; item++) {

            //Let's fill the values row by row
            for (int weight = 1; weight <= W; weight++) {

                //Is the current item's weight less
                //than or equal to the running weight
                if (wt[item - 1] <= weight) {

                    //Given a weight, check if the value of the current
                    //item + the value of the item that we could afford
                    //with the remaining weight is greater than the value
                    //without the current item itself
                    V[item][weight] = Math.max(
                        val[item - 1] + V[item - 1][weight - wt[item - 1]],
                        V[item - 1][weight]);
                } else {
                    //If the current item's weight is more than the
                    //running weight, just carry forward the value
                    //without the current item
                    V[item][weight] = V[item - 1][weight];
                }
            }
        }

        //Printing the matrix
        for (int[] rows : V) {
            for (int col : rows) {
                System.out.format("%5d", col);
            }
            System.out.println();
        }

        return V[N][W];
    }
}

Reference: The Knapsack problem from our JCG partner Arun Manivannan at the Rerun.me blog....
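As a quick sanity check of the DP logic above, the following small harness (hypothetical, not part of the original post) runs the same recurrence without the matrix printing and confirms the expected answer of 90 (items with values 40 and 50, total weight 4 + 3 = 7 <= 10):

```java
// Self-contained check of the 0/1 knapsack DP described in the article.
class KnapsackCheck {

    static int knapsack(int[] val, int[] wt, int W) {
        int N = wt.length;
        int[][] V = new int[N + 1][W + 1]; // row 0 and column 0 stay 0 (base cases)

        for (int item = 1; item <= N; item++) {
            for (int weight = 1; weight <= W; weight++) {
                if (wt[item - 1] <= weight) {
                    // take the better of: skipping the item, or taking it plus
                    // the best value achievable with the remaining capacity
                    V[item][weight] = Math.max(
                        val[item - 1] + V[item - 1][weight - wt[item - 1]],
                        V[item - 1][weight]);
                } else {
                    V[item][weight] = V[item - 1][weight]; // item doesn't fit
                }
            }
        }
        return V[N][W];
    }

    public static void main(String[] args) {
        int[] val = {10, 40, 30, 50};
        int[] wt  = {5, 4, 6, 3};
        System.out.println(knapsack(val, wt, 10)); // prints 90
    }
}
```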

An Introduction to Generics in Java – Part 6

This is a continuation of an introductory discussion on Generics, previous parts of which can be found here. In the last article we discussed recursive bounds on type parameters. We saw how a recursive bound helped us to reuse the vehicle comparison logic. At the end of that article, I suggested that a possible type mix-up may occur when we are not careful enough. Today we will see an example of this. The mix-up can occur if someone mistakenly creates a subclass of Vehicle in the following way:

/**
 * Definition of Vehicle
 */
public abstract class Vehicle<E extends Vehicle<E>> implements Comparable<E> {
    // other methods and properties

    public int compareTo(E vehicle) {
        // method implementation
    }
}

/**
 * Definition of Bus
 */
public class Bus extends Vehicle<Bus> {}

/**
 * BiCycle, new subtype of Vehicle
 */
public class BiCycle extends Vehicle<Bus> {}

/**
 * Now this class’s compareTo method will take a Bus type
 * as its argument. As a result, you will not be able to compare
 * a BiCycle with another BiCycle, but only with a Bus.
 */
cycle.compareTo(anotherCycle); // This will generate a compile time error
cycle.compareTo(bus); // but you will be able to do this without any error

This type of mix-up does not occur with enums because the JVM takes care of subclassing and creating instances for enum types, but if we use this style in our code then we have to be careful. Let’s talk about another interesting application of recursive bounds.
Consider the following class: public class MyClass { private String attrib1; private String attrib2; private String attrib3; private String attrib4; private String attrib5;public MyClass() {}public String getAttrib1() { return attrib1; }public void setAttrib1(String attrib1) { this.attrib1 = attrib1; }public String getAttrib2() { return attrib2; }public void setAttrib2(String attrib2) { this.attrib2 = attrib2; }public String getAttrib3() { return attrib3; }public void setAttrib3(String attrib3) { this.attrib3 = attrib3; }public String getAttrib4() { return attrib4; }public void setAttrib4(String attrib4) { this.attrib4 = attrib4; }public String getAttrib5() { return attrib5; }public void setAttrib5(String attrib5) { this.attrib5 = attrib5; } } If we want to create an instance of this class, then we can do this: MyClass mc = new MyClass(); mc.setAttrib1("Attribute 1"); mc.setAttrib2("Attribute 2"); The above code creates an instance of the class and initializes the properties. If we could use Method Chaining here, then we could have written: MyClass mc = new MyClass().setAttrib1("Attribute 1") .setAttrib2("Attribute 2"); which obviously looks much better than the first version. 
However, to enable this type of method chaining, we need to modify MyClass in the following way: public class MyClass { private String attrib1; private String attrib2; private String attrib3; private String attrib4; private String attrib5;public MyClass() {}public String getAttrib1() { return attrib1; }public MyClass setAttrib1(String attrib1) { this.attrib1 = attrib1; return this; }public String getAttrib2() { return attrib2; }public MyClass setAttrib2(String attrib2) { this.attrib2 = attrib2; return this; }public String getAttrib3() { return attrib3; }public MyClass setAttrib3(String attrib3) { this.attrib3 = attrib3; return this; }public String getAttrib4() { return attrib4; }public MyClass setAttrib4(String attrib4) { this.attrib4 = attrib4; return this; }public String getAttrib5() { return attrib5; }public MyClass setAttrib5(String attrib5) { this.attrib5 = attrib5; return this; } } and then we will be able to use method chaining for instances of this class. However, if we want to use method chaining where inheritance is involved, things kind of get messy: public abstract class Parent { private String attrib1; private String attrib2; private String attrib3; private String attrib4; private String attrib5;public Parent() {}public String getAttrib1() { return attrib1; }public Parent setAttrib1(String attrib1) { this.attrib1 = attrib1; return this; }public String getAttrib2() { return attrib2; }public Parent setAttrib2(String attrib2) { this.attrib2 = attrib2; return this; }public String getAttrib3() { return attrib3; }public Parent setAttrib3(String attrib3) { this.attrib3 = attrib3; return this; }public String getAttrib4() { return attrib4; }public Parent setAttrib4(String attrib4) { this.attrib4 = attrib4; return this; }public String getAttrib5() { return attrib5; }public Parent setAttrib5(String attrib5) { this.attrib5 = attrib5; return this; } }public class Child extends Parent { private String attrib6; private String attrib7;public Child() {}public String 
getAttrib6() { return attrib6; }public Child setAttrib6(String attrib6) { this.attrib6 = attrib6; return this; }public String getAttrib7() { return attrib7; }public Child setAttrib7(String attrib7) { this.attrib7 = attrib7; return this; } }/** * Now try using method chaining for instances of Child * in the following way, you will get compile time errors. */ Child c = new Child().setAttrib1("Attribute 1").setAttrib6("Attribute 6"); The reason for this is that even though Child inherits all the setters from its parent, the return type of all those setter methods is Parent, not Child. So the first setter will return a reference of type Parent, and calling setAttrib6 on it will result in a compilation error, because Parent does not have any such method. We can resolve this problem by introducing a generic type parameter on Parent and defining a recursive bound on it. All of its children will pass themselves as the type argument when they extend from it, ensuring that the setter methods will return references of their own type: public abstract class Parent<T extends Parent<T>> { private String attrib1; private String attrib2; private String attrib3; private String attrib4; private String attrib5;public Parent() { }public String getAttrib1() { return attrib1; }@SuppressWarnings("unchecked") public T setAttrib1(String attrib1) { this.attrib1 = attrib1; return (T) this; }public String getAttrib2() { return attrib2; }@SuppressWarnings("unchecked") public T setAttrib2(String attrib2) { this.attrib2 = attrib2; return (T) this; }public String getAttrib3() { return attrib3; }@SuppressWarnings("unchecked") public T setAttrib3(String attrib3) { this.attrib3 = attrib3; return (T) this; }public String getAttrib4() { return attrib4; }@SuppressWarnings("unchecked") public T setAttrib4(String attrib4) { this.attrib4 = attrib4; return (T) this; }public String getAttrib5() { return attrib5; }@SuppressWarnings("unchecked") public T setAttrib5(String attrib5) { this.attrib5 = attrib5; return (T)
this; } }public class Child extends Parent<Child> { private String attrib6; private String attrib7;public String getAttrib6() { return attrib6; }public Child setAttrib6(String attrib6) { this.attrib6 = attrib6; return this; }public String getAttrib7() { return attrib7; }public Child setAttrib7(String attrib7) { this.attrib7 = attrib7; return this; } } Notice that we have to explicitly cast this to type T because the compiler does not know whether this conversion is possible, even though it is, since T is by definition bounded by Parent<T>. Also, since we are casting an object reference to T, an unchecked warning will be issued by the compiler. To suppress it we used @SuppressWarnings(“unchecked”) above the setters. With the above modifications, it’s perfectly valid to do this: Child c = new Child().setAttrib1("Attribute 1") .setAttrib6("Attribute 6"); When writing setters this way, we should be careful not to use recursive bounds for any other purpose, such as accessing a child’s state from the parent, because that would expose the parent to the internal details of its subclasses and eventually break encapsulation. With this post I finish the basic introduction to Generics. There are so many things that I did not discuss in this series, because I believe they are beyond the introductory level. Until next time.Reference: An Introduction to Generics in Java – Part 6 from our JCG partner Sayem Ahmed at the Random Thoughts blog....
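To see the pattern above in isolation, here is a minimal, self-contained distillation (the class and method names Base/Derived are illustrative, not from the original post):

```java
// Minimal version of the recursive-bound fluent-setter pattern.
abstract class Base<T extends Base<T>> {
    private String name;

    @SuppressWarnings("unchecked")
    T name(String name) {
        this.name = name;
        return (T) this; // safe by convention: each subclass passes itself as T
    }

    String getName() {
        return name;
    }
}

class Derived extends Base<Derived> {
    private int size;

    Derived size(int size) {
        this.size = size;
        return this;
    }

    int getSize() {
        return size;
    }
}

class ChainDemo {
    public static void main(String[] args) {
        // name() returns Derived (not Base), so the chain continues with size()
        Derived d = new Derived().name("x").size(3);
        System.out.println(d.getName() + " " + d.getSize()); // prints: x 3
    }
}
```

Without the recursive bound, name() would return Base and the call to size() would not compile, which is exactly the problem the article describes.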

Event Tracking with Analytics API v4 for Android

As I’ve learned from developing my own mileage tracking app for cyclists and commuters, getting ratings and feedback from users can be challenging and time-consuming. Event tracking can help by enabling you to develop a sense of how popular a particular feature is and how often it’s getting used by users of your app. In Android, Google Play Services’ Analytics API v4 can be used to gather statistics on the user events that occur within your app. In this post I’ll quickly show you how to use this API to accomplish simple event tracking. Getting started It’s important to say at this point that all of these statistics are totally anonymous. App developers who use analytics have no idea who is using each feature or generating each event, only that an event occurred. Assuming you’ve set up Google Analytics v4 in your app as per my last post, tracking app events is fairly simple. The first thing you need is your analytics application Tracker (obtained in my case by calling getApplication() as per the previous post). Bear in mind that this method is only available in an object that extends Android’s Activity or Service class, so you can’t use it everywhere without some messing about. Once you have your application Tracker you should use an analytics EventBuilder to build() an event and use the send() method on the Tracker to send it to Google. Building an event is easy. You simply create a new HitBuilders.EventBuilder, setting a ‘category’ and an ‘action’ for your new event. Sample code The sample code below shows how I track the user’s manual use of the ‘START’ button in Trip Computer. I have similar tracking events for STOP and also for the use of key settings and features like the activation of the app’s unique ‘battery-saver’ mode (which I understand is quite popular with cyclists). // Get an Analytics Event tracker.
Tracker myTracker = ((TripComputerApplication) getApplication())
        .getTracker(TripComputerApplication.TrackerName.APP_TRACKER);

// Build and Send the Analytics Event.
myTracker.send(new HitBuilders.EventBuilder()
        .setCategory("Journey Events")
        .setAction("Pressed Start Button")
        .build());

Once the events are reported back to Google, the Analytics console will display them in the ‘Behaviour > Events > Overview’ panel and display a simple count of how many times each event was raised within the tracking period. You can also further subdivide the actions by setting a ‘label’ or by providing a ‘value’ (but neither of these is actually required). More information For more information see the following articles: https://support.google.com/analytics/answer/1033068?hl=en https://developers.google.com/analytics/devguides/collection/gajs/eventTrackerGuide#Anatomy Reference: Event Tracking with Analytics API v4 for Android from our JCG partner Ben Wilcock at Ben Wilcock’s blog....

Daemonizing JVM-based applications

Deployment architecture design is a vital part of any custom-built server-side application development project. Due to its significance, deployment architecture design should commence early and proceed in tandem with other development activities. The complexity of deployment architecture design depends on many aspects, including scalability and availability targets of the provided service, rollout processes as well as technical properties of the system architecture. Serviceability and operational concerns, such as deployment security, monitoring, backup/restore etc., relate to the broader topic of deployment architecture design. These concerns are cross-cutting in nature and may need to be addressed on different levels ranging from service rollout processes to the practical system management details. On the system management detail level the following challenges often arise when using a pure JVM-based application deployment model (on Unix-like platforms): how to securely shut down the app server or application? Often, a TCP listener thread listening for shutdown requests is used. If you have many instances of the same app server deployed on the same host, it’s sometimes easy to confuse the instances and shut down the wrong one. Also, you’ll have to prevent unauthorized access to the shutdown listener. creating init scripts that integrate seamlessly with system startup and shutdown mechanisms (e.g. Sys-V init, systemd, Upstart etc.) how to automatically restart the application if it dies? log file management. Application logs can be managed (e.g. rotated, compressed, deleted) by a log library. App server or platform logs can sometimes also be managed using a log library, but occasionally integration with OS-level tools (e.g. logrotate) may be necessary. There are a couple of solutions to these problems that enable tighter integration between the operating system and the application / application server. One widely used and generic solution is the Java Service Wrapper.
The Java Service Wrapper is good at addressing the above challenges and is released under a proprietary license. A GPL v2-based community licensing option is available as well. Apache commons daemon is another option. It has its roots in Apache Tomcat and integrates well with the app server, but it’s much more generic than that, and in addition to Java, commons daemon can also be used with other JVM-based languages such as Scala. As the name implies, commons daemon is Apache licensed. Commons daemon includes the following features: automatically restart the JVM if it dies; enable secure shutdown of the JVM process using standard OS mechanisms (Tomcat’s TCP-based shutdown mechanism is error-prone and insecure); redirect STDERR/STDOUT and set the JVM process name; allow integration with OS init script mechanisms (record the JVM process pid); detach the JVM process from the parent process and console; run the JVM and application with reduced OS privileges; allow coordinating log file management with OS tools such as logrotate (reopen log files with the SIGUSR1 signal). Deploying commons daemon From an application developer’s point of view commons daemon consists of two parts: the jsvc binary used for starting applications and the commons daemon Java API. During startup, the jsvc binary bootstraps the application through lifecycle methods implemented by the application and defined by the commons daemon Java API. Jsvc creates a control process for monitoring and restarting the application upon abnormal termination. Here’s an outline for deploying commons daemon with your application: implement the commons daemon API lifecycle methods in an application bootstrap class (see Using jsvc directly). compile and install jsvc. (Note that it’s usually not good practice to install a compiler toolchain on production or QA servers.) place the commons-daemon API in the application classpath. figure out the command line arguments for running your app through jsvc. Check out bin/daemon.sh in the Tomcat distribution for reference.
- Create a proper init script based on the previous step. Tomcat can be installed via the package manager on many Linux distributions, and the package typically comes with an init script that can be used as a reference.

Practical experiences

The Tomcat distribution includes daemon.sh, a generic wrapper shell script that can be used as a basis for creating a system-specific init script variant. One of the issues I encountered was that the default value of the wait configuration parameter couldn't be overridden by the invoker of the wrapper script. In some cases Tomcat's random number generator initialization could exceed the maximum wait time, resulting in the init script reporting a failure even though the app server would eventually get started. This seems to be fixed now. Another issue was that the wrapper script doesn't allow passing JVM parameters that contain spaces, which can be handy e.g. in conjunction with the JVM "-XX:OnOutOfMemoryError" & co. parameters. Using the wrapper script is optional, and it can also be changed easily, but since it includes some pieces of useful functionality, I'd rather reuse it than duplicate it, so I created a feature request and proposed a tiny patch for this (#55104). While figuring out the correct command line arguments for getting jsvc to bootstrap your application, the "-debug" argument can be quite useful for troubleshooting purposes. Also, by default jsvc changes the working directory to /, in which case absolute paths should typically be used with other options. The "-cwd" option can be used to override the default working directory.

Daemonizing Jetty

In addition to Tomcat, Jetty is another servlet container I often use. Using Commons Daemon with Tomcat poses no challenge since the integration already exists, so I decided to see how things would work with an app server that doesn't support Commons Daemon out-of-the-box.
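Before looking at the Jetty changes, here is roughly what the Commons Daemon lifecycle methods look like in an application bootstrap class. This is a simplified, standalone sketch: a real implementation would declare "implements org.apache.commons.daemon.Daemon", and init() would take a DaemonContext rather than String[]; those details are changed here so the example compiles without the commons-daemon jar on the classpath.

```java
// Sketch of a Commons Daemon bootstrap class (simplified; see the lead-in
// for how the signatures differ from the real org.apache.commons.daemon.Daemon
// interface).
public class AppDaemon {

    private Thread worker;
    private volatile boolean stopped;

    // Called once by jsvc, while still running with elevated privileges
    // (e.g. parse config, bind privileged ports).
    // Real signature: init(DaemonContext context).
    public void init(String[] args) {
        stopped = false;
    }

    // Called by jsvc after privileges have been dropped. Must return,
    // so long-running work goes into its own thread.
    public void start() {
        worker = new Thread(() -> {
            while (!stopped) {
                // ... application main loop would go here ...
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    return; // interrupted by stop()
                }
            }
        });
        worker.start();
    }

    // Called by jsvc on shutdown: signal the worker thread and wait for it.
    public void stop() throws InterruptedException {
        stopped = true;
        worker.interrupt();
        worker.join();
    }

    // Called last; release any remaining resources.
    public void destroy() {
    }

    public boolean isStopped() {
        return stopped;
    }
}
```

jsvc invokes these methods in order (init, start, then stop and destroy on shutdown), which is the contract the Jetty bootstrap class below needs to satisfy.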
To implement the necessary changes in Jetty, I cloned the Jetty source code repository, added jsvc lifecycle methods in the Jetty bootstrap class and built Jetty. After that, I started experimenting with jsvc command line arguments for bootstrapping Jetty. Jetty comes with a jetty.sh startup script that has an option called "check" for outputting various pieces of information related to the installation. Among other things it outputs the command line arguments that would be used with the JVM. This provided quite a good starting point for the jsvc command line. These are the command lines I ended up with:

export JH=$HOME/jetty-9.2.2-SNAPSHOT
export JAVA_HOME=`/usr/libexec/java_home -v 1.8`
jsvc -debug -pidfile $JH/jetty.pid -outfile $JH/std.out -errfile $JH/std.err -Djetty.logs=$JH/logs -Djetty.home=$JH -Djetty.base=$JH -Djava.io.tmpdir=/var/folders/g6/zmr61rsj11q5zjmgf96rhvy0sm047k/T/ -classpath $JH/commons-daemon-1.0.15.jar:$JH/start.jar org.eclipse.jetty.start.Main jetty.state=$JH/jetty.state jetty-logging.xml jetty-started.xml

This could be used as a starting point for a proper production-grade init script for starting and shutting down Jetty. I submitted my code changes as issue #439672 in the Jetty project issue tracker and just received word that the change has been merged with the upstream code base, so you should be able to daemonize Jetty with Apache Commons Daemon jsvc out-of-the-box in the future.

Reference: Daemonizing JVM-based applications from our JCG partner Marko Asplund at the practicing techie blog.

Grails Goodness: Using Converter Named Configurations with Default Renderers

Sometimes we want to support different levels of detail in the output of our RESTful API for the same resource. For example, a default output with the basic fields and a more detailed output with all fields for a resource. The client of our API can then choose whether the default or the detailed output is needed. One of the ways to implement this in Grails is using converter named configurations.

Grails converters, like JSON and XML, support named configurations. First we need to register a named configuration with the converter. Then we can invoke the use method of the converter with the name of the configuration and a closure with statements to generate output. The code in the closure is executed in the context of the named configuration. The default renderers in Grails, for example DefaultJsonRenderer, have a namedConfiguration property. If the property is set, the renderer will render the output in the context of the configured named configuration. Let's configure the appropriate renderers and register named configurations to support named configurations in the default renderers.

In our example we have a User resource with some properties. We want to support short and detailed output, where different properties are included in the resulting format. First we have our resource:

// File: grails-app/domain/com/mrhaki/grails/User.groovy
package com.mrhaki.grails

import grails.rest.*

// Set URI for resource to /users.
// The first format in formats is the default
// format. So we could use short if we wanted
// the short compact format as default for this
// resource.
@Resource(uri = '/users', formats = ['json', 'short', 'details'])
class User {

    String username
    String lastname
    String firstname
    String email

    static constraints = {
        email email: true
        lastname nullable: true
        firstname nullable: true
    }

}

Next we register two named configurations for this resource in our Bootstrap:

// File: grails-app/conf/Bootstrap.groovy
class Bootstrap {

    def init = { servletContext ->
        ...
        JSON.createNamedConfig('details') {
            it.registerObjectMarshaller(User) { User user ->
                final String fullname = [user.firstname, user.lastname].join(' ')
                final userMap = [
                    id      : user.id,
                    username: user.username,
                    email   : user.email,
                ]
                if (fullname) {
                    userMap.fullname = fullname
                }
                userMap
            }
            // Add for other resources a marshaller within
            // named configuration.
        }

        JSON.createNamedConfig('short') {
            it.registerObjectMarshaller(User) { User user ->
                final userMap = [
                    id      : user.id,
                    username: user.username
                ]
                userMap
            }
            // Add for other resources a marshaller within
            // named configuration.
        }
        ...
    }
    ...
}

Now we must register custom renderers for the User resource as Spring components in resources:

// File: grails-app/conf/spring/resources.groovy
import com.mrhaki.grails.User
import grails.rest.render.json.JsonRenderer
import org.codehaus.groovy.grails.web.mime.MimeType

beans = {
    // Register JSON renderer for User resource with detailed output.
    userDetailsRenderer(JsonRenderer, User) {
        // Grails will compare the name of the MimeType
        // to determine which renderer to use. So we
        // use our own custom name here.
        // The second argument, 'details', specifies the
        // supported extension. We can now use
        // the request parameter format=details to use
        // this renderer for the User resource.
        mimeTypes = [new MimeType('application/vnd.com.mrhaki.grails.details+json', 'details')]

        // Here we specify the named configuration
        // that must be used by an instance
        // of this renderer. See Bootstrap.groovy
        // for available registered named configuration.
        namedConfiguration = 'details'
    }

    // Register second JSON renderer for User resource with compact output.
    userShortRenderer(JsonRenderer, User) {
        mimeTypes = [new MimeType('application/vnd.com.mrhaki.grails.short+json', 'short')]

        // Named configuration is different for short
        // renderer compared to details renderer.
        namedConfiguration = 'short'
    }

    // Default JSON renderer as fallback.
    userRenderer(JsonRenderer, User) {
        mimeTypes = [new MimeType('application/json', 'json')]
    }
}

We have defined some new mime types in grails-app/conf/spring/resources.groovy, but we must also add them to our grails-app/conf/Config.groovy file:

// File: grails-app/conf/Config.groovy
...
grails.mime.types = [
    ...
    short  : ['application/vnd.com.mrhaki.grails.short+json', 'application/json'],
    details: ['application/vnd.com.mrhaki.grails.details+json', 'application/json'],
]
...

Our application is now ready and configured. We mostly rely on Grails content negotiation to get the correct renderer for generating our output when we request a resource. Grails content negotiation can use the value of the format request parameter to find the correct mime type and then the correct renderer. Grails can also check the Accept request header or the URI extension, but for our RESTful API we want to use the format request parameter.
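Outside the default renderers, a named configuration can also be activated directly with the converter's use method, for example from a controller action. The following sketch assumes the 'short' configuration registered in Bootstrap.groovy; the controller class and action name are hypothetical:

```groovy
// File: grails-app/controllers/com/mrhaki/grails/UserController.groovy (hypothetical)
package com.mrhaki.grails

import grails.converters.JSON

class UserController {

    // Render a user with the 'short' named configuration applied:
    // the marshallers registered inside that configuration are in
    // effect within the closure passed to use().
    def compact(Long id) {
        JSON.use('short') {
            render User.get(id) as JSON
        }
    }
}
```

This is the same mechanism the default renderers use internally when their namedConfiguration property is set.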
If we invoke our resource with different formats we see the following results:

$ curl -X GET http://localhost:8080/rest-sample/users/1
{
    "class": "com.mrhaki.grails.User",
    "id": 1,
    "email": "hubert@mrhaki.com",
    "firstname": "Hubert",
    "lastname": "Klein Ikkink",
    "username": "mrhaki"
}

$ curl -X GET http://localhost:8080/rest-sample/users/1?format=short
{
    "id": 1,
    "username": "mrhaki"
}

$ curl -X GET http://localhost:8080/rest-sample/users/1?format=details
{
    "id": 1,
    "username": "mrhaki",
    "email": "hubert@mrhaki.com",
    "fullname": "Hubert Klein Ikkink"
}

Code written with Grails 2.4.2.

Reference: Grails Goodness: Using Converter Named Configurations with Default Renderers from our JCG partner Hubert Ikkink at the JDriven blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.