GitHub Social Graphs with Groovy and GraphViz

The Goal

Using the GitHub API, Groovy and GraphViz to determine, interpret and render a graph of the relationships between GitHub users based on the watchers of their repositories. The end result can look something like this.

The GitHub V3 API

You can find the full documentation for the GitHub V3 API here. They do a great job of documenting the various endpoints and their behaviour, as well as demonstrating usage of the API extensively with curl. For the purposes of this post the API calls I'm making are simple GET requests that do not require authentication. In particular I'm targeting two specific endpoints: repositories for a specific user and watchers for a repository.

Limitations of the API

Although a huge upgrade from the 50 requests per hour rate limit of the V2 API, I found it fairly easy to exhaust the 5000 requests per hour provided by the V3 API while gathering data. Fortunately, every response from GitHub includes a convenient X-RateLimit-Remaining header we can use to check our limit. This allows us to stop processing before we run out of requests, after which GitHub would return errors for every request. For each user we examine one URL to find their repositories, and for each of those repositories we execute a separate request to find all of the watchers. Executing these requests with my own GitHub account as the centerpoint, I was able to gather repository information about 1143 users and find 31142 total watchers, 18023 of which were unique in the data collected. This figure is incomplete: consistently, after reaching the rate limit, there were far more nodes left to process in the queue than had already been encountered. I only have 31 total repository watchers myself, but the graph includes users like igrigorik, an employee of Google with 529 repository watchers, and that tends to skew the results somewhat.
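The stop-early logic is worth making concrete. Below is a small, hypothetical Java sketch of the same idea: a breadth-first crawl over users that halts once the remaining-request count (as reported by the X-RateLimit-Remaining header) drops below a safety threshold. The fetch function, class name and threshold are illustrative, not GitHub's API.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Function;

// Hypothetical sketch: breadth-first crawl of users that stops before the
// rate limit runs out. fetchWatchers stands in for the repository + watcher
// API calls and returns (remainingCalls, watchers) for a user.
public class RateLimitedCrawl {
    static final int THRESHOLD = 300; // stop while we still have calls left

    static Map<String, Set<String>> crawl(String root,
            Function<String, Map.Entry<Integer, List<String>>> fetchWatchers) {
        Deque<String> queue = new ArrayDeque<>(List.of(root));
        Map<String, Set<String>> watcherMap = new LinkedHashMap<>();
        while (!queue.isEmpty()) {
            String user = queue.remove();
            Map.Entry<Integer, List<String>> result = fetchWatchers.apply(user);
            watcherMap.put(user, new LinkedHashSet<>(result.getValue()));
            if (result.getKey() <= THRESHOLD) {
                break; // X-RateLimit-Remaining is too low; stop cleanly
            }
            for (String w : result.getValue()) {
                // only enqueue users we have not seen yet
                if (!watcherMap.containsKey(w) && !queue.contains(w)) {
                    queue.add(w);
                }
            }
        }
        return watcherMap;
    }
}
```

The same breadth-first shape appears in the Groovy script later in this post; the point here is only that the remaining-calls value is checked after every fetch, so the crawl ends with a partial but valid map instead of a stream of HTTP errors.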
The end result is that the data provided here is far from complete, I'm sorry to say, but that doesn't mean it's not interesting to visualize.

Groovy and HttpBuilder

Groovy and the HttpBuilder DSL abstract away most of the details of handling the HTTP connections. The graph I'm building starts with one central GitHub user and links that user to everyone that is presently watching one of their repositories. This requires a single GET request to load all of the repositories for the given user, and a GET request per repository to find the watchers. These two HTTP operations are very easily encapsulated in Closures using the HttpBuilder wrapper around HttpClient. Each call returns both the X-RateLimit-Remaining value and the requested data. Here's what the configuration of HttpBuilder looks like:

final String rootUrl = 'https://api.github.com'
final HTTPBuilder builder = new HTTPBuilder(rootUrl)

The builder object is created and fixed at the GitHub API URL, simplifying the syntax for future calls. Now we define two closures, each of which targets a specific URL and extracts the appropriate data from the JSON response (already automagically unmarshalled by HttpBuilder). The findWatchers Closure has a little more logic in it, to remove duplicate entries and to exclude the user themselves from the list, as by default GitHub records a self-referential link for all users with their own repositories.
final String RATE_LIMIT_HEADER = 'X-RateLimit-Remaining'

final Closure findReposForUser = { HTTPBuilder http, username ->
    http.get(path: "/users/$username/repos", contentType: JSON) { resp, json ->
        return [resp.headers[RATE_LIMIT_HEADER].value as int, json.toList()]
    }
}

final Closure findWatchers = { HTTPBuilder http, username, repo ->
    http.get(path: "/repos/$username/$repo/watchers", contentType: JSON) { resp, json ->
        return [resp.headers[RATE_LIMIT_HEADER].value as int, json.toList()*.login.flatten().unique() - username]
    }
}

Out of this data we're only interested (for now) in keeping a simple map of Username -> Watchers, which we can easily marshal as a JSON object and store in a file. The complete Groovy script for loading the data can be run from the command line using the following code, or executed remotely from a GitHub gist by calling:

groovy https://raw.github.com/gist/2468052/5d536c5a35154defb5614bed78b325eeadbdc1a7/repos.groovy {username}

In either case you should pass in the username you would like to center the graph on. The results will be output to a file called 'reposOutput.json' in the working directory. Please be patient, as this is going to take a little while; progress is output to the console as each user is processed so you can follow along.
@Grab('org.codehaus.groovy.modules.http-builder:http-builder:0.5.2')
import groovy.json.JsonBuilder
import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.ContentType.JSON

final rootUser = args[0]
final String RATE_LIMIT_HEADER = 'X-RateLimit-Remaining'
final String rootUrl = 'https://api.github.com'
final Closure<Boolean> hasWatchers = { it.watchers > 1 }

final Closure findReposForUser = { HTTPBuilder http, username ->
    http.get(path: "/users/$username/repos", contentType: JSON) { resp, json ->
        return [resp.headers[RATE_LIMIT_HEADER].value as int, json.toList()]
    }
}

final Closure findWatchers = { HTTPBuilder http, username, repo ->
    http.get(path: "/repos/$username/$repo/watchers", contentType: JSON) { resp, json ->
        return [resp.headers[RATE_LIMIT_HEADER].value as int, json.toList()*.login.flatten().unique() - username]
    }
}

LinkedList nodes = [rootUser] as LinkedList
Map<String, List> usersToRepos = [:]
Map<String, List<String>> watcherMap = [:]
boolean hasRemainingCalls = true
final HTTPBuilder builder = new HTTPBuilder(rootUrl)

while (!nodes.isEmpty() && hasRemainingCalls) {
    String username = nodes.remove()
    println "processing $username"
    println "remaining nodes = ${nodes.size()}"
    def remainingApiCalls, repos, watchers
    (remainingApiCalls, repos) = findReposForUser(builder, username)
    usersToRepos[username] = repos
    hasRemainingCalls = remainingApiCalls > 300
    repos.findAll(hasWatchers).each { repo ->
        (remainingApiCalls, watchers) = findWatchers(builder, username, repo.name)
        def oldValue = watcherMap.get(username, [] as LinkedHashSet)
        oldValue.addAll(watchers)
        watcherMap[username] = oldValue
        nodes.addAll(watchers)
        nodes.removeAll(watcherMap.keySet())
        hasRemainingCalls = remainingApiCalls > 300
    }
    if (!hasRemainingCalls) {
        println "Stopped with $remainingApiCalls api calls left."
        println "Still have not processed ${nodes.size()} users."
    }
}

new File('reposOutput.json').withWriter { writer ->
    writer << new JsonBuilder(watcherMap).toPrettyString()
}

The JSON file contains very simple data that looks like this:

"bmuschko": [
    "claymccoy",
    "AskDrCatcher",
    "roycef",
    "btilford",
    "madsloen",
    "phaggood",
    "jpelgrim",
    "mrdanparker",
    "rahimhirani",
    "seymores",
    "AlBaker",
    "david-resnick",
    ...

Now we need to take this data and turn it into a representation that GraphViz can understand. We're also going to add information about the number of watchers for each user and a link back to their GitHub page.

Generating a GraphViz file in dot format

GraphViz is a popular framework for generating graphs. Its cornerstone is a simple text format for describing a directed graph (commonly referred to as a 'dot' file), combined with a variety of different layouts for displaying the graph. For the purposes of this post, I want to describe the following for inclusion in the graph:

- An edge from each watcher to the user whose repository they are watching.
- A label on each node which includes the user's name and the count of watchers for all of their repositories.
- An embedded HTML link to the user's GitHub page on each node.
- Highlighting the starting user in the graph by coloring that node red.
- Assigning a 'rank' attribute to nodes that links all users with the same number of watchers.

The script I'm using to create the 'dot' file is pretty much just brute force string processing, and the full source code is available as a gist, but here are the interesting parts. First, loading in the JSON file that was output in the last step and converting it to a map structure is very simple:

def data
new File(filename).withReader { reader ->
    data = new JsonSlurper().parse(reader)
}

From this data structure we can extract particular details and group everything by the number of watchers per user.
println "Number of mapped users = ${data.size()}"
println "Number of watchers = ${data.values().flatten().size()}"
println "Number of unique watchers = ${data.values().flatten().unique().size()}"

//group the data by the number of watchers
final Map groupedData = data.groupBy { it.value.size() }.sort { -it.key }
final Set allWatchers = data.collect { it.value }.flatten()
final Set allUsernames = data.keySet()
final Set leafNodes = allWatchers - allUsernames

Given this data, we create individual nodes with styling details like so:

StringWriter writer = new StringWriter()
groupedUsers.each { count, users ->
    users.each { username, watchers ->
        def user = "\t\"$username\""
        def attrs = generateNodeAttrsMemoized(username, count)
        def rootAttrs = "fillcolor=red style=filled $attrs"
        if (username == rootUser) {
            writer << "$user [$rootAttrs];\n"
        } else {
            writer << "$user [$attrs ${extraAttrsMemoized(count, username)}];\n"
        }
    }
}

And this generates node and edge descriptions that look like this:

...
"gyurisc" [label="gyurisc = 31" URL="https://github.com/gyurisc" ];
"kellyrob99" [fillcolor=red style=filled label="kellyrob99 = 31" URL="https://github.com/kellyrob99"];
...
"JulianDevilleSmith" -> "cfxram";
"rhyolight" -> "aalmiray";
"kellyrob99" -> "aalmiray";
...

If you created the JSON data already, you can run this command in the same directory in order to generate the GraphViz dot file:

groovy https://raw.github.com/gist/2475460/78642d81dd9bc95f099e0f96c3d87389a1ef6967/githubWatcherDigraphGenerator.groovy {username} reposOutput.json

This will create a file named 'reposDigraph.dot' in that directory. From there the last step is to interpret the graph definition into an image.

Turning a 'dot' file into an image

I was looking for a quick and easy way to generate multiple visualizations from the same model for comparison, and settled on using GPars to generate them concurrently.
We have to be a little careful here, as some of the layout/format combinations can require a fair bit of memory and CPU: in the worst cases as much as 2GB of memory and processing times in the range of an hour. My recommendation is to stick with the sfdp and twopi layouts (see the online documentation here) for graphs of a size similar to the one described here. If you're after a huge, stunning graphic with complete detail, expect a png image to weigh in somewhere north of 150MB, whereas the corresponding svg file will be less than 10MB. This Groovy script depends on having the GraphViz command line 'dot' executable already installed; it exercises six of the available layout algorithms and generates png and svg files, four at a time concurrently.

import groovyx.gpars.GParsPool

def inputfile = args[0]
def layouts = [ 'dot', 'neato', 'twopi', 'sfdp', 'osage', 'circo' ] //NOTE some of these will fail to process large graphs
def formats = [ 'png', 'svg' ]
def combinations = [layouts, formats].combinations()
GParsPool.withPool(4) {
    combinations.eachParallel { combination ->
        String layout = combination[0]
        String format = combination[1]
        List args = [ '/usr/local/bin/dot', "-K$layout", '-Goverlap=prism', '-Goverlap_scaling=-10',
                      "-T$format", '-o', "${inputfile}.${layout}.$format", inputfile ]
        println args
        final Process execute = args.execute()
        execute.waitFor()
        println execute.exitValue()
    }
}

Here's a gallery with some examples of the images created, scaled down to be web friendly. The full size graphs I generated using this data weighed in as large as 300MB for a single PNG file. The SVG format takes up significantly less space, but still more than 10MB. I also had trouble finding a viewer for the SVG format that was a) capable of showing the large graph in a navigable way and b) didn't crash my browser due to memory usage.
And just for fun

Originally I had intended to publish this functionality as an application on the Google App Engine using Gaelyk, but since the API limit would make it suitable for pretty much one request per hour, and would likely get me in trouble with GitHub, I ended up foregoing that bit. But along the way I developed a very simple page that will load all of the publicly available Gists for a particular user and display them in a table. This is a pretty clean example of how you can whip up a quick and dirty application and make it publicly available using GAE + Gaelyk. This involved setting up the infrastructure using the gradle-gaelyk-plugin combined with the gradle-gae-plugin, and using Gradle to build, test and deploy the app to the web: all told about an hour's worth of effort. Try this link to load up all of my publicly available Gists, replacing the username parameter if you'd like to check out somebody else. Please give it a second, as GAE will undeploy the application if it hasn't been requested in a while, so the first call can take a few seconds.

http://publicgists.appspot.com/gist?username=kellyrob99

Here's the Groovlet implementation that loads the data and then forwards to the template page.

def username = request.getParameter('username') ?: 'kellyrob99'
def text = "https://gist.github.com/api/v1/json/gists/$username".toURL().text
log.info text
request.setAttribute('rawJSON', text)
request.setAttribute('username', username)
forward '/WEB-INF/pages/gist.gtpl'

And the accompanying template page, which renders a simple tabular view of the API request.
<% include '/WEB-INF/includes/header.gtpl' %>
<% import groovy.json.JsonSlurper %>
<% def gistMap = new JsonSlurper().parseText(request['rawJSON']) %>
<h1>Public Gists for username : ${request['username']} </h1>
<p>
<table class="gridtable">
  <tr>
    <th>Description</th>
    <th>Web page</th>
    <th>Repo</th>
    <th>Owner</th>
    <th>Files</th>
    <th>Created at</th>
  </tr>
  <% gistMap.gists.each { data ->
       def repo = data.repo %>
  <tr>
    <td>${data.description ?: ''}</td>
    <td> <a href="https://gist.github.com/${repo}">${repo}</a> </td>
    <td> <a href="git://gist.github.com/${repo}.git">${repo}</a> </td>
    <td>${data.owner}</td>
    <td>${data.files}</td>
    <td>${data.created_at}</td>
  </tr>
  <% } %>
</table>
</p>
<% include '/WEB-INF/includes/footer.gtpl' %>

Reference: GitHub Social Graphs with Groovy and GraphViz from our JCG partner Kelly Robinson at The Kaptain on … stuff blog.

A Bird’s Eye View of Maven

One of the things that we do on a daily basis is use Maven to build our projects by issuing build commands such as mvn install. Maven then looks at our project's configuration file, affectionately known as a POM, magically figures out what to do and, hey presto, your build is complete. I imagine that we do this so often that we never think about what's going on behind the scenes, and in some cases without ever understanding what's going on either. This blog takes a bird's eye look at the Maven build lifecycle and reveals what happens when you issue commands such as mvn clean install.

If you've ever looked at the Maven documentation you'll have read that Maven is all about a hierarchical, object oriented build structure. In this there are three main artefacts: build life-cycles, build phases and goals, so a good place to start would be to explain the relationship between these terms. Take a look at the following UML diagram:

Jumping straight in, you can see that Maven HAS [1] one or more build life-cycles, and each life-cycle HAS one or more build phases, which are executed in a given sequence. Likewise, each build phase has one or more build goals, which are also executed in a given sequence. A good way of defining a build phase is to give an example. The Maven documentation lists what's called the default life-cycle, and here are its build phases:

- validate – validate the project is correct and all necessary information is available
- compile – compile the source code of the project
- test – test the compiled source code using a suitable unit testing framework. These tests should not require the code be packaged or deployed
- package – take the compiled code and package it in its distributable format, such as a JAR.
- integration-test – process and deploy the package if necessary into an environment where integration tests can be run
- verify – run any checks to verify the package is valid and meets quality criteria
- install – install the package into the local repository, for use as a dependency in other projects locally
- deploy – done in an integration or release environment, copies the final package to the remote repository for sharing with other developers and projects.

Hence, we can define a build phase as something that takes care of one part of a build life-cycle, for example compiling or testing your project. You can tell Maven to build your project by specifying a build phase on the command line. For example:

mvn install

...means "carry out all build phases up to and including the install phase in the default build life-cycle", whilst issuing

mvn clean install

...means "carry out all build phases of the clean life-cycle up to and including the clean build phase, and then carry out all build phases up to and including the install phase in the default build life-cycle". From this you can deduce that issuing

mvn test

...will carry out the validate build phase, executing its goals; then the compile phase, executing its goals; and finally the test phase, executing its goals.

So, what are goals? In the Maven world a goal can be defined as a single task or job that actually does something concrete towards getting your project built. If we compare Maven to the company you probably work for, then the life-cycles would be the board of directors, the build phases the middle managers and the goals the workers who get the job done. Most build phases come with default goals attached; for example the compile build phase is bound, as you may have guessed, to the compiler:compile goal, and likewise the install build phase is bound to the install:install goal.
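As a sketch of how a goal is attached to a phase in practice, the following POM fragment binds the maven-antrun-plugin's run goal to the verify phase, so Maven executes it whenever the verify phase runs. The plugin choice and version here are purely illustrative:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-antrun-plugin</artifactId>
      <version>1.8</version>
      <executions>
        <execution>
          <id>echo-on-verify</id>
          <!-- bind this execution's goals to the verify build phase -->
          <phase>verify</phase>
          <goals>
            <goal>run</goal>
          </goals>
          <configuration>
            <target>
              <echo message="running during the verify phase"/>
            </target>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With this in place, mvn verify (or any command that reaches the verify phase, such as mvn install) runs the extra goal in sequence with the phase's default goals.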
You can also bind your own goals to phases using the <plugin> element in your POM file; this can be used either to override a goal's default behaviour or to add new goals and new behaviour. A final point to note on goals is that they are usually associated with your POM's package type. This makes sense as, for example, the compiler:compile goal is associated with jar and ejb packaging, but would be meaningless for war or ear packages. In reading this, you may have gathered that by convention the names of goals contain a colon, whilst the names of build phases don't. This allows you to specify goals on the Maven command line without confusing them with build phases. For example:

mvn compiler:compile

...will carry out the compiler:compile goal, which runs in the compile build phase of the default build life-cycle. And, mixing things up a bit...

mvn install tomcat:redeploy

...will carry out all build phases up to and including install in the default build life-cycle, followed by the tomcat:redeploy goal found in the Tomcat Mojo. And that's the mile high, bird's eye view of Maven.

[1] HAS in the UML sense of the word.

Reference: A Bird's Eye View of Maven from our JCG partner Roger Hughes at the Captain Debug's Blog.

Key Exchange Patterns with Web Services Security

When we have message level security with web services, we achieve integrity and confidentiality through keys. Keys are used to sign and encrypt messages being passed from the requestor to the recipient, or from the client to the service, and vice versa. During this blog post, we'll be discussing different key exchange patterns and their related use cases.

1. Direct Key Transfer

If one party has a token and key and wishes to share this with another party, the key can be directly transferred. WS-Secure Conversation is a good example of this. Under WS-Secure Conversation, when the security context token is created by one of the communicating parties and propagated with a message, it uses this pattern to do the key exchange. This is accomplished by the initiator sending an RSTR (either in the body or header) to the other party. The RSTR contains the token and a proof-of-possession token that contains the key encrypted for the recipient. The initiator creates a security context token and sends it to the other parties on a message using the mechanisms described in the WS-Trust specification. This model works when the sender is trusted to always create a new security context token. For this scenario the initiating party creates a security context token and issues a signed unsolicited <wst:RequestSecurityTokenResponse> to the other party. The message contains a <wst:RequestedSecurityToken> containing (or pointing to) the new security context token and a <wst:RequestedProofToken> pointing to the "secret" for the security context token.

2. Brokered Key Distribution

A third party MAY also act as a broker to transfer keys. For example, a requestor may obtain a token and proof-of-possession token from a third-party STS. The token contains a key encrypted for the target service (either using the service's public key or a key known to the STS and target service).
The proof-of-possession token contains the same key encrypted for the requestor (similarly this can use public or symmetric keys). WS-Secure Conversation also has an example of this pattern, when the security context token is created by a security token service: the context initiator asks a security token service to create a new security context token. The newly created security context token is distributed to the parties through the mechanisms defined here and in WS-Trust. For this scenario the initiating party sends a <wst:RequestSecurityToken> request to the token service, and a <wst:RequestSecurityTokenResponseCollection> containing a <wst:RequestSecurityTokenResponse> is returned. The response contains a <wst:RequestedSecurityToken> containing (or pointing to) the new security context token and a <wst:RequestedProofToken> pointing to the "secret" for the returned context. The requestor then uses the security context token when securing messages to applicable services.

3. Delegated Key Transfer

Key transfer can also take the form of delegation; that is, one party transfers the right to use a key without actually transferring the key. In such cases, a delegation token, e.g. XrML, is created that identifies a set of rights and a delegation target and is secured by the delegating party. That is, one key indicates that another key can use a subset (or all) of its rights. The delegate can provide this token and prove itself (using its own key, the delegation target) to a service. The service, assuming the trust relationships have been established and that the delegator has the right to delegate, can then authorize requests sent subject to delegation rules and trust policies. For example, a custom token is issued from party A to party B. The token indicates that B (specifically B's key) has the right to submit purchase orders.
The token is signed using a secret key known to the target service T and party A (the key used to ultimately authorize the requests that B makes to T), and a new session key that is encrypted for T. A proof-of-possession token is included that contains the session key encrypted for B. As a result, B is effectively using A's key, but doesn't actually know the key.

4. Authenticated Request/Reply Key Transfer

In some cases the RST/RSTR mechanism is not used to transfer keys, because it is part of a simple request/reply. However, there may be a desire to ensure mutual authentication as part of the key transfer. The mechanisms of WS-Security can be used to implement this scenario. Specifically, the sender wishes the following:

- Transfer a key to a recipient that they can use to secure a reply
- Ensure that only the recipient can see the key
- Provide proof that the sender issued the key

This scenario could be supported by encrypting and then signing. This would result in roughly the following steps:

1. Encrypt the message using a generated key
2. Encrypt the key for the recipient
3. Sign the encrypted form, any other relevant keys, and the encrypted key

However, if there is a desire to sign prior to encryption, then the following general process is used:

1. Sign the appropriate message parts using a random key (or ideally a key derived from a random key)
2. Encrypt the appropriate message parts using the random key (or ideally another key derived from the random key)
3. Encrypt the random key for the recipient
4. Sign just the encrypted key

Most of this blog post is extracted from the WS-Trust 1.4 specification.

Reference: Key Exchange Patterns with Web Services Security from our JCG partner Prabath Siriwardena at the Facile Login blog.
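The sign-before-encrypt sequence above can be sketched with plain JCA primitives. This is a deliberately simplified illustration, not the WS-Security wire format: an HMAC stands in for the XML signature over the message parts, an AES key plays the random message key, and RSA both encrypts that key for the recipient and signs the encrypted key. The class and method names below are standard JCA; the overall structure is an assumption for demonstration only.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.security.Signature;

// Sketch of steps 1-4: sign the message with a random key, encrypt the
// message with that key, encrypt the key for the recipient, then sign
// just the encrypted key to prove who issued it.
public class SignThenEncrypt {
    // Returns {ciphertext, messageSignature, wrappedKey, keySignature}
    public static byte[][] protect(byte[] message, PublicKey recipientKey,
                                   PrivateKey senderKey) throws GeneralSecurityException {
        // Generate a random symmetric key for this exchange
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey random = kg.generateKey();

        // 1. Sign the message parts using (a key derived from) the random key
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(random.getEncoded(), "HmacSHA256"));
        byte[] messageSignature = mac.doFinal(message);

        // 2. Encrypt the message parts using the random key
        Cipher aes = Cipher.getInstance("AES/ECB/PKCS5Padding"); // ECB only for brevity
        aes.init(Cipher.ENCRYPT_MODE, random);
        byte[] ciphertext = aes.doFinal(message);

        // 3. Encrypt the random key for the recipient (their public key)
        Cipher rsa = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        rsa.init(Cipher.ENCRYPT_MODE, recipientKey);
        byte[] wrappedKey = rsa.doFinal(random.getEncoded());

        // 4. Sign just the encrypted key, proving the sender issued it
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(senderKey);
        sig.update(wrappedKey);
        byte[] keySignature = sig.sign();

        return new byte[][]{ciphertext, messageSignature, wrappedKey, keySignature};
    }
}
```

Only the recipient can unwrap the random key (step 3), and anyone holding the sender's public key can check that the sender issued it (step 4), which is exactly the pair of guarantees the pattern is after.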

Apache HTTP server with Tomcat on SSL

Introduction

I recently needed to simulate our production deployment environment on my local machine. Our production applications run on Apache HTTP server. We deploy applications on multiple Tomcat instances, with different Tomcats for different URL namespaces, and Apache HTTP server acts as the front door for all of these. Apache HTTP server connects to Tomcat with mod_jk over mod_ssl. When Apache HTTP server receives a request, it inspects the request and forwards it to the appropriate Tomcat. This configuration is important for security and clustering. This tutorial contains the following sections:

- Installing and configuring Apache HTTP server
- Installing and configuring Apache Tomcat
- Installing and configuring mod_jk
- Configuring mod_ssl
- Testing the environment

Installing and configuring Apache HTTP server

Run the following commands to download Apache HTTP server 2.2.22:

wget http://mirrors.gigenet.com/apache//httpd/httpd-2.2.22.tar.bz2
wget http://www.apache.org/dist/httpd/httpd-2.2.22.tar.bz2.md5
md5sum -c httpd-2.2.22.tar.bz2.md5

After executing the above commands, the httpd-2.2.22.tar.bz2 archive will be downloaded to your "Downloads" folder. To extract the archive, run the following commands:

cd /home/semika/Downloads
tar -xjvf httpd-2.2.22.tar.bz2

The above command will extract the archive into the httpd-2.2.22 folder under the Downloads folder. Now you should decide where you are going to install Apache HTTP server. I am going to install it to the /home/semika/httpd-2.2.22 folder, which has to be created first. Navigate to your user folder and execute the following commands to create the new folder:

cd /home/semika
mkdir httpd-2.2.22

To install Apache on your particular platform, we need to compile the source distribution that we have already downloaded. If you look inside the extracted folder under Downloads/httpd-2.2.22, you can see there is a configure script.
We can compile the source distribution with that script, and it will create the necessary artifacts to install Apache HTTP server. When compiling Apache, various options can be specified that are suited to our local environment. For a complete reference of the options provided, see here. Since we need mod_ssl to be configured with the Apache compilation, we need to install the OpenSSL development bundle; otherwise compilation will fail. To install the OpenSSL development libraries, run the following command:

sudo apt-get install openssl libssl-dev

Sometimes you might need to run the following commands as well, if you encounter an error during the Apache compilation:

sudo apt-get install zlib1g-dev
sudo apt-get install libxml2-dev

To compile the Apache source distribution, execute the following commands:

cd /home/semika/Downloads/httpd-2.2.22
./configure --prefix=/home/semika/httpd-2.2.22 --enable-mods-shared=all --enable-log_config=static --enable-access=static --enable-mime=static --enable-setenvif=static --enable-dir=static --enable-ssl=yes

--prefix : specifies the installation directory.
--enable-mods-shared : setting this to 'all' will install all the shared modules.
--enable-ssl : since we are going to configure Apache HTTP server with mod_ssl, this has been set to 'yes' to compile Apache with mod_ssl. By default this option is disabled.

For the other options specified in the configure command, please look into the full options reference documentation. After successfully running the above command, execute the following commands. Before doing so, have a look at your specified installation directory, i.e. /home/semika/httpd-2.2.22: you can see that it is still empty.

make
make install

Now you can see that Apache HTTP server has been installed under /home/semika/httpd-2.2.22. Look in the modules folder to see the list of modules installed, and confirm whether it has installed mod_ssl.so. Now you can start the Apache HTTP server.
cd /home/semika/httpd-2.2.22/bin
sudo ./apachectl start

If you see the line below when executing the above command:

httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName

edit your /httpd-2.2.22/conf/httpd.conf file as follows. Look for the "ServerName" property in the httpd.conf file. You will see the following line there:

#ServerName www.example.com:80

Uncomment this line and modify it as follows:

ServerName localhost

Again start the Apache HTTP server. If you did not change the default port which the Apache server runs on, open the following URL to check whether the server has started. The default Apache HTTP server port is 80.

http://localhost/

With the above URL, if you see a page with "It works", you are done with Apache HTTP server. Further, if you want to change the default port, you can edit the httpd.conf file as follows: look for the "Listen" property and change it as you wish. I have set it to 7000, so my Apache HTTP server is running on port 7000 and I have to access the following URL to get the "It works" page:

http://localhost:7000/

To stop Apache HTTP server, execute the following commands:

cd /home/semika/httpd-2.2.22/bin
sudo ./apachectl stop

Installing and configuring Apache Tomcat

Installing and configuring Apache Tomcat is not a big deal if you are involved with this kind of advanced configuration; for the completeness of the tutorial, I will explain it a little. I am using Apache Tomcat 7.0.25. You can download it from the Apache web site and extract it to somewhere on your local machine. After that, you have to set the environment variables as follows:

export CATALINA_HOME=/home/semika/apache-tomcat-7.0.25
export PATH=$CATALINA_HOME/bin:$PATH

You can start Tomcat with the following commands:

cd /home/semika/apache-tomcat-7.0.25/bin
./startup.sh

If you want to see Tomcat's console output, execute the following commands before starting Tomcat:
cd /home/semika/apache-tomcat-7.0.25/logs/
tail -f catalina.out

After successfully starting Tomcat, try the following URL:

http://localhost:8080/

By default, Tomcat will run on port 8080. Now our Apache HTTP server is running on port 7000 and Tomcat is on 8080. Further, to configure Tomcat with Apache HTTP server, we need to create a workers.properties file under /home/semika/apache-tomcat-7.0.25/conf/.

workers.properties:

# Define 1 real worker named ajp13
worker.list=ajp13

# Set properties for worker named ajp13 to use ajp13 protocol,
# and run on port 8009
worker.ajp13.type=ajp13
worker.ajp13.host=localhost
worker.ajp13.port=8009
worker.ajp13.lbfactor=50
worker.ajp13.cachesize=10
worker.ajp13.cache_timeout=600
worker.ajp13.socket_keepalive=1
worker.ajp13.socket_timeout=300

Apache HTTP server will connect to Tomcat through port 8009. If you look at the server.xml file under Tomcat's conf folder, you can see the following connector declaration there:

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

Installing and configuring mod_jk

If you look into the httpd-2.2.22/modules folder, you can see that mod_jk.so has been installed. mod_jk is a connector for Apache HTTP server to connect to Apache Tomcat. To configure it, you have to load the mod_jk.so module in Apache HTTP server. Edit the /httpd-2.2.22/conf/httpd.conf file as follows; you need to add these properties.
# Load mod_jk module
# Update this path to match your modules location
LoadModule jk_module modules/mod_jk.so

# Where to find workers.properties
# Update this path to match your conf directory location
JkWorkersFile /home/semika/apache-tomcat-7.0.25/conf/workers.properties

# Where to put jk logs
# Update this path to match your logs directory location
JkLogFile /home/semika/apache-tomcat-7.0.25/logs/mod_jk.log

# Set the jk log level [debug/error/info]
JkLogLevel info

# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

# JkOptions indicate to send SSL KEY SIZE,
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories

# JkRequestLogFormat set the request format
JkRequestLogFormat "%w %V %T"

# Send everything for context /rainyDay to worker ajp13
JkMount /rainyDay ajp13
JkMount /rainyDay/* ajp13

I guess most of the properties defined above are clear to you, but what are ‘JkMount’ and ‘/rainyDay’? ‘rainyDay’ is one of my applications deployed on Apache Tomcat. The declaration says: “Forward to Apache Tomcat all the requests that come under the /rainyDay namespace”. With that, we have finished the configuration of mod_jk. Now we will test the environment. Try the following URLs.

http://localhost:8080/rainyDay

There is nothing special about the above URL. Since I have deployed my ‘rainyDay’ application on Apache Tomcat, we can access the application even without configuring the Apache HTTP server with mod_jk. Now, try the following URL.

http://localhost:7000/rainyDay

If you can access the application with the above URL, our configuration is successful. We have not deployed the ‘rainyDay’ application on the Apache HTTP server but on Apache Tomcat, and the Apache HTTP server is running on port 7000, yet we can still access the ‘rainyDay’ application deployed on Apache Tomcat via the Apache HTTP server. Now just try the following URL.

https://localhost:7000/rainyDay

With the above URL, you cannot access the application, since the URL uses the https protocol.
To access the application over https://, we need to configure SSL with the Apache HTTP server.

Configuring mod_ssl. To enable SSL on the Apache HTTP server, again you have to edit the httpd.conf file. Open this file and look for the following line.

# Secure (SSL/TLS) connections
#Include conf/extra/httpd-ssl.conf

Uncomment the above line and open the included file, which is under /httpd-2.2.22/conf/extra/httpd-ssl.conf. Look for the following properties, and uncomment any that are commented out.

SSLPassPhraseDialog builtin
SSLEngine on
SSLCertificateFile "/home/semika/httpd-2.2.22/conf/server.crt"
SSLCertificateKeyFile "/home/semika/httpd-2.2.22/conf/server.key"

Next, we have to generate the SSL certificate files, server.crt and server.key. To generate these files, execute the following commands.

cd /home/semika/Downloads
openssl genrsa -des3 -out server.key 1024
openssl req -new -key server.key -out server.csr
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

While executing the above commands, you will be asked for some details and a password, which you have to provide. Keep this password in mind, because you will need to provide it when starting the Apache HTTP server. To learn more about configuring SSL with the Apache HTTP server, refer to this documentation. Now look carefully into the /home/semika/Downloads folder. You can see the generated server.key and server.crt. You have to copy these two files into the Apache HTTP server installation directory.

cd /home/semika/Downloads
cp server.crt /home/semika/httpd-2.2.22/conf/server.crt
cp server.key /home/semika/httpd-2.2.22/conf/server.key

Again open the httpd-ssl.conf file under /httpd-2.2.22/conf/extra. You can see a <VirtualHost> element with some properties defined within it. As we did in the mod_jk configuration, here too we have to declare the required application context URLs, or any other URLs, that need to be secured with SSL. You have to add the JkMount declarations as follows.
<VirtualHost ...>
...........
...........
JkMount /rainyDay ajp13
JkMount /rainyDay/* ajp13
</VirtualHost>

Now try the following URL.

https://localhost:7000/

This should load the “It works” page. I used port 7000 because I changed the Apache HTTP server’s default port. You have successfully configured mod_ssl. Now try the following URL as well.

https://localhost:7000/rainyDay

You should be able to load the application.

Testing the environment. There is no particular order for starting Apache Tomcat and the Apache HTTP server. After starting your servers, you can test your configuration with the following URLs.

http://localhost:8080/
If you see Tomcat’s home page, the Tomcat configuration is successful.

http://localhost:8080/rainyDay
If you can access the application, your application is successfully deployed on Apache Tomcat.

http://localhost:7000/
If you see the “It works” page, the Apache HTTP server is successfully configured.

http://localhost:7000/rainyDay
If it loads your application, the mod_jk configuration with the Apache HTTP server is successful.

https://localhost:7000/
Again, if you see the “It works” page, the mod_ssl configuration with the Apache HTTP server is successful.

https://localhost:7000/rainyDay
Again, if you can access the application, the mod_ssl configuration with the Apache HTTP server is successful and the Apache HTTP server properly handles all secure requests to the ‘rainyDay’ application.

Reference: How to configure Apache HTTP server with Tomcat on SSL? from our JCG partner Semika loku kaluge at the Code Box blog....

Managing Jenkins job configurations

In JBoss Tools and Developer Studio, we manage a lot of build jobs in Jenkins. In fact, for the 3.2.x/4.x and 3.3.x/5.x streams, there are over 195 jobs. When we start building our next year’s first milestone, we’ll spawn another 40+ jobs. Here are some of them:

http://hudson.jboss.org/hudson/view/JBossTools/view/JBossTools_Trunk/
http://hudson.jboss.org/hudson/view/JBossTools/view/JBossTools_3.3.indigo
http://hudson.jboss.org/hudson/view/JBossTools/view/JBossTools_3.2.helios

To improve performance, we use Maven profiles in our parent pom to allow data to be shared outside the slaves’ workspaces, without using shared workspaces (as that can lead to conflicts when multiple Maven processes try to write to the same .m2 repo). Here’s an example:

<!-- same contents as jbosstools-nightly-staging-composite, but locally available (to improve network lag) -->
<profile>
  <id>local.composite</id>
  <activation>
    <activeByDefault>false</activeByDefault>
  </activation>
  <repositories>
    <repository>
      <id>local.composite</id>
      <url>${local.composite}</url>
      <layout>p2</layout>
      <snapshots>
        <enabled>true</enabled>
      </snapshots>
      <releases>
        <enabled>true</enabled>
      </releases>
    </repository>
  </repositories>
</profile>

We also use profiles to turn on code coverage analysis or to take advantage of Jenkins variables like BUILD_NUMBER when setting a timestamped qualifier for features & plugins:

<profile>
  <id>hudson</id>
  <activation>
    <property>
      <name>BUILD_NUMBER</name>
    </property>
  </activation>
  <properties>
    <local.site>file:///home/hudson/static_build_env/jbds/target-platform_3.3.indigo.SR2/e372M-wtp332M.target/</local.site>
  </properties>
  <build>
    <plugins>
      <plugin>
        <groupId>org.eclipse.tycho</groupId>
        <artifactId>tycho-packaging-plugin</artifactId>
        <version>${tychoVersion}</version>
        <configuration>
          <format>'v'yyyyMMdd-HHmm'-H${BUILD_NUMBER}-${BUILD_ALIAS}'</format>
          <archiveSite>true</archiveSite>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>

But how do you deal with hundreds of jobs’ config
files, and how do you update them all easily without hours of in-browser clicking? We maintain the job config files (config.xml) offline. To do this, we use a Maven plugin I wrote to fetch jobs matching a given view and regular expression and store them locally on disk using the same structure as on the server. Then the same plugin can be used to push the config.xml files BACK to the server after making changes to one (or all) of the files. For an extra level of auditing, we also commit these locally cached config.xml files to SVN so we can track their history. Admittedly, Jenkins provides this functionality natively, but when you’re POSTing changes to config.xml files the server doesn’t always notice a change and record the delta, so having a backup (particularly one you can diff offline) is never a bad idea. Reference: Managing Jenkins job configurations from our JCG partner Nick Boldt at the DivByZero blog....

Why do we insist on consensus on the role of Ops?

I’ve seen so many threads over the last few weeks about who should do what, why, and what you should do about it if you don’t conform. I don’t get it. Ops is a team in a company, and there are lots of types of companies. Companies typically have a few goals: Make money. Change the world, as long as we can do #1. Lots of companies accomplish these goals doing things wrong. If you want proof, read Good to Great; there are oodles of examples of companies who didn’t qualify as “great” but who you would recognize as successful. When wagon trains migrated families west across the US, the idea of driving 40mph, of crossing a state in a day, would have been crazy talk. Then came the locomotive. When locomotives moved people across the country, the idea of a car making an interstate trip would have been crazy. It would be madness if everyone operated their own car. Then came cars, and roads, and traffic signals, and road signs. This took time, lots of mistakes, lots of retrospectives, and year over year progress. Progress isn’t made by conforming to the conventions of today; it’s made by pushing for something better. That’s what some folks are doing in Ops today: they are trying to push the limits and do what works for them. Others are observing these patterns and following suit. Still others are sitting back and saying “That ain’t right, my process works just fine”. Perfect. It wasn’t necessary for automobile manufacturers to convince railroad operators that the car was the future. The car became the future because people adopted it, because it worked, and because over time the infrastructure that supported it became more mature. As our tools get better, as our patterns become more and more repeatable, as we start to understand what roads, traffic signals and road signs we need for Ops to get out of the way of Developers making changes in production, things will move.
In the meantime, talk about what works for you and why it works for you, and don’t bother convincing other people why it should work for them. Reference: Why do we insist on consensus on the role of Ops? from our JCG partner Aaron Nichols at the Operation Bootstrap blog....

Extract, Inject, Kill: Breaking hierarchies – Part 2

In part one I explained the main idea behind this approach and started this example. Please read part one before reading this post. Although the main idea of Extract, Inject, Kill has already been expressed, it is good to finish the exercise for completion’s sake. Here is where we stopped: let’s have a look at the VoucherPricingService, which is now the only concrete class at the bottom of our hierarchy.

public class VoucherPricingService extends UserDiscountPricingService {

    private VoucherService voucherService;

    @Override
    protected double applyAdditionalDiscounts(double total, User user, String voucher) {
        double voucherValue = voucherService.getVoucherValue(voucher);
        double totalAfterValue = total - voucherValue;
        return (totalAfterValue > 0) ? totalAfterValue : 0;
    }

    public void setVoucherService(VoucherService voucherService) {
        this.voucherService = voucherService;
    }
}

Note that it uses the VoucherService class to calculate the voucher value.

public class VoucherService {
    public double getVoucherValue(String voucher) {
        // Imagine that this calculates the voucher price.
        // Keeping it simple so we can understand the approach.
        return 0;
    }
}

Before anything, let’s write some tests for VoucherPricingService.

@RunWith(MockitoJUnitRunner.class)
public class VoucherPricingServiceTest {

    private static final User UNUSED_USER = null;
    private static final String NO_VOUCHER = null;
    private static final String TWENTY_POUNDS_VOUCHER = "20";

    @Mock
    private VoucherService voucherService;

    private TestableVoucherPricingService voucherPricingService;

    @Before
    public void initialise() {
        voucherPricingService = new TestableVoucherPricingService();
        voucherPricingService.setVoucherService(voucherService);
        when(voucherService.getVoucherValue(TWENTY_POUNDS_VOUCHER)).thenReturn(20D);
    }

    @Test
    public void should_not_apply_discount_if_no_voucher_is_received() {
        double returnedAmount = voucherPricingService.applyAdditionalDiscounts(1000, UNUSED_USER, NO_VOUCHER);
        assertThat(returnedAmount, is(1000D));
    }

    @Test
    public void should_subtract_voucher_value_from_total() {
        double returnedAmount = voucherPricingService.applyAdditionalDiscounts(30D, UNUSED_USER, TWENTY_POUNDS_VOUCHER);
        assertThat(returnedAmount, is(equalTo(10D)));
    }

    @Test
    public void should_return_zero_if_voucher_value_is_higher_than_total() {
        double returnedAmount = voucherPricingService.applyAdditionalDiscounts(10D, UNUSED_USER, TWENTY_POUNDS_VOUCHER);
        assertThat(returnedAmount, is(equalTo(0D)));
    }

    private class TestableVoucherPricingService extends VoucherPricingService {
        @Override
        protected double applyAdditionalDiscounts(double total, User user, String voucher) {
            return super.applyAdditionalDiscounts(total, user, voucher);
        }
    }
}

One thing to notice is that the User parameter is not used for anything, so let’s remove it. Now it is time to use Extract, Inject, Kill on the VoucherPricingService. Let’s Extract the content of the VoucherPricingService.applyAdditionalDiscounts(double, String) method and add it to a class called VoucherDiscountCalculation, calling the method calculateVoucherDiscount(). Of course, we do that writing our tests first.
The tests need to test exactly the same things that are tested on VoucherPricingService.applyAdditionalDiscounts(double, String). We also take the opportunity to pass the VoucherService object into the constructor of VoucherDiscountCalculation.

@RunWith(MockitoJUnitRunner.class)
public class VoucherDiscountCalculationTest {

    private static final String NO_VOUCHER = null;
    private static final String TWENTY_POUNDS_VOUCHER = "20";

    @Mock
    private VoucherService voucherService;

    private VoucherDiscountCalculation voucherDiscountCalculation;

    @Before
    public void initialise() {
        voucherDiscountCalculation = new VoucherDiscountCalculation(voucherService);
        when(voucherService.getVoucherValue(TWENTY_POUNDS_VOUCHER)).thenReturn(20D);
    }

    @Test
    public void should_not_apply_discount_if_no_voucher_is_received() {
        double returnedAmount = voucherDiscountCalculation.calculateVoucherDiscount(1000, NO_VOUCHER);
        assertThat(returnedAmount, is(1000D));
    }

    @Test
    public void should_subtract_voucher_value_from_total() {
        double returnedAmount = voucherDiscountCalculation.calculateVoucherDiscount(30D, TWENTY_POUNDS_VOUCHER);
        assertThat(returnedAmount, is(equalTo(10D)));
    }

    @Test
    public void should_return_zero_if_voucher_value_is_higher_than_total() {
        double returnedAmount = voucherDiscountCalculation.calculateVoucherDiscount(10D, TWENTY_POUNDS_VOUCHER);
        assertThat(returnedAmount, is(equalTo(0D)));
    }
}

public class VoucherDiscountCalculation {

    private VoucherService voucherService;

    public VoucherDiscountCalculation(VoucherService voucherService) {
        this.voucherService = voucherService;
    }

    public double calculateVoucherDiscount(double total, String voucher) {
        double voucherValue = voucherService.getVoucherValue(voucher);
        double totalAfterValue = total - voucherValue;
        return (totalAfterValue > 0) ?
totalAfterValue : 0;
    }
}

If you noticed, when doing the extraction we took the opportunity to give proper names to our new classes and methods and also to pass their essential dependencies into the constructor instead of using method injection. Let’s now change the code in VoucherPricingService to use the new VoucherDiscountCalculation and see if all the tests still pass.

public class VoucherPricingService extends UserDiscountPricingService {

    private VoucherService voucherService;

    @Override
    protected double applyAdditionalDiscounts(double total, String voucher) {
        VoucherDiscountCalculation voucherDiscountCalculation = new VoucherDiscountCalculation(voucherService);
        return voucherDiscountCalculation.calculateVoucherDiscount(total, voucher);
    }

    public void setVoucherService(VoucherService voucherService) {
        this.voucherService = voucherService;
    }
}

Cool. All the tests still pass, meaning that we have the same behaviour, but now in the VoucherDiscountCalculation class, and we are ready to move to the Inject stage. Let’s now inject VoucherDiscountCalculation into PricingService, the class at the top of the hierarchy. As always, let’s add a test for this new collaboration.
@RunWith(MockitoJUnitRunner.class)
public class PricingServiceTest {

    private static final String NO_VOUCHER = "";
    private static final String FIVE_POUNDS_VOUCHER = "5";

    private TestablePricingService pricingService = new TestablePricingService();
    private ShoppingBasket shoppingBasket;

    @Mock
    private PriceCalculation priceCalculation;
    @Mock
    private VoucherDiscountCalculation voucherDiscountCalculation;

    @Before
    public void initialise() {
        this.pricingService.setPriceCalculation(priceCalculation);
        this.pricingService.setVoucherDiscountCalculation(voucherDiscountCalculation);
    }

    @Test
    public void should_calculate_price_of_all_products() {
        Product book = aProduct().named("book").costing(10).build();
        Product kindle = aProduct().named("kindle").costing(80).build();
        shoppingBasket = aShoppingBasket()
                .with(2, book)
                .with(3, kindle)
                .build();

        double price = pricingService.calculatePrice(shoppingBasket, new User(), NO_VOUCHER);

        verify(priceCalculation, times(1)).calculateProductPrice(book, 2);
        verify(priceCalculation, times(1)).calculateProductPrice(kindle, 3);
    }

    @Test
    public void should_calculate_voucher_discount() {
        Product book = aProduct().named("book").costing(10).build();
        when(priceCalculation.calculateProductPrice(book, 2)).thenReturn(20D);
        shoppingBasket = aShoppingBasket()
                .with(2, book)
                .build();

        double price = pricingService.calculatePrice(shoppingBasket, new User(), FIVE_POUNDS_VOUCHER);

        verify(voucherDiscountCalculation, times(1)).calculateVoucherDiscount(20, FIVE_POUNDS_VOUCHER);
    }

    private class TestablePricingService extends PricingService {
        @Override
        protected double calculateDiscount(User user) {
            return 0;
        }

        @Override
        protected double applyAdditionalDiscounts(double total, String voucher) {
            return 0;
        }
    }
}

And here is the changed PricingService.
public abstract class PricingService {

    private PriceCalculation priceCalculation;
    private VoucherDiscountCalculation voucherDiscountCalculation;

    public double calculatePrice(ShoppingBasket shoppingBasket, User user, String voucher) {
        double discount = calculateDiscount(user);
        double total = 0;
        for (ShoppingBasket.Item item : shoppingBasket.items()) {
            total += priceCalculation.calculateProductPrice(item.getProduct(), item.getQuantity());
        }
        total = voucherDiscountCalculation.calculateVoucherDiscount(total, voucher);
        return total * ((100 - discount) / 100);
    }

    protected abstract double calculateDiscount(User user);

    protected abstract double applyAdditionalDiscounts(double total, String voucher);

    public void setPriceCalculation(PriceCalculation priceCalculation) {
        this.priceCalculation = priceCalculation;
    }

    public void setVoucherDiscountCalculation(VoucherDiscountCalculation voucherDiscountCalculation) {
        this.voucherDiscountCalculation = voucherDiscountCalculation;
    }
}

Now it is time to kill the VoucherPricingService class and the PricingService.applyAdditionalDiscounts(double total, String voucher) template method, since they are not being used anymore. We can also kill the VoucherPricingServiceTest class and fix the PricingServiceTest, removing the applyAdditionalDiscounts() method from the testable class. Now, of course, we don’t have a concrete class in our hierarchy anymore, since VoucherPricingService was the only one, so we can safely promote UserDiscountPricingService to concrete. That is how our object graph looks now. Our hierarchy is another level shorter. The only thing we need to do now is to apply Extract, Inject, Kill once again: extract the logic inside UserDiscountPricingService into another class (e.g. UserDiscountCalculation), inject UserDiscountCalculation into PricingService, and finally kill UserDiscountPricingService and the calculateDiscount(User user) template method.
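That last extraction can be sketched the same way as the voucher one. Below is a minimal, self-contained sketch of the suggested UserDiscountCalculation; the class name is only the suggestion from the text, the 10% prime-user discount comes from the original UserDiscountPricingService in part one, and this User is a stripped-down stand-in for the article’s class.

```java
// Stripped-down stand-in for the article's User class, just enough to run the example.
class User {
    private final boolean prime;
    User(boolean prime) { this.prime = prime; }
    boolean isPrime() { return prime; }
}

// The behaviour extracted from UserDiscountPricingService.calculateDiscount(User):
// a plain class with no hierarchy, independently testable.
class UserDiscountCalculation {
    // Prime users get a flat 10% discount; everyone else gets none.
    public double calculateDiscount(User user) {
        return user.isPrime() ? 10 : 0;
    }
}
```

Once a class like this is injected into PricingService, UserDiscountPricingService has nothing left to do and can be killed.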
Since the approach was described before, there is no need to go step by step anymore. Let’s have a look at the final result. Here is the diagram representing where we started. After the last Extract, Inject, Kill refactoring, this is what we’ve got. The cool thing about the final model pictured above is that we no longer have any abstract classes. All classes and methods are concrete and every single class is independently testable. This is how the final PricingService class looks:

public class PricingService {

    private PriceCalculation priceCalculation;
    private VoucherDiscountCalculation voucherDiscountCalculation;
    private PrimeUserDiscountCalculation primeUserDiscountCalculation;

    public PricingService(PriceCalculation priceCalculation,
                          VoucherDiscountCalculation voucherDiscountCalculation,
                          PrimeUserDiscountCalculation primeUserDiscountCalculation) {
        this.priceCalculation = priceCalculation;
        this.voucherDiscountCalculation = voucherDiscountCalculation;
        this.primeUserDiscountCalculation = primeUserDiscountCalculation;
    }

    public double calculatePrice(ShoppingBasket shoppingBasket, User user, String voucher) {
        double total = getTotalValueFor(shoppingBasket);
        total = applyVoucherDiscount(voucher, total);
        return totalAfterUserDiscount(total, userDiscount(user));
    }

    private double userDiscount(User user) {
        return primeUserDiscountCalculation.calculateDiscount(user);
    }

    private double applyVoucherDiscount(String voucher, double total) {
        return voucherDiscountCalculation.calculateVoucherDiscount(total, voucher);
    }

    private double totalAfterUserDiscount(double total, double discount) {
        return total * ((100 - discount) / 100);
    }

    private double getTotalValueFor(ShoppingBasket shoppingBasket) {
        double total = 0;
        for (ShoppingBasket.Item item : shoppingBasket.items()) {
            total += priceCalculation.calculateProductPrice(item.getProduct(), item.getQuantity());
        }
        return total;
    }
}

For a full implementation of the final code, please look at
https://github.com/sandromancuso/breaking-hierarchies. Note: for this three-part blog post I used three different approaches to drawing UML diagrams: by hand, with ArgoUML, and with Astah community edition. I’m very happy with the latter. Reference: Extract, Inject, Kill: Breaking hierarchies (part 3) from our JCG partner Sandro Mancuso at the Crafted Software blog....

Extract, Inject, Kill: Breaking hierarchies – Part 1

Years ago, before I caught the TDD bug, I used to love the template method pattern. I really thought that it was a great way to have an algorithm with polymorphic parts. The use of inheritance was something that I had no issues with. But yes, that was many years ago. Over the years, I’ve been hurt by this ‘design style’, the sort of design created by developers that do not TDD. The situation: very recently I was working on one part of our legacy code and found a six-level-deep hierarchy of classes. There were quite a few template methods defined in more than one of the classes. All classes were abstract with the exception of the bottom classes, which just implemented one or two of the template methods. There was just a single public method in the entire hierarchy, right at the very top. We had to make a change in one of the classes at the bottom: one of the (protected) template method implementations had to be changed. The problem: how do you test it? It goes without saying that there were zero tests for the hierarchy. We know that we should never test private or protected methods. A class should ‘always’ be tested from its public interface. We should always write tests that express and test ‘what’ the method does and not ‘how’. That’s all well and good. However, in this case, the change needs to be done in a protected method (a template method implementation) that is part of the implementation of a public method defined in a class six levels up in the hierarchy. To test this method by invoking the public method of its grand grand grand grand parent, we will need to understand the entire hierarchy, mock all dependencies, create the appropriate data, and configure the mocks to have a well defined behaviour so that we can get this piece of code invoked and then tested. Worse than that, imagine that this class at the bottom has siblings overriding the same template method.
When the siblings need to be changed, the effort to write tests for them will be the same as it was for our original class. We will have loads of duplication and will also need to understand all the code inside all the classes in the hierarchy. The icing on the cake: there are hundreds of lines to be understood in all the parent classes. Breaking the rules: testing via the public method defined at the very top of the hierarchy had proven not to be worth it. The main reason is that, besides being painful, we already knew that the whole design was wrong. When we look at the classes in the hierarchy, they didn’t even follow the IS-A rule of inheritance; they inherit from each other so some code could be re-used. After some time I thought: screw the rules and this design. I’m gonna just directly test the protected method and then start breaking the hierarchy. The approach: Extract, Inject, Kill. The overall idea is:

1. Extract all the behaviour from the template method into a class.
2. Inject the new class into the parent class (where the template is defined), replacing the template method invocation with the invocation of the method in the new class.
3. Kill the child class (the one that had the template method implementation).

Repeat these steps until you get rid of the entire hierarchy. This was done writing the tests first, making the protected template method implementation public. NOTES:

1. This may not be so simple if we have methods calling up the stack in the hierarchy.
2. If the class has siblings, we have to extract all the behaviour from the siblings before we can inject into the parent and kill the siblings.

Here is a more concrete example of how to break deep hierarchies using the Extract, Inject, Kill approach. Imagine the following hierarchy.
public abstract class PricingService {

    public double calculatePrice(ShoppingBasket shoppingBasket, User user, String voucher) {
        double discount = calculateDiscount(user);
        double total = 0;
        for (ShoppingBasket.Item item : shoppingBasket.items()) {
            total += calculateProductPrice(item.getProduct(), item.getQuantity());
        }
        total = applyAdditionalDiscounts(total, user, voucher);
        return total * ((100 - discount) / 100);
    }

    protected abstract double calculateDiscount(User user);

    protected abstract double calculateProductPrice(Product product, int quantity);

    protected abstract double applyAdditionalDiscounts(double total, User user, String voucher);
}

public abstract class UserDiscountPricingService extends PricingService {

    @Override
    protected double calculateDiscount(User user) {
        int discount = 0;
        if (user.isPrime()) {
            discount = 10;
        }
        return discount;
    }
}

public abstract class VoucherPricingService extends UserDiscountPricingService {

    private VoucherService voucherService;

    @Override
    protected double applyAdditionalDiscounts(double total, User user, String voucher) {
        double voucherValue = voucherService.getVoucherValue(voucher);
        double totalAfterValue = total - voucherValue;
        return (totalAfterValue > 0) ? totalAfterValue : 0;
    }

    public void setVoucherService(VoucherService voucherService) {
        this.voucherService = voucherService;
    }
}

public class BoxingDayPricingService extends VoucherPricingService {

    public static final double BOXING_DAY_DISCOUNT = 0.60;

    @Override
    protected double calculateProductPrice(Product product, int quantity) {
        return ((product.getPrice() * quantity) * BOXING_DAY_DISCOUNT);
    }
}

public class StandardPricingService extends VoucherPricingService {

    @Override
    protected double calculateProductPrice(Product product, int quantity) {
        return product.getPrice() * quantity;
    }
}

Let’s start with the StandardPricingService.
First, let’s write some tests:

public class StandardPricingServiceTest {

    private TestableStandardPricingService standardPricingService = new TestableStandardPricingService();

    @Test
    public void should_return_product_price_when_quantity_is_one() {
        Product book = aProduct().costing(10).build();
        double price = standardPricingService.calculateProductPrice(book, 1);
        assertThat(price, is(10D));
    }

    @Test
    public void should_return_product_price_multiplied_by_quantity() {
        Product book = aProduct().costing(10).build();
        double price = standardPricingService.calculateProductPrice(book, 3);
        assertThat(price, is(30D));
    }

    @Test
    public void should_return_zero_when_quantity_is_zero() {
        Product book = aProduct().costing(10).build();
        double price = standardPricingService.calculateProductPrice(book, 0);
        assertThat(price, is(0D));
    }

    private class TestableStandardPricingService extends StandardPricingService {
        @Override
        protected double calculateProductPrice(Product product, int quantity) {
            return super.calculateProductPrice(product, quantity);
        }
    }
}

Note that I used a small trick here, extending the StandardPricingService class inside the test class so I could have access to the protected method. We should not use this trick in normal circumstances. Remember that if you feel the need to test protected or private methods, it is because your design is not quite right; that is, there is a domain concept missing in your design. In other words, there is a class crying to come out from the class you are trying to test. Now, let’s do step one of our Extract, Inject, Kill strategy: extract the content of the calculateProductPrice() method into another class called StandardPriceCalculation. This can be done automatically using IntelliJ or Eclipse. After a few minor adjustments, this is what we’ve got.
public class StandardPriceCalculation {
    public double calculateProductPrice(Product product, int quantity) {
        return product.getPrice() * quantity;
    }
}

And the StandardPricingService now looks like this:

public class StandardPricingService extends VoucherPricingService {

    private final StandardPriceCalculation standardPriceCalculation = new StandardPriceCalculation();

    @Override
    protected double calculateProductPrice(Product product, int quantity) {
        return standardPriceCalculation.calculateProductPrice(product, quantity);
    }
}

All your tests should still pass. As we created a new class, let’s add some tests to it. They should be the same tests we had for the StandardPricingService.

public class StandardPriceCalculationTest {

    private StandardPriceCalculation priceCalculation = new StandardPriceCalculation();

    @Test
    public void should_return_product_price_when_quantity_is_one() {
        Product book = aProduct().costing(10).build();
        double price = priceCalculation.calculateProductPrice(book, 1);
        assertThat(price, is(10D));
    }

    @Test
    public void should_return_product_price_multiplied_by_quantity() {
        Product book = aProduct().costing(10).build();
        double price = priceCalculation.calculateProductPrice(book, 3);
        assertThat(price, is(30D));
    }

    @Test
    public void should_return_zero_when_quantity_is_zero() {
        Product book = aProduct().costing(10).build();
        double price = priceCalculation.calculateProductPrice(book, 0);
        assertThat(price, is(0D));
    }
}

Great, one sibling done. Now let’s do the same thing for the BoxingDayPricingService.
public class BoxingDayPricingServiceTest {

    private TestableBoxingDayPricingService boxingDayPricingService = new TestableBoxingDayPricingService();

    @Test
    public void should_apply_boxing_day_discount_on_product_price() {
        Product book = aProduct().costing(10).build();
        double price = boxingDayPricingService.calculateProductPrice(book, 1);
        assertThat(price, is(6D));
    }

    @Test
    public void should_apply_boxing_day_discount_on_product_price_and_multiply_by_quantity() {
        Product book = aProduct().costing(10).build();
        double price = boxingDayPricingService.calculateProductPrice(book, 3);
        assertThat(price, is(18D));
    }

    private class TestableBoxingDayPricingService extends BoxingDayPricingService {
        @Override
        protected double calculateProductPrice(Product product, int quantity) {
            return super.calculateProductPrice(product, quantity);
        }
    }
}

Now let’s extract the behaviour into another class. Let’s call it BoxingDayPriceCalculation.

public class BoxingDayPriceCalculation {

    public static final double BOXING_DAY_DISCOUNT = 0.60;

    public double calculateProductPrice(Product product, int quantity) {
        return ((product.getPrice() * quantity) * BOXING_DAY_DISCOUNT);
    }
}

The new BoxingDayPricingService is now:

public class BoxingDayPricingService extends VoucherPricingService {

    private final BoxingDayPriceCalculation boxingDayPriceCalculation = new BoxingDayPriceCalculation();

    @Override
    protected double calculateProductPrice(Product product, int quantity) {
        return boxingDayPriceCalculation.calculateProductPrice(product, quantity);
    }
}

We now need to add the tests for the new class.
public class BoxingDayPriceCalculationTest { private BoxingDayPriceCalculation priceCalculation = new BoxingDayPriceCalculation(); @Test public void should_apply_boxing_day_discount_on_product_price() { Product book = aProduct().costing(10).build(); double price = priceCalculation.calculateProductPrice(book, 1); assertThat(price, is(6D)); } @Test public void should_apply_boxing_day_discount_on_product_price_and_multiply_by_quantity() { Product book = aProduct().costing(10).build(); double price = priceCalculation.calculateProductPrice(book, 3); assertThat(price, is(18D)); } } Now both StandardPricingService and BoxingDayPricingService have no implementation of their own. The only thing they do is delegate the price calculation to StandardPriceCalculation and BoxingDayPriceCalculation respectively. Both price calculation classes have the same public method, so now let’s extract a PriceCalculation interface and make them both implement it. public interface PriceCalculation { double calculateProductPrice(Product product, int quantity); } public class BoxingDayPriceCalculation implements PriceCalculation public class StandardPriceCalculation implements PriceCalculation Awesome. We are now ready for the Inject part of the Extract, Inject, Kill approach. We just need to inject the desired behaviour into the parent (the class that defines the template method). The calculateProductPrice() method is defined in PricingService, the class at the very top of the hierarchy. That’s where we want to inject the PriceCalculation implementation. 
Here is the new version: public abstract class PricingService { private PriceCalculation priceCalculation; public double calculatePrice(ShoppingBasket shoppingBasket, User user, String voucher) { double discount = calculateDiscount(user); double total = 0; for (ShoppingBasket.Item item : shoppingBasket.items()) { total += priceCalculation.calculateProductPrice(item.getProduct(), item.getQuantity()); } total = applyAdditionalDiscounts(total, user, voucher); return total * ((100 - discount) / 100); } protected abstract double calculateDiscount(User user); protected abstract double applyAdditionalDiscounts(double total, User user, String voucher); public void setPriceCalculation(PriceCalculation priceCalculation) { this.priceCalculation = priceCalculation; } } Note that the template method calculateProductPrice() was removed from the PricingService, since its behaviour is now being injected instead of implemented by subclasses. While we are here, let’s write some tests for this last change, checking if the PricingService is invoking the PriceCalculation correctly. Great. Now we are ready for the last bit of the Extract, Inject, Kill refactoring. Let’s kill both the StandardPricingService and BoxingDayPricingService child classes. The VoucherPricingService, now the deepest class in the hierarchy, can be promoted to a concrete class. Let’s have another look at the hierarchy. And that’s it. Now it is just a matter of repeating the same steps for VoucherPricingService and UserDiscountPricingService. Extract the implementation of their template methods into classes, inject them into PricingService, and kill the classes. In doing so, every time you extract a class, try to give it a proper name instead of calling it Service; suggestions could be VoucherDiscountCalculation and PrimeUserDiscountCalculation. There were a few unsafe steps in the refactoring described above, and I also struggled a little bit to describe exactly how I did it, since I was playing quite a lot with the code. 
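The article mentions writing tests that check whether PricingService really invokes the injected PriceCalculation, but does not show them. Below is a minimal, self-contained sketch of that check. The Product, Item and PricingService classes here are simplified stand-ins of my own (the real classes, abstract hooks and builders live in the article's code base), so treat this as an illustration of the delegation check, not the article's actual test code.

```java
import java.util.List;

// Simplified stand-ins for the article's types, just to show that
// calculatePrice() delegates to whatever PriceCalculation is injected.
interface PriceCalculation {
    double calculateProductPrice(Product product, int quantity);
}

class Product {
    private final double price;
    Product(double price) { this.price = price; }
    double getPrice() { return price; }
}

class Item {
    final Product product;
    final int quantity;
    Item(Product product, int quantity) { this.product = product; this.quantity = quantity; }
}

class PricingService {
    private PriceCalculation priceCalculation;

    public double calculatePrice(List<Item> items, double discount) {
        double total = 0;
        for (Item item : items) {
            total += priceCalculation.calculateProductPrice(item.product, item.quantity);
        }
        return total * ((100 - discount) / 100);
    }

    public void setPriceCalculation(PriceCalculation priceCalculation) {
        this.priceCalculation = priceCalculation;
    }
}

public class PricingServiceDelegationTest {
    public static void main(String[] args) {
        PricingService service = new PricingService();
        // Stub calculation: if the service really delegates, the stub's
        // fixed value must surface in the final price.
        service.setPriceCalculation((product, quantity) -> 42.0);
        double price = service.calculatePrice(List.of(new Item(new Product(10), 1)), 0);
        System.out.println(price == 42.0); // the stub's value came through, so delegation happened
    }
}
```

In the real code base the same idea would live in a JUnit test, with the stub hand-rolled as above or replaced by a mock.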
Suggestions and ideas are very welcome. For the final solution, please check the second part of this blog post. NOTE If you are not used to using builders in your tests and are asking yourself where the hell aProduct() and aShoppingBasket() come from, check the code here: ProductBuilder.java ShoppingBasketBuilder.java In part 2 I finish the exercise, breaking the entire hierarchy. Please have a look at it for the final solution. Reference: Extract, Inject, Kill: Breaking hierarchies (part 1), Extract, Inject, Kill: Breaking hierarchies (part 2) from our JCG partner Sandro Mancuso at the Crafted Software blog....
java-logo

java.lang.NoClassDefFoundError: How to resolve – Part 1

Exception in thread ‘main’ java.lang.NoClassDefFoundError is one of the common and difficult problems that you can face when developing Java EE enterprise or standalone Java applications. The complexity of the root cause analysis and resolution process mainly depends on the size of your Java EE middleware environment; especially given the high number of ClassLoaders present across the various Java EE applications. What I’m proposing to you is a series of articles which will provide you with a step by step approach on how to troubleshoot and resolve such problems. I will also share the most common Java NoClassDefFoundError problem patterns I have observed over the last 10 years. Sample Java programs will also be available in order to simplify your learning process. I also encourage you to post comments, share your problem case and ask me any question on this subject. Part 1 of the series will focus on a high level overview of this Java runtime error along with a Java ClassLoader overview. java.lang.NoClassDefFoundError – what is it? Now let’s begin with a simple overview of this problem. This runtime error is thrown by the JVM when there is an attempt by a ClassLoader to load the definition of a Class (a Class referenced in your application code etc.) and such Class definition could not be found within the current ClassLoader tree. Basically, this means that such a Class definition was found at compile time but is not found at runtime. Simple enough, so what about adding the missing Class to the classpath? Well, not so fast; this type of problem is not that simple to fix. Adding the missing Class / JAR to your runtime application classpath / ClassLoader is just one of many possible solutions. The key is to perform proper root cause analysis first. This is exactly why I’m creating this whole series. 
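To underline why root cause analysis matters, here is one quick way to see a NoClassDefFoundError with no missing JAR at all: if a class's static initializer throws, the JVM marks the class as unusable, and every later attempt to use it fails with NoClassDefFoundError. This is a sketch of my own (class names are mine), not one of the article's sample programs.

```java
// Reproduces NoClassDefFoundError without any classpath problem:
// the class definition is present, but its static initialization failed.
public class NoClassDefFoundErrorDemo {

    static class Broken {
        static {
            // The 'if (true)' keeps the compiler happy; the initializer still always throws.
            if (true) throw new RuntimeException("static init failed");
        }
    }

    public static void main(String[] args) {
        try {
            new Broken();
        } catch (Throwable t) {
            System.out.println(t); // first use: java.lang.ExceptionInInitializerError
        }
        try {
            new Broken();
        } catch (Throwable t) {
            System.out.println(t); // later uses: java.lang.NoClassDefFoundError
        }
    }
}
```

So the error message alone does not tell you whether a class is absent from the classpath, invisible to the right class loader, or simply failed to initialize.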
For now, just keep in mind that this error does not necessarily mean that you are missing this Class definition from your “expected” classpath or ClassLoader, so please do not assume anything at this point. Java ClassLoader overview Before going any further, it is very important that you have a high-level understanding of the Java ClassLoader principles. Quite often, individuals debugging NoClassDefFoundError problems struggle because they lack proper knowledge and understanding of Java ClassLoader principles, preventing them from pinpointing the root cause. A class loader is a Java object responsible for loading classes. Basically, a class loader attempts to locate or generate data that constitutes a definition for the class. One of the key points to understand is that Java class loaders by default use a delegation model to search for classes. Each instance of ClassLoader has an associated parent class loader. So let’s say that your application class loader needs to load class A. The first thing that it will attempt to do is delegate the search for Class A to its parent class loader before attempting to find Class A itself. You can end up with a large class loader chain with many parent class loaders, up to the JVM system classpath bootstrap class loader. What is the problem? Well, if Class A is found by a particular parent class loader then it will be loaded by that parent, which opens the door to NoClassDefFoundError if you are expecting Class A to be loaded by your application (child) class loader. For example, third-party JAR file dependencies could be present only in your application child class loader. Now let’s visualize this whole process in the context of a Java EE enterprise environment so you can better understand. As you can see, any code loaded by the child class loader (Web application) will first delegate to the parent class loader (Java EE App). Such parent class loader will then delegate to the JVM system class path class loader. 
If no such class is found by any parent class loader, then the Class will be loaded by the child class loader itself (assuming it can be found there). Please note that Java EE containers such as Oracle WebLogic have mechanisms to override this default class loader delegation behavior. I will get back to this in future articles. Please feel free to post any comment or question about what you have learned so far. Part 2 will follow shortly. Reference: java.lang.NoClassDefFoundError: How to resolve – Part 1 from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog....
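The parent chain described above is easy to inspect for yourself: every loaded class exposes its defining class loader, and each loader exposes its parent. A small sketch (the class name is mine):

```java
// Walks the class loader chain of this class up to the bootstrap loader.
public class ClassLoaderChainDemo {
    public static void main(String[] args) {
        ClassLoader loader = ClassLoaderChainDemo.class.getClassLoader();
        while (loader != null) {
            System.out.println(loader);
            loader = loader.getParent(); // delegation goes this way: child -> parent
        }
        // The bootstrap class loader is represented as null in the API.
        System.out.println("bootstrap class loader");
    }
}
```

Running the same loop from code deployed inside a Java EE container prints a much longer chain, which is exactly where the delegation surprises described in this article come from.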
apache-struts-logo

Query grid with struts 2 without plugin

When using jQuery with Struts 2, developers are usually steered towards the struts2-jquery plug-in, because most of the forums and other Internet resources support it. I had this experience myself. I wanted to use the jQuery grid plug-in with Struts 2, but without using the struts2-jquery plug-in. It was very hard for me to find a tutorial or any good resource on implementing a Struts 2 action class that creates the jQuery grid without the plug-in. Finally, I worked through it by myself and decided to post it for your convenience. This tutorial explains how to create a jQuery grid with Struts 2 without using the plug-in. I filtered this code out from my existing project. The architecture of the project is based on a Struts 2, Spring and Hibernate integrated environment. I am sure you can customise this code so that it suits your environment. Step 01: Creating the entity class for the ‘Province‘ master screen. I use JPA as the persistence technology and the Hibernate data access support given by Spring (HibernateDaoSupport). I am not going to explain this in detail. My major concern is how to create the Struts 2 action class that supports the jQuery grid. Here is my entity class. 
Province.java /** * */ package com.shims.model.maintenance;import java.io.Serializable; import java.util.ArrayList; import java.util.List;import javax.persistence.CascadeType; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.FetchType; import javax.persistence.GeneratedValue; import javax.persistence.GenerationType; import javax.persistence.Id; import javax.persistence.OneToMany; import javax.persistence.SequenceGenerator; import javax.persistence.Table;import org.hibernate.annotations.Cascade;import com.shims.model.Audited;/** * @author Semika Siriwardana * */ @Entity @Table(name='PROVINCE') public class Province extends Audited implements Serializable {private static final long serialVersionUID = -6842726343310595087L; @Id @SequenceGenerator(name='province_seq', sequenceName='province_seq') @GeneratedValue(strategy = GenerationType.AUTO, generator = 'province_seq') private Long id; @Column(name='description', nullable = false) private String name; @Column(name='status', nullable = false) private char status; /** * */ public Province() { super(); }/** * @param id */ public Province(Long id) { super(); this.id = id; }/** * @return the id */ public Long getId() { return id; }/** * @param id the id to set */ public void setId(Long id) { this.id = id; }/** * @return the name */ public String getName() { return name; }/** * @param name the name to set */ public void setName(String name) { this.name = name; }/** * @return the status */ public char getStatus() { return status; }/** * @param status the status to set */ public void setStatus(char status) { this.status = status; } }Step 02: Creating JSP file for ‘Province’ master screen grid. Keep in mind that jQuery grid is a plug-in for jQuery. So You need to download the relevant CSS file and JS file for the jQuery grid plug-in. You may need to include following resources in the head part of the JSP file. 
<link type='text/css' rel='stylesheet' media='screen' href='<%=request.getContextPath()%>/css/jquery/themes/redmond/jquery-ui-1.8.16.custom.css'> <link type='text/css' rel='stylesheet' media='screen' href='<%=request.getContextPath()%>/css/ui.jqgrid.css'> <script src='<%=request.getContextPath()%>/js/jquery-1.6.2.min.js' type='text/javascript'></script> <script src='<%=request.getContextPath()%>/js/grid.locale-en.js' type='text/javascript'></script> <script src='<%=request.getContextPath()%>/js/jquery.jqGrid.src.js' type='text/javascript'></script> <script src='<%=request.getContextPath()%>/js/jquery-ui-1.8.16.custom.min.js' type='text/javascript'></script> Then, We will create the required DOM contents in JSP file to render the grid. For this, You need only a simple TABLE and DIV elements placed with in the JSP file with the given ID’s as follows. <table id='list'></table> <div id='pager'></div> The ‘ pager‘ DIV tag is needed to render the pagination bar of the jQuery grid. Step 03: Creating JS file for ‘Province’ master screen grid. jQuery grid is needed to be initiated with javascript. I am going to initiate the grid with page on load. There are so many functionalities like adding new records, updating records, deleting records, searching supported by jQuery grid. I guess, You can familiar with those stuff, If You can create the initial grid. This javascript contains only the code that initiating the grid. var SHIMS = {} var SHIMS.Province = { onRowSelect: function(id) { //Handle event }, onLoadComplete: function() { //Handle grid load complete event. },onLoadError: function() { //Handle when data loading into grid failed. 
},/** * Initialize grid */ initGrid: function(){ jQuery('#list').jqGrid({ url:CONTEXT_ROOT + '/secure/maintenance/province!search.action', id:'gridtable', caption:'SHIMS:Province Maintenance', datatype: 'json', pager: '#pager', colNames:['Id','Name','Status'], pagerButtons:true, navigator:true, jsonReader : { root: 'gridModel', page: 'page', total: 'total', records: 'records', repeatitems: false, id: '0' }, colModel:[ { name:'id', index:'id', width:200, sortable:true, editable:false, search:true },{ name:'name', index:'name', width:280, sortable:true, editable:true, search:true, formoptions:{elmprefix:'(*)'}, editrules :{required:true} },{ name:'provinceStatus', index:'provinceStatus', width:200, sortable:false, editable:true, search:false, editrules:{required:true}, edittype:'select', editoptions:{value:'A:A;D:D'} }], rowNum:30, rowList:[10,20,30], width:680, rownumbers:true, viewrecords:true, sortname: 'id', viewrecords: true, sortorder: 'desc', onSelectRow:SHIMS.Province.onRowSelect, loadComplete:SHIMS.Province.onLoadComplete, loadError:SHIMS.Province.onLoadError, editurl:CONTEXT_ROOT + '/secure/maintenance/province!edit.action' }); }, /** * Invoke this method with page on load. */ onLoad: function() { this.initGrid(); } }; I wish to highlight some code snips from the above code. The ‘ jsonReader‘ attribute of the grid initialisation object was the key point where it was difficult to find and spent plenty of times to make the grid work. root – This should be list of objects. page – Current page number. total – This is the total number of pages. For example, if You have 1000 records and the page size is 10, the ‘total’ value will be 100. records – The total number of records or count of records. If You are creating grid with JSON data, the response of the specified ‘ url‘ should be JSON response in this format. Sample JSON response is shown bellow. 
{'gridModel':[ {'id':15001,'name':'Western','provinceStatus':'A'}, {'id':14001,'name':'North','provinceStatus':'A'}, {'id':13001,'name':'North Central','provinceStatus':'A'}, {'id':12002,'name':'East','provinceStatus':'A'}, {'id':12001,'name':'Southern','provinceStatus':'A'} ], 'page':1, 'records':11, 'rows':30, 'sidx':'id', 'sord':'desc', 'total':2} The above fields should be declared in the action class and updated appropriately. I will come to that later. If you intend to use operations like adding, deleting, updating and searching records, you must specify ‘editurl‘. Within the JSP file, just above the closing body tag, I have placed the following script to call the grid initialisation code. var CONTEXT_ROOT = '<%=request.getContextPath()%>'; jQuery(document).ready(function(){ SHIMS.Province.onLoad(); }); Step 04: Creating the Struts 2 action class for the ‘Province’ master screen grid. This is the most important part of this tutorial. ProvinceAction.java /** * */ package com.shims.web.actions.maintenance.province;import java.util.List;import org.apache.log4j.Logger; import org.apache.struts2.convention.annotation.Namespace; import org.apache.struts2.convention.annotation.ParentPackage; import org.apache.struts2.convention.annotation.Result; import org.apache.struts2.convention.annotation.Results; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Controller;import com.opensymphony.xwork2.ModelDriven; import com.shims.dto.Page; import com.shims.dto.ProvinceDto; import com.shims.model.maintenance.Province; import com.shims.service.maintenance.api.ProvinceService; import com.shims.support.SHIMSSoringSupport; import com.shims.web.actions.common.BaseAction; import com.shims.web.common.WebConstants;/** * @author Semika Siriwardana * */ @Controller @Namespace(WebConstants.MAINTENANCE_NAMESPACE) @ParentPackage(WebConstants.MAINTENANCE_PACKAGE) @Results({@Result(name=ProvinceAction.SUCCESS, 
location='/jsp/maintenance/province.jsp'), @Result(name = 'json', type = 'json')}) public class ProvinceAction extends BaseAction<Province> implements ModelDriven<Province> {private static final long serialVersionUID = -3007855590220260696L;private static Logger logger = Logger.getLogger(ProvinceAction.class); @Autowired private ProvinceService provinceService; private List<Province> gridModel = null; private Province model = new Province(); private Integer rows = 0; private Integer page = 0; private String sord; private String sidx; private Integer total = 0; private Integer records = 0; @Override public String execute() { return SUCCESS; } /** * Search provinces * @return */ public String search() throws Exception { Page<Province> resultPage = provinceService.findByCriteria(model, getRequestedPage(page)); List<Province> provinceList = resultPage.getResultList(); setGridModel(provinceList); setRecords(resultPage.getRecords()); setTotal(resultPage.getTotals()); return JSON; }/** * @return the gridModel */ public List<Province> getGridModel() { return gridModel; }/** * @param gridModel the gridModel to set */ public void setGridModel(List<Province> gridModel) { this.gridModel = gridModel; }/** * @return the page */ public Integer getPage() { return page; }/** * @param page the page to set */ public void setPage(Integer page) { this.page = page; }/** * @return the rows */ public Integer getRows() { return rows; }/** * @param rows the rows to set */ public void setRows(Integer rows) { this.rows = rows; }/** * @return the sidx */ public String getSidx() { return sidx; }/** * @param sidx the sidx to set */ public void setSidx(String sidx) { this.sidx = sidx; }/** * @return the sord */ public String getSord() { return sord; }/** * @param sord the sord to set */ public void setSord(String sord) { this.sord = sord; }/** * @return the total */ public Integer getTotal() { return total; }/** * @param total the total to set */ public void setTotal(Integer total) { this.total = 
total; }/** * @return the records */ public Integer getRecords() { return records; }/** * @param records the records to set */ public void setRecords(Integer records) { this.records = records; }@Override public Province getModel() { return model; } } The getRequestedPage() method is a generic method implemented with in the ‘BaseAction’ class which returns a ‘Page’ instance for a given type. BaseAction.java /** * */ package com.shims.web.actions.common;import java.util.Map;import javax.servlet.http.HttpServletRequest;import org.apache.struts2.interceptor.ServletRequestAware; import org.apache.struts2.interceptor.SessionAware;import com.opensymphony.xwork2.ActionSupport; import com.shims.dto.Page; import com.shims.dto.security.UserDto; import com.shims.web.common.WebConstants;import flexjson.JSONSerializer;/** * @author Semika Siriwardana * */ public abstract class BaseAction<T> extends ActionSupport implements ServletRequestAware, SessionAware {private static final long serialVersionUID = -8209196735097293008L; protected static final Integer PAGE_SIZE = 10;protected HttpServletRequest request; protected Map<String, Object> session; protected String JSON = 'json'; public abstract String execute(); public HttpServletRequest getRequest() { return request; }@Override public void setServletRequest(HttpServletRequest request) { this.request = request; }protected void setRequestAttribute(String key, Object obj) { request.setAttribute(key, obj); } /** * Returns generic Page instance. 
* @param domain * @param employeeDto * @return */ protected Page<T> getRequestedPage(Integer page){ Page<T> requestedPage = new Page<T>(); requestedPage.setPage(page); requestedPage.setRows(PAGE_SIZE); return requestedPage; } /** * @return the session */ public Map<String, Object> getSession() { return session; }/** * @param session the session to set */ public void setSession(Map<String, Object> session) { this.session = session; } } I already explained ‘gridModel‘, ‘page‘, ‘total‘ and ‘records‘. ‘sord‘ and ‘sidx‘, which are referred to as ‘sorting order’ and ‘sorting index’, are passed by the jQuery grid when we sort the data in the grid by some column. To fetch those two fields, we should declare them within the action class and provide setter and getter methods. Later, we can sort our data list based on those two parameters. Step 05: Implementing the service methods. From here onwards, most of the techniques are specific to my current project framework. I will explain them so that you can understand how I developed the related service and DAO methods. Since the jQuery grid supports paging, I needed a proper way of exchanging grid information from the front-end to the back-end and back again. I implemented a generic ‘Page’ class for this purpose. /** * */ package com.shims.dto;import java.util.ArrayList; import java.util.List;/** * @author semikas * */ public class Page<T> {/** * Query result list. */ private List<T> resultList = new ArrayList<T>(); /** * Requested page number. */ private Integer page = 1; /** * Number of rows displayed in a single page. */ private Integer rows = 10; /** * Total number of records returned from the query. */ private Integer records; /** * Total number of pages. 
*/ private Integer totals;/** * @return the resultDtoList */ public List<T> getResultList() { return resultList; }/** * @param resultDtoList the resultDtoList to set */ public void setResultList(List<T> resultList) { this.resultList = resultList; }/** * @return the page */ public Integer getPage() { return page; }/** * @param page the page to set */ public void setPage(Integer page) { this.page = page; }/** * @return the rows */ public Integer getRows() { return rows; }/** * @param rows the rows to set */ public void setRows(Integer rows) { this.rows = rows; }/** * @return the records */ public Integer getRecords() { return records; }/** * @param records the records to set */ public void setRecords(Integer records) { this.records = records; }/** * @return the totals */ public Integer getTotals() { return totals; }/** * @param totals the totals to set */ public void setTotals(Integer totals) { this.totals = totals; } } Also, with some search criteria, We can fetch the data and update the grid accordingly. My service interface and implementation classes are as follows. ProvinceService.java /** * */ package com.shims.service.maintenance.api;import java.util.List;import com.shims.dto.Page; import com.shims.exceptions.ServiceException; import com.shims.model.maintenance.Province;/** * @author Semika Siriwardana * */ public interface ProvinceService {/** * Returns list of provinces for a given search criteria. 
* @return * @throws ServiceException */ public Page<Province> findByCriteria(Province searchCriteria, Page<Province> page) throws ServiceException; } ProvinceServiceImpl.java /** * */ package com.shims.service.maintenance.impl;import java.util.ArrayList; import java.util.List;import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service;import com.shims.dto.Page; import com.shims.exceptions.ServiceException; import com.shims.model.maintenance.Province; import com.shims.persist.maintenance.api.ProvinceDao; import com.shims.service.maintenance.api.ProvinceService;/** * @author Semika Siriwardana * */ @Service public class ProvinceServiceImpl implements ProvinceService {@Autowired private ProvinceDao provinceDao; /** * {@inheritDoc} */ @Override public Page<Province> findByCriteria(Province searchCriteria, Page<Province> page) throws ServiceException { Page<Province> resultPage = provinceDao.findByCriteria(searchCriteria, page); return resultPage; }} I am using the ‘Page’ instance to exchange the grid information between the front end and the back end. Next, I will explain the other important part of this tutorial. Step 06: Implementing the data access method. In the DAO method, we should fetch only the records of the page requested by the user, and we should also update the ‘Page’ instance attributes so that they are reflected in the grid. Since I am using Hibernate, I used ‘Criteria’ to retrieve the required data from the database. You can implement this in your own way, but it should update the grid information properly. ProvinceDao.java /** * */ package com.shims.persist.maintenance.api;import com.shims.dto.Page; import com.shims.exceptions.DataAccessException; import com.shims.model.maintenance.Province; import com.shims.persist.common.GenericDAO;/** * @author Semika Siriwardana * */ public interface ProvinceDao extends GenericDAO<Province, Long> {/** * Returns search results for a given search criteria. 
* @param searchCriteria * @param page * @return * @throws DataAccessException */ public Page<Province> findByCriteria(Province searchCriteria, Page<Province> page) throws DataAccessException; }ProvinceDaoImpl.java /** * */ package com.shims.persist.maintenance.impl;import java.util.List;import org.hibernate.Criteria; import org.hibernate.SessionFactory; import org.hibernate.criterion.MatchMode; import org.hibernate.criterion.Projections; import org.hibernate.criterion.Restrictions; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Repository;import com.shims.dto.Page; import com.shims.exceptions.DataAccessException; import com.shims.model.maintenance.Province; import com.shims.persist.maintenance.api.ProvinceDao;/** * @author Semika Siriwardana * */ @Repository public class ProvinceDaoImpl extends AbstractMaintenanceDaoSupport<Province, Long> implements ProvinceDao {@Autowired public ProvinceDaoImpl(SessionFactory sessionFactory) { setSessionFactory(sessionFactory); }/** * {@inheritDoc} */ @SuppressWarnings('unchecked') @Override public Page<Province> findByCriteria(ProvinceDto searchCriteria, Page<Province> page) throws SHIMSDataAccessException { Criteria criteria = getSession().createCriteria(Province.class); if (searchCriteria.getName() != null && searchCriteria.getName().trim().length() != 0) { criteria.add(Restrictions.ilike('name', searchCriteria.getName(), MatchMode.ANYWHERE)); } //get total number of records first criteria.setProjection(Projections.rowCount()); Integer rowCount = ((Integer)criteria.list().get(0)).intValue(); //reset projection to null criteria.setProjection(null); Integer to = page.getPage() * page.getRows(); Integer from = to - page.getRows(); criteria.setFirstResult(from); criteria.setMaxResults(to); //calculate the total pages for the query Integer totNumOfPages =(int) Math.ceil((double)rowCount / (double)page.getRows()); List<Province> privinces = (List<Province>)criteria.list(); 
//Update 'page' instance. page.setRecords(rowCount); //Total number of records page.setTotals(totNumOfPages); //Total number of pages page.setResultList(privinces); return page; } } I think this will be a good help for developers who wish to use pure jQuery with Struts 2. If you find more related to this topic, please post below. Reference: How to use jQuery grid with struts 2 without plugin ? from our JCG partner Semika loku kaluge at the Code Box blog....
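The offset arithmetic buried in the DAO above (to = page * rows, from = to - rows, page count = ceil(rowCount / rows)) is worth pinning down, since jqGrid sends a 1-based page number while Hibernate expects a zero-based row offset. A small self-contained sketch (helper and class names are mine):

```java
// Illustrates the paging arithmetic used in ProvinceDaoImpl:
// a 1-based page number becomes a zero-based first-row offset,
// and the page count is the ceiling of rowCount / pageSize.
public class PagingMath {

    // Zero-based index of the first row of the requested page.
    static int firstResult(int page, int rows) {
        return (page * rows) - rows; // same as (page - 1) * rows
    }

    // Total number of pages for rowCount records at 'rows' per page.
    static int totalPages(int rowCount, int rows) {
        return (int) Math.ceil((double) rowCount / (double) rows);
    }

    public static void main(String[] args) {
        System.out.println(firstResult(3, 10)); // page 3 of size 10 starts at row 20
        System.out.println(totalPages(11, 30)); // 11 records fit on a single page of 30
        System.out.println(totalPages(31, 30)); // one extra record forces a second page
    }
}
```

One caution on the listing above: Hibernate's setMaxResults() takes a maximum row count (the page size), not an end index, so setMaxResults(page.getRows()) is the safer call there; passing the computed 'to' value can over-fetch.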
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.