
Hibernate Composite Ids with association mappings

Recently, we faced a tricky situation with a Hibernate association mapping involving a composite id field. We needed a bidirectional association with one-to-many and many-to-one sides. Our two tables were REPORT and REPORT_SUMMARY, with a one-to-many relationship from REPORT to REPORT_SUMMARY and a many-to-one relationship from REPORT_SUMMARY back to REPORT. The primary key of the REPORT_SUMMARY table is defined as a composite primary key consisting of an auto-increment id field and the primary key of the REPORT table.

CREATE TABLE REPORT (
  ID INT(10) NOT NULL AUTO_INCREMENT,
  NAME VARCHAR(45) NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE REPORT_SUMMARY (
  ID INT(10) NOT NULL AUTO_INCREMENT,
  NAME VARCHAR(45) NOT NULL,
  RPT_ID INT(10) NOT NULL,
  PRIMARY KEY (`ID`,`RPT_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

The Hibernate entity classes are as follows.

Report.java

package com.semika.autoac.entities;

import java.io.Serializable;
import java.util.HashSet;
import java.util.Set;

public class Report implements Serializable {

    private static final long serialVersionUID = 9146156921169669644L;

    private Integer id;
    private String name;
    private Set<ReportSummary> reportSummaryList = new HashSet<ReportSummary>();

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Set<ReportSummary> getReportSummaryList() { return reportSummaryList; }
    public void setReportSummaryList(Set<ReportSummary> reportSummaryList) {
        this.reportSummaryList = reportSummaryList;
    }
}

ReportSummary.java

package com.semika.autoac.entities;

import java.io.Serializable;

public class ReportSummary implements Serializable {

    private static final long serialVersionUID = 8052962961003467437L;

    private ReportSummaryId id;
    private String name;

    public ReportSummaryId getId() { return id; }
    public void setId(ReportSummaryId id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((id == null) ? 0 : id.hashCode());
        result = prime * result + ((name == null) ? 0 : name.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null) return false;
        if (getClass() != obj.getClass()) return false;
        ReportSummary other = (ReportSummary) obj;
        if (id == null) {
            if (other.id != null) return false;
        } else if (!id.equals(other.id)) return false;
        if (name == null) {
            if (other.name != null) return false;
        } else if (!name.equals(other.name)) return false;
        return true;
    }
}

ReportSummaryId.java

package com.semika.autoac.entities;

import java.io.Serializable;

public class ReportSummaryId implements Serializable {

    private static final long serialVersionUID = 6911616314813390449L;

    private Integer id;
    private Report report;

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public Report getReport() { return report; }
    public void setReport(Report report) { this.report = report; }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((id == null) ? 0 : id.hashCode());
        result = prime * result + ((report == null) ? 0 : report.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null) return false;
        if (getClass() != obj.getClass()) return false;
        ReportSummaryId other = (ReportSummaryId) obj;
        if (id == null) {
            if (other.id != null) return false;
        } else if (!id.equals(other.id)) return false;
        if (report == null) {
            if (other.report != null) return false;
        } else if (!report.equals(other.report)) return false;
        return true;
    }
}

A Report object has a collection of ReportSummary objects, and ReportSummaryId holds a reference back to its Report object. The most important part of this implementation is the pair of Hibernate mapping files.

Report.hbm.xml

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
    <class name="com.semika.autoac.entities.Report" table="REPORT">
        <id name="id" type="int" column="id">
            <generator class="native"/>
        </id>
        <property name="name">
            <column name="NAME" />
        </property>
        <set name="reportSummaryList" table="REPORT_SUMMARY" cascade="all" inverse="true">
            <key column="RPT_ID" not-null="true"></key>
            <one-to-many class="com.semika.autoac.entities.ReportSummary"/>
        </set>
    </class>
</hibernate-mapping>

ReportSummary.hbm.xml

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
    <class name="com.semika.autoac.entities.ReportSummary" table="REPORT_SUMMARY">
        <composite-id name="id" class="com.semika.autoac.entities.ReportSummaryId">
            <key-property name="id" column="ID"></key-property>
            <key-many-to-one name="report" class="com.semika.autoac.entities.Report" column="RPT_ID"></key-many-to-one>
        </composite-id>
        <property name="name">
            <column name="NAME" />
        </property>
    </class>
</hibernate-mapping>
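To see how the pieces fit together, here is a short, illustrative sketch of persisting a parent Report with one ReportSummary child through these mappings. The SessionFactory bootstrap and all variable names here are assumptions, not part of the original post; only the entity wiring mirrors the mappings above. Note that Hibernate composite ids are application-assigned, so the numeric part of the key is set explicitly.

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

import com.semika.autoac.entities.Report;
import com.semika.autoac.entities.ReportSummary;
import com.semika.autoac.entities.ReportSummaryId;

public class ReportDemo {
    public static void main(String[] args) {
        // Assumes a hibernate.cfg.xml that lists both *.hbm.xml mapping files.
        SessionFactory factory = new Configuration().configure().buildSessionFactory();
        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();

        Report report = new Report();
        report.setName("Monthly sales");

        // The composite id wraps the parent reference plus the numeric part.
        ReportSummaryId summaryId = new ReportSummaryId();
        summaryId.setReport(report);
        summaryId.setId(1); // composite ids are assigned by the application

        ReportSummary summary = new ReportSummary();
        summary.setId(summaryId);
        summary.setName("January");

        report.getReportSummaryList().add(summary);

        // cascade="all" on the set propagates the save to the children.
        session.save(report);
        tx.commit();
        session.close();
        factory.close();
    }
}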
Reference: How to Use Hibernate for Composite Ids with association mappings from our JCG partner Semika loku kaluge at the Code Box blog....

Using Gradle to Bootstrap your Legacy Ant Builds

Gradle provides several different ways to leverage your existing investment in Ant, both in terms of accumulated knowledge and the time you've already put into build files. This can greatly ease the process of porting Ant-built projects over to Gradle, and gives you a path for doing so incrementally. The Gradle documentation does a good job of describing how you can use Ant in your Gradle build script, but here's a quick overview and some particulars I've run into myself.

Gradle AntBuilder

Every Gradle Project includes an AntBuilder instance, making any and all of the facilities of Ant available within your build files. Gradle provides a simple extension to the existing Groovy AntBuilder which adds a simple yet powerful way to interface with existing Ant build files: the importBuild(Object antBuildFile) method. Internally this method uses an Ant ProjectHelper to parse the specified Ant build file and then wraps all of its targets in Gradle tasks, making them available in the Gradle build. The following is a simple Ant build file, used for illustration, which contains some properties and a couple of dependent targets.

<?xml version="1.0"?>
<project name="build" default="all">
    <echo>Building ${ant.file}</echo>

    <property file="build.properties"/>
    <property name="root.dir" location="."/>

    <target name="dist" description="Build the distribution">
        <property name="dist.dir" location="dist"/>
        <echo>dist.dir=${dist.dir}, foo=${foo}</echo>
    </target>

    <target name="all" description="Build everything" depends="dist"/>
</project>

Importing this build file using Gradle is a one-liner.

ant.importBuild('src/main/resources/build.xml')

And the output of gradle tasks --all on the command line shows that the targets have been added to the build tasks.

$ gradle tasks --all
...
Other tasks
-----------
all - Build everything
dist - Build the distribution
...

Properties used in the Ant build file can be specified in the Gradle build or on the command line and, unlike the usual Ant property behaviour, properties set by Ant or on the command line may be overwritten by Gradle. Given a simple build.properties file with foo=bar as its single entry, here are a few combinations that demonstrate the override behaviour.

Command line invocation | Gradle build config | Effect | Result
gradle dist | ant.importBuild('src/main/resources/build.xml') | build.properties value loaded from the Ant build is used | foo=bar
gradle dist -Dfoo=NotBar | ant.importBuild('src/main/resources/build.xml') | command line property is used | foo=NotBar
gradle dist -Dfoo=NotBar | ant.foo='NotBarFromGradle'; ant.importBuild('src/main/resources/build.xml') | Gradle build property is used | foo=NotBarFromGradle
gradle dist -Dfoo=NotBar | ant.foo='NotBarFromGradle'; ant.importBuild('src/main/resources/build.xml'); ant.foo='NotBarFromGradleAgain' | Gradle build property override is used | foo=NotBarFromGradleAgain

How to deal with task name clashes

Since Gradle insists on unique task names, attempting to import an Ant build that contains a target with the same name as an existing Gradle task will fail. The most common clash I've encountered is with the clean task provided by the Gradle BasePlugin. With the help of a little bit of indirection we can still import and use any clashing targets by utilizing the GradleBuild task to bootstrap the Ant build import in an isolated Gradle project. Let's add a new task to the imported Ant build, and another dependency, on the Ant clean target, to the all task.
<!-- excerpt from buildWithClean.xml Ant build file -->
<target name="clean" description="clean up">
    <echo>Called clean task in ant build with foo = ${foo}</echo>
</target>
<target name="all" description="Build everything" depends="dist,clean"/>

And a simple Gradle build file which will handle the import.

ant.importBuild('src/main/resources/buildWithClean.xml')

Finally, in our main Gradle build file we add a task to run the targets we want.

task importTaskWithExistingName(type: GradleBuild) { GradleBuild antBuild ->
    antBuild.buildFile = 'buildWithClean.gradle'
    antBuild.tasks = ['all']
}

This works, but unfortunately suffers from one small problem. When Gradle imports these tasks it doesn't properly respect the declared order of the dependencies. Instead it executes the dependent Ant targets in alphabetical order. In this particular case Ant expects to execute the dist target before clean, and Gradle executes them in the reverse order. This can be worked around by explicitly stating the task order; definitely not ideal, but workable. This Gradle task will execute the underlying Ant targets in the order we need.

task importTasksRunInOrder(type: GradleBuild) { GradleBuild antBuild ->
    antBuild.buildFile = 'buildWithClean.gradle'
    antBuild.tasks = ['dist', 'clean']
}

Gradle Rules for the rest

Finally, you can use a Gradle Rule to allow for calling any arbitrary target in a GradleBuild bootstrapped import.

tasks.addRule('Pattern: a-<target> will execute a single <target> in the ant build') { String taskName ->
    if (taskName.startsWith('a-')) {
        task(taskName, type: GradleBuild) {
            buildFile = 'buildWithClean.gradle'
            tasks = [taskName - 'a-']
        }
    }
}

In this particular example, this also allows you to string together calls, but be warned that they execute in completely segregated environments.

$ gradle a-dist a-clean

Source code

All of the code referenced in this article is available on github if you'd like to take a closer look.

Reference: Using Gradle to Bootstrap your Legacy Ant Builds from our JCG partner Kelly Robinson at the The Kaptain on ... stuff blog....

Hooking into the Jenkins (Hudson) API, Part 1

Which one – Hudson or Jenkins? Both. I started working on this little project a couple of months back using Hudson v1.395 and returned to it after the great divide happened. I took it as an opportunity to see whether there would be any significant problems should I choose to move permanently to Jenkins in the future. There were a couple of hiccups, most notably that the new CLI jar didn't work right out of the box, but overall v1.401 of Jenkins worked as expected after the switch. The good news is that the old version of the CLI jar still works, so this example is actually using a mix of code to get things done. Anyway, the software is great and there's more than enough credit to go around.

The API

Jenkins/Hudson has a handy remote API packed with information about your builds, and it supports a rich set of functionality to control them, and the server in general, remotely. It is possible to trigger builds, copy jobs, stop the server and even install plugins remotely. You have your choice of XML, JSON or Python when interacting with the APIs of the server. And, as the built-in documentation says, you can find the functionality you need on a relative path from the build server url at "/.../api/", where the '...' portion is the object for which you'd like access. This will show a brief documentation page if you navigate to it in a browser, and will return a result if you add the desired format as the last part of the path. For instance, to load information about the computer running a locally hosted Jenkins server, a GET request on this url returns the result in JSON format: http://localhost:8080/computer/api/json.

{
    "busyExecutors": 0,
    "displayName": "nodes",
    "computer": [
        {
            "idle": true,
            "executors": [{}, {}],
            "actions": [],
            "temporarilyOffline": false,
            "loadStatistics": {},
            "displayName": "master",
            "oneOffExecutors": [],
            "manualLaunchAllowed": true,
            "offline": false,
            "launchSupported": true,
            "icon": "computer.png",
            "monitorData": {
                "hudson.node_monitors.ResponseTimeMonitor": { "average": 111 },
                "hudson.node_monitors.ClockMonitor": { "diff": 0 },
                "hudson.node_monitors.TemporarySpaceMonitor": { "size": 58392846336 },
                "hudson.node_monitors.SwapSpaceMonitor": null,
                "hudson.node_monitors.DiskSpaceMonitor": { "size": 58392846336 },
                "hudson.node_monitors.ArchitectureMonitor": "Mac OS X (x86_64)"
            },
            "offlineCause": null,
            "numExecutors": 2,
            "jnlpAgent": false
        }
    ],
    "totalExecutors": 2
}

(The original post shows the same tree rendered using GraphViz.) This functionality extends out in a tree from the root of the server, and you can gate how much of the tree you load from any particular branch by supplying a 'depth' parameter on your urls. Be careful how high you specify this variable: testing with a load depth of four against a populous, long-running build server (dozens of builds with thousands of job executions) managed to regularly time out for me. (To give you an idea, the original post includes a rough visualization of the domain at depth three from the root of the api.) Getting data out of the server is very simple, but the ability to remotely trigger activity on the server is more interesting. In order to trigger a build of a job named 'test', a POST on http://localhost:8080/job/test/build does the job.
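If you want to try those two calls without any libraries at all, here is a minimal plain-Java sketch. It assumes an unsecured local install (no credentials required); the job name 'test' comes from the article, and everything else is illustrative.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JenkinsApiSketch {
    public static void main(String[] args) throws Exception {
        // GET the computer info shown above, as JSON.
        URL api = new URL("http://localhost:8080/computer/api/json");
        HttpURLConnection get = (HttpURLConnection) api.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(get.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }

        // Trigger a build of the job named 'test' with an empty POST body.
        URL build = new URL("http://localhost:8080/job/test/build");
        HttpURLConnection post = (HttpURLConnection) build.openConnection();
        post.setRequestMethod("POST");
        post.setDoOutput(true);
        post.getOutputStream().close();
        System.out.println("Build trigger returned HTTP " + post.getResponseCode());
    }
}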
Using the available facilities, it's pretty easy to do things like:

- load a job's configuration file, modify it and create a new job by POSTing the new config.xml file
- move a job from one build machine to another
- build up an overview of scheduled builds

The CLI Jar

There's another way to remotely drive build servers: the CLI jar distributed along with the server. This jar provides simple facilities for executing certain commands remotely on the build server. Of note, this enables installing plugins remotely and executing a remote Groovy shell. I incorporated this functionality with a very thin wrapper around the main class exposed by the CLI jar, as shown in the next code sample.

/**
 * Drive the CLI with multiple arguments to execute.
 * Optionally accepts streams for input, output and err, all of which
 * are set by default to System unless otherwise specified.
 * @param rootUrl
 * @param args
 * @param input
 * @param output
 * @param err
 * @return
 */
def runCliCommand(String rootUrl, List<String> args, InputStream input = System.in,
                  OutputStream output = System.out, OutputStream err = System.err) {
    def CLI cli = new CLI(rootUrl.toURI().toURL())
    cli.execute(args, input, output, err)
    cli.close()
}

And here's a simple test showing how you can execute a Groovy script to load information about jobs, similar to what you can do from the built-in Groovy script console on the server, which for a locally installed deployment can be found at http://localhost:8080/script.

def 'should be able to query hudson object through a groovy script'() {
    final ByteArrayOutputStream output = new ByteArrayOutputStream()

    when:
    api.runCliCommand(rootUrl, ['groovysh',
            'for(item in hudson.model.Hudson.instance.items) { println("job $item.name")}'],
            System.in, output, System.err)

    then:
    println output.toString()
    output.toString().split('\n')[0].startsWith('job')
}

Here are some links to articles about the CLI, if you want to learn more:

- Hudson CLI wikidoc
- Jenkins CLI wikidoc
- A template for PHP jobs on Jenkins
- An article from Kohsuke Kawaguchi
- A nice tutorial

HTTPBuilder

HTTPBuilder is my tool of choice when programming against an HTTP API nowadays. The usage is very straightforward and I was able to get away with only two methods to support reaching the entire API: one for GET and one for POST. Here's the GET method, sufficient for executing the request and parsing the JSON response, complete with (albeit naive) error handling.

/**
 * Load info from a particular rootUrl+path, optionally specifying a 'depth' query
 * parameter (default depth = 0)
 *
 * @param rootUrl the base url to access
 * @param path the api path to append to the rootUrl
 * @param depth the depth query parameter to send to the api, defaults to 0
 * @return parsed json (as a map) or xml (as GPathResult)
 */
def get(String rootUrl, String path, int depth = 0) {
    def status
    HTTPBuilder http = new HTTPBuilder(rootUrl)
    http.handler.failure = { resp ->
        println "Unexpected failure on $rootUrl$path: ${resp.statusLine} ${resp.status}"
        status = resp.status
    }

    def info
    http.get(path: path, query: [depth: depth]) { resp, json ->
        info = json
        status = resp.status
    }
    info ?: status
}

Calling this to fetch data is a one-liner, as the only real difference between calls is the 'path' variable used when calling the API.

private final GetRequestSupport requestSupport = new GetRequestSupport()
...

/**
 * Display the job api for a particular Hudson job.
 * @param rootUrl the url for a particular build
 * @return job info in json format
 */
def inspectJob(String rootUrl, int depth = 0) {
    requestSupport.get(rootUrl, API_JSON, depth)
}

Technically, there's nothing here that limits this to JSON only. One of the great things about HTTPBuilder is that it will happily just try to do the right thing with the response. If the data returned is in JSON format, as in these examples, it gets parsed into a JSONObject. If, on the other hand, the data is XML, it gets parsed into a Groovy GPathResult. Both of these are very easily navigable, although the syntax for navigating their object graphs differs.

What can you do with it?

My primary motivation for exploring the API of Hudson/Jenkins was to see how I could make managing multiple servers easier. At present I work daily with four build servers and another handful of slave machines, and support a variety of different version branches. This includes a mix of unit and functional test suites, as well as a continuous deployment job that regularly pushes changes to test machines matching our supported platform matrix, so unfortunately things are not quite as simple as copying a single job when branching. Creating the build infrastructure for new feature branches in an automatic, or at least semi-automatic, fashion is attractive indeed, especially since plans are in the works to expand build automation. For a recent 555 day project, I utilized the API layer to build a Grails app functioning as both a cross-server build radiator and a central facility for server management. This proof of concept is capable of connecting to multiple build servers and visualizing job data as well as specific system configuration, triggering builds, and direct linking to each of the connected servers to allow for drilling down further. (The original post includes a couple of mock-ups that pretty much show the picture.)

Just a pretty cool app for installing Jenkins

This is only very indirectly related, but I came across a very nice and simple Griffon app, called the Jenkins-Assembler, which simplifies preparing your build server. It presents you with a list of plugins, letting you pick and choose, and then downloads and composes them into a single deployable war.

Enough talking – where's the code???

Source code related to this article is available on github. The tests are more of an exploration of the live API than an actual test of the code in this project. They run against a local server launched using the Gradle Jetty plugin.

Continue to Part 2.

Reference: Hooking into the Jenkins (Hudson) API from our JCG partner Kelly Robinson at the The Kaptain on ... stuff blog....

Hooking into the Jenkins (Hudson) API, Part 2

This post continues from Part 1 of the tutorial. It's been almost a year, but I finally had some time to revisit some code I wrote for interacting with the Jenkins api. I've used parts of this work to help manage a number of Jenkins build servers, mostly in terms of keeping plugins in sync and moving jobs from one machine to another. For this article I'm going to focus primarily on the CLI jar functionality and some of the things you can do with it. This has mostly been developed against Jenkins, but I did some light testing with Hudson and it worked there for everything I tried, so the code remains mostly agnostic as to your choice of build server.

The project structure

The code is hosted on Github, and provides a Gradle build which downloads and launches a Jenkins (or Hudson) server locally to execute tests. The server is set to use the Gradle build directory as its working directory, so it can be deleted simply by executing gradle clean. I tried it using both the Jenkins and the Hudson versions of the required libraries and, aside from some quirks between the two CLI implementations, they continue to function very much the same. If you want to try it with Hudson instead of Jenkins, pass in the command flag -Pswitch and the appropriate war and libraries will be used. The project is meant to be run with Gradle 1.0-milestone-8, and comes with a Gradle wrapper for that version. Most of the code remains the same since the original article, but there are some enhancements and changes to deal with the newer versions of Jenkins and Hudson. The library produced by this project is published as a Maven artifact, and later on I'll describe exactly how to get at it. There are also some samples included that demonstrate using that library in Gradle or Maven projects, and in Groovy scripts with Grapes. We're using Groovy 1.8.6, Gradle 1.0-milestone-8 and Maven 3.0.3 to build everything.

Getting more out of the CLI

As an alternative to the api, the CLI jar is a very capable way of interacting with the build server. In addition to a variety of built-in commands, Groovy scripts can be executed remotely, and with a little effort we can easily serialize responses in order to work with data extracted on the server. As an execution environment, the server provides a Groovysh shell and stocks it with imports for the hudson.model package. Also passed into the Binding is the instance of the Jenkins/Hudson singleton object in that package. In these examples I'm using the backwards-compatible Hudson version, since the code is intended to be runnable on either flavor of the server.

The available commands

There's a rich variety of built-in commands, all of which are implemented in the hudson.cli package. Here are the ones listed on the CLI page of the running application:

- build: Builds a job, and optionally waits until its completion.
- cancel-quiet-down: Cancel the effect of the "quiet-down" command.
- clear-queue: Clears the build queue
- connect-node: Reconnect to a node
- copy-job: Copies a job.
- create-job: Creates a new job by reading stdin as a configuration XML file.
- delete-builds: Deletes build record(s).
- delete-job: Deletes a job
- delete-node: Deletes a node
- disable-job: Disables a job
- disconnect-node: Disconnects from a node
- enable-job: Enables a job
- get-job: Dumps the job definition XML to stdout
- groovy: Executes the specified Groovy script.
- groovysh: Runs an interactive groovy shell.
- help: Lists all the available commands.
- install-plugin: Installs a plugin either from a file, an URL, or from update center.
- install-tool: Performs automatic tool installation, and print its location to stdout. Can be only called from inside a build.
- keep-build: Mark the build to keep the build forever.
- list-changes: Dumps the changelog for the specified build(s).
- login: Saves the current credential to allow future commands to run without explicit credential information.
- logout: Deletes the credential stored with the login command.
- mail: Reads stdin and sends that out as an e-mail.
- offline-node: Stop using a node for performing builds temporarily, until the next "online-node" command.
- online-node: Resume using a node for performing builds, to cancel out the earlier "offline-node" command.
- quiet-down: Quiet down Jenkins, in preparation for a restart. Don't start any builds.
- reload-configuration: Discard all the loaded data in memory and reload everything from file system. Useful when you modified config files directly on disk.
- restart: Restart Jenkins
- safe-restart: Safely restart Jenkins
- safe-shutdown: Puts Jenkins into the quiet mode, wait for existing builds to be completed, and then shut down Jenkins.
- set-build-description: Sets the description of a build.
- set-build-display-name: Sets the displayName of a build
- set-build-result: Sets the result of the current build. Works only if invoked from within a build.
- shutdown: Immediately shuts down Jenkins server
- update-job: Updates the job definition XML from stdin. The opposite of the get-job command
- version: Outputs the current version.
- wait-node-offline: Wait for a node to become offline
- wait-node-online: Wait for a node to become online
- who-am-i: Reports your credential and permissions

It's not immediately apparent what arguments are required for each, but they almost universally follow a CLI pattern of printing usage details when called with no arguments. For instance, when you call the build command with no arguments, here's what you get back in the error stream:

Argument "JOB" is required
java -jar jenkins-cli.jar build args...
Starts a build, and optionally waits for a completion.
Aside from general scripting use, this command can be used to invoke another job from within a build of one job.
With the -s option, this command changes the exit code based on the outcome of the build (exit code 0 indicates a success.)
With the -c option, a build will only run if there has been an SCM change.
 JOB : Name of the job to build
 -c  : Check for SCM changes before starting the build, and if there's no change, exit without doing a build
 -p  : Specify the build parameters in the key=value format.
 -s  : Wait until the completion/abortion of the command

Getting data out of the system

All of the interaction with the remote system is handled by streams, and it's pretty easy to craft scripts that return data in an easily parseable String format using built-in Groovy facilities. In theory, you should be able to marshal more complex objects as well, but let's keep it simple for now. Here's a Groovy script that extracts all of the job names into a List, calling the Groovy inspect method to quote all values.
@GrabResolver(name='glassfish', root='http://maven.glassfish.org/content/groups/public/')
@GrabResolver(name='github', root='http://kellyrob99.github.com/Jenkins-api-tour/repository')
@Grab('org.kar:hudson-api:0.2-SNAPSHOT')
@GrabExclude('org.codehaus.groovy:groovy')
import org.kar.hudson.api.cli.HudsonCliApi

String rootUrl = 'http://localhost:8080'
HudsonCliApi cliApi = new HudsonCliApi()
OutputStream out = new ByteArrayOutputStream()
cliApi.runCliCommand(rootUrl, ['groovysh', 'hudson.jobNames.inspect()'], System.in, out, System.err)
List allJobs = Eval.me(cliApi.parseResponse(out.toString()))
println allJobs

Once we get the response back, we do a little housekeeping to remove some extraneous characters at the beginning of the String, and use Eval.me to transform the String into a List. Groovy provides a variety of ways of turning text into code, so if your usage scenario gets more complicated than this simple case you can use a GroovyShell with a Binding, or another alternative, to parse the results into something useful. This easy technique extends to Maps and other types as well, making it simple to work with data sent back from the server.

Some useful examples

Finding plugins with updates and updating all of them

Here's an example of using a Groovy script to find all of the plugins that have updates available, returning that result to the caller, and then calling the CLI 'install-plugin' command on all of them. Conveniently, this command will either install a plugin if it's not already there or update it to the latest version if it is already installed.

def findPluginsWithUpdates = '''
    Hudson.instance.pluginManager.plugins.inject([]) { List toUpdate, plugin ->
        if (plugin.hasUpdate()) {
            toUpdate << plugin.shortName
        }
        toUpdate
    }.inspect()
'''
OutputStream updateablePlugins = new ByteArrayOutputStream()
cliApi.runCliCommand(rootUrl, ['groovysh', findPluginsWithUpdates], System.in, updateablePlugins, System.err)

def listOfPlugins = Eval.me(parseOutput(updateablePlugins.toString()))
listOfPlugins.each { plugin ->
    cliApi.runCliCommand(rootUrl, ['install-plugin', plugin])
}

Install or upgrade a suite of plugins all at once

This definitely beats using the 'Manage Plugins' UI and is idempotent, so running it more than once can only result in possibly upgrading already installed plugins. This set of plugins might be overkill, but these are some plugins I recently surveyed for possible use.

@GrabResolver(name='glassfish', root='http://maven.glassfish.org/content/groups/public/')
@GrabResolver(name='github', root='http://kellyrob99.github.com/Jenkins-api-tour/repository')
@Grab('org.kar:hudson-api:0.2-SNAPSHOT')
@GrabExclude('org.codehaus.groovy:groovy')
import static java.net.HttpURLConnection.*
import org.kar.hudson.api.*
import org.kar.hudson.api.cli.HudsonCliApi

String rootUrl = 'http://localhost:8080'
HudsonCliApi cliApi = new HudsonCliApi()

['groovy', 'gradle', 'chucknorris', 'greenballs', 'github', 'analysis-core', 'analysis-collector',
 'cobertura', 'project-stats-plugin', 'audit-trail', 'view-job-filters', 'disk-usage',
 'global-build-stats', 'radiatorviewplugin', 'violations', 'build-pipeline-plugin', 'monitoring',
 'dashboard-view', 'iphoneview', 'jenkinswalldisplay'].each { plugin ->
    cliApi.runCliCommand(rootUrl, ['install-plugin', plugin])
}

// Restart a node, required for newly installed plugins to be made available.
cliApi.runCliCommand(rootUrl, 'safe-restart')

Finding all failed builds and triggering them

It's not all that uncommon that a network problem or infrastructure event causes a host of builds to fail all at once. Once the problem is solved, this script can be useful for verifying that the builds are all in working order.

@GrabResolver(name='glassfish', root='http://maven.glassfish.org/content/groups/public/')
@GrabResolver(name='github', root='http://kellyrob99.github.com/Jenkins-api-tour/repository')
@Grab('org.kar:hudson-api:0.2-SNAPSHOT')
@GrabExclude('org.codehaus.groovy:groovy')
import org.kar.hudson.api.cli.HudsonCliApi

String rootUrl = 'http://localhost:8080'
HudsonCliApi cliApi = new HudsonCliApi()
OutputStream out = new ByteArrayOutputStream()
def script = '''hudson.items.findAll { job ->
    job.isBuildable() && job.lastBuild && job.lastBuild.result == Result.FAILURE
}.collect { it.name }.inspect()
'''
cliApi.runCliCommand(rootUrl, ['groovysh', script], System.in, out, System.err)
List failedJobs = Eval.me(cliApi.parseResponse(out.toString()))
failedJobs.each { job ->
    cliApi.runCliCommand(rootUrl, ['build', job])
}

Open an interactive Groovy shell

If you really want to poke at the server you can launch an interactive shell to inspect state and execute commands. The System.in stream is bound, and responses from the server are immediately echoed back.

@GrabResolver(name='glassfish', root='http://maven.glassfish.org/content/groups/public/')
@GrabResolver(name='github', root='http://kellyrob99.github.com/Jenkins-api-tour/repository')
@Grab('org.kar:hudson-api:0.2-SNAPSHOT')
@GrabExclude('org.codehaus.groovy:groovy')
import org.kar.hudson.api.cli.HudsonCliApi

/**
 * Open an interactive Groovy shell that imports the hudson.model.* classes and exposes
 * a 'hudson' and/or 'jenkins' object in the Binding which is an instance of hudson.model.Hudson
 */
HudsonCliApi cliApi = new HudsonCliApi()
String rootUrl = args ? args[0] : 'http://localhost:8080'
cliApi.runCliCommand(rootUrl, 'groovysh')

Updates to the project

A lot has happened in the last year and all of the project dependencies needed an update. In particular, there have been some very nice improvements to Groovy, Gradle and Spock. Most notably, Gradle has come a VERY long way since version 0.9.2. The JSON support added in Groovy 1.8 comes in handy as well. Spock required a small tweak for rendering dynamic content in test reports when using @Unroll, but that's a small price to pay for features like the 'old' method and chained stubbing. Essentially, in response to changes in Groovy 1.8+, a Spock @Unroll annotation needs to change from:

@Unroll('querying of #rootUrl should match #xmlResponse')

to a Closure encapsulated GString expression:

@Unroll({"querying of $rootUrl should match $xmlResponse"})

It sounds like the syntax is still in flux, and I'm glad I found this discussion of the problem online.

Hosting a Maven repository on Github

Perhaps you noticed from the previous script examples that we're referencing a published library to get at the HudsonCliApi class. I read an interesting article last week which describes how to use the built-in Github Pages for publishing a Maven repository. While this isn't nearly as capable as a repository manager like Nexus or Artifactory, it's totally sufficient for making some binaries available to most common build tools in a standard fashion. Simply publish the binaries along with associated poms in the standard Maven repo layout and you're off to the races!

Each dependency management system has its quirks (I'm looking at you, Ivy!) but they're pretty easy to work around, so here are examples for Gradle, Maven and Groovy Grapes using the library produced by this project's code. Note that some of the required dependencies for Jenkins/Hudson aren't available in the Maven central repository, so we're getting them from the Glassfish repo.

Gradle

Pretty straightforward; this works with the latest version of Gradle and assumes that you are using the Groovy plugin.

repositories {
    mavenCentral()
    maven { url 'http://maven.glassfish.org/content/groups/public/' }
    maven { url 'http://kellyrob99.github.com/Jenkins-api-tour/repository' }
}

dependencies {
    groovy "org.codehaus.groovy:groovy-all:${versions.groovy}"
    compile 'org.kar:hudson-api:0.2-SNAPSHOT'
}

Maven

Essentially the same content in xml; in this case it's assumed that you're using the GMaven plugin.

<repositories>
    <repository>
        <id>glassfish</id>
        <name>glassfish</name>
        <url>http://maven.glassfish.org/content/groups/public/</url>
    </repository>
    <repository>
        <id>github</id>
        <name>Jenkins-api-tour maven repo on github</name>
        <url>http://kellyrob99.github.com/Jenkins-api-tour/repository</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>org.codehaus.groovy</groupId>
        <artifactId>groovy-all</artifactId>
        <version>${groovy.version}</version>
    </dependency>
    <dependency>
        <groupId>org.kar</groupId>
        <artifactId>hudson-api</artifactId>
        <version>0.2-SNAPSHOT</version>
    </dependency>
</dependencies>

Grapes

In this case there seems to be a problem resolving some transitive dependency for an older version of Groovy, which is why there's an explicit exclude for it.

@GrabResolver(name='glassfish', root='http://maven.glassfish.org/content/groups/public/')
@GrabResolver(name='github', root='http://kellyrob99.github.com/Jenkins-api-tour/repository')
@Grab('org.kar:hudson-api:0.2-SNAPSHOT')
@GrabExclude('org.codehaus.groovy:groovy')

Links

- The Github Jenkins-api-tour project page
- Maven repositories on Github
- Scriptler example Groovy scripts
- Jenkins CLI documentation

Reference: Hooking into the Jenkins (Hudson) API, Part 2 from our JCG partner Kelly Robinson at the The Kaptain on ... stuff blog....

Learn A Different Language – Advice From A JUG Leader

The cry of "Java is Dead" has been heard for many years now, yet Java still continues to be among the most used languages/ecosystems. I am not here to declare that Java is dead (it isn't and won't be anytime soon). My opinion, if you haven't already heard: Java developers, it's time to learn something else.

First, a little background as the basis for my opinions. I founded the Philadelphia Area Java Users' Group in March 2000, and for the past 12 years I have served as 'JUGmaster'. Professionally, I have been a technology recruiter focused on (you guessed it) helping Philadelphia area software companies hire Java talent since early 1999. I started a new recruiting firm in January that is not focused on Java, and I'm taking searches for mostly Java, Python, Ruby, Scala, Clojure, and mobile talent. This was a natural progression for me, as a portion of my candidate network had already transitioned to other technologies.

I launched Philly JUG based on a recommendation from a candidate, who learned that the old group was dormant. Philly JUG grew from 30 to over 1300 members, and we have been recognized twice by Sun as a Top JUG worldwide. This JUG is non-commercial (no product demos, no sales or recruiting activity directed at the group), entirely sponsor-funded, and I have had great success in attracting top Java minds to present for us.

The early signs

After several years of 100% Java-specific presentations at our meetings, I started to notice that part of the membership requested topics that were not specifically Java EE or SE. I served as the sole judge of what content was appropriate (with requested input from some members), and I allowed the group to stray a bit from our standard fare. First was Practical JRuby back in '06, but since that was 'still Java' there was no controversy. Groovy and Grails in '08 wasn't going to raise any eyebrows either. Then in '09 we had consecutive non-Java meetings: Scala for Jarheads followed by Clojure and the Robot Apocalypse (exact dates for said apocalypse have been redacted). Obviously there is commonality with the JVM, but it was becoming readily apparent that some members of the group were less interested in simply hearing about JSP, EJB, Java ME or whatever the Java vendor universe might be promoting at the time.

I noticed that the members who sought out these other topics and attended these alternative meetings were my unofficial advisory committee over the years, the members I called first to ask opinions about topics. These people were the thought leadership of the group. Many of them were early adopters of Java as well.

It was apparent that many of the better Java engineers I knew were choosing to broaden their horizons with new languages, which prompted me to write "Become a Better Java Programmer – Learn Something Else". That '09 article served to demonstrate that by learning another language you should become a better overall engineer, and your Java skills should improve just from exposure to some new approaches. Today I go a step further in my advice to the Java community and simply say 'Learn Something Else'.

To be clear, I do not make this suggestion because I feel Java as a language is going to die off, or that all companies will stop using Java in the near future. Java will obviously be around for many years to come, and the JVM itself will certainly continue to be a valued resource for developers.
The reason I advise you to learn something else is that I strongly believe the marketability of developers who only code in Java will diminish noticeably in the next few years, and the relevance and adoption of Java in new projects will decline. Known Java experts in the top few percent probably won't see decreased demand, but the vast majority of the Java talent pool undoubtedly will.

The writing on the wall

I think at this point the writing on the wall is getting a bit too obvious to ignore, and there are two forces acting concurrently. First, there is a tangible groundswell of support for other languages. A month doesn't seem to go by that we don't hear about a new language being released, or read that a company transitioned from Java to another option. Much of this innovation is by former Java enthusiasts, who are often taking the best elements of Java and adding features that the Java community had long wanted but couldn't get through the process for inclusion. Java has been lauded for its stability, and the price Java pays for that stability is slowed innovation.

The second contributing factor is that Java has simply lost much of its luster and magic over the past few years. The Sun acquisition was a major factor, as Oracle is viewed as entirely profit-driven, 'big corporate', and less focused on community-building than Sun was with Java. The Java community, in turn, is naturally less interested in helping to improve Java under Oracle. Giving away code or time to Oracle is like 'working for the man' to the Java community. Oracle's decision to run JavaOne alongside Oracle OpenWorld may have been an omen. Failures such as JavaFX and the inability to keep up with feature demand have not helped either.

My suggestion to learn something else is also rooted in simple economic principles. I have seen the demand for engineers with certain skills (Ruby, and dare I say JavaScript, are good examples) increasing quickly and dramatically, and the low supply of talent in these markets makes it an opportune time to make a move. It reminds me of the late '90s, when you could earn six figures if you could spell J-A-V-A. Some companies are now even willing to teach good Java pros a new language on the job; what is better than getting paid to learn? The gap in supply and demand for Java was severe years ago, but it seems the supply has caught up recently. Java development also seems to be a skill that, in my experience, is shipped offshore a bit more than some other languages.

Still don't see it? Remember those early Java adopters, the thought leaders I mentioned? Many of them are still around Java, but they aren't writing Java code anymore. They have come to appreciate the features of some of these other offerings, and are either bored or frustrated with Java. As this set of converts continues to use and evangelize alternative languages in production, they will influence more junior developers, who I expect will follow their lead. The flow of Java developers to other languages will continue to grow, and there is still time to take advantage of the supply shortage in alternative language markets.

Java will never die. However, the relevance and influence of Java tomorrow is certainly questionable, the marketability of 'pure' Java developers will decline, and the market for talent in alternative languages is too strong for proactive, career-minded talent to ignore.

Reference: Advice From A JUG Leader – Learn A Different Language from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

10 Object Oriented Design principles for the Java programmer

Object Oriented Design principles are the core of OOP programming, but I have seen most Java programmers chasing design patterns like the Singleton, Decorator or Observer patterns without putting enough attention into object oriented analysis and design, or into following these design principles. I have regularly seen Java programmers and developers of various experience levels who either have never heard of these OOP and SOLID design principles, or who simply don't know what benefits a particular design principle offers or how to apply it in coding.

The bottom line is to always strive for a highly cohesive and loosely coupled solution, code or design, and looking at open source code from Apache and Sun is a good way to see these Java design principles in use. The Java Development Kit itself follows several design principles, such as the Factory pattern in the BorderFactory class and the Singleton pattern in the Runtime class. If you are interested in digging deeper into Java code, read Effective Java by Joshua Bloch, a gem by a guy who wrote parts of the Java API. Other personal favourites on object oriented design are Head First Design Patterns by Kathy Sierra and others, and Head First Object Oriented Analysis and Design.

Though the best way of learning design principles or patterns is through real world examples and understanding the consequences of violating them, the subject of this article is to introduce object oriented design principles to Java programmers who are either not yet exposed to them or still in the learning phase. I personally think each of these design principles deserves its own article to be explained clearly, and I will definitely try to do that here, but for now just get yourself ready for a quick bike ride through design principle town :)

Object oriented design principle 1 – DRY (Don't repeat yourself)

As the name suggests, DRY (don't repeat yourself) means don't write duplicate code; instead, use abstraction to capture common things in one place. If you use a hardcoded value more than once, consider making it a public final constant; if you have a block of code in more than two places, consider making it a separate method. The benefit of this SOLID design principle is in maintenance. It's worth noting that you shouldn't abuse it: duplication is about functionality, not just code. If you use common code to validate an OrderID and an SSN, it doesn't mean they are the same thing or that they will remain the same in future. By using common code for two different functionalities you couple them tightly forever, and when your OrderID changes its format, your SSN validation code will break. So be aware of such coupling, and don't combine things that merely use similar code but are otherwise unrelated.
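To make that warning concrete, here is a small, hypothetical Java sketch (the OrderID/SSN names come from the article; the validation rules are invented for illustration) showing each concept owning its own rule instead of sharing one coupled helper:

public class Validators {

    // Tempting but risky: one shared "isTenDigits" helper coupling two
    // unrelated domains means a change to OrderID's format breaks SSN checks.

    // Better: each concept owns its rule, even if the logic looks similar today.
    public static boolean isValidOrderId(String orderId) {
        return orderId != null && orderId.matches("\\d{10}");
    }

    public static boolean isValidSsn(String ssn) {
        return ssn != null && ssn.matches("\\d{3}-\\d{2}-\\d{4}");
    }
}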
Object oriented design principle 2 – Encapsulate what varies

Only one thing is constant in the software field, and that is "change", so encapsulate the code you expect or suspect will change in future. The benefit of this OOP design principle is that properly encapsulated code is easy to test and maintain. If you are coding in Java, follow the principle of making variables and methods private by default and increasing access step by step, e.g. from private to protected, not straight to public. Several design patterns in Java use encapsulation; the Factory design pattern is one example, encapsulating object creation code and providing the flexibility to introduce new products later with no impact on existing code.

Object oriented design principle 3 – Open Closed principle

Classes, methods and functions should be open for extension (new functionality) and closed for modification. This is another beautiful object oriented design principle, which prevents someone from changing already tried and tested code. Ideally, if you are only adding new functionality then only that code should need testing, and that is the goal of the Open Closed design principle.

Object oriented design principle 4 – Single Responsibility Principle (SRP)

There should not be more than one reason for a class to change; in other words, a class should always handle a single piece of functionality. If you put more than one functionality in one class in Java, it introduces coupling between the two, and even if you change only one of them there is a chance you break the other, which requires another round of testing to avoid surprises in the production environment.

Object oriented design principle 5 – Dependency Injection or Inversion principle

Don't ask for your dependencies; they will be provided to you by the framework. This has been very well implemented in the Spring framework. The beauty of this design principle is that any class injected by a DI framework is easy to test with mock objects and easier to maintain, because object creation code is centralized in the framework and client code is not littered with it. There are multiple ways to implement dependency injection, such as bytecode instrumentation, which some AOP (Aspect Oriented Programming) frameworks like AspectJ perform, or proxies, as used in Spring.

Object oriented design principle 6 – Favour composition over inheritance

Always favour composition over inheritance, if possible. Some of you may argue this, but I have found that composition is a lot more flexible than inheritance. Composition allows you to change the behaviour of a class at runtime by setting properties, and by using interfaces to compose a class we exploit polymorphism, which provides the flexibility to replace a component with a better implementation at any time. Even Effective Java advises favouring composition over inheritance.
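As a quick illustration of that last point, here is a minimal, hypothetical sketch (the interface and class names are invented for this example): behaviour is supplied through an interface-typed property, so it can be swapped at runtime without touching any class hierarchy.

// Behaviour is expressed as an interface...
interface CompressionStrategy {
    byte[] compress(byte[] data);
}

// ...and composed into the class rather than inherited.
class FileArchiver {
    private CompressionStrategy strategy;

    FileArchiver(CompressionStrategy strategy) {
        this.strategy = strategy;
    }

    // The behaviour can be replaced at runtime by setting the property.
    void setStrategy(CompressionStrategy strategy) {
        this.strategy = strategy;
    }

    byte[] archive(byte[] data) {
        return strategy.compress(data);
    }
}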
Object oriented design principle 7 – Liskov Substitution Principle (LSP)

According to the Liskov Substitution Principle, subtypes must be substitutable for their super type, i.e. methods or functions which use the super class type must be able to work with an object of a sub class without any issue. LSP is closely related to the Single Responsibility Principle and the Interface Segregation Principle. If a super class has more functionality than its subclass, the subclass might not support some of that functionality, and that violates LSP. In order to follow the LSP design principle, a derived class or sub class must enhance functionality, not reduce it.

Object oriented design principle 8 – Interface Segregation Principle (ISP)

The Interface Segregation Principle states that a client should not be forced to implement an interface it doesn't use. This happens mostly when one interface contains more than one piece of functionality and the client only needs one of them. Interface design is a tricky job, because once you release your interface you cannot change it without breaking all implementations. Another benefit of this design principle in Java is that, since an interface forces a class to implement every method before it can be used, a single-functionality interface means fewer methods to implement.

Object oriented design principle 9 – Programming for interface not implementation

Always program to an interface and not to an implementation; this leads to flexible code which can work with any new implementation of the interface. So use interface types for variables, method return types and method argument types in Java. This has been advised by many Java programmers, including in Effective Java and the Head First Design Patterns book.

Object oriented design principle 10 – Delegation principle

Don't do all the work yourself; delegate it to the respective class. A classic example of the delegation design principle is the equals() and hashCode() methods in Java. In order to compare two objects for equality, we ask the class itself to do the comparison instead of the client class doing that check. The benefits of this design principle are no duplication of code and pretty easy modification of behaviour.

All these object oriented design principles help you write flexible and better code by striving for high cohesion and low coupling. Theory is the first step, but what is most important is developing the ability to know when to apply these principles, and to spot when we are violating one and compromising the flexibility of the code. But again, as nothing is perfect in this world, don't always try to solve your problem with design patterns and design principles; they are mostly for large enterprise projects which have longer maintenance cycles.

Reference: 10 Object Oriented Design principles Java programmer should know from our JCG partner Javin Paul at the Javarevisited blog....

Vaadin App on Google App Engine in 5 Minutes

In this tutorial you'll learn how to create your very first Vaadin web application, how to run it on a local AppEngine development server, and how to deploy it to the Google App Engine infrastructure. And all of that in about 5 to 10 minutes. Yes, if you have the necessary prerequisites installed, you'll be up and running straight away, thanks to the power of Maven.

This tutorial is in the form of a nicely formatted, 4-page quick reference card. You can download it straight away; no sign-ups required.

The card will guide you through the process of:

- Setting up your environment.
- Running a Vaadin application on the Google App Engine development server.
- Deploying that application.
- Starting to customize the Powered by Reindeer templates.

Get started with Vaadin and Google App Engine right now.

Reference: Tutorial: a Vaadin Application on Google App Engine in 5 Minutes from our JCG partner Peter Backx at the Streamhead blog....

Modelling Is Everything

I'm often asked, "What is the best way to learn about building high-performance systems?" There are many perfectly valid answers to this question, but one thing stands out for me above everything else, and that is modelling. Modelling what you need to implement is the most important and effective step in the process. I'd go further and say this principle applies to any development, and the rest is just typing.

Domain Driven Design (DDD) advocates modelling the domain, and expressing this model in code, as fundamental to the successful delivery and ongoing maintenance of software. I wholeheartedly agree with this. How often do we see code that is only an approximation of the problem domain? Code that exhibits behaviour which approximates what is required, via inappropriate abstractions and mappings which just about cope. Those mappings between what is in the code and the real domain are contained only in the developers' heads, and this is just not good enough.

When requiring high performance, code for parts of the system often has to model what is happening with the CPU, memory, storage sub-systems, or network sub-systems. When we have imperfect abstractions on top of these domains, performance can be very adversely affected. The goal of my "Mechanical Sympathy" blog is to peek at what is under the hood so we can improve our abstractions.

What is a Model?

A model does not need to be the result of a 3-year exercise producing UML. It can be, and often is best as, people communicating via various means, including speech, drawings, illustrations, metaphors, analogies, etc., to build a mental model for shared understanding. If an accurate and distilled understanding can be reached, then this model can be turned into code with great results.

Infrastructure Domain Models

If developers writing a concurrent framework do not have a good model of how a typical cache sub-system works, i.e. that it uses message passing to exchange cache lines, then the framework is unlikely to perform well or be correct. If their code drives the cache sub-system with mechanical sympathy and understanding, it is less likely to have bugs and more likely to perform well.

It is much easier to predict performance from a sound model when coming from an understanding of the infrastructure of the underlying platform and its published abilities. For example, if you know how many packets per second a network sub-system can handle, and the size of its transfer unit, then it is easy to extrapolate expected bandwidth. With this model-based understanding we can test our code against expectations with confidence.

I've fixed many performance issues whereby a framework treated a storage sub-system as stream-based when it is really block-based. If you update part of a file on disk, the block to be updated must be read, the changes applied, and the results written back. Now if you know the system is block-based and you know the boundaries of the blocks, you can write whole blocks back without incurring the read, modify, write-back cycle, replacing these actions with a single write.
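As a rough Java illustration of that last point (the block size, file name and block index here are assumptions for the example, not from the original post), an aligned whole-block write lets you skip the read step entirely:

import java.io.RandomAccessFile;

public class BlockWriteSketch {
    static final int BLOCK_SIZE = 4096; // assumed block size

    public static void main(String[] args) throws Exception {
        byte[] block = new byte[BLOCK_SIZE];
        // ... fill the block with the complete new contents ...

        try (RandomAccessFile file = new RandomAccessFile("data.bin", "rw")) {
            // Aligned, whole-block write: no need to read the old block first.
            long blockIndex = 7; // which block to replace, for illustration
            file.seek(blockIndex * BLOCK_SIZE);
            file.write(block);
        }

        // By contrast, updating just a few bytes in the middle of a block
        // forces the storage sub-system into a read-modify-write cycle.
    }
}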
Business Domain Models

The same thinking should be applied to the models we construct for the business domain. If a business process is modelled accurately, then the software will not surprise its end users. When we draw up a model it is important to describe the relationships in terms of cardinality and the characteristics by which they will be traversed. This understanding will guide the selection of the data structures best suited to implementing the relationships. I often see people use a list for a relationship which is mostly searched by key; for this case a map could be more appropriate. Are the entities at the other end of a relationship ordered? A tree or skiplist implementation may then be a better option.

Identity

The identity of entities in a model is very important. All models have to be entered in some way, and this normally starts with an entity from which to walk. That entity could be "Customer", found by customer ID, but it could equally be "DiskBlock", found by filename and offset, in an infrastructure domain. The identity of each entity in the system needs to be clear so the model can be accessed efficiently. If, for each interaction with the model, we waste precious cycles trying to find our starting entity, then other optimisations can become almost irrelevant. Make identity explicit in your model and, if necessary, index entities by their identity so you can enter the model efficiently for each interaction.

Refine as we learn

It is also important to keep refining a model as we learn. If the model grows as a series of extensions without refining and distilling, then we end up with a spaghetti mess that is very difficult to manage when trying to achieve predictable performance, never mind how difficult it is to maintain and support. Every day we learn new things. Reflect this in the model and keep it up to date.

Implement no more, but also no less, than what is needed!

The fastest code is code that just does what is needed and no more. Perform the instructions to complete the task and no more. Really fast code is normally not a weird mess of bit-shifting and compiler tricks. It is best to start with something clean and elegant, then measure to see if you are within performance targets. Very often this will be sufficient. Sometimes performance will be a surprise. You then need to apply science to test and measure before jumping to conclusions. A profiler will often tell you where the time is being taken. Once the basic modelling mistakes and assumptions have been corrected, it usually takes just a little mechanical sympathy to reach the performance goal. Unused code is waste. Try not to create it, and if you happen to create some, remove it from your codebase as soon as you notice it.

Conclusion

When non-functional requirements, such as performance and availability, are critical to success, I've found the most important thing is to get the model correct for the domain at all levels. That is, take the principles of DDD and make sure your code is an appropriate reflection of each domain. Whether it is the domain of business applications or the domain of interactions with infrastructure, I've found modelling is everything.

Reference: Modelling Is Everything from our JCG partner Martin Thompson at the Mechanical Sympathy blog....
spring-logo

Introducing Spring Integration

In this article we introduce Spring Integration. If you have not worked with Spring Integration before, it might help to brush up on Enterprise Integration Patterns by Gregor Hohpe. I would also recommend this excellent introductory article by Josh Long.

Context setting
In a nutshell, Enterprise Integration Patterns is all about how to get two applications (possibly on different technology stacks, different machines, different networks) to talk to each other in order to provide a single piece of business functionality. The challenge is to ensure that this communication remains transparent to the business user, yet reliable and easy for the applications. Messaging is one of these patterns. Using this pattern, applications can talk to each other frequently, immediately, reliably, and asynchronously, using customizable formats. Applications talk to each other by sending data (called messages) over virtual pipes (called channels). This is an overly simplistic introduction to the concept, but hopefully enough to make sense of the rest of the article. Spring Integration is not an implementation of any one of the patterns; rather, it supports these patterns, primarily Messaging.
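Before any XML enters the picture, the core idea of a message travelling over a channel can be shown in a few lines of plain code. This is only an illustrative sketch (class and package names as per the Spring Integration 2.1 API, to the best of my knowledge); it is not part of the application we build below.

import org.springframework.integration.Message;
import org.springframework.integration.channel.DirectChannel;
import org.springframework.integration.core.MessageHandler;
import org.springframework.integration.message.GenericMessage;

public class ChannelSketch {
    public static void main(String[] args) {
        // A channel is the virtual pipe; DirectChannel delivers in the sender's thread.
        DirectChannel channel = new DirectChannel();

        // Whatever is sent on the channel is delivered to subscribed handlers.
        channel.subscribe(new MessageHandler() {
            public void handleMessage(Message<?> message) {
                System.out.println("Received: " + message.getPayload());
            }
        });

        // A message is just a payload (plus headers) wrapped in a Message object.
        channel.send(new GenericMessage<String>("World")); // prints "Received: World"
    }
}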
The rest of this article is pretty hands-on and is an extension of the series on Spring 3. The earlier articles of this series were:
Hello World with Spring 3 MVC
Handling Forms with Spring 3 MVC
Unit testing and Logging with Spring 3
Handling Form Validation with Spring 3 MVC
Without further ado, let’s get started.

Bare bones Spring Integration example
At the time of writing this article the latest version of Spring is 3.1.2.RELEASE. However, the latest version of Spring Integration is 2.1.3.RELEASE, as found in Maven Central. I was slightly – and in retrospect, illogically – taken aback that Spring and Spring Integration should have different latest versions, but hey, that’s how it is. This means our pom.xml needs an addition now (if you are wondering where that came from, you need to follow through, at least at a very high level, the Spring 3 series mentioned earlier in this article).

File: /pom.xml

<!-- Spring integration -->
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-core</artifactId>
    <version>2.1.3.RELEASE</version>
</dependency>

This one dependency in the pom now allows my application to send messages over channels. Notice that we are now referring to messages and channels in the realm of Spring Integration, which is not necessarily exactly the same as those concepts as referred to earlier in this article in the realm of Enterprise Integration Patterns. It is probably worth having a quick look at the Spring Integration Reference Manual at this point. However, if you are just getting started with Spring Integration, you are perhaps better off following this article for the moment. I would recommend you get your hands dirty before returning to the reference manual, which is very good but also very exhaustive, and hence can be overwhelming for a beginner. To keep things simple, and since I generally try to take a test-first approach wherever possible, let us try to write a unit test that creates a message, sends it over a channel and then receives it. I have blogged here about how to use JUnit and Logback in Spring 3 applications. Continuing with the same principle, and assuming that we are going to write a HelloWorldTest.java, let’s set up the Spring configuration for the test.

File: \src\test\resources\org\academy\integration\HelloWorldTest-context.xml

<?xml version='1.0' encoding='UTF-8'?>
<beans xmlns='http://www.springframework.org/schema/beans'
    xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
    xmlns:p='http://www.springframework.org/schema/p'
    xmlns:int='http://www.springframework.org/schema/integration'
    xsi:schemaLocation='http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/integration
        http://www.springframework.org/schema/integration/spring-integration-2.1.xsd'>

    <int:channel id='inputChannel'></int:channel>

    <int:channel id='outputChannel'>
        <int:queue capacity='10' />
    </int:channel>

    <int:service-activator input-channel='inputChannel'
        output-channel='outputChannel' ref='helloService' method='greet' />

    <bean id='helloService' class='org.academy.integration.HelloWorld' />

</beans>

So, what did we just do? We have asked Spring Integration to create an ‘inputChannel’ to send messages to, and an ‘outputChannel’ to read messages from. We have also configured all messages on ‘inputChannel’ to be handed over to a ‘helloService’. This ‘helloService’ is an instance of the org.academy.integration.HelloWorld class, which should be equipped to do something with the message. After that, we have also configured the output of the ‘helloService’, i.e. the modified message in this case, to be handed over to the ‘outputChannel’. Simple, isn’t it? Frankly, when I worked with Spring Integration for the first time a few years ago, I found this all a bit confusing. It does not make much sense until you see it working. So, let’s keep going. Let’s add our business-critical HelloWorld class.

File: /src/main/java/org/academy/integration/HelloWorld.java

package org.academy.integration;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HelloWorld {
    private final static Logger logger = LoggerFactory.getLogger(HelloWorld.class);

    public String greet(String name) {
        logger.debug("Greeting {}", name);
        return "Hello " + name;
    }
}

As you can see, given a ‘name’ it returns ‘Hello {name}’. Now, let’s add the unit test to actually put this in action.

File: /src/test/java/org/academy/integration/HelloWorldTest.java

package org.academy.integration;

import static org.junit.Assert.*;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.integration.MessageChannel;
import org.springframework.integration.core.PollableChannel;
import org.springframework.integration.message.GenericMessage;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class HelloWorldTest {
    private final static Logger logger = LoggerFactory.getLogger(HelloWorldTest.class);

    @Autowired
    @Qualifier("inputChannel")
    MessageChannel inputChannel;

    @Autowired
    @Qualifier("outputChannel")
    PollableChannel outputChannel;

    @Test
    public void test() {
        inputChannel.send(new GenericMessage<String>("World"));
        assertEquals("Hello World", outputChannel.receive().getPayload());
        logger.debug("Checked basic Hello World with Spring Integration");
    }
}
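As an aside, the same wiring can also be expressed with annotations instead of XML. The sketch below assumes Spring Integration 2.1’s annotation support (an <int:annotation-config/> element in the context and the class being registered as a bean); we stay with the XML style for the rest of this series.

package org.academy.integration;

import org.springframework.integration.annotation.ServiceActivator;

public class AnnotatedHelloWorld {

    // Roughly equivalent to:
    // <int:service-activator input-channel='inputChannel'
    //     output-channel='outputChannel' ref='helloService' method='greet' />
    @ServiceActivator(inputChannel = "inputChannel", outputChannel = "outputChannel")
    public String greet(String name) {
        return "Hello " + name;
    }
}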
Although not mandatory, I find it easier to use the following Logback setting. Feel free to use it, if you fancy.

File: /src/main/resources/logback.xml

<?xml version='1.0' encoding='UTF-8'?>
<configuration>
    <appender name='CONSOLE' class='ch.qos.logback.core.ConsoleAppender'>
        <encoder>
            <pattern>%d %5p | %t | %-55logger{55} | %m %n</pattern>
        </encoder>
    </appender>

    <logger name='org.springframework'>
        <level value='ERROR' />
        <!-- <level value='INFO' /> -->
        <!-- <level value='DEBUG' /> -->
    </logger>

    <root>
        <level value='DEBUG' />
        <appender-ref ref='CONSOLE' />
    </root>
</configuration>

Now, simply type ‘mvn -e clean install’ (or use the m2e plugin) and you should be able to run the unit test and confirm that, given the string ‘World’, the HelloWorld service indeed returns ‘Hello World’ over the entire arrangement of channels and messages. Again, something optional that I highly recommend is to run ‘mvn -e clean install site’. Assuming you have correctly configured a code coverage tool (Cobertura in my case), this will give you a nice HTML report showing the code coverage; in this case it would be 100%. I have blogged a series on code quality which deals with this subject in more detail, but to cut a long story short, it is very important for me to ensure that whatever coding practice / framework I use and recommend complies with some basic code quality standards. Being able to unit test, and to measure that, is one such fundamental check. Needless to say, Spring in general (including Spring Integration) passes that check with flying colours.

Conclusion
That’s it for this article. In the next article, we will see how to insulate the application code from the Spring Integration specific code that we currently have in our JUnit test, i.e. inputChannel.send(…) etc. Till then, happy coding.

Suggested further reading
Here are the links to earlier articles in this series:
Hello World with Spring 3 MVC
Handling Forms with Spring 3 MVC
Unit testing and Logging with Spring 3
Handling Form Validation with Spring 3 MVC
These are excellent materials that I can recommend:
Getting started with Spring Integration
Sample codes with Spring Integration
Spring Integration – Session 1 – Hello World
Spring Integration – Session 2 – More Hello Worlds
Continue to Spring Integration with Gateways
Reference: Introducing Spring Integration from our JCG partner Partho at the Tech for Enterprise blog....
spring-logo

Spring Integration with Gateways

This is the second article in the series on Spring Integration. It builds on top of the first article, where we introduced Spring Integration.

Context setting
In the first article, we created a simple Java application where:
A message was sent over a channel,
It was intercepted by a service, i.e. a POJO, and modified,
It was then sent over a different channel,
The modified message was read from the channel and displayed.
However, in doing this – keeping in mind that we were merely introducing the concepts there – we wrote some Spring-specific code in our application, i.e. the test classes. In this article we will take care of that and make our application code as insulated from the Spring Integration API as possible. This is done by what Spring Integration calls gateways. Gateways exist for the sole purpose of abstracting messaging-related ‘plumbing’ code away from ‘business’ code. The business logic might really not care whether a piece of functionality is achieved by sending a message over a channel or by making a SOAP call. This abstraction, though logical and desirable, has not been very practical until now. It is probably worth having a quick look at the Spring Integration Reference Manual at this point. However, if you are just getting started with Spring Integration, you are perhaps better off following this article for the moment. I would recommend you get your hands dirty before returning to the reference manual, which is very good but also very exhaustive, and hence can be overwhelming for a beginner. The gateway can be a POJO with annotations (which is convenient but, in my mind, defeats the whole purpose) or be configured in XML (which can very quickly turn into a nightmare in any decent-sized application if left unchecked). At the end of the day it is really your choice, but I like to go the XML route. The configuration options for both styles are detailed in this section of the reference manual.
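For completeness, here is a rough sketch of what the annotation style looks like. The @Gateway attribute names are as per Spring Integration 2.1 to the best of my knowledge, the channel names simply reuse the ones from our configuration below, and this interface is not part of the example we build in this article.

package org.academy.integration;

import org.springframework.integration.annotation.Gateway;

public interface AnnotatedGreetings {

    // Messages sent here go to inputChannel (the default-request-channel role).
    @Gateway(requestChannel = "inputChannel")
    void send(String message);

    // Replies are read from outputChannel (the default-reply-channel role).
    @Gateway(replyChannel = "outputChannel")
    String receive();
}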
Spring Integration with Gateways
So, let’s create another test, with a gateway thrown in, for our HelloWorld service (refer to the first article of this series for more context). Let’s start with the Spring configuration for the test.

File: src/test/resources/org/academy/integration/HelloWorld1Test-context.xml

<?xml version='1.0' encoding='UTF-8'?>
<beans xmlns='http://www.springframework.org/schema/beans'
    xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
    xmlns:p='http://www.springframework.org/schema/p'
    xmlns:int='http://www.springframework.org/schema/integration'
    xsi:schemaLocation='http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/integration
        http://www.springframework.org/schema/integration/spring-integration-2.1.xsd'>

    <int:channel id='inputChannel'></int:channel>

    <int:channel id='outputChannel'>
        <int:queue capacity='10' />
    </int:channel>

    <int:service-activator input-channel='inputChannel'
        output-channel='outputChannel' ref='helloService' method='greet' />

    <bean id='helloService' class='org.academy.integration.HelloWorld' />

    <int:gateway service-interface='org.academy.integration.Greetings'
        default-request-channel='inputChannel' default-reply-channel='outputChannel'></int:gateway>

</beans>

In this case, all that is different is that we have added a gateway. This is an interface called org.academy.integration.Greetings. It interacts with both ‘inputChannel’ and ‘outputChannel’, to send and read messages respectively. Let’s write the interface.

File: /src/main/java/org/academy/integration/Greetings.java

package org.academy.integration;

public interface Greetings {
    public void send(String message);

    public String receive();
}

And then we add the implementation of this interface. Wait. There is no implementation. And we do not need any implementation. Spring uses something called GatewayProxyFactoryBean to inject some basic code into this gateway, which allows it to pass the simple string-based message along without us needing to do anything at all. That’s right. Nothing at all. Note: you will need to add more code for most of your production scenarios, assuming you are not using the Spring Integration framework just to push strings around. So don’t get used to free lunches. But while it is here, let’s dig in. Now, let’s write a new test class using the gateway (and not interact with the channels and messages at all).

File: /src/test/java/org/academy/integration/HelloWorld1Test.java

package org.academy.integration;

import static org.junit.Assert.*;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration
public class HelloWorld1Test {

    private final static Logger logger = LoggerFactory.getLogger(HelloWorld1Test.class);

    @Autowired
    Greetings greetings;

    @Test
    public void test() {
        greetings.send("World");
        assertEquals("Hello World", greetings.receive());
        logger.debug("Spring Integration with gateways.");
    }
}

Our test class is much cleaner now. It does not know about channels, or messages, or anything related to Spring Integration at all. It only knows about a greetings instance, to which it gives some data via the send() method, and from which it gets the modified data back via the receive() method. Hence the business logic is oblivious to the plumbing logic, making for much cleaner code. Now, simply type ‘mvn -e clean install’ (or use the m2e plugin) and you should be able to run the unit test and confirm that, given the string ‘World’, the HelloWorld service indeed returns ‘Hello World’ over the entire arrangement of channels and messages. Again, something optional that I highly recommend is to run ‘mvn -e clean install site’. Assuming you have correctly configured a code coverage tool (Cobertura in my case), this will give you a nice HTML report showing the code coverage; in this case it would be 100%. I have blogged a series on code quality which deals with this subject in more detail, but to cut a long story short, it is very important for me to ensure that whatever coding practice / framework I use and recommend complies with some basic code quality standards. Being able to unit test, and to measure that, is one such fundamental check. Needless to say, Spring in general (including Spring Integration) passes that check with flying colours.

Conclusion
That’s it for this article. Happy coding.
Suggested further reading
Here are the links to earlier articles in this series:
Hello World with Spring 3 MVC
Handling Forms with Spring 3 MVC
Unit testing and Logging with Spring 3
Handling Form Validation with Spring 3 MVC
Introducing Spring Integration
These are excellent materials that I can recommend:
Getting started with Spring Integration
Sample codes with Spring Integration
Spring Integration – Session 1 – Hello World
Spring Integration – Session 2 – More Hello Worlds
Reference: Spring Integration with Gateways from our JCG partner Partho at the Tech for Enterprise blog....