Stumbling towards a better design

Some programs have a clear design, and coding new features is quick and easy. Other programs are a patchwork quilt of barely comprehensible fragments, bug fixes, and glue. If you have to code new features for such programs, you're often better off rewriting them. However, there is a middle ground that I suspect is pretty common in these days of clean code and automated test suites: you have a good program with a clear design, but as you start to implement a new feature, you realise it's a force fit and you're not sure why. What to do?

I've recently been implementing a feature in Eclipse Virgo which raised this question and which, I hope, sheds some light on how to proceed. Let's take a look.

The New Feature

I've been changing the way the Virgo kernel isolates itself from applications. Previously, Equinox supported nested OSGi frameworks and it was easy to isolate the kernel from applications by putting the applications in a nested framework (a "user region") and sharing selected packages and services with the kernel. However, the nested framework support is being withdrawn in favour of an OSGi standard set of framework hooks. These hooks let you control, or at least limit, the visibility of bundles, packages, and services — all in a single framework.

So I set about re-basing Virgo on the framework hooks. The future looked good: eventually the hooks could be used to implement multiple user regions and even to rework the way application scoping is implemented in Virgo.

An Initial Implementation

One bundle in the Virgo kernel is responsible for region support, so I set about reworking it to use the framework hooks. After a couple of weeks the kernel and all its tests were running ok. However, the vision of using the hooks to implement multiple user regions and redo application scoping had receded into the distance, given the rather basic way I had written the framework hooks. I had the option of ignoring this and calling "YAGNI!" (You Ain't Gonna Need It!).
But I was certain that once I merged my branch into master, the necessary generalisation would drop down the list of priorities. Also, if I ever did prioritise the generalisation work, I would have forgotten much of what was then buzzing around my head.

Stepping Back

So the first step was to come up with a suitable abstract model. I had some ideas from when we were discussing nested frameworks in the OSGi Alliance a couple of years ago: to partition the framework into groups of bundles and then to connect these groups together with one-way connections which would allow certain packages and services to be visible from one group to another.

Using Virgo terminology, I set about defining how I could partition the framework into regions and then connect the regions together using package, service, and bundle filters. At first it was tempting to avoid cycles in the graph, but it soon became clear that cycles are harmless and indeed necessary for modelling Virgo's existing kernel and user region, which need to be connected to each other with appropriate filters.

A Clean Abstraction

Soon I had a reasonably good idea of the kind of graph with filters that was necessary, so it was tempting to get coding and then refactor the thing into a reasonable shape. But I had very little idea of how the filtering of bundles and packages would interact. In the past I've found that refactoring from such a starting point can waste a lot of time, especially when tests have been written and need reworking. Code has inertia to being changed, so it's often better to defer coding until I get a better understanding.

To get a clean abstraction and a clear understanding, while avoiding "analysis paralysis", I wrote a formal specification of these connected regions. This is essentially a mathematical model of the state of the graph and the operations on it. This kind of model enables properties of the system to be discovered before it is implemented in code.
My friend and colleague Steve Powell was keen to review the spec and suggested several simplifications, and before long we had a formal spec with some rather nice algebraic properties for filtering and combining regions. To give you a feel for how these properties look, take the one which says that "combining" two regions (used when describing the combined appearance of two regions) and then filtering is equivalent to filtering the two regions first and then combining the result.

Being a visual thinker, and to make the formal spec more useful to non-mathematicians, I also drew plenty of pictures along the way, including an example graph of regions.

A New Implementation

I defined a RegionDigraph ("digraph" is short for "directed graph") interface, implemented it, and defined a suite of unit tests to give good code coverage. I then implemented a fresh collection of framework hooks in terms of the region digraph. Then I ripped out the old framework hooks, along with the code supporting what in retrospect was a poorly formed notion of region membership, and replaced them with the new framework hooks underpinned by the region digraph.

I Really Did Need It (IRDNI?)

It took a while to get all the kernel integration tests running again, mainly because the user region needs to be configured so that packages from the system bundle (which belongs in the kernel region) are imported along with some new services such as the region digraph service.

As problems occurred, I could step back and think in terms of the underlying graph. By writing appropriate toString methods on the Region and RegionDigraph implementation classes, the model became easier to visualise in the debugger.
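To make the shape of the model concrete, here is a much-simplified sketch of a region digraph in Java. The class and method names are illustrative only (the real Virgo/Equinox API differs), and the per-connection filters are reduced to plain sets of package names:

```java
import java.util.*;

// Simplified sketch of a region digraph: regions are nodes, and directed
// connections carry a package filter saying what the source region may
// see in the target region. Names are invented, not the Virgo API.
class RegionDigraph {
    // region name -> packages it contains locally
    private final Map<String, Set<String>> regions = new HashMap<>();
    // region name -> (target region -> package names the connection admits)
    private final Map<String, Map<String, Set<String>>> edges = new HashMap<>();

    void addRegion(String name, String... localPackages) {
        regions.put(name, new HashSet<>(Arrays.asList(localPackages)));
        edges.put(name, new HashMap<>());
    }

    // Connect 'from' to 'to', allowing 'from' to see the listed packages.
    void connect(String from, String to, String... allowedPackages) {
        edges.get(from).put(to, new HashSet<>(Arrays.asList(allowedPackages)));
    }

    // A package is visible from a region if it is local, or reachable via
    // connections whose filters all admit it (the transitive case).
    boolean isVisible(String region, String pkg) {
        return isVisible(region, pkg, new HashSet<>());
    }

    private boolean isVisible(String region, String pkg, Set<String> seen) {
        if (!seen.add(region)) return false; // cycles are harmless
        if (regions.get(region).contains(pkg)) return true;
        for (Map.Entry<String, Set<String>> e : edges.get(region).entrySet()) {
            if (e.getValue().contains(pkg) && isVisible(e.getKey(), pkg, seen)) {
                return true;
            }
        }
        return false;
    }
}
```

Note that a kernel region and a user region can be connected in both directions with different filters, forming exactly the kind of cycle the spec showed to be harmless.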
This gives me hope that if and when other issues arise, I will have a better chance of debugging them because I can understand the underlying model.

A couple of significant issues turned up along the way, both related to the use of "side states" when Virgo deploys applications. The first is the need to temporarily add bundle descriptions to the user region. The second is the need to respect the region digraph when diagnosing resolver errors. This is relatively straightforward when deploying and diagnosing failures. It is less straightforward when dumping resolution failure states for offline analysis: the region digraph also needs to be dumped so that it too can be used in the offline analysis.

These issues would have been much harder to address in the initial framework hooks implementation. The first would have involved some fairly arbitrary code to record and remove bundle descriptions from the user region. The second would have been much trickier, as there was a poorly defined and overly static notion of region membership which wouldn't have lent itself to being included in a state dump without looking like a gross hack. But with the region digraph it was easy to create a temporary "coregion" to contain the temporary bundle descriptions, and it should be straightforward to capture the digraph alongside the state dump.

Ok, so I'm convinced that the region digraph is pulling its weight and isn't a bunch of YAGNI. But someone challenged me the other day by asking "Why do the framework hooks have to be so complex?".

Unnecessary Complexity?

Well, firstly, the region digraph ensures consistent behaviour across the five framework hooks (bundle find, bundle event, service find, service event, and resolver hooks), especially regarding filtering behaviour, treatment of the system bundle, and transitive dependencies (i.e. across more than one region connection).
This consistency should lead to fewer bugs, more consistent documentation, and ease of understanding for users.

Secondly, the region digraph is much more flexible than hooks based on a static notion of region membership: bundles may be added to the kernel after the user region has been created; application scoping should be relatively straightforward to rework in terms of regions, giving scoping and regions consistent semantics (fewer bugs, better documentation, etc.); and multiple user regions should be relatively tractable to implement.

Thirdly, the region digraph should be an excellent basis for implementing the notion of a multi-bundle application. In the OSGi Alliance, we are currently discussing how to standardise the multi-bundle application constructs in Virgo, Apache Aries, the Paremus Service Fabric, and elsewhere. Indeed, I regard it as a proof of concept that the framework hooks can be used to implement certain basic kinds of multi-bundle application. As a nice spin-off, the development of the region digraph has resulted in several Equinox bugs being fixed and some clarifications being made to the framework hooks specification.

Next Steps

I am writing this while the region digraph is "rippling" through the Virgo repositories on its way into the 3.0 line. But this work is starting to have a broader impact. Last week I gave a presentation on the region digraph to the OSGi Alliance's Enterprise Expert Group. There was a lot of interest, and subsequently there has even been discussion of whether the feature should be implemented in Equinox so that it can be reused by other projects outside Virgo.

Postscript (30 March 2010)

The region digraph is working out well in Virgo. We had to rework the function underlying the admin console because there is no longer a "surrogate" bundle representing the kernel packages and services in the user region.
To better represent the connections from the user region to the kernel, the runtime artefact model inside Virgo needs to be upgraded to understand regions directly. This is work in progress in the 3.0 line.

Meanwhile, Tom Watson, an Equinox committer, is working with me to move the region digraph code to Equinox. The rationale is to ensure that multiple users of the framework hooks can co-exist (by using the region digraph API instead of using the framework hooks directly). Tom has contributed several significant changes to the digraph code in Virgo, including persistence support. When Virgo dumps a resolution failure state, it also dumps the region digraph. The dumped digraph is read back in later and used to provide a resolution hook for analysing the dumped state, which ensures consistency between the live resolver and the dumped state analysis.

Reference: Stumbling towards a better design from our JCG partner Glyn Normington at the Mind the Gap blog....

Why You Didn’t Get the Job

Over the course of my career I have scheduled thousands of software engineering interviews with hundreds of hiring managers at a wide array of companies and organizations. I have learned that although no two managers look for the exact same set of technical skills or behaviors, there are recognizable patterns in the feedback I receive when a candidate is not presented with a job offer.

Obviously, if you are unable to demonstrate the basic fundamental skills for a position (for our purposes, software engineering expertise), anything else that happens during an interview is irrelevant. For that technical skills assessment, you are generally on your own, as recruiters should not provide candidates with the specific technical questions that they will be asked in an interview.

It should be helpful for job seekers to know where others have stumbled in interviews where technical skill was not the sole or even primary reason provided for the candidate's rejection. The examples of feedback below are things I have heard repeatedly over the years, and they tend to be the leading non-technical causes of failed interviews in the software industry (IMO).

Candidate has wide technical breadth but little depth – The 'jack of all trades' response is not uncommon, particularly for folks that have perhaps bounced from job to job a little too much. Having experience working in diverse technical environments is almost always a positive, but only if you are there long enough to take some deeper skills and experience away with you. Companies will seek depth in at least some subset of your overall skill set.

Candidate displayed a superiority complex or sense of entitlement – This seems most common when a candidate will subtly (or perhaps not so subtly) express that they may be unwilling to do any tasks aside from new development, such as code maintenance, or when a candidate confesses an interest in exclusively working with a certain subset of technologies.
Candidates that are perceived as team players may mention preferences, but will also be careful to acknowledge their willingness to perform any relevant tasks that need to be done.

Candidate showed a lack of passion – The lack-of-passion comment has various applications. Sometimes the candidate is perceived as apathetic about an opportunity or uninterested in the hiring company; often it is described as what seems to be an overall apathy for the engineering profession (that software is not what they want to be doing). Regardless of the source of the apathy, this perception is hard to overcome. If a candidate has no passion for the business, the technology, or the people, chances are the interview is a waste of time.

Candidate talked more about the accomplishments of co-workers – This piece of feedback seems to be going viral lately. Candidates apparently ramble on about other groups that built pieces of their software product, QA, the devops team's role, and everyone else in the company, yet they fail to dig deep into what their own contribution was. This signifies to interviewers that perhaps this candidate is either the least productive member of the team or is simply unable to describe their own duties. Give credit where it is due to your peers, but be sure to focus on your own accomplishments first.

Candidate seems completely unaware of anything that happens beyond his/her desk – Repeatedly using answers such as "I don't know who built that" or "I'm not sure how that worked" can be an indicator that the candidate is insulated in his/her role, or doesn't have the curiosity to learn what others are doing in their company. As most engineering groups tend towards heavy collaboration these days, this lack of information will be a red flag for potential new employers.

Candidate more focused on the tools/technology than on the profession – Although rare, this often troubles managers a great deal, and it's often a symptom of the 'fanboy' complex.
I first experienced this phenomenon when EJB first arrived on the scene in the Java world, and many candidates only wanted to work for companies that were using EJB. When a technologist is more focused on becoming great at a specific tool than on becoming a great overall engineer, companies may show some reluctance. This is a trend that I expect could grow as the number of language/platform choices expands, and as the fanatical response and the overall level of polarization of the tech community around certain technologies increase.

Candidate's claimed/résumé experience ≠ candidate's actual experience – Embellishing the résumé is nothing new. A blatant lie on a résumé is obviously a serious no-no, but even some minor exaggerations or vague inaccuracies can come back and bite you. The most common example is when a candidate includes technologies or buzzwords on a résumé that they know nothing about. Including items in a skills matrix that are not representative of your current skill set is seen as dishonest by hiring managers.

Candidate's experience is not 'transferable' – If your company is only using homegrown frameworks and proprietary software, or if you have worked in the same company for many years without any fundamental changes in the development environment, this could be you. The interviewer in this case feels that you may be productive in your current environment, but that when given a different set of tools, methodologies, and team members, you may encounter too steep a learning curve. This is often a response about candidates that have worked within development groups at very large companies for many years.

Reference: Why You Didn't Get the Job from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Resolve circular dependency in Spring Autowiring

I would consider this post a best practice for using Spring in enterprise application development. When writing an enterprise web application using Spring, the number of services in the service layer will probably grow. Each service in the service layer will probably consume other services, which will be injected via @Autowired.

The problem: when the number of services starts growing, a circular dependency might occur. It does not have to indicate a design problem… It's enough that a central service, which is autowired into many services, consumes one of those other services, and a circular dependency is likely to occur. The circular dependency will cause the Spring application context to fail, and the symptom is an error which indicates the problem clearly:

Bean with name '*********' has been injected into other beans [******, **********, **********, **********] in its raw version as part of a circular reference, but has eventually been wrapped (for example as part of auto-proxy creation). This means that said other beans do not use the final version of the bean. This is often the result of over-eager type matching – consider using 'getBeanNamesOfType' with the 'allowEagerInit' flag turned off, for example.

The problem in a modern Spring application is that beans are defined via annotations (and not via XML), and the option of the allowEagerInit flag simply does not exist. The alternative solution of annotating the classes with @Lazy simply did not work for me.
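To see why laziness helps at all, here is a Spring-free sketch of the underlying idea. The two service classes below need each other; eager constructor wiring could never build them, but deferring one lookup behind a Supplier (roughly what Spring's lazy proxies do for @Lazy or lazy-init beans) breaks the cycle. All class and method names here are made up for illustration:

```java
import java.util.function.Supplier;

// Plain-Java illustration of why lazy resolution breaks a dependency cycle.
// CustomerService and OrderService need each other; injecting a Supplier
// (analogous to the lazy proxy Spring creates) defers the lookup until
// first use, so both objects can be fully constructed.
class CustomerService {
    private final Supplier<OrderService> orders; // resolved lazily
    CustomerService(Supplier<OrderService> orders) { this.orders = orders; }
    String name() { return "customerService"; }
    String describe() { return "customer -> " + orders.get().name(); }
}

class OrderService {
    private final Supplier<CustomerService> customers; // resolved lazily
    OrderService(Supplier<CustomerService> customers) { this.customers = customers; }
    String name() { return "orderService"; }
    String describe() { return "order -> " + customers.get().name(); }
}

class Wiring {
    // Mimics a lazily initialised context: the holder is filled in only
    // after both constructors have run, which eager injection cannot do.
    static String demo() {
        OrderService[] orderHolder = new OrderService[1];
        CustomerService customers = new CustomerService(() -> orderHolder[0]);
        OrderService orders = new OrderService(() -> customers);
        orderHolder[0] = orders;
        return customers.describe() + ", " + orders.describe();
    }
}
```

Spring's container has to play essentially the same trick: as long as neither bean actually needs the other during construction, the cycle is harmless.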
The working solution was to add default-lazy-init="true" to the application config XML file:

<?xml version="1.0" encoding="UTF-8"?>
<beans default-lazy-init="true"
    xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">
    <context:component-scan base-package="com.package"></context:component-scan>
    <context:annotation-config/>
    ...
</beans>

Hope this helps. I am not sure why this is not the default configuration; if you have a suggestion as to why this configuration might not be OK, kindly share it with us all.

Update: Following a redesign I did, the solution mentioned above simply did not do the trick, so I designed a more aggressive solution to resolve the problem in 5 steps. Good luck!

Reference: Resolve circular dependency in Spring Autowiring from our JCG partner Gal Levinsky at the Gal Levinsky blog....

Extenders: Pattern or Anti-Pattern?

The extender pattern has become popular in recent years and has even been utilised in OSGi standards such as the Blueprint service and the Web Applications specification. In Virgo, we've been working with extenders from the start, but in spite of their advantages, they have some significant downsides. Since the OSGi Alliance is considering using extenders in other specifications, I agreed to document some of the issues.

The first difficulty is knowing when an extender has finished processing a bundle. For example, a bundle containing a blueprint XML file will transition to the ACTIVE state as soon as any bundle activator has been driven. But that's not the whole story. Administrators are interested in when the bundle is ready for use, and so the management code in Virgo tracks the progress of the extender and presents an amalgamated state for the install artefact representing the bundle. The install artefact stays in the STARTING state until the application context has been published, at which point it transitions to ACTIVE. Without such additional infrastructure, administrators cannot tell when a bundle processed by an extender really is ready for business.

That's the successful case, but there are complications in error cases too. The first complication is that since an extender runs in a separate thread to the one which installed the bundle, if the extender throws an exception, it is not propagated to the code which installed the bundle. So the installer needs somehow to check for errors. Therefore Virgo has infrastructure to detect such errors and propagate them back to the thread which initiated deployment of the bundle: the deployment operation fails with a stack trace indicating what went wrong. The other error complication is where there is a (possibly indefinite) delay in an extender processing a bundle.
For this kind of error Virgo tracks the progress of extender processing and issues warnings to the event log (intended for the administrator's eyes) saying which pieces of processing have been delayed and, in some common situations (for example, when a blueprint is waiting for a dependency), what is causing the delay.

Extenders need to be able to see bundle lifecycle events, so for systems that partition the framework it is necessary to install each extender into multiple partitions. On the flip side, it is crucial to prevent multiple instances of an extender from ever seeing the same bundle event, otherwise they will both attempt to extend the bundle.

Another issue with extenders is the need to keep them running and healthy, as there is little indication that an extender is down or sickly other than bundles not being processed by it. Virgo takes care to ensure its extenders are correctly started, and its infrastructure for detecting delays helps to diagnose extender crashes or sickness (both of which are extremely rare situations).

There is also an issue in passing parameters to an extender to affect its behaviour. This is typically done by embedding extender configuration in the bundles being processed or by attaching a fragment containing configuration to the extender bundle. But since the extender is not driven by an API, the normal approach of passing parameters on a call is not available. Essentially, an extender model implies that the programming model for deployment is restricted to BundleContext.installBundle.

With considerable investment in additional infrastructure, Virgo has managed to support the Blueprint and Spring DM extenders reasonably well. But in the case of the Web Applications extender, Virgo couldn't make this sufficiently robust, and so it drives the underlying web componentry directly from the Virgo deployment pipeline to avoid the above issues.
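The asynchrony at the heart of these problems is easy to demonstrate without any OSGi machinery. In the sketch below (all names invented for illustration), the "extender" processes each installed bundle on its own thread, so install() succeeds before any processing failure has happened, and the installer has to poll a separate channel to learn about errors, which is exactly the kind of extra infrastructure Virgo had to build:

```java
import java.util.concurrent.*;

// Sketch of the error-propagation problem: the "extender" processes the
// bundle on its own thread, so an exception there never reaches the
// installer directly; the installer must poll a shared channel instead.
class ExtenderSketch {
    private final BlockingQueue<String> failures = new LinkedBlockingQueue<>();
    private final ExecutorService extenderThread = Executors.newSingleThreadExecutor();

    // Returns immediately, like BundleContext.installBundle: any error in
    // the extender happens after this call has already "succeeded".
    void install(String bundle, boolean makeExtenderFail) {
        extenderThread.submit(() -> {
            if (makeExtenderFail) {
                failures.add(bundle + ": extender processing failed");
            }
        });
    }

    // What a deployment pipeline has to add: a way to wait for failures.
    String awaitFailure(long timeoutMillis) throws InterruptedException {
        return failures.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    void shutdown() { extenderThread.shutdown(); }
}
```

The same structure also shows why a dead extender is so quiet: if the worker thread stops, install() still returns normally and nothing ever appears on the failure channel.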
I understand at least one other server runtime project has encountered similar issues with extenders, so Virgo is not alone. There is a trade-off between loosely coupling the installer and the resource-specific processing, the main strength of the extender pattern (though far from unique to it), and providing a robust programming model and usable management view, crucial features of a server runtime which are far more straightforward to deliver without extenders.

Reference: Extenders: Pattern or Anti-Pattern? from our JCG partner Glyn Normington at the Mind the Gap blog....

Hibernate Composite Ids with association mappings

Recently, we faced a tricky situation with Hibernate association mapping with a composite id field. We needed a bidirectional association with one-to-many and many-to-one. Our two tables were “REPORT” and “REPORT_SUMMARY”, with a one-to-many relationship from REPORT to REPORT_SUMMARY and a many-to-one relationship from REPORT_SUMMARY to REPORT. The primary key of the REPORT_SUMMARY table is defined as a composite primary key which consists of an auto-increment id field and the primary key of the REPORT table.

CREATE TABLE REPORT (
  ID INT(10) NOT NULL AUTO_INCREMENT,
  NAME VARCHAR(45) NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE REPORT_SUMMARY (
  ID INT(10) NOT NULL AUTO_INCREMENT,
  NAME VARCHAR(45) NOT NULL,
  RPT_ID INT(10) NOT NULL,
  PRIMARY KEY (`ID`,`RPT_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

The Hibernate entity classes are as follows.

Report.java

package com.semika.autoac.entities;

import java.io.Serializable;
import java.util.HashSet;
import java.util.Set;

public class Report implements Serializable {

    private static final long serialVersionUID = 9146156921169669644L;

    private Integer id;
    private String name;
    private Set<ReportSummary> reportSummaryList = new HashSet<ReportSummary>();

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Set<ReportSummary> getReportSummaryList() { return reportSummaryList; }
    public void setReportSummaryList(Set<ReportSummary> reportSummaryList) {
        this.reportSummaryList = reportSummaryList;
    }
}

ReportSummary.java

package com.semika.autoac.entities;

import java.io.Serializable;

public class ReportSummary implements Serializable {

    private static final long serialVersionUID = 8052962961003467437L;

    private ReportSummaryId id;
    private String name;

    public ReportSummaryId getId() { return id; }
    public void setId(ReportSummaryId id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((id == null) ? 0 : id.hashCode());
        result = prime * result + ((name == null) ? 0 : name.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null) return false;
        if (getClass() != obj.getClass()) return false;
        ReportSummary other = (ReportSummary) obj;
        if (id == null) {
            if (other.id != null) return false;
        } else if (!id.equals(other.id)) return false;
        if (name == null) {
            if (other.name != null) return false;
        } else if (!name.equals(other.name)) return false;
        return true;
    }
}

ReportSummaryId.java

package com.semika.autoac.entities;

import java.io.Serializable;

public class ReportSummaryId implements Serializable {

    private static final long serialVersionUID = 6911616314813390449L;

    private Integer id;
    private Report report;

    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public Report getReport() { return report; }
    public void setReport(Report report) { this.report = report; }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((id == null) ? 0 : id.hashCode());
        result = prime * result + ((report == null) ? 0 : report.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (obj == null) return false;
        if (getClass() != obj.getClass()) return false;
        ReportSummaryId other = (ReportSummaryId) obj;
        if (id == null) {
            if (other.id != null) return false;
        } else if (!id.equals(other.id)) return false;
        if (report == null) {
            if (other.report != null) return false;
        } else if (!report.equals(other.report)) return false;
        return true;
    }
}

A Report object has a collection of ReportSummary objects, and ReportSummaryId has a reference to the Report object. The most important part of this implementation is the Hibernate mapping files.
Report.hbm.xml

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
    <class name="com.semika.autoac.entities.Report" table="REPORT">
        <id name="id" type="int" column="id">
            <generator class="native"/>
        </id>
        <property name="name">
            <column name="NAME" />
        </property>
        <set name="reportSummaryList" table="REPORT_SUMMARY" cascade="all" inverse="true">
            <key column="RPT_ID" not-null="true"></key>
            <one-to-many class="com.semika.autoac.entities.ReportSummary"/>
        </set>
    </class>
</hibernate-mapping>

ReportSummary.hbm.xml

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
    <class name="com.semika.autoac.entities.ReportSummary" table="REPORT_SUMMARY">
        <composite-id name="id" class="com.semika.autoac.entities.ReportSummaryId">
            <key-property name="id" column="ID"></key-property>
            <key-many-to-one name="report" class="com.semika.autoac.entities.Report" column="RPT_ID"/>
        </composite-id>
        <property name="name">
            <column name="NAME" />
        </property>
    </class>
</hibernate-mapping>

Reference: How to Use Hibernate for Composite Ids with association mappings from our JCG partner Semika Loku Kaluge at the Code Box blog....

Using Gradle to Bootstrap your Legacy Ant Builds

Gradle provides several different ways to leverage your existing investment in Ant, both in terms of accumulated knowledge and the time you've already put into build files. This can greatly facilitate the process of porting Ant-built projects over to Gradle, and can give you a path for doing so incrementally. The Gradle documentation does a good job of describing how you can use Ant in your Gradle build script, but here's a quick overview and some particulars I've run into myself.

Gradle AntBuilder

Every Gradle Project includes an AntBuilder instance, making any and all of the facilities of Ant available within your build files. Gradle provides a simple extension to the existing Groovy AntBuilder which adds a simple yet powerful way to interface with existing Ant build files: the importBuild(Object antBuildFile) method. Internally this method utilizes an Ant ProjectHelper to parse the specified Ant build file and then wraps all of the targets in Gradle tasks, making them available in the Gradle build. The following is a simple Ant build file, used for illustration, which contains some properties and a couple of dependent targets.

<?xml version='1.0'?>
<project name='build' default='all'>
    <echo>Building ${ant.file}</echo>

    <property file='build.properties'/>
    <property name='root.dir' location='.'/>

    <target name='dist' description='Build the distribution'>
        <property name='dist.dir' location='dist'/>
        <echo>dist.dir=${dist.dir}, foo=${foo}</echo>
    </target>

    <target name='all' description='Build everything' depends='dist'/>
</project>

Importing this build file using Gradle is a one-liner.

ant.importBuild('src/main/resources/build.xml')

And the output of gradle tasks --all on the command line shows that the targets have been added to the build tasks.

$ gradle tasks --all
...
Other tasks
-----------
all - Build everything
dist - Build the distribution
...
Properties used in the Ant build file can be specified in the Gradle build or on the command line and, unlike the usual Ant property behaviour, properties set by Ant or on the command line may be overwritten by Gradle. Given a simple build.properties file with foo=bar as the single entry, here are a few combinations to demonstrate the override behaviour.

1. Command line: gradle dist
   Gradle build config: ant.importBuild('src/main/resources/build.xml')
   Effect: build.properties value loaded from the Ant build is used
   Result: foo=bar

2. Command line: gradle dist -Dfoo=NotBar
   Gradle build config: ant.importBuild('src/main/resources/build.xml')
   Effect: command line property is used
   Result: foo=NotBar

3. Command line: gradle dist -Dfoo=NotBar
   Gradle build config: ant.foo='NotBarFromGradle' followed by ant.importBuild('src/main/resources/build.xml')
   Effect: Gradle build property is used
   Result: foo=NotBarFromGradle

4. Command line: gradle dist -Dfoo=NotBar
   Gradle build config: ant.foo='NotBarFromGradle', then ant.importBuild('src/main/resources/build.xml'), then ant.foo='NotBarFromGradleAgain'
   Effect: Gradle build property override is used
   Result: foo=NotBarFromGradleAgain

How to deal with task name clashes

Since Gradle insists on uniqueness of task names, attempting to import an Ant build that contains a target with the same name as an existing Gradle task will fail. The most common clash I've encountered is with the clean task provided by the Gradle BasePlugin. With the help of a little bit of indirection we can still import and use any clashing targets by utilizing the GradleBuild task to bootstrap an Ant build import in an isolated Gradle project. Let's add a new task to the imported Ant build and another dependency on the Ant clean target to the all task.

<!-- excerpt from buildWithClean.xml Ant build file -->
<target name='clean' description='clean up'>
    <echo>Called clean task in ant build with foo = ${foo}</echo>
</target>
<target name='all' description='Build everything' depends='dist,clean'/>

And a simple Gradle build file which will handle the import.
ant.importBuild('src/main/resources/buildWithClean.xml')

Finally, in our main Gradle build file we add a task to run the targets we want.

task importTaskWithExistingName(type: GradleBuild) { GradleBuild antBuild ->
    antBuild.buildFile = 'buildWithClean.gradle'
    antBuild.tasks = ['all']
}

This works, but unfortunately suffers from one small problem. When Gradle imports these tasks it doesn’t properly respect the declared order of the dependencies; instead it executes the dependent Ant targets in alphabetical order. In this particular case Ant expects to execute the dist target before clean, and Gradle executes them in the reverse order. This can be worked around by explicitly stating the task order. Definitely not ideal, but workable. This Gradle task will execute the underlying Ant targets in the way we need.

task importTasksRunInOrder(type: GradleBuild) { GradleBuild antBuild ->
    antBuild.buildFile = 'buildWithClean.gradle'
    antBuild.tasks = ['dist', 'clean']
}

Gradle Rules for the rest

Finally, you can use a Gradle Rule to allow for calling any arbitrary target in a GradleBuild bootstrapped import.

tasks.addRule('Pattern: a-<target> will execute a single <target> in the ant build') { String taskName ->
    if (taskName.startsWith('a-')) {
        task(taskName, type: GradleBuild) {
            buildFile = 'buildWithClean.gradle'
            tasks = [taskName - 'a-']
        }
    }
}

In this particular example, this can allow you to string together calls as well, but be warned that they execute in completely segregated environments.

$ gradle a-dist a-clean

Source code

All of the code referenced in this article is available on github if you’d like to take a closer look.

Related posts: Why do I Like Gradle? A Groovy/Gradle JSLint Plugin Five Cool Things You Can Do With Groovy Scripts

Reference: Using Gradle to Bootstrap your Legacy Ant Builds from our JCG partner Kelly Robinson at the The Kaptain on … stuff blog....

Hooking into the Jenkins (Hudson) API, Part 1

Which one – Hudson or Jenkins? Both. I started working on this little project a couple of months back using Hudson v1.395 and returned to it after the great divide happened. I took it as an opportunity to see whether there would be any significant problems should I choose to move permanently to Jenkins in the future. There were a couple of hiccups (most notably that the new CLI jar didn’t work right out of the box), but overall v1.401 of Jenkins worked as expected after the switch. The good news is the old version of the CLI jar still works, so this example is actually using a mix of code to get things done. Anyway, the software is great and there’s more than enough credit to go around.

The API

Jenkins/Hudson has a handy remote API packed with information about your builds, and it supports a rich set of functionality to control them, and the server in general, remotely. It is possible to trigger builds, copy jobs, stop the server and even install plugins remotely. You have your choice of XML, JSON or Python when interacting with the APIs of the server. And, as the built-in documentation says, you can find the functionality you need on a relative path from the build server url at “/…/api/”, where the ‘…’ portion is the object you’d like to access. This will show a brief documentation page if you navigate to it in a browser, and will return a result if you add the desired format as the last part of the path. For instance, to load information about the computer running a locally hosted Jenkins server, a GET request on this url would return the result in JSON format: http://localhost:8080/computer/api/json.
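The URL convention just described is mechanical enough to capture in a few lines. Here is a minimal Java sketch of it; the ApiUrls class and apiUrl method are my own illustrative names, not part of Jenkins or any of its libraries.

```java
// Sketch of the Jenkins/Hudson remote API URL convention:
//   <rootUrl>/<object path>/api/<format>[?depth=n]
// Names here are illustrative only, not part of the Jenkins API.
public class ApiUrls {
    static String apiUrl(String rootUrl, String objectPath, String format, int depth) {
        StringBuilder sb = new StringBuilder(rootUrl);
        if (!rootUrl.endsWith("/")) {
            sb.append('/');
        }
        if (!objectPath.isEmpty()) {
            sb.append(objectPath).append('/');
        }
        sb.append("api/").append(format);
        if (depth > 0) {
            sb.append("?depth=").append(depth); // higher depth loads more of the object tree
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // The computer info example from the text:
        System.out.println(apiUrl("http://localhost:8080", "computer", "json", 0));
        // → http://localhost:8080/computer/api/json
        // A deeper query against a job:
        System.out.println(apiUrl("http://localhost:8080", "job/test", "json", 2));
        // → http://localhost:8080/job/test/api/json?depth=2
    }
}
```

A GET on the first of these URLs returns the JSON shown next.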
{
    'busyExecutors': 0,
    'displayName': 'nodes',
    'computer': [
        {
            'idle': true,
            'executors': [ { }, { } ],
            'actions': [],
            'temporarilyOffline': false,
            'loadStatistics': { },
            'displayName': 'master',
            'oneOffExecutors': [],
            'manualLaunchAllowed': true,
            'offline': false,
            'launchSupported': true,
            'icon': 'computer.png',
            'monitorData': {
                'hudson.node_monitors.ResponseTimeMonitor': { 'average': 111 },
                'hudson.node_monitors.ClockMonitor': { 'diff': 0 },
                'hudson.node_monitors.TemporarySpaceMonitor': { 'size': 58392846336 },
                'hudson.node_monitors.SwapSpaceMonitor': null,
                'hudson.node_monitors.DiskSpaceMonitor': { 'size': 58392846336 },
                'hudson.node_monitors.ArchitectureMonitor': 'Mac OS X (x86_64)'
            },
            'offlineCause': null,
            'numExecutors': 2,
            'jnlpAgent': false
        }
    ],
    'totalExecutors': 2
}

Here’s the same tree rendered using GraphViz.

This functionality extends out in a tree from the root of the server, and you can gate how much of the tree you load from any particular branch by supplying a ‘depth’ query parameter on your urls. Be careful how high you set this parameter. Testing with a load depth of four against a populous, long-running build server (dozens of builds with thousands of job executions) managed to regularly time out for me. To give you an idea, here’s a very rough visualization of the domain at depth three from the root of the api.

Getting data out of the server is very simple, but the ability to remotely trigger activity on the server is more interesting. In order to trigger a build of a job named ‘test’, a POST on http://localhost:8080/job/test/build does the job. Using the available facilities, it’s pretty easy to do things like:

- load a job’s configuration file, modify it, and create a new job by POSTing the new config.xml file
- move a job from one build machine to another
- build up an overview of scheduled builds

The CLI Jar

There’s another way to remotely drive build servers in the CLI jar distributed along with the server.
This jar provides simple facilities for executing certain commands remotely on the build server. Of note, this enables installing plugins remotely and executing a remote Groovy shell. I incorporated this functionality with a very thin wrapper around the main class exposed by the CLI jar, as shown in the next code sample.

/**
 * Drive the CLI with multiple arguments to execute.
 * Optionally accepts streams for input, output and err, all of which
 * are set by default to System unless otherwise specified.
 * @param rootUrl
 * @param args
 * @param input
 * @param output
 * @param err
 * @return
 */
def runCliCommand(String rootUrl, List<String> args, InputStream input = System.in,
                  OutputStream output = System.out, OutputStream err = System.err) {
    CLI cli = new CLI(rootUrl.toURI().toURL())
    cli.execute(args, input, output, err)
    cli.close()
}

And here’s a simple test showing how you can execute a Groovy script to load information about jobs, similar to what you can do from the built-in Groovy script console on the server, which can be found for a locally installed deployment at http://localhost:8080/script.

def 'should be able to query hudson object through a groovy script'() {
    final ByteArrayOutputStream output = new ByteArrayOutputStream()

    when:
    api.runCliCommand(rootUrl,
        ['groovysh', 'for(item in hudson.model.Hudson.instance.items) { println("job $item.name") }'],
        System.in, output, System.err)

    then:
    println output.toString()
    output.toString().split('\n')[0].startsWith('job')
}

Here are some links to articles about the CLI, if you want to learn more:

- Hudson CLI wikidoc
- Jenkins CLI wikidoc
- A template for PHP jobs on Jenkins
- An article from Kohsuke Kawaguchi
- A nice tutorial

HTTPBuilder

HTTPBuilder is my tool of choice when programming against an HTTP API nowadays. The usage is very straightforward and I was able to get away with only two methods to support reaching the entire API: one for GET and one for POST.
Here’s the GET method, sufficient for executing the request, parsing the JSON response, and complete with (albeit naive) error handling.

/**
 * Load info from a particular rootUrl+path, optionally specifying a 'depth' query
 * parameter (default depth = 0)
 *
 * @param rootUrl the base url to access
 * @param path the api path to append to the rootUrl
 * @param depth the depth query parameter to send to the api, defaults to 0
 * @return parsed json (as a map) or xml (as GPathResult)
 */
def get(String rootUrl, String path, int depth = 0) {
    def status
    HTTPBuilder http = new HTTPBuilder(rootUrl)
    http.handler.failure = { resp ->
        println "Unexpected failure on $rootUrl$path: ${resp.statusLine} ${resp.status}"
        status = resp.status
    }

    def info
    http.get(path: path, query: [depth: depth]) { resp, json ->
        info = json
        status = resp.status
    }
    info ?: status
}

Calling this to fetch data is a one-liner, as the only real difference between calls is the ‘path’ variable used when calling the API.

private final GetRequestSupport requestSupport = new GetRequestSupport()
...
/**
 * Display the job api for a particular Hudson job.
 * @param rootUrl the url for a particular build
 * @return job info in json format
 */
def inspectJob(String rootUrl, int depth = 0) {
    requestSupport.get(rootUrl, API_JSON, depth)
}

Technically, there’s nothing here that limits this to JSON only. One of the great things about HTTPBuilder is that it will happily just try to do the right thing with the response. If the data returned is in JSON format, as in these examples, it gets parsed into a JSONObject. If, on the other hand, the data is XML, it gets parsed into a Groovy GPathResult. Both of these are very easily navigable, although the syntax for navigating their object graphs differs.

What can you do with it?

My primary motivation for exploring the API of Hudson/Jenkins was to see how I could make managing multiple servers easier.
At present I work daily with four build servers and another handful of slave machines, and support a variety of different version branches. This includes a mix of unit and functional test suites, as well as a continuous deployment job that regularly pushes changes to test machines matching our supported platform matrix, so unfortunately things are not quite as simple as copying a single job when branching. Creating the build infrastructure for new feature branches in an automatic, or at least semi-automatic, fashion is attractive indeed, especially since plans are in the works to expand build automation. For a recent 555 day project, I utilized the API layer to build a Grails app functioning as both a cross-server build radiator and a central facility for server management. This proof of concept is capable of connecting to multiple build servers and visualizing job data as well as specific system configuration, triggering builds, and direct linking to each of the connected servers to allow for drilling down further. Here are a couple of mock-ups that pretty much show the picture.

Just a pretty cool app for installing Jenkins

This is only very indirectly related, but I came across this very nice and simple Griffon app, called the Jenkins-Assembler, which simplifies preparing your build server. It presents you with a list of plugins, letting you pick and choose, and then downloads and composes them into a single deployable war.

Enough talking – where’s the code???

Source code related to this article is available on github. The tests are more of an exploration of the live API than an actual test of the code in this project. They run against a local server launched using the Gradle Jetty plugin.

Continue to Part 2.

Reference: Hooking into the Jenkins (Hudson) API from our JCG partner Kelly Robinson at the The Kaptain on … stuff blog....

Hooking into the Jenkins (Hudson) API, Part 2

This post continues from Part 1 of the tutorial. It’s been almost a year, but I finally had some time to revisit some code I wrote for interacting with the Jenkins api. I’ve used parts of this work to help manage a number of Jenkins build servers, mostly in terms of keeping plugins in sync and moving jobs from one machine to another. For this article I’m going to be primarily focusing on the CLI jar functionality and some of the things you can do with it. This has mostly been developed against Jenkins but I did some light testing with Hudson and it worked there for everything I tried, so the code remains mostly agnostic as to your choice of build server. The project structure The code is hosted on Github, and provides a Gradle build which downloads and launches a Jenkins(or Hudson) server locally to execute tests. The server is set to use the Gradle build directory as its working directory, so it can be deleted simply by executing gradle clean. I tried it using both the Jenkins and the Hudson versions of the required libraries and, aside from some quirks between the two CLI implementations, they continue to function very much the same. If you want to try it with Hudson instead of Jenkins, pass in the command flag -Pswitch and the appropriate war and libraries will be used. The project is meant to be run with Gradle 1.0-milestone-8, and comes with a Gradle wrapper for that version. Most of the code remains the same since the original article, but there are some enhancements and changes to deal with the newer versions of Jenkins and Hudson. The library produced by this project is published as a Maven artifact, and later on I’ll describe exactly how to get at it. There are also some samples included that demonstrate using that library in Gradle or Maven projects, and in Groovy scripts with Grapes. We’re using Groovy 1.8.6, Gradle 1.0-milestone-8 and Maven 3.0.3 to build everything. 
Getting more out of the CLI

As an alternative to the api, the CLI jar is a very capable way of interacting with the build server. In addition to a variety of built-in commands, Groovy scripts can be executed remotely, and with a little effort we can easily serialize responses in order to work with data extracted on the server. As an execution environment, the server provides a Groovysh shell and stocks it with imports for the hudson.model package. Also passed into the Binding is the instance of the Jenkins/Hudson singleton object in that package. In these examples I’m using the backwards-compatible Hudson version, since the code is intended to be runnable on either flavor of the server.

The available commands

There’s a rich variety of built-in commands, all of which are implemented in the hudson.cli package. Here are the ones that are listed on the CLI page of the running application:

- build: Builds a job, and optionally waits until its completion.
- cancel-quiet-down: Cancel the effect of the “quiet-down” command.
- clear-queue: Clears the build queue
- connect-node: Reconnect to a node
- copy-job: Copies a job.
- create-job: Creates a new job by reading stdin as a configuration XML file.
- delete-builds: Deletes build record(s).
- delete-job: Deletes a job
- delete-node: Deletes a node
- disable-job: Disables a job
- disconnect-node: Disconnects from a node
- enable-job: Enables a job
- get-job: Dumps the job definition XML to stdout
- groovy: Executes the specified Groovy script.
- groovysh: Runs an interactive groovy shell.
- help: Lists all the available commands.
- install-plugin: Installs a plugin either from a file, an URL, or from update center.
- install-tool: Performs automatic tool installation, and print its location to stdout. Can be only called from inside a build.
- keep-build: Mark the build to keep the build forever.
- list-changes: Dumps the changelog for the specified build(s).
- login: Saves the current credential to allow future commands to run without explicit credential information.
- logout: Deletes the credential stored with the login command.
- mail: Reads stdin and sends that out as an e-mail.
- offline-node: Stop using a node for performing builds temporarily, until the next “online-node” command.
- online-node: Resume using a node for performing builds, to cancel out the earlier “offline-node” command.
- quiet-down: Quiet down Jenkins, in preparation for a restart. Don’t start any builds.
- reload-configuration: Discard all the loaded data in memory and reload everything from file system. Useful when you modified config files directly on disk.
- restart: Restart Jenkins
- safe-restart: Safely restart Jenkins
- safe-shutdown: Puts Jenkins into the quiet mode, wait for existing builds to be completed, and then shut down Jenkins.
- set-build-description: Sets the description of a build.
- set-build-display-name: Sets the displayName of a build
- set-build-result: Sets the result of the current build. Works only if invoked from within a build.
- shutdown: Immediately shuts down Jenkins server
- update-job: Updates the job definition XML from stdin. The opposite of the get-job command
- version: Outputs the current version.
- wait-node-offline: Wait for a node to become offline
- wait-node-online: Wait for a node to become online
- who-am-i: Reports your credential and permissions

It’s not immediately apparent what arguments are required for each, but they almost universally follow a CLI pattern of printing usage details when called with no arguments. For instance, when you call the build command with no arguments, here’s what you get back in the error stream:

Argument “JOB” is required
java -jar jenkins-cli.jar build args…
Starts a build, and optionally waits for a completion.
Aside from general scripting use, this command can be used to invoke another job from within a build of one job.
With the -s option, this command changes the exit code based on the outcome of the build (exit code 0 indicates a success.)
With the -c option, a build will only run if there has been an SCM change.

 JOB : Name of the job to build
 -c  : Check for SCM changes before starting the build, and if there’s no change, exit without doing a build
 -p  : Specify the build parameters in the key=value format.
 -s  : Wait until the completion/abortion of the command

Getting data out of the system

All of the interaction with the remote system is handled by streams, and it’s pretty easy to craft scripts that will return data in an easily parseable String format using built-in Groovy facilities. In theory, you should be able to marshal more complex objects as well, but let’s keep it simple for now. Here’s a Groovy script that just extracts all of the job names into a List, calling the Groovy inspect method to quote all values.

@GrabResolver(name = 'glassfish', root = 'http://maven.glassfish.org/content/groups/public/')
@GrabResolver(name = "github", root = "http://kellyrob99.github.com/Jenkins-api-tour/repository")
@Grab('org.kar:hudson-api:0.2-SNAPSHOT')
@GrabExclude('org.codehaus.groovy:groovy')
import org.kar.hudson.api.cli.HudsonCliApi

String rootUrl = 'http://localhost:8080'
HudsonCliApi cliApi = new HudsonCliApi()
OutputStream out = new ByteArrayOutputStream()
cliApi.runCliCommand(rootUrl, ['groovysh', 'hudson.jobNames.inspect()'], System.in, out, System.err)
List allJobs = Eval.me(cliApi.parseResponse(out.toString()))
println allJobs

Once we get the response back, we do a little housekeeping to remove some extraneous characters at the beginning of the String, and use Eval.me to transform the String into a List. Groovy provides a variety of ways of turning text into code, so if your usage scenario gets more complicated than this simple case you can use a GroovyShell with a Binding, or another alternative, to parse the results into something useful. This easy technique extends to Maps and other types as well, making it simple to work with data sent back from the server.
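If you are consuming this kind of inspect()-style response from plain Java rather than Groovy, there is no Eval.me to lean on, but for the simple flat-list case a few lines of string handling suffice. The following is a rough sketch under stated assumptions: the InspectParser class is hypothetical, and it only handles a flat list of single-quoted names with no embedded quotes or commas.

```java
import java.util.ArrayList;
import java.util.List;

public class InspectParser {
    // Parse a Groovy inspect()-style list such as "['job one', 'job two']".
    // Only the flat, single-quoted case is handled; Eval.me on the Groovy
    // side is far more general.
    static List<String> parseList(String response) {
        String body = response.trim();
        body = body.substring(1, body.length() - 1).trim(); // strip [ and ]
        List<String> result = new ArrayList<>();
        if (body.isEmpty()) {
            return result;
        }
        for (String part : body.split(",")) {
            String item = part.trim();
            result.add(item.substring(1, item.length() - 1)); // strip surrounding quotes
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(parseList("['free-style-test', 'maven-test']"));
        // → [free-style-test, maven-test]
    }
}
```

Job names containing commas or quotes would break this naive split, which is exactly why the Groovy examples in the text prefer Eval.me.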
Some useful examples

Finding plugins with updates and updating all of them

Here’s an example of using a Groovy script to find all of the plugins that have updates available, returning that result to the caller, and then calling the CLI ‘install-plugin’ command on all of them. Conveniently, this command will either install a plugin if it’s not already there, or update it to the latest version if it is already installed.

def findPluginsWithUpdates = '''
Hudson.instance.pluginManager.plugins.inject([]) { List toUpdate, plugin ->
    if(plugin.hasUpdate()) {
        toUpdate << plugin.shortName
    }
    toUpdate
}.inspect()
'''
OutputStream updateablePlugins = new ByteArrayOutputStream()
cliApi.runCliCommand(rootUrl, ['groovysh', findPluginsWithUpdates], System.in, updateablePlugins, System.err)

def listOfPlugins = Eval.me(cliApi.parseResponse(updateablePlugins.toString()))
listOfPlugins.each { plugin ->
    cliApi.runCliCommand(rootUrl, ['install-plugin', plugin])
}

Install or upgrade a suite of Plugins all at once

This definitely beats using the ‘Manage Plugins’ UI, and is idempotent, so running it more than once can only result in possibly upgrading already installed Plugins. This set of plugins might be overkill, but these are some plugins I recently surveyed for possible use.
@GrabResolver(name='glassfish', root='http://maven.glassfish.org/content/groups/public/')
@GrabResolver(name="github", root="http://kellyrob99.github.com/Jenkins-api-tour/repository")
@Grab('org.kar:hudson-api:0.2-SNAPSHOT')
@GrabExclude('org.codehaus.groovy:groovy')
import static java.net.HttpURLConnection.*
import org.kar.hudson.api.*
import org.kar.hudson.api.cli.HudsonCliApi

String rootUrl = 'http://localhost:8080'
HudsonCliApi cliApi = new HudsonCliApi()

['groovy', 'gradle', 'chucknorris', 'greenballs', 'github', 'analysis-core', 'analysis-collector',
 'cobertura', 'project-stats-plugin', 'audit-trail', 'view-job-filters', 'disk-usage',
 'global-build-stats', 'radiatorviewplugin', 'violations', 'build-pipeline-plugin', 'monitoring',
 'dashboard-view', 'iphoneview', 'jenkinswalldisplay'].each { plugin ->
    cliApi.runCliCommand(rootUrl, ['install-plugin', plugin])
}

// Restart a node, required for newly installed plugins to be made available.
cliApi.runCliCommand(rootUrl, 'safe-restart')

Finding all failed builds and triggering them

It’s not all that uncommon that a network problem or infrastructure event can cause a host of builds to fail all at once. Once the problem is solved, this script can be useful for verifying that the builds are all in working order.
@GrabResolver(name = 'glassfish', root = 'http://maven.glassfish.org/content/groups/public/')
@GrabResolver(name = "github", root = "http://kellyrob99.github.com/Jenkins-api-tour/repository")
@Grab('org.kar:hudson-api:0.2-SNAPSHOT')
@GrabExclude('org.codehaus.groovy:groovy')
import org.kar.hudson.api.cli.HudsonCliApi

String rootUrl = 'http://localhost:8080'
HudsonCliApi cliApi = new HudsonCliApi()
OutputStream out = new ByteArrayOutputStream()
def script = '''hudson.items.findAll { job ->
    job.isBuildable() && job.lastBuild && job.lastBuild.result == Result.FAILURE
}.collect { it.name }.inspect()
'''
cliApi.runCliCommand(rootUrl, ['groovysh', script], System.in, out, System.err)
List failedJobs = Eval.me(cliApi.parseResponse(out.toString()))
failedJobs.each { job ->
    cliApi.runCliCommand(rootUrl, ['build', job])
}

Open an interactive Groovy shell

If you really want to poke at the server, you can launch an interactive shell to inspect state and execute commands. The System.in stream is bound, and responses from the server are immediately echoed back.

@GrabResolver(name = 'glassfish', root = 'http://maven.glassfish.org/content/groups/public/')
@GrabResolver(name = "github", root = "http://kellyrob99.github.com/Jenkins-api-tour/repository")
@Grab('org.kar:hudson-api:0.2-SNAPSHOT')
@GrabExclude('org.codehaus.groovy:groovy')
import org.kar.hudson.api.cli.HudsonCliApi

/**
 * Open an interactive Groovy shell that imports the hudson.model.* classes and exposes
 * a 'hudson' and/or 'jenkins' object in the Binding which is an instance of hudson.model.Hudson
 */
HudsonCliApi cliApi = new HudsonCliApi()
String rootUrl = args ? args[0] : 'http://localhost:8080'
cliApi.runCliCommand(rootUrl, 'groovysh')

Updates to the project

A lot has happened in the last year, and all of the project dependencies needed an update. In particular, there have been some very nice improvements to Groovy, Gradle and Spock. Most notably, Gradle has come a VERY long way since version 0.9.2.
The JSON support added in Groovy 1.8 comes in handy as well. Spock required a small tweak for rendering dynamic content in test reports when using @Unroll, but that’s a small price to pay for features like the ‘old’ method and Chained Stubbing. Essentially, in response to changes in Groovy 1.8+, a Spock @Unroll annotation needs to change from:

@Unroll('querying of #rootUrl should match #xmlResponse')

to a Closure-encapsulated GString expression:

@Unroll({"querying of $rootUrl should match $xmlResponse"})

It sounds like the syntax is still in flux, and I’m glad I found this discussion of the problem online.

Hosting a Maven repository on Github

Perhaps you noticed from the previous script examples that we’re referencing a published library to get at the HudsonCliApi class. I read an interesting article last week which describes how to use the built-in Github Pages for publishing a Maven repository. While this isn’t nearly as capable as a repository manager like Nexus or Artifactory, it’s totally sufficient for making some binaries available to most common build tools in a standard fashion. Simply publish the binaries along with associated poms in the standard Maven repo layout and you’re off to the races! Each dependency management system has its quirks (I’m looking at you, Ivy!) but they’re pretty easy to work around, so here are examples for Gradle, Maven and Groovy Grapes using the library produced by this project’s code. Note that some of the required dependencies for Jenkins/Hudson aren’t available in the Maven central repository, so we’re getting them from the Glassfish repo.

Gradle

Pretty straightforward; this works with the latest version of Gradle and assumes that you are using the Groovy plugin.
repositories {
    mavenCentral()
    maven { url 'http://maven.glassfish.org/content/groups/public/' }
    maven { url 'http://kellyrob99.github.com/Jenkins-api-tour/repository' }
}
dependencies {
    groovy "org.codehaus.groovy:groovy-all:${versions.groovy}"
    compile 'org.kar:hudson-api:0.2-SNAPSHOT'
}

Maven

Essentially the same content in xml; in this case it’s assumed that you’re using the GMaven plugin.

<repositories>
    <repository>
        <id>glassfish</id>
        <name>glassfish</name>
        <url>http://maven.glassfish.org/content/groups/public/</url>
    </repository>
    <repository>
        <id>github</id>
        <name>Jenkins-api-tour maven repo on github</name>
        <url>http://kellyrob99.github.com/Jenkins-api-tour/repository</url>
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>org.codehaus.groovy</groupId>
        <artifactId>groovy-all</artifactId>
        <version>${groovy.version}</version>
    </dependency>
    <dependency>
        <groupId>org.kar</groupId>
        <artifactId>hudson-api</artifactId>
        <version>0.2-SNAPSHOT</version>
    </dependency>
</dependencies>

Grapes

In this case there seems to be a problem resolving some transitive dependency for an older version of Groovy, which is why there’s an explicit exclude for it.

@GrabResolver(name='glassfish', root='http://maven.glassfish.org/content/groups/public/')
@GrabResolver(name='github', root='http://kellyrob99.github.com/Jenkins-api-tour/repository')
@Grab('org.kar:hudson-api:0.2-SNAPSHOT')
@GrabExclude('org.codehaus.groovy:groovy')

Links

- The Github Jenkins-api-tour project page
- Maven repositories on Github
- Scriptler example Groovy scripts
- Jenkins CLI documentation

Related posts: Hooking into the Jenkins (Hudson) API Five Cool Things You Can Do With Groovy Scripts A Grails App Demoing the StackExchange API

Reference: Hooking into the Jenkins (Hudson) API, Part 2 from our JCG partner Kelly Robinson at the The Kaptain on … stuff blog....

Learn A Different Language – Advice From A JUG Leader

The cry of “Java is Dead” has been heard for many years now, yet Java still continues to be among the most used languages/ecosystems. I am not here to declare that Java is dead (it isn’t and won’t be anytime soon). My opinion, if you haven’t already heard: Java developers, it’s time to learn something else.

First, a little background as basis for my opinions: I founded the Philadelphia Area Java Users’ Group in March 2000, and for the past 12 years I have served as ‘JUGmaster’. Professionally, I have been a technology recruiter focused on (you guessed it) helping Philadelphia area software companies to hire Java talent since early 1999. I started a new recruiting firm in January that is not focused on Java, and I’m taking searches for mostly Java, Python, Ruby, Scala, Clojure, and mobile talent. This was a natural progression for me, as a portion of my candidate network had already transitioned to other technologies.

I launched Philly JUG based on a recommendation from a candidate, who learned that the old group was dormant. Philly JUG grew from 30 to over 1300 members and we have been recognized twice by Sun as a Top JUG worldwide. This JUG is non-commercial (no product demos, no sales or recruiting activity directed to the group), entirely sponsor-funded, and I have had great success in attracting top Java minds to present for us.

The early signs

After several years of 100% Java-specific presentations at our meetings, I started to notice that an element of the membership requested topics that were not specifically Java EE or SE. I served as the sole judge of what content was appropriate (with requested input from some members), and I allowed the group to stray a bit from our standard fare. First was Practical JRuby back in ’06, but since that was ‘still Java’ there was no controversy. Groovy and Grails in ’08 wasn’t going to raise any eyebrows either.
Then in ’09, we had consecutive non-Java meetings: Scala for Jarheads followed by Clojure and the Robot Apocalypse (exact dates for said apocalypse have been redacted). Obviously there is commonality with the JVM, but it was becoming readily apparent that some members of the group were less interested in simply hearing about JSP, EJB, Java ME or whatever the Java vendor universe might be promoting at the time.

I noticed that the members who sought these other topics and attended these alternative meetings were my unofficial advisory committee over the years, the members I called first to ask opinions about topics. These people were the thought leadership of the group. Many of them were early adopters of Java as well.

It was apparent that many of the better Java engineers I knew were choosing to broaden their horizons with new languages, which prompted me to write “Become a Better Java Programmer – Learn Something Else”. That ’09 article served to demonstrate that by learning another language, you should become a better overall engineer, and your Java skills should improve just based on some new approaches. Today I go a step farther in my advice for the Java community, and simply say ‘Learn Something Else’.

To be clear, the reason I make this suggestion is not because I feel Java as a language is going to die off, or that all companies will stop using Java in the near future. Java will obviously be around for many years to come, and the JVM itself will certainly continue to be a valued resource for developers. The reason I advise you to learn something else is that I strongly believe that the marketability of developers who only code in Java will diminish noticeably in the next few years, and the relevance and adoption of Java in new projects will decline.
Known Java experts who are at the top few percent probably won’t see decreased demand, but the vast majority of the Java talent pool undoubtedly will.

The writing on the wall

I think at this point the writing on the wall is getting a bit too obvious to ignore, and you have two forces acting concurrently. First, there is a tangible groundswell of support for other languages. A month doesn’t seem to go by that we don’t hear about a new language being released, or read that a company transitioned from Java to another option. Much of this innovation is by former Java enthusiasts, who are often taking the best elements of Java and adding features that were desired by the Java community but couldn’t get through the process for inclusion. Java has been lauded for its stability, and the price Java pays for that stability is slowed innovation.

The second contributing factor is that Java has simply lost much of its luster and magic over the past few years. The Sun acquisition was a major factor, as Oracle is viewed as entirely profit-driven, ‘big corporate’, and less focused on community-building than Sun was with Java. The Java community, in turn, is naturally less interested in helping to improve Java under Oracle. Giving away code or time to Oracle is like ‘working for the man’ to the Java community. Oracle deciding to run JavaOne alongside Oracle OpenWorld may have been an omen. Failures such as JavaFX and the inability to keep up with feature demand have not helped either.

My suggestion to learn something else is also rooted in simple economic principles. I have seen the demand for engineers with certain skills (Ruby, and dare I say JavaScript, are good examples) increasing quickly and dramatically, and the low supply of talent in these markets makes it an opportune time to make a move. It reminds me of the late 90’s, when you could earn six figures if you could spell J-A-V-A.
Some companies are now even willing to teach good Java pros a new language on the job – what is better than getting paid to learn? The gap between supply and demand for Java talent was severe years ago, but supply seems to have caught up recently. Java development also seems to be a skill that, in my experience, is shipped offshore a bit more than some other languages.

Still don’t see it? Remember those early Java adopters, the thought leaders I mentioned? Many of them are still around Java, but they aren’t writing Java code anymore. They have come to appreciate the features of some of these other offerings, and are either bored or frustrated with Java. As this set of converts continues to use and evangelize alternative languages in production, they will influence more junior developers, who I expect will follow their lead. The flow of Java developers to other languages will continue to grow, and there is still time to take advantage of the supply shortage in alternative language markets.

Java will never die. However, the relevance and influence of Java tomorrow is certainly questionable, the marketability of ‘pure’ Java developers will decline, and the market for talent in alternative languages is too strong for proactive, career-minded talent to ignore.

Reference: Advice From A JUG Leader – Learn A Different Language from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

10 Object Oriented Design principles for the Java programmer

Object Oriented Design principles are the core of OOP programming, but I have seen most Java programmers chasing design patterns like the Singleton pattern, Decorator pattern or Observer pattern while not putting enough attention on object oriented analysis and design, or on following these design principles. I have regularly seen Java programmers and developers of various experience levels who either have never heard of these OOPS and SOLID design principles, or simply don’t know what benefits a particular design principle offers or how to apply it in coding.

The bottom line is: always strive for a highly cohesive and loosely coupled solution, code or design, and looking at open source code from Apache and Sun is a good way to see these Java design principles in action. The Java Development Kit itself follows several of them, e.g. the Factory pattern in the BorderFactory class and the Singleton pattern in the Runtime class. If you are interested in more on Java code, read Effective Java by Joshua Bloch, a gem by the guy who wrote parts of the Java API. Other personal favorites of mine on object oriented design are Head First Design Patterns by Kathy Sierra and others, and Head First Object Oriented Analysis and Design.

Though the best way to learn design principles or patterns is through real world examples and understanding the consequences of violating them, the subject of this article is to introduce object oriented design principles to Java programmers who either have not been exposed to them or are still in the learning phase. I personally think each of these design principles needs an article of its own to explain it clearly, and I will definitely try to do that here, but for now just get ready for a quick bike ride through design principle town :)

Object oriented design principle 1 – DRY (Don’t repeat yourself)

As the name suggests, DRY (don’t repeat yourself) means don’t write duplicate code; instead use abstraction to abstract common things in one place.
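As a minimal sketch of DRY (the class, method and constant names here are hypothetical, not from the article): the tax calculation below lives in exactly one place, so every caller reuses it instead of repeating the formula.

```java
// Hypothetical DRY example: before the refactoring, "amount + amount * 0.08"
// was repeated in both methods; now a rate change touches exactly one line.
public class PriceCalculator {

    private static final double TAX_RATE = 0.08; // single source of truth

    static double withTax(double amount) {
        return amount + amount * TAX_RATE;
    }

    static double invoiceTotal(double[] lineItems) {
        double sum = 0;
        for (double item : lineItems) {
            sum += item;
        }
        return withTax(sum); // reuse the shared calculation, don't repeat it
    }

    public static void main(String[] args) {
        System.out.println(withTax(100.0));                      // 108.0
        System.out.println(invoiceTotal(new double[] {40, 60})); // 108.0
    }
}
```

If the tax rate or the rounding rule ever changes, only `withTax` needs to be edited and retested.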
If you use a hardcoded value more than once, consider making it a public final constant; if you have a block of code in more than two places, consider making it a separate method. The benefit of this SOLID design principle is in maintenance. It’s worth noting not to abuse it: DRY is about duplication of functionality, not just of code. If you use common code to validate an OrderID and an SSN, it doesn’t mean they are the same thing or that they will remain the same in the future. By using common code for two different pieces of functionality you couple them closely forever, and when your OrderID changes its format, your SSN validation code will break. So be aware of such coupling, and just don’t combine things which happen to use similar code but are not otherwise related.

Object oriented design principle 2 – Encapsulate what varies

Only one thing is constant in the software field, and that is “change”, so encapsulate the code you expect or suspect to change in the future. The benefit of this OOPS design principle is that properly encapsulated code is easy to test and maintain. If you are coding in Java, follow the principle of making variables and methods private by default and increasing access step by step, e.g. from private to protected, not straight to public. Several design patterns in Java use encapsulation; the Factory design pattern is one example, which encapsulates the object creation code and provides the flexibility to introduce new products later with no impact on existing code.

Object oriented design principle 3 – Open Closed principle

Classes, methods and functions should be open for extension (new functionality) and closed for modification. This is another beautiful object oriented design principle, which prevents someone from changing already tried and tested code.
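The Open Closed principle can be sketched as follows (a hypothetical example, not from the article): a new kind of `Shape` extends the program with a new class, while `AreaCalculator` itself, the tried and tested code, is never modified.

```java
// Hypothetical Open Closed sketch: AreaCalculator is closed for modification
// (no instanceof checks to update) but open for extension via new Shape types.
interface Shape {
    double area();
}

class Rectangle implements Shape {
    private final double w, h;
    Rectangle(double w, double h) { this.w = w; this.h = h; }
    public double area() { return w * h; }
}

// Adding Circle required no change to AreaCalculator or Rectangle.
class Circle implements Shape {
    private final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class AreaCalculator {
    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) {
            total += s.area(); // polymorphism instead of type switches
        }
        return total;
    }

    public static void main(String[] args) {
        Shape[] shapes = { new Rectangle(2, 3), new Circle(1) };
        System.out.println(totalArea(shapes)); // 6 + pi
    }
}
```

Had `totalArea` used `instanceof` checks per concrete type instead, every new shape would force an edit (and a retest) of the calculator.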
Ideally, if you are only adding new functionality, then only your new code should need testing, and that is the goal of the Open Closed design principle.

Object oriented design principle 4 – Single Responsibility Principle (SRP)

There should not be more than one reason for a class to change; in other words, a class should always handle a single piece of functionality. If you put more than one piece of functionality in one class in Java, it introduces coupling between the two, and even if you change only one of them, there is a chance you break the other, which requires another round of testing to avoid surprises in the production environment.

Object oriented design principle 5 – Dependency Injection or Inversion principle

Don’t ask for a dependency; it will be provided to you by the framework. This has been very well implemented in the Spring framework. The beauty of this design principle is that any class which is injected by a DI framework is easy to test with a mock object, and easier to maintain, because the object creation code is centralized in the framework and client code is not littered with it. There are multiple ways to implement dependency injection, such as bytecode instrumentation, which some AOP (Aspect Oriented Programming) frameworks like AspectJ use, or proxies, as used in Spring.

Object oriented design principle 6 – Favour Composition over Inheritance

Always favour composition over inheritance, if possible. Some of you may argue with this, but I have found that composition is a lot more flexible than inheritance. Composition allows you to change the behaviour of a class at runtime by setting properties, and by using interfaces to compose a class we get polymorphism, which provides the flexibility to swap in a better implementation at any time. Even Effective Java advises favouring composition over inheritance.

Object oriented design principle 7 – Liskov Substitution Principle (LSP)

According to the Liskov Substitution Principle, subtypes must be substitutable for their supertype, i.e. methods or functions which use the superclass type must be able to work with objects of the subclass without any issue. LSP is closely related to the Single Responsibility Principle and the Interface Segregation Principle. If a superclass has more functionality than its subclass, the subclass might not support some of that functionality, which violates LSP. In order to follow the LSP design principle, a derived class or subclass must enhance functionality, not reduce it.

Object oriented design principle 8 – Interface Segregation Principle (ISP)

The Interface Segregation Principle states that a client should not be forced to implement an interface it doesn’t use. This happens mostly when one interface contains more than one piece of functionality and the client only needs one of them. Interface design is a tricky job, because once you release an interface you cannot change it without breaking all implementations. Another benefit of this design principle in Java is that, since an interface forces a class to implement all of its methods, having a single piece of functionality per interface means fewer methods to implement.

Object oriented design principle 9 – Programming for Interface not implementation

Always program to an interface and not an implementation; this leads to flexible code which can work with any new implementation of the interface. So use interface types for variables, return types of methods and argument types of methods in Java. This has been advised by many Java programmers, including in Effective Java and the Head First Design Patterns book.

Object oriented design principle 10 – Delegation principle

Don’t do all the stuff yourself; delegate it to the respective class. A classical example of the delegation design principle is the equals() and hashCode() methods in Java. In order to compare two objects for equality, we ask the class itself to do the comparison instead of the client class doing that check.
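A minimal sketch of that equals()/hashCode() delegation (the `Point` class is a hypothetical illustration): client code asks the object itself whether it equals another, rather than inspecting fields from the outside.

```java
// Hypothetical delegation example: equality logic lives inside Point itself,
// so every client delegates the check instead of duplicating field comparisons.
import java.util.Objects;

public class Point {
    private final int x, y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y); // delegate again, to a library helper
    }

    public static void main(String[] args) {
        // The client delegates the comparison rather than reading fields:
        System.out.println(new Point(1, 2).equals(new Point(1, 2))); // true
    }
}
```

If the notion of equality ever changes (say, a tolerance for coordinates), only `Point` is edited; no client code is touched.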
The benefit of this design principle is no duplication of code and easy modification of behaviour.

All these object oriented design principles help you write flexible and better code by striving for high cohesion and low coupling. Theory is the first step; what is most important is to develop the ability to recognize when to apply these design principles, and to find out whether we are violating any of them and compromising the flexibility of the code. But again, as nothing is perfect in this world, don’t always try to solve a problem with design patterns and principles; they are mostly for large enterprise projects which have longer maintenance cycles.

Reference: 10 Object Oriented Design principles Java programmer should know from our JCG partner Javin Paul at the Javarevisited blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.