Jenkins enhancements without plugins

Jenkins is a popular open source continuous integration server. I use it heavily. It is a highly extensible CI server with a huge plugin repository. But I must admit there are many cases where this whole 'zoo' of Jenkins plugins doesn't help:

- we need tons of plugins to solve some non-trivial problem, with too many plugin dependencies that must be properly managed
- some available plugins only partially provide the required functionality
- some plugins provide the required functionality but contain bugs
- or it might be impossible to find the necessary plugin at all, and the task has to be done ASAP

Given the cases mentioned above, we have at least two solutions:

- implement our own plugin providing all the required functionality; this takes time and slows down overall progress on the business task
- use some extra Jenkins facilities that give us a chance to automate Jenkins without writing a plugin

I'm a real fan of the second solution (at least as a prototyping phase, or as a quick-and-dirty right-now-right-away fix). So, what's the magic?
Jenkins has two nice plugins (of course there are many more similar plugins, but these two are the best for a quick start) which let us write Groovy scripts for the build and post-build phases:

- Groovy plugin for the Build phase
- Groovy Postbuild Plugin for the Post-build phase

Pros:

- Groovy scripts have access to the whole Jenkins infrastructure (Jenkins packages) and can invoke functionality of third-party plugins installed in the Jenkins instance
- it's very easy to prototype your ideas and validate automation approaches
- it delivers business results very quickly
- Groovy scripts can automate really amazing and non-trivial tasks

Cons:

- in the end, these Groovy scripts are not easy to test automatically
- version management for these scripts involves additional work (e.g. implementing a simple import/export tool for job configuration)
- the scripts can depend on third-party plugins, and these dependencies must somehow be managed as well
- testing and debugging are really painful activities, because they involve too many interactions with the Jenkins UI, etc. (yes, this can be improved by extending both Groovy plugins, but that's extra work)

Reference: Jenkins enhancements without plugins from our JCG partner Orest Ivasiv at the Knowledge Is Everything blog.

In-experienced development = Bowl full of Soup

There are times when things suddenly become an uphill task. It happened to me in my last assignment. I was asked to take charge of an existing product. The product had been under development for about 2 years, and now a few things needed fixing so that the system could scale well. It started quite well, as the whole thing was based on a licensed product from a vendor. The technology had loads of docs explaining things in detail. So I played around with the technology and it performed well. The next thing in line was to have a close look at the project, to see what needed to be fixed. When I took the plunge, I had a sinking feeling. The project lacked basic code hygiene, e.g. the build scripts were broken. I asked the team how they were making builds. The answer was "Eclipse -> Export as war". So I decided fixing the build was the first thing to do. But the moment I started, I realized that the scripts were doing a lot of things which no one completely understood. The scripts were shipped with the product, and most of the time only a few of them were used, to generate the UI components. As I played around with the build, I learned more about what was wrong in the project. Without the scripts, the project did not even compile completely. There were sources which used classes from an older version of the project that were no longer in use, and the context XMLs needed to be corrected for the same reason. After playing around with the build for a couple of days, I was able to get it working completely. While working on the build I had an initial thought of migrating things to Maven, but soon realized it was quite a task and discarded the idea. During the same period I was having a chat with the assigned manager. He was pushing TDD to the team, while I was telling him the project did not even build.
He said OK, let's just start sometime soon. I was thinking: how can you tell a team what TDD is if the build process is not even in place? Anyway, I did not pursue that discussion for now. The next thing on my plate was to interact with SOLR via the licensed product. After all the headache of the build, this looked like an exciting thing to work on. As soon as I started, I realized it was again an uphill task. The licensed product worked in a particular way when interacting with external services, and things had to be extended in a similar fashion. The next question in my mind was: does anyone know how the product does it? The answer was mostly silence, with a few hints coming along the way. So once again I started debugging in order to find out how the underlying technology does it. A problem in such situations is that you have a limited amount of source code at hand, and if you get stuck you must wait for a discussion with the vendor. It took me around 3 weeks to completely understand it and then develop a simple interaction with SOLR. After doing it twice, I realized the biggest problem that comes up is the team's inexperience with the underlying technology. If the team mostly consists of inexperienced developers, e.g. with at most around 3 years of experience, it plays havoc with the development. You always want to add the latest features the users demand, and to fit the latest tech stack with the best practices. But you should understand that knowledge and hands-on experience of the technology is a must. In the absence of either of the two, you may be able to create a product, but it will not stand up to the mark. It will do the things asked for, but the moment you want to extend it to do something else, you will definitely have a headache. Lack of understanding also makes the developers' lives very difficult, as they end up running around chasing basic issues in the technology used.
Development in any technology, open-source or licensed, always asks for some basic principles and guidelines. Most importantly, it asks for a complete understanding of the technology; otherwise it becomes quite hard to move around. Check your project: if you are missing these things, you are definitely in the SOUP!

Reference: In-experienced development = Bowl full of Soup from our JCG partner Rahul Sharma at the The road so far… blog.

GET / POST with RESTful Client API

There is plenty of material on the internet about how to work with a RESTful client API; these are the basics. But even though the subject seems trivial, there are hurdles, especially for beginners. In this post I will try to summarize my know-how of how I did this in real projects. I usually use Jersey (the reference implementation for building RESTful services); see e.g. my other post. In this post, I will call a real remote service from JSF beans. Let's write a session scoped bean RestClient.

package com.cc.metadata.jsf.controller.common;

import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;

import java.io.Serializable;
import javax.annotation.PostConstruct;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.SessionScoped;
import javax.faces.context.FacesContext;

/**
 * This class encapsulates some basic REST client API.
 */
@ManagedBean
@SessionScoped
public class RestClient implements Serializable {

    private transient Client client;

    public String SERVICE_BASE_URI;

    @PostConstruct
    protected void initialize() {
        FacesContext fc = FacesContext.getCurrentInstance();
        SERVICE_BASE_URI = fc.getExternalContext().getInitParameter("metadata.serviceBaseURI");

        client = Client.create();
    }

    public WebResource getWebResource(String relativeUrl) {
        if (client == null) {
            initialize();
        }

        return client.resource(SERVICE_BASE_URI + relativeUrl);
    }

    public ClientResponse clientGetResponse(String relativeUrl) {
        WebResource webResource = client.resource(SERVICE_BASE_URI + relativeUrl);
        return webResource.accept("application/json").get(ClientResponse.class);
    }
}

In this class we get the service base URI, which is specified (configured) in web.xml:

<context-param>
    <param-name>metadata.serviceBaseURI</param-name>
    <param-value>http://somehost/metadata/</param-value>
</context-param>

Furthermore, we wrote two methods to receive remote resources. We intend to receive resources in JSON format and convert them to Java objects.
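As an aside, the clientGetResponse() call above boils down to a plain HTTP GET with an Accept header. To see what Jersey is doing for you, here is a minimal plain-JDK sketch of the same request setup (a sketch only: the base URI is the placeholder value from web.xml, and no connection is actually opened):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class PlainJdkGet {

    // Placeholder standing in for the value configured in web.xml.
    static final String SERVICE_BASE_URI = "http://somehost/metadata/";

    // Prepares (but does not send) the same GET that clientGetResponse() issues.
    static HttpURLConnection prepareGet(String relativeUrl) throws Exception {
        URL url = new URL(SERVICE_BASE_URI + relativeUrl);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // Equivalent to webResource.accept("application/json")
        conn.setRequestProperty("Accept", "application/json");
        return conn;
    }

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = prepareGet("history");
        System.out.println(conn.getRequestMethod() + " " + conn.getURL());
        System.out.println("Accept: " + conn.getRequestProperty("Accept"));
    }
}
```

Jersey adds value on top of this (fluent API, entity mapping, multipart support), but the wire-level request is the same.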
The next bean demonstrates how to do this task for GET requests. The bean HistoryBean converts the received JSON to a Document object by using GsonConverter. The last two classes are not shown here (they don't matter): Document is a simple POJO and GsonConverter is a singleton which wraps Gson.

package com.cc.metadata.jsf.controller.history;

import com.cc.metadata.jsf.controller.common.RestClient;
import com.cc.metadata.jsf.util.GsonConverter;
import com.cc.metadata.model.Document;

import com.sun.jersey.api.client.ClientResponse;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ManagedProperty;
import javax.faces.bean.ViewScoped;

/**
 * Bean getting history of the last extracted documents.
 */
@ManagedBean
@ViewScoped
public class HistoryBean implements Serializable {

    @ManagedProperty(value = "#{restClient}")
    private RestClient restClient;

    private List<Document> documents;
    private String jsonHistory;

    public List<Document> getDocuments() {
        if (documents != null) {
            return documents;
        }

        ClientResponse response = restClient.clientGetResponse("history");

        if (response.getStatus() != 200) {
            throw new RuntimeException("Failed service call: HTTP error code: " + response.getStatus());
        }

        // get history as JSON
        jsonHistory = response.getEntity(String.class);

        // convert to Java array / list of Document instances
        Document[] docs = GsonConverter.getGson().fromJson(jsonHistory, Document[].class);
        documents = Arrays.asList(docs);

        return documents;
    }

    // getter / setter ...
}

The next bean demonstrates how to communicate with the remote service via POST. We intend to send the content of an uploaded file. I use PrimeFaces' FileUpload component, so the content can be extracted as an InputStream from the listener's parameter FileUploadEvent.
This is not important here; you can also use any other web framework to get the file content (also as a byte array). More important is to see how to deal with the RESTful client classes FormDataMultiPart and FormDataBodyPart.

package com.cc.metadata.jsf.controller.extract;

import com.cc.metadata.jsf.controller.common.RestClient;
import com.cc.metadata.jsf.util.GsonConverter;
import com.cc.metadata.model.Document;

import com.sun.jersey.api.client.ClientResponse;
import com.sun.jersey.api.client.WebResource;
import com.sun.jersey.core.header.FormDataContentDisposition;
import com.sun.jersey.multipart.FormDataBodyPart;
import com.sun.jersey.multipart.FormDataMultiPart;

import org.primefaces.event.FileUploadEvent;

import java.io.IOException;
import java.io.Serializable;
import javax.faces.application.FacesMessage;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ManagedProperty;
import javax.faces.bean.ViewScoped;
import javax.faces.context.FacesContext;

import javax.ws.rs.core.MediaType;

/**
 * Bean for extracting document properties (metadata).
 */
@ManagedBean
@ViewScoped
public class ExtractBean implements Serializable {

    @ManagedProperty(value = "#{restClient}")
    private RestClient restClient;

    private String path;

    public void handleFileUpload(FileUploadEvent event) throws IOException {
        String fileName = event.getFile().getFileName();

        FormDataMultiPart fdmp = new FormDataMultiPart();
        FormDataBodyPart fdbp = new FormDataBodyPart(
                FormDataContentDisposition.name("file").fileName(fileName).build(),
                event.getFile().getInputstream(),
                MediaType.APPLICATION_OCTET_STREAM_TYPE);
        fdmp.bodyPart(fdbp);

        WebResource resource = restClient.getWebResource("extract");
        ClientResponse response = resource.accept("application/json")
                .type(MediaType.MULTIPART_FORM_DATA)
                .post(ClientResponse.class, fdmp);

        if (response.getStatus() != 200) {
            throw new RuntimeException("Failed service call: HTTP error code: " + response.getStatus());
        }

        // get extracted document as JSON
        String jsonExtract = response.getEntity(String.class);

        // convert to Document instance
        Document doc = GsonConverter.getGson().fromJson(jsonExtract, Document.class);
        ...
    }

    // getter / setter ...
}

Last but not least, I would like to demonstrate how to send a GET request with a query string (URL parameters). The next method asks the remote service with a URL which looks like http://somehost/metadata/extract?file=<some file path>

public void extractFile() {
    WebResource resource = restClient.getWebResource("extract");
    ClientResponse response = resource.queryParam("file", path)
            .accept("application/json")
            .get(ClientResponse.class);

    if (response.getStatus() != 200) {
        throw new RuntimeException("Failed service call: HTTP error code: " + response.getStatus());
    }

    // get extracted document as JSON
    String jsonExtract = response.getEntity(String.class);

    // convert to Document instance
    Document doc = GsonConverter.getGson().fromJson(jsonExtract, Document.class);
    ...
}

Reference: GET / POST with RESTful Client API from our JCG partner Oleg Varaksin at the Thoughts on software development blog.
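One detail worth noting about the query-string GET above: queryParam() URL-encodes the value for you. If you ever build such URLs by hand, you must encode the parameter yourself. A small stdlib sketch (the host is the same placeholder as above, and the file path is an invented example):

```java
import java.net.URLEncoder;

public class QueryStringDemo {

    // Builds the kind of URL extractFile() targets:
    // http://somehost/metadata/extract?file=<some file path>
    static String buildUrl(String base, String name, String value) throws Exception {
        return base + "?" + name + "=" + URLEncoder.encode(value, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // Spaces and slashes in the file path must not appear raw in the URL.
        System.out.println(buildUrl("http://somehost/metadata/extract", "file", "docs/annual report.pdf"));
        // → http://somehost/metadata/extract?file=docs%2Fannual+report.pdf
    }
}
```

Forgetting this step is a classic source of intermittent 400/404 errors whenever a path contains spaces or special characters.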

Java Thread: retained memory analysis

This article provides a tutorial allowing you to determine how much Java heap space is retained by your active application Java threads, and where. A true case study from an Oracle Weblogic 10.0 production environment is presented in order for you to better understand the analysis process. We will also attempt to demonstrate that excessive garbage collection or Java heap space memory footprint problems are often not caused by true memory leaks, but rather by thread execution patterns and a high number of short lived objects.

Background

As you may have seen from my past JVM overview article, Java threads are part of the JVM fundamentals. Your Java heap space memory footprint is driven not only by static and long lived objects, but also by short lived objects. OutOfMemoryError problems are often wrongly assumed to be due to memory leaks. We often overlook faulty thread execution patterns and the short lived objects they "retain" on the Java heap until their execution completes. In this problematic scenario:

- Your "expected" short lived / stateless application objects (XML, JSON data payloads etc.) are retained by the threads for too long (thread lock contention, huge data payloads, slow response time from a remote system etc.)
- Eventually such short lived objects get promoted to the long lived object space, e.g. the OldGen/tenured space, by the garbage collector
- As a side effect, this causes the OldGen space to fill up rapidly, increasing the frequency of full GCs (major collections)
- Depending on the severity of the situation, this can lead to excessive garbage collection, increased JVM pause time and ultimately OutOfMemoryError: Java heap space
- Your application is now down, and you are puzzled about what is going on
- Finally, you are thinking of either increasing the Java heap or looking for memory leaks... are you really on the right track?

In the above scenario, you need to look at the thread execution patterns and determine how much memory each of them retains at a given time.

OK, I get the picture, but what about the thread stack size?

It is very important to avoid any confusion between thread stack size and Java heap memory retention. The thread stack size is a special memory space used by the JVM to store each method call. When a thread calls method A, it "pushes" the call onto the stack. If method A calls method B, that call is also pushed onto the stack. Once a method execution completes, its call is "popped" off the stack. The Java objects created as a result of such thread method calls are allocated on the Java heap space. Increasing the thread stack size will definitely not have any effect here. Tuning the thread stack size is normally only required when dealing with java.lang.StackOverflowError or OutOfMemoryError: unable to create new native thread problems.

Case study and problem context

The following analysis is based on a true production problem we investigated recently.

- Severe performance degradation was observed in a Weblogic 10.0 production environment following some changes to the user web interface (using Google Web Toolkit and JSON as the data payload)
- Initial analysis revealed several occurrences of OutOfMemoryError: Java heap space errors along with excessive garbage collection.
- Java heap dump files were generated automatically (-XX:+HeapDumpOnOutOfMemoryError) following OOM events
- Analysis of the verbose:gc logs confirmed full depletion of the 32-bit HotSpot JVM OldGen space (1 GB capacity)
- Thread dump snapshots were also generated before and during the problem
- The only problem mitigation available at that time was to restart the affected Weblogic server when the problem was observed
- A rollback of the changes was eventually performed, which did resolve the situation

The team first suspected a memory leak problem from the newly introduced code.

Thread dump analysis: looking for suspects...

The first analysis step was to analyze the generated thread dump data. A thread dump will often show you the culprit threads allocating memory on the Java heap. It will also reveal any hogging or stuck thread attempting to send or receive a data payload from a remote system. The first pattern we noticed was a good correlation between OOM events and STUCK threads observed in the Weblogic managed servers (JVM processes). Below is the primary thread pattern we found:

<10-Dec-2012 1:27:59 o'clock PM EST> <Error> <BEA-000337>
<[STUCK] ExecuteThread: '22' for queue: 'weblogic.kernel.Default (self-tuning)'
has been busy for '672' seconds working on the request
which is more than the configured time of '600' seconds.

As you can see, the above thread appears to be STUCK, or taking a very long time to read and receive the JSON response from the remote server. Once we found that pattern, the next step was to correlate this finding with the JVM heap dump analysis and determine how much memory these stuck threads were taking from the Java heap.

Heap dump analysis: retained objects exposed!

The Java heap dump analysis was performed using MAT. We will now list the analysis steps which allowed us to pinpoint the retained memory size and source.

1. Load the HotSpot JVM heap dump.

2. Select the HISTOGRAM view and filter by "ExecuteThread".

* ExecuteThread is the Java class used by the Weblogic kernel for thread creation & execution *

As you can see, this view was quite revealing. We can see a total of 210 Weblogic threads created. The total retained memory footprint of these threads is 806 MB. This is quite significant for a 32-bit JVM process with a 1 GB OldGen space. This view alone tells us that the core of the problem and the memory retention originate from the threads themselves.

3. Deep dive into the thread memory footprint analysis.

The next step was to deep dive into the thread memory retention. To do this, simply right click on the ExecuteThread class and select: List objects > with outgoing references. As you can see, we were able to correlate the STUCK threads from the thread dump analysis with the high memory retention seen in the heap dump analysis. The finding was quite surprising.

4. Thread Java local variable identification.

The final analysis step required us to expand a few thread samples and understand the primary source of memory retention. This last step revealed a huge JSON response data payload as the root cause. That pattern was also exposed earlier via the thread dump analysis, where we found a few threads taking a very long time to read & receive the JSON response; a clear symptom of a huge data payload footprint. It is crucial to note that short lived objects created via local method variables will show up in the heap dump analysis. However, some of those will only be visible from their parent threads, since they are not referenced by other objects, as in this case. You will also need to analyze the thread stack trace in order to identify the true caller, followed by a code review to confirm the root cause. Following this finding, our delivery team was able to determine that the recent faulty JSON code changes were generating, under some scenarios, huge JSON data payloads of up to 45 MB+.
Given that this environment uses a 32-bit JVM with only 1 GB of OldGen space, you can understand that only a few such threads were enough to trigger severe performance degradation. This case study clearly shows the importance of proper capacity planning and Java heap analysis, including the memory retained by your active application & Java EE container threads.

Learning is experience. Everything else is just information.

I hope this article has helped you understand how you can pinpoint the Java heap memory footprint retained by your active threads by combining thread dump and heap dump analysis. This article will remain just words if you don't experiment, so I highly recommend that you take some time to learn this analysis process yourself for your application(s).

Reference: Java Thread: retained memory analysis from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog.
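To make the stack-versus-heap distinction discussed above concrete, here is a small self-contained sketch: deep recursion exhausts a single thread's stack (StackOverflowError), while objects created inside a method live on the shared heap and are retained for as long as the thread holds a reference to them:

```java
public class StackVsHeap {

    static int depth = 0;

    // Each call pushes a new frame onto this thread's stack.
    static void recurse() {
        depth++;
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // The stack is exhausted; the exact depth depends on -Xss and the JVM.
            System.out.println("Stack exhausted after " + depth + " frames");
        }

        // By contrast, this array is allocated on the Java heap, not the stack.
        // It stays "retained" for as long as this local variable (and thus the
        // running thread) holds it - exactly the pattern seen in the case study.
        byte[] payload = new byte[4 * 1024 * 1024];
        System.out.println("Allocated " + payload.length + " bytes on the heap");
    }
}
```

Increasing -Xss changes only how deep recurse() can go; it does nothing for the heap retention of payload.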

Top 10 JavaCodeGeeks posts for 2012

Following tradition, we are once again compiling the top Java Code Geeks posts for the year that is ending. As with the Top 10 JavaCodeGeeks posts for 2010 and the Top 10 JavaCodeGeeks posts for 2011, we have created a compilation of the most popular posts of this year. The ranking was performed based on the absolute number of page views per post, not necessarily unique, and includes only articles published in 2012. So, let's see the top posts for 2012 in ascending order.

10) 4 Warning Signs that Agile Is Declining
This article spread like fire and spurred a lot of conversations. The author provides some signs of why the agile methodology is on the decline and suggests that Agile, as used nowadays, doesn't deliver on its promise.

9) 20 Database Design Best Practices
As the title suggests, this is a list of common database design best practices, ranging from naming conventions and data type selection to security issues and documentation.

8) XML parsing using SaxParser with complete code
A detailed tutorial on how to use the SAX parser, including creating the model objects and parsing a sample XML document using callback methods. Also, check out our article Approaches to XML – Part 2 – What about SAX?.

7) JVM: How to analyze Thread Dump
This article is a detailed guide on how to analyze a JVM thread dump and pinpoint the root cause of any problems. As the author says, thread dump analysis is a very critical skill to master for any individual involved in Java EE production support.

6) Why I will use Java EE instead of Spring in new Enterprise Java Projects in 2012
Another article that generated a lot of heated arguments: the eternal fight between Java EE and the Spring framework. The author lists the advantages of both approaches and explains why he opted for Java EE.

5) Ajax with Spring MVC 3 using Annotations and JQuery
Speaking of Spring, this is a detailed tutorial on how to implement Ajax functionality with Spring MVC and jQuery. These two technologies can be combined to achieve a smooth user experience in web apps.

4) Intellij vs. Eclipse
Another battle, this time the battle of the IDEs! All developers have their favorite IDE, and this article explores the differences between two of the most popular in the Java world, namely IntelliJ and Eclipse. On the same note, check out What's Cool In IntelliJ IDEA Part I and Eclipse Shortcuts for Increased Productivity.

3) 20 Kick-ass programming quotes
Addressing the funnier side of software development, this article is a compilation of great programming quotes from famous programmers, computer scientists and savvy entrepreneurs of our time. Check it out, you will find some gems in there!

2) Top 10 Things Every Software Engineer Should Know
This article discusses the top things that every software engineer should be familiar with, adopting an all-around approach. By this I mean that technical and business know-how skills, along with soft skills, are discussed. A must read!

1) JSF 2, PrimeFaces 3, Spring 3 & Hibernate 4 Integration Project
And finally, the most popular Java Code Geeks post for 2012 is this tutorial combining a number of enterprise Java technologies such as JSF, PrimeFaces, Spring and Hibernate. Honestly, this was a bit of a surprise to me, but I think it shows how big the adoption of these technologies by the Java developer world is.

That's all folks, our top posts for 2012. I hope you have enjoyed our articles during the past year and that you will continue to provide your support and give us your love in the year to come. Happy new year everyone! From the whole Java Code Geeks team, our best wishes!

Be well,
Ilias

Easier Multi-Field Validation with JSF 2.0

One of the most frequent needs when developing application forms is multi-field validation (or cross-field validation, but I'm not using this term because when I put it into Google I actually got some post-war pictures). I'm talking about situations where we need to compare whether an initial date is earlier than an end date, or whether one value is lower than another. Isn't that an obvious feature for every business-oriented framework? Not really. Unfortunately, the JSF specification doesn't support it by default. Therefore, up to its latest production release (JSR 314 – JSF 2.1), JSF has not offered an out-of-the-box multi-field validation feature. We can probably hope for something coming in JSF 2.2, since JSR 344 mentions "multi-field validation". Meanwhile, developers have used their fruitful creativity to implement their own solutions. You can find plenty of working alternatives at Stackoverflow.com; people creating their own components; frameworks built on top of Java EE trying to cover this feature; and many other cases. I didn't like any solution I found. Some are complex, others are not so elegant. So I decided to be creative as well and try a simpler solution, easy to understand and to change when the time for refactoring comes. That doesn't mean I'm proposing something better than the other proposals; I'm just proposing something simpler. In the following example, I check whether an allocated budget is smaller than a budget limit. If not, a message is shown to the user. The example considers only two fields, but it can scale to as many fields as you wish.

Step 1: create an attribute in the managed bean for each field to be validated

The attributes below are used exclusively in the multi-field validation:

private BigDecimal validationAllocatedBudget;
private BigDecimal validationBudgetLimit;

In this example, I'm coding inside a class named MBean, annotated with @ManagedBean and @RequestScoped.
Step 2: create a validation method in the same managed bean for each field

This solution uses validation methods implemented in the managed bean instead of implementations of the interface javax.faces.validator.Validator. You can give validation methods any name, as long as you define the three standard parameters: the FacesContext, the UIComponent, and an Object representing the value entered in the field. Only the value is useful for our validation. See the validation methods:

public void validateAllocatedBudget(FacesContext context, UIComponent component, Object value) {
    this.validationAllocatedBudget = (BigDecimal) value;
}

public void validateBudgetLimit(FacesContext context, UIComponent component, Object value) {
    this.validationBudgetLimit = (BigDecimal) value;
    if (this.validationBudgetLimit.compareTo(this.validationAllocatedBudget) < 0) {
        throw new ValidatorException(new FacesMessage("Invalid allocated budget!"));
    }
}

The method validateAllocatedBudget doesn't actually validate the allocated budget. It simply sets the attribute validationAllocatedBudget so that its value can be used afterwards. This works because the validation methods are called in the same sequence as they are declared in the JSF code. So, you can create a simple method like that for each field involved in the validation. The effective validation occurs in the method validateBudgetLimit, which is the last validation method called from the JSF file, and thus the last one to execute. It's a good idea to declare the attributes and validation methods in the same order as the fields in the form. The order doesn't interfere with the functioning of the algorithm, but it helps in understanding the logic. On the other hand, the order of the calls in the JSF file is important.

Step 3: use the validator attribute to reference the validation method

The methods described above are called from the fields below. Remember that the attributes and methods were implemented in the class MBean.
<h:outputLabel for="allocBudget" value="Allocated Budget"/>
<h:inputText id="allocBudget" label="Allocated Budget"
             value="#{mBean.operation.allocatedBudget}"
             validator="#{mBean.validateAllocatedBudget}"/>

<h:outputLabel for="budgetLimit" value="Budget Limit"/>
<h:inputText id="budgetLimit" label="Budget Limit"
             value="#{mBean.operation.budgetLimit}"
             validator="#{mBean.validateBudgetLimit}"/>

Reference: Easier Multi-Field Validation with JSF 2.0 from our JCG partner Hildeberto Mendonca at the Hildeberto's Blog blog.
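The comparison at the heart of validateBudgetLimit is easy to get backwards, so here is the cross-field rule isolated as a plain-Java sketch (BigDecimal.compareTo returns a negative value when the receiver is smaller than the argument):

```java
import java.math.BigDecimal;

public class BudgetRule {

    // Mirrors the check in validateBudgetLimit: the form is invalid
    // when the budget limit is smaller than the allocated budget.
    static boolean isValid(BigDecimal allocatedBudget, BigDecimal budgetLimit) {
        return budgetLimit.compareTo(allocatedBudget) >= 0;
    }

    public static void main(String[] args) {
        System.out.println(isValid(new BigDecimal("100.00"), new BigDecimal("150.00"))); // true
        System.out.println(isValid(new BigDecimal("200.00"), new BigDecimal("150.00"))); // false
    }
}
```

Note the use of compareTo rather than equals: equals on BigDecimal also compares scale, so it is the wrong tool for ordering checks like this one.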

The Differences Between Test-First Programming and Test-Driven Development

There seems to be some confusion between Test-First Programming and Test-Driven Development (TDD). This post explains that merely writing the tests before the code doesn't necessarily make it TDD.

Similarities Between Test-First Programming and Test-Driven Development

It's not hard to see why people would confuse the two, since they have many things in common. My classification of tests distinguishes six dimensions: who, what, when, where, why, and how. Test-First Programming and Test-Driven Development score the same in five of those six dimensions: they are both automated (how), functional (what), programmer (who) tests at the unit level (where), written before the code (when). The only difference is in why they are written.

Differences Between Test-First Programming and Test-Driven Development

Test-First Programming mandates that tests be written before the code, so that the code will always be testable. This is more efficient than having to change already written code to make it testable. Test-First Programming doesn't say anything about other activities in the development cycle, like requirements analysis and design. This is a big difference from Test-Driven Development (TDD), since in TDD, the tests drive the design. Let's take a detailed look at the TDD process of Red/Green/Refactor to find out exactly how it differs from Test-First Programming.

Red

In the first TDD phase we write a test. Since there is no code yet to make the test pass, this test will fail. Unit testing frameworks like JUnit show the result in red to indicate failure. In both Test-First Programming and Test-Driven Development, we use this phase to record a requirement as a test. TDD, however, goes a step further: we also explicitly design the client API. Test-First Programming is silent on how and when we should do that.

Green

In the next phase, we write code to make the test pass. Unit testing frameworks show passing tests in green.
In Test-Driven Development, we always write the simplest possible code that makes the test pass. This allows us to keep our options open and evolve the design. We may evolve our code using simple transformations to increase the complexity of the code enough to satisfy the requirements that are expressed in the tests. Test-First Programming is silent on what sort of code you write in this phase and how you do it, as long as the test passes.

Refactor

In the final TDD phase, the code is refactored to improve the design of the implementation. This phase is completely absent in Test-First Programming.

Summary of Differences

So we’ve uncovered two differences that distinguish Test-First Programming from Test-Driven Development:

Test-Driven Development uses the Red phase to design the client API. Test-First Programming is silent on when and how you arrive at a good client API.

Test-Driven Development splits the coding phase into two compared to Test-First Programming. In the first sub-phase (Green), the focus is on meeting the requirements. In the second sub-phase (Refactor), the focus is on creating a good design.

I think there is a lot of value in the second point. Many developers focus too much on getting the requirements implemented and forget to clean up their code. The result is an accumulation of technical debt that will slow development down over time. TDD also splits the design activity into two. First we design the external face of the code, i.e. the API. Then we design the internal organization of the code. This is a useful distinction as well, because the heuristics you would use to tell a good API from a bad one are different from those for good internal design.

Try Before You Buy

All in all I think Test-Driven Development provides sufficient value over Test-First Programming to give it a try. All new things are hard, however, so be sure to practice TDD before you start applying it in the wild.
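As a hypothetical sketch of one Red/Green cycle using the Roman Numerals Kata (the class and method names here are invented for illustration; in practice you would use a framework like JUnit, but a plain main method keeps the example self-contained):

```java
// One Red/Green cycle from the Roman Numerals Kata, as a plain-Java sketch.

class RomanNumerals {
    // Green: the simplest possible code that makes the first test pass.
    // Later tests (2 -> "II", 4 -> "IV", ...) will force the design to evolve.
    static String convert(int arabic) {
        return "I";
    }
}

public class TddCycleSketch {
    public static void main(String[] args) {
        // Red: this check is written first, before RomanNumerals exists,
        // so the very first run does not even compile, let alone pass.
        if (!"I".equals(RomanNumerals.convert(1))) {
            throw new AssertionError("expected 1 to convert to I");
        }
        System.out.println("green: 1 -> I");
    }
}
```

The hard-coded return value looks silly, but that is the point of the Green phase: meet exactly the requirement the test expresses, and let the next failing test drive the design forward.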
There are numerous katas that can help you with that, like the Roman Numerals Kata.   Reference: The Differences Between Test-First Programming and Test-Driven Development from our JCG partner Remon Sinnema at the Secure Software Development blog. ...

Java – The 2012 Review and Future Predictions

This post will focus on the events big and small that occurred in 2012 and also take a look at some future predictions for 2013. Some of the predictions will be honest guesses, others… well, let’s just say that my Diabolical side will have taken over. So without further ado, let’s look at the year that was 2012 for Java…

2012 – A Year in Review

2012 was a rocking year for Java, the JVM and the community. James Governor (RedMonk analyst) stated that ‘2012 was the dawning of a 2nd age for Java’.

Java enters the cloud (for real this time)

Java/JVM based cloud offerings became a serious reality in 2012 with a host of new PaaS and IaaS offerings. CloudBees, Jelastic, Heroku, Joyent and Oracle are just five of the large number of offerings out there now. What does that mean for you as a developer? Well, it means lots of choice and the ability to try out this space very cheaply. I highly recommend that you try some of these providers out over the holidays (it takes minutes to set up a free account) and see what all of the fuss is about. Counter to this, however, is a lack of standardisation in this space, and although JEE8 promises to change this (assuming the vendors get on board), for the next few years you’ll need to be careful about being locked into a particular platform. If you’re a bit more serious about having agnostic services/code running on the various offerings then I can recommend looking at the jClouds API to assist you. It’s fair to say that many of the offerings are still feeling their way in terms of getting the most out of the JVM. In particular multi-tenancy is an issue, as is garbage collection and performance in a virtualised environment. Companies such as Waratek and jClarity (Disclaimer: I’m their CTO) now offer solutions to alleviate those gaps.

The Java community thrives

The community continues to thrive despite many mainstream tech media reports of ‘developers leaving the Java platform’ or ‘Java is dead’.
There are more Java User Groups (JUGs) than ever before, consisting of ~400,000 developers worldwide. Notably, one of them, the London Java Community, won several awards including the Duke’s Choice award and JCP Member of the Year (along with SouJava – the major Brazilian JUG). The conference circuit is bursting at the seams with large, sold-out-in-advance, world-class Java conferences such as JFokus, Devoxx and of course JavaOne. In addition, the host of regional conferences that often pack in an audience of over 1,000 people all continued to do well. Oracle’s Java Magazine was launched and has grown to over 100,000 subscribers. Stalwarts like JAXenter, CodeRanch and the Java Posse continue to grow in audience size.

OpenJDK

Further OpenJDK reforms happened over 2012 and a new scorecard is now in place for the wider community to give feedback on governance, openness and transparency. 2012 also saw a record number of individuals and organisations joining OpenJDK. In particular, the port to the ARM processor and support for running Java on graphics cards (Project Sumatra) were highlights this year.

Java Community Process (JCP)

The Java Community Process (JCP), Java’s standards body, also continued its revival with record numbers of new sign-ups and a hotly contested election. As well as dealing with the important business of trademarks, IP and licensing for Java, a re-focus on the technical aspects of Java Specification Requests (JSRs) occurred. In particular the new Adopt a JSR programme is being strongly supported by the JCP.

Java and the JVM

The JVM continues to improve rapidly through OpenJDK – the number of Java Enhancement Proposals (JEPs) going into Java 8 is enormous. Jigsaw dropping out was disappointing, but given the lack of broader vendor support and the vast amount of technical work required, it was the correct decision.
JEE / Spring

JEE7 is moving along nicely (and will be out soon), bringing Java developers a standard way to deal with the modern web (JSON, WebSockets, etc.). Of course many developers are already using the SpringSource suite of APIs, but it’s good to see advancement in the underlying specs.

Rapid Web Development

Java/JVM based rapid web development frameworks are finally gaining the recognition they deserve. Frameworks like JBoss’s Seam, Spring Roo, Grails, Play etc. all give Java developers parity with the Rails and Django crowd.

Mechanical Sympathy

A major focus of 2012 was on Mechanical Sympathy (as coined by Martin Thompson in his blog). The tide has turned, and we now have to contend with multi-core machines and virtualised OSs. Java developers have had to start thinking about how Java and the JVM interact with the underlying platform and hardware. Performance companies like jClarity are building tooling to help developers understand this complex space, but it certainly doesn’t hurt to get those hardware manuals off the shelf again!

2013 – Future predictions

It’s always fun to gaze into the crystal ball, so here are my predictions for 2013!

Java 8 will get delivered on time

Java 8 with Nashorn, Lambda, plus a port to the ARM processor will open up loads of new opportunities for developers working on the leading edge of web and mobile tech. I anticipate rapid adoption of Java 8 (much faster than 7). However, the lack of JVMs present on iOS and Android devices will continue to curtail adoption there.

Commercial Java in the cloud

2013 will be the year of commercial Java/JVM in the cloud – many of the kinks will get ironed out with regards to multi-tenancy and memory management, and a rich SaaS ecosystem will start to form. The organisations that enable enterprises to get their in-house Java apps out onto the cloud will be the big commercial winners.
We’ll also see some consolidation in this space as the larger vendors snap up smaller ones that have proven technology.

OpenJDK

OpenJDK will continue to truly open up, with a public issue tracker based on JIRA, a distributed build farm available to developers and a far superior code review and patch system put in place. Oracle, IBM and other major vendors have also backed initiatives to bring their in-house test suites out into the open, donating them to the project for the good of all.

JVM languages and polyglot

There will be a resurgence in Groovy thanks to its new static compilation capability and improved IDE tooling. Grails in particular will look like an even more attractive rapid development framework, as it will offer decent performance for midrange web apps. Scala will continue to be hyped but will only be used successfully by small, focused teams. Clojure will continue to be popular in small niche areas. Java will still outgrow them all in terms of real numbers and percentage growth. A random prediction is that JRuby may well entice over Rails developers who are looking to take advantage of the JVM’s performance and scalability.   Reference: Java – The 2012 Review and Future Predictions from our JCG partner Martijn Verburg at the Java Advent Calendar blog. ...

Waiting for the right moment – in integration testing

When you have to test multi-threaded programs, there is always the need to wait until the system arrives at a particular state, at which point the test can verify that the proper state has been reached. The usual way to do this is to insert a ‘probe’ into the system which will signal a synchronization primitive (like a Semaphore), and the test waits until the semaphore gets signaled or a timeout passes. (Two things which you should never do – but which are frequent mistakes – are to insert sleeps into your code, because they slow you down and are fragile, and to use the Object.wait method without looping around it, because you might get spurious wakeups which will result in spurious, hard-to-diagnose and very frustrating test failures.) This is all nice and good (although a little verbose – at least until the Java 8 lambdas arrive), but what if the second thread calls a third thread and doesn’t wait for it to finish, but in the test we want to wait for it? A concrete example would be an integration test which verifies that a system composed of a client, which communicates through a messaging middleware with a datagrid, properly writes the data to the datagrid. Of course we will use a mock middleware and a mock datagrid, so the startup/shutdown and processing will be very fast, but they would still be asynchronous (suppose that we can’t make it synchronous because the production one isn’t and the code is written such that it relies on this fact). The situation is described visually in the sequence graph below: we have the test running on T0 and we would like for it to wait until the task on T3 has finished before it checks the state the system arrived at. We can achieve this using a small modification to our execution framework (which is probably some kind of Executor).
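The basic probe-and-wait pattern described above can be sketched as follows (a minimal, self-contained illustration; the class and variable names are invented, and the "system under test" is simulated by a plain thread):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// The system under test signals a semaphore (the 'probe') when it reaches
// the interesting state; the test waits with a timeout instead of sleeping.
public class ProbeSketch {
    public static void main(String[] args) throws InterruptedException {
        Semaphore stateReached = new Semaphore(0);

        // Simulated system under test: signals the probe from another thread.
        new Thread(stateReached::release).start();

        // The test blocks until signaled, or fails after the timeout.
        if (!stateReached.tryAcquire(5, TimeUnit.SECONDS)) {
            throw new AssertionError("system did not reach the expected state in time");
        }
        System.out.println("state reached");
    }
}
```

This works when the test knows exactly which event to wait for; the rest of the article deals with the harder case where the interesting work happens on a thread nobody waits for.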
Given the following interface:

public interface ActivityCollector {
    void before();
    void after();
}

We would call before() at the moment a task is enqueued for execution and after() after it has executed (these will usually occur on different threads). If we now consider that before increments a counter and after decrements it, we can just wait for the counter to become zero (with proper synchronization), at which point we know that all the tasks were processed by our system. You can find an Executor which implements this here. In production you can of course use an implementation of the interface which does nothing, thus removing any performance overhead. Now let’s look at the interface which defines how we wait for the ‘processed’ condition:

interface ActivityWatcher {
    void await(long time, TimeUnit timeUnit);
}

Two personal design choices used here were to only provide a way to wait for a specific time and no longer (if the test takes too long, that’s probably a performance regression one needs to take a look at) and to use unchecked exceptions to make testing code shorter. A final feature would be to collect exceptions during the execution of the tasks and abort immediately if there is an exception somewhere, rather than timing out. This means that we modify our interface as follows:

public interface ActivityCollector {
    void before();
    void after();
    void collectException(Throwable t);
}

And the code wrapping the execution would be something like the following:

try {
    command.run();
} catch (Throwable t) {
    activityCollector.collectException(t);
    throw t;
} finally {
    activityCollector.after();
}

You can find an implementation of ActivityWatcher/ActivityCollector here (they are quite linked, thus the one class implementing them both). Happy testing!

A couple of caveats: This requires some modification to your production code, so it might not be the best solution (for example you can try creating synchronous mocks of your subsystems and do testing that way).
This solution is not well suited for cases where Timers are involved, because there will be times when ‘no tasks are waiting’ but in fact a task is waiting in a timer. You can work around this by using a custom timer which calls ‘before’ when scheduling and ‘after’ at the finish of the task. The same issue can come up if you are using network communication for more authenticity (even if it is inside the same process): there will be a moment when no tasks are scheduled because they are serialized in the OS’s network buffer. The ActivityCollector is a single point of synchronization. As such it might decrease performance and it might hide concurrency bugs. There are more complicated ways to implement it which avoid some of the synchronization overhead (like using a ConcurrentLinkedQueue), but you can’t eliminate it completely.

PS. This example is based on an IBM article I can’t seem to find (dear lazyweb: if somebody finds it, please leave a comment – before/after were called tick/tock in it) as well as work by my colleagues. My only role was to write it up and synthesize it.   Reference: Waiting for the right moment – in integration testing from our JCG partner Attila-Mihaly Balazs at the Java Advent Calendar blog. ...
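The counting before()/after() idea described in the article above can be sketched like this. This is a minimal illustration, not the author's linked implementation: the class name is invented, and it uses a plain intrinsic lock with a wait loop (guarding against the spurious wakeups the article warns about):

```java
import java.util.concurrent.TimeUnit;

// A counting tracker: before() increments a pending-task counter, after()
// decrements it, and await() blocks until the counter reaches zero or the
// timeout expires, throwing an unchecked exception on timeout.
public class CountingActivityTracker {
    private final Object lock = new Object();
    private int pending = 0;

    public void before() {
        synchronized (lock) {
            pending++;
        }
    }

    public void after() {
        synchronized (lock) {
            pending--;
            if (pending == 0) {
                lock.notifyAll();
            }
        }
    }

    // Loops around wait() so a spurious wakeup just re-checks the condition.
    public void await(long time, TimeUnit unit) {
        long deadline = System.nanoTime() + unit.toNanos(time);
        synchronized (lock) {
            while (pending > 0) {
                long remaining = deadline - System.nanoTime();
                if (remaining <= 0) {
                    throw new IllegalStateException(pending + " tasks still pending");
                }
                try {
                    TimeUnit.NANOSECONDS.timedWait(lock, remaining);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new IllegalStateException("interrupted while waiting", e);
                }
            }
        }
    }
}
```

In a test you would wrap task submission and completion with before()/after() (e.g. in a decorating Executor), then call await() once everything has been kicked off; a production deployment would swap in a no-op implementation.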

Some Advice for New University Graduates: Dreams of Developing Software

I have taken a step back in an attempt to put the year 2012 in focus. As always, it started with great hopes and there were highs, and it seemed for a moment that working life was back on track, but lurking in the background was an impending disaster. The problems were not fixed, I can see them now, but that is for another blog post. In this post, I went with another angle on working life. I pondered for a moment: what on earth would I tell myself as the twenty-something university graduate? What advice would I give to another university graduate now?

Tools

The tools for writing applications are definitely here. At the end of 2012 there is an abundance compared to the slow pre-Internet age of 1992. You have lots of opportunities in programming languages such as Java. It runs on a virtual machine, and you can forget about dreams of C++, being a guru engineer and purely object oriented development. The landscape is changing. I would say learn something about functional programming languages. You have to learn version control systems such as Subversion, Mercurial and Git. Take advantage of the new technologies for learning, videos and online courses, broadband Internet, and ways to amplify your knowledge. Remember: technology is not the answer to every human problem. There is such a breadth of knowledge waiting and so little time to learn it all. Choose wisely. I would warn myself about the dangers of social networking, and suggest you do the very same. Privacy is dear. Code and ideas are dear. Keep some part of life in offline mode for your own security, if nothing else. Other people have been known to take ideas without credit and attribution. Only put on the Internet the stuff you are happy to let be public knowledge; find the balance between share-nothing and share-almost-anything for yourself. Be wary of code that you do put out there on the Internet. In my opinion, in the future it could be held or used against you.
If you are showcasing your wares, make sure it is the best work that you can do; don’t be shoddy or lazy about programming. Being part of an open source framework as a committer is a good thing: it will open doors, and you get to meet, electronically, people on the other side of the planet. You might even be lucky enough to meet the other committers at a conference or visit them on holiday; maybe they may come to you. Life is better with the people you know, who know you, and with whom you therefore have a bond. Because there are too many tools, frameworks and programming languages out there, I would advise myself to choose a special interest wisely, with a view to what is going to benefit my career in the long term. Nobody can be master of all trades in IT. Our profession is too long in the tooth for that now. If you want to be a developer, be that; a database girl, be that; a security dude, then be that. Choose something you enjoy, not the thing that your mother and father tell you that you must do. Listen to your beating heart first, before listening to the opinion of other people. Develop that gut. The gut feeling, the little voice in your head, the spirit that comes when you feel excitement or sense an ill wind, whatever it is, because it is true, it is the one statement of fact, not a YAGNI: you are going to need it. You must live and work with other people. If the code is an experiment and is just for fun, then advertise it as that, with a definite label. Code is also nothing without people. Unfortunately code is the easy part; it is the dealing with people, the communication, the handling of information between groups of folk, the social aspects, which are the hard parts.

Elitism

Surprise, surprise: be warned that elitism is still in effect. Nothing has changed since the early 1990s in what is legalised prejudice against university graduates.
Employers are allowed to specify on job advertisements that they are only interested in a certain set of candidates from so-called red brick universities [2], even though this smacks in the face of diversity and fair entrance. There are employers wanting the so-called best software developers out of university or higher education college; if you have less than a second-class first level degree (2:1), your application might be tossed straight in the bin [1]. In my day applications were sent by post; now it is quite easy to discard a carefully crafted Word or PDF document into the digital waste receptacle in the sky. Yet it is common knowledge, or it should be, in the IT profession that a certain Mr Bill Gates, of Microsoft, did not even graduate with a degree. My advice is the same now as it was then, Keep On Moving [10]: there are always alternatives to elitist organisations, which may well go out of business sooner rather than later. I learnt very quickly there is always one choice, colloquially known as The Law of Two Feet [9]. All you have to do is build on the network that you started whilst in university. The teacher or lecturer you did the best project for, the mate that you had the best times with at the pub, even the gym is a place to find and discuss opportunity. If you have impressed a friend or colleague, and if they are really your friend and know you personally, then you are more likely to get opportunities for work that are more suited to your skills.

Job Shock

During the early 1990s the world was recovering from a previous financial crisis, albeit a smaller one compared to the massive crunching meltdown that has been running now for five years, since 2007. For the record, I am also grating my teeth in frustration with you. I feel. I am a human being too. The shocking stories of the job searches of recent university graduates have left me cold.
There was a time before the monetary union of Europe and the Euro, when each country in the European Union had its own currency, like the Deutschmark, the Franc, the Peseta and the Lira, and therefore its own national bank in control of monetary policy. Back then there was the possibility, and the economic reality, of at least Germany still being the powerhouse of Europe and the world while Britain was in the doldrums. Indeed, Germany was able to survive the recession of the early 1990s; I know because I was living there for a time. Since the turn of the century, the sudden explosion of the Internet, the reliance on better communication links, the rise of common markets, radical improvements in technology and better efficiencies in trading have meant we have a global economy. The door has closed forever on hopping over the English Channel to find lucrative work, even if the language barriers were not there at all. A recession in Germany most certainly means a downturn in Britain and Ireland. For university graduates, this means that the job search is much harder than 15 or 20 years ago. The competition is fierce; the depression is deep. Some graduates wonder why they invested their formative years in getting a university paper only to find themselves flipping hamburgers at McDonald’s or desperately applying to become a retail shop assistant at the local Debenhams or Next fashion store [3][4]. The Job-Shock of 2012 is clearly worse than 1992.

Eric, Newcastle: “I have just passed my 1 year anniversary from my master’s degree. There’s nothing to celebrate because it’s also the same time I started looking for jobs and 1 year on, I have had no success. I have been to nearly a dozen interviews to progress on my career to be an engineer and have had no success.”

Laura, London: “I completely understand what you guys mean.
It is so hard to keep motivated when you keep getting told, “Sorry, you haven’t got enough experience” and then you say “but that’s why I want a job!!” With the two years from finishing my degree to starting my graduate job I gained experience and continued to apply. Getting experience isn’t easy though because quite often you need some experience to get experience. My advice is to plan what skills you want to show experience in then make a plan from there, starting with smaller experience and aiming for the bigger stuff when you have something in hand.”

We are losing young and gifted people across a wide cross-section of disciplines [5]. Some are giving up on their dreams of having a career. Sadly, some people who thought about a career in information technology, software development, programming or designing applications may already be saying to themselves: it is too long and hard to achieve the result I dreamed of; do not think to apply, because it never happens to people just like me.

Continuous Reinvention

I am here to tell you that if you want to get a programming job in information technology, then it is possible. Don’t give up on IT just yet. The roles are there, if you keep looking for them. It is quite similar to dating. Two people will never meet each other if they stop searching for the other. If either one of them gives up, then the cause of true love is lost. But then, how do I find a job? A better question is: how do I find a job that I will really enjoy? The best and ideal way to do this, I think, is to find the company and group of employers that is enthusiastic, altruistic and cultured. In other words, the company must have a distinct lack of dysfunction. But you as a graduate candidate have probably already found that to be true; the suspicions you most likely formed on the job hunt are quite correct, I am afraid.
You are absolutely correct to note that every company that advertises, “We hire only the best candidates”, is logically not “the best”. Learn to read those job specifications and, as some would say, read between the lines. Ask some searching questions: what happened to last year’s recruitment? As an addendum to the infamous and standard question: how did this job become vacant? Start networking while your career is in its infancy. Keep your ear to the ground, keep listening, and learn the behaviours of others. It is sad but true that in an IT career, too, you have to watch your back. Resist the temptation to be closed and unapproachable; instead be that person who is open to change, with a mind like a parachute. Remember who put their faith in you and got you to this great position that you are in now. You have a university degree or better; not many people in the world get that, and those who try to put you down are jealous, because when they had their chance in life, they bloody blew it. Just because they took a misstep, that does not mean you are going to. If you really want to be black and proud and be bad meaning good, then for heaven’s sake buy the CD or download the MP3s of Public Enemy: Fight The Power, Rebel Without a Pause and Bring The Noise [8]. Rock on out in your bedroom when you feel the world is against you. For all other people, find some inspiration and music that gets you going, motivates you and inspires positivity in yourself. Whatever it is, whether music, theatre, classics, walking the dog, or a landscape that you remember as a child, keep on at it and make it your central core, your sword and shield in the battle, the battle of survival. When you leave university and get on the job market for the first time, it is a great time to learn about and identify the different types of institutions.
For instance, you may have thought that big company ACME was the best for you to get a job in. Perhaps you were tempted by the glossy brochure, or the suited and booted personnel at the job fair; maybe they had the best gizmos in the handout bag at a conference. And then you later find out that the much smaller FROZFIZZ is better. You will probably be surprised at yourself, suddenly turning to FROZFIZZ and finding this smaller enterprise attractive. Maybe it is because they have a better training scheme, perhaps they send their employees to get proper IT certifications, and perhaps they offer a real chance to use the next interesting new technology or framework there. More often than not, the FROZFIZZ employees seem really happy, warm and generous. It is not fake, because you can confirm it with a friend who recently got a job there. That is good culture. You know it when you find it. Some people spend their lives trying to find the good culture. Okay, FROZFIZZ has a much lower starting salary than ACME and they cannot afford to pay a contributory pension plan or some other additional benefits compared to ACME. The time after university is the time to learn how to measure up different employers, when, most likely, you have not yet got the husband or the wife or long-term spouse to bloody annoy you, and you can concentrate on what is best for you and your career. Twenty years down the line, you will not regret choosing happiness in organisations like FROZFIZZ rather than the gravy train of ACME. In fact, it is better to have worked at a series of FROZFIZZ-like companies than to stick with the pressure and unloved atmosphere of ACME for ten years, even if you start climbing the promotional ladder into senior management. The one thing that I want to hit home with, which is an almost universal truth: “The People are the Company”.
In the software industry, which is a global economy, being comfortable where you work and when you work is the most important reason for having a career. Yes, it can be learning Java or Scala or Groovy or some other programming language, but if the company is dysfunctional then the world can feel like a horrid place. In this day and age, we are rapidly seeing the decline of the job-for-life. If you cannot change the organisation, then change the organisation. Some people do leave the country just to find that one opportunity to start an IT career. If you want my advice, and you are seriously considering it, then do it. If nothing else, you will learn a new language, if English is not the native language of the country that you will work in, and you will have a different culture and outlook on life to tune in to. It will demonstrate to the world, on your curriculum vitae, that you are one of the few who are remarkable, courageous and brave. Although leaving the country is a tough and deliberate decision for many people, you can always come back after a few years. Even fewer souls permanently leave Great Britain for the USA or beyond and never return, their lives changed because they made the decision. It is all about finding alternatives. Remember you always have a choice. Just ask Carol Vorderman [7]. Stay the course, and achieve your dreams of becoming a professional software developer; I guarantee you will not regret it.
[1] http://www.standard.co.uk/news/work/sign-of-the-times-graduates-take-to-streets-in-search-of-job-8226282.html [2] http://en.wikipedia.org/wiki/Red_brick_university [3] http://www.independent.co.uk/news/education/education-news/redbrick-universities-are-more-elitist-than-oxbridge-634051.html [4] http://www.guardian.co.uk/money/2012/jul/04/graduate-recruiters-look-for-21-degree?intcmp=239 [5] http://www.guardian.co.uk/education/2012/jul/31/lower-second-degree-employment-prospects [6] http://en.wikipedia.org/wiki/Organizational_culture [7] http://www.guardian.co.uk/theguardian/shortcuts/2012/jul/04/dont-judge-job-applicant-by-degree [8] http://en.wikipedia.org/wiki/Fight_the_Power [9] http://en.wikipedia.org/wiki/Open-space_technology [10] http://en.wikipedia.org/wiki/Keep_On_Movin’_(Soul_II_Soul_song)   Reference: Some Advice for New University Graduates: Dreams of Developing Software from our JCG partner Peter Pilgrim at the Peter Pilgrim’s blog blog. ...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.