Bash’ing your git deployment

Chuck Norris deploys after every commit. Smart men deploy after every successful build on their Continuous Integration server. Educated men deploy code directly from their distributed version control systems. I, being neither, had to write my deployment script in bash. We're using git, and while doing so I wanted us to:

- deploy from the working copy, but...
- make sure that you can deploy only if you committed everything
- make sure that you can deploy only if you pushed everything upstream
- tag the deployed hash
- display a changelog (all the commits between the two last tags)

Here are some bash procedures I wrote on the way, if you need them.

Make sure that you can deploy only if you committed everything:

verifyEverythingIsCommitted() {
  gitCommitStatus=$(git status --porcelain)
  if [ "$gitCommitStatus" != "" ]; then
    echo "You have uncommitted files."
    echo "Your git status:"
    echo "$gitCommitStatus"
    echo "Sorry. Rules are rules. Aborting!"
    exit 1
  fi
}

Make sure that you can deploy only if you pushed everything upstream:

verifyEverythingIsPushedToOrigin() {
  gitPushStatus=$(git cherry -v)
  if [ "$gitPushStatus" != "" ]; then
    echo "You have local commits that were NOT pushed."
    echo "Your 'git cherry -v' status:"
    echo "$gitPushStatus"
    echo "Sorry. Rules are rules. Aborting!"
    exit 1
  fi
}

Tag the deployed hash. Notice: my script takes the first parameter as the name of the server to deploy to (this is the $1 passed to this procedure). Also notice that 'git push' without '--tags' does not push your tags.

tagLastCommit() {
  d=$(date '+%y-%m-%d_%H-%M-%S')
  git tag "$1_$d"
  git push --tags
}

This creates nice-looking tags like these:

preprod_12-01-11_15-16-24
prod_12-01-12_10-51-33
test_12-01-11_15-11-10
test_12-01-11_15-53-42

Display the changelog (all the commits between the two last tags):

printChangelog() {
  echo "This is the changelog since the last deploy. Send it to the client."
  twoLastHashesInOneLine=$(git show-ref --tags -s | tail -n 2 | tr '\n' '-')
  twoLastHashesInOneLineWithThreeDots=${twoLastHashesInOneLine/-/...}
  twoLastHashesInOneLineWithThreeDotsNoMinusAtTheEnd=$(echo $twoLastHashesInOneLineWithThreeDots | sed 's/-$//')
  git log --pretty=oneline --no-merges --abbrev-commit $twoLastHashesInOneLineWithThreeDotsNoMinusAtTheEnd
}

The last command gives you a nice log like this:

e755c63 deploy: fix for showing changelog from two first tags instead of two last ones
926eb02 pringing changelog between last two tags on deployment
34478b2 added git tagging to deploy

Reference: Bash'ing your git deployment from our JCG partner Jakub Nabrdalik at the Solid Craft blog.

My thoughts on Agile

The seeds that you plant at the beginning of an organization's life influence how the organization grows over time. I think it's safe to say that you have some macro goals:

- Predictability: you as managers want to predict what will happen and when it will happen
- Flexibility: you want to be nimble to satisfy customers and close sales
- Open Communications: you want to have good, open, honest communications with the team, especially when the situation is suboptimal

It's been my experience that agile achieves the above goals, even though it might not seem that way. Waterfall, a lots-of-up-front type of methodology, is not agile and will ultimately fall short of the goals stated above, despite the fact that having a long-term plan seems to jibe with predictability. So, when I say 'agile', I mean the following:

- The team works towards periodic goals. The period is the same every time and is relatively short: 1-3 weeks... an Iteration.
- All tasks are placed in a central 'backlog' and prioritized in that backlog. Simple tasks are called stories and complex tasks are called epics. Any story that cannot be completed in an iteration is an epic, and before it is started, an epic must be broken into sub-stories. Anything that's going to consume time is a story, and that can include 'be available for customer support requests', 'partnership calls' and 'take a few days off over Christmas.'
- Backlog re-prioritization can happen at any time.
- Each story is assigned a point value agreed to by the team. I have found that point values based on the Fibonacci series are most effective.
- At the beginning of an iteration, each team member signs up to complete certain stories and also identifies certain stories as stretch goals (they're not guaranteeing that they'll complete the stretch goals).
- Periodically through the iteration, we have short 'stand-up' meetings (I'd recommend twice a week) where people each talk about what they did since the last meeting, what they plan to do between now and the next stand-up, and any blocking issues. No more than 60 seconds per person. These low-impact touch-points help keep the team aligned.
- At the end of the iteration, each person demos what they completed, so the whole team keeps in touch with what's going on and so that we can celebrate each team member's accomplishments. Points for completed tasks are put into a spreadsheet.

It's pretty obvious that the above methodology leads to flexibility. No more than one iteration is needed to change direction based on a reprioritized backlog. What is less obvious is that it also leads to better predictability. Short iterations mean that people can do a better job of predicting how hard a 2- or 3-week task is going to be, and whether they can complete it or it needs to be broken down into sub-tasks. So, on a per-iteration basis, you have immediate predictability. The added benefit is that over 3 or 4 iterations, you'll get a very good and stable 'point consumption' model for the team as a whole and for each team member. This point consumption model leads to very accurate predictions. You'll find that stories (less so epics) are accurately priced and that each team member delivers a narrow band of points, consistently, iteration after iteration. What is even less obvious is how agile leads to much better communications. In the traditional waterfall model, there are always slips. Slips lead to ass covering and blame and a whole lot of 'I don't trust my team members, so I'm going to communicate in a way that will cause me the least pain.' The bad and common case scenario, one I've seen over and over, is the 3-month project that slips a month or two within weeks of the due date.
By the time that happens, everybody is stressy and cranky (unlike the beginning phase of the project, where you have money in the bank, prospects, and a motivated team... a phase where nothing seems impossible). In an agile process, the folks who sign up for a task and don't deliver it by the end of the iteration are not punished. If it's a repeated event, as managers, we can simply say, 'Joe, you are signing up for 45 points this iteration, but you've historically consumed/delivered 30 points per iteration; how about signing up for 30 points and making the balance stretch goals?' (Consistent underperformers should, however, be removed from the team.) Another benefit of this approach is that all code and all ideas are open to improvement. This means that people are more likely to say, 'I was wrong about that, let's improve it,' 'I'm glad you improved on my code,' and other things that leverage the skills of all the team members. But, you say, 'what does the architecture look like?' I say, 'who cares?' Keep in mind that this is my attitude towards Lift. There are a couple of touchstone concepts in Lift (e.g., security, abstracting away the HTTP request/response cycle), but by and large it has grown based on user feedback. Whatever architecture we come up with today is going to be wrong, broken, or so broad that it'll be outdated in 4-6 months. On the other hand, if we grow the product organically and make sure we're willing to throw away pieces that no longer fit, we can keep satisfying customer demand as well as learning from the mistakes we've made along the way. Note that no matter what direction we choose, we'll make mistakes. In a waterfall model, we are encouraged to conceal our mistakes, whereas in an agile model, we are encouraged to acknowledge mistakes and fix them as iteration tasks.
It will take a few iterations to get the rhythm of the process, but it's been my experience that once the rhythm is there, management and sales get predictability and flexibility, and the whole team gets open communication.

Reference: My thoughts on Agile from our JCG partner David Pollak at the Good Stuff blog.

Running HTTP/REST Integration Tests efficiently in Eclipse

Lately I had a chance to use the OSGi-JAX-RS-Connector library written by my dear fellow Holger Staudacher. The connector enables you to publish resources easily by registering @Path annotated types as OSGi services – which actually works quite nicely. While it is natural for me to write the service classes test-driven using plain JUnit tests, it is also very important to provide additional integration tests. Those tests allow you to check the runtime availability and functionality of such services. For the latter I used another little helper written by Holger – restfuse, a JUnit extension for automated HTTP/REST tests. The scenario looks somewhat like this:

A service:

@Path( "/message" )
public class SampleService {

  @GET
  @Produces( MediaType.TEXT_PLAIN )
  public String getMessage() {
    return "Hello World";
  }
}

A JUnit test case:

public class SampleServiceTest {

  @Test
  public void testGetMessage() {
    SampleService service = new SampleService();

    String message = service.getMessage();

    assertEquals( "Hello World", message );
  }
}

The service registration:

<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" name="SampleService">
  <implementation class="sample.SampleService"/>
  <service>
    <provide interface="sample.SampleService"/>
  </service>
</scr:component>

The restfuse integration test:

@RunWith( HttpJUnitRunner.class )
public class SampleServiceHttpTest {

  @Rule
  public Destination destination = new Destination( "http://localhost:9092" );

  @Context
  private Response response;

  @HttpTest( method = Method.GET, path = "/services/message" )
  public void checkMessage() {
    String body = response.getBody( String.class );
    assertOk( response );
    assertEquals( MediaType.TEXT_PLAIN, response.getType() );
    assertEquals( "Hello World", body );
  }
}

The running service.

While all of this was quite straightforward, it bugged me somehow that running the integration tests locally first required launching the server before I was able to execute the integration tests. Preoccupied by the task at hand, I often forgot to launch the server and ran into connection timeouts or the like. But I found a solution for this by using a PDE JUnit launch configuration, because such a configuration can be set up to start the server within the process that runs the tests. To do so, create and select a test suite that contains all the integration tests to run [1]...

...after that, switch to the main tab and select the headless mode...

...and last but not least, configure the program arguments used by the server, which in our case basically concerns the port definition.

The bundle selection in the Plug-ins tab contains the same bundles as the OSGi launch configuration that is used to run the server standalone, plus the JUnit, PDE JUnit and restfuse bundles and their dependencies. The selected test suite may look like this:

@RunWith( Suite.class )
@SuiteClasses( { SampleServiceHttpTest.class } )
public class AllRestApiIntegrationTestSuite {

  public static String BASE_URL
    = "http://localhost:" + System.getProperty( "org.osgi.service.http.port" );
}

The only unusual thing here is the BASE_URL constant definition. As mentioned above, the server port of the test run is specified as a program argument in the launch configuration. But restfuse tests need to provide the port in the Destination rule definition. Using the approach above allows you to change the port in the configuration without affecting the tests. Simply use the constant as a parameter in the definition, as shown in the following snippet [2][3]:

@Rule
public Destination destination = new Destination( BASE_URL );

This simple setup worked out very well and improved my workflow of running the integration tests locally. And saving the launch configuration in a shared project easily enables your team mates to reuse it. So this is it for today, and as always, feedback is highly appreciated. By the way, Holger promised me to write a post about how to integrate the stuff described above into a maven/tycho based build [4] – so stay tuned.

[1] Of course you can also use the possibility of running all tests of the selected project, package or source folder – but for our purposes here, using the suite approach and running a single test case is quite ok.
[2] You would probably provide a separate class for the constant definition in a real-world scenario to avoid coupling the tests to the suite. I skipped this here for simplification.
[3] Note that BASE_URL is included using static imports for better readability of the snippet.
[4] Holger kept his promise, see: http://eclipsesource.com/blogs/2012/09/11/running-httprest-integration-tests-in-an-eclipse-tycho-build/

Reference: Running HTTP/REST Integration Tests efficiently in Eclipse from our JCG partner Frank Appel at the Code Affine blog.

Best Must-Read Books for Software Engineers

Here is the CodeBuild selection of must-read software engineering books, grouped by topic with a short description.

Reference Books: These Robert C. Martin and Gang of Four books are fundamental OOP resources for every software engineer.

Coding Perfection: These Steve McConnell, Robert C. Martin and Joshua Bloch books are very helpful for improving your coding skills.

Refactoring and Patterns: Refactoring and patterns are very important aspects of OOP, bringing quality and maintainability. These Martin Fowler and Joshua Kerievsky books are perhaps the best references on the subject.

Pragmatic Programming: Andrew Hunt's and David Thomas's 'pragmatic' approach to programming brings very important viewpoints to software engineering.

Project Management: There are many project management books on the market, but Frederick P. Brooks Jr. and Tom DeMarco present very impressive viewpoints on project management.

Reference: Best Must-Read Books for Software Engineers from our JCG partner Çağdaş Başaraner at the CodeBuild blog.

Signing Java Code

In a previous post, we discussed how to secure mobile code. One of the measures mentioned was signing code. This post explores how that works for Java programs.

Digital Signatures

The basis for digital signatures is cryptography, specifically public key cryptography. We use a pair of cryptographic keys: a private and a public key. The private key is used to sign a file and must remain secret. The public key is used to verify the signature that was generated with the private key. This is possible because of the special mathematical relationship between the keys. Both the signature and the public key need to be transferred to the recipient.

Certificates

In order to trust a file, one needs to verify the signature on that file. For this, one needs the public key that corresponds to the private key that was used to sign the file. So how can we trust the public key? This is where certificates come in. A certificate contains a public key and the distinguished name that identifies the owner of that key. The trust comes from the fact that the certificate is itself signed. So the certificate also contains a signature and the distinguished name of the signer. When we control both ends of the communication, we can just provide both ends with the certificate and be done with it. This works well, for instance, for mobile apps you write that connect to a server you control. If we don't control both ends, we need an alternative. The distinguished name of the signer can be used to look up the signer's certificate. With the public key from that certificate, the signature in the original certificate can be verified. We can continue in this manner, creating a certificate chain, until we reach a signer that we explicitly trust. This is usually a well-established Certificate Authority (CA), like VeriSign or Thawte.
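The sign-then-verify flow described above can be sketched in plain Java with the standard JCA classes. This is a minimal, self-contained illustration (the key size and algorithm choices are mine, not prescribed by the article):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignDemo {
    public static void main(String[] args) throws Exception {
        // Generate a private/public key pair. In practice the private key
        // lives in a keystore and the public key travels in a certificate.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair keys = gen.generateKeyPair();

        byte[] file = "the code to ship".getBytes(StandardCharsets.UTF_8);

        // Sign the file with the private key...
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(file);
        byte[] signature = signer.sign();

        // ...and verify the signature with the public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(file);
        System.out.println(verifier.verify(signature)); // true

        // A tampered file no longer verifies.
        file[0] ^= 1;
        verifier.initVerify(keys.getPublic());
        verifier.update(file);
        System.out.println(verifier.verify(signature)); // false
    }
}
```

The second check is the whole point of code signing: flipping a single bit of the signed content invalidates the signature.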
Keystores

In Java, private keys and certificates are stored in a password-protected database called a keystore. Each key/certificate combination is identified by a string known as the alias.

Code Signing Tools

Java comes with two tools for code signing: keytool and jarsigner. Use the jarsigner program to sign jar files using certificates stored in a keystore. Use the keytool program to create private keys and the corresponding public key certificates, to retrieve/store them from/to a keystore, and to manage the keystore. The keytool program is not capable of creating a certificate signed by someone else. It can, however, create a Certificate Signing Request that you can send to a CA, and it can import the CA's response into the keystore. The alternative is to use tools like OpenSSL or BSAFE, which support such CA capabilities.

Code Signing Environment

Code signing should happen in a secure environment, since private keys are involved and those need to remain secret. If a private key falls into the wrong hands, a third party could sign their code with your key, tricking your customers into trusting that code. This means that you probably don't want to maintain the keystore on the build machine, since that machine is likely available to many people. A more secure approach is to introduce a dedicated signing server. You should also use different signing certificates for development and production.

Timestamping

Certificates are valid for a limited time period only. Any file signed with a private key whose public key certificate has expired should no longer be trusted, since it may have been signed after the certificate expired. We can alleviate this problem by timestamping the file. By adding a trusted timestamp to the file, we can trust it even after the signing certificate expires. But then how do we trust the timestamp? Well, by having it signed by a Time Stamping Authority, of course! The OpenSSL program can help you with that as well.
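The keystore concepts above (store password, per-entry password, alias) can also be exercised programmatically through the java.security.KeyStore API. Here is a toy in-memory example using a JCEKS keystore and a secret key; the alias and passwords are made up for illustration, and real code-signing setups would of course use keytool-managed private key entries instead:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.Key;
import java.security.KeyStore;
import java.util.Arrays;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class KeystoreDemo {
    public static void main(String[] args) throws Exception {
        char[] storePw = "store-secret".toCharArray();
        char[] entryPw = "entry-secret".toCharArray();

        // Create an empty in-memory keystore (JCEKS can hold secret keys).
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, null);

        // Store a key under an alias, protected by its own password.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        ks.setEntry("deploy-key", new KeyStore.SecretKeyEntry(key),
                new KeyStore.PasswordProtection(entryPw));

        // Persist and reload, as keytool/jarsigner would with a file on disk.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, storePw);
        KeyStore reloaded = KeyStore.getInstance("JCEKS");
        reloaded.load(new ByteArrayInputStream(out.toByteArray()), storePw);

        // The entry comes back under the same alias, given the right password.
        System.out.println(reloaded.aliases().nextElement());
        Key back = reloaded.getKey("deploy-key", entryPw);
        System.out.println(Arrays.equals(back.getEncoded(), key.getEncoded()));
    }
}
```

Note the two layers of protection: the store password integrity-protects the whole database, while each entry can carry its own password.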
Beyond Code Signing

When you sign your code, you only prove that the code came from you. For a customer to be able to trust your code, it also needs to be trustworthy. You probably want to set up a full-blown Security Development Lifecycle (SDL) to make it as trustworthy as possible. Another thing to consider in this area is third-party code. Most software packages embed commercial and/or open source libraries. Ideally, those libraries are signed by their authors. But no matter what, you need to take ownership, since customers don't care whether a vulnerability is found in code you wrote yourself or in a library you used.

Reference: Signing Java Code from our JCG partner Remon Sinnema at the Secure Software Development blog.

Stress/Load-Testing of Asynchronous HTTP/REST Services with JMeter

Although I have used JMeter for stress- and load-testing of web applications a good few times, it took us a while to figure out how to test asynchronous HTTP/REST based services with the tool. By 'us' I mean a fellow programmer – Holger Staudacher, with whom I currently have the honor of working on a project – and my humble self. While Holger developed restfuse based on the experience of doing functional and integration tests for the project mentioned above, we decided to use JMeter for stress- and load-testing. The main service of the software under test processes a data structure that is uploaded to a certain URL. If the upload is successful, a URL pointing to a resource containing the processing result is returned. The resulting resource is not available immediately – processing takes a while – so polling is used to retrieve the resource once it is available [1]. Our goal was to measure the time it takes to upload the data structure, process it and download the result resource in one test run. Running such a test with multiple users concurrently should give us a fair impression of the throughput capabilities of the system. Does not sound too complicated, but...

...our first approach to writing a test plan for the scenario described in the previous paragraph using the on-board capabilities of JMeter did not work out very well. The plan was neither comprehensible nor – and that was worse – did the measurement results make any sense. In particular, clamping the upload request and the polling loops together with a transaction controller seemed to have some unexpected side effects with timers. So after a while of additional Google research I stumbled across the JavaSamplerClient API, which I did not know before. There is an entry at stackoverflow.com that describes how to extend AbstractJavaSamplerClient – an implementation of JavaSamplerClient – and use it within JMeter. So this was the way to solve our problem.
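Stripped of the HTTP details, the poll-until-available logic that the sampler has to time looks roughly like the following. This is my own minimal sketch, not code from the project; the Supplier stands in for the HTTP GET against the result URL:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class PollDemo {

    /**
     * Polls until the supplier returns a non-null result or the timeout
     * expires. Returns null on timeout.
     */
    static <T> T pollForResult(Supplier<T> resource, long timeoutMillis,
                               long intervalMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            T result = resource.get(); // e.g. an HTTP GET returning null on 404
            if (result != null) {
                return result;
            }
            TimeUnit.MILLISECONDS.sleep(intervalMillis);
        }
        return null; // the caller would record this as a failed sample
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a resource that becomes available after roughly 150 ms.
        long start = System.currentTimeMillis();
        Supplier<String> resource =
            () -> System.currentTimeMillis() - start > 150 ? "result" : null;

        System.out.println(pollForResult(resource, 2000, 50));
    }
}
```

Measuring the upload and this whole loop as one sample is exactly what JMeter's transaction controller failed to do cleanly for us, and what the custom sampler described next does instead.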
We created an extension of AbstractJavaSamplerClient, overriding runTest(JavaSamplerContext). Within that method we use HttpClient to perform the upload and poll requests. Once the processing result has been successfully retrieved by a poll request, all the header and content information is stored in an instance of SampleResult. The latter is returned by the overridden test sampler method for further processing by JMeter – quite straightforward [2]. Once you have created a jar that contains a custom JavaSamplerClient and put it into the lib/ext/ folder below your JMeter installation directory, you can add a Sampler of type Java Request to your Thread Group. This allows you to select and configure a custom sampler as shown in the picture below.

Using the JavaSamplerClient made our test plan very simple and allowed us to use the common JMeter result measurement functionality, as shown below with the Graph Results view. And of course the measurement results are now reasonable... Since we had to fumble quite a while to get this done, I thought our solution might be of interest to other people, too – which was the reason to write this post. But it would also be interesting to hear from you if there are easier solutions we did not notice. So feel welcome to provide feedback.

[1] We started using WebHooks, but our customer had problems convincing the IT admins to open up the firewall...
[2] For the sake of a reasonable length of this post I skip any description of how to deal with unsuccessful requests – but most of our implementation work had to be done in that area...

Reference: Stress/Load-Testing of Asynchronous HTTP/REST Services with JMeter from our JCG partner Frank Appel at the Code Affine blog.

Back to Basics – good comments are targeted comments

I can't think of a single person who enjoys writing comments in code. I don't, my friends and colleagues don't, and I'm pretty sure there isn't a meetup group for fans of it. Outside of code that I write for blog posts, I can pretty much guarantee that the only place I write comments is in interfaces. The simple reason for this is that a) interfaces should be explicit, and b) implementations should be self-documenting. The reason for this blog post is that I'm seeing a lot of code at the moment that falls into one of two traps – either everything is documented, or nothing is documented.

Everything is documented

Spot what's wrong with this code:

/**
 * The Foo class represents blah blah blah...and so on, describing the
 * class in such detail it's a pity the code couldn't be generated from it
 */
public class Foo {

  /** The name */
  private String name;

  /**
   * Do something with this instance.
   */
  public void doSomething() {
    // Get the name
    String localName = this.getName();

    // munge the local name
    localName.munge();
  }

  /**
   * Get the name.
   * @return the name
   */
  public String getName() {
    // return the name
    return this.name;
  }

  /**
   * Set the name.
   * @param name the name
   */
  public void setName(String name) {
    // set the name
    this.name = name;
  }
}

Or, to put it another way, spot what's right with it. That's a much shorter answer. The code is full of unnecessary comments – e.g. getName() gets the name – and code that seems to have been written just so it could be commented – e.g. String localName = this.getName(); The names have been changed to protect the guilty, but this is real code I've seen in a live codebase. As far as I'm concerned, implementations don't need code-level comments, because that's what the code says anyway.
Nothing is documented

At the other end of the scale is this little gem:

public interface Parser {
  void parse(InputStream is) throws IOException, SQLFeatureNotSupportedException;
}

Interfaces, on the other hand, should have clear documentation that defines what goes in, a generic description of what should happen, and a clear description of what comes out and which exceptions can be thrown. Information at this level should state if, for example, null arguments are allowed, if a null value can be returned, the circumstances in which certain exceptions are thrown, and so on.

Interfaces should be explicit

Interfaces, to my way of thinking, are contracts, and contracts – as any blood-sucking lawyer can tell you – exist to be honoured. They can't be honoured if the terms are not explicitly set out. There are no Burger Kings in Belgium, so when I'm in the UK or the Netherlands I am generally tempted to have one. On my most recent visit, I noticed this at the bottom of the receipt:

"Free drink with any adult burger with this receipt. Excluding Hamburger, Cheeseburger or King deal or any promotional offers."

Or, to put it another way...

/**
 * Get the free drink promised by the receipt. This is valid for any burger.
 * @param burger the burger
 * @param receipt the receipt
 * @return the free drink
 * @throws InvalidBurgerException if the burger doesn't qualify for a free drink
 */
public Drink getFreeDrink(Burger burger, Receipt receipt) throws InvalidBurgerException {
  if (MealType.HAMBURGER == burger.type()
      || MealType.CHEESEBURGER == burger.type()
      || MealType.KING_DEAL == burger.type()
      || burger.isPromo()) {
    throw new InvalidBurgerException();
  }
  return new Drink();
}

To my simple brain, this is confusing and contradictory as hell. Your API should be clear and (as my university teachers beat into me) unambiguous – for example, the words 'any' and 'except' should not appear in the same sentence.
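Going back to the undocumented Parser interface from earlier, explicit Javadoc along these lines would spell out the contract. The contract details here (null handling, stream ownership) are invented for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Objects;

public class ParserDoc {

    public interface Parser {
        /**
         * Parses all content from the given stream until end-of-stream.
         *
         * @param is the stream to read from; must not be null. The stream
         *           is not closed by this method - closing it remains the
         *           caller's responsibility.
         * @throws NullPointerException if {@code is} is null
         * @throws IOException if the stream cannot be read
         */
        void parse(InputStream is) throws IOException;
    }

    public static void main(String[] args) throws IOException {
        // A trivial implementation honouring the contract: it counts bytes.
        final int[] count = {0};
        Parser parser = is -> {
            Objects.requireNonNull(is, "is");
            while (is.read() != -1) {
                count[0]++;
            }
        };
        parser.parse(new ByteArrayInputStream("abc".getBytes()));
        System.out.println(count[0]);
    }
}
```

Any implementation can now be checked against the stated terms, which is exactly what a contract is for.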
Strive for clarity – if you find your API is too hard to document clearly, there's a good chance it will be annoying to use. In the case above, an improvement would be something along the lines of:

/**
 * Get the free drink promised by the receipt. Not every burger qualifies for a free drink.
 * @param burger the requested burger. This may or may not qualify for a free drink
 * @param receipt the receipt containing the offer
 * @return the free drink. May be null, depending on the burger
 */
public Drink getFreeDrink(Burger burger, Receipt receipt) {
  // implementation
}

Note that I've also got rid of the exception as a result of being more explicit.

Conclusion

Annoying as it is, documentation is extremely important in the correct place and completely useless everywhere else. Done correctly, comments will make your API easier to use and easier to maintain, which is generally a good thing.

Reference: Back to Basics – good comments are targeted comments from our JCG partner Steve Chaloner at the Objectify blog.

Spring MVC Error Handling Example

This post describes the different techniques for performing error handling in Spring MVC 3. The code is available on GitHub in the Spring-MVC-Error-Handling directory. It is based on the Spring MVC With Annotations examples.

Handling Exceptions Before Spring 3

Before Spring 3, exceptions were handled with HandlerExceptionResolvers. This interface defines a single method:

ModelAndView resolveException(
    HttpServletRequest request,
    HttpServletResponse response,
    Object handler,
    Exception ex)

Notice that it returns a ModelAndView object. Therefore, encountering an error meant being forwarded to a special page. However, this method is not suited for REST Ajax calls returning JSON (for example). In this case, we do not want to return a page, and we may want to return a specific HTTP status code. A solution, described further below, is available. For the sake of this example, two fake CustomizedException1 and CustomizedException2 exceptions have been created. To map customized exceptions to views, one could (and still can) use a SimpleMappingExceptionResolver:

SimpleMappingExceptionResolver getSimpleMappingExceptionResolver() {

  SimpleMappingExceptionResolver result = new SimpleMappingExceptionResolver();

  // Setting customized exception mappings
  Properties p = new Properties();
  p.put(CustomizedException1.class.getName(), "Errors/Exception1");
  result.setExceptionMappings(p);

  // Unmapped exceptions will be directed there
  result.setDefaultErrorView("Errors/Default");

  // Setting a default HTTP status code
  result.setDefaultStatusCode(HttpStatus.BAD_REQUEST.value());

  return result;
}

We map CustomizedException1 to the Errors/Exception1 JSP page (view). We also set a default error view for unmapped exceptions, namely CustomizedException2 in this example, as well as a default HTTP status code.
Here is the Exception1 JSP page; the default page is similar:

<%@page contentType="text/html" pageEncoding="UTF-8"%>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<!doctype html>
<html lang="en">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  <title>Welcome To Exception I !!!</title>
</head>
<body>
  <h1>Welcome To Exception I !!!</h1>
  Exception special message: ${exception.specialMsg}
  <a href="<c:url value='/'/>">Home</a>
</body>
</html>

We also create a dummy error controller to help trigger these exceptions:

@Controller
public class TriggeringErrorsController {

  @RequestMapping(value = "/throwCustomizedException1")
  public ModelAndView throwCustomizedException1(
      HttpServletRequest request,
      HttpServletResponse response) throws CustomizedException1 {

    throw new CustomizedException1("Houston, we have a problem!");
  }

  @RequestMapping(value = "/throwCustomizedException2")
  public ModelAndView throwCustomizedException2(
      HttpServletRequest request,
      HttpServletResponse response) throws CustomizedException2 {

    throw new CustomizedException2("Something happened on the way to heaven!");
  }

  ...
}

Before Spring 3, one would declare a SimpleMappingExceptionResolver bean in the XML configuration. However, we will use a HandlerExceptionResolverComposite, which we will describe later. We also configure a target page for HTTP status codes in web.xml, which is another way to deal with issues:

<error-page>
  <error-code>404</error-code>
  <location>/WEB-INF/pages/Errors/My404.jsp</location>
</error-page>

What Is New Since Spring 3.x?

The @ResponseStatus annotation is a new means of setting an HTTP status code when a method is invoked. These annotations are handled by the ResponseStatusExceptionResolver. The @ExceptionHandler annotation facilitates the handling of exceptions in Spring. Such annotations are processed by the AnnotationMethodHandlerExceptionResolver.
The following illustrates how these annotations can be used to set an HTTP status code on the response when our customized exception is triggered. The message is returned in the response's body:

@Controller
public class TriggeringErrorsController {

  ...

  @ExceptionHandler(Customized4ExceptionHandler.class)
  @ResponseStatus(value = HttpStatus.BAD_REQUEST)
  @ResponseBody
  public String handleCustomized4Exception(Customized4ExceptionHandler ex) {
    return ex.getSpecialMsg();
  }

  @RequestMapping(value = "/throwCustomized4ExceptionHandler")
  public ModelAndView throwCustomized4ExceptionHandler(
      HttpServletRequest request,
      HttpServletResponse response) throws Customized4ExceptionHandler {

    throw new Customized4ExceptionHandler("S.O.S !!!!");
  }
}

On the user side, when making an Ajax call, the error can be retrieved with the following (we are using jQuery):

$.ajax({
  type: 'GET',
  url: prefix + '/throwCustomized4ExceptionHandler',
  async: true,
  success: function(result) {
    alert('Unexpected success !!!');
  },
  error: function(jqXHR, textStatus, errorThrown) {
    alert(jqXHR.status + ' ' + jqXHR.responseText);
  }
});

Some people using Ajax like to return a JSON containing an error code and some message to handle exceptions. I find that overkill; a simple error number with a message keeps it simple.
Since we are using several resolvers, we need a composite resolver (as mentioned earlier): @Configuration public class ErrorHandling {...@Bean HandlerExceptionResolverComposite getHandlerExceptionResolverComposite() {HandlerExceptionResolverComposite result = new HandlerExceptionResolverComposite();List<HandlerExceptionResolver> l = new ArrayList<HandlerExceptionResolver>();l.add(new AnnotationMethodHandlerExceptionResolver()); l.add(new ResponseStatusExceptionResolver()); l.add(getSimpleMappingExceptionResolver()); l.add(new DefaultHandlerExceptionResolver());result.setExceptionResolvers(l);return result;} The DefaultHandlerExceptionResolver resolves standard Spring exceptions and translates them to corresponding HTTP status codes. Running The Example Once compiled, the example can be run with mvn tomcat:run. Then, browse: http://localhost:8585/spring-mvc-error-handling/ The main page is displayed first. If you click on the Exception 1 link, the Exception 1 page is displayed. If you click on the Exception 2 link, the Exception 2 page is displayed. If you click on the Exception Handler button, a pop-up is displayed. These techniques are enough to cover error handling in Spring. More Spring related posts here.   Reference: Spring MVC Error Handling from our JCG partner Jerome Versrynge at the Technical Notes blog. ...

Investigating Deadlocks – Part 5: Using Explicit Locking

In my last blog I looked at fixing my broken, deadlocking balance transfer sample code using both Java’s traditional synchronized keyword and lock ordering. There is, however, an alternative method known as explicit locking. Calling a locking mechanism explicit rather than implicit means that it is not part of the Java language itself: classes have been written to provide the locking functionality. Implicit locking, on the other hand, is locking that is part of the language, implemented behind the scenes using the language keyword synchronized. You could argue as to whether or not explicit locking is a good idea. Shouldn’t the Java language be improved to include the features of explicit locking rather than adding yet another set of classes to the already enormous API? For example, a trySynchronized() keyword. Explicit locking is based around the Lock interface and its ReentrantLock implementation. Lock contains a bunch of methods that give you lots more control over locking than the traditional synchronized keyword. It’s got the methods that you’d expect it to have, such as lock(), which will create an entry point into a guarded section of code, and unlock(), which creates the exit point. It also has tryLock(), which will only acquire a lock if it’s available and not already held by another thread, and tryLock(long time, TimeUnit unit), which will try to acquire a lock and, if it’s unavailable, wait for the specified timeout to expire before giving up.
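To make the distinction concrete, here is a tiny sketch of mine (not from the original sample code) showing that tryLock() returns immediately with a boolean where lock() or synchronized would block:

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockSketch {

    // Attempts the lock without blocking; returns whether the
    // guarded action was actually performed.
    static boolean runIfFree(ReentrantLock lock, Runnable action) {
        if (lock.tryLock()) {          // succeeds only if no other thread holds the lock
            try {
                action.run();
                return true;
            } finally {
                lock.unlock();         // always release what we acquired
            }
        }
        return false;                  // lock busy: give up instead of waiting
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        // Uncontended: tryLock() succeeds immediately.
        System.out.println(runIfFree(lock, () -> {}));   // true

        // Contended: hold the lock here, then try from another thread.
        lock.lock();
        Thread t = new Thread(() -> System.out.println(runIfFree(lock, () -> {})));  // false
        t.start();
        t.join();
        lock.unlock();
    }
}
```

Note that because ReentrantLock is reentrant, tryLock() would still succeed if called again on the same thread; the contended case above therefore uses a second thread.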
In order to implement explicit locking I’ve first added the Lock interface to the Account class used in previous blogs in this series.

public class Account implements Lock {private final int number;private int balance;private final ReentrantLock lock;public Account(int number, int openingBalance) { this.number = number; this.balance = openingBalance; this.lock = new ReentrantLock(); }public void withDrawAmount(int amount) throws OverdrawnException {if (amount > balance) { throw new OverdrawnException(); }balance -= amount; }public void deposit(int amount) {balance += amount; }public int getNumber() { return number; }public int getBalance() { return balance; }// ------- Lock interface implementation@Override public void lock() { lock.lock(); }@Override public void lockInterruptibly() throws InterruptedException { lock.lockInterruptibly(); }@Override public Condition newCondition() { return lock.newCondition(); }@Override public boolean tryLock() { return lock.tryLock(); }@Override public boolean tryLock(long arg0, TimeUnit arg1) throws InterruptedException { return lock.tryLock(arg0, arg1); }@Override public void unlock() { if (lock.isHeldByCurrentThread()) { lock.unlock(); } }}

In the code above you can see that I’m favouring aggregation by encapsulating a ReentrantLock object to which the Account class delegates locking functionality. The only small GOTCHA to be aware of is in the unlock() implementation:@Override public void unlock() { if (lock.isHeldByCurrentThread()) { lock.unlock(); } }This has an additional if() statement that checks whether or not the calling thread is the thread that currently holds the lock.
If this line of code is missed out then you’ll get the following IllegalMonitorStateException: Exception in thread 'Thread-7' java.lang.IllegalMonitorStateException at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(ReentrantLock.java:155) at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(AbstractQueuedSynchronizer.java:1260) at java.util.concurrent.locks.ReentrantLock.unlock(ReentrantLock.java:460) at threads.lock.Account.unlock(Account.java:76) at threads.lock.TrylockDemo$BadTransferOperation.transfer(TrylockDemo.java:98) at threads.lock.TrylockDemo$BadTransferOperation.run(TrylockDemo.java:67) So, how is this implemented? Below is the listing of my TryLockDemo sample that’s based upon my original DeadLockDemo program.public class TrylockDemo {private static final int NUM_ACCOUNTS = 10; private static final int NUM_THREADS = 20; private static final int NUM_ITERATIONS = 100000; private static final int LOCK_ATTEMPTS = 10000;static final Random rnd = new Random();List<Account> accounts = new ArrayList<Account>();public static void main(String args[]) {TrylockDemo demo = new TrylockDemo(); demo.setUp(); demo.run(); }void setUp() {for (int i = 0; i < NUM_ACCOUNTS; i++) { Account account = new Account(i, 1000); accounts.add(account); } }void run() {for (int i = 0; i < NUM_THREADS; i++) { new BadTransferOperation(i).start(); } }class BadTransferOperation extends Thread {int threadNum;BadTransferOperation(int threadNum) { this.threadNum = threadNum; }@Override public void run() {int transactionCount = 0;for (int i = 0; i < NUM_ITERATIONS; i++) {Account toAccount = accounts.get(rnd.nextInt(NUM_ACCOUNTS)); Account fromAccount = accounts.get(rnd.nextInt(NUM_ACCOUNTS)); int amount = rnd.nextInt(1000);if (!toAccount.equals(fromAccount)) {boolean successfulTransfer = false;try { successfulTransfer = transfer(fromAccount, toAccount, amount);} catch (OverdrawnException e) { successfulTransfer = true; }if (successfulTransfer) { transactionCount++; }} 
}System.out.println("Thread Complete: " + threadNum + " Successfully made " + transactionCount + " out of " + NUM_ITERATIONS); }private boolean transfer(Account fromAccount, Account toAccount, int transferAmount) throws OverdrawnException {boolean success = false; for (int i = 0; i < LOCK_ATTEMPTS; i++) {try { if (fromAccount.tryLock()) { try { if (toAccount.tryLock()) {success = true; fromAccount.withDrawAmount(transferAmount); toAccount.deposit(transferAmount); break; } } finally { toAccount.unlock(); } } } finally { fromAccount.unlock(); } }return success; }} }The idea is the same: I have a list of bank accounts and I’m going to randomly choose two accounts and transfer a random amount from one to the other. The heart of the matter is my updated transfer(...) method as shown below.private boolean transfer(Account fromAccount, Account toAccount, int transferAmount) throws OverdrawnException {boolean success = false; for (int i = 0; i < LOCK_ATTEMPTS; i++) {try { if (fromAccount.tryLock()) { try { if (toAccount.tryLock()) {success = true; fromAccount.withDrawAmount(transferAmount); toAccount.deposit(transferAmount); break; } } finally { toAccount.unlock(); } } } finally { fromAccount.unlock(); } }return success; }The idea here is that I try to lock the fromAccount and then the toAccount. If that works then I make the transfer before remembering to unlock both accounts. If the accounts are already locked, then my tryLock() method fails and the whole thing loops around and tries again. After 10000 lock attempts, the thread gives up and ignores the transfer. I guess that in a real-world application you’d want to put this failure onto some sort of queue so that it can be investigated later.
In using explicit locking, you have to consider how well it works, so take a look at the results below…

Thread Complete: 17 Successfully made 58142 out of 100000
Thread Complete: 12 Successfully made 57627 out of 100000
Thread Complete: 9 Successfully made 57901 out of 100000
Thread Complete: 16 Successfully made 56754 out of 100000
Thread Complete: 3 Successfully made 56914 out of 100000
Thread Complete: 14 Successfully made 57048 out of 100000
Thread Complete: 8 Successfully made 56817 out of 100000
Thread Complete: 4 Successfully made 57134 out of 100000
Thread Complete: 15 Successfully made 56636 out of 100000
Thread Complete: 19 Successfully made 56399 out of 100000
Thread Complete: 2 Successfully made 56603 out of 100000
Thread Complete: 13 Successfully made 56889 out of 100000
Thread Complete: 0 Successfully made 56904 out of 100000
Thread Complete: 5 Successfully made 57119 out of 100000
Thread Complete: 7 Successfully made 56776 out of 100000
Thread Complete: 6 Successfully made 57076 out of 100000
Thread Complete: 10 Successfully made 56871 out of 100000
Thread Complete: 11 Successfully made 56863 out of 100000
Thread Complete: 18 Successfully made 56916 out of 100000
Thread Complete: 1 Successfully made 57304 out of 100000

These show that although the program didn’t deadlock and hang indefinitely, it only managed to make the balance transfer in slightly more than half of the transfer requests. This means that it’s burning lots of processing power looping and looping and looping – which is altogether not very efficient. Also, I said a moment ago that the program “didn’t deadlock and hang indefinitely”, which is not quite true. If you think about what’s happening then you’ll realize that your program is deadlocking and then backing out of that situation.
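One common remedy for this kind of busy retry loop, and this is my suggestion rather than something from the original sample, is to sleep for a short random interval between failed attempts so that two colliding threads are unlikely to retry in lock-step and collide again:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.locks.ReentrantLock;

public class BackoffRetry {

    // Retries an action that needs both locks, backing off for a short
    // random interval after each failed attempt instead of spinning flat out.
    static boolean withBothLocks(ReentrantLock a, ReentrantLock b,
                                 Runnable action, int attempts) {
        for (int i = 0; i < attempts; i++) {
            if (a.tryLock()) {
                try {
                    if (b.tryLock()) {
                        try {
                            action.run();
                            return true;      // both locks held: action done
                        } finally {
                            b.unlock();
                        }
                    }
                } finally {
                    a.unlock();               // release the first lock before retrying
                }
            }
            try {
                // Random 0-4 ms back-off to break up retry lock-step
                Thread.sleep(ThreadLocalRandom.current().nextInt(5));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return false;                         // gave up after all attempts
    }

    public static void main(String[] args) {
        ReentrantLock l1 = new ReentrantLock();
        ReentrantLock l2 = new ReentrantLock();
        System.out.println(withBothLocks(l1, l2, () -> {}, 10)); // true: uncontended
    }
}
```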
The second version of my explicit locking demo code uses the tryLock(long time, TimeUnit unit) method mentioned above.

private boolean transfer(Account fromAccount, Account toAccount, int transferAmount) throws OverdrawnException {boolean success = false;try { if (fromAccount.tryLock(LOCK_TIMEOUT, TimeUnit.MILLISECONDS)) { try { if (toAccount.tryLock(LOCK_TIMEOUT, TimeUnit.MILLISECONDS)) {success = true; fromAccount.withDrawAmount(transferAmount); toAccount.deposit(transferAmount); } } finally { toAccount.unlock(); } } } catch (InterruptedException e) { e.printStackTrace(); } finally { fromAccount.unlock(); }return success; }

In this code I’ve replaced the for loop with a tryLock(...) timeout of 1 millisecond. This means that when tryLock(...) is called and cannot acquire the lock, it’ll wait 1 ms before rolling back and giving up.

Thread Complete: 0 Successfully made 26637 out of 100000
Thread Complete: 14 Successfully made 26516 out of 100000
Thread Complete: 3 Successfully made 26552 out of 100000
Thread Complete: 11 Successfully made 26653 out of 100000
Thread Complete: 7 Successfully made 26399 out of 100000
Thread Complete: 1 Successfully made 26602 out of 100000
Thread Complete: 18 Successfully made 26606 out of 100000
Thread Complete: 17 Successfully made 26358 out of 100000
Thread Complete: 19 Successfully made 26407 out of 100000
Thread Complete: 16 Successfully made 26312 out of 100000
Thread Complete: 15 Successfully made 26449 out of 100000
Thread Complete: 5 Successfully made 26388 out of 100000
Thread Complete: 8 Successfully made 26613 out of 100000
Thread Complete: 2 Successfully made 26504 out of 100000
Thread Complete: 6 Successfully made 26420 out of 100000
Thread Complete: 4 Successfully made 26452 out of 100000
Thread Complete: 9 Successfully made 26287 out of 100000
Thread Complete: 12 Successfully made 26507 out of 100000
Thread Complete: 10 Successfully made 26660 out of 100000
Thread Complete: 13 Successfully made 26523 out of 100000

The results
above show that when using a timer, the balance transfer success rate falls even further, to slightly more than 25%. Although it’s no longer burning stacks of processor time, it is still highly inefficient. I could mess around with both of these code samples for quite some time, choosing variables that tune the app and improve performance, but at the end of the day there’s no real substitute for getting your lock ordering right. I’d personally prefer to use the old-fashioned implicit locking of the synchronized keyword wherever possible and reserve explicit locking for those few situations where the deadlocking code is old, gnarly, indecipherable, I’ve tried everything else, the app needs to go live, it’s late and it’s time to go home… For more information see the other blogs in this series. All source code for this and the other blogs in the series is available on Github at git://github.com/roghughe/captaindebug.git   Reference: Investigating Deadlocks – Part 5: Using Explicit Locking from our JCG partner Roger Hughes at the Captain Debug’s Blog blog. ...
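To illustrate the lock-ordering alternative mentioned above, here is a sketch of mine (a simplified Account, not the one from this series) that always acquires the two locks in account-number order. Whatever order the arguments arrive in, every thread agrees on the acquisition order, so the hold-one-wait-for-the-other deadlock cycle cannot form:

```java
import java.util.concurrent.locks.ReentrantLock;

public class OrderedLockTransfer {

    // A stripped-down account: just a number, a balance and a lock.
    static class Account {
        final int number;
        int balance;
        final ReentrantLock lock = new ReentrantLock();
        Account(int number, int balance) { this.number = number; this.balance = balance; }
    }

    // Always lock the account with the lower number first.
    static void transfer(Account from, Account to, int amount) {
        Account first = from.number < to.number ? from : to;
        Account second = (first == from) ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                from.balance -= amount;
                to.balance += amount;
            } finally {
                second.lock.unlock();
            }
        } finally {
            first.lock.unlock();
        }
    }

    public static void main(String[] args) {
        Account a = new Account(1, 1000);
        Account b = new Account(2, 1000);
        transfer(a, b, 250);   // a -> b, locks a then b
        transfer(b, a, 100);   // b -> a, still locks a then b
        System.out.println(a.balance + " " + b.balance); // 850 1150
    }
}
```

Because both transfer directions lock in the same order, no retrying is needed and every transfer completes, at the cost of having to pick (and stick to) a global ordering.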

MongoDB From the Trenches: Masochistic Embedded Collections

MongoDB supports rich documents that can include, among other things, embedded documents. This feature embodies a has-a relationship quite nicely and can, if modeled properly, reduce the number of finds required to retrieve related data, as there are no joins in Mongo. A classic example of embedding a collection of documents inside a parent document is contact addresses (e.g. mailing, email, Twitter, etc.) associated with a person. Think business cards. You can, of course, model this in any number of ways – in the traditional relational world, this would be a one-to-many relationship between at least two tables. Nevertheless, with a document-oriented database, you can model a parent person document with an embedded collection of contacts that are each themselves documents containing, say, a type (e.g. phone, twitter, email) and a value (which could be 555-555-5555, @jon_doe, etc.). This relationship works nicely with Mongo if the child embedded document never needs to exist outside of its parent. In the case of a business card, the contact document representing a phone number, for example, doesn’t necessarily make sense outside the context of the person it belongs to. With this relationship, you can easily find a particular person via his/her phone number effortlessly (via Mongo’s query language, you can reach inside arrays with its dot notation). And, once you have a handle to a person, you don’t need to execute a series of finds to ascertain contact information – it’s all right there. Nevertheless, things start to get painful quickly if you’d like to operate solely on a singular embedded document. That is, if you execute finds that are intended to deal with the expected resultant embedded document, you’re in for some work: as of Mongo 2.2, you can’t select a singular document from within a collection residing in a parent via a query. A find in this case will pull everything – it’s up to you (i.e. your application) to filter things.
An example will probably help: imagine the business card example from earlier – a person document containing an embedded collection of contacts:

{ first_name: 'Andrew', last_name: 'Glover', contacts: [ { type: 'cell', value: '555-555-5555', last_updated: 2012-09-01 23:41:51 UTC }, { type: 'home', value: '555-555-5551', last_updated: 2012-02-11 12:21:11 UTC } ] }

To find this document by a phone number is easy: db.persons.find({'contacts.value':'555-555-5555'}) But what if you wanted to find the contact that was recently updated, say since the beginning of the month, and change its value or add some additional meta-data? The query you’d like would look something like: db.persons.find({'contacts.last_updated': {$gte: datetime(2012, 8, 1)}}) This query works and will match the person ‘Andrew Glover’ – but the catch here is that what is returned is the entire document. You can add query limiters if you’d like (e.g. {contacts:1}); however, that will merely return a person document with only a collection of contacts. Thus, you are left to iterate over the resultant collection of contacts and work your magic that way. That is, you still have to find the contact document that was edited this month! In your code! No big deal, you say? This particular example is, indeed, a bit contrived; however, imagine if the overall document is quite large (maybe it’s not a person but an organization!) and that the embedded collection is also lengthy (how many employees does Google have?). Now this simple update is pulling a lot of bytes across the wire (and taxing Mongo in the process) and then your app is working with a lot of bytes in memory (now the document is taxing your app!). Did you want this operation to happen quickly, under load too? Thus, with embedded document collections, if you envision having to work with a particular embedded document in isolation, it is better, at this point, to model has-a relationships with distinct collections (i.e.
in this example, life would be much easier if there is a person collection and a contacts one). Indeed, the flexibility of document-oriented, schema-less data stores is a boon to rapid evolutionary development. But you still have to do some thinking up front. Unless, of course, you’re a masochist. I’m a huge fan of Mongo. Check out some of the articles, videos, and podcasts that I’ve done, which focus on Mongo, including:Java development 2.0: MongoDB: A NoSQL datastore with (all the right) RDBMS moves Video demo: An introduction to MongoDB Eliot Horowitz on MongoDB 10gen’s Steve Francia talks MongoDB  Reference: MongoDB From the Trenches: Masochistic Embedded Collections from our JCG partner Andrew Glover at the The Disco Blog blog. ...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.