Defensive Programming: Being Just-Enough Paranoid

“Hey, let’s be careful out there.” – Sergeant Esterhaus, daily briefing to the force, Hill Street Blues

When developers run into an unexpected bug and can’t fix it, they’ll “add some defensive code” to make the code safer and to make it easier to find the problem. Sometimes just doing this will make the problem go away. They’ll tighten up data validation – making sure to check input and output fields and return values. They’ll review and improve error handling – maybe add some checking around “impossible” conditions. They’ll add some helpful logging and diagnostics. In other words, the kind of code that should have been there in the first place.

Expect the Unexpected

“The whole point of defensive programming is guarding against errors you don’t expect.” – Steve McConnell, Code Complete

The few basic rules of defensive programming are explained in a short chapter in Steve McConnell’s classic book on programming, Code Complete:

Protect your code from invalid data coming from “outside”, wherever you decide “outside” is: data from an external system or the user or a file, or any data from outside of the module/component. Establish “barricades”, “safe zones” or “trust boundaries” – everything outside of the boundary is dangerous, everything inside of the boundary is safe. In the barricade code, validate all input data: check all input parameters for the correct type, length, and range of values. Double-check for limits and bounds.

After you have checked for bad data, decide how to handle it. Defensive programming is NOT about swallowing errors or hiding bugs. It’s about deciding on the trade-off between robustness (keep running if there is a problem you can deal with) and correctness (never return inaccurate results). Choose a strategy to deal with bad data: return an error and stop right away (fail fast), return a neutral value, substitute data values, and so on. Make sure that the strategy is clear and consistent.
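As a sketch of what a barricade can look like in practice, here is a minimal Java example (the class, method names and limit values are my own invention, purely illustrative): the public method is the trust boundary, validating type, length and range up front and failing fast, while the code behind it is free to assume clean data.

```java
public class OrderBarricade {

    // Inside the barricade: this code can trust its inputs.
    // Prices are in cents to avoid floating-point surprises.
    static long totalWithShippingCents(long itemTotalCents) {
        return itemTotalCents < 5000 ? itemTotalCents + 495 : itemTotalCents;
    }

    // The barricade: validate everything coming from "outside" and
    // fail fast with a clear error instead of passing bad data along.
    public static long priceOrderCents(String customerId, long itemTotalCents) {
        if (customerId == null || customerId.isEmpty() || customerId.length() > 32) {
            throw new IllegalArgumentException("invalid customer id");
        }
        if (itemTotalCents < 0 || itemTotalCents > 100_000_000L) {
            throw new IllegalArgumentException("item total out of range: " + itemTotalCents);
        }
        return totalWithShippingCents(itemTotalCents);
    }

    public static void main(String[] args) {
        // 4250 cents of items is under the 5000-cent threshold, so shipping is added
        System.out.println(priceOrderCents("C-1001", 4250)); // prints 4745
    }
}
```

Fail fast is only one of the strategies listed above; the same barricade could just as well substitute a neutral value – the point is that the decision is made in one clearly marked place.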
Don’t assume that a function call or method call outside of your code will work as advertised. Make sure that you understand and test error handling around external APIs and libraries.

Use assertions to document assumptions and to highlight “impossible” conditions, at least in development and testing. This is especially important in large systems that have been maintained by different people over time, or in high-reliability code.

Add diagnostic code, logging and tracing intelligently to help explain what’s going on at run-time, especially if you run into a problem.

Standardize error handling. Decide how to handle “normal errors” or “expected errors” and warnings, and do all of this consistently. Use exception handling only when you need to, and make sure that you understand the language’s exception handling inside out:

“Programs that use exceptions as part of their normal processing suffer from all the readability and maintainability problems of classic spaghetti code.” – The Pragmatic Programmer

I would add a couple of other rules. From Michael Nygard’s Release It!: never, ever wait forever on an external call, especially a remote call. Forever can be a long time when something goes wrong. Use time-out/retry logic and his Circuit Breaker stability pattern to deal with remote failures. And for languages like C and C++, defensive programming also includes using safe function calls to avoid buffer overflows and common coding mistakes.

Different Kinds of Paranoia

The Pragmatic Programmer describes defensive programming as “Pragmatic Paranoia”. Protect your code from other people’s mistakes, and your own mistakes. If in doubt, validate. Check for data consistency and integrity. You can’t test for every error, so use assertions and exception handlers for things that “can’t happen”. Learn from failures in test and production – if this failed, look for what else can fail. Focus on critical sections of code – the core, the code that runs the business.
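Nygard’s “never wait forever” rule can be sketched with nothing but java.util.concurrent: wrap the external call in a Future and bound every wait, retrying a small fixed number of times. (slowRemoteCall is a stand-in for a real remote API here, and the timeout and retry values are illustrative, not a recommendation.)

```java
import java.util.concurrent.*;

public class BoundedCall {

    // Stand-in for a remote call that might hang or be slow.
    static String slowRemoteCall() throws InterruptedException {
        Thread.sleep(10);
        return "payload";
    }

    // Never wait forever: bound each attempt, retry a fixed number of times.
    static String callWithTimeout(ExecutorService pool, int attempts, long timeoutMillis)
            throws Exception {
        for (int i = 1; i <= attempts; i++) {
            Future<String> future = pool.submit(BoundedCall::slowRemoteCall);
            try {
                return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
            } catch (TimeoutException e) {
                future.cancel(true); // abandon this attempt, maybe retry
            }
        }
        throw new TimeoutException("remote call failed after " + attempts + " attempts");
    }

    public static String demo() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            return callWithTimeout(pool, 3, 500);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "payload"
    }
}
```

A full Circuit Breaker adds state on top of this (trip open after repeated failures, reject calls while open), but the bounded wait is the part that keeps “forever” out of the picture.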
Healthy paranoid programming is the right kind of programming. But paranoia can be taken too far. In the Error Handling chapter of Clean Code, Michael Feathers cautions that “many code bases are dominated by error handling”. Too much error handling code not only obscures the main path of the code (what the code is actually trying to do), it also obscures the error handling logic itself – so that it is harder to get it right, harder to review and test, and harder to change without making mistakes. Instead of making the code more resilient and safer, it can actually make the code more error-prone and brittle.

There’s healthy paranoia, then there’s over-the-top error checking, and then there’s bat-shit-crazy crippling paranoia – where defensive programming takes over and turns in on itself.

The first real-world system I worked on was a “store and forward” network control system for servers (they were called minicomputers back then) across the US and Canada. It shared data between distributed systems, scheduled jobs, and coordinated reporting across the network. It was designed to be resilient to network problems and to automatically recover and restart from operational failures. This was ground-breaking stuff at the time, and a hell of a technical challenge.

The original programmer on this system didn’t trust the network, didn’t trust the O/S, didn’t trust Operations, didn’t trust other people’s code, and didn’t trust his own code – for good reason. He was a chemical engineer turned self-taught system programmer who drank a lot while coding late at night and wrote thousands of lines of unstructured FORTRAN and Assembler under the influence. The code was full of error checking, self-diagnostics and error-correcting code; the files and data packets had their own checksums, file-level passwords and hidden control labels; and there was lots of code to handle sequence accounting exceptions and timing-related problems – code that mostly worked, most of the time.
If something went wrong that it couldn’t recover from, programs would crash and report a “label of exit” and dump the contents of variables – like today’s stack traces. You could theoretically use this information to walk back through the code to figure out what the hell happened. None of this looked anything like anything I had learned about in school. Reading and working with this code was like programming your way out of Arkham Asylum.

If the programmer ran into bugs and couldn’t fix them, that wouldn’t stop him. He would find a way to work around the bugs and make the system keep running. Later, after he left the company, I would find and fix a bug and congratulate myself – until it broke some “error-correcting” code somewhere else in the network that now depended on the bug being there.

After I finally figured out what was going on, I took out as much of this “protection” as I could safely remove, and cleaned up the error handling so that I could actually maintain the system without losing what was left of my mind. I set up trust boundaries for the code – although I didn’t know that’s what it was called then – deciding what data couldn’t be trusted and what could. Once this was done I was able to simplify the defensive code so that I could make changes without the system falling over itself, and still protect the core code from bad data, mistakes in the rest of the code, and operational problems.

Making code safer is simple

The point of defensive coding is to make the code safer and to help whoever is going to maintain and support the code – not to make their job harder. Defensive code is code – all code has bugs, and, because defensive code is dealing with exceptions, it is especially hard to test and to be sure that it will work when it has to. Understanding what conditions to check for and how much defensive coding is needed takes experience: working with code in production and seeing what can go wrong in the real world.
A lot of the work involved in designing and building secure, resilient systems is technically difficult or expensive. Defensive programming is neither – like defensive driving, it’s something that everyone can understand and do. It requires discipline, awareness and attention to detail, but it’s something that we all need to do if we want to make the world a little safer.

Reference: Defensive Programming: Being Just-Enough Paranoid from our JCG partner Jim Bird at the Building Real Software blog.

Git in colour

I’ve been using Git for a while now, but only today realized I can have coloured output for diff, grep, branch, show-branch and status, without having to hook in any other external tools (like colordiff, for example). Here’s my ~/.gitconfig file, which enables colour:

[user]
    name = Nick Boldt
    email = nickboldt (at) gmail.com
[giggle]
    main-window-maximized = false
    main-window-geometry = 1324x838+0+24
    main-window-view = HistoryView
[core]
    trustctime = false
    branch = auto
    diff = auto
    interactive = auto
    status = auto
    editor = vim
[merge]
    tool = vimdiff
[receive]
    denyCurrentBranch = warn
[branch]
    autosetuprebase = local
[color]
    ui = true
    diff = true
    grep = true
    branch = true
    showbranch = true
    status = true
[color "diff"]
    plain = normal dim
    meta = yellow dim
    frag = blue bold
    old = magenta
    new = cyan
    whitespace = red reverse
[color "status"]
    header = normal dim
    added = yellow
    untracked = magenta
[color "branch"]
    current = yellow reverse
    local = yellow
    remote = red

Reference: Git in colour from our JCG partner Nick Boldt at the DivByZero blog.

GWT Custom Button using UIBinder

Here’s an example of how to create a custom button using UIBinder on GWT.

public class GwtUIBinderButton implements EntryPoint {

  public void onModuleLoad() {
    Button button = new Button();
    button.setText("Button");
    button.addClickHandler(new ClickHandler() {
      @Override
      public void onClick(ClickEvent event) {
        Window.alert("Button clicked");
      }
    });
    RootPanel.get("container").add(button);
  }
}

public class Button extends Composite implements HasText, HasClickHandlers, ClickHandler {

  private static ButtonUiBinder uiBinder = GWT.create(ButtonUiBinder.class);

  interface ButtonUiBinder extends UiBinder<Widget, Button> {
  }

  @UiField(provided=true)
  FocusPanel pane = new FocusPanel();

  @UiField(provided=true)
  Label label = new Label();

  public Button() {
    pane.addClickHandler(this);
    initWidget(uiBinder.createAndBindUi(this));
  }

  @Override
  public HandlerRegistration addClickHandler(ClickHandler handler) {
    return addHandler(handler, ClickEvent.getType());
  }

  @Override
  public void onClick(ClickEvent event) {
    this.fireEvent(event);
  }

  @Override
  public String getText() {
    return label.getText();
  }

  @Override
  public void setText(String text) {
    label.setText(text);
  }
}

<!DOCTYPE ui:UiBinder SYSTEM "http://dl.google.com/gwt/DTD/xhtml.ent">
<ui:UiBinder xmlns:ui="urn:ui:com.google.gwt.uibinder"
    xmlns:g="urn:import:com.google.gwt.user.client.ui">
  <ui:style>
    .button {
      background-color: #eeeeee;
      background-image: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #eeeeee), color-stop(100%, #cccccc));
      background-image: -webkit-linear-gradient(top, #eeeeee, #cccccc);
      background-image: -moz-linear-gradient(top, #eeeeee, #cccccc);
      background-image: -ms-linear-gradient(top, #eeeeee, #cccccc);
      background-image: -o-linear-gradient(top, #eeeeee, #cccccc);
      background-image: linear-gradient(top, #eeeeee, #cccccc);
      border: 1px solid #ccc;
      border-bottom: 1px solid #bbb;
      -webkit-border-radius: 3px;
      -moz-border-radius: 3px;
      -ms-border-radius: 3px;
      -o-border-radius: 3px;
      border-radius: 3px;
      color: #333;
      font: bold 11px "Lucida Grande", "Lucida Sans Unicode", "Lucida Sans", Geneva, Verdana, sans-serif;
      line-height: 1;
      padding: 0px 0;
      text-align: center;
      text-shadow: 0 1px 0 #eee;
      width: 120px;
    }
    .button:hover {
      background-color: #dddddd;
      background-image: -webkit-gradient(linear, left top, left bottom, color-stop(0%, #dddddd), color-stop(100%, #bbbbbb));
      background-image: -webkit-linear-gradient(top, #dddddd, #bbbbbb);
      background-image: -moz-linear-gradient(top, #dddddd, #bbbbbb);
      background-image: -ms-linear-gradient(top, #dddddd, #bbbbbb);
      background-image: -o-linear-gradient(top, #dddddd, #bbbbbb);
      background-image: linear-gradient(top, #dddddd, #bbbbbb);
      border: 1px solid #bbb;
      border-bottom: 1px solid #999;
      cursor: pointer;
      text-shadow: 0 1px 0 #ddd;
    }
    .button:active {
      border: 1px solid #aaa;
      border-bottom: 1px solid #888;
      -webkit-box-shadow: inset 0 0 5px 2px #aaaaaa, 0 1px 0 0 #eeeeee;
      -moz-box-shadow: inset 0 0 5px 2px #aaaaaa, 0 1px 0 0 #eeeeee;
      box-shadow: inset 0 0 5px 2px #aaaaaa, 0 1px 0 0 #eeeeee;
    }
    .pane {
      text-align: center;
    }
  </ui:style>
  <g:SimplePanel ui:field="pane" styleName="{style.button}">
    <g:Label ui:field="label"></g:Label>
  </g:SimplePanel>
</ui:UiBinder>

Adding an image:

<g:SimplePanel ui:field="pane" styleName="{style.button}">
  <g:HTMLPanel>
    <table align="center">
      <tr>
        <td>
          <g:Image styleName="{style.pane}" url="gwt-logo-42x42.png"></g:Image>
        </td>
        <td>
          <g:Label ui:field="label"></g:Label>
        </td>
      </tr>
    </table>
  </g:HTMLPanel>
</g:SimplePanel>

Reference: GWT Custom Button using UIBinder from our JCG partner Mark Andro Silva at the GlyphSoft blog.

Play 2.0: Akka, Rest, Json and dependencies

I’ve been diving more and more into Scala over the last couple of months. Scala together with the Play Framework provides you with a very effective and quick development environment (as soon as you’ve grasped the idiosyncrasies of the Scala language, that is). The guys behind the Play framework have been hard at work on the new version, Play 2.0. In Play 2.0 Scala plays a much more important role, and especially the complete build process has been immensely improved. The only problem I’ve encountered so far with Play 2.0 is the lack of good documentation. The guys are hard at work on updating the wiki, but it’s often still a lot of trial and error to get what you want. Note though, that often this isn’t just caused by Play; I also sometimes still struggle with the more exotic Scala constructs ;-)

In this article, I’ll give you an introduction into how you can accomplish some common tasks in Play 2.0 using Scala. More specifically, I’ll show you how to create an application that:

- uses sbt-based dependency management to configure external dependencies
- is edited in Eclipse (with the Scala-ide plugin) using the play eclipsify command
- provides a Rest API using Play’s routes
- uses Akka 2.0 (provided by the Play framework) to asynchronously call the database and generate Json (just because we can)
- converts Scala objects to Json using the Play-provided Json functionality (based on Jerkson)

I won’t show the database access using Querulous; if you want to know more about that, look at this article. I’d like to convert the Querulous code to using Anorm, but since my last experiences with Anorm were, how do I put this, not convincingly positive, I’m saving that for a later day.

Creating an application with Play 2.0

Getting up and running with Play 2.0 is very easy and is well documented, so I won’t spend too much time on this. For complete instructions see the Play 2.0 Wiki.
To get up and running after you have downloaded and extracted Play 2.0, take the following steps. Execute the following command from the console:

$ play new FirstStepsWithPlay20

This will create a new project and show you something like the following output:

       _            _
 _ __ | | __ _ _  _| |
| '_ \| |/ _' | || |_|
|  __/|_|\____|\__ (_)
|_|            |__/

play! 2.0-RC2, http://www.playframework.org

The new application will be created in /Users/jos/Dev/play-2.0-RC2/FirstStepsWithPlay20

What is the application name?
> FirstStepsWithPlay20

Which template do you want to use for this new application?

1 - Create a simple Scala application
2 - Create a simple Java application
3 - Create an empty project

> 1

OK, application FirstStepsWithPlay20 is created.
Have fun!

You’ve now got an application you can run. Change to the just-created directory and execute play run:

$ play run
[info] Loading project definition from /Users/jos/Dev/play-2.0-RC2/FirstStepsWithPlay2/project
[info] Set current project to FirstStepsWithPlay2 (in build file:/Users/jos/Dev/play-2.0-RC2/FirstStepsWithPlay2/)

--- (Running the application from SBT, auto-reloading is enabled) ---

[info] play - Listening for HTTP on port 9000...

(Server started, use Ctrl+D to stop and go back to the console...)

If you navigate to http://localhost:9000, you can see your first Play 2.0 application. And you’re done with the basic installation of Play 2.0.

Dependency management

I mentioned in the introduction that I didn’t start this project from scratch. I rewrote a Rest service I made with Play 1.2.4, Akka 1.x, JAX-RS and Json-Lift to the components provided by the Play 2.0 framework. Since dependency management changed between Play 1.2.4 and Play 2.0, I needed to configure my new project with the dependencies I needed. In Play 2.0 you do this in a file called Build.scala, which you can find in the project folder of your project.
After adding the dependencies from my previous project, this file looked like this:

import sbt._
import Keys._
import PlayProject._

object ApplicationBuild extends Build {

  val appName = "FirstStepsWithPlay2"
  val appVersion = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    "com.twitter" % "querulous" % "2.6.5",
    "net.liftweb" %% "lift-json" % "2.4",
    "com.sun.jersey" % "jersey-server" % "1.4",
    "com.sun.jersey" % "jersey-core" % "1.4",
    "postgresql" % "postgresql" % "9.1-901.jdbc4"
  )

  val main = PlayProject(appName, appVersion, appDependencies, mainLang = SCALA).settings(
    // Add extra resolvers for the Twitter and DevJava repositories
    resolvers += "Twitter repo" at "http://maven.twttr.com/",
    resolvers += "DevJava repo" at "http://download.java.net/maven/2/"
  )
}

How to use this file is rather straightforward once you’ve read the sbt documentation (http://code.google.com/p/simple-build-tool/wiki/LibraryManagement). Basically, we define the libraries we want using appDependencies, and we define some extra repositories where sbt should download the dependencies from (using resolvers). A nice thing to mention is that you can specify %% when defining a dependency. This means that we also want to search for a library that matches our version of Scala: sbt looks at the currently configured Scala version and adds a qualifier for it, which makes sure we get a version of the library that works with our Scala version. Like I mentioned, I wanted to replace most of the external libraries I used with functionality from Play 2.0.
After removing the stuff I didn’t use anymore, this file looks like this:

import sbt._
import Keys._
import PlayProject._

object ApplicationBuild extends Build {

  val appName = "FirstStepsWithPlay2"
  val appVersion = "1.0-SNAPSHOT"

  val appDependencies = Seq(
    "com.twitter" % "querulous" % "2.6.5",
    "postgresql" % "postgresql" % "9.1-901.jdbc4"
  )

  val main = PlayProject(appName, appVersion, appDependencies, mainLang = SCALA).settings(
    // Add extra resolver for the Twitter repository
    resolvers += "Twitter repo" at "http://maven.twttr.com/"
  )
}

With the dependencies configured, I can configure this project for my IDE. Even though all my colleagues are big IntelliJ proponents, I still keep coming back to what I’m used to: Eclipse. So let’s see what you need to do to get this project up and running in Eclipse.

Work from Eclipse

In my Eclipse version I’ve got the Scala plugin installed, and the Play 2.0 framework works nicely together with this plugin. To get your project into Eclipse, all you have to do is run the play eclipsify command:

jos@Joss-MacBook-Pro.local:~/dev/play-2.0-RC2/FirstStepsWithPlay2$ ../play eclipsify
[info] Loading project definition from /Users/jos/Dev/play-2.0-RC2/FirstStepsWithPlay2/project
[info] Set current project to FirstStepsWithPlay2 (in build file:/Users/jos/Dev/play-2.0-RC2/FirstStepsWithPlay2/)
[info] About to create Eclipse project files for your project(s).
[info] Compiling 1 Scala source to /Users/jos/Dev/play-2.0-RC2/FirstStepsWithPlay2/target/scala-2.9.1/classes...
[info] Successfully created Eclipse project files for project(s):
FirstStepsWithPlay2
jos@Joss-MacBook-Pro.local:~/dev/play-2.0-RC2/FirstStepsWithPlay2$

Now you can use “Import project” from Eclipse, and you can edit your Play 2.0 / Scala project directly from Eclipse. It’s possible to start the Play environment directly from Eclipse, but I haven’t used that. I just start the Play project from the command line once, and all the changes I make in Eclipse are immediately visible.
For those of you who’ve worked with Play longer, this is probably not so special anymore. Personally, I am still amazed by the productivity of this environment.

Provide a Rest API using Play’s routes

In my previous Play project I used the Jersey module to be able to use JAX-RS annotations to specify my Rest API. Since Play 2.0 contains a lot of breaking API changes and is pretty much a rewrite from the ground up, you can’t expect all the old modules to work; this was also the case for the Jersey module. I did dive into the code of this module to see if the changes were trivial, but since I couldn’t find any documentation on how to create a plugin for Play 2.0 that allows you to interact with the route processing, I decided to just switch to the way Play 2.0 does Rest. And using the “routes” file, it was very easy to connect the (just) two operations I exposed to a simple controller:

# Routes
# This file defines all application routes (Higher priority routes first)
# ~~~~

GET /resources/rest/geo/list controllers.Application.processGetAllRequest
GET /resources/rest/geo/:id  controllers.Application.processGetSingleRequest(id:String)

The corresponding controller looks like this:

package controllers

import akkawebtemplate.GeoJsonService
import play.api.mvc.Action
import play.api.mvc.Controller

object Application extends Controller {

  val service = new GeoJsonService()

  def processGetSingleRequest(code: String) = Action {
    val result = service.processGetSingleRequest(code)
    Ok(result).as("application/json")
  }

  def processGetAllRequest() = Action {
    val result = service.processGetAllRequest
    Ok(result).as("application/json")
  }
}

As you can see, I’ve just created two very simple, basic actions. I haven’t looked at fault and exception handling yet, but the Rest API offered by Play really makes using an additional Rest framework unnecessary. That’s the first of the frameworks. The next part of my original application that needed to change was the Akka code.
Play 2.0 includes the latest version of the Akka library (2.0-RC1). Since my original Akka code was written against 1.2.4, there were a lot of conflicts, and updating the original code wasn’t so easy.

Using Akka 2.0

I won’t dive into all the problems I had with Akka 2.0. The biggest problem was the very crappy documentation on the Play Wiki and on the Akka website – or rather, my crappy skills at locating the correct information in the Akka documentation. Together with me only having used Akka for about three or four months, that isn’t the best combination. After a couple of frustrating hours, though, I just removed all the existing Akka code and started from scratch. Twenty minutes later I had everything working with Akka 2, using the master configuration from Play. In the next listing you can see the corresponding code (I’ve intentionally left in the imports, since in a lot of the examples you can find they are omitted, which makes an easy job that much harder):

import akka.actor.actorRef2Scala
import akka.actor.Actor
import akka.actor.Props
import akka.dispatch.Await
import akka.pattern.ask
import akka.util.duration.intToDurationInt
import akka.util.Timeout
import model.GeoRecord
import play.libs.Akka
import resources.commands.Command
import resources.commands.FULL
import resources.commands.SINGLE
import resources.Database

/**
 * This actor is responsible for returning JSON objects from the database. It uses Querulous to
 * query the database and parses the result into the GeoRecord class.
 */
class JsonActor extends Actor {

  /**
   * Based on the type received we determine what command to execute; most case classes
   * can be executed using the normal two steps: execute a query, convert the result to
   * a set of json data and return this result.
   */
  def receive = {
    // when we receive a Command we process it and return the result
    case some: Command => {
      // execute the query from the command and process the results using the
      // processRows function
      var records: Seq[GeoRecord] = null

      // if the parameters value is null we do the normal query; if not we pass in a set of varargs
      some.parameters match {
        case null => records = Database.getQueryEvaluator.select(some.query) {some.processRows}
        case _ => records = Database.getQueryEvaluator.select(some.query, some.parameters:_*) {some.processRows}
      }

      // return the result as a json string
      sender ! some.toJson(records)
    }
    case _ => sender ! null
  }
}

/**
 * Handle the specified path. This rest service delegates the functionality to a specific actor
 * and, if the result from this actor isn't null, returns the result.
 */
class GeoJsonService {

  def processGetSingleRequest(code: String) = {
    val command = SINGLE()
    command.parameters = List(code)
    runCommand(command)
  }

  /**
   * Operation that handles the list REST command. This creates a command
   * that is forwarded to the actor to be executed.
   */
  def processGetAllRequest: String = {
    runCommand(FULL())
  }

  /**
   * Function that runs a command on one of the actors and sets the response
   */
  private def runCommand(command: Command): String = {
    // get the actor
    val actor = Akka.system.actorOf(Props[JsonActor])
    implicit val timeout = Timeout(5 seconds)
    val result = Await.result(actor ? command, timeout.duration).asInstanceOf[String]
    // return result as String
    result
  }
}

A lot of code, but I wanted to show you the actor definition and how to use it. Summarizing, the Akka 2.0 code you need to execute a request/reply pattern is this:

private def runCommand(command: Command): String = {
  // get the actor
  val actor = Akka.system.actorOf(Props[JsonActor])
  implicit val timeout = Timeout(5 seconds)
  val result = Await.result(actor ? command, timeout.duration).asInstanceOf[String]
  // return result as String
  result
}

This uses the global Akka configuration to retrieve an actor of the required type. We then send a command to the actor and are returned a Future, on which we wait at most 5 seconds for a result, which we cast to a String. This Future waits for our Actor to send a reply, which is done in the actor itself:

sender ! some.toJson(records)

With Akka replaced, I finally had a working system again. When looking through the documentation on Play 2.0, I noticed that starting from 2.0 they provide their own Json library. Since I used Lift-Json in the previous version, I thought it would be a nice exercise to move this code to the Json library provided by Play, which is based on Jerkson.

Moving to Jerkson

The move to the new library was a fairly easy one. Both Lift-Json and Jerkson use pretty much the same concept of building Json objects. In the old version I didn’t use any automatic marshalling (since I had to comply with the geojson format), so in this version I also did the marshalling manually. In the next listing you can see the old version and the new version together. As you can see, the concepts used in both are pretty much the same.
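The request/reply-with-a-bounded-wait shape in runCommand isn’t Akka-specific. For readers more at home in Java, the same idea can be sketched with plain java.util.concurrent, a CompletableFuture standing in for the actor’s reply (the command and result strings here are made up for illustration):

```java
import java.util.concurrent.*;

public class AskPatternSketch {

    // A worker that, like the JsonActor, receives a command and replies with a result.
    static CompletableFuture<String> ask(ExecutorService worker, String command) {
        return CompletableFuture.supplyAsync(() -> "json-for:" + command, worker);
    }

    // The equivalent of Await.result(actor ? command, 5 seconds): block, but never forever.
    public static String runCommand(String command) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();
        try {
            return ask(worker, command).get(5, TimeUnit.SECONDS);
        } finally {
            worker.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runCommand("FULL")); // prints "json-for:FULL"
    }
}
```

The essential point in both versions is the same: the caller sends a message, gets a future back, and waits on it with an explicit timeout rather than blocking indefinitely.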
# New version using Jerkson
val jsonstring = JsObject(List(
  "type" -> JsString("featureCollection"),
  "features" -> JsArray(records.map(r =>
    JsObject(List(
      "type" -> JsString("Feature"),
      "gm_naam" -> JsString(r.name),
      "geometry" -> Json.parse(r.geojson),
      "properties" -> {
        var toAdd = List[(String, play.api.libs.json.JsValue)]()
        r.properties.foreach(entry => toAdd ::= entry._1 -> JsString(entry._2))
        JsObject(toAdd)
      }))).toList)))

# Old version using Lift-Json
val json =
  ("type" -> "featureCollection") ~
  ("features" -> records.map(r =>
    ("type" -> "Feature") ~
    ("gm_naam" -> r.name) ~
    ("geometry" -> parse(r.geojson)) ~
    ("properties" -> {
      // create an empty object
      var obj = JNothing(0)
      // iterate over the properties
      r.properties.foreach(entry =>
        // add each property to the object; the reason we do this is
        // that otherwise it results in an arraylist, not a list of
        // separate properties
        obj = concat(obj, JField(entry._1, entry._2)))
      obj
    })))

And after all this, I have exactly the same as I already had, but now with Play 2.0 and without using any external libraries (except Querulous). So far my experiences with Play 2.0 have been very positive. The lack of good concrete examples and documentation can be annoying sometimes, but is understandable. They do provide a couple of extensive examples in their distribution, but nothing that matched my use cases. So hats off to the guys who are responsible for Play 2.0. What I’ve seen so far is a great and comprehensive framework, with lots of functionality, and a great environment to program Scala in. In the next couple of weeks I’ll see if I can get up enough courage to start with Anorm, and I’ll look at what Play has to offer on the client side. So far I’ve looked at LESS, which I really like, so I’ve got my hopes up for their template solution ;-)

Reference: Play 2.0: Akka, Rest, Json and dependencies from our JCG partner Jos Dirksen at the Smart Java blog.

Disassembling Tell Don’t Ask

In my last blog I defined Tell Don’t Ask (TDA) using a simple shopping cart example. In it, the shopping cart was responsible for working out the total cost of the items in the cart, as opposed to the client asking for a list of items and then calculating the total cost itself. The TDA example is shown below:

public class ShoppingCart {

  private final List<Item> items;

  public ShoppingCart() {
    items = new ArrayList<Item>();
  }

  public void addItem(Item item) {
    items.add(item);
  }

  public double calcTotalCost() {
    double total = 0.0;
    for (Item item : items) {
      total += item.getPrice();
    }
    return total;
  }
}

…and this is the test case:

@Test
public void calculateTotalCost() {
  ShoppingCart instance = new ShoppingCart();
  Item a = new Item("gloves", 23.43);
  instance.addItem(a);
  Item b = new Item("hat", 10.99);
  instance.addItem(b);
  Item c = new Item("scarf", 5.99);
  instance.addItem(c);
  double totalCost = instance.calcTotalCost();
  assertEquals(40.41, totalCost, 0.0001);
}

I have to ask though: does it all come down to a matter of language and semantics? Consider the example above – is the client telling or asking? Although this code is much better than my ask-don’t-tell example, I think that a case can be made for saying that the client is asking for the total price. Consider the following issues:

In telling the ShoppingCart to return the total price, how do you know that you’re not querying the internal state of the object? Looking at the ShoppingCart code, you can see that the total cost is not part of the object’s direct internal state (1), but the object calculates it; hence the total price is part of the object’s derived internal state, and this is being returned to the caller.

In a TDA world, why would the client need the total cost? To figure out if you need to add a shipping cost? That can be done by the ShoppingCart. To submit the bill to the appropriate payment system?
Again, that can be done by the shopping cart.

If you agree that return values reflect the internal state of an object, whether direct or inferred, then, as Mr Spock would say, logic dictates that you’d have to conclude that all method signatures should have a void return value, never throw exceptions, handle all errors themselves, and look something like this:

public void processOrder(String arg1, int arg2);

And this is where some of the logic can start to unravel. Doesn’t a ShoppingCart contacting the Visa or Mastercard payment system break the single responsibility principle (SRP)? The WareHouse also needs to be informed that the order can be shipped – is that the responsibility of the ShoppingCart? And what if we wanted to print an itemised bill in PDF format? You can tell the ShoppingCart to print itself, but are we breaking the SRP by adding printing code to the ShoppingCart, however small it may be?

Perhaps this requires further thought, so take a look at the Communication Diagram below. This diagram shows the straightforward scenario of adding an item to the ShoppingCart. TDA works perfectly here because there are no branches and no decisions; the whole process is linear: the browser TELLS the OrderController to add an item, and the OrderController TELLS the ShoppingCart to add an item to itself.

Now, take a look at the next scenario. Here the user is confirming the order and then sitting down to eagerly await its delivery. If you look at the diagram, you’ll see that the browser TELLS the OrderController to confirm the order. The OrderController TELLS the ShoppingCart to confirm the order. The ShoppingCart TELLS the Visa system to charge the total to the user’s card. If the payment goes through okay, then the Visa card TELLS the WareHouse to ship the order. Now this works as a design if you accept that a ShoppingCart is responsible for maintaining a list of items the user wants and for paying for those items.
You’ve also got to accept that the Visa card is responsible for shipping them to the customer, all of which doesn’t make sense to me as it breaks the SRP and, in my opinion, the SRP is one of the fundamental features of good software design. Besides, when I’m doing my shopping in the supermarket I don’t ask my shopping cart to pay the bill; I have to get my wallet out. If you take that analogy further, then you’ll conclude that it’s some other object’s responsibility to marshal the flow of the transaction; the OrderController’s, perhaps? In this final diagram you can see that the browser TELLS the OrderController to confirm the order. The OrderController ASKS the ShoppingCart for the total cost, then TELLS the Visa object to pay the bill, and finally, if the Visa object returns success, it TELLS the WareHouse to ship the ShoppingCart. To me this design makes more sense; it’s tell don’t ask with a touch of pragmatism. Now don’t get me wrong, I like the idea of tell don’t ask; it makes perfect sense, but it’s not perfect and it can stumble. If you search the Internet for examples you often find that they’re linear in nature, reflecting the first two diagrams above where A calls B, B calls C, C calls D and so on. This doesn’t reflect most applications, as at some point in your program’s execution you’re forced to ask an object for data and to make a decision based upon that data. The second reason tell don’t ask stumbles is that it’s so easy to fall into the trap of asking an object for data without even thinking about it. For example, take a look at the snippet below:

AnnotationChecker checker = new AnnotationChecker(clazz);
if (checker.isMyComponent()) {
    Object obj = clazz.newInstance();
    result.put(clazz, obj);
}

This example, plucked from my github dependency-injection-factory sample project, shows me asking an object for its state and using that information to make a decision.
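To make the final diagram concrete, here is a minimal sketch of that pragmatic OrderController, reusing the ShoppingCart from the first listing. The PaymentGateway and WareHouse interfaces, their method names, and the boolean success result are my own assumptions for illustration, not taken from the original article:

```java
import java.util.ArrayList;
import java.util.List;

// ShoppingCart and Item as in the first listing.
class Item {
    private final String name;
    private final double price;
    Item(String name, double price) { this.name = name; this.price = price; }
    double getPrice() { return price; }
}

class ShoppingCart {
    private final List<Item> items = new ArrayList<Item>();
    void addItem(Item item) { items.add(item); }
    double calcTotalCost() {
        double total = 0.0;
        for (Item item : items) { total += item.getPrice(); }
        return total;
    }
}

// Hypothetical collaborators; names and signatures are assumptions
// made purely for this sketch.
interface PaymentGateway { boolean pay(double amount); }
interface WareHouse { void ship(ShoppingCart cart); }

class OrderController {
    private final PaymentGateway visa;
    private final WareHouse wareHouse;

    OrderController(PaymentGateway visa, WareHouse wareHouse) {
        this.visa = visa;
        this.wareHouse = wareHouse;
    }

    // ASK the cart for its total, then TELL the collaborators what to do.
    boolean confirmOrder(ShoppingCart cart) {
        double total = cart.calcTotalCost(); // the single pragmatic "ask"
        if (visa.pay(total)) {
            wareHouse.ship(cart);            // tell, don't ask
            return true;
        }
        return false;
    }
}
```

The single "ask" is confined to the controller, which keeps the ShoppingCart, the payment system and the warehouse each with one responsibility.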
Procedural programming strikes again with that AnnotationChecker check… (1) Direct internal state: a value held in an instance variable of an object. Derived or inferred internal state: a value returned by an object that is calculated or derived from the instance variables of that object. Reference: Disassembling Tell Don’t Ask from our JCG partner Roger Hughes at the Captain Debug’s Blog ....

JavaFX: Creating a Sprite Animation

While most of my posts so far dealt with JavaFX properties and bindings, today I want to write about another part of the JavaFX runtime I also work on: the animation API. In this article I will explain how to write custom animations in JavaFX and use this approach to create a class for sprite animations. (This will also be good practice for one of my sessions at the conference 33rd Degree. I plan to write a game in JavaFX in just one hour. That’s going to be fun!)

The Horse in Motion

There are many very good articles out there describing the predefined Transitions (TranslateTransition, RotateTransition etc.) and Timelines. Most of the time these approaches are sufficient, but in some cases one just needs more flexibility. And that is when the Transition class comes into play, which can be extended to define a custom animation. To write your own animation class by extending Transition, two steps are required: specify the duration of a single cycle, and implement the interpolate() method.

Duration of a single cycle

You can set the duration of a cycle by calling the protected method setCycleDuration(). Most of the time the duration is either fixed (if the animation is used only once) or configurable by the user. Almost all predefined Transitions in the JavaFX runtime fall into the second category. They expose their cycle duration via the duration property; you probably want to do that in your class, too. In rare cases the duration of a cycle depends on other values. For example the durations of a SequentialTransition and a ParallelTransition depend on the durations of their children. You can change the cycle duration as often as you like, but note that it does not affect a currently running animation. Only after an animation is stopped and started again is the new cycle duration taken into account.

The interpolate() method

The interpolate() method is abstract and needs to be overridden. It defines the actual behavior of the animation.
The method interpolate() is called by the runtime in every frame while the animation is playing. A value frac, a double between 0.0 and 1.0 (both inclusive), is passed in, which specifies the current position. The value 0.0 marks the start of the animation, the value 1.0 the end. Any value in between defines the relative position. Note that a possible Interpolator is already taken into account when the value of frac is calculated.

The class SpriteAnimation

To demonstrate how a custom transition is defined, we will look at a class that allows us to do sprite animations. It takes an image that has several frames and moves the viewport from one frame to the other over time. We will test this class with the famous “The Horse in Motion” by Eadweard Muybridge. Enough talk, here is the code:

package sandboxfx;

import javafx.animation.Interpolator;
import javafx.animation.Transition;
import javafx.geometry.Rectangle2D;
import javafx.scene.image.ImageView;
import javafx.util.Duration;

public class SpriteAnimation extends Transition {

    private final ImageView imageView;
    private final int count;
    private final int columns;
    private final int offsetX;
    private final int offsetY;
    private final int width;
    private final int height;

    private int lastIndex;

    public SpriteAnimation(
            ImageView imageView,
            Duration duration,
            int count, int columns,
            int offsetX, int offsetY,
            int width, int height) {
        this.imageView = imageView;
        this.count = count;
        this.columns = columns;
        this.offsetX = offsetX;
        this.offsetY = offsetY;
        this.width = width;
        this.height = height;
        setCycleDuration(duration);
        setInterpolator(Interpolator.LINEAR);
    }

    @Override
    protected void interpolate(double k) {
        final int index = Math.min((int) Math.floor(k * count), count - 1);
        if (index != lastIndex) {
            final int x = (index % columns) * width + offsetX;
            final int y = (index / columns) * height + offsetY;
            imageView.setViewport(new Rectangle2D(x, y, width, height));
            lastIndex = index;
        }
    }
}

Listing 1: The class SpriteAnimation

For the sake of simplicity, this example class just takes all parameters in the constructor and does not allow them to be changed later. In most cases this is sufficient. The class expects an ImageView, the duration of a single cycle (that is, how long it should take to go through all frames), the number of frames, the number of columns (how many frames are in one row in the image), the offset of the first frame, and the width and height of all frames. The duration of the full cycle is passed to the super class by calling setCycleDuration(); all other values are stored. As a last step in the constructor, the interpolator is set to linear. By default an easing interpolator is set for all transitions, because that is what usually gives the best results. But in our case we want to run through all frames at the same speed, and an easing interpolator would look weird. The interpolate() method takes the passed-in value and calculates which frame needs to be displayed at the moment. If it has changed since the last time interpolate() was called, the position of the new frame is calculated and the viewport of the ImageView is set accordingly. That’s it.

The Horse in Motion

To demonstrate the SpriteAnimation class, we will animate The Horse in Motion. The code to do that is straightforward; most of the work is already done. It creates an ImageView with the viewport set to the first frame and instantiates the SpriteAnimation class. The parameters are just estimates, you may want to tweak them a little.
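As a quick aside before the demo: the viewport arithmetic inside interpolate() can be factored into a small pure helper and checked in isolation, independently of the JavaFX runtime. The class and method names below are mine, not part of the original listing:

```java
// Pure helper mirroring the arithmetic inside interpolate(); the names
// here are illustrative only.
final class SpriteFrameMath {

    private SpriteFrameMath() { }

    // Map the animation position frac (0.0..1.0) to a frame index,
    // clamped so that frac == 1.0 still yields the last frame.
    static int frameIndex(double frac, int count) {
        return Math.min((int) Math.floor(frac * count), count - 1);
    }

    // Top-left x of the frame's viewport within the sprite sheet.
    static int frameX(int index, int columns, int width, int offsetX) {
        return (index % columns) * width + offsetX;
    }

    // Top-left y of the frame's viewport within the sprite sheet.
    static int frameY(int index, int columns, int height, int offsetY) {
        return (index / columns) * height + offsetY;
    }
}
```

With the Horse in Motion values (10 frames, 4 columns, 374x243 frames, offset 18/25), frame 5 sits in the second row, second column of the sheet.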
package sandboxfx;

import javafx.animation.Animation;
import javafx.application.Application;
import javafx.geometry.Rectangle2D;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.stage.Stage;
import javafx.util.Duration;

public class SandboxFX extends Application {

    private static final Image IMAGE = new Image("http://upload.wikimedia.org/wikipedia/commons/7/73/The_Horse_in_Motion.jpg");

    private static final int COLUMNS = 4;
    private static final int COUNT = 10;
    private static final int OFFSET_X = 18;
    private static final int OFFSET_Y = 25;
    private static final int WIDTH = 374;
    private static final int HEIGHT = 243;

    public static void main(String[] args) {
        launch(args);
    }

    @Override
    public void start(Stage primaryStage) {
        primaryStage.setTitle("The Horse in Motion");

        final ImageView imageView = new ImageView(IMAGE);
        imageView.setViewport(new Rectangle2D(OFFSET_X, OFFSET_Y, WIDTH, HEIGHT));

        final Animation animation = new SpriteAnimation(
                imageView,
                Duration.millis(1000),
                COUNT, COLUMNS,
                OFFSET_X, OFFSET_Y,
                WIDTH, HEIGHT
        );
        animation.setCycleCount(Animation.INDEFINITE);
        animation.play();

        primaryStage.setScene(new Scene(new Group(imageView)));
        primaryStage.show();
    }
}

Listing 2: The Horse in Motion with JavaFX

Conclusion

Defining your own animation by extending the Transition class is simple and straightforward. It is an extremely powerful approach though, because an animation created this way has all the functionality regular animations have. For example you can play it slower or faster by changing the rate, and you can even play it backwards. You can run it in a loop, and you can use it within a ParallelTransition or SequentialTransition to create even more complex animations. Reference: Creating a Sprite Animation with JavaFX from our JCG partner Michael Heinrichs at the Mike’s Blog....

How Badly Do We Want a New Java Date/Time API?

The current Java.net poll question is, “How critical is it for JSR-310 (new Date and Time API) to be implemented in Java 8?” At the time of writing this post, nearly 150 respondents have voted and an overwhelming percentage have answered either “Very” (53%) or “It would be nice, but we can get by using the current classes” (22%). With 3/4 of the respondents feeling that it would either “be nice” or is “very important” to get a new Java Date/Time API, I think it’s safe to say that Java‘s current Date and Calendar approach has not grown on us. Perhaps my biggest surprise so far with the survey results is that 2% of the respondents have stated, “I prefer the current date and time classes.” Maybe that’s from the people who wrote those classes? I tend to use Java’s date/time/calendar APIs off and on. When I use them, I really don’t like them, but do start to tolerate them. I begin to forget how much I loathe them until I use them again. I recently helped a colleague familiar with Java (but not with the date/time APIs) to understand how to do some Date/Calendar/String manipulation and presentation. Explaining this mess out loud to him made the ridiculous difficulty of using these too-flexible APIs even more obvious to me. I could see on his face that he was thinking I was either kidding him or didn’t know what I was talking about. Although I’ve gotten to the point where I can make do with them, it’s much more difficult than it should be. Much has been written about the woes of date/time handling in Java. Rob Sanheim wrote in 2006 about date/time-related problems in three of his Top Five Worst APIs in Java (Calendar, Date, and DateFormat/SimpleDateFormat). Java’s Date-handling is focused on in Cameron Purdy‘s 2005 post The Seven Habits of Highly Dysfunctional Design. Tero Kadenius reminded us in the 2011 post Handling dates in Java that “The date/time API in Java is notoriously painful to work with.” The aptly named post Java Dates Still Suck was published in 2009.
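To give a taste of that pain, here is a hedged sketch of the Calendar/SimpleDateFormat juggling needed just to compute and format the last day of February 2012. The class and method names are mine; note the zero-based month, the mutable Calendar, and the formatter carrying its own time zone:

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.Locale;
import java.util.TimeZone;

class CalendarPain {

    // Build "the last day of February 2012" and format it as yyyy-MM-dd.
    static String lastDayOfFebruary2012() {
        Calendar cal = new GregorianCalendar(TimeZone.getTimeZone("UTC"), Locale.US);
        cal.clear();                                  // otherwise "now" leaks in
        cal.set(2012, Calendar.FEBRUARY, 1);          // month is 0-based!
        cal.set(Calendar.DAY_OF_MONTH,
                cal.getActualMaximum(Calendar.DAY_OF_MONTH)); // leap year: 29
        Date date = cal.getTime();
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("UTC")); // formatter has its own zone
        return fmt.format(date);
    }
}
```

Libraries like Joda-Time, and the JSR-310 API the poll asks about, express the same intent in a line or two without the mutable state.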
The current Java.net survey confirms my feeling after working with numerous Java developers and after reading many blogs and articles that the vast majority of Java developers are anxious to get a better standardized way of handling dates and times in Java. Reference: How Badly Do We Want a New Java Date/Time API?  from our JCG partner Dustin Marx at the Inspired by Actual Events  blog....

Working with legacy code

Context

Large organisations’ systems may have from tens of thousands to a few million lines of code, and a good part of those lines is legacy code. By legacy code I mean code without tests. Many of these systems started being written many years ago, before the existence of the cool things and frameworks we take for granted today. Due to how some systems are configured (database, properties files, proprietary XML), we cannot simply change a class constructor or method signature to pass in the dependencies without undertaking a much larger refactoring. Changing one piece of code can break completely unknown parts of the system. Developers are not comfortable making changes in certain areas of the system. Test-first and unit testing are not widely used by developers.

The Rule

In our commitment to make the existing systems better, more reliable and easier to deal with (changing and adding more features), we established the following rule: We cannot change existing code if it is not covered by tests. In case we need to change the existing code to be able to write the tests, we should do it only using the automated refactoring tools provided by our IDEs. Eclipse and IntelliJ are great for that, if you are a Java developer like me. That means, before we make any change, we need to have the current code under test, guaranteeing that we don’t break its current behaviour. Once the existing code is under test, we then write a new test for the new feature (or change an existing test if it’s a change in existing behaviour) and finally we can change the code.

Problem

There is an eternal discussion on forums, mailing lists and user groups about TDD being responsible for the increase or decrease of a team’s velocity. I don’t want to start this discussion here because we are not doing TDD for the majority of the time. With legacy code, we spend the majority of our time trying to test existing code before we can do TDD to satisfy the new requirement. And this is slow. Very slow.
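A tiny illustration of the rule at work: before touching a legacy method, we pin down its current behaviour with a characterization test. The LegacyPriceCalculator below is invented for this sketch, not taken from any real system:

```java
// An invented legacy method, standing in for real untested code.
class LegacyPriceCalculator {
    // Current behaviour: 10% discount above 100, nothing otherwise.
    // We capture this as-is before changing anything.
    static double discountedPrice(double price) {
        if (price > 100.0) {
            return price * 0.9;
        }
        return price;
    }
}

// Characterization tests document what the code DOES today, not what
// we think it should do. Only once they pass do we start refactoring
// or adding the new behaviour test-first.
class LegacyPriceCalculatorCharacterizationTest {
    static void run() {
        assert LegacyPriceCalculator.discountedPrice(50.0) == 50.0;
        assert LegacyPriceCalculator.discountedPrice(200.0) == 180.0;
        // Boundary pinned down: exactly 100 gets no discount today.
        assert LegacyPriceCalculator.discountedPrice(100.0) == 100.0;
    }
}
```

In a real 2000-line class, writing these tests first is exactly the slow part described next, but it is what makes the later change safe.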
It can be, in our experience, somewhere between 5 and 20 times slower than just making the change we need and manually testing it. You may be asking, why does it take so much longer? My answer is: if you haven’t, try unit testing all the possible execution paths in a 2000+ line class. Have you tried to do it with a 1000-line method, full of multiple return statements, hard-wired dependencies, loads of duplication, a big chain of nested IFs and loops, etc.? Believe me, it will take you a bloody long time. Remember, you can’t change the existing code before writing tests. Just automated refactorings. Safety first. :-)

Why are we “under-performing”?

Quite often, after starting to follow this rule, management will enquire why the team is under-performing. Although it is a fact that the team is taking longer to implement a feature, this is also a very naive view of the problem. When management and development teams talk about a team’s velocity, they just take into consideration the amount of time taken to write the code related to a few specific requirements. Over time, management and teams just forget how much time they spend on things that are caused by the lack of quality and tests in their code. They never take into consideration that there is absolutely no way that developers would be able to manually test the entire system and guarantee they didn’t break anything. That would take days, for every piece of code they write or change. So they leave that to QA. By not increasing the test coverage, the team will never have a regression test suite. Manually testing the system, as it grows in size and complexity, will take the QA team longer and longer. It will also be more error prone, since testing scripts need to be kept in sync with the current behaviour of the system. This is also a waste of time.
Also, just hacking code all the time will make the system more fragile and way more complicated to understand, which will make developers take longer to hack more code in. Because we can’t press a button and just unit test a specific piece of code, the team is forced to spend a lot of time debugging the application, meaning that the team needs to spend time compiling and deploying it. In certain systems, in order to test a small piece of code, you need to rely on other systems to trigger something so your system can respond. Or you need to manually send messages to a queue so your piece of code is invoked. Sometimes, depending on the complexity of your system and its dependencies, you can’t even run it locally, meaning that you need to copy your application to another environment (smoke, UAT, or whatever name you use in your company) in order to test the small piece of code you wrote or changed. And when you finally manage to do all that to execute that line of code you just added, you realise that the code is wrong. Sigh. Let’s start over. Since I started practising TDD, I have realised that I very rarely use my debugger and very rarely have to deploy and run my application in order to test it. So basically, when people think we are spending too much time to write a feature because we were writing tests for the existing code first, they are rarely considering the time spent elsewhere. I’ve seen it many times: “We don’t have time to do it now. Let’s just do a quick fix.” And then, contradicting the laws of physics, more time is created when bugs are found and the QA phase needs to be extended. Panic takes over: “Oh, we need to abort the release. We can’t go live like that.” Exactly. The graphic above is not based on any formal research; it is purely based on my own experience during my career. I also believe that the decrease in the team’s velocity is related to the state of the untested code.
There is untested code out there that is very well written, and writing tests for it is quite straightforward. However, in my experience, this would be an exception. In summary, if you are constantly applying the “Boy Scout Rule” and always keep improving the code you need to touch, over time you will be making your code better and you will be going faster and more easily. If you don’t do that, you will have a false impression of good velocity at the start, but gradually, and quite often without noticing, you will start to slow down. Things that used to take a few days now take a few weeks. Because this loss of velocity happened slowly (over months or even years), no one really noticed until it was too late and everything now takes months.

Drifting away and losing focus

One interesting side effect of constantly making legacy code better, writing tests and refactoring, is that it is too easy to get carried away. More than once I’ve seen pairs, while trying to clean a piece of code, drift away from the task/story they were working on. That has also happened to me quite a few times. We get carried away when making progress and making things better. Very recently I was asked by one of the developers: “When do we stop?”. My answer was: “When we finish the task.”. In that context, that meant that the focus is always on the task at hand. We need to keep delivering what was agreed with our product owners (or whoever the person in charge of the requirements is). So, make the system better but always keep the focus on the task. Don’t go crazy with the refactoring, trying to re-write the whole system in a few days. Do enough to finish the task and try to isolate the parts that are still not good enough. Baby steps.

A more positive side effect

Another very common problem in legacy systems is the lack of understanding about the system and also about the business.
Quite often we find situations where the developers and business people are long gone and no one really knows how the system behaves. Writing tests for the existing code forces us to understand what it does. We try to mine some hidden business concepts in the middle of the procedural code and try to make them very explicit when naming our tests. That is a great way to drive our refactoring later on, using the business concepts captured by the tests. Quite often, we also get the business people involved, asking them questions and checking whether certain assumptions make sense. In writing the tests and refactoring the code, the previously completely unknown behaviour of the system is now well documented by the tests and code.

Quality is a long term investment

In a recent conversation, we were discussing how we could measure our “investment” in quality. A quick and simple answer would be that we could measure it by the number of bugs and the velocity at which the teams are delivering new requirements. However, it is never that simple. The question is, why do you think you need quality? Which problems do you have today that make you think they are quality related? What does quality mean anyway? When working with legacy code, after many years, we tend to forget how things were in the past. We just “accept” that things take a long time to be done because they have been taking a long time to be done for a long time. We just accept a (long) QA phase. It’s part of the process, isn’t it? And this is just wrong. Constantly increasing the quality level in a legacy system can make a massive difference to the amount of time and money spent on that project. That’s what we are pursuing. We want to deliver more, faster and with no bugs. It’s up to us, professional software developers, to revert this situation by showing a great attitude and respect for our customers.
We should never allow ourselves to get into a situation where our velocity and the quality of our work decrease because of our own incompetence. Reference: Working with legacy code from our JCG partner Sandro Mancuso at the Crafted Software blog....

The Java EE 6 Example – Galleria

Have you ever wondered where to find some good end-to-end examples built with Java EE 6? I have. Most of the stuff you find on the net is very basic and doesn’t solve real-world problems. This is true for the Java EE 6 tutorial. All the other stuff, like most of what Adam Bien publishes, are very tightly scoped examples which also don’t point you to a more complete solution. So, I was very pleased to stumble over a more complex example done by Vineet Reynolds. It’s called “Java EE 6 Galleria” and you can download the source code from bitbucket. Vineet is a software engineer contributing to the Arquillian project; more specifically, he contributed bug fixes and worked on a few feature requests for Arquillian Core, and the GlassFish, WebLogic and Tomcat integrations for Arquillian. This is where I first came across his name. And following the Arquillian guys and him a little closer directly sent me to this example. A big thanks to Vineet for a helping hand during my first tries to get this up and running. Follow him if you like on twitter @VineetReynolds. Here is a brief explanation about its background, and this is also kicking off a series about running it in different settings and pointing you to a few more details under the hood. This is the basic introduction.

About the Galleria

The high level description of the project is the following: the Java EE 6 Galleria is a demo application demonstrating the use of JSF 2.0 and JPA 2.0 in a Java EE project using Domain Driven Design. It was written to serve as a showpiece for domain driven design in Java EE 6. The domain model of the application is not anemic, and is constituted of JPA entities. The entities are then used in session EJBs that act as the application layer. JSF facelets are used in the presentation tier, using Mojarra and PrimeFaces. The project seeks to achieve comprehensive coverage through the use of both unit and integration tests, written in JUnit 4.
The unit and integration tests for EJBs and the domain model rely on the EJB 3.1 container API. The integration tests for the presentation layer rely on the Arquillian project and its Drone extension (for execution of Selenium tests).

Domain driven design using Java EE 6

DDD as an architectural approach is feasible in Java EE 6. This is primarily due to the changes made in EJB 3.x and the introduction of JPA. The improvements made in the EJB 3.x and JPA specifications allow a domain and application layer to be modeled in Java EE 6 using DDD. The basic idea here is to design an application ensuring that persistence services are injected into the application layer, and used to access/persist the entities within a transactional context established by the application layer.

Domain Layer

The application contains four domain entities for now – User, Group, Album and Photo – which are the same as the JPA entities in the logical data model.

Repository Layer

On top of the logical data model you can find four repositories – UserRepository, GroupRepository, AlbumRepository and PhotoRepository – one for each of the four domain entities. Even though DDD requires that you only have repositories for an aggregate root, and not for all domain entities, it is designed this way to allow the application layer to access the Album and Photo domain entities without having to navigate Albums and Photos via the UserRepository. The repositories are stateless session beans with a no-interface view and are constructed using the Generic CRUD Service pattern published by Adam Bien.

Application layer

The application layer exposes services to be consumed by the presentation layer. It is also responsible for transaction management, while also serving as a fault barrier for the layer(s) below. The application layer co-ordinates with the domain repositories and the domain objects to fulfill the desired objectives of the exposed services.
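As an aside, the Generic CRUD Service pattern the repositories build on boils down to a generic base class parameterised on the entity and id types. Here is a deliberately simplified, in-memory sketch of that idea; in the real project the base class is a stateless EJB delegating to a JPA EntityManager, and all names below are illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// In-memory sketch of the generic CRUD idea; the real pattern delegates
// to a JPA EntityManager inside a stateless session bean.
abstract class CrudRepository<T, ID> {

    private final Map<ID, T> store = new HashMap<ID, T>();

    // Each concrete repository knows how to extract the entity's id.
    protected abstract ID idOf(T entity);

    public T create(T entity) {
        store.put(idOf(entity), entity);
        return entity;
    }

    public T find(ID id) {
        return store.get(id);
    }

    public void delete(ID id) {
        store.remove(id);
    }

    public List<T> findAll() {
        return new ArrayList<T>(store.values());
    }
}

// A stand-in entity and one concrete repository per entity, mirroring
// the Galleria layering (UserRepository, AlbumRepository, ...).
class User {
    private final String userId;
    User(String userId) { this.userId = userId; }
    String getUserId() { return userId; }
}

class UserRepository extends CrudRepository<User, String> {
    @Override
    protected String idOf(User user) {
        return user.getUserId();
    }
}
```

The payoff of the pattern is that create/find/delete/findAll are written once and each concrete repository only adds entity-specific queries.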
In a way, this layer is the equivalent of a service layer in traditional applications. The application layer exposes its services through the UserService, GroupService, AlbumService and PhotoService interfaces, and is also responsible for validating the domain objects provided by the layers above before co-ordinating actions among objects in the domain layer. This is done via JSR-303 constraints on the domain objects.

What it looks like

And this is how the example looks if you are running it on the latest GlassFish 3.1.2. Curious to set this up yourself? Wait for the next post or give it a try yourself ;) Below we are going to set up the example as it is, directly with the latest GlassFish 3.1.2, Hibernate and Derby.

Preparation

Get yourself in the mood for some configuration. Grab the latest NetBeans 7.1 (the Java EE edition already includes the needed GlassFish 3.1.2) and install it. I also assume you have a decent Java SDK 7 (6 would do the job, too) in place somewhere. Depending on the development strategy you also need a Mercurial client and Maven. At least Maven is also included in NetBeans, so … I mean … why make your life harder than it already is? ;)

Environments

A few more words about the environments. This example was set up to support different environments. Starting with the plain “development” environment, you also need to configure your “test” and, last but not least, your “production” environment. All of the different environments are handled by Maven profiles, so you might have to configure a bit during the following minutes.

Create the database instances

First thing to do is to decide where to put all your stuff. The examples use Derby out of the box and therefore you should either have Java DB installed (part of the JDK) or use the GlassFish Derby instance which is pre-configured with NetBeans. Let’s make it harder here and assume we use the Java DB installation that comes with your JDK.
Go ahead, open a CMD prompt, navigate to your %JAVA_HOME% folder and further down to the db/bin folder. Execute the “startNetworkServer” script and watch out for the Derby instance to start. Now open another CMD prompt, also navigate to the db/bin folder and execute the “ij” script. This should come up with an “ij>” prompt. Now enter the following connect string:

connect 'jdbc:derby://localhost:1527/GALLERIATEST;create=true';

This command connects you to the Derby instance and creates a GALLERIATEST database if it doesn’t already exist. The Galleria example uses a handy little tool called dbdeploy as a database change management tool. It lets you do incremental updates to the physical database model which get tracked in a changelog table. (More about this later in the series.) You have to create the changelog table:

CREATE TABLE changelog (
  change_number DECIMAL(22,0) NOT NULL,
  complete_dt TIMESTAMP NOT NULL,
  applied_by VARCHAR(100) NOT NULL,
  description VARCHAR(500) NOT NULL
);

ALTER TABLE changelog ADD CONSTRAINT Pkchangelog PRIMARY KEY (change_number);

You can redo these steps for any other instance you need (production, etc.) by simply changing the database name in the connect statement. And don’t forget to create the changelog table in every instance. If you don’t like this approach, fire up NetBeans, switch to the Services tab, select “New Connection” and add a new Java DB network connection with host: localhost, port: 1527 and database: GALLERIATEST;create=true. Set both user and password to “APP” and click “Test Connection”. Select APP as the schema for your new DB. And you are done!

Create the GlassFish domain

We are running this on the latest GlassFish. First thing to do now is to create a new domain. Navigate to your GlassFish installation directory, go to glassfish3/bin and execute the following:

asadmin create-domain --portbase 10000 --nopassword test-domain

This creates a new test-domain for you.
Now navigate to that domain folder (“glassfish3/glassfish/domains/test-domain”) and open the config/domain.xml file. We are now going to add the created Derby database as a connection pool to your newly created GlassFish domain. Navigate to the <resources> element and add the following connection pool and jdbc-resource under the last closing </jdbc-connection-pool> element:

<jdbc-connection-pool driver-classname="" datasource-classname="org.apache.derby.jdbc.ClientDataSource40" res-type="javax.sql.DataSource" description="" name="GalleriaPool" ping="true">
  <property name="User" value="APP"></property>
  <property name="DatabaseName" value="GALLERIATEST"></property>
  <property name="RetrieveMessageText" value="true"></property>
  <property name="Password" value="APP"></property>
  <property name="ServerName" value="localhost"></property>
  <property name="Ssl" value="off"></property>
  <property name="SecurityMechanism" value="4"></property>
  <property name="TraceFileAppend" value="false"></property>
  <property name="TraceLevel" value="-1"></property>
  <property name="PortNumber" value="1527"></property>
  <property name="LoginTimeout" value="0"></property>
</jdbc-connection-pool>
<jdbc-resource pool-name="GalleriaPool" description="" jndi-name="jdbc/galleriaDS"></jdbc-resource>

Now find the <config name="server-config"> element and inside it look for the last <resource-ref entry. Add the following line there:

<resource-ref ref="jdbc/galleriaDS"></resource-ref>

One last thing to do before we are ready to fire up our instance: we need to add the JDBC realm for the Galleria example. Again, find the <config name="server-config"> and inside it, look for a </auth-realm>.
Under this, put the following:

<auth-realm classname="com.sun.enterprise.security.auth.realm.jdbc.JDBCRealm" name="GalleriaRealm">
  <property name="jaas-context" value="jdbcRealm"></property>
  <property name="encoding" value="Hex"></property>
  <property name="password-column" value="PASSWORD"></property>
  <property name="datasource-jndi" value="jdbc/galleriaDS"></property>
  <property name="group-table" value="USERS_GROUPS"></property>
  <property name="charset" value="UTF-8"></property>
  <property name="user-table" value="USERS"></property>
  <property name="group-name-column" value="GROUPID"></property>
  <property name="digest-algorithm" value="SHA-512"></property>
  <property name="user-name-column" value="USERID"></property>
</auth-realm>

Be sure not to put the new realm under the default-config. That will not work. Fine. Let’s get the sources :)

Getting the source and opening it in NetBeans

Vineet is hosting the Galleria example on bitbucket.org. So, you have to go there and visit the java-ee-6-galleria project. There are three ways you can bring the sources to your local HDD. Either via the hg command line:

hg clone https://bitbucket.org/VineetReynolds/java-ee-6-galleria

or via the website download (upper right, “get sources”), or directly via NetBeans. You need a Mercurial client for your OS for the first and the third option. I am using TortoiseHg for Windows. You should have this installed and configured with NetBeans before doing the following. Let’s try the last alternative here. Select “Team > Clone Other”. Enter the repository URL and leave user/password empty. Click “Next” two times (we don’t need to change default paths ;)) and select a parent directory to put the stuff in. Click “Finish” and let the Mercurial client do the work. You are asked to open the found projects after it finishes. This should look similar to the picture on the right.
If you run into connection trouble, make sure to update your proxy settings accordingly. If you try to build the project now you will run into trouble; it is still missing some configuration, which we are going to add next.

Adding a Development Profile

Next we add some settings to the Maven pom.xml of the galleria-ejb project. Open it and scroll down to the <profiles> section. You will find two profiles there (sonar and production). We are going to add a development profile by adding the following lines (make sure to adjust the GlassFish paths to your environment):

<profile>
  <id>development</id>
  <activation>
    <activeByDefault>true</activeByDefault>
  </activation>
  <properties>
    <galleria.derby.testInstance.jdbcUrl>jdbc:derby://localhost:1527/GALLERIATEST</galleria.derby.testInstance.jdbcUrl>
    <galleria.derby.testInstance.user>APP</galleria.derby.testInstance.user>
    <galleria.derby.testInstance.password>APP</galleria.derby.testInstance.password>
    <galleria.glassfish.testDomain.user>admin</galleria.glassfish.testDomain.user>
    <galleria.glassfish.testDomain.passwordFile>D:/glassfish-3.1.2-b22/glassfish3/glassfish/domains/test-domain/config/local-password</galleria.glassfish.testDomain.passwordFile>
    <galleria.glassfish.testDomain.glassfishDirectory>D:/glassfish-3.1.2-b22/glassfish3/glassfish/</galleria.glassfish.testDomain.glassfishDirectory>
    <galleria.glassfish.testDomain.domainName>test-domain</galleria.glassfish.testDomain.domainName>
    <galleria.glassfish.testDomain.adminPort>10048</galleria.glassfish.testDomain.adminPort>
    <galleria.glassfish.testDomain.httpPort>10080</galleria.glassfish.testDomain.httpPort>
    <galleria.glassfish.testDomain.httpsPort>10081</galleria.glassfish.testDomain.httpsPort>
  </properties>
</profile>

OK. As you can see, a couple of things are defined here, and the profile is activated by default. That’s it, for now.

Testing the galleria-ejb Project

Let’s try to run the test cases in the galleria-ejb project.
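Since the profile is marked activeByDefault, a plain build already picks it up. If you want to confirm that from the command line rather than trusting NetBeans, the Maven help plugin can list the active profiles. A quick sketch (run from the checkout root; the -f path assumes the standard module layout):

```shell
# Show which profiles Maven will activate for the galleria-ejb module
mvn -f galleria-ejb/pom.xml help:active-profiles

# Or request the development profile explicitly
mvn -Pdevelopment clean install
```

This is handy whenever a build behaves differently on a colleague's machine: the first command reveals immediately whether the development profile (and its GlassFish paths) is actually in effect.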
Right-click it and issue a “clean and build”. Follow the console output to see what is actually going on. We are going to investigate this a little bit further in one of the next posts. Today we are only doing this to make sure you have everything set up the right way. It should finish with:

Tests run: 49, Failures: 0, Errors: 0, Skipped: 0
BUILD SUCCESS

That is a “green bar” :-) Congratulations!

Build and Deploy the Project

Now go to NetBeans “Tools > Options > Miscellaneous > Maven” and check the box that says “Skip Tests for any build executions not directly related to testing”. Return to the main window, right-click the Galleria project and do a clean and build there.

Reactor Summary:
Galleria ................................. SUCCESS [0.431s]
galleria-ejb ............................. SUCCESS [5.302s]
galleria-jsf ............................. SUCCESS [4.486s]
Galleria EAR ............................. SUCCESS [1.308s]
------------------------------------------------------------
BUILD SUCCESS
------------------------------------------------------------
Total time: 11.842s

Fine. Now let’s fire up the GlassFish domain. Switch to your GlassFish installation and find the glassfish3/bin folder. Open a command line prompt there and run:

asadmin start-domain test-domain

You can see the domain starting up. Now open a browser and navigate to http://localhost:10048/. After a few seconds this will show you the admin console of your GlassFish server. Now you need to install Hibernate. Select the “Update Tool” (bottom left) and switch to the “Available Add-Ons” tab. Select “hibernate” and click install (top right). Stop the server after you have installed it and restart it with the command above. Open the admin console again and click on “Applications”. Click the little “deploy” button on top and browse for “java-ee-6-galleria/galleria-ear/target/galleria-ear-0.0.1-SNAPSHOT.ear”. Click “OK” (top right). You are done after a few seconds.
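The click-path through the admin console can also be scripted with asadmin, which is convenient once you redeploy often. A sketch under the same assumptions as before (admin port 10048; the application name defaults to the EAR file name without extension — verify against your deployment output):

```shell
# Deploy the freshly built EAR to the running test-domain
asadmin --port 10048 deploy \
  java-ee-6-galleria/galleria-ear/target/galleria-ear-0.0.1-SNAPSHOT.ear

# Later: remove the application again by its (assumed) default name
asadmin --port 10048 undeploy galleria-ear-0.0.1-SNAPSHOT
```

Running deploy a second time with the --force=true option replaces the running version, which is the usual edit-build-redeploy loop during development.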
Now switch to http://localhost:10080/Galleria/ and you will see the welcome screen. Congratulations, you have set up the Galleria example on GlassFish! Sign up, log in and play with the application a bit! The next parts in the series will dive into the details of the application. I am going to cover the tests and the overall concepts. We are also going to change both the JPA provider and the database in a future post. Want to know what it takes to get it up and running on the latest WebLogic 12c? Read on!

Reference: The Java EE 6 Example – Galleria – Part 1 & The Java EE 6 Example – Running Galleria on GlassFish 3.1.2 – Part 2 from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

FindBugs and JSR-305

Suppose that a group of developers works in parallel on parts of a big project – some developers are working on the service implementation, while others are working on code using this service. Both groups agreed on the service API and started working separately, keeping the API assumptions in mind… Do you think this story will have a happy ending? Well… maybe :) – there are tools which can help achieve it :) – one of them is FindBugs, supported by JSR-305 (annotations for software defect detection). Let’s take a look at the service API contract:

package com.blogspot.vardlokkur.services;

import java.util.List;

import javax.annotation.CheckForNull;
import javax.annotation.Nonnull;

import com.blogspot.vardlokkur.entities.domain.Employer;

/**
 * Defines the API contract for the employer service.
 *
 * @author Warlock
 * @since 1.0
 */
public interface EmployerService {

    /**
     * @param identifier the employer's identifier
     * @return the employer having the specified {@code identifier}, {@code null} if not found
     */
    @CheckForNull
    Employer withId(@Nonnull Long identifier);

    /**
     * @param specification defines which employers should be returned
     * @return the list of employers matching the specification
     */
    @Nonnull
    List<Employer> thatAre(@Nonnull Specification specification);

}

As you can see, annotations like @Nonnull and @CheckForNull have been added to the service method signatures. Their purpose is to define the requirements for the method parameters (e.g., the identifier parameter cannot be null) and the expectations for the values returned by the methods (e.g., the service method result can be null and you should check for it in your code). So what? – you may ask – should I check these in the code by myself, or trust the co-workers to follow the guidelines defined by those annotations? Of course not :) – trust no one; use tools which will verify the API assumptions, like FindBugs.
Suppose that we have the following service API usage:

package com.blogspot.vardlokkur.test;

import org.junit.Before;
import org.junit.Test;

import com.blogspot.vardlokkur.services.EmployerService;
import com.blogspot.vardlokkur.services.impl.DefaultEmployerService;

/**
 * Employer service test.
 *
 * @author Warlock
 * @since 1.0
 */
public class EmployerServiceTest {

    private EmployerService employers;

    @Before
    public void before() {
        employers = new DefaultEmployerService();
    }

    @Test
    public void test01() {
        Long identifier = null;
        employers.withId(identifier);
    }

    @Test
    public void test02() {
        employers.withId(Long.valueOf(1L)).getBusinessName();
    }

    @Test
    public void test03() {
        employers.thatAre(null);
    }

}

Let’s try to verify this code against the service API assumptions. FindBugs will analyze the code and switch to the FindBugs perspective showing the potential problems:

Null passed for nonnull parameter
Possible null pointer dereference

In a similar way, the developers writing the service code can verify their work against the defined API assumptions; for example, if you run FindBugs on a very early version of the service implementation:

package com.blogspot.vardlokkur.services.impl;

import java.util.List;

import com.blogspot.vardlokkur.entities.domain.Employer;
import com.blogspot.vardlokkur.services.EmployerService;
import com.blogspot.vardlokkur.services.Specification;

/**
 * Default implementation of {@link EmployerService}.
 *
 * @author Warlock
 * @since 1.0
 */
public class DefaultEmployerService implements EmployerService {

    /**
     * {@inheritDoc}
     */
    public Employer withId(Long identifier) {
        return null;
    }

    /**
     * {@inheritDoc}
     */
    public List<Employer> thatAre(Specification specification) {
        return null;
    }

}

the corresponding error will be found (both methods return null although thatAre is declared @Nonnull in the API). As you can see, nothing can hide from FindBugs and its ally, JSR-305 ;) A few links for dessert:

JSR-305: Annotations for Software Defect Detection
JSR 305: a silver bullet or not a bullet at all?

Reference: FindBugs and JSR-305 from our JCG partner Michał Jaśtak at the Warlock’s Thoughts blog....
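You do not have to rely on the IDE perspective for this: the same analysis can run as part of the build, so the API assumptions are checked on every commit, not only when a developer remembers to click. A sketch using the findbugs-maven-plugin (assumes the plugin is configured in the project's pom.xml; goal names as provided by that plugin):

```shell
# Generate the FindBugs report for the current module
mvn findbugs:findbugs

# Stricter variant: fail the build when violations are found
# (useful on a CI server so nobody can ignore the warnings)
mvn findbugs:check
```

Wiring findbugs:check into the CI build turns the JSR-305 annotations from documentation into an enforced contract between the two groups of developers.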
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.