


The Measure Of Success

What makes a successful project? Waterfall project management tells us it’s about meeting scope, time and cost goals. Do these success metrics also hold true for agile projects? Let’s see.

In an agile project we learn new information all the time. It’s likely that the scope will change over time, because we find out that things we assumed the customer wanted were wrong, while features we didn’t even think of are actually needed. We know that we don’t know everything when we estimate scope, time and budget. This is true for both kinds of projects, but in agile projects we admit it, and therefore do not lock those in as goals.

The waterfall project plan is immune to feedback. In agile projects, we put feedback cycles into the plan so we are able to introduce changes. We move from a “we know what we need to do” view to a “let’s find out if what we’re thinking is correct” view.

In waterfall projects, there’s an assumption of no variability, and that the plan covers any possible risk. In fact, one small shift in the plan can have disastrous (or wonderful) effects on product delivery.

Working from a prioritized backlog in an agile project means the project can end “prematurely”. If we have a happy customer with half the features, why not stop there? If we deliver a smaller scope, under budget and before the deadline, has the project actually failed?

Some projects are so long that the people who did the original estimation are long gone. The assumptions they relied on are no longer true; technology has changed, and so has the market. Agile projects don’t plan that far into the future, and therefore cannot be measured according to the classic metrics.

Quality is not part of the scope, time and cost trio, and is usually not set as a goal. Quality is not easily measured, and suffers from the pressure of the other three.
In agile projects quality is considered a first-class citizen, because we know it supports not only customer satisfaction, but also the team’s ability to deliver at a consistent pace.

All kinds of differences. But they don’t answer a very simple question: what is success? In any kind of project, success has an impact. It creates happy customers. It creates a new market. It changes how people think and feel about the company. And it also changes how people inside the company view themselves. This impact is what makes a successful project. This is what we should be measuring. The problem with all of those is that they cannot be measured at the delivery date, if at all. Cost, budget and scope may be measurable at the delivery date, including against the initial estimation, but they are not really indicative of success. In fact, there’s a destructive force within the scope, time and cost goals: they come at the expense of others, like quality and customer satisfaction. If a deadline is important, quality suffers. We’ve all been there.

The cool thing about an agile project is that we can gain confidence that we’re on the right track, if customers were part of the process, and if the people developing the product were aligned with the customers’ feedback. The feedback tells us early on whether the project is going to be successful, according to real-life parameters. And if we’re wrong, that’s good too. We can cut our losses and turn to another opportunity.

So agile is better, right? Yes, I’m pro-agile. No, I don’t think agile works every time. I ask that you define your success goals for your product and market, not based on a methodology, but on what impact it will make. Only you can actually measure success.

Reference: The Measure Of Success from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Load-Testing Guidelines

Load-testing is not trivial. It’s often not just about downloading JMeter or Gatling, recording some scenarios and then running them. Well, it might be just that, and you are lucky if it is. And although it may sound like “Captain Obvious speaking”, it’s good to be reminded of some things that can potentially waste time.

When you run the tests, eventually you will hit a bottleneck, and then you’ll have to figure out where it is. It can be:

- client bottleneck – if your load-testing tool uses HttpURLConnection, the number of requests sent by the client is quite limited. You have to start from there and make sure enough requests are leaving your load-testing machine(s)
- network bottleneck – check whether your outbound connection allows the desired number of requests to reach the server
- server machine bottleneck – check the number of open files that your (most probably) Linux server allows. For example, if the default is 1024, then you can have at most 1024 concurrent connections. So increase that (limits.conf)
- application server bottleneck – if the thread pool that handles requests is too small, requests may be kept waiting. If some other tiny configuration switch (e.g. whether to use NIO, which is worth a separate article) has the wrong value, that may reduce performance. You’d have to be familiar with the performance-related configurations of your server.
- database bottleneck – check the CPU usage and response times of your database to see if it’s not the one slowing the requests. Misconfiguring your database, or having too small/few DB servers, can obviously be a bottleneck
- application bottleneck – these you’d have to investigate yourself, possibly using a performance monitoring tool (but be careful when choosing one, as there are many “new and cool” but unstable and useless ones). We can divide this type in two:
  - framework bottleneck – if a framework you are using has problems. This might be a web framework, a dependency injection framework, an actor system, an ORM, or even a JSON serialization tool
  - application code bottleneck – if you are misusing a tool/framework, have blocking code, or just wrote horrible code with unnecessarily high computational complexity

You’d have to constantly monitor the CPU, memory, network and disk I/O usage of the machines, in order to understand when you’ve hit a hardware bottleneck.

One important aspect is being able to bombard your servers with enough requests. It’s not unlikely that a single machine is insufficient, especially if you are a big company and your product is likely to attract a lot of customers at the start, and/or making a request needs some processing power as well, e.g. for encryption. So you may need a cluster of machines to run your load tests. The tool you are using may not support that, so you may have to coordinate the cluster manually.

As a result of your load tests, you’d have to consider how long it makes sense to keep connections waiting, and when to reject them. That is controlled by the connect timeout on the client and the registration timeout (or pool borrow timeout) on the server. Also keep that in mind when viewing the results – a too-slow response and a rejected connection are practically the same thing: your server is not able to service the request.

If you are on AWS, there are some specifics. Leaving auto-scaling aside (which you should probably disable for at least some of the runs), you need to keep in mind that the ELB needs warming up. Run the tests a couple of times to warm up the ELB (many requests will fail until it’s fine). Also, when using a load balancer and long-lived connections are left open (or you use WebSocket, for example), the load balancer may leave connections from itself to the servers behind it open forever, and reuse them when a new request for a long-lived connection comes.
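The open-file limit mentioned earlier can be checked and raised roughly like this on a typical Linux box (the user name and values below are illustrative; adjust them for your distribution and workload):

```shell
# Show the current per-process open-file limit for this shell
ulimit -n

# Raise it for the current session only (works if the hard limit is high enough)
ulimit -n 65536 || echo "hard limit too low; raise it in limits.conf first"

# Make it permanent by adding entries to /etc/security/limits.conf, e.g.:
#   appuser  soft  nofile  65536
#   appuser  hard  nofile  65536
# (a re-login is needed for the new limits to take effect)
```

Remember that every open socket counts against this limit, so it effectively caps concurrent connections, not just files on disk.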
Overall, load (performance) testing and analysis is not straightforward; there are many possible problems, but it is something that you must do before release. Well, unless you don’t expect more than 100 users. And the next time I do it, I will use my own article for reference, to make sure I’m not missing something.

Reference: Load-Testing Guidelines from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog blog....

Continuous Delivery with Docker, Jenkins, JBoss Fuse and OpenShift PaaS

I recently put together an end-to-end demo showing step-by-step how to set up a Continuous Delivery pipeline to help automate your deployments and shorten your cycle times for getting code from development to production. Establishing a proper continuous delivery pipeline is a discipline that requires more than just tools and automation, but the value of having good tools and a head start on setting this up can’t be overstated. This project has two focuses:

- Show how you’d do CD with JBoss Fuse and OpenShift
- Create a scripted, repeatable, pluggable and versioned demo so we can swap out pieces (like use JBoss Fuse 6.2/Fabric8/Docker/Kubernetes, or OpenStack, or VirtualBox, or go.cd, or travis-ci, or other code review systems)

We use Docker containers to set up all of the individual pieces and make it easier to script it and version it. See the videos of me doing the demo below, or check out the setup steps and follow the script to recreate the demo yourself!

Part I – Continuous Delivery with JBoss Fuse on OpenShift Enterprise from Christian Posta on Vimeo.
Part II – Continuous Delivery with JBoss Fuse on OpenShift Enterprise Part II from Christian Posta on Vimeo.
Part III – Continuous Delivery with JBoss Fuse on OpenShift Enterprise part III from Christian Posta on Vimeo.

Reference: Continuous Delivery with Docker, Jenkins, JBoss Fuse and OpenShift PaaS from our JCG partner Christian Posta at the Christian Posta – Software Blog blog....

5 Error Tracking Tools Java Developers Should Know

Raygun, Stack Hunter, Sentry, Takipi and Airbrake: modern developer tools to help you crush bugs before bugs crush your app! With the Java ecosystem moving forward, web applications serving growing numbers of requests and users demanding high performance, comes a new breed of modern development tools. A fast-paced environment with rapid new deployments requires tracking errors and gaining insight into an application’s behavior on a level traditional methods can’t sustain. In this post we’ve decided to gather 5 of those tools, see how they integrate with Java, and find out what kind of tricks they have up their sleeves. It’s time to smash some bugs.

Raygun

Mindscape’s Raygun is a web based error management system that keeps track of exceptions coming from your apps. It supports various desktop, mobile and web programming languages, including Java, Scala, .NET, Python, PHP, and JavaScript. Besides that, sending errors to Raygun is possible through a REST API, and a few more Providers (that’s what they call language and framework integrations) came to life thanks to developer community involvement.

Key Features:

- Error grouping – Every occurrence of a bug is presented within one group with access to single instances of it, including its stack trace.
- Full text search – Error groups and all collected data are searchable.
- View app activity – Every action on an error group is displayed for all your team to see: status updates, comments and more.
- Affected users – Counts of affected users appear by each error.
- External integrations – GitHub, Bitbucket, Asana, JIRA, HipChat and many more.

The Java angle: To use Raygun with Java, you’ll need to add some dependencies to your pom.xml file if you’re using Maven, or add the jars manually. The second step would be to add an UncaughtExceptionHandler that would create an instance of RaygunClient and send your exceptions to it. In addition, you can also add custom data fields to your exceptions and send them together to Raygun.
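The handler wiring described above can be sketched as follows. The JDK hook itself (Thread.setDefaultUncaughtExceptionHandler) is standard; the RaygunClient call is kept as a comment because its exact API is an assumption inferred from the description – check Raygun’s Java provider docs for the real one:

```java
import java.util.concurrent.atomic.AtomicReference;

public class CrashReporter {
    public static void main(String[] args) throws InterruptedException {
        AtomicReference<Throwable> captured = new AtomicReference<>();

        // A real setup would build a client and forward the throwable here,
        // e.g. new RaygunClient("YOUR_API_KEY").send(e) -- hypothetical call,
        // verify against the official provider. We just record it instead.
        Thread.setDefaultUncaughtExceptionHandler((thread, e) -> captured.set(e));

        // Any thread without its own handler now falls through to ours
        Thread worker = new Thread(() -> {
            throw new RuntimeException("boom");
        });
        worker.start();
        worker.join();

        System.out.println("reported: " + captured.get().getMessage());
    }
}
```

Note that the default handler only sees exceptions that escape a thread entirely; anything you catch and swallow yourself must be sent to the client explicitly.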
The full walkthrough is available here.

Behind the curtain: Meet Robie Robot, the certified operator of Raygun. As in, the actual ray gun. Check it out on: https://raygun.io

Sentry

Started as a side project, Sentry is an open-source web based solution that serves as a real time event logging and aggregation platform. It monitors errors and displays when, where and to whom they happen, promising to do so without relying solely on user feedback. Supported languages and frameworks include Ruby, Python, JS, Java, Django, iOS, .NET and more.

Key Features:

- See the impact of new deployments in real time
- Provide support to specific users interrupted by an error
- Detect and thwart fraud as it’s attempted – notifications of unusual amounts of failures on purchases, authentication, and other sensitive areas
- External integrations – GitHub, HipChat, Heroku, and many more

The Java angle: Sentry’s Java client is called Raven and supports major existing logging frameworks like java.util.logging, Log4j, Log4j2 and Logback with SLF4J. An independent method to send events directly to Sentry is also available. To set up Sentry for Java with Logback, for example, you’ll need to add the dependencies manually or through Maven, then add a new Sentry appender configuration, and you’re good to go. Instructions are available here.

Behind the curtain: Sentry started as an internal project at Disqus back in 2010, built by Chris Jennings and David Cramer to solve exception logging on a Django application. Check it out on: https://www.getsentry.com/

Takipi

Unlike most of the other tools, Takipi is far more than a stack trace prettifier. It was built with a simple objective in mind: telling developers exactly when and why production code breaks. Whenever a new exception is thrown or a log error occurs, Takipi captures it and shows you the variable state which caused it, across methods and machines.
Takipi will overlay this over the actual code which executed at the moment of error – so you can analyze the exception as if you were there when it happened.

Key Features:

- Detect – Caught/uncaught exceptions, HTTP and logged errors.
- Prioritize – How often errors happen across your cluster, whether they involve new or modified code, and whether that rate is increasing.
- Analyze – See the actual code and variable state, even across different machines and applications.
- Easy to install – No code or configuration changes needed. Less than 2% overhead.

The Java angle: Takipi was built for production environments in Java and Scala. The installation takes less than 1 minute, and includes attaching a Java agent to your JVM.

Behind the curtain: Each exception type and error has a unique monster that represents it. You can find these monsters here. Check it out on: http://www.takipi.com/

Airbrake

Another tool that has set its sights on exception tracking is Rackspace’s Airbrake, taking on the mission of “No More Searching Log Files”. It provides users with a web based interface that includes a dashboard with error details and an application specific view. Supported languages include Ruby, PHP, Java, .NET, Python and even… Swift.

Key Features:

- Detailed stack traces, grouping by error type, users and environment variables
- Team productivity – Filter important errors from the noise
- Team collaboration – See who’s causing bugs and who’s fixing them
- External integrations – HipChat, GitHub, JIRA, Pivotal and over 30 more

The Java angle: Airbrake officially supports only Log4j, although a Logback library is also available. Log4j2 support is currently lacking. The installation procedure is similar to Sentry’s: add a few dependencies manually or through Maven, add an appender, and you’re ready to start. Similarly, a direct way to send messages to Airbrake is also available, with AirbrakeNotice and AirbrakeNotifier. More details are available here.
Behind the curtain: Airbrake was acquired by Exceptional, which then got acquired by Rackspace. Check it out on: https://airbrake.io/

StackHunter

Currently in beta, StackHunter provides a self-hosted tool to track your Java exceptions – a change of scenery from the hosted tools above. Other than that, it aims to provide a similar feature set to inform developers of their exceptions and help them solve them faster.

Key Features:

- A single self-hosted web interface to view all exceptions
- Collections of stack trace data and context, including key metrics such as total exceptions, unique exceptions, users affected and sessions affected
- Instant email alerts when exceptions occur
- Exception grouping by root cause

The Java angle: Built specifically for Java, StackHunter runs on any servlet container running Java 6 or above. Installation includes running StackHunter on a local servlet container, configuring an outgoing mail server for alerts, and configuring the application you wish to log. Full instructions are available here.

Behind the curtain: StackHunter is developed by Dele Taylor, who also works on Data Pipeline – a tool for transforming and migrating data in Java. Check it out on: http://stackhunter.com/

Bonus: ABRT

Another approach to error tracking worth mentioning is used by ABRT, an automatic bug detection and reporting tool from the Fedora ecosystem, which is a Red Hat sponsored community project. Unlike the 5 tools we covered here, this one is intended to be used not only by app developers, but by their users as well, reporting bugs back to Red Hat with richer context that would otherwise have been harder to understand and debug.

The Java angle: Support for Java exceptions is still in its proof-of-concept stage. A Java connector developed by Jakub Filák is available here.

Behind the curtain: ABRT is an open-source project developed by Red Hat. Check it out on: https://github.com/abrt/abrt

Did we miss any other tools? How do you keep track of your exceptions?
Please let me know in the comments section below.

Reference: 5 Error Tracking Tools Java Developers Should Know from our JCG partner Alex Zhitnitsky at the Takipi blog....

3 Examples of Parsing HTML File in Java using Jsoup

HTML is the core of the web; all the pages you see on the internet are based on HTML, whether they are dynamically generated by JavaScript, JSP, PHP, ASP or any other web technology. Your browser actually parses HTML and renders it for you. But what do you do if you need to parse an HTML document and find some elements, tags, attributes, or check whether a particular element exists or not – all that from a Java program? If you have been in Java programming for some years, I am sure you have done some XML parsing work using parsers like DOM and SAX. Ironically, there are a few instances when you need to parse an HTML document from a core Java application, one which doesn’t include Servlets and other Java web technologies. To make things worse, there is no HTTP or HTML library in the core JDK either. That’s why, when it comes to parsing an HTML file, many Java programmers had to ask Google how to get the value of an HTML tag in Java. When I needed that, I was sure there would be an open source library implementing that functionality for me, but I didn’t know it would be as wonderful and feature rich as Jsoup. It not only provides support to read and parse an HTML document, but also allows you to extract any element from the HTML file, along with its attributes and its CSS class in jQuery style, and at the same time it allows you to modify them. You can probably do anything with an HTML document using Jsoup. In this article, we will parse an HTML file and find out the value of the title and heading tags. We will also see examples of downloading and parsing HTML from a file as well as from a URL on the internet, by parsing Google’s home page in Java.

What is the Jsoup library?

Jsoup is an open source Java library for working with real-world HTML. It provides a very convenient API for extracting and manipulating data, using the best of DOM, CSS, and jQuery-like methods.
Jsoup implements the WHATWG HTML5 specification, and parses HTML to the same DOM as modern browsers like Chrome and Firefox do. Here are some of the useful features of the jsoup library:

- Jsoup can scrape and parse HTML from a URL, file, or string
- Jsoup can find and extract data, using DOM traversal or CSS selectors
- Jsoup allows you to manipulate the HTML elements, attributes, and text
- Jsoup can clean user-submitted content against a safe white-list, to prevent XSS attacks
- Jsoup can also output tidy HTML

Jsoup is designed to deal with the different kinds of HTML found in the real world, from properly validated HTML to incomplete, non-validating tag soup. One of the core strengths of Jsoup is that it’s very robust.

HTML Parsing in Java using Jsoup

In this Java HTML parsing tutorial, we will see three different examples of parsing and traversing HTML documents in Java using jsoup. In the first example, we will parse an HTML String, the contents of which are all tags, in the form of a String literal in Java. In the second example, we will download our HTML document from the web, and in the third example, we will load our own sample HTML file login.html for parsing. This file is a sample HTML document which contains a title tag and a div in the body section which contains an HTML form. It has input tags to capture username and password, and submit and reset buttons for further action. It’s proper HTML which can be validated, i.e. all tags and attributes are properly closed.
Here is what our sample HTML file looks like:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Login Page</title>
</head>
<body>
<div id="login" class="simple" >
<form action="login.do">
Username : <input id="username" type="text" /><br>
Password : <input id="password" type="password" /><br>
<input id="submit" type="submit" />
<input id="reset" type="reset" />
</form>
</div>
</body>
</html>

HTML parsing is very simple with Jsoup; all you need to call is the static method Jsoup.parse() and pass your HTML String to it. Jsoup provides several overloaded parse() methods to read HTML from a String, a File, a base URI, a URL, or an InputStream. You can also specify the character encoding to read HTML files correctly in case they are not in "UTF-8" format. The parse(String html) method parses the input HTML into a new Document. In Jsoup, Document extends Element, which extends Node. TextNode also extends Node. As long as you pass in a non-null string, you’re guaranteed to have a successful, sensible parse, with a Document containing (at least) a head and a body element. Once you have a Document, you can get the data you want by calling the appropriate methods of Document and its parent classes Element and Node.
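For instance, pulling the form fields out of a page like the login sample above with jQuery-style CSS selectors might look like this (a minimal sketch; it assumes the org.jsoup:jsoup dependency is on the classpath):

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

public class JsoupSelectorSketch {
    public static void main(String[] args) {
        String html = "<html><body><div id=\"login\" class=\"simple\">"
                    + "<form action=\"login.do\">"
                    + "<input id=\"username\" type=\"text\" />"
                    + "<input id=\"password\" type=\"password\" />"
                    + "</form></div></body></html>";

        Document doc = Jsoup.parse(html);

        // CSS selector: every <input> element inside the div with id "login"
        Elements inputs = doc.select("div#login input");
        for (Element input : inputs) {
            System.out.println(input.attr("id") + " -> " + input.attr("type"));
        }
    }
}
```

select() returns matches in document order, so this prints the username field first and the password field second.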
Java Program to parse an HTML Document

Here is our complete Java program to parse an HTML String, an HTML file downloaded from the internet, and an HTML file from the local file system. In order to run this program, you can use the Eclipse IDE, any other IDE, or the command prompt. In Eclipse it’s very easy: just copy this code, create a new Java project, right click on the src package and paste it. Eclipse will take care of creating the proper package and a Java source file with the same name, so less work for you. If you already have a sample Java project, then it’s just one step. The following Java program shows 3 examples of parsing and traversing an HTML file. In the first example, we directly parse a String with HTML content; in the second example, we parse an HTML file downloaded from a URL; in the third example, we load and parse an HTML document from the local file system.

import java.io.File;
import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

/**
 * Java Program to parse/read HTML documents from File using the Jsoup library.
 * Jsoup is an open source library which allows Java developers to parse HTML
 * files and extract elements, manipulate data, and change style using DOM, CSS
 * and jQuery-like methods.
 *
 * @author Javin Paul
 */
public class HTMLParser {

    public static void main(String args[]) {

        // Jsoup Example 1 - Parse an HTML String
        String htmlString = "<!DOCTYPE html>" + "<html>" + "<head>"
                + "<title>JSoup Example</title>" + "</head>" + "<body>"
                + "<table><tr><td><h1>HelloWorld</h1></tr>" + "</table>"
                + "</body>" + "</html>";

        Document html = Jsoup.parse(htmlString);
        String title = html.title();
        String h1 = html.body().getElementsByTag("h1").text();

        System.out.println("Input HTML String to JSoup :" + htmlString);
        System.out.println("After parsing, Title : " + title);
        System.out.println("After parsing, Heading : " + h1);

        // Jsoup Example 2 - Read an HTML page from a URL
        try {
            Document doc = Jsoup.connect("http://google.com/").get();
            title = doc.title();
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("Jsoup can read HTML page from URL, title : " + title);

        // Jsoup Example 3 - Parse an HTML file from the local file system
        try {
            Document htmlFile = Jsoup.parse(new File("login.html"), "ISO-8859-1");
            title = htmlFile.title();
            Element div = htmlFile.getElementById("login");
            String cssClass = div.className(); // getting the class from an HTML element
            System.out.println("Jsoup can also parse HTML file directly");
            System.out.println("title : " + title);
            System.out.println("class of div tag : " + cssClass);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Output:

Input HTML String to JSoup :<!DOCTYPE html><html><head><title>JSoup Example</title></head><body><table><tr><td><h1>HelloWorld</h1></tr></table></body></html>
After parsing, Title : JSoup Example
After parsing, Heading : HelloWorld
Jsoup can read HTML page from URL, title : Google
Jsoup can also parse HTML file directly
title : Login Page
class of div tag : simple

The Jsoup HTML parser will make every attempt to create a clean parse from the HTML you provide, regardless of whether the HTML is well-formed or not. It can handle the following mistakes:

- unclosed tags (e.g. <p>Java <p>Scala to <p>Java</p> <p>Scala</p>)
- implicit tags (e.g. a naked <td>Java is Great</td> is wrapped into a <table><tr><td>)
- reliably creating the document structure (html containing a head and body, and only appropriate elements within the head)

Jsoup is an excellent and robust open source library which makes reading HTML documents, body fragments and HTML strings, and directly parsing HTML content from the web, extremely easy.

Reference: 3 Examples of Parsing HTML File in Java using Jsoup from our JCG partner Javin Paul at the Javarevisited blog....

Solving ORM – Keep the O, Drop the R, no need for the M

ORM has a simple, production-ready solution hiding in plain sight in the Java world. Let’s go through it in this post, alongside the following topics:

- ORM / Hibernate in 2014 – the word on the street
- ORM is still the Vietnam of Computer Science
- ORM has 2 main goals only
- When does ORM make sense?
- A simple solution for the ORM problem
- A production-ready ORM Java-based alternative

ORM / Hibernate in 2014 – the word on the street

It’s been almost 20 years since ORM has been around, and soon we will reach the 15th birthday of the creation of the de-facto and likely best ORM implementation in the Java world: Hibernate. We would then expect that this is by now a well understood problem. But what are developers saying these days about Hibernate and ORM? Let’s take some quotes from two recent posts on this topic, Thoughts on Hibernate and JPA and Hibernate Alternatives:

There are performance problems related to using Hibernate. A lot of business operations and reports involve writing complex queries. Writing them in terms of objects and maintaining them seems to be difficult. We shouldn’t need a 900 page book to learn a new framework.

As Java developers we can easily relate to that: ORM frameworks tend to give cryptic error messages, the mapping is hard to do, and the runtime behavior, namely with lazy initialization exceptions, can be surprising when first encountered. Who hasn’t had to maintain that application using the Open Session In View pattern that generated a flood of SQL requests which took weeks to optimize? I believe it can literally take a couple of years to really understand Hibernate: lots of practice and several readings of the Java Persistence with Hibernate book (still 600 pages in its upcoming second edition). Are the criticisms of Hibernate warranted? I personally don’t think so; in fact, most developers really criticize the complexity of the object-relational mapping approach itself, and not a concrete ORM implementation of it in a given language.
This sentiment seems to come and go in periodic waves, maybe when a newer generation of developers hits the labor force. After hours and days spent trying to do something that feels like it should be much simpler, it’s only a natural feeling. The fact remains that there is a problem: why do many projects still spend 30% of their time developing the persistence layer today?

ORM is the Vietnam of Computer Science

The problem is that the ORM problem is complex, and there are no good solutions. Any solution to it is a huge compromise. ORM was famously named, almost 10 years ago, the Vietnam of Computer Science, in a blog post from Jeff Atwood, one of the creators of Stack Overflow. The problems of ORM are well known and we won’t go through them in detail here; here is a summary from Martin Fowler on why ORM is hard:

- object identity vs database identity
- how to map object-oriented inheritance in the relational world
- unidirectional associations in the database vs bi-directional in the OO world
- data navigation – lazy loading, eager fetching
- database transactions vs no rollbacks in the OO world

This is just to name the main obstacles. The problem is also that it’s easy to forget what we are trying to achieve in the first place.

ORM has 2 main goals only

ORM has two main goals, clearly defined:

- map objects from the OO world into tables in a relational database
- provide a runtime mechanism for keeping an in-memory graph of objects and a set of database tables in sync

Given this, when should we use Hibernate and ORM in general?

When does ORM make sense?

ORM makes sense when the project at hand is being done using a Domain Driven Design approach, where the whole program is built around a set of core classes called the domain model, which represent concepts in the real world such as Customer, Invoice, etc. If the project does not have the minimum threshold of complexity that needs DDD, then an ORM can likely be overkill.
The problem is that even the simplest of enterprise applications are well above this threshold, so ORM really pulls its weight most of the time. It’s just that ORM is hard to learn and full of pitfalls. So how can we tackle this problem?

A simple solution for the ORM problem

Someone once said something like this: a smart man solves a problem, but a wise man avoids it. As often happens in programming, we can find the solution by going back to the beginning and seeing what we are trying to solve: we are trying to synchronize an in-memory graph of objects with a set of tables. But these are two completely different types of data structures!

Which data structure is the more generic one? It turns out that the graph is the more generic of the two: a set of linked database tables is really just a special type of graph. The same can be said of basically almost any other data structure. Graphs and their traversal are very well understood, with a body of knowledge going back decades, similar to the theory on which relational databases are built: relational algebra.

Solving the impedance mismatch

The logical conclusion is that the solution for the ORM impedance mismatch is to remove the mismatch itself: let’s store the graph of in-memory domain objects in a transaction-capable graph database! This solves the mapping problem by removing the need for mapping in the first place.

A production-ready solution for the ORM problem

This is easier said than done – or is it? It turns out that graph databases have been around for years, and the prime example in the Java community is Neo4j. Neo4j is a stable and mature product that is well understood and documented; see the Neo4j in Action book. It can be used as an external server or in embedded mode inside the Java process itself.
But its core API is all about graphs and nodes, something like this:

GraphDatabaseService gds = new EmbeddedGraphDatabase("/path/to/store");

Node forrest = gds.createNode();
forrest.setProperty("title", "Forrest Gump");
forrest.setProperty("year", 1994);
gds.index().forNodes("movies").add(forrest, "id", 1);

Node tom = gds.createNode();

The problem is that this is too far from domain-driven development; writing against this API directly would be like coding JDBC by hand. This is the typical task of a framework like Hibernate, with the big difference that, because the impedance mismatch is minimal, such a framework can operate in a much more transparent and less intrusive way. It turns out that such a framework has already been written.

Spring support for Neo4j

Rod Johnson, one of the creators of the Spring framework, took on the task of implementing the initial version of the Neo4j integration himself: the Spring Data Neo4j project. This is an important extract from Rod Johnson's foreword to the documentation concerning the design of the framework:

Its use of AspectJ to eliminate persistence code from your domain model is truly innovative, and on the cutting edge of today's Java technologies.

So Spring Data Neo4j is an AOP-based framework that wraps domain objects in a relatively transparent way and synchronizes an in-memory graph of objects with a Neo4j transactional data store. It aims to simplify writing the persistence layer of the application, similar to Spring Data JPA.

What does the mapping to a graph database look like?

It turns out that only limited mapping is needed (tutorial). We need to mark which classes we want to make persistent and define a field that will act as an id:

@NodeEntity
class Movie {
    @GraphId Long nodeId;
    String id;
    String title;
    int year;
    Set cast;
}

There are other annotations (5 more per the docs), for example for defining indexing and relationships with properties, etc.
Compared with Hibernate, only a fraction of the annotations are needed for the same domain model.

What does the query language look like?

The recommended query language is Cypher, an ASCII-art based language. A query can look, for example, like this:

// returns users who rated a movie based on movie title (movieTitle parameter)
// higher than rating (rating parameter)
@Query("start movie=node:Movie(title={0}) " +
       "match (movie)<-[r:RATED]-(user) " +
       "where r.stars > {1} " +
       "return user")
Iterable getUsersWhoRatedMovieFromTitle(String movieTitle, Integer rating);

Cypher is very different from JPQL or SQL and implies a learning curve. Still, once past the learning curve, this language allows writing performant queries that are usually problematic in relational databases.

Performance of queries in graph vs relational databases

Let's compare some frequent query types and how they should perform in a graph vs a relational database:

- lookup by id: this is implemented, for example, by doing a binary search on an index tree, finding a match and following a 'pointer' to the result. This is a (very) simplified description, but it's likely identical for both databases. There is no apparent reason why such a query would take more time in a graph database than in a relational one.
- lookup of parent relations: this is the type of query that relational databases struggle with. Self-joins might result in cartesian products of huge tables, bringing the database to a halt. A graph database can perform those queries in a fraction of that time.
- lookup by non-indexed column: here the relational database can scan tables faster, due to the physical structure of the table and the fact that one read usually brings along multiple rows.
But this type of query (a table scan) is to be avoided in relational databases anyway. There is more to say here, but there is no indication (no readily available DDD-related public benchmarks) that a graph-based data store would be inappropriate for doing DDD due to query performance.

Conclusions

I personally cannot find any (conceptual) reason why a transaction-capable graph database would not be an ideal fit for doing Domain-Driven Design, as an alternative to a relational database and ORM. No data store will ever fit every use case perfectly, but we can ask whether graph databases shouldn't become the default for DDD, and relational the exception. The disappearance of ORM would imply a great reduction in the complexity and the time that it takes to implement a project.

The future of DDD in the enterprise

The removal of the impedance mismatch and the improved performance of certain query types could be the killer features that drive the adoption of a graph-based DDD solution. We can see practical obstacles: operations teams prefer relational databases, vendor contract lock-in, having to learn a new query language, limited expertise in the labor market, etc. But the economic advantage is there, and the technology is there too. And when that is the case, it's usually only a matter of time. What about you: can you think of any reason why graph-based DDD would not work? Feel free to chime in in the comments below.

Reference: Solving ORM – Keep the O, Drop the R, no need for the M from our JCG partner Aleksey Novik at The JHades Blog blog....

WildFly 9 – Don’t cha wish your console was hawt like this!

Everybody has probably heard the news: the first WildFly 9.0.0.Alpha1 release came out Monday. You can download it from the wildfly.org website. The biggest changes are that it is built by a new feature provisioning tool, which is layered on the now separate core distribution, and that it contains a new servlet distribution (only a 25 MB ZIP) based on that core. It is called "web lite" until there is a better name. The architecture now supports a server suspend mode, also known as graceful shutdown. For now only Undertow and EJB3 use this; additional subsystems still need to be updated. The management APIs also got notification support. Overall, 256 fixes and improvements were included in this release. But let's put all the awesomeness aside for a second and talk about what this post should be about.

Administration Console

WildFly 9 got a brushed-up admin console. After you have downloaded, unzipped and started the server, you only need to add a user (bin/add-user.sh/.bat) and point your browser to http://localhost:9990/ to see it. With some minor UI tweaks this is looking pretty hot already. BUT there's another console out there called hawtio! And what is extremely hot is that it already has some very first support for WildFly and EAP, and here are the steps to make it work.

Get Hawtio!

You can use hawtio from a Chrome extension or in many different containers, or outside a container as a standalone executable jar. If you want to deploy hawtio as a console on WildFly, make sure to look at the complete how-to written by Christian Posta. The easiest way is to just download the latest executable 1.4.19 jar and start it on the command line:

java -jar hawtio-app-1.4.19.jar --port 8090

The port parameter lets you specify on which port you want the console to run. As I'm going to use it with WildFly, which also uses hawtio's default port, this simply puts the console on another free port.
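As a quick recap, the command-line steps so far look something like this (the unzip directory name is an assumption based on the release name):

```shell
# Start WildFly 9.0.0.Alpha1 and create a management user
cd wildfly-9.0.0.Alpha1
bin/standalone.sh        # server comes up on port 8080, management on 9990
bin/add-user.sh          # add a management user for the admin console

# In another terminal: run the standalone hawtio console on a free port,
# since hawtio's default port 8080 is already taken by WildFly
java -jar hawtio-app-1.4.19.jar --port 8090
```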
The next thing to do is to install the JMX-to-JSON bridge on which hawtio relies to connect to remote processes. Instead of using JMX directly, which is blocked on most networks anyway, the Jolokia project bridges JMX MBeans to JSON, and hawtio operates on them. Download the latest Jolokia WAR agent and deploy it to WildFly. Now you're almost ready to go. Point your browser to the hawtio console (http://localhost:8090/hawtio/), switch to the connect tab, enter the connection settings and press the "Connect to remote server" button below. As of today there is not much to see here: besides some very basic server information, you have the deployment overview and the connector status page. But the good news is: hawtio is open source and you can fork it from GitHub and add some more features to it. The WildFly/EAP console is in the hawtio-web subproject. Make sure to check out the contributor guidelines.

Reference: WildFly 9 – Don’t cha wish your console was hawt like this! from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....

lambdas and side effects

Overview

Java 8 has added features such as lambdas and type inference. This makes the language less verbose and cleaner; however, it comes with more side effects, as you don't have to be as explicit about what you are doing.

The return type of a lambda matters

Java 8 infers the type of a closure. One way it does this is to look at the return type (or whether anything is returned). This can have a surprising side effect. Consider this code:

es.submit(() -> {
    try (Scanner scanner = new Scanner(new FileReader("file.txt"))) {
        String line = scanner.nextLine();
        process(line);
    }
    return null;
});

This code compiles fine. However, the line return null; appears redundant and you might be tempted to remove it. But if you remove the line, you get an error:

Error:(12, 39) java: unreported exception java.io.FileNotFoundException; must be caught or declared to be thrown

This is complaining about the use of FileReader. What has return null got to do with catching an unhandled checked exception!?

Type inference

ExecutorService.submit() is an overloaded method. It has two overloads which take a single argument:

ExecutorService.submit(Runnable runnable);
ExecutorService.submit(Callable callable);

The lambda itself takes no arguments, so how does the javac compiler infer its type? It looks at the return type. If you return null; it is a Callable<Void>; however, if nothing is returned, not even null, it is a Runnable. Callable and Runnable have another important difference: Callable throws checked exceptions, whereas Runnable doesn't allow checked exceptions to be thrown. The side effect of returning null is that you don't have to handle checked exceptions; these will be stored in the Future<Void> that submit() returns. If you don't return anything, you have to handle checked exceptions.

Conclusion

While lambdas and type inference remove significant amounts of boilerplate code, you can find more edge cases where the hidden details of what the compiler infers can be slightly confusing.
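The overload choice described above can be reproduced in isolation with two overloads that mirror submit(); the pick() method and its return strings are made up purely for this demonstration:

```java
import java.util.concurrent.Callable;

public class InferenceDemo {
    // Two overloads that mirror ExecutorService.submit(Runnable) and submit(Callable)
    static String pick(Runnable r)        { return "Runnable"; }
    static String pick(Callable<Void> c)  { return "Callable"; }

    public static void main(String[] args) {
        // Nothing returned: the block is void-compatible, so it is a Runnable
        System.out.println(pick(() -> {}));

        // return null; makes the block value-compatible, so it is a Callable<Void>
        System.out.println(pick(() -> { return null; }));
    }
}
```

Swapping the `return null;` in and out flips which overload the compiler selects, which is exactly why removing it changes the checked-exception rules.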
Footnote

You can be explicit about type inference with a cast. Consider this:

Callable<Integer> calls = (Callable<Integer> & Serializable) () -> { return null; };

if (calls instanceof Serializable) // is true

This cast has a number of side effects. Not only does the call() method return an Integer and a marker interface get added; the code generated for the lambda also changes, i.e. it adds a writeObject() and readObject() method to support serialization of the lambda. Note: each call site creates a new class, meaning the details of this cast are visible at runtime via reflection.

Reference: lambdas and side effects from our JCG partner Peter Lawrey at the Vanilla Java blog....

How to Safely Use SWT’s Display asyncExec

Most user interface (UI) toolkits are single-threaded and SWT is no exception. This means that UI objects must be accessed exclusively from a single thread, the so-called UI thread. On the other hand, long-running tasks should be executed in background threads in order to keep the UI responsive. This makes it necessary for the background threads to enqueue updates to be executed on the UI thread instead of accessing UI objects directly. To schedule code for execution on the UI thread, SWT offers the Display asyncExec() and syncExec() methods.

Display asyncExec vs syncExec

While both methods enqueue the argument for execution on the UI thread, they differ in what they do afterwards (or don't). As the name suggests, asyncExec() works asynchronously: it returns right after the runnable was enqueued and does not wait for its execution, whereas syncExec() is blocking and thus waits until the code has been executed. As a rule of thumb, use asyncExec() as long as you don't depend on the result of the scheduled code, e.g. when just updating widgets to report progress. If the scheduled code returns something relevant for the further control flow, e.g. it prompts for input in a blocking dialog, then I would opt for syncExec(). If, for example, a background thread wants to report progress about the work done, the simplest form might look like this:

progressBar.getDisplay().asyncExec(new Runnable() {
    public void run() {
        progressBar.setSelection(ticksWorked);
    }
});

asyncExec() schedules the runnable to be executed on the UI thread 'at the next reasonable opportunity' (as the JavaDoc puts it). Unfortunately, the above code will likely fail now and then with a widget disposed exception, or more precisely with an SWTException with code == SWT.ERROR_WIDGET_DISPOSED. The reason is that the progress bar might not exist any more by the time it is accessed (i.e. setSelection() is called).
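The race is easier to see stripped of SWT itself. Here is a self-contained sketch in which a plain queue stands in for the UI thread's event queue (all names are made up for illustration; the real types live in org.eclipse.swt.widgets):

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class DisposedRaceSketch {
    // Stand-in for the UI thread's event queue that asyncExec() appends to
    static final Queue<Runnable> uiQueue = new ArrayDeque<>();
    static boolean widgetDisposed = false;

    public static void main(String[] args) {
        // The background thread enqueues an update while the widget is still alive...
        uiQueue.add(() -> System.out.println(
            widgetDisposed ? "widget disposed exception" : "selection updated"));

        // ...but the widget gets disposed before the queue is drained
        widgetDisposed = true;

        // The UI thread drains the queue: the update now hits a dead widget
        Runnable r;
        while ((r = uiQueue.poll()) != null) r.run();
    }
}
```

The gap between enqueueing the runnable and executing it is exactly where the widget can be disposed, which is why the check must happen on the UI thread, right before the update runs.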
Though we still hold a reference to the widget, it isn't of much use since the widget itself is disposed. The solution is obvious: the code must first test whether the widget still exists before operating on it:

progressBar.getDisplay().asyncExec(new Runnable() {
    public void run() {
        if (!progressBar.isDisposed()) {
            progressBar.setSelection(workDone);
        }
    }
});

As obvious as it may seem, it is just as tedious to implement such a check again and again. You may want to search the Eclipse Bugzilla for 'widget disposed' to get an idea of how frequent this issue is. Therefore we extracted a helper class that encapsulates the check:

new UIThreadSynchronizer().asyncExec(progressBar, new Runnable() {
    public void run() {
        progressBar.setSelection(workDone);
    }
});

The UIThreadSynchronizer's asyncExec() method expects a widget as its first parameter that serves as a context. The context widget is meant to be the widget that would be affected by the runnable, or a suitable parent widget if more than one widget is affected. Right before the runnable is executed, the context widget is checked. If it is still alive (i.e. not disposed), the code will be executed; otherwise, the code will be silently dropped. Though the behavior of ignoring code for disposed widgets may appear careless, it has worked in all situations we have encountered so far. Code that does inter-thread communication is particularly hard to unit test. Therefore the UIThreadSynchronizer, though it is stateless, must be instantiated so that it can be replaced by a test double. The source code with corresponding tests can be found here: https://gist.github.com/rherrmann/7324823630a089217f46. While the examples use asyncExec(), the UIThreadSynchronizer also supports syncExec(). And, of course, the helper class is also compatible with RAP/RWT. If you read the source code carefully, you might have noticed that there is a possible race condition.
Because none of the methods of class Widget is meant to be thread-safe, the value returned by isDisposed() or getDisplay() may be stale (see line 51 and line 60). This is deliberately ignored at that point in time (read: I haven't found any better solution). Though the runnable could be enqueued mistakenly, the isDisposed() check (which is executed on the UI thread) would eventually prevent the code from being executed. And there is another (admittedly small) chance of a threading issue left: right before (a)syncExec() is called, the display is checked for disposal in order not to run into a widget disposed exception. But exactly that may happen if the display gets disposed between the check and the invocation of (a)syncExec(). While this could be solved for asyncExec() by wrapping the call in a try-catch block that ignores widget disposed exceptions, the same approach fails for syncExec(): the SWTExceptions thrown by the runnable cannot be distinguished from those thrown by syncExec() itself with reasonable effort.

Reference: How to Safely Use SWT’s Display asyncExec from our JCG partner Rudiger Herrmann at the Code Affine blog....

Java Code Geeks and Genuitec are giving away FREE MyEclipse Pro Licenses (worth over $600)!

Ready to take your IDE to the next level? We are partnering with Genuitec, creator of cool Java tools, and we are running a contest giving away FREE licenses for the MyEclipse IDE. MyEclipse is a robust suite of tools for Java EE, web and mobile development. Get the best balance of popular technologies from all vendors. From Spring to Maven to REST web services, unify your development under a single stack that supports everything you need. MyEclipse offers unified development under one platform:

- Enterprise: Eliminate engineering overhead by providing a MyEclipse IDE that meets enterprise team requirements, including development for IBM WebSphere and other popular Java EE technologies. Save weeks normally lost to project on-ramping, keeping in sync, and releasing software.
- Mobile: With the evolving world of enterprise mobility, you need an IDE flexible enough for mobile applications. Get your mobile applications off the ground with the PhoneGap mobile project and build capabilities in MyEclipse.
- Web: With MyEclipse, quickly add technology capabilities to web projects, use visual editors for easier coding and configuration, and test your work on a variety of app servers.
- Cloud: Leave your silo and get into the cloud with built-in capabilities for exploring and connecting to cloud services.

Enter the contest now to win your very own FREE MyEclipse Pro license. There will be a total of 10 winners! In addition, we will send you free tips and the latest news from the Java community to master your technical knowledge (you can unsubscribe at any time). In order to increase your chances of winning, don't forget to refer as many of your friends as possible! You will get 3 more entries for every friend you refer, that is 3 times more chances! Make sure to use your lucky URL to spread the word! You can share it on your social media channels, or even mention it in a blog post if you are a blogger! Good luck and may the force be with you! UPDATE: The giveaway has ended!
Here is the list of the lucky winners (emails hidden for privacy):

- ja…on@gmail.com
- ma…am@absa.co.za
- sh…77@gmail.com
- br…20@gmail.com
- de…a3@gmail.com
- em…an@gmail.com
- an…pu@gmail.com
- iv…ia@icontainers.com
- ra…la@gmail.com
- iu…na@gmail.com

We would like to thank you all for participating in this giveaway. Till next time, keep up the good work!...
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.