JUnit in a Nutshell: Unit Test Assertion

This chapter of JUnit in a Nutshell covers various unit test assertion techniques. It elaborates on the pros and cons of the built-in mechanism, Hamcrest matchers and AssertJ assertions. The ongoing example enlarges upon the subject and shows how to create and use custom matchers/assertions.

Unit Test Assertion

"Trust, but verify" – Ronald Reagan

The post Test Structure explained why unit tests are usually arranged in phases. It clarified that the real testing, a.k.a. the outcome verification, takes place in the third phase. But so far we have only seen some simple examples of this, using mostly the built-in mechanism of JUnit. As shown in Hello World, verification is based on the error type AssertionError. This is the basis for writing so-called self-checking tests. A unit test assertion evaluates predicates to true or false. In case of false, an AssertionError is thrown. The JUnit runtime captures this error and reports the test as failed. The following sections introduce three of the more popular unit test assertion variants.

Assert

The built-in assertion mechanism of JUnit is provided by the class org.junit.Assert. It offers a couple of static methods to ease test verification. The following snippet outlines the usage of the available method patterns:

```java
fail();
fail( "Houston, We've Got a Problem." );

assertNull( actual );
assertNull( "Identifier must not be null.", actual );

assertTrue( counter.hasNext() );
assertTrue( "Counter should have a successor.", counter.hasNext() );

assertEquals( LOWER_BOUND, actual );
assertEquals( "Number should be lower bound value.", LOWER_BOUND, actual );
```

Assert#fail() throws an assertion error unconditionally. This can be helpful to mark an incomplete test or to ensure that an expected exception has been thrown (see also the Expected Exceptions section in Test Structure). Assert#assertXXX(Object) is used to verify the initialization state of a variable.
For this purpose there exist two methods, called assertNull(Object) and assertNotNull(Object). Assert#assertXXX(boolean) methods test expected conditions passed by the boolean parameter. Invocation of assertTrue(boolean) expects the condition to be true, whereas assertFalse(boolean) expects the opposite. Assert#assertXXX(Object,Object) and Assert#assertXXX(value,value) methods are used for comparison verifications of values, objects and arrays. Although it makes no difference in the result, it is common practice to pass the expected value as the first parameter and the actual one as the second.

All these types of methods provide an overloaded version that takes a String parameter. In case of a failure, this argument gets incorporated into the assertion error message. Many people consider this helpful to specify the failure reason more clearly. Others perceive such messages as clutter that makes tests harder to read.

This kind of unit test assertion seems intuitive at first sight, which is why I used it in the previous chapters for getting started. Besides, it is still quite popular, and tools support failure reporting well. However, it is also somewhat limited with respect to the expressiveness of assertions that require more complex predicates.

Hamcrest

A library that aims to provide an API for creating flexible expressions of intent is Hamcrest. The utility offers nestable predicates called Matchers. These allow writing complex verification conditions in a way many developers consider easier to read than boolean operator expressions. Unit test assertion is supported by the class MatcherAssert. To do so, it offers the static assertThat(T, Matcher) method. The first argument passed is the value or object to verify. The second is the predicate used to evaluate the first one.

```java
assertThat( actual, equalTo( IN_RANGE_NUMBER ) );
```

As you can see, the matcher approach mimics the flow of a natural language to improve readability.
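The nestable-predicate idea itself can be illustrated with a few lines of plain Java. The code below is a deliberately simplified stand-in for Hamcrest, just to show how matcher factories compose; it is not the Hamcrest API.

```java
import java.util.function.Predicate;

// Simplified stand-in for Hamcrest's nestable matchers: each matcher is
// just a predicate, and factory methods like equalTo/not compose them.
// Illustration of the concept only, not the Hamcrest API.
public class MatcherSketch {

  static <T> Predicate<T> equalTo(T expected) {
    return actual -> expected.equals(actual);
  }

  static <T> Predicate<T> not(Predicate<T> matcher) {
    return matcher.negate();
  }

  static <T> void assertThat(T actual, Predicate<T> matcher) {
    if (!matcher.test(actual)) {
      throw new AssertionError("Unexpected value: " + actual);
    }
  }

  public static void main(String[] args) {
    assertThat(42, equalTo(42));     // passes
    assertThat(42, not(equalTo(7))); // nested predicates compose
    System.out.println("all assertions passed");
  }
}
```

The point of the sketch is that each factory method returns a value that can be nested inside another, which is what lets the real library read like a sentence.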
The intention is made even clearer by the following snippet, which uses the is(Matcher) method to decorate the actual expression:

```java
assertThat( actual, is( equalTo( IN_RANGE_NUMBER ) ) );
```

MatcherAssert.assertThat(...) exists with two more signatures. First, there is a variant that takes a boolean parameter instead of the Matcher argument. Its behavior correlates to Assert.assertTrue(boolean). The second variant takes an additional String parameter. This can be used to improve the expressiveness of failure messages:

```java
assertThat( "Actual number must not be equal to lower bound value.",
            actual,
            is( not( equalTo( LOWER_BOUND ) ) ) );
```

In case of a failure, the error message for the given verification would look somewhat like this.

Hamcrest comes with a set of useful matchers. The most important ones are listed in the tour of common matchers section of the library's online documentation. But for domain-specific problems, the readability of a unit test assertion could often be improved if an appropriate matcher were available. For that reason the library allows writing custom matchers.

Let us return to the tutorial's example for a discussion of this topic. First we adjust the scenario to be more reasonable for this chapter. Assume that NumberRangeCounter.next() returns the type RangeNumber instead of a simple int value:

```java
public class RangeNumber {

  private final String rangeIdentifier;
  private final int value;

  RangeNumber( String rangeIdentifier, int value ) {
    this.rangeIdentifier = rangeIdentifier;
    this.value = value;
  }

  public String getRangeIdentifier() {
    return rangeIdentifier;
  }

  public int getValue() {
    return value;
  }
}
```

We could use a custom matcher to check that the return value of NumberRangeCounter#next() is within the counter's defined number range:

```java
RangeNumber actual = counter.next();

assertThat( actual, is( inRangeOf( LOWER_BOUND, RANGE ) ) );
```

An appropriate custom matcher could extend the abstract class TypeSafeMatcher<T>.
This base class handles null checks and type safety. A possible implementation is shown below. Note how it adds the factory method inRangeOf(int,int) for convenient usage:

```java
public class InRangeMatcher extends TypeSafeMatcher<RangeNumber> {

  private final int lowerBound;
  private final int upperBound;

  InRangeMatcher( int lowerBound, int range ) {
    this.lowerBound = lowerBound;
    this.upperBound = lowerBound + range;
  }

  @Override
  public void describeTo( Description description ) {
    String text = format( "between <%s> and <%s>.", lowerBound, upperBound );
    description.appendText( text );
  }

  @Override
  protected void describeMismatchSafely( RangeNumber item, Description description ) {
    description.appendText( "was " ).appendValue( item.getValue() );
  }

  @Override
  protected boolean matchesSafely( RangeNumber toMatch ) {
    return lowerBound <= toMatch.getValue()
        && upperBound > toMatch.getValue();
  }

  public static Matcher<RangeNumber> inRangeOf( int lowerBound, int range ) {
    return new InRangeMatcher( lowerBound, range );
  }
}
```

The effort may be a bit exaggerated for the given example. But it shows how a custom matcher can be used to eliminate the somewhat magical IN_RANGE_NUMBER constant of the previous posts. Besides, the new type enforces compile-time type safety of the assertion statement. This means, e.g., that a String parameter would not be accepted for verification. It is easy to see how the implementation of describeTo and describeMismatchSafely influences the failure message of a failing test: it expresses that the expected value should have been between the specified lower bound and the (calculated) upper bound[1], followed by the actual value.

It is a little unfortunate that JUnit expands the API of its Assert class to provide a set of assertThat(…) methods. These methods actually duplicate the API provided by MatcherAssert.
In fact, the implementation of those methods delegates to the according methods of this type. Although this might look like a minor issue, I think it is worth mentioning. Due to this approach, JUnit is firmly tied to the Hamcrest library. This dependency leads to problems now and then, in particular when used with other libraries that do even worse by incorporating a copy of their own Hamcrest version…

Unit test assertion à la Hamcrest is not without competition. While the discussion about one-assert-per-test vs. single-concept-per-test [MAR] is out of scope for this post, supporters of the latter opinion might perceive the library's verification statements as too noisy, especially when a concept needs more than one assertion. Which is why I have to add another section to this chapter!

AssertJ

In the post Test Runners, one of the example snippets uses two assertXXX statements. These verify that an expected exception is an instance of IllegalArgumentException and provides a certain error message. The passage looks similar to this:

```java
Throwable actual = ...

assertTrue( actual instanceof IllegalArgumentException );
assertEquals( EXPECTED_ERROR_MESSAGE, actual.getMessage() );
```

The previous section taught us how to improve the code using Hamcrest. But if you happen to be new to the library, you may wonder which expression to use, or the typing may feel a bit uncomfortable. At any rate, the multiple assertThat statements would add up to the clutter. The library AssertJ strives to improve this by providing fluent assertions for Java. The intention of the fluent interface API is to provide an easy-to-read, expressive programming style that reduces glue code and simplifies typing. So how can this approach be used to refactor the code above?

```java
import static org.assertj.core.api.Assertions.assertThat;
```

Similar to the other approaches, AssertJ provides a utility class that offers a set of static assertThat methods.
But those methods return a particular assertion implementation for the given parameter type. This is the starting point for the so-called statement chaining:

```java
Throwable actual = ...

assertThat( actual )
  .isInstanceOf( IllegalArgumentException.class )
  .hasMessage( EXPECTED_ERROR_MESSAGE );
```

While readability is to some extent in the eye of the beholder, at any rate assertions can be written in a more compact style. See how the various verification aspects relevant for the specific concept under test are added fluently. This programming method supports efficient typing, as the IDE's content assist can provide a list of the available predicates for a given value type.

So you want to provide an expressive failure message to the after-world? One possibility is to use describedAs as the first link in the chain to comment the whole block:

```java
Throwable actual = ...

assertThat( actual )
  .describedAs( "Expected exception does not match specification." )
  .hasMessage( EXPECTED_ERROR_MESSAGE )
  .isInstanceOf( NullPointerException.class );
```

The snippet expects an NPE, but assume that an IAE is thrown at runtime. Then the failing test run would provide a message like this. Maybe you want your message to be more nuanced according to a given failure reason. In this case you may add a describedAs statement before each verification specification:

```java
Throwable actual = ...

assertThat( actual )
  .describedAs( "Message does not match specification." )
  .hasMessage( EXPECTED_ERROR_MESSAGE )
  .describedAs( "Exception type does not match specification." )
  .isInstanceOf( NullPointerException.class );
```

There are many more AssertJ capabilities to explore. But to keep this post in scope, please refer to the utility's online documentation for more information. However, before coming to the end, let us have a look at the in-range verification example again.
This is how it can be solved with a custom assertion:

```java
public class RangeCounterAssertion
    extends AbstractAssert<RangeCounterAssertion, RangeNumber> {

  private static final String ERR_IN_RANGE_OF
    = "Expected value to be between <%s> and <%s>, but was <%s>";
  private static final String ERR_RANGE_ID
    = "Expected range identifier to be <%s>, but was <%s>";

  public static RangeCounterAssertion assertThat( RangeNumber actual ) {
    return new RangeCounterAssertion( actual );
  }

  public RangeCounterAssertion hasRangeIdentifier( String expected ) {
    isNotNull();
    if( !actual.getRangeIdentifier().equals( expected ) ) {
      failWithMessage( ERR_RANGE_ID, expected, actual.getRangeIdentifier() );
    }
    return this;
  }

  public RangeCounterAssertion isInRangeOf( int lowerBound, int range ) {
    isNotNull();
    int upperBound = lowerBound + range;
    if( !isInInterval( lowerBound, upperBound ) ) {
      int actualValue = actual.getValue();
      failWithMessage( ERR_IN_RANGE_OF, lowerBound, upperBound, actualValue );
    }
    return this;
  }

  private boolean isInInterval( int lowerBound, int upperBound ) {
    return actual.getValue() >= lowerBound && actual.getValue() < upperBound;
  }

  private RangeCounterAssertion( RangeNumber actual ) {
    super( actual, RangeCounterAssertion.class );
  }
}
```

It is common practice for custom assertions to extend AbstractAssert. The first generic parameter is the assertion's type itself; it is needed for the fluent chaining style. The second is the type the assertion operates on. The implementation provides two additional verification methods that can be chained as in the example below, which is why the methods return the assertion instance itself. Note how the call of isNotNull() ensures that the actual RangeNumber we want to make assertions on is not null. The custom assertion is incorporated by its factory method assertThat(RangeNumber). Since it inherits the available base checks, the assertion can verify quite complex specifications out of the box:

```java
RangeNumber first = ...
RangeNumber second = ...

assertThat( first )
  .isInRangeOf( LOWER_BOUND, RANGE )
  .hasRangeIdentifier( EXPECTED_RANGE_ID )
  .isNotSameAs( second );
```

For completeness, here is how the RangeCounterAssertion looks in action. Unfortunately it is not possible to use two different assertion types with static imports within the same test case – assuming, of course, that those types follow the assertThat(...) naming convention. To circumvent this, the documentation recommends extending the utility class Assertions. Such an extension can be used to provide static assertThat methods as an entry point to all of a project's custom assertions. By using this custom utility class throughout the project, no import conflicts can occur. A detailed description can be found in the section Providing a single entry point for all assertions: yours + AssertJ ones of the online documentation about custom assertions.

Another problem with the fluent API is that single-line chained statements may be more difficult to debug, because debuggers may not be able to set breakpoints within the chain, and it may not be clear which of the method calls caused an exception. But as stated by Wikipedia on fluent interfaces, these issues can be overcome by breaking statements into multiple lines, as shown in the examples above. This way the user can set breakpoints within the chain and easily step through the code line by line.

Conclusion

This chapter of JUnit in a Nutshell introduced different unit test assertion approaches: the tool's built-in mechanism, Hamcrest matchers and AssertJ assertions. It outlined some pros and cons and enlarged upon the subject by means of the tutorial's ongoing example. Additionally, it was shown how to create and use custom matchers and assertions. While the Assert-based mechanism surely is somewhat dated and less object-oriented, it still has its advocates.
Hamcrest matchers provide a clean separation of assertion and predicate definition, whereas AssertJ assertions score with a compact and easy-to-use programming style. So now you are spoilt for choice…

Please note that this will be the last chapter of my tutorial about JUnit testing essentials. Which does not mean that there is nothing more to say – quite the contrary! But this would go beyond the scope this mini-series is tailored to. And you know what they say: always leave them wanting more…

[1] hm, I wonder if interval boundaries would be more intuitive than lower bound and range…

Reference: JUnit in a Nutshell: Unit Test Assertion from our JCG partner Frank Appel at the Code Affine blog....
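Boiled down, both the custom matcher and the custom assertion above rest on two small ideas: checks that return this to enable chaining, and a half-open range check of [lowerBound, lowerBound + range). The sketch below demonstrates both in plain Java; the class names are illustrative only and belong to none of the libraries discussed.

```java
// Plain-Java sketch of the fluent, return-this assertion pattern,
// including the half-open range check [lowerBound, lowerBound + range).
// Names are illustrative; the real AssertJ base class is AbstractAssert.
public class FluentSketch {

  static final class RangeAssert {
    private final int actual;

    private RangeAssert(int actual) { this.actual = actual; }

    static RangeAssert assertThat(int actual) { return new RangeAssert(actual); }

    RangeAssert isInRangeOf(int lowerBound, int range) {
      int upperBound = lowerBound + range; // exclusive upper bound
      if (actual < lowerBound || actual >= upperBound) {
        throw new AssertionError(String.format(
            "Expected value to be between <%s> and <%s>, but was <%s>",
            lowerBound, upperBound, actual));
      }
      return this; // returning this is what enables chaining
    }

    RangeAssert isNot(int other) {
      if (actual == other) {
        throw new AssertionError("Expected value to differ from <" + other + ">");
      }
      return this;
    }
  }

  public static void main(String[] args) {
    RangeAssert.assertThat(142).isInRangeOf(100, 50).isNot(100);
    System.out.println("chained assertions passed");
  }
}
```

Each verification method either throws an AssertionError with a descriptive message or hands the same instance back, which is all the machinery the fluent style needs.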

Garbage Collection: increasing the throughput

The inspiration for this post came after stumbling upon the "Pig in the Python" definition in the memory management glossary. Apparently, this term is used to explain the situation where GC repeatedly promotes large objects from generation to generation. The effect of doing so is supposedly similar to that of a python swallowing its prey whole, only to become immobilised during digestion.

For the next 24 hours I just could not get the picture of choking pythons out of my head. As the psychiatrists say, the best way to let go of your fears is to speak about them. So here we go. But instead of the pythons, the rest of the story will be about garbage collection tuning. I promise.

Garbage Collection pauses are well known for their potential to become a performance bottleneck. Modern JVMs do ship with advanced garbage collectors, but as I have experienced, finding the optimal configuration for a particular application is still darn difficult. To even stand a chance at manually approaching the issue, one would need to understand the exact mechanics of the garbage collection algorithms. This post might be able to help you in this regard, as I am going to use an example to demonstrate how small changes in JVM configuration can affect the throughput of your application.

Example

The application we use to demonstrate the GC impact on throughput is a simple one. It consists of just two threads:

- PigEater – simulating a situation where the python keeps eating one pig after another. The code achieves this by adding 32MB of bytes to a java.util.List and sleeping 100ms after each attempt.
- PigDigester – simulating an asynchronous digesting process. The code implements digestion by just nullifying that list of pigs. As this is a rather tiring process, this thread sleeps for 2000ms after each reference cleaning.

Both threads run in a while loop, continuing to eat and digest until the snake is full. This happens at around 5,000 pigs eaten.
```java
package eu.plumbr.demo;

import java.util.ArrayList;
import java.util.List;

public class PigInThePython {
  static volatile List<byte[]> pigs = new ArrayList<>();
  static volatile int pigsEaten = 0;
  static final int ENOUGH_PIGS = 5000;

  public static void main(String[] args) throws InterruptedException {
    new PigEater().start();
    new PigDigester().start();
  }

  static class PigEater extends Thread {
    @Override
    public void run() {
      while (true) {
        pigs.add(new byte[32 * 1024 * 1024]); // 32MB per pig
        if (pigsEaten > ENOUGH_PIGS) return;
        takeANap(100);
      }
    }
  }

  static class PigDigester extends Thread {
    @Override
    public void run() {
      long start = System.currentTimeMillis();
      while (true) {
        takeANap(2000);
        pigsEaten += pigs.size();
        pigs = new ArrayList<>();
        if (pigsEaten > ENOUGH_PIGS) {
          System.out.format("Digested %d pigs in %d ms.%n",
              pigsEaten, System.currentTimeMillis() - start);
          return;
        }
      }
    }
  }

  static void takeANap(int ms) {
    try {
      Thread.sleep(ms);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}
```

Now let's define the throughput of this system as the number of pigs digested per second. Taking into account that a pig is stuffed into the python every 100ms, we see that the theoretical maximal throughput of this system is 10 pigs/second.

Configuring the GC example

Let's see how the system behaves using two different configurations. In all situations, the application was run on a dual-core Mac (OS X 10.9.3) with 8G of physical memory.
The first configuration:

- 4G of heap (-Xms4g -Xmx4g)
- Using CMS to clean the old generation (-XX:+UseConcMarkSweepGC) and Parallel to clean the young generation (-XX:+UseParNewGC)
- Allocates 12.5% of the heap (-Xmn512m) to the young generation, further restricting the Eden and Survivor spaces to be equally sized.

The second configuration is a bit different:

- 2G of heap (-Xms2g -Xmx2g)
- Using Parallel GC to conduct garbage collection both in the young and tenured generations (-XX:+UseParallelGC)
- Allocates 75% of the heap to the young generation (-Xmn1536m)

Now it is time to place bets: which of the two configurations performs better in terms of throughput (pigs eaten per second, remember?). Those of you laying your money on the first configuration, I must disappoint you. The results are exactly reversed:

- The first configuration (large heap, large old space, CMS GC) is capable of eating 8.2 pigs/second
- The second configuration (2x smaller heap, large young space, Parallel GC) is capable of eating 9.2 pigs/second

Now, let me put the results in perspective. Allocating 2x fewer resources (memory-wise), we achieved 12% better throughput. This is something so contrary to common knowledge that it might require some further clarification of what was actually happening.

Interpreting the GC results

The reason for what you face is not too complex, and the answer is staring right at you when you take a closer look at what the GC is doing during the test run.
For this, you can use the tool of your choice; I peeked under the hood with the help of jstat, similar to the following:

```
jstat -gc -t -h20 PID 1s
```

Looking at the data, I noticed that the first configuration went through 1,129 garbage collection cycles (YGC+FGC), which in total took 63.723 seconds:

```
Timestamp  S0C      S1C      S0U      S1U      EC       EU       OC        OU        PC      PU     YGC  YGCT   FGC FGCT  GCT
594.0      174720.0 174720.0 163844.1 0.0      174848.0 131074.1 3670016.0 2621693.5 21248.0 2580.9 1006 63.182 116 0.236 63.419
595.0      174720.0 174720.0 163842.1 0.0      174848.0 65538.0  3670016.0 3047677.9 21248.0 2580.9 1008 63.310 117 0.236 63.546
596.1      174720.0 174720.0 98308.0  163842.1 174848.0 163844.2 3670016.0 491772.9  21248.0 2580.9 1010 63.354 118 0.240 63.595
597.0      174720.0 174720.0 0.0      163840.1 174848.0 131074.1 3670016.0 688380.1  21248.0 2580.9 1011 63.482 118 0.240 63.723
```

The second configuration paused a total of 168 times (YGC+FGC) for just 11.409 seconds:

```
Timestamp  S0C      S1C      S0U S1U EC        EU        OC       OU       PC      PU     YGC YGCT  FGC FGCT  GCT
539.3      164352.0 164352.0 0.0 0.0 1211904.0 98306.0   524288.0 164352.2 21504.0 2579.2 27  2.969 141 8.441 11.409
540.3      164352.0 164352.0 0.0 0.0 1211904.0 425986.2  524288.0 164352.2 21504.0 2579.2 27  2.969 141 8.441 11.409
541.4      164352.0 164352.0 0.0 0.0 1211904.0 720900.4  524288.0 164352.2 21504.0 2579.2 27  2.969 141 8.441 11.409
542.3      164352.0 164352.0 0.0 0.0 1211904.0 1015812.6 524288.0 164352.2 21504.0 2579.2 27  2.969 141 8.441 11.409
```

Considering that the work to be carried out was equivalent in both cases (with no long-living objects in sight, the duty of the GC in this pig-eating exercise is just to get rid of everything as fast as possible), the first configuration forces the GC to run ~6.7x more often, resulting in ~5.6x longer total pause times.

So the story fulfilled two purposes. First and foremost, I hope I got the picture of a choking python out of my head.
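A quick sanity check of the arithmetic behind those ratios, using the cycle counts and total GC times from the last jstat line of each run:

```java
import java.util.Locale;

// Back-of-the-envelope check of the jstat figures quoted above.
public class GcMath {
  public static void main(String[] args) {
    int cyclesFirst = 1011 + 118;   // YGC + FGC, last line of the first run
    int cyclesSecond = 27 + 141;    // YGC + FGC, second run
    double pauseFirst = 63.723;     // total GCT in seconds, first run
    double pauseSecond = 11.409;    // total GCT in seconds, second run

    System.out.println("cycles: " + cyclesFirst + " vs " + cyclesSecond);
    System.out.printf(Locale.ROOT, "GC runs %.1fx more often%n",
        (double) cyclesFirst / cyclesSecond);
    System.out.printf(Locale.ROOT, "total pauses are %.1fx longer%n",
        pauseFirst / pauseSecond);
    // prints: cycles: 1129 vs 168 / GC runs 6.7x more often
    //         total pauses are 5.6x longer
  }
}
```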
Another and more significant take-away is that tuning GC is a tricky exercise at best, requiring a deep understanding of several underlying concepts. Even with the truly trivial application used in this blog post, the results you face can have a significant impact on your throughput and capacity planning. In real-world applications the differences are even more staggering. So the choice is yours: you can either master the concepts, or focus on your day-to-day work and let Plumbr find a suitable GC configuration according to your needs.

Reference: Garbage Collection: increasing the throughput from our JCG partner Nikita Salnikov-Tarnovski at the Plumbr Blog blog....

Brand new JSF components in PrimeFaces Extensions

The PrimeFaces Extensions team is glad to announce several new components for the upcoming 3.0.0 main release. Our new committer Francesco Strazzullo gave the project a "Turbo Boost" and brought at least 6 JSF components which have been successfully integrated! The current development state is deployed on OpenShift – please have a look at the showcase. Below is a short overview of the added components, with screenshots.

- Analog Clock. A component similar to the digital PrimeFaces Clock, but as an analog variant, enhanced with advanced settings.
- Countdown. It simulates a countdown and fires a JSF listener after a customizable interval. You can start, stop and pause the countdown.
- DocumentViewer. A JSF wrapper of the Mozilla Foundation project PDF.js – a full HTML PDF reader.
- GChart. A JSF wrapper of the Google Charts API. It's the same chart library used by Google Analytics and other Google services. Please look at Organizational Chart and Geo Chart.

A small note from me: charts can be built completely from a model in Java. There is only one GChartModel, which allows you to add any options you want programmatically. I have used the same approach for my chart library based on Flotcharts (and am thinking right now about adding it to the PF Extensions). There is only one generic model with generic setters for options (options are then serialized to JSON). Advantage: you can export a chart on the server side, e.g. with PhantomJS. This is a different approach from PrimeFaces' charts, where each chart type has a separate model class and hard-coded methods for option settings.

- Gravatar. A component for Gravatar services.
- Knob. A nice theme-aware component to insert numeric values in a range. It has many settings for visual customization, an AJAX listener and more.

Last but not least: we plan to deploy current SNAPSHOTs on OpenShift in the future. More new components are coming soon. I intend to bring a component called pe:typeahead to 3.0.0 too.
It is based on Twitter's Typeahead. In the next post, I will explain how I have added excellent WAI-ARIA support to this great autocomplete widget. Stay tuned!

Reference: Brand new JSF components in PrimeFaces Extensions from our JCG partner Oleg Varaksin at the Thoughts on software development blog....
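The "one generic model with generic setters, serialized to JSON" idea described above can be sketched in a few lines. ChartModelSketch is a hypothetical illustration of the approach, not the actual GChartModel API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the single-generic-model idea: options are
// collected via one generic setter and serialized to JSON afterwards.
// Not the actual GChartModel API.
public class ChartModelSketch {

  private final Map<String, Object> options = new LinkedHashMap<>();

  public ChartModelSketch setOption(String name, Object value) {
    options.put(name, value);
    return this;
  }

  // Minimal JSON serialization, handling strings and numbers only.
  public String toJson() {
    StringBuilder json = new StringBuilder("{");
    String separator = "";
    for (Map.Entry<String, Object> entry : options.entrySet()) {
      json.append(separator).append('"').append(entry.getKey()).append("\":");
      Object value = entry.getValue();
      json.append(value instanceof String ? "\"" + value + "\"" : value);
      separator = ",";
    }
    return json.append('}').toString();
  }

  public static void main(String[] args) {
    String json = new ChartModelSketch()
        .setOption("title", "Revenue")
        .setOption("width", 400)
        .toJson();
    System.out.println(json); // {"title":"Revenue","width":400}
  }
}
```

Because the model knows nothing about individual chart types, the same class serves every chart, and the server can hand the resulting JSON to whatever renders it.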

The Measure Of Success

What makes a successful project? Waterfall project management tells us it's about meeting scope, time and cost goals. Do these success metrics also hold true for agile projects? Let's see.

In an agile project we learn new information all the time. It's likely that the scope will change over time, because we find out that things we assumed the customer wanted were wrong, while features we didn't even think of are actually needed. We know that we don't know everything when we estimate scope, time and budget. This is true for both kinds of projects, but in agile projects we admit that, and therefore do not lock those in as goals.

The waterfall project plan is immune to feedback. In agile projects, we put feedback cycles into the plan so we will be able to introduce changes. We move from a "we know what we need to do" to a "let's find out if what we're thinking is correct" view. In waterfall projects, there's an assumption of no variability, and that the plan covers any possible risk. In fact, one small shift in the plan can have disastrous (or wonderful) effects on product delivery.

Working from a prioritized backlog in an agile project means the project can end "prematurely". If we have a happy customer with half the features, why not stop there? If we deliver a smaller scope, under budget and before the deadline, has the project actually failed?

Some projects are so long that the people who did the original estimation are long gone. The assumptions they relied on are no longer true; technology has changed, and the market too. Agile projects don't plan that far into the future, and therefore cannot be measured according to the classic metrics.

Quality is not part of the scope, time and cost trio, and is usually not set as a goal. Quality is not easily measured, and suffers from the pressure of the other three.
In agile projects quality is considered a first-class citizen, because we know it supports not only customer satisfaction, but also the ability of the team to deliver at a consistent pace.

All kinds of differences. But they don't answer a very simple question: what is success?

In any kind of project, success has an impact. It creates happy customers. It creates a new market. It changes how people think and feel about the company. And it also changes how people inside the company view themselves. This impact is what makes a successful project. This is what we should be measuring.

The problem with all of those is that they cannot be measured at the delivery date, if at all. Cost, time and scope may be measurable at the delivery date, including against the initial estimation, but they are not really indicative of success. In fact, there's a destructive force within the scope, time and cost goals: they come at the expense of others, like quality and customer satisfaction. If a deadline is important, quality suffers. We've all been there.

The cool thing about an agile project is that we can gain confidence we're on the right track, if customers were part of the process, and if the people developing the product were aligned with the customer's feedback. The feedback tells us early on if the project is going to be successful, according to real-life parameters. And if we're wrong, that's good too. We can cut our losses and turn to another opportunity.

So agile is better, right? Yes, I'm pro-agile. No, I don't think agile works every time. I ask that you define your success goals for your product and market, not based on a methodology, but on what impact it will make. Only then can you actually measure success.

Reference: The Measure Of Success from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....

Load-Testing Guidelines

Load-testing is not trivial. It's often not just about downloading JMeter or Gatling, recording some scenarios and then running them. Well, it might be just that, but you are lucky if it is. And, at the risk of sounding like Captain Obvious, it's good to be reminded of some things that can potentially waste time.

So, when you run the tests, eventually you will hit a bottleneck, and then you'll have to figure out where it is. It can be:

- client bottleneck – if your load-testing tool uses HttpURLConnection, the number of requests sent by the client is quite limited. You have to start from that and make sure enough requests are leaving your load-testing machine(s)
- network bottleneck – check if your outbound connection allows the desired number of requests to reach the server
- server machine bottleneck – check the number of open files that your (most probably) Linux server allows. For example, if the default is 1024, then you can have at most 1024 concurrent connections. So increase that (limits.conf)
- application server bottleneck – if the thread pool that handles requests is too small, requests may be kept waiting. If some other tiny configuration switch (e.g. whether to use NIO, which is worth a separate article) has the wrong value, that may reduce performance. You'd have to be familiar with the performance-related configuration of your server.
- database bottleneck – check the CPU usage and response times of your database to see if it's not the one slowing the requests. Misconfiguring your database, or having too small/few DB servers, can obviously be a bottleneck
- application bottleneck – these you'd have to investigate yourself, possibly using a performance monitoring tool (but be careful when choosing one, as there are many "new and cool" but unstable and useless ones). We can divide this type in two:
  - framework bottleneck – if a framework you are using has problems. This might be a web framework, a dependency injection framework, an actor system, an ORM, or even a JSON serialization tool
  - application code bottleneck – if you are misusing a tool/framework, have blocking code, or just wrote horrible code with unnecessarily high computational complexity

You'd have to constantly monitor the CPU, memory, network and disk I/O usage of the machines in order to understand when you've hit the hardware bottleneck.

One important aspect is being able to bombard your servers with enough requests. It's not unlikely that a single machine is insufficient, especially if you are a big company and your product is likely to attract a lot of customers at the start, and/or making a request needs some processing power as well, e.g. for encryption. So you may need a cluster of machines to run your load tests. The tool you are using may not support that, so you may have to coordinate the cluster manually.

As a result of your load tests, you'd have to consider how long it makes sense to keep connections waiting, and when to reject them. That is controlled by the connect timeout on the client and the registration timeout (or pool borrow timeout) on the server. Also keep that in mind when viewing the results – a too-slow response and a rejected connection are practically the same thing: your server is not able to service the request.

If you are on AWS, there are some specifics. Leaving auto-scaling aside (which you should probably disable for at least some of the runs), you need to keep in mind that the ELB needs warming up. Run the tests a couple of times to warm up the ELB (many requests will fail until it's fine). Also, when using a load balancer and long-lived connections are left open (or you use WebSocket, for example), the load balancer may leave connections from itself to the servers behind it open forever and reuse them when a new request for a long-lived connection comes.
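The client-bottleneck point above comes down to how many requests can be in flight at once on the load-generating machine. The sketch below simulates that cap with a fixed thread pool; the sleep stands in for real HTTP work, so no network is involved:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a minimal client-side load generator. A fixed thread pool
// caps the number of in-flight "requests" -- exactly the kind of client
// bottleneck described above. Thread.sleep stands in for real HTTP work.
public class LoadSketch {

  // Returns { completedRequests, maxObservedInFlight }.
  static int[] run(int concurrency, int totalRequests) throws InterruptedException {
    AtomicInteger completed = new AtomicInteger();
    AtomicInteger inFlight = new AtomicInteger();
    AtomicInteger maxInFlight = new AtomicInteger();

    ExecutorService pool = Executors.newFixedThreadPool(concurrency);
    for (int i = 0; i < totalRequests; i++) {
      pool.submit(() -> {
        int now = inFlight.incrementAndGet();
        maxInFlight.accumulateAndGet(now, Math::max);
        try {
          Thread.sleep(5); // simulated request latency
        } catch (InterruptedException ignored) {
          Thread.currentThread().interrupt();
        }
        inFlight.decrementAndGet();
        completed.incrementAndGet();
      });
    }
    pool.shutdown();
    pool.awaitTermination(30, TimeUnit.SECONDS);
    return new int[] { completed.get(), maxInFlight.get() };
  }

  public static void main(String[] args) throws InterruptedException {
    int[] result = run(8, 100);
    System.out.println("completed: " + result[0]);
    System.out.println("max in-flight: " + result[1] + " (pool size 8)");
  }
}
```

However fast the server is, this client can never have more than the pool size of requests outstanding, which is why measuring the client side is the first step before blaming the server.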
Overall, load (performance) testing and analysis is not straightforward and there are many possible problems, but it is something that you must do before release. Well, unless you don’t expect more than 100 users. And the next time I do that, I will use my own article for reference, to make sure I’m not missing something.Reference: Load-Testing Guidelines from our JCG partner Bozhidar Bozhanov at Bozho’s tech blog....

Continuous Delivery with Docker, Jenkins, JBoss Fuse and OpenShift PaaS

I recently put together an end-to-end demo showing step-by-step how to set up a Continuous Delivery pipeline to help automate your deployments and shorten your cycle times for getting code from development to production. Establishing a proper continuous delivery pipeline is a discipline that requires more than just tools and automation, but the value of having good tools and a head start on setting this up can’t be overstated. This project has two focuses:

Show how you’d do CD with JBoss Fuse and OpenShift
Create a scripted, repeatable, pluggable and versioned demo so we can swap out pieces (like using JBoss Fuse 6.2/Fabric8/Docker/Kubernetes, or OpenStack, or VirtualBox, or go.cd, or travis-ci, or other code review systems)

We use Docker containers to set up all of the individual pieces, which makes the demo easier to script and version. See the videos of me doing the demo below, or check out the setup steps and follow the script to recreate the demo yourself!

Part I – Continuous Delivery with JBoss Fuse on OpenShift Enterprise from Christian Posta on Vimeo.
Part II – Continuous Delivery with JBoss Fuse on OpenShift Enterprise Part II from Christian Posta on Vimeo.
Part III – Continuous Delivery with JBoss Fuse on OpenShift Enterprise part III from Christian Posta on Vimeo.

Reference: Continuous Delivery with Docker, Jenkins, JBoss Fuse and OpenShift PaaS from our JCG partner Christian Posta at the Christian Posta – Software Blog....

5 Error Tracking Tools Java Developers Should Know

Raygun, StackHunter, Sentry, Takipi and Airbrake: modern developer tools to help you crush bugs before bugs crush your app! With the Java ecosystem moving forward, web applications serving growing numbers of requests and users demanding high performance, comes a new breed of modern development tools. A fast-paced environment with rapid new deployments requires tracking errors and gaining insight into an application’s behavior on a level traditional methods can’t sustain. In this post we’ve decided to gather 5 of those tools, see how they integrate with Java and find out what kind of tricks they have up their sleeves. It’s time to smash some bugs. Raygun Mindscape’s Raygun is a web based error management system that keeps track of exceptions coming from your apps. It supports various desktop, mobile and web programming languages, including Java, Scala, .NET, Python, PHP, and JavaScript. Besides that, sending errors to Raygun is possible through a REST API, and a few more Providers (their term for language and framework integrations) came to life thanks to developer community involvement.

Key Features:
Error grouping – every occurrence of a bug is presented within one group, with access to its single instances including the stack trace
Full text search – error groups and all collected data are searchable
View app activity – every action on an error group is displayed for all your team to see: status updates, comments and more
Affected users – counts of affected users appear by each error
External integrations – GitHub, Bitbucket, Asana, JIRA, HipChat and many more

The Java angle: To use Raygun with Java, you’ll need to add some dependencies to your pom.xml file if you’re using Maven, or add the jars manually. The second step is to add an UncaughtExceptionHandler that creates an instance of RaygunClient and sends your exceptions to it. In addition, you can also add custom data fields to your exceptions and send them to Raygun together.
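The UncaughtExceptionHandler wiring mentioned above is plain JDK machinery. Here is a hedged sketch of the pattern, with a hypothetical ErrorSink interface standing in for RaygunClient (the real Raygun API is not shown here – consult their walkthrough for the specifics):

```java
public class CrashReporter {

    // Hypothetical stand-in for a real error client such as RaygunClient.
    public interface ErrorSink {
        void send(Throwable error);
    }

    // Routes every uncaught exception, from any thread, to the sink.
    public static void install(ErrorSink sink) {
        Thread.setDefaultUncaughtExceptionHandler(
                (thread, error) -> sink.send(error));
    }
}
```

The same handler is the natural place to attach any custom data fields before the error is shipped off.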
The full walkthrough is available here. Behind the curtain: Meet Robie Robot, the certified operator of Raygun. As in, the actual ray gun. Check it out on: https://raygun.io Sentry Started as a side-project, Sentry is an open-source web based solution that serves as a real-time event logging and aggregation platform. It monitors errors and displays when, where and to whom they happen, promising to do so without relying solely on user feedback. Supported languages and frameworks include Ruby, Python, JS, Java, Django, iOS, .NET and more.

Key Features:
See the impact of new deployments in real time
Provide support to specific users interrupted by an error
Detect and thwart fraud as it’s attempted – notifications of unusual amounts of failures on purchases, authentication, and other sensitive areas
External integrations – GitHub, HipChat, Heroku, and many more

The Java angle: Sentry’s Java client is called Raven and supports major existing logging frameworks like java.util.logging, Log4j, Log4j2 and Logback with Slf4j. An independent method to send events directly to Sentry is also available. To set up Sentry for Java with Logback, for example, you’ll need to add the dependencies manually or through Maven, then add a new Sentry appender configuration, and you’re good to go. Instructions are available here. Behind the curtain: Sentry began as an internal project at Disqus back in 2010, built by Chris Jennings and David Cramer to solve exception logging on a Django application. Check it out on: https://www.getsentry.com/ Takipi Unlike most of the other tools, Takipi is far more than a stack trace prettifier. It was built with a simple objective in mind: telling developers exactly when and why production code breaks. Whenever a new exception is thrown or a log error occurs, Takipi captures it and shows you the variable state which caused it, across methods and machines.
Takipi will overlay this over the actual code which executed at the moment of error – so you can analyze the exception as if you were there when it happened.

Key features:
Detect – caught/uncaught exceptions, HTTP and logged errors
Prioritize – how often errors happen across your cluster, whether they involve new or modified code, and whether that rate is increasing
Analyze – see the actual code and variable state, even across different machines and applications
Easy to install – no code or configuration changes needed; less than 2% overhead

The Java angle: Takipi was built for production environments in Java and Scala. The installation takes less than 1 minute and includes attaching a Java agent to your JVM. Behind the curtain: each exception type and error has a unique monster that represents it. You can find these monsters here. Check it out on: http://www.takipi.com/ Airbrake Another tool that has set its sights on exception tracking is Rackspace’s Airbrake, taking on the mission of “No More Searching Log Files”. It provides users with a web based interface that includes a dashboard with error details and an application specific view. Supported languages include Ruby, PHP, Java, .NET, Python and even… Swift.

Key Features:
Detailed stack traces, grouping by error type, users and environment variables
Team productivity – filter important errors from the noise
Team collaboration – see who’s causing bugs and who’s fixing them
External integrations – HipChat, GitHub, JIRA, Pivotal and over 30 more

The Java angle: Airbrake officially supports only Log4j, although a Logback library is also available. Log4j2 support is currently lacking. The installation procedure is similar to Sentry’s: add a few dependencies manually or through Maven, add an appender, and you’re ready to start. Similarly, a direct way to send messages to Airbrake is also available with AirbrakeNotice and AirbrakeNotifier. More details are available here.
Behind the curtain: Airbrake was acquired by Exceptional, which then got acquired by Rackspace. Check it out on: https://airbrake.io/ StackHunter Currently in beta, StackHunter provides a self hosted tool to track your Java exceptions – a change of scenery from the hosted tools above. Other than that, it aims to provide a similar feature set to inform developers of their exceptions and help solve them faster.

Key Features:
A single self hosted web interface to view all exceptions
Collection of stack trace data and context, including key metrics such as total exceptions, unique exceptions, users affected and sessions affected
Instant email alerts when exceptions occur
Exception grouping by root cause

The Java angle: Built specifically for Java, StackHunter runs on any servlet container running Java 6 or above. Installation includes running StackHunter on a local servlet container, configuring an outgoing mail server for alerts, and configuring the application you wish to log. Full instructions are available here. Behind the curtain: StackHunter is developed by Dele Taylor, who also works on Data Pipeline – a tool for transforming and migrating data in Java. Check it out on: http://stackhunter.com/ Bonus: ABRT Another approach to error tracking worth mentioning is used by ABRT, an automatic bug detection and reporting tool from the Fedora ecosystem, which is a Red Hat sponsored community project. Unlike the 5 tools we covered here, this one is intended to be used not only by app developers but by their users as well, reporting bugs back to Red Hat with richer context that would otherwise be harder to understand and debug.

The Java angle: Support for Java exceptions is still at the proof of concept stage. A Java connector developed by Jakub Filák is available here. Behind the curtain: ABRT is an open-source project developed by Red Hat. Check it out on: https://github.com/abrt/abrt Did we miss any other tools? How do you keep track of your exceptions?
Please let me know in the comments section below.Reference: 5 Error Tracking Tools Java Developers Should Know from our JCG partner Alex Zhitnitsky at the Takipi blog....

3 Examples of Parsing HTML File in Java using Jsoup

HTML is the core of the web: all the pages you see on the internet are based on HTML, whether they are dynamically generated by JavaScript, JSP, PHP, ASP or any other web technology. Your browser actually parses HTML and renders it for you. But what do you do if you need to parse an HTML document from a Java program and find some elements, tags or attributes, or check whether a particular element exists? If you have been in Java programming for some years, I am sure you have done some XML parsing work using parsers like DOM and SAX. Ironically, there are a few instances when you need to parse an HTML document from a core Java application which doesn’t include Servlets or other Java web technologies. To make things worse, there is no HTTP or HTML library in the core JDK either. That’s why, when it comes to parsing an HTML file, many Java programmers had to ask Google how to get the value of an HTML tag in Java. When I needed that, I was sure there would be an open source library implementing the functionality for me, but I didn’t know it would be as wonderful and feature rich as Jsoup. It not only provides support to read and parse HTML documents, but also lets you extract any element from the HTML file, along with its attributes and its CSS class in jQuery style, and at the same time it allows you to modify them. You can probably do anything with an HTML document using Jsoup. In this article, we will parse an HTML file and find out the value of the title and heading tags. We will also see examples of downloading and parsing HTML from a file as well as from a URL, by parsing Google’s home page in Java. What is the Jsoup Library? Jsoup is an open source Java library for working with real-world HTML. It provides a very convenient API for extracting and manipulating data, using the best of DOM, CSS, and jQuery-like methods.
Jsoup implements the WHATWG HTML5 specification and parses HTML to the same DOM as modern browsers like Chrome and Firefox do. Here are some of the useful features of the jsoup library:

Jsoup can scrape and parse HTML from a URL, file, or string
Jsoup can find and extract data, using DOM traversal or CSS selectors
Jsoup allows you to manipulate HTML elements, attributes, and text
Jsoup can clean user-submitted content against a safe white-list, to prevent XSS attacks
Jsoup also outputs tidy HTML

Jsoup is designed to deal with the different kinds of HTML found in the real world, from properly validated HTML to incomplete, non-validated tag collections. One of the core strengths of Jsoup is that it’s very robust. HTML Parsing in Java using Jsoup In this Java HTML parsing tutorial, we will see three different examples of parsing and traversing HTML documents in Java using jsoup. In the first example, we will parse an HTML String, the contents of which are all tags, in the form of a String literal in Java. In the second example, we will download our HTML document from the web, and in the third example, we will load our own sample HTML file login.html for parsing. This file is a sample HTML document that contains a title tag and a div in the body section which holds an HTML form. It has input tags to capture username and password, and submit and reset buttons for further action. It’s proper HTML which can be validated, i.e. all tags and attributes are properly closed.
Here is how our sample HTML file looks:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Login Page</title>
</head>
<body>
<div id="login" class="simple" >
<form action="login.do">
Username : <input id="username" type="text" /><br>
Password : <input id="password" type="password" /><br>
<input id="submit" type="submit" />
<input id="reset" type="reset" />
</form>
</div>
</body>
</html>

HTML parsing is very simple with Jsoup: all you need to do is call the static method Jsoup.parse() and pass your HTML String to it. Jsoup provides several overloaded parse() methods to read HTML documents from a String, a File, a base URI, a URL, and an InputStream. You can also specify the character encoding to read HTML files correctly in case they are not in “UTF-8” format. The parse(String html) method parses the input HTML into a new Document. In Jsoup, Document extends Element, which extends Node; TextNode also extends Node. As long as you pass in a non-null string, you’re guaranteed to have a successful, sensible parse, with a Document containing (at least) a head and a body element. Once you have a Document, you can get the data you want by calling the appropriate methods on Document and its parent classes Element and Node. Java Program to parse an HTML Document Here is our complete Java program to parse an HTML String, an HTML file downloaded from the internet and an HTML file from the local file system. In order to run this program, you can use the Eclipse IDE, any other IDE, or even the command prompt. In Eclipse it’s very easy: just copy this code, create a new Java project, right click on the src package and paste it. Eclipse will take care of creating the proper package and the Java source file with the same name, so there is absolutely less work. If you already have a sample Java project, then it’s just one step.
The following Java program shows 3 examples of parsing and traversing an HTML file. In the first example, we directly parse a String with HTML content; in the second example we parse an HTML file downloaded from a URL; in the third example we load and parse an HTML document from the local file system.

import java.io.File;
import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

/**
 * Java Program to parse/read HTML documents from File using Jsoup library.
 * Jsoup is an open source library which allows Java developers to parse HTML
 * files and extract elements, manipulate data, change style using DOM, CSS and
 * JQuery like methods.
 *
 * @author Javin Paul
 */
public class HTMLParser {

    public static void main(String args[]) {

        // Parse HTML String using JSoup library
        String HTMLString = "<!DOCTYPE html>"
                + "<html>"
                + "<head>"
                + "<title>JSoup Example</title>"
                + "</head>"
                + "<body>"
                + "<table><tr><td><h1>HelloWorld</h1></tr>"
                + "</table>"
                + "</body>"
                + "</html>";

        Document html = Jsoup.parse(HTMLString);
        String title = html.title();
        String h1 = html.body().getElementsByTag("h1").text();

        System.out.println("Input HTML String to JSoup :" + HTMLString);
        System.out.println("After parsing, Title : " + title);
        System.out.println("After parsing, Heading : " + h1);

        // JSoup Example 2 - Reading HTML page from URL
        Document doc;
        try {
            doc = Jsoup.connect("http://google.com/").get();
            title = doc.title();
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("Jsoup can read HTML page from URL, title : " + title);

        // JSoup Example 3 - Parsing an HTML file in Java
        // Document htmlFile = Jsoup.parse("login.html", "ISO-8859-1"); // wrong
        Document htmlFile = null;
        try {
            htmlFile = Jsoup.parse(new File("login.html"), "ISO-8859-1"); // right
        } catch (IOException e) {
            e.printStackTrace();
        }
        title = htmlFile.title();
        Element div = htmlFile.getElementById("login");
        String cssClass = div.className(); // getting class from HTML element

        System.out.println("Jsoup can also parse HTML file directly");
        System.out.println("title : " + title);
        System.out.println("class of div tag : " + cssClass);
    }
}

Output:
Input HTML String to JSoup :<!DOCTYPE html><html><head><title>JSoup Example</title></head><body><table><tr><td><h1>HelloWorld</h1></tr></table></body></html>
After parsing, Title : JSoup Example
After parsing, Heading : HelloWorld
Jsoup can read HTML page from URL, title : Google
Jsoup can also parse HTML file directly
title : Login Page
class of div tag : simple

The Jsoup HTML parser will make every attempt to create a clean parse from the HTML you provide, regardless of whether the HTML is well-formed or not. It can handle the following mistakes: unclosed tags (e.g. <p>Java <p>Scala to <p>Java</p> <p>Scala</p>), implicit tags (e.g. a naked <td>Java is Great</td> is wrapped into a <table><tr><td>), and reliably creating the document structure (html containing a head and body, and only appropriate elements within the head). Jsoup is an excellent and robust open source library which makes reading HTML documents, body fragments and HTML strings, and directly parsing HTML content from the web, extremely easy.Reference: 3 Examples of Parsing HTML File in Java using Jsoup from our JCG partner Javin Paul at the Javarevisited blog....

Solving ORM – Keep the O, Drop the R, no need for the M

ORM has a simple, production-ready solution hiding in plain sight in the Java world. Let’s go through it in this post, alongside the following topics:

ORM / Hibernate in 2014 – the word on the street
ORM is still the Vietnam of Computer Science
ORM has 2 main goals only
When does ORM make sense?
A simple solution for the ORM problem
A production-ready ORM Java-based alternative

ORM / Hibernate in 2014 – the word on the street It’s been almost 20 years since ORM came around, and soon we will reach the 15th birthday of the de-facto and likely best ORM implementation in the Java world: Hibernate. We would expect that this is by now a well understood problem. But what are developers saying these days about Hibernate and ORM? Let’s take some quotes from two recent posts on this topic, Thoughts on Hibernate and JPA and Hibernate Alternatives: There are performance problems related to using Hibernate. A lot of business operations and reports involve writing complex queries. Writing them in terms of objects and maintaining them seems to be difficult. We shouldn’t be needing a 900 page book to learn a new framework. As Java developers we can easily relate to that: ORM frameworks tend to give cryptic error messages, the mapping is hard to do, and the runtime behavior – namely with lazy initialization exceptions – can be surprising when first encountered. Who hasn’t had to maintain that application using the Open Session In View pattern that generated a flood of SQL requests that took weeks to optimize? I believe it can literally take a couple of years to really understand Hibernate: lots of practice and several readings of the Java Persistence with Hibernate book (still 600 pages in its upcoming second edition). Are the criticisms of Hibernate warranted? I personally don’t think so; in fact, most developers really criticize the complexity of the object-relational mapping approach itself, and not a concrete ORM implementation of it in a given language.
This sentiment seems to come and go in periodic waves, maybe when a newer generation of developers hits the labor force. After hours and days spent trying to do what feels like it should be much simpler, it’s only a natural feeling. The fact is that there is a problem: why do many projects still spend 30% of their time developing the persistence layer today? ORM is the Vietnam of Computer Science The problem is that the ORM problem is complex, and there are no good solutions; any solution to it is a huge compromise. ORM was famously named the Vietnam of Computer Science almost 10 years ago, in a blog post from Jeff Atwood, one of the creators of Stack Overflow. The problems of ORM are well known and we won’t go through them in detail here; here is a summary from Martin Fowler on why ORM is hard:

object identity vs database identity
how to map object oriented inheritance in the relational world
unidirectional associations in the database vs bi-directional in the OO world
data navigation – lazy loading, eager fetching
database transactions vs no rollbacks in the OO world

This is just to name the main obstacles. The problem is also that it’s easy to forget what we are trying to achieve in the first place. ORM has 2 main goals only ORM has two clearly defined main goals:

map objects from the OO world into tables in a relational database
provide a runtime mechanism for keeping an in-memory graph of objects and a set of database tables in sync

Given this, when should we use Hibernate and ORM in general? When does ORM make sense? ORM makes sense when the project at hand is being done using a Domain Driven Development approach, where the whole program is built around a set of core classes called the domain model, which represent concepts in the real world such as Customer, Invoice, etc. If the project does not have the minimum threshold of complexity that needs DDD, then an ORM can likely be overkill.
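The first item on Fowler's list is easy to demonstrate in plain Java, without any framework: two objects loaded separately can represent the same database row while being distinct objects in memory, which is why ORM entities usually define equality by key. The Customer class below is an illustrative stand-in, not taken from any of the quoted posts:

```java
import java.util.Objects;

public class Customer {
    private final Long id;   // database identity: the primary key
    private final String name;

    public Customer(Long id, String name) {
        this.id = id;
        this.name = name;
    }

    // Two Customer objects are "the same row" when their keys match,
    // even though they remain distinct objects in memory (object identity).
    @Override
    public boolean equals(Object other) {
        return other instanceof Customer
                && Objects.equals(id, ((Customer) other).id);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(id);
    }
}
```

Every ORM has to paper over exactly this gap between == and equals(), typically via a session-level identity map.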
The problem is that even the simplest enterprise applications are well above this threshold, so ORM really pulls its weight most of the time. It’s just that ORM is hard to learn and full of pitfalls. So how can we tackle this problem? A simple solution for the ORM problem Someone once said something like this: A smart man solves a problem, but a wise man avoids it. As often happens in programming, we can find the solution by going back to the beginning and seeing what we are trying to solve: we are trying to synchronize an in-memory graph of objects with a set of tables. But these are two completely different types of data structures! Which data structure is the more generic? It turns out that the graph is: a set of linked database tables is really just a special type of graph, and the same can be said of almost any other data structure. Graphs and their traversal are very well understood and have decades of accumulated knowledge behind them, similar to the theory on which relational databases are built: relational algebra. Solving the impedance mismatch The logical conclusion is that the solution for the ORM impedance mismatch is to remove the mismatch itself: let’s store the graph of in-memory domain objects in a transaction-capable graph database! This solves the mapping problem by removing the need for mapping in the first place. A production-ready solution for the ORM problem This is easier said than done – or is it? It turns out that graph databases have been around for years, and the prime example in the Java community is Neo4j. Neo4j is a stable and mature product that is well understood and documented; see the Neo4j in Action book. It can be used as an external server or in embedded mode inside the Java process itself.
But it’s core API is all about graphs and nodes, something like this: GraphDatabaseService gds = new EmbeddedGraphDatabase("/path/to/store"); Node forrest=gds.createNode(); forrest.setProperty("title","Forrest Gump"); forrest.setProperty("year",1994); gds.index().forNodes("movies").add(forrest,"id",1);Node tom=gds.createNode(); The problem is that this is too far from domain driven development, writing to this would be like coding JDBC by hand. This is the typical task of a framework like Hibernate, with the big difference that because the impedance mismatch is minimal such framework can operate in a much more transparent and less intrusive way. It turns out that such framework is already written. Spring support for Neo4J One of the creators of the Spring framework Rod Johnson took the task of implementing himself the initial version of the Neo4j integration, the Spring Data Neo4j project. This is an important extract from the foreword of Rod Johnson in the documentation concerning the design of the framework: Its use of AspectJ to eliminate persistence code from your domain model is truly innovative, and on the cutting edge of today’s Java technologies. So Spring Data Neo4J is a AOP-based framework that wraps domain objects in a relatively transparent way, and synchronizes a in-memory graph of objects with a Neo4j transactional data store. It’s aimed to write the persistence layer of the application in a simplified way, similar to Spring Data JPA. How does the mapping to a graph database look like It turns out that there is limited mapping needed (tutorial). We need for one to mark which classes we want to make persistent, and define a field that will act as an Id: @NodeEntity class Movie { @GraphId Long nodeId; String id; String title; int year; Set cast; } There are other annotations (5 more per the docs) for example for defining indexing and relationships with properties, etc. 
Compared with Hibernate, only a fraction of the annotations are needed for the same domain model. What does the query language look like? The recommended query language is Cypher, an ASCII-art-based language. A query can look like this:

// returns users who rated a movie based on movie title (movieTitle parameter) higher than rating (rating parameter)
@Query("start movie=node:Movie(title={0}) " +
       "match (movie)<-[r:RATED]-(user) " +
       "where r.stars > {1} " +
       "return user")
Iterable getUsersWhoRatedMovieFromTitle(String movieTitle, Integer rating);

Cypher is very different from JPQL or SQL and implies a learning curve. Still, once past the learning curve, the language allows you to write performant queries that are usually problematic in relational databases. Performance of queries in graph vs relational databases Let’s compare some frequent query types and how they should perform in a graph vs a relational database:

lookup by id: this is implemented, for example, by doing a binary search on an index tree, finding a match and following a ‘pointer’ to the result. This is a (very) simplified description, but it’s likely identical for both databases. There is no apparent reason why such a query would take more time in a graph database than in a relational DB.
lookup of parent relations: this is the type of query where relational databases struggle. Self-joins might result in cartesian products of huge tables, bringing the database to a halt. A graph database can perform those queries in a fraction of that time.
lookup by non-indexed column: here the relational database can scan tables faster, due to the physical structure of the table and the fact that one read usually brings along multiple rows.
But this type of query (a table scan) is to be avoided in relational databases anyway. There is more to say here, but there is no indication (no readily-available DDD-related public benchmarks) that a graph-based data store would be inappropriate for doing DDD due to query performance. Conclusions I personally cannot find any conceptual reasons why a transaction-capable graph database would not be an ideal fit for doing Domain Driven Development, as an alternative to a relational database and ORM. No data store will ever fit every use case perfectly, but we can ask whether graph databases shouldn’t become the default for DDD, and relational the exception. The disappearance of ORM would imply a great reduction in the complexity and time that it takes to implement a project. The future of DDD in the enterprise The removal of the impedance mismatch and the improved performance of certain query types could be the killer features that drive the adoption of a graph based DDD solution. We can see practical obstacles: operations teams prefer relational databases, vendor contract lock-in, having to learn a new query language, limited expertise in the labor market, etc. But the economic advantage is there, and the technology is there too. And when that is the case, it’s usually only a matter of time. What about you – can you think of any reason why graph-based DDD would not work? Feel free to chime in in the comments below.Reference: Solving ORM – Keep the O, Drop the R, no need for the M from our JCG partner Aleksey Novik at The JHades Blog....

WildFly 9 – Don’t cha wish your console was hawt like this!

Everybody has probably heard the news: the first WildFly 9.0.0.Alpha1 release came out Monday. You can download it from the wildfly.org website. The biggest changes are that it is built by a new feature provisioning tool, which is layered on the now separate core distribution, and that it also contains a new servlet distribution (only a 25 MB ZIP) based on it. It is called “web lite” until there’s a better name. The architecture now supports a server suspend mode, also known as graceful shutdown. For now only Undertow and EJB3 use this; additional subsystems still need to be updated. The management APIs also got notification support. Overall, 256 fixes and improvements were included in this release. But let’s put all the awesomeness aside for a second and talk about what this post should be about. Administration Console WildFly 9 got a brushed up admin console. After you have downloaded, unzipped and started the server, you only need to add a user (bin/add-user.sh/.bat) and point your browser to http://localhost:9990/ to see it. With some minor UI tweaks this is looking pretty hot already. BUT there’s another console out there called hawtio! And what is extremely hot is that it already has some very first support for WildFly and EAP, and here are the steps to make it work. Get hawtio! You can use hawtio from a Chrome extension or in many different containers – or outside a container as a stand-alone executable jar. If you want to deploy hawtio as a console on WildFly, make sure to look at the complete how-to written by Christian Posta. The easiest way is to just download the latest executable 1.4.19 jar and start it on the command line:

java -jar hawtio-app-1.4.19.jar --port 8090

The port parameter lets you specify on which port you want the console to run. As I’m going to use it with WildFly, which also uses hawtio’s default port, this just directly uses another free port.
The next thing to do is install the JMX-to-JSON bridge on which hawtio relies to connect to remote processes. Instead of using JMX directly, which is blocked on most networks anyway, the Jolokia project bridges JMX MBeans to JSON, and hawtio operates on them. Download the latest Jolokia WAR agent and deploy it to WildFly. Now you’re almost ready to go. Point your browser to the hawtio console (http://localhost:8090/hawtio/), switch to the connect tab and enter the connection settings for your WildFly instance. Then press the “Connect to remote server” button below. As of today there is not much to see here: besides some very basic server information, you have the deployment overview and the connector status page. But the good news is: hawtio is open source and you can fork it from GitHub and add some more features to it. The WildFly/EAP console is in the hawtio-web subproject. Make sure to check out the contributor guidelines.Reference: WildFly 9 – Don’t cha wish your console was hawt like this! from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.