

Challenging Myself With Coplien’s Why Most Unit Testing is Waste

James O. Coplien wrote the thought-provoking essay Why Most Unit Testing is Waste in 2014 and elaborates on the topic further in his Segue. I love testing, but I also value challenging my views to expand my understanding, so it was a valuable read. When encountering something so controversial, it’s crucial to set aside one’s emotions and opinions and ask: “Provided that it is true, what in my world view might need questioning and updating?” Judge for yourself how well I have managed it. (Note: This post is not intended as a full and impartial summary of his writing but rather an overview of what I may learn from it.) Perhaps the most important lesson is this: Don’t blindly accept fads, myths, authorities, and “established truths.” Question everything, collect experience, judge for yourself. As J. Coplien himself writes: Be skeptical of yourself: measure, prove, retry. Be skeptical of me for heaven’s sake. I am currently fond of unit testing, so my mission now is to critically confront Coplien’s ideas and my own preconceptions with practical experience on my next projects. I would suggest that the main thing you take away isn’t “minimize unit testing” but rather “value thinking, focus on system testing, employ code reviews and other QA measures.” I’ll list my main take-aways first and go into detail later on:

- Think! Communicate! Tools and processes (TDD) cannot introduce design and quality for you.
- The risk mitigation provided by unit tests is highly overrated; it’s better to spend resources where the return on investment in terms of increased quality is higher (system tests, code reviews, etc.).
- Unit testing is still meaningful in some cases.
- Actually, testing – though crucial – is overrated. Analyse, design, and use a broad repertoire of quality assurance techniques.
- Regarding automated testing, focus on system tests. Integrate code and run the tests continually.
- Human insight beats machines; employ experience-based testing, including exploratory testing.
- Turn asserts from unit tests into runtime pre-/post-condition checks in your production code.
- We cannot test everything, and not every bug impacts users and is thus worth discovering and fixing. (It’s wasteful to test things that never occur in practice.)

I find two points especially valuable and thought-provoking, namely the (presumed) fact that the efficiency/cost ratio of unit testing is so low and that the overlap between what we test and what is used in production is so small. Additionally, the suggestion to use pre- and post-condition checks heavily in production code seems pretty good to me. And I agree that we need more automated acceptance/system tests and alternative QA techniques.

Think!

Unit testing is sometimes promoted as a way towards a better design (more modular and following principles such as SRP). However, according to Kent Beck, tests don’t help you to create a better design – they only help to surface the pain of a bad design. (TODO: Reference) If we set aside time to think about design for the sake of a good design, we will surely get a better result than when we employ tools and processes (JUnit, TDD) to force us to think about it (in the narrow context of testability). (On a related note, “hammock-driven development” is also based on an appreciation of the value of thinking.) Together with thinking, it is also efficient to discuss and explore design with others. [..] it’s much better to use domain modeling techniques than testing to shape an artefact. I heartily agree with this. (Side note: Though thinking is good, overthinking and premature abstraction can lead us to waste resources on overengineered systems. So be careful.)
How to achieve high quality according to Coplien

- Communicate and analyse better to minimize faults due to misunderstanding of requirements and miscommunication.
- Don’t neglect design, as faults in design are expensive to fix and hard to detect with testing.
- Use a broad repertoire of quality assurance techniques such as code reviews, static code analysis, pair programming, inspections of requirements and design.
- Focus automated testing efforts on system (and, if necessary, integration) tests while integrating code and running these tests continually.
- Leverage experience-based testing such as exploratory testing.
- Employ defensive and design-by-contract programming. Turn asserts in unit tests into runtime pre-/post-condition checks in your production code and rely on your stress tests, integration tests, and system tests to exercise the code.
- Apply unit testing where meaningful.
- Test where the risk of defects is high.

Tests cannot catch faults due to miscommunication/bad assumptions in requirements and design, which reportedly account for 45% of faults. Therefore communication and analysis are key. Unit testing in particular, and testing in general, is just one way of ensuring quality, and reportedly not the most efficient one. It is therefore best to combine a number of approaches for defect prevention and detection, such as those mentioned above. (I believe that formal inspection of requirements, design, and code in particular is one of the most efficient ones. Static code analysis can also help a lot with certain classes of defects.) (Also, there is a history of high-quality software with little or no unit testing, I believe.) As argued below, unit testing is wasteful and provides a low value according to Coplien. Thus you should focus on tests that check the whole system or, if that is not practical, at least parts of it – system and integration tests. (I assume that automated acceptance testing as promoted by Specification by Example belongs here as well.)
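The design-by-contract advice above – turning unit-test asserts into runtime pre-/post-condition checks – can be sketched in Java roughly as follows. (The class and method names are my own illustration, not from the essay; Java’s assert statements only run when the JVM is started with -ea.)

```java
import java.util.ArrayList;
import java.util.List;

public class ContractedList {

    // Concatenates two lists while enforcing a property-based
    // postcondition in the production code itself: the result length
    // must equal the sum of the input lengths.
    public static <T> List<T> concat(List<T> a, List<T> b) {
        assert a != null && b != null : "precondition: inputs must not be null";

        List<T> result = new ArrayList<>(a);
        result.addAll(b);

        // Unlike a value-based unit test, this check runs for every
        // real invocation over the product's lifetime.
        assert result.size() == a.size() + b.size()
                : "postcondition: concat lost or invented elements";
        return result;
    }

    public static void main(String[] args) {
        System.out.println(concat(List.of(1, 2), List.of(3))); // [1, 2, 3]
    }
}
```

A value-based unit test would check one hand-picked scenario; the property-based assert checks every concatenation the running product ever performs.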
Provide “hooks” to be able to observe the state and behavior of the system under test as necessary. Given modern hardware and possibilities, these tests should run continually. And developers should integrate their code and submit it for testing frequently (even a delay of 1 hour is too much). Use debugging and (perhaps temporary?) unit tests to pinpoint sources of failures. JH: I agree that whole-system tests provide more value than unit tests. But I believe that they are typically much more costly to create, maintain, and run, since they, by definition, depend on the whole system and thus have to relate to the whole (or a non-negligible portion) of it and can break because of changes in a number of places. It is also difficult to locate the cause of a failure since they invoke a lot of code. And if we were to test a small feature thoroughly, there would typically be a lot of duplication in its system tests (f.ex. to get the system to the state where we can start exercising the feature). Therefore people write more lower-level, more focused, and cheaper tests, as visualized in the famous Test Pyramid. I would love to learn how Mr. Coplien addresses these concerns. In any case, we should really try to get at least a modest number of automated acceptance tests; I have rarely seen this. Experienced testers are great at guessing where defects are likely, simulating unexpected user behavior, and generally breaking systems. Experience-based – f.ex. exploratory – testing is thus an irreplaceable complement to (automated) requirements-based testing. Defensive and design-by-contract programming: Coplien argues that most asserts in unit tests are actually pre-/post-condition and invariant checks and that it is therefore better to have them directly in the production code, as design-by-contract programming proposes. You would of course need to generalize them to property-based rather than value-based assertions, f.ex.
checking that the length of concatenated lists is the sum of the lengths of the individual lists, instead of checking concrete values. He writes: Assertions are powerful unit-level guards that beat most unit tests in two ways. First, each one can do the job of a large number (conceivably, infinite) of scenario-based unit tests that compare computational results to an oracle [JH: because they check properties, not values]. Second, they extend the run time of the test over a much wider range of contexts and detailed scenario variants by extending the test into the lifetime of the product. If something is wrong, it is certainly better to catch it and fail soon (and perhaps recover) than to go on silently and fail later; failing fast also makes it much easier to locate the bug later on. You could argue that these runtime checks are costly, but compared to the latency of network calls etc. they are negligible. Some unit tests have a value – f.ex. regression tests and tests of algorithmic code where there is a “formal oracle” for assessing correctness. Also, as mentioned elsewhere, unit tests can be a useful debugging (defect localization) tool. (Regression tests should still preferably be written as system tests. They are valuable because, in contrast to most unit tests, they have a clear business value and address a real risk. However, Mr. Coplien also proposes to delete them if they haven’t failed within a year.)

Doubts about the value of unit testing

In summary, it seems that agile teams are putting most of their effort into the quality improvement area with the least payoff. … unless [..] you are prevented from doing broader testing, unit testing isn’t the best you can do.

- “Most software failures come from the interactions between objects rather than being a property of an object or method in isolation.” Thus testing objects in isolation doesn’t help.
- The cost to create, maintain, and run unit tests is often forgotten or underestimated.
- There is also a cost incurred by the fact that unit tests slow down code evolution, because to change a function we must also change all its tests.
- In practice we often encounter unit tests that have little value (or actually a negative one when we consider the associated costs), f.ex. a test that essentially only verifies that we have set up mocks correctly.
- When we test a unit in isolation, we will likely test many cases that in practice do not occur in the system, and we thus waste our limited resources without adding any real value; the smaller the unit, the more difficult it is to link it to the business value of the system, and thus the probability of testing something without a business impact is high.
- The significance of code coverage and the risk mitigation provided by unit tests are highly overrated; in reality there are so many possibly relevant states of the system under test, code paths, and interleavings that we can hardly approach a coverage of 1 in 10^12.
- It is claimed that unit tests serve as a safety net for refactoring, but that is based upon an unverified and doubtful assumption (see below).
- According to some studies, the efficiency of unit testing is much lower than that of other QA approaches (Capers Jones 2013).
- Thinking about design for the sake of design yields better results than design thinking forced by a test-first approach (as discussed previously).

Limited resources, risk assessment, and testing of unused cases

One of the main points of the essays is that our resources are limited, and it is thus crucial to focus our quality assurance efforts where they pay off most. Coplien argues that unit testing is quite wasteful because we are very likely to test cases that never occur in practice. One of the reasons is that we want to test the unit – f.ex. a class – in its entirety, while in the system it is used only in a particular way.
It is typically impossible to link such a small unit to the business value of the application and thus to guide us to the test cases that actually have business value. Due to polymorphism etc., there is no way, he argues, that a static analysis of object-oriented code could tell you whether a particular method is invoked and in what context/sequence. Discovering and fixing bugs that are in reality never triggered is a waste of our limited resources. For example, a Map (Dictionary) provides a lot of functionality, but any application typically uses only a small portion of it. Thus: One can make ideological arguments for testing the unit, but the fact is that the map is much larger as a unit, tested as a unit, than it is as an element of the system. You can usually reduce your test mass with no loss of quality by testing the system at the use case level instead of testing the unit at the programming interface level. Unit tests can also lead to “test-induced design damage” – degradation of code and quality in the interest of making testing more convenient (introduction of interfaces for mocking, layers of indirection, etc.). Moreover, the fact that unit tests are passing does not imply that the code does what the business needs, contrary to acceptance tests. Testing – at any level – should be driven by risk assessment and cost-benefit analysis. Coplien stresses that risk assessment/management requires at least rudimentary knowledge of statistics and information theory.

The myth of code coverage

High code coverage makes us confident in the quality of our code. But the assumption that testing reproduces the majority of states and paths that the code will experience in deployment is rather dodgy. Even 100% line coverage is a big lie. A single line can be invoked in many different cases, and testing just one of them provides little information. Moreover, in concurrent systems the result might depend on the interleaving of concurrently executing threads, and there are many.
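To make the “big lie” of line coverage concrete, here is a hypothetical one-line method (my own illustration, not from the essay) that a single test covers 100% while still hiding two failure modes:

```java
public class CoverageLie {

    // One line of code. A single test gives it 100% line coverage,
    // yet says nothing about its other behaviors.
    static int percentOf(int part, int total) {
        return part * 100 / total; // covered != correct
    }

    public static void main(String[] args) {
        System.out.println(percentOf(1, 4)); // 25 – the one tested case
        // Untested behaviors of the very same line:
        //   percentOf(30_000_000, 100) silently overflows int and
        //     returns a negative number;
        //   percentOf(1, 0) throws ArithmeticException.
    }
}
```

The line is “fully covered” after the first call, yet two of its real-world behaviors were never exercised.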
It may also – directly or indirectly – depend on the state of the whole system (an extreme example: a deadlock that is triggered only if the system is short of memory). I define 100% coverage as having examined all possible combinations of all possible paths through all methods of a class, having reproduced every possible configuration of data bits accessible to those methods, at every machine language instruction along the paths of execution. Anything else is a heuristic about which absolutely no formal claim of correctness can be made. JH: Focusing on the worst-case coverage might be too extreme, even though nothing else has a “formal claim of correctness.” Even though humans are terrible at foreseeing the future, we perhaps still have a chance to guess the most likely and frequent paths through the code and the associated states, since we know its business purpose – even though we will miss all the rare, hard-to-track-down defects. So I do not see this as an absolute argument against unit testing, but it really makes the point that we must be humble and skeptical about what our tests can achieve.

The myth of the refactoring and regression safety net

Unit tests are often praised as a safety net against inadvertent changes introduced when refactoring or when introducing new functionality. (I certainly feel much safer when changing a codebase with a good test coverage (ehm, ehm); but maybe it is just a self-imposed illusion.) Coplien has a couple of counterarguments (in addition to the low-coverage one):

- Given that the whole is greater than the sum of its parts, testing a part in isolation cannot really tell you that a change to it hasn’t broken the system (a theoretical argument based on Weinberg’s Law of Composition).
- Tests as a safety net to catch inadvertent changes due to refactoring – “[..]
it pretends that we can have a high, but not perfect, degree of confidence that we can write tests that examine only that portion of a function’s behavior that will remain constant across changes to the code.” “Changing code means re-writing the tests for it. If you change its logic, you need to reevaluate its function. […] If a change to the code doesn’t require a change to the tests, your tests are too weak or incomplete.”

Tips for reducing the mass of unit tests

If the cost of maintaining and running your unit tests is too high, you can follow Coplien’s guidelines for eliminating the least valuable ones:

- Remove tests that haven’t failed in a year (informative value < maintenance and running costs).
- Remove tests that do not test functionality (i.e. don’t break when the code is modified).
- Remove tautological tests (f.ex. x.setY(5); assert x.getY() == 5).
- Remove tests that cannot be traced to business requirements and value.
- Get rid of unit tests that duplicate what system tests do; if the system tests are too expensive to run, create subunit integration tests.

(Notice that, theoretically and perhaps also practically speaking, a test that never fails has zero information value.)

Open questions

What is Coplien’s definition of a “unit test”? Is it one that checks an individual method? Or, at the other end of the scale, one that checks an object or a group of related objects but runs quickly, is independent of other tests, manages its own setup and test data, and generally does not access the file system, network, or a database? He praises testing based on Monte Carlo techniques. What is it? Does the system testing he speaks about differ from the BDD/Specification by Example way of automated acceptance testing?

Coplien’s own summary

- Keep regression tests around for up to a year — but most of those will be system-level tests rather than unit tests.
- Keep unit tests that test key algorithms for which there is a broad, formal, independent oracle of correctness, and for which there is ascribable business value.
- Except for the preceding case, if X has business value and you can test X with either a system test or a unit test, use a system test — context is everything.
- Design a test with more care than you design the code.
- Turn most unit tests into assertions.
- Throw away tests that haven’t failed in a year.
- Testing can’t replace good development: a high test failure rate suggests you should shorten development intervals, perhaps radically, and make sure your architecture and design regimens have teeth.
- If you find that individual functions being tested are trivial, double-check the way you incentivize developers’ performance. Rewarding coverage or other meaningless metrics can lead to rapid architecture decay.
- Be humble about what tests can achieve. Tests don’t improve quality: developers do.
- [..] most bugs don’t happen inside the objects, but between the objects.

Conclusion

What am I going to change in my approach to software development? I want to embrace design-by-contract-style runtime assertions. I want to promote and apply automated system/acceptance testing and explore ways to make these tests cheaper to write and maintain. I won’t hesitate to step away from the computer with a piece of paper to do some preliminary analysis and design. I want to experiment more with some of the alternative, more efficient QA approaches. And, based on real-world experiences, I will critically scrutinize the perceived value of unit testing as a development aid and regression/refactoring safety net, as well as the actual cost of the tests themselves, including as an obstacle to changing code. I will also scrutinize Coplien’s claims, such as the discrepancy between code tested and code used. What about you, my dear reader? Find out for yourself what works for you in practice. Our resources are limited – use them on the most efficient QA approaches.
Invert the test pyramid – focus on system (and integration) tests. Prune unit tests regularly to keep their mass manageable. Think.

Follow-Up

There is a recording of Jim Coplien and Bob Martin debating TDD. Mr. Coplien says he doesn’t have an issue with TDD as practiced by Uncle Bob; his main problem is with people who forego architecture and expect it to somehow magically emerge from tests. He stresses that it is important to leverage domain knowledge when proposing an architecture. But he also agrees that architecture has to evolve and that one should not spend “too much” time on architecting. He also states that he prefers design by contract because it provides many of the same benefits as TDD (making one think about the code, an outside view of the code) while it also has much better coverage (since it applies to all input/output values, not just a few randomly selected ones) and it at least “has a hope” of being traced to business requirements. Aside from that, his criticism of TDD and unit testing is based largely on experience with how people (mis)use them in practice. Side note: He also mentions that behavior-driven development is “really cool.”

Related

- Slides from Coplien’s keynote Beyond Agile Testing to Lean Development
- The “Is TDD Dead?” dialogue – a series of conversations between Kent Beck, David Heinemeier Hansson, and Martin Fowler on the topic of Test-Driven Development (TDD) and its impact upon software design. Kent Beck’s reasons for participating are quite enlightening: I’m puzzled by the limits of TDD – it works so well for algorithm-y, data-structure-y code. I love the feeling of confidence I get when I use TDD. I love the sense that I have a series of achievable steps in front of me – can’t imagine the implementation? no problem, you can always write a test.
I recognize that TDD loses value as tests take longer to run, as the number of possible faults per test failure increases, as tests become coupled to the implementation, and as tests lose fidelity with the production environment. How far out can TDD be pushed? Are there special cases where TDD works surprisingly well? Poorly? At what point is the cure worse than the disease? How can answers to any and all of these questions be communicated effectively? (If you are short of time, you can get an impression of the conversations from these summaries.) Kent Beck’s RIP TDD lists some good reasons for doing TDD (and thus unit testing). (Aside from “Documentation,” none of them prevents you from using tests just as a development aid and deleting them afterwards, thus avoiding accumulating costs. It should also be noted that people evidently are different; to some, TDD might indeed be a great focus and design aid.)

Reference: Challenging Myself With Coplien’s Why Most Unit Testing is Waste from our JCG partner Jakub Holy at The Holy Java blog.

How I’d Like Java To Be

I like Java. I enjoy programming in Java. But after using Python for a while, there are several things I would love to change about it. It’s almost purely syntactical, so there may be a JVM language that is better, but I’m not really interested, since I still need to use normal Java for work. I realize that these changes won’t be implemented (although I thought I heard that one of them is actually in the pipeline for a future version); these are just some thoughts. I don’t want to free Java up the way that Python is open and free. I actually often relish the challenges that the restrictions in Java present. I mostly just want to type less. So, here are the changes I’d love to see in Java.

Get Rid of Semicolons

I realize that they serve a purpose, but they really aren’t necessary. In fact, they actually open the code up to being harder to read, since shoving multiple lines of code onto the same line is almost always more difficult to read. Technically, with semicolons, you could compress an entire code file down to one line in order to reduce the file size, but how often is that done in Java? It may be done more than I know, but I don’t know of any case where it’s done.

Remove the Curly Braces

There are two main reasons for this. First off, we could end the curly brace cold war! Second, we can stop wasting lines of code on the braces. Also, like I said earlier, I’m trying to reduce how much typing I’m doing, and this will help. Lastly, by doing this, curly braces can be opened up for new uses (you’ll see later).

Operator Overloading

When it comes to mathematical operations, I don’t really care about operator overloading. They can be handy, but methods work okay for that. My biggest concern is comparison, especially ==. I really wish Java had followed Python in having == be for equality checking (you can even do it through the equals method) and “is” for identity checking.
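A minimal sketch of the ==-versus-equals trap being described here (the class name is mine):

```java
public class EqualityDemo {
    public static void main(String[] args) {
        // Two distinct String objects holding the same characters.
        String a = new String("rockstar");
        String b = new String("rockstar");

        System.out.println(a == b);      // false: == compares object identity
        System.out.println(a.equals(b)); // true:  equals compares content

        // Autoboxing makes the trap nastier: Integer values from -128
        // to 127 are cached by the JVM, larger values generally are not.
        Integer x = 127, y = 127;
        Integer p = 1000, q = 1000;
        System.out.println(x == y); // true  (same cached object)
        System.out.println(p == q); // false (two distinct boxes)
    }
}
```

Under the Python-style semantics the author wants, a == b would be true and a is b would be false.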
And while we’re at it, implementing Comparable should allow you to use the comparison operators with instances, rather than needing to translate the numeric return values yourself. If you want, you can allow some way to overload math operators, too.

Tuples and/or Data Structures

I could use either one, but both would be better. Tuples are especially useful as a return type for returning multiple things at once, which is sometimes handy. The same can be done with simple data structures (essentially C structs), too, since they should be pretty lightweight. A big thing for data structures is getting rid of Java Beans. It would be even better if we were able to define invariants with them too. The big problem with Java Beans is that we shouldn’t have to define a full-on class just to pass some data around. If we can’t get structs, then, at the very least, I would like to get the next thing.

Properties

Omg, I love properties, especially in Python. Allowing you to use simple accessors and mutators as if they were a straight variable makes for some nice-looking code.

Default to public

I’ve seen a few cases where people talk about “better defaults”, where leaving off a modifier keyword (such as public and private, or static) should be for the most typical case. public is easily the most used keyword for classes and methods, so why is the default “package-private”? I could argue for private to be the default for fields, too, but I kind of think that the default should be the same everywhere in order to reduce confusion; I’m not stuck on that, though. I debate a little as to whether variables should default to final in order to help push people toward the idea of immutability, but I don’t care that much.

Type Objects

This kind of goes with the previous thing about smart defaults. I think the automatic thing for primitives is to be able to use them as objects. I don’t really care how you do this.
Preferably, you’d leave it open to get the true primitives in order to optimize if you want. How this works doesn’t really matter to me. It would be kind of cool if they were naturally passed around as primitives most of the time but autoboxed into the objects simply by calling any of their methods. Parameters and return types should not care which one is being passed. This would also help to hugely reduce the number of built-in functional interfaces in Java, since a majority are actually duplicates dealing with primitives.

List, Dictionary, and Set Literals

For those of you who have used JavaScript or Python, you know what I’m talking about. I mean, how flippin’ handy is THAT? This, tied in with constructors that can take Streams (sort of like Java’s version of generators. Sort of), would make collections a fair bit easier to work with. Dictionary literals and set literals make for some really good uses of curly braces.

Fin

That’s my list of changes that I’d love to see in Java. Like I said before, I don’t think these will ever happen (although I think I heard that they were working towards type objects), and it’s really just a little wish list. Do you guys agree with my choices?

Reference: How I’d Like Java To Be from our JCG partner Jacob Zimmerman at the Programming Ideas With Jake blog.

PrimeFaces: Opening external pages in dynamically generated dialog

I already blogged about one recipe from the upcoming 2nd edition of the PrimeFaces Cookbook. In this post, I would like to present a second recipe, about a small framework called the Dialog Framework. I personally like it because I remember my costly effort to do the same thing with the Struts Framework. When you wanted to load an external page into a popup and submit some data to this page, you had to work with a hidden form, set the passed values into hidden fields, submit the form to the external page via JavaScript, and wait until the page was ready to use in window.onload or document.ready. A lot of manual work. PrimeFaces does this job for you and, in addition, provides with p:dialog a beautiful user interface as a replacement for the popup. The regular usage of PrimeFaces’ dialog is a declarative approach with p:dialog. Besides this declarative approach, there is a programmatic approach as well. The programmatic approach is based on a programmatic API where dialogs are created and destroyed at runtime. It is called the Dialog Framework. The Dialog Framework is used to open external pages in a dynamically generated dialog. The usage is quite simple: RequestContext provides two methods, openDialog and closeDialog, that allow opening and closing dynamic dialogs. Furthermore, the Dialog Framework makes it possible to pass data back from the page displayed in the dialog to the caller page. In this recipe, we will demonstrate all the features available in the Dialog Framework. We will open a dialog with options programmatically and pass parameters to the page displayed in this dialog. We will also look at the possibility of communicating between the source (caller) page and the dialog.
Getting ready

The Dialog Framework requires the following configuration in faces-config.xml:

<application>
    <action-listener>org.primefaces.application.DialogActionListener</action-listener>
    <navigation-handler>org.primefaces.application.DialogNavigationHandler</navigation-handler>
    <view-handler>org.primefaces.application.DialogViewHandler</view-handler>
</application>

How to do it…

We will develop a page with radio buttons to select one of the available PrimeFaces books for rating. The rating itself happens in a dialog, opened by a click on the button Rate the selected book. The XHTML snippet for the screenshot is listed below.

<p:messages id="messages" showSummary="true" showDetail="false"/>

<p:selectOneRadio id="books" layout="pageDirection" value="#{dialogFrameworkBean.bookName}">
    <f:selectItem itemLabel="PrimeFaces Cookbook" itemValue="PrimeFaces Cookbook"/>
    <f:selectItem itemLabel="PrimeFaces Starter" itemValue="PrimeFaces Starter"/>
    <f:selectItem itemLabel="PrimeFaces Beginner's Guide" itemValue="PrimeFaces Beginner's Guide"/>
    <f:selectItem itemLabel="PrimeFaces Blueprints" itemValue="PrimeFaces Blueprints"/>
</p:selectOneRadio>

<p:commandButton value="Rate the selected book" process="@this books"
                 actionListener="#{dialogFrameworkBean.showRatingDialog}" style="margin-top: 15px">
    <p:ajax event="dialogReturn" update="messages" listener="#{dialogFrameworkBean.onDialogReturn}"/>
</p:commandButton>

The page in the dialog is a full page, bookRating.xhtml, with a Rating component p:rating. It also shows the name of the book selected for rating.
<!DOCTYPE html>
<html xmlns="" xmlns:f="" xmlns:h="" xmlns:p="">
<f:view contentType="text/html" locale="en">
    <f:metadata>
        <f:viewParam name="bookName" value="#{bookRatingBean.bookName}"/>
    </f:metadata>
    <h:head>
        <title>Rate the book!</title>
    </h:head>
    <h:body>
        <h:form>
            What is your rating for the book <strong>#{bookRatingBean.bookName}</strong>?
            <p/>
            <p:rating id="rating">
                <p:ajax event="rate" listener="#{bookRatingBean.onrate}"/>
                <p:ajax event="cancel" listener="#{bookRatingBean.oncancel}"/>
            </p:rating>
        </h:form>
    </h:body>
</f:view>
</html>

The next screenshot demonstrates how the dialog looks. A click on a rating star or the cancel symbol closes the dialog. The source (caller) page then displays a message with the selected rating value in the range from 0 to 5. The most interesting part is the logic in the beans. The bean DialogFrameworkBean opens the rating page within the dialog by invoking the method openDialog() with the outcome, options, and POST parameters on a RequestContext instance. Furthermore, the bean defines an AJAX listener, onDialogReturn(), which is invoked when the data (the selected rating) is returned from the dialog after it was closed.
@Named @ViewScoped public class DialogFrameworkBean implements Serializable { private String bookName; public void showRatingDialog() { Map<String, Object> options = new HashMap<String, Object>(); options.put("modal", true); options.put("draggable", false); options.put("resizable", false); options.put("contentWidth", 500); options.put("contentHeight", 100); options.put("includeViewParams", true); Map<String, List<String>> params = new HashMap<String, List<String>>(); List<String> values = new ArrayList<String>(); values.add(bookName); params.put("bookName", values); RequestContext.getCurrentInstance().openDialog("/views/chapter11/bookRating", options, params); } public void onDialogReturn(SelectEvent event) { Object rating = event.getObject(); FacesMessage message = new FacesMessage(FacesMessage.SEVERITY_INFO, "You rated the book with " + rating, null); FacesContext.getCurrentInstance().addMessage(null, message); } // getters / setters ... } The bean BookRatingBean defines two listeners for the Rating component. They are invoked when the user clicks on a star or the cancel symbol respectively. There we call closeDialog() on a RequestContext instance to trigger the closing of the dialog and to pass the current rating value to the mentioned listener onDialogReturn(). @Named @RequestScoped public class BookRatingBean { private String bookName; public void onrate(RateEvent rateEvent) { RequestContext.getCurrentInstance().closeDialog(rateEvent.getRating()); } public void oncancel() { RequestContext.getCurrentInstance().closeDialog(0); } // getters / setters ... } How it works… The RequestContext provides two overloaded methods named openDialog to open a dialog dynamically at runtime. The first one has only one parameter – the logical outcome used to resolve a navigation case. The second one has three parameters – the outcome, the dialog's configuration options, and parameters that are sent to the view displayed in the dialog. We used the second variant in the example. 
The options are put into a Map as key/value pairs. The parameters are put into a Map too. In our case, we put in the name of the selected book. After that, the name is received in the dialog's page bookRating.xhtml via f:viewParam. f:viewParam sets the transferred parameter into the BookRatingBean, so that it is available in the heading above the Rating component. Tip: Please refer to the PrimeFaces User's Guide for a full list of supported dialog configuration options. Let us go through the request-response life cycle. Once the response is received from the request caused by the command button, a dialog gets created with an iframe inside. The URL of the iframe points to the full page, in our case bookRating.xhtml. The page will be streamed down and shown in the dialog. As you can see, there are always two requests: the first initial POST and the second GET sent by the iframe. Note that the Dialog Framework only works with initial AJAX requests. Non-AJAX requests are ignored. Please also note that the title of the dialog is taken from the HTML title element. As we already mentioned above, the dialog can be closed programmatically by invoking the method closeDialog on a RequestContext instance. On the caller page, the button that triggers the dialog needs to have an AJAX listener for the dialogReturn event to be able to receive any data from the dialog. The data is passed as a parameter to the method closeDialog(Object data). In the example, we pass either a positive integer value rateEvent.getRating() or 0. Reference: PrimeFaces: Opening external pages in dynamically generated dialog from our JCG partner Oleg Varaksin at the Thoughts on software development blog....

Web App Architecture – the Spring MVC – AngularJs stack

Spring MVC and AngularJs together make for a really productive and appealing frontend development stack for building form-intensive web applications. In this blog post we will see how a form-intensive web app can be built using these technologies, and compare this approach with other available options. A fully functional and secured sample Spring MVC / AngularJs web app can be found in this github repository. We will go over the following topics:

The architecture of a Spring MVC + Angular single page app
How to structure a web UI using Angular
Which Javascript / CSS libraries complement Angular well
How to build a REST API backend with Spring MVC
Securing a REST API using Spring Security
How does this compare with other, full Java-based approaches?

The architecture of a Spring MVC + Angular single page web app Form-intensive enterprise class applications are ideally suited for being built as single page web apps. The main idea compared to other more traditional server-side architectures is to build the server as a set of stateless reusable REST services, and from an MVC perspective to take the controller out of the backend and move it into the browser: the client is MVC-capable and contains all the presentation logic, which is separated into a view layer, a controller layer and a frontend services layer. After the initial application startup, only JSON data goes over the wire between client and server. How is the backend built? The backend of an enterprise frontend application can be built in a very natural and web-like way as a REST API. The same technology can be used to provide web services to third-party applications – obviating in many cases the need for a separate SOAP web services stack. From a DDD perspective, the domain model remains on the backend, at the service and persistence layer level. Only DTOs go over the wire, never the domain model. 
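The entity/DTO split described above can be sketched in plain Java. The class and field names below are illustrative (loosely modelled on the calorie-tracker sample app), not taken from the repository:

```java
// Domain entity: lives only on the backend and may carry behaviour,
// associations and fields that must never reach the client.
class User {
    private final String username;
    private final String passwordHash; // never serialized to the client
    private final int maxCaloriesPerDay;

    User(String username, String passwordHash, int maxCaloriesPerDay) {
        this.username = username;
        this.passwordHash = passwordHash;
        this.maxCaloriesPerDay = maxCaloriesPerDay;
    }

    String getUsername() { return username; }
    int getMaxCaloriesPerDay() { return maxCaloriesPerDay; }
}

// DTO: a flat snapshot of exactly what the frontend needs, nothing more.
// Only objects like this one are serialized to JSON and sent over the wire.
class UserInfoDTO {
    final String username;
    final int maxCaloriesPerDay;

    UserInfoDTO(User user) {
        this.username = user.getUsername();
        this.maxCaloriesPerDay = user.getMaxCaloriesPerDay();
    }
}

public class DtoDemo {
    public static void main(String[] args) {
        User user = new User("john", "someBcryptHash", 2000);
        UserInfoDTO dto = new UserInfoDTO(user);
        System.out.println(dto.username + " / " + dto.maxCaloriesPerDay);
    }
}
```

The point is simply that the DTO constructor is the single place where the wire format is decided, so internal changes to the domain model cannot leak to the client accidentally.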
How to structure the frontend of a web app using Angular The frontend should be built around a view-specific model (which is not the domain model), and should only handle presentation logic, but no business logic. These are the three layers of the frontend: The View Layer The view layer is composed of Html templates, CSS, and any Angular directives representing the different UI components. This is an example of a simple view for a login form: <form ng-submit="onLogin()" name="form" novalidate="" ng-controller="LoginCtrl"> <fieldset> <legend>Log In</legend> <div class="form-field"> <input ng-model="vm.username" name="username" required="" ng-minlength="6" type="text"> <div class="form-field"> <input ng-model="vm.password" name="password" required="" ng-minlength="6" pattern="(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{6,}" type="password"> </div></div></fieldset> <button type="submit">Log In</button> <a href="/resources/public/new-user.html">New user?</a> </form> The Controller Layer The controller layer is made of Angular controllers that glue together the data retrieved from the backend and the view. The controller initializes the view model and defines how the view should react to model changes and vice-versa: angular.module('loginApp', ['common', 'editableTableWidgets']) .controller('LoginCtrl', function ($scope, LoginService) { $scope.onLogin = function () { console.log('Attempting login with username ' + $scope.vm.username + ' and password ' + $scope.vm.password); if ($scope.form.$invalid) { return; } LoginService.login($scope.vm.username, $scope.vm.password); }; }); One of the main responsibilities of the controller is to perform frontend validations. Any validations done on the frontend are for user convenience only – for example they are useful to immediately inform the user that a field is required. Any frontend validations need to be repeated in the backend at the service layer level for security reasons, as the frontend validations can be easily bypassed. 
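Since those frontend checks can be bypassed, the backend has to enforce the same rules. A minimal sketch of such a server-side check in plain Java, reusing the password pattern from the form above (the class name and methods are hypothetical, not part of the sample app):

```java
import java.util.regex.Pattern;

public class CredentialsValidator {
    // Same rule as the form's pattern attribute: at least 6 characters,
    // with at least one digit, one lower-case and one upper-case letter.
    private static final Pattern PASSWORD =
            Pattern.compile("(?=.*\\d)(?=.*[a-z])(?=.*[A-Z]).{6,}");

    public static boolean isValidUsername(String username) {
        // Mirrors required + ng-minlength="6" on the username field.
        return username != null && username.trim().length() >= 6;
    }

    public static boolean isValidPassword(String password) {
        return password != null && PASSWORD.matcher(password).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidPassword("Secret1")); // true
        System.out.println(isValidPassword("secret"));  // false - no digit or upper case
    }
}
```

In a real app this check would live in the service layer and reject the request with a validation error rather than return a boolean.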
The Frontend Services Layer A set of Angular services that allow the controllers to interact with the backend and that can be injected into Angular controllers: angular.module('frontendServices', []) .service('UserService', ['$http','$q', function($http, $q) { return { getUserInfo: function() { var deferred = $q.defer(); $http.get('/user') .then(function (response) { if (response.status == 200) { deferred.resolve(response.data); } else { deferred.reject('Error retrieving user info'); } }); return deferred.promise; } Let's see what other libraries we need to have the frontend up and running. Which Javascript / CSS libraries are necessary to complement Angular? Angular already provides a large part of the functionality needed to build the frontend of our app. Some good complements to Angular are:

An easily themeable pure CSS library of only 4k from Yahoo named PureCss. Its Skin Builder allows you to easily generate a theme based on a primary color. It's a BYOJ (Bring Your Own Javascript) solution, which helps keep things the 'Angular way'.
A functional programming library to manipulate data. The one that seems the most used and best maintained and documented these days is lodash.

With these two libraries and Angular, almost any form-based application can be built; nothing else is really required. Some other libraries that might be an option depending on your project are:

A module system like requirejs is nice to have, but because the Angular module system does not handle file retrieval, this introduces some duplication between the dependency declarations of requirejs and the angular modules.
A CSRF Angular module, to prevent cross-site request forgery attacks.
An internationalization module

How to build a REST API backend using Spring MVC The backend is built using the usual backend layers:

Router Layer: defines which service entry points correspond to a given HTTP url, and how parameters are to be read from the HTTP request
Service Layer: contains any business logic such as validations, and defines the scope of business transactions
Persistence Layer: maps the database to/from in-memory domain objects

Spring MVC is currently best configured using only Java configuration. The web.xml is hardly ever needed; see here an example of a fully configured application using Java config only. The service and persistence layers are built using the usual DDD approach, so let's focus our attention on the Router Layer. The Router Layer The same Spring MVC annotations used to build a JSP/Thymeleaf application can also be used to build a REST API. The big difference is that the controller methods do not return a String that defines which view template should be rendered. Instead, the @ResponseBody annotation indicates that the return value of the controller method should be directly rendered and become the response body: @ResponseBody @ResponseStatus(HttpStatus.OK) @RequestMapping(method = RequestMethod.GET) public UserInfoDTO getUserInfo(Principal principal) { User user = userService.findUserByUsername(principal.getName()); Long todaysCalories = userService.findTodaysCaloriesForUser(principal.getName()); return user != null ? new UserInfoDTO(user.getUsername(), user.getMaxCaloriesPerDay(), todaysCalories) : null; } If all the methods of the class are to be annotated with @ResponseBody, then it's better to annotate the whole class with @RestController instead. By adding the Jackson JSON library, the method return value will be directly converted to JSON without any further configuration. It's also possible to convert to XML or other formats, depending on the value of the Accept HTTP header specified by the client. 
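Conceptually, the message converter that Jackson registers does nothing more than serialize the returned DTO into the response body. A hand-rolled sketch of that conversion for the UserInfoDTO fields above (illustration only — Jackson does this generically via reflection, handling escaping, nulls and nesting):

```java
public class JsonSketch {
    // Naive stand-in for what a JSON message converter produces from a
    // UserInfoDTO-like object. Real code should use Jackson, which also
    // handles escaping, null values and nested objects.
    static String toJson(String username, int maxCaloriesPerDay, long todaysCalories) {
        return "{\"username\":\"" + username + "\"" +
               ",\"maxCaloriesPerDay\":" + maxCaloriesPerDay +
               ",\"todaysCalories\":" + todaysCalories + "}";
    }

    public static void main(String[] args) {
        // This string is what would travel over the wire as the response body.
        System.out.println(toJson("john", 2000, 1500));
    }
}
```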
See here an example of a couple of controllers with error handling configured. How to secure a REST API using Spring Security A REST API can be secured using Spring Security Java configuration. A good approach is to use form login with a fallback to HTTP Basic authentication, and to include some CSRF protection and the possibility to enforce that all backend methods are only accessible via HTTPS. This means the backend will present browser clients with a login form and assign a session cookie on successful login, but it will still work well for non-browser clients by supporting a fallback to HTTP Basic where credentials are passed via the Authorization HTTP header. Following OWASP recommendations, the REST services can be kept almost stateless (the only server state being the session cookie used for authentication) to avoid having to send credentials over the wire for each request. This is an example of how to configure the security of a REST API: http .authorizeRequests() .antMatchers("/resources/public/**").permitAll() .anyRequest().authenticated() .and() .formLogin() .defaultSuccessUrl("/resources/calories-tracker.html") .loginProcessingUrl("/authenticate") .loginPage("/resources/public/login.html") .and() .httpBasic() .and() .logout() .logoutUrl("/logout"); if ("true".equals(System.getProperty("httpsOnly"))) { LOGGER.info("launching the application in HTTPS-only mode"); http.requiresChannel().anyRequest().requiresSecure(); } This configuration covers the authentication aspect of security only; choosing an authorization strategy depends on the security requirements of the API. If you need very fine-grained control over authorization, then check whether Spring Security ACLs could be a good fit for your use case. Let's now see how this approach of building web apps compares with other commonly used approaches. 
Comparing the Spring MVC / Angular stack with other common approaches This approach of using Javascript for the frontend and Java for the backend makes for a simplified and productive development workflow. When the backend is running, no special tools or plugins are needed to achieve full frontend hot-deploy capability: just publish the resources to the server using your IDE (for example hitting Ctrl+F10 in IntelliJ) and refresh the browser page. The backend classes can still be reloaded using JRebel, but for the frontend nothing special is needed. Actually, the whole frontend can be built by mocking the backend, using for example json-server. This would allow different developers to build the frontend and the backend in parallel if needed. Productivity gains of full stack development? In my experience, being able to edit the Html and CSS directly, with no layers of indirection in between (see here a high-level Angular comparison with GWT and JSF), helps to reduce mental overhead and keeps things simple. The edit-save-refresh development cycle is very fast and reliable and gives a huge productivity boost. The largest productivity gain is obtained when the same developers build both the Javascript frontend and the Java backend, because most features require simultaneous changes on both. The potential downside of this is that developers need to also know Html, CSS and Javascript, but this seems to have become more common in the last couple of years. In my experience, going full stack allows complex frontend use cases to be implemented in a fraction of the time of the equivalent full Java solution (days instead of weeks), so the productivity gain makes the learning curve definitely worth it. Conclusions Spring MVC and Angular combined really open the door for a new way of building form-intensive web apps. The productivity gains that this approach allows make it an alternative worth looking into. 
The absence of any server state between requests (besides the authentication cookie) eliminates by design a whole category of bugs. For further details have a look at this sample application on github, and let us know your thoughts/questions in the comments below. Reference: Web App Architecture – the Spring MVC – AngularJs stack from our JCG partner Aleksey Novik at the JHades blog....

Testing and System.out with system-rules

Writing unit tests is an integral part of software development. One problem you have to solve when your class under test interacts with the operating system is simulating its behaviour. This can be done by using mocks instead of the real objects provided by the Java Runtime Environment (JRE). Libraries that support mocking for Java include, for example, mockito and jMock. Mocking objects is a great thing when you have complete control over their instantiation. When dealing with standard input and standard output this is a little bit tricky, but not impossible, as java.lang.System lets you replace the standard InputStream and OutputStream:   System.setIn(in); System.setOut(out); So that you do not have to replace the streams manually before and after each test case, you can utilize org.junit.rules.ExternalResource. This class provides the two methods before() and after() that are called, as their names suggest, before and after each test case. This way you can easily set up and clean up resources that all of your tests within one class need – or, to come back to the original problem, replace the input and output streams of java.lang.System. Exactly this is implemented by the library system-rules. 
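To make concrete what the before()/after() pair automates, here is the same stream swap done by hand with nothing but the JDK (the helper name is made up for illustration):

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class CaptureStdOut {
    // Runs the given code with System.out redirected into a buffer and
    // returns whatever was printed - before() would install the buffer,
    // after() would restore the original stream.
    static String captureOutput(Runnable codeUnderTest) {
        PrintStream original = System.out;
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buffer, true)); // autoflush on println
        try {
            codeUnderTest.run();
        } finally {
            System.out.flush();
            System.setOut(original); // always restore, even on exceptions
        }
        return buffer.toString();
    }

    public static void main(String[] args) {
        String output = captureOutput(() -> System.out.println(2 + 3));
        System.out.print("captured: " + output);
    }
}
```

system-rules wraps exactly this bookkeeping (plus the System.in and System.err variants) into reusable JUnit rules.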
To see how it works, let's start with a simple example: public class CliExample { private Scanner scanner = new Scanner(System.in, "UTF-8"); public static void main(String[] args) { CliExample cliExample = new CliExample(); cliExample.run(); } private void run() { try { int a = readInNumber(); int b = readInNumber(); int sum = a + b; System.out.println(sum); } catch (InputMismatchException e) { System.err.println("The input is not a valid integer."); } catch (IOException e) { System.err.println("An input/output error occurred: " + e.getMessage()); } } private int readInNumber() throws IOException { System.out.println("Please enter a number:"); String nextInput = scanner.next(); try { return Integer.valueOf(nextInput); } catch(Exception e) { throw new InputMismatchException(); } } } The code above reads two integers from standard input and prints out their sum. In case the user provides invalid input, the program should output an appropriate message on the error stream. In the first test case, we want to verify that the program correctly sums up two numbers and prints out the result: public class CliExampleTest { @Rule public final StandardErrorStreamLog stdErrLog = new StandardErrorStreamLog(); @Rule public final StandardOutputStreamLog stdOutLog = new StandardOutputStreamLog(); @Rule public final TextFromStandardInputStream systemInMock = emptyStandardInputStream(); @Test public void testSuccessfulExecution() { systemInMock.provideText("2\n3\n"); CliExample.main(new String[]{}); assertThat(stdOutLog.getLog(), is("Please enter a number:\r\nPlease enter a number:\r\n5\r\n")); } ... } To simulate System.in we utilize system-rules' TextFromStandardInputStream. The instance variable is initialized with an empty input stream by calling emptyStandardInputStream(). In the test case itself we provide the input for the application by calling provideText() with newlines at the appropriate points. Then we call the main() method of our application. 
Finally we have to assert that the application has written the two input prompts and the result to standard output. This is done through an instance of StandardOutputStreamLog. By calling its method getLog() we retrieve everything that has been written to standard output during the current test case. The StandardErrorStreamLog can be used likewise to verify what has been written to standard error: @Test public void testInvalidInput() throws IOException { systemInMock.provideText("a\n"); CliExample.main(new String[]{}); assertThat(stdErrLog.getLog(), is("The input is not a valid integer.\r\n")); } Beyond that, system-rules also offers rules for working with System.getProperty(), System.setProperty(), System.exit() and System.getSecurityManager(). Conclusion: With system-rules, testing command line applications with unit tests becomes even simpler than using JUnit's Rules directly. All the boilerplate code to update the system environment before and after each test case comes packaged in some easy-to-use rules. PS: You can find the complete sources here. Reference: Testing and System.out with system-rules from our JCG partner Martin Mois at the Martin's Developer World blog....

Tips for Importing Data

I'm currently importing a large amount of spatial data into a PostgreSQL/PostGIS database and realized others could learn from my experience. Most of the advice is not specific to PostgreSQL or PostGIS. Know the basic techniques Know the basic techniques for loading bulk data. Use COPY if possible. If not, use batch processing if possible. If neither is possible, turn off auto-commit before doing individual INSERT or UPDATE calls and only commit every Nth call. Use auto-committed INSERT and UPDATE as an absolute last resort. Unfortunately the latter is the usual default with JDBC connections. Another standard technique is to drop any indexes before loading large amounts of data and recreate them afterwards. This is not always possible, e.g., if you're updating a live table, but it can mean a big performance boost. Finally, remember to update your index statistics. In PostgreSQL this is VACUUM ANALYZE. Know your database PostgreSQL allows tables to be created as "UNLOGGED". That means there can be data loss if the system crashes while (or soon after) uploading your data, but so what? If that happens you'll probably want to restart the upload from the start anyway. I haven't done performance testing (yet) but it's another trick to keep in mind. Know how to use the ExecutorService for multithreading Every call to the database will have dead time due to network latency and the time required for the database to complete the call. On the flip side the database is idle while waiting for the desktop to prepare and upload each call. You can fill that dead time by using a multithreaded uploader. The ExecutorService makes it easy to break the work into meaningful chunks. E.g., each thread uploads a single table or a single data file. The ExecutorService also allows you to be more intelligent about how you upload your data. Upload to a dedicated schema If possible upload to a dedicated schema. This gives you more flexibility later. 
In PostgreSQL this can be transparent to most users by calling ALTER DATABASE database SET SEARCH_PATH=schema1,schema2,schema3 and specifying both public and the schema containing the uploaded data. In other databases you can only have one active schema at a time and you will need to explicitly specify the schema later. Upload unprocessed data I've sometimes found it faster to upload raw data and process it in the database (with queries and stored procedures) than to process it on the desktop and upload the cooked data. This is especially true with fixed format records. There are benefits besides performance – see the following items. Keep track of your source Few things are more frustrating than having questions about what's in the database and no way to trace it back to its source. (Obviously this applies to data you uploaded from elsewhere, not data your application generated itself.) Adding a couple of fields for, e.g., filename and record number can save a lot of headaches later. Plan your work In my current app one of my first steps is to decompress and open each shapefile (data file), get the number of records, and then close it and delete the uncompressed file. This seems like pointless work but it allows me to get a solid estimate of the amount of time that will be required to upload the file. That, in turn, allows me to make intelligent decisions about how to schedule the work. E.g., one standard approach is to use a priority queue where you always grab the most expensive work item (e.g., number of records to process) when a thread becomes available. This will result in the fastest overall uploads. This has two other benefits. First, it allows me to verify that I can open and read all of the shapefiles. I won't waste hours before running into a fatal problem. Second, it allows me to verify that I wrote everything expected. There's a problem if I found 813 records while planning but later could only find 811 rows in the table. 
Or worse, I found 815 rows in the table. Log everything I write to a log table after every database call. I write to a log table after every exception. It's expensive but it's a lot easier to query a database table or two later than to parse log files. It's also less expensive than you think if you have a large batch size. I log the time, the shapefile's pathname, the number of records read and the number of records written. The exceptions log the time, shapefile's pathname, and the exception messages. I log all exception messages – both 'cause' and 'nextException' (for SQLExceptions). Perform quality checks Check that you've written all expected records and had no exceptions. Check the data for self-consistency. Add unique indexes. Add referential integrity constraints. There's a real cost to adding indexes when inserting and updating data but there's no downside if these will be read-only tables. Check for null values. Check for reasonable values, e.g., populations and ages can never be negative. You can often perform checks specific to the data. E.g., I know that states/provinces must be WITHIN their respective countries, and cities must be WITHIN their respective states/provinces. (Some US states require cities to be WITHIN a single county but other states allow cities to cross county boundaries.) There are two possibilities if the QC fails. First, your upload could be faulty. This is easy to check if you recorded the source of each record. Second, the data itself could be faulty. In either case you want to know this as soon as possible. Backup your work This should be a no-brainer but immediately perform a database backup of these tables after the QC passes. Lock down the data This only applies if the tables will never or rarely be modified after loading. (Think things like tables containing information about states or area codes.) Lock down the tables. You can use both belts and suspenders: REVOKE INSERT, UPDATE, TRUNCATE ON TABLE table FROM PUBLIC. 
ALTER TABLE table SET READ ONLY. Some people prefer VACUUM FREEZE table SET READ ONLY. If you uploaded the data into a dedicated schema you can use the shorthand REVOKE INSERT, UPDATE, TRUNCATE ON ALL TABLES IN SCHEMA schema FROM PUBLIC. You can also REVOKE CREATE permissions on a schema.Reference: Tips for Importing Data from our JCG partner Bear Giles at the Invariant Properties blog....

Grails Tutorial for Beginners – HQL Queries (executeQuery and executeUpdate)

This Grails tutorial will teach the basics of using HQL. Grails supports dynamic finders, which make it convenient to perform simple database queries. But for more complex cases, Grails provides both the Criteria API and HQL. This tutorial will focus on the latter. Introduction It is well known that Grails sits on top of Spring and Hibernate – two of the most popular Java frameworks. Hibernate is used as the underlying technology for the object-relational mapping of Grails (GORM). Hibernate is database agnostic. This means that, since Grails is based on it, we can write applications that are compatible with most popular databases. We don't need to write different queries for each possible database. The easiest way to perform database queries is through dynamic finders. They are simple and very intuitive. Check my previous post for a tutorial on this topic. Dynamic finders, however, are very limited. They may not be suitable for complex requirements and cases where the developer needs a lower level of control. HQL is a very good alternative as it is very similar to SQL. HQL is fully object-oriented and understands inheritance, polymorphism and associations. Using it provides a very powerful and flexible API while keeping your application database agnostic. 
In Grails, there are two domain methods used to invoke HQL:

executeQuery – Executes HQL queries (SELECT operations)
executeUpdate – Updates the database with DML-style operations (UPDATE and DELETE)

executeQuery Here is a sample Domain class that we will query from: package asia.grails.test class Person { String firstName String lastName int age static constraints = { } } Retrieve all domain objects This is the code to retrieve all Person objects from the database: def listOfAllPerson = Person.executeQuery("from Person") Notice that:

executeQuery is a method of a Domain class and is used for retrieving information (SELECT statements)
Similar to SQL, the from identifier is required
Instead of specifying the table, we specify the domain class right after the from keyword. We could also write the query like this: def listOfAllPerson = Person.executeQuery("from asia.grails.test.Person")
It is valid not to specify a select clause. By default, it will return the object instances of the specified Domain class. In the example, it will return a list of all Person objects.

Here is sample code of how we could use the result: listOfAllPerson.each { person -> println "First Name = ${person.firstName}" println "Last Name = ${person.lastName}" println "Age = ${person.age}" } Since listOfAllPerson is a list of Person instances, we can iterate over it and print the details. Select clause When the select clause is explicitly used, HQL will not return a list of domain objects. Instead, it will return a two-dimensional list. Here is an example assuming that at least one record is in the database: def list = Person.executeQuery("select firstName, lastName from Person") def firstPerson = list[0] def firstName = firstPerson[0] def lastName = firstPerson[1] println "First Name = ${firstName}" println "Last Name = ${lastName}" The variable list will be assigned a list of items. Each item is a list that corresponds to the values enumerated in the select clause. 
The code can also be written like this to help visualize the data structure: def list = Person.executeQuery("select firstName, lastName from Person") def firstName = list[0][0] def lastName = list[0][1] println "First Name = ${firstName}" println "Last Name = ${lastName}" Where clause Just like SQL, we can filter query results using a where clause. Here are some examples: People with surname Doe def peopleWithSurnameDoe = Person.executeQuery("from Person where lastName = 'Doe'") People who are at least 18 years old def adults = Person.executeQuery("from Person where age >= 18") People whose first name contains John def peopleWithFirstNameLikeJohn = Person.executeQuery("from Person where firstName like '%John%'") Group clause A group by clause is also permitted. The behavior is similar to SQL. Here is an example: def list = Person.executeQuery("select age, count(*) from Person group by age") list.each { item -> def age = item[0] def count = item[1] println "There are ${count} people who are ${age} years old" } This will print all ages found in the table and how many people have that age. Having clause The having clause is useful for filtering the result of a group by. Here is an example: def list = Person.executeQuery( "select age, count(*) from Person group by age having count(*) > 1") list.each { item -> def age = item[0] def count = item[1] println "There are ${count} people who are ${age} years old" } This will print all ages found in the table and how many people have that age, provided that there is more than one person in the age group. Pagination It is not good for performance to retrieve all records in a table all at once. It is more efficient to page results. For example, get 10 records at a time. 
Here is a code sample on how to do that: def listPage1 = Person.executeQuery("from Person order by id", [offset:0, max:10]) def listPage2 = Person.executeQuery("from Person order by id", [offset:10, max:10]) def listPage3 = Person.executeQuery("from Person order by id", [offset:20, max:10]) The parameter max tells GORM to fetch a maximum of 10 records only. The offset is how many records to skip before reading the first result.

On page 1, we don't skip any records and get the first 10 results.
On page 2, we skip the first 10 records and get the 11th to 20th records.
On page 3, we skip the first 20 records and get the 21st to 30th records.

GORM/Hibernate will translate the paging information to the proper SQL syntax depending on the database. Note: It is usually better to have an order by clause when paginating results; otherwise most databases offer no guarantee on how records are sorted between queries. List Parameters HQL statements can have parameters. Here is an example: def result = Person.executeQuery( "from Person where firstName = ? and lastName = ?", ['John', 'Doe']) The parameters can be passed as a list. The first parameter (John) is used for the first question mark, the second parameter (Doe) for the second question mark, and so on. Results can also be paginated: def result = Person.executeQuery( "from Person where firstName = ? and lastName = ?", ['John', 'Doe'], [offset:0, max:5]) Named Parameters List parameters are usually hard to read and prone to errors. It is easier to use named parameters. For example: def result = Person.executeQuery( "from Person where firstName = :searchFirstName and lastName = :searchLastName", [searchFirstName:'John', searchLastName:'Doe']) The colon signifies a named parameter variable. The values can then be passed as a map. 
Results can also be paginated: def result = Person.executeQuery( "from Person where firstName = :searchFirstName and lastName = :searchLastName", [searchFirstName:'John', searchLastName:'Doe'], [offset:0, max:5]) How to perform JOINs Here is an example of a one-to-many relationship between domain classes: package asia.grails.test class Purchase { static hasMany = [items:PurchaseItem] String customer Date dateOfPurchase double price } package asia.grails.test class PurchaseItem { static belongsTo = Purchase Purchase parentPurchase String product double price int quantity } Here is a sample code that joins the two tables: def customerWhoBoughtPencils = Purchase.executeQuery( "select p.customer from Purchase p join p.items i where i.product = 'Pencil' ") This returns all customers who bought pencils. executeUpdate We can update or delete records using executeUpdate. This is sometimes more efficient, especially when dealing with large sets of records. Delete Here are some examples of how to delete records using executeUpdate. Delete all person records in the database: Person.executeUpdate("delete Person") Here are different ways to delete people with first name John: Person.executeUpdate("delete Person where firstName = 'John'") Person.executeUpdate("delete Person where firstName = ? ", ['John']) Person.executeUpdate("delete Person where firstName = :firstNameToDelete ", [firstNameToDelete:'John']) Update Here are some examples of how to update records using executeUpdate. Here are different ways to set every person’s age to 15: Person.executeUpdate("update Person set age = 15") Person.executeUpdate("update Person set age = ?", [15]) Person.executeUpdate("update Person set age = :newAge", [newAge:15]) Here are different ways to set John Doe’s age to 15. 
Person.executeUpdate( "update Person set age = 15 where firstName = 'John' and lastName = 'Doe'") Person.executeUpdate( "update Person set age = ? where firstName = ? and lastName = ?", [15, 'John', 'Doe']) Person.executeUpdate( "update Person set age = :newAge where firstName = :firstNameToSearch and lastName = :lastNameToSearch", [newAge:15, firstNameToSearch:'John', lastNameToSearch:'Doe'])Reference: Grails Tutorial for Beginners – HQL Queries (executeQuery and executeUpdate) from our JCG partner Jonathan Tan at the Grails cookbook blog....
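The pagination arithmetic shown earlier (offset 0, 10, 20 for pages 1, 2, 3) generalizes to any page size. As a small illustration, here is a hypothetical helper class (not part of GORM; the name and method are made up for this sketch) that computes the offset for a 1-based page number:

```java
// Hypothetical helper, not part of GORM: computes the offset for a given
// 1-based page number and page size, matching the listPage1/2/3 example.
public class PageParams {

    static int offsetFor(int page, int max) {
        if (page < 1 || max < 1) {
            throw new IllegalArgumentException("page and max must be >= 1");
        }
        return (page - 1) * max;
    }

    public static void main(String[] args) {
        // Pages of 10 records, as in the example above.
        System.out.println(offsetFor(1, 10)); // 0  -> records 1-10
        System.out.println(offsetFor(2, 10)); // 10 -> records 11-20
        System.out.println(offsetFor(3, 10)); // 20 -> records 21-30
    }
}
```

Passing something like [offset: offsetFor(page, 10), max: 10] keeps the paging arithmetic in one place instead of hard-coding each offset.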

Hibernate locking patterns – How does Optimistic Lock Mode work

Explicit optimistic locking In my previous post, I introduced the basic concepts of Java Persistence locking. The implicit locking mechanism prevents lost updates and is suitable for entities that we can actively modify. While implicit optimistic locking is a widespread technique, few happen to understand the inner workings of the explicit optimistic lock mode. Explicit optimistic locking may prevent data integrity anomalies when the locked entities are always modified by some external mechanism. The product ordering use case Let’s say we have the following domain model: Our user, Alice, wants to order a product. The purchase goes through the following steps: Alice loads a Product entity. Because the price is convenient, she decides to order the Product. The price engine batch job changes the Product price (taking into consideration currency changes, tax changes and marketing campaigns). Alice issues the Order without noticing the price change. Implicit locking shortcomings First, we are going to test whether the implicit locking mechanism can prevent such anomalies. 
Our test case looks like this: doInTransaction(new TransactionCallable<Void>() { @Override public Void execute(Session session) { final Product product = (Product) session.get(Product.class, 1L); try { executeAndWait(new Callable<Void>() { @Override public Void call() throws Exception { return doInTransaction(new TransactionCallable<Void>() { @Override public Void execute(Session _session) { Product _product = (Product) _session.get(Product.class, 1L); assertNotSame(product, _product); _product.setPrice(BigDecimal.valueOf(14.49)); return null; } }); } }); } catch (Exception e) { fail(e.getMessage()); } OrderLine orderLine = new OrderLine(product); session.persist(orderLine); return null; } }); The test generates the following output: #Alice selects a Product Query:{[select as id1_1_0_, abstractlo0_.description as descript2_1_0_, abstractlo0_.price as price3_1_0_, abstractlo0_.version as version4_1_0_ from product abstractlo0_ where][1]}#The price engine selects the Product as well Query:{[select as id1_1_0_, abstractlo0_.description as descript2_1_0_, abstractlo0_.price as price3_1_0_, abstractlo0_.version as version4_1_0_ from product abstractlo0_ where][1]} #The price engine changes the Product price Query:{[update product set description=?, price=?, version=? where id=? and version=?][USB Flash Drive,14.49,1,1,0]} #The price engine transaction is committed DEBUG [pool-2-thread-1]: o.h.e.t.i.j.JdbcTransaction - committed JDBC Connection#Alice inserts an OrderLine without realizing the Product price change Query:{[insert into order_line (id, product_id, unitPrice, version) values (default, ?, ?, ?)][1,12.99,0]} #Alice transaction is committed unaware of the Product state change DEBUG [main]: o.h.e.t.i.j.JdbcTransaction - committed JDBC Connection The implicit optimistic locking mechanism cannot detect external changes, unless the entities are also changed by the current Persistence Context. 
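The reason the implicit mechanism misses this is that its version check only runs when the current Persistence Context flushes an UPDATE for the entity; Alice never modifies the Product, so no check happens. The versioned UPDATE itself behaves like a compare-and-set on the version column. The following is a simplified standalone sketch of that behavior (not Hibernate code; the Row class and method names are invented for illustration):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: the implicit versioned UPDATE ("update ... where id=? and version=?")
// acts like a compare-and-set on the version column. It succeeds only if the
// in-memory version still matches the database version.
public class ImplicitLockSketch {

    static class Row {
        final AtomicLong version = new AtomicLong(0);
        volatile String price = "12.99";

        // Returns true if the update matched the expected version (1 row updated).
        // A false result (0 rows updated) is what makes Hibernate report a
        // stale-state failure for implicit optimistic locking.
        boolean update(long expectedVersion, String newPrice) {
            if (version.compareAndSet(expectedVersion, expectedVersion + 1)) {
                price = newPrice;
                return true;
            }
            return false;
        }
    }

    public static void main(String[] args) {
        Row product = new Row();
        long aliceVersion = product.version.get();          // Alice loads version 0
        product.update(0, "14.49");                         // price engine commits, version -> 1
        System.out.println(product.update(aliceVersion, "12.99")); // prints false: stale write rejected
    }
}
```

The catch, as the test above demonstrates, is that Alice issues an INSERT for the OrderLine rather than an UPDATE of the Product, so this compare-and-set never fires on her behalf.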
To protect against issuing an Order for a stale Product state, we need to apply an explicit lock on the Product entity. Explicit locking to the rescue The Java Persistence LockModeType.OPTIMISTIC is a suitable candidate for such scenarios, so we are going to put it to the test. Hibernate comes with a LockModeConverter utility that is able to map any Java Persistence LockModeType to its associated Hibernate LockMode. For simplicity’s sake, we are going to use the Hibernate-specific LockMode.OPTIMISTIC, which is effectively identical to its Java Persistence counterpart. According to the Hibernate documentation, the explicit OPTIMISTIC Lock Mode will: assume that transaction(s) will not experience contention for entities. The entity version will be verified near the transaction end. I will adjust our test case to use explicit OPTIMISTIC locking instead: try { doInTransaction(new TransactionCallable<Void>() { @Override public Void execute(Session session) { final Product product = (Product) session.get(Product.class, 1L, new LockOptions(LockMode.OPTIMISTIC)); executeAndWait(new Callable<Void>() { @Override public Void call() throws Exception { return doInTransaction(new TransactionCallable<Void>() { @Override public Void execute(Session _session) { Product _product = (Product) _session.get(Product.class, 1L); assertNotSame(product, _product); _product.setPrice(BigDecimal.valueOf(14.49)); return null; } }); } }); OrderLine orderLine = new OrderLine(product); session.persist(orderLine); return null; } }); fail("It should have thrown OptimisticEntityLockException!"); } catch (OptimisticEntityLockException expected) { LOGGER.info("Failure: ", expected); } The new test version generates the following output: #Alice selects a Product Query:{[select as id1_1_0_, abstractlo0_.description as descript2_1_0_, abstractlo0_.price as price3_1_0_, abstractlo0_.version as version4_1_0_ from product abstractlo0_ where][1]}#The price engine selects the Product as well Query:{[select as id1_1_0_, 
abstractlo0_.description as descript2_1_0_, abstractlo0_.price as price3_1_0_, abstractlo0_.version as version4_1_0_ from product abstractlo0_ where][1]} #The price engine changes the Product price Query:{[update product set description=?, price=?, version=? where id=? and version=?][USB Flash Drive,14.49,1,1,0]} #The price engine transaction is committed DEBUG [pool-1-thread-1]: o.h.e.t.i.j.JdbcTransaction - committed JDBC Connection#Alice inserts an OrderLine Query:{[insert into order_line (id, product_id, unitPrice, version) values (default, ?, ?, ?)][1,12.99,0]} #Alice transaction verifies the Product version Query:{[select version from product where id =?][1]} #Alice transaction is rolled back due to Product version mismatch INFO [main]: c.v.h.m.l.c.LockModeOptimisticTest - Failure: org.hibernate.OptimisticLockException: Newer version [1] of entity [[com.vladmihalcea.hibernate.masterclass.laboratory.concurrency. AbstractLockModeOptimisticTest$Product#1]] found in database The operation flow goes like this:The Product version is checked towards transaction end. Any version mismatch triggers an exception and a transaction rollback. Race condition risk Unfortunately, the application-level version check and the transaction commit are not an atomic operation. 
The check happens in EntityVerifyVersionProcess, during the before-transaction-commit stage: public class EntityVerifyVersionProcess implements BeforeTransactionCompletionProcess { private final Object object; private final EntityEntry entry;/** * Constructs an EntityVerifyVersionProcess * * @param object The entity instance * @param entry The entity's referenced EntityEntry */ public EntityVerifyVersionProcess(Object object, EntityEntry entry) { this.object = object; this.entry = entry; }@Override public void doBeforeTransactionCompletion(SessionImplementor session) { final EntityPersister persister = entry.getPersister();final Object latestVersion = persister.getCurrentVersion( entry.getId(), session ); if ( !entry.getVersion().equals( latestVersion ) ) { throw new OptimisticLockException( object, "Newer version [" + latestVersion + "] of entity [" + MessageHelper.infoString( entry.getEntityName(), entry.getId() ) + "] found in database" ); } } } The AbstractTransactionImpl.commit() method call, will execute the before-transaction-commit stage and then commit the actual transaction: @Override public void commit() throws HibernateException { if ( localStatus != LocalStatus.ACTIVE ) { throw new TransactionException( "Transaction not successfully started" ); }LOG.debug( "committing" );beforeTransactionCommit();try { doCommit(); localStatus = LocalStatus.COMMITTED; afterTransactionCompletion( Status.STATUS_COMMITTED ); } catch (Exception e) { localStatus = LocalStatus.FAILED_COMMIT; afterTransactionCompletion( Status.STATUS_UNKNOWN ); throw new TransactionException( "commit failed", e ); } finally { invalidate(); afterAfterCompletion(); } } Between the check and the actual transaction commit, there is a very short time window for some other transaction to silently commit a Product price change. Conclusion The explicit OPTIMISTIC locking strategy offers a limited protection against stale state anomalies. 
This race condition is a typical case of a time-of-check-to-time-of-use (TOCTOU) data integrity anomaly. In my next article, I will explain how we can fix this example using the explicit lock upgrade technique. Code available on GitHub. Reference: Hibernate locking patterns – How does Optimistic Lock Mode work from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....
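Stripped of Hibernate plumbing, the pre-commit check inside EntityVerifyVersionProcess boils down to an equality test between the version read at load time and the version currently in the database. Here is a minimal sketch of that check (not the actual Hibernate source; class and method names are invented):

```java
// Sketch of the version verification performed before commit: if another
// transaction bumped the version since load time, the commit must fail.
public class VersionCheckSketch {

    static void verifyVersion(long loadedVersion, long currentDbVersion) {
        if (loadedVersion != currentDbVersion) {
            throw new IllegalStateException(
                "Newer version [" + currentDbVersion + "] found in database");
        }
    }

    public static void main(String[] args) {
        verifyVersion(0, 0); // versions match: commit may proceed
        try {
            verifyVersion(0, 1); // the price engine bumped the version after load
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The window between this check and the actual database commit is exactly where the race condition lives: a concurrent transaction can still commit a change after verifyVersion passes but before the commit completes.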

Learning Netflix Governator – Part 2

To continue from the previous entry on some basic learnings on Netflix Governator, here I will cover one more enhancement that Netflix Governator brings to Google Guice – Lifecycle Management. Lifecycle Management essentially provides hooks into the different lifecycle phases that an object is taken through. To quote the wiki article on Governator, the phases are: Allocation (via Guice) -> Pre Configuration -> Configuration -> Set Resources -> Post Construction -> Validation and Warm Up -> (the application runs until termination) -> Pre Destroy. To illustrate this, consider the following code: package; import; import com.google.inject.Inject; import sample.dao.BlogDao; import sample.model.BlogEntry; import sample.service.BlogService; import javax.annotation.PostConstruct; import javax.annotation.PreDestroy; @AutoBindSingleton(baseClass = BlogService.class) public class DefaultBlogService implements BlogService { private final BlogDao blogDao; @Inject public DefaultBlogService(BlogDao blogDao) { this.blogDao = blogDao; } @Override public BlogEntry get(long id) { return this.blogDao.findById(id); } @PostConstruct public void postConstruct() { System.out.println("Post-construct called!!"); } @PreDestroy public void preDestroy() { System.out.println("Pre-destroy called!!"); } } Here, two methods have been annotated with the @PostConstruct and @PreDestroy annotations to hook into these specific phases of Governator’s lifecycle for this object. The neat thing is that these annotations are not Governator-specific but are JSR-250 annotations that are now baked into the JDK. 
Running the test for this class calls the annotated methods appropriately; here is a sample test: import; import; import com.google.inject.Injector; import org.junit.Test; import sample.service.BlogService; import static org.hamcrest.MatcherAssert.*; import static org.hamcrest.Matchers.*; public class SampleWithGovernatorTest { @Test public void testExampleBeanInjection() throws Exception { Injector injector = LifecycleInjector .builder() .withModuleClass(SampleModule.class) .usingBasePackages("") .build() .createInjector(); LifecycleManager manager = injector.getInstance(LifecycleManager.class); manager.start(); BlogService blogService = injector.getInstance(BlogService.class); assertThat(blogService.get(1L), is(notNullValue())); manager.close(); } } Spring Framework has supported a similar mechanism for a long time – the exact same JSR-250 based annotations work for Spring beans too. If you are interested in exploring this further, here is my GitHub project with Lifecycle management samples. Reference: Learning Netflix Governator – Part 2 from our JCG partner Biju Kunjummen at the all and sundry blog....
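Mechanically, lifecycle management of this kind amounts to scanning a bean for annotated methods and invoking them at the right phase. The following is a simplified, self-contained imitation of that idea — it is not Governator code, and it uses stand-in annotations because the JSR-250 javax.annotation types are no longer bundled with modern JDKs:

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

// Simplified imitation of lifecycle management: find methods carrying a
// lifecycle annotation and invoke them at start/close time. The @Start and
// @Stop annotations are stand-ins for @PostConstruct and @PreDestroy.
public class LifecycleSketch {

    @Retention(RetentionPolicy.RUNTIME) @interface Start {}
    @Retention(RetentionPolicy.RUNTIME) @interface Stop {}

    static class Service {
        final StringBuilder log = new StringBuilder();
        @Start void init() { log.append("start;"); }
        @Stop  void shutdown() { log.append("stop;"); }
    }

    // Invoke every method on the bean carrying the given lifecycle annotation.
    static void invokeAnnotated(Object bean, Class<? extends Annotation> phase)
            throws Exception {
        for (Method m : bean.getClass().getDeclaredMethods()) {
            if (m.isAnnotationPresent(phase)) {
                m.setAccessible(true);
                m.invoke(bean);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Service s = new Service();
        invokeAnnotated(s, Start.class); // roughly what manager.start() triggers
        invokeAnnotated(s, Stop.class);  // roughly what manager.close() triggers
        System.out.println(s.log);       // prints start;stop;
    }
}
```

The real LifecycleManager does much more (warm-up, validation, ordering across many beans), but the annotation-scan-and-invoke core is the same shape.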

SSL with WildFly 8 and Undertow

I’ve been working my way through some security topics around WildFly 8 and stumbled upon some configuration options that are not very well documented. One of them is the TLS/SSL configuration for the new web subsystem Undertow. There’s plenty of documentation for the older web subsystem, and it is indeed still available to use, but here is a short how-to for configuring it the new way. Generate a keystore and self-signed certificate The first step is to generate a certificate. In this case, it’s going to be a self-signed one, which is enough to show how to configure everything. I’m going to use the plain Java way of doing it, so all you need is the JRE keytool. Java Keytool is a key and certificate management utility. It allows users to manage their own public/private key pairs and certificates. It also allows users to cache certificates. Java Keytool stores the keys and certificates in what is called a keystore. By default the Java keystore is implemented as a file. It protects private keys with a password. A Keytool keystore contains the private key and any certificates necessary to complete a chain of trust and establish the trustworthiness of the primary certificate. Please keep in mind that an SSL certificate serves two essential purposes: distributing the public key and verifying the identity of the server so users know they aren’t sending their information to the wrong server. It can only properly verify the identity of the server when it is signed by a trusted third party. A self-signed certificate is a certificate that is signed by itself rather than by a trusted authority. Switch to a command line and execute the following command, which has some defaults set and also prompts you to enter some more information. $>keytool -genkey -alias mycert -keyalg RSA -sigalg MD5withRSA -keystore my.jks -storepass secret  -keypass secret -validity 9999 What is your first and last name?   [Unknown]:  localhost What is the name of your organizational unit?   
[Unknown]:  myfear What is the name of your organization?   [Unknown]: What is the name of your City or Locality?   [Unknown]:  Grasbrun What is the name of your State or Province?   [Unknown]:  Bavaria What is the two-letter country code for this unit?   [Unknown]:  ME Is CN=localhost, OU=myfear,, L=Grasbrun, ST=Bavaria, C=ME correct?   [no]:  yes Make sure to put your desired “hostname” into the “first and last name” field; otherwise you might run into issues while permanently accepting this certificate as an exception in some browsers. Chrome doesn’t have an issue with that, though. The command generates a my.jks file in the folder where it is executed. Copy this to your WildFly config directory (%JBOSS_HOME%/standalone/config). Configure The Additional WildFly Security Realm The next step is to configure the new keystore as a server identity for SSL in the WildFly security-realms section of the standalone.xml (if you’re using -ha or other versions, edit those).  <management>         <security-realms> <!-- ... -->  <security-realm name="UndertowRealm">                 <server-identities>                     <ssl>                         <keystore path="my.jks" relative-to="jboss.server.config.dir" keystore-password="secret" alias="mycert" key-password="secret"/>                     </ssl>                 </server-identities>             </security-realm> <!-- ... --> And you’re ready for the next step. Configure Undertow Subsystem for SSL If you’re running with the default-server, add the https-listener to the undertow subsystem:   <subsystem xmlns="urn:jboss:domain:undertow:1.2">          <!-- ... -->             <server name="default-server">             <!-- ... -->                 <https-listener name="https" socket-binding="https" security-realm="UndertowRealm"/> <!-- ... --> That’s it: now you’re ready to connect to the SSL port of your instance, https://localhost:8443/. Note that you get a privacy error (compare the screenshot). 
If you need to use a fully signed certificate you mostly get a PEM file from the cert authority. In this case, you need to import this into the keystore. This stackoverflow thread may help you with that.Reference: SSL with WildFly 8 and Undertow from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog....
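For completeness, the keystore that keytool produces can also be handled from plain Java via the standard java.security.KeyStore API. The sketch below creates and reloads an empty password-protected JKS store entirely in memory; generating the actual key pair is still left to keytool, as shown above:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;

// Sketch: create a password-protected JKS keystore, persist it, and reload
// it — the same load/store cycle WildFly performs against my.jks at startup.
public class KeystoreDemo {

    public static void main(String[] args) throws Exception {
        char[] password = "secret".toCharArray();

        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, password);                  // initialize an empty keystore

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, password);                  // persist (here: to memory, not a file)

        KeyStore reloaded = KeyStore.getInstance("JKS");
        reloaded.load(new ByteArrayInputStream(out.toByteArray()), password);
        System.out.println(reloaded.size());      // prints 0: empty, but loads cleanly
    }
}
```

Loading with the wrong password fails with an IOException, which is also what a mismatched keystore-password in the security-realm configuration would surface as at server start.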
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.