What's New Here?


Generate your JAXB classes in a second with xjc

Since JAXB is part of the JDK, it is one of the most commonly used frameworks to process XML documents. It provides a comfortable way to retrieve data from XML documents and store it in Java classes. As nearly every Java developer has already used JAXB, I will not explain the different JAXB annotations. Instead, I will focus on a little command line tool called xjc and show you how to generate your binding classes based on an existing XSD schema description.

Implementing all binding classes for an existing XML interface can be a time-consuming and tedious task. But the good news is, you do not need to do it. If you have an XSD schema description, you can use the xjc binding compiler to create the required classes. And even better, xjc is part of the JDK. So there is no need for external tools, and you should always have it at hand if required.

Using xjc

As you can see in the snippet below, xjc supports lots of options. The most important ones are -d to define where the generated classes shall be stored in the file system, -p to define the package to be used and, of course, -help if you need anything else.

Usage: xjc [-options ...] <schema file/URL/dir/jar> ... [-b <bindinfo>] ...
If dir is specified, all schema files in it will be compiled.
If jar is specified, /META-INF/sun-jaxb.episode binding file will be compiled.
Options:
  -nv                  : do not perform strict validation of the input schema(s)
  -extension           : allow vendor extensions - do not strictly follow the Compatibility Rules and App E.2 from the JAXB Spec
  -b <file/dir>        : specify external bindings files (each <file> must have its own -b); if a directory is given, **/*.xjb is searched
  -d <dir>             : generated files will go into this directory
  -p <pkg>             : specifies the target package
  -httpproxy <proxy>   : set HTTP/HTTPS proxy; format is [user[:password]@]proxyHost:proxyPort
  -httpproxyfile <f>   : works like -httpproxy but takes the argument in a file to protect the password
  -classpath <arg>     : specify where to find user class files
  -catalog <file>      : specify catalog files to resolve external entity references; supports TR9401, XCatalog, and OASIS XML Catalog format
  -readOnly            : generated files will be in read-only mode
  -npa                 : suppress generation of package level annotations (**/package-info.java)
  -no-header           : suppress generation of a file header with timestamp
  -target (2.0|2.1)    : behave like XJC 2.0 or 2.1 and generate code that doesn't use any 2.2 features
  -encoding <encoding> : specify character encoding for generated source files
  -enableIntrospection : enable correct generation of Boolean getters/setters to enable Bean Introspection APIs
  -contentForWildcard  : generates content property for types with multiple xs:any derived elements
  -xmlschema           : treat input as W3C XML Schema (default)
  -relaxng             : treat input as RELAX NG (experimental, unsupported)
  -relaxng-compact     : treat input as RELAX NG compact syntax (experimental, unsupported)
  -dtd                 : treat input as XML DTD (experimental, unsupported)
  -wsdl                : treat input as WSDL and compile schemas inside it (experimental, unsupported)
  -verbose             : be extra verbose
  -quiet               : suppress compiler output
  -help                : display this help message
  -version             : display version information
  -fullversion         : display full version information

Extensions:
  -Xinject-code        : inject specified Java code fragments into the generated code
  -Xlocator            : enable source location support for generated code
  -Xsync-methods       : generate accessor methods with the 'synchronized' keyword
  -mark-generated      : mark the generated code as @javax.annotation.Generated
  -episode <FILE>      : generate the episode file for separate compilation

Example

OK, so let's have a look at an example.
We will use the following XSD schema definition and xjc to generate the classes Author and Book with the described properties and the required JAXB annotations.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<xs:schema version="1.0" xmlns:xs="http://www.w3.org/2001/XMLSchema">

    <xs:element name="author" type="author"/>
    <xs:element name="book" type="book"/>

    <xs:complexType name="author">
        <xs:sequence>
            <xs:element name="firstName" type="xs:string" minOccurs="0"/>
            <xs:element name="lastName" type="xs:string" minOccurs="0"/>
        </xs:sequence>
    </xs:complexType>

    <xs:complexType name="book">
        <xs:sequence>
            <xs:element ref="author" minOccurs="0"/>
            <xs:element name="pages" type="xs:int"/>
            <xs:element name="publicationDate" type="xs:dateTime" minOccurs="0"/>
            <xs:element name="title" type="xs:string" minOccurs="0"/>
        </xs:sequence>
    </xs:complexType>
</xs:schema>

The following command calls xjc and provides the target directory for the generated classes, the package and the XSD schema file.

xjc -d src -p blog.thoughts.on.java schema.xsd

parsing a schema...
compiling a schema...
blog\thoughts\on\java\Author.java
blog\thoughts\on\java\Book.java
blog\thoughts\on\java\ObjectFactory.java

OK, the operation completed successfully and we now have 3 generated classes in our src directory. That might be one more than some have expected. So let's have a look at each of them. The classes Author and Book look as expected. They contain the properties described in the XSD schema and the required JAXB annotations.

//
// This file was generated by the JavaTM Architecture for XML Binding (JAXB) Reference Implementation, v2.2.4-2
// See http://java.sun.com/xml/jaxb
// Any modifications to this file will be lost upon recompilation of the source schema.
// Generated on: 2014.01.13 at 07:38:24 PM CET
//

package blog.thoughts.on.java;

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlType;

/**
 * <p>Java class for author complex type.
 *
 * <p>The following schema fragment specifies the expected content contained within this class.
 *
 * <pre>
 * <complexType name="author">
 *   <complexContent>
 *     <restriction base="{http://www.w3.org/2001/XMLSchema}anyType">
 *       <sequence>
 *         <element name="firstName" type="{http://www.w3.org/2001/XMLSchema}string" minOccurs="0"/>
 *         <element name="lastName" type="{http://www.w3.org/2001/XMLSchema}string" minOccurs="0"/>
 *       </sequence>
 *     </restriction>
 *   </complexContent>
 * </complexType>
 * </pre>
 */
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "author", propOrder = {
    "firstName",
    "lastName"
})
public class Author {

    protected String firstName;
    protected String lastName;

    /** Gets the value of the firstName property. @return possible object is {@link String } */
    public String getFirstName() {
        return firstName;
    }

    /** Sets the value of the firstName property. @param value allowed object is {@link String } */
    public void setFirstName(String value) {
        this.firstName = value;
    }

    /** Gets the value of the lastName property. @return possible object is {@link String } */
    public String getLastName() {
        return lastName;
    }

    /** Sets the value of the lastName property. @param value allowed object is {@link String } */
    public void setLastName(String value) {
        this.lastName = value;
    }
}

//
// This file was generated by the JavaTM Architecture for XML Binding (JAXB) Reference Implementation, v2.2.4-2
// See http://java.sun.com/xml/jaxb
// Any modifications to this file will be lost upon recompilation of the source schema.
// Generated on: 2014.01.13 at 07:38:24 PM CET
//

package blog.thoughts.on.java;

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlSchemaType;
import javax.xml.bind.annotation.XmlType;
import javax.xml.datatype.XMLGregorianCalendar;

/**
 * <p>Java class for book complex type.
 *
 * <p>The following schema fragment specifies the expected content contained within this class.
 *
 * <pre>
 * <complexType name="book">
 *   <complexContent>
 *     <restriction base="{http://www.w3.org/2001/XMLSchema}anyType">
 *       <sequence>
 *         <element ref="{}author" minOccurs="0"/>
 *         <element name="pages" type="{http://www.w3.org/2001/XMLSchema}int"/>
 *         <element name="publicationDate" type="{http://www.w3.org/2001/XMLSchema}dateTime" minOccurs="0"/>
 *         <element name="title" type="{http://www.w3.org/2001/XMLSchema}string" minOccurs="0"/>
 *       </sequence>
 *     </restriction>
 *   </complexContent>
 * </complexType>
 * </pre>
 */
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "book", propOrder = {
    "author",
    "pages",
    "publicationDate",
    "title"
})
public class Book {

    protected Author author;
    protected int pages;
    @XmlSchemaType(name = "dateTime")
    protected XMLGregorianCalendar publicationDate;
    protected String title;

    /** Gets the value of the author property. @return possible object is {@link Author } */
    public Author getAuthor() {
        return author;
    }

    /** Sets the value of the author property. @param value allowed object is {@link Author } */
    public void setAuthor(Author value) {
        this.author = value;
    }

    /** Gets the value of the pages property. */
    public int getPages() {
        return pages;
    }

    /** Sets the value of the pages property. */
    public void setPages(int value) {
        this.pages = value;
    }

    /** Gets the value of the publicationDate property. @return possible object is {@link XMLGregorianCalendar } */
    public XMLGregorianCalendar getPublicationDate() {
        return publicationDate;
    }

    /** Sets the value of the publicationDate property. @param value allowed object is {@link XMLGregorianCalendar } */
    public void setPublicationDate(XMLGregorianCalendar value) {
        this.publicationDate = value;
    }

    /** Gets the value of the title property. @return possible object is {@link String } */
    public String getTitle() {
        return title;
    }

    /** Sets the value of the title property. @param value allowed object is {@link String } */
    public void setTitle(String value) {
        this.title = value;
    }
}

The third, and maybe unexpected, class is ObjectFactory. It contains factory methods for each generated class or interface. This can be really useful if you need to create JAXBElement representations of your objects.

//
// This file was generated by the JavaTM Architecture for XML Binding (JAXB) Reference Implementation, v2.2.4-2
// See http://java.sun.com/xml/jaxb
// Any modifications to this file will be lost upon recompilation of the source schema.
// Generated on: 2014.01.13 at 07:38:24 PM CET
//

package blog.thoughts.on.java;

import javax.xml.bind.JAXBElement;
import javax.xml.bind.annotation.XmlElementDecl;
import javax.xml.bind.annotation.XmlRegistry;
import javax.xml.namespace.QName;

/**
 * This object contains factory methods for each Java content interface and
 * Java element interface generated in the blog.thoughts.on.java package.
 * <p>An ObjectFactory allows you to programatically construct new instances
 * of the Java representation for XML content. The Java representation of XML
 * content can consist of schema derived interfaces and classes representing
 * the binding of schema type definitions, element declarations and model
 * groups. Factory methods for each of these are provided in this class.
 */
@XmlRegistry
public class ObjectFactory {

    private final static QName _Author_QNAME = new QName("", "author");
    private final static QName _Book_QNAME = new QName("", "book");

    /** Create a new ObjectFactory that can be used to create new instances of schema derived classes for package: blog.thoughts.on.java */
    public ObjectFactory() {
    }

    /** Create an instance of {@link Author } */
    public Author createAuthor() {
        return new Author();
    }

    /** Create an instance of {@link Book } */
    public Book createBook() {
        return new Book();
    }

    /** Create an instance of {@link JAXBElement }{@code <}{@link Author }{@code >} */
    @XmlElementDecl(namespace = "", name = "author")
    public JAXBElement<Author> createAuthor(Author value) {
        return new JAXBElement<Author>(_Author_QNAME, Author.class, null, value);
    }

    /** Create an instance of {@link JAXBElement }{@code <}{@link Book }{@code >} */
    @XmlElementDecl(namespace = "", name = "book")
    public JAXBElement<Book> createBook(Book value) {
        return new JAXBElement<Book>(_Book_QNAME, Book.class, null, value);
    }
}

Conclusion

We had a look at xjc and used it to generate the required binding classes for an existing XSD schema definition. xjc generated a class for each complex type and an additional factory class to ease the creation of JAXBElement representations.

What do you think about xjc and the generated code? Please leave me a comment and tell me about it. I think this tool generates very clean code and saves a lot of time. In most cases the generated code can be directly added to a project. But even if this is not the case, it is much faster to do some refactoring based on the generated code than doing everything myself.

Reference: Generate your JAXB classes in a second with xjc from our JCG partner Thorben Janssen at the Some thoughts on Java (EE) blog.
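The JAXBElement wrappers produced by the ObjectFactory are exactly what you need when marshalling classes that, like the generated ones, carry no @XmlRootElement annotation. The sketch below is a minimal, self-contained illustration of that idea: the Book class here is a simplified stand-in for the generated one (fewer fields), and it assumes the javax.xml.bind API is available (part of the JDK up to Java 8; a separate artifact on newer JDKs).

```java
import java.io.StringWriter;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlType;
import javax.xml.namespace.QName;

public class JaxbDemo {

    // Simplified stand-in for the generated Book class: no @XmlRootElement,
    // so it must be wrapped in a JAXBElement before it can be marshalled.
    @XmlAccessorType(XmlAccessType.FIELD)
    @XmlType(name = "book", propOrder = { "pages", "title" })
    public static class Book {
        protected int pages;
        protected String title;

        public void setPages(int pages) { this.pages = pages; }
        public void setTitle(String title) { this.title = title; }
    }

    public static String marshal(Book book) throws Exception {
        JAXBContext context = JAXBContext.newInstance(Book.class);
        Marshaller marshaller = context.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);

        // Wrap the object in a JAXBElement, just as the generated
        // ObjectFactory.createBook(Book) factory method would.
        JAXBElement<Book> element =
                new JAXBElement<>(new QName("", "book"), Book.class, book);

        StringWriter writer = new StringWriter();
        marshaller.marshal(element, writer);
        return writer.toString();
    }

    public static void main(String[] args) throws Exception {
        Book book = new Book();
        book.setTitle("Effective Java");
        book.setPages(416);
        System.out.println(marshal(book));
    }
}
```

Running it prints a <book> document containing the title and pages elements; without the JAXBElement wrapper, the same marshal call would fail because the class is not a registered root element.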

10 ideas to improve Eclipse IDE usability

A few years ago, we had a mini IDE war inside our office, between the Eclipse and Netbeans supporters. Fortunately, we did not have any IntelliJ supporters. Each side tried their best to convince people from the other side to use their favourite IDE. In that war, I was the Eclipse hardcore supporter and I had a hard time fighting the Netbeans team. Not as I expected, we ended up on the defence more often than on the attack. Looking at what Netbeans offers, it is quite interesting for me to see how far Netbeans has improved and how Eclipse has become slower and more difficult to use than it was in the past. Let me share my experience of that mini war and my point of view on how Eclipse should be improved to keep its competitive edge.

What is the benefit of using Netbeans?

For a long time, and even up to now, Eclipse has been the dominant IDE in the market. But this was not the case before Eclipse 3.0, which was released in 2004. From there, Eclipse simply dominated the Java IDE market for the next decade. Even the C/C++ and PHP folks built their IDE plugins on top of Eclipse. However, things are getting less rosy now. Eclipse is still good, but not that much better than its competitors any more. IntelliJ is a commercial IDE and we will not compare it to Eclipse in this article. The other, more serious competitor is Netbeans. I myself tried Netbeans, compared it to Eclipse 3.0 and never came back. But the Netbeans that Eclipse is fighting now and the Netbeans that I tried back then are simply too different. It is much faster, more stable, more configurable and easier to use than the one I knew. The key points in favour of Netbeans are its usability and the first-class support from Sun/Oracle for new Java features. That may not be very appealing to an Eclipse veteran like myself, but for a starter it is a great advantage.
Like any other war in the technology world, Eclipse and Netbeans have been copying each other's features for so long that it is very hard to find something that one IDE can do and the other cannot. When choosing a preferred IDE, what really matters is how things are done rather than what can be done. Regarding usability, I feel Eclipse has failed to keep the competitive edge it once had over Netbeans. The Eclipse interface is still very flexible and easy to customize, but the recent plugins are not so well implemented and are error prone (I am thinking of the Maven and Git support). The Eclipse marketplace is still great, but lots of plugins are not well tested and may create performance or stability issues. Moreover, a careless release (Juno 4.0) made Eclipse slow and prone to hanging. I do not recall ever restarting Eclipse in the past, but that happens to me once or twice a month now (I am using Eclipse Kepler 4.3). Plus, Eclipse has not fixed some of the discomforts that I have encountered from early days, and I still need to bring along all my favourite plugins to ease the pain.

What I expect from Eclipse

There are lots of things I want Eclipse to have but never see in the release notes. Let me share some thoughts:

1. Warn me before I open a big file rather than hanging

I guess this happens to most of us. My preferred view is the Package Explorer rather than the Project Explorer or Navigator, but it does not matter. When I search for a file with Ctrl + Shift + R or left click on the file in the Explorer, Eclipse will just open the file in the editor. What if the file is a huge XML file? Eclipse hangs and shows me the content one minute later, or I get frustrated and kill the process. Both are bad outcomes.

2. Have a single import/export configuration endpoint

For those who do not know, Eclipse allows you to import/export the Eclipse configuration to a file.
When I first download a new release of Eclipse, there are a few steps that I always perform:

- Import -> Install -> From Existing Installation: this step helps me copy all my favourite features and plugins from the old Eclipse to the new one.
- Modify Xms and Xmx in eclipse.ini.
- Import the formatter (from an exported file).
- Import the shortcut keys (from an exported file).
- Configure Installed JREs to point to the local JDK.
- Configure the server runtime and create the server.
- Disable useless validators.
- Register the SVN repository.
- And some other minor tasks that I cannot remember now…

Why not make it as simple as a Chrome installation, where the new Eclipse can copy whatever settings I made in the old one?

3. Stop building or refreshing the whole workspace

It has happened to me, and to some of the folks here, that I have a hundred projects in my workspace. The common practice in our workplace is one workspace per repository. To manage things, we create more than 10 Working Sets and constantly switch among them when moving to a new task. For us, having Eclipse build, refresh and scan the whole workspace is so painful that we either keep closing projects or, sometimes, create a smaller workspace. Can Eclipse allow me to configure scanning of a Working Set rather than the whole workspace? The Working Set is all I care about. Plus, sometimes Ctrl + Shift + R and Ctrl + Shift + T do not reflect my active Working Set, and not many people notice the small arrow at the top right of the dialogue that selects this.

4. Stop indexing Git and Maven repositories by default

Eclipse is nice; it indexes Maven and Git repositories so that we can work faster later. But I do not open Eclipse to work with Maven or Git every time. Can these plugins be less resource consuming and let me trigger the indexing process when I want?

5. Give me the process id of any server or application that I have launched

This must be a very simple task, but I do not know why Eclipse does not do it. It would be even more helpful if Eclipse could show the memory usage of each process and of Eclipse itself.
I would like to have a new view that tracks all running processes (similar to the Debug view) but with process id and memory usage.

6. Implement "Open File Explorer" and "Open Console" here

I bet most of us use the console often when coding, whether for Vi, Maven or Git commands. However, Eclipse does not give us this feature and we need to install an additional plugin to get it.

7. Improve the editor

I often install the AnyEdit plugin because it offers many important features that I find hard to live without, like converting, sorting, … These features are so crucial that they should be packaged with the Eclipse distribution rather than in a plugin.

8. Stop showing nonsense warnings and suggestions

Has any of you built a project without a single yellow warning? I did that in the past, but rarely now. For example, Eclipse asks me to introduce a serialVersionUID because my Exception implements the Serializable interface. But seriously, how many Java classes implement Serializable? Do we need to do this for every one of them?

9. Provide short keys for the refactoring tools that I always use

Some folks like to type and see IDE dependence as a sin. I am on the opposite side. Whatever can be done by the IDE should be done by the IDE. The developer is there to think rather than type. It means that I use lots of Eclipse shortcut keys and refactoring tools like:

- Right click -> Surround With
- Right click -> Refactor
- Right click -> Source

Some of the most common shortcut keys I use every day are Ctrl + O, Alt + Shift + L, Alt + Shift + M, Ctrl + Shift + F, … and I would like to have more. Eclipse allows me to define my own shortcut keys, but I would like them to be part of the Eclipse distribution so that I can use them on other boxes as well. From my personal experience, some tools worth having a shortcut key are:

- Generate Getters and Setters
- Generate Constructor using Fields
- Generate toString()
- Extract Interface
- …

I also want Eclipse to be more aggressive in defining Templates for the Java editor.
Most of us are familiar with well-known templates like sysout, syserr and switch; why don't we have more, for log, toString(), hashCode(), while true, …?

10. Make the error messages easier for beginners to read

I have answered many Eclipse questions regarding common errors because developers cannot figure out what the error message means. Let me give a few examples. A developer uses the command "mvn eclipse:eclipse". This command generates the project classpath file and effectively disables Workspace Resolution. Later, he wants to fix things with Update Project Configuration and encounters a cryptic error (if you want to understand this further, take a look at the last part of my Maven series). Who understands that? The real problem is that the m2e plugin fails to recognize some entries populated by Maven, and the solution is to delete all Eclipse files and import the Maven project again. Another well-known issue is the error message in the pom editor when m2e does not recognize a Maven plugin. It is very confusing for a newbie to see this kind of error.

Conclusion

These are my thoughts, and I wish Eclipse would grant my wishes some day. Do you have anything to share with us about how you want Eclipse to improve?

Reference: 10 ideas to improve Eclipse IDE usability from our JCG partner Tony Nguyen at the Developers Corner blog.

From Scrum to Kanban

This month marks one year since we switched from Scrum to Kanban. I find it a good time for us to review the impact of this change.

Our Scrum

I have experienced two working environments that practice Scrum, and still they were quite different. That is why it may be more valuable to start by sharing our own Scrum practice.

Iteration

Our iterations are 2 weeks long. I am quite satisfied with the duration, as one week is a bit too short to develop any meaningful story and one month is a bit too long to plan or to do a retrospective. Our iteration starts with a retrospective on the first Monday morning. That same afternoon, there is iteration planning. For the rest of the iteration, we code as much as we want. Our product owner requests two rounds of demo: a soft demo on the last Wednesday of the iteration, where we can show the newly developed features on a development machine or the Stage environment, and a final demo on the last day of the iteration on the UAT environment, in order to get the stories accepted. Agile emphasizes adapting to change, but we still do T+2 planning (two iterations ahead). With this practice, we know quite well what is going to be delivered or worked on for at least one month ahead. If there is urgent work, the iteration is re-planned and some stories are pushed back to the next iteration.

Daily life

Our daily life starts with a morning alarm. Some ancient coders in the past set the rule of using an alarm for the office starting hour. Anyone who comes to the office after the alarm rings has the honour of donating 1 dollar to the team fund. This fund can be used to host an outdoor retrospective or to buy coffee. To be honest, I like this idea, even if it effectively cuts 22 dollars from my monthly income. 15 minutes later, we have another alarm for the daily stand-up. This short period is supposed to be used to read email and catch up with what happened overnight.
Our team is based in Asia but actively works with project stakeholders in Europe and the US. That is why we need this short email-checking session to have a meaningful stand-up. It is not really a Scrum practice but, like most other corporate environments, we need to fill in a time-sheet at the end of the day. Using the time-sheet, we keep track of the time spent versus the estimated effort and use that to calculate velocity.

Roles

As specified by Scrum, we have a development team and a product owner. In our company, product owners are called Capability Managers. At the moment, our management is discussing whether the Capability Manager should be split into two roles, one focused on the technical aspects of the product and the other solely on the business aspects. We do not have a Scrum Master; instead, we have a Release Manager. This role is a bit confusing because it does not appear in any Scrum practice. In our environment, the Release Manager works more like a traditional Project Manager. Not all projects have a Release Manager, but on some bigger-scale projects a Release Manager can be quite useful, and quite busy as well. Most of our products are SaaS applications, and some successful products have more than 100 customers worldwide. The Capability Manager can focus on product features and let the Release Manager deal with story planning, customer deadlines and minor customizations. There is also a discussion on whether the Release Manager job requires a technical background, as Release Managers need to do iteration planning and some stories are technical.

Tools

We use a mixture of Excel spreadsheets, Jira and Rally in our daily life. Jira is a leftover tool from the past, before we moved to Rally; now we only use it to track support tickets and defects. Rally is an online platform for Agile practice with built-in support for iterations, stories, defects, releases, backlog, …
Even with these tools, we cannot avoid using the old-day spreadsheets to keep track of team resources (the team resource pipeline) and to do resource planning (the resource matrix). Due to resource scarcity, we still have multitasking teams that deal with a few projects and a few product owners at the same time. Periodically, the release managers need to sit together and bargain for their resources for the next few iterations.

Spirit

As a friend of mine always says, Scrum is more about spirit than practices. I could not agree more. Applying Scrum is about doing things with a Scrum mindset rather than strictly following written practices. Personally, I feel we apply Scrum quite well. First, in the team stand-up, we try our best to avoid making it look like a progress report; it should be an information-sharing and collaboration session. Once in a while, the stand-up lasts more than the default 15 minutes because developers spend time elaborating ideas and discussing them on the spot. The Release Manager and Product Owner do not join our daily stand-up. Our retrospective is a closed-door activity which only involves team members. Neither the Release Manager nor the Product Owner joins us unless we call them in to ask for information. Each team member takes a turn as facilitator. The format of the retrospective is not fixed; it is up to the facilitator's imagination to decide what we will do. The rest just sit down, relax and wait to see what will happen next. The planning session includes tasking and a Poker-style estimation game. It is up to the team to re-estimate (we estimate once while the story is still in the backlog), verify the assumptions and later arrange and commit the stories to fit the team's capacity for the iteration. Sometimes we have a mini debate when there is a big gap between team members' estimates.

Why we moved to Kanban

You may wonder, if our Scrum worked so well, why did we move to Kanban? Well, it was not our team's decision.
Kanban was initiated at the UK headquarters and spread to the other regions. However, working with Scrum was not all perfect either; let me share some of the problems we were facing.

Resource utilization at the end of the iteration

This problem may not be very severe in our office, but it is a big concern in other regions. Due to technical difficulties, the estimate is sometimes very far from the spent effort. This leaves a big gap at the end of the iteration. It would be fine if the gap were big enough to schedule another story, but most of the time it is not. This creates the low-productivity issue that management wants to fix. They hope removing iterations will remove this virtual gap and let the developers focus on delivering work.

The pressure of the iteration commitment

By committing to the planned stories in the iteration, we are under pressure to deliver them. The stories are estimated with a 2-week duration for development, but we normally need to deliver them faster to match the soft demo on Wednesday and the final demo on Friday. To make things worse, our Web Service team is in another region and we need to raise a deployment ticket one day in advance to get things done. If the deployment fails, we need one more day to redeploy. The consequence is that either we develop too fast to meet the deadline, or we follow the estimate and then miss the commitment. Another concern is the pressure to estimate and commit to something developers do not know well, and still be punished for missing the commitment. This creates a defensive mindset where developers try to include a safety buffer in any estimate they make.

Our Kanban

Life is not so different after the move to Kanban. On the good side, we got the budget to buy a big screen. On the bad side, we do not do iteration planning any more. However, we still keep our retrospective on the first Monday morning.
Kanban board

Now we use the Kanban board in Rally to track our development progress. We created our Kanban board with 7 columns, which reflect our working process:

- None (equals the backlog)
- Tasking
- Building
- Peer Review (only after Stage deployment)
- Deploy to UAT
- Acceptance (the story is signed off by the Product Owner)
- Deploy to LIVE

The product owner creates stories in the backlog, which are pulled into the Tasking column by the Release Manager. After that, it is the development team's responsibility to move the story to the Deploy to UAT column, and then the product owner's responsibility to verify and accept it. If there is any feedback, the story is put back into the Building column; otherwise, it is signed off and ready to be deployed to production. It is up to the Release Managers when to deploy an accepted feature to the Live environment. As Kanban practice recommends, we want to limit multitasking, so we set a threshold on the capacity of each column. As we do pairing, with 8 developers in our team the threshold for each column is supposed to be no more than 4. However, this is easier said than done, as stories are often blocked by external factors and we need to work on something else.

Planning

There is no iteration planning any more. Rather, we do planning whenever there is a new story in the Tasking column. A story is both tasked and estimated by one pair rather than collecting input from the whole team. What is a bit unnatural is that, due to our multitasking nature, one pair does not follow one story from Tasking until Deploy to UAT. To deal with this, we often need to go back to the pair that did the tasking to ask for an explanation.

Demo

We still need to estimate, but there is no fixed time for a demo. In the regular meeting between the team and the Release Manager, the most asked question is "Do you have anything to demo today?" and the most popular answer is "No".

Estimation

When abandoning Scrum, we also abandoned story point estimation.
We still count the spent effort versus the estimated effort, but only for reference. Since last year, we have moved back to estimating in pair days. Our feeling So, how do we feel after one year of practising Kanban? I think it is a mixed feeling. On the good side, there are fewer things to worry about, fewer commitments to keep and better focus on development. Plus, we have the big screen to look at every morning. However, things are not all rosy. I do not know whether we do Kanban the wrong way or it is just the nature of Kanban, but developers do not follow one story from beginning to end. One guy may task the story one way, following his skill set, and someone else will end up delivering the work. Moreover, I feel Kanban treats every developer as equal, which is not so true. If there is a story available in the Building column and you are free, you must take the story, no matter whether you have the skills or not. This hampers the productivity of the team. However, it can also be viewed positively, as Kanban fosters skills and knowledge sharing among developers. Moving to Kanban also causes developers to spend more time on story development. There is no pressure to cut corners to deliver, but there is also a tendency to over-deliver good-to-have features that are not included in the Acceptance Criteria. That is for us; the Release Managers seem to be not so happy with the transition. The lack of iterations only makes their planning more ambiguous and difficult. Reference: From Scrum to Kanban from our JCG partner Tony Nguyen at the Developers Corner blog....

SSL encrypted EJB calls with JBoss AS 7

Encrypting the communication between client and server provides improved security and privacy protection for your system. This can be an important requirement from the customer, especially if client or server needs to work in an unprotected network. This article shows you how to set up SSL encrypted EJB calls in JBoss AS 7. Server There are only two things that need to be done on the server side: creating a key store with the private/public pair of keys for the encryption and referencing the key store in the server configuration. The source code of your application stays the same with or without encryption. Creating the keys Java provides the tool keytool which we will use to manage the key store and to create the private/public pair of keys. The example below creates a pair of 1024 bit keys using the RSA algorithm and adds them to the key store server.keystore. The key store will be created if it does not exist. keytool -genkey -alias jboss -keyalg RSA -keysize 1024 -keystore server.keystore -validity 365 -keypass 123456 -storepass 123456 -dname "CN=localhost, O=thoughts-on-java.org" We will need to provide this key store to the JBoss application server. Therefore I prefer to store it in the JBoss configuration directory. But you can store it wherever you want, as long as the JBoss server can access it. Server configuration Now we have to reference the key store in the JBoss configuration. Therefore we add a server-identities element to the security realm configuration of the application realm.
The following snippet shows an example configuration using the standard ApplicationRealm configuration and a server.keystore file located in the JBoss configuration directory: <management> <security-realms> <security-realm name="ManagementRealm"> <authentication> <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/> </authentication> </security-realm> <security-realm name="ApplicationRealm"> <server-identities> <ssl> <keystore path="server.keystore" relative-to="jboss.server.config.dir" password="123456"/> </ssl> </server-identities> <authentication> <properties path="application-users.properties" relative-to="jboss.server.config.dir"/> </authentication> </security-realm> </security-realms>... This is all that needs to be done on the server side. Client On the client side, we need to do the following things: import the public key of the server into the client key store, define SSL encryption in the EJBClientProperties and provide the location and password of a key store containing the public key via JVM arguments. Importing the key First we need to export the public key of the key pair we added to the server key store. This can be done with keytool, too: keytool -export -keystore server.keystore -alias jboss -file server.cer -keypass 123456 -storepass 123456 OK, now we can add the key to the client key store, which will be created if it does not exist: keytool -import -trustcacerts -alias jboss -file server.cer -keystore client.keystore -keypass 123456 -storepass 123456 EJBClientProperties There is no big difference in the EJBClientProperties. The properties remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED and remote.connection.default.connect.options.org.xnio.Options.SSL_STARTTLS need to be set to true. The rest stays unchanged. The following snippet shows the creation of an SSL encrypted connection to the server and the lookup of an SLSB.
// define EJB client properties final Properties props = new Properties(); // define SSL encryption props.put("remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED", "true"); props.put("remote.connection.default.connect.options.org.xnio.Options.SSL_STARTTLS", "true"); // connection properties props.put("remote.connections", "default"); props.put("remote.connection.default.host", "localhost"); props.put("remote.connection.default.port", "4447"); // user credentials props.put("remote.connection.default.username", "test"); props.put("remote.connection.default.password", "1234");props.put("remote.connection.default.connect.options.org.xnio.Options.SASL_DISALLOWED_MECHANISMS", "JBOSS-LOCAL-USER"); props.put("remote.connection.default.connect.options.org.xnio.Options.SASL_POLICY_NOPLAINTEXT", "false"); props.put("remote.connection.default.connect.options.org.jboss.remoting3.RemotingOptions.HEARTBEAT_INTERVAL", "600000");// create EJB client configuration final EJBClientConfiguration clientConfiguration = new PropertiesBasedEJBClientConfiguration( props);// create and set a context selector final ContextSelector<EJBClientContext> contextSelector = new ConfigBasedEJBClientContextSelector( clientConfiguration); EJBClientContext.setSelector(contextSelector);// create InitialContext final Hashtable<Object, Object> contextProperties = new Hashtable<>(); contextProperties.put(Context.URL_PKG_PREFIXES, "org.jboss.ejb.client.naming"); InitialContext initialContext = new InitialContext(contextProperties);// lookup SLSB GreeterRemote greeter = (GreeterRemote) initialContext .lookup("ejb:/test/Greeter!blog.thoughts.on.java.ssl.remote.GreeterRemote"); Assert.assertEquals("Hello World!", greeter.greet("World")); JVM arguments OK, now we are nearly done. The only thing missing is the reference to the client key store.
This can be done with the JVM arguments javax.net.ssl.trustStore for the location and javax.net.ssl.trustStorePassword for the password of the key store, e.g.: -Djavax.net.ssl.trustStore=src\test\resources\client.keystore -Djavax.net.ssl.trustStorePassword=123456 This is all that needs to be done to set up SSL encrypted EJB calls with JBoss AS 7. Troubleshooting If there are any communication problems, you can set -Djavax.net.debug=true to enable debug messages. Conclusion In this article we had a look at the configuration and code changes needed to set up encrypted EJB calls with JBoss AS 7. It can be done in a few minutes and provides improved security and privacy protection for your communication. Reference: SSL encrypted EJB calls with JBoss AS 7 from our JCG partner Thorben Janssen at the Some thoughts on Java (EE) blog....
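As a side note to the JVM arguments described above: the same two trust store properties are standard JSSE system properties, so they can also be set programmatically before the first SSL connection is opened. A minimal sketch (the path and password are just the placeholder values from the example above, and the class name is made up for illustration):

```java
public class TrustStoreConfig {

    // Sets the standard JSSE trust store properties. This must run before the
    // first SSL handshake, because the default SSLContext reads them lazily
    // and caches the result.
    public static void configureTrustStore(String path, String password) {
        System.setProperty("javax.net.ssl.trustStore", path);
        System.setProperty("javax.net.ssl.trustStorePassword", password);
    }

    public static void main(String[] args) {
        configureTrustStore("src/test/resources/client.keystore", "123456");
        System.out.println(System.getProperty("javax.net.ssl.trustStore"));
    }
}
```

This is equivalent to the -D arguments; the JVM-argument form is usually preferable because it keeps the password out of the source code.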

Spring Rest Controller with angularjs resource

Angularjs ngResource is an angularjs module for interacting with REST-based services. I used it recently for a small project with Spring MVC and wanted to document a configuration that worked well for me. The controller is run-of-the-mill: it supports CRUD operations on a Hotel entity via the following methods:          POST /rest/hotels – creates a Hotel entity GET /rest/hotels – gets the list of Hotel entities GET /rest/hotels/:id – retrieves an entity with the specified id PUT /rest/hotels/:id – updates an entity DELETE /rest/hotels/:id – deletes an entity with the specified id This can be implemented in the following way using Spring MVC: @RestController @RequestMapping("/rest/hotels") public class RestHotelController { private HotelRepository hotelRepository; @Autowired public RestHotelController(HotelRepository hotelRepository) { this.hotelRepository = hotelRepository; }@RequestMapping(method=RequestMethod.POST) public Hotel create(@RequestBody @Valid Hotel hotel) { return this.hotelRepository.save(hotel); }@RequestMapping(method=RequestMethod.GET) public List<Hotel> list() { return this.hotelRepository.findAll(); }@RequestMapping(value="/{id}", method=RequestMethod.GET) public Hotel get(@PathVariable("id") long id) { return this.hotelRepository.findOne(id); } @RequestMapping(value="/{id}", method=RequestMethod.PUT) public Hotel update(@PathVariable("id") long id, @RequestBody @Valid Hotel hotel) { return hotelRepository.save(hotel); } @RequestMapping(value="/{id}", method=RequestMethod.DELETE) public ResponseEntity<Boolean> delete(@PathVariable("id") long id) { this.hotelRepository.delete(id); return new ResponseEntity<Boolean>(Boolean.TRUE, HttpStatus.OK); } } Note the @RestController annotation: this is a new annotation introduced with Spring Framework 4.0. With this annotation specified on the controller, the @ResponseBody annotation on each of the methods can be avoided.
On the angularjs side, the ngResource module can be configured in a factory in the following way to consume this service: app.factory("Hotel", function ($resource) { return $resource("/rest/hotels", {id: "@id"}, { update: { method: 'PUT' } }); }); The only change to the default configuration is in specifying the additional “update” action with the HTTP method PUT instead of POST. With this change, the REST API can be accessed in the following way: POST /rest/hotels translates to: var hotel = new Hotel({name:"test",address:"test address", zip:"0001"}); hotel.$save(); Or another variation of this: Hotel.save({}, {name:"test",address:"test address", zip:"0001"}); GET /rest/hotels translates to: Hotel.query(); GET /rest/hotels/:id translates to: Hotel.get({id:1}) PUT /rest/hotels/:id translates to: var hotel = new Hotel({id:1, name:"test",address:"test address", zip:"0001"}); hotel.$update(); DELETE /rest/hotels/:id translates to: var hotel = new Hotel({id:1}); hotel.$delete(); OR Hotel.delete({id:1}); To handle success and failure outcomes, just pass in additional callback handlers, e.g. with create: var hotel = new Hotel({name:"test",address:"test address", zip:"0001"}); hotel.$save({},function(response){ //on success }, function(failedResponse){ //on failure }); A complete working CRUD sample with angularjs and Spring MVC is available at this github location: https://github.com/bijukunjummen/spring-boot-mvc-test/tree/withangular Reference: Spring Rest Controller with angularjs resource from our JCG partner Biju Kunjummen at the all and sundry blog....

Spice up your test code with custom assertions

Inspired by the @tkaczanowski talk during the GeeCON conference I decided to have a closer look at custom assertions with the AssertJ library. In my ‘Dice’ game I created a ‘Chance’ that is any combination of dice with the score calculated as the sum of all dice. This is a relatively simple object:             class Chance implements Scorable {@Override public Score getScore(Collection<Dice> dice) { int sum = dice.stream() .mapToInt(die -> die.getValue()) .sum(); return scoreBuilder(this) .withValue(sum) .withCombination(dice) .build(); } }public interface Scorable { Score getScore(Collection<Dice> dice); } In my test I wanted to see how the score is calculated for different dice combinations. I started with a simple one (and the only one, actually): public class ChanceTest {private Chance chance = new Chance();@Test public void chance() { // arrange Collection<Dice> rolled = dice(1, 1, 3, 3, 3); // act Score score = chance.getScore(rolled); // assert assertThat(score.getScorable()).isNotNull(); assertThat(score.getValue()).isEqualTo(11); assertThat(score.getReminder()).isEmpty(); assertThat(score.getCombination()).isEqualTo(rolled); }} A single concept – the score object – is validated in the test. To improve the readability and reusability of the score validation I will create a custom assertion. I would like my assertion to be used like any other AssertJ assertion, as follows: public class ChanceTest {private Chance chance = new Chance();@Test public void scoreIsSumOfAllDice() { Collection<Dice> rolled = dice(1, 1, 3, 3, 3); Score score = chance.getScore(rolled);ScoreAssertion.assertThat(score) .hasValue(11) .hasNoReminder() .hasCombination(rolled); } } In order to achieve that I need to create a ScoreAssertion class that extends org.assertj.core.api.AbstractAssert. The class should have a public static factory method and all the needed verification methods.
In the end, the implementation may look like the one below. class ScoreAssertion extends AbstractAssert<ScoreAssertion, Score> {protected ScoreAssertion(Score actual) { super(actual, ScoreAssertion.class); }public static ScoreAssertion assertThat(Score actual) { return new ScoreAssertion(actual); }public ScoreAssertion hasNoReminder() { isNotNull(); if (!actual.getReminder().isEmpty()) { failWithMessage("Reminder is not empty"); } return this; }public ScoreAssertion hasValue(int scoreValue) { isNotNull(); if (actual.getValue() != scoreValue) { failWithMessage("Expected score to be <%s>, but was <%s>", scoreValue, actual.getValue()); } return this; }public ScoreAssertion hasCombination(Collection<Dice> expected) { Assertions.assertThat(actual.getCombination()) .containsExactly(expected.toArray(new Dice[0])); return this; } } The motivation for creating such an assertion is to have more readable and reusable code. But it comes with a price – more code needs to be created. In my example, I know I will create more Scorables quite soon and I will need to verify their scoring algorithms, so creating additional code is justified. The gain will be visible. For example, I created a NumberInARow class that calculates the score for all consecutive numbers in a given dice combination.
The score is the sum of all dice with the given value: class NumberInARow implements Scorable {private final int number;public NumberInARow(int number) { this.number = number; }@Override public Score getScore(Collection<Dice> dice) {Collection<Dice> combination = dice.stream() .filter(value -> value.getValue() == number) .collect(Collectors.toList());int scoreValue = combination .stream() .mapToInt(value -> value.getValue()) .sum();Collection<Dice> reminder = dice.stream() .filter(value -> value.getValue() != number) .collect(Collectors.toList());return Score.scoreBuilder(this) .withValue(scoreValue) .withReminder(reminder) .withCombination(combination) .build(); } } I started with a test that checks two fives in a row and I already missed one assertion – hasReminder – so I improved the ScoreAssertion. I continued changing the assertion with other tests until I got a quite well-shaped DSL I can use in my tests: public class NumberInARowTest {@Test public void twoFivesInARow() { NumberInARow numberInARow = new NumberInARow(5); Collection<Dice> dice = dice(1, 2, 3, 4, 5, 5); Score score = numberInARow.getScore(dice); // static import ScoreAssertion assertThat(score) .hasValue(10) .hasCombination(dice(5, 5)) .hasReminder(dice(1, 2, 3, 4)); }@Test public void noNumbersInARow() { NumberInARow numberInARow = new NumberInARow(5); Collection<Dice> dice = dice(1, 2, 3); Score score = numberInARow.getScore(dice);assertThat(score) .isZero() .hasReminder(dice(1, 2, 3)); } }public class TwoPairsTest {@Test public void twoDistinctPairs() { TwoPairs twoPairs = new TwoPairs(); Collection<Dice> dice = dice(2, 2, 3, 3, 1, 4); Score score = twoPairs.getScore(dice);assertThat(score) .hasValue(10) .hasCombination(dice(2, 2, 3, 3)) .hasReminder(dice(1, 4)); } } The assertion after the changes looks as follows: class ScoreAssertion extends AbstractAssert<ScoreAssertion, Score> {protected ScoreAssertion(Score actual) { super(actual, ScoreAssertion.class); }public static ScoreAssertion
assertThat(Score actual) { return new ScoreAssertion(actual); }public ScoreAssertion isZero() { hasValue(Score.ZERO); hasNoCombination(); return this; }public ScoreAssertion hasValue(int scoreValue) { isNotNull(); if (actual.getValue() != scoreValue) { failWithMessage("Expected score to be <%s>, but was <%s>", scoreValue, actual.getValue()); } return this; }public ScoreAssertion hasNoReminder() { isNotNull(); if (!actual.getReminder().isEmpty()) { failWithMessage("Reminder is not empty"); } return this; }public ScoreAssertion hasReminder(Collection<Dice> expected) { isNotNull(); Assertions.assertThat(actual.getReminder()) .containsExactly(expected.toArray(new Dice[0])); return this; }private ScoreAssertion hasNoCombination() { isNotNull(); if (!actual.getCombination().isEmpty()) { failWithMessage("Combination is not empty"); } return this; }public ScoreAssertion hasCombination(Collection<Dice> expected) { isNotNull(); Assertions.assertThat(actual.getCombination()) .containsExactly(expected.toArray(new Dice[0])); return this; } } I really like the idea of custom AssertJ assertions. They will improve the readability of my code in certain cases. On the other hand, I am pretty sure they cannot be used in all scenarios, especially those where the chance of reusability is minimal. In such a case, private methods with grouped assertions can be used. What is your opinion? Resources https://github.com/joel-costigliola/assertj-core/wiki/Creating-specific-assertions The evolution of assertions via @tkaczanowski Reference: Spice up your test code with custom assertions from our JCG partner Rafal Borowiec at the Codeleak.pl blog....
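For cases where a full custom assertion class is overkill, the "private methods with grouped assertions" alternative mentioned above can be very small. A plain-Java sketch of the idea (no AssertJ; the Score record here is a simplified stand-in for the article's Score class, and the method is kept package-private so it can be exercised from outside):

```java
import java.util.Collection;
import java.util.List;

public class GroupedAssertionsSketch {

    // Simplified stand-in for the article's Score class (value + leftover dice).
    record Score(int value, Collection<Integer> reminder) {}

    // All score checks grouped in one helper instead of a custom assertion class.
    // In a real test class this would typically be private.
    static void assertScore(Score actual, int expectedValue, Collection<Integer> expectedReminder) {
        if (actual.value() != expectedValue) {
            throw new AssertionError("Expected score " + expectedValue + " but was " + actual.value());
        }
        if (!actual.reminder().equals(expectedReminder)) {
            throw new AssertionError("Unexpected reminder: " + actual.reminder());
        }
    }

    public static void main(String[] args) {
        // Matching score and reminder: the grouped assertions pass silently.
        assertScore(new Score(10, List.of(1, 4)), 10, List.of(1, 4));
        System.out.println("assertions passed");
    }
}
```

The trade-off is the one the article describes: the helper is quicker to write, but it does not compose into a fluent DSL the way the ScoreAssertion class does.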

API design and performance

When you design a new API you have to make a lot of decisions. These decisions are based on a number of design principles. Joshua Bloch has summarized some of them in his presentation “How to Design a Good API and Why it Matters”. The main principles he mentions are:            Easy to learn Easy to use Hard to misuse Easy to read and maintain code that uses it Sufficiently powerful to satisfy requirements Easy to extend Appropriate to audience. As we see from the list above, Joshua Bloch puts his emphasis on readability and usage. A point that is completely missing from this list is performance. But can performance impact your design decisions at all? To answer this question, let’s try to design a simple use case in the form of an API and measure its performance. Then we can have a look at the results and decide whether performance considerations have an impact on the API or not. As an example we take the classic use case of loading a list of customers from some service/storage. What we also want to consider is the fact that not all users are allowed to perform this operation. Hence we will have to implement some kind of permission check. To implement this check and return this information back to the caller, we have multiple options. The first try would look like this one: List<Customer> loadCustomersWithException() throws PermissionDeniedException Here we model an explicit exception for the case that the caller does not have the right to retrieve the list of customers. The method returns a list of Customer objects, while we assume that the user can be retrieved from some container or ThreadLocal implementation and does not have to be passed to each method. The method signature above is easy to use and hard to misuse.
Code that uses this method is also easy to read: try { List<Customer> customerList = api.loadCustomersWithException(); doSomething(customerList); } catch (PermissionDeniedException e) { handleException(); } The reader immediately sees that a list of Customers is loaded and that we perform some follow-up action only if we don’t get a PermissionDeniedException. But in terms of performance, exceptions cost some CPU time, as the JVM has to stop the normal code execution and walk up the stack to find the position where execution has to be continued. This is also particularly costly considering the architecture of modern processors with their eager execution of code sequences in pipelines. So would it be better in terms of performance to introduce another way of informing the caller about the missing permission? The first idea would be to create another method in order to check the permission before calling the method that eventually throws an exception. The caller code would then look like this: if(api.hasPermissionToLoadCustomers()) { List<Customer> customerList = api.loadCustomers(); doSomething(customerList); } The code is still readable, but we have introduced another method call that also costs execution time. But now we are sure that the exception won’t be thrown; hence we can omit the try/catch block. This code now violates the principle “Easy to use”, as we now have to invoke two methods for one use case instead of one. You have to pay attention not to forget the additional call for each retrieval operation. With regard to the whole project, your code will be cluttered with hundreds of permission checks. Another idea to overcome the exception is to provide an empty list to the API call and let the implementation fill it. The return value can then be a boolean value indicating whether the user had the permission to execute the operation or whether the list is empty because no customers have been found.
As this sounds like C or C++ programming, where the caller manages the memory of the structures that the callee uses, this approach costs the construction of an empty list even if you don’t have the permission to retrieve the list at all: List<Customer> customerList = new ArrayList<Customer>(); boolean hasPermission = api.loadCustomersWithListAsParameter(customerList); if(hasPermission) { doSomething(customerList); } One last approach to solve the problem of returning two pieces of information to the caller would be the introduction of a new class that holds, next to the returned list of Customers, a boolean flag indicating whether the user had the permission to perform this operation: CustomerList customerList = api.loadCustomersWithReturnClass(); if(customerList.isUserHadPermission()) { doSomething(customerList.getCustomerList()); } Again we have to create additional objects that cost memory and performance, and we also have to deal with an additional class that has nothing more to do than to serve as a simple data holder to provide the two pieces of information. Although this approach is again easy to use and creates readable code, it creates additional overhead in order to maintain the separate class, and it has a somewhat awkward means of indicating that an empty list is empty because of the missing permission. After having introduced these different approaches, it is now time to measure their performance, once for the case that the caller has the permission and once for the case that he does not.
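The result-class approach needs only a small data holder. A minimal sketch of what such a CustomerList class might look like (the field and method names follow the getters used in the snippet above; Customer is replaced by String to keep the sketch self-contained):

```java
import java.util.Collections;
import java.util.List;

// Simple data holder returning both the loaded list and the permission flag.
public class CustomerList {

    private final boolean userHadPermission;
    private final List<String> customerList;

    public CustomerList(boolean userHadPermission, List<String> customerList) {
        this.userHadPermission = userHadPermission;
        this.customerList = customerList;
    }

    // Convenience factory for the permission-denied case: no exception,
    // just an empty list and a false flag.
    public static CustomerList permissionDenied() {
        return new CustomerList(false, Collections.emptyList());
    }

    public boolean isUserHadPermission() {
        return userHadPermission;
    }

    public List<String> getCustomerList() {
        return customerList;
    }
}
```

This makes the trade-off concrete: every call allocates one extra wrapper object, but no exception is ever thrown for the permission-denied case.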
The results in the following table are shown for the first case with 1,000,000 repetitions:

Measurement                                            Time [ms]
testLoadCustomersWithExceptionWithPermission                  33
testLoadCustomersWithExceptionAndCheckWithPermission          34
testLoadCustomersWithReturnClassWithPermission                41
testLoadCustomersWithListAsParameterWithPermission            66

As expected, the two approaches that introduce an additional class respectively pass an empty list cost more performance than the approaches that use an exception. Even the approach that uses a dedicated method call to check for the permission is not much slower than the one without it. The following table now shows the results for the case where the caller does not have the permission to retrieve the list:

Measurement                                            Time [ms]
testLoadCustomersWithExceptionNoPermission                  1187
testLoadCustomersWithExceptionAndCheckNoPermission             5
testLoadCustomersWithReturnClassNoPermission                   4
testLoadCustomersWithListAsParameterNoPermission               5

Not surprisingly, the approach where a dedicated exception is thrown is much slower than the other approaches. The magnitude of this impact is much higher than one might expect. But from the table above we already know the solution for this case: just introduce another method that can be used to check for the permission ahead of time, in case you expect a lot of permission-denied use cases. The huge difference in runtime between the with- and without-permission use cases can be explained by the fact that I returned an ArrayList with one Customer object when the caller was in possession of the permission; hence the loadCustomer() calls were a bit more expensive than when the user did not possess this permission. Conclusion When performance is a critical factor, you also have to consider it when designing a new API.
As we have seen from the measurements above, this may lead to solutions that violate common principles of API design like “easy to use” and “hard to misuse”. Reference: API design and performance from our JCG partner Martin Mois at the Martin’s Developer World blog....

JPA 2.1 Type Converter – The better way to persist enums

Persisting enums with JPA 2.0 is possible, but there is no nice way to do it. Using the @Enumerated annotation, you can use EnumType.ORDINAL or EnumType.STRING to map the enum value to its database representation. But both options have some drawbacks that we will discuss in the first part of this article. In the second part, I will show you how to avoid these drawbacks by using a JPA 2.1 Type Converter. Persisting enums with JPA 2.0 EnumType.ORDINAL uses the return value of Enum.ordinal() to persist the enum. So the first value of the enum will be mapped to 0, the second to 1 and so on. While this looks compact and easy to use at first, it causes problems when the enum is changed. Removing enum values or adding a new value somewhere in between will change the mapping of all following values, e.g.: before: Vehicle: CAR -> 0 TRAIN -> 1 PLANE -> 2 after: Vehicle: CAR -> 0 BUS -> 1 TRAIN -> 2 PLANE -> 3 Adding BUS at the second position would require a database update to fix the enum mapping. EnumType.STRING looks like a better option. It uses the String representation of the enum to persist it in the database. So adding or removing values will not change the mapping. But this representation can be quite verbose, and renaming an enum value will break the mapping. before:  Vehicle: CAR -> CAR TRAIN -> TRAIN PLANE -> PLANE after: Vehicle: CAR -> CAR BUS -> BUS TRAIN -> TRAIN PLANE -> PLANE Using JPA 2.1 Type Converter JPA 2.1 Type Converters provide a third and, in my opinion, the best option. A Type Converter allows us to implement methods to convert the value of an entity attribute to its database representation and back. I will not get into too much detail on how to implement a Type Converter because I already did this in one of my former articles. By implementing our own mapping, we can choose a compact database representation and make sure that changing the enum in any way will not break the existing mapping.
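The ordinal shift described above is easy to reproduce in plain Java. A minimal sketch with two versions of the enum (BeforeVehicle and AfterVehicle are illustrative names standing for the enum before and after the change):

```java
public class OrdinalShiftDemo {

    // The enum as it looked when the data was persisted with EnumType.ORDINAL.
    enum BeforeVehicle { CAR, TRAIN, PLANE }

    // After inserting BUS, the ordinal of every following constant changes.
    enum AfterVehicle { CAR, BUS, TRAIN, PLANE }

    public static void main(String[] args) {
        // TRAIN was stored in the database as ordinal 1 ...
        System.out.println(BeforeVehicle.TRAIN.ordinal()); // 1
        // ... but with the new enum, ordinal 1 now resolves to BUS,
        // silently corrupting every previously stored TRAIN row.
        System.out.println(AfterVehicle.values()[1]);      // BUS
        System.out.println(AfterVehicle.TRAIN.ordinal());  // 2
    }
}
```

This is exactly why the database update mentioned above becomes necessary: the stored ordinals no longer mean what they meant at write time.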
The following example shows how to implement a type converter for the Vehicle enum: @Converter(autoApply = true) public class VehicleConverter implements AttributeConverter<Vehicle, String> {@Override public String convertToDatabaseColumn(Vehicle vehicle) { switch (vehicle) { case BUS: return "B"; case CAR: return "C"; case PLANE: return "P"; case TRAIN: return "T"; default: throw new IllegalArgumentException("Unknown value: " + vehicle); } }@Override public Vehicle convertToEntityAttribute(String dbData) { switch (dbData) { case "B": return Vehicle.BUS; case "C": return Vehicle.CAR; case "P": return Vehicle.PLANE; case "T": return Vehicle.TRAIN; default: throw new IllegalArgumentException("Unknown value: " + dbData); } }} The VehicleConverter maps the enum value to a one-character String. By declaring it with @Converter(autoApply = true), we tell the JPA provider to use this Type Converter to map all Vehicle enums. So we do not need to specify the converter at each entity attribute of type Vehicle. But there is one thing we need to take care of, and if you have read my former article about JPA Type Converters you might have wondered already: Type Converters cannot be applied to attributes annotated with @Enumerated. So we have to make sure that there is no @Enumerated annotation at our entity attributes of type Vehicle. Conclusion We implemented a simple Type Converter that uses our own rules to convert the Vehicle enum to its database representation. So we can make sure that changing the values of the Vehicle enum will not break the existing/remaining mappings. If you want to try it on your own, you can find the source code on github: https://github.com/somethoughtsonjava/JPA2.1-EnumConverter Reference: JPA 2.1 Type Converter – The better way to persist enums from our JCG partner Thorben Janssen at the Some thoughts on Java (EE) blog....
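One nice property of this approach: because the converter body is plain Java, its mapping logic can be verified without booting a JPA provider. A minimal round-trip sketch (the Vehicle enum and the two switch blocks mirror the converter above; the JPA interface and annotations are deliberately omitted so it runs standalone):

```java
public class VehicleMappingCheck {

    enum Vehicle { BUS, CAR, PLANE, TRAIN }

    // Same mapping logic as convertToDatabaseColumn in the converter above.
    static String toDb(Vehicle vehicle) {
        switch (vehicle) {
            case BUS: return "B";
            case CAR: return "C";
            case PLANE: return "P";
            case TRAIN: return "T";
            default: throw new IllegalArgumentException("Unknown value: " + vehicle);
        }
    }

    // Same mapping logic as convertToEntityAttribute in the converter above.
    static Vehicle fromDb(String dbData) {
        switch (dbData) {
            case "B": return Vehicle.BUS;
            case "C": return Vehicle.CAR;
            case "P": return Vehicle.PLANE;
            case "T": return Vehicle.TRAIN;
            default: throw new IllegalArgumentException("Unknown value: " + dbData);
        }
    }

    public static void main(String[] args) {
        // Every value must survive a round trip through its database representation.
        for (Vehicle v : Vehicle.values()) {
            if (fromDb(toDb(v)) != v) {
                throw new AssertionError("Round trip failed for " + v);
            }
        }
        System.out.println("round trip ok");
    }
}
```

A round-trip check like this is a cheap safety net when someone later adds a value to the enum and forgets to extend both switch statements.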

5 Reasons you should try JCG Academy

It has been a while now since we quietly launched JCG Academy. It is a subscription-based site geared toward serious developers who wish to sharpen their technical skills. It includes numerous courses on the latest technologies and it gets updated with new content on a weekly basis. I urge you to have a look at it and try it out; you will definitely be compensated for your time and money. Having said that, here are 5 reasons why you should try it out: 1) Kick-ass tutorials on cutting edge technologies Staying up to date in the ever changing technology world is undoubtedly quite challenging. More and more new technologies pop up every day and the task of catching up with all of them can be daunting. JCG Academy will help you keep up with all the new cool stuff so that you are not left behind. 2) It will actually save you money How is that possible, you might ask. Well, consider this: “Time is money”. This is a fact of life. If you manage to save even 1 hour per month by reading our high quality tutorials and not some random, outdated material on the internet, you will have already recouped your investment. That is of course if you value your time at more than $9/hour (which I strongly hope you do). Stop losing precious time, get on board. 3) Premium quality guaranteed by the Java Code Geeks We are the Java Code Geeks. We have delivered high quality stuff for quite a while now. You know us, you trust us. You can be sure that we will be there for you. If at any point you face any issue with our platform, our team will immediately respond and resolve it. 4) 100% No B.S. money back guarantee We are so confident that you will gain enormous value from our courses that we offer a 100%, no questions asked, money back guarantee. If at any point you feel that our value proposition is not for you, just shoot us an email and we will refund your paid amount without further questions. Enough said.
5) It will give you a competitive advantage Whether you are looking to advance your career in the corporate world or create the next million-dollar startup, you will need some serious technical education. Let’s face it, having knowledge of the latest technologies will give you a competitive advantage in the marketplace. JCG Academy will ensure that you get ahead of the competition by providing access to top-notch material. So, these are only some of the reasons why you should try out JCG Academy. Get on board, enjoy the ride and save some money at the same time. We are waiting for you. Happy reading! ...

Why Data Strategy Matters

Big Data has an alluring promise: crunch all of your data and discover important things you didn't know before. You will find out all sorts of things about your customers: what they're buying, what they're not buying, what they like, what they hate. All sorts of things that sound very strategic and important. It's all very sexy and appealing.

Mo' Data, Mo' Problems

But Big Data has a Big Problem: it is most often directionless. It's not directed at solving a specific problem. The problem with Big Data, in a nutshell, is that it is a solution looking for a problem. I could tell you stories of large companies I've worked with who have their own dedicated "Insights Team" whose only job is to hunt down these nuggets of strategic information. They routinely find entire books' worth of information that is probably relevant to somebody, but they don't know who. And so they send out spreadsheets that nobody reads and then grow progressively more frantic that their hard work goes unnoticed and unappreciated. I've realized that their audience doesn't respond to these insights for several reasons:

  • They can reveal incompetence or, even worse, laziness
  • They often create more work for busy people who have to actually act on them
  • Sometimes they embarrass people because they reveal information that is obvious... but only in retrospect

You can probably guess that this goes over like a lead balloon. This is what some people (ahem, Gartner) call the "trough of disillusionment". I'll admit I was completely smitten by the Big Data promise, and I jumped in headfirst. But I ran smack into the problems I just described, and it's happening again, all over the world, every day. Businesses are becoming disillusioned with Big Data, and with good reason. They've been promised something earth-shattering and have received mountains of information that they don't know what to do with. So how do we move on to realize the promise?
There are a few problems that I've personally witnessed that need to be addressed:

  • Discovery efforts need to be targeted at business directives
  • Insights need to be routed to a subject matter expert to be vetted
  • Someone needs to translate vetted insights into the appropriate actions
  • Actions need to be implemented
  • Results need to be tracked

Right now the Big Data solutions we have aren't really close to this. And the audience that's interested in Big Data doesn't really understand that this is what they're signing on for. It's a lot of work, and it creates a lot of work. People are busy, and most don't have an appetite for yet more work. I believe what we're seeing is a schism between reactive and proactive analytics. Reactive currently rules the day, in that people expect answers to questions they already know they should be asking. Proactive is the ultimate promise of Big Data, but the tools, and the audience, simply aren't mature enough yet. You NEED reactive analytics to tell you WHY things are happening. You would BENEFIT from proactive analytics to help you decide where you should be spending your attention.

Enter: Data Strategy

I believe what is needed is a cohesive data strategy for organizations. And I think you'll start to hear a lot more about this, because the current haphazard approach simply isn't working. A viable Data Strategy should start from a menu of business topics that you KNOW your audience is interested in. Conversion rate optimization, marketing opportunities (to specific demographics), what creates customer churn: it's going to be different for everyone. I've put together a comprehensive list for the book I'm writing, and I'm up to 50 pages just describing the most common topics, so there are quite a few. Once you've created a focused list of topics that the business needs to pay attention to, the first step is to take care of the reactive bits. This is where legacy technology like dashboards and alerts comes in.
They're your first line of defense against the unknown creeping into your day-to-day operations and wreaking havoc: they tell you when something has changed. Once the reactive intelligence systems have been established, you can start getting into the more "out there" stuff. I liken this to peeling back the layers of an onion:

  • Level 1 is reactive; it answers questions you've explicitly asked ("What is my churn rate for the week?")
  • Level 2 starts to identify trends and patterns ("New customers are likely to churn within 1 month")
  • Level 3 begins to predict things that are going to happen, and why ("New customers who buy your generic brand products are likely to churn within 1 month")
  • Level 4 starts to tell you what you should do in order to address specific situations from Level 3 ("You should offer new customers who buy your generic brand a 10% off coupon to retain them long-term")
  • Level 5 performs the actions recommended in Level 4 automatically

The problem lies in the fact that Big Data often gets stuck at Level 2. This is the emailed-spreadsheet problem: you're just shuffling more work into users' inboxes, and they're not likely to thank you for that. Data Strategy is about moving from Level 1 to Level 5. It's your blueprint and roadmap to identify the questions that need to be answered and the resources needed to answer them, and ultimately to act on them automatically. There are all sorts of topics that need to be addressed by your data strategy, and that need attention and careful consideration. Topics like:

  • Should we let the patterns appear organically, or should we use set categories and labels that our organization is already familiar with?
  • How much growth should we account for?
  • Are there additional data sets available that we can use to answer other meaningful questions at the same time?

The list goes on for quite a while.
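To make the Level 1 idea concrete, here is a minimal sketch of a reactive check in Java: it computes the weekly churn rate the dashboard question asks about and raises a simple alert when a threshold is crossed. The customer counts and the 4% threshold are hypothetical values of my own, not figures from the article.

```java
/** Level 1 reactive analytics sketch: answer an explicitly asked question. */
public class ChurnCheck {

    /** Weekly churn rate: customers lost divided by customers at start of week. */
    static double churnRate(int customersAtStart, int customersLost) {
        if (customersAtStart == 0) {
            return 0.0; // avoid division by zero for an empty cohort
        }
        return (double) customersLost / customersAtStart;
    }

    public static void main(String[] args) {
        // Hypothetical weekly snapshot.
        double rate = churnRate(1200, 54);
        System.out.printf("Weekly churn rate: %.1f%%%n", rate * 100);

        // The "alert" half of dashboards-and-alerts: flag a change worth attention.
        if (rate > 0.04) {
            System.out.println("ALERT: churn above 4% threshold");
        }
    }
}
```

Anything beyond this, such as spotting that new customers churn within a month (Level 2) or recommending a retention coupon (Level 4), requires pattern detection and domain knowledge that a threshold check cannot provide, which is exactly why the higher levels are harder.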
But the point is, if you go into a data analytics project with your eyes open and on the prize, you're much more likely to be successful than if you go in looking for a mystery. By the way, I'm currently writing a book about data strategy that I'm pretty excited about. It's going to be free for the first week after it's published; if you'd like to be notified when a free copy is available, just drop your email in the sidebar to the right and I'll be sure to let you know when it comes out. If you found this article interesting, I think you'll thoroughly enjoy the book.

Reference: Why Data Strategy Matters from our JCG partner Jason Kolb at the Jason Kolb blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.
