Lightweight Integration with Java EE and Camel

Enterprise Java has different flavors and perspectives: starting at the plain platform technology, well known as Java EE, over to different frameworks and integration aspects, and finally use-cases which involve data-centric user interfaces or specific visualizations. The most prominent problem which isn’t solved by Java EE itself is “integration”. There are plenty of products out there from well-known vendors which solve all kinds of integration problems and promise to deliver complete solutions. As a developer, all you need from time to time is a solution that just works. This is the ultimate “Getting Started Resource” for Java EE developers when it comes to system integration. A Bit Of Integration Theory Integration challenges are nothing new. Ever since there have been different kinds of systems and a need to combine their data, this has been a central topic. Gregor Hohpe and Bobby Woolf started to collect a set of basic patterns they used to solve their customers’ integration problems. These Enterprise Integration Patterns (EIPs) can be considered the bible of integration. The book tries to find a common vocabulary and body of knowledge around asynchronous messaging architectures by defining 65 integration patterns. Forrester calls those “The core language of EAI”. What Is Apache Camel? Apache Camel offers you the interfaces for the EIPs, the base objects, commonly needed implementations, debugging tools, a configuration system, and many other helpers which will save you a ton of time when you want to implement your solution to follow the EIPs. It’s a complete production-ready framework. But it does not stop at those initially defined 65 patterns. It extends those with over 150 ready-to-use components which solve different problems around endpoint, system, or technology integration. At a high level, Camel consists of a CamelContext which contains a collection of Component instances. A Component is essentially a factory of Endpoint instances.
You can explicitly configure Component instances in Java code or an IoC container like Spring, Guice or CDI, or they can be auto-discovered using URIs. Why Should A Java EE Developer Care? Because enterprise projects require us to. Dealing with all sorts of system integrations has always been a challenging topic. You can either choose the complex road by using messaging systems, wiring them into your application and implementing everything yourself, or go the heavyweight road by using different products. I have always been a fan of more pragmatic solutions. And this is what Camel actually is: comparably lightweight, easy to bootstrap, and coming with a huge amount of pre-built integration components which let the developer focus on solving the business requirement behind it, without having to learn new APIs or tooling. Camel comes with either a Java-based fluent API, Spring or Blueprint XML configuration files, and even a Scala DSL. So no matter which base you jump off from, you’ll always find something that you already know. How To Get Started? Did I get you interested? Want to give it a try? That’s easy, too. You have different options according to the frameworks and platform you use. Looking back at the post title, this is going to focus on Java EE. So, the first thing you can do is just bootstrap Camel yourself. All you need is the camel-core dependency and the camel-cdi dependency. Setting up a plain Java EE 7 Maven project and adding those two is more than sufficient. <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-core</artifactId> <version>${camel.version}</version> </dependency> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-cdi</artifactId> <version>${camel.version}</version> </dependency> The next thing you need to do is find a place to inject your first CamelContext. @Inject CdiCamelContext context; After everything is injected, you can start adding routes to it.
A more complete example can be found in my CamelEE7 project on GitHub. Just fork it and go ahead. This one will work on any Java EE application server. If you are on WildFly already, you can also take full advantage of the WildFly-Camel subsystem. The WildFly Camel Subsystem The strategy of wildfly-camel is that a user can “just use” the Camel core/component APIs in deployments that WildFly supports already. In other words, Camel should “just work” in standard Java EE deployments. The binaries are provided by the platform, and the deployment should not need to worry about module/wiring details. Defining and deploying Camel contexts can be done in different ways. You can either define a context directly in your standalone-camel.xml server configuration, or deploy it as part of your web app, either as a single XML file with a predefined -camel-context.xml file suffix or as part of another WildFly-supported deployment as a META-INF/jboss-camel-context.xml file. The WildFly Camel test suite uses the WildFly Arquillian managed container. This can connect to an already running WildFly instance or alternatively start up a standalone server instance when needed. A number of test enrichers have been implemented that allow you to have WildFly Camel specific types injected into your Arquillian test cases; you can inject a CamelContextFactory or a CamelContextRegistry as an @ArquillianResource. If you want to get started with that, you can take a look at my more detailed blog post. Finding Examples If you are excited and got everything up and running, it is time to dig into some examples. The first place to look is the example directory in the distribution. There is an example for everything that you might need. One of the most important use-cases is the tight integration with ActiveMQ. Assume that you have a bunch of JMS messages that need to be converted into files stored in a filesystem: this is a perfect Camel job.
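Before a route can refer to a JMS-backed component, the ActiveMQ support has to be on the classpath. A hedged sketch of the extra Maven dependency (these are the coordinates commonly used for the ActiveMQ Camel component; verify them against your Camel and ActiveMQ versions):

```xml
<!-- Assumed coordinates for the ActiveMQ Camel component -->
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-camel</artifactId>
    <version>${activemq.version}</version>
</dependency>
```

The component is then typically registered against the CamelContext under the name used in the endpoint URI (for example via context.addComponent("test-jms", ...)), so that a URI prefix like test-jms: resolves to it.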
In addition to what you’ve seen above, you need to configure the ActiveMQ component, which allows messages to be sent to a JMS queue or topic, or consumed from a JMS queue or topic, using Apache ActiveMQ. The following code shows what it takes to route JMS messages from the test.queue queue to the file component, which consumes them and stores them to disk. context.addRoutes(new RouteBuilder() { public void configure() { from("test-jms:queue:test.queue").to("file://test"); } }); Imagine doing this yourself. Want more sophisticated examples? With Twitter integration? Or other technologies? There are plenty of examples out there to pick from, and that is probably one of the most exciting aspects of Camel. It is lightweight, stable, and has been around for years. Make sure to also follow the mailing lists and the discussion forums. Reference: Lightweight Integration with Java EE and Camel from our JCG partner Markus Eisele at the Java Advent Calendar blog.

Why Technical Resumes Need a Profile (because we’re dumb)

There is significant variation in résumé format across candidates. Name and contact information are always on top, but on any given day a recruiter might see the next section as Education, Skills, Experience, or even (gasp) an Objective. Career length influences which section comes first: entry-level candidates usually choose Education, while veteran candidates gravitate towards experience and accomplishments. Unfortunately, going from a glance at contact information to dissecting intimate project details doesn’t make for a smooth transition. It’s jarring. What helps is a section that serves as a buffer to introduce résumé content, and that also subliminally instructs the reader on what content to pay attention to. And remember the résumé’s audience. Most recruiters and HR personnel don’t have the background of the candidates they assess, so reviewers benefit from any guidance (even subliminal) provided to understand content. Since few grow up aspiring to the glamorous world of tech recruitment, the industry is typically stocked with C students. Since a résumé “states your case” to employers, let’s look at lawyers… The Purpose of Opening Statements When trial attorneys present cases to juries, they don’t immediately start questioning witnesses. They start with opening statements. Why? The opening statement informs the jury as to what evidence they will hear during the trial, and why that evidence is significant to the case. The statement is a roadmap of what the attorney wants jurors to listen for (or ignore) during the trial and which elements are critical to the case. It provides background information and announces what is forthcoming. Before trial, jurors know nothing about the case. Without opening statements, jurors may get lost in trivial details while missing the important elements the attorney wants them to hear. Attorneys can’t trust a jury’s ability to make those decisions independently, so opening statements influence the thought process.
It is paramount that attorneys present their case in a manner consistent with their opening statements. Diversions from that roadmap will cause the jury to distrust the attorney and detract from the attorney’s credibility. Back to résumés… Just as jurors know nothing before trial, recruiters know nothing about applicants until they open the résumé. Job seekers today are less likely to provide a cover letter (with recruiters less likely to read them), and résumés are often given a brief initial screening by untrained eyes. This creates a problem for qualified applicants who may be wrongfully passed over. What is the optimal strategy for expressing experience and ensuring that even novice reviewers will properly identify qualified candidates? An opening statement. The Purpose of the Profile Profiles are the opening statement in a case for an interview, with the résumé content that follows serving as the evidence. The Profile introduces experience to be detailed later in the document, which tacitly baits reviewers into seeking out evidence to specifically support (or refute) those claims. A résumé without a Profile makes no claims to be proven or disproven, and doesn’t give the reader any additional instruction on what to seek. When Profile claims are corroborated by details of experience, the result is a “buy” from the reader. The Profile was a promise of sorts, later fulfilled by the supporting evidence. When a Profile doesn’t reflect experience, it exposes the candidate as a potential fraud and detracts from any relevant experience the candidate does possess. Qualified candidates with overreaching Profiles put themselves in a precarious situation. Even well-written Profiles are a negative mark on applicants when the claims are inaccurate or unsupported. Just as attorneys must lay out cases in accordance with their opening statements, experience must match Profiles. Typical Profiles Are Noise, Not Signal The overwhelming majority of Profile statements are virtually identical.
Words and phrases like hard-working, intelligent, dedicated, career-minded, innovative, etc. are, in this context, mere self-assessments that are impossible to qualify. It’s fluff, and it contributes résumé noise that distracts the reader’s attention from signal. Writing Profiles Useful Profiles clearly say what you have done and can do, and are ideally quantified for the reader to prevent any misunderstanding. If a temp at a startup is tasked with finding résumés of software engineers with Python and Django experience, he is unlikely to ignore résumés with Profiles stating “Software engineer with six years of experience building solutions with Python and Django“. For candidates attempting to transition into new roles that might be less obvious to a reader, a Profile must double as a disguised Objective. These Profiles will first state the candidate’s current experience and end with what type of work the applicant seeks. “Systems Administrator with three years of Python and Bash scripting experience seeks transition into dedicated junior DevOps role” provides background as well as future intent, but the last seven words are needed to get the average recruiter’s attention. Just as Objectives are altered to match requirements, consider tweaking a Profile to highlight required skills. A candidate who identifies herself as a Mobile Developer in the Profile might not get selected for interview for a Web Developer position, even when the résumé demonstrates all the necessary qualifications. How a candidate self-identifies suggests their career interests, unless stated otherwise (see the paragraph above). Given the importance of Profile/experience agreement, it is suggested that the Profile be written last. Lawyers can’t write an opening statement before knowing their case, and candidates should have all of their corroborating evidence in place before attempting to summarize it for clear interpretation.
Conclusion Assume your résumé reviewer knows little about what you do, and that they need to be explicitly told what you do without having to interpret it through your listed experience. Identify yourself in the Profile as closely to the job description (and perhaps even title) as possible. Make sure that all claims made in the Profile are supported by evidence somewhere else in the résumé, ideally early on. Reference: Why Technical Resumes Need a Profile (because we’re dumb) from our JCG partner Dave Fecak at the Job Tips For Geeks blog.

Use Cases for Elasticsearch: Analytics

In the last post in this series we saw how we can use Logstash, Elasticsearch and Kibana for doing logfile analytics. This week we will look at the general capabilities for doing analytics on any data using Elasticsearch and Kibana. Use Case We have already seen that Elasticsearch can be used to store large amounts of data. Instead of putting data into a data warehouse, Elasticsearch can be used to do analytics and reporting. Another use case is social media data: companies can see what is happening with their brand if they have the ability to easily search it. Data can be ingested from multiple sources, e.g. Twitter and Facebook, and combined in one system. Visualizing data in tools like Kibana can help with exploring large data sets. Finally, mechanisms like Elasticsearch’s aggregations can help with finding new ways to look at the data. Aggregations Aggregations provide what the now-deprecated facets have been providing, and a lot more. They can combine and count values from different documents and therefore show you what is contained in your data. For example, if you have tweets indexed in Elasticsearch you can use the terms aggregation to find the most common hashtags. For details on indexing tweets in Elasticsearch see this post on the Twitter River and this post on the Twitter input for Logstash. curl -XGET "http://localhost:9200/devoxx/tweet/_search" -d' { "aggs" : { "hashtags" : { "terms" : { "field" : "hashtag.text" } } } }' Aggregations are requested using the aggs keyword; hashtags is a name I have chosen to identify the result, and the terms aggregation counts the different terms for the given field (disclaimer: for a sharded setup the terms aggregation might not be totally exact). This request might result in something like this: "aggregations": { "hashtags": { "buckets": [ { "key": "dartlang", "doc_count": 229 }, { "key": "java", "doc_count": 216 }, [...] The result is available under the name we have chosen.
Aggregations put the counts into buckets that consist of a value and a count. This is very similar to how faceting works; only the names are different. For this example we can see that there are 229 documents for the hashtag dartlang and 216 containing the hashtag java. This could also be done with facets alone, but there is more: aggregations can even be combined. You can nest another aggregation in the first one that, for every bucket, will give you more buckets for another criterion. curl -XGET "http://localhost:9200/devoxx/tweet/_search" -d' { "aggs" : { "hashtags" : { "terms" : { "field" : "hashtag.text" }, "aggs" : { "hashtagusers" : { "terms" : { "field" : "user.screen_name" } } } } } }' We still request the terms aggregation for the hashtag, but now we have another aggregation embedded: a terms aggregation that processes the user name. This will then result in something like this: "key": "scala", "doc_count": 130, "hashtagusers": { "buckets": [ { "key": "jaceklaskowski", "doc_count": 74 }, { "key": "ManningBooks", "doc_count": 3 }, [...] We can now see the users that have used a certain hashtag. In this case, one user used one hashtag a lot. This is information that is not available that easily with queries and facets alone. Besides the terms aggregation we have seen here, there are also lots of other interesting aggregations available, and more are added with every release. You can choose between bucket aggregations (like the terms aggregation) and metrics aggregations, which calculate values from the buckets, e.g. averages or other statistical values. Visualizing the Data Besides the JSON output we have seen above, the data can also be used for visualizations. This is something that can then be prepared even for a non-technical audience. Kibana is one of the options that is often used for logfile data but can be used for data of all kinds, e.g.
the Twitter data we have already seen above. There are two bar charts that display the term frequencies for the mentions and the hashtags. We can easily see which values are dominant. Also, the date histogram to the right shows at what times most tweets were sent. All in all, these visualizations can provide a lot of value when it comes to trends that are only seen when combining the data. The image shows Kibana 3, which still relies on the facet feature; Kibana 4 will instead provide access to the aggregations. Conclusion This post ends the series on use cases for Elasticsearch. I hope you enjoyed reading it and maybe you learned something new along the way. I can’t spend that much time blogging anymore, but new posts will be coming. Keep an eye on this blog. Reference: Use Cases for Elasticsearch: Analytics from our JCG partner Florian Hopf at the Dev Time blog.
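For intuition about the nested bucket structure discussed in this article, the same shape (an outer terms bucket per hashtag, a nested terms bucket per user, each with a document count) can be sketched over a handful of hypothetical tweets with plain Java streams. This illustrates the bucketing semantics only; it is not the Elasticsearch API:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class BucketSketch {

    // Outer "terms" bucket on the hashtag, nested "terms" bucket on the
    // user name, each bucket carrying a document count.
    public static Map<String, Map<String, Long>> bucket(List<String[]> tweets) {
        return tweets.stream()
                .collect(Collectors.groupingBy(t -> t[0],
                         Collectors.groupingBy(t -> t[1], Collectors.counting())));
    }

    public static void main(String[] args) {
        // {hashtag, user} pairs standing in for indexed tweets (made-up data)
        List<String[]> tweets = Arrays.asList(
                new String[] {"scala", "jaceklaskowski"},
                new String[] {"scala", "jaceklaskowski"},
                new String[] {"scala", "ManningBooks"},
                new String[] {"java", "dartfan"});

        Map<String, Map<String, Long>> buckets = bucket(tweets);
        System.out.println(buckets.get("scala").get("jaceklaskowski")); // 2
        System.out.println(buckets.get("java").get("dartfan"));         // 1
    }
}
```

The nested groupingBy mirrors how a sub-aggregation runs once per parent bucket, which is why each hashtag bucket carries its own set of user buckets.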

RabbitMQ – Processing messages serially using Spring integration Java DSL

If you ever have a need to process messages serially with RabbitMQ with a cluster of listeners processing the messages, the best way that I have seen is to use an “exclusive consumer” flag on a listener, with 1 thread on each listener processing the messages. The exclusive consumer flag ensures that only 1 consumer can read messages from the specific queue, and 1 thread on that consumer ensures that the messages are processed serially. There is a catch, however; I will go over it later. Let me demonstrate this behavior with a Spring Boot and Spring Integration based RabbitMQ message consumer. First, this is the configuration for setting up a queue using Spring Java configuration. Note that since this is a Spring Boot application, it automatically creates a RabbitMQ connection factory when the spring-amqp library is added to the list of dependencies: @Configuration public class RabbitConfig { @Autowired private ConnectionFactory rabbitConnectionFactory; @Bean public Queue sampleQueue() { return new Queue("sample.queue", true, false, false); } } Given this sample queue, a listener which gets the messages from this queue and processes them looks like this; the flow is written using the excellent Spring Integration Java DSL library: @Configuration public class RabbitInboundFlow { private static final Logger logger = LoggerFactory.getLogger(RabbitInboundFlow.class); @Autowired private RabbitConfig rabbitConfig; @Autowired private ConnectionFactory connectionFactory; @Bean public SimpleMessageListenerContainer simpleMessageListenerContainer() { SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer(); listenerContainer.setConnectionFactory(this.connectionFactory); listenerContainer.setQueues(this.rabbitConfig.sampleQueue()); listenerContainer.setConcurrentConsumers(1); listenerContainer.setExclusive(true); return listenerContainer; } @Bean public IntegrationFlow inboundFlow() { return
IntegrationFlows.from(Amqp.inboundAdapter(simpleMessageListenerContainer())) .transform(Transformers.objectToString()) .handle((m) -> { logger.info("Processed {}", m.getPayload()); }) .get(); } } The flow is very concisely expressed in the inboundFlow method: a message payload from RabbitMQ is transformed from a byte array to a String and finally processed by simply logging the message to the logs. The important part of the flow is the listener configuration; note the flag which sets the consumer to be an exclusive consumer, and that within this consumer the number of processing threads is set to 1. Given this, even if multiple instances of the application are started up, only 1 of the listeners will be able to connect and process messages. Now for the catch: consider a case where the processing of a message takes a while to complete and rolls back during processing. If the instance of the application handling the message were to be stopped in the middle of processing such a message, then a different instance will start handling the messages in the queue. When the stopped instance rolls back the message, the rolled-back message is then delivered to the new exclusive consumer, and thus a message is received out of order. If you are interested in exploring this further, here is a github project to play with this feature. Reference: RabbitMQ – Processing messages serially using Spring integration Java DSL from our JCG partner Biju Kunjummen at the all and sundry blog.

Updates on CDI 2.0

CDI 2.0 is the next version of Contexts and Dependency Injection for the Java EE Platform and a candidate for inclusion in Java EE 8. It has been worked on since September 2014 and is moving pretty rapidly! Major goals for CDI 2.0:

1. Alignment with Java SE 8 (of course!)

2. Support for Java SE – standardizing a dependency injection API for Java SE. Individual CDI implementations (Weld etc.) do have support for Java SE, but one needs to resort to vendor-specific ways in order to work with these. This would hopefully be resolved, and we will have a standard API for working with CDI on both Java SE and EE!

3. CDI modularity – splitting up CDI into easily manageable modules to make things easier both from a maintenance as well as an adoption/implementation perspective.

4. Enhanced events – one of the major enhancements is the introduction of asynchronous events, which were not there up until now (CDI 1.2).

5. Other features – AOP (interceptor and decorator) and SPI related enhancements.

It is still very early days and nothing is set in stone as of yet. Things are evolving and will continue to do so. All the latest updates can be accessed on the official CDI spec page. Open and structured working style I have to say that from a Java EE observer standpoint, I am particularly impressed by the way the CDI spec team is going about its work – in a very structured yet open fashion. All the spec related work has been split up into high-level topics (mentioned above), and there is a workshop corresponding to each one of them. Each workshop (or work item) has a draft document which describes the related ideas, proposals and details. The best part is that it’s out there for the community to read, respond and collaborate! More details about the work mantra of the CDI spec team are available here, and the latest details of the individual work streams are available on the CDI spec home page (scroll down to the bottom of the page).
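To make the asynchronous events goal concrete, here is a rough sketch of how the proposed API shapes up. The event type and observer names are hypothetical, and since the spec was still evolving at the time of writing, the exact signatures may change:

```java
// Firing side: fireAsync() is proposed to return a CompletionStage,
// so the caller is not blocked while observers run.
@Inject
Event<PaymentReceived> paymentEvent;   // PaymentReceived is a made-up event type

public void checkout() {
    paymentEvent.fireAsync(new PaymentReceived())
                .thenAccept(evt -> System.out.println("async observers notified"));
}

// Observing side: an asynchronous observer method, notified on a
// separate thread (annotation name per the CDI 2.0 proposal).
public void onPayment(@ObservesAsync PaymentReceived evt) {
    // handle the event off the caller's thread
}
```

The key difference from the classic @Observes model is the CompletionStage return value, which lets the firing code chain follow-up work instead of waiting synchronously for all observers.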
Note: Some discussions which are specific to the asynchronous events capability can be accessed here. Cutting edge stuff – JBoss Weld 3 Alpha 3 release is here already As many of you might already be aware, JBoss Weld is the reference implementation of the CDI spec. The great news is that Weld 3 Alpha 3 is already out there and includes some of the features proposed in CDI 2.0! Some of the CDI 2.0 related features supported in Weld 3 are:

1. Support for asynchronous events – now you can use fireAsync(yourPayloadObject) and the call returns immediately.

2. Leveraging Java SE 8 features – you can now use repeatable annotations on qualifiers and interceptor bindings.

3. Prioritization of observer methods using @Priority.

For further details, check out this excellent write-up. You can take Weld 3 for a spin on WildFly 8.2. Just follow these instructions posted by Arun Gupta on his blog. Have fun living on the bleeding edge! Cheers! Reference: Updates on CDI 2.0 from our JCG partner Abhishek Gupta at the Object Oriented.. blog.

Really Too Bad that Java 8 Doesn’t Have Iterable.stream()

This is one of the more interesting recent Stack Overflow questions: Why does Iterable not provide stream() and parallelStream() methods? At first, it might seem intuitive to make it straightforward to convert an Iterable into a Stream, because the two are really more or less the same thing for 90% of all use-cases. Granted, the expert group had a strong focus on making the Stream API parallel capable, but anyone who works with Java every day will notice immediately that Stream is most useful in its sequential form. And an Iterable is just that: a sequential stream with no guarantees with respect to parallelisation. So, it would only be intuitive if we could simply write: iterable.stream(); In fact, subtypes of Iterable do have such methods, e.g. collection.stream(); Brian Goetz himself gave an answer to the above Stack Overflow question. The reasons for this omission are rooted in the fact that some Iterables might prefer to return an IntStream instead of a Stream. This really seems to be a very remote reason for a design decision, but as always, omission today doesn’t mean omission forever. On the other hand, if they had introduced this method today, and it turned out to be a mistake, they couldn’t have removed it again. Well, primitive types in Java are a pain and they did all sorts of bad things to generics in the first place, and now to Stream as well, as we have to write the following in order to turn an Iterable into a Stream: Stream s = StreamSupport.stream(iterable.spliterator(), false); Brian Goetz argues that this is “easy”, but I would disagree. As an API consumer, I experience a lot of friction in productivity because of: Having to remember this otherwise useless StreamSupport type. This method could very well have been put into the Stream interface, because we already have Stream construction methods in there, such as Stream.of(). Having to remember the subtle difference between Iterator and Spliterator in the context of what I believe has nothing to do with parallelisation.
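Both friction points can at least be paid once, by wrapping the conversion in a small utility. A self-contained sketch of the helper the article is asking for (the Streams holder class name is made up):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public class Streams {

    // What Iterable.stream() could have looked like: sequential by default,
    // hiding StreamSupport and the Spliterator detail from the caller.
    public static <T> Stream<T> stream(Iterable<T> iterable) {
        return StreamSupport.stream(iterable.spliterator(), false);
    }

    public static void main(String[] args) {
        // Works for any Iterable, not just Collection subtypes
        Iterable<String> titles = Arrays.asList("Dune", "Neuromancer", "Hyperion");

        List<String> upper = stream(titles)
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(upper); // [DUNE, NEUROMANCER, HYPERION]
    }
}
```

This is exactly the one-line implementation discussed below; the point of the complaint is that every codebase ends up writing this helper for itself.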
It may well be that Spliterators will become popular eventually, though, so this doubt is for the magic 8 ball to address. In fact, I have to repeat the information that there is nothing to be parallelised, via the boolean argument false. Parallelisation really has such a big weight in this new API, even if it will cover only around 5%-10% of all functional collection manipulation operations. While sequential processing was not the main design goal of the JDK 8 APIs, it is really the main benefit for all of us, and the friction around APIs related to sequential processing should be as low as possible. The method above should have just been called: Stream s = Stream.stream(iterable); It could be implemented like this: public static<T> Stream<T> stream(Iterable<T> i) { return StreamSupport.stream(i.spliterator(), false); } Obviously with convenience overloads that allow for the additional specialisations, like parallelisation, or passing a Spliterator. But again, if Iterable had its own stream() default method, an incredible number of APIs would be so much better integrated with Java 8 out of the box, without even supporting Java 8 explicitly! Take jOOQ for instance. jOOQ still supports Java 6, so a direct dependency is not possible. However, jOOQ’s ResultQuery type is an Iterable. This allows you to use such queries directly inline in foreach loops, as if you were writing PL/SQL: PL/SQL FOR book IN ( SELECT * FROM books ORDER BY books.title ) LOOP -- Do things with book END LOOP; Java for (BookRecord book : ctx.selectFrom(BOOKS).orderBy(BOOKS.TITLE) ) { // Do things with book } Now imagine the same thing in Java 8: ctx.selectFrom(BOOKS).orderBy(BOOKS.TITLE) .stream() .map / reduce / findAny, etc... Unfortunately, the above is currently not possible. You could, of course, eagerly fetch all the results into a jOOQ Result, which extends List: ctx.selectFrom(BOOKS).orderBy(BOOKS.TITLE) .fetch() .stream() .map / reduce / findAny, etc...
But it’s one more method to call (every time), and the actual stream semantics is broken, because the fetch is done eagerly. Complaining on a high level This is, of course, complaining on a high level, but it would really be great if a future version of Java, e.g. Java 9, added this missing method to the Iterable API. Again, 99% of all use-cases will want the Stream type to be returned, not the IntStream type. And if they do want that for whatever obscure reason (much more obscure than many evil things from old legacy Java APIs, looking at you, Calendar), then why shouldn’t they just declare an intStream() method? After all, if someone is crazy enough to write Iterable<Integer> when they’re really operating on int primitive types, they’ll probably accept a little workaround. Reference: Really Too Bad that Java 8 Doesn’t Have Iterable.stream() from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

The New Agile – Certifications

Last time we looked at how things came to be: how things converged around a group of software developers in a ski resort. There were actual experiments in the field with reported successes. There was a funnel for communication to spread those ideas. And now there was a joint vision and a name. The agile manifesto would still have remained a nice website with a very modest cult, if not for Ken Schwaber’s business savvy. He decided that he was not just going to teach scrum. There were going to be certificates involved. This is how the pyramid scheme of the CST (Certified Scrum Trainer) and the CSM (Certified Scrum Master) and derivatives started. Many people in the agile world (who do not belong to any of the training organizations, or make money off them) usually look with disdain at those certificates. “How can one become a Scrum Master in 3 days?” we ask. The simple answer is, of course, that no one can become a MASTER of any kind in 3 days. We can say what we want about certificates; it is because of them that we practice agile today in many organizations. Because much like agile inspects and adapts, so did scrum. It adapted to how businesses hired and trained people – through certifications. This is not new – that’s how guilds were created long ago, and scrum is a sort of guild. If Lean started it all (even before having that very cool, marketable name), and scrum led the way, the next evolution was David Anderson’s kanban. It too grew to be a training operation – the Lean-Kanban University. While scrum and kanban are not dealing with the exact same thing, both fall under the same “process improvement” blanket. There’s no escaping the question “should we do scrum or kanban”, although both sides will say they solve a completely different set of problems. Both scrum descendants (after the divorce between Ken Schwaber and Jeff Sutherland) continue on their own; the Scrum Guide (the ultimate body of scrum knowledge) incorporates inputs from both Scrum Inc and Scrum.org.
There’s also the Scrum Alliance, not affiliated with either father, but a place for all the children to come and play. As scrum took over the world, managers outside the development team took notice. If the development team can move more quickly, why not apply the same principles to bigger product development organizations? In fact, kanban was there first, because that’s what it did – kanban processes came from the Toyota Production System, which after all was a PRODUCTION system, not a software facility. Scrum at that point did not want to answer that. But where there’s money to be made, certification will follow, and so we got Scott Ambler’s Disciplined Agile Delivery (DAD) and others; but these are small fries compared to Dean Leffingwell’s 800 pound gorilla – the Scaled Agile Framework (SAFe). We’ll talk about SAFe later, but so far it looks like the answer to enterprise agile: the scaling, the scrum and XP basis, along with a reassuring cool name. It too offers certificates, and it looks like the current winner in the area. Certifications will continue to crown the kings of frameworks, unless there’s a major shift in how organizations hire and train people, which won’t happen anytime soon. Just as organizations today “want scrum” because it’s “best in breed” and everyone is doing it, they will probably want SAFe in the next few years. And where there’s a want, somebody will answer it. Reference: The New Agile – Certifications from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

Creating a REST API with Spring Boot and MongoDB

This year I greeted Christmas in a different fashion: I was a part of the Java Advent Calendar. Let’s boot up for Christmas: Spring Boot is an opinionated framework that simplifies the development of Spring applications. It frees us from the slavery of complex configuration files and helps us to create standalone Spring applications that don’t need an external servlet container. This sounds almost too good to be true, but Spring Boot can really do all this. This blog post demonstrates how easy it is to implement a REST API that provides CRUD operations for todo entries that are saved to a MongoDB database. Let’s start by creating our Maven project. This blog post assumes that you have already installed the MongoDB database. If you haven’t done this, you can follow the instructions given in the blog post titled: Accessing Data with MongoDB.

Creating Our Maven Project

We can create our Maven project by following these steps:

1. Use the spring-boot-starter-parent POM as the parent POM of our Maven project. This ensures that our project inherits sensible default settings from Spring Boot.
2. Add the Spring Boot Maven Plugin to our project. This plugin allows us to package our application into an executable jar file, package it into a war archive, and run the application.
3. Configure the dependencies of our project. We need two dependencies: the spring-boot-starter-web dependency provides the dependencies of a web application, and the spring-data-mongodb dependency provides integration with the MongoDB document database.
4. Enable the Java 8 support of Spring Boot.
5. Configure the main class of our application. 
This class is responsible for configuring and starting our application. The relevant part of our pom.xml file looks as follows:

<properties>
    <!-- Enable Java 8 -->
    <java.version>1.8</java.version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <!-- Configure the main class of our Spring Boot application -->
    <start-class>com.javaadvent.bootrest.TodoAppConfig</start-class>
</properties>

<!-- Inherit defaults from Spring Boot -->
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.1.9.RELEASE</version>
</parent>

<dependencies>
    <!-- Get the dependencies of a web application -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <!-- Spring Data MongoDB -->
    <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-mongodb</artifactId>
    </dependency>
</dependencies>

<build>
    <plugins>
        <!-- Spring Boot Maven Support -->
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>

Additional Reading:
Spring Boot Reference Manual: 9.1.1 Maven installation
Spring Boot Reference Manual: 12.1 Maven
Spring Boot Maven Plugin – Usage

Let’s move on and find out how we can configure our application.

Configuring Our Application

We can configure our Spring Boot application by following these steps:

Create a TodoAppConfig class in the com.javaadvent.bootrest package.
Enable Spring Boot auto-configuration.
Configure the Spring container to scan components found from the child packages of the com.javaadvent.bootrest package. 
Add the main() method to the TodoAppConfig class and implement it by running our application.

The source code of the TodoAppConfig class looks as follows:

package com.javaadvent.bootrest;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class TodoAppConfig {
    public static void main(String[] args) {
        SpringApplication.run(TodoAppConfig.class, args);
    }
}

We have now created the configuration class that configures and runs our Spring Boot application. Because the MongoDB jars are found from the classpath, Spring Boot configures the MongoDB connection by using its default settings.

Additional Reading:
Spring Boot Reference Manual: 13.2 Locating the main application class
Spring Boot Reference Manual: 14. Configuration classes
The Javadoc of the @EnableAutoConfiguration annotation
Spring Boot Reference Manual: 15. Auto-configuration
The Javadoc of the SpringApplication class
Spring Boot Reference Manual: 27.2.1 Connecting to a MongoDB database

Let’s move on and implement our REST API.

Implementing Our REST API

We need to implement a REST API that provides CRUD operations for todo entries. The requirements of our REST API are:

A POST request sent to the url ‘/api/todo’ must create a new todo entry by using the information found from the request body and return the information of the created todo entry.
A DELETE request sent to the url ‘/api/todo/{id}’ must delete the todo entry whose id is found from the url and return the information of the deleted todo entry.
A GET request sent to the url ‘/api/todo’ must return all todo entries that are found from the database.
A GET request sent to the url ‘/api/todo/{id}’ must return the information of the todo entry whose id is found from the url. 
A PUT request sent to the url ‘/api/todo/{id}’ must update the information of an existing todo entry by using the information found from the request body and return the information of the updated todo entry.

We can fulfill these requirements by following these steps:

Create the entity that contains the information of a single todo entry.
Create the repository that is used to save todo entries to the MongoDB database and find todo entries from it.
Create the service layer that is responsible for mapping DTOs into domain objects and vice versa. The purpose of our service layer is to isolate our domain model from the web layer.
Create the controller class that processes HTTP requests and returns the correct response back to the client.

This example is so simple that we could just inject our repository into our controller. However, because this is not a viable strategy when we are implementing real-life applications, we will add a service layer between the web and repository layers. Let’s get started.

Creating the Entity

We need to create the entity class that contains the information of a single todo entry. We can do this by following these steps:

Add the id, description, and title fields to the created entity class.
Configure the id field of the entity by annotating it with the @Id annotation.
Specify the constants (MAX_LENGTH_DESCRIPTION and MAX_LENGTH_TITLE) that specify the maximum length of the description and title fields.
Add a static builder class to the entity class. This class is used to create new Todo objects.
Add an update() method to the entity class. 
This method simply updates the title and description of the entity if valid values are given as method parameters.

The source code of the Todo class looks as follows:

import org.springframework.data.annotation.Id;

import static com.javaadvent.bootrest.util.PreCondition.isTrue;
import static com.javaadvent.bootrest.util.PreCondition.notEmpty;
import static com.javaadvent.bootrest.util.PreCondition.notNull;

final class Todo {

    static final int MAX_LENGTH_DESCRIPTION = 500;
    static final int MAX_LENGTH_TITLE = 100;

    @Id
    private String id;

    private String description;

    private String title;

    public Todo() {}

    private Todo(Builder builder) {
        this.description = builder.description;
        this.title = builder.title;
    }

    static Builder getBuilder() {
        return new Builder();
    }

    //Other getters are omitted

    public void update(String title, String description) {
        checkTitleAndDescription(title, description);

        this.title = title;
        this.description = description;
    }

    /**
     * We don't have to use the builder pattern here because the constructed
     * class has only two String fields. However, I use the builder pattern
     * in this example because it makes the code a bit easier to read. 
     */
    static class Builder {

        private String description;

        private String title;

        private Builder() {}

        Builder description(String description) {
            this.description = description;
            return this;
        }

        Builder title(String title) {
            this.title = title;
            return this;
        }

        Todo build() {
            Todo build = new Todo(this);

            build.checkTitleAndDescription(build.getTitle(), build.getDescription());

            return build;
        }
    }

    private void checkTitleAndDescription(String title, String description) {
        notNull(title, "Title cannot be null");
        notEmpty(title, "Title cannot be empty");
        isTrue(title.length() <= MAX_LENGTH_TITLE,
                "Title cannot be longer than %d characters",
                MAX_LENGTH_TITLE
        );

        if (description != null) {
            isTrue(description.length() <= MAX_LENGTH_DESCRIPTION,
                    "Description cannot be longer than %d characters",
                    MAX_LENGTH_DESCRIPTION
            );
        }
    }
}

Additional Reading:
Item 2: Consider a builder when faced with many constructor parameters

Let’s move on and create the repository that communicates with the MongoDB database.

Creating the Repository

We have to create the repository interface that is used to save Todo objects to the MongoDB database and retrieve Todo objects from it. If we don’t want to use the Java 8 support of Spring Data, we could create our repository by creating an interface that extends the CrudRepository<T, ID> interface. However, because we want to use the Java 8 support, we have to follow these steps:

Create an interface that extends the Repository<T, ID> interface.
Add the following repository methods to the created interface:

The void delete(Todo deleted) method deletes the todo entry that is given as a method parameter.
The List<Todo> findAll() method returns all todo entries that are found from the database.
The Optional<Todo> findOne(String id) method returns the information of a single todo entry. If no todo entry is found, this method returns an empty Optional. 
The Todo save(Todo saved) method saves a new todo entry to the database and returns the saved todo entry.

The source code of the TodoRepository interface looks as follows:

import org.springframework.data.repository.Repository;

import java.util.List;
import java.util.Optional;

interface TodoRepository extends Repository<Todo, String> {

    void delete(Todo deleted);

    List<Todo> findAll();

    Optional<Todo> findOne(String id);

    Todo save(Todo saved);
}

Additional Reading:
The Javadoc of the CrudRepository<T, ID> interface
The Javadoc of the Repository<T, ID> interface
Spring Data MongoDB Reference Manual: 5. Working with Spring Data Repositories
Spring Data MongoDB Reference Manual: 5.3.1 Fine-tuning repository definition

Let’s move on and create the service layer of our example application.

Creating the Service Layer

First, we have to create a service interface that provides CRUD operations for todo entries. The source code of the TodoService interface looks as follows:

import java.util.List;

interface TodoService {

    TodoDTO create(TodoDTO todo);

    TodoDTO delete(String id);

    List<TodoDTO> findAll();

    TodoDTO findById(String id);

    TodoDTO update(TodoDTO todo);
}

The TodoDTO class is a DTO that contains the information of a single todo entry. We will talk more about it when we create the web layer of our example application. Second, we have to implement the TodoService interface. We can do this by following these steps:

Inject our repository into the service class by using constructor injection.
Add a private Todo findTodoById(String id) method to the service class and implement it by either returning the found Todo object or throwing the TodoNotFoundException.
Add a private TodoDTO convertToDTO(Todo model) method to the service class and implement it by converting the Todo object into a TodoDTO object and returning the created object.
Add a private List<TodoDTO> convertToDTOs(List<Todo> models) method and implement it by converting the list of Todo objects into a list of TodoDTO objects and returning the created list.
Implement the TodoDTO create(TodoDTO todo) method. 
This method creates a new Todo object, saves the created object to the MongoDB database, and returns the information of the created todo entry.
Implement the TodoDTO delete(String id) method. This method finds the deleted Todo object, deletes it, and returns the information of the deleted todo entry. If no Todo object is found with the given id, this method throws the TodoNotFoundException.
Implement the List<TodoDTO> findAll() method. This method retrieves all Todo objects from the database, transforms them into a list of TodoDTO objects, and returns the created list.
Implement the TodoDTO findById(String id) method. This method finds the Todo object from the database, converts it into a TodoDTO object, and returns the created TodoDTO object. If no todo entry is found, this method throws the TodoNotFoundException.
Implement the TodoDTO update(TodoDTO todo) method. This method finds the updated Todo object from the database, updates its title and description, saves it, and returns the updated information. 
If the updated Todo object is not found, this method throws the TodoNotFoundException.

The source code of the MongoDBTodoService looks as follows:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.Optional;

import static java.util.stream.Collectors.toList;

@Service
final class MongoDBTodoService implements TodoService {

    private final TodoRepository repository;

    @Autowired
    MongoDBTodoService(TodoRepository repository) {
        this.repository = repository;
    }

    @Override
    public TodoDTO create(TodoDTO todo) {
        Todo persisted = Todo.getBuilder()
                .title(todo.getTitle())
                .description(todo.getDescription())
                .build();
        persisted = repository.save(persisted);
        return convertToDTO(persisted);
    }

    @Override
    public TodoDTO delete(String id) {
        Todo deleted = findTodoById(id);
        repository.delete(deleted);
        return convertToDTO(deleted);
    }

    @Override
    public List<TodoDTO> findAll() {
        List<Todo> todoEntries = repository.findAll();
        return convertToDTOs(todoEntries);
    }

    private List<TodoDTO> convertToDTOs(List<Todo> models) {
        return models.stream()
                .map(this::convertToDTO)
                .collect(toList());
    }

    @Override
    public TodoDTO findById(String id) {
        Todo found = findTodoById(id);
        return convertToDTO(found);
    }

    @Override
    public TodoDTO update(TodoDTO todo) {
        Todo updated = findTodoById(todo.getId());
        updated.update(todo.getTitle(), todo.getDescription());
        updated = repository.save(updated);
        return convertToDTO(updated);
    }

    private Todo findTodoById(String id) {
        Optional<Todo> result = repository.findOne(id);
        return result.orElseThrow(() -> new TodoNotFoundException(id));
    }

    private TodoDTO convertToDTO(Todo model) {
        TodoDTO dto = new TodoDTO();

        dto.setId(model.getId());
        dto.setTitle(model.getTitle());
        dto.setDescription(model.getDescription());

        return dto;
    }
}

We have now created the service layer of our example application. Let’s move on and create the controller class. 
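Two small classes referenced above, PreCondition and TodoNotFoundException, are not listed in the blog post (they ship with the example application). For completeness, here is a minimal, hypothetical sketch of what they might look like; the method names match the imports used by the Todo entity, but the implementations are assumptions of mine:

```java
// Hypothetical sketches only; the real classes live in the example project.
// PreCondition backs the validation in the Todo entity, and
// TodoNotFoundException is mapped to HTTP 404 by the controller.
final class PreCondition {

    private PreCondition() {}

    // Throws if the expression is false; the message supports %d/%s placeholders.
    static void isTrue(boolean expression, String message, Object... args) {
        if (!expression) {
            throw new IllegalArgumentException(String.format(message, args));
        }
    }

    // Throws if the string is empty (a null string is caught by notNull).
    static void notEmpty(String value, String message) {
        if (value != null && value.isEmpty()) {
            throw new IllegalArgumentException(message);
        }
    }

    // Throws if the reference is null.
    static void notNull(Object reference, String message) {
        if (reference == null) {
            throw new IllegalArgumentException(message);
        }
    }
}

// Unchecked, so the service layer can throw it without polluting signatures.
class TodoNotFoundException extends RuntimeException {

    TodoNotFoundException(String id) {
        super("No todo entry found with id: " + id);
    }
}
```

With something like these in place, building a Todo with a null title fails fast with “Title cannot be null”, and looking up an unknown id bubbles up from the service as a 404 response.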
Creating the Controller Class

First, we need to create the DTO class that contains the information of a single todo entry and specifies the validation rules that are used to ensure that only valid information can be saved to the database. The source code of the TodoDTO class looks as follows:

import org.hibernate.validator.constraints.NotEmpty;

import javax.validation.constraints.Size;

public final class TodoDTO {

    private String id;

    @Size(max = Todo.MAX_LENGTH_DESCRIPTION)
    private String description;

    @NotEmpty
    @Size(max = Todo.MAX_LENGTH_TITLE)
    private String title;

    //Constructor, getters, and setters are omitted
}

Additional Reading:
The Reference Manual of Hibernate Validator 5.0.3

Second, we have to create the controller class that processes the HTTP requests sent to our REST API and sends the correct response back to the client. We can do this by following these steps:

Inject our service into our controller by using constructor injection.
Add a create() method to our controller and implement it by reading the information of the created todo entry from the request body, validating it, creating a new todo entry, returning the created todo entry, and setting the response status to 201.
Implement the delete() method by delegating the id of the deleted todo entry forward to our service and returning the deleted todo entry.
Implement the findAll() method by finding the todo entries from the database and returning the found todo entries.
Implement the findById() method by finding the todo entry from the database and returning the found todo entry.
Implement the update() method by following these steps: read the information of the updated todo entry from the request body, and validate the information of the updated todo entry. 
Update the information of the todo entry and return the updated todo entry.
Create an @ExceptionHandler method that sets the response status to 404 if the todo entry was not found (a TodoNotFoundException was thrown).

The source code of the TodoController class looks as follows:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseStatus;
import org.springframework.web.bind.annotation.RestController;

import javax.validation.Valid;
import java.util.List;

@RestController
@RequestMapping("/api/todo")
final class TodoController {

    private final TodoService service;

    @Autowired
    TodoController(TodoService service) {
        this.service = service;
    }

    @RequestMapping(method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.CREATED)
    TodoDTO create(@RequestBody @Valid TodoDTO todoEntry) {
        return service.create(todoEntry);
    }

    @RequestMapping(value = "{id}", method = RequestMethod.DELETE)
    TodoDTO delete(@PathVariable("id") String id) {
        return service.delete(id);
    }

    @RequestMapping(method = RequestMethod.GET)
    List<TodoDTO> findAll() {
        return service.findAll();
    }

    @RequestMapping(value = "{id}", method = RequestMethod.GET)
    TodoDTO findById(@PathVariable("id") String id) {
        return service.findById(id);
    }

    @RequestMapping(value = "{id}", method = RequestMethod.PUT)
    TodoDTO update(@RequestBody @Valid TodoDTO todoEntry) {
        return service.update(todoEntry);
    }

    @ExceptionHandler
    @ResponseStatus(HttpStatus.NOT_FOUND)
    public void handleTodoNotFound(TodoNotFoundException ex) {
    }
}

If the validation fails, our REST API returns the validation errors as JSON and sets the response status to 400. 
If you want to know more about this, read the blog post titled: Spring from the Trenches: Adding Validation to a REST API. That is it. We have now created a REST API that provides CRUD operations for todo entries and saves them to a MongoDB database. Let’s summarize what we learned from this blog post.

Summary

This blog post has taught us three things:

We can get the required dependencies with Maven by declaring only two dependencies: spring-boot-starter-web and spring-data-mongodb.
If we are happy with the default configuration of Spring Boot, we can configure our web application by using its auto-configuration support and “dropping” new jars to the classpath.
We learned to create a simple REST API that saves information to a MongoDB database and finds information from it.

P.S. You can get the example application of this blog post from GitHub.

Reference: Creating a REST API with Spring Boot and MongoDB from our JCG partner Petri Kainulainen at the Petri Kainulainen blog....

The Inconvenient Truth About Dynamic vs. Static Typing

Sometimes there are these moments of truth. They happen completely unexpectedly, such as when I read this tweet:

Good discussion of Facebook Flow – — David J. Pearce (@whileydave) November 23, 2014

David is the author of the lesser-known but no less interesting Whiley programming language, a language that has a lot of static type checking built in. One of the most interesting features of the Whiley language is flow-sensitive typing (sometimes also simply called flow typing), which is mostly useful when used along with union types. An example from the getting started guide:

function indexOf(string str, char c) => null|int:

function split(string str, char c) => [string]:
    var idx = indexOf(str, c)
    // idx has type null|int
    if idx is int:
        // idx now has type int
        string below = str[0..idx]
        string above = str[idx..]
        return [below, above]
    else:
        // idx now has type null
        return [str] // no occurrence

Remember, other languages like Ceylon also know flow-sensitive typing, and even Java does to a certain extent, because Java has union types, too!

try {
    ...
}
catch (SQLException | IOException e) {
    if (e instanceof SQLException)
        doSomething((SQLException) e);
    else
        doSomethingElse((IOException) e);
}

Granted, Java’s flow-sensitive typing is explicit and verbose. We could expect the Java compiler to infer all the types. The following should type-check and compile just as well:

try {
    ...
}
catch (SQLException | IOException e) {
    if (e instanceof SQLException)
        // e is guaranteed to be of type SQLException
        doSomething(e);
    else
        // e is guaranteed to be of type IOException
        doSomethingElse(e);
}

Flow typing or flow-sensitive typing means that the compiler can infer the only possible type from the control flow of the surrounding program. It is a relatively new concept in modern languages like Ceylon, and it makes static typing extremely powerful, especially if the language also supports sophisticated type inference via var or val keywords! 
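As it happens, Java did eventually grow exactly this kind of inference: since Java 16, pattern matching for instanceof binds a new, correctly typed variable inside the guarded branch. A minimal, runnable sketch (the describe method is my own illustration, not from the article):

```java
// Pattern matching for instanceof (Java 16+): the binding variable is
// only in scope, and only typed, where the test is known to have passed.
public class FlowTypingDemo {

    static String describe(Object o) {
        if (o instanceof String s) {
            return "String of length " + s.length(); // s is a String here
        } else if (o instanceof Integer i) {
            return "Integer with value " + i;        // i is an Integer here
        }
        return "something else";
    }

    public static void main(String[] args) {
        System.out.println(describe("abc")); // String of length 3
        System.out.println(describe(42));    // Integer with value 42
    }
}
```

This is precisely the "compiler infers the only possible type from the control flow" idea, just a decade after the article wished for it.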
JavaScript static typing with Flow

Let’s get back to David’s Tweet and have a look at what the article said about Flow:

The presence of a use of length with a null argument informs Flow that there should be a null check in that function. This version does type-check:

function length(x) {
  if (x) {
    return x.length;
  } else {
    return 0;
  }
}

var total = length('Hello') + length(null);

Flow is able to infer that x cannot be null inside the if body.

That’s quite cunning. A similar upcoming feature can be observed in Microsoft’s TypeScript. But Flow is different (or claims to be different) from TypeScript. The essence of Facebook Flow can be seen in this paragraph from the official Flow announcement:

Flow’s type checking is opt-in — you do not need to type check all your code at once. However, underlying the design of Flow is the assumption that most JavaScript code is implicitly statically typed; even though types may not appear anywhere in the code, they are in the developer’s mind as a way to reason about the correctness of the code. Flow infers those types automatically wherever possible, which means that it can find type errors without needing any changes to the code at all. On the other hand, some JavaScript code, especially frameworks, make heavy use of reflection that is often hard to reason about statically. For such inherently dynamic code, type checking would be too imprecise, so Flow provides a simple way to explicitly trust such code and move on. This design is validated by our huge JavaScript codebase at Facebook: Most of our code falls in the implicitly statically typed category, where developers can check their code for type errors without having to explicitly annotate that code with types.

Let this sink in:

most JavaScript code is implicitly statically typed

again:

JavaScript code is implicitly statically typed

Yes! Programmers love type systems. 
Programmers love to reason formally about their data types and put them in narrow constraints to be sure the program is correct. That’s the whole essence of static typing: to make fewer mistakes because of well-designed data structures. People also love to put their data structures in well-designed forms in databases, which is why SQL is so popular and “schema-less” databases will not gain more market share. Because in fact, it’s the same story. You still have a schema in a “schema-less” database, it’s just not type checked and thus leaves you all the burden of guaranteeing correctness.

On a side note: Obviously, some NoSQL vendors keep writing these ridiculous blog posts to desperately position their products, claiming that you really don’t need any schema at all, but it’s easy to see through that marketing gag. True need for schemalessness is as rare as true need for dynamic typing. In other words, when is the last time you’ve written a Java program and called every method via reflection? Exactly…

But there’s one thing that statically typed languages didn’t have in the past and that dynamically typed languages did have: means to circumvent verbosity. Because while programmers love type systems and type checking, programmers do not love typing (as in typing on the keyboard). Verbosity is the killer. Not static typing. Consider the evolution of Java:

Java 4

List list = new ArrayList();
list.add("abc");
list.add("xyz");

// Eek. Why do I even need this Iterator?
Iterator iterator = list.iterator();
while (iterator.hasNext()) {
    // Gee, I *know* I only have strings. Why cast?
    String value = (String) iterator.next();
    // [...]
}

Java 5

// Agh, I have to declare the generic type twice!
List<String> list = new ArrayList<String>();
list.add("abc");
list.add("xyz");

// Much better, but I have to write String again?
for (String value : list) {
    // [...] 
}

Java 7

// Better, but I still need to write down two
// times the "same" List type
List<String> list = new ArrayList<>();
list.add("abc");
list.add("xyz");

for (String value : list) {
    // [...]
}

Java 8

// We're now getting there, slowly
Stream.of("abc", "xyz").forEach(value -> {
    // [...]
});

On a side-note, yes, you could’ve used Arrays.asList() all along. Java 8 is still far from perfect, but things are getting better and better. The fact that I finally do not have to declare a type anymore in a lambda argument list, because it can be inferred by the compiler, is something really important for productivity and adoption. Consider the equivalent of a lambda pre-Java 8 (if we had Streams before):

// Yes, it's a Consumer, fine. And yes it takes Strings
Stream.of("abc", "xyz").forEach(new Consumer<String>(){
    // And yes, the method is called accept (who cares)
    // And yes, it takes Strings (I already said so!?)
    @Override
    public void accept(String value) {
        // [...]
    }
});

Now, if we’re comparing the Java 8 version with a JavaScript version:

["abc", "xyz"].forEach(function(value) {
    // [...]
});

We have almost reached as little verbosity as the functional, dynamically typed language that is JavaScript (I really wouldn’t mind those missing list and map literals in Java), with the only difference that we (and the compiler) know that value is of type String. And we know that the forEach() method exists. And we know that forEach() takes a function with one argument. At the end of the day, things seem to boil down to this: dynamically typed languages like JavaScript and PHP have become popular mainly because they “just ran”. You didn’t have to learn all the “heavy” syntax that classic statically typed languages required (just think of Ada and PL/SQL!). You could just start writing your program. Programmers “knew” that the variables would contain strings, there’s no need to write it down. And that’s true, there’s no need to write everything down! 
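Java did eventually get part of this wish: JEP 286 added the var keyword in Java 10 (val never arrived; final var is the closest spelling). A small runnable sketch, with illustrative names of my own:

```java
import java.util.ArrayList;
import java.util.List;

// Local-variable type inference with `var` (Java 10+): the compiler infers
// the static type from the initializer, so nothing is written down twice.
public class VarDemo {

    static List<Integer> lengths(List<String> values) {
        var result = new ArrayList<Integer>(); // inferred: ArrayList<Integer>
        for (var value : values) {             // inferred: String
            result.add(value.length());
        }
        return result;
    }

    public static void main(String[] args) {
        var list = List.of("abc", "wxyz");     // inferred: List<String>
        final var total = lengths(list);       // effectively a `val`
        System.out.println(total);             // [3, 4]
    }
}
```

Crucially, everything here is still fully statically typed; only the keystrokes went away.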
Consider Scala (or C#, Ceylon, pretty much any modern language):

val value = "abc"

What else can it be, other than a String?

val list = List("abc", "xyz")

What else can it be, other than a List[String]? Note that you can still explicitly type your variables if you must – there are always those edge cases:

val list : List[String] = List[String]("abc", "xyz")

But most of the syntax is “opt-in” and can be inferred by the compiler.

Dynamically typed languages are dead

The conclusion of all this is that once syntactic verbosity and friction is removed from statically typed languages, there is absolutely no advantage in using a dynamically typed language. Compilers are very fast, deployment can be fast too, if you use the right tools, and the benefit of static type checking is huge. (Don’t believe it? Read this article.) As an example, SQL is also a statically typed language where much of the friction is still created by syntax. Yet, many people believe that it is a dynamically typed language, because they access SQL through JDBC, i.e. through type-less concatenated Strings of SQL statements. If you were writing PL/SQL, Transact-SQL, or embedded SQL in Java with jOOQ, you wouldn’t think of SQL this way and you’d immediately appreciate the fact that your PL/SQL, Transact-SQL, or your Java compiler would type-check all of your SQL statements.

So, let’s abandon this mess that we’ve created because we’re too lazy to type all the types (pun). Happy typing! And if you’re reading this, Java language expert group members, please do add var and val, as well as flow-sensitive typing to the Java language. We’ll love you forever for this, promised!

Reference: The Inconvenient Truth About Dynamic vs. Static Typing from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Good Enough

We hear a lot about building products which are “good enough” or “just barely good enough.” How do we know what “good enough” means for our customers? No one really tells us.

Different Perspectives of Good Enough

There are several important ways to think about a product being good enough – for this article, we will limit the context for discussion to “good enough to ship to customers” or “good enough to stop making it better (for now).” Determining good enough informs the decision to ship or not. Otherwise this is all academic. There are several perspectives on good enough which are important – but they don’t help product managers enough. The body of work seems to be focused on aspects of goodness which don’t help product managers make the prioritization decisions which inform their roadmaps. They are important – necessary, but not sufficient for product managers. Here are some pointers to some great stuff, before I dive into what I feel is a missing piece.

Good enough doesn’t mean you can just do 80% of the coding work you know you need to do, and ship the product – allowing technical debt to pile up. Robert Lippert has an excellent article about this. Technical debt piles up in your code like donuts pile up on your waistline. This is important, although it only eventually affects product management, as the code base becomes unwieldy and limits what the team can deliver – and increases the cost and time of delivery.

Be pragmatic about perfectionism when delivering your product. Steve Ropa has an excellent article about this. As a fellow woodworker, his metaphors resonate with me. The key idea is, as a craftsman, to recognize when you’re adding cost and effort to improve the quality of your deliverable in ways your customer will never notice. 
This is important, and can affect product managers, because increasing the cost of deliverables affects the bang-for-the-buck calculations, and therefore prioritization decisions.

With the current mind share enjoyed by Lean Startup and minimally viable products (MVP), there is far too much shallow analysis from people jumping on the bandwagon of good ideas without fully understanding the ideas. Products fail because of misunderstanding of the phrase minimum viable product.

Many people mis-define product in MVP to mean experiment. Max Dunn has an excellent article articulating how people conflate “running an experiment” with “shipping product,” and has a good commentary on how there isn’t enough guidance on the distinction. This is important for product managers to understand. Learning from your customers is important – but it doesn’t mean you should ship half-baked products to your market in order to validate a hypothesis.

MVP is an experimentation process, not a product development process. Ramli John makes this bold assertion in an excellent article. Here’s a slap in the face which may just solve the problem, if we can get everyone to read it. MVP / Lean Startup is a learning process fueled with hypothesis testing, following the scientific method. Instead of trying to shoehorn it into a product-creation process, simply don’t. Use the concept to drive learning, not roadmaps.

“How much can we have right now?” is important to customers. Christina Wodtke has a particularly useful and excellent article on including customers in the development of your roadmap. “Now, next, or later” is an outstanding framework for simultaneously getting prioritization feedback and managing the expectations of customers (and other stakeholders) about delivery. My concern is that in terms of guidance to product managers, this is as good as it gets. 
Most people manage “what and when” but not “how effectively.”

Three Perspectives

There are three perspectives on how we approach defining good enough when making decisions about investment in our products. The first two articles, by Robert and Steve (linked above), address when the team should stop coding in order to deliver the requested feature. There is also the valid question of whether a particular design, the one to which the developers are writing code, is good enough. I’ll defer the conversation about knowing when the design of how a particular capability will be delivered (as a set of features, interactions, etc.) is good enough for another time. [I’m 700 words into burying the lede so far.]

For product managers, the most important perspective is intent: what is it we are trying to enable our customers to do? Christina’s article (linked above) expertly addresses half of the problem of managing intent. Note that this isn’t a critique of her focus on “what and when.” We still need to address the question “how capable for now?”

How Capable Must It Be for Now?

Finally. I wrote this article because everyone just waves their hand at this mystical concept of getting something out there for now and then making it better later, but no one provides us with any tools for defining “good enough.” Several years ago I wrote about delivering the not-yet-perfect product and satisficing your customers incrementally, but I didn’t provide any tools to help define good enough from an intent perspective.

Once we identify a particular capability to be included in a release (or iteration), we have to define how capable that capability needs to be. Here’s an example of what I’m trying to describe: we’ve decided that enabling our target customer to “reduce inventory levels” is the right investment to make during this release. How much of a reduction in inventory levels is the right amount to target? That’s the question. What is good enough?
Our customer owns the definition of good enough, and Kano analysis gives us a framework for talking about it. When looking at a “more is better” capability, from the perspective of our customers, increases in the capability of the capability (for non-native English speakers, “increasing the effectiveness of the feature” has substantially the same meaning) increase the value to them.

We can deliver a product with a level of capability anywhere along this curve. The question is: at what level is it “good enough?” Once we reach the point of delivering something which is “good enough,” additional investments to improve that particular capability are questionable, at least from the perspective of our customers.

Amplifying the Problem

Switch gears for a second and recall the most recent estimation and negotiation exercise you went through with your development team. For many capabilities, making it “better” or “more” or “faster” also makes it more expensive: “Getting search results in 2 seconds costs X; getting results in 1 second costs 10X.” As we increase the capability of our product, we simultaneously provide smaller benefits to our customers at increasingly higher cost. This sounds like a problem on a microeconomics final exam; a profit-maximizing point must exist somewhere.

An Example

Savings from driving a more fuel-efficient car is a good example of diminishing returns (apologies to people using other measures and currencies). The chart below shows the daily operating cost of a vehicle based on some representative values for drivers in the USA. Each doubling of fuel efficiency sounds like a fantastic improvement in a car. 80 MPG is impressively “better” than 40 MPG from an inside-out perspective. Imagine the engineering which went into improving (or re-inventing) the technology to double the fuel efficiency. All of that investment to save the average driver $1 per day.
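The diminishing-returns arithmetic behind the fuel-efficiency example can be sketched in a few lines of Java. The daily mileage, fuel price, and ownership period below are illustrative assumptions (not the article’s exact inputs), chosen so that doubling 40 MPG to 80 MPG saves roughly $1 per day:

```java
// Sketch of the diminishing-returns model behind the fuel-efficiency example.
// All inputs are hypothetical "representative" US values, not the article's exact figures.
public class FuelSavings {

    // Daily fuel cost: miles driven per day divided by efficiency, times fuel price.
    static double dailyCost(double milesPerDay, double mpg, double pricePerGallon) {
        return milesPerDay / mpg * pricePerGallon;
    }

    public static void main(String[] args) {
        double milesPerDay = 30.0;    // assumed average daily driving
        double pricePerGallon = 2.70; // assumed fuel price (USD)

        // Each doubling of MPG halves the daily cost, so the absolute saving
        // from each successive doubling also halves.
        double prev = dailyCost(milesPerDay, 10, pricePerGallon);
        for (double mpg = 20; mpg <= 160; mpg *= 2) {
            double cost = dailyCost(milesPerDay, mpg, pricePerGallon);
            System.out.printf("%3.0f MPG: $%.2f/day (saves $%.2f/day vs half the efficiency)%n",
                    mpg, cost, prev - cost);
            prev = cost;
        }

        // Total saving from the 40 -> 80 MPG doubling over an assumed ~5-year ownership;
        // under these inputs it lands below $2,000.
        double saving = dailyCost(milesPerDay, 40, pricePerGallon)
                      - dailyCost(milesPerDay, 80, pricePerGallon);
        System.out.printf("40 -> 80 MPG saves $%.2f/day, ~$%.0f over ownership%n",
                saving, saving * 5 * 365);
    }
}
```

Running the loop makes the curve concrete: each doubling of efficiency delivers half the saving of the previous one, which is exactly the “smaller benefit at increasingly higher cost” squeeze described above.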
This is less than $2,000 over the average length of car ownership in the USA. How much will a consumer pay to save that $2,000? How much should the car maker invest to double fuel efficiency, based on how much they can potentially increase sales and/or prices? An enterprise software rule of thumb would suggest the manufacturer could raise prices between $200 and $300. If the vendor’s development budget were 20% of revenue, they would be able to spend $40 to $60 (per anticipated car sold) to fund the dramatic improvement in capability.

One Step Closer

What good enough means, precisely, for your customer, for a particular capability of a particular product, given your product strategy, is unique. There is no one-size-fits-all answer, and there is no unifying equation which applies to everyone either. Even after you build a model which represents the diminishing returns to your customers of incremental improvement, you have to put it in context. What does a given level of improvement cost, for your team, working with your tech stack? How does improvement affect your competitive position, both with respect to this capability and overall? You have to do the customer-development work and build your understanding of how your markets behave and what they need.

At least with the application of Kano analysis, you have a framework for making informed decisions about how much you ultimately need to do, and how much you need to do right now. As a bonus, you have a clear vehicle for communicating decisions (and gaining consensus) within your organization.

Reference: Good Enough from our JCG partner Scott Sehlhorst at the Tyner Blain blog.
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.