
BDD (Behavior-Driven Development): Missing Piece in the Continuous Integration Puzzle

Behavior-Driven Development (BDD) is a process, or it can be a tool. In many cases, BDD is both. However, it should not be a goal in itself. The goal of software development is to deliver quality as fast and as cheaply as possible. The only real measure of quality is whether the software fulfills user needs in a reliable manner. The best way to accomplish that goal is through continuous integration, deployment and delivery. For the sake of this article I will ignore the differences between those three and refer to all of them as continuous integration, or CI.

CI is often misunderstood, and BDD can provide a missing piece of the puzzle. CI is usually implemented as a series of steps that are initiated with a commit to the repository, followed by the software being built, statically checked, unit tested, integration tested and, finally, delivered. With those steps we are confirming that the software always does what the team expects it to do. The only way to accomplish this goal is to have the team work as a single unified body. Even though there is always some type of specialization and different profiles might have some level of autonomy (front-end and back-end developers, testers…), they must all work together from the start until the end. An often overlooked element in this picture is the client and the users. Having the software always working as expected cannot be accomplished unless those who set the expectations are involved throughout the whole process.

Who sets the expectations? Users do. They are the only ones who can say whether the application we’re building is a success or not. They define what should be built, because it is their needs that we are trying to fulfill. This is where BDD comes in and creates a wrapper around our CI process. With CI and BDD we can have software that is always integrated in a way that fulfills the expectations of our users, instead of doing what we think it should do. That sentence contains a small but very important difference. Whether software works as we expect it to work is not the goal we should aim for. It should do what users expect it to do. We do not set the expectations. Users do.

BDD replaces traditional requirements with executable specifications written by, or in cooperation with, customers and users, and it provides continuous feedback when executed as part of our CI process. While the narrative and scenarios are a substitute for traditional requirements or user stories, automation of those scenarios is required for BDD to be fully integrated into the CI process. Narratives and scenarios are a process that, through different tools, can provide the automation we require. BDD is both a process and a tool.

Sprint 0 should be used to set up our tools and high-level design (IDE, servers, architecture design…), the CI server and the BDD framework. From there on we can start writing our BDD stories. Each of them, once written, should be taken by developers and implemented. If the story is pushed together with the implementation code, the feedback obtained from CI is almost immediate. That feedback is the piece often missing for a successful implementation of the CI process. Having Jenkins (or any other similar tool) is not sufficient by itself. If we’re seeking to build reliable software continuously, the final verification in the process must be based on some kind of integration and functional tests that confirm that user expectations are met. Otherwise, we’ll never have the confidence required to decide to implement continuous deployment or delivery.
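To make the idea of executable specifications concrete, here is a minimal sketch of a BDD scenario automated with Cucumber-JVM. This example is mine, not from the original article; the scenario, class and step texts are made up, and the package names assume a recent Cucumber-JVM version together with JUnit 4.

Scenario: Registered user logs in
  Given a registered user "alice" with password "s3cret"
  When "alice" logs in with password "s3cret"
  Then the login is accepted

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

import java.util.HashMap;
import java.util.Map;

import static org.junit.Assert.assertTrue;

public class LoginSteps {

    // Hypothetical in-memory stand-in for the application under test
    private final Map<String, String> users = new HashMap<>();
    private boolean loggedIn;

    @Given("a registered user {string} with password {string}")
    public void a_registered_user(String name, String password) {
        users.put(name, password);
    }

    @When("{string} logs in with password {string}")
    public void logs_in(String name, String password) {
        loggedIn = password.equals(users.get(name));
    }

    @Then("the login is accepted")
    public void the_login_is_accepted() {
        assertTrue(loggedIn);
    }
}

When scenarios like this run as a CI step, a failing step is the signal that the delivered software no longer matches what the users expect.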
The question might arise why the feedback from unit tests is not good enough to tell us whether our software is working as expected. Unit tests are a must, because they are fast to write and to execute. However, they only tell us whether each of our units of code works properly. They cannot assure us that all those units are integrated into the functionality they compose. How about other types of integration tests? If they are based on pure code, they can neither be written nor understood by the customer or users. Without the involvement of those users, integration tests are our assumption of what they want, which may or may not be true. Moreover, since such tests must work in conjunction with requirements, they represent a duplication of work by providing two forms of the same concept: requirements are tests, often written in a different format. If requirements become executable, there is no need for separate artifacts.

If BDD is the replacement for requirements and integration tests, can we get rid of unit tests? Yes, we can, but we should not. Even though one can write BDD on all levels, using it instead of unit tests would drastically increase the amount of work. Moreover, it would complicate the communication with the customer and users. Keeping unit tests as a way to verify all the combinations software can handle on a unit level frees us to write BDD scenarios in a compact way that confirms the integration of those units, while providing a good communication tool that acts as the final verification of the functionalities we are developing.

Requirements themselves should be executable, and that is what BDD is trying to accomplish. If integrated into the CI process, it provides the missing piece by converting a process that continuously provides feedback on what we think should be developed into one that provides feedback on what the customer and users think should be developed. It is present throughout the whole process: it starts as a way to capture requirements, guides the development and acts as the final verification of the CI. It is the missing piece required to have a reliable delivery to production on a continuous basis.

Reference: BDD (Behavior-Driven Development): Missing Piece in the Continuous Integration Puzzle from our JCG partner Viktor Farcic at the Technology Conversations blog.

SonarQube As An Education Platform

I’ve been using the SonarQube [1] platform for more than four years. I remember the time when it was making its first baby steps as a code quality management tool. It looked more like a system that integrated various third-party static analysis tools (like PMD, FindBugs etc.) and provided a few, but important, code quality metrics. Many things have changed over the years since. SonarQube today is considered a mature software eco-system (in my humble opinion the best) that provides a set of features for successfully applying the process of continuous inspection to any development methodology. In this article I’m not going to discuss SonarQube’s star features that help you manage and control your Technical Debt. I will give a different point of view and explain how you can use it as an educational platform.

Teaching developers with coding rules

Since release 4.0, the integration of external tools has been gradually dropped and several of the coding rules provided by these tools have been replaced by rules written using an in-house developed (but still open-sourced) language parsing library from SonarSource [2], called the SonarSource Language Recognizer (SSLR) [3]. One of the great benefits of this rule re-writing is that the rules include a very explanatory description of their purpose as well as several code examples – if applicable – that present the right and wrong way of writing code. Take, for example, the Java coding rule that checks whether Object.equals is overridden when Object.compareTo is overridden. The rule is not only backed up by a very detailed and well-argued explanation, but it also contains two code snippets: a compliant and a non-compliant one (a sketch of what such snippets typically look like follows at the end of this section).

Developers are able to read all this information when they are looking at an issue [4] that violated this rule. They are supposed to understand what they did wrong, fix it and hopefully not make the same mistake again in the future. But hey!! You don’t have to sit down and wait for SonarQube to raise an issue so that developers read about the correct way of writing code. You can send the developers to study the rules anytime they want. In other words, educate them before a quality flaw appears. In the company I work with, we have filtered out the rules that are not aligned with our coding style and then grouped the rest by using the tagging mechanism provided by SonarQube [5]. Then we organized training sessions where we walked through every rule of a specific tag (group) and discussed the details of each rule and the suggested way of coding. That’s all! We noticed that the developers started writing better code from the very next day, and SonarQube issues were very limited for the coding rules we had already discussed.
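As an illustration of the compareTo/equals rule mentioned above, here is my own sketch of the kind of compliant and non-compliant snippets such a rule page shows; it is not copied from SonarQube's rule description, and the class names are made up.

// Non-compliant sketch: compareTo is overridden but equals is not, so two
// objects that compareTo() reports as equal are not equal for equals/hashCode
class Money implements Comparable<Money> {
    private final int cents;

    Money(int cents) {
        this.cents = cents;
    }

    @Override
    public int compareTo(Money other) {
        return Integer.compare(cents, other.cents);
    }
}

// Compliant sketch: compareTo, equals and hashCode are kept consistent
class ConsistentMoney implements Comparable<ConsistentMoney> {
    private final int cents;

    ConsistentMoney(int cents) {
        this.cents = cents;
    }

    @Override
    public int compareTo(ConsistentMoney other) {
        return Integer.compare(cents, other.cents);
    }

    @Override
    public boolean equals(Object obj) {
        return obj instanceof ConsistentMoney
            && cents == ((ConsistentMoney) obj).cents;
    }

    @Override
    public int hashCode() {
        return Integer.hashCode(cents);
    }
}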
Learning from code reviews

If you don’t have enough time to allocate for the previous suggestion, then you might consider an alternative approach. Most of you are probably familiar with code reviews, or at least know the basics and the benefits of applying such a practice. SonarQube provides a built-in tool that facilitates the code review process. In a few words, each issue can be assigned to a developer and can also be planned in an action plan. Code reviewers are able to confirm the issue, mark it as a false positive by providing some additional reasoning, or just comment on it with suggestions or possible solutions to fix the problematic code. All this issue interactivity can be viewed as a way to teach people, especially young developers.

As in the previous section, you can ask developers to read the comments or study the raised issues. A nice way of doing this, without needing to cut time from your development tasks, would be the following. First, prioritize SonarQube issues and plan them using action plans. For instance, you might have an action plan that includes all issues that should be fixed during the current iteration and another one for future iterations. Then try to hold short meetings during the iteration where you review all SonarQube issues and prioritize them. As soon as you have planned the required issues, let’s say for the current iteration, you can ask the developers to work as a team and come to a solution, especially for those issues where a one-line fix is not enough. Finally, document the solution by commenting on the relevant issues so that everyone can see it. The benefit of this approach is that developers are required to understand the underlying broken coding rules for all issues (not only the ones they created) and then figure out the fix.

Conclusion

Educating developers should be constant and continuous. But this is something that most companies forget, intentionally (lack of budget) or not (lack of time). If we try not to regard it as a necessary evil, but as something that can take place during everyday development tasks, then we might have a better chance. SonarQube’s coding rules and its easy-to-use code review mechanism come to the rescue and can be used to teach developers how to write better code and eventually make them better professionals.

This article was originally published at NDC Oslo Magazine 2014.

References
[1] http://www.sonarqube.org
[2] http://www.sonarsource.com
[3] http://docs.codehaus.org/display/SONAR/SSLR
[4] http://docs.codehaus.org/display/SONAR/Issues
[5] http://docs.codehaus.org/display/SONAR/Configuring+Rules#ConfiguringRules-TaggingRules

Reference: SonarQube As An Education Platform from our JCG partner Patroklos Papapetrou at the Only Software matters blog.

Why you should build an Immutable Infrastructure

Some of the major challenges today when building infrastructure are predictability, scalability and automated recovery. A predictable system will promote the exact same artifact that you tested into your production system, so no intermittent failure can cause any trouble. A scalable system makes it trivial, especially automatically, to deal with any rise in traffic. And automated recovery will make sure your team can focus on building a better product and sleep during the night instead of maintaining infrastructure constantly. At Codeship we’ve found that an infrastructure made up of immutable components has helped us tremendously with these goals.

Julian Dunn from Chef recently released a blog post about their stance on immutable infrastructure. Chad Fowler summed it up very well in a tweet:

@flomotlik pretty weak IMO. It conflates "containerisation" & "immutable infrastructure" then harps on a rigid definition of "immutable" — Chad Fowler (@chadfowler), June 30, 2014

Instead of going over every piece of the article, I want to present an overview of the experience we – and others – have had in making parts of our infrastructure immutable.

What is Immutable Infrastructure

Immutable infrastructure is comprised of immutable components that are replaced for every deployment, rather than being updated in place. Those components are started from a common image that is built once per deployment and can be tested and validated. The common image can be built through automation, but doesn’t have to be. Immutability is independent of any tool or workflow for building the images. Its best use case is in a cloud or virtualized environment. While it’s possible in non-virtualized environments, the benefit doesn’t outweigh the effort.

State Isolation

The main criticism against immutable infrastructure – as stated in the Chef blog post – is that there is always state somewhere in the system and, therefore, the whole system isn’t immutable. That misses the point of immutable components. The main advantage when it comes to state in immutable infrastructure is that it is siloed. The boundaries between layers storing state and the layers that are ephemeral are clearly drawn, and no leakage can possibly happen between those layers. There simply is no way to mix state into different components when you can’t expect them to be up and running the next minute.

Atomic Deployments and Validation

Updating an existing server can easily have unintended consequences. That’s why Chef, Puppet, CFEngine and other such tools exist – to take care of consistency across your infrastructure. A central system is necessary to manage the expected state of each server and to take action to ensure compliance. Deployment is not an atomic action but a transition that can go wrong and lead to an unknown state. This becomes very hard and complex to debug, as the exact state you are in is hard to know. Chef, Puppet and CFEngine are very complex systems because they have to deal with an overly complex problem. Another solution to that problem is to build completely new images and servers that contain the application and the environment every time you want to deploy. In that case, the deployment doesn’t depend on the state the servers were in before, so the result is much more predictable and repeatable. Any third-party issues that may cause the deployment to fail can be caught by validating the new image and ensuring no production system was impacted.
This one image can then be used to start any number of servers and to switch atomically from the old machines to the new ones, for example by changing the load balancer. There are of course downsides to rebuilding your images with every deployment. A full rebuild of the system takes a lot longer than simply updating and restarting the application. By layering your deployment you can optimize this – e.g. have a repository to build a base image and use that base image to just put in your application for the deployment image – but it will still be a slower process.

Another problem is that you introduce dependencies on third parties during deployment. If you install packages in the system and your apt repository is slow or down, this can fail the deployment. While this could be a problem in a non-immutable infrastructure as well, you typically interact less with third-party systems when you just push new code into an already provisioned system. By deploying from a pre-provisioned base image and updating that base image regularly you can soften that problem, but it’s still there and might fail a deployment from time to time.

Building the automation currently still takes more time at the beginning of the project, as the tools for building immutable infrastructure are still new or need to be developed. It is definitely more investment in the beginning, but it pays off immediately. You can still use Chef, Puppet, CFEngine or Ansible to build your images, but as they aren’t built for an immutable infrastructure workflow they tend to be more complex than necessary.

Fast Recovery by preserving History

As all deployments are done by building new images, history is preserved automatically for rollback when necessary. The same process and automation that is used to deploy the next version can be used to roll back, which ensures the process of rolling back will work. By automating the creation of the images, you can even recreate historical images and branch off from earlier points in the history of the infrastructure. Data schema changes are a potential problem, but that’s a general issue with rollbacks. Backwards compatibility and zero-downtime deployments are a way to make sure rollback will work regardless of the changes.

Simple Experimentation

As you control the whole environment and application, any experiments with new versions of the language, operating system or dependencies are easy. With strict testing and validation in place, and the ability to roll back if necessary, all the fear of upgrading any dependency is removed. Experimentation becomes an integral and trivial part of building your infrastructure.

Makes you collect your logs and metrics in a central location

With immutable components in place, it’s easy to simply kill a misbehaving server. While errors are often simply a product of the environment – for example, a third-party system misbehaving – and can be ignored, some will keep coming up. Not having access into the servers puts the right incentive on the team to collect and store logs and system metrics externally. This way, debugging can happen while the server is long gone. If logs and metrics are missing to properly debug an issue, it’s easy to add more data collection to the infrastructure and replace all existing servers. Then, once the error comes up again, you can debug it fully from the data stored on an external system.

Conclusions

Immutable components as part of your infrastructure are a way to reduce inconsistency in your infrastructure and improve the trust in your deployment process.
Atomic deployments, combined with validation of the image and easy rollback, make managing your infrastructure a lot easier. This approach forces teams to silo data and expect the failures that are inherent when building on top of a cloud infrastructure, or when building systems in general. This increases resilience and trains you in a process to withstand any problems, especially in an automated fashion. Furthermore, it helps with building simple and independent components that are easy to deploy and scale.

And it’s not a theoretical idea. At Codeship, we’ve built our infrastructure this way for a long time. Heroku and other PaaS providers are built around immutable components, and lots of companies – small and very large – have used immutability as a core concept of their infrastructure. Tools like Packer have made building immutable components very easy. Together with existing cloud infrastructure, they are a powerful concept to help you build better and safer infrastructure. Let me know in the comments if you have any questions or interesting insights to share.

Reference: Why you should build an Immutable Infrastructure from our JCG partner Florian Motlik at the Codeship Blog blog.

How to Instantly Improve Your Java Logging With 7 Logback Tweaks

The benchmark tests to help you discover how Logback performs under pressure.

Logging is essential for server-side applications, but it comes at a cost. It’s surprising to see, though, how much impact small changes and configuration tweaks can have on an app’s logging throughput. In this post we will benchmark Logback’s performance in terms of log entries per minute. We’ll find out which appenders perform best, what prudent mode is, and what some of the awesome side effects of async methods, sifting and console logging are. Let’s get to it.

The groundwork for the benchmark

At its core, Logback is based on Log4j, with tweaks and improvements under Ceki Gülcü’s vision. Or, as they say, a better Log4j. It features a native slf4j API, a faster implementation, XML configuration, prudent mode, and a set of useful appenders which I will elaborate on shortly. Having said that, there are quite a few ways to log with the different sets of appenders, patterns and modes available in Logback. We took a set of commonly used combinations and put them to a test on 10 concurrent threads to find out which can run faster. The more log entries written per minute, the more efficient the method is and the more resources are free to serve users. It’s not exact science, but to be more precise we ran each test 5 times, removed the top and bottom outliers and took the average of the results. To try to be fair, all log lines written also had an equal length of 200 characters.

** All code is available on GitHub right here. The test was run on a Debian Linux machine running on an Intel i7-860 (4 cores @ 2.80 GHz) with 8GB of RAM.

First Benchmark: What’s the cost of synchronous log files?

First we took a look at the difference between synchronous and asynchronous logging. Both write to a single log file: the FileAppender writes entries directly to file, while the AsyncAppender feeds them to a queue which is then written to file. The default queue size is 256, and when it’s 80% full it stops letting in new entries of lower levels (except WARN and ERROR). The table compares the FileAppender with different queue sizes for the AsyncAppender. Async came out on top with the 500 queue size.

Tweak #1: AsyncAppender can be 3.7x faster than the synchronous FileAppender. Actually, it’s the fastest way to log across all appenders.

It performed way better than the default configuration, which even trails behind the sync FileAppender that was supposed to finish last. So what might have happened? Since we’re writing INFO messages, and doing so from 10 concurrent threads, the default queue size might have been too small and messages could have been lost to the default threshold. Looking at the results of the 500 and 1,000,000 queue sizes, you’ll notice that their throughput was similar, so queue size and threshold weren’t an issue for them.

Tweak #2: The default AsyncAppender can cause a 5-fold performance cut and even lose messages. Make sure to customize the queue size and discardingThreshold according to your needs.

<appender name="ASYNC500" class="ch.qos.logback.classic.AsyncAppender">
  <queueSize>500</queueSize>
  <discardingThreshold>0</discardingThreshold>
  <appender-ref ref="FILE" />
</appender>

** Setting an AsyncAppender’s queueSize and discardingThreshold

Second Benchmark: Do message patterns really make a difference?

Now we want to see the effect of log entry patterns on the speed of writing. To make this fair, we kept the log line’s length equal (200 characters) even when using different patterns.
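For context, the patterns render fields of the logging call itself, which goes through the standard slf4j API; the logger name in particular matters for Tweak #3 below. A typical class-named logger declaration and log statement look like this (a generic slf4j sketch with made-up class and method names, not code taken from the benchmark repository):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

    // Naming the logger after the class gives each class its own logger name
    // in the output pattern, without hard-coding a string
    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String id) {
        // Parameterized messages avoid string concatenation when the level is disabled
        log.info("Placing order {}", id);
    }
}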
The default Logback entry includes the date, thread, level, logger name and message; by playing with it we tried to see what the effects on performance might be. This benchmark demonstrates, and helps see up close, the benefit of logger naming conventions. Just remember to change the logger’s name according to the class you use it in.

Tweak #3: Naming the logger by class name provides a 3x performance boost.

Taking the logger or the thread name off added some 40k-50k entries per minute. There is no need to write information you’re not going to use. Going minimal also proved to be a bit more effective.

Tweak #4: Compared to the default pattern, using only the Level and Message fields provided 127k more entries per minute.

Third Benchmark: Dear prudence, won’t you come out to play?

In prudent mode a single log file can be accessed from multiple JVMs. This of course takes a hit on performance because of the need to handle another lock. We tested prudent mode on 2 JVMs writing to a single file, using the same benchmark we ran earlier. Prudent mode takes a hit as expected, although my first guess was that the impact would be stronger.

Tweak #5: Use prudent mode only when you absolutely need it, to avoid a throughput decrease.

<appender name="FILE_PRUDENT" class="ch.qos.logback.core.FileAppender">
  <file>logs/test.log</file>
  <prudent>true</prudent>
</appender>

** Configuring prudent mode on a FileAppender

Fourth Benchmark: How to speed up synchronous logging?

Let’s see how synchronous appenders other than the FileAppender perform. The ConsoleAppender writes to System.out or System.err (defaulting to System.out) and can of course also be piped to a file. That’s how we were able to count the results. The SocketAppender writes to a specified network resource over a TCP socket. If the target is offline, the message is dropped. Otherwise, it’s received as if it was generated locally. For the benchmark, the socket was sending data to the same machine, so we avoided network issues and concerns.

To our surprise, explicit file access through FileAppender is more expensive than writing to the console and piping it to a file. The same result, a different approach, and some 200k more log entries per minute. The SocketAppender performed similarly to the FileAppender in spite of adding serialization in between; the network resource, if it existed, would have borne most of the overhead.

Tweak #6: Piping ConsoleAppender to a file provided 13% higher throughput than using FileAppender.

Fifth Benchmark: Now can we kick it up a notch?

Another useful method we have in our toolbelt is the SiftingAppender. Sifting allows breaking the log into multiple files. Our logic here was to create 4 separate logs, each holding the logs of 2 or 3 of the 10 threads we ran in the test. This is done by indicating a discriminator, in our case logid, which determines the file name of the logs:

<appender name="SIFT" class="ch.qos.logback.classic.sift.SiftingAppender">
  <discriminator>
    <key>logid</key>
    <defaultValue>unknown</defaultValue>
  </discriminator>
  <sift>
    <appender name="FILE-${logid}" class="ch.qos.logback.core.FileAppender">
      <file>logs/sift-${logid}.log</file>
      <append>false</append>
    </appender>
  </sift>
</appender>

** Configuring a SiftingAppender

Once again our FileAppender takes a beating. The more output targets, the less stress on the locks and the less context switching.
The main bottleneck in logging, as with the async example, proves to be synchronizing on a file.

Tweak #7: Using a SiftingAppender can allow a 3.1x improvement in throughput.

Conclusion

We found that the way to achieve the highest throughput is by using a customized AsyncAppender. If you must use synchronous logging, it’s better to sift the output and write to multiple files by some logic. I hope you’ve found the insights from the Logback benchmark useful, and I look forward to hearing your thoughts in the comments below.

Reference: How to Instantly Improve Your Java Logging With 7 Logback Tweaks from our JCG partner Alex Zhitnitsky at the Takipi blog.

Java: Determining the status of data import using kill signals

A few weeks ago I was working on the initial import of ~60 million bits of data into Neo4j and we kept running into a problem where the import process just seemed to freeze and nothing else was imported. It was very difficult to tell what was happening inside the process – taking a thread dump merely informed us that it was attempting to process one line of a CSV file and was somehow unable to do so.

One way to help debug this would have been to print out every single line of the CSV as we processed it and then watch where it got stuck, but this seemed a bit overkill. Ideally we wanted to print out the line we were processing only on demand. As luck would have it, we can do exactly this by sending a kill signal to our import process and having it print out where it had got up to. We had to make sure we picked a signal which wasn’t already being handled by the JVM and decided to go with ‘SIGTRAP’, i.e. kill -5 [pid]. We came across a neat blog post that explained how to wire everything up and then created our own version (imports added here for completeness):

import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

import sun.misc.Signal;
import sun.misc.SignalHandler;

class Kill3Handler implements SignalHandler {
    private AtomicInteger linesProcessed;
    private AtomicReference<Map<String, Object>> lastRowProcessed;

    public Kill3Handler(AtomicInteger linesProcessed,
                        AtomicReference<Map<String, Object>> lastRowProcessed) {
        this.linesProcessed = linesProcessed;
        this.lastRowProcessed = lastRowProcessed;
    }

    @Override
    public void handle(Signal signal) {
        // Printed whenever the process receives SIGTRAP (kill -5)
        System.out.println("Last Line Processed: " + linesProcessed.get() + " " + lastRowProcessed.get());
    }
}

We then wired that up like so:

AtomicInteger linesProcessed = new AtomicInteger(0);
AtomicReference<Map<String, Object>> lastRowProcessed = new AtomicReference<>();
Kill3Handler kill3Handler = new Kill3Handler(linesProcessed, lastRowProcessed);
Signal.handle(new Signal("TRAP"), kill3Handler);

// as we iterate each line we update those variables

linesProcessed.incrementAndGet();
lastRowProcessed.getAndSet(properties); // properties = a representation of the row we're processing

This worked really well for us and we were able to work out that we had a slight problem with some of the data in our CSV file which was causing it to be processed incorrectly. We hadn’t been able to see this by visual inspection, since the CSV files were a few GB in size, so we’d only skimmed a few lines as a sanity check. I didn’t even know you could do this, but it’s a neat trick to keep in mind – I’m sure it shall come in useful again.

Reference: Java: Determining the status of data import using kill signals from our JCG partner Mark Needham at the Mark Needham Blog blog.

Identifying JVM – trickier than expected

In Plumbr we have spent the last month building the foundation for future major improvements. One such building block was the addition of a unique identifier for the JVM, in order to link all sessions from the same JVM together. While it seems a trivial task at the beginning, the complexities surrounding the issue start raising their ugly heads when looking at the output of the JVM-bundled jps command, which lists all currently running Java processes on my machine:

My Precious:tmp my$ jps
1277 start.jar
1318 Jps
1166

If you are unfamiliar with the tool – it lists all Java processes, with the process ID in the left column and the process name in the right column. Apparently the only one bothering to list itself under a meaningful name is jps itself. The other two are not so polite. The one hiding behind the start.jar acronym is a Jetty instance, and the completely anonymous one is actually Eclipse. I mean, really – the biggest IDE in the Java world cannot even bother to list itself under a name in the standard Java tools?

So, with that glimpse of the state of the art in built-in tooling, let’s go back to our requirements at hand. Our current solution identifies a JVM by the process ID + machine name combination. This has one obvious disadvantage – whenever the process dies, its reincarnation is not going to get the same ID from the kernel. So whenever the JVM Plumbr was monitoring was restarted or killed, we lost track and were not able to bind the subsequent invocations together. Apparently this is not reasonable behaviour for a monitoring tool, so we went ahead to look for a better solution.
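For reference, the process ID + machine name combination described above can be obtained from inside the JVM roughly like this. This is a generic sketch of my own, not Plumbr's actual code; note that the value returned by RuntimeMXBean.getName() is not guaranteed by the specification, it merely happens to be "pid@hostname" on HotSpot.

import java.lang.management.ManagementFactory;
import java.net.InetAddress;

public class JvmIdentifier {
    public static void main(String[] args) throws Exception {
        // Typically "pid@hostname" on HotSpot, but the format is JVM-specific
        String jvmName = ManagementFactory.getRuntimeMXBean().getName();
        String pid = jvmName.split("@")[0];
        String host = InetAddress.getLocalHost().getHostName();
        System.out.println("Process-based identifier: " + pid + "@" + host);
    }
}

An identifier derived this way changes on every restart, which is exactly the shortcoming described above.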
The next obvious step was taken three months ago, when we allowed our users to specify a name for the machine via the -Dplumbr.application.name=my-precious-jvm startup parameter. Wise and obvious as it might seem, during those three months just 2% of our users have actually bothered to specify this parameter. So, it was time to go back to the drawing board and see what options we have when trying to automatically bind a unique and human-readable identifier to a JVM instance.

Our first approach was to acquire the name of the class with the main() method and use this as an identifier. Immediate drawbacks were quickly visible when we launched the build in a development box containing four different Jetty instances – immediately you had four different JVMs all binding themselves under the same not-so-unique identifier. The next attempt was to parse the content of the application and identify the application from the deployment descriptors – after all, most of the applications monitored by Plumbr are packaged as WAR/EAR bundles, so it would make sense to use the information present within the bundle. And indeed, the vast majority of engineers have given meaningful names in the <display-name> parameter inside web.xml or application.xml.

This solved part of the problem – when all those four Jetty instances are running apps with different <display-name>s, they would appear as unique. And indeed they did, until our staging environment revealed that this might not always be the case. We had several different Plumbr Server instances on the same machine, using different application servers but deploying the same WAR file with the same <display-name> parameter. As you might guess, this again kills the uniqueness of such an ID.

Another issue raised was the fact that there are application servers running several webapps – what will happen when you have deployed several WAR files to your container? So we had to dig further. To distinguish between several JVMs running the same application on the same machine, we added the launch folder to guarantee the uniqueness of the identifier. But the problem of multiple WARs still persisted. For this we fell back to our original hypothesis, where we used the main class name as the identifier. Some more technical nuances aside – such as distinguishing between the actual hash used for the ID and the user-friendly version of the same hash – we now have a solution which will display something similar to the following in the list of your monitored JVMs:

Machine          JVM                            Up since
artemis.staging  Self Service (WAR)             07.07.2014 11:45
artemis.staging  E-Shop (WAR)                   08.07.2014 18:30
aramis.live      com.ringbearer.BatchProcessor  01.01.2001 00:00

So, we were actually able to come up with a decent solution, falling back to manual naming with the -Dplumbr.application.name parameter if everything else fails. One question still remains – why is something so commonly required by system administrators completely missing from the JVM tooling and APIs?

Reference: Identifying JVM – trickier than expected from our JCG partner Ivo Mägi at the Plumbr Blog blog.

10 Tips for Creating an Agile Product Strategy with the Vision Board

Summary: This post does what its title says: it shares my recommendations for creating an agile product strategy using the Vision Board. It addresses readers who want to find out more about using a product strategy in an agile, dynamic environment, and readers who want to get better at using the Vision Board.

Start with What You Know Now

Traditionally, a product strategy is the result of months of market research and business analysis work. It is intended to be factual, reliable, and ready to be implemented. But in an agile, dynamic environment a product strategy is best created differently: start with your idea, state the vision behind it, and capture your initial strategy. Then identify the biggest risk or the crucial leap-of-faith assumption, address it, and change and improve your strategy. Repeat this process until you are confident that your product strategy is valid. This iterative approach, pioneered by Lean Startup, helps you acquire new knowledge fast and in a goal-oriented, focused manner, addressing the key risks or assumptions. It avoids the danger of carrying out too much or too little research, reduces time-to-market, and increases your chances of creating a successful product.

Focus on what Matters Most

The term product strategy means different things to different people, and strategies come in different shapes and sizes. While that’s perfectly fine, an initial product strategy that forms the basis for subsequent correction and refinement cycles should focus on what matters most: the market, the value proposition, the product’s unique selling points, and the business goals. This is where my Vision Board comes in. I have designed it as the simplest thing that could possibly work to capture the vision and the product strategy. You can download it from romanpichler.com/tools/vision-board for free. For an introduction to the Vision Board, please see my post “The Product Vision Board”.

Create the Product Strategy Collaboratively

A great way to create your product strategy is to employ a collaborative workshop. Invite the key people required to develop, market, sell and service your product, and the senior management sponsor. Such a workshop generates early buy-in, creates shared ownership, and leverages the collective knowledge and creativity of the group. Selling an existing vision and product strategy can be challenging. Co-creation is often the better option. Your initial Vision Board has to be good enough to create a shared understanding of your vision and initial strategy and to identify the biggest risk so you can start re-working your board. But don’t spend too much time on it and don’t try to make it perfect. Your board will change as you correct, improve and refine it.

Let your Vision Guide you

The product vision is the very reason for creating your product: it describes your overarching goal. The vision also forms the basis of your product strategy, as the strategy is the path to reach your overall goal. As the vision is so important, you should capture it before you describe your strategy. Here are four tips to help you capture your vision:

- Make sure that your vision does not restate your product idea but goes beyond it. For instance, the idea for this post is to write about creating an agile product strategy, but my vision is to help you develop awesome and successful products.
- Choose a broad vision, a vision that engages people and that enables you to pivot – to change the strategy while staying true to your vision.
- Make your vision statement concise; capture it in one or two sentences; and ensure that it is clear and easy to understand.
- Try to come up with a motivating and inspiring vision that helps unite everyone working on the product. Choosing an altruistic vision, a vision that focuses on the benefits created for others, can help you with this.

Put the Users First

Once you have captured your vision, work on your strategy by filling in the lower sections of the Vision Board from left to right. Start with the “Target Group” – the people who should use and buy your product – rather than thinking about the cool, amazing product features or the smart business model that will monetise the product. While both aspects are important, capturing the users and customers and their needs forms the basis for making the right product and business model decisions. While it’s tempting to think of all the people who could possibly benefit from your product, it is more helpful to choose a clear-cut and narrow target group instead. Describe the users and customers as clearly as you can and state the relevant demographic characteristics. If there are several segments that your product could serve, then choose the most promising one. Working with a focused target group makes it easier to test your assumptions, to select the right test group and test method, and to analyse the resulting feedback and data. If it turns out that you have picked the wrong group or made the segment too small, then simply pivot to a new or bigger one. A large or heterogeneous target group is usually difficult to test. What’s more, it leads to many diverse needs, which make it difficult to determine a clear and convincing value proposition and, therefore, to market and sell the product.

Clearly State the Main Problem or Benefit

Once you have captured your target users and customers, describe their needs. Consider why they would purchase and use your product. What problem will your product solve? What pain or discomfort will it remove? What tangible benefit will it create? If you identify several needs, then determine the main problem or the main benefit, for instance by putting it at the top of the section. This helps you test your ideas and create a convincing value proposition. I find that if I am not able to clearly describe the main problem or benefit, I don’t really understand why people would want to use and buy a product.

Describe the Essence of your Product

Once you have captured the needs, use the “Product” section to describe your actual product idea. State the three to five key features of your product – those features that make the product desirable and that set it apart from its competitors. When capturing the features, consider not only product functionality but also non-functional qualities such as performance and interoperability, and the visual design. Don’t make the mistake of turning this section into a product backlog. The point is not to describe the product comprehensively or in a great amount of detail, but to identify those features that really matter to the target group.

State your Business Goals and Key Business Model Elements

Use the “Value” section to state your business goals, such as creating a new revenue stream, entering a new market, meeting a profitability goal, reducing cost, developing the brand, or selling another product. Make explicit why it is worthwhile for your company to invest in the product. Prioritise the business goals and state them in the order of their importance.
This will guide your efforts and help you choose the right business model. Once you have captured the business goals, state the key elements of your business model, including the main revenue sources and cost factors. This is particularly important when you work with a new or significantly changed business model.

Extend your Board

The Vision Board’s simplicity is one of its assets, but it can sometimes become restricting: the Product and the Value sections can get crowded, as the board does not separately capture the competitors, the partners, the channels, the revenue sources, the cost factors, and other business model elements. Luckily there is a simple solution: extend your board and add further sections, for instance “Competitors”, “Channels”, “Revenue Streams”, and “Cost Factors”, or download an extended version from my website. But before using an extended Vision Board, make sure that you understand who your customers and users are and why they would buy and use the product. There is no point in worrying about the marketing and sales channels or the technologies if you are not confident that you have identified a problem that’s worthwhile addressing. Additionally, a more complex board usually contains more risks and assumptions. This makes it harder to identify the biggest risk and leap-of-faith assumption.

Put it to the Test

Capturing your vision and initial product strategy on the Vision Board is great. But it’s only the beginning of a journey in search of a valid strategy, as your initial board is likely to be wrong. After all, you have based the board on what you know now rather than on extensive market research work. You should therefore review your initial Vision Board carefully, identify its critical risks or leap-of-faith assumptions, and select the most crucial risk or assumption. Determine the right test group, for instance selected target users, and the right test method, such as problem interviews. Carry out the test, analyse the feedback or data collected, and change your Vision Board with the newly gained knowledge. If you find the key risks and assumptions hard to identify, then your board may be too vague. If that’s the case, then narrow down the target group, select the main problem or benefit, reduce the key features to no more than five, identify the main business benefit, and remove everything else. Your board may change significantly as you iterate over your strategy, and you may have to pivot – to choose a different strategy to make your vision come true. If your Vision Board does not change at all, then you should stop and reflect: are you addressing the right risks in the right way, and are you analysing the feedback and data effectively?

Reference: 10 Tips for Creating an Agile Product Strategy with the Vision Board from our JCG partner Roman Pichler at the Pichler’s blog blog.

How Pairing & Swarming Work & Why They Will Improve Your Products

If you’ve been paying attention to agile at all, you’ve heard these terms: pairing and swarming. But what do they mean? What’s the difference?

When you pair, two people work together to finish a piece of work. Traditionally, two developers paired. The “driver” wrote the piece of work. The other person, the “navigator,” observed the work, providing review as the work was completed. I first paired as a developer in 1982 (kicking and screaming). I later paired in the late 1980s as the tester in several developer-tester pairs. I co-wrote Behind Closed Doors: Secrets of Great Management with Esther Derby as a pair.

There is some data that says that when we pair, the actual coding takes about 15-20% longer. However, because we have built-in code review, there is much less debugging at the end. When Esther and I wrote the book, we threw out the original two (boring) drafts and rewrote the entire book in six weeks. We were physically together. I had to learn to stop talking. (She is very funny when she talks about this.) We both had to learn each other’s idiosyncrasies about indentations and deletions when writing. That’s what you do when you pair. However, the book we wrote and published is nothing like what the original drafts were. Nothing. We did what pairs do: we discussed what we wanted a section to look like. One of us wrote for a few minutes. That person stopped. We changed. The other person wrote. Maybe we discussed as we went, but we paired. After about five hours, we were done for the day. Done. We had expended all of our mental energy.

That’s pairing. Two developers. One work product. Not limited to code, okay?

Now, let’s talk about swarming. Swarming is when the entire team says, “Let’s take this story and get it to done, all together.” You can think of swarming as pairing on steroids. Everyone works on the same problem. But how? Someone will have to write code. Someone will have to write tests. The question is this: in what order, and who navigates? What does everyone else do?

When I teach my agile and lean workshop, I ask the participants to select one feature that the team can complete in one hour. Everyone groans. Then they do it. Some teams do it by having the product owner explain what the feature is in detail. Then the developers pair and the tester(s) write tests, both automated and manual. They all come together at about the 45-minute mark. They see if what they have done works. (It often doesn’t.) Then the team starts to work together, to really swarm. “What if we do this here? How about if this goes there?” Some teams work together from the beginning. “What is the first thing we can do to add value?” (That is an excellent question.) They might move into smaller pairs, if necessary. Maybe. Maybe they need touchpoints every 15-20 minutes to re-orient themselves and say, “Where are we?” They find that if they ask for feedback from the product owner, that works well. If you first ask, “What is the first thing we can do to add value and complete this story?” you are probably on the right track.

Why Do Pairing and Swarming Work So Well?

Both pairing and swarming:
- Build feedback into development of the task at hand. No one works alone. Can the people doing the work still make a mistake? Sure. But it’s less likely. Someone will catch the mistake.
- Create teamwork. You get to know someone well when you work with them that intensely.
- Expose the work. You know where you are.
- Reduce the work in progress. You are less likely to multitask, because you are working with someone else.
- Encourage you to take no shortcuts, at least in my case. Because someone was watching me, I was on my best professional behavior. (Does this happen to you, too?)

How do Pairing and Swarming Improve Your Products?

The effect of pairing and swarming is what improves your products. The built-in feedback is what creates less debugging downstream. The improved teamwork helps people work together. When you expose the work in progress, you can measure it, see it, and have no surprises. With reduced work in progress, you can increase your throughput. You have better chances for craftsmanship.

You don’t have to be agile to try pairing or swarming. You can pair or swarm on any project. I bet you already have, if you’ve been on a “tiger team,” where you needed to fix something for a “Very Important Customer,” or you had a “Critical Fix” that had to ship ASAP. If you had all eyes on one problem, you might have paired or swarmed. If you are agile and you are not pairing or swarming, consider adding either or both to your repertoire, now.

Reference: How Pairing & Swarming Work & Why They Will Improve Your Products from our JCG partner Johanna Rothman at the Managing Product Development blog.

New in JAX-RS 2.0 – @BeanParam annotation

JAX-RS is awesome, to say the least, and one of my favorites! Why?

- Feature rich
- Intuitive (hence the learning curve is not as steep)
- Easy to use and develop with
- Has great reference implementations – Jersey, RESTEasy etc.

There are enough JAX-RS fans out there who can add to this! JAX-RS 2.0 is the latest version of the specification (JSR 339) and it was released along with Java EE 7.

Life without @BeanParam

Before JAX-RS 2.0, in order to pass/inject information from an HTTP request into JAX-RS resource implementation methods, one could:

- Include multiple method arguments annotated with @FormParam, @PathParam, @QueryParam etc.
- Or have a model class backed by JAXB/JSON or a custom MessageBodyReader implementation for the JAX-RS provider to be able to unmarshal the HTTP message body into a Java object – read more about this in one of my previous posts.

This means that something like an HTML5-based client would need to extract the FORM input, convert it into a JSON or XML payload and then POST it over the wire.

Simplification in JAX-RS 2.0

This process has been simplified by the introduction of the @BeanParam annotation. It helps inject custom value/domain/model objects into fields or method parameters of JAX-RS resource classes. In case you want to refer to the code (pretty simple) or download the example and run it yourself, here is the GitHub link. All we need to do is:

- Annotate the fields of the model (POJO) class with the injection annotations that already exist, i.e. @PathParam, @QueryParam, @HeaderParam, @MatrixParam etc. – basically any of the @xxxParam metadata types, and
- Make sure that we include the @BeanParam annotation while injecting a reference variable of this POJO (only on METHOD, PARAMETER or FIELD).

The JAX-RS provider automatically constructs and injects an instance of your domain object, which you can now use within your methods. Just fill in the form information and POST it! That’s it… Short and sweet! Keep coding!

Reference: New in JAX-RS 2.0 – @BeanParam annotation from our JCG partner Abhishek Gupta at the Object Oriented.. blog.
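To illustrate the @BeanParam usage described in the article above, here is a minimal sketch of my own; the class, path and field names are hypothetical and are not taken from the linked GitHub project.

import javax.ws.rs.BeanParam;
import javax.ws.rs.Consumes;
import javax.ws.rs.FormParam;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Plain POJO whose fields carry the usual @xxxParam injection annotations
public class RegistrationBean {

    @FormParam("name")
    private String name;

    @FormParam("email")
    private String email;

    @HeaderParam("User-Agent")
    private String userAgent;

    public String getName() { return name; }
    public String getEmail() { return email; }
    public String getUserAgent() { return userAgent; }
}

@Path("registrations")
public class RegistrationResource {

    @POST
    @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
    public Response register(@BeanParam RegistrationBean registration) {
        // The JAX-RS provider has already populated the bean from the request
        return Response.ok("Registered " + registration.getName()).build();
    }
}

An HTML form can then POST its fields directly to the resource, without the client first converting them into JSON or XML.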

Keeping things DRY: Method overloading

A good, clean application design requires discipline in keeping things DRY:

Everything has to be done once. Having to do it twice is a coincidence. Having to do it three times is a pattern. — An unknown wise man

Now, if you’re following the Xtreme Programming rules, you know what needs to be done when you encounter a pattern: refactor mercilessly. Because we all know what happens when you don’t.

Not DRY: Method overloading

One of the least DRY things you can do that is still acceptable is method overloading – in those languages that allow it (unlike Ceylon or JavaScript). Being an internal domain-specific language, the jOOQ API makes heavy use of overloading. Consider the type Field (modelling a database column):

public interface Field<T> {

    // [...]

    Condition eq(T value);
    Condition eq(Field<T> field);
    Condition eq(Select<? extends Record1<T>> query);
    Condition eq(QuantifiedSelect<? extends Record1<T>> query);

    Condition in(Collection<?> values);
    Condition in(T... values);
    Condition in(Field<?>... values);
    Condition in(Select<? extends Record1<T>> query);

    // [...]
}

So, in certain cases, non-DRY-ness is inevitable, also to a given extent in the implementation of the above API. The key rule of thumb here, however, is to always have as few implementations as possible, also for overloaded methods. Try calling one method from another. For instance, these two methods are very similar:

Condition eq(T value);
Condition eq(Field<T> field);

The first method is a special case of the second one, where jOOQ users do not want to explicitly declare a bind variable. It is literally implemented as such:

@Override
public final Condition eq(T value) {
    return equal(value);
}

@Override
public final Condition equal(T value) {
    return equal(Utils.field(value, this));
}

@Override
public final Condition equal(Field<T> field) {
    return compare(EQUALS, nullSafe(field));
}

@Override
public final Condition compare(Comparator comparator, Field<T> field) {
    switch (comparator) {
        case IS_DISTINCT_FROM:
        case IS_NOT_DISTINCT_FROM:
            return new IsDistinctFrom<T>(this, nullSafe(field), comparator);

        default:
            return new CompareCondition(this, nullSafe(field), comparator);
    }
}

As you can see:

- eq() is just a synonym for the legacy equal() method
- equal(T) is a more specialised, convenience form of equal(Field<T>)
- equal(Field<T>) is a more specialised, convenience form of compare(Comparator, Field<T>)
- compare() finally provides access to the implementation of this API

All of these methods are also part of the public API and can be called by the API consumer directly, which is why the nullSafe() check is repeated in each method.

Why all the trouble? The answer is simple:

- There is only very little possibility of a copy-paste error throughout all the API.
- ... because the same API has to be offered for ne, gt, ge, lt, le.
- No matter what part of the API happens to be integration-tested, the implementation itself is certainly covered by some test.
- This way, it is extremely easy to provide users with a very rich API with lots of convenience methods, as users do not want to remember how these more general-purpose methods (like compare()) really work.

The last point is particularly important, and because of risks related to backwards compatibility, not always followed by the JDK, for instance. In order to create a Java 8 Stream from an Iterator, you have to go through all this hassle, for instance:

// Aagh, my fingers hurt...
   StreamSupport.stream(iterator.spliterator(), false);
// ^^^^^^^^^^^^^        ^^^^^^^^^^^^^^^^^^^^^^  ^^^^^
//       |                        |               |
// Not Stream!                    |               |
//                                |               |
// Hmm, Spliterator. Sounds like  |               |
// Iterator. But what is it? -----+               |
//                                                |
// What's this true and false?                    |
// And do I need to care? -----------------------+

When, intuitively, you’d like to have:

// Not Enterprise enough
iterator.stream();
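The kind of utility wrapper that such client code ends up writing might look like this; it is a sketch of my own, not code from the jOOQ code base, and it applies the same delegation rule described above (a convenience overload calling the more general one):

import java.util.Iterator;
import java.util.Spliterator;
import java.util.Spliterators;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

public final class Streams {

    private Streams() {}

    // Convenience overload: most callers do not care about parallelism
    public static <T> Stream<T> stream(Iterator<T> iterator) {
        return stream(iterator, false);
    }

    // The more general method that the convenience overload delegates to
    public static <T> Stream<T> stream(Iterator<T> iterator, boolean parallel) {
        Spliterator<T> spliterator =
            Spliterators.spliteratorUnknownSize(iterator, Spliterator.ORDERED);
        return StreamSupport.stream(spliterator, parallel);
    }
}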
In other words, subtle Java 8 Streams implementation details will soon leak into a lot of client code, and many new utility functions will wrap these things again and again. See Brian Goetz’s explanation on Stack Overflow for details.

On the flip side of delegating overload implementations, it is of course harder (i.e. more work) to implement such an API. This is particularly cumbersome if an API vendor also allows users to implement the API themselves (e.g. JDBC). Another issue is the length of stack traces generated by such implementations. But we’ve shown before on this blog that deep stack traces can be a sign of good quality. Now you know why.

Takeaway

The takeaway is simple. Whenever you encounter a pattern, refactor. Find the most common denominator, factor it out into an implementation, and see that this implementation is hardly ever used by delegating single responsibility steps from method to method. By following these rules, you will:

- Have fewer bugs
- Have a more convenient API

Happy refactoring!

Reference: Keeping things DRY: Method overloading from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.