
What's New Here?


Vagrant with Docker provider, using WildFly and Java EE 7 image

What is Vagrant?

Vagrant is a simplified and portable way to create virtual development environments. It works with multiple virtualization tools such as VirtualBox, VMWare, AWS, and more. It also works with multiple configuration tools such as Ansible, Chef, Puppet, or Salt. No more “works on my machine”!

The usual providers are, well, usual. Starting with version 1.6, Docker containers can be used as one of the backend providers as well. This allows your development environment to be based on Docker containers as opposed to full virtual machines.

The complete development environment definition, such as the type of machine, the software that needs to be installed, networking, and other configuration information, is defined in a text file, typically called Vagrantfile. Based upon the provider, Vagrant creates the virtual development environment.

Getting Started with Vagrant

The Getting Started guide is really simple and easy to follow to get your feet wet with Vagrant. Once your basic definition is created, the environment can be started with a simple command:

```shell
vagrant up
```

The default provider for Vagrant is VirtualBox. An alternate provider can be specified on the CLI:

```shell
vagrant up --provider=docker
```

This will spin up the Docker container based upon the image specified in the Vagrantfile.

Packaging Format

Vagrant environments are packaged as boxes. You can search the publicly available list of boxes to find the box of your choice, or even create your own box and add it to the central repository using the following command:

```shell
vagrant box add USER/BOX
```

Vagrant with WildFly Docker image

After learning the basic commands, let's see what it takes to start a WildFly Docker image using Vagrant.
The Vagrantfile is shown here:

```ruby
Vagrant.configure(2) do |config|
  config.vm.provider "docker" do |d|
    # Define the Docker image
    d.image = "jboss/wildfly:latest"
  end
end
```

Clone the git repo and change to the docker-wildfly directory. The Vagrant image can be started using the following command:

```shell
vagrant up --provider=docker
```

which shows output like this:

```
docker-wildfly> vagrant up
Bringing machine 'default' up with 'docker' provider...
==> default: Docker host is required. One will be created if necessary...
    default: Vagrant will now create or start a local VM to act as the Docker
    default: host. You'll see the output of the `vagrant up` for this VM below.
    default:
    default: Box 'mitchellh/boot2docker' could not be found. Attempting to find and install...
    default: Box Provider: virtualbox
    default: Box Version: >= 0
    default: Loading metadata for box 'mitchellh/boot2docker'
    default: URL:
    default: Adding box 'mitchellh/boot2docker' (v1.2.0) for provider: virtualbox
    default: Downloading:
    default: Successfully added box 'mitchellh/boot2docker' (v1.2.0) for 'virtualbox'!
    default: Importing base box 'mitchellh/boot2docker'...
    default: Matching MAC address for NAT networking...
    default: Checking if box 'mitchellh/boot2docker' is up to date...
    default: Setting the name of the VM: docker-host_default_1421277252359_12510
    default: Fixed port collision for 22 => 2222. Now on port 2203.
    default: Clearing any previously set network interfaces...
    default: Preparing network interfaces based on configuration...
    default: Adapter 1: nat
    default: Forwarding ports...
    default: 2375 => 2375 (adapter 1)
    default: 22 => 2203 (adapter 1)
    default: Running 'pre-boot' VM customizations...
    default: Booting VM...
    default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address:
    default: SSH username: docker
    default: SSH auth method: private key
    default: Warning: Connection timeout. Retrying...
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if its present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
    default: Machine booted and ready!
==> default: Syncing folders to the host VM...
    default: Installing rsync to the VM...
    default: Rsyncing folder: /Users/arungupta/workspaces/vagrant-images/docker-wildfly/ => /var/lib/docker/docker_1421277277_78698
==> default: Warning: When using a remote Docker host, forwarded ports will NOT be
==> default: immediately available on your machine. They will still be forwarded on
==> default: the remote machine, however, so if you have a way to access the remote
==> default: machine, then you should be able to access those ports there. This is
==> default: not an error, it is only an informational message.
==> default: Creating the container...
```

This will not work until #5187 is fixed, but at least this blog explained the main concepts of Vagrant.

Reference: Vagrant with Docker provider, using WildFly and Java EE 7 image from our JCG partner Arun Gupta at the Miles to go 2.0 … blog.

Which UX Skills should Product Owners and Product Managers have?

Summary: Providing a great user experience is a must for many digital products, and user experience (UX) design has consequently become prominent in recent years. Does this mean that product owners and product managers should become UX experts? Who should design the UX, and which UX skills should product owners and product managers have? Read on to find out my recommendations.

I often get asked how much product owners and product managers should know about user experience (UX) design and who should do the UX work on an agile team. To answer this question, we have to consider what you are responsible for as the product owner or the product manager: to ensure that your product creates the desired value for its customers and users and for the business. Your job is not to design a great user experience.

Does this mean that product owners and product managers should not care about the user experience? Of course not! Many digital products must provide a great user experience to achieve product success. Take for example Monument Valley, a beautifully designed computer game. If the user interaction, the graphics, the animations, and the music of the game were not right, then it would not be enjoyable to play. As a consequence, not many people would make in-app purchases, and the product would not generate enough revenue for Ustwo, the company that develops the game. For some products, creating a great user experience is the main differentiator, the quality that sets them apart from the competition and helps them become a success.

If user experience design is important but not a core responsibility of the product owner or product manager, whose job is it? I prefer having one or more qualified user experience designers on the agile team, the cross-functional team that designs, builds, and tests the product.
In a Scrum context, that's the development team in the picture below.

As the picture above shows, I view it as the job of the development team to create a great product in the right way: a product with the right user experience and the right quality. Its members should collaborate to build the best possible product within the timeframe and the budget available. As the product owner or product manager, you should know enough about determining and characterising the right user experience so that you can guide the team and effectively collaborate with its members. You should be able to figure out who the product is for, why people (would) buy and use it, and what the desired business benefits are. You should also be able to help describe the desired user experience, to validate it, and to enhance it, as the list of UX-related skills below shows.

- User models: You should be able to write realistic, helpful personas that contain a name, a picture, the relevant characteristics, and goals; and determine the primary persona, the character the product is mainly created for. (Techniques: Personas)
- User interaction: You can describe the users' end-to-end interactions with the product and the steps required to achieve a specific goal. (Techniques: User journeys, scenarios, story maps, workflow diagrams)
- Functionality: You are able to characterise the product's functionality in the form of stories, including progressively decomposing bigger stories into smaller ones and writing acceptance criteria; you know how to write constraint stories to capture non-functional properties such as performance or interoperability. (Techniques: Epics, user stories, constraint stories)
- Product backlog: You address key UX risks early on, and you use a backlog tool that allows you to effectively describe and test the desired user experience, for instance, my Product Canvas. (Techniques: Prioritisation, Product Canvas)
- User research and product validation: You are able to choose the right research or validation technique to learn about the desired user experience, to test UX assumptions, and to address UX risks; and to apply the method effectively. (Techniques: Direct observation, usability tests, product demos, A/B tests)

The list above is not intended to be complete or definitive. It is simply meant to help you understand which skills are likely to benefit you in creating a product with a great user experience, and to help you spot the ones you might be lacking. Don't get hung up on the techniques stated. The point is not to use a specific method but to be able to do the necessary work effectively. If, for instance, personas don't work well for you because you manage a technical product like a platform, then actors may be more helpful. Similarly, don't rely on a single technique that you always use. Mix and match different ones instead, particularly for the user research and the product validation work.

Reference: Which UX Skills should Product Owners and Product Managers have? from our JCG partner Roman Pichler at the Pichler's blog blog.

Acision launches “forgeathon” – its first WebRTC app challenge

Acision launches “forgeathon” – its first online richer communications (WebRTC) app challenge for developers globally. Join forgeathon and let Acision help you take your application or service global!

Reading, UK – 6th January 2015: Acision, the global leader in secure, mobile engagement services and an industry innovator in WebRTC technology, today announces the launch of its first online “forgeathon” – a Richer Communications App Challenge for developers worldwide. Using the forge by Acision SDK toolkit, entrants are invited to either create a new Android, iOS or web app or service, or enhance an existing one*, by taking advantage of one of the richest, next generation communication capabilities on the market today.

Forgeathon online app challenge

As an online event, teams, individual developers, students and entrepreneurs from around the world are able and encouraged to take part in the challenge. The first prize will be all-expenses** paid trips for the winner to showcase their app or service in person during two of the world's largest and leading tech events, Mobile World Congress 2015 and SXSW Interactive 2015 in March – providing a platform to publicise the app to leading influencers and audiences globally. The challenge officially kicks off today, 6th January 2015, and runs for six weeks, with the closing date for submissions on 19th February. Winners will be announced on 23rd February.

As a market leader in carrier-grade rich messaging and engagement services, and a provider of APIs and SDKs which create today's richest apps, Acision, in conjunction with BeMyApp, is organising the forgeathon online challenge to spotlight how developers can easily integrate the latest and most advanced communication features into new or existing Android, iOS and web apps using the forge SDK.

As a flexible communications framework, the forge by Acision platform enables accelerated application development with secure, rich and real-time communications, including IP messaging, presence, HD voice and video chat, all powered by WebRTC technology. The forge SDK allows developers to quickly build innovative new services for B2B or B2C purposes, so that businesses can enrich and enhance their customer engagement capabilities in mobile apps and on websites.

Eric Bilange, Head of Rich Engagement Services at Acision, commented: “The rich communication capabilities and WebRTC technology offered via the forge SDK offer seamless, secure, low-latency services that can be used for apps dealing with customer relations, call centres, education, training, healthcare, banking, finance, travel and entertainment…the options are endless! We want to open up our SDK as part of this challenge to showcase the exciting ways in which our rich communication tools can be integrated into something without boundaries or limitations; we're looking for something innovative, something smart, something that really stands out, and in return we will support the winner to take their mobile app or web service global, by providing them a stage to present their prototype to the media and influencers, on the Acision booth at two of the world's leading tech events, Mobile World Congress and SXSW Interactive!”

With the challenge officially launched today, there will be two webinars held during the first four weeks, led by Acision evangelist and WebRTC guru Peter Dunkley, giving participants support and guidance on everything there is to know about the forge SDK, so they are in the best position to submit a great final prototype for judging. Along with the winner, there will also be two runners-up announced, who will both receive a drone and free access*** to the forge SDK by Acision for one year.

Bilange concluded: “By leveraging Acision's rich communication toolkit, developers can really bring their application ideas to life. Our aim with forgeathon is to spread the word about the possibilities of WebRTC technology, and build a community of developers that continually create WebRTC based apps.”

Register for forgeathon

To register or learn more about forgeathon – the Richer Communications (WebRTC) App Challenge – click here. For the latest news and updates related to the forgeathon, follow Acision's Twitter channel @acision.

About Acision

Acision connects the world by powering relevant, seamless mobile engagement services that interoperate across all IP platforms and enrich the user experience, creating value and new communication opportunities for carriers, enterprises and consumers across the world. For more information, visit Acision and forge.

Press contacts: Nikki Brown, Acision. Tel: +44 118 9308 620. Email:

About BeMyApp

BeMyApp is specialised in developer relations, and organises developer events such as online challenges, hackathons, incubator workshops and more, all around the world. Contact: Maud Levy, BeMyApp. Tel: +33 634 416 895. Email:

Notes

* Participants must ensure they have the requisite permissions and rights to enhance, modify or develop third party intellectual property rights.
** Terms, conditions and limitations apply.
*** Terms, conditions and limitations apply.

Apache FOP Integration with Eclipse and OSGi

Apache FOP is an open source print processor driven by XSL formatting objects (XSL-FO). It can be quite useful for transforming data objects into a PDF, for example. However, it turned out to be somewhat cumbersome to get it integrated into PDE and finally up and running as an OSGi service. Because of this I provided a P2 repository that contains the necessary bundles within a single feature. This way the PDE target setup got much easier. The following sections explain how to use it.

Apache FOP

As stated by the documentation, Apache FOP ‘is a Java application that reads a formatting object (FO) tree and renders the resulting pages to a specified output. Output formats currently supported include PDF, PS, PCL, AFP, XML (area tree representation), Print, AWT and PNG, and to a lesser extent, RTF and TXT. The primary output target is PDF’. Of course it is possible to embed the processor into a Java program. Based on JAXP, Apache FOP relies on SAX events to receive the XSL-FO input document. A basic usage snippet looks somewhat like this:

```java
// imports: org.apache.fop.apps.Fop, org.apache.fop.apps.FopFactory,
// org.apache.fop.apps.MimeConstants, javax.xml.transform.*,
// javax.xml.transform.stream.StreamSource, javax.xml.transform.sax.SAXResult

InputStream in = ...   // the FO tree to process
OutputStream out = ... // PDF destination
FopFactory fopFactory = FopFactory.newInstance();
try {
  Fop fop = fopFactory.newFop( MimeConstants.MIME_PDF, out );
  TransformerFactory factory = TransformerFactory.newInstance();
  Transformer transformer = factory.newTransformer();
  Source source = new StreamSource( in );
  Result result = new SAXResult( fop.getDefaultHandler() );
  transformer.transform( source, result );
} finally {
  out.flush();
}
```

For a detailed explanation of Apache FOP embedded usage please refer to the online documentation.

Dependency Hell

While the snippet looks straightforward, integration in Eclipse/OSGi was not that easy. Maybe I behaved stupidly, but it took me almost two days to assemble a target definition which eventually did the trick.
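As an aside, the JAXP plumbing in the snippet above can be exercised without any FOP bundles at all. The following standalone sketch (my own illustration, not from the original post) swaps FOP's SAX handler for a plain text result, so it runs on a bare JDK:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltPipeline {

    // Same Transformer wiring as the FOP snippet, but the Result is a plain
    // StringWriter instead of a SAXResult feeding FOP's default handler.
    public static String transform(String xml, String xslt) throws Exception {
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(xml)),
                              new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical sample data, loosely inspired by the team listing example.
        String xml = "<team><member>Ada</member><member>Linus</member></team>";
        String xslt =
            "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "<xsl:output method='text'/>"
          + "<xsl:template match='member'>"
          + "<xsl:value-of select='.'/><xsl:text> </xsl:text>"
          + "</xsl:template>"
          + "</xsl:stylesheet>";
        System.out.println(transform(xml, xslt)); // prints "Ada Linus "
    }
}
```

Once this part works, plugging FOP back in is only a matter of replacing the `StreamResult` with the `SAXResult` from the snippet above.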
To avoid this problem in the future and give others who may run into the same trap a better start, I created the P2 repository mentioned at the beginning. In order to use it, simply add the repository location as a Software Site to your Eclipse target definition.

Now define the dependencies of the bundle that should contain the processor. These dependencies are org.apache.servicemix.bundles.fop and org.apache.servicemix.bundles.xmlgraphics-commons. Once those are in place the code above compiles. It is easy to develop a service class with a format method for PDF generation, with a signature as shown here:

```java
public class FopService {

  private final FopFactory fopFactory;

  public FopService() {
    this.fopFactory = FopFactory.newInstance();
  }

  public void format( InputStream input, OutputStream output, InputStream stylesheet ) {
    // similar transformation code to the snippet above,
    // here applying the given XSL stylesheet to the XML input
    try {
      Fop fop = fopFactory.newFop( MimeConstants.MIME_PDF, output );
      Transformer transformer = TransformerFactory.newInstance()
        .newTransformer( new StreamSource( stylesheet ) );
      transformer.transform( new StreamSource( input ),
                             new SAXResult( fop.getDefaultHandler() ) );
    } catch( Exception cause ) {
      throw new IllegalStateException( cause );
    }
  }
}
```

I knitted a simple usage example project that provides more details. The project contains a target definition that already integrates Apache FOP. After importing the project, resolve and set its target definition in your Eclipse workspace. Run the ‘FOP example’ launch configuration. Last but not least, open a browser and go to the URL http://localhost:10080/services/pdf. A freshly created PDF should be ready for download.

The xml and xsl documents used to generate the PDF are the same as those processed by the ExampleXML2PDF class of the embedding Apache FOP examples. The PDF contains a simple listing of members of a fictional development team.

Be aware that ‘Apache FOP may currently not be completely thread safe’ (see the Multithreading FOP section of Apache FOP: Embedding). Naturally a real world scenario must take this into account, in particular if running in a multithreaded server environment.

Wrap Up

Although the road was a bit bumpy at the beginning, the Apache FOP integration works fine now.
So if you want to check out the example yourself, you can find both the example project and the P2 repository online. In case you run into a problem or have any questions or suggestions, you might add an issue at the GitHub project or leave a comment in the section below.

Reference: Apache FOP Integration with Eclipse and OSGi from our JCG partner Frank Appel at the Code Affine blog.

Mixing memory unit messages

One of my pet hates is developers messing up units of memory. In general, the difference doesn't matter much, or you can work out what they meant to say, but it does annoy me when developers who should know better appear to be confused about what they are trying to say.

From Wikipedia's Byte page, here is a cheat sheet of units:

- "b" means bits
- "B" means bytes
- "k" or kilo- is a standard prefix meaning 1000
- "K" or "Ki", sometimes called kibi-, is 1024
- "m" or milli- means 1/1000th
- "M" or mega- means 1000^2
- "Mi" or mebi- means 1024^2

And as concrete units:

- mb (milli-bits): 1/8000th of a byte
- mB (milli-bytes): 1/1000th of a byte
- kb (kilo-bits): 125 bytes
- kB (kilo-bytes): 1000 bytes
- Kb or Kib (kibi-bits): 128 bytes
- KB or KiB (kibi-bytes): 1024 bytes
- Mb (mega-bits): 125,000 bytes
- MB (mega-bytes): 1,000,000 bytes
- Mib (mebi-bits): 131,072 bytes
- MiB (mebi-bytes): 1,048,576 bytes

What annoys me is when professionals confuse these terms.

In UNIX

In top on Unix it states:

```
KiB Mem:  32893900
```

but when I do "head -1 /proc/meminfo" it states:

```
MemTotal:       32893900 kB
```

This is 2.4% less. I suspect top is correct, as it is closer to the amount of memory installed, and its use of "KiB" lends credibility, but I can't be sure.

In the JVM

The default translation for "KBYTES" is "kbytes", but in Japanese and Chinese it is "KB", which is 2.4% more. While the JVM appears to be using KiB, or kibibytes, everywhere, it refers to "KiB" only three times, but uses "kB" in two places, "Kb" in seven places, "KB" in 127 places and "kilobytes" in three cases. Similarly "MiB" appears 4 times, "MB" in 87 places, "Mb" three times and "mb" three times.

Puzzle

Question: A computer program writes to memory at 49 mb/s, to disk at 50 mb/s and to the network at 100 mb/s. To which is it writing data at the highest rate: the memory, the disk or the network?

Answer: It is probably writing to memory most and the network least. This is because the sentence might be read as "… to memory at 49 MiB/s, to disk at 50 MB/s and to the network at 100 Mb/s", and one MiB/s is almost 8.4 Mb/s.

Conclusion

If you don't make clear which units you have in mind, it shouldn't be surprising if someone reading your output is confused as well. I encourage everyone to use standard units in their code, so that when you see units you know exactly what they mean.

Reference: Mixing memory unit messages from our JCG partner Peter Lawrey at the Vanilla Java blog.
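The arithmetic behind these numbers can be checked in a few lines of Java (a throwaway sketch of mine, not from the original post):

```java
public class MemoryUnits {

    // Decimal vs binary prefixes, in bytes.
    static final long kB  = 1_000L;
    static final long KiB = 1_024L;
    static final long MB  = 1_000_000L;
    static final long MiB = 1_024L * 1_024L;

    // A number of KiB (the top unit) expressed in kB (the /proc/meminfo unit).
    static long kibToKb(long kib) {
        return kib * KiB / kB;
    }

    // One MiB/s expressed in megabits per second: bytes to bits (x8),
    // then binary mega to decimal mega (x 1024^2 / 10^6).
    static double mibPerSecInMegabitsPerSec() {
        return (MiB * 8) / (double) MB;
    }

    public static void main(String[] args) {
        // 32,893,900 KiB is about 2.4% more memory than 32,893,900 kB.
        System.out.println(kibToKb(32_893_900L));       // 33683353
        System.out.println(mibPerSecInMegabitsPerSec()); // 8.388608
    }
}
```

The second figure is where the "almost 8.4 Mb/s" in the puzzle answer comes from.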

Fork/Join Framework vs. Parallel Streams vs. ExecutorService: The Ultimate Fork/Join Benchmark

How does the Fork/Join framework act under different configurations? Just like the upcoming episode of Star Wars, there has been a lot of excitement mixed with criticism around Java 8 parallelism. The syntactic sugar of parallel streams brought some hype, almost like the new lightsaber we've seen in the trailer. With many ways now to do parallelism in Java, we wanted to get a sense of the performance benefits and the dangers of parallel processing. After over 260 test runs, some new insights rose from the data and we wanted to share these with you in this post.

ExecutorService vs. Fork/Join Framework vs. Parallel Streams

A long time ago, in a galaxy far, far away…. I mean, some 10 years ago, concurrency was available in Java only through 3rd party libraries. Then came Java 5, which introduced the java.util.concurrent library as part of the language, strongly influenced by Doug Lea. The ExecutorService became available and provided us with a straightforward way to handle thread pools. Of course java.util.concurrent keeps evolving, and in Java 7 the Fork/Join framework was introduced, building on top of the ExecutorService thread pools. With Java 8 streams, we've been provided an easy way to use Fork/Join that remains a bit enigmatic for many developers. Let's find out how they compare to one another.

We've taken 2 tasks, one CPU-intensive and the other IO-intensive, and tested 4 different scenarios with the same basic functionality. Another important factor is the number of threads we use for each implementation, so we tested that as well. The machine we used had 8 cores available, so we had variations of 4, 8, 16 and 32 threads to get a sense of the general direction the results are going. For each of the tasks, we've also tried a single threaded solution, which you'll not see in the graphs since, well, it took much much longer to execute. To learn more about exactly how the tests ran you can check out the groundwork section below. Now, let's get to it.
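To give a feel for the three APIs before the numbers, here is a toy sketch of mine (a sum of squares standing in for the real benchmark tasks, which are linked at the end of the post):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.Future;
import java.util.concurrent.RecursiveTask;
import java.util.stream.LongStream;

public class SumOfSquares {

    // 1. ExecutorService: split the range into one chunk per thread by hand.
    static long withExecutor(long n, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            long chunk = n / threads + 1;
            List<Future<Long>> results = new ArrayList<>();
            for (long start = 1; start <= n; start += chunk) {
                final long from = start;
                final long to = Math.min(n, start + chunk - 1);
                Callable<Long> task =
                        () -> LongStream.rangeClosed(from, to).map(i -> i * i).sum();
                results.add(pool.submit(task));
            }
            long total = 0;
            for (Future<Long> f : results) {
                total += f.get();
            }
            return total;
        } finally {
            pool.shutdown();
        }
    }

    // 2. Fork/Join directly: recursively split until the range is small enough.
    static class SquareSum extends RecursiveTask<Long> {
        final long from, to;
        SquareSum(long from, long to) { this.from = from; this.to = to; }

        @Override
        protected Long compute() {
            if (to - from < 10_000) {
                return LongStream.rangeClosed(from, to).map(i -> i * i).sum();
            }
            long mid = (from + to) / 2;
            SquareSum left = new SquareSum(from, mid);
            left.fork();
            SquareSum right = new SquareSum(mid + 1, to);
            return right.compute() + left.join();
        }
    }

    static long withForkJoin(long n) {
        return ForkJoinPool.commonPool().invoke(new SquareSum(1, n));
    }

    // 3. Parallel stream: Fork/Join under the hood, one line of code.
    static long withParallelStream(long n) {
        return LongStream.rangeClosed(1, n).parallel().map(i -> i * i).sum();
    }

    public static void main(String[] args) throws Exception {
        long n = 1_000_000L;
        System.out.println(withExecutor(n, 8));    // 333333833333500000
        System.out.println(withForkJoin(n));       // 333333833333500000
        System.out.println(withParallelStream(n)); // 333333833333500000
    }
}
```

All three produce the same result; the difference is how much splitting, scheduling, and joining code you have to write yourself.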
Indexing a 6GB file with 5.8M lines of text

In this test, we've generated a huge text file and created similar implementations for the indexing procedure. Here's what the results looked like:

** Single threaded execution: 176,267 msec, or almost 3 minutes.
** Notice the graph starts at 20,000 milliseconds.

1. Fewer threads will leave CPUs unutilized, too many will add overhead

The first thing you notice in the graph is the shape the results are starting to take – you can get an impression of how each implementation behaves from only these 4 data points. The tipping point here is between 8 and 16 threads: since some threads are blocking on file IO, adding more threads than cores helped utilize them better. When 32 threads are in, performance got worse because of the additional overhead.

2. Parallel streams are the best! Almost 1 second better than the runner-up: using Fork/Join directly

Syntactic sugar aside (lambdas! we didn't mention lambdas), we've seen parallel streams perform better than the Fork/Join and the ExecutorService implementations. 6GB of text indexed in 24.33 seconds. You can trust Java here to deliver the best result.

3. But… parallel streams also performed the worst: the only variation that went over 30 seconds

This is another reminder of how parallel streams can slow you down. Let's say this happens on machines that already run multithreaded applications. With a smaller number of threads available, using Fork/Join directly could actually be better than going through parallel streams – a 5 second difference, which makes for about an 18% penalty when comparing these 2 together.

4. Don't go for the default pool size with IO in the picture

When using the default pool size for parallel streams, the same number as the cores on the machine (which is 8 here), performance was almost 2 seconds worse than the 16 threads version. That's a 7% penalty for going with the default pool size. The reason this happens is related to blocking IO threads.
There's more waiting going on, so introducing more threads lets us get more out of the CPU cores involved while other threads wait to be scheduled instead of being idle.

How do you change the default Fork/Join pool size for parallel streams? You can change the common Fork/Join pool size using a JVM argument:

```
-Djava.util.concurrent.ForkJoinPool.common.parallelism=16
```

(All Fork/Join tasks use a common static pool, sized to the number of your cores, by default. The benefit here is reducing resource usage by reclaiming the threads for other tasks during periods of no use.) Or… you can use this trick and run parallel streams within a custom Fork/Join pool. This overrides the default use of the common Fork/Join pool and lets you use a pool you've set up yourself. Pretty sneaky. In the tests, we've used the common pool.

5. Single threaded performance was 7.25x worse than the best result

Parallelism provided a 7.25x improvement, and considering the machine had 8 cores, it got pretty close to the theoretical 8x prediction! We can attribute the rest to overhead. With that being said, even the slowest parallelism implementation we tested, which this time was parallel streams with 4 threads (30.24 sec), performed 5.8x better than the single threaded solution (176.27 sec).

What happens when you take IO out of the equation? Checking if a number is prime

For the next round of tests, we've eliminated IO altogether and examined how long it would take to determine if some really big number is prime or not. How big? 19 digits. 1,530,692,068,127,007,263, or in other words: one quintillion, five hundred thirty quadrillion, six hundred ninety-two trillion, sixty-eight billion, one hundred twenty-seven million, seven thousand, two hundred sixty-three. Argh, let me get some air. Anyhow, we haven't used any optimization other than running up to its square root, and we checked all even numbers even though our big number doesn't divide by 2, just to make the processing longer.
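Combining the custom-pool trick mentioned above with a naive trial-division test of this kind, a sketch of mine (illustrative only, not the benchmark code) might look like this:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.LongStream;

public class PrimeCheck {

    // Naive trial division up to sqrt(n): every candidate is tested, even
    // the even ones, mirroring the deliberately unoptimized benchmark.
    static boolean isPrime(long n) {
        if (n < 2) {
            return false;
        }
        long limit = (long) Math.sqrt(n);
        return LongStream.rangeClosed(2, limit)
                .parallel()
                .noneMatch(i -> n % i == 0);
    }

    public static void main(String[] args) throws Exception {
        // Run the parallel stream inside a custom pool of 16 workers instead of
        // the common pool; tasks forked from a pool thread stay in that pool.
        ForkJoinPool pool = new ForkJoinPool(16);
        try {
            // 2^31 - 1, a known prime far smaller than the article's 19-digit number.
            Callable<Boolean> task = () -> isPrime(2_147_483_647L);
            boolean prime = pool.submit(task).get();
            System.out.println(prime); // true
        } finally {
            pool.shutdown();
        }
    }
}
```

The pool size of 16 here is just the value from the JVM argument above; the sketch uses a small known prime so it finishes quickly.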
Spoiler alert: it's a prime, so each implementation ran the same number of calculations. Here's how it turned out:

** Single threaded execution: 118,127 msec, or almost 2 minutes.
** Notice the graph starts at 20,000 milliseconds.

1. Smaller differences between 8 and 16 threads

Unlike the IO test, we don't have IO calls here, so the performance of 8 and 16 threads was mostly similar, except for the Fork/Join solution. We've actually run a few more sets of tests to make sure we're getting good results here because of this "anomaly", but it turned out very similar time after time. We'd be glad to hear your thoughts about this in the comment section below.

2. The best results are similar for all methods

We see that all implementations share a similar best result of around 28 seconds. No matter which way we tried to approach it, the results came out the same. This doesn't mean that we're indifferent to which method to use. Check out the next insight.

3. Parallel streams handle the thread overload better than other implementations

This is the more interesting part. With this test, we see again that the top results for running 16 threads are coming from using parallel streams. Moreover, in this version, using parallel streams was a good call for all variations of thread numbers.

4. Single threaded performance was 4.2x worse than the best result

In addition, the benefit of using parallelism when running computationally intensive tasks is almost 2 times smaller than in the test with file IO. This makes sense since it's a CPU intensive test: in the previous one we could get an extra benefit from cutting down the time our cores were waiting on threads stuck with IO.

Conclusion

I'd recommend going to the source to learn more about when to use parallel streams, and applying careful judgement any time you do parallelism in Java.
The best path to take would be running similar tests to these in a staging environment, where you can try to get a better sense of what you're up against. The factors you have to be mindful of are of course the hardware you're running on (and the hardware you're testing on), and the total number of threads in your application. This includes the common Fork/Join pool and code other developers on your team are working on. So try to keep those in check and get a full view of your application before adding parallelism of your own.

Groundwork

To run this test we've used an EC2 c3.2xlarge instance with 8 vCPUs and 15GB of RAM. A vCPU means there's hyperthreading in place, so in fact we have here 4 physical cores that each act as if they were 2. As far as the OS scheduler is concerned, we have 8 cores. To try to make it as fair as we could, each implementation ran 10 times and we've taken the average run time of runs 2 through 9. That's 260 test runs, phew! Another thing that was important is the processing time. We've chosen tasks that would take well over 20 seconds to process, so the differences would be easier to spot and less affected by external factors.

What's next?

The raw results are available right here, and the code is on GitHub. Please feel free to tinker around with it and let us know what kind of results you're getting. If you have any more interesting insights or explanations for the results that we've missed, we'd be happy to read them and add them to the post.

Reference: Fork/Join Framework vs. Parallel Streams vs. ExecutorService: The Ultimate Fork/Join Benchmark from our JCG partner Alex Zhitnitsky at the Takipi blog.

Working with Robolectric and Robotium in Android Studio and Gradle

I develop the TripComputer app for Android, but I find testing apps using the standard Android Instrumentation framework really slow and painful. Slow testing cycles can kill productivity and are a well-documented disincentive to TDD. Therefore, most Android tutorials that talk about testing extol the virtues of switching to something like the Robolectric framework when unit testing Android apps. Robolectric is great because it allows you to test your app against a ‘simulated’ set of Android SDK APIs using your desktop’s Java Virtual Machine (JVM), as opposed to either ‘emulating’ these APIs in a pretend device or accessing them on a physical device, which is what the standard Android Instrumentation testing framework does. Robolectric also allows the use of JUnit 4 style testing annotations rather than the older JUnit 3 style required by the built-in Android Instrumentation testing framework. However, there’s a problem: getting Robolectric to work in Android Studio is difficult. I’ve been an Android Studio user ever since it first went public nearly 2 years ago. It’s an awesome IDE, but one consequence of its use is that it promotes the Gradle build system as the default choice for Android projects. This is good news for Android developers, but unfortunately getting Android Studio, Gradle, Robolectric, Robotium, AppCompat and JUnit to all work happily side by side is a real pain in the rear. Over the past year or so it’s been a slowly improving picture, but now that Android Studio has gone to a 1.0 release, I (and many others) figured the time was right to try to bring these tools together. The android-alltest-gradle-sample project on GitHub is my attempt to create a template project that can be used as a starting point for anyone who wishes to use these best-of-breed Android testing tools together with Gradle and Android Studio in one project. The tools integrated and supported by the sample project so far are:

- AssertJ for Android: makes the testing of Android components simpler by introducing an Android-specific DSL for unit testing.
- Robolectric: allows the simulated testing of Android apps (i.e. device APIs are simulated, so there is no need for an emulator or physical device).
- JUnit: used to simplify testing of core Java and simulated Android tests.
- Android AppCompat v7: popular support library developed by Google to improve support for backwards compatibility in Android.
- Robotium: used to augment normal Instrumentation Tests and provide black-box integration testing from Android.

There are lots of blogs out there talking about doing a similar thing, but as far as I know, this sample project is the first to demonstrate the combined use of these tools without the need for any special Gradle or Android Studio plugins. Instrumentation Tests do still have an important role to play. They are great when used to test how well ‘integrated’ the individual units of code are when combined to form an app. I find that thinking of instrumentation testing as ‘integration testing’ allows me to appreciate its true benefit more. As a bonus, the sample project also includes Robotium, to make integration testing simpler and more productive. To use the sample project and code, simply clone the repository (or download a ZIP), import the project (as a Gradle project) into Android Studio, test it and then start running code. Check out the Acknowledgements section in the readme for further help, tips and advice (including how to execute your Robolectric tests from within Android Studio in addition to the command line). For more information, check out the project on GitHub.

Reference: Working with Robolectric and Robotium in Android Studio and Gradle from our JCG partner Ben Wilcock at Ben Wilcock’s blog....
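For orientation, the dependency block of a build.gradle for this kind of setup might look roughly like the sketch below. The artifact coordinates and versions here are assumptions on my part, not taken from the sample project; check the project on GitHub for the definitive setup.

```groovy
dependencies {
    // Unit tests run on the local JVM, so no emulator or device is needed
    testCompile 'junit:junit:4.12'                                    // version assumed
    testCompile 'org.robolectric:robolectric:2.4'                     // version assumed
    testCompile 'com.squareup.assertj:assertj-android:1.0.0'          // version assumed

    // Robotium augments instrumentation tests on a device or emulator
    androidTestCompile 'com.jayway.android.robotium:robotium-solo:5.2.1'  // version assumed

    // AppCompat v7 support library
    compile 'com.android.support:appcompat-v7:21.0.3'                 // version assumed
}
```

The key idea is the split between `testCompile` (JVM-only unit tests, e.g. Robolectric) and `androidTestCompile` (on-device instrumentation tests, e.g. Robotium).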

Get your Advanced Java Programming Degree with these Tutorials and Courses

Getting started as a Java developer these days is quite straightforward. There are countless books on the subject and, of course, an abundance of online material to study. Our own site offers a vast array of tutorials and articles to guide you through the language, and we genuinely believe that Java Code Geeks offers the best way to learn Java programming. Things get a bit trickier once you have successfully passed the beginner phase. In order to reach a more advanced level of competence, you will need to reach out and look for targeted resources. A higher level of sophistication is required, and the random tutorials that you find online might not “cut it”. For this reason, we have created and featured numerous tutorials on our site. You may find them at the following pages:

- Core Java Tutorials
- Enterprise Java Tutorials
- Spring Tutorials
- Desktop Java Tutorials

Additionally, we have created several “Ultimate” tutorials, discussing OOP concepts, popular Java tools and frameworks, and more. Have a look at those too:

- Java 8 Features Tutorial
- Java Annotations Tutorial
- Java Servlet Tutorial
- Java Reflection Tutorial
- Abstraction in Java
- JMeter Tutorial for Load Testing
- JUnit Tutorial for Unit Testing
- JAXB Tutorial for Java XML Binding

On top of the above, to get you prepared for your programming interviews, we have created some great Q&A guides:

- 115 Java Interview Questions and Answers
- 69 Spring Interview Questions and Answers
- Multithreading and Concurrency Interview Questions and Answers
- Core Java Interview Questions
- 40 Java Collections Interview Questions and Answers
- Top 100 Java Servlet Questions

For even more high-end training, we would like to suggest our JCG Academy courses. With JCG Academy’s course offerings, you tackle real-world projects built by programming experts. The courses are designed to help you master new concepts quickly and effectively. All of them could be beneficial to the modern-day developer, but let’s focus on the Java-related ones.
The Advanced Java course is the flagship course that every Java developer should take. This course is designed to help you make the most effective use of Java. It discusses advanced topics, including object creation, concurrency, serialization, reflection and more, and it will guide you through your journey to Java mastery! Next, we have the Java Design Patterns course (standalone version here). Design patterns are general, reusable solutions to commonly occurring problems within a given context in software design. In this course you will delve into a large number of design patterns and see how they are implemented and utilized in Java. You will understand the reasons why patterns are so important and learn when and how to apply each one of them. In the age of multi-core processors, every developer should be competent in concurrent programming. For this reason we created the Java Concurrency Essentials course (you can join this one for FREE!). In this course you will dive into the magic of concurrency. You will be introduced to the fundamentals of concurrency and concurrent code, and you will learn about concepts like atomicity, synchronization and thread safety. As you advance, the later lessons will deal with the tools you can leverage, such as the Fork/Join framework and the java.util.concurrent JDK package. Finally, in order to stay up to date with the latest developments, make sure to join our ever-growing newsletter (with more than 73,000 subscribers). By joining, you will also get 11 programming books for FREE! Summing up, you don’t have to spend a bunch of money or waste countless hours to reach an advanced level in Java programming. Instead, you need to study the correct material and use it in your day-to-day work in order to gain the relevant experience. The good thing about the programming world is that people care only about results. If you can show them that you are great at executing and getting results, you’ll do phenomenally well as a Java programmer. Geek on! ...
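The concurrency material above mentions the Fork/Join framework from the java.util.concurrent package. As a small, hedged taster of that style (the class name and threshold below are my own choices for illustration, not taken from any course material), here is a minimal parallel sum:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // below this, just sum sequentially
    private final long[] values;
    private final int lo, hi;

    ForkJoinSum(long[] values, int lo, int hi) {
        this.values = values;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += values[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        ForkJoinSum left = new ForkJoinSum(values, lo, mid);
        ForkJoinSum right = new ForkJoinSum(values, mid, hi);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // compute right here, then join left
    }

    public static long parallelSum(long[] values) {
        return ForkJoinPool.commonPool().invoke(new ForkJoinSum(values, 0, values.length));
    }

    public static void main(String[] args) {
        long[] values = new long[10_000];
        for (int i = 0; i < values.length; i++) values[i] = i + 1;
        // sums 1 + 2 + ... + 10000
        System.out.println("sum = " + parallelSum(values));
    }
}
```

The recursive split-fork-join shape shown here is the core pattern; real code would tune the threshold to the workload.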

Open Source Doesn’t Need More Support. It Needs Better Business Models

Jamie Allen, Typesafe’s Director of Global Services, published an interesting point of view on Twitter: “Pivotal’s move to end support of Groovy is a stark reminder that enterprises who depend on FOSS projects should help support them.” — Jamie Allen (@jamie_allen) January 20, 2015. And he’s right, of course. We are constantly reminded of the fact that we should support the FOSS projects on which we depend. Just recently, Wikipedia had a huge banner on top of it asking for money, and we probably should donate. But do we really depend on them, and can we really make a difference? Let us look at the problem from a business perspective. There isn’t a second Red Hat About a year ago, there was an extremely interesting article on TechCrunch by Peter Levine, a partner at Andreessen Horowitz, which has just invested $40M in Stack Exchange. The article was about Why There Will Never Be Another RedHat: The Economics Of Open Source. It compared Red Hat’s and VMWare’s market capitalisation and revenue with those of Microsoft, Oracle, and Amazon, showing that even Red Hat is a rather insignificant competitor in terms of these size metrics. Why is this so? Let’s go back to Groovy: there are probably tens of thousands of Groovy developers out there who have simply downloaded Groovy and then never again interacted with the vendor. It probably wouldn’t even be wrong to say that many developers weren’t aware that Pivotal was the main sponsor behind Groovy. Sure, Groovy is a strong brand, but it is really “everybody’s brand”, and thus nobody’s brand. Not being a strong commercial brand, it attracted only techies with language interests (and it is a beautiful language to work with, indeed!). Now, Pivotal has withdrawn its engagement from Groovy, for completely understandable reasons. Let’s review Jamie’s point of view: “Pivotal’s move to end support of Groovy is a stark reminder that enterprises who depend on FOSS projects should help support them.” Would it have mattered if “we” had supported Groovy?
Perhaps. A Groovy Foundation (similar to the Apache Foundation or the Eclipse Foundation) might have made Groovy a bit less dependent on Pivotal, and this can still happen. Possibly, a couple of larger companies who depend on Groovy or Gradle might chime in and become Silver, Gold, or Platinum sponsors, or something like that. Perhaps Gradleware will seize the opportunity and “buy” Groovy to become THE Groovy company. But will it work? Does the same work for Typesafe? Can monetising an Open Source language and platform work in times when even C# is now given away for free? Red Hat can make money off Linux because Linux is a very complex ecosystem that simply requires a support subscription when you’re running it in production. No bank on this planet will ever run a server farm without the vendor promising 24h support with under 1h reaction time. But is the same true for Scala or Groovy? The “critical” work has long been done by the developers. The binaries are built and shipped into production, where operations takes over. Operations couldn’t care less whether that binary was built with Groovy, Scala, Java, Ceylon, Kotlin, Fantom, or any of the other gazillion Java alternatives. All operations will ever care about is the JVM, or WebLogic – and the database, of course. And operations is where the long-term subscription money is, not development. This doesn’t mean that no one should make money from developers. Companies like JetBrains, ZeroTurnaround, or ourselves, Data Geekery, show that it works, albeit on a much smaller scale. But if a company is “selling” a programming language that doesn’t immediately help them upsell their customers to their other, significant subscriptions, you should be wary, as the vendor’s motivation to produce the programming language product is very unclear – and in the case of Pivotal, “unclear” is not even close to describing the vendor’s motivation.
Good examples of holistic platform strategies are these, because operations and the end user can immediately drive the decision chain that justifies the language lock-in for the developers:

- C# -> Visual Studio -> SQL Server -> Azure, etc.
- Java -> JVM / WebLogic -> Oracle Database -> Oracle Commerce, etc.

OK examples are these, although the upselling potential might not be viable enough to maintain a whole ecosystem. We’ll see how it works:

- Kotlin -> IntelliJ

Less good examples are these, because the value proposition chain is really not obvious. There is no justification for the language lock-in:

- Groovy -> Cloud Platform ??
- Scala -> Reactive Programming ??
- Ceylon -> RHEL ??

The Business Model Jamie Allen’s tweet shows a lot about what’s wrong with many Open Source vendors. While he claims that end users depend on OSS products from their vendors, the opposite is true. The end user can simply fork the OSS product and lead it to a graceful end of life before replacing it. But the vendor really depends on the goodwill and benevolence of their FOSS communities. The vendor then tries to leverage that goodwill to make weird-sounding upsells between completely unrelated products. This cannot work. So join us in our endeavours. Make Open Source a business. A viable business, a business driven by the vendor (and by the market, of course). A business that makes sense. A business that involves dual licensing and reasonable upselling. A business that uses Open Source mainly as a freemium entry point for the actual business. You can be romantic about F(L)OSS in your heart, that’s OK. But please don’t depend on it. It would be too bad if you didn’t succeed, just because you ran out of money from your “sponsors”, because you didn’t care about the business aspect of your product.

Reference: Open Source Doesn’t Need More Support. It Needs Better Business Models from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....

Given When Then in Java

tl;dr: you can use labels to clarify a given-when-then style of testing. What is given-when-then? given-when-then is a commonly used style of specifying system behaviour in which your tests are split into three sections. Given is the section that lays out the pre-conditions for the test, i.e. whatever state you’re assuming the world to be in before you start. The When clause performs the action being tested. The Then statement checks that the post-condition holds, usually by asserting values or checking interactions with mocks. It’s not always the case that you need all three sections in the code of every test. For example, your given section might be covered by a common setUp method. I think it’s a good idea to follow the pattern and split up the different sections, because it allows you to see the wood for the trees. Using labels with JUnit In some projects I’ve been experimenting with going a bit further than just splitting out given/when/then, and using Java labels to lay out the different sections of the test to make things really clear *. The following code snippet shows how you might implement that with JUnit.

    Cafe cafe = new Cafe();

    @Test
    public void cafeShouldNeverServeCoffeeItDoesntHave() {
        Given:
        cafe.setCoffeesRemaining(1);

        When:
        cafe.serveCoffee();

        Then:
        assertFalse(cafe.canServeCoffee());
    }

This is a very simple example just to demonstrate the layout. Our test checks that the Cafe never serves coffee it doesn’t have. There is a clear demarcation of the three sections of code by the labels. It’s a little unusual to see labels used like this – they are most commonly used in Java as a way to break out of nested loops in one go. There is no real reason not to use them like this, though; it’s just a stylistic matter, and there is no semantic difference between the code with and without the labels.
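As an aside, the labelled layout also works outside a test framework. Here is a self-contained, runnable sketch (the Cafe class below is a minimal stand-in written for this example, not taken from the original project):

```java
public class CafeLabelDemo {

    // Minimal stand-in for the Cafe used in the test above (hypothetical implementation)
    static class Cafe {
        private int coffeesRemaining;

        void setCoffeesRemaining(int coffees) { coffeesRemaining = coffees; }
        void serveCoffee() { coffeesRemaining--; }
        boolean canServeCoffee() { return coffeesRemaining > 0; }
    }

    public static void main(String[] args) {
        Cafe cafe = new Cafe();

        Given:                          // pre-condition: one coffee in stock
        cafe.setCoffeesRemaining(1);

        When:                           // action under test
        cafe.serveCoffee();

        Then:                           // post-condition: nothing left to serve
        System.out.println("canServeCoffee = " + cafe.canServeCoffee());
    }
}
```

The labels compile because a Java label may precede any statement; the compiler simply ignores them unless a break or continue targets them.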
Using labels with Lambda Behave While I’m sure most Java developers use JUnit, I recently released a new library called Lambda Behave. This is designed to be a modern testing and behavioural specification framework for Java 8 which makes it easier to write fluent and readable tests. In lambda-behave you write tests by listing out a descriptive string instead of a restrictive method name, and you describe the body of the test in a lambda expression. I’ve found tests written in this style much easier to read. You can use the same given/when/then label style within lambda-behave specs, as the following code sample shows:

    describe("a cafe", it -> {

        Cafe cafe = new Cafe();

        it.should("never serve coffee it doesn't have", expect -> {
            Given:
            cafe.setCoffeesRemaining(1);

            When:
            cafe.serveCoffee();

            Then:
            expect.that(cafe.canServeCoffee()).is(false);
        });

    });

Limitations & Alternatives The biggest annoyance of using labels in this way is that, for a reason unknown to me, you can’t write a label before a variable declaration statement in Java. This means that if you want to start your Given: clause with a new variable, you need to hoist the variable declaration to the top of the block or to a field. I’ve not found this to be a big problem, and in fact the hoisting can clean things up even further. An alternative, and probably more common, approach is to use comments to denote given/when/then clauses. I think the choice between the two is primarily stylistic rather than substantive. In both cases you’re just writing some explanatory text, rather than baking the feature into your testing framework as things like Cucumber and JBehave do. I think using labels instead of comments is suitable if you’ve agreed a convention within your team to do so, and if you want these markers to stand out more than regular comments.
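The hoisting workaround mentioned above can be shown in a tiny, runnable sketch (the class and variable names here are illustrative only). The declaration must sit above the labelled sections, because a label cannot precede a local variable declaration statement:

```java
public class LabelPlacement {

    public static int run() {
        int remaining;                 // declaration hoisted above the labels
        // Given: int remaining = 1;   // would NOT compile: a label cannot
        //                             // precede a declaration statement

        Given:
        remaining = 1;

        When:
        remaining--;

        Then:
        return remaining;
    }

    public static void main(String[] args) {
        System.out.println("remaining = " + run());
    }
}
```

Once hoisted, each label attaches to an ordinary statement and the code compiles cleanly.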
Some people use alternative patterns to given/when/then that are similar but have more phases, such as the four-phase test approach, or different names, such as Arrange, Act, Assert. It’s possible to use a label-based or comment-based convention with these styles as well. Conclusions I’ve put the example code up on GitHub if anyone wants to have a look or a play in their IDE. There’s not much code, because these are just very simple examples, but it might help to show that there’s no magic going on! I’ve shown how you can use labels to clarify the intent of blocks of code in this blog post, and hopefully it’s a technique that people find useful and helpful. Regardless of whether you use labels to implement given-when-then or not, I hope people write their tests following some kind of convention. It really makes what is going on a lot clearer. I’m sure some people have opinions on the matter, so let me know if you think it’s a good idea or not. * I /think/ I got the idea from Jose Llarena after talking to him at an LJC event, so thanks, Jose!Reference: Given When Then in Java from our JCG partner Richard Warburton at the Insightful Logic blog....
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.