
What's New Here?


Acision launches “forgeathon” – its first WebRTC app challenge

Acision launches “forgeathon” – its first online richer communications (WebRTC) app challenge for developers globally. Join forgeathon and let Acision help you take your application or service global!

Reading, UK – 6th January 2015: Acision, the global leader in secure, mobile engagement services and an industry innovator in WebRTC technology, today announces the launch of its first online “forgeathon” – a Richer Communications App Challenge for developers worldwide. Using the forge by Acision SDK toolkit, entrants are invited to either create a new Android, iOS or web app or service, or enhance an existing one*, by taking advantage of one of the richest, next-generation communication capabilities on the market today.

Forgeathon online app challenge

As an online event, teams, individual developers, students and entrepreneurs from around the world are able and encouraged to take part in the challenge. The first prize will be all-expenses** paid trips for the winner to showcase their app or service in person during two of the world’s largest and leading tech events, Mobile World Congress 2015 and SXSW Interactive 2015 in March – providing a platform to publicise the app to leading influencers and audiences globally. The challenge officially kicks off today, 6th January 2015, and runs for six weeks, with the closing date for submissions the 19th February. Winners will be announced on 23rd February.

As a market leader in carrier-grade rich messaging and engagement services, and a provider of APIs and SDKs which create today’s richest apps, Acision, in conjunction with BeMyApp, is organising the forgeathon online challenge to spotlight how developers can easily integrate the latest and most advanced communication features into new or existing Android, iOS and Web apps using the forge SDK.
As a flexible communications framework, the forge by Acision platform enables accelerated application development with secure, rich and real-time communications, including IP messaging, presence, HD voice and video chat, all powered by WebRTC technology. The forge SDK allows developers to quickly build new innovative services for B2B or B2C purposes, so that businesses can enrich and enhance their customer engagement capabilities on mobile apps and websites.

Eric Bilange, Head of Rich Engagement Services at Acision, commented: “The rich communication capabilities and WebRTC technology offered via the forge SDK offer seamless, secure, low-latency services that can be used for apps dealing with customer relations, call centres, education, training, healthcare, banking, finance, travel and entertainment…the options are endless! We want to open up our SDK as part of this challenge to showcase the exciting ways in which our rich communication tools can be integrated into something without boundaries or limitations; we’re looking for something innovative, something smart, something that really stands out, and in return we will support the winner to take their mobile app or web service global, by providing them a stage to prototype to the media and influencers, on the Acision booth at two of the world’s leading tech events, Mobile World Congress and SXSW Interactive!”

With the challenge officially launched today, there will be two webinars held during the first four weeks, led by Acision evangelist and WebRTC guru Peter Dunkley, giving participants support and guidance on everything there is to know about the forge SDK, so they are in the best position to submit a great final prototype for judging. Along with the crowned winner, there will also be two runners-up announced, who will both receive a drone and free access*** to the forge SDK by Acision for one year.
Bilange concluded: “By leveraging Acision’s rich communication toolkit, developers can really bring their application ideas to life. Our aim with forgeathon is to spread the word about the possibilities of WebRTC technology, and build a community of developers that continually create WebRTC-based apps.”

Register for forgeathon

To register or learn more about forgeathon – the Richer Communications (WebRTC) App Challenge – click here. For the latest news and updates related to the forgeathon, follow Acision’s Twitter channel @acision.

About Acision

Acision connects the world by powering relevant, seamless mobile engagement services that interoperate across all IP platforms and enrich the user experience, creating value and new communication opportunities for carriers, enterprises and consumers across the world. For more information, visit Acision and forge.

Press contacts:
Nikki Brown, Acision
Tel: +44 118 9308 620
Email: Nicola.brown@acision.com

About BeMyApp

BeMyApp is specialised in developer relations, and organises developer events such as online challenges, hackathons, incubator workshops and more, all around the world.

Contact:
Maud Levy, BeMyApp
Tel: +33 634 416 895
Email: Maud.levy@bemyapp.com

Notes
* Participants must ensure they have the requisite permissions and rights to enhance, modify or develop third party intellectual property rights.
** Terms, conditions and limitations apply.
*** Terms, conditions and limitations apply. ...

Get your Advanced Java Programming Degree with these Tutorials and Courses

Getting started as a Java developer these days is quite straightforward. There are countless books on the subject, and of course an abundance of online material to study. Of course, our own site offers a vast array of tutorials and articles to guide you through the language, and we genuinely believe that Java Code Geeks offer the best way to learn Java programming. Things get a bit trickier once you have successfully passed the beginner phase. In order to reach a more advanced level of competence, you will need to reach out and look for targeted resources. A higher level of sophistication is required, and the random tutorials that you find online might not “cut it”. For this reason, we have created and featured numerous tutorials on our site. You may find them at the following pages:

- Core Java Tutorials
- Enterprise Java Tutorials
- Spring Tutorials
- Desktop Java Tutorials

Additionally, we have created several “Ultimate” tutorials, discussing OOP concepts, popular Java tools and frameworks, and more. Have a look at those too:

- Java 8 Features Tutorial
- Java Annotations Tutorial
- Java Servlet Tutorial
- Java Reflection Tutorial
- Abstraction in Java
- JMeter Tutorial for Load Testing
- JUnit Tutorial for Unit Testing
- JAXB Tutorial for Java XML Binding

On top of the above, to get you prepared for your programming interviews, we have created some great Q&A guides:

- 115 Java Interview Questions and Answers
- 69 Spring Interview Questions and Answers
- Multithreading and Concurrency Interview Questions and Answers
- Core Java Interview Questions
- 40 Java Collections Interview Questions and Answers
- Top 100 Java Servlet Questions

For even more high-end training, we would like to suggest our JCG Academy courses. With JCG Academy’s course offerings, you tackle real-world projects built by programming experts. The courses offered are designed to help you master new concepts quickly and effectively. All courses could be beneficial to the modern-age developer, but let’s focus on the Java-related ones.
The Advanced Java course is the flagship course that every Java developer should take. This course is designed to help you make the most effective use of Java. It discusses advanced topics, including object creation, concurrency, serialization, reflection and many more. It will guide you through your journey to Java mastery!

Next, we have the Java Design Patterns course (standalone version here). Design patterns are general reusable solutions to commonly occurring problems within a given context in software design. In this course you will delve into a vast number of design patterns and see how they are implemented and utilized in Java. You will understand the reasons why patterns are so important and learn when and how to apply each one of them.

In the new age of multi-core processors, every developer should be competent in concurrent programming. For this reason we created the Java Concurrency Essentials course (you can join this one for FREE!). In this course, you will dive into the magic of concurrency. You will be introduced to the fundamentals of concurrency and concurrent code, and you will learn about concepts like atomicity, synchronization and thread safety. As you advance, the following lessons will deal with the tools you can leverage, such as the Fork/Join framework and the java.util.concurrent JDK package.

Finally, in order to stay up to date with the latest developments, make sure to join our ever-growing newsletter (with more than 73,000 subscribers). By joining, you will also get 11 programming books for FREE!

Summing up, you don’t have to spend a bunch of money or waste countless hours to reach an advanced level in Java programming. Instead, you need to study the correct material and use it in your day-to-day work in order to gain the relevant experience. The good thing about the programming world is that people care only about results. If you can show that you are great at executing and getting results, you’ll do phenomenally well as a Java programmer. Geek on! ...

FREE Programming books with the WCG Newsletter

Dear fellow geek, it is with great honor that we announce the launch of Web Code Geeks! This is our sister site, targeted at Web programming developers. Come on, admit it, there is a web developer inside you too, so make sure to check it out. To celebrate this, we have decided to distribute 2 of our books for free. You can get access to them by joining our Newsletter. Additionally, you will also receive weekly news, tips and special offers delivered to your inbox courtesy of Web Code Geeks! This is just the beginning, and as Web Code Geeks grows, there will be more free goodies for you! So let’s see what you get in detail!

Building web apps with Node.js

Node.js is an exciting software platform for building scalable server-side and networking applications. Node.js applications are written in JavaScript, and can be run within the Node.js runtime on Windows, Mac OS X and Linux with no changes. Node.js applications are designed to maximize throughput and efficiency, using non-blocking I/O and asynchronous events. In this book, you will get introduced to Node.js. You will learn how to install, configure and run the server and how to load various modules. Additionally, you will build a sample application from scratch and also get your hands dirty with Node.js command-line programming.

CouchDB Database for the Web

CouchDB is an open-source database that focuses on ease of use and on being “a database that completely embraces the web”. It is a NoSQL database that uses JSON to store data, JavaScript as its query language using MapReduce, and HTTP for an API. One of its distinguishing features is multi-master replication. CouchDB was first released in 2005 and later became an Apache project in 2008. This book is a hands-on course on CouchDB. You will learn how to install and configure CouchDB and how to perform common operations with it. Additionally, you will build an example application from scratch and then finish the course with advanced topics like scaling, replication and load balancing.

So, fellow geeks, hop on our newsletter and enjoy our kick-ass books!

NOTE: If you have subscribed and not received an email yet, please send us an email at support[at]webcodegeeks.com and we will provide immediate assistance. ...

Calling grandparent methods in Java: you can not

In the article Fine Points of Protection I detailed how “protected” extends the “package private” access. There I wrote:

What you can do is override the method in the child class, or call the parent’s method using the keyword super.

And generally this is really all you can do with protected methods. (Note that in this article I talk about methods and method calling, but very similar statements can be made about fields and constructors.) If you can call super.method() to access the parent’s method() even if the actual class has overridden it, why can you not call super.super.method()? The absolutely correct and short answer is: because the Java language does not allow you to do that. (The JVM does, but you should not.) You cannot directly access grandparent methods, skipping parent methods. The interesting question is: why?

The reason lies in object-orientation principles. When you extend a class, you extend the defined functionality of the class. The fact that the parent class extends another class (the grandparent class) is part of the implementation, which is none of the business of any other code outside of the class. This is the basic principle of encapsulation: advertise the defined functionality of a class to the outside world but keep the implementation private. There are secrets that you keep hidden even from your son. “Nicht vor dem Kind.” (“Not in front of the child.”) Generally, this is the reason. If you could access the grandparent directly, you would create a dependency on the implementation of the father, and this would violate encapsulation.

Reference: Calling grandparent methods in Java: you can not from our JCG partner Peter Verhas at the Java Deep blog....
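A minimal sketch of the rule with hypothetical class names (the methods return strings instead of printing, so the behaviour is easy to check):

```java
// A three-level hierarchy illustrating what super can and cannot reach.
class Grandparent {
    protected String whoAmI() { return "grandparent"; }
}

class Parent extends Grandparent {
    @Override
    protected String whoAmI() { return "parent"; }
}

class Child extends Parent {
    @Override
    protected String whoAmI() {
        // Allowed: calls the direct parent's implementation.
        return super.whoAmI();
        // Not allowed: "return super.super.whoAmI();" does not compile.
        // Java has no syntax for skipping a level in the hierarchy.
    }
}
```

new Child().whoAmI() therefore yields "parent"; there is no legal way to reach Grandparent.whoAmI() from Child.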

JFXPanel and FX Platform Thread pitfalls

The JFXPanel is a component to embed JavaFX content into (legacy ;-)) Swing applications. Basically it makes it very easy to combine both toolkits, but there are some pitfalls to master: both UI toolkits are single-threaded (Swing: EDT, JavaFX: FX Platform Thread). When used together you have to deal with these two threads, e.g. javafx.embed.swing.SwingFXUtils.runOnFxThread(Runnable runnable) or javafx.embed.swing.SwingFXUtils.runOnEDT(Runnable runnable). The FX Platform Thread is implicitly started in the constructor of the JFXPanel by initFx():

    // Initialize FX runtime when the JFXPanel instance is constructed
    private synchronized static void initFx() {
        // Note that calling PlatformImpl.startup more than once is OK
        PlatformImpl.startup(new Runnable() {
            @Override
            public void run() {
                // No need to do anything here
            }
        });
    }

But (if I got it right) JFXPanel overrides addNotify() from Component, where a finishListener is added to the FX Platform (PlatformImpl.addListener(finishListener)). Platform.exit is then called when the last JFXPanel “dies”. This might lead to a weird situation when JFXPanel is used e.g. with a JDialog: the first call opens the Dialog with a new JFXPanel and all goes well. But when this Dialog is closed the FX Platform Thread is exited, and for some reason it looks like the second call to open a new Dialog doesn’t start the FX Platform Thread again. So nothing happens on the JFXPanel! Solution: for me it worked to call (somewhere early in main()) Platform.setImplicitExit(false); to prevent closing the FX Thread implicitly (it is then closed by System.exit()).

Reference: JFXPanel and FX Platform Thread pitfalls from our JCG partner Jens Deters at the JavaFX Delight blog....

Required Reading: Iron Clad Java

They didn’t teach appsec in Comp Sci or in engineering or MIS or however you learned how to program. And they probably still don’t. So how could you be expected to know about XSS filter evasion or clickjacking attacks, or how to really store passwords safely? Your company can’t afford to send you on expensive appsec training, and you’re too busy coding anyway. Read a book? There hasn’t been a good book that explains how to write secure Java in, well… ever. But all that’s changed. Now you can learn how to build a secure Java app at your desk or on the train or on the toilet. Iron Clad Java, by Jim Manico and August Detlefsen, has arrived. This is a master class in secure Java design and coding, written for developers by guys who truly know their shit. While it is focused on web apps, a lot of the book applies equally to mobile, Cloud, real-time and back-end systems, any kind of online system in Java. There’s no time wasted on theory. Iron Clad Java explains the most common and most dangerous attacks and how to defend against them, using straightforward patterns and open-source libraries and free tools from OWASP.
Each chapter is short and easy to read, with practical, up-to-date (as of Java 8) information and sample code:

- Fundamentals of web app security: HTTP/S, validating input
- Access control: common anti-patterns and mistakes, how to design access control for single-company or multitenant apps, how to use Apache Shiro and Spring Security
- Authentication and session management: you shouldn’t be writing this code on your own (this is what frameworks are for), but if you have to, here’s how to do it, as well as how to handle remember-me and forgot-password features, multi-factor authentication and more
- XSS defense: how to use the OWASP Java Encoder, HTML Sanitizer and JSON Sanitizer libraries and JQuery encoding
- CSRF defense and Clickjacking: random tokens and framebusting
- Protecting sensitive data: how to do signing and crypto correctly, using Google KeyCzar and Bouncy Castle
- SQL injection and other kinds of injection: prepare your statements
- Safe file upload and file I/O
- Logging and error handling: what to log, what not to log, logging frameworks, safe error handling, using logging for intrusion detection
- Security in the SDLC

So no more excuses.

Reference: Required Reading: Iron Clad Java from our JCG partner Jim Bird at the Building Real Software blog....
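To illustrate the XSS-defense idea the book covers (encode untrusted data before writing it into HTML), here is a deliberately minimal, hand-rolled sketch; in practice you would use the OWASP Java Encoder the book recommends rather than rolling your own:

```java
public class HtmlEncode {
    // Minimal encoder for HTML element content: replaces the five
    // characters that can change the meaning of surrounding markup.
    static String forHtml(String in) {
        StringBuilder sb = new StringBuilder(in.length());
        for (int i = 0; i < in.length(); i++) {
            char c = in.charAt(i);
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#39;");  break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

With this, a payload such as `<script>alert('x')</script>` is rendered as inert text instead of being executed by the browser.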

EE JSP: The Reversed Jacket of Servlet

Generating HTML from a Servlet is only practical if you have a small number of pages, or need fine control of the content you are generating (binary PDF etc.). For most applications, the output is going to be HTML, and we need a better way to produce it; that’s where JSP (Java Server Pages) comes in. With JSP, you write and focus on the HTML content in a file; only when you need dynamic or conditional logic in between the content do you insert Java code, called a scriptlet. When the application server processes the JSP page, it automatically generates a Servlet class that writes the JSP file’s content out (as you would programmatically write it using PrintWriter, as shown in my previous posts). Wherever you have a scriptlet in the JSP, it will be inlined in the generated Servlet class. The generated JSP servlet classes are all managed, compiled and deployed by the application server within your application automatically. In short, the JSP is nothing more than the reverse jacket of the Servlet. Here is a simple example of JSP that prints Hello World and a server timestamp.

    <!DOCTYPE html>
    <html>
    <body>
    <p>Hello World!</p>
    <p>Page served on <%= new java.util.Date()%></p>
    </body>
    </html>

Simply save this as a text file named hello.jsp inside your src/main/webapp Maven-based folder, and it will be runnable within your NetBeans IDE. For JSP, you do not need to configure URL mapping as in a Servlet; it’s directly accessible from your context path. For example, the above should display in your browser at the http://localhost:8080/hello.jsp URL. Notice the example also shows how you can embed Java code. You can place a method or object inside a <%= %> scriptlet, and the resulting object’s toString() output will be concatenated to the HTML outside the scriptlet tag. You can also define new methods using the <%! %> scriptlet tag, or execute any code that does not generate output using <% %> scriptlets.
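Conceptually, the generated servlet does little more than stream the static template text and evaluate each <%= %> expression in place. The following is a simplified, hypothetical sketch of that translation for hello.jsp (real container-generated code looks different):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class HelloJspSketch {
    // Roughly what the generated servlet's service method does:
    // static template text is written verbatim, and each <%= %>
    // expression is evaluated and written via its toString().
    static String render() {
        StringWriter buf = new StringWriter();
        PrintWriter out = new PrintWriter(buf);
        out.print("<!DOCTYPE html><html><body>");
        out.print("<p>Hello World!</p>");
        out.print("<p>Page served on ");
        out.print(new java.util.Date());  // the <%= ... %> expression
        out.print("</p></body></html>");
        out.flush();
        return buf.toString();
    }
}
```

In a real container the output goes to the HTTP response stream rather than a StringWriter, but the interleaving of static text and evaluated expressions is the same.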
Note that you can add comments in JSP between <%-- --%> tags as well. JSP also allows you to insert “page directives” to control how the JSP container renders the result. For example, you can change the result content type by inserting this on top of the page:

    <%@ page contentType="text/plain" %>

Another often-used page directive imports a Java package so you don’t need to prefix it on each Java statement line:

    <%@ page import="java.util.*" %>
    ...
    <p>Page served on <%= new Date()%></p>

There are many more directives you can use. Check out the JSP spec doc for more details. Besides inserting your own Java code, JSP also predefines some variables you may access directly without declaring them. Here is an example that displays most of these built-in implicit variables.

    <!DOCTYPE html>
    <html>
    <body>
    <h1>JSP Examples</h1>
    <p>Implicit Variables</p>
    <table>
    <tr>
    <td>Name</td><td>Instance</td><td>Example</td>
    </tr>
    <tr>
    <td>applicationScope</td><td>${applicationScope}</td><td>${applicationScope['myAppName']}</td>
    </tr>
    <tr>
    <td>sessionScope</td><td>${sessionScope}</td><td>${sessionScope['loginSession']}</td>
    </tr>
    <tr>
    <td>pageScope</td><td>${pageScope}</td><td>${pageScope['javax.servlet.jsp.jspConfig']}</td>
    </tr>
    <tr>
    <td>requestScope</td><td>${requestScope}</td><td>${requestScope['foo']}</td>
    </tr>
    <tr>
    <td>param</td><td>${param}</td><td>${param['query']}</td>
    </tr>
    <tr>
    <td>header</td><td>${header}</td><td>${header['user-agent']}</td>
    </tr>
    <tr>
    <td>cookie</td><td>${cookie}</td><td>${cookie['JSESSIONID']}</td>
    </tr>
    <tr>
    <td>pageContext</td><td>${pageContext}</td><td>${pageContext.request.contextPath}</td>
    </tr>
    </table>
    <p>Page served on <%= new java.util.Date()%></p>
    </body>
    </html>

In the above example, I accessed the implicit variables using the JSP Expression Language (EL) syntax rather than the <%= %> scriptlet. The EL is more compact and easier to read; however, it can only read variables that exist in one of the request, session or application scopes.
The EL uses dot notation to access fields, or even nested fields, from the object variable, assuming the fields have corresponding getter methods. EL can also access a map with the “myMap[key]” format, or a list with the “myList[index]” syntax. Most of these implicit variables can be accessed as Map objects, and they are exposed mainly from the HttpServletRequest object, as you would expose them from your own Servlet class. JSP can be seen as a template language in the web application. It helps generate the “VIEW” part of the application. It lets you, or the authors on your team, focus on the HTML and the look and feel of the content. It can help make building larger web applications much easier. Be careful about using excessive and complex Java logic inside your JSP files though, as it will make them harder to debug and read, especially if you have a Java statement that throws an exception: the line number from the stack trace will be harder to track and match to your scriptlet code. Also imagine if you start to have JavaScript code inside JSP files; then it can get really messy. Better to keep these in separate files. If you must embed Java code in JSP, try to wrap it in a single line of Java invocation. Better yet, try to process the request using Servlet code, generate all the data you need to display by inserting it into the request scope, and then forward to a JSP file for rendering. With this pattern, you can limit the usage of scriptlets in JSP, and only use EL and JSP tags.

You can find the above code in my jsp-example in GitHub.

Reference: EE JSP: The Reversed Jacket of Servlet from our JCG partner Zemian Deng at the A Programmer’s Journal blog....
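That last pattern (the Servlet prepares data, the JSP only renders) can be modelled without a container. In this hypothetical, container-free sketch, a Map stands in for the request scope and a render method stands in for the JSP:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ForwardPatternSketch {
    // "Servlet" side: do the work, put the results into the request scope.
    // In a real servlet this would be req.setAttribute("books", ...)
    // followed by req.getRequestDispatcher("/books.jsp").forward(req, resp).
    static Map<String, Object> handleRequest() {
        Map<String, Object> requestScope = new HashMap<>();
        requestScope.put("books", List.of("JPA Mini Book", "JUnit Tutorial"));
        return requestScope;
    }

    // "JSP" side: render only, reading from the scope,
    // much like ${books} would in EL.
    static String renderView(Map<String, Object> requestScope) {
        StringBuilder html = new StringBuilder("<ul>");
        for (Object book : (List<?>) requestScope.get("books")) {
            html.append("<li>").append(book).append("</li>");
        }
        return html.append("</ul>").toString();
    }
}
```

The point of the split is that the view code contains no business logic at all; swapping the rendering (or testing it) never touches the request-handling code.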

How to allow users to customize the UI

Idea

Take advantage of the declarative design pattern of JavaFX/FXML and allow users to customize a certain view without any coding, just by opening it with e.g. SceneBuilder to re-arrange the layout, add new controls or even change the style according to the user’s needs. The FXML file + CSS can basically be placed wherever they are reachable via a URL. The user must only know the interface/methods of the assigned controller class inside the FXML.

RemoteController

Assuming this simple demo controller class provides methods to remotely control devices and to send MQTT messages, a user is able to customize his own remote control.

    public class RemoteController {

        @FXML
        public void onTest() {
            Alert alert = new Alert(Alert.AlertType.INFORMATION);
            alert.setContentText("");
            alert.setHeaderText("WORKS!");
            alert.show();
        }

        public void onTest(String value) {
            Alert alert = new Alert(Alert.AlertType.INFORMATION);
            alert.setHeaderText("WORKS!");
            alert.setContentText(value);
            alert.show();
        }

        public void onSwitch(String houseCode, int groupId, int deviceId, String command) {
            Alert alert = new Alert(Alert.AlertType.INFORMATION);
            alert.setHeaderText("Switch!");
            alert.setContentText(String.format("Command: send %s %d %d %s", houseCode, groupId, deviceId, command));
            alert.show();
        }
    }

remote.fxml and remote.css

Note the referenced de.jensd.shichimifx.demo.ext.RemoteController and remote.css. So basically controller actions can be called via onAction="#onTest". Nice: if you add <?language javascript?> to the FXML, it’s also possible to pass parameters via a JavaScript call through the controller instance:

    onAction="controller.onTest('OFF')"
    onAction="controller.onSwitch('a',1,1,'ON')"

Unfortunately I can’t find more documentation about this feature than -> this, but somehow it magically works ;-). It’s even possible to pass different types of parameters.
    <?xml version="1.0" encoding="UTF-8"?>

    <?language javascript?>

    <?import javafx.geometry.*?>
    <?import java.lang.*?>
    <?import java.net.*?>
    <?import java.util.*?>
    <?import javafx.scene.*?>
    <?import javafx.scene.control.*?>
    <?import javafx.scene.layout.*?>

    <VBox alignment="TOP_CENTER" prefHeight="400.0" prefWidth="600.0" spacing="20.0" styleClass="main-pane" stylesheets="@remote.css" xmlns="http://javafx.com/javafx/8" xmlns:fx="http://javafx.com/fxml/1" fx:controller="de.jensd.shichimifx.demo.ext.RemoteController">
        <children>
            <Label styleClass="title-label" text="Universal Remote" />
            <HBox alignment="CENTER_RIGHT" spacing="20.0">
                <children>
                    <Label layoutX="228.0" layoutY="96.0" styleClass="sub-title-label" text="Light Frontdoor" />
                    <Button layoutX="43.0" layoutY="86.0" mnemonicParsing="false" onAction="#onTest" prefWidth="150.0" styleClass="button-on" text="ON" />
                    <Button layoutX="411.0" layoutY="86.0" mnemonicParsing="false" onAction="#onTest" prefWidth="150.0" styleClass="button-off" text="OFF" />
                </children>
                <padding>
                    <Insets left="10.0" right="10.0" />
                </padding>
            </HBox>
            <HBox alignment="CENTER_RIGHT" spacing="20.0">
                <children>
                    <Label layoutX="228.0" layoutY="96.0" styleClass="sub-title-label" text="Light Garden" />
                    <Button layoutX="43.0" layoutY="86.0" mnemonicParsing="false" onAction="controller.onTest('ON')" prefWidth="150.0" styleClass="button-on" text="ON" />
                    <Button layoutX="411.0" layoutY="86.0" mnemonicParsing="false" onAction="controller.onTest('OFF')" prefWidth="150.0" styleClass="button-off" text="OFF" />
                </children>
                <padding>
                    <Insets left="10.0" right="10.0" />
                </padding>
            </HBox>
            <HBox alignment="CENTER_RIGHT" spacing="20.0">
                <children>
                    <Label layoutX="228.0" layoutY="96.0" styleClass="sub-title-label" text="Light Garden" />
                    <Button layoutX="43.0" layoutY="86.0" mnemonicParsing="false" onAction="controller.onSwitch('a', 1,1,'ON')" prefWidth="150.0" styleClass="button-on" text="ON" />
                    <Button layoutX="411.0" layoutY="86.0" mnemonicParsing="false" onAction="controller.onTest('OFF')" prefWidth="150.0" styleClass="button-off" text="OFF" />
                </children>
                <padding>
                    <Insets left="10.0" right="10.0" />
                </padding>
            </HBox>
        </children>
        <padding>
            <Insets bottom="20.0" left="20.0" right="20.0" top="20.0" />
        </padding>
    </VBox>

Based on this example a user is able to simply open the FXML with SceneBuilder and add a new Button calling the controller.onSwitch() method to control different/new devices installed for home automation.

FxmlUtils

The next release of ShichimiFX will contain a new utility class to load FXML, as shown in the ExternalFXMLDemoController. Note that the loaded Pane is added to the center of the externalPane (BorderPane) of the demo application via onLoadExternalFxml():

    public class ExternalFXMLDemoController {

        @FXML
        private ResourceBundle resources;

        @FXML
        private BorderPane externalPane;

        @FXML
        private TextField fxmlFileNameTextField;

        @FXML
        private Button chooseFxmlFileButton;

        @FXML
        private Button loadFxmlFileButton;

        private StringProperty fxmlFileName;

        public void initialize() {
            fxmlFileNameTextField.textProperty().bindBidirectional(fxmlFileNameProperty());
            loadFxmlFileButton.disableProperty().bind(fxmlFileNameProperty().isEmpty());
        }

        public StringProperty fxmlFileNameProperty() {
            if (fxmlFileName == null) {
                fxmlFileName = new SimpleStringProperty("");
            }
            return fxmlFileName;
        }

        public String getFxmlFileName() {
            return fxmlFileNameProperty().getValue();
        }

        public void setFxmlFileName(String fxmlFileName) {
            this.fxmlFileNameProperty().setValue(fxmlFileName);
        }

        @FXML
        public void chooseFxmlFile() {
            FileChooser chooser = new FileChooser();
            chooser.setTitle("Choose FXML file to load");
            if (getFxmlFileName().isEmpty()) {
                chooser.setInitialDirectory(new File(System.getProperty("user.home")));
            } else {
                chooser.setInitialDirectory(new File(getFxmlFileName()).getParentFile());
            }

            File file = chooser.showOpenDialog(chooseFxmlFileButton.getScene().getWindow());
            if (file != null) {
                setFxmlFileName(file.getAbsolutePath());
            }
        }

        @FXML
        public void onLoadExternalFxml() {
            try {
                Optional<URL> url = FxmlUtils.getFxmlUrl(Paths.get(getFxmlFileName()));
                if (url.isPresent()) {
                    Pane pane = FxmlUtils.loadFxmlPane(url.get(), resources);
                    externalPane.setCenter(pane);
                } else {
                    Alert alert = new Alert(Alert.AlertType.WARNING);
                    alert.setContentText(getFxmlFileName() + " could not be found!");
                    alert.show();
                }
            } catch (IOException ex) {
                Dialogs.create().showException(ex);
            }
        }
    }

Reference: How to allow users to customize the UI from our JCG partner Jens Deters at the JavaFX Delight blog....

Microservices Development with Scala, Spray, MongoDB, Docker and Ansible

This article tries to provide one possible approach to building microservices. We’ll use Scala as the programming language. The API will be RESTful JSON provided by Spray and Akka. MongoDB will be used as the database. Once everything is done we’ll pack it all into a Docker container. Vagrant with Ansible will take care of our environment and configuration management needs. We’ll build the books service. It should be able to do the following:

- List all books
- Retrieve all the information related to a book
- Update an existing book
- Delete an existing book

This article will not try to teach everything one should know about Scala, Spray, Akka, MongoDB, Docker, Vagrant, Ansible, TDD, etc. There is no single article that can do that. The goal is to show the flow and the setup that one might use when developing services. Actually, most of this article is equally relevant for other types of development. Docker has much broader usage than microservices, Ansible and CM in general can be used for any type of provisioning, and Vagrant is very useful for quick creation of virtual machines.

Environment

We’ll use Ubuntu as a development server. The easiest way to set up a server is with Vagrant. If you don’t have it already, please download and install it. You’ll also need Git to clone the repository with the source code. The rest of the article will not require any additional manual installations. Let’s start by cloning the repo.

    git clone https://github.com/vfarcic/books-service.git
    cd books-service

Next we’ll create an Ubuntu server using Vagrant.
The definition is the following:

    # -*- mode: ruby -*-
    # vi: set ft=ruby :

    VAGRANTFILE_API_VERSION = "2"

    Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
      config.vm.box = "ubuntu/trusty64"
      config.vm.synced_folder ".", "/vagrant"
      config.vm.provision "shell", path: "bootstrap.sh"
      config.vm.provider "virtualbox" do |v|
        v.memory = 2048
      end
      config.vm.define :dev do |dev|
        dev.vm.provision :shell, inline: 'ansible-playbook /vagrant/ansible/dev.yml -c local'
      end
      config.vm.define :prod do |prod|
        prod.vm.provision :shell, inline: 'ansible-playbook /vagrant/ansible/prod.yml -c local'
      end
    end

We defined the box (OS) to be Ubuntu. The synced folder is /vagrant, meaning that everything inside the current directory on the host will be available as the /vagrant directory inside the VM. The rest of the things we’ll need will be installed using Ansible, so we’re provisioning our VM with it through the bootstrap.sh script. Finally, this Vagrantfile has two VMs defined: dev and prod. Each of them will run Ansible, which will make sure that everything is installed properly. The preferable way to work with Ansible is to divide configurations into roles. In our case, there are four roles located in the ansible/roles directory. One will make sure that Scala and SBT are installed, another that Docker is up and running, and another will run the MongoDB container. The last role (books) will be used later to deploy the service we’re building to the production VM. As an example, the definition of the mongodb role is the following:

    - name: Directory is present
      file: path=/data/db state=directory
      tags: [mongodb]

    - name: Container is running
      docker: name=mongodb image=dockerfile/mongodb ports=27017:27017 volumes=/data/db:/data/db
      tags: [mongodb]

This should be self-explanatory for those used to working with Docker. The role makes sure that the directory is present and that the mongodb container is running. The playbook ansible/dev.yml is where we tie it all together.
```yaml
- hosts: localhost
  remote_user: vagrant
  sudo: yes
  roles:
    - scala
    - docker
    - mongodb
```

Like the previous example, this one should also be self-explanatory. Every time we run this playbook, all tasks from the roles scala, docker and mongodb will be executed. The nice thing about Ansible, and Configuration Management in general, is that it doesn’t blindly run scripts but acts only when needed. If you run the provisioning a second time, Ansible will detect that everything is in order and do nothing. On the other hand, if, for example, you delete the directory /data/db, Ansible will detect that it is absent and create it again. Let’s bring the dev VM up! The first time it might take a while since Vagrant will need to download the whole Ubuntu distribution, install a few packages and download the Docker image for MongoDB. Each subsequent run will be much faster.

```bash
vagrant up dev
vagrant ssh dev
ll /vagrant
```

vagrant up creates a new VM or brings the existing one to life. With vagrant ssh we can enter the newly created box. Finally, ll /vagrant lists all files within that directory as proof that all our local files are available inside the VM. That’s it. Our development environment with Scala, SBT and the MongoDB container is ready. Now it’s time to develop our books service.

Books Service

I love Scala and Akka. Scala is a very powerful language and Akka is my favourite framework for building message-driven JVM applications. While it was born from Scala, Akka can be used with Java as well. Spray is a simple yet very powerful toolkit for building REST/HTTP-based applications. It’s asynchronous, uses Akka actors and has a great (if weird at the beginning) DSL for defining HTTP routes. In the TDD fashion, we write tests before the implementation. Here’s an example of the tests for the route that retrieves the list of all books.
```scala
"GET /api/v1/books" should {

  "return OK" in {
    Get("/api/v1/books") ~> route ~> check {
      response.status must equalTo(OK)
    }
  }

  "return all books" in {
    val expected = insertBooks(3).map { book =>
      BookReduced(book._id, book.title, book.author)
    }
    Get("/api/v1/books") ~> route ~> check {
      response.entity must not equalTo None
      val books = responseAs[List[BookReduced]]
      books must haveSize(expected.size)
      books must equalTo(expected)
    }
  }

}
```

These are very basic tests that hopefully show the direction one should take to test Spray-based APIs. First we’re making sure that our route returns the code 200 (OK). The second spec, after inserting a few example books into the DB, validates that they are correctly retrieved. The full source code with all tests can be found in ServiceSpec.scala. How would we implement those tests? Here’s the code that provides the implementation based on the tests above.

```scala
val route = pathPrefix("api" / "v1" / "books") {
  get {
    complete(
      collection.find().toList.map(grater[BookReduced].asObject(_))
    )
  }
}
```

That was easy. We define the route /api/v1/books, the GET method and the response inside the complete statement. In this particular case, we’re retrieving all the books from the DB and transforming them into the BookReduced case class. The full source code with all methods (GET, PUT, DELETE) can be found in ServiceActor.scala. Both the tests and the implementation presented here are simplified, and in real-world scenarios there would be more to do. Actually, complex routes and scenarios are where Spray truly shines. While developing, you can run the tests in quick mode.

```bash
# Inside the VM
cd /vagrant
sbt ~test-quick
```

Whenever the source code changes, all affected tests will be re-run automatically. I tend to have a terminal window with test results displayed at all times to get continuous feedback on the quality of the code I’m working on.

Testing, Building and Deploying

Like any other application, this one should be tested, built and deployed. Let’s create a Docker container with the service.
The definition needed for the creation of the container can be found in the Dockerfile.

```bash
# Inside the VM
cd /vagrant
sbt assembly
sudo docker build -t vfarcic/books-service .
sudo docker push vfarcic/books-service
```

We assemble the JAR (tests are part of the assembly task), build the Docker container and push it to the Hub. If you’re planning to reproduce these steps, please create an account on hub.docker.com and change vfarcic to your username. The container that we built contains everything we need to run this service. It is based on Ubuntu, has JDK 7, contains an instance of MongoDB and has the JAR that we assembled. From now on this container can be run on any machine that has Docker installed. There is no need for the JDK, MongoDB or any other dependency to be installed on the server. The container is self-sufficient and can run anywhere. Let’s deploy (run) the container we just created in a different VM. That way we’ll simulate a deployment to production. To create the production VM with the books service deployed, run the following.

```bash
# From the source directory
vagrant halt dev
vagrant up prod
```

The first command stops the development VM. Each VM requires 2 GB of RAM; if you have plenty, you might not need to stop it and can skip this command. The second brings up the production VM with the books service deployed. After a bit of waiting, the new VM is created, Ansible is installed and the playbook prod.yml is run. It installs Docker and runs vfarcic/books-service, which was previously built and pushed to the Docker Hub. While running, it has port 8080 exposed and shares the directory /data/db with the host. Let’s try it out. First we should send PUT requests to insert some test data.
```bash
curl -H 'Content-Type: application/json' -X PUT -d '{"_id": 1, "title": "My First Book", "author": "John Doe", "description": "Not a very good book"}' http://localhost:8080/api/v1/books
curl -H 'Content-Type: application/json' -X PUT -d '{"_id": 2, "title": "My Second Book", "author": "John Doe", "description": "Not as bad as the first book"}' http://localhost:8080/api/v1/books
curl -H 'Content-Type: application/json' -X PUT -d '{"_id": 3, "title": "My Third Book", "author": "John Doe", "description": "Failed writers club"}' http://localhost:8080/api/v1/books
```

Let’s check whether the service returns the correct data.

```bash
curl -H 'Content-Type: application/json' http://localhost:8080/api/v1/books
```

We can delete a book.

```bash
curl -H 'Content-Type: application/json' -X DELETE http://localhost:8080/api/v1/books/_id/3
```

We can check that the deleted book is not present any more.

```bash
curl -H 'Content-Type: application/json' http://localhost:8080/api/v1/books
```

Finally, we can request a specific book.

```bash
curl -H 'Content-Type: application/json' http://localhost:8080/api/v1/books/_id/1
```

That was a very quick way to develop, build and deploy a microservice. One of the advantages of Docker is that it simplifies deployments by reducing the needed dependencies to none. Even though the service we built requires the JDK and MongoDB, neither needs to be installed on the destination server. Everything is part of the container that is run as a Docker process.

Summary

Microservices have existed for a long time, but until recently they did not get enough attention due to the problems that arise when trying to provision environments capable of running hundreds, if not thousands, of microservices. The benefits gained with microservices (separation, faster development, scalability, etc.) were not as big as the problems created by the increased effort that needed to be put into deployment and provisioning. Docker and CM tools like Ansible can reduce this effort to almost negligible.
With deployment and provisioning problems out of the way, microservices are getting back in fashion due to the benefits they provide. Development, build and deployment times are faster when compared to monolithic applications. Spray is a very good choice for microservices. Docker containers shine when they contain everything the application needs but nothing more. Using big Web servers like JBoss and WebSphere would be overkill for a single (small) service. Even Web servers with a smaller footprint, like Tomcat, are not needed. Play! is great for building RESTful APIs. However, it still contains a lot of things we don’t need. Spray, on the other hand, does only one thing and does it well: it provides asynchronous routing capabilities for RESTful APIs. We could continue adding more features to this service. For example, we could add a registration and authentication module. However, that would bring us one step closer to monolithic applications. In the microservices world, new services are new applications and, in the case of Docker, new containers, each of them listening on a different port and happily responding to our HTTP requests. When building microservices, try to create them in a way that they do one or very few things. Complexity is solved by combining them, not by building one big monolithic application.

Reference: Microservices Development with Scala, Spray, MongoDB, Docker and Ansible from our JCG partner Viktor Farcic at the Technology Conversations blog.

If you got bugs, you’ll get pwned

The SEI recently published some fascinating research which shows a clear relationship between software quality and software security. The consensus of researchers is that at least half, and maybe as many as 70%, of common software vulnerabilities are fundamental code quality problems that could be prevented by writing better software. Sloppy coding. Not checking input data. Bad – or no – error handling. Brackets in the wrong spot… Better code is more secure.

Using Bug Counts to Predict Security Vulnerabilities – and vice versa

The more bugs you have, the more security problems you have. Somewhere between 1% and 5% of software defects cause security vulnerabilities, which means you can get a good idea of how secure an application is based on how many bugs it has. If you do everything right:

- Developers are trained in secure development so that they can prevent – or at least find and fix – security problems
- The system is designed and built with a deliberate focus on quality and security
- You collect/measure defect data and use it to assess and improve your development practices

then you should expect to find only 1 security vulnerability for every 100 (give or take) bugs. If you’re not paying attention to security and quality, the number of security vulnerabilities in the code will obviously be much higher. The more bugs you find, the more security vulnerabilities you have somewhere in the code, still waiting to be found.

Heartbleed and Goto Fail = Bad Coding

The SEI looked at recent high-profile security vulnerabilities, including Heartbleed and the Apple “goto fail” SSL bug, both of which were caused by coding mistakes that could have and should have been caught in code reviews or thorough unit testing (read Martin Fowler’s exhaustive analysis here). No black-hat security magic here. Just standard, accepted good development practices. This research also points out the limits of static analysis tools in ensuring safe and secure code.
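The 1%–5% rule of thumb above lends itself to a back-of-the-envelope estimate. The following sketch is my own illustration (the class and method names are made up, not part of the SEI research); it simply applies the article's defect-to-vulnerability ratios to a known defect count:

```java
public class VulnEstimate {
    // Apply the article's rule of thumb: between 1% and 5% of software
    // defects cause security vulnerabilities. Given a known defect
    // count, return the expected low and high vulnerability estimates.
    static long[] expectedVulns(long knownDefects) {
        long low = Math.round(knownDefects * 0.01);
        long high = Math.round(knownDefects * 0.05);
        return new long[] { low, high };
    }

    public static void main(String[] args) {
        long[] range = expectedVulns(1000);
        // A backlog of 1000 known bugs suggests roughly 10 to 50
        // security vulnerabilities lurking somewhere in the code.
        System.out.println(range[0] + " to " + range[1]); // 10 to 50
    }
}
```

The point is not precision — it is that a bug tracker full of open defects is also a rough vulnerability forecast.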
Bugs that could have been found by people working carefully could not be found by tools:

“Heartbleed created a significant challenge for current software assurance tools, and we are not aware of any such tools that were able to discover the Heartbleed vulnerability at the time of announcement.”

The only way to find the Heartbleed bug with today’s leading tools is to write custom rules or overrides, which means that you have to anticipate that this code is bad in the first place. You’d be better off spending your time reviewing or testing the code more carefully instead.

If you got bugs, you’ll get pwned

If you have a quality problem, then you have a security problem. Security and reliability have to be designed and engineered in. You can’t test this in:

Medium- and large-scale systems typically contain many defects and these defects do not always cause problems when the software systems are used precisely as tested. Even a small system might require an enormous number of tests to confirm correct operations under expected conditions. As systems grow, the number of possible conditions may be infinite. For any non-trivial system, the tested area is small. Test, by necessity, focuses on the conditions most likely to be encountered and most likely to trigger a fault in the system. Test, therefore, can only find a fraction of the defects in the system.

Functional testing proves that the system works as expected. This kind of testing, even at high levels of coverage, can’t prove that the system is reliable or secure. Pen testing, fuzzing, DAST and destructive testing stress the system in unexpected ways to see how it behaves. But pen testing can’t prove that the system is secure either – for a big system, you would need an infinite number of pen testers at an infinite number of keyboards working for an infinite number of hours to maybe find all of the bugs.
Like any other kind of testing, pen testing gives you information about the quality and completeness of the system’s design and implementation – where you made mistakes, where you missed something. The results tell you where to look deeper for other problems in the design or code, or problems in how you design or how you code. Pen testing is wasted if you don’t use this information to get to the root cause and make things better.

The SEI’s research makes a few things clear:

- Security and reliability go hand in hand. Security-critical systems need to be built like safety-critical systems – with the same careful attention to quality.
- You can predict how secure your system is based on the total number of bugs that have been found in the code.
- Design reviews and code reviews (including desk checking your own code) are the most effective ways to find security and reliability problems. The amount of time spent in reviews is a key indicator of system reliability and security: top performers spent 2/3 as much time in reviews as in development. For security-critical or safety-critical code, you need to get experts involved in doing reviews.
- Static analysis testing should be part of everyone’s development program. But don’t lean too heavily on it. Run static analysis before code reviews to catch basic mistakes and clean them up, or to identify problem areas in the code that need to be reviewed carefully. Run static analysis after code reviews to verify that the code looks good. But don’t try to use static analysis as a substitute for code reviews.
- Focus on writing good, clean code. Most Level 1 (high severity) defects are caused by coding mistakes.
- Train developers in secure design and coding so they know what not to do, what to look for when reviewing code, and how to fix security bugs properly.

Building reliable and secure systems isn’t cheap and it isn’t easy, especially at scale.
The SEI says that you must assume that complex systems are never error free, which means that they will never be completely secure. Our job is to do the best that we can, and hope that it is enough.

Reference: If you got bugs, you’ll get pwned from our JCG partner Jim Bird at the Building Real Software blog.

Reason for Slower Reading of Large Lines in JDK 7 and JDK 8

I earlier posted the blog post Reading Large Lines Slower in JDK 7 and JDK 8 and there were some useful comments on the post describing the issue. This post provides more explanation regarding why the file reading demonstrated in that post (and used by Ant‘s LineContainsRegExp) is so much slower in Java 7 and Java 8 than in Java 6. X Wang‘s post The substring() Method in JDK 6 and JDK 7 describes how String.substring() was changed between JDK 6 and JDK 7. Wang writes in that post that the JDK 6 substring() “creates a new string, but the string’s value still points to the same [backing char] array in the heap.” He contrasts that with the JDK 7 approach: “In JDK 7, the substring() method actually create a new array in the heap.” Wang’s post is very useful for understanding the differences in String.substring() between Java 6 and Java 7. The comments on this post are also insightful. They include a sentiment that I can appreciate: “I would say ‘different’ not ‘improved’.” There are also explanations of how JDK 7 avoids a potential memory leak that could occur in JDK 6. The StackOverflow thread Java 7 String – substring complexity explains the motivation for the change and references bug JDK-4513622 : (str) keeping a substring of a field prevents GC for object. That bug states, “An OutOfMemory error [occurs] because objects don’t get garbage collected if the caller stores a substring of a field in the object.” The bug contains sample code that demonstrates this error occurring. I have adapted that code here:

```java
/**
 * Minimally adapted from Bug JDK-4513622.
 *
 * {@link http://bugs.java.com/view_bug.do?bug_id=4513622}
 */
public class TestGC {
    private String largeString = new String(new byte[100000]);

    private String getString() {
        return this.largeString.substring(0, 2);
    }

    public static void main(String[] args) {
        java.util.ArrayList<String> list = new java.util.ArrayList<String>();
        for (int i = 0; i < 1000000; i++) {
            final TestGC gc = new TestGC();
            list.add(gc.getString());
        }
    }
}
```

The next screen snapshot demonstrates that code snippet (adapted from Bug JDK-4513622) executed with both Java 6 (jdk1.6 is part of the path of the executable Java launcher) and Java 8 (the default version on my host). As the screen snapshot shows, an OutOfMemoryError is thrown when the code is run in Java 6 but not when it is run in Java 8.

In other words, the change in Java 7 fixes a potential memory leak at the cost of a performance impact when executing String.substring against lengthy Java Strings. This means that any implementation that uses String.substring (including Ant’s LineContainsRegExp) to process really long lines probably needs to be changed to implement this differently, or should be avoided when processing very long lines, when migrating from Java 6 to Java 7 and beyond. Once the issue is known (the change of the String.substring implementation in this case), it is easier to find documentation online regarding what is happening (thanks for the comments that made these resources easy to find). The duplicates of bug JDK-4513622 have write-ups that provide additional details. These bugs are JDK-4637640 : Memory leak due to String.substring() implementation and JDK-6294060 : Use of substring() causes memory leak.
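One way to sidestep the now-linear substring cost when scanning very long lines is to search within the original string instead of carving out copies. The sketch below is my own illustration (it is not the actual Ant code, and `countLinesWithPrefix` is a hypothetical helper); it uses String.regionMatches, which compares characters in place without allocating a new String per line:

```java
public class BufferScan {
    // Count lines in 'buffer' that start with 'prefix'. indexOf and
    // regionMatches both work directly against the original string's
    // backing characters, so no per-line substring copies are made
    // regardless of the JDK's substring implementation.
    static int countLinesWithPrefix(String buffer, String prefix) {
        int count = 0;
        int start = 0;
        while (start <= buffer.length()) {
            int end = buffer.indexOf('\n', start);
            if (end < 0) end = buffer.length();
            if (end - start >= prefix.length()
                    && buffer.regionMatches(start, prefix, 0, prefix.length())) {
                count++;
            }
            start = end + 1;
        }
        return count;
    }

    public static void main(String[] args) {
        String buf = "INFO ok\nERROR boom\nINFO fine\n";
        System.out.println(countLinesWithPrefix(buf, "INFO"));  // 2
        System.out.println(countLinesWithPrefix(buf, "ERROR")); // 1
    }
}
```

For a million-character line this avoids a million-character copy per check, which is exactly the cost the JDK 7 substring change introduced.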
Other related online resources include Changes to String.substring in Java 7 [which includes a reference to String.intern() – there are better ways], Java 6 vs Java 7: When implementation matters, and the highly commented (over 350 comments) Reddit thread TIL Oracle changed the internal String representation in Java 7 Update 6 increasing the running time of the substring method from constant to N. The post Changes to String internal representation made in Java 1.7.0_06 provides a good review of this change and summarizes the original issue, the fix, and the new issue associated with the fix: “Now you can forget about a memory leak described above and never ever use new String(String) constructor anymore. As a drawback, you now have to remember that String.substring has now a linear complexity instead of a constant one.”

Reference: Reason for Slower Reading of Large Lines in JDK 7 and JDK 8 from our JCG partner Dustin Marx at the Inspired by Actual Events blog.

Challenging Myself With Coplien’s Why Most Unit Testing is Waste

In 2014, James O. Coplien wrote the thought-provoking essay Why Most Unit Testing is Waste, and he further elaborates the topic in his Segue. I love testing, but I also value challenging my views to expand my understanding, so it was a valuable read. When encountering something so controversial, it’s crucial to set aside one’s emotions and opinions and ask: “Provided that it is true, what in my world view might need questioning and updating?” Judge for yourself how well I have managed it. (Note: This post is not intended as a full and impartial summary of his writing but rather an overview of what I may learn from it.) Perhaps the most important lesson is this: Don’t blindly accept fads, myths, authorities and “established truths.” Question everything, collect experience, judge for yourself. As J. Coplien himself writes: “Be skeptical of yourself: measure, prove, retry. Be skeptical of me for heaven’s sake.” I am currently fond of unit testing, so my mission is now to critically confront Coplien’s ideas and my own preconceptions with practical experience on my next projects. I would suggest that the main thing you take away isn’t “minimize unit testing” but rather “value thinking, focus on system testing, employ code reviews and other QA measures.” I’ll list my main take-aways first and go into detail later on:

- Think! Communicate!
- Tools and processes (TDD) cannot introduce design and quality for you
- The risk mitigation provided by unit tests is highly overrated; it’s better to spend resources where the return on investment in terms of increased quality is higher (system tests, code reviews etc.)
- Unit testing is still meaningful in some cases
- Actually, testing – though crucial – is overrated. Analyse, design, and use a broad repertoire of quality assurance techniques
- Regarding automated testing, focus on system tests.
- Integrate code and run system tests continually
- Human insight beats machines; employ experience-based testing, including exploratory testing
- Turn asserts from unit tests into runtime pre-/post-condition checks in your production code
- We cannot test everything, and not every bug impacts users and is thus worth discovering and fixing (it’s wasteful to test things that never occur in practice)

I find two points especially valuable and thought-provoking, namely the (presumed) fact that the efficiency/cost ratio of unit testing is so low and that the overlap between what we test and what is used in production is so small. Additionally, the suggestion to use pre- and post-condition checks heavily in production code seems pretty good to me. And I agree that we need more automated acceptance/system tests and alternative QA techniques.

Think!

Unit testing is sometimes promoted as a way towards a better design (more modular and following principles such as SRP). However, according to Kent Beck, tests don’t help you to create a better design – they only help to surface the pain of a bad design. (TODO: Reference) If we set aside time to think about design for the sake of a good design, we will surely get a better result than when we employ tools and processes (JUnit, TDD) to force us to think about it (in the narrow context of testability). (On a related note, “hammock-driven development” is also based on an appreciation of the value of thinking.) Together with thinking, it is also efficient to discuss and explore design with others.

“[..] it’s much better to use domain modeling techniques than testing to shape an artefact.”

I heartily agree with this. (Side note: Though thinking is good, overthinking and premature abstraction can lead us to waste resources on overengineered systems. So be careful.)
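The take-away about turning unit-test asserts into runtime pre-/post-condition checks can be sketched in plain Java. This is my own illustration, not code from either essay; the property checked — that a concatenation’s length equals the sum of the input lengths — is property-based rather than value-based, so one check covers every real invocation in production (enable it with `java -ea`):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Contracts {
    // Concatenate two lists, guarded by a property-based postcondition:
    // the result's length must equal the sum of the input lengths.
    // Unlike a value-based unit-test assert (e.g. "result is [1,2,3]"),
    // this check travels with the production code and is exercised by
    // every call that ever happens, not just hand-picked test scenarios.
    static <T> List<T> concat(List<T> a, List<T> b) {
        List<T> result = new ArrayList<>(a);
        result.addAll(b);
        assert result.size() == a.size() + b.size()
                : "postcondition violated: unexpected concatenated length";
        return result;
    }

    public static void main(String[] args) {
        List<Integer> merged = concat(Arrays.asList(1, 2), Arrays.asList(3));
        System.out.println(merged); // [1, 2, 3]
    }
}
```

If the property is ever violated, the program fails fast at the offending call site instead of silently propagating a corrupted result.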
How to achieve high quality according to Coplien

- Communicate and analyse better to minimize faults due to misunderstanding of requirements and miscommunication
- Don’t neglect design, as faults in design are expensive to fix and hard to detect with testing
- Use a broad repertoire of quality assurance techniques such as code reviews, static code analysis, pair programming, and inspections of requirements and design
- Focus automated testing efforts on system (and, if necessary, integration) tests while integrating code and running these tests continually
- Leverage experience-based testing such as exploratory testing
- Employ defensive and design-by-contract programming. Turn asserts in unit tests into runtime pre-/post-condition checks in your production code and rely on your stress tests, integration tests, and system tests to exercise the code
- Apply unit testing where meaningful
- Test where the risk of defects is high

Tests cannot catch faults due to miscommunication/bad assumptions in requirements and design, which reportedly account for 45% of faults. Therefore communication and analysis are key. Unit testing in particular, and testing in general, is just one way of ensuring quality, and reportedly not the most efficient one. It is therefore best to combine a number of approaches for defect prevention and detection, such as those mentioned above. (I believe that formal inspection of requirements, design, and code in particular is one of the most efficient ones. Static code analysis can also help a lot with certain classes of defects.) (Also, there is a history of high-quality software with little or no unit testing, I believe.) As argued below, unit testing is wasteful and provides low value according to Coplien. Thus you should focus on tests that check the whole system or, if not practical, at least parts of it – system and integration tests. (I assume that automated acceptance testing as promoted by Specification by Example belongs here as well.)
Provide “hooks” to be able to observe the state and behavior of the system under test as necessary. Given modern hardware and possibilities, these tests should run continually. And developers should integrate their code and submit it for testing frequently (even a delay of 1 hour is too much). Use debugging and (perhaps temporary?) unit tests to pinpoint sources of failures.

JH: I agree that whole-system tests provide more value than unit tests. But I believe that they are typically much more costly to create, maintain and run, since they, by definition, depend on the whole system and thus have to relate to the whole (or a non-negligible portion) of it and can break because of changes in a number of places. It is also difficult to locate the cause of a failure since they invoke a lot of code. And if we are to test a small feature thoroughly, there would typically be a lot of duplication in its system tests (f.ex. to get the system to the state where we can start exercising the feature). Therefore people write more lower-level, more focused and cheaper tests, as visualized in the famous Test Pyramid. I would love to learn how Mr. Coplien addresses these concerns. In any case, we should really try to get at least a modest number of automated acceptance tests; I have rarely seen this.

Experienced testers are great at guessing where defects are likely, simulating unexpected user behavior and generally breaking systems. Experience-based – f.ex. exploratory – testing is thus an irreplaceable complement to (automated) requirements-based testing.

Defensive and design-by-contract programming: Coplien argues that most asserts in unit tests are actually pre- and post-condition and invariant checks and that it is therefore better to have them directly in the production code, as design-by-contract programming proposes. You would of course need to generalize them to property-based rather than value-based assertions, f.ex.
checking that the length of concatenated lists is the sum of the lengths of the individual lists, instead of concrete values. He writes:

“Assertions are powerful unit-level guards that beat most unit tests in two ways. First, each one can do the job of a large number (conceivably, infinite) of scenario-based unit tests that compare computational results to an oracle [JH: because they check properties, not values]. Second, they extend the run time of the test over a much wider range of contexts and detailed scenario variants by extending the test into the lifetime of the product.”

If something is wrong, it is certainly better to catch it and fail soon (and perhaps recover) than to silently go on and fail later; failing fast also makes it much easier to locate the bug later on. You could argue that these runtime checks are costly, but compared to the latency of network calls etc. they are negligible. Some unit tests have value – f.ex. regression tests and tests of algorithmic code where there is a “formal oracle” for assessing their correctness. Also, as mentioned elsewhere, unit tests can be a useful debugging (known defect localization) tool. (Regression tests should still preferably be written as system tests. They are valuable because, contrary to most unit tests, they have clear business value and address a real risk. However, Mr. Coplien also proposes to delete them if they haven’t failed within a year.)

Doubts about the value of unit testing

- In summary, it seems that agile teams are putting most of their effort into the quality improvement area with the least payoff. … unless [..] you are prevented from doing broader testing, unit testing isn’t the best you can do
- “Most software failures come from the interactions between objects rather than being a property of an object or method in isolation.” Thus testing objects in isolation doesn’t help
- The cost to create, maintain, and run unit tests is often forgotten or underestimated.
- There is also the cost incurred by the fact that unit tests slow down code evolution, because to change a function we must also change all its tests
- In practice we often encounter unit tests that have little value (or actually a negative one when we consider the associated costs), f.ex. a test that essentially only verifies that we have set up mocks correctly
- When we test a unit in isolation, we will likely test many cases that in practice do not occur in the system, and we thus waste our limited resources without adding any real value; the smaller a unit, the more difficult it is to link it to the business value of the system, and thus the probability of testing something without a business impact is high
- The significance of code coverage and the risk mitigation provided by unit tests is highly overrated; in reality there are so many possibly relevant states of the system under test, code paths and interleavings that we can hardly approach a coverage of 1 in 10^12
- It’s being claimed that unit tests serve as a safety net for refactoring, but that is based upon an unverified and doubtful assumption (see below)
- According to some studies, the efficiency of unit testing is much lower than that of other QA approaches (Capers Jones 2013)
- Thinking about design for the sake of design yields better results than design thinking forced by a test-first approach (as discussed previously)

Limited resources, risk assessment, and testing of unused cases

One of the main points of the essays is that our resources are limited, and it is thus crucial to focus our quality assurance efforts where they pay off most. Coplien argues that unit testing is quite wasteful because we are very likely to test cases that never occur in practice. One of the reasons is that we want to test the unit – f.ex. a class – in its entirety, while in the system it is used only in a particular way.
It is typically impossible to link such a small unit to the business value of the application, and thus to guide us to test cases that actually have business value. Due to polymorphism etc. there is no way, he argues, that a static analysis of object-oriented code could tell you whether a particular method is invoked and in what context/sequence. Discovering and fixing bugs that are in reality never triggered is a waste of our limited resources. For example, a Map (Dictionary) provides a lot of functionality, but any application typically uses only a small portion of it. Thus:

“One can make ideological arguments for testing the unit, but the fact is that the map is much larger as a unit, tested as a unit, than it is as an element of the system. You can usually reduce your test mass with no loss of quality by testing the system at the use case level instead of testing the unit at the programming interface level.”

Unit tests can also lead to “test-induced design damage” – degradation of code and quality in the interest of making testing more convenient (introduction of interfaces for mocking, layers of indirection etc.). Moreover, the fact that unit tests are passing does not imply that the code does what the business needs, contrary to acceptance tests. Testing – at any level – should be driven by risk assessment and cost-benefit analysis. Coplien stresses that risk assessment/management requires at least rudimentary knowledge of statistics and information theory.

The myth of code coverage

High code coverage makes us confident in the quality of our code. But the assumption that testing reproduces the majority of states and paths that the code will experience in deployment is rather dodgy. Even 100% line coverage is a big lie. A single line can be invoked in many different cases, and testing just one of them provides little information. Moreover, in concurrent systems the result might depend on the interleaving of concurrently executing threads, and there are many.
It may also – directly or indirectly – depend on the state of the whole system (an extreme example: a deadlock that is triggered only if the system is short of memory).

I define 100% coverage as having examined all possible combinations of all possible paths through all methods of a class, having reproduced every possible configuration of data bits accessible to those methods, at every machine language instruction along the paths of execution. Anything else is a heuristic about which absolutely no formal claim of correctness can be made.

JH: Focusing on the worst-case coverage might be too extreme, even though nothing else has a “formal claim of correctness.” Even though humans are terrible at foreseeing the future, we perhaps still have a chance of guessing the most likely and frequent paths through the code and their associated states, since we know its business purpose – though we will miss all the rare, hard-to-track-down defects. So I do not see this as an absolute argument against unit testing, but it really makes the point that we must be humble and skeptical about what our tests can achieve.

The myth of the refactoring and regression safety net

Unit tests are often praised as a safety net against inadvertent changes introduced when refactoring or when introducing new functionality. (I certainly feel much safer when changing a codebase with a good test coverage (ehm, ehm); but maybe it is just a self-imposed illusion.) Coplien has a couple of counterarguments (in addition to the low-coverage one):

- Given that the whole is greater than the sum of its parts, testing a part in isolation cannot really tell you that a change to it hasn’t broken the system (a theoretical argument based on Weinberg’s Law of Composition).
- Tests as a safety net to catch inadvertent changes due to refactoring – “[..]
it pretends that we can have a high, but not perfect, degree of confidence that we can write tests that examine only that portion of a function’s behavior that will remain constant across changes to the code.”
- “Changing code means re-writing the tests for it. If you change its logic, you need to reevaluate its function. […] If a change to the code doesn’t require a change to the tests, your tests are too weak or incomplete.”

Tips for reducing the mass of unit tests

If the cost of maintaining and running your unit tests is too high, you can follow Coplien’s guidelines for eliminating the least valuable ones:

- Remove tests that haven’t failed in a year (informative value < maintenance and running costs).
- Remove tests that do not test functionality (i.e. that don’t break when the code is modified).
- Remove tautological tests (f.ex. x.setY(5); assert x.getY() == 5).
- Remove tests that cannot be traced to business requirements and value.
- Get rid of unit tests that duplicate what system tests do; if the system tests are too expensive to run, create subunit integration tests.

(Notice that, theoretically and perhaps also practically speaking, a test that never fails has zero information value.)

Open questions

What is Coplien’s definition of a “unit test”? Is it one that checks an individual method? Or, at the other end of the scale, one that checks an object or a group of related objects but runs quickly, is independent of other tests, manages its own setup and test data, and generally does not access the file system, network, or a database? He praises testing based on Monte Carlo techniques – what is that? Does the system testing he speaks about differ from the BDD/Specification by Example way of automated acceptance testing?

Coplien’s own summary

- Keep regression tests around for up to a year – but most of those will be system-level tests rather than unit tests.
- Keep unit tests that test key algorithms for which there is a broad, formal, independent oracle of correctness, and for which there is ascribable business value.
- Except for the preceding case, if X has business value and you can test X with either a system test or a unit test, use a system test – context is everything.
- Design a test with more care than you design the code.
- Turn most unit tests into assertions.
- Throw away tests that haven’t failed in a year.
- Testing can’t replace good development: a high test failure rate suggests you should shorten development intervals, perhaps radically, and make sure your architecture and design regimens have teeth.
- If you find that the individual functions being tested are trivial, double-check the way you incentivize developers’ performance. Rewarding coverage or other meaningless metrics can lead to rapid architecture decay.
- Be humble about what tests can achieve. Tests don’t improve quality: developers do.
- “[..] most bugs don’t happen inside the objects, but between the objects.”

Conclusion

What am I going to change in my approach to software development? I want to embrace design-by-contract style runtime assertions. I want to promote and apply automated system/acceptance testing and explore ways to make these tests cheaper to write and maintain. I won’t hesitate to step away from the computer with a piece of paper to do some preliminary analysis and design. I want to experiment more with some of the alternative, more efficient QA approaches. And, based on real-world experiences, I will critically scrutinize the perceived value of unit testing as a development aid and as a regression/refactoring safety net, along with the actual cost of the tests – both in themselves and as an obstacle to changing code. I will also scrutinize Coplien’s claims, such as the discrepancy between code tested and code used. What about you, my dear reader? Find out for yourself what works for you in practice. Our resources are limited – use them on the most efficient QA approaches.
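The “turn most unit tests into assertions” advice from the summary above can be sketched with Java’s built-in assert statement; the Account class and its contract below are purely illustrative, not from Coplien’s text:

```java
// A minimal design-by-contract sketch: pre- and postconditions expressed
// as runtime assertions instead of as a separate unit-test file.
public class Account {
    private long balanceCents;

    public void deposit(long amountCents) {
        // Precondition on every real invocation (enable with: java -ea)
        assert amountCents > 0 : "amount must be positive";
        balanceCents += amountCents;
    }

    public void withdraw(long amountCents) {
        assert amountCents > 0 : "amount must be positive";
        assert amountCents <= balanceCents : "insufficient funds";
        long before = balanceCents;
        balanceCents -= amountCents;
        // Postcondition: the contract the implementation must honor.
        assert balanceCents == before - amountCents;
    }

    public long balance() {
        return balanceCents;
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(100);
        a.withdraw(40);
        System.out.println(a.balance()); // 60
    }
}
```

Unlike a unit test that exercises a handful of hand-picked inputs, these checks travel with the code and fire on every input the running system actually produces (when assertions are enabled with `java -ea`).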
Invert the test pyramid – focus on system (and integration) tests. Prune unit tests regularly to keep their mass manageable. Think.

Follow-Up

There is a recording of Jim Coplien and Bob Martin debating TDD. Mr. Coplien says he doesn’t have an issue with TDD as practiced by Uncle Bob; his main problem is with people who forego architecture and expect it to somehow magically emerge from tests. He stresses that it is important to leverage domain knowledge when proposing an architecture. But he also agrees that architecture has to evolve and that one should not spend “too much” time on architecting. He also states that he prefers design by contract because it provides many of the same benefits as TDD (making one think about the code, an outside view of the code) while also having much better coverage (since it applies to all input/output values, not just a few randomly selected ones), and it at least “has a hope” of being traced to business requirements. Aside from that, his criticism of TDD and unit testing is based largely on experiences with how people (mis)use them in practice. Side note: he also mentions that behavior-driven development is “really cool.”

Related

Slides from Coplien’s keynote Beyond Agile Testing to Lean Development. The “Is TDD Dead?” dialogue – a series of conversations between Kent Beck, David Heinemeier Hansson, and Martin Fowler on the topic of Test-Driven Development (TDD) and its impact on software design. Kent Beck’s reasons for participating are quite enlightening:

I’m puzzled by the limits of TDD – it works so well for algorithm-y, data-structure-y code. I love the feeling of confidence I get when I use TDD. I love the sense that I have a series of achievable steps in front of me – can’t imagine the implementation? No problem, you can always write a test.
I recognize that TDD loses value as tests take longer to run, as the number of possible faults per test failure increases, as tests become coupled to the implementation, and as tests lose fidelity with the production environment. How far out can TDD be pushed? Are there special cases where TDD works surprisingly well? Poorly? At what point is the cure worse than the disease? How can answers to any and all of these questions be communicated effectively?

(If you are short of time, you might get an impression of the conversations from these summaries.) Kent Beck’s RIP TDD lists some good reasons for doing TDD (and thus unit testing). (Aside from “Documentation,” none of them prevents you from using tests just as a development aid and deleting them afterwards, thus avoiding accumulating costs. It should also be noted that people evidently differ; to some, TDD might indeed be a great focus and design aid.)

Reference: Challenging Myself With Coplien’s Why Most Unit Testing is Waste from our JCG partner Jakub Holy at The Holy Java blog.

How I’d Like Java To Be

I like Java. I enjoy programming in Java. But after using Python for a while, there are several things I would love to change about it. It’s almost purely syntactical, so there may be a JVM language that does this better, but I’m not really interested, since I still need to use normal Java for work. I realize that these changes won’t be implemented (although I thought I heard that one of them is actually in the pipeline for a future version); these are just some thoughts. I don’t want to free Java up the way that Python is open and free. I actually often relish the challenges that the restrictions in Java present. I mostly just want to type less. So, here are the changes I’d love to see in Java.

Get Rid of Semicolons

I realize that they serve a purpose, but they really aren’t necessary. In fact, they can actually make code harder to read, since shoving multiple statements onto the same line is almost always more difficult to read. Technically, with semicolons, you could compress an entire code file down to one line in order to reduce the file size, but how often is that done in Java? It may be done more than I know, but I don’t know of any case where it’s done.

Remove the Curly Braces

There are a few reasons for this. First off, we could end the curly-brace cold war! Second, we can stop wasting lines of code on the braces. Also, like I said earlier, I’m trying to reduce how much typing I’m doing, and this will help. Lastly, by doing this, curly braces can be opened up for new uses (you’ll see later).

Operator Overloading

When it comes to mathematical operations, I don’t really care about operator overloading. It can be handy, but methods work okay for that. My biggest concern is comparison, especially ==. I really wish Java had followed Python in having == be for equality checking (you could even route it through the equals method) and “is” for identity checking.
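The ==/equals pain point is easy to demonstrate in today’s Java, where == on objects compares references, not contents:

```java
public class EqualityDemo {
    public static void main(String[] args) {
        String a = new String("rock");
        String b = new String("rock");

        System.out.println(a == b);      // false: two distinct objects (identity)
        System.out.println(a.equals(b)); // true: same contents (equality)

        // What the article wishes for is the Python semantics:
        //   a == b  -> equality (routed through equals)
        //   a is b  -> identity
    }
}
```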
And while we’re at it, implementing Comparable should allow you to use the comparison operators on those objects, rather than needing to translate the numeric return values yourself. If you want, you can allow some way to overload math operators, too.

Tuples and/or Data Structures

I could use either one, but both would be better. Tuples are especially useful as a return type for returning multiple things at once, which is sometimes handy. The same can be done with simple data structures (essentially C structs), too, since they should be pretty lightweight. A big thing for data structures is getting rid of Java Beans; it would be even better if we were able to define invariants on them, too. The big problem with Java Beans is that we shouldn’t have to define a full-on class just to pass some data around. If we can’t get structs, then, at the very least, I would like to get the next thing.

Properties

Omg, I love properties, especially in Python. Allowing you to use simple accessors and mutators as if they were a straight variable makes for some nice-looking code.

Default to public

I’ve seen a few cases where people talk about “better defaults”, where leaving off a modifier keyword (such as public, private, or static) should give you the most typical case. public is easily the most used keyword for classes and methods, so why is the default “package-private”? I could argue for private being the default for fields, too, but I kind of think that the default should be the same everywhere in order to reduce confusion – though I’m not stuck on that. I debate a little as to whether variables should default to final in order to help push people toward the idea of immutability, but I don’t care that much.

Type Objects

This kind of goes with the previous point about smart defaults. I think the automatic thing for primitives should be the ability to use them as objects. I don’t really care how you do this.
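The Java Beans ceremony complained about above looks like this in practice: a full class plus an accessor pair per field just to shuttle two values around (PointBean is a made-up, illustrative class):

```java
// Roughly fifteen lines of boilerplate for what a tuple or struct
// would express in one line.
public class PointBean {
    private int x;
    private int y;

    public PointBean(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public void setX(int x) { this.x = x; }
    public int getY() { return y; }
    public void setY(int y) { this.y = y; }

    public static void main(String[] args) {
        PointBean p = new PointBean(3, 4);
        System.out.println(p.getX() + "," + p.getY()); // 3,4
    }
}
```

For what it’s worth, Java later gained records (Java 16), which address much of this wish for data carriers, though without setters.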
Preferably, you’d leave open a way to get at the true primitives in order to optimize if you want. How this works doesn’t really matter to me. It would be kind of cool if they were naturally passed around as primitives most of the time, but autoboxed into the objects simply by calling any of their methods. Parameters and return types should not care which one is being passed. This would also hugely reduce the number of built-in functional interfaces in Java, since a majority are actually duplicates dealing with primitives.

List, Dictionary, and Set Literals

For those of you who have used JavaScript or Python, you know what I’m talking about. I mean, how flippin’ handy is THAT? This, tied in with constructors that can take Streams (sort of like Java’s version of generators. Sort of), would make collections a fair bit easier to work with. Dictionary literals and set literals would also make for some really good uses of curly braces.

Fin

That’s my list of changes that I’d love to see in Java. Like I said before, I don’t think these will ever happen (although I think I heard that they were working towards type objects), and it’s really just a little wish list. Do you guys agree with my choices?

Reference: How I’d Like Java To Be from our JCG partner Jacob Zimmerman at the Programming Ideas With Jake blog.
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.