
JavaFX Tip 17: Animated Workbench Layout with AnchorPane

I recently had to implement a layout for an application where the menu area and the status area could be hidden or shown with a slide-in / slide-out animation, based on whether the user was logged in or not. A video in the original post shows the layout in action.

In the past I probably would have implemented this kind of behavior with a custom control and custom layout code (as in "override the layoutChildren() method in the skin"). But this time my setup was different, because I was using afterburner.fx from Adam Bien, so I had FXML and a controller class. So what to do? I decided to try my luck with an anchor pane and to update the constraints on the stack panes via a timeline instance. Constraints are stored in the observable properties map of the stack panes. Whenever these constraints change, a layout of the anchor pane is requested automatically. If this happens without any flickering, we end up with a nice, smooth animation. By the way, coming from Swing I always expect flickering, but it normally doesn't happen with JavaFX.

I ended up writing the following controller class, which manages the anchor pane and its children stack panes. Please notice the little trick with the intermediate properties menuPaneLocation and bottomPaneLocation. They are required because the animation timeline works with properties: it updates these properties, and whenever they change, new anchor pane constraints are applied.

import static javafx.scene.layout.AnchorPane.setBottomAnchor;
import static javafx.scene.layout.AnchorPane.setLeftAnchor;

import javafx.animation.KeyFrame;
import javafx.animation.KeyValue;
import javafx.animation.Timeline;
import javafx.beans.property.BooleanProperty;
import javafx.beans.property.DoubleProperty;
import javafx.beans.property.SimpleBooleanProperty;
import javafx.beans.property.SimpleDoubleProperty;
import javafx.fxml.FXML;
import javafx.scene.layout.StackPane;
import javafx.util.Duration;

/**
 * This presenter covers the top-level layout concepts of the workbench.
 */
public class WorkbenchPresenter {

    @FXML
    private StackPane topPane;

    @FXML
    private StackPane menuPane;

    @FXML
    private StackPane centerPane;

    @FXML
    private StackPane bottomPane;

    public WorkbenchPresenter() {
    }

    private final BooleanProperty showMenuPane = new SimpleBooleanProperty(this, "showMenuPane", true);

    public final boolean isShowMenuPane() {
        return showMenuPane.get();
    }

    public final void setShowMenuPane(boolean showMenu) {
        showMenuPane.set(showMenu);
    }

    /**
     * Returns the property used to control the visibility of the menu panel.
     * When the value of this property changes to false then the menu panel
     * will slide out to the left.
     *
     * @return the property used to control the menu panel
     */
    public final BooleanProperty showMenuPaneProperty() {
        return showMenuPane;
    }

    private final BooleanProperty showBottomPane = new SimpleBooleanProperty(this, "showBottomPane", true);

    public final boolean isShowBottomPane() {
        return showBottomPane.get();
    }

    public final void setShowBottomPane(boolean showBottom) {
        showBottomPane.set(showBottom);
    }

    /**
     * Returns the property used to control the visibility of the bottom panel.
     * When the value of this property changes to false then the bottom panel
     * will slide out towards the bottom.
     *
     * @return the property used to control the bottom panel
     */
    public final BooleanProperty showBottomPaneProperty() {
        return showBottomPane;
    }

    public final void initialize() {
        menuPaneLocation.addListener(it -> updateMenuPaneAnchors());
        bottomPaneLocation.addListener(it -> updateBottomPaneAnchors());

        showMenuPaneProperty().addListener(it -> animateMenuPane());
        showBottomPaneProperty().addListener(it -> animateBottomPane());

        menuPane.setOnMouseClicked(evt -> setShowMenuPane(false));

        centerPane.setOnMouseClicked(evt -> {
            setShowMenuPane(true);
            setShowBottomPane(true);
        });

        bottomPane.setOnMouseClicked(evt -> setShowBottomPane(false));
    }

    /*
     * The updateMenu/BottomPaneAnchors methods get called whenever the value of
     * menuPaneLocation or bottomPaneLocation changes. Setting anchor pane
     * constraints will automatically trigger a relayout of the anchor pane
     * children.
     */

    private void updateMenuPaneAnchors() {
        setLeftAnchor(menuPane, getMenuPaneLocation());
        setLeftAnchor(centerPane, getMenuPaneLocation() + menuPane.getWidth());
    }

    private void updateBottomPaneAnchors() {
        setBottomAnchor(bottomPane, getBottomPaneLocation());
        setBottomAnchor(centerPane, getBottomPaneLocation() + bottomPane.getHeight());
        setBottomAnchor(menuPane, getBottomPaneLocation() + bottomPane.getHeight());
    }

    /*
     * Starts the animation for the menu pane.
     */
    private void animateMenuPane() {
        if (isShowMenuPane()) {
            slideMenuPane(0);
        } else {
            slideMenuPane(-menuPane.prefWidth(-1));
        }
    }

    /*
     * Starts the animation for the bottom pane.
     */
    private void animateBottomPane() {
        if (isShowBottomPane()) {
            slideBottomPane(0);
        } else {
            slideBottomPane(-bottomPane.prefHeight(-1));
        }
    }

    /*
     * The animations are using the JavaFX timeline concept. The timeline updates
     * properties. In this case we have to introduce our own properties further
     * below (menuPaneLocation, bottomPaneLocation) because ultimately we need to
     * update layout constraints, which are not properties. So this is a little
     * work-around.
     */

    private void slideMenuPane(double toX) {
        KeyValue keyValue = new KeyValue(menuPaneLocation, toX);
        KeyFrame keyFrame = new KeyFrame(Duration.millis(300), keyValue);
        Timeline timeline = new Timeline(keyFrame);
        timeline.play();
    }

    private void slideBottomPane(double toY) {
        KeyValue keyValue = new KeyValue(bottomPaneLocation, toY);
        KeyFrame keyFrame = new KeyFrame(Duration.millis(300), keyValue);
        Timeline timeline = new Timeline(keyFrame);
        timeline.play();
    }

    private DoubleProperty menuPaneLocation = new SimpleDoubleProperty(this, "menuPaneLocation");

    private double getMenuPaneLocation() {
        return menuPaneLocation.get();
    }

    private DoubleProperty bottomPaneLocation = new SimpleDoubleProperty(this, "bottomPaneLocation");

    private double getBottomPaneLocation() {
        return bottomPaneLocation.get();
    }
}

The following is the FXML that was required for this to work:

<?xml version="1.0" encoding="UTF-8"?>

<?import java.lang.*?>
<?import javafx.scene.layout.*?>

<AnchorPane maxHeight="-Infinity" maxWidth="-Infinity" minHeight="-Infinity" minWidth="-Infinity"
            prefHeight="400.0" prefWidth="600.0" xmlns="http://javafx.com/javafx/8"
            xmlns:fx="http://javafx.com/fxml/1" fx:controller="com.workbench.WorkbenchPresenter">
    <children>
        <StackPane fx:id="bottomPane" layoutX="-4.0" layoutY="356.0" prefHeight="40.0" AnchorPane.bottomAnchor="0.0" AnchorPane.leftAnchor="0.0" AnchorPane.rightAnchor="0.0" />
        <StackPane fx:id="menuPane" layoutY="28.0" prefWidth="200.0" AnchorPane.bottomAnchor="40.0" AnchorPane.leftAnchor="0.0" AnchorPane.topAnchor="40.0" />
        <StackPane fx:id="topPane" prefHeight="40.0" AnchorPane.leftAnchor="0.0" AnchorPane.rightAnchor="0.0" AnchorPane.topAnchor="0.0" />
        <StackPane fx:id="centerPane" layoutX="72.0" layoutY="44.0" AnchorPane.bottomAnchor="40.0" AnchorPane.leftAnchor="200.0" AnchorPane.rightAnchor="0.0" AnchorPane.topAnchor="40.0" />
    </children>
</AnchorPane>

Reference: JavaFX Tip 17: Animated Workbench Layout with AnchorPane from our JCG partner Dirk Lemmermann at the Pixel Perfect blog.
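An aside that is not part of the original article: the presenter above is wired up via afterburner.fx, but the same FXML can also be loaded with plain JavaFX. A minimal, hedged sketch, assuming the FXML above is saved as workbench.fxml next to the application class:

import javafx.application.Application;
import javafx.fxml.FXMLLoader;
import javafx.scene.Parent;
import javafx.scene.Scene;
import javafx.stage.Stage;

public class WorkbenchApp extends Application {

    @Override
    public void start(Stage stage) throws Exception {
        // The file name is an assumption; afterburner.fx normally derives it
        // from the presenter name by convention. The fx:controller attribute
        // in the FXML makes the loader instantiate WorkbenchPresenter and
        // call its initialize() method.
        Parent root = FXMLLoader.load(getClass().getResource("workbench.fxml"));
        stage.setScene(new Scene(root));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}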

A JSF List Example

This is an example list application built using JSF 2.0 (JavaServer Faces). The app is a list of todo (to do) items, with functions to add, edit or delete items in the list. A todo item has name and description properties. The completed app's JSF page has:

- A todo list implemented using the h:selectOneListbox html tag. The data for the list is populated using the f:selectItems core tag.
- The todo name and description fields, implemented using the h:inputText and h:inputTextarea tags respectively.
- The new, edit, save, delete and cancel functions, implemented with h:commandButton tags.
- A status message implemented using an h:outputText tag.

Classes used in the app:

- Todo: represents a todo item; it has name and description properties.
- TodosBean: a managed bean; it has the code to run the application, including listeners and accessor methods for the components.
- TodoConverter: a custom converter that converts the string todo name to a Todo object and vice versa.

(The original post shows a screenshot of the completed app's user interface at this point.)

This example app is explained in three steps. The first step explains the basic list implementation, and the app's function is enhanced over the next steps. The steps are:

- Step 1: The todo list displays items, and on select displays the selected todo item's properties.
- Step 2: List with todos and a function to add items to the list.
- Step 3: List with todos and functions to add, edit and delete list items.

Step 1: The todo list displays items, and on select displays the selected todo item's properties.

The following are the code components for this step:

- Todo.java: the class that represents the todo.
- index.xhtml: the JSF page with a listbox and a status message that displays the selected item in the list.
- TodosBean.java: the managed bean with functions to get the list data, run the list's value change listener, and display the status message.

Todo.java: This class represents the todo item. It has two attributes, name and description. Note the overridden Object.toString() method.

package com.javaquizplayer.example;

public class Todo {

    private String name;
    private String desc;

    public Todo() {
    }

    public Todo(String name, String desc) {
        this.name = name;
        this.desc = desc;
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getDesc() { return desc; }
    public void setDesc(String desc) { this.desc = desc; }

    @Override
    public String toString() {
        return this.name;
    }
}

index.xhtml: This JSF page displays the todo list. The list can be scrolled and an item can be selected; the selected item's name is then displayed in the status message. The listbox is implemented with the h:selectOneListbox html tag. The listbox's currently selected item value is specified with the attribute value="#{bean.todo}". The selection items are specified with the f:selectItems core tag, <f:selectItems value="#{bean.data}"/>, placed within the h:selectOneListbox tag. The listbox's items are populated from TodosBean's getData() method, which returns a List collection. The listbox displays the labels, i.e. the todo name values: the Todo object's String value from the toString() method. The listbox's value change listener is specified with the attribute valueChangeListener="#{bean.valueChanged}". When a list item is selected, the form is submitted and this listener code is executed. In this example, when a list item is selected, the todo's name is displayed in the status message as "todo_item_name selected.".
The form is submitted each time an item is selected in the listbox. This is specified with the listbox's attribute onchange="submit()". The status message is displayed with the output component <h:outputText id="msg" value="#{bean.message}" />.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:h="http://java.sun.com/jsf/html">
<h:head>
    <title>A JSF List Example</title>
</h:head>
<h:body>
    <h:form>
        <h3>TODOs List</h3>
        <h:selectOneListbox id="list" size="10" value="#{bean.todo}"
                valueChangeListener="#{bean.valueChanged}" onchange="submit()">
            <f:selectItems value="#{bean.data}"/>
        </h:selectOneListbox>
        <h:outputText id="msg" value="#{bean.message}" />
    </h:form>
</h:body>
</html>

TodosBean.java: This managed bean class has functions:

- To create the initial todo data and populate the todo list.
- A value change listener to get the currently selected list item.
- To capture the currently selected item value in the list.
- To set the status message.

package com.javaquizplayer.example;

import javax.faces.bean.SessionScoped;
import javax.faces.bean.ManagedBean;
import javax.faces.event.ValueChangeEvent;
import java.io.Serializable;
import java.util.List;
import java.util.ArrayList;

@ManagedBean(name="bean")
@SessionScoped
public class TodosBean implements Serializable {

    private List<Todo> data; // todo list data
    private String todo;     // the currently selected item value
    private String msg;      // status message

    public TodosBean() {
        loadData();
        // select the first item in the list
        Todo t = data.get(0);
        setTodo(t.getName());
        setMessage(t.getName() + " selected.");
    }

    private void loadData() {
        data = new ArrayList<>();
        Todo t = new Todo("item 1", "item 1 description");
        data.add(t);
        t = new Todo("item 2", "item 2 description");
        data.add(t);
        t = new Todo("item 3", "item 3 description");
        data.add(t);
        t = new Todo("item 4", "item 4 description");
        data.add(t);
    }

    public List<Todo> getData() { return data; }

    public String getTodo() { return todo; }
    public void setTodo(String t) { todo = t; }

    // value change listener for list item selection
    public void valueChanged(ValueChangeEvent e) {
        String t = (String) e.getNewValue();
        setMessage(t + " selected.");
    }

    public void setMessage(String s) { msg = s; }
    public String getMessage() { return msg; }
}

Step 2: List with todos and a function to add items to the list.

In this step, the app gains a function to add a new todo item. Click the New button, enter the todo data in the name and description text fields, and save. Cancel the new todo data entry by clicking the Cancel button, or by selecting another item in the list. The code components are the same as those of the previous step, but are enhanced with new functions, and a new custom converter class is added to the app:

- Todo.java, the class that represents the todo, is not changed.
- index.xhtml is the JSF page with the listbox and the status message that displays the selected item in the list. In addition, there are now widgets to enter new todo items and save them.
- TodosBean.java, the managed bean, has the code to get the list data, run the list's value change listener and display a message. In addition, there are action listeners for the new, save and cancel actions.
- TodoConverter.java, a converter, converts data from the todo string value to a Todo object and vice versa.

Todo.java: This class remains unchanged.
index.xhtml: The following are the changes. The listbox's currently selected item value is still specified as value="#{bean.todo}", but where in step 1 the item value resolved to the todo's name string, it now resolves to an instance of Todo. The following are newly added:

- The todo's name and description fields are implemented with the h:inputText and h:inputTextarea tags respectively. Note that these fields are editable only when the todo data is being edited (i.e., during the New todo function): readonly="#{not bean.editable}". When the list is in select mode, these fields are read-only.
- A converter is attached to the list to convert the selected item name to a Todo object, and vice versa, using the f:converter core tag: <f:converter converterId="todoConvertor"/>. Note that an attribute is set for the converter, <f:attribute name="beanattr" value="#{bean}"/>; this is used to access the Todo data within the converter class.
- Three command buttons are added for the new, save and cancel actions using the h:commandButton tag. Each button has its respective action listener, for example <h:commandButton value="New" actionListener="#{bean.newListener}"/>.
- Finally, the listbox's submit is changed to an Ajax call using the f:ajax core tag: onchange="submit()" is replaced with <f:ajax execute="@this" render="msg name desc" />. Why this change? With the submit option, when the New action is cancelled by selecting another list item, the text and textarea fields would not be populated with the selected item; the values would remain as edited, because the edited text values are also submitted with the form. With Ajax, the form is not submitted; only the text values are updated (the render attribute of the f:ajax tag specifies the fields to be updated: the status message, todo name and description).

NOTE: The original post highlights the lines added to and removed from step 1; that highlighting cannot be reproduced here, and the removed onchange="submit()" attribute is omitted below.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:h="http://java.sun.com/jsf/html">
<h:head>
    <title>A JSF List Example</title>
</h:head>
<h:body>
    <h:form>
        <h3>TODOs List</h3>
        <h:panelGrid>
            <h:inputText id="name" value="#{bean.todoName}" size="30"
                    readonly="#{not bean.editable}"/>
            <h:inputTextarea id="desc" value="#{bean.todoDesc}" rows="2" cols="40"
                    readonly="#{not bean.editable}"/>
            <h:selectOneListbox id="list" size="10" value="#{bean.todo}"
                    valueChangeListener="#{bean.valueChanged}">
                <f:ajax execute="@this" render="msg name desc"/>
                <f:selectItems value="#{bean.data}"/>
                <f:attribute name="beanattr" value="#{bean}"/>
                <f:converter converterId="todoConvertor"/>
            </h:selectOneListbox>
        </h:panelGrid>
        <h:commandButton value="New" actionListener="#{bean.newListener}"/>
        <h:commandButton value="Save" actionListener="#{bean.saveListener}"/>
        <h:commandButton value="Cancel" actionListener="#{bean.cancelListener}"/>
        <h:outputText id="msg" value="#{bean.message}" />
    </h:form>
</h:body>
</html>

TodoConverter.java: This is a custom converter class; it converts Todo objects to strings (the todo name) and vice versa.
package com.javaquizplayer.example;

import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.convert.Converter;
import javax.faces.convert.ConverterException;
import javax.faces.convert.FacesConverter;
import java.util.Map;

@FacesConverter(value="todoConvertor")
public class TodoConverter implements Converter {

    @Override
    public Object getAsObject(FacesContext context, UIComponent component, String value)
            throws ConverterException {
        if (value == null) {
            return null;
        }
        Map<String, Object> attrs = component.getAttributes();
        TodosBean bean = (TodosBean) attrs.get("beanattr");
        Todo todo = bean.getTodoForName(value);
        return todo;
    }

    @Override
    public String getAsString(FacesContext context, UIComponent component, Object value)
            throws ConverterException {
        if (value == null) {
            return null;
        }
        Todo todo = (Todo) value;
        return todo.getName();
    }
}

TodosBean.java: This managed bean class has functions:

- To populate the todo list.
- A value change listener to get the currently selected list item.
- To capture the currently selected item value in the list.
- To set the status message.

These are the changes:

- The currently selected item value in the list is captured as a Todo object (instead of the todo name string used earlier).
- The value change listener is changed to get the Todo object rather than the string todo name.

These are newly added:

- Accessor methods for the todo name, description and their editability.
- Action listeners for the new, save and cancel actions.

package com.javaquizplayer.example;

import javax.faces.bean.SessionScoped;
import javax.faces.bean.ManagedBean;
import javax.faces.event.ValueChangeEvent;
import javax.faces.event.ActionEvent;
import java.io.Serializable;
import java.util.List;
import java.util.ArrayList;

@ManagedBean(name="bean")
@SessionScoped
public class TodosBean implements Serializable {

    private List<Todo> data;
    private Todo todo;    // selected item value
    private String msg;
    private String name;  // text field value
    private String desc;
    private String actionFlag = "NONE"; // specifies the current action (NEW, NONE)
    private boolean editable;

    public TodosBean() {
        loadData();
        if (data.size() == 0) {
            return;
        }
        Todo t = data.get(0);
        selectRow(t); // select the first item in the list
    }

    private void selectRow(Todo t) {
        setTodo(t);
        setTodoName(t.getName());
        setTodoDesc(t.getDesc());
        setMessage(t.getName() + " selected.");
    }

    private void loadData() {
        data = new ArrayList<>();
    }

    public List<Todo> getData() { return data; }

    public Todo getTodo() { return todo; }
    public void setTodo(Todo t) { todo = t; }

    public void valueChanged(ValueChangeEvent e) {
        if (! actionFlag.equals("NONE")) {
            setEditable(false);
            actionFlag = "NONE";
        }
        Todo t = (Todo) e.getNewValue();
        setMessage(t.getName() + " selected.");
        setTodoName(t.getName());
        setTodoDesc(t.getDesc());
    }

    public void setMessage(String msg) { this.msg = msg; }
    public String getMessage() { return msg; }

    public String getTodoName() { return name; }
    public void setTodoName(String n) { name = n; }
    public String getTodoDesc() { return desc; }
    public void setTodoDesc(String d) { desc = d; }

    // returns the Todo object for a given todo name
    // method used in converter
    public Todo getTodoForName(String name) {
        for (Todo t : data) {
            if (name.equals(t.getName())) {
                return t;
            }
        }
        return null;
    }

    public void setEditable(boolean b) { editable = b; }
    public boolean getEditable() { return editable; }

    public void newListener(ActionEvent e) {
        setEditable(true);
        setMessage("Enter new todo. Name must be unique and at least 5 chars.");
        setTodoName("NEW Todo");
        setTodoDesc("");
        actionFlag = "NEW";
    }

    public void saveListener(ActionEvent e) {
        if (! actionFlag.equals("NEW")) {
            return;
        }
        String name = getTodoName();
        String desc = getTodoDesc();
        if (name.length() < 5) {
            setMessage("Name must be at least 5 chars long.");
            return;
        }
        if (duplicateName(name)) {
            setMessage("Name must be unique.");
            return;
        }
        Todo t = new Todo(name, desc);
        data.add(t);
        setMessage(name + " saved.");
        setTodo(t); // select the saved item
        setEditable(false);
        actionFlag = "NONE";
    }

    private boolean duplicateName(String name) {
        for (Todo t : data) {
            if (t.getName().equals(name)) {
                return true;
            }
        }
        return false;
    }

    public void cancelListener(ActionEvent e) {
        if (actionFlag.equals("NONE")) {
            return;
        }
        setMessage(actionFlag + " action cancelled.");
        actionFlag = "NONE";
        if (data.size() == 0) {
            setTodoName("");
            setTodoDesc("");
            setEditable(false);
            return;
        }
        // populate text fields with selected item
        setTodoName(todo.getName());
        setTodoDesc(todo.getDesc());
        setEditable(false);
    }
}

Step 3: List with todos and functions to add, edit and delete list items.

This is the completed app, with functions to select, add, edit and delete a list item. In this step, two new functions are added: editing and deleting a list item. To edit, select a todo list item and click the Edit button; this allows changing the name and description values and saving them. The editing may be cancelled either by clicking the Cancel button or by selecting another list item. The delete function deletes the selected todo list item. The code components are the same as those of the previous step 2, but are enhanced with new functions:

- Todo.java, the class that represents the todo, is not changed.
- TodoConverter.java, the converter that converts data from the todo string value to a Todo object and vice versa, is not changed.
- index.xhtml is the JSF page with the listbox, the todo properties, and the status message that displays the selected item in the list. In addition, there are now widgets to add or edit a selected item, or delete it.
- TodosBean.java, the managed bean, has the code to get the list data, run the list's value change listener and display a message. In addition, there are action listeners for the new, edit, delete, save and cancel actions.

Todo.java: This class remains unchanged.

TodoConverter.java: This class remains unchanged.

index.xhtml: The following are newly added: two command buttons for the edit and delete actions, each with its respective action listener (a hedged sketch follows below).
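The step-3 listings ship only in the download linked below, so the following is strictly my own guess at their shape, not the author's code. The two extra buttons would follow the same h:commandButton pattern as New/Save/Cancel (e.g. actionListener="#{bean.editListener}"), and the listeners, added to TodosBean, might look roughly like this (the "EDIT" action flag is an assumption, and the real save logic would also have to handle it):

// Hypothetical sketch of the step-3 listeners, added to TodosBean.
public void editListener(ActionEvent e) {
    if (todo == null) {
        return;
    }
    setEditable(true); // unlock the name/description fields
    setMessage("Edit " + todo.getName() + " and save.");
    actionFlag = "EDIT"; // assumed additional action state
}

public void deleteListener(ActionEvent e) {
    if (todo == null) {
        return;
    }
    data.remove(todo); // drop the selected item from the list
    setMessage(todo.getName() + " deleted.");
    setTodoName("");
    setTodoDesc("");
    setTodo(null);
    actionFlag = "NONE";
}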
TodosBean.java: This managed bean class has functions to populate the todo list, a value change listener to get the currently selected list item, code to capture the currently selected item value in the list, code to set the status message, accessor methods for the todo name, description and their editability, and action listeners for the new, save and cancel actions. These are enhanced to accommodate the edit and delete functions. Newly added are the action listeners for the edit and delete actions.

Code Download: The original post links to the completed app's source code and WAR file.

Notes and References: This app was developed using Apache MyFaces 2.0 (MyFaces 2.0 implements JavaServer Faces 2.0). The app was tested on the Tomcat 6 web server and the GlassFish 3 application server (GlassFish 3 implements Java EE 6). Useful links: Apache MyFaces 2.0, GlassFish 3 documentation, Java EE 6 API.

Plug into the Wall: Interfaces to the Outside World

Just to be clear, this article isn't about interfacing with hardware, though what it says does apply a little bit.

You've Heard It Before

You've probably heard it quite a bit, actually: program to an interface, not an implementation. More than likely, though, you're not doing it nearly as well as you think you are. When you use a library in your project, are you using its interfaces as your interfaces? Or did you design your own interface to work with the library in the way that you want to use it? If not, you won't be able to switch to a new library that implements the functionality better, and you could be stuck with chunks of code written to please the library intermixed with your logical code. Is there a layer of abstraction between your code and the framework you're using? If not, you cannot change to a different framework without a lot of work. Even directly connecting with a framework is an implementation detail that should be abstracted away.

This is the idea that MVC was originally based on. It placed interfaces between each of the three layers in order to make it easy to swap views and models out (technically, you could swap out a controller too, but the controller is typically the mediator and sticks around). This basic idea comes up in a couple of spots in Clean Code, which is generally regarded as the authority on good code, most memorably in the first case study below.

Case Studies

API Doesn't Exist Yet

In chapter 8 of Clean Code, there is a section called "Using Code That Does Not Yet Exist". In it, Uncle Bob describes a scenario where he and his colleagues were writing software for a radio communications system. The system contained a subsystem, Transmitter, that wasn't defined yet, and the developers didn't want to stop working just to wait for it to be defined. So they worked around it: whenever they ran into code that would require the Transmitter, they thought about how they felt it should be used. After a while, they had figured out a good interface that they wanted to use, and they made a fake transmitter that implemented it. Once the real transmitter API came in, they simply wrapped it with an adapter (as in the Adapter Pattern) to their own interface, so the only change needed in the older code was to replace the creation of the fake transmitter with the creation of the adapter to the real thing.
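Clean Code's actual listings aren't reproduced here, so the following is only a hedged sketch of the shape of that solution; every name in it is hypothetical:

// Hypothetical interface the team designed for themselves.
interface Transmitter {
    void transmit(int frequency, String data);
}

// Stand-in used until the real subsystem arrived.
class FakeTransmitter implements Transmitter {
    @Override
    public void transmit(int frequency, String data) {
        // No-op (or log); good enough to keep the rest of the code moving.
    }
}

// Imaginary vendor API with a shape the team doesn't control.
class RealTransmitterApi {
    public void keyOn(int kilohertz) { /* vendor code */ }
    public void send(String payload)  { /* vendor code */ }
}

// Adapter: the only place that knows about the vendor's shape.
class TransmitterAdapter implements Transmitter {
    private final RealTransmitterApi api;

    TransmitterAdapter(RealTransmitterApi api) {
        this.api = api;
    }

    @Override
    public void transmit(int frequency, String data) {
        api.keyOn(frequency);
        api.send(data);
    }
}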
FitNesse MVC

Bob Martin also gave a talk on how the MVC pattern is supposed to supply interfaces for the interaction between the layers, in which he gave an example from when his team was working on FitNesse. They initially started off saying that they'd need a MySQL database, but someone piped up and said they needn't make their database decision yet. So they didn't. Instead, they created their model interface and simply had it persist in memory at first; they didn't need persistence between runs while they were building it. Later on, they did start to need data to truly persist. Again, they almost made a database decision when someone said that they should just use flat files to put off the database decision longer. So they wrote a version of the model that stored itself in files instead of a database, while keeping the same interface they had before. In fact, when they were done with everything else, they decided that they didn't even need database support, so they shipped without it. The only reason they eventually added it was that a client of FitNesse required the use of a MySQL database, and thanks to the interface they added that support very quickly and easily.

What Should I Do?

Any time you're working with code that you don't have control over, you should separate it from your real code by making an Adapter or Facade for it. This interface should be defined the way that you want to use it. MVC frameworks provide an interesting challenge. The one big piece of advice I have for you is to not use their controllers as your controllers, especially if their controllers use annotations and/or framework-specific types. Those controllers should simply act as the view's adapter to your own controllers; their job is to transform what the framework gives them into something that can be delegated to your controllers (a hedged sketch of this appears after the Exceptions section below). You should make adapters and facades for the libraries you use, too. Even if a library has an interface that you really like, you should make your own, even if it's an exact duplicate (you can remove the parts of the API you never use, too). Then, if you end up swapping out the library, or the library makes a breaking update, you can use the new library behind the old interface by updating the wrapper or writing a new one.

The Benefits

The first benefit of defining your own boundary interfaces is that they are under your control. This means that you can change them however you want in order to make the code that uses them cleaner and/or more efficient. They also provide a good, practical place for your helper methods for working with certain libraries. Instead of writing helper classes full of static methods, you can put the functionality into the wrappers, whether as explicit helpers that are part of the interface, or as implicit helpers implemented privately by the specific wrapper. Either way, this removes the need for a barely-helpful helper class and puts the functionality in a much more useful place. Using your own interface also allows you to start working with a component before it is ready, whether because its implementation isn't finished yet or because you are putting off the decision of which component to use until later. It makes the user code cleaner (assuming you define a clean interface), and you can refactor that code at will because you're in control of the interface it calls. And it provides a single place for change. Need I say more?

Exceptions

Most rules have exceptions, and this rule isn't an exception to that. One exception, though it doesn't have to be, is your testing framework. There is a chance that you'll change your testing framework, but you probably won't. If you expect you will, try to extract what would be common across frameworks and put it somewhere separate; then have the framework-specific code call that extracted code. If there is a framework or tool that your company always uses, you can safely assume that it won't be switched out. Keep in mind, though, the benefit of creating your own interface for the sake of cleaner code; that is still a valid reason to create your own.
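To make the framework-controller advice concrete, here is a minimal, hedged sketch that uses the plain Servlet API to stand in for "the framework"; the controller and all names are my invention, and the same shape applies to annotation-driven MVC frameworks:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Your controller: no framework types anywhere in its signature.
class GreetingController {
    String greet(String name) {
        return "Hello, " + name;
    }
}

// The framework-facing adapter: it translates servlet types into plain
// arguments, delegates to your controller, and translates the result back.
public class GreetingServlet extends HttpServlet {

    private final GreetingController controller = new GreetingController();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String name = req.getParameter("name");
        resp.getWriter().println(controller.greet(name == null ? "world" : name));
    }
}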
We WEREN’T the Exception If you designed your application using a certain framework or library and you suddenly need to switch out to something else, or the framework or library made a breaking update (but you NEED the new functionality that comes with the update), your first steps should be refactoring the code to use a new interface (if you make one that looks JUST like the tool’s interface, it’ll make the transition even easier!), in order to protect against it happening again. Often, this will not only protect you later, but it will actually make the current transition to the new software easier. Time to Confess I must confess, I haven’t done hardly any of this. I’ve heard about it a couple times, but it slipped from my mind before I had a chance to apply it. I’m making a resolution to start doing this a lot more, and I’ll probably tell you guys some day about how it’s working out for me. I expect that it’ll make for some nicer code in many places, but it’ll be tedious work to get it set up and started.Reference: Plug into the Wall: Interfaces to the Outside World from our JCG partner Jacob Zimmerman at the Programming Ideas With Jake blog....

The Pool–An Agile Fable

Once upon a time there was a pool. It was in a club, and the club people loved the pool. The pool had six lanes, and each lane was wide enough for two people to swim in parallel. Because the pool was popular, at certain hours there were more than twelve people, and then people joined already populated lanes. It wasn't as comfortable as swimming alone or in a pair, but it was possible to get by with three people swimming in a lane. Four was a stretch, but it sometimes came to that.

There were certain rules for using the pool, but they were not enforced; they were used as guidelines. For example, two of the lanes had a small sign on them: "Fast Lane". They were intended for fast swimmers. It was a well-known guideline, but the guideline made sense only when the pool was full. The system (pool and swimmers) was built so that fast swimmers gravitated toward the fast lanes and slower swimmers toward the other lanes. Some experienced people even tried to game the system. For example, when the thirteenth person came to the pool, he or she looked at the pace of the other swimmers, then joined the lane that was compatible with his or her pace. Others took into account which people they knew and their schedules: how long others had already been in the pool, and how long they were going to stay. Then they chose the lane that was optimized for their situation (mainly, one that gave them a free lane). And the people were mostly happy.

Then one day a new sheriff came to town: a new lifeguard. Fresh out of lifeguard school, he remembered what they had told him in class: it is safer and more comfortable for swimmers if everyone in a lane swims at the same speed. His main concern was safety first, comfort second (as it should be). Armed with this set of priorities, this knowledge and a set of more visible "Fast Lane" signs, he started to enforce the methodology. He asked (rather authoritatively) people to move between lanes, so that fast swimmers were swimming together and slower swimmers were swimming in their own lanes. Some of the swimmers had to move more than once within the same session, as the population of the lanes changed. Because there were basically two classes of service, Fast Lane and Slow Lane, the people who got moved around were the in-betweeners. The fast swimmers still gravitated naturally toward the fast lanes, the slow swimmers toward the slow ones. The lifeguard then looked at all the others, decided where they fit best, and moved them. And the people were less happy. (Actually, only those who now needed to move around; you can't really argue with a lifeguard.)

The agile perspective

Let's see what happened here from an agile point of view. Before the new regime, the pool was self-regulated. Even when gaming the system, the people still supported safety and comfort, and they made the decisions themselves. Once it went into command & control mode, people felt that the choice had been taken away from them, and they resisted. Safety was maintained, but the comfort level dropped. In addition, every "which lane should I pick" decision now moved to the lifeguard. Since the lifeguard was not that busy, that wasn't really a problem here, but creating a single point of decision creates a bottleneck, which in regular team work is not healthy. Finally, rules and regulations are there for a reason, everyone will tell you. But rules are only one way to solve problems; there are other ways to make sure safety is maintained.

The moral is: don't mess with a self-organizing team. The chances of them improving just because you arrive with a tried and true methodology are not big.

Reference: The Pool–An Agile Fable from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.

Top 10 Easy Performance Optimisations in Java

There has been a lot of hype around the buzzword "web scale", and people go to great lengths to reorganise their application architecture to get their systems to "scale". But what is scaling, and how can we make sure that we can scale?

Different aspects of scaling

The hype mentioned above is mostly about scaling load, i.e. making sure that a system that works for 1 user will also work well for 10 users, or 100 users, or millions. Ideally, your system is as "stateless" as possible, such that the few pieces of state that really remain can be transferred and transformed on any processing unit in your network. When load is your problem, latency is probably not, so it's OK if individual requests take 50-100ms. This is often referred to as scaling out.

An entirely different aspect of scaling is scaling performance, i.e. making sure that an algorithm that works for 1 piece of information will also work well for 10 pieces, or 100 pieces, or millions. Whether this type of scaling is feasible is best described by Big O Notation. Latency is the killer when scaling performance: you want to do everything possible to keep all calculation on a single machine. This is often referred to as scaling up.

If there were anything like a free lunch (there isn't), we could combine scaling up and out indefinitely. Anyway, today we're going to look at some very easy ways to improve things on the performance side.

Big O Notation

Java 7's ForkJoinPool as well as Java 8's parallel Stream help parallelise things, which is great when you deploy your Java program onto a multi-core machine. The advantage of such parallelism, compared to scaling across different machines on your network, is that you can almost completely eliminate latency effects, as all cores can access the same memory. But don't be fooled by the effect that parallelism has! Remember the following two things:

- Parallelism eats up your cores. This is great for batch processing, but a nightmare for asynchronous servers (such as HTTP). There are good reasons why we've used the single-thread servlet model in the past decades. Parallelism only helps when scaling up.
- Parallelism has no effect on your algorithm's Big O Notation. If your algorithm is O(n log n) and you let it run on c cores, you still have an O(n log n / c) algorithm, as c is an insignificant constant in your algorithm's complexity. You will save wall-clock time, but not reduce complexity!

The best way to improve performance, of course, is by reducing algorithm complexity. The ideal is to achieve O(1) or quasi-O(1), for instance via a HashMap lookup. But that is not always possible, let alone easy. If you cannot reduce your complexity, you can still gain a lot of performance by tweaking your algorithm where it really matters, if you can find the right spots. Assume an algorithm whose call tree looks like this (the original post illustrates it with a diagram): an outer loop over N that branches either into M and a heavy operation, or into O, then P, then an easy operation. The overall complexity of the algorithm is O(N³), or O(N x O x P) if we want to deal with the individual orders of magnitude. However, when profiling this code, you might find a funny scenario: on your development box, the left branch (N -> M -> Heavy operation) is the only branch that you can see in your profiler, because the values for O and P are small in your development sample data. On production, however, the right branch (N -> O -> P -> Easy operation, or also: N.O.P.E.) is really causing trouble.
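To make that shape concrete, here is a hypothetical sketch of the N.O.P.E. branch; all names and types are invented purely for illustration:

import java.util.List;

class Nope {

    // The innermost "easy" operation runs N x O x P times.
    static long process(int[][][] data) {   // data.length == N
        long sum = 0;
        for (int[][] o : data) {            // N iterations
            for (int[] p : o) {             // O iterations
                for (int value : p) {       // P iterations
                    sum += value;           // cheap, but executed N x O x P times
                }
            }
        }
        return sum;
    }
}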
Your operations team might have figured this out using AppDynamics, DynaTrace, or some similar software. Without production data, you might quickly jump to conclusions and optimise the "heavy operation". You ship to production, and your fix has no effect. There are no golden rules of optimisation apart from these facts:

- A well-designed application is much easier to optimise.
- Premature optimisation will not solve any performance problems, but will make your application less well-designed, which in turn makes it harder to optimise.

Enough theory. Let's assume that you have found the right branch to be the issue. It may well be that a very easy operation is blowing up in production because it is called lots and lots of times (if N, O, and P are large). Please read this article in the context of there being a problem at the leaf node of an inevitable O(N³) algorithm. These optimisations won't help you scale; they'll help you save your customer's day for now, deferring the difficult improvement of the overall algorithm until later! Here are the top 10 easy performance optimisations in Java:

1. Use StringBuilder

This should be your default in almost all Java code. Try to avoid the + operator. Sure, you may argue that it is just syntax sugar for a StringBuilder anyway, as in:

String x = "a" + args.length + "b";

... which compiles to

 0  new java.lang.StringBuilder [16]
 3  dup
 4  ldc <String "a"> [18]
 6  invokespecial java.lang.StringBuilder(java.lang.String) [20]
 9  aload_0 [args]
10  arraylength
11  invokevirtual java.lang.StringBuilder.append(int) : java.lang.StringBuilder [23]
14  ldc <String "b"> [27]
16  invokevirtual java.lang.StringBuilder.append(java.lang.String) : java.lang.StringBuilder [29]
19  invokevirtual java.lang.StringBuilder.toString() : java.lang.String [32]
22  astore_1 [x]

But what happens if, later on, you need to amend your String with optional parts?

String x = "a" + args.length + "b";

if (args.length == 1)
    x = x + args[0];

You will now have a second StringBuilder that just needlessly consumes memory off your heap, putting pressure on your GC. Write this instead:

StringBuilder x = new StringBuilder("a");
x.append(args.length);
x.append("b");

if (args.length == 1)
    x.append(args[0]);

Takeaway

In the above example, it is probably completely irrelevant whether you use explicit StringBuilder instances or rely on the Java compiler creating implicit instances for you. But remember, we're in the N.O.P.E. branch. Every CPU cycle that we waste on something as stupid as GC or allocating a StringBuilder's default capacity, we waste N x O x P times. As a rule of thumb, always use a StringBuilder rather than the + operator. And if you can, keep the StringBuilder reference across several methods if your String is more complex to build. This is what jOOQ does when it generates a complex SQL statement: there is only one StringBuilder that "traverses" your whole SQL AST (Abstract Syntax Tree). And for crying out loud, if you still have StringBuffer references, do replace them with StringBuilder. You hardly ever need to synchronize on a string being created.
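A hedged sketch of that "one StringBuilder across several methods" idea; the names are hypothetical and only loosely modelled on what a SQL renderer might do, not on jOOQ's actual code:

class SqlRenderer {

    String render(String table, String where) {
        // One builder for the whole rendering; no intermediate Strings.
        StringBuilder sb = new StringBuilder();
        renderSelect(sb, table);
        renderWhere(sb, where);
        return sb.toString();
    }

    private void renderSelect(StringBuilder sb, String table) {
        sb.append("SELECT * FROM ").append(table);
    }

    private void renderWhere(StringBuilder sb, String where) {
        if (where != null) {
            sb.append(" WHERE ").append(where);
        }
    }
}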
2. Avoid regular expressions

Regular expressions are relatively cheap and convenient. But if you're in the N.O.P.E. branch, they're about the worst thing you can do. If you absolutely must use regular expressions in computation-intensive code sections, at least cache the Pattern reference instead of compiling it afresh all the time:

static final Pattern HEAVY_REGEX = Pattern.compile("(((X)*Y)*Z)*");

But if your regular expression is really silly, like

String[] parts = ipAddress.split("\\.");

... then you're really better off resorting to ordinary char[] or index-based manipulation. For example, this utterly unreadable loop does the same thing:

int length = ipAddress.length();
int offset = 0;
int part = 0;
for (int i = 0; i < length; i++) {
    if (i == length - 1 ||
        ipAddress.charAt(i + 1) == '.') {
        parts[part] = ipAddress.substring(offset, i + 1);
        part++;
        offset = i + 2;
    }
}

... which also shows why you shouldn't do any premature optimisation. Compared to the split() version, this is unmaintainable. Challenge: the clever ones among the readers might find even faster algorithms.

Takeaway

Regular expressions are useful, but they come at a price. If you're deep down in a N.O.P.E. branch, you must avoid regular expressions at all costs. Beware of the variety of JDK String methods that use regular expressions, such as String.replaceAll() or String.split(). Use a popular library like Apache Commons Lang instead for your String manipulation.
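For instance, assuming commons-lang3 is on the classpath, the IP-address split above has a regex-free equivalent:

import org.apache.commons.lang3.StringUtils;

class IpSplit {
    static String[] parts(String ipAddress) {
        // Splits on the '.' character directly; no regular expression involved.
        return StringUtils.split(ipAddress, '.');
    }
}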
3. Do not use iterator()

This advice is really not for general use-cases; it only applies deep down in a N.O.P.E. branch. Nonetheless, you should think about it. Writing Java-5 style foreach loops is convenient: you can completely forget about looping internals and write:

for (String value : strings) {
    // Do something useful here
}

However, every time you run into this loop, if strings is an Iterable, you will create a new Iterator instance. If you're using an ArrayList, this is going to allocate an object with 3 ints on your heap:

private class Itr implements Iterator<E> {
    int cursor;
    int lastRet = -1;
    int expectedModCount = modCount;
    // ...

Instead, you can write the following, equivalent loop and "waste" only a single int value on the stack, which is dirt cheap:

int size = strings.size();
for (int i = 0; i < size; i++) {
    String value = strings.get(i);
    // Do something useful here
}

... or, if your list doesn't really change, you might even operate on an array version of it:

for (String value : stringArray) {
    // Do something useful here
}

Takeaway

Iterators, Iterable, and the foreach loop are extremely useful from a writeability and readability perspective, as well as from an API design perspective. However, they create a small new instance on the heap for each single iteration. If you run this iteration many, many times, you want to make sure to avoid creating this useless instance; write index-based iterations instead.

Discussion

Some interesting disagreement about parts of the above (in particular about replacing Iterator usage by access-by-index) has been discussed on Reddit.

4. Don't call that method

Some methods are simply expensive. In our N.O.P.E. branch example we don't have such a method at the leaf, but you may well have one. Let's assume your JDBC driver needs to go through incredible trouble to calculate the value of ResultSet.wasNull(). Your homegrown SQL framework code might look like this:

if (type == Integer.class) {
    result = (T) wasNull(rs, Integer.valueOf(rs.getInt(index)));
}

// And then...
static final <T> T wasNull(ResultSet rs, T value)
throws SQLException {
    return rs.wasNull() ? null : value;
}

This logic will now call ResultSet.wasNull() every time you get an int from the result set. But the getInt() contract reads:

"Returns: the column value; if the value is SQL NULL, the value returned is 0"

Thus, a simple yet possibly drastic improvement to the above would be:

static final <T extends Number> T wasNull(
    ResultSet rs, T value
)
throws SQLException {
    return (value == null ||
           (value.intValue() == 0 && rs.wasNull()))
        ? null : value;
}

So, this is a no-brainer:

Takeaway

Don't call expensive methods in an algorithm's "leaf nodes"; cache the call instead, or avoid it if the method contract allows it.

5. Use primitives and the stack

The above example is from jOOQ, which uses a lot of generics and is thus forced to use wrapper types for byte, short, int, and long, at least before generics will be specialisable in Java 10 and project Valhalla. But you may not have this constraint in your code, so you should take all measures to replace:

// Goes to the heap
Integer i = 817598;

... with this:

// Stays on the stack
int i = 817598;

Things get worse when you're using arrays. Replace:

// Three heap objects!
Integer[] i = { 1337, 424242 };

... with this:

// One heap object.
int[] i = { 1337, 424242 };

Takeaway

When you're deep down in your N.O.P.E. branch, you should be extremely wary of using wrapper types. Chances are that you will create a lot of pressure on your GC, which has to kick in all the time to clean up your mess. A particularly useful optimisation might be to use some primitive type and create large, one-dimensional arrays of it, plus a couple of delimiter variables to indicate where exactly your encoded object is located in the array. An excellent library for primitive collections, which is a bit more sophisticated than your average int[], is trove4j, which ships with LGPL.

Exception

There is an exception to this rule: boolean and byte have few enough values to be cached entirely by the JDK. You can write:

Boolean a1 = true; // ... syntax sugar for:
Boolean a2 = Boolean.valueOf(true);

Byte b1 = (byte) 123; // ... syntax sugar for:
Byte b2 = Byte.valueOf((byte) 123);

The same is true for low values of the other integer primitive types, including char, short, int, long. But only if you're auto-boxing them or calling TheType.valueOf(), not when you call the constructor!

Never call the constructor on wrapper types, unless you really want a new instance.

This fact can also help you write a sophisticated, trolling April Fool's joke for your co-workers.

Off heap

Of course, you might also want to experiment with off-heap libraries, although that is more of a strategic decision than a local optimisation. An interesting article on the subject by Peter Lawrey and Ben Cotton is: OpenJDK and HashMap... Safely Teaching an Old Dog New (Off-Heap!) Tricks.

6. Avoid recursion

Modern functional programming languages like Scala encourage the use of recursion, as they offer means of optimising tail-recursing algorithms back into iterative ones. If your language supports such optimisations, you might be fine. But even then, the slightest change of algorithm might produce a branch that prevents your recursion from being tail-recursive. Hopefully the compiler will detect this! Otherwise, you might be wasting a lot of stack frames for something that might have been implemented using only a few local variables.

Takeaway

There's not much to say about this apart from: always prefer iteration over recursion when you're deep down the N.O.P.E. branch.
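As an illustration of that rewrite (my example, not from the original post), here is the classic recursion-to-iteration transformation for a sum, trading one stack frame per element for two local variables:

class SumExample {

    // Recursive: one stack frame per element.
    static long sumRecursive(int[] values, int i) {
        if (i == values.length) {
            return 0;
        }
        return values[i] + sumRecursive(values, i + 1);
    }

    // Iterative: the same computation with two locals and no extra frames.
    static long sumIterative(int[] values) {
        long sum = 0;
        for (int value : values) {
            sum += value;
        }
        return sum;
    }
}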
7. Use entrySet()

When you want to iterate through a Map and you need both keys and values, you must have a very good reason to write:

for (K key : map.keySet()) {
    V value = map.get(key);
}

... rather than:

for (Entry<K, V> entry : map.entrySet()) {
    K key = entry.getKey();
    V value = entry.getValue();
}

When you're in the N.O.P.E. branch, you should be wary of maps anyway, because lots and lots of O(1) map access operations are still lots of operations, and the access isn't free either. But at least, if you cannot do without maps, use entrySet() to iterate them! The Map.Entry instance is there anyway; you only need to access it.

Takeaway

Always use entrySet() when you need both keys and values during map iteration.

8. Use EnumSet or EnumMap

There are some cases where the number of possible keys in a map is known in advance, for instance when using a configuration map. If that number is relatively small, you should really consider using EnumSet or EnumMap instead of the regular HashSet or HashMap. This is easily explained by looking at EnumMap.put():

private transient Object[] vals;

public V put(K key, V value) {
    // ...
    int index = key.ordinal();
    vals[index] = maskNull(value);
    // ...
}

The essence of this implementation is the fact that we have an array of indexed values rather than a hash table. When inserting a new value, all we have to do to look up the map entry is ask the enum for its constant ordinal, which is generated by the Java compiler on each enum type. If this is a global configuration map (i.e. only one instance), the increased access speed will help EnumMap heavily outperform HashMap, which may use a bit less heap memory, but which will have to run hashCode() and equals() on each key.

Takeaway

Enum and EnumMap are very close friends. Whenever you use enum-like structures as keys, consider actually making those structures enums and using them as keys in EnumMap.
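A hedged usage sketch with hypothetical configuration keys:

import java.util.EnumMap;
import java.util.Map;

class Config {

    enum Key { HOST, PORT, TIMEOUT }

    // Array-indexed under the hood: no hashCode()/equals() per access.
    private final Map<Key, String> values = new EnumMap<>(Key.class);

    void example() {
        values.put(Key.HOST, "localhost");
        values.put(Key.PORT, "8080");
        String host = values.get(Key.HOST); // ordinal-based array lookup
    }
}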
9. Optimise your hashCode() and equals() methods

If you cannot use an EnumMap, at least optimise your hashCode() and equals() methods. A good hashCode() method is essential because it will prevent further calls to the much more expensive equals(), as it will produce more distinct hash buckets per set of instances. In every class hierarchy, you may have popular and simple objects. Let's have a look at jOOQ's org.jooq.Table implementations. The simplest and fastest possible implementation of hashCode() is this one:

// AbstractTable, a common Table base implementation:

@Override
public int hashCode() {

    // [#1938] This is a much more efficient hashCode()
    // implementation compared to that of standard
    // QueryParts
    return name.hashCode();
}

... where name is simply the table name. We don't even consider the schema or any other property of the table, as table names are usually distinct enough across a database. Also, the name is a string, so it already has a cached hashCode() value inside. The comment is important, because AbstractTable extends AbstractQueryPart, a common base implementation for any AST (Abstract Syntax Tree) element. The common AST element does not have any properties, so it cannot make any assumptions about an optimised hashCode() implementation. Thus, the overridden method looks like this:

// AbstractQueryPart, a common AST element
// base implementation:

@Override
public int hashCode() {

    // This is a working default implementation.
    // It should be overridden by concrete subclasses,
    // to improve performance
    return create().renderInlined(this).hashCode();
}

In other words, the whole SQL rendering workflow has to be triggered to calculate the hash code of a common AST element. Things get more interesting with equals():

// AbstractTable, a common Table base implementation:

@Override
public boolean equals(Object that) {
    if (this == that) {
        return true;
    }

    // [#2144] Non-equality can be decided early,
    // without executing the rather expensive
    // implementation of AbstractQueryPart.equals()
    if (that instanceof AbstractTable) {
        if (StringUtils.equals(name,
            (((AbstractTable<?>) that).name))) {
            return super.equals(that);
        }

        return false;
    }

    return false;
}

First thing: always (not only in a N.O.P.E. branch) abort every equals() method early if:

- this == argument
- the argument is of an incompatible type

Note that the latter condition includes argument == null if you're using instanceof to check for compatible types. We've blogged about this before, in 10 Subtle Best Practices when Coding Java. Now, after aborting comparison early in obvious cases, you might also want to abort comparison early when you can make partial decisions. For instance, the contract of jOOQ's Table.equals() is that for two tables to be considered equal, they must have the same name, regardless of the concrete implementation type. For instance, there is no way these two items can be equal:

- com.example.generated.Tables.MY_TABLE
- DSL.tableByName("MY_OTHER_TABLE")

If the argument cannot be equal to this, and if we can check that easily, let's do so and abort if the check fails. If the check succeeds, we can still proceed with the more expensive implementation from super. Given that most objects in the universe are not equal, we're going to save a lot of CPU time by short-cutting this method.

Some objects are more equal than others.

In the case of jOOQ, most instances are really tables as generated by the jOOQ source code generator, whose equals() implementation is even further optimised. The dozens of other table types (derived tables, table-valued functions, array tables, joined tables, pivot tables, common table expressions, etc.) can keep their "simple" implementation.

10. Think in sets, not in individual elements

Last but not least, there is a thing that is not Java-related but applies to any language. Besides, we're leaving the N.O.P.E. branch, as this advice might just help you move from O(N³) to O(n log n), or something like that. Unfortunately, many programmers think in terms of simple, local algorithms. They're solving a problem step by step, branch by branch, loop by loop, method by method. That's the imperative and/or functional programming style. While it is increasingly easy to model the "bigger picture" when going from pure imperative to object oriented (still imperative) to functional programming, all these styles lack something that only SQL and R and similar languages have: declarative programming. In SQL (and we love it, as this is the jOOQ blog) you can declare the outcome you want to get from your database without making any algorithmic implications whatsoever. The database can then take all the available meta data into consideration (e.g. constraints, keys, indexes, etc.) to figure out the best possible algorithm. In theory, this has been the main idea behind SQL and relational calculus from the beginning.
10. Think in sets, not in individual elements Last but not least, there is a thing that is not Java-related but applies to any language. Besides, we’re leaving the N.O.P.E. branch, as this advice might just help you move from O(N³) to O(n log n), or something like that. Unfortunately, many programmers think in terms of simple, local algorithms. They’re solving a problem step by step, branch by branch, loop by loop, method by method. That’s the imperative and/or functional programming style. While it is increasingly easy to model the “bigger picture” when going from pure imperative to object oriented (still imperative) to functional programming, all these styles lack something that only SQL and R and similar languages have: declarative programming. In SQL (and we love it, as this is the jOOQ blog) you can declare the outcome you want to get from your database, without making any algorithmic implications whatsoever. The database can then take all the meta data available into consideration (e.g. constraints, keys, indexes, etc.) to figure out the best possible algorithm. In theory, this has been the main idea behind SQL and relational calculus from the beginning. In practice, SQL vendors have implemented highly efficient CBOs (Cost-Based Optimisers) only in the last decade, so stay with us in the 2010s, when SQL will finally unleash its full potential (it was about time!). But you don’t have to do SQL to think in sets. Sets / collections / bags / lists are available in all languages and libraries. The main advantage of using sets is the fact that your algorithms will become much, much more concise. It is so much easier to write:

SomeSet INTERSECT SomeOtherSet

rather than:

// Pre-Java 8 Set result = new HashSet(); for (Object candidate : someSet) if (someOtherSet.contains(candidate)) result.add(candidate);

// Even Java 8 doesn't really help someSet.stream() .filter(someOtherSet::contains) .collect(Collectors.toSet());

Some may argue that functional programming and Java 8 will help you write easier, more concise algorithms. That’s not necessarily true. You can translate your imperative Java-7 loop into a functional Java-8 Stream collection, but you’re still writing the very same algorithm. Writing a SQL-esque expression is different. This…

SomeSet INTERSECT SomeOtherSet

… can be implemented in 1000 ways by the implementation engine. As we’ve learned today, perhaps it is wise to transform the two sets into EnumSet automatically before running the INTERSECT operation. Perhaps we can parallelise this INTERSECT without making low-level calls to Stream.parallel().

Conclusion In this article, we’ve talked about optimisations done on the N.O.P.E. branch, i.e. deep down in a high-complexity algorithm. In our case, being the jOOQ developers, we have an interest in optimising our SQL generation:

Every query is generated only on a single StringBuilder
Our templating engine actually parses characters, instead of using regular expressions
We use arrays wherever we can, especially when iterating over listeners
We stay clear of JDBC methods that we don’t have to call
etc…

jOOQ is at the “bottom of the food chain”, because it’s the (second-)last API that is called by our customers’ applications before the call leaves the JVM to enter the DBMS. Being at the bottom of the food chain means that every line of code that is executed in jOOQ might be called N x O x P times, so we must optimise eagerly. Your business logic is not deep down in the N.O.P.E. branch. But your own, home-grown infrastructure logic may be (custom SQL frameworks, custom libraries, etc.). Those should be reviewed according to the rules that we’ve seen today, for instance using Java Mission Control or any other profiler.

Reference: Top 10 Easy Performance Optimisations in Java from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog....
vagrant-logo

Java EE 7 and WildFly on Kubernetes using Vagrant

This tip will show how to run a Java EE 7 application deployed in WildFly and hosted using Kubernetes and Docker. If you want to learn more about the basics, then this blog has already published quite a bit of content on the topic. A sampling of that content is given below:

Get started with Docker
How to create your own Docker image
Pushing your Docker images to registry
Java EE 7 Hands-on Lab on WildFly and Docker
WildFly/JavaEE7 and MySQL linked on two Docker containers
Docker container linking across multiple hosts
Key Concepts of Kubernetes
Build Kubernetes on Mac OS X
Vagrant with Docker provider, using WildFly and Java EE 7 image

Let's get started!

Start Kubernetes cluster A Kubernetes cluster can be easily started on a Linux machine using the usual scripts. There are Getting Started Guides for different platforms such as Fedora, CoreOS, Amazon Web Services, and others. Running a Kubernetes cluster on Mac OS X requires using the Vagrant image, which is also explained in Getting Started with Vagrant. This blog will use the Vagrant box. By default, the Kubernetes cluster management scripts assume you are running on Google Compute Engine. Kubernetes can be configured to run with a variety of providers: gce, gke, aws, azure, vagrant, local, vsphere. So let's set our provider to vagrant as:

export KUBERNETES_PROVIDER=vagrant

This means your Kubernetes cluster is running inside a Fedora VM created by Vagrant. Start the cluster as:

kubernetes> ./cluster/kube-up.sh Starting cluster using provider: vagrant ... calling verify-prereqs ... calling kube-up Using credentials: vagrant:vagrant Bringing machine 'master' up with 'virtualbox' provider... Bringing machine 'minion-1' up with 'virtualbox' provider.... . .Running: ./cluster/../cluster/vagrant/../../cluster/../cluster/vagrant/../../_output/dockerized/bin/darwin/amd64/kubectl --auth-path=/Users/arungupta/.kubernetes_vagrant_auth create -f - skydns ... calling setup-logging TODO: setup logging Done

Notice, this command is given from the kubernetes directory where it is already compiled, as explained in Build Kubernetes on Mac OS X. By default, the Vagrant setup will create a single kubernetes-master and 1 kubernetes-minion. This involves creating a Fedora VM, installing dependencies, creating master and minion, setting up connectivity between them, and a whole lot of other things. As a result, this step can take a few minutes (~10 mins on my machine).

Verify Kubernetes cluster Now that the cluster has started, let's verify that it does everything it's supposed to. Verify that your Vagrant images are up correctly:

kubernetes> vagrant status Current machine states:

master running (virtualbox) minion-1 running (virtualbox)

This environment represents multiple VMs. The VMs are all listed above with their current state. For more information about a specific VM, run `vagrant status NAME`.

This can also be verified by checking the status in the VirtualBox console as shown: boot2docker-vm is the Boot2Docker VM. Then there are the Kubernetes master and minion VMs. Two additional VMs are shown here, but they are not relevant to the example. Log in to the master as:

kubernetes> vagrant ssh master Last login: Fri Jan 30 21:35:34 2015 from 10.0.2.2 [vagrant@kubernetes-master ~]$

Verify that different Kubernetes components have started up correctly.
Start with the Kubernetes API server:

[vagrant@kubernetes-master ~]$ sudo systemctl status kube-apiserver
kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled)
Active: active (running) since Fri 2015-01-30 21:34:25 UTC; 7min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 3506 (kube-apiserver)
CGroup: /system.slice/kube-apiserver.service
└─3506 /usr/local/bin/kube-apiserver --address=127.0.0.1 --etcd_servers=http://10.245.1.2:4001 --cloud_provider=vagrant --admission_c.... . .

Then the Kube Controller Manager:

[vagrant@kubernetes-master ~]$ sudo systemctl status kube-controller-manager
kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled)
Active: active (running) since Fri 2015-01-30 21:34:27 UTC; 8min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 3566 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─3566 /usr/local/bin/kube-controller-manager --master=127.0.0.1:8080 --minion_regexp=.* --cloud_provider=vagrant --v=2. . .

Similarly, you can verify etcd and nginx as well. Docker and the Kubelet are running on the minion and can be verified by logging in to the minion and using systemctl:

kubernetes> vagrant ssh minion-1
Last login: Fri Jan 30 21:37:05 2015 from 10.0.2.2
[vagrant@kubernetes-minion-1 ~]$ sudo systemctl status docker
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
Active: active (running) since Fri 2015-01-30 21:39:05 UTC; 8min ago
Docs: http://docs.docker.com
Main PID: 13056 (docker)
CGroup: /system.slice/docker.service
├─13056 /usr/bin/docker -d -b=kbr0 --iptables=false --selinux-enabled
└─13192 docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 4194 -container-ip 10.246.0.3 -container-port 8080. . .
[vagrant@kubernetes-minion-1 ~]$ sudo systemctl status kubelet
kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled)
Active: active (running) since Fri 2015-01-30 21:36:57 UTC; 10min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 3118 (kubelet)
CGroup: /system.slice/kubelet.service
└─3118 /usr/local/bin/kubelet --etcd_servers=http://10.245.1.2:4001 --api_servers=https://10.245.1.2:6443 --auth_path=/var/lib/kubele.... . .

Check the minions as:

kubernetes> ./cluster/kubectl.sh get minions
Running: ./cluster/../cluster/vagrant/../../_output/dockerized/bin/darwin/amd64/kubectl --auth-path=/Users/arungupta/.kubernetes_vagrant_auth get minions
NAME LABELS STATUS
10.245.1.3 <none> Ready

Only one minion is created. This can be changed by setting the NUM_MINIONS environment variable to an integer before invoking the kube-up.sh script. Finally, check the pods as:

kubernetes> ./cluster/kubectl.sh get pods
Running: ./cluster/../cluster/vagrant/../../_output/dockerized/bin/darwin/amd64/kubectl --auth-path=/Users/arungupta/.kubernetes_vagrant_auth get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
22d4a478-a8c8-11e4-a61e-0800279696e1 10.246.0.2 etcd quay.io/coreos/etcd:latest 10.245.1.3/10.245.1.3 k8s-app=skydns Running kube2sky kubernetes/kube2sky:1.0 skydns kubernetes/skydns:2014-12-23-001

This shows a single pod is created by default and has three containers running:

skydns: SkyDNS is a distributed service for announcement and discovery of services built on top of etcd.
It utilizes DNS queries to discover available services.
etcd: A distributed, consistent key value store for shared configuration and service discovery with a focus on being simple, secure, fast, reliable. This is used for storing state information for Kubernetes.
kube2sky: A bridge between Kubernetes and SkyDNS. This will watch the Kubernetes API for changes in Services and then publish those changes to SkyDNS through etcd.

No pods have been created by our application so far, so let's do that next.

Start WildFly and Java EE 7 application Pod A pod is created by using the kubectl script and providing the details in a JSON configuration file. The source code for our configuration file is available at github.com/arun-gupta/kubernetes-java-sample, and looks like:

{ "id": "wildfly", "kind": "Pod", "apiVersion": "v1beta1", "desiredState": { "manifest": { "version": "v1beta1", "id": "wildfly", "containers": [{ "name": "wildfly", "image": "arungupta/javaee7-hol", "cpu": 100, "ports": [{ "containerPort": 8080, "hostPort": 8080 }, { "containerPort": 9090, "hostPort": 9090 }] }] } }, "labels": { "name": "wildfly" } }

The exact payload and attributes of this configuration file are documented at kubernetes.io/third_party/swagger-ui/#!/v1beta1/createPod_0. Complete docs of all the possible APIs are at kubernetes.io/third_party/swagger-ui/. The key attributes in this file are:

A pod is created. The API allows other types such as "service", "replicationController", etc. to be created.
The version of the API is "v1beta1".
The Docker image arungupta/javaee7-hol is used to run the container.
It exposes ports 8080 and 9090, as they are originally exposed in the base image Dockerfile. This requires further debugging to figure out how the list of ports can be cleaned up.
The pod is given the label "wildfly". In this case it's not used much, but it would be more meaningful when services are created in a subsequent blog.

As mentioned earlier, this tech tip will spin up a single pod, with one container. Our container will be using a pre-built image (arungupta/javaee7-hol) that deploys a typical 3-tier Java EE 7 application to WildFly. Start the WildFly pod as:

kubernetes> ./cluster/kubectl create -f ../kubernetes-java-sample/javaee7-hol.json

Check the status of the created pod as:

kubernetes> ./cluster/kubectl.sh get pods
Running: ./cluster/../cluster/vagrant/../../_output/dockerized/bin/darwin/amd64/kubectl --auth-path=/Users/arungupta/.kubernetes_vagrant_auth get pods
POD IP CONTAINER(S) IMAGE(S) HOST LABELS STATUS
4c283aa1-ab47-11e4-b139-0800279696e1 10.246.0.2 etcd quay.io/coreos/etcd:latest 10.245.1.3/10.245.1.3 k8s-app=skydns Running kube2sky kubernetes/kube2sky:1.0 skydns kubernetes/skydns:2014-12-23-001
wildfly 10.246.0.5 wildfly arungupta/javaee7-hol 10.245.1.3/10.245.1.3 name=wildfly Running

The WildFly pod is now created and shown in the list. The HOST column shows the IP address on which the application is accessible. The image below explains how all the components fit with each other. As only one minion is created by default, this pod will be created on that minion. The blog will show how multiple minions can be created. Kubernetes of course picks the minion where the pods are created. Running the pod ensures that the Java EE 7 application is deployed to WildFly.

Access Java EE 7 Application From the kubectl.sh get pods output, the HOST column shows the IP address where the application is externally accessible. In our case, the IP address is 10.245.1.3.
So, access the application in the browser to see output as shown. This confirms that your Java EE 7 application is now accessible.

Kubernetes Debugging Tips Once the Kubernetes cluster is created, you'll need to debug it and see what's going on under the hood. First of all, let's log in to the minion:

kubernetes> vagrant ssh minion-1 Last login: Tue Feb 3 01:52:22 2015 from 10.0.2.2

List of Docker containers on Minion Let's take a look at all the Docker containers running on minion-1:

[vagrant@kubernetes-minion-1 ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3f7e174b82b1 arungupta/javaee7-hol:latest "/opt/jboss/wildfly/ 16 minutes ago Up 16 minutes k8s_wildfly.a78dc60_wildfly.default.api_59be80fa-ab48-11e4-b139-0800279696e1_75a4a7cb
1c464e71fb69 kubernetes/pause:go "/pause" 20 minutes ago Up 20 minutes 0.0.0.0:8080->8080/tcp, 0.0.0.0:9090->9090/tcp k8s_net.7946daa4_wildfly.default.api_59be80fa-ab48-11e4-b139-0800279696e1_e74d3d1d
7bdd763df691 kubernetes/skydns:2014-12-23-001 "/skydns -machines=h 21 minutes ago Up 21 minutes k8s_skydns.394cd23c_4c283aa1-ab47-11e4-b139-0800279696e1.default.api_4c283aa1-ab47-11e4-b139-0800279696e1_3352f6bd
17a140aaabbe google/cadvisor:0.7.1 "/usr/bin/cadvisor" 22 minutes ago Up 22 minutes k8s_cadvisor.68f5108e_cadvisor-agent.file-6bb810db-kubernetes-minion-1.file_65235067df34faf012fd8bb088de6b73_86e59309
a5f8cf6463e9 kubernetes/kube2sky:1.0 "/kube2sky -domain=k 22 minutes ago Up 22 minutes k8s_kube2sky.1cbba018_4c283aa1-ab47-11e4-b139-0800279696e1.default.api_4c283aa1-ab47-11e4-b139-0800279696e1_126d4d7a
28e6d2e67a92 kubernetes/fluentd-elasticsearch:1.0 "/bin/sh -c '/usr/sb 23 minutes ago Up 23 minutes k8s_fluentd-es.6361e00b_fluentd-to-elasticsearch.file-8cd71177-kubernetes-minion-1.file_a190cc221f7c0766163ed2a4ad6e32aa_a9a369d3
5623edf7decc quay.io/coreos/etcd:latest "/etcd /etcd -bind-a 25 minutes ago Up 25 minutes k8s_etcd.372da5db_4c283aa1-ab47-11e4-b139-0800279696e1.default.api_4c283aa1-ab47-11e4-b139-0800279696e1_8b658811
3575b562f23e kubernetes/pause:go "/pause" 25 minutes ago Up 25 minutes 0.0.0.0:4194->8080/tcp k8s_net.beddb979_cadvisor-agent.file-6bb810db-kubernetes-minion-1.file_65235067df34faf012fd8bb088de6b73_8376ce8e
094d76c83068 kubernetes/pause:go "/pause" 25 minutes ago Up 25 minutes k8s_net.3e0f95f3_fluentd-to-elasticsearch.file-8cd71177-kubernetes-minion-1.file_a190cc221f7c0766163ed2a4ad6e32aa_6931ca22
f8b9cd5af169 kubernetes/pause:go "/pause" 25 minutes ago Up 25 minutes k8s_net.3d64b7f6_4c283aa1-ab47-11e4-b139-0800279696e1.default.api_4c283aa1-ab47-11e4-b139-0800279696e1_b0ebce5a

The first container is specific to our application; everything else is started by Kubernetes.
Details about each Docker container More details about each container can be found by using their container id as:docker inspect <CONTAINER_ID>In our case, the output is shown as:[vagrant@kubernetes-minion-1 ~]$ docker inspect 3f7e174b82b1 [{ "AppArmorProfile": "", "Args": [ "-c", "standalone-full.xml", "-b", "0.0.0.0" ], "Config": { "AttachStderr": false, "AttachStdin": false, "AttachStdout": false, "Cmd": [ "/opt/jboss/wildfly/bin/standalone.sh", "-c", "standalone-full.xml", "-b", "0.0.0.0" ], "CpuShares": 102, "Cpuset": "", "Domainname": "", "Entrypoint": null, "Env": [ "KUBERNETES_PORT_443_TCP_PROTO=tcp", "KUBERNETES_RO_PORT_80_TCP=tcp://10.247.82.143:80", "SKYDNS_PORT_53_UDP=udp://10.247.0.10:53", "KUBERNETES_PORT_443_TCP=tcp://10.247.92.82:443", "KUBERNETES_PORT_443_TCP_PORT=443", "KUBERNETES_PORT_443_TCP_ADDR=10.247.92.82", "KUBERNETES_RO_PORT_80_TCP_PROTO=tcp", "SKYDNS_PORT_53_UDP_PROTO=udp", "KUBERNETES_RO_PORT_80_TCP_ADDR=10.247.82.143", "SKYDNS_SERVICE_HOST=10.247.0.10", "SKYDNS_PORT_53_UDP_PORT=53", "SKYDNS_PORT_53_UDP_ADDR=10.247.0.10", "KUBERNETES_SERVICE_HOST=10.247.92.82", "KUBERNETES_RO_SERVICE_HOST=10.247.82.143", "KUBERNETES_RO_PORT_80_TCP_PORT=80", "SKYDNS_SERVICE_PORT=53", "SKYDNS_PORT=udp://10.247.0.10:53", "KUBERNETES_SERVICE_PORT=443", "KUBERNETES_PORT=tcp://10.247.92.82:443", "KUBERNETES_RO_SERVICE_PORT=80", "KUBERNETES_RO_PORT=tcp://10.247.82.143:80", "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "JAVA_HOME=/usr/lib/jvm/java", "WILDFLY_VERSION=8.2.0.Final", "JBOSS_HOME=/opt/jboss/wildfly" ], "ExposedPorts": { "8080/tcp": {}, "9090/tcp": {}, "9990/tcp": {} }, "Hostname": "wildfly", "Image": "arungupta/javaee7-hol", "MacAddress": "", "Memory": 0, "MemorySwap": 0, "NetworkDisabled": false, "OnBuild": null, "OpenStdin": false, "PortSpecs": null, "StdinOnce": false, "Tty": false, "User": "jboss", "Volumes": null, "WorkingDir": "/opt/jboss" }, "Created": "2015-02-03T02:03:54.882111127Z", "Driver": "devicemapper", "ExecDriver": "native-0.2", "HostConfig": { "Binds": null, "CapAdd": null, "CapDrop": null, "ContainerIDFile": "", "Devices": null, "Dns": [ "10.247.0.10", "10.0.2.3" ], "DnsSearch": [ "default.kubernetes.local", "kubernetes.local", "c.hospitality.swisscom.com" ], "ExtraHosts": null, "IpcMode": "", "Links": null, "LxcConf": null, "NetworkMode": "container:1c464e71fb69adfb2a407217d0c84600a18f755721628ea3f329f48a2cdaa64f", "PortBindings": { "8080/tcp": [ { "HostIp": "", "HostPort": "8080" } ], "9090/tcp": [ { "HostIp": "", "HostPort": "9090" } ] }, "Privileged": false, "PublishAllPorts": false, "RestartPolicy": { "MaximumRetryCount": 0, "Name": "" }, "SecurityOpt": null, "VolumesFrom": null }, "HostnamePath": "", "HostsPath": "/var/lib/docker/containers/1c464e71fb69adfb2a407217d0c84600a18f755721628ea3f329f48a2cdaa64f/hosts", "Id": "3f7e174b82b1520abdc7f39f34ad4e4a9cb4d312466143b54935c43d4c258e3f", "Image": "a068decaf8928737340f8f08fbddf97d9b4f7838d154e88ed77fbcf9898a83f2", "MountLabel": "", "Name": "/k8s_wildfly.a78dc60_wildfly.default.api_59be80fa-ab48-11e4-b139-0800279696e1_75a4a7cb", "NetworkSettings": { "Bridge": "", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "MacAddress": "", "PortMapping": null, "Ports": null }, "Path": "/opt/jboss/wildfly/bin/standalone.sh", "ProcessLabel": "", "ResolvConfPath": "/var/lib/docker/containers/1c464e71fb69adfb2a407217d0c84600a18f755721628ea3f329f48a2cdaa64f/resolv.conf", "State": { "Error": "", "ExitCode": 0, "FinishedAt": "0001-01-01T00:00:00Z", "OOMKilled": false, "Paused": false, "Pid": 17920, 
"Restarting": false, "Running": true, "StartedAt": "2015-02-03T02:03:55.471392394Z" }, "Volumes": {}, "VolumesRW": {} } ]Logs from the Docker container Logs from the container can be seen using the command:docker logs <CONTAINER_ID>In our case, the output is shown as:[vagrant@kubernetes-minion-1 ~]$ docker logs 3f7e174b82b1 =========================================================================JBoss Bootstrap EnvironmentJBOSS_HOME: /opt/jboss/wildflyJAVA: /usr/lib/jvm/java/bin/javaJAVA_OPTS: -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djava.net.preferIPv4Stack=true -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true=========================================================================. . .02:04:12,078 INFO [org.jboss.as.jpa] (ServerService Thread Pool -- 57) JBAS011409: Starting Persistence Unit (phase 1 of 2) Service 'movieplex7-1.0-SNAPSHOT.war#movieplex7PU' 02:04:12,128 INFO [org.hibernate.jpa.internal.util.LogHelper] (ServerService Thread Pool -- 57) HHH000204: Processing PersistenceUnitInfo [ name: movieplex7PU ...] 02:04:12,154 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221007: Server is now live 02:04:12,155 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221001: HornetQ Server version 2.4.5.FINAL (Wild Hornet, 124) [f13dedbd-ab48-11e4-a924-615afe337134] 02:04:12,175 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 56) HQ221003: trying to deploy queue jms.queue.ExpiryQueue 02:04:12,735 INFO [org.jboss.as.messaging] (ServerService Thread Pool -- 56) JBAS011601: Bound messaging object to jndi name java:/jms/queue/ExpiryQueue 02:04:12,736 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 60) HQ221003: trying to deploy queue jms.queue.DLQ 02:04:12,749 INFO [org.jboss.as.messaging] (ServerService Thread Pool -- 60) JBAS011601: Bound messaging object to jndi name java:/jms/queue/DLQ 02:04:12,792 INFO [org.hibernate.Version] (ServerService Thread Pool -- 57) HHH000412: Hibernate Core {4.3.7.Final} 02:04:12,795 INFO [org.hibernate.cfg.Environment] (ServerService Thread Pool -- 57) HHH000206: hibernate.properties not found 02:04:12,801 INFO [org.hibernate.cfg.Environment] (ServerService Thread Pool -- 57) HHH000021: Bytecode provider name : javassist 02:04:12,820 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-1) JBAS010406: Registered connection factory java:/JmsXA 02:04:12,997 INFO [org.hornetq.jms.server] (ServerService Thread Pool -- 59) HQ121005: Invalid "host" value "0.0.0.0" detected for "http-connector" connector. Switching to "wildfly". If this new address is incorrect please manually configure the connector to use the proper one. 
02:04:13,021 INFO [org.jboss.as.messaging] (ServerService Thread Pool -- 59) JBAS011601: Bound messaging object to jndi name java:jboss/exported/jms/RemoteConnectionFactory 02:04:13,025 INFO [org.jboss.as.messaging] (ServerService Thread Pool -- 58) JBAS011601: Bound messaging object to jndi name java:/ConnectionFactory 02:04:13,072 INFO [org.hornetq.ra] (MSC service thread 1-1) HornetQ resource adaptor started 02:04:13,073 INFO [org.jboss.as.connector.services.resourceadapters.ResourceAdapterActivatorService$ResourceAdapterActivator] (MSC service thread 1-1) IJ020002: Deployed: file://RaActivatorhornetq-ra 02:04:13,078 INFO [org.jboss.as.messaging] (MSC service thread 1-4) JBAS011601: Bound messaging object to jndi name java:jboss/DefaultJMSConnectionFactory 02:04:13,076 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-8) JBAS010401: Bound JCA ConnectionFactory 02:04:13,487 INFO [org.jboss.weld.deployer] (MSC service thread 1-2) JBAS016002: Processing weld deployment movieplex7-1.0-SNAPSHOT.war 02:04:13,694 INFO [org.hibernate.validator.internal.util.Version] (MSC service thread 1-2) HV000001: Hibernate Validator 5.1.3.Final 02:04:13,838 INFO [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-2) JNDI bindings for session bean named ShowTimingFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:java:global/movieplex7-1.0-SNAPSHOT/ShowTimingFacadeREST!org.javaee7.movieplex7.rest.ShowTimingFacadeREST java:app/movieplex7-1.0-SNAPSHOT/ShowTimingFacadeREST!org.javaee7.movieplex7.rest.ShowTimingFacadeREST java:module/ShowTimingFacadeREST!org.javaee7.movieplex7.rest.ShowTimingFacadeREST java:global/movieplex7-1.0-SNAPSHOT/ShowTimingFacadeREST java:app/movieplex7-1.0-SNAPSHOT/ShowTimingFacadeREST java:module/ShowTimingFacadeREST02:04:13,838 INFO [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-2) JNDI bindings for session bean named TheaterFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:java:global/movieplex7-1.0-SNAPSHOT/TheaterFacadeREST!org.javaee7.movieplex7.rest.TheaterFacadeREST java:app/movieplex7-1.0-SNAPSHOT/TheaterFacadeREST!org.javaee7.movieplex7.rest.TheaterFacadeREST java:module/TheaterFacadeREST!org.javaee7.movieplex7.rest.TheaterFacadeREST java:global/movieplex7-1.0-SNAPSHOT/TheaterFacadeREST java:app/movieplex7-1.0-SNAPSHOT/TheaterFacadeREST java:module/TheaterFacadeREST02:04:13,839 INFO [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-2) JNDI bindings for session bean named MovieFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:java:global/movieplex7-1.0-SNAPSHOT/MovieFacadeREST!org.javaee7.movieplex7.rest.MovieFacadeREST java:app/movieplex7-1.0-SNAPSHOT/MovieFacadeREST!org.javaee7.movieplex7.rest.MovieFacadeREST java:module/MovieFacadeREST!org.javaee7.movieplex7.rest.MovieFacadeREST java:global/movieplex7-1.0-SNAPSHOT/MovieFacadeREST java:app/movieplex7-1.0-SNAPSHOT/MovieFacadeREST java:module/MovieFacadeREST02:04:13,840 INFO [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-2) JNDI bindings for session bean named SalesFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:java:global/movieplex7-1.0-SNAPSHOT/SalesFacadeREST!org.javaee7.movieplex7.rest.SalesFacadeREST 
java:app/movieplex7-1.0-SNAPSHOT/SalesFacadeREST!org.javaee7.movieplex7.rest.SalesFacadeREST java:module/SalesFacadeREST!org.javaee7.movieplex7.rest.SalesFacadeREST java:global/movieplex7-1.0-SNAPSHOT/SalesFacadeREST java:app/movieplex7-1.0-SNAPSHOT/SalesFacadeREST java:module/SalesFacadeREST02:04:13,840 INFO [org.jboss.as.ejb3.deployment.processors.EjbJndiBindingsDeploymentUnitProcessor] (MSC service thread 1-2) JNDI bindings for session bean named TimeslotFacadeREST in deployment unit deployment "movieplex7-1.0-SNAPSHOT.war" are as follows:java:global/movieplex7-1.0-SNAPSHOT/TimeslotFacadeREST!org.javaee7.movieplex7.rest.TimeslotFacadeREST java:app/movieplex7-1.0-SNAPSHOT/TimeslotFacadeREST!org.javaee7.movieplex7.rest.TimeslotFacadeREST java:module/TimeslotFacadeREST!org.javaee7.movieplex7.rest.TimeslotFacadeREST java:global/movieplex7-1.0-SNAPSHOT/TimeslotFacadeREST java:app/movieplex7-1.0-SNAPSHOT/TimeslotFacadeREST java:module/TimeslotFacadeREST02:04:14,802 INFO [org.jboss.as.messaging] (MSC service thread 1-1) JBAS011601: Bound messaging object to jndi name java:global/jms/pointsQueue 02:04:14,931 INFO [org.jboss.weld.deployer] (MSC service thread 1-2) JBAS016005: Starting Services for CDI deployment: movieplex7-1.0-SNAPSHOT.war 02:04:15,018 INFO [org.jboss.weld.Version] (MSC service thread 1-2) WELD-000900: 2.2.6 (Final) 02:04:15,109 INFO [org.hornetq.core.server] (ServerService Thread Pool -- 57) HQ221003: trying to deploy queue jms.queue.movieplex7-1.0-SNAPSHOT_movieplex7-1.0-SNAPSHOT_movieplex7-1.0-SNAPSHOT_java:global/jms/pointsQueue 02:04:15,110 INFO [org.jboss.weld.deployer] (MSC service thread 1-6) JBAS016008: Starting weld service for deployment movieplex7-1.0-SNAPSHOT.war 02:04:15,787 INFO [org.jboss.as.jpa] (ServerService Thread Pool -- 57) JBAS011409: Starting Persistence Unit (phase 2 of 2) Service 'movieplex7-1.0-SNAPSHOT.war#movieplex7PU' 02:04:16,189 INFO [org.hibernate.annotations.common.Version] (ServerService Thread Pool -- 57) HCANN000001: Hibernate Commons Annotations {4.0.4.Final} 02:04:17,174 INFO [org.hibernate.dialect.Dialect] (ServerService Thread Pool -- 57) HHH000400: Using dialect: org.hibernate.dialect.H2Dialect 02:04:17,191 WARN [org.hibernate.dialect.H2Dialect] (ServerService Thread Pool -- 57) HHH000431: Unable to determine H2 database version, certain features may not work 02:04:17,954 INFO [org.hibernate.hql.internal.ast.ASTQueryTranslatorFactory] (ServerService Thread Pool -- 57) HHH000397: Using ASTQueryTranslatorFactory 02:04:19,832 INFO [org.hibernate.dialect.Dialect] (ServerService Thread Pool -- 57) HHH000400: Using dialect: org.hibernate.dialect.H2Dialect 02:04:19,833 WARN [org.hibernate.dialect.H2Dialect] (ServerService Thread Pool -- 57) HHH000431: Unable to determine H2 database version, certain features may not work 02:04:19,854 WARN [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE SALES] 02:04:19,855 WARN [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE POINTS] 02:04:19,855 WARN [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE SHOW_TIMING] 02:04:19,855 WARN [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation 
drop command [DROP TABLE MOVIE] 02:04:19,856 WARN [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE TIMESLOT] 02:04:19,857 WARN [org.hibernate.jpa.internal.schemagen.GenerationTargetToDatabase] (ServerService Thread Pool -- 57) Unable to execute JPA schema generation drop command [DROP TABLE THEATER] 02:04:23,942 INFO [io.undertow.websockets.jsr] (MSC service thread 1-5) UT026003: Adding annotated server endpoint class org.javaee7.movieplex7.chat.ChatServer for path /websocket 02:04:24,975 INFO [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-5) Initializing Mojarra 2.2.8-jbossorg-1 20140822-1131 for context '/movieplex7' 02:04:26,377 INFO [javax.enterprise.resource.webcontainer.jsf.config] (MSC service thread 1-5) Monitoring file:/opt/jboss/wildfly/standalone/tmp/vfs/temp/temp1267e5586f39ea50/movieplex7-1.0-SNAPSHOT.war-ea3c92cddc1c81c/WEB-INF/faces-config.xml for modifications 02:04:30,216 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Deploying javax.ws.rs.core.Application: class org.javaee7.movieplex7.rest.ApplicationConfig 02:04:30,247 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding class resource org.javaee7.movieplex7.rest.TheaterFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig 02:04:30,248 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding class resource org.javaee7.movieplex7.rest.ShowTimingFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig 02:04:30,248 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding class resource org.javaee7.movieplex7.rest.MovieFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig 02:04:30,248 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding provider class org.javaee7.movieplex7.json.MovieWriter from Application class org.javaee7.movieplex7.rest.ApplicationConfig 02:04:30,249 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding provider class org.javaee7.movieplex7.json.MovieReader from Application class org.javaee7.movieplex7.rest.ApplicationConfig 02:04:30,267 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding class resource org.javaee7.movieplex7.rest.SalesFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig 02:04:30,267 INFO [org.jboss.resteasy.spi.ResteasyDeployment] (MSC service thread 1-5) Adding class resource org.javaee7.movieplex7.rest.TimeslotFacadeREST from Application class org.javaee7.movieplex7.rest.ApplicationConfig 02:04:31,544 INFO [org.wildfly.extension.undertow] (MSC service thread 1-5) JBAS017534: Registered web context: /movieplex7 02:04:32,187 INFO [org.jboss.as.server] (ServerService Thread Pool -- 31) JBAS018559: Deployed "movieplex7-1.0-SNAPSHOT.war" (runtime-name : "movieplex7-1.0-SNAPSHOT.war") 02:04:34,800 INFO [org.jboss.as] (Controller Boot Thread) JBAS015961: Http management interface listening on http://127.0.0.1:9990/management 02:04:34,859 INFO [org.jboss.as] (Controller Boot Thread) JBAS015951: Admin console listening on http://127.0.0.1:9990 02:04:34,859 INFO [org.jboss.as] (Controller Boot Thread) JBAS015874: WildFly 8.2.0.Final "Tweek" started in 38558ms - Started 400 of 452 services (104 services are lazy, passive or on-demand)WildFly startup log, including the 
application deployment, is shown here.

Log into the Docker container Log into the container to look at the WildFly logs from inside. There are a couple of ways to do that. The first is to use the container name and exec a bash shell. For that, get the name of the container as:

docker inspect <CONTAINER_ID> | grep Name

In our case, the output is:

[vagrant@kubernetes-minion-1 ~]$ docker inspect 3f7e174b82b1 | grep Name "Name": "" "Name": "/k8s_wildfly.a78dc60_wildfly.default.api_59be80fa-ab48-11e4-b139-0800279696e1_75a4a7cb",

Log into the container as:

[vagrant@kubernetes-minion-1 ~]$ docker exec -it k8s_wildfly.a78dc60_wildfly.default.api_59be80fa-ab48-11e4-b139-0800279696e1_75a4a7cb bash [root@wildfly /]# pwd /

The other, more classical way is to get the process ID of the container as:

docker inspect <CONTAINER_ID> | grep Pid

In our case, the output is:

[vagrant@kubernetes-minion-1 ~]$ docker inspect 3f7e174b82b1 | grep Pid "Pid": 17920,

Log into the container as:

[vagrant@kubernetes-minion-1 ~]$ sudo nsenter -m -u -n -i -p -t 17920 /bin/bash [root@wildfly /]# pwd /

And now the complete WildFly distribution is available at:

[root@wildfly /]# cd /opt/jboss/wildfly [root@wildfly wildfly]# ls -la total 424 drwxr-xr-x 10 jboss jboss 4096 Dec 5 22:22 . drwx------ 4 jboss jboss 4096 Dec 5 22:22 .. drwxr-xr-x 3 jboss jboss 4096 Nov 20 22:43 appclient drwxr-xr-x 5 jboss jboss 4096 Nov 20 22:43 bin -rw-r--r-- 1 jboss jboss 2451 Nov 20 22:43 copyright.txt drwxr-xr-x 4 jboss jboss 4096 Nov 20 22:43 docs drwxr-xr-x 5 jboss jboss 4096 Nov 20 22:43 domain drwx------ 2 jboss jboss 4096 Nov 20 22:43 .installation -rw-r--r-- 1 jboss jboss 354682 Nov 20 22:43 jboss-modules.jar -rw-r--r-- 1 jboss jboss 26530 Nov 20 22:43 LICENSE.txt drwxr-xr-x 3 jboss jboss 4096 Nov 20 22:43 modules -rw-r--r-- 1 jboss jboss 2356 Nov 20 22:43 README.txt drwxr-xr-x 8 jboss jboss 4096 Feb 3 02:03 standalone drwxr-xr-x 2 jboss jboss 4096 Nov 20 22:43 welcome-content

Clean up the cluster The entire Kubernetes cluster can be cleaned up either from the VirtualBox console or from the command line:

kubernetes> vagrant halt ==> minion-1: Attempting graceful shutdown of VM... ==> minion-1: Forcing shutdown of VM... ==> master: Attempting graceful shutdown of VM... ==> master: Forcing shutdown of VM...

kubernetes> vagrant destroy minion-1: Are you sure you want to destroy the 'minion-1' VM? [y/N] y ==> minion-1: Destroying VM and associated drives... ==> minion-1: Running cleanup tasks for 'shell' provisioner... master: Are you sure you want to destroy the 'master' VM? [y/N] y ==> master: Destroying VM and associated drives... ==> master: Running cleanup tasks for 'shell' provisioner...

So we learned how to run a Java EE 7 application deployed in WildFly and hosted using Kubernetes and Docker. Enjoy!

Reference: Java EE 7 and WildFly on Kubernetes using Vagrant from our JCG partner Arun Gupta at the Miles to go 2.0 … blog....
java-interview-questions-answers

Top 10 Most Common Java Performance Problems

Java performance is an issue of interest for all Java application developers, since making an application fast is as important as making it functional. Steven Haines uses his personal experience with Java performance issues to conclude that most of them have common root causes. So, as a performance analyst, Haines sorts the basic performance issues into three basic categories:

Database problems, which mostly have to do with persistence configuration, caching or database connection thread pool configuration.
Memory problems, which usually are garbage collection misconfiguration or memory leaks.
Concurrency problems, basically deadlocks, gridlocks and thread pool configuration problems.

Let's delve into each category…

Database Since the database is the basic component of an application's functionality, it is also the basic root of performance issues. Problems may occur due to wrong use of database access, a bad connection pool size or missing tuning.

Persistence configuration Even though today Hibernate and other JPA implementations provide fine tuning of database access, there are some more options, such as eager or lazy fetching, that may lead to long response times and database overhead. Eager fetching makes fewer but more complex database calls, whereas lazy fetching makes more numerous but simpler and faster database calls. Problems occur when the load of the application increases and causes a much bigger database load. So, in order to fix this, you can take a look at the business transaction counters and the database counters, but basically at the correlation between a business transaction and database calls. To avoid such problems you must understand well the persistence technology used and set all configuration options correctly, so as to pair their functionality with your business domain needs.

Caching Caching has optimized the performance of applications, since in-memory data is faster to access than persisted data. Problems are caused when no caching is used, so every time a resource is needed it is retrieved from the database. When caching is used, problems occur due to its bad configuration. Basic things to notice here are the fixed size of a cache and the distributed cache configuration. Cached objects are stateful, unlike pools that provide stateless objects. So a cache must be properly configured so as not to exhaust memory. But what if a removed object is requested again? This 'miss' ratio must be configured in the cache settings, along with the memory. Distributed caching may also cause problems. Synchronization is necessary when caches are set up on multiple servers: a cache update is propagated to the caches on all servers. This is how consistency is achieved, but it is a very expensive procedure. When caching is used correctly, an application load increase does not increase the database load; but when the caching settings are wrong, the database load increases, causing CPU overhead and even increased disk I/O. In order to troubleshoot this problem you should first examine the database performance so as to decide whether a cache is needed or not. Then, you should determine the cache size, using the hit ratio and miss ratio metrics. You can avoid facing caching problems, though, by planning your application correctly before building it. Make sure to use serialization and techniques that provide a scalable application.
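To make the fixed-size point concrete, a minimal in-process LRU cache can be sketched with nothing but the JDK. This is a toy illustration, not a replacement for a real cache library:

import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCache<K, V> extends LinkedHashMap<K, V> {

    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        // accessOrder=true makes iteration order least-recently-used first
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry instead of exhausting memory
        return size() > maxEntries;
    }
}

A production cache would additionally track hit and miss counters, which is exactly the data needed for the sizing decision described above.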
Pool connections Pool connections are usually created before starting the application, since they are expensive to create. A pool of connections is shared across the transactions, and the pool size limits the database load. Pool size is important. Too few connections cause business transactions to wait while the database is under-utilized. On the other hand, too many connections cause longer response times and database overload. In order to solve this problem you must check whether your application is waiting for a new connection or for a database query to be executed. You can always avoid it, though, by optimising the database and testing the application with different pool sizes to check which one fits the case.

Memory Memory problems have to do with the garbage collector and memory leaks.

Garbage Collector Garbage collection may cause all threads to stop in order to reclaim memory. When this procedure takes too much time or occurs too often, there is a problem. Its basic symptoms are CPU spikes and long response times. To solve this you can configure your -verbosegc params, use a performance monitoring tool to find when major GCs occur, and a tool to monitor heap usage and possible CPU spikes. It is almost impossible to avoid this problem, though you can limit it by configuring the heap size and cycling your JVM.

Memory leaks Memory leaks in Java occur in different ways than in C or C++, since they are more of a reference management issue. In Java, a reference to an object may be maintained even though the object may never be used again. This may lead to an OutOfMemoryError and demand a JVM restart. When memory usage keeps increasing and the heap eventually runs out of memory, a memory leak has occurred. To solve it, you could configure the JVM params properly. To avoid having to deal with memory leaks, pay attention while coding to leak-sensitive Java collections and session management. You can share memory-leak avoidance tips with colleagues, have an expert take a look at your application code, and use tools to avoid memory leaks and analyze the heap.

Concurrency Concurrency occurs when several computations are executed at the same time. Java uses synchronization and locks to manage multithreading. But synchronization can cause thread deadlocks, gridlocks and thread pool size issues.

Thread deadlocks Thread deadlocks occur when two or more threads are trying to access the same resources, and one is waiting for the other to release a resource and vice versa, as the sketch below shows. When a deadlock occurs, the JVM exhausts all threads and the application gets slower. Deadlocks are very difficult to reproduce. So, a way to solve a deadlock problem is to capture a thread dump while two threads are deadlocked and examine the stack traces of the threads. To avoid this problem, you'd better make your application and its resources as immutable as possible, make careful use of synchronization, and check for potential thread interactions.
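The classic lock-ordering deadlock described above can be reproduced in a few lines. This is a deliberately broken sketch for illustration only; do not copy it into production code:

public class DeadlockDemo {

    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        // Thread 1 takes A then B; thread 2 takes B then A.
        // Each thread ends up waiting for the lock the other one holds.
        new Thread(() -> {
            synchronized (LOCK_A) {
                pause();
                synchronized (LOCK_B) { /* never reached while deadlocked */ }
            }
        }).start();

        new Thread(() -> {
            synchronized (LOCK_B) {
                pause();
                synchronized (LOCK_A) { /* never reached while deadlocked */ }
            }
        }).start();
    }

    private static void pause() {
        try {
            Thread.sleep(100); // widen the window so the deadlock reproduces reliably
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

Capturing a thread dump (for example with jstack) while this program hangs shows both threads blocked on each other's monitor; acquiring the locks in a single global order removes the cycle.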
Thread gridlocks Thread gridlocks may occur when too much synchronization is used and thus too much time is spent waiting for a single resource. To notice this, you must have both slow response times and low CPU utilization, since many threads try to access the same code part and are waiting for the one that holds it to finish. So, how can you solve this? You must first check where your threads are waiting and why. Then, you should eliminate the synchronization requirements according to your business requirements.

Thread pool configuration locks When an application uses an application server or a web container, a thread pool is used to control the concurrently processed requests. If this thread pool is too small, then the requests will wait a lot, but if it is too large, then the processing resources will be too busy. So, at a small pool size the CPU is underutilized but the thread pool utilization is 100%, whereas at a large pool size the CPU is very busy. You can troubleshoot this problem easily by checking your thread pool utilization and CPU utilization and deciding whether to increase or decrease the pool size. To avoid it, you must tune the thread pool, and that is not so easy to do.

Finally, two basic issues that may occur are performance being treated as an afterthought, and performance issues being noticed first by the end users. The first case is a common problem. Usually developers create an application that is functional but fails in performance tests. To solve this they usually have to make an architectural review of the application, where performance analysis tools come in very handy. To avoid this problem, try to test performance while developing the application, so continuous integration is the key. For the second case, what happens when end users of the application inform you that there are performance issues? There are tools to avoid this case, such as JMX, to check your servers' behavior. Business Transaction Performance results combined with JMX results may help too. Method-level response time checks all methods called in a business transaction and finds hotspots of the application. So, you'd better make use of one of these tools, so that end users will never have to alert you about performance. Interested to learn more? Then you should download the relevant ebook here. ...
apache-cassandra-logo

Apache Cassandra and Low-Latency Applications

Introduction Over the years, Grid Dynamics has had many projects related to NoSQL, particularly Apache Cassandra. In this post, we want to discuss a project which brought exciting challenges to us, and the questions we tried to answer in that project remain relevant today as well. Digital marketing and online ads were popular in 2012, and demand for them has only increased. Real-time bidding (RTB) is an integral part of the domain area. RTB means that an ad is placed (bought and sold) via a real-time auction of digital ads. If the bid is won, the buyer's ad is instantly displayed on the publisher's site. RTB requires a low-latency response from the server side (<100ms), otherwise the bid is lost. One of our clients, a US media company, was interested in real-time bidding and user tracking (i.e. the analysis of website visitors' behavior and their preferences). Initially, the client's infrastructure for processing RTB requests included installations of Kyoto Cabinet. In the image below (Picture 1), you can see a source for RTB and third-party requests. All the requests were sent to real-time applications which performed lookup and update requests in the database. Kyoto Cabinet kept the whole dataset in memory, and custom add-ons provided functionality for retention management and persistence.

The aforesaid architecture was good enough from a latency perspective, but nevertheless it had several disadvantages:

Scalability. The architecture allowed only vertical scaling of servers with installations of Kyoto Cabinet. At that time, the servers were equipped with about 50GB memory each. It was clear to everybody that increasing the memory amount would not solve the problem long term.
Robustness. The sole installation of Kyoto Cabinet might cause very serious consequences in case of a failure.
Cross-datacenter replication. The architecture did not have automatic synchronization between data centers. Manual synchronization was a real headache because it required a lot of additional operations.

Our task was to create a new architecture for the system which would not have the aforesaid drawbacks and, at the same time, would allow us to achieve good results in response latency. In other words, we were in need of a data store which would allow us to keep user profiles as well as to perform lookups and updates on them, and all the operations were to be performed within a certain time interval. The architecture was supposed to be built around such a data store.

Requirements The new architecture was intended to solve all these problems. The requirements for the new architecture were as follows:

persistency (no data should be lost in case of power outage in one or both data centers)
high availability (there should be no single point of failure)
scalability (database volume should be relatively easy to increase by adding more nodes)
cross-datacenter replication (data should be synchronized between both data centers)
TTL for data (outdated user profiles should be automatically evicted)
data volume (about 1 billion homogeneous records with multiple attributes, where one record is ~400 bytes)
throughput (5000 random reads + 5000 random writes per second for each data center)
latency of responses (3ms on average, processing time should not exceed 10 ms for 99% of requests)
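The last two requirements drive most of what follows, so it is worth being precise about how such a latency budget is checked. Below is a generic, pure-JDK sketch of measuring the share of requests that exceed the 10 ms budget; this is our illustration of the idea, not the benchmark actually used in the project (doLookup() is a placeholder):

import java.util.Arrays;

public class LatencyCheck {

    public static void main(String[] args) {
        long[] latenciesNanos = new long[10_000];

        for (int i = 0; i < latenciesNanos.length; i++) {
            long start = System.nanoTime();
            doLookup();                              // stands in for one random read
            latenciesNanos[i] = System.nanoTime() - start;
        }

        // Sort and pick the 99th percentile: 1% of requests may exceed it
        Arrays.sort(latenciesNanos);
        long p99 = latenciesNanos[(int) (latenciesNanos.length * 0.99) - 1];
        System.out.printf("p99 = %.2f ms (budget: 10 ms)%n", p99 / 1_000_000.0);
    }

    private static void doLookup() {
        // placeholder for a single lookup against the data store
    }
}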
Also we had some limitations which were related to the infrastructure. One of the limitations was the ability to install a maximum of eight servers per database in each datacenter. At the same time we could select certain server hardware, such as memory amount, storage type, and size. One of the additional requirements from the client was to use replication factor TWO, which was acceptable due to the statistical nature of the data. This could reduce the hardware cost. We examined several possible solutions that could meet our requirements and finally opted for Cassandra. The new architecture with Cassandra became a much more elegant solution. It was just a Cassandra cluster synchronized between two data centers. But the question about its hardware specifications still remained unanswered. Initially we had two options:

SSDs but less memory (less than the entire dataset)
HDDs and more memory (sufficient for keeping the whole dataset)

Actually, there was one more option, which implied using hard drives and less memory, but this configuration did not provide read latency acceptable for our requirements, as a random read from an HDD takes about 8ms even for 10K RPM hard drives. As a result, it was rejected from the very beginning. Thus, we had two configurations. After some tuning (the tuning itself will be discussed in the next section) they both satisfied our needs. Each of them had its own advantages and disadvantages. One of the main drawbacks of the SSD configuration was its cost. Enterprise-level SSDs were rather expensive at that time. Besides, some data center providers surcharged for maintaining servers with SSDs. The approach with HDDs meant reading data from disk cache. Most disadvantages of the configuration were related to the cache, for example, the problem of cold start. It was caused by the fact that the cache was cleared after a system reboot. As a result, reading uncached data from HDD brought about additional timeouts. The timeouts, in fact, were requests which got no response within 10ms. Besides, the disk cache could be accidentally cleared as a result of copying a large amount of data from a Cassandra server while it was up. The last issue was related to the memory size rather than to the cache. Increasing the data amount for a single node was quite difficult. It was possible to add an additional HDD or several HDDs, but the memory size for a single machine was limited and not very large. Finally, we managed to resolve most of the aforesaid issues of the HDD configuration. The cold start problem was resolved by reading data with the cat utility and redirecting its output to /dev/null on startup. The issue related to disk cache cleaning went away after patching rsync, which was used for creating backups. But the problem with memory limitations remained and caused some troubles later. In the end, the client selected the HDD + RAM configuration. Each node was equipped with 96GB memory and 8 HDDs in RAID 5+0.

Tuning Cassandra The version of Cassandra we started with was 1.1.4. Further on, in the process of development we tried out different versions. Finally, we decided upon version 1.2.2, which was approved for production because it contained changes we had committed to the Cassandra repository. For example, we added an improvement which allowed us to specify the populate_io_cache_on_flush option (which populates the disk cache on memtable flush and compaction) individually for each column family. We had to test both remaining configurations to select the more preferable one. For our tests we used a Cassandra cluster that included 3 nodes with 64GB memory and 8 cores each.
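The write tests described next need a client that sustains a fixed request rate. A minimal skeleton of such a load generator is sketched below; this is a simplification for illustration, while the real benchmark used the actual Cassandra client and also tracked timeouts (write() here is just a placeholder):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WriteLoadGenerator {

    public static void main(String[] args) throws InterruptedException {
        final int writesPerSecond = 7_000;
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // Fire a fixed-size batch every 10 ms: 70 writes x 100 ticks = 7000 writes/s
        scheduler.scheduleAtFixedRate(() -> {
            for (int i = 0; i < writesPerSecond / 100; i++) {
                write();
            }
        }, 0, 10, TimeUnit.MILLISECONDS);

        TimeUnit.MINUTES.sleep(5); // test duration
        scheduler.shutdownNow();
    }

    private static void write() {
        // placeholder: the real test inserted one ~400-byte record here
    }
}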
We started the testing with write operations. During the test, we wrote data into Cassandra at the speed of 7000 writes per second. The speed was selected in proportion to the cluster size and the required throughput (doubled for writes in order to take into account the cross-datacenter replication overhead). This methodology was applied to all tests. It is worth mentioning that we used the following preferences:

replication_factor=2
write_consistency_level=TWO
LeveledCompactionStrategy

LeveledCompactionStrategy (LCS) was used because the client's workflow was supposed to have a lot of update operations. Another reason for using LCS was that it decreases the overall dataset size and read latency. Test results were the same for both configurations:

Avg Latency: ~1ms
Timeouts: 0.01%
CPU usage: <5%

Both configurations satisfied our needs, though we did not spend time investigating the nature of the timeouts at this stage. Timeouts will be discussed later. Presumably, most of the response time was taken by the network transfer. Also, we tried to increase the number of write queries per second and it yielded good results. There was no noticeable performance degradation. After that we moved to the next step, i.e. testing read operations. We used the same cluster. All read requests were sent with read_consistency_level=ONE. The write speed was set to 3500 queries per second. There were about 40GB of data on each server, with a single record size of about 400 bytes. Thus, the whole dataset fit the memory size. Test results were as follows:

Looking at the test results for both configurations, we found unsatisfactory percentages of timeouts, which were 2-3 times the required value (2-3% against 1%). Also, we were anxious about the high CPU load (about 20%). At this point, we came to the conclusion that there was something wrong with our configurations. It was not a trivial task to find the root of the problem related to the timeouts. Eventually, we modified the source code of Cassandra and made it return a single fixed value for all read requests (skipping any lookups from SSTables, memtables, etc.). After that, the same test on read operations was executed again. The result was perfect: GC activity and CPU usage were significantly reduced and there were almost no timeouts detected. We reverted our changes and tried to find an optimal configuration for GC. Having experimented with its options, we settled upon the following configuration:

-XX:+UseParallelGC -XX:+UseParallelOldGC -XX:MaxTenuringThreshold=3 -Xmn1500M -Xmx3500M -Xms3500M

We managed to reduce the influence of GC on the performance of Cassandra. It is worth noting that the number of timeouts on read operations exceeded that on write operations because Cassandra created a lot of objects in the heap in the course of reading, which in turn caused intensive CPU usage. As for the latency, it was low enough and could be largely attributed to the time for data transfer. Executing the same test with more intensive reads showed that, in contrast to write operations, increasing the number of read operations significantly affected the number of timeouts. Presumably, this fact is related to the growing activity of GC. It is a well-known fact that GC should be configured individually for each case. In this case, Concurrent Mark Sweep (CMS) was less effective than Parallel Old GC. It was also helpful to decrease the heap size to a relatively small value. The configuration described above is one that suited our needs, though it might not have been the best one. Also, we tried different versions of Java.
Java 1.7 gave us some performance improvement over Java 1.6. The relative number of timeouts decreased. Another thing we tried was enabling/disabling row/key caching in Cassandra. Disabling caches slightly decreased GC activity. The next option that produced surprising results was the number of threads in the pools which processed read/write requests in Cassandra. Increasing this value from 32 to 128 made a significant difference in performance, as our benchmark emulated multiple clients (up to 500 threads). Also, we tried out different versions of CentOS and various configurations of SELinux. After switching to a later 6.3 version, we found that Java futures returned control on timeout in a shorter period of time. Changes in the configuration of SELinux had no effect on performance. As soon as the read performance issues were resolved, we performed tests in the mixed mode (reads + writes). Here we observed a situation which is described in the chart below (Picture 2). After each flush to an SSTable, Cassandra started to read data from disks, which in turn caused increased timeouts on the client side. This problem was relevant for the HDD+RAM configuration because reading from SSD did not result in additional timeouts.

We tried to tinker with Cassandra configuration options, namely populate_io_cache_on_flush (which is described above). This option was turned off by default, meaning that the filesystem cache was not populated with new SSTables. Therefore, when the data from a new SSTable was accessed, it was read from HDD. Setting its value to true fixed the issue. The chart below (Picture 3) displays disk reads after the improvement.

In other words, Cassandra stopped reading from disks after the whole dataset was cached in memory, even in the mixed mode. It's noteworthy that the populate_io_cache_on_flush option is turned on by default in Cassandra starting from version 2.1, though it was excluded from the configuration file. The summary below (Table 2) describes the changes we tried and their impact.

Finally, after applying the changes described in this post, we achieved acceptable results for both SSD and HDD+RAM configurations. Much effort was also put into tuning the Cassandra client (we used Astyanax) to operate well with replication factor two and reliably return control on time in case of a timeout. We would also like to share some details about operations automation and monitoring, as well as ensuring proper work of the cross-datacenter replication, but it is very difficult to cover all the aspects in a single post. As stated above, we had gone to production with the HDD+RAM configuration and it worked reliably with no surprises, including a Cassandra upgrade on the live cluster without downtime.

Conclusion Cassandra was new to us when it was introduced into the project. We had to spend a lot of time exploring its features and configuration options. It allowed us to implement the required architecture and deliver the system on time. And at the same time we gained great experience. We carried out significant work integrating Cassandra into our workflow. All our changes in the Cassandra source code were contributed back to the community. Our digital marketing client benefited by having a more stable and scalable infrastructure with automated synchronization, reducing the amount of time they had to spend maintaining the systems.

About Grid Dynamics Grid Dynamics is a leading provider of open, scalable, next-generation commerce technology solutions for Tier 1 retail.
We would also like to share some details about operations automation and monitoring, as well as ensuring proper operation of the cross-data-center replication, but it is very difficult to cover all these aspects in a single post. As stated above, we went to production with the HDD+RAM configuration, and it worked reliably with no surprises, including a Cassandra upgrade on the live cluster without downtime.

Conclusion

Cassandra was new to us when it was introduced into the project, and we had to spend a lot of time exploring its features and configuration options. It allowed us to implement the required architecture and deliver the system on time, and at the same time we gained valuable experience. We carried out significant work integrating Cassandra into our workflow, and all our changes to the Cassandra source code were contributed back to the community. Our digital marketing client benefited from a more stable and scalable infrastructure with automated synchronization, reducing the amount of time they had to spend maintaining their systems.

About Grid Dynamics

Grid Dynamics is a leading provider of open, scalable, next-generation commerce technology solutions for Tier 1 retail. Grid Dynamics has in-depth expertise in commerce technologies and broad involvement in the open source community. Great companies, partnered with Grid Dynamics, gain a sustainable business advantage by implementing and managing solutions in the areas of omnichannel platforms, product search and personalization, and continuous delivery. To learn more about Grid Dynamics, find us at www.griddynamics.com or follow us on Twitter @GridDynamics.

Reference: Apache Cassandra and Low-Latency Applications from our JCG partner Dmitry Yaraev at the Planet Cassandra blog.
How JVMTI tagging can affect GC pauses

This post analyzes why and how Plumbr Agents extended the length of GC pauses on certain occasions. Troubleshooting the underlying problem revealed interesting insights about how JVMTI tagging is handled during GC pauses.

Spotting a problem

One of our customers complained about the application being significantly less responsive with the Plumbr Agent attached. Upon analyzing the GC logs, we found an anomaly in the GC times. Here is a GC log snippet from the JVM without Plumbr:

2015-01-30T17:19:08.965-0200: 182.816: [Full GC (Ergonomics) [PSYoungGen: 524800K->0K(611840K)] [ParOldGen: 1102620K->1103028K(1398272K)] 1627420K->1103028K(2010112K), [Metaspace: 2797K->2797K(1056768K)], 0.9563188 secs] [Times: user=7.32 sys=0.01, real=0.96 secs]

And here is one with the Plumbr Agent attached:

2015-02-02T17:40:35.872-0200: 333.166: [Full GC (Ergonomics) [PSYoungGen: 524800K->0K(611840K)] [ParOldGen: 1194734K->1197253K(1398272K)] 1719534K->1197253K(2010112K), [Metaspace: 17710K->17710K(1064960K)], 1.9900624 secs] [Times: user=7.94 sys=0.01, real=1.99 secs]

The anomaly is hidden in the elapsed time. The real time is the actual wall-clock time that has passed; if you looked at a stopwatch in your hand, real time would be equal to that number. The user time (plus the system time) is the total CPU time consumed during the measurement. It can be greater than the real time when multiple threads run on multiple cores. So, for the Parallel GC, the real time should be roughly equal to (user time / number of threads). On my machine this ratio should be close to 7, and it indeed was so without the Plumbr Agent (7.32 / 0.96 ≈ 7.6). With Plumbr, however, the ratio plunged to about 4 (7.94 / 1.99 ≈ 4.0). Definitely not okay!

Initial Investigation

Given such evidence, the following were the most likely hypotheses:

1. Plumbr causes the JVM to perform some heavy single-threaded operation after each GC
2. Plumbr causes the JVM to use fewer threads for garbage collection

But looking at just one line in the GC log gives too narrow a view to proceed, so we went ahead and visualized the aforementioned ratios. The drop on the chart occurs at exactly the moment when Plumbr discovers the memory leak. Some extra burden on the GC during root cause analysis was expected, but permanently affecting the GC pause length was definitely not a feature we had deliberately designed into our Agent. Such behavior favors the first hypothesis, since it is very unlikely that we could influence the number of GC threads at runtime.

Creating an isolated test case took a while, but with the help of the following constraints, we could nail it:

- The application must leak memory for Plumbr to detect
- The application must frequently pause for garbage collection
- ... and as the breaking point, the application must have a large live set, meaning that the number of objects surviving a Full GC must be large

A minimal application satisfying these constraints is sketched below.
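The following sketch was written for this article and is not Plumbr's actual test case; the heap and allocation sizes are arbitrary assumptions. It leaks, it allocates enough garbage to pause frequently for GC, and its live set keeps growing.

import java.util.ArrayList;
import java.util.List;

// Run with a small heap, e.g. java -Xmx512m LeakyApp, to force frequent Full GCs.
public class LeakyApp {

    // The "leak": strongly referenced objects that survive every Full GC,
    // so the live set grows without bound.
    private static final List<byte[]> LIVE_SET = new ArrayList<>();

    public static void main(String[] args) {
        byte[] sink = null;
        while (true) {
            // Grow the live set so each Full GC has more surviving objects.
            LIVE_SET.add(new byte[1024]);
            // Churn out short-lived garbage to trigger frequent collections.
            for (int i = 0; i < 1000; i++) {
                sink = new byte[16 * 1024];
            }
        }
    }
}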
After having compiled a small enough test case, it was possible to zoom in on the root cause. A sound approach was to switch individual features of the Plumbr Agent on and off and see in which configurations the issue would reproduce. With this simple search, we managed to pinpoint the issue to a single action the Plumbr Agent performs: the problem disappeared with JVMTI tagging turned off. During our analysis of paths to GC roots and reference chains, we tag every single object on the heap. Apparently, the GC times were somehow affected by the tags we generated.

Finding The Underlying Root Cause

Still, it was not clear why the GC pauses were extended. The garbage is quickly collected, and most of the tagged objects are supposed to be eligible for GC. What was discovered, though, was that with a large live set (which is one of the symptoms of a memory leak), a lot of tagged objects are retained. But even if all the objects in the live set are tagged, this is not supposed to affect GC time linearly. After GC is done, we receive notifications on all of our tagged objects that were collected, but the live set is not among those objects. This leads one to wonder whether HotSpot, for some bizarre reason, iterates through all the tagged objects after each GC.

To verify the claim, one may take a look at the HotSpot source code. After some digging, we eventually arrived at JvmtiTagMap::do_weak_oops, which indeed iterates over all the tags and performs a number of not-so-cheap operations for each of them. To make things worse, this operation is performed sequentially and is not parallelized. The last piece of the puzzle fell into place after finding the chain of invocations calling this method after each garbage collection. (Why it is done the way it is done, and what it has to do with weak references, is beyond the scope of this article.)

Running on Parallel GC while an operation as expensive as this runs serially might initially seem like a design flaw. On second thought, the JVMTI creators probably never expected anyone to tag the whole heap and thus never bothered to heavily optimize this operation or run it in parallel. After all, you can never predict all the ways in which people will use the features you design, so maybe it is worth checking whether the post-GC activities in HotSpot should also get a chance to use all the gazillion cores a modern JVM tends to have access to.

So, to counter this, we needed to clean up the tags we no longer need. Fixing it was as easy as adding merely three lines to one of our JVMTI callbacks:

+ if (isGenerated(*tag_ptr)) {
+     *tag_ptr = 0;
+ }

And lo and behold, once the analysis is complete, we are almost as good as we were at the start. As seen in the following screenshot, there is still a temporary performance flux during the memory leak discovery, and a slight deterioration after the memory leak analysis has completed.

Wrapping it up

The patch is now rolled out, and the situation where GC pause times were affected after Plumbr detected a leak is fixed. Feel free to grab an updated Agent to tackle your performance issues. As a take-away, I recommend being extra careful with extensive tagging, as the "cheap" tags can pile up in corner cases, adding up to a massive performance penalty. To make sure you are not abusing tagging, flip the diagnostic option -XX:+TraceJVMTIObjectTagging (diagnostic options also require -XX:+UnlockDiagnosticVMOptions). It will give you an estimate of how much native memory the tag map consumes and how much time the heap walks take.

Reference: How JVMTI tagging can affect GC pauses from our JCG partner Gleb Smirnov at the Plumbr blog.
Codename One Charts

This post was written by Steve Hannah, one of the newest additions to the Codename One team and a long-time community contributor.

The upcoming update to Codename One will include a new package (com.codename1.charts) for rendering charts in your applications. This includes models and renderers for many common classes of charts, including many flavours of bar charts, line charts, scatter charts, and pie charts.

Goals

For the charts package, we wanted to enable Codename One developers to add charts and visualizations to their apps without having to include external libraries or embed web views. We also wanted to harness the new features in the graphics pipeline to maximize performance.

Differences from CN1aChartEngine

This package is based on the existing CN1aChartEngine library, but has been refactored substantially to reduce its size, improve its performance, and simplify its API. If you have used the existing CN1aChartEngine library, much of the API (e.g. models and renderers) will be familiar. The key differences are:

- API. It includes ChartComponent, a first-class Codename One Component that can be included anywhere inside your forms. CN1aChartEngine used a number of Android-like abstractions (e.g. View, Intent, and Activity) to simplify the porting process from the original Android library. While this indeed made it easier to port, it made the API a little confusing for Codename One development.
- Performance. It uses the built-in Codename One graphics pipeline for rendering all graphics. CN1aChartEngine used the CN1Pisces library for rendering graphics, which is an order of magnitude slower than the built-in pipeline. This was for historical reasons: when CN1aChartEngine was first developed, the built-in pipeline was missing some features necessary to implement charts.

Note: Actually, just before refactoring CN1aChartEngine to produce the charts package, I ported it over to use the built-in pipeline. If you are already using CN1aChartEngine in your app and want to benefit from the improved performance without having to change your code, you can update to that version.

Device Support

Since the charts package makes use of 2D transformations and shapes, it requires some of the new graphics features that are not yet available on all platforms. Currently, the following platforms are supported:

- Simulator
- Android
- iOS

If you require support for other platforms, you may want to use the CN1aChartEngine library instead.

Features

- Built-in support for many common types of charts, including bar charts, line charts, stacked charts, scatter charts, pie charts, and more.
- Pinch Zoom – The ChartComponent class includes optional pinch zoom support.
- Panning Support – The ChartComponent class includes optional support for panning.

Chart Types

The com.codename1.charts package includes models and renderers for many different types of charts. It is also extensible, so you can add your own chart types if required. The following screenshots demonstrate a small sampling of the types of charts that can be created.

Note: The above screenshots were taken from the ChartsDemo app. You can start playing with this app now by checking it out from our Subversion repository.

How to Create A Chart

Adding a chart to your app involves four steps:

1. Build the model. You can construct a model (aka data set) for the chart using one of the existing model classes in the com.codename1.charts.models package. Essentially, this is just where you add the data that you want to display.
2. Set up a renderer. You can create a renderer for your chart using one of the existing renderer classes in the com.codename1.charts.renderers package. The renderer allows you to specify how the chart should look: e.g. the colors, fonts, and styles to use.
3. Create the chart view. Use one of the existing view classes in the com.codename1.charts.views package.
4. Create a ChartComponent. In order to add your chart to the UI, you need to wrap it in a ChartComponent object.

You can check out the ChartsDemo app for specific examples, but here is a high-level view of some code that creates a pie chart:

/**
 * Creates a renderer for the specified colors.
 */
private DefaultRenderer buildCategoryRenderer(int[] colors) {
    DefaultRenderer renderer = new DefaultRenderer();
    renderer.setLabelsTextSize(15);
    renderer.setLegendTextSize(15);
    renderer.setMargins(new int[]{20, 30, 15, 0});
    for (int color : colors) {
        SimpleSeriesRenderer r = new SimpleSeriesRenderer();
        r.setColor(color);
        renderer.addSeriesRenderer(r);
    }
    return renderer;
}

/**
 * Builds a category series using the provided values.
 *
 * @param title the series title
 * @param values the values
 * @return the category series
 */
protected CategorySeries buildCategoryDataset(String title, double[] values) {
    CategorySeries series = new CategorySeries(title);
    int k = 0;
    for (double value : values) {
        series.add("Project " + ++k, value);
    }
    return series;
}

public Form createPieChartForm() {
    // Generate the values
    double[] values = new double[]{12, 14, 11, 10, 19};

    // Set up the renderer
    int[] colors = new int[]{ColorUtil.BLUE, ColorUtil.GREEN, ColorUtil.MAGENTA,
            ColorUtil.YELLOW, ColorUtil.CYAN};
    DefaultRenderer renderer = buildCategoryRenderer(colors);
    renderer.setZoomButtonsVisible(true);
    renderer.setZoomEnabled(true);
    renderer.setChartTitleTextSize(20);
    renderer.setDisplayValues(true);
    renderer.setShowLabels(true);
    SimpleSeriesRenderer r = renderer.getSeriesRendererAt(0);
    r.setGradientEnabled(true);
    r.setGradientStart(0, ColorUtil.BLUE);
    r.setGradientStop(0, ColorUtil.GREEN);
    r.setHighlighted(true);

    // Create the chart ... pass the values and renderer to the chart object.
    PieChart chart = new PieChart(buildCategoryDataset("Project budget", values), renderer);

    // Wrap the chart in a Component so we can add it to a form
    ChartComponent c = new ChartComponent(chart);

    // Create a form and return it.
    Form f = new Form("Budget");
    f.setLayout(new BorderLayout());
    f.addComponent(BorderLayout.CENTER, c);
    return f;
}
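To display the resulting form, a hypothetical call site (not from the original post) could be as simple as the following, e.g. from your app's start() method:

// Hypothetical usage: build the pie chart form and show it.
Form pieForm = createPieChartForm();
pieForm.show();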
Reference: Codename One Charts from our JCG partner Shai Almog at the Codename One blog.