
What's New Here?


Custom Reason Phrase in HTTP status error message response with JAX-RS (Jersey)

In some of my recent work I got a request to produce a custom Reason Phrase in the HTTP status response delivered to one of our REST API consuming clients when an error occurs. In this post I will demonstrate how you can achieve that with Jersey.

1. Define checked exception and exception mapper

As you might have found out from my post Error handling in REST API with Jersey, I like handling checked exceptions using Jersey's ExceptionMapper capability. For the purpose of this demonstration I defined a CustomReasonPhraseException:

```java
package org.codingpedia.demo.rest.errorhandling;

public class CustomReasonPhraseException extends Exception {

    private static final long serialVersionUID = -271582074543512905L;

    private final int businessCode;

    public CustomReasonPhraseException(int businessCode, String message) {
        super(message);
        this.businessCode = businessCode;
    }

    public int getBusinessCode() {
        return businessCode;
    }
}
```

and a CustomReasonPhraseExceptionMapper to handle the mapping to a response if a CustomReasonPhraseException occurs:

```java
package org.codingpedia.demo.rest.errorhandling;

import javax.ws.rs.core.Response;
import javax.ws.rs.core.Response.Status;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

@Provider
public class CustomReasonPhraseExceptionMapper implements ExceptionMapper<CustomReasonPhraseException> {

    public Response toResponse(CustomReasonPhraseException bex) {
        return Response.status(new CustomReasonPhraseExceptionStatusType(Status.BAD_REQUEST))
                .entity("Custom Reason Phrase exception occured : " + bex.getMessage())
                .build();
    }
}
```

Reminder: when the application throws a CustomReasonPhraseException, the toResponse method of the CustomReasonPhraseExceptionMapper instance will be invoked. In the ExceptionMapper code, note the status line:

```java
return Response.status(new CustomReasonPhraseExceptionStatusType(Status.BAD_REQUEST))
```

In Jersey's ResponseBuilder you have the possibility to define your own status types, by implementing the javax.ws.rs.core.Response.StatusType interface.

2. Implement custom StatusType

To make it a little more extensible I've created an AbstractStatusType class:

```java
package org.codingpedia.demo.rest.errorhandling;

import javax.ws.rs.core.Response.Status;
import javax.ws.rs.core.Response.Status.Family;
import javax.ws.rs.core.Response.StatusType;

/**
 * Class used to provide custom StatusTypes, especially for the Reason Phrase
 * that appears in the HTTP status response.
 */
public abstract class AbstractStatusType implements StatusType {

    public AbstractStatusType(final Family family, final int statusCode,
            final String reasonPhrase) {
        super();
        this.family = family;
        this.statusCode = statusCode;
        this.reasonPhrase = reasonPhrase;
    }

    protected AbstractStatusType(final Status status, final String reasonPhrase) {
        this(status.getFamily(), status.getStatusCode(), reasonPhrase);
    }

    @Override
    public Family getFamily() {
        return family;
    }

    @Override
    public String getReasonPhrase() {
        return reasonPhrase;
    }

    @Override
    public int getStatusCode() {
        return statusCode;
    }

    private final Family family;
    private final int statusCode;
    private final String reasonPhrase;
}
```

which I extend afterwards with the CustomReasonPhraseExceptionStatusType to provide the custom Reason Phrase I desire (e.g. "Custom error message") in the response:

```java
package org.codingpedia.demo.rest.errorhandling;

import javax.ws.rs.core.Response.Status;

/**
 * Implementation of StatusType for CustomReasonPhraseException.
 * The Reason Phrase is set in this case to "Custom error message".
 */
public class CustomReasonPhraseExceptionStatusType extends AbstractStatusType {

    private static final String CUSTOM_EXCEPTION_REASON_PHRASE = "Custom error message";

    public CustomReasonPhraseExceptionStatusType(Status httpStatus) {
        super(httpStatus, CUSTOM_EXCEPTION_REASON_PHRASE);
    }
}
```

3. Test the custom Reason Phrase in the HTTP status response

3.1. Request

```
GET http://localhost:8888/demo-rest-jersey-spring/mocked-custom-reason-phrase-exception HTTP/1.1
Accept-Encoding: gzip,deflate
Host: localhost:8888
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.1.1 (java 1.5)
```

3.2. Response

Et voila:

```
HTTP/1.1 400 Custom error message
Content-Type: text/plain
Content-Length: 95
Server: Jetty(9.0.7.v20131107)

Custom Reason Phrase exception occured : message attached to the Custom Reason Phrase Exception
```

The custom Reason Phrase appears in the response status line as expected.

Tip: if you really want to learn how to design and implement REST APIs in Java, read the following tutorial – REST API design and implementation in Java with Jersey and Spring.

Summary

You've seen in this post how to create a custom Reason Phrase in an HTTP status response when you want to flag a "special" error. Of course you can use this mechanism to define your own Reason Phrases for other HTTP statuses as well. That said, you should not abuse the Reason Phrase feature, as HTTP 1.1 (RFC 2616) states the following:

"The Status-Code element is a 3-digit integer result code of the attempt to understand and satisfy the request. These codes are fully defined in section 10. The Reason-Phrase is intended to give a short textual description of the Status-Code. The Status-Code is intended for use by automata and the Reason-Phrase is intended for the human user. The client is not required to examine or display the Reason-Phrase."[1]

Well, that's it. Keep on coding and keep on sharing coding knowledge.

Reference: Custom Reason Phrase in HTTP status error message response with JAX-RS (Jersey) from our JCG partner Adrian Matei at the Codingpedia.org blog.
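For completeness, the post never shows the resource behind the mocked-custom-reason-phrase-exception endpoint used in the request above. A minimal sketch of what such a resource might look like (hypothetical; only the exception class and the message string are taken from the examples above, and the business code 1001 is illustrative):

```java
package org.codingpedia.demo.rest.errorhandling;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;

// Hypothetical resource sketch; the class name and business code are assumptions.
@Path("/")
public class MockedErrorResource {

    @GET
    @Path("/mocked-custom-reason-phrase-exception")
    public Response getMockedError() throws CustomReasonPhraseException {
        // Throwing the checked exception hands control to the registered
        // CustomReasonPhraseExceptionMapper, which builds the 400 response
        // with the custom Reason Phrase shown in section 3.2 above.
        throw new CustomReasonPhraseException(1001,
                "message attached to the Custom Reason Phrase Exception");
    }
}
```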

Legacy Code to Testable Code #6: Add Overload

This post is part of the "Legacy Code to Testable Code" series. In the series we'll talk about making refactoring steps before writing tests for legacy code, and how they make our life easier. In the last post I talked about Extract Class, and how sometimes, in order to do that, we might want to change the signature of a method. It turns out that adding an overload helps in other cases as well.

We used a "setter" to expose and inject internal state before. Another option is to add a controllable overload to bypass the internal state. Let's look at this code:

```java
public boolean isSameStreet(String newStreet) {
    return newStreet.equals(this.currentAddress.getStreet());
}
```

In this example, we compare an external value to internal state. As we saw before, one option is to add a "setter" accessor to allow injecting the internal value from outside. Instead, we can also add an overload:

```java
public boolean isSameStreet(String newStreet) {
    return isSameStreet(newStreet, this.currentAddress.getStreet());
}

public boolean isSameStreet(String newStreet, String currentStreet) {
    return newStreet.equals(currentStreet);
}
```

We do the actual comparison in the new overload. The new overload should be accessible to the test, depending on the language (so it doesn't have to be completely public). In the original method, we delegate the call to the new implementation. The new overload is more controllable.

We could stop there, but if our logic code no longer relies on state, why not use Extract Class?

```java
public boolean isSameStreet(String newStreet) {
    return StreetValidator.areEqual(newStreet, this.currentAddress.getStreet());
}
```

The StreetValidator class can now be controlled and tested easily (a sketch of such a class follows below).

Time to wrap up the series. Next time, in the last chapter: using a dependency injection framework.

Reference: Legacy Code to Testable Code #6: Add Overload from our JCG partner Gil Zilberfeld at the Geek Out of Water blog.
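As a follow-up to the Extract Class step above, here is a minimal sketch of what the extracted StreetValidator could look like (hypothetical; the original post only shows the call site):

```java
// Hypothetical implementation of the extracted class. Because it holds no
// object state, it can be exercised directly by plain unit tests.
public final class StreetValidator {

    private StreetValidator() {
        // utility class, no instances
    }

    // Null-safe equality check between the external value and the
    // (formerly internal) current street.
    public static boolean areEqual(String newStreet, String currentStreet) {
        return newStreet != null && newStreet.equals(currentStreet);
    }
}
```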

Five Tips for Tactical Management

Sometimes, you just need to get on with the work. You need to give yourself some breathing room so you can think for a while. Here are some tips that will help you tackle the day-to-day management work:

1. Schedule and conduct your one-on-ones. Being a manager means you make room for the people stuff: the one-on-ones, the coaching and feedback, or the meta-coaching or the meta-feedback that you offer in the one-on-ones. Those actions are tactical, and if you don't do them, they become strategic.
2. As a manager, make sure you have team meetings. No, not serial status meetings. Never those. Problem-solving meetings, please. The more managers you manage, the more critical this step is. If you miss these meetings, people notice. They wonder what's wrong with you, and they make up stories. While the stories might be interesting, you do not want people making up stories about what is wrong with you or your management, do you?
3. Stop multitasking and delegate. Your people are way more capable than you think they are. Stop trying to do it all. Stop trying to do technical work if you are a manager. Take pride in your management work and do the management work.
4. Stop estimating on behalf of your people. This is especially true for agile teams. If you don't like the estimate, ask them why they think it will take that long, and then work with them on removing obstacles.
5. If you have leftover time, it's time to work on the strategic work. What is the most important work you and your team can do? What is your number one project? What work should you not be doing? This is project portfolio management. You might find it difficult to make these decisions. But the more you make these decisions, the better it is for you and your group.

Okay, there are your five tips. Happy management.

Reference: Five Tips for Tactical Management from our JCG partner Johanna Rothman at the Managing Product Development blog.

User sessions, Data controls and AM pooling

Recently I was asked an interesting question about application module pooling. As we know, the AM pool contains application module instances referenced by user sessions, which allows a session to fetch exactly the same AM instance from the pool on a subsequent request. And if there is more than one root application module in the application, then each of them is going to have its own AM pool.

But what about the situation when the application handles more than one instance of the same root application module? Consider, for example, any kind of UI Shell application where each tab runs a task flow with an isolated data control scope. In this case a user session references several AM instances in the pool. For this particular example there are going to be four AMs in the pool referenced by one session: one for the menu and three for the tabs.

So the question is: how come the framework doesn't mess it all up, and how does it know exactly which AM instance in the pool should be used by each tab? The answer is that an application module instance in the pool is not directly referenced by a user session. Instead, it is referenced by a SessionCookie object, which is unique for each DataControl instance. Since the task flows in the application have been run with an isolated data control scope, there is a separate DataControl instance for each of them.

That's it!

Reference: User sessions, Data controls and AM pooling from our JCG partner Eugene Fedorenko at the ADF Practice blog.

Spring Rest API with Swagger – Creating documentation

The real key to making your REST API easy to use is good documentation. But even if your documentation is done well, you need to set your company processes up right to publish it correctly and on time. Ensuring that stakeholders receive it on time is one thing, but you are also responsible for updates in both the API and the documentation. Having this process done automatically provides an easy way out of trouble, since your documentation is no longer a static deliverable and becomes a living thing. In a previous post, I discussed how to integrate Swagger with your Spring application with Jersey. Now it's time to show you how to create documentation and publish it for others to see.

Before I get down to the actual documentation, let's start with a few notes on its form and properties. We will be using annotations to supply metadata to our API, which answers the question how. But what about why? On one hand we are supplying new annotations to already annotation-ridden places like API endpoints or controllers (in case of integration with Spring MVC). But on the other, this approach has a standout advantage in binding the release cycle of the application, the API and the documentation into one delivery. Using this approach allows us to create and manage small cohesive units, ensuring proper segmentation of the documentation and its versioning as well.

Creating endpoint documentation

Everything starts right on top of your endpoint. In order to make Swagger aware of your endpoint, you need to annotate your class with the @Api annotation. Basically, all you want to do here is name your endpoint and provide some description for your users. This is exactly what I am doing in the following code snippet. If you feel the need to go into more detail with your API documentation, check out the @Api annotation description below.

```java
package com.jakubstas.swagger.rest;

/**
 * REST endpoint for user manipulation.
 */
@Api(value = "users", description = "Endpoint for user management")
@Path("/users")
public class UsersEndpoint {
    ...
}
```

To verify the results, just enter the URL from your basePath variable followed by /api-docs into your browser. This is the place where the resource listing for your APIs resides. You can expect something similar to the following snippet, which I received after annotating three of my endpoints and accessing http://[hostname]:[port]/SpringWithSwagger/rest/api-docs/:

```json
{
    "apiVersion": "1.0",
    "swaggerVersion": "1.2",
    "apis": [
        {
            "path": "/users",
            "description": "Endpoint for user management"
        },
        {
            "path": "/products",
            "description": "Endpoint for product management"
        },
        {
            "path": "/employees",
            "description": "Endpoint for employee listing"
        }
    ]
}
```

However, please note that in order for an API to appear in the APIs listing, you have to annotate at least one API method with Swagger annotations. If none of your methods is annotated (or you haven't provided any methods yet), the API documentation will not be processed and published.

@Api annotation

Describes a top-level api. Classes with @Api annotations will be included in the resource listing. Annotation parameters:

- value – short description of the Api
- description – general description of this class
- basePath – the base path that is prepended to all @Path elements
- position – optional explicit ordering of this Api in the resource listing
- produces – content type produced by this Api
- consumes – media type consumed by this Api
- protocols – protocols that this Api requires (i.e. https)
- authorizations – authorizations required by this Api

Operations documentation

Now, let's move on to the key part of API documentation.
There are basically two main parts of operation documentation – the operation description and the response description. Let's start with the operation description. The @ApiOperation annotation provides a detailed description of what a certain method does, its response, HTTP method and other useful information presented in the annotation description below. An example of an operation declaration for Swagger can be seen in the following code sample.

@ApiOperation annotation

Describes an operation or typically an HTTP method against a specific path. Operations with equivalent paths are grouped in an array in the Api declaration. Annotation parameters:

- value – brief description of the operation
- notes – long description of the operation
- response – default response class from the operation
- responseContainer – if the response class is within a container, specify it here
- tags – currently not implemented in readers, reserved for future use
- httpMethod – the HTTP method, i.e. GET, PUT, POST, DELETE, PATCH, OPTIONS
- position – allow explicit ordering of operations inside the Api declaration
- nickname – the nickname for the operation, to override what is detected by the annotation scanner
- produces – content type produced by this Api
- consumes – media type consumed by this Api
- protocols – protocols that this Api requires (i.e. https)
- authorizations – authorizations required by this Api

You may notice the use of the response parameter in the @ApiOperation annotation, which specifies the type of the response (return type) from the operation. As you can see, this value can be different from the method return type, since it serves only for purposes of API documentation.

```java
@GET
@Path("/{userName}")
@Produces(MediaType.APPLICATION_JSON)
@ApiOperation(value = "Returns user details",
        notes = "Returns a complete list of users details with a date of last modification.",
        response = User.class)
@ApiResponses(value = {
        @ApiResponse(code = 200, message = "Successful retrieval of user detail", response = User.class),
        @ApiResponse(code = 404, message = "User with given username does not exist"),
        @ApiResponse(code = 500, message = "Internal server error")})
public Response getUser(
        @ApiParam(name = "userName", value = "Alphanumeric login to the application", required = true)
        @PathParam("userName") String userName) {
    ...
}
```

Next, take a look at the use of @ApiParam. It is always useful to describe to the client what you need in order to fulfill their request. This is the primary aim of the @ApiParam annotation. Whether you are working with a path or query parameter, you should always provide a clarification of what this parameter represents.

@ApiParam annotation

Represents a single parameter in an Api Operation. A parameter is an input to the operation. Annotation parameters:

- name – name of the parameter
- value – description of the parameter
- defaultValue – default value – if e.g. no JAX-RS @DefaultValue is given
- allowableValues – description of values this endpoint accepts
- required – specifies if the parameter is required or not
- access – specify an optional access value for filtering in a Filter implementation. This allows you to hide certain parameters if a user doesn't have access to them
- allowMultiple – specifies whether or not the parameter can have multiple values provided

Finally, let's look at the way of documenting the actual method responses in terms of messages and HTTP codes. Swagger comes with the @ApiResponse annotation, which can be used multiple times when it's wrapped using the @ApiResponses wrapper.
This way you can cover all alternative execution flows of your code and provide a full API operation description for the clients of your API. Each response can be described in terms of HTTP return code, description of the result and type of the result. For more details about @ApiResponse see the description below.

@ApiResponse annotation

An ApiResponse represents a type of response from a server. This can be used to describe both success codes as well as errors. If your Api has different response classes, you can describe them here by associating a response class with a response code. Note, Swagger does not allow multiple response types for a single response code. Annotation parameters:

- code – response code to describe
- message – human-readable message to accompany the response
- response – optional response class to describe the payload of the message

Using these annotations is pretty simple and provides a nicely structured approach to describing the features of your API. If you want to check what your documentation looks like, just enter the URL pointing to the API documentation of one of your endpoints by appending the value of the parameter value from the @Api annotation to the URL pointing to the resource listing. Be careful not to enter the value of the @Path annotation by mistake (unless they have the same value). In the case of my example the desired URL is http://[hostname]:[port]/SpringWithSwagger/rest/api-docs/users. You should be able to see output similar to the following snippet:

```json
{
    "apiVersion": "1.0",
    "swaggerVersion": "1.2",
    "basePath": "http://[hostname/ip address]:[port]/SpringWithSwagger/rest",
    "resourcePath": "/users",
    "apis": [
        {
            "path": "/users/{userName}",
            "operations": [
                {
                    "method": "GET",
                    "summary": "Returns user details",
                    "notes": "Returns a complete list of users details with a date of last modification.",
                    "type": "User",
                    "nickname": "getUser",
                    "produces": ["application/json"],
                    "authorizations": {},
                    "parameters": [
                        {
                            "name": "userName",
                            "description": "Alphanumeric login to application",
                            "required": true,
                            "type": "string",
                            "paramType": "path",
                            "allowMultiple": false
                        }
                    ],
                    "responseMessages": [
                        {
                            "code": 200,
                            "message": "Successful retrieval of user detail",
                            "responseModel": "User"
                        },
                        {
                            "code": 404,
                            "message": "User with given username does not exist"
                        },
                        {
                            "code": 500,
                            "message": "Internal server error"
                        }
                    ]
                }
            ]
        }
    ],
    "models": {
        "User": {
            "id": "User",
            "properties": {
                "surname": {"type": "string"},
                "userName": {"type": "string"},
                "lastUpdated": {
                    "type": "string",
                    "format": "date-time"
                },
                "avatar": {
                    "type": "array",
                    "items": {"type": "byte"}
                },
                "firstName": {"type": "string"},
                "email": {"type": "string"}
            }
        }
    }
}
```

Creating model documentation

By supplying the User class to the response parameter of several annotations in the previous example, I've managed to introduce a new, undocumented element into my API documentation. Swagger was able to pull out all the structural data about the User class with no regard for its relevance to the API. To counter this effect, Swagger provides two annotations to supply additional information to the users of your API and restrict the visibility of your model. To mark a model class for processing by Swagger, just place @ApiModel on top of your class. As usual, you can provide a description as well as inheritance configuration. For more information see the @ApiModel description below.

@ApiModel annotation

A bean class used in the REST api. Suppose you have an interface @PUT @ApiOperation(...) void foo(FooBean fooBean); there is no direct way to see what fields FooBean would have.
This annotation is meant to give a description of FooBean and then have its fields annotated with @ApiModelProperty. Annotation parameters:

- value – provide a synopsis of this class
- description – provide a longer description of the class
- parent – provide a superclass for the model to allow describing inheritance
- discriminator – for models with a base class, a discriminator can be provided for polymorphic use cases
- subTypes

The last thing you need to do is to annotate the class members with the @ApiModelProperty annotation to provide documentation for each class member. A simple example of this can be seen in the following class.

```java
package com.jakubstas.swagger.model;

@ApiModel
public class User {

    private String userName;

    private String firstName;

    private String surname;

    private String email;

    private byte[] avatar;

    private Date lastUpdated;

    @ApiModelProperty(position = 1, required = true, value = "username containing only lowercase letters or numbers")
    public String getUserName() {
        return userName;
    }

    public void setUserName(String userName) {
        this.userName = userName;
    }

    @ApiModelProperty(position = 2, required = true)
    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    @ApiModelProperty(position = 3, required = true)
    public String getSurname() {
        return surname;
    }

    public void setSurname(String surname) {
        this.surname = surname;
    }

    @ApiModelProperty(position = 4, required = true)
    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }

    @JsonIgnore
    public byte[] getAvatar() {
        return avatar;
    }

    public void setAvatar(byte[] avatar) {
        this.avatar = avatar;
    }

    @ApiModelProperty(position = 5, value = "timestamp of last modification")
    public Date getLastUpdated() {
        return lastUpdated;
    }

    public void setLastUpdated(Date lastUpdated) {
        this.lastUpdated = lastUpdated;
    }
}
```

If you need to provide more details about your model, check the following description of @ApiModelProperty:

@ApiModelProperty annotation

An ApiModelProperty describes a property inside a model class. The annotations can apply to a method, a property, etc., depending on how the model scanner is configured and used. Annotation parameters:

- value – provide a human readable synopsis of this property
- allowableValues – if the values that can be set are restricted, they can be set here, in the form of a comma separated list (e.g. registered, active, closed)
- access – specify an optional access value for filtering in a Filter implementation. This allows you to hide certain parameters if a user doesn't have access to them
- notes – long description of the property
- dataType – the data type. See the documentation for the supported data types. If the data type is a custom object, set its name, or nothing. In case of an enum use 'string' and allowableValues for the enum constants
- required – whether or not the property is required, defaults to false
- position – allows explicitly ordering the property in the model. Since reflection has no guarantee on ordering, you should specify property order to keep models consistent across different VM implementations and versions

If you follow these instructions carefully, you should end up with complete API documentation in JSON at the previously mentioned URL. The following is only the model-related part of the resulting JSON, now with the provided documentation:

```json
{
    ...
    "models": {
        "User": {
            "id": "User",
            "description": "",
            "required": [
                "userName",
                "firstName",
                "surname",
                "email"
            ],
            "properties": {
                "userName": {
                    "type": "string",
                    "description": "username containing only lowercase letters or numbers"
                },
                "firstName": {"type": "string"},
                "surname": {"type": "string"},
                "email": {"type": "string"},
                "lastUpdated": {
                    "type": "string",
                    "format": "date-time",
                    "description": "timestamp of last modification"
                }
            }
        }
    }
}
```

What is next?

If you followed all the steps, you should now have working API documentation that may be published or further processed by automation tools. I will showcase how to present API documentation using the Swagger UI module in my next article, called Spring Rest API with Swagger – Exposing documentation. The code used in this micro series is published on GitHub and provides examples for all discussed features and tools. Please enjoy!

Reference: Spring Rest API with Swagger – Creating documentation from our JCG partner Jakub Stas at the Jakub Stas blog.

Clean Unit Test Patterns – Presentation Slides

I was given the opportunity to talk at the GDG DevFest Karlsruhe 2014 conference about 'Clean Unit Test Patterns'. Thanks to the organizers for inviting me, and thanks to all the people listening to my talk. As promised, I am sharing the presentation, e.g. for those who want to have a look at the additional slides I did not cover during the talk:

Clean Unit Test Patterns
GDG DevFest Karlsruhe 2014 – October 25th, 2014

JUnit testing is not as trivial as it might look. If not written with care, tests can be a show-stopper with respect to maintenance and progression. Hence this session introduces the clean structure of well-written unit tests. It explains the significance of test isolation and how it can be achieved by means of various test double patterns. The topic is deepened by a brief discussion about the pros and cons of test double frameworks. The talk continues with the JUnit concepts Runners and Rules. It illustrates in which way these affect testing efficiency and readability. Descriptive examples are used to enlarge upon the subject. Finally, the presentation covers unit test assertions. It shows how custom verification patterns with Hamcrest or AssertJ can help writing clear, simple and expressive assertion statements.

Reference: Clean Unit Test Patterns – Presentation Slides from our JCG partner Frank Appel at the Code Affine blog.
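To give a flavour of the custom verification patterns mentioned in the abstract, here is a small sketch of a custom Hamcrest matcher (an illustrative example, not taken from the slides):

```java
import org.hamcrest.Description;
import org.hamcrest.TypeSafeMatcher;

// Custom matcher that lets an assertion read as a sentence:
// assertThat(daysElapsed, isDivisibleBy(7));
public class IsDivisibleBy extends TypeSafeMatcher<Integer> {

    private final int divisor;

    private IsDivisibleBy(int divisor) {
        this.divisor = divisor;
    }

    @Override
    protected boolean matchesSafely(Integer value) {
        return value % divisor == 0;
    }

    @Override
    public void describeTo(Description description) {
        // Used by Hamcrest to build a readable failure message.
        description.appendText("an integer divisible by " + divisor);
    }

    public static IsDivisibleBy isDivisibleBy(int divisor) {
        return new IsDivisibleBy(divisor);
    }
}
```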

Apache Commons IO Tutorial: A beginner’s guide

Apache Commons IO is a Java library created and maintained by the Apache Foundation. It provides a multitude of classes that enable developers to do common tasks easily and with much less boiler-plate code that needs to be written over and over again for every single project. The importance of libraries like that is huge, because they are mature and maintained by experienced developers who have thought of every possible edge case, or fixed the various bugs when they appeared.

In this example, we are going to present some methods with varying functionality, depending on the package of org.apache.commons.io that they belong to. We are not going to delve too deep inside the library, as it is enormous, but we are going to provide examples for some common usage that can definitely come in handy for every developer, beginner or not.

1. Apache Commons IO Example

The code for this example will be broken into several classes, and each of them will be representative of a particular area that Apache Commons IO covers. These areas are:

- Utility classes
- Input
- Output
- Filters
- Comparators
- File Monitor

To make things even clearer, we are going to break down the output in chunks, one for each of the classes that we have created. We have also created a directory inside the project folder (named ExampleFolder) which will contain the various files that will be used in this example to show the functionality of the various classes.

NOTE: In order to use org.apache.commons.io, you need to download the jar files (found here) and add them to the build path of your Eclipse project, by right clicking on the project folder -> Build Path -> Add external archives.

ApacheCommonsExampleMain.java

```java
public class ApacheCommonsExampleMain {

    public static void main(String[] args) {
        UtilityExample.runExample();
        FileMonitorExample.runExample();
        FiltersExample.runExample();
        InputExample.runExample();
        OutputExample.runExample();
        ComparatorExample.runExample();
    }
}
```

This is the main class that will be used to run the methods from the other classes of our example. You can comment out certain calls in order to see only the output that you want.

1.1 Utility Classes

There are various utility classes inside the package org.apache.commons.io, most of which have to do with file manipulation and String comparison. We have used some of the most important ones here:

- FilenameUtils: This class has methods that work with file names, and the main point is to make life easier in every OS (it works equally well in Unix and Windows systems).
- FileUtils: It provides methods for file manipulation (moving, opening and reading a file, checking if a file exists, etc.).
- IOCase: String manipulation and comparison methods.
- FileSystemUtils: Its methods return the free space of a designated drive.

UtilityExample.java

```java
import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileSystemUtils;
import org.apache.commons.io.FileUtils;
import org.apache.commons.io.FilenameUtils;
import org.apache.commons.io.LineIterator;
import org.apache.commons.io.IOCase;

public final class UtilityExample {

    // We are using the file exampleTxt.txt in the folder ExampleFolder,
    // and we need to provide the full path to the Utility classes.
    private static final String EXAMPLE_TXT_PATH =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\exampleTxt.txt";

    private static final String PARENT_DIR =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample";

    public static void runExample() throws IOException {
        System.out.println("Utility Classes example...");

        // FilenameUtils
        System.out.println("Full path of exampleTxt: " +
                FilenameUtils.getFullPath(EXAMPLE_TXT_PATH));
        System.out.println("Full name of exampleTxt: " +
                FilenameUtils.getName(EXAMPLE_TXT_PATH));
        System.out.println("Extension of exampleTxt: " +
                FilenameUtils.getExtension(EXAMPLE_TXT_PATH));
        System.out.println("Base name of exampleTxt: " +
                FilenameUtils.getBaseName(EXAMPLE_TXT_PATH));

        // FileUtils
        // We can create a new File object using FileUtils.getFile(String)
        // and then use this object to get information from the file.
        File exampleFile = FileUtils.getFile(EXAMPLE_TXT_PATH);
        LineIterator iter = FileUtils.lineIterator(exampleFile);

        System.out.println("Contents of exampleTxt...");
        while (iter.hasNext()) {
            System.out.println("\t" + iter.next());
        }
        iter.close();

        // We can check if a file exists somewhere inside a certain directory.
        File parent = FileUtils.getFile(PARENT_DIR);
        System.out.println("Parent directory contains exampleTxt file: " +
                FileUtils.directoryContains(parent, exampleFile));

        // IOCase
        String str1 = "This is a new String.";
        String str2 = "This is another new String, yes!";

        System.out.println("Ends with string (case sensitive): " +
                IOCase.SENSITIVE.checkEndsWith(str1, "string."));
        System.out.println("Ends with string (case insensitive): " +
                IOCase.INSENSITIVE.checkEndsWith(str1, "string."));
        System.out.println("String equality: " +
                IOCase.SENSITIVE.checkEquals(str1, str2));

        // FileSystemUtils
        System.out.println("Free disk space (in KB): " + FileSystemUtils.freeSpaceKb("C:"));
        System.out.println("Free disk space (in MB): " + FileSystemUtils.freeSpaceKb("C:") / 1024);
    }
}
```

Output

```
Utility Classes example...
Full path of exampleTxt: C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\
Full name of exampleTxt: exampleTxt.txt
Extension of exampleTxt: txt
Base name of exampleTxt: exampleTxt
Contents of exampleTxt...
	This is an example text file.
	We will use it for experimenting with Apache Commons IO.
Parent directory contains exampleTxt file: true
Ends with string (case sensitive): false
Ends with string (case insensitive): true
String equality: false
Free disk space (in KB): 32149292
Free disk space (in MB): 31395
```

1.2 File Monitor

The org.apache.commons.io.monitor package contains methods that can get specific information about a file, but more importantly, it can create handlers that can be used to track changes in a specific file or folder and take action depending on the changes.
Let's take a look at the code:

FileMonitorExample.java

```java
import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileDeleteStrategy;
import org.apache.commons.io.FileUtils;
import org.apache.commons.io.monitor.FileAlterationListenerAdaptor;
import org.apache.commons.io.monitor.FileAlterationMonitor;
import org.apache.commons.io.monitor.FileAlterationObserver;
import org.apache.commons.io.monitor.FileEntry;

public final class FileMonitorExample {

    private static final String EXAMPLE_PATH =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\exampleFileEntry.txt";

    private static final String PARENT_DIR =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder";

    private static final String NEW_DIR =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\newDir";

    private static final String NEW_FILE =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\newFile.txt";

    public static void runExample() {
        System.out.println("File Monitor example...");

        // FileEntry
        // We can monitor changes and get information about files
        // using the methods of this class.
        FileEntry entry = new FileEntry(FileUtils.getFile(EXAMPLE_PATH));

        System.out.println("File monitored: " + entry.getFile());
        System.out.println("File name: " + entry.getName());
        System.out.println("Is the file a directory?: " + entry.isDirectory());

        // File Monitoring
        // Create a new observer for the folder and add a listener
        // that will handle the events in a specific directory and take action.
        File parentDir = FileUtils.getFile(PARENT_DIR);

        FileAlterationObserver observer = new FileAlterationObserver(parentDir);
        observer.addListener(new FileAlterationListenerAdaptor() {

            @Override
            public void onFileCreate(File file) {
                System.out.println("File created: " + file.getName());
            }

            @Override
            public void onFileDelete(File file) {
                System.out.println("File deleted: " + file.getName());
            }

            @Override
            public void onDirectoryCreate(File dir) {
                System.out.println("Directory created: " + dir.getName());
            }

            @Override
            public void onDirectoryDelete(File dir) {
                System.out.println("Directory deleted: " + dir.getName());
            }
        });

        // Add a monitor that will check for events every x ms,
        // and attach all the different observers that we want.
        FileAlterationMonitor monitor = new FileAlterationMonitor(500, observer);
        try {
            monitor.start();

            // After we attached the monitor, we can create some files and directories
            // and see what happens!
            File newDir = new File(NEW_DIR);
            File newFile = new File(NEW_FILE);

            newDir.mkdirs();
            newFile.createNewFile();

            Thread.sleep(1000);

            FileDeleteStrategy.NORMAL.delete(newDir);
            FileDeleteStrategy.NORMAL.delete(newFile);

            Thread.sleep(1000);

            monitor.stop();
        } catch (IOException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

Output

```
File Monitor example...
File monitored: C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\exampleFileEntry.txt
File name: exampleFileEntry.txt
Is the file a directory?: false
Directory created: newDir
File created: newFile.txt
Directory deleted: newDir
File deleted: newFile.txt
```

Let's take a look at what happened here. We used some classes of the org.apache.commons.io.monitor package, which enable us to create handlers that listen to specific events (in our case, everything that has to do with files, folders, directories etc.).
In order to achieve that, there are certain steps that need to be taken:

1. Create a File object that is a reference to the directory we want to listen to for changes.
2. Create a FileAlterationObserver object that will observe those changes.
3. Add a FileAlterationListenerAdaptor to the observer using the addListener() method. You can create the adaptor in various ways, but in our example we used a nested class that implements only some of the methods (the ones we need for the example requirements).
4. Create a FileAlterationMonitor and add the observers that you have, as well as the interval (in ms).
5. Start the monitor using the start() method and stop it when necessary using the stop() method.

1.3 Filters

Filters can be used in a variety of combinations and ways. Their job is to allow us to easily make distinctions between files and get the ones that satisfy certain criteria. We can also combine filters to perform logical comparisons and get our files much more precisely, without tedious String comparisons afterwards.

FiltersExample.java

```java
import java.io.File;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.IOCase;
import org.apache.commons.io.filefilter.AndFileFilter;
import org.apache.commons.io.filefilter.NameFileFilter;
import org.apache.commons.io.filefilter.NotFileFilter;
import org.apache.commons.io.filefilter.OrFileFilter;
import org.apache.commons.io.filefilter.PrefixFileFilter;
import org.apache.commons.io.filefilter.SuffixFileFilter;
import org.apache.commons.io.filefilter.WildcardFileFilter;

public final class FiltersExample {

    private static final String PARENT_DIR =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder";

    public static void runExample() {
        System.out.println("File Filter example...");

        // NameFileFilter
        // Right now, in the parent directory we have 3 files:
        //     directory example
        //     file exampleEntry.txt
        //     file exampleTxt.txt

        // Get all the files in the specified directory
        // that are named "example".
        File dir = FileUtils.getFile(PARENT_DIR);
        String[] acceptedNames = {"example", "exampleTxt.txt"};
        for (String file: dir.list(new NameFileFilter(acceptedNames, IOCase.INSENSITIVE))) {
            System.out.println("File found, named: " + file);
        }

        // WildcardFileFilter
        // We can use wildcards in order to get less specific results:
        //     ? used for 1 missing char
        //     * used for multiple missing chars
        for (String file: dir.list(new WildcardFileFilter("*ample*"))) {
            System.out.println("Wildcard file found, named: " + file);
        }

        // PrefixFileFilter
        // We can also use the equivalent of startsWith
        // for filtering files.
        for (String file: dir.list(new PrefixFileFilter("example"))) {
            System.out.println("Prefix file found, named: " + file);
        }

        // SuffixFileFilter
        // We can also use the equivalent of endsWith
        // for filtering files.
        for (String file: dir.list(new SuffixFileFilter(".txt"))) {
            System.out.println("Suffix file found, named: " + file);
        }

        // OrFileFilter
        // We can use filters of filters.
        // In this case, we use a filter to apply a logical
        // OR between our filters.
        for (String file: dir.list(new OrFileFilter(
                new WildcardFileFilter("*ample*"), new SuffixFileFilter(".txt")))) {
            System.out.println("Or file found, named: " + file);
        }

        // And this can become very detailed.
        // E.g., get all the files that have "ample" in their name,
        // but are not text files (so they have no ".txt" extension).
        for (String file: dir.list(new AndFileFilter( // we will match 2 filters...
                new WildcardFileFilter("*ample*"), // ...the 1st is a wildcard...
                new NotFileFilter(new SuffixFileFilter(".txt"))))) { // ...and the 2nd is NOT .txt.
            System.out.println("And/Not file found, named: " + file);
        }
    }
}
```

Output

```
File Filter example...
File found, named: example
File found, named: exampleTxt.txt
Wildcard file found, named: example
Wildcard file found, named: exampleFileEntry.txt
Wildcard file found, named: exampleTxt.txt
Prefix file found, named: example
Prefix file found, named: exampleFileEntry.txt
Prefix file found, named: exampleTxt.txt
Suffix file found, named: exampleFileEntry.txt
Suffix file found, named: exampleTxt.txt
Or file found, named: example
Or file found, named: exampleFileEntry.txt
Or file found, named: exampleTxt.txt
And/Not file found, named: example
```

1.4 Comparators

The org.apache.commons.io.comparator package contains classes that allow us to easily compare and sort files and directories. We just need to provide a list of files and, depending on the class, compare them in various ways.

ComparatorExample.java

```java
import java.io.File;
import java.util.Date;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.IOCase;
import org.apache.commons.io.comparator.LastModifiedFileComparator;
import org.apache.commons.io.comparator.NameFileComparator;
import org.apache.commons.io.comparator.SizeFileComparator;

public final class ComparatorExample {

    private static final String PARENT_DIR =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder";

    private static final String FILE_1 =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\example";

    private static final String FILE_2 =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\ExampleFolder\\exampleTxt.txt";

    public static void runExample() {
        System.out.println("Comparator example...");

        // NameFileComparator
        // Let's get a directory as a File object
        // and sort all its files.
        File parentDir = FileUtils.getFile(PARENT_DIR);
        NameFileComparator comparator = new NameFileComparator(IOCase.SENSITIVE);
        File[] sortedFiles = comparator.sort(parentDir.listFiles());

        System.out.println("Sorted by name files in parent directory: ");
        for (File file: sortedFiles) {
            System.out.println("\t" + file.getAbsolutePath());
        }

        // SizeFileComparator
        // We can compare files based on their size.
        // The boolean in the constructor is about the directories:
        //     true: directory's contents count to the size.
        //     false: directory is considered zero size.
        SizeFileComparator sizeComparator = new SizeFileComparator(true);
        File[] sizeFiles = sizeComparator.sort(parentDir.listFiles());

        System.out.println("Sorted by size files in parent directory: ");
        for (File file: sizeFiles) {
            System.out.println("\t" + file.getName() + " with size (kb): " + file.length());
        }

        // LastModifiedFileComparator
        // We can use this class to find which file was more recently modified.
        LastModifiedFileComparator lastModified = new LastModifiedFileComparator();
        File[] lastModifiedFiles = lastModified.sort(parentDir.listFiles());

        System.out.println("Sorted by last modified files in parent directory: ");
        for (File file: lastModifiedFiles) {
            Date modified = new Date(file.lastModified());
            System.out.println("\t" + file.getName() + " last modified on: " + modified);
        }

        // Or, we can also compare 2 specific files and find which one was last modified.
        //     returns > 0 if the first file was last modified
        //     returns < 0 if the second file was last modified
        File file1 = FileUtils.getFile(FILE_1);
        File file2 = FileUtils.getFile(FILE_2);

        if (lastModified.compare(file1, file2) > 0)
            System.out.println("File " + file1.getName() + " was modified last because...");
        else
            System.out.println("File " + file2.getName() + " was modified last because...");

        System.out.println("\t" + file1.getName() + " last modified on: " +
                new Date(file1.lastModified()));
        System.out.println("\t" + file2.getName() + " last modified on: " +
                new Date(file2.lastModified()));
    }
}
```

Output

```
Comparator example...
Sorted by name files in parent directory: 
	C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\comparator1.txt
	C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\comperator2.txt
	C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\example
	C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\exampleFileEntry.txt
	C:\Users\Lilykos\workspace\ApacheCommonsExample\ExampleFolder\exampleTxt.txt
Sorted by size files in parent directory: 
	example with size (kb): 0
	exampleTxt.txt with size (kb): 87
	exampleFileEntry.txt with size (kb): 503
	comperator2.txt with size (kb): 1458
	comparator1.txt with size (kb): 4436
Sorted by last modified files in parent directory: 
	exampleTxt.txt last modified on: Sun Oct 26 14:02:22 EET 2014
	example last modified on: Sun Oct 26 23:42:55 EET 2014
	comparator1.txt last modified on: Tue Oct 28 14:48:28 EET 2014
	comperator2.txt last modified on: Tue Oct 28 14:48:52 EET 2014
	exampleFileEntry.txt last modified on: Tue Oct 28 14:53:50 EET 2014
File example was modified last because...
	example last modified on: Sun Oct 26 23:42:55 EET 2014
	exampleTxt.txt last modified on: Sun Oct 26 14:02:22 EET 2014
```

Let's see which classes were used here:

- NameFileComparator: compares files according to their name.
- SizeFileComparator: compares files according to their size.
- LastModifiedFileComparator: compares files according to the date they were last modified.

You should also take notice here that the comparisons can happen either on whole directories (where the files are sorted using the sort() method), or separately for 2 specific files (using compare()).

1.5 Input

There are various implementations of InputStream in the org.apache.commons.io.input package. We are going to examine one of the most useful, TeeInputStream, which takes as arguments both an InputStream and an OutputStream, and automatically copies the read bytes from the input to the output. Moreover, by using a third, boolean argument, closing just the TeeInputStream in the end closes the two other streams as well.

InputExample.java

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileUtils;
import org.apache.commons.io.input.TeeInputStream;
import org.apache.commons.io.input.XmlStreamReader;

public final class InputExample {

    private static final String XML_PATH =
            "C:\\Users\\Lilykos\\workspace\\ApacheCommonsExample\\InputOutputExampleFolder\\web.xml";

    private static final String INPUT = "This should go to the output.";

    public static void runExample() {
        System.out.println("Input example...");
        XmlStreamReader xmlReader = null;
        TeeInputStream tee = null;

        try {
            // XmlStreamReader
            // We can read an xml file and get its encoding.
            File xml = FileUtils.getFile(XML_PATH);

            xmlReader = new XmlStreamReader(xml);
            System.out.println("XML encoding: " + xmlReader.getEncoding());

            // TeeInputStream
            // This very useful class copies an input stream to an output stream
            // and closes both using only one close() method (by defining the 3rd
            // constructor parameter as true).
            ByteArrayInputStream in = new ByteArrayInputStream(INPUT.getBytes("US-ASCII"));
            ByteArrayOutputStream out = new ByteArrayOutputStream();

            tee = new TeeInputStream(in, out, true);
            tee.read(new byte[INPUT.length()]);

            System.out.println("Output stream: " + out.toString());
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                xmlReader.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
            try {
                tee.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
```

Output

```
Input example...
XML encoding: UTF-8
Output stream: This should go to the output.
```

1.6 Output

Similar to org.apache.commons.io.input, org.apache.commons.io.output has implementations of OutputStream that can be used in many situations. A very interesting one is TeeOutputStream, which allows an output stream to be branched; in other words, we can send an input stream to 2 different outputs.

OutputExample.java

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.commons.io.input.TeeInputStream;
import org.apache.commons.io.output.TeeOutputStream;

public final class OutputExample {

    private static final String INPUT = "This should go to the output.";

    public static void runExample() {
        System.out.println("Output example...");
        TeeInputStream teeIn = null;
        TeeOutputStream teeOut = null;

        try {
            // TeeOutputStream
            ByteArrayInputStream in = new ByteArrayInputStream(INPUT.getBytes("US-ASCII"));
            ByteArrayOutputStream out1 = new ByteArrayOutputStream();
            ByteArrayOutputStream out2 = new ByteArrayOutputStream();

            teeOut = new TeeOutputStream(out1, out2);
            teeIn = new TeeInputStream(in, teeOut, true);
            teeIn.read(new byte[INPUT.length()]);

            System.out.println("Output stream 1: " + out1.toString());
            System.out.println("Output stream 2: " + out2.toString());
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            // No need to close teeOut. When teeIn closes, it will also close its
            // output stream (which is teeOut), which will in turn close the 2
            // branches (out1, out2).
            try {
                teeIn.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
```

Output

```
Output example...
Output stream 1: This should go to the output.
Output stream 2: This should go to the output.
```

2. Download the Complete Example

This was an introduction to Apache Commons IO, covering most of the important classes that provide easy solutions to developers. There are many other capabilities in this vast package, but using this intro you get the general idea and a handful of useful tools for your future projects!

Download

You can download the full source code of this example here: ApacheCommonsIOExample.rar
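One more flavour of the boiler-plate reduction the library aims for: FileUtils also offers one-line read and write helpers. A brief sketch (the file path is illustrative; readFileToString and writeStringToFile belong to the same FileUtils class used throughout this example):

```java
import java.io.File;
import java.io.IOException;

import org.apache.commons.io.FileUtils;

public final class ReadWriteExample {

    public static void main(String[] args) throws IOException {
        // Write a String to a file and read it back, each in a single call,
        // without manually opening and closing any streams.
        File file = new File("notes.txt"); // illustrative path
        FileUtils.writeStringToFile(file, "Apache Commons IO saves boiler-plate.");

        String contents = FileUtils.readFileToString(file);
        System.out.println(contents);
    }
}
```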

ZooKeeper on Kubernetes

The last couple of weeks I've been playing around with docker and kubernetes. If you are not familiar with kubernetes, let's just say for now that it's an open source container cluster management implementation, which I find really, really awesome. One of the first things I wanted to try out was running an Apache ZooKeeper ensemble inside kubernetes, and I thought that it would be nice to share the experience. For my experiments I used Docker v. 1.3.0 and OpenShift V3, which I built from source and which includes Kubernetes.

ZooKeeper on Docker

Managing a ZooKeeper ensemble is definitely not a trivial task. You usually need to configure an odd number of servers, and all of the servers need to be aware of each other. This is a PITA on its own, but it gets even more painful when you are working with something as static as docker images. The main difficulty could be expressed as: "How can you create multiple containers out of the same image and have them point to each other?"

One approach would be to use docker volumes and provide the configuration externally. This would mean that you create the configuration for each container, store it somewhere in the docker host, and then pass the configuration to each container as a volume at creation time. I've never tried that myself, so I can't tell if it's a good or bad practice; I can see some benefits, but I can also see that this is something I am not really excited about. It could look like this:

```
docker run -p 2181:2181 -v /path/to/my/conf:/opt/zookeeper/conf my/zookeeper
```

Another approach would be to pass all the required information as environment variables to the container at creation time, and then create a wrapper script which will read the environment variables, modify the configuration files accordingly, and launch zookeeper. This is definitely easier to use, but it's not that flexible for performing other types of tuning without rebuilding the image itself. Last but not least, one could combine the two approaches into one and do something like:

- Make it possible to provide the base configuration externally using volumes.
- Use env and scripting to just configure the ensemble.

There are plenty of images out there that take one or the other approach. I am more fond of the environment variables approach, and since I needed something that would follow some of the kubernetes conventions in terms of naming, I decided to hack an image of my own using the env variables way.

Creating a custom image for ZooKeeper

I will just focus on the configuration that is required for the ensemble. In order to configure a ZooKeeper ensemble, one has to assign a numeric id to each server and then add to its configuration an entry per zookeeper server, containing the ip of the server, the peer port of the server and the election port. The server id is added in a file called myid under the dataDir. The rest of the configuration looks like:

```
server.1=server1.example.com:2888:3888
server.2=server2.example.com:2888:3888
server.3=server3.example.com:2888:3888
...
server.current=[bind address]:[peer binding port]:[election binding port]
```

Note that if the server id is X, the server.X entry needs to contain the bind ip and ports, not the connection ip and ports. So what we actually need to pass to the container as environment variables is the following:

- The server id.
- For each server in the ensemble:
  - The hostname or ip
  - The peer port
  - The election port

If these are set, then the script that updates the configuration could look like:

```sh
if [ ! -z "$SERVER_ID" ]; then
  echo "$SERVER_ID" > /opt/zookeeper/data/myid
  # Find the servers exposed in env.
  for i in `echo {1..15}`; do
    HOST=`envValue ZK_PEER_${i}_SERVICE_HOST`
    PEER=`envValue ZK_PEER_${i}_SERVICE_PORT`
    ELECTION=`envValue ZK_ELECTION_${i}_SERVICE_PORT`

    if [ "$SERVER_ID" = "$i" ]; then
      echo "server.$i=0.0.0.0:2888:3888" >> conf/zoo.cfg
    elif [ -z "$HOST" ] || [ -z "$PEER" ] || [ -z "$ELECTION" ]; then
      # if a server is not fully defined stop the loop here.
      break
    else
      echo "server.$i=$HOST:$PEER:$ELECTION" >> conf/zoo.cfg
    fi
  done
fi
```

For simplicity, the function that reads the keys and values from env is excluded. The complete image and helper scripts to launch zookeeper ensembles of variable size can be found in the fabric8io repository.

ZooKeeper on Kubernetes

The docker image above can be used directly with docker, provided that you take care of the environment variables. Now I am going to describe how this image can be used with kubernetes. But first a little rambling...

What I really like about using kubernetes with ZooKeeper is that kubernetes will recreate the container if it dies or the health check fails. For ZooKeeper this also means that if a container that hosts an ensemble server dies, it will get replaced by a new one. This guarantees that there will constantly be a quorum of ZooKeeper servers. I also like that you don't need to worry about the connection string that the clients will use if containers come and go. You can use kubernetes services to load balance across all the available servers, and you can even expose that outside of kubernetes.

Creating a Kubernetes config for ZooKeeper

I'll try to explain how you can create a 3 ZooKeeper Server Ensemble in Kubernetes. What we need is 3 docker containers all running ZooKeeper with the right environment variables:

```json
{
    "image": "fabric8/zookeeper",
    "name": "zookeeper-server-1",
    "env": [
        {
            "name": "ZK_SERVER_ID",
            "value": "1"
        }
    ],
    "ports": [
        {
            "name": "zookeeper-client-port",
            "containerPort": 2181,
            "protocol": "TCP"
        },
        {
            "name": "zookeeper-peer-port",
            "containerPort": 2888,
            "protocol": "TCP"
        },
        {
            "name": "zookeeper-election-port",
            "containerPort": 3888,
            "protocol": "TCP"
        }
    ]
}
```

The env needs to specify all the parameters discussed previously. So along with ZK_SERVER_ID, we need to add the following:

- ZK_PEER_1_SERVICE_HOST
- ZK_PEER_1_SERVICE_PORT
- ZK_ELECTION_1_SERVICE_PORT
- ZK_PEER_2_SERVICE_HOST
- ZK_PEER_2_SERVICE_PORT
- ZK_ELECTION_2_SERVICE_PORT
- ZK_PEER_3_SERVICE_HOST
- ZK_PEER_3_SERVICE_PORT
- ZK_ELECTION_3_SERVICE_PORT

An alternative approach could be, instead of adding all this manual configuration, to expose the peer and election ports as kubernetes services. I tend to favor the latter approach, as it can make things simpler when working with multiple hosts. It's also a nice exercise for learning kubernetes. So how do we configure those services?

To configure them we need to know:

- the name of the port
- the kubernetes pod that provides the service

The name of the port is already defined in the previous snippet. So we just need to find out how to select the pod. For this use case, it makes sense to have a different pod for each zookeeper server container. So we just need a label for each pod that designates that it's a zookeeper server pod, and also a label that designates the zookeeper server id:

```json
"labels": {
    "name": "zookeeper-pod",
    "server": 1
}
```

Something like the above could work. Now we are ready to define the service. I will just show how we can expose the peer port of the server with id 1 as a service.
The rest can be done in a similar fashion:

{
  "apiVersion": "v1beta1",
  "creationTimestamp": null,
  "id": "zk-peer-1",
  "kind": "Service",
  "port": 2888,
  "containerPort": "zookeeper-peer-port",
  "selector": {
    "name": "zookeeper-pod",
    "server": 1
  }
}

The basic idea is that in the service definition you create a selector which can be used to query/filter pods. Then you define the name of the port to expose, and this is pretty much it. Just to clarify, we need a service definition just like the one above per ZooKeeper server container. And of course we need to do the same for the election port. Finally, we can define another kind of service for the client connection port. This time we are not going to specify the server id in the selector, which means that all 3 servers will be selected. In this case Kubernetes will load balance across all ZooKeeper servers, and since ZooKeeper provides a single system image (it doesn't matter to which server you are connected), this is pretty handy.

{
  "apiVersion": "v1beta1",
  "creationTimestamp": null,
  "id": "zk-client",
  "kind": "Service",
  "port": 2181,
  "createExternalLoadBalancer": "true",
  "containerPort": "zookeeper-client-port",
  "selector": {
    "name": "zookeeper-pod"
  }
}
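Not part of the original post: to show what clients gain from the zk-client service defined above, here is a minimal Java sketch of a standard ZooKeeper client connecting through that single service address. The hostname is a placeholder for whatever address Kubernetes gives the service in your environment:

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class ZkClientDemo {

    public static void main(String[] args) throws Exception {
        // Connect via the zk-client service address (hypothetical host shown here).
        // Kubernetes load balances this connection across the whole ensemble,
        // so clients never need to know the individual server addresses.
        ZooKeeper zk = new ZooKeeper("zk-client.example.local:2181", 5000, event -> {});

        // Since ZooKeeper is a single system image, it does not matter
        // which ensemble member actually serves this request.
        List<String> children = zk.getChildren("/", false);
        System.out.println("Root znodes: " + children);

        zk.close();
    }
}

I hope you found it useful. There is definitely room for improvement, so feel free to leave comments.

Reference: ZooKeeper on Kubernetes from our JCG partner Ioannis Canellos at the Ioannis Canellos blog....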

Chronicle Map and Yahoo Cloud Service Benchmark

Overview

Yahoo Cloud Service Benchmark is a reasonably widely used benchmarking tool for testing key-value stores with a significant number of keys, e.g. 100 million, and a modest number of clients, i.e. served from one machine. In this article I look at how a test of 100 million * 1 KB key/values performed using Chronicle Map on a single machine with 128 GB memory, dual Intel E5-2650 v2 @ 2.60GHz, and six Samsung 840 EVO SSDs. The 1 KB value consists of ten fields of 100-byte Strings. For a more optimal solution, primitive numbers would be a better choice. While the SSDs helped, the peak transfer rate was 700 MB/s, which could be supported by two SATA SSD drives. These benchmarks were performed using the latest version at the time of the report, Chronicle Map 2.0.5a-SNAPSHOT.
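For context (this snippet is not from the original report), a persisted map of the kind being benchmarked might be created along the following lines, assuming the Chronicle Map 2.x ChronicleMapBuilder API; the file location and the CharSequence key/value types are assumptions for illustration:

import java.io.File;
import java.io.IOException;
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;

public class ChronicleMapSetup {

    public static void main(String[] args) throws IOException {
        // Off-heap map persisted to an SSD-backed file, sized for 100 million entries.
        ChronicleMap<CharSequence, CharSequence> map = ChronicleMapBuilder
                .of(CharSequence.class, CharSequence.class)
                .entries(100_000_000L)
                .createPersistedTo(new File("/ssd/ycsb-test.dat")); // hypothetical path

        // Reads and writes go straight to shared, memory-mapped storage.
        map.put("user:1", "ten fields of 100-byte strings would go here...");
        System.out.println(map.get("user:1"));

        map.close();
    }
}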
Micro-second world

Something which confounds me when reading benchmarks about key-value stores is that they start with the premise that performance is really important. IMHO, about 90% of the time performance is not the most important feature, provided you have sufficient performance. These benchmark reports then continue to report times in milli-seconds, not micro-seconds, and throughputs in the tens of thousands instead of the hundreds of thousands or millions. If performance really was that important, they would have built their products around performance, instead of the useful features they do support, like multi-key transactionality, quorum updates and other features Chronicle Map doesn't support, for performance reasons. So how would a key-value store built for performance look with YCSB?

Throughput measures

The "50/50" test is 50% random reads and 50% random writes; the "95/5" test is 95% reads to 5% writes. It is expected that writes will be more expensive, and a higher percentage of reads results in higher throughputs.

Threads | 50/50 read/update | 95/5 read/update
1       | 122 K/s           | 245 K/s
2       | 235 K/s           | 414 K/s
4       | 339 K/s           | 750 K/s
8       | 646 K/s           | 1.295 M/s
15      | 819 K/s           | 1.452 M/s
30      | 900 K/s           | 1.641 M/s

Latencies

The following latencies are in micro-seconds, not milli-seconds.

Threads: 8 | 50/50 read | 95/5 read | 50/50 update | 95/5 update
average    | 5 µs       | 3.9 µs    | 15.9 µs      | 11.3 µs
95th       | 12 µs      | 8 µs      | 31 µs        | 19 µs
99th       | 19 µs      | 14 µs     | 42 µs        | 27 µs
worst      | 67 ms      | 70 ms     | 67 ms        | 70 ms

Note: the benchmark is not designed to be GC-free and creates some garbage. This is not particularly high, and the benchmark itself uses only about 1/4 of CPU according to Flight Recorder, however it does impact the worst latencies.

Conclusion

Make sure the key-value store has the features you need, but if performance is critical, look for a solution designed for performance, as this can be 100x faster than full-featured products.

Other high performance examples

- Aerospike benchmark – Single-server benchmark with over 1 M TPS, sub-micro-second latencies. Uses smaller 100-byte records.
- NuoDB benchmark – Supports transactions across a quorum. 24 nodes for 1 M TPS.
- Oracle NoSQL benchmark – A couple of years old, uses a lot of threads, otherwise a good result.
- VoltDB benchmark – Not tested to 1 M TPS, but promising. Latencies around 1-2 ms; the report has 99th percentile latencies, which others don't include.

Room for improvement

- MongoDB driver benchmark – Has 1000s of micro-seconds instead of milli-seconds.
- Cassandra, HBase, Redis – Shows you can get 1 million TPS if you use enough servers: 288 nodes for 1 M TPS.
- Report including Elasticsearch – Report includes runtime in a "resource Austere Environment".
- HyperDex – Covers throughput only.
- WhiteDB – Reports latencies in micro-seconds for 170 K records, and modest throughputs.
- Benchmark including Aerospike – Reports

Footnote

Using smaller values helps, and we suggest trying to make values closer to 100 bytes. This is the result of the 95/5 workload B, using 10×10-byte fields and 50 M entries, as the Aerospike benchmark does.

[OVERALL], RunTime(ms), 29,542
[OVERALL], Throughput(ops/sec), 3,385,011
[READ], Operations, 94998832
[READ], AverageLatency(us), 1.88
[READ], MinLatency(us), 0
[READ], MaxLatency(us), 50201
[READ], 95thPercentileLatency(ms), 0.004
[READ], 99thPercentileLatency(ms), 0.006
[READ], Return=0, 48768825
[READ], Return=1, 46230007
[UPDATE], Operations, 5001168
[UPDATE], AverageLatency(us), 8.04
[UPDATE], MinLatency(us), 0
[UPDATE], MaxLatency(us), 50226
[UPDATE], 95thPercentileLatency(ms), 0.012
[UPDATE], 99thPercentileLatency(ms), 0.018
[UPDATE], Return=0, 5001168

Reference: Chronicle Map and Yahoo Cloud Service Benchmark from our JCG partner Peter Lawrey at the Vanilla Java blog....

Spring Boot Actuator: custom endpoint with MVC layer on top of it

Spring Boot Actuator endpoints allow you to monitor and interact with your application. Spring Boot includes a number of built-in endpoints, and you can also add your own. Adding custom endpoints is as easy as creating a class that extends from org.springframework.boot.actuate.endpoint.AbstractEndpoint. But Spring Boot Actuator also offers the possibility to decorate endpoints with an MVC layer.

Endpoints endpoint

There are many built-in endpoints, but one that is missing is an endpoint to expose all endpoints. By default endpoints are exposed via HTTP, where the ID of an endpoint is mapped to a URL. In the below example, a new endpoint with the ID endpoints is created, and its invoke method returns all available endpoints:

@Component
public class EndpointsEndpoint extends AbstractEndpoint<List<Endpoint>> {

    private List<Endpoint> endpoints;

    @Autowired
    public EndpointsEndpoint(List<Endpoint> endpoints) {
        super("endpoints");
        this.endpoints = endpoints;
    }

    @Override
    public List<Endpoint> invoke() {
        return endpoints;
    }
}

The @Component annotation adds the endpoint to the list of existing endpoints. The /endpoints URL will now expose all endpoints with their id, enabled and sensitive properties:

[
  {
    "id": "trace",
    "sensitive": true,
    "enabled": true
  },
  {
    "id": "configprops",
    "sensitive": true,
    "enabled": true
  }
]

The new endpoint will also be registered with the JMX server as an MBean:

[org.springframework.boot:type=Endpoint,name=endpointsEndpoint]
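As a quick aside (not in the original article), the JMX registration can be verified from plain Java inside the same JVM as the application; this sketch simply lists every MBean registered under Spring Boot's Endpoint type:

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class EndpointMBeanLister {

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Match all Actuator endpoint MBeans, including the custom endpointsEndpoint.
        Set<ObjectName> names = server.queryNames(
                new ObjectName("org.springframework.boot:type=Endpoint,*"), null);

        names.forEach(System.out::println);
    }
}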
MVC Endpoint

Spring Boot Actuator offers an additional feature, which is a strategy for the MVC layer on top of an Endpoint, through the org.springframework.boot.actuate.endpoint.mvc.MvcEndpoint interface. The MvcEndpoint can use @RequestMapping and other Spring MVC features. Please note that EndpointsEndpoint returns all available endpoints, but it would be nice if the user could filter endpoints by their enabled and sensitive properties. In order to do so, a new MvcEndpoint must be created with a valid @RequestMapping method. Please note that using @Controller and @RequestMapping on the class level is not allowed, therefore @Component was used to make the endpoint available:

@Component
public class EndpointsMvcEndpoint extends EndpointMvcAdapter {

    private final EndpointsEndpoint delegate;

    @Autowired
    public EndpointsMvcEndpoint(EndpointsEndpoint delegate) {
        super(delegate);
        this.delegate = delegate;
    }

    @RequestMapping(value = "/filter", method = RequestMethod.GET)
    @ResponseBody
    public Set<Endpoint> filter(@RequestParam(required = false) Boolean enabled,
                                @RequestParam(required = false) Boolean sensitive) {
        // implementation shown below
    }
}

The new method will be available under the /endpoints/filter URL. The implementation of this method is simple: it gets optional enabled and sensitive parameters and filters the delegate's invoke method result:

@RequestMapping(value = "/filter", method = RequestMethod.GET)
@ResponseBody
public Set<Endpoint> filter(@RequestParam(required = false) Boolean enabled,
                            @RequestParam(required = false) Boolean sensitive) {

    Predicate<Endpoint> isEnabled =
            endpoint -> matches(endpoint::isEnabled, ofNullable(enabled));

    Predicate<Endpoint> isSensitive =
            endpoint -> matches(endpoint::isSensitive, ofNullable(sensitive));

    return this.delegate.invoke().stream()
            .filter(isEnabled.and(isSensitive))
            .collect(toSet());
}

private <T> boolean matches(Supplier<T> supplier, Optional<T> value) {
    return !value.isPresent() || supplier.get().equals(value.get());
}

Usage examples:

- All enabled endpoints: /endpoints/filter?enabled=true
- All sensitive endpoints: /endpoints/filter?sensitive=true
- All enabled and sensitive endpoints: /endpoints/filter?enabled=true&sensitive=true

Make endpoints discoverable

EndpointsMvcEndpoint utilizes MVC capabilities, but still returns plain endpoint objects. In case Spring HATEOAS is on the classpath, the filter method could be extended to return org.springframework.hateoas.Resource objects with links to endpoints:

class EndpointResource extends ResourceSupport {

    private final String managementContextPath;
    private final Endpoint endpoint;

    EndpointResource(String managementContextPath, Endpoint endpoint) {
        this.managementContextPath = managementContextPath;
        this.endpoint = endpoint;

        if (endpoint.isEnabled()) {
            UriComponentsBuilder path = fromCurrentServletMapping()
                    .path(this.managementContextPath)
                    .pathSegment(endpoint.getId());

            this.add(new Link(path.build().toUriString(), endpoint.getId()));
        }
    }

    public Endpoint getEndpoint() {
        return endpoint;
    }
}

The EndpointResource will contain a link to each enabled endpoint. Note that the constructor takes a managementContextPath variable, which contains the value of the Spring Boot Actuator management.contextPath property, used to set a prefix for the management endpoints. The changes required in the EndpointsMvcEndpoint class:

@Component
public class EndpointsMvcEndpoint extends EndpointMvcAdapter {

    @Value("${management.context-path:/}") // defaults to '/'
    private String managementContextPath;

    @RequestMapping(value = "/filter", method = RequestMethod.GET)
    @ResponseBody
    public Set<EndpointResource> filter(@RequestParam(required = false) Boolean enabled,
                                        @RequestParam(required = false) Boolean sensitive) {

        // predicates declarations

        return this.delegate.invoke().stream()
                .filter(isEnabled.and(isSensitive))
                .map(e -> new EndpointResource(managementContextPath, e))
                .collect(toSet());
    }
}

The result can be viewed in a browser, e.g. Chrome with the JSON Formatter extension installed. But why not return the resource directly from EndpointsEndpoint? Because EndpointResource uses a UriComponentsBuilder that extracts information from the current HttpServletRequest, and this will throw an exception when the MBean's getData operation is called (unless JMX access is not desired).
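Not from the original article: a quick way to exercise the filter endpoint from client code is Spring's RestTemplate; the localhost address and root management context below are assumptions about a local run:

import org.springframework.web.client.RestTemplate;

public class FilterEndpointClient {

    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();

        // Hypothetical local setup: the application listens on localhost:8080
        // and the management endpoints sit under the root context path.
        String json = restTemplate.getForObject(
                "http://localhost:8080/endpoints/filter?enabled={enabled}&sensitive={sensitive}",
                String.class, true, false);

        System.out.println(json); // the enabled, non-sensitive endpoints as JSON
    }
}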
Manage endpoint state

Endpoints can be used not only for monitoring, but also for management. There is already a built-in ShutdownEndpoint (disabled by default) that allows shutting down the ApplicationContext. In the below (hypothetical) example, the user can change the state of the selected endpoint:

@RequestMapping(value = "/{endpointId}/state")
@ResponseBody
public EndpointResource enable(@PathVariable String endpointId) {
    Optional<Endpoint> endpointOptional = this.delegate.invoke().stream()
            .filter(e -> e.getId().equals(endpointId))
            .findFirst();
    if (!endpointOptional.isPresent()) {
        throw new RuntimeException("Endpoint not found: " + endpointId);
    }

    Endpoint endpoint = endpointOptional.get();
    ((AbstractEndpoint) endpoint).setEnabled(!endpoint.isEnabled());

    return new EndpointResource(managementContextPath, endpoint);
}

When calling a disabled endpoint, the user should receive the following response:

{
  "message": "This endpoint is disabled"
}

Going further

The next step could be adding a user interface for custom (or existing) endpoints, but that is not in the scope of this article. If you are interested, you may have a look at Spring Boot Admin, which is a simple admin interface for Spring Boot applications.

Summary

Spring Boot Actuator provides all of Spring Boot's production-ready features with a number of built-in endpoints. With minimal effort, custom endpoints can be added to extend the monitoring and management capabilities of the application.

References

- http://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#production-ready

Reference: Spring Boot Actuator: custom endpoint with MVC layer on top of it from our JCG partner Rafal Borowiec at the Codeleak.pl blog....