
Securing WebSockets using Username/Password and Servlet Security

RFC 6455 provides a complete list of security considerations for WebSockets. Some are baked into the protocol itself, and others need more explanation on how they can be achieved on a particular server. Let's talk about some of the security built into the protocol itself:

The Origin header in an HTTP request includes only the information required to identify the principal (web page, JavaScript, or any other client) that initiated the request (typically the scheme, host, and port of the initiating origin). For WebSockets, this header field is included in the client's opening handshake and informs the server of the script origin generating the WebSocket connection request. The server may then decide to accept or reject the handshake request accordingly. This allows the server to protect against unauthorized cross-origin use of a WebSocket server by scripts using the WebSocket API in a browser. For example, if the Java EE 7 WebSocket Chat sample is deployed to WildFly and accessed at localhost:8080/chat/, then the Origin header is "http://localhost:8080". Non-browser clients may use the Origin header to specify the origin of the request; WebSocket servers should be careful about accepting such requests.

The WebSocket opening handshake from the client must include the Sec-WebSocket-Key and Sec-WebSocket-Version HTTP header fields. XMLHttpRequest can be used to make HTTP requests, and allows headers to be set as part of that request:

xhr.onreadystatechange = function () {
    if (xhr.readyState == 4 && xhr.status == 200) {
        document.getElementById("myDiv").innerHTML = xhr.responseText;
    }
}
xhr.open("GET", "http://localhost:8080", true);
xhr.setRequestHeader("foo", "bar");
xhr.setRequestHeader("Sec-WebSocket-Key", "myKey");
xhr.send();

If XMLHttpRequest tries to set any header field starting with Sec-, it is ignored.
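The Origin check described above can be factored into a small helper. In a Java EE 7 endpoint this logic would typically live in an override of javax.websocket.server.ServerEndpointConfig.Configurator's checkOrigin(String) method; the sketch below keeps it as a plain method with a hypothetical whitelist so the policy is easy to see:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class OriginCheck {

    // Hypothetical whitelist of origins this server trusts
    private static final Set<String> ALLOWED_ORIGINS =
            new HashSet<>(Arrays.asList("http://localhost:8080"));

    // In a real endpoint, this logic would be placed in an override of
    // ServerEndpointConfig.Configurator#checkOrigin(String originHeaderValue)
    public static boolean checkOrigin(String originHeaderValue) {
        // A missing Origin header usually means a non-browser client;
        // rejecting it here is a policy decision, not a protocol requirement.
        return originHeaderValue != null
                && ALLOWED_ORIGINS.contains(originHeaderValue);
    }

    public static void main(String[] args) {
        System.out.println(checkOrigin("http://localhost:8080"));   // true
        System.out.println(checkOrigin("http://evil.example.com")); // false
    }
}
```

Returning false from checkOrigin makes the container fail the handshake, which is exactly the cross-origin protection the protocol intends.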
So a malicious user cannot simulate a WebSocket connection to a server by using HTML and JavaScript APIs.

In addition to these two primary mechanisms, WebSockets can be secured using the client authentication mechanisms available to any HTTP server. This Tech Tip will show how to authenticate Java EE 7 WebSockets deployed on WildFly. Let's get started!

Clone the Java EE 7 Samples workspace:

git clone https://github.com/javaee-samples/javaee7-samples.git

The "websocket/endpoint-security" sample shows how client authentication can be done before the WebSocket handshake is initiated from the client. This is triggered by including the following deployment descriptor:

<security-constraint>
    <web-resource-collection>
        <web-resource-name>WebSocket Endpoint</web-resource-name>
        <url-pattern>/*</url-pattern>
        <http-method>GET</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>g1</role-name>
    </auth-constraint>
</security-constraint>
<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>file</realm-name>
</login-config>
<security-role>
    <role-name>g1</role-name>
</security-role>

Some key points to understand about this descriptor:

<url-pattern> indicates that any request made to this application will be prompted for authentication
<auth-constraint> defines the security role that can access this resource
<login-config> shows that a file-based realm is used with basic authentication
<security-role> defines the security roles referenced by this application

In our particular case, the page that creates the WebSocket connection is protected by basic authentication. Download WildFly 8.1, unzip it, and add a new user by invoking the following script:

./bin/add-user.sh -a -u u1 -p p1 -g g1

This adds user "u1" with password "p1" in group "g1". The group specified here needs to match the one defined in <role-name> in the deployment descriptor.
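When the browser prompts for "u1"/"p1", it resends the request with a Basic Authorization header. As a rough sketch (the class and method names here are illustrative, not part of the sample), this is how such a header value is built from credentials embedded in a ws:// URL:

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class WsBasicAuth {

    // Extracts "user:pass" from a URL like ws://u1:p1@localhost:8080/endpoint-security/websocket
    // and encodes it as an HTTP Basic Authorization header value.
    public static String authorizationHeader(String wsUrl) {
        String userInfo = URI.create(wsUrl).getUserInfo(); // e.g. "u1:p1"
        if (userInfo == null) {
            throw new IllegalArgumentException("URL carries no credentials");
        }
        return "Basic " + Base64.getEncoder()
                .encodeToString(userInfo.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // "u1:p1" encodes to "dTE6cDE="
        System.out.println(authorizationHeader("ws://u1:p1@localhost:8080/endpoint-security/websocket"));
    }
}
```

A non-browser WebSocket client can set this header itself on the opening handshake and thus authenticate without any dialog.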
Deploy the sample with the command:

mvn wildfly:deploy

Now when the application is accessed at localhost:8080/endpoint-security, a security dialog box pops up as shown. Enter "u1" as the username and "p1" as the password to authenticate. These credentials are defined in the group "g1", which is referenced in the deployment descriptor. Any other credentials will keep bringing the dialog back. As soon as the request is successfully authenticated, the WebSocket connection is established and a message is shown in the browser.

If you are interested in securing only the WebSocket URL, then change the URL pattern from:

/*

to:

/websocket

In websocket.js, change the URL used to create the WebSocket endpoint from:

var wsUri = "ws://" + document.location.host + document.location.pathname + "websocket";

to:

var wsUri = "ws://u1:p1@" + document.location.host + document.location.pathname + "websocket";

Note how the credentials are passed in the URL itself. As of Google Chrome 38.0.2125.104, a browser popup does not appear if only the WebSocket URL requires authentication. The next Tech Tip will explain how to secure WebSockets using the wss:// protocol.

Reference: Securing WebSockets using Username/Password and Servlet Security from our JCG partner Arun Gupta at the Miles to go 2.0 … blog.

Java EE 7 / JAX-RS 2.0: Simple REST API Authentication & Authorization with Custom HTTP Header

REST has brought a lot of convenience to implementing web services, with the already available HTTP protocol at its disposal. By just firing GET, POST, and other HTTP methods through the designated URL, you're sure to get something done through a response from a REST service. But whatever convenience REST has given to developers, the subject of security and access control should always be addressed. This article will show you how to implement simple user-based authentication with the use of HTTP headers and JAX-RS 2.0 interceptors.

Authenticator

Let's begin with an authenticator class. The DemoAuthenticator below provides the necessary methods for authenticating any user that requests access to the REST web service. Please read through the code; the comments are there to guide your understanding.

Code for DemoAuthenticator:

package com.developerscrappad.business;

import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
import java.security.GeneralSecurityException;
import javax.security.auth.login.LoginException;

public final class DemoAuthenticator {

    private static DemoAuthenticator authenticator = null;

    // A user storage which stores <username, password>
    private final Map<String, String> usersStorage = new HashMap();

    // A service key storage which stores <service_key, username>
    private final Map<String, String> serviceKeysStorage = new HashMap();

    // An authentication token storage which stores <auth_token, username>.
    private final Map<String, String> authorizationTokensStorage = new HashMap();

    private DemoAuthenticator() {
        // The usersStorage pretty much represents a user table in the database
        usersStorage.put( "username1", "passwordForUser1" );
        usersStorage.put( "username2", "passwordForUser2" );
        usersStorage.put( "username3", "passwordForUser3" );

        /**
         * Service keys are pre-generated by the system and given to the
         * authorized clients who want access to the REST API. Here,
         * only username1 and username2 are given REST service access with
         * their respective service keys.
         */
        serviceKeysStorage.put( "f80ebc87-ad5c-4b29-9366-5359768df5a1", "username1" );
        serviceKeysStorage.put( "3b91cab8-926f-49b6-ba00-920bcf934c2a", "username2" );
    }

    public static DemoAuthenticator getInstance() {
        if ( authenticator == null ) {
            authenticator = new DemoAuthenticator();
        }

        return authenticator;
    }

    public String login( String serviceKey, String username, String password ) throws LoginException {
        if ( serviceKeysStorage.containsKey( serviceKey ) ) {
            String usernameMatch = serviceKeysStorage.get( serviceKey );

            if ( usernameMatch.equals( username ) && usersStorage.containsKey( username ) ) {
                String passwordMatch = usersStorage.get( username );

                if ( passwordMatch.equals( password ) ) {

                    /**
                     * Once all params are matched, the authToken will be
                     * generated and stored in the authorizationTokensStorage.
                     * The authToken will be needed for every REST API
                     * invocation and is only valid within the login session.
                     */
                    String authToken = UUID.randomUUID().toString();
                    authorizationTokensStorage.put( authToken, username );

                    return authToken;
                }
            }
        }

        throw new LoginException( "Don't Come Here Again!" );
    }

    /**
     * The method that pre-validates whether the client invoking the REST API
     * is from an authorized and authenticated source.
     *
     * @param serviceKey The service key
     * @param authToken The authorization token generated after login
     * @return TRUE for acceptance and FALSE for denial.
     */
    public boolean isAuthTokenValid( String serviceKey, String authToken ) {
        if ( isServiceKeyValid( serviceKey ) ) {
            String usernameMatch1 = serviceKeysStorage.get( serviceKey );

            if ( authorizationTokensStorage.containsKey( authToken ) ) {
                String usernameMatch2 = authorizationTokensStorage.get( authToken );

                if ( usernameMatch1.equals( usernameMatch2 ) ) {
                    return true;
                }
            }
        }

        return false;
    }

    /**
     * This method checks whether the service key is valid.
     *
     * @param serviceKey
     * @return TRUE if the service key matches one of the pre-generated keys in
     * the service key storage. FALSE otherwise.
     */
    public boolean isServiceKeyValid( String serviceKey ) {
        return serviceKeysStorage.containsKey( serviceKey );
    }

    public void logout( String serviceKey, String authToken ) throws GeneralSecurityException {
        if ( serviceKeysStorage.containsKey( serviceKey ) ) {
            String usernameMatch1 = serviceKeysStorage.get( serviceKey );

            if ( authorizationTokensStorage.containsKey( authToken ) ) {
                String usernameMatch2 = authorizationTokensStorage.get( authToken );

                if ( usernameMatch1.equals( usernameMatch2 ) ) {

                    /**
                     * When a client logs out, the authentication token is
                     * removed and made invalid.
                     */
                    authorizationTokensStorage.remove( authToken );
                    return;
                }
            }
        }

        throw new GeneralSecurityException( "Invalid service key and authorization token match." );
    }
}

General Code Explanation: Generally, there are only a few important items that make up the authenticator: the service key, the authorization token, the username, and the password. The username and password commonly go in pairs.
Service Key

The service key may be new to some readers. In some public REST API services, a service key, sometimes known as an API key, is generated by the system and then sent to the user/client (through email or other means) that is permitted to access the REST service. So besides logging into the REST service with just a username and password, the system also checks the service key to see whether the user/client is permitted to access the REST APIs. The usernames, passwords, and service keys are all predefined in the code above for demo purposes only.

Authorization Token

Upon authentication (through the login() method), the system generates an authorization token for the authenticated user. This token is passed back to the user/client through the HTTP response and is to be used for any REST API invocation later. The user/client will have to find a way to store and use it throughout the login session. We'll look at that later.

Required HTTP Header Names

Moving forward, instead of having the service key and authorization token passed to the server-side app as HTTP parameters (form or query), we'll pass them as HTTP headers. This allows the request to be filtered before being processed by the targeted REST method. The HTTP header names are:

HTTP Header Name | Description
service_key | The service key that enables an HTTP client to access the REST web services. This is the first layer of authenticating and authorizing the HTTP request.
auth_token | The token generated upon username/password authentication, which is to be used for any REST web service calls (except for the authentication method shown later).

REST API Implementation

For convenience and to reduce code errors, let's put the HTTP header names into an interface as static final variables for use in the rest of the classes.
Code for DemoHTTPHeaderNames.java:

package com.developerscrappad.intf;

public interface DemoHTTPHeaderNames {

    public static final String SERVICE_KEY = "service_key";
    public static final String AUTH_TOKEN = "auth_token";
}

For the implementation of the authentication process and other demo methods, the method signatures are defined in DemoBusinessRESTResourceProxy, along with the appropriate HTTP methods and parameters, while the business implementation is defined in DemoBusinessRESTResource.

Code for DemoBusinessRESTResourceProxy.java:

package com.developerscrappad.intf;

import java.io.Serializable;
import javax.ejb.Local;
import javax.ws.rs.FormParam;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Local
@Path( "demo-business-resource" )
public interface DemoBusinessRESTResourceProxy extends Serializable {

    @POST
    @Path( "login" )
    @Produces( MediaType.APPLICATION_JSON )
    public Response login(
        @Context HttpHeaders httpHeaders,
        @FormParam( "username" ) String username,
        @FormParam( "password" ) String password );

    @GET
    @Path( "demo-get-method" )
    @Produces( MediaType.APPLICATION_JSON )
    public Response demoGetMethod();

    @POST
    @Path( "demo-post-method" )
    @Produces( MediaType.APPLICATION_JSON )
    public Response demoPostMethod();

    @POST
    @Path( "logout" )
    public Response logout( @Context HttpHeaders httpHeaders );
}

Code for DemoBusinessRESTResource.java:

package com.developerscrappad.business;

import com.developerscrappad.intf.DemoBusinessRESTResourceProxy;
import com.developerscrappad.intf.DemoHTTPHeaderNames;
import java.security.GeneralSecurityException;
import javax.ejb.Stateless;
import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonObjectBuilder;
import javax.security.auth.login.LoginException;
import javax.ws.rs.FormParam;
import javax.ws.rs.core.CacheControl;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.HttpHeaders;
import javax.ws.rs.core.Response;

@Stateless( name = "DemoBusinessRESTResource", mappedName = "ejb/DemoBusinessRESTResource" )
public class DemoBusinessRESTResource implements DemoBusinessRESTResourceProxy {

    private static final long serialVersionUID = -6663599014192066936L;

    @Override
    public Response login(
        @Context HttpHeaders httpHeaders,
        @FormParam( "username" ) String username,
        @FormParam( "password" ) String password ) {

        DemoAuthenticator demoAuthenticator = DemoAuthenticator.getInstance();
        String serviceKey = httpHeaders.getHeaderString( DemoHTTPHeaderNames.SERVICE_KEY );

        try {
            String authToken = demoAuthenticator.login( serviceKey, username, password );

            JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
            jsonObjBuilder.add( "auth_token", authToken );
            JsonObject jsonObj = jsonObjBuilder.build();

            return getNoCacheResponseBuilder( Response.Status.OK ).entity( jsonObj.toString() ).build();

        } catch ( final LoginException ex ) {
            JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
            jsonObjBuilder.add( "message", "Problem matching service key, username and password" );
            JsonObject jsonObj = jsonObjBuilder.build();

            return getNoCacheResponseBuilder( Response.Status.UNAUTHORIZED ).entity( jsonObj.toString() ).build();
        }
    }

    @Override
    public Response demoGetMethod() {
        JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
        jsonObjBuilder.add( "message", "Executed demoGetMethod" );
        JsonObject jsonObj = jsonObjBuilder.build();

        return getNoCacheResponseBuilder( Response.Status.OK ).entity( jsonObj.toString() ).build();
    }

    @Override
    public Response demoPostMethod() {
        JsonObjectBuilder jsonObjBuilder = Json.createObjectBuilder();
        jsonObjBuilder.add( "message", "Executed demoPostMethod" );
        JsonObject jsonObj = jsonObjBuilder.build();

        return getNoCacheResponseBuilder( Response.Status.ACCEPTED ).entity( jsonObj.toString() ).build();
    }

    @Override
    public Response logout( @Context HttpHeaders httpHeaders ) {
        try {
            DemoAuthenticator demoAuthenticator = DemoAuthenticator.getInstance();
            String serviceKey = httpHeaders.getHeaderString( DemoHTTPHeaderNames.SERVICE_KEY );
            String authToken = httpHeaders.getHeaderString( DemoHTTPHeaderNames.AUTH_TOKEN );

            demoAuthenticator.logout( serviceKey, authToken );

            return getNoCacheResponseBuilder( Response.Status.NO_CONTENT ).build();
        } catch ( final GeneralSecurityException ex ) {
            return getNoCacheResponseBuilder( Response.Status.INTERNAL_SERVER_ERROR ).build();
        }
    }

    private Response.ResponseBuilder getNoCacheResponseBuilder( Response.Status status ) {
        CacheControl cc = new CacheControl();
        cc.setNoCache( true );
        cc.setMaxAge( -1 );
        cc.setMustRevalidate( true );

        return Response.status( status ).cacheControl( cc );
    }
}

The login() method authenticates the username, the password, and the service key. After login(), the authorization token is generated and returned to the client. The client will have to use it for any other method invocation later on. demoGetMethod() and demoPostMethod() are just dummy methods that return a JSON message for demo purposes, but with the special condition that a valid authorization token must be present. The logout() method logs the user out of the REST service; the user is identified by the "auth_token".

The service key and the authorization token are made available to the REST service methods through:

@Context HttpHeaders httpHeaders

httpHeaders, an instance of javax.ws.rs.core.HttpHeaders, is an object that contains the header names and values for the application's further use. But in order to get the REST service to accept the HTTP headers, something needs to be done first through both the REST request interceptor and the response interceptor.
Authentication With HTTP Headers Through JAX-RS 2.0 Interceptors

Due to certain security limitations, don't expect that arbitrary HTTP headers can be passed by any REST client and accepted by the REST service. It just doesn't work that way. In order for a specific header to be accepted by the REST service, we have to declare its acceptance very specifically in the response filter interceptor.

Code for DemoRESTResponseFilter.java:

package com.developerscrappad.interceptors;

import com.developerscrappad.intf.DemoHTTPHeaderNames;
import java.io.IOException;
import java.util.logging.Logger;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class DemoRESTResponseFilter implements ContainerResponseFilter {

    private final static Logger log = Logger.getLogger( DemoRESTResponseFilter.class.getName() );

    @Override
    public void filter( ContainerRequestContext requestCtx, ContainerResponseContext responseCtx ) throws IOException {

        log.info( "Filtering REST Response" );

        // You may further limit certain client IPs with Access-Control-Allow-Origin instead of '*'
        responseCtx.getHeaders().add( "Access-Control-Allow-Origin", "*" );
        responseCtx.getHeaders().add( "Access-Control-Allow-Credentials", "true" );
        responseCtx.getHeaders().add( "Access-Control-Allow-Methods", "GET, POST, DELETE, PUT" );
        responseCtx.getHeaders().add( "Access-Control-Allow-Headers",
            DemoHTTPHeaderNames.SERVICE_KEY + ", " + DemoHTTPHeaderNames.AUTH_TOKEN );
    }
}

DemoRESTResponseFilter is a JAX-RS 2.0 interceptor which implements ContainerResponseFilter. Don't forget to annotate it with both @Provider and @PreMatching.

In order to allow specific custom HTTP headers to be accepted, the header "Access-Control-Allow-Headers", followed by the custom header names separated by ",", must be added to the response. This is the way to inform the browser or REST client of the custom headers allowed. The rest of the headers are for CORS, which you can read more about in one of our articles: Java EE 7 / JAX-RS 2.0 – CORS on REST (How to make REST APIs accessible from a different domain).

Next, to validate and verify the service key and authorization token, we need to extract them from the HTTP headers and pre-process them with the request filter interceptor.

Code for DemoRESTRequestFilter:

package com.developerscrappad.interceptors;

import com.developerscrappad.business.DemoAuthenticator;
import com.developerscrappad.intf.DemoHTTPHeaderNames;
import java.io.IOException;
import java.util.logging.Logger;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.container.PreMatching;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
@PreMatching
public class DemoRESTRequestFilter implements ContainerRequestFilter {

    private final static Logger log = Logger.getLogger( DemoRESTRequestFilter.class.getName() );

    @Override
    public void filter( ContainerRequestContext requestCtx ) throws IOException {

        String path = requestCtx.getUriInfo().getPath();
        log.info( "Filtering request path: " + path );

        // IMPORTANT!!! First, acknowledge any pre-flight test from browsers for this case
        // before validating the headers (CORS stuff)
        if ( requestCtx.getRequest().getMethod().equals( "OPTIONS" ) ) {
            requestCtx.abortWith( Response.status( Response.Status.OK ).build() );

            return;
        }

        // Then check if the service key exists and is valid.
        DemoAuthenticator demoAuthenticator = DemoAuthenticator.getInstance();
        String serviceKey = requestCtx.getHeaderString( DemoHTTPHeaderNames.SERVICE_KEY );

        if ( !demoAuthenticator.isServiceKeyValid( serviceKey ) ) {
            // Kick anyone without a valid service key
            requestCtx.abortWith( Response.status( Response.Status.UNAUTHORIZED ).build() );

            return;
        }

        // For any other method besides login, the authToken must be verified
        if ( !path.startsWith( "/demo-business-resource/login/" ) ) {
            String authToken = requestCtx.getHeaderString( DemoHTTPHeaderNames.AUTH_TOKEN );

            // If it isn't valid, just kick them out.
            if ( !demoAuthenticator.isAuthTokenValid( serviceKey, authToken ) ) {
                requestCtx.abortWith( Response.status( Response.Status.UNAUTHORIZED ).build() );
            }
        }
    }
}

To get a header value, we invoke the getHeaderString() method of the ContainerRequestContext instance, for example:

String serviceKey = requestCtx.getHeaderString( "service_key" );

The rest of the code in DemoRESTRequestFilter is pretty straightforward, validating and verifying the service key and the authorization token.

REST Service Deployment

Don't forget the web.xml that enables the REST service. Code for web.xml:

<servlet>
    <servlet-name>javax.ws.rs.core.Application</servlet-name>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>javax.ws.rs.core.Application</servlet-name>
    <url-pattern>/rest-api/*</url-pattern>
</servlet-mapping>

For this demo, I have packaged the compiled code into a war file named RESTSecurityWithHTTPHeaderDemo.war. I have chosen to deploy on GlassFish 4.0 on the domain developerscrappad.com (the domain of this blog). If you are following everything in this tutorial, you may choose a different domain of your own.
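The decision flow of DemoRESTRequestFilter above can be condensed into a pure function that is easy to unit-test in isolation. This is a sketch mirroring the filter's logic, not part of the article's code:

```java
public class FilterDecision {

    // Mirrors DemoRESTRequestFilter: 200 short-circuits a CORS preflight,
    // 401 rejects bad credentials, -1 means "let the request through".
    public static int decide(String httpMethod, String path,
                             boolean serviceKeyValid, boolean authTokenValid) {
        if ("OPTIONS".equals(httpMethod)) {
            return 200;  // acknowledge the CORS preflight before any validation
        }
        if (!serviceKeyValid) {
            return 401;  // no valid service key, kick the request out
        }
        if (!path.startsWith("/demo-business-resource/login/") && !authTokenValid) {
            return 401;  // auth token is required for everything except login
        }
        return -1;       // continue to the resource method
    }

    public static void main(String[] args) {
        System.out.println(decide("OPTIONS", "/demo-business-resource/demo-get-method/", false, false)); // 200
        System.out.println(decide("POST", "/demo-business-resource/login/", true, false));               // -1
        System.out.println(decide("GET", "/demo-business-resource/demo-get-method/", true, false));      // 401
    }
}
```

Note the ordering: the preflight check must come first, because a browser's OPTIONS request carries neither the service key nor the auth token.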
The REST API URLs will be in the format:

http://<domain>:<port>/RESTSecurityWithHTTPHeaderDemo/rest-api/path/method-path/

Anyway, the summary of the URLs for the test client which I'm using is:

Method | REST URL | HTTP Method
DemoBusinessRESTResourceProxy.login() | http://developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/login/ | POST
DemoBusinessRESTResourceProxy.demoGetMethod() | http://developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-get-method/ | GET
DemoBusinessRESTResourceProxy.demoPostMethod() | http://developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-post-method/ | POST
DemoBusinessRESTResourceProxy.logout() | http://developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/logout/ | POST

The REST Client

Putting it all together, here's a REST client which I've written to test the REST APIs. The REST client is just an HTML file (specifically HTML5, which supports web storage) that leverages jQuery for the REST API calls. What the REST client does is as follows:

First, the REST client makes a REST API call without a service key and authorization token. The call is rejected with HTTP status 401 (Unauthorized).
Next, it performs a login with the specific service key (hard-coded for now in DemoAuthenticator.java) for "username2". Once the authorization token has been received, it is stored in sessionStorage for further use.
Then, it calls the dummy get and post methods.
After that, it performs a logout.
Once the user is logged out, the client calls the dummy get and post methods again, but access is denied with HTTP status 401 due to the expiration of the authorization token.

Code for rest-auth-test.html:

<html>
<head>
    <title>REST Authentication Tester</title>
    <meta charset="UTF-8">
</head>
<body>
    <div id="logMsgDiv"></div>

    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
    <script type="text/javascript">
        var $ = jQuery.noConflict();

        // Disable async
        $.ajaxSetup( { async: false } );

        // Using Service Key 3b91cab8-926f-49b6-ba00-920bcf934c2a and username2

        // This is what happens when you call the REST APIs without a service key and authorization token
        $.ajax( {
            cache: false,
            crossDomain: true,
            url: "http://www.developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-post-method/",
            type: "POST",
            success: function( jsonObj, textStatus, xhr ) {
                var htmlContent = $( "#logMsgDiv" ).html( )
                    + "<p style='color: red;'>If this portion is executed, something must be wrong</p>";
                $( "#logMsgDiv" ).html( htmlContent );
            },
            error: function( xhr, textStatus, errorThrown ) {
                var htmlContent = $( "#logMsgDiv" ).html( )
                    + "<p style='color: red;'>This is what happens when you call the REST APIs without a service key and authorization token."
                    + "<br />HTTP Status: " + xhr.status + ", Unauthorized access to demo-post-method</p>";

                $( "#logMsgDiv" ).html( htmlContent );
            }
        } );

        // Performing login with username2 and passwordForUser2
        $.ajax( {
            cache: false,
            crossDomain: true,
            headers: { "service_key": "3b91cab8-926f-49b6-ba00-920bcf934c2a" },
            dataType: "json",
            url: "http://www.developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/login/",
            type: "POST",
            data: { "username": "username2", "password": "passwordForUser2" },
            success: function( jsonObj, textStatus, xhr ) {
                sessionStorage.auth_token = jsonObj.auth_token;

                var htmlContent = $( "#logMsgDiv" ).html( )
                    + "<p>Perform Login. Gotten auth-token as: " + sessionStorage.auth_token + "</p>";
                $( "#logMsgDiv" ).html( htmlContent );
            },
            error: function( xhr, textStatus, errorThrown ) {
                console.log( "HTTP Status: " + xhr.status );
                console.log( "Error textStatus: " + textStatus );
                console.log( "Error thrown: " + errorThrown );
            }
        } );

        // After login, execute demoGetMethod with the auth-token obtained
        $.ajax( {
            cache: false,
            crossDomain: true,
            headers: {
                "service_key": "3b91cab8-926f-49b6-ba00-920bcf934c2a",
                "auth_token": sessionStorage.auth_token
            },
            dataType: "json",
            url: "http://www.developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-get-method/",
            type: "GET",
            success: function( jsonObj, textStatus, xhr ) {
                var htmlContent = $( "#logMsgDiv" ).html( )
                    + "<p>After login, execute demoGetMethod with the auth-token obtained. JSON Message: " + jsonObj.message + "</p>";
                $( "#logMsgDiv" ).html( htmlContent );
            },
            error: function( xhr, textStatus, errorThrown ) {
                console.log( "HTTP Status: " + xhr.status );
                console.log( "Error textStatus: " + textStatus );
                console.log( "Error thrown: " + errorThrown );
            }
        } );

        // Execute demoPostMethod with the auth-token obtained
        $.ajax( {
            cache: false,
            crossDomain: true,
            headers: {
                "service_key": "3b91cab8-926f-49b6-ba00-920bcf934c2a",
                "auth_token": sessionStorage.auth_token
            },
            dataType: "json",
            url: "http://www.developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-post-method/",
            type: "POST",
            success: function( jsonObj, textStatus, xhr ) {
                var htmlContent = $( "#logMsgDiv" ).html( )
                    + "<p>Execute demoPostMethod with the auth-token obtained. JSON message: " + jsonObj.message + "</p>";
                $( "#logMsgDiv" ).html( htmlContent );
            },
            error: function( xhr, textStatus, errorThrown ) {
                console.log( "HTTP Status: " + xhr.status );
                console.log( "Error textStatus: " + textStatus );
                console.log( "Error thrown: " + errorThrown );
            }
        } );

        // Let's logout after all the above. No content expected
        $.ajax( {
            cache: false,
            crossDomain: true,
            headers: {
                "service_key": "3b91cab8-926f-49b6-ba00-920bcf934c2a",
                "auth_token": sessionStorage.auth_token
            },
            url: "http://www.developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/logout/",
            type: "POST",
            success: function( jsonObj, textStatus, xhr ) {
                var htmlContent = $( "#logMsgDiv" ).html( )
                    + "<p>Let's logout after all the above. No content expected.</p>";
                $( "#logMsgDiv" ).html( htmlContent );
            },
            error: function( xhr, textStatus, errorThrown ) {
                console.log( "HTTP Status: " + xhr.status );
                console.log( "Error textStatus: " + textStatus );
                console.log( "Error thrown: " + errorThrown );
            }
        } );

        // This is what happens when someone reuses the authorization token after a user has logged out
        $.ajax( {
            cache: false,
            crossDomain: true,
            headers: {
                "service_key": "3b91cab8-926f-49b6-ba00-920bcf934c2a",
                "auth_token": sessionStorage.auth_token
            },
            url: "http://www.developerscrappad.com:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-get-method/",
            type: "GET",
            success: function( jsonObj, textStatus, xhr ) {
                var htmlContent = $( "#logMsgDiv" ).html( )
                    + "<p style='color: red;'>If this portion is executed, something must be wrong</p>";
                $( "#logMsgDiv" ).html( htmlContent );
            },
            error: function( xhr, textStatus, errorThrown ) {
                var htmlContent = $( "#logMsgDiv" ).html( )
                    + "<p style='color: red;'>This is what happens when someone reuses the authorization token after a user has logged out"
                    + "<br />HTTP Status: " + xhr.status + ", Unauthorized access to demo-get-method</p>";

                $( "#logMsgDiv" ).html( htmlContent );
            }
        } );
    </script>
</body>
</html>

The Result

rest-auth-test.html need not be packaged with the war file; this separates the invoking client script from the server-side app, simulating a cross-origin request. To run rest-auth-test.html, all you need to do is open it in a web browser. For me, I did this in Firefox with the Firebug plugin, and below is the result. It worked pretty well. The first and the last requests were rejected with a 401 (Unauthorized) HTTP status because they were executed before authentication and after logout (invalid auth_token), respectively.
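A non-browser client can exercise the same header-based flow. As a rough Java sketch using the java.net.http client (Java 11+), the request below carries both custom headers; the URL and token value are placeholders, not the live demo endpoint:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class HeaderDemo {

    public static void main(String[] args) {
        // Placeholder URL and token; in the demo these would be the deployed
        // demo-get-method endpoint and the auth_token returned by login.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/RESTSecurityWithHTTPHeaderDemo/rest-api/demo-business-resource/demo-get-method/"))
                .header("service_key", "3b91cab8-926f-49b6-ba00-920bcf934c2a")
                .header("auth_token", "token-received-from-login")
                .GET()
                .build();

        // The built request can be inspected before sending it with HttpClient.send(...)
        System.out.println(request.headers().firstValue("service_key").orElse("missing"));
    }
}
```

Because the headers are plain request headers (not form or query parameters), any HTTP client that can set headers can drive this API, which is exactly what the request filter relies on.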
Final Words

When it comes to dealing with custom HTTP headers in a JAX-RS 2.0 application, just remember to have the custom HTTP header names included as part of "Access-Control-Allow-Headers" in the response filter, e.g.

Access-Control-Allow-Headers: custom_header_name1, custom_header_name2

After that, getting the HTTP headers can be done easily in the REST web service methods with the help of javax.ws.rs.core.HttpHeaders through the REST context. Don't forget the restrictions and impact of CORS, which should be taken care of in both the REST request and response interceptors. Thank you for reading; I hope this article helps.

Related Articles:

Java EE 7 / JAX-RS 2.0 – CORS on REST (How to make REST APIs accessible from a different domain)
http://en.wikipedia.org/wiki/Cross-origin_resource_sharing
http://www.html5rocks.com/en/tutorials/cors/
http://www.w3.org/TR/cors/
https://developer.mozilla.org/en/docs/HTTP/Access_control_CORS

Reference: Java EE 7 / JAX-RS 2.0: Simple REST API Authentication & Authorization with Custom HTTP Header from our JCG partner Max Lam at the A Developer's Scrappad blog.

Grails Generate Asynchronous Controller

Since version 2.3, Grails has supported asynchronous parallel programming to make better use of modern multi-core hardware, and a new Grails command was added to generate asynchronous controllers for domain classes. The generated controller contains CRUD actions for a given domain class. In the example below, we will generate a default asynchronous implementation of a Grails controller. First we create a domain object: $ grails create-domain-class grails.data.Movie Second, we generate the asynchronous controller using the new generate-async-controller command: $ grails generate-async-controller grails.data.Movie Grails now generates an asynchronous controller named MovieController. Below you can see the default implementation of the index method:

def index(Integer max) {
    params.max = Math.min(max ?: 10, 100)
    Movie.async.task {
        [movieInstanceList: list(params), count: count()]
    }.then { result ->
        respond result.movieInstanceList, model: [movieInstanceCount: result.count]
    }
}

The async namespace makes sure the GORM methods inside the task block are performed on a different thread, making the action asynchronous. The task method returns a Promise object, on which you can register callback operations like onError and onComplete.Reference: Grails Generate Asynchronous Controller from our JCG partner Albert van Veen at the JDriven blog....
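For readers coming from the Java side, the task/then flow above maps onto a pattern plain Java 8 already offers via CompletableFuture. This is only an analogy to illustrate the idea, not how Grails/GORM implements it, and the names below are made up:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncIndexSketch {
    // Stand-in for a data-access call; in the Grails example this role is
    // played by GORM's Movie.list()/count() inside the task block.
    static int countMovies() {
        return 42;
    }

    // task { ... }  -> supplyAsync runs the work on another thread
    // .then { ... } -> thenApply consumes the result once it is ready
    static CompletableFuture<String> index() {
        return CompletableFuture
                .supplyAsync(AsyncIndexSketch::countMovies)
                .thenApply(count -> "respond model with movieInstanceCount=" + count);
    }

    public static void main(String[] args) {
        // join() blocks only at the point the result is actually needed
        System.out.println(index().join());
    }
}
```

The key property is the same as in the generated controller: the expensive work and the mapping of its result are declared up front, and nothing blocks until the value is demanded.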

Testing your plugin with multiple versions of Play

So, you’ve written a plugin for Play… are you sure it works? I’ve been giving Deadbolt some love recently, and as part of the work I’ve added a test application for functional testing. This is an application that uses all the features of Deadbolt, and is driven by HTTP calls made by REST-Assured. Initially it was based on Play 2.3.5, but this ignores the supported Play versions 2.3.1 through 2.3.4. Additionally, those hard-working people on the Play team at Typesafe keep cranking out new feature-filled versions. On top of that, support for Scala 2.10.4 and 2.11.1 is required, so cross-Scala-version testing is needed. Clearly, testing your plugin against a single version of Play is not enough. Seems like some kind of continuous integration could help us out here… Building on Travis CI Deadbolt builds on Travis CI, a great CI platform that’s free for open-source projects. This runs the tests and publishes snapshot versions to Sonatype. I’m not going into detail on this, because there’s already a great guide over at Cake Solutions. You can find the guide here: http://www.cakesolutions.net/teamblogs/publishing-artefacts-to-oss-sonatype-nexus-using-sbt-and-travis-ci-here… I’ve made some changes to the build script because the plugin code is not at the top level of the repository; rather, it resides one level down. The repository looks like this:

deadbolt-2-java
|-code     # plugin code lives here
|-test-app # the functional test application

As a result, the .travis.yml file that defines the build looks like this:

language: scala
jdk:
- openjdk6
scala:
- 2.11.1
script:
- cd code
- sbt ++$TRAVIS_SCALA_VERSION +test
- cd ../test-app
- sbt ++$TRAVIS_SCALA_VERSION +test
- cd ../code
- sbt ++$TRAVIS_SCALA_VERSION +publish-local
after_success:
- ! '[[ $TRAVIS_BRANCH == "master" ]] && { sbt +publish; };'
env:
  global:
  - secure: foo
  - secure: bar

This sets the Java version (people get angry when I don’t provide Java 6-compatible versions), and defines a script as the build process. 
Note the cd commands used to switch between the plugin directory and the test-app directory. This script already covers the cross-Scala-version requirement – prefixing a command with +, e.g. +test, will execute that command against all versions of Scala defined in your build.sbt. It’s important to note that although only Scala 2.11.1 is defined in .travis.yml, SBT itself will take care of setting the current build version based on build.sbt: crossScalaVersions := Seq("2.11.1", "2.10.4") Testing multiple versions of Play However, the version of Play used by the test-app is still hard-coded to 2.3.5 in test-app/project/plugins.sbt: addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.3.5") Happily, .sbt files are not just configuration files but actual code. This means we can change the Play version based on a system property, with a default value of 2.3.5 so the tests can run locally without having to set the version: addSbtPlugin("com.typesafe.play" % "sbt-plugin" % System.getProperty("playTestVersion", "2.3.5")) Finally, we update .travis.yml to take advantage of this:

language: scala
jdk:
- openjdk6
scala:
- 2.11.1
script:
- cd code
- sbt ++$TRAVIS_SCALA_VERSION +test
- cd ../test-app
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.1 +test
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.2 +test
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.3 +test
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.4 +test
- sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.5 +test
- cd ../code
- sbt ++$TRAVIS_SCALA_VERSION +publish-local
after_success:
- !
'[[ $TRAVIS_BRANCH == "master" ]] && { sbt +publish; };'
env:
  global:
  - secure: foo
  - secure: bar

This means the following steps occur during the build:

sbt ++$TRAVIS_SCALA_VERSION +test
- Run the plugin tests against Scala 2.11.1
- Run the plugin tests against Scala 2.10.4

sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.1 +test
- Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.1
- Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.1

sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.2 +test
- Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.2
- Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.2

sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.3 +test
- Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.3
- Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.3

sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.4 +test
- Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.4
- Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.4

sbt ++$TRAVIS_SCALA_VERSION -DplayTestVersion=2.3.5 +test
- Run the functional tests of the test-app against Scala 2.11.1 and Play 2.3.5
- Run the functional tests of the test-app against Scala 2.10.4 and Play 2.3.5

If all these steps pass, the after_success branch of the build script will execute. If any of the steps fail, the build will break and the snapshots won’t be published. You can take a look at a repository using this approach here: https://github.com/schaloner/deadbolt-2-java. The resulting Travis build is available here: https://travis-ci.org/schaloner/deadbolt-2-java.Reference: Testing your plugin with multiple versions of Play from our JCG partner Steve Chaloner at the Objectify blog....
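The plugins.sbt trick above hinges on reading a JVM system property with a fallback, and the mechanism is plain java.lang.System. Here is a minimal sketch of the same pattern in Java; the property name mirrors the post, but the class is made up for illustration:

```java
public class VersionSwitch {
    // Mirrors addSbtPlugin(... System.getProperty("playTestVersion", "2.3.5")):
    // use the -DplayTestVersion value when present, else fall back to a default.
    static String playTestVersion() {
        return System.getProperty("playTestVersion", "2.3.5");
    }

    public static void main(String[] args) {
        // e.g. java -DplayTestVersion=2.3.1 VersionSwitch
        System.out.println("Building against Play " + playTestVersion());
    }
}
```

This is why the same build file can serve both local runs (no property, default wins) and the CI matrix (each -DplayTestVersion invocation overrides it).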

Quo Vadis JUnit

For me JUnit is the most important library of the Java universe. But I think a new version of it is overdue. With its approach of having a method definition as a test definition, JUnit is mighty inflexible and needs various hacks … sorry, features, to do what you really should be able to do with basic (Java 8) language features. If you aren’t sure what I’m talking about, check out this article about ScalaTest. Something like this should be the standard for JUnit. Of course you can implement your own TestRunner to get something like this going. But there are already many important TestRunners (SpringJUnit4ClassRunner, anyone?) and they have the huge drawback that you can have only one of them. Another alternative would be to just say good-bye to JUnit and use a different test framework. But none of these other test frameworks has the support from third-party tools that JUnit has, so I’d really prefer JUnit to evolve instead of being replaced by something else. I was thinking about these issues for quite some time and actually brought them up on the JUnit mailing list, with lots of interesting feedback, but nothing happened. So when I met Marc, one of the JUnit committers, at the XP-Days, we started to discuss the situation, joined by Stefan, another JUnit committer, and various XP-Days participants. And as so often, nothing is as easy as it seems. JUnit is a very successful library, but it also doesn’t offer all the features people want or need. This has the effect that people use JUnit in all kinds of weird ways, which makes it really hard to evolve. For example, Marc and Stefan told a story about the latest version of JUnit, where they learned that a certain IDE uses reflection on private fields of JUnit, resulting in a “bug” when the name of that field was changed. It seems, therefore, that before one can make a change as big as a different default TestRunner, one has to revamp JUnit. 
I envision something like the following:
- gather the various features that others bolted onto JUnit that probably should be part of JUnit itself
- provide a clean, supported API for those
- apply gentle pressure and time for third parties to switch to the new APIs
- behind that API, provide a new, more flexible way to create tests
- profit

And since JUnit is an open source project and all its developers seem to work on it only in their private time, we started right there at the XP-Days, gathering stuff that needs consideration. I put the results in a wiki page in the JUnit GitHub repository. Get over there and see if you can add something.Reference: Quo Vadis JUnit from our JCG partner Jens Schauder at the Schauderhaft blog....
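To make the wish concrete: with Java 8 lambdas, tests can be plain values instead of reflectively discovered methods. The toy runner below is purely illustrative; it is not a JUnit proposal or any existing API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration: tests as values built from lambdas, so they can be
// created in loops, factories, or nested structures, none of which a
// one-method-per-test model allows.
public class MiniSpec {

    interface Check {
        void run() throws Exception;
    }

    private final List<String> results = new ArrayList<>();

    // "test" is just a method call taking a name and a lambda body
    public MiniSpec test(String name, Check body) {
        try {
            body.run();
            results.add("PASS " + name);
        } catch (Throwable t) {
            results.add("FAIL " + name);
        }
        return this;
    }

    public List<String> results() {
        return results;
    }

    public static void main(String[] args) {
        new MiniSpec()
                .test("addition works", () -> {
                    if (1 + 1 != 2) throw new AssertionError("math is broken");
                })
                .test("deliberately failing example", () -> {
                    throw new AssertionError("boom");
                })
                .results()
                .forEach(System.out::println);
    }
}
```

Because a test here is an ordinary object, a "runner" is just code that walks a list, which is exactly the kind of flexibility a method-scanning model has to bolt on via custom TestRunners.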

When a String is Null but Not Null

Introduction A junior programmer on my team at work had an interesting bug. I have seen its symptoms more than once. This post covers what to look for and how to prevent it in the future. I also explore different solutions to the problem. Symptoms The code in question looked well made:

if (trouble != null && !trouble.isEmpty()) {
    System.out.println("fine here: " + trouble);
} else {
    System.out.println("not so fine here: " + trouble);
}

The code would hit the “fine here” block but would print the value of “null.” The variable was set by reading a file. Investigation The developer and I looked at the printout and ran the test several times, but the same result came up. I looked at where the variable was being set. It should have set the value to null if there was nothing there, yet the printout stated the value was null. I had an idea and we decided to test it. He changed the code:

if (trouble != null && !trouble.isEmpty() && !trouble.equals("null")) {
    System.out.println("fine here");
} else {
    System.out.println("not so fine here");
}

The tests went to “not so fine here” every time. It appears the value was set to the string "null", not to the value null. What to Learn To tell the truth, I have seen this before. It took me about a day to figure it out when my own code started doing the same thing. What I learned from this is that parameter checking is still a good thing. I found that the valid-string check could be used in several places in my code. To prevent the copy-and-paste anti-pattern, I abstracted the validation into a method:

private static boolean isValidString(String test) {
    boolean isValid = (test != null && !test.isEmpty() && !test.equals("null"));
    return isValid;
}

The next step, to prevent a longer and longer validation line, is to abstract it into a validation object. This allows for a dirty word list. 
public class StringValidator {
    private List<String> dirtyWords;

    public static final int ARRAY_SIZE = 20;

    public StringValidator() {
        dirtyWords = new ArrayList<>(ARRAY_SIZE);
    }

    public boolean isValid(String test) {
        boolean isValid = (test != null) && !test.isEmpty();
        if (isValid) {
            for (String word : dirtyWords) {
                if (word.equals(test)) {
                    isValid = false;
                    break;
                }
            }
        }
        return isValid;
    }

    public void addDirtyWord(String word) {
        if (!isValidString(word)) {
            throw new IllegalArgumentException(word + " is not a good dirty word");
        }
        dirtyWords.add(word);
    }

    private boolean isValidString(String test) {
        return ((test != null) && !test.isEmpty());
    }
}

which leads to this parameter-checking code:

if (validator.isValid(trouble)) {
    System.out.println("fine here");
} else {
    System.out.println("not so fine here");
}

Conclusion Sometimes we need to think a little outside the box to figure out a problem. Don’t feel bad about getting a second set of eyes on a problem; it may be the best thing that happens. I explored the solution, ending up creating a validator that allows for the inclusion of a dirty word list without a long and confusing test.Reference: When a String is Null but Not Null from our JCG partner Daryl Mathison at the Daryl Mathison’s Java Blog blog....
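To see the validator catch the original bug, here is a quick self-contained check. It inlines a rendition of the class above (the behaviour matches the article's validator, but this demo is my sketch, not the author's test code):

```java
import java.util.ArrayList;
import java.util.List;

public class StringValidatorDemo {

    // Rendition of the article's validator, inlined so the demo is self-contained
    static class StringValidator {
        private final List<String> dirtyWords = new ArrayList<>();

        public void addDirtyWord(String word) {
            if (word == null || word.isEmpty()) {
                throw new IllegalArgumentException(word + " is not a good dirty word");
            }
            dirtyWords.add(word);
        }

        public boolean isValid(String test) {
            return test != null && !test.isEmpty() && !dirtyWords.contains(test);
        }
    }

    public static void main(String[] args) {
        StringValidator validator = new StringValidator();
        validator.addDirtyWord("null");      // the literal string that started the bug hunt
        validator.addDirtyWord("undefined"); // another value files sometimes smuggle in

        System.out.println(validator.isValid("123 Main St")); // a real street: valid
        System.out.println(validator.isValid("null"));        // the string "null": rejected
        System.out.println(validator.isValid(null));          // the actual null: rejected
    }
}
```

The point of the dirty word list is exactly this: the literal string "null" and the real null reference now fail the same check, so neither can sneak past the “fine here” branch.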

Resource scheduling and task launching with Apache Mesos and Apache Aurora at Twitter

Episode #23 of the podcast was a talk with Bill Farner. Bill explained how Twitter, using Apache Mesos and Apache Aurora, gets more for its money from its hardware and saves engineering time (both development and operations) by utilizing fine-grained resource scheduling across its infrastructure. Bill talked a bit about how what he saw and experienced at Google with Borg shaped how they wanted to run things at Twitter, and why they built Aurora. Now, after years of running in production at Twitter, Aurora is open source, part of the Apache Foundation, and available for use. Lots of new use cases that they didn’t see coming have become very powerful for their teams, and Bill went into more detail about that too. Bill also talked about the type of instrumentation that was done with features in Aurora to get to a place where now all new systems, and almost all legacy systems, at Twitter run on top of Aurora. Bill went into detail about how that works in regard to Twitter’s cache and how the SLA features of Aurora make this a reality. Aurora is amazing, providing end users (everyone from engineers to analysts) the ability to have full access to the potential resources of their hardware clusters. Aurora provides features like quotas and preemption so that any user can be given access to the compute resources of the entire hardware infrastructure without worry that they will hog resources, while keeping production always as the priority. Apache Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant and elastic distributed systems to be easily built and run effectively. Mesos is built using the same principles as the Linux kernel, only at a different level of abstraction. The Mesos kernel runs on every machine and provides applications (e.g., Hadoop, Spark, Kafka, Elasticsearch) with APIs for resource management and scheduling across entire datacenter and cloud environments. 
Apache Aurora is a Mesos framework. A Mesos framework is a scheduler of resources and launcher of tasks. Aurora provides a Job abstraction consisting of a Task template and instructions for creating near-identical replicas of that Task. Typically a Task is a single Process corresponding to a single command line, such as python2.6 my_script.py. However, sometimes you must colocate separate Processes together within a single Task, which runs within a single container and chroot, often referred to as a “sandbox”. For example, you might run multiple cooperating agents together, such as logrotate, installer, and master or slave processes. Thermos provides a Process abstraction underneath Mesos Tasks. To use and get up to speed on Aurora, you should look at the docs in this directory in this order:
- How to deploy Aurora, or how to install Aurora on virtual machines on your private machine (the Tutorial uses the virtual machine approach)
- As a user, get started quickly with a Tutorial
- For an overview of Aurora’s process flow under the hood, see the User Guide
- To learn how to write a configuration file, look at our Configuration Tutorial; from there, look at the Aurora + Thermos Reference
- Then read up on the Aurora Command Line Client
- Find out general information and useful tips about how Aurora does Resource Isolation

For some more great background on Mesos and Aurora, please check out these three videos:
- Datacenter Management with Apache Mesos
- An intro video to Apache Aurora
- Past, Present, Future of Apache Aurora

Reference: Resource scheduling and task launching with Apache Mesos and Apache Aurora at Twitter from our JCG partner Joe Stein at the All Things Hadoop blog....

Linux productivity tools for me

These are the stack and tools that keep me productive in my day-to-day programming on Linux. Well, first of all let me tell you that this might not fit your needs, as I’m a full-time Java programmer and so have my OS choices open (still, I won’t go into my Java-related stuff here, to keep the information general and possibly usable for non-Java folks as well). Linux has been my choice for quite some years already, and this post won’t detail the reasons, but rather focus on the tools and utilities that help me survive in this world. Distro Well, let’s start from the ground up. I don’t have a clear winner here. In fact, I currently run 3 different ones (not counting my router and phone):
- Fedora – on my work laptop
- Xubuntu – on my home laptop
- Raspbian (Debian) – on my Raspberry Pi server

This post won’t say much about the third one, as it’s a server that just runs. The only thing I need there is ssh access, and that’s pretty much it, so this post would be rather short for that one! The reasons for Fedora and Ubuntu in my case are: stability (well, let’s say things don’t crash that often for me), packages for most of the software I need being available in the repositories, and those packages not being stuck at archaic versions. Window manager Well, the nice thing is that the most common window managers are available across all the popular distros. My choice here is Xfce. Well, I don’t hate the others; this one just fits my needs best: I have no need for fancy effects, as speed is my preference, but I still need something that I don’t have to fight with every day. A project I’m keeping my eye on is LXQt. I’m still waiting for a more stabilized release (0.8 was released recently, which is said to be ready for production desktops, but I plan to give it a try with the next release, when I have some time to set it up). Shell I have 2 candidates here: bash and zsh. bash I use for scripting. It’s been my choice for quite some time already and I don’t see a need for change any time soon. 
As I’m quite used to it, and most people/projects I share scripts with are OK with it or have used it already. zsh is my default shell. This is still a world I consider new to me (see my recent post on that). Shell env sync As I have multiple machines I work on, there are 2 essential projects in this area for me: oh-my-zsh and homeshick. oh-my-zsh provides for my zsh:
- a nicely/systematically structured plugin approach,
- autocompletion, and
- all sorts of aliases.

To document my use case a bit, these are the plugins I have present in my .zshrc: plugins=(git mvn glassfish yum colored-man vagrant z common-aliases gradle homeshick vim-interaction powerline tmuxinator tmux) Please note: some are not available officially and, at the time of writing, are present only as pull requests from me to the project, namely homeshick and powerline. Well, it might sound like self-promotion, but feel free to check them out and provide feedback if you find them useful. Homeshick Lets me synchronize all my custom zsh plugins as well as my .bashrc files over a git repository. The only requirements are git and bash on all the clients, and the git repo being accessible from all of them. In my case, there are things I don’t want to expose to the public but have no problem hosting in some private git repo. So Bitbucket is my choice, as it gives me a private git repo for free for the purpose. Shell sessions bootstrapping There are 2 important projects for me here: tmux and tmuxinator. Together they enable me to have just one YAML-formatted file for bootstrapping my terminals. I prefer this to having multiple tabs open in some GUI terminal, as it:
- bootstraps all my shells daily in the same, reproducible way,
- provides me with a nice way of organizing them, and
- offers keyboard-only navigation between them, so switching between different tasks becomes routine after some time of usage.

So no more searching through countless cards/windows for a particular task. 
Shell productivity I don’t want to list all the Linux utilities I use, as I guess I’d make the list too long and too boring (even for me to write). So let’s just name some that I consider worth it: ack – a powerful grep replacement (which I might document in a separate blog post), notify-send, and all sorts of aliases and zsh/bash functions that would be too many to list here and, moreover, might be too specific and useless for others. Notify-send I blogged about notify-send already. I use it for all the app-server (in my case Glassfish) lifecycle management operations (start/stop/restart domain) as well as deployment. As these take quite some time to finish and might end up with errors, I keep myself updated with a notification containing the exit code and a chunk of the last couple of lines from the log file. This is great, as I can work in parallel and it gets my attention once the job is done. Desktop app launcher Without an option for fast startup of my favourite programs, I’d waste my time searching through icons and menus. As Xfce doesn’t provide me with a powerful one, I use Synapse. File manager Well, there are times when I play with files on the command line, but sometimes it just fits better to use some UI for the purpose. My choice is Krusader. Most of the stuff I need is available, namely: file/folder manipulation, file/folder comparison, and file/folder content search. The only thing that bothers me is that development of the project has rather stalled. Still, a viable alternative might be Double Commander, which seems to be cross-platform and can even use Total Commander plugins (Total Commander used to be my choice on Windows for the purpose). Editor I tried to live with Gedit, and for simple note-taking it might be a good choice; however, as I like to play with Ruby these days, I tried to find something that would help more in that area. (G)Vim After searching, I came to the conclusion that (G)Vim is quite popular in the Linux world. 
Well, it’s worth mentioning the vim joke here, which expresses the feelings of many about this editor: the Infinite Vim monkey theorem (see the Infinite monkey theorem for an explanation). Still, I’ve seen many people favouring it; it’s the first editor I remember having available once logging in to my school Linux account (OK, no real argument, just sentiment); it seems to provide countless plugins for all sorts of stuff; and it’s incredibly powerful as far as I’ve seen, so learning it might pay back. So I decided circa 2 years back to uninstall the editors I’d been used to and force myself to use (G)Vim. A great source of information for me was vimcasts, which helped me a lot in this area. Well, I plan to document my .vimrc setup in a separate post, as it would make this one way too long. Files sync tool Unison is my choice here. For more info on my use case, see this blog post. Conclusion Hope you find some inspiration here. I would be glad to hear from you about anything I missed but you could not live without. Still, I can’t believe anyone has read this far. I guess I would not force myself to!Reference: Linux productivity tools for me from our JCG partner Peter Butkovic at the pb’s blog about life and IT blog....

How to Use Callable and FutureTask

Introduction Since Java 1.5 there has been a new set of objects under java.util.concurrent. This package has a number of different classes, including thread queues. I could have used those when I was programming with Java 1.2! When I started looking at the new toys I became hesitant. What is this Callable thing, and what is the Future? It turns out that there is nothing wrong with Future and Callable. In fact, they are what I had been hoping and looking for in my Java career. Differences Between Callable and Runnable Callable is what Runnable hoped to become. Callable’s only method is “T call().” What makes it so neat is that it returns something. This is a step above having to create a getter for the answer to a task. While this is cool, there needs to be a way to get at the returned value. The Future is here Future has a way to get the value out when the Callable is done. The method is get() or get(long timeout, TimeUnit unit). This is the equivalent of calling thread.join(); runnable.getValue() at the same time. Example I created a class called CounterCallable. All it does is add the numbers from the variable start to the variable end. CounterCallable

package org.mathison.futurecallable;

import java.util.concurrent.Callable;

/**
 * @author Daryl
 */
public class CounterCallable implements Callable<SumTimeAnswer> {

    private long start;
    private long end;

    public CounterCallable(long start, long end) {
        this.start = start;
        this.end = end;
    }

    @Override
    public SumTimeAnswer call() throws Exception {
        long sum = 0;
        long startTime = System.currentTimeMillis();
        for (long i = start; i <= end; i++) {
            sum += i;
        }
        long endTime = System.currentTimeMillis();
        return new SumTimeAnswer(sum, endTime - startTime);
    }
}

SumTimeAnswer Class SumTimeAnswer is really a simple getter class that holds the sum and the amount of time it took to do the operation. 
package org.mathison.futurecallable;

/**
 * @author Daryl
 */
public class SumTimeAnswer {

    private long timeToFinish;
    private long sum;

    public SumTimeAnswer(long sum, long timeToFinish) {
        this.sum = sum;
        this.timeToFinish = timeToFinish;
    }

    public long getTimeToFinish() {
        return timeToFinish;
    }

    public long getSum() {
        return sum;
    }
}

App App is just a main class pulling everything together:

package org.mathison.futurecallable;

import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;

public class App {

    public static final long BEGIN = 0;
    public static final long END = 100000;

    public static void main(String[] args) {
        FutureTask<SumTimeAnswer> task = new FutureTask<>(new CounterCallable(BEGIN, END));
        FutureTask<SumTimeAnswer> firstHalf = new FutureTask<>(new CounterCallable(BEGIN, END / 2));
        FutureTask<SumTimeAnswer> secondHalf = new FutureTask<>(new CounterCallable(END / 2 + 1, END));
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(task);
        pool.submit(firstHalf);
        pool.submit(secondHalf);
        try {
            SumTimeAnswer taskAnswer = task.get();
            System.out.println("just one thread Time: " + taskAnswer.getTimeToFinish()
                    + " Total: " + taskAnswer.getSum());
            SumTimeAnswer taskFirstAnswer = firstHalf.get();
            SumTimeAnswer taskSecondAnswer = secondHalf.get();
            long totalTime = taskFirstAnswer.getTimeToFinish() + taskSecondAnswer.getTimeToFinish();
            long totalSum = taskFirstAnswer.getSum() + taskSecondAnswer.getSum();
            System.out.println("Two thread time: " + totalTime + " Total: " + totalSum);
        } catch (CancellationException | InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
        pool.shutdown();
    }
}

Conclusion In this post, the classes Callable and FutureTask were used to demonstrate how to use the java.util.concurrent package.Reference: How to Use Callable and FutureTask from our JCG partner Daryl Mathison at the Daryl Mathison’s Java Blog blog....
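As a side note to the example above, you rarely need to construct FutureTask yourself: ExecutorService.submit(Callable) wraps the task and hands back a Future, and get(timeout, unit) gives the bounded wait mentioned earlier. A compact variation on the article's example (my sketch, not the author's code):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class SubmitCallableDemo {

    // Same arithmetic as CounterCallable: sum the longs from start to end
    static long sum(long start, long end) {
        long total = 0;
        for (long i = start; i <= end; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            // submit(Callable<T>) wraps the lambda in a FutureTask for us
            Future<Long> whole = pool.submit(() -> sum(0, 100_000));
            Future<Long> halves = pool.submit(() -> sum(0, 50_000) + sum(50_001, 100_000));

            // get(timeout, unit) is the bounded-wait variant of get()
            System.out.println("whole:  " + whole.get(5, TimeUnit.SECONDS));
            System.out.println("halves: " + halves.get(5, TimeUnit.SECONDS));
        } finally {
            pool.shutdown();
        }
    }
}
```

Both futures resolve to the same total, and the timeout-taking get() protects the caller from a task that hangs.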

Legacy Code to Testable Code #5: Extract Class

This post is part of the “Legacy Code to Testable Code” series. In the series we talk about making refactoring steps before writing tests for legacy code, and how they make our life easier. A few years ago I got this from Erik Talboom: “A private method is a design smell”. It took me a while to fully understand it and to apply it. There’s a nice logic in there. If at some point we extract a private method from inside a public method, it probably means the public method did too much. It was long and procedural, and probably performed a number of operations. Under those circumstances, it made sense to extract some of the logic into the private method. It made sense that we could name it more clearly, as the extracted method was simpler. It also means that the public method broke the Single Responsibility Principle. After all, we just broke it in two (at least). If that’s the case, the private method we extracted contains functionality separate from the rest of the public method. This functionality we extracted should probably be tested. It would be easier to test, because it’s smaller. If we keep it private but call it from the public method, we’d prefer not to test it directly, but through the public interface. If, however, we extract it to a new class, testing both will be easier, because both are simpler. Testing and simplicity go hand in hand, and extracting into a separate class makes sense in so many cases. Too bad it’s not applied often. 
Here’s a simple example:

public void UpdateStreet(string newStreet)
{
    if (!string.IsNullOrEmpty(newStreet)
        && !newStreet.StartsWith(" ")
        && !newStreet.StartsWith("@"))
    {
        Address address = new Address(this);
        address.SetStreet(newStreet);
    }
}

It makes sense to extract the validation:

public void UpdateStreet(string newStreet)
{
    if (ValidateStreet(newStreet))
    {
        Address address = new Address(this);
        address.SetStreet(newStreet);
    }
}

private bool ValidateStreet(string street)
{
    return !string.IsNullOrEmpty(street)
        && !street.StartsWith(" ")
        && !street.StartsWith("@");
}

If we keep it like that, testing the validation is problematic. Instead, we can extract the method into a separate class called StreetValidator:

public void UpdateStreet(string newStreet)
{
    if (StreetValidator.Validate(newStreet))
    {
        Address address = new Address(this);
        address.SetStreet(newStreet);
    }
}

Now we can test the Validate method, and then the original UpdateStreet method, separately. We could also expose the method as public, or make it static since it doesn’t change any state. However, this may not always be the case. Sometimes, in order to perform the separation, we need to actually cut the cord. Suppose that our validation now includes a comparison to the current address’ street:

public void UpdateStreet(string newStreet)
{
    if (!string.IsNullOrEmpty(newStreet)
        && !newStreet.StartsWith(" ")
        && !newStreet.StartsWith("@")
        && currentAddress.GetStreet().CompareTo(newStreet) == 0)
    {
        Address address = new Address(this);
        address.SetStreet(newStreet);
    }
}

currentAddress is a field in our class, so it’s easy to extract the check into a private method:

private bool ValidateStreet(string street)
{
    return !string.IsNullOrEmpty(street)
        && !street.StartsWith(" ")
        && !street.StartsWith("@")
        && currentAddress.GetStreet().CompareTo(street) == 0;
}

However, extracting this into a separate class requires us to pass currentAddress as a parameter. We can do this in two steps. 
First, we change the signature of the method and add a parameter with the same name as the field:

private bool ValidateStreet(string street, Address currentAddress)
{
    return !string.IsNullOrEmpty(street)
        && !street.StartsWith(" ")
        && !street.StartsWith("@")
        && currentAddress.GetStreet().CompareTo(street) == 0;
}

Now that we have “shadowed” the field, we have decoupled the method from its class. The method can now be extracted to a separate class. I find people accept extract class (if it’s safe and easy) more than exposing methods or creating accessors. The effect is the same (and the risk that someone will call it is the same), but the simplicity makes it more bearable, I guess. Extracting a class reduces the complexity of the code and of testing. Instead of a combinatorial number of code paths that need testing, we have lowered the test cases to a linear number. Testing is not only possible, but also more probable – we are always likely to test more when there is less to test. That trick we did when we added the parameter? We’ll discuss it in more detail next.Reference: Legacy Code to Testable Code #5: Extract Class from our JCG partner Gil Zilberfeld at the Geek Out of Water blog....
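For Java readers, the same shadow-the-field-then-extract move looks like this. The class and method names are hypothetical, and the comparison clause simply mirrors the article's C# check (valid only when the streets match):

```java
public class StreetValidatorDemo {

    // Step 2 in Java: the former field (the current address' street) is now a
    // parameter, so the validation is decoupled from its class and testable alone.
    // The final clause mirrors the article's check: valid only when the streets match.
    static boolean validate(String street, String currentStreet) {
        return street != null && !street.isEmpty()
                && !street.startsWith(" ")
                && !street.startsWith("@")
                && currentStreet.compareTo(street) == 0;
    }

    public static void main(String[] args) {
        System.out.println(validate("Baker Street", "Baker Street")); // passes every clause
        System.out.println(validate("@nowhere", "Baker Street"));     // rejected: leading @
        System.out.println(validate("", "Baker Street"));             // rejected: empty
    }
}
```

Because validate is now a pure function of its parameters, each clause can be exercised with a one-line test, which is exactly the linear-instead-of-combinatorial payoff described above.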
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.