
Deep diving into Cloning

Before we proceed with the cloning concept, let's refresh our basics on object creation. When an object is created using the new operator, it is allocated memory on the heap. In Java, objects are ideally manipulated through reference variables only, i.e. only the memory address of the object is copied, so any change made through one reference is visible through the other:

Glass objGlass1 = new Glass();
Glass objGlass2 = objGlass1;

Here, any changes you make through objGlass1 will be reflected when the object is read through objGlass2, and vice versa. That means objGlass1 == objGlass2 returns true: both reference variables refer to the same object. However, if you intend to copy the object itself, rather than just copying a reference to it, you need cloning.

What is cloning? Cloning is the process of copying an object, i.e. creating a new instance by copying the existing one. In Java, cloning is done via the clone() method of Object. It creates and returns a copy of the object, of the same class and with all fields holding the same values:

Glass objGlass1 = new Glass();
Glass objGlass2 = (Glass) objGlass1.clone();

Let's analyse the state after cloning:

objGlass1 != objGlass2 returns true, which means objGlass1 and objGlass2 refer to two different memory locations, i.e. two different objects.
objGlass1.getClass() == objGlass2.getClass() returns true, which means the cloned object and the original object are of the same type.
objGlass1.equals(objGlass2) returns true, which means the cloned object's data is equal to the original's (although either can be changed at any time after cloning).

Shallow Cloning vs Deep Cloning

Java supports two types of cloning: shallow cloning and deep cloning. In shallow cloning, a new object is created that holds an exact copy of the values in the original object. The clone() method of Object provides shallow cloning.
In this cloning mechanism the object is copied without the objects it contains: a shallow clone copies only the top-level structure of the object, not the lower levels.

Structure: In the above diagram, OriginalObject1 has Field1 and a contained object called ReferenceObject1. During shallow cloning of OriginalObject1, ClonedObject2 is created with Field2 holding the value copied from Field1, while still pointing to the same ReferenceObject1. The reason is that Field1 is of a primitive type, so its value is copied into Field2; however, since ReferenceObject1 is an object, ClonedObject2 points to the same ReferenceObject1. Any changes made to ReferenceObject1 will therefore be visible to ClonedObject2.

In deep cloning, all fields are copied: even the referenced objects are copied into the clone along with the fields. As shown in the above figure, OriginalObject1 has a primitive-type Field1 and a ReferenceObject1. When we deep-clone OriginalObject1, ClonedObject2 is created with Field2 holding the values copied from Field1 and with a new ReferenceObject2 containing the copied values of ReferenceObject1.

Example of Shallow Cloning

In the above example we have an original Employee object which holds a reference to a Department object and a field EmployeeName. Initially, let's assume EmployeeName = "Chris" and DepartmentName = "Sales". When we clone the Employee object through shallow cloning, a cloned Employee object is created with a duplicated EmployeeName field and a Department reference. Note, however, that no duplicate Department object is created: the cloned Employee refers to the same memory address as the original's Department. So when we change the original object's EmployeeName to "Peter" and DepartmentName to "Finance", the cloned EmployeeName field does not change.
It still contains the old value (as per the above section of the diagram). However, notice that the cloned DepartmentName has now been modified to "Finance" to reflect the change. This is because the cloned Employee refers to the same Department instance as the original object, so any change made through the original object's reference is also visible to the cloned object. The referenced object does not get duplicated the way the fields do.

Code Example for Shallow Cloning

Department.java (the referenced object):

public class Department {
    private String deptName;

    public Department(String str) {
        deptName = str;
    }

    public String getDeptName() {
        return deptName;
    }

    public void setDeptName(String deptName) {
        this.deptName = deptName;
    }
}

Employee.java (the main object):

public class Employee implements Cloneable {
    private String employeeName;
    private Department dept;

    public Employee(String emp, String empDept) {
        employeeName = emp;
        dept = new Department(empDept);
    }

    public String getEmployeeName() {
        return employeeName;
    }

    public void setEmployeeName(String employeeName) {
        this.employeeName = employeeName;
    }

    public Department getDept() {
        return dept;
    }

    public void setDept(Department dept) {
        this.dept = dept;
    }

    public Object clone() {
        try {
            return super.clone();
        } catch (CloneNotSupportedException e) {
            e.printStackTrace();
            return null;
        }
    }
}

Client.java:

public static void main(String[] args) {
    Employee emp = new Employee("Chris", "Sales");
    System.out.println("Original object value - Employee Name: " + emp.getEmployeeName()
            + " & department name: " + emp.getDept().getDeptName());

    Employee clonedEmployee = (Employee) emp.clone();
    System.out.println("Cloned object value - Employee Name: " + clonedEmployee.getEmployeeName()
            + " & department name: " + clonedEmployee.getDept().getDeptName());

    // Now let's change the values of the original object
    emp.setEmployeeName("Peter");
    emp.getDept().setDeptName("Finance");
    System.out.println("Original object value after modification - Employee Name: "
            + emp.getEmployeeName() + " & department name: " + emp.getDept().getDeptName());
    System.out.println("Cloned object value after modification of original object - Employee Name: "
            + clonedEmployee.getEmployeeName() + " & department name: "
            + clonedEmployee.getDept().getDeptName());
}

Deep Cloning Example

In deep cloning, unlike shallow cloning, all the fields of the original object are copied to the clone, including the objects referenced by the original object. The process makes a copy of the dynamically allocated memory pointed to by the reference fields. In the above example, even if the original object is modified and its values are changed, the cloned object does not change, including the referenced object's value, since it no longer refers to the same memory address.

Code Example for Deep Cloning

For deep cloning, the only change is in the clone() method. Unlike shallow cloning, super.clone() is not called; instead a new object is created using the new operator inside clone():

public Object clone() {
    // Deep copy process
    Employee e = new Employee(employeeName, dept.getDeptName());
    return e;
}

Hope you have enjoyed this article. Please feel free to provide your feedback and comments.

Reference: Deep diving into Cloning from our JCG partner Mainak Goswami at the Idiotechie blog.
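To see both behaviors end to end, here is a compact, runnable sketch condensed from the article's Employee/Department example. The nested classes and the deepClone() helper name are mine, added for brevity; the semantics follow the article:

```java
// Demonstrates shallow vs deep cloning of the article's Employee/Department pair.
public class CloneDemo {

    static class Department {
        String deptName;

        Department(String deptName) {
            this.deptName = deptName;
        }
    }

    static class Employee implements Cloneable {
        String employeeName;
        Department dept;

        Employee(String name, String deptName) {
            this.employeeName = name;
            this.dept = new Department(deptName);
        }

        // Shallow copy: the Department reference is shared with the original.
        @Override
        public Employee clone() {
            try {
                return (Employee) super.clone();
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e); // cannot happen: we implement Cloneable
            }
        }

        // Deep copy: a brand-new Department is created as well.
        Employee deepClone() {
            return new Employee(employeeName, dept.deptName);
        }
    }

    public static void main(String[] args) {
        Employee original = new Employee("Chris", "Sales");
        Employee shallow = original.clone();
        Employee deep = original.deepClone();

        original.employeeName = "Peter";
        original.dept.deptName = "Finance";

        // The String field was copied, so neither clone sees the name change...
        System.out.println(shallow.employeeName); // Chris
        System.out.println(deep.employeeName);    // Chris
        // ...but the shallow clone shares the Department, the deep clone does not.
        System.out.println(shallow.dept.deptName); // Finance
        System.out.println(deep.dept.deptName);    // Sales
    }
}
```

Running it prints the four lines shown in the comments: only the shallow clone observes the department rename.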

Java2Days 2012: Java EE

The Java2Days conference is the major event in Eastern Europe presenting the latest trends in Java development. This year it took place in Sofia, Bulgaria on 25 – 26 October. I was there and enjoyed the opportunity to taste some cutting-edge Java, cloud, and mobile content delivered right to my home city, together with a few of my colleagues from SAP. I visited this event for the second year in a row. This year it was bigger and included new things such as "Free Tech Trainings". There is also a contest (open until 11 November) with some cool programming problems to solve. In this blog post I would like to share short summaries of some of the conference sessions I attended, based on my notes and some additional research I did afterwards:

- JavaEE.Next(): Java EE 7, 8 and Beyond, by Reza Rahman
- Building HTML5/WebSocket Applications with GlassFish, Grizzly and JSR 356, by Reza Rahman
- Domain Driven Design with Java EE 6, by Reza Rahman

JavaEE.Next(): Java EE 7, 8 and Beyond

In the conference opening session, Reza Rahman, a Java EE / GlassFish evangelist at Oracle, presented the key changes in the upcoming 7th edition of Java EE, which is to be released in March/April next year, as well as a glimpse into Java EE 8. We learned that cloud support is postponed to Java EE 8, and saw an overview of various changes such as an overhaul of JMS, WebSocket and HTML5 support, a standard API for JSON processing, the next version of JAX-RS, the Caching API, and more.

JMS 2.0

The new version of Java Message Service (JMS), currently being developed as JSR 343, introduces some major improvements over the existing JMS 1.1, which was released almost 5 years ago. It introduces new messaging features such as batch delivery and delivery delay, as well as several changes that make the JMS API simpler and easier to use, such as new annotations and dependency injection.
With the new API, sending a message essentially involves injecting a context, looking up a queue, and finally a single method call, with all needed objects created on the fly:

@Inject
private JMSContext context;

@Resource(mappedName = "jms/inboundQueue")
private Queue inboundQueue;

public void sendMessage(String payload) {
    context.createProducer().send(inboundQueue, payload);
}

See also JMS 2.0 Early Draft.

Java API for WebSocket

The Java API for WebSocket (JSR 356) is a new specification introduced for the first time with Java EE 7. It features both a server-side and a client-side API for WebSocket, in two different flavors: programmatic and declarative. For more information, see the summary of the dedicated session on this topic below.

Java API for JSON Processing

The Java API for JSON Processing (JSR 353) is also a new specification, which brings the ability to parse, generate, transform, and query JSON content. It features both object model and streaming APIs, similar to DOM and StAX in the XML world. In the future there will also be binding of JSON to Java objects, similar to JAXB, but for the moment this is not part of the scope of this JSR. See also JSON-P: Java API for JSON Processing.

JAX-RS 2.0

The major additions in the next version of the Java API for RESTful Web Services (JAX-RS) are the standard client API, as well as new features such as message filters and handlers, entity interceptors, asynchronous processing, hypermedia support, and common configuration. See also JAX-RS 2.0 Early Draft Explained.

Java Caching API

The Java Caching API (JSR 107) has been languishing for almost 10 years, and is finally going to be part of Java EE. It introduces a standard API and annotations for storing and retrieving objects from a cache.
The API is pretty simple; for example, putting a customer object into a cache would involve code similar to the following:

public class CustomerDao {
    @CachePut(cacheName = "customers")
    public void addCustomer(@CacheKeyParam int cid, @CacheValue Customer customer) {
        ...
    }
}

Other Java EE 7 Features

- JPA 2.1 adds built-in schema generation, including indexes, as well as support for stored procedures and other enhancements.
- JTA 1.2 introduces support for declarative transactions outside EJB, and the @TransactionScoped annotation.
- JSF 2.2 introduces better HTML5 support, including islands of HTML5 in JSF pages and binding of HTML5 input types to JSF beans, as well as additional annotations such as @FlowScoped and @ViewScoped, CDI alignment, etc.
- Batch Processing for Java EE is a new specification which introduces an API for batch processing based on jobs which consist of steps, each one involving reading, processing, and writing items. Instead of implementing interfaces, the methods for reading, processing, and writing items can simply be annotated as @ReadItem, @ProcessItem, and @WriteItem.
- Bean Validation 1.1 introduces method constraints and injectable artifacts.

Java EE 8

The 8th edition of Java EE is expected to be released 2 years after Java EE 7, that is, in 2015. It will focus on the following areas:

- Cloud, PaaS, multitenancy
- Modularity based on Jigsaw
- Even better HTML5 support
- More CDI and EJB alignment
- NoSQL (?)

Summary

This was indeed a lot of information to swallow in 45 minutes, but the speaker did a good job delivering it. It seems that Java EE 7 will be a solid addition to the series, and even more major features will come with Java EE 8. The slides from this session are available here. You can try these new features out by downloading GlassFish 4. To keep up to date, you can follow The Aquarium blog.
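The JSR 107 annotations above rely on a container or caching provider to intercept the annotated calls. Outside a container, the effect of something like @CacheResult boils down to map-backed memoization; here is a hand-rolled, JDK-only sketch of that idea (the class and method names are mine, not part of the JSR):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// JDK-only sketch of what a JSR 107 caching interceptor effectively does:
// look the key up in a named cache and only compute the value on a miss.
public class NaiveCache<K, V> {
    private final Map<K, V> store = new ConcurrentHashMap<>();

    // Equivalent in spirit to @CacheResult: compute on miss, reuse on hit.
    public V getOrCompute(K key, Function<K, V> loader) {
        return store.computeIfAbsent(key, loader);
    }

    // Equivalent in spirit to @CachePut: store unconditionally.
    public void put(K key, V value) {
        store.put(key, value);
    }

    public static void main(String[] args) {
        NaiveCache<Integer, String> customers = new NaiveCache<>();
        customers.put(1, "Customer#1");
        // Hit: the loader is not invoked, the cached value is returned.
        System.out.println(customers.getOrCompute(1, id -> "loaded-" + id)); // Customer#1
        // Miss: the loader runs and the result is cached for next time.
        System.out.println(customers.getOrCompute(2, id -> "loaded-" + id)); // loaded-2
    }
}
```

The real JCache API adds expiry, listeners, and provider plumbing on top, but the hit/miss contract is essentially this.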
Building HTML5/WebSocket Applications with GlassFish, Grizzly and JSR 356

This was in fact the last conference session, in which Reza Rahman presented one very interesting part of the upcoming Java EE 7, namely the support for WebSocket.

WebSocket Primer

Two-way communication over the half-duplex HTTP protocol has always been a challenge. Flavors of server push such as polling, long-polling, or AJAX have been introduced, but they are complex, inefficient, and wasteful. In contrast, WebSocket provides a bi-directional, full-duplex communication channel over a single TCP socket. Unlike pure TCP, however, all communication is done over the standard HTTP port (e.g. 80), which allows it to work in environments that block non-standard Internet connections with a firewall. It was originally proposed as part of HTML5. W3C defines a very simple WebSocket JavaScript API and a WebSocket Protocol. The protocol defines "handshake" and "framing": the handshake defines how a normal HTTP connection can be upgraded to a WebSocket connection, while the framing defines the wire format of the message. The API defines methods for opening and closing a connection and for sending a message in various flavors. It is currently supported by all major browsers.

Java API for WebSocket

Java EE 7 introduces WebSocket support via the Java API for WebSocket (JSR 356). The reference implementation is called Tyrus and is part of GlassFish 4. The specification defines two different API flavors, programmatic (based on interfaces) and declarative (based on annotations), both for the client and the server. The declarative API is deliberately simple and completely hides the WebSocket internals from the developer.
In its simplest form, a WebSocket endpoint can be defined as follows:

@WebSocketEndpoint("/hello")
public class HelloBean {

    @WebSocketMessage
    public String sayHello(String name) {
        return "Hello " + name;
    }
}

The following annotations are available:

- @WebSocketEndpoint is a class-level annotation that declares a POJO to accept WebSocket messages. The path at which the messages are accepted is specified in this annotation. Optionally, it can also specify decoders, encoders, and subprotocols.
- @WebSocketMessage is a method-level annotation for the method that is invoked when the endpoint receives a message. The value returned from this method is sent back to the other side.
- @WebSocketOpen and @WebSocketClose are method-level annotations for intercepting connection open and close events.
- @WebSocketError is a method-level annotation for intercepting errors during the conversation.
- @WebSocketParam is a parameter-level annotation for path segments that are passed as parameters.

Summary

WebSocket seems to be indeed a much better alternative to the currently used server push approaches, so it's cool that support for it is coming to Java EE quite soon. The slides from this session are available here. There is also a similar presentation already available on Slideshare, Building HTML5 WebSocket Apps in Java.

Domain Driven Design with Java EE 6

The third Java EE session by Reza Rahman focused on Domain Driven Design (DDD). This is an approach to developing software by connecting the implementation to an evolving model. Its premise is placing the project's primary focus on the core domain and domain logic, and basing designs on a model of the domain. The term was first coined by Eric Evans in his book of the same title. DDD emphasizes a return to Object Oriented Analysis and Design, and is gradually gaining popularity as an alternative to traditional architectures originally popularized by the J2EE Blue Prints.
The goal of the session was to demonstrate how DDD can be implemented using Java EE 6 by mapping its core concepts to Java EE specifications, and by providing a comprehensive code example.

Core Concepts

- Domain: A sphere of knowledge (ontology), influence, or activity.
- Model: A system of abstractions that describes selected aspects of a domain and can be used to solve problems related to that domain.
- Ubiquitous Language: A language structured around the domain model and used by all team members.
- Context: The setting in which a word or statement appears that determines its meaning.

Building Blocks

The diagram below is the "canonical image" of Domain Driven Design.

- Entity: An object that is not defined by its attributes, but rather by a thread of continuity and its identity.
- Value Object: An object that contains attributes but has no conceptual identity.
- Aggregate: A collection of objects that are bound together by a root entity, otherwise known as an aggregate root.
- Service: When an operation does not conceptually belong to any object.
- Repository: Methods for retrieving domain objects should delegate to a specialized Repository object.
- Factory: Methods for creating domain objects should delegate to a specialized Factory object.

Maintaining Model Integrity (Strategic DDD)

Layers

- User Interface: Shows the user information and receives input.
- Application: Thin layer to coordinate application activity. No business logic or business object state is in this layer.
- Domain: All the business objects and their state reside here. Objects in the domain layer should be free of concern about displaying or persisting themselves.
- Infrastructure: The objects dealing with housekeeping tasks like persistence.
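As a small, concrete illustration of the Value Object building block above, here is a JDK-only sketch. The Money class is my own example, not taken from the session's sample application; it shows the defining traits: immutability and equality based on state rather than identity:

```java
import java.util.Objects;

// A DDD Value Object: defined entirely by its attributes, immutable,
// with equality based on state rather than object identity.
public final class Money {
    private final long amountInCents;
    private final String currency;

    public Money(long amountInCents, String currency) {
        this.amountInCents = amountInCents;
        this.currency = Objects.requireNonNull(currency);
    }

    // Operations return new instances instead of mutating state.
    public Money add(Money other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        return new Money(amountInCents + other.amountInCents, currency);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Money)) {
            return false;
        }
        Money m = (Money) o;
        return amountInCents == m.amountInCents && currency.equals(m.currency);
    }

    @Override
    public int hashCode() {
        return Objects.hash(amountInCents, currency);
    }

    public static void main(String[] args) {
        Money a = new Money(1000, "EUR");
        Money b = new Money(1000, "EUR");
        // Two separately constructed instances with the same state are equal:
        System.out.println(a.equals(b)); // true
        System.out.println(a.add(b).equals(new Money(2000, "EUR"))); // true
    }
}
```

An Entity, by contrast, would carry an identity field and base equals/hashCode on that identity alone.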
Mapping DDD to Java EE

DDD Concept / Layer                    | Java EE Concept
---------------------------------------|-------------------------------
Context                                | Packaging, Modularity
UI Layer                               | JSF, Servlet
Services, Application Layer            | EJB, CDI
Entities, Value Objects, Repositories  | JPA (Entities), CDI
Infrastructure Layer                   | JPA (Entity Manager API), JMS

Sample Application

The sample application that was demoed during the session is an evolution of the DDD Sample already available at SourceForge. It is a shipping tracker application. The packaging structure of the application clearly reveals the different layers mentioned above. The renovated DDD sample application will be published soon as a java.net project. Until then, you can download and play with the existing DDD sample: it builds with Maven, can be run in an embedded Jetty, and can of course be imported into Eclipse for a deeper look.

Summary

I found DDD quite interesting, even though initially it was a bit hard to grasp. Reza explained everything in detail; however, this shortened the time available for demoing and showing the code of the sample application. My colleagues and I were intrigued by the fact that the business logic in the sample is captured in the domain layer classes themselves, without any controller or facade. This initially seemed a somewhat unusual approach to me, but upon second thought it made perfect sense. The slides from this session are available here. While researching DDD after the conference, I found the following interesting resources:

- Domain Driven Design Quickly, a short, quickly readable summary and introduction to the fundamentals of DDD.
- Domain-driven design with Java EE 6, article in JavaWorld (2009).
- Domain Driven Design Community.

Reference: Java2Days 2012: Java EE from our JCG partner Stoyan Rachev at the Stoyan Rachev's Blog blog.

Couchbase : Create a large dataset using Twitter and Java

An easy way to create a large dataset when playing with or demonstrating Couchbase – or any other NoSQL engine – is to inject a Twitter feed into your database. For this small application I am using:

- Couchbase Server 2.0
- Couchbase Java SDK (will be installed by Maven)
- Twitter4J (will be installed by Maven)
- the Twitter Streaming API, called using Twitter4J

In this example I am using Java to inject tweets into Couchbase; you can obviously use another language if you want to. The sources of this project are available on my GitHub repository, Twitter Injector for Couchbase. You can also download the binary version here and execute the application from the command line; see the Run the Application paragraph. Do not forget to create your Twitter OAuth keys (see next paragraph).

Create OAuth Keys

The first thing to do to be able to use the Twitter API is to create a set of keys. If you want to learn more about all these keys/tokens, take a look at the OAuth protocol: http://oauth.net/

1. Log in to the Twitter development portal: https://dev.twitter.com/

2. Create a new application. Click on the 'Create an App' link or go to 'User Menu > My Applications > Create a new application'.

3. Enter the application details information.

4.
Click the 'Create Your Twitter Application' button. Your application's OAuth settings are now available.

5. Go down the Application Settings page and click on the 'Create My Access Token' button.

You now have all the necessary information to create your application:

- Consumer key
- Consumer secret
- Access token
- Access token secret

These keys will be used in the twitter4j.properties file when running the Java application from the command line; see Run the Java Application.

Create the Java Application

The following code is the main code of the application:

package com.couchbase.demo;

import com.couchbase.client.CouchbaseClient;
import org.json.JSONException;
import org.json.JSONObject;
import twitter4j.*;
import twitter4j.json.DataObjectFactory;

import java.io.InputStream;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

public class TwitterInjector {

    public final static String COUCHBASE_URIS = "couchbase.uri.list";
    public final static String COUCHBASE_BUCKET = "couchbase.bucket";
    public final static String COUCHBASE_PASSWORD = "couchbase.password";

    private List<URI> couchbaseServerUris = new ArrayList<URI>();
    private String couchbaseBucket = "default";
    private String couchbasePassword = "";

    public static void main(String[] args) {
        TwitterInjector twitterInjector = new TwitterInjector();
        twitterInjector.setUp();
        twitterInjector.injectTweets();
    }

    private void setUp() {
        try {
            Properties prop = new Properties();
            InputStream in = TwitterInjector.class.getClassLoader()
                    .getResourceAsStream("twitter4j.properties");
            if (in == null) {
                throw new Exception("File twitter4j.properties not found");
            }
            prop.load(in);
            in.close();

            if (prop.containsKey(COUCHBASE_URIS)) {
                String[] uriStrings = prop.getProperty(COUCHBASE_URIS).split(",");
                for (int i = 0; i < uriStrings.length; i++) {
                    couchbaseServerUris.add(new URI(uriStrings[i]));
                }
            } else {
                couchbaseServerUris.add(new URI(""));
            }

            if (prop.containsKey(COUCHBASE_BUCKET)) {
                couchbaseBucket = prop.getProperty(COUCHBASE_BUCKET);
            }

            if (prop.containsKey(COUCHBASE_PASSWORD)) {
                couchbasePassword = prop.getProperty(COUCHBASE_PASSWORD);
            }
        } catch (Exception e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
    }

    private void injectTweets() {
        TwitterStream twitterStream = new TwitterStreamFactory().getInstance();
        try {
            final CouchbaseClient cbClient = new CouchbaseClient(
                    couchbaseServerUris, couchbaseBucket, couchbasePassword);
            System.out.println("Send data to : " + couchbaseServerUris + "/" + couchbaseBucket);

            StatusListener listener = new StatusListener() {

                @Override
                public void onStatus(Status status) {
                    String twitterMessage = DataObjectFactory.getRawJSON(status);
                    // extract the id_str from the JSON document
                    // see : https://dev.twitter.com/docs/twitter-ids-json-and-snowflake
                    try {
                        JSONObject statusAsJson = new JSONObject(twitterMessage);
                        String idStr = statusAsJson.getString("id_str");
                        cbClient.add(idStr, 0, twitterMessage);
                        System.out.print(".");
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                }

                @Override
                public void onDeletionNotice(StatusDeletionNotice statusDeletionNotice) { }

                @Override
                public void onTrackLimitationNotice(int numberOfLimitedStatuses) { }

                @Override
                public void onScrubGeo(long userId, long upToStatusId) { }

                @Override
                public void onException(Exception ex) {
                    ex.printStackTrace();
                }
            };

            twitterStream.addListener(listener);
            twitterStream.sample();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Some basic explanation:

- The setUp() method simply reads the twitter4j.properties file from the classpath to build the Couchbase connection string.
- The injectTweets() method opens the Couchbase connection and calls the Twitter streaming API. A listener is created that will receive all status updates from Twitter.
- The most important method is onStatus(), which receives each message and saves it into Couchbase.

One interesting thing: since Couchbase is a JSON document database, you can take the JSON string and save it directly.
cbClient.add(idStr, 0, twitterMessage);

Packaging

To be able to execute the application directly from the jar file, I am using the assembly plugin with the following information in the pom.xml:

...
<archive>
  <manifest>
    <mainClass>com.couchbase.demo.TwitterInjector</mainClass>
  </manifest>
  <manifestEntries>
    <Class-Path>.</Class-Path>
  </manifestEntries>
</archive>
...

Some information:

- The mainClass entry allows you to set which class to execute when running the java -jar command.
- The Class-Path entry adds the current directory to the classpath, where the program will search for the twitter4j.properties file.
- The assembly file is also configured to include all the dependencies (Twitter4J, Couchbase client SDK, ...).

If you want to build it from the sources, simply run:

mvn clean package

This will create the following jar file:

./target/CouchbaseTwitterInjector.jar

Run the Java Application

Before running the application you must create a twitter4j.properties file with the following information:

twitter4j.jsonStoreEnabled=true

oauth.consumerKey=[YOUR CONSUMER KEY]
oauth.consumerSecret=[YOUR CONSUMER SECRET KEY]
oauth.accessToken=[YOUR ACCESS TOKEN]
oauth.accessTokenSecret=[YOUR ACCESS TOKEN SECRET]

couchbase.uri.list=
couchbase.bucket=default
couchbase.password=

Save the properties file and from the same location run:

java -jar [path-to-jar]/CouchbaseTwitterInjector.jar

This will inject tweets into your Couchbase server. Enjoy!

Reference: Couchbase : Create a large dataset using Twitter and Java from our JCG partner Tugdual Grall at the Tug's Blog blog.
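The setUp() method shown above is plain java.util.Properties plumbing; here is a stripped-down, runnable sketch of the same parsing logic. The key names match the article's twitter4j.properties; the helper method, class name, and sample values are mine:

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

// Mirrors the idea of TwitterInjector.setUp(): load the properties and
// split the comma-separated Couchbase URI list, with sensible defaults.
public class ConfigDemo {

    // Split a comma-separated list of URIs into trimmed strings.
    static List<String> parseUris(String value) {
        List<String> uris = new ArrayList<>();
        for (String uri : value.split(",")) {
            uris.add(uri.trim());
        }
        return uris;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the twitter4j.properties file on the classpath.
        String file = "couchbase.uri.list=http://127.0.0.1:8091/pools, http://127.0.0.2:8091/pools\n"
                + "couchbase.bucket=default\n";

        Properties prop = new Properties();
        prop.load(new StringReader(file));

        List<String> uris = parseUris(prop.getProperty("couchbase.uri.list", ""));
        String bucket = prop.getProperty("couchbase.bucket", "default");
        String password = prop.getProperty("couchbase.password", "");

        System.out.println(uris.size());        // 2
        System.out.println(bucket);             // default
        System.out.println(password.isEmpty()); // true
    }
}
```

The real application additionally converts each string to a java.net.URI and exits on failure, as shown in the full listing above.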

Request and response – discovering Akka

In the previous part we implemented our first actor and sent a message to it. Unfortunately the actor was incapable of returning any result of processing this message, which rendered it rather useless. In this episode we will learn how to send a reply message to the sender and how to integrate a synchronous, blocking API with a (by definition) asynchronous system based on message passing. Before we begin I must draw a very strong distinction between an actor (extending the Actor trait) and an actor reference of type ActorRef. When implementing an actor we extend the Actor trait, which forces us to implement the receive method. However, we do not create instances of actors directly; instead we ask the ActorSystem:

val randomOrgBuffer: ActorRef = system.actorOf(Props[RandomOrgBuffer], "buffer")

To our great surprise the returned object is not of the RandomOrgBuffer type like our actor; it is not even an Actor. This is thanks to ActorRef, which is a wrapper (proxy) around an actor:

- the internal state, i.e. the fields, of an actor is inaccessible from outside (encapsulation)
- the Akka system makes sure that the receive method of each actor processes at most one message at a time (single-threaded) and queues awaiting messages
- the actual actor can be deployed on a different machine in the cluster; ActorRef transparently and invisibly to the client sends messages over the wire to the correct node in the cluster (more on that later in the series).

That being said, let's somehow "return" the random numbers fetched inside our actor. It turns out that inside every actor there is a method with the very promising name sender. It won't be a surprise if I say that it is an ActorRef pointing to the actor that sent the message we are processing right now:

object Bootstrap extends App {
  //...
  randomOrgBuffer ! RandomRequest
  //...
}

//...

class RandomOrgBuffer extends Actor with ActorLogging {

  def receive = {
    case RandomRequest =>
      if (buffer.isEmpty) {
        buffer ++= fetchRandomNumbers(50)
      }
      sender ! buffer.dequeue()
  }
}

I hope you are already accustomed to the ! notation used to send a message to an actor. If not, there are more conservative alternatives:

sender tell buffer.dequeue()
sender.tell(buffer.dequeue())

Nevertheless, instead of printing new random numbers on screen, we send them back to the sender. A quick test of our program reveals that... nothing happens. Looking closely at the sender reference we discover that it points to Actor[akka://Akka/deadLetters]. deadLetters doesn't sound very good, but it's logical. sender represents a reference to the actor that sent the given message. We sent the message from a normal Scala class, not from an actor. If we were using two actors and the first one sent a message to the second, then the second actor could use the sender reference, pointing to the first actor, to send the reply back. Obviously we would still not be capable of receiving the reply, despite raising the level of abstraction. We will look at multi-actor scenarios soon; for the time being we have to learn how to integrate normal, non-Akka code with actors. In other words: how to receive a reply, so that Akka is not just a black hole that receives messages and never sends any results back. The solution is surprisingly simple – we can wait for a reply!

implicit val timeout = Timeout(1 minutes)

val future = randomOrgBuffer ? RandomRequest
val veryRandom: Int = Await.result(future.mapTo[Int], 1 minute)

The name future is not a coincidence. Although it is not an instance of java.util.concurrent.Future, semantically it represents the exact same concept. But first note that instead of an exclamation mark we use a question mark (?) to send the message. This communication model is known as "ask", as opposed to the already introduced "tell". In essence Akka created a temporary actor named Actor[akka://Akka/temp/$d], sent a message on behalf of that actor, and now waits up to one minute for a reply sent back to the aforementioned temporary actor.
Sending a message is still non-blocking, and the future object represents the result of an operation that might not have finished yet (it will be available in the future). Next, now in a blocking manner, we wait for a reply. In addition, mapTo[Int] is necessary since Akka does not know what type of response we expect. You must remember that using the "ask" pattern and waiting/blocking for a reply is very rare. Typically we rely on asynchronous messages and an event-driven architecture. One actor should never block waiting for a reply from another actor. But in this particular case we need direct access to the return value, as we are building a bridge between an imperative request/response method and the message-driven Akka system. Having a reply, what interesting use cases can we support? For example, we can design our own java.util.Random implementation based entirely on ideal, truly random numbers!

class RandomOrgRandom(randomOrgBuffer: ActorRef) extends java.util.Random {
  implicit val timeout = Timeout(1 minutes)

  override def next(bits: Int) = {
    if (bits <= 16) {
      random16Bits() & ((1 << bits) - 1)
    } else {
      (next(bits - 16) << 16) + random16Bits()
    }
  }

  private def random16Bits(): Int = {
    val future = randomOrgBuffer ? RandomRequest
    Await.result(future.mapTo[Int], 1 minute)
  }
}

The implementation details are irrelevant; suffice it to say that we must implement the next() method returning the requested number of random bits, whereas our actor always returns 16 bits. The only thing we need now is a lightweight scala.util.Random wrapping java.util.Random, and we can enjoy an ideally shuffled sequence of numbers:

val javaRandom = new RandomOrgRandom(randomOrgBuffer)
val scalaRandom = new scala.util.Random(javaRandom)
println(scalaRandom.shuffle((1 to 20).toList))
//List(17, 15, 14, 6, 10, 2, 1, 9, 8, 3, 4, 16, 7, 18, 13, 11, 19, 5, 12, 20)

Let's wrap up. First we developed a simple system based on one actor that (if necessary) connects to an external web service and buffers a batch of random numbers.
When requested, it sends back one number from the buffer. Next we integrated the asynchronous world of actors with a synchronous API. By wrapping our actor we implemented our own java.util.Random implementation (see also java.security.SecureRandom). This class can now be used in any place where we need random numbers of very high quality. However the implementation is far from perfect, which we will address in the next parts. Source code is available on GitHub (request-response tag). This was a translation of my article “Poznajemy Akka: zadanie i odpowiedz” originally published on scala.net.pl.   Reference: Request and response – discovering Akka from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog. ...

Investigating Deadlocks – Part 4: Fixing the Code

In the last in this short series of blogs in which I’ve been talking about analysing deadlocks, I’m going to fix my BadTransferOperation code. If you’ve seen the other blogs in this series, you’ll know that in order to get to this point I’ve created the demo code that deadlocks, shown how to get hold of a thread dump and then analysed the thread dump, figuring out where and how a deadlock was occurring. In order to save space, the discussion below refers to both the Account and DeadlockDemo classes from part 1 of this series, which contains full code listings. Textbook descriptions of deadlocks usually go something like this: “Thread A will acquire a lock on object 1 and wait for a lock on object 2, while thread B acquires a lock on object 2 whilst waiting for a lock on object 1”. The pile-up shown in my previous blog, and highlighted below, is a real-world deadlock where other threads, locks and objects get in the way of the straightforward, simple, theoretical deadlock situation.Found one Java-level deadlock: ============================= 'Thread-21': waiting to lock monitor 7f97118bd560 (object 7f3366f58, a threads.deadlock.Account), which is held by 'Thread-20' 'Thread-20': waiting to lock monitor 7f97118bc108 (object 7f3366e98, a threads.deadlock.Account), which is held by 'Thread-4' 'Thread-4': waiting to lock monitor 7f9711834360 (object 7f3366e80, a threads.deadlock.Account), which is held by 'Thread-7' 'Thread-7': waiting to lock monitor 7f97118b9708 (object 7f3366eb0, a threads.deadlock.Account), which is held by 'Thread-11' 'Thread-11': waiting to lock monitor 7f97118bd560 (object 7f3366f58, a threads.deadlock.Account), which is held by 'Thread-20' If you relate the text and image above back to the following code, you can see that Thread-20 has locked its fromAccount object (f58) and is waiting to lock its toAccount object (e98)private void transfer(Account fromAccount, Account toAccount, int transferAmount) throws OverdrawnException {synchronized 
(fromAccount) { synchronized (toAccount) { fromAccount.withdraw(transferAmount); toAccount.deposit(transferAmount); } } }Unfortunately, because of timing issues, Thread-20 cannot get a lock on object e98 because it’s waiting for Thread-4 to release its lock on that object. Thread-4 cannot release the lock because it’s waiting for Thread-7, Thread-7 is waiting for Thread-11 and Thread-11 is waiting for Thread-20 to release its lock on object f58. This real-world deadlock is just a more complicated version of the textbook description. The problem with this code is that, from the snippet below, you can see that I’m randomly choosing two Account objects from the accounts array as the fromAccount and the toAccount and locking them. As the fromAccount and toAccount can reference any object from the accounts array, it means that they’re being locked in a random order.Account toAccount = accounts.get(rnd.nextInt(NUM_ACCOUNTS)); Account fromAccount = accounts.get(rnd.nextInt(NUM_ACCOUNTS));Therefore, the fix is to impose an order on how the Account objects are locked; any order will do, so long as it’s consistent.private void transfer(Account fromAccount, Account toAccount, int transferAmount) throws OverdrawnException {if (fromAccount.getNumber() > toAccount.getNumber()) {synchronized (fromAccount) { synchronized (toAccount) { fromAccount.withdraw(transferAmount); toAccount.deposit(transferAmount); } } } else {synchronized (toAccount) { synchronized (fromAccount) { fromAccount.withdraw(transferAmount); toAccount.deposit(transferAmount); } } } }The code above shows the fix. In this code I’m using the account number to ensure that I’m locking the Account object with the higher account number first, so that the deadlock situation above never arises. 
The code below is the complete listing for the fix:public class AvoidsDeadlockDemo {private static final int NUM_ACCOUNTS = 10; private static final int NUM_THREADS = 20; private static final int NUM_ITERATIONS = 100000; private static final int MAX_COLUMNS = 60;static final Random rnd = new Random();List<Account> accounts = new ArrayList<Account>();public static void main(String args[]) {AvoidsDeadlockDemo demo = new AvoidsDeadlockDemo(); demo.setUp(); demo.run(); }void setUp() {for (int i = 0; i < NUM_ACCOUNTS; i++) { Account account = new Account(i, rnd.nextInt(1000)); accounts.add(account); } }void run() {for (int i = 0; i < NUM_THREADS; i++) { new BadTransferOperation(i).start(); } }class BadTransferOperation extends Thread {int threadNum;BadTransferOperation(int threadNum) { this.threadNum = threadNum; }@Override public void run() {for (int i = 0; i < NUM_ITERATIONS; i++) {Account toAccount = accounts.get(rnd.nextInt(NUM_ACCOUNTS)); Account fromAccount = accounts.get(rnd.nextInt(NUM_ACCOUNTS)); int amount = rnd.nextInt(1000);if (!toAccount.equals(fromAccount)) { try { transfer(fromAccount, toAccount, amount); System.out.print("."); } catch (OverdrawnException e) { System.out.print("-"); }printNewLine(i); } } System.out.println("Thread Complete: " + threadNum); }private void printNewLine(int columnNumber) {if (columnNumber % MAX_COLUMNS == 0) { System.out.print("\n"); } }/** * This is the crucial point here. 
The idea is that to avoid deadlock you need to ensure that all threads * lock the same two accounts in the same, consistent order */ private void transfer(Account fromAccount, Account toAccount, int transferAmount) throws OverdrawnException {if (fromAccount.getNumber() > toAccount.getNumber()) {synchronized (fromAccount) { synchronized (toAccount) { fromAccount.withdraw(transferAmount); toAccount.deposit(transferAmount); } } } else {synchronized (toAccount) { synchronized (fromAccount) { fromAccount.withdraw(transferAmount); toAccount.deposit(transferAmount); } } } } } }In my sample code, a deadlock occurs because of a timing issue and the nested synchronized keywords in my BadTransferOperation class. In this code, the synchronized keywords are on adjacent lines; however, as a final point, it’s worth noting that it doesn’t matter where in your code the synchronized keywords are (they don’t have to be adjacent). So long as you’re locking two (or more) different monitor objects with the same thread, ordering matters and deadlocks can happen. For more information see the other blogs in this series. All source code for this and the other blogs in the series is available on GitHub at git://github.com/roghughe/captaindebug.git   Reference: Investigating Deadlocks – Part 4: Fixing the Code from our JCG partner Roger Hughes at the Captain Debug’s Blog blog. ...
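The lock-ordering fix can be demonstrated in a minimal, self-contained form. This sketch uses a simplified Account (not the article's full class): two threads transfer money in opposite directions, but because both always take the locks in account-number order, they can never deadlock and both loops run to completion:

```java
public class LockOrderingDemo {
    static class Account {
        final int number;
        int balance;
        Account(int number, int balance) { this.number = number; this.balance = balance; }
    }

    static void transfer(Account from, Account to, int amount) {
        // Always lock the account with the higher number first. Any consistent
        // order works - it just has to be the same for all threads.
        Account first = from.number > to.number ? from : to;
        Account second = first == from ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(1, 1000);
        Account b = new Account(2, 1000);
        // Opposite transfer directions - the classic deadlock recipe if the
        // locks were taken in argument order instead of number order.
        Thread t1 = new Thread(() -> { for (int i = 0; i < 100_000; i++) transfer(a, b, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 100_000; i++) transfer(b, a, 1); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Money is conserved and, crucially, both threads actually finished.
        System.out.println(a.balance + b.balance);
    }
}
```

With the ordering removed (locking fromAccount then toAccount unconditionally), the same two threads would sooner or later stop forever in exactly the circular wait the thread dump above shows.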

Polyglot Persistence: EclipseLink with MongoDB and Derby

Polyglot persistence has been in the news for some time now. Kicked off by the famous Fowler post from the end of 2011, I see more and more nice ideas coming up. The latest one was a company-internal student project in which we used Scala as a backend persisting data into MongoDB, Derby and Solr. I’m not a big fan of Scala and remembered EclipseLink’s growing support for NoSQL databases. Given that, I simply had to try this. Where to start?The biggest issue is the missing examples. You find quite a bit of material about how to switch the data container (either NoSQL or RDBMS) with EclipseLink, but you will not find a single example which uses both technologies seamlessly together. Thanks to Shaun Smith and Gunnar Wagenknecht we have this great JavaOne talk about Polyglot Persistence: EclipseLink JPA for NoSQL, Relational, and Beyond which talks exactly about this. Unfortunately the sources still haven’t been pushed anywhere and I had to rebuild this from the talk.So, credits go to Shaun and Gunnar for this. The magic solution is called Persistence Unit Composition. You need one persistence unit for every data container. That looks like the following basic example. You have a couple of entities in each PU and a composite PU is the umbrella.  Let’s go You should have MongoDB in place before you start this little tutorial example. Fire up NetBeans and create two Java projects. Let’s call them polyglot-persistence-nosql-pu and polyglot-persistence-rational-pu. Put the following entities into the nosql-pu: Customer, Address, Order and OrderLine (mostly taken from the EclipseLink nosql examples), and put a Product entity into the rational-pu. The single products go into Derby while all the other entities are persisted into MongoDB. The interesting part is where OrderLine has a one-to-one relation to a Product: @OneToOne(cascade = {CascadeType.REMOVE, CascadeType.PERSIST}) private Product product;This is the point where both worlds come together. More on that later. 
Both PUs need to be transaction-type=’RESOURCE_LOCAL’ and need to contain the following line in the persistence.xml: <property name='eclipselink.composite-unit.member' value='true'/> Don’t forget to add the DB-specific configuration. For MongoDB this is <property name='eclipselink.nosql.property.mongo.port' value='27017'/> <property name='eclipselink.nosql.property.mongo.host' value='localhost'/> <property name='eclipselink.nosql.property.mongo.db' value='mydb'/> For Derby this is something like this: <property name='javax.persistence.jdbc.url' value='jdbc:derby://localhost:1527/mydb'/> <property name='javax.persistence.jdbc.password' value='sa'/> <property name='javax.persistence.jdbc.driver' value='org.apache.derby.jdbc.ClientDriver'/> <property name='javax.persistence.jdbc.user' value='sa'/> Now we need something to link those two PUs together. The composite PU resides in a sample polyglot-persistence-web module and looks like this: <persistence-unit name='composite-pu' transaction-type='RESOURCE_LOCAL'> <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider> <jar-file>\lib\polyglot-persistence-rational-pu-1.0-SNAPSHOT.jar</jar-file> <jar-file>\lib\polyglot-persistence-nosql-pu-1.0-SNAPSHOT.jar</jar-file> <properties> <property name='eclipselink.composite-unit' value='true'/> </properties> </persistence-unit> </persistence> Watch out for the jar-file path. We are going to package this in a war archive and because of this, the nosql-pu and the rational-pu will go into the WEB-INF/lib folder. As you can see, my example is built with Maven. Make sure to use the latest EclipseLink dependency. Even GlassFish still ships with a lower version; MongoDB support has been added beginning with 2.4. <dependency> <groupId>org.eclipse.persistence</groupId> <artifactId>eclipselink</artifactId> <version>2.4.1</version> </dependency> Besides this, you also need to turn GlassFish’s classloaders around: <class-loader delegate='false'/> Don’t worry about the details. 
I put up everything to github.com/myfear so you can dig into the complete example later on your own. Testing it Let’s make some very brief tests with it. Create a nice little demo servlet and inject the composite-pu into it. Create an EntityManager from it and get a transaction. Now start creating products, a customer, the order and the separate order lines. All plain JPA. No further magic here: @PersistenceUnit(unitName = 'composite-pu') private EntityManagerFactory emf;protected void processRequest() // [...] {EntityManager em = emf.createEntityManager(); em.getTransaction().begin(); // Products go into RDBMS Product installation = new Product('installation'); em.persist(installation);Product shipping = new Product('shipping'); em.persist(shipping);Product maschine = new Product('maschine'); em.persist(maschine);// Customer into NoSQL Customer customer = new Customer(); customer.setName('myfear'); em.persist(customer); // Order into NoSQL Order order = new Order(); order.setCustomer(customer); order.setDescription('Pinball maschine');// Order Lines mapping NoSQL --- RDBMS order.addOrderLine(new OrderLine(maschine, 2999)); order.addOrderLine(new OrderLine(shipping, 59)); order.addOrderLine(new OrderLine(installation, 129));em.persist(order); em.getTransaction().commit(); String orderId = order.getId(); em.close(); If you put the right logging properties in place you can see what is happening: a couple of sequences are assigned to the created Product entities (GeneratedValue), and the Customer entity gets persisted into MongoDB with a MappedInteraction. Entities map onto collections in MongoDB. FINE: Executing MappedInteraction() spec => null properties => {mongo.collection=CUSTOMER, mongo.operation=INSERT} input => [DatabaseRecord( CUSTOMER._id => 5098FF0C3D9F5D2CCB3CFECF CUSTOMER.NAME => myfear)] After that you see the products being inserted into Derby and again the MappedInteraction that persists the Order into MongoDB. 
The really cool part is down at the OrderLines: ORDER.ORDERLINES => [DatabaseRecord( LINENUMBER => 1 COST => 2999.0 PRODUCT_ID => 3), DatabaseRecord( LINENUMBER => 2 COST => 59.0 PRODUCT_ID => 2), DatabaseRecord( LINENUMBER => 3 COST => 129.0 PRODUCT_ID => 1)] Each order line record carries the PRODUCT_ID that was generated for the related Product entity. Further on, you can also find the related Order and iterate over the products and get their descriptions: Order order2 = em.find(Order.class, orderId); for (OrderLine orderLine : order2.getOrderLines()) { String desc = orderLine.getProduct().getDescription(); } The nice little demo looks like this:Thanks Shaun, thanks Gunnar for this nice little example. Now go to github.com/myfear and get your hands dirty :)   Reference: Polyglot Persistence: EclipseLink with MongoDB and Derby from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog. ...

20 Kick-ass programming quotes

This post serves as a compilation of great programming quotes, quotes from famous programmers, computer scientists and savvy entrepreneurs of our time. Some of them are funny, some of them are motivational, some of them are… just awesome! So, in no particular order, let’s see what we have… ...

Oracle ADF Mobile World! Hello!

Hello, ADF Mobile, World! As you probably already know… ADF Mobile is here! Here are some links that will make you feel at home.. Home page of ADF Mobile: http://www.oracle.com/technetwork/developer-tools/adf/overview/adf-mobile-096323.html How to setup your JDeveloper: http://docs.oracle.com/cd/E18941_01/tutorials/MobileTutorial/jdtut_11r2_54_1.html Developer’s Guide http://docs.oracle.com/cd/E35521_01/doc.111230/e24475/toc.htm Some sales stuff http://www.oracle.com/technetwork/developer-tools/jdev/adf-mobile-development-129800.pdf And of course, the samples!! Samples are good. We need samples! Samples are goooood: http://www.oracle.com/technetwork/developer-tools/adf/adf-mobile-samples-1865088.html Additional references: http://technology.amis.nl/2012/10/22/adf-mobile-is-now-generally-available/ Well, that is all we need for now… This post is about mobile.. (daaaaaawn of the dead).. obviously.. So let’s get started. This post does not aim to replace any of the official documentation. First we have to set up our JDeveloper for ADF mobile development (everything in this post is well documented in the above links.. this is just for reference, flavour and colour). You have to install the plugin for ADF Mobile development. This is fairly easy. Just go to updates of your JDeveloper and update it through the updates process. After you have downloaded and installed the plugin, you have to restart. So, restart. Then, you have to load the extension. That is easy as well, just go to tools-preferences-ADF mobile and press ‘Load Extension‘. After that you have to select the platform you want to develop on. This sample uses iOS. You have to install Xcode to get it working on your Mac. In case you noticed, there is a strange behaviour in the preferences of ADF Mobile: if you select iOS and then select ADF Mobile and platforms back again, you will have the Android platform selected… (see video here). The good thing is that it does not lose your paths. 
For those of you that don’t have the simulator path set by default, the hint below the input text is quite good. Just follow the path on your Mac and you will be fine. Don’t forget, you have to install Xcode first!! OK, we have that working now! (We will see if that strange behaviour is going to affect us in the process). What else is there? Oh yes, the sample application!!!!But wait?? I have some questions first! What is going on with the DB? Do we need Web Services?? Do we have to bake a cake first? Is there anything else that we have to do prior to developing a very simple ADF mobile application?? Yes of course. There are lots of things to do before making a very first ADF mobile application.. Why don’t we understand the architecture first? (see references). Why don’t we bake a cake and cook a meal first? Why don’t we make up excuses in order to postpone the inevitable? The world went mobile!!! Let’s get mobile then! Let’s start coding and we will get the rest in time. There are lots to learn indeed. But let’s make small steps. No! I would like to learn the bigger picture now! I want to know what is going on.. I want to know how to talk the language. Alright.. It sounds like you want to know everything about snowboarding without even trying to see if you can simply balance and slide…(image from official documentation) Nice, isn’t it? Do you feel better now? You like it, don’t you? Do you get the bigger picture now? Great. By the way, do you have any questions?? I am sure you do. In fact we all do! But perhaps it would be a lot better if we see everything in slow motion and with small examples in a series of posts. At least that is my intention. Small and simple for starters. One interesting thing to notice here, apart from the others, is the use of PhoneGap. As you can see in the above image, the Web View contains all the types of Views (Server HTML, HTML5 etc..) and PhoneGap covers the gap between those views and the devices.. 
For more information about PhoneGap please visit the FAQ of PhoneGap itself. The above link will give you enough answers to get the picture for now. Another very important thing is that with every ADF Mobile application, there is a small JVM included! The following content is extracted from the official documentation:Java runtime powered by an embedded Java VM bundled with each application.Note: ADF Mobile’s model-view-controller stack resides on a mobile device and represents reimplementation of ADF’s model-view-controller layers. UI metadata is rendered to native components on device and is bound to the model through the ADF Model. You see that every application is powered by an embedded JVM!! And you can use that on your iPhone!!! Without going into too many details, the last thing that we note here is the Local Data. The following content is extracted from the official documentation: Local Data refers to data stores that reside on the device. In ADF Mobile, these are implemented as encrypted SQLite databases. Create Retrieve Update Delete (CRUD) operations are supported to this local data store through the Java layer, using JDBC-based APIs. So in all: we will be using PhoneGap, a JVM and embedded encrypted SQLite databases!! Which means that we can create applications that can store data in the local DB.. I think this brief introduction gives the basic idea of ADF Mobile. On with the coding!! Where were we? Oh yes! Nowhere.. we just set up our environment. Wait! Do we need a database for this sample application? No we don’t. This is going to be fairly simple. So what do we do? Let’s go bowling! Shut the front door!!! We are doing this. Just create a new application from JDeveloper. Just follow the wizards from there and eventually you will get the following:Sorry, what?? What is that:That is the adfmf-feature.xml file. This file is used to configure the features of your application. We won’t be needing this for now. 
But I am sure that some of you will want to explore it a bit more. So here is the documentation: http://docs.oracle.com/cd/E35521_01/doc.111230/e24475/define_features.htm#autoId19 The following content is extracted from the above link: The adfmf-feature.xml file enables you to configure the actual mobile application features that are referenced by the element in the corresponding adfmf-application.xml file. So basically, what it says is that adfmf-feature.xml is the configuration file for all the features your application might have. All those features are stored in the adfmf-application.xml file. That file is located in the descriptors section in JDeveloper. See image below:So, adfmf-application.xml holds the features of your application and adfmf-feature.xml configures them. Additional resources on adfmf-application.xml and adfmf-feature.xml at a more basic level: http://docs.oracle.com/cd/E35521_01/doc.111230/e24475/getting_started.htm#autoId3 More on that later on. An additional interesting thing is that we already have a DataControl generated!What is that DataControl about? That DataControl handles the operations on your device: http://docs.oracle.com/cd/E35521_01/doc.111230/e24475/getting_started.htm#autoId3 The following content is extracted from the above link: After you complete an ADF Mobile application project, JDeveloper adds application-level and project-level artifacts, and JDeveloper creates the DeviceFeatures data control. The PhoneGap Java API is abstracted through this data control, thus enabling the application features implemented as ADF Mobile AMX to access various services embedded on the device. JDeveloper also creates the ApplicationFeatures data control, which enables you to build a springboard page. 
By dragging and dropping the operations provided by the DeviceFeatures data control into an ADF Mobile AMX page (which is described in Section 9.5, ‘Using the DeviceFeatures Data Control’ ), you add functions to manage the user contacts stored on the device, create and send both e-mail and SMS text messages, ascertain the location of the device, use the device’s camera, and retrieve images stored in the device’s file system. That autogenerated DeviceFeatures DataControl is there to help us access various services that are embedded on the device. The ApplicationFeatures DataControl is a different story and we will talk about it in later posts. Ok. Let’s try to create a simple page. In order to create a page, just right-click on the ViewController and create a new HTML page, let’s say HelloWorld.html. The result will be something like the following:Write some text:Are we there yet?? No. Let’s go bowling then! No. What else is there? Well, we need a feature!! Remember the adfmf-feature.xml file? Great! Go there and add a new feature. Place the name you want and make sure it is selected. Since this will be a local HTML page, we have to set it up as such. So in the properties of the feature, make sure that the type is html. Since this is going to be a local page, we have to provide the path. That’s it! All we have to do is to package it as an iOS application and test it with the simulator. This is not a simple right-click and run. We have to create a deployment profile.. Since we want to run this with the iPhone simulator.. we have to create the deployment profile.. So, right-click on the Application and select deploy – new deployment profile.Press ok. Then, make sure that the settings are correct for your simulator: I had to set them manually.Click ok and the deployment profile is ready. In order to test the application, right-click on the application, select the profile you created previously and deploy it. 
This will start your iOS Simulator and you will be able to find your application. If you click on the application you will see our page!And that is it! Once we understand how it works, one step at a time, it is fairly easy to remember. This is the beginning!   Reference: Oracle ADF Mobile World! Hello! from our JCG partner Dimitrios Stassinopoulos at the Born To DeBug blog. ...

JPA/Hibernate: Version-Based Optimistic Concurrency Control

This article is an introduction to version-based optimistic concurrency control in Hibernate and JPA. The concept is fairly old and much has been written on it, but anyway I have seen it reinvented, misunderstood and misused. I’m writing it just to spread knowledge and hopefully spark interest in the subject of concurrency control and locking. Use Cases Let’s say we have a system used by multiple users, where each entity can be modified by more than one user. We want to prevent situations where two persons load some information, make some decision based on what they see, and update the state at the same time. We don’t want to lose changes made by the user who first clicked “save” by overwriting them in the following transaction. It can also happen in a server environment – multiple transactions can modify a shared entity, and we want to prevent scenarios like this:Transaction 1 loads data Transaction 2 updates that data and commits Using state loaded in step 1 (which is no longer current), transaction 1 performs some calculations and updates the stateIn some ways it’s comparable to non-repeatable reads. Solution: Versioning Hibernate and JPA implement the concept of version-based concurrency control for this reason. Here’s how it works. You can mark a simple property with @Version or <version> (numeric or timestamp). It’s going to be a special column in the database. Our mapping can look like: @Entity @Table(name = 'orders') public class Order { @Id private long id;@Version private int version;private String description;private String status;// ... mutators } When such an entity is persisted, the version property is set to a starting value. Whenever it’s updated, Hibernate executes a query like: update orders set description=?, status=?, version=? where id=? and version=? Note that in the last line, the WHERE clause now includes version. This value is always set to the “old” value, so that it will only update a row if it has the expected version. 
Let’s say two users load an order at version 1 and take a while looking at it in the GUI. Anne decides to approve the order and executes that action. The status is updated in the database, everything works as expected. The versions passed to the update statement look like: update orders set description=?, status=?, version=2 where id=? and version=1 As you can see, while persisting that update the persistence layer increments the version counter to 2. In her GUI, Betty still has the old version (number 1). When she decides to perform an update on the order, the statement looks like: update orders set description=?, status=?, version=2 where id=? and version=1 At this point, after Anne’s update, the row’s version in the database is 2. So this second update affects 0 rows (nothing matches the WHERE clause). Hibernate detects that and throws an org.hibernate.StaleObjectStateException (wrapped in a javax.persistence.OptimisticLockException). As a result, the second user cannot perform any updates unless she refreshes the view. For proper user experience we need some clean exception handling, but I’ll leave that out. Configuration There is little to customize here. The @Version property can be a number or a timestamp. A number is artificial, but typically occupies fewer bytes in memory and the database. A timestamp is larger, but it is always updated to the current timestamp, so you can actually use it to determine when the entity was updated. Why? So why would we use it?It provides a convenient and automated way to maintain consistency in scenarios like those described above. It means that each action can only be performed once, and it guarantees that the user or server process saw up-to-date state while making a business decision. It takes very little work to set up. Thanks to its optimistic nature, it’s fast. There is no locking anywhere, only one more field added to the same queries. In a way it guarantees repeatable reads even with the read committed transaction isolation level. 
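The effect of that WHERE clause is compare-and-set semantics: an update succeeds only if the version it originally read is still current. Here is a plain-Java sketch of the Anne/Betty scenario with a tiny in-memory "row" standing in for the database table (the class and method names are illustrative, not Hibernate's):

```java
public class OptimisticLockSketch {
    static class OrderRow {
        int version = 1;
        String status = "NEW";

        // Returns the number of "rows" affected, like a JDBC executeUpdate()
        // running: update orders set status=?, version=? where version=?
        synchronized int update(String newStatus, int expectedVersion) {
            if (version != expectedVersion) {
                return 0; // stale - someone else committed first
            }
            status = newStatus;
            version = expectedVersion + 1; // increment on every successful update
            return 1;
        }
    }

    public static void main(String[] args) {
        OrderRow order = new OrderRow();
        int anneVersion = order.version;   // Anne loads the order at version 1
        int bettyVersion = order.version;  // Betty loads it at version 1 too

        System.out.println(order.update("APPROVED", anneVersion));  // 1 row - Anne wins
        System.out.println(order.update("REJECTED", bettyVersion)); // 0 rows - Betty is stale
        System.out.println(order.status + " v" + order.version);
    }
}
```

In Hibernate the 0-rows case is exactly what triggers the StaleObjectStateException: the framework checks the update count and raises the exception instead of returning it.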
It would end with an exception, but at least it’s not possible to create inconsistent state. It works well with very long conversations, including those that span multiple transactions. It’s perfectly consistent in all possible scenarios and race conditions on ACID databases. The updates must be sequential, an update involves a row lock and the “second” one will always affect 0 rows and fail.  Demo To demonstrate this, I created a very simple web application. It wires together Spring and Hibernate (behind JPA API), but it would work in other settings as well: Pure Hibernate (no JPA), JPA with different implementation, non-webapp, non-Spring etc. The application keeps one Order with schema similar to above and shows it in a web form where you can update description and status. To experiment with concurrency control, open the page in two tabs, do different modifications and save. Try the same thing without @Version. It uses an embedded database, so it needs minimal setup (only a web container) and only takes a restart to start with a fresh database. It’s pretty simplistic – accesses EntityManager in a @Transactional @Controller and backs the form directly with JPA-mapped entity. May not be the best way to do things for less trivial projects, but at least it gathers all code in one place and is very easy to grasp. Full source code as Eclipse project can be found at my GitHub repository.   Reference: Version-Based Optimistic Concurrency Control in JPA/Hibernate from our JCG partner Konrad Garus at the Squirrel’s blog. ...

By your Command – Command design pattern

Command design pattern is one of the widely known design patterns and it falls under the Behavioral Design Patterns (part of the Gang of Four). As the name suggests it is related to actions and events in an application.   Problem statement: Imagine a scenario where we have a web page with multiple menus in it. One way of writing this code is to have multiple if else conditions and executing the actions on each click of the menu.         private void getAction(String action){ if(action.equalsIgnoreCase('New')){ //Create new file } else if(action.equalsIgnoreCase('Open')){ //Open existing file } else if(action.equalsIgnoreCase('Print')){ //Print the file } else if(action.equalsIgnoreCase('Exit')){ //get out of the application } } We have to execute the actions based on the action string. However the above code has too many if conditions and will not stay readable as it grows. Intent:The requestor of the action needs to be decoupled from the object that carries out this action. Allow encapsulation of the request as an object. Note this line as this is a very important concept for the Command Pattern. Allow storage of the requests in a queue, i.e. it allows you to store a list of actions that you can execute later.  Solution: To resolve the above problem the Command pattern is here to rescue us. As mentioned above the command pattern moves the above actions into objects through encapsulation. These objects, when executed, execute the command. Here every command is an object. So we will have to create individual classes for each of the menu actions like NewClass, OpenClass, PrintClass, ExitClass. And all these classes inherit from the parent interface which is the Command interface. This interface (Command interface) abstracts/wraps all the child action classes. Now we introduce an Invoker class whose main job is to map the action with the classes which have that action. It basically holds the action and gets the command to execute a request by calling the execute() method. Oops!! 
We missed another stakeholder here: the Receiver class. The receiver class has the knowledge of how to carry out an operation, i.e. what to do when the action is performed.

Structure: Following are the participants of the Command design pattern:
- Command – an interface for executing an operation.
- ConcreteCommand – a class that implements the Command interface and its execute() method. This class creates a binding between the action and the receiver.
- Client – the class that creates the ConcreteCommand objects and associates them with the receiver.
- Invoker – the class that asks the command to carry out the request.
- Receiver – the class that knows how to perform the operation.

Example:

Steps:
1. Define a Command interface with a method signature like execute(). In the example below, ActionListenerCommand is the command interface, having a single execute() method.
2. Create one or more derived classes that encapsulate some subset of the following: a "receiver" object, the method to invoke, and the arguments to pass. In the example below, ActionOpen and ActionSave are the concrete command classes that create the binding between the receiver and the action. The ActionOpen class calls the receiver's (in this case the Document class's) action method inside execute(), thus telling the receiver class what needs to be done.
3. Instantiate a Command object for each deferred execution request, and pass the Command object from the creator to the invoker. The invoker decides when execute() is called. The client instantiates the Receiver object (Document) and the Command objects, and lets the invoker invoke the commands.
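As a side note, the "store the requests in a queue" part of the intent above can be sketched minimally. The sketch below uses hypothetical names (Command, CommandQueueDemo) and is separate from the menu example that follows; it only shows that, once requests are objects, they can be recorded first and executed later:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class CommandQueueDemo {

    // Hypothetical command interface, mirroring the pattern described above
    interface Command {
        void execute();
    }

    public static void main(String[] args) {
        Queue<Command> pending = new ArrayDeque<>();

        // Each request is recorded as an object instead of being run immediately
        pending.add(() -> System.out.println("New file created"));
        pending.add(() -> System.out.println("File printed"));

        // ...later, drain the queue and execute each stored request in order
        while (!pending.isEmpty()) {
            pending.poll().execute();
        }
    }
}
```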
Code Example:

Command interface:

public interface ActionListenerCommand {
    public void execute();
}

Receiver class:

public class Document {
    public void Open() {
        System.out.println("Document Opened");
    }
    public void Save() {
        System.out.println("Document Saved");
    }
}

Concrete Command:

public class ActionOpen implements ActionListenerCommand {
    private Document adoc;

    public ActionOpen(Document doc) {
        this.adoc = doc;
    }

    @Override
    public void execute() {
        adoc.Open();
    }
}

The ActionSave class, used by the client below, mirrors ActionOpen:

public class ActionSave implements ActionListenerCommand {
    private Document adoc;

    public ActionSave(Document doc) {
        this.adoc = doc;
    }

    @Override
    public void execute() {
        adoc.Save();
    }
}

Invoker class:

public class MenuOptions {
    private ActionListenerCommand openCommand;
    private ActionListenerCommand saveCommand;

    public MenuOptions(ActionListenerCommand open, ActionListenerCommand save) {
        this.openCommand = open;
        this.saveCommand = save;
    }

    public void clickOpen() {
        openCommand.execute();
    }

    public void clickSave() {
        saveCommand.execute();
    }
}

Client class:

public class Client {
    public static void main(String[] args) {
        Document doc = new Document();
        ActionListenerCommand clickOpen = new ActionOpen(doc);
        ActionListenerCommand clickSave = new ActionSave(doc);
        MenuOptions menu = new MenuOptions(clickOpen, clickSave);
        menu.clickOpen();
        menu.clickSave();
    }
}

Benefits:
- The Command pattern helps to decouple the invoker from the receiver; the receiver is the one that knows how to perform an action.
- A command can implement undo and redo operations.
- The pattern helps with extensibility, as we can add new commands without changing existing code.

Drawback: The main disadvantage of the Command pattern is the increase in the number of classes, one for each individual command. These actions could also have been implemented as plain methods; however, the command classes are more readable than multiple methods dispatched through if-else conditions.

Interesting points:
- Implementations of java.lang.Runnable and javax.swing.Action follow the Command design pattern.
- A Command can use a Memento to maintain the state required for an undo operation.
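The undo point above can be illustrated with a small sketch. This is not part of the original example; the names (UndoDemo, Counter, IncrementCommand) are hypothetical. Each command knows how to reverse its own effect, and the invoker keeps executed commands on a history stack:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class UndoDemo {

    // Hypothetical command interface extended with an undo counterpart
    interface Command {
        void execute();
        void undo();
    }

    // Receiver: a counter whose state the commands manipulate
    static class Counter {
        int value;
    }

    static class IncrementCommand implements Command {
        private final Counter counter;

        IncrementCommand(Counter counter) {
            this.counter = counter;
        }

        @Override
        public void execute() {
            counter.value++;
        }

        @Override
        public void undo() {
            counter.value--;
        }
    }

    public static void main(String[] args) {
        Counter counter = new Counter();
        Deque<Command> history = new ArrayDeque<>();

        // Execute commands, recording each one on a history stack
        for (int i = 0; i < 3; i++) {
            Command c = new IncrementCommand(counter);
            c.execute();
            history.push(c);
        }
        System.out.println(counter.value); // prints 3

        // Undo the most recent command
        history.pop().undo();
        System.out.println(counter.value); // prints 2
    }
}
```

A full implementation would also need a redo stack and commands that capture enough state (e.g. via a Memento) to restore the receiver exactly.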
Download Sample Code:  Reference: By your Command from our JCG partner Mainak Goswami at the Idiotechie blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact