


Couchbase : Create a large dataset using Twitter and Java

An easy way to create a large dataset when playing with or demonstrating Couchbase (or any other NoSQL engine) is to inject a Twitter feed into your database. For this small application I am using:

- Couchbase Server 2.0
- Couchbase Java SDK (will be installed by Maven)
- Twitter4J (will be installed by Maven)
- Twitter Streaming API, called using Twitter4J

In this example I am using Java to inject tweets into Couchbase; you can obviously use another language if you want to. The sources of this project are available in my GitHub repository Twitter Injector for Couchbase. You can also download the binary version here and execute the application from the command line; see the Run the Java Application paragraph. Do not forget to create your Twitter OAuth keys (see next paragraph).

Create OAuth Keys

The first thing to do to be able to use the Twitter API is to create a set of keys. If you want to learn more about all these keys/tokens, take a look at the OAuth protocol: http://oauth.net/

1. Log in to the Twitter Development Portal: https://dev.twitter.com/
2. Create a new application: click the ‘Create an App’ link or go to ‘User Menu > My Applications > Create a new application’.
3. Enter the Application Details information.
4. Click the ‘Create Your Twitter Application’ button. Your application’s OAuth settings are now available.
5. Go down the Application Settings page and click the ‘Create My Access Token’ button.

You now have all the necessary information to create your application:

- Consumer key
- Consumer secret
- Access token
- Access token secret

These keys will be used in the twitter4j.properties file when running the Java application from the command line.

Create the Java Application

The following code is the main code of the application:

```java
package com.couchbase.demo;

import com.couchbase.client.CouchbaseClient;
import org.json.JSONException;
import org.json.JSONObject;
import twitter4j.*;
import twitter4j.json.DataObjectFactory;

import java.io.InputStream;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

public class TwitterInjector {

    public final static String COUCHBASE_URIS = "couchbase.uri.list";
    public final static String COUCHBASE_BUCKET = "couchbase.bucket";
    public final static String COUCHBASE_PASSWORD = "couchbase.password";

    private List<URI> couchbaseServerUris = new ArrayList<URI>();
    private String couchbaseBucket = "default";
    private String couchbasePassword = "";

    public static void main(String[] args) {
        TwitterInjector twitterInjector = new TwitterInjector();
        twitterInjector.setUp();
        twitterInjector.injectTweets();
    }

    private void setUp() {
        try {
            Properties prop = new Properties();
            InputStream in = TwitterInjector.class.getClassLoader().getResourceAsStream("twitter4j.properties");
            if (in == null) {
                throw new Exception("File twitter4j.properties not found");
            }
            prop.load(in);
            in.close();

            if (prop.containsKey(COUCHBASE_URIS)) {
                String[] uriStrings = prop.getProperty(COUCHBASE_URIS).split(",");
                for (int i = 0; i < uriStrings.length; i++) {
                    couchbaseServerUris.add(new URI(uriStrings[i]));
                }
            } else {
                couchbaseServerUris.add(new URI(""));
            }

            if (prop.containsKey(COUCHBASE_BUCKET)) {
                couchbaseBucket = prop.getProperty(COUCHBASE_BUCKET);
            }

            if (prop.containsKey(COUCHBASE_PASSWORD)) {
                couchbasePassword = prop.getProperty(COUCHBASE_PASSWORD);
            }

        } catch (Exception e) {
            System.out.println(e.getMessage());
            System.exit(0);
        }
    }

    private void injectTweets() {
        TwitterStream twitterStream = new TwitterStreamFactory().getInstance();
        try {
            final CouchbaseClient cbClient = new CouchbaseClient(couchbaseServerUris, couchbaseBucket, couchbasePassword);
            System.out.println("Send data to : " + couchbaseServerUris + "/" + couchbaseBucket);

            StatusListener listener = new StatusListener() {

                @Override
                public void onStatus(Status status) {
                    String twitterMessage = DataObjectFactory.getRawJSON(status);
                    // extract the id_str from the JSON document
                    // see : https://dev.twitter.com/docs/twitter-ids-json-and-snowflake
                    try {
                        JSONObject statusAsJson = new JSONObject(twitterMessage);
                        String idStr = statusAsJson.getString("id_str");
                        cbClient.add(idStr, 0, twitterMessage);
                        System.out.print(".");
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                }

                @Override
                public void onDeletionNotice(StatusDeletionNotice statusDeletionNotice) {
                }

                @Override
                public void onTrackLimitationNotice(int numberOfLimitedStatuses) {
                }

                @Override
                public void onScrubGeo(long userId, long upToStatusId) {
                }

                @Override
                public void onException(Exception ex) {
                    ex.printStackTrace();
                }
            };

            twitterStream.addListener(listener);
            twitterStream.sample();

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```

Some basic explanation:

- The setUp() method simply reads the twitter4j.properties file from the classpath to build the Couchbase connection string.
- The injectTweets() method opens the Couchbase connection and calls the TwitterStream API.
- A listener is created and will receive all the onStatus(Status status) callbacks from Twitter.
- The most important method is onStatus(), which receives the message and saves it into Couchbase.
- One interesting thing: since Couchbase is a JSON document database, it allows you to just take the JSON string and save it directly. 
```java
cbClient.add(idStr, 0, twitterMessage);
```

Packaging

To be able to execute the application directly from the jar file, I am using the assembly plugin with the following information in the pom.xml:

```xml
...
<archive>
  <manifest>
    <mainClass>com.couchbase.demo.TwitterInjector</mainClass>
  </manifest>
  <manifestEntries>
    <Class-Path>.</Class-Path>
  </manifestEntries>
</archive>
...
```

Some information:

- The mainClass entry allows you to set which class to execute when running the java -jar command.
- The Class-Path entry allows you to set the current directory as part of the classpath, where the program will search for the twitter4j.properties file.
- The assembly file is also configured to include all the dependencies (Twitter4J, Couchbase client SDK, …).

If you want to build it from the sources, simply run:

mvn clean package

This will create the following jar file:

./target/CouchbaseTwitterInjector.jar

Run the Java Application

Before running the application you must create a twitter4j.properties file with the following information:

```properties
twitter4j.jsonStoreEnabled=true

oauth.consumerKey=[YOUR CONSUMER KEY]
oauth.consumerSecret=[YOUR CONSUMER SECRET KEY]
oauth.accessToken=[YOUR ACCESS TOKEN]
oauth.accessTokenSecret=[YOUR ACCESS TOKEN SECRET]

couchbase.uri.list=
couchbase.bucket=default
couchbase.password=
```

Save the properties file and from the same location run:

java -jar [path-to-jar]/CouchbaseTwitterInjector.jar

This will inject tweets into your Couchbase Server. Enjoy!   Reference: Couchbase : Create a large dataset using Twitter and Java from our JCG partner Tugdual Grall at the Tug’s Blog blog. ...
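Since the injector keys every document by the tweet's id_str, it can help to see that extraction step in isolation. Below is a rough, self-contained sketch; it deliberately avoids the org.json dependency used in the article and just pulls the field out with a regular expression (the class and method names are mine, only the id_str field name comes from the Twitter JSON):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class IdStrExtractor {

    // Matches "id_str":"123456789" in a raw tweet JSON string.
    private static final Pattern ID_STR = Pattern.compile("\"id_str\"\\s*:\\s*\"(\\d+)\"");

    // Returns the id_str value, or null if the field is absent.
    public static String extractIdStr(String rawJson) {
        Matcher m = ID_STR.matcher(rawJson);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String tweet = "{\"id\":210462857140252672,\"id_str\":\"210462857140252672\",\"text\":\"hello\"}";
        System.out.println(extractIdStr(tweet)); // prints 210462857140252672
    }
}
```

Using the string form of the id (rather than the numeric id) also sidesteps the precision issues described in the Snowflake article linked in the code above.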

Request and response – discovering Akka

In the previous part we implemented our first actor and sent a message to it. Unfortunately the actor was incapable of returning any result of processing this message, which rendered it rather useless. In this episode we will learn how to send a reply message to the sender and how to integrate a synchronous, blocking API with a (by definition) asynchronous system based on message passing. Before we begin I must draw a very strong distinction between an actor (extending the Actor trait) and an actor reference of type ActorRef. When implementing an actor we extend the Actor trait, which forces us to implement the receive method. However, we do not create instances of actors directly; instead we ask the ActorSystem:

```scala
val randomOrgBuffer: ActorRef = system.actorOf(Props[RandomOrgBuffer], "buffer")
```

To our great surprise the returned object is not of the RandomOrgBuffer type like our actor; it's not even an Actor. This is thanks to ActorRef, which is a wrapper (proxy) around an actor:

- the internal state, i.e. the fields, of an actor is inaccessible from outside (encapsulation)
- the Akka system makes sure that the receive method of each actor processes at most one message at a time (single-threaded) and queues awaiting messages
- the actual actor can be deployed on a different machine in the cluster; ActorRef transparently and invisibly to the client sends messages over the wire to the correct node in the cluster (more on that later in the series).

That being said, let's somehow "return" the random numbers fetched inside our actor. It turns out that inside every actor there is a method with the very promising name sender. It won't be a surprise if I say that it's an ActorRef pointing to the actor that sent the message which we are processing right now:

```scala
object Bootstrap extends App {
  //...
  randomOrgBuffer ! RandomRequest
  //...
}

//...

class RandomOrgBuffer extends Actor with ActorLogging {

  def receive = {
    case RandomRequest =>
      if(buffer.isEmpty) {
        buffer ++= fetchRandomNumbers(50)
      }
      sender ! buffer.dequeue()
  }
}
```

I hope you are already accustomed to the ! notation used to send a message to an actor. If not, there are more conservative alternatives:

```scala
sender tell buffer.dequeue()
sender.tell(buffer.dequeue())
```

Nevertheless, instead of printing the new random numbers on screen we send them back to the sender. A quick test of our program reveals that… nothing happens. Looking closely at the sender reference we discover that it points to Actor[akka://Akka/deadLetters]. deadLetters doesn't sound very promising, but it's logical. sender represents a reference to the actor that sent the given message. We sent the message from a normal Scala class, not from an actor. If we were using two actors and the first one sent a message to the second one, then the second actor could use the sender reference pointing to the first actor to send the reply back. Obviously we would then still not be capable of receiving the reply, despite raising the level of abstraction. We will look at multi-actor scenarios soon; for the time being we have to learn how to integrate normal, non-Akka code with actors. In other words, how to receive a reply so that Akka is not just a black hole that receives messages and never sends any results back. The solution is surprisingly simple – we can wait for a reply!

```scala
implicit val timeout = Timeout(1 minutes)

val future = randomOrgBuffer ? RandomRequest
val veryRandom: Int = Await.result(future.mapTo[Int], 1 minute)
```

The name future is not a coincidence. Although it's not an instance of java.util.concurrent.Future, semantically it represents the exact same concept. But first note that instead of an exclamation mark we use a question mark (?) to send the message. This communication model is known as "ask", as opposed to the already introduced "tell". In essence Akka created a temporary actor named Actor[akka://Akka/temp/$d], sent the message on behalf of that actor and now waits up to one minute for a reply sent back to the aforementioned temporary actor.
Sending the message is still non-blocking, and the future object represents the result of an operation that might not have finished yet (it will be available in the future). Next (now in a blocking manner) we wait for the reply. In addition, mapTo[Int] is necessary since Akka does not know what type of response we expect. You must remember that using the "ask" pattern and waiting/blocking for a reply is very rare. Typically we rely on asynchronous messages and event-driven architecture. One actor should never block waiting for a reply from another actor. But in this particular case we need direct access to the return value, as we are building a bridge between an imperative request/response method and the message-driven Akka system. Having a reply, what interesting use cases can we support? For example we can design our own java.util.Random implementation based entirely on ideal, true random numbers!

```scala
class RandomOrgRandom(randomOrgBuffer: ActorRef) extends java.util.Random {
  implicit val timeout = Timeout(1 minutes)

  override def next(bits: Int) = {
    if(bits <= 16) {
      random16Bits() & ((1 << bits) - 1)
    } else {
      (next(bits - 16) << 16) + random16Bits()
    }
  }

  private def random16Bits(): Int = {
    val future = randomOrgBuffer ? RandomRequest
    Await.result(future.mapTo[Int], 1 minute)
  }
}
```

The implementation details are irrelevant; it is enough to say that we must implement the next() method returning the requested number of random bits, whereas our actor always returns 16 bits. The only thing we need now is a lightweight scala.util.Random wrapping java.util.Random, and we can enjoy an ideally shuffled sequence of numbers:

```scala
val javaRandom = new RandomOrgRandom(randomOrgBuffer)
val scalaRandom = new scala.util.Random(javaRandom)
println(scalaRandom.shuffle((1 to 20).toList))
//List(17, 15, 14, 6, 10, 2, 1, 9, 8, 3, 4, 16, 7, 18, 13, 11, 19, 5, 12, 20)
```

Let's wrap up. First we developed a simple system based on one actor that (if necessary) connects to an external web service and buffers a batch of random numbers.
When requested, it sends back one number from the buffer. Next we integrated the asynchronous world of actors with a synchronous API. By wrapping our actor we implemented our own java.util.Random subclass (see also java.security.SecureRandom). This class can now be used in any place where we need random numbers of very high quality. However, the implementation is far from perfect, which we will address in the next parts. Source code is available on GitHub (request-response tag). This was a translation of my article "Poznajemy Akka: zadanie i odpowiedz", originally published on scala.net.pl.   Reference: Request and response – discovering Akka from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog. ...
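The bit-composition trick in next(bits) — assembling an arbitrary bit width from fixed 16-bit chunks — is independent of Akka, so it can be sketched and tested in plain Java. In the sketch below (class and supplier names are mine, not from the article) the actor round-trip is replaced by an ordinary IntSupplier producing 16-bit values:

```java
import java.util.Random;
import java.util.function.IntSupplier;

// Same composition scheme as RandomOrgRandom: requests of <= 16 bits are
// masked out of one 16-bit chunk; wider requests are built 16 bits at a time.
public class SixteenBitRandom extends Random {

    private final IntSupplier random16Bits; // stands in for the actor round-trip

    public SixteenBitRandom(IntSupplier random16Bits) {
        this.random16Bits = random16Bits;
    }

    @Override
    protected int next(int bits) {
        if (bits <= 16) {
            return random16Bits.getAsInt() & ((1 << bits) - 1);
        }
        return (next(bits - 16) << 16) + random16Bits.getAsInt();
    }

    public static void main(String[] args) {
        Random seed = new Random(42); // any 16-bit source would do here
        SixteenBitRandom rnd = new SixteenBitRandom(() -> seed.nextInt(1 << 16));
        System.out.println(rnd.nextInt(100)); // some value in [0, 100)
    }
}
```

Overriding the protected next(int bits) method is exactly the extension point java.util.Random documents for plugging in a custom bit source; all the public methods (nextInt, nextDouble, shuffle via scala.util.Random, …) are built on top of it.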

Investigating Deadlocks – Part 4: Fixing the Code

In this last blog of a short series on analysing deadlocks, I'm going to fix my BadTransferOperation code. If you've seen the other blogs in the series, you'll know that in order to get to this point I've created the demo code that deadlocks, shown how to get hold of a thread dump and then analysed the thread dump, figuring out where and how a deadlock was occurring. In order to save space, the discussion below refers to both the Account and DeadlockDemo classes from part 1 of this series, which contains full code listings. Textbook descriptions of deadlocks usually go something like this: "Thread A will acquire a lock on object 1 and wait for a lock on object 2, while thread B acquires a lock on object 2 whilst waiting for a lock on object 1". The pile-up shown in my previous blog, and highlighted below, is a real-world deadlock where other threads, locks and objects get in the way of the straightforward, simple, theoretical deadlock situation.

```
Found one Java-level deadlock:
=============================
'Thread-21': waiting to lock monitor 7f97118bd560 (object 7f3366f58, a threads.deadlock.Account),
  which is held by 'Thread-20'
'Thread-20': waiting to lock monitor 7f97118bc108 (object 7f3366e98, a threads.deadlock.Account),
  which is held by 'Thread-4'
'Thread-4': waiting to lock monitor 7f9711834360 (object 7f3366e80, a threads.deadlock.Account),
  which is held by 'Thread-7'
'Thread-7': waiting to lock monitor 7f97118b9708 (object 7f3366eb0, a threads.deadlock.Account),
  which is held by 'Thread-11'
'Thread-11': waiting to lock monitor 7f97118bd560 (object 7f3366f58, a threads.deadlock.Account),
  which is held by 'Thread-20'
```

If you relate the text and image above back to the following code, you can see that Thread-20 has locked its fromAccount object (f58) and is waiting to lock its toAccount object (e98):

```java
private void transfer(Account fromAccount, Account toAccount, int transferAmount) throws OverdrawnException {

    synchronized (fromAccount) {
        synchronized (toAccount) {
            fromAccount.withdraw(transferAmount);
            toAccount.deposit(transferAmount);
        }
    }
}
```

Unfortunately, because of timing issues, Thread-20 cannot get a lock on object e98 because it's waiting for Thread-4 to release its lock on that object. Thread-4 cannot release the lock because it's waiting for Thread-7, Thread-7 is waiting for Thread-11, and Thread-11 is waiting for Thread-20 to release its lock on object f58. This real-world deadlock is just a more complicated version of the textbook description. The problem with this code is that, as you can see from the snippet below, I'm randomly choosing two Account objects from the accounts list as the fromAccount and the toAccount and locking them. As the fromAccount and toAccount can reference any object from the accounts list, it means that they're being locked in a random order.

```java
Account toAccount = accounts.get(rnd.nextInt(NUM_ACCOUNTS));
Account fromAccount = accounts.get(rnd.nextInt(NUM_ACCOUNTS));
```

Therefore, the fix is to impose an order on how the Account objects are locked; any order will do, so long as it's consistent.

```java
private void transfer(Account fromAccount, Account toAccount, int transferAmount) throws OverdrawnException {

    if (fromAccount.getNumber() > toAccount.getNumber()) {
        synchronized (fromAccount) {
            synchronized (toAccount) {
                fromAccount.withdraw(transferAmount);
                toAccount.deposit(transferAmount);
            }
        }
    } else {
        synchronized (toAccount) {
            synchronized (fromAccount) {
                fromAccount.withdraw(transferAmount);
                toAccount.deposit(transferAmount);
            }
        }
    }
}
```

The code above shows the fix. In this code I'm using the account number to ensure that I'm locking the Account object with the highest account number first, so that the deadlock situation above never arises.
The code below is the complete listing for the fix:

```java
public class AvoidsDeadlockDemo {

    private static final int NUM_ACCOUNTS = 10;
    private static final int NUM_THREADS = 20;
    private static final int NUM_ITERATIONS = 100000;
    private static final int MAX_COLUMNS = 60;

    static final Random rnd = new Random();

    List<Account> accounts = new ArrayList<Account>();

    public static void main(String args[]) {
        AvoidsDeadlockDemo demo = new AvoidsDeadlockDemo();
        demo.setUp();
        demo.run();
    }

    void setUp() {
        for (int i = 0; i < NUM_ACCOUNTS; i++) {
            Account account = new Account(i, rnd.nextInt(1000));
            accounts.add(account);
        }
    }

    void run() {
        for (int i = 0; i < NUM_THREADS; i++) {
            new BadTransferOperation(i).start();
        }
    }

    class BadTransferOperation extends Thread {

        int threadNum;

        BadTransferOperation(int threadNum) {
            this.threadNum = threadNum;
        }

        @Override
        public void run() {
            for (int i = 0; i < NUM_ITERATIONS; i++) {
                Account toAccount = accounts.get(rnd.nextInt(NUM_ACCOUNTS));
                Account fromAccount = accounts.get(rnd.nextInt(NUM_ACCOUNTS));
                int amount = rnd.nextInt(1000);

                if (!toAccount.equals(fromAccount)) {
                    try {
                        transfer(fromAccount, toAccount, amount);
                        System.out.print(".");
                    } catch (OverdrawnException e) {
                        System.out.print("-");
                    }
                    printNewLine(i);
                }
            }
            System.out.println("Thread Complete: " + threadNum);
        }

        private void printNewLine(int columnNumber) {
            if (columnNumber % MAX_COLUMNS == 0) {
                System.out.print("\n");
            }
        }

        /**
         * This is the crucial point here. The idea is that to avoid deadlock
         * you need to ensure that threads always lock the same two accounts
         * in the same order.
         */
        private void transfer(Account fromAccount, Account toAccount, int transferAmount) throws OverdrawnException {
            if (fromAccount.getNumber() > toAccount.getNumber()) {
                synchronized (fromAccount) {
                    synchronized (toAccount) {
                        fromAccount.withdraw(transferAmount);
                        toAccount.deposit(transferAmount);
                    }
                }
            } else {
                synchronized (toAccount) {
                    synchronized (fromAccount) {
                        fromAccount.withdraw(transferAmount);
                        toAccount.deposit(transferAmount);
                    }
                }
            }
        }
    }
}
```

In my sample code, a deadlock occurs because of a timing issue and the nested synchronized keywords in my BadTransferOperation class. In this code, the synchronized keywords are on adjacent lines; however, as a final point, it's worth noting that it doesn't matter where in your code the synchronized keywords are (they don't have to be adjacent). So long as you're locking two (or more) different monitor objects with the same thread, ordering matters and deadlocks happen. For more information see the other blogs in this series. All source code for this and other blogs in the series is available on GitHub at git://github.com/roghughe/captaindebug.git   Reference: Investigating Deadlocks – Part 4: Fixing the Code from our JCG partner Roger Hughes at the Captain Debug’s Blog blog. ...
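If you want to convince yourself that the ordered-locking version really is deadlock-free, a stripped-down harness like the one below can hammer two accounts from two threads that deliberately pass the accounts in opposite orders. Account here is a minimal stand-in for the article's class (no OverdrawnException, balances may go negative) so that the sketch stays self-contained:

```java
import java.util.concurrent.TimeUnit;

public class OrderedLockingDemo {

    static class Account {
        final int number;
        int balance;

        Account(int number, int balance) {
            this.number = number;
            this.balance = balance;
        }
    }

    // Always lock the account with the higher number first, as in the fixed transfer().
    static void transfer(Account from, Account to, int amount) {
        Account first = from.number > to.number ? from : to;
        Account second = first == from ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.balance -= amount;
                to.balance += amount;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final Account a = new Account(0, 1000);
        final Account b = new Account(1, 1000);

        // The two threads name the accounts in opposite orders, which would
        // deadlock under the original lock-in-argument-order implementation.
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) transfer(a, b, 1);
        });
        Thread t2 = new Thread(() -> {
            for (int i = 0; i < 100_000; i++) transfer(b, a, 1);
        });
        t1.start();
        t2.start();
        t1.join(TimeUnit.SECONDS.toMillis(30)); // would time out if the locking deadlocked
        t2.join(TimeUnit.SECONDS.toMillis(30));

        // Money is only moved, never created or destroyed.
        System.out.println(a.balance + b.balance); // prints 2000
    }
}
```

Both threads finish and the total balance is conserved, because both lock acquisitions always happen in the same canonical order regardless of the argument order.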

Polyglot Persistence: EclipseLink with MongoDB and Derby

Polyglot persistence has been in the news for some time now. Kicked off by the famous Fowler post from the end of 2011, I see more and more nice ideas coming up. The latest was a company-internal student project in which we used Scala as a backend persisting data into MongoDB, Derby and Solr. I'm not a big fan of Scala, and I remembered EclipseLink's growing support for NoSQL databases. Given that, I simply had to try this. Where to start? The biggest issue is the missing examples. You find quite a bit of material about how to change the data containers (either NoSQL or RDBMS) with EclipseLink, but you will not find a single example which uses both technologies seamlessly. Thanks to Shaun Smith and Gunnar Wagenknecht we have the great JavaOne talk Polyglot Persistence: EclipseLink JPA for NoSQL, Relational, and Beyond, which talks exactly about this. Unfortunately the sources still haven't been pushed anywhere and I had to rebuild this from the talk. So, credits go to Shaun and Gunnar for this. The magic solution is called Persistence Unit Composition. You need one persistence unit for every data container. That looks like the following basic example: you have a couple of entities in each PU, and a composite PU is the umbrella.

Let's go

You should have MongoDB in place before you start this little tutorial example. Fire up NetBeans and create two Java projects; let's call them polyglot-persistence-nosql-pu and polyglot-persistence-rational-pu. Put the following entities into the nosql-pu: Customer, Address, Order and OrderLine (mostly taken from the EclipseLink nosql examples), and put a Product entity into the rational-pu. The single products go into Derby, while all the other entities persist into MongoDB. The interesting part is where OrderLine has a one-to-one relation to a Product:

```java
@OneToOne(cascade = {CascadeType.REMOVE, CascadeType.PERSIST})
private Product product;
```

This is the point where both worlds come together. More on that later.
Both PUs need to be transaction-type='RESOURCE_LOCAL' and need to contain the following line in the persistence.xml:

```xml
<property name='eclipselink.composite-unit.member' value='true'/>
```

Don't forget to add the DB-specific configuration. For MongoDB this is:

```xml
<property name='eclipselink.nosql.property.mongo.port' value='27017'/>
<property name='eclipselink.nosql.property.mongo.host' value='localhost'/>
<property name='eclipselink.nosql.property.mongo.db' value='mydb'/>
```

For Derby this is something like this:

```xml
<property name='javax.persistence.jdbc.url' value='jdbc:derby://localhost:1527/mydb'/>
<property name='javax.persistence.jdbc.password' value='sa'/>
<property name='javax.persistence.jdbc.driver' value='org.apache.derby.jdbc.ClientDriver'/>
<property name='javax.persistence.jdbc.user' value='sa'/>
```

Now we need something to link those two PUs together. The composite PU resides in a sample polyglot-persistence-web module and looks like this:

```xml
<persistence-unit name='composite-pu' transaction-type='RESOURCE_LOCAL'>
  <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
  <jar-file>\lib\polyglot-persistence-rational-pu-1.0-SNAPSHOT.jar</jar-file>
  <jar-file>\lib\polyglot-persistence-nosql-pu-1.0-SNAPSHOT.jar</jar-file>
  <properties>
    <property name='eclipselink.composite-unit' value='true'/>
  </properties>
</persistence-unit>
```

Watch out for the jar-file path. We are going to package this in a war archive, and because of this the nosql-pu and the rational-pu will go into the WEB-INF/lib folder. As you can see, my example is built with Maven. Make sure to use the latest EclipseLink dependency; even GlassFish still ships with a lower version. MongoDB support was added beginning with 2.4.

```xml
<dependency>
  <groupId>org.eclipse.persistence</groupId>
  <artifactId>eclipselink</artifactId>
  <version>2.4.1</version>
</dependency>
```

Beside this, you also need to turn GlassFish's classloaders around:

```xml
<class-loader delegate='false'/>
```

Don't worry about the details.
I put everything up on github.com/myfear, so you can dig into the complete example later on your own.

Testing it

Let's make some very brief tests with it. Create a nice little demo servlet and inject the composite PU into it. Create an EntityManager from it and get a transaction. Now start creating products, a customer, the order and the separate order lines. All plain JPA, no further magic here:

```java
@PersistenceUnit(unitName = "composite-pu")
private EntityManagerFactory emf;

protected void processRequest() // [...]
{
    EntityManager em = emf.createEntityManager();
    em.getTransaction().begin();

    // Products go into RDBMS
    Product installation = new Product("installation");
    em.persist(installation);

    Product shipping = new Product("shipping");
    em.persist(shipping);

    Product maschine = new Product("maschine");
    em.persist(maschine);

    // Customer into NoSQL
    Customer customer = new Customer();
    customer.setName("myfear");
    em.persist(customer);

    // Order into NoSQL
    Order order = new Order();
    order.setCustomer(customer);
    order.setDescription("Pinball maschine");

    // Order lines mapping NoSQL --- RDBMS
    order.addOrderLine(new OrderLine(maschine, 2999));
    order.addOrderLine(new OrderLine(shipping, 59));
    order.addOrderLine(new OrderLine(installation, 129));

    em.persist(order);
    em.getTransaction().commit();
    String orderId = order.getId();
    em.close();
```

If you put the right logging properties in place you can see what is happening: a couple of sequences are assigned to the created Product entities (GeneratedValue), and the Customer entity gets persisted into MongoDB with a MappedInteraction. Entities map onto collections in MongoDB.

```
FINE: Executing MappedInteraction()
  spec => null
  properties => {mongo.collection=CUSTOMER, mongo.operation=INSERT}
  input => [DatabaseRecord(
    CUSTOMER._id => 5098FF0C3D9F5D2CCB3CFECF
    CUSTOMER.NAME => myfear)]
```

After that you see the products being inserted into Derby, and again the MappedInteraction that persists the Order into MongoDB.
The really cool part is down at the order lines:

```
ORDER.ORDERLINES => [DatabaseRecord(
  LINENUMBER => 1
  COST => 2999.0
  PRODUCT_ID => 3),
 DatabaseRecord(
  LINENUMBER => 2
  COST => 59.0
  PRODUCT_ID => 2),
 DatabaseRecord(
  LINENUMBER => 3
  COST => 129.0
  PRODUCT_ID => 1)]
```

Each order line record carries the PRODUCT_ID that was generated for the related Product entity. Further on, you can also find the related Order and iterate over the products to get their descriptions:

```java
Order order2 = em.find(Order.class, orderId);
for (OrderLine orderLine : order2.getOrderLines()) {
    String desc = orderLine.getProduct().getDescription();
}
```

The nice little demo looks like this. Thanks Shaun, thanks Gunnar for this nice little example. Now go to github.com/myfear and get your hands dirty :)   Reference: Polyglot Persistence: EclipseLink with MongoDB and Derby from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog. ...
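What the composite persistence unit automates here is essentially a cross-store join: the document side stores only a PRODUCT_ID, and EclipseLink resolves the relational Product by that key. The caricature below (my own names; two in-memory maps standing in for MongoDB and Derby) shows the resolution that orderLine.getProduct() performs for you:

```java
import java.util.HashMap;
import java.util.Map;

// Caricature of the composite PU's cross-store navigation: the "document store"
// keeps only PRODUCT_ID, and the "join" to the relational side happens by key.
public class CrossStoreJoinSketch {

    public static void main(String[] args) {
        // "Derby": products by generated id
        Map<Integer, String> products = new HashMap<>();
        products.put(1, "installation");
        products.put(2, "shipping");
        products.put(3, "maschine");

        // "MongoDB": an order line document referencing a product by id
        Map<String, Object> orderLine = new HashMap<>();
        orderLine.put("LINENUMBER", 1);
        orderLine.put("COST", 2999.0);
        orderLine.put("PRODUCT_ID", 3);

        // Resolving orderLine.getProduct().getDescription() by hand
        String description = products.get(orderLine.get("PRODUCT_ID"));
        System.out.println(description); // prints maschine
    }
}
```

Of course the real thing adds caching, lazy loading and transaction handling on top; this is only meant to make the log output above easier to read.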

20 Kick-ass programming quotes

This post serves as a compilation of great programming quotes, quotes from famous programmers, computer scientists and savvy entrepreneurs of our time. Some of them are funny, some of them are motivational, some of them are… just awesome! So, in no particular order, let’s see what we have… ...

Oracle ADF Mobile World! Hello!

Hello, ADF Mobile, World! As you probably already know… ADF Mobile is here! Here are some links that will make you feel at home:

Home page of ADF Mobile: http://www.oracle.com/technetwork/developer-tools/adf/overview/adf-mobile-096323.html
How to set up your JDeveloper: http://docs.oracle.com/cd/E18941_01/tutorials/MobileTutorial/jdtut_11r2_54_1.html
Developer's Guide: http://docs.oracle.com/cd/E35521_01/doc.111230/e24475/toc.htm
Some sales stuff: http://www.oracle.com/technetwork/developer-tools/jdev/adf-mobile-development-129800.pdf
And of course, the samples!! Samples are good. We need samples! Samples are goooood: http://www.oracle.com/technetwork/developer-tools/adf/adf-mobile-samples-1865088.html
Additional references: http://technology.amis.nl/2012/10/22/adf-mobile-is-now-generally-available/

Well, that is all we need for now… This post is about mobile.. (daaaaaawn of the dead).. obviously.. So let's get started. This post does not aim to replace any of the official documentation. First we have to set up our JDeveloper for ADF Mobile development (everything in this post is well documented in the above links.. this is just for reference, flavour and colour).

You have to install the plugin for ADF Mobile development. This is fairly easy: just go to Updates in your JDeveloper and install it through the updates process. After you have downloaded and installed the plugin, you have to restart. So, restart. Then you have to load the extension. That is easy as well: just go to Tools > Preferences > ADF Mobile and press 'Load Extension'. After that you have to select the platform you want to develop on. This sample uses iOS. You have to install Xcode to get it working on your Mac. In case you noticed: there is a strange behaviour in the preferences of ADF Mobile. If you select iOS and then go back to ADF Mobile and platforms again, you will have the Android platform selected… (see video here). The good thing is that it does not lose your paths.
For those of you who don't have the simulator path set by default, the hint below the input text is quite good. Just follow the path on your Mac and you will be fine. Don't forget: you have to install Xcode first!! OK, we have that working now! (We will see if that strange behaviour is going to affect us in the process.) What else is there? Oh yes, the sample application!!!!

But wait?? I have some questions first! What is going on with the DB? Do we need web services?? Do we have to bake a cake first? Is there anything else we have to do before developing a very simple ADF Mobile application?? Yes, of course. There are lots of things to do before making your very first ADF Mobile application.. Why don't we understand the architecture first? (see references). Why don't we bake a cake and cook a meal first? Why don't we make up excuses in order to postpone the inevitable? The world went mobile!!! Let's get mobile then! Let's start coding and we will pick up the rest in time. There is a lot to learn indeed, but let's take small steps. No! I would like to learn the bigger picture now! I want to know what is going on.. I want to know how to talk the language. Alright.. It sounds like you want to know everything about snowboarding without even trying to see if you can simply balance and slide… (image from the official documentation)

Nice, isn't it? Do you feel better now? You like it, don't you? Do you get the bigger picture now? Great. By the way, do you have any questions?? I am sure you do. In fact we all do! But perhaps it would be a lot better if we saw everything in slow motion and with small examples in a series of posts. At least that is my intention: small and simple for starters. One interesting thing to notice here, apart from the others, is the use of PhoneGap. As you can see in the above image, the Web View contains all the types of views (server HTML, HTML5 etc..) and PhoneGap covers the gap between those views and the devices..
For more information about PhoneGap, please visit the FAQ of PhoneGap itself. The above link will give you enough answers to get the picture for now. Another very important thing is that with every ADF Mobile application there is a small JVM included! The following content is extracted from the official documentation:

"Java runtime powered by an embedded Java VM bundled with each application. Note: ADF Mobile's model-view-controller stack resides on a mobile device and represents a reimplementation of ADF's model-view-controller layers. UI metadata is rendered to native components on the device and is bound to the model through the ADF Model."

You see, every application is powered by an embedded JVM!! And you can use that in your iPhone!!! Without going into many details, the last thing that we note here is the local data. The following content is extracted from the official documentation:

"Local Data refers to data stores that reside on the device. In ADF Mobile, these are implemented as encrypted SQLite databases. Create Retrieve Update Delete (CRUD) operations are supported on this local data store through the Java layer, using JDBC-based APIs."

So in all: we will be using PhoneGap, a JVM and embedded encrypted SQLite databases!! Which means that we can create applications that store data in the local DB.. I think this brief introduction gives the basic idea of ADF Mobile. On with the coding!! Where were we? Oh yes! Nowhere.. we just set up our environment. Wait! Do we need a database for this sample application? No we don't. This is going to be fairly simple. So what do we do? Let's go bowling! Shut the front door!!! We are doing this. Just create a new application from JDeveloper. Then follow the wizards and eventually you will get the following:

Sorry, what?? What is that?

That is the adfmf-feature.xml file. This file is used to configure the features of your application. We won't be needing it for now.
But I am sure that some of you will want to explore it a bit more. So here is the documentation: http://docs.oracle.com/cd/E35521_01/doc.111230/e24475/define_features.htm#autoId19 The following content is extracted from the above link: The adfmf-feature.xml file enables you to configure the actual mobile application features that are referenced by the element in the corresponding adfmf-application.xml file. So basically, what it says is that adfmf-feature.xml is the configuration file for all the features your application might have, while those features are declared in the adfmf-application.xml file. That file is located in the descriptors section in JDeveloper; see the image below: So, adfmf-application.xml holds the features of your application and adfmf-feature.xml configures them. An additional resource on adfmf-application.xml and adfmf-feature.xml at a more basic level: http://docs.oracle.com/cd/E35521_01/doc.111230/e24475/getting_started.htm#autoId3 More on that later on. Another interesting thing is that we already have a DataControl generated! What is that DataControl about? That data control handles the operations on your device: http://docs.oracle.com/cd/E35521_01/doc.111230/e24475/getting_started.htm#autoId3 The following content is extracted from the above link: After you complete an ADF Mobile application project, JDeveloper adds application-level and project-level artifacts, and JDeveloper creates the DeviceFeatures data control. The PhoneGap Java API is abstracted through this data control, thus enabling the application features implemented as ADF Mobile AMX to access various services embedded on the device. JDeveloper also creates the ApplicationFeatures data control, which enables you to build a springboard page.
By dragging and dropping the operations provided by the DeviceFeatures data control into an ADF Mobile AMX page (which is described in Section 9.5, ‘Using the DeviceFeatures Data Control’), you add functions to manage the user contacts stored on the device, create and send both e-mail and SMS text messages, ascertain the location of the device, use the device’s camera, and retrieve images stored in the device’s file system. That autogenerated DeviceFeatures data control is there to help us access various services that are embedded on the device. The ApplicationFeatures data control is a different story and we will talk about it in later posts. OK. Let's try to create a simple page. In order to create a page, just right-click on the ViewController and create a new HTML page, let's say HelloWorld.html. The result will be something like the following: Write some text: Are we there yet? No. Let's go bowling then! No. What else is there? Well, we need a feature! Remember the adfmf-feature.xml file? Great! Go there and add a new feature. Give it the name you want and make sure it is selected. Since this will be a local HTML page, we have to set it up as such. So, in the properties of the feature, make sure that the type is HTML. Since this is going to be a local page, we have to provide the path. That's it! All we have to do is package it as an iOS application and test it with the simulator. This is not a simple right-click and run; we have to create a deployment profile. Since we want to run this with the iPhone simulator, we have to create the deployment profile. So, right-click on the application and select Deploy > New Deployment Profile. Press OK. Then, make sure that the settings are correct for your simulator: I had to set them manually. Click OK and the deployment profile is ready. In order to test the application, right-click on the application, select the profile you created previously and deploy it.
This will start your iOS Simulator and you will be able to find your application. If you click on the application you will see our page! And that is it! Once we understand how it works, one step at a time, it is fairly easy to remember. This is the beginning!   Reference: Oracle ADF Mobile World! Hello! from our JCG partner Dimitrios Stassinopoulos at the Born To DeBug blog. ...

JPA/Hibernate: Version-Based Optimistic Concurrency Control

This article is an introduction to version-based optimistic concurrency control in Hibernate and JPA. The concept is fairly old and much has been written on it, but anyway I have seen it reinvented, misunderstood and misused. I'm writing it just to spread knowledge and hopefully spark interest in the subject of concurrency control and locking. Use Cases Let's say we have a system used by multiple users, where each entity can be modified by more than one user. We want to prevent situations where two persons load some information, make some decision based on what they see, and update the state at the same time. We don't want to lose the changes made by the user who first clicked "save" by overwriting them in the following transaction. It can also happen in a server environment – multiple transactions can modify a shared entity, and we want to prevent scenarios like this: Transaction 1 loads data. Transaction 2 updates that data and commits. Using the state loaded in step 1 (which is no longer current), transaction 1 performs some calculations and updates the state. In some ways it's comparable to non-repeatable reads. Solution: Versioning Hibernate and JPA implement the concept of version-based concurrency control for this reason. Here's how it works. You can mark a simple property with @Version or <version> (numeric or timestamp). It's going to be a special column in the database. Our mapping can look like:

@Entity
@Table(name = "orders")
public class Order {

    @Id
    private long id;

    @Version
    private int version;

    private String description;

    private String status;

    // ... mutators
}

When such an entity is persisted, the version property is set to a starting value. Whenever it's updated, Hibernate executes a query like: update orders set description=?, status=?, version=? where id=? and version=? Note that in the last line, the WHERE clause now includes version. This value is always set to the "old" value, so that it will only update a row if it has the expected version.
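The version-checked UPDATE can be simulated without a database. Here is a minimal, library-free Java sketch of the same mechanics; the VersionedOrder class and tryUpdate method are illustrative names, not part of the JPA or Hibernate API:

```java
// Minimal in-memory sketch of version-based optimistic concurrency.
// VersionedOrder and tryUpdate are illustrative names, not JPA/Hibernate API.
class VersionedOrder {
    private String status;
    private int version = 1; // starting value, set when the entity is "persisted"

    String getStatus() { return status; }
    int getVersion()   { return version; }

    // Mimics: UPDATE orders SET status=?, version=? WHERE id=? AND version=?
    // Returns true when exactly one "row" matched the expected (old) version.
    synchronized boolean tryUpdate(String newStatus, int expectedVersion) {
        if (version != expectedVersion) {
            return false;              // 0 rows affected: the caller held stale state
        }
        status = newStatus;
        version = expectedVersion + 1; // counter incremented on every successful update
        return true;
    }
}
```

A first writer updating against the expected version 1 succeeds and bumps the counter to 2; a second writer still holding version 1 then matches nothing and fails, which is exactly the condition Hibernate surfaces as a stale-state exception.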
Let's say two users load an order at version 1 and take a while looking at it in the GUI. Anne decides to approve the order and executes that action. The status is updated in the database, and everything works as expected. The versions passed to the update statement look like: update orders set description=?, status=?, version=2 where id=? and version=1 As you can see, while persisting that update the persistence layer increments the version counter to 2. In her GUI, Betty still has the old version (number 1). When she decides to perform an update on the order, the statement looks like: update orders set description=?, status=?, version=2 where id=? and version=1 At this point, after Anne's update, the row's version in the database is 2. So this second update affects 0 rows (nothing matches the WHERE clause). Hibernate detects that and throws an org.hibernate.StaleObjectStateException (wrapped in a javax.persistence.OptimisticLockException). As a result, the second user cannot perform any updates unless she refreshes the view. For proper user experience we need some clean exception handling, but I'll leave that out. Configuration There is little to customize here. The @Version property can be a number or a timestamp. A number is artificial, but typically occupies fewer bytes in memory and in the database. A timestamp is larger, but it is always updated to the "current timestamp", so you can actually use it to determine when the entity was updated. Why? So why would we use it? It provides a convenient and automated way to maintain consistency in scenarios like those described above. It means that each action can only be performed once, and it guarantees that the user or server process saw up-to-date state while making a business decision. It takes very little work to set up. Thanks to its optimistic nature, it's fast. There is no locking anywhere, only one more field added to the same queries. In a way it guarantees repeatable reads even with the read committed transaction isolation level.
It would end with an exception, but at least it's not possible to create inconsistent state. It works well with very long conversations, including those that span multiple transactions. It's perfectly consistent in all possible scenarios and race conditions on ACID databases: the updates must be sequential, since an update involves a row lock, so the "second" one will always affect 0 rows and fail.  Demo To demonstrate this, I created a very simple web application. It wires together Spring and Hibernate (behind the JPA API), but it would work in other settings as well: pure Hibernate (no JPA), JPA with a different implementation, non-webapp, non-Spring, etc. The application keeps one Order with a schema similar to the above and shows it in a web form where you can update the description and status. To experiment with concurrency control, open the page in two tabs, make different modifications and save. Try the same thing without @Version. It uses an embedded database, so it needs minimal setup (only a web container) and only takes a restart to start with a fresh database. It's pretty simplistic – it accesses the EntityManager in a @Transactional @Controller and backs the form directly with the JPA-mapped entity. This may not be the best way to do things for less trivial projects, but at least it gathers all the code in one place and is very easy to grasp. Full source code as an Eclipse project can be found at my GitHub repository.   Reference: Version-Based Optimistic Concurrency Control in JPA/Hibernate from our JCG partner Konrad Garus at the Squirrel’s blog. ...

By your Command – Command design pattern

The Command design pattern is one of the widely known design patterns, and it falls under the behavioral patterns (part of the Gang of Four catalog). As the name suggests, it is related to actions and events in an application.   Problem statement: Imagine a scenario where we have a web page with multiple menus in it. One way of writing this code is to have multiple if-else conditions and to execute the actions on each click of a menu.

private void getAction(String action) {
    if (action.equalsIgnoreCase("New")) {
        // Create new file
    } else if (action.equalsIgnoreCase("Open")) {
        // Open existing file
    } else if (action.equalsIgnoreCase("Print")) {
        // Print the file
    } else if (action.equalsIgnoreCase("Exit")) {
        // Get out of the application
    }
}

We have to execute the actions based on the action string. However, the above code has too many if conditions and will not stay readable as it grows further. Intent: The requestor of the action needs to be decoupled from the object that carries out the action. Allow encapsulation of the request as an object – note this line, as it is a very important concept for the Command pattern. Allow storage of the requests in a queue, i.e. allow storing a list of actions that you can execute later.  Solution: To resolve the above problem, the Command pattern comes to the rescue. As mentioned above, the Command pattern moves the above actions into objects through encapsulation; when such an object is executed, it executes the command. Here every command is an object. So we will have to create individual classes for each of the menu actions, like NewClass, OpenClass, PrintClass and ExitClass, and all these classes implement a common parent, the Command interface. This interface (the Command interface) abstracts/wraps all the child action classes. Now we introduce an Invoker class whose main job is to map an action to the class that handles it. It basically holds the action and gets the command to execute a request by calling the execute() method. Oops!!
We missed another stakeholder here: the Receiver class. The receiver has the knowledge of what to do to carry out an operation when the action is performed. Structure: Following are the participants of the Command design pattern: Command – an interface for executing an operation. ConcreteCommand – this class extends the Command interface and implements the execute method; it creates a binding between the action and the receiver. Client – this class creates the ConcreteCommand class and associates it with the receiver. Invoker – this class asks the command to carry out the request. Receiver – this class knows how to perform the operation.  Example:  Steps: Define a Command interface with a method signature like execute(). In the above example, ActionListenerCommand is the command interface having a single execute() method. Create one or more derived classes that encapsulate some subset of the following: a "receiver" object, the method to invoke, the arguments to pass. In the above example, ActionOpen and ActionSave are the concrete command classes which create a binding between the receiver and the action. The ActionOpen class calls the receiver's (in this case the Document class's) action method inside execute(), thus telling the receiver what needs to be done. Instantiate a Command object for each deferred execution request. Pass the Command object from the creator to the invoker. The invoker decides when to call execute(). The client instantiates the Receiver object (Document) and the Command objects, and allows the invoker to call the command.
Code Example: Command interface:

public interface ActionListenerCommand {
    public void execute();
}

Receiver class:

public class Document {
    public void Open() {
        System.out.println("Document Opened");
    }
    public void Save() {
        System.out.println("Document Saved");
    }
}

Concrete Command:

public class ActionOpen implements ActionListenerCommand {
    private Document adoc;

    public ActionOpen(Document doc) {
        this.adoc = doc;
    }

    @Override
    public void execute() {
        adoc.Open();
    }
}

Invoker class:

public class MenuOptions {
    private ActionListenerCommand openCommand;
    private ActionListenerCommand saveCommand;

    public MenuOptions(ActionListenerCommand open, ActionListenerCommand save) {
        this.openCommand = open;
        this.saveCommand = save;
    }
    public void clickOpen() {
        openCommand.execute();
    }
    public void clickSave() {
        saveCommand.execute();
    }
}

Client class:

public class Client {
    public static void main(String[] args) {
        Document doc = new Document();
        ActionListenerCommand clickOpen = new ActionOpen(doc);
        ActionListenerCommand clickSave = new ActionSave(doc);
        MenuOptions menu = new MenuOptions(clickOpen, clickSave);
        menu.clickOpen();
        menu.clickSave();
    }
}

Benefits: The Command pattern helps to decouple the invoker and the receiver; the receiver is the one which knows how to perform an action. A command should be able to implement undo and redo operations. This pattern helps in terms of extensibility, as we can add new commands without changing existing code. Drawback: The main disadvantage of the Command pattern is the increase in the number of classes, one for each individual command. These actions could also have been implemented as plain methods; however, the command pattern classes are more readable than creating multiple methods with if-else conditions. Interesting points: Implementations of java.lang.Runnable and javax.swing.Action follow the command design pattern. Command can use Memento to maintain the state required for an undo operation.
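The undo idea from the interesting points above can be sketched by pairing each command with an inverse action and keeping executed commands on a stack. This is a hypothetical illustration: the Counter receiver and CommandHistory invoker below are not part of the article's example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: each command knows its inverse, and the invoker
// keeps executed commands on a stack so the most recent can be undone.
interface UndoableCommand {
    void execute();
    void undo();
}

// Illustrative receiver (not the article's Document class).
class Counter {
    private int value;
    int value() { return value; }
    void add(int n) { value += n; }
}

class AddCommand implements UndoableCommand {
    private final Counter counter;
    private final int amount;

    AddCommand(Counter counter, int amount) {
        this.counter = counter;
        this.amount = amount;
    }
    public void execute() { counter.add(amount); }
    public void undo()    { counter.add(-amount); } // inverse of execute()
}

// Invoker: runs commands and records them for later undo.
class CommandHistory {
    private final Deque<UndoableCommand> history = new ArrayDeque<>();

    void run(UndoableCommand command) {
        command.execute();
        history.push(command);
    }
    void undoLast() {
        if (!history.isEmpty()) {
            history.pop().undo();
        }
    }
}
```

Because commands are plain objects, the same stack could just as well be a queue of deferred requests, which is the storage idea mentioned in the intent.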
Download Sample Code:  Reference: By your Command from our JCG partner Mainak Goswami at the Idiotechie blog. ...

Coherence Event Processing by using Map Trigger Feature

This article shows how to process Coherence events by using map triggers. The article Distributed Data Management in Oracle Coherence is suggested as a starting point for the basic configuration and implementation of the Oracle Coherence API. Map triggers are one of the most important features of Oracle Coherence for building a highly customized cache management system. A MapTrigger represents a functional agent that allows you to validate, reject or modify mutating operations against an underlying map. They can also prevent invalid transactions, enforce security, provide event logging and auditing, and gather statistics on data modifications. For example, say we have code that is working with a NamedCache, and we want to change an entry's behavior or contents before the entry is inserted into the map. This change can be made without modifying all the existing code by enabling a map trigger. There are two ways to add the map trigger feature to an application: 1) A MapTriggerListener can be used to register a MapTrigger with a NamedCache 2) The class-factory mechanism can be used in the coherence-cache-config.xml configuration file In the following sample application, MapTrigger functionality is implemented following the first way. A new cluster called OTV is created, and a User bean is distributed via the user-map NamedCache object shared between the two members of the cluster. Used technologies: JDK 1.6.0_35 Spring 3.1.2 Coherence 3.7.1 Maven 3.0.2   STEP 1 : CREATE MAVEN PROJECT A Maven project is created as below. (It can be created by using Maven or an IDE plug-in.)  STEP 2 : COHERENCE PACKAGE Coherence is downloaded via the Coherence Package STEP 3 : LIBRARIES Firstly, Spring dependencies are added to Maven's pom.xml.
<!-- Spring 3.1.2 dependencies --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>${spring.version}</version> </dependency> The Coherence library is installed into the local Maven repository manually and its description is added to pom.xml as below. If Maven is not used to manage the project, the coherence.jar file can be added to the classpath instead. <!-- Coherence library(from local repository) --> <dependency> <groupId>com.tangosol</groupId> <artifactId>coherence</artifactId> <version>3.7.1</version> </dependency> To create a runnable jar, the following Maven plugin can be used. <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <version>1.3.1</version><executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <transformers> <transformer implementation='org.apache.maven.plugins.shade.resource.ManifestResourceTransformer'> <mainClass>com.otv.exe.Application</mainClass> </transformer> <transformer implementation='org.apache.maven.plugins.shade.resource.AppendingTransformer'> <resource>META-INF/spring.handlers</resource> </transformer> <transformer implementation='org.apache.maven.plugins.shade.resource.AppendingTransformer'> <resource>META-INF/spring.schemas</resource> </transformer> </transformers> </configuration> </execution> </executions> </plugin>   STEP 4 : CREATE otv-coherence-cache-config.xml The first Coherence configuration file is otv-coherence-cache-config.xml. It contains the caching-schemes (distributed or replicated) and caching-scheme-mapping configuration. The created cache configuration should be added to coherence-cache-config.xml.
<?xml version='1.0'?><cache-config xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns='http://xmlns.oracle.com/coherence/coherence-cache-config' xsi:schemaLocation='http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd'><caching-scheme-mapping> <cache-mapping> <cache-name>user-map</cache-name> <scheme-name>MapDistCache</scheme-name> </cache-mapping> </caching-scheme-mapping><caching-schemes> <distributed-scheme> <scheme-name>MapDistCache</scheme-name> <service-name>MapDistCache</service-name> <backing-map-scheme> <local-scheme> <unit-calculator>BINARY</unit-calculator> </local-scheme> </backing-map-scheme> <autostart>true</autostart> </distributed-scheme> </caching-schemes></cache-config>   STEP 5 : CREATE tangosol-coherence-override.xml Second Coherence configuration file is tangosol-coherence-override.xml. It contains cluster, member-identity and configurable-cache-factory configuration. tangosol-coherence-override.xml for first member of the cluster : <?xml version='1.0'?><coherence xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns='http://xmlns.oracle.com/coherence/coherence-operational-config' xsi:schemaLocation='http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd'><cluster-config><member-identity> <cluster-name>OTV</cluster-name> <role-name>OTV1</role-name> </member-identity><unicast-listener> <well-known-addresses> <socket-address id='1'> <address>x.x.x.x</address> <port>8089</port> </socket-address> <socket-address id='2'> <address>x.x.x.x</address> <port>8090</port> </socket-address> </well-known-addresses><machine-id>1001</machine-id> <address>x.x.x.x</address> <port>8089</port> <port-auto-adjust>true</port-auto-adjust> </unicast-listener></cluster-config><configurable-cache-factory-config> <init-params> <init-param> <param-type>java.lang.String</param-type> <param-value system-property='tangosol.coherence.cacheconfig'> otv-coherence-cache-config.xml</param-value> 
</init-param> </init-params> </configurable-cache-factory-config></coherence>   tangosol-coherence-override.xml for second member of the cluster : <?xml version='1.0'?><coherence xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xmlns='http://xmlns.oracle.com/coherence/coherence-operational-config' xsi:schemaLocation='http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd'><cluster-config><member-identity> <cluster-name>OTV</cluster-name> <role-name>OTV2</role-name> </member-identity><unicast-listener><well-known-addresses> <socket-address id='1'> <address>x.x.x.x</address> <port>8090</port> </socket-address> <socket-address id='2'> <address>x.x.x.x</address> <port>8089</port> </socket-address> </well-known-addresses><machine-id>1002</machine-id> <address>x.x.x.x</address> <port>8090</port> <port-auto-adjust>true</port-auto-adjust></unicast-listener></cluster-config><configurable-cache-factory-config> <init-params> <init-param> <param-type>java.lang.String</param-type> <param-value system-property='tangosol.coherence.cacheconfig'> otv-coherence-cache-config.xml</param-value> </init-param> </init-params> </configurable-cache-factory-config></coherence>   STEP 6 : CREATE applicationContext.xml Spring Configuration file, applicationContext.xml, is created. <beans xmlns='http://www.springframework.org/schema/beans' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:schemaLocation='http://www.springframework.org/schema/beanshttp://www.springframework.org/schema/beans/spring-beans-3.0.xsd'><!-- Beans Declaration --> <bean id='userCacheService' class='com.otv.srv.UserCacheService'></bean><bean id='userCacheUpdater' class='com.otv.exe.UserCacheUpdater'> <property name='userCacheService' ref='userCacheService' /> </bean></beans>   STEP 7 : CREATE User CLASS A new User Spring bean is created. This bean will be distributed between two node in OTV cluster. 
For serializing, java.io.Serializable interface has been implemented but PortableObject can be implemented for better performance. package com.otv.user;import java.io.Serializable;/** * User Bean * * @author onlinetechvision.com * @since 29 Oct 2012 * @version 1.0.0 * */ public class User implements Serializable {private static final long serialVersionUID = -1963764656789800896L;private String id; private String name; private String surname;public String getId() { return id; }public void setId(String id) { this.id = id; }public String getName() { return name; }public void setName(String name) { this.name = name; }public String getSurname() { return surname; }public void setSurname(String surname) { this.surname = surname; }@Override public String toString() { StringBuilder strBuff = new StringBuilder(); strBuff.append('id : ').append(id); strBuff.append(', name : ').append(name); strBuff.append(', surname : ').append(surname); return strBuff.toString(); } }   STEP 8 : CREATE IUserCacheService INTERFACE A new IUserCacheService Interface is created for service layer to expose cache functionality. package com.otv.srv;import com.tangosol.net.NamedCache;/** * IUserCacheService Interface exposes User Cache operations * * @author onlinetechvision.com * @since 29 Oct 2012 * @version 1.0.0 * */ public interface IUserCacheService {/** * Adds user entries to cache * * @param Object key * @param Object value * */ void addToUserCache(Object key, Object value);/** * Deletes user entries from cache * * @param Object key * */ void deleteFromUserCache(Object key);/** * Gets user cache * * @retun NamedCache Coherence named cache */ NamedCache getUserCache();}   STEP 9 : CREATE UserCacheService IMPL CLASS UserCacheService is created by implementing IUserCacheService. 
package com.otv.srv;import com.otv.listener.UserMapListener; import com.otv.trigger.UserMapTrigger; import com.tangosol.net.CacheFactory; import com.tangosol.net.NamedCache; import com.tangosol.util.MapTriggerListener;/** * CacheService Class implements the ICacheService * * @author onlinetechvision.com * @since 29 Oct 2012 * @version 1.0.0 * */ public class UserCacheService implements IUserCacheService {private NamedCache userCache = null; private static final String USER_MAP = 'user-map'; private static final long LOCK_TIMEOUT = -1;public UserCacheService() { setUserCache(CacheFactory.getCache(USER_MAP)); getUserCache().addMapListener(new UserMapListener()); getUserCache().addMapListener(new MapTriggerListener(new UserMapTrigger())); }/** * Adds user entries to cache * * @param Object key * @param Object value * */ public void addToUserCache(Object key, Object value) { // key is locked getUserCache().lock(key, LOCK_TIMEOUT); try { // application logic getUserCache().put(key, value); } finally { // key is unlocked getUserCache().unlock(key); } }/** * Deletes user entries from cache * * @param Object key * */ public void deleteFromUserCache(Object key) { // key is locked getUserCache().lock(key, LOCK_TIMEOUT); try { // application logic getUserCache().remove(key); } finally { // key is unlocked getUserCache().unlock(key); } }/** * Gets user cache * * @retun NamedCache Coherence named cache */ public NamedCache getUserCache() { return userCache; }public void setUserCache(NamedCache userCache) { this.userCache = userCache; }}   STEP 10 : CREATE UserMapTrigger CLASS A new UserMapTrigger class is created by implementing com.tangosol.util.MapTrigger Interface. This trigger processes the logic before the entry is inserted into the user-map. 
package com.otv.trigger;import org.apache.log4j.Logger;import com.otv.listener.UserMapListener; import com.otv.user.User; import com.tangosol.util.MapTrigger;/** * UserMapTrigger executes required logic before the operation is committed * * @author onlinetechvision.com * @since 29 Oct 2012 * @version 1.0.0 * */ public class UserMapTrigger implements MapTrigger {private static final long serialVersionUID = 5411263646665358790L; private static Logger logger = Logger.getLogger(UserMapListener.class);/** * Processes user cache entries * * @param MapTrigger.Entry entry * */ public void process(MapTrigger.Entry entry) { User user = (User) entry.getValue(); String id = user.getId(); String name = user.getName(); String updatedName = name.toUpperCase();String surname = user.getSurname(); String updatedSurname = surname.toUpperCase();if (!updatedName.equals(name)) { user.setName(updatedName); }if (!updatedSurname.equals(surname)) { user.setSurname(updatedSurname); }user.setId(user.getName() + '_' + user.getSurname());entry.setValue(user);logger.debug('UserMapTrigger processes the entry before committing. ' + 'oldId : ' + id + ', newId : ' + ((User)entry.getValue()).getId() + ', oldName : ' + name + ', newName : ' + ((User)entry.getValue()).getName() + ', oldSurname : ' + surname + ', newSurname : ' + ((User)entry.getValue()).getSurname() ); }public boolean equals(Object o) { return o != null && o.getClass() == this.getClass(); }public int hashCode() { return getClass().getName().hashCode(); } }   STEP 11 : CREATE USERMAPLISTENER IMPL CLASS A new UserMapListener class is created. This listener receives distributed user-map events. 
package com.otv.listener;import org.apache.log4j.Logger;import com.tangosol.util.MapEvent; import com.tangosol.util.MapListener;/** * UserMapListener Class listens user cache events * * @author onlinetechvision.com * @since 29 Oct 2012 * @version 1.0.0 * */ public class UserMapListener implements MapListener {private static Logger logger = Logger.getLogger(UserMapListener.class);public void entryDeleted(MapEvent me) { logger.debug('Deleted Key = ' + me.getKey() + ', Value = ' + me.getOldValue()); }public void entryInserted(MapEvent me) { logger.debug('Inserted Key = ' + me.getKey() + ', Value = ' + me.getNewValue()); }public void entryUpdated(MapEvent me) { // logger.debug('Updated Key = ' + me.getKey() + ', New_Value = ' + me.getNewValue() + ', Old Value = ' + me.getOldValue()); } }   STEP 12 : CREATE CacheUpdater CLASS CacheUpdater Class is created to add new entry to cache and monitor cache content. package com.otv.exe;import java.util.Collection;import org.apache.log4j.Logger;import com.otv.srv.IUserCacheService; import com.otv.user.User;/** * CacheUpdater Class updates and prints user cache entries * * @author onlinetechvision.com * @since 29 Oct 2012 * @version 1.0.0 * */ public class UserCacheUpdater implements Runnable {private static Logger logger = Logger.getLogger(UserCacheUpdater.class);private IUserCacheService userCacheService;/** * Runs the UserCacheUpdater Thread * */ public void run() {//New User are created... User user = new User();//Only Name and Surname properties are set and Id property will be set at trigger level. user.setName('James'); user.setSurname('Joyce');//Entries are added to cache... 
getUserCacheService().addToUserCache('user1', user);// The following code block shows the entry which will be inserted via second member of the cluster // so it should be opened and above code block should be commented-out before the project is built.// user.setName('Thomas'); // user.setSurname('Moore'); // getUserCacheService().addToUserCache('user2', user);//Cache Entries are being printed... printCacheEntries(); }/** * Prints User Cache Entries * */ @SuppressWarnings('unchecked') private void printCacheEntries() { Collection<User> userCollection = null; try { while(true) { userCollection = (Collection<User>)getUserCacheService().getUserCache().values();for(User user : userCollection) { logger.debug('Cache Content : '+user); }Thread.sleep(60000); } } catch (InterruptedException e) { logger.error('CacheUpdater is interrupted!', e); } }public IUserCacheService getUserCacheService() { return userCacheService; }public void setUserCacheService(IUserCacheService userCacheService) { this.userCacheService = userCacheService; } }   STEP 13 : CREATE Application CLASS The Application class is created to run the application. package com.otv.exe;import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext;/** * Application class starts the application * * @author onlinetechvision.com * @since 29 Oct 2012 * @version 1.0.0 * */ public class Application {/** * Starts the application * * @param String[] args * */ public static void main(String[] args) { ApplicationContext context = new ClassPathXmlApplicationContext('applicationContext.xml');UserCacheUpdater cacheUpdater = (UserCacheUpdater) context.getBean('userCacheUpdater'); new Thread(cacheUpdater).start(); } }   STEP 14 : BUILD PROJECT After the OTV_Spring_Coherence_MapTrigger project is built, OTV_Spring_Coherence_MapTrigger-0.0.1-SNAPSHOT.jar will be created.
Important Note : The members of the cluster have different Coherence configurations, so the project should be built separately for each member. STEP 15 : RUN PROJECT BY STARTING THE MEMBERS OF THE CLUSTER After the created OTV_Spring_Coherence_MapTrigger-0.0.1-SNAPSHOT.jar file is run on the members of the cluster, the output logs below will be shown on the first member's console: --A new cluster is created and the First Member joins the cluster and adds a new entry to the cache. 29.10.2012 18:26:44 DEBUG (UserMapTrigger.java:49) - UserMapTrigger processes the entry before committing. oldId : null, newId : JAMES_JOYCE , oldName : James, newName : JAMES, oldSurname : Joyce, newSurname : JOYCE 29.10.2012 18:26:44 DEBUG (UserMapListener.java:25) - Inserted Key = user1, Value = id : JAMES_JOYCE, name : JAMES, surname : JOYCE 29.10.2012 18:26:44 DEBUG (UserCacheUpdater.java:63) - Cache Content : id : JAMES_JOYCE, name : JAMES, surname : JOYCE.......--The Second Member joins the cluster and adds a new entry to the cache. 29.10.2012 18:27:33 DEBUG (UserMapTrigger.java:49) - UserMapTrigger processes the entry before committing. 
oldId : null, newId : THOMAS_MOORE, oldName : Thomas, newName : THOMAS, oldSurname : Moore, newSurname : MOORE
29.10.2012 18:27:34 DEBUG (UserMapListener.java:25) - Inserted Key = user2, Value = id : THOMAS_MOORE, name : THOMAS, surname : MOORE
.......

-- After the second member adds a new entry, the cache content is shown as below:

29.10.2012 18:27:44 DEBUG (UserCacheUpdater.java:63) - Cache Content : id : THOMAS_MOORE, name : THOMAS, surname : MOORE
29.10.2012 18:27:45 DEBUG (UserCacheUpdater.java:63) - Cache Content : id : JAMES_JOYCE, name : JAMES, surname : JOYCE
29.10.2012 18:28:45 DEBUG (UserCacheUpdater.java:63) - Cache Content : id : THOMAS_MOORE, name : THOMAS, surname : MOORE
29.10.2012 18:28:45 DEBUG (UserCacheUpdater.java:63) - Cache Content : id : JAMES_JOYCE, name : JAMES, surname : JOYCE

The second member's console:

-- After the second member joins the cluster and adds a new entry to the cache, the cache content is shown as below; both members have the same entries.

29.10.2012 18:27:34 DEBUG (UserMapListener.java:25) - Inserted Key = user2, Value = id : THOMAS_MOORE, name : THOMAS, surname : MOORE
29.10.2012 18:27:34 DEBUG (UserCacheUpdater.java:63) - Cache Content : id : JAMES_JOYCE, name : JAMES, surname : JOYCE
29.10.2012 18:27:34 DEBUG (UserCacheUpdater.java:63) - Cache Content : id : THOMAS_MOORE, name : THOMAS, surname : MOORE
29.10.2012 18:28:34 DEBUG (UserCacheUpdater.java:63) - Cache Content : id : JAMES_JOYCE, name : JAMES, surname : JOYCE
29.10.2012 18:28:34 DEBUG (UserCacheUpdater.java:63) - Cache Content : id : THOMAS_MOORE, name : THOMAS, surname : MOORE

STEP 16 : DOWNLOAD

https://github.com/erenavsarogullari/OTV_Spring_Coherence_MapTrigger

Reference: Coherence Event Processing by using Map Trigger Feature from our JCG partner Eren Avsarogullari at the Online Technology Vision blog.
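The logs above show what the MapTrigger does before each entry is committed: it uppercases the name and surname and derives the id from them. The article's actual trigger is Java/Coherence code; as a language-neutral illustration, the transformation can be sketched in plain JavaScript (the processEntry name is hypothetical, not from the article):

```javascript
// Illustrative sketch of the transformation the logs show UserMapTrigger applying
// before an entry is committed: uppercase the fields and derive the id.
function processEntry(user) {
  const name = user.name.toUpperCase();       // James -> JAMES
  const surname = user.surname.toUpperCase(); // Joyce -> JOYCE
  return {
    id: name + '_' + surname,                 // derived id: JAMES_JOYCE
    name: name,
    surname: surname,
  };
}

const committed = processEntry({ name: 'James', surname: 'Joyce' });
console.log(committed); // { id: 'JAMES_JOYCE', name: 'JAMES', surname: 'JOYCE' }
```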

Modeling Mongo Documents With Mongoose

Without a doubt, one of the quickest ways to build an application that leverages MongoDB is with Node. It’s as if the two platforms were made for each other; the sheer number of Node libraries available for dealing with Mongo is testimony to a vibrant, innovative community. Indeed, one of my favorite Mongo-focused libraries these days is Mongoose. Briefly, Mongoose is an object modeling framework that makes it incredibly easy to model collections and ultimately work with intuitive objects that support a rich feature set. Like most things in Node, it couldn’t be any easier to get set up.

Essentially, to use Mongoose, you’ll need to define Schema objects – these are your documents – either top level or even embedded. For example, I’ve defined a words collection that contains documents (representing…words) that each contain an embedded collection of definition documents. A sample document looks like this:

{
  _id: '4fd7c7ac8b5b27f21b000001',
  spelling: 'drivel',
  synonyms: ['garbage', 'dribble', 'drool'],
  definitions: [
    { part_of_speech: 'noun',
      definition: 'saliva flowing from the mouth, or mucus from the nose; slaver.' },
    { part_of_speech: 'noun',
      definition: 'childish, silly, or meaningless talk or thinking; nonsense; twaddle.' }
  ]
}

From a document modeling standpoint, I’d like to work with a Word object that contains a list of Definition objects and a number of related attributes (i.e. synonyms, parts of speech, etc). To model this relationship with Mongoose, I’ll need to define two Schema types, and I’ll start with the simplest:

Definition = mongoose.model 'definition', new mongoose.Schema({
  part_of_speech: { type: String, required: true, trim: true,
                    enum: ['adjective', 'noun', 'verb', 'adverb'] },
  definition: { type: String, required: true, trim: true }
})

As you can see, a Definition is simple – the part_of_speech attribute is an enumerated String that’s required; what’s more, the definition attribute is also a required String.
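To make the schema's constraints concrete without requiring Mongoose itself, here is a plain-JavaScript sketch (the validateDefinition helper is hypothetical, not part of Mongoose) of what required, trim, and enum enforce on a Definition:

```javascript
// Hypothetical helper mirroring the Definition schema's rules:
// both fields are required strings, values are trimmed, and
// part_of_speech must be one of the enumerated values.
const PARTS_OF_SPEECH = ['adjective', 'noun', 'verb', 'adverb'];

function validateDefinition(doc) {
  const part = typeof doc.part_of_speech === 'string' ? doc.part_of_speech.trim() : '';
  const def = typeof doc.definition === 'string' ? doc.definition.trim() : '';
  const errors = [];
  if (!PARTS_OF_SPEECH.includes(part)) errors.push('part_of_speech must be one of: ' + PARTS_OF_SPEECH.join(', '));
  if (!def) errors.push('definition is required');
  return { valid: errors.length === 0, errors: errors, value: { part_of_speech: part, definition: def } };
}

console.log(validateDefinition({ part_of_speech: ' noun ', definition: 'slaver.' }).valid); // true
console.log(validateDefinition({ part_of_speech: 'pronoun', definition: '' }).errors.length); // 2
```

Mongoose performs the equivalent checks automatically when a document is validated or saved.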
Next, I’ll define a Word:

Word = mongoose.model 'word', new mongoose.Schema({
  spelling: { type: String, required: true, trim: true, lowercase: true, unique: true },
  definitions: [Definition.schema],
  synonyms: [{ type: String, trim: true, lowercase: true }]
})

As you can see, a Word instance embeds a collection of Definitions. Here I’m also demonstrating the usage of lowercase and the unique index placed on the spelling attribute. Creating a Word instance and saving the corresponding document couldn’t be easier. Mongo arrays leverage the push command, and Mongoose follows this pattern to a tee.

word = new models.Word({spelling: 'loquacious'})
word.synonyms.push 'verbose'
word.definitions.push {definition: 'talking or tending to talk much or freely; talkative; \
  chattering; babbling; garrulous.', part_of_speech: 'adjective' }
word.save (err, data) ->

Finding a word is easy too:

it 'findOne should return one', (done) ->
  models.Word.findOne spelling: 'nefarious', (err, document) ->
    document.spelling.should.eql 'nefarious'
    document.definitions.length.should.eql 1
    document.synonyms.length.should.eql 2
    document.definitions[0]['part_of_speech'].should.eql 'adjective'
    done(err)

In this case, the above code is a Mocha test case (which uses should for assertions) that demonstrates Mongoose’s findOne. You can find the code for these examples and more at my Github repo dubbed Exegesis and while you’re at it, check out the developerWorks videos I did for Node!

Reference: Modeling Mongo Documents With Mongoose from our JCG partner Andrew Glover at the The Disco Blog blog.
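The document shape that the Word schema produces can be sketched in plain JavaScript (the makeWord factory is a hypothetical stand-in; real Mongoose would add an _id, run validation, and persist via save()):

```javascript
// Hypothetical sketch of the Word document shape from the article:
// spelling is trimmed and lowercased, and definitions/synonyms are
// embedded arrays built up with push, exactly as in the Mongoose code.
function makeWord(spelling) {
  return { spelling: spelling.trim().toLowerCase(), definitions: [], synonyms: [] };
}

const word = makeWord('Loquacious');
word.synonyms.push('verbose');
word.definitions.push({
  part_of_speech: 'adjective',
  definition: 'talking or tending to talk much or freely; talkative; chattering; babbling; garrulous.',
});

console.log(word.spelling);           // loquacious
console.log(word.definitions.length); // 1
```

With Mongoose, the same push calls mutate a tracked document, so a subsequent save() writes the embedded arrays to the collection.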
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact