Beginner’s Guide to Hazelcast Part 3

This is a continuation of a series of posts on how to use Hazelcast from a beginner's point of view. If you haven't read the last two, I encourage reading them:

Beginner's Guide to Hazelcast Part 1
Beginner's Guide to Hazelcast Part 2

The Primitives are Coming

In my last post I mentioned using an ILock with IList and ISet because they are not thread safe. It hit me that I had not covered a basic part of Hazelcast: the distributed primitives. They solve the problem of synchronizing the use of resources in a distributed way. Those who do a lot of threaded programming will recognize them right away. For those who are new to programming with threads, I will explain what each primitive does and give an example.

IAtomicLong

This is a distributed atomic long, meaning every operation on it happens atomically. For example, one can add a number and retrieve the resulting value in one operation, or get the value and then add to it. This is true for every operation on this primitive. As one can imagine, it is thread safe, but one cannot do the following and stay thread safe:

atomicLong.addAndGet(2 * atomicLong.get());

The line above creates a race condition because it is really three operations: reading the contents of the atomic long, multiplying by two, and adding that to the instance. Thread safety is only there if the operation is guaranteed to happen in one step. For that, IAtomicLong has a method called alterAndGet. alterAndGet takes an IFunction object, which turns a multi-step operation into a single step. There is always one synchronous backup of an IAtomicLong, and it is not configurable.

IdGenerator

IAtomicLongs are great for keeping counts. The problem is that since the call is most likely remote, IAtomicLongs are not an ideal solution for some situations. One of those situations is generating unique ids. IdGenerator was made just for that purpose.
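Before looking at how IdGenerator works, the single-step alter described for IAtomicLong has a direct analogue in plain Java's AtomicLong: accumulateAndGet collapses a multi-step alteration into one atomic step, much as alterAndGet does with an IFunction. A minimal sketch (class and method names are my own, not from the article):

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicAlterExample {

    // Collapses "read, multiply by two, subtract one" into one atomic step,
    // mirroring what IAtomicLong.alterAndGet does with an IFunction.
    static long multiplyByTwoAndSubtractOne(AtomicLong value) {
        return value.accumulateAndGet(0L, (current, ignored) -> 2 * current - 1);
    }

    public static void main(String[] args) {
        AtomicLong value = new AtomicLong(3);
        // Unsafe alternative: value.addAndGet(2 * value.get()) is three
        // separate steps, so another thread can interleave between them.
        System.out.println(multiplyByTwoAndSubtractOne(value)); // prints 5
    }
}
```

The same reasoning carries over to the distributed case: anything that reads and then writes in separate calls is a race.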
The way it works is that each member claims one million ids to generate. Once all of the claimed numbers are taken, the member claims another million. So, since each member has a million ids tucked away, the chance that a call to an IdGenerator is remote is one in a million. This makes it a very fast way to generate unique ids. If any duplicates happen, it may be because the members didn't join up. If a member goes down before its segment is used up, there will be gaps in the ids. For unique id generation, missing numbers are not an issue. I do feel members not hooking up to the cluster is an issue, but if that is happening, there are bigger things to worry about. If the cluster gets restarted, the ids start at zero again, because the ids are not persisted. This is an in-memory database; one takes one's chances. To counter that, an IdGenerator can be set to start at a particular number, as long as that number isn't claimed by someone else and no ids have been generated yet. Alternatives are creating one's own id generator or using the java.util.UUID class. That may take more space, but each project has its own requirements to meet. IdGenerators always have one synchronous backup and cannot be configured.

ILock

Here is a classic synchronization method with a twist: it is an exclusive lock that is distributed. One just invokes the lock method, and the thread either waits or obtains the lock. Once the lock is established, the critical section can be performed. Once the work is done, the unlock method is used. Veterans of this technique will put the critical section in a try/finally block, establishing the lock just outside the try block and the unlock in the finally section. This is invaluable for performing actions on structures that are not thread safe. The process that obtains the lock owns it and is required to call unlock before other processes can establish the lock. This can be problematic when one has threads in multiple locations on the network.
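The try/finally idiom just described looks like this with a plain java.util.concurrent ReentrantLock; Hazelcast's ILock is used the same way (the shared list and method names here are my own, for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockIdiomExample {

    private static final Lock LOCK = new ReentrantLock();
    private static final List<Integer> SHARED = new ArrayList<>(); // not thread safe on its own

    static void addSafely(int value) {
        LOCK.lock();           // establish the lock just outside the try block
        try {
            SHARED.add(value); // critical section: keep it as small as possible
        } finally {
            LOCK.unlock();     // always runs, even if the critical section throws
        }
    }

    static int size() {
        LOCK.lock();
        try {
            return SHARED.size();
        } finally {
            LOCK.unlock();
        }
    }

    public static void main(String[] args) {
        addSafely(42);
        System.out.println(SHARED);
    }
}
```

Putting unlock in the finally block is the whole point: a forgotten unlock on an exception path is exactly the kind of misplaced lock that causes the deadlocks described below.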
Hazelcast thought of this problem and releases the lock when a member goes down. Another feature is that the lock method has a timeout of 300 seconds, which prevents starved threads. ILocks have one synchronous backup and are not configurable. A bit of advice from someone with experience: keep critical sections as small as possible; this helps performance and prevents deadlocks. Deadlocks are a pain to debug and harder to test because of the unknown execution order of threads. One time the bug manifests itself, then it does not. This can continue for a week or more because of a misplaced lock. Then one has to make sure it will not happen again, which is hard to prove because of the unknown execution order of the threads. By the time it is all done, the boss is frustrated because of the time it took, and one does not know whether the bug is fixed or not.

ICondition

Ever wanted to wait for an event to happen, but did not want other threads to have to wait for it too? That is exactly what conditions are for in threaded programming. Before Java 1.5, this was accomplished via the synchronized-wait-notify technique. Now it can be performed with the lock-condition technique. Take a trip with me and I will show how this works. Imagine a situation where there is a non-thread-safe list with a producer writing to it and a consumer reading from it. Obviously, there are critical sections that need to be protected. That falls into the lap of a lock. After the lock is established, critical work can begin. The only problem is that the resource may be in a state that is useless to the thread. For example, a consumer cannot pull entries from an empty list, and a producer cannot put entries on a full list. This is where a condition comes in. The producer or consumer enters a while loop that tests for the favorable condition and calls condition.await(). Once await is called, the thread gives up its lock and lets other threads access their critical sections.
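The await-inside-a-while-loop pattern just described has the same shape with plain java.util.concurrent locks and conditions; Hazelcast's ILock/ICondition follow it too. A minimal sketch (the queue and all names are my own, and the timeout mirrors the advice below about not waiting forever):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionLoopExample {

    private static final ReentrantLock LOCK = new ReentrantLock();
    private static final Condition NOT_EMPTY = LOCK.newCondition();
    private static final Queue<Integer> QUEUE = new ArrayDeque<>();

    // Consumer side: await inside a loop, with a timeout so a missed
    // signal costs a little waiting time instead of waiting forever.
    static Integer take() throws InterruptedException {
        LOCK.lock();
        try {
            while (QUEUE.isEmpty()) {
                if (!NOT_EMPTY.await(2, TimeUnit.SECONDS) && QUEUE.isEmpty()) {
                    return null; // gave up after the timeout
                }
            }
            return QUEUE.poll();
        } finally {
            LOCK.unlock();
        }
    }

    // Producer side: change the state, then signal the waiters.
    static void put(int value) {
        LOCK.lock();
        try {
            QUEUE.add(value);
            NOT_EMPTY.signalAll();
        } finally {
            LOCK.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        put(1);
        System.out.println(take()); // prints 1
    }
}
```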
The awaiting thread will get the lock back to test its condition; it may await some more, or, if the condition is satisfied, it starts doing work. Once the critical section is complete, the thread can call signal() or signalAll() to tell the other threads to wake up and check their conditions. Conditions are created by the lock rather than the Hazelcast instance. Another thing: if one wants the condition to be distributed, one must use the lock.newCondition(String name) method. IConditions have one synchronous backup and cannot be configured. I cannot tell you how many deadlocks can occur using this technique. Sometimes the signal comes while the thread is waiting and everything is good. The other possibility is that the signal is sent when the thread is not yet waiting; the thread then enters the wait state and waits forever. For this reason, I advocate using a timeout while waiting so the thread can check every once in a while whether the condition has been met. That way, if the signal is missed, the worst that can happen is a little waiting time instead of waiting forever. I used the timeout technique in my example. Copy and paste the code as much as you want; I would rather have tested techniques being used than untested code invading the Internet.

ICountDownLatch

An ICountDownLatch is a synchronizing tool that triggers when its counter reaches zero. This is not a common way to coordinate, but it is there when needed. The example section, I think, provides a much better explanation of how it works. The latch can be reset after it reaches zero, so it can be used over again. If the owning member goes away, all of the threads waiting for the latch to reach zero are signaled as if zero had been reached. The ICountDownLatch is backed up synchronously in one other place and cannot be configured.

ISemaphore

Yes, there is a distributed version of the classic semaphore.
This is exciting to me because the last time I took an Operating Systems class, semaphores needed a bit of hardware support. Maybe I just dated myself; oh well, it is still cool. Semaphores work by limiting the number of threads that can access a resource. Unlike locks, semaphores have no sense of ownership, so different threads can release the claim on the resource. Unlike the rest of the primitives, the ISemaphore can be configured. I configure one in my example; it is in the hazelcast.xml in the default package of my project.

Examples

Here are the examples. I had a comment on my last post asking me to indent my code so it is more readable. I will do that for sure this time because of the amount of code I am posting. You will see a couple of things that I have not discussed before. One is the IExecutorService. This is a distributed version of the ExecutorService; one can actually send jobs off to be completed by different members. Another thing is that all of the Runnable/Callable classes that are defined implement Serializable. This is necessary in a distributed environment because the objects can be sent to different members. The last thing is the HazelcastInstanceAware interface. It allows a class to access the local Hazelcast instance, so the class can get instances of the resources it needs (like ILists). Without further ado, here we go.
IAtomicLong

package hazelcastprimitives.iatomiclong;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IAtomicLong;
import com.hazelcast.core.IFunction;
import java.io.Serializable;

/**
 * @author Daryl
 */
public class IAtomicLongExample {

    public static class MultiplyByTwoAndSubtractOne implements IFunction<Long, Long>, Serializable {

        @Override
        public Long apply(Long t) {
            return (long) (2 * t - 1);
        }
    }

    public static final void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        final String NAME = "atomic";
        IAtomicLong aLong = instance.getAtomicLong(NAME);
        IAtomicLong bLong = instance.getAtomicLong(NAME);

        aLong.getAndSet(1L);
        System.out.println("bLong is now: " + bLong.getAndAdd(2));
        System.out.println("aLong is now: " + aLong.getAndAdd(0L));

        MultiplyByTwoAndSubtractOne alter = new MultiplyByTwoAndSubtractOne();
        aLong.alter(alter);
        System.out.println("bLong is now: " + bLong.getAndAdd(0L));
        bLong.alter(alter);
        System.out.println("aLong is now: " + aLong.getAndAdd(0L));

        System.exit(0);
    }
}

Notice that even the MultiplyByTwoAndSubtractOne class implements Serializable.

IdGenerator

package hazelcastprimitives.idgenerator;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IdGenerator;

/**
 * @author Daryl
 */
public class IdGeneratorExample {

    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IdGenerator generator = instance.getIdGenerator("generator");
        for (int i = 0; i < 10; i++) {
            System.out.println("The generated value is " + generator.newId());
        }
        instance.shutdown();
        System.exit(0);
    }
}

ILock

This ILock example can also be considered an ICondition example. I had to use a condition because the ListConsumer was always running before the ListProducer, so I made the ListConsumer wait until the IList had something to consume.

package hazelcastprimitives.ilock;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.core.ICondition;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.core.IList;
import com.hazelcast.core.ILock;
import java.io.Serializable;
import java.util.concurrent.TimeUnit;

/**
 * @author Daryl
 */
public class ILockExample {

    static final String LIST_NAME = "to be locked";
    static final String LOCK_NAME = "to lock with";
    static final String CONDITION_NAME = "to signal with";

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IExecutorService service = instance.getExecutorService("service");
        ListConsumer consumer = new ListConsumer();
        ListProducer producer = new ListProducer();
        try {
            service.submit(producer);
            service.submit(consumer);
            Thread.sleep(10000);
        } catch (InterruptedException ie) {
            System.out.println("Got interrupted");
        } finally {
            instance.shutdown();
        }
    }

    public static class ListConsumer implements Runnable, Serializable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;

        @Override
        public void run() {
            ILock lock = instance.getLock(LOCK_NAME);
            ICondition condition = lock.newCondition(CONDITION_NAME);
            IList list = instance.getList(LIST_NAME);
            lock.lock();
            try {
                while (list.isEmpty()) {
                    condition.await(2, TimeUnit.SECONDS);
                }
                while (!list.isEmpty()) {
                    System.out.println("value is " + list.get(0));
                    list.remove(0);
                }
            } catch (InterruptedException ie) {
                System.out.println("Consumer got interrupted");
            } finally {
                lock.unlock();
            }
            System.out.println("Consumer leaving");
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }
    }

    public static class ListProducer implements Runnable, Serializable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;

        @Override
        public void run() {
            ILock lock = instance.getLock(LOCK_NAME);
            ICondition condition = lock.newCondition(CONDITION_NAME);
            IList list = instance.getList(LIST_NAME);
            lock.lock();
            try {
                for (int i = 1; i <= 10; i++) {
                    list.add(i);
                }
                condition.signalAll();
            } finally {
                lock.unlock();
            }
            System.out.println("Producer leaving");
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }
    }
}

ICondition

Here is the real ICondition example. Notice how the SpunProducer and SpunConsumer both share the same ICondition and signal each other. Note that I am making use of timeouts to prevent deadlocks.

package hazelcastprimitives.icondition;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.core.ICondition;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.core.IList;
import com.hazelcast.core.ILock;
import java.io.Serializable;
import java.util.concurrent.TimeUnit;

/**
 * @author Daryl
 */
public class IConditionExample {

    static final String LOCK_NAME = "lock";
    static final String CONDITION_NAME = "condition";
    static final String SERVICE_NAME = "spinderella";
    static final String LIST_NAME = "list";

    public static final void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IExecutorService service = instance.getExecutorService(SERVICE_NAME);
        service.execute(new SpunConsumer());
        service.execute(new SpunProducer());
        try {
            Thread.sleep(10000);
        } catch (InterruptedException ie) {
            System.out.println("Hey, we got out sooner than I expected");
        } finally {
            instance.shutdown();
            System.exit(0);
        }
    }

    public static class SpunProducer implements Serializable, Runnable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;
        private long counter = 0;

        @Override
        public void run() {
            ILock lock = instance.getLock(LOCK_NAME);
            ICondition condition = lock.newCondition(CONDITION_NAME);
            IList list = instance.getList(LIST_NAME);
            lock.lock();
            try {
                if (list.isEmpty()) {
                    populate(list);
                    System.out.println("telling the consumers");
                    condition.signalAll();
                }
                for (int i = 0; i < 2; i++) {
                    while (!list.isEmpty()) {
                        System.out.println("Waiting for the list to be empty");
                        System.out.println("list size: " + list.size());
                        condition.await(2, TimeUnit.SECONDS);
                    }
                    populate(list);
                    System.out.println("Telling the consumers");
                    condition.signalAll();
                }
            } catch (InterruptedException ie) {
                System.out.println("We have found an interruption");
            } finally {
                condition.signalAll();
                System.out.println("Producer exiting stage left");
                lock.unlock();
            }
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }

        private void populate(IList list) {
            System.out.println("Populating list");
            long currentCounter = counter;
            for (; counter < currentCounter + 10; counter++) {
                list.add(counter);
            }
        }
    }

    public static class SpunConsumer implements Serializable, Runnable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;

        @Override
        public void run() {
            ILock lock = instance.getLock(LOCK_NAME);
            ICondition condition = lock.newCondition(CONDITION_NAME);
            IList list = instance.getList(LIST_NAME);
            lock.lock();
            try {
                for (int i = 0; i < 3; i++) {
                    while (list.isEmpty()) {
                        System.out.println("Waiting for the list to be filled");
                        condition.await(1, TimeUnit.SECONDS);
                    }
                    System.out.println("removing values");
                    while (!list.isEmpty()) {
                        System.out.println("value is " + list.get(0));
                        list.remove(0);
                    }
                    System.out.println("Signaling the producer");
                    condition.signalAll();
                }
            } catch (InterruptedException ie) {
                System.out.println("We had an interrupt");
            } finally {
                System.out.println("Consumer exiting stage right");
                condition.signalAll();
                lock.unlock();
            }
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }
    }
}

ICountDownLatch

package hazelcastprimitives.icountdownlatch;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.core.ICountDownLatch;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.core.IList;
import com.hazelcast.core.ILock;
import java.io.Serializable;
import java.util.concurrent.TimeUnit;

/**
 * @author Daryl
 */
public class ICountDownLatchExample {

    static final String LOCK_NAME = "lock";
    static final String LATCH_NAME = "condition";
    static final String SERVICE_NAME = "spinderella";
    static final String LIST_NAME = "list";

    public static final void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IExecutorService service = instance.getExecutorService(SERVICE_NAME);
        service.execute(new SpunMaster());
        service.execute(new SpunSlave());
        try {
            Thread.sleep(10000);
        } catch (InterruptedException ie) {
            System.out.println("Hey, we got out sooner than I expected");
        } finally {
            instance.shutdown();
            System.exit(0);
        }
    }

    public static class SpunMaster implements Serializable, Runnable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;
        private long counter = 0;

        @Override
        public void run() {
            ILock lock = instance.getLock(LOCK_NAME);
            ICountDownLatch latch = instance.getCountDownLatch(LATCH_NAME);
            IList list = instance.getList(LIST_NAME);
            lock.lock();
            try {
                latch.trySetCount(10);
                populate(list, latch);
            } finally {
                System.out.println("Master exiting stage left");
                lock.unlock();
            }
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }

        private void populate(IList list, ICountDownLatch latch) {
            System.out.println("Populating list");
            long currentCounter = counter;
            for (; counter < currentCounter + 10; counter++) {
                list.add(counter);
                latch.countDown();
            }
        }
    }

    public static class SpunSlave implements Serializable, Runnable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;

        @Override
        public void run() {
            ILock lock = instance.getLock(LOCK_NAME);
            ICountDownLatch latch = instance.getCountDownLatch(LATCH_NAME);
            IList list = instance.getList(LIST_NAME);
            lock.lock();
            try {
                if (latch.await(2, TimeUnit.SECONDS)) {
                    while (!list.isEmpty()) {
                        System.out.println("value is " + list.get(0));
                        list.remove(0);
                    }
                }
            } catch (InterruptedException ie) {
                System.out.println("We had an interrupt");
            } finally {
                System.out.println("Slave exiting stage right");
                lock.unlock();
            }
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }
    }
}

ISemaphore

Configuration

Here is the ISemaphore configuration:

<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config http://www.hazelcast.com/schema/config/hazelcast-config-3.0.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <network>
        <join><multicast enabled="true"/></join>
    </network>
    <semaphore name="to reduce access">
        <initial-permits>3</initial-permits>
    </semaphore>
</hazelcast>

Example Code

package hazelcastprimitives.isemaphore;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.core.IExecutorService;
import com.hazelcast.core.ISemaphore;
import com.hazelcast.core.IdGenerator;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

/**
 * @author Daryl
 */
public class ISemaphoreExample {

    static final String SEMAPHORE_NAME = "to reduce access";
    static final String GENERATOR_NAME = "to use";

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IExecutorService service = instance.getExecutorService("service");
        List<Future> futures = new ArrayList<>(10);
        try {
            for (int i = 0; i < 10; i++) {
                futures.add(service.submit(new GeneratorUser(i)));
            }
            // so I wait til the last man. No, this may not be scalable.
            for (Future future : futures) {
                future.get();
            }
        } catch (InterruptedException ie) {
            System.out.printf("Got interrupted.");
        } catch (ExecutionException ee) {
            System.out.printf("Cannot execute on Future. reason: %s\n", ee.toString());
        } finally {
            service.shutdown();
            instance.shutdown();
        }
    }

    static class GeneratorUser implements Callable, Serializable, HazelcastInstanceAware {

        private transient HazelcastInstance instance;
        private final int number;

        public GeneratorUser(int number) {
            this.number = number;
        }

        @Override
        public Long call() {
            ISemaphore semaphore = instance.getSemaphore(SEMAPHORE_NAME);
            IdGenerator gen = instance.getIdGenerator(GENERATOR_NAME);
            long lastId = -1;
            try {
                semaphore.acquire();
                try {
                    for (int i = 0; i < 10; i++) {
                        lastId = gen.newId();
                        System.out.printf("current value of generator on %d is %d\n", number, lastId);
                        Thread.sleep(1000);
                    }
                } catch (InterruptedException ie) {
                    System.out.printf("User %d was interrupted\n", number);
                } finally {
                    semaphore.release();
                }
            } catch (InterruptedException ie) {
                System.out.printf("User %d got interrupted\n", number);
            }
            System.out.printf("User %d is leaving\n", number);
            return lastId;
        }

        @Override
        public void setHazelcastInstance(HazelcastInstance hazelcastInstance) {
            instance = hazelcastInstance;
        }
    }
}

Conclusion

Hazelcast's primitives were discussed in this post. Most, if not all, of them revolve around thread coordination. Explanations of each primitive and personal experience were shared, and the examples showed the different types of coordination. The examples can be downloaded via Subversion at http://darylmathisonblog.googlecode.com/svn/trunk/HazelcastPrimitives.
References

The Book of Hazelcast: found at www.hazelcast.com
Hazelcast documentation: found in the Hazelcast download at www.hazelcast.org

Reference: Beginner's Guide to Hazelcast Part 3 from our JCG partner Daryl Mathison at the Daryl Mathison's Java Blog blog.

Spring Boot and Spring Data REST – exposing repositories over REST

Exposing Spring Data repositories over REST is pretty easy with Spring Boot and Spring Data REST. With minimal code, one can create REST representations of JPA entities that follow the HATEOAS principle. I decided to re-use Spring PetClinic's JPA entities (business layer) as the foundation for this article.

Application foundation

The PetClinic's model is relatively simple, but it consists of some unidirectional and bidirectional associations, as well as basic inheritance. In addition, Spring's PetClinic provides SQL scripts for HSQLDB, which made generating the schema and populating it with sample data in my new application super easy.

Project dependencies

As a base for the configuration I used Spring Initializr and generated a basic Gradle project. In order to utilize Spring Data REST in a Spring Boot application I added the following Boot starters:

compile("org.springframework.boot:spring-boot-starter-web")
compile("org.springframework.boot:spring-boot-starter-data-jpa")
compile("org.springframework.boot:spring-boot-starter-data-rest")

In addition, I added the HSQLDB dependency to the project:

compile("org.hsqldb:hsqldb:2.3.2")

The original project uses org.joda.time.DateTime for date fields and org.jadira.usertype.dateandtime.joda.PersistentDateTime, which allows persisting it with Hibernate. In order to use it in the new project I needed to add the following dependencies:

compile("joda-time:joda-time:2.4")
compile("org.jadira.usertype:usertype.jodatime:2.0.1")

While working with the API, I noticed that although the date fields in the original project were annotated with Spring's @DateTimeFormat, they were not properly serialized.
I found out that I needed to use @JsonFormat, so another dependency was added to build.gradle:

compile("com.fasterxml.jackson.datatype:jackson-datatype-joda:2.4.2")

Once it is on the classpath, Spring Boot auto-configures com.fasterxml.jackson.datatype.joda.JodaModule via org.springframework.boot.autoconfigure.jackson.JacksonAutoConfiguration. Please note that if you wanted to serialize Java 8 Date & Time types properly, you would need to add the Jackson Datatype JSR310 dependency to the project.

Initializing the database

To initialize the data source I added schema-hsqldb.sql and data-hsqldb.sql files to src/main/resources. Finally, several properties were added to application.properties:

spring.datasource.platform = hsqldb
spring.jpa.generate-ddl = false
spring.jpa.hibernate.ddl-auto = none

Now, at application start up, the files will be picked up automatically and the data source will be initialized, and discovering the API will be much easier, as there is data!

Repositories

The general idea of Spring Data REST is that it builds on top of Spring Data repositories and automatically exports them as REST resources. I created several repositories, one for each entity (OwnerRepository, PetRepository and so on). All repositories are Java interfaces extending PagingAndSortingRepository. No additional code is needed at this stage: no @Controllers, no configuration (unless customization is needed). Spring Boot will automagically configure everything for us.

Running the application

With the whole configuration in place, the project can be executed (you will find a link to the complete project at the bottom of the article). If you are lucky, the application will start and you can navigate to http://localhost:8080, which points to a collection of links to all available resources (the root resource). The response's content type is application/hal+json.

HAL

The resources are implemented in a hypermedia style, and by default Spring Data REST uses HAL with content type application/hal+json to render responses.
HAL is a simple format that gives an easy way to link resources. Example:

$ curl localhost:8080/owners/1
{
  "firstName" : "George",
  "lastName" : "Franklin",
  "_links" : {
    "self" : {
      "href" : "http://localhost:8080/owners/1"
    },
    "pets" : {
      "href" : "http://localhost:8080/owners/1/pets"
    }
  }
}

In terms of Spring Data REST, there are several types of resources: collection, item, search, query method and association, and all use the application/hal+json content type in responses.

Collection and item resources

Collection resources support both GET and POST methods. Item resources generally support GET, PUT, PATCH and DELETE methods. Note that PATCH applies the values sent with the request body, whereas PUT replaces the resource.

Search and query method resources

The search resource returns links for all query methods exposed by a repository, whereas the query method resource executes the query exposed through an individual query method on the repository interface. Both are read-only and therefore support only the GET method. To visualize that, I added a find method to OwnerRepository:

List<Owner> findBylastName(@Param("lastName") String lastName);

which was then exposed under http://localhost:8080/owners/search:

$ curl http://localhost:8080/owners/search
{
  "_links" : {
    "findBylastName" : {
      "href" : "http://localhost:8080/owners/search/findBylastName{?lastName}",
      "templated" : true
    }
  }
}

Association resources

Spring Data REST exposes sub-resources automatically. The association resource supports GET, POST and PUT methods, which allow managing associations. While working with associations you need to be aware of the text/uri-list content type. Requests with this content type contain one or more URIs (each URI shall appear on one and only one line) of the resources to add to the association.
In the first example, we will look at a unidirectional relation in the Vet class:

@ManyToMany(fetch = FetchType.EAGER)
@JoinTable(name = "vet_specialties", joinColumns = @JoinColumn(name = "vet_id"),
        inverseJoinColumns = @JoinColumn(name = "specialty_id"))
private Set<Specialty> specialties;

In order to add existing specialties to the collection of a vet's specialties, a PUT request must be executed:

curl -i -X PUT -H "Content-Type:text/uri-list" -d $'http://localhost:8080/specialties/1\nhttp://localhost:8080/specialties/2' http://localhost:8080/vets/1/specialties

Removing the association can be done with the DELETE method as below:

curl -i -X DELETE http://localhost:8080/vets/1/specialties/2

Let's look at another example:

// Owner
@OneToMany(mappedBy = "owner", cascade = CascadeType.ALL, orphanRemoval = true)
private Set<Pet> pets;

// Pet
@ManyToOne(cascade = CascadeType.ALL, optional = false)
@JoinColumn(name = "owner_id")
private Owner owner;

Setting the owner of a pet can be done with the below request:

curl -i -X PUT -H "Content-Type:text/uri-list" -d "http://localhost:8080/owners/1" http://localhost:8080/pets/2/owner

But what about removing the owner? Since the owner must always be set for the pet, we get HTTP/1.1 409 Conflict when trying to unset it with the below command:

curl -i -X DELETE http://localhost:8080/pets/2/owner

Integration tests

With Spring Boot, it is possible to start a web application in a test and verify it with Spring Boot's @IntegrationTest. Instead of using a mocked server-side web application context (MockMvc), we will use RestTemplate and Spring Boot's implementation of it to verify actual REST calls. As we already know, the resources have the content type application/hal+json. So it will not actually be possible to deserialize them directly to an entity object (e.g. Owner). Instead, they must be deserialized to org.springframework.hateoas.Resource, which wraps an entity and adds links to it.
And since Resource is a generic type, ParameterizedTypeReference must be used with RestTemplate. The below example visualizes that:

private RestTemplate restTemplate = new TestRestTemplate();

@Test
public void getsOwner() {
    String ownerUrl = "http://localhost:9000/owners/1";

    ParameterizedTypeReference<Resource<Owner>> responseType =
            new ParameterizedTypeReference<Resource<Owner>>() {};

    ResponseEntity<Resource<Owner>> responseEntity =
            restTemplate.exchange(ownerUrl, GET, null, responseType);

    Owner owner = responseEntity.getBody().getContent();
    assertEquals("George", owner.getFirstName());

    // more assertions
}

This approach is well described in the following article: Consuming Spring-hateoas Rest service using Spring RestTemplate and Super type tokens.

Summary

With a couple of steps and the power of Spring Boot and Spring Data REST, I created an API for an existing PetClinic database. There is much more one can do with Spring Data REST (e.g. customization), and apart from the rather poor documentation compared to other Spring projects, it seems like Spring Data REST may speed up development significantly. In my opinion, this is a good project to look at when rapid prototyping is needed.

References

Source code: Spring Boot PetClinic API on GitHub
Documentation: Spring Data REST, Spring HATEOAS
Articles: RESTify your JPA Entities, Consuming Spring-hateoas Rest service using Spring RestTemplate and Super type tokens

Reference: Spring Boot and Spring Data REST – exposing repositories over REST from our JCG partner Rafal Borowiec at the Codeleak.pl blog.

Stateless Spring Security Part 2: Stateless Authentication

This second part of the Stateless Spring Security series is about exploring means of authentication in a stateless way. If you missed the first part about CSRF you can find it here. When talking about authentication, it's all about having the client identify itself to the server in a verifiable manner. Typically this starts with the server providing the client with a challenge, like a request to fill in a username/password. Today I want to focus on what happens after passing such an initial (manual) challenge and how to deal with automatic re-authentication of further HTTP requests.

Common approaches

Session Cookie based

The most common approach we probably all know is to use a server-generated secret token (session key) in the form of a JSESSIONID cookie. Initial setup for this is near nothing these days, perhaps making you forget you have a choice to make here in the first place. Even without further using this “session key” to store any other state “in the session”, the key itself is in fact state as well. I.e. without a shared and persistent storage of these keys, no successful authentication will survive a server reboot or requests being load balanced to another server.

OAuth2 / API keys

Whenever REST APIs and security come up, OAuth2 and other types of API keys are mentioned. Basically they involve sending custom tokens/keys within the HTTP Authorization header. When used properly, both relieve clients from dealing with cookies by using the header instead. This solves CSRF vulnerabilities and other cookie-related issues. One thing they do not solve, however, is the need for the server to check the presented authentication keys, pretty much demanding some persistent and maintainable shared storage for linking the keys to users/authorizations.

Stateless approaches

1. HTTP Basic Auth

The oldest and most crude way of dealing with authentication: simply have the user send their username/password with every request.
This probably sounds horrible, but considering any of the approaches mentioned above also send secret keys over the wire, it isn’t really any less secure. It’s mainly the user experience and flexibility that make the other approaches a better choice.

2. Server signed tokens

A neat little trick for dealing with state across requests in a stateless way is to have the server “sign” it. It can then be transported back and forth between the client/server each request with the guarantee that it is not tampered with. This way any user identification data can be shared in plain-text, adding a special signing hash to it. Considering it is signed, the server can simply validate if the signing hash still matches the received content, without needing to hold any server-side state. The common standard that can be used for this is JSON Web Tokens (JWT), which is still in draft. For this blog post I’d like to get down and dirty though, skipping full compliance and the urge to use a library that comes with it, picking just what we actually need from it. (Leaving out the header/variable hash algorithms and url-safe base64 encoding.)

Implementation

As mentioned we’re going to roll our own implementation, using Spring Security and Spring Boot to plug it all together, without any library or fancy API obfuscating what’s really happening on the token level. The token is going to look like this in pseudo-code: content = toJSON(user_details) token = BASE64(content) + "." + BASE64(HMAC(content)) The dot in the token serves as a separator, so each part can be identified and decoded separately, as the dot character is not part of any base64 encoded string. HMAC stands for Hash-based Message Authentication Code, which is basically a hash made from any data using a predefined secret key.
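To make the pseudo-code concrete, here is a minimal, standalone sketch using only the JDK (javax.crypto and java.util.Base64). The class and method names are illustrative, not the article's actual code, and a real token would carry the JSON-serialized user details as the content:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Hypothetical standalone sketch of the pseudo-code above: the content,
// base64-encoded, joined with a base64-encoded HMAC by a "." separator.
public class TokenSketch {
    public static String createToken(String jsonContent, byte[] secretKey) throws Exception {
        byte[] content = jsonContent.getBytes(StandardCharsets.UTF_8);
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
        byte[] hash = hmac.doFinal(content);
        Base64.Encoder enc = Base64.getEncoder();
        // "." is safe as a separator: the standard base64 alphabet never contains it
        return enc.encodeToString(content) + "." + enc.encodeToString(hash);
    }

    public static void main(String[] args) throws Exception {
        String token = createToken("{\"username\":\"john\"}",
                "secret".getBytes(StandardCharsets.UTF_8));
        System.out.println(token);
    }
}
```

Note the separator claim only holds for the standard base64 alphabet (A–Z, a–z, 0–9, +, /, =); a URL-safe variant would also be dot-free.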
In actual Java the generation of the token looks a lot like the pseudo-code:

create token
public String createTokenForUser(User user) { byte[] userBytes = toJSON(user); byte[] hash = createHmac(userBytes); final StringBuilder sb = new StringBuilder(170); sb.append(toBase64(userBytes)); sb.append(SEPARATOR); sb.append(toBase64(hash)); return sb.toString(); }

The relevant User properties used in the JSON are id, username, expires and roles, but could be anything you want really. I marked the “password” property of the User object to be ignored during Jackson JSON serialization so it does not become part of the token:

Ignore password
@JsonIgnore public String getPassword() { return password; }

For real-world scenarios you probably just want to use a dedicated object for this. The decoding of the token is a bit more complex, with some input validation to prevent/catch parsing errors due to tampering with the token:

decode the token
public User parseUserFromToken(String token) { final String[] parts = token.split(SEPARATOR_SPLITTER); if (parts.length == 2 && parts[0].length() > 0 && parts[1].length() > 0) { try { final byte[] userBytes = fromBase64(parts[0]); final byte[] hash = fromBase64(parts[1]);boolean validHash = Arrays.equals(createHmac(userBytes), hash); if (validHash) { final User user = fromJSON(userBytes); if (new Date().getTime() < user.getExpires()) { return user; } } } catch (IllegalArgumentException e) { //log tampering attempt here } } return null; }

It essentially validates if the provided hash is the same as a freshly computed hash of the content. Because the createHmac method uses an undisclosed secret key internally to compute the hash, no client will be able to tamper with the content and provide a hash that is the same as the one the server will produce. Only after passing this test will the provided data be interpreted as JSON representing a User object. Zooming in on the HMAC part, let's see the exact Java involved.
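One side note on the hash comparison above (my observation, not a point the article makes): Arrays.equals returns on the first differing byte, which in theory leaks timing information about how much of the hash matched. The JDK offers MessageDigest.isEqual as a constant-time alternative:

```java
import java.security.MessageDigest;
import java.util.Arrays;

// Illustrative snippet: MessageDigest.isEqual (constant-time since JDK 6u17)
// versus Arrays.equals (short-circuits on the first mismatch).
public class HashCompare {
    public static boolean validHash(byte[] computed, byte[] received) {
        // Takes the same time whether the hashes differ at byte 0 or byte 31
        return MessageDigest.isEqual(computed, received);
    }

    public static void main(String[] args) {
        byte[] a = {1, 2, 3};
        byte[] b = {1, 2, 3};
        byte[] c = {1, 2, 4};
        System.out.println(validHash(a, b));      // true
        System.out.println(validHash(a, c));      // false
        System.out.println(Arrays.equals(a, b));  // also true, but not constant-time
    }
}
```

Whether such a timing oracle is practically exploitable over HTTP is debatable, but the constant-time call costs nothing to use.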
First it must be initialized with a secret key, which I do as part of TokenHandler’s constructor:

HMAC initialization
... private static final String HMAC_ALGO = "HmacSHA256";private final Mac hmac;public TokenHandler(byte[] secretKey) { try { hmac = Mac.getInstance(HMAC_ALGO); hmac.init(new SecretKeySpec(secretKey, HMAC_ALGO)); } catch (NoSuchAlgorithmException | InvalidKeyException e) { throw new IllegalStateException( "failed to initialize HMAC: " + e.getMessage(), e); } } ...

After initialization it can be (re-)used, using a single method call! (doFinal’s JavaDoc reads “Processes the given array of bytes and finishes the MAC operation. A call to this method resets this Mac object to the state it was in when previously initialized via a call to init(Key) or init(Key, AlgorithmParameterSpec)…”)

createHmac
// synchronized to guard internal hmac object private synchronized byte[] createHmac(byte[] content) { return hmac.doFinal(content); }

I used some crude synchronization here, to prevent conflicts when used within a Spring singleton service. The actual method is very fast (~0.01ms) so it shouldn’t cause a problem unless you're going for 10k+ requests per second per server.
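If that synchronization ever did become a bottleneck, one alternative (my sketch, not part of the original article) is to give each thread its own Mac instance via a ThreadLocal, trading a little memory for lock-free hashing:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical alternative to synchronizing on one shared Mac:
// one lazily initialized Mac per thread, all keyed with the same secret.
public class ThreadLocalHmac {
    private final ThreadLocal<Mac> hmac;

    public ThreadLocalHmac(byte[] secretKey) {
        byte[] key = secretKey.clone();
        this.hmac = ThreadLocal.withInitial(() -> {
            try {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(key, "HmacSHA256"));
                return mac;
            } catch (Exception e) {
                throw new IllegalStateException("failed to initialize HMAC", e);
            }
        });
    }

    public byte[] createHmac(byte[] content) {
        return hmac.get().doFinal(content); // no synchronization needed
    }

    public static void main(String[] args) throws Exception {
        ThreadLocalHmac h = new ThreadLocalHmac("secret".getBytes(StandardCharsets.UTF_8));
        byte[] fromMain = h.createHmac("hello".getBytes(StandardCharsets.UTF_8));
        final byte[][] fromThread = new byte[1][];
        Thread t = new Thread(() ->
                fromThread[0] = h.createHmac("hello".getBytes(StandardCharsets.UTF_8)));
        t.start();
        t.join();
        // Each thread used its own Mac, yet the hashes agree
        System.out.println(Arrays.equals(fromMain, fromThread[0])); // true
    }
}
```

For the traffic levels the article describes, the simple synchronized method is perfectly fine; this is only worth it under heavy contention.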
Speaking of the Service, let's work our way up to a fully working token-based authentication service:

TokenAuthenticationService
@Service public class TokenAuthenticationService {private static final String AUTH_HEADER_NAME = "X-AUTH-TOKEN"; private static final long TEN_DAYS = 1000 * 60 * 60 * 24 * 10;private final TokenHandler tokenHandler;@Autowired public TokenAuthenticationService(@Value("${token.secret}") String secret) { tokenHandler = new TokenHandler(DatatypeConverter.parseBase64Binary(secret)); }public void addAuthentication(HttpServletResponse response, UserAuthentication authentication) { final User user = authentication.getDetails(); user.setExpires(System.currentTimeMillis() + TEN_DAYS); response.addHeader(AUTH_HEADER_NAME, tokenHandler.createTokenForUser(user)); }public Authentication getAuthentication(HttpServletRequest request) { final String token = request.getHeader(AUTH_HEADER_NAME); if (token != null) { final User user = tokenHandler.parseUserFromToken(token); if (user != null) { return new UserAuthentication(user); } } return null; } }

Pretty straight-forward, initializing a private TokenHandler to do the heavy lifting. It provides methods for adding and reading the custom HTTP token header. As you can see it does not use any (database driven) UserDetailsService to look up the user details. All details required to let Spring Security handle further authorization checks are provided by means of the token. Finally we can now plug all of this into Spring Security by adding two custom filters in the security configuration:

Security configuration inside StatelessAuthenticationSecurityConfig
... @Override protected void configure(HttpSecurity http) throws Exception { http ...
// custom JSON based authentication by POST of // {"username":"<name>","password":"<password>"} // which sets the token header upon authentication .addFilterBefore(new StatelessLoginFilter("/api/login", ...), UsernamePasswordAuthenticationFilter.class)// custom Token based authentication based on // the header previously given to the client .addFilterBefore(new StatelessAuthenticationFilter(...), UsernamePasswordAuthenticationFilter.class); } ...

The StatelessLoginFilter adds the token upon successful authentication:

StatelessLoginFilter
... @Override protected void successfulAuthentication(HttpServletRequest request, HttpServletResponse response, FilterChain chain, Authentication authentication) throws IOException, ServletException {// Lookup the complete User object from the database and create an Authentication for it final User authenticatedUser = userDetailsService.loadUserByUsername(authentication.getName()); final UserAuthentication userAuthentication = new UserAuthentication(authenticatedUser);// Add the custom token as HTTP header to the response tokenAuthenticationService.addAuthentication(response, userAuthentication);// Add the authentication to the Security context SecurityContextHolder.getContext().setAuthentication(userAuthentication); } ...

The StatelessAuthenticationFilter simply sets the authentication based upon the header:

StatelessAuthenticationFilter
... @Override public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException {SecurityContextHolder.getContext().setAuthentication( tokenAuthenticationService.getAuthentication((HttpServletRequest) req)); chain.doFilter(req, res); // always continue } ...

Note that unlike most Spring Security related filters, I chose to continue down the filter chain regardless of successful authentication. I wanted to support triggering Spring's AnonymousAuthenticationFilter to support anonymous authentication.
The big difference here is that the filter is not configured to map to any url specifically meant for authentication, so not providing the header isn’t really a fault.

Client-side Implementation

Client-side implementation is again pretty straight-forward. Again I’m keeping it minimalistic to prevent the authentication bit being lost in AngularJS details. If you’re looking for an AngularJS JWT example more thoroughly integrated with routes you should take a look here. I borrowed some of the interceptor logic from it. Logging in is simply a matter of storing the token (in localStorage):

login
$scope.login = function () { var credentials = { username: $scope.username, password: $scope.password }; $http.post('/api/login', credentials).success(function (result, status, headers) { $scope.authenticated = true; TokenStorage.store(headers('X-AUTH-TOKEN')); }); };

Logging out is even simpler (no call to the server necessary):

logout
$scope.logout = function () { // Just clear the local storage TokenStorage.clear(); $scope.authenticated = false; };

To check if a user is "already logged in", ng-init="init()" works nicely:

init
$scope.init = function () { $http.get('/api/users/current').success(function (user) { if(user.username !== 'anonymousUser'){ $scope.authenticated = true; $scope.username = user.username; } }); };

I chose to use an anonymously reachable endpoint to prevent triggering 401/403s. You could also decode the token itself and check the expiration time, trusting the local client time to be accurate enough.
Finally, in order to automate the process of adding the header, a simple interceptor much like in the last blog entry does nicely:

TokenAuthInterceptor
factory('TokenAuthInterceptor', function($q, TokenStorage) { return { request: function(config) { var authToken = TokenStorage.retrieve(); if (authToken) { config.headers['X-AUTH-TOKEN'] = authToken; } return config; }, responseError: function(error) { if (error.status === 401 || error.status === 403) { TokenStorage.clear(); } return $q.reject(error); } }; }).config(function($httpProvider) { $httpProvider.interceptors.push('TokenAuthInterceptor'); });

It also takes care of automatically clearing the token after receiving an HTTP 401 or 403, assuming the client isn’t going to allow calls to areas that need higher privileges.

TokenStorage

The TokenStorage is just a wrapper service over localStorage which I’ll not bother you with. Putting the token in the localStorage protects it from being read by script outside the origin of the script that saved it, just like cookies. However, because the token is not an actual cookie, no browser can be instructed to add it to requests automatically. This is essential as it completely prevents any form of CSRF attacks, thus saving you from having to implement any (stateless) CSRF protection mentioned in my previous blog. You can find a complete working example with some nice extras at GitHub. Make sure you have gradle 2.0 installed and simply run it using “gradle build” followed by a “gradle run”. If you want to play with it in your IDE like Eclipse, go with “gradle eclipse” and just import and run it from within your IDE (no server needed). Reference: Stateless Spring Security Part 2: Stateless Authentication from our JCG partner Robbert van Waveren at the JDriven blog....

Akka Notes – ActorSystem (Configuration and Scheduling) – 4

As we saw from our previous posts, we could create an Actor using the actorOf method of the ActorSystem. There’s actually much more you could do with ActorSystem. We’ll touch upon just the Configuration and the Scheduling bit in this write-up. Let’s look at the subsets of methods available in the ActorSystem.            1. Configuration Management Remember the application.conf file we used for configuring our log level in the previous write-up? This configuration file is just like those .properties files in Java applications and much more. We’ll be soon seeing how we could use this configuration file to customize our dispatchers, mailboxes etc. (I am not even closely doing justice to the power of the typesafe config. Please go through some examples to really appreciate its awesomeness). So, when we create the ActorSystem using the ActorSystem object’s apply method without specifying any configuration, it looks out for application.conf, application.json and application.properties in the root of the classpath and loads them automatically. So: val system=ActorSystem("UniversityMessagingSystem") is the same as: val system=ActorSystem("UniversityMessagingSystem", ConfigFactory.load()) To provide evidence to that argument, check out the apply method in ActorSystem.scala: def apply(name: String, config: Option[Config] = None, classLoader: Option[ClassLoader] = None, defaultExecutionContext: Option[ExecutionContext] = None): ActorSystem = { val cl = classLoader.getOrElse(findClassLoader()) val appConfig = config.getOrElse(ConfigFactory.load(cl)) new ActorSystemImpl(name, appConfig, cl, defaultExecutionContext).start() } a. 
Overriding default configuration If you are not keen on using the application.conf (as in testcases) or would like to have your own custom configuration file (as in testing against different configurations or deploying to different environments), you are free to override this by passing in your own configuration instead of using the one from the classpath. ConfigFactory.parseString is one option: val actorSystem=ActorSystem("UniversityMessageSystem", ConfigFactory.parseString("""akka.loggers = ["akka.testkit.TestEventListener"]""")) or simply in your Testcase as: class TeacherTestLogListener extends TestKit(ActorSystem("UniversityMessageSystem", ConfigFactory.parseString("""akka.loggers = ["akka.testkit.TestEventListener"]"""))) with WordSpecLike with MustMatchers with BeforeAndAfterAll { There’s also ConfigFactory.load: val system = ActorSystem("UniversityMessageSystem", ConfigFactory.load("uat-application.conf")) If you need access to your own config parameters at runtime, you could do it via its API like so: val system=ActorSystem("UniversityMessageSystem", ConfigFactory.parseString("""akka.loggers = ["akka.testkit.TestEventListener"]""")) println (system.settings.config.getValue("akka.loggers")) // Results in > SimpleConfigList(["akka.testkit.TestEventListener"]) b. Extending default configuration Other than overriding, you could also extend the default configuration with your custom configuration using the withFallback method of the Config.
Let’s say your application.conf looks like: akka{ loggers = ["akka.event.slf4j.Slf4jLogger"] loglevel = DEBUG arun="hello" } and you decide to override the akka.loggers property like: val config=ConfigFactory.parseString("""akka.loggers = ["akka.testkit.TestEventListener"]""") val system=ActorSystem("UniversityMessageSystem", config.withFallback(ConfigFactory.load())) You end up with a merged configuration of both: println (system.settings.config.getValue("akka.arun")) //> ConfigString("hello") println (system.settings.config.getValue("akka.loggers")) //> SimpleConfigList(["akka.testkit.TestEventListener"]) So, why did I tell this whole story on configuration? Because our ActorSystem is the one which loads and provides access to all the configuration information.

IMPORTANT NOTE

Watch out for the order of fallback here – which is the default and which is the extension configuration. Remember, you have to fall back to the default configuration. So: config.withFallback(ConfigFactory.load()) would work but: ConfigFactory.load().withFallback(config) would not get the results that you may need.

2. Scheduler

As you can see from the API of ActorSystem, there is a powerful little method in ActorSystem called scheduler which returns a Scheduler. The Scheduler has a variety of schedule methods with which we could do some fun stuff inside the Actor environment.

a. Schedule something to execute once

Taking our Student-Teacher example, assume our StudentActor wants to send a message to the teacher only 5 seconds after it receives the InitSignal from our Testcase and not immediately; our code looks like: class StudentDelayedActor (teacherActorRef:ActorRef) extends Actor with ActorLogging {def receive = { case InitSignal=> { import context.dispatcher context.system.scheduler.scheduleOnce(5 seconds, teacherActorRef, QuoteRequest) //teacherActorRef!QuoteRequest } ... ...
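The override-then-fallback lookup order can be illustrated with plain java.util.Properties, whose defaults mechanism behaves the same way as Config.withFallback: the object in front wins, and the defaults are only consulted for keys it does not define. (This is an analogy only; Typesafe Config does a deep merge of nested objects, which flat Properties cannot show.)

```java
import java.util.Properties;

// Analogy: new Properties(defaults) is like overrides.withFallback(defaults).
// Keys set on the front object shadow the defaults; missing keys fall through.
public class FallbackDemo {
    public static void main(String[] args) {
        Properties defaults = new Properties();
        defaults.setProperty("akka.loggers", "akka.event.slf4j.Slf4jLogger");
        defaults.setProperty("akka.arun", "hello");

        Properties overrides = new Properties(defaults); // defaults are the fallback
        overrides.setProperty("akka.loggers", "akka.testkit.TestEventListener");

        System.out.println(overrides.getProperty("akka.loggers")); // akka.testkit.TestEventListener
        System.out.println(overrides.getProperty("akka.arun"));    // hello (from the fallback)
    }
}
```

Swapping the roles (making the overrides the defaults of the full configuration) would bury your custom values under the defaults, which is exactly the fallback-order mistake warned about above.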
} }

Testcase

Let’s cook up a testcase to verify this: "A delayed student" must {"fire the QuoteRequest after 5 seconds when an InitSignal is sent to it" in {import me.rerun.akkanotes.messaging.protocols.StudentProtocol._val teacherRef = system.actorOf(Props[TeacherActor], "teacherActorDelayed") val studentRef = system.actorOf(Props(new StudentDelayedActor(teacherRef)), "studentDelayedActor")EventFilter.info (start="Printing from Student Actor", occurrences=1).intercept{ studentRef!InitSignal } }}

Increasing the timeout for EventFilter interception

Ouch. The default timeout for the EventFilter to wait for the message to appear in the EventStream is 3 seconds. Let’s increase that to 7 seconds now to verify our testcase. The filter-leeway configuration property helps us achieve that. class RequestResponseTest extends TestKit(ActorSystem("TestUniversityMessageSystem", ConfigFactory.parseString(""" akka{ loggers = ["akka.testkit.TestEventListener"] test{ filter-leeway = 7s } } """))) with WordSpecLike with MustMatchers with BeforeAndAfterAll with ImplicitSender { ... ...

b. Schedule something to execute repeatedly

In order to execute something repeatedly, you use the schedule method of the Scheduler. One of the frequently used overloads of the schedule method is the one which sends a message to the Actor on a regular basis. It accepts 4 parameters: how long the initial delay should be before the first execution begins, the frequency of subsequent executions, the target ActorRef that we are going to send a message to, and the message itself. case InitSignal=> { import context.dispatcher context.system.scheduler.schedule(0 seconds, 5 seconds, teacherActorRef, QuoteRequest) //teacherActorRef!QuoteRequest }

TRIVIA

The import import context.dispatcher is very important here.
The schedule method requires a very important implicit parameter – an ExecutionContext – the reason for which becomes pretty obvious once we see the implementation of the schedule method: final def schedule( initialDelay: FiniteDuration, interval: FiniteDuration, receiver: ActorRef, message: Any)(implicit executor: ExecutionContext, sender: ActorRef = Actor.noSender): Cancellable = schedule(initialDelay, interval, new Runnable { def run = { receiver ! message if (receiver.isTerminated) throw new SchedulerException("timer active for terminated actor") } }) The schedule method just wraps the tell in a Runnable which is eventually executed by the ExecutionContext that we pass in. In order to make an ExecutionContext available in scope as an implicit, we leverage the implicit dispatcher available on the context. From ActorCell.scala (Context): /** * Returns the dispatcher (MessageDispatcher) that is used for this Actor. * Importing this member will place an implicit ExecutionContext in scope. */ implicit def dispatcher: ExecutionContextExecutor

Code

As always, the entire project can be downloaded from github here. Reference: Akka Notes – ActorSystem (Configuration and Scheduling) – 4 from our JCG partner Arun Manivannan at the Rerun.me blog....
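For readers more at home in plain Java than Akka, the scheduleOnce/schedule semantics map roughly onto the JDK's ScheduledExecutorService. This is an analogy only (Akka ships its own lightweight scheduler implementation, not a ScheduledExecutorService), with short delays so it runs quickly:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// JDK analogy for Akka's scheduler:
//   scheduleOnce(delay, target, msg)        ~ schedule(task, delay, unit)
//   schedule(initialDelay, interval, ...)   ~ scheduleAtFixedRate(task, initialDelay, interval, unit)
public class SchedulerAnalogy {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        CountDownLatch fired = new CountDownLatch(3);

        // "scheduleOnce": run a task a single time after a delay
        scheduler.schedule(() -> System.out.println("one-shot fired"),
                50, TimeUnit.MILLISECONDS);

        // "schedule": initial delay of 0, then repeat every 50 ms
        scheduler.scheduleAtFixedRate(fired::countDown,
                0, 50, TimeUnit.MILLISECONDS);

        fired.await(); // block until the repeating task has run three times
        scheduler.shutdownNow();
        System.out.println("repeated 3 times");
    }
}
```

The structural similarity also explains why Akka's version needs an ExecutionContext: just as the JDK variant needs an executor thread pool, the wrapped Runnable holding the tell has to run somewhere.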

MongoDB Incremental Migration Scripts

Introduction

An incremental software development process requires an incremental database migration strategy. I remember working on an enterprise application where hibernate.hbm2ddl.auto was the default data migration tool. Updating the production environment required intensive preparation and the migration scripts were only created on-the-spot. An unforeseen error could have led to production data corruption.

Incremental updates to the rescue

The incremental database update is a technical feature that needs to be addressed in the very first application development iterations. We used to develop our own custom data migration implementations, and spending time writing/supporting frameworks always works against your current project budget. A project must be packed with both application code and all associated database schema/data update scripts. Using incremental migration scripts allows us to automate the deployment process and to take advantage of continuous delivery. Nowadays you don’t have to implement data migration tools; Flyway does a better job than all our previous custom frameworks. All database schema and data changes have to be recorded in incremental update scripts following a well-defined naming convention. A RDBMS migration plan addresses both schema and data changes. It’s always good to separate schema and data changes. Integration tests might only use the schema migration scripts in conjunction with test-time related data. Flyway supports all major relational database systems, but for NoSQL (e.g. MongoDB) you need to look somewhere else.

Mongeez

Mongeez is an open-source project aiming to automate MongoDB data migration. MongoDB is schema-less, so migration scripts only target data updates.
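The version-based naming convention (v1_1__initial_data.js, v1_2__update_products.js) is what makes incremental ordering possible. A hypothetical sketch of parsing and ordering such names follows; the key detail is that versions must sort numerically, not alphabetically, or v1_10 would run before v1_2:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch of a v<major>_<minor>__description naming convention,
// similar in spirit to Flyway/Mongeez script names. Not library code.
public class MigrationOrder {
    private static final Pattern NAME = Pattern.compile("v(\\d+)_(\\d+)__.*");

    // Collapse (major, minor) into one comparable number
    static long versionOf(String script) {
        Matcher m = NAME.matcher(script);
        if (!m.matches()) throw new IllegalArgumentException("bad script name: " + script);
        return Long.parseLong(m.group(1)) * 1_000_000 + Long.parseLong(m.group(2));
    }

    public static List<String> sorted(List<String> scripts) {
        List<String> copy = new ArrayList<>(scripts);
        copy.sort(Comparator.comparingLong(MigrationOrder::versionOf));
        return copy;
    }

    public static void main(String[] args) {
        List<String> scripts = List.of(
                "v1_10__add_index.js", "v1_2__update_products.js", "v1_1__initial_data.js");
        System.out.println(sorted(scripts));
        // [v1_1__initial_data.js, v1_2__update_products.js, v1_10__add_index.js]
    }
}
```

A plain alphabetical sort would put v1_10 between v1_1 and v1_2, which is one of the classic mistakes a migration tool's convention protects you from.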
Integrating mongeez First you have to define a mongeez configuration file: mongeez.xml <changeFiles> <file path="v1_1__initial_data.js"/> <file path="v1_2__update_products.js"/> </changeFiles> Then you add the actual migrate scripts: v1_1__initial_data.js //mongeez formatted javascript //changeset system:v1_1 db.product.insert({ "_id": 1, "name" : "TV", "price" : 199.99, "currency" : 'USD', "quantity" : 5, "version" : 1 }); db.product.insert({ "_id": 2, "name" : "Radio", "price" : 29.99, "currency" : 'USD', "quantity" : 3, "version" : 1 }); v1_2__update_products.js //mongeez formatted javascript //changeset system:v1_2 db.product.update( { name : 'TV' }, { $inc : { price : -10, version : 1 } }, { multi: true } ); And you need to add the MongeezRunner too: <bean id="mongeez" class="org.mongeez.MongeezRunner" depends-on="mongo"> <property name="mongo" ref="mongo"/> <property name="executeEnabled" value="true"/> <property name="dbName" value="${mongo.dbname}"/> <property name="file" value="classpath:mongodb/migration/mongeez.xml"/> </bean> Running mongeez When the application first starts, the incremental scripts will be analyzed and only run if necessary: INFO [main]: o.m.r.FilesetXMLReader - Num of changefiles 2 INFO [main]: o.m.ChangeSetExecutor - ChangeSet v1_1 has been executed INFO [main]: o.m.ChangeSetExecutor - ChangeSet v1_2 has been executed Mongeez uses a separate MongoDB collection to record previously run scripts: db.mongeez.find().pretty(); { "_id" : ObjectId("543b69eeaac7e436b2ce142d"), "type" : "configuration", "supportResourcePath" : true } { "_id" : ObjectId("543b69efaac7e436b2ce142e"), "type" : "changeSetExecution", "file" : "v1_1__initial_data.js", "changeId" : "v1_1", "author" : "system", "resourcePath" : "mongodb/migration/v1_1__initial_data.js", "date" : "2014-10-13T08:58:07+03:00" } { "_id" : ObjectId("543b69efaac7e436b2ce142f"), "type" : "changeSetExecution", "file" : "v1_2__update_products.js", "changeId" : "v1_2", "author" : "system", 
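The incremental behavior described above boils down to remembering which changeset ids have already run and skipping them on subsequent startups. A toy illustration of that bookkeeping (my sketch, not Mongeez code; the real tool persists the record in the mongeez MongoDB collection shown below):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of an idempotent changeset runner: execute each
// changeset id at most once across any number of runs.
public class ChangeSetRunner {
    // Stands in for the persistent "mongeez" collection of executed changesets
    private final Map<String, Boolean> executed = new LinkedHashMap<>();

    public List<String> run(List<String> changeSetIds) {
        List<String> ranNow = new ArrayList<>();
        for (String id : changeSetIds) {
            if (executed.putIfAbsent(id, true) == null) {
                ranNow.add(id); // a real runner would execute the script here
            }
        }
        return ranNow;
    }

    public static void main(String[] args) {
        ChangeSetRunner runner = new ChangeSetRunner();
        System.out.println(runner.run(List.of("v1_1", "v1_2")));         // [v1_1, v1_2]
        System.out.println(runner.run(List.of("v1_1", "v1_2", "v1_3"))); // [v1_3]
    }
}
```

The second run only executes v1_3, which mirrors the log output below: previously applied changesets are analyzed but not re-run.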
"resourcePath" : "mongodb/migration/v1_2__update_products.js", "date" : "2014-10-13T08:58:07+03:00" } Conclusion To automate the deployment process you need to create self-sufficient packs, containing both bytecode and all associated configuration (xml files, resource bundles and data migration scripts). Before starting writing your own custom framework, you should always investigate for available open-source alternatives.Code available on GitHub.Reference: MongoDB Incremental Migration Scripts from our JCG partner Vlad Mihalcea at the Vlad Mihalcea’s Blog blog....

Java EE 7 Batch Processing and World of Warcraft – Part 1

This was one of my sessions at the last JavaOne. This post is going to expand on the subject and look into a real application using the Batch JSR-352 API. This application integrates with the MMORPG World of Warcraft. Since JSR-352 is a new specification in the Java EE world, I think that many people don’t know how to use it properly. It may also be a challenge to identify the use cases to which this specification applies. Hopefully this example can help you understand the use cases better.

Abstract

World of Warcraft is a game played by more than 8 million players worldwide. The service is offered by region: United States (US), Europe (EU), China and Korea. Each region has a set of servers, called Realms, that you connect to in order to play the game. For this example, we are only looking into the US and EU regions. One of the most interesting features about the game is that it allows you to buy and sell in-game goods, called Items, using an Auction House. Each Realm has two Auction Houses. On average each Realm trades around 70,000 Items. Let’s crunch some numbers: 512 Realms (US and EU), 70K Items per Realm, more than 35M Items overall.

The Data

Another cool thing about World of Warcraft is that the developers provide a REST API to access most of the in-game information, including the Auction House data. Check the complete API here. The Auction House data is obtained in two steps. First we need to query the corresponding Auction House Realm REST endpoint to get a reference to a JSON file. Next we need to access this URL and download the file with all the Auction House Item information. Here is an example: http://eu.battle.net/api/wow/auction/data/aggra-portugues

The Application

Our objective here is to build an application that downloads the Auction House data, processes it and extracts metrics. These metrics are going to build a history of the Items' price evolution over time. Who knows?
Maybe with this information we can predict price fluctuation and buy or sell Items at the best times.

The Setup

For the setup, we’re going to use a few extra things on top of Java EE 7: AngularJS, Angular ng-grid, UI Bootstrap, Google Chart and Wildfly.

Jobs

The main work is going to be performed by Batch JSR-352 Jobs. A Job is an entity that encapsulates an entire batch process. A Job will be wired together via a Job Specification Language. With JSR-352, a Job is simply a container for the steps. It combines multiple steps that belong logically together in a flow. We’re going to split the business logic into three jobs: Prepare – creates all the supporting data needed (lists Realms, creates folders to copy files); Files – queries realms to check for new files to process; Process – downloads the file, processes the data, extracts metrics.

The Code

Back-end – Java EE 7 with Java 8

Most of the code is going to be in the back-end. We need Batch JSR-352, but we are also going to use a lot of other technologies from Java EE, like JPA, JAX-RS, CDI and JSON-P. Since the Prepare Job only initializes application resources for the processing, I’m skipping it and diving into the most interesting parts.

Files Job

The Files Job is an implementation of AbstractBatchlet. A Batchlet is the simplest processing style available in the Batch specification. It’s a task-oriented step where the task is invoked once, executes, and returns an exit status. This type is most useful for performing a variety of tasks that are not item-oriented, such as executing a command or doing a file transfer. In this case, our Batchlet is going to iterate over every Realm, make a REST request to each one and retrieve a URL with the file containing the data that we want to process.
Here is the code:

LoadAuctionFilesBatchlet
@Named public class LoadAuctionFilesBatchlet extends AbstractBatchlet { @Inject private WoWBusiness woWBusiness;@Inject @BatchProperty(name = "region") private String region; @Inject @BatchProperty(name = "target") private String target;@Override public String process() throws Exception { List<Realm> realmsByRegion = woWBusiness.findRealmsByRegion(Realm.Region.valueOf(region)); realmsByRegion.parallelStream().forEach(this::getRealmAuctionFileInformation);return "COMPLETED"; }void getRealmAuctionFileInformation(Realm realm) { try { Client client = ClientBuilder.newClient(); Files files = client.target(target + realm.getSlug()) .request(MediaType.TEXT_PLAIN).async() .get(Files.class) .get(2, TimeUnit.SECONDS);files.getFiles().forEach(auctionFile -> createAuctionFile(realm, auctionFile)); } catch (Exception e) { getLogger(this.getClass().getName()).log(Level.INFO, "Could not get files for " + realm.getRealmDetail()); } }void createAuctionFile(Realm realm, AuctionFile auctionFile) { auctionFile.setRealm(realm); auctionFile.setFileName("auctions." + auctionFile.getLastModified() + ".json"); auctionFile.setFileStatus(FileStatus.LOADED);if (!woWBusiness.checkIfAuctionFileExists(auctionFile)) { woWBusiness.createAuctionFile(auctionFile); } } }

A cool thing about this is the use of Java 8. With parallelStream(), invoking multiple REST requests at once is easy as pie! You can really notice the difference. If you want to try it out, just run the sample and replace parallelStream() with stream() and check it out. On my machine, using parallelStream() makes the task execute around 5 or 6 times faster.

Update

Usually, I would not use this approach. I’ve done it because part of the logic involves invoking slow REST requests and parallelStreams really shine here. Doing this using batch partitions is possible, but hard to implement. We also need to poll the servers for new data every time, so it’s not terrible if we skip a file or two.
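The parallelStream() speedup is easy to reproduce outside the application. This self-contained sketch (hypothetical names, with Thread.sleep standing in for the slow REST call) fans the simulated requests across the common ForkJoinPool:

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Illustrative demo: parallelStream() overlaps simulated slow I/O calls.
// Swap parallelStream() for stream() to see the sequential timing.
public class ParallelFetchDemo {
    static String fetch(int realmId) {
        try {
            Thread.sleep(20); // stand-in for a slow REST request
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "realm-" + realmId;
    }

    public static void main(String[] args) {
        List<Integer> realms = IntStream.range(0, 16).boxed().collect(Collectors.toList());

        long start = System.nanoTime();
        Set<String> threadsUsed = ConcurrentHashMap.newKeySet();
        List<String> results = realms.parallelStream()
                .peek(r -> threadsUsed.add(Thread.currentThread().getName()))
                .map(ParallelFetchDemo::fetch)
                .collect(Collectors.toList());
        long millis = (System.nanoTime() - start) / 1_000_000;

        System.out.println(results.size() + " results in " + millis + "ms using "
                + threadsUsed.size() + " thread(s)");
    }
}
```

One caveat worth knowing: parallel streams share the JVM-wide common ForkJoinPool, so blocking I/O inside them can starve other parallel work in the same JVM; for the author's stated situation (a one-off batch step that may skip a file or two) that tradeoff is acceptable.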
Keep in mind that if you don't want to miss a single record, a Chunk processing style is more suitable. Thank you to Simon Martinelli for bringing this to my attention.

Since the US and EU Realms require different REST endpoints to invoke, they are perfect to be partitioned. Partitioning means that the task is going to run in multiple threads: one thread per partition. In this case we have two partitions.

To complete the job definition we need to provide a Job XML file. This needs to be placed in the META-INF/batch-jobs directory. Here is the files-job.xml for this job:

files-job.xml

<job id="loadRealmAuctionFileJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
    <step id="loadRealmAuctionFileStep">
        <batchlet ref="loadAuctionFilesBatchlet">
            <properties>
                <property name="region" value="#{partitionPlan['region']}"/>
                <property name="target" value="#{partitionPlan['target']}"/>
            </properties>
        </batchlet>
        <partition>
            <plan partitions="2">
                <properties partition="0">
                    <property name="region" value="US"/>
                    <property name="target" value="http://us.battle.net/api/wow/auction/data/"/>
                </properties>
                <properties partition="1">
                    <property name="region" value="EU"/>
                    <property name="target" value="http://eu.battle.net/api/wow/auction/data/"/>
                </properties>
            </plan>
        </partition>
    </step>
</job>

In files-job.xml we need to define our Batchlet in the batchlet element. For the partitions, just define the partition element and assign different properties to each plan. These properties can then be used to late-bind values into the LoadAuctionFilesBatchlet with the expressions #{partitionPlan['region']} and #{partitionPlan['target']}. This is a very simple expression binding mechanism and only works for simple properties and Strings.

Process Job

Now we want to process the Realm Auction Data file. Using the information from the previous job, we can now download the file and do something with the data.
The JSON file has the following structure:

item-auctions-sample.json

{
    "realm": {
        "name": "Grim Batol",
        "slug": "grim-batol"
    },
    "alliance": {
        "auctions": [
            {
                "auc": 279573567,          // Auction Id
                "item": 22792,             // Item for sale Id
                "owner": "Miljanko",       // Seller Name
                "ownerRealm": "GrimBatol", // Realm
                "bid": 3800000,            // Bid Value
                "buyout": 4000000,         // Buyout Value
                "quantity": 20,            // Number of items in the Auction
                "timeLeft": "LONG",        // Time left for the Auction
                "rand": 0,
                "seed": 1069994368
            },
            {
                "auc": 278907544,
                "item": 40195,
                "owner": "Mongobank",
                "ownerRealm": "GrimBatol",
                "bid": 38000,
                "buyout": 40000,
                "quantity": 1,
                "timeLeft": "VERY_LONG",
                "rand": 0,
                "seed": 1978036736
            }
        ]
    },
    "horde": {
        "auctions": [
            {
                "auc": 278268046,
                "item": 4306,
                "owner": "Thuglifer",
                "ownerRealm": "GrimBatol",
                "bid": 570000,
                "buyout": 600000,
                "quantity": 20,
                "timeLeft": "VERY_LONG",
                "rand": 0,
                "seed": 1757531904
            },
            {
                "auc": 278698948,
                "item": 4340,
                "owner": "Celticpala",
                "ownerRealm": "Aggra(Português)",
                "bid": 1000000,
                "buyout": 1000000,
                "quantity": 10,
                "timeLeft": "LONG",
                "rand": 0,
                "seed": 0
            }
        ]
    }
}

The file has a list of the auctions from the Realm it was downloaded from. In each record we can check the item for sale, prices, seller and time left until the end of the auction. Auctions are also aggregated by Auction House type: Alliance and Horde.

For the process-job we want to read the JSON file, transform the data and save it to a database. This can be achieved by Chunk Processing. A Chunk is an ETL (Extract – Transform – Load) style of processing which is suitable for handling large amounts of data. A Chunk reads the data one item at a time, and creates chunks that will be written out within a transaction. One item is read in from an ItemReader, handed to an ItemProcessor, and aggregated. Once the number of items read equals the commit interval, the entire chunk is written out via the ItemWriter, and then the transaction is committed.
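The read-one/process-one/write-a-chunk cycle is easy to picture in plain Java. The toy loop below is an assumption-laden sketch of the semantics, not the JSR-352 API: the iterator stands in for ItemReader, the function for ItemProcessor, the consumer for ItemWriter, and the commit counter for the real transaction boundary.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

public class ChunkLoop {

    // Reads items one at a time, processes them, and writes them out in chunks
    // of `itemCount`, "committing" after each chunk. Returns the commit count.
    static <T, R> int runChunk(Iterator<T> reader,
                               Function<T, R> processor,
                               Consumer<List<R>> writer,
                               int itemCount) {
        int commits = 0;
        List<R> chunk = new ArrayList<>();
        while (reader.hasNext()) {                 // readItem() until null in the real API
            chunk.add(processor.apply(reader.next()));
            if (chunk.size() == itemCount) {       // commit interval reached
                writer.accept(chunk);              // ItemWriter.writeItems(...)
                chunk = new ArrayList<>();
                commits++;                         // transaction committed here
            }
        }
        if (!chunk.isEmpty()) {                    // final partial chunk
            writer.accept(chunk);
            commits++;
        }
        return commits;
    }

    public static void main(String[] args) {
        List<Integer> items = new ArrayList<>();
        for (int i = 0; i < 250; i++) items.add(i);

        int commits = runChunk(items.iterator(), i -> i * 2, c -> { }, 100);
        System.out.println(commits + " commits for 250 items at item-count 100");
    }
}
```

With 250 items and a commit interval of 100, the loop commits twice for full chunks and once more for the trailing 50 items.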
ItemReader

The real files are so big that they cannot be loaded entirely into memory, or you may run out of it. Instead we use the JSON-P API to parse the data in a streaming way.

AuctionDataItemReader

@Named
public class AuctionDataItemReader extends AbstractAuctionFileProcess implements ItemReader {
    private JsonParser parser;
    private AuctionHouse auctionHouse;

    @Inject
    private JobContext jobContext;
    @Inject
    private WoWBusiness woWBusiness;

    @Override
    public void open(Serializable checkpoint) throws Exception {
        setParser(Json.createParser(openInputStream(getContext().getFileToProcess(FolderType.FI_TMP))));

        AuctionFile fileToProcess = getContext().getFileToProcess();
        fileToProcess.setFileStatus(FileStatus.PROCESSING);
        woWBusiness.updateAuctionFile(fileToProcess);
    }

    @Override
    public void close() throws Exception {
        AuctionFile fileToProcess = getContext().getFileToProcess();
        fileToProcess.setFileStatus(FileStatus.PROCESSED);
        woWBusiness.updateAuctionFile(fileToProcess);
    }

    @Override
    public Object readItem() throws Exception {
        while (parser.hasNext()) {
            JsonParser.Event event = parser.next();
            Auction auction = new Auction();
            switch (event) {
                case KEY_NAME:
                    updateAuctionHouseIfNeeded(auction);

                    if (readAuctionItem(auction)) {
                        return auction;
                    }
                    break;
            }
        }
        return null;
    }

    @Override
    public Serializable checkpointInfo() throws Exception {
        return null;
    }

    protected void updateAuctionHouseIfNeeded(Auction auction) {
        if (parser.getString().equalsIgnoreCase(AuctionHouse.ALLIANCE.toString())) {
            auctionHouse = AuctionHouse.ALLIANCE;
        } else if (parser.getString().equalsIgnoreCase(AuctionHouse.HORDE.toString())) {
            auctionHouse = AuctionHouse.HORDE;
        } else if (parser.getString().equalsIgnoreCase(AuctionHouse.NEUTRAL.toString())) {
            auctionHouse = AuctionHouse.NEUTRAL;
        }

        auction.setAuctionHouse(auctionHouse);
    }

    protected boolean readAuctionItem(Auction auction) {
        if (parser.getString().equalsIgnoreCase("auc")) {
            parser.next();
            auction.setAuctionId(parser.getLong());
            parser.next();
            parser.next();
            auction.setItemId(parser.getInt());
            parser.next();
            parser.next();
            parser.next();
            parser.next();
            auction.setOwnerRealm(parser.getString());
            parser.next();
            parser.next();
            auction.setBid(parser.getInt());
            parser.next();
            parser.next();
            auction.setBuyout(parser.getInt());
            parser.next();
            parser.next();
            auction.setQuantity(parser.getInt());
            return true;
        }
        return false;
    }

    public void setParser(JsonParser parser) {
        this.parser = parser;
    }
}

To open a JSON parser stream we call Json.createParser and pass it a reference to an input stream. To read elements we just need to call the hasNext() and next() methods. next() returns a JsonParser.Event that allows us to check the position of the parser in the stream. Elements are read and returned in the readItem() method from the Batch API ItemReader. When no more elements are available to read, return null to finish the processing. Note that we also implement the open and close methods from ItemReader. These are used to initialize and clean up resources. They only execute once.

ItemProcessor

The ItemProcessor is optional. It's used to transform the data that was read. In this case we need to add additional information to the Auction.

AuctionDataItemProcessor

@Named
public class AuctionDataItemProcessor extends AbstractAuctionFileProcess implements ItemProcessor {
    @Override
    public Object processItem(Object item) throws Exception {
        Auction auction = (Auction) item;

        auction.setRealm(getContext().getRealm());
        auction.setAuctionFile(getContext().getFileToProcess());

        return auction;
    }
}

ItemWriter

Finally we just need to write the data down to a database:

AuctionDataItemWriter

@Named
public class AuctionDataItemWriter extends AbstractItemWriter {
    @PersistenceContext
    protected EntityManager em;

    @Override
    public void writeItems(List<Object> items) throws Exception {
        items.forEach(em::persist);
    }
}

The entire process with a file of 70k records takes around 20 seconds on my machine. I did notice something very interesting.
Before this code, I was using an injected EJB that called a method with the persist operation. This was taking 30 seconds in total, so injecting the EntityManager and performing the persist directly saved me a third of the processing time. I can only speculate that the delay is due to an increased call stack, with EJB interceptors in the middle. This was happening in Wildfly. I will investigate this further.

To define the chunk we need to add it to a process-job.xml file:

process-job.xml

<step id="processFile" next="moveFileToProcessed">
    <chunk item-count="100">
        <reader ref="auctionDataItemReader"/>
        <processor ref="auctionDataItemProcessor"/>
        <writer ref="auctionDataItemWriter"/>
    </chunk>
</step>

In the item-count property we define how many elements fit into each chunk of processing. This means that the transaction is committed every 100 elements. This is useful to keep the transaction size low and to checkpoint the data. If we need to stop and then restart the operation, we can do it without having to process every item again, but we have to code that logic ourselves. This is not included in the sample, but I will do it in the future.

Running

To run a job we need to get a reference to a JobOperator. The JobOperator provides an interface to manage all aspects of job processing, including operational commands, such as start, restart and stop, as well as job repository related commands, such as retrieval of job and step executions. To run the previous files-job.xml Job we execute:

Execute Job

JobOperator jobOperator = BatchRuntime.getJobOperator();
jobOperator.start("files-job", new Properties());

Note that we pass the name of the job XML file, without the extension, to the JobOperator.

Next Steps

We still need to aggregate the data to extract metrics and display it on a web page. This post is already long, so I will describe the following steps in a future post. Anyway, the code for that part is already in the GitHub repo. Check the Resources section.
Resources

You can clone a full working copy from my GitHub repository and deploy it to Wildfly. You can find instructions there to deploy it.

Reference: Java EE 7 Batch Processing and World of Warcraft – Part 1 from our JCG partner Roberto Cortez at the Roberto Cortez Java Blog.

Rapid Mobile App Development With Appery.io – Creating Vacation Request App

I just got back from Stamford, CT, where I did a talk at the Web, Mobile and Backend Developers Meetup. In about 90 minutes we built a prototype version of a productivity app called Vacation Request using the Appery.io cloud development platform. The app helps employees request and submit time off from their mobile phones. The app has the following functionality:

User login and registration
Submit a vacation request. The request is saved into the Appery.io database
Send an SMS message to the manager notifying him/her of a new request
Send an email to the manager notifying him/her of a new request
Push notifications to notify of a new request, or to notify the employee when the request is approved
Customer Console for the manager to view/approve requests from a user-friendly console
Package the app for Android

Let me walk you through the app in more detail. The first page we designed was the Login page. Appery.io Backend Services comes with User Management built in, so you can register, sign in and log out users. As the App Builder and the Database are integrated, it's fast to generate the login/sign-up services automatically. Then the service is added to the page and mapped to it: first the request mapping (from page to service), then the response mapping (from service to page or local storage). In our example, we are saving the user id and the user session into local storage. The steps are identical for registration. In case login or registration fails for some reason, we display a basic error.

Next we built the Vacation Request page where you make the actual request. This page is based on a template which has a Panel menu that slides in from the left. The Save button saves the request into the Appery.io Database (into the Vacation collection). The Email button sends an email to the manager using the SendGrid API.
The functionality was imported as a plugin. The SMS button sends an SMS message to the manager using the Twilio API. Once we were done building the app, we added push notification capability. To send a push notification, the app has to be installed on the device. Packaging for the various native platforms is as simple as clicking a button. Lastly, we activated the Customer Console, which allows the manager to view the data (vacation requests or any other app data) and approve the requests there. The Customer Console is a user-friendly app that allows editing the app data without asking the developer to do it. It also allows sending push notifications. Access to data, and whether you can send push messages, is configurable.

The goal was to show how rapidly you can build a mobile app using Appery.io. In about 90 minutes, we were able to build a prototype or a first version of an app that saves vacation requests and allows sending an email or an SMS message, with push notifications. And we built a binary for Android.

Reference: Rapid Mobile App Development With Appery.io – Creating Vacation Request App from our JCG partner Max Katz at the Maxa blog.

The Cloud Winners and Losers?

The cloud is revolutionising IT. However, there are two sides to every story: the winners and the losers. Who are they going to be, and why? If you can't wait, here are the losers: HP, Oracle, Dell, SAP, RedHat, Infosys, VMWare, EMC, Cisco, etc. Survivors: IBM, Accenture, Intel, Apple, etc. Winners: Amazon, Salesforce, Google, CSC, Workday, Canonical, Metaswitch, Microsoft, ARM, ODMs. Now the question is why, and is this list written in stone?

What has cloud changed?

If you are working in a hardware business (including storage, networking, etc.), then cloud computing is a value destroyer. You have an organisation that assumes small, medium and large enterprises have, and always will, run their own data centres. As such, you have been blown out of the water by the fact that cloud has changed this fundamental rule. All of a sudden Amazon, Google and Facebook go and buy specialised webscale hardware from your suppliers, the ODMs. Facebook all of a sudden open sources hardware, networking, rack and data centre designs and makes it so that anybody can compete with you. Cloud is all about scale-out and open source, hence commodity storage, software-defined networks and network virtualisation functions are converting your portfolio into commodity products.

If you are an enterprise software vendor, then you always assumed that companies would buy an instance of your product, customise it and manage it themselves. You did not expect that software could be offered as a service, and that one platform could offer individual solutions to millions of enterprises. You also did not expect that software could be sold by the hour instead of licensed forever.

If you are an outsourcing company, then you assume that companies that have invested in customising Siebel will want you to run it forever and not move to Salesforce.

Reviewing the losers

HP's Cloud Strategy

HP has been living off printers and hardware.
Meg has rightfully taken the decision to separate the cash cow, stop it subsidising other less profitable divisions, and let it be milked till it dies. The other group will focus on Cloud, Big Data, etc. However, HP Cloud is more expensive and slower-moving than any of the big three, so economies of scale will push it into niche areas or make it die. HP's OpenStack is a product that came 2-3 years late to the market – a market, as we will see later, that is about to be commoditised. HP's Big Data strategy? Overpay for Vertica and Autonomy and focus your marketing around the lawsuits with the former owners, not any unique selling proposition. Also, Big Data can only be sold if you have an open source solution that people can test. Big Data customers are small startups that have quickly become large dotcoms. Most enterprises would not know what to do with Hadoop even if they could download it for free [YES, you can actually download it for free!!!].

Oracle's Cloud Strategy

Oracle was denying Cloud existed until their most laggard customers started asking questions. Until very recently you could only buy Oracle databases by the hour from Amazon. Oracle has been milking the enterprise software market for years, paying surprise visits to audit your usage of their database and sending you an unexpected bill. Recently they have started to cloud-wash [and Big Data-wash] their software portfolio, but Salesforce and Workday are already too far ahead to catch. A good Christmas book Larry could buy from Amazon would be "The Innovator's Dilemma".

Dell's Cloud Strategy

Go to the main Dell page and you will not find the words Big Data or Cloud. I rest my case.

SAP's Cloud Strategy

Workday is working hard on making SAP irrelevant. Salesforce overtook Siebel; Workday is likely to do the same to SAP. People don't want to manage their ERP themselves.

RedHat's Cloud Strategy [I work for their biggest competitor]

RedHat salesperson to their customers: there are three versions.
Fedora if you need innovation but don't want support. CentOS if you want free but no security updates. RHEL if you accept expensive and old, but with support. Compare this to Canonical: there is only one Ubuntu, it is innovative and free to use, and if you want support you can buy it extra. For Cloud, the story is that RedHat is three times cheaper than VMWare and your old stuff can be made to work as long as you want, according to a prescribed recipe. Compare this with an innovator that wants to completely commoditise OpenStack [ten times cheaper] and bring the most innovative and flexible solution [any SDN, any storage, any hypervisor, etc.] that instantly solves your problems [deploy different flavours of OpenStack in minutes without needing any help].

Infosys, or any outsourcing company

If the data centre is going away, then the first thing to go will be that CRM solution we bought in the 90s from a company that no longer exists.

VMWare

For the company that brought virtualisation into the enterprise, it is hard to admit that once you put a REST API in front of it, you no longer need their solution in each enterprise.

EMC

Commodity storage means that scale-out storage can be offered at a fraction of the price of a regular EMC SAN solution. However, the big killer is Amazon's S3, which can give you unlimited storage in minutes without worries.

Cisco

A Cisco router is an extremely expensive device that is hard to manage, built on top of proprietary hardware, a proprietary OS and proprietary software. What do you think will happen in a world where cheap ASICs plus commodity CPUs, a general-purpose OS and many thousands of network apps from an app store become available? Or worse, where a network no longer needs many physical boxes because most of it is virtualised?

What does a cloud loser mean?

A cloud loser means that their existing cash cows will be crunched by disruptive innovations. Does this mean that the losers will disappear or cannot recuperate? Some might disappear.
However, if smart executives in these losing companies were given the freedom to bring to market new solutions that build on top of the new reality, they might come out stronger. IBM has shown it can do so many times. Let's look at the cloud survivors.

IBM

IBM has shown over and over again that it can reinvent itself. It sold its x86 servers in order to show its employees and the world that the future is no longer there. In the past it bought PWC's consultancy, which will keep on inventing new service offerings for customers that are lost in the cloud.

Accenture

Just like PWC's consultancy arm within IBM, Accenture will have consultants that help people make the transition from data centre to cloud. Accenture will not be leading the revolution, but it will be a "me-too" player that can put more people in place faster than others.

Intel

x86 is not going to die soon. The cloud just means others will be buying it. Intel will keep on trying to innovate in software and go nowhere [e.g. Intel's Hadoop was going to eat the world], but at least its processors will keep it above water.

Apple

Apple knows what consumers want, but it still needs to prove it understands enterprises. Having a locked-in world is fine for consumers, but enterprises don't like it. Either they come up with a creative solution or the billions will not keep on growing.

What does a cloud survivor mean?

A cloud survivor means that the key cash cows will not be killed by the cloud. It does not guarantee that the company will grow. It just means that in this revolution, the eye of the tornado rushed over your neighbour's house, not yours. You can still have lots of collateral damage…

Amazon

IaaS = Amazon. No further words needed. Amazon will extend Gov Cloud into Health Cloud, Bank Cloud, Energy Cloud, etc. and remove the main laggard's argument: "for legal & security reasons I can't move to the cloud".
Amazon currently has 40-50 Anything-as-a-Service offerings; in 36 months they will have 500.

Salesforce

PaaS & SaaS = Salesforce. Salesforce will become more than a CRM on steroids; it will be the world's business solutions platform. If there is no business solution for it on Salesforce, then it is not a business problem worth solving. They are likely to buy competitors like Workday.

Google

Google is the king of the consumer cloud. Google Apps has taken the SME market by storm. The enterprise cloud is not going anywhere soon, however. Google was too late with IaaS and, unlike its competitors, is not solving on-premise transitional problems. With Kubernetes, Google will re-educate the current star programmers, and over time will revolutionise the way software is written and managed, and might win in the long run. Google's cloud future will be decided in 5-10 years. They invented most of it and showed the world, 5 years later, in a paper.

CSC

CSC has moved away from being a body shop to having several strategically important products for cloud orchestration and big data. They have a long-term future focus, employing cloud visionaries like Simon Wardley, that few others match. You don't win a cloud war in the next quarter. It took Simon 4 years to take Ubuntu from 0% to 70% on public clouds.

Workday

What Salesforce did to Oracle's Siebel, Workday is doing to SAP. Companies that have bought into Salesforce will easily switch to Workday in phase 2.

Canonical

Since RedHat is probably reading this blog post, I can't be explicit.
But a company of 600 people that controls up to 70% of the operating systems on public clouds and more than 50% of OpenStack, brings out a new server OS, desktop and complete cloud solution every 6 months (with a phone OS coming in the next months), can convert bare metal into virtual-like cloud resources in minutes, enables anybody to deploy, integrate and scale any software on any cloud or bare-metal server [Intel, IBM Power 8, ARM 64], and is on a mission to completely commoditise cloud infrastructure via open source solutions in 2015, deserves to make it to the list.

Metaswitch

Metaswitch has been developing network software for the big network guys for years. These big network guys would put it in a box and sell it extremely expensively. In a world of commodity hardware, open source and scale-out, Clearwater and Calico have catapulted Metaswitch onto the list of most innovative telecom suppliers. Telecom providers will be like cloud providers: they will go to the ODM that really knows how things work and ignore the OEM that just puts a brand on the box. The Cloud still needs WAN networks. Google Fibre will not rule the world in one day. Telecom operators will have to spend their billions with somebody.

Microsoft

If you are into Windows you will be on Azure, and it will be business as usual for Microsoft.

ARM

In an ODM-dominated world, ARM processors are likely to move from smartphones into networking and the cloud.

ODMs

Nobody knows them, but they are the ones designing everybody's hardware. Over time Amazon, Google and Microsoft might make their own hardware, but for the foreseeable future they will keep on buying it "en masse" from ODMs.

What does a cloud winner mean?

Billions and fame for some, large take-overs or IPOs for others. But the cloud war is not over yet. It is not because the first battles were won that enemies can't invent new weapons or join forces. So the war is not over; it is just beginning.
History is written today…Reference: The Cloud Winners and Losers? from our JCG partner Maarten Ectors at the Telruptive blog....

Easy REST endpoints with Apache Camel 2.14

Apache Camel had a new release recently, and some of the new features were blogged about by my colleague Claus Ibsen. You really should check out his blog entry and dig into more detail, but one of the features I was looking forward to trying was the new REST DSL.

So what is this new DSL? Actually, it's an extension to Camel's routing DSL, which is a powerful domain language for declaratively describing integration flows and is available in many flavors. It's pretty awesome, and is a differentiator between integration libraries. If you haven't seen Camel's DSL, you should check it out. Have I mentioned that Camel's DSL is awesome? k.. back to the REST story here..

Before release 2.14, creating REST endpoints meant using camel-cxfrs, which can be difficult to approach for a new user just trying to expose a simple REST endpoint. Actually, it's a very powerful approach to doing contract-first REST design, but I'll leave that for the next blog post. However, in a previous post I did dive into using camel-cxfrs for REST endpoints, so you can check it out. With 2.14, the DSL has been extended to make it easier to create REST endpoints. For example:

rest("/user").description("User rest service")
    .consumes("application/json").produces("application/json")

    .get("/{id}").description("Find user by id").outType(User.class)
        .to("bean:userService?method=getUser(${header.id})")

    .put().description("Updates or create a user").type(User.class)
        .to("bean:userService?method=updateUser")

    .get("/findAll").description("Find all users").outTypeList(User.class)
        .to("bean:userService?method=listUsers");

In this example, we can see we use the DSL to define REST endpoints, and it's clear, intuitive and straightforward. All you have to do is set up the REST engine with this line:

restConfiguration().component("jetty")
    .bindingMode(RestBindingMode.json)
    .dataFormatProperty("prettyPrint", "true")
    .port(8080);

Or this in your Spring context XML:

<camelContext>
    ...
    <restConfiguration bindingMode="auto" component="jetty" port="8080"/>
    ...
</camelContext>

The cool part is you can use multiple HTTP/servlet engines with this approach, including a microservices style with embedded Jetty (camel-jetty) or through an existing servlet container (camel-servlet). Take a look at the REST DSL documentation for the complete list of HTTP/servlet components you can use with this DSL.

Lastly, some might ask, what about documenting the REST endpoint? E.g., WADL? Well, luckily, the new REST DSL is integrated out of the box with the awesome Swagger library and REST documenting engine! So you can auto-document your REST endpoints and have the docs/interface/spec generated for you! Take a look at the camel-swagger documentation and the camel-example-servlet-rest-tomcat example that comes with the distribution to see more. Give it a try, and let us know (Camel mailing list, comments, stackoverflow, somehow!!!) how it works for you.

Reference: Easy REST endpoints with Apache Camel 2.14 from our JCG partner Christian Posta at the Christian Posta – Software Blog.

Validate Configuration on Startup

Do you remember that time when you spent a whole day trying to fix a problem, only to realize that you had mistyped a configuration setting? Yes. And it was not just one time. Avoiding that is not trivial, as not only you, but also the frameworks that you use, should take care. But let me outline my suggestion.

Always validate your configuration on startup of your application. This involves three things:

First, check if your configuration values are correct. Test database connection URLs, file paths, numbers and periods of time. If a directory is missing, a database is unreachable, or you have specified a non-numeric value where a number or period of time is expected, you should know that immediately, rather than after the application has been used for a while.

Second, make sure all required parameters are set. If a property is required, fail if it has not been set, and fail with a meaningful exception, rather than an empty NullPointerException (e.g. throw new IllegalArgumentException("database.url is required")).

Third, check that only allowed values are set in the configuration file. If a configuration entry is not recognized, fail immediately and report it. This will save you from spending a whole day trying to find out why setting the "request.timeuot" property didn't have any effect. This is applicable to optional properties that have default values, and comes with the extra step of adding new properties to a predefined list of allowed properties; possibly forgetting to do that leads to an exception, but that is unlikely to waste more than a minute. A simple implementation of the last suggestion would look like this:

Properties properties = loadProperties();
for (Object key : properties.keySet()) {
    if (!VALID_PROPERTIES.contains(key)) {
        throw new IllegalArgumentException("Property " + key
            + " is not recognized as a valid property. Maybe a typo?");
    }
}

Implementing the first one is a bit harder, as it needs some logic: in your generic properties-loading mechanism you don't know whether a property is a database connection URL, a folder or a timeout. So you have to do these checks in the classes that know the purpose of each property. Your database connection handler knows how to work with a database URL, your file storage handler knows what a backup directory is, and so on. This can be combined with the required-property verification. Here, a library like Typesafe Config may come in handy, but it won't solve all problems.

This is not only useful for development, but also for newcomers to the project who try to configure their local server, and most importantly for production, where you can immediately find out if there has been a misconfiguration in this release. Ultimately, the goal is to fail as early as possible if there is any problem with the supplied configuration, rather than spending hours chasing typos, missing values and services that are accidentally not running.

Reference: Validate Configuration on Startup from our JCG partner Bozhidar Bozhanov at the Bozho's tech blog.
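Putting the three checks together, a minimal JDK-only sketch might look like the following. The property names and the ConfigValidator class are illustrative assumptions, not from any particular framework; a real validator would also try to connect to the database and check that directories exist.

```java
import java.net.URI;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class ConfigValidator {

    private static final Set<String> VALID_PROPERTIES = new HashSet<>(
            Arrays.asList("database.url", "request.timeout", "backup.dir"));
    private static final Set<String> REQUIRED_PROPERTIES = new HashSet<>(
            Arrays.asList("database.url"));

    // Fails fast with a meaningful message on unknown keys, missing required
    // keys, or values that do not parse.
    static void validate(Properties properties) {
        // Third suggestion: reject unrecognized keys (catches typos)
        for (Object key : properties.keySet()) {
            if (!VALID_PROPERTIES.contains(key)) {
                throw new IllegalArgumentException(
                        "Property " + key + " is not recognized as a valid property. Maybe a typo?");
            }
        }
        // Second suggestion: required keys must be present
        for (String required : REQUIRED_PROPERTIES) {
            if (properties.getProperty(required) == null) {
                throw new IllegalArgumentException(required + " is required");
            }
        }
        // First suggestion: values must be well-formed
        String timeout = properties.getProperty("request.timeout");
        if (timeout != null) {
            try {
                Integer.parseInt(timeout);
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException("request.timeout must be a number, got: " + timeout);
            }
        }
        URI.create(properties.getProperty("database.url")); // syntax check only
    }

    public static void main(String[] args) {
        Properties bad = new Properties();
        bad.setProperty("database.url", "jdbc:postgresql://localhost:5432/app");
        bad.setProperty("request.timeuot", "5000"); // the typo from the text
        try {
            validate(bad);
        } catch (IllegalArgumentException e) {
            System.out.println("Caught at startup: " + e.getMessage());
        }
    }
}
```

Called once at startup, this turns a day of debugging the mistyped "request.timeuot" into an immediate, self-explanatory failure.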
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.