Spring 3.1 Caching and @Cacheable

Caches have been around in the software world for a long time. They're one of those really useful things that, once you start using them, you wonder how on earth you got along without them, so it seems a little strange that the guys at Spring only got around to adding a caching implementation to Spring core in version 3.1. I'm guessing that previously it wasn't seen as a priority and besides, before the introduction of Java annotations, one of the difficulties of caching was the coupling of caching code with your business code, which could often become pretty messy. However, the guys at Spring have now devised a simple-to-use caching system based around a couple of annotations: @Cacheable and @CacheEvict.

The idea of the @Cacheable annotation is that you use it to mark the method return values that will be stored in the cache. The @Cacheable annotation can be applied either at method or type level. When applied at method level, the annotated method's return value is cached. When applied at type level, the return value of every method is cached. The code below demonstrates how to apply @Cacheable at type level:

```java
@Cacheable(value = "employee")
public class EmployeeDAO {

  public Person findEmployee(String firstName, String surname, int age) {
    return new Person(firstName, surname, age);
  }

  public Person findAnotherEmployee(String firstName, String surname, int age) {
    return new Person(firstName, surname, age);
  }
}
```

The @Cacheable annotation takes three arguments: value, which is mandatory, together with key and condition. The first of these, value, is used to specify the name of the cache (or caches) in which a method's return value is stored.

```java
@Cacheable(value = "employee")
public Person findEmployee(String firstName, String surname, int age) {
  return new Person(firstName, surname, age);
}
```

The code above ensures that the new Person object is stored in the "employee" cache. Any data stored in a cache requires a key for its speedy retrieval.
Spring, by default, creates caching keys from the annotated method's signature, as demonstrated by the code above. You can override this using @Cacheable's second parameter: key. To define a custom key you use a SpEL expression.

```java
@Cacheable(value = "employee", key = "#surname")
public Person findEmployeeBySurname(String firstName, String surname, int age) {
  return new Person(firstName, surname, age);
}
```

In the findEmployeeBySurname(…) code, the '#surname' string is a SpEL expression that means 'go and create a key using the surname argument of the findEmployeeBySurname(…) method'.

The final @Cacheable argument is the optional condition argument. Again, this references a SpEL expression, but this time it specifies a condition that's used to determine whether or not your method's return value is added to the cache.

```java
@Cacheable(value = "employee", condition = "#age < 25")
public Person findEmployeeByAge(String firstName, String surname, int age) {
  return new Person(firstName, surname, age);
}
```

In the code above, I've applied the ludicrous business rule of only caching Person objects if the employee is less than 25 years old. Having quickly demonstrated how to apply some caching, the next thing to do is to take a look at what it all means.

```java
@Test
public void testCache() {
  Person employee1 = instance.findEmployee("John", "Smith", 22);
  Person employee2 = instance.findEmployee("John", "Smith", 22);
  assertEquals(employee1, employee2);
}
```

The above test demonstrates caching at its simplest. On the first call to findEmployee(…) the result isn't yet cached, so my code will be called and Spring will store its return value in the cache. On the second call to findEmployee(…) my code isn't called and Spring returns the cached value; hence the local variable employee1 refers to the same object reference as employee2, which means that the following is true: assertEquals(employee1, employee2); But things aren't always so clear cut.
Remember that in findEmployeeBySurname I've modified the caching key so that the surname argument is used to create the key, and the thing to watch out for when creating your own keying algorithm is to ensure that any key refers to a unique object.

```java
@Test
public void testCacheOnSurnameAsKey() {
  Person employee1 = instance.findEmployeeBySurname("John", "Smith", 22);
  Person employee2 = instance.findEmployeeBySurname("Jack", "Smith", 55);
  assertEquals(employee1, employee2);
}
```

The code above finds two Person instances which clearly refer to different employees; however, because I'm caching on surname only, Spring will return a reference to the object that's created during my first call to findEmployeeBySurname(…). This isn't a problem with Spring, but with my poor cache key definition.

Similar care has to be taken when referring to objects created by methods that have a condition applied to the @Cacheable annotation. In my sample code I've applied the arbitrary condition of only caching Person instances where the employee is under 25 years old.

```java
@Test
public void testCacheWithAgeAsCondition() {
  Person employee1 = instance.findEmployeeByAge("John", "Smith", 22);
  Person employee2 = instance.findEmployeeByAge("John", "Smith", 22);
  assertEquals(employee1, employee2);
}
```

In the above code, the references to employee1 and employee2 are equal because in the second call to findEmployeeByAge(…) Spring returns its cached instance.

```java
@Test
public void testCacheWithAgeAsCondition2() {
  Person employee1 = instance.findEmployeeByAge("John", "Smith", 30);
  Person employee2 = instance.findEmployeeByAge("John", "Smith", 30);
  assertFalse(employee1 == employee2);
}
```

Similarly, in the unit test code above, the references to employee1 and employee2 refer to different objects as, in this case, John Smith is over 25. That just about covers @Cacheable, but what about @CacheEvict and clearing items from the cache?
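Incidentally, the surname-only collision doesn't depend on Spring at all; it's inherent in any cache whose key doesn't uniquely identify a value. This dependency-free sketch (class and method names are my own, not from the article's sample project) reproduces the same behaviour with a plain HashMap standing in for the cache:

```java
import java.util.HashMap;
import java.util.Map;

class SurnameKeyDemo {

    static class Person {
        final String firstName;
        final String surname;
        final int age;

        Person(String firstName, String surname, int age) {
            this.firstName = firstName;
            this.surname = surname;
            this.age = age;
        }
    }

    // A naive cache keyed on surname only, mimicking key = "#surname"
    static final Map<String, Person> cache = new HashMap<>();

    static Person findEmployeeBySurname(String firstName, String surname, int age) {
        // Only creates a new Person when the surname hasn't been seen before
        return cache.computeIfAbsent(surname, s -> new Person(firstName, surname, age));
    }

    public static void main(String[] args) {
        Person employee1 = findEmployeeBySurname("John", "Smith", 22);
        Person employee2 = findEmployeeBySurname("Jack", "Smith", 55); // a different employee!
        // Both calls return the first cached object because the key isn't unique
        System.out.println(employee1 == employee2); // prints "true"
    }
}
```

The fix is the same in both worlds: make the key out of everything that identifies the value, which is exactly what Spring's default signature-based key generation does.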
Also, there’s the question adding caching to your Spring config and choosing a suitable caching implementation. However, more on that later…. Reference: Spring 3.1 Caching and @Cacheable from our JCG partner Roger Hughes at the Captain Debug’s Blog blog....

Help, My Code Isn’t Testable! Do I Need to Fix the Design?

Our code is often untestable because there is no easy way to "sense" the results in a good way, and because the code depends on external data/functionality without making it possible to replace or modify these during a test (it's missing a seam, i.e. a place where the behavior of the code can be changed without modifying the code itself). In such cases the best thing to do is to fix the design to make the code testable instead of trying to write a brittle and slow integration test. Let's see an example of such code and how to fix it.

Example Spaghetti Design

The following code is a REST-like service that fetches a list of files from Amazon's Simple Storage Service (S3) and displays them as a list of links to the contents of the files:

```java
public class S3FilesResource {

    AmazonS3Client amazonS3Client;
    ...

    @Path("files")
    public String listS3Files() {
        StringBuilder html = new StringBuilder("<html><body>");
        List<S3ObjectSummary> files = this.amazonS3Client.listObjects("myBucket").getObjectSummaries();
        for (S3ObjectSummary file : files) {
            String filePath = file.getKey();
            if (!filePath.endsWith("/")) { // exclude directories
                html.append("<a href='content?fileName=").append(filePath).append("'>").append(filePath)
                    .append("<br>");
            }
        }
        return html.append("</body></html>").toString();
    }

    @Path("content")
    public String getContent(@QueryParam("fileName") String fileName) {
        throw new UnsupportedOperationException("Not implemented yet");
    }
}
```

Why is the code difficult to test?

- There is no seam that would enable us to bypass the external dependency on S3, so we cannot influence what data is passed to the method and cannot test it easily with different values. Moreover, we depend on a network connection and on the correct state of the S3 service to be able to run the code.
- It's difficult to sense the result of the method because it mixes the data with its presentation.
It would be much easier to have direct access to the data to verify that directories are excluded and that the expected file names are displayed. Moreover, the core logic is much less likely to change than the HTML presentation, but changing the presentation would break our tests even though the logic hadn't changed.

What can we do to improve it? We first test the code as-is to be sure that our refactoring doesn't break anything (the test will be brittle and ugly but it is just temporary), refactor it to break the external dependency and split the data and presentation, and finally re-write the tests. We start by writing a simple test:

```java
public class S3FilesResourceTest {

    @Test
    public void listFilesButNotDirectoriesAsHtml() throws Exception {
        S3FilesResource resource = new S3FilesResource(/* pass AWS credentials ... */);

        String html = resource.listS3Files();

        assertThat(html)
            .contains("<a href='content?fileName=dir/file1.txt'>dir/file1.txt")
            .contains("<a href='content?fileName=dir/another.txt'>dir/another.txt")
            .doesNotContain("fileName=dir/'"); // directories should be excluded
        assertThat(html.split(quote("<a href"))).hasSize(2 + 1); // two links only
    }
}
```

Refactoring the Design

This is the refactored design, where I have decoupled the code from S3 by introducing a Facade/Adapter and split the data processing and rendering:

```java
public interface S3Facade {
    List<S3File> listObjects(String bucketName);
}

public class S3FacadeImpl implements S3Facade {

    AmazonS3Client amazonS3Client;

    @Override
    public List<S3File> listObjects(String bucketName) {
        List<S3File> result = new ArrayList<S3File>();
        List<S3ObjectSummary> files = this.amazonS3Client.listObjects(bucketName).getObjectSummaries();
        for (S3ObjectSummary file : files) {
            // later we can use something else for the display name
            result.add(new S3File(file.getKey(), file.getKey()));
        }
        return result;
    }
}

public class S3File {

    public final String displayName;
    public final String path;

    public S3File(String displayName, String path) {
        this.displayName = displayName;
        this.path = path;
    }
}

public class S3FilesResource {

    S3Facade amazonS3Client = new S3FacadeImpl();
    ...

    @Path("files")
    public String listS3Files() {
        StringBuilder html = new StringBuilder("<html><body>");
        List<S3File> files = fetchS3Files();
        for (S3File file : files) {
            html.append("<a href='content?fileName=").append(file.path).append("'>").append(file.displayName)
                .append("<br>");
        }
        return html.append("</body></html>").toString();
    }

    List<S3File> fetchS3Files() {
        List<S3File> files = this.amazonS3Client.listObjects("myBucket");
        List<S3File> result = new ArrayList<S3File>(files.size());
        for (S3File file : files) {
            if (!file.path.endsWith("/")) {
                result.add(file);
            }
        }
        return result;
    }

    @Path("content")
    public String getContent(@QueryParam("fileName") String fileName) {
        throw new UnsupportedOperationException("Not implemented yet");
    }
}
```

In practice I'd consider using the built-in conversion capabilities of Jersey (with a custom MessageBodyWriter for HTML) and returning List<S3File> from listS3Files.
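The essence of the refactoring — depending on a small interface so that a fake can stand in for S3, and keeping the filtering logic separate from the HTML rendering — can be boiled down to a dependency-free sketch. All names below are mine, not from the article's project; the point is only to show the seam at work:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class SeamDemo {

    // The seam: production code depends on this interface, not on the real S3 client.
    interface FileStore {
        List<String> listPaths(String bucket);
    }

    // Pure logic, now easy to "sense": filtering is separated from HTML rendering.
    static List<String> excludeDirectories(FileStore store, String bucket) {
        List<String> result = new ArrayList<>();
        for (String path : store.listPaths(bucket)) {
            if (!path.endsWith("/")) { // exclude directories
                result.add(path);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // A fake implementation stands in for the external dependency during tests.
        FileStore fake = bucket -> Arrays.asList("mydir/", "file.xx");
        System.out.println(excludeDirectories(fake, "myBucket")); // prints "[file.xx]"
    }
}
```

Because FileStore has a single method, the fake is just a lambda; with the real system the fake would be a small class like the FakeS3Facade in the test below.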
This is what the test looks like now:

```java
public class S3FilesResourceTest {

    private static class FakeS3Facade implements S3Facade {
        List<S3File> fileList;

        public List<S3File> listObjects(String bucketName) {
            return fileList;
        }
    }

    private S3FilesResource resource;
    private FakeS3Facade fakeS3;

    @Before
    public void setUp() throws Exception {
        fakeS3 = new FakeS3Facade();
        resource = new S3FilesResource();
        resource.amazonS3Client = fakeS3;
    }

    @Test
    public void excludeDirectories() throws Exception {
        S3File s3File = new S3File("file", "file.xx");
        fakeS3.fileList = asList(new S3File("dir", "mydir/"), s3File);

        assertThat(resource.fetchS3Files())
            .hasSize(1)
            .contains(s3File);
    }

    /** Simplest possible test of listS3Files */
    @Test
    public void renderToHtml() throws Exception {
        fakeS3.fileList = asList(new S3File("file", "file.xx"));

        assertThat(resource.listS3Files())
            .contains("file.xx");
    }
}
```

Next I'd implement an integration test for the REST service, but still using the FakeS3Facade, to verify that the service works and is reachable at the expected URL and that the link to a file's content works as well. I would also write an integration test for the real S3 client (through S3FilesResource but without running it on a server) that would be executed only on demand, to verify that our S3 credentials are correct and that we can reach S3. (I don't want to execute it regularly, as depending on an external service is slow and brittle.) Disclaimer: The service above isn't a good example of proper REST usage, and I have taken a couple of shortcuts that do not represent good code, for the sake of brevity. Happy coding and don't forget to share! Reference: Help, My Code Isn't Testable! Do I Need to Fix the Design? from our JCG partner Jakub Holy at The Holy Java blog....

JCG Flashback – 2011 – W36

Hello guys, This is the first post in the JCG Flashback series. These posts will take you back in time and list the hot articles on Java Code Geeks from one year ago. So, let's see what was popular back then:

8. The Truly Educated Never Graduate
An interesting article on the concepts of teaching and understanding and how they apply to software development. Appropriate advice is also given (reading books, getting involved with open source projects, etc.)

7. What is Dependency Inversion? Is it IoC?
This article attempts to explain the commonly confused concepts of Dependency Inversion and Inversion of Control, and how they relate to Dependency Injection.

6. Yes, Virginia, Scala is hard
David Pollak, creator of the Lift framework, Scala champion and JCG contributor, claims that Scala is indeed hard, but that it all boils down to the nature of the developers who are going to use it.

5. Eclipse Shortcuts for Increased Productivity
Eclipse is almost the de-facto tool when it comes to Java development. So, here is a collection of some of the most useful Eclipse shortcuts that will help you increase your productivity when working with it. Don't forget to leave yours!

4. Quick tips for improving Java apps performance
Application performance should always be a concern for Java developers. This article mentions several tips that will help you boost your application's performance.

3. Problems with ORMs
Undoubtedly, Object Relational Mapping (ORM) tools like Hibernate have helped developers make huge productivity gains in dealing with relational databases in the past several years. However, the well-documented problems of the object-relational impedance mismatch inevitably cause headaches for developers. Check out some issues that might arise.

2. Google Guava Libraries Essentials
Google Guava is an excellent library, similar to Apache Commons, that helps Java developers write more readable, robust and maintainable code. This article provides a nice introduction to it, discussing Throwables, Iterables, Multimaps, Preconditions and much more!

1. The Ten Minute Build
The "build" is an essential part of every software development process. Given a development environment, any developer should be able to get hold of the source code, click a button or type a simple command, and run a build. The build should compile and perform its unit tests within about ten minutes. Find out why!

That's all guys. Stay tuned for more, here at Java Code Geeks. And don't forget to share! Cheers, Ilias ...

Implementing Entity Services using NoSQL – Part 1: Outline

Over the past few weeks I've been doing some R&D into the advantages of using NoSQL databases to implement entity services (also known as data services). 'Entity service' is a classification of service coined in the Service Technology series of books by Thomas Erl. It's used to describe services that are highly agnostic and reusable because they deal primarily with the persistence of information modelled as business data 'entities'. The ultimate benefit of having a thin layer of these entity services is the ease with which you can re-use them to support more complex service compositions. This approach is further described in the Entity Abstraction SOA pattern. Entity service layers are therefore a popular architectural choice in SOA, and implementing them has meant big business for vendors like Oracle and IBM, both of whom offer software to support this very task. There is even a separate standard for technologies in this area, called Service Data Objects (or SDO for short). This is all well and good, but these applications come with dedicated servers and specialised IDEs, and it's all a bit 'heavyweight'. These specialised solutions can be terribly expensive if all you really want are some simple CRUD-F operations (Create, Read, Update, Delete, Find) on a service that manages the persistence of a simple canonical data type like a Product or a Customer. So the usual, basic implementation method would be to break out the Java and use a normal relational database with something like JPA (the Java Persistence API) to help you with the object/relational mapping and persistence. This is a good choice and it can simplify the code a great deal, but there are still challenges. In web services where XML is being used as the payload, there is still the matter of converting between JAXB-generated Java objects and the Java objects used to persist data via JPA.
You can use something like HyperJaxB to annotate JAXB objects with JPA annotations, making the resulting data objects dual purpose, but you still have some issues with versioning and get none of the scalability advantages of NoSQL. Besides, I've used this method before in an earlier blog, so where's the fun in doing it again?

Using NoSQL. A relatively new and enticing alternative is to use a NoSQL database for persistent storage. NoSQL databases have proved incredibly popular over the last few years, due mainly to their ability to achieve huge scalability and strong resilience. Lots of very high profile and high throughput websites use NoSQL datastores to manage and persist their data, including Google, Twitter, Foursquare, Facebook, and eBay. The term NoSQL is used to describe "a class of database management system identified by its non-adherence to the widely used relational database management system (RDBMS) model" (Wikipedia). NoSQL datastores do not follow the conventional wisdom of a relational, table-based approach, opting instead for a schema-less data structure that's often 'document centric' and capable of supporting very large volumes of data in highly distributed environments.

Choosing a NoSQL Database. There are lots of different NoSQL implementations, so I won't go into detail here other than to say that my requirements were simple. I wanted something…

- available via 3rd party PaaS providers like Amazon and Jelastic
- that uses a document store approach (as opposed to key/value or graph)
- open source and freely available
- with a good Java API
- with good developer documentation
- that can be installed locally
- which I could administer myself (the easier the better, since I don't want to be a DBA)

In the end my database choices came down to the two market leaders: MongoDB and CouchDB. Mongo has a great Java API, it's popular with the Java community and it has good developer documentation.
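To make the 'document centric' idea concrete, this is roughly what a record looks like in a document store such as CouchDB. The field names and values here are invented purely for illustration; only the _id and _rev bookkeeping fields are CouchDB's own (they come up again in Part 3 of this series):

```json
{
  "_id": "ba9087465ef8120c",
  "_rev": "1-bca2341c",
  "name": "Widget",
  "manufacturer": "Acme Ltd",
  "category": "Gadgets",
  "tags": ["small", "blue"]
}
```

There is no table, no schema, and nothing stopping the next document in the same database from having a completely different shape.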
However, its admin features are rather unfriendly, with just a command line to keep you company. CouchDB, on the other hand, is much friendlier thanks to its 'Futon' UI. CouchDB has most of the technical benefits of Mongo (certainly in this R&D setting) but it lacks an out-of-the-box Java API (REST is the default interface). Luckily, the Java community has stepped in with a number of native Java drivers for CouchDB, the best for me being the Ektorp library, which is very simple to use but also very effective.

Summary. My goals for this R&D exercise are to:

- implement a viable entity service using a contract-first approach (a web service bound to SOAP, with a fully WS-I compliant contract and predefined data structures);
- discover whether using a NoSQL database rather than JPA for data persistence and retrieval can increase developer productivity and reduce the overall effort of entity service implementation;
- use the following SOA patterns: Service Facade (separates business logic), Contract/Schema Centralisation (canonical contract hosted via a simple service repository), Decoupled Contract, Concurrent Contract (SOAP & REST (maybe)), Message Metadata (headers) and Service Agent (for validation).

Essentially I want to build the entity service using as little Java code as possible, but at the same time preserve the contract-first approach. A contract-first approach is vital for good SOA development because it allows for a looser coupling between the consumer and the service and doesn't corrupt the relationship with lots of technology-specific dependencies like database table definitions and data types. The main technologies I'll be using for this development will be Java (JEE), JAX-WS, JAXB, CouchDB & Ektorp, and Glassfish v3. As usual I'll also be using Maven and Jenkins. All are production-ready applications and frameworks, but because they're open source the total cost so far is £0.00.
In the next article in this series I'll be telling you how I got started on the development of the service, beginning with the web service contract or 'WSDL'. Update: It seems I'm on trend for once, with a number of interesting NoSQL articles coming to light in the last few days… InfoQ asks 'What is CouchDB?', which is an article that I could have done with about a month ago. It's a fairly comprehensive 'getting started' guide and contains more detail than I'll go into regarding coding with CouchDB. Therefore, I'd advise anyone looking for a more step-by-step Java coding guide to check out the article straight away. The InfoQ article also references two other blog posts that could be of interest to architects. The first is a comparison of a number of different NoSQL databases (including Cassandra, Tom!), and the second is a handy NoSQL selection guide. Continue to Part 2. Reference: Implementing Entity Services using NoSQL – Part 1: Outline from our JCG partner Ben Wilcock at the SOA, BPM, Agile & Java blog....

Implementing Entity Services using NoSQL – Part 2: Contract-first

It's time to begin the coding of my SOA entity service with NoSQL project, and as promised I'm starting with the web service's contract. Take a look at Part 1 of this series. This technique of starting with a web service contract definition is at the heart of the 'contract-first' approach to service-oriented architecture implementation and has numerous technical benefits, including…

- Positive logic-to-contract coupling (because the implementation code follows the contract).
- Positive consumer-to-contract coupling (because the consumer couples to the contract).
- Avoiding contract-to-implementation coupling (where the implementation influences the contract).
- Avoiding contract-to-technology coupling (where the consumer becomes dependent on the implementation technology).

I don't want to go on about contract-first SOA, but it really is important. In fact it's the only method allowed by some web services frameworks, such as the well-respected Spring Web Services. SpringSource's reasoning for only supporting the contract-first method is explained in great detail here.

The business case for my service. I've decided to implement a web service for managing 'Product' entities, which I'm going to refer to as the 'Product Entity Service'. Product information management (or PIM for short) is a very common business activity and therefore my entity service should have lots of re-use potential. I personally know this is true because of my previous retail and defence logistics experience, but if I wanted to prove it, I'd normally analyse the business processes and look for all the places where product information is of benefit. If I did that, I'd probably find that the following business processes would be potential consumers of the Product Entity Service (in a traditional retail setting, for example)…

- Buying, product purchasing and on-boarding
- Sales order capture
- Sales order fulfilment
- Customer service
- Catalogue production
- Business-2-Business enablement
- etc.

My Product Entity Service's Operations. Because I'm creating a service that is purely tasked with managing Product entities, I'm going to keep the operations quite rudimentary. My service will offer consumers create, read, update, delete and find operations. The service will be a SOAP-based web service with a WS-I interoperability certificate to help ensure cross-platform compatibility with a wide range of consumers. I may, at a later date, also offer a REST version of the same service (often referred to as the Concurrent Contracts pattern). My service's consumers (possibly other services or processes) can then do with these Product entities whatever they like, for example by offering more business-aligned features to support Product workflows such as 'approve' or 'discontinue'. My service contract will be described using the Web Services Description Language (WSDL). I tend to hand-craft these and then check them against the WS-I Basic Profile to make sure I've created an interoperable contract. WSDLs are not particularly friendly files to work with, but any good SOA architect should be able to write one, in my opinion.

The Product Entity's data model. A Product data entity should be capable of describing a real-life Product that is of value to the business. Every business has its own ideas of what exactly this data item should contain, so in order to keep it simple I'll just define a few basic fields such as id, name, description, manufacturer, category, and size. I'll also add some housekeeping fields such as version, date created/updated/deleted, etc. It's best to think of this data as a 'document', as both SOA and NoSQL definitely benefit from a 'document-centric' view of the world. The Product document will be described using XML Schema (i.e. as an XSD).
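As a rough illustration only (the namespace, element names and types below are my own assumptions, not the project's actual schema), a minimal Product XSD covering the fields mentioned above might look something like this:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.org/product"
           xmlns:tns="http://example.org/product"
           elementFormDefault="qualified">

  <xs:element name="Product" type="tns:ProductType"/>

  <xs:complexType name="ProductType">
    <xs:sequence>
      <!-- Business fields -->
      <xs:element name="Id" type="xs:string"/>
      <xs:element name="Name" type="xs:string"/>
      <xs:element name="Description" type="xs:string" minOccurs="0"/>
      <xs:element name="Manufacturer" type="xs:string" minOccurs="0"/>
      <xs:element name="Category" type="xs:string" minOccurs="0"/>
      <xs:element name="Size" type="xs:string" minOccurs="0"/>
      <!-- Housekeeping fields -->
      <xs:element name="Revision" type="xs:string" minOccurs="0"/>
      <xs:element name="DateCreated" type="xs:dateTime" minOccurs="0"/>
    </xs:sequence>
  </xs:complexType>
</xs:schema>
```

In a real canonical data model the type definitions would live in their own modular schema files so they can be centralised and shared, as described next.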
I also tend to do these by hand, and I use lots of modularity in the structure to help support the Schema Centralisation pattern, which fosters reuse and interoperability amongst the data models used in SOA. This technique is often referred to as creating a 'canonical data model' that describes all the business entities within one central model.

Creating the Java Service. Now that the service contract is complete, I'm ready to create my Maven project and begin the implementation of the service. To do this I use the latest NetBeans IDE because it has great wizards for starting Maven projects and importing WSDLs for implementation. Maven helps with code compilation, packaging, deployment and testing, as well as managing dependencies and performing code generation for my services. Both of these tools are free. The WSDL importing process creates a Java interface that represents and reflects the service's contract. It also creates a set of Java objects that represent the XML structures used by the service as messages. These objects are annotated by the import routine using JAXB annotations. JAXB offers 'marshalling and unmarshalling' of XML text into Java objects. This happens invisibly behind the scenes as part of the JAX-WS web services framework. All I have to do now is create an implementation of the methods on the service. In the first instance I simply add some basic boilerplate code just to get something working. Once that's done, I deploy the service to a server and do some basic integration testing to check that it's all hanging together and that the service endpoint is being exposed as intended. The server I use for this is Glassfish 3.1 from Oracle, which can be integrated with NetBeans and is also free.

Initial Service Integration Testing. I use SOAP UI for my service testing because it's free and very capable.
It can be used as a testing tool for almost any SOAP or REST service, and using a test harness like this will save me from having to build a working service client, which can be quite time consuming. I should mention that service development can be done in a completely test-driven way with SOAP UI, but at the very start it's easier to have a basic service deployed (even if it doesn't work) just so that you can grab its WSDL from its endpoint using the "http://service?wsdl" convention and check that everything is deployed and integrated correctly. If I didn't do this, I could get started with just the WSDL, but the endpoint location wouldn't work, so tests would fail not because of bad logic but because of a general lack of service availability. I'm now able to create basic tests which pass Product messages backwards and forwards successfully between the service implementation hosted locally on Glassfish and the SOAP UI test client, even if those messages don't do anything and the Products they contain don't get persisted yet. The next stage is to begin the CouchDB integration so that the Product messages can be persisted and retrieved from the NoSQL database. Then, between the service and the CouchDB DAO, I'll add any business logic that I need to make it all behave as it should. Subscribe now to get an alert when I start the CouchDB DAO. If you missed Part 1 of this diary series you can catch up here.

Costs so far:
- Software – £0.
- Time – 2 to 8 hours (depending on experience).

Continue to Part 3. Reference: Implementing Entity Services using NoSQL – Part 2: Contract-first from our JCG partner Ben Wilcock at the SOA, BPM, Agile & Java blog....

Implementing Entity Services using NoSQL – Part 3: CouchDB

Following on from Part 2 of this series, where I created and deployed the Product Entity Service using the SOA 'contract-first' technique, I'm now going to work on the NoSQL database aspects of the service implementation. As I mentioned in Part 1, I've selected CouchDB as my NoSQL database and the Ektorp library as the database driver.

CouchDB – The Relaxed Database. CouchDB is essentially going to act as my document store. Natively it stores its documents in JSON format and it requires no prior knowledge of the document's structure. You can pretty much store anything, and even mix documents of different types in the same database if you want to. Because there are no prior setup steps such as DDL scripts or XSD schemas, getting started can be very quick. In fact if you can use curl you don't need anything else: just issue HTTP commands to CouchDB and your stuff gets stored. As it says on the tin, CouchDB is fairly relaxed. For a fuller explanation of the basics of CouchDB, see CouchDB: The Definitive Guide.

Getting from Java to JSON format. Of course, when we're in Java, JSON as a String-based representation isn't that convenient for us to use. This is where Ektorp steps in, using the Jackson Java-to-JSON library to smooth things over. Jackson facilitates an out-of-the-box POJO-to-JSON conversion process behind the scenes within Ektorp. Jackson is an important feature for this project because I want to achieve a clean and hassle-free development flow from XML to Java objects to database documents and back again. Jackson is a key component in making this work, as we'll see later.

CouchDB's Document Storage Pre-Requisites. Although CouchDB doesn't need a schema, it does need two basic pieces of data for each document: a unique id and a document revision number.
These data items help with managing the documents and implementing the idempotency rules that help maintain document integrity in multi-user environments. CouchDB expects these fields to be named '_id' and '_rev'. _id can be assigned by the user or by the database during create operations; _rev is assigned by the database and increments each time a document's record is updated. Now, obviously I didn't want database-specific fields to go into my XML documents, so my definition of a Product has a field called 'Id' and a field called 'Revision'. Unless I do something, this document would not meet the necessary criteria for storage in CouchDB, and strange things would start to happen, like extra _id and _rev fields being added to the database records at runtime that didn't match the Id and Revision of the XML document I asked CouchDB to store. I don't want to change my XML schema for a Product in order to add these database-specific fields, so what do I do? Cleverly, Jackson can be configured to rectify this problem without touching the Java/JAXB definition of the 'Product' object that is derived from the Product XML schema. It can be told to remap the Product's 'Id' and 'Revision' fields to the CouchDB '_id' and '_rev' fields at runtime. This maintains a degree of loose coupling but allows me to use the same JAXB-generated Java objects throughout my code, saving a lot of time and effort.

Accessing the Database. CouchDB is not accessed via JDBC and it doesn't have a traditional JDBC driver. Instead it uses a REST interface (HTTP-based GET, PUT, POST, DELETE, etc.) and communicates using JSON-formatted content. Ektorp provides some helper classes to help you work with the CouchDB database. There is a Connector class that can be instantiated to establish a workable connection to the database, and a customisable RepositorySupport class that offers type-safe convenience methods for interacting with the database and its records.

Creating a DAO.
Once correctly customised by extension and class-typing, the RepositorySupport class can be used for all your basic Data Access Object requirements such as Get, Create, Update and Remove operations. It can also generate CouchDB views automatically, based purely on the names of the methods you add to it (as long as they follow certain rules). This makes it easy to add ‘find’ methods to your DAO such as ‘findByManufacturerName’ or ‘findByCategoryId’. Finally, if you need more sophisticated views or map/reduce queries, it can help with those too.

Pulling it all together.

By configuring Jackson and by using Ektorp to create a DAO, it’s now just a case of writing some integration tests to make sure it all hangs together. The tests I used initially are quite simple; I asked my DAO to…

- create a fresh JAXB Product object and assign it an ID
- save it to my CouchDB ‘Product’ database
- read the Product object from the ‘Product’ database using its ID
- modify the Product object & update it
- retrieve it once more, checking the revision was increased
- finally, delete the Product object & check that attempts to read it now fail

If the DAO code can do all these things, then I have the basic behaviours that I need for my Product Entity Service implementation. However, because it’s an integration test, I need a working CouchDB service to be available during the testing cycle. Maven can help here: the Maven Failsafe plugin binds these kinds of tests to the integration-test phases of the Maven build lifecycle, which prevents mixing integration tests in with normal unit tests, which usually have fewer dependencies and runtime requirements. Getting CouchDB working locally is pretty simple, but it’s also possible to use a free cloud-hosted CouchDB development instance if you can’t be bothered with the install and set-up process. I’ve tried both, and they work equally well.

What’s next?
Now that my CouchDB DAO is complete, it’s time to move into the final stages of the project, where I’ll link up the DAO behaviours to the Web Service capabilities I created earlier. To do this I’ll be using Java Enterprise Edition 6. If you’d like an email notification when the next instalment is published, follow the link on the right to subscribe. Continue to Part 4. Reference: Implementing Entity Services using NoSQL – Part 3: CouchDB from our JCG partner Ben Wilcock at the SOA, BPM, Agile & Java blog....

Implementing Entity Services using NoSQL – Part 4: Java EE

Now that I have prepared a skeleton contract-first web-service and created a data access layer using Ektorp and CouchDB, it’s time to wire them together into a fully working entity service. To do this I’m going to use Java EE and Glassfish 3.1. It’s worth noting at this point that it’s not strictly necessary to use Java EE at all for this kind of R&D work. I don’t need the security or the transaction features that are provided by a JEE server like Glassfish, and I could probably use something a little lighter like Tomcat or Jetty. However, I do like the convenience and the features of JEE, and many applications that begin life on a standard Java application server like Tomcat end up either grafting JEE features into Tomcat (like JAX-WS) or migrating to a full JEE server like Glassfish. Tomcat’s users often require JEE features – this was the main reasoning behind starting the TomEE project at Apache, which adds JEE Web Profile features to the vanilla Tomcat stack so it can handle things like EJBs and JAX-WS.

Separating the Business Logic into Beans.

My application already has two distinct layers. The first (from the perspective of the consumer) is the web service layer, which is tasked with providing all the web service operations and other service-specific tasks like handling the custom SOAP headers and messaging metadata that help with problems like idempotency. The second is the database access layer, which is in charge of communicating with the database and dealing with persistence and retrieval of my Product entities. The third and final layer, which I’m now adding, is the middle tier that bridges the previous two: the business logic layer. This layer will be responsible for implementing the rules and decisions of the Product Entity Service, such as ensuring that any semantically important information is present, added or validated before a persistence operation is carried out.
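Jumping ahead slightly to the product lifecycle introduced next, a minimal sketch of this kind of business-layer rule might look like the following. The class and method names are my own illustrative assumptions, not the project’s actual code:

```java
// Illustrative sketch of a linear product lifecycle where each state
// may only advance to the state that directly follows it.
public class LifecycleSketch {

    public enum ProductState {
        PROVISIONAL, STOCKABLE, SALEABLE, DISCONTINUED, REMOVED;

        // A state may only move forward to its direct successor.
        public boolean canTransitionTo(ProductState next) {
            return next.ordinal() == this.ordinal() + 1;
        }
    }

    // Example createProduct() rule: new products must start life as
    // PROVISIONAL, so any other requested state is overridden.
    public static ProductState normaliseForCreate(ProductState requested) {
        return ProductState.PROVISIONAL;
    }

    public static void main(String[] args) {
        System.out.println(ProductState.PROVISIONAL.canTransitionTo(ProductState.STOCKABLE)); // true
        System.out.println(ProductState.SALEABLE.canTransitionTo(ProductState.PROVISIONAL)); // false
    }
}
```

A real service would likely externalise these rules (for example into a rules engine, as discussed below), but even a hard-coded version like this keeps the lifecycle policy in one place, separate from message and persistence handling.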
One example of this semantically important information is the Product’s ‘state’. In my model, I allow Products to transition through a number of states in order to maintain a strict product lifecycle. The stages are as follows and are roughly linear in nature (each state follows the last)…

- PROVISIONAL
- STOCKABLE
- SALEABLE
- DISCONTINUED
- REMOVED

In my business logic layer, my Product Manager bean ensures that the state of each entity makes sense for each service operation. For example, if you call the createProduct() operation with a Product, the Product given must have a state of ‘Provisional’. If it doesn’t, my logic will change it so that it does. These kinds of rules are unique to each business, so it’s not a one-size-fits-all solution. In the real world, a rules engine or something similar would be ideal, as it would allow some additional flexibility in the definition and policing of these rules. However, for my basic R&D needs, this hard-coded solution is fine and adequately demonstrates why it’s good to provide a business logic layer – so that you can separate the business logic ‘concerns’ from the message and database handling logic.

One data model to rule them all.

There is one thing that all these layers have in common, and that’s the data (a.k.a. Entity) objects that they manage. Product entities are represented by XML, described by XSDs and referenced by the WSDL. These definitions are turned into Java objects by JAX-WS, and these same Java objects are used natively throughout the code, thereby avoiding any data model transformation. This technique, known as ‘transformation avoidance’, is one of the major benefits of this particular style of NoSQL-based entity service development. Transformation Avoidance is a best practice which improves a service’s reusability and composability – soapatterns.org.
In essence, with this service development I’ve managed to use the same Java data objects in every layer and yet maintained a true contract-first development approach. That’s really good news for developers. I’ve also avoided the need for the data model transformation layers that often become necessary when you have incompatible data models between messages and databases (bad news for ESB salespeople). Using NoSQL has also allowed me to totally avoid any SQL DDL for tables and data-relations, and I don’t need any complex object mappings such as those required by traditional ORM. I can even morph my data model over time without stuff breaking quite so regularly (great for service versioning).

Notes on keeping JEE simple.

In order to cut down on the deployment and configuration hassle associated with JEE, I’ve used the new deployment and packaging mechanisms that allow you to co-locate EJBs and web applications within the same application WAR file. This makes using JEE features a breeze and greatly simplifies the Maven build, as I only use one project and zero deployment descriptors (even web.xml is missing!). JEE with EJB 3.1 couldn’t be simpler, as it’s now based on the use of some very simple Java annotations. For example, specifying a stateless EJB can be as straightforward as adding the @Stateless annotation to a class. In doing so, you’re telling the application server to deploy the class into a pool in order to make it highly available and wrap calls to its methods in a transaction. As a stateless bean, it will have no concept of a session and will not maintain any state between calls (ideal for stateless services). @Stateless public class ProductsManager In order to use this bean from another part of your application (from the @WebService class, for example), you simply add a reference variable of the correct class type and annotate that variable with the @EJB annotation.
This tells the application server to ‘inject’ an instance of the correct type from the pre-populated bean pool at runtime, using a mechanism called dependency injection. @WebService(...) public class ProductsEntityService implements Products { @EJB private ProductsManager bean; ...

Other useful JEE features.

Message-driven beans are great for implementing event-driven messaging where persistent and asynchronous communication is required between message producers and consumers. However, I probably won’t use them for this particular R&D effort, as for my requirements the use case is too weak to justify the effort (who would I notify about new products?). Besides, the @MessageDriven bean annotation has made this capability very easy to use, and it’s a well-established and highly reliable feature that’s based on JMS. EJB 3.1 also allows for a number of new and useful bean types. Singleton beans are singleton classes that are managed by the server and specified using the @Singleton annotation (handy if things like clustered singletons are a worry for you). And the @Schedule annotation can be used to generate regular events based on a schedule (such as every Friday at noon), which can be handy for reporting, etc.

Summary.

So I now have a fully working n-tier web-service that is capable of persisting, managing and retrieving Product entities using a NoSQL database. Next time I’ll cover the implementation of some more SOA patterns using these technologies. Subscribe to my blog to get a notification when this happens. Continue to Part 5. Reference: Implementing Entity Services using NoSQL – Part 4: Java EE from our JCG partner Ben Wilcock at the SOA, BPM, Agile & Java blog....

Implementing Entity Services using NoSQL – Part 5: Improving autonomy using the Cloud

In the previous posts I discussed how I went about building my SOA ‘Entity’ service for Products using a combination of Java Web Services, Java EE and the CouchDB NoSQL database. In this final post in the series I’m going to leverage some of the technical assets that I’ve created and implement some new user stories using some popular SOA patterns. Take a look at Part 1 and Part 2 as well.

My current Product Entity Service implementation is very business-process agnostic, and therefore highly re-usable in any scenario where consumers want to discover or store Product information. However, as it stands the Product Entity Service is designed to be used within a trusted environment. This means that there are no restrictions on access to operations like Create, Update or Delete. This is fine within a strictly controlled corporate sandbox, but what if I want to share some of my service operations or Product information with untrusted users? Let’s imagine that in addition to our in-house use of the Product Entity Service we also wanted to cater for the following agile ‘user story’…

So that: My published product information is accurate and constantly available to off-site customers.
As a: Sales Director & IT Manager.
We want to: Offer a highly-available ‘read-only’ copy of my product information to off-site users and systems.

This scenario is not uncommon in business, and I’ve implemented user stories like it in the past with some of my clients. But what’s fantastic about the technologies we’re using to implement our Entity service (Java/JAX-WS and CouchDB NoSQL) is that they make developing this scenario very easy. First of all, the architectural design. In order to implement this user story I’m going to call upon two more SOA design patterns for the service architecture – Service Data Replication and Concurrent Contracts.
In post two we already talked about the ‘contract-first’ approach, so I won’t go into any more detail other than to say that it still applies here. The contract is still a Standardised and Decoupled Contract. Service Data Replication is a pattern that helps you achieve high levels of service autonomy and availability by creating whole copies of the data required by the service elsewhere on the infrastructure. This copy can then be used instead of the original in order to balance the load on the infrastructure. The Concurrent Contracts pattern is used when ‘[an existing] service’s contract may not be suitable for or applicable to all potential service consumers’. For example, when security, protocol or accessibility concerns require that something similar (but not the same) be made available to a specific subset of users (like off-site consumers, or mobile devices with restricted processing power or bandwidth). In order to implement our new user story, we’ll use both of these patterns together to provide a ‘read-only’ version of the Product Entity Service. However, by providing a second ‘read-only’ version of the Product Entity Service, you could also say that we are partially implementing the Redundant Implementation SOA pattern, because we’re offering a second, redundant option for some of the service’s key operations. The resulting architecture looks something like this (see the architecture diagram in the original post)…

The service contract.

The original Product Entity Service offered five operations – Get, Find, Create, Update and Delete – but offering all these capabilities to outsiders is not necessary and would probably be quite problematic (in terms of security, integrity, etc.). We certainly don’t want any external users creating or updating our product information without our permission. Therefore, in our web service contract for the new ‘Read-only Product Entity Service‘ I’ve removed the Create, Update and Delete operations completely and only provided Get and Find.
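As an illustration, the trimmed-down contract might surface in Java (once imported via JAX-WS) as something like the interface below. The names are my own assumptions, not the actual generated artefacts:

```java
import java.util.List;
import javax.jws.WebService;

// Hypothetical JAX-WS endpoint interface for the read-only contract:
// only the read operations survive; Create, Update and Delete are gone.
@WebService(name = "ReadOnlyProducts")
public interface ReadOnlyProducts {
    Product getProduct(String id);
    List<Product> findProducts(String searchTerm);
}
```

The point of the Concurrent Contracts pattern here is that this second, narrower contract can live alongside the full one, backed by the very same datatypes and implementation code.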
All the datatypes remain the same (the same Product.xsd describes the product entity, etc.). Keeping the datatypes the same is important because I’m deliberately applying the Canonical Schema and Schema Centralisation patterns and utilising the Standardised Service Contract principle in order to avoid transformation.

The Java code.

With this new read-only service I’m still working contract-first, so I’ve created a new Maven project whose first task during a build is to import the Read-only Product Entity Service’s WSDL contract and create from it a set of JAX-WS service interfaces and data objects. From this point on, I can reuse some of the code that I’ve developed already for the ‘full’ Product Entity Service. By refactoring my previous codebase into modules, I can even get Maven to treat the original code as a dependency for this new service. In essence, the bits that I’m interested in are the EJBs and DAOs I created for the business and persistence logic of the Get and Find operations. By reusing the existing code and by creating a Glassfish server on the Amazon cloud, I can stand up the new service in record time and be halfway to completing the user story. All I need now is some replicated Product data to work with…

Starting the data replication.

CouchDB offers a fantastic, enterprise-class replication system out of the box and for free. This makes implementing the Service Data Replication SOA pattern very easy. CouchDB’s built-in replicator can replicate whole databases of documents between any two CouchDB instances that are available on the network or over the web. In this case I’ve created a CouchDB database at a hosted provider called IrisCouch. They can provide me with secured CouchDB instances of various sizes at the drop of a hat and will look after all the necessary infrastructure and backup requirements. For small users they even do this for free.
In order to set up a replication, I just need to use the curl command-line tool to issue specific instructions via HTTP to my local CouchDB instance. These instructions tell my local CouchDB service to replicate my Product data to my remote CouchDB database on IrisCouch over the web. The replication instruction is a JSON document and looks something like this…

{"source":"products", "target":"https://username:password@instance.iriscouch.com:6984/products", "create_target":true, "continuous":true}

In essence this JSON instruction says “replicate the local Products database documents to the remote IrisCouch instance, forever“. Immediately on issuing the command, CouchDB sets to work and starts replicating my local data to my remote database instance. This includes updates and deletes, and any ‘design documents’ stored in the Products database. The replica of Products is then immediately available to my Read-only Product Entity Service, which is hosted in the Amazon EC2 cloud. From this point onwards, service consumers using the Get or Find operations on this service will see a replica of the data that is used in-house. The information will be kept up to date by the replication.

Finally, the user acceptance test.

So how well did we do against our new user story’s requirements? By hosting the read-only version of the Product Entity Service on the Amazon cloud, we’ve created an off-site web-service that is highly available. The data it provides is an exact replica of the data we use on-site, with only the smallest amount of latency between the two copies under normal conditions. If my on-site network goes down, the read-only, cloud-based version of the Product Entity Service will carry on regardless, and because we’re using the Amazon cloud infrastructure it can benefit from almost unlimited runtime resources if necessary. Availability shouldn’t ever be a problem.
We can add more machines at any time, offer load balancing, and potentially spread machines over multiple continents if necessary. My guess is that the Sales Director will be pleased that our customers can always see the information in our product catalogue, and customers themselves should be pretty satisfied with the comprehensive and reliable service that they now receive. Finally, the IT Manager will be pleased that network security remains intact and that the new off-site hosted service will cost next to nothing to run and be very reliable. All that remains is to publicise the Read-only Product Entity Service endpoint to our customers and support them in their use of it. All in all, a pretty successful day at the office.

Would you like to try the Read-only Product Service for yourself? The endpoint details can be found in the SOA Growers Simple Service Repository. Follow the ‘Service Repository’ link and look for the ‘R20121231’ release. In there you will find links to the service’s WSDL, its WS-I certificate, and a live demo web-service endpoint hosted on an AWS micro-instance. The best way to experience the live demo is to download the SOAP-UI test suite, which requires SOAP-UI v4 (free to download). The test suite contains a simple test of all the operations available on the service.

Catch-up with the entire blog series online…

That’s probably the last in this series on building entity services with Java and CouchDB. If you missed any of the earlier blog posts in this series and would like to catch up, the other entries are listed below…

- Implementing Entity Services using NoSQL – Part 1: Outline
- Implementing Entity Services using NoSQL – Part 2: Contract-first
- Implementing Entity Services using NoSQL – Part 3: CouchDB
- Implementing Entity Services using NoSQL – Part 4: JavaEE

Don’t forget to share!
Reference: Implementing Entity Services using NoSQL – Part 5: Improving autonomy using the Cloud from our JCG partner Ben Wilcock at the SOA, BPM, Agile & Java blog....

Getting started with Scala and Scalatra – Part I

In this series of tutorials we’re going to take a closer look at Scalatra. Scalatra is a lightweight, Scala-based micro web framework that can be used to create high-performance websites and APIs. In this first tutorial we’ll get started with the installation of Scalatra and import our test project into Eclipse.

SBT and giter8

Before you can get started you need to install a couple of tools (I’m assuming you’ve got a JDK 1.6+ installed). I’ll give you the condensed installation instructions; an extensive version can be found on the Scalatra first-steps page (http://www.scalatra.org/getting-started/first-steps.html). This approach should work for most environments; for my own, however, it didn’t – it downloaded old versions of giter8 and sbt. For sbt I used MacPorts to get the latest version:

port install sbt

And I also downloaded giter8 manually:

curl https://raw.github.com/n8han/conscript/master/setup.sh | sh
cs n8han/giter8

This last command will install g8 in your ~/bin directory. I’ve tested this with this g8 version:

~/bin/g8
giter8 0.5.0
Usage: g8 [TEMPLATE] [OPTION]...
Apply specified template.

OPTIONS
    -b, --branch
        Resolves a template within a given branch
    --paramname=paramvalue
        Set given parameter value and bypass interaction.

Apply template and interactively fulfill parameters.
    g8 n8han/giter8
Or
    g8 git://github.com/n8han/giter8.git
Apply template from a remote branch
    g8 n8han/giter8 -b some-branch
Apply template from a local repo
    g8 file://path/to/the/repo
Apply given name parameter and use defaults for all others.
g8 n8han/giter8 --name=template-test

Create initial project

Go to the root directory where you keep your projects and run the following:

jos@Joss-MacBook-Pro.local:~/dev/scalatra/firststeps$ g8 scalatra/scalatra-sbt
organization [com.example]: org.smartjava
package [com.example.app]: org.smartjava.scalatra
name [scalatra-sbt-prototype]: hello-scalatra
servlet_name [MyScalatraServlet]: HelloScalatraServlet
scala_version [2.9.1]:
version [0.1.0-SNAPSHOT]:

This will create a hello-scalatra folder in the directory you’re in, containing the project:

./build.sbt
./project
./project/build.properties
./project/plugins.sbt
./README.md
./src
./src/main
./src/main/resources
./src/main/resources/logback.xml
./src/main/scala
./src/main/scala/org
./src/main/scala/org/smartjava
./src/main/scala/org/smartjava/scalatra
./src/main/scala/org/smartjava/scalatra/HelloScalatraServlet.scala
./src/main/scala/Scalatra.scala
./src/main/webapp
./src/main/webapp/WEB-INF
./src/main/webapp/WEB-INF/layouts
./src/main/webapp/WEB-INF/layouts/default.scaml
./src/main/webapp/WEB-INF/views
./src/main/webapp/WEB-INF/views/hello-scalate.scaml
./src/main/webapp/WEB-INF/web.xml
./src/test
./src/test/scala
./src/test/scala/org
./src/test/scala/org/smartjava
./src/test/scala/org/smartjava/scalatra
./src/test/scala/org/smartjava/scalatra/HelloScalatraServletSpec.scala

To test that everything is working, go into this directory and start the application from sbt with ‘container:start’.
This will download a lot of stuff and finally start the application:

jos@Joss-MacBook-Pro.local:~/dev/scalatra/firststeps/hello-scalatra$ sbt
[info] Loading project definition from /Users/jos/Dev/scalatra/firststeps/hello-scalatra/project
[info] Set current project to hello-scalatra (in build file:/Users/jos/Dev/scalatra/firststeps/hello-scalatra/)
> container:start
[info] jetty-8.1.5.v20120716
[info] NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet
[info] started o.e.j.w.WebAppContext{/,[file:/Users/jos/Dev/scalatra/firststeps/hello-scalatra/src/main/webapp/]}
[info] started o.e.j.w.WebAppContext{/,[file:/Users/jos/Dev/scalatra/firststeps/hello-scalatra/src/main/webapp/]}
15:12:44.604 [pool-6-thread-4] INFO o.scalatra.servlet.ScalatraListener - Initializing life cycle class: Scalatra
[info] started o.e.j.w.WebAppContext{/,[file:/Users/jos/Dev/scalatra/firststeps/hello-scalatra/src/main/webapp/]}
15:12:44.727 [pool-6-thread-4] INFO o.f.s.servlet.ServletTemplateEngine - Scalate template engine using working directory: /var/folders/mc/vvzshptn22lg5zpp7fdccdzr0000gn/T/scalate-5609005579304140836-workdir
[info] Started SelectChannelConnector@
[success] Total time: 1 s, completed Sep 7, 2012 3:12:44 PM
>
[success] Total time: 1 s, completed Sep 7, 2012 3:10:05 PM

To check that everything is correct, point your browser at localhost:8080 and you’ll see the welcome page of the generated application.

Import in Eclipse

I usually develop with Eclipse, so let’s make sure we can edit the source code there. For this we can use the sbteclipse plugin. Adding the plugin is very easy: go to the ‘project’ folder and add the following (with an empty line before it) to the plugins.sbt file.

addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "2.1.0")

When you next run sbt, you’ll see a lot of stuff being downloaded. From sbt, run ‘eclipse’ and the Eclipse configuration files (.classpath and .project) will be created.
Now start up the Eclipse IDE and import the project you just created (make sure you also install the Scala IDE for Eclipse). Next we can start the container and let sbt listen for changes in our resources:

$ sbt
> container:start
> ~ ;copy-resources;aux-compile

Any time we save a file in Eclipse, sbt will copy and recompile the resources, so we can see our changes directly in the browser. Open the file HelloScalatraServlet and change the welcome text; when you save it, sbt will reload the application and you’ll see the change immediately.

That’s it for part one of this tutorial. Next week we’ll look at how we can use Scalatra to create a REST-based service. Happy coding and don’t forget to share! Reference: Tutorial: Getting started with scala and scalatra – Part I from our JCG partner Jos Dirksen at the Smart Java blog....

Real World Scala Test

There was recently a study about Scala vs. Java in which some of Scala’s productivity, conciseness and multi-core claims were put to the test. The basic idea: take some semi-experienced programmers, teach them Scala for 4 weeks and see how they do coding up a contrived, week-long multi-core example. The results basically did not support Scala’s claims of being radically better than Java.

This is the real world, folks

A fair number of pro-Scala people have commented that it’s not really fair to give a developer 4 weeks of training on a new language that has a ton of new paradigms and expect the developer to be proficient. Except that’s the way it is in the real world. An organization that adopts Scala is likely to give its developers far less than 4 weeks of training on Scala before the developers embark on a new project. So, as a real-world thing, the study demonstrates what you can expect a month or two into a Scala project. The study is in fact fair because in the real world we don’t have the luxury of letting developers take 2 years to explore a new language before measuring how well they will do with it.

It took me years to become excellent with Scala

I’m one of the most prolific Scala coders around (I wrote a lot of Lift as well as Beginning Scala). It took me until I was done writing a book on Scala before I felt really comfortable with the language. I had the luxury of spending a lot of time with it, but that’s not the normal case.

Looking at the long term

Yes, someone who invests 2+ years in Scala and who is generally good to excellent at coding will be more productive in Scala than in Java. I’ve seen a number of projects where Scala has made the difference in terms of (1) raw developer productivity and (2) the ability to recruit top-notch talent. The combination of the two has led to results that I do not think could have been achieved with Java.
On the other hand, a company adopting Scala for a project without making a long-term commitment to training and developer growth will likely see the same kinds of results demonstrated by this study.

Java 8

And then there’s Java 8. With closures in Java 8 (no, they are not as good or as flexible as Scala’s), the gap between Java and Scala closes significantly. It will become much easier to build libraries that have a lot of ARM (automatic resource management) built in, and it will be easier to deal with collections using Java 8 and closures.

Am I slamming Scala?

No, I’m not slamming Scala. I’m giving my view of Scala through a real-world lens. I love Scala. I wrote Telegram in Scala and really grooved on the excellence that Lift and Scala bring to the table. But having a study that reflects how Scala will do in the real world is super-important. Advocating for Scala means helping people understand the expected outcomes of a Scala project, rather than the best-case, pie-in-the-sky outcome. This study sheds valuable light on the likely outcomes of a first or second Scala project in a company. There are other examples (like Foursquare and OpenStudy) that demonstrate just how amazing a small, excellent team can be with Lift and Scala. Don’t forget to share! Reference: Real World Scala Test from our JCG partner David Pollak at the Good Stuff blog....
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.