
Comparing OpenDDR to WURFL

Web content delivered to mobile devices usually benefits from being tailored to take into account a range of factors such as screen size, markup language support and image format support. Such information is stored in "Device Description Repositories" (DDRs). Both the WURFL and OpenDDR projects provide an API to access these DDRs, in order to ease and promote the development of Web content that adapts to its Delivery Context.

WURFL recently changed its license to AGPL (Affero GPL) v3, which means it is no longer free to use commercially. Consequently, some free open source alternatives have started to show up, and OpenDDR is one of them. In this post I will share my findings on how the OpenDDR Java API compares to WURFL.

Add dependencies to project

This section describes how to add WURFL and OpenDDR to a Maven project.

WURFL

WURFL is really straightforward to add since it is available on the Maven central repository. All you have to do is include the dependency in your project:

```xml
<dependency>
    <groupId>net.sourceforge.wurfl</groupId>
    <artifactId>wurfl</artifactId>
    <version>1.2.2</version><!-- the last free version -->
</dependency>
```

OpenDDR

OpenDDR, on the other hand, is quite difficult to configure. Follow these steps to include OpenDDR in your project:

1. Download the OpenDDR-Simple-API zip.
2. Unzip it and create a new Java project in Eclipse based on the resulting folder.
3. Export the OpenDDR-Simple-API JAR using Eclipse (File >> Export...), including only the content of the src folder and excluding the oddr.properties file.
4. Install the resulting JAR and the DDR-Simple-API.jar from the lib folder into your local Maven repository:

```
mvn install:install-file -DgroupId=org.w3c.ddr.simple -DartifactId=DDR-Simple-API -Dversion=2008-03-30 -Dpackaging=jar -Dfile=DDR-Simple-API.jar -DgeneratePom=true -DcreateChecksum=true

mvn install:install-file -DgroupId=org.openddr.simpleapi.oddr -DartifactId=OpenDDR -Dversion=1.0.0.6 -Dpackaging=jar -Dfile=OpenDDR-1.0.0.6.jar -DgeneratePom=true -DcreateChecksum=true
```

5. Add the dependencies to your project pom.xml file:

```xml
<dependency>
    <groupId>org.w3c.ddr.simple</groupId>
    <artifactId>DDR-Simple-API</artifactId>
    <version>2008-03-30</version>
</dependency>
<dependency>
    <groupId>org.openddr.simpleapi.oddr</groupId>
    <artifactId>OpenDDR</artifactId>
    <version>1.0.0.6</version>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-jexl</artifactId>
    <version>2.1.1</version>
</dependency>
<dependency>
    <groupId>commons-lang</groupId>
    <artifactId>commons-lang</artifactId>
    <version>2.6</version>
</dependency>
```

Load repository/capabilities file

This section describes how to load the WURFL and OpenDDR repository files and import them into your project.

WURFL

Copy the wurfl-2.1.1.xml.gz file (the last free version) into your project's src/main/resources folder and import it using:

```java
WURFLHolder wurflHolder = new CustomWURFLHolder(getClass().getResource("/wurfl-2.1.1.xml.gz").toString());
```

OpenDDR

Copy oddr.properties from the OpenDDR-Simple-API src folder, and all the files inside the OpenDDR-Simple-API resources folder, into your project's src/main/resources folder.
Import them using:

```java
Service identificationService = null;
try {
    Properties initializationProperties = new Properties();
    initializationProperties.load(getClass().getResourceAsStream("/oddr.properties"));
    identificationService = ServiceFactory.newService("org.openddr.simpleapi.oddr.ODDRService",
            initializationProperties.getProperty(ODDRService.ODDR_VOCABULARY_IRI),
            initializationProperties);
} catch (IOException e) {
    LOGGER.error(e.getMessage(), e);
} catch (InitializationException e) {
    LOGGER.error(e.getMessage(), e);
} catch (NameException e) {
    LOGGER.error(e.getMessage(), e);
}
```

Using the API

This section describes how to use the WURFL and OpenDDR Java APIs to access the device capabilities.

WURFL

The WURFL API is very easy to use and has the big advantage of a fall-back hierarchy that infers capabilities for devices not yet in its repository file.

```java
Device device = wurflHolder.getWURFLManager().getDeviceForRequest(getContext().getRequest());
int resolutionWidth = Integer.valueOf(device.getCapability("resolution_width"));
int resolutionHeight = Integer.valueOf(device.getCapability("resolution_height"));
```

There is no need to validate device.getCapability("resolution_width") against a null value when no data is available.

OpenDDR

OpenDDR is quite the opposite: very cumbersome, and it does not have a fall-back hierarchy, forcing the developer to validate each property value.

```java
PropertyRef displayWidthRef;
PropertyRef displayHeightRef;

try {
    displayWidthRef = identificationService.newPropertyRef("displayWidth");
    displayHeightRef = identificationService.newPropertyRef("displayHeight");
} catch (NameException ex) {
    throw new RuntimeException(ex);
}

PropertyRef[] propertyRefs = new PropertyRef[] { displayWidthRef, displayHeightRef };
Evidence e = new ODDRHTTPEvidence();
e.put("User-Agent", getContext().getRequest().getHeader("User-Agent"));

int maxImageWidth = 320;  // A default value
int maxImageHeight = 480; // A default value
try {
    PropertyValues propertyValues = identificationService.getPropertyValues(e, propertyRefs);
    PropertyValue displayWidth = propertyValues.getValue(displayWidthRef);
    PropertyValue displayHeight = propertyValues.getValue(displayHeightRef);

    if (displayWidth.exists()) {
        maxImageWidth = displayWidth.getInteger();
    }
    if (displayHeight.exists()) {
        maxImageHeight = displayHeight.getInteger();
    }
} catch (Exception ex) {
    throw new RuntimeException(ex);
}
```
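One convenient way to exercise both APIs side by side is to hide them behind a small common interface. The sketch below is illustrative only: the DeviceDimensions name and the servlet-request parameter are my own, and it simply wraps the WURFL calls shown above (an OpenDDR implementation would wrap the PropertyRef/Evidence boilerplate the same way, falling back to defaults when a property does not exist):

```java
import javax.servlet.http.HttpServletRequest;

// Hypothetical facade so the comparison code does not care which DDR is in use.
public interface DeviceDimensions {
    int displayWidth(HttpServletRequest request);
    int displayHeight(HttpServletRequest request);
}

// WURFL-backed implementation, using only the calls shown earlier.
public class WurflDimensions implements DeviceDimensions {
    private final WURFLHolder wurflHolder;

    public WurflDimensions(WURFLHolder wurflHolder) {
        this.wurflHolder = wurflHolder;
    }

    public int displayWidth(HttpServletRequest request) {
        Device device = wurflHolder.getWURFLManager().getDeviceForRequest(request);
        return Integer.valueOf(device.getCapability("resolution_width"));
    }

    public int displayHeight(HttpServletRequest request) {
        Device device = wurflHolder.getWURFLManager().getDeviceForRequest(request);
        return Integer.valueOf(device.getCapability("resolution_height"));
    }
}
```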
Results

The following table shows the results of tests run against an application for server-side image adaptation, using both WURFL and OpenDDR. These tests were performed on real devices, and pages were served as XHTML Basic (same as XHTML MP).

| Platform | Device | Property | WURFL max_image_width (1) / max_image_height | WURFL resolution_width / resolution_height | OpenDDR displayWidth / displayHeight |
|---|---|---|---|---|---|
| N/A | Firefox desktop | width | 650 | 640 | Not supported |
| | | height | 600 | 480 | Not supported |
| iOS | iPhone 4S | width | 320 | 320 | 320 |
| | | height | 480 | 480 | 480 |
| Android | HTC One V | width | 320 | 540 | Not supported |
| | | height | 400 | 960 | Not supported |
| | HTC Hero | width | 300 | 320 | 320 |
| | | height | 460 | 480 | 480 |
| Windows Phone 7.5 | Nokia Lumia 710 | width | 600 | 640 | 480 |
| | | height | 600 | 480 | 800 |
| BlackBerry | BlackBerry Bold 9900 | width | 228 | 480 | 640 |
| | | height | 280 | 640 | 480 |
| Symbian S60 | Nokia E52 (Webkit) | width | 234 | 240 | 240 |
| | | height | 280 | 320 | 320 |
| | Nokia E52 (Opera Mobile) | width | 240 | 240 | Not supported |
| | | height | 280 | 320 | Not supported |
| Windows Mobile 6.1 | HTC Touch HD T8282 | width | 440 | 480 | 480 |
| | | height | 700 | 800 | 800 |

(1) The max_image_width capability is very handy: it is the usable width of viewable images, expressed in pixels. This capability refers to the image when used in "mobile mode", i.e. when the page is served as XHTML MP, or when it uses meta-tags such as "viewport", "handheldfriendly" or "mobileoptimised" to disable "web rendering" and force a mobile user experience.

Pros and Cons

| | Pros | Cons |
|---|---|---|
| WURFL | A device hierarchy that yields a high chance that capability values are inferred correctly, even when the device is not yet recognized. Lots and lots of capabilities. Easier to configure. Cleaner API. | Pricing and licensing. |
| OpenDDR | Free to use, even commercially. Growing community. | Limited capabilities; OpenDDR seems to be limited to the W3C DDR Core Vocabulary. |

Reference: Comparing Device Description Repositories from our JCG partner Samuel Santos at the Samaxes blog.

Avoid Null Pointer Exception in Java

Null Pointer Exception (NPE) is the most common and most annoying exception in Java. In this post I want to show how to avoid this undesired exception.

First, let's create an example that raises a Null Pointer Exception:

```java
private Boolean isFinished(String status) {
    if (status.equalsIgnoreCase("Finish")) {
        return Boolean.TRUE;
    } else {
        return Boolean.FALSE;
    }
}
```

If we pass null as the value of the "status" variable, this method raises a Null Pointer Exception on the line below:

```java
if (status.equalsIgnoreCase("Finish")) {
```

So we should change the code as follows to avoid the Null Pointer Exception:

```java
private Boolean isFinished(String status) {
    if ("Finish".equalsIgnoreCase(status)) {
        return Boolean.TRUE;
    } else {
        return Boolean.FALSE;
    }
}
```

Now, if we pass null as the value of "status", no Null Pointer Exception is raised. If you have object.equals("literal"), replace it with "literal".equals(object). If you have object.equals(Enum.enumElement), replace it with Enum.enumElement.equals(object). In general, call equals() on the object you are sure cannot have a null value.

In the part 1 post I listed how to avoid NPE with the equalsIgnoreCase() method and enumerators; today I will write about the cases below:

1- Empty Collection
2- Use some Methods
3- assert Keyword
4- Assert Class
5- Exception Handling
6- Too many dot syntax
7- StringUtils Class

1- Empty Collection

An empty collection is a collection that has no elements. Some developers return a null value for a collection with no elements, but this is wrong; you should return Collections.EMPTY_LIST, Collections.EMPTY_SET or Collections.EMPTY_MAP instead.

Wrong code:

```java
public static List getEmployees() {
    List list = null;
    return list;
}
```

Correct code:

```java
public static List getEmployees() {
    List list = Collections.EMPTY_LIST;
    return list;
}
```

2- Use some Methods

Use methods that assure you that a null value cannot surface, such as contains(), indexOf(), isEmpty(), containsKey(), containsValue() and hasNext(). Example:

```java
String myName = "Mahmoud A. El-Sayed";

List list = Collections.EMPTY_LIST;
boolean exist = list.contains(myName);
int index = list.indexOf(myName);
boolean isEmpty = list.isEmpty();

Map map = Collections.EMPTY_MAP;
exist = map.containsKey(myName);
exist = map.containsValue(myName);
isEmpty = map.isEmpty();

Set set = Collections.EMPTY_SET;
exist = set.contains(myName);
isEmpty = set.isEmpty();

Iterator iterator = list.iterator(); // must be initialised before use
exist = iterator.hasNext();
```

3- assert Keyword

assert is a keyword introduced in Java 1.4 that enables you to test your assumptions about your code. Syntax of the assert keyword:

```java
assert expression1;
```

expression1 is a boolean expression which is evaluated; if it is false, the system throws an AssertionError with no detail message.

```java
assert expression1 : expression2;
```

Here, expression1 is a boolean expression which is evaluated; if it is false, the system throws an AssertionError whose detail message is expression2.

For example, if I want to assert that an expression is not null, I should write the code below:

```java
public static String getManager(String employeeId) {
    assert (employeeId != null) : "employeeId must be not null";
    return "Mahmoud A. El-Sayed";
}
```

If I try to call the method using getManager(null), it raises "java.lang.AssertionError: employeeId must be not null".

Note: use the -enableassertions (or -ea) JVM option when running your code to enable assertions.
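For instance, assuming the method above lives in a class called Main whose main method calls getManager(null) (a hypothetical setup for illustration), the difference shows up on the command line:

```
javac Main.java
java -ea Main   # assertions enabled: java.lang.AssertionError is thrown
java Main       # assertions disabled (the default): the assert is skipped entirely
```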
4- Assert Class

The Assert class exists in the com.bea.core.repackaged.springframework.util package (a repackaged version of the Spring framework's Assert utility) and has a lot of methods used for assertion. Example:

```java
public static String getManager(String employeeId) {
    Assert.notNull(employeeId, "employeeId must be not null");
    Assert.hasLength(employeeId, "employeeId must have length greater than 0");
    return "Mahmoud A. El-Sayed";
}
```

If I try to call the method using getManager(null), it raises "java.lang.IllegalArgumentException: employeeId must be not null".

5- Exception Handling

I should take care in exception handling, using a try-catch statement or checking for null values of variables. For example:

```java
public static String getManager(String employeeId) {
    return null;
}
```

I will call it using the code below:

```java
String managerId = getManager("A015");
System.out.println(managerId.toString());
```

It raises "java.lang.NullPointerException", so to handle this exception I should use try-catch or check for null values.

a- try-catch statement

I will change the calling code to:

```java
String managerId = getManager("A015");
try {
    System.out.println(managerId.toString());
} catch (NullPointerException npe) {
    // write your code here
}
```

b- Checking for null values

I will change the calling code to:

```java
String managerId = getManager("A015");
if (managerId != null) {
    System.out.println(managerId.toString());
} else {
    // write your code here
}
```

6- Too many dot syntax

Some developers use this approach because it means writing less code, but in the future it will not be easier to maintain or to handle exceptions.

Wrong code:

```java
String attrValue = (String) findViewObject("VO_NAME").getCurrentRow().getAttribute("Attribute_NAME");
```

Correct code:

```java
ViewObject vo = findViewObject("VO_NAME");
Row row = vo.getCurrentRow();
String attrValue = (String) row.getAttribute("Attribute_NAME");
```

7- StringUtils Class

The StringUtils class is part of the org.apache.commons.lang package. I can use it to avoid NPEs, especially since all of its methods are null safe, for example StringUtils.isEmpty(), StringUtils.isBlank(), StringUtils.equals(), and much more. You can read the specification of this class in the Commons Lang documentation.
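To see that null safety in action, here is a quick sketch; the return values follow from the Commons Lang documentation:

```java
// All of these are safe to call with null - no NPE is thrown.
StringUtils.isEmpty(null);      // returns true
StringUtils.isBlank(null);      // returns true
StringUtils.equals(null, null); // returns true
StringUtils.equals("a", null);  // returns false
StringUtils.upperCase(null);    // returns null instead of throwing
```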
Conclusion

Always take care of NullPointerException when writing code: anticipate how it could be thrown in your code, and write a //TODO comment in your code for solving it later if you don't have time now.

Reference: Avoid Null Pointer Exception Part 1, Avoid Null Pointer Exception Part 2 from our JCG partner Mahmoud A. ElSayed at the Dive in Oracle blog.

Establishing ownership in Ops Teams

I've been having some discussions about this lately, so I figured I would write something about the topic. Being a member of an Ops team can be pretty challenging at times. The job can be high pressure, and often it feels like you spend all your time fighting fires, shaving yaks, etc. One of the difficult parts of being in Ops is that it's often hard to put your mark on things, to use your skills to leave a lasting impression.

The reason it's hard to leave a mark isn't because there's a lack of work, but because the work changes so frequently that influencing the long-term outcome of a project can be hard. This can often be even more difficult in Operations teams following Agile methodologies, because the work is broken into smaller stories and those stories may get worked by multiple folks. Even within these teams, though, there are individuals with skills in certain areas, and often there is more than one person with passion for a particular topic. Someone who's passionate about a topic is more likely to do a great job, in my experience, so we should see how we can leverage that.

Roles and Responsibilities Matrix

One successful tool I was shown is a Roles and Responsibilities matrix. The goal is to establish some basic ownership of components within an infrastructure so that individuals can focus their work. This often happens naturally in teams, but doing it formally accomplishes a few important goals:

- It allows individuals with no experience, but with an interest, to raise their hand and work with new things.
- It allows the team to agree on who is responsible for which infrastructure pieces. This is not sole ownership, but more about establishing expertise and creating less contention over decisions.
- It helps you, as the manager, formalize who to work with on specific issues.

The matrix is pretty simple: for each component (you can partition this however you want) you define two roles, a "P1" and a "P2". These are the primary and secondary points of contact for that component. But there's more to this than just having a primary and secondary:

- P1: This person is the current "non-expert", the trainee. All escalations for this component should go to them first. If they don't know the answer, it's their responsibility to work with the P2 and resolve the issue. In this process, they learn.
- P2: This person is the current expert, the trainer. They understand that they are P2 and are to work with the P1 on issues where they need help.

I have also observed this setup where there's only a P1 and they are the expert, because there just aren't enough folks to have a P1/P2 for that component (or it's not a priority). Another reason for the P1 to be the expert is if the system is going through a lot of changes and you want someone to keep tight reins on what changes are made.

Here is what an example matrix might look like:
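(The original post showed the matrix as an image, which has not survived here; the table below is an illustrative reconstruction with made-up components and names, in the spirit of the surrounding text.)

| Component | P1 (trainee) | P2 (expert) |
|---|---|---|
| Monitoring | Alice | Bob |
| Networking | Bob | Carol |
| Configuration Management | Carol | Dave |
| Storage | Dave | Alice |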
Looking over this, each person is a P1 for one component and a P2 for some other. In a perfect world it works out like this, but the world ain't perfect. Do your best with what you have, but try to set up something like this.

This is usually established during a meeting every quarter or every 6 months. You walk through the list of functional areas and ask for volunteers. This more often than not ends with very little contest, but in the event that there are concerns about who is P1 or P2, you should try to understand why it's important to each person to have a role in this, what they want to accomplish, and consider what other areas they also want to accomplish things in. Often, after discussing their vision for this component along with the other stuff they are working on, it will become more self-evident who the best P1 is, and you can get agreement.

Defining cross-functional areas

The matrix above works well, but the first question from folks is usually something like "what about monitoring: if I own that, does it mean I have to do all that work for everyone else?". The answer is "no" in most cases. Some functional areas are pretty clear and mostly self-contained, but others cut across all the other areas. Examples that intersect with everything else are Monitoring, Networking, Configuration Management, and sometimes things like Storage, depending on your architecture.

For areas where your area of expertise is a dependency for others, there needs to be shared ownership of those tasks. I generally look at it this way, using Monitoring as an example:

- The Monitoring P1 is responsible for the overall architecture and infrastructure, training, documentation and escalations for that system. They are responsible for enabling the other team members to use the system effectively, and for bringing any major changes to the team for review and consensus.
- The P1 owners of other components are responsible for integrating their systems with monitoring, for writing any monitors, and for establishing meaningful metrics and thresholds for their systems.
- Both P1 owners work together to make sure any monitoring and metrics are done in a consistent way that is in line with what the team has agreed the architecture is.

In this way you avoid making the monitoring owner's job suck by having them spend all day writing monitors for a million different components, but they still have ownership of the overall success of the monitoring infrastructure. Individuals who own other components make decisions about how best to monitor their own systems within the constraints of the best practices for the monitoring system, and they can work with the monitoring owner if they want to break new ground on doing things a different way.

Working outside of Operations

One of the most important roles Operations plays (in my opinion) is working with Development as closely as possible. This is becoming more and more obvious, and more teams are starting to give it names, like DevOps. Some Ops folks are better at this than others; some will go out and find Developers to work with, and others need to be prodded a bit.

Defining clear roles for individuals in Ops is a good way to force this collaboration. By assigning one Ops person to an upcoming Dev project and setting clear expectations around that role, you help foster their involvement and empower them to start working with other teams. That Dev team becomes a functional area, and it gets a P1 and P2 like any other component.

What I would typically advocate for smaller Dev organizations is integrating one Ops person per Dev team, if you can. This means that Ops person attends stand-ups, goes to planning meetings, and is familiar with all the stuff that Dev team is working on. Should there be a need for Ops-related work (or communication, which is always needed), the assigned Ops team member is responsible for that role. They aren't necessarily responsible for all of the work, but they are responsible for making sure the work is communicated and making sure it gets done.

Another approach is to assign Ops team members to individual projects.
As projects arise, team members start to attend those meetings and get involved with any stand-ups and work around that project. I don't like this approach as much because, most of the time, it relies on the Dev teams reaching out and saying "OK, we're ready for an Ops person now", and that often happens late. Having Ops members already in position inside teams gives you much earlier warning and helps shape the end result much earlier.

Tracking and communicating work

Now that everyone is working on their own projects, there will be a tendency to communicate that work less often and less completely. It takes some work to avoid this, but it's actually not all that hard. The important aspect is that each team member talks about what they worked on each day at stand-up and is clear about their priorities during planning sessions. How you achieve this is up to you, but I'll throw out some ideas.

Kanban works well as a visualization tool for work in progress. From an Ops perspective, I think that's where the role of Kanban starts and ends. Operations is an inherently interrupt-driven team, and while many organizations get out of that mode through lots of practice, if you are at that point you probably don't need my help in tracking and communicating work. Where I have seen Kanban work really well is in prioritizing work during planning (abc must come before xyz: move the card) and in visually showing what you did, what you are doing, and what you will be doing next.

Daily stand-ups are really, really helpful. Things change day to day in Ops teams, and taking 10 minutes each morning to get everyone in sync with what's going on is a huge help. Identifying blocks and talking about how to clear them is a big part of this. When everyone is there talking about things, saying "I'm blocked waiting for xyz" is an opportunity to get that problem solved today.

Documenting proposals using a shared document system like Google Docs is also a massive improvement. I can write up a proposal for something and, instead of asking for feedback, people can add it right to the document: they can make comments, etc. We get together for a 30-60 minute meeting to review the document and the feedback, and we take a shot at a final proposal. If there are still open questions, we go back and answer those. The key is that much of the work is done asynchronously, rather than asking everyone to bring their best, most undistracted thoughts into a meeting.

Rotating roles

Lastly, with all of this, there is change. Nobody wants to be stuck in the same role for years. People in Operations want to learn new things; they want an opportunity to take something that needs improvement and leave their mark on it. In every infrastructure there are some cool projects and there are some lame projects. There are also those parts of the system that are just a pain in the ass to maintain, and nobody wants to do it. It's important to rotate these around.

What has worked in my experience is a periodic review of the priorities. You start with a review of work in progress, so that folks know what they are signing up for if they want to tackle an area they aren't working in today. Then you wipe the slate clean and go functional area by functional area, asking who wants to be involved.

The trick with this process is to try to allow folks who have projects in flight to maintain that responsibility, while giving someone else a shot at learning about the system. This is where the P1/P2 roles can really be leveraged.
If you are re-building your network and you really need the same guy to maintain his momentum on that project, he becomes the P2, continuing that work. You assign a new P1 (if someone new wants to be involved) and have them tackle the day-to-day interrupts. The two members work together on it; the new person gets to learn while the old one gets to finish their project.

If a functional area has no work in progress and you really want to move something new forward there, find the person who's passionate about making that change and make them the P1. Find a P2 that can help enable them and let them go for it.

Wrapping up

Ownership is an important part of any job, and in Operations it has been the light that keeps me coming back. Giving that ability to every member of your team is important, and hopefully this gives you some ideas about how to do that.

Reference: Establishing ownership in Ops Teams from our JCG partner Aaron Nichols at the Operation Bootstrap blog.

Power with control: Scala control structures and abstractions

So, ramping up the Scala 101 series, I thought now is an appropriate juncture to introduce control structures in Scala. To a certain extent, working with the Scala language presents a vista wherein the developer is afforded much greater freedom than in many other environments, but therein lie a great many choices and a sense of responsibility. As such, I've consciously tried to restrict this post to covering some of the main flavours and options for control flow and iteration within Scala, how they differ, and examples of their usage. I expect this to be something of a 'white knuckle ride' through the building blocks of the language, and hope to provide both a contextual guide and a point of reference for usage. My ideal is that, coupled with a knowledge of the basics of how classes and objects differ and are constructed in Scala, this should provide a launchpad for writing productive Scala code.

Note: this coverage doesn't claim, or even attempt, to be authoritative, but tries to cover some of the iteration styles and constructs I have found most useful. So without further ado, let the click-clacking of chains purr along as we start our ascent.

if()

...is one of the handful of built-in control structures in Scala and uses a syntax familiar to most Java developers. In Scala the implementation differs slightly from that in Java, as all statements are expected to return a result. This is something of a general trend in functional languages: to err on the side of binding and returning values, effectively acting as a pipeline of execution and enrichment, as opposed to 'side effect' centric processing. As such, Scala uses the ternary operator model as the default, whereby values can be assigned based on the result of an if() evaluation.

e.g. if() as a ternary expression:

```scala
val result = if (1 > 2) true else false
```

Note:
- If the evaluation fails to return a result (because there is no else branch), assignment to an already-initialised variable will fail to compile due to a type mismatch, while a newly declared variable will be inferred to have type AnyVal.
- Initialised vals cannot be reassigned, as they are immutable.

e.g. if as a ternary with just one return value declared:

```scala
var result2 = if (1 > 2) 1 // assigns result2 = AnyVal
```

while and do..while

...are considered loops and not expressions in Scala, as they don't have to return 'interesting' types (i.e. they can return the Unit type, which is Scala's equivalent of void in Java). Many functional languages dispense with the while() construct completely and instead rely on recursion as a control abstraction, but, given the multi-paradigm nature of Scala (and its use as a gateway language towards the functional paradigm), these are supported as built-in control structures. Consequently, while loops are often eschewed in favour of a more functional style of iteration, such as recursion (see the sketch after the examples below). Nonetheless, here's a sample of standard while and do..while syntax:

e.g. while() and do..while():

```scala
var i = 0
while (i < 3) println("Hello " + { i = i + 1; i })
```

or

```scala
do {
  println("Hello " + { i = i + 1; i })
} while (i < 3)
```
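As a taste of the recursive alternative alluded to above (my own illustrative sketch, not from the original post), the same counting loop can be written as a tail-recursive function with no mutable state; the compiler optimises the tail call into a loop:

```scala
import scala.annotation.tailrec

// Prints "Hello 1", "Hello 2", "Hello 3" without any vars.
@tailrec
def greet(i: Int, limit: Int): Unit = {
  if (i <= limit) {
    println("Hello " + i)
    greet(i + 1, limit) // tail call: the current stack frame is reused
  }
}

greet(1, 3)
```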
for(i to/until)

...uses a syntax similar to the enhanced for loop in Java 5, with some notable additions, namely: expression-scoped values use type inference; the iterated value is initialised using a generator, and multiple generators can be inlined in the expression; and filter support is baked into the expression. A simple example should make this clearer.

e.g. generators and filters in a for() comprehension:

```scala
for (i <- 1 to 4; j <- 1 until 4 if (i * j % 2 == 0)) {
  println(i * j)
}
```

As displayed above, generator bounds can be specified using either 'to' or 'until', and values can be assigned during each iteration. Similarly, filters allow multiple statements (separated by semicolons) to prune which values enter the body of the expression.

Iterations using for() are called expressions (as opposed to loops), as they usually return a value and can be used to initialise a variable. One subtlety here is that whether the for() construct returns a value is inferred, by the compiler, from the context in which the expression is used.

Digging a little deeper, we can see that there are actually two styles of for loop, one imperative and the other functional. The functional style is recognisable by the fact that it 'yields' a result (which is returned from the expression), whereas the imperative style is used for its side effects (such as in the example shown above). For the more inquisitive reader, the intricacies of for() expressions are thoroughly covered in an excellent article which describes how for() expressions in Scala are actually closures, and why this can give surprising results. As an appetite moistener, here are a few droplets of the detail:

- Clauses contained in for expressions are actually closures and, as such, they fail to 'break' in the expected imperative sense.
- Eager evaluation is the default implementation of most for() expressions.
- 'Real-time' (i.e. lazy) evaluation of break clauses is possible using the .view of the collection. (Note: view replaces the now-deprecated .projection method.) However, to actually exit the closure early, the break condition should be evaluated against the projected surrogate object.

e.g. a for() comprehension using the functional form:

```scala
var multiplesOfTwo = for (i <- 1 to 10 if i % 2 == 0) yield i
```

e.g. a lazily evaluated for() comprehension with a break gate:

```scala
var gate = true // note this has to be a 'var', as we hope to reassign it during iteration
for (i <- testList.view.takeWhile(i => gate))
  if (i < 5) println(i) else gate = false
```

Note: as the view and projection operations are on List (and as the name suggests), I don't believe it is possible to apply these 'escape semantics' to a set of values created by a generator. However, for this use case, the while() loop will satisfy those needs:

e.g. using a while() to escape non-Collection iteration:

```scala
var index = 1
var escapeGate = false
while (index < 10000000 && !escapeGate) {
  if (index > 5) escapeGate = true
  println(index)
  index += 1
}
```
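It can also help to know that the compiler translates for() comprehensions into calls on the underlying collection methods. The pair below is my own illustrative example of that equivalence, using the filter and map operations covered later in this post:

```scala
// A for() comprehension with a filter and a yield...
val evens = for (i <- 1 to 10 if i % 2 == 0) yield i * 2

// ...is translated by the compiler into roughly this:
val evens2 = (1 to 10).filter(i => i % 2 == 0).map(i => i * 2)
```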
Useful operations on Lists

Lists are one of the major (if not the main) collection types in Scala, and can be both easily created and manipulated. Next we'll look at some typical List operations and their use. Some of the value member descriptions below have been gratefully harvested from the excellent Scaladoc.

e.g. creating a List:

```scala
val testList = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
```

or

```scala
val testList = 1 :: 2 :: 3 :: 4 :: 5 :: 6 :: 7 :: 8 :: 9 :: 10 :: Nil
```

Note: the :: operation is right associative; as a general rule, operations that end in a colon tend to be right associative. The ::: operation can be used to concatenate multiple Lists into a new List. Joining Lists of different element types results in a new List whose element type is the 'lowest common denominator' type of the individual lists.

e.g. inheritance amongst List elements:

```scala
class Car
class Volvo extends Car
class VW extends Car

var lcdList = List(new Volvo()) ::: List(new VW()) // returns a List of type 'Car'
```

Also note that List supports trivial operations such as head (which returns the first element of the list) and tail (which returns the contents of the list after the first element). No further detail on these operations is presented here.

foreach()

Use: applies a function to every element of a list. This looping construct is used for the side effects it creates.

Example:

```scala
testList foreach (x => println(x * 2))
```

Note: if the function takes a single parameter, method parameters don't have to be listed explicitly. Also, the underscore '_' wildcard can replace a whole parameter list. (N.B. there is a further nuance for the underscore wildcard, where it is followed by an asterisk when used as an array substitution, e.g. '_*'.)

e.g. foreach with inferred parameters:

```scala
testList foreach (println)
```

and

```scala
testList foreach (println _)
```

filter()

Use: selects all elements from a List that satisfy a given predicate.

Example:

```scala
val evensList = testList filter (_ % 2 == 0)
```

Note: the order of elements in the returned List is retained from the source List.

partition()

Use: splits a list into two lists based on a submitted predicate, and returns a Tuple2 containing the resulting Lists.

Example:

```scala
val tupleWrappedOddAndEvenLists = testList partition (_ % 2 == 0)

// the individual elements of the Tuple can be accessed in the usual way, e.g.:
println(tupleWrappedOddAndEvenLists._1)
```

Note: Tuple support and usage in Scala extends beyond acting as a simple wrapper for inputting/returning multiple variables; tuples can also be used in extractors, and hence reused in pattern matches. Both topics are beyond the scope of this article.

forall()

Use: returns a boolean indicating whether all elements of a List pass a given predicate.

Example:

```scala
testList forall (a => a.isInstanceOf[Int])
```

exists()

Use: tests whether any element of a given List passes a given predicate.

Example:

```scala
testList exists (a => a == 4)
```

map()

Use: applies a function to all elements of a List and returns a new List containing the results.

Example:

```scala
val doubledList = testList map (_ * 2)
```

flatMap()

Use: similar to the map operation, but concatenates the results into a single List. For example, given a List of Lists and a function as input parameters, a single (transformed) List is returned.

Example:

```scala
val palindromeList = List(testList, testList reverse) flatMap (_.toList)
```

Note: this function is particularly useful when trying to obtain a single List of (transformed) leaf nodes from a tree structure, e.g. when browsing a file system, or for getting a consolidated list of financial/sports markets transformed into a required domain model structure.

fold[Left/Right]()

Use: applies a given function to all elements of a List to return an aggregated value. Practically, this is indispensable for terse concatenation or summation operations. The difference between foldLeft and foldRight is the direction in which the function is applied, which matters for operations that are noncommutative.
Example:

```scala
// This takes 1 as the seed value for the aggregated result, before applying the
// minus operation to each element of testList in turn (effectively subtracting
// the elements from a running total):
(testList foldLeft 1) (_-_)

// To see the difference between foldLeft and foldRight for noncommutative
// operations, let's apply the minus operation with both (using the shorthand
// syntax) and observe how the results differ:
(1 /: testList) (_-_)  // left fold on the minus operation returns -54
(testList :\ 1) (_-_)  // right fold on the minus operation returns -4
```

Note: the shortcut syntax for the fold operations is /: (for foldLeft) and :\ (for foldRight). The naming is such that the operation actually 'looks like' the direction in which the fold is occurring, with the colon suggesting the side from which input is being garnered. Therefore, (testList foldLeft 1) (_-_) above is equivalent to (1 /: testList) (_-_).

Related: catamorphism.

reduce[Left/Right]()

Use: recursively applies a given function to elements of a List. The resulting value from the first application is used as the primary input for the subsequent application, and this pattern of substitution is applied recursively. Unlike fold, reduce takes no seed value; the first (or last) element of the List plays that role.

Example:

```scala
testList reduceLeft ((a, b) => if (a > b) a else b) // returns the largest element
```

So far, this is a scattering of some of the various control structures, abstractions and List operations available in Scala, and it should provide a firm foundation for performing initial Scala katas. A final noteworthy point: be aware that the Scala compiler will optimise tail-recursive calls by rewriting them as loops, hence reusing the runtime stack frame.

Next time, I'll try to cover one of the most powerful abstractions in the Scala language, namely pattern matching, and show its interplay with exception handling in Scala. Hopefully this overview has given the reader an appropriate breadth and depth of information to engage in further investigation, as well as displaying some of the power of recursion and the functional style of programming that Scala facilitates. Happy hacking!

Reference: Power with control… control structures and abstractions in Scala from our JCG partner Kingsley Davies at the Scalabound blog.

Why an Agile Project Manager is Not a Scrum Master

A reader asked why the lifecycle in Agile Lifecycles for Geographically Distributed Teams, Part 1 is not Scrum. It's not Scrum for these reasons:

- The project manager and product owner start the release planning and ask the team if the release planning is OK. The team does not generate the initial draft of the release planning itself. In Scrum, the team is supposed to generate all of the planning itself.
- The checkin is different from the Scrum standup, and the objectives of the checkin are different.

I did suggest to the teams that if you want to create a cross-functional team where the functions are separated, asking people how they are working together might help them work together. Sometimes those questions work, and sometimes they don't. It depends on the team and whether the people want to work together.

I didn't mention retrospectives or backlogs in my examples so far, because I took them for granted. Yes, both examples of these teams do perform retrospectives and have product backlogs. They also have agile feature roadmaps, which are on my list to blog about.

The real difference is the difference between a Scrum Master and an agile project manager. A Scrum Master is not a project manager. A Scrum Master does not manage risk by him or herself. A project manager will take on the risk management responsibility without asking the team.

A Scrum Master has allegiance only to the team. A project manager has responsibility to the team and to the organization. That means the project manager might feel torn when the organization pressures the project manager to do something stupid. (Although, I just downloaded the Scrum Guide, and the Scrum Master's responsibilities have grown considerably since I took my CSM with Jeff way back in 2006.)

But agile provides transparency when the organization asks the agile project manager to do something stupid, so it's easier to retain your integrity as a project manager.

Want to move a feature higher in the backlog? Change the feature roadmap with the product owner and then change the backlog with the product owner. I expect the agile project manager to collaborate on the feature roadmap and the backlog with the product owner.

Want to change the velocity of the team to please some crazed manager? Both the Scrum Master and the agile project manager protect the team in these ways:

- Explain that velocity is not a productivity metric
- Say no, and explain why
- Play the "Double Your Velocity" schedule game
- Or choose some other way to remove this management obstacle

Agile makes it easy to protect the team. The question is this: does the Scrum Master have other responsibilities in addition to protecting the team, or is the Scrum Master full time? An agile project manager tends to be full time on a geographically distributed team. Even on a geographically distributed team, a Scrum Master is not seen as a full-time position. Bless their tiny little hearts, managers don't seem to understand that transitioning to agile, especially for silo'd distributed teams with different cultural norms, is non-trivial. They will make room for a project manager, but a Scrum Master? Oh no. Makes me nuts.

Cut corners on quality? I don't see how. The team doesn't meet the acceptance criteria on the stories, doesn't meet their criteria of done for an iteration, and can't show a demo. How does that serve anyone?

Help a team go faster? This is the one place where a project manager may have an edge over a Scrum Master, and that's only because of education.
An agile project manager is a project manager. That means he or she is actively studying project management, which means he or she is studying lean also, looking into work in progress. (I realize many project managers do not actively study project management.) I have high expectations of an agile project manager: to limit WIP (work in progress) and to measure cumulative flow. "But, Johanna, that's a lean project manager." Yes, that's correct. Why not use all of the tools available to us at all times? This is not to help a team actually go faster, but to provide feedback to the team about their WIP. If everyone takes a story at the start of the iteration and everyone always works on their own story, it's likely the team is at the slowest possible velocity. It's worth knowing that, or at least retrospecting on the data. A project manager will gather the data. A Scrum Master, especially one who was not a trained project manager, may not know to gather the data.

I have nothing against Scrum Masters. Some of my good friends are CSTs (Certified Scrum Trainers). However, they have not all been project managers or studied the field of project management. Some have been. And the real issue is this: in a two or three day workshop, they cannot convey all of their project knowledge to a person who may or may not have been a practicing project manager.

Organizations do not always pick project managers to be Scrum Masters, and with good reason. Some project managers are command-and-control project managers. I suspect back in my long-ago past, I was. I gave it up long ago because it didn't work. Some people never gave up command-and-control project management. Those people are not good project managers for agile projects. They are terrible project managers for geographically distributed projects, where you must work through influence.

You can have self-managing teams that are geographically distributed. You can have self-directed teams that are geographically distributed. But they don't start that way. They evolve into self-directed and self-managing teams. They start as management-led teams.

And, especially when they are silo'd teams, they need the coordination of a project manager: someone who will manage the risk between the silos, someone who has the organizational backing, and yes, someone who has the allegiance to the organization to say "We need to do this project" and to write the project charter.

In a geographically distributed team, the agile project manager writes the project charter either with the team or as a strawman for the people to edit and approve. Shane and I recommend that the people get together to write it together. We like it if people get together in person. We know how rarely that happens. (Penny wise, pound foolish.) So we teach people how to write a project charter when they are divided in space.

Because until there is a project charter, there is no organizing principle for the silo'd teams. Those developers in France, testers in Belarus, and product managers and project manager in San Francisco all need something to coalesce around. The charter, which includes the project vision, provides that. The iterations provide the project heartbeat.

So, that's why I don't think Agile Lifecycles for Geographically Distributed Teams, Part 1 is Scrum. It's close, but no cigar.
I respect Ken and Jeff's work too much to call it Scrum when it's not.

Now that I'm mostly recovered from my cold, I can continue the series about lifecycles.

Reference: Why an Agile Project Manager is Not a Scrum Master from our JCG partner Johanna Rothman at the Managing Product Development blog.

Array, list, set, map, tuple, record literals in Java

Occasionally, when I'm thrilled by the power and expressiveness of JavaScript, I find myself missing one or two features in the Java world. Apart from lambda expressions / closures, or whatever you want to call "anonymous functions", it's the use of advanced literals for common data types, such as arrays, lists, sets, maps, etc. In JavaScript, no one would think about constructing a constant Map like this:

```javascript
var map = new Object();
map["a"] = 1;
map["b"] = 2;
map["c"] = 3;
```

Instead, you'd probably write:

```javascript
var map = { "a": 1, "b": 2, "c": 3 };
```

Specifically, when passing complex parameters to an API function, this turns out to be a very handy syntax.

What about these things in Java? I've recently posted about a workaround that you can use for creating a "List literal" using Arrays.asList(…) here: http://blog.jooq.org/2011/10/28/javas-arrays-aslist-is-underused/

This is somewhat OK. You can also construct arrays when you assign them, using array literals. But you cannot pass an array literal to a method:

```java
// This will work:
int[] array = { 1, 2, 3 };

// This won't:
class Test {
    public void callee(int[] array) {}

    public void caller() {
        // Compilation error here:
        callee({1, 2, 3});
    }
}
```
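For completeness (my own aside, not from the original post), today's Java does offer verbose stand-ins for these literals: the array case just needs an explicit new, lists can use Arrays.asList(), and the closest thing to a map literal is double-brace initialisation:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Workarounds {
    void callee(int[] array) {}

    void caller() {
        // An explicit array creation expression can be passed inline:
        callee(new int[] { 1, 2, 3 });

        // A "list literal":
        List<Integer> list = Arrays.asList(1, 2, 3);

        // Double-brace initialisation; note this creates an anonymous
        // HashMap subclass at every use site, so use it sparingly:
        Map<String, Integer> map = new HashMap<String, Integer>() {{
            put("a", 1);
            put("b", 2);
            put("c", 3);
        }};
    }
}
```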
Brian Goetz's mentioning of various literals on lambda-dev

Missing this feature for quite a while, I was very thrilled to read Brian Goetz's mention of them on the lambda-dev mailing list: http://mail.openjdk.java.net/pipermail/lambda-dev/2012-May/004979.html

The ideas he was listing were these:

```
#[ 1, 2, 3 ]                          // Array, list, set
#{ "foo" : "bar", "blah" : "wooga" }  // Map literals
#/(\d+)$/                             // Regex
#(a, b)                               // Tuple
#(a: 3, b: 4)                         // Record
#"There are {foo.size()} foos"        // String literal
```

Unfortunately, he also added the following disclaimer: "Not that we'd embrace all of these immediately (or ever)".

Obviously, at this stage of the current language evolution towards Java 8, he cannot make any guarantee whatsoever about what might be added in the future. But from a jOOQ perspective, the idea of being able to declare tuple and record literals (with the appropriate backing language support for such types!) is quite thrilling. Imagine selecting arbitrary tuples / records with their associated index/type, column/type pairs. Imagine a construct like this one in Java or Scala (using jOOQ):

```java
// For simplicity, I'm using Scala's val operator here,
// indicating type inference. It's hard to guess what true
// record support in the Java language should look like
for (val record : create.select(
            BOOK.AUTHOR_ID.as("author"),
            count().as("books"))
        .from(BOOK)
        .groupBy(BOOK.AUTHOR_ID)
        .fetch()) {

    // With true record support, you could now formally extract
    // values from the result set being iterated on. In other
    // words, the formal column alias and type is available to
    // the compiler:
    int author = record.author;
    int books = record.books;
}
```

Obviously, this is only speculation, but you can see that with true tuple / record support in the Java language, a lot of features would be unleashed in the Java universe, with a very high impact on all existing libraries and APIs. Stay tuned!

Reference: Array, list, set, map, tuple, record literals in Java from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

Android – Dashboard design pattern implementation

Before reading this article, please learn the prerequisites mentioned above, so that you can have a better idea of the implementation of the solution I am going to discuss here.

Do you know what the Dashboard pattern is, exactly? In brief, we can say a Dashboard is a page containing large and clear symbols for the main functionality, and optionally an area for relevant new information.

Go through these articles:
1. UI Design Pattern – Dashboard (from Juhani Lehtimaki)
2. Android UI design patterns

The main agenda of this article is to implement the Dashboard design pattern, as shown in the screenshots below.

Step 1: Create the title bar layout

We define the title bar (header) layout only once, but it is required on every screen; we will just show/hide the home button and other buttons whenever needed. Once you are done defining the title bar layout, you can reuse it in other layouts by using a ViewStub. Here is an example of the title bar (header) XML layout:

header.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="wrap_content"
    android:background="@color/title_background" >

    <LinearLayout
        android:id="@+id/panelIconLeft"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentLeft="true"
        android:layout_centerVertical="true"
        android:layout_margin="5dp" >

        <Button
            android:id="@+id/btnHome"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:background="@drawable/ic_home"
            android:onClick="btnHomeClick" />
    </LinearLayout>

    <TextView
        android:id="@+id/txtHeading"
        style="@style/heading_text"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true"
        android:layout_marginLeft="5dp"
        android:layout_marginRight="5dp"
        android:layout_toLeftOf="@+id/panelIconRight"
        android:layout_toRightOf="@id/panelIconLeft"
        android:ellipsize="marquee"
        android:focusable="true"
        android:focusableInTouchMode="true"
        android:gravity="center"
        android:marqueeRepeatLimit="marquee_forever"
        android:singleLine="true"
        android:text=""
        android:textColor="@android:color/white" />

    <LinearLayout
        android:id="@+id/panelIconRight"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentRight="true"
        android:layout_centerVertical="true"
        android:layout_margin="5dp" >

        <Button
            android:id="@+id/btnFeedback"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:background="@drawable/ic_feedback"
            android:onClick="btnFeedbackClick" />
    </LinearLayout>

</RelativeLayout>
```

In the above layout code, I have referenced a style from styles.xml and dimensions from dimen.xml:

styles.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <style name="heading_text">
        <item name="android:textColor">#ff000000</item>
        <item name="android:textStyle">bold</item>
        <item name="android:textSize">16sp</item>
        <item name="android:padding">5dp</item>
    </style>
    <style name="HomeButton">
        <item name="android:layout_gravity">center_vertical</item>
        <item name="android:layout_width">fill_parent</item>
        <item name="android:layout_height">wrap_content</item>
        <item name="android:layout_weight">1</item>
        <item name="android:gravity">center_horizontal</item>
        <item name="android:textSize">@dimen/text_size_medium</item>
        <item name="android:textStyle">normal</item>
        <item name="android:textColor">@color/foreground1</item>
        <item name="android:background">@null</item>
    </style>
</resources>
```

dimen.xml:
```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <dimen name="title_height">45dip</dimen>
    <dimen name="text_size_small">14sp</dimen>
    <dimen name="text_size_medium">18sp</dimen>
    <dimen name="text_size_large">22sp</dimen>
</resources>
```

Step 2: Create a super (abstract) class

In this abstract super class, we define:
1) the event handlers for both buttons: Home and Feedback
2) other shared methods

The Home and Feedback buttons are visible in almost every activity and require the same actions (i.e. take the user to the home activity), so instead of writing the same code in every activity, we write the event handlers only once, in an abstract class that is the super class of every activity. You may have noticed android:onClick="btnHomeClick" (Home button) and android:onClick="btnFeedbackClick" (Feedback button) in the header.xml layout above; we define these methods only once, in the abstract super class. Please refer to a ViewStub example if you don't know about it.

Now, here is the code for the abstract super class, DashBoardActivity.java:

```java
package com.technotalkative.viewstubdemo;

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.view.ViewStub;
import android.widget.Button;
import android.widget.TextView;

public abstract class DashBoardActivity extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
    }

    public void setHeader(String title, boolean btnHomeVisible, boolean btnFeedbackVisible) {
        ViewStub stub = (ViewStub) findViewById(R.id.vsHeader);
        View inflated = stub.inflate();

        TextView txtTitle = (TextView) inflated.findViewById(R.id.txtHeading);
        txtTitle.setText(title);

        Button btnHome = (Button) inflated.findViewById(R.id.btnHome);
        if (!btnHomeVisible)
            btnHome.setVisibility(View.INVISIBLE);

        Button btnFeedback = (Button) inflated.findViewById(R.id.btnFeedback);
        if (!btnFeedbackVisible)
            btnFeedback.setVisibility(View.INVISIBLE);
    }

    /**
     * Home button click handler
     * @param v
     */
    public void btnHomeClick(View v) {
        Intent intent = new Intent(getApplicationContext(), HomeActivity.class);
        intent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP);
        startActivity(intent);
    }

    /**
     * Feedback button click handler
     * @param v
     */
    public void btnFeedbackClick(View v) {
        Intent intent = new Intent(getApplicationContext(), FeedbackActivity.class);
        startActivity(intent);
    }
}
```

Step 3: Define the dashboard layout

```xml
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical" >

    <ViewStub
        android:id="@+id/vsHeader"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:inflatedId="@+id/header"
        android:layout="@layout/header" />

    <LinearLayout
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:layout_weight="1"
        android:orientation="vertical"
        android:padding="6dip" >

        <LinearLayout
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:orientation="horizontal" >

            <Button
                android:id="@+id/main_btn_eclair"
                style="@style/HomeButton"
                android:drawableTop="@drawable/android_eclair_logo"
                android:onClick="onButtonClicker"
                android:text="@string/EclairActivityTitle" />

            <Button
                android:id="@+id/main_btn_froyo"
                style="@style/HomeButton"
                android:drawableTop="@drawable/android__logo_froyo"
                android:onClick="onButtonClicker"
                android:text="@string/FroyoActivityTitle" />
        </LinearLayout>

        <LinearLayout
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:orientation="horizontal" >

            <Button
                android:id="@+id/main_btn_gingerbread"
                style="@style/HomeButton"
                android:drawableTop="@drawable/android_gingerbread_logo"
                android:onClick="onButtonClicker"
                android:text="@string/GingerbreadActivityTitle" />

            <Button
                android:id="@+id/main_btn_honeycomb"
                style="@style/HomeButton"
                android:drawableTop="@drawable/android_honeycomb_logo"
                android:onClick="onButtonClicker"
                android:text="@string/HoneycombActivityTitle" />
        </LinearLayout>

        <LinearLayout
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:orientation="horizontal" >

            <Button
                android:id="@+id/main_btn_ics"
                style="@style/HomeButton"
                android:drawableTop="@drawable/android_ics_logo"
                android:onClick="onButtonClicker"
                android:text="@string/ICSActivityTitle" />

            <Button
                android:id="@+id/main_btn_jellybean"
                style="@style/HomeButton"
                android:drawableTop="@drawable/android_jellybean_logo"
                android:onClick="onButtonClicker"
                android:text="@string/JellyBeanActivityTitle" />
        </LinearLayout>
    </LinearLayout>
</LinearLayout>
```
android:text="@string/FroyoActivityTitle" /> </LinearLayout><LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_weight="1" android:orientation="horizontal" ><Button android:id="@+id/main_btn_gingerbread" style="@style/HomeButton" android:drawableTop="@drawable/android_gingerbread_logo" android:onClick="onButtonClicker" android:text="@string/GingerbreadActivityTitle" /><Button android:id="@+id/main_btn_honeycomb" style="@style/HomeButton" android:drawableTop="@drawable/android_honeycomb_logo" android:onClick="onButtonClicker" android:text="@string/HoneycombActivityTitle" /> </LinearLayout><LinearLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_weight="1" android:orientation="horizontal" ><Button android:id="@+id/main_btn_ics" style="@style/HomeButton" android:drawableTop="@drawable/android_ics_logo" android:onClick="onButtonClicker" android:text="@string/ICSActivityTitle" /><Button android:id="@+id/main_btn_jellybean" style="@style/HomeButton" android:drawableTop="@drawable/android_jellybean_logo" android:onClick="onButtonClicker" android:text="@string/JellyBeanActivityTitle" /> </LinearLayout> </LinearLayout> </LinearLayout>Step 4: Define activity for handling this dashboard layout buttons click events. In this activity, you will find the usage of setHeader() method to set the header for home activity, yes in this method i have passed “false” for home button because its already a home activity, but i have passed “true” for feedback button because feedback is needed to be visible. Other process are same as defining button click handlers. package com.technotalkative.viewstubdemo;import android.content.Intent; import android.os.Bundle; import android.view.View;public class HomeActivity extends DashBoardActivity { /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); setHeader(getString(R.string.HomeActivityTitle), false, true); }/** * Button click handler on Main activity * @param v */ public void onButtonClicker(View v) { Intent intent;switch (v.getId()) { case R.id.main_btn_eclair: intent = new Intent(this, Activity_Eclair.class); startActivity(intent); break;case R.id.main_btn_froyo: intent = new Intent(this, Activity_Froyo.class); startActivity(intent); break;case R.id.main_btn_gingerbread: intent = new Intent(this, Activity_Gingerbread.class); startActivity(intent); break;case R.id.main_btn_honeycomb: intent = new Intent(this, Activity_Honeycomb.class); startActivity(intent); break;case R.id.main_btn_ics: intent = new Intent(this, Activity_ICS.class); startActivity(intent); break;case R.id.main_btn_jellybean: intent = new Intent(this, Activity_JellyBean.class); startActivity(intent); break; default: break; } } }Step 5: Define other activities and their UI layouts Now, its time to define activities that we want to display based on the particular button click from dashboard. So define every activities and their layouts. Don’t forget to call setHeader() method wherever necessary. Here is one example for such activity – Activity_Eclair.java package com.technotalkative.viewstubdemo;import android.os.Bundle;public class Activity_Eclair extends DashBoardActivity { /** Called when the activity is first created. 
*/ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_eclair); setHeader(getString(R.string.EclairActivityTitle), true, true); } } activity_eclair.xml <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" android:orientation="vertical" ><ViewStub android:id="@+id/vsHeader" android:layout_width="fill_parent" android:layout_height="wrap_content" android:inflatedId="@+id/header" android:layout="@layout/header" /><TextView android:id="@+id/textView1" android:layout_width="match_parent" android:layout_height="match_parent" android:gravity="center" android:text="@string/EclairActivityTitle" /></LinearLayout>Step 6: Declare the activities inside the AndroidManifest.xml file (a hedged sketch is shown below). Now you are DONE. Output: home screen (landscape) and inner screens (screenshots omitted). You can download the source code from here: Android – Dashboard pattern implementation. Feedback/reviews are always welcome. Reference: Android – Dashboard design pattern implementation from our JCG partner Paresh N. Mayani at the TechnoTalkative blog....
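Since Step 6 does not show the manifest itself, here is a minimal sketch of what the AndroidManifest.xml could look like. The package and activity names are taken from the article's code; the icon, label and version attributes are assumptions:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.technotalkative.viewstubdemo"
    android:versionCode="1"
    android:versionName="1.0">
    <application android:icon="@drawable/ic_launcher" android:label="@string/app_name">
        <!-- HomeActivity is the dashboard and the launcher entry point -->
        <activity android:name=".HomeActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
        <!-- One entry per dashboard target, each extending DashBoardActivity -->
        <activity android:name=".Activity_Eclair" />
        <activity android:name=".Activity_Froyo" />
        <activity android:name=".Activity_Gingerbread" />
        <activity android:name=".Activity_Honeycomb" />
        <activity android:name=".Activity_ICS" />
        <activity android:name=".Activity_JellyBean" />
        <activity android:name=".FeedbackActivity" />
    </application>
</manifest>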
oracle-coherence-logo

Distribute Spring Beans in Oracle Coherence

This article shows how to distribute Spring beans by using the EntryProcessor and Portable Object Format (POF) features in Oracle Coherence. Coherence supports a lock-free programming model through the EntryProcessor API. This feature improves system performance by reducing network access and performing an implicit low-level lock on the entries. This implicit low-level locking functionality is different from the explicit lock(key) provided by the ConcurrentMap API. Explicit locking, the Transaction Framework API and the Coherence Resource Adapter are the other Coherence transaction options besides entry processors. For detailed information about the Coherence transaction options, please look at the References section. In addition, the Distributed Data Management in Oracle Coherence article can be suggested for a Coherence explicit-locking implementation. Portable Object Format (POF) is a platform-independent serialization format. It allows encoding equivalent Java, .NET and C++ objects into an identical sequence of bytes. POF is suggested for system performance since the serialization and deserialization performance of POF is better than standard Java serialization (according to the Coherence reference document, in a simple test class with a String, a long, and three ints, (de)serialization was seven times faster than standard Java serialization). Coherence offers many kinds of cache types such as Distributed (or Partitioned), Replicated, Optimistic, Near, Local and Remote Cache. A distributed cache is defined as a collection of data that is distributed (or, partitioned) across any number of cluster nodes such that exactly one node in the cluster is responsible for each piece of data in the cache, and the responsibility is distributed (or, load-balanced) among the cluster nodes. Please note that the distributed cache type has been used in this article. Since the other cache types are not in the scope of this article, please look at the References section or the Coherence reference document; their configurations are very similar to the distributed cache configuration. The earlier article, How to distribute Spring Beans by using Coherence, which covers explicit locking with standard Java serialization, is suggested for comparing the two different implementations (EntryProcessor with Portable Object Format (POF) versus explicit locking with standard Java serialization). In this article, a new cluster named OTV has been created and a Spring bean has been distributed by using a cache object named user-cache. It has been distributed between two members of the cluster. Let us look at the implementation of AbstractProcessor (which implements the EntryProcessor interface) together with the PortableObject interface for distributing Spring beans between JVMs in a cluster. Used Technologies : JDK 1.6.0_31 Spring 3.1.1 Coherence 3.7.0 SolarisOS 5.10 Maven 3.0.2 STEP 1 : CREATE MAVEN PROJECT A Maven project is created as below. (It can be created by using Maven or an IDE plug-in.) STEP 2 : COHERENCE PACKAGE Coherence is downloaded via the Coherence Package. STEP 3 : LIBRARIES Firstly, the Spring dependencies are added to Maven's pom.xml. Please note that the Coherence library is installed to the local Maven repository (a hedged sketch of the install command is shown below) and its description is added to pom.xml as follows. Also, if Maven is not used, the coherence.jar file can be added to the classpath.
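The article does not show the install command itself, so here is a minimal sketch; the groupId, artifactId and version mirror the pom dependency below, while the jar file name is an assumption based on the downloaded package:

mvn install:install-file -DgroupId=com.tangosol -DartifactId=coherence -Dversion=3.7.0 -Dpackaging=jar -Dfile=coherence.jar -DgeneratePom=true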
<properties> <spring.version>3.1.1.RELEASE</spring.version> </properties><dependencies><!-- Spring 3 dependencies --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-core</artifactId> <version>${spring.version}</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> <version>${spring.version}</version> </dependency><!-- Coherence library (from local repository) --> <dependency> <groupId>com.tangosol</groupId> <artifactId>coherence</artifactId> <version>3.7.0</version> </dependency><!-- Log4j library --> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.16</version> </dependency></dependencies> The following Maven plugin can be used to create a runnable JAR. <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-shade-plugin</artifactId> <version>1.3.1</version><executions> <execution> <phase>package</phase> <goals> <goal>shade</goal> </goals> <configuration> <transformers> <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer"> <mainClass>com.otv.exe.Application</mainClass> </transformer> <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer"> <resource>META-INF/spring.handlers</resource> </transformer> <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer"> <resource>META-INF/spring.schemas</resource> </transformer> </transformers> </configuration> </execution> </executions> </plugin>STEP 4 : CREATE otv-pof-config.xml otv-pof-config.xml covers the classes that use the Portable Object Format (POF) feature for serialization. In this example, the User, UpdateUserProcessor and DeleteUserProcessor classes implement the com.tangosol.io.pof.PortableObject interface. The -Dtangosol.pof.config argument can be used to define the otv-pof-config.xml path in the startup script. <?xml version="1.0"?> <!DOCTYPE pof-config SYSTEM "pof-config.dtd"> <pof-config> <user-type-list> <!-- coherence POF user types --> <include>coherence-pof-config.xml</include> <!-- The definition of classes which use Portable Object Format --> <user-type> <type-id>1001</type-id> <class-name>com.otv.user.User</class-name> </user-type> <user-type> <type-id>1002</type-id> <class-name>com.otv.user.processor.UpdateUserProcessor</class-name> </user-type> <user-type> <type-id>1003</type-id> <class-name>com.otv.user.processor.DeleteUserProcessor</class-name> </user-type> </user-type-list> <allow-interfaces>true</allow-interfaces> <allow-subclasses>true</allow-subclasses> </pof-config>STEP 5 : CREATE otv-coherence-cache-config.xml otv-coherence-cache-config.xml contains the caching-schemes (distributed or replicated) and caching-scheme-mapping configuration. The created cache configuration should be added to coherence-cache-config.xml.
<?xml version="1.0"?><cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"><caching-scheme-mapping> <cache-mapping> <cache-name>user-cache</cache-name> <scheme-name>UserDistributedCacheWithPof</scheme-name> </cache-mapping> </caching-scheme-mapping><caching-schemes><distributed-scheme> <scheme-name>UserDistributedCacheWithPof</scheme-name> <service-name>UserDistributedCacheWithPof</service-name><serializer> <instance> <class-name>com.tangosol.io.pof.SafeConfigurablePofContext </class-name> <init-params> <init-param> <param-type>String</param-type> <param-value> <!-- pof-config.xml path should be set--> otv-pof-config.xml </param-value> </init-param> </init-params> </instance> </serializer> <backing-map-scheme> <local-scheme /> </backing-map-scheme> <autostart>true</autostart> </distributed-scheme> </caching-schemes></cache-config> STEP 6 : CREATE tangosol-coherence-override.xml tangosol-coherence-override.xml covers the cluster, member-identity and configurable-cache-factory configuration. The following configuration XML file shows the first member of the cluster. The -Dtangosol.coherence.override argument can be used to define the tangosol-coherence-override.xml path in the startup script. tangosol-coherence-override.xml for the first member of the cluster : <?xml version='1.0'?><coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd"><cluster-config><member-identity> <cluster-name>OTV</cluster-name> <!-- Name of the first member of the cluster --> <role-name>OTV1</role-name> </member-identity><unicast-listener> <well-known-addresses> <socket-address id="1"> <!-- IP Address of the first member of the cluster --> <address>x.x.x.x</address> <port>8089</port> </socket-address> <socket-address id="2"> <!-- IP Address of the second member of the cluster --> <address>y.y.y.y</address> <port>8089</port> </socket-address> </well-known-addresses><!-- Name of the first member of the cluster --> <machine-id>OTV1</machine-id> <!-- IP Address of the first member of the cluster --> <address>x.x.x.x</address> <port>8089</port> <port-auto-adjust>true</port-auto-adjust> </unicast-listener></cluster-config><configurable-cache-factory-config> <init-params> <init-param> <param-type>java.lang.String</param-type> <param-value system-property="tangosol.coherence.cacheconfig"> <!-- coherence-cache-config.xml path should be set--> otv-coherence-cache-config.xml </param-value> </init-param> </init-params> </configurable-cache-factory-config></coherence> tangosol-coherence-override.xml for the second member of the cluster : <?xml version='1.0'?><coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd"><cluster-config><member-identity> <cluster-name>OTV</cluster-name> <!-- Name of the second member of the cluster --> <role-name>OTV2</role-name> </member-identity><unicast-listener><well-known-addresses> <socket-address id="1"> <!-- IP Address of the first member of the cluster --> <address>x.x.x.x</address> <port>8089</port> </socket-address> <socket-address id="2"> <!-- IP
Address of the second member of the cluster --> <address>y.y.y.y</address> <port>8089</port> </socket-address> </well-known-addresses><!-- Name of the second member of the cluster --> <machine-id>OTV2</machine-id> <!-- IP Address of the second member of the cluster --> <address>y.y.y.y</address> <port>8089</port> <port-auto-adjust>true</port-auto-adjust></unicast-listener></cluster-config><configurable-cache-factory-config> <init-params> <init-param> <param-type>java.lang.String</param-type> <param-value system-property="tangosol.coherence.cacheconfig"> <!-- coherence-cache-config.xml path should be set--> otv-coherence-cache-config.xml</param-value> </init-param> </init-params> </configurable-cache-factory-config></coherence>STEP 7 : CREATE applicationContext.xml applicationContext.xml is created. <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd"><!-- Beans Declaration --> <bean id="User" class="com.otv.user.User" scope="prototype" /> <bean id="UserCacheService" class="com.otv.user.cache.srv.UserCacheService" /> <bean id="CacheUpdaterTask" class="com.otv.cache.updater.task.CacheUpdaterTask"> <property name="userCacheService" ref="UserCacheService" /> </bean> </beans>STEP 8 : CREATE SystemConstants CLASS SystemConstants Class is created. This class covers all system constants. package com.otv.common;/** * System Constants * * @author onlinetechvision.com * @since 2 Jun 2012 * @version 1.0.0 * */ public class SystemConstants {public static final String APPLICATION_CONTEXT_FILE_NAME = "applicationContext.xml";//Named Cache Definition... public static final String USER_CACHE = "user-cache";//Bean Names... public static final String BEAN_NAME_CACHE_UPDATER_TASK = "CacheUpdaterTask"; public static final String BEAN_NAME_USER = "User";}STEP 9 : CREATE User BEAN A new User Spring bean is created. This bean will be distributed between the two nodes in the OTV cluster. PortableObject is implemented for serialization. The PortableObject interface has two methods to implement, readExternal and writeExternal, and only the properties written and read in these methods are serialized. In this example, all the properties (id, name and surname of User) are serialized.
package com.otv.user;import java.io.IOException;import com.tangosol.io.pof.PofReader; import com.tangosol.io.pof.PofWriter; import com.tangosol.io.pof.PortableObject;/** * User Bean * * @author onlinetechvision.com * @since 2 Jun 2012 * @version 1.0.0 * */ public class User implements PortableObject {private String id; private String name; private String surname;/** * Gets User Id * * @return String id */ public String getId() { return id; }/** * Sets User Id * * @param String id */ public void setId(String id) { this.id = id; }/** * Gets User Name * * @return String name */ public String getName() { return name; }/** * Sets User Name * * @param String name */ public void setName(String name) { this.name = name; }/** * Gets User Surname * * @return String surname */ public String getSurname() { return surname; }/** * Sets User Surname * * @param String surname */ public void setSurname(String surname) { this.surname = surname; }@Override public String toString() { StringBuilder strBuilder = new StringBuilder(); strBuilder.append("Id : ").append(id); strBuilder.append(", Name : ").append(name); strBuilder.append(", Surname : ").append(surname); return strBuilder.toString(); }/** * Restore the contents of a user type instance by reading its state * using the specified PofReader object. * * @param PofReader in */ public void readExternal(PofReader in) throws IOException { this.id = in.readString(0); this.name = in.readString(1); this.surname = in.readString(2); }/** * Save the contents of a POF user type instance by writing its state * using the specified PofWriter object. * * @param PofWriter out */ public void writeExternal(PofWriter out) throws IOException { out.writeString(0, id); out.writeString(1, name); out.writeString(2, surname); } }STEP 10 : CREATE IUserCacheService INTERFACE A new IUserCacheService Interface is created to perform cache operations. package com.otv.user.cache.srv;import com.otv.user.User; import com.otv.user.processor.DeleteUserProcessor; import com.otv.user.processor.UpdateUserProcessor; import com.tangosol.net.NamedCache;/** * User Cache Service Interface * * @author onlinetechvision.com * @since 2 Jun 2012 * @version 1.0.0 * */ public interface IUserCacheService {/** * Gets Distributed User Cache * * @return NamedCache User Cache */ NamedCache getUserCache();/** * Adds user to cache * * @param User user */ void addUser(User user);/** * Updates user on the cache * * @param String userId * @param UpdateUserProcessor processor * */ void updateUser(String userId, UpdateUserProcessor processor);/** * Deletes user from the cache * * @param String userId * @param DeleteUserProcessor processor * */ void deleteUser(String userId, DeleteUserProcessor processor);}STEP 11 : CREATE UserCacheService CLASS UserCacheService Class is created by implementing IUserCacheService Interface. 
package com.otv.user.cache.srv;import com.otv.cache.listener.UserMapListener; import com.otv.common.SystemConstants; import com.otv.user.User; import com.otv.user.processor.DeleteUserProcessor; import com.otv.user.processor.UpdateUserProcessor; import com.tangosol.net.CacheFactory; import com.tangosol.net.NamedCache;/** * User Cache Service * * @author onlinetechvision.com * @since 2 Jun 2012 * @version 1.0.0 * */ public class UserCacheService implements IUserCacheService {private NamedCache userCache = null;public UserCacheService() { setUserCache(CacheFactory.getCache(SystemConstants.USER_CACHE)); //UserMapListener is registered to listen to user-cache operations getUserCache().addMapListener(new UserMapListener()); }/** * Adds user to cache * * @param User user */ public void addUser(User user) { getUserCache().put(user.getId(), user); }/** * Deletes user from the cache * * @param String userId * @param DeleteUserProcessor processor * */ public void deleteUser(String userId, DeleteUserProcessor processor) { getUserCache().invoke(userId, processor); }/** * Updates user on the cache * * @param String userId * @param UpdateUserProcessor processor * */ public void updateUser(String userId, UpdateUserProcessor processor) { getUserCache().invoke(userId, processor); }/** * Gets Distributed User Cache * * @return NamedCache User Cache */ public NamedCache getUserCache() { return userCache; }/** * Sets User Cache * * @param NamedCache userCache */ public void setUserCache(NamedCache userCache) { this.userCache = userCache; } }STEP 12 : CREATE USERMAPLISTENER CLASS A new UserMapListener class is created. This listener receives distributed user-cache events. package com.otv.cache.listener;import org.apache.log4j.Logger;import com.tangosol.util.MapEvent; import com.tangosol.util.MapListener;/** * User Map Listener * * @author onlinetechvision.com * @since 2 Jun 2012 * @version 1.0.0 * */ public class UserMapListener implements MapListener {private static Logger logger = Logger.getLogger(UserMapListener.class);/** * This method is invoked when an entry is deleted from the cache... * * @param MapEvent me */ public void entryDeleted(MapEvent me) { logger.debug("Deleted Key = " + me.getKey() + ", Value = " + me.getOldValue()); }/** * This method is invoked when an entry is inserted to the cache... * * @param MapEvent me */ public void entryInserted(MapEvent me) { logger.debug("Inserted Key = " + me.getKey() + ", Value = " + me.getNewValue()); }/** * This method is invoked when an entry is updated on the cache... * * @param MapEvent me */ public void entryUpdated(MapEvent me) { logger.debug("Updated Key = " + me.getKey() + ", New_Value = " + me.getNewValue() + ", Old Value = " + me.getOldValue()); } }STEP 13 : CREATE UpdateUserProcessor CLASS AbstractProcessor is an abstract class under the package com.tangosol.util.processor. It implements the EntryProcessor interface. The UpdateUserProcessor class is created to process the user update operation on the cache. When UpdateUserProcessor is invoked for a key, the cluster member containing that key is located first. The processor is then executed on the member which contains the related key, and its value (the User object) is updated in place. Therefore, network traffic is reduced.
package com.otv.user.processor;import java.io.IOException;import org.apache.log4j.Logger;import com.otv.user.User; import com.tangosol.io.pof.PofReader; import com.tangosol.io.pof.PofWriter; import com.tangosol.io.pof.PortableObject; import com.tangosol.util.InvocableMap.Entry; import com.tangosol.util.processor.AbstractProcessor;/** * Update User Processor * * @author onlinetechvision.com * @since 2 Jun 2012 * @version 1.0.0 * */ public class UpdateUserProcessor extends AbstractProcessor implements PortableObject {private static Logger logger = Logger.getLogger(UpdateUserProcessor.class); private User newUser;/** * This empty constructor is added for Portable Object Format(POF). * */ public UpdateUserProcessor() {}public UpdateUserProcessor(User newUser) { this.newUser = newUser; }/** * Processes a Map.Entry object. * * @param Entry entry * @return Object newUser */ public Object process(Entry entry) { Object newValue = null; try { newValue = getNewUser(); entry.setValue(newValue); } catch (Exception e) { logger.error("Error occured when entry was being processed!", e); }return newValue; }/** * Gets new user * * @return User newUser */ public User getNewUser() { return newUser; }/** * Sets new user * * @param User newUser */ public void setNewUser(User newUser) { this.newUser = newUser; }/** * Restore the contents of a user type instance by reading its state * using the specified PofReader object. * * @param PofReader in */ public void readExternal(PofReader in) throws IOException { setNewUser((User) in.readObject(0)); }/** * Save the contents of a POF user type instance by writing its state * using the specified PofWriter object. * * @param PofWriter out */ public void writeExternal(PofWriter out) throws IOException { out.writeObject(0, getNewUser()); } } STEP 14 : CREATE DeleteUserProcessor CLASS The DeleteUserProcessor class is created to process the user deletion operation on the cache. When DeleteUserProcessor is invoked for a key, the cluster member containing that key is located first. The processor is then executed on the member which contains the related key. Therefore, network traffic is reduced. package com.otv.user.processor;import java.io.IOException;import org.apache.log4j.Logger;import com.otv.user.User; import com.tangosol.io.pof.PofReader; import com.tangosol.io.pof.PofWriter; import com.tangosol.io.pof.PortableObject; import com.tangosol.util.InvocableMap.Entry; import com.tangosol.util.processor.AbstractProcessor;/** * Delete User Processor * * @author onlinetechvision.com * @since 2 Jun 2012 * @version 1.0.0 * */ public class DeleteUserProcessor extends AbstractProcessor implements PortableObject {private static Logger logger = Logger.getLogger(DeleteUserProcessor.class);/** * Processes a Map.Entry object. * * @param Entry entry * @return Object user */ public Object process(Entry entry) { User user = null; try { user = (User) entry.getValue(); entry.remove(true); } catch (Exception e) { logger.error("Error occured when entry was being processed!", e); }return user; }/** * Restore the contents of a user type instance by reading its state * using the specified PofReader object. * * @param PofReader in */ public void readExternal(PofReader in) throws IOException {}/** * Save the contents of a POF user type instance by writing its state * using the specified PofWriter object.
* * @param PofWriter out */ public void writeExternal(PofWriter out) throws IOException {} }STEP 15 : CREATE CacheUpdaterTask CLASS The CacheUpdaterTask class is created to perform cache operations (add, update and delete) and monitor the cache content. package com.otv.cache.updater.task;import java.util.Collection;import org.apache.log4j.Logger; import org.springframework.beans.BeansException; import org.springframework.beans.factory.BeanFactory; import org.springframework.beans.factory.BeanFactoryAware;import com.otv.common.SystemConstants; import com.otv.user.User; import com.otv.user.cache.srv.IUserCacheService; import com.otv.user.processor.DeleteUserProcessor; import com.otv.user.processor.UpdateUserProcessor;/** * Cache Updater Task * * @author onlinetechvision.com * @since 2 Jun 2012 * @version 1.0.0 * */ public class CacheUpdaterTask implements BeanFactoryAware, Runnable {private static Logger log = Logger.getLogger(CacheUpdaterTask.class); private IUserCacheService userCacheService; private BeanFactory beanFactory;public void run() { try { while(true) { /** * Before the project is built for the first member, * this code block should be used instead of * method processRequestsOnSecondMemberOfCluster. */ processRequestsOnFirstMemberOfCluster();/** * Before the project is built for the second member, * this code block should be used instead of * method processRequestsOnFirstMemberOfCluster. */ // processRequestsOnSecondMemberOfCluster(); } } catch (InterruptedException e) { e.printStackTrace(); } }/** * Processes the cache requests on the first member of cluster... * * @throws InterruptedException */ private void processRequestsOnFirstMemberOfCluster() throws InterruptedException { //Entry is added to cache... getUserCacheService().addUser(getUser("1", "Bruce", "Willis"));//Cache Entries are being printed... printCacheEntries();Thread.sleep(10000);User newUser = getUser("1", "Client", "Eastwood"); //Existing entry is updated on the cache... getUserCacheService().updateUser(newUser.getId(), new UpdateUserProcessor(newUser));//Cache Entries are being printed... printCacheEntries();Thread.sleep(10000);//Entry is deleted from cache... getUserCacheService().deleteUser(newUser.getId(), new DeleteUserProcessor());//Cache Entries are being printed... printCacheEntries();Thread.sleep(10000); }/** * Processes the cache requests on the second member of cluster... * * @throws InterruptedException */ private void processRequestsOnSecondMemberOfCluster() throws InterruptedException { //Entry is added to cache... getUserCacheService().addUser(getUser("2", "Nathalie", "Portman"));Thread.sleep(15000);User newUser = getUser("2", "Sharon", "Stone"); //Existing entry is updated on the cache... getUserCacheService().updateUser(newUser.getId(), new UpdateUserProcessor(newUser));User newUser2 = getUser("1", "Maria", "Sharapova"); //Existing entry is updated on the cache... getUserCacheService().updateUser(newUser2.getId(), new UpdateUserProcessor(newUser2));Thread.sleep(15000);//Entry is deleted from cache...
getUserCacheService().deleteUser(newUser.getId(), new DeleteUserProcessor());Thread.sleep(15000); }/** * Prints cache entries * */ private void printCacheEntries() { Collection<User> userCollection = (Collection<User>)getUserCacheService().getUserCache().values(); for(User user : userCollection) { log.debug("Cache Content : "+user); } }/** * Gets new user instance * * @param String user id * @param String user name * @param String user surname * @return User user */ private User getUser(String id, String name, String surname) { User user = getNewUserInstance(); user.setId(id); user.setName(name); user.setSurname(surname);return user; }/** * Gets user cache service... * * @return IUserCacheService userCacheService */ public IUserCacheService getUserCacheService() { return userCacheService; }/** * Sets user cache service... * * @param IUserCacheService userCacheService */ public void setUserCacheService(IUserCacheService userCacheService) { this.userCacheService = userCacheService; }/** * Gets a new instance of User Bean * * @return User */ public User getNewUserInstance() { return (User) getBeanFactory().getBean(SystemConstants.BEAN_NAME_USER); }/** * Gets bean factory * * @return BeanFactory */ public BeanFactory getBeanFactory() { return beanFactory; }/** * Sets bean factory * * @param BeanFactory beanFactory * @throws BeansException */ public void setBeanFactory(BeanFactory beanFactory) throws BeansException { this.beanFactory = beanFactory; } }STEP 16 : CREATE Application CLASS The Application class is created to run the application. package com.otv.exe;import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext;import com.otv.cache.updater.task.CacheUpdaterTask; import com.otv.common.SystemConstants;/** * Application Class * * @author onlinetechvision.com * @since 2 Jun 2012 * @version 1.0.0 * */ public class Application {/** * Starts the application * * @param String[] args * */ public static void main(String[] args) { ApplicationContext context = new ClassPathXmlApplicationContext(SystemConstants.APPLICATION_CONTEXT_FILE_NAME);CacheUpdaterTask cacheUpdaterTask = (CacheUpdaterTask) context.getBean(SystemConstants.BEAN_NAME_CACHE_UPDATER_TASK); Thread cacheUpdater = new Thread(cacheUpdaterTask); cacheUpdater.start(); } }STEP 17 : BUILD PROJECT After the OTV_Spring_Coherence_With_Processor_and_POF project is built, OTV_Spring_Coherence-0.0.1-SNAPSHOT.jar will be created. Please note that the members of the cluster have different Coherence configurations, so the project should be built separately for each member. STEP 18 : RUN PROJECT ON FIRST MEMBER OF THE CLUSTER After the created OTV_Spring_Coherence-0.0.1-SNAPSHOT.jar file is run on the members of the cluster, the following output logs will be shown on the first member's console: --After a new cluster is created and the first member joins the cluster, a new entry is added to the cache. 02.06.2012 14:21:45 DEBUG (UserMapListener.java:33) - Inserted Key = 1, Value = Id : 1, Name : Bruce, Surname : Willis 02.06.2012 14:21:45 DEBUG (CacheUpdaterTask.java:116) - Cache Content : Id : 1, Name : Bruce, Surname : Willis ....... --After the second member joins the cluster, a new entry is added to the cache. 02.06.2012 14:21:45 DEBUG (UserMapListener.java:33) - Inserted Key = 2, Value = Id : 2, Name : Nathalie, Surname : Portman .......
--Cache operations go on both first and second members of the cluster: 02.06.2012 14:21:55 DEBUG (UserMapListener.java:42) - Updated Key = 1, New_Value = Id : 1, Name : Client, Surname : Eastwood, Old Value = Id : 1, Name : Bruce, Surname : Willis02.06.2012 14:21:55 DEBUG (CacheUpdaterTask.java:116) - Cache Content : Id : 2, Name : Nathalie, Surname : Portman 02.06.2012 14:21:55 DEBUG (CacheUpdaterTask.java:116) - Cache Content : Id : 1, Name : Client, Surname : Eastwood02.06.2012 14:22:00 DEBUG (UserMapListener.java:42) - Updated Key = 2, New_Value = Id : 2, Name : Sharon, Surname : Stone, Old Value = Id : 2, Name : Nathalie, Surname : Portman02.06.2012 14:22:00 DEBUG (UserMapListener.java:42) - Updated Key = 1, New_Value = Id : 1, Name : Maria, Surname : Sharapova, Old Value = Id : 1, Name : Client, Surname : Eastwood02.06.2012 14:22:05 DEBUG (UserMapListener.java:24) - Deleted Key = 1, Value = Id : 1, Name : Maria, Surname : Sharapova 02.06.2012 14:22:05 DEBUG (CacheUpdaterTask.java:116) - Cache Content : Id : 2, Name : Sharon, Surname : Stone 02.06.2012 14:22:15 DEBUG (UserMapListener.java:24) - Deleted Key = 2, Value = Id : 2, Name : Sharon, Surname : Stone 02.06.2012 14:22:15 DEBUG (UserMapListener.java:33) - Inserted Key = 1, Value = Id : 1, Name : Bruce, Surname : Willis 02.06.2012 14:22:15 DEBUG (CacheUpdaterTask.java:116) - Cache Content : Id : 1, Name : Bruce, Surname : WillisSTEP 19 : DOWNLOAD OTV_Spring_Coherence_With_Processor_and_POF Further Reading : Performing Transactions in Coherence Using Portable Object Format in Coherence Spring Framework Reference 3.x Reference: How to distribute Spring Beans by using EntryProcessor and PortableObject features in Oracle Coherence from our JCG partner Eren Avsarogullari at the Online Technology Vision blog....
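Steps 4 and 6 mention the -Dtangosol.pof.config and -Dtangosol.coherence.override JVM arguments, and Step 17 names the runnable jar. As a hedged sketch of how a member could be started (assuming the configuration files sit in the working directory; the actual startup script may differ), the pieces fit together like this:

java -Dtangosol.pof.config=otv-pof-config.xml -Dtangosol.coherence.override=tangosol-coherence-override.xml -jar OTV_Spring_Coherence-0.0.1-SNAPSHOT.jar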
spring-roo-logo

Rapid Cloud Development with Spring Roo – Part 1: Google App Engine (GAE)

Spring Roo is a tool to offer rapid application development on the Java platform. I already explained when to use it: http://www.kai-waehner.de/blog/2011/04/05/when-to-use-spring-roo. Spring Roo supports two solutions for Cloud Computing at the moment: Google App Engine (GAE) and VMware Cloud Foundry. Both provide the Platform as a Service (PaaS) concept. This article will discuss the GAE support of Spring Roo. Cloud Foundry will be analyzed in part 2 of this article series. Deployment of a GAE Application to the Cloud A very good introductory article, which describes the combination of Spring Roo and GAE, already exists here: http://java.dzone.com/articles/creating-application-using. In a nutshell, there is not much to do to deploy your (CRUD-) application in the GAE cloud. You have to choose another database provider, enter your GAE application id in a configuration file and deploy the application using one single Maven command (mvn gae:deploy). That is the difference from „traditional“ Roo applications. Thus, no rocket science! Nevertheless, there are several restrictions for developing GAE applications; for instance, you cannot use @OneToMany annotations to specify relations due to NoSQL concepts. Deployment will fail, or the application will not work as expected, if you do not follow these rules. GAE is much more than just deploying a traditional Web Application to the Cloud So, after reading the previous paragraph, the conclusion is the following: Spring Roo supports deploying its applications to the GAE cloud. Thus, everything is fine? No, not at all! Yes, you can deploy your CRUD application to the GAE cloud (if you do not use relations), but GAE is much more. You can or rather should use Task Queues to segment your long-running work, the BigTable datastore and blobstore to store your data, use the URL fetch service to communicate with other applications using HTTP(S), and several other GAE services such as XMPP, Memcache, Mail, and so on. The number of available services further increases with new GAE releases. These GAE services exist for good reasons: you should be able to create a cloud application which scales automatically without any manual server configuration and such stuff. That is the reason why you have to use NoSQL database concepts and URL Fetch instead of an SQL database, threads, socket programming, and the other techniques which you used in the past when not developing an application for the cloud. Google developers are NOT too dumb to support SQL databases, but it is not the appropriate technology for highly scaling cloud applications. A nice article about „SQL versus NoSQL“ can be found here: http://java.dzone.com/news/sql-vs-nosql-cloud-which Several Spring Roo Commands are missing for developing GAE Applications Spring Roo has no special GAE command. You use the persistence command to create support for BigTable, and you use a Maven goal to deploy the GAE application. Besides, there are no GAE commands although you would need them to create your Task Queues, BigTable datastore access (including relations), URL fetches, and so on. You have to code everything by yourself, as you have to do without Spring Roo. Thus, there is no real support for GAE, yet – contrary to Cloud Foundry (as we will see in part 2 of this article series). Of course, VMware wants to push its own PaaS solution; I understand that.
Nevertheless, Spring Roo should also offer good support for other solutions, as it does for web frameworks (in the meantime, there is official support for Spring MVC and GWT, and plugins for Vaadin, Flex and JSF are available or in the works). GAE is the only stable, production-ready PaaS Solution in the Java environment Be aware that GAE is the only stable and production-ready PaaS solution in the Java environment at the moment. Other offerings such as Cloud Foundry or Red Hat OpenShift are still in BETA status. Also be aware that there are reasons why Google is not offering SQL database support yet. They will probably add this feature in the future, because public criticism is huge. Nevertheless, NoSQL databases will be required in many use cases where you want to deploy your application in the cloud. Thus, I hope that Spring Roo will offer better GAE support in future versions. Go to Part 2 Reference: Rapid Cloud Development with Spring Roo – Part 1: Google App Engine (GAE) from our JCG partner Kai Wahner at the Blog about Java EE / SOA / Cloud Computing blog....
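For orientation, the GAE-related steps described in this article boil down to roughly the following Roo shell and Maven commands. This is a hedged sketch: the exact persistence setup options depend on the Roo version, and the application id is typically entered in appengine-web.xml rather than on the command line:

// In the Roo shell: switch persistence to the GAE datastore instead of a relational database
persistence setup --provider DATANUCLEUS --database GOOGLE_APP_ENGINE
// On the command line: deploy the application to Google App Engine
mvn gae:deploy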
spring-roo-logo

Rapid Cloud Development with Spring Roo – Part 2: VMware Cloud Foundry

Spring Roo is a tool to offer rapid application development on the Java platform. I already explained when to use it: http://www.kai-waehner.de/blog/2011/04/05/when-to-use-spring-roo. Spring Roo supports two solutions for Cloud Computing at the moment: Google App Engine (GAE) and VMware Cloud Foundry. Both provide the Platform as a Service (PaaS) concept. This article will discuss the Cloud Foundry support of Spring Roo. GAE was discussed in part 1 of this article series. Deployment of a Cloud Foundry Application to the Cloud The reference guide of Spring Roo gives an introduction at http://www.springsource.org/roo/guide?w=base-cloud-foundry, which describes the combination of Spring Roo and Cloud Foundry. In a nutshell, there is not much to do to deploy your (CRUD-) application in the Cloud Foundry cloud. You have to log in to your Cloud Foundry account, create a WAR file and deploy it. Three Roo commands execute these tasks. If you use any Cloud Foundry services (such as MySQL, Redis or RabbitMQ), then you have to create and bind these services using other Roo commands. The deployment is very easy. You can choose to deploy your application to a private cloud (your own servers) or to the public cloud (VMware servers). I got a strange, uninformative exception (often a problem with Spring Roo): „Operation could not be completed: 400 Bad Request“, with no further details or stack trace. Forum support was necessary. The problem was that the name of my cloud app was already used by another developer; it was not unique (I tried to use the name „SimpleCloudFoundry“). A more descriptive error message would be nice! Using another (unique) name solved the problem. Cloud Foundry is just a traditional Web Application – Contrary to GAE So, after reading the previous paragraph, the conclusion is the following: Spring Roo supports deploying its applications to the Cloud Foundry cloud. Thus, everything is fine? Yes, more or less surprisingly, that is true! The statement of the Cloud Foundry documentation is also true: „You won’t need to architect your applications in a special way or make do with a restricted subset of language or framework features, nor will you need to call Cloud Foundry specific APIs. You just develop your application as you do without Cloud Foundry, then you deploy it.“ So, why should you think about using another PaaS solution instead of Cloud Foundry? Cloud Foundry applications are traditional Java web applications which use Spring and are deployed to a Tomcat web container. You do not have many limitations (remember the Java class whitelist of GAE) or database restrictions (remember the BigTable concepts of GAE). Be aware that due to this advantage, you have to use the services offered by Cloud Foundry! At the moment, you can use MySQL, Redis, MongoDB and RabbitMQ. No other databases or messaging solutions can be used. If the offered services meet your demands, everything is fine. Almost all Cloud Foundry Commands are available in the Roo Shell Usually, you develop a Cloud Foundry application in an IDE such as Eclipse. Besides, you use the VMware CLI (which is a command line tool) to log in to Cloud Foundry, create and bind services, deploy, start and stop your application, and so on. Spring Roo offers more than 30 unique Cloud Foundry commands. With Roo's Cloud Foundry integration, you can now manage the entire life cycle of your application from the Roo shell. That is great!
Of course, VMware wants to push both Cloud Foundry and Spring Roo, so the connection between the two products is really good. But …There is no Reason to use Spring Roo for Cloud Foundry Development Spring Roo's goal is to help the developer build applications more easily and quickly. It is awesome for creating prototypes or CRUD web applications. Nevertheless, it does not help to create Cloud Foundry applications. Sure, you can use all VMC commands directly within the Roo shell, but that's it. I wonder if this is an advantage? I found it annoying to always type „cloud foundry“ in the Roo shell before entering the real command which I wanted to use. Thus, I switched back to the VMC command line tool quickly. The SpringSource Tool Suite also offers a Cloud Foundry plugin to bind services and deploy applications via „drag and drop“. Very nice! In my opinion, there is no benefit to using Spring Roo for developing Cloud Foundry applications. There is one exception, of course: if you develop a Spring Roo application (let's say a CRUD app), then you can do everything within the same shell; that is cool. By the way: though I do think that the combination with Spring Roo brings no benefits, I really like Cloud Foundry. It is one of the first PaaS solutions (besides Amazon Elastic Beanstalk) that offers relational database support. Besides, it is possible to deploy to public AND private clouds. It is open source, so much more support and many more services will be available in the future. But be aware: contrary to GAE, Cloud Foundry is still BETA at the moment. The current conclusion of this article series is that Spring Roo does not really help to develop applications for the cloud. Nevertheless, I like Spring Roo and I like PaaS solutions such as GAE and Cloud Foundry – but not combined. I will write further articles if this situation changes or if further PaaS products are supported by Spring Roo. Reference: Rapid Cloud Development with Spring Roo – Part 2: VMware Cloud Foundry from our JCG partner Kai Wahner at the Blog about Java EE / SOA / Cloud Computing blog....
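As a rough illustration of the life cycle described above, a Roo shell session for login, packaging and deployment looks approximately like the following. The command names are approximated from the Roo 1.x Cloud Foundry add-on and may differ between versions; the email, app name and WAR path are placeholders:

// Log in to your Cloud Foundry account
cloud foundry login --email you@example.com --password ****
// Create the WAR file
perform package
// Deploy and start the application (the app name must be unique, as noted above)
cloud foundry deploy --appName yourUniqueAppName --path target/yourapp-0.1.0.BUILD-SNAPSHOT.war
cloud foundry start app --appName yourUniqueAppName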