
JAXB and Root Elements

@XmlRootElement is an annotation that people are used to using with JAXB (JSR-222). Its purpose is to uniquely associate a root element with a class. Since JAXB classes map to complex types, it is possible for a class to correspond to multiple root elements. In this case @XmlRootElement cannot be used, and people start getting a bit confused. In this post I'll demonstrate how @XmlElementDecl can be used to map this use case.

XML Schema

The XML schema below contains three root elements: customer, billing-address, and shipping-address. The customer element has an anonymous complex type, while billing-address and shipping-address are of the same named type (address-type).

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://www.example.org/customer"
           xmlns="http://www.example.org/customer"
           elementFormDefault="qualified">
    <xs:element name="customer">
        <xs:complexType>
            <xs:sequence>
                <xs:element ref="billing-address"/>
                <xs:element ref="shipping-address"/>
            </xs:sequence>
        </xs:complexType>
    </xs:element>
    <xs:complexType name="address-type">
        <xs:sequence>
            <xs:element name="street" type="xs:string"/>
        </xs:sequence>
    </xs:complexType>
    <xs:element name="billing-address" type="address-type"/>
    <xs:element name="shipping-address" type="address-type"/>
</xs:schema>

Generated Model

Below is a JAXB model that was generated from the XML schema. The same concepts apply when adding JAXB annotations to an existing Java model.

Customer

JAXB domain classes correspond to complex types. Since the customer element has an anonymous complex type, the Customer class has an @XmlRootElement annotation. This is because only one XML element can be associated with an anonymous type.
package org.example.customer;

import javax.xml.bind.annotation.*;

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "", propOrder = {"billingAddress", "shippingAddress"})
@XmlRootElement(name = "customer")
public class Customer {

    @XmlElement(name = "billing-address", required = true)
    protected AddressType billingAddress;

    @XmlElement(name = "shipping-address", required = true)
    protected AddressType shippingAddress;

    public AddressType getBillingAddress() { return billingAddress; }

    public void setBillingAddress(AddressType value) { this.billingAddress = value; }

    public AddressType getShippingAddress() { return shippingAddress; }

    public void setShippingAddress(AddressType value) { this.shippingAddress = value; }
}

AddressType

Again, because JAXB model classes correspond to complex types, a class is generated for the address-type complex type. Since multiple root-level elements could exist for this named complex type, it is not annotated with @XmlRootElement.

package org.example.customer;

import javax.xml.bind.annotation.*;

@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "address-type", propOrder = {"street"})
public class AddressType {

    @XmlElement(required = true)
    protected String street;

    public String getStreet() { return street; }

    public void setStreet(String value) { this.street = value; }
}

ObjectFactory

The @XmlElementDecl annotation is used to represent root elements that correspond to named complex types. It is placed on a factory method in a class annotated with @XmlRegistry (when generated from an XML schema this class is always called ObjectFactory). The factory method returns the domain object wrapped in an instance of JAXBElement. The JAXBElement has a QName that represents the element's name and namespace URI.
package org.example.customer;

import javax.xml.bind.JAXBElement;
import javax.xml.bind.annotation.*;
import javax.xml.namespace.QName;

@XmlRegistry
public class ObjectFactory {

    private final static QName _BillingAddress_QNAME = new QName("http://www.example.org/customer", "billing-address");
    private final static QName _ShippingAddress_QNAME = new QName("http://www.example.org/customer", "shipping-address");

    public ObjectFactory() { }

    public Customer createCustomer() { return new Customer(); }

    public AddressType createAddressType() { return new AddressType(); }

    @XmlElementDecl(namespace = "http://www.example.org/customer", name = "billing-address")
    public JAXBElement<AddressType> createBillingAddress(AddressType value) {
        return new JAXBElement<AddressType>(_BillingAddress_QNAME, AddressType.class, null, value);
    }

    @XmlElementDecl(namespace = "http://www.example.org/customer", name = "shipping-address")
    public JAXBElement<AddressType> createShippingAddress(AddressType value) {
        return new JAXBElement<AddressType>(_ShippingAddress_QNAME, AddressType.class, null, value);
    }
}

package-info

The package-info class is used to specify the namespace mapping (see JAXB & Namespaces).

@XmlSchema(namespace = "http://www.example.org/customer", elementFormDefault = XmlNsForm.QUALIFIED)
package org.example.customer;

import javax.xml.bind.annotation.*;

Unmarshal Operation

Now we look at the impact of the type of root element when unmarshalling XML.

customer.xml

Below is a sample XML document with customer as the root element. Remember, the customer element had an anonymous complex type.

<?xml version="1.0" encoding="UTF-8"?>
<customer xmlns="http://www.example.org/customer">
    <billing-address>
        <street>1 Any Street</street>
    </billing-address>
    <shipping-address>
        <street>2 Another Road</street>
    </shipping-address>
</customer>

shipping.xml

Here is a sample XML document with shipping-address as the root element. The shipping-address element had a named complex type.
<?xml version="1.0" encoding="UTF-8"?>
<shipping-address xmlns="http://www.example.org/customer">
    <street>2 Another Road</street>
</shipping-address>

Unmarshal Demo

When unmarshalling XML that corresponds to a class annotated with @XmlRootElement you get an instance of the domain object. But when unmarshalling XML that corresponds to a class annotated with @XmlElementDecl you get the domain object wrapped in an instance of JAXBElement. In this example you may need to use the QName from the JAXBElement to determine whether you unmarshalled a billing or a shipping address.

package org.example.customer;

import java.io.File;
import javax.xml.bind.*;

public class UnmarshalDemo {

    public static void main(String[] args) throws Exception {
        JAXBContext jc = JAXBContext.newInstance("org.example.customer");
        Unmarshaller unmarshaller = jc.createUnmarshaller();

        // Unmarshal Customer
        File customerXML = new File("src/org/example/customer/customer.xml");
        Customer customer = (Customer) unmarshaller.unmarshal(customerXML);

        // Unmarshal Shipping Address
        File shippingXML = new File("src/org/example/customer/shipping.xml");
        JAXBElement<AddressType> je = (JAXBElement<AddressType>) unmarshaller.unmarshal(shippingXML);
        AddressType shipping = je.getValue();
    }
}

Unmarshal Demo – JAXBIntrospector

If you don't want to deal with remembering whether the result of the unmarshal operation will be a domain object or a JAXBElement, you can use the JAXBIntrospector.getValue(Object) method to always get the domain object.
package org.example.customer;

import java.io.File;
import javax.xml.bind.*;

public class JAXBIntrospectorDemo {

    public static void main(String[] args) throws Exception {
        JAXBContext jc = JAXBContext.newInstance("org.example.customer");
        Unmarshaller unmarshaller = jc.createUnmarshaller();

        // Unmarshal Customer
        File customerXML = new File("src/org/example/customer/customer.xml");
        Customer customer = (Customer) JAXBIntrospector.getValue(unmarshaller.unmarshal(customerXML));

        // Unmarshal Shipping Address
        File shippingXML = new File("src/org/example/customer/shipping.xml");
        AddressType shipping = (AddressType) JAXBIntrospector.getValue(unmarshaller.unmarshal(shippingXML));
    }
}

Marshal Operation

You can directly marshal an object annotated with @XmlRootElement to XML. Classes corresponding to @XmlElementDecl annotations must first be wrapped in an instance of JAXBElement. The factory method you annotated with @XmlElementDecl is the easiest way to do this. The factory method is in the ObjectFactory class if you generated your model from an XML schema.

package org.example.customer;

import javax.xml.bind.*;

public class MarshalDemo {

    public static void main(String[] args) throws Exception {
        JAXBContext jc = JAXBContext.newInstance("org.example.customer");
        Marshaller marshaller = jc.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);

        // Create Domain Objects
        AddressType billingAddress = new AddressType();
        billingAddress.setStreet("1 Any Street");
        Customer customer = new Customer();
        customer.setBillingAddress(billingAddress);

        // Marshal Customer
        marshaller.marshal(customer, System.out);

        // Marshal Billing Address
        ObjectFactory objectFactory = new ObjectFactory();
        JAXBElement<AddressType> je = objectFactory.createBillingAddress(billingAddress);
        marshaller.marshal(je, System.out);
    }
}

Output

Below is the output from running the demo code.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<customer xmlns="http://www.example.org/customer">
    <billing-address>
        <street>1 Any Street</street>
    </billing-address>
</customer>

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<billing-address xmlns="http://www.example.org/customer">
    <street>1 Any Street</street>
</billing-address>

Reference: JAXB and Root Elements from our JCG partner Blaise Doughan at the Java XML & JSON Binding blog....
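Addendum to the unmarshal demo: the QName carried by the returned JAXBElement is what tells the two address root elements apart. Below is a minimal, self-contained sketch of that check; JAXBElement itself is deliberately left out so the snippet runs on any JDK, but in real code you would apply the same comparison to je.getName().

```java
import javax.xml.namespace.QName;

public class QNameCheckDemo {

    // Mirrors the decision you would make on je.getName() after unmarshalling
    static String classify(QName name) {
        if ("shipping-address".equals(name.getLocalPart())) {
            return "shipping address";
        }
        return "billing address";
    }

    public static void main(String[] args) {
        QName shipping = new QName("http://www.example.org/customer", "shipping-address");
        QName billing = new QName("http://www.example.org/customer", "billing-address");
        System.out.println(classify(shipping));
        System.out.println(classify(billing));
    }
}
```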

Java Thread at RUNNABLE state is not really running

Recently, I was doing analysis and tuning of a Java application server installation in order to identify the bottlenecks and fix them. The most common action in such a procedure is to retrieve many thread dumps while the system is under load. Please have in mind that heavy load may (in some cases) have side effects that lead us to wrong conclusions, so a more "controlled" load is preferable to a real heavy load. When the system is under load, you will notice that many Java threads are in the RUNNABLE state, but they are not really running; they are waiting for "something". The most common reasons that cause threads to wait even though they are in the RUNNABLE state are the following:

1. Insufficient CPU resources: When you have more running threads than virtual CPUs, it is normal to have delays from context switching, the kernel, OS jobs and other processes on the system.
2. Insufficient RAM: If your RAM is not enough, then your system will use swap, and this is always a problem.
3. I/O: When a thread is in a read() or write() call waiting for data to read or write, the thread is in the RUNNABLE state but it is not actually running.
4. Slow network: This is related to #3; the slower the network, the longer the delays for the "running" thread(s) performing network operations.
5. Process priority: Processes can have different priorities. If the JVM process runs with low priority, then other processes will run more frequently on a CPU. You can check this using tools like top (GNU/Linux), prstat (Solaris) or Task Manager (Windows).
6. Garbage collection (GC): When GC is running, there are points (stop-the-world pauses) where all threads of the JVM (except GC threads) freeze. At these points, GC deletes the unreachable objects, freeing up heap memory (but only that).
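The I/O case (#3 in the list above) is easy to demonstrate: a thread blocked in a native socket read() never reports WAITING; it stays RUNNABLE even though it makes no progress. A minimal sketch (the port is ephemeral and the sleep duration is arbitrary):

```java
import java.net.ServerSocket;
import java.net.Socket;

public class RunnableButWaiting {

    public static void main(String[] args) throws Exception {
        // Open a local server socket and a thread that blocks reading from it.
        ServerSocket server = new ServerSocket(0);
        Thread reader = new Thread(() -> {
            try (Socket s = new Socket("localhost", server.getLocalPort())) {
                s.getInputStream().read(); // blocks in native I/O; no data ever arrives
            } catch (Exception ignored) {
            }
        });
        reader.setDaemon(true); // let the JVM exit while the thread is still blocked
        reader.start();
        Socket accepted = server.accept(); // complete the connection but send nothing
        Thread.sleep(500);                 // give the reader time to block in read()
        System.out.println(reader.getState()); // reports RUNNABLE despite being blocked
    }
}
```

The same thread would show up as RUNNABLE in a thread dump, which is exactly why RUNNABLE alone tells you little without looking at the stack trace beneath it.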
We have to use a GC strategy (like CMS or G1) that minimizes the frequency and duration of the stop-the-world pauses. The only reason that is completely caused by the JVM is the last one (GC activity). All the other points mostly depend on the OS and hardware; thus, we must always monitor the system (OS and hardware) too, not only the JVM. You must also have in mind that Java does not use its own threading model: the current JVM (HotSpot) uses native OS threads, and thread scheduling is implemented by the underlying OS.

Reference: Java Thread at RUNNABLE state is not really running from our JCG partner Adrianos Dadis at the Java, Integration and the virtues of source blog....

Exception: java.lang.AbstractMethodError

java.lang.AbstractMethodError is usually thrown when we try to invoke an abstract method. Generally this error is identified at compile time; if it is thrown at run-time then the class must have been incompatibly changed (made incompatible with pre-existing classes). Hence it is a subclass of IncompatibleClassChangeError. We know that an abstract method cannot be invoked, and if we try to do so we get a compile-time error. So how can this error be thrown at run-time? The reason is binary incompatibility. What does that mean? Whenever a class is modified, other classes that refer to this (modified) class are not aware of the changes made in it, so all the classes must be compiled as a whole. If not, you may encounter one of the subclasses of IncompatibleClassChangeError. This error indicates that the method you invoke has now been converted into an abstract method. See the following example to get an idea of this error.

class B {
    public void display() {
        System.out.println("I am inside B");
    }
}

public class A extends B {
    public static void main(String args[]) {
        A a = new A();
        a.display();
    }
}

Output:

C:\blog>javac A.java

C:\blog>java A
I am inside B

Now I am going to convert the display() method into an abstract method and compile it alone.

abstract class B {
    public abstract void display();
}

Output:

C:\blog>javac A.java

C:\blog>java A
I am inside B

C:\blog>javac B.java

C:\blog>java A
Exception in thread "main" java.lang.AbstractMethodError: B.display()V
        at A.display(A.java:3)
        at A.main(A.java:8)

As you can see, the reason this exception was thrown at run-time is that I had not compiled the classes as a whole. So whenever you make changes to existing classes, ensure that you compile the classes as a whole. It is therefore not a good practice to change a method into an abstract method in classes that are distributed. Mostly this kind of error occurs when you use a third-party library in your application.
If this error still appears at run-time even when you compile the package as a whole, then you must check your library and classpath settings. The compiler searches for classes in system libraries such as the bootstrap and extension libraries, as well as the current directory, but the JVM searches for third-party classes only in the specified classpath. If you accidentally placed the older version in the system libraries and the newer version in the classpath, you will not be notified of this error even if you compile the whole package. So ensure that the settings relevant to the older package have been removed.

Reference: java.lang.AbstractMethodError from our JCG partner Ganesh Bhuddhan at the java errors and exceptions blog....
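As a small illustration of what "the method is now abstract" means at the class level (not a cure for the error itself), reflection can tell you whether a method loaded at run-time has an implementation. The class and method probed here are stand-ins chosen only because CharSequence.length() is known to be abstract:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class AbstractCheckDemo {

    public static void main(String[] args) throws Exception {
        // CharSequence.length() has no default implementation, so it is abstract.
        Method m = CharSequence.class.getMethod("length");
        System.out.println(Modifier.isAbstract(m.getModifiers()));
    }
}
```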

How to build a DIY Service Repository

Every Jedi faces the moment in their life when their Lightsaber simply fails to perform as expected and he or she has to bite the bullet and build a better one. Not being a Jedi I clearly have no use for a Lightsaber, but I did have a recurring irritation in the form of service registries and repositories. These tools often claim to support good SOA principles and practices (like discoverability and contract-first service development) but in my view they often fail to live up to the hype. Service registries/repositories fall into two basic camps: the commercial offerings from the usual large vendors, and the smaller open-source attempts. Being a lightweight, agile kind of guy (not literally, sadly) I'm constrained by a strong value-vs-cost ethic. To my mind the commercial offerings are basically too bulky, too demanding and too expensive to be of any interest. That leaves the open-source offerings, which are rather thin on the ground, can have limited functionality, and are not terribly well thought through in terms of the functionality that I think a multi-disciplined team of users would need. So, to cut a long story short, I decided to build my own 'DIY Simple Service Repository' using low-cost open-source tools and technologies.

What I wanted from my Simple Service Repository.

A good service repository should support the principles of good service-orientation. Specifically, a service repository should allow for a high degree of service contract standardisation, support contract versioning, promote contract-first service development and enable easy service discoverability (preferably both at design time and at compile time). Let me explain what I mean by these requirements. Contract standardisation is the practice of supporting abstraction, loose coupling, federation and reuse in service contracts by centralising artefacts like XML Schemas, abstract WSDLs and WS-Policy documents.
Reuse of XML Schemas is particularly important as it is a key enabler for 'intrinsic interoperability', whereby services communicate using the same data model. By having all my contracts, data models and policies in one place I can reuse them easily and as often as I like. Contract versioning can be quite a difficult process, but I want a simple solution where all the decisions regarding the versioning of contracts, policies and data models are mine. I'm not going to ask much from my simple service repository: I simply want it to hold 'releases' in separate version-controlled directories that are independent of each other. I'd like releases to be addressable via a URL and viewable in the user's browser. Contract-first development (of SOAP-based web services) is important because it ensures that the service contracts remain loosely coupled and independent of the underlying implementation technology. Contract-first often entails a service designer designing the required contracts and developers 'importing' them into their build as the basis for a service's implementation code. This can often be a source of some discomfort for developers who may be used to controlling their own destiny, so making it an easy process can help to overcome any potential objections that may arise. Finally, easy service discoverability benefits service-oriented architectures because by making services highly visible we can reuse them more readily and also prevent 'accidental' duplication from occurring. Duplicate, competing, unused or redundant services can confuse and dilute the effectiveness and power of your SOA, hampering its ability to serve its community. These issues can be avoided (and reuse promoted) if service contracts can be easily discovered by anyone at any time.
By creating a place to centrally store service contracts (and other design-time information like WS-I BP compliance documents or human-readable interface documentation) we can achieve all of these goals and add some real value to our SOA analysis, design and development practices.

Enter: The SOAGrowers 'Simple Service Repository'.

So here is my solution. Just follow my 3 simple steps and you'll have a simple service repository in no time…

1. Create a Java web application (the Simple Service Repository application) using Maven. The source for this project is hosted in my SVN server. The application includes browsable folders for the WSDL, XSD and WS-Policy documents, and SVN keeps everything nicely version controlled. HTML and CSS provide the basic glue that binds everything together, but you could investigate using a wiki if you don't like HTML. I've included some screenshots below if you want to see what it looks like in real life.

2. Build & deploy the Simple Service Repository application to an application server. Jenkins is an open-source continuous integration server and sits as another application on my Glassfish server. Jenkins is configured to build and deploy my application whenever it notices a change in the SVN repository. Glassfish 3.1 is my personal application server of choice for Java web applications; other servers could work just as well. Once hosted, the Simple Service Repository's HTML pages are instantly available via a browser, and I've also used a specific Glassfish deployment descriptor which makes my content folders browsable by default. These features help satisfy my basic 'discoverability at design-time' requirement. When Jenkins builds my service it calls Maven's 'install', which places the Simple Service Repository web application [WAR] and all the service contracts it contains into my Maven repository as a versioned Maven artifact. This is very handy for the next bit…

3. Build a SOAP-based web service from a contract hosted in the Simple Service Repository (and test it). Maven comes to the rescue again: a Maven 'copy' plugin extracts the service contract from the WAR in the Maven artifact repository during the 'init' phase, before the JAX-WS plugin creates a service implementation framework in Java from the extracted WSDL contract. If I didn't like the copy approach I could probably ask wsimport to just use the URL for the contract (pointing to the Simple Service Repository's copy of the WSDL on the application server). The rest is plain sailing – normal boilerplate JAX-WS service implementation code. In my build I use the Cargo plugin for Maven to deploy the service implementation to Glassfish and SoapUI's Maven plugin to run a service integration test suite during every build. On the build server, this service is also built by Jenkins, but this time it's triggered by the Simple Service Repository application being successfully re-built. That way, if the service contract for my service changes, my application gets immediately re-built and re-tested so that I'll know if I've introduced a defect into the system. In some very simple cases, the service project is so light that it only contains a few class files. Because JAX-WS can do annotation-based deployment, there can be very little in the way of metadata like deployment descriptors.

An overall view of the Simple Service Repository solution in action.

A job well done?

So there you have it: version 0.1 of the Simple Service Repository is complete. It's storing my service contracts centrally, it's making them discoverable, it's supporting contract-first development and it's alerting me if I introduce changes to my contracts that destabilise my service implementations. It's even supporting simple release-based versioning of contracts and data models. To my mind it fits the brief perfectly.
It's highly available and accessible, repeatable, manageable, and provides a valuable platform that supports closer collaboration between service designers and service developers. More importantly, it was very quick and easy to create. It requires zero code (if you don't count HTML as code), it cost me about a day in development time, and it draws nicely on existing low-cost open-source tools and techniques. It also requires very little infrastructure and will run nicely on a normal laptop. I could probably even put it into the cloud without too much additional development effort. In summary, I'm still no Jedi (unfortunately), but I'm happier with my 'DIY' Simple Service Repository than I have been with anyone else's.

Find out more…

Would you like a copy of the Simple Service Repository? Would you like me to open-source it? Comment, subscribe or contact me for more information.

Demo: You can see an online demo of the repository detailed in this article by clicking on the screenshot below.

Reference: How to build a DIY Service Repository from our JCG partner Ben Wilcock at the SOA, BPM, Agile & Java blog....

Groovy DSL – A Simple Example

Domain Specific Languages (DSLs) have become a valuable part of the Groovy idiom. DSLs are used in native Groovy builders, Grails and GORM, and testing frameworks. To a developer, DSLs are consumable and understandable, which makes implementation more fluid as compared to traditional programming. But how is a DSL implemented? How does it work behind the scenes? This article will demonstrate a simple DSL that can get a developer kick-started on the basic concepts.

What is a DSL

DSLs are meant to target a particular type of problem. They are short, expressive means of programming that fit well in a narrow context. For example, with GORM you can express a Hibernate mapping with a DSL rather than XML:

static mapping = {
    table 'person'
    columns {
        name column: 'name'
    }
}

Much of the theory for DSLs and the benefits they deliver is well documented. Refer to these sources as a starting point: Martin Fowler – DSL, Writing Domain Specific Languages, Groovy Programming, Groovy – Chapter 18.

A Simple DSL Example in Groovy

The following example offers a simplified view of implementing an internal DSL. Frameworks have much more advanced methods of creating a DSL. However, this example does highlight the closure delegation and Meta Object Protocol concepts that are essential to understanding the inner workings of a DSL.

Requirements Overview

Imagine that a customer needs a memo generator. The memo needs to have a few simple fields, such as 'to', 'from', and 'body'. The memo can also have sections such as 'Summary' or 'Important'. The sections are dynamic and can be anything on demand. In addition, the memo needs to be output in three formats: xml, html, and text. We elect to implement this as a DSL in Groovy. The DSL usage looks like this:

MemoDsl.make {
    to 'Nirav Assar'
    from 'Barack Obama'
    body 'How are things? We are doing well. Take care'
    idea 'The economy is key'
    request 'Please vote for me'
    xml
}

The output from the code yields:

<memo>
  <to>Nirav Assar</to>
  <from>Barack Obama</from>
  <body>How are things? We are doing well. Take care</body>
  <idea>The economy is key</idea>
  <request>Please vote for me</request>
</memo>

The last line in the DSL can also be changed to 'html' or 'text'. This affects the output format.

Implementation

A static method that accepts a closure is a hassle-free way to implement a DSL. In the memo example, the class MemoDsl has a make method. It creates an instance and delegates all calls in the closure to the instance. This is the mechanism by which the 'to' and 'from' sections end up executing methods inside the MemoDsl class. Once the to() method is called, we store the text in the instance for formatting later on.

class MemoDsl {

    String toText
    String fromText
    String body
    def sections = []

    /**
     * This method accepts a closure which is essentially the DSL. Delegate the
     * closure methods to the DSL class so the calls can be processed.
     */
    def static make(closure) {
        MemoDsl memoDsl = new MemoDsl()
        // any method called in the closure will be delegated to the memoDsl instance
        closure.delegate = memoDsl
        closure()
    }

    /**
     * Store the parameter as a variable and use it later to output a memo
     */
    def to(String toText) {
        this.toText = toText
    }

    def from(String fromText) {
        this.fromText = fromText
    }

    def body(String bodyText) {
        this.body = bodyText
    }
}

Dynamic Sections

When the closure includes a method that is not present in the MemoDsl class, Groovy identifies it as a missing method. With Groovy's Meta Object Protocol, the methodMissing hook on the class is invoked. This is how we handle sections for the memo. In the client code above we have entries for idea and request:

MemoDsl.make {
    to 'Nirav Assar'
    from 'Barack Obama'
    body 'How are things? We are doing well. Take care'
    idea 'The economy is key'
    request 'Please vote for me'
    xml
}

The sections are processed with the following code in MemoDsl. It creates a Section object and appends it to a list in the instance.

/**
 * When a method is not recognized, assume it is a title for a new section. Create a simple
 * object that contains the method name and the parameter, which is the body.
 */
def methodMissing(String methodName, args) {
    def section = new Section(title: methodName, body: args[0])
    sections << section
}

Processing Various Outputs

Finally, the most interesting part of the DSL is how we process the various outputs. The final line in the closure specifies the output desired. When a closure contains a token such as 'xml' with no parameters, Groovy treats it as a property access and calls the corresponding 'getter' method. Thus we need to implement getXml() to catch the delegated call (note the double-quoted GString "$s.title", which is required for interpolation):

/**
 * 'get' methods get called from the DSL by convention. Due to Groovy closure delegation,
 * we had to place the MarkupBuilder and StringWriter code in a static method, as the
 * delegate of the closure did not have access to System.out.
 */
def getXml() {
    doXml(this)
}

/**
 * Use MarkupBuilder to create the xml output
 */
private static doXml(MemoDsl memoDsl) {
    def writer = new StringWriter()
    def xml = new MarkupBuilder(writer)
    xml.memo() {
        to(memoDsl.toText)
        from(memoDsl.fromText)
        body(memoDsl.body)
        // cycle through the stored section objects to create an xml tag
        for (s in memoDsl.sections) {
            "$s.title"(s.body)
        }
    }
    println writer
}

The code for html and text is quite similar. The only variation is how the output is formatted.

Entire Code

The code in its entirety is displayed next. The best approach I found was to design the DSL client code and the specified formats first, then tackle the implementation. I used TDD and JUnit to drive my implementation. Note that I did not go the extra mile to do asserts on the system output in the tests, although this could be easily enhanced to do so.
The code is fully executable inside any IDE. Run the various tests to view the DSL output.

package com.solutionsfit.dsl.memotemplate

class MemoDslTest extends GroovyTestCase {

    void testDslUsage_outputXml() {
        MemoDsl.make {
            to 'Nirav Assar'
            from 'Barack Obama'
            body 'How are things? We are doing well. Take care'
            idea 'The economy is key'
            request 'Please vote for me'
            xml
        }
    }

    void testDslUsage_outputHtml() {
        MemoDsl.make {
            to 'Nirav Assar'
            from 'Barack Obama'
            body 'How are things? We are doing well. Take care'
            idea 'The economy is key'
            request 'Please vote for me'
            html
        }
    }

    void testDslUsage_outputText() {
        MemoDsl.make {
            to 'Nirav Assar'
            from 'Barack Obama'
            body 'How are things? We are doing well. Take care'
            idea 'The economy is key'
            request 'Please vote for me'
            text
        }
    }
}

package com.solutionsfit.dsl.memotemplate

import groovy.xml.MarkupBuilder

/**
 * Processes a simple DSL to create various formats of a memo: xml, html, and text
 */
class MemoDsl {

    String toText
    String fromText
    String body
    def sections = []

    /**
     * This method accepts a closure which is essentially the DSL. Delegate the
     * closure methods to the DSL class so the calls can be processed.
     */
    def static make(closure) {
        MemoDsl memoDsl = new MemoDsl()
        // any method called in the closure will be delegated to the memoDsl instance
        closure.delegate = memoDsl
        closure()
    }

    /**
     * Store the parameter as a variable and use it later to output a memo
     */
    def to(String toText) {
        this.toText = toText
    }

    def from(String fromText) {
        this.fromText = fromText
    }

    def body(String bodyText) {
        this.body = bodyText
    }

    /**
     * When a method is not recognized, assume it is a title for a new section. Create a simple
     * object that contains the method name and the parameter, which is the body.
     */
    def methodMissing(String methodName, args) {
        def section = new Section(title: methodName, body: args[0])
        sections << section
    }

    /**
     * 'get' methods get called from the DSL by convention. Due to Groovy closure delegation,
     * we had to place the MarkupBuilder and StringWriter code in static methods, as the
     * delegate of the closure did not have access to System.out.
     */
    def getXml() {
        doXml(this)
    }

    def getHtml() {
        doHtml(this)
    }

    def getText() {
        doText(this)
    }

    /**
     * Use MarkupBuilder to create the xml output
     */
    private static doXml(MemoDsl memoDsl) {
        def writer = new StringWriter()
        def xml = new MarkupBuilder(writer)
        xml.memo() {
            to(memoDsl.toText)
            from(memoDsl.fromText)
            body(memoDsl.body)
            // cycle through the stored section objects to create an xml tag
            for (s in memoDsl.sections) {
                "$s.title"(s.body)
            }
        }
        println writer
    }

    /**
     * Use MarkupBuilder to create the html output
     */
    private static doHtml(MemoDsl memoDsl) {
        def writer = new StringWriter()
        def html = new MarkupBuilder(writer)
        html.html() {
            head {
                title('Memo')
            }
            body {
                h1('Memo')
                h3("To: ${memoDsl.toText}")
                h3("From: ${memoDsl.fromText}")
                p(memoDsl.body)
                // cycle through the stored section objects and create an uppercase/bold section with body
                for (s in memoDsl.sections) {
                    p {
                        b(s.title.toUpperCase())
                    }
                    p(s.body)
                }
            }
        }
        println writer
    }

    /**
     * Build the plain-text output
     */
    private static doText(MemoDsl memoDsl) {
        String template = "Memo\nTo: ${memoDsl.toText}\nFrom: ${memoDsl.fromText}\n${memoDsl.body}\n"
        def sectionStrings = ''
        for (s in memoDsl.sections) {
            sectionStrings += s.title.toUpperCase() + '\n' + s.body + '\n'
        }
        template += sectionStrings
        println template
    }
}

package com.solutionsfit.dsl.memotemplate

class Section {
    String title
    String body
}

Reference: Groovy DSL – A Simple Example from our JCG partner Nirav Assar at the Assar Java Consulting blog....
java-logo

Java Exception: java.lang.NoSuchMethodError

If you look at the error message java.lang.NoSuchMethodError, you can see that the Java Virtual Machine is indicating that the method you invoked is not available in the class or interface. You may also have seen this error thrown when executing a class that has no public static void main() method. When you try to invoke a method that no longer exists in a class, the compiler itself shows the error message "cannot find symbol". So how can this error be thrown when launching a program or an application? I have explained the cause using the following programs. Let's define two classes, Nomethod and Pro1, as follows:

Nomethod class:

import java.util.*;

class Nomethod
{
    public static void main(String args[])
    {
        Pro1 s = new Pro1();
        s.display();
    }
}

Pro1 class:

class Pro1
{
    public void display()
    {
        System.out.println("I am inside display");
    }
}

When you execute this program it will work fine without any errors. Now look at what happens when I change the class Pro1 as follows and compile this class alone.

Example 1:

class Pro1
{
}

Example 2:

class Pro1
{
    public int display()
    {
        System.out.println("I am inside display");
        return 1; // for example, I have included a statement like this
    }
}

Now if you execute the class Nomethod without recompiling it, you will be confronted with java.lang.NoSuchMethodError at run-time.

1. If you change the class Pro1 as shown in Example 1, the error is thrown because there is no method display() available in that class.
2. In Example 2, the error is thrown because the signature of the method display() has changed.

If you understand these examples, then you can see why this error is also thrown when executing a class that has no main() method.
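Before moving on, here is a small defensive sketch of my own (not from the original article): the reflective counterpart of this error, the checked NoSuchMethodException, lets a program probe for a method at run-time instead of failing with a hard NoSuchMethodError.

```java
import java.lang.reflect.Method;

// Sketch: detect a missing (or re-signed) public no-arg method at runtime
// via reflection, instead of letting the JVM throw NoSuchMethodError when
// a stale binary calls it.
public class MethodProbe {

    // Returns true if clazz exposes a public no-arg method with this name.
    static boolean hasNoArgMethod(Class<?> clazz, String name) {
        try {
            Method m = clazz.getMethod(name); // no-arg lookup
            return m.getParameterCount() == 0;
        } catch (NoSuchMethodException e) {
            return false; // the checked cousin of NoSuchMethodError
        }
    }

    public static void main(String[] args) {
        System.out.println(hasNoArgMethod(String.class, "length"));  // true
        System.out.println(hasNoArgMethod(String.class, "display")); // false
    }
}
```

A caller could use such a probe to degrade gracefully when it suspects it was compiled against a different version of a dependency.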
The real fact is that binary compatibility between the pre-existing binaries (classes) and the new binaries (the modified classes) has been broken. When you change the signature of a method, or delete a method, in a particular class and compile that class alone, the other classes that invoke the method have no idea about its new state, and the error is thrown at run-time. The same applies to interfaces: if you change the signature of a method or delete a method from an interface, this error will also be thrown.

What Could Be The Solution For This?

If you recompile the class that invokes the modified or deleted method, the error is shown at compile-time itself and you can take the necessary steps to resolve it.

Note: Things may get even worse. Consider a situation where, even if you recompile the class, you are not alerted to the error. What would you do? Say, for example, you include an older version of a package in your project and place it in the extension library, while the newer package (in which the signature of the method has changed) is on the class path. When compiling the classes, the compiler searches the extension and bootstrap libraries to resolve references, but the Java Virtual Machine searches only the class path (for third-party libraries). So when using a new package in your application, make sure the settings for the older version have been updated, and read the documentation of the newer package to learn about the changes that have been made to it.

Reference: java.lang.NoSuchMethodError from our JCG partner Ganesh Bhuddhan at the java errors and exceptions blog....
java-logo

Memory Access Patterns Are Important

In high-performance computing it is often said that the cost of a cache-miss is the largest performance penalty for an algorithm. For many years the increase in speed of our processors has greatly outstripped latency gains to main-memory. Bandwidth to main-memory has greatly increased via wider, multi-channel buses, but latency has not significantly reduced. To hide this latency our processors employ ever more complex cache sub-systems with many layers.

The 1994 paper "Hitting the memory wall: implications of the obvious" describes the problem and goes on to argue that caches do not ultimately help because of compulsory cache-misses. I aim to show that by using access patterns which display consideration for the cache hierarchy, this conclusion is not inevitable.

Let's start putting the problem in context with some examples. Our hardware tries to hide the main-memory latency via a number of techniques. Basically, three major bets are taken on memory access patterns:

Temporal: Memory accessed recently will likely be required again soon.
Spatial: Adjacent memory is likely to be required soon.
Striding: Memory access is likely to follow a predictable pattern.

To illustrate these three bets in action, let's write some code and measure the results:

1. Walk through memory in a linear fashion, being completely predictable.
2. Pseudo-randomly walk round memory within a restricted area, then move on. This restricted area is what is commonly known as an operating system page of memory.
3. Pseudo-randomly walk around a large area of the heap.

Code

The following code should be run with the -Xmx4g JVM option.
public class TestMemoryAccessPatterns
{
    private static final int LONG_SIZE = 8;
    private static final int PAGE_SIZE = 2 * 1024 * 1024;
    private static final int ONE_GIG = 1024 * 1024 * 1024;
    private static final long TWO_GIG = 2L * ONE_GIG;
    private static final int ARRAY_SIZE = (int)(TWO_GIG / LONG_SIZE);
    private static final int WORDS_PER_PAGE = PAGE_SIZE / LONG_SIZE;
    private static final int ARRAY_MASK = ARRAY_SIZE - 1;
    private static final int PAGE_MASK = WORDS_PER_PAGE - 1;
    private static final int PRIME_INC = 514229;

    private static final long[] memory = new long[ARRAY_SIZE];

    static
    {
        for (int i = 0; i < ARRAY_SIZE; i++)
        {
            memory[i] = 777;
        }
    }

    public enum StrideType
    {
        LINEAR_WALK
        {
            public int next(final int pageOffset, final int wordOffset, final int pos)
            {
                return (pos + 1) & ARRAY_MASK;
            }
        },

        RANDOM_PAGE_WALK
        {
            public int next(final int pageOffset, final int wordOffset, final int pos)
            {
                return pageOffset + ((pos + PRIME_INC) & PAGE_MASK);
            }
        },

        RANDOM_HEAP_WALK
        {
            public int next(final int pageOffset, final int wordOffset, final int pos)
            {
                return (pos + PRIME_INC) & ARRAY_MASK;
            }
        };

        public abstract int next(int pageOffset, int wordOffset, int pos);
    }

    public static void main(final String[] args)
    {
        final StrideType strideType;
        switch (Integer.parseInt(args[0]))
        {
            case 1:
                strideType = StrideType.LINEAR_WALK;
                break;
            case 2:
                strideType = StrideType.RANDOM_PAGE_WALK;
                break;
            case 3:
                strideType = StrideType.RANDOM_HEAP_WALK;
                break;
            default:
                throw new IllegalArgumentException("Unknown StrideType");
        }

        for (int i = 0; i < 5; i++)
        {
            perfTest(i, strideType);
        }
    }

    private static void perfTest(final int runNumber, final StrideType strideType)
    {
        final long start = System.nanoTime();

        int pos = -1;
        long result = 0;
        for (int pageOffset = 0; pageOffset < ARRAY_SIZE; pageOffset += WORDS_PER_PAGE)
        {
            for (int wordOffset = pageOffset, limit = pageOffset + WORDS_PER_PAGE;
                 wordOffset < limit;
                 wordOffset++)
            {
                pos = strideType.next(pageOffset, wordOffset, pos);
                result += memory[pos];
            }
        }

        final long duration = System.nanoTime() - start;
        final double nsOp = duration / (double)ARRAY_SIZE;

        if (208574349312L != result)
        {
            throw new IllegalStateException();
        }

        System.out.format("%d - %.2fns %s\n",
                          Integer.valueOf(runNumber),
                          Double.valueOf(nsOp),
                          strideType);
    }
}

Results

Intel U4100 @ 1.3GHz, 4GB RAM DDR2 800MHz, Windows 7 64-bit, Java 1.7.0_05
===========================================
0 - 2.38ns LINEAR_WALK
1 - 2.41ns LINEAR_WALK
2 - 2.35ns LINEAR_WALK
3 - 2.36ns LINEAR_WALK
4 - 2.39ns LINEAR_WALK

0 - 12.45ns RANDOM_PAGE_WALK
1 - 12.27ns RANDOM_PAGE_WALK
2 - 12.17ns RANDOM_PAGE_WALK
3 - 12.22ns RANDOM_PAGE_WALK
4 - 12.18ns RANDOM_PAGE_WALK

0 - 152.86ns RANDOM_HEAP_WALK
1 - 151.80ns RANDOM_HEAP_WALK
2 - 151.72ns RANDOM_HEAP_WALK
3 - 151.91ns RANDOM_HEAP_WALK
4 - 151.36ns RANDOM_HEAP_WALK

Intel i7-860 @ 2.8GHz, 8GB RAM DDR3 1333MHz, Windows 7 64-bit, Java 1.7.0_05
=============================================
0 - 1.06ns LINEAR_WALK
1 - 1.05ns LINEAR_WALK
2 - 0.98ns LINEAR_WALK
3 - 1.00ns LINEAR_WALK
4 - 1.00ns LINEAR_WALK

0 - 3.80ns RANDOM_PAGE_WALK
1 - 3.85ns RANDOM_PAGE_WALK
2 - 3.79ns RANDOM_PAGE_WALK
3 - 3.65ns RANDOM_PAGE_WALK
4 - 3.64ns RANDOM_PAGE_WALK

0 - 30.04ns RANDOM_HEAP_WALK
1 - 29.05ns RANDOM_HEAP_WALK
2 - 29.14ns RANDOM_HEAP_WALK
3 - 28.88ns RANDOM_HEAP_WALK
4 - 29.57ns RANDOM_HEAP_WALK

Intel i7-2760QM @ 2.40GHz, 8GB RAM DDR3 1600MHz, Linux 3.4.6 kernel 64-bit, Java 1.7.0_05
=================================================
0 - 0.91ns LINEAR_WALK
1 - 0.92ns LINEAR_WALK
2 - 0.88ns LINEAR_WALK
3 - 0.89ns LINEAR_WALK
4 - 0.89ns LINEAR_WALK

0 - 3.29ns RANDOM_PAGE_WALK
1 - 3.35ns RANDOM_PAGE_WALK
2 - 3.33ns RANDOM_PAGE_WALK
3 - 3.31ns RANDOM_PAGE_WALK
4 - 3.30ns RANDOM_PAGE_WALK

0 - 9.58ns RANDOM_HEAP_WALK
1 - 9.20ns RANDOM_HEAP_WALK
2 - 9.44ns RANDOM_HEAP_WALK
3 - 9.46ns RANDOM_HEAP_WALK
4 - 9.47ns RANDOM_HEAP_WALK

Analysis

I ran the code on 3 different CPU architectures illustrating generational steps forward for Intel.
It is clear from the results that each generation has become progressively better at hiding the latency to main-memory, based on the 3 bets described above, for a relatively small heap. This is because the size and sophistication of the various caches keep improving. However, as memory size increases they become less effective. For example, if the array is doubled to 4GB in size, then the average latency increases from ~30ns to ~55ns for the i7-860 doing the random heap walk.

It seems that for the linear walk case, memory latency does not exist. However, as we walk around memory in an ever more random pattern, the latency starts to become very apparent.

The random heap walk produced an interesting result. This is our worst-case scenario, and given the hardware specifications of these systems, we could be looking at 150ns, 65ns, and 75ns for the above tests respectively, based on memory controller and memory module latencies. For the Nehalem (i7-860) I can further subvert the cache sub-system by using a 4GB array, resulting in ~55ns on average per iteration. The i7-2760QM has larger load buffers and TLB caches, and Linux is running with transparent huge pages, which all work to further hide the latency. By playing with different prime numbers for the stride, results can vary wildly depending on processor type, e.g. try PRIME_INC = 39916801 for Nehalem. I'd like to test this on a much larger heap with Sandy Bridge.

The main take-away: the more predictable the pattern of access to memory, the better the cache sub-systems are at hiding main-memory latency. Let's look at these cache sub-systems in a little detail to try and understand the observed results.

Hardware Components

We have many layers of cache, plus the pre-fetchers, to consider for how latency gets hidden. In this section I'll try and cover the major components used to hide latency that our hardware and systems-software friends have put in place.
We will investigate these latency-hiding components and use the Linux perf and Google Lightweight Performance Counters utilities to retrieve the performance counters from our CPUs, which tell us how effective these components are as we execute our programs. Performance counters are CPU-specific, and what I've used here is specific to Sandy Bridge.

Data Caches

Processors typically have 2 or 3 layers of data cache. Each layer as we move out is progressively larger, with increasing latency. The latest Intel processors have 3 layers (L1D, L2, and L3); with sizes 32KB, 256KB, and 4-30MB; and ~1ns, ~4ns, and ~15ns latency respectively for a 3.0GHz CPU.

Data caches are effectively hardware hash tables with a fixed number of slots for each hash value. These slots are known as "ways". An 8-way associative cache will have 8 slots to hold values for addresses that hash to the same cache location. Within these slots the data caches do not store words, they store cache-lines of multiple words. For an Intel processor these cache-lines are typically 64 bytes, that is 8 words on a 64-bit machine. This plays to the spatial bet that adjacent memory is likely to be required soon, which is typically the case if we think of arrays or fields of an object.

Data caches are typically evicted in an LRU manner. Caches work by using a write-back algorithm where stores need only be propagated to main-memory when a modified cache-line is evicted. This gives rise to the interesting phenomenon that a load can cause a write-back to the outer cache layers and eventually to main-memory.
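The spatial bet on cache-lines can be made concrete with a quick sketch of my own (not from the article): summing a matrix row-by-row touches each 64-byte line once for 16 ints, while summing it column-by-column touches a new line on almost every access once the matrix exceeds the caches. Both walks must of course produce the same sum.

```java
// Sketch: the same sum over a 2D array, walked row-major (cache-line
// friendly) vs column-major (cache hostile for large matrices).
public class TraversalOrder {
    static final int N = 2048; // 2048 x 2048 ints = 16MB, beyond typical L1/L2

    static long rowMajor(int[][] m) {
        long sum = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                sum += m[i][j]; // adjacent elements: ~1 miss per 16 accesses
        return sum;
    }

    static long colMajor(int[][] m) {
        long sum = 0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                sum += m[i][j]; // strides a whole row each step
        return sum;
    }

    public static void main(String[] args) {
        int[][] m = new int[N][N];
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                m[i][j] = i + j;

        long t0 = System.nanoTime();
        long a = rowMajor(m);
        long t1 = System.nanoTime();
        long b = colMajor(m);
        long t2 = System.nanoTime();

        System.out.printf("row-major %dms, col-major %dms, sums equal: %b%n",
            (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, a == b);
    }
}
```

The exact timing gap depends on the CPU, but on typical hardware the column-major walk is several times slower, purely because of the access pattern.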
perf stat -e L1-dcache-loads,L1-dcache-load-misses java -Xmx4g TestMemoryAccessPatterns $

Performance counter stats for 'java -Xmx4g TestMemoryAccessPatterns 1':
    1,496,626,053 L1-dcache-loads
      274,255,164 L1-dcache-misses   #  18.32% of all L1-dcache hits

Performance counter stats for 'java -Xmx4g TestMemoryAccessPatterns 2':
    1,537,057,965 L1-dcache-loads
    1,570,105,933 L1-dcache-misses   # 102.15% of all L1-dcache hits

Performance counter stats for 'java -Xmx4g TestMemoryAccessPatterns 3':
    4,321,888,497 L1-dcache-loads
    1,780,223,433 L1-dcache-misses   #  41.19% of all L1-dcache hits

likwid-perfctr -C 2 -g L2CACHE java -Xmx4g TestMemoryAccessPatterns $

java -Xmx4g TestMemoryAccessPatterns 1
+-----------------------+-------------+
|         Event         |   core 2    |
+-----------------------+-------------+
| INSTR_RETIRED_ANY     | 5.94918e+09 |
| CPU_CLK_UNHALTED_CORE | 5.15969e+09 |
| L2_TRANS_ALL_REQUESTS | 1.07252e+09 |
| L2_RQSTS_MISS         | 3.25413e+08 |
+-----------------------+-------------+
+-----------------+-----------+
|     Metric      |  core 2   |
+-----------------+-----------+
| Runtime [s]     | 2.15481   |
| CPI             | 0.867293  |
| L2 request rate | 0.18028   |
| L2 miss rate    | 0.0546988 |
| L2 miss ratio   | 0.303409  |
+-----------------+-----------+

java -Xmx4g TestMemoryAccessPatterns 2
+-----------------------+-------------+
|         Event         |   core 2    |
+-----------------------+-------------+
| INSTR_RETIRED_ANY     | 1.48772e+10 |
| CPU_CLK_UNHALTED_CORE | 1.64712e+10 |
| L2_TRANS_ALL_REQUESTS | 3.41061e+09 |
| L2_RQSTS_MISS         | 1.5547e+09  |
+-----------------------+-------------+
+-----------------+----------+
|     Metric      |  core 2  |
+-----------------+----------+
| Runtime [s]     | 6.87876  |
| CPI             | 1.10714  |
| L2 request rate | 0.22925  |
| L2 miss rate    | 0.104502 |
| L2 miss ratio   | 0.455843 |
+-----------------+----------+

java -Xmx4g TestMemoryAccessPatterns 3
+-----------------------+-------------+
|         Event         |   core 2    |
+-----------------------+-------------+
| INSTR_RETIRED_ANY     | 6.49533e+09 |
| CPU_CLK_UNHALTED_CORE | 4.18416e+10 |
| L2_TRANS_ALL_REQUESTS | 4.67488e+09 |
| L2_RQSTS_MISS         | 1.43442e+09 |
+-----------------------+-------------+
+-----------------+----------+
|     Metric      |  core 2  |
+-----------------+----------+
| Runtime [s]     | 17.474   |
| CPI             | 6.4418   |
| L2 request rate | 0.71973  |
| L2 miss rate    | 0.220838 |
| L2 miss ratio   | 0.306835 |
+-----------------+----------+

Note: The cache-miss rate of the combined L1D and L2 increases significantly as the pattern of access becomes more random.

Translation Lookaside Buffers (TLBs)

Our programs deal with virtual memory addresses that need to be translated to physical memory addresses. Virtual memory systems do this by mapping pages. We need to know the offset for a given page and its size for any memory operation. Page sizes are typically 4KB and are gradually moving to 2MB and greater. Linux introduced Transparent Huge Pages in the 2.6.38 kernel, giving us 2MB pages. The translation of virtual memory pages to physical pages is maintained by the page table. This translation can take multiple accesses to the page table, which is a huge performance penalty. To accelerate this lookup, processors have a small hardware cache at each cache level called the TLB cache. A miss on the TLB cache can be hugely expensive because the page table may not be in a nearby data cache. By moving to larger pages, a TLB cache can cover a larger address range for the same number of entries.
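That coverage claim is simple arithmetic, sketched below; the 64-entry TLB size is an illustrative assumption for the calculation, not a measured value from the article.

```java
// Sketch: the address range a TLB can cover = entry count * page size.
public class TlbCoverage {

    static long coverageBytes(int entries, long pageSizeBytes) {
        return entries * pageSizeBytes;
    }

    public static void main(String[] args) {
        long small = coverageBytes(64, 4L * 1024);        // 4KB pages
        long huge  = coverageBytes(64, 2L * 1024 * 1024); // 2MB huge pages
        System.out.println(small / 1024 + "KB vs " + huge / (1024 * 1024) + "MB");
        // prints: 256KB vs 128MB
    }
}
```

With 2MB pages the same hypothetical 64-entry TLB covers 512 times the address range, which is why huge pages suppress the dTLB misses seen in the counters below.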
perf stat -e dTLB-loads,dTLB-load-misses java -Xmx4g TestMemoryAccessPatterns $

Performance counter stats for 'java -Xmx4g TestMemoryAccessPatterns 1':
    1,496,128,634 dTLB-loads
          310,901 dTLB-misses   #  0.02% of all dTLB cache hits

Performance counter stats for 'java -Xmx4g TestMemoryAccessPatterns 2':
    1,551,585,263 dTLB-loads
          340,230 dTLB-misses   #  0.02% of all dTLB cache hits

Performance counter stats for 'java -Xmx4g TestMemoryAccessPatterns 3':
    4,031,344,537 dTLB-loads
    1,345,807,418 dTLB-misses   # 33.38% of all dTLB cache hits

Note: Even with huge pages employed, we only incur significant TLB misses when randomly walking the whole heap.

Hardware Pre-Fetchers

Hardware will try and predict the next memory access our programs will make and speculatively load that memory into fill buffers. This is done at its simplest level by pre-loading adjacent cache-lines for the spatial bet, or by recognising regular stride-based access patterns, typically less than 2KB in stride length. In the tests below we measure the number of loads that hit a fill buffer from a hardware pre-fetch.

likwid-perfctr -C 2 -t intel -g LOAD_HIT_PRE_HW_PF:PMC0 java -Xmx4g TestMemoryAccessPatterns $

java -Xmx4g TestMemoryAccessPatterns 1
+--------------------+-------------+
|       Event        |   core 2    |
+--------------------+-------------+
| LOAD_HIT_PRE_HW_PF | 1.31613e+09 |
+--------------------+-------------+

java -Xmx4g TestMemoryAccessPatterns 2
+--------------------+--------+
|       Event        | core 2 |
+--------------------+--------+
| LOAD_HIT_PRE_HW_PF | 368930 |
+--------------------+--------+

java -Xmx4g TestMemoryAccessPatterns 3
+--------------------+--------+
|       Event        | core 2 |
+--------------------+--------+
| LOAD_HIT_PRE_HW_PF | 324373 |
+--------------------+--------+

Note: We have a significant success rate for load hits with the pre-fetcher on the linear walk.

Memory Controllers and Row Buffers

Beyond our last-level cache (LLC) sit the memory controllers that manage access to the SDRAM banks.
Memory is organised into rows and columns. To access an address, first the row address must be selected (RAS), then the column address is selected (CAS) within that row to get the word. The row is typically a page in size and is loaded into a row buffer. Even at this stage the hardware is still helping hide the latency: a queue of memory access requests is maintained and re-ordered so that multiple words can be fetched from the same row if possible.

Non-Uniform Memory Access (NUMA)

Systems now have memory controllers on the CPU socket. This move to on-socket memory controllers gave an ~50ns latency reduction over the previous front-side-bus (FSB) and external Northbridge memory controllers. Systems with multiple sockets employ memory interconnects, QPI from Intel, which are used when one CPU wants to access memory managed by another CPU socket. The presence of these interconnects gives rise to the non-uniform nature of server memory access. In a 2-socket system memory may be local or 1 hop away. On an 8-socket system memory can be up to 3 hops away, where each hop adds 20ns latency in each direction.

What does this mean for algorithms?

The difference between an L1D cache-hit and a full miss resulting in main-memory access is 2 orders of magnitude; i.e. <1ns vs. 65-100ns. If algorithms randomly walk around our ever-increasing address spaces, then we are less likely to benefit from the hardware support that hides this latency.

Is there anything we can do about this when designing algorithms and data structures? Yes, there is a lot we can do. If we perform chunks of work on data that is co-located, and we stride around memory in a predictable fashion, then our algorithms can be many times faster. For example, rather than using bucket-and-chain hash tables, like in the JDK, we can employ hash tables using open addressing with linear probing. Rather than using linked-lists or trees with single items in each node, we can store an array of many items in each node.
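A minimal sketch of the open-addressing idea (my own illustration, not code from the article): keys and values live in two flat arrays, so a probe sequence walks adjacent memory, exactly the linear pattern the pre-fetchers and cache-lines reward. For brevity there is no resizing, so capacity must stay comfortably above the entry count, and Integer.MIN_VALUE is reserved as the empty-slot sentinel.

```java
// Sketch: an int->int hash map using open addressing with linear probing.
// A collision probes the *next adjacent slot*, keeping the walk cache friendly.
public class IntIntMap {
    private static final int EMPTY = Integer.MIN_VALUE; // sentinel, not a valid key
    private final int[] keys;
    private final int[] vals;
    private final int mask;

    public IntIntMap(int capacityPow2) { // capacity must be a power of two
        keys = new int[capacityPow2];
        vals = new int[capacityPow2];
        mask = capacityPow2 - 1;
        java.util.Arrays.fill(keys, EMPTY);
    }

    public void put(int key, int value) {
        int i = key & mask;
        while (keys[i] != EMPTY && keys[i] != key) {
            i = (i + 1) & mask; // linear probe: step to the neighbouring slot
        }
        keys[i] = key;
        vals[i] = value;
    }

    public int get(int key, int missing) {
        int i = key & mask;
        while (keys[i] != EMPTY) {
            if (keys[i] == key) {
                return vals[i];
            }
            i = (i + 1) & mask;
        }
        return missing; // hit an empty slot: the key was never inserted
    }

    public static void main(String[] args) {
        IntIntMap map = new IntIntMap(1 << 10);
        for (int k = 0; k < 500; k++) {
            map.put(k, k * 2);
        }
        System.out.println(map.get(21, -1));  // prints 42
        System.out.println(map.get(999, -1)); // prints -1
    }
}
```

Compared to the JDK's bucket-and-chain HashMap, a lookup here never chases a pointer to a separately allocated node; the whole probe sequence sits in a handful of cache-lines.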
Research is advancing on algorithmic approaches that work in harmony with cache sub-systems. One area I find fascinating is Cache Oblivious Algorithms. The name is a bit misleading, but there are some great concepts here for how to improve software performance and better execute in parallel. This article is a great illustration of the performance benefits that can be gained.

Conclusion

To achieve great performance it is important to have sympathy for the cache sub-systems. We have seen in this article what can be achieved by accessing memory in patterns which work with, rather than against, these caches. When designing algorithms and data structures, it is now much more important to consider cache-misses, probably even more so than counting steps in the algorithm. This is not what we were taught in algorithm theory when studying computer science. The last decade has seen some fundamental changes in technology. For me the two most significant are the rise of multi-core, and now big-memory systems with 64-bit address spaces.

One thing is certain: if we want software to execute faster and scale better, we need to make better use of the many cores in our CPUs, and pay attention to memory access patterns.

Reference: Memory Access Patterns Are Important from our JCG partner Martin Thompson at the Mechanical Sympathy blog....
software-development-2-logo

Top 5 SOA gotchas and how to avoid them

After 5 years of designing and building award-winning service-oriented architectures, I thought I'd share my top 5 SOA gotchas and some general hints on how you can avoid them in your SOA programme.

1. Failure to recognise that service-orientation is about design (and not about technology).

Service-orientation is to web services what object orientation is to Java, C# and C++. Service-orientation is a design paradigm, not a specific technology. It is achieved by applying the principles of service-oriented design during the service design process. A service-oriented architecture (aka SOA) is a suite of well-designed and reusable services that follow these design principles. A service-oriented architecture can't be built by simply using the technologies associated with web services (such as SOAP or REST).

Still confused? Consider this real-world example of design vs technology taken from the construction industry. Even if concrete (the technology) is used to construct a new office building, this doesn't automatically mean that the building exhibits the design features of Modernist buildings (the architectural style). Concrete can be used equally well to realise any architectural style from Classical and Gothic to Modernist and International. Modernism is an architectural style, whereas concrete is simply one of a number of technologies that can be used to realise it.

So just because you have SOAP (or REST) web services within your technical architecture, this doesn't mean that your architecture is automatically service-oriented. It's possible to create web services that are not service-oriented, just like it's possible to write Java or C# code that isn't object oriented.

2. Failing to align SOA with the business.

SOA is much more powerful if the services that you deliver have recognisable business alignment and reuse potential.
By delivering services that mirror business activities, it becomes easier to evolve and re-configure those activities when the business changes. When talking to clients, I often describe SOA services as an 'Organisation API'. Services should reflect the capabilities of the organisation, not the technologies within it. The simplest way I know to enable business alignment is to bring architects and analysts together into one group with a shared vision and shared working practices (such as utilising BPMN for both business and service analysis).

3. Failure to share SOA ownership with the business.

There is little point creating flexible, malleable and evolve-able technical services if the business is not committed to leveraging this capability on an ongoing basis. Equally, services designed to be readily reusable are pointless if the business can't discover, interpret and reuse this API to exploit new opportunities. The business should therefore share the responsibility for designing and managing its technical services. In addition, the process of finding and reusing services should be straightforward and accessible, not shrouded in mystery and technical complexity. Basic SOA governance processes and simple service repositories can help to overcome these issues, and can be as simple or as complex as required to fit your organisation's culture.

4. Investing in the wrong tools and technologies.

There are a great many expensive tools and technologies available for SOA, so it's easy to make big mistakes from a very early stage in most SOA programmes. To keep it simple, here are my top 2 technologies to take extra care with.

Business Process Management (BPM). BPM systems are often sold as a mechanism to enable service reuse via continuous business re-engineering, but beware of getting sucked in by salesman waffle. BPM systems can be complex and are not always the answer.
Chucking a BPM system into your shopping cart is unlikely to make a difference unless you have the required 'culture of change' within your organisation. I've seen people waste millions on BPM systems that only get used once because of poor cultural fit and ingrained application silos. When evaluating BPM, always ask yourself two simple questions: Do we need it? Will we use it? If the answer to both is a strong 'yes' then by all means go ahead.

Enterprise Service Bus (ESB). ESB systems are often sold as 'instant service enablers' or 'SOA out of the box', but the smart IT manager should be asking "what kind of services would be exposed to service consumers if I did this?" Would these instant ESB services be well designed and business aligned, or would they just expose existing legacy or proprietary application APIs using new protocols? Would these new ESB services be inherently interoperable and reusable, or would I be exposing proprietary data models to service clients?

Take great care with technology selection and make sure you fully understand the pros and cons before you sign on the dotted line.

5. Failing to create a cohesive architectural strategy.

Mixing and matching architectural strategies is rarely a winning formula. Different technical strategies have different technical outcomes, and these outcomes will often conflict with each other. For example, stating that the corporate strategy is service-orientation whilst also stating that you're standardising on one vendor's integrated applications suite will certainly bring technical conflict. How can you create a vendor-neutral SOA in a vendor-mandated environment? Which takes precedence in terms of allocating budget? Which best reflects the way you do business? Which provides the best flexibility and best differentiates you from your competitors?

In my book it's better to choose one strategy and one set of goals and benefits and then stick with them.
Keep it simple and make it clear.

Avoiding SOA mistakes is easy: use (or create) capable service technologists.

SOA is powerful stuff, but it's a big and highly specialised topic. That's not to say it can't be simple; it's just that there's a lot of conflicting advice out there, and it's usually a mistake to think that you can simply move from EAI or OO straight into SOA in one step without having specialists who can help you to correctly design and build your SOA. I always advise that IT managers seek the advice of an independent SOA consultant from a very early stage in any new SOA programme, ideally someone accredited with a relevant qualification from a professional body that delivers vendor-neutral SOA training and certification. Taking this kind of pro-active approach can save you millions in avoidable expenses and protect your whole change programme from many of the inherent pitfalls. Professional advice can help guarantee a decent return on the investment you're making and will also help ensure that you deliver the strategic benefits that you're after.

That's my top 5, but what about yours?

Reference: Top 5: SOA gotcha's and how to avoid them. from our JCG partner Ben Wilcock at the SOA, BPM, Agile & Java blog....
netbeans-logo

Oracle Public Cloud Java Service with NetBeans in Early Access

Who expected that to happen: Oracle is working on a public cloud offering, and the signs of the approaching official launch are there. Nearly a year after the official announcement I was invited to join the so-called "Early Access" program to test-drive the new service and give feedback. Thanks to Reza Shafii, the product manager in charge, I have permission to dish the dirt a bit. Even if I am not allowed to show you screenshots of the UI, there is plenty to talk about. And today I am going to give you a first test-drive of the developer experience with NetBeans.

Preparations

As usual there are some preparations to do. Get yourself a copy of the latest NetBeans 7.2 RC1 Java EE edition. This is the publicly available IDE which has Oracle Cloud support. It was dropped from the 7.2 final because … yeah … the OPC isn't public and nobody wanted to see unusable features in a final release. So the first secret seems to be lifted here. When OPC launches we will see a 7.3 release popping up (concluded from this test specification). Another useful preparation is to download and install the corresponding WebLogic 10.3.6 for local development. And that is the second surprise so far: Oracle Public Cloud Java Service will be a Java EE 5 service, at least for the GA. It absolutely doesn't make any sense to stay at this version, so it is really safe to say that WebLogic 12c, which has Java EE 6 support, will follow sometime next. All set. Fire up NetBeans.

Create your Java EE Application

All you have to do now is to create a new Java EE web application with NetBeans. Give it a name (I'm calling it MyCloud) and add a new local WebLogic 10 server in the "Add…" server dialogue. Don't forget to choose Java EE 5 as the EE version. Let's add JSF 2.0 and PrimeFaces 3.2 on the Frameworks tab. Click "Finish". If NetBeans complains about missing server libraries, let it deploy them. That's it for now. Right-click your app and run it.
This will fire up your local WebLogic domain and point your browser to http://localhost:7001/MyCloud/ or whatever your app is called. As you can see, the PrimeFaces components are also working. Not spectacular.

Add Cloud…

Next you have to add some cloud. Switch to the Services tab, right-click on the Cloud node and select "Add Cloud…". Choose "Oracle Cloud" and click Next. You will have to fill in a couple of pieces of information here:

Identity Domain. The individual or group identity of your Oracle Cloud account.
Java Service Name. The name of the Java Service.
Database Service Name. The name of the Database Service.
Administrator. Your identity as Oracle Cloud administrator.
Password. Your Oracle Cloud administrator password.
SDK. Path to your local copy of the Oracle Cloud SDK. Click Configure to browse for this file.

Lucky you, you don't have to care about the details here. You get hold of this information after the successful account creation, and it is pretty straightforward to figure out what is meant once you finally get access to the cloud. Some more words about the identity domain: when setting up Oracle Cloud services, a service name and an identity domain must be given to each service. An identity domain is a collection of users and roles that have been given specific privileges to use or manage certain services in the domain. So it basically is a kind of secure store.

Click "Finish" if everything is filled out correctly. NetBeans verifies the provided information against the OPC, and you now have the Oracle Cloud in it. Additionally you find a new server, "Oracle Cloud Remote", which is the server hook you have to specify in your project's run configuration. Go there, switch it from the local "Oracle WebLogic Server" to "Oracle Cloud Remote" and hit OK. Now you are all set for the cloud deployment.

Run in the Cloud…

Right-click and "Run" your project. You see a lot of stuff happen.
First NetBeans does the normal build and afterwards starts the distribution. This uploads the bundle (MyCloud.war) to the cloud, where it gets scanned for viruses and needs to pass the whitelist scan (more on this later). If both succeed, the deployment happens and your application is opened in your system's default browser. That was a typical development round-trip with the Oracle Public Cloud Java Service: develop and test locally, deploy and run in the cloud.

Some more NetBeans goodies

But what is the "Oracle Cloud" entry in the Cloud services good for? For now this is very simple: you can use it to access your deployment jobs and the corresponding log files. Every deployment gets a unique number and you see the deployment's status. Together with the log excerpts you are able to track issues down further. Let's try some more. Add a servlet named "Test" and try to use some malicious code ;)

System.exit(0);

The first indication that something is wrong here is a dashed code hint. Completing it pops up a little yellow exclamation mark. Let's verify the project. Right-click on it and select "Verify". That runs the Whitelist Tool, which outputs a detailed error report about the whitelist validations:

ERROR - Path:D:\MyCloud\dist\MyCloud.war (1 Error)
ERROR - Class:net.eisele.opc.servlet.Test (1 Error)
ERROR - 1:Method exit not allowed from java.lang.System.(Line No:41 Method Name:java.lang.System->exit(int))
ERROR - D:\MyCloud\dist\MyCloud.war Failed with 1 error(s)

It is disappointing, but there are limitations (aka the whitelist) in place which prevent you from using every single piece of Java functionality you know. For the moment I am not going to drill into this further. All early-access members had something to say about the restrictions, and Oracle listened carefully. A lot of things are moving here and it simply is too early to make any statements about the final whitelist. A lot of third-party libraries (e.g. PrimeFaces) are tested and run smoothly.
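Conceptually, the whitelist scan is a static check of the deployed archive against a list of disallowed API calls. The following is purely my own toy sketch of that idea (not Oracle's actual Whitelist Tool): match the method calls found in a class against a banned list and report each violation in a similar error format.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy sketch of a whitelist check. The banned entries and the error
// format are modeled on the tool's output shown above; everything
// else is a simplification for illustration.
public class WhitelistChecker {

    // Sample calls the cloud environment does not allow.
    private static final String[] DISALLOWED = {
            "java.lang.System->exit",
            "java.lang.Runtime->halt"
    };

    public static List<String> check(String className, List<String> methodCalls) {
        List<String> errors = new ArrayList<>();
        for (String call : methodCalls) {
            for (String banned : DISALLOWED) {
                if (call.startsWith(banned)) {
                    errors.add("ERROR - Class:" + className
                            + " Method not allowed: " + call);
                }
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        List<String> calls = Arrays.asList(
                "java.lang.System->exit(int)",
                "java.io.PrintStream->println(String)");
        for (String error : check("net.eisele.opc.servlet.Test", calls)) {
            System.out.println(error);
        }
    }
}
```

The real tool works on the bytecode of the whole WAR, of course; this only shows the matching step.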
Those aren't affected by the whitelist at all.

Bottom Line

That is all for today. I am not going to show you anything else of the OPC, and I know that you can't test-drive the service on your own: you need to have the Java Cloud SDK in place, which isn't publicly available today. But it will be, and there will be a chance to test-drive the cloud for free with a trial. I am looking forward to showing you some more of the stuff that is possible, as soon as it becomes available. As of today you can register for access and get notified when the service is ready to sign you up!

Reference: Oracle Public Cloud Java Service with NetBeans in Early Access from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.

Java 8: Testing the Lambda Water

Java 8 is about a year away and comes with a language feature I really look forward to: lambda expressions. Sadly the other big feature, modules for the Java platform, has been delayed to Java 9. But nevertheless, bringing lambda expressions (or closures, if you like) into the language will make programming in Java much better. So nearly one year to go – but as Java is developed open source now, we can have a look and try it right now. So let's go!

Download and Install a Lambda-enabled Java 8

First, I expected that I would have to compile Java 8 myself, as it has not yet been released. But I was pleasantly surprised to see that there are binary builds available for all platforms at http://jdk8.java.net/lambda/. So I just downloaded the latest developer preview build and installed it on my computer. To make sure it works, I created a LambdaIntro class containing a "Hello, World!", compiled and executed it:

~ $ export JAVA_HOME=~/Devtools/Java/jdk1.8.0/
~ $ cd spikes/lambda-water
~ $ $JAVA_HOME/bin/javac src/net/jthoenes/blog/spike/lambda/LambdaIntro.java
~ $ $JAVA_HOME/bin/java -cp src net.jthoenes.blog.spike.lambda.LambdaIntro
Hello from Java 8!

Note: I use the command line to compile and execute here, because IDEs do not support Java 8 as of now.

The Non-Lambda Way

As an example, let's assume I want to loop through a list of objects. But for my business logic, I need both the value and the index of the list item. If I want to do it with current Java, I have to handle the index together with the actual logic:

List<String> list = Arrays.asList("A", "B", "C");
for (int index = 0; index < list.size(); index++) {
    String value = list.get(index);
    String output = String.format("%d -> %s", index, value);
    System.out.println(output);
}

This will output:

0 -> A
1 -> B
2 -> C

This is not too bad, but I did two things in the same few lines of code: controlling the iteration and providing some (very simple) business logic.
Lambda expressions can help me to separate those two.

The eachWithIndex Method Signature

So I want to have a method eachWithIndex which can be called like this:

List<String> list = Arrays.asList("A", "B", "C");
eachWithIndex(list, (value, index) -> {
    String output = String.format("%d -> %s", index, value);
    System.out.println(output);
});

The method receives two parameters. The first one is the list and the second one is a lambda expression (or closure) which instructs the method what to do with each list item. As you can see, the lambda expression receives two arguments: the current value and the current index. These arguments carry no type declaration; the type information will be inferred by the Java 8 compiler. After the arguments there is a -> and a block of code which should be executed for every list item. Note: You will have to write this method in a normal text editor, or ignore the error messages inside your IDE.

Implement the eachWithIndex Method

To use a lambda in Java 8, you need to declare a functional interface. A functional interface is an interface which has exactly one method – the method which will be implemented by the lambda expression. In this case, I need to declare a method which receives an item and an index and returns nothing. So I define the following interface:

public static interface ItemWithIndexVisitor<E> {
    public void visit(E item, int index);
}

With this interface I can now implement the eachWithIndex method:

public static <E> void eachWithIndex(List<E> list, ItemWithIndexVisitor<E> visitor) {
    for (int i = 0; i < list.size(); i++) {
        visitor.visit(list.get(i), i);
    }
}

The method makes use of the generic parameter <E>, so the item passed to the visit method will be inferred to be of the same type as the list. The nice thing about using functional interfaces is that there are a lot of them already out there in Java. Think for example of the java.util.concurrent.Callable interface.
It can be used as a lambda without having to change the code which consumes the Callable. This makes a lot of the JDK and frameworks lambda-enabled by default.

Using a Method Reference

One handy little thing coming from Project Lambda is method references. They are a way to reuse existing methods and package them into a functional interface object. So let's say I have the following method:

public static <E> void printItem(E value, int index) {
    String output = String.format("%d -> %s", index, value);
    System.out.println(output);
}

If I want to use this method in my eachWithIndex method, I can use the :: notation inside my method call:

eachWithIndex(list, LambdaIntro::printItem);

Looks nice and concise, doesn't it?

Summary

That makes my first lambda example run. I couldn't avoid a smile on my face seeing closures running in one of my Java programs after longing for them for so long. Lambda expressions are currently only available as a developer preview build. If you want to find out more, read the current Early Draft Review or go to the Project Lambda project page. I uploaded the full example code to gist.

Reference: Java 8: Testing The Lambda Water from our JCG partner Johannes Thoenes at the Johannes Thoenes blog.
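For convenience, here are the pieces of the walkthrough assembled into one compilable class. The method bodies are the ones from the post; the class layout and the main method are my own addition, including a line showing that an existing JDK functional interface like Callable works as a lambda target too.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;

public class LambdaIntro {

    // Functional interface: exactly one abstract method,
    // so a lambda expression can implement it.
    public interface ItemWithIndexVisitor<E> {
        void visit(E item, int index);
    }

    // Controls the iteration; the visitor supplies the per-item logic.
    public static <E> void eachWithIndex(List<E> list, ItemWithIndexVisitor<E> visitor) {
        for (int i = 0; i < list.size(); i++) {
            visitor.visit(list.get(i), i);
        }
    }

    // Plain method whose signature matches the functional interface,
    // so it can be passed as a method reference.
    public static <E> void printItem(E value, int index) {
        System.out.println(String.format("%d -> %s", index, value));
    }

    public static void main(String[] args) throws Exception {
        List<String> list = Arrays.asList("A", "B", "C");

        // Lambda expression implementing ItemWithIndexVisitor
        eachWithIndex(list, (value, index) ->
                System.out.println(String.format("%d -> %s", index, value)));

        // Method reference doing the same
        eachWithIndex(list, LambdaIntro::printItem);

        // An existing functional interface works as a lambda target too
        Callable<String> greeting = () -> "Hello from Java 8!";
        System.out.println(greeting.call());
    }
}
```

Both calls print the same `0 -> A` to `2 -> C` listing shown earlier; only the way the per-item logic is packaged differs.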
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.