
What's New Here?


MOXy’s Object Graphs – Input/Output Partial Models to XML & JSON

Suppose you have a domain model that you want to expose as a RESTful service. The problem is you only want to input/output part of your data. Previously you would have created a separate model representing the subset and then written code to move data between the models. In EclipseLink 2.5.0 we have a new feature called Object Graphs that enables you to easily define partial views on your model. You can try this out today by downloading an EclipseLink 2.5.0 nightly build (available starting March 24, 2013) from: http://www.eclipse.org/eclipselink/downloads/nightly.php

Java Model

Below is the Java model that we will use for this example. The model represents customer data. We will use an object graph to output just enough information so that someone could contact the customer by phone.

Customer

The @XmlNamedObjectGraph extension is used to specify subsets of the model we wish to marshal/unmarshal. This is done by specifying one or more @XmlNamedAttributeNode annotations. If you want an object graph applied to a property, you can specify a subgraph for it. The subgraph can either be defined as a @XmlNamedSubgraph or as a @XmlNamedObjectGraph on the target class.

package blog.objectgraphs.metadata;

import java.util.List;
import javax.xml.bind.annotation.*;
import org.eclipse.persistence.oxm.annotations.*;

@XmlNamedObjectGraph(
    name="contact info",
    attributeNodes={
        @XmlNamedAttributeNode("name"),
        @XmlNamedAttributeNode(value="billingAddress", subgraph="location"),
        @XmlNamedAttributeNode(value="phoneNumbers", subgraph="simple")
    },
    subgraphs={
        @XmlNamedSubgraph(
            name="location",
            attributeNodes = {
                @XmlNamedAttributeNode("city"),
                @XmlNamedAttributeNode("province")
            }
        )
    }
)
@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Customer {

    @XmlAttribute
    private int id;

    private String name;

    private Address billingAddress;

    private Address shippingAddress;

    @XmlElementWrapper
    @XmlElement(name="phoneNumber")
    private List<PhoneNumber> phoneNumbers;

}

Address

Because we defined the object graph for the Address class as a subgraph on the Customer class, there is nothing we need to do here.

package blog.objectgraphs.metadata;

import javax.xml.bind.annotation.*;

@XmlAccessorType(XmlAccessType.FIELD)
public class Address {

    private String street;
    private String city;
    private String province;
    private String postalCode;

}

PhoneNumber

For the phoneNumbers property on the Customer class we specified that an object graph called "simple" should be used to scope the data. We will define this object graph on the PhoneNumber class. An advantage of this approach is that it makes the object graph easier to reuse.

package blog.objectgraphs.metadata;

import javax.xml.bind.annotation.*;
import org.eclipse.persistence.oxm.annotations.*;

@XmlNamedObjectGraph(
    name="simple",
    attributeNodes={
        @XmlNamedAttributeNode("value")
    }
)
@XmlAccessorType(XmlAccessType.FIELD)
public class PhoneNumber {

    @XmlAttribute
    private String type;

    @XmlValue
    private String value;

}

Demo

In the demo code below we will read in an XML document to fully populate our Java model. After marshalling it out to prove that everything was fully mapped, we will set an object graph on the marshaller via the MarshallerProperties.OBJECT_GRAPH property, and output a subset to both XML and JSON.
package blog.objectgraphs.metadata;

import java.io.File;
import javax.xml.bind.*;
import org.eclipse.persistence.jaxb.MarshallerProperties;

public class Demo {

    public static void main(String[] args) throws Exception {
        JAXBContext jc = JAXBContext.newInstance(Customer.class);

        Unmarshaller unmarshaller = jc.createUnmarshaller();
        File xml = new File("src/blog/objectgraphs/metadata/input.xml");
        Customer customer = (Customer) unmarshaller.unmarshal(xml);

        // Output XML
        Marshaller marshaller = jc.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(customer, System.out);

        // Output XML - Based on Object Graph
        marshaller.setProperty(MarshallerProperties.OBJECT_GRAPH, "contact info");
        marshaller.marshal(customer, System.out);

        // Output JSON - Based on Object Graph
        marshaller.setProperty(MarshallerProperties.MEDIA_TYPE, "application/json");
        marshaller.setProperty(MarshallerProperties.JSON_INCLUDE_ROOT, false);
        marshaller.setProperty(MarshallerProperties.JSON_WRAPPER_AS_ARRAY_NAME, true);
        marshaller.marshal(customer, System.out);
    }

}

input.xml/Output

We will use the following document to populate our domain model. We will also marshal it back out to demonstrate that all of the content is actually mapped.

<?xml version="1.0" encoding="UTF-8"?>
<customer id="123">
    <name>Jane Doe</name>
    <billingAddress>
        <street>1 A Street</street>
        <city>Any Town</city>
        <province>Ontario</province>
        <postalCode>A1B 2C3</postalCode>
    </billingAddress>
    <shippingAddress>
        <street>2 B Road</street>
        <city>Another Place</city>
        <province>Quebec</province>
        <postalCode>X7Y 8Z9</postalCode>
    </shippingAddress>
    <phoneNumbers>
        <phoneNumber type="work">555-1111</phoneNumber>
        <phoneNumber type="home">555-2222</phoneNumber>
    </phoneNumbers>
</customer>

XML Output Based on Object Graph

The XML below was produced by the exact same model as the previous XML document. The difference is that we leveraged a named object graph to select a subset of the mapped content.

<?xml version="1.0" encoding="UTF-8"?>
<customer>
    <name>Jane Doe</name>
    <billingAddress>
        <city>Any Town</city>
        <province>Ontario</province>
    </billingAddress>
    <phoneNumbers>
        <phoneNumber>555-1111</phoneNumber>
        <phoneNumber>555-2222</phoneNumber>
    </phoneNumbers>
</customer>

JSON Output Based on Object Graph

Below is the same subset as the previous XML document, represented as JSON. We have used the new JSON_WRAPPER_AS_ARRAY_NAME property (see Binding to JSON & XML – Handling Collections) to improve the representation of collection values.

{
    "name" : "Jane Doe",
    "billingAddress" : {
        "city" : "Any Town",
        "province" : "Ontario"
    },
    "phoneNumbers" : [ "555-1111", "555-2222" ]
}
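The title says input/output, and an object graph can scope unmarshalling too. A minimal sketch continuing the demo above, assuming UnmarshallerProperties.OBJECT_GRAPH mirrors the marshaller-side property in your EclipseLink build:

// import org.eclipse.persistence.jaxb.UnmarshallerProperties;
Unmarshaller partialUnmarshaller = jc.createUnmarshaller();
partialUnmarshaller.setProperty(UnmarshallerProperties.OBJECT_GRAPH, "contact info");
Customer contactInfoOnly = (Customer) partialUnmarshaller.unmarshal(xml);
// Fields outside the graph (e.g. shippingAddress) are simply left unset.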
External Metadata

MOXy also offers an external binding document which allows you to provide metadata for third-party objects or apply alternate mappings for your model (see: Mapping Object to Multiple XML Schemas – Weather Example). Below is the mapping document for this example.

<?xml version="1.0"?>
<xml-bindings
    xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/oxm"
    package-name="blog.objectgraphs.metadata"
    xml-accessor-type="FIELD">
    <java-types>
        <java-type name="Customer">
            <xml-named-object-graphs>
                <xml-named-object-graph name="contact info">
                    <xml-named-attribute-node name="name"/>
                    <xml-named-attribute-node name="billingAddress" subgraph="location"/>
                    <xml-named-attribute-node name="phoneNumbers" subgraph="simple"/>
                    <xml-named-subgraph name="location">
                        <xml-named-attribute-node name="city"/>
                        <xml-named-attribute-node name="province"/>
                    </xml-named-subgraph>
                </xml-named-object-graph>
            </xml-named-object-graphs>
            <xml-root-element/>
            <java-attributes>
                <xml-attribute java-attribute="id"/>
                <xml-element java-attribute="phoneNumbers" name="phoneNumber">
                    <xml-element-wrapper/>
                </xml-element>
            </java-attributes>
        </java-type>
        <java-type name="PhoneNumber">
            <xml-named-object-graphs>
                <xml-named-object-graph name="simple">
                    <xml-named-attribute-node name="value"/>
                </xml-named-object-graph>
            </xml-named-object-graphs>
            <java-attributes>
                <xml-attribute java-attribute="type"/>
                <xml-value java-attribute="value"/>
            </java-attributes>
        </java-type>
    </java-types>
</xml-bindings>

Reference: MOXy’s Object Graphs – Input/Output Partial Models to XML & JSON from our JCG partner Blaise Doughan at the Java XML & JSON Binding blog.

One way messaging with WSO2 ESB

As I posted before, I am currently working with the WSO2 ESB. To get a good understanding of this ESB I have been walking through the samples (I haven't finished all of them yet). Example 12 is about one-way messaging with the ESB and makes use of the TCP Monitor to make it visible. I have described before how to set up a similar tool called 'TcpTunnelGUI', but I actually prefer the TCP Monitor. To use the tool, see the manual here or here. By the way, the tool ships with the WSO2 ESB installation so you don't have to download and install it. Simply go to the '$CARBON_HOME/bin' directory and give the command:

./tcpmon.sh

To see example 12 in action with the TCP Monitor, do the following:

Start the WSO2 ESB
This example uses an ESB setup similar to the one for example 1, so start the ESB by navigating to the '$CARBON_HOME/bin' directory in a terminal and entering the following command:

./wso2esb-samples.sh -sn 1

Start the Apache Axis2 server
The next step is to start the Axis2 server where the SimpleStockQuote service is deployed. To do this, open a new terminal, navigate to the '$CARBON_HOME/samples/axis2Server' directory and enter the command:

./axis2server.sh

Start the TCP Monitor
If you haven't done so already, start the TCP Monitor by opening a new terminal, browsing to '$CARBON_HOME/bin' and entering the command:

./tcpmon.sh

This should start the TCP Monitor tool.

Configure the TCP Monitor
We are going to listen on port 8281 and forward the incoming traffic to 8280 (that is where our ESB is running its proxy service). After setting this up and clicking the 'Add' button, the TCP Monitor is waiting for a connection. So let's send a message through it.

Run the Axis2 client
I made a small change to the statement as shown in the example page. Open a new terminal and run the following command from the directory '$CARBON_HOME/samples/axis2Client':

ant stockquote -Daddurl=http://localhost:9000/services/SimpleStockQuoteService -Dprxurl=http://localhost:8281/ -Dmode=placeorder

Check the results
In the TCP Monitor we see that a line has been added, and in the lower part we see the incoming and outgoing request. Here is the request sent by the Axis2 client:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
    <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
        <wsa:To>http://localhost:9000/services/SimpleStockQuoteService</wsa:To>
        <wsa:ReplyTo>
            <wsa:Address>http://www.w3.org/2005/08/addressing/none</wsa:Address>
        </wsa:ReplyTo>
        <wsa:MessageID>urn:uuid:44ba7c6b-1836-4a62-8e40-814813a64022</wsa:MessageID>
        <wsa:Action>urn:placeOrder</wsa:Action>
    </soapenv:Header>
    <soapenv:Body>
        <m0:placeOrder xmlns:m0="http://services.samples">
            <m0:order>
                <m0:price>154.76332953114107</m0:price>
                <m0:quantity>8769</m0:quantity>
                <m0:symbol>IBM</m0:symbol>
            </m0:order>
        </m0:placeOrder>
    </soapenv:Body>
</soapenv:Envelope>

The important thing to notice in the request is the following element in the header:

<wsa:ReplyTo>
    <wsa:Address>http://www.w3.org/2005/08/addressing/none</wsa:Address>
</wsa:ReplyTo>

With this element in the header we tell the web service we don't expect a response. So what we get back is just the 202 response code, as we can see in the TCP Monitor:

HTTP/1.1 202 Accepted
Content-Type: text/xml; charset=UTF-8
Server: Synapse-HttpComponents-NIO
Date: Thu, 14 Mar 2013 20:30:19 GMT
Transfer-Encoding: chunked

0

That completes this example; only a few more to go!
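If you'd rather drive this from plain Java than from the bundled ant target, below is a minimal fire-and-forget sketch using the Axis2 client API. Treat it as an approximation: the endpoint and action come from the capture above, the payload is passed in by the caller, and the exact module-engagement call may differ between Axis2 versions:

import org.apache.axiom.om.OMElement;
import org.apache.axis2.addressing.EndpointReference;
import org.apache.axis2.client.Options;
import org.apache.axis2.client.ServiceClient;

public class OneWayClientSketch {

    // payload is the <m0:placeOrder> OMElement shown in the captured request.
    public void sendOneWay(OMElement payload) throws Exception {
        ServiceClient client = new ServiceClient();
        Options options = new Options();
        options.setTo(new EndpointReference("http://localhost:8281/"));
        options.setAction("urn:placeOrder");
        // Produces the wsa:ReplyTo "none" header - no response expected.
        options.setReplyTo(new EndpointReference("http://www.w3.org/2005/08/addressing/none"));
        client.setOptions(options);
        client.engageModule("addressing"); // emit the WS-Addressing headers
        client.fireAndForget(payload);
    }
}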
Reference: One way messaging with WSO2 ESB from our JCG partner Pascal Alma at The Pragmatic Integrator blog.

Exception Handling with the Spring 3.2 @ControllerAdvice Annotation

A short time ago I wrote a blog outlining how I upgraded my Spring sample code to version 3.2 and demonstrating a few of the little 'gotchas' that arose. Since then I've been perusing Spring 3.2's new feature list, and whilst it doesn't contain any revolutionary new changes, which I suspect the guys at Spring are saving for version 4, it does contain a few neat upgrades. The first one that grabbed my attention was the new @ControllerAdvice annotation, which seems to neatly plug a gap in Spring 3 functionality. Let me explain...

If you take a look at my blog on Spring 3 MVC Exception Handlers, you'll see that the sample code contains a flaky controller with a request handler method that throws an IOException. The IOException is then handled by another method in the same controller that's annotated with @ExceptionHandler(IOException.class). The problem is that a method annotated with @ExceptionHandler(IOException.class) can only handle IOExceptions thrown by its containing controller. If you wanted to create a global exception handler that handles exceptions thrown by all controllers, you had to revert to something like Spring 2's SimpleMappingExceptionResolver and some XML configuration. Now things are different.

To demonstrate the use of @ControllerAdvice I've created a simple Spring 3.2 MVC application that you can find on github. The application's home page ostensibly allows the user to display either their address or credit card details, except that when the user attempts to do this, the associated controllers throw an IOException and the application displays an error page. The controllers that generate the exceptions are fairly straightforward and listed below:

@Controller
public class UserCreditCardController {

    private static final Logger logger = LoggerFactory.getLogger(UserCreditCardController.class);

    /**
     * Whoops, throw an IOException
     */
    @RequestMapping(value = "userdetails", method = RequestMethod.GET)
    public String getCardDetails(Model model) throws IOException {

        logger.info("This will throw an IOException");

        boolean throwException = true;

        if (throwException) {
            throw new IOException("This is my IOException");
        }

        return "home";
    }
}

@Controller
public class UserAddressController {

    private static final Logger logger = LoggerFactory.getLogger(UserAddressController.class);

    /**
     * Whoops, throw an IOException
     */
    @RequestMapping(value = "useraddress", method = RequestMethod.GET)
    public String getUserAddress(Model model) throws IOException {

        logger.info("This will throw an IOException");

        boolean throwException = true;

        if (throwException) {
            throw new IOException("This is my IOException");
        }

        return "home";
    }
}

As you can see, all this code does is map userdetails and useraddress to the getCardDetails(...) and getUserAddress(...) methods respectively. When either of these methods throws an IOException, the exception is caught by the following class:

@ControllerAdvice
public class MyControllerAdviceDemo {

    private static final Logger logger = LoggerFactory.getLogger(MyControllerAdviceDemo.class);

    @Autowired
    private UserDao userDao;

    /**
     * Catch IOException and redirect to a 'personal' page.
     */
    @ExceptionHandler(IOException.class)
    public ModelAndView handleIOException(IOException ex) {
        logger.info("handleIOException - Catching: " + ex.getClass().getSimpleName());
        return errorModelAndView(ex);
    }

    /**
     * Get the user's details for the 'personal' page.
     */
    private ModelAndView errorModelAndView(Exception ex) {
        ModelAndView modelAndView = new ModelAndView();
        modelAndView.setViewName("error");
        modelAndView.addObject("name", ex.getClass().getSimpleName());
        modelAndView.addObject("user", userDao.readUserName());

        return modelAndView;
    }
}

The class above is annotated with the new @ControllerAdvice annotation and contains a single public method, handleIOException(IOException ex). This method catches all IOExceptions thrown by the controllers above, generates a model containing some relevant user information and then displays an error page. The nice thing about this is that, no matter how many controllers your application contains, when any of them throws an IOException it'll be handled by the MyControllerAdviceDemo exception handler.

@ModelAttribute and @InitBinder

One final thing to remember is that although the @ControllerAdvice annotation is useful for handling exceptions, it can also be used to globally handle the @ModelAttribute and @InitBinder annotations. The combination of @ControllerAdvice and @ModelAttribute gives you the facility to set up model objects for all controllers in one place, and likewise the combination of @ControllerAdvice and @InitBinder allows you to attach the same custom validator to all your controllers, again in one place; a sketch follows below.
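The original post doesn't include code for this, so the following is only a minimal sketch of what such global advice could look like; the attribute name and date format are invented for illustration:

import java.text.SimpleDateFormat;
import java.util.Date;

import org.springframework.beans.propertyeditors.CustomDateEditor;
import org.springframework.ui.Model;
import org.springframework.web.bind.WebDataBinder;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.InitBinder;
import org.springframework.web.bind.annotation.ModelAttribute;

@ControllerAdvice
public class GlobalWebAdvice {

    /**
     * Runs before every request handler in every controller, so the
     * attribute is available to all views.
     */
    @ModelAttribute
    public void addCommonAttributes(Model model) {
        model.addAttribute("appName", "my-sample-app"); // hypothetical attribute
    }

    /**
     * Registers the same date editor for every controller in one place.
     */
    @InitBinder
    public void initBinder(WebDataBinder binder) {
        SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd");
        dateFormat.setLenient(false);
        binder.registerCustomEditor(Date.class, new CustomDateEditor(dateFormat, false));
    }
}

Because the class is annotated with @ControllerAdvice, both methods apply to every @Controller in the application rather than to a single one.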
Reference: Exception Handling with the Spring 3.2 @ControllerAdvice Annotation from our JCG partner Roger Hughes at the Captain Debug's Blog.

How to replace a build module with Veripacks

Compare the two trees below. In both cases the goal is to have an application with two independent modules (frontend and reporting) and one shared/common module (domain). The code in frontend shouldn't be able to access code in reporting, and vice versa. Both modules can use the domain code. Ideally, we would like to check these access rules at build time.

On the left, there's a traditional solution using Maven build modules. Each build module has a pretty elaborate pom.xml, e.g.:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>parent</artifactId>
        <groupId>org.veripacks.battle</groupId>
        <version>1.0.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <name>Veripacks vs Build Modules: Frontend</name>

    <artifactId>frontend</artifactId>

    <dependencies>
        <dependency>
            <groupId>org.veripacks.battle</groupId>
            <artifactId>domain</artifactId>
            <version>1.0.0-SNAPSHOT</version>
        </dependency>
    </dependencies>
</project>

On the right, on the other hand, we have a much simpler structure with only one build module. Each application module now corresponds to one top-level project package (see also this blog on package naming conventions). Notice the package-info.java files. There, using Veripacks, we can specify which packages are visible where. First of all, we specify that the code from the top-level packages (frontend, reporting and domain) should only be accessible if explicitly imported, using @RequiresImport. Secondly, we specify that we want to access the domain package in frontend and reporting using @Import; e.g.:

@RequiresImport
@Import("org.veripacks.battle.domain")
package org.veripacks.battle.frontend;

import org.veripacks.Import;
import org.veripacks.RequiresImport;

Now, isn't the Veripacks approach simpler? There is still build-time checking, which is possible by running a simple test (see the README for details). Plus, you can also use other Veripacks features, like @Export annotations, which are a generalized version of package-private scope, taking package hierarchies into account. There are also other benefits, like trivial sharing of test code (which is kind of hard with Maven), or much easier refactoring (introducing a new application module is a matter of adding a top-level package).

The immediate question that arises is – what about 3rd party libraries? Most probably, we'd like frontend-specific libraries to be accessible only in the frontend module, and reporting-specific ones in the reporting module. Well, this is not supported yet, but the good news is that it will be the scope of the next Veripacks release.

You can view the example projects on GitHub.

Reference: How to replace a build module with Veripacks from our JCG partner Adam Warski at the Blog of Adam Warski blog.

Drools Planner renames to OptaPlanner: Announcing www.optaplanner.org

We're proud to announce the renaming of Drools Planner to OptaPlanner, starting with version 6.0.0.Beta1. We're also happy to unveil its new website: www.optaplanner.org.

OptaPlanner optimizes business resource usage. Every organization faces planning problems: provide products or services with a limited set of constrained resources (employees, assets, time and money). OptaPlanner optimizes such planning to do more business with less resources. Typical use cases include vehicle routing, employee rostering and equipment scheduling.

OptaPlanner is a lightweight, embeddable planning engine written in Java™. It helps normal Java™ programmers solve constraint satisfaction problems efficiently. Under the hood, it combines optimization heuristics and metaheuristics with very efficient score calculation.

OptaPlanner is open source software, released under the Apache Software License. It is 100% pure Java™, runs on the JVM and is available in the Maven Central Repository too. For more information, visit the new website: http://www.optaplanner.org

Why change the name?

OptaPlanner is the new name for Drools Planner. OptaPlanner is now standalone, but can still be optionally combined with the Drools rule engine for a powerful declarative approach to planning optimization.

- OptaPlanner has graduated from the Drools project to become a top-level JBoss Community project.
- OptaPlanner is not a fork of Drools Planner. We simply renamed it.
- OptaPlanner (the planning engine) joins its siblings Drools (the rule engine) and jBPM (the workflow engine) in the KIE group.
- Our commitment to Drools hasn't changed. The efficient Drools rule engine is still the recommended way to do score calculation. Alternative score calculation options, such as pure Java calculation (no Drools), also remain fully supported.

How will this affect your business?

From a business point of view, there's little or no change:

- The mission remains unchanged: we're still 100% committed to putting business resource optimization in the hands of normal Java developers.
- The license remains unchanged (Apache Software License 2.0). It's still the same open source license.
- The release lifecycle remains unchanged: OptaPlanner is still released at the same time as Drools and jBPM.
- Red Hat is considering support subscription offerings for OptaPlanner as part of its BRMS and BPM platforms. A Tech Preview of OptaPlanner is targeted for BRMS 6.0.

What has changed?

- The website has changed to http://www.optaplanner.org
- The distribution artifacts have changed name:
  - Jar name changes:
    - drools-planner-core-*.jar is now optaplanner-core-*.jar
    - drools-planner-benchmark-*.jar is now optaplanner-benchmark-*.jar
  - Maven groupId and artifactId changes:
    - groupId org.drools.planner is now org.optaplanner
    - artifactId drools-planner-core is now optaplanner-core
    - artifactId drools-planner-benchmark is now optaplanner-benchmark
  - As usual, for more information see the Upgrade Recipe in the download zip.
- The API's namespace has changed. As usual, see the upgrade recipe on how to deal with this efficiently. Starting from 6.1.0.Final, OptaPlanner will have a 100% backwards compatible API.
- OptaPlanner gets its own IRC channels on Freenode: #optaplanner and #optaplanner-dev

Reference: Drools Planner renames to OptaPlanner: Announcing www.optaplanner.org from our JCG partner Geoffrey De Smet at the Drools & jBPM blog.

JPA and CMT – Why Catching Persistence Exception is Not Enough?

Being in the EJB and JPA world, using CMT (Container Managed Transactions) is very comfortable. Just define a few annotations to demarcate the transaction boundary (or use the defaults) and that's it – no fiddling with manual begin, commit or rollback operations. One way to roll back your transaction is to throw a non-application exception (or an application exception with rollback = true) from your EJB's business method. It seems simple: if during some operation there is a possibility that an exception will be thrown and you don't want to roll back your tx, then you should just catch this exception and you're fine. You can now retry the volatile operation once again within the same, still active transaction.

Now, this is all true for application exceptions thrown from the user's own components. The question is – what about exceptions thrown from other components, like JPA's EntityManager throwing a PersistenceException? And that's where the story begins.

What We Want to Achieve

Imagine the following scenario: you have an entity named E. It consists of:

- id – the primary key,
- name – some human-readable entity name,
- content – some arbitrary field holding a string – it simulates the 'advanced attribute' which e.g. is calculated during persist/merge time and can result in errors,
- code – holds either OK or ERROR – defines whether the advanced attributes were successfully persisted or not.

You want to persist E. You assume that the basic attributes of E will always be successfully persisted. The advanced attributes, however, require some additional calculations or operations which might result in e.g. a constraint violation being thrown from the database. If such a situation occurs, you still want to have E persisted in the database (but only with the basic attributes filled in and the code attribute set to "ERROR"). In other words, this is what you could think of:

1. Persist E with its basic attributes,
2. Try to update it with the fragile advanced attributes,
3. If a PersistenceException was thrown from step 2 – catch it, set the 'code' attribute to "ERROR" and clear all advanced attributes (they caused an exception),
4. Update E.

Naive Solution

Moving to EJB code, this is how you might try doing it (assume default TransactionAttributes):

public void mergeEntity() {
    MyEntity entity = new MyEntity("entityName", "OK", "DEFAULT");

    em.persist(entity);

    // This will raise a DB constraint violation
    entity.setContent("tooLongContentValue");

    // We don't need em.merge(entity) - our entity is in managed mode.

    try {
        em.flush(); // Force the flushing to occur now, not during method commit.
    } catch (PersistenceException e) {
        // Clear the properties to be able to persist the entity.
        entity.setContent("");
        entity.setCode("ERROR");

        // We don't need em.merge(entity) - our entity is in managed mode.
    }
}

What's Wrong With This Example?

Catching the PersistenceException thrown by an EntityManager is not going to prevent the transaction from rolling back. It is not the absence of a catch block in your EJB that marks the tx for rollback – it is the throwing of a non-application exception from the EntityManager itself. Not to mention that a resource might, on its own, mark a tx for rollback in its internals. It effectively means your application doesn't really have control over such tx behavior. Moreover, as a result of the transaction rollback, our entity has been moved to the detached state, so some em.merge(entity) at the end of this method would be required.

Working Solution

So how can you deal with this automatic transaction rollback? Because we're using CMT, our only way is to define another business method that starts a fresh transaction and performs all fragile operations there. This way, even if a PersistenceException is thrown (and caught), it will mark only the new transaction to be rolled back. Our main tx will be untouched. Below you can see a code sample from here (with logging statements removed for brevity):

public void mergeEntity() {
    MyEntity entity = new MyEntity("entityName", "OK", "DEFAULT");

    em.persist(entity);

    try {
        self.tryMergingEntity(entity);
    } catch (UpdateException ex) {
        entity.setContent("");
        entity.setCode("ERROR");
    }
}

@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
public void tryMergingEntity(final MyEntity entity) throws UpdateException {
    entity.setContent("tooLongContentValue");

    em.merge(entity);

    try {
        em.flush();
    } catch (PersistenceException e) {
        throw new UpdateException();
    }
}

Mind that:

- UpdateException is an @ApplicationException that extends Exception (so it is rollback = false by default). It is used to inform the caller that the update operation has failed. As an alternative you could change the tryMergingEntity(-) method signature to return boolean instead of void; this boolean could describe whether the update was successful or not.
- self is a self-reference to our own EJB. This is a required step to go through the EJB container proxy, which makes the @TransactionAttribute of the called method work. As an alternative you could use SessionContext#getBusinessObject(clazz).tryMergingEntity(entity).
- The em.merge(entity) is crucial. We are starting a new transaction in tryMergingEntity(-), so the entity is not in the persistence context. There is no need for any other merge or flush in this method. The tx has not been rolled back, so the regular CMT behavior applies, meaning that all changes to the entity will be automatically flushed during tx commit.
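For completeness: the original post doesn't list UpdateException itself, so here is a minimal sketch of what it could look like (the rollback element of @ApplicationException defaults to false; it's spelled out here for clarity):

import javax.ejb.ApplicationException;

// Application exceptions don't mark the CMT transaction for rollback
// unless rollback = true is set explicitly.
@ApplicationException(rollback = false)
public class UpdateException extends Exception {
}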
Let's emphasize the bottom line once again: if you catch an exception, it doesn't mean your current transaction hasn't been marked for rollback. PersistenceException is not an ApplicationException and will make your tx roll back whether you catch it or not.

JTA BMT Solution

All this time we were talking about CMT. What about JTA BMT? Well, as a bonus, find below the code which shows how to deal with this problem with BMT (accessible here as well). Note the commit of the happy path sits right after the flush, inside the try block:

public void mergeEntity() throws Exception {
    utx.begin();
    MyEntity entity = new MyEntity("entityName", "OK", "DEFAULT");
    em.persist(entity);
    utx.commit();

    utx.begin();
    entity.setContent("tooLongContentValue");

    em.merge(entity);

    try {
        em.flush();
        utx.commit();
    } catch (PersistenceException e) {
        utx.rollback();

        utx.begin();
        entity.setContent("");
        entity.setCode("ERROR");

        em.merge(entity);
        utx.commit();
    }
}

With JTA BMT we can do this all in just one method. This is because we control when our tx begins and commits/rolls back (take a look at those utx.begin()/commit()/rollback() calls). Nevertheless, the result is the same – after a PersistenceException is thrown our tx is marked for rollback, and you can check it using UserTransaction#getStatus() and comparing it to one of the constants like ...

Hadoop Books Giveaway – Roundup

Fellow geeks,

Our giveaway of Packt Publishing's books on Apache Hadoop has ended. You may find the original post for the competition here.

The Prize Winners

The 6 lucky winners that will receive the book prizes are (names are as they appeared in their emails):

Hadoop Real-World Solutions Cookbook
- Sellamuthu, Rudra Moorthy
- Josep Ventura Argerich

Hadoop Beginner's Guide
- Bhakti Rajdev
- Manuel Jesús

Hadoop MapReduce Cookbook
- Konstantin Kondratov
- Jörg Amelunxen

Each of the 6 winners will receive 1 print copy and 1 e-book of the mentioned books on Hadoop for free. Congratulations to the winners!

Discount Coupons

For the rest of our readers, Packt Publishing has kindly provided us with discount coupons.

For print (15% discount): PacktHadoop@15%off
For eBook (30% discount): PacktHadoopeBook@30%off

Those who wish to purchase the print copy will receive a 15% discount, and for the e-copy there is a 30% discount, using the aforementioned codes. The codes are valid for use till 30th April, midnight. Note that users will need to apply these codes on the checkout page of the Packt website, in the provided promotional code box. A print book purchase comes with a complimentary eBook copy of the book.

Thank you all for participating. Happy reading/coding!

The Java Code Geeks team

Types of Entity Managers: Application-managed EntityManager

The JPA specification defines a few types of EntityManagers / Persistence Contexts. We can have:

- extended or transaction-scoped EntityManagers,
- container-managed or application-managed EntityManagers,
- JTA or resource-local EntityManagers.

Besides the above distinctions, we also have two main environments in which an EntityManager / Persistence Context can exist – Java EE and Java SE. Not every option is available in Java EE, and not every one is possible in Java SE. In the rest of the post I refer to the Java EE environment.

Ok, so before we proceed to the real topic of this post — which is the behaviour of an EntityManager created manually in Java EE using an EntityManagerFactory — let's just shortly describe the above EM types.

Extended vs transaction-scoped

This feature tells us whether the EntityManager's operations might span multiple transactions. By default the transaction-scoped Persistence Context is used, which means that all changes are flushed and all managed entities become detached when the current transaction commits. The extended scope is available only for Stateful EJBs, which makes perfect sense as SFSBs can save state, so the end of one business method doesn't necessarily mean the end of the transaction. With SLSBs the story is different – the persistence context must end when the business method finishes, because on the next invocation we have no idea in which EJB instance we'll end up. One method = one transaction; only the transaction-scoped EntityManager is allowed for SLSBs. You can control whether the EntityManager is extended or transaction-scoped at injection time:

@PersistenceContext(type = javax.persistence.PersistenceContextType.EXTENDED)
EntityManager em;

By default it's javax.persistence.PersistenceContextType.TRANSACTION. As a side note – using an extended EntityManager might allow you to create some interesting solutions; take a look at Adam Bien's non-transactional bean with a transactional save method trick. He uses this approach to make all the changes automatically flushed when the transaction starts and ends (and he actually does that by invoking a special, artificial method). Extended and transaction-scoped Persistence Contexts are allowed only in the case of container-managed EntityManagers.

Container-managed vs application-managed

In the great majority of Java EE applications you're just injecting the EntityManager with @PersistenceContext like this:

@PersistenceContext
EntityManager em;

This actually means that you're letting the container inject the EntityManager for you (the container creates it from the EntityManagerFactory behind the scenes). This means that your EntityManager is container-managed. Alternatively, you can create an EntityManager by yourself – from the EntityManagerFactory. You can obtain the factory by injection:

@PersistenceUnit
EntityManagerFactory emf;

Then to get the EntityManager you need to invoke emf.createEntityManager(). And here it is – you're now using an application-managed EntityManager. The application (you) is responsible for the creation and removal of the EntityManager. Every application-managed Persistence Context has the extended scope. You might use it if you want to have control over the created EM – e.g. if you want to set some property of the underlying JPA implementation, or just plug yourself in before the business method puts its hands on it.
You will, however, need to move the created EntityManager across the multiple beans involved in the transaction – the container won't do it for you, and every time you invoke emf.createEntityManager() you're creating an EntityManager that is connected to a new Persistence Context. You might use CDI for EntityManager sharing, but that's the topic of one of the last sections.

JTA vs resource-local

This property defines whether you want JTA to manage your EntityManager's transactions, or you want to use its direct API to begin and commit. If you're using a container-managed EntityManager, it automatically means that you have to use a JTA EntityManager. If you're using an application-managed EntityManager, you can have it JTA or resource-local. In real life it means that if you're using a JTA EntityManager you take care just of the higher-level transaction management, either:

- declaratively, using annotations or XML JTA transaction attributes, or
- programmatically, using javax.transaction.UserTransaction.

If you're using a resource-local EntityManager you need to go a bit deeper and use EntityManager.getTransaction(), which returns a javax.persistence.EntityTransaction, and invoke begin(), commit(), rollback(), etc. You define this feature in persistence.xml using the transaction-type attribute:

<?xml version="1.0" encoding="UTF-8"?>
<persistence ...>
    <persistence-unit transaction-type="RESOURCE_LOCAL" ... >
</persistence>

The other possible value (and the default when transaction-type is not defined) is JTA.

Application-managed JTA EntityManager in Java EE

As much as this subtitle sounds complex, knowing all the previous types of EntityManagers and Persistence Contexts you should understand exactly what it refers to:

- "Application-managed" means that we'll be injecting @PersistenceUnit EntityManagerFactory instead of EntityManager,
- "Java EE" because these examples (published on github) are to be used only in a Java EE application server,
- "JTA" because we'll be using the JTA transaction level, so we won't be using javax.persistence.EntityTransaction.

Now, the first thing you need to realize is how you use JTA transactions. There are two types of JTA transaction management – container-managed (CMT) and bean-managed (BMT). Container Managed JTA Transactions (CMT) means that you use javax.ejb.TransactionAttribute for defining whether the tx should be active and where the transaction boundaries are. This is the default JTA management type. Alternatively, you can choose to demarcate the JTA transactions yourself. This is called Bean Managed JTA Transactions (BMT). The application (you) is responsible for starting, rolling back or committing the transaction. How can you control a JTA transaction? You do that using javax.transaction.UserTransaction. How do you obtain one? Well, there are at least 3 ways to do this:

- it's bound to JNDI in the component-private namespace (java:comp/UserTransaction), so you can just look it up using InitialContext,
- you can use SessionContext to access it – SessionContext#getUserTransaction(),
- because it's bound under a well-known JNDI name, you can let the container inject it using: @Resource UserTransaction utx;

If you have the UserTransaction, you can start demarcating what is to be executed within the transaction. Notice that you're still controlling only the JTA transactions – you're not even touching the EntityManager's resource-local transactions.
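For contrast with the JTA cases that follow, here is a minimal sketch of driving a resource-local, application-managed EntityManager through EntityTransaction. The persistence unit name is made up, the Customer entity is the one used in the examples below, and this bootstrap style is typical for Java SE; whether your Java EE container supports RESOURCE_LOCAL units this way is worth verifying:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;
import javax.persistence.Persistence;

public class ResourceLocalSketch {

    public void persistCustomer(String firstName, String lastName) {
        // "my-resource-local-pu" is a hypothetical unit declared with
        // transaction-type="RESOURCE_LOCAL" in persistence.xml.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-resource-local-pu");
        EntityManager em = emf.createEntityManager();
        EntityTransaction tx = em.getTransaction();
        try {
            tx.begin();
            em.persist(new Customer(firstName, lastName));
            tx.commit();
        } catch (RuntimeException e) {
            if (tx.isActive()) {
                tx.rollback(); // we, not JTA, are responsible for the rollback
            }
            throw e;
        } finally {
            em.close();
        }
    }
}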
When Is the EntityManager in the JTA Transaction?

Without the previous introduction you might think that an application-managed EntityManager means that you're on your own for everything – creating and sharing the EntityManager, beginning the tx, committing, closing. However, knowing all the above differences, you know that you can use JTA transactions with your application-managed EntityManager. But the question is – how do you make it aware of the active JTA transaction? If we have a container-managed EntityManager we know that the container manages it all, but in case we're on our own – how do we do that? Actually, it depends on where the EntityManager was created. Find some examples below (the complete code can be found on my github account):

Case 1: We invoke the following code without an active transaction (so we have TransactionAttribute.NEVER or TransactionAttribute.NOT_SUPPORTED in case of CMT, or we didn't invoke UserTransaction.begin() in case of BMT):

EntityManager em = emf.createEntityManager();
em.persist(new Customer(firstName, lastName));

Results: The EntityManager operations don't throw any exceptions while persisting, but none of the changes are committed. There is no active transaction, so no changes are made.

Case 2: We invoke the following code using BMT:

utx.begin();

EntityManager em = emf.createEntityManager();

em.persist(new Customer(firstName, lastName));

utx.commit();

Results: The new data is properly persisted during the JTA commit (in the last line).

Case 3: We invoke the following code using BMT:

EntityManager em = emf.createEntityManager();
utx.begin();

em.persist(new Customer(firstName, lastName));

utx.commit();

Results: The EntityManager is outside of the transaction because it was created before the JTA transaction was started. The changes are not persisted despite the commit of the JTA transaction. No exceptions are thrown.

In the case of the second example you might ask yourself – is it possible to first create an EntityManager, then start a transaction, and finally somehow make the EntityManager aware of the surrounding tx? In fact yes, you can do that, and that's exactly what the EntityManager#joinTransaction() method is for. The following two cases show how it can be used:

Case 4: We invoke the following code using BMT:

EntityManager em = emf.createEntityManager();
utx.begin();
em.joinTransaction();

em.persist(new Customer(firstName, lastName));

utx.commit();

Results: Here we explicitly told the EntityManager to join the active JTA transaction. As a result, the EntityManager will flush all its changes during the JTA commit.

Case 5: We invoke the following code using BMT:

EntityManager em = emf.createEntityManager();
utx.begin();

em.joinTransaction();

em.persist(new Customer(firstName, lastName));

utx.commit();

utx.begin();

em.joinTransaction();

em.persist(new Customer(firstName, lastName));

utx.commit();

Results: Both EntityManager operations are properly persisted. Here we've shown that an application-managed Persistence Context can span multiple JTA transactions (note that we didn't create another EntityManager but just reused the one from the previous transaction).

And here is what the JPA spec (JPA 2.0 Final Release) tells you regarding application-managed Persistence Contexts:

7.7 Application-managed Persistence Contexts: When a JTA application-managed entity manager is used, if the entity manager is created outside the scope of the current JTA transaction, it is the responsibility of the application to associate the entity manager with the transaction (if desired) by calling EntityManager.joinTransaction. If the entity manager is created outside the scope of a JTA transaction, it is not associated with the transaction unless EntityManager.joinTransaction is called.

Sharing of the EntityManager Using CDI

As mentioned, if you want to share your EntityManager between the components that constitute one transaction, you should pass it around manually (hey, after all it's "application-managed"). CDI might be a solution here. You could produce a request-scoped EntityManager and inject it into any component you need. It could look like this (in real life you'd need to take care of the disposal of the EM as well; see the sketch below):

public class Resources {

    @PersistenceUnit
    EntityManagerFactory emf;

    @Produces
    @RequestScoped
    public EntityManager createEntityManager() {
        return emf.createEntityManager();
    }
}
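A minimal sketch of that disposal, as a CDI disposer method that could be added to the Resources class above (the method name is arbitrary):

import javax.enterprise.inject.Disposes;

public void closeEntityManager(@Disposes EntityManager em) {
    // Invoked by CDI when the request-scoped EntityManager goes away.
    if (em.isOpen()) {
        em.close();
    }
}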
Now in every bean we could have:

@Stateless
public class MyBean {

    @Inject
    EntityManager em;
}

It seems a very clean way of sharing an application-managed Persistence Context between the different components that constitute the transaction. However, the thing I feared was this: knowing that the application-managed EntityManager's transactionality depends on the location where it was created, such an approach might sometimes give you nasty results. Take the following code as an example (it's also available in my Github project, in this class precisely):

@Stateless
@TransactionAttribute(TransactionAttributeType.NEVER)
public class BeanABoundary {

    @Inject
    private EntityManager em;

    @EJB
    BeanB beanB;

    public void invoke() {
        em.getProperties();
        beanB.invoke();
    }
}

Notice that BeanA is a non-transactional resource. Also note that we injected and invoked an operation on the EntityManager (this makes the injection actually be performed). Now, if BeanB is transactional and also injects and uses the EntityManager – we'll end up with a non-transactional EntityManager that will not throw any exception and will not save any changes to the database. In the case of the old @PersistenceContext we'd be in a transaction, because the EntityManager would be container-managed and the container would be aware of the currently active transaction. The container is responsible for sharing the EntityManager across transaction boundaries. In the case of the CDI producer method shown, CDI doesn't know about the running transaction and just shares the EntityManager anyway.

Of course, one may use CDI and create a @Produces @PersistenceContext EntityManager em and then use @Inject EntityManager. This would work exactly like the @PersistenceContext EntityManager, but allows us to define, e.g., the name of the persistence unit in a single place that produces the EntityManager. This, however, is not an option if we want to have an application-managed EntityManager.

Reference: Types of Entity Managers: Application-managed EntityManager from our JCG partner Piotr Nowicki at the Piotr Nowicki's Homepage blog.

Gang of Four Patterns With Type-Classes and Implicits in Scala (Part 2)

Type-classes are a powerful tool for library creators and maintainers. They reduce boilerplate, open libraries to extension, and act as a compile-time switch. Similarly, the GoF patterns are a collection of software organizational patterns aimed at improving the quality of code. The last blog post explored using one such pattern with type-classes and implicits, the bridge pattern. In this post I'll move on to using the adapter pattern.

The adapter pattern is the most widely used and recognized type-class-using GoF pattern within the Scala community. The standard library includes several examples such as Ordering and Numeric. As with any well designed and implemented library, their use is transparent and invisible to library consumers. Many people coming to Scala have probably used these without even realizing it. If you're already familiar with the adapter pattern, skip the next section.

Adapter Pattern

The adapter pattern (sometimes called a wrapper pattern) is an OO design concept invented to solve the problem of code duplication and promote code reuse in the presence of disparate interfaces. It does so by unifying code around a common interface, and encapsulates the GoF philosophy of "programming to an interface." From this brief description the bridge and adapter patterns may seem indistinguishable, but their purposes are strikingly different. Whereas the bridge pattern is used to allow N concepts to vary independently behind N interfaces, the adapter pattern is used to reduce N interfaces to one when the underlying concept is the same. To wit, it is used as the "glue" to tie or adapt inconsistent APIs together (hence the name). Adapters are more of a handyman's tool for pragmatic code construction. The common use case is when there is a preexisting component or library that has to be made to work within a code base that was not designed to accommodate it. In general, they are built in advance only when libraries need backwards compatibility between versions, or to serve as extension points for future library users.

So what does it look like? Let's take as an example two interfaces for addition:

trait Addable {
  def add(x: Int, y: Int) = x + y
}

trait Summable {
  def plus(x: Int)(y: Int) = x + y
}

where Summable's "plus" method exposes the curried form of Addable's "add" method. If we wanted to make these two interfaces inter-operable in a purely OO world we'd produce:

class Add2SumAdapter(addable: Addable) extends Summable {
  override def plus(x: Int)(y: Int) = addable.add(x, y)
}

class Sum2AddAdapter(sum: Summable) extends Addable {
  override def add(x: Int, y: Int) = sum.plus(x)(y)
}

which essentially packages up one interface for another. Library decoupling achieved.

As an aside, it has been argued that as we move towards a functional style of programming we can mitigate the need and boilerplate required for the adapter pattern. In FP languages the adapter morphs into currying, function composition and lifting. This change in dynamic is important to realize, as mitigation is not equivalent to elimination. One needs to think about the type signatures of the functions above to see how the need for adaptation continues to be true in functional terms:

type Plus = (Int => Int => Int)
type Add = ((Int, Int) => Int)

There just is no way to avoid confronting the type mismatch. It will have to be handled somewhere.

Adapter Pattern in Type-Classes

Before we begin with the adapter pattern and type-classes in depth, let's take a step back and talk about bigger issues; library scope issues.
To that end, let's talk about a concept related to the adapter pattern, the Dependency Inversion Principle, or DIP for short. The hallmark of code written using this technique is a decoupling of higher-level modules from lower-level modules, achieved by forcing the lower-level modules to conform to an interface defined at the higher level. Thus the inversion is that the higher-level module defines the building blocks upon which it is built, and not the other way around. DIP results in cleaner and more extensible code, but to do so it relies heavily on the use of structural design patterns, the adapter being one.

Scala allows for implicit conversions between types and, thus, DIP could be implemented in terms of implicit conversions in a very OO style of coding. This would solve the problem of too much interface-adapting boilerplate within our code but, as a side effect, it would lead to code that was unnaturally hard to debug and errors that were even harder to trace. There's a reason these were turned off by default in 2.10 (and if you've experienced the joys of mutable implicit state you'd have a painful understanding why).

Generalization of DIP at the type level leads to an even more powerful construct known as ad hoc polymorphism. In this case we define a type-class which acts as an adapter, allowing objects of that type to express a finite set of operations. This set of operations defines an interface to which code can be written, independently of the type of the object. Instead of wrapping the class within an interface, the object and its type become an argument to the interface. Implicit scope resolution is relied upon to inject the correct type-class based on the argument type at the function call site.

Before we go further, let me point out that DIP, when used in the context of implicits as we have just described, may sound similar to another code organizational technique known as dependency injection. DIP is not dependency injection, and neither is what we have just described. DIP is resolved at compile time while DI revolves around run-time resolution. One can make use of the other, but they are as different as different can be.

Let us look at two example implementations of date and time in Java: java.Date and joda.DateTime. Both represent date and time with methods for modification. However, one is a mutable construct whose methods work by side effect, and the other is immutable, with methods that return a new instance. If we wish to work with a date/time type but still remain decoupled from the concrete realization of that type, we'd encode the interface of the behaviors in a reusable type-class:

trait DateFoo[DateType] {
  def addHours(date: DateType)(hours: Int): DateType
  def addDays(date: DateType)(days: Int): DateType
  def addMonths(date: DateType)(months: Int): DateType
  def addYears(date: DateType)(years: Int): DateType
  def now(): DateType
}

Anywhere in our application we could then write code which looked like the following:

trait TimeTrackDAO {
  def checkin[A](unit: String, key: String, time: A)(implicit date: DateFoo[A]): A
  def checkout[A](unit: String, key: String, time: A)(implicit date: DateFoo[A]): A
  def itemize(unit: String, key: String): Int
}

and as long as there was an implicitly scoped DateFoo type-class for "A", both the "checkin" and "checkout" methods would just work. And if there wasn't such a type? We could only use the "itemize" method, because the other two would not compile! Meditate on that for a second.
Let me say it another way: there is nothing stopping us from using the "itemize" method of our TimeTrackDAO if we have not defined any DateFoo type-class in the system. Only when we attempt to use either the "checkin" or "checkout" methods with a type lacking a DateFoo would our code fail to compile. This is our compile-time switch mentioned in the introductory paragraph. Type-classes with implicit resolution allow class/trait functionality to be enabled or disabled at compile time based on a type parameter and scoping rules.

Conclusion

OO design patterns and OO concepts have spent years being refined and explored by developers due, in large part, to the predominance of OO in mainstream languages. Good, SOLID methodologies have arisen as a need to combat the complexity naturally arising out of OO-based designs. Functional concepts, such as type-classes, have only begun to trickle into languages which share a hybridization of the two paradigms. While there is a divide between standard OOP idioms and functional constructs like type-classes, the two can actually be used towards a common good. FP is not a silver bullet. It will not invalidate the common wisdom of many OO-inspired ideas, but FP concepts do force us to rethink the approach we take in wrestling with code complexity. The not-so-surprisingly useful nature of languages with functions as first-class citizens is just the tip of the iceberg. Using the adapter pattern with FP-inspired type-classes, we've shown how to reduce boilerplate, open code to extension and impose compile-time constraints. There has been no "magic" and no hard-to-trace issues like those that arise out of implicit conversions or DI-styled libraries. Type-classes using the adapter pattern are deliberate and explicit in use, with well-defined scope resolution which is enforced at compile time. They are the perfect combination of OO and FP principles.

Reference: Gang of Four Patterns With Type-Classes and Implicits in Scala (Part 2) from our JCG partner Owein Reese at the Statically Typed blog.

EJB Inheritance is Different From Java Inheritance

Despite the fact that EJB inheritance sometimes uses Java inheritance, the two are not always the same. Just as you could read in my previous post, an EJB doesn't have to implement any interface to expose a business interface. The other way around is also true: just because an EJB implements some interface or extends another EJB doesn't mean it exposes all or any of its views.

Let's say we want a basic EJB exposing a remote business interface. We then want to extend this EJB and override the remote business methods. Nothing fancy, right? But let's take a look at some examples.

Remote business interface:

public interface RemoteA {
    void remoteA();
}

Base EJB:

@Stateless
@Remote(RemoteA.class)
public class SuperclassEJB implements RemoteA {
    public void remoteA() {
        // Some basic code that can be overridden.
    }
}

The above SuperclassEJB is our base EJB. It exposes one remote business interface with one method. Now let's move to the subclasses of our EJB.

Case 1 – Java Inheritance

@Stateless
public class SubclassEJB1 extends SuperclassEJB {
    // 'remoteA' is not an EJB business method. EJB inheritance is strictly for implementation reuse.
}

SubclassEJB1 is an EJB – that's for sure. But what interfaces does it expose? Because an EJB component must explicitly define what business interfaces it exposes, our EJB doesn't have any real business methods at all! It's a new, fresh no-interface view EJB. This means that if in your code you do:

- @EJB SubclassEJB1 myEJB – it'll inject your no-interface view EJB with no business methods in it.
- @EJB(name="SubclassEJB1") RemoteA myEJB – it'll refuse to make this injection, because RemoteA is not a business interface of our EJB.

What's interesting – if instead of container injection using @EJB you do a lookup like this:

RemoteA subclassEJB1 = (RemoteA) initialContext.lookup("java:module/SubclassEJB1");
subclassEJB1.remoteA();

it won't throw any exception and will invoke the remoteA() method correctly. Why? Because what we really looked up was the no-interface view of our EJB. We then cast it to RemoteA (which is correct from a plain Java perspective) and invoked a no-interface view method. I think you'll agree it can be quite confusing – instead of using the remote interface we've ended up with a local bean method being correctly invoked.

Case 2 – Java Inheritance With Interface Implementation

@Stateless
public class SubclassEJB2 extends SuperclassEJB implements RemoteA {
    // 'remoteA' is correctly exposed as an EJB business method BUT through an implicit local interface.
    // The method implementation is correctly inherited.
}

Now this looks really weird. Our EJB is extending another EJB and implements a remote business interface, right? Well, not exactly. We're implementing the plain Java RemoteA interface. This interface itself doesn't carry a @Remote annotation, and neither does SubclassEJB2 – the @Remote declared on SuperclassEJB applies to that bean only. This means we're exposing RemoteA as a local business interface. This is one of the default behaviors of EJBs that was discussed in my previous post. So if in your code you do:

- @EJB(name="SubclassEJB2") RemoteA myEJB – it'll use the local business interface. Quite messed up, don't you think?

Case 3 – Java Inheritance With Interface Implementation and View Declaration

@Stateless
@Remote(RemoteA.class)
public class SubclassEJB3 extends SuperclassEJB {
    // Method 'remoteA' is correctly exposed as an EJB business method (thanks to @Remote on the EJB).
    // The method implementation is correctly inherited.
}

This is a correct example of EJB extension. We've correctly reused the implementation with Java inheritance, implemented the EJB remote business interface and exposed it using @Remote. The implements clause is not even required – the @Remote alone would be enough. However, the @Remote part is crucial. This means that if in your code you do:

- @EJB(name="SubclassEJB3") RemoteA myEJB – it'll properly use the remote business interface.
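A variation worth noting, not covered in the original post: per the general EJB 3.x rule, an interface annotated with @Remote becomes a remote business interface of any bean that lists it in its implements clause. So if you control the interface, you can annotate it once instead of repeating @Remote on every subclass bean. A hypothetical SubclassEJB4 sketch:

@Remote
public interface RemoteA {
    void remoteA();
}

@Stateless
public class SubclassEJB4 extends SuperclassEJB implements RemoteA {
    // The implements clause is now enough: RemoteA itself carries @Remote,
    // so this bean exposes it as a remote business interface without
    // repeating @Remote(RemoteA.class) on the bean class.
}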
Conclusion

As you can see, EJB inheritance sometimes might not be as easy as expected. It requires you to know the basics of component and view definitions. By default, component inheritance is purely for code reuse – not for component extension. Without this knowledge you might bump into some quite weird and frustrating problems. All of the examples were tested on JBoss AS 7.1.1.

Reference: EJB Inheritance is Different From Java Inheritance from our JCG partner Piotr Nowicki at the Piotr Nowicki's Homepage blog.