

Android listview background row style: Rounded Corner, alternate color

One aspect we didn’t consider in the previous posts is how to apply a style or background to the ListView items (rows). We can customize the look of the ListView however we like: for example, we can use a rounded-corner background, alternating colors, and so on. So far we have covered only custom adapters, without taking into account how each item inside the ListView appears. In this post we describe how to use resources to customize the item look. The first example shows how to create rounded corners for each item inside a ListView; the second shows how to alternate the background color.

ListView with rounded corners

Let’s suppose we want rounded corners for each item. How can we do this? We need to create some drawable resources and apply them to each item. As you already know, we have to create a custom adapter to implement this behaviour. We won’t spend too many words on adapters in this post, because we described them here and here. As we said, the first thing we need is a drawable resource. This is a powerful feature of Android, because it lets us define geometric figures in XML. We have to specify some information to create this figure:

- border size and color
- background color (in our case a solid color)
- corners

We need to create an XML file under the res/drawable directory. Let’s call this file rounded_corner.xml. This file contains a shape definition.
A shape is a geometric figure described by nested tags:

- stroke – the outline of the shape (width, color, dashWidth and dashGap)
- solid – the solid colour that fills the shape
- corners – the corner radius

So rounded_corner.xml looks like:

```xml
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android">
    <solid android:color="#00FF00"/>
    <corners android:radius="5dp"/>
    <padding android:left="3dp" android:top="3dp"
             android:right="3dp" android:bottom="3dp"/>
    <stroke android:width="3dp" android:color="#00CC00"/>
</shape>
```

Once we have created our shape, we need to apply it to the items. To do that we create another XML file that describes when the shape applies. Here we use the selector tag, keyed on the item state. We apply the shape when:

- state = enabled
- state = pressed
- state = focused

So our file (listview_selector.xml) looks like:

```xml
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:drawable="@drawable/rounded_corner" android:state_enabled="true"/>
    <item android:drawable="@drawable/rounded_corner" android:state_pressed="true"/>
    <item android:drawable="@drawable/rounded_corner" android:state_focused="true"/>
</selector>
```

Now that we have defined our resource, we simply apply it in our adapter:

```java
public View getView(int position, View convertView, ViewGroup parent) {
    View v = convertView;
    PlanetHolder holder = new PlanetHolder();

    // First let's verify the convertView is not null
    if (convertView == null) {
        // This is a new view: we inflate the row layout
        LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        v = inflater.inflate(R.layout.row_layout, null);
        // Now we can fill the layout with the right values
        TextView tv = (TextView) v.findViewById(R.id.name); // id assumed from row_layout
        holder.planetNameView = tv;
        v.setTag(holder);
        v.setBackgroundResource(R.drawable.rounded_corner);
    } else {
        holder = (PlanetHolder) v.getTag();
    }

    Planet p = planetList.get(position);
    holder.planetNameView.setText(p.getName());
    return v;
}
```

If we run the app, each row is drawn with rounded corners.

ListView with alternate colors

As described above, if we want to change how each row looks inside the ListView, we simply change the resource. For example, suppose we want to alternate the row color. In this case we need two drawable resources, one for each background:

even_row.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android">
    <solid android:color="#A0A0A0"/>
    <padding android:left="3dp" android:top="3dp"
             android:right="3dp" android:bottom="3dp"/>
    <stroke android:width="1dp" android:color="#00CC00"/>
</shape>
```

odd_row.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android">
    <solid android:color="#F0F0F0"/>
    <padding android:left="3dp" android:top="3dp"
             android:right="3dp" android:bottom="3dp"/>
    <stroke android:width="1dp" android:color="#00CC00"/>
</shape>
```

We also need two selectors that use these drawables:

listview_selector_even.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:drawable="@drawable/even_row" android:state_enabled="true"/>
    <item android:drawable="@drawable/even_row" android:state_pressed="true"/>
    <item android:drawable="@drawable/even_row" android:state_focused="true"/>
</selector>
```

listview_selector_odd.xml:

```xml
<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:drawable="@drawable/odd_row" android:state_enabled="true"/>
    <item android:drawable="@drawable/odd_row" android:state_pressed="true"/>
    <item android:drawable="@drawable/odd_row" android:state_focused="true"/>
</selector>
```

And finally we apply them inside our custom adapter:

```java
public View getView(int position, View convertView, ViewGroup parent) {
    View v = convertView;
    PlanetHolder holder = new PlanetHolder();

    // First let's verify the convertView is not null
    if (convertView == null) {
        // This is a new view: we inflate the row layout
        LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        v = inflater.inflate(R.layout.row_layout, null);
        // Now we can fill the layout with the right values
        TextView tv = (TextView) v.findViewById(R.id.name); // id assumed from row_layout
        holder.planetNameView = tv;
        v.setTag(holder);
    } else {
        holder = (PlanetHolder) v.getTag();
    }

    // Set the background on every call, not only when inflating, so that
    // recycled rows always get the background matching their current position.
    if (position % 2 == 0) {
        v.setBackgroundResource(R.drawable.listview_selector_even);
    } else {
        v.setBackgroundResource(R.drawable.listview_selector_odd);
    }

    Planet p = planetList.get(position);
    holder.planetNameView.setText(p.getName());
    return v;
}
```

Running the app, the rows alternate between the two backgrounds. Source code @ github.

Reference: Android listview background row style: Rounded Corner, alternate color from our JCG partner Francesco Azzola at the Surviving w/ Android blog.
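One small refinement worth noting: the even/odd decision in getView is pure logic, so it can be factored into a plain-Java helper and unit tested without an emulator. A minimal sketch (the RowStyles class and method name are my own, not from the original post; the resource ids are passed in as plain ints):

```java
public final class RowStyles {

    private RowStyles() {
    }

    // Returns the background resource id for a row:
    // even positions get evenRes, odd positions get oddRes.
    public static int backgroundFor(int position, int evenRes, int oddRes) {
        return (position % 2 == 0) ? evenRes : oddRes;
    }
}
```

Inside the adapter, the branch then collapses to a single call such as `v.setBackgroundResource(RowStyles.backgroundFor(position, R.drawable.listview_selector_even, R.drawable.listview_selector_odd));`.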

Choosing between a Pen Test and a Secure Code Review

Secure Code Reviews (bringing someone in from outside of the team to review/audit the code for security vulnerabilities) and application Pen Tests (again, bringing a security specialist in from outside the team to test the system) are both important practices in a secure software development program. But if you could only do one of them, if you had limited time or limited budget, which should you choose? Which approach will find more problems and tell you more about the security of your app and your team? What will give you more bang for your buck? Pen testing and code reviews are very different things – they require different work on your part, they find different problems and give you different information. And the cost can be quite different too.

White Box / Black Box

We all know the difference between white box and black box. Because they can look inside the box, code reviewers can zero in on high-risk code: public interfaces, session management, password management, access control, crypto and other security plumbing, code that handles confidential data, error handling, auditing. By scanning through the code they can check whether the app is vulnerable to common injection attacks (SQL injection, XSS, …), and they can look for time bombs and back doors (which are practically impossible to test for from outside) and other suspicious code. They may find problems with concurrency and timing and other code quality issues that aren’t exploitable but should be fixed anyway. And a good reviewer, as they work to understand the system and its design and ask questions, can also point out design mistakes, incorrect assumptions and inconsistencies – not just coding bugs.

Pen testers rely on scanners and attack proxies and other tools to help them look for many of the same common application vulnerabilities (SQL injection, XSS, …) as well as run-time configuration problems. They will find information disclosure and error handling problems as they hack into the system.
And they can test for problems in session management and password handling and user management, authentication and authorization bypass weaknesses, and even find business logic flaws, especially in familiar workflows like online shopping and banking functions. But because they can’t see inside the box, they – and you – won’t know whether they have covered all of the high-risk parts of the system.

The kind of security testing that you are already doing on your own can influence whether a pen test or a code review is more useful. Are you testing your web app regularly with a black box dynamic vulnerability scanning tool or service? Or running static analysis checks as part of Continuous Integration? A manual pen test will find many of the same kinds of problems that an automated dynamic scanner will, and more. A good static analysis tool will find at least some of the same bugs that a manual code review will – a lot of reviewers use static analysis source code scanning tools to look for low hanging fruit (common coding mistakes, unsafe functions, hard-coded passwords, simple SQL injection, …). Superficial tests or reviews may not involve much more than someone running one of these automated scanning tools and reviewing and qualifying the results for you. So, if you’ve been relying on dynamic analysis testing, it makes sense to get a code review to look for problems that you haven’t already tested for yourself. And if you’ve been scanning code with static analysis tools, then a pen test may have a better chance of finding different problems.

Costs and Hassle

A pen test is easy to set up and manage. It should not require a lot of time and hand holding from your team, even if you do it right and make sure to explain the main functions of the application to the pen test team, walk them through the architecture, and give them all the access they need.
Code reviews are generally more expensive than pen tests, and will require more time and effort on your part – you can’t just give an outsider a copy of the code and expect them to figure it all out on their own. There is more hand holding needed, both ways. You holding their hand: explaining the architecture, how the code is structured, how the system works, and the compliance and risk drivers, answering questions about the design and the technology as they go along. And them holding your hand: patiently explaining what they found and how to fix it, and working with your team to understand whether each finding is worth fixing, weeding out false positives and other misunderstandings. This hand holding is important. You want to get maximum value out of a reviewer’s time – you want them to focus on high-risk code and not get lost on tangents. And you want to make sure that your team understands what the reviewer found, how important each bug is, and how it should be fixed. So not only do you need people helping the reviewer – they should be your best people.

Intellectual property, confidentiality and other legal concerns are important, especially for code reviews – you’re letting an outsider look at the code, and while you want to be transparent in order to ensure that the review is comprehensive, you may also be risking your secret sauce. Solid contracting and working with reputable firms will minimize some of these concerns, but you may also need to strictly limit what code the reviewer gets to see.

Other Factors in Choosing between Pen Tests and Code Reviews

The type of system and its architecture can also impact your decision. It’s easy to find pen testers who have lots of experience in testing web portals and online stores – they’ll be familiar with the general architecture, recognize common functions and workflows, and can rely on out-of-the-box scanning and fuzzing tools to help them test.
This has become a commodity-based service, where you can expect a good job done for a reasonable price. But if you’re building an app with proprietary system-to-system APIs or proprietary clients, or you are working in a highly-specialized technical domain, it’s harder to find qualified pen testers, and they will cost more. They’ll need more time and help to understand the architecture and the app, how everything fits together and what they should focus on in testing. And they won’t be able to leverage standard tools, so they’ll have to roll something of their own, which will take longer and may not work as well. A code review could tell you more in these cases. But the reviewer has to be competent in the language(s) that your app is written in – and, to do a thorough job, they should also be familiar with the frameworks and libraries that you are using. Since it is not always possible to find someone with the right knowledge and experience, you may end up paying them to learn on the job – and relying a lot on how quickly they learn. And of course if you’re using a lot of third party code for which you don’t have source, then a pen test is really your only choice.

Are you in a late stage of development, getting ready to release? What you care about most at this point is validating the security of the running system, including the run-time configuration, and, if you’re really late in development, finding any high-risk exploitable vulnerabilities, because that’s all you will have time to fix. This is where a lot of pen testing is done. If you’re in the early stages of development, it’s better to choose a code review. Pen testing doesn’t make a lot of sense (you don’t have enough of the system to do real system testing), and a code review can help set the team on the right path for the rest of the code that they have to write.
Learning from and using the results

Besides finding vulnerabilities and helping you assess risk, a code review or a pen test both provide learning opportunities – a chance for the development team to understand and improve how they write and test software. Pen tests tell you what is broken and exploitable – developers can’t argue that a problem isn’t real, because an outside attacker found it, and that attacker can explain how easy or hard it was for them to find the bug, and what the real risk is. Developers know that they have to fix something – but it’s not clear where and how to fix it. And it’s not clear how they can check that they’ve fixed it right. Unlike most bugs, there are no simple steps for the developer to reproduce the bug themselves: they have to rely on the pen tester to come back and re-test. It’s inefficient, and there isn’t a nice tight feedback loop to reinforce understanding.

Another disadvantage with pen tests is that they are done late in development, often very late. The team may not have time to do anything except triage the results and fix whatever has to be fixed before the system goes live. There’s no time for developers to reflect and learn and incorporate what they’ve learned. There can also be a communication gap between pen testers and developers. Most pen testers think and talk like hackers, in terms of exploits and attacks. Or they talk like auditors, compliance-focused, mapping their findings to vulnerability taxonomies and risk management frameworks, which don’t mean anything to developers. Code reviewers think and talk like programmers, which makes code reviews much easier to learn from – provided that the reviewer and the developers on your team make the time to work together and understand the findings.
A code reviewer can walk the developer through what is wrong, explain why and how to fix it, and answer the developer’s questions immediately, in terms that a developer will understand, which means that problems can get fixed faster and fixed right.

You won’t find all of the security vulnerabilities in an app through a code review or a pen test – or even from doing both (although you’d have a better chance). If I could only do one or the other, all other factors aside, I would choose a code review. A review will take more work, and probably cost more, and it might not even find as many security bugs. But you will get more value in the long term from a code review. Developers will learn more, and learn it faster, hopefully enough to understand how to look for and fix security problems on their own, and, even more important, to avoid them in the first place.

Reference: Choosing between a Pen Test and a Secure Code Review from our JCG partner Jim Bird at the Building Real Software blog.

JPA 2 | EntityManagers, Transactions and everything around it

Introduction

One of the most confusing and unclear things for me as a Java developer has been the mystery surrounding transaction management in general, and how JPA handles it in particular: when does a transaction get started, when does it end, how are entities persisted, what is the persistence context, and much more. Frameworks like Spring do not help in understanding the concepts either, as they provide another layer of abstraction which makes things harder to understand. In today’s post, I will try to demystify some of the things behind JPA’s specification of entity management and transactions, and show how a better understanding of the concepts helps us design and code more effectively. We will try to keep the discussion technology- and framework-agnostic, although we will look at both Java SE (where a Java EE container is not available) and Java EE based examples.

Basic Concepts

Before diving into greater detail, let's quickly walk through some basic terms and what they mean in JPA:

- EntityManager – a class that manages the persistent state (or lifecycle) of an entity.
- Persistence Unit – a named configuration of entity classes.
- Persistence Context – a managed set of entity instances. The entity classes are part of the Persistence Unit configuration.
- Managed Entities – an entity instance is managed if it is part of a persistence context and its Entity Manager can act upon it.

From bullet points one and three above, we can infer that an Entity Manager always manages a Persistence Context. So if we understand the Persistence Context, we understand the EntityManager.

Details

EntityManager in JPA

There are three main types of EntityManagers defined in JPA:

- Container Managed and Transaction Scoped Entity Managers
- Container Managed and Extended Scope Entity Managers
- Application Managed Entity Managers

We will now look at each one of them in slightly more detail.
Container Managed Entity Manager

When a container (be it a Java EE container or any other custom container like Spring) manages the lifecycle of the Entity Manager, the Entity Manager is said to be container managed. The most common way of acquiring a container managed EntityManager is to use the @PersistenceContext annotation on an EntityManager attribute. Here's an example:

```java
public class EmployeeServiceImpl implements EmployeeService {

    @PersistenceContext(unitName = "EmployeeService")
    EntityManager em;

    public void assignEmployeeToProject(int empId, int projectId) {
        Project project = em.find(Project.class, projectId);
        Employee employee = em.find(Employee.class, empId);
        project.getEmployees().add(employee);
        employee.getProjects().add(project);
    }
}
```

In the above example we have used the @PersistenceContext annotation on an EntityManager instance variable. The annotation's “unitName” attribute identifies the Persistence Unit for that context. Container managed Entity Managers come in two flavours:

- Transaction Scoped Entity Managers
- Extended Scope Entity Managers

Note that the scope above really means the scope of the Persistence Context that the Entity Manager manages; it is not the scope of the EntityManager itself. Let's look at each one in turn.

Transaction Scoped Entity Manager

This is the most common Entity Manager used in applications; the example above actually creates a transaction scoped Entity Manager. A transaction scoped Entity Manager is returned whenever a reference created by @PersistenceContext is resolved. The biggest benefit of using a transaction scoped Entity Manager is that it is stateless. This also makes it thread-safe and thus virtually maintenance free.
But we just said that an EntityManager manages the persistence state of an entity, and the persistence state of an entity is part of the persistence context that gets injected into the EntityManager. So how does the statement about statelessness hold? The answer lies in the fact that all container managed Entity Managers depend on JTA transactions. Every time an operation is invoked on an Entity Manager, the container proxy (the container creates a proxy around the entity manager when instantiating it) checks for an existing Persistence Context on the JTA transaction. If it finds one, the Entity Manager uses that Persistence Context; if it doesn't, it creates a new Persistence Context and associates it with the transaction.

Let's take the same example we discussed above to understand how entity managers and transactions interact:

```java
public class EmployeeServiceImpl implements EmployeeService {

    @PersistenceContext(unitName = "EmployeeService")
    EntityManager em;

    public void assignEmployeeToProject(int empId, int projectId) {
        Project project = em.find(Project.class, projectId);
        Employee employee = em.find(Employee.class, empId);
        project.getEmployees().add(employee);
        employee.getProjects().add(project);
    }
}
```

In the above example, the first line of the assignEmployeeToProject method calls find on the EntityManager. The call to find forces the container to check for an existing transaction. In some cases a transaction is guaranteed to exist (for example with stateless session beans in Java EE, where the container guarantees that a transaction is available whenever a method on the bean is called); if no transaction exists, operations that require one will throw an exception. If a transaction does exist, the container then checks whether a Persistence Context exists. Since this is the first call to any method of the EntityManager, a persistence context is not available yet, so the Entity Manager creates one and uses it to find the project instance.
In the next call to find, the Entity Manager already has an associated transaction, as well as the Persistence Context associated with it. It uses the same transaction to find the employee instance. By the end of the second line of the method, both the project and employee instances are managed. At the end of the method call, the transaction is committed and the managed project and employee instances are persisted. Another thing to keep in mind: when the transaction is over, the Persistence Context goes away.

Extended Scope Entity Manager

If and when you want the Persistence Context to be available beyond the scope of a method, you use an Entity Manager with extended scope. The best way to understand the extended scope Entity Manager is an example where a class needs to maintain state (created as a result of some transactional request, like myEntityManager.find("employeeId") followed by further use of the returned employee) and share that state across its business methods. Because the Persistence Context is shared between method calls and is used to maintain state, it is generally not thread-safe, unless you use it inside a Stateful Session Bean, where the container is responsible for thread safety. To reiterate: in a Java EE container, extended scope Entity Managers are used inside Stateful Session Beans (classes annotated with @Stateful). If you decide to use one outside a stateful bean, the container does not guarantee thread safety and you have to handle that yourself. The same is true if you are using a third-party container like Spring.

Let's look at an example of an extended scope Entity Manager in a Java EE environment using Stateful Session Beans. Our goal is to create a business class whose methods work on an instance of a LibraryUser entity. Let's call this business class LibraryUserManagementService, with a business interface UserManagementService.
LibraryUserManagementService works on a LibraryUser entity instance. A library can lend multiple books to the LibraryUser. Here's a Stateful Session Bean depicting the above scenario:

```java
@Stateful
public class LibraryUserManagementService implements UserManagementService {

    @PersistenceContext(unitName = "UserService")
    EntityManager em;

    LibraryUser user;

    public void init(String userId) {
        user = em.find(LibraryUser.class, userId);
    }

    public void setUserName(String name) {
        user.setName(name);
    }

    public void borrowBookFromLibrary(BookId bookId) {
        Book book = em.find(Book.class, bookId);
        user.getBooks().add(book);
        book.setLendingUser(user);
    }

    // ...

    @Remove
    public void finished() {
    }
}
```

In a scenario like this, where we work with a user instance, it is more natural to fetch the instance once, work our way through it, and persist it only when we are done. The problem is that the Entity Manager is transaction scoped: init runs in its own transaction (and thus has its own Persistence Context) and borrowBookFromLibrary runs in its own transaction. As a result, the user object becomes unmanaged as soon as the init method ends. To overcome exactly this sort of problem, we make use of a PersistenceContextType.EXTENDED Entity Manager. Here is the modified example, which works as intended:
```java
@Stateful
public class LibraryUserManagementService implements UserManagementService {

    @PersistenceContext(unitName = "UserService", type = PersistenceContextType.EXTENDED)
    EntityManager em;

    LibraryUser user;

    public void init(String userId) {
        user = em.find(LibraryUser.class, userId);
    }

    public void setUserName(String name) {
        user.setName(name);
    }

    public void borrowBookFromLibrary(BookId bookId) {
        Book book = em.find(Book.class, bookId);
        user.getBooks().add(book);
        book.setLendingUser(user);
    }

    // ...

    @Remove
    public void finished() {
    }
}
```

In this version, the Persistence Context that manages the user instance is created at bean initialization time by the Java EE container and stays available until the finished method is called, at which point the transaction is committed.

Application Managed Entity Manager

An Entity Manager created not by the container but by the application itself is an application managed Entity Manager. To make the definition clearer: whenever we create an Entity Manager by calling createEntityManager on an EntityManagerFactory instance, we are creating an application managed Entity Manager. All Java SE based applications use application managed Entity Managers. JPA gives us the Persistence class, which is used to bootstrap an application managed Entity Manager:

```java
EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPersistenceUnit");
EntityManager em = emf.createEntityManager();
```

Note that creating an application managed EntityManager requires a persistence.xml file in the META-INF folder of the application. The EntityManager can be created in two ways. One is shown above; the other is to pass a set of properties to the createEntityManagerFactory method:
```java
EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPersistenceUnit", myProperties);
EntityManager em = emf.createEntityManager();
```

If you are creating your own application managed Entity Manager, make sure to close it every time you are done using it. This is required because you are now managing how and when the EntityManager is created and used.

Transaction Management

Transactions are directly related to entities. Managing transactions essentially means managing how the entity lifecycle (create, update, delete) is managed. Another key to understanding transaction management is understanding how Persistence Contexts interact with transactions. It is worth noting that, from an end-user perspective, even though we work with an instance of EntityManager, the only role of the EntityManager is to determine the lifetime of the Persistence Context; it plays no role in dictating how the Persistence Context should behave. To reiterate: a Persistence Context is a managed set of entity instances. Whenever a transaction begins, a Persistence Context instance gets associated with it. And when a transaction ends (commits, for example), the Persistence Context is flushed and disassociated from the transaction.

There are two transaction management types supported in JPA:

- RESOURCE_LOCAL transactions
- JTA (global) transactions

Resource-local transactions refer to the native transactions of the JDBC driver, whereas JTA transactions refer to the transactions of the Java EE server. A resource-local transaction involves a single transactional resource, for example a JDBC connection. Whenever you need two or more resources (for example a JMS connection and a JDBC connection) within a single transaction, you use a JTA transaction. Container managed Entity Managers always use JTA transactions, as the container takes care of transaction lifecycle management and of spanning the transaction across multiple transactional resources.
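The transaction type is declared per persistence unit in META-INF/persistence.xml. A minimal sketch for a Java SE, resource-local setup (the unit name, entity class, and the H2 driver/URL values shown here are illustrative assumptions, not part of the original article):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
    <!-- RESOURCE_LOCAL: the application drives transactions via EntityTransaction -->
    <persistence-unit name="myPersistenceUnit" transaction-type="RESOURCE_LOCAL">
        <!-- entity classes belonging to this unit (example class name) -->
        <class>com.example.Employee</class>
        <properties>
            <!-- standard JPA 2.0 JDBC properties; values are placeholders -->
            <property name="javax.persistence.jdbc.driver" value="org.h2.Driver"/>
            <property name="javax.persistence.jdbc.url" value="jdbc:h2:mem:test"/>
        </properties>
    </persistence-unit>
</persistence>
```

In a container managed setup you would instead declare transaction-type="JTA" and point the unit at a container data source via the jta-data-source element.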
Application managed Entity Managers can use either resource-local or JTA transactions. Normally, in a JTA (global) transaction, a third-party transaction monitor enlists the different transactional resources in a transaction, prepares them for commit, and finally commits the transaction. This process of first preparing the resources (by doing a dry run) and then committing (or rolling back) is called a two-phase commit.

A side note about the XA protocol: in global transactions, a transaction monitor has to constantly talk to different transactional resources. Different transactional resources can speak different languages and thus may not be understandable to the transaction monitor. XA is a protocol specification that provides a common ground for the transaction monitor to interact with different transactional resources. JTA is a global transaction monitor specification that speaks XA and is thus able to manage multiple transactional resources. Java EE compliant servers have a JTA implementation built in; other containers like Spring write their own or use third-party implementations (like Java Open Transaction Manager, JBoss TS, etc.) to support JTA/global transactions.

Persistence Context, Transactions and Entity Managers

A Persistence Context can be associated with one or more transactions, and with multiple Entity Managers. A Persistence Context gets registered with a transaction so that it can be flushed when the transaction commits. When a transaction starts, the Entity Manager looks for an active Persistence Context instance; if one is not available, it creates one and binds it to the transaction. Normally the scope of the persistence context is tightly associated with the transaction: when a transaction ends, the persistence context instance associated with that transaction also ends.
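With an application managed Entity Manager and resource-local transactions, the application itself marks the transaction boundaries through EntityTransaction. A minimal Java SE sketch, assuming the "myPersistenceUnit" unit from the earlier examples and a hypothetical Employee entity (this will only run against a configured JPA provider and persistence.xml, so treat it as an outline rather than a drop-in program):

```java
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.EntityTransaction;
import javax.persistence.Persistence;

public class ResourceLocalExample {

    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPersistenceUnit");
        EntityManager em = emf.createEntityManager();
        EntityTransaction tx = em.getTransaction();
        try {
            tx.begin();                        // the application starts the transaction
            Employee e = new Employee();       // Employee is the assumed entity class
            em.persist(e);                     // e becomes managed in the persistence context
            tx.commit();                       // context is flushed; e is written to the database
        } catch (RuntimeException ex) {
            if (tx.isActive()) {
                tx.rollback();                 // undo the work on failure
            }
            throw ex;
        } finally {
            em.close();                        // application managed EMs must be closed explicitly
            emf.close();
        }
    }
}
```

Note how the begin/commit calls that the container issues for you in the JTA case become explicit here, as does closing the EntityManager.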
But sometimes, mostly in the Java EE world, we require transaction propagation, which is the process of sharing a single persistence context between different Entity Managers within a single transaction. Persistence Contexts can have two scopes:

Transaction Scoped Persistence Context
Extended Scoped Persistence Context

We have already discussed that Entity Managers can be transaction scoped or extended scoped, and the relation is not coincidental. A transaction scoped Entity Manager creates a transaction scoped Persistence Context. An extended scope Entity Manager uses the extended Persistence Context. The lifecycle of the extended Persistence Context is tied to the Stateful Session Bean in the Java EE environment. Let's briefly discuss these Persistence Contexts.

Transaction Scoped Persistence Context

A transaction scoped Persistence Context is created by the Entity Manager only when it is needed: a transaction scoped Entity Manager creates one only when a method on the Entity Manager is called for the first time. Thus the creation of the Persistence Context is lazy. If there already exists a propagated Persistence Context, then the Entity Manager will use that Persistence Context. Understanding Persistence Context propagation is important for identifying and debugging transaction related problems in your code. Let's see an example of how a transaction scoped persistence context is propagated:

public class ItemDAOImpl implements ItemDAO {
    @PersistenceContext(unitName="ItemService")
    EntityManager em;

    LoggingService ls;

    @TransactionAttribute()
    public void createItem(Item item) {
        em.persist(item);
        ls.log(item.getId(), "created item");
    }
    // ... 
}

public class LoggingService implements AuditService {
    @PersistenceContext(unitName="ItemService")
    EntityManager em;

    @TransactionAttribute()
    public void log(int itemId, String action) {
        // verify item id is valid
        if (em.find(Item.class, itemId) == null) {
            throw new IllegalArgumentException("Unknown item id");
        }
        LogRecord lr = new LogRecord(itemId, action);
        em.persist(lr);
    }
}

When the createItem method of ItemDAOImpl is called, the persist method is called on the entity manager instance. Let's assume that this is the first call to the entity manager's methods. The Entity Manager will look for any propagated persistence context with unit name "ItemService". It doesn't find one because this is the first call to the entity manager. Thus it creates a new persistence context instance and attaches it to itself. It then goes on to persist the Item object. After the item object is persisted, we then call the LoggingService to log the information about the item that was just persisted. Note that the LoggingService has its own EntityManager instance and the method log has the annotation @TransactionAttribute (which is not required if, in a Java EE environment, the bean is declared to be an EJB). Since the TransactionAttribute has a default TransactionAttributeType of REQUIRED, the Entity Manager in the LoggingService will look for any Persistence Context that might be available from the previous transaction. It finds the one that was created inside the createItem method of ItemDAOImpl and uses that same one. That is why, even though the actual item is not yet persisted to the database (because the transaction has not yet been committed), the entity manager in LoggingService is able to find it: the Persistence Context has been propagated from ItemDAOImpl to LoggingService. 
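The propagation mechanism described above can be modelled with a few lines of plain Java. This is a toy sketch with invented names (ToyTransaction, ToyEntityManager, a ThreadLocal standing in for the container's transaction registry), not the JPA API: two entity-manager-like objects bound to the same active transaction share one context, so the second can find an entity the first persisted before anything is committed.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of persistence-context propagation. All names are invented
// for this sketch; this is not how a real container is implemented.
class ToyPersistenceContext {
    final Map<Integer, Object> managed = new HashMap<>();
}

class ToyTransaction {
    // The context associated with the active transaction on this thread.
    static final ThreadLocal<ToyPersistenceContext> current = new ThreadLocal<>();

    static void begin() { current.set(new ToyPersistenceContext()); }
    static void commit() { current.remove(); } // flushing is omitted in this sketch
}

class ToyEntityManager {
    // Lazily joins the propagated context, like a transaction-scoped EM.
    private ToyPersistenceContext context() {
        ToyPersistenceContext ctx = ToyTransaction.current.get();
        if (ctx == null) throw new IllegalStateException("no active transaction");
        return ctx;
    }

    void persist(int id, Object entity) { context().managed.put(id, entity); }
    Object find(int id) { return context().managed.get(id); }
}
```

Within one begin()/commit() pair, a "DAO" manager and a "logging" manager both resolve to the same ToyPersistenceContext, which is exactly why the find in the LoggingService example succeeds before the commit.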
Extended Persistence Context

Whereas a Transaction Scoped Persistence Context is created once for every transaction (in the non-propagation case), the Extended Persistence Context is created once and is used by all the transactions within the scope of the class that manages the lifecycle of the Extended Persistence Context. In the case of Java EE, it is the Stateful Session Bean that manages the lifecycle of the extended Persistence Context. The creation of the Extended Persistence Context is EAGER: in the case of Container Managed Transactions it is created as soon as a method on the class is called, and in the case of Application Managed Transactions it is created when userTransaction.begin() is invoked.

Summary

A lot of things have been discussed in this blog post: Entity Managers, Transaction Management, Persistence Contexts, and how all these things interact and work with each other. We discussed the differences between Container Managed and Application Managed Entity Managers, Transaction Scoped and Extended Scoped Persistence Contexts, and transaction propagation. Most of the material for this blog is a result of reading the wonderful book Pro JPA 2. I would recommend reading it if you want more in-depth knowledge of how JPA works.   Reference: JPA 2 | EntityManagers, Transactions and everything around it from our JCG partner Anuj Kumar at the JavaWorld Blog blog. ...

ElasticMQ 0.7.0: long polling, non-blocking implementation using Akka and Spray

ElasticMQ 0.7.0, a message queueing system with an actor-based Scala core and an Amazon SQS-compatible interface, was just released. It is a major rewrite, using Akka actors at the core and Spray for the REST layer. So far only the core and SQS modules have been rewritten; journaling, the SQL backend and replication are yet to be done. The major client-side improvements are:

long polling support, which was added to SQS some time ago
a simpler stand-alone server – just a single jar to download

With long polling, when receiving a message, you can specify an additional MessageWaitTime attribute. If there are no messages in the queue, instead of completing the request with an empty response, ElasticMQ will wait up to MessageWaitTime seconds until messages arrive. This helps to reduce the bandwidth used (no need for very frequent requests), improve overall system performance (messages are received immediately after being sent) and reduce SQS costs. The stand-alone server is now a single jar. To run a local, in-memory SQS implementation (e.g. for testing an application which uses SQS), all you need to do is download the jar file and run:

java -jar elasticmq-server-0.7.0.jar

This will start a server on http://localhost:9324. Of course the interface and port are configurable; see the README for details. As before, you can also run an embedded server from any JVM-based language.

Implementation notes

For the curious, here's a short description of how ElasticMQ is implemented, including the core system, the REST layer, Akka Dataflow usage and the long polling implementation. All the code is available on GitHub. As already mentioned, ElasticMQ is now implemented using Akka and Spray, and doesn't contain any blocking calls. Everything is asynchronous.

Core

The core system is actor-based. There's one main actor (QueueManagerActor), which knows what queues are currently created in the system, and makes it possible to create and delete queues. 
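Before diving further into the internals, the client-visible contract of long polling described above can be summed up in a few lines of plain Java. This is a toy sketch with invented names (ToyQueue), not ElasticMQ code (which is non-blocking): receive() waits up to the given time for a message instead of returning an empty result immediately.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Toy model of long-polling receive semantics; names invented for this
// sketch. A wait time of 0 behaves like a classic "empty response".
class ToyQueue {
    private final BlockingQueue<String> messages = new LinkedBlockingQueue<>();

    void send(String body) { messages.offer(body); }

    // Waits up to waitTimeSeconds for a message; null mirrors an
    // empty ReceiveMessage response after the wait elapses.
    String receive(long waitTimeSeconds) {
        try {
            return messages.poll(waitTimeSeconds, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```

A message sent while a receive() is waiting completes that receive immediately, which is why long polling both cuts request frequency and lowers delivery latency.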
For communication with the actors, the typed ask pattern is used. For example, to look up a queue (a queue is also an actor), a message is defined:

case class LookupQueue(queueName: String) extends Replyable[Option[ActorRef]]

Usage looks like this:

val lookupFuture: Future[Option[ActorRef]] = queueManagerActor ? LookupQueue("q2")

As already mentioned, each queue is an actor, and encapsulates the queue state. We can use simple mutable data structures, without any need for thread synchronisation, as the actor model takes care of that for us. There are a number of messages which can be sent to a queue-actor, e.g.:

case class SendMessage(message: NewMessageData) extends Replyable[MessageData]
case class ReceiveMessages(visibilityTimeout: VisibilityTimeout, count: Int, waitForMessages: Option[Duration]) extends Replyable[List[MessageData]]
case class GetQueueStatistics(deliveryTime: Long) extends Replyable[QueueStatistics]

Rest layer

The SQS query/REST layer is implemented using Spray, a lightweight REST/HTTP toolkit based on Akka. Apart from a non-blocking, actor-based IO implementation, Spray also offers a powerful routing library, spray-routing. It contains a number of built-in directives, for matching on the request method (get/post etc.), extracting query or form parameters, or matching on the request path. But it also lets you define your own directives, using simple directive composition. A typical ElasticMQ route looks like this:

val listQueuesDirective = action("ListQueues") {
  rootPath {
    anyParam("QueueNamePrefix"?) { prefixOption =>
      // logic
    }
  }
}

Where action matches on the action name specified in the "Action" URL or body parameter and accepts/rejects the request, rootPath matches on an empty path, and so on. Spray has a good tutorial, so I encourage you to take a look there if you are interested. How do you use the queue actors from the routes to complete HTTP requests? 
The nice thing about Spray is that all it does is pass a RequestContext instance to your routes, expecting nothing in return. It is up to the route to discard the request completely or complete it with a value. The request may also be completed in another thread – or, for example, when some future is completed. Which is exactly what ElasticMQ does. Here map, flatMap and for-comprehensions (which are a nicer syntax for map/flatMap) are very handy, e.g. (simplified):

// Looking up the queue and deleting it are going to be called in sequence,
// but asynchronously, as ? returns a Future
for {
  queueActor <- queueManagerActor ? LookupQueue(queueName)
  _ <- queueActor ? DeleteMessage(DeliveryReceipt(receipt))
} {
  requestContext.complete(200, "message deleted")
}

Sometimes, when the flow is more complex, ElasticMQ uses Akka Dataflow, which requires the continuations plugin to be enabled. There's also a similar project which uses macros, Scala Async, but it's in early development. Using Akka Dataflow, you can write code which uses Futures as if it was normal sequential code. The CPS plugin will transform it to use callbacks where needed. An example, taken from CreateQueueDirectives:

flow {
  val queueActorOption = (queueManagerActor ? LookupQueue(...)).apply()
  queueActorOption match {
    case None => {
      val createResult = (queueManagerActor ? CreateQueue(newQueueData)).apply()
      createResult match {
        case Left(e) => throw new SQSException("Queue already created: " + e.message)
        case Right(_) => newQueueData
      }
    }
    case Some(queueActor) => {
      (queueActor ? GetQueueData()).apply()
    }
  }
}

The important parts here are the flow block, which delimits the scope of the transformation, and the apply() calls on Futures, which extract the content of the future. This looks like completely normal, sequential code, but when executed it runs asynchronously, starting from the first Future usage.

Long polling

With all of the code being asynchronous and non-blocking, implementing long polling was quite easy. 
Note that when receiving messages from a queue, we get a Future[List[MessageData]]. When this future is completed, the HTTP request is completed with the appropriate response. However, this future may be completed almost immediately (as is normally the case) or after e.g. 10 seconds – no changes in code are needed to support that. So the only thing to do was to delay completing the future until the specified amount of time has passed or new messages have arrived. The implementation is in QueueActorWaitForMessagesOps. When a request to receive messages arrives and there's nothing in the queue, instead of replying immediately (that is, sending an empty list to the sender actor), we store the reference to the original request and the sender actor in a map. Using the Akka scheduler, we also schedule sending back an empty list and removal of the entry after the specified timeout. When new messages arrive, we simply take a waiting request from the map and try to complete it. Again, all synchronisation and concurrency problems are handled by Akka and the actor model.   Reference: ElasticMQ 0.7.0: long polling, non-blocking implementation using Akka and Spray from our JCG partner Adam Warski at the Blog of Adam Warski blog. ...
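The park-the-request-and-schedule-a-timeout idea described above can be sketched in plain Java with CompletableFuture. This is a toy model with invented names (WaitingQueue), not ElasticMQ code: ElasticMQ does this inside an actor, so it needs no locks, whereas here synchronized stands in for the actor's single-threaded mailbox.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Toy sketch of deferred receive completion; all names invented here.
class WaitingQueue {
    private final Deque<String> messages = new ArrayDeque<>();
    private final Deque<CompletableFuture<List<String>>> waiters = new ArrayDeque<>();
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true); // don't keep the JVM alive just for timeouts
            return t;
        });

    synchronized CompletableFuture<List<String>> receive(long waitMillis) {
        CompletableFuture<List<String>> reply = new CompletableFuture<>();
        if (!messages.isEmpty()) {
            reply.complete(List.of(messages.poll())); // message available: reply now
        } else {
            waiters.add(reply);                       // park the request...
            scheduler.schedule(() -> {                // ...and schedule the empty reply
                synchronized (this) { waiters.remove(reply); }
                reply.complete(List.of());            // no-op if already completed
            }, waitMillis, TimeUnit.MILLISECONDS);
        }
        return reply;
    }

    synchronized void send(String body) {
        CompletableFuture<List<String>> waiter = waiters.poll();
        if (waiter != null) {
            waiter.complete(List.of(body));           // wake a parked receive
        } else {
            messages.add(body);
        }
    }
}
```

Because CompletableFuture.complete is a no-op once the future is done, the timeout firing after a message has already been delivered is harmless, mirroring the "try to complete it" wording above.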

Getting started with PhoneGap in Eclipse for Android

Android development with PhoneGap can be done on Windows, OS X, or Linux.

Step 1: Setting up Android Tools

ADT Bundle – just a single step to set up the Android development environment.

Step 2: Downloading and installing PhoneGap

Visit the PhoneGap download page and click the orange Download link to begin the download process. Extract the archive to your local file system for use later. You are now ready to create your first PhoneGap project for Android within Eclipse.

Step 3: Creating the project in Eclipse

Follow these steps to create a new Android project in Eclipse:

Choose New > Android Project.
On the Application Info screen, type a package name for your main Android application. This should be a namespace that logically represents your package structure; for example, com.yourcompany.yourproject.
Create the new project in the workspace and click Next.
Configure the launch icon and background.
Create the Activity.

Configure the project to use PhoneGap

At this point, Eclipse has created an empty Android project. However, it has not yet been configured to use PhoneGap. You'll do that next.

Create an assets/www directory and a libs directory inside of the new Android project. All of the HTML and JavaScript for your PhoneGap application interface will reside within the assets/www folder.
To copy the required files for PhoneGap into the project, first locate the directory where you downloaded PhoneGap, and navigate to the lib/android subdirectory.
Copy cordova-2.7.0.js to the assets/www directory within your Android project.
Copy cordova-2.7.0.jar to the libs directory within your Android project.
Copy the xml directory into the res directory within your Android project.

Next, create a file named index.html in the assets/www folder. This file will be used as the main entry point for your PhoneGap application's interface. 
In index.html, add the following HTML code to act as a starting point for your user interface development:

<!DOCTYPE HTML>
<html>
<head>
<title>PhoneGap</title>
<script type="text/javascript" charset="utf-8" src="cordova-2.7.0.js"></script>
</head>
<body>
<h1>Hello PhoneGap</h1>
</body>
</html>

You will need to add the cordova-2.7.0.jar library to the build path for the Android project. Right-click cordova-2.7.0.jar and select Build Path > Add To Build Path.

Update the Activity class

Now you are ready to update the Android project to start using PhoneGap. Open your main application Activity file. It will be located under the src folder in the project package that you specified earlier in this process. For my project, which I named HelloPhoneGap, the main Android Activity file is named MainActivity.java, and is located in the package com.maanavan.hellophonegap, which I specified in the New Android Project dialog box.

In the main Activity class, add an import statement for org.apache.cordova.DroidGap:

import org.apache.cordova.DroidGap;

Change the base class from Activity to DroidGap; this is in the class definition following the word extends:

public class MainActivity extends DroidGap

Replace the call to setContentView() with a reference to load the PhoneGap interface from the local assets/www/index.html file, which you created earlier:

super.loadUrl(Config.getStartUrl());

Note: In PhoneGap projects, you can reference files located in the assets directory with a URL reference file:///android_asset, followed by the path name to the file. The file:///android_asset URI maps to the assets directory.

Configure the project metadata

You have now configured the files within your Android project to use PhoneGap. The last step is to configure the project metadata to enable PhoneGap to run. Begin by opening the AndroidManifest.xml file in your project root. 
Use the Eclipse text editor by right-clicking the AndroidManifest.xml file and selecting Open With > Text Editor. In AndroidManifest.xml, add the following supports-screens XML node as a child of the root manifest node:

<supports-screens
    android:largeScreens="true"
    android:normalScreens="true"
    android:smallScreens="true"
    android:resizeable="true"
    android:anyDensity="true" />

The supports-screens XML node identifies the screen sizes that are supported by your application. You can change screen and form factor support by altering the contents of this entry. To read more about <supports-screens>, visit the Android developer topic on the supports-screens element. Next, you need to configure permissions for the PhoneGap application. Copy the following <uses-permission> XML nodes and paste them as children of the root <manifest> node in the AndroidManifest.xml file:

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.VIBRATE" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECEIVE_SMS" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.RECORD_VIDEO" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.READ_CONTACTS" />
<uses-permission android:name="android.permission.WRITE_CONTACTS" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.GET_ACCOUNTS" />
<uses-permission android:name="android.permission.BROADCAST_STICKY" />

The 
<uses-permission> XML values identify the features that you want to be enabled for your application. The lines above enable all permissions required for all features of PhoneGap to function. After you have built your application, you may want to remove any permissions that you are not actually using; this will remove security warnings during application installation. To read more about Android permissions and the <uses-permission> element, visit the Android developer topic on the uses-permission element. After you have configured application permissions, you need to modify the existing <activity> node. Locate the <activity> node, which is a child of the <application> XML node, and add the following attribute to it:

android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale"

The complete AndroidManifest.xml:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.maanavan.hellophonegap"
    android:versionCode="1"
    android:versionName="1.0" >

    <supports-screens
        android:largeScreens="true"
        android:normalScreens="true"
        android:smallScreens="true"
        android:xlargeScreens="true"
        android:resizeable="true"
        android:anyDensity="true" />

    <uses-sdk android:minSdkVersion="8" android:targetSdkVersion="17" />

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.VIBRATE" />
    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.RECEIVE_SMS" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.RECORD_VIDEO" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
    <uses-permission 
android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.WRITE_CONTACTS" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.GET_ACCOUNTS" />
    <uses-permission android:name="android.permission.BROADCAST_STICKY" />

    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name="com.maanavan.hellophonegap.MainActivity"
            android:label="@string/app_name"
            android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>

At this point, your project is configured to run as a PhoneGap project for Android. If you run into any issues, verify your configuration against the example provided at the PhoneGap getting started site for Android.

Running the application

To launch your PhoneGap application in the Android emulator, right-click the project root, and select Run As > Android Application. If you don't have any Android virtual devices set up, you will be prompted to configure one (to learn more about configuring Android virtual devices, see the Android developer documentation). Eclipse will automatically start an Android emulator instance (if one is not already running), deploy your application to the emulator, and launch the application. Reference: Getting started with PhoneGap in Eclipse for Android from our JCG partner Sathish Kumar at the Maanavan blog....

Creating Internal DSLs in Java, Java 8- Adopting Martin Fowler’s approach

Currently I am reading this wonderful book on DSLs: Domain Specific Languages by Martin Fowler. The buzz around DSLs, around the languages which support creating DSLs with ease, and the use of DSLs made me curious to know and learn about this concept. And the experience with the book so far has been impressive. The definition of DSL as stated by Martin Fowler in his book: Domain-specific language (noun): a computer programming language of limited expressiveness focused on a particular domain.   DSLs are nothing new; they have been around for quite a long time. People used XML as a form of DSL. Using XML as a DSL is easy because we have XSD for validation of the DSL, we have parsers for parsing the DSL, and we have XSLT for transforming the DSL into other languages. And most languages provide very good support for parsing XML and populating their domain model objects. The emergence of languages like Ruby, Groovy and others has increased the adoption of DSLs. For example Rails, a web framework written in Ruby, uses DSLs extensively. In his book Martin Fowler classifies DSLs as Internal, External and Language Workbenches. As I read through the Internal DSL concepts I played around a bit with my own simple DSL using Java as the host language. Internal DSLs reside in the host language and are bound by the syntactic capabilities of the host language. Using Java as the host language didn't give me really clear DSLs, but I made an effort to get them closer to a form where I could comprehend the DSL comfortably. I was trying to create a DSL for creating a graph. As far as I am aware, the different ways to input and represent a graph are the Adjacency List and the Adjacency Matrix. I have always found these difficult to use, especially in languages like Java which don't have matrices as first class citizens. And here I am trying to create an Inner DSL for populating a Graph in Java. 
In his book, Martin Fowler stresses the need to keep the Semantic Model separate from the DSL and to introduce an intermediate Expression Builder which populates the Semantic Model from the DSL. By maintaining this separation I was able to achieve 3 different forms of DSL by writing different DSL syntaxes and expression builders, all the while using the same semantic model.

Understanding the Semantic Model

The Semantic Model in this case is the Graph class, which contains a list of Edge instances, each Edge containing a from Vertex, a to Vertex and a weight. Let's look at the code for the same:

import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class Graph {

    private List<Edge> edges;
    private Set<Vertex> vertices;

    public Graph() {
        edges = new ArrayList<>();
        vertices = new TreeSet<>();
    }

    public void addEdge(Edge edge) {
        getEdges().add(edge);
    }

    public void addVertice(Vertex v) {
        getVertices().add(v);
    }

    public List<Edge> getEdges() {
        return edges;
    }

    public Set<Vertex> getVertices() {
        return vertices;
    }

    public static void printGraph(Graph g) {
        System.out.println("Vertices...");
        for (Vertex v : g.getVertices()) {
            System.out.print(v.getLabel() + " ");
        }
        System.out.println("");
        System.out.println("Edges...");
        for (Edge e : g.getEdges()) {
            System.out.println(e);
        }
    }
}

public class Edge {
    private Vertex fromVertex;
    private Vertex toVertex;
    private Double weight;

    public Edge() {
    }

    public Edge(Vertex fromVertex, Vertex toVertex, Double weight) {
        this.fromVertex = fromVertex;
        this.toVertex = toVertex;
        this.weight = weight;
    }

    @Override
    public String toString() {
        return fromVertex.getLabel() + " to " + toVertex.getLabel() + " with weight " + getWeight();
    }

    public Vertex getFromVertex() {
        return fromVertex;
    }

    public void setFromVertex(Vertex fromVertex) {
        this.fromVertex = fromVertex;
    }

    public Vertex getToVertex() {
        return toVertex;
    }

    public void setToVertex(Vertex toVertex) {
        this.toVertex = toVertex;
    }

    public Double getWeight() {
        return weight;
    }

    public 
void setWeight(Double weight) {
        this.weight = weight;
    }
}

public class Vertex implements Comparable<Vertex> {
    private String label;

    public Vertex(String label) {
        this.label = label.toUpperCase();
    }

    @Override
    public int compareTo(Vertex o) {
        return (this.getLabel().compareTo(o.getLabel()));
    }

    public String getLabel() {
        return label;
    }

    public void setLabel(String label) {
        this.label = label;
    }
}

Now that we have the Semantic Model in place, let's build the DSLs. You should notice that I am not going to change my Semantic Model. It's not a hard and fast rule that the semantic model shouldn't change; instead, the semantic model can evolve by adding new APIs for fetching or modifying the data. But binding the Semantic Model tightly to the DSL would not be a good approach. Keeping them separate helps in testing the Semantic Model and the DSL independently. The different approaches for creating Internal DSLs stated by Martin Fowler are:

Method Chaining
Functional Sequence
Nested Functions
Lambda Expressions/Closures

I have illustrated three of these in this post, all except Functional Sequence on its own; I have, however, used the Functional Sequence approach together with Closures/Lambda Expressions.

Inner DSL by Method Chaining

I am envisaging my DSL to be something like:

Graph()
  .edge()
    .from("a")
    .to("b")
    .weight(12.3)
  .edge()
    .from("b")
    .to("c")
    .weight(10.5)

To enable the creation of such a DSL we have to write an expression builder which populates the semantic model and provides a fluent interface enabling creation of the DSL. I have created 2 expression builders: one to build the complete Graph and the other to build individual edges. All the while the Graph/Edge is being built, these expression builders hold the intermediate Graph/Edge objects. The above syntax can be achieved by creating static methods in these expression builders and then using static imports to use them in the DSL. 
The Graph() method starts populating the Graph model, while edge() and the series of methods after it, namely from(), to() and weight(), populate the Edge model. The edge() method also populates the Graph model. Let's look at the GraphBuilder, which is the expression builder for populating the Graph model:

public class GraphBuilder {

    private Graph graph;

    public GraphBuilder() {
        graph = new Graph();
    }

    // Start the Graph DSL with this method.
    public static GraphBuilder Graph() {
        return new GraphBuilder();
    }

    // Start the edge building with this method.
    public EdgeBuilder edge() {
        EdgeBuilder builder = new EdgeBuilder(this);
        getGraph().addEdge(builder.edge);
        return builder;
    }

    public Graph getGraph() {
        return graph;
    }

    public void printGraph() {
        Graph.printGraph(graph);
    }
}

And the EdgeBuilder, which is the expression builder for populating the Edge model:

public class EdgeBuilder {

    Edge edge;

    // Keep a back reference to the GraphBuilder.
    GraphBuilder gBuilder;

    public EdgeBuilder(GraphBuilder gBuilder) {
        this.gBuilder = gBuilder;
        edge = new Edge();
    }

    public EdgeBuilder from(String lbl) {
        Vertex v = new Vertex(lbl);
        edge.setFromVertex(v);
        gBuilder.getGraph().addVertice(v);
        return this;
    }

    public EdgeBuilder to(String lbl) {
        Vertex v = new Vertex(lbl);
        edge.setToVertex(v);
        gBuilder.getGraph().addVertice(v);
        return this;
    }

    public GraphBuilder weight(Double d) {
        edge.setWeight(d);
        return gBuilder;
    }
}

Let's try out the DSL:

public class GraphDslSample {

    public static void main(String[] args) {
        Graph()
            .edge().from("a").to("b").weight(40.0)
            .edge().from("b").to("c").weight(20.0)
            .edge().from("d").to("e").weight(50.5)
            .printGraph();

        Graph()
            .edge().from("w").to("y").weight(23.0)
            .edge().from("d").to("e").weight(34.5)
            .edge().from("e").to("y").weight(50.5)
            .printGraph();
    }
}

And the output would be:

Vertices...
A B C D E
Edges...
A to B with weight 40.0
B to C with weight 20.0
D to E with weight 50.5
Vertices...
D E W Y
Edges... 
W to Y with weight 23.0
D to E with weight 34.5
E to Y with weight 50.5

Don't you find this approach easier to read and understand than the Adjacency List/Adjacency Matrix approach? This Method Chaining is similar to the Train Wreck pattern which I had written about some time back.

Inner DSL by Nested Functions

In the Nested Functions approach the style of the DSL is different. In this approach I nest functions within functions to populate my semantic model. Something like:

Graph(
  edge(from("a"), to("b"), weight(12.3)),
  edge(from("b"), to("c"), weight(10.5))
);

The advantage of this approach is that it is naturally hierarchical, unlike method chaining where I had to format the code in a particular way. And this approach doesn't maintain any intermediate state within the expression builders, i.e. the expression builders don't hold the Graph and Edge objects while the DSL is being parsed/executed. The semantic model remains the same as discussed above. Let's look at the expression builders for this DSL:

// Populates the Graph model.
public class NestedGraphBuilder {

    public static Graph Graph(Edge... edges) {
        Graph g = new Graph();
        for (Edge e : edges) {
            g.addEdge(e);
            g.addVertice(e.getFromVertex());
            g.addVertice(e.getToVertex());
        }
        return g;
    }
}

// Populates the Edge model.
public class NestedEdgeBuilder {

    public static Edge edge(Vertex from, Vertex to, Double weight) {
        return new Edge(from, to, weight);
    }

    public static Double weight(Double value) {
        return value;
    }
}

// Populates the Vertex model.
public class NestedVertexBuilder {

    public static Vertex from(String lbl) {
        return new Vertex(lbl);
    }

    public static Vertex to(String lbl) {
        return new Vertex(lbl);
    }
}

As you may have observed, all the methods in the expression builders defined above are static. We use static imports in our code to create the DSL we set out to build. Note: I have used different packages for the expression builders, the semantic model and the DSL, so please update the imports according to the package names you have used. 
// Update this according to the package names of your builders
import static nestedfunction.NestedEdgeBuilder.*;
import static nestedfunction.NestedGraphBuilder.*;
import static nestedfunction.NestedVertexBuilder.*;

/**
 * @author msanaull
 */
public class NestedGraphDsl {

    public static void main(String[] args) {
        Graph.printGraph(
            Graph(
                edge(from("a"), to("b"), weight(23.4)),
                edge(from("b"), to("c"), weight(56.7)),
                edge(from("d"), to("e"), weight(10.4)),
                edge(from("e"), to("a"), weight(45.9))
            )
        );
    }
}

And the output for this would be:

Vertices...
A B C D E
Edges...
A to B with weight 23.4
B to C with weight 56.7
D to E with weight 10.4
E to A with weight 45.9

Now comes the interesting part: how can we leverage the upcoming lambda expression support in our DSL?

Inner DSL using Lambda Expressions

If you are wondering what lambda expressions are doing in Java, then please spend some time reading up on them before proceeding further. In this example as well we will stick with the same semantic model described above. This DSL leverages Functional Sequence along with the lambda expression support. Let's see what we want our final DSL to look like:

Graph(g -> {
    g.edge(e -> {
        e.from("a");
        e.to("b");
        e.weight(12.3);
    });

    g.edge(e -> {
        e.from("b");
        e.to("c");
        e.weight(10.5);
    });
})

Yes, I know the above DSL is overloaded with punctuation, but we have to live with it; if you don't like it, then maybe pick a different language. In this approach our expression builders accept a lambda expression/closure/block and then populate the semantic model by executing it. The expression builders in this implementation maintain the intermediate state of the Graph and Edge objects, in the same way as in the Method Chaining implementation. Let's look at our expression builders:

// Populates the Graph model. 
public class GraphBuilder {Graph g; public GraphBuilder() { g = new Graph(); }public static Graph Graph(Consumer<GraphBuilder> gConsumer){ GraphBuilder gBuilder = new GraphBuilder(); gConsumer.accept(gBuilder); return gBuilder.g; }public void edge(Consumer<EdgeBuilder> eConsumer){ EdgeBuilder eBuilder = new EdgeBuilder(); eConsumer.accept(eBuilder); Edge e = eBuilder.edge(); g.addEdge(e); g.addVertice(e.getFromVertex()); g.addVertice(e.getToVertex()); } } //Populates the Edge model. public class EdgeBuilder { private Edge e; public EdgeBuilder() { e = new Edge(); }public Edge edge(){ return e; }public void from(String lbl){ e.setFromVertex(new Vertex(lbl)); } public void to(String lbl){ e.setToVertex(new Vertex(lbl)); } public void weight(Double w){ e.setWeight(w); }} In the GraphBuilder you see two highlighted lines of code. These make use of a functional interface, Consumer, to be introduced in Java 8. Now let's make use of the above expression builders to create our DSL: //Update the package names with the ones you have given import graph.Graph; import static builder.GraphBuilder.*;public class LambdaDslDemo { public static void main(String[] args) { Graph g1 = Graph( g -> { g.edge( e -> { e.from("a");"b"); e.weight(12.4); });g.edge( e -> { e.from("c");"d"); e.weight(13.4); }); });Graph.printGraph(g1); } } And the output is: Vertices... A B C D Edges... A to B with weight 12.4 C to D with weight 13.4 With this I end this code-heavy post. Let me know if you want me to split this into 3 posts, one for each DSL implementation. I kept it in one place so that it would help us in comparing the 3 different approaches. To summarise: In this post I talked about DSLs and Inner DSLs, as described in the book Domain Specific Languages by Martin Fowler.
Provided an implementation for each of the three approaches to implementing Inner DSLs: Method Chaining, Nested Functions, and Lambda expressions with Function Sequence.   Reference: Creating Internal DSLs in Java, Java 8- Adopting Martin Fowler’s approach from our JCG partner Mohamed Sanaulla at the Experiences Unlimited blog. ...

Invoking Async method call using Future object in Spring

The next example will demonstrate an async method call inside the Spring container. Why do we need async method calls? In some cases we don't really know if a reply is expected or when a result is supposed to be delivered back. Traditional way The traditional way of handling async calls in the Java EE world is using a Queue/Topic. We could do the same in Spring, but if you need a simple async invocation, you can do it easily by following the next steps: 1. Declare an Asynchronous Gateway:     <bean id="executionLogicImpl" class="com.test.components.execution_gateway.ExecutionLogicImpl" abstract="false" lazy-init="default" autowire="default"> </bean> 2. Declare an interface method with return type Future (Java 5+). More information on the Future object: public interface ExecutionLogic {public Future<String> doSomeExecutionLogic(String message);} * As soon as the GatewayProxyFactoryBean notices a Future return type, it will switch the method into async mode using an AsyncTaskExecutor. 3. We will create a job channel which will collect all requests and send them asynchronously to another class (ExecutionLogicImpl) in order to process them (some random business logic): <int:channel id="job1Channel" /><int:service-activator input-channel="job1Channel" ref="executionLogicImpl" method="doSomeExecutionLogic" /> The class ExecutionLogicImpl: public class ExecutionLogicImpl { public String doSomeExecutionLogic(String msg) { try { System.out.println("doing long work on message="+msg); Thread.sleep(8000);} catch (InterruptedException e) { e.printStackTrace(); } return msg + "_completed";}} Test class: import com.test.components.execution_gateway.ExecutionLogic;public class TestExecution {...
ExecutionLogic executionLogic; public String sendMsgToExecutionQueue(String msg) { Future<String> processedMessage = executionLogic.doSomeExecutionLogic(msg);String finalResult = ""; try { finalResult = " " + processedMessage.get(TIMEOUT, TimeUnit.SECONDS); return "1 final result: " + finalResult; } catch (ExecutionException e) { return "1 final result: " + e + finalResult; } catch (TimeoutException tex) { return "1 final result: " + tex + finalResult; } catch (Exception ex) { return "1 final result: " + ex + finalResult; }} ... } * You can enable a timeout using the Future object for cases where a response is never returned. So what's happening here? We send input to be executed asynchronously. The sender waits for the response (asynchronously); as soon as the request finishes its processing, a result is sent back to the sender.   Reference: Invoking Async method call using Future object in Spring from our JCG partner Idan Fridman at the blog. ...
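Outside of Spring, the same calling pattern (submit work, keep a Future, block with a timeout) can be sketched with a plain ExecutorService. This is a minimal illustration of the Future/timeout mechanics, not the Spring Integration wiring above; the sleep duration and message format are my own:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureDemo {

    // Submit the "long" work and wait for it with a timeout, mirroring
    // processedMessage.get(TIMEOUT, TimeUnit.SECONDS) in the article.
    static String callAsync(String msg) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<String> processed = executor.submit(() -> {
                Thread.sleep(100); // simulate the long-running business logic
                return msg + "_completed";
            });
            return processed.get(2, TimeUnit.SECONDS); // block for at most 2 seconds
        } catch (TimeoutException tex) {
            return "timed out";
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("final result: " + callAsync("msg"));
    }
}
```

The caller gets the Future back immediately and only blocks when it asks for the result, which is exactly what the gateway-based version buys you inside the Spring container.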

Design Patterns: Prototype

One of the creational design patterns is the Prototype design pattern. Although the Prototype is a creational pattern, it is distinguished from the other patterns by its concept: the Prototype, in some sense, creates itself. I'm going to explain this below. All the magic of the Prototype pattern is based on the clone() method of the Java Object class. So let's consider a usage example, and then I will try to figure out which pros and cons this pattern has.    The class diagram above shows us the basic idea of the pattern. An abstract class or an interface can play the role of a prototype. Notice that the prototype has to extend the Cloneable interface. That's because concrete implementations of the prototype will invoke the clone() method. The particular class which implements the interface (or extends the abstract class) has to contain a method which returns a copy of itself with the help of the clone operation. In my example I declared the Unicellular interface as the prototype and the Amoeba class as its realisation: public interface Unicellular extends Cloneable {public Unicellular reproduce();} public class Amoeba implements Unicellular {public Unicellular reproduce() { Unicellular amoeba = null; try { amoeba = (Unicellular) super.clone(); } catch (CloneNotSupportedException e) { e.printStackTrace(); } return amoeba; }public String toString() { return "Bla bla bla it's a new amoeba..."; }} Demonstration: ... public static void main(String[] args) {Unicellular amoeba = new Amoeba();List< Unicellular > amoebaList = new ArrayList< Unicellular >();amoebaList.add(amoeba.reproduce()); amoebaList.add(amoeba.reproduce()); amoebaList.add(amoeba.reproduce());for (Unicellular a : amoebaList) System.out.println(a);} ... The result: Bla bla bla it's a new amoeba… Bla bla bla it's a new amoeba… Bla bla bla it's a new amoeba… What about the pros and cons?
Actually, I don't know what to say here, because I have never encountered a situation where the Prototype pattern would be applied appropriately. Maybe it is useful in some cases when you don't want to call a constructor explicitly, or when a system shouldn't depend on the way its objects are created.   Reference: Design Patterns: Prototype from our JCG partner Alexey Zvolinskiy at the Fruzenshtein’s notes blog. ...
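One commonly cited case where the pattern does pay off is an object that is expensive to construct but cheap to copy. A hypothetical sketch (the class and field names are my own, not from the article):

```java
// Hypothetical example: cloning an object that is expensive to construct.
public class HeavyConfig implements Cloneable {
    private String data;

    public HeavyConfig() {
        // Imagine an expensive step here (parsing files, network calls, ...).
        this.data = "parsed-defaults";
    }

    public String getData() { return data; }
    public void setData(String data) { = data; }

    @Override
    public HeavyConfig clone() {
        try {
            // A shallow copy is fine here: the only field is an immutable String.
            return (HeavyConfig) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }

    public static void main(String[] args) {
        HeavyConfig prototype = new HeavyConfig(); // pay the construction cost once
        HeavyConfig copy = prototype.clone();      // cheap copies afterwards
        copy.setData("customized");
        System.out.println(prototype.getData() + " / " + copy.getData());
    }
}
```

Note that clone() performs a shallow copy by default; if the prototype held mutable fields, the clone() override would also need to copy those to keep instances independent.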

Software Quality via Unit Testing

The following post is based on a talk I gave at Desert Code Camp 2013. See also the associated slide deck. Software quality is critical to consistently and continually delivering new features to our users. This article covers the importance of software quality and how to deliver it via unit testing, Test Driven Development and clean code in general. Introduction Unit testing has raised the quality of my code more than any other technique, approach or tool I have come across in the last 15 years. It helps you to write cleaner code, more quickly and with fewer bugs. It also just feels good to write unit tests. That green bar in JUnit (or whatever testing tool you are using) gives you a warm fuzzy feeling. So, the bulk of this talk/article is about unit testing, and its smarter cousin, TDD. There are, however, many other steps you can take, outside of unit testing, to improve the quality of code; simple things such as: good variable names; short, cohesive methods that are easy to understand at a glance; avoiding code smells such as long nested if/else-if blocks. So, the final section of my talk will be on what some people call 'Clean Code'. But we'll start by talking about why all this is important. After all, design, clean code and unit tests are merely a means to an end. And that end is: delivering value to users. And we should never forget that fact! It is very important to be able to explain to project stakeholders, or indeed other developers, what the motivations and advantages of unit testing and clean code are, and how they ultimately result in value to users. And so, that is the basis for the introductory section called 'The value of software design'. 1. The value of software design 2. Automated testing 3. Clean code 1. The value of software design This section is largely based on a talk (the key part starts around 45:00; see also the paper) I was fortunate enough to attend by a guy called Martin Fowler, the 'Chief Scientist' at a company called ThoughtWorks.
In the talk, Fowler did a great job of answering a question I had been thinking about a lot: why should we care about 'good' design in software? People may put forward questions and statements such as: We need less focus on quality so we can add more features; Do we really need unit tests?; Refactoring doesn't change what the code does, so why bother? And it can be difficult to answer those questions, particularly when you are under pressure to deliver, and quickly. One approach is to take the high moral ground. For example, there are people who adopt the attitude that: Bad software design is a sin; If you are not writing unit tests, with 100% unit test code coverage, you are a BAD developer; Poorly named variables or methods will be branded on your flesh as you burn in the fiery pits of hell for your sins. Basically, treat the issue as a moral one: you are a bad person (or at least a bad developer) if you are designing poor quality software. However, most of us work for a living, and if we are going to take the time and effort to produce quality software, we need to have an economic reason for doing it; otherwise, why bother? If the software we are writing is not at least providing significant benefit to our users and stakeholders, we probably shouldn't be doing it. Remember our goal: deliver value to users. What does Quality even mean? But before we get into what the economic reasons for good software design are, let's talk about what quality even means when it comes to software. Well, it could mean a few different things, including: a quality GUI (easy to use, looks good, intuitive); few defects (bug free, no unintelligible error messages); good modular design. However, only the top two are actually apparent to users/customers/business sponsors. And yet this talk/article focuses almost exclusively on the last one, the one that users have no concept of! And it's not just users; does your manager stay awake at night worrying about the quality of the code that you're producing?
Probably not. I bet your manager's manager definitely doesn't! And your CEO probably doesn't even know what code is. So if management and users don't care about quality code, why should we, as developers, care? Fowler's Design Stamina Hypothesis Well, Martin Fowler did a good job of describing why, using what he calls the Design Stamina Hypothesis. The y axis represents how much new functionality you are adding to the app. The x axis represents time. The orange line represents a hypothetical scenario where you create an app with no design (and design definitely includes unit tests). The blue line represents a scenario where you create an app with good design. Without design Fowler's Design Stamina Hypothesis basically says that if you write code without a good design, you will be able to deliver code very quickly to start with, but your progress will become slower and slower over time, and it becomes more and more difficult to add new features as you become bogged down in: spaghetti code; fixing bugs introduced when you inadvertently broke a piece of code you couldn't understand; spending hours trying to understand code before actually being able to change it (and still having little confidence that you're not messing it up). In the worst case scenario (shown above by the orange line tapering off), it will become so slow to make changes that you will likely start to consider a complete rewrite of the application. Because rewriting the entire thing, with the months/years of effort and blood/sweat/tears that that will take, is actually more attractive than dealing with the mess you have created. So, how do we avoid that worst case scenario, and what economic benefits can we reap? With good design Well, the second part of Fowler's Design Stamina Hypothesis is how cumulative functionality is affected by good design.
Designing, writing tests and using TDD may take a little longer in the short term, but the benefit is that in the medium to longer term (the point on the graph at which the lines cross), it actually makes you much faster. Adding new features takes about as long as you'd expect it to. Even junior developers, or new team members, can add new features in a reasonable amount of time. And in many cases that point is after days or weeks rather than months or years. Design Stamina Hypothesis summary In agile software development, the term often used to describe the amount of new functionality added over a period of time is velocity. Fowler's notion of good design increasing velocity is just a hypothesis because it can't be (easily) proved, but it intuitively makes sense to most people involved in producing software. Design Stamina Hypothesis: design is what gives us the stamina to be able to continually add new features to an application, today, tomorrow and for months and years to come. Technical debt Basically, what Fowler is talking about here is the concept of technical debt. Technical debt is a metaphor referring to the eventual consequences of poor design in a codebase. The debt can be thought of as extra work that needs to be done before, or in addition to, the real work that you need to do. For example, having to improve a design before you can actually add the new feature users have requested. Under the technical debt metaphor, that extra work can be thought of as interest payments. Interest payments can come in the form of: bugs; just understanding what the heck the current code does; refactoring; completing unfinished work. How to deal with technical debt When you encounter technical debt in a project, you basically have 2 options open to you: pay down or accept. Paying down the debt involves spending extra time to clean, refactor and improve the design. The benefit is that it will ultimately speed you up, since you'll be able to add new features faster.
The downside is that it will inevitably slow you down now. Accepting the debt means doing the minimum required to add/change features and moving on. The interest you will pay going forward is the additional cost you incur above and beyond adding new features; everything is slowed down and complicated by the extra complexity. In addition, it is also much more difficult for a new dev on the team to pick up. And last, but by no means least, developer morale suffers! No developer enjoys working in an unmaintainable mess of code; and developer turnover is a very real cost. So, when we come across code in our projects that is poorly designed, should we take action? Refactor, add tests, tidy up? For a long time, I thought the answer to that question was simply Yes. Always. However, Fowler makes an excellent point that it is not always economically sensible to do so. If it ain't broken, don't fix it Even if a module is a bunch of crap, badly written, with no tests and poor variable names etc., if it (surprisingly) doesn't have any bugs, it does what it is supposed to, AND you never need to change it, then why worry about it? In technical debt terms, it is not exacting very many interest payments. Don't build bad on top of bad On the other hand, if that badly written code needs to be updated with new functionality, or if you find yourself 'in it' all the time (even just to understand it), then it becomes important to pay down the technical debt and to keep the code clean and easy to maintain and enhance. Summary of the value of software design Good design, tests and good coding practices etc. are only a means to an end, and that end is delivering value to users. However, they are very useful in meeting that end.
They give us the stamina to continually and consistently deliver functionality faster, with fewer bugs, to our users, and so have very real economic benefits. And with that, let's look at what it means to actually use good design techniques via the use of automated tests for software… 2. Automated testing Unit testing A unit test is a piece of code that executes a specific piece of functionality (a 'unit') in the code, and: confirms the behavior or result is as expected; determines if the code is 'fit for use'. Unit testing example It is easiest to explain via an example. This example involves testing a factorial routine. The factorial of a non-negative integer n, denoted by n!, is the product of all positive integers less than or equal to n. For example, the factorial of 3 is equal to 6: 3! = 3 x 2 x 1 = 6. Our implementation of this is as follows: public class Math { public int factorial(int n) {if (n == 1) return 1;return n * factorial(n-1);}} And being good developers, we add a test to make sure the code does what we expect: public class MathTest {@Testpublic void factorial_positive_integer() {Math math = new Math();int result = math.factorial(3);assertThat(result).isEqualTo(6);}} And if we run the test, we will see it passes. So our code must be correct? Well, one good thing about tests is that they make you start to think about edge cases. An obvious one here is zero. In mathematics, the factorial of zero is 1 (0! = 1), so we add a test for that: public class MathTest {…@Testpublic void factorial_zero() {Math math = new Math();int result = math.factorial(0);assertThat(result).isEqualTo(1);}} And when we run this test… we see that it fails. Specifically, it will result in some kind of stack overflow. We have found a bug! The issue is our exit condition, the first line in our algorithm: if (n == 1) return 1; This needs to be updated to check for zero: if (n == 0) return 1; With our algorithm updated, we re-run our tests and all pass.
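For completeness, here is the corrected routine in runnable form, with plain main-method checks standing in for the JUnit tests above:

```java
public class MathDemo {
    // Corrected exit condition: checking n == 0 handles 0! = 1 and still
    // terminates the recursion for all positive inputs.
    static int factorial(int n) {
        if (n == 0) return 1;
        return n * factorial(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(factorial(3)); // prints 6
        System.out.println(factorial(0)); // prints 1
    }
}
```

Note that negative input is still unguarded (it would recurse until the stack overflows), which is exactly the next edge case a test-first mindset would force us to decide on.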
Order is restored to the universe! What unit tests provide Although our previous example demonstrated unit tests finding a bug, finding bugs isn't the unit test's primary benefit. Instead, unit tests: drive design; act as safety buffers by finding regression bugs; provide documentation. Drive design TDD can help drive design and tease the requirements out. The tests effectively act as the first user of the code, making you think about: what this code should do; border conditions (0, null, -ve, too big). They can also push you towards good design, such as: Short methods: it is difficult to unit test a method that is 100 lines long, so unit testing forces you to write modular code (low coupling, high cohesion). Test names can highlight violations of SRP: if you start writing a test name like addTwoNumbers_sets_customerID_correctly, you are probably doing something very wrong. Dependency Injection. Basically, writing a class is different from using a class, and you need to be aware of that as you write code. Act as safety buffers by finding regression bugs Have you ever been in a bowling alley and seen those buffers or bumpers they put down the side of each lane for beginners or kids, to stop the ball running out? Well, unit tests are kind of like that. They act as a safety net by allowing code to be refactored without fear of breaking existing functionality. Having high test coverage of your code allows you to continue developing features without having to perform lots of manual tests. When a change introduces a fault, it can be quickly identified and fixed. Regression testing (checking existing functionality to ensure it hasn't been broken by later modifications to the code) is one of the biggest benefits of unit testing, especially when you're working on a large project where developers don't know the ins and outs of every piece of code and hence are likely to introduce bugs by incorrectly working with code written by other developers.
Unit tests are run frequently as the code base is developed, either as the code is changed or via an automated process with the build. If any of the unit tests fail, it is considered to be a bug, either in the changed code or in the tests themselves. Documentation Another benefit of unit testing is that it provides a form of living documentation about how the code operates. Unit test cases embody characteristics that are critical to the success of the unit. The test method names provide a succinct description of what a class does. Unit testing limitations Unit testing of course has its limitations: Cannot prove the absence of bugs While unit tests can prove the presence of bugs, they can never prove their absence (they can prove the absence of specific bugs, yes, but not all bugs). For example, unit tests test what you tell them to. If you don't think of an edge case, you probably aren't going to write either a test or the functionality to handle it! For reasons like this, unit tests should augment, never replace, manual testing. Lots of code (x3-5) In the simple unit test example we saw earlier, the unit tests had about 3 times the amount of code as the actual code under test, and there were still other scenarios we hadn't tested yet. In general, for every line of code written, programmers often need 3 to 5 lines of test code. For example, every boolean decision statement requires at least two tests. Test code quickly builds up, and it all takes time to write, read, maintain and run. Some things are difficult to test Some things are extremely difficult to test, e.g. threading, GUIs. Testing legacy code bases can be challenging A common approach to adding unit testing to existing code is to start with one wrapper test, then simultaneously refactor and add tests as you go along. For example, if you have a legacy method that has 200 lines of code, you might start by adding one test that, for a given set of parameters, gives you a certain return value.
This will not test all the side effects the method has (e.g. the effect of calls to other objects), but it is a starting point. You can then start refactoring the method down into smaller methods, adding unit tests as you do so. The initial 'wrapper' test will give you some degree of confidence that you have not fundamentally broken the original functionality, and the new incremental tests you add as you go about refactoring will give you increased confidence, as well as allowing you to understand (and document) the code. It is worth pointing out, though, that in some cases the setup for objects not originally designed with unit testing in mind can be more trouble than it is worth. In these cases, you need to make the kind of decisions we discussed earlier in the technical debt section. So, given all those limitations, should we unit test? Absolutely! In fact, not only should we unit test, we should let unit tests drive development and design, via Test Driven Development (TDD). Test driven development TDD intro Test-driven development is a set of techniques which encourages simple designs and test suites that inspire confidence. The classic approach to TDD is Red - Green - Refactor: Red: write a failing test, one that may not even compile at first. Green: make the test pass quickly, committing whatever sins necessary in the process. Refactor: eliminate all of the duplication created in merely getting the test to work. Red, green, refactor: the TDD mantra. No new functionality without a failing test; no refactoring without passing tests. TDD example As with straight unit testing, TDD is best explained via an example. However, this section is best viewed as code screenshots. See the presentation slides here.
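Since the slides aren't reproduced here, a minimal illustration of the end result of one red-green-refactor cycle (a hypothetical leap-year example of my own, not the one from the talk):

```java
// Hypothetical red-green-refactor result (not the example from the slides).
public class LeapYear {
    // Red: each rule below started life as a failing test.
    // Green: the simplest code that made those tests pass.
    // Refactor: consolidated into one readable expression.
    static boolean isLeapYear(int year) {
        return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isLeapYear(2004)); // true
        System.out.println(isLeapYear(1900)); // false: divisible by 100
        System.out.println(isLeapYear(2000)); // true: divisible by 400
    }
}
```

Each printed case corresponds to one test written before the code that satisfies it, which is the whole discipline in miniature.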
A few points are worth noting from the TDD example: Writing the tests resulted in code that is clean and easy to understand; likely more so than if we had added tests after the fact (or not at all). The test names act as a good form of documentation. Finally, there is about five times more test code than the code we were testing, as we predicted earlier. This emphasizes why it is so important to refactor the test code just as, if not more, aggressively than the actual code under test. So far we have looked at why we should be concerned with good design in the first place, and how we can use automated tests to drive and confirm our design. Next, we are going to talk about how to spot issues with an existing code base by looking for code smells… 3. Clean code One way to ensure clean code is by avoiding 'code smells'. What is a code smell? "Certain structures in code that suggest (sometimes they scream for) the possibility of refactoring." Martin Fowler, Refactoring: Improving the Design of Existing Code. A 'smell' in code is a hint that something might be wrong with the code. To quote the Portland Pattern Repository's Wiki, if something smells, it definitely needs to be checked out, but it may not actually need fixing or might have to just be tolerated. The code that smells may itself need fixing, or it may be a symptom of, or hiding, another issue. Either way, it is worth looking into. We will look at the following code smells: duplicated code; long switch/if statements; long methods; poor method names; in-line comments; large classes. Duplicated code This is the #1 stink! It violates the DRY principle. If you see the same code structure in more than one place, you can be sure that your program will be better if you find a way to unify them. Symptom, and possible actions: same expression in two methods of the same class: extract to a new method; same expression in two sibling subclasses: extract to a method in a parent class; same expression in two unrelated classes: extract to a new class?
Have one class invoke the other? In all cases, parameterize any subtle differences in the duplicated code. Long switch / if statements The problem here is also one of duplication. The same switch statement is often duplicated in multiple places. If you add a new clause to the switch, you have to find all these switch statements and change them. Or similar code is being executed in each switch statement. The solution is often to use polymorphism. For example, if you are switching on a code of some kind, move the logic into the class that owns the codes, then introduce code-specific subclasses. An alternative is to use the State or Strategy design patterns. Long method The longer a method is, the more difficult it is to understand. Older languages carried an overhead in subroutine calls; modern OO languages have virtually eliminated that overhead. The key is good naming: if you have a good name for a method, you don't need to look at the body. Methods should be short (<10 lines). A one-line method seems a little too short, but even this is OK if it adds clarity to the code. Be aggressive about decomposing methods! The real key to decomposing methods into shorter ones is avoiding poor method names… Poor method names Method names should be descriptive, for example: int process(int id) { //bad! int calculateAccountBalance(int accountID) { //better i.e. the method name should describe what the method does without you having to read the code, or at least a quick scan of the code should confirm the method does what it says on the tin. In-line comments Yes, in-line comments can be considered a code smell! If the code is so difficult to follow that you need to add comments to describe it, consider refactoring! The best 'comments' are simply the names you give your methods and variables. Note, however, that Javadocs, particularly on public methods, are fine and good. Large classes When a class is trying to do too much, it often shows up as: Too many methods (>10 public?)
Too many lines of code. Too many instance variables: is every instance variable used in every method? Solutions: eliminate redundancy / duplicated code; extract new classes or subclasses. Clean code summary The single most important thing is to make the intent of your code clear. Your code should be clear, concise and easy to understand, because although a line of code is written just once, it is likely to be read many times. Will you be able to understand your intent in a month or two? Will your colleague? Will a junior developer? A software program will have, on average, 10 generations of maintenance programmers in its lifetime. Maintaining unreadable code is hard work because we expend so much energy on understanding what we're looking at. It's not just that, though. Studies have shown that poor readability correlates strongly with defect density. Code that's difficult to read tends to be difficult to test, too, which leads to fewer tests being written. Summary Good design gives us the stamina to continually and consistently deliver business value. Unit tests are an integral part of good design; TDD is even better. Good design can also simply be cleaner code; aggressively refactor to achieve this. Final thought: every time you are in a piece of code, just make one small improvement!   Reference: Software Quality via Unit Testing from our JCG partner Shaun Abram at the Shaun Abram blog. ...
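Returning to the 'long switch' smell discussed above, the polymorphism refactoring can be sketched like this (a hypothetical discount example of my own, not from the talk):

```java
// Before: every call site switches on a type code.
//   switch (customerType) { case REGULAR: ...; case PREMIUM: ...; }
// After: each type-specific behavior lives in its own subclass.
abstract class Customer {
    abstract double discountRate(); // each subclass owns its own logic
}

class RegularCustomer extends Customer {
    @Override double discountRate() { return 0.0; }
}

class PremiumCustomer extends Customer {
    @Override double discountRate() { return 0.10; }
}

public class PolymorphismDemo {
    static double priceFor(Customer c, double basePrice) {
        // No switch: the right discountRate() is chosen by dynamic dispatch.
        return basePrice * (1 - c.discountRate());
    }

    public static void main(String[] args) {
        System.out.println(priceFor(new RegularCustomer(), 100.0));
        System.out.println(priceFor(new PremiumCustomer(), 100.0));
    }
}
```

Adding a new customer type now means adding one subclass, rather than hunting down and editing every duplicated switch statement.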

Multiple dynamic includes with one JSF tag

Every JSF developer knows the ui:include and ui:param tags. You can include a facelet (XHTML file) and pass an object, which will be available in the included facelet, as follows: <ui:include src="/sections/columns.xhtml"> <ui:param name="columns" value="#{bean.columns}"/> </ui:include> So, you can e.g. use it within a PrimeFaces DataTable with dynamic columns (p:columns): <p:dataTable value="#{bean.entries}" var="data" rowKey="#{}" ...> ... <ui:include src="/sections/columns.xhtml"> <ui:param name="data" value="#{data}"/> <ui:param name="columns" value="#{bean.columns}"/> </ui:include></p:dataTable> where the included facelet could contain this code: <ui:composition xmlns="" xmlns:p="" xmlns:ui="" ...> <p:columns value="#{columns}" var="column"> <f:facet name="header"> <h:outputText value="#{msgs[column.header]}"/> </f:facet>// place some input / select or complex composite component for multiple data types here. // a simple example for demonstration purpose: <p:inputText value="#{data[]}"/> </p:columns> </ui:composition> #{bean.columns} refers to a list of special objects which describe the columns. I will name such objects ColumnModel, so it is a List<ColumnModel>. A ColumnModel has e.g. the attributes header and property. Let's go on. Now, if we want to add support for sorting / filtering, we can use dynamic paths which refer to specific facelet files containing the sorting and/or filtering feature(s). Simply bind the src attribute to a bean property.
<ui:include src="#{bean.columnsIncludeSrc}"> <ui:param name="data" value="#{data}"/> <ui:param name="columns" value="#{bean.columns}"/> </ui:include> The bean has something like: private boolean isFilterRight; private boolean isSortRight;// setter / getterpublic String getColumnsIncludeSrc() { if (isFilterRight && isSortRight) { return "/include/columnsTableFilterSort.xhtml"; } else if (isFilterRight && !isSortRight) { return "/include/columnsTableFilter.xhtml"; } else if (!isFilterRight && isSortRight) { return "/include/columnsTableSort.xhtml"; } else { return "/include/columnsTable.xhtml"; } } Different facelets are included depending on the boolean rights that are set. So, the decision about which file is to be included is placed within a bean. To be more flexible, we can encapsulate the table in a composite component and move the decision logic to the component class. <cc:interface componentType="xxx.component.DataTable"> <cc:attribute name="id" required="false" type="java.lang.String" shortDescription="Unique identifier of the component in a NamingContainer"/> <cc:attribute name="entries" required="true" shortDescription="The data which are shown in the datatable. This is a list of objects representing one row."/> <cc:attribute name="columns" required="true" type="java.util.List" shortDescription="The columns which are shown in the datatable. This is a list of instances of type ColumnModel."/> ... </cc:interface> <cc:implementation> <p:dataTable value="#{cc.attrs.entries}" var="data" rowKey="#{}" ...> ... <ui:include src="#{cc.columnsIncludeSrc}"> <ui:param name="data" value="#{data}"/> <ui:param name="columns" value="#{cc.attrs.columns}"/> </ui:include></p:dataTable> </cc:implementation> How does ui:include work? It is a tag handler which is applied when the view is being built. In JSF 2, the component tree is built twice on POST requests, once in the RESTORE_VIEW phase and once in the RENDER_RESPONSE phase. On GET it is built once, in the RENDER_RESPONSE phase.
This behavior is specified in the JSF 2 specification and is the same in Mojarra and MyFaces. The view building in RENDER_RESPONSE is necessary in case the page author uses conditional includes or conditional templates. So you can be sure that the src attribute of the ui:include gets evaluated shortly before the render phase.

But let's come to the point! What I wrote until now was an introduction and a motivation to extend ui:include. Recently, I got a task to use a p:dataTable with dynamic columns and p:rowEditor, like the one in the PrimeFaces showcase. The only problem: this editing feature doesn't support p:columns. My idea was to add the p:column tag multiple times, dynamically, but with different context parameters. You can imagine this as ui:include with ui:param in a loop. In the example above we intend to iterate over the List<ColumnModel>; each loop iteration should make an instance of type ColumnModel available in the included facelet. So I wrote a custom tag handler to include any facelet multiple times.

package xxx.taghandler;

import xxx.util.VariableMapperWrapper;
import java.io.IOException;
import java.util.List;
import java.util.UUID;
import javax.el.ExpressionFactory;
import javax.el.ValueExpression;
import javax.el.VariableMapper;
import javax.faces.component.UIComponent;
import javax.faces.view.facelets.FaceletContext;
import javax.faces.view.facelets.TagAttribute;
import javax.faces.view.facelets.TagAttributeException;
import javax.faces.view.facelets.TagConfig;
import javax.faces.view.facelets.TagHandler;

/**
 * Tag handler to include a facelet multiple times with different contexts (objects from "value").
 * The attribute "value" can be either of type java.util.List or an array.
 * If the "value" is null, the tag handler works as a standard ui:include.
 */
public class InlcudesTagHandler extends TagHandler {

    private final TagAttribute src;
    private final TagAttribute value;
    private final TagAttribute name;

    public InlcudesTagHandler(TagConfig config) {
        super(config);

        this.src = this.getRequiredAttribute("src");
        this.value = this.getAttribute("value");
        this.name = this.getAttribute("name");
    }

    @Override
    public void apply(FaceletContext ctx, UIComponent parent) throws IOException {
        String path = this.src.getValue(ctx);
        if ((path == null) || (path.length() == 0)) {
            return;
        }

        // wrap the original mapper - this is important when objects are passed into the include
        // via ui:param, because ui:param invokes setVariable(...) on the set variable mapper instance
        VariableMapper origVarMapper = ctx.getVariableMapper();
        ctx.setVariableMapper(new VariableMapperWrapper(origVarMapper));

        try {
            this.nextHandler.apply(ctx, null);

            ValueExpression ve = (this.value != null) ? this.value.getValueExpression(ctx, Object.class) : null;
            Object objValue = (ve != null) ? ve.getValue(ctx) : null;

            if (objValue == null) {
                // include the facelet only once
                ctx.includeFacelet(parent, path);
            } else {
                int size = 0;

                if (objValue instanceof List) {
                    size = ((List) objValue).size();
                } else if (objValue.getClass().isArray()) {
                    size = ((Object[]) objValue).length;
                }

                final ExpressionFactory exprFactory = ctx.getFacesContext().getApplication().getExpressionFactory();
                final String strName = (this.name != null) ? this.name.getValue(ctx) : null;

                // generate a unique id as a valid Java identifier and use it as the variable
                // for the provided value expression
                final String uniqueId = "a" + UUID.randomUUID().toString().replaceAll("-", "");
                ctx.getVariableMapper().setVariable(uniqueId, ve);

                // include the facelet multiple times
                StringBuilder sb = new StringBuilder();
                for (int i = 0; i < size; i++) {
                    if ((strName != null) && (strName.length() != 0)) {
                        // create a new value expression in the array notation and bind it to the variable "name"
                        sb.append("#{");
                        sb.append(uniqueId);
                        sb.append("[");
                        sb.append(i);
                        sb.append("]}");

                        ctx.getVariableMapper().setVariable(strName,
                                exprFactory.createValueExpression(ctx, sb.toString(), Object.class));
                    }

                    // the included facelet can access the value expression created above
                    ctx.includeFacelet(parent, path);

                    // reset for the next iteration
                    sb.setLength(0);
                }
            }
        } catch (IOException e) {
            throw new TagAttributeException(this.tag, this.src, "Invalid path : " + path);
        } finally {
            // restore the original mapper
            ctx.setVariableMapper(origVarMapper);
        }
    }
}

The most important call is ctx.includeFacelet(parent, path). This method from the JSF API includes the facelet markup at some path relative to the current markup. The class VariableMapperWrapper is used for the name-to-value mapping via ui:param. For the example with columns, the variable column will be mapped to the expressions #{columns[0]}, #{columns[1]}, etc. before every includeFacelet(...) call. Well, not exactly these expressions: in place of columns there is a unique name, which is in turn mapped to the columns object (to avoid possible name collisions). The mapper class looks as follows:

package xxx.util;

import java.util.HashMap;
import java.util.Map;
import javax.el.ELException;
import javax.el.ValueExpression;
import javax.el.VariableMapper;

/**
 * Utility class for wrapping a VariableMapper. Modifications occur on the internal Map instance.
 * Resolving occurs first against the internal Map instance and then against the wrapped VariableMapper
 * if the Map doesn't contain the requested ValueExpression.
 */
public class VariableMapperWrapper extends VariableMapper {

    private final VariableMapper wrapped;

    private Map<String, ValueExpression> vars;

    public VariableMapperWrapper(VariableMapper orig) {
        super();
        this.wrapped = orig;
    }

    @Override
    public ValueExpression resolveVariable(String variable) {
        ValueExpression ve = null;
        try {
            if (this.vars != null) {
                // try to resolve against the internal map
                ve = this.vars.get(variable);
            }

            if (ve == null) {
                // look in the wrapped variable mapper
                return this.wrapped.resolveVariable(variable);
            }

            return ve;
        } catch (Throwable e) {
            throw new ELException("Could not resolve variable: " + variable, e);
        }
    }

    @Override
    public ValueExpression setVariable(String variable, ValueExpression expression) {
        if (this.vars == null) {
            this.vars = new HashMap<String, ValueExpression>();
        }

        return this.vars.put(variable, expression);
    }
}

Register the tag handler in a taglib XML file and you are done.

<tag>
    <tag-name>includes</tag-name>
    <handler-class>xxx.taghandler.InlcudesTagHandler</handler-class>
    <attribute>
        <description>
            <![CDATA[The relative path to an XHTML file to be included one or multiple times.]]>
        </description>
        <name>src</name>
        <required>true</required>
        <type>java.lang.String</type>
    </attribute>
    <attribute>
        <description>
            <![CDATA[Objects which should be available in the included XHTML files. This attribute can be either of type java.util.List or an array. If it is null, the tag handler works as a standard ui:include.]]>
        </description>
        <name>value</name>
        <required>false</required>
        <type>java.lang.Object</type>
    </attribute>
    <attribute>
        <description>
            <![CDATA[The name of the parameter which points to an object of each iteration over the given value.]]>
        </description>
        <name>name</name>
        <required>false</required>
        <type>java.lang.String</type>
    </attribute>
</tag>

Now I was able to use it in a composite component as:

<p:dataTable value="#{cc.attrs.entries}" var="data" rowKey="#{}" ...>
    ...
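The wrapper's resolution order — local map first, then the wrapped mapper — is the core of the name-collision trick. Stripped of the javax.el types, the pattern can be sketched in plain Java; the Resolver interface and all names below are illustrative and not part of the JSF or EL API:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java sketch of the VariableMapperWrapper resolution order:
// local definitions shadow the wrapped scope, everything else falls through.
interface Resolver {
    String resolve(String name);
}

class ChainedResolver implements Resolver {

    private final Resolver wrapped;
    private final Map<String, String> vars = new HashMap<>();

    ChainedResolver(Resolver wrapped) {
        this.wrapped = wrapped;
    }

    void setVariable(String name, String value) {
        vars.put(name, value);
    }

    @Override
    public String resolve(String name) {
        String local = vars.get(name);
        // fall through to the wrapped resolver when there is no local binding
        return (local != null) ? local : wrapped.resolve(name);
    }
}
```

Wrapping in this way lets each include iteration rebind a variable such as column without touching, or being confused by, any binding of the same name in the outer scope.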
    <custom:includes src="#{cc.columnsIncludeSrc}" value="#{cc.attrs.columns}" name="column">
        <ui:param name="data" value="#{data}"/>
    </custom:includes>
</p:dataTable>

A typical facelet file (and the component tree) now contains a quite regular p:column tag, which means we are able to use all of the DataTable's features!

<ui:composition xmlns="http://www.w3.org/1999/xhtml"
                xmlns:p="http://primefaces.org/ui"
                xmlns:ui="http://java.sun.com/jsf/facelets" ...>
    <p:column headerText="#{msgs[column.header]}">
        <p:cellEditor>
            <f:facet name="output">
                <custom:typedOutput outputType="#{column.outputTypeName}" typedData="#{column.typedData}"
                    value="#{data[column.property]}" timeZone="#{cc.timeZone}"
                    calendarPattern="#{cc.calendarPattern}" locale="#{cc.locale}"/>
            </f:facet>
            <f:facet name="input">
                <custom:typedInput inputType="#{column.inputTypeName}" typedData="#{column.typedData}"
                    label="#{column.inputTypeName}" value="#{data[column.property]}"
                    onchange="highlightEditedRow(this)" timeZone="#{cc.timeZone}"
                    calendarPattern="#{cc.calendarPattern}" locale="#{cc.locale}"/>
            </f:facet>
        </p:cellEditor>
    </p:column>
</ui:composition>

Note: This approach can be applied to other components and use cases; the InlcudesTagHandler works universally. For instance, I can imagine creating a dynamic menu component in PrimeFaces without an underlying MenuModel. A list or array of some model class is, of course, still needed.

Reference: Multiple dynamic includes with one JSF tag from our JCG partner Oleg Varaksin at the Thoughts on software development blog.
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact