
Reducing memory consumption by 20x

This is going to be another story sharing our recent experience with memory-related problems. The case is extracted from a recent customer support case, where we faced a badly behaving application repeatedly dying with OutOfMemoryError messages in production. After running the application with Plumbr attached we were sure we were not facing a memory leak this time. But something was still terribly wrong.

The symptoms were discovered by one of our experimental features that monitors the overhead of certain data structures. It gave us a signal pinpointing one particular location in the source code. In order to protect the privacy of the customer we have recreated the case using a synthetic sample, at the same time keeping it technically equivalent to the original problem. Feel free to download the source code.

We found ourselves staring at a set of objects loaded from an external source. The communication with the external system was implemented via an XML interface, which is not bad per se. But the fact that the integration implementation details were scattered across the system – the documents received were converted to XMLBean instances and then used across the system – was maybe not the wisest thing. Essentially we were dealing with a lazily-loaded caching solution. The objects cached were Persons:

// Imports and methods removed to improve readability
public class Person {
    private String id;
    private Date dateOfBirth;
    private String forename;
    private String surname;
}

Not too memory-consuming, one might guess. But things start to look a bit more sour when we open up some more details. Namely, the implementation of this data was nothing like the simple class declaration above. Instead, the implementation used a model-generated data structure. The model used was similar to the following simplified XSD snippet:

<xs:schema targetNamespace="http://plumbr.eu"
           xmlns:xs="http://www.w3.org/2001/XMLSchema"
           elementFormDefault="qualified">
  <xs:element name="person">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="id" type="xs:string"/>
        <xs:element name="dateOfBirth" type="xs:dateTime"/>
        <xs:element name="forename" type="xs:string"/>
        <xs:element name="surname" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

Using XMLBeans, the developer had generated the model used behind the scenes. Now let's add the fact that the cache was supposed to hold up to 1.3M instances of Persons, and we have created a strong foundation for failure. Running a bundled test case gave us an indication that 1.3M instances of the XMLBean-based solution would consume approximately 1.5GB of heap. We thought we could do better.

The first solution is obvious: integration details should not cross system boundaries. So we changed the caching solution to a simple java.util.HashMap<Long, Person>, with the ID as the key and the Person object as the value. Immediately we saw the memory consumption reduced to 214MB. But we were not satisfied yet. As the key in the Map was essentially a number, we had all the reasons to use Trove Collections to further reduce the overhead. A quick change in the implementation and we had replaced our HashMap with TLongObjectHashMap<Person>. Heap consumption dropped to 143MB. We definitely could have stopped there, but the engineering curiosity did not allow us to do so.
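For illustration, here is a minimal sketch of what the switch from the boxed-key HashMap to Trove's primitive-keyed map could look like (this is my sketch, not the customer's code; Trove 3 package names are assumed, and Person is the simplified class shown above):

import gnu.trove.map.hash.TLongObjectHashMap;

public class PersonCache {

    // Replaces the original java.util.HashMap<Long, Person>: primitive long keys
    // avoid a boxed Long wrapper and a HashMap.Entry object per cached person.
    private final TLongObjectHashMap<Person> cache = new TLongObjectHashMap<Person>();

    public void put(long id, Person person) {
        cache.put(id, person);
    }

    public Person get(long id) {
        return cache.get(id);
    }
}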
We could not help but notice that the data contained a redundant piece of information: the date of birth was actually encoded in the ID, so instead of duplicating it in an additional field, we could easily calculate the birthday from the given ID. So we changed the layout of the Person object, which now contained just the following fields:

// Imports and methods removed to improve readability
public class Person {
    private long id;
    private String forename;
    private String surname;
}

Re-running the tests confirmed our expectations: heap consumption was down to 93MB. But we were still not satisfied. The application was running on a 64-bit machine with an old JDK6 release, which did not compress ordinary object pointers by default. Switching on -XX:+UseCompressedOops gave us an additional win – now we were down to 73MB consumed. We could go further and start interning strings or building a b-tree based on the keys, but this would already start impacting the readability of the code, so we decided to stop here. A 21.5x heap reduction should already be a good enough result.

Lessons learned?

Do not let integration details cross system boundaries.
Redundant data will be costly. Remove the redundancy whenever you can.
Primitives are your friends. Know thy tools, and learn Trove if you haven't already.
Be aware of the optimization techniques provided by your JVM.

If you are curious about the experiment conducted, feel free to download the code used from here. The utility used for measurements is described and available in this blog post.

Reference: Reducing memory consumption by 20x from our JCG partner Nikita Salnikov-Tarnovski at the Plumbr Blog.

MongoDB to CSV

Every once in a while, I need to give a non-technical user (like a business analyst) data residing in MongoDB; consequently, I export the target data as a CSV file (which they can presumably slice and dice once they import it into Excel or some similar tool). Mongo has a handy export utility that takes a bevy of options; however, there is an outstanding bug and some general confusion as to how to properly export data in CSV format. Accordingly, if you need to export some specific data from MongoDB into CSV format, here's how you do it.

The key parameters are connection information (including authentication), an output file and, most important, a list of fields to export. What's more, you can provide a query in escaped JSON format. You can find the mongoexport utility in your Mongo installation's bin directory. I tend to favor verbose parameter names and explicit connection information (i.e. rather than a URL syntax, I prefer to spell out the host, port, db, etc. directly).

As I'm targeting specific data, I'm going to specify the collection; what's more, I'm going to further filter the data via a query. ObjectIds can be referenced via the $oid format; furthermore, you'll need to escape all JSON quotes. For example, if my query is against a users collection and filtered by account_id (which is an ObjectId), the query via the mongo shell would be:

Mongo Shell Query

db.users.find({account_id:ObjectId('5058ca07b7628c0002099006')})

Via the command line à la mongoexport, this translates to:

Collections and queries

--collection users --query "{\"account_id\": {\"\$oid\": \"5058ca07b7628c0002000006\"}}"

Finally, if you want to export only a portion of the fields in a user document, for example name, email, and created_at, you need to provide them via the fields parameter like so:

Fields declaration

--fields name,email,created_at

Putting it all together yields the following command:

Puttin' it all together

mongoexport --host mgo.acme.com --port 10332 --username acmeman --password 12345 \
  --collection users --csv --fields name,email,created_at --out all_users.csv --db my_db \
  --query "{\"account_id\": {\"\$oid\": \"5058ca07b7628c0999000006\"}}"

Of course, you can throw this into a bash script and parameterize the collection, fields, output file, and query with bash's handy $1, $2, etc. variables.

Reference: MongoDB to CSV from our JCG partner Andrew Glover at The Disco Blog.

5 Reasons to use Guava

Guava is an open source library containing many classes for Java, written by Google. It's a potentially useful source of miscellaneous utility functions and classes that I'm sure many developers have written themselves before, or maybe just wanted and never had time to write. Here are 5 good reasons to use it!

1. Collection Initializers and Utilities

Generic homogeneous collections are a great feature to have in Java, but sometimes their construction is a bit too verbose, for example:

final Map<String, Map<String, Integer>> lookup = new HashMap<String, Map<String, Integer>>();

Java 7 solves this problem in a really generic way, by allowing a limited form of type inference informally referred to as the Diamond Operator. So we can rewrite the above example as:

final Map<String, Map<String, Integer>> lookup = new HashMap<>();

It's actually already possible to have this kind of inference for non-constructor methods in earlier Java releases, and Guava provides many ready-made constructors for existing Java Collections. The above example can be written as:

final Map<String, Map<String, Integer>> lookup = Maps.newHashMap();

Guava also provides many useful utility functions for collections under the Maps, Sets et al. classes. Particular favourites of mine are the Sets.union and Sets.intersection methods that return views on the sets, rather than recomputing the values; a short sketch of these views follows below.
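As a quick sketch (not from the original post) of those set views, assuming Guava is on the classpath:

import com.google.common.collect.ImmutableSet;
import com.google.common.collect.Sets;

import java.util.Set;

public class SetViewsExample {
    public static void main(String[] args) {
        Set<Integer> odds = ImmutableSet.of(1, 3, 5, 7);
        Set<Integer> primes = ImmutableSet.of(2, 3, 5, 7);

        // Both calls return live, unmodifiable views backed by the input sets,
        // so no elements are copied up front.
        Set<Integer> union = Sets.union(odds, primes);
        Set<Integer> intersection = Sets.intersection(odds, primes);

        System.out.println(union);         // [1, 3, 5, 7, 2]
        System.out.println(intersection);  // [3, 5, 7]
    }
}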
2. Limited Functional-Style Programming

Guava provides some common methods for passing around methods in a functional style. For example, the map function that many functional programming languages have exists in the form of the Collections2.transform method. Collections2 also has a filter method that allows you to restrict what values are in a collection. For example, to remove the elements from a collection that are null, and store the result in another collection, you can do the following:

Collection<?> noNullsCollection = filter(someCollection, notNull());

It's important to remember that in both cases the function returns a new collection rather than modifying the existing one, and that the resulting collections are lazily computed.

3. Multimaps and Bimaps

A really common use case for a Map involves storing multiple values for a single key. Using the standard Java Collections, that's usually accomplished by using another collection as the value type. This unfortunately ends up involving a lot of ceremony that needs to be repeated when initializing the collection. Multimaps clear this up quite a bit, for example:

Multimap<String, Integer> scores = HashMultimap.create();
scores.put("Bob", 20);
scores.put("Bob", 10);
scores.put("Bob", 15);
System.out.println(Collections.max(scores.get("Bob"))); // prints 20

There's also a BiMap class which goes in the other direction – that is to say that it enforces uniqueness of values as well as keys. Since values are also unique, a BiMap can be used in reverse.

4. Easy Hashcodes and Comparators

It's pretty common to want to generate a hashcode for a class in Java from the hashcodes of its fields. Guava provides a utility method for this in the Objects class; here's an example:

int foo;
String bar;

@Override
public int hashCode() {
    return Objects.hashCode(foo, bar);
}

Don't forget to maintain the equals contract if you're defining a hashCode method. Comparators are another example where writing them frequently involves chaining together a sequence of operations. Guava provides a ComparisonChain class in order to ease this process. Here's an example with an int and String class:

int foo;
String bar;

@Override
public int compareTo(final GuavaExample o) {
    return ComparisonChain.start().compare(foo, o.foo).compare(bar, o.bar).result();
}

5. Defensive Coding

Do you ever find yourself writing certain preconditions for your methods regularly? Sometimes these can be unnecessarily verbose, or fail to convey intent as directly. Guava provides the Preconditions class with a series of common preconditions. For example, instead of an if statement and an explicit exception throw ...

if (count <= 0) {
    throw new IllegalArgumentException("must be positive: " + count);
}

... you can use an explicit precondition:

checkArgument(count > 0, "must be positive: %s", count);

Conclusions

Being able to replace existing library classes with those from Guava helps you reduce the amount of code you need to maintain and offers a potential productivity boost. There are alternatives, for example the Apache Commons project. It might be the case that you already use and know of these libraries, or prefer their approach and API to the Guava approach. Guava does have an Idea Graveyard – which gives you some idea of what the Google engineers perceive to be the limits of the library, or a bad design decision. You may not individually agree with these choices, at which point you're back to writing your own library classes. Overall, though, Guava encourages a terser and less ceremonious style, and some appropriate application of Guava could help many Java projects.

Original: http://insightfullogic.com/blog/2011/oct/21/5-reasons-use-guava/

Reference: 5 Reasons to use Guava from our JCG partner Andriy Andrunevchyn at the Java User Group of Lviv blog.

Goodbye Guvnor. Hello Drools Workbench.

Many things are changing for Drools 6.0. Along with the functional and feature changes, we have restructured the Guvnor GitHub repository to better reflect our new architecture. Guvnor has historically been the web application for Drools. It was a composition of editors specific to Drools, a back-end repository and a simplistic asset management system. Things are now different. For Drools 6.0 the web application has been extensively re-written to use UberFire, which provides a generic workbench environment, a metadata engine, a security framework, a VFS API and clustering support. Guvnor has become a generic asset management framework providing common services for generic projects and their dependencies. Drools' use of both UberFire and Guvnor has given birth to the Drools Workbench.

Uberfire
https://github.com/droolsjbpm/uberfire

UberFire is the foundation of all components for both Drools and jBPM. Every editor and service leverages UberFire. Components can be mixed and matched into either a full-featured application or used in isolation.

Guvnor
https://github.com/droolsjbpm/guvnor

Guvnor adds project services and dependency management to the mix. At present Guvnor consists of a few parts, being principally a port of the common project services that existed in the old Guvnor. As things settle down and the module matures, pluggable workflow will be supported, allowing sensitive operations to be controlled by jBPM processes and rules. Work is already underway to include this for 6.0.

kie-wb-common
https://github.com/droolsjbpm/kie-wb-common

Both the Drools and jBPM editors and services share the need for a common set of re-usable screens, services and widgets. Rather than pollute Guvnor with screens and services needed only by Drools and jBPM, this module contains such common dependencies. It is possible to just re-use the UberFire and Guvnor stack to create your own project-based workbench type application and take advantage of the underlying services.

Drools Workbench (drools-wb)
https://github.com/droolsjbpm/drools-wb

Drools Workbench is the end product for people looking for a web application that is composed of all the Drools-related editors, screens and services. It is equivalent to the old Guvnor. If you are looking for the web application to accompany Drools Expert and Drools Fusion, an environment to author, test and deploy rules, this is what you're looking for.

KIE Drools Workbench (kie-drools-wb)
https://github.com/droolsjbpm/kie-wb-distributions/tree/master/kie-drools-wb

KIE Drools Workbench (for want of a better name – it's amazing how difficult names can be) is an extension of Drools Workbench that includes jBPM Designer to support Rule Flow. jBPM Designer, now being an UberFire-compatible component, does not need to be deployed as a separate web application. We bundle it here, along with Drools, as a convenience for people looking to author Rule Flows alongside their rules.

KIE Workbench (kie-wb)
https://github.com/droolsjbpm/kie-wb-distributions/tree/master/kie-wb

This is the daddy of them all. KIE Workbench is the composition of everything known to man, from both the Drools and jBPM worlds. It provides for authoring of projects, data models, guided rules, decision tables etc., test services, process authoring, a process run-time execution environment and human task interaction. KIE Workbench is the old Guvnor, jBPM Designer and jBPM Console applications combined. On steroids.

The World is not enough?

You may have noticed that KIE Drools Workbench and KIE Workbench are in the same repository.
This highlights a great thing about the new module design we have with UberFire: web applications are just a composition of dependencies. You want your own web application that consists of just the Guided Rule Editor and jBPM Designer? You want your own web application that has the Data Modeller and some of your own screens? Pick your dependencies and add them to your own UberFire-compliant web application and, as the saying goes, the world is your oyster.

Reference: Goodbye Guvnor. Hello Drools Workbench. from our JCG partner Geoffrey De Smet at the Drools & jBPM blog.

Android listview background row style: Rounded Corner, alternate color

One aspect we didn't consider in the previous posts is how we can apply a style or background to the ListView items (or rows). We can customize the look of the ListView in the way we like; for example, we can use rounded corners, alternating colors and so on as the row background. So far we have considered just custom adapters, without taking into account how we can customize how each item inside the ListView appears. In this post we want to describe how we can use resources to customize the item look. The first example will describe how we can create rounded corners for each item inside a ListView. In the second example we will show how we can alternate the background color.

ListView with rounded corners

Let's suppose we want to create rounded corners for each item. How can we do this? We need to create some drawable resources and apply them to each item. As you already know, we have to create a custom adapter to implement this behaviour. In this post we don't want to spend too many words on adapters because we described them here and here.

As we said, the first thing we need is a drawable resource. As you may already know, this is a powerful feature of Android because it permits us to describe geometrical figures in XML. We have to specify some information to create this figure:

border size and color
background color (in our case a solid color)
corners

We need to create an XML file under the res/drawable directory. Let's call this file rounded_corner.xml. This file contains a shape definition. A shape is a geometrical figure that is described by other tags:

stroke – a stroke line for the shape (width, color, dashWidth and dashGap)
solid – the solid colour that fills the shape
corners – the corner radius and so on

So rounded_corner.xml looks like:

<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android" >

    <solid android:color="#00FF00"/>

    <corners android:radius="5dp" />

    <padding android:left="3dp" android:top="3dp"
             android:right="3dp" android:bottom="3dp" />

    <stroke android:width="3dp" android:color="#00CC00"/>

</shape>

Once we have created our shape, we need to apply it to the items. To do that, we have to create another XML file that describes how we apply this shape. In this case we use the selector tag to specify when and how to apply the shape. To specify when to apply it, we use the view state.
We specify to apply this shape when:

state = enabled
state = pressed
state = focused

So our file (listview_selector.xml) looks like:

<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android" >

    <item android:drawable="@drawable/rounded_corner" android:state_enabled="true"/>

    <item android:drawable="@drawable/rounded_corner" android:state_pressed="true"/>

    <item android:drawable="@drawable/rounded_corner" android:state_focused="true"/>

</selector>

Now that we have defined our resources, we simply need to apply them in our adapter in this way:

public View getView(int position, View convertView, ViewGroup parent) {
    View v = convertView;

    PlanetHolder holder = new PlanetHolder();

    // First let's verify the convertView is not null
    if (convertView == null) {
        // This is a new view, so we inflate the layout
        LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        v = inflater.inflate(R.layout.row_layout, null);
        // Now we can fill the layout with the right values
        TextView tv = (TextView) v.findViewById(R.id.name);

        holder.planetNameView = tv;

        v.setTag(holder);

        v.setBackgroundResource(R.drawable.rounded_corner);
    } else {
        holder = (PlanetHolder) v.getTag();
    }

    Planet p = planetList.get(position);
    holder.planetNameView.setText(p.getName());

    return v;
}

If we run the app, each row is now drawn with the rounded-corner background.
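The adapter code relies on a PlanetHolder class that holds the row widgets; the original post does not show it, but a minimal sketch consistent with the getView() code above could be a simple nested class inside the adapter (the field name matches the usage above, the rest is assumed):

static class PlanetHolder {
    // Holds the row's widgets so getView() can reuse them via setTag()/getTag()
    // instead of calling findViewById() every time a row is recycled
    TextView planetNameView;
}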
ListView with alternate color

As we described above, if we want to change how each row looks inside the ListView, we simply have to change the resource and we can customize its look. For example, suppose we want to alternate the row color. In this case we need to create two drawable resources, one for each background, like these:

even_row.xml

<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android" >

    <solid android:color="#A0A0A0"/>

    <padding android:left="3dp" android:top="3dp"
             android:right="3dp" android:bottom="3dp" />

    <stroke android:width="1dp" android:color="#00CC00"/>

</shape>

odd_row.xml

<?xml version="1.0" encoding="utf-8"?>
<shape xmlns:android="http://schemas.android.com/apk/res/android" >

    <solid android:color="#F0F0F0"/>

    <padding android:left="3dp" android:top="3dp"
             android:right="3dp" android:bottom="3dp" />

    <stroke android:width="1dp" android:color="#00CC00"/>

</shape>

Moreover, we need two selectors that use these drawable resources, like these:

listview_selector_even.xml

<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android" >

    <item android:drawable="@drawable/even_row" android:state_enabled="true"/>

    <item android:drawable="@drawable/even_row" android:state_pressed="true"/>

    <item android:drawable="@drawable/even_row" android:state_focused="true"/>

</selector>

listview_selector_odd.xml

<?xml version="1.0" encoding="utf-8"?>
<selector xmlns:android="http://schemas.android.com/apk/res/android" >

    <item android:drawable="@drawable/odd_row" android:state_enabled="true"/>

    <item android:drawable="@drawable/odd_row" android:state_pressed="true"/>

    <item android:drawable="@drawable/odd_row" android:state_focused="true"/>

</selector>

And finally we apply them inside our custom adapter:

public View getView(int position, View convertView, ViewGroup parent) {
    View v = convertView;

    PlanetHolder holder = new PlanetHolder();

    // First let's verify the convertView is not null
    if (convertView == null) {
        // This is a new view, so we inflate the layout
        LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
        v = inflater.inflate(R.layout.row_layout, null);
        // Now we can fill the layout with the right values
        TextView tv = (TextView) v.findViewById(R.id.name);

        holder.planetNameView = tv;

        v.setTag(holder);

        if (position % 2 == 0)
            v.setBackgroundResource(R.drawable.listview_selector_even);
        else
            v.setBackgroundResource(R.drawable.listview_selector_odd);
    } else {
        holder = (PlanetHolder) v.getTag();
    }

    Planet p = planetList.get(position);
    holder.planetNameView.setText(p.getName());

    return v;
}

Running the app, the rows now alternate between the two backgrounds.

Source code @ github

Reference: Android listview background row style: Rounded Corner, alternate color from our JCG partner Francesco Azzola at the Surviving w/ Android blog.

Choosing between a Pen Test and a Secure Code Review

Secure code reviews (bringing someone in from outside of the team to review/audit the code for security vulnerabilities) and application pen tests (again, bringing a security specialist in from outside the team to test the system) are both important practices in a secure software development program. But if you could only do one of them, if you had limited time or limited budget, which should you choose? Which approach will find more problems and tell you more about the security of your app and your team? What will give you more bang for your buck?

Pen testing and code reviews are very different things – they require different work on your part, they find different problems and give you different information. And the cost can be quite different too.

White Box / Black Box

We all know the difference between white box and black box. Because they can look inside the box, code reviewers can zero in on high-risk code: public interfaces, session management and password management, access control, crypto and other security plumbing, code that handles confidential data, error handling, auditing. By scanning through the code they can check whether the app is vulnerable to common injection attacks (SQL injection, XSS, …), and they can look for time bombs and back doors (which are practically impossible to test for from outside) and other suspicious code. They may find problems with concurrency and timing and other code quality issues that aren't exploitable but should be fixed anyway. And a good reviewer, as they work to understand the system and its design and ask questions, can also point out design mistakes, incorrect assumptions and inconsistencies – not just coding bugs.

Pen testers rely on scanners and attack proxies and other tools to help them look for many of the same common application vulnerabilities (SQL injection, XSS, …) as well as run-time configuration problems. They will find information disclosure and error handling problems as they hack into the system. And they can test for problems in session management, password handling and user management, authentication and authorization bypass weaknesses, and even find business logic flaws, especially in familiar workflows like online shopping and banking functions. But because they can't see inside the box, they – and you – won't know if they've covered all of the high-risk parts of the system.

The kind of security testing that you are already doing on your own can influence whether a pen test or a code review is more useful. Are you testing your web app regularly with a black box dynamic vulnerability scanning tool or service? Or running static analysis checks as part of Continuous Integration? A manual pen test will find many of the same kinds of problems that an automated dynamic scanner will, and more. A good static analysis tool will find at least some of the same bugs that a manual code review will – a lot of reviewers use static analysis source code scanning tools to look for low hanging fruit (common coding mistakes, unsafe functions, hard-coded passwords, simple SQL injection, …). Superficial tests or reviews may not involve much more than someone running one of these automated scanning tools and reviewing and qualifying the results for you. So, if you've been relying on dynamic analysis testing, it makes sense to get a code review to look for problems that you haven't already tested for yourself. And if you've been scanning code with static analysis tools, then a pen test may have a better chance of finding different problems.
Costs and Hassle

A pen test is easy to set up and manage. It should not require a lot of time and hand-holding from your team, even if you do it right and make sure to explain the main functions of the application to the pen test team, walk them through the architecture, and give them all the access they need.

Code reviews are generally more expensive than pen tests, and will require more time and effort on your part – you can't just give an outsider a copy of the code and expect them to figure it all out on their own. There is more hand-holding needed, both ways. You holding their hand, explaining the architecture and how the code is structured and how the system works and the compliance and risk drivers, answering questions about the design and the technology as they go along; and them holding your hand, patiently explaining what they found and how to fix it, and working with your team to understand whether each finding is worth fixing, weeding out false positives and other misunderstandings. This hand-holding is important. You want to get maximum value out of a reviewer's time – you want them to focus on high-risk code and not get lost on tangents. And you want to make sure that your team understands what the reviewer found, how important each bug is, and how it should be fixed. So not only do you need to have people helping the reviewer – they should be your best people.

Intellectual property, confidentiality and other legal concerns are important, especially for code reviews – you're letting an outsider look at the code, and while you want to be transparent in order to ensure that the review is comprehensive, you may also be risking your secret sauce. Solid contracting and working with reputable firms will minimize some of these concerns, but you may also need to strictly limit what code the reviewer will get to see.

Other Factors in Choosing between Pen Tests and Code Reviews

The type of system and its architecture can also impact your decision. It's easy to find pen testers who have lots of experience in testing web portals and online stores – they'll be familiar with the general architecture, recognize common functions and workflows, and can rely on out-of-the-box scanning and fuzzing tools to help them test. This has become a commodity-based service, where you can expect a good job done for a reasonable price. But if you're building an app with proprietary system-to-system APIs or proprietary clients, or you are working in a highly specialized technical domain, it's harder to find qualified pen testers, and they will cost more. They'll need more time and help to understand the architecture and the app, how everything fits together and what they should focus on in testing. And they won't be able to leverage standard tools, so they'll have to roll something of their own, which will take longer and may not work as well. A code review could tell you more in these cases.

But the reviewer has to be competent in the language(s) that your app is written in – and, to do a thorough job, they should also be familiar with the frameworks and libraries that you are using. Since it is not always possible to find someone with the right knowledge and experience, you may end up paying them to learn on the job – and relying a lot on how quickly they learn. And of course if you're using a lot of third party code for which you don't have source, then a pen test is really your only choice.

Are you in a late stage of development, getting ready to release?
What you care about most at this point is validating the security of the running system, including the run-time configuration and, if you're really late in development, finding any high-risk exploitable vulnerabilities, because that's all you will have time to fix. This is where a lot of pen testing is done. If you're in the early stages of development, it's better to choose a code review. Pen testing doesn't make a lot of sense (you don't have enough of the system to do real system testing), and a code review can help set the team on the right path for the rest of the code that they have to write.

Learning from and using the results

Besides finding vulnerabilities and helping you assess risk, a code review or a pen test both provide learning opportunities – a chance for the development team to understand and improve how they write and test software.

Pen tests tell you what is broken and exploitable – developers can't argue that a problem isn't real, because an outside attacker found it, and that attacker can explain how easy or hard it was for them to find the bug and what the real risk is. Developers know that they have to fix something – but it's not clear where and how to fix it. And it's not clear how they can check that they've fixed it right. Unlike most bugs, there are no simple steps for the developer to reproduce the bug themselves: they have to rely on the pen tester to come back and re-test. It's inefficient, and there isn't a nice tight feedback loop to reinforce understanding. Another disadvantage with pen tests is that they are done late in development, often very late. The team may not have time to do anything except triage the results and fix whatever has to be fixed before the system goes live. There's no time for developers to reflect and learn and incorporate what they've learned.

There can also be a communication gap between pen testers and developers. Most pen testers think and talk like hackers, in terms of exploits and attacks. Or they talk like auditors, compliance-focused, mapping their findings to vulnerability taxonomies and risk management frameworks, which don't mean anything to developers. Code reviewers think and talk like programmers, which makes code reviews much easier to learn from – provided that the reviewer and the developers on your team make the time to work together and understand the findings. A code reviewer can walk the developer through what is wrong, explain why and how to fix it, and answer the developer's questions immediately, in terms that a developer will understand, which means that problems can get fixed faster and fixed right.

You won't find all of the security vulnerabilities in an app through a code review or a pen test – or even from doing both of them (although you'd have a better chance). If I could only do one or the other, all other factors aside, I would choose a code review. A review will take more work, and probably cost more, and it might not even find as many security bugs. But you will get more value in the long term from a code review. Developers will learn more and quicker, hopefully enough to understand how to look for and fix security problems on their own, and even more important, to avoid them in the first place.

Reference: Choosing between a Pen Test and a Secure Code Review from our JCG partner Jim Bird at the Building Real Software blog.

JPA 2 | EntityManagers, Transactions and everything around it

Introduction

One of the most confusing and unclear things for me, as a Java developer, has been the mystery surrounding transaction management in general and how JPA handles transaction management in particular: when does a transaction get started, when does it end, how entities are persisted, the persistence context, and much more. Frameworks like Spring do not help in understanding the concepts either, as they provide another layer of abstraction which makes things difficult to understand. In today's post, I will try to demystify some of the things behind JPA's specification of entity management and its transaction behaviour, and how a better understanding of the concepts helps us design and code effectively. We will try to keep the discussion technology and framework agnostic, although we will look at both Java SE (where the Java EE container is not available) and Java EE based examples.

Basic Concepts

Before diving into greater detail, let's quickly walk through some basic classes and what they mean in JPA.

EntityManager – a class that manages the persistent state (or lifecycle) of an entity.
Persistence Unit – a named configuration of entity classes.
Persistence Context – a managed set of entity instances. The entity classes are part of the Persistence Unit configuration.
Managed Entities – an entity instance is managed if it is part of a persistence context and that Entity Manager can act upon it.

From bullet points one and three above, we can infer that an Entity Manager always manages a Persistence Context. And so, if we understand the Persistence Context, we will understand the EntityManager.

Details

EntityManager in JPA

There are three main types of EntityManagers defined in JPA:

Container Managed and Transaction Scoped Entity Managers
Container Managed and Extended Scope Entity Managers
Application Managed Entity Managers

We will now look at each one of them in slightly more detail.

Container Managed Entity Manager

When a container of the application (be it a Java EE container or any other custom container like Spring) manages the lifecycle of the Entity Manager, the Entity Manager is said to be container managed. The most common way of acquiring a container managed EntityManager is to use the @PersistenceContext annotation on an EntityManager attribute. Here's an example:

public class EmployeeServiceImpl implements EmployeeService {
    @PersistenceContext(unitName="EmployeeService")
    EntityManager em;

    public void assignEmployeeToProject(int empId, int projectId) {
        Project project = em.find(Project.class, projectId);
        Employee employee = em.find(Employee.class, empId);
        project.getEmployees().add(employee);
        employee.getProjects().add(project);
    }
}

In the above example we have used the @PersistenceContext annotation on an EntityManager instance variable. The PersistenceContext annotation has an attribute "unitName" which identifies the Persistence Unit for that context. Container managed Entity Managers come in two flavors:

Transaction Scoped Entity Managers
Extended Scope Entity Managers

Note that the scope above really means the scope of the Persistence Context that the Entity Manager manages. It is not the scope of the EntityManager itself. Let's look at each one of them in turn.

Transaction Scoped Entity Manager

This is the most common Entity Manager used in applications. In the above example, too, we are actually creating a Transaction Scoped Entity Manager.
A Transaction Scoped Entity Manager is returned whenever a reference created by @PersistenceContext is resolved. The biggest benefit of using a Transaction Scoped Entity Manager is that it is stateless. This also makes the Transaction Scoped EntityManager thread-safe and thus virtually maintenance free. But we just said that an EntityManager manages the persistence state of an entity, and the persistence state of an entity is part of the persistence context that gets injected into the EntityManager. So how does the statement about statelessness hold ground?

The answer lies in the fact that all container managed Entity Managers depend on JTA transactions. Every time an operation is invoked on an Entity Manager, the container proxy (the container creates a proxy around the entity manager while instantiating it) checks for an existing Persistence Context on the JTA transaction. If it finds one, the Entity Manager will use this Persistence Context. If it doesn't find one, it creates a new Persistence Context and associates it with the transaction. Let's take the same example we discussed above to understand the concept of entity managers and transaction creation.

public class EmployeeServiceImpl implements EmployeeService {
    @PersistenceContext(unitName="EmployeeService")
    EntityManager em;

    public void assignEmployeeToProject(int empId, int projectId) {
        Project project = em.find(Project.class, projectId);
        Employee employee = em.find(Employee.class, empId);
        project.getEmployees().add(employee);
        employee.getProjects().add(project);
    }
}

In the above example, the first line of the assignEmployeeToProject method calls the find method on the EntityManager. The call to find forces the container to check for an existing transaction; a transaction may or may not exist (for example, in the case of Stateless Session Beans in Java EE, the container guarantees that a transaction is available whenever a method on the bean is called). If the transaction doesn't exist, it will throw an exception. If it exists, the container then checks whether a Persistence Context exists. Since this is the first call to any method of the EntityManager, a persistence context is not available yet. The Entity Manager then creates one and uses it to find the project instance. In the next call to find, the Entity Manager already has an associated transaction, as well as the Persistence Context associated with it. It uses the same transaction to find the employee instance. By the end of the second line in the method, both the project and employee instances are managed. At the end of the method call, the transaction is committed and the managed instances of project and employee get persisted. Another thing to keep in mind is that when the transaction is over, the Persistence Context goes away.

Extended Scope Entity Manager

If and when you want the Persistence Context to be available beyond the scope of a method, you use an Entity Manager with extended scope. The best way to understand the extended scope Entity Manager is to take an example where a class needs to maintain some state (which is created as a result of some transactional request like myEntityManager.find("employeeId") and then using the employee) and share that state across its business methods. Because the Persistence Context is shared between method calls and is used to maintain state, it is generally not thread-safe unless you are using it inside a Stateful Session Bean, for which the container is responsible for making it thread-safe.
To reiterate: if you are using a Java EE container, Extended Scope Entity Managers are used inside a Stateful Session Bean (a class annotated with @Stateful). If you decide to use one outside of a Stateful bean, the container does not guarantee thread safety and you have to handle that yourself. The same is the case if you are using a third party container like Spring.

Let's look at an example of an Extended Scope Entity Manager in a Java EE environment using Stateful Session Beans. Our goal in the example is to create a business class that has business methods working on an instance of the LibraryUser entity. Let's call this business class LibraryUserManagementService, with the business interface UserManagementService. LibraryUserManagementService works on a LibraryUser entity instance. A library can lend multiple books to a LibraryUser. Here's an example of a Stateful Session Bean depicting the above scenario.

@Stateful
public class LibraryUserManagementService implements UserManagementService {
    @PersistenceContext(unitName="UserService")
    EntityManager em;

    LibraryUser user;

    public void init(String userId) {
        user = em.find(LibraryUser.class, userId);
    }

    public void setUserName(String name) {
        user.setName(name);
    }

    public void borrowBookFromLibrary(BookId bookId) {
        Book book = em.find(Book.class, bookId);
        user.getBooks().add(book);
        book.setLendingUser(user);
    }

    // ...

    @Remove
    public void finished() {
    }
}

In the above scenario, where we are working with a user instance, it is more natural to get an instance once, work our way through it, and only when we are done persist the user instance. But the problem is that the Entity Manager is transaction scoped. This means that init will run in its own transaction (thus having its own Persistence Context) and borrowBookFromLibrary will run in its own transaction. As a result, the user object becomes unmanaged as soon as the init method ends. To overcome exactly this sort of problem, we make use of a PersistenceContextType.EXTENDED Entity Manager. Here's the modified example with PersistenceContextType.EXTENDED that will work perfectly.

@Stateful
public class LibraryUserManagementService implements UserManagementService {
    @PersistenceContext(unitName="UserService", type=PersistenceContextType.EXTENDED)
    EntityManager em;

    LibraryUser user;

    public void init(String userId) {
        user = em.find(LibraryUser.class, userId);
    }

    public void setUserName(String name) {
        user.setName(name);
    }

    public void borrowBookFromLibrary(BookId bookId) {
        Book book = em.find(Book.class, bookId);
        user.getBooks().add(book);
        book.setLendingUser(user);
    }

    // ...

    @Remove
    public void finished() {
    }
}

In this scenario, the Persistence Context that manages the user instance is created at bean initialization time by the Java EE container and is available until the finished method is called, at which time the transaction is committed.

Application Scoped Entity Manager

An Entity Manager that is created not by the container but by the application itself is an application scoped Entity Manager. To make the definition clearer: whenever we create an Entity Manager by calling createEntityManager on an EntityManagerFactory instance, we are actually creating an application scoped Entity Manager. All Java SE based applications actually use application scoped Entity Managers. JPA gives us the Persistence class, which is ultimately used to create an application scoped Entity Manager.
Here's an example of how an application scoped EntityManager can be created:

EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPersistenceUnit");
EntityManager em = emf.createEntityManager();

Note that for creating an application scoped EntityManager, there needs to be a persistence.xml file in the META-INF folder of the application. The EntityManager can be created in two ways. One is already shown above. The other way is to pass a set of properties as a parameter to the createEntityManagerFactory method:

EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPersistenceUnit", myProperties);
EntityManager em = emf.createEntityManager();

If you are creating your own application managed Entity Manager, make sure to close it every time you are done using it. This is required because you are now managing how and when the EntityManager should be created and used.

Transaction Management

Transactions are directly related to entities. Managing transactions essentially means managing how the entity lifecycle (create, update, delete) is managed. Another key to understanding transaction management is to understand how Persistence Contexts interact with transactions. It is worth noting that, from an end user perspective, even though we work with an instance of EntityManager, the only role of the EntityManager is to determine the lifetime of the Persistence Context. It plays no role in dictating how a Persistence Context should behave. To reiterate: the Persistence Context is a managed set of entity instances. Whenever a transaction begins, a Persistence Context instance gets associated with it. And when a transaction ends (commits, for example), the Persistence Context is flushed and gets disassociated from the transaction.

There are two types of transaction management supported in JPA:

RESOURCE LOCAL transactions
JTA, or GLOBAL, transactions

Resource local transactions refer to the native transactions of the JDBC driver, whereas JTA transactions refer to the transactions of the Java EE server. A resource local transaction involves a single transactional resource, for example a JDBC connection. Whenever you need two or more resources (for example a JMS connection and a JDBC connection) within a single transaction, you use a JTA transaction. Container managed Entity Managers always use JTA transactions, as the container takes care of transaction lifecycle management and spanning the transaction across multiple transactional resources. Application managed Entity Managers can use either resource local transactions or JTA transactions.
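To make the Java SE, resource-local case concrete, here is a minimal sketch (not from the original article; the persistence unit name and the Employee entity are placeholders) of an application managed EntityManager driving its own transaction and being closed by the application:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class Main {
    public static void main(String[] args) {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("myPersistenceUnit");
        EntityManager em = emf.createEntityManager();
        try {
            // With a RESOURCE_LOCAL persistence unit, the application begins and commits
            // the transaction itself via the EntityTransaction API
            em.getTransaction().begin();
            Employee employee = new Employee();   // placeholder entity with a no-arg constructor
            em.persist(employee);
            em.getTransaction().commit();
        } finally {
            // No container will do this for us: application managed EntityManagers
            // (and the factory) must be closed explicitly
            em.close();
            emf.close();
        }
    }
}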
Normally in a JTA or global transaction, a third party transaction monitor enlists the different transactional resources within a transaction, prepares them for a commit and finally commits the transaction. This process of first preparing the resources for the transaction (by doing a dry run) and then committing (or rolling back) is called a two-phase commit.

A side note about the XA protocol: in global transactions, a transaction monitor has to constantly talk to different transactional resources. Different transactional resources can speak different languages and thus may not be understandable to the transaction monitor. XA is a protocol specification that provides a common ground for the transaction monitor to interact with different transactional resources. JTA is a global transaction monitor specification that speaks XA and thus is able to manage multiple transactional resources. Java EE compliant servers have an implementation of JTA built in. Other containers like Spring write their own or use other implementations (like Java Open Transaction Manager, JBoss TS, etc.) to support JTA or global transactions.

Persistence Context, Transactions and Entity Managers

A Persistence Context can be associated with either a single transaction or multiple transactions, and can be associated with multiple Entity Managers. A Persistence Context gets registered with a transaction so that the persistence context can be flushed when the transaction is committed. When a transaction starts, the Entity Manager looks for an active Persistence Context instance. If one is not available, it creates one and binds it to the transaction. Normally the scope of the persistence context is tightly associated with the transaction: when a transaction ends, the persistence context instance associated with that transaction also ends. But sometimes, mostly in the Java EE world, we require transaction propagation, which is the process of sharing a single persistence context between different Entity Managers within a single transaction.

Persistence Contexts can have two scopes:

Transaction Scoped Persistence Context
Extended Scoped Persistence Context

We have discussed transaction and extended scoped Entity Managers, and we also know that Entity Managers can be transaction or extended scoped. The relation is not coincidental: a transaction scoped Entity Manager creates a transaction scoped Persistence Context, and an extended scope Entity Manager uses an extended Persistence Context. The lifecycle of the extended Persistence Context is tied to the Stateful Session Bean in the Java EE environment. Let's briefly discuss these Persistence Contexts.

Transaction Scoped Persistence Context

A transaction scoped Persistence Context is created by the Entity Manager only when it is needed: a transaction scoped Entity Manager creates one only when a method on the Entity Manager is called for the first time. Thus the creation of the Persistence Context is lazy. If there already exists a propagated Persistence Context, the Entity Manager will use that Persistence Context. Understanding Persistence Context propagation is important for identifying and debugging transaction related problems in your code. Let's see an example of how a transaction scoped persistence context is propagated.

ItemDAOImpl.java:

public class ItemDAOImpl implements ItemDAO {
    @PersistenceContext(unitName="ItemService")
    EntityManager em;

    LoggingService ls;

    @TransactionAttribute()
    public void createItem(Item item) {
        em.persist(item);
        ls.log(item.getId(), "created item");
    }

    // ...
}

LoggingService.java:

public class LoggingService implements AuditService {
    @PersistenceContext(unitName="ItemService")
    EntityManager em;

    @TransactionAttribute()
    public void log(int itemId, String action) {
        // verify item id is valid
        if (em.find(Item.class, itemId) == null) {
            throw new IllegalArgumentException("Unknown item id");
        }
        LogRecord lr = new LogRecord(itemId, action);
        em.persist(lr);
    }
}

When the createItem method of ItemDAOImpl is called, the persist method is called on the entity manager instance. Let's assume that this is the first call to any of the entity manager's methods. The Entity Manager looks for a propagated persistence context with the unit name "ItemService". It doesn't find one, because this is the first call to the entity manager, so it creates a new persistence context instance and attaches it to itself. It then goes on to persist the Item object. After the item object is persisted, we call the logging service to log the information about the item that was just persisted.
Note that the LoggingService has its own EntityManager instance, and the log method carries the @TransactionAttribute annotation (which is not required in a Java EE environment if the bean is declared to be an EJB). Since TransactionAttribute has a default TransactionAttributeType of REQUIRED, the Entity Manager in the LoggingService will look for any Persistence Context that might be available from the previous transaction. It finds the one that was created inside the createItem method of ItemDAOImpl and uses it. That is why, even though the actual item has not yet been persisted to the database (because the transaction has not yet been committed), the entity manager in LoggingService is able to find it: the Persistence Context has been propagated from ItemDAOImpl to LoggingService.

Extended Persistence Context

Whereas a Transaction Scoped Persistence Context is created once for every transaction (in the non-propagated case), the Extended Persistence Context is created once and is used by all the transactions within the scope of the class that manages the lifecycle of the Extended Persistence Context. In the Java EE case, it is the Stateful Session Bean that manages the lifecycle of the extended Persistence Context, and, unlike the transaction scoped case, its creation is eager: with container managed transactions it is created as soon as a method on the class is called, and with application managed transactions it is created when userTransaction.begin() is invoked.

Summary

A lot of things have been discussed in this blog post: Entity Managers, transaction management, the Persistence Context, and how all these things interact and work with each other. We discussed the differences between container managed and application managed Entity Managers, transaction scoped and extended scoped Persistence Contexts, and transaction propagation. Most of the material for this blog post is a result of reading the wonderful book Pro JPA 2. I would recommend reading it if you want more in-depth knowledge of how JPA works.

Reference: JPA 2 | EntityManagers, Transactions and everything around it from our JCG partner Anuj Kumar at the JavaWorld Blog.

ElasticMQ 0.7.0: long polling, non-blocking implementation using Akka and Spray

ElasticMQ 0.7.0, a message queueing system with actor-based Scala and Amazon SQS-compatible interfaces, was just released. It is a major rewrite, using Akka actors at the core and Spray for the REST layer. So far only the core and SQS modules have been rewritten; journaling, the SQL backend and replication are yet to be done.

The major client-side improvements are:

long polling support, which was added to SQS some time ago
a simpler stand-alone server – just a single jar to download

With long polling, when receiving a message, you can specify an additional MessageWaitTime attribute. If there are no messages in the queue, instead of completing the request with an empty response, ElasticMQ will wait up to MessageWaitTime seconds until messages arrive. This helps to reduce the bandwidth used (no need for very frequent requests), to improve overall system performance (messages are received immediately after being sent) and to reduce SQS costs.

The stand-alone server is now a single jar. To run a local, in-memory SQS implementation (e.g. for testing an application which uses SQS), all you need to do is download the jar file and run:

java -jar elasticmq-server-0.7.0.jar

This will start a server on http://localhost:9324. Of course the interface and port are configurable; see the README for details. As before, you can also run an embedded server from any JVM-based language.
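As a quick illustration of the SQS-compatible interface and long polling from the client side, here is a rough sketch of my own (not from the original post); it assumes the AWS SDK for Java is on the classpath, and the queue name and credentials are arbitrary placeholders:

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.CreateQueueRequest;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class ElasticMqClientExample {
    public static void main(String[] args) {
        // Dummy credentials for the local test server
        AmazonSQSClient sqs = new AmazonSQSClient(new BasicAWSCredentials("x", "x"));
        // Point the standard SQS client at the locally running ElasticMQ server
        sqs.setEndpoint("http://localhost:9324");

        String queueUrl = sqs.createQueue(new CreateQueueRequest("test-queue")).getQueueUrl();
        sqs.sendMessage(new SendMessageRequest(queueUrl, "hello"));

        // Long polling: the receive call waits up to 10 seconds for a message to arrive
        ReceiveMessageRequest receive = new ReceiveMessageRequest(queueUrl).withWaitTimeSeconds(10);
        for (Message m : sqs.receiveMessage(receive).getMessages()) {
            System.out.println(m.getBody());
        }
    }
}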
{ prefixOption => // logic } } } Where action matches on the action name specified in the "Action" URL of body parameter and accepts/rejects the request, rootPath matches on an empty path and so on. Spray has a good tutorial, so I encourage you to take a look there, if you are interested. How to use the queue actors from the routes to complete HTTP requests? The nice thing about Spray is that all it does is passing a RequestContext instance to your routes, expecting nothing in return. It is up to the route to discard the request completely or complete it with a value. The request may also be completed in another thread – or, for example, when some future is completed. Which is exactly what ElasticMQ does. Here map, flatMap and for-comprehensions (which are a nicer syntax for map/flatMap) are very handy, e.g. (simplified): // Looking up the queue and deleting it are going to be called in sequence, // but asynchronously, as ? returns a Future for { queueActor <- queueManagerActor ? LookupQueue(queueName) _ <- queueActor ? DeleteMessage(DeliveryReceipt(receipt)) } { requestContext.complete(200, "message deleted") } Sometimes, when the flow is more complex, ElasticMQ uses Akka Dataflow, which requires the continuations plugin to be enabled. There’s also a similar project which uses macros, Scala Async, but it’s in early development. Using Akka Dataflow, you can write code which uses Futures as if it was normal sequential code. The CPS plugin will transform it to use callbacks where needed. An example, taken from CreateQueueDirectives: flow { val queueActorOption = (queueManagerActor ? LookupQueue(newQueueData.name)).apply() queueActorOption match { case None => { val createResult = (queueManagerActor ? CreateQueue(newQueueData)).apply() createResult match { case Left(e) => throw new SQSException("Queue already created: " + e.message) case Right(_) => newQueueData } } case Some(queueActor) => { (queueActor ? GetQueueData()).apply() } } } The important parts here are the flow block, which delimits the scope of the transformation, and the apply() calls on Futures which extract the content of the future. This looks like completely normal, sequential code, but when executed, since the first Future usage will be run asynchronously. Long polling With all of the code being asynchronous and non-blocking, implementing long polling was quite easy. Note that when receiving messages from a queue, we get a Future[List[MessageData]]. In response to completing this future, the HTTP request is also completed with the appropriate response. However this future may be completed almost immediately (as is the case normally), or after e.g. 10 seconds – there’s no changes in code needed to support that. So the only thing to do was to delay completing the future until the specified amount of time passed or new messages have arrived. The implementation is in QueueActorWaitForMessagesOps. When a request to receive messages arrives, and there’s nothing in the queue, instead of replying (that is, sending an empty list to the sender actor) immediately, we store the reference to the original request and the sender actor in a map. Using the Akka scheduler, we also schedule sending back an empty list and removal of the entry after the specified timeout. When new messages arrive, we simply take a waiting request from the map and try to complete it. Again, all synchronisation and concurrency problems are handled by Akka and the actor model.   
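Since the server speaks the SQS protocol, an ordinary SQS client can exercise the new long polling support against it. Below is a rough sketch (not from the original article), assuming the AWS SDK for Java is on the classpath, the stand-alone server is running on its default port, and the queue name is made up for the example; the exact mapping of the wait-time parameter should be checked against the ElasticMQ README.

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.sqs.AmazonSQSClient;
import com.amazonaws.services.sqs.model.CreateQueueRequest;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

public class LongPollingDemo {
    public static void main(String[] args) {
        // ElasticMQ does not validate credentials, so dummy values are fine;
        // point the client at the local stand-alone server instead of AWS.
        AmazonSQSClient sqs = new AmazonSQSClient(new BasicAWSCredentials("x", "x"));
        sqs.setEndpoint("http://localhost:9324");

        // "test-queue" is a made-up name for this sketch
        String queueUrl = sqs.createQueue(new CreateQueueRequest("test-queue")).getQueueUrl();

        // Wait up to 10 seconds for messages instead of returning an empty response immediately
        ReceiveMessageRequest receive = new ReceiveMessageRequest(queueUrl).withWaitTimeSeconds(10);
        for (Message m : sqs.receiveMessage(receive).getMessages()) {
            System.out.println("Received: " + m.getBody());
        }
    }
}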
Reference: ElasticMQ 0.7.0: long polling, non-blocking implementation using Akka and Spray from our JCG partner Adam Warski at the Blog of Adam Warski blog. ...

Getting started with PhoneGap in Eclipse for Android

Android development with PhoneGap can be done in Windows, OS X, or Linux.

Step 1: Setting up Android Tools

ADT Bundle – just a single step to set up the Android development environment.

Step 2: Downloading and installing PhoneGap

Visit the PhoneGap download page and click the orange Download link to begin the download process. Extract the archive to your local file system for use later. You are now ready to create your first PhoneGap project for Android within Eclipse.

Step 3: Creating the project in Eclipse

Follow these steps to create a new Android project in Eclipse:

1. Choose New > Android Project.
2. On the Application Info screen, type a package name for your main Android application. This should be a namespace that logically represents your package structure; for example, com.yourcompany.yourproject.
3. Select Create New Project In Workspace and click Next.
4. Configure the launch icon and background.
5. Create the Activity.

Configure the project to use PhoneGap

At this point, Eclipse has created an empty Android project. However, it has not yet been configured to use PhoneGap. You'll do that next.

1. Create an assets/www directory and a libs directory inside of the new Android project. All of the HTML and JavaScript for your PhoneGap application interface will reside within the assets/www folder.
2. To copy the required files for PhoneGap into the project, first locate the directory where you downloaded PhoneGap, and navigate to the lib/android subdirectory.
3. Copy cordova-2.7.0.js to the assets/www directory within your Android project. Copy cordova-2.7.0.jar to the libs directory within your Android project. Copy the xml directory into the res directory within your Android project.
4. Next, create a file named index.html in the assets/www folder. This file will be used as the main entry point for your PhoneGap application's interface. In index.html, add the following HTML code to act as a starting point for your user interface development:

<!DOCTYPE HTML>
<html>
<head>
<title>PhoneGap</title>
<script type="text/javascript" charset="utf-8" src="cordova-2.7.0.js"></script>
</head>
<body>
<h1>Hello PhoneGap</h1>
</body>
</html>

5. You will need to add the cordova-2.7.0.jar library to the build path for the Android project. Right-click cordova-2.7.0.jar and select Build Path > Add To Build Path.

Update the Activity class

Now you are ready to update the Android project to start using PhoneGap.

1. Open your main application Activity file. It will be located under the src folder in the project package that you specified earlier in this process. For my project, which I named HelloPhoneGap, the main Android Activity file is named MainActivity.java, and is located in the package com.maanavan.hellophonegap, which I specified in the New Android Project dialog box.
2. In the main Activity class, add an import statement for org.apache.cordova.DroidGap: import org.apache.cordova.DroidGap;
3. Change the base class from Activity to DroidGap; this is in the class definition following the word extends: public class MainActivity extends DroidGap
4. Replace the call to setContentView() with a reference to load the PhoneGap interface from the local assets/www/index.html file, which you created earlier: super.loadUrl(Config.getStartUrl());

Note: In PhoneGap projects, you can reference files located in the assets directory with a URL reference file:///android_asset, followed by the path name to the file. The file:///android_asset URI maps to the assets directory. A minimal version of the resulting Activity class is sketched below.
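As a reference point, here is a minimal sketch of how the updated Activity might end up looking after the steps above, assuming Cordova 2.7.0 and the package and class names used in this example; placing the loadUrl call in an onCreate override is one common way to wire it up.

package com.maanavan.hellophonegap;

import android.os.Bundle;
import org.apache.cordova.Config;
import org.apache.cordova.DroidGap;

public class MainActivity extends DroidGap {

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Load the PhoneGap UI from assets/www/index.html instead of calling setContentView()
        super.loadUrl(Config.getStartUrl());
    }
}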
Configure the project metadata

You have now configured the files within your Android project to use PhoneGap. The last step is to configure the project metadata to enable PhoneGap to run.

1. Begin by opening the AndroidManifest.xml file in your project root. Use the Eclipse text editor by right-clicking the AndroidManifest.xml file and selecting Open With > Text Editor.
2. In AndroidManifest.xml, add the following supports-screens XML node as a child of the root manifest node:

<supports-screens
    android:largeScreens="true"
    android:normalScreens="true"
    android:smallScreens="true"
    android:resizeable="true"
    android:anyDensity="true" />

The supports-screens XML node identifies the screen sizes that are supported by your application. You can change screen and form factor support by altering the contents of this entry. To read more about <supports-screens>, visit the Android developer topic on the supports-screens element.

3. Next, you need to configure permissions for the PhoneGap application. Copy the following <uses-permission> XML nodes and paste them as children of the root <manifest> node in the AndroidManifest.xml file:

<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.VIBRATE" />
<uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
<uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECEIVE_SMS" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.RECORD_VIDEO" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.READ_CONTACTS" />
<uses-permission android:name="android.permission.WRITE_CONTACTS" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.GET_ACCOUNTS" />
<uses-permission android:name="android.permission.BROADCAST_STICKY" />

The <uses-permission> XML values identify the features that you want to be enabled for your application. The lines above enable all permissions required for all features of PhoneGap to function. After you have built your application, you may want to remove any permissions that you are not actually using; this will remove security warnings during application installation. To read more about Android permissions and the <uses-permission> element, visit the Android developer topic on the uses-permission element.

4. After you have configured application permissions, you need to modify the existing <activity> node. Locate the <activity> node, which is a child of the <application> XML node.
5. Add the following attribute to the <activity> node:

android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale"

The complete AndroidManifest.xml should now look similar to this:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.maanavan.hellophonegap"
    android:versionCode="1"
    android:versionName="1.0" >

    <supports-screens
        android:largeScreens="true"
        android:normalScreens="true"
        android:smallScreens="true"
        android:xlargeScreens="true"
        android:resizeable="true"
        android:anyDensity="true" />

    <uses-sdk
        android:minSdkVersion="8"
        android:targetSdkVersion="17" />

    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.VIBRATE" />
    <uses-permission android:name="android.permission.ACCESS_COARSE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />
    <uses-permission android:name="android.permission.ACCESS_LOCATION_EXTRA_COMMANDS" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.RECEIVE_SMS" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.RECORD_VIDEO" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
    <uses-permission android:name="android.permission.READ_CONTACTS" />
    <uses-permission android:name="android.permission.WRITE_CONTACTS" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    <uses-permission android:name="android.permission.GET_ACCOUNTS" />
    <uses-permission android:name="android.permission.BROADCAST_STICKY" />

    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name="com.maanavan.hellophonegap.MainActivity"
            android:label="@string/app_name"
            android:configChanges="orientation|keyboardHidden|keyboard|screenSize|locale" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>

At this point, your project is configured to run as a PhoneGap project for Android. If you run into any issues, verify your configuration against the example provided at the PhoneGap getting started site for Android.

Running the application

To launch your PhoneGap application in the Android emulator, right-click the project root and select Run As > Android Application. If you don't have any Android virtual devices set up, you will be prompted to configure one; see the Android developer documentation to learn more about configuring Android emulator virtual devices. Eclipse will automatically start an Android emulator instance (if one is not already running), deploy your application to the emulator, and launch the application.

Reference: Getting started with PhoneGap in Eclipse for Android from our JCG partner Sathish Kumar at the Maanavan blog....

Creating Internal DSLs in Java, Java 8- Adopting Martin Fowler’s approach

Currently I am reading this wonderful book on DSLs, Domain Specific Languages by Martin Fowler. The buzz around DSLs, around the languages which support the creation of DSLs with ease, and around the use of DSLs made me curious to learn about this concept. And the experience with the book so far has been impressive. The definition of a DSL as stated by Martin Fowler in his book:

Domain-specific language (noun): a computer programming language of limited expressiveness focused on a particular domain.

DSLs are nothing new; they have been around for quite a long time. People used XML as a form of DSL. Using XML as a DSL is easy because we have XSD for validation of the DSL, we have parsers for parsing the DSL and we have XSLT for transforming the DSL into other languages. And most languages provide very good support for parsing XML and populating their domain model objects. The emergence of languages like Ruby, Groovy and others has increased the adoption of DSLs. For example, Rails, a web framework written in Ruby, uses DSLs extensively. In his book Martin Fowler classifies DSLs as Internal, External and Language Workbenches. As I read through the Internal DSL concepts, I played around a bit with my own simple DSL using Java as the host language. Internal DSLs reside in the host language and are bound by the syntactic capabilities of the host language. Using Java as the host language didn't give me a really clean DSL, but I made an effort to get it closer to a form where I could comprehend the DSL comfortably.

I was trying to create a DSL for creating a Graph. As far as I am aware, the usual ways to input and represent a graph are an adjacency list and an adjacency matrix. I have always found these difficult to use, especially in languages like Java which don't have matrices as first-class citizens. And here I am trying to create an Inner DSL for populating a Graph in Java. In his book, Martin Fowler stresses the need to keep the Semantic Model separate from the DSL and to introduce an intermediate expression builder which populates the Semantic Model from the DSL. By maintaining this separation I was able to achieve three different forms of the DSL by writing different DSL syntaxes and expression builders, all the while using the same semantic model.

Understanding the Semantic Model

The Semantic Model in this case is the Graph class, which contains a list of Edge instances, each Edge holding a from Vertex, a to Vertex and a weight.
Let's look at the code for the same:

Graph.java

import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class Graph {

    private List<Edge> edges;
    private Set<Vertex> vertices;

    public Graph() {
        edges = new ArrayList<>();
        vertices = new TreeSet<>();
    }

    public void addEdge(Edge edge) {
        getEdges().add(edge);
    }

    public void addVertice(Vertex v) {
        getVertices().add(v);
    }

    public List<Edge> getEdges() {
        return edges;
    }

    public Set<Vertex> getVertices() {
        return vertices;
    }

    public static void printGraph(Graph g) {
        System.out.println("Vertices...");
        for (Vertex v : g.getVertices()) {
            System.out.print(v.getLabel() + " ");
        }
        System.out.println("");
        System.out.println("Edges...");
        for (Edge e : g.getEdges()) {
            System.out.println(e);
        }
    }
}

Edge.java

public class Edge {

    private Vertex fromVertex;
    private Vertex toVertex;
    private Double weight;

    public Edge() {
    }

    public Edge(Vertex fromVertex, Vertex toVertex, Double weight) {
        this.fromVertex = fromVertex;
        this.toVertex = toVertex;
        this.weight = weight;
    }

    @Override
    public String toString() {
        return fromVertex.getLabel() + " to " + toVertex.getLabel() + " with weight " + getWeight();
    }

    public Vertex getFromVertex() {
        return fromVertex;
    }

    public void setFromVertex(Vertex fromVertex) {
        this.fromVertex = fromVertex;
    }

    public Vertex getToVertex() {
        return toVertex;
    }

    public void setToVertex(Vertex toVertex) {
        this.toVertex = toVertex;
    }

    public Double getWeight() {
        return weight;
    }

    public void setWeight(Double weight) {
        this.weight = weight;
    }
}

Vertex.java

public class Vertex implements Comparable<Vertex> {

    private String label;

    public Vertex(String label) {
        this.label = label.toUpperCase();
    }

    @Override
    public int compareTo(Vertex o) {
        return (this.getLabel().compareTo(o.getLabel()));
    }

    public String getLabel() {
        return label;
    }

    public void setLabel(String label) {
        this.label = label;
    }
}

Now that we have the Semantic Model in place, let's build the DSLs. You should notice that I am not going to change my Semantic Model. It's not a hard and fast rule that the semantic model shouldn't change; it can evolve by adding new APIs for fetching or modifying the data. But binding the Semantic Model tightly to the DSL would not be a good approach. Keeping them separate helps in testing the Semantic Model and the DSL independently. The different approaches for creating Internal DSLs stated by Martin Fowler are:

1. Method Chaining
2. Functional Sequence
3. Nested Functions
4. Lambda Expressions/Closures

I have illustrated three of these in this post, all except Functional Sequence. But I have used the Functional Sequence approach within the Closures/Lambda Expressions example.

Inner DSL by Method Chaining

I am envisaging my DSL to be something like:

Graph()
    .edge()
        .from("a")
        .to("b")
        .weight(12.3)
    .edge()
        .from("b")
        .to("c")
        .weight(10.5)

To enable the creation of such a DSL we have to write an expression builder which allows population of the semantic model and provides a fluent interface enabling creation of the DSL. I have created two expression builders: one to build the complete Graph and the other to build individual edges. All the while the Graph/Edge are being built, these expression builders hold the intermediate Graph/Edge objects. The above syntax can be achieved by creating static methods in these expression builders and then using static imports to use them in the DSL. The Graph() call starts populating the Graph model, while edge() and the series of methods after it, namely from(), to() and weight(), populate the Edge model.
The edge() also populates the Graph model. Let's look at the GraphBuilder, which is the expression builder for populating the Graph model.

GraphBuilder.java

public class GraphBuilder {

    private Graph graph;

    public GraphBuilder() {
        graph = new Graph();
    }

    //Start the Graph DSL with this method.
    public static GraphBuilder Graph() {
        return new GraphBuilder();
    }

    //Start the edge building with this method.
    public EdgeBuilder edge() {
        EdgeBuilder builder = new EdgeBuilder(this);
        getGraph().addEdge(builder.edge);
        return builder;
    }

    public Graph getGraph() {
        return graph;
    }

    public void printGraph() {
        Graph.printGraph(graph);
    }
}

And the EdgeBuilder, which is the expression builder for populating the Edge model.

EdgeBuilder.java

public class EdgeBuilder {

    Edge edge;

    //Keep a back reference to the Graph Builder.
    GraphBuilder gBuilder;

    public EdgeBuilder(GraphBuilder gBuilder) {
        this.gBuilder = gBuilder;
        edge = new Edge();
    }

    public EdgeBuilder from(String lbl) {
        Vertex v = new Vertex(lbl);
        edge.setFromVertex(v);
        gBuilder.getGraph().addVertice(v);
        return this;
    }

    public EdgeBuilder to(String lbl) {
        Vertex v = new Vertex(lbl);
        edge.setToVertex(v);
        gBuilder.getGraph().addVertice(v);
        return this;
    }

    public GraphBuilder weight(Double d) {
        edge.setWeight(d);
        return gBuilder;
    }
}

Let's try and experiment with the DSL:

public class GraphDslSample {

    public static void main(String[] args) {

        Graph()
            .edge()
                .from("a")
                .to("b")
                .weight(40.0)
            .edge()
                .from("b")
                .to("c")
                .weight(20.0)
            .edge()
                .from("d")
                .to("e")
                .weight(50.5)
            .printGraph();

        Graph()
            .edge()
                .from("w")
                .to("y")
                .weight(23.0)
            .edge()
                .from("d")
                .to("e")
                .weight(34.5)
            .edge()
                .from("e")
                .to("y")
                .weight(50.5)
            .printGraph();
    }
}

And the output would be:

Vertices...
A B C D E
Edges...
A to B with weight 40.0
B to C with weight 20.0
D to E with weight 50.5
Vertices...
D E W Y
Edges...
W to Y with weight 23.0
D to E with weight 34.5
E to Y with weight 50.5

Do you not find this approach easier to read and understand than the adjacency list/adjacency matrix approach? This Method Chaining is similar to the Train Wreck pattern which I wrote about some time back.

Inner DSL by Nested Functions

In the Nested Functions approach the style of the DSL is different. In this approach I nest functions within functions to populate my semantic model. Something like:

Graph(
    edge(from("a"), to("b"), weight(12.3)),
    edge(from("b"), to("c"), weight(10.5))
);

The advantage with this approach is that it is naturally hierarchical, unlike method chaining where I had to format the code in a particular way. And this approach doesn't maintain any intermediate state within the expression builders, i.e. the expression builders don't hold the Graph and Edge objects while the DSL is being parsed/executed. The semantic model remains the same as discussed above. Let's look at the expression builders for this DSL.

NestedGraphBuilder.java

//Populates the Graph model.
public class NestedGraphBuilder {

    public static Graph Graph(Edge... edges) {
        Graph g = new Graph();
        for (Edge e : edges) {
            g.addEdge(e);
            g.addVertice(e.getFromVertex());
            g.addVertice(e.getToVertex());
        }
        return g;
    }
}

NestedEdgeBuilder.java

//Populates the Edge model.
public class NestedEdgeBuilder {

    public static Edge edge(Vertex from, Vertex to, Double weight) {
        return new Edge(from, to, weight);
    }

    public static Double weight(Double value) {
        return value;
    }
}

NestedVertexBuilder.java

//Populates the Vertex model.
public class NestedVertexBuilder {

    public static Vertex from(String lbl) {
        return new Vertex(lbl);
    }

    public static Vertex to(String lbl) {
        return new Vertex(lbl);
    }
}

As you may have observed, all the methods in the expression builders defined above are static. We use static imports in our code to create the DSL we set out to build. Note: I have used different packages for the expression builders, the semantic model and the DSL, so please update the imports according to the package names you have used.

//Update this according to the package name of your builder
import static nestedfunction.NestedEdgeBuilder.*;
import static nestedfunction.NestedGraphBuilder.*;
import static nestedfunction.NestedVertexBuilder.*;

/**
 * @author msanaull
 */
public class NestedGraphDsl {

    public static void main(String[] args) {
        Graph.printGraph(
            Graph(
                edge(from("a"), to("b"), weight(23.4)),
                edge(from("b"), to("c"), weight(56.7)),
                edge(from("d"), to("e"), weight(10.4)),
                edge(from("e"), to("a"), weight(45.9))
            )
        );
    }
}

And the output for this would be:

Vertices...
A B C D E
Edges...
A to B with weight 23.4
B to C with weight 56.7
D to E with weight 10.4
E to A with weight 45.9

Now comes the interesting part: how can we leverage the upcoming lambda expressions support in our DSL?

Inner DSL using Lambda Expressions

If you are wondering what lambda expressions are doing in Java, please spend some time reading up on them before proceeding further. In this example as well we will stick with the same semantic model described earlier. This DSL leverages Functional Sequence along with the lambda expression support. Let's see what we want our final DSL to look like:

Graph(g -> {
    g.edge( e -> {
        e.from("a");
        e.to("b");
        e.weight(12.3);
    });

    g.edge( e -> {
        e.from("b");
        e.to("c");
        e.weight(10.5);
    });
});

Yeah, I know the above DSL is overloaded with punctuation, but we have to live with it. If you don't like it, then maybe pick a different language. In this approach our expression builders accept a lambda expression/closure/block and then populate the semantic model by executing it. The expression builders in this implementation maintain the intermediate state of the Graph and Edge objects in the same way we did in the Method Chaining implementation. Let's look at our expression builders:

GraphBuilder.java

//Populates the Graph model.
public class GraphBuilder {

    Graph g;

    public GraphBuilder() {
        g = new Graph();
    }

    public static Graph Graph(Consumer<GraphBuilder> gConsumer) {
        GraphBuilder gBuilder = new GraphBuilder();
        gConsumer.accept(gBuilder);
        return gBuilder.g;
    }

    public void edge(Consumer<EdgeBuilder> eConsumer) {
        EdgeBuilder eBuilder = new EdgeBuilder();
        eConsumer.accept(eBuilder);
        Edge e = eBuilder.edge();
        g.addEdge(e);
        g.addVertice(e.getFromVertex());
        g.addVertice(e.getToVertex());
    }
}

EdgeBuilder.java

//Populates the Edge model.
public class EdgeBuilder {

    private Edge e;

    public EdgeBuilder() {
        e = new Edge();
    }

    public Edge edge() {
        return e;
    }

    public void from(String lbl) {
        e.setFromVertex(new Vertex(lbl));
    }

    public void to(String lbl) {
        e.setToVertex(new Vertex(lbl));
    }

    public void weight(Double w) {
        e.setWeight(w);
    }
}

In the GraphBuilder, note the two methods that take a Consumer parameter. They make use of a functional interface, Consumer, to be introduced in Java 8.
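For context, a rough sketch of what that Consumer interface looks like (in essence a single-method interface; the final Java 8 version also carries default methods, which are omitted here):

@FunctionalInterface
public interface Consumer<T> {
    // Performs this operation on the given argument; a lambda such as
    // g -> { ... } supplies the implementation of this single method.
    void accept(T t);
}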
Now let's make use of the above expression builders to create our DSL:

//Update the package names with the ones you have given
import graph.Graph;
import static builder.GraphBuilder.*;

public class LambdaDslDemo {

    public static void main(String[] args) {
        Graph g1 = Graph( g -> {
            g.edge( e -> {
                e.from("a");
                e.to("b");
                e.weight(12.4);
            });

            g.edge( e -> {
                e.from("c");
                e.to("d");
                e.weight(13.4);
            });
        });

        Graph.printGraph(g1);
    }
}

And the output is:

Vertices...
A B C D
Edges...
A to B with weight 12.4
C to D with weight 13.4

With this I end this code-heavy post. Let me know if you want me to split this into three posts, one for each DSL implementation. I kept it all in one place so that it would help us compare the three different approaches.

To summarise: in this post I talked about DSLs and Internal DSLs as described in the book Domain Specific Languages by Martin Fowler, and provided an implementation for each of the three approaches for implementing Internal DSLs:

1. Method Chaining
2. Nested Functions
3. Lambda expressions with Functional Sequence

Reference: Creating Internal DSLs in Java, Java 8- Adopting Martin Fowler's approach from our JCG partner Mohamed Sanaulla at the Experiences Unlimited blog. ...