
What's New Here?


JMS and Spring: Small Things Sometimes Matter

JmsTemplate and DefaultMessageListenerContainer are Spring helpers for accessing JMS-compatible MOM. Their main goal is to form a layer above the JMS API, dealing with infrastructure such as transaction management/message acknowledgement and hiding some of the repetitive and clumsy parts of the JMS API (hang in there: JMS 2.0 is on its way!). To use either one of these helpers you have to supply them with (at least) a JMS ConnectionFactory and a valid JMS Destination. When running your app on an application server, the ConnectionFactory will most likely be defined using the JEE architecture. This boils down to adding the ConnectionFactory and its configuration parameters, allowing them to be published in the directory service under a given alias (e.g. jms/myConnectionFactory). Within your app you might for example use the "jndi-lookup" out of the JEE namespace, or the JndiTemplate/JndiObjectFactoryBean beans if more configuration is required, to look up the ConnectionFactory and pass it along to your JmsTemplate and/or DefaultMessageListenerContainer. The latter, the JMS destination, identifies a JMS Queue or Topic to which you want to produce messages or from which you want to consume messages. However, both JmsTemplate and DefaultMessageListenerContainer have two different properties for injecting the destination: a method taking the destination as a String, and one taking it as a JMS Destination type. This functionality is not something invented by Spring; the JMS specification mentions both approaches:

4.4.4 Creating Destination Objects
Most clients will use Destinations that are JMS administered objects that they have looked up via JNDI. This is the most portable approach. Some specialized clients may need to create Destinations by dynamically manufacturing one using a provider-specific destination name. Sessions provide a JMS provider-specific method for doing this.
If you pass along a destination as a String, the helpers will hide the extra steps required to map it to a valid JMS Destination. In the end, a createConsumer on a JMS Session expects you to pass along a Destination object to indicate where to consume messages from before returning a MessageConsumer. When destinations are configured as Strings, the Destination is looked up by Spring using the JMS API itself. By default JmsTemplate and DefaultMessageListenerContainer have a reference to a DestinationResolver, which is DynamicDestinationResolver by default (more on that later). The code below is an extract from DynamicDestinationResolver; the highlighted lines indicate the usage of the JMS API to transform the String to a Destination (in this example a Queue):

protected Queue resolveQueue(Session session, String queueName) throws JMSException {
    if (session instanceof QueueSession) {
        // Cast to QueueSession: will work on both JMS 1.1 and 1.0.2
        return ((QueueSession) session).createQueue(queueName);
    } else {
        // Fall back to generic JMS Session: will only work on JMS 1.1
        return session.createQueue(queueName);
    }
}

The other way mentioned by the spec (the JNDI approach) is to configure Destinations as administered objects on your application server. This follows the same principle as with the ConnectionFactory: the destination is published in the application server's directory service and can be looked up by its JNDI name (e.g. jms/myQueue). Again, you can look up the JMS Destination in your app and pass it along to JmsTemplate and/or DefaultMessageListenerContainer, making use of the property taking the JMS Destination as parameter. Now, why do we have those two options? I always assumed that it was a matter of choice between convenience (the dynamic approach) and environment transparency/configurability (the JNDI approach). For example: in some situations the name of the physical destination might be different depending on the environment where your application runs.
If you configure your physical destination names inside your application you obviously lose this benefit, as they cannot be altered without rebuilding your application. If you configure them as administered objects on the other hand, it is merely a simple change in the application server configuration to alter the physical destination name. Remember: having physical Destination names configurable can make sense. Besides the Destination type, applications dealing with messaging are agnostic to its details. A messaging destination has no functional contract, and none of its properties (physical destination, persistence, and so forth) are of importance for the code you write. The actual contract is inside the messages themselves (the headers and the body). A database table, on the other hand, is an example of something that does expose a contract by itself and is tightly coupled with your code. In most cases renaming a database table does impact your code, hence making something like this configurable normally has no added value compared to a messaging Destination. Recently I discovered that my understanding of this is not the entire truth. The specification (from "4.4.4 Creating Destination Objects", as quoted some paragraphs above) already gives a hint: "Most clients will use Destinations that are JMS administered objects that they have looked up via JNDI. This is the most portable approach." Basically this tells us that the other approach (the dynamic approach where we work with a destination as String) is "the least portable" way. This was never really clear to me, as each provider is required to implement both methods; however, "portable" has to be looked at in a broader context. When configuring Destinations as Strings, Spring will by default transform them to JMS Destinations whenever it creates a new JMS Session.
When using the DefaultMessageListenerContainer for consuming messages, each message you process occurs in a transaction, and by default the JMS session and consumer are not pooled, hence they are re-created for each receive operation. This results in transforming the String to a JMS Destination each time the container checks for new messages and/or receives a new message. The "non portable" aspect comes into play as it also means that the details and costs of this transformation depend entirely on your MOM's driver/implementation. In our case we experienced this with Oracle AQ as MOM provider. Each time a destination transformation happens, the driver executes a specific query:

select /*+ FIRST_ROWS */ t1.owner, t1.queue_table, t1.queue_type, t1.max_retries, t1.retry_delay, t1.retention, t1.user_comment, t2.type, t2.object_type from all_queues t1, all_queue_tables t2 where t1.owner=:1 and t2.owner=:3 and t1.queue_table=t2.queue_table

The forum entry can be found here. Although this query was improved in the latest drivers (as mentioned by the bug report), it was still causing significant overhead on the database. The two options to solve this:

1. Do what the specification advises you to do: configure destinations as resources on the application server. The application server will hand out the same instance each time, so they are already cached there. Even though you will receive the same instance for every lookup, when using JndiTemplate (or JndiDestinationResolver, see below) it will also be cached on the application side, so even the lookup itself will only happen once.

2. Enable session/consumer caching on the DefaultMessageListenerContainer. When the caching is set to consumer, it indirectly also re-uses the Destination, as the consumer holds a reference to the Destination.
This pooling is functionality added by Spring, and the JavaDoc says it is safe when using resource-local transactions and it "should" be safe when using XA transactions (except when running on JBoss 4). The first option is probably the best. However, in our case all destinations are already defined inside the application (and there are plenty of them), and there is no need for having them configurable. Refactoring these merely for this technical reason is going to generate a lot of overhead with no other advantages. The second solution is the least preferred one, as it would imply extra testing and investigation to make sure nothing breaks. Also, this seems to be doing more than needed, as there is no indication in our case that creating a Session or Consumer has a measurable impact on performance. According to the JMS specification:

4.4 Session
A JMS Session is a single-threaded context for producing and consuming messages. Although it may allocate provider resources outside the Java virtual machine, it is considered a lightweight JMS object.

By the way, this is also valid for MessageConsumers/Producers. Both of them are bound to a session, so if a Session is lightweight to open, these objects will be as well. There is however a third solution: a custom DestinationResolver. The DestinationResolver is the abstraction that takes care of going from a String to a Destination. The default (DynamicDestinationResolver) uses createQueue/createTopic on the JMS Session to perform the transformation (as shown in the extract above), but it does not cache the resulting Destination. However, if your Destinations are configured on the application server as resources, you can (besides using Spring's JNDI support and injecting the Destination directly) also use JndiDestinationResolver. This resolver will treat the supplied String as a JNDI location (instead of a physical destination name) and perform the lookup for you. By default it will cache the resulting Destination, avoiding any subsequent JNDI lookups.
Now, one can also configure JndiDestinationResolver as a caching decorator for the DynamicDestinationResolver. If you set fallback to true, it will first try to use the String as a location to look up from JNDI; if that fails, it will pass our String along to DynamicDestinationResolver, using the JMS API to transform our String to a Destination. The resulting Destination is in both cases cached, and thus a subsequent request for the same Destination will be served from the cache. With this resolver there is an out-of-the-box solution without having to write any code:

<bean id="cachingDestinationResolver"
      class="org.springframework.jms.support.destination.JndiDestinationResolver">
    <property name="cache" value="true"/>
    <property name="fallbackToDynamicDestination" value="true"/>
</bean>

<bean id="infra.abstractMessageListenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer" abstract="true">
    <property name="destinationResolver" ref="cachingDestinationResolver"/>
    ...
</bean>

The JndiDestinationResolver is thread safe, internally using a ConcurrentHashMap to store the bindings. A JMS Destination is itself thread safe according to the JMS 1.1 specification (2.8 Multithreading) and can safely be cached. This is again a nice example of how simple things can sometimes have an important impact. This time the solution was straightforward thanks to Spring. It would however have been a better idea to make the caching behaviour the default, as this would decouple it from any provider-specific quirks in looking up the destination. The reason this isn't the default is probably that the DefaultMessageListenerContainer supports changing the destination on the fly (using JMX for example): "Note: The destination may be replaced at runtime, with the listener container picking up the new destination immediately (works e.g. with DefaultMessageListenerContainer, as long as the cache level is less than CACHE_CONSUMER). However, this is considered advanced usage; use it with care!"
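The caching idea behind this resolver can be sketched framework-free. The class and method names below are my own invention for illustration (not Spring's API): a Function<String, String> stands in for the "String to Destination" transformation, and a ConcurrentHashMap ensures the delegate is consulted only once per destination name, exactly the behaviour that avoids the repeated Oracle AQ query described above.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Hypothetical, framework-free sketch of a caching destination resolver.
class CachingResolverSketch {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Function<String, String> delegate;
    final AtomicInteger delegateCalls = new AtomicInteger();

    CachingResolverSketch(Function<String, String> delegate) {
        this.delegate = delegate;
    }

    String resolve(String destinationName) {
        // computeIfAbsent: only the first request per name hits the delegate
        return cache.computeIfAbsent(destinationName, name -> {
            delegateCalls.incrementAndGet();
            return delegate.apply(name);
        });
    }

    public static void main(String[] args) {
        CachingResolverSketch resolver =
                new CachingResolverSketch(name -> "Destination[" + name + "]");
        resolver.resolve("jms/myQueue");
        resolver.resolve("jms/myQueue"); // served from the cache
        System.out.println(resolver.delegateCalls.get()); // prints 1
    }
}
```

The real JndiDestinationResolver adds JNDI specifics and the dynamic fallback on top of this same pattern.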
Reference: JMS and Spring: Small Things Sometimes Matter from our JCG partner Koen Serneels at the Koen Serneels – Technology blog.

JPA: Determining the Owning Side of a Relationship

When using the Java Persistence API (JPA) it is often necessary to create relationships between two entities. These relationships are defined within the data model (think database) through the use of foreign keys, and within our object model (think Java) using annotations to indicate associations. When defining relationships or associations within the object model, a common task is identifying the owning side of the relationship. Identifying the owning entity within a relationship is important because the owning side is most often, if not always, where the @JoinColumn annotation must be specified. To illustrate the concept of the owning side of an entity we will use a data model to support this discussion. Let's analyze this simple model, which depicts a relationship between two tables, POST and SERIES. In this relationship, the POST table stores a blog post, which can be part of a series of posts represented by the SERIES table. In the data model, the SERIES_ID foreign key on the POST table associates the POST with its respective SERIES. This foreign key indicates which entity owns the relationship. Let's add these entities to the object model and establish a simple unidirectional relationship between them. First, the Series entity:

@Entity
@Table(name="SERIES")
public class Series {

    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    @Column(name="SERIES_ID")
    private Integer seriesId;

    @Column(name="TITLE")
    private String title;

    //Accessors...
}

And the Post entity:

@Entity
@Table(name="POST")
public class Post {

    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    @Column(name="POST_ID")
    private Integer postId;

    @Column(name="TITLE")
    private String title;

    @Column(name="POST_DATE")
    private Date postDate;

    @ManyToOne
    @JoinColumn(name="SERIES_ID")
    private Series series;

    //Accessors...
}

In the Post entity the @JoinColumn annotation is specified above the field series to denote the foreign key used to identify a Post's respective Series.
The @JoinColumn annotation was placed on the Post entity because it is the owning entity in the relationship. The owning side was determined by referencing both entities in the data model and identifying the entity whose table contains the foreign key. If the relationship between the Post and Series entities were required to be bidirectional, meaning the Post entities should be accessible from the Series, the inverse side of the relationship (Series) must be annotated with @OneToMany, with a mappedBy element defined. The mappedBy element should point to the field on the owning side of the relationship (Post) that specifies the @JoinColumn used to associate the entities. The mapping for establishing a bidirectional relationship is highlighted in the following refactoring of the Series entity:

@Entity
@Table(name="SERIES")
public class Series {

    @Id
    @GeneratedValue(strategy=GenerationType.AUTO)
    @Column(name="SERIES_ID")
    private Integer seriesId;

    @Column(name="TITLE")
    private String title;

    @OneToMany(mappedBy="series")
    private List<Post> posts = new ArrayList<Post>();

    //Accessors...
}

In summary, when determining the owning entity within a relationship defined within a JPA persistence unit, it is important to consult the data model to find which entity's respective table contains the foreign key.

Reference: JPA: Determining the Owning Side of a Relationship from our JCG partner Kevin Bowersox at the ToThought blog.
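One practical consequence of the owning-side rule: JPA writes the foreign key based only on Post.series, so when working bidirectionally you must keep the in-memory inverse collection in sync yourself. A common convention (my own illustration, not from the article; JPA annotations are omitted so the sketch is self-contained) is a helper method that sets both references at once:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java sketch of the Post/Series pair; annotations omitted.
class Post {
    private Series series;
    Series getSeries() { return series; }
    void setSeries(Series series) { this.series = series; }
}

class Series {
    private final List<Post> posts = new ArrayList<>();
    List<Post> getPosts() { return posts; }

    // Hypothetical convenience method: keeps both sides of the
    // bidirectional relationship consistent in memory.
    void addPost(Post post) {
        posts.add(post);      // inverse side (ignored when writing the FK)
        post.setSeries(this); // owning side (drives the SERIES_ID FK)
    }
}

class OwningSideDemo {
    public static void main(String[] args) {
        Series series = new Series();
        Post post = new Post();
        series.addPost(post);
        System.out.println(post.getSeries() == series); // prints true
        System.out.println(series.getPosts().size());   // prints 1
    }
}
```

If you only added the Post to series.getPosts() without setting post.setSeries(...), the SERIES_ID column would stay null at flush time, because only the owning side is consulted.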

How to Install Gradle

Gradle is a dependency management / build tool that combines the best of Maven and Ant, making it an extremely powerful and customizable tool. It also uses a sleek Groovy DSL instead of the XML approach of Maven and Ant, and is my personal tool-of-choice when I start a new project. Here's how to install it. I'll write a future post to get us started with some Gradle projects.

1. Install Java

First you need to have the Java JDK (Java Development Kit) installed; having the JRE (Java Runtime Environment) is not enough. To check if you have the JDK installed, open a command prompt or terminal and type javac -version. If you have a JDK installed, you will see your javac version as output, e.g. javac 1.7.0_01. If you get an error that javac is not a recognized command, download and install the Java JDK.

2. Download Gradle

Download from the Gradle site.

3. Unpack and Set System Variables

Windows: Unzip the Gradle download to the folder to which you would like to install Gradle, e.g. "C:\Program Files". The subdirectory gradle-x.x will be created from the archive, where x.x is the version. Add the location of your Gradle "bin" folder to your path: open the system properties (WinKey + Pause), select the "Advanced" tab and the "Environment Variables" button, then add "C:\Program Files\gradle-x.x\bin" (or wherever you unzipped Gradle) to the end of your "Path" variable under System Properties. Be sure to omit any quotation marks around the path, even if it contains spaces, and make sure it is separated from previous PATH entries with a semicolon ";". In the same dialog, make sure that JAVA_HOME exists in your user variables or in the system variables and is set to the location of your JDK, e.g. C:\Program Files\Java\jdk1.7.0_06, and that %JAVA_HOME%\bin is in your Path environment variable. Open a new command prompt (type cmd in the Start menu) and run gradle --version to verify that it is correctly installed.

Mac/Linux: Extract the distribution archive, i.e.
gradle-x.x-bin.tar.gz, to the directory in which you wish to install Gradle. These instructions assume you chose /usr/local/gradle. The subdirectory gradle-x.x will be created from the archive. In a command terminal, add Gradle to your PATH variable: export PATH=/usr/local/gradle/gradle-x.x/bin:$PATH. Make sure that JAVA_HOME is set to the location of your JDK, e.g. export JAVA_HOME=/usr/java/jdk1.7.0_06, and that $JAVA_HOME/bin is in your PATH environment variable. Run gradle --version to verify that it is correctly installed. You now have Gradle set up! Stay tuned for another post on how to build a simple Gradle project.

More reading:
Gradle – Installation Instructions
Gradle – Users Guide

Reference: How to Install Gradle from our JCG partner Steve Hanson at the CodeTutr blog.
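The Mac/Linux steps above can be collected into a short shell session. The install directory and the gradle-x.x placeholder follow the article's assumptions; substitute the directory and version you actually used.

```shell
# Assumed install location from the article; adjust for your machine
GRADLE_HOME=/usr/local/gradle/gradle-x.x
export PATH="$GRADLE_HOME/bin:$PATH"

# Sanity check: the Gradle bin directory is now on the PATH
echo "$PATH" | grep -q "gradle-x.x/bin" && echo "gradle is on the PATH"
```

After this, gradle --version (and javac -version for the JDK) should both succeed in the same terminal.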

Grails Design Best Practices

Grails is designed to be an interactive, agile-based rapid development framework which advocates convention over configuration. This article explains usage and best practices around Grails.

Domain-Driven Design

Always use domain-driven design: First create your basic domain model classes, and then use scaffolding to get them online. This will help you stay motivated and understand your domain better.
Use a test-driven approach: Domain model test cases provide a great way of experimenting with and testing out your validations.
Validate: Use validators to keep your domain objects in order. Custom validators aren't hard to implement, so don't be afraid to roll your own if the situation demands. It's better to encapsulate the validation logic in your domain class than to use dodgy controller hacks.

Follow the Grails Conventions

Follow Grails conventions: Grails is convention-driven development, and the conventions need to be followed: views should just be views, and controllers should just be controllers.
Services as transactional: Keep the transactional parts in services.
Functional logic: Functional logic is implemented in services and model objects. This means the request is passed to a controller, and the controller in turn invokes the service.

Dependency Injection

Grails is based on dependency injection by convention, so we need to put components in the appropriate grails-app/ folder and use the proper naming conventions.
Location convention: Services go into the services folder. Controllers go into the controllers folder.
Naming convention: If you have an Xxx model object, and you need a controller and service for it, then your controller and service should be named XxxController and XxxService. Grails will autowire them based on the naming convention.

Presentation/View

Use pagination: Paginating large datasets creates a much better user experience and improves overall presentation performance.
Use custom tags: Develop reusable custom tags for your UI.
They will improve overall development productivity and ease maintenance.
Use layouts smartly: Handle basic flash message display in your layout rather than repeating it for each view.
Pick the right JavaScript library: Choose an Ajax library that makes sense for the rest of your app. It takes time to download libraries, so minimize the number of libraries in play.
Use convention-based layouts: Favor convention-based layouts over explicit Meta tags. Often a specific layout for one particular action can make things much more maintainable than doing meta-magic branching. Take advantage of Meta tag styles when you need to style a subset of pages for a controller, but use convention layouts for the rest.
Externalize the config file: Always include an externalized config file, so that any configuration that needs to be overridden in production can be changed without even generating a new war file.
Prefer dynamic scaffolding: Always prefer dynamic scaffolding over static scaffolding.
Use DataSource config: Use the DataSource.groovy property file to configure the datasource.
Re-use custom validators: Place all custom validators in a shared validators file, to support re-use of these constraints amongst other domains.
Avoid logic in the web layer: Avoid putting lots of logic in the web layer, to keep a clear separation between the presentation layer and the business layer.
Use BuildConfig.groovy: To install any plugin in your application, declare it in BuildConfig.groovy rather than using the install-plugin command.
Keep views simple: Make views as simple as possible and use the service layer for business logic.
Re-use templates and custom taglibs: Split shared content out into templates rendered with g:render, and build custom taglibs for common UI elements.
Use layouts: Use layouts for a consistent look across the UI.
DRY: Keep your views DRY ("Don't repeat yourself").
Split repeated content into templates.

Controller

Keep controllers thin: Don't perform business logic, web service calls, DB operations or transactions within controllers. Keep the controller as thin as possible: the purpose of a controller is to accept the incoming request, invoke a domain class or a service for a result, and give the result back to the requester.
Use command objects: Take advantage of command objects for form submissions. Don't just use them for validation; they can also be handy for encapsulating tricky business logic.
Understand data binding: Data-binding options in Grails are plentiful and subtle.
Use the proper naming convention: Use the standard naming convention of "<DomainClass>Controller".
Use flash scope: Flash scope is ideal for passing messages to the user (when a redirect is involved).
Use the errors object: Make use of the errors object on your domain class to display validation messages. Take advantage of resource bundles to make error messages relevant to your application use cases.
Apply filters: Apply filters when you need to selectively fire backend logic based on URLs or controller-action combos.

Services

A service is the right candidate for complex business logic or coarse-grained code. If required, the service API can easily be exposed as a RESTful/SOAP web service.
Use for transactions: Services are transactional by default, but can be made non-transactional if none of their methods update the persistence store.
Avoid code duplication: Common operations should be extracted and re-used across the application.
Statelessness: Services should be stateless. Use domain objects for stateful operations.

Domain

Override setters and getters: Take advantage of the ability to override setters and getters to make properties easier to work with.
De-merge complex queries: Assemble named queries by chaining to prepare complex queries.
Restrict domain logic to the domain: Keep logic specific to an object in that object.
More complex business logic that deals with groups of objects belongs in services.
Use domain objects to model domain logic: Moving domain logic to services is a hangover from inconvenient persistence layers.
Don't mix other classes into the domain folder: Don't put common utility classes or value objects in the domain folder; they can go in src/groovy instead.
Use sensible constructors: Use sensible constructors for instantiating domain objects, to avoid any unwanted state and to construct only valid objects.

TagLib

Keep taglibs simple: Keep a tag simple, and break a tag into reusable sub-tags if required.
Contain logic: A taglib should contain logic rather than rendering.
Use multiple custom taglibs: Use multiple custom taglibs for better organization.

Plugins

Re-usable plugins: Construct re-usable functionality and logic parts of your application as independent, re-usable Grails plugins which can be tested individually; this will remove complexity from your main application(s).
Public plugin repository: Plugins re-usable across multiple applications can be published in the public plugin repository.
Use the fixtures plugin: Use it to bootstrap your data during development.
Override a plugin for small changes: If you need to make a small change to a plugin you are using, then instead of making the plugin inline for this small change, you can override its files by following the same directory structure or package.
Use onChange: Use onChange if your plugin adds dynamic methods or properties, so that those methods and properties are retained after a reload.
Use local plugin repositories: Local plugin repositories serve several purposes, such as sharing plugins across applications.
Modularize large or complex applications: Modularize large or complex applications using plugins to apply the Separation of Concerns pattern.
Write functional tests for your plugins.
For plugins to be reliable, you should write functional tests.

Testing

Test-driven development approach: Grails advocates a TDD approach, meaning test first, which ensures your functionality is covered.
Test often: Test often so you get quicker feedback when something breaks, which makes it easier to fix. This helps speed up the development cycle.
Use test coverage: Maintain test coverage and avoid gaps in coverage as much as possible.
Favor unit tests over integration tests: As well as being faster to run/debug, they enforce loose coupling better. An exception is service testing, where integration testing is generally more useful.
Use continuous integration (CI): Use CI to automate the build and deployment across various environments.
Use the Grails console to test: Embed a Groovy console into your web UI; it's an invaluable tool to examine the insides of a running application.

Deployment

Use the release plugin: Use the release plugin to deploy in-house plugins to your Maven repository.
Use continuous integration (CI): This is almost mandatory for any team bigger than one, to catch those bugs that appear when changes from different people are merged together.
Automate the process: Write scripts to automate any repetitive task; this reduces errors and improves overall productivity.
Get familiar with the resources plugin: Make yourself familiar with the resources plugin for handling static resources.

Summary

Grails's main target is to develop web applications quickly and rapidly in an agile manner. Grails is based on Convention over Configuration and DRY (Don't Repeat Yourself), and moreover we can re-use existing Java code in Grails, which gives us the power to build robust and stable web applications quickly.

Reference: Grails Design Best Practices from our JCG partner Nitin Kumar at the Tech My Talk blog.

Penetration Testing Shouldn’t be a Waste of Time

In a recent post on "Debunking Myths: Penetration Testing is a Waste of Time", Rohit Sethi looks at some of the disadvantages of the passive and irresponsible way that application pen testing is generally done today: wait until the system is ready to go live, hire an outside firm or consultant, give them a short time to try to hack in, fix anything important that they find, maybe retest to get a passing grade, and now your system is 'certified secure'. A test like this "doesn't tell you:

What are the potential threats to your application?
Which threats is your application "not vulnerable" to?
Which threats did the testers not assess your application for?
Which threats were not possible to test from a runtime perspective?
How did time and other constraints on the test affect the reliability of results? For example, if the testers had 5 more days, what other security tests would they have executed?
What was the skill level of the testers, and would you get the same set of results from a different tester or another consultancy?"

Sethi stresses the importance of setting expectations and defining requirements for pen testing. An outside pen tester will not be able to understand your business requirements or the internals of the system well enough to do a comprehensive job, unless maybe your app is yet another straightforward online portal or web store written in PHP or Ruby on Rails, something that they have seen many times before. You should assume that pen testers will miss something, possibly a lot, and there's no way of knowing what they didn't test or how good a job they actually did on what they did test. You could try defect seeding to get some idea of how careful and smart they were (and how many bugs they didn't find), but this assumes that you know an awful lot about your system and about security and security testing (and if you're this good, you probably don't need their help).
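Defect seeding, mentioned above, can be made concrete with a standard capture-recapture estimate (my illustration, not from the article): if you plant S known bugs before the test and the testers find s of them plus n previously unknown bugs, you assume they found the same fraction of the real bugs as of the seeded ones, so the estimated total number of real bugs is roughly n * S / s.

```java
// Rough capture-recapture estimate for defect seeding (illustrative only).
class DefectSeeding {
    static double estimatedRealBugs(int seeded, int seededFound, int realFound) {
        if (seededFound == 0) {
            throw new IllegalArgumentException("no seeded bugs found; estimate undefined");
        }
        // Testers found seededFound/seeded of the planted bugs; assume they
        // found the same fraction of the real ones.
        return realFound * ((double) seeded / seededFound);
    }

    public static void main(String[] args) {
        // Hypothetical numbers: 10 bugs seeded, 5 of them found, plus 8 real
        // bugs found -> roughly 16 real bugs estimated in total.
        System.out.println(estimatedRealBugs(10, 5, 8)); // prints 16.0
    }
}
```

The estimate is crude (it assumes seeded and real bugs are equally hard to find), which is exactly the caveat the article raises: it only works if you already know your system and security testing very well.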
Turning on code coverage analysis during the test will tell you what parts of the code didn't get touched, but it won't help you identify the code that you didn't write but should have, which is often a bigger problem when it comes to security. You can't expect a pen tester to find all of the security vulnerabilities in your system, even if you are willing to spend a lot of time and money on it. But pen tests are important because they are a way to find things that are hard for you to find on your own:

Technology-specific and platform-specific vulnerabilities
Configuration and deployment mistakes in the run-time environment
Pointy-hat problems in areas like authentication and session management that should have been taken care of by the framework that you are using, if it works and if you are using it properly
Fussy problems in information leakage, object enumeration and error handling; problems that look small to you but can be exploited by an intelligent and motivated attacker with time on their side
Mistakes in data validation or output encoding and filtering, that look small to you but…

And if you're lucky, some other problems that you should have caught on your own but didn't, like weaknesses in workflow or access control or password management, or a race condition.

Pen testing is about information, not about vulnerabilities

The real point of pen testing, or any other kind of testing, is not to find all of the bugs in a system. It is to get information:

Information on examples of bugs in the application that need to be reviewed and fixed, how they were found, and how serious they are.
Information that you can use to calibrate your development practices and controls, to understand just how good (or not good) you are at building software.

Testing doesn't provide all possible information, but it provides some. Good testing will provide lots of useful information.
– James Bach (Satisfice)

This information leads to questions: How many other bugs like this could there be in the code? Where else should we look for bugs, and what other kinds of bugs or weaknesses could there be in the code or the design? Where did these bugs come from in the first place? Why did we make that mistake? What didn't we know or what didn't we understand? Why didn't we catch the problems earlier? What do we need to do to prevent them or to catch them in the future? If the bugs are serious enough, or there are enough of them, this means going all the way through RCA and exercises like 5 Whys to understand and address the root cause. To get high-quality information, you need to share information with pen testers.

Give the pen tester as much information as possible:

Walk through the app with the pen testers, highlight the important functions, and provide documentation
Take time to explain the architecture and platform
Share results of previous pen tests
Provide access behind proxies, etc.

Ask them for information in return: ask them to explain their findings as well as their approach, what they tried and what they covered in their tests and what they didn't, where they spent most of their time, what problems they ran into and where they wasted time, what confused them and what surprised them. This is information that you can use to improve your own testing, and to make pen testing more efficient and more effective in the future. When you're hiring a pen tester, you're paying for information. But it's your responsibility to get as much good information as possible, to understand it and to use it properly.

Reference: Penetration Testing Shouldn't be a Waste of Time from our JCG partner Jim Bird at the Building Real Software blog.

Android Tutorial: Using the ViewPager

Currently, one of the most popular Widgets in the Android library is the ViewPager. It’s implemented in several of the most-used Android apps, like the Google Play app and one of my own apps, RBRecorder. The ViewPager is the widget that allows the user to swipe left or right to see an entirely new screen. In a sense, it’s just a nicer way to show the user multiple tabs. It also has the ability to dynamically add and remove pages (or tabs) at any time. Consider the idea of grouping search results by certain categories, and showing each category in a separate list. With the ViewPager, the user could then swipe left or right to see other categorized lists. Using the ViewPager requires some knowledge of both Fragments and PageAdapters. In this case, Fragments are “pages”. Each screen that the ViewPager allows the user to scroll to is really a Fragment. By using Fragments instead of a View here, we’re given a much wider range of possibilities to show in each page. We’re not limited to just a List of items. This could be any collection of views and widgets we may need. You can think of PageAdapters in the same way that you think of ListAdapters. The PageAdapter’s job is to supply Fragments (instead of views) to the UI for drawing. I’ve put together a quick tutorial that gets a ViewPager up and running (with the Support Library) in just a few steps. This tutorial follows more of a top-down approach. It moves from the Application down to the Fragments. If you want to dive straight into the source code yourself, you can grab the project here. At The Application Level Before getting started, it’s important to make sure the Support Library is updated from your SDK, and that the library itself is included in your project. Although the ViewPager and Fragments are newer constructs in Android, it’s easy to port them back to older versions of Android by using the Support Library.
To add the library to your project, you’ll need to create a “libs” folder in your project and drop the JAR file in. For more information on this step, check out the Support Library help page on the developer site.

Setting Up The Layout File

The next step is to add the ViewPager to your layout file for your Activity. This step requires you to dive into the XML of your layout file instead of using the GUI layout editor. Your layout file should look something like this:

<RelativeLayout xmlns:android=""
    xmlns:tools=""
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

        android:layout_width="fill_parent"
        android:layout_height="fill_parent" />

</RelativeLayout>

Implementing The Activity

Now we’ll put the main Activity together. The main takeaways from this activity are as follows:

- The class inherits from FragmentActivity, not Activity
- This Activity “has a” PageAdapter object and a Fragment object, which we will define a bit later
- The Activity needs to initialize its own PageAdapter

public class PageViewActivity extends FragmentActivity {

    MyPageAdapter pageAdapter;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_page_view);

        List<Fragment> fragments = getFragments();
        pageAdapter = new MyPageAdapter(getSupportFragmentManager(), fragments);

        ViewPager pager = (ViewPager) findViewById(;
        pager.setAdapter(pageAdapter);
    }
}

Implementing The PageAdapter

Now that we have the FragmentActivity covered, we need to create our PageAdapter. This is a class that inherits from the FragmentPagerAdapter class.
In creating this class, we have two goals in mind:

- Make sure the Adapter has our fragment list
- Make sure it gives the Activity the correct fragment

class MyPageAdapter extends FragmentPagerAdapter {

    private List<Fragment> fragments;

    public MyPageAdapter(FragmentManager fm, List<Fragment> fragments) {
        super(fm);
        this.fragments = fragments;
    }

    @Override
    public Fragment getItem(int position) {
        return this.fragments.get(position);
    }

    @Override
    public int getCount() {
        return this.fragments.size();
    }
}

Getting The Fragments Set Up

With the PageAdapter complete, all that is now needed are the Fragments themselves. We need to implement two things:

- The getFragments method in the PageViewActivity
- The MyFragment class

1. The getFragments method is straightforward. The only question is how the actual Fragments are created. For now, we’ll leave that logic to the MyFragment class.

private List<Fragment> getFragments() {
    List<Fragment> fList = new ArrayList<Fragment>();

    fList.add(MyFragment.newInstance("Fragment 1"));
    fList.add(MyFragment.newInstance("Fragment 2"));
    fList.add(MyFragment.newInstance("Fragment 3"));

    return fList;
}

2. The MyFragment class also has its own layout file. For this example, the layout file only consists of a simple TextView.
We’ll use this TextView to tell us which Fragment we are currently looking at (notice in the getFragments code, we are passing a String into the newInstance method).

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android=""
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >

    <TextView
        android:id="@+id/textView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerHorizontal="true"
        android:layout_centerVertical="true"
        android:textAppearance="?android:attr/textAppearanceLarge" />

</RelativeLayout>

And now the Fragment code itself. The only trick here is that we create the fragment using a static factory method, and we use a Bundle to pass information to the Fragment object itself.

public class MyFragment extends Fragment {

    public static final String EXTRA_MESSAGE = "EXTRA_MESSAGE";

    public static final MyFragment newInstance(String message) {
        MyFragment f = new MyFragment();

        Bundle bdl = new Bundle(1);
        bdl.putString(EXTRA_MESSAGE, message);
        f.setArguments(bdl);

        return f;
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
            Bundle savedInstanceState) {
        String message = getArguments().getString(EXTRA_MESSAGE);

        View v = inflater.inflate(R.layout.myfragment_layout, container, false);
        TextView messageTextView = (TextView) v.findViewById(;
        messageTextView.setText(message);

        return v;
    }
}

That’s it! With the above code, you can easily get a simple page adapter up and running. You can also get the source code of the above tutorial from GitHub.

For More Advanced Developers

There are actually a few different types of FragmentPagerAdapters out there. It is important to know what they are and what they do, as knowing this bit of information could save you some time when creating complex applications with the ViewPager. The FragmentPagerAdapter is the more general PageAdapter to use.
This version does not destroy the Fragments it holds as long as the user can potentially go back to them. The idea is that this PageAdapter is used mainly for “static” or unchanging Fragments. If you have Fragments that are more dynamic and change frequently, you may want to look into the FragmentStatePagerAdapter. This Adapter is a bit more friendly to dynamic Fragments and doesn’t consume nearly as much memory as the FragmentPagerAdapter.   Reference: Android Tutorial: Using the ViewPager from our JCG partner Isaac Taylor at the Programming Mobile blog. ...
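To make the difference concrete, here is a sketch of what the state-saving variant of the tutorial’s adapter could look like. It assumes the same fragment list as above; the class name MyStatePageAdapter is my own, and the only structural change from MyPageAdapter is the superclass.

```java
import java.util.List;


// Sketch: a state-saving adapter for frequently-changing pages.
// Unlike FragmentPagerAdapter, FragmentStatePagerAdapter destroys
// Fragments the user has swiped away from, keeping only their
// saved instance state, which lowers memory use.
class MyStatePageAdapter extends FragmentStatePagerAdapter {

    private List<Fragment> fragments;

    public MyStatePageAdapter(FragmentManager fm, List<Fragment> fragments) {
        super(fm);
        this.fragments = fragments;
    }

    @Override
    public Fragment getItem(int position) {
        return this.fragments.get(position);
    }

    @Override
    public int getCount() {
        return this.fragments.size();
    }
}
```

Swapping it in would only require changing the adapter instantiation in PageViewActivity; the rest of the tutorial code stays the same.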

The three greatest paragraphs ever written on encapsulation

Definition. Encapsulation isn’t sexy. It’s the chartered accounting of software design. Functional Java programming? Formula freakin’ one. Hybrid cloud computing? Solid booster rockets exploding on a midnight launch-pad. But encapsulation? Do not confuse, however, this lack of oomph with lack of importance. Invaluable invisible support systems surround us. Try living without plumbing for a week. Or electricity. Coupled with its rather sorry PR, encapsulation groans under a second burden: that of ambiguity. Every programmer has her or his own definition of what encapsulation is. Most resemble one another, many overlap, but some do little more than reflect randomly-acquired habit. Yet if we are to discuss the three greatest encapsulation paragraphs we surely need a definition. You may not agree with it but you should at least be in a position to judge the degree to which those paragraphs accord with this definition. How, then, do we solve this ambiguity? It would be nice if there were an international, standard definition of encapsulation upon which to construct our investigation but for this to be the case would require an international organization for standardization, and this international organization for standardization would have had to define, “Encapsulation,” in some meaningful way. Well – would you believe it? – there actually is such an international organization for standardization. It’s called the, “International Organization for Standardization.” And these good people have officially defined encapsulation (in ISO/IEC 10746-3) as being: Encapsulation: (drum roll) the property that the information contained in an object is accessible only through interactions at the interfaces supported by the object (cymbal crash!). (Sound effects added by the author.) Yes, yes, that’s not your definition. And it begs all sorts of questions. And it defines notions in terms of other, equally-nebulous notions.
You are asked, however, not to mindlessly replace your own definition with the above but merely to consider it plausible for this article’s duration. Use it as a tool to crack the shells of the paragraphs to come then toss it aside if you wish. It issues, after all, from the people whose food health-regulations ensured that the breakfast you ate this morning was relatively safe. Let us lend them our ears even if we take them back when they are finished. Separation of concerns. You can still catch, “The Godfather: Part II,” in the cinema, Paul Anka croons away at the top of the U.S.A. music charts and the Nevada Desert shrieks as a two-hundred kiloton blast splatters radioactive debris over its vitrifying sands. Meanwhile in The Netherlands, professor Edsger Dijkstra completes the week by typing another essay. Renowned in computing circles and recipient of the prestigious A. M. Turing award a couple of years earlier, Dijkstra has been writing insightful essays on computing for over a decade, some – such as the infamous Go to statement considered harmful – generating among enthusiasts as much enmity as respect. Today’s essay is called On the role of scientific thought. It is perhaps a tad wordy, a tad meandering, yet it harbours a core of profound genius. The first of our three greatest paragraphs on encapsulation, the text needs nothing of the context of its surrounding essay to make itself understood:“Let me try to explain to you, what to my taste is characteristic for all intelligent thinking. It is, that one is willing to study in depth an aspect of one’s subject matter in isolation for the sake of its own consistency, all the time knowing that one is occupying oneself only with one of the aspects. We know that a program must be correct and we can study it from that viewpoint only; we also know that it should be efficient and we can study its efficiency on another day, so to speak. 
In another mood we may ask ourselves whether, and if so: why, the program is desirable. But nothing is gained —on the contrary!— by tackling these various aspects simultaneously. It is what I sometimes have called “the separation of concerns”, which, even if not perfectly possible, is yet the only available technique for effective ordering of one’s thoughts, that I know of.”Here, Dijkstra whispers for the first time the phrase, “Separation of concerns,” little knowing that his voice would still reverberate through our digital cloisters almost forty years later. For who has not heard of the separation of concerns? And who would dream of understanding any non-trivial source code by consuming it in one, mighty brain-gulp rather than slicing it asunder to dissect its individual requirements, its features, its packages or its sub-structures? Dijkstra was not, however, discussing encapsulation specifically, so how can this be one of the greatest paragraphs on encapsulation? This is an encapsulation piece in that it underwrites encapsulation’s claim that an object’s information is separate from its interfaces. Indeed, encapsulation merely celebrates and exploits this separability without which it would slump de-fanged, a forgotten curiosity. Today, we think this all obvious. What’s the big deal? Such is the mark of a great idea: it is difficult to conceive of a world in which that idea plays no part. We’ll encounter this plight again. Information-hiding. Two years before Dijkstra clickity-clacked out his masterpiece, Canadian David Parnas had already lit upon a gem of his own. (Our journey declines along an axis of the conceptually abstract rather than the chronological.) Parnas – like Dijkstra, a university lecturer – published his On the Criteria To Be Used in Decomposing Systems into Modules in December, 1972. 
In his paper Parnas describes how to build a software system and concludes with the second of our three greatest paragraphs:“We have tried to demonstrate by these examples that it is almost always incorrect to begin the decomposition of a system into modules on the basis of a flowchart. We propose instead that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others.”Witness the birth of information-hiding and wonder no more whence Java’s private modifier comes. If some doubt surrounded the relevance of Dijkstra’s contribution, here is the very essence of encapsulation; for what restricts access to interfaces but that the object’s information is, in some sense, hidden whereas the interfaces are not? As with separation of concerns, information-hiding seems mundane to us now as steel to the builders of skyscrapers; few rivets clang home to iron-age thoughts. Programming boasts no such longevity, of course, yet forty years of searching has failed to produce information-hiding’s successor. Ripple-effects. Our third and final greatest paragraph springs from not one soul but three. In a paper entitled Structured design written for the IBM Systems Journal, W. Stevens, G. Myers and L. Constantine introduced a wealth of extraordinary ideas to the computing universe of 1974. (What was it about the seventies that nurtured such innovation? The flares? The handlebar mustaches? The brownness?) While separation of concerns and information-hiding provided the foundational mechanisms by which encapsulation is realised, the Structured design paper crystallized the criteria by which the deployment of these mechanisms might be judged. Again, drained of their wonder by repeated use, the words’ over-familiarity belies their significance:“The fewer and simpler the connections between modules, the easier it is to understand each module without reference to other modules. 
Minimizing connections between modules also minimises the paths along which changes and errors can propagate into other parts of the system, thus eliminating disastrous, ‘Ripple effects,’ where changes in one part causes errors in another, necessitating additional changes elsewhere, giving rise to new errors, etc.”The paper dashes on, elaborating these points by introducing the concepts of coupling and cohesion – thereby supplying strong contenders for the fourth and fifth greatest encapsulation paragraphs – but the above exemplifies encapsulation performed well and, more interestingly, badly. When faced with poorly-structured source code, we tend to recoil viscerally from the sight of countless dependencies radiating directionlessly throughout the system. Still today the ripple-effects identified by Messrs Stevens et al. cost our industry hundreds of millions of dollars every year which, given the tools furnished by our three papers here, is odd. Summary. None of the papers presented here enjoyed complete originality even on first publication. Literary shards presaged all. These passages, however, consolidated – and to a huge degree innovated – key principles by which all good software continues to be built today. Their authors deserve recognition as some of computing’s finest minds. Concepts evolve. The world for which software was written in the seventies is not the world now outside our windows. Perhaps some day software will not be composed of individual, interacting parts and so separation of concerns, information-hiding and ripple-effect will fade from the programmer’s vocabulary. That day, however, is not yet come.   Reference: The three greatest paragraphs ever written on encapsulation from our JCG partner Edmund Kirwan at the A blog about software. blog. ...

Java API for JSON Processing (JSR-353) – Stream APIs

Java will soon have a standard set of APIs for processing JSON as part of Java EE 7. This standard is being defined as JSR 353 – Java API for JSON Processing (JSON-P) and it is currently at the Final Approval Ballot. JSON-P offers both object-oriented and stream-based approaches; in this post I will introduce the stream APIs. The JSON-P reference implementation is available from the project site.

JsonGenerator

JsonGenerator makes it very easy to create JSON. With its fluent API, the code to produce the JSON very closely resembles the resulting JSON.

package blog.jsonp;

import java.util.*;

import javax.json.Json;
import*;

public class GeneratorDemo {

    public static void main(String[] args) {
        Map<String, Object> properties = new HashMap<String, Object>(1);
        properties.put(JsonGenerator.PRETTY_PRINTING, true);

        JsonGeneratorFactory jgf = Json.createGeneratorFactory(properties);
        JsonGenerator jg = jgf.createGenerator(System.out);

        jg.writeStartObject()                    // {
          .write("name", "Jane Doe")             //    "name":"Jane Doe",
          .writeStartObject("address")           //    "address":{
              .write("type", 1)                  //       "type":1,
              .write("street", "1 A Street")     //       "street":"1 A Street",
              .writeNull("city")                 //       "city":null,
              .write("verified", false)          //       "verified":false
          .writeEnd()                            //    },
          .writeStartArray("phone-numbers")      //    "phone-numbers":[
              .writeStartObject()                //       {
                  .write("number", "555-1111")   //          "number":"555-1111",
                  .write("extension", "123")     //          "extension":"123"
              .writeEnd()                        //       },
              .writeStartObject()                //       {
                  .write("number", "555-2222")   //          "number":"555-2222",
                  .writeNull("extension")        //          "extension":null
              .writeEnd()                        //       }
          .writeEnd()                            //    ]
        .writeEnd()                              // }
        .close();
    }
}

Output

Below is the output from running the GeneratorDemo.
{ "name":"Jane Doe", "address":{ "type":1, "street":"1 A Street", "city":null, "verified":false }, "phone-numbers":[ { "number":"555-1111", "extension":"123" }, { "number":"555-2222", "extension":null } ] } JsonParser ( Using JsonParser we will parse the output of the previous example to get the address information. JSON parser provides a depth first traversal of events corresponding to the JSON structure. Different data can be obtained from the JsonParser depending on the type of the event. package blog.jsonp; import; import javax.json.Json; import; import; public class ParserDemo { public static void main(String[] args) throws Exception { try (FileInputStream json = new FileInputStream("src/blog/jsonp/input.json")) { JsonParser jr = Json.createParser(json); Event event = null; // Advance to "address" key while(jr.hasNext()) { event =; if(event == Event.KEY_NAME && "address".equals(jr.getString())) { event =; break; } } // Output contents of "address" object while(event != Event.END_OBJECT) { switch(event) { case KEY_NAME: { System.out.print(jr.getString()); System.out.print(" = "); break; } case VALUE_FALSE: { System.out.println(false); break; } case VALUE_NULL: { System.out.println("null"); break; } case VALUE_NUMBER: { if(jr.isIntegralNumber()) { System.out.println(jr.getInt()); } else { System.out.println(jr.getBigDecimal()); } break; } case VALUE_STRING: { System.out.println(jr.getString()); break; } case VALUE_TRUE: { System.out.println(true); break; } default: { } } event =; } } } } Output Below is the output from running the ParserDemo. type = 1 street = 1 A Street city = null verified = false MOXy and the Java API for JSON Processing (JSR-353) Mapping your JSON to domain objects is still the easiest way to interact with JSON. Now that JSR-353 is finalizing we will integrating it into MOXy’s JSON-binding. 
You can track our progress on this using the following link: Bug 405161 – MOXy support for Java API for JSON Processing (JSR-353)   Reference: Java API for JSON Processing (JSR-353) – Stream APIs from our JCG partner Blaise Doughan at the Java XML & JSON Binding blog. ...
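For comparison with the stream APIs above, the object-model half of JSON-P reads an entire document into an in-memory tree. This is a sketch only (ReaderDemo and its inline JSON string are my own illustration; it assumes a JSR-353 implementation such as the reference implementation on the classpath):

```java
import;

import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;

public class ReaderDemo {

    public static void main(String[] args) {
        String json = "{\"name\":\"Jane Doe\",\"address\":{\"type\":1,\"street\":\"1 A Street\"}}";

        // JsonReader parses the whole document at once into a JsonObject tree,
        // which can then be navigated randomly instead of event by event
        try (JsonReader reader = Json.createReader(new StringReader(json))) {
            JsonObject root = reader.readObject();
            JsonObject address = root.getJsonObject("address");

            System.out.println(root.getString("name"));
            System.out.println(address.getInt("type"));
        }
    }
}
```

The trade-off is the usual one: the object model is far more convenient for small documents, while the stream APIs shown in this post avoid holding the whole document in memory.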

Inadvertent Recursion Protection with Java ThreadLocals

Now here’s a little trick for those of you hacking around with third-party tools, trying to extend them without fully understanding them (yet!). Assume the following situation:

- You want to extend a library that exposes a hierarchical data model (let’s assume you want to extend Apache Jackrabbit)
- That library internally checks access rights before accessing any nodes of the content repository
- You want to implement your own access control algorithm
- Your access control algorithm will access other nodes of the content repository
- … which in turn will again trigger access control
- … which in turn will again access other nodes of the content repository…

Infinite recursion, possibly resulting in a StackOverflowError, if you’re not recursing breadth-first. Now, you have two options:

1. Take the time, sit down, understand the internals, and do it right. You probably shouldn’t recurse into your access control once you’ve reached your own extension. In the case of extending Jackrabbit, this would be done by using a System Session to further access nodes within your access control algorithm. A System Session usually bypasses access control.
2. Be impatient, wanting to get results quickly, and prevent recursion with a trick.

Of course, you really should opt for option 1. But who has the time to understand everything? Here’s how to implement that trick.

/**
 * This thread local indicates whether you've
 * already started recursing with level 1
 */
static final ThreadLocal<Boolean> RECURSION_CONTROL =
    new ThreadLocal<Boolean>();

/**
 * This method executes a delegate in a "protected"
 * mode, preventing recursion. If an inadvertent
 * recursion occurs, return a default instead
 */
public static <T> T protect(
    T resultOnRecursion,
    Protectable<T> delegate)
throws Exception {

    // Not recursing yet, allow a single level of
    // recursion and execute the delegate once
    if (RECURSION_CONTROL.get() == null) {
        try {
            RECURSION_CONTROL.set(true);
        }
        finally {
            RECURSION_CONTROL.remove();
        }
    }

    // Abort recursion and return early
    else {
        return resultOnRecursion;
    }
}

/**
 * An API to wrap your code with
 */
public interface Protectable<T> {
    T call() throws Exception;
}

This works easily as can be seen in this usage example:

public static void main(String[] args) throws Exception {
    protect(null, new Protectable<Void>() {
        @Override
        public Void call() throws Exception {
            // Recurse infinitely
            System.out.println("Recursing?");
            main(null);
            System.out.println("No!");
            return null;
        }
    });
}

The recursive call to the main() method will be aborted by the protect method, and return early, instead of executing call(). This idea can also be further elaborated by using a Map of ThreadLocals instead, allowing for specifying various keys or contexts for which to prevent recursion. Then, you could also put an Integer into the ThreadLocal, incrementing it on recursion, allowing for at most N levels of recursion:

static final ThreadLocal<Integer> RECURSION_CONTROL =
    new ThreadLocal<Integer>();

public static <T> T protect(
    T resultOnRecursion,
    Protectable<T> delegate)
throws Exception {
    Integer level = RECURSION_CONTROL.get();
    level = (level == null) ? 0 : level;

    if (level < 5) {
        try {
            RECURSION_CONTROL.set(level + 1);
        }
        finally {
            // Restore the previous level on the way out
            if (level > 0)
                RECURSION_CONTROL.set(level);
            else
                RECURSION_CONTROL.remove();
        }
    }
    else {
        return resultOnRecursion;
    }
}

But again. Maybe you should just take a couple of minutes more and learn about how the internals of your host library really work, and get things right from the beginning… As always, when applying tricks and hacks!
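The Map-of-ThreadLocals idea mentioned above can be sketched like this. The names here are my own, and for brevity the sketch uses java.util.concurrent.Callable and Java 8’s ThreadLocal.withInitial instead of the article’s Protectable interface:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Callable;

public class KeyedRecursionGuard {

    // One recursion flag per key, stored per thread, so that
    // independent contexts (e.g. different access checks) don't
    // block each other
    private static final ThreadLocal<Map<String, Boolean>> GUARDS =
        ThreadLocal.withInitial(HashMap::new);

    public static <T> T protect(
            String key, T resultOnRecursion, Callable<T> delegate)
            throws Exception {
        Map<String, Boolean> guards = GUARDS.get();

        // Already inside a protected section for this key: abort early
        if (Boolean.TRUE.equals(guards.get(key))) {
            return resultOnRecursion;
        }

        guards.put(key, true);
        try {
            return;
        }
        finally {
            guards.remove(key);
        }
    }

    public static void main(String[] args) throws Exception {
        // Nested calls with the same key abort at the second level
        Integer blocked = protect("acl", -1, () -> protect("acl", -1, () -> 42));
        System.out.println(blocked); // prints -1

        // Different keys don't interfere with each other
        Integer allowed = protect("a", -1, () -> protect("b", -1, () -> 42));
        System.out.println(allowed); // prints 42
    }
}
```

As with the original snippet, the cleanup in finally is essential: on a thread pool, a flag left behind would silently short-circuit the next task that happens to run on the same thread.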
Reference: Inadvertent Recursion Protection with Java ThreadLocals from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog. ...

JUnit: Naming Individual Test Cases in a Parameterized Test

A couple of years ago I wrote about JUnit Parameterized Tests. One of the things I didn’t like about them was that JUnit named the individual test cases using numbers, so if they failed you had no idea which test parameters caused the failure. The following Eclipse screenshot will show you what I mean.

However, in JUnit 4.11, the @Parameters annotation now takes a name argument which can be used to display the parameters in the test name and hence make them more descriptive. You can use the following placeholders in this argument and they will be replaced by actual values at runtime by JUnit:

- {index}: the current parameter index
- {0}, {1}, …: the first, second, and so on, parameter value

Here is an example:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class StringSortTest {

    @Parameters(name = "{index}: sort[{0}]={1}")
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { "abc", "abc" },
            { "cba", "abc" },
            { "abcddcba", "aabbccdd" },
            { "a", "a" },
            { "aaa", "aaa" },
            { "", "" }
        });
    }

    private final String input;
    private final String expected;

    public StringSortTest(final String input, final String expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void testSort() {
        assertEquals(expected, sort(input));
    }

    private static String sort(final String s) {
        final char[] charArray = s.toCharArray();
        Arrays.sort(charArray);
        return new String(charArray);
    }
}

When you run the test, you will see individual test cases named as shown in the Eclipse screenshot below, so it is easy to identify the parameters used in each test case. Note that due to a bug in Eclipse, names containing brackets are truncated. That’s why I had to use sort[{0}] instead of sort({0}).
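The placeholder expansion behaves much like java.text.MessageFormat once {index} has been substituted, which you can verify with plain stdlib code. The nameFor helper below is my own sketch of that behavior, not a JUnit API:

```java
import java.text.MessageFormat;

public class TestNameDemo {

    // Hypothetical helper mimicking the name expansion: {index} is
    // replaced with the parameter set's position first, then the
    // numbered placeholders {0}, {1}, ... are filled via MessageFormat
    static String nameFor(String pattern, int index, Object[] parameters) {
        String withIndex = pattern.replace("{index}", Integer.toString(index));
        return MessageFormat.format(withIndex, parameters);
    }

    public static void main(String[] args) {
        // The second data row of StringSortTest (index 1) would display as:
        System.out.println(nameFor("{index}: sort[{0}]={1}", 1,
            new Object[] { "cba", "abc" }));
        // prints "1: sort[cba]=abc"
    }
}
```

One caveat if you build names like this yourself: MessageFormat treats single quotes as escape characters, so a parameter pattern containing an apostrophe will not expand the way you might expect.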
Reference: JUnit: Naming Individual Test Cases in a Parameterized Test from our JCG partner Fahd Shariff at the blog. ...
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.