

Grails Design Best Practices

Grails is an agile, rapid-development web framework that advocates convention over configuration. This article explains usage and best practices around Grails.

Domain-Driven Design

Always use domain-driven design: First create your basic domain model classes, and then use scaffolding to get them online. This will help you stay motivated and understand your domain better.

Use a test-driven approach: Domain model test cases provide a great way of experimenting with and testing your validations.

Validate: Use validators to keep your domain objects in order. Custom validators aren't hard to implement, so don't be afraid to roll your own if the situation demands it. It's better to encapsulate the validation logic in your domain class than to use dodgy controller hacks.

Follow the Grails Conventions

Follow Grails conventions: Grails is convention-driven, and development should follow those conventions. Views should just be views. Controllers should just be controllers.

Keep transactions in services: Keep the transactional parts in the service layer.

Put functional logic in its place: Functional logic should be implemented in services and model objects. Pass the request to the controller, and the controller in turn invokes the service.

Dependency Injection

Grails dependency injection is convention-based, so focus on putting each component in the appropriate grails-app/ folder and on proper naming conventions.

Location convention: Services go into the services folder; controllers go into the controllers folder.

Naming convention: If you have an Xxx model object and need a controller and service for it, name them XxxController and XxxService. Grails will autowire them based on this naming convention.

Presentation/View

Use pagination: Paginating large datasets creates a much better user experience and improves overall presentation performance.

Use custom tags: Develop reusable custom tags for your UI. They improve overall development productivity and ease maintenance.

Use layouts smartly: Handle basic flash message display in your layout rather than repeating it for each view.

Pick the right JavaScript library: Choose an Ajax library that makes sense for the rest of your app. It takes time to download libraries, so minimize the number of libraries in play.

Use convention-based layouts: Favor convention-based layouts over explicit meta tags. Often a specific layout for one particular action is much more maintainable than meta-magic branching. Take advantage of meta tag styles when you need to style a subset of pages for a controller, but use convention layouts for the rest.

Externalize the config file: Always include an externalized config file, so that any configuration that needs to be overridden in production can be changed without even generating a new WAR file.

Prefer dynamic scaffolding: Always prefer dynamic scaffolding over static scaffolding.

Use DataSource.groovy: Use the DataSource.groovy property file to configure the datasource.

Re-use custom validators: Place all custom validators in a shared validators file, to support re-use of these constraints among other domain classes.

Avoid logic in the web layer: Avoid putting lots of logic in the web layer, to keep a clear separation between the presentation layer and the business layer.

Use BuildConfig.groovy: To install any plugin in your application, declare it in BuildConfig.groovy rather than using the install-plugin command.

Keep views simple: Make each view as simple as possible and use the service layer for business logic.

Re-use templates and custom taglibs: Split shared content out into templates rendered with g:render, and build custom taglibs for common UI elements.

Use layouts: Use layouts for a consistent look across the UI.

DRY: Keep your views DRY ("don't repeat yourself") by splitting repeated content into templates.

Controller

Keep controllers thin: Don't perform business logic, web service calls, DB operations or transactions within controllers. Keep each controller as thin as possible: the purpose of a controller is to accept an incoming request, invoke a domain class or service for a result, and give the result back to the requester.

Use command objects: Take advantage of command objects for form submissions. Don't just use them for validation – they can also be handy for encapsulating tricky business logic.

Understand data binding: Data-binding options in Grails are plentiful and subtle.

Use the proper naming convention: Use the standard naming convention of <DomainClass>Controller.

Use flash scope: Flash scope is ideal for passing messages to the user when a redirect is involved.

Use the errors object: Make use of the errors object on your domain class to display validation messages. Take advantage of resource bundles to make error messages relevant to your application's use cases.

Apply filters: Apply filters when you need to selectively fire backend logic based on URLs or controller-action combinations.

Services

A service is the right candidate for complex business logic or coarse-grained code. If required, the service API can easily be exposed as a RESTful or SOAP web service.

Use services for transactions: Services are transactional by default, but can be made non-transactional if none of their methods update the persistence store.

Avoid code duplication: Common operations should be extracted and re-used across the application.

Statelessness: Services should be stateless. Use domain objects for stateful operations.

Domain

Override setters and getters: Take advantage of the ability to override setters and getters to make properties easier to work with.

De-merge complex queries: Assemble complex queries by chaining named queries.

Restrict domain logic to the domain: Keep logic specific to an object in that object. More complex business logic that deals with groups of objects belongs in services.

Use domain objects to model domain logic: Moving domain logic to services is a hangover from inconvenient persistence layers.

Don't mix other classes into the domain folder: Don't put common utility classes or value objects in the domain folder; they can go in src/groovy instead.

Use sensible constructors: Use sensible constructors for instantiating domain objects, to avoid unwanted state and to construct only valid objects.

TagLib

Keep taglibs simple: Keep a tag simple, and break a tag into reusable sub-tags if required.

Let taglibs hold the logic: A taglib should contain more logic than rendering; keep heavy markup in templates and views.

Use multiple custom taglibs: Use multiple custom taglibs for better organization.

Plugins

Build re-usable plugins: Construct re-usable functionality and logic as independent, re-usable Grails plugins which can be tested individually and which remove complexity from your main application(s).

Use the public plugin repository: A plugin that is re-usable across multiple applications can be published in the public plugin repository.

Use the fixtures plugin: Use it to bootstrap your data during development.

Override plugin files for small changes: If you need to make a small change to a plugin you are using, then instead of making the plugin inline for that small change, you can override its files by following the same directory structure or package.

Use onChange: Use onChange if your plugin adds dynamic methods or properties, so that those methods and properties are retained after a reload.

Use local plugin repositories: Local plugin repositories serve several purposes, such as sharing plugins across applications.

Modularize large or complex applications: Modularize large or complex applications using plugins, to bring in the separation-of-concerns pattern.

Write functional tests for your plugins: For plugins to be reliable, you should write functional tests for them.

Testing

Take a test-driven approach: Grails advocates TDD – test first – which ensures that your functionality is covered.

Test often: Test often so you get quicker feedback when something breaks, which makes it easier to fix and speeds up the development cycle.

Use test coverage: Maintain test coverage and avoid gaps in coverage as much as possible.

Favor unit tests over integration tests: As well as being faster to run and debug, unit tests enforce loose coupling better. An exception is service testing, where integration testing is generally more useful.

Use continuous integration (CI): Use CI to automate the build and deployment across the various environments.

Use the Grails console to test: Embed a Groovy console into your web UI – it's an invaluable tool for examining the insides of a running application.

Deployment

Use the release plugin: Use the release plugin to deploy in-house plugins to your Maven repository.

Use continuous integration (CI): This is almost mandatory for any team bigger than one, to catch the bugs that appear when changes from different people are merged together.

Automate the process: Write scripts to automate any repetitive task; this reduces errors and improves overall productivity.

Get familiar with the resources plugin: Make yourself familiar with the resources plugin for handling static resources.

Summary

Grails's main goal is to let you develop web applications quickly, in an agile manner. It is based on convention over configuration and DRY ("don't repeat yourself"), and on top of that you can re-use existing Java code in Grails, which gives you the power to build robust and stable web applications rapidly.

Reference: Grails Design Best Practices from our JCG partner Nitin Kumar at the Tech My Talk blog.

Penetration Testing Shouldn’t be a Waste of Time

In a recent post, "Debunking Myths: Penetration Testing is a Waste of Time", Rohit Sethi looks at some of the disadvantages of the passive and irresponsible way that application pen testing is generally done today: wait until the system is ready to go live, hire an outside firm or consultant, give them a short time to try to hack in, fix anything important that they find, maybe retest to get a passing grade, and now your system is "certified secure". A test like this doesn't tell you:

What are the potential threats to your application?

Which threats is your application "not vulnerable" to?

Which threats did the testers not assess your application for?

Which threats were not possible to test from a runtime perspective?

How did time and other constraints on the test affect the reliability of results? For example, if the testers had 5 more days, what other security tests would they have executed?

What was the skill level of the testers, and would you get the same set of results from a different tester or another consultancy?

Sethi stresses the importance of setting expectations and defining requirements for pen testing. An outside pen tester will not be able to understand your business requirements or the internals of the system well enough to do a comprehensive job – unless, maybe, your app is yet another straightforward online portal or web store written in PHP or Ruby on Rails, something that they have seen many times before. You should assume that pen testers will miss something, possibly a lot, and there's no way of knowing what they didn't test or how good a job they actually did on what they did test. You could try defect seeding to get some idea of how careful and smart they were (and how many bugs they didn't find), but this assumes that you know an awful lot about your system and about security and security testing (and if you're this good, you probably don't need their help). Turning on code coverage analysis during the test will tell you what parts of the code didn't get touched – but it won't help you identify the code that you didn't write but should have, which is often a bigger problem when it comes to security.
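As a rough illustration of how defect-seeding results can be read (this is not from Sethi's post – the counts below are invented, and the capture-recapture arithmetic is only a crude estimator):

// Hypothetical numbers: a rough Lincoln-Petersen (capture-recapture)
// estimate of how many real bugs the pen testers may have missed.
public class DefectSeedingEstimate {

    public static void main(String[] args) {
        int seeded = 20;       // bugs deliberately planted before the test
        int seededFound = 12;  // planted bugs the testers caught
        int realFound = 30;    // genuine bugs the testers reported

        // If the testers found 12/20 = 60% of the planted bugs, assume
        // they also found roughly 60% of the real ones.
        double estimatedTotal = (double) realFound * seeded / seededFound; // ~50
        double estimatedMissed = estimatedTotal - realFound;               // ~20

        System.out.printf("Estimated real bugs: %.0f, likely missed: %.0f%n",
            estimatedTotal, estimatedMissed);
    }
}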
You can't expect a pen tester to find all of the security vulnerabilities in your system – even if you are willing to spend a lot of time and money on it. But pen tests are important because they are a way to find things that are hard for you to find on your own:

Technology-specific and platform-specific vulnerabilities

Configuration and deployment mistakes in the run-time environment

Pointy-hat problems in areas like authentication and session management that should have been taken care of by the framework that you are using, if it works and if you are using it properly

Fussy problems in information leakage, object enumeration and error handling – problems that look small to you but can be exploited by an intelligent and motivated attacker with time on their side

Mistakes in data validation or output encoding and filtering, that look small to you but…

And if you're lucky, some other problems that you should have caught on your own but didn't, like weaknesses in workflow or access control or password management, or a race condition.

Pen testing is about information, not about vulnerabilities

The real point of pen testing, or any other kind of testing, is not to find all of the bugs in a system. It is to get information:

Information on examples of bugs in the application that need to be reviewed and fixed, how they were found, and how serious they are.

Information that you can use to calibrate your development practices and controls, to understand just how good (or not good) you are at building software.

"Testing doesn't provide all possible information, but it provides some. Good testing will provide lots of useful information." – James Bach (Satisfice)

This information leads to questions: How many other bugs like this could there be in the code? Where else should we look for bugs, and what other kinds of bugs or weaknesses could there be in the code or the design? Where did these bugs come from in the first place? Why did we make that mistake? What didn't we know or what didn't we understand? Why didn't we catch the problems earlier? What do we need to do to prevent them or to catch them in the future? If the bugs are serious enough, or there are enough of them, this means going all the way through root cause analysis and exercises like the 5 Whys to understand and address the root cause.

To get high-quality information, you need to share information with pen testers. Give the pen tester as much information as possible:

Walk through the app with the pen testers, highlight the important functions, and provide documentation

Take time to explain the architecture and platform

Share results of previous pen tests

Provide access behind proxies, etc.

Ask them for information in return: ask them to explain their findings as well as their approach, what they tried and what they covered in their tests and what they didn't, where they spent most of their time, what problems they ran into and where they wasted time, what confused them and what surprised them. This is information that you can use to improve your own testing, and to make pen testing more efficient and more effective in the future. When you're hiring a pen tester, you're paying for information. But it's your responsibility to get as much good information as possible, and to understand it and use it properly.

Reference: Penetration Testing Shouldn't be a Waste of Time from our JCG partner Jim Bird at the Building Real Software blog.

Android Tutorial: Using the ViewPager

Currently, one of the most popular widgets in the Android library is the ViewPager. It's implemented in several of the most-used Android apps, like the Google Play app and one of my own apps, RBRecorder.

The ViewPager is the widget that allows the user to swipe left or right to see an entirely new screen. In a sense, it's just a nicer way to show the user multiple tabs. It also has the ability to dynamically add and remove pages (or tabs) at any time. Consider the idea of grouping search results by certain categories, and showing each category in a separate list. With the ViewPager, the user could then swipe left or right to see other categorized lists.

Using the ViewPager requires some knowledge of both Fragments and PagerAdapters. In this case, Fragments are "pages": each screen that the ViewPager allows the user to scroll to is really a Fragment. By using Fragments instead of a View here, we're given a much wider range of possibilities to show in each page. We're not limited to just a list of items – this could be any collection of views and widgets we may need. You can think of PagerAdapters in the same way that you think of ListAdapters: the PagerAdapter's job is to supply Fragments (instead of views) to the UI for drawing.

I've put together a quick tutorial that gets a ViewPager up and running (with the Support Library) in just a few steps. This tutorial follows more of a top-down approach, moving from the Application down to the Fragments. If you want to dive straight into the source code yourself, you can grab the project here.

At The Application Level

Before getting started, it's important to make sure the Support Library is updated from your SDK, and that the library itself is included in your project. Although the ViewPager and Fragments are newer constructs in Android, it's easy to port them back to older versions of Android by using the Support Library. To add the library to your project, you'll need to create a "libs" folder in your project and drop the JAR file in. For more information on this step, check out the Support Library page on the developer site.

Setting Up The Layout File

The next step is to add the ViewPager to your Activity's layout file. This step requires you to dive into the XML of your layout file instead of using the GUI layout editor. Your layout file should look something like this:

<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent" >

    <android.support.v4.view.ViewPager
        android:id="@+id/viewpager"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent" />

</RelativeLayout>

Implementing The Activity

Now we'll put the main Activity together.
The main takeaways from this activity are as follows:

The class inherits from FragmentActivity, not Activity

This Activity "has a" PagerAdapter object and a Fragment object, which we will define a bit later

The Activity needs to initialize its own PagerAdapter

public class PageViewActivity extends FragmentActivity {

    MyPageAdapter pageAdapter;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_page_view);

        List<Fragment> fragments = getFragments();
        pageAdapter = new MyPageAdapter(getSupportFragmentManager(), fragments);
        ViewPager pager = (ViewPager) findViewById(R.id.viewpager);
        pager.setAdapter(pageAdapter);
    }
}

Implementing The PagerAdapter

Now that we have the FragmentActivity covered, we need to create our PagerAdapter. This is a class that inherits from FragmentPagerAdapter. In creating this class, we have two goals in mind:

Make sure the adapter has our fragment list

Make sure it gives the Activity the correct fragment

class MyPageAdapter extends FragmentPagerAdapter {

    private List<Fragment> fragments;

    public MyPageAdapter(FragmentManager fm, List<Fragment> fragments) {
        super(fm);
        this.fragments = fragments;
    }

    @Override
    public Fragment getItem(int position) {
        return this.fragments.get(position);
    }

    @Override
    public int getCount() {
        return this.fragments.size();
    }
}

Getting The Fragments Set Up

With the PagerAdapter complete, all that is now needed are the Fragments themselves. We need to implement two things:

The getFragments method in PageViewActivity

The MyFragment class

1. The getFragments method is straightforward. The only question is how the actual Fragments are created. For now, we'll leave that logic to the MyFragment class.

private List<Fragment> getFragments() {
    List<Fragment> fList = new ArrayList<Fragment>();

    fList.add(MyFragment.newInstance("Fragment 1"));
    fList.add(MyFragment.newInstance("Fragment 2"));
    fList.add(MyFragment.newInstance("Fragment 3"));

    return fList;
}

2. The MyFragment class has its own layout file. For this example, the layout file consists only of a simple TextView. We'll use this TextView to tell us which Fragment we are currently looking at (notice that in the getFragments code, we pass a String into the newInstance method).

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >

    <TextView
        android:id="@+id/textView"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerHorizontal="true"
        android:layout_centerVertical="true"
        android:textAppearance="?android:attr/textAppearanceLarge" />

</RelativeLayout>

And now the Fragment code itself. The only trick here is that we create the fragment using a static factory method, and we use a Bundle to pass information to the Fragment object itself.

public class MyFragment extends Fragment {

    public static final String EXTRA_MESSAGE = "EXTRA_MESSAGE";

    public static final MyFragment newInstance(String message) {
        MyFragment f = new MyFragment();
        Bundle bdl = new Bundle(1);
        bdl.putString(EXTRA_MESSAGE, message);
        f.setArguments(bdl);
        return f;
    }

    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
            Bundle savedInstanceState) {
        String message = getArguments().getString(EXTRA_MESSAGE);
        View v = inflater.inflate(R.layout.myfragment_layout, container, false);
        TextView messageTextView = (TextView) v.findViewById(R.id.textView);
        messageTextView.setText(message);
        return v;
    }
}

That's it! With the above code, you can easily get a simple page adapter up and running. You can also get the source code of the above tutorial from GitHub.

For More Advanced Developers

There are actually a few different types of FragmentPagerAdapters out there. It is important to know what they are and what they do, as knowing this bit of information could save you some time when creating complex applications with the ViewPager. The FragmentPagerAdapter is the more general PagerAdapter to use. This version does not destroy Fragments it has as long as the user can potentially go back to them. The idea is that this PagerAdapter is used mainly for "static" or unchanging Fragments. If you have Fragments that are more dynamic and change frequently, you may want to look into the FragmentStatePagerAdapter. This adapter is a bit more friendly to dynamic Fragments and doesn't consume nearly as much memory as the FragmentPagerAdapter.
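If you want to try the state-saving variant, the change is small. Here is a minimal sketch (my own assumption of how it would look with the same fragment list as MyPageAdapter above; FragmentStatePagerAdapter also lives in android.support.v4.app):

class MyStatePageAdapter extends FragmentStatePagerAdapter {

    private List<Fragment> fragments;

    public MyStatePageAdapter(FragmentManager fm, List<Fragment> fragments) {
        super(fm);
        this.fragments = fragments;
    }

    @Override
    public Fragment getItem(int position) {
        // Off-screen fragments may be destroyed and recreated by this
        // adapter, so this method should stay cheap to call repeatedly.
        return this.fragments.get(position);
    }

    @Override
    public int getCount() {
        return this.fragments.size();
    }
}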
Reference: Android Tutorial: Using the ViewPager from our JCG partner Isaac Taylor at the Programming Mobile blog.

The three greatest paragraphs ever written on encapsulation

Definition. Encapsulation isn’t sexy. It’s the chartered accounting of software design. Functional Java programming? Formula freakin’ one. Hybrid cloud computing? Solid booster rockets exploding on a midnight launch-pad. But encapsulation? Do not confuse, however, this lack of oomph with lack of importance. Invaluable invisible support systems surround us. Try living without plumbing for a week. Or electricity.

Coupled with its rather sorry PR, encapsulation groans under a second burden: that of ambiguity. Every programmer has her or his own definition of what encapsulation is. Most resemble one another, many overlap, but some do little more than reflect randomly-acquired habit. Yet if we are to discuss the three greatest encapsulation paragraphs we surely need a definition. You may not agree with it but you should at least be in a position to judge the degree to which those paragraphs accord with this definition.

How, then, do we solve this ambiguity? It would be nice if there were an international, standard definition of encapsulation upon which to construct our investigation, but for this to be the case would require an international organization for standardization, and this international organization for standardization would have had to define, “Encapsulation,” in some meaningful way. Well – would you believe it? – there actually is such an international organization for standardization. It’s called the, “International Organization for Standardization.” And these good people have officially defined encapsulation (in ISO/IEC 10746-3) as being:

Encapsulation: (drum roll) the property that the information contained in an object is accessible only through interactions at the interfaces supported by the object (cymbal crash!).

(Sound effects added by the author.)

Yes, yes, that’s not your definition. And it begs all sorts of questions. And it defines notions in terms of other, equally-nebulous notions. You are asked, however, not to mindlessly replace your own definition with the above but merely to consider it plausible for this article’s duration. Use it as a tool to crack the shells of the paragraphs to come, then toss it aside if you wish. It issues, after all, from the people whose food health-regulations ensured that the breakfast you ate this morning was relatively safe. Let us lend them our ears even if we take them back when they are finished.

Separation of concerns. You can still catch, “The Godfather: Part II,” in the cinema, Paul Anka croons away at the top of the U.S.A. music charts and the Nevada Desert shrieks as a two-hundred kiloton blast splatters radioactive debris over its vitrifying sands. Meanwhile in The Netherlands, professor Edsger Dijkstra completes the week by typing another essay. Renowned in computing circles and recipient of the prestigious A. M. Turing award a couple of years earlier, Dijkstra has been writing insightful essays on computing for over a decade, some – such as the infamous Go to statement considered harmful – generating among enthusiasts as much enmity as respect. Today’s essay is called On the role of scientific thought. It is perhaps a tad wordy, a tad meandering, yet it harbours a core of profound genius. The first of our three greatest paragraphs on encapsulation, the text needs nothing of the context of its surrounding essay to make itself understood:

“Let me try to explain to you, what to my taste is characteristic for all intelligent thinking.
It is, that one is willing to study in depth an aspect of one’s subject matter in isolation for the sake of its own consistency, all the time knowing that one is occupying oneself only with one of the aspects. We know that a program must be correct and we can study it from that viewpoint only; we also know that it should be efficient and we can study its efficiency on another day, so to speak. In another mood we may ask ourselves whether, and if so: why, the program is desirable. But nothing is gained —on the contrary!— by tackling these various aspects simultaneously. It is what I sometimes have called “the separation of concerns”, which, even if not perfectly possible, is yet the only available technique for effective ordering of one’s thoughts, that I know of.”

Here, Dijkstra whispers for the first time the phrase, “Separation of concerns,” little knowing that his voice would still reverberate through our digital cloisters almost forty years later. For who has not heard of the separation of concerns? And who would dream of understanding any non-trivial source code by consuming it in one, mighty brain-gulp rather than slicing it asunder to dissect its individual requirements, its features, its packages or its sub-structures? Dijkstra was not, however, discussing encapsulation specifically, so how can this be one of the greatest paragraphs on encapsulation? This is an encapsulation piece in that it underwrites encapsulation’s claim that an object’s information is separate from its interfaces. Indeed, encapsulation merely celebrates and exploits this separability, without which it would slump de-fanged, a forgotten curiosity. Today, we think this all obvious. What’s the big deal? Such is the mark of a great idea: it is difficult to conceive of a world in which that idea plays no part. We’ll encounter this plight again.

Information-hiding. Two years before Dijkstra clickity-clacked out his masterpiece, Canadian David Parnas had already lit upon a gem of his own. (Our journey declines along an axis of the conceptually abstract rather than the chronological.) Parnas – like Dijkstra, a university lecturer – published his On the Criteria To Be Used in Decomposing Systems into Modules in December, 1972. In his paper Parnas describes how to build a software system and concludes with the second of our three greatest paragraphs:

“We have tried to demonstrate by these examples that it is almost always incorrect to begin the decomposition of a system into modules on the basis of a flowchart. We propose instead that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others.”

Witness the birth of information-hiding and wonder no more whence Java’s private modifier comes. If some doubt surrounded the relevance of Dijkstra’s contribution, here is the very essence of encapsulation; for what restricts access to interfaces but that the object’s information is, in some sense, hidden whereas the interfaces are not? As with separation of concerns, information-hiding seems mundane to us now as steel to the builders of skyscrapers; few rivets clang home to iron-age thoughts. Programming boasts no such longevity, of course, yet forty years of searching has failed to produce information-hiding’s successor.
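For readers who prefer code to prose, here is a minimal, hypothetical Java sketch of Parnas's criterion (the class and names are my own invention, not from the essay): the difficult decision – how balances are stored – is hidden behind a small interface, so it can change without rippling outward.

import java.util.HashMap;
import java.util.Map;

public final class Ledger {

    // The hidden design decision: balances live in an in-memory map.
    // Swapping this for a database or a file changes nothing outside this class.
    private final Map<String, Long> balances = new HashMap<String, Long>();

    public void credit(String account, long amount) {
        balances.put(account, balanceOf(account) + amount);
    }

    public long balanceOf(String account) {
        Long current = balances.get(account);
        return current == null ? 0L : current;
    }
}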
Ripple-effects. Our third and final greatest paragraph springs from not one soul but three. In a paper entitled Structured design, written for the IBM Systems Journal, W. Stevens, G. Myers and L. Constantine introduced a wealth of extraordinary ideas to the computing universe of 1974. (What was it about the seventies that nurtured such innovation? The flares? The handlebar mustaches? The brownness?) While separation of concerns and information-hiding provided the foundational mechanisms by which encapsulation is realised, the Structured design paper crystallized the criteria by which the deployment of these mechanisms might be judged. Again, drained of their wonder by repeated use, the words’ over-familiarity belies their significance:

“The fewer and simpler the connections between modules, the easier it is to understand each module without reference to other modules. Minimizing connections between modules also minimises the paths along which changes and errors can propagate into other parts of the system, thus eliminating disastrous, ‘Ripple effects,’ where changes in one part causes errors in another, necessitating additional changes elsewhere, giving rise to new errors, etc.”

The paper dashes on, elaborating these points by introducing the concepts of coupling and cohesion – thereby supplying strong contenders for the fourth and fifth greatest encapsulation paragraphs – but the above exemplifies encapsulation performed well and, more interestingly, badly. When faced with poorly-structured source code, we tend to recoil viscerally from the sight of countless dependencies radiating directionlessly throughout the system. Still today the ripple-effects identified by Messrs Stevens et al. cost our industry hundreds of millions of dollars every year which, given the tools furnished by our three papers here, is odd.

Summary. None of the papers presented here enjoyed complete originality even on first publication. Literary shards presaged all. These passages, however, consolidated – and to a huge degree innovated – key principles by which all good software continues to be built today. Their authors deserve recognition as some of computing’s finest minds. Concepts evolve. The world for which software was written in the seventies is not the world now outside our windows. Perhaps some day software will not be composed of individual, interacting parts, and so separation of concerns, information-hiding and ripple-effect will fade from the programmer’s vocabulary. That day, however, is not yet come.

Reference: The three greatest paragraphs ever written on encapsulation from our JCG partner Edmund Kirwan at the A blog about software blog.

Java API for JSON Processing (JSR-353) – Stream APIs

Java will soon have a standard set of APIs for processing JSON as part of Java EE 7. This standard is being defined as JSR 353 – Java API for JSON Processing (JSON-P) – and it is currently at the Final Approval Ballot. JSON-P offers both object-oriented and stream-based approaches; in this post I will introduce the stream APIs. You can get the JSON-P reference implementation from the link below:

http://java.net/projects/jsonp/

JsonGenerator (javax.json.stream)

JsonGenerator makes it very easy to create JSON. With its fluent API, the code that produces the JSON very closely resembles the resulting JSON.

package blog.jsonp;

import java.util.*;

import javax.json.Json;
import javax.json.stream.*;

public class GeneratorDemo {

    public static void main(String[] args) {
        Map<String, Object> properties = new HashMap<String, Object>(1);
        properties.put(JsonGenerator.PRETTY_PRINTING, true);
        JsonGeneratorFactory jgf = Json.createGeneratorFactory(properties);
        JsonGenerator jg = jgf.createGenerator(System.out);

        jg.writeStartObject()                    // {
            .write("name", "Jane Doe")           //    "name":"Jane Doe",
            .writeStartObject("address")         //    "address":{
                .write("type", 1)                //       "type":1,
                .write("street", "1 A Street")   //       "street":"1 A Street",
                .writeNull("city")               //       "city":null,
                .write("verified", false)        //       "verified":false
            .writeEnd()                          //    },
            .writeStartArray("phone-numbers")    //    "phone-numbers":[
                .writeStartObject()              //       {
                    .write("number", "555-1111") //          "number":"555-1111",
                    .write("extension", "123")   //          "extension":"123"
                .writeEnd()                      //       },
                .writeStartObject()              //       {
                    .write("number", "555-2222") //          "number":"555-2222",
                    .writeNull("extension")      //          "extension":null
                .writeEnd()                      //       }
            .writeEnd()                          //    ]
        .writeEnd()                              // }
        .close();
    }

}

Output

Below is the output from running the GeneratorDemo.

{
    "name":"Jane Doe",
    "address":{
        "type":1,
        "street":"1 A Street",
        "city":null,
        "verified":false
    },
    "phone-numbers":[
        {
            "number":"555-1111",
            "extension":"123"
        },
        {
            "number":"555-2222",
            "extension":null
        }
    ]
}

JsonParser (javax.json.stream)

Using JsonParser, we will parse the output of the previous example to get the address information. JsonParser provides a depth-first traversal of events corresponding to the JSON structure; different data can be obtained from the JsonParser depending on the type of the event.

package blog.jsonp;

import java.io.FileInputStream;

import javax.json.Json;
import javax.json.stream.JsonParser;
import javax.json.stream.JsonParser.Event;

public class ParserDemo {

    public static void main(String[] args) throws Exception {
        try (FileInputStream json = new FileInputStream("src/blog/jsonp/input.json")) {
            JsonParser jr = Json.createParser(json);
            Event event = null;

            // Advance to "address" key
            while (jr.hasNext()) {
                event = jr.next();
                if (event == Event.KEY_NAME && "address".equals(jr.getString())) {
                    event = jr.next();
                    break;
                }
            }

            // Output contents of "address" object
            while (event != Event.END_OBJECT) {
                switch (event) {
                    case KEY_NAME: {
                        System.out.print(jr.getString());
                        System.out.print(" = ");
                        break;
                    }
                    case VALUE_FALSE: {
                        System.out.println(false);
                        break;
                    }
                    case VALUE_NULL: {
                        System.out.println("null");
                        break;
                    }
                    case VALUE_NUMBER: {
                        if (jr.isIntegralNumber()) {
                            System.out.println(jr.getInt());
                        } else {
                            System.out.println(jr.getBigDecimal());
                        }
                        break;
                    }
                    case VALUE_STRING: {
                        System.out.println(jr.getString());
                        break;
                    }
                    case VALUE_TRUE: {
                        System.out.println(true);
                        break;
                    }
                    default: {
                    }
                }
                event = jr.next();
            }
        }
    }

}

Output

Below is the output from running the ParserDemo.

type = 1
street = 1 A Street
city = null
verified = false
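For contrast with the stream APIs above – and purely as a sketch, since the object model is not the focus of this post – the same address lookup using JSR-353's object-oriented API might look like this (ReaderDemo is a made-up name following the post's naming style):

package blog.jsonp;

import java.io.FileInputStream;

import javax.json.Json;
import javax.json.JsonObject;
import javax.json.JsonReader;

public class ReaderDemo {

    public static void main(String[] args) throws Exception {
        try (FileInputStream json = new FileInputStream("src/blog/jsonp/input.json")) {
            // Reads the whole document into memory, unlike the streaming JsonParser
            JsonReader reader = Json.createReader(json);
            JsonObject doc = reader.readObject();
            JsonObject address = doc.getJsonObject("address");
            System.out.println("street = " + address.getString("street"));
        }
    }
}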
MOXy and the Java API for JSON Processing (JSR-353)

Mapping your JSON to domain objects is still the easiest way to interact with JSON. Now that JSR-353 is finalizing, we will be integrating it into MOXy's JSON binding. You can track our progress using the following link:

Bug 405161 – MOXy support for Java API for JSON Processing (JSR-353)

Reference: Java API for JSON Processing (JSR-353) – Stream APIs from our JCG partner Blaise Doughan at the Java XML & JSON Binding blog.

Inadvertent Recursion Protection with Java ThreadLocals

Now here's a little trick for those of you hacking around with third-party tools, trying to extend them without fully understanding them (yet!). Assume the following situation:

You want to extend a library that exposes a hierarchical data model (let's assume you want to extend Apache Jackrabbit)

That library internally checks access rights before accessing any nodes of the content repository

You want to implement your own access control algorithm

Your access control algorithm will access other nodes of the content repository

… which in turn will again trigger access control

… which in turn will again access other nodes of the content repository

Infinite recursion, possibly resulting in a StackOverflowError if you're not recursing breadth-first. Now you have two options:

1. Take the time, sit down, understand the internals, and do it right. You probably shouldn't recurse into your access control once you've reached your own extension. In the case of extending Jackrabbit, this would be done by using a System Session to further access nodes within your access control algorithm. A System Session usually bypasses access control.

2. Be impatient, wanting to get results quickly, and prevent recursion with a trick.

Of course, you really should opt for option 1. But who has the time to understand everything? Here's how to implement that trick:

/**
 * This thread local indicates whether you've
 * already started recursing with level 1
 */
static final ThreadLocal<Boolean> RECURSION_CONTROL =
    new ThreadLocal<Boolean>();

/**
 * This method executes a delegate in a "protected"
 * mode, preventing recursion. If an inadvertent
 * recursion occurred, return a default instead
 */
public static <T> T protect(
    T resultOnRecursion,
    Protectable<T> delegate)
throws Exception {

    // Not recursing yet, allow a single level of
    // recursion and execute the delegate once
    if (RECURSION_CONTROL.get() == null) {
        try {
            RECURSION_CONTROL.set(true);
            return delegate.call();
        }
        finally {
            RECURSION_CONTROL.remove();
        }
    }

    // Abort recursion and return early
    else {
        return resultOnRecursion;
    }
}

/**
 * An API to wrap your code with
 */
public interface Protectable<T> {
    T call() throws Exception;
}

This works easily, as can be seen in this usage example:

public static void main(String[] args) throws Exception {
    protect(null, new Protectable<Void>() {
        @Override
        public Void call() throws Exception {

            // Recurse infinitely
            System.out.println("Recursing?");
            main(null);
            System.out.println("No!");
            return null;
        }
    });
}

The recursive call to the main() method will be aborted by the protect method, which returns early instead of executing call(). This idea can also be further elaborated by using a Map of ThreadLocals instead, allowing for various keys or contexts for which to prevent recursion (a sketch of that variant follows the next example). You could also put an Integer into the ThreadLocal, incrementing it on recursion and allowing for at most N levels of recursion:

static final ThreadLocal<Integer> RECURSION_CONTROL =
    new ThreadLocal<Integer>();

public static <T> T protect(
    T resultOnRecursion,
    Protectable<T> delegate)
throws Exception {

    Integer level = RECURSION_CONTROL.get();
    level = (level == null) ? 0 : level;

    if (level < 5) {
        try {
            RECURSION_CONTROL.set(level + 1);
            return delegate.call();
        }
        finally {
            if (level > 0)
                RECURSION_CONTROL.set(level - 1);
            else
                RECURSION_CONTROL.remove();
        }
    }
    else {
        return resultOnRecursion;
    }
}
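And here is a minimal sketch of the keyed variant mentioned above – my own reading of the "Map" idea, using a single ThreadLocal holding a map of currently active keys rather than a literal map of ThreadLocals:

// Assumes: import java.util.HashMap; import java.util.Map;
static final ThreadLocal<Map<Object, Boolean>> RECURSION_KEYS =
    new ThreadLocal<Map<Object, Boolean>>() {
        @Override
        protected Map<Object, Boolean> initialValue() {
            return new HashMap<Object, Boolean>();
        }
    };

public static <T> T protect(
    Object key,
    T resultOnRecursion,
    Protectable<T> delegate)
throws Exception {

    Map<Object, Boolean> active = RECURSION_KEYS.get();

    // Each key (e.g. "access-control") is guarded independently
    if (active.get(key) == null) {
        try {
            active.put(key, true);
            return delegate.call();
        }
        finally {
            active.remove(key);
        }
    }
    else {
        return resultOnRecursion;
    }
}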
But again: maybe you should just take a couple of minutes more and learn how the internals of your host library really work, and get things right from the beginning… as always when applying tricks and hacks!

Reference: Inadvertent Recursion Protection with Java ThreadLocals from our JCG partner Lukas Eder at the JAVA, SQL, AND JOOQ blog.

JUnit: Naming Individual Test Cases in a Parameterized Test

A couple of years ago I wrote about JUnit parameterized tests. One of the things I didn't like about them was that JUnit named the individual test cases using numbers, so if one failed you had no idea which test parameters caused the failure. An Eclipse screenshot in the original post shows what I mean: each test case is labelled only with an index like [0], [1], [2].

However, in JUnit 4.11, the @Parameters annotation now takes a name argument which can be used to display the parameters in the test name and hence make them more descriptive. You can use the following placeholders in this argument, and they will be replaced by actual values at runtime by JUnit:

{index}: the current parameter index

{0}, {1}, …: the first, second, and so on, parameter value

Here is an example:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class StringSortTest {

    @Parameters(name = "{index}: sort[{0}]={1}")
    public static Collection<Object[]> data() {
        return Arrays.asList(new Object[][] {
            { "abc", "abc" },
            { "cba", "abc" },
            { "abcddcba", "aabbccdd" },
            { "a", "a" },
            { "aaa", "aaa" },
            { "", "" }
        });
    }

    private final String input;
    private final String expected;

    public StringSortTest(final String input, final String expected) {
        this.input = input;
        this.expected = expected;
    }

    @Test
    public void testSort() {
        assertEquals(expected, sort(input));
    }

    private static String sort(final String s) {
        final char[] charArray = s.toCharArray();
        Arrays.sort(charArray);
        return new String(charArray);
    }
}

When you run the test, you will see the individual test cases named using the pattern (for example, "0: sort[abc]=abc"), so it is easy to identify the parameters used in each test case.

Note that due to a bug in Eclipse, names containing brackets are truncated. That's why I had to use sort[{0}] instead of sort({0}).

Reference: JUnit: Naming Individual Test Cases in a Parameterized Test from our JCG partner Fahd Shariff at the fahd.blog blog.

War Games, Pair Testing and Other Fun Ways to Find Bugs

I've already examined how important good testing is to the health of a project, a product and an organization. There's a lot more to good testing than running an automated test suite in Continuous Integration and forcing someone to walk through functional test scripts and checklists. A good tester will spend time exploring the app, making sure that they really understand it and that the app actually makes sense, finding soft spots and poking them to uncover problems that nobody expects, providing valuable information and feedback to the team. What's better than a good tester? Two good testers working together…

Pair Testing – Two Heads are Better than One

Pair testing is an exploratory testing approach where two testers work through scenarios together, combining their knowledge of the app and their unique skills and experience to duplicate hard-to-find bugs or to do especially deep testing of some part of a system. Like in pair programming, one person drives, defining the goals of the testing session, the time limit and the starting scenarios and providing the hands at the keyboard; and the other person navigates, observes, takes notes, advises, asks questions, double-checks, challenges and causes trouble. As a pair they can help each other through misunderstandings and blocks, build on each other's ideas to come up with new variations and more ways to attack the app, and push each other to find more problems, and together they have a better chance of noticing small inconsistencies and errors that one person alone might not consider important.

Pair testing can be especially effective if you pair developers and testers together: a good tester knows where to look for problems and how to break software; a good developer can use their understanding of the code and design to suggest alternative scenarios and variations, and together they can help each other recognize inconsistencies and identify unexpected behaviour. This is not just a good way to track down bugs – it's also a good way for people to learn from each other about the app and about testing in general. In our team, developers and testers regularly pair up to review and test hard problems together, like validating changes to complex business rules or operational testing of distributed failover and recovery scenarios.

Pair testing, especially pairing developers and testers together, is a mature team practice. You need testers and developers who are confident and comfortable working together, who trust and respect each other, who understand the value and purpose of exploratory testing, and who are all willing to put the time in to do a good job.

War Games and Team Testing

If two heads are better than one, then what about four heads, or eight, or ten or …? You can get more perspectives and create more chances to learn by running War Games: team testing sessions which put a bunch of people together and try to get as close as possible to recreating real-life conditions. In team testing, one person defines the goals, roles, time limit and main scenarios. Multiple people end up driving, each playing different roles or assuming different personas, some people trying crazy shit to see what happens, others being more disciplined, while somebody else shoulder-surfs or looks through logs and code as people find problems.
More people means more variations and more chances to create unexpected situations, more eyes to look out for inconsistencies and finishing details ("is the system supposed to do this when I do that?"), and more hands to try the same steps at the same time to test for concurrency problems. At worst, you'll have a bunch of monkeys bashing at keyboards and maybe finding some bugs. But a well-run team test session is a beautiful thing, where people feed on each other's findings and ideas and improvise in a loosely structured way, like a jazz ensemble.

Testing this way makes a lot of sense for interactive systems like online games, social networks, online stores or online trading: apps that support different kinds of users playing different roles with different configurations and different navigation options, which can lead to many different paths through the app and many different experiences.

With so many people doing so many things, it's important that everyone (or at least someone) has the discipline to keep track of what they are doing, and makes notes as they find problems. But even if people are keeping decent notes, sometimes all that you really know is that somebody found a problem, but nobody is sure what exactly they were doing at the time or what the steps are to reproduce it. It can be like finding a problem in production, so you need to use similar troubleshooting techniques and rely more on logs and error files to help retrace steps.

Team testing can be done in large groups, sometimes even as part of acceptance testing or field testing with customers. But there are diminishing returns: as more people get involved, it's harder to keep everyone motivated and focused, and harder to understand and deal with the results. We used to invite the entire team into team testing sessions, to get as many eyes as possible on problems, and to give everyone an opportunity to see the system working as a whole (which is important when you are still building it, and everyone has been focused on their own pieces). But now we've found that a team as small as four to six people who really understand the system is usually enough: better than two people, and much more efficient than ten, or a hundred. You need enough people to create and explore enough options, but a small enough group that everyone can still work closely together and stay engaged.

Team testing is another mature team practice: you need people who trust each other and are comfortable working together, who are reasonably disciplined, who understand exploratory testing and who like finding bugs.

Let's Play a Game

We relied on War Games a lot when we were first building the system, before we had good automated testing coverage in place. It was an inefficient but effective way to increase code coverage and find good bugs before our customers did. We still rely on War Games today, but now it's about looking for real-life bugs: testing at the edges, testing weird combinations and workflow-chaining problems, looking closely for usability and finishing issues, forcing errors, finding setup and configuration mistakes, and hunting down timing errors and races and locking problems.

Team testing is one of the most useful ways to find subtle (and not so subtle) bugs and to build confidence in our software development and testing practices. Everyone is surprised, and sometimes disappointed, by the kinds of problems that can be found this way, even after our other testing and reviews have been done.
This kind of testing is not just about finding bugs that need to be fixed: it points out areas where we need to improve, and raises alarms if too many – or any scary – problems are found. This is because War Games only make sense in the later stages of development, once you have enough of a working system together to do real system testing, and after you have already done your basic functional testing and regression. It's expensive to get multiple people together, to set up the system for a group of people to test, to define the roles and scenarios, and then to run the test sessions and review the results – you don't want to waste everyone's time finding basic functional bugs or regressions that should have and could have been picked up earlier. So whatever you do find should be a (not-so-nice) surprise.

War Games can also be exhausting – good exploratory testing like this is only effective if everyone is intensely involved; it takes energy and commitment. This isn't something that we do every week or even every iteration. We do it when somebody (a developer or a tester or a manager) recognizes that we've changed something important in the workflow or the architecture or the business rules; or decides that it's time, because we've made enough small changes and fixes over enough iterations, or because we've seen some funny bugs in production recently – time to run through key scenarios together as a group and see what we can find.

What makes War Games work is that they are games: an intensity and competition builds naturally when you get smart people working together on a problem, and a sense of play.

"Framing something like software testing in terms of gaming, and borrowing some of their ideas and mechanics, applying them and experimenting can be incredibly worthwhile." – Jonathan Kohl, Applying Gamification to Software Testing

When people realize that it's fun to find more bugs and better bugs than the other people on the team, they push each other to try harder, which leads to smarter and better testing, and to everyone learning more about the system. It's a game, and it can be fun – but it's serious business too.

Reference: War Games, Pair Testing and Other Fun Ways to Find Bugs from our JCG partner Jim Bird at the Building Real Software blog.

Score DRL: faster and easier in OptaPlanner

For OptaPlanner (= Drools Planner) 6.0.0.Beta1, I ‘ve replaced the ConstraintOccurrence with the much more elegant ConstraintMatch system. The result is that your score DRL files are:much faster easier to read and write far less error-prone, because they make it a lot harder to cause score corruptionLet’s look at the results first, before we look at the code readability improvements.     Faster ‘Show me the benchmarks!’ The average calculate count – which is the number of scores OptaPlanner calculates per second – has risen dramatically.N queens: +39% calc count for 256 queens Cloud balance: +27% calc count on average Vehicle routing: +40% calc count on average Course scheduling: +20% calc count on average Exam scheduling: +23% calc count on average Nurse rostering: +7% calc count on averageHowever, this doesn’t necessarily imply a dramatic improvement in result, especially if the old result is already (near) optimal. It means you can get the exact same result in far less time. But – as with all other performance improvements – gives no promise for significantly better results in the same time. It does helps when scaling out.Cloud balance: +0.58% feasible soft score on average in 5 minutes Vehicle routing: +0.14% feasible soft score on average in 5 minutes Course scheduling: +2.28% feasible soft score on average in 7 minutes Exam scheduling: +0.53% feasible soft score on average in 7 minutesSeveral of the 30 Vehicle routing datasets were already solved optimally in 5 minutes, so these drag the average down, despite the high vehicle routing speedup. All benchmarks use the exact same Drools and OptaPlanner version, so these numbers show only the improvements of the ConstraintMatch change. There are several other improvements in 6.0. How does the average calculate count scale? Here are a some charts comparing the old ConstraintOccurrence with new ConstraintMatch. The new ConstraintMatch’s current implementation hasn’t been fully optimized, so it’s sometimes referred to being in ‘slow’ mode (even though it’s faster). CloudBalance:Vehicle routing:Course scheduling:Exam rostering:Easier ‘Show me the code!’ For starters, the accumulateHardScore and accumulateSoftScore rules are removed. Less boilerplate. Next, each of the score rule’s RHS (= then side) is simpler: Before: rule "conflictingLecturesSameCourseInSamePeriod" when ... then insertLogical(new IntConstraintOccurrence("conflictingLecturesSameCourseInSamePeriod", ConstraintType.HARD, -1, $leftLecture, $rightLecture)); end After: rule "conflictingLecturesSameCourseInSamePeriod" when ... then scoreHolder.addHardConstraintMatch(kcontext, -1); end Notice that you don’t need to repeat the ruleName or the causes (the lectures) no more. OptaPlanner figures out it itself through the kcontext variable. Drools automatically exposes the kcontext variable in the RHS, so you don’t need any extra code for it. Also, the limited ConstraintType enum has been replaced by a Score type specific method, to allow OptaPlanner to better support multilevel score types, for example HardMediumSoftScore and BendableScore. You also no longer need to hack the API’s to get a list of all ConstraintOcurrence’s: the ConstraintMatch objects (and their totals per constraint) are available directly on the ScoreDirector API.   Reference: Score DRL: faster and easier in OptaPlanner from our JCG partner Geoffrey De-Smet at the Drools & jBPM blog. ...

Java EE CDI bean scopes

Contexts and Dependency Injection (CDI) for the Java EE platform is a feature that helps to bind together the web tier and the transactional tier of the Java EE platform. CDI is a set of services that, used together, make it easy for developers to use enterprise beans along with JavaServer Faces technology in web applications.

In CDI, a bean is a source of contextual objects that define application state and/or logic. A Java EE component is a bean if the lifecycle of its instances may be managed by the container according to the lifecycle context model defined in the CDI specification. A managed bean is implemented by a Java class, which is called its bean class. A top-level Java class is a managed bean if it is defined to be a managed bean by any other Java EE technology specification, such as the JavaServer Faces technology specification.

When we need to use a bean that injects another bean class in a web application, the bean needs to be able to hold state over the duration of the user's interaction with the application. The way to define this state is to give the bean a scope. A scope gives an object a well-defined lifecycle context. A scoped object can be automatically created when it is needed and automatically destroyed when the context in which it was created ends. Moreover, its state is automatically shared by any clients that execute in the same context. When we create a Java EE component that is a managed bean, it becomes a scoped object, which exists in a well-defined lifecycle context.

The scopes provided by CDI are presented below.

1. Request – @RequestScoped

This scope describes a user's interaction with a web application in a single HTTP request. An instance of a @RequestScoped-annotated bean has an HTTP request lifecycle.

2. Session – @SessionScoped

This scope describes a user's interaction with a web application across multiple HTTP requests.

3. Application – @ApplicationScoped

In this case the state is shared across all users' interactions with a web application. The container provides the same instance of the @ApplicationScoped-annotated bean to all client requests.

4. Conversation – @ConversationScoped

This scope describes a user's interaction with a JavaServer Faces application, within explicit developer-controlled boundaries that extend the scope across multiple invocations of the JavaServer Faces lifecycle. All long-running conversations are scoped to a particular HTTP servlet session and may not cross session boundaries. Note that with ConversationScoped beans we achieve the same functionality we need from a ViewScoped JSF bean. In addition, with ConversationScoped beans we can maintain the same conversation – or state – between distinct page requests. But when we leave a conversation without ending it, the managed bean stays active until it times out.

A thing to notice is that beans that use session or conversation scope must be serializable. This is because the container passivates the HTTP session from time to time, so when the session is activated again the beans' state must be retrieved.

5. Singleton – @Singleton pseudo-scope

This is a pseudo-scope. It defines that a bean is instantiated only once. When a CDI managed bean is injected into another bean, the CDI container makes use of a proxy, and the proxy is the one that handles calls to the bean. However, @Singleton-annotated beans don't have a proxy object: clients hold a direct reference to the singleton instance. So what happens when a client is serialized? We must ensure that the singleton bean remains a singleton. There are a few ways to do so, such as having the singleton bean implement writeReplace() and readResolve() (as defined by the Java serialization specification), making sure the client keeps only a transient reference to the singleton bean, or giving the client a reference of type Instance<X>, where X is the bean type of the singleton bean.

6. Dependent – @Dependent pseudo-scope

This pseudo-scope means that an object exists to serve exactly one client (bean) and has the same lifecycle as that client (bean). This is the default scope for a bean which does not explicitly declare a scope type. An instance of a dependent bean is never shared between different clients or different injection points. It is strictly a dependent object of some other object: it is instantiated when the object it belongs to is created, and destroyed when the object it belongs to is destroyed.
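As a minimal sketch of how these scopes combine in practice (the bean names here are hypothetical, not from the tutorial):

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import javax.enterprise.context.RequestScoped;
import javax.enterprise.context.SessionScoped;
import javax.inject.Inject;

// One instance per HTTP session; must be serializable because
// the container may passivate the session.
@SessionScoped
public class ShoppingCart implements Serializable {

    private final List<String> items = new ArrayList<String>();

    public void add(String item) {
        items.add(item);
    }

    public List<String> getItems() {
        return items;
    }
}

// A new instance for every HTTP request. The injected reference is a
// contextual proxy: each call is routed to the current session's cart.
@RequestScoped
public class CheckoutController {

    @Inject
    private ShoppingCart cart;

    public int itemCount() {
        return cart.getItems().size();
    }
}

(The two classes are shown together for brevity; in a real project each public class goes in its own file.)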
All predefined scopes except @Dependent are contextual scopes. CDI places beans of contextual scope in the context whose lifecycle is defined by the Java EE specifications. For example, a session context and its beans exist during the lifetime of an HTTP session. Injected references to the beans are contextually aware: the references always apply to the bean that is associated with the context for the thread that is making the reference. The CDI container ensures that the objects are created and injected at the correct time, as determined by the scope that is specified for these objects. You can also define and implement custom scopes, which can be used by those who implement and extend the CDI specification.

This was a tutorial on all the bean scopes provided by CDI.

References:

Seam framework reference documentation

The Java EE 6 Tutorial