Cobertura and Maven: Code Coverage for Integration and Unit Tests

On the Turmeric project, we maintain a nightly dashboard. On the dashboard we collect statistics about the project, including code coverage, FindBugs analysis, and other metrics. We had been using the Maven EMMA plugin to provide code coverage, but ran into a problem with EMMA: it was causing test failures after the classes were instrumented. So we disabled code coverage, as we needed accurate test results during our builds. However, we still needed code coverage, and more importantly, we needed coverage for the existing test suite, which is really an integration test suite rather than a unit test suite. The Cobertura and EMMA plugins are both designed to work with unit tests, so we have to work around that limitation:

1. Instrument the classes.
2. Jar up the instrumented classes so they can be used later in the build.
3. Tell the integration tests to use the instrumented classes for their dependencies.
4. Generate an XML report of the results.

I tried doing this without falling back to Ant, but every time I tried to use the maven-site-plugin and configure it to generate the reports, it would complain that cobertura:check wasn't configured correctly. In our case I didn't need check to run; I just needed the reports generated. So Ant and AntContrib to the rescue.
The following is the complete Maven profile I came up with:

<profile>
  <id>cobertura</id>
  <dependencies>
    <dependency>
      <groupId>net.sourceforge.cobertura</groupId>
      <artifactId>cobertura</artifactId>
      <optional>true</optional>
      <version></version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>cobertura-maven-plugin</artifactId>
        <configuration>
          <instrumentation>
            <excludes>
              <exclude>org/ebayopensource/turmeric/test/**/*.class</exclude>
              <exclude>org/ebayopensource/turmeric/common/v1/**/*.class</exclude>
            </excludes>
          </instrumentation>
        </configuration>
        <executions>
          <execution>
            <id>cobertura-instrument</id>
            <phase>process-classes</phase>
            <goals>
              <goal>instrument</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <executions>
          <execution>
            <id>cobertura-jar</id>
            <phase>post-integration-test</phase>
            <goals>
              <goal>jar</goal>
            </goals>
            <configuration>
              <classifier>cobertura</classifier>
              <classesDirectory>${basedir}/target/generated-classes/cobertura</classesDirectory>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-install-plugin</artifactId>
        <version>2.3.1</version>
        <executions>
          <execution>
            <id>cobertura-install</id>
            <phase>install</phase>
            <goals>
              <goal>install</goal>
            </goals>
            <configuration>
              <classifier>cobertura</classifier>
            </configuration>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-antrun-plugin</artifactId>
        <executions>
          <execution>
            <phase>verify</phase>
            <configuration>
              <tasks>
                <taskdef classpathref='maven.runtime.classpath' resource='tasks.properties' />
                <taskdef classpathref='maven.runtime.classpath' resource='net/sf/antcontrib/antcontrib.properties' />
                <available file='${project.build.directory}/cobertura/cobertura.ser' property='ser.file.exists' />
                <if>
                  <equals arg1='${ser.file.exists}' arg2='true' />
                  <then>
                    <echo message='Executing cobertura report' />
                    <mkdir dir='${project.build.directory}/site/cobertura' />
                    <cobertura-report format='xml' destdir='${project.build.directory}/site/cobertura' datafile='${project.build.directory}/cobertura/cobertura.ser' />
                  </then>
                  <else>
                    <echo message='No SER file found.' />
                  </else>
                </if>
              </tasks>
            </configuration>
            <goals>
              <goal>run</goal>
            </goals>
          </execution>
        </executions>
        <dependencies>
          <dependency>
            <groupId>ant-contrib</groupId>
            <artifactId>ant-contrib</artifactId>
            <version>20020829</version>
          </dependency>
        </dependencies>
      </plugin>
    </plugins>
  </build>
</profile>

Note: Do not use the cobertura:cobertura goal with this profile. It will fail the build because it will try to instrument the classes twice. The use of Ant and AntContrib was a necessity because there is no cobertura:report goal; the report is expected to run during the site generation phase. However, that causes the check goal to run as well, and we didn't need that. So maybe I'll work up a patch to add a reporting goal that runs the report without having to run the site goal as well. Hopefully this helps some people, as I lost much hair working this out. Happy coding and don't forget to share! Reference: Enable Code Coverage for Integration and Unit Tests using Cobertura and Maven from our JCG partner David Carver at the Intellectual Cramps blog....
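The profile installs the instrumented classes as a secondary artifact with the cobertura classifier. To point the integration tests at those instrumented classes, the integration-test module can depend on that classifier artifact. A sketch of what such a dependency might look like — the groupId and artifactId here are hypothetical placeholders, not taken from the actual Turmeric build:

```xml
<!-- In the integration-test module's POM: depend on the instrumented
     classes installed with the 'cobertura' classifier.
     groupId/artifactId below are hypothetical placeholders. -->
<dependency>
  <groupId>org.ebayopensource.turmeric</groupId>
  <artifactId>some-service</artifactId>
  <version>${project.version}</version>
  <classifier>cobertura</classifier>
  <scope>test</scope>
</dependency>
```

With the instrumented classes on the integration tests' classpath, Cobertura can record coverage data while those tests run.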

XACML In The Cloud

The eXtensible Access Control Markup Language (XACML) is the de facto standard for authorization. The specification defines an architecture (see image on the right) that relates the different components that make up an XACML-based system. This post explores a variation on the standard architecture that is better suited for use in the cloud.

Authorization in the Cloud

In cloud computing, multiple tenants share the same resources, which they reach over a network. The entry point into the cloud must, of course, be protected using a Policy Enforcement Point (PEP). Since XACML implements Attribute-Based Access Control (ABAC), we can use an attribute to indicate the tenant, and use that attribute in our policies. We could, for instance, use the following standard attribute, which is defined in the core XACML specification:

urn:oasis:names:tc:xacml:1.0:subject:subject-id-qualifier

This identifier indicates the security domain of the subject. It identifies the administrator and policy that manages the name-space in which the subject id is administered. Using this attribute, we can target policies to the right tenant.

Keeping Policies For Different Tenants Separate

We don't want to mix policies for different tenants. First of all, we don't want a change in policy for one tenant ever to be able to affect a different tenant. Keeping those policies separate is one way to ensure that can never happen. We could achieve the same goal by keeping all policies together and carefully writing top-level policy sets. But we are better off employing the security best practice of segmentation and keeping policies for different tenants separate, in case there is ever a problem with those top-level policies or with the Policy Decision Point (PDP) evaluating them (defense in depth).

Multi-tenant XACML Architecture

We can use the composite pattern to implement a PDP that our cloud PEP can call.
This composite PDP will extract the tenant attribute from the request, and forward the request to a tenant-specific Context Handler/PDP/PIP/PAP system based on the value of the tenant attribute. In the figure on the right, the composite PDP is called Multi-tenant PDP. It uses a component called Tenant-PDP Provider that is responsible for looking up the correct PDP based on the tenant attribute. Don’t forget to share! Reference: XACML In The Cloud from our JCG partner Remon Sinnema at the Secure Software Development blog....
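A minimal sketch of that composite in Java — the Pdp interface and request shape here are hypothetical stand-ins, since the real component and interface names will depend on the XACML implementation used:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical PDP abstraction; real types depend on the XACML implementation.
interface Pdp {
    String evaluate(Map<String, String> request); // returns Permit, Deny, etc.
}

/** Composite PDP: routes each request to the PDP of the tenant it names. */
class MultiTenantPdp implements Pdp {
    static final String TENANT_ATTR =
            "urn:oasis:names:tc:xacml:1.0:subject:subject-id-qualifier";

    private final Map<String, Pdp> pdpsByTenant = new ConcurrentHashMap<>();

    // Plays the role of the Tenant-PDP Provider: one PDP registered per tenant.
    void registerTenant(String tenant, Pdp tenantPdp) {
        pdpsByTenant.put(tenant, tenantPdp);
    }

    @Override
    public String evaluate(Map<String, String> request) {
        // Extract the tenant attribute and forward to the tenant-specific PDP.
        String tenant = request.get(TENANT_ATTR);
        Pdp tenantPdp = pdpsByTenant.get(tenant);
        if (tenantPdp == null) {
            return "Indeterminate"; // unknown tenant: fail safe
        }
        return tenantPdp.evaluate(request);
    }
}
```

The important property is that each tenant's Context Handler/PDP/PIP/PAP system stays isolated behind its own Pdp instance; the composite only dispatches and never evaluates policy itself.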

Android Homescreen Widget with AlarmManager

In this tutorial we will learn to create a widget with an update interval of less than 30 minutes, using AlarmManager. New update: in Android 4.1, a new feature has been introduced for homescreen widgets which enables a widget to reorganize its view when resized. To support this feature, a new method, onAppWidgetOptionsChanged(), has been introduced in the AppWidgetProvider class. This method gets called in response to the ACTION_APPWIDGET_OPTIONS_CHANGED broadcast when the widget has been laid out at a new size.

Project Information: Meta-information about the project.
Platform Version: Android API Level 16.
IDE: Eclipse Helios Service Release 2
Emulator: Android 4.1
Prerequisite: Preliminary knowledge of the Android application framework, Intents, broadcast receivers, and AlarmManager.

Example with a fixed update interval of less than 30 minutes: in this tutorial we will create a time widget which shows the current time. This widget will get updated every second, and we will be using AlarmManager for it. Here, a repeating alarm is set for a one-second interval. But in a real-world scenario, it is not recommended to use a one-second repeating alarm, because it drains the battery fast. You have to follow the steps mentioned in the previous widget tutorial to write the widget layout file. But this time we are introducing a TextView field in the layout which will display the time. The content of "time_widget_layout.xml" is given below.
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical"
    android:background="@drawable/widget_background" >

    <TextView
        android:id="@+id/tvTime"
        style="@android:style/TextAppearance.Medium"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_gravity="center"
        android:layout_margin="4dip"
        android:gravity="center_horizontal|center_vertical"
        android:textColor="#000000" />

</LinearLayout>

Follow the same procedure to create the AppWidgetProvider metadata file. The content of the metadata file "widget_metadata.xml" is given below.

<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
    android:initialLayout="@layout/time_widget_layout"
    android:minHeight="40dp"
    android:minWidth="130dp"
    android:updatePeriodMillis="1800000" >
</appwidget-provider>

In this tutorial, onEnabled(), onDisabled(), onUpdate() and onAppWidgetOptionsChanged() have been defined, unlike the previous widget tutorial where only onUpdate() was defined.

onEnabled(): An instance of AlarmManager is created here to start the repeating timer and register the intent with the AlarmManager. As this method gets called at the very first instance of widget installation, it helps to set the repeating alarm only once.
onDisabled(): In this method, the alarm is canceled, because this method gets called as soon as the very last instance of the widget is removed/uninstalled, and we don't want to leave the alarm registered when it's not being used.
onUpdate(): This method updates the time on the remote TextView.
onAppWidgetOptionsChanged(): This method gets called when the widget is resized.

package com.rakesh.widgetalarmmanagerexample;

import java.util.Arrays;

import android.app.AlarmManager;
import android.app.PendingIntent;
import android.appwidget.AppWidgetManager;
import android.appwidget.AppWidgetProvider;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.widget.RemoteViews;
import android.widget.Toast;

public class TimeWidgetProvider extends AppWidgetProvider {

    @Override
    public void onDeleted(Context context, int[] appWidgetIds) {
        Toast.makeText(context, "TimeWidgetRemoved id(s):" + Arrays.toString(appWidgetIds),
                Toast.LENGTH_SHORT).show();
        super.onDeleted(context, appWidgetIds);
    }

    @Override
    public void onDisabled(Context context) {
        Toast.makeText(context, "onDisabled():last widget instance removed",
                Toast.LENGTH_SHORT).show();
        Intent intent = new Intent(context, AlarmManagerBroadcastReceiver.class);
        PendingIntent sender = PendingIntent.getBroadcast(context, 0, intent, 0);
        AlarmManager alarmManager = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        alarmManager.cancel(sender);
        super.onDisabled(context);
    }

    @Override
    public void onEnabled(Context context) {
        super.onEnabled(context);
        AlarmManager am = (AlarmManager) context.getSystemService(Context.ALARM_SERVICE);
        Intent intent = new Intent(context, AlarmManagerBroadcastReceiver.class);
        PendingIntent pi = PendingIntent.getBroadcast(context, 0, intent, 0);
        // First fire after 3 seconds, then repeat every second.
        am.setRepeating(AlarmManager.RTC_WAKEUP, System.currentTimeMillis() + 1000 * 3, 1000, pi);
    }

    @Override
    public void onUpdate(Context context, AppWidgetManager appWidgetManager, int[] appWidgetIds) {
        ComponentName thisWidget = new ComponentName(context, TimeWidgetProvider.class);
        for (int widgetId : appWidgetManager.getAppWidgetIds(thisWidget)) {
            // Get the remote views.
            RemoteViews remoteViews = new RemoteViews(context.getPackageName(),
                    R.layout.time_widget_layout);
            // Set the text with the current time.
            remoteViews.setTextViewText(R.id.tvTime, Utility.getCurrentTime("hh:mm:ss a"));
            appWidgetManager.updateAppWidget(widgetId, remoteViews);
        }
    }

    @Override
    public void onAppWidgetOptionsChanged(Context context, AppWidgetManager appWidgetManager,
            int appWidgetId, Bundle newOptions) {
        // Do some operation here, once you see that the widget has changed its size or position.
        Toast.makeText(context, "onAppWidgetOptionsChanged() called", Toast.LENGTH_SHORT).show();
    }
}

A broadcast receiver is defined to handle the intent registered with the alarm. This broadcast receiver gets called every second, because a repeating one-second alarm has been set in the AppWidgetProvider class. Here, the onReceive() method has been defined, which updates the widget with the current time; getCurrentTime() is used to get the current time.

package com.rakesh.widgetalarmmanagerexample;

import android.app.AlarmManager;
import android.app.PendingIntent;
import android.appwidget.AppWidgetManager;
import android.content.BroadcastReceiver;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.os.PowerManager;
import android.widget.RemoteViews;
import android.widget.Toast;

public class AlarmManagerBroadcastReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        PowerManager pm = (PowerManager) context.getSystemService(Context.POWER_SERVICE);
        PowerManager.WakeLock wl = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "YOUR TAG");
        // Acquire the lock.
        wl.acquire();

        // You can do the processing here: update the widget/remote views.
        RemoteViews remoteViews = new RemoteViews(context.getPackageName(),
                R.layout.time_widget_layout);
        remoteViews.setTextViewText(R.id.tvTime, Utility.getCurrentTime("hh:mm:ss a"));
        ComponentName thisWidget = new ComponentName(context, TimeWidgetProvider.class);
        AppWidgetManager manager = AppWidgetManager.getInstance(context);
        manager.updateAppWidget(thisWidget, remoteViews);

        // Release the lock.
        wl.release();
    }
}

It's always a good idea to keep utility methods in a utility class that can be accessed from other packages. getCurrentTime() has been defined in the Utility class. This method is used in the AppWidgetProvider and BroadcastReceiver classes.

package com.rakesh.widgetalarmmanagerexample;

import java.text.Format;
import java.text.SimpleDateFormat;
import java.util.Date;

public class Utility {
    public static String getCurrentTime(String timeformat) {
        Format formatter = new SimpleDateFormat(timeformat);
        return formatter.format(new Date());
    }
}

In the Android manifest file, we need to include the WAKE_LOCK permission, because a wake lock is used in the broadcast receiver. AlarmManagerBroadcastReceiver has been registered as a broadcast receiver. The remaining part is simple to understand.
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.rakesh.widgetalarmmanagerexample"
    android:versionCode="1"
    android:versionName="1.0" >
    <uses-sdk android:minSdkVersion="16" android:targetSdkVersion="16"/>
    <uses-permission android:name="android.permission.WAKE_LOCK"/>
    <application android:icon="@drawable/ic_launcher" android:label="@string/app_name">
        <activity android:label="@string/title_activity_widget_alarm_manager"
            android:name=".WidgetAlarmManagerActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>
        <receiver android:icon="@drawable/ic_launcher" android:label="@string/app_name"
            android:name=".TimeWidgetProvider">
            <intent-filter>
                <action android:name="android.appwidget.action.APPWIDGET_UPDATE"/>
            </intent-filter>
            <meta-data android:name="android.appwidget.provider"
                android:resource="@xml/widget_metadata"/>
        </receiver>
        <receiver android:name=".AlarmManagerBroadcastReceiver"/>
    </application>
</manifest>

Once the code is executed, the widget gets registered. When you install the widget on the homescreen, it appears as shown below. You can download the source code from here. Reference: Tutorial on Android Homescreen Widget with AlarmManager. from our JCG partner Rakesh Cusat at the Code4Reference blog....

Security Requirements With Abuse Cases

Gary McGraw describes several best practices for building secure software. One is the use of so-called abuse cases. Since his chapter on abuse cases left me hungry for more information, this post examines additional literature on the subject and how to fit abuse cases into a Security Development Lifecycle (SDL).

Modeling Functional Requirements With Use Cases

Abuse cases are an adaptation of use cases, abstract episodes of interaction between a system and its environment. A use case consists of a number of related scenarios. A scenario is a description of a specific interaction between the system and particular actors. Each use case has a main success scenario and some additional scenarios to cover variations and exceptional cases. Actors are external agents, and can be either human or non-human. For better understanding, each use case should state the goal that the primary actor is working towards. Use cases are represented in UML diagrams (see example on the left) as ovals connected to stick figures, which represent the actors. Use case diagrams are accompanied by textual use case descriptions that explain how the actors and the system interact.

Modeling Security Requirements With Abuse Cases

An abuse case is a use case where the results of the interaction are harmful to the system, one of the actors, or one of the stakeholders in the system. An interaction is harmful if it decreases the security (confidentiality, integrity, or availability) of the system. Abuse cases are also referred to as misuse cases, although some people maintain they're different. I think the two concepts are too similar to treat differently, so whenever I write "abuse case", it refers to "misuse case" as well. Some actors in regular use cases may also act as attackers in an abuse case (e.g. in the case of an insider threat). We should then introduce a new actor to avoid confusion (potentially using inheritance).
This is consistent with the best practice of having actors represent roles rather than actual users. Attackers are described in more detail than regular actors, to make it easier to look at the system from their point of view. Their description should include the resources at their disposal, their skills, and their objectives. Note that objectives are longer-term than the (ab)use case's goal. For instance, the attacker's goal for an abuse case may be to gain root privileges on a certain server, while her objective may be industrial espionage. Abuse cases are very different from use cases in one respect: while we know how the actor in a use case achieves her goal, we don't know precisely how an attacker will break the system's security. If we did, we would fix the vulnerability! Therefore, abuse case scenarios describe interactions less precisely than regular use case scenarios.

Modeling Other Non-Functional Requirements

Note that since actors in use cases needn't be human, we can employ an approach similar to abuse cases, with actors like "network failure" etc., to model non-functional requirements beyond security, like reliability, portability, maintainability, etc. For this to work, one must be able to express the non-functional requirement as an interactive scenario. I won't go into this topic any further in this post.

Creating Abuse Cases

Abuse case models are best created at the same time as use cases: during requirements gathering. It's easiest to define the abuse cases after the regular use cases are identified (or even defined). Abuse case modeling requires one to wear a black hat. Therefore, it makes sense to invite people with black hat capabilities, like testers and network operators or administrators, to the table. The first step in developing abuse cases is to find the actors. As stated before, every actor in a regular use case can potentially be turned into a malicious actor in an abuse case. We should next add actors for different kinds of intruders.
These are distinguished based on their resources and skills. When we have the actors, we can identify the abuse cases by determining how they might interact with the system. We might identify such malicious interactions by combining the regular use cases with attack patterns. We can find more abuse cases by combining them systematically and recursively with regular use cases.

Combining Use Cases and Abuse Cases

Some people keep use cases and abuse cases separate to avoid confusion. Others combine them, but display abuse cases as inverted use cases (i.e. black ovals with white text, and actors with black heads). The latter approach makes it possible to relate abuse cases to use cases using UML associations. For instance, an abuse case may threaten a use case, while a use case may mitigate an abuse case. The latter use case is also referred to as a security use case. Security use cases usually deal with security features. Security use cases can be threatened by new abuse cases, for which we can find new security use cases to mitigate them, and so on. In this way, a "game" of play and counterplay unfolds that fits well in a defense in depth strategy. We should not expect to win this "game". Instead, we should make a good trade-off between security requirements and other aspects of the system, like usability and development cost. Ideally, these trade-offs are made clearly visible to stakeholders by using a good risk management framework.

Reusing Abuse Cases

Use cases can be abstracted into essential use cases to make them more reusable. There is no reason we couldn't do the same with abuse cases and security use cases. It seems to me that this is not just possible, but already done. Microsoft's STRIDE model contains generalized threats, and its SDL Threat Modeling tool automatically identifies which of those are applicable to your situation.
Conclusion

Although abuse cases are a bit different from regular use cases, their main value is that they present information about security risks in a format that may already be familiar to the stakeholders of the software development process. Developers in particular are likely to know them. This should make it easier for people with little or no security background to start thinking about securing their systems and how to trade off security and functionality. However, it seems that threat modeling gives the same advantages as abuse cases. Since threat modeling is supported by tools, it's little wonder that people prefer it over abuse cases for inclusion in their Security Development Lifecycle. Reference: Abuse Cases from our JCG partner Remon Sinnema at the Secure Software Development blog....

Project Jigsaw: The Consequences of Deferring

Mark Reinhold announced in July 2012 that Oracle was planning to withdraw Project Jigsaw from Java 8, because Jigsaw would delay its release, planned for September 2013 (one year from now). This date is known because Oracle has decided to implement a two-year roadmap for Java, so September 2013 is actually two years after the release of Java 7.

According to Jigsaw's website: "The goal of this Project is to design and implement a standard module system for the Java SE Platform, and to apply that system to the Platform itself and to the JDK. The original goal of this Project was to design and implement a module system focused narrowly upon the goal of modularizing the JDK, and to apply that system to the JDK itself. The growing demand for a truly standard module system for the Java Platform motivated expanding the scope of the Project to produce a module system that can ultimately become a JCP-approved part of the Java SE Platform and also serve the needs of the ME and EE Platforms."

They also say: "Jigsaw was originally intended for Java 7 but was deferred to Java 8." Now they want to defer it to Java 9 :-( More details of their decision making are available in a Q&A post on Reinhold's blog. You may read and follow the discussion there. Here is my opinion:

Without Jigsaw, I believe that it's very difficult to put Java everywhere. Without Jigsaw, the idea of multi-platform is getting restricted to servers in an age of smartphones and tablets. Jigsaw may be "late for the train", but deferring it leaves Java late for the entire platform ecosystem.

Observing the market, we can see that development is becoming platform-dependent (iOS, Android, etc.). Only Java can beat this trend, because of its long experience with multi-platform implementation, and the time to do it is NOW! Otherwise, in 3 or 4 years there will be no Java on devices, and the development community will have knowledge enough to live with that.
Therefore, Java will be basically a server-side technology. The reasoning behind my prediction is the following: mobile devices are limited in terms of resources, and a modular JVM would allow the creation of tailored JVMs considering the constraints of each device. I put myself in the shoes of those device manufacturers: "I wouldn't distribute something in my products that might negatively impact the user experience in terms of performance." That was the argument (at least the public one) Apple used to avoid distributing the Flash plugin for iOS's browser. Probably because of that, Adobe definitively gave up on Flash for mobile devices. A modular JVM would greatly simplify Oracle's negotiations with many device players. It would be reasonable for Apple to include Java as a language for iPad and iPhone applications; Google would finally embed the JVM into Android to evolve faster with new Java language features, being busy only with a module extending the JVM with specific Android capabilities; it would even be possible to save Nokia from bankruptcy :D

You may wonder whether Apple and Google would ever adopt the JVM as a standard runtime platform. Have you heard about opportunity cost? It states that our current choices and activities are actually blocking other possible choices and activities. The tricky part is to choose the opportunity that is least costly or most profitable. Having said that, consider that Java was not an option because it wasn't modular when those companies made their decisions. If Java had been modular and Apple had adopted it, the iOS platform would have at least three times more apps than Android. "Java" was in Google's strategy to catch up with Apple. Only Java could allow Google to do it in such a short period of time. So, it's not so simple to ignore Java.

Now, Oracle vs. Google: of course the effort to move Java forward should be economically viable, and in order to use Java, Google would have to spend some money.
Unfortunately, Oracle and Google work with different currencies. While Oracle thinks in terms of licenses, Google thinks in terms of advertising. These currencies are incompatible and very difficult to convert, because while a license is a cost, advertising is profit. Therefore, Oracle would never reach a deal increasing Google's costs, but it would be possible to reach a deal decreasing Google's profit. In other words, Oracle could take a percentage of Google's profit on advertising sold through Java apps, in order to make Java available for Android. Google makes this kind of deal with a lot of companies, like Yahoo, AOL and others. Why not with Oracle?

If Oracle doesn't give the JDK team all the resources it needs to make Jigsaw a reality in Java 8, Oracle will be completely out of the pervasive game very soon. Without breaking the JDK into manageable and efficient pieces, Oracle won't have arguments to convince the industry that Java is the way to go in the long run.

Before deciding to drop Jigsaw, I beg Oracle to think about the consequences! They should ignore the fixed release roadmap and accept the difficulty of the task. We can stay happy with Java 7 (it's not widely adopted anyway) as long as Jigsaw is on the way for Java 8. The fixed release cycle can come back after Java 8.

I would love to be wrong and be taken by surprise by an official Oracle announcement of definitive support for JavaFX on Apple and Android devices during the next JavaOne ;-) However, I think the likelihood is very low :-(

Reference: The Consequences of Deferring Project Jigsaw from our JCG partner Hildeberto Mendonca at the Hildeberto's Blog blog....

Android broadcast receiver: Registering/unregistering during runtime

In the previous post, we learned to enable and disable a broadcast receiver added in the Android manifest file. In this post, we will learn to register and unregister a broadcast receiver programmatically. It's always suggested to register and unregister broadcast receivers programmatically, as it saves system resources. In this tutorial, we will make an application having two buttons, to register and unregister the broadcast receiver respectively.

Example Code

Let's define the layout file and add two buttons. Associate the registerBroadcastReceiver onclick method with the register button and the unregisterBroadcastReceiver onclick method with the unregister button.

<LinearLayout xmlns:android='http://schemas.android.com/apk/res/android'
    xmlns:tools='http://schemas.android.com/tools'
    android:layout_width='match_parent'
    android:layout_height='match_parent'
    android:orientation='vertical'>

    <Button
        android:layout_width='fill_parent'
        android:layout_height='wrap_content'
        android:padding='@dimen/padding_medium'
        android:text='@string/register_broadcast_receiver'
        android:onClick='registerBroadcastReceiver'
        tools:context='.RegisterUnregister' />

    <Button
        android:layout_width='fill_parent'
        android:layout_height='wrap_content'
        android:padding='@dimen/padding_medium'
        android:text='@string/unregister_broadcast_receiver'
        android:onClick='unregisterBroadcastReceiver'
        tools:context='.RegisterUnregister' />

</LinearLayout>

Define the string constants used in the layout file in strings.xml.

<resources>
    <string name='app_name'>EnableDisableBroadcastReceiver2</string>
    <string name='menu_settings'>Settings</string>
    <string name='title_activity_enable_disable'>EnableDisable</string>
    <string name='register_broadcast_receiver'>Register Broadcast Receiver</string>
    <string name='unregister_broadcast_receiver'>Unregister Broadcast Receiver</string>
</resources>

Now define the broadcast receiver. In the onReceive() method, we will show a toast message containing the current time.
The onReceive() method gets invoked when the particular intent is broadcast.

package com.code4reference.broadcastreceiver.enabledisable;

import java.text.Format;
import java.text.SimpleDateFormat;
import java.util.Date;

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.widget.Toast;

public class UserDefinedBroadcastReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        // You can do the processing here: update the widget/remote views.
        StringBuilder msgStr = new StringBuilder("Current time : ");
        Format formatter = new SimpleDateFormat("hh:mm:ss a");
        msgStr.append(formatter.format(new Date()));
        Toast.makeText(context, msgStr, Toast.LENGTH_SHORT).show();
    }
}

We are going to define the main activity class, called RegisterUnregister. In this class we will define two onclick methods, registerBroadcastReceiver and unregisterBroadcastReceiver, attached to the Register and Unregister buttons in the layout file respectively. The registerBroadcastReceiver() method registers the UserDefinedBroadcastReceiver for the TIME_TICK intent action type. The TIME_TICK intent gets fired every minute. Once the broadcast receiver gets registered, you will notice the toast message every minute.
package com.code4reference.broadcastreceiver.enabledisable;import android.app.Activity; import android.content.IntentFilter; import android.os.Bundle; import android.view.Menu; import android.view.View; import android.widget.Toast;public class RegisterUnregister extends Activity {UserDefinedBroadcastReceiver broadCastReceiver = new UserDefinedBroadcastReceiver();@Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_register_unregister); }@Override public boolean onCreateOptionsMenu(Menu menu) { getMenuInflater().inflate(R.menu.activity_enable_disable, menu); return true; }/** * This method enables the Broadcast receiver for the * 'android.intent.action.TIME_TICK' intent. This intent gets * broadcast every minute. * * @param view */ public void registerBroadcastReceiver(View view) {this.registerReceiver(broadCastReceiver, new IntentFilter( 'android.intent.action.TIME_TICK')); Toast.makeText(this, 'Registered broadcast receiver', Toast.LENGTH_SHORT) .show(); }/** * This method disables the Broadcast receiver. Note that calling * unregisterReceiver() for a receiver that is not currently registered * throws an IllegalArgumentException. * * @param view */ public void unregisterBroadcastReceiver(View view) {this.unregisterReceiver(broadCastReceiver);Toast.makeText(this, 'Unregistered broadcast receiver', Toast.LENGTH_SHORT) .show(); } }We do not need to modify the AndroidManifest file, because we are not registering the broadcast receiver there. 
<manifest xmlns:android='http://schemas.android.com/apk/res/android' package='com.code4reference.broadcastreceiver.enabledisable' android:versionCode='1' android:versionName='1.0' > <uses-sdk android:minSdkVersion='8' android:targetSdkVersion='15' /> <application android:icon='@drawable/ic_launcher' android:label='@string/app_name' android:theme='@style/AppTheme' > <activity android:name='.RegisterUnregister' android:label='@string/title_activity_enable_disable' > <intent-filter> <action android:name='android.intent.action.MAIN' /> <category android:name='android.intent.category.LAUNCHER' /> </intent-filter> </activity> </application> </manifest> Complete the code, run the application, and you will see it working as shown below. You can get the complete source at github/Code4Reference. Reference: Registering/unregistering of Android broadcast receiver during runtime from our JCG partner Rakesh Cusat at the Code4Reference blog....

How to write better POJO Services

In Java, you can easily implement business logic in Plain Old Java Object (POJO) classes, and you can run them in a fancy server or framework without much hassle. There are many servers/frameworks, such as JBoss AS, Spring or Camel, that allow you to deploy POJOs without even hardcoding to their API. Obviously you get advanced features if you are willing to couple to their API specifics, but even if you do, you can keep this coupling to a minimum by encapsulating your own POJOs and their API in a wrapper. By designing your application with POJOs that are as simple as possible, you keep the most flexibility in choosing a framework or server to deploy and run it. One effective way to write your business logic in these environments is to use a Service component. In this article I will share a few things I have learned about writing Services. What is a Service? The word Service is overused today, and it means different things to different people. When I say Service, I mean a software component that has a minimal set of life-cycle stages such as init, start, stop, and destroy. You may not need all of these stages in every service you write; simply ignore the ones that don't apply. When writing a large application intended for long running, such as a server component, defining these life-cycles and ensuring they are executed in the proper order is crucial! I will walk you through a Java demo project that I have prepared. It's very basic and runs stand-alone. Its only dependency is the SLF4J logger. If you don't know how to use a logger, simply replace the logger calls with System.out.println; however, I would strongly encourage you to learn how to use a logger effectively during application development. Also, if you want to try out the Spring-related demos, you will obviously need the Spring jars as well. Writing a basic POJO service You can quickly define the contract of a Service with life-cycles in an interface as below. 
package servicedemo;public interface Service { void init(); void start(); void stop(); void destroy(); boolean isInited(); boolean isStarted(); } Developers are free to do what they want in their Service implementation, but you might want to give them an adapter class so that they don't have to re-write the same basic logic in each Service. I would provide an abstract service like this: package servicedemo;import java.util.concurrent.atomic.*; import org.slf4j.*; public abstract class AbstractService implements Service { protected Logger logger = LoggerFactory.getLogger(getClass()); protected AtomicBoolean started = new AtomicBoolean(false); protected AtomicBoolean inited = new AtomicBoolean(false);public void init() { if (!inited.get()) { initService(); inited.set(true); logger.debug('{} initialized.', this); } }public void start() { // Init service if it has not done so. if (!inited.get()) { init(); } // Start service now. if (!started.get()) { startService(); started.set(true); logger.debug('{} started.', this); } }public void stop() { if (started.get()) { stopService(); started.set(false); logger.debug('{} stopped.', this); } }public void destroy() { // Stop service if it is still running. if (started.get()) { stop(); } // Destroy service now. if (inited.get()) { destroyService(); inited.set(false); logger.debug('{} destroyed.', this); } }public boolean isStarted() { return started.get(); }public boolean isInited() { return inited.get(); }@Override public String toString() { return getClass().getSimpleName() + '[id=' + System.identityHashCode(this) + ']'; }protected void initService() { }protected void startService() { }protected void stopService() { }protected void destroyService() { } } This abstract class provides the basics most services need. It has a logger and state flags to keep track of the life-cycles. It then delegates to a new set of life-cycle methods that subclasses can choose to override. 
Notice that the start() method automatically calls init() if it hasn't already been done. The destroy() method does the same with stop(). This is important if we want to use the service in a container that only invokes a two-stage life-cycle. In that case, we can simply invoke start() and destroy() and still match our service's full life-cycle. Some frameworks might go even further and create separate interfaces for each stage of the life-cycle, such as InitableService or StartableService etc. But I think that would be too much in a typical app. In most cases you want something simple, so I prefer just one interface. Users may choose to ignore the methods they don't need, or simply use an adapter class. Before we end this section, let me throw in a silly Hello world service that can be used in our demo later. package servicedemo;public class HelloService extends AbstractService { public void initService() { logger.info(this + ' inited.'); } public void startService() { logger.info(this + ' started.'); } public void stopService() { logger.info(this + ' stopped.'); } public void destroyService() { logger.info(this + ' destroyed.'); } }Managing multiple POJO Services with a container Now that we have the basic Service definition, your development team can start writing business logic code! Before long, you will have a library of your own services to re-use. To group and control these services in an effective way, we also want to provide a container to manage them. The idea is that we typically want to control and manage multiple services as a group at a higher level. 
Here is a simple implementation for you to get started: package servicedemo;import java.util.*; public class ServiceContainer extends AbstractService { private List<Service> services = new ArrayList<Service>();public void setServices(List<Service> services) { this.services = services; } public void addService(Service service) { this.services.add(service); }public void initService() { logger.debug('Initializing ' + this + ' with ' + services.size() + ' services.'); for (Service service : services) { logger.debug('Initializing ' + service); service.init(); } logger.info(this + ' inited.'); } public void startService() { logger.debug('Starting ' + this + ' with ' + services.size() + ' services.'); for (Service service : services) { logger.debug('Starting ' + service); service.start(); } logger.info(this + ' started.'); } public void stopService() { int size = services.size(); logger.debug('Stopping ' + this + ' with ' + size + ' services in reverse order.'); for (int i = size - 1; i >= 0; i--) { Service service = services.get(i); logger.debug('Stopping ' + service); service.stop(); } logger.info(this + ' stopped.'); } public void destroyService() { int size = services.size(); logger.debug('Destroying ' + this + ' with ' + size + ' services in reverse order.'); for (int i = size - 1; i >= 0; i--) { Service service = services.get(i); logger.debug('Destroying ' + service); service.destroy(); } logger.info(this + ' destroyed.'); } } From the above code, you will notice a few important things:We extend AbstractService, so a container is a service itself. We invoke each life-cycle stage on all services before moving to the next; no service is started unless all services have been inited. We stop and destroy services in reverse order, which fits most general use cases.The above container implementation is simple and runs synchronously. This means that when you start the container, all services start in the order you added them; stopping is the same, but in reverse order. 
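To see these ordering guarantees in action, here is a condensed, self-contained sketch. It re-declares a stripped-down service type and container (MiniService, NamedService and MiniContainer are illustrative names, not part of the demo project) and records the order of start and stop calls:

```java
import java.util.ArrayList;
import java.util.List;

// Condensed sketch of the container's ordering behaviour: services start
// in registration order and stop in reverse order.
public class ContainerOrderDemo {
    interface MiniService { void start(); void stop(); }

    static class NamedService implements MiniService {
        final String name; final List<String> log;
        NamedService(String name, List<String> log) { this.name = name; this.log = log; }
        public void start() { log.add(name + " started"); }
        public void stop()  { log.add(name + " stopped"); }
    }

    static class MiniContainer implements MiniService {
        private final List<MiniService> services = new ArrayList<>();
        void add(MiniService s) { services.add(s); }
        public void start() {
            for (MiniService s : services) s.start();          // forward order
        }
        public void stop() {
            for (int i = services.size() - 1; i >= 0; i--) {   // reverse order
                services.get(i).stop();
            }
        }
    }

    static List<String> run() {
        List<String> log = new ArrayList<>();
        MiniContainer container = new MiniContainer();
        container.add(new NamedService("db", log));
        container.add(new NamedService("web", log));
        container.start();
        container.stop();
        return log;
    }

    public static void main(String[] args) {
        run().forEach(System.out::println);
    }
}
```

Running this prints the events in the order db started, web started, web stopped, db stopped, which is exactly the behaviour the real ServiceContainer gives you: a service that others depend on (like a database connection) comes up first and goes down last.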
I also hope you can see that there is plenty of room to improve this container as well. For example, you could add a thread pool to execute the services asynchronously.Running POJO Services Running services with a simple runner program. In the simplest form, we can run our POJO services on our own without any fancy server or framework. A Java program starts its life from a static main method, so we can certainly invoke init and start on our services there. But we also need to address the stop and destroy life-cycles when the user shuts down the program (usually by hitting CTRL+C). For this, Java provides the java.lang.Runtime#addShutdownHook() facility. You can create a simple stand-alone server to bootstrap a Service like this: package servicedemo;import org.slf4j.*; public class ServiceRunner { private static Logger logger = LoggerFactory.getLogger(ServiceRunner.class);public static void main(String[] args) { ServiceRunner main = new ServiceRunner(); main.run(args); }public void run(String[] args) { if (args.length < 1) throw new RuntimeException('Missing service class name as argument.');String serviceClassName = args[0]; try { logger.debug('Creating ' + serviceClassName); Class<?> serviceClass = Class.forName(serviceClassName); if (!Service.class.isAssignableFrom(serviceClass)) { throw new RuntimeException('Service class ' + serviceClassName + ' does not implement ' + Service.class.getName()); } Object serviceObject = serviceClass.newInstance(); Service service = (Service)serviceObject;registerShutdownHook(service);logger.debug('Starting service ' + service); service.init(); service.start(); logger.info(service + ' started.');synchronized(this) { this.wait(); } } catch (Exception e) { throw new RuntimeException('Failed to create and run ' + serviceClassName, e); } }private void registerShutdownHook(final Service service) { Runtime.getRuntime().addShutdownHook(new Thread() { public void run() { logger.debug('Stopping service ' + service); service.stop(); service.destroy(); logger.info(service + ' stopped.'); } }); } } With the above runner, you should be able to run it with this command: $ java servicedemo.ServiceRunner servicedemo.HelloService Look carefully, and you'll see that there are many options for running multiple services with the above runner. Let me highlight a couple: Improve the runner directly and treat every argument as a service class name, instead of just the first element. Or write a MultiLoaderService that loads the multiple services you want; you may control argument passing using System Properties. Can you think of other ways to improve this runner?Running services with Spring The Spring framework is an IoC container; it is well known for working easily with POJOs, and it lets you wire your application together. This is a perfect fit for our POJO services. However, with all the features Spring brings, it lacks an easy-to-use, out-of-the-box main program to bootstrap Spring XML context files. But with what we have built so far, this is actually easy to do. Let's write one of our POJO Services to bootstrap a Spring context file. package servicedemo;import org.springframework.context.ConfigurableApplicationContext; import org.springframework.context.support.FileSystemXmlApplicationContext;public class SpringService extends AbstractService { private ConfigurableApplicationContext springContext;public void startService() { String springConfig = System.getProperty('springContext', 'spring.xml'); springContext = new FileSystemXmlApplicationContext(springConfig); logger.info(this + ' started.'); } public void stopService() { springContext.close(); logger.info(this + ' stopped.'); } } With that simple SpringService you can run and load any Spring XML file. 
For example try this: $ java -DspringContext=config/service-demo-spring.xml servicedemo.ServiceRunner servicedemo.SpringService Inside the config/service-demo-spring.xml file, you can easily create our container hosting one or more services as Spring beans. <beans xmlns='http://www.springframework.org/schema/beans' xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance' xsi:schemaLocation='http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd'><bean id='helloService' class='servicedemo.HelloService'> </bean><bean id='serviceContainer' class='servicedemo.ServiceContainer' init-method='start' destroy-method='destroy'> <property name='services'> <list> <ref bean='helloService'/> </list> </property> </bean></beans>Notice that I only need to set up init-method and destroy-method once, on the serviceContainer bean. You can then add as many other services, such as the helloService, as you want. They will all be started, managed, and then shut down when you close the Spring context. Note that the Spring context container does not explicitly have the same life-cycles as our services. The Spring context automatically instantiates all your dependency beans, and then invokes the init-method on every bean where it is set. All of that is done inside the constructor of FileSystemXmlApplicationContext; no explicit init method is called by the user. At the end, during service shutdown, Spring provides the container's close() method to clean things up. Again, Spring does not differentiate stop from destroy. Because of this, we must merge our init and start into Spring's init stage, and merge stop and destroy into Spring's close stage. Recall that our AbstractService#destroy automatically invokes stop if it hasn't already been done. This is the trick we need to understand in order to use Spring effectively.Running services with a JEE app server In a corporate environment, we usually do not have the freedom to run whatever we want as a stand-alone program. 
Instead, there is usually some infrastructure and a stricter standard technology stack already in place, such as a JEE application server. In this situation, the most portable way to run POJO services is inside a WAR web application. In a Servlet web application, you can write a class that implements javax.servlet.ServletContextListener, which provides life-cycle hooks via contextInitialized and contextDestroyed. There, you can instantiate your ServiceContainer object and call its start and destroy methods accordingly. Here is an example that you can explore: package servicedemo; import java.util.*; import javax.servlet.*; import org.slf4j.*; public class ServiceContainerListener implements ServletContextListener { private static Logger logger = LoggerFactory.getLogger(ServiceContainerListener.class); private ServiceContainer serviceContainer;public void contextInitialized(ServletContextEvent sce) { serviceContainer = new ServiceContainer(); List<Service> services = createServices(); serviceContainer.setServices(services); serviceContainer.start(); logger.info(serviceContainer + ' started in web application.'); }public void contextDestroyed(ServletContextEvent sce) { serviceContainer.destroy(); logger.info(serviceContainer + ' destroyed in web application.'); }private List<Service> createServices() { List<Service> result = new ArrayList<Service>(); // populate services here. return result; } } You may configure the above in WEB-INF/web.xml like this:<listener> <listener-class>servicedemo.ServiceContainerListener</listener-class> </listener></web-app>The demo provides a placeholder where you must add your services in code, but you can easily make that configurable using web.xml context parameters. 
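One way to make the service list configurable could look like the following sketch. It turns a comma-separated list of class names, for instance read from a context parameter (the parameter name and the class ServiceClassListParser are my own illustrative choices, not part of the demo project), into instantiated objects that the listener could then cast to Service and hand to the ServiceContainer:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: turning a comma-separated list of class names (e.g. the value of
// a web.xml context parameter) into instantiated objects via reflection.
public class ServiceClassListParser {
    public static List<Object> instantiate(String commaSeparatedClassNames) throws Exception {
        List<Object> result = new ArrayList<>();
        for (String name : commaSeparatedClassNames.split(",")) {
            String trimmed = name.trim();
            if (trimmed.isEmpty()) continue; // tolerate stray commas and whitespace
            result.add(Class.forName(trimmed).getDeclaredConstructor().newInstance());
        }
        return result;
    }
}
```

In contextInitialized you would fetch the parameter with sce.getServletContext().getInitParameter(...) and pass its value to this helper, keeping the listener itself free of hardcoded service classes.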
If you were to use Spring inside a Servlet container, you may directly use its org.springframework.web.context.ContextLoaderListener class, which does pretty much the same as above, except that it lets you specify the XML configuration file using the contextConfigLocation context parameter. That's how a typical Spring MVC based application is configured. Once you have this set up, you can experiment with our POJO services using the Spring XML sample given above. You should see the services in action in your logger output. PS: What we described here applies to any Servlet web application and is not JEE-specific, so a Tomcat server works just fine as well.The importance of a Service's life-cycles and their real world usage Nothing I presented here is a novelty, nor a killer design pattern. In fact it has been used in many popular open source projects. However, in my past work experience, folks always manage to make these things extremely complicated, and in the worst case they completely disregard the importance of life-cycles when writing services. It's true that not everything you are going to write needs to be fitted into a service, but if you find the need, please do pay attention to the life-cycles and take good care that they are invoked properly. The last thing you want is to exit the JVM without cleaning up services for which you allocated precious resources. This becomes even more disastrous if you allow your application to be dynamically reloaded during deployment without exiting the JVM, which will lead to system resource leakage. The above Service practice has been put into use in the TimeMachine project. In fact, if you look at timemachine.scheduler.service.SchedulerEngine, it is just a container of many services running together. And that's how users can extend the scheduler's functionality as well: by writing a Service. You can load these services dynamically via a simple properties file. 
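As a closing note on getting the life-cycle guards themselves right: the AbstractService shown earlier checks and sets its AtomicBoolean flags in two separate steps, which can race if two threads call start() at the same time. A self-contained sketch of an idempotent, thread-safe variant using compareAndSet (the class SafeLifecycle is an illustrative name, not part of the demo project):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch: making the life-cycle transitions atomic so that
// concurrent callers cannot run initService()/startService() twice.
public class SafeLifecycle {
    private final AtomicBoolean inited = new AtomicBoolean(false);
    private final AtomicBoolean started = new AtomicBoolean(false);
    private int initCount = 0; // counts how often initService actually ran

    public void init() {
        // compareAndSet flips the flag and runs the body at most once.
        if (inited.compareAndSet(false, true)) {
            initService();
        }
    }

    public void start() {
        init(); // safe to call repeatedly; at most one caller wins
        if (started.compareAndSet(false, true)) {
            startService();
        }
    }

    protected void initService() { initCount++; }
    protected void startService() { }

    public int getInitCount() { return initCount; }

    public static void main(String[] args) {
        SafeLifecycle s = new SafeLifecycle();
        s.start();
        s.start(); // second call is a no-op
        System.out.println("initService ran " + s.getInitCount() + " time(s)");
    }
}
```

For the single-threaded runner and container in this article the simpler get/set version is perfectly fine; the compareAndSet form only matters once several threads may drive the same service's life-cycle.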
Reference: How to write better POJO Services from our JCG partner Zemian Deng at the A Programmer’s Journal blog....

Naming Antipatterns

One of the annoying challenges when coding is finding proper names for your classes. There are some tools available making fun of our inability to come up with proper names. But while I enjoy this kind of gag, I think there is a serious problem hiding underneath. The problem is: Classes should be some kind of abstraction. I should only have one abstraction for a single purpose. But if my classes are unique abstractions, it should be easy to name them, right? You just have to look into the average code base to see it isn't easy. Let's have a look at a couple of common antipatterns. AbstractAnything: Often you have a possibly large set of classes that share lots of common stuff. In many cases you find an abstract class at the top of their inheritance tree named AbstractWhatever. Obviously that name is technically correct, but not very helpful. After all, the class itself is telling me it is abstract, no need to put that in the name. But apart from being picky about the name, a couple of serious design problems come with these classes. They tend to gather functionality common to their subclasses. The problem is: Just because all (or even worse, many) of the subclasses need a feature or function, it shouldn't be embedded in their superclass. The features are often mostly independent and therefore should go in separate classes. Let's take the example of an AbstractEditor which is intended to be the base class for models for editors in a Swing application. You might find things in an AbstractEditor like:a save method, setting the cursor to its wait state before calling an abstract real save method. a boolean property telling you if this editor needs saving the class and id of the entity being edited Some property change handling infrastructure infrastructure code for validating inputs code that handles the process of asking the user if he wants to save changes when he tries to close the editorand so on. 
Of course these features depend on each other, but when they are lumped into a single class the dependencies become muddled. If the need for a new feature occurs, there is hardly any option anymore but to put it in the same class, and after some time the class looks almost like this example. Note that some developers try to hide the application of this antipattern by renaming the class to BaseSomething. That doesn't help though. AnythingDo, AnythingTo, AnythingBs With this antipattern you have a properly named class and lots of very similar classes with various suffixes. These suffixes are often very short and denote different layers of the application. On the boundaries of these layers, data gets copied from one object into the other with barely any logic. The problem with this antipattern is that while there might be valid reasons for these classes to exist, the seeming determinism with which one is constructed from the other often runs against the purpose of these classes. An example: You might have a Person class which represents a person in your system. You might also have a PersonHe (He as in Hibernate Entity) which is mapped to a database table using Hibernate. The Person class is intended to be used in all the business logic stuff, but since at the boundary to the persistence layer it is just copied over to the Hibernate entity, it has to be handled in the way Hibernate expects things. For example, you have to move the complete Person object around even if you just want to change a single attribute (e.g. the marriage status), because if you just leave fields empty, Hibernate will store these empty fields in the database and you end up with Persons that don't have any useful property anymore except being married. Although this actually describes reality pretty well in some cases, it normally isn't what you want. 
Instead consider a design where, in the case of a marriage, you actually create a Marriage object in your business logic, which does not have any direct relationship inside the database. You would do all kinds of checks and logic in your business layer (without having code like) if (oldPerson.married != newPerson.married && newPerson.married) ... And only when you store it do you put the information from the Marriage into Hibernate Person entities. There is no MarriageHe or anything like it. This kind of design makes for way more expressive code. But developers don't realize this option, and often it is incredibly hard to force it into the existing infrastructure/architecture, because everything assumes there is a 1:1 relationship between Person and PersonHe and all the other Person classes. AnythingImpl This one is annoying. And most people actually feel that they are doing something wrong when they have an interface X and a single implementation XImpl. It is bad because the Impl suffix basically tells us nothing. JavaDoc already tells us it's the implementation of the interface, no need to put that fact into the class name. It also suggests there will always be only one implementation. At least I hope you don't have classes ending in Impl2 and Impl3 in your code base. But if you have only one implementation in the first place, why do you have an interface? It doesn't make sense. Let's think hard about what other implementations there are (or might be). A classic example is the PersonDao interface and PersonDaoImpl. Here are some possible implementation alternatives I would come up with:one implementation could use Hibernate to store and retrieve stuff. one implementation could use a map or similar in-memory structure to store stuff. Very useful for testing one implementation might use special Oracle featuresWhich one is PersonDaoImpl? And by contrast, which one is OraclePersonDao, HibernatePersonDao and InMemoryPersonDao? 
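To make the naming question concrete, here is a minimal sketch of the in-memory variant (the Person and DAO shapes are illustrative, not taken from any real project). The class name alone tells you which implementation you are looking at, which is exactly what PersonDaoImpl fails to do:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch: an implementation named after what it actually does. A reader of
// 'InMemoryPersonDao' immediately knows this is the map-backed variant used
// for testing, just as 'HibernatePersonDao' would name the production one.
public class InMemoryPersonDao {
    public static class Person {
        final long id;
        final String name;
        public Person(long id, String name) { this.id = id; this.name = name; }
    }

    private final Map<Long, Person> store = new HashMap<>();

    public void save(Person person) {
        store.put(person.id, person);
    }

    public Optional<Person> findById(long id) {
        return Optional.ofNullable(store.get(id));
    }
}
```

In a real code base Person and a PersonDao interface would of course be separate top-level types; the point here is only the naming: each implementation's class name states its storage strategy.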
If nothing else consider ProductionPersonDao to distinguish it from the implementation used for testing. The next time you have a good class or interface name and you feel like slapping some kind of standard suffix or prefix onto it in order to create the name for another class, think twice. You might be creating a useless class name, or you might be screwing up your software design. Reference: Naming Antipatterns from our JCG partner Jens Schauder at the Schauderhaft blog....

Sonar’s Quality Alphabet

Sonar (by SonarSource.com) is getting more and more popular among developer teams. It's an open source platform that measures software quality along the following 7 axes:Architecture and Design  Comments  Coding Rules  Complexity  Code Duplication  Potential Bugs  Unit Tests If you're a Sonar newbie then you might find this blog post very useful. On the other hand, if you're an experienced user then you can refresh your memory of what you've learned so far. Sonar's Alphabet is not a user manual. It's a reference to help you learn (and teach others) some basic terms and words used in the world of Sonar.A for Analysis : Sonar's basic feature is the ability to analyze source code in various ways (Maven, Ant, Sonar runner, triggered by a CI system). You can have static and/or dynamic analysis if supported by the analyzed language.  B for Blockers : These are violations of the highest severity. They are considered real bugs (not just potential ones), so fix them as soon as possible.  C for Continuous Inspection : Continuous Inspection requires a tool to automate data collection, to report on measures and to highlight hot spots and defects, and yes, Sonar is currently the leading “all-in-one” Continuous Inspection engine.  D for Differential Views : Sonar's star feature lets you compare a snapshot analysis with a previous analysis. Fully customizable and dynamic, it makes continuous inspection a piece of cake.  E for Eclipse : If you're an Eclipse fan, did you know that you can have most of Sonar's features without leaving your IDE? If not, then you should give Sonar's Eclipse plugin a try.  F for Filters : Filters are used to specify conditions and criteria on which projects are displayed. They can be used in dashboards or in widgets that require a filter.  G for Global Dashboards : Global dashboards are available at instance level and can be accessed through the menu on the left. One of those global dashboards is set as your home page. Any widget can be added to a global dashboard. 
Thus, any kind of information from a project or from the Sonar instance can be displayed at will.  H for Historical Information : Knowing the quality level of your source code at a specific point in time is not enough. You need to be able to compare it with previous analyses. Sonar keeps historical information that can be viewed in many ways, such as the Timeline widget, the Historical table widget or metric tendencies.   I for Internationalization : Sonar (and some of the open source plugins) supports internationalization. It's available in 7 languages.  J for Jenkins : Although Jenkins is not a Sonar term, you'll read it in many posts and articles. A best practice for running Sonar analysis and achieving Continuous Inspection is to automate it using a CI server. The Sonar folks have created a very simple yet useful plugin that integrates Sonar with Jenkins.  K for Key : If you want to dive into Sonar's technical details or write your own plugin, don't forget that most core concepts are identified by a key (project key, metric key, coding rule key etc.)  L for Languages : Sonar was initially designed to analyze Java source code. Today, more than 20 languages are supported by free or commercial plugins.  M for Manual Measures : You can even define your own measures and set their values when automated calculation is not feasible (such as team size, project budget etc.)  N for Notifications : Let Sonar send you an email when changes occur in reviews assigned to you or created by you, or when new violations on your favorite projects are introduced during the first differential view period.  O for Opensource : The Sonar core as well as most of the plugins are available on CodeHaus or GitHub.  P for Plugins : More than 50 Sonar plugins are available for a variety of topics: new languages, reporting, integration with other systems and many more. The best way to install / update them is through the Update Center.  Q for Quality Profiles : Sonar comes with default Quality profiles. 
For each language you can create your own profiles or edit the existing ones to adjust Sonar analysis to your demands. For each quality profile you activate/deactivate rules from the most popular tools such as PMD, FindBugs, Checkstyle and of course rules created directly by the Sonar guys.  R for Reviews : Code reviews made easy with Sonar. You can assign reviews directly to Sonar users and associate them with a violation. Create action plans to group them and track their progress from analysis to analysis.  S for Sonar in Action book : The only Sonar book that covers all aspects of Sonar. For beginners to advanced users, and even for developers who want to write their own plugins.  T for Testing : Sonar provides many test metrics such as line coverage, branch coverage and code coverage. It's integrated with the most popular coverage tools (JaCoCo, Emma, Cobertura, Clover). It can also show metrics on integration tests, and by installing open source plugins you can integrate it with other test frameworks (JMeter, Thucydides, GreenPepper etc.)  U for User mailing list : Being an active member of this list, I can assure you that you can get answers for all your issues and problems.  V for Violations : A very popular term in Sonar. When a source code file (test files as well) doesn't comply with a coding rule, Sonar creates a violation for it.  W for Widgets : Everything you see in a dashboard is a widget. Some of them are available only for global dashboards. You can add as many as you want in a dashboard and customize them to fit your needs. There are many Sonar core widgets, and plugins may offer some additional widgets.  X for X-ray : You can consider Sonar your pair of x-ray glasses that lets you actually see IN your code. Nothing is hidden anymore and everything is measured.  Y for Yesterday's comparison : One of the most common uses of differential views is to compare the current analysis snapshot with the analysis triggered yesterday. 
Very useful if you don’t want to add up your technical debt and handle it only at the end of each development cycle.  Z for Zero values : For many Sonar metrics such as code duplication, critical/blocker violations, package cycles your purpose should be to minimize or nullify them that means seeing a lot of Zero values in your dashboard.When I was trying to create this alphabet in some cases/letters I was really in big dilemma which word/term to cover. For instance the Sonar runner, which is not mentioned above, is the proposed and standard way to analyze any project with Sonar regardless the programming language. If you think that an important Sonar term is missing feel free to comment and I’ll adjust the text. Reference: Sonar’s Quality Alphabet from our JCG partner  Patroklos Papapetrou at the Only Software matters blog....

Service-Oriented UI with JSF

In large software development projects, service-oriented architecture is very common because it provides a functional interface that can be used by different teams or departments. The same principles should be applied when creating user interfaces.

In the case of a large company that has, among others, a billing department and a customer management department, an organizational chart might look like this:

If the billing department wants to develop a new dialog for creating invoices, it might look like this:

As you can see, the screen above references a customer in the upper part. Clicking the “..” button right behind the short name text field will open the dialog below, which allows the user to select the customer:

After pressing “Select” the customer data is shown in the invoice form. It’s also possible to select a customer by simply entering a customer number or typing a short name into the text fields on the invoice screen. If a unique short name is entered, no selection dialog appears at all. Instead, the customer data is displayed directly. Only an ambiguous short name results in opening the customer selection screen. The customer functionality will be provided by developers who belong to the customer management team. A typical approach involves the customer management development team providing some services while the billing department developers create the user interface and call these services. However, this approach involves a stronger coupling between these two distinct departments than is actually necessary. The invoice only needs a unique ID for referencing the customer data. Developers creating the invoice dialog don’t really want to know how the customer data is queried or what services are used in the background to obtain that information. The customer management developers should provide the complete part of the UI that displays the customer ID and handles the selection of the customer:

Using JSF 2, this is easy to achieve with composite components.
The logical interface between the customer management department and the billing department consists of three parts:

- Composite component (XHTML)
- Backing bean for the composite component
- Listener interface for handling the selection results

Provider (customer management department)

Composite component:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:composite="http://java.sun.com/jsf/composite"
      xmlns:ice="http://www.icesoft.com/icefaces/component"
      xmlns:ace="http://www.icefaces.org/icefaces/components"
      xmlns:icecore="http://www.icefaces.org/icefaces/core">
<ui:composition>
<composite:interface name="customerSelectionPanel"
    displayName="Customer Selection Panel"
    shortDescription="Select a customer using its number or short name">
  <composite:attribute name="model"
      type="org.fuin.examples.soui.view.CustomerSelectionBean"
      required="true" />
</composite:interface>
<composite:implementation>
  <ui:param name="model" value="#{cc.attrs.model}" />
  <ice:form id="customerSelectionForm">
    <icecore:singleSubmit submitOnBlur="true" />
    <h:panelGroup id="table" layout="block">
      <table>
        <tr>
          <td><h:outputLabel for="customerNumber" value="#{messages.customerNumber}" /></td>
          <td><h:inputText id="customerNumber" value="#{model.id}" required="false" /></td>
          <td>&nbsp;</td>
          <td><h:outputLabel for="customerShortName" value="#{messages.customerShortName}" /></td>
          <td><h:inputText id="customerShortName" value="#{model.shortName}" required="false" /></td>
          <td><h:commandButton action="#{model.select}" value="#{messages.select}" /></td>
        </tr>
        <tr>
          <td><h:outputLabel for="customerName" value="#{messages.customerName}" /></td>
          <td colspan="5"><h:inputText id="customerName" value="#{model.name}" readonly="true" /></td>
        </tr>
      </table>
    </h:panelGroup>
  </ice:form>
</composite:implementation>
</ui:composition>
</html>

Backing bean for the composite component:

package org.fuin.examples.soui.view;

import java.io.Serializable;

import javax.enterprise.context.Dependent;
import javax.inject.Inject;
import javax.inject.Named;

import org.apache.commons.lang.ObjectUtils;
import org.fuin.examples.soui.model.Customer;
import org.fuin.examples.soui.services.CustomerService;
import org.fuin.examples.soui.services.CustomerShortNameNotUniqueException;
import org.fuin.examples.soui.services.UnknownCustomerException;

@Named
@Dependent
public class CustomerSelectionBean implements Serializable {

    private static final long serialVersionUID = 1L;

    private Long id;

    private String shortName;

    private String name;

    private CustomerSelectionListener listener;

    @Inject
    private CustomerService service;

    public CustomerSelectionBean() {
        super();
        listener = new DefaultCustomerSelectionListener();
    }

    public Long getId() {
        return id;
    }

    public void setId(final Long id) {
        if (ObjectUtils.equals(this.id, id)) {
            return;
        }
        if (id == null) {
            clear();
        } else {
            clear();
            this.id = id;
            try {
                final Customer customer = service.findById(this.id);
                changed(customer);
            } catch (final UnknownCustomerException ex) {
                FacesUtils.addErrorMessage(ex.getMessage());
            }
        }
    }

    public String getShortName() {
        return shortName;
    }

    public void setShortName(final String shortNameX) {
        // Treat an empty input as "no short name" - comparing with '=='
        // (as in the original listing) would never match a user-entered
        // empty string, so use equals() instead.
        final String shortName = "".equals(shortNameX) ? null : shortNameX;
        if (ObjectUtils.equals(this.shortName, shortName)) {
            return;
        }
        if (shortName == null) {
            clear();
        } else {
            if (this.id != null) {
                clear();
            }
            this.shortName = shortName;
            try {
                final Customer customer = service.findByShortName(this.shortName);
                changed(customer);
            } catch (final CustomerShortNameNotUniqueException ex) {
                select();
            } catch (final UnknownCustomerException ex) {
                FacesUtils.addErrorMessage(ex.getMessage());
            }
        }
    }

    public String getName() {
        return name;
    }

    public CustomerSelectionListener getConnector() {
        return listener;
    }

    public void select() {
        // TODO Implement...
    }

    public void clear() {
        changed(null);
    }

    private void changed(final Customer customer) {
        if (customer == null) {
            this.id = null;
            this.shortName = null;
            this.name = null;
            listener.customerChanged(null, null);
        } else {
            this.id = customer.getId();
            this.shortName = customer.getShortName();
            this.name = customer.getName();
            listener.customerChanged(this.id, this.name);
        }
    }

    public void setListener(final CustomerSelectionListener listener) {
        if (listener == null) {
            this.listener = new DefaultCustomerSelectionListener();
        } else {
            this.listener = listener;
        }
    }

    public void setCustomerId(final Long id) throws UnknownCustomerException {
        clear();
        if (id != null) {
            this.id = id;
            changed(service.findById(this.id));
        }
    }

    private static final class DefaultCustomerSelectionListener implements CustomerSelectionListener {

        @Override
        public final void customerChanged(final Long id, final String name) {
            // Do nothing...
        }
    }
}

Listener interface for handling results:

package org.fuin.examples.soui.view;

/**
 * Gets informed if customer selection changed.
 */
public interface CustomerSelectionListener {

    /**
     * Customer selection changed.
     *
     * @param id New unique customer identifier - May be NULL.
     * @param name New customer name - May be NULL.
     */
    public void customerChanged(Long id, String name);
}

User (billing department)

The invoice bean simply uses the customer selection bean by injecting it, and connects to it using the listener interface:

package org.fuin.examples.soui.view;

import java.io.Serializable;

import javax.annotation.PostConstruct;
import javax.enterprise.context.SessionScoped;
import javax.enterprise.inject.New;
import javax.inject.Inject;
import javax.inject.Named;

@Named("invoiceBean")
@SessionScoped
public class InvoiceBean implements Serializable {

    private static final long serialVersionUID = 1L;

    @Inject
    @New
    private CustomerSelectionBean customerSelectionBean;

    private Long customerId;

    private String customerName;

    @PostConstruct
    public void init() {
        customerSelectionBean.setListener(new CustomerSelectionListener() {
            @Override
            public final void customerChanged(final Long id, final String name) {
                customerId = id;
                customerName = name;
            }
        });
    }

    public CustomerSelectionBean getCustomerSelectionBean() {
        return customerSelectionBean;
    }

    public String getCustomerName() {
        return customerName;
    }
}

Finally, in the invoice XHTML, the composite component is used and linked to the injected backing bean:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:fuin="http://fuin.org/examples/soui/facelets"
      xmlns:customer="http://java.sun.com/jsf/composite/customer">
<ui:composition template="/WEB-INF/templates/template.xhtml">
  <ui:param name="title" value="#{messages.invoiceTitle}" />
  <ui:define name="header"></ui:define>
  <ui:define name="content">
    <customer:selection-panel model="#{invoiceBean.customerSelectionBean}" />
  </ui:define>
  <ui:define name="footer"></ui:define>
</ui:composition>
</html>

Summary

In conclusion, parts of the user interface that reference data from other departments should be the responsibility of the department that delivers the data. Any changes in the providing code can then be made without any changes to the using code. Another important benefit of this method is the harmonization of the application’s user interface: controls and panels that display the same data always look the same. Every department can also create a repository of its provided user interface components, making the process of designing a new dialog as easy as putting the right components together.

Reference: Service-Oriented UI from our JCG partner Michael Schnell at the A Java Developer’s Life blog....
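The decoupling described in the article can be reduced to a minimal plain-Java sketch without CDI or JSF. The classes below are simplified stand-ins for the article's beans (not the full implementations): the provider side fires a change through the listener interface, and the consumer side only registers a callback and never touches the customer service.

```java
// Listener interface mirroring the article's CustomerSelectionListener.
interface CustomerSelectionListener {
    void customerChanged(Long id, String name);
}

// Stripped-down stand-in for the provider's selection bean.
class CustomerSelectionBean {
    // No-op default listener, as in the article's DefaultCustomerSelectionListener.
    private CustomerSelectionListener listener = (id, name) -> { };

    void setListener(CustomerSelectionListener l) {
        this.listener = (l == null) ? (id, name) -> { } : l;
    }

    // Simulates a successful customer lookup and notifies the consumer.
    void selectCustomer(Long id, String name) {
        listener.customerChanged(id, name);
    }
}

public class InvoiceBeanSketch {
    private Long customerId;
    private String customerName;

    public static void main(String[] args) {
        InvoiceBeanSketch invoice = new InvoiceBeanSketch();
        CustomerSelectionBean selection = new CustomerSelectionBean();
        // The invoice side only registers a listener...
        selection.setListener((id, name) -> {
            invoice.customerId = id;
            invoice.customerName = name;
        });
        // ...and receives the data when the provider side fires a change.
        selection.selectCustomer(42L, "ACME Corp");
        System.out.println(invoice.customerId + " " + invoice.customerName);
        // prints "42 ACME Corp"
    }
}
```

The key design point survives the simplification: the billing side depends only on the listener interface and the selection bean's public surface, never on how customers are looked up.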
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.