


Integrating JavaFX and Swing

I’ve just finished rewriting a component of my app that used Swing and now uses JavaFX; I ended up with a JavaFX component that integrates with the larger Swing app. It is a large app and the rewrite took me a while, but in the end everything worked fine and I’m glad I did it.

Reasons you might want to do this in your Swing app

You might want to rewrite your Swing app to use JavaFX instead; the easiest way is to do this incrementally, converting one component at a time. This requires that you integrate each newly converted JavaFX component with the rest of your Swing app. To summarize why you might want to start rewriting your app from Swing to JavaFX:

It’s the future. Swing is pretty much dead in the sense that it won’t get any further development. JavaFX is the new UI toolkit for Java, better prepared for the future with things like touch support, 3D, built-in animation support, video and audio playback, etc.

Probable future support for mobile: Android, iOS... From what I’ve been seeing, I think it’s pretty much guaranteed that Android, iOS, etc. support will be made available. Oracle already has working prototypes that it shows at public conferences; the only question is when. I don’t think it will take long; we’ll probably see more on this at the next JavaOne, coming up soon.

It’s solid. JavaFX is a well-designed toolkit with a rapid growth pace, a bright future and a set of good free UI tools. Furthermore, unlike in the past, Oracle gives great importance to developer feedback, changing and adapting its APIs to meet developers’ goals.

It’s pretty. Unlike Swing, which was ugly by itself (not counting third-party libraries), JavaFX looks good right from the start.
Given that users nowadays expect good-looking, well-designed apps, this is a pretty good point.

Nice extras. Some nice extras, like the charts API, an embedded browser that supports HTML5, etc.

How you do it

Back in JavaFX 1.3 you could embed Swing in JavaFX but not the other way around, at least not officially. I implemented a Swing component that allowed you to embed JavaFX content in Swing (called JXScene) and made it publicly available in the JFXtras project. It was the only way you could embed a JavaFX scene in a Swing app. With JavaFX 2.x, Oracle has now provided an official way of embedding JavaFX in Swing, which makes more sense, but unfortunately still no way to embed Swing in JavaFX. I guess this will suffice in most cases.

Architecture

Essentially, when you embed JavaFX in Swing you end up with two running UI threads: the Swing Event Dispatch Thread (EDT) and the JavaFX User Thread. There is a chance that in the future there will be only one thread for both, as is the case with SWT, by making Swing run on the JavaFX User Thread, but for now we’ll have to manage with two threads. Two UI threads running at the same time is what complicates matters and makes JavaFX integration not as easy as you might expect, unless you’re doing some trivial small app, which is not the scenario for most real-world use cases. If you’re doing a small app, you might as well do it all in JavaFX.

Coding

JavaFX gives you JFXPanel, a Swing panel that hosts a JavaFX scene. You set the scene on the JFXPanel and add the panel wherever you could add a Swing component. To access JavaFX data you have to wrap your code in a Runnable object and call the Platform.runLater method:

    jbutton.addActionListener(new ActionListener() {
        public void actionPerformed(ActionEvent e) {
            Platform.runLater(new Runnable() {
                @Override
                public void run() {
                    fxlabel.setText("Swing button clicked!");
                }
            });
        }
    });

On the other side is Swing data. This data must be accessed only on the EDT.
To ensure that your code runs on the EDT, wrap it in a Runnable object and call SwingUtilities.invokeLater:

    SwingUtilities.invokeLater(new Runnable() {
        @Override
        public void run() {
            // Code to change Swing data.
        }
    });

Tips

JavaFX already throws exceptions when you access a JavaFX resource outside the JavaFX User Thread, but bear in mind that this does not always happen: to minimize performance costs, not all situations are checked.

If you use the Substance third-party library, an exception will also be thrown whenever a Swing resource is accessed outside the EDT. Setting Substance as your Swing look and feel might be a good way to catch concurrency mistakes you make on the Swing side.

Be very careful when sharing resources between the two UI threads, and try to avoid this as much as possible. The best way to solve multi-threading problems is to avoid them; these kinds of problems are among the most difficult to solve in software engineering. There is a reason why Swing started off as a multi-threaded toolkit and ended up changing to a single-threaded one.

Sometimes you might want to check whether you are on the JavaFX User Thread via Platform.isFxApplicationThread() and only then issue a call to Platform.runLater(...), because if you are already on the JavaFX User Thread and call runLater(...), the execution of the enclosed code is still deferred to a later time, which might not be what you want.

Other links to check out:

Oracle tutorial: http://docs.oracle.com/javafx/2/swing/jfxpub-swing.htm

Reference: Integrating JavaFX and Swing from our JCG partner Pedro Duque Vieira at the Pixel Duke blog....
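The last tip can be made concrete. JavaFX is not assumed to be available here, so this sketch models the Platform.isFxApplicationThread()/Platform.runLater() guard with a plain single-threaded executor standing in for the JavaFX User Thread; the class and method names are illustrative, not part of any real toolkit API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class UiThreadGuard {
    // Stand-in for the JavaFX User Thread: a single-threaded event loop.
    private final ExecutorService uiExecutor = Executors.newSingleThreadExecutor();
    private volatile Thread uiThread;

    public UiThreadGuard() throws Exception {
        // Capture the worker thread so we can recognize it later,
        // the way Platform.isFxApplicationThread() does internally.
        uiExecutor.submit(() -> uiThread = Thread.currentThread()).get();
    }

    // Mirrors Platform.isFxApplicationThread().
    public boolean isUiThread() {
        return Thread.currentThread() == uiThread;
    }

    // Run immediately when already on the UI thread; otherwise defer,
    // like a guarded Platform.runLater(...).
    public void runOnUiThread(Runnable task) {
        if (isUiThread()) {
            task.run();
        } else {
            uiExecutor.submit(task);
        }
    }

    public void shutdown() {
        uiExecutor.shutdown();
    }
}
```

With JavaFX itself the guard is just one if statement around Platform.runLater; the point is only that an unguarded runLater always defers execution, even when called from the UI thread itself.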

Android Jelly Bean notification tutorial

You may have heard about Android Jelly Bean (API level 16). Google has improved a lot of features and introduced new ones. One of them is the notification system: notifications are now more versatile thanks to media-rich notification styles. Google has come up with three special styles of notification, listed below, and developers can even write their own customized notification styles using RemoteViews. The old Notification class constructor has been deprecated and a brand new, enhanced version of Notification has been introduced.

Notification types:

Basic Notification – shows a simple and short notification with an icon.
Big Picture Notification – shows visual content such as a bitmap.
Big Text Notification – shows a multiline TextView object.
Inbox Style Notification – shows any kind of list, e.g. messages, headlines, etc.

The old syntax required us to create a Notification object directly, but now Android uses the builder pattern to create it. The Notification.Builder class has been introduced to make this task easier. This class returns a builder object which is configurable according to your requirements. Helper classes have been introduced as well: Notification.BigPictureStyle, Notification.BigTextStyle, and Notification.InboxStyle. These are re-builder classes which take the object created by Notification.Builder and modify its behavior.

Project information: meta-data about the project.

Platform version: Android API level 16.
IDE: Eclipse Helios Service Release 2.
Emulator: Android 4.1 (API 16).

Prerequisite: preliminary knowledge of the Android application framework and Intents.

First create the project via Eclipse > File > New Project > Android Application Project. The following dialog box will appear. Fill in the required fields, i.e. Application Name, Project Name and Package Name. Don’t forget to select the Build SDK version (for this tutorial Google API 16 has been selected).
Now press the next button. Once the dialog box appears, select BlankActivity and click the next button. Fill in the Activity Name and Layout file name in the dialog box shown below and hit the finish button. This process will set up the basic project files.

Now we are going to add four buttons to the activity_main.xml file. You can modify the layout file using either the Graphical Layout editor or the XML editor. The content of the file should look like this.

    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:tools="http://schemas.android.com/tools"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical"
        android:gravity="center_horizontal">

        <Button
            android:id="@+id/btBasicNotification"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:gravity="center_horizontal|center_vertical"
            android:onClick="sendBasicNotification"
            android:text="@string/btBasicNotification"
            android:background="@drawable/button_background"
            android:textColor="#000000" />

        <Button
            android:id="@+id/btBigTextNotification"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:gravity="center_horizontal|center_vertical"
            android:onClick="sendBigTextStyleNotification"
            android:text="@string/btBigTextNotification"
            android:background="@drawable/button_background"
            android:textColor="#000000" />

        <Button
            android:id="@+id/btBigPictureNotification"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:gravity="center_horizontal|center_vertical"
            android:onClick="sendBigPictureStyleNotification"
            android:text="@string/btBigPictureNotification"
            android:background="@drawable/button_background"
            android:textColor="#000000" />

        <Button
            android:id="@+id/btInboxStyleNotification"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:gravity="center_horizontal|center_vertical"
            android:onClick="sendInboxStyleNotification"
            android:text="@string/btInboxStyleNotification"
            android:background="@drawable/button_background"
            android:textColor="#000000" />

    </LinearLayout>

You may have noticed that onClick methods are associated with the respective buttons. If you don’t know how to define and use a background drawable for a view, just omit the android:background attribute. Now we are going to define the methods sendBasicNotification, sendBigTextStyleNotification, sendBigPictureStyleNotification and sendInboxStyleNotification. As the method names suggest, each sends that particular kind of notification. In each method we create a Notification.Builder object and customize it; the builder pattern is used here to configure the object, and once customization is done we call the build() method to get the Notification object. In this new notification system, at most three actions can be associated with a notification; they are displayed below the notification content. This is achieved by calling the addAction() method on the builder object, and you will see one icon on the notification per added action, as you will notice in the sendBigPictureStyleNotification() method. Notification priority can also be set by calling the setPriority() method, as shown in the sendBigTextStyleNotification() method. In the code given below, an intent is used to invoke the HandleNotificationActivity.
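The layout above references several string resources (@string/btBasicNotification and friends) that are not shown in the article. Assuming default Eclipse project naming, a matching res/values/strings.xml might look like this; the label values are illustrative:

```xml
<?xml version="1.0" encoding="utf-8"?>
<resources>
    <!-- app_name and title_activity_main are referenced from the manifest below. -->
    <string name="app_name">JellyBeanNotificationExample</string>
    <string name="title_activity_main">NotificationMainActivity</string>

    <!-- Button labels referenced from activity_main.xml. -->
    <string name="btBasicNotification">Basic Notification</string>
    <string name="btBigTextNotification">Big Text Notification</string>
    <string name="btBigPictureNotification">Big Picture Notification</string>
    <string name="btInboxStyleNotification">Inbox Style Notification</string>

    <!-- Text shown by HandleNotificationActivity's layout. -->
    <string name="tvHandleNotification">Notification handled!</string>
</resources>
```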
    package com.example.jellybeannotificationexample;

    import android.app.Activity;
    import android.app.Notification;
    import android.app.Notification.Builder;
    import android.app.NotificationManager;
    import android.app.PendingIntent;
    import android.content.Intent;
    import android.graphics.BitmapFactory;
    import android.os.Bundle;
    import android.view.Menu;
    import android.view.View;

    public class NotificationMainActivity extends Activity {

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_main);
        }

        @Override
        public boolean onCreateOptionsMenu(Menu menu) {
            getMenuInflater().inflate(R.menu.activity_main, menu);
            return true;
        }

        public void sendBasicNotification(View view) {
            Notification notification = new Notification.Builder(this)
                    .setContentTitle("Basic Notification")
                    .setContentText("Basic Notification, used earlier")
                    .setSmallIcon(R.drawable.ic_launcher_share).build();
            notification.flags |= Notification.FLAG_AUTO_CANCEL;
            NotificationManager notificationManager = getNotificationManager();
            notificationManager.notify(0, notification);
        }

        public void sendBigTextStyleNotification(View view) {
            String msgText = "Jelly Bean notification example!! "
                    + "where you will see three different kinds of notification. "
                    + "You can even put a very long string here.";
            NotificationManager notificationManager = getNotificationManager();
            PendingIntent pi = getPendingIntent();
            Builder builder = new Notification.Builder(this);
            builder.setContentTitle("Big text Notification")
                    .setContentText("Big text Notification")
                    .setSmallIcon(R.drawable.ic_launcher)
                    .setAutoCancel(true)
                    .setPriority(Notification.PRIORITY_HIGH)
                    .addAction(R.drawable.ic_launcher_web, "show activity", pi);
            Notification notification = new Notification.BigTextStyle(builder)
                    .bigText(msgText).build();
            notificationManager.notify(0, notification);
        }

        public void sendBigPictureStyleNotification(View view) {
            PendingIntent pi = getPendingIntent();
            Builder builder = new Notification.Builder(this);
            builder.setContentTitle("BP notification")          // Notification title.
                    .setContentText("BigPicture notification")  // You can put a subject line here.
                    .setSmallIcon(R.drawable.ic_launcher)       // Set your notification icon here.
                    .addAction(R.drawable.ic_launcher_web, "show activity", pi)
                    .addAction(
                            R.drawable.ic_launcher_share,
                            "Share",
                            PendingIntent.getActivity(getApplicationContext(), 0,
                                    getIntent(), 0, null));
            // Now create the big picture notification.
            Notification notification = new Notification.BigPictureStyle(builder)
                    .bigPicture(
                            BitmapFactory.decodeResource(getResources(),
                                    R.drawable.big_picture)).build();
            // Set the auto-cancel notification flag.
            notification.flags |= Notification.FLAG_AUTO_CANCEL;
            NotificationManager notificationManager = getNotificationManager();
            notificationManager.notify(0, notification);
        }

        public void sendInboxStyleNotification(View view) {
            PendingIntent pi = getPendingIntent();
            Builder builder = new Notification.Builder(this)
                    .setContentTitle("IS Notification")
                    .setContentText("Inbox Style notification!!")
                    .setSmallIcon(R.drawable.ic_launcher)
                    .addAction(R.drawable.ic_launcher_web, "show activity", pi);
            Notification notification = new Notification.InboxStyle(builder)
                    .addLine("First message").addLine("Second message")
                    .addLine("Third message").addLine("Fourth message")
                    .setSummaryText("+2 more").build();
            // Set the auto-cancel notification flag.
            notification.flags |= Notification.FLAG_AUTO_CANCEL;
            NotificationManager notificationManager = getNotificationManager();
            notificationManager.notify(0, notification);
        }

        public PendingIntent getPendingIntent() {
            return PendingIntent.getActivity(this, 0,
                    new Intent(this, HandleNotificationActivity.class), 0);
        }

        public NotificationManager getNotificationManager() {
            return (NotificationManager) getSystemService(NOTIFICATION_SERVICE);
        }
    }

We have defined a basic HandleNotificationActivity which just shows a simple message when an intent is fired for this activity. The content of the file is as follows.
    package com.example.jellybeannotificationexample;

    import android.app.Activity;
    import android.os.Bundle;

    public class HandleNotificationActivity extends Activity {

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.handle_notification_activity);
        }
    }

The corresponding layout file (handle_notification_activity.xml) is given below.

    <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical"
        android:gravity="center_horizontal|center_vertical" >

        <TextView
            android:id="@+id/textView1"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/tvHandleNotification"
            android:textSize="20dp"
            android:textStyle="bold|italic" />

    </LinearLayout>

Now you have to define the Android manifest file. HandleNotificationActivity should be included in the manifest file along with an intent filter for this activity.

    <manifest xmlns:android="http://schemas.android.com/apk/res/android"
        package="com.example.jellybeannotificationexample"
        android:versionCode="1"
        android:versionName="1.0" >

        <uses-sdk
            android:minSdkVersion="16"
            android:targetSdkVersion="16" />

        <application
            android:icon="@drawable/ic_launcher"
            android:label="@string/app_name">
            <activity
                android:name=".NotificationMainActivity"
                android:label="@string/title_activity_main" >
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
            </activity>
            <activity
                android:name=".HandleNotificationActivity"
                android:label="@string/title_activity_main" >
                <intent-filter>
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
            </activity>
        </application>

    </manifest>

Once you are done with the coding, just execute it. You will see the application as shown in the picture below.
On clicking a button you will see the corresponding notification in the upper part of the screen. If you drag the notification down, you can see the entire message and the corresponding icon. The pictures below show the notifications when dragged down. (Screenshots: application, Big Text style, Inbox style, Big Picture style, basic notification.) If you are interested in the source code, you can find it over here. Reference: Tutorial on new Android Jelly Bean notification from our JCG partner Rakesh Cusat at the Code4Reference blog....

Observer Pattern with Spring Events

INTRODUCTION

The essence of the Observer pattern is to ‘Define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically.’ (GoF). The Observer pattern is a subset of the publish/subscribe pattern, which allows a number of observer objects to see an event. This pattern can be used in different situations, but in summary we can say that the Observer pattern applies when an object should be able to send notifications to other objects without those objects being tightly coupled. In my case I have used this pattern when an asynchronous event had to be notified to one or more graphical components. The pattern can be implemented with an ad hoc solution or with the java.util.Observer/Observable classes. But my projects are always developed with Spring, whether they are web or desktop applications, so in this post I will explain how I implement the Observer pattern with Spring.

HANDS ON

Event handling in the Spring ApplicationContext is provided through the ApplicationEvent class and the ApplicationListener interface. If a bean that implements the ApplicationListener interface is deployed into the context, it receives every ApplicationEvent published to the container. Spring comes with built-in events, like ContextStartedEvent and ContextStoppedEvent, but you can also create your own custom events. Developing your own events requires three classes, covering the observer role, the observable role and the event itself. Observers are those who receive events and must implement the ApplicationListener interface. Observable classes are responsible for publishing events and must implement ApplicationEventPublisherAware. Finally, the event class has to extend ApplicationEvent.

CODING

What I am going to implement is the wikipedia example of the Observer pattern (http://en.wikipedia.org/wiki/Observer_pattern#Example), but using Spring Events instead of the Observer/Observable Java classes.
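Before the Spring version, the pattern itself can be sketched in a few lines of plain Java; the class and method names here are illustrative, not taken from the article or the wikipedia example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal plain-Java sketch of the Observer pattern: observers register
// with the subject and are notified on every state change.
class Subject {
    private final List<Consumer<String>> observers = new ArrayList<>();

    // One-to-many dependency: any number of observers may register.
    void addObserver(Consumer<String> observer) {
        observers.add(observer);
    }

    // When the subject changes state, all dependents are notified.
    void setState(String newState) {
        for (Consumer<String> observer : observers) {
            observer.accept(newState);
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        Subject subject = new Subject();
        subject.addObserver(s -> System.out.println("observer A saw: " + s));
        subject.addObserver(s -> System.out.println("observer B saw: " + s));
        subject.setState("hello"); // both observers are notified
    }
}
```

The Spring version below plays the same roles: ApplicationEventPublisher is the subject's notify mechanism and ApplicationListener beans are the observers, with the container doing the registration.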
The example is a basic publish/subscribe example where a String message is sent from one module to another. Let’s create MessageEvent. This event contains a String that represents the message we want to send. It is a simple class that extends ApplicationEvent.

    public class MessageEvent extends ApplicationEvent {

        private static final long serialVersionUID = 5743058377815147529L;

        private String message;

        public MessageEvent(Object source, String message) {
            super(source);
            this.message = message;
        }

        @Override
        public String toString() {
            StringBuilder builder = new StringBuilder();
            builder.append("MessageEvent [message=").append(message).append("]");
            return builder.toString();
        }
    }

The next class is the Observable class. This class must implement ApplicationEventPublisherAware. This interface defines a setter method with an ApplicationEventPublisher parameter; this parameter is used for publishing events. Note that the current implementation also implements the Runnable interface so the user can create events from console input.

    public class EventSource implements Runnable, ApplicationEventPublisherAware {

        private ApplicationEventPublisher applicationEventPublisher = null;

        public void setApplicationEventPublisher(
                ApplicationEventPublisher applicationEventPublisher) {
            this.applicationEventPublisher = applicationEventPublisher;
        }

        public void run() {
            final InputStreamReader isr = new InputStreamReader(System.in);
            final BufferedReader br = new BufferedReader(isr);
            while (true) {
                try {
                    String response = br.readLine();
                    System.out.println(Thread.currentThread().getName());
                    this.applicationEventPublisher.publishEvent(new MessageEvent(this, response));
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

The Observer class is even simpler: it implements the ApplicationListener interface. The onApplicationEvent method is called when an event is published. Note that it is a generic interface, so no cast is required; this differs from the java.util.Observer class.
    public class ResponseHandler implements ApplicationListener<MessageEvent> {

        public void onApplicationEvent(MessageEvent messageEvent) {
            System.out.println(Thread.currentThread().getName());
            System.out.println(messageEvent);
        }
    }

In the application context file, you register both the ApplicationListener and the ApplicationEventPublisherAware beans. And finally, a main class to test the system; a thread is created to execute multiple asynchronous events.

    public class MyApp {

        public static void main(String args[]) {
            ApplicationContext applicationContext =
                    new ClassPathXmlApplicationContext("classpath:META-INF/spring/app-context.xml");
            EventSource eventSource = applicationContext.getBean("eventSource", EventSource.class);
            Thread thread = new Thread(eventSource);
            thread.start();
        }
    }

So start the program and write something to the console. You will see something like:

    hello
    Thread-0
    Thread-0
    MessageEvent [message=hello]

I have entered the ‘hello’ message and the thread name of the event publisher is printed. Then the event is sent and the handler thread name is printed too. Finally, the received event is shown. There is one thing that should call your attention: both sender (Observable) and receiver (Observer) are executed in the same thread, because by default event listeners receive events synchronously. This means that the publishEvent() method blocks until all listeners have finished processing the event. This approach has many advantages (for example, reusing transaction contexts), but in some cases you will prefer each event to be executed in a new thread; Spring also supports this strategy.

In Spring, the class responsible for managing events is SimpleApplicationEventMulticaster. This class multicasts all events to all registered listeners, leaving it up to the listeners to ignore events they are not interested in. The default behaviour is that all listeners are invoked in the calling thread. Now I am going to explain how the Spring event architecture is initialized and how you can modify it.
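The article does not show the plain application context file that registers these two beans. Before the multicaster customization, it might look like this; the bean ids and classes match the ones used in this example, the rest is standard Spring 3 boilerplate:

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

    <!-- Observable: publishes MessageEvent instances. -->
    <bean id="eventSource" class="org.asotobu.oo.EventSource" />

    <!-- Observer: receives MessageEvent instances. -->
    <bean id="responseHandler" class="org.asotobu.oo.ResponseHandler" />

</beans>
```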
By default, when the ApplicationContext starts up, it calls the initApplicationEventMulticaster method. This method checks whether a bean with id applicationEventMulticaster of type ApplicationEventMulticaster exists. If so, that ApplicationEventMulticaster is used; if not, a new SimpleApplicationEventMulticaster with the default configuration is created. SimpleApplicationEventMulticaster has a setTaskExecutor method which can be used to specify which java.util.concurrent.Executor will execute events. So if you want each event to be executed in a different thread, a good approach is to use a ThreadPoolExecutor. As explained in the last paragraph, we must now explicitly define the SimpleApplicationEventMulticaster instead of relying on the default one. Let’s implement it:

    <beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:context="http://www.springframework.org/schema/context"
        xmlns:task="http://www.springframework.org/schema/task"
        xsi:schemaLocation="http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task-3.0.xsd
            http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
            http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">

        <bean id="eventSource" class="org.asotobu.oo.EventSource" />
        <bean id="responseHandler" class="org.asotobu.oo.ResponseHandler" />

        <task:executor id="pool" pool-size="10" />

        <bean id="applicationEventMulticaster"
            class="org.springframework.context.event.SimpleApplicationEventMulticaster">
            <property name="taskExecutor" ref="pool" />
        </bean>

    </beans>

First of all, SimpleApplicationEventMulticaster must be defined as a bean with id applicationEventMulticaster. Then the task pool is set, and we rerun our main class. The output will be:

    hello
    Thread-1
    pool-1
    MessageEvent [message=hello]

Note that the sender and receiver threads are now different.
And of course you can create your own ApplicationEventMulticaster for more complex operations. You just have to implement ApplicationEventMulticaster, define it under the applicationEventMulticaster bean name, and events will be executed according to your own strategy. I hope your Spring desktop applications can now take full advantage of Spring events for separating modules. Download Code. Reference: Observer Pattern with Spring Events from our JCG partner Alex Soto at the One Jar To Rule Them All blog....

Which Java thread consumes my CPU?

What do you do when your Java application consumes 100% of the CPU? It turns out you can easily find the problematic thread(s) using built-in UNIX and JDK tools; no profilers or agents required. For the purpose of testing we’ll use this simple program:

    public class Main {
        public static void main(String[] args) {
            new Thread(new Idle(), "Idle").start();
            new Thread(new Busy(), "Busy").start();
        }
    }

    class Idle implements Runnable {
        @Override
        public void run() {
            try {
                TimeUnit.HOURS.sleep(1);
            } catch (InterruptedException e) {
            }
        }
    }

    class Busy implements Runnable {
        @Override
        public void run() {
            while (true) {
                "Foo".matches("F.*");
            }
        }
    }

As you can see, it starts two threads. Idle is not consuming any CPU (remember, sleeping threads consume memory, but not CPU) while Busy eats a whole core, as parsing and executing a regular expression is a surprisingly complex process. Let’s run this program and forget about it. How can we quickly find out that Busy is the problematic piece of our software?

First of all we use top to find out the process id (PID) of the java process consuming the most CPU. This is quite straightforward:

    $ top -n1 | grep -m1 java

This will display the first line of top output containing ‘java’:

    22614 tomek 20 0 1360m 734m 31m S 6 24.3 7:36.59 java

The first column is the PID; let’s extract it. Unfortunately it turned out that top uses ANSI escape codes for colors: invisible characters that break tools like grep and cut. Luckily I found a perl script to remove these characters and was finally able to extract the PID of the java process exhausting my CPU:

    $ top -n1 | grep -m1 java | perl -pe 's/\e\[?.*?[\@-~] ?//g' | cut -f1 -d' '

The cut -f1 -d' ' invocation simply takes the first value out of the space-separated columns:

    22614

Now that we know the PID of the problematic JVM, we can use top -H to find the problematic Linux threads.
The -H option prints a list of all threads as opposed to processes; the PID column now represents the internal Linux thread ID:

    $ top -n1 -H | grep -m1 java
    $ top -n1 -H | grep -m1 java | perl -pe 's/\e\[?.*?[\@-~] ?//g' | cut -f1 -d' '

The output is very similar, but the first value is now the thread ID:

    25938 tomek 20 0 1360m 748m 31m S 2 24.8 0:15.15 java
    25938

So we have the process ID of our busy JVM and the Linux thread ID (most likely from that process) consuming our CPU. Here comes the best part: if you look at jstack output (available in the JDK), each thread has a mysterious ID printed next to its name:

    "Busy" prio=10 tid=0x7f3bf800 nid=0x6552 runnable [0x7f25c000]
       java.lang.Thread.State: RUNNABLE
            at java.util.regex.Pattern$Node.study(Pattern.java:3010)

That’s right: the nid=0x6552 parameter is the same as the thread ID printed by top -H. Of course, to not make it too simple, top uses decimal notation while jstack prints in hex. Again there is a simple solution, printf '%x':

    $ printf '%x' 25938
    6552

Let’s wrap all we have into a script and combine the results:

    #!/bin/bash
    PID=$(top -n1 | grep -m1 java | perl -pe 's/\e\[?.*?[\@-~] ?//g' | cut -f1 -d' ')
    NID=$(printf '%x' $(top -n1 -H | grep -m1 java | perl -pe 's/\e\[?.*?[\@-~] ?//g' | cut -f1 -d' '))
    jstack $PID | grep -A500 $NID | grep -m1 '^$' -B 500

PID holds the java PID and NID holds the thread ID, most likely from that JVM. The last line simply dumps the JVM stack trace of the given PID and filters out (using grep) the thread which has the matching nid.
Guess what, it works:

    $ ./profile.sh
    "Busy" prio=10 tid=0x7f3bf800 nid=0x6552 runnable [0x7f25c000]
       java.lang.Thread.State: RUNNABLE
            at java.util.regex.Pattern$Node.study(Pattern.java:3010)
            at java.util.regex.Pattern$Curly.study(Pattern.java:3854)
            at java.util.regex.Pattern$CharProperty.study(Pattern.java:3355)
            at java.util.regex.Pattern$Start.<init>(Pattern.java:3044)
            at java.util.regex.Pattern.compile(Pattern.java:1480)
            at java.util.regex.Pattern.<init>(Pattern.java:1133)
            at java.util.regex.Pattern.compile(Pattern.java:823)
            at java.util.regex.Pattern.matches(Pattern.java:928)
            at java.lang.String.matches(String.java:2090)
            at com.blogspot.nurkiewicz.Busy.run(Main.java:27)
            at java.lang.Thread.run(Thread.java:662)

Running the script multiple times (or with watch, see below) will capture the Busy thread in different places, but almost always inside regular expression parsing, which is our problematic piece!

Multiple threads

In case your application has multiple CPU-hungry threads, you can use the watch -n1 ./profile.sh command to run the script every second and get semi-real-time stack dumps, most likely from different threads. Testing with the following program:

    new Thread(new Idle(), "Idle").start();
    new Thread(new Busy(), "Busy-1").start();
    new Thread(new Busy(), "Busy-2").start();

you’ll see stack traces of either the Busy-1 or the Busy-2 thread (in different places inside the Pattern class), but never Idle.

Reference: Which Java thread consumes my CPU? from our JCG partner Tomasz Nurkiewicz at the Java and neighbourhood blog....
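If you would rather stay inside the JVM, a similar measurement is available through the JDK's ThreadMXBean. This is a sketch, not part of the original article: per-thread CPU timing is an optional JVM feature (supported on HotSpot/Linux), hence the guard.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCpu {
    private static final ThreadMXBean MX = ManagementFactory.getThreadMXBean();

    // CPU time in nanoseconds consumed so far by the given live thread,
    // or -1 when the JVM does not support per-thread CPU timing.
    public static long cpuTimeNanos(Thread t) {
        if (!MX.isThreadCpuTimeSupported()) {
            return -1;
        }
        if (!MX.isThreadCpuTimeEnabled()) {
            MX.setThreadCpuTimeEnabled(true);
        }
        return MX.getThreadCpuTime(t.getId());
    }

    public static void main(String[] args) throws InterruptedException {
        // Same busy loop as the article's Busy runnable.
        Thread busy = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                "Foo".matches("F.*");
            }
        }, "Busy");
        busy.setDaemon(true);
        busy.start();
        Thread.sleep(200);
        System.out.println("Busy consumed " + cpuTimeNanos(busy) + " ns so far");
        busy.interrupt();
    }
}
```

Polling all threads via ThreadMXBean.getAllThreadIds() and sorting by CPU time gives you a crude in-process profiler, which can be handy when you cannot shell out to top and jstack.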

Gradle Custom Plugin

This tutorial describes how to create a standalone Gradle custom plugin. It covers the following topics:

Creating a task and using it in a custom plugin
Standalone custom plugin
Short plugin id
Customizing Gradle settings using settings.gradle

Project info: meta-data about the project.

Gradle version: 1.1
OS platform: Ubuntu 12.10
Prerequisite: basic understanding of Gradle scripts.

Creating the standalone custom plugin

Create the directory structure:

    |-custom-plugin
    | |-plugin
    | |-src
    | |-main
    | | |-groovy
    | | | |-com
    | | | |-code4reference
    | | | |-gradle
    | | |-resources
    | | | |-META-INF
    | | | |-gradle-plugins
    | |-test
    | | |-groovy
    | | | |-com
    | | | |-code4reference
    | | | |-gradle
    |-user

Here the plugin directory contains all source code and resource files, whereas the user directory contains the consumer script which uses the custom plugin. Execute the following commands to create the directory structure; the groovy folder contains the source code package.

    $ mkdir -p custom-plugin/plugin/src/main/groovy/com/code4reference/gradle
    $ mkdir -p custom-plugin/plugin/src/main/resources/META-INF/gradle-plugins
    $ mkdir -p custom-plugin/user

Custom plugin source code

Every plugin needs an implementation class that implements the Plugin interface. Let’s define the plugin class.

    package com.code4reference.gradle;

    import org.gradle.api.*;

    class Code4ReferencePlugin implements Plugin<Project> {
        def void apply(Project project) {
            // c4rTask task has been defined below.
            project.task('c4rTask') << {
                println 'Hi from Code4Reference plugin!'
            }
        }
    }

Put this file in the custom-plugin/plugin/src/main/groovy/com/code4reference/gradle directory. Here the c4rTask task has been defined to print a simple line.

Short plugin ID

To apply a plugin, we usually use a short id, e.g. apply plugin: 'java'. Here 'java' is the short plugin id for the class org.gradle.api.plugins.JavaPlugin. A short plugin id can be defined in a few easy steps.
For this, we need to create a property file and put it in the META-INF/gradle-plugins directory which comes under the class path. The name of the file will be our short id. This property file must contain the line shown below, and it should point to the plugin implementation class. Let’s create the property file as code4reference.properties and point it to the Code4ReferencePlugin class.

implementation-class=com.code4reference.gradle.Code4ReferencePlugin

Gradle script to generate the plugin

For compiling and building this plugin, we will write the gradle script. Create a file named build.gradle in the plugin directory and copy the content below into it.

apply plugin: 'groovy'
apply plugin: 'maven'

dependencies {
    compile gradleApi()
    groovy localGroovy()
}

repositories {
    mavenCentral()
}

group='com.code4reference' //The group name makes it easier to manage the packages.
version='1.1-SNAPSHOT'

uploadArchives {
    repositories {
        mavenDeployer {
            repository(url: uri('../repo'))
        }
    }
}

In this gradle script, we use the groovy plugin to compile the Groovy source code and declare the Gradle API as a compile-time dependency. You may have noticed that we use the maven plugin. It basically creates the plugin jar file and stores it in a Maven repository. Here we create a Maven repository named repo in the parent directory and store the jar file in it.

Building the plugin and putting it in the repository

$ gradle uploadArchives #This will put the plugin-version.jar in the maven repository.
:compileJava UP-TO-DATE
:compileGroovy UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:jar
:uploadArchives
Uploading: com/code4reference/plugin/1.1-SNAPSHOT/plugin-1.1-20120816.163101-1.jar to repository remote at file:/home/rakesh/programming/mygitrepo/Code4Reference/GradleExample/custom-plugin-1/repo/
Transferring 5K from remote
Uploaded 5K

BUILD SUCCESSFUL

Total time: 34.892 secs

Project settings using settings.gradle

When the above command is executed, gradle tries to get the project name from settings.gradle.
If the settings.gradle file is not present in the current directory, then gradle takes the name of the current directory and assumes it is the project name. It then forms the path to store the jar file. The file path convention is as follows: /group/name/projectName/version/projectname-version-timestamp.jar. You may notice in the above output that the jar path and the jar file name contain the word plugin, because the current directory name is plugin and gradle assumes it is the project name. If we want to override this property and put code4ReferencePlugin as the project name, we need to create a settings.gradle file in the plugin directory and put the following line in it.

rootProject.name = 'code4ReferencePlugin'

Now again execute the command to generate the plugin jar file.

$ gradle uploadArchives
:compileJava UP-TO-DATE
:compileGroovy UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:jar UP-TO-DATE
:uploadArchives
Uploading: com/code4reference/code4ReferencePlugin/1.1-SNAPSHOT/code4ReferencePlugin-1.1-20120816.164441-5.jar to repository remote at file:/home/rakesh/programming/mygitrepo/Code4Reference/GradleExample/custom-plugin-1/repo/
Transferring 5K from remote
Uploaded 5K

BUILD SUCCESSFUL

Total time: 8.61 secs

Now the problem is solved. The jar is generated with the name code4ReferencePlugin-[version]-timestamp.jar. If you want to find out more about Gradle and system properties, look here.

Using the custom plugin

This is really a simple step: a custom plugin is applied in the same way as any built-in plugin. Now create another build.gradle file in the user directory and copy in the code given below.

buildscript {
    repositories {
        maven { url uri('../repo') }
    }
    dependencies {
        classpath group: 'com.code4reference', name: 'code4ReferencePlugin', version: '1.1-SNAPSHOT'
    }
}
apply plugin: 'code4reference'

This build.gradle script accesses the Maven repository present in the parent directory.
We have also defined a dependency which fetches the particular version of the plugin jar file from the Maven repository. Last but not least, we apply the short plugin id 'code4reference'. To run this gradle script, execute the command below on the terminal in the user directory.

$ gradle c4rTask #Remember we have created c4rTask in the Code4ReferencePlugin class.
#You will get the following output.
:c4rTask
Hi from Code4Reference plugin!

BUILD SUCCESSFUL

Total time: 3.908 secs

Voilà! You have just created a custom plugin and used it in a different project script. You can find the source code for this tutorial over here at Code4Reference.

Now we will cover the following topics: defining a custom Task class; passing arguments to a custom plugin task; nested arguments; testing the custom plugin.

Project info: Project name: Gradle custom plugin. Gradle version: 1.1. OS platform: Ubuntu 12.10. Prerequisite: basic understanding of Gradle script. Here, we will follow the same directory hierarchy listed in the first part.

Define a custom Task

Let’s define a custom class named Code4ReferenceTask which extends the DefaultTask class, and put this file in the same folder where Code4ReferencePlugin.groovy is kept. This class contains a method named showMessage() which is annotated with @TaskAction. Gradle calls this method when the task is executed.

package com.code4reference.gradle;

import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction

class Code4ReferenceTask extends DefaultTask {
    @TaskAction
    def showMessage() {
        println '----------showMessage-------------'
    }
}

Now we need to make some minor modifications in Code4ReferencePlugin.groovy to include the custom task. The modified Code4ReferencePlugin class is as follows.
package com.code4reference.gradle;

import org.gradle.api.*;

class Code4ReferencePlugin implements Plugin {
    def void apply(Project project) {
        //Define the task named c4rTask of type Code4ReferenceTask
        project.task('c4rTask', type: Code4ReferenceTask)
    }
}

You may notice that only the highlighted line has changed from the previous implementation. Now the c4rTask is of Code4ReferenceTask type. Execute the gradle uploadArchives command in the plugin directory. This will update the jar file in the Maven repository. Now execute the command below in the user directory with the same old build.gradle. We will get the following output.

$ gradle c4rTask
:c4rTask
----------showMessage-------------

BUILD SUCCESSFUL

Total time: 14.057 secs

Passing arguments to a custom plugin task

The above implementation is the simplest one and doesn’t do much. What if we want to pass arguments from the Gradle script to this task? We can achieve this by using an extension object. The Gradle Project has an associated ExtensionContainer object that helps keep track of all the settings and properties being passed to plugin classes. Let’s define an extension class which can hold the arguments and pass those to the Task class. The highlighted lines in the Code4ReferencePlugin class help to pass the arguments to the Task class.

package com.code4reference.gradle;

import org.gradle.api.*;

//For passing arguments from the gradle script.
class Code4ReferencePluginExtension {
    String message = 'Hello from Code4Reference'
    String sender = 'Code4Reference'
}

class Code4ReferencePlugin implements Plugin {
    def void apply(Project project) {
        project.extensions.create('c4rArgs', Code4ReferencePluginExtension)
        project.task('c4rTask', type: Code4ReferenceTask)
    }
}

We have defined Code4ReferencePluginExtension as the extension class; it contains two variables, message and sender. These serve as the arguments for the custom-defined task. We need to modify the Code4ReferenceTask class to access the arguments.
The highlighted lines have been added to the previous Code4ReferenceTask class implementation. Note that the interpolated strings must be double-quoted – in Groovy, single-quoted strings are not interpolated.

package com.code4reference.gradle;

import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction

class Code4ReferenceTask extends DefaultTask {
    @TaskAction
    def showMessage() {
        println '------------showMessage-------------------'
        println "From : ${project.c4rArgs.sender}, message : ${project.c4rArgs.message}"
    }
}

Execute the gradle uploadArchives command in the plugin directory. This will update the jar file in the Maven repository. We also need to update the build.gradle in the user directory.

//custom-plugin-2/user
buildscript {
    repositories {
        maven { url uri('../repo') }
    }
    dependencies {
        classpath group: 'com.code4reference', name: 'code4ReferencePlugin', version: '1.2-SNAPSHOT'
    }
}

apply plugin: 'code4reference'

c4rArgs {
    sender = 'Rakesh'
    message = 'Hello there !!!!'
}

You may have noticed that the c4rArgs closure has been added, and the sender and message variables are set in the closure. These two variables are accessible in the showMessage() method. Now run the build.gradle present in the user directory. We get the following output.

$ gradle c4rTask
:c4rTask
-------------------------showMessage-----------------------------
From : Rakesh, message : Hello there !!!!

BUILD SUCCESSFUL

Total time: 15.817 secs

Nested arguments

What if we want to pass nested arguments? We can achieve this by nesting the extension objects. Here is the code for the Code4ReferencePlugin class. Only the highlighted lines have been added in this class.

package com.code4reference.gradle;

import org.gradle.api.*;

//Extension class for nested arguments
class C4RNestedPluginExtention {
    String receiver = 'Admin'
    String email = 'admin@code4reference.com'
}

//For passing arguments from the gradle script.
class Code4ReferencePluginExtension {
    String message = 'Hello from Code4Reference'
    String sender = 'Code4Reference'
    C4RNestedPluginExtention nested = new C4RNestedPluginExtention()
}

class Code4ReferencePlugin implements Plugin {
    def void apply(Project project) {
        project.extensions.create('c4rArgs', Code4ReferencePluginExtension)
        project.c4rArgs.extensions.create('nestedArgs', C4RNestedPluginExtention)
        project.task('c4rTask', type: Code4ReferenceTask)
    }
}

It’s time to modify the Code4ReferenceTask class as well. The highlighted lines have been added in this class to access the nested arguments.

package com.code4reference.gradle;

import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction

class Code4ReferenceTask extends DefaultTask {
    @TaskAction
    def showMessage() {
        println '------------showMessage-------------------'
        println "From : ${project.c4rArgs.sender}, message : ${project.c4rArgs.message}"
        println "To : ${project.c4rArgs.nestedArgs.receiver}, email : ${project.c4rArgs.nestedArgs.email}"
    }
}

Execute the gradle uploadArchives command again in the plugin directory to update the jar file in the Maven repository. Now modify the build.gradle file present in the user directory to pass the nested arguments.

buildscript {
    repositories {
        maven { url uri('../repo') }
    }
    dependencies {
        classpath group: 'com.code4reference', name: 'code4ReferencePlugin', version: '1.2-SNAPSHOT'
    }
}

apply plugin: 'code4reference'

c4rArgs {
    sender = 'Rakesh'
    message = 'Hello there !!!!'

    nestedArgs {
        receiver = 'gradleAdmin'
        email = 'gradleAdmin@code4reference.com'
    }
}

We have added the highlighted lines in the build.gradle file.

Testing the plugin and task

Testing of code is an important aspect of code development. Now we are going to add unit tests for the custom task and plugin. For this, we need to create the directory structure for the test classes: the test folder goes in the src directory. Execute the command below in the plugin directory to create the test directories.
$ mkdir -p src/test/groovy/com/code4reference/gradle/

The test directory structure follows the same package directory structure which has been used for the source code. In this directory, put the test classes for Code4ReferencePlugin and Code4ReferenceTask. In the test classes, ProjectBuilder is used to obtain the project object. These test cases are easy to write – they are ordinary JUnit test cases. The code of the test classes is as follows:

package com.code4reference.gradle;

import org.junit.Test
import org.gradle.testfixtures.ProjectBuilder
import org.gradle.api.Project
import static org.junit.Assert.*

class Code4ReferenceTaskTest {
    @Test
    public void canAddTaskToProject() {
        Project project = ProjectBuilder.builder().build()
        def task = project.task('c4rtakstest', type: Code4ReferenceTask)
        assertTrue(task instanceof Code4ReferenceTask)
    }
}

package com.code4reference.gradle;

import org.junit.Test
import org.gradle.testfixtures.ProjectBuilder
import org.gradle.api.Project
import static org.junit.Assert.*

class Code4ReferencePluginTest {
    @Test
    public void code4referencePluginAddsCode4ReferenceTaskToProject() {
        Project project = ProjectBuilder.builder().build()
        project.apply plugin: 'code4reference'
        println 'code4referencePluginAddsCode4ReferenceTaskToProject'
        assertTrue(project.tasks.c4rTask instanceof Code4ReferenceTask)
    }
}

To run the tests, execute the following command in the plugin folder.

$ gradle test #For successful test cases.
:compileJava UP-TO-DATE
:compileGroovy UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:compileTestJava UP-TO-DATE
:compileTestGroovy
:processTestResources UP-TO-DATE
:testClasses
:test

BUILD SUCCESSFUL

Total time: 42.799 secs

$ gradle test #In case of test case failure,
#you can expect output similar to that given below.
:compileJava UP-TO-DATE
:compileGroovy UP-TO-DATE
:processResources UP-TO-DATE
:classes UP-TO-DATE
:compileTestJava UP-TO-DATE
:compileTestGroovy
:processTestResources UP-TO-DATE
:testClasses
:test

com.code4reference.gradle.Code4ReferencePluginTest > code4referencePluginAddsCode4ReferenceTaskToProject FAILED
java.lang.AssertionError at Code4ReferencePluginTest.groovy:14

2 tests completed, 1 failed

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':test'.
> There were failing tests. See the report at: file:///home/rakesh/programming/mygitrepo/Code4Reference/GradleExample/custom-plugin-2/plugin/build/reports/tests/index.html

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.

BUILD FAILED

Gradle's test task reports the location of the test report. This file can be opened in any browser to examine the stack trace. You can find the source code here. Reference: Gradle custom plugin (Part-1), Gradle custom plugin (Part-2) from our JCG partner Rakesh Cusat at the Code4Reference blog....

What’s better – Big Fat Tests or Little Tests?

Like most startups, we built a lot of prototypes and wrote and threw out a lot of code as we tried out different ideas. Because we were throwing out the code anyways, we didn’t bother writing tests – why write tests that you’ll just throw away too? But as we ramped the team up to build the prototype out into a working system, we got into trouble early. We were pushing our small test team too hard trying to keep up with changes and new features, while still trying to make sure that the core system was working properly. We needed to get a good automated test capability in place fast. The quickest way to do this was by writing what Michael Feathers calls “Characterization Tests”: automated tests – written at inflection points in an existing code base – that capture the behavior of parts of a system, so that you know if you’ve affected existing behavior when you change or fix something. Once you’ve reviewed these tests to make sure that what the system is doing is actually what it is supposed to be doing, the tests become an effective regression tool. The tests that we wrote to do this are bigger and broader than unit tests – they’re fat developer-facing tests that run beneath the UI and validate a business function or a business rule involving one or more system components or subsystems. Unlike customer-facing functional tests, they don’t require manual setup or verification. Most of these tests are positive, happy path tests that make sure that important functions in the system are working properly, and that test validation functions. Using fat and happy tests as a starting point for test automation is described in the Continuous Delivery book. The idea is to automate high-value high-risk test scenarios that cover as much of the important parts of the system as you can with a small number of tests. This gives you a “smoke test” to start, and the core of a test suite. Today we have thousands of automated tests that run in our Continuous Integration environment.
Developers write small unit tests, especially in new parts of the code and where we need to test through a lot of different logical paths and variations quickly. But a big part of our automated tests are still fat, or at least chubby, functional component tests and linked integration tests that explore different paths through the main parts of the system. We use code coverage analysis to identify weak spots, areas where we need to add more automated tests or do more manual testing. Using a combination of unit tests and component tests we get high (90%+) test coverage in core parts of the application, and we exercise a lot of the general plumbing of the system regularly. It’s easy to test server-side services this way, using a common pattern: set up initial state in a database or memory, perform some action using a message or API call, verify the expected results (including messages and database changes and in-memory state) and then roll back state and prepare for the next test. We also have hundreds of much bigger and fatter integration and acceptance tests that test client UI functions and client API functions through to the server. These “really big fat” tests involve a lot more setup work and have more moving parts, are harder to write and require more maintenance, and take longer to run. They are also more fragile and need to be changed more often. But they test real end-to-end scenarios that can catch real problems like intermittent system race conditions as well as regressions.

What’s good and bad about fat tests?

There are advantages and disadvantages in relying on fat tests. First, bigger tests have more dependencies. They need more setup work and more test infrastructure, they have more steps, and they take longer to run than unit tests.
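The common server-side pattern described above – set up initial state, act through the service API, verify the results, then roll back – can be sketched in a few lines of plain Java. The AccountService and the map standing in for the database below are hypothetical illustrations, not the author’s actual system:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical service under test: debits an account held in a "database".
class AccountService {
    private final Map<String, Integer> db;
    AccountService(Map<String, Integer> db) { this.db = db; }
    void debit(String account, int amount) {
        db.put(account, db.get(account) - amount);
    }
}

public class FatTestSketch {
    public static void main(String[] args) {
        Map<String, Integer> db = new HashMap<>();

        // 1. Set up initial state in the "database".
        db.put("alice", 100);
        Map<String, Integer> snapshot = new HashMap<>(db);

        // 2. Perform some action through the service API.
        new AccountService(db).debit("alice", 30);

        // 3. Verify the expected database change.
        if (db.get("alice") != 70) throw new AssertionError("debit failed");

        // 4. Roll back state so the next test starts clean.
        db.clear();
        db.putAll(snapshot);
        System.out.println("state restored: " + db.get("alice")); // prints: state restored: 100
    }
}
```

A real fat test would talk to an actual database or message bus, but the four-phase shape stays the same.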
You need to take time to design a test approach and to create templates and utilities to make it easy to write and maintain bigger tests. You’ll end up with more waste and overlap: common code that gets exercised over and over, just like in the real world. You’ll have to put in better hardware to run the tests, and testing pipelines so that more expensive testing (like the really fat integration and acceptance testing) is done later and less often. Feedback from big tests isn’t as fast or as direct when tests fail. Gerard Meszaros points out that the bigger the test, the harder it is to understand what actually broke – you know that there is a real problem, but you have more digging to do to figure out where the problem is. Feedback to the developer is less immediate: bigger tests run slower than small tests and you have more debugging work to do. We’ve done a lot of work on providing contextual information when tests fail so that programmers can move faster to figuring out what’s broken. And from a regression test standpoint, it’s usually obvious that whatever broke the system is whatever you just changed, so…. As you work more on a large system, it is less important to get immediate and local feedback on the change that you just made, and more important to make sure that you didn’t break something else somewhere else, that you didn’t make an incorrect assumption or break a contract of some kind, or introduce a side-effect. Big component tests and interaction tests help catch important problems faster. They tell you more about the state of the system, how healthy it is. You can have a lot of small unit tests that are passing, but that won’t give you as much confidence as a smaller number of fat tests that tell you that the core functions of the system are working correctly. Bigger tests also tell you more about what the system does and how it works. I don’t buy the idea that tests make for good documentation of a system – at least unit tests don’t.
It’s unrealistic to expect a developer to pick up how a system works from looking at hundreds or thousands of unit tests. But new people joining a team can look at functional tests to understand the important functions of the system and what the rules of the system are. And testers, even non-technical manual testers, can read the tests and understand what test scenarios are covered and what aren’t, and use this to guide their own testing and review work. Meszaros also explains that good automated developer tests, even tests at the class or method level, should always be black box tests, so that if you need to change the implementation in refactoring or for optimization, you can do this without breaking a lot of tests. Fat tests make these black boxes bigger, raising them to a component or service level. This makes it even easier to change implementation details without having to fix tests – as long as you don’t change public interfaces and public behavior (which are dangerous changes to make anyways), the tests will still run fine. But this also means that you can make mistakes in implementation that won’t be caught by functional tests – behavior outside of the box hasn’t changed, but something inside the box might still be wrong, a mistake that won’t trip you up until later. Fat tests won’t find these kinds of mistakes, and they won’t catch other detailed mistakes like missing some validation. It’s harder to write negative tests and to test error handling code this way, because the internal exception paths are often blocked at a higher level. You’ll need other kinds of testing, including unit tests and manual exploratory testing and destructive testing, to check edge cases and catch problems in exception handling.

Would we do it this way again?

I’d like to think that if we started something brand new again, we’d start off in a more disciplined way, test first and all that. But I can’t promise.
When you are trying to get to the right idea as quickly as possible, anything that gets in the way and slows down thinking and feedback is going to be put aside. It’s once you’ve got something that is close-to-right and close-to-working, and you need to make sure that it keeps working, that testing becomes an imperative. You need both small unit tests and chubby functional tests and some big fat integration and end-to-end tests to do a proper job of automated testing. It’s not an either/or argument. But writing fat, functional and interaction tests will pay back faster in the short-term, because you can cover more of the important scenarios faster with fewer tests. And they pay back over time in regression, because you always know that you aren’t breaking anything important, and you know that you are exercising the paths and scenarios that your customers are or will be – the paths and scenarios that should be tested all of the time. When it comes to automated testing, some extra fat is a good thing. Reference: What’s better – Big Fat Tests or Little Tests? from our JCG partner Jim Bird at the Building Real Software blog....

Java memes which refuse to die

Also titled: My pet hates in Java coding. There are a number of Java memes which annoy me, partly because they were always a bad idea, but mostly because people keep picking them up years after better alternatives appeared.

Using StringBuffer instead of StringBuilder

The Javadoc for StringBuffer from 2004 states: As of release JDK 5, this class has been supplemented with an equivalent class designed for use by a single thread, StringBuilder. The StringBuilder class should generally be used in preference to this one, as it supports all of the same operations but it is faster, as it performs no synchronization. Not only is StringBuilder a better choice, the occasions where you could have used a synchronized StringBuffer are so rare that it’s unlikely it was ever a good idea. Say you had the code:

// run in two threads
sb.append(key).append("=").append(value).append(", ");

Each append is thread safe, but the lock could be released at any point, meaning you could get

key1=value1, key2=value2,
key1=key2value1=, value2,
key1key2==value1value2, ,

What makes it worse is that the JIT and JVM will attempt to hold onto the lock between calls in the interests of efficiency. This means you can have code which passes all your tests and works in production for years, but then very rarely breaks, possibly due to upgrading your JVM.

Using DataInputStream to read text

Another common meme is using DataInputStream when reading text, in the following template (three lines with the two readers on the same line). I suspect there is one original snippet which gets copied around.

FileInputStream fstream = new FileInputStream("filename.txt");
DataInputStream in = new DataInputStream(fstream);
BufferedReader br = new BufferedReader(new InputStreamReader(in));

This is bad for three reasons: You might be tempted to use in to read binary data, which won’t work due to the buffered nature of BufferedReader.
(I have seen this tried.) Similarly, you might believe that DataInputStream does something useful here, when it doesn’t. And there is a much shorter way which is correct.

BufferedReader br = new BufferedReader(new FileReader("filename.txt"));

// or with Java 7.
try (BufferedReader br = new BufferedReader(new FileReader("filename.txt"))) {
    // use br
}

Using Double Checked Locking to create a Singleton

When double-checked locking was first used, it was a bad idea because the JVM didn’t support this operation safely.

// Singleton with double-checked locking:
public class Singleton {
    private volatile static Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

The problem was that until Java 5.0, this usually worked but wasn’t guaranteed by the memory model. There was a simpler option which was safe and didn’t require explicit locking.

// suggested by Bill Pugh
public class Singleton {
    // Private constructor prevents instantiation from other classes
    private Singleton() { }

    /**
     * SingletonHolder is loaded on the first execution of Singleton.getInstance()
     * or the first access to SingletonHolder.INSTANCE, not before.
     */
    private static class SingletonHolder {
        public static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return SingletonHolder.INSTANCE;
    }
}

This was still verbose, but it worked and didn’t require an explicit lock, so it could be faster. In Java 5.0, when they fixed the memory model to handle double-checked locking safely, they also introduced enums, which gave you a much simpler solution. In the second edition of his book Effective Java, Joshua Bloch claims that “a single-element enum type is the best way to implement a singleton”. With an enum, the code looks like this.

public enum Singleton {
    INSTANCE;
}

This is lazy loaded, thread safe, without explicit locks, and much simpler.
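The enum declaration above is the whole pattern; using it requires no getInstance() boilerplate at all. A small hypothetical example with state and behavior:

```java
public enum Counter {
    INSTANCE;   // the one and only instance; initialized lazily and thread-safely by the JVM

    private int count;

    // Enum singletons can carry state and methods like any other class.
    public synchronized int increment() {
        return ++count;
    }

    public static void main(String[] args) {
        // Every reference to INSTANCE is the same object.
        System.out.println(Counter.INSTANCE == Counter.valueOf("INSTANCE")); // true
        System.out.println(Counter.INSTANCE.increment()); // 1
    }
}
```

As a bonus over the hand-rolled versions, enum singletons survive serialization and reflective attacks on the constructor for free.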
Reference: Java memes which refuse to die from our JCG partner Peter Lawrey at the Vanilla Java blog....

Bcrypt, Salt. It’s The Bare Minimum.

The other day I read this Ars Technica article and realized how tragic the situation is. And it is not this bad because of evil hackers. It’s bad because few people know how to handle one very common thing: authentication (signup and login). But it seems even cool companies like LinkedIn and Yahoo do it wrong (tons of passwords have leaked recently). Most of the problems described in the article are solved with bcrypt. And using salt is a must. Other options are also acceptable – PBKDF2 and probably SHA-512. Note that bcrypt is not a hash function; it’s an algorithm that is specifically designed for password storage. It has its own salt generation built in. Here are two Stack Exchange questions on the topic: this and this. Jeff Atwood has also written on the topic some time ago. What is salt? It’s a random string (a series of bits, to be precise, but for the purpose of password storage, let’s view it as a string) that is appended to each password before it is hashed. So “mypassword” may become “543abc7d9fab773fb2a0mypassword”. You then add the salt every time you need to check if the password is correct (i.e. salt+password should generate the same hash that is stored in the database). How does this help? First, rainbow tables (tables of precomputed hashes for character combinations) can’t be used. Rainbow tables are generated for shorter passwords, and a big salt makes the password huge. Brute force is still possible, as the attacker knows your salt, so he can just brute-force salt+(set of attempted passwords). Bcrypt, however, addresses brute force, because it is intentionally “slow”. So, use salt. Prefer bcrypt. And that’s not only if you have to be super-secure – that’s the absolute minimum for every website out there that stores passwords. And don’t say “my site is just a forum, what can happen if someone gets the passwords”. Users tend to reuse passwords, so their password for your stupid site may also be their email or Facebook password.
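For illustration, here is how the salt-then-slow-hash scheme looks with PBKDF2, one of the acceptable options mentioned above, using only JDK classes. This is a minimal sketch: the class name, the 16-byte salt, and the 65536/256 iteration-count and key-length values are example choices, not prescriptions.

```java
import java.security.SecureRandom;
import java.util.Arrays;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHasher {
    // Derive a hash from password + salt; intentionally slow to resist brute force.
    static byte[] hash(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 65536, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
                               .generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);   // a fresh random salt per user

        byte[] stored = hash("mypassword".toCharArray(), salt);

        // Login check: recompute with the stored salt and compare.
        byte[] attempt = hash("mypassword".toCharArray(), salt);
        System.out.println(Arrays.equals(stored, attempt)); // true
    }
}
```

In production you would store the salt and iteration count alongside the hash, and compare hashes with MessageDigest.isEqual rather than Arrays.equals to avoid timing leaks.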
So take this seriously, whatever your website is, because you are risking the security of your users outside your premises. If you think it’s hard to use bcrypt, then don’t use passwords at all. Use “Login with facebook/twitter”, OpenID (that is actually harder than using bcrypt) or another form of externalized authentication. Having used the word “minimum” a couple of times, I’ll proceed with a short list of things to consider in terms of web security that should be done in addition to the minimum requirement of using salt. If you are handling money, or some other very important stuff, you can’t afford to stay at the bare minimum:

use https everywhere. Insecure session cookies can be sniffed and the attacker can “steal” the user’s session.
one-time tokens – send short-lived tokens (codes) via SMS, or login links via email, that are used for authentication. That way you don’t even need passwords (you move the authentication complexity to the mobile network / the email provider).
encourage the use of passphrases, rather than passwords – short passwords are easier to brute-force, but long passwords are hard to remember. That’s why you could encourage your users to use a passphrase, like “dust in the wind” or “who let the dogs out”, which are easy to remember but hard to attack. (My signup page has an example of a subtle encouragement.)
require additional verification for highly sensitive actions, and don’t allow changing emails if the login was automatic (performed with a long-lived “remember-me” cookie).
lock accounts after consecutive failed logins – “brute force” should only be usable if the attacker gets hold of your database. It should not happen through your interface.
use certificates for authentication – public-key cryptography can be used to establish mutual trust between the user and the server – the user knows the server is the right one, and the server knows the user is not a random person that somehow obtained the password.
use hardware tokens – digital signatures are the same as the above option, but the certificates are stored on hardware devices and cannot be extracted from there, so only the owner of the physical device can authenticate.

Web security is a complex field. Hello-world examples must not be followed for real-world systems. Consider all the implications for your users outside your system. Bottom line: use bcrypt. Reference: Bcrypt, Salt. It’s The Bare Minimum. from our JCG partner Bozhidar Bozhanov at the Bozho’s tech blog....

Overqualified is Overdiagnosed

I’ve been inspired by comments on prior articles to discuss the sensitive topics of ‘overqualification’ and ageism. My Why You Didn’t Get The Job and Why You Didn’t Get The Interview posts were republished on a few sites, resulting in some active debates where at some point a participant states that the real reason they weren’t hired was that they are overqualified for all the jobs out there, or that they were victims of ageism. In my opinion and experience recruiting in the software engineering world, the term overqualified is used too widely by companies (and then inaccurately cited by rejected candidates), and claims of alleged ageism are often something else entirely. Before we begin, I acknowledge that companies want to hire cheaper labor when possible, and some shops care less about quality products than others. And for the record, I’m over 40. By saying you are overqualified for jobs, what are you really saying? “I am more skilled or more experienced than the job requires.” That feels kind of good, doesn’t it?

SPOUSE: How did the interview go?
JOB SEEKER: I didn’t get the job.
SPOUSE: Oh, I’m sorry. What happened?
JOB SEEKER: Unfortunately, it turns out my skills are simply too strong.

Of course rejection hurts, but to tell your spouse (and yourself) that you were turned down because you were too skilled or too experienced is much less bruising on the ego than the alternative. For companies looking to eliminate candidates, using the word overqualified may take some of the sting and fear of retribution out of the rejection. But is it true? Think about this scenario for a second. You are trying to hire a software developer and you estimate that someone with, say, five years of experience should be able to handle the duties effectively. A candidate is presented with fifteen years of experience who has all the attributes you are seeking. This person should theoretically perform the tasks quicker and even take on some additional workload.
Do you really think a company would not hire this person simply because he/she has those additional years of experience? I would argue that is rarely the case.

Question: Is ‘overqualified’ a code word used by managers/HR to mean other things?
Answer: ALMOST ALWAYS

What can overqualified actually mean? Listed in order from most likely to least likely, IMO:

Overpaid/over budget – If your experience > what is required, it generally becomes a problem when your salary requirements are above what is budgeted. It’s not that you are classified as overpaid in your current role, but that you would be overpaid for the level of responsibility at the new job. I list this as the most likely culprit because I often see companies initially reject a candidate as overqualified, then hire that same person because of a lack of less experienced quality talent.

Stagnant – Candidates who have worked for many years as a developer in a technically stagnant and regulated environment will often not thrive in less regulated, more technically diverse firms. The conventional wisdom, right or wrong, is that you can’t release the zoo lions back into the jungle once they’ve been tamed.

‘Overskilled’ – If your skills > what is necessary for the job, an employer may fear that the lack of challenges provided will bore you into looking for more interesting work in the future. Hiring a tech lead to do bug fixes could lead to a short stint. There is emerging evidence that skilled workers do not exit less challenging jobs quickly or in high numbers, but hiring managers are not quite ready to abandon the traditional line of thinking.

Threatening – If your experience > those conducting the interviews, there could be some fear that you could be a competitor for future opportunities for promotion. If a start-up is yet to hire a CTO, the highest geek on that firm’s food chain may be jockeying for the role.
This may sound a bit like a paranoid conspiracy theory, but I genuinely believe it is prevalent enough to mention.

Too old – Ageism is a real problem, but in my experience in the software world, ageism is also widely overdiagnosed by candidates who think the problem is their age when in actuality it is their work history. Most of the self-diagnosed claims of ageism that I hear are from candidates who spent perhaps 20+ years working for the same company and have not focused on keeping their skills up to date (see Stagnant above). I can’t say that I’ve ever heard a claim of ageism from a candidate who has moved around in their career and stayed current with technology. The problem often isn’t age, it is relevance.

Some of the best and most accomplished/successful software engineering professionals that I know are over 50, which is older than some of the candidates I hear claiming possible ageism. One trait that the overwhelming majority of these engineers have in common is that they didn’t stay in any one place long enough to stagnate. I don’t think that is a coincidence.

If you are an active job seeker who continually hears that you are overqualified, what can you do to improve your standing?

Rethink – Try to investigate which of the meanings of overqualified you are hearing most often. Is your compensation in line with what companies are paying for your set of qualifications? Do you present yourself in interviews as someone who may become easily bored when the work is less challenging? Are you making it clear in interviews that you want the job, and do you explain why you want it?

Retool – Make sure your skills are relevant and being sought by companies. Invest time in learning an emerging technology or developing some niche specialty that isn’t already flooded.
Remarket – Write down the top reasons you think a company should hire you, and then check to see whether those reasons are represented in your job search materials (resume, email application, cover letters). Find out what was effective for your peers in their job searches and try to implement new self-promotion tactics.

Reboot and refresh – Take a new look at your options beyond the traditional career paths. Have you considered consulting or contracting roles where your guidance and mentoring skills could be valued for temporary periods? Are there emerging markets that interest you?

Terms like ‘overqualified’ and ‘not a fit’ are unfortunately the laziest, easiest, and safest ways that companies can reject you for a position, and they almost always mean something else. Discovering the real reason you were passed over is necessary to make the proper adjustments so you can get fewer rejections and more offers.

Reference: Overqualified is Overdiagnosed from our JCG partner Dave Fecak at the Job Tips For Geeks blog.

Rewrite to the edge – getting the most out of it! On GlassFish!

A great topic in modern application development is URL rewriting. Since the introduction of JavaServer Faces and the new lightweight programming model in Java EE 6, you have been struggling to get pretty, simple, bookmarkable URLs. PrettyFaces has been around for some time, and even though it could be called mature at version 3.3.3, I wasn’t convinced, mainly because I had to configure it in XML. If you have ever done a JSF project, you know that this is something you do on top later on. Or never. With the last option being the one I have seen a lot. Rewrite is going to change that: programmatic, easy to use, and highly customizable. Exactly what I was looking for.

Getting Started

Nothing is as easy as getting started with stuff coming from one of the Red Hat guys. Fire up NetBeans, create a new Maven-based webapp, add JSF and PrimeFaces to the mix, and run it on GlassFish. The first step in adding rewriting magic to your application is to add the rewrite dependency to your project:

<dependency>
    <groupId>org.ocpsoft.rewrite</groupId>
    <artifactId>rewrite-servlet</artifactId>
    <version>1.1.0.Final</version>
</dependency>

That isn’t enough: since I am going to use it together with JSF, you also need the JSF integration:

<dependency>
    <groupId>org.ocpsoft.rewrite</groupId>
    <artifactId>rewrite-integration-faces</artifactId>
    <version>1.1.0.Final</version>
</dependency>

Next, implement your own ConfigurationProvider. This is the central piece where most of the magic happens. Let’s call it TricksProvider for now; it extends the abstract HttpConfigurationProvider. A simple first version looks like this:

public class TricksProvider extends HttpConfigurationProvider {

    @Override
    public int priority() {
        return 10;
    }

    @Override
    public Configuration getConfiguration(final ServletContext context) {
        return ConfigurationBuilder.begin()
            .addRule(Join.path("/").to("/welcomePrimefaces.xhtml"));
    }
}

Now you have to register your ConfigurationProvider.
You do this by adding a simple text file named org.ocpsoft.rewrite.config.ConfigurationProvider to your application’s /META-INF/services/ folder. Add the fully qualified name of your ConfigurationProvider implementation to it, fire up your application, and you are done.

The Rewriting Basics

While copying the above provider you implicitly added your first rewriting rule. By requesting http://host:8080/yourapp/ you get directly forwarded to the PrimeFaces welcome page generated by NetBeans. All rules are based on the same principle: every single rule consists of a condition and an operation. Something like “If X happens, do Y”. Rewrite knows two different kinds of rules: some preconfigured ones (Join) starting with addRule(), and a fluent interface starting with defineRule(). This is a bit confusing because the next major release will deprecate defineRule() and rename it to addRule(), so most of the examples you find (especially the test cases in the latest trunk) do not work with 1.1.0.Final.

Rewrite knows about two different directions: inbound and outbound. Inbound works much like every rewriting engine you know (e.g. mod_rewrite): a request arrives and is forwarded or redirected to the resources defined in your rules. The outbound direction does a little less. It basically hooks into the encodeURL() method of the HttpServletResponse and rewrites the links you have in your pages (if they get rendered with the help of encodeURL at all). JSF does this out of the box; if you are thinking of using it with JSPs, you have to make sure to call it yourself.

Forwarding .html to .xhtml with Some Magic

Let’s look at some stuff you could do with rewrite.
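The first rule below constrains path names with the regular expression [a-zA-Z/]+. As a quick plain-JDK sketch of which request paths such a constraint would accept (class and method names here are illustrative, not part of the Rewrite API):

```java
import java.util.regex.Pattern;

public class PathConstraintDemo {

    // The same expression the forwarding rule uses to constrain {name}
    private static final Pattern NAME = Pattern.compile("[a-zA-Z/]+");

    // True if a request for <name>.html would satisfy the constraint
    static boolean wouldForward(String path) {
        if (!path.endsWith(".html")) {
            return false;
        }
        String name = path.substring(0, path.length() - ".html".length());
        return NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(wouldForward("something.html"));    // true
        System.out.println(wouldForward("folder/page.html"));  // true
        System.out.println(wouldForward("something123.html")); // false
    }
}
```

Digits fail the pattern, which is why a path like something123.html is left untouched by the rule.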
First we add the following to the TricksProvider:

.defineRule()
    .when(Direction.isInbound()
        .and(Path.matches("{name}.html").where("name").matches("[a-zA-Z/]+")))
    .perform(Forward.to("{name}.xhtml"));

This rule looks at inbound requests, checks for all Path matches {name}.html that conform to the regular expression pattern [a-zA-Z/]+, and forwards those to {name}.xhtml files. With this rule in place, all requests to http://host:8080/yourapp/something.html end up being forwarded to something.xhtml. Now your users will no longer know that you are using fancy JSF stuff underneath and will believe you are working with plain HTML :) If a URL that doesn’t match the regular expression is requested, for example something like http://host:8080/yourapp/something123.html, it simply isn’t forwarded, and if something123.html isn’t present in your application you will end up receiving a 404 error.

Rewriting Outbound Links

The other way round, you could also add the following rule:

.defineRule()
    .when(Path.matches("test.xhtml")
        .and(Direction.isOutbound()))
    .perform(Substitute.with("test.html"));

You can imagine what this is doing, right? If you have a facelet which contains something like this:

<h:outputLink value="test.xhtml">Normal Test</h:outputLink>

the link rendered to the user will be rewritten to test.html. This is the most basic action for outbound links you will ever need. Most of the magic happens with inbound links, which is not a big surprise looking at the very limited reach of the encodeURL() hook.

The OutputBuffer

The most astonishing stuff in rewrite is called OutputBuffer, at least in the release we are working with at the moment (it is going to be renamed in 2.0), but for now let’s simply look at what you can do. The OutputBuffer is your hook into the response: whatever you would like to do with the response before it actually arrives at your client’s browser can be done here. Thinking about transforming the markup? Converting CSS?
Or even GZIP compression? Great, that is exactly what you could do. Let’s implement a simple ZipOutputBuffer:

public class ZipOutputBuffer implements OutputBuffer {

    private final static Logger LOGGER = Logger.getLogger(ZipOutputBuffer.class.getName());

    @Override
    public InputStream execute(InputStream input) {
        String contents = Streams.toString(input);
        LOGGER.log(Level.FINER, "Content {0} Length {1}", new Object[]{contents, contents.getBytes().length});
        byte[] compressed = compress(contents);
        LOGGER.log(Level.FINER, "Length: {0}", compressed.length);
        return new ByteArrayInputStream(compressed);
    }

    public static byte[] compress(String string) {
        ByteArrayOutputStream os = new ByteArrayOutputStream(string.length());
        byte[] compressed = null;
        try {
            try (GZIPOutputStream gos = new GZIPOutputStream(os)) {
                gos.write(string.getBytes());
            }
            compressed = os.toByteArray();
        } catch (IOException iox) {
            LOGGER.log(Level.SEVERE, "Compression Failed: ", iox);
        }
        return compressed;
    }
}

As you can see, I am messing around with some streams and use java.util.zip.GZIPOutputStream to shrink the stream received in this method. Next we have to add the relevant rule to the TricksProvider:

.defineRule()
    .when(Path.matches("/gziptest").and(Direction.isInbound()))
    .perform(Forward.to("test.xhtml")
        .and(Response.withOutputBufferedBy(new ZipOutputBuffer())
        .and(Response.addHeader("Content-Encoding", "gzip"))
        .and(Response.addHeader("Content-Type", "text/html"))));

This is an inbound rule (we are not willing to rewrite links in pages here, so it has to be inbound) that adds the ZipOutputBuffer to the response. Also take care to add both additional response headers, unless you want to see your browser complaining about mixed-up content :) That is it. The request http://host:8080/yourapp/gziptest now delivers test.xhtml with GZIP compression. That is 2.6 KB vs. 1.23 KB, less than half the size! It’s not very convenient to work with streams and byte[].
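Since it is easy to get the byte[] handling wrong, the compression logic above can be sanity-checked with a round trip through java.util.zip.GZIPInputStream. This is a standalone sketch, independent of the Rewrite API; the compress method mirrors the one in ZipOutputBuffer, minus the logging:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipRoundTrip {

    // Same idea as ZipOutputBuffer.compress(): gzip a string into a byte[]
    static byte[] compress(String s) throws IOException {
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        try (GZIPOutputStream gos = new GZIPOutputStream(os)) {
            gos.write(s.getBytes("UTF-8"));
        }
        return os.toByteArray();
    }

    // Inflate the bytes again to verify nothing was lost
    static String decompress(byte[] bytes) throws IOException {
        try (GZIPInputStream gis = new GZIPInputStream(new ByteArrayInputStream(bytes))) {
            ByteArrayOutputStream os = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gis.read(buf)) != -1) {
                os.write(buf, 0, n);
            }
            return os.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws IOException {
        String page = "<html><body>hello rewrite</body></html>";
        byte[] zipped = compress(page);
        System.out.println(page.equals(decompress(zipped))); // prints true
    }
}
```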
And I am not sure if this will work with larger page sizes in terms of memory fragmentation, but it is an easy way out if you don’t have a compression filter in place or only need to compress single parts of your application.

Enhance Security with Rewrite

But that is not all you could do: you could also enhance security with rewrite. Lincoln has a great post up about securing your application with rewrite, and there are plenty of examples around of how to use this. I came up with a single use case where I didn’t want to use the welcome-file features and preferred to dispatch users individually. While doing this I would also inspect their paths and check whether the stuff they are entering is malicious or not. You could either do it with the .matches() condition or with a custom constraint. Add the following to the TricksProvider:

Constraint<String> selectedCharacters = new Constraint<String>() {
    @Override
    public boolean isSatisfiedBy(Rewrite event, EvaluationContext context, String value) {
        return value.matches("[a-zA-Z/]+");
    }
};

And define the following rule:

.defineRule()
    .when(Direction.isInbound()
        .and(Path.matches("{path}").where("path").matches("^(.+)/$")
        .and(Path.captureIn("checkChar").where("checkChar").constrainedBy(selectedCharacters))))
    .perform(Redirect.permanent(context.getContextPath() + "{path}index.html"));

Another inbound modification: it checks whether the path has a folder pattern and captures it in a variable that is checked against the custom constraint. Great! Now you have a safe and easy forwarding mechanism in place. All http://host:8080/yourapp/folder/ requests are now redirected to http://host:8080/yourapp/folder/index.html. If you look at the other rules from above, you see that .html is forwarded to .xhtml … and you are done!

Bottom Line

I like working with rewrite a lot. It feels easier than configuring the XML files of PrettyFaces, and I truly enjoyed the support of Lincoln and Christian during my first steps with it.
I am curious to see what 2.0 will come up with, and I hope I get some more debug output for the rules configuration, just to see what is happening. The default is nothing, and it can be very tricky to find the right combination of conditions to get a working rule. Looking for the complete sources? Find them on GitHub. Happy to read about your experiences.

Where Is the GlassFish Part?

Oh, yeah, I mentioned it in the headline, right? That should be more like a default: I was running everything on the latest GlassFish, so you can be sure that this is working. And NetBeans is at 7.2 at the moment; you should give it a try if you haven’t. I didn’t come across a single issue related to GlassFish, and I am very pleased to stress that here. Great work! One last remark: before you go implementing the OutputBuffer like crazy, take a look at what your favorite app server already has in stock. GlassFish already knows about GZIP compression, and it can simply be switched on! It might be a good idea to think twice before implementing it yourself here.

Reference: Rewrite to the edge – getting the most out of it! On GlassFish! from our JCG partner Markus Eisele at the Enterprise Software Development with Java blog.
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.