
Start Android Development in 5 Minutes

Android is the most popular mobile OS today, leading the market with a 46.9% share, ahead of iOS at 18.8%. Besides, over 20% of computers worldwide are mobile devices, so more than 10% of the people in the world use Android and software built on it. For this reason, Android application development is an important topic for developers and software engineers. This post describes a fast (5 minutes, excluding tool download time) and simple start of an Android project in the Eclipse IDE.

1. JDK and Eclipse IDE: Download and install the appropriate JDK for your computer and OS (JDK 6 or 7 is preferred): http://www.oracle.com/technetwork/java/javase/downloads/index.html Then download the latest Eclipse distribution and unzip it to your computer: http://www.eclipse.org/downloads/

2. Android SDK: Download and install the Android SDK: http://developer.android.com/sdk/index.html Run the Android SDK Manager and install some API levels (e.g. 8, 14, 16, ...). Android distributions ship with various API levels, and your application's API level must be compatible with your device. This step may also be performed with the Eclipse ADT plug-in's Android toolbar buttons. For each API level, 'SDK Platform' is required; 'SDK Tools' and 'SDK Platform Tools' under the 'Tools' menu and 'Android Support Library' under the 'Extras' menu should be installed for general use. Don't forget to set your Android SDK directory in Eclipse under Window –> Preferences.

3. Eclipse ADT Plugin: Start Eclipse. Click Help –> Install New Software, paste the address below and click OK: https://dl-ssl.google.com/android/eclipse/ Select 'Android Development Tools' under 'Developer Tools', click 'Next' a few times and finish the installation. You will then see the Android SDK button (which installs the API levels described in step 2) and the Android emulator button on the top panel, and you can start Android projects from now on.

4. Android Emulator: The Android SDK includes a useful emulator for testing.
After installing the ADT plug-in, you can start the emulator with the 'Android Virtual Device Manager' button in the toolbar. There you can add one or more Android virtual devices with different API levels and system configurations. Detailed information about emulator usage and its command line parameters is available here: http://developer.android.com/tools/help/emulator.html

5. Creating a Project: Select File –> New –> Project –> Android Application Project. You can also start with an Android Sample Project for practice. You must select 'Build SDK' and 'Minimum Required SDK' while creating the project. After that, right-click your project and select 'Run'; your project will run on the selected emulator. After this short and simple start, you may want to look at these: http://developer.android.com/training/basics/firstapp/index.html http://www.vogella.com/articles/Android/article.html http://www.coreservlets.com/android-tutorial/ http://java.dzone.com/articles/10-attractive-android http://www.javacodegeeks.com/2010/10/android-full-application-tutorial.html Don't forget to share! Reference: Start Android Development in 5 Minutes from our JCG partner Cagdas Basaraner at the CodeBuild blog....

MapReduce: Working Through Data-Intensive Text Processing – Local Aggregation Part II

This post continues the series on implementing algorithms found in the Data-Intensive Processing with MapReduce book. Part one can be found here. In the previous post, we discussed using the technique of local aggregation as a means of reducing the amount of data shuffled and transferred across the network. Reducing the amount of data transferred is one of the top ways to improve the efficiency of a MapReduce job. A word-count MapReduce job was used to demonstrate local aggregation. Since the results only require a total count, we could re-use the same reducer for our combiner, as changing the order or grouping of the addends does not affect the sum. But what if you wanted an average? The same approach would not work, because an average of averages is not, in general, equal to the average of the original set of numbers. With a little insight, though, we can still use local aggregation. For these examples we will use a sample of the NCDC weather dataset used in the Hadoop: The Definitive Guide book and calculate the average temperature for each month of the year 1901. The averages algorithm for the combiner and the in-mapper combining option can be found in chapter 3.1.3 of Data-Intensive Processing with MapReduce.

One Size Does Not Fit All

Last time we described two approaches for reducing data in a MapReduce job: Hadoop Combiners and the in-mapper combining approach. Combiners are considered an optimization by the Hadoop framework, and there are no guarantees on how many times they will be called, if at all. As a result, mappers must emit data in the form expected by the reducers, so that if combiners aren't involved the final result is unchanged. To accommodate averages, we need to go back to the mapper and change its output.

Mapper Changes

In the word-count example, the non-optimized mapper simply emitted the word and a count of 1.
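That average-of-averages pitfall is easy to demonstrate in plain Java (the class name and the numbers here are made up purely for illustration):

```java
public class AverageOfAverages {
    static double avg(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    public static void main(String[] args) {
        // Two partitions of the same data set
        double[] partA = {1, 2};       // average 1.5
        double[] partB = {3, 4, 5};    // average 4.0

        double avgOfAvgs = (avg(partA) + avg(partB)) / 2;    // 2.75
        double trueAvg = (1 + 2 + 3 + 4 + 5) / 5.0;          // 3.0

        // The two disagree because the partitions have different sizes,
        // so each partial average would need to carry its count with it.
        System.out.println(avgOfAvgs + " != " + trueAvg);
    }
}
```

Carrying the observation count along with each partial sum, as the rest of this post does, is exactly what restores correctness.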
The combiner and the in-mapper combining mapper optimized this output by keeping each word as a key in a hash map with the total count as the value; each time a word was seen, the count was incremented by 1. With this setup, if the combiner was not called, the reducer would receive the word as a key and a long list of 1s to add together, resulting in the same output (the in-mapper combining mapper avoided this issue entirely, because it is guaranteed to combine results as part of the mapper code). To compute an average, we will have our base mapper emit a string key (the year and month of the weather observation concatenated together) and a custom writable object called TemperatureAveragingPair. The TemperatureAveragingPair object will contain two numbers (IntWritables): the temperature taken and a count of one. We will take the MaximumTemperatureMapper from Hadoop: The Definitive Guide and use it as inspiration for creating an AverageTemperatureMapper:

public class AverageTemperatureMapper extends Mapper<LongWritable, Text, Text, TemperatureAveragingPair> {
    // sample line of weather data:
    // 0029029070999991901010106004+64333+023450FM-12+000599999V0202701N015919999999N0000001N9-00781+99999102001ADDGF10899199999999999

    private Text outText = new Text();
    private TemperatureAveragingPair pair = new TemperatureAveragingPair();
    private static final int MISSING = 9999;

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        String yearMonth = line.substring(15, 21);

        int tempStartPosition = 87;
        if (line.charAt(tempStartPosition) == '+') {
            tempStartPosition += 1;
        }
        int temp = Integer.parseInt(line.substring(tempStartPosition, 92));

        if (temp != MISSING) {
            outText.set(yearMonth);
            pair.set(temp, 1);
            context.write(outText, pair);
        }
    }
}

By having the mapper output a key and a TemperatureAveragingPair object, our MapReduce program is guaranteed to have the correct results regardless of whether the
combiner is called.

Combiner

We need to reduce the amount of data sent, so we will sum the temperatures and sum the counts, storing them separately. By doing so we reduce the data sent but preserve the format needed for calculating correct averages. If and when the combiner is called, it will take all the TemperatureAveragingPair objects passed in and emit a single TemperatureAveragingPair object for the same key, containing the summed temperatures and counts. Here is the code for the combiner:

public class AverageTemperatureCombiner extends Reducer<Text, TemperatureAveragingPair, Text, TemperatureAveragingPair> {

    private TemperatureAveragingPair pair = new TemperatureAveragingPair();

    @Override
    protected void reduce(Text key, Iterable<TemperatureAveragingPair> values, Context context) throws IOException, InterruptedException {
        int temp = 0;
        int count = 0;
        for (TemperatureAveragingPair value : values) {
            temp += value.getTemp().get();
            count += value.getCount().get();
        }
        pair.set(temp, count);
        context.write(key, pair);
    }
}

But we are really interested in being guaranteed that we have reduced the amount of data sent to the reducers, so next we will look at how to achieve that.

In-Mapper Combining Averages

As in the word-count example, the in-mapper combining mapper for calculating averages will use a hash map with the concatenated year+month as the key and a TemperatureAveragingPair as the value. Each time we see the same year+month combination, we take the pair object out of the map, add the temperature and increase the count by one.
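Summing temperatures and counts separately, whether done by the combiner, by the in-mapper map, or not at all, always preserves the final average. Here is a quick standalone sketch of that invariant (plain Java; Pair is a hypothetical stand-in for TemperatureAveragingPair):

```java
import java.util.Arrays;
import java.util.List;

public class PairSumDemo {
    // Minimal stand-in for TemperatureAveragingPair:
    // a (temperature sum, observation count) pair.
    static final class Pair {
        final int temp;
        final int count;
        Pair(int temp, int count) { this.temp = temp; this.count = count; }
        Pair plus(Pair other) { return new Pair(temp + other.temp, count + other.count); }
    }

    static int average(List<Pair> pairs) {
        Pair total = new Pair(0, 0);
        for (Pair p : pairs) total = total.plus(p);
        return total.temp / total.count; // same integer division the reducer does
    }

    public static void main(String[] args) {
        // Four raw observations, each emitted as (temp, 1)
        List<Pair> raw = Arrays.asList(new Pair(-10, 1), new Pair(-30, 1),
                                       new Pair(20, 1), new Pair(40, 1));
        // The same observations after a combiner merged the first three
        List<Pair> combined = Arrays.asList(new Pair(-20, 3), new Pair(40, 1));

        // Both routes yield the same average: 20 / 4 = 5
        System.out.println(average(raw) + " == " + average(combined));
    }
}
```

Because adding (sum, count) pairs is associative and commutative, any partial grouping produces the same totals, which is exactly why the combiner may run zero or more times without changing the result.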
Once the cleanup method is called, we emit all pairs with their respective keys:

public class AverageTemperatureCombiningMapper extends Mapper<LongWritable, Text, Text, TemperatureAveragingPair> {
    // sample line of weather data:
    // 0029029070999991901010106004+64333+023450FM-12+000599999V0202701N015919999999N0000001N9-00781+99999102001ADDGF10899199999999999

    private static final int MISSING = 9999;
    private Map<String, TemperatureAveragingPair> pairMap = new HashMap<String, TemperatureAveragingPair>();

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        String yearMonth = line.substring(15, 21);

        int tempStartPosition = 87;
        if (line.charAt(tempStartPosition) == '+') {
            tempStartPosition += 1;
        }
        int temp = Integer.parseInt(line.substring(tempStartPosition, 92));

        if (temp != MISSING) {
            TemperatureAveragingPair pair = pairMap.get(yearMonth);
            if (pair == null) {
                pair = new TemperatureAveragingPair();
                pairMap.put(yearMonth, pair);
            }
            int temps = pair.getTemp().get() + temp;
            int count = pair.getCount().get() + 1;
            pair.set(temps, count);
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        Set<String> keys = pairMap.keySet();
        Text keyText = new Text();
        for (String key : keys) {
            keyText.set(key);
            context.write(keyText, pairMap.get(key));
        }
    }
}

By following the same pattern of keeping track of data between map calls, we achieve reliable data reduction by implementing an in-mapper combining strategy. The same caveats apply for keeping state across all calls to the mapper, but given the gains in processing efficiency, this approach merits consideration.

Reducer

At this point, writing our reducer is easy: take the list of pairs for each key, sum all the temperatures and counts, then divide the sum of the temperatures by the sum of the counts.
public class AverageTemperatureReducer extends Reducer<Text, TemperatureAveragingPair, Text, IntWritable> {

    private IntWritable average = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<TemperatureAveragingPair> values, Context context) throws IOException, InterruptedException {
        int temp = 0;
        int count = 0;
        for (TemperatureAveragingPair pair : values) {
            temp += pair.getTemp().get();
            count += pair.getCount().get();
        }
        average.set(temp / count);
        context.write(key, average);
    }
}

Results

The results are predictable, with the combiner and in-mapper-combining options showing substantially reduced data output.

Non-Optimized Mapper Option:
12/10/10 23:05:28 INFO mapred.JobClient: Reduce input groups=12
12/10/10 23:05:28 INFO mapred.JobClient: Combine output records=0
12/10/10 23:05:28 INFO mapred.JobClient: Map input records=6565
12/10/10 23:05:28 INFO mapred.JobClient: Reduce shuffle bytes=111594
12/10/10 23:05:28 INFO mapred.JobClient: Reduce output records=12
12/10/10 23:05:28 INFO mapred.JobClient: Spilled Records=13128
12/10/10 23:05:28 INFO mapred.JobClient: Map output bytes=98460
12/10/10 23:05:28 INFO mapred.JobClient: Total committed heap usage (bytes)=269619200
12/10/10 23:05:28 INFO mapred.JobClient: Combine input records=0
12/10/10 23:05:28 INFO mapred.JobClient: Map output records=6564
12/10/10 23:05:28 INFO mapred.JobClient: SPLIT_RAW_BYTES=108
12/10/10 23:05:28 INFO mapred.JobClient: Reduce input records=6564

Combiner Option:
12/10/10 23:07:19 INFO mapred.JobClient: Reduce input groups=12
12/10/10 23:07:19 INFO mapred.JobClient: Combine output records=12
12/10/10 23:07:19 INFO mapred.JobClient: Map input records=6565
12/10/10 23:07:19 INFO mapred.JobClient: Reduce shuffle bytes=210
12/10/10 23:07:19 INFO mapred.JobClient: Reduce output records=12
12/10/10 23:07:19 INFO mapred.JobClient: Spilled Records=24
12/10/10 23:07:19 INFO mapred.JobClient: Map output bytes=98460
12/10/10 23:07:19 INFO mapred.JobClient: Total committed heap usage (bytes)=269619200
12/10/10 23:07:19 INFO mapred.JobClient: Combine input records=6564
12/10/10 23:07:19 INFO mapred.JobClient: Map output records=6564
12/10/10 23:07:19 INFO mapred.JobClient: SPLIT_RAW_BYTES=108
12/10/10 23:07:19 INFO mapred.JobClient: Reduce input records=12

In-Mapper-Combining Option:
12/10/10 23:09:09 INFO mapred.JobClient: Reduce input groups=12
12/10/10 23:09:09 INFO mapred.JobClient: Combine output records=0
12/10/10 23:09:09 INFO mapred.JobClient: Map input records=6565
12/10/10 23:09:09 INFO mapred.JobClient: Reduce shuffle bytes=210
12/10/10 23:09:09 INFO mapred.JobClient: Reduce output records=12
12/10/10 23:09:09 INFO mapred.JobClient: Spilled Records=24
12/10/10 23:09:09 INFO mapred.JobClient: Map output bytes=180
12/10/10 23:09:09 INFO mapred.JobClient: Total committed heap usage (bytes)=269619200
12/10/10 23:09:09 INFO mapred.JobClient: Combine input records=0
12/10/10 23:09:09 INFO mapred.JobClient: Map output records=12
12/10/10 23:09:09 INFO mapred.JobClient: SPLIT_RAW_BYTES=108
12/10/10 23:09:09 INFO mapred.JobClient: Reduce input records=12

Calculated Results (NOTE: the temperatures in the sample file are in Celsius * 10). The non-optimized, combiner and in-mapper-combining options all produce identical output:

190101 -25
190102 -91
190103 -49
190104 22
190105 76
190106 146
190107 192
190108 170
190109 114
190110 86
190111 -16
190112 -77

Conclusion

We have covered local aggregation both for the simple case, where the reducer could be reused as a combiner, and for a more complicated case requiring some insight into how to structure the data while still gaining the benefits of locally aggregating data for increased processing efficiency.

Further Reading

Data-Intensive Processing with MapReduce by Jimmy Lin and Chris Dyer
Hadoop:
The Definitive Guide by Tom White
Source code from the blog
Hadoop API
MRUnit for unit testing Apache Hadoop MapReduce jobs
Project Gutenberg, a great source of books in plain text format, great for testing Hadoop jobs locally.

Reference: Working Through Data-Intensive Text Processing with MapReduce – Local Aggregation Part II from our JCG partner Bill Bejeck at the Random Thoughts On Coding blog....

CometD: Facebook similar chat for your Java web application

Chatting is easy, just like eating a piece of cake or drinking a hot coffee. But have you ever thought about developing a chat program yourself? You know it is not as easy as chatting. Still, if you are a developer and you read to the end of this article, you may give developing a chat application a try and allow your users to chat via your web application. I had to implement a chat feature for my own web application. As everyone does, I started searching the internet and found IRC. When I read and searched more about IRC, I realized that finding a web-based client for it was difficult, and I wanted a more customizable web client working similarly to Facebook's chat. At last, and luckily, I found CometD. With it I was able to implement a chat application with customizable chat windows opening in the browser, very similar to Facebook's. It works in almost all modern browsers. This article explains, step by step, how to implement a chat application from scratch and how to integrate it into your existing Java-based web application. Remember, your web application must be a Java-based one. You need to download CometD from its official web site. It has all the dependencies required to implement the chat application except two JavaScript libraries. I have written those two JavaScript libraries myself: one creates dynamic chat windows like Facebook's, and the other handles the CometD chat functionality in a generic way. If you can manage this yourself, you don't need to use those two libraries; the CometD documentation provides good details. But I will go ahead with the tutorial using them, and I recommend using them first and then customizing as you need. I will share the sample application with you, so you can deploy it on your localhost and see how it works.

1. Adding required jar files.
If you use Maven to build your project, add the following dependencies to your pom.xml file:

<dependencies>
    <dependency>
        <groupId>org.cometd.java</groupId>
        <artifactId>bayeux-api</artifactId>
        <version>2.5.0</version>
    </dependency>
    <dependency>
        <groupId>org.cometd.java</groupId>
        <artifactId>cometd-java-server</artifactId>
        <version>2.5.0</version>
    </dependency>
    <dependency>
        <groupId>org.cometd.java</groupId>
        <artifactId>cometd-websocket-jetty</artifactId>
        <version>2.5.0</version>
        <exclusions>
            <exclusion>
                <groupId>org.cometd.java</groupId>
                <artifactId>cometd-java-client</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-simple</artifactId>
        <version>1.6.6</version>
    </dependency>
    <dependency>
        <groupId>org.cometd.java</groupId>
        <artifactId>cometd-java-annotations</artifactId>
        <version>2.5.0</version>
    </dependency>
</dependencies>

If you are not using Maven to build your project, just copy the following .jar files into the /WEB-INF/lib folder from your CometD download bundle. You can find these .jar files inside /cometd-demo/target/cometd-demo-2.5.0.war:

bayeux-api-2.5.0.jar
cometd-java-annotations-2.5.0.jar
cometd-java-common-2.5.0.jar
cometd-java-server-2.5.0.jar
cometd-websocket-jetty-2.5.0.jar
javax.inject-1.jar
jetty-continuation-7.6.7.v20120910.jar
jetty-http-7.6.7.v20120910.jar
jetty-io-7.6.7.v20120910.jar
jetty-jmx-7.6.7.v20120910.jar
jetty-util-7.6.7.v20120910.jar
jetty-websocket-7.6.7.v20120910.jar
jsr250-api-1.0.jar
slf4j-api-1.6.6.jar
slf4j-simple-1.6.6.jar

2. Adding required JavaScript files.

You need to link the following JavaScript files:

cometd.js
AckExtension.js
ReloadExtension.js
jquery-1.8.2.js
jquery.cookie.js
jquery.cometd.js
jquery.cometd-reload.js
chat.window.js
comet.chat.js

The 'chat.window.js' and 'comet.chat.js' files are my own two JavaScript libraries, which do not come with the CometD distribution. If you are following this tutorial exactly, you have to link those two libraries as well.
The provided sample application includes these two JavaScript libraries.

3. Writing the chat service class.

/**
 * CometD chat service.
 *
 * @author Semika Siriwardana
 */
package com.semika.cometd;

import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import javax.inject.Inject;

import org.cometd.annotation.Configure;
import org.cometd.annotation.Listener;
import org.cometd.annotation.Service;
import org.cometd.annotation.Session;
import org.cometd.bayeux.client.ClientSessionChannel;
import org.cometd.bayeux.server.BayeuxServer;
import org.cometd.bayeux.server.ConfigurableServerChannel;
import org.cometd.bayeux.server.ServerMessage;
import org.cometd.bayeux.server.ServerSession;
import org.cometd.server.authorizer.GrantAuthorizer;
import org.cometd.server.filter.DataFilter;
import org.cometd.server.filter.DataFilterMessageListener;
import org.cometd.server.filter.JSONDataFilter;
import org.cometd.server.filter.NoMarkupFilter;

@Service("chat")
public class ChatService {

    private final ConcurrentMap<String, Map<String, String>> _members = new ConcurrentHashMap<String, Map<String, String>>();

    @Inject
    private BayeuxServer _bayeux;

    @Session
    private ServerSession _session;

    @Configure({"/chat/**", "/members/**"})
    protected void configureChatStarStar(ConfigurableServerChannel channel) {
        DataFilterMessageListener noMarkup = new DataFilterMessageListener(new NoMarkupFilter(), new BadWordFilter());
        channel.addListener(noMarkup);
        channel.addAuthorizer(GrantAuthorizer.GRANT_ALL);
    }

    @Configure("/service/members")
    protected void configureMembers(ConfigurableServerChannel channel) {
        channel.addAuthorizer(GrantAuthorizer.GRANT_PUBLISH);
        channel.setPersistent(true);
    }

    @Listener("/service/members")
    public void handleMembership(ServerSession client, ServerMessage message) {
        Map<String, Object> data = message.getDataAsMap();
        final String room = ((String) data.get("room")).substring("/chat/".length());
        Map<String, String> roomMembers = _members.get(room);
        if (roomMembers == null) {
            Map<String, String> newRoom = new ConcurrentHashMap<String, String>();
            roomMembers = _members.putIfAbsent(room, newRoom);
            if (roomMembers == null) roomMembers = newRoom;
        }
        final Map<String, String> members = roomMembers;
        String userName = (String) data.get("user");
        members.put(userName, client.getId());
        client.addListener(new ServerSession.RemoveListener() {
            public void removed(ServerSession session, boolean timeout) {
                members.values().remove(session.getId());
                broadcastMembers(room, members.keySet());
            }
        });
        broadcastMembers(room, members.keySet());
    }

    private void broadcastMembers(String room, Set<String> members) {
        // Broadcast the new members list
        ClientSessionChannel channel = _session.getLocalSession().getChannel("/members/" + room);
        channel.publish(members);
    }

    @Configure("/service/privatechat")
    protected void configurePrivateChat(ConfigurableServerChannel channel) {
        DataFilterMessageListener noMarkup = new DataFilterMessageListener(new NoMarkupFilter(), new BadWordFilter());
        channel.setPersistent(true);
        channel.addListener(noMarkup);
        channel.addAuthorizer(GrantAuthorizer.GRANT_PUBLISH);
    }

    @Listener("/service/privatechat")
    protected void privateChat(ServerSession client, ServerMessage message) {
        Map<String, Object> data = message.getDataAsMap();
        String room = ((String) data.get("room")).substring("/chat/".length());
        Map<String, String> membersMap = _members.get(room);
        if (membersMap == null) {
            Map<String, String> newRoom = new ConcurrentHashMap<String, String>();
            membersMap = _members.putIfAbsent(room, newRoom);
            if (membersMap == null) membersMap = newRoom;
        }
        String peerName = (String) data.get("peer");
        String peerId = membersMap.get(peerName);
        if (peerId != null) {
            ServerSession peer = _bayeux.getSession(peerId);
            if (peer != null) {
                Map<String, Object> chat = new HashMap<String, Object>();
                String text = (String) data.get("chat");
                chat.put("chat", text);
                chat.put("user", data.get("user"));
                chat.put("scope", "private");
                chat.put("peer", peerName);
                ServerMessage.Mutable forward = _bayeux.newMessage();
                forward.setChannel("/chat/" + room);
                forward.setId(message.getId());
                forward.setData(chat);
                if (text.lastIndexOf("lazy") > 0) {
                    forward.setLazy(true);
                }
                if (peer != client) {
                    peer.deliver(_session, forward);
                }
                client.deliver(_session, forward);
            }
        }
    }

    class BadWordFilter extends JSONDataFilter {
        @Override
        protected Object filterString(String string) {
            if (string.indexOf("dang") >= 0) {
                throw new DataFilter.Abort();
            }
            return string;
        }
    }
}

4. Changing the web.xml file.

You should add the following filter to your web.xml file:

<filter>
    <filter-name>continuation</filter-name>
    <filter-class>org.eclipse.jetty.continuation.ContinuationFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>continuation</filter-name>
    <url-pattern>/cometd/*</url-pattern>
</filter-mapping>

And also the following servlet:

<servlet>
    <servlet-name>cometd</servlet-name>
    <servlet-class>org.cometd.annotation.AnnotationCometdServlet</servlet-class>
    <init-param>
        <param-name>timeout</param-name>
        <param-value>20000</param-value>
    </init-param>
    <init-param>
        <param-name>interval</param-name>
        <param-value>0</param-value>
    </init-param>
    <init-param>
        <param-name>maxInterval</param-name>
        <param-value>10000</param-value>
    </init-param>
    <init-param>
        <param-name>maxLazyTimeout</param-name>
        <param-value>5000</param-value>
    </init-param>
    <init-param>
        <param-name>long-polling.multiSessionInterval</param-name>
        <param-value>2000</param-value>
    </init-param>
    <init-param>
        <param-name>logLevel</param-name>
        <param-value>0</param-value>
    </init-param>
    <init-param>
        <param-name>transports</param-name>
        <param-value>org.cometd.websocket.server.WebSocketTransport</param-value>
    </init-param>
    <init-param>
        <param-name>services</param-name>
        <param-value>com.semika.cometd.ChatService</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>cometd</servlet-name>
    <url-pattern>/cometd/*</url-pattern>
</servlet-mapping>

5. Implementing client-side functions.

If you allow your users to chat with other users, you need to show the list of online users in your web page, just like Facebook shows online users in the right sidebar. For that, you can place a simple <span> or <div> tag inside your page. I have done it as follows: <div id='members'></div> All online users will be displayed within this container. Once you click on a particular user name, it opens a new chat window similar to Facebook's; for each pair of users, a new chat window is opened. To get this behaviour, you should use 'chat.window.js', which I mentioned before. Chatting between a particular pair of users continues through that dedicated chat window. Just after a user logs into your web application in the usual way, we should subscribe that user to the chat channels. You can do it as follows: $(document).ready(function(){ $.cometChat.onLoad({memberListContainerID:'members'}); }); Note that I have passed the 'id' of the online user list container as a configuration parameter. Then the user should join the channel as follows; you can call the method below with the username: function join(userName){ $.cometChat.join(userName); } Since there is a dedicated chat window for each chat, just like Facebook, we maintain a global JavaScript array to store the created chat window objects. You need to place the following JavaScript code inside your page.
function getChatWindowByUserPair(loginUserName, peerUserName) {
    var chatWindow;
    for (var i = 0; i < chatWindowArray.length; i++) {
        var windowInfo = chatWindowArray[i];
        if (windowInfo.loginUserName == loginUserName && windowInfo.peerUserName == peerUserName) {
            chatWindow = windowInfo.windowObj;
        }
    }
    return chatWindow;
}

function createWindow(loginUserName, peerUserName) {
    var chatWindow = getChatWindowByUserPair(loginUserName, peerUserName);
    if (chatWindow == null) { // No chat window was created before for this user pair.
        chatWindow = new ChatWindow(); // Create a new chat window.
        chatWindow.initWindow({
            loginUserName: loginUserName,
            peerUserName: peerUserName,
            windowArray: chatWindowArray
        });
        // Collect all chat windows opened so far.
        var chatWindowInfo = {
            peerUserName: peerUserName,
            loginUserName: loginUserName,
            windowObj: chatWindow
        };
        chatWindowArray.push(chatWindowInfo);
    }
    chatWindow.show();
    return chatWindow;
}

As mentioned above, declare the following global JavaScript variables:

var chatWindowArray = [];
var config = {
    contextPath: '${pageContext.request.contextPath}'
};

Since I am using a JSP page, I get the context path via the 'pageContext' variable. If you are using an HTML page, manage the 'config' global variable yourself. Now you have almost reached the last part of the tutorial.

6. How does the sample application work?

You can download the comet.war file and deploy it on your server. Point the browser to the following URL: http://localhost:8080/comet This will bring you to a page with a text field and a button called 'Join'. Insert any user name you wish and click the 'Join' button. You will then be forwarded to another page with the list of online users; your name is highlighted in red. To chat on your local machine, you can open another browser (e.g. IE and FF) and join the chat channel. The peer user is displayed in blue in the online users list.
Once you click on a peer user, a new chat window opens so that you can chat with him. This functions very similarly to Facebook chat. I have tested this chat application in IE, FF and Chrome, and it works fine. If you want any help integrating this with your Java-based web application, just send me a mail. Reference: Facebook similar chat for your Java web application from our JCG partner Semika Loku Kaluge at the Code Box blog....

Create new message notification pop up in Java

First, create a JFrame to work as the pop up. Add some JLabels to it to contain the information, and place them at the proper locations so it looks like a notification message. A sample code is given below:

String message = "You got a new notification message. Isn't it awesome to have such a notification message.";
String header = "This is header of notification message";
JFrame frame = new JFrame();
frame.setSize(300, 125);
frame.setLayout(new GridBagLayout());
GridBagConstraints constraints = new GridBagConstraints();
constraints.gridx = 0;
constraints.gridy = 0;
constraints.weightx = 1.0f;
constraints.weighty = 1.0f;
constraints.insets = new Insets(5, 5, 5, 5);
constraints.fill = GridBagConstraints.BOTH;
JLabel headingLabel = new JLabel(header);
headingLabel.setIcon(headingIcon); // use the image icon you want as the heading image
headingLabel.setOpaque(false);
frame.add(headingLabel, constraints);
constraints.gridx++;
constraints.weightx = 0f;
constraints.weighty = 0f;
constraints.fill = GridBagConstraints.NONE;
constraints.anchor = GridBagConstraints.NORTH;
JButton closeButton = new JButton("X");
closeButton.setMargin(new Insets(1, 4, 1, 4));
closeButton.setFocusable(false);
frame.add(closeButton, constraints);
constraints.gridx = 0;
constraints.gridy++;
constraints.weightx = 1.0f;
constraints.weighty = 1.0f;
constraints.insets = new Insets(5, 5, 5, 5);
constraints.fill = GridBagConstraints.BOTH;
JLabel messageLabel = new JLabel("<HtMl>" + message);
frame.add(messageLabel, constraints);
frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
frame.setVisible(true);

The output of this will be: Here I have created a JFrame and added two labels (headingLabel, the header label, and messageLabel, which contains the message information) plus a close button. I have used GridBagLayout, but you can use any layout of your choice. Now, to make this frame look like a pop up, we have to remove the title bar and border from the frame.
For this, add the following line after frame.setSize(...):

frame.setUndecorated(true);

Now the output will be: Note that now our frame cannot be closed, as it does not have the title bar close button. So, to make our close button work as the frame closing button, change its declaration as follows:

JButton closeButton = new JButton(new AbstractAction("X") {
    @Override
    public void actionPerformed(final ActionEvent e) {
        frame.dispose();
    }
});

After adding this you will get the error "Cannot refer to a non-final variable frame inside an inner class defined in a different method". To get rid of this error you can adopt one of the following solutions:

Make your frame variable final.
Make your frame variable a global variable in the class.
Make your class extend JFrame and remove the frame variable altogether.

Now when you run your program it will look the same as in figure 2, but you will be able to close the frame by clicking on closeButton. You will also notice that the frame appears at the top of the screen; to change its location to the bottom-right corner of the screen, add the following lines after creating the frame:

Dimension scrSize = Toolkit.getDefaultToolkit().getScreenSize(); // size of the screen
Insets toolHeight = Toolkit.getDefaultToolkit().getScreenInsets(frame.getGraphicsConfiguration()); // height of the task bar
frame.setLocation(scrSize.width - frame.getWidth(), scrSize.height - toolHeight.bottom - frame.getHeight());

Now when you run it, it will look like this: To make it disappear after a predefined time, add the following lines at the end:

new Thread() {
    @Override
    public void run() {
        try {
            Thread.sleep(5000); // time after which the pop up will disappear
            frame.dispose();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}.start();

Up to this point you have successfully created a notification pop up that appears at the bottom-right corner of the screen and disappears after some time if the close button is not clicked.
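The bottom-right placement arithmetic above can be factored into a tiny pure helper (the class and method names here are hypothetical), which makes it easy to verify without a display:

```java
public class PopupPlacement {
    // Bottom-right corner: x is screen width minus frame width,
    // y is screen height minus task bar height minus frame height.
    static java.awt.Point bottomRight(int screenW, int screenH,
                                      int taskBarH, int frameW, int frameH) {
        return new java.awt.Point(screenW - frameW, screenH - taskBarH - frameH);
    }

    public static void main(String[] args) {
        // e.g. a 1920x1080 screen, 40px task bar, 300x125 frame
        System.out.println(bottomRight(1920, 1080, 40, 300, 125)); // (1620, 915)
    }
}
```

In the real code, screenW/screenH come from Toolkit.getDefaultToolkit().getScreenSize() and taskBarH from the bottom inset returned by getScreenInsets().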
As a final touch, you can style it as you want by applying a look and feel, or by applying different colours to the frame. You can also make it appear on top of all windows by adding:

```java
frame.setAlwaysOnTop(true);
```

Some things to notice in the above code blocks:
1. The <html> tag in messageLabel. It makes the label word-wrap. But make sure your message text does not exceed a certain length; you can adjust this, and the height of the pop-up, as per your need.
2. headingIcon is not declared in the code. It is the image icon you want to use (instead of the devil icon in the screenshot) as the heading icon. A sample declaration will look like: ImageIcon headingIcon = new ImageIcon("image_url");
3. Currently a new window for our pop-up is shown in the task bar, so if you don't want a task bar entry for the pop-up, change JFrame to JDialog.
4. In the above code the default timeout before the pop-up disappears is 5 seconds; you can change it as per your need by editing the following line: Thread.sleep(5000);
5. To make the close button look like the default title bar's close button, "x" is used as its text. You can write "close" if you want to.

Hope this helps you. Happy coding and don't forget to share! Reference: Create new message notification pop up in Java. from our JCG partner Harsh Raval at the harryjoy blog....

How I select Open Source projects

Earlier today somebody shared an image on Twitter. Its title was something like "how to choose Open Source projects". It showed a flow chart. The first decision point was: is it an Apache project? If yes, the creator suggests, don't use the project. I was looking at this image and thought: wow, what complete and utter bullshit. Yes, there was some discussion in the past. Is Apache harmful? Or is it not? Some people seem to forget that GitHub is a tool, and the Apache Software Foundation is a community. The tool and the community both have their benefits and drawbacks, but you cannot compare them 1:1. Have you ever tried to meet your GitHub fellows? The likelihood of meeting ASF guys and drinking a beer together is really high. An Open Source project is more than its set of tools. But this post is not about what is better, GitHub or ASF. Much has been said about it already (and too much bullshit). What really bugs me is that people seem to choose Open Source projects based on the tools the projects use. Here is the personal list by which I choose projects. Please feel free to send me your suggestions by e-mail or as a comment, and don't forget to +1 this post.

Is the project actively developed? Look at the project's source code browser. When did the last commit happen? If there was one recently, did it only touch the README, or did code change? Projects which have not seen an update for more than 6 months are either very, very stable (unlikely) or there is no interest anymore.

Are there more people developing? One-man shows might work sometimes, but if there are many people working on a project, it is unlikely that urgent bugs go unfixed. With only one man behind a project, you need to wait for fixes until he returns from vacation.

Is there any support? A Twitter account you can follow is not really support. You may send gists around, OK, but you cannot expect in-depth responses. Even Stack Overflow is not enough.
Stack Overflow is good, but sometimes you need direct access to the developers. And hey, sometimes you have a stupid question when you start, and on SO you can get downvoted for it.

Is there any IP clearance? Where does the source code come from? From the author? Really? What if the author has stolen it? The Apache Software Foundation has mechanisms which protect users here. On GitHub it is not so clear. But even there, some companies might (or might not) care about this. For example, Twitter backs Bootstrap. My guess is this software has a clean IP. I try to use software only if I know where it comes from. This includes much software from the ASF, some from GitHub, and others, like jQuery.org.

Are the people nice? When I use Open Source, the creators become my team mates. I usually check out the source code of most projects I use. I want to look inside. Sometimes I have questions. I don't want to work in a team full of idiots or egomaniacs. If the people are nice, the likelihood that I use the project increases.

Methods: How is the project developed and how is quality controlled? At Apache Commons, for example, there are many people around looking at every change to each component. Sometimes lengthy discussions happen about whether we should drop JDK 1.3 support, or whether we should change an interface now, since it would break backwards compatibility. At Commons there are various tools in place to check binary compatibility, bug-freeness and so on. Continuous Integration helps to keep quality alive. Finally, a vote is required to push a release, and each of the voters does his own checks. It is not easy to get a release out at the ASF; sometimes it requires 5 or more release candidates. I have seen projects on GitHub doing the same.

Are there releases? Some projects on GitHub do not make releases. They ask you to check out the source code and update it when they change things. Wow, this is a real blocker. I want tested software, I want a release number.
Agile development can easily become Agile chaos.

Are people speaking about it? If I have never heard of a piece of software and nobody speaks about it, I am careful. It also means nobody can help you if you run into problems.

Are there docs? Docs? Complete? Readable? With examples? If yes, good. If not, go away. If developers don't care about their docs, they have no real interest in their community. Good code documents itself, right. But I cannot read everything when I am under time pressure.

Is there a community? Is the project only a "we stop by and drop a fix" group, or is it a real community? Communities have the benefit of group effects. If something smells nasty, they might fork. If something is wrong in a "stop by" group, the group will die.

Is there one head? If the project has 1000 forks with many changes which are not found in the trunk, which one should I choose? Better is one version control system. I don't care if it is Mercurial and Bitbucket, Git and GitHub, or SVN at the Apache Software Foundation. I am just a user in most cases; f**k, the devs spent so much time, I leave it up to them how they want to develop! It is just important that I have one place where I get the real, cool, official sources.

Vendors: Who is running the project? Is it one vendor? Are there many vendors? Or is it a collective of individuals? I am very careful when projects are backed by only one vendor. In the case of Bootstrap (Twitter) I don't care. The project is great, but so small that I could replace it within a couple of days if something goes wrong. When it comes to JBoss (Red Hat), I am a bit more careful. The strategy of JBoss already earned some criticism back then. In this case I would prefer Apache Geronimo, which is another JEE container. At the ASF it is people who are committing, not companies. Even when there are companies who pay committers to work full-time on the projects (as with OpenOffice.org), there are always many other people who can continue, just in case.
The ASF likes having a good diversity of committers on their projects.

Licenses: Is the license clear and understandable? Is there a LICENSE file in the source code? I prefer the AL2.0 license (or similar, like MIT) because I can do whatever I want with it. I do lots of Open Source work, but honestly, sometimes I need to sell my software, and for that AL et al. works best. Projects which do not have a named LICENSE, or have a complicated "double license" model, or a license like the GPL, are usually not usable for me.

Acknowledgements: Thanks to Simone Tripodi and Maurizio Cucchiara for their valuable feedback on this post.

Don't forget to share! Reference: How I select Open Source projects from our JCG partner Christian Grobmeier at the PHP und Java Entwickler blog....

On Measuring Code Coverage

In a previous post, I explained how to visualize what part of your code is covered by your tests. This post explores two questions that are perhaps more important: why and what code coverage to measure.

Why We Measure Code Coverage

What does it mean for a statement to be covered by tests? Well, it means that the statement was executed while the tests ran, nothing more, nothing less. We can't automatically assume that the statement is tested, since the bare fact that a statement was executed doesn't imply that the effects of that execution were verified by the tests. If you practice Test-First Programming, then the tests are written before the code. A new statement is added to the code only to make a failing test pass. So with Test-First Programming, you know that each executed statement is also a tested statement.

If you don't write your tests first, then all bets are off. Since Test-First Programming isn't as popular as I think it should be, let's assume for the remainder of this post that you're not practicing it. Then what good does it do us to know that a statement is executed? Well, if the next statement is also executed, then we know that the first statement didn't throw an exception. That doesn't help us much either, however. Most statements should not throw an exception, but some statements clearly should. So in general, we still don't get a lot of value out of knowing that a statement is executed. The true value of measuring code coverage is therefore not in the statements that are covered, but in the statements that are not covered! Any statement that is not executed while running the tests is surely not tested. Uncovered code indicates that we're missing tests.

What Code Coverage We Should Measure

Our next job is to figure out what tests are missing, so we can add them. How can we do that?
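Before going on, a tiny example makes the "covered is not tested" point concrete. The discountedPrice method and its intended 20% discount below are invented for this sketch, not taken from the post:

```java
public class CoverageDemo {
    // Suppose a 20% discount was intended, but 10% was implemented: a bug.
    static int discountedPrice(int price) {
        return price - price / 10;
    }

    public static void main(String[] args) {
        // This "test" executes the statement, so a coverage tool marks it covered,
        // yet it verifies nothing, and the bug survives.
        discountedPrice(100);

        // Only an assertion on the effect can expose the bug:
        // we expected 80 (20% off), yet the method returns 90.
        System.out.println(discountedPrice(100)); // prints 90
    }
}
```

Both calls give the line 100% coverage; only a test that checks the result can fail and reveal the defect.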
Since we’re measuring code coverage, we know the target of the missing tests, namely the statements that were not executed.If some of those statements are in a single class, and you have unit tests for that class, it’s easy to see that those unit tests are incomplete. Unit tests can definitely benefit from measuring code coverage. What about acceptance tests? Some code can easily be related to a single feature, so in those cases we could add an acceptance test. In general, however, the relationship between a single line of code and a feature is weak. Just think of all the code we re-use between features. So we shouldn’t expect to always be able to tell by looking at the code what acceptance test we’re missing. It makes sense to measure code coverage for unit tests, but not so much for acceptance tests.Code Coverage on Acceptance Tests Can Reveal Dead Code One thing we can do by measuring code coverage on acceptance tests, is find dead code. Dead code is code that is not executed, except perhaps by unit tests. It lives on in the code base like a zombie. Dead code takes up space, but that’s not usually a big problem. Some dead code can be detected by other means, like by your IDE. So all in all, it seems that we’re not gaining much by measuring code coverage for acceptance tests.Code Coverage on Acceptance Tests May Be Dangerous OK, so we don’t gain much by measuring coverage on acceptance tests. But no harm, no foul, right? Well, that remains to be seen. Some organizations impose targets for code coverage. Mindlessly following a rule is not a good idea, but, alas, such is often the way of big organizations. Anyway, an imposed number of, say, 75% line coverage may be achievable by executing only the acceptance tests. So developers may have an incentive to focus their tests exclusively on acceptance tests.This is not as it should be according to the Test Pyramid. 
Acceptance tests are slower and, especially when working through a GUI, may also be more brittle than unit tests. Therefore, they usually don't go much further than testing the happy path. While it's great to know that all the units integrate well, the happy path is not where most bugs hide. Some edge and error cases are very hard to write as automated acceptance tests. For instance, how do you test what happens when the network connection drops out? These types of failures are much more easily explored by unit tests, since you can use mock objects there. The path of least resistance in your development process should lead developers to do the right thing. The right thing is to have most of the tests in the form of unit tests. If you enforce a certain amount of code coverage, be sure to measure that coverage on unit tests only. Reference: On Measuring Code Coverage from our JCG partner Remon Sinnema at the Secure Software Development blog....

Master Detail CRUD operations with Regions ADF 11g

This is an example that demonstrates how to create a Master-Detail relationship between tables by using Regions. The main purpose of regions is reusability. With regions and bounded task flows we can reuse our pages in many other pages, keeping the same functionality and having a cleaner approach. Download the Sample Application. For this example we are going to use only one Model project and keep things simple. We are going to create our Business Components through JDeveloper and its wizards. We are using a Master-Detail for Departments and Employees.

So, we are going to create two Bounded Task Flows that use fragments: one for the Departments and one for the Employees. In each bounded task flow we drag and drop a view and give it the appropriate name, departments or employees. Then in the unbounded flow we create a jspx that will have two Regions defined: one for the Department BTF and one for the Employees BTF. For Departments we are going to drag and drop the Departments iterator as a form with navigation buttons and a submit button. Additionally, we add the createInsert and Delete Operation buttons next to submit.

We do the same with Employees. The only difference here is that we drop an editable table and not a form. Additionally, we drag it from the hierarchy and not the standalone one in our Data Control. This means that we drag the detail Employees. Next, we are going to create an index page in our unbounded task flow that will contain our Bounded Task Flows as regions. In order to do that, after we have created the index page, we simply drag and drop each Bounded Task Flow as a Region. We do the same for the Employees Bounded Task Flow. Up to now, we have our hierarchy done and well placed. Since we share the same application module instance, we are good to go!! All that is left now is to place commit and rollback buttons in our Departments fragment and we are done!
For the rollback button we have to make a specific adjustment: the emps region needs to be refreshed to reflect that the rollback was performed. For this reason we are going to set the refresh property as follows:

So, what we do here is set a refresh condition on our detail region. What we say is: refresh the emps fragment when the dept fragment is refreshed. NOTE: this is a simple application demonstrating the ease of use of Regions. It is not intended to cover all aspects of regions. Download the Sample Application. Reference: Master Detail CRUD operations with Regions ADF 11g from our JCG partner Dimitrios Stassinopoulos at the Born To DeBug blog....

JavaOne 2012: Observations and Impressions

I am starting this particular blog post as I sit in the San Francisco International Airport, waiting to board an airplane to head home after another satisfying but tiring JavaOne (2012) experience. It is difficult to write another blog post after having frantically written ~30 blog posts on the conference since the keynotes last Sunday, but I want to record some of my observations and impressions of the conference while they're still relatively fresh. More than in previous years, I did embed some general observations (usually complaints) within posts on individual sessions.

This post is broken up into 'the good,' 'the bad,' and 'the ugly' of JavaOne 2012. I want to emphasize that the conference overall was outstanding and I am appreciative of the opportunity to have attended. I hope the overall tone of my post reflects my overall highly positive feelings about this conference, but also presents a realistic portrait of the not-so-great aspects of the conference.

The Good

Overall Technical Content
There is a wide variety of things conference attendees look forward to in a conference. Many of us look forward to many of the same things in a conference. For me, the single most important attribute of a technical conference is its content. In that category, JavaOne 2012 was a success. There was actually too much good content to take it all in, but that's a welcome dilemma.

High Attention to Low-Level Details
I think Adam Bien made an important observation: even though it's nice to have community involvement in the conference, JavaOne presents a special opportunity to hear from the folks (mostly Oracle employees) working 'in the trenches' on the latest Java APIs, specifications, and SDKs. Bien put it this way: 'I mainly attended sessions delivered by Oracle engineers. 90% of this sessions were great with unique, deep technical content probably only deliverable by someone implementing the low level stuff.
This is my personal motivation for attending JavaOne.' I've been to database-oriented conferences where many of the Oracle employees' presentations are heavy on marketing and slideware and low on technical detail. That's not the case at JavaOne, where Oracle employees present the low-level details that Java developers want to hear.

Breadth and Scope of Technical Content
No matter in which dimension it is measured, JavaOne 2012 featured breadth and depth of content. Subjects in Java SE, Java EE, Java ME/embedded, web, JVM (alternate languages), and even some non-Java topics were available in nearly every session block. The keynotes (especially the Strategy Keynote and Technical Keynote) and select presentations that I attended provided roadmaps and vision for what lies ahead. I enjoyed the breadth of 'temporal usefulness' available in the presentations. I learned about things I likely won't use anytime soon but are interesting and mind-expanding (Ceylon, JavaFX Embedded, Play Framework, Akka, Tiggzi), things that I'll definitely use in the intermediate future (Project Lambda, JSR 310 Date/Time API), things I'll use in the near future (Scala), and things I'll use almost as soon as I get home (JDK 7's jcmd, NetBeans Project Easel, Checker Framework). I was even able to learn several new tips and/or tricks for things with which I already had significant familiarity (Groovy, JavaFX, NetBeans's custom declarative language for refactoring/hints).

Attention to Community
I stated above that I agree with Adam Bien's assertion that one of the most valuable aspects of JavaOne is the access to people working directly on the future of Java. That being stated, I do appreciate Oracle making a real effort to reach out to the community. I posted during several presentations in which the speakers solicited feedback and ideas from the community and the audience. This was a nearly universal theme of any of the presentations related to anything open source.
The JavaOne Community Keynote is the most obvious manifestation of JavaOne's commitment to community, but that theme was reiterated in numerous presentations.

The Host City
San Francisco is a great city to visit and offers lots to do for downtime and for people traveling with JavaOne participants who are not themselves attending JavaOne. Although I look forward to any opportunity I get to attend JavaOne, I think I look forward to the visit to San Francisco as much as the conference. It's definitely an interesting city to visit with great dining and other activities. The weather was pleasant and clear most of the time, though fog rolled in occasionally to remind us it is San Francisco, and it was unusually hot in the early portion of the conference.

Oracle makes the presence of Oracle OpenWorld and JavaOne well-known throughout the city. Taxicabs feature signs for the respective conferences on their advertisements, there are signs all over the place, and some sections of the downtown near the conference venues (Moscone for Oracle OpenWorld and three Union Square hotels for JavaOne) are set aside for activities.

Extracurricular Activities
JavaOne provides numerous extracurricular activities beyond the technical content of the conference and beyond what the city provides. I didn't participate in many of these this year due to other commitments and activities, but the offerings are fairly impressive. The Oracle Appreciation Night, which featured Pearl Jam and Kings of Leon this year, is especially impressive. Although there are numerous disadvantages to JavaOne being the 'little brother' held simultaneously with Oracle OpenWorld, some of these activities are available because of the bigger and better attended big brother conference being held simultaneously.

The Return of James Gosling
There was no denying that the 'surprise' return of James Gosling to JavaOne (Community Keynote) left a big and very positive impression.
The nostalgic factor (reminder of JavaOne's most glorious days) seemed to be as big as Gosling's presentation itself. I monitored a lot of the Twitter traffic during the week on 'javaone' and no single Tweet or set of Tweets came anywhere close to being tweeted and retweeted as often as mention of Gosling's return to JavaOne.

Increased Exposure to Tools
Master craftsmen in any industry are more successful with the correct tools. At JavaOne 2012, I became familiar with tools that I either had not been aware of previously or had not fully appreciated previously. These were either the subject of the presentations I saw or were used 'incidentally' during projects and hallway discussions. These projects included the JaCoCo Java Code Coverage Library (first read about in a Tweet), Checker Framework, the Oracle JDK 7 jcmd command-line tool, and NetBeans 7.3 Project Easel. I was also reminded that JDeveloper provides one of the better free UML tools, an important reminder now that NetBeans no longer supports UML (UML was last supported in NetBeans 6.7).

Online JavaOne 2012 Coverage
Modern technology continues to make JavaOne more accessible to developers worldwide each year. Oracle made a lot of content available online early in the conference, and individual members of the community also contributed significantly to the JavaOne coverage. Even some of the individual contributions were in part due to Oracle; I, for example, attended JavaOne 2012 on a blogger pass and was able to write posts like this one thanks to that complimentary pass. Between attending sessions, visiting some San Francisco sites, and writing my own blog posts, I've only been able to read a fraction of the other posts written about JavaOne 2012. I hope to catch up on those in coming weeks. I did try to watch Tweeted messages about the conference as it went along and was impressed with the quick coverage of important aspects of the conference.
Oracle has made 'featured keynotes and highlights' available online (video). There have been several Oracle-originated blogs of interest including Oracle Outlines Roadmap for Java SE and JavaFX at JavaOne 2012, Virtual Collateral Rack (PDFs of sessions), Thursday Community Keynote: 'By the Community, For the Community', JavaOne 2012 Sunday Strategy Keynote, and The JavaOne 2012 Sunday Technical Keynote. Individual JavaOne 2012 summaries include Jim Gough's Highlights From Java One 2012, Mark Stephens's 5 key things I learnt at Javaone2012, Yakov Fain's My Three Days at JavaOne 2012, and Trisha Gee's JavaOne: The Summary.

A Dose of Reality
The blogosphere tends to distort the reality of software development for a variety of reasons (it is dominated by 'new' and 'interesting' developments, for one). Attending conferences can be a good way to talk to others to get a better perspective on the reality of general software development. For example, at JavaOne 2012, there were several reminders that there is still significant software development that occurs on the desktop (it's not all web/mobile) and that the demise of UML has been overstated.

The Bad

These 'bad' things are mostly accepted parts of the JavaOne experience. They are certainly outweighed by the good, both in terms of the number of 'bad' or 'good' things and in terms of the importance of those things. In other words, there were more good things about JavaOne, and the good things were more important to me than the bad things.

The Hotels Venue
The spreading of JavaOne over three Union Square hotels (Hilton, Parc 55, and Nikko) and the Masonic Auditorium would probably not be as big of a negative if JavaOne attendees were not aware of the presentations-friendly Moscone Center in the same city just blocks away. I am getting used to this venue and can navigate it better now than previously. I actually often enjoy the opportunity of going outside to move between buildings.
However, I also found myself changing a couple of selected presentations in the last couple of days because my original choice was in a particularly poor conference room area.

Poor Wifi
The Wifi at JavaOne simply cannot scale to the number of people wanting to use it via laptops, iPads, iPod Touch devices, Android tablets, and other personal devices. The Wifi was pretty good in the mornings before things got going and was outstanding on Thursday afternoon when a lot of people had already left.

The Food
Like the venues, the food is not completely awful; it's just not very good. It is sufficient for what is needed (providing nutrients and energy), but its lack of flavor stands in stark contrast to the excellent breakfasts and dinners I enjoyed again this year while in San Francisco.

Getting To and Leaving San Francisco
My flights into and out of San Francisco were both delayed due to fog in San Francisco and/or due to metering of traffic in the airport. In addition to this, we were told that the U.S. Navy's use of SFO for some of their Fleet Week exercises was the reason we sat on the runway for an extra twenty minutes. This is an example of where the good (being in San Francisco for the conference) outweighed the bad.

The Ugly

Inconsiderate and Intentionally Rude Misbehavior
Perhaps the ugliest part of JavaOne 2012 had little to do with the conference itself or its organizers, but was instead caused by a small portion of its attendees. It seemed that I repeatedly got behind the person trying to text and walk at the same time. These individuals slowed down traffic in the already congested halls as they walked more slowly and wandered in unpredictable directions, causing people to try to walk around them and creating additional issues. People tend not to drive and text as well as they might think, and walking and texting is no different. Walking while texting may be less dangerous than driving while texting, but it's not without its dangers.
There was one guy I was behind who was stopping intermittently while trying to eat and walk down the stairs because he was losing his lunch or snack. Continuing to try to do both made it so that neither was done well.

Other misbehavior that I observed was noticed by others as well. This included unnecessary presentation hijacking, mobile phones ringing in sessions (with some people even taking the call without leaving), people cutting in lines, and excessive entering and exiting of presentations at mid-point (most noticeably a problem when someone who sat in the first few rows made a show of his or her exit). The majority of attendees were well-behaved, but the small fraction of inconsiderate and even intentionally rude attendees was probably the ugliest part of JavaOne 2012. In defense of JavaOne, this 'ugliness' seems to be more reflective of human behavior than of the conference.

Additional / Miscellaneous Observations

Trendy Topics
Some of the topics that seemed particularly popular at this year's JavaOne included REST, HTML5, Project Nashorn, JDK8/Lambda, NetBeans, and Embedded/Raspberry Pi.

Convergence
A major theme of JavaOne 2012 was 'convergence.' This theme was explicitly identified in the keynotes and several presentations such as 'Looking into the JVM Crystal Ball' (convergence of Oracle's JRockit and HotSpot JVMs), 'Mastering Java Deployment' (convergence of Java SE and JavaFX), 'JavaFX on Smart Embedded Devices' (convergence of JavaFX and JavaFX Embedded, representing convergence of editions of Java [EE, SE, ME]), 'NetBeans.Next – The Roadmap Ahead' (sharing of features between NetBeans and JDeveloper), and 'Diagnosing Your Application on the JVM' (convergence of VM tools between JRockit and HotSpot and convergence of command-line tools into the single new jcmd tool). One of the manifestations of this convergence of versions of Java is the renaming of versions.
It was interesting to hear multiple speakers refer to the current JavaFX as JavaFX 2.2 and the 'next' major version of JavaFX as JavaFX 8 (the version that was to be called JavaFX 3). This version naming change is documented in the post JavaFX 2.2 is here, and JavaFX 8.0 is on its way! Similarly, Java ME is seeing a version naming change as well: Java ME 3.2 is the current version and Java ME 8 is the 'next' major version.

JDK 7 Update 10: The Next 'Big' Minor Release?
I heard multiple Oracle presenters mention features that they are already using in JDK 7 Update 10. Given that most of us who are using JDK 7 are using JDK 7 Update 6 (and JDK 7 Update 7 is the current regular download), it sounds to me like JDK 7 Update 10 may be the next 'minor' release of JDK 7 with significant new tools for things such as application diagnosis and application deployment. The naming of JDK minor releases with odd numbers for Critical Patch Updates (CPUs) and even numbers for 'limited update releases' was announced previously. JDK 7u10 Build b10 is available in Developer Preview.

'Java' Becoming Bigger Than Ever
One thing that is clearer to me than ever before after attending JavaOne 2012 is that 'Java' has become too big for any one person to get his or her hands around the whole thing. Even some of the most knowledgeable experts I know in the Java community were heard to say that they would need to ask someone else to answer a specific question out of their area of expertise. It's becoming increasingly difficult for any one person to thoroughly understand all aspects of Java (JVM, EE, SE, ME, etc.). When you throw in alternate languages and new frameworks and tools, one person simply cannot learn or understand all of it. It's great that we have so many choices, but it can be frustrating to see entire areas of 'Java' that would be interesting to delve into but simply require too much time and effort to give those areas the desired degree of attention.
Overall
Overall, I think JavaOne 2012 was a success by most people's measures. It certainly was by mine. I'm not the only one who was sorry to see it end. JavaOne 2013 will be held September 22–26, 2013, in San Francisco. Don't forget to share! Reference: JavaOne 2012: Observations and Impressions from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

JavaOne 2012: JavaFX Graphics Tips and Tricks

I returned to the Hilton (Imperial Ballroom B) to see Richard Bair's (Oracle Java Client Architect) 'JavaFX Graphics Tips and Tricks.' Bair is associated with FX Experience and obviously knows JavaFX. Bair said a theme of his talk is performance. He cautioned that, as with most things performance related, avoid performance pre-optimization. He had a big yellow caution screen stating 'WRITE CLEAN CODE, THEN PROFILE!' He said his talk is based on JavaFX 2.2 and some of the tips and tricks may not be applicable to JavaFX 8.

Bair covered the 'GUIMark 2 Vector' benchmark for several browsers on three different operating systems (versions not specified): Windows, Linux, and Mac OS X. Bair compared JavaFX to these browsers' native support. He also pointed out that sometimes SceneGraph is faster and sometimes Canvas is faster. Many of the points Bair brought up are more important on smaller devices than on desktops. JavaFX was much quicker than the browsers in GUIMark 2 Bitmap, and JavaFX Canvas was the quickest of all. The GUIMark 2 Text test did not provide useful data for Windows due to limited rate, but JavaFX did well on Linux and Mac OS X. Bair intends to release his benchmarking approaches for public consumption, and he showed a chart indicating significant performance improvement from JavaFX 2.2 to JavaFX 8.

Bair's Performance Rule #1 is 'Do Less Work.' Bair stated, 'Smaller systems require a much more intense round of performance tuning.' He added that 'every line counts' and 'extra method calls add up.' Although in traditional desktop Java we've been taught not to worry about the number of method calls, this can be an issue on smaller devices ('excessive inlining is expensive' and 'excessive method invocations are expensive'). Bair showed how to use a local final variable to reduce the number of method invocations. He acknowledged that it is 'absolutely micro performance pre-optimization' on the desktop, but is a useful tactic for smaller devices.
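The local-final-variable tactic can be sketched in plain Java. The names below are illustrative, not from Bair's slides:

```java
import java.util.List;

public class HoistDemo {
    // Repeated accessor calls inside a hot loop:
    static int sumSlow(List<Integer> values) {
        int sum = 0;
        for (int i = 0; i < values.size(); i++) { // values.size() is invoked on every iteration
            sum += values.get(i);
        }
        return sum;
    }

    // Hoisting the accessor into a final local removes those repeated invocations.
    // Irrelevant on a desktop JIT, as Bair concedes, but worthwhile on small devices.
    static int sumFast(List<Integer> values) {
        int sum = 0;
        final int n = values.size(); // invoked exactly once
        for (int i = 0; i < n; i++) {
            sum += values.get(i);
        }
        return sum;
    }
}
```

Both methods compute the same result; the second simply pays for the size() call once instead of once per iteration.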
Bair said that ‘fill rate’ is a limitation with ‘nearly 100% certainty.’ Geometry rate is unlikely to be a significant limit in JavaFX unless you have ‘zillions of vertices.’ CSS overhead is a possible limitation, as is layout computation; JavaFX does a lot of caching, so the latter may not always be an issue. There is a ‘good chance’ that system I/O will limit you, especially on smaller devices. Bair showed an example of ‘abusing the fill rate’ by drawing the furthest-back background first and then drawing over the majority of it with another fill. He had some points for avoiding this unnecessary filling, such as ‘only draw what has changed.’ Bair pointed out that the developer identifies ‘dirty regions’ in Swing, but that the JavaFX SceneGraph ‘does this automatically!’ He did caution that JavaFX Canvas requires the developer to identify the ‘dirty regions.’

Another approach for improving fill rate is to ‘limit use of (some) effects.’ ‘Effects are almost free on your desktop systems,’ but they may need to be watched more closely on smaller devices. Bair discussed a bullet stating, ‘Limit use of non-rectangular non-axis aligned clips’ as another tactic for improving fill rate. Directly clipping axis-aligned images is quick, but anti-aliasing, rendering as a background image, and rotating across non-aligned pixel boundaries ‘costs you a little more’ (though you won’t notice in most desktop applications).

Bair stated that reducing overdraw is an effective way of improving fill rate. Related to reduction of overdraw, he discussed using ‘image skinning.’ Bair also mentioned here that JavaFX 8 includes an automatic region texture cache. Other ideas for reducing overdraw include simplifying the style (Metro, Android), consolidating background fills, and reducing the number of overlapping nodes. Bair stated that Microsoft intentionally came up with an easy-to-draw style in Metro. The Android style is similarly quicker and easier to draw.
‘Occlusion Culling’ allows avoidance of drawing (culling) things that won’t be visible. Doing this allows us to ‘reduce overdraw and increase rendering performance.’ The JavaFX engine can respond to JavaFX CSS opaque insets to know when not to redraw these areas.

There are CSS costs to be aware of, such as parsing a stylesheet. Bair showed a ‘CSS Horror Show’ slide with .parent:hover .child {...} and an explanation of why this is so terrifying: all the children must be revisited each time the parent is hovered over. Similarly, .parent .child {...} can be bad if there is a large number of children, since ‘when we encounter a node with the .child style class, we must walk up the entire scene graph until we find it.’ It is better to limit the search to the immediate parent. Bair stated that the setStyle CSS property is very convenient but can be costly; the parsing and other support can add to performance problems. CSS provides power, flexibility, and convenience, but that does come at a performance cost.

One of Bair’s tips is to ‘avoid structural changes to the SceneGraph.’ All CSS from the changed point on down must be recomputed. Besides this reapplication of CSS, ‘structural integrity checks’ are required when the SceneGraph is changed. JavaFX has optimized toFront/toBack, so use these rather than removing and re-adding nodes.

Another Bair tip is to ‘use FXCollections.’ His first bullet on this stated, ‘Shoot for minimal notification overhead.’ A sub-bullet recommended using setAll instead of clear and addAll; another added, ‘Avoid multiple add calls.’ Use of FXCollections.sort() is best because it ‘sends ‘permutation’ change events.’ This means that the JavaFX engine knows what has changed and so only recomputes what is necessary for that specific type of change. These ‘permutations’ are ‘handled by separate fast paths.’ Bair stated that ‘ListView is blistering fast’ because it ‘reuses nodes’ and maintains minimum changes.
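The ‘minimal notification overhead’ point can be sketched in plain Java. The class below is a hand-rolled stand-in for an observable list (it is not the JavaFX ObservableList API) that simply counts change notifications, showing why one setAll call beats clear followed by addAll:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// A tiny stand-in for an observable list that counts change notifications.
public class NotifyingList<T> {
    private final List<T> items = new ArrayList<T>();
    private int notifications = 0;

    private void fireChanged() { notifications++; }

    public void clear() {
        items.clear();
        fireChanged();          // one notification
    }

    public void addAll(List<T> newItems) {
        items.addAll(newItems);
        fireChanged();          // another notification
    }

    // Replace the whole contents with a SINGLE notification,
    // mirroring the idea behind ObservableList.setAll.
    public void setAll(List<T> newItems) {
        items.clear();
        items.addAll(newItems);
        fireChanged();
    }

    public int getNotificationCount() { return notifications; }

    public static void main(String[] args) {
        List<String> data = Arrays.asList("a", "b", "c");

        NotifyingList<String> twoEvents = new NotifyingList<String>();
        twoEvents.clear();
        twoEvents.addAll(data);
        System.out.println(twoEvents.getNotificationCount()); // 2

        NotifyingList<String> oneEvent = new NotifyingList<String>();
        oneEvent.setAll(data);
        System.out.println(oneEvent.getNotificationCount());  // 1
    }
}
```

In the real scene graph each notification can trigger layout and CSS work downstream, so halving the event count is not merely cosmetic.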
Bair concluded the ListView slide with ‘Reuse ListView for all your virtualization needs!’ Bair’s ‘Manual Layout’ tip included the idea of custom extensions of Region. He cautioned that you almost always need to implement computePrefWidth and computePrefHeight when extending Region. Bair had a slide listing the questions that ‘JavaFX asks’ when handling layout, such as ‘How wide/tall would you like to be?’ and ‘Can you be resized?’ JavaFX asks these questions at least once, and sometimes many more times, when trying to render a layout. A customized layout can reduce the number of layout attempts and the number of questions asked. ‘JavaFX asks a lot of questions’ and they are ‘all asked for each node during layout.’

Bair had a ‘Major Tip!’ related to ‘Content Bias.’ If height depends on width, you’re HORIZONTAL biased. If width depends on height, you’re VERTICAL biased. Bair stated that ‘(contentBias = null) is by far the fastest’ because all computed preferences for height and width can be cached. Content Bias is typically null or horizontal. There is a bug in that ‘contentBias != null isn’t actually well supported in the built-in layouts.’

Everything covered so far has been under Bair’s Rule #1 (do less work). Rule #2 is ‘Know Your Device.’ Bair showed a slide comparing the powerful NVIDIA GeForce GTX 690 to the less powerful NVIDIA GeForce 310 and to the even lowlier PowerVR SGX543MP3. Bair’s point, of course, is that ‘JavaFX gives you a single development platform and a single set of APIs, but which APIs you can and can’t use is going to depend on the inherent performance characteristics of the device.’ Bair had some rules of thumb for JavaFX on devices: a desktop application can handle 20,000 to 100,000 nodes; 500 to 1,000 nodes is the better range for embedded; and for really small embedded devices, stick to a range of 100 to 200 nodes. Bair provided another tip related to cache.
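The caching behind ‘(contentBias = null) is by far the fastest’ can be illustrated with a small plain-Java sketch. This is illustrative code only, not the JavaFX implementation; the class and method names are made up. When preferred width does not depend on height, the answer can be computed once and cached, so the repeated layout ‘questions’ become cheap lookups:

```java
// Illustrative sketch of why contentBias == null is fast: a height-independent
// preferred width is computed once, then served from a cache on every query.
public class PrefSizeCacheDemo {

    private int computeCount = 0;
    private double cachedPrefWidth = -1; // -1 means "not computed yet"

    // Stand-in for an expensive text-measurement / layout computation.
    private double computePrefWidth() {
        computeCount++;
        return 120.0;
    }

    // With no content bias, the cached value can be reused for every query.
    public double prefWidth() {
        if (cachedPrefWidth < 0) {
            cachedPrefWidth = computePrefWidth();
        }
        return cachedPrefWidth;
    }

    public int getComputeCount() { return computeCount; }

    public static void main(String[] args) {
        PrefSizeCacheDemo node = new PrefSizeCacheDemo();
        // Layout may ask the same question many times per pass...
        for (int i = 0; i < 10; i++) {
            node.prefWidth();
        }
        // ...but the expensive computation ran only once.
        System.out.println(node.getComputeCount()); // 1
    }
}
```

With a non-null content bias the cache would have to be keyed by the other dimension (width queried at a given height), which is why biased nodes are inherently more expensive to lay out.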
Bair talked about caching a chart because it will then be drawn to the image only once and can afterwards be drawn to the screen ‘a bazillion times.’ He cautioned, though, that this ‘backfires if the node is changing a lot.’ Bair said that he’ll often turn cache to true, do an animation, and then set cache to false again. CacheHint can be set to SPEED when rotating and scaling for better performance. If you want the node redrawn when it rotates for greater accuracy, use a cache hint other than SPEED.

JavaFX 8 has a Pulse Logger (-Djavafx.pulseLogger=true system property) that ‘prints out a lot of crap’ (in a good way) about the JavaFX engine’s execution. A lot of information is provided on a per-pulse basis, including pulse number (an auto-incremented integer), pulse duration, and time since the last pulse. The information also includes thread details and event details. This data allows a developer to see what is taking most of the time.

Bair ended the session with the same bright yellow caution slide: Write Clean Code, Then Profile! The slide also points out, ‘Don’t overdo it or you will have an unmaintainable mess.’ Reference: JavaOne 2012: JavaFX Graphics Tips and Tricks from our JCG partner Dustin Marx at the Inspired by Actual Events blog....

Spring MVC Customized User Login Logout Implementation Example

This post describes how to implement customized user login and logout for a Spring MVC web application. As a prerequisite, readers are advised to read this post, which introduces several Spring Security concepts. The code example is available from GitHub in the Spring-MVC-Login-Logout directory. It is derived from the Spring MVC with annotations example.

Customized Authentication Provider

In order to implement our own way of accepting user login requests, we need to implement an authentication provider. The following lets users in if their id is identical to their password:

```java
public class MyAuthenticationProvider implements AuthenticationProvider {

    private static final List<GrantedAuthority> AUTHORITIES
            = new ArrayList<GrantedAuthority>();

    static {
        AUTHORITIES.add(new SimpleGrantedAuthority("ROLE_USER"));
        AUTHORITIES.add(new SimpleGrantedAuthority("ROLE_ANONYMOUS"));
    }

    @Override
    public Authentication authenticate(Authentication auth)
            throws AuthenticationException {

        if (auth.getName().equals(auth.getCredentials())) {
            return new UsernamePasswordAuthenticationToken(
                    auth.getName(), auth.getCredentials(), AUTHORITIES);
        }

        throw new BadCredentialsException("Bad Credentials");
    }

    @Override
    public boolean supports(Class<?> authentication) {
        if (authentication == null) {
            return false;
        }
        return Authentication.class.isAssignableFrom(authentication);
    }

}
```

Security.xml

We need to create a security.xml file:

```xml
<beans:beans xmlns="http://www.springframework.org/schema/security"
    xmlns:beans="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/security
        http://www.springframework.org/schema/security/spring-security-3.1.xsd">

    <http>
        <intercept-url pattern="/*" access="ROLE_ANONYMOUS"/>
        <form-login default-target-url="/" always-use-default-target="true" />
        <anonymous />
        <logout />
    </http>

    <authentication-manager alias="authenticationManager">
        <authentication-provider ref="myAuthenticationProvider" />
    </authentication-manager>

    <beans:bean id="myAuthenticationProvider"
        class="com.jverstry.LoginLogout.Authentication.MyAuthenticationProvider" />

</beans:beans>
```

The above makes sure all users have the anonymous role, which grants access to any page. Once logged in, they are redirected to the main page. If they don’t log in, they are automatically considered anonymous users. A logout function is also declared. Rather than reinventing the wheel, we use items delivered by Spring itself.

Main Page

We implement a main page displaying the name of the currently logged-in user, together with login and logout links:

```jsp
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<!doctype html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <title>Welcome To MVC Customized Login Logout!!!</title>
</head>
<body>
    <h1>Spring MVC Customized Login Logout !!!</h1>
    Who is currently logged in? <c:out value="${CurrPrincipal}" /> !<br />
    <a href="<c:url value='/spring_security_login'/>">Login</a>
    <a href="<c:url value='/j_spring_security_logout'/>">Logout</a>
</body>
</html>
```

Controller

We need to provide the currently logged-in user name to the view:

```java
@Controller
public class MyController {

    @RequestMapping(value = "/")
    public String home(Model model) {
        model.addAttribute("CurrPrincipal",
                SecurityContextHolder.getContext().getAuthentication().getName());
        return "index";
    }

}
```

Running The Example

Once compiled, one can start the example by browsing http://localhost:9292/spring-mvc-login-logout/. The main page is displayed. Log in using the same id and password, and the application returns to the main page and displays the user name. More Spring related posts here. Happy coding and don’t forget to share! Reference: Spring MVC Customized User Login Logout Implementation Example from our JCG partner Jerome Versrynge at the Technical Notes blog....
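The provider’s accept/reject rule is easy to exercise in isolation. Below is a minimal plain-Java sketch with no Spring classes on the classpath; the accepts method is a hypothetical stand-in for the comparison inside authenticate, not part of the example project:

```java
public class AuthRuleDemo {

    // The rule from MyAuthenticationProvider: accept only when the
    // user name and the supplied credentials are identical.
    static boolean accepts(String name, Object credentials) {
        return name != null && name.equals(credentials);
    }

    public static void main(String[] args) {
        System.out.println(accepts("alice", "alice"));  // true
        System.out.println(accepts("alice", "secret")); // false
    }
}
```

In the real provider, a rejected pair results in a BadCredentialsException rather than a boolean, and an accepted pair yields a UsernamePasswordAuthenticationToken carrying the granted authorities.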
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.