

8 Ways to improve your Java EE Production Support skills

Everybody involved in Java EE production support knows this job can be difficult: 24/7 pager support, multiple incidents and bug fixes to deal with on a regular basis, and pressure from the client and the management team to resolve production problems as fast as possible and prevent recurrences. On top of your day-to-day work, you also have to take care of multiple application deployments driven by multiple IT delivery teams. Sounds familiar? As hard as it can be, the reward for your hard work can be significant. You may have noticed from my past articles that I'm quite passionate about Java EE production support, root cause analysis and performance-related problems. This post shares a few tips and work principles I have applied over the last 10+ years working with multiple Java EE production support teams, onshore and offshore. It will provide you with 8 ways to improve your production support skills, which may help you better enjoy your IT support job and ultimately become a Java EE production support guru.

#1 – Partner with your clients and delivery teams

My first recommendation should not be a surprise to anybody. Regardless of how good you are from a technical perspective, you will be unable to succeed as a great production support leader if you fail to partner with your clients and IT delivery teams. You have to realize that you are providing a service to your client, who is the owner and master of the IT production environment. You are expected to ensure the availability of the critical Java EE production systems and address both known and future problems. Stay away from damaging attitudes such as a false impression that you are the actual owner, or getting frustrated at your client for their lack of understanding of a problem. Your job is to get all the facts right and provide good recommendations to your clients so they can make the right decisions.
Over time, solid trust will be established between you and your client, with great benefits and opportunities. Building a strong relationship with the IT delivery team is also very important. The delivery team, which includes IT architects, project managers and technical resources, is the team of experts responsible for building and enhancing the Java EE production environments via their established project delivery model. Over the years, I have seen several examples of friction between these two groups. The support team tends to be overly critical of the delivery team's work due to bad experiences with failed deployments, surges of production incidents and so on. I have also seen the delivery team lack confidence in the support team's capabilities, again due to bad experiences in the context of failed deployments or a lack of proper root cause analysis or technical knowledge. As a production support individual, you have to build your credibility and stay away from negative and unprofessional attitudes. Building credibility means hard work, proper gathering of facts, technical and root cause analysis, and showing interest in learning new solutions. This will increase the delivery team's trust in you and allow you to gain significant exposure and experience in the long term. Ultimately, you will be able to work with and provide consultation for both teams. Proper balance and professionalism between these three actors is key for any successful IT production environment.

#2 – Every production incident is a learning opportunity

One of the great things about Java EE production support is the multiple learning opportunities you are exposed to.
You may have realized that after each production outage you achieved at least one of the following goals:

- You gained new technical knowledge from a new problem type
- You increased your knowledge and experience of a known situation
- You increased your visibility and trust with your operations client
- You shared your existing knowledge with other team members, allowing them to succeed and resolve the problem

Please note that it is also normal to face negative experiences from time to time. Again, you will grow stronger from those and learn from your mistakes. Recurring problems, incidents or preventive work still offer you opportunities to gather more technical facts, pinpoint the root cause or come up with recommendations for a permanent resolution. The bottom line is that the more incidents you are involved with, the better. It is OK if you are not yet comfortable taking an active role in the incident recovery, but please ensure that you are present so you can at least gain experience and knowledge from your more experienced team members.

#3 – Don't fear change, embrace it

One common problem I have noticed across Java EE support teams is a fear factor around production platform changes such as project deployments, infrastructure or network-level changes.
Below are a few reasons for this common fear:

- For many support team members, application "change" is synonymous with production "instability"
- Lack of understanding of the project itself or the scope of changes automatically translates into fear
- Low comfort level executing the requested application or middleware changes

Such a fear factor is often a symptom of gaps in the current release management process between the three main actors, or of production platform problems such as:

- Lack of proper knowledge transfer between the IT delivery and support teams
- An already unstable production environment prior to the new project deployment
- Lack of deep technical knowledge of Java EE or the middleware

Fear can be a serious obstacle to your future growth and must be dealt with seriously. My recommendation is that, regardless of the existing gaps within your organization, you simply embrace the changes but combine them with proper due diligence: ask for more knowledge transfer, participate in project deployment strategy and risk assessments, perform code walkthroughs, etc. This will allow you to eliminate that "fear" attitude and gain experience and credibility with your IT delivery team and client. It will also give you opportunities to build recommendations for future project deployments and infrastructure-related improvements. Finally, if you feel that you lack the technical knowledge to implement the changes, simply say so and ask for a more experienced team member to shadow your work. This approach will reduce your fear level and allow you to gain experience with minimal risk.

#4 – Learn how to read JVM thread dumps and monitoring tools data

I'm sure you have noticed from my past articles and case studies that I use JVM thread dumps a lot. This is for a reason: thread dump analysis is one of the most important and valuable skills to acquire for any successful Java EE production support individual.
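As a quick illustration of how approachable thread dump data is, the JDK itself can produce one programmatically via ThreadMXBean. This is my own sketch (the class and method names below are illustrative, not from the article); in production you would more typically use jstack <pid> or kill -3 <pid>.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Minimal sketch: capture a thread dump of the running JVM with ThreadMXBean.
public class ThreadDumpUtil {

    public static String dumpThreads() {
        StringBuilder sb = new StringBuilder();
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // true, true = also report locked monitors and ownable synchronizers
        for (ThreadInfo info : mx.dumpAllThreads(true, true)) {
            sb.append('"').append(info.getThreadName()).append("\" ")
              .append("state=").append(info.getThreadState()).append('\n');
            for (StackTraceElement frame : info.getStackTrace()) {
                sb.append("\tat ").append(frame).append('\n');
            }
            sb.append('\n');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Prints every live thread with its state and stack trace
        System.out.println(dumpThreads());
    }
}
```

Generating the dump is the easy part; reading the patterns in it is the skill worth acquiring.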
I analyzed my first thread dump 10 years ago when troubleshooting a WebLogic 6 problem running on JDK 1.3. Ten years and hundreds of thread dump snapshots later, I'm still learning new problem patterns. The good part about the JVM and thread dumps is that you will always find new patterns to identify and understand. I can guarantee you that once you acquire this knowledge (along with JVM fundamentals), not only will a lot of production incidents be easier to pinpoint, but they will also be much more fun and self-rewarding. Given how easy, fast and non-intrusive it is these days to generate a JVM thread dump, there is simply no excuse not to learn this key troubleshooting technique.

My other recommendation is to learn how to use existing monitoring tools and interpret their data. Java EE monitoring tools are highly valuable weapons for any production support individual involved in day-to-day support. Depending on the products purchased or free tools used by your IT client, they will provide you with a performance view of your Java EE applications, middleware (WebLogic, JBoss, WAS...) and the JVM itself. This historical data is also critical when performing root cause analysis following a major production outage. Proper knowledge and understanding of the data will allow you to understand the IT platform's performance and capacity, and give you opportunities to work with the IT capacity planning analysts and architect team, who are accountable for ensuring the long-term stability and scalability of the IT production environment.

#5 – Learn how to write code and perform code walkthroughs

My next recommendation is to improve your coding skills. One of the most important responsibilities of a Java EE production support team, on top of regular bug fixes, is to act as a "gatekeeper", i.e. the last line of defense before the implementation of a project. This risk assessment exercise involves not only project reviews, test results and performance test reports, but also code walkthroughs.
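To make the walkthrough point concrete, here is a hedged sketch (the class name DateFormatting is mine) of a classic defect such a review should catch: SimpleDateFormat keeps mutable internal state and is not thread safe, so sharing one static instance across request threads silently corrupts results under load.

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Illustrative example of a thread safety defect and one common fix.
public class DateFormatting {

    // Defective pattern a reviewer should reject -- one shared mutable formatter:
    // static final SimpleDateFormat SHARED = new SimpleDateFormat("yyyy-MM-dd");

    // One common fix: a formatter instance per thread via ThreadLocal.
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        return FORMAT.get().format(date);
    }

    public static void main(String[] args) {
        System.out.println(format(new Date()));
    }
}
```

Defects like this rarely show up in functional testing and only surface under concurrent production load, which is exactly why the walkthrough matters.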
Unfortunately, this review is often not performed properly, if done at all. The goal of the exercise is to identify areas for improvement and detect code defects potentially harmful to the production environment, such as thread safety problems, lack of I/O- and socket-related timeouts, etc. Your capability to perform such a code assessment depends on your coding skills and overall knowledge of Java EE design patterns and anti-patterns. Improving your coding skills can be done by following a few strategies:

- Explore opportunities within your IT organization to perform delivery work
- Jump on any opportunity to review, officially or unofficially, existing or new project code
- Create personal Java EE development projects pertinent to your day-to-day work and long-term career
- Join Java/Java EE open source projects and communities (Apache, JBoss, Spring...)

#6 – Don't pretend that you know everything about Java, the JVM and middleware

Another common problem I have noticed in many Java EE production support individuals is a skill "plateau". This is especially problematic when working on static IT production environments with few changes and hardening improvements. In this context, you get used very quickly to your day-to-day work, the technology used and the known problems. You then become very comfortable with your tasks, with a false impression of seniority. Then one day, your IT organization is faced with a re-org or you have to work for a new client. At this point you are shocked and struggle to overcome the new challenges. What happened?

- You reached a skill plateau within your small Java EE application list and middleware bubble
- You failed to invest time in yourself outside your work IT bubble
- You failed to acknowledge your lack of deeper Java, Java EE and middleware knowledge, e.g. a
false impression of knowing everything
- You failed to keep your eyes open and explore the rest of the IT world and the Java community

My main recommendation is that when you feel over-confident or over-qualified in your current role, it is time to move on and take on new challenges. This could mean a different role within your existing support team, moving to a project delivery team for a certain time, or completely switching jobs and/or IT clients. Constantly seeking new challenges will lead to:

- A significant increase in knowledge due to a higher diversity of technologies such as JVM vendors (HotSpot, IBM JVM, Oracle JRockit...), middleware (WebLogic, JBoss, WAS...), databases, operating systems, infrastructure, etc.
- A significant increase in knowledge due to a higher diversity of implementations and solutions (SOA, web development/portals, middle tier, legacy integration, mobile development, etc.)
- Increased learning opportunities due to new types of production incidents
- Increased visibility within your IT organization and the Java community
- Improved client skills and contacts
- Increased resistance to working under stress, i.e. learning how to use stress and adrenaline to your advantage (the typical boost you can get during a severe production outage)

#7 – Share your knowledge with your team and the Java community

Sharing your Java EE skills and production support experience is a great way to improve and maintain a strong relationship with your support team members. I also encourage you to participate in the Java community (blogs, forums, open source groups, etc.) and share your Java EE production problems there, since a lot of problems are common and I'm sure people can benefit from your experience. That being said, one approach that I follow myself and highly recommend is to schedule planned (ideally weekly) internal training sessions. The topic is typically chosen via a simple voting system and presented by different members, when possible.
A good sharing mentality will naturally lead you to more research and reading, further increasing your skills in the long term.

#8 – Rise to the challenge

At this point you have acquired a solid knowledge foundation and key troubleshooting skills. You have been involved in many production incidents, with a good understanding of the root causes and resolutions. You understand your IT production environment well, and your client is starting to request your presence directly on critical incidents. You are also spending time every week improving your coding skills and sharing with the Java community... but are you really up to the challenge? A true hero can be defined as an individual with a great capability to rise to the challenge and lead others to victory. Obviously you are not expected to save the world, but you can still be the "hero of the day" by rising to the challenge and leading your support team to the resolution of critical production outages. A truly successful and recognized Java EE production support person is not necessarily the strongest technical resource, but one who has learned how to properly balance technical knowledge and client skills with a strong capability to rise to the challenge and take the lead when faced with difficult situations.

I really hope that these tips can help you in your day-to-day Java EE production support. Please share your experience and tips on how to improve your Java EE production support skills.

Reference: 8 Ways to improve your Java EE Production Support skills from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog.

I/O Demystified

With all the hype around highly scalable server design and the rage behind Node.js, I have been meaning to do some focused reading on I/O design patterns, but until now couldn't find enough time to invest. Having now done some research, I thought it best to jot down the stuff I came across as a future reference for me and anyone who may come across this post. OK then... let's hop on the I/O bus and go for a ride.

Types of I/O

There are four different ways I/O can be done, according to the blocking or non-blocking nature of the operations and the synchronous or asynchronous nature of the I/O readiness/completion event notifications.

Synchronous Blocking I/O

This is where the I/O operation blocks the application until its completion, which forms the basis of the typical thread-per-connection model found in most web servers. When the blocking read() or write() is called, there is a context switch to the kernel, where the I/O operation happens and data is copied into a kernel buffer. Afterwards, the kernel buffer is transferred to the user-space application-level buffer and the application thread is marked as runnable, upon which the application unblocks and reads the data in the user-space buffer. The thread-per-connection model tries to limit the effect of this blocking by confining a connection to a thread, so that the handling of other concurrent connections is not blocked by an I/O operation on one connection. This is fine as long as the connections are short-lived and data link latencies are not that bad.
However, in the case of long-lived or high-latency connections, the chances are that threads will be held up by these connections for a long time, causing starvation for new connections: if a fixed-size thread pool is used, blocked threads cannot be reused to service new connections while blocked, and if each connection is serviced using a new thread, a large number of threads will be spawned within the system, which can become pretty resource-intensive, with high context switching costs under a highly concurrent load.

ServerSocket server = new ServerSocket(port);
while (true) {
    Socket connection = server.accept();
    spawnThreadAndProcess(connection); // pseudocode: hand off to a new thread
}

Synchronous Non-Blocking I/O

In this mode the device or the connection is configured as non-blocking, so that read() and write() operations will not block. This usually means that if the operation cannot be immediately satisfied, it returns an error code indicating that the operation would block (EWOULDBLOCK in POSIX) or that the device is temporarily unavailable (EAGAIN in POSIX). It is up to the application to poll until the device is ready and all the data has been read. However, this is not very efficient, since each of these calls causes a context switch to the kernel and back, irrespective of whether any data was read.

Asynchronous Non-Blocking I/O with Readiness Events

The problem with the earlier mode was that the application had to poll and busy-wait to get the job done. Wouldn't it be better if the application were notified when the device is ready to be read from or written to? That is exactly what this mode provides.
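Before going further, the polling cost of the synchronous non-blocking mode can be made concrete with a minimal sketch of my own (not from the original article), using java.nio.channels.Pipe as an in-process stand-in for a socket: read() on a non-blocking channel returns 0 immediately when no data is queued, so the application pays a system call for every empty poll.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

// Illustrative sketch of non-blocking polling semantics.
public class NonBlockingPoll {

    public static int pollOnce(Pipe.SourceChannel source, ByteBuffer buf)
            throws IOException {
        source.configureBlocking(false);
        // > 0: bytes read, 0: nothing available (EWOULDBLOCK-style), -1: EOF
        return source.read(buf);
    }

    public static void main(String[] args) throws IOException {
        Pipe pipe = Pipe.open();
        ByteBuffer buf = ByteBuffer.allocate(16);
        System.out.println(pollOnce(pipe.source(), buf)); // 0: no data queued yet
        pipe.sink().write(ByteBuffer.wrap("hi".getBytes()));
        buf.clear();
        System.out.println(pollOnce(pipe.source(), buf)); // typically 2: the bytes just written
    }
}
```

A real application would have to repeat pollOnce() in a loop, which is exactly the busy-waiting that readiness events eliminate.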
Using a special system call (which varies according to the platform: select()/poll()/epoll() for Linux, kqueue() for BSD, /dev/poll for Solaris), the application registers its interest in getting I/O readiness information for a certain I/O operation (read or write) on a certain device (a file descriptor in Linux parlance, since all sockets are abstracted using file descriptors). Afterwards, this system call is invoked, and it blocks until at least one of the registered file descriptors becomes ready. The file descriptors ready for I/O are then fetched as the return of the system call and can be serviced sequentially in a loop in the application thread. The ready-connection processing logic is usually contained within a user-provided event handler, which still has to issue non-blocking read()/write() calls to fetch data from the device to the kernel and ultimately to the user-space buffer, incurring a context switch to the kernel. Moreover, there is usually no absolute guarantee that it will be possible to do the intended I/O with the device, since what the operating system provides is only an indication that the device might be ready for the I/O operation of interest; the non-blocking read() or write() can bail you out in such situations. However, this should be the exception rather than the norm.

So the overall idea is to get readiness events in an asynchronous fashion and register event handlers to be invoked when such event notifications are triggered. As you can see, all of this can be done in a single thread while multiplexing among different connections, primarily thanks to the nature of select() (chosen here as a representative system call), which can return the readiness of multiple sockets at a time. This is part of the appeal of this mode of operation, where one thread can serve a large number of connections at a time. This mode is what is usually known as the "non-blocking I/O" model.
Java has abstracted out the differences between platform-specific system call implementations with its NIO API. Sockets/file descriptors are abstracted using Channels, and the Selector encapsulates the selection system call. Applications interested in readiness events register a Channel (usually a SocketChannel obtained by an accept() on a ServerSocketChannel) with the Selector and get back a SelectionKey, which acts as a handle holding the Channel and its registration information. A blocking select() call is then made on the Selector, which returns a set of SelectionKeys that can be processed one by one using application-specified event handlers.

Selector selector = Selector.open();
channel.configureBlocking(false);
SelectionKey key = channel.register(selector, SelectionKey.OP_READ);

while (true) {
    int readyChannels = selector.select();
    if (readyChannels == 0) continue;

    Set<SelectionKey> selectedKeys = selector.selectedKeys();
    Iterator<SelectionKey> keyIterator = selectedKeys.iterator();
    while (keyIterator.hasNext()) {
        SelectionKey selectedKey = keyIterator.next();
        if (selectedKey.isAcceptable()) {
            // a connection was accepted by a ServerSocketChannel
        } else if (selectedKey.isConnectable()) {
            // a connection was established with a remote server
        } else if (selectedKey.isReadable()) {
            // a channel is ready for reading
        } else if (selectedKey.isWritable()) {
            // a channel is ready for writing
        }
        keyIterator.remove();
    }
}

Asynchronous Non-Blocking I/O with Completion Events

Readiness events only go so far as to notify you that the device/socket is ready to do something. The application still has to do the dirty work of reading the data from the device/socket (more accurately, directing the operating system to do so via a system call) into the user-space buffer. Wouldn't it be nice to delegate this job to the operating system, letting it run in the background and inform you once it has completed the job by transferring all the data from the device to the kernel buffer and finally to the application-level buffer?
That is the basic idea behind this mode, usually known as the "asynchronous I/O" model. For this, the operating system is required to support AIO operations. In Linux, this support has been present in the POSIX aio API since 2.6; in Windows it is present in the form of "I/O completion ports". With NIO.2, Java has stepped up its support for this mode with its AsynchronousChannel API.

Operating System Support

In order to support readiness and completion event notifications, different operating systems provide varying system calls. For readiness events, select() and poll() can be used on Linux-based systems. However, the newer epoll() variant is preferred due to its efficiency over select() and poll(): the selection time of select() increases linearly with the number of descriptors monitored, and select() is apparently notorious for overwriting the file descriptor array references, so each time it is called the descriptor array has to be repopulated from a separate copy. Not an elegant solution at any rate.

The epoll() variant can be configured in two ways: edge-triggered and level-triggered. In the edge-triggered case, it emits a notification only when an event is detected on the associated descriptor. Say that during an edge-triggered notification your application handler only read half of the kernel input buffer. It won't get another notification on this descriptor, even though there is still data to be read, until the device is ready to send more data, causing a new file descriptor event. The level-triggered configuration, on the other hand, triggers a notification each time there is data to be read.

The comparable system calls are present in the form of kqueue in the BSD flavours, and /dev/poll or "event completion" in Solaris, depending on the version. The Windows equivalent is "I/O completion ports". The situation for the AIO mode, however, is a bit different, at least in the Linux case.
The aio support for sockets in Linux seems shaky at best, with some suggesting that it actually uses readiness events at the kernel level while providing an asynchronous abstraction over completion events at the application level. Windows, however, supports this as a first-class mechanism, again via "I/O completion ports".

I/O Design Patterns 101

There are patterns everywhere in software development, and I/O is no different. There are a couple of I/O patterns associated with the NIO and AIO models, described below.

Reactor Pattern

There are several components participating in this pattern. I will go through them first so it will be easy to understand the diagram.

Reactor Initiator: The component which initiates the non-blocking server by configuring and starting the dispatcher. First it binds the server socket and registers it with the demultiplexer for client connection-accept readiness events. Then the event handler implementations for each type of readiness event (read/write/accept etc.) are registered with the dispatcher. Next, the dispatcher event loop is invoked to handle event notifications.

Dispatcher: Defines an interface for registering, removing and dispatching Event Handlers, which are responsible for reacting to connection events including connection acceptance, data input/output and timeout events on a set of connections. For servicing a client connection, the related event handler (e.g. the accept event handler) registers the accepted client channel (a wrapper for the underlying client socket) with the demultiplexer, along with the type of readiness events to listen for on that particular channel. Afterwards, the dispatcher thread invokes the blocking readiness selection operation on the demultiplexer for the set of registered channels. Once one or more registered channels are ready for I/O, the dispatcher services each returned "Handle" associated with each ready channel, one by one, using the registered event handlers.
It is important that these event handlers don't hold up the dispatcher thread, since that would delay the dispatcher in servicing other ready connections. Since the usual logic within an event handler includes transferring data to/from the ready connection, which blocks until all the data is transferred between the user-space and kernel-space buffers, these handlers are normally run in separate threads from a thread pool.

Handle: A handle is returned once a channel is registered with the demultiplexer; it encapsulates the connection channel and its readiness information. A set of ready Handles is returned by the demultiplexer's readiness selection operation. The Java NIO equivalent is SelectionKey.

Demultiplexer: Waits for readiness events on one or more registered connection channels. The Java NIO equivalent is Selector.

Event Handler: Specifies the interface with hook methods for dispatching connection events. These methods need to be implemented by application-specific event handler implementations.

Concrete Event Handler: Contains the logic to read/write data from the underlying connection and do the required processing, or to initiate the client connection acceptance protocol, from the passed Handle.

Event handlers are typically run in separate threads from a thread pool, as shown in the diagram below. A simple echo server implementation of this pattern is as follows (without an event handler thread pool).
public class ReactorInitiator {

  private static final int NIO_SERVER_PORT = 9993;

  public void initiateReactiveServer(int port) throws Exception {
    ServerSocketChannel server = ServerSocketChannel.open();
    server.socket().bind(new InetSocketAddress(port));
    server.configureBlocking(false);

    Dispatcher dispatcher = new Dispatcher();
    dispatcher.registerChannel(SelectionKey.OP_ACCEPT, server);

    dispatcher.registerEventHandler(SelectionKey.OP_ACCEPT,
      new AcceptEventHandler(dispatcher.getDemultiplexer()));
    dispatcher.registerEventHandler(SelectionKey.OP_READ,
      new ReadEventHandler(dispatcher.getDemultiplexer()));
    dispatcher.registerEventHandler(SelectionKey.OP_WRITE,
      new WriteEventHandler());

    dispatcher.run(); // Run the dispatcher loop
  }

  public static void main(String[] args) throws Exception {
    System.out.println("Starting NIO server at port : " + NIO_SERVER_PORT);
    new ReactorInitiator().initiateReactiveServer(NIO_SERVER_PORT);
  }
}

public class Dispatcher {

  private Map<Integer, EventHandler> registeredHandlers =
    new ConcurrentHashMap<Integer, EventHandler>();
  private Selector demultiplexer;

  public Dispatcher() throws Exception {
    demultiplexer = Selector.open();
  }

  public Selector getDemultiplexer() {
    return demultiplexer;
  }

  public void registerEventHandler(int eventType, EventHandler eventHandler) {
    registeredHandlers.put(eventType, eventHandler);
  }

  // Used to register a ServerSocketChannel with the
  // selector to accept incoming client connections
  public void registerChannel(int eventType, SelectableChannel channel)
      throws Exception {
    channel.register(demultiplexer, eventType);
  }

  public void run() {
    try {
      while (true) { // Loop indefinitely
        demultiplexer.select();

        Set<SelectionKey> readyHandles = demultiplexer.selectedKeys();
        Iterator<SelectionKey> handleIterator = readyHandles.iterator();

        while (handleIterator.hasNext()) {
          SelectionKey handle = handleIterator.next();

          if (handle.isAcceptable()) {
            EventHandler handler =
              registeredHandlers.get(SelectionKey.OP_ACCEPT);
            handler.handleEvent(handle);
            // Note: we don't remove this handle from the selector
            // since we want to keep listening for new client connections
          }

          if (handle.isReadable()) {
            EventHandler handler =
              registeredHandlers.get(SelectionKey.OP_READ);
            handler.handleEvent(handle);
            handleIterator.remove();
          }

          if (handle.isWritable()) {
            EventHandler handler =
              registeredHandlers.get(SelectionKey.OP_WRITE);
            handler.handleEvent(handle);
            handleIterator.remove();
          }
        }
      }
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}

public interface EventHandler {
  public void handleEvent(SelectionKey handle) throws Exception;
}

public class AcceptEventHandler implements EventHandler {

  private Selector demultiplexer;

  public AcceptEventHandler(Selector demultiplexer) {
    this.demultiplexer = demultiplexer;
  }

  @Override
  public void handleEvent(SelectionKey handle) throws Exception {
    ServerSocketChannel serverSocketChannel =
      (ServerSocketChannel) handle.channel();
    SocketChannel socketChannel = serverSocketChannel.accept();
    if (socketChannel != null) {
      socketChannel.configureBlocking(false);
      socketChannel.register(demultiplexer, SelectionKey.OP_READ);
    }
  }
}

public class ReadEventHandler implements EventHandler {

  private Selector demultiplexer;
  private ByteBuffer inputBuffer = ByteBuffer.allocate(2048);

  public ReadEventHandler(Selector demultiplexer) {
    this.demultiplexer = demultiplexer;
  }

  @Override
  public void handleEvent(SelectionKey handle) throws Exception {
    SocketChannel socketChannel = (SocketChannel) handle.channel();
    socketChannel.read(inputBuffer); // Read data from the client

    inputBuffer.flip(); // Rewind the buffer to start reading from the beginning
    byte[] buffer = new byte[inputBuffer.limit()];
    inputBuffer.get(buffer);

    System.out.println("Received message from client : " + new String(buffer));
    inputBuffer.flip(); // Rewind again so the same data can be echoed back

    // Register interest in the writable readiness event for
    // this channel in order to echo back the message
    socketChannel.register(demultiplexer, SelectionKey.OP_WRITE, inputBuffer);
  }
}

public class WriteEventHandler implements EventHandler {

  @Override
  public void handleEvent(SelectionKey handle) throws Exception {
    SocketChannel socketChannel = (SocketChannel) handle.channel();
    ByteBuffer inputBuffer = (ByteBuffer) handle.attachment();
    socketChannel.write(inputBuffer);
    socketChannel.close(); // Close connection
  }
}

Proactor Pattern

This pattern is based on the asynchronous I/O model. The main components are as follows.

Proactive Initiator: The entity which initiates the Asynchronous Operation accepting client connections. This is usually the server application's main thread. It registers a Completion Handler along with a Completion Dispatcher to handle connection acceptance asynchronous event notifications.

Asynchronous Operation Processor: Responsible for carrying out I/O operations asynchronously and providing completion event notifications to the application-level Completion Handler. This is usually the asynchronous I/O interface exposed by the operating system.

Asynchronous Operation: Asynchronous Operations are run to completion by the Asynchronous Operation Processor in separate kernel threads.

Completion Dispatcher: Responsible for calling back to the application's Completion Handlers when Asynchronous Operations complete. When the Asynchronous Operation Processor completes an asynchronously initiated operation, the Completion Dispatcher performs an application callback on its behalf. It usually delegates the event notification handling to the suitable Completion Handler according to the type of the event.

Completion Handler: The interface implemented by the application to process asynchronous operation completion events.

Let's look at how this pattern can be implemented (as a simple echo server) using the new Java NIO.2 API added in Java 7.
public class ProactorInitiator {
    static int ASYNC_SERVER_PORT = 4333;

    public void initiateProactiveServer(int port) throws IOException {
        final AsynchronousServerSocketChannel listener =
            AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(port));
        AcceptCompletionHandler acceptCompletionHandler = new AcceptCompletionHandler(listener);
        SessionState state = new SessionState();
        listener.accept(state, acceptCompletionHandler);
    }

    public static void main(String[] args) {
        try {
            System.out.println("Async server listening on port : " + ASYNC_SERVER_PORT);
            new ProactorInitiator().initiateProactiveServer(ASYNC_SERVER_PORT);
        } catch (IOException e) {
            e.printStackTrace();
        }
        // Sleep indefinitely since otherwise the JVM would terminate
        while (true) {
            try {
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

public class AcceptCompletionHandler
        implements CompletionHandler<AsynchronousSocketChannel, SessionState> {

    private AsynchronousServerSocketChannel listener;

    public AcceptCompletionHandler(AsynchronousServerSocketChannel listener) {
        this.listener = listener;
    }

    @Override
    public void completed(AsynchronousSocketChannel socketChannel, SessionState sessionState) {
        // Accept the next connection
        SessionState newSessionState = new SessionState();
        listener.accept(newSessionState, this);
        // Handle this connection
        ByteBuffer inputBuffer = ByteBuffer.allocate(2048);
        ReadCompletionHandler readCompletionHandler =
            new ReadCompletionHandler(socketChannel, inputBuffer);
        socketChannel.read(inputBuffer, sessionState, readCompletionHandler);
    }

    @Override
    public void failed(Throwable exc, SessionState sessionState) {
        // Handle connection failure...
    }
}

public class ReadCompletionHandler implements CompletionHandler<Integer, SessionState> {

    private AsynchronousSocketChannel socketChannel;
    private ByteBuffer inputBuffer;

    public ReadCompletionHandler(AsynchronousSocketChannel socketChannel, ByteBuffer inputBuffer) {
        this.socketChannel = socketChannel;
        this.inputBuffer = inputBuffer;
    }

    @Override
    public void completed(Integer bytesRead, SessionState sessionState) {
        byte[] buffer = new byte[bytesRead];
        inputBuffer.rewind(); // Rewind the input buffer to read from the beginning
        inputBuffer.get(buffer);
        String message = new String(buffer);
        System.out.println("Received message from client : " + message);
        // Echo the message back to client
        WriteCompletionHandler writeCompletionHandler = new WriteCompletionHandler(socketChannel);
        ByteBuffer outputBuffer = ByteBuffer.wrap(buffer);
        socketChannel.write(outputBuffer, sessionState, writeCompletionHandler);
    }

    @Override
    public void failed(Throwable exc, SessionState attachment) {
        // Handle read failure...
    }
}

public class WriteCompletionHandler implements CompletionHandler<Integer, SessionState> {

    private AsynchronousSocketChannel socketChannel;

    public WriteCompletionHandler(AsynchronousSocketChannel socketChannel) {
        this.socketChannel = socketChannel;
    }

    @Override
    public void completed(Integer bytesWritten, SessionState attachment) {
        try {
            socketChannel.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void failed(Throwable exc, SessionState attachment) {
        // Handle write failure...
    }
}

public class SessionState {

    private Map<String, String> sessionProps = new ConcurrentHashMap<String, String>();

    public String getProperty(String key) {
        return sessionProps.get(key);
    }

    public void setProperty(String key, String value) {
        sessionProps.put(key, value);
    }
}

Each type of event completion (accept/read/write) is handled by a separate completion handler implementing the CompletionHandler interface (AcceptCompletionHandler, ReadCompletionHandler, WriteCompletionHandler).
The state transitions are managed inside these completion handlers. The additional SessionState argument can be used to hold client-session-specific state across a series of completion events. NIO Frameworks (HttpCore)  If you are thinking of implementing a NIO-based HTTP server, you are in luck. The Apache HttpCore library provides excellent support for handling HTTP traffic with NIO. The API provides higher-level abstractions on top of the NIO layer, with HTTP request handling built in. A minimal non-blocking HTTP server implementation that returns a dummy output for any GET request is given below.

public class NHttpServer {

    public void start() throws IOReactorException {
        HttpParams params = new BasicHttpParams();
        // Connection parameters
        params.setIntParameter(HttpConnectionParams.SO_TIMEOUT, 60000)
              .setIntParameter(HttpConnectionParams.SOCKET_BUFFER_SIZE, 8 * 1024)
              .setBooleanParameter(HttpConnectionParams.STALE_CONNECTION_CHECK, true)
              .setBooleanParameter(HttpConnectionParams.TCP_NODELAY, true);

        final DefaultListeningIOReactor ioReactor = new DefaultListeningIOReactor(2, params);
        // Spawns an IOReactor having two reactor threads running selectors.
        // The number of threads here is usually matched to the number of
        // processor cores in the system.

        // Application specific readiness event handler
        ServerHandler handler = new ServerHandler();
        final IOEventDispatch ioEventDispatch = new DefaultServerIOEventDispatch(handler, params);
        // Default IO event dispatcher encapsulating the event handler

        ListenerEndpoint endpoint = ioReactor.listen(new InetSocketAddress(4444));

        // Start the IO reactor in a new separate thread
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("Listening on port 4444");
                    ioReactor.execute(ioEventDispatch);
                } catch (InterruptedIOException ex) {
                    ex.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        t.start();

        // Wait for the endpoint to become ready,
        // i.e.
for the listener to start accepting requests.
        try {
            endpoint.waitFor();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws IOReactorException {
        new NHttpServer().start();
    }
}

public class ServerHandler implements NHttpServiceHandler {

    private static final int BUFFER_SIZE = 2048;
    private static final String RESPONSE_SOURCE_BUFFER = "response-source-buffer";

    // The factory to create HTTP responses
    private final HttpResponseFactory responseFactory;
    // The HTTP response processor
    private final HttpProcessor httpProcessor;
    // The strategy to re-use connections
    private final ConnectionReuseStrategy connStrategy;
    // The buffer allocator
    private final ByteBufferAllocator allocator;

    public ServerHandler() {
        super();
        this.responseFactory = new DefaultHttpResponseFactory();
        this.httpProcessor = new BasicHttpProcessor();
        this.connStrategy = new DefaultConnectionReuseStrategy();
        this.allocator = new HeapByteBufferAllocator();
    }

    @Override
    public void connected(NHttpServerConnection nHttpServerConnection) {
        System.out.println("New incoming connection");
    }

    @Override
    public void requestReceived(NHttpServerConnection nHttpServerConnection) {
        HttpRequest request = nHttpServerConnection.getHttpRequest();
        if (request instanceof HttpEntityEnclosingRequest) {
            // Handle POST and PUT requests
        } else {
            ContentOutputBuffer outputBuffer =
                new SharedOutputBuffer(BUFFER_SIZE, nHttpServerConnection, allocator);
            HttpContext context = nHttpServerConnection.getContext();
            context.setAttribute(RESPONSE_SOURCE_BUFFER, outputBuffer);
            OutputStream os = new ContentOutputStream(outputBuffer);

            // Create the default response to this request
            ProtocolVersion httpVersion = request.getRequestLine().getProtocolVersion();
            HttpResponse response = responseFactory.newHttpResponse(
                httpVersion, HttpStatus.SC_OK, nHttpServerConnection.getContext());

            // Create a basic HttpEntity using the source
            // channel of the response pipe
            BasicHttpEntity entity = new BasicHttpEntity();
            if
(httpVersion.greaterEquals(HttpVersion.HTTP_1_1)) {
                entity.setChunked(true);
            }
            response.setEntity(entity);

            String method = request.getRequestLine().getMethod().toUpperCase();
            if (method.equals("GET")) {
                try {
                    nHttpServerConnection.suspendInput();
                    nHttpServerConnection.submitResponse(response);
                    os.write(new String("Hello client..").getBytes("UTF-8"));
                    os.flush();
                    os.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            // Handle other HTTP methods
        }
    }

    @Override
    public void inputReady(NHttpServerConnection nHttpServerConnection, ContentDecoder contentDecoder) {
        // Handle request-enclosed entities here by reading
        // them from the channel
    }

    @Override
    public void responseReady(NHttpServerConnection nHttpServerConnection) {
        try {
            nHttpServerConnection.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void outputReady(NHttpServerConnection nHttpServerConnection, ContentEncoder encoder) {
        HttpContext context = nHttpServerConnection.getContext();
        ContentOutputBuffer outBuf =
            (ContentOutputBuffer) context.getAttribute(RESPONSE_SOURCE_BUFFER);
        try {
            outBuf.produceContent(encoder);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void exception(NHttpServerConnection nHttpServerConnection, IOException e) {
        e.printStackTrace();
    }

    @Override
    public void exception(NHttpServerConnection nHttpServerConnection, HttpException e) {
        e.printStackTrace();
    }

    @Override
    public void timeout(NHttpServerConnection nHttpServerConnection) {
        try {
            nHttpServerConnection.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void closed(NHttpServerConnection nHttpServerConnection) {
        try {
            nHttpServerConnection.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

The IOReactor class basically wraps the demultiplexer functionality, with the ServerHandler implementation handling readiness events.
Apache Synapse (an open-source ESB) contains a good implementation of a NIO-based HTTP server, in which NIO is used to scale to a large number of clients per instance with fairly constant memory usage over time. The implementation also has good debugging and server statistics collection mechanisms built in, along with Axis2 transport framework integration. It can be found at [1]. Conclusion  There are several options when it comes to doing I/O, and they can affect the scalability and performance of servers. Each of the above I/O mechanisms has pros and cons, so decisions should be made based on the expected scalability and performance characteristics, as well as the ease of maintenance, of these approaches. This concludes my somewhat long-winded article on I/O. Feel free to provide suggestions, corrections or comments that you may have. Complete source code for the servers outlined in the post, along with clients, can be downloaded from here. Related links  There were many references I went through in the process. Below are some of the interesting ones. [1] [2] [3] [4] [5] [6] [7] Java NIO by Ron Hitchens [8] [9] [10] Reference: I/O Demystified from our JCG partner Buddhika Chamith at the Source Open blog....

Why You Didn’t Get the Interview

After reading the tremendous response to Why You Didn’t Get the Job (a sincere thanks to those that read and shared the post) I realized that many of the reasons referenced were specific to mistakes candidates make during interviews. At least a handful of readers told me that they didn’t get the job because they didn’t even get the interview.With a down economy, most of us have heard accounts of a job seeker sending out 100, 200, perhaps 300 résumés without getting even one response. These anecdotes are often received by sympathetic ears who commiserate and then share their personal stories of a failed job search. To anyone who has sent out large quantities of résumés without any response or interviews, I offer this advice: The complete lack of response is not due to the economy. The lack of response is based on your résumé, your experience, or your résumé submission itself.My intent here is to help and certainly not to offend, so if you are one of these people that has had a hard time finding new work, please view this as free advice mixed with a touch of tough love. I have read far too many comments lately from struggling job seekers casting blame for their lack of success in the search (“it wasn’t a real job posting”, “the manager wasn’t a good judge of talent“, etc.), but now it’s time to take a look inward on how you can maximize your success. I spoke to a person recently who had sent out over 100 résumés without getting more than two interviews, and I quickly discovered that the reasons for the failure were quite obvious to the trained eye (mine). The economy isn’t great, but there are candidates being interviewed for the jobs you are applying for (most of them anyway), and it’s time to figure out why that interview isn’t being given to you.If you apply for a job and don’t receive a response, there are only a few possibilities as to why that are within our control (please note the emphasis before commenting). 
Generally the problem is a mistake made during the résumé submission itself, problems with the résumé, or your experience. Qualified candidates that pay attention to these tips will see better results from their search efforts.  Your Résumé Submission  Résumés to jobs@blackholeofdeath – The problem here isn’t that your résumé or application was flawed, it’s just that nobody has read it. Sending to hr@ or jobs@ addresses is never ideal, and your résumé may be funneled to a scoring system that scans it for certain buzzwords and rates it based on the absence, presence and frequency of these words. HRbot apocalypse… Solution – Do some research to see if you know anyone who works/worked at the company, even a friend of a friend, to submit the résumé. Pro tip: Chances are the internal employee may even get a referral bonus. LinkedIn is a valuable tool for this. Working with an agency recruiter will also help here, as recruiters are typically sending your information directly to internal HR or hiring managers. Follow instructions – If the job posting asks that you send a cover letter, résumé, and salary requirements, this request serves two purposes. First and most obviously, they actually want to see how well you write (cover letter), your experience (résumé), and the price tag (salary requirements). Second, they want to see if you are able and willing to follow instructions. Perhaps that is why the ad requested the documents in a specific format? Some companies are now consciously making the application process even a bit more complicated, which serves as both a test of your attention to detail and a way to gauge whether applicants are interested enough to take an extra step. Making it more difficult for candidates to apply should yield a qualified and engaged candidate pool, which is the desired result. Solution – Carefully read what the manager/recruiter is seeking and be sure to follow the directions exactly.
Have a friend review your application before hitting send.Spelling and grammar – Spelling errors are inexcusable on a résumé today. Grammar is given much more leeway, but frequent grammatical errors are a killer. Solution – Have a friend or colleague read it for you, as it is much more difficult to edit your own material (trust me).Price tag – As you would expect, if you provide a salary requirement that is well above the listed (or unlisted) range, you will not get a response. Conversely and counterintuitively, if you provide a salary requirement that is well below the range, you will also not get a response. Huh?Suppose you want to hire someone to put in a new kitchen, and you get three estimates. The first is 25K, the second is 20K, and the third is 2K. Which one are you going to choose? It’s hard to tell, but I’m pretty sure you aren’t going to use the one that quoted you 2K. Companies want to hire candidates that are aware of market value and priced accordingly, and anyone asking for amounts well above market will not get any attention. Solution – Research the going rate for the job and be sure to manage your expectations based on market conditions. Another strategy is trying to delay providing salary information until mutual interest is established. If the company falls in love, the compensation expectation might hurt less. There is some risk of wasting time in interviews if you do not provide information early in the process, and most companies today will require the information before agreeing to an interview.Canned application – By ‘canned’ I am referring to job seekers that are obviously cutting and pasting content from previous cover letters instead of taking the time to try and personalize the content. Solution – Go to the hiring firm’s website and find something specific and unique that makes you want to work for that company. Include that information in your submission. 
If you are using a template and just filling in the blanks (“I read your job posting on _____ and I am really excited to learn that your company _____ is hiring a ______”), delete the template now. If you aren’t willing to invest even a few minutes into the application process, why should the company invest any time learning about you?Too eager – If I receive a résumé submission for a job posting and then get a second email from that candidate within 24 hours asking about the submission, I can be fairly sure that this is an omen. If I get a call on my mobile immediately after receiving the application ‘just to make sure it came through‘, you might as well just have the Psycho music playing in the background. Even if this candidate is qualified, there will probably be lots of hand-holding and coaching required to get this person hired. Reasonably qualified candidates with realistic expectations and an understanding of business acumen don’t make this mistake. Solution – Have patience while waiting for a response to your résumé, and be sure to give someone at least a couple/few days to respond. If you are clearly qualified for a position, you will get a reply when your résumé hits the right desk. Pestering or questioning the ability of those that are processing your application is a guarantee that you will not be called in.  Your RésuméYour objective – If your objective states “Seeking a position as a Python developer in a stable corporate environment“, don’t expect a callback from the start-up company looking for a Ruby developer. This applies even if you are qualified for the job! Why doesn’t the company want to talk to you if you are qualified? Because you clearly stated that you wanted to do something else. If you put in writing that you are seeking a specific job, that information must closely resemble the job to which you are applying. 
Solution – You may choose to have multiple copies of your résumé with multiple objectives, so you can customize the résumé to the job (just be sure to remember which one you used so you bring the correct résumé to the interview!). As there may be a range of positions you are both qualified for and willing to take, using a ‘Profile’ section that summarizes your skills instead of an ‘Objective’ is a safer alternative. Spelling and grammar (again) – see above. tl;dr – To any non-geek readers, this means ‘too long; didn’t read’. To my geek readers, many of you are guilty of this. I’ve written about this over and over again, but I still get seven-page résumés from candidates. I have witnessed hiring managers respond to long-winded résumés with such gems as ‘if her résumé is this long, imagine how verbose her code will be’. (Even for non-Java candidates! #rimshot) Hiring managers for jobs that require writing skills or even verbal communication can be extremely critical of tl;dr résumés. Solution – Keep it to two or three pages maximum. If you can’t handle that, get professional help. Buzzword bingo – This is a term that industry insiders use to refer to résumés that include a laundry list of acronyms and buzzwords. The goal is either to catch the eye of an automated search robot (or human) designed to rate résumés based on certain words, or to insinuate that the candidate actually has all the listed skills. Software engineers are probably more guilty of this than other professionals, as the inclusion of one particular skill can sometimes make the difference between your document being viewed by an actual human or not. When candidates list far more skill buzzwords than one person could reasonably know, you can be sure the recruiter or manager will pass based on credibility concerns. Solution – I advise candidates to limit the buzzwords on their résumé to technologies, tools, or concepts that they could discuss in an intelligent conversation.
If you would not be comfortable answering questions about it in an interview, leave it off.  Your ExperienceGaping holes – If you have had one or more extended period of unemployment, hiring managers and recruiters may simply decide to pass on you instead of asking about the reasons why. Perhaps you took a sabbatical, went back to school full-time, or left on maternity leave. Don’t assume that managers are going to play detective and figure out that the years associated with your Master’s degree correspond to the two year gap in employment. Solution – Explain and justify any periods of unemployment on your résumé with as much clarity as possible without going into too many personal details. Mentioning family leave is appropriate, but providing the medical diagnosis of your sick relative is not.Job hopping – Some managers are very wary of candidates that have multiple employers over short periods of time. In the software world it tends to be common to make moves a bit more frequently than in some other professions, but there comes a point where it’s one move too many and you may be viewed as a job hopper. The fear of hiring a job hopper has several roots. A manager may feel you are a low performer, a mercenary that always goes to the highest bidder, or that you may get bored after a short time and seek a new challenge. Companies are unwilling to invest in hires that appear to be temporary. Solution – If the moves were the result of mergers, acquisitions, layoffs, or a change in company direction, be sure to note these conditions somewhere in the résumé. Never use what could be viewed as potential derogatory information in the explanation. Clearly list if certain jobs were project/contract.Listed experience is irrelevant/unrelated – This could be a symptom of simply being unqualified for the position, or it could be tied to an inability to detail what you actually do that is relevant to the listed job requirements. 
I would suspect that most of the aforementioned people (who received no responses to 100 submissions) probably fall into the unqualified category, as job seekers tend to feel overconfident about being a fit for a wider range of positions than is realistic. Companies expect a very close fit during a buyer’s market, and are willing to open up their hiring standards a bit when the playing field starts to level. Solution – Be sure to elaborate on all elements of your job that closely resemble the responsibilities listed in the posting. Instead of wasting time filling out applications for jobs that are clearly well out of reach, spend that time researching jobs that are a better match for you. You are overqualified – The term ‘overqualified’ seems to be overused by rejected applicants today, as there is no real stigma to the term. It’s entirely comfortable for a candidate to say/think “I didn’t get the job because I possess more skills at a higher level than the employer was seeking“. When a company is seeking an intermediate-level engineer, it isn’t always because they want someone earlier in their career than a senior-level engineer (although in some cases this could be true). Rather, they want the intermediate-level engineer because that is what their budget dictates, or they expect that senior engineers would not be challenged by the role (and therefore would leave). There are also situations where companies will not want to hire you because your experience indicates that you will only take this job until something better comes along. A CEO applying for a job as a toll collector will not be taken seriously. Solution – Be sure that your résumé accurately represents your level of skill and experience. Inflating your credentials or job titles will always work against you.  Conclusion  The time you spend on your job search is valuable, so be sure to use it wisely.
Invest additional effort on applications for jobs that you feel are a great fit, and go above and beyond to be sure your submission gets attention. As a general rule of thumb, you want to be sure that whoever receives your résumé will get it into the hands of someone who has a similar job to the one you want, not just someone trained to look for buzzwords. Employees that have similar experience will be the best judges of your fit. If you aren’t getting the response you want, do not keep using the same methods and expecting a different result.Reference: Why You Didn’t Get the Interview from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Git newbie commands

If you’re new to Git you will recognize that some things work differently compared to SVN or CVS based repositories. This blog explains the 10 most important commands in a Git workflow that you need to know about. If you are on Windows and you want to follow the steps below, all you need to do is set up Git on your local machine. Before we go into Git commands, bear in mind (and do not forget!) that Git has a working directory, a staging area and the local repository. See the overview below. The Git workflow is as follows: You go to the directory you want to have under version control. You use git init to put this directory under version control. This creates a new repository for that current location. You make changes to your files, then use git add to stage files into the staging area. You’ll use git status and git diff to see what you’ve changed, and then finally git commit to actually record the snapshot forever into your local repository. When you want to upload your changes to a remote repository you’ll use git push. When you want to download changes from a remote repository to your local repository you’ll use git fetch and git merge. Let’s go through this step-by-step. To create a repository from an existing directory of files, you can simply run git init in that directory. Go to the directory you want to get under version control: git init  All new and changed files need to be added to the staging area prior to committing to the repository. To add all files in the current directory to the staging area: git add --all  To commit the files and changes to the repository: git commit -am "Initial commit"  Note that I have used the -am option, which implicitly stages all modified and deleted tracked files (brand-new files still need an explicit git add). This command is equivalent to the SVN- or CVS-style “commit”. Again: if you want to update your local Git repository there is always an add operation followed by a commit operation, with all new and modified files. Then you create a repository at your remote host. Let’s say you named it git_example.
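The local init → add → commit cycle described above can be sketched end-to-end as a small script. This assumes Git is installed and on the PATH; the throwaway directory, file name, identity and commit message are all illustrative:

```shell
# Sketch of the basic local workflow: init, stage, commit, inspect.
set -e
cd "$(mktemp -d)"                         # throwaway directory for the demo
git init -q                               # put the directory under version control
git config user.email "you@example.com"   # commits need an identity configured
git config user.name  "Your Name"
echo "hello git" > README.txt             # change something in the working directory
git add --all                             # stage all new and changed files
git commit -q -m "Initial commit"         # record the snapshot in the local repository
git log --oneline                         # inspect the history: one commit so far
```

Running the same add/commit pair after every round of edits is the whole local loop; everything else (push, fetch, merge) only comes into play once a remote is involved.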
Then you add the remote repository address to your local Git configuration: git remote add EXAMPLE [remote_repository_url]  In the example the URL would point at my repository on the remote host; you’ll need to use yours obviously. I have named the remote repository “EXAMPLE”. You can refer to an alias name instead of using the remote URL all the time. You are ready to communicate with a remote repository now. If you are behind a firewall you make the proxy configuration: git config --global http.proxy [proxy_url]  Then you push your files to the remote repository: git push EXAMPLE master  Then imagine somebody changed the remote files. You need to get them: git fetch EXAMPLE  You need to merge those changes from the remote master into your local master branch. Assuming that your current context is the master branch and you want to merge the fetched EXAMPLE master branch into it, you’ll write: git merge EXAMPLE/master  To compare the staging area to your working directory: git status -s  The example shows the status after I have modified the README.txt (but have not added or committed yet). Without any extra arguments, a simple git diff will display what content you’ve changed in your project since the last commit that is not yet staged for the next commit snapshot: git diff  The example shows the diff output after I have edited the README.txt file (but have not added or committed yet). When I add all changes to staging, git diff will not display changes ’cause there is nothing in the working directory that has not been staged. It’s different with git status. It shows the differences between your last commit and the staging/working area. In short: git status shows differences between your local repository and your working directory/staging area. Whereas git diff (as used above) shows differences between your staging area and your working directory. That’s it. These are the most important Git commands a newbie must know to get started.
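The difference between git status and git diff described above is easy to see in a quick local experiment (the repository, file name and messages below are made up for the demo; no remote is needed):

```shell
# Demo: status vs. diff for staged and unstaged changes.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name  "Your Name"
echo "version 1" > README.txt
git add README.txt
git commit -q -m "add README"

echo "version 2" > README.txt   # modify the file, but do not stage it
git status -s                   # ' M README.txt' - modified, not staged
git diff --stat                 # shows the unstaged change

git add README.txt              # stage it
git diff --stat                 # now empty: working dir matches staging area
git diff --cached --stat        # compares the staging area against the last commit
```

Once the change is staged, plain git diff goes quiet while git status still reports it, which is exactly the distinction the paragraph above draws.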
See the reference for more information on using Git. Downloading a remote repository  If you would like to copy a repository from a remote address to your local machine: git clone [remote_repository_url]  You can now work on the code and push the changes back to that remote repository if you like. Working with branches – changing your current context  A branch in Git is nothing else but the “current context” you are working in. Typically you start working in the “master” branch. Let’s say you want to try some stuff and you’re not sure if what you’re doing is a good idea (which happens very often actually :-)). In that case you can create a new branch and experiment with your idea: git branch [branch_name]  When you just enter git branch it will list your branches. If you’d like to work with your new branch, you can write: git checkout [branch_name]  One important fact to notice: if you switch between branches it does not change the state of your modified files. Say you have a modified file. You switch from the master branch to your new some_crazy_idea branch. After the switch the file will still be in a modified state. You could commit it to the some_crazy_idea branch now. If you switched to the master branch, however, this commit would not be visible, ’cause you did not commit within the master branch context. If the file was new, you would not even see the file in your working tree anymore. If you want to let others know about your new idea you push the branch to the remote repository: git push [remote_repository_name] [branch_name]  You’d use fetch instead of push to get the changes in a remote branch into your local repository again. This is how you delete a branch again if you don’t need it anymore: git branch -d [branch_name]  Removing files  If you accidentally committed something to a branch you can easily remove the file again. For example, to remove the readme.txt file in your current branch: git rm --cached readme.txt  The --cached option only removes the file from the index.
Your working directory remains unchanged. You can also remove a folder. The .settings folder of an Eclipse project – for instance – is nothing you should share with others: git rm --cached -r some_eclipse_project/.settings  After you run the rm command the file is removed from the index, but it is still in the repository history. You can permanently delete the complete history with this command: Note: be very careful with commands like this and try them in a copy of your repository before you apply them to your productive repository. Always create a copy of the complete repository before you run such commands. git filter-branch --index-filter 'git rm --cached --ignore-unmatch [your_file_name]' HEAD  Ignoring files: you do not want to version control a certain file or directory  To ignore files you just add the file name to the .gitignore file in the directory that owns the file. This way it will not be added to version control anymore. Here is my .gitignore for the root directory of an Eclipse project: /target /.settings .project .classpath  It ignores the target and the .settings folder as well as the .project and the .classpath file. Sometimes it’s helpful to configure global ignore rules that apply to all your repositories: git config --global core.excludesfile ~/.gitignore_global  This added the following entry to my global .gitconfig parameters file, which resides in the user’s home directory: excludesfile = d:/dev_home/repositories/git/.gitignore_global  These are my current global exclude rules in my .gitignore_global file: # Compiled source # ################### *.com *.class *.dll *.exe *.o *.so # Logs and databases # ###################### *.log  Note: rules in a repository’s .gitignore files are committed and therefore shared with other users, while the global excludes file stays on your machine. Local per-repo rules can also be added to the .git/info/exclude file in your repo. These rules are not committed with the repo so they are not shared with others.
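A short experiment ties the ignore rules and git rm --cached together. The folder and file names below mirror the Eclipse example above but are otherwise arbitrary:

```shell
# Demo: ignoring generated files and un-tracking an accidentally added one.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name  "Your Name"

mkdir target
echo "compiled output" > target/App.class
printf '/target\n*.class\n' > .gitignore   # ignore the build output
git check-ignore target/App.class           # prints the path: the rule matches

echo "local notes" > notes.txt
git add .gitignore notes.txt
git commit -q -m "initial"
git rm --cached notes.txt                   # stop tracking; keep the file on disk
test -f notes.txt                           # still in the working directory
```

After the rm --cached, adding notes.txt to .gitignore as well would keep it from showing up as untracked in git status from then on.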
Restoring files – put the clocks back

Sometimes you make changes to your files and after some time you realize that what you've done was a bad idea. You then want to go back to your last commit state. If you made changes to your working directory and you want to restore your last HEAD commit in your working directory, enter: git reset --hard HEAD This command sets the current branch head to the last commit (HEAD) and overwrites your local working directory with that last commit state (the --hard option). So it will overwrite your modified files. Instead of HEAD (which is your last commit) you could name a branch or a tag like 'v0.6'. You can also reset to a previous commit: HEAD~2 is two commits before your last commit. Maybe you want to restore a file you have deleted in your working directory. Here is what I entered to restore a Java file I had deleted accidentally: git checkout HEAD sources/spring-decorator/src/test/java/com/schlimm/decorator/simple/ Again: instead of HEAD you could name a branch or a tag like 'v0.6'. You can also restore the file from a previous commit: HEAD~2 is two commits before your last commit.

Working with tags – making bookmarks to your source code

Sometimes you want to mark a version of your source code so you can refer to it later on. To apply a version tag v1.0.0 to your files you'd write: git tag -a v1.0.0 -m "Creating the first official version" You can share your tags with others in a remote repository: git push [remote_repository_name] --tags where remote_repository_name is the alias name for your remote repository. You write fetch instead of push to get tags that others committed to the remote repository down to your local repository. If you just enter git tag it will give you the list of known tags.
To get information about the v1.0.0 tag, you'd write: git show v1.0.0 -s If you want to continue working from a tag, for instance on the production branch with version v5.0.1, you enter: git checkout v5.0.1 -b [your_production_branch] Note that this command also creates a new branch for the tag; this way you can make commits and record anything else you wish back to the repository. Reference: "Top 10 commands for the Git newbie" from our JCG partner Niklas....

Creating a Java Dynamic Proxy

Java's dynamic proxy mechanism provides an interesting way to create proxy instances. The steps to create a dynamic proxy are a little tedious though. Consider a proxy to be used for auditing the time taken for a method call on a service instance –

public interface InventoryService {
    public Inventory create(Inventory inventory);
    public List<Inventory> list();
    public Inventory findByVin(String vin);
    public Inventory update(Inventory inventory);
    public boolean delete(Long id);
    public Inventory compositeUpdateService(String vin, String newMake);
}

The steps to create a dynamic proxy for instances of this interface are along these lines:

1. Create an instance of a java.lang.reflect.InvocationHandler; this will be responsible for handling the method calls on behalf of the actual service instance. A sample invocation handler for auditing is the following:

public class AuditProxy implements java.lang.reflect.InvocationHandler {

    private Object obj;

    public static Object newInstance(Object obj) {
        return java.lang.reflect.Proxy.newProxyInstance(
                obj.getClass().getClassLoader(),
                obj.getClass().getInterfaces(),
                new AuditProxy(obj));
    }

    private AuditProxy(Object obj) {
        this.obj = obj;
    }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        Object result;
        try {
            System.out.println("before method " + m.getName());
            long start = System.nanoTime();
            result = m.invoke(obj, args);
            long end = System.nanoTime();
            System.out.println(String.format("%s took %d ns", m.getName(), (end - start)));
        } catch (InvocationTargetException e) {
            throw e.getTargetException();
        } catch (Exception e) {
            throw new RuntimeException("unexpected invocation exception: " + e.getMessage());
        } finally {
            System.out.println("after method " + m.getName());
        }
        return result;
    }
}

2.
When creating instances of InventoryService, return a proxy, which in this case is the AuditProxy, composing instances of InventoryService, which can be better explained using a UML diagram. This is how it would look in code:

InventoryService inventoryService = (InventoryService) AuditProxy.newInstance(new DefaultInventoryService());

Now, any call to inventoryService will go via the AuditProxy instance, which measures the time taken in the method while delegating the actual method call to the InventoryService instance. So what are proxies used for?

1. Spring AOP uses them extensively – it internally creates a dynamic proxy for different AOP constructs.
2. As in this example, for any class decoration – though AOP will usually be a better fit for such a use case.
3. For any frameworks needing to support interface- and annotation-based features – a real proxied instance need not even exist; a dynamic proxy can recreate the behavior expected of an interface, based on some metadata provided through annotations.

Reference: Creating a Java Dynamic Proxy from our JCG partner Biju Kunjummen at the all and sundry blog....
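The moving parts above can be condensed into a small, self-contained sketch. Greeter, TimingProxyDemo, and withTiming are hypothetical names introduced only for illustration; the handler follows the same pattern as the AuditProxy, with System.out standing in for a logger.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Proxy;

public class TimingProxyDemo {

    // Hypothetical service interface, standing in for InventoryService
    interface Greeter {
        String greet(String name);
    }

    // Wraps any implementation of iface in a proxy that times each call
    @SuppressWarnings("unchecked")
    static <T> T withTiming(T target, Class<T> iface) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();
            try {
                // delegate to the real instance
                return method.invoke(target, args);
            } catch (InvocationTargetException e) {
                // unwrap the target's own exception, as AuditProxy does
                throw e.getTargetException();
            } finally {
                System.out.printf("%s took %d ns%n",
                        method.getName(), System.nanoTime() - start);
            }
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }

    public static void main(String[] args) {
        Greeter greeter = withTiming(name -> "Hello, " + name, Greeter.class);
        System.out.println(greeter.greet("world")); // prints "Hello, world" after the timing line
    }
}
```

Note that the proxy is created against the interface, not the class: callers only ever see the Greeter type, which is exactly why frameworks like Spring AOP can swap the target behind it.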

Which JSRs Are Included In Java EE 7?

I started to fill out a table of all of the Java Specification Requests that are supposed to go into Java EE 7. Because the platform edition is still being decided, some of the details are rather hard to pin down. The full Java EE 7 product has the following standard components and APIs:

Name | Version | Description | JSR | Web Profile
Batch Process | 1.0 | Batch Processing | 352 |
Bean Validation | 1.1 | Bean validation framework | 349 |
Common Annotations | 1.1 | Common Annotations for the Java EE platform | 250 | Might be
CDI | 1.1 | Contexts and Dependency Injection for Java EE | 346 | Y
Concurrency Utilities | 1.0 | Concurrency Utilities for the Java EE platform | 236 |
DI | 1.0 | Dependency Injection for Java | 330 |
EL | 3.0 | Unified Expression Language for configuration of web components and context dependency injection | 341 | Y
EJB | 3.2 | Enterprise Java Beans, entity beans and EJB QL | 345 | Y (Lite)
JAAS | | Java Authentication & Authorization Service | |
JACC | 1.4 | Java Authorization Contract for Containers | 115 |
Java EE Management | 1.1 | | |
JASPIC | 1.1 | Java Authentication Service Provider Interface for Containers | 196 |
JavaMail | 1.4 | Java Mail API | 919 | Might be
JAXB | | Java API for XML Binding | |
JAXP | 1.4 | Java API for XML Parsing | 206 |
JAX-RS | 2.0 | Java API for RESTful Services | 339 | Y
JAX-WS | 1.3 | Java API for XML-based Web Services including SOAP and WSDL | 224 |
JCA | 1.7 | J2EE Connector Architecture | ? |
JCache | 1.0 | Temporary Caching API for Java EE | 107 | Maybe
JMS | 2.0 | Java Message Service | 343 |
JPA | 2.1 | Java Persistence API | 338 | Y
JSF | 2.2 | Java Server Faces | 344 | Y
JSON | 1.0 | JavaScript Serialization Object Notation Protocol | 353 | Maybe
JSP | 2.3 | Java Server Pages | ? | Y
JSPD | 1.0 | Java Server Pages Debugging | ? | Y
JSTL | 1.2 | Java Standard Template Library | ? | Y
JTA | 1.2 | Java Transaction API | |
Managed Beans | 1.0 | Managed Beans 1.1 | 342? | Y
Servlet | 3.1 | Java Servlet | 340 | Y
Web Services | 1.3 | | 224 |
Web Services Metadata | 2.1 | | |

Are these the correctly numbered JSRs? Are you involved with the newest of these JSRs? If so, can you give a definite answer? Reference: Which JSRs Are Included In Java EE 7? from our JCG partner Peter Pilgrim at Peter Pilgrim's blog....

JSF – PrimeFaces & Hibernate Integration Project

This article shows how to develop a project using JSF, PrimeFaces and Hibernate. A sample application is shown below.

Used Technologies:

JDK 1.6.0_21
Maven 3.0.2
JSF 2.0.3
PrimeFaces 2.2.1
Hibernate 3.6.7
MySQL Java Connector 5.1.17
MySQL 5.5.8
Apache Tomcat 7.0

STEP 1 : CREATE USER TABLE

A new USER table is created by executing the script below:

CREATE TABLE USER (
id int(11) NOT NULL,
name varchar(45) NOT NULL,
surname varchar(45) NOT NULL,
PRIMARY KEY (`id`)
);

STEP 2 : CREATE MAVEN PROJECT

A Maven project is created as below. (It can be created using Maven or an IDE plug-in.)

STEP 3 : LIBRARIES

JSF, Hibernate and their dependency libraries are added to Maven's pom.xml. These libraries will be downloaded from the Maven Central Repository.

<!-- JSF library -->
<dependency> <groupId>com.sun.faces</groupId> <artifactId>jsf-api</artifactId> <version>2.0.3</version> </dependency>
<dependency> <groupId>com.sun.faces</groupId> <artifactId>jsf-impl</artifactId> <version>2.0.3</version> </dependency>
<dependency> <groupId>javax.servlet</groupId> <artifactId>jstl</artifactId> <version>1.2</version> </dependency>
<!-- Hibernate library -->
<dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-core</artifactId> <version>3.6.7.Final</version> </dependency>
<dependency> <groupId>javassist</groupId> <artifactId>javassist</artifactId> <version>3.12.1.GA</version> </dependency>
<!-- MySQL Java Connector library -->
<dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.17</version> </dependency>
<!-- Log4j library -->
<dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.16</version> </dependency>

Note: primefaces-2.2.1.jar can also be downloaded via Maven or the link below:

<repository> <id>prime-repo</id> <name>PrimeFaces Maven Repository</name> <url></url> <layout>default</layout> </repository>

<dependency> <groupId>org.primefaces</groupId> <artifactId>primefaces</artifactId> <version>2.2.1</version>
</dependency>

STEP 4 : CREATE MANAGED BEAN CLASS

A new managed bean class is created. This bean can be associated with UI components. Managed beans contain properties with getter and setter methods. They can also contain methods for event handling, navigation, validation, etc.

package com.otv;

import java.io.Serializable;
import java.util.List;

import org.apache.log4j.Logger;
import org.hibernate.Session;
import org.hibernate.Transaction;

import com.otv.hbm.User;
import com.otv.util.HibernateUtil;

/**
 * @author
 * @since 3 Oct 2011
 * @version 1.0.0
 */
public class UserManagedBean implements Serializable {

    private static final long serialVersionUID = 1L;
    private static Logger log = Logger.getLogger(UserManagedBean.class);
    private static final String SUCCESS = "success";
    private static final String ERROR = "error";
    private String name;
    private String surname;
    private String message;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getSurname() {
        return surname;
    }

    public void setSurname(String surname) {
        this.surname = surname;
    }

    public String getMessage() {
        StringBuffer strBuff = new StringBuffer();
        strBuff.append("Name : ").append(this.getName());
        strBuff.append(", Surname : ").append(this.getSurname());
        this.setMessage(strBuff.toString());
        return this.message;
    }

    public void setMessage(String message) {
        this.message = message;
    }

    public String save() {
        String result = null;
        Session session = HibernateUtil.getSessionFactory().openSession();

        User user = new User();
        user.setName(this.getName());
        user.setSurname(this.getSurname());

        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            session.save(user);
            tx.commit();
            log.debug("New Record : " + user + ", wasCommitted : " + tx.wasCommitted());
            result = SUCCESS;
        } catch (Exception e) {
            if (tx != null) {
                tx.rollback();
                result = ERROR;
                e.printStackTrace();
            }
        } finally {
            session.close();
        }
        return result;
    }

    public List<User> getUsers() {
        Session session = HibernateUtil.getSessionFactory().openSession();
        List<User> userList = session.createCriteria(User.class).list();
        return userList;
    }

    public void reset() {
        this.setName("");
        this.setSurname("");
    }
}

STEP 5 : CREATE USER CLASS

A new User class is created to model the USER table.

package com.otv.hbm;

/**
 * @author
 * @since 3 Oct 2011
 * @version 1.0.0
 */
public class User {

    private int id;
    private String name;
    private String surname;

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getSurname() {
        return surname;
    }

    public void setSurname(String surname) {
        this.surname = surname;
    }

    @Override
    public String toString() {
        StringBuffer strBuff = new StringBuffer();
        strBuff.append("id : ").append(id);
        strBuff.append(", name : ").append(name);
        strBuff.append(", surname : ").append(surname);
        return strBuff.toString();
    }
}

STEP 6 : CREATE HIBERNATEUTIL CLASS

A singleton HibernateUtil class is created to build the Hibernate SessionFactory object.

package com.otv.util;

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

/**
 * @author
 * @since 3 Oct 2011
 * @version 1.0.0
 */
public class HibernateUtil {

    private static SessionFactory sessionFactory = null;

    public static SessionFactory getSessionFactory() {
        if (sessionFactory == null) {
            sessionFactory = new Configuration().configure().buildSessionFactory();
        }
        return sessionFactory;
    }

    public static void setSessionFactory(SessionFactory sessionFactory) {
        HibernateUtil.sessionFactory = sessionFactory;
    }
}

STEP 7 : CREATE index.xhtml

index.xhtml is created.
<html xmlns='' xmlns:h='' xmlns:f='' xmlns:p=''><h:head><title>Welcome to JSF_PrimeFaces_Hibernate Project</title></h:head> <body> <h:form> <table> <tr> <td><h:outputLabel for='name' value='Name:' /></td> <td><p:inputText id='name' value='#{}'/></td> </tr> <tr> <td><h:outputLabel for='surname' value='Surname:' /></td> <td><p:inputText id='surname' value='#{userMBean.surname}'/> </td> </tr> <tr> <td><p:commandButton id='submit' value='Save' action='#{}' ajax='false'/></td> <td><p:commandButton id='reset' value='Reset' action='#{userMBean.reset}' ajax='false'/></td> </tr> </table> </h:form> </body> </html>STEP 8 : CREATE welcome.xhtml welcome.xhtml is created. <html xmlns='' xmlns:h='' xmlns:f='' xmlns:p=''><h:head> <title>Welcome to JSF_PrimeFaces_Hibernate Project</title> </h:head> <body> <h:form> <h:outputText value='Saved Record is #{userMBean.message}'></h:outputText> <p:dataTable id='users' value='#{userMBean.getUsers()}' var='user' style='width: 10%'> <p:column> <f:facet name='header'> <h:outputText value='ID' /> </f:facet> <h:outputText value='#{}' /> </p:column> <p:column> <f:facet name='header'> <h:outputText value='Name' /> </f:facet> <h:outputText value='#{}' /> </p:column> <p:column> <f:facet name='header'> <h:outputText value='Surname' /> </f:facet> <h:outputText value='#{user.surname}' /> </p:column> </p:dataTable> </h:form> </body> </html> STEP 9 : CREATE error.xhtml error.xhtml is created. <html xmlns='' xmlns:h='' xmlns:f='' xmlns:p=''><h:head><title>Welcome to JSF_PrimeFaces_Hibernate Project</title></h:head> <body> <f:view> <h:form> <h:outputText value='Transaction Error has occurred!'></h:outputText> </h:form> </f:view> </body> </html>STEP 10 : CONFIGURE faces-config.xml faces-config.xml is created as below. It covers the configuration of both managed beans and navigation between the xhtml pages. 
<?xml version='1.0' encoding='UTF-8'?> <faces-config xmlns='' xmlns:xsi='' xsi:schemaLocation=''version='2.0'><managed-bean> <managed-bean-name>userMBean</managed-bean-name> <managed-bean-class>com.otv.UserManagedBean</managed-bean-class> <managed-bean-scope>request</managed-bean-scope> </managed-bean><navigation-rule> <from-view-id>/pages/index.xhtml</from-view-id> <navigation-case> <from-outcome>success</from-outcome> <to-view-id>/pages/welcome.xhtml</to-view-id> </navigation-case> <navigation-case> <from-outcome>error</from-outcome> <to-view-id>/pages/error.xhtml</to-view-id> </navigation-case> </navigation-rule> </faces-config>STEP 11 : UPDATE web.xml web.xml is updated. <?xml version='1.0' encoding='UTF-8'?> <web-app xmlns:xsi='' xmlns='' xmlns:web='' xsi:schemaLocation='' id='WebApp_ID' version='2.5'> <display-name>OTV_JSF_PrimeFaces_Hibernate</display-name> <context-param> <param-name>javax.faces.PROJECT_STAGE</param-name> <param-value>Development</param-value> </context-param> <welcome-file-list> <welcome-file>/pages/index.xhtml</welcome-file> </welcome-file-list> <servlet> <servlet-name>Faces Servlet</servlet-name> <servlet-class>javax.faces.webapp.FacesServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>/faces/*</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>*.jsf</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>*.faces</url-pattern> </servlet-mapping> <servlet-mapping> <servlet-name>Faces Servlet</servlet-name> <url-pattern>*.xhtml</url-pattern> </servlet-mapping> </web-app>STEP 12 : CREATE user.hbm.xml User Table Configuration are set. 
<?xml version='1.0'?>
<!DOCTYPE hibernate-mapping PUBLIC '-//Hibernate/Hibernate Mapping DTD 3.0//EN' ''>
<hibernate-mapping>
<class name='com.otv.hbm.User' table='USER'>
<id name='id' type='int' column='ID'>
<generator class='increment'/>
</id>
<property name='name'>
<column name='NAME' />
</property>
<property name='surname'>
<column name='SURNAME'/>
</property>
</class>
</hibernate-mapping>

STEP 13 : CREATE hibernate.cfg.xml

hibernate.cfg.xml is created to manage the interaction between the application and the database:

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC '-//Hibernate/Hibernate Configuration DTD//EN' ''>
<hibernate-configuration>
<session-factory>
<property name='hibernate.connection.driver_class'>com.mysql.jdbc.Driver</property>
<property name='hibernate.connection.url'>jdbc:mysql://localhost:3306/Test</property>
<property name='hibernate.connection.username'>root</property>
<property name='hibernate.connection.password'>root</property>
<property name='hibernate.connection.pool_size'>10</property>
<property name='show_sql'>true</property>
<property name='dialect'>org.hibernate.dialect.MySQLDialect</property>
<!-- Mapping files -->
<mapping resource='hbm/user.hbm.xml'/>
</session-factory>
</hibernate-configuration>

STEP 14 : DEPLOY PROJECT TO APPLICATION SERVER

When the project is deployed to the application server (Apache Tomcat), the screen will be seen as below. After the submit button is clicked, the welcome.xhtml page will be seen as below.

STEP 15 : DOWNLOAD OTV_JSF_Hibernate_PrimeFaces

Reference: JSF – PrimeFaces & Hibernate Integration Project from our JCG partner Eren Avsarogullari at the Online Technology Vision blog....
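One caveat on STEP 6: the lazy null-check in HibernateUtil.getSessionFactory() is not thread-safe, and a servlet container will call it from many request threads at once. A common alternative is the initialization-on-demand holder idiom, sketched below. Config and HolderDemo are hypothetical stand-ins for org.hibernate.SessionFactory and HibernateUtil, so the example runs without Hibernate on the classpath.

```java
public class HolderDemo {

    // Hypothetical stand-in for Hibernate's SessionFactory
    static class Config {
        Config() {
            // expensive one-time setup (e.g. buildSessionFactory()) would go here
        }
    }

    private static final class Holder {
        // The JVM guarantees this initializer runs exactly once, on first
        // access, with safe publication to all threads - no locking needed.
        static final Config INSTANCE = new Config();
    }

    public static Config getSessionFactory() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Every caller sees the same instance
        System.out.println(getSessionFactory() == getSessionFactory()); // true
    }
}
```

The factory is still created lazily (only when getSessionFactory() is first called), but without the race in which two threads can each build their own SessionFactory.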

Spring Security Implementing Custom UserDetails with Hibernate

Most of the time, we will want to configure our own security access roles in web applications. This is easily achieved in Spring Security. In this article we will see the simplest way to do this. First of all we will need the following tables in the database:

CREATE TABLE IF NOT EXISTS `mydb`.`security_role` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`name` VARCHAR(50) NULL DEFAULT NULL,
PRIMARY KEY (`id`)
)
ENGINE = InnoDB
AUTO_INCREMENT = 4
DEFAULT CHARACTER SET = latin1;

CREATE TABLE IF NOT EXISTS `mydb`.`user` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`first_name` VARCHAR(45) NULL DEFAULT NULL,
`family_name` VARCHAR(45) NULL DEFAULT NULL,
`dob` DATE NULL DEFAULT NULL,
`password` VARCHAR(45) NOT NULL,
`username` VARCHAR(45) NOT NULL,
`confirm_password` VARCHAR(45) NOT NULL,
`active` TINYINT(1) NOT NULL,
PRIMARY KEY (`id`),
UNIQUE INDEX `username` (`username` ASC)
)
ENGINE = InnoDB
AUTO_INCREMENT = 9
DEFAULT CHARACTER SET = latin1;

CREATE TABLE IF NOT EXISTS `mydb`.`user_security_role` (
`user_id` INT(11) NOT NULL,
`security_role_id` INT(11) NOT NULL,
PRIMARY KEY (`user_id`, `security_role_id`),
INDEX `security_role_id` (`security_role_id` ASC),
CONSTRAINT `user_security_role_ibfk_1`
FOREIGN KEY (`user_id`)
REFERENCES `mydb`.`user` (`id`),
CONSTRAINT `user_security_role_ibfk_2`
FOREIGN KEY (`security_role_id`)
REFERENCES `mydb`.`security_role` (`id`)
)
ENGINE = InnoDB
DEFAULT CHARACTER SET = latin1;

Obviously, the user table will hold users, the security_role table will hold security roles, and user_security_role will hold the association between them. In order to keep the implementation as simple as possible, entries inside the security_role table should always start with "ROLE_"; otherwise we would need extra mapping logic (this will NOT be covered in this article).
So we execute the following statements:

insert into security_role(name) values ('ROLE_admin');
insert into security_role(name) values ('ROLE_Kennel_Owner');
insert into security_role(name) values ('ROLE_User');
insert into user (first_name,family_name,password,username,confirm_password,active) values ('ioannis','ntantis','123456','giannisapi','123456',1);
insert into user_security_role (user_id,security_role_id) values (1,1);

After those commands we have the following:

Three different security roles
One user with the username "giannisapi"
The role "ROLE_admin" given to the user "giannisapi"

Now that everything is completed on the database side, we will move to the Java side to see what needs to be done. First we will create the necessary DTOs (there are various tools that will automatically generate DTOs from the database for you): package org.intan.pedigree.form;import;import java.util.Collection;import java.util.Date;import java.util.Set;import javax.persistence.Basic;import javax.persistence.Column;import javax.persistence.Entity;import javax.persistence.GeneratedValue;import javax.persistence.GenerationType;import javax.persistence.Id;import javax.persistence.JoinColumn;import javax.persistence.JoinTable;import javax.persistence.ManyToMany;import javax.persistence.NamedQueries;import javax.persistence.NamedQuery;import javax.persistence.Table;import javax.persistence.Temporal;import javax.persistence.TemporalType;/**** @author intan*/@Entity@Table(name = 'user', catalog = 'mydb', schema = '')@NamedQueries({@NamedQuery(name = 'UserEntity.findAll', query = 'SELECT u FROM UserEntity u'),@NamedQuery(name = 'UserEntity.findById', query = 'SELECT u FROM UserEntity u WHERE = :id'),@NamedQuery(name = 'UserEntity.findByFirstName', query = 'SELECT u FROM UserEntity u WHERE u.firstName = :firstName'),@NamedQuery(name = 'UserEntity.findByFamilyName', query = 'SELECT u FROM UserEntity u WHERE u.familyName = :familyName'),@NamedQuery(name = 'UserEntity.findByDob', query = 
'SELECT u FROM UserEntity u WHERE u.dob = :dob'),@NamedQuery(name = 'UserEntity.findByPassword', query = 'SELECT u FROM UserEntity u WHERE u.password = :password'),@NamedQuery(name = 'UserEntity.findByUsername', query = 'SELECT u FROM UserEntity u WHERE u.username = :username'),@NamedQuery(name = 'UserEntity.findByConfirmPassword', query = 'SELECT u FROM UserEntity u WHERE u.confirmPassword = :confirmPassword'),@NamedQuery(name = 'UserEntity.findByActive', query = 'SELECT u FROM UserEntity u WHERE = :active')})public class UserEntity implements Serializable {private static final long serialVersionUID = 1L;@Id@GeneratedValue(strategy = GenerationType.IDENTITY)@Basic(optional = false)@Column(name = 'id')private Integer id;@Column(name = 'first_name')private String firstName;@Column(name = 'family_name')private String familyName;@Column(name = 'dob')@Temporal(TemporalType.DATE)private Date dob;@Basic(optional = false)@Column(name = 'password')private String password;@Basic(optional = false)@Column(name = 'username')private String username;@Basic(optional = false)@Column(name = 'confirm_password')private String confirmPassword;@Basic(optional = false)@Column(name = 'active')private boolean active;@JoinTable(name = 'user_security_role', joinColumns = {@JoinColumn(name = 'user_id', referencedColumnName = 'id')}, inverseJoinColumns = {@JoinColumn(name = 'security_role_id', referencedColumnName = 'id')})@ManyToManyprivate Set securityRoleCollection;public UserEntity() {}public UserEntity(Integer id) { = id;}public UserEntity(Integer id, String password, String username, String confirmPassword, boolean active) { = id;this.password = password;this.username = username;this.confirmPassword = confirmPassword; = active;}public Integer getId() {return id;}public void setId(Integer id) { = id;}public String getFirstName() {return firstName;}public void setFirstName(String firstName) {this.firstName = firstName;}public String getFamilyName() {return familyName;}public void 
setFamilyName(String familyName) {this.familyName = familyName;}public Date getDob() {return dob;}public void setDob(Date dob) {this.dob = dob;}public String getPassword() {return password;}public void setPassword(String password) {this.password = password;}public String getUsername() {return username;}public void setUsername(String username) {this.username = username;}public String getConfirmPassword() {return confirmPassword;}public void setConfirmPassword(String confirmPassword) {this.confirmPassword = confirmPassword;}public boolean getActive() {return active;}public void setActive(boolean active) { = active;}public Set getSecurityRoleCollection() {return securityRoleCollection;}public void setSecurityRoleCollection(Set securityRoleCollection) {this.securityRoleCollection = securityRoleCollection;}@Overridepublic int hashCode() {int hash = 0;hash += (id != null ? id.hashCode() : 0);return hash;}@Overridepublic boolean equals(Object object) {// TODO: Warning - this method won't work in the case the id fields are not setif (!(object instanceof UserEntity)) {return false;}UserEntity other = (UserEntity) object;if (( == null && != null) || ( != null && ! 
{return false;}return true;}@Overridepublic String toString() {return 'org.intan.pedigree.form.User[id=' + id + ']';}} package org.intan.pedigree.form;import;import java.util.Collection;import javax.persistence.Basic;import javax.persistence.Column;import javax.persistence.Entity;import javax.persistence.GeneratedValue;import javax.persistence.GenerationType;import javax.persistence.Id;import javax.persistence.ManyToMany;import javax.persistence.NamedQueries;import javax.persistence.NamedQuery;import javax.persistence.Table;/**** @author intan*/@Entity@Table(name = 'security_role', catalog = 'mydb', schema = '')@NamedQueries({@NamedQuery(name = 'SecurityRoleEntity.findAll', query = 'SELECT s FROM SecurityRoleEntity s'),@NamedQuery(name = 'SecurityRoleEntity.findById', query = 'SELECT s FROM SecurityRoleEntity s WHERE = :id'),@NamedQuery(name = 'SecurityRoleEntity.findByName', query = 'SELECT s FROM SecurityRoleEntity s WHERE = :name')})public class SecurityRoleEntity implements Serializable {private static final long serialVersionUID = 1L;@Id@GeneratedValue(strategy = GenerationType.IDENTITY)@Basic(optional = false)@Column(name = 'id')private Integer id;@Column(name = 'name')private String name;@ManyToMany(mappedBy = 'securityRoleCollection')private Collection userCollection;public SecurityRoleEntity() {}public SecurityRoleEntity(Integer id) { = id;}public Integer getId() {return id;}public void setId(Integer id) { = id;}public String getName() {return name;}public void setName(String name) { = name;}public Collection getUserCollection() {return userCollection;}public void setUserCollection(Collection userCollection) {this.userCollection = userCollection;}@Overridepublic int hashCode() {int hash = 0;hash += (id != null ? 
id.hashCode() : 0);return hash;}@Overridepublic boolean equals(Object object) {// TODO: Warning - this method won't work in the case the id fields are not setif (!(object instanceof SecurityRoleEntity)) {return false;}SecurityRoleEntity other = (SecurityRoleEntity) object;if (( == null && != null) || ( != null && ! {return false;}return true;}@Overridepublic String toString() {return 'org.intan.pedigree.form.SecurityRole[id=' + id + ']';}}Now that we have out DTO lets created the necessary DAO classes: package org.intan.pedigree.dao;import java.util.List;import java.util.Set;import org.hibernate.SessionFactory;import org.intan.pedigree.form.SecurityRoleEntity;import org.intan.pedigree.form.UserEntity;import org.springframework.beans.factory.annotation.Autowired;import org.springframework.stereotype.Repository;@Repositorypublic class UserEntityDAOImpl implements UserEntityDAO{@Autowiredprivate SessionFactory sessionFactory;public void addUser(UserEntity user) {try {sessionFactory.getCurrentSession().save(user);} catch (Exception e) {System.out.println(e);}}public UserEntity findByName(String username) {UserEntity user = (UserEntity) sessionFactory.getCurrentSession().createQuery('select u from UserEntity u where u.username = '' + username + ''').uniqueResult();return user;}public UserEntity getUserByID(Integer id) {UserEntity user = (UserEntity) sessionFactory.getCurrentSession().createQuery('select u from UserEntity u where id = '' + id + ''').uniqueResult();return user;}public String activateUser(Integer id) {String hql = 'update UserEntityset active = :active where id = :id';org.hibernate.Query query = sessionFactory.getCurrentSession().createQuery(hql);query.setString('active','Y');query.setInteger('id',id);int rowCount = query.executeUpdate();System.out.println('Rows affected: ' + rowCount);return '';}public String disableUser(Integer id) {String hql = 'update UserEntity set active = :active where id = :id';org.hibernate.Query query = 
sessionFactory.getCurrentSession().createQuery(hql);
        query.setInteger("active", 0);
        query.setInteger("id", id);
        int rowCount = query.executeUpdate();
        System.out.println("Rows affected: " + rowCount);
        return "";
    }

    public void updateUser(UserEntity user) {
        try {
            sessionFactory.getCurrentSession().update(user);
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    public List listUser() {
        return sessionFactory.getCurrentSession().createQuery("from UserEntity").list();
    }

    public void removeUser(Integer id) {
        UserEntity user = (UserEntity) sessionFactory.getCurrentSession().load(UserEntity.class, id);
        if (null != user) {
            sessionFactory.getCurrentSession().delete(user);
        }
    }

    public Set getSecurityRolesForUsername(String username) {
        // Bind the username as a named parameter instead of concatenating it
        // into the HQL string, which would be open to HQL injection.
        UserEntity user = (UserEntity) sessionFactory.getCurrentSession()
                .createQuery("select u from UserEntity u where u.username = :username")
                .setString("username", username)
                .uniqueResult();
        if (user != null) {
            Set roles = (Set) user.getSecurityRoleCollection();
            if (roles != null && roles.size() > 0) {
                return roles;
            }
        }
        return null;
    }
}

package org.intan.pedigree.dao;

import java.util.List;

import org.hibernate.Criteria;
import org.hibernate.SessionFactory;
import org.hibernate.criterion.Restrictions;
import org.intan.pedigree.form.Country;
import org.intan.pedigree.form.Kennel;
import org.intan.pedigree.form.SecurityRoleEntity;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;

@Repository
public class SecurityRoleEntityDAOImpl implements SecurityRoleEntityDAO {

    @Autowired
    private SessionFactory sessionFactory;

    public void addSecurityRoleEntity(SecurityRoleEntity securityRoleEntity) {
        try {
            sessionFactory.getCurrentSession().save(securityRoleEntity);
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    public List listSecurityRoleEntity() {
        Criteria criteria = sessionFactory.getCurrentSession().createCriteria(SecurityRoleEntity.class);
        criteria.add(Restrictions.ne("name", "ROLE_ADMIN"));
        return criteria.list();
    }

    public SecurityRoleEntity getSecurityRoleEntityById(Integer id) {
        Criteria criteria = sessionFactory.getCurrentSession().createCriteria(SecurityRoleEntity.class);
        criteria.add(Restrictions.eq("id", id));
        return (SecurityRoleEntity) criteria.uniqueResult();
    }

    public void removeSecurityRoleEntity(Integer id) {
        SecurityRoleEntity securityRoleEntity = (SecurityRoleEntity) sessionFactory.getCurrentSession()
                .load(SecurityRoleEntity.class, id);
        if (null != securityRoleEntity) {
            sessionFactory.getCurrentSession().delete(securityRoleEntity);
        }
    }
}

Now we will create the service layer for the above DAOs.

package org.intan.pedigree.service;

import java.util.List;

import org.intan.pedigree.dao.SecurityRoleEntityDAO;
import org.intan.pedigree.form.SecurityRoleEntity;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SecurityRoleEntityServiceImpl implements SecurityRoleEntityService {

    @Autowired
    private SecurityRoleEntityDAO securityRoleEntityDAO;

    @Transactional
    public void addSecurityRoleEntity(SecurityRoleEntity securityRoleEntity) {
        securityRoleEntityDAO.addSecurityRoleEntity(securityRoleEntity);
    }

    @Transactional
    public List listSecurityRoleEntity() {
        return securityRoleEntityDAO.listSecurityRoleEntity();
    }

    @Transactional
    public void removeSecurityRoleEntity(Integer id) {
        securityRoleEntityDAO.removeSecurityRoleEntity(id);
    }

    @Transactional
    public SecurityRoleEntity getSecurityRoleEntityById(Integer id) {
        return securityRoleEntityDAO.getSecurityRoleEntityById(id);
    }
}

In the UserDetails service layer below, note that the class implements the UserDetailsService interface from Spring Security (org.springframework.security.core.userdetails):

package org.intan.pedigree.service;

import org.intan.pedigree.dao.UserEntityDAO;
import org.intan.pedigree.form.UserEntity;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.DataAccessException;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service("userDetailsService")
public class UserDetailsServiceImpl implements UserDetailsService {

    @Autowired
    private UserEntityDAO dao;

    @Autowired
    private Assembler assembler;

    @Transactional(readOnly = true)
    public UserDetails loadUserByUsername(String username)
            throws UsernameNotFoundException, DataAccessException {
        UserEntity userEntity = dao.findByName(username);
        if (userEntity == null) {
            throw new UsernameNotFoundException("user not found");
        }
        return assembler.buildUserFromUserEntity(userEntity);
    }
}

You can also see above that the loadUserByUsername method returns the result of assembler.buildUserFromUserEntity. Simply put, what this assembler method does is construct a Spring Security User object from the given UserEntity DTO. The code of the Assembler class is given below:

package org.intan.pedigree.service;

import java.util.ArrayList;
import java.util.Collection;

import org.intan.pedigree.form.SecurityRoleEntity;
import org.intan.pedigree.form.UserEntity;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.GrantedAuthorityImpl;
import org.springframework.security.core.userdetails.User;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service("assembler")
public class Assembler {

    @Transactional(readOnly = true)
    User buildUserFromUserEntity(UserEntity userEntity) {
        String username = userEntity.getUsername();
        String password = userEntity.getPassword();
        boolean enabled = userEntity.getActive();
        boolean accountNonExpired = userEntity.getActive();
        boolean credentialsNonExpired = userEntity.getActive();
        boolean accountNonLocked = userEntity.getActive();
        Collection<GrantedAuthority> authorities = new ArrayList<GrantedAuthority>();
        for (SecurityRoleEntity role : userEntity.getSecurityRoleCollection()) {
            authorities.add(new GrantedAuthorityImpl(role.getName()));
        }
        return new User(username, password, enabled, accountNonExpired,
                credentialsNonExpired, accountNonLocked, authorities);
    }
}

The only thing that remains to be done now is to define what is necessary in applicationContext-security.xml. For this, create a new XML file called "applicationContext-security.xml" with the following contents:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns="http://www.springframework.org/schema/security"
    xmlns:beans="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:context="http://www.springframework.org/schema/context"
    xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
        http://www.springframework.org/schema/security http://www.springframework.org/schema/security/spring-security-3.0.xsd">

    <beans:bean id="userDetailsService" class="org.intan.pedigree.service.UserDetailsServiceImpl"/>

    <context:component-scan base-package="org.intan.pedigree" />

    <http auto-config="true">
        <intercept-url pattern="/admin/**" access="ROLE_ADMIN" />
        <intercept-url pattern="/user/**" access="ROLE_REGISTERED_USER" />
        <intercept-url pattern="/kennel/**" access="ROLE_KENNEL_OWNER" />
        <!-- <security:intercept-url pattern="/login.jsp" access="IS_AUTHENTICATED_ANONYMOUSLY" /> -->
    </http>

    <beans:bean id="daoAuthenticationProvider"
        class="org.springframework.security.authentication.dao.DaoAuthenticationProvider">
        <beans:property name="userDetailsService" ref="userDetailsService" />
    </beans:bean>

    <beans:bean id="authenticationManager"
        class="org.springframework.security.authentication.ProviderManager">
        <beans:property name="providers">
            <beans:list>
                <beans:ref local="daoAuthenticationProvider" />
            </beans:list>
        </beans:property>
    </beans:bean>

    <authentication-manager>
        <authentication-provider user-service-ref="userDetailsService">
            <password-encoder hash="plaintext" />
        </authentication-provider>
    </authentication-manager>

</beans:beans>

In your web.xml, put the following code in order to load the applicationContext-security.xml file:

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>
        /WEB-INF/applicationContext-hibernate.xml
        /WEB-INF/applicationContext-security.xml
    </param-value>
</context-param>

Last of all, please excuse any typing mistakes, as this code is copied and pasted from personal work I have done. If something does not work, please post a question and I will be more than happy to assist you. Reference: Spring 3, Spring Security Implementing Custom UserDetails with Hibernate from our JCG partner Ioannis Dadis at the Giannisapi blog....
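As a brief addendum: because the assembler above contains plain mapping logic, that logic can be unit-tested without Spring, Hibernate, or a database. The following is a framework-free sketch of the same entity-to-principal mapping; every class name here (SimpleRole, SimpleUserEntity, SimplePrincipal, SimpleAssembler) is a stand-in invented for illustration, not a Spring Security type:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Stand-in for the role entity; the name doubles as the authority string.
class SimpleRole {
    final String name;
    SimpleRole(String name) { this.name = name; }
}

// Stand-in for the persistent user entity.
class SimpleUserEntity {
    final String username, password;
    final boolean active;
    final List<SimpleRole> roles;
    SimpleUserEntity(String username, String password, boolean active, List<SimpleRole> roles) {
        this.username = username; this.password = password;
        this.active = active; this.roles = roles;
    }
}

// Stand-in for the security principal handed to the framework.
class SimplePrincipal {
    final String username, password;
    final boolean enabled;
    final List<String> authorities;
    SimplePrincipal(String username, String password, boolean enabled, List<String> authorities) {
        this.username = username; this.password = password;
        this.enabled = enabled; this.authorities = authorities;
    }
}

// Mirrors the Assembler's shape: copy the credentials, derive the enabled
// flag from the entity's active bit, and turn each role into an authority.
class SimpleAssembler {
    SimplePrincipal build(SimpleUserEntity e) {
        List<String> authorities = new ArrayList<>();
        for (SimpleRole role : e.roles) {
            authorities.add(role.name);
        }
        return new SimplePrincipal(e.username, e.password, e.active, authorities);
    }
}
```

A test can then assert directly on the mapped principal, with no container or security context in play.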

Stumbling towards a better design

Some programs have a clear design, and coding new features is quick and easy. Other programs are a patchwork quilt of barely comprehensible fragments, bug fixes, and glue. If you have to code new features for such programs, you're often better off rewriting them.

However, there is a middle ground that I suspect is pretty common in these days of clean code and automated test suites: you have a good program with a clear design, but as you start to implement a new feature, you realise it's a force fit and you're not sure why. What to do? I've recently been implementing a feature in Eclipse Virgo which raised this question and which, I hope, sheds some light on how to proceed. Let's take a look.

The New Feature

I've been changing the way the Virgo kernel isolates itself from applications. Previously, Equinox supported nested OSGi frameworks and it was easy to isolate the kernel from applications by putting the applications in a nested framework (a "user region") and sharing selected packages and services with the kernel. However, the nested framework support is being withdrawn in favour of an OSGi standard set of framework hooks. These hooks let you control, or at least limit, the visibility of bundles, packages, and services, all in a single framework.

So I set about re-basing Virgo on the framework hooks. The future looked good: eventually the hooks could be used to implement multiple user regions and even to rework the way application scoping is implemented in Virgo.

An Initial Implementation

One bundle in the Virgo kernel is responsible for region support, so I set about reworking it to use the framework hooks. After a couple of weeks the kernel and all its tests were running OK. However, the vision of using the hooks to implement multiple user regions and redo application scoping had receded into the distance, given the rather basic way I had written the framework hooks. I had the option of ignoring this and calling "YAGNI!" (You Ain't Gonna Need It!).
But I was certain that once I merged my branch into master, the necessary generalisation would drop down the list of priorities. Also, if I ever did prioritise the generalisation work, I would have forgotten much of what was then buzzing around my head.

Stepping Back

So the first step was to come up with a suitable abstract model. I had some ideas from when we were discussing nested frameworks in the OSGi Alliance a couple of years ago: partition the framework into groups of bundles, then connect these groups together with one-way connections which allow certain packages and services to be visible from one group to another.

Using Virgo terminology, I set about defining how I could partition the framework into regions and then connect the regions together using package, service, and bundle filters. At first it was tempting to avoid cycles in the graph, but it soon became clear that cycles are harmless and indeed necessary for modelling Virgo's existing kernel and user region, which need to be connected to each other with appropriate filters.

A Clean Abstraction

Soon I had a reasonably good idea of the kind of graph with filters that was necessary, so it was tempting to get coding and then refactor the thing into a reasonable shape. But I had very little idea of how the filtering of bundles and packages would interact. In the past I've found that refactoring from such a starting point can waste a lot of time, especially once tests have been written and need reworking. Code has an inertia to change, so it's often better to defer coding until I have a better understanding.

To get a clean abstraction and a clear understanding, while avoiding "analysis paralysis", I wrote a formal specification of these connected regions. This is essentially a mathematical model of the state of the graph and the operations on it. This kind of model enables properties of the system to be discovered before it is implemented in code.
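The structure described above (regions as nodes, one-way filtered connections as edges, cycles permitted) can be sketched in a few lines of plain Java. This is only an illustrative model under my own naming, not Virgo's actual RegionDigraph API; it treats packages as simple strings and ignores bundles and services:

```java
import java.util.*;

// Illustrative region digraph: each region holds a set of packages, and each
// outgoing connection carries a filter naming the packages that may be seen
// across it.
class Region {
    final String name;
    final Set<String> packages = new HashSet<>();
    // target region -> packages admitted across the connection
    final Map<Region, Set<String>> connections = new LinkedHashMap<>();

    Region(String name) { this.name = name; }

    void connect(Region target, Set<String> filter) {
        connections.put(target, filter);
    }

    // A package is visible in a region if the region contains it, or if some
    // outgoing connection admits it from a region where it is visible. This is
    // just reachability over the edges that admit the package, so a visited
    // set makes cycles harmless.
    boolean isVisible(String pkg) {
        return isVisible(pkg, new HashSet<Region>());
    }

    private boolean isVisible(String pkg, Set<Region> seen) {
        if (!seen.add(this)) {
            return false; // already explored this region
        }
        if (packages.contains(pkg)) {
            return true;
        }
        for (Map.Entry<Region, Set<String>> edge : connections.entrySet()) {
            if (edge.getValue().contains(pkg) && edge.getKey().isVisible(pkg, seen)) {
                return true;
            }
        }
        return false;
    }
}
```

Note how a kernel region and a user region can be connected in both directions with different filters: exactly the cyclic arrangement the text says is necessary.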
My friend and colleague, Steve Powell, was keen to review the spec and suggested several simplifications, and before long we had a formal spec with some rather nice algebraic properties for filtering and combining regions.

To give you a feel for how these properties look, take this example, which says that "combining" two regions (used when describing the combined appearance of two regions) and then filtering is equivalent to filtering the two regions first and then combining the result; in symbols, filter(r1 ⊕ r2) = filter(r1) ⊕ filter(r2).

Being a visual thinker, and to make the formal spec more useful to non-mathematicians, I also drew plenty of pictures along the way. Here's an example graph of regions:

[Figure: an example graph of regions connected by filtered edges]

A New Implementation

I defined a RegionDigraph ("digraph" is short for "directed graph") interface, implemented it, and defined a suite of unit tests to give good code coverage. I then implemented a fresh collection of framework hooks in terms of the region digraph, ripped out the old framework hooks and the code supporting what was, in retrospect, a poorly formed notion of region membership, and replaced them with the new framework hooks underpinned by the region digraph.

I Really Did Need It (IRDNI?)

It took a while to get all the kernel integration tests running again, mainly because the user region needs to be configured so that packages from the system bundle (which belongs in the kernel region) are imported along with some new services such as the region digraph service.

As problems occurred, I could step back and think in terms of the underlying graph. By writing appropriate toString methods on the Region and RegionDigraph implementation classes, the model became easier to visualise in the debugger.
This gives me hope that if and when other issues arise, I will have a better chance of debugging them because I can understand the underlying model.

A couple of significant issues turned up along the way, both related to the use of "side states" when Virgo deploys applications. The first is the need to temporarily add bundle descriptions to the user region. The second is the need to respect the region digraph when diagnosing resolver errors. This is relatively straightforward when deploying and diagnosing failures. It is less straightforward when dumping resolution failure states for offline analysis: the region digraph also needs to be dumped so that it can be used in the offline analysis.

These issues would have been much harder to address in the initial framework hooks implementation. The first would have involved some fairly arbitrary code to record and remove bundle descriptions from the user region. The second would have been much trickier, as there was a poorly defined and overly static notion of region membership which wouldn't have lent itself to inclusion in a state dump without looking like a gross hack. But with the region digraph it was easy to create a temporary "coregion" to contain the temporary bundle descriptions, and it should be straightforward to capture the digraph alongside the state dump.

OK, so I'm convinced that the region digraph is pulling its weight and isn't a bunch of YAGNI. But someone challenged me the other day by asking: "Why do the framework hooks have to be so complex?"

Unnecessary Complexity?

Well, firstly, the region digraph ensures consistent behaviour across the five framework hooks (bundle find, bundle event, service find, service event, and resolver hooks), especially regarding filtering behaviour, treatment of the system bundle, and transitive dependencies (i.e. across more than one region connection).
This consistency should lead to fewer bugs, more consistent documentation, and ease of understanding for users.

Secondly, the region digraph is much more flexible than hooks based on a static notion of region membership: bundles may be added to the kernel after the user region has been created; application scoping should be relatively straightforward to rework in terms of regions, giving scoping and regions consistent semantics (fewer bugs, better documentation, and so on); and multiple user regions should be relatively tractable to implement.

Thirdly, the region digraph should be an excellent basis for implementing the notion of a multi-bundle application. In the OSGi Alliance, we are currently discussing how to standardise the multi-bundle application constructs in Virgo, Apache Aries, the Paremus Service Fabric, and elsewhere. Indeed, I regard it as a proof of concept that the framework hooks can be used to implement certain basic kinds of multi-bundle application. As a nice spin-off, the development of the region digraph has resulted in several Equinox bugs being fixed and some clarifications being made to the framework hooks specification.

Next Steps

I am writing this while the region digraph is "rippling" through the Virgo repositories on its way into the 3.0 line. But this item is starting to have a broader impact. Last week I gave a presentation on the region digraph to the OSGi Alliance's Enterprise Expert Group. There was a lot of interest, and subsequently there has even been discussion of whether the feature should be implemented in Equinox so that it can be reused by other projects outside Virgo.

Postscript (30 March 2010)

The region digraph is working out well in Virgo. We had to rework the functionality underlying the admin console because there is no longer a "surrogate" bundle representing the kernel packages and services in the user region.
To better represent the connections from the user region to the kernel, the runtime artefact model inside Virgo needs to be upgraded to understand regions directly. This is work in progress in the 3.0 line.

Meanwhile, Tom Watson, an Equinox committer, is working with me to move the region digraph code to Equinox. The rationale is to ensure that multiple users of the framework hooks can co-exist (by using the region digraph API instead of directly using the framework hooks). Tom contributed several significant changes to the digraph code in Virgo, including persistence support. When Virgo dumps a resolution failure state, it also dumps the region digraph. The dumped digraph is read back in later and used to provide a resolution hook for analysing the dumped state, which ensures consistency between the live resolver and the dumped state analysis.

Reference: Stumbling towards a better design from our JCG partner Glyn Normington at the Mind the Gap blog....
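As an addendum, the dump-and-reload cycle mentioned in the postscript can be illustrated with a toy text format. The format and names below are invented purely for illustration and bear no relation to Virgo's actual persisted form; the idea is simply that the graph's edges and filters round-trip through a dump:

```java
import java.util.*;

// Toy persistence for a region graph: one line per connection, in the form
// "from->to:pkg1,pkg2". Filters are assumed non-empty. Invented format, for
// illustration only.
class DigraphDump {

    // edges: source region name -> (target region name -> admitted packages)
    static String dump(Map<String, Map<String, Set<String>>> edges) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Map<String, Set<String>>> from : edges.entrySet()) {
            for (Map.Entry<String, Set<String>> to : from.getValue().entrySet()) {
                sb.append(from.getKey()).append("->").append(to.getKey())
                  .append(':')
                  .append(String.join(",", new TreeSet<>(to.getValue())))
                  .append('\n');
            }
        }
        return sb.toString();
    }

    static Map<String, Map<String, Set<String>>> load(String text) {
        Map<String, Map<String, Set<String>>> edges = new LinkedHashMap<>();
        for (String line : text.split("\n")) {
            if (line.isEmpty()) continue;
            String[] headAndFilter = line.split(":", 2);
            String[] fromTo = headAndFilter[0].split("->");
            Set<String> filter = new TreeSet<>(Arrays.asList(headAndFilter[1].split(",")));
            edges.computeIfAbsent(fromTo[0], k -> new LinkedHashMap<>())
                 .put(fromTo[1], filter);
        }
        return edges;
    }
}
```

A dumped graph that reloads to an equal structure is what lets the offline analysis see the same region boundaries as the live resolver did.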

Why You Didn’t Get the Job

Over the course of my career I have scheduled thousands of software engineering interviews with hundreds of hiring managers at a wide array of companies and organizations. I have learned that although no two managers look for the exact same set of technical skills or behaviors, there are recognizable patterns in the feedback I receive when a candidate is not presented with a job offer.

Obviously, if you are unable to demonstrate the basic fundamental skills for a position (for our purposes, software engineering expertise), anything else that happens during an interview is irrelevant. For that technical skills assessment, you are generally on your own, as recruiters should not provide candidates with the specific technical questions that they will be asked in an interview.

It should be helpful for job seekers to know where others have stumbled in interviews where technical skill was not the sole or even primary reason given for the candidate's rejection. The examples of feedback below are things I have heard repeatedly over the years, and they tend to be the leading non-technical causes of failed interviews in the software industry (IMO).

Candidate has wide technical breadth but little depth – The 'jack of all trades' response is not uncommon, particularly for folks who have perhaps bounced from job to job a little too much. Having experience working in diverse technical environments is almost always a positive, but only if you are there long enough to take some deeper skills and experience away with you. Companies will seek depth in at least some subset of your overall skill set.

Candidate displayed a superiority complex or sense of entitlement – This seems most common when a candidate will subtly (or perhaps not so subtly) express that they may be unwilling to do any tasks aside from new development, such as code maintenance, or when a candidate confesses an interest in exclusively working with a certain subset of technologies.
Candidates who are perceived as team players may mention preferences, but will also be careful to acknowledge their willingness to perform any relevant tasks that need to be done.

Candidate showed a lack of passion – The lack-of-passion comment has various applications. Sometimes the candidate is perceived as apathetic about an opportunity or uninterested in the hiring company; often it is described as what seems to be an overall apathy for the engineering profession (that software is not what they want to be doing). Regardless of the source of apathy, this perception is hard to overcome. If a candidate has no passion for the business, the technology, or the people, chances are the interview is a waste of time.

Candidate talked more about the accomplishments of co-workers – This piece of feedback seems to be going viral lately. Candidates apparently ramble on about the other groups that built pieces of their software product, QA, the devops team's role, and everyone else in the company, yet they fail to dig deep into what their own contribution was. This signifies to interviewers that perhaps this candidate is either the least productive member of the team or simply unable to describe their own duties. Give credit where it is due to your peers, but be sure to focus on your own accomplishments first.

Candidate seems completely unaware of anything that happens beyond his/her desk – Repeatedly using answers such as "I don't know who built that" or "I'm not sure how that worked" can be an indicator that the candidate is insulated in his/her role, or doesn't have the curiosity to learn what others are doing in their company. As most engineering groups tend towards heavy collaboration these days, this lack of information will be a red flag for potential new employers.

Candidate more focused on the tools/technology than on the profession – Although rare, this often troubles managers a great deal, and it's often a symptom of the 'fanboy' complex.
I first experienced this phenomenon when EJB arrived on the scene in the Java world, and many candidates only wanted to work for companies that were using EJB. When a technologist is more focused on becoming great at a specific tool than on becoming a great overall engineer, companies may show some reluctance. This is a trend that I expect will grow as the number of language/platform choices expands, and as fanatical responses to certain technologies, and the polarization of the tech community around them, increase.

Candidate's claimed/résumé experience ≠ candidate's actual experience – Embellishing the résumé is nothing new. A blatant lie on a résumé is obviously a serious no-no, but even minor exaggerations or vague inaccuracies can come back and bite you. The most common example is when a candidate includes technologies or buzzwords on a résumé that they know nothing about. Including items in a skills matrix that are not representative of your current skill set is seen as dishonest by hiring managers.

Candidate's experience is not 'transferable' – If your company is only using homegrown frameworks and proprietary software, or if you have worked in the same company for many years without any fundamental changes in the development environment, this could be you. The interviewer in this case feels that you may be productive in your current environment, but that when given a different set of tools, methodologies, and team members, you may encounter too steep a learning curve. This is often said of candidates who have worked within development groups at very large companies for many years.

Reference: Why You Didn't Get the Job from our JCG partner Dave Fecak at the Job Tips For Geeks blog....
Java Code Geeks and all content copyright © 2010-2015, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact
All trademarks and registered trademarks appearing on Java Code Geeks are the property of their respective owners.
Java is a trademark or registered trademark of Oracle Corporation in the United States and other countries.
Java Code Geeks is not connected to Oracle Corporation and is not sponsored by Oracle Corporation.