

Hibernate cache basics

Recently I have been experimenting with the Hibernate cache. In this post I would like to share my experience and point out some of the details of the Hibernate second-level cache. Along the way I will direct you to some articles that helped me implement the cache. Let's start from the ground up.

Caching in Hibernate

Caching functionality is designed to reduce the amount of necessary database access. When objects are cached, they reside in memory. You have the flexibility to limit memory usage and store the items on disk; the implementation depends on the underlying cache manager. There are various flavors of caching available, but it is better to cache non-transactional and read-only data. Hibernate provides three types of caching.

1. Session Cache
The session cache caches objects within the current session. It is enabled by default in Hibernate. Read more about the session cache. Objects in the session cache reside in the same memory location.

2. Second Level Cache
The second-level cache is responsible for caching objects across sessions. When it is turned on, objects are first searched for in the cache, and if they are not found, a database query is fired. Read here on how to implement the second-level cache. The second-level cache is used when objects are loaded by their primary key; this includes the fetching of associations. In the case of the second-level cache the objects are reconstructed, and hence each copy resides in a different memory location.

3. Query Cache
The query cache is used to cache the results of a query. Read here on how to implement the query cache. When the query cache is turned on, the results of a query are stored against the combination of the query and its parameters. Every time the query is fired, the cache manager checks for that combination of query and parameters. If the results are found in the cache they are returned; otherwise a database transaction is initiated.
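As a concrete illustration of switching these caches on, the configuration typically involves a couple of Hibernate properties; a minimal sketch, assuming a Hibernate 3.x setup with Ehcache as the cache provider (the exact provider or region-factory class name varies across Hibernate/Ehcache versions):

```xml
<!-- hibernate.cfg.xml: enable the second-level cache and the query cache -->
<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.use_query_cache">true</property>
<!-- Hibernate 3.2-era provider class; later versions configure a region factory instead -->
<property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.EhCacheProvider</property>
```

Note that even with hibernate.cache.use_query_cache set to true, an individual query is only cached when you call setCacheable(true) on it, and an entity only participates in the second-level cache when it is mapped with a cache usage strategy (for example, read-only for the non-transactional data recommended above).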
As you can see, it is not a good idea to cache a query if it has many parameters, or if a single parameter can take many values, because the results are stored in memory for each combination. This can lead to excessive memory usage. Finally, here is a list of good articles written on this topic:
1. Speed Up Your Hibernate Applications with Second-Level Caching
2. Hibernate: Truly Understanding the Second-Level and Query Caches
3. EhCache Integration with Spring and Hibernate. Step by Step Tutorial
4. Configuring Ehcache with Hibernate
Reference: All about Hibernate Second Level Cache from our JCG partner Manu PK at The Object Oriented Life blog....
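To make the memory concern above concrete, here is a toy model of a query cache (not Hibernate's actual implementation): results are keyed by the (query, parameters) combination, so the same query fired with a thousand different parameter values leaves a thousand separate result sets in memory.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a query cache: results are keyed by the (query, parameters)
// combination, so every distinct parameter value adds a new cached entry.
public class QueryCacheSketch {

    private static final Map<String, List<String>> CACHE = new HashMap<>();

    // Returns cached results for the query/parameter combination, hitting
    // the "database" (here: fabricating a row) only on a cache miss.
    public static List<String> results(String query, Object... params) {
        String key = query + Arrays.toString(params);
        return CACHE.computeIfAbsent(key, k -> {
            List<String> rows = new ArrayList<>();
            rows.add("row for " + k); // stand-in for a real database hit
            return rows;
        });
    }

    public static int cachedCombinations() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        String q = "select e from Employee e where e.dept = ?";
        for (int dept = 0; dept < 1000; dept++) {
            results(q, dept); // one query, 1000 parameter values...
        }
        // ...yields 1000 separate cache entries, all held in memory
        System.out.println(cachedCombinations());
    }
}
```

The same effect in Hibernate is invisible until memory usage grows, which is why caching queries with high-cardinality parameters is discouraged above.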

Java threads: How many should I create?

Introduction

"How many threads should I create?" Many years ago one of my friends asked me this question, and I gave him the standard guideline: "number of CPU cores + 1". Most of you will be nodding as you read this. Unfortunately, all of us are wrong on that point. The answer I would give now: if your architecture is based on a shared-resource model, then "number of CPU cores + 1" threads will give you the better throughput; but if your architecture is a shared-nothing model (like SEDA or the Actor model), then you can create as many threads as you need.

Walk Through

So here comes the question: why did so many of our elders keep giving us the "number of CPU cores + 1" guideline? Because they told us that thread context switching is heavy and would limit your system's scalability. But nobody paid attention to the programming or architecture model they were working under. If you read carefully, you will find that most of them were describing programming or architecture models based on shared resources. A couple of examples:

1. Socket programming – the socket layer is shared by many requests, so you need a context switch between requests.
2. Information provider systems – most customers will continually access the same resource.

In these situations multiple requests access the same resource, so the system has to add a lock to that resource to meet its consistency requirements. Lock contention then comes into play, and the context switching between multiple threads becomes very heavy. After noticing this, I wondered whether other programming or architecture models could work around that limitation: if the shared-resource model fails when you create more Java threads, maybe we can try a shared-nothing model. Fortunately, I got the chance to build a system that needed to scale well: it had to send out lots of notifications very quickly.
So I decided to go ahead with the SEDA model as a trial, leveraging my multiple-lane CommonJ pattern. Currently I can run the Java application with around 600 threads at maximum on one machine with a 1.5 GB Java heap. The average memory consumption of one Java thread is around 512 kilobytes (Ref: http://www.javacodegeeks.com/2011/04/erlang-vs-java-memory-architecture.html), so 600 threads need roughly 300 MB of memory (Java native plus Java heap combined). If your system design is good, that 300 MB will not actually be a burden. By the way, on Windows you can't create more than 1000 threads, since Windows doesn't handle that many threads well, but you can create 1000 threads on Linux if you leverage NPTL. So when people tell you Java can't handle highly concurrent job processing, that isn't 100% true. Someone may ask about the thread lifecycle itself: ready – runnable – running – waiting. I would say Java and the latest operating systems already handle these transitions surprisingly efficiently, and if you have a multi-core CPU and turn on NUMA, the whole performance improves even further. So thread lifecycle management is not your bottleneck, at least not in the beginning. Of course, creating a thread and bringing it to the running state are heavy operations, so please use a thread pool (JDK: Executors). For the power of many Java threads, see http://community.jboss.org/people/andy.song/blog/2011/02/22/performance-compare-between-kilim-and-multilane-commj-pattern

Conclusion

In the future, how will you answer the question "How many Java threads should I create?" I hope your answer changes to:
1. If your architecture is based on a shared-resource model, then your thread count should be "number of CPU cores + 1" for better throughput.
2. If your architecture is a shared-nothing model (like SEDA or the Actor model), then you can create as many threads as you need.
Reference: How many java threads should I create? from our JCG partner Andy Song at the song andy's Stuff blog....
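Whatever count you settle on, the pooling advice above stands: create threads through the JDK executors rather than by hand. A minimal sketch (the "cores + 1" sizing here is for the shared-resource case; class and method names are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sizes a pool at "CPU cores + 1" and lets the JDK executor reuse its
// threads, instead of paying thread-creation cost once per task.
public class PoolSizing {

    public static int completeTasks(int taskCount) {
        int poolSize = Runtime.getRuntime().availableProcessors() + 1;
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            pool.execute(done::incrementAndGet); // cheap stand-in for real work
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(completeTasks(100)); // prints 100
    }
}
```

In a shared-nothing design the pool can simply be sized much larger; the point is that the pool, not ad-hoc thread construction, pays the creation cost.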

How Employers Measure Passion in Software Engineering Candidates

Over the past few months I have had some exchanges with small-company executives and hiring managers which have opened my eyes to what I consider a relatively new wrinkle in the software development hiring world. I have been recruiting software engineers for 14 years, and I don't recall another time when I've observed this at the same level. Here are two examples.

The first incident was related to a candidate's ('A') resume that I submitted to a local start-up. A was well qualified for the position based on the technical specifications the client gave me, and I anticipated that, at worst, a phone screen for A would be automatic. I even went as far as to share A's interview availability. A day after hitting 'send', I received feedback that the hiring manager was not interested in an interview. A large part of the manager's reasoning was related to the fact that A had taken a two-year sabbatical to pursue a degree in a non-technical discipline and subsequently took a job in that field for a brief stint before returning to the software world a few years ago. I clarified information about A to be sure that the manager had a full understanding of the situation, and the verdict was upheld – no interview.

My second anecdote involves another candidate ('B') that I presented for a position with a different client company. B was someone I would classify as a junior-level candidate overall and probably 'borderline qualified' for the role. B had roughly the minimum amount of required experience with a few gaps, and I was not 100% confident that B would be invited in. B was brought in for an interview, performed about average on the technical portions, and shined interpersonally. As this company does not make a habit of hiring average engineers, I was at least a bit surprised when an offer was made.
I was told that a contributing factor for making the offer was that B's 'extracurricular activities' were, according to my client, indicative of someone that was going to be a great engineer (though B's current skills were average). B's potential wasn't being assessed as if B were an entry-level engineer with a solid academic background; rather, the potential was assessed based on B's interest in software.

There are obviously many other stories like these, and the link between them seems obvious. Software firms that are hiring engineers (smaller shops in particular) appear to be qualifying and quantifying a candidate's passion with the same level of scrutiny that they use in trying to measure technical skills and culture fit. Historically, companies have reviewed resumes and conducted interviews to answer the question, 'Can this candidate perform the task at hand?'. For my purposes as a recruiter of engineers, the question can be oversimplified as 'Can he/she code?'. It seems the trend is to follow that question with 'Does he/she CARE about the job, the company, and the craft?'.

If you lack passion for the industry, be advised that in future job interviews you may be judged on this quality. Whether you love coding or not, reading further will give you some insight. Engineer A is a cautionary tale, while B is someone the passionate will want to emulate. Let's start with A.

I don't want to be like A. How can I avoid appearing dispassionate on my resume?

Candidate A never had a chance, and I'll shoulder partial responsibility for that. A was rejected based solely on a resume and my accompanying notes, so theoretically A could be extremely passionate about software engineering without appearing to be so on paper. Applicants do take some potential risks by choosing to include irrelevant experience, education, or even hobbies on a resume, and I will often warn my candidates of items that could cause alarm.
In this case, A's inclusion of both job details and advanced degrees in another discipline was judged a red flag that A might decide to leave the software industry again. A similar conclusion could have been reached if A had listed hobbies that evidenced a deep-rooted drive toward something other than engineering (say, studying for a certification in a trade).

Another related mistake on resumes is an Objective section that does not reflect the job for which you are applying. I have witnessed candidates being rejected for interviews based on an objective, and the most common example is when a candidate seeking a dev job lists 'technical lead' or 'manager' in the objective. Typical feedback might sound like this: 'Our job is a basic development position, and if she only wants to be in a leadership slot she would not be happy with the role'. Listing the type of job that you are passionate about is essential if you are going to include an objective. I prefer that candidates omit the objective section altogether to sidestep this specific danger, as most job seekers are open to more than one possible hiring scenario.

I want to be like B. What can I do to highlight my passion during my search?

Since the search starts out with the resume, be sure to list all of the details about you that demonstrate your enthusiasm. This should include relevant education, professional experience, and hobbies or activities that pertain to engineering. When listing your professional experience, emphasize the elements of your job that were the most relevant to what you want to do. If you want to strictly do development, downplay the details of your sys admin or QA tasks (a mention could be helpful, just don't dwell).
When listing your academic credentials, recent grads should be sure to provide specifics on classes relevant to their job goals, and it may be in your best interest to remove degrees or advanced courses unrelated to engineering.

In my experience, the most commonly overlooked resume details that would indicate passion are:
- participation in open source projects
- membership in user groups or meetups
- conference attendance
- public-speaking appearances
- engineering-related hobbies (e.g. Arduino, personal/organizational websites you built or maintain, tech blogging)
- technical volunteer/non-profit experience

If any of the above are not on your resume, be sure to include them before your next job search. Assuming that you get the opportunity to interview, try to gracefully and tactfully include some details from the bulleted list above. Your reading habits and the technologies you self-study are best mentioned in interviews, as they may seem less appropriate as resume material.

Conclusion: Most candidates should feel free to at least mention interests that are not engineering related if the opportunity presents itself, as companies tend to like hiring employees that are not strictly one-dimensional. Just be sure not to overemphasize interests or activities that could be misinterpreted as future career goals. Passion alone won't get you a job, but it can certainly make a difference in a manager's decision on who to hire (candidate B) and who not to even interview (candidate A). Make sure you use your resume and interview time to show your passion.

Reference: How Employers Measure Passion in Software Engineering Candidates (and how to express your passion in resumes and interviews) from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Spring JDBC Database connection pool setup

Setting up a JDBC database connection pool in the Spring framework is easy for any Java application; it is just a matter of changing a few settings in the Spring configuration file. If you are writing a core Java application that does not run on a web or application server like Tomcat or WebLogic, managing the database connection pool with Apache Commons DBCP and Commons Pool along with the Spring framework is a nice choice. But if you have the luxury of a web server and a managed J2EE container, consider using a connection pool managed by the J2EE server: it is the better option in terms of maintenance and flexibility, it helps prevent java.lang.OutOfMemoryError: PermGen space in Tomcat by avoiding loading the JDBC driver in the web-app classloader, and keeping the JDBC connection pool settings in the server makes it easy to change them or add settings for JDBC over SSL. In this article we will see how to set up a database connection pool in the Spring framework using Apache Commons DBCP and commons-pool.jar. This article is a continuation of my tutorials on the Spring framework and databases, like LDAP Authentication in J2EE with Spring Security and managing sessions using Spring Security; if you haven't read those articles you may find them useful.

Spring Example: JDBC Database Connection Pool

The Spring framework provides the convenient JdbcTemplate class for performing all database-related operations. If you are not using Hibernate, then using Spring's JdbcTemplate is a good option. JdbcTemplate requires a DataSource, which is a javax.sql.DataSource implementation; you can configure it directly as a Spring bean or obtain it via JNDI if you are using a J2EE web server or application server to manage the connection pool. See How to setup JDBC connection Pool in tomcat and Spring for JNDI-based connection pooling for more details.
In order to set up the data source you will require the following configuration in your applicationContext.xml (Spring configuration) file:

    <!-- DataSource connection settings in Spring -->
    <bean id="springDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
        <property name="url" value="jdbc:oracle:thin:@localhost:1521:SPRING_TEST" />
        <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
        <property name="username" value="root" />
        <property name="password" value="root" />
        <property name="removeAbandoned" value="true" />
        <property name="initialSize" value="20" />
        <property name="maxActive" value="30" />
    </bean>

    <!-- DAO class configuration in Spring -->
    <bean id="EmployeeDatabaseBean" class="com.test.EmployeeDAOImpl">
        <property name="dataSource" ref="springDataSource"/>
    </bean>

The above DBCP configuration will create 20 database connections, since initialSize is 20, and will grow to at most 30 database connections if required, since maxActive is 30. You can customize your connection pool using the other properties provided by the Apache DBCP library. The example creates a connection pool against an Oracle 11g database using oracle.jdbc.driver.OracleDriver, which comes with ojdbc6.jar or ojdbc6_g.jar; to learn more about how to connect to an Oracle database from a Java program, see the link.

Java Code for Using a Connection Pool in Spring

Below is a complete code example of a DAO class which uses Spring JdbcTemplate to execute a SELECT query against the database using a connection from the pool. If you do not initialize the connection pool on start-up, the first query may take a while, because the pool first has to create its initial set of SQL connections; once the pool exists, subsequent queries execute faster.
    // Code for the DAO class using Spring JdbcTemplate
    package com.test;

    import javax.sql.DataSource;

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.jdbc.core.JdbcTemplate;

    /**
     * Java program example to use a DBCP connection pool with the Spring framework
     * @author Javin Paul
     */
    public class EmployeeDAOImpl implements EmployeeDAO {

        private Logger logger = LoggerFactory.getLogger(EmployeeDAOImpl.class);
        private JdbcTemplate jdbcTemplate;

        public void setDataSource(DataSource dataSource) {
            this.jdbcTemplate = new JdbcTemplate(dataSource);
        }

        @Override
        public boolean isEmployeeExists(String emp_id) {
            try {
                logger.debug("Checking Employee in EMP table using Spring JdbcTemplate");
                int number = this.jdbcTemplate.queryForInt("select count(*) from EMP where emp_id=?", emp_id);
                if (number > 0) {
                    return true;
                }
            } catch (Exception exception) {
                exception.printStackTrace();
            }
            return false;
        }
    }

Dependencies:
1. You need to include an Oracle driver jar like ojdbc6.jar in your classpath.
2. The Apache DBCP and commons-pool jars must be on the application classpath.

That's all on how to configure a JDBC database connection pool in the Spring framework. As I said, it's pretty easy using the Apache DBCP library: just a matter of a few settings in the Spring applicationContext.xml and you are ready. If you want to configure a JDBC connection pool on Tomcat (a JNDI connection pool) and use it from Spring, then see here.

Reference: JDBC Database connection pool in Spring Framework – How to Setup Example from our JCG partner Javin Paul at the Javarevisited blog....
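For comparison, the container-managed alternative mentioned above is typically declared in Tomcat's context.xml and then looked up via JNDI in Spring; a sketch, assuming a Tomcat 6/7-era DBCP-backed pool (the resource name jdbc/SpringTestDS is a placeholder):

```xml
<!-- META-INF/context.xml: Tomcat-managed DBCP pool -->
<Resource name="jdbc/SpringTestDS" auth="Container" type="javax.sql.DataSource"
          driverClassName="oracle.jdbc.driver.OracleDriver"
          url="jdbc:oracle:thin:@localhost:1521:SPRING_TEST"
          username="root" password="root"
          initialSize="20" maxActive="30" removeAbandoned="true" />

<!-- applicationContext.xml: look the pool up instead of creating it -->
<jee:jndi-lookup id="springDataSource" jndi-name="java:comp/env/jdbc/SpringTestDS" />
```

With this arrangement the pool settings live in the server, so they can be changed without rebuilding the application, which is the maintenance advantage described above.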

Spring Profiles in XML Config Files

My last blog was very simple, covering my painless upgrade from Spring 3.0.x to Spring 3.1.x, and I finished by mentioning that you can upgrade your Spring schemas to 3.1 to take advantage of Spring's newest features. In today's blog, I'm going to cover one of the coolest of these features: Spring profiles. But before talking about how you implement Spring profiles, I thought it would be a good idea to explore the problem they solve, which is the need to create different Spring configurations for different environments. This usually arises because your app needs to connect to several similar external resources during its development lifecycle, and more often than not these 'external resources' are databases, although they could be JMS queues, web services, remote EJBs, etc. The number of environments your app has to work on before it goes live usually depends upon a few things, including your organization's business processes, the scale of your app, and its 'importance' (i.e. if you're writing the tax collection system for your country's revenue service then the testing process may be more rigorous than if you're writing an eCommerce app for a local shop). Just so that you get the picture, below is a quick (and probably incomplete) list of all the different environments that came to mind:

- Local Developer Machine
- Development Test Machine
- The Test Team's Functional Test Machine
- The Integration Test Machine
- Clone Environment (a copy of live)
- Live

This is not a new problem, and it's usually solved by creating a set of Spring XML and properties files for each environment. The XML files usually consist of a master file that imports other environment-specific files. These are then coupled together at compile time to create different WAR or EAR files. This method has worked for years, but it does have a few problems. It's non-standard:
each organization usually has its own way of tackling this problem, with no two methods being quite the same. It's also difficult to implement, leaving lots of room for errors. And a different WAR/EAR file has to be created for, and deployed on, each environment, taking time and effort which could be better spent writing code.

The differences in the Spring bean configurations can normally be divided into two kinds. Firstly, there are environment-specific properties such as URLs and database names. These are usually injected into Spring XML files using the PropertyPlaceholderConfigurer class and the associated ${} notation.

    <bean id='propertyConfigurer' class='org.springframework.beans.factory.config.PropertyPlaceholderConfigurer'>
        <property name='locations'>
            <list>
                <value>db.properties</value>
            </list>
        </property>
    </bean>

Secondly, there are environment-specific bean classes such as data sources, which usually differ depending upon how you're connecting to a database. For example, in development you may have:

    <bean id='dataSource' class='org.springframework.jdbc.datasource.DriverManagerDataSource'>
        <property name='driverClassName'>
            <value>${database.driver}</value>
        </property>
        <property name='url'>
            <value>${database.uri}</value>
        </property>
        <property name='username'>
            <value>${database.user}</value>
        </property>
        <property name='password'>
            <value>${database.password}</value>
        </property>
    </bean>

…whilst in test or live you'll simply write:

    <jee:jndi-lookup id='dataSource' jndi-name='jdbc/LiveDataSource'/>

The Spring guidelines say that Spring profiles should only be used for the second case above – environment-specific bean classes – and that you should continue to use PropertyPlaceholderConfigurer to initialize simple bean properties; however, you may want to use Spring profiles to inject an environment-specific PropertyPlaceholderConfigurer into your Spring context.
Having said that, I'm going to break this convention in my sample code, as I want the simplest possible code to demonstrate Spring profiles' features.

Spring Profiles and XML Configuration

In terms of XML configuration, Spring 3.1 introduces the new profile attribute to the beans element of the spring-beans schema:

    <beans profile='dev'>

It's this profile attribute that acts as a switch when enabling and disabling profiles in different environments. To explain all this further, I'm going to use the simple idea that your application needs to load a Person class, and that Person class contains different properties depending upon the environment in which your program is running. The Person class is very trivial and looks something like this:

    public class Person {

        private final String firstName;
        private final String lastName;
        private final int age;

        public Person(String firstName, String lastName, int age) {
            this.firstName = firstName;
            this.lastName = lastName;
            this.age = age;
        }

        public String getFirstName() {
            return firstName;
        }

        public String getLastName() {
            return lastName;
        }

        public int getAge() {
            return age;
        }
    }

…and is defined in the following XML configuration files:

    <?xml version='1.0' encoding='UTF-8'?>
    <beans xmlns='http://www.springframework.org/schema/beans'
        xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
        xsi:schemaLocation='http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans-3.1.xsd'
        profile='test1'>

        <bean id='employee' class='profiles.Person'>
            <constructor-arg value='John' />
            <constructor-arg value='Smith' />
            <constructor-arg value='89' />
        </bean>
    </beans>

…and

    <?xml version='1.0' encoding='UTF-8'?>
    <beans xmlns='http://www.springframework.org/schema/beans'
        xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
        xsi:schemaLocation='http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans-3.1.xsd'
        profile='test2'>

        <bean id='employee' class='profiles.Person'>
            <constructor-arg value='Fred' />
            <constructor-arg value='ButterWorth' />
            <constructor-arg value='23' />
        </bean>
    </beans>

…called test-1-profile.xml and test-2-profile.xml respectively (remember these names, they're important later on). As you can see, the only differences in configuration are the first name, last name and age properties. Unfortunately, it's not enough simply to define your profiles; you also have to tell Spring which profile you're loading. This means that the following old 'standard' code will now fail:

    @Test(expected = NoSuchBeanDefinitionException.class)
    public void testProfileNotActive() {
        // Ensure that properties from other tests aren't set
        System.setProperty("spring.profiles.active", "");
        ApplicationContext ctx = new ClassPathXmlApplicationContext("test-1-profile.xml");
        Person person = ctx.getBean(Person.class);
        String firstName = person.getFirstName();
        System.out.println(firstName);
    }

Fortunately there are several ways of selecting your profile, and to my mind the most useful is the 'spring.profiles.active' system property. For example, the following test will now pass:

    System.setProperty("spring.profiles.active", "test1");
    ApplicationContext ctx = new ClassPathXmlApplicationContext("test-1-profile.xml");
    Person person = ctx.getBean(Person.class);
    String firstName = person.getFirstName();
    assertEquals("John", firstName);

Obviously, you wouldn't want to hard-code things as I've done above, and best practice usually means keeping the system property definitions separate from your application. This gives you the option of using either a simple command line argument such as:

    -Dspring.profiles.active="test1"

…or adding:

    # Setting a property value
    spring.profiles.active=test1

…to Tomcat's catalina.properties. So, that's all there is to it: you create your Spring XML profiles using the beans element's profile attribute and switch on the profile you want to use by setting the spring.profiles.active system property to your profile's name.
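In a web application there is one more standard place to set the same property: a servlet context parameter in web.xml, which Spring 3.1's Environment also consults. A sketch (profile name test1 as in the examples above):

```xml
<!-- web.xml: activate the 'test1' profile for the whole web application -->
<context-param>
    <param-name>spring.profiles.active</param-name>
    <param-value>test1</param-value>
</context-param>
```

This keeps the profile choice out of the code and out of server-wide files like catalina.properties, at the cost of baking it into the deployed WAR.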
Accessing Some Extra Flexibility

However, that's not the end of the story, as the guys at Spring have added a number of ways of programmatically loading and enabling profiles – should you choose to do so.

    @Test
    public void testProfileActive() {
        ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext("test-1-profile.xml");
        ConfigurableEnvironment env = ctx.getEnvironment();
        env.setActiveProfiles("test1");
        ctx.refresh();
        Person person = ctx.getBean("employee", Person.class);
        String firstName = person.getFirstName();
        assertEquals("John", firstName);
    }

In the code above, I've used the new ConfigurableEnvironment class to activate the 'test1' profile.

    @Test
    public void testProfileActiveUsingGenericXmlApplicationContextMultipleFilesSelectTest1() {
        GenericXmlApplicationContext ctx = new GenericXmlApplicationContext();
        ConfigurableEnvironment env = ctx.getEnvironment();
        env.setActiveProfiles("test1");
        ctx.load("*-profile.xml");
        ctx.refresh();
        Person person = ctx.getBean("employee", Person.class);
        String firstName = person.getFirstName();
        assertEquals("John", firstName);
    }

However, the guys at Spring now recommend that you use the GenericXmlApplicationContext class instead of ClassPathXmlApplicationContext and FileSystemXmlApplicationContext, as it provides additional flexibility. For example, in the code above, I've used GenericXmlApplicationContext's load(...) method to load a number of configuration files using a wild card:

    ctx.load("*-profile.xml");

Remember the filenames from earlier on? This will load both test-1-profile.xml and test-2-profile.xml. Profiles also include additional flexibility that allows you to activate more than one at a time.
If you take a look at the code below, you can see that I'm activating both my test1 and test2 profiles:

    @Test
    public void testMultipleProfilesActive() {
        GenericXmlApplicationContext ctx = new GenericXmlApplicationContext();
        ConfigurableEnvironment env = ctx.getEnvironment();
        env.setActiveProfiles("test1", "test2");
        ctx.load("*-profile.xml");
        ctx.refresh();
        Person person = ctx.getBean("employee", Person.class);
        String firstName = person.getFirstName();
        assertEquals("Fred", firstName);
    }

Beware: in this example I have two beans with the same id of 'employee', and there's no way of telling which one is valid and is supposed to take precedence. From my test, I'm guessing that the second one that's read overwrites, or masks access to, the first. This is okay, as you're not supposed to have multiple beans with the same name – it's just something to watch out for when activating multiple profiles. Finally, one of the better simplifications you can make is to use nested <beans/> elements:

    <?xml version='1.0' encoding='UTF-8'?>
    <beans xmlns='http://www.springframework.org/schema/beans'
        xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
        xmlns:context='http://www.springframework.org/schema/context'
        xsi:schemaLocation='http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
            http://www.springframework.org/schema/context
            http://www.springframework.org/schema/context/spring-context-3.1.xsd'>

        <beans profile='test1'>
            <bean id='employee1' class='profiles.Person'>
                <constructor-arg value='John' />
                <constructor-arg value='Smith' />
                <constructor-arg value='89' />
            </bean>
        </beans>

        <beans profile='test2'>
            <bean id='employee2' class='profiles.Person'>
                <constructor-arg value='Bert' />
                <constructor-arg value='William' />
                <constructor-arg value='32' />
            </bean>
        </beans>

    </beans>

This takes away all the tedious mucking about with wild cards and loading multiple files, albeit at the expense of a minimal amount of flexibility.
My next blog concludes my look at Spring profiles, by taking a look at the @Configuration annotation used in conjunction with the new @Profile annotation… so, more on that later. Reference: Using Spring Profiles in XML Config from our JCG partner Roger Hughes at the Captain Debug’s Blog blog....

8 Ways to improve your Java EE Production Support skills

Everybody involved in Java EE production support knows this job can be difficult: 24/7 pager support, multiple incidents and bug fixes to deal with on a regular basis, and pressure from the client and the management team to resolve production problems as fast as possible and prevent recurrences. On top of your day-to-day work, you also have to take care of multiple application deployments driven by multiple IT delivery teams. Sounds familiar? As hard as it can be, the reward for your hard work can be significant. You may have noticed from my past articles that I'm quite passionate about Java EE production support, root cause analysis, and performance-related problems. This post is all about sharing a few tips and work principles I have applied over the last 10+ years working with multiple Java EE production support teams, onshore and offshore. This article will provide you with 8 ways to improve your production support skills, which may help you better enjoy your IT support job and ultimately become a Java EE production support guru.

#1 – Partner with your clients and delivery teams

My first recommendation should not be a surprise to anybody. Regardless of how good you are from a technical perspective, you will be unable to succeed as a great production support leader if you fail to partner with your clients and IT delivery teams. You have to realize that you are providing a service to your client, who is the owner and master of the IT production environment. You are expected to ensure the availability of the critical Java EE production systems and to address both known problems and the problems to come. Stay away from damaging attitudes, such as a false impression that you are the actual owner, or getting frustrated at your client for a lack of understanding of a problem. Your job is to get all the facts right and provide good recommendations to your clients so they can make the right decisions.
Over time, a solid trust will be established between you and your client, with great benefits and opportunities. Building a strong relationship with the IT delivery team is also very important. The delivery team, which includes IT architects, project managers and technical resources, is seen as the team of experts responsible for building and enhancing the Java EE production environments via their established project delivery model. Over the years, I have seen several examples of friction between these two actors. The support team tends to be overly critical of the delivery team’s work due to bad experiences with failed deployments, surges of production incidents etc. I have also noticed examples where the delivery team tends to lack confidence in the support team’s capabilities, again due to bad experiences in the context of failed deployments or lack of proper root cause analysis or technical knowledge. As a production support individual, you have to build your credibility and stay away from a negative and unprofessional attitude. Building credibility means hard work, proper gathering of facts, technical and root cause analysis, and showing interest in learning new solutions. This will increase the trust with the delivery team and allow you to gain significant exposure and experience in the long term. Ultimately, you will be able to work with and provide consultation for both teams. Proper balance and professionalism between these three actors is key for any successful IT production environment.

#2 – Every production incident is a learning opportunity

One of the great things about Java EE production support is the multiple learning opportunities you are exposed to.
You may have realized that after each production outage you achieved at least one of the following goals:

- You gained new technical knowledge from a new problem type
- You increased your knowledge and experience of a known situation
- You increased your visibility and trust with your operation client
- You were able to share your existing knowledge with other team members, allowing them to succeed and resolve the problem

Please note that it is also normal to face negative experiences from time to time. Again, you will grow stronger from those and learn from your mistakes. Recurring problems, incidents or preventive work still offer you opportunities to gather more technical facts, pinpoint the root cause or come up with recommendations for a permanent resolution. The bottom line is that the more incidents you are involved with, the better. It is OK if you are not yet comfortable taking an active role in the incident recovery, but please ensure that you are present so you can at least gain experience and knowledge from your other, more experienced team members.

#3 – Don’t fear change, embrace it

One common problem I have noticed across Java EE support teams is a fear factor around production platform changes such as project deployments, infrastructure or network level changes etc.
Below are a few reasons for this common fear:

- For many support team members, application “change” is a synonym of production “instability”
- Lack of understanding of the project itself or the scope of changes automatically translates into fear
- Low comfort level in executing the requested application or middleware changes

Such a fear factor is often a symptom of gaps in the current release management process between the 3 main actors, or of production platform problems such as:

- Lack of proper knowledge transfer between the IT delivery and support teams
- An already unstable production environment prior to the new project deployment
- Lack of deep technical knowledge of Java EE or the middleware

Fear can be a serious obstacle to your future growth and must be dealt with seriously. My recommendation to you is that regardless of the existing gaps within your organization, simply embrace the changes, but combine this with proper due diligence such as asking for more KT, participating in project deployment strategy and risk assessments, performing code walkthroughs etc. This will allow you to eliminate that “fear” attitude and gain experience and credibility with your IT delivery team and client. This will also give you opportunities to build recommendations for future project deployments and infrastructure related improvements. Finally, if you feel that you lack the technical knowledge to implement the changes, simply say it and ask for another, more experienced team member to shadow your work. This approach will reduce your fear level and allow you to gain experience with minimal risk.

#4 – Learn how to read JVM Thread Dump and monitoring tools data

I’m sure you have noticed from my past articles and case studies that I use JVM Thread Dumps a lot. This is for a reason. Thread Dump analysis is one of the most important and valuable skills to acquire for any successful Java EE production support individual.
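As a side note, a thread dump can be captured not only with external tools such as jstack or kill -3, but also programmatically from within the JVM. Here is a minimal sketch using the standard java.lang.management API (the ThreadDumper class name is my own, for illustration only):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumper {

    // Builds a textual dump of all live threads, including lock
    // ownership details useful for deadlock and contention analysis
    public static String dump() {
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        StringBuilder dump = new StringBuilder();
        for (ThreadInfo info : threadMXBean.dumpAllThreads(true, true)) {
            dump.append(info.toString());
        }
        return dump.toString();
    }

    public static void main(String[] args) {
        System.out.println(dump());
    }
}
```

This can be handy for exposing an on-demand dump through an admin page or JMX; for real production incidents, regular jstack snapshots taken a few seconds apart remain the primary analysis input.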
I analyzed my first Thread Dump 10 years ago when troubleshooting a Weblogic 6 problem running on JDK 1.3. Ten years and hundreds of Thread Dump snapshots later, I’m still learning new problem patterns… The good part about the JVM and Thread Dumps is that you will always find new patterns to identify and understand. I can guarantee you that once you acquire this knowledge (along with JVM fundamentals), not only will a lot of production incidents be easier to pinpoint, but they will also be much more fun and self-rewarding. Given how easy, fast and non-intrusive it is these days to generate a JVM Thread Dump, there is simply no excuse not to learn this key troubleshooting technique. My other recommendation is to learn how to use existing monitoring tools and interpret their data. Java EE monitoring tools are highly valuable weapons for any production support individual involved in day-to-day support. Depending on the product purchased or the free tools used by your IT client, they will provide you with a performance view of your Java EE applications, middleware (Weblogic, JBoss, WAS…) and the JVM itself. This historical data is also critical when performing root cause analysis following a major production outage. Proper knowledge and understanding of the data will allow you to understand the IT platform performance and capacity, and give you opportunities to work with the IT capacity planning analysts and architect team, who are accountable for the long term stability and scalability of the IT production environment.

#5 – Learn how to write code and perform code walkthroughs

My next recommendation is to improve your coding skills. One of the most important responsibilities of a Java EE production support team, on top of regular bug fixes, is to act as a “gate keeper”, e.g. the last line of defense before the implementation of a project. This risk assessment exercise involves not only project review, test results, performance test reports etc. but also code walkthroughs.
Unfortunately, this review is often not performed properly, if done at all. The goal of the exercise is to identify areas for improvement and detect code defects potentially harmful for the production environment, such as thread-safety problems, lack of IO/Socket related timeouts etc. Your capability to perform such a code assessment depends on your coding skills and overall knowledge of Java EE design patterns and anti-patterns. Improving your coding skills can be done by following a few strategies as per below:

- Explore opportunities within your IT organization to perform delivery work
- Jump on any opportunity to review, officially or unofficially, existing or new project code
- Create personal Java EE development projects pertinent to your day-to-day work and long term career
- Join Java/Java EE Open Source projects and communities (Apache, JBoss, Spring…)

#6 – Don’t pretend that you know everything about Java, JVM & Middleware

Another common problem I have noticed with many Java EE production support individuals is a skill “plateau”. This is especially problematic when working on static IT production environments with few changes and hardening improvements. In this context, you get used very quickly to your day-to-day work, the technology used and the known problems. You then become very comfortable with your tasks, with a false impression of seniority. Then one day, your IT organization is faced with a re-org or you have to work for a new client. At this point you are shocked and struggling to overcome the new challenges. What happened?

- You reached a skill plateau within your small Java EE application list and middleware bubble
- You failed to invest time in yourself outside your work IT bubble
- You failed to acknowledge your lack of deeper Java, Java EE and middleware knowledge, e.g. a false impression of knowing everything
- You failed to keep your eyes open and explore the rest of the IT world and Java community

My main recommendation to you is that when you feel overconfident or overqualified in your current role, it is time to move on and take on new challenges. This could mean a different role within your existing support team, moving to a project delivery team for a certain time, or completely switching job and/or IT client. Constantly seeking new challenges will lead to:

- A significant increase of knowledge due to a higher diversity of technologies such as JVM vendors (HotSpot, IBM JVM, Oracle JRockit…), middleware (Weblogic, JBoss, WAS…), databases, OS, infrastructure etc.
- A significant increase of knowledge due to a higher diversity of implementations and solutions (SOA, Web development / portals, middle-tier, legacy integration, mobile development etc.)
- Increased learning opportunities due to new types of production incidents
- Increased visibility within your IT organization and the Java community
- Improved client skills and contacts
- Increased resistance to working under stress, e.g. learning how to use stress and adrenaline to your advantage (the typical boost you can get during a severe production outage)

#7 – Share your knowledge with your team and the Java community

Sharing your Java EE skills and production support experience is a great way to improve and maintain a strong relationship with your support team members. I also encourage you to participate and share your Java EE production problems with the Java community (blogs, forums, Open Source groups etc.) since a lot of problems are common and I’m sure people can benefit from your experience. That being said, one approach that I follow myself and highly recommend is to schedule planned (ideally weekly) internal training sessions. The topic is typically chosen via a simple voting system and presented by different members, when possible.
A good sharing mentality will naturally lead you to more research and reading, further increasing your skills in the long term.

#8 – Rise to the Challenge

At this point you have acquired a solid knowledge foundation and key troubleshooting skills. You have been involved in many production incidents with a good understanding of the root cause and resolution. You understand your IT production environment well, and your client is starting to request your presence directly on critical incidents. You are also spending time every week improving your coding skills and sharing with the Java community… but are you really up to the challenge? A true hero can be defined as an individual with a great capability to rise to the challenge and lead others to victory. Obviously you are not expected to save the world, but you can still be the “hero of the day” by rising to the challenge and leading your support team to the resolution of critical production outages. A truly successful and recognized Java EE production support person is not necessarily the strongest technical resource, but one who has learned how to properly balance his technical knowledge and client skills with a strong capability to rise to the challenge and take the lead when faced with difficult situations. I really hope these tips can help you in your day-to-day Java EE production support. Please share your experience and tips on how to improve your Java EE production support skills. Reference: 8 Ways to improve your Java EE Production Support skills from our JCG partner Pierre-Hugues Charbonneau at the Java EE Support Patterns & Java Tutorial blog....

I/O Demystified

With all the hype around highly scalable server design and the rage behind nodejs, I have been meaning to do some focused reading on I/O design patterns, which until now I couldn’t find enough time to invest in. Now, having done some research, I thought it best to jot down the stuff I came across as a future reference for me and anyone who may come across this post. OK then… Let’s hop on the I/O bus and go for a ride.

Types of I/O

There are four different ways I/O can be done, according to the blocking or non-blocking nature of the operations and the synchronous or asynchronous nature of I/O readiness/completion event notifications.

Synchronous Blocking I/O

This is where the I/O operation blocks the application until its completion, which forms the basis of the typical thread-per-connection model found in most web servers. When the blocking read() or write() is called there will be a context switch to the kernel, where the I/O operation happens and data is copied into a kernel buffer. Afterwards, the kernel buffer content is transferred to the user space application level buffer and the application thread is marked as runnable, upon which the application unblocks and reads the data in the user space buffer. The thread-per-connection model tries to limit the effect of this blocking by confining a connection to a thread, so that the handling of other concurrent connections will not be blocked by an I/O operation on one connection. This is fine as long as the connections are short lived and data link latencies are not that bad.
However, in the case of long lived or high latency connections the chances are that threads will be held up by these connections for a long time, causing starvation for new connections: if a fixed size thread pool is used, blocked threads cannot be reused to service new connections while blocked, and if each connection is serviced using a new thread, a large number of threads will be spawned within the system, which can become pretty resource intensive with high context switching costs under a highly concurrent load.

ServerSocket server = new ServerSocket(port);
while (true) {
  Socket connection = server.accept();
  // spawn a new thread (or hand off to a pool) to process the connection
  spawnThreadAndProcess(connection);
}

Synchronous Non Blocking I/O

In this mode the device or the connection is configured as non blocking, so that read() and write() operations will not block. This usually means that if the operation cannot be immediately satisfied it returns with an error code indicating that the operation would block (EWOULDBLOCK in POSIX) or that the device is temporarily unavailable (EAGAIN in POSIX). It is up to the application to poll until the device is ready and all the data are read. However, this is not very efficient, since each of these calls causes a context switch to the kernel and back irrespective of whether some data was read or not.

Asynchronous Non Blocking I/O with Readiness Events

The problem with the earlier mode was that the application had to poll and busy wait to get the job done. Wouldn’t it be better if the application was somehow notified when the device is ready to be read/written? That is exactly what this mode provides.
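To make the contrast concrete, the polling approach of the synchronous non blocking mode above can be sketched with Java NIO channels configured as non blocking. In Java the would-block condition surfaces as read() returning 0 rather than an errno; this is a minimal self-contained sketch (class name and loopback setup are mine, for illustration):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class NonBlockingPollDemo {

    public static String pollRead() throws IOException {
        // Loopback server on an ephemeral port, just to have a peer to talk to
        ServerSocketChannel server = ServerSocketChannel.open()
                .bind(new InetSocketAddress("localhost", 0));
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        client.configureBlocking(false); // reads now return immediately

        SocketChannel peer = server.accept();
        peer.write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));

        ByteBuffer buffer = ByteBuffer.allocate(64);
        // read() returning 0 is Java's analogue of EWOULDBLOCK/EAGAIN:
        // no data yet, so poll again (each iteration is a wasted syscall)
        while (client.read(buffer) == 0) {
            // busy-wait
        }
        buffer.flip();
        String message = StandardCharsets.UTF_8.decode(buffer).toString();
        peer.close();
        client.close();
        server.close();
        return message;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(pollRead());
    }
}
```

Real code should never busy-spin like this; the readiness-event mode discussed next replaces exactly this loop with a notification from the operating system.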
Using a special system call (which varies according to the platform – select()/poll()/epoll() for Linux, kqueue() for BSD, /dev/poll for Solaris), the application registers interest in getting I/O readiness information for a certain I/O operation (read or write) on a certain device (a file descriptor in Linux parlance, since all sockets are abstracted using file descriptors). Afterwards this system call is invoked, and it blocks until at least one of the registered file descriptors becomes ready. The file descriptors ready for doing I/O are then fetched as the return of the system call and can be serviced sequentially in a loop in the application thread. The ready connection processing logic is usually contained within a user provided event handler, which would still have to issue non blocking read()/write() calls to fetch data from device to kernel and ultimately to the user space buffer, incurring a context switch to the kernel. Moreover, there is usually no absolute guarantee that it will be possible to do the intended I/O with the device, since what the operating system provides is only an indication that the device might be ready to do the I/O operation of interest; the non blocking read() or write() can bail you out in such situations. However, this should be the exception rather than the norm. So the overall idea is to get readiness events in an asynchronous fashion and register event handlers to be invoked once such notifications are triggered. As you can see, all of this can be done in a single thread while multiplexing among different connections, primarily due to the nature of select() (here I choose a representative system call), which can return readiness of multiple sockets at a time. This is part of the appeal of this mode of operation, where one thread can serve a large number of connections at a time. This mode is what is usually known as the “Non Blocking I/O” model.
Java has abstracted out the differences between platform specific system call implementations with its NIO API. The socket/file descriptors are abstracted using Channels, and Selector encapsulates the selection system call. Applications interested in getting readiness events register a Channel (usually a SocketChannel obtained by an accept() on a ServerSocketChannel) with the Selector and get a SelectionKey, which acts as a handle holding the Channel and registration information. Then the blocking select() call is made on the Selector, which returns a set of SelectionKeys which can then be processed one by one using the application specified event handlers.

Selector selector = Selector.open();

channel.configureBlocking(false);
channel.register(selector, SelectionKey.OP_READ);

while (true) {
  int readyChannels = selector.select();
  if (readyChannels == 0) continue;

  Set<SelectionKey> selectedKeys = selector.selectedKeys();
  Iterator<SelectionKey> keyIterator = selectedKeys.iterator();

  while (keyIterator.hasNext()) {
    SelectionKey key = keyIterator.next();

    if (key.isAcceptable()) {
      // a connection was accepted by a ServerSocketChannel
    } else if (key.isConnectable()) {
      // a connection was established with a remote server
    } else if (key.isReadable()) {
      // a channel is ready for reading
    } else if (key.isWritable()) {
      // a channel is ready for writing
    }

    keyIterator.remove();
  }
}

Asynchronous and Non Blocking I/O with Completion Events

Readiness events only go so far in notifying you that the device/socket is ready to do something. The application still has to do the dirty work of reading the data from the device/socket (more accurately, directing the operating system to do so via a system call) all the way to the user space buffer.
Wouldn’t it be nice to delegate this job to the operating system, letting it run in the background and inform you once it has completed the job by transferring all the data from device to kernel buffer and finally to the application level buffer? That is the basic idea behind this mode, usually known as the “Asynchronous I/O” mode. For this, the operating system is required to support AIO operations. In Linux this support is present in the POSIX aio API since 2.6, and in Windows it is present in the form of “I/O Completion Ports”. With NIO2, Java has stepped up its support for this mode with its AsynchronousChannel API.

Operating System Support

In order to support readiness and completion event notifications, different operating systems provide varying system calls. For readiness events, select() and poll() can be used on Linux based systems. However, the newer epoll() variant is preferred due to its efficiency over select() and poll(): select() suffers from the fact that the selection time increases linearly with the number of descriptors monitored. It is apparently also notorious for overwriting the file descriptor array references, so each time it is called the descriptor array has to be repopulated from a separate copy. Not an elegant solution at any rate. The epoll() variant can be configured in two ways, namely edge-triggered and level-triggered. In the edge-triggered case it will emit a notification only when an event is detected on the associated descriptor. Say, during an edge-triggered notification your application handler only read half of the kernel input buffer: it won’t get a notification on this descriptor next time around, even though there is still data to be read, unless the device becomes ready to send more data, causing a new file descriptor event. The level-triggered configuration, on the other hand, will trigger a notification each time there is data to be read.
The comparable system calls are present in the form of kqueue in BSD flavours, and /dev/poll or “Event Completion” in Solaris, depending on the version. The Windows equivalent is “I/O Completion Ports”. The situation for the AIO mode, however, is a bit different, at least in the Linux case. The aio support for sockets in Linux seems to be shady at best, with some suggesting it is actually using readiness events at the kernel level while providing an asynchronous abstraction on completion events at the application level. Windows, however, again seems to support this first class via “I/O Completion Ports”.

I/O Design Patterns 101

There are patterns everywhere when it comes to software development. I/O is no different. There are a couple of I/O patterns associated with the NIO and AIO models, which are described below.

Reactor Pattern

There are several components participating in this pattern. I will go through them first so it will be easy to understand the diagram.

Reactor Initiator: This is the component which initiates the non blocking server by configuring and initiating the dispatcher. First it binds the server socket and registers it with the demultiplexer for client connection accept readiness events. Then the event handler implementations for each type of readiness event (read/write/accept etc.) are registered with the dispatcher. Next the dispatcher event loop is invoked to handle event notifications.

Dispatcher: Defines an interface for registering, removing, and dispatching Event Handlers responsible for reacting on connection events, which include connection acceptance, data input/output and timeout events on a set of connections. For servicing a client connection, the related event handler (e.g. the accept event handler) registers the accepted client channel (a wrapper for the underlying client socket) with the demultiplexer, along with the type of readiness events to listen for on that particular channel.
Afterwards, the dispatcher thread invokes the blocking readiness selection operation on the demultiplexer for the set of registered channels. Once one or more registered channels are ready for I/O, the dispatcher services each returned “Handle” associated with each ready channel one by one using the registered event handlers. It is important that these event handlers don’t hold up the dispatcher thread, since that would delay the dispatcher in servicing other ready connections. Since the usual logic within an event handler includes transferring data to/from the ready connection, which would block until all the data are transferred between user space and kernel space data buffers, these handlers are normally run in different threads from a thread pool.

Handle: A handle is returned once a channel is registered with the demultiplexer; it encapsulates the connection channel and readiness information. A set of ready Handles is returned by the demultiplexer’s readiness selection operation. The Java NIO equivalent is SelectionKey.

Demultiplexer: Waits for readiness events on one or more registered connection channels. The Java NIO equivalent is Selector.

Event Handler: Specifies the interface having hook methods for dispatching connection events. These methods need to be implemented by application specific event handler implementations.

Concrete Event Handler: Contains the logic to read/write data from the underlying connection and do the required processing, or to initiate the client connection acceptance protocol, from the passed Handle.

Event handlers are typically run in separate threads from a thread pool, as shown in the diagram below. A simple echo server implementation for this pattern is as follows (without an event handler thread pool).
public class ReactorInitiator {

  private static final int NIO_SERVER_PORT = 9993;

  public void initiateReactiveServer(int port) throws Exception {

    ServerSocketChannel server = ServerSocketChannel.open();
    server.socket().bind(new InetSocketAddress(port));
    server.configureBlocking(false);

    Dispatcher dispatcher = new Dispatcher();
    dispatcher.registerChannel(SelectionKey.OP_ACCEPT, server);

    dispatcher.registerEventHandler(
      SelectionKey.OP_ACCEPT,
      new AcceptEventHandler(dispatcher.getDemultiplexer()));
    dispatcher.registerEventHandler(
      SelectionKey.OP_READ,
      new ReadEventHandler(dispatcher.getDemultiplexer()));
    dispatcher.registerEventHandler(
      SelectionKey.OP_WRITE, new WriteEventHandler());

    dispatcher.run(); // Run the dispatcher loop
  }

  public static void main(String[] args) throws Exception {
    System.out.println("Starting NIO server at port : " + NIO_SERVER_PORT);
    new ReactorInitiator().initiateReactiveServer(NIO_SERVER_PORT);
  }
}

public class Dispatcher {

  private Map<Integer, EventHandler> registeredHandlers =
    new ConcurrentHashMap<Integer, EventHandler>();
  private Selector demultiplexer;

  public Dispatcher() throws Exception {
    demultiplexer = Selector.open();
  }

  public Selector getDemultiplexer() {
    return demultiplexer;
  }

  public void registerEventHandler(
    int eventType, EventHandler eventHandler) {
    registeredHandlers.put(eventType, eventHandler);
  }

  // Used to register the ServerSocketChannel with the
  // selector to accept incoming client connections
  public void registerChannel(
    int eventType, SelectableChannel channel) throws Exception {
    channel.register(demultiplexer, eventType);
  }

  public void run() {
    try {
      while (true) { // Loop indefinitely
        demultiplexer.select();

        Set<SelectionKey> readyHandles = demultiplexer.selectedKeys();
        Iterator<SelectionKey> handleIterator = readyHandles.iterator();

        while (handleIterator.hasNext()) {
          SelectionKey handle = handleIterator.next();

          if (handle.isAcceptable()) {
            EventHandler handler =
              registeredHandlers.get(SelectionKey.OP_ACCEPT);
            handler.handleEvent(handle);
          }

          if (handle.isReadable()) {
            EventHandler handler =
              registeredHandlers.get(SelectionKey.OP_READ);
            handler.handleEvent(handle);
          }

          if (handle.isWritable()) {
            EventHandler handler =
              registeredHandlers.get(SelectionKey.OP_WRITE);
            handler.handleEvent(handle);
          }

          // Remove the handle from the selected set; this does not
          // deregister the channel from the selector, so the server
          // channel keeps listening for new client connections
          handleIterator.remove();
        }
      }
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}

public interface EventHandler {
  public void handleEvent(SelectionKey handle) throws Exception;
}

public class AcceptEventHandler implements EventHandler {

  private Selector demultiplexer;

  public AcceptEventHandler(Selector demultiplexer) {
    this.demultiplexer = demultiplexer;
  }

  @Override
  public void handleEvent(SelectionKey handle) throws Exception {
    ServerSocketChannel serverSocketChannel =
      (ServerSocketChannel) handle.channel();
    SocketChannel socketChannel = serverSocketChannel.accept();
    if (socketChannel != null) {
      socketChannel.configureBlocking(false);
      socketChannel.register(demultiplexer, SelectionKey.OP_READ);
    }
  }
}

public class ReadEventHandler implements EventHandler {

  private Selector demultiplexer;
  private ByteBuffer inputBuffer = ByteBuffer.allocate(2048);

  public ReadEventHandler(Selector demultiplexer) {
    this.demultiplexer = demultiplexer;
  }

  @Override
  public void handleEvent(SelectionKey handle) throws Exception {
    SocketChannel socketChannel = (SocketChannel) handle.channel();

    inputBuffer.clear();
    socketChannel.read(inputBuffer); // Read data from client

    inputBuffer.flip(); // Flip the buffer to start reading from the beginning
    byte[] buffer = new byte[inputBuffer.limit()];
    inputBuffer.get(buffer);

    System.out.println("Received message from client : " + new String(buffer));
    inputBuffer.rewind(); // Rewind so the same data can be written back

    // Register interest in the writable readiness event for
    // this channel in order to echo back the message
    socketChannel.register(demultiplexer, SelectionKey.OP_WRITE, inputBuffer);
  }
}

public class WriteEventHandler implements EventHandler {

  @Override
  public void handleEvent(SelectionKey handle) throws Exception {
    SocketChannel socketChannel = (SocketChannel) handle.channel();
    ByteBuffer inputBuffer = (ByteBuffer) handle.attachment();
    socketChannel.write(inputBuffer);
    socketChannel.close(); // Close connection
  }
}

Proactor Pattern

This pattern is based on the asynchronous I/O model. The main components are as follows.

Proactive Initiator: This is the entity which initiates the Asynchronous Operation accepting client connections. This is usually the server application’s main thread. It registers a Completion Handler along with a Completion Dispatcher to handle connection acceptance asynchronous event notifications.

Asynchronous Operation Processor: This is responsible for carrying out I/O operations asynchronously and providing completion event notifications to the application level Completion Handler. This is usually the asynchronous I/O interface exposed by the operating system.

Asynchronous Operation: Asynchronous Operations are run to completion by the Asynchronous Operation Processor in separate kernel threads.

Completion Dispatcher: This is responsible for calling back to the application Completion Handlers when Asynchronous Operations complete. When the Asynchronous Operation Processor completes an asynchronously initiated operation, the Completion Dispatcher performs an application callback on its behalf. It usually delegates the event notification handling to the suitable Completion Handler according to the type of the event.

Completion Handler: This is the interface implemented by the application to process asynchronous operation completion events.

Let’s look at how this pattern can be implemented (as a simple echo server) using the new Java NIO.2 API added in Java 7.
public class ProactorInitiator { static int ASYNC_SERVER_PORT = 4333;public void initiateProactiveServer(int port) throws IOException {final AsynchronousServerSocketChannel listener = AsynchronousServerSocketChannel.open().bind( new InetSocketAddress(port)); AcceptCompletionHandler acceptCompletionHandler = new AcceptCompletionHandler(listener);SessionState state = new SessionState(); listener.accept(state, acceptCompletionHandler); }public static void main(String[] args) { try { System.out.println('Async server listening on port : ' + ASYNC_SERVER_PORT); new ProactorInitiator().initiateProactiveServer( ASYNC_SERVER_PORT); } catch (IOException e) { e.printStackTrace(); }// Sleep indefinitely since otherwise the JVM would terminate while (true) { try { Thread.sleep(Long.MAX_VALUE); } catch (InterruptedException e) { e.printStackTrace(); } } } }public class AcceptCompletionHandler implements CompletionHandler<AsynchronousSocketChannel, SessionState> {private AsynchronousServerSocketChannel listener;public AcceptCompletionHandler( AsynchronousServerSocketChannel listener) { this.listener = listener; }@Override public void completed(AsynchronousSocketChannel socketChannel, SessionState sessionState) { // accept the next connection SessionState newSessionState = new SessionState(); listener.accept(newSessionState, this);// handle this connection ByteBuffer inputBuffer = ByteBuffer.allocate(2048); ReadCompletionHandler readCompletionHandler = new ReadCompletionHandler(socketChannel, inputBuffer); socketChannel.read( inputBuffer, sessionState, readCompletionHandler); }@Override public void failed(Throwable exc, SessionState sessionState) { // Handle connection failure... 
    }
}

public class ReadCompletionHandler implements CompletionHandler<Integer, SessionState> {

    private AsynchronousSocketChannel socketChannel;
    private ByteBuffer inputBuffer;

    public ReadCompletionHandler(AsynchronousSocketChannel socketChannel, ByteBuffer inputBuffer) {
        this.socketChannel = socketChannel;
        this.inputBuffer = inputBuffer;
    }

    @Override
    public void completed(Integer bytesRead, SessionState sessionState) {
        byte[] buffer = new byte[bytesRead];
        inputBuffer.rewind(); // Rewind the input buffer to read from the beginning
        inputBuffer.get(buffer);
        String message = new String(buffer);

        System.out.println("Received message from client : " + message);

        // Echo the message back to client
        WriteCompletionHandler writeCompletionHandler =
            new WriteCompletionHandler(socketChannel);

        ByteBuffer outputBuffer = ByteBuffer.wrap(buffer);

        socketChannel.write(outputBuffer, sessionState, writeCompletionHandler);
    }

    @Override
    public void failed(Throwable exc, SessionState attachment) {
        // Handle read failure...
    }
}

public class WriteCompletionHandler implements CompletionHandler<Integer, SessionState> {

    private AsynchronousSocketChannel socketChannel;

    public WriteCompletionHandler(AsynchronousSocketChannel socketChannel) {
        this.socketChannel = socketChannel;
    }

    @Override
    public void completed(Integer bytesWritten, SessionState attachment) {
        try {
            socketChannel.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void failed(Throwable exc, SessionState attachment) {
        // Handle write failure...
    }
}

public class SessionState {

    private Map<String, String> sessionProps = new ConcurrentHashMap<String, String>();

    public String getProperty(String key) {
        return sessionProps.get(key);
    }

    public void setProperty(String key, String value) {
        sessionProps.put(key, value);
    }
}

Each type of event completion (accept/read/write) is handled by a separate completion handler implementing the CompletionHandler interface (AcceptCompletionHandler, ReadCompletionHandler, WriteCompletionHandler).
The state transitions are managed inside these completion handlers. The additional SessionState argument can be used to hold client-session-specific state across a series of completion events.

NIO Frameworks (HTTPCore)

If you are thinking of implementing an NIO-based HTTP server, you are in luck. The Apache HTTPCore library provides excellent support for handling HTTP traffic with NIO. The API provides higher-level abstractions on top of the NIO layer, with HTTP request handling built in. A minimal non-blocking HTTP server implementation which returns a dummy output for any GET request is given below.

public class NHttpServer {

    public void start() throws IOReactorException {
        HttpParams params = new BasicHttpParams();
        // Connection parameters
        params.setIntParameter(HttpConnectionParams.SO_TIMEOUT, 60000)
              .setIntParameter(HttpConnectionParams.SOCKET_BUFFER_SIZE, 8 * 1024)
              .setBooleanParameter(HttpConnectionParams.STALE_CONNECTION_CHECK, true)
              .setBooleanParameter(HttpConnectionParams.TCP_NODELAY, true);

        // Spawns an IOReactor having two reactor threads running selectors.
        // Number of threads here is usually matched to the number of
        // processor cores in the system
        final DefaultListeningIOReactor ioReactor = new DefaultListeningIOReactor(2, params);

        // Application specific readiness event handler
        ServerHandler handler = new ServerHandler();

        // Default IO event dispatcher encapsulating the event handler
        final IOEventDispatch ioEventDispatch = new DefaultServerIOEventDispatch(handler, params);

        ListenerEndpoint endpoint = ioReactor.listen(new InetSocketAddress(4444));

        // start the IO reactor in a new separate thread
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println("Listening on port 4444");
                    ioReactor.execute(ioEventDispatch);
                } catch (InterruptedIOException ex) {
                    ex.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        t.start();

        // Wait for the endpoint to become ready, i.e.
        // for the listener to start accepting requests.
        try {
            endpoint.waitFor();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws IOReactorException {
        new NHttpServer().start();
    }
}

public class ServerHandler implements NHttpServiceHandler {

    private static final int BUFFER_SIZE = 2048;

    private static final String RESPONSE_SOURCE_BUFFER = "response-source-buffer";

    // the factory to create HTTP responses
    private final HttpResponseFactory responseFactory;

    // the HTTP response processor
    private final HttpProcessor httpProcessor;

    // the strategy to re-use connections
    private final ConnectionReuseStrategy connStrategy;

    // the buffer allocator
    private final ByteBufferAllocator allocator;

    public ServerHandler() {
        super();
        this.responseFactory = new DefaultHttpResponseFactory();
        this.httpProcessor = new BasicHttpProcessor();
        this.connStrategy = new DefaultConnectionReuseStrategy();
        this.allocator = new HeapByteBufferAllocator();
    }

    @Override
    public void connected(NHttpServerConnection nHttpServerConnection) {
        System.out.println("New incoming connection");
    }

    @Override
    public void requestReceived(NHttpServerConnection nHttpServerConnection) {
        HttpRequest request = nHttpServerConnection.getHttpRequest();
        if (request instanceof HttpEntityEnclosingRequest) {
            // Handle POST and PUT requests
        } else {
            ContentOutputBuffer outputBuffer =
                new SharedOutputBuffer(BUFFER_SIZE, nHttpServerConnection, allocator);

            HttpContext context = nHttpServerConnection.getContext();
            context.setAttribute(RESPONSE_SOURCE_BUFFER, outputBuffer);
            OutputStream os = new ContentOutputStream(outputBuffer);

            // create the default response to this request
            ProtocolVersion httpVersion = request.getRequestLine().getProtocolVersion();
            HttpResponse response = responseFactory.newHttpResponse(
                httpVersion, HttpStatus.SC_OK, nHttpServerConnection.getContext());

            // create a basic HttpEntity using the source
            // channel of the response pipe
            BasicHttpEntity entity = new BasicHttpEntity();
            if
            (httpVersion.greaterEquals(HttpVersion.HTTP_1_1)) {
                entity.setChunked(true);
            }
            response.setEntity(entity);

            String method = request.getRequestLine().getMethod().toUpperCase();

            if (method.equals("GET")) {
                try {
                    nHttpServerConnection.suspendInput();
                    nHttpServerConnection.submitResponse(response);
                    os.write(new String("Hello client..").getBytes("UTF-8"));
                    os.flush();
                    os.close();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
            // Handle other http methods
        }
    }

    @Override
    public void inputReady(NHttpServerConnection nHttpServerConnection,
                           ContentDecoder contentDecoder) {
        // Handle request enclosed entities here by reading
        // them from the channel
    }

    @Override
    public void responseReady(NHttpServerConnection nHttpServerConnection) {
        try {
            nHttpServerConnection.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void outputReady(NHttpServerConnection nHttpServerConnection,
                            ContentEncoder encoder) {
        HttpContext context = nHttpServerConnection.getContext();
        ContentOutputBuffer outBuf =
            (ContentOutputBuffer) context.getAttribute(RESPONSE_SOURCE_BUFFER);

        try {
            outBuf.produceContent(encoder);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void exception(NHttpServerConnection nHttpServerConnection, IOException e) {
        e.printStackTrace();
    }

    @Override
    public void exception(NHttpServerConnection nHttpServerConnection, HttpException e) {
        e.printStackTrace();
    }

    @Override
    public void timeout(NHttpServerConnection nHttpServerConnection) {
        try {
            nHttpServerConnection.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void closed(NHttpServerConnection nHttpServerConnection) {
        try {
            nHttpServerConnection.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

The IOReactor class basically wraps the demultiplexer functionality, with the ServerHandler implementation handling readiness events.
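Assuming the NHttpServer above is running locally on port 4444, a quick plain-Java check of the server might look like the following sketch (the class and helper-method names here are illustrative, not part of the original article):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class NHttpClientCheck {

    // Issue a GET request to the given URL and return the response body
    public static String fetch(String urlString) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
        conn.setRequestMethod("GET");
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        // Against the example server above this should print the dummy response
        System.out.println(fetch("http://localhost:4444/"));
    }
}
```

This is just an ad-hoc smoke test; any HTTP client (curl, a browser) would do equally well.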
Apache Synapse (an open-source ESB) contains a good implementation of an NIO-based HTTP server, in which NIO is used to scale to a large number of clients per instance with fairly constant memory usage over time. The implementation also contains good debugging and server statistics collection mechanisms built in, along with Axis2 transport framework integration. It can be found at [1].

Conclusion

There are several options when it comes to doing I/O, and they can affect the scalability and performance of servers. Each of the above I/O mechanisms has pros and cons, so decisions should be made based on the expected scalability and performance characteristics and the ease of maintenance of each approach. This concludes my somewhat long-winded article on I/O. Feel free to provide suggestions, corrections or comments that you may have. Complete source code for the servers outlined in the post, along with clients, can be downloaded from here.

Related links

There were many references I went through in the process. Below are some of the interesting ones.

[1] http://www.ibm.com/developerworks/java/library/j-nio2-1/index.html
[2] http://www.ibm.com/developerworks/linux/library/l-async/
[3] http://lse.sourceforge.net/io/aionotes.txt
[4] http://wknight8111.blogspot.com/?tag=aio
[5] http://nick-black.com/dankwiki/index.php/Fast_UNIX_Servers
[6] http://today.java.net/pub/a/today/2007/02/13/architecture-of-highly-scalable-nio-server.html
[7] Java NIO by Ron Hitchens
[8] http://www.dre.vanderbilt.edu/~schmidt/PDF/reactor-siemens.pdf
[9] http://www.cs.wustl.edu/~schmidt/PDF/proactor.pdf
[10] http://www.kegel.com/c10k.html

Reference: I/O Demystified from our JCG partner Buddhika Chamith at the Source Open blog....

Why You Didn’t Get the Interview

After reading the tremendous response to Why You Didn’t Get the Job (a sincere thanks to those that read and shared the post) I realized that many of the reasons referenced were specific to mistakes candidates make during interviews. At least a handful of readers told me that they didn’t get the job because they didn’t even get the interview.

With a down economy, most of us have heard accounts of a job seeker sending out 100, 200, perhaps 300 résumés without getting even one response. These anecdotes are often received by sympathetic ears who commiserate and then share their personal stories of a failed job search. To anyone who has sent out large quantities of résumés without any response or interviews, I offer this advice: The complete lack of response is not due to the economy. The lack of response is based on your résumé, your experience, or your résumé submission itself.

My intent here is to help and certainly not to offend, so if you are one of these people that has had a hard time finding new work, please view this as free advice mixed with a touch of tough love. I have read far too many comments lately from struggling job seekers casting blame for their lack of success in the search (“it wasn’t a real job posting”, “the manager wasn’t a good judge of talent”, etc.), but now it’s time to take a look inward at how you can maximize your success. I spoke to a person recently who had sent out over 100 résumés without getting more than two interviews, and I quickly discovered that the reasons for the failure were quite obvious to the trained eye (mine). The economy isn’t great, but there are candidates being interviewed for the jobs you are applying for (most of them anyway), and it’s time to figure out why that interview isn’t being given to you.

If you apply for a job and don’t receive a response, there are only a few possibilities as to why that are within our control (please note the emphasis before commenting).
Generally the problem is a mistake made during the résumé submission itself, problems with the résumé, or your experience. Qualified candidates that pay attention to these tips will see better results from their search efforts.

Your Résumé Submission

Résumés to jobs@blackholeofdeath – The problem here isn’t that your résumé or application was flawed, it’s just that nobody has read it. Sending to hr@ or jobs@ addresses is never ideal, and your résumé may be funneled to a scoring system that scans it for certain buzzwords and rates it based on the absence, presence and frequency of these words. HRbot apocalypse… Solution – Do some research to see if you know anyone who works/worked at the company, even a friend of a friend, to submit the résumé. Protip: Chances are the internal employee may even get a referral bonus. LinkedIn is a valuable tool for this. Working with an agency recruiter will also help here, as recruiters are typically sending your information directly to internal HR or hiring managers.

Follow instructions – If the job posting asks that you send a cover letter, résumé, and salary requirements, this request serves two purposes. First and most obviously, they actually want to see how well you write (cover letter), your experience (résumé), and the price tag (salary requirements). Second, they want to see if you are able and willing to follow instructions. Perhaps that is why the ad requested the documents in a specific format? Some companies are now consciously making the application process even a bit more complicated, which serves as both a test of your attention to detail and a way to gauge whether applicants are interested enough to take an extra step. Making it more difficult for candidates to apply should yield a qualified and engaged candidate pool, which is the desired result. Solution – Carefully read what the manager/recruiter is seeking and be sure to follow the directions exactly.
Have a friend review your application before hitting send.

Spelling and grammar – Spelling errors are inexcusable on a résumé today. Grammar is given much more leeway, but frequent grammatical errors are a killer. Solution – Have a friend or colleague read it for you, as it is much more difficult to edit your own material (trust me).

Price tag – As you would expect, if you provide a salary requirement that is well above the listed (or unlisted) range, you will not get a response. Conversely and counterintuitively, if you provide a salary requirement that is well below the range, you will also not get a response. Huh? Suppose you want to hire someone to put in a new kitchen, and you get three estimates. The first is 25K, the second is 20K, and the third is 2K. Which one are you going to choose? It’s hard to tell, but I’m pretty sure you aren’t going to use the one that quoted you 2K. Companies want to hire candidates that are aware of market value and priced accordingly, and anyone asking for amounts well above market will not get any attention. Solution – Research the going rate for the job and be sure to manage your expectations based on market conditions. Another strategy is trying to delay providing salary information until mutual interest is established. If the company falls in love, the compensation expectation might hurt less. There is some risk of wasting time in interviews if you do not provide information early in the process, and most companies today will require the information before agreeing to an interview.

Canned application – By ‘canned’ I am referring to job seekers that are obviously cutting and pasting content from previous cover letters instead of taking the time to try and personalize the content. Solution – Go to the hiring firm’s website and find something specific and unique that makes you want to work for that company. Include that information in your submission.
If you are using a template and just filling in the blanks (“I read your job posting on _____ and I am really excited to learn that your company _____ is hiring a ______”), delete the template now. If you aren’t willing to invest even a few minutes into the application process, why should the company invest any time learning about you?

Too eager – If I receive a résumé submission for a job posting and then get a second email from that candidate within 24 hours asking about the submission, I can be fairly sure that this is an omen. If I get a call on my mobile immediately after receiving the application ‘just to make sure it came through’, you might as well just have the Psycho music playing in the background. Even if this candidate is qualified, there will probably be lots of hand-holding and coaching required to get this person hired. Reasonably qualified candidates with realistic expectations and business acumen don’t make this mistake. Solution – Have patience while waiting for a response to your résumé, and be sure to give someone at least a couple/few days to respond. If you are clearly qualified for a position, you will get a reply when your résumé hits the right desk. Pestering or questioning the ability of those that are processing your application is a guarantee that you will not be called in.

Your Résumé

Your objective – If your objective states “Seeking a position as a Python developer in a stable corporate environment”, don’t expect a callback from the start-up company looking for a Ruby developer. This applies even if you are qualified for the job! Why doesn’t the company want to talk to you if you are qualified? Because you clearly stated that you wanted to do something else. If you put in writing that you are seeking a specific job, that information must closely resemble the job to which you are applying.
Solution – You may choose to have multiple copies of your résumé with multiple objectives, so you can customize the résumé to the job (just be sure to remember which one you used so you bring the correct résumé to the interview!). As there may be a range of positions you are both qualified for and willing to take, using a ‘Profile’ section that summarizes your skills instead of an ‘Objective’ is a safer alternative.

Spelling and grammar (again) – see above.

tl;dr – To any non-geek readers, this means ‘too long; didn’t read’. To my geek readers, many of you are guilty of this. I’ve written about this over and over again, but I still get seven-page résumés from candidates. I have witnessed hiring managers respond to long-winded résumés with such gems as ‘if her résumé is this long, imagine how verbose her code will be’. (Even for non-Java candidates! #rimshot) Hiring managers for jobs that require writing skills or even verbal communication can be extremely critical of tl;dr résumés. Solution – Keep it to two or three pages maximum. If you can’t handle that, get professional help.

Buzzword bingo – This is a term that industry insiders use to refer to résumés that include a laundry list of acronyms and buzzwords. The goal is to either catch the eye of an automated search robot (or human) designed to rate résumés based on certain words, or to insinuate that the candidate actually has all the listed skills. Software engineers are probably more guilty of this than other professionals, as the inclusion of one particular skill can sometimes make the difference between your document being viewed by an actual human or not. When candidates list far more skill buzzwords than would be reasonably possible for one person to actually know, you can be sure the recruiter or manager will pass based on credibility concerns. Solution – I advise you to limit the buzzwords on your résumé to technologies, tools, or concepts that you could discuss in an intelligent conversation.
If you would not be comfortable answering questions about it in an interview, leave it off.

Your Experience

Gaping holes – If you have had one or more extended periods of unemployment, hiring managers and recruiters may simply decide to pass on you instead of asking about the reasons why. Perhaps you took a sabbatical, went back to school full-time, or left on maternity leave. Don’t assume that managers are going to play detective and figure out that the years associated with your Master’s degree correspond to the two-year gap in employment. Solution – Explain and justify any periods of unemployment on your résumé with as much clarity as possible without going into too many personal details. Mentioning family leave is appropriate, but providing the medical diagnosis of your sick relative is not.

Job hopping – Some managers are very wary of candidates that have had multiple employers over short periods of time. In the software world it tends to be common to make moves a bit more frequently than in some other professions, but there comes a point where it’s one move too many and you may be viewed as a job hopper. The fear of hiring a job hopper has several roots. A manager may feel you are a low performer, a mercenary that always goes to the highest bidder, or that you may get bored after a short time and seek a new challenge. Companies are unwilling to invest in hires that appear to be temporary. Solution – If the moves were the result of mergers, acquisitions, layoffs, or a change in company direction, be sure to note these conditions somewhere in the résumé. Never use what could be viewed as potentially derogatory information in the explanation. Clearly list if certain jobs were project/contract work.

Listed experience is irrelevant/unrelated – This could be a symptom of simply being unqualified for the position, or it could be tied to an inability to detail what you actually do that is relevant to the listed job requirements.
I would suspect that most of the aforementioned people (who received no responses to 100 submissions) probably fall into the unqualified category, as job seekers tend to feel overconfident about being a fit for a wider range of positions than is realistic. Companies expect a very close fit during a buyer’s market, and are willing to open up their hiring standards a bit when the playing field starts to level. Solution – Be sure to elaborate on all elements of your job that closely resemble the responsibilities listed in the posting. Instead of wasting time filling out applications for jobs that are clearly well out of reach, spend that time researching jobs that are a better match for you.

You are overqualified – The term ‘overqualified’ seems to be overused by rejected applicants today, as there is no real stigma to the term. It’s entirely comfortable for a candidate to say/think “I didn’t get the job because I possess more skills at a higher level than the employer was seeking”. When a company is seeking an intermediate-level engineer, it isn’t always because they want someone earlier in their career than a senior-level engineer (although in some cases this could be true). Rather, they want the intermediate-level engineer because that is what their budget dictates, or they expect that senior engineers would not be challenged by the role (and therefore would leave). There are also situations where companies will not want to hire you because your experience indicates that you will only be taking this job until something better comes along. A CEO applying for a job as a toll collector will not be taken seriously. Solution – Be sure that your résumé accurately represents your level of skill and experience. Inflating your credentials or job titles will always work against you.

Conclusion

The time you spend on your job search is valuable, so be sure to use it wisely.
Invest additional effort on applications for jobs that you feel are a great fit, and go above and beyond to be sure your submission gets attention. As a general rule of thumb, you want to be sure that whoever receives your résumé will get it into the hands of someone who has a similar job to the one you want, not just someone trained to look for buzzwords. Employees that have similar experience will be the best judges of your fit. If you aren’t getting the response you want, do not keep using the same methods and expecting a different result.

Reference: Why You Didn’t Get the Interview from our JCG partner Dave Fecak at the Job Tips For Geeks blog....

Git newbie commands

If you’re new to Git you will recognize that some things work differently compared to SVN- or CVS-based repositories. This blog explains the 10 most important commands in a Git workflow that you need to know about. If you are on Windows and you want to follow the steps below, all you need to do is set up Git on your local machine. Before we go into Git commands, bear in mind (and do not forget!) that Git has a working directory, a staging area and the local repository. See the overview below taken from http://progit.org.

The Git workflow is as follows: You go to the directory where you want to have version control. You use git init to put this directory under version control. This creates a new repository for that current location. You make changes to your files, then use git add to stage files into the staging area. You’ll use git status and git diff to see what you’ve changed, and then finally git commit to actually record the snapshot forever into your local repository. When you want to upload your changes to a remote repository you’ll use git push. When you want to download changes from a remote repository to your local repository you’ll use git fetch and git merge. Let’s go through this step-by-step.

To create a repository from an existing directory of files, you can simply run git init in that directory. Go to the directory you want to get under version control:

git init

All new and changed files need to be added to the staging area prior to committing to the repository. To add all files in the current directory to the staging area:

git add --all

To commit the files and changes to the repository:

git commit -am "Initial commit"

Note that I have used the -am option, which implicitly stages all modified and deleted tracked files (brand-new files still need git add). This command is equivalent to the SVN- or CVS-style “commit”. Again: if you want to update your local Git repository there is always an add operation followed by a commit operation, with all new and modified files. Then you create a repository at Github.com.
Let’s say you named it git_example. Then you add the remote repository address to your local Git configuration:

git remote add EXAMPLE https://nschlimm@github.com/nschlimm/git_example.git

Note that in the example that’s my user name on Github.com. You’ll need to use yours, obviously. I have named the remote repository “EXAMPLE”. You can refer to this alias name instead of using the remote URL all the time. You are now ready to communicate with a remote repository. If you are behind a firewall you make the proxy configuration:

git config --global http.proxy http://username:password@someproxy.mycompany.com:80

Then you push your files to the remote repository:

git push EXAMPLE master

Then imagine somebody changed the remote files. You need to get them:

git fetch EXAMPLE

You need to merge those changes from the remote master into your local master branch (in plain language: you copy the fetched changes into your current branch and working directory). Assuming that your current context is the master branch and you want to merge the fetched EXAMPLE branch into master, you’ll write:

git merge EXAMPLE

To compare the staging area to your working directory:

git status -s

The example shows the status after I have modified the README.txt (but have not added or committed yet). Without any extra arguments, a simple git diff will display what content you’ve changed in your project since the last commit that is not yet staged for the next commit snapshot:

git diff

The example shows the diff output after I have edited the README.txt file (but have not added or committed yet). When I add all changes to staging, git diff will not display changes because there is nothing in your working directory that has not been staged. It’s different with git status, which shows the differences between your last commit and the staging/working area. In short: git status shows differences between your local repository and your working directory/staging area.
Whereas git diff (as used above) shows differences between your staging area and your working directory. That’s it. These are the most important Git commands a newbie must know to get started. See the gitref.org reference for more information on using Git.

Downloading a remote repository

If you’d like to copy a repository from Github.com (or any other remote address) to your local machine:

git clone https://nschlimm@github.com/nschlimm/spring-decorator.git

You can now work on the code and push the changes back to that remote repository if you like.

Working with branches – changing your current context

A branch in Git is nothing but the “current context” you are working in. Typically you start working in the “master” branch. Let’s say you want to try some stuff and you’re not sure if what you’re doing is a good idea (which happens very often, actually :-)). In that case you can create a new branch and experiment with your idea:

git branch [branch_name]

When you just enter git branch it will list your existing branches. If you’d like to work with your new branch, you can write:

git checkout [branch_name]

One important fact to note: switching between branches does not change the state of your modified files. Say you have a modified file foo.java. You switch from the master branch to your new some_crazy_idea branch. After the switch foo.java will still be in modified state. You could commit it to the some_crazy_idea branch now. If you then switch to the master branch, however, this commit would not be visible, because you did not commit within the master branch context. If the file was new, you would not even see the file in your working tree anymore. If you want to let others know about your new idea you push the branch to the remote repository:

git push [remote_repository_name] [branch_name]

You’d use fetch instead of push to get the changes in a remote branch into your local repository again.
This is how you delete a branch again if you don’t need it anymore:

git branch -d [branch_name]

Removing files

If you accidentally committed something to a branch you can easily remove the file again. For example, to remove the readme.txt file in your current branch:

git rm --cached readme.txt

The --cached option only removes the file from the index. Your working directory remains unchanged. You can also remove a folder. The .settings folder of an Eclipse project – for instance – is nothing you should share with others:

git rm --cached -r some_eclipse_project/.settings

After you run the rm command the file is still in the repository history. You can permanently delete the complete history of a file with the command below. Note: be very careful with commands like this and try them on a copy of your repository before you apply them to your productive repository. Always create a copy of the complete repository before you run such commands.

git filter-branch --index-filter 'git rm --cached --ignore-unmatch [your_file_name]' HEAD

Ignoring files: you do not want to version control a certain file or directory

To ignore files you just add the file name to the .gitignore file in the directory that owns the file. This way it will not be added to version control anymore. Here is my .gitignore for the root directory of an Eclipse project:

/target
/.settings
.project
.classpath

It ignores the target and the .settings folders as well as the .project and the .classpath files. Sometimes it’s helpful to configure global ignore rules that apply to all of your repositories:

git config --global core.excludesfile ~/.gitignore_global

This added the following entry to my global .gitconfig parameters file, which resides in my home directory.
excludesfile = d:/dev_home/repositories/git/.gitignore_global

These are my current global exclude rules in my .gitignore_global file:

# Compiled source #
###################
*.com
*.class
*.dll
*.exe
*.o
*.so

# Logs and databases #
######################
*.log

Note: rules in a committed .gitignore file are shared with other users. Local per-repo rules can be added to the .git/info/exclude file in your repo; these rules are not committed with the repo, so they are not shared with others.

Restoring files – put the clocks back

Sometimes you make changes to your files and after some time you realize that what you’ve done was a bad idea. You then want to go back to the state of your last commit. If you made changes to your working directory and you want to restore your last HEAD commit in your working directory, enter:

git reset --hard HEAD

This command sets the current branch head to the last commit (HEAD) and overwrites your local working directory with that last commit state (the --hard option). So it will overwrite your modified files. Instead of HEAD (which is your last commit) you could name a branch or a tag like ‘v0.6’. You can also reset to a previous commit: HEAD~2 is the commit two before your last commit. Maybe you want to restore a file you have deleted in your working directory. Here is what I’ve entered to restore a Java file I had deleted accidentally:

git checkout HEAD sources/spring-decorator/src/test/java/com/schlimm/decorator/simple/SetupSession.java

Again: instead of HEAD you could name a branch or a tag like ‘v0.6’. You can also take the file from a previous commit: HEAD~2 is the commit two before your last commit.

Working with tags – making bookmarks to your source code

Sometimes you want to mark a version of your source code. This way you can refer to it later on.
To apply a version tag v1.0.0 to your files, you'd write:

git tag -a v1.0.0 -m "Creating the first official version"

You can share your tags with others in a remote repository:

git push [remote_repository_name] --tags

Here, remote_repository_name is the alias name of your remote repository. Write fetch instead of push to get tags that others committed to the remote repository down into your local repository. If you just enter git tag, it will give you the list of known tags. To get information about the v1.0.0 tag, you'd write:

git show v1.0.0 -s

If you want to continue working on a tag, for instance on the production branch with version v5.0.1, you enter:

git checkout v5.0.1 -b [your_production_branch]

Note that this command also creates a new branch for the tag; this way you can make commits and anything else you wish to record back to the repository.

Reference: "Top 10 commands for the Git newbie" from our JCG partner Niklas....
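The tagging commands above can be tried out in a throwaway repository (assuming git is installed; the tag and branch names are made up for the demo):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q demo
cd demo
git config user.email "you@example.com"
git config user.name "Demo User"

echo "v1" > version.txt
git add version.txt
git commit -qm "first version"

# bookmark this commit with an annotated tag
git tag -a v1.0.0 -m "Creating the first official version"

git tag | grep -q "^v1.0.0$"   # the tag shows up in the list

# continue working on the tagged state in a new branch
# (equivalent to: git checkout v1.0.0 -b production_branch)
git checkout -q -b production_branch v1.0.0

git branch | grep -q "production_branch"
```

From here, git push [remote] --tags would publish the tag once a remote is configured.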

Creating a Java Dynamic Proxy

The Java dynamic proxy mechanism provides an interesting way to create proxy instances. The steps to create a dynamic proxy are a little tedious though. Consider a proxy to be used for auditing the time taken by a method call on a service instance:

public interface InventoryService {
    public Inventory create(Inventory inventory);
    public List<Inventory> list();
    public Inventory findByVin(String vin);
    public Inventory update(Inventory inventory);
    public boolean delete(Long id);
    public Inventory compositeUpdateService(String vin, String newMake);
}

The steps to create a dynamic proxy for instances of this interface are along these lines:

1. Create an instance of a java.lang.reflect.InvocationHandler; this will be responsible for handling the method calls on behalf of the actual service instance. A sample invocation handler for auditing is the following:

import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class AuditProxy implements java.lang.reflect.InvocationHandler {

    // logger is assumed to be a class-level logger field (its declaration is omitted in the original excerpt)

    private Object obj;

    public static Object newInstance(Object obj) {
        return java.lang.reflect.Proxy.newProxyInstance(
                obj.getClass().getClassLoader(),
                obj.getClass().getInterfaces(),
                new AuditProxy(obj));
    }

    private AuditProxy(Object obj) {
        this.obj = obj;
    }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        Object result;
        try {
            logger.info("before method " + m.getName());
            long start = System.nanoTime();
            result = m.invoke(obj, args);
            long end = System.nanoTime();
            logger.info(String.format("%s took %d ns", m.getName(), (end - start)));
        } catch (InvocationTargetException e) {
            throw e.getTargetException();
        } catch (Exception e) {
            throw new RuntimeException("unexpected invocation exception: " + e.getMessage());
        } finally {
            logger.info("after method " + m.getName());
        }
        return result;
    }
}

2.
When creating instances of InventoryService, return a proxy, which in this case is the AuditProxy composing instances of InventoryService (the original article illustrates this relationship with a UML diagram, not reproduced here). This is how it would look in code:

InventoryService inventoryService = (InventoryService) AuditProxy.newInstance(new DefaultInventoryService());

Now, any calls to inventoryService will go via the AuditProxy instance, which measures the time taken by the method while delegating the actual method call to the InventoryService instance.

So what are proxies used for?

1. Spring AOP uses them extensively – it internally creates dynamic proxies for different AOP constructs.
2. As in this example, for any kind of class decoration – though AOP will usually be a better fit for such a use case.
3. For frameworks needing to support interface- and annotation-based features – a real proxied instance need not even exist; a dynamic proxy can recreate the behavior expected of an interface, based on metadata provided through annotations.

Reference: Creating a Java Dynamic Proxy from our JCG partner Biju Kunjummen at the all and sundry blog....
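The wiring above can be sketched as a minimal, self-contained program. Here, GreetingService and its default implementation are made-up stand-ins for the article's InventoryService, and System.out replaces the logger:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class TimingProxyDemo {

    // Hypothetical minimal service interface, standing in for InventoryService
    interface GreetingService {
        String greet(String name);
    }

    static class DefaultGreetingService implements GreetingService {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // InvocationHandler that times each call, in the spirit of the article's AuditProxy
    static class TimingHandler implements InvocationHandler {
        private final Object target;

        TimingHandler(Object target) {
            this.target = target;
        }

        public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
            long start = System.nanoTime();
            try {
                // delegate the actual call to the wrapped instance
                return m.invoke(target, args);
            } finally {
                long end = System.nanoTime();
                System.out.println(m.getName() + " took " + (end - start) + " ns");
            }
        }
    }

    public static void main(String[] args) {
        // the proxy implements GreetingService and routes every call through TimingHandler
        GreetingService service = (GreetingService) Proxy.newProxyInstance(
                GreetingService.class.getClassLoader(),
                new Class<?>[] { GreetingService.class },
                new TimingHandler(new DefaultGreetingService()));

        System.out.println(service.greet("world"));
    }
}
```

Note that the proxy can only be cast to an interface type, never to DefaultGreetingService itself: java.lang.reflect.Proxy works on interfaces only.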
Java Code Geeks and all content copyright © 2010-2014, Exelixis Media Ltd | Terms of Use | Privacy Policy | Contact