
What's New Here?


How to Hire Geeks, Brand Your Shop, and Beat the “Talent Shortage”

As a recruiter of software engineers, I hear every day how difficult it is for software companies to find technical talent. If hiring engineers were easy, I wouldn’t be in business. Using a recruiter is one way to have qualified potential employees neatly packaged and delivered, but there are several other strategies that forward-thinking firms can implement to differentiate themselves from competitors who are just posting ads to the same old places and hitting up the friends-and-family networks. Your company probably spends significant amounts of money advertising its product and brand, yet very little attention is paid to promoting the company’s identity as a good employer for engineers. These strategies can take a bit of time and effort, but the reward is stronger talent at a lower cost. Here are a handful of ways to make your company more attractive to new engineering hires.

Creative ads, inviting job descriptions, unique process

I have both reviewed and written more job descriptions for software engineering jobs than I care to mention, and it seems that well over 90% of the ads out there consist of the same trite words and phrases mashed up in different ways. More importantly, it is incredibly rare to see ads that ask the reader to apply. You will see ads that specify who should not apply (“must have x years of experience with ______“), but how often do you see an ad actually encouraging an applicant to “check us out”? Ask your reader to act and apply, particularly if your ad is placed somewhere qualified candidates are likely to be. Making the application process itself more interesting is another way to set yourself apart. I don’t know anything about Parse, but I know they allow engineers to apply via API. Asking an engineer to fill out an online application that takes ten minutes is an annoying barrier to applying, while adding a small element to the hiring process that engineers view as a minor challenge is a potential draw. If you are going to argue that the application process is a test of an applicant’s commitment and interest, I will counter that a better measure of interest is to have engineers solve a small technical problem to apply (see the API example above).

Engineering blogs

Geeks like reading about cool stuff that other geeks are doing. How often do you see links publicized from the Twitter engineering blog, Facebook’s engineering page, or Netflix’s blog? Are you sick of seeing the phrase “____ is a GitHubber!“? Maybe your company isn’t solving the types of problems that these firms are, but that doesn’t mean the problems you are solving won’t be interesting to a specific audience. Smaller shops that post even once or twice a month about a technical challenge, decisions being made, or new additions will draw some readers and potential hires. Comments on your engineering blog are a signal of potential employment interest.

Open source projects and GitHub/Bitbucket public repos

If your company has developed something internally that could have some utility to other developers, making it open source can score your firm some credibility and visibility with the community. Exposing well-written code shows off your team’s expertise, and making it freely available to others builds goodwill. Interactions with developers who contribute to your project or use the code are a good way to start a recruiting dialogue.
Community involvement/outreach

Sponsoring and/or presenting at meetups, conferences, and users’ groups is probably the most targeted advertising you can do to promote your company as an attractive employer of engineers. In theory, money spent on sponsorships could be much more effective than job ads on general employment sites. Unfortunately, many companies spend the money but end up making a negative impression by trying to turn a meeting into their own career fair. As someone who has run a users’ group for almost 13 years, I find that the most effective way to attract potential hires in these forums is to have a couple of your best engineers present and demonstrate how they solved a challenging technical problem. If you can get the audience to leave the session thinking “I’d love to work with them“, you will get some new applicants.

“Courting” during the hiring process

What is your typical hiring process? If you are like most of the companies I’ve worked with over the past 15 years, the process consists of a phone screen and one or two face-to-face interviews (and sometimes a test). When your process is exactly the same as that of your competitors, what does it say about your company? Nothing. Mix it up a little by initiating contact with an offsite coffee or lunch, especially if the candidate appears to be very strong and in demand.

Always be interviewing

If your company’s five best engineers resigned tomorrow, who would you try to hire? I expect that most simply don’t know. They say timing is everything in hiring (and everything else). However, the main reason timing is such a factor is that most companies are only willing to interview candidates when they have a well-defined open position. Timing is indeed everything when the hiring window is only fully open in brief windows and cycles. I am constantly trying to encourage my clients to keep an open ear for new hires, and to be willing to interview candidates even when there is no budgeted position currently open. It is probably important to tell candidates when there is not a current opening, but many will still want to take an informational or informal interview. This gives a firm the opportunity to develop a wish list of hires for when a vacancy arises. When an open job is not immediately available, a company may raise the bar on who is invited in, but interviewing exceptional candidates as they appear is one way to defeat much of what is attributed to ‘bad timing’.

Focus more on overall talent, less on buzzwords

If your company has explicit and rigid rules about only considering candidates with a certain number of years of experience, whether overall or in a certain technology, you are doing it wrong. In a buyer’s market it is common for firms to create very specific experience requirements, but in times like now, when demand is high and supply is low, we see the requirements open up significantly. Companies that hire the best available engineering talent instead of an engineer with a specific skill should end up with better teams in the long run. Turning away a savvy engineer because his/her experience is with a different language is a tough choice. Of course some hires are made with short-term goals in mind, particularly in the start-up world, but focusing too much on a narrow skill set contributes greatly to the perceived skills shortage.

Conclusion

In my own experience, the companies that are using some of these strategies are much easier clients to ‘sell’ to candidates.
Being stealthy is intriguing, but by design it won’t get you noticed. Making your engineering organization visible to your target audience (great engineers) and promoting the company’s image as “engineer-friendly” should result in a larger and more qualified candidate pool.

Reference: How to Hire Geeks, Brand Your Shop, and Beat the “Talent Shortage” from our JCG partner Dave Fecak at the Job Tips For Geeks blog.

Guava Splitter vs StringUtils

So I recently wrote a post about good old reliable Apache Commons StringUtils, which provoked a couple of comments, one of which was that Google Guava provides better mechanisms for joining and splitting Strings. I have to admit this is a corner of Guava I had yet to explore, so I thought I ought to take a closer look and compare it with StringUtils. I have to admit I was surprised at what I found.

Splitting strings, eh? There can’t be many different ways of doing this, surely?

Well, Guava and StringUtils do take a stylistically different approach. Let’s start with the basic usage.

// Apache StringUtils...
String[] tokens1 = StringUtils.split("one,two,three", ',');

// Guava splitter...
Iterable<String> tokens2 = Splitter.on(',').split("one,two,three");

So, my first observation is that Splitter is more object oriented. You have to create a splitter object, which you then use to do the splitting, whereas the StringUtils split methods use a more functional style, with static methods. Here I much prefer Splitter. Need a reusable splitter that splits comma-separated lists? A splitter that also trims leading and trailing whitespace, and ignores empty elements? Not a problem:

Splitter niceCommaSplitter = Splitter.on(',')
    .omitEmptyStrings()
    .trimResults();

niceCommaSplitter.split("one,, two, three"); // "one","two","three"
niceCommaSplitter.split(" four , five ");   // "four","five"

That looks really useful. Any other differences? The other thing to notice is that Splitter returns an Iterable<String>, whereas StringUtils.split returns a String array. I didn’t really see that making much of a difference; most of the time I just want to loop through the tokens in order anyway! I also didn’t think it was a big deal, until I examined the performance of the two approaches. To do this I tried running the following code:

final String numberList = "One,Two,Three,Four,Five,Six,Seven,Eight,Nine,Ten";

long start = System.currentTimeMillis();
for (int i = 0; i < 1000000; i++) {
    StringUtils.split(numberList, ',');
}
System.out.println(System.currentTimeMillis() - start);

start = System.currentTimeMillis();
for (int i = 0; i < 1000000; i++) {
    Splitter.on(',').split(numberList);
}
System.out.println(System.currentTimeMillis() - start);

On my machine this output the following times:

594
31

Guava’s Splitter is almost 20 times faster! This is a much bigger difference than I was expecting. How can this be? Well, I suspect it’s something to do with the return type. Splitter returns an Iterable<String>, whereas StringUtils.split gives you an array of Strings, so Splitter doesn’t actually need to create new String objects. It’s also worth noting you can cache your Splitter object, which results in an even faster runtime.

Blimey, end of argument? Guava’s Splitter wins every time? Hold on a second. This isn’t quite the full story. Notice we’re not actually doing anything with the resulting Strings? Like I mentioned, it looks like Splitter isn’t actually creating any new Strings; I suspect it’s deferring this to the Iterator object it returns.
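As an aside, if you do need a List (for random access or a size) rather than a lazy Iterable, the result can be materialized eagerly, which forces that deferred work to happen up front. A minimal sketch using Guava’s Lists helper; the values here are just illustrative:

import java.util.List;

import com.google.common.base.Splitter;
import com.google.common.collect.Lists;

// Copying the lazy Iterable into a List forces Splitter to create
// the substrings immediately, much as StringUtils.split does.
List<String> tokens = Lists.newArrayList(Splitter.on(',').split("one,two,three"));
System.out.println(tokens.get(1)); // prints "two"; random access is now possible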
So can we test this? Sure thing. Here’s some code to repeatedly check the lengths of the generated substrings:

final String numberList = "One,Two,Three,Four,Five,Six,Seven,Eight,Nine,Ten";

long start = System.currentTimeMillis();
for (int i = 0; i < 1000000; i++) {
    final String[] numbers = StringUtils.split(numberList, ',');
    for (String number : numbers) {
        number.length();
    }
}
System.out.println(System.currentTimeMillis() - start);

Splitter splitter = Splitter.on(',');
start = System.currentTimeMillis();
for (int i = 0; i < 1000000; i++) {
    Iterable<String> numbers = splitter.split(numberList);
    for (String number : numbers) {
        number.length();
    }
}
System.out.println(System.currentTimeMillis() - start);

On my machine this outputs:

609
2048

Guava’s Splitter is now over 3 times slower! I was expecting them to be about the same, or maybe Guava slightly faster, so this is another surprising result. It looks like by returning an Iterable, Splitter is trading immediate gains for longer-term pain. There’s also a moral here about making sure performance tests actually measure something useful.

In conclusion, I think I’ll still use Splitter most of the time. On small lists the difference in performance is going to be negligible, and Splitter just feels much nicer to use. Still, I was surprised by the result, and if you’re splitting lots of Strings and performance is an issue, it might be worth considering switching back to Commons StringUtils.

Reference: Guava Splitter vs StringUtils from our JCG partner Tom Jefferys at Tom’s Programming Blog.

JBoss BRMS Best Practices – tips for your BPM Process Initialization Layer

I have posted some articles in the past on migration strategies, taken closer looks at process layers and provided some best practices for jBPM, all touching on very specific parts of your BPM strategy. I wanted to revisit the topic of best practices, but this time at the level of the Intelligent, Integrated Enterprise, where we talk about getting control over your business processes with JBoss BRMS.

Introduction

To start with, we need to take a closer look at the landscape and then peel back the layers like an onion for a closer look at how we can deliver BPM projects that scale well. Figure 1 shows the component layers where we will want to focus our attention:

- Process Initialization Layer
- Process Implementation Layer
- Process Repository
- Tooling for business users & developers
- Console, reporting & BAM dashboards
- Process Interaction Layer

The process initialization layer will be covered in this article, where I present some best practices around you, your customer and how processes are started. The process implementation layer is where the processes are maintained, with help from the process repository, tooling, and the business users and developers that design them. Here you will also find the various implementation details, such as domain-specific extensions to cover specific node types within our projects. Best practices in this layer will be covered at a later time. The console, reporting and BAM dashboard components are the extended tooling used in projects to provide business value or information that can be used to influence business decisions; best practices in this area will also be covered at a later time. Finally, the process interaction layer is where your processes connect to all manner of legacy systems, back-office systems, service layers, rules systems, even third-party systems and services. Best practices in this area will be covered in a later article.

Process Initialization Layer

Taking a look at how to initialize your processes, I want to provide you with some of the best practices I have seen used by larger enterprises over the years. The main theme is gathering the customer, user or system data that is needed to start your process, then injecting it via the startProcess call. This can be embedded in your application via the BRMS jBPM API, via the RESTful service, or via a standard Java web service call. No matter how you gather the data to initialize your process instances, you will want to think about how to scale out your initialization setup from the beginning. Initial projects are often set up without much thought for the future, so certain issues have not been taken into consideration.

Customers

The customer defined here can be a person, a system or some user that provides the initial process starting data. In figure 2 we provide a high-level look at how our customers provide process data that we then package up into a request to be dropped into one of the process queues. From the queues we can then prioritize and let different mechanisms fetch these process requests and start a process instance with the provided request data. We show here EJBs, MDBs and clouds that represent any manner of scheduling that might be used to empty the process queues.

Queues

These queues can be as simple as database tables or as refined as message queues. They can be set up any way your project desires, such as Last-In-First-Out (LIFO) or First-In-First-Out (FIFO).
The benefit of using message queues is that you can prioritize them from your polling mechanism. The reason for this setup is twofold. First, by not starting the process instance directly from the customer interface, you have ensured that the customer request is persisted; it will never be lost en route to the process engine. Second, you have the ability to prioritize future processes that might otherwise fail to meet project requirements, like a new process request that has to start within 10 seconds of submission by the customer. If it gets put at the bottom of a queue that takes an hour to get to processing it, you have a problem. By prioritizing your queues you can adjust your polling mechanism to check the proper queues in the proper order each time.

Java / Cloud

The Java icons in figure 2 represent any JEE mechanism you might want to use to deal with the process queues. It can be EJBs, MDBs, a scheduler you write yourself, or whatever you want to come up with to pick up process requests. The cloud icons are meant to represent services that can be used by your software to actually call the final startProcess method to initialize the process instance being requested and pass it the initial data. It is important to centralize this interaction with the jBPM API into a single service, thereby ensuring minimal work if the API should change, easing possible version migrations in the future, and letting you extend the service interaction with jBPM in future projects.

So far, we have walked through the high-level BPM architecture and laid out the various layers of interaction. The first layer of interaction in larger enterprise BPM architectures, the initialization layer, has been examined to provide some insights into best practices within this layer. This is not a discussion that attempts to push implementation details; it takes a step back and introduces some of the basic elements that are repeatedly encountered in large BPM architectures. It covers the initial customer submission of a processing request, the queueing of process requests and the processing of these queues in a consistent and scalable manner. There is still more to look at in future articles: the Process Implementation Layer, the Process Interaction Layer, the Process Repository, the Tooling and the reporting & BAM layers.

Process Implementation Layer

This layer focuses on your business process designs, your implementations of custom actions in your processes and extensions to your ways of working with your processes. The adoption of the BPMN2 standard for process design and execution has taken a lot of the trouble out of this layer of your BPM architecture. Process engines are forced to adhere to and support the BPMN2 standard, which means you are constrained in what you can do during the designing of your processes.

Knowledge sessions

Within the JBoss BRMS BPM component there is one concept of particular interest for building highly scalable process architectures: the Knowledge Session (KS), specifically the Stateful Knowledge Session (SKS). This is created to hold your process information, both the data and an instance of your process specification. When running rules-based applications it is normal procedure to run a single KS (note, not stateful!) with all your rules and data leveraging this single KS. With an SKS and processes, we want to leverage a single SKS per process instance.
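As a rough illustration of that one-session-per-instance pattern, here is a minimal sketch against the jBPM 5 API that ships with BRMS; the process id is illustrative, and the immediate dispose() assumes a short-lived process (a setup with persistence and wait states would manage the session life-cycle differently):

import java.util.Map;

import org.drools.KnowledgeBase;
import org.drools.runtime.StatefulKnowledgeSession;
import org.drools.runtime.process.ProcessInstance;

// One SKS per process instance: create a fresh session, start the process
// with the request data gathered from the queue, then clean the session up.
public ProcessInstance startProcessInstance(KnowledgeBase kbase, Map<String, Object> requestData) {
    StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
    try {
        return ksession.startProcess("com.example.myProcess", requestData); // illustrative process id
    } finally {
        ksession.dispose();
    }
}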
We can bundle this functionality into a single service to allow for concurrency and to facilitate our process instance life-cycle management. Within this service you can also embed synchronous or asynchronous Business Activity Monitoring (BAM) event producers as desired.

This part of the article examined the implementation layer to provide some insights into best practices within it. The main focus is the SKS, where we suggest how to not only use, but also manage, process instance life-cycles within a single service. On top of this, it is suggested that this service is a good entry point for offloading your BAM events. There is still more to look at in future articles: the Process Interaction Layer, the Process Repository, the Tooling and the reporting & BAM layers.

Process Interaction Layer

There is much to be gained by a good strategy for accessing business logic, back-end systems, back-office systems, user interfaces, other applications, third-party services or whatever else your business processes need to use to get their jobs done. Many enterprises isolate these interactions in a service layer within a Service Oriented Architecture (SOA), which provides flexibility and scales nicely across all the various workloads that may be encountered. Taking a look at the BPM layer here, we want to mention just a few of these backend systems as examples of how to optimize your process projects in your enterprise.

Human tasks

The JBoss BRMS BPM architecture includes a separate Human Task (HT) server that runs as a service implementing the WS-HT specification. Being pluggable, there is nothing to prevent you from hosting another server in your enterprise by exposing the WS-HT task life-cycle in a service. This should then use a synchronous invocation model, which vastly simplifies the standard product implementation that leverages a HornetQ messaging system by default.

Reporting

A second service that you can implement to provide great reporting scalability is what we call a Business Activity Monitoring (BAM) service. You would use this service to centralize BAM events and push them to JMS queues, which are both reliable and fast. A separate machine can then be used to host these JMS BAM queues, processing the messages without putting load on the BPM engine itself, writing to a separate BAM database, and optimizing with batch writing; any clients that consume the BAM information will likewise not put any load on the BPM engine itself.
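To sketch what such an offloading producer might look like with the plain JMS API; the method name, queue and payload here are purely illustrative:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// Hypothetical BAM event producer: fire-and-forget to a JMS queue so that
// reporting work never puts load on the BPM engine itself.
public void publishBamEvent(ConnectionFactory factory, Queue bamQueue, String eventPayload) throws JMSException {
    Connection connection = factory.createConnection();
    try {
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(bamQueue);
        producer.send(session.createTextMessage(eventPayload));
    } finally {
        connection.close();
    }
}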
Conclusion

This article briefly walked through the high-level BPM architecture and laid out the various layers of interaction, examining each to provide some insights into best practices. There are several services that you can create to centralize your activities around human tasks and reporting. By centralizing your human task interaction you can provide a standard and scalable solution to your enterprise. With the BAM service you are able to offload the work to a separate entity in your architecture, guaranteeing both delivery of these events and consistent performance with regard to reporting activities from your processes. Chinese translation provided by Christina Lin.

Reference: JBoss BRMS Best Practices – tips for your BPM Process Initialization Layer, JBoss BRMS Best Practices – tips for your BPM Process Implementation Layer, JBoss BRMS Best Practices – tips for your BPM Process Interaction Layer from our JCG partner Eric D. Schabell at the Thoughts on Middleware, Linux, software, cycling and other news… blog.

Google Guava BloomFilter

When the Guava project released version 11.0, one of the new additions was the BloomFilter class. A BloomFilter is a unique data structure used to indicate if an element is contained in a set. What makes a BloomFilter interesting is that it will indicate if an element is absolutely not contained, or may be contained, in a set. This property of never having a false negative makes the BloomFilter a great candidate for use as a guard condition to help prevent performing unnecessary and expensive operations. While BloomFilters have received good exposure lately, using one meant rolling your own or doing a Google search for code. The trouble with rolling your own BloomFilter is getting the correct hash function to make the filter effective. Considering Guava uses the Murmur hash for its implementation, we now have the usefulness of an effective BloomFilter just a library away.

BloomFilter Crash Course

BloomFilters are essentially bit vectors. At a high level, BloomFilters work in the following manner:

- Add the element to the filter.
- Hash it a few times, then set the bits to 1 where the index matches the results of the hash.

When testing if an element is in the set, you follow the same hashing procedure and check if the bits are set to 1 or 0. This process is how a BloomFilter can guarantee an element does not exist: if the bits aren’t set, it’s simply impossible for the element to be in the set. However, a positive answer means the element is in the set or a hashing collision occurred. A more detailed description of a BloomFilter can be found here, and a good tutorial on BloomFilters here. According to Wikipedia, Google uses BloomFilters in BigTable to avoid disk lookups for non-existent items. Another interesting usage is using a BloomFilter to optimize a SQL query.

Using the Guava BloomFilter

A Guava BloomFilter is created by calling the static method create on the BloomFilter class, passing in a Funnel object and an int representing the expected number of insertions. A Funnel, also new in Guava 11, is an object that can send data into a Sink. The following example is the default implementation and has a false positive probability of 3%. Guava provides a Funnels class containing two static methods providing implementations of the Funnel interface for inserting a CharSequence or byte array into a filter.

//Creating the BloomFilter
BloomFilter<byte[]> bloomFilter = BloomFilter.create(Funnels.byteArrayFunnel(), 1000);

//Putting elements into the filter
//A BigInteger representing a key of some sort
bloomFilter.put(bigInteger.toByteArray());

//Testing for element in set
boolean mayBeContained = bloomFilter.mightContain(bigIntegerII.toByteArray());

UPDATE: based on the comment from Louis Wasserman, here’s how to create a BloomFilter for BigIntegers with a custom Funnel implementation:

//Create the custom Funnel
class BigIntegerFunnel implements Funnel<BigInteger> {
    @Override
    public void funnel(BigInteger from, Sink into) {
        into.putBytes(from.toByteArray());
    }
}

//Creating the BloomFilter
BloomFilter<BigInteger> bloomFilter = BloomFilter.create(new BigIntegerFunnel(), 1000);

//Putting elements into the filter
//A BigInteger representing a key of some sort
bloomFilter.put(bigInteger);

//Testing for element in set
boolean mayBeContained = bloomFilter.mightContain(bigIntegerII);

Considerations

It’s critical to estimate the number of expected insertions correctly. As insertions into the filter approach or exceed the expected number, the BloomFilter begins to fill up and, as a result, will generate more false positives to the point of being useless. There is another version of the BloomFilter.create method taking an additional parameter: a double representing the desired false positive probability (must be greater than 0 and less than 1). The false positive probability affects the number of hashes used for storing or searching for elements; the lower the desired probability, the higher the number of hashes performed.
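For instance, a filter tuned for a 1% false positive rate (instead of the 3% default) might be created like this, keeping the 1000 expected insertions from the example above:

//Creating the BloomFilter with a custom false positive probability of 1%
BloomFilter<byte[]> bloomFilter = BloomFilter.create(Funnels.byteArrayFunnel(), 1000, 0.01);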
Conclusion

A BloomFilter is a useful item for a developer to have in his/her toolbox, and the Guava project now makes it very simple to begin using one when the need arises. I hope you enjoyed this post. Helpful comments and suggestions are welcomed.

References

- Unit Test Demo of Guava BloomFilter
- BloomFilter class
- All You Want to Know about BloomFilters
- BloomFilter Tutorial
- BloomFilter on Wikipedia

Reference: Google Guava BloomFilter from our JCG partner Bill Bejeck at the Random Thoughts On Coding blog.

Busting PermGen Myths

In my latest post I explained the reasons that can cause java.lang.OutOfMemoryError: PermGen space crashes. Now it is time to talk about possible solutions to the problem. Or, more precisely, about what the Internet suggests as possible solutions. Unfortunately, I can only say that I felt my inner Jamie Hyneman from MythBusters awakening when going through the different “expert opinions” on the subject.

I googled for the current common knowledge about ways to solve java.lang.OutOfMemoryError: PermGen space crashes and went through a couple dozen of the most relevant pages in Google’s results. Fortunately, most of the suggestions have already been distilled into this topic on the highly respected StackOverflow. As you can see, the topic is truly popular and has some quite highly voted answers. But the irony is that the whole topic contains exactly zero solutions I could recommend myself. Well, aside from “find the cause of the memory leak”, which is absolutely correct, of course, but not a very helpful way to respond to the question “How do I solve a memory leak?”. Let us review the suggestions put forward on the SO page.

Use -XX:MaxPermSize=XXXM

There can be two reasons for the java.lang.OutOfMemoryError: PermGen space error. One is that the application server and/or application really does use so many classes that they do not fit into the default-sized Permanent Generation. That is definitely possible, and in fact not that rare. In this case, increasing the size of the Permanent Generation can really save the day: if your only problem is how to fit too much furniture into too small a house, then buy a bigger house! But what if your over-caring mother sends you new furniture every week? You cannot keep moving to bigger and bigger houses forever. That is exactly the situation with memory leaks, including the classloader leaks described in my previous post mentioned above. Let me be clear here: no increase in Permanent Generation size will save you from a classloader leak. It can only postpone the crash, and make it harder to predict how many re-deployments your server will survive.

-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled

The most popular answer on StackOverflow was to add these options to the server’s command line. And, they say, “maybe add -XX:+UseConcMarkSweepGC too. Just to be sure.” My first problem with these JVM flags is that there is no real explanation available of what they actually do: neither in the SO answer (and I don’t like answers that tell you to do something without the reasoning behind it), nor, in fact, anywhere on the Internet. Really, I was unable to find any documentation about these options except for this page. But, in fact, that does not even matter. No tinkering with Garbage Collector options will help you in the case of a classloader leak, because by definition a memory leak is a situation where the GC falls short. If there is a valid live hard reference from somewhere within your server’s classloader to an object or class of your application, then the GC will never consider it garbage and will never reclaim it. Sure, all these JVM flags look very smart and magical, and they really may be required in some situations. But they are certainly not sufficient, and they don’t solve your Permanent Generation leak.
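For completeness, this is how those options are typically combined on the command line; the heap size and jar name are arbitrary examples:

java -XX:MaxPermSize=256m \
     -XX:+UseConcMarkSweepGC \
     -XX:+CMSClassUnloadingEnabled \
     -XX:+CMSPermGenSweepingEnabled \
     -jar my-server.jar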
Use JRockit

The next proposition was to switch to the JRockit JVM. The rationale was that as JRockit has no Permanent Generation, one cannot run out of it. Surely an interesting proposition. Unfortunately, it will not solve our problem either. The only result of this “solution” will be getting a java.lang.OutOfMemoryError: Java heap space instead of the java.lang.OutOfMemoryError: PermGen space. In the absence of a separate generation for class definitions, JRockit uses the usual Java heap space for them. And as long as the root cause of the leak is not fixed, those class definitions will fill up even the largest heap, given enough time.

Restart the server

Yet another way to pretend that the problem is solved is to restart the application server from time to time, e.g. instead of redeploying the application, just restart the whole server. But the first time you see an application server with more than one application deployed, you will know that this is rarely possible in a production environment. And it is not really a solution; it is a way to hide your head in the sand.

Use Tomcat

This one is actually not as hopeless as the previous ones: recent Tomcat versions really do try to solve classloader leaks. See for yourself in their documentation. IF you can use Tomcat as your target server, and IF your leak is one of those Tomcat can successfully fight against, then maybe, just maybe, you are lucky and the problem is solved for you.

Use <your favorite profiler tool here>

This may be a viable solution too, but again with a couple of IFs. Firstly, you should be able to use that profiler in the affected environment, and as I have previously mentioned in my other post, profilers impose overhead at a level that might not be acceptable in a (production) environment. Secondly, you must know how to use the profiler to extract the required information and deduce the location of the leak, and my 10+ years of experience show that this is very rarely the case.

Conclusion

So far we haven’t seen any definite solution to the java.lang.OutOfMemoryError: PermGen space error. There were a few that can be viable in some cases, but I was astounded by the fact that the majority of proposals were just plain invalid! You could waste days or weeks trying them and not even start to solve the real problem: finding the rogue reference that is the root cause of the leak. Fortunately, as of the 1.1 release, Plumbr also discovers PermGen leaks, and it tells you the very reason that keeps the classloader from being freed, sparing you the time of hunting down the leak. So next time, when facing the java.lang.OutOfMemoryError: PermGen space message, download Plumbr and get rid of the problem for good.

Reference: Busting PermGen Myths from our JCG partner Nikita Salnikov-Tarnovski at the Plumbr Blog.

Agile is Not for Everyone

Someone asked me again about self-assessments for their agile transition. That got me thinking about this problem of transitioning to agile. I don’t believe agile is for everyone in every circumstance. Some people claim agile has “crossed the chasm.” Certainly, many people are aware of agile. Many people understand that a cross-functional team works in increments, delivering features and asking for feedback. That’s at the team level. You’ve seen my general picture of what an agile team looks like, and just in case you don’t remember, here it is again.

So when I say ‘Agile is Not for Everyone’ what do I mean? The problem is agile is not just for teams. Once a team installs agile, the team bumps up against systemic management issues. Management has to be willing to change. Program management has to be willing to change. HR has to be willing to change. Finance has to be willing to change. That’s huge. We’re talking about changing an organization’s culture. You don’t have to change the culture on Day One. But you do have to change eventually. And starting with the team is a good start. If the team can’t get to continuous integration and small-enough stories to move to two-week iterations, maybe agile is not for them. And when I say two-week iterations, I mean releasable at the end of two weeks. Anyone can transition to agile. It takes work and determination. Here are the issues I see that prevent people from transitioning to agile:

- Agile requires that you start managing the project portfolio. Oh, maybe not at the very beginning, but certainly eventually. You cannot multitask on projects and be successful. Are you willing to say that yes, you will commit to some projects for now, and not commit to others for now? And will you keep practicing so your teams are not overloaded, and stop moving people like chess pieces? If you want to go to more teams, it’s not as simple as multiplying what you do on one team to several; that will give you bloat. I have several posts about program management already and you can expect more to come.
- Agile requires an open culture. Are you willing to give and receive feedback at all levels?
- Agile invites team recognition and rewards. Are you willing to at least discuss how to move to team evaluation, recognition, and rewards? Are you willing to discuss how to have career ladders that don’t automatically move people into traditional management? Are you willing to rethink what management is and how much of it you need? Are you willing to think about how to move what you might now call “management” into normal people’s work?
- Agile requires transparency. Are you willing to be transparent about who makes which decisions? Are you willing to be transparent about the boundaries of management decisions?
- Agile does not easily play with once-a-year budgeting; it invites incremental funding. But Finance doesn’t know about incremental funding. Finance still has a difficult time with capitalizing software as we create it. Finance prefers milestones. How do we help Finance with capitalization? For software-as-a-service, it’s an easier problem: you decide when you have released enough to capitalize. For non-SaaS products, it’s a lot harder. Are you willing to try? Is Finance willing to try?

Can you see now that agile is not just a lifecycle, but a huge cultural shift for the organization? For a project team, it’s one lifecycle among many, but for the organization, it’s much more than that. If you can’t maintain a transition to agile, you should not be ashamed or worried.
You are not alone. What you can do is read Manage It! Your Guide to Modern, Pragmatic Project Management, and re-read the lifecycle chapter and appendix again. You have many choices for lifecycles. And with what you know about timeboxes, slicing features into small stories, ranking stories, creating cross-functional teams, and integrating testing into the iteration, you could have an awesome RUP or staged-delivery lifecycle. I’m not saying agile is for the elite. Far from it. I’m saying agile is for people who want to and can manage the cultural change that it requires. And if you try many of the technical and project management practices we suggest in agile, you will be better off. But is agile the objective? Or are projects that deliver products your customers want the objective? Agile is one vehicle. It’s not the only vehicle. Choose the vehicle that fits your culture. I’m all for being more effective. For me, that’s the thing that counts. If you need an agile assessment, you’re barking up the wrong tree. You need to see if you are more effective this year than you were last year.

Reference: Agile is Not for Everyone from our JCG partner Johanna Rothman at the Managing Product Development blog.

TaskletStep Oriented Processing in Spring Batch

Many enterprise applications require batch processing to handle billions of transactions every day. These big transaction sets have to be processed without performance problems. Spring Batch is a lightweight and robust batch framework for processing such big data sets. Spring Batch offers ‘TaskletStep Oriented’ and ‘Chunk Oriented’ processing styles. In this article, the TaskletStep Oriented Processing Model is explained.

Let us investigate the fundamental Spring Batch components:

Job: An entity that encapsulates an entire batch process. Steps and Tasklets are defined under a Job.
Step: A domain object that encapsulates an independent, sequential phase of a batch job.
JobInstance: Batch domain object representing a uniquely identifiable job run; its identity is given by the pair Job and JobParameters.
JobParameters: Value object representing runtime parameters to a batch job.
JobExecution: A JobExecution refers to the technical concept of a single attempt to run a Job. An execution may end in failure or success, but the JobInstance corresponding to a given execution will not be considered complete unless the execution completes successfully.
JobRepository: An interface responsible for the persistence of batch meta-data entities. In the following sample, an in-memory repository is used via MapJobRepositoryFactoryBean.
JobLauncher: An interface exposing a run method, which launches and controls the defined jobs.
Tasklet: An interface exposing an execute method, which will be called repeatedly until it either returns RepeatStatus.FINISHED or throws an exception to signal a failure. It is used when neither readers nor writers are required, as in the following sample.

Let us take a look at how to develop the Tasklet-Step Oriented Processing Model.

Used Technologies:

JDK 1.7.0_09
Spring 3.1.3
Spring Batch 2.1.9
Maven 3.0.4

STEP 1 : CREATE MAVEN PROJECT

A Maven project is created as below. (It can be created by using Maven or an IDE plug-in.)

STEP 2 : LIBRARIES

Firstly, dependencies are added to Maven’s pom.xml.
<properties>
    <spring.version>3.1.3.RELEASE</spring.version>
    <spring-batch.version>2.1.9.RELEASE</spring-batch.version>
</properties>

<dependencies>
    <!-- Spring Dependencies -->
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-core</artifactId>
        <version>${spring.version}</version>
    </dependency>

    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${spring.version}</version>
    </dependency>

    <!-- Spring Batch Dependency -->
    <dependency>
        <groupId>org.springframework.batch</groupId>
        <artifactId>spring-batch-core</artifactId>
        <version>${spring-batch.version}</version>
    </dependency>

    <!-- Log4j library -->
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.16</version>
    </dependency>
</dependencies>

The maven-compiler-plugin is used to compile the project with JDK 1.7:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.0</version>
    <configuration>
        <source>1.7</source>
        <target>1.7</target>
    </configuration>
</plugin>

The following Maven plugin can be used to create a runnable jar:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.0</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <transformer implementation='org.apache.maven.plugins.shade.resource.ManifestResourceTransformer'>
                        <mainClass>com.onlinetechvision.exe.Application</mainClass>
                    </transformer>
                    <transformer implementation='org.apache.maven.plugins.shade.resource.AppendingTransformer'>
                        <resource>META-INF/spring.handlers</resource>
                    </transformer>
                    <transformer implementation='org.apache.maven.plugins.shade.resource.AppendingTransformer'>
                        <resource>META-INF/spring.schemas</resource>
                    </transformer>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>

STEP 3 : CREATE SuccessfulStepTasklet TASKLET

SuccessfulStepTasklet is created by implementing the Tasklet interface. It illustrates the business logic in a successful step.

package com.onlinetechvision.tasklet;

import org.apache.log4j.Logger;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

/**
 * SuccessfulStepTasklet Class illustrates a successful step.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class SuccessfulStepTasklet implements Tasklet {

    private static final Logger logger = Logger.getLogger(SuccessfulStepTasklet.class);

    private String taskResult;

    /**
     * Executes SuccessfulStepTasklet
     *
     * @param stepContribution mutable state to be passed back to update the current step execution
     * @param chunkContext attributes shared between invocations
     * @return RepeatStatus
     * @throws Exception
     */
    @Override
    public RepeatStatus execute(StepContribution stepContribution, ChunkContext chunkContext) throws Exception {
        logger.debug("Task Result : " + getTaskResult());
        return RepeatStatus.FINISHED;
    }

    public String getTaskResult() {
        return taskResult;
    }

    public void setTaskResult(String taskResult) {
        this.taskResult = taskResult;
    }
}

STEP 4 : CREATE FailedStepTasklet TASKLET

FailedStepTasklet is created by implementing the Tasklet interface. It illustrates the business logic in a failed step.
package com.onlinetechvision.tasklet;

import org.apache.log4j.Logger;
import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

/**
 * FailedStepTasklet Class illustrates a failed step.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class FailedStepTasklet implements Tasklet {

    private static final Logger logger = Logger.getLogger(FailedStepTasklet.class);

    private String taskResult;

    /**
     * Executes FailedStepTasklet
     *
     * @param stepContribution mutable state to be passed back to update the current step execution
     * @param chunkContext attributes shared between invocations
     * @return RepeatStatus
     * @throws Exception
     */
    @Override
    public RepeatStatus execute(StepContribution stepContribution, ChunkContext chunkContext) throws Exception {
        logger.debug("Task Result : " + getTaskResult());
        throw new Exception("Error occurred!");
    }

    public String getTaskResult() {
        return taskResult;
    }

    public void setTaskResult(String taskResult) {
        this.taskResult = taskResult;
    }
}

STEP 5 : CREATE BatchProcessStarter CLASS

BatchProcessStarter Class is created to launch the jobs and log their execution results. A completed JobInstance cannot be restarted with the same parameter(s), because it already exists in the job repository; a JobInstanceAlreadyCompleteException is thrown with the description “A job instance already exists and is complete”. It can, however, be restarted with a different parameter. In the following sample, a different currentTime parameter is set in order to restart firstJob.

package com.onlinetechvision.spring.batch;

import org.apache.log4j.Logger;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.JobParametersInvalidException;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.repository.JobExecutionAlreadyRunningException;
import org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.repository.JobRestartException;

/**
 * BatchProcessStarter Class launches the jobs and logs their execution results.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class BatchProcessStarter {

    private static final Logger logger = Logger.getLogger(BatchProcessStarter.class);

    private Job firstJob;
    private Job secondJob;
    private Job thirdJob;
    private JobLauncher jobLauncher;
    private JobRepository jobRepository;

    /**
     * Starts the jobs and logs their execution results.
     */
    public void start() {
        JobExecution jobExecution = null;
        JobParametersBuilder builder = new JobParametersBuilder();

        try {
            builder.addLong("currentTime", new Long(System.currentTimeMillis()));

            getJobLauncher().run(getFirstJob(), builder.toJobParameters());
            jobExecution = getJobRepository().getLastJobExecution(getFirstJob().getName(), builder.toJobParameters());
            logger.debug(jobExecution.toString());

            getJobLauncher().run(getSecondJob(), builder.toJobParameters());
            jobExecution = getJobRepository().getLastJobExecution(getSecondJob().getName(), builder.toJobParameters());
            logger.debug(jobExecution.toString());

            getJobLauncher().run(getThirdJob(), builder.toJobParameters());
            jobExecution = getJobRepository().getLastJobExecution(getThirdJob().getName(), builder.toJobParameters());
            logger.debug(jobExecution.toString());

            // A new currentTime parameter creates a new JobInstance, so firstJob can be run again.
            builder.addLong("currentTime", new Long(System.currentTimeMillis()));
            getJobLauncher().run(getFirstJob(), builder.toJobParameters());
            jobExecution = getJobRepository().getLastJobExecution(getFirstJob().getName(), builder.toJobParameters());
            logger.debug(jobExecution.toString());

        } catch (JobExecutionAlreadyRunningException | JobRestartException
                | JobInstanceAlreadyCompleteException | JobParametersInvalidException e) {
            logger.error(e);
        }
    }

    public Job getFirstJob() { return firstJob; }
    public void setFirstJob(Job firstJob) { this.firstJob = firstJob; }

    public Job getSecondJob() { return secondJob; }
    public void setSecondJob(Job secondJob) { this.secondJob = secondJob; }

    public Job getThirdJob() { return thirdJob; }
    public void setThirdJob(Job thirdJob) { this.thirdJob = thirdJob; }

    public JobLauncher getJobLauncher() { return jobLauncher; }
    public void setJobLauncher(JobLauncher jobLauncher) { this.jobLauncher = jobLauncher; }

    public JobRepository getJobRepository() { return jobRepository; }
    public void setJobRepository(JobRepository jobRepository) { this.jobRepository = jobRepository; }
}

STEP 6 : CREATE applicationContext.xml

The Spring configuration file applicationContext.xml is created. It covers the Tasklet and BatchProcessStarter bean definitions.

<?xml version='1.0' encoding='UTF-8'?>
<beans xmlns='http://www.springframework.org/schema/beans'
       xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
       xmlns:batch='http://www.springframework.org/schema/batch'
       xsi:schemaLocation='http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/batch
                           http://www.springframework.org/schema/batch/spring-batch-2.1.xsd'>

    <bean id='firstTasklet' class='com.onlinetechvision.tasklet.SuccessfulStepTasklet'>
        <property name='taskResult' value='First Task is executed...' />
    </bean>

    <bean id='secondTasklet' class='com.onlinetechvision.tasklet.SuccessfulStepTasklet'>
        <property name='taskResult' value='Second Task is executed...' />
    </bean>

    <bean id='thirdTasklet' class='com.onlinetechvision.tasklet.SuccessfulStepTasklet'>
        <property name='taskResult' value='Third Task is executed...' />
    </bean>

    <bean id='fourthTasklet' class='com.onlinetechvision.tasklet.SuccessfulStepTasklet'>
        <property name='taskResult' value='Fourth Task is executed...' />
    </bean>

    <bean id='fifthTasklet' class='com.onlinetechvision.tasklet.SuccessfulStepTasklet'>
        <property name='taskResult' value='Fifth Task is executed...' />
    </bean>

    <bean id='sixthTasklet' class='com.onlinetechvision.tasklet.SuccessfulStepTasklet'>
        <property name='taskResult' value='Sixth Task is executed...' />
    </bean>
    <bean id='seventhTasklet' class='com.onlinetechvision.tasklet.SuccessfulStepTasklet'>
        <property name='taskResult' value='Seventh Task is executed...' />
    </bean>

    <bean id='failedStepTasklet' class='com.onlinetechvision.tasklet.FailedStepTasklet'>
        <property name='taskResult' value='Error occurred!' />
    </bean>

    <bean id='batchProcessStarter' class='com.onlinetechvision.spring.batch.BatchProcessStarter'>
        <property name='jobLauncher' ref='jobLauncher'/>
        <property name='jobRepository' ref='jobRepository'/>
        <property name='firstJob' ref='firstJob'/>
        <property name='secondJob' ref='secondJob'/>
        <property name='thirdJob' ref='thirdJob'/>
    </bean>

</beans>

STEP 7 : CREATE jobContext.xml

The Spring configuration file jobContext.xml is created. The jobs’ flows are as follows:

firstJob’s flow:
1) firstStep is started.
2) After firstStep completes with COMPLETED status, secondStep is started.
3) After secondStep completes with COMPLETED status, thirdStep is started.
4) After thirdStep completes with COMPLETED status, firstJob execution completes with COMPLETED status.

secondJob’s flow:
1) fourthStep is started.
2) After fourthStep completes with COMPLETED status, fifthStep is started.
3) After fifthStep completes with COMPLETED status, secondJob execution completes with COMPLETED status.

thirdJob’s flow:
1) sixthStep is started.
2) After sixthStep completes with COMPLETED status, seventhStep is started.
3) After seventhStep completes with FAILED status, thirdJob execution completes with FAILED status.

The second run of firstJob follows the same flow as its first execution.

<?xml version='1.0' encoding='UTF-8'?>
<beans xmlns='http://www.springframework.org/schema/beans'
       xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
       xmlns:batch='http://www.springframework.org/schema/batch'
       xsi:schemaLocation='http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/batch
                           http://www.springframework.org/schema/batch/spring-batch-2.1.xsd'>

    <import resource='applicationContext.xml'/>

    <bean id='transactionManager' class='org.springframework.batch.support.transaction.ResourcelessTransactionManager'/>

    <bean id='jobRepository' class='org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean'>
        <property name='transactionManager' ref='transactionManager' />
    </bean>

    <bean id='jobLauncher' class='org.springframework.batch.core.launch.support.SimpleJobLauncher'>
        <property name='jobRepository' ref='jobRepository'/>
    </bean>

    <bean id='taskletStep' class='org.springframework.batch.core.step.tasklet.TaskletStep'>
        <property name='jobRepository' ref='jobRepository'/>
        <property name='transactionManager' ref='transactionManager'/>
    </bean>

    <batch:job id='firstJob'>
        <batch:step id='firstStep' next='secondStep'>
            <batch:tasklet ref='firstTasklet'/>
        </batch:step>
        <batch:step id='secondStep' next='thirdStep'>
            <batch:tasklet ref='secondTasklet'/>
        </batch:step>
        <batch:step id='thirdStep'>
            <batch:tasklet ref='thirdTasklet' />
        </batch:step>
    </batch:job>

    <batch:job id='secondJob'>
        <batch:step id='fourthStep'>
            <batch:tasklet ref='fourthTasklet' />
            <batch:next on='*' to='fifthStep' />
            <batch:next on='FAILED' to='failedStep' />
        </batch:step>
        <batch:step id='fifthStep'>
            <batch:tasklet ref='fifthTasklet' />
        </batch:step>
        <batch:step id='failedStep'>
            <batch:tasklet ref='failedStepTasklet' />
        </batch:step>
    </batch:job>

    <batch:job id='thirdJob'>
        <batch:step id='sixthStep'>
            <batch:tasklet ref='sixthTasklet' />
            <batch:next on='*' to='seventhStep' />
            <batch:next on='FAILED' to='eighthStep' />
        </batch:step>
        <batch:step id='seventhStep'>
            <batch:tasklet ref='failedStepTasklet' />
        </batch:step>
        <batch:step id='eighthStep'>
            <batch:tasklet ref='seventhTasklet' />
        </batch:step>
    </batch:job>

</beans>

STEP 8 : CREATE Application CLASS

Application Class is created to run the application.

package com.onlinetechvision.exe;

import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.onlinetechvision.spring.batch.BatchProcessStarter;

/**
 * Application Class starts the application.
 *
 * @author onlinetechvision.com
 * @since 27 Nov 2012
 * @version 1.0.0
 */
public class Application {

    /**
     * Starts the application
     *
     * @param args command-line arguments
     */
    public static void main(String[] args) {
        ApplicationContext appContext = new ClassPathXmlApplicationContext("jobContext.xml");
        BatchProcessStarter batchProcessStarter = (BatchProcessStarter) appContext.getBean("batchProcessStarter");
        batchProcessStarter.start();
    }
}

STEP 9 : BUILD PROJECT

After the OTV_SpringBatch_TaskletStep_Oriented_Processing project is built, OTV_SpringBatch_TaskletStep-0.0.1-SNAPSHOT.jar is created.

STEP 10 : RUN PROJECT
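The jar can then be launched in the usual way; the target/ path below assumes Maven’s default build directory:

java -jar target/OTV_SpringBatch_TaskletStep-0.0.1-SNAPSHOT.jar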
25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=2
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=2, version=3, name=secondStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.secondStep with status=COMPLETED
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.thirdStep
25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [thirdStep]
25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=3
25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Third Task is executed...
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=3, version=3, name=thirdStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.thirdStep with status=COMPLETED
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.end3
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.end3 with status=COMPLETED
25.11.2012 21:29:20 DEBUG (AbstractJob.java:294) - Job execution complete: JobExecution: id=0, version=1, startTime=Sun Nov 25 21:29:19 GMT 2012, endTime=null, lastUpdated=Sun Nov 25 21:29:19 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=0, version=0, JobParameters=[{currentTime=1353878959462}], Job=[firstJob]]
25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:121) - Job: [FlowJob: [name=firstJob]] completed with the following parameters: [{currentTime=1353878959462}] and the following status: [COMPLETED]
25.11.2012 21:29:20 DEBUG (BatchProcessStarter.java:44) - JobExecution: id=0, version=2, startTime=Sun Nov 25 21:29:19 GMT 2012, endTime=Sun Nov 25 21:29:20 GMT 2012, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=0, version=0, JobParameters=[{currentTime=1353878959462}], Job=[firstJob]]

Second Job's console output:

25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=secondJob]] launched with the following parameters: [{currentTime=1353878959462}]
25.11.2012 21:29:20 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=1, version=0, startTime=null, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=1, version=0, JobParameters=[{currentTime=1353878959462}], Job=[secondJob]]
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:135) - Resuming state=secondJob.fourthStep with status=UNKNOWN
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=secondJob.fourthStep
25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [fourthStep]
25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=4
25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Fourth Task is executed...
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=4, version=3, name=fourthStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=secondJob.fourthStep with status=COMPLETED
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=secondJob.fifthStep
25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [fifthStep]
25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=5
25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Fifth Task is executed...
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=5, version=3, name=fifthStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=secondJob.fifthStep with status=COMPLETED
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=secondJob.end5
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=secondJob.end5 with status=COMPLETED
25.11.2012 21:29:20 DEBUG (AbstractJob.java:294) - Job execution complete: JobExecution: id=1, version=1, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=1, version=0, JobParameters=[{currentTime=1353878959462}], Job=[secondJob]]
25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:121) - Job: [FlowJob: [name=secondJob]] completed with the following parameters: [{currentTime=1353878959462}] and the following status: [COMPLETED]
25.11.2012 21:29:20 DEBUG (BatchProcessStarter.java:48) - JobExecution: id=1, version=2, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=Sun Nov 25 21:29:20 GMT 2012, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=1, version=0, JobParameters=[{currentTime=1353878959462}], Job=[secondJob]]

Third Job's console output:

25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=thirdJob]] launched with the following parameters: [{currentTime=1353878959462}]
25.11.2012 21:29:20 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=2, version=0, startTime=null, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=2, version=0, JobParameters=[{currentTime=1353878959462}], Job=[thirdJob]]
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:135) - Resuming state=thirdJob.sixthStep with status=UNKNOWN
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=thirdJob.sixthStep
25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [sixthStep]
25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=6
25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Sixth Task is executed...
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=6, version=3, name=sixthStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=thirdJob.sixthStep with status=COMPLETED
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=thirdJob.seventhStep
25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [seventhStep]
25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=7
25.11.2012 21:29:20 DEBUG (FailedStepTasklet.java:33) - Task Result : Error occurred!
25.11.2012 21:29:20 DEBUG (TaskletStep.java:456) - Rollback for Exception: java.lang.Exception: Error occurred!
25.11.2012 21:29:20 DEBUG (TransactionTemplate.java:152) - Initiating transaction rollback on application exception...
25.11.2012 21:29:20 DEBUG (AbstractPlatformTransactionManager.java:821) - Initiating transaction rollback
25.11.2012 21:29:20 DEBUG (ResourcelessTransactionManager.java:54) - Rolling back resourceless transaction on [org.springframework.batch.support.transaction.ResourcelessTransactionManager$ResourcelessTransaction@40874c04]
25.11.2012 21:29:20 DEBUG (RepeatTemplate.java:291) - Handling exception: java.lang.Exception, caused by: java.lang.Exception: Error occurred!
25.11.2012 21:29:20 DEBUG (RepeatTemplate.java:251) - Handling fatal exception explicitly (rethrowing first of 1): java.lang.Exception: Error occurred!
25.11.2012 21:29:20 ERROR (AbstractStep.java:222) - Encountered an error executing the step...
25.11.2012 21:29:20 DEBUG (ResourcelessTransactionManager.java:34) - Committing resourceless transaction on [org.springframework.batch.support.transaction.ResourcelessTransactionManager$ResourcelessTransaction@66a7d863]
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=7, version=2, name=seventhStep, status=FAILED, exitStatus=FAILED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=0, rollbackCount=1
25.11.2012 21:29:20 DEBUG (ResourcelessTransactionManager.java:34) - Committing resourceless transaction on [org.springframework.batch.support.transaction.ResourcelessTransactionManager$ResourcelessTransaction@156f803c]
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=thirdJob.seventhStep with status=FAILED
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=thirdJob.fail8
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=thirdJob.fail8 with status=FAILED
25.11.2012 21:29:20 DEBUG (AbstractJob.java:294) - Job execution complete: JobExecution: id=2, version=1, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=FAILED, exitStatus=exitCode=FAILED;exitDescription=, job=[JobInstance: id=2, version=0, JobParameters=[{currentTime=1353878959462}], Job=[thirdJob]]
25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:121) - Job: [FlowJob: [name=thirdJob]] completed with the following parameters: [{currentTime=1353878959462}] and the following status: [FAILED]
25.11.2012 21:29:20 DEBUG (BatchProcessStarter.java:52) - JobExecution: id=2, version=2, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=Sun Nov 25 21:29:20 GMT 2012, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=FAILED, exitStatus=exitCode=FAILED;exitDescription=, job=[JobInstance: id=2, version=0, JobParameters=[{currentTime=1353878959462}], Job=[thirdJob]]
First Job's console output after restarting:

25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:118) - Job: [FlowJob: [name=firstJob]] launched with the following parameters: [{currentTime=1353878960660}]
25.11.2012 21:29:20 DEBUG (AbstractJob.java:278) - Job execution starting: JobExecution: id=3, version=0, startTime=null, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=STARTING, exitStatus=exitCode=UNKNOWN;exitDescription=, job=[JobInstance: id=3, version=0, JobParameters=[{currentTime=1353878960660}], Job=[firstJob]]
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:135) - Resuming state=firstJob.firstStep with status=UNKNOWN
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.firstStep
25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [firstStep]
25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=8
25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : First Task is executed...
25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=8
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=8, version=3, name=firstStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.firstStep with status=COMPLETED
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.secondStep
25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [secondStep]
25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=9
25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Second Task is executed...
25.11.2012 21:29:20 DEBUG (TaskletStep.java:417) - Applying contribution: [StepContribution: read=0, written=0, filtered=0, readSkips=0, writeSkips=0, processSkips=0, exitStatus=EXECUTING]
25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=9
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=9, version=3, name=secondStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.secondStep with status=COMPLETED
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.thirdStep
25.11.2012 21:29:20 INFO (SimpleStepHandler.java:133) - Executing step: [thirdStep]
25.11.2012 21:29:20 DEBUG (AbstractStep.java:180) - Executing: id=10
25.11.2012 21:29:20 DEBUG (SuccessfulStepTasklet.java:33) - Task Result : Third Task is executed...
25.11.2012 21:29:20 DEBUG (TaskletStep.java:417) - Applying contribution: [StepContribution: read=0, written=0, filtered=0, readSkips=0, writeSkips=0, processSkips=0, exitStatus=EXECUTING]
25.11.2012 21:29:20 DEBUG (AbstractStep.java:209) - Step execution success: id=10
25.11.2012 21:29:20 DEBUG (AbstractStep.java:273) - Step execution complete: StepExecution: id=10, version=3, name=thirdStep, status=COMPLETED, exitStatus=COMPLETED, readCount=0, filterCount=0, writeCount=0 readSkipCount=0, writeSkipCount=0, processSkipCount=0, commitCount=1, rollbackCount=0
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.thirdStep with status=COMPLETED
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:143) - Handling state=firstJob.end3
25.11.2012 21:29:20 DEBUG (SimpleFlow.java:156) - Completed state=firstJob.end3 with status=COMPLETED
25.11.2012 21:29:20 DEBUG (AbstractJob.java:294) - Job execution complete: JobExecution: id=3, version=1, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=null, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=3, version=0, JobParameters=[{currentTime=1353878960660}], Job=[firstJob]]
25.11.2012 21:29:20 INFO (SimpleJobLauncher.java:121) - Job: [FlowJob: [name=firstJob]] completed with the following parameters: [{currentTime=1353878960660}] and the following status: [COMPLETED]
25.11.2012 21:29:20 DEBUG (BatchProcessStarter.java:57) - JobExecution: id=3, version=2, startTime=Sun Nov 25 21:29:20 GMT 2012, endTime=Sun Nov 25 21:29:20 GMT 2012, lastUpdated=Sun Nov 25 21:29:20 GMT 2012, status=COMPLETED, exitStatus=exitCode=COMPLETED;exitDescription=, job=[JobInstance: id=3, version=0, JobParameters=[{currentTime=1353878960660}], Job=[firstJob]]
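Note that the restarted run gets a brand-new JobInstance (id=3): the currentTime job parameter changes between runs (1353878959462 vs. 1353878960660 in the logs above), so Spring Batch treats each launch as a fresh instance. A minimal sketch of such a launch, assuming a helper class of my own naming (the article's actual BatchProcessStarter was shown in an earlier step):

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;

public class LaunchSketch {

    // A fresh currentTime value yields a new JobInstance on every run.
    public static JobExecution launch(JobLauncher jobLauncher, Job job) throws Exception {
        JobParameters params = new JobParametersBuilder()
                .addLong("currentTime", System.currentTimeMillis())
                .toJobParameters();
        return jobLauncher.run(job, params);
    }
}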
STEP 11 : DOWNLOAD

https://github.com/erenavsarogullari/OTV_SpringBatch_TaskletStep

Related Links :

Spring Batch – Reference Documentation
Spring Batch – API Documentation

Reference: TaskletStep Oriented Processing in Spring Batch from our JCG partner Eren Avsarogullari at the Online Technology Vision blog.

ActiveMQ: Understanding Memory Usage

As indicated by some recent mailing list emails and a lot of info returned from Google, ActiveMQ's SystemUsage and particularly the MemoryUsage functionality has left some people confused. I'll try to explain some details around MemoryUsage that might be helpful in understanding how it works. I won't cover StoreUsage and TempUsage, as my colleagues have covered those in some depth.

There is a section of the activemq.xml configuration you can use to specify SystemUsage limits, specifically around the memory, persistent store, and temporary store that a broker can use. Here is an example with the defaults that come with ActiveMQ 5.7:

<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage limit="64 mb"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="100 gb"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="50 gb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>

MemoryUsage

MemoryUsage seems to cause the most confusion, so here goes my attempt to clarify its inner workings. When a message comes in to the broker, it has to go somewhere. It first gets unmarshalled off the wire into an ActiveMQ command object of type ActiveMQMessage. At this moment, the object is obviously in memory but the broker isn't keeping track of it. Which brings us to our first point. The MemoryUsage is really just a counter of bytes that the broker needs and uses to keep track of how much of our JVM memory is being used by messages. This gives the broker some way of monitoring and ensuring we don't hit our limits (more on that in a bit). Otherwise we could take on messages without knowing where our limits are until the JVM runs out of heap space.

So we left off with the message coming in off the wire. Once we have that, the broker will take a look at which destination (or multiple destinations) the message needs to be routed to. Once it finds the destination, it will "send" it there. The destination will increment a reference count of the message (to later know whether or not the message is considered "alive") and proceed to do something with it. For the first reference count, the memory usage is incremented. For the last reference count, the memory usage is decremented. If the destination is a queue, it will store the message into a persistent location and try to dispatch it to a consumer subscription. If it's a topic, it will try to dispatch it to all subscriptions. Along the way (from the initial entry into the destination to the subscription that will send the message to the consumer), the message reference count may be incremented or decremented. As long as it has a reference count greater than or equal to 1, it will be accounted for in memory. Again, the MemoryUsage is just an object that counts bytes of messages to know how much JVM memory has been used to hold messages.

So now that we have a basic understanding of what the MemoryUsage is, let's take a closer look at a couple of things:

- MemoryUsage hierarchies (what's this destination memory limit that I can configure on policy entries?)
- Producer Flow Control
- Splitting memory usage between destinations and subscriptions (producers and consumers)

Main Broker Memory, Destination Memory, Subscription Memory

When the broker loads up, it will create its own SystemUsage object (or use the one specified in the configuration). As we know, the SystemUsage object has a MemoryUsage, StoreUsage, and TempUsage associated with it. The memory component will be known as the broker's Main memory.
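If you run an embedded broker, the same limits can also be set programmatically on the BrokerService; a minimal sketch (the class name is mine, and the values simply mirror the XML defaults above):

import org.apache.activemq.broker.BrokerService;

public class EmbeddedBrokerLimits {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        // Same limits as the activemq.xml snippet above, in bytes.
        broker.getSystemUsage().getMemoryUsage().setLimit(64L * 1024 * 1024);         // 64 mb
        broker.getSystemUsage().getStoreUsage().setLimit(100L * 1024 * 1024 * 1024);  // 100 gb
        broker.getSystemUsage().getTempUsage().setLimit(50L * 1024 * 1024 * 1024);    // 50 gb
        broker.start();
    }
}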
That Main memory is a usage object that keeps track of overall (destination, subscription, etc.) memory. A destination, when it's created, will create its own SystemUsage object (which creates its own separate Memory, Store, and Temp usage objects), but it will set its parent to be the broker's main SystemUsage object. A destination can have its memory limits tuned individually (but not Store and Temp; those will still delegate to the parent). To set a destination's memory limit:

<destinationPolicy>
    <policyMap>
        <policyEntries>
            <policyEntry queue=">" memoryLimit="5MB"/>
        </policyEntries>
    </policyMap>
</destinationPolicy>

So the destination usage objects can be used to more finely control MemoryUsage, but they will always coordinate with the Main memory for all usage counts. This functionality can be used to limit the number of messages that a destination keeps around so that a single destination cannot starve other destinations. For queues, it also affects the store cursor's high water mark. A queue has different cursors for persistent and non-persistent messages. If we hit the high water mark (a threshold of the destination's memory limit), no more messages will be cached ready to be dispatched, and non-persistent messages can be purged to temp disk as necessary (if the store cursor uses a FilePendingMessageCursor... otherwise it will just use a VMPendingMessageCursor and won't purge to the temporary store).

If you don't specify a memory limit for individual destinations, the destination's SystemUsage will delegate to the parent (Main SystemUsage) for all usage counts. This means it will effectively use the broker's Main SystemUsage for all memory-related counts.

Consumer subscriptions, on the other hand, don't have any notion of their own SystemUsage or MemoryUsage counters. They will always use the broker's Main SystemUsage objects. The main thing to note here is that when using a FilePendingMessageCursor for subscriptions (for example, for a topic subscription), the messages will not be swapped to disk until the cursor high water mark (70% by default) is reached. But that means 70% of Main memory will need to be reached. That could be a while, and a lot of messages could be kept in memory. And if your subscription is the one holding most of those messages, swapping to disk could take a while. As topics dispatch messages to one subscription at a time, if one subscription grinds to a halt because it's swapping its messages to disk, the rest of the subscriptions ready to receive the message will also feel the slowdown. You can set the cursor high water mark for subscriptions of a topic to be lower than the default:

<destinationPolicy>
    <policyMap>
        <policyEntries>
            <policyEntry topic="FOO.BAR.>" cursorMemoryHighWaterMark="30" />
        </policyEntries>
    </policyMap>
</destinationPolicy>
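For an embedded broker, the same destination policies can be assembled in code. A sketch under the same assumptions as before (the class name is mine):

import java.util.Arrays;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class DestinationPolicySketch {
    public static void main(String[] args) throws Exception {
        // Equivalent of <policyEntry queue=">" memoryLimit="5MB"/>.
        PolicyEntry queuePolicy = new PolicyEntry();
        queuePolicy.setQueue(">");
        queuePolicy.setMemoryLimit(5 * 1024 * 1024);

        // Equivalent of <policyEntry topic="FOO.BAR.>" cursorMemoryHighWaterMark="30"/>.
        PolicyEntry topicPolicy = new PolicyEntry();
        topicPolicy.setTopic("FOO.BAR.>");
        topicPolicy.setCursorMemoryHighWaterMark(30);

        PolicyMap policyMap = new PolicyMap();
        policyMap.setPolicyEntries(Arrays.asList(queuePolicy, topicPolicy));

        BrokerService broker = new BrokerService();
        broker.setDestinationPolicy(policyMap);
        broker.start();
    }
}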
For those interested: when a message comes in to the destination, a MemoryUsage object is set on the message so that Message.incrementReferenceCount() can increment the memory usage (on the first reference). That means it's accounted for by the destination's memory usage (and also the Main memory, since the destination's memory informs its parent when its usage changes), and it continues to be. The only time this will change is if the message gets swapped to disk. When it gets swapped, its reference counts will be decremented, its memory usage will be decremented, and it will lose its MemoryUsage object once it gets to disk.

So when it comes back to life, which MemoryUsage object will get associated with it, and where will it be counted? If it was swapped to a queue's store, when it reconstitutes, it will again be associated with the destination's memory usage. If it was swapped to a temp store in a subscription (like in a FilePendingMessageCursor), when it reconstitutes, it will NOT be associated with the destination's memory usage anymore. It will be associated with the subscription's memory usage (which is Main memory).

Producer Flow Control

The big win for keeping track of memory used by messages is Producer Flow Control (PFC). PFC is enabled by default and basically slows down the producers when usage limits are reached. This keeps the broker from exceeding its limits and running out of resources. For producers sending synchronously, or for async sends with a producer window specified, if system usages are reached the broker will block that individual producer, but it will not block the connection. It will instead put the message away temporarily to wait for space to become available. It will only send back a ProducerAck once the message has been stored. Until then, the client is expected to block its send operation (which won't block the connection itself). The ActiveMQ 5.x client libraries handle this for you.

However, if an async send is sent without a producer window, or if a producer doesn't behave properly and ignores ProducerAcks, PFC will actually block the entire connection when memory is reached. This could result in deadlock if you have consumers sharing the same connection.

If producer flow control is turned off, then you have to be a little more careful about how you set up your system usages. When producer flow control is off, it basically means "broker, you have to accept every message that comes in, no matter whether the consumers can keep up". This can be used to handle spikes of incoming messages to a destination. If you've ever seen memory usage in your logs severely exceed the limits you've set, you probably had PFC turned off, and that is expected behavior.
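Turning PFC off is itself a per-destination policy, following the same PolicyMap pattern as the earlier sketch (again, the class name is mine):

import java.util.Arrays;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class DisableFlowControlSketch {
    public static void main(String[] args) throws Exception {
        // Equivalent of <policyEntry queue=">" producerFlowControl="false"/>:
        // the broker must then accept every incoming message (use with care, see above).
        PolicyEntry noPfc = new PolicyEntry();
        noPfc.setQueue(">");
        noPfc.setProducerFlowControl(false);

        PolicyMap policyMap = new PolicyMap();
        policyMap.setPolicyEntries(Arrays.asList(noPfc));

        BrokerService broker = new BrokerService();
        broker.setDestinationPolicy(policyMap);
        broker.start();
    }
}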
Splitting the Broker's Main Memory

So, I said earlier that a destination's memory uses the broker's main memory as a parent, and that subscriptions don't have their own memory counters, they just use the broker's main memory. Well, this is true in the default case, but if you find a reason, you can further tune how memory is divided and limited. The idea here is that you can partition the broker's main memory into "Producer" and "Consumer" parts. The Producer part will be used for all things related to messages coming in to the broker, therefore it will be used in destinations. So this means when a destination creates its own MemoryUsage, it will use the Producer memory as its parent, and the Producer memory will use a portion of the broker's main memory. On the other hand, the Consumer part will be used for all things related to dispatching messages to consumers. This means subscriptions. Instead of a subscription using the broker's main memory directly, it will use the Consumer memory, which will be a portion of the main memory. Ideally, the Consumer portion and the Producer portion together will equal the entire broker's main memory.

To split the memory between producers and consumers, set the splitSystemUsageForProducersConsumers property on the main <broker/> element:

<broker splitSystemUsageForProducersConsumers='true'>

By default this will split the broker's Main memory usage into 60% for the producers and 40% for the consumers.

To tune this even further, set the producerSystemUsagePortion and consumerSystemUsagePortion attributes on the main broker element:

<broker splitSystemUsageForProducersConsumers='true' producerSystemUsagePortion='70' consumerSystemUsagePortion='30'>

There you have it. Hopefully this sheds some light on the MemoryUsage of the broker.

Reference: ActiveMQ: Understanding Memory Usage from our JCG partner Christian Posta at the Christian Posta Software blog.

Session Timeout Handling on JSF AJAX request

When we develop a JSF application with AJAX behaviour, we may experience problems handling the timeout scenario of an Ajax request. For example, if you are using J2EE Form-based authentication, a normal request should be redirected to the login page after session timeout. However, if your request is AJAX, the response cannot be treated properly on the client side. The user will remain on the same page and will not be aware that the session has expired.

Many people have proposed solutions for this issue. The following are two possible solutions that involve the use of the Spring Security framework:
1. Oleg Varaksin's post
2. Spring Security 3 and ICEfaces 3 Tutorial

Yet some applications may just use a simple mechanism that stores their authentication and authorization information in the session. For those applications that are not using the Spring Security framework, how can they handle such a problem? I modified the solution proposed by Oleg Varaksin a bit as my reference.

First, create a simple session-scoped JSF managed bean called 'MyJsfAjaxTimeoutSetting'. The main purpose of this POJO is just to allow you to configure the redirect URL after session timeout in faces-config.xml. You may not need this class if you do not want the timeout URL to be configurable.

public class MyJsfAjaxTimeoutSetting {

    public MyJsfAjaxTimeoutSetting() {
    }

    private String timeoutUrl;

    public String getTimeoutUrl() {
        return timeoutUrl;
    }

    public void setTimeoutUrl(String timeoutUrl) {
        this.timeoutUrl = timeoutUrl;
    }
}

Second, create a PhaseListener to handle the redirect of the Ajax request. This PhaseListener is the most important part of the solution. It re-creates the response so that the Ajax request can be redirected after timeout.

import org.borislam.util.FacesUtil;
import org.borislam.util.SecurityUtil;
import java.io.IOException;
import javax.faces.FacesException;
import javax.faces.FactoryFinder;
import javax.faces.context.ExternalContext;
import javax.faces.context.FacesContext;
import javax.faces.context.ResponseWriter;
import javax.faces.event.PhaseEvent;
import javax.faces.event.PhaseId;
import javax.faces.event.PhaseListener;
import javax.faces.render.RenderKit;
import javax.faces.render.RenderKitFactory;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.log4j.Logger;
import org.primefaces.context.RequestContext;

public class MyJsfAjaxTimeoutPhaseListener implements PhaseListener {

    public void afterPhase(PhaseEvent event) {
    }

    public void beforePhase(PhaseEvent event) {
        MyJsfAjaxTimeoutSetting timeoutSetting = (MyJsfAjaxTimeoutSetting) FacesUtil.getManagedBean("MyJsfAjaxTimeoutSetting");
        FacesContext fc = FacesContext.getCurrentInstance();
        RequestContext rc = RequestContext.getCurrentInstance();
        ExternalContext ec = fc.getExternalContext();
        HttpServletResponse response = (HttpServletResponse) ec.getResponse();
        HttpServletRequest request = (HttpServletRequest) ec.getRequest();

        if (timeoutSetting == null) {
            System.out.println("JSF Ajax Timeout Setting is not configured. Do Nothing!");
            return;
        }

        UserCredential user = SecurityUtil.getUserCredential();
        // You can replace the above line of code with the security control of your application.
        // For example, you may get the authenticated user object from session or thread-local storage.
        // It depends on your design.

        if (user == null) {
            // User credential not found. Considered to be a timeout case.
            if (ec.isResponseCommitted()) {
                // Redirect is not possible.
                return;
            }
            try {
                if (((rc != null && RequestContext.getCurrentInstance().isAjaxRequest())
                        || (fc != null && fc.getPartialViewContext().isPartialRequest()))
                        && fc.getResponseWriter() == null
                        && fc.getRenderKit() == null) {

                    response.setCharacterEncoding(request.getCharacterEncoding());

                    RenderKitFactory factory = (RenderKitFactory) FactoryFinder.getFactory(FactoryFinder.RENDER_KIT_FACTORY);
                    RenderKit renderKit = factory.getRenderKit(fc,
                            fc.getApplication().getViewHandler().calculateRenderKitId(fc));
                    ResponseWriter responseWriter = renderKit.createResponseWriter(
                            response.getWriter(), null, request.getCharacterEncoding());
                    fc.setResponseWriter(responseWriter);

                    ec.redirect(ec.getRequestContextPath()
                            + (timeoutSetting.getTimeoutUrl() != null ? timeoutSetting.getTimeoutUrl() : ""));
                }
            } catch (IOException e) {
                System.out.println("Redirect to the specified page '" + timeoutSetting.getTimeoutUrl() + "' failed");
                throw new FacesException(e);
            }
        } else {
            return; // This is not a timeout case. Do nothing!
        }
    }

    public PhaseId getPhaseId() {
        return PhaseId.RESTORE_VIEW;
    }
}
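The article leaves SecurityUtil and UserCredential to your own application. Purely as an illustration of the comment in the listener above, a minimal SecurityUtil might pull the authenticated user out of the session map; the 'userCredential' session key below is an assumption of this sketch, not part of the original solution:

import javax.faces.context.FacesContext;

public final class SecurityUtil {

    private SecurityUtil() {
    }

    // Hypothetical implementation: look the authenticated user up in the HTTP session.
    public static UserCredential getUserCredential() {
        FacesContext fc = FacesContext.getCurrentInstance();
        return (UserCredential) fc.getExternalContext()
                                  .getSessionMap()
                                  .get("userCredential"); // assumed session key
    }
}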
The details of FacesUtil.getManagedBean("MyJsfAjaxTimeoutSetting") are shown below:

public static Object getManagedBean(String beanName) {
    FacesContext fc = FacesContext.getCurrentInstance();
    ELContext elc = fc.getELContext();
    ExpressionFactory ef = fc.getApplication().getExpressionFactory();
    ValueExpression ve = ef.createValueExpression(elc, getJsfEl(beanName), Object.class);
    return ve.getValue(elc);
}

Configuration

As said before, the purpose of the session-scoped managed bean, MyJsfAjaxTimeoutSetting, is just to allow you to make the timeoutUrl configurable in your faces-config.xml:

<managed-bean>
    <managed-bean-name>MyJsfAjaxTimeoutSetting</managed-bean-name>
    <managed-bean-class>org.borislam.security.MyJsfAjaxTimeoutSetting</managed-bean-class>
    <managed-bean-scope>session</managed-bean-scope>
    <managed-property>
        <property-name>timeoutUrl</property-name>
        <value>/login.do</value>
    </managed-property>
</managed-bean>

Most importantly, add the PhaseListener in your faces-config.xml:

<lifecycle>
    <phase-listener id="MyJsfAjaxTimeoutPhaseListener">org.borislam.security.MyJsfAjaxTimeoutPhaseListener</phase-listener>
</lifecycle>

If you are using the Spring framework, you can manage MyJsfAjaxTimeoutSetting in Spring with the help of SpringBeanFacesELResolver. Then you can use the following configuration:

<bean id="MyJsfAjaxTimeoutSetting" class="org.borislam.security.MyJsfAjaxTimeoutSetting" scope="session">
    <property name="timeoutUrl" value="/login.do"/>
</bean>

Reference: Session Timeout Handling on JSF AJAX request from our JCG partner Boris Lam at the Programming Peacefully blog.

A very light Groovy based web application project template

You might have heard of Grails, a Groovy take on a Ruby on Rails-like framework that lets you create web applications much more easily with dynamic scripting. Despite all the power Grails provides, it is not 'light' if you look under the hood. I am not saying Grails is bad or anything; Grails is actually pretty cool to write web applications with. However, I often find myself wanting something even lighter while still prototyping with Groovy. So here I will show you a maven-groovy-webapp project template that I use to get started on any web application development. It's very simple, light, and yet very Groovy.

How to get started

Unzip maven-webapp-groovy.zip above and you should see these few files:

bash> cd maven-webapp-groovy
bash> find .
./pom.xml
./README.txt
./src
./src/main
./src/main/java
./src/main/java/deng
./src/main/java/deng/GroovyContextListener.java
./src/main/resources
./src/main/resources/log4j.properties
./src/main/webapp
./src/main/webapp/console.gt
./src/main/webapp/health.gt
./src/main/webapp/home.gt
./src/main/webapp/WEB-INF
./src/main/webapp/WEB-INF/classes
./src/main/webapp/WEB-INF/classes/.keep
./src/main/webapp/WEB-INF/groovy
./src/main/webapp/WEB-INF/groovy/console.groovy
./src/main/webapp/WEB-INF/groovy/health.groovy
./src/main/webapp/WEB-INF/groovy/home.groovy
./src/main/webapp/WEB-INF/groovy/init.groovy
./src/main/webapp/WEB-INF/groovy/destroy.groovy
./src/main/webapp/WEB-INF/web.xml

As you can see, it's a Maven-based application, and I have configured the Tomcat plugin, so you may run it like this:

bash> mvn tomcat7:run
bash> open http://localhost:8080/maven-webapp-groovy/home.groovy

And of course, with Maven, running the package phase will let you deploy the war into any real application server when ready:

bash> mvn package
bash> cp target/maven-webapp-groovy.war $APP_SERVER_HOME/autodeploy

What's in it

You should check out the main config in the web.xml file, and you'll see that there are a couple of built-in Groovy servlets and a custom listener:

<?xml version="1.0"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5">

    <description>Groovy Web Application</description>

    <welcome-file-list>
        <welcome-file>home.groovy</welcome-file>
    </welcome-file-list>

    <servlet>
        <servlet-name>GroovyServlet</servlet-name>
        <servlet-class>groovy.servlet.GroovyServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>GroovyServlet</servlet-name>
        <url-pattern>*.groovy</url-pattern>
    </servlet-mapping>

    <servlet>
        <servlet-name>TemplateServlet</servlet-name>
        <servlet-class>groovy.servlet.TemplateServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>TemplateServlet</servlet-name>
        <url-pattern>*.gt</url-pattern>
    </servlet-mapping>

    <listener>
        <listener-class>deng.GroovyContextListener</listener-class>
    </listener>
    <context-param>
        <param-name>initScripts</param-name>
        <param-value>/WEB-INF/groovy/init.groovy</param-value>
    </context-param>
    <context-param>
        <param-name>destroyScripts</param-name>
        <param-value>/WEB-INF/groovy/destroy.groovy</param-value>
    </context-param>

</web-app>

I've chosen to use GroovyServlet as a controller (it comes with Groovy!), and this lets you use any scripts inside the /WEB-INF/groovy directory. That's it, no further setup. That's about the only requirement you need to get a Groovy webapp started! See console.groovy as an example of how it works. It's a Groovy version of this JVM console.

Now you can use Groovy to process any logic and even generate the HTML output if you like, but I find it even easier to use TemplateServlet. This allows Groovy template files to be served as views. It's very much like JSP, but it uses Groovy instead! And we know Groovy syntax is much shorter to write! See console.gt as an example of how it works.

The GroovyContextListener is something I wrote, and it's optional. It allows you to run any scripts during the webapp's startup and shutdown. I've created empty init.groovy and destroy.groovy placeholders.
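The listener's source ships in the template zip (src/main/java/deng/GroovyContextListener.java) and is not reproduced in this post. Purely as a sketch of what such a listener can look like, it might evaluate the configured scripts with a GroovyShell; the details here are my assumption, not necessarily the template's exact code:

package deng;

import java.io.File;

import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import groovy.lang.GroovyShell;

public class GroovyContextListener implements ServletContextListener {

    public void contextInitialized(ServletContextEvent event) {
        runScripts(event.getServletContext(), "initScripts");
    }

    public void contextDestroyed(ServletContextEvent event) {
        runScripts(event.getServletContext(), "destroyScripts");
    }

    // Evaluates each comma-separated script path configured as a context-param.
    private void runScripts(ServletContext context, String paramName) {
        String scripts = context.getInitParameter(paramName);
        if (scripts == null || scripts.trim().isEmpty()) {
            return;
        }
        for (String script : scripts.split(",")) {
            try {
                File file = new File(context.getRealPath(script.trim()));
                new GroovyShell().evaluate(file);
            } catch (Exception e) {
                throw new RuntimeException("Failed to evaluate script: " + script, e);
            }
        }
    }
}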
So now you have all the hooks you need to prototype just about any web application you need.

Simplicity wins

This setup is just a plain Java servlet webapp with Groovy loaded. I often think the simpler you get, the fewer the bugs and the faster you code. No heavy frameworks, no extra learning curve (other than basic Servlet API and Groovy/Java skills, of course), and off you go. Go have fun with this Groovy webapp template! And let me know if you have some cool prototypes to show off after playing with this.

Reference: A very light Groovy based web application project template from our JCG partner Zemian Deng at the A Programmer's Journal blog.