Throughout those years we have kept a close eye on the problems packaged in different flavours of the OutOfMemoryError message. Daily digests of new questions on specific keywords, via specialised services such as Google Alerts, have given us a good overview of the situations where applications fail with the java.lang.OutOfMemoryError in their logs.
The people facing the problem tend to fall into pretty well-segmented buckets, so I decided to describe some of the more interesting personas a bit.
Self-taught surgeons. These guys are truly creative; I definitely have to give them credit for that. When faced with unexpected error messages, they come up with a sea of explanations for why this particular error might have occurred. And then they jump right into fixing the problem.
I have seen all of the following so many times that I have lost count. I can only warrant that the examples are both real and scary:
- “I am still getting the OutOfMemoryError even though I bought 16G of additional RAM.” Well, instead of increasing -Xmx, the most obvious solution for many seems to be a shopping spree.
- “I am facing an OutOfMemoryError in my logs and I found that one of my .class files is more than 1,500 lines long. How small do my class files have to be to get rid of these messages?” How you can arrive at a correlation between the number of lines in a class file and running out of heap space is beyond me, but I guess there is a method in this madness.
- “I switched from using java.util.Vector to java.util.ArrayList and spent a month refactoring my app, but I am still getting the OutOfMemoryError.” Well, good for you, Vectors are so 1999. But again, why are you curing the patient if you have no clue what is causing the disease?
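The futility of that last refactoring is easy to demonstrate. A minimal sketch (class and method names are mine, the 1 KB payload is arbitrary): both collection types hold a reference to every element added, so swapping one for the other changes synchronization behaviour, not how much data stays reachable.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

public class CollectionSwap {

    // Add n elements of roughly 1 KB each and report the payload bytes
    // the collection now keeps reachable. Works for any List implementation.
    static long retained(List<byte[]> list, int n) {
        for (int i = 0; i < n; i++) {
            list.add(new byte[1024]); // each array stays strongly referenced
        }
        return (long) list.size() * 1024;
    }

    public static void main(String[] args) {
        long vectorBytes = retained(new Vector<>(), 10_000);
        long arrayListBytes = retained(new ArrayList<>(), 10_000);
        // ~10 MB is retained either way; the container type is irrelevant
        System.out.println(vectorBytes == arrayListBytes); // prints "true"
    }
}
```

If the application holds on to too much data, it will hold on to too much data in an ArrayList just as happily as in a Vector.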
The list could go on forever. What makes me curious is – when software developers behave like this, should I be more careful when approaching my doctor next time as well?
Configuration H2x0rz. If there is a parameter in the JVM configuration, it has to be tweaked. This seems to be the only truth for this particular group. Indeed, what is the chance that the defaults recommended by Oracle JVM engineers would make any sense? The result? Applications launching with way too large minimum heaps, mangled thread priorities, vastly underutilized tenured areas, or unsuitable and/or experimental GC algos.
Do not get me wrong: if you know what you are doing and are building this fine-tuning on actual measurements, go ahead. More often than not, though, these types of users have inherited “the right set of configuration options” from somewhere and are now applying the same set of -XX parameters to each and every JVM they see out there. Please, don’t. The highly transactional webapp you last faced is a completely different beast from the data-hungry batch job you have at your fingertips at the moment.
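Before copying a flag set from one JVM to the next, it is worth at least looking at what the defaults on the target machine actually are. A small diagnostic sketch (the grep pattern is illustrative; the flag itself is a standard HotSpot diagnostic option):

```shell
# Dump the final values of all JVM flags as they would apply on this
# machine, then filter for heap sizing – no guessing about inherited defaults
java -XX:+PrintFlagsFinal -version | grep -i heapsize
```

Measure first; only then decide whether the default is actually the problem.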
Victims of Data Surge. Those guys have been building and running a business app for years without major performance issues. And then lightning strikes and the app lies dead on the ground with OutOfMemoryErrors in the log files. Some of the users are suddenly able to run the whole app into the ground by executing an operation that loads too much data into memory at once.
Whether it is caused by business being good and the number of customers having grown beyond a certain magic point, or by the company acquiring and merging with a competitor, doubling the amount of data, the effect is the same.
Situations like this, once identified, can be resolved by applying a number of well-known tools and techniques. You can defer data loading, process the operation in smaller batches, or change the data structure responsible for storing this data – it is up to you. Many of those solutions could be a good fit.
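The batching approach in particular is cheap to retrofit. A minimal sketch (the helper name and the generic handler are mine): instead of materialising results for the whole dataset at once, walk it in fixed-size windows so only one batch's worth of derived data needs to be live at a time.

```java
import java.util.List;
import java.util.function.Consumer;

public class BatchProcessor {

    // Feed a large list to a handler in fixed-size batches, so the handler
    // only ever needs working memory proportional to batchSize, not to the
    // full dataset. Returns the number of batches processed.
    static <T> int processInBatches(List<T> items, int batchSize,
                                    Consumer<List<T>> handler) {
        int batches = 0;
        for (int from = 0; from < items.size(); from += batchSize) {
            int to = Math.min(from + batchSize, items.size());
            handler.accept(items.subList(from, to)); // a view, not a copy
            batches++;
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> data = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10);
        int batches = processInBatches(data, 3,
                batch -> System.out.println("processing " + batch.size() + " items"));
        System.out.println(batches); // prints "4"
    }
}
```

The same idea applies to database reads (paginated queries) and file processing (streaming instead of slurping) – the point is to bound the working set, not the input.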
But what we see instead in such cases is the problem being hidden under an increased heap size. You can indeed escape the OutOfMemoryError just by increasing the -Xmx in your configuration, but more often than not you are still doing your users a disservice. Large operations still take a long time to complete, annoying the users with increased latency. Worse yet, by increasing the heap you often cause GC pauses to stretch to intolerable lengths.
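To illustrate the trade-off (the jar name and heap sizes here are hypothetical; -verbose:gc is the standard switch for logging collections):

```shell
# Before: the oversized operation dies with an OutOfMemoryError
java -Xmx2g -verbose:gc -jar app.jar

# After: the error is gone, but a full GC now has up to 16G of heap to
# traverse, so worst-case pause times grow along with the heap
java -Xmx16g -verbose:gc -jar app.jar
```

The GC log is what tells you whether you traded an error for a latency problem.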
Minecrafters. If I had to pick a single application responsible for memory leaks, it would be Minecraft. Over the years I have most likely seen thousands of frustrated 9-year-olds forced to deal with heap configuration.
A quick googling surfaces the extent of the problem, which I guess makes it a good enough case study for anyone considering shipping desktop software built on Java.
If you did not feel you belong to any of the groups listed, good. You are among the pragmatic engineers who take pride in their craft by carefully investigating cause-and-effect relations before jumping to conclusions.