
Busting PermGen Myths

In my previous post I explained the reasons that can cause java.lang.OutOfMemoryError: PermGen space crashes. Now it is time to talk about possible solutions to the problem. Or, more precisely, about what the Internet suggests as possible solutions. Unfortunately, I can only say that I felt my inner Jamie Hyneman from MythBusters awakening when going through the different “expert opinions” on the subject.
 
 
 
I googled for the current common knowledge about ways to solve java.lang.OutOfMemoryError: PermGen space crashes and went through a couple of dozen of the most relevant pages in the results. Fortunately, most of the suggestions have already been distilled into this thread on the highly respected StackOverflow. As you can see, the thread is truly popular and has some highly voted answers. But the irony is that it contains exactly zero solutions I could recommend myself. Well, aside from “Find the cause of the memory leak”, which is absolutely correct, of course, but not a very helpful answer to someone asking how to solve that leak. Let us review the suggestions put forward on the SO page.

Use -XX:MaxPermSize=XXXM

There are two possible reasons for the java.lang.OutOfMemoryError: PermGen space error.

One is that the application server and/or the application really does use so many classes that they do not fit into the default-sized Permanent Generation. This is definitely possible and, in fact, not that rare. In this case increasing the size of the Permanent Generation can really save the day. If your only problem is how to fit too much furniture into too small a house, then buy a bigger house!
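For illustration, this is just a matter of passing the corresponding flags on the JVM command line; the sizes below are only example values and have to be chosen for your own application:

-XX:PermSize=128m -XX:MaxPermSize=256m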

But what if your over-caring mother sends you new furniture every week? You cannot keep moving to a bigger house over and over again. That is exactly the situation with memory leaks, and with classloader leaks in particular, as described in the previous post mentioned above. Let me be clear here: no increase in Permanent Generation size will save you from a classloader leak. It can only postpone it, and make it harder to predict how many redeployments your server will survive.
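To make the mechanism concrete, here is a minimal, hypothetical sketch (the class names are made up for illustration) of how such a leak typically arises: a class loaded by the server’s shared classloader keeps a static reference to an object created by the web application, which pins the whole application classloader, and every class it loaded, in the Permanent Generation even after undeployment.

// Loaded by the server's shared classloader (hypothetical example)
public class SharedCache {
    // A static field lives as long as the classloader that loaded this class
    private static final java.util.Map<String, Object> CACHE = new java.util.HashMap<String, Object>();

    public static void put(String key, Object value) {
        CACHE.put(key, value);
    }
}

// Loaded by the web application's classloader
public class MyAppComponent {
    public void register() {
        // The server-level cache now references a webapp object. The object references
        // its class, the class references the webapp classloader, and so nothing loaded
        // by that classloader can ever be garbage collected.
        SharedCache.put("component", this);
    }
}

After every redeployment a fresh copy of every application class is loaded while the old copies stay pinned, so no -XX:MaxPermSize value can ever be large enough.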

-XX:+CMSClassUnloadingEnabled
-XX:+CMSPermGenSweepingEnabled

The most popular answer on StackOverflow was to add these options to the server’s command line. And, they say, “maybe add -XX:+UseConcMarkSweepGC too. Just to be sure”. My first problem with these JVM flags is that no explanation of what they really do is available, neither in the SO answer (and I don’t like answers that tell you to do something without explaining why), nor, in fact, anywhere else on the Internet.

Really, I was unable to find any documentation about these options, except for this page. But, in fact, that does not even matter: no amount of tinkering with Garbage Collector options will help you in the case of a classloader leak, because, by definition, a memory leak is a situation where the GC falls short. If there is a valid live hard reference from somewhere within your server’s classloader to an object or class of your application, then the GC will never consider it garbage and will never reclaim it. Sure, all these JVM flags look very smart and magical, and they may really be required in some situations. But they are certainly not sufficient and do not solve your Permanent Generation leak.
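If you do want to experiment with these flags anyway, a sketch of a full command line might look like the one below; my-server.jar is just a placeholder for your actual server start-up, and -verbose:class is added so you can at least observe in the logs whether classes are being unloaded at all.

java -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -verbose:class -jar my-server.jar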

Use JRockit

The next suggestion was to switch to the JRockit JVM. The rationale was that, as JRockit has no Permanent Generation, one cannot run out of it. Surely an interesting idea. Unfortunately, it will not solve our problem either.

The only result of this “solution” will be getting a java.lang.OutOfMemoryError: Java heap space instead of the java.lang.OutOfMemoryError: PermGen space. In the absence of a separate generation for class definitions, JRockit keeps them in the ordinary Java heap. And as long as the root cause of the leak is not fixed, those class definitions will fill up even the largest heap, given enough time.

Restart the server

Yet another way to pretend that the problem is solved is to restart the application server from time to time: instead of redeploying the application, just restart the whole server. But the first time you see an application server with more than one application deployed on it, you will know that this is rarely possible in a production environment. And it is not really a solution anyway; it is a way to bury your head in the sand.

Use Tomcat

This one is actually not as hopeless as the previous ones: recent Tomcat versions really do try to counter classloader leaks. See for yourself in their documentation. IF you can use Tomcat as your target server, and IF your leak is one of those that Tomcat can successfully fight against, then maybe, just maybe, you are lucky and the problem is solved for you.

Use <Your favorite profiler tool here>

This may be a viable solution too, but again with a couple of IFs. Firstly, you must be able to use the profiler in the affected environment. As I have mentioned in another post, profilers impose a level of overhead that may not be acceptable in a (production) environment. Secondly, you must know how to use the profiler to extract the required information and pinpoint the location of the leak. And my 10+ years of experience show that this is very rarely the case.
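If attaching a profiler to the affected environment is not an option, a lower-overhead sketch of the same investigation, using only standard JDK tooling, is to take a heap dump after a few redeployments and follow the reference chains from there; the process id and the file name below are of course just placeholders.

jmap -dump:live,format=b,file=permgen-leak.hprof <pid-of-the-server>

Opening the dump in a heap analyzer such as Eclipse Memory Analyzer and asking for the path to GC roots of a duplicated application classloader usually leads straight to the rogue reference discussed in the conclusion below.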

Conclusion

So far we have not seen any definitive solution to the java.lang.OutOfMemoryError: PermGen space error. There were a few that may be viable in some cases. But I was astounded by the fact that the majority of the proposals were just plain invalid! You could waste days or weeks trying them and not even start to solve the real problem: finding the rogue reference that is the root cause of the leak!

Fortunately, as of the 1.1 release, Plumbr also discovers PermGen leaks. It tells you the very reason that keeps the classloader from being freed, sparing you the time of hunting down the leak yourself. So the next time you face the java.lang.OutOfMemoryError: PermGen space message, download Plumbr and get rid of the problem for good.
 

Reference: Busting PermGen Myths from our JCG partner Nikita Salnikov-Tarnovski at the Plumbr Blog.


1 Comment
Marcus Kraßmann, 11 years ago

Well, the JRockit solution worked fine for me without replacing one error with another. Maybe I did not run it long enough (just one day) or the number of re-deployments was too small. But maybe it was the fact that replaced classes were really GC’d after a re-deployment. Maybe I should try it again and measure the size of the heap space in the background while doing some re-deployment stuff, but actually I switched to JRebel, which for me solved the problem, too…
