Tomcat - PermGen space Exception - spring

I am using Tomcat 6, Spring, Hibernate and Struts in my application. I often get this exception in the logs regarding memory, and the site hangs and stops responding. I am not sure what's wrong. Can anybody suggest what changes have to be made to avoid this?
org.apache.tomcat.util.threads.ThreadPool$ControlRunnable run
SEVERE: Caught exception (java.lang.OutOfMemoryError: PermGen space) executing org.apache.jk.common.ChannelSocket$SocketConnection#2c52d3, terminating thread

The PermGen space is what Tomcat uses to store class definitions (definitions only, no instantiations) and interned string pools. From experience, PermGen space issues tend to happen frequently in dev environments, since Tomcat has to load new classes every time it deploys a WAR or runs jspc (when you edit a JSP file). Personally, I tend to deploy and redeploy WARs a lot when I'm in dev testing, so I know I'm bound to run out sooner or later (primarily because Java's GC cycles are still kinda crap, so if you redeploy your WARs quickly and frequently enough, the space fills up faster than the GC can keep up).
This should theoretically be less of an issue in production environments, since you (hopefully) don't change the codebase every 10 minutes. If it still occurs, that just means your codebase (and corresponding library dependencies) is too large for the default memory allocation and you'll need to adjust it. The standard flag is something like:
-XX:MaxPermSize=SIZE
I’ve found however the best way to take care of that for good is to allow classes to be unloaded so your PermGen never runs out:
-XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled
Stuff like that worked magic for me in the past. One thing though: there's a real performance tradeoff in using those options, since PermGen sweeping adds noticeable GC work on top of the requests you serve. You'll need to balance your use against that tradeoff.
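For reference, a minimal sketch of where these flags usually go, assuming a Unix-style Tomcat install that picks up setenv.sh (the size below is purely illustrative, not a recommendation):
# $CATALINA_HOME/bin/setenv.sh -- read automatically by catalina.sh at startup
export CATALINA_OPTS="$CATALINA_OPTS -XX:MaxPermSize=256m \
  -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled"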

Your application may have a memory leak, or your Tomcat may not have enough memory to run the application. Use a tool like YourKit to check your memory usage.
Edit: Try increasing Tomcat's memory. By default it starts with 256 MB. If you are on Windows, you can do this by configuring the Tomcat process settings. Refer to the post below.
How do I increase memory on Tomcat 7 when running as a Windows Service?
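For example, a hedged sketch using the procrun service updater that ships with the Windows Tomcat service (the executable and service name "tomcat6"/"Tomcat6" are assumptions matching the Tomcat 6 in the question; the sizes are in MB and purely illustrative):
rem Run from %CATALINA_HOME%\bin to update the registered service's heap settings
tomcat6.exe //US//Tomcat6 --JvmMs=512 --JvmMx=1024
rem Alternatively, open tomcat6w.exe and set the values on the "Java" tab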

Related

Why does the Heap Memory Usage of a SpringBoot application keep increasing?

I created and ran a simple SpringBoot application to accept some requests. I used jconsole to check the Heap Memory Usage and saw a periodic increase followed by GC, and I don't know the reason for the increases. Are objects still being created somewhere (I thought the instances were all created in the container when the application starts)?
Spring Boot has background processes which may consume your resources even when there are no requests to your app, for example scheduled jobs, etc.
Those spikes are expected and normal for any Java app based on any more or less complex framework. The GC algorithm depends on your JVM version, but it can be overridden. The graph shows a normal situation: from time to time memory is consumed by some activity, and after a while the GC wakes up and does the cleaning.
If you want to check exactly which objects caused a memory spike, you can try Java Flight Recorder or a regular heap dump analysis using the Eclipse Memory Analyzer.
For this case, Java Flight Recorder would be the more convenient and suitable option.
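For example, a sketch assuming a JDK that ships jcmd, a known application pid, and (for Flight Recorder on older Oracle JDK 8 builds) that commercial features are unlocked:
# Start a 60-second flight recording and write it to a file
jcmd <pid> JFR.start duration=60s filename=recording.jfr
# Or take a heap dump for analysis in Eclipse Memory Analyzer
jcmd <pid> GC.heap_dump heapdump.hprof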

web application runs much faster in embedded tomcat than in standalone tomcat

I have a spring-boot web application (mostly used through REST calls), that I can run using mvn exec that starts an embedded tomcat (8.5.11), or build a war and deploy it into a standalone tomcat (debian stock 8.5.14-1~bpo8+1). Both are configured the same way, using
To our utmost surprise, the embedded tomcat seems to be much faster for high loads (a small test sequence with 200+ threads using jmeter). At 600 threads, for example:
The standalone tomcat has very large response times, while having a relatively low load of 50-70 (the server has 64 cores and can run 128 threads), and a low IO usage.
The embedded tomcat has a load of 150-200 and faster response times, with high I/O usage (it seems that the database is the limiting factor here, but it degrades gracefully: 600 threads is roughly twice as slow as 300 threads).
Supposedly, the configuration is the same for both tomcats, so currently I am quite troubled because of this. I really would not like to run embedded tomcat in production if I can help it.
Does anyone have an idea:
what the cause for this performance disparity may be, and
how we can reliably compare the configuration for two tomcats?
Update
I ran some more tests and discovered a significant difference after looking through the Garbage Collector logs: with 600 jmeter threads, the embedded tomcat spent about 5% of its time GCing, while the standalone tomcat spent about 50% of its time GCing. I calculated these numbers with an awk script, so they may be a bit mis-parsed, but manually checking the GC logs seems to corroborate them. It still does not explain why one of them is GCing all the time and the other is not...
One more update
I managed to speed up the standalone tomcat by switching the garbage collector to G1. Now it uses about 20% of elapsed time for garbage collection, and never exceeds 1s for any single GC run. The standalone tomcat is now only 20-30% slower than the embedded tomcat. Interestingly, using G1 in the embedded tomcat had no real effect on its performance; GC overhead is still around 15% there.
This is by no means a solution, but it helped to close the gap between the two tomcats and thus now the problem is not so critical.
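For reference, switching a standalone Tomcat to G1 and logging GC activity is typically done with flags along these lines (a sketch for Java 8-era HotSpot; the values are illustrative, not necessarily the exact settings used here):
# In the standalone Tomcat's bin/setenv.sh
export CATALINA_OPTS="$CATALINA_OPTS -XX:+UseG1GC \
  -Xloggc:$CATALINA_BASE/logs/gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"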
Check the memory parameters for your standalone Tomcat and your spring boot application, especially the java heap size.
My guess is that your standalone Tomcat has a value for Xmx set in the startup script (catalina.sh and/or setenv.sh), say for example 1 Gb, which is much lower than what your Spring Boot app is using.
If you haven't specified a value for Xmx on the command line for your spring boot app, it will default to 25% of your physical memory. If your server has 16 Gb of RAM, that'll be 4Gb...
I'd recommend running your tests again after making sure the same JVM parameters are in use (Xms, Xmx, various GC options, ...). If unsure, inspect the running VMs with jVisualVm or similar tool.
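One quick way to compare the two setups, assuming a JDK with jcmd on the path and known pids for both Tomcats (the output file names are just placeholders):
# Dump the effective command line and JVM flags of each instance, then diff them
jcmd <standalone-pid> VM.command_line > standalone.txt && jcmd <standalone-pid> VM.flags >> standalone.txt
jcmd <embedded-pid> VM.command_line > embedded.txt && jcmd <embedded-pid> VM.flags >> embedded.txt
diff standalone.txt embedded.txt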

Web application very slow in Tomcat 7

I implemented a web application; when the Tomcat service starts it works very quickly, but after some hours, and as more users log in, it gets slow (with only about 15 users).
Checking usage statistics: RAM (20%), CPU (25%).
Server Features:
RAM 8GB
Processor i7
Windows Server 2008 64bit
Tomcat 7
MySql 5.0
Struts2
-Xms1024m
-Xmx1024m
PermGen = 1024
MaxPermGen = 1024
We do not use a web server in front; we publish directly on Tomcat.
Even at midnight the slowness persists (with only 1 user online).
The only solution I have is to restart the Tomcat service, after which response times are excellent again.
Is there anyone who has experienced this issue? Any clue would be appreciated.
Not enough details provided. Need more information :(
Use htop or top to find memory and CPU usage per process & per thread.
CPU
A constant 25% CPU usage on a 4-core system can indicate that a single-threaded application/thread is running at 100% CPU on the only core it is able to use.
Which application is eating the CPU?
Memory
20% memory is ~1.6 GB. That is a bit more than I would expect for an idle server running only Tomcat + MySQL, but the -Xms1024m tells Tomcat to preallocate 1 GB of heap, so that explains it.
Change the Tomcat settings to -Xms512m and -Xmx2048m. Watch Tomcat's memory usage while you throw some users at it. If it keeps growing until it reaches 2 GB and then freezes, that can indicate a memory leak.
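To watch the heap while you load it, a sketch assuming the JDK's jstat tool and a known Tomcat pid:
# Print heap/GC utilization every 5 seconds; watch whether the old generation keeps climbing
jstat -gcutil <tomcat-pid> 5000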
Disk
Use df -h to check disk usage. A full partition can cause the issues you are experiencing.
Filesystem Size Used Avail Usage% Mounted on
/cygdrive/c 149G 149G 414M 100% /
(If you just noticed from this example that my laptop is running out of space, you're doing it right :D)
Logs
Logs are awesome. Yet they have a bad habit of filling up the disk. Check the logs' disk usage. Are logs being written/erased/rotated properly when new users connect? Does erasing the logs fix the issue? (Copy them somewhere for future analysis before you erase them.)
If not, logs are STILL awesome. They have the good habit of helping you track down bugs. Check the Tomcat logs. You may want to set the logging level to debug. What happens last before the website dies? Any useful error message? Are user connections still received and accepted by Tomcat?
Application
I suppose that the 25% CPU goes to Tomcat (and not MySQL). Tomcat doesn't fail by itself; the application running on it must be failing. Try removing the application from Tomcat (you can put a hello-world app there instead). Can Tomcat keep working overnight without your application? It probably can, in which case the fault is in the application.
Enable full debug logging in your application and try to track the issue. Run it straight from Eclipse in debug mode and throw users at it. Does it fail consistently in the same way?
If yes, hit "pause" in the Eclipse debugger and check what the application is doing. Look at the piece of code each thread is currently running, plus its call stack. Repeat that a few times. If there is a deadlock, an infinite loop, or something similar, you can find it this way.
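If attaching Eclipse to the server is not practical, thread dumps give the same call-stack view; a sketch, assuming the JDK's jstack tool and a known Tomcat pid:
# Take a couple of thread dumps a few seconds apart and compare the stacks
jstack <tomcat-pid> > dump1.txt
sleep 10
jstack <tomcat-pid> > dump2.txt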
You will have found the issue by now if you are lucky. If not, you're unfortunate and it's a tricky bug that might be deep inside the application. That can get tricky to trace. Determination will lead to success. Good luck =)
For performance-related issues, we need to follow these guidelines:
You can set Xms and Xmx to the same size for effectiveness.
-Xms2048m
-Xmx2048m
You can also enable the PermGen to be garbage collected.
-XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
If the page changes too frequently to make this option logical, try temporarily caching the dynamic content, so that it doesn't need to be regenerated over and over again. Any techniques you can use to cache work that's already been done instead of doing it again should be used - this is the key to achieving the best Tomcat performance.
If there is any database-related issue, you can follow SQL query performance tuning.
You can also rotate the catalina.out log file without restarting Tomcat.
In detail, there are two ways.
The first, which is more direct, is that you can rotate Catalina.out by adding a simple pipe to the log rotation tool of your choice in Catalina's startup shell script. This will look something like:
"$CATALINA_BASE"/logs/catalina.out WeaponOfChoice 2>&1 &
Simply replace "WeaponOfChoice" with your favorite log rotation tool.
The second way is less direct, but ultimately better. The best way to handle the rotation of catalina.out is to make sure it never needs to rotate: simply set the "swallowOutput" attribute to true for all Contexts in server.xml.
This will route System.err and System.out to whatever logging implementation you have configured, or to JULI if you haven't configured one.
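A minimal sketch of that setting (the path and docBase are assumptions; swallowOutput itself is a standard Context attribute):
<!-- inside the <Host> element of server.xml -->
<Context path="/myapp" docBase="myapp" swallowOutput="true"/>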
See more at: Tomcat Catalina Out
I experienced a very slow stock Tomcat dashboard on a clean CentOS 7 install and found the following cause and solution:
Slow start up times for Tomcat are often related to Java's SecureRandom implementation. By default, it uses /dev/random as an entropy source. This can be slow as it uses system events to gather entropy (e.g. disk reads, key presses, etc). As the urandom manpage states:
When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered.
Source: https://www.digitalocean.com/community/questions/tomcat-8-5-9-restart-is-really-slow-on-my-centos-7-2-droplet
Fix it by adding the following configuration option to your tomcat.conf or (preferred) a custom file into /tomcat/conf/conf.d/:
JAVA_OPTS="-Djava.security.egd=file:/dev/./urandom"
We encountered a similar problem; the cause was catalina.out, the standard destination log file for System.out and System.err. Its size kept increasing, slowing things down, and ultimately Tomcat crashed. The problem was solved by rotating catalina.out. We were using Red Hat, so we made a shell script to rotate catalina.out.
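If you prefer not to hand-roll a shell script, a logrotate rule does the same job; a minimal sketch, assuming a standard install path (adjust to yours), saved as /etc/logrotate.d/tomcat:
# copytruncate matters because Tomcat keeps catalina.out open
/opt/tomcat/logs/catalina.out {
    daily
    rotate 7
    compress
    missingok
    copytruncate
}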
Here are some links:
Mulesoft article on catalina (also contains two methods of rotating):
Tomcat Catalina Introduction
If "catalina.out" is not the problem then try this instead:-
Mulesoft article on optimizing tomcat:
Tuning Tomcat Performance For Optimum Speed
We had a problem which looks similar to yours. Tomcat was slow to respond, but the access log showed just milliseconds per answer. The problem was streaming responses: one of our services returned real-time data that users could subscribe to. The epoll queues were becoming bloated, and network requests couldn't get through to Tomcat. What's more interesting, the CPU was mostly idle (since no one could ask the server to do anything) and the acceptor/poller threads were sitting in WAIT, not RUNNING or IN_NATIVE.
At the time we just limited the number of such requests and everything went back to normal.

GRAILS 2, memory issues

We have an application built using Grails 2.0.1 and MongoDB. As our userbase has grown and we did some performance research, we noticed that for each typical request Grails eats about 150 MB of RAM, and when RAM is about to reach its maximum it performs a GC.
We've set singleton scope for controllers and made the services non-transactional. We use JRockit.
I'd like to know whether this can be considered normal for a Grails app or not. Our website is nothing more than a usual website, with no extra memory usage, just a user management system, and the code itself seems to be OK.
Here are the plugins we use:
app.grails.version=2.0.1,
app.servlet.version=2.4,
app.version=0.1,
plugins.cache-headers=1.1.3,
plugins.code-coverage=1.2.5,
plugins.codenarc=0.12,
plugins.crypto=2.0,
plugins.gsp-arse=1.3
plugins.jaxrs=0.6,
plugins.mongodb=1.0.0.RC5,
plugins.navigation=1.2,
plugins.quartz=0.4.2,
plugins.redis=1.0.0.M9,
plugins.rendering=0.4.3,
plugins.selenium=0.8,
plugins.selenium-rc=1.0.2,
plugins.spring-security-core=1.2.7.2,
plugins.springcache=1.3.1,
plugins.svn=1.0.1,
plugins.tomcat=2.0.1,
plugins.ui-performance=1.2.2
On a Sun JDK, fire up jvisualvm (or the JRockit equivalent, if there is one; otherwise get yourself a proper profiler that works with JRockit), attach it to your running server, start the profiler and analyze the output. This will give you an idea of where to look.
Maybe you are actually loading that much information from the backend storage, but that's just a guess.

JBOSS Configurations

Disclaimer: I am more of a programmer and have little knowledge of JBOSS.
When we deployed the system, it worked properly in the test environment. However, in production, where there are multiple users and a lot of data being updated/saved, some issues occurred: double updates are being created, and some functions do not work unless the server is restarted. I'm thinking this may be corrected by modifying whatever session or memory parameter JBoss has, so we can avoid restarting the server every time an error occurs.
Question: What parameter or JBoss configuration should we edit to accommodate multiple users and a large number of transactions?
You need to investigate the reason for your application not behaving the way you want it to. Some points you can consider:
Log a request with JBoss support.
Try increasing the Java heap size (this can be done by editing the entry in standalone.conf); see the sketch after this list.
Try enabling GC logs to see whether your garbage is being collected properly.
Check for memory leaks in the code.
Try to analyze thread dumps to check whether some of your threads are being blocked.
Check whether your server's CPU and memory utilization is high.
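For the heap-size and GC-log points above, a hedged sketch of the kind of entry involved in $JBOSS_HOME/bin/standalone.conf (the sizes, log path, and Java 8-style GC flags are illustrative assumptions, not recommendations):
# Heap sizing plus basic GC logging
JAVA_OPTS="$JAVA_OPTS -Xms1024m -Xmx2048m"
JAVA_OPTS="$JAVA_OPTS -verbose:gc -Xloggc:/var/log/jboss/gc.log -XX:+PrintGCDetails"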
I'm not sure what version of JBoss you are using, but if you are wanting to increase the JVM memory you can modify the following line in the run.bat file in your bin folder:
set JAVA_OPTS=%JAVA_OPTS% -Xms128m -Xmx512m
Xms is the minimum heap size and Xmx is the maximum. If you think the problem is due to a lack of resources, you may want to increase them to something like this:
set JAVA_OPTS=%JAVA_OPTS% -Xms512m -Xmx1024m
Does everything work normally with only 1 or very few users using the system? To me it seems more like a coding issue.
Check the JVM heap size and set it according to the RAM available on the machine.
You must do performance testing before you deploy to production. You can check the behaviour of your code during performance testing using a monitoring tool or a JMX tool.
Tune parameters like the heap size and the GC algorithm; you might want to define a fixed size for the young generation.
Tune the thread pools as well.
https://developer.jboss.org/wiki/ThreadPoolConfiguration?_sscc=t
