Grails 1.3.9 application high CPU load after some time - performance

A Grails 1.3.9 application deployed on Tomcat 7 develops serious performance issues once it is used by 10 to 20 users concurrently.
CPU load increases dramatically and HTTP responses become very slow.
EhCache is used as the second-level cache in this application.
We discovered that clearing the second-level cache via the Melody plugin helps.
I tried many different EhCache settings (changing expiry times, using a memory-only cache, ...), but the high CPU load and severe performance problems still strike after some time.
We suspect that the performance issues are somehow related to the second-level cache, but we have been unable to find out how to resolve the problem.
We'd appreciate any suggestions to resolve this situation. Thanks.
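For illustration only: the kind of tuning described above (expiry times, memory-only storage) would normally be done in ehcache.xml, but a rough programmatic sketch of equivalent settings, assuming the Ehcache 2.x API and a hypothetical Hibernate region name, looks like this:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;

public class SecondLevelCacheTuningSketch {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create(); // picks up ehcache.xml from the classpath

        // Hypothetical region "com.example.Book": cap size, expire entries, keep everything in memory.
        Cache region = new Cache("com.example.Book",
                1000,    // maxElementsInMemory
                false,   // overflowToDisk: memory-only, one of the variants tried above
                false,   // eternal
                300,     // timeToLiveSeconds: hard expiry after 5 minutes
                120);    // timeToIdleSeconds: or 2 minutes without access
        manager.addCache(region);

        // Rough equivalent of "clear the second-level cache" from a monitoring tool:
        manager.clearAll();

        manager.shutdown();
    }
}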
Edit:
Memory history from Melody:
Tomcat 7 JVM arguments:
-Djava.util.logging.config.file=/var/lib/tomcat7/conf/logging.properties
-Djava.awt.headless=true
-Xss1G
-Xmx2G
-Xms2G
-XX:MaxPermSize=256m
-XX:PermSize=128m
-XX:+UseConcMarkSweepGC
-Dstringchararrayaccessor.disabled=true
-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
-Djava.endorsed.dirs=/usr/share/tomcat7/endorsed
-Dcatalina.base=/var/lib/tomcat7
-Dcatalina.home=/usr/share/tomcat7
-Djava.io.tmpdir=/tmp/tomcat7-tomcat7-tmp

Related

Please guide me how to increase JVM heap size for Tomcat 7 version

We are observing that the Tomcat JVM heap size reaches a warning state (80%) every day; sometimes it recovers automatically and sometimes only after a Tomcat restart.
Please guide me on how to increase the JVM heap size for Tomcat 7. I see the content below in Tomcat's setenv.sh file.
CATALINA_OPTS="$CATALINA_OPTS -XX:+UseConcMarkSweepGC"
Thanks in advance.
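For what it's worth, the heap is normally raised by appending -Xms/-Xmx to CATALINA_OPTS in that same setenv.sh (for example -Xms1024m -Xmx2048m). A minimal sketch to verify what the running JVM actually received, run with the same flags or dropped into a scratch page on the server:

public class HeapCheck {
    public static void main(String[] args) {
        long mb = 1024L * 1024L;
        Runtime rt = Runtime.getRuntime();
        // -Xmx drives maxMemory(); -Xms drives the initial totalMemory()
        System.out.println("max heap     : " + rt.maxMemory() / mb + " MB");
        System.out.println("current heap : " + rt.totalMemory() / mb + " MB");
        System.out.println("free in heap : " + rt.freeMemory() / mb + " MB");
    }
}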

What are the best options for reducing class loading memory footprint in Wildfly 11?

I'm using Wildfly 11 and Java 8. I'm running into OutOfMemory errors due to lack of Metaspace, so I'm looking for ways to optimize my class loading. I have several WAR files that run on the same app server. Most contain the same types of classes:
$WILDFLY_HOME/myapp.war/WEB-INF/lib/spring-context-4.3.8.RELEASE.jar
$WILDFLY_HOME/myapp.war/WEB-INF/lib/spring-context-support-4.3.8.RELEASE.jar
...
I'm wondering if this is a problem (e.g. Wildfly attempts to load the same class twice for different applications) and if so, what options are available to me for cutting down my memory footprint due to class loading?
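Each WAR gets its own classloader, so jars bundled per deployment are indeed loaded once per application. Before restructuring (e.g. moving shared jars into a server module), it can help to quantify the problem; a small diagnostic sketch, not Wildfly-specific, using only the standard java.lang.management API:

import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceSnapshot {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        System.out.println("loaded classes (current): " + cl.getLoadedClassCount());
        System.out.println("loaded classes (total)  : " + cl.getTotalLoadedClassCount());
        System.out.println("unloaded classes        : " + cl.getUnloadedClassCount());

        // On Java 8, Metaspace shows up as a non-heap memory pool.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Metaspace")) {
                System.out.println(pool.getName() + " used: "
                        + pool.getUsage().getUsed() / (1024 * 1024) + " MB");
            }
        }
    }
}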

Spring ehcache vs Memcached?

I have worked on Spring caching using Ehcache. To me they look much the same, just with a different API exposed and a different implementation.
What's the difference between them in terms of the features provided,
apart from API/implementation?
Update: I have already seen Hibernate EHCache vs MemCache, but that question is mainly from a Hibernate perspective, while my question is about caching services in general. The answer to that question also states there is not much difference in terms of features.
Aside from the API differences you noted, the major difference here is going to be that memcached lives in a different process while Ehcache is internal to the JVM - unless configured to store on disk or in a cluster.
This mainly means that with Memcached you always need a serialized version of your objects and you always interact with a different process, remote or not.
Ehcache, and other JVM-based caching solutions, start with an on-heap cache, which allows lookups to simply hand back references to your Java objects.
Of course this means the objects keep living in the Java heap, increasing memory pressure. In the case of Ehcache 3.x you have the option to move to off-heap memory and more, allowing the cache to grow without impacting the JVM heap.
At this point, the benefit of Memcached may be that you want non-Java clients to access it.
And the final decision really is in your hands. Caches consume memory to provide reduced latency. What works for you may be different from what works for others. You have to measure and decide.
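To make the "references vs. serialization" point concrete, here is a minimal sketch assuming the Ehcache 2.x API and a made-up UserProfile class; an on-heap cache happily stores a non-Serializable object and returns the very same reference, whereas anything sent to memcached would have to cross a process boundary in serialized form:

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class OnHeapReferenceSketch {
    // Deliberately NOT Serializable: fine for an on-heap Ehcache,
    // impossible to hand to an out-of-process cache without a codec.
    static class UserProfile {
        final String name;
        UserProfile(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        CacheManager manager = CacheManager.create();
        manager.addCache(new Cache("profiles", 1000, false, false, 600, 0));
        Cache cache = manager.getCache("profiles");

        UserProfile original = new UserProfile("alice");
        cache.put(new Element("alice", original));

        // On-heap lookup hands back the same object reference: no copy, no deserialization.
        UserProfile cached = (UserProfile) cache.get("alice").getObjectValue();
        System.out.println("same reference: " + (cached == original));

        manager.shutdown();
    }
}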

Glassfish 4 (Spring MVC/JSP/Hibernate Application) builds up HEAP

Our Glassfish 4 application server builds up heap space and hangs while testing under high load (200 parallel connections). We used different garbage collection settings while testing.
The application runs on a clustered Glassfish Server with one Administration Domain and two instances using --availabilityenabled=true to share session data. But we can reproduce the Heap building up on non clustered Glassfish Servers (running on our desktops), too.
We are using the following Spring/Hibernate Versions:
spring 4.0.1 (including webmvc, aop, context, beans, binding, jdbc, web, tx)
spring-security 3.2.0
hibernate 4.3.1 (core, entitymanager, validator)
jsp 2.2
elasticsearch 1.0.0
Situation
Under normal testing the heap starts to fill up. Garbage collection is trying to do its job, but the used heap still rises.
(Wish I could post images...) You would see a rise of approx. 300 MB in 20 hours.
When using JMeter to simulate higher load, heap usage starts to rise. After 2 hours of 200 simultaneous connections the heap rises to 6 GB (from 0.5 GB), and it continues to rise until 8 GB is reached (approx. 4 hours in). Then bad things happen.
When performing the test for only 2 hours (reaching 6 GB of heap), the heap does not shrink, even when triggering a manual GC or leaving Glassfish without new connections for 24 hours. It stays at 6 GB.
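That last observation is the key one: if a full GC does not reclaim the memory, the objects are still strongly reachable, which points to a leak rather than ordinary garbage waiting to be collected. A tiny sketch of the same check using only java.lang.management (equivalent to the "manual GC" button in monitoring tools):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class HeapAfterGc {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        System.out.println("before GC: " + usedMb(mem) + " MB");
        mem.gc();  // requests a full collection, like the manual-GC button
        System.out.println("after  GC: " + usedMb(mem) + " MB");
        // If "after GC" stays close to "before GC", the retained objects are
        // still referenced somewhere -- a leak, not collectable garbage.
    }

    private static long usedMb(MemoryMXBean mem) {
        return mem.getHeapMemoryUsage().getUsed() / (1024 * 1024);
    }
}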
Heap dump
I took a Heapdump after 2 hours of testing:
class instances
java.util.ArrayList 19.904.237
java.lang.Object[] 15.851.496
org.jboss.weld.util.collections.ArraySet 9.192.068
org.jboss.weld.bean.interceptor.WeldInterceptorClassMetadata 9.188.347
java.util.HashMap$Entry 5.546.603
java.util.Collections$UnmodifiableRandomAccesList 5.331.810
java.util.Collections$SynchronizedRandomAccesList 5.328.892
java.lang.reflect.Constructor 2.669.192
com.google.common.collect.MapMakerInternalMap$StrongEntity 2.667.181
com.google.common.collect.MapMakerInternalMap$StrongValueReference 2.667.181
org.jboss.weld.injection.AbstractCallableInjectionPoint$1 2.664.747
org.jboss.weld.injection.ConstructorInjectionPoint 2.664.737
org.jboss.weld.injection.producer.BeanInjectionTarget 2.664.737
org.jboss.weld.injection.producer.DefaultInstanciator 2.664.731
Current Configuration
Java: Java(TM) SE Runtime Environment, 1.7.0_51-b13
JVM: Java HotSpot(TM) 64-Bit Server VM, 24.51-b03, mixed mode
Server: Server GlassFish Server Open Source Edition 4.0
Our current settings are:
-XX:+UnlockDiagnosticVMOptions
-XX:+UseParNewGC
-XX:ParallelGCThreads=4
-XX:+UseConcMarkSweepGC
-XX:MaxPermSize=1024m
-XX:NewRatio=4
-Xms4096m
-Xss256k
-Xmx8G
-javaagent:/opt/glassfish4/glassfish/lib/monitor/flashlight-agent.jar
[...]
Server Hardware
One server has 14 CPU cores (including hyperthreading), 32 GB of RAM, and runs Glassfish and ElasticSearch. The load never reaches 5 until the heap is nearly full and excessive garbage collection kicks in; load then rises massively.
What are we doing wrong here?
Weld, which is the implementation of CDI in Glassfish, has a known memory leak in the version that ships with Glassfish 4. See: https://community.jboss.org/message/848709. You can replace the Weld jar file to upgrade to version 2.0.5.Final. This will mostly fix the situation, in that you will no longer see "org.jboss.weld.bean.interceptor.WeldInterceptorClassMetadata" instances grow without bound in your heap dumps.
There is another memory leak caused by using tag files with parameters while CDI is enabled (which is generally the case, since GlassFish 4 eagerly enables CDI by default even if you don't use it or have a beans.xml). As a workaround, disable CDI or use a JSP segment include instead of a tag. See https://www.java.net/forum/topic/glassfish/glassfish/glassfish-4-memory-leak

OpenJPA cache vs ehcache plugin

OpenJPA comes with its own cache implementation, but it also integrates easily with Ehcache and other third-party cache providers.
What are the main advantages of using Ehcache vs OpenJPA's implementation? Is it a scalability issue?
Thanks
Ehcache can run in a distributed mode, which is more scalable, whereas the default OpenJPA cache implementation is limited by the size of the JVM's memory. I believe that running with the default cache you will see much better performance than running with a distributed Ehcache.
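For context, the built-in OpenJPA data cache is switched on with a single property and lives entirely in the local heap, which is exactly the limitation mentioned above. A minimal sketch, where "my-unit" is a hypothetical persistence unit name; the openjpa-ehcache integration module would instead point the caching properties at Ehcache, which is what enables the distributed setups discussed here:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class OpenJpaDataCacheSketch {
    public static void main(String[] args) {
        Map<String, String> props = new HashMap<String, String>();
        // Built-in OpenJPA data cache: per JVM, capped at 5000 entries,
        // and bounded by the local heap.
        props.put("openjpa.DataCache", "true(CacheSize=5000)");

        EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-unit", props);
        // ... use emf ...
        emf.close();
    }
}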
