I'm aware that we can't use the advantages of Java 8 with Spring 3, such as lambda expressions, Optional, Stream, etc.
But what about the new way of handling memory?
Edit
By memory handling I mean the new metaspace memory, the metaspace garbage collection, the removal of the permanent generation, and so on.
Also, we couldn't upgrade Spring to the latest version 4 because that would have required upgrading Hibernate to 3.6 as well, which is not possible for the moment (we are dealing with an old application).
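To make that concrete: on Java 8 the old permanent-generation settings are simply ignored (the JVM prints a warning if you pass them), and class metadata is instead allocated natively in metaspace, which grows on demand unless you cap it. The sizes below are only placeholders:
Java 7 and earlier (permanent generation):
-XX:PermSize=256m
-XX:MaxPermSize=512m
Java 8 and later (metaspace):
-XX:MetaspaceSize=256m
-XX:MaxMetaspaceSize=512m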
Related
Why should I spend extra effort migrating my application if I'm not using the new features of the latest version?
You don't have to upgrade, but I'd say it's worth it for 2 reasons:
Performance and security upgrades of Spring and other dependencies are always a worthwhile effort in my opinion. By skipping upgrades you could be running vulnerable packages.
What happens in a year if you do need a new feature or need to migrate to JDK 11 and beyond? It's typically easier to do the incremental updates multiple times per year rather than a big-bang upgrade every couple of years.
We are running a Spring Boot based service and are seeing GC issues when running perf tests. When we looked at a heap dump in the Eclipse Memory Analyzer (MAT) plugin, we found the following:
One instance of "com.fasterxml.jackson.databind.ObjectMapper" loaded by "org.apache.catalina.loader.WebappClassLoader # 0x1000050b8" occupies 2,057,443,904 (93.15%) bytes. The memory is accumulated in one instance of "com.fasterxml.jackson.databind.ser.SerializerCache" loaded by "org.apache.catalina.loader.WebappClassLoader # 0x1000050b8".
Keywords
com.fasterxml.jackson.databind.ser.SerializerCache
com.fasterxml.jackson.databind.ObjectMapper
org.apache.catalina.loader.WebappClassLoader # 0x1000050b8
I'm not really sure what's going on here. Why is SerializerCache holding onto so many objects in its map, and how can we clear it out?
Using
SpringIO: 2.0.6.RELEASE
Spring IO Platform 2.0.6.RELEASE uses Jackson version 2.6.7, which, incidentally, doesn't show up on the Jackson release page of the master branch (https://github.com/FasterXML/jackson-databind/blob/master/release-notes/VERSION).
I upgraded my application to use Jackson version 2.8.1 and it fixed the issues.
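More generally, SerializerCache grows as a mapper encounters new types to serialize, so it is also worth checking that the application shares one long-lived ObjectMapper rather than constructing mappers per request and keeping references to them. A minimal sketch of that pattern (JsonMappers and the DTO in the usage comment are only illustrative names):
import com.fasterxml.jackson.databind.ObjectMapper;

public final class JsonMappers {

    // One ObjectMapper for the whole application; it is thread-safe once fully configured.
    public static final ObjectMapper MAPPER = new ObjectMapper();

    private JsonMappers() {
    }
}

// Usage elsewhere:
// String json = JsonMappers.MAPPER.writeValueAsString(payload);
// MyDto dto = JsonMappers.MAPPER.readValue(json, MyDto.class);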
I'm in the process of upgrading JavaLite ActiveJDBC from Ehcache 2.x to 3.x.
It looks like the APIs changed dramatically. I can find equivalents of everything I need in 3.x except for one thing: how do I clear all caches? For example, in 2.x I could do this:
net.sf.ehcache.CacheManager cacheManager = net.sf.ehcache.CacheManager.create();
//... code
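// despite its name, removalAll() removes every cache entirely rather than just clearing their contents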
cacheManager.removalAll();
How can I do this in EHCache 3?
Clarification: CacheManager.removalAll() is a method that not only clears the caches but removes them completely. It is deprecated in the latest version and replaced with CacheManager.removeAllCaches() to better indicate its purpose.
The caches will no longer be alive and cannot be used anymore, even if you kept a reference to one of them.
The equivalent in Ehcache 3 is to invoke CacheManager.close(), which closes all caches and then releases all resources held by the CacheManager.
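A minimal sketch of what that looks like with the Ehcache 3 builders (the alias, key/value types and sizing below are only illustrative):
import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;

CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
        .withCache("people", CacheConfigurationBuilder.newCacheConfigurationBuilder(
                Long.class, String.class, ResourcePoolsBuilder.heap(100).build()))
        .build(true); // true = initialize the manager immediately

Cache<Long, String> people = cacheManager.getCache("people", Long.class, String.class);
people.put(1L, "Alice");

// Drop a single cache by alias (any reference to it becomes unusable)
cacheManager.removeCache("people");

// Or close the whole manager: this closes every cache and releases all resources it holds
cacheManager.close();
If the goal is only to drop the cached data while keeping the caches themselves usable, each Ehcache 3 Cache also exposes a clear() method.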
With the disconnect between what I understand the stated goal to be (clearing data from the caches) and the Ehcache 2 method actually used (removing all caches), it's hard to conclude whether Ehcache 3 satisfies it.
What is the difference between the Ehcache and Guava Cache? In which scenarios would I use which type of caching?
Note: I am a developer working on Ehcache
The answer to your question depends on which features your cache needs to have inside your application.
Guava caches have all the basic caching features.
Ehcache has more advanced features; see the 2.x line or the upcoming 3.x line:
Multi tier caching (disk in 2.x, offheap + disk in 3.x)
Clustering (2.x for now, soon in 3.x)
Refresh ahead or scheduled refresh (2.x for now)
JSR-107 API
So my advice is to look at your needs, play with both and then decide.
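For a sense of the Guava side, a basic loading cache looks roughly like this (the sizes, expiry and loader below are only placeholders):
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

LoadingCache<String, byte[]> cache = CacheBuilder.newBuilder()
        .maximumSize(10_000)                    // size-based eviction
        .expireAfterWrite(10, TimeUnit.MINUTES) // time-based expiry
        .recordStats()                          // hit/miss statistics
        .build(new CacheLoader<String, byte[]>() {
            @Override
            public byte[] load(String key) {
                return loadFromBackend(key);    // placeholder for your loading logic
            }
        });

byte[] value = cache.getUnchecked("some-key"); // loads on first access, then serves from memory
The features beyond that (multi-tier storage, clustering, refresh ahead, JSR-107) are where Ehcache differentiates itself.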
Our GlassFish 4 application server builds up heap usage and hangs while testing under high load (200 parallel connections). We used different garbage collection settings while testing.
The application runs on a clustered GlassFish server with one administration domain and two instances using --availabilityenabled=true to share session data. But we can reproduce the heap building up on non-clustered GlassFish servers (running on our desktops), too.
We are using the following Spring/Hibernate Versions:
spring 4.0.1 (including webmvc, aop, context, beans, binding, jdbc, web, tx)
spring-security 3.2.0
hibernate 4.3.1 (core, entitymanager, validator)
jsp 2.2
elasticsearch 1.0.0
Situation
During normal testing the heap starts to fill up. Garbage collection tries to do its job, but the used heap still rises.
(Wish I could post images...) You would see a rise of approx. 300 MB in 20 hours.
When using JMeter to simulate higher load, heap usage rises much faster. After 2 hours of 200 simultaneous connections the heap has grown to 6 GB (from 0.5 GB), and it continues to rise until 8 GB is reached (approx. 4 hours in). Then bad things happen.
When performing the test for only 2 hours (reaching 6 GB of heap), the heap does not shrink, even when triggering a manual GC or leaving the GlassFish instance without new connections for 24 hours. It stays at 6 GB.
Heap dump
I took a Heapdump after 2 hours of testing:
class instances
java.util.ArrayList 19.904.237
java.lang.Object[] 15.851.496
org.jboss.weld.util.collections.ArraySet 9.192.068
org.jboss.weld.bean.interceptor.WeldInterceptorClassMetadata 9.188.347
java.util.HashMap$Entry 5.546.603
java.util.Collections$UnmodifiableRandomAccessList 5.331.810
java.util.Collections$SynchronizedRandomAccessList 5.328.892
java.lang.reflect.Constructor 2.669.192
com.google.common.collect.MapMakerInternalMap$StrongEntry 2.667.181
com.google.common.collect.MapMakerInternalMap$StrongValueReference 2.667.181
org.jboss.weld.injection.AbstractCallableInjectionPoint$1 2.664.747
org.jboss.weld.injection.ConstructorInjectionPoint 2.664.737
org.jboss.weld.injection.producer.BeanInjectionTarget 2.664.737
org.jboss.weld.injection.producer.DefaultInstantiator 2.664.731
Current Configuration
Java: Java(TM) SE Runtime Environment, 1.7.0_51-b13
JVM: Java HotSpot(TM) 64-Bit Server VM, 24.51-b03, mixed mode
Server: GlassFish Server Open Source Edition 4.0
Our current settings are:
-XX:+UnlockDiagnosticVMOptions
-XX:+UseParNewGC
-XX:ParallelGCThreads=4
-XX:+UseConcMarkSweepGC
-XX:MaxPermSize=1024m
-XX:NewRatio=4
-Xms4096m
-Xss256k
-Xmx8G
-javaagent:/opt/glassfish4/glassfish/lib/monitor/flashlight-agent.jar
[...]
Server Hardware
One server has 14 CPU cores (including hyperthreading), 32 GB of RAM, and runs both GlassFish and ElasticSearch. The load never reaches 5 until the heap is nearly full and excessive garbage collection kicks in; then the load rises massively.
What are we doing wrong here?
Weld, the CDI implementation in GlassFish, has a known memory leak in the version that ships with GlassFish 4. See: https://community.jboss.org/message/848709. You can replace the Weld jar file to upgrade to version 2.0.5.Final. This will partly fix the situation, in that you will no longer see "org.jboss.weld.bean.interceptor.WeldInterceptorClassMetadata" instances grow without bound in your heap dumps.
There is another memory leak caused by using tag files with parameters while CDI is enabled (which is generally the case, since GlassFish 4 eagerly enables CDI by default even if you don't use it or don't have a beans.xml). As a workaround, disable CDI or use a JSP segment include instead of a tag. See https://www.java.net/forum/topic/glassfish/glassfish/glassfish-4-memory-leak