Grails Quartz job performance degrades / slows over time

We have a Grails 2.3.11 based application that uses Quartz (version 2.2.1 / Grails Quartz plugin version 1.0.2) jobs to run certain long-running processes (1-5 minutes) in the background, so that a polling service lets the browser fetch the progress. This is used primarily for importing and exporting data from the application. For example, when the application first starts, an export of 200,000+ rows takes approximately 2 minutes. The following day the same export takes 3+ minutes. The third day it takes more than 6 minutes.
We have narrowed the problem down to just the Quartz jobs. When the system is in the degraded state, all other web pages respond with nearly the same response times as when the system is in optimal condition. The Quartz jobs appear to slow down linearly or incrementally over a period of 2 to 3 days; we are uncertain whether this is related to usage or to elapsed time.
We are familiar with the memory leak bug reported by Burt Beckwith and added the fix to our code. We were experiencing the memory leak before, but memory management now appears to be healthy, even when the job performance is 5-10x slower than normal.
The jobs use GORM for most of the queries. We've optimized some to use criteria queries with projections so they are lightweight, but we haven't been able to convert all the logic, so there are still a number of GORM objects. In the case of the exports, we've changed the queries to be read-only. The logic also clears out the Hibernate session appropriately to limit the number of objects in memory.
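To illustrate the pattern (a trimmed sketch, not our real code: the Order domain class, its properties, and the ids batch are stand-ins):

```groovy
// Projections return plain scalar rows instead of tracked domain
// instances, so Hibernate keeps no dirty-checking state for them.
def rows = Order.createCriteria().list {
    projections {
        property('id')
        property('customerName')
        property('total')
    }
    eq('status', 'COMPLETE')
}
rows.each { id, name, total ->
    // ...write the row to the export file...
}

// Where full domain objects are unavoidable, we load them read-only
// and clear the Hibernate session periodically so the first-level
// cache does not grow without bound during a long-running job.
Order.withSession { session ->
    ids.eachWithIndex { id, i ->        // `ids` gathered beforehand
        def order = Order.read(id)      // read() loads in read-only mode
        // ...export the order...
        if (i % 100 == 99) {
            session.flush()
            session.clear()
        }
    }
}
```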
Here are a few additional details:
The level-2 cache is disabled
Running on Tomcat 7
Using MySQL 5.6
Using Java JDK 1.7.0_72
Linux
System I/O, swapping and CPU are all well within reason
Heap and PermGen memory usage is reasonable
Garbage collection is stable and reasonably consistent based on usage levels
The issue occurs even when there is only a single user
We have done periodic stack/thread dump analysis
We have been profiling the application with XRebel (development) and AppDynamics (production), and we also have Java Melody installed in the application
We had this problem with Grails 1.3.8 but recently upgraded to 2.3.11, which may have exacerbated the problem.
Any suggestions would be greatly appreciated.
Thanks,
John

Related

Why does the Heap Memory Usage of a Spring Boot application keep increasing?

I created and ran a simple Spring Boot application to accept some requests. I used jconsole to check the Heap Memory Usage and saw a periodic increase followed by GC; I don't know the reason for the increases. Are objects continually being created (I thought the instances were loaded into the container when the application starts)?
Spring Boot has background processes, such as scheduled jobs, which may consume resources even when there are no requests to your app.
Those spikes are expected and normal for any Java app built on any moderately complex framework. The GC algorithm depends on your JVM version but can be overridden. The graph shows a normal situation: from time to time memory is consumed by some activity, and after a while the GC wakes up and does the cleaning.
If you want to check exactly which objects caused a memory spike, you can use Java Flight Recorder or regular heap dump analysis with the Eclipse Memory Analyzer. For this case, Java Flight Recorder would be more convenient and suitable.
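If a recording is not available, a dump can even be triggered programmatically at the moment of the spike and opened in Eclipse Memory Analyzer afterwards. A minimal sketch, assuming a HotSpot JVM; the dump path is arbitrary:

```groovy
import com.sun.management.HotSpotDiagnosticMXBean
import java.lang.management.ManagementFactory

// Obtain the HotSpot diagnostic MXBean from the platform MBean server.
def diagnostic = ManagementFactory.newPlatformMXBeanProxy(
        ManagementFactory.platformMBeanServer,
        'com.sun.management:type=HotSpotDiagnostic',
        HotSpotDiagnosticMXBean)

// Dump live objects only (true forces a GC first), which keeps the
// .hprof file small enough for MAT to open comfortably.
diagnostic.dumpHeap('/tmp/spike.hprof', true)
```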

Elasticsearch load is not distributed evenly

I am facing a strange issue with Elasticsearch. I have 8 nodes with the same configuration (16 GB RAM and 8-core CPU).
One node "es53node6" has always high load as shown in the screenshot below. Also 5-6 nodes were getting stopped yesterday automatically after every 3-4 hours.
What could be the reason?
ES version : 5.3
There can be a fair share of reasons. Maybe all the data is stored on that node (which should not happen by default), or maybe you are sending all the requests to this single node.
Also, there is no automatic stopping of Elasticsearch built in. You can configure Elasticsearch to stop the JVM process when an out-of-memory exception occurs, but this is not enabled by default, as it relies on a more recent JVM.
You can use the hot threads API to check where the CPU time is spent in Elasticsearch.
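For example (a quick sketch, assuming one of the nodes is reachable on localhost:9200):

```groovy
// Fetch the hot threads report; it lists the busiest threads on each
// node with stack fragments, which usually points at the operation
// (search, merges, bulk indexing, ...) that is burning the CPU.
println new URL('http://localhost:9200/_nodes/hot_threads').text
```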

Performance degradation with JSF, Spring, Hibernate application

I'm working on a web application built with the following technologies: JSF, Spring, Hibernate, MySQL, MongoDB, Elasticsearch, Jetty server, and Tomcat. I created a stress test that simulates the use of the application by one user. In the test scenario, I run the stress test for 4 parallel users and at the same time log in as a 5th user and measure the system response time for certain activities. If we denote the response time when the stress test is not running as T, after 5 minutes of running the stress test the response time is ~2xT, after 30 minutes it's ~5xT, and after 45 minutes it's ~10-15xT.
After 1 hour I stopped the tests and waited 25 minutes, until heap memory was below 200 MB and CPU use was near 0%. Performance improved slightly, but not by much (~7xT). One hour after the tests stopped, nothing had changed.
My PC has 8 GB of RAM, and I run the application with -Xms512m -Xmx4092m -XX:PermSize=512m -XX:MaxPermSize=512m. Based on what I found with JVisualVM and JProfiler, I don't think this is a memory leak issue, but I'm not sure what else could be the problem.
I hope someone can point out a possible reason for such a huge performance degradation, or where I should look for it.
Thanks in advance.
After analysing all aspects of the application, I identified Mojarra as the main cause of the performance degradation. Ajax POST response times grew longer after a stress test while backend initialization time remained the same. After moving from Mojarra 2.1.9 to 2.2.0, the problems disappeared. I believe this is related to the complexity of the JSF pages and how Mojarra deals with a large number of components on one page.

Memory footprint for large systems in Vaadin

I'm working in the financial sector and we are about to select Vaadin 7 for the development of a large, heavy-load system.
But I'm a bit worried about Vaadin's memory footprint for large systems, since Vaadin keeps all state in the session. That means that for every new user the entire application state will be stored in memory, won't it?
We cannot afford to build a monolithic system; the system must be scalable and agile instead. Since we have a huge client base, it must be easy to customize and ready to grow.
Could anyone please share their experience and possible workarounds for minimizing or eliminating these problems in Vaadin?
During the development of our products, we faced the problem of a large memory footprint with the default Vaadin architecture.
The Vaadin architecture is based on components driven by events. With components, it is fairly easy to end up with a tightly coupled application, because components are structured into a hierarchy. It's like a pyramid: the larger the application you build, the larger the pyramid stored in the session for each user.
To significantly reduce memory allocation, we created a page-based approach for the application, with a comprehensive event model in the background using old-school state management. It is based on Statechart notation in XML format.
As a result, the session keeps only the pages visited during the user workflow, as described by the Statechart configuration. When the user finishes the workflow, all the pages are released to be collected by the garbage collector.
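The mechanics are roughly as in the sketch below; this is only an illustration of the idea, with made-up types, not our actual implementation:

```groovy
import com.vaadin.ui.Component

// Illustration only: a session-scoped holder keeps just the pages
// visited in the current workflow and drops them all when it ends.
class WorkflowPages implements Serializable {
    private final Map<String, Component> visitedPages = [:]

    Component enterPage(String pageId, Closure<Component> pageFactory) {
        // Build a page lazily on first visit; reuse it afterwards.
        if (!visitedPages.containsKey(pageId)) {
            visitedPages[pageId] = pageFactory.call()
        }
        visitedPages[pageId]
    }

    void finishWorkflow() {
        // Release every page so the GC can reclaim the component trees.
        visitedPages.clear()
    }
}
```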
To see the difference, we ran some tests comparing the memory allocated for a user working with the application.
The applications we developed:
with the tightly coupled approach consume from 5 to 15 MB of heap per user
with the loosely coupled approach consume up to 2 MB
We are quite happy with the results, since they let us scale a large system with 4 GB of RAM up to 1000-1500 concurrent users per server.
Almost forgot: we used the Lexaden Web Flow library, which is available under the Apache license.
I think you should have a look here: https://vaadin.com/blog/-/blogs/vaadin-scalability-study-quicktickets
Plus, I have found the following info from people who run Vaadin in production.
Balázs Hódossy:
We have a back office system with more than 10,000 users. The daily user count is about 3,000, but half of them use the system for 8 hours without logging out. We use a Liferay 6.0.5 Tomcat bundle with Vaadin as a portlet. Our two servers have 48 GB of RAM and we give Tomcat a 24 GB heap; the DB gets 18 GB and the system the rest. Measure the heap against the session size, concurrent users, and activity. More memory causes less frequent but longer full GCs. We plan to increase the number of Tomcat workers and reduce the heap. When you size your server, try to add a little extra memory; if cost is important, cut processor cost and buy more RAM instead. Most of the time that is worthwhile with a little tuning.
Pierre-Emmanuel Gros:
For 1,000 daily users with heavy use, a pure Vaadin application: a server with 3 GB and 2 cores, Jetty with ulimit set to 50000, PostgreSQL 9 with 50 concurrent users (a connection pool is used). On the software side, I also used Ehcache to cache DTO objects, and pure JDBC.

Performance of a Vaadin front end on Apache Tomcat

The main form of my Vaadin application (running on an Ubuntu server with Tomcat) takes an enormous amount of time to load (more than 1 minute).
It's a very simple form, and the web server is not under load (only a couple of users access it).
How can I find out why this performance problem occurs (and where the bottleneck is)?
Vaadin by itself takes just a couple of kilobytes per application. Try not to load lots of views and data into memory upfront in init; instead, do that lazily.
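With Vaadin 7's Navigator you get this lazily almost for free when views are registered by class. A minimal sketch (the view names and the ReportView contents are made up):

```groovy
import com.vaadin.navigator.Navigator
import com.vaadin.navigator.View
import com.vaadin.navigator.ViewChangeListener.ViewChangeEvent
import com.vaadin.server.VaadinRequest
import com.vaadin.ui.Label
import com.vaadin.ui.UI
import com.vaadin.ui.VerticalLayout

class MyUI extends UI {
    @Override
    protected void init(VaadinRequest request) {
        // Registering views by class means the Navigator instantiates
        // each view only when the user first navigates to it, instead
        // of building the whole component tree up front in init().
        def navigator = new Navigator(this, this)
        navigator.addView('', StartView)
        navigator.addView('report', ReportView)   // hypothetical heavy view
    }
}

class StartView extends VerticalLayout implements View {
    StartView() { addComponent(new Label('Start page')) }
    void enter(ViewChangeEvent event) { }
}

class ReportView extends VerticalLayout implements View {
    ReportView() { /* build the expensive report UI here, on demand */ }
    void enter(ViewChangeEvent event) { }
}
```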
I am also relatively new to Vaadin, but I can tell you this is not normal. We run our Vaadin application on both Tomcat 6 and 7 with only a few seconds of startup time (Mac OS). We deploy to Fedora for production.
Vaadin does take a lot of memory, so I would suggest checking your Tomcat startup parameters to see how much is allocated, and perhaps increasing it. This is the -Xmx512m switch when you run Tomcat or any Java app. I would say 512m is really an absolute minimum for Tomcat/Vaadin for testing; 5 to 10x that or more would be typical for a production environment.
Java memory defaults depend on your version of Java and could be insufficient: the default maximum heap size is the smaller of 1/4 of the physical memory or 1 GB, and before J2SE 5.0 it was 64 MB.
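To verify what the JVM was actually granted, you can print the heap limits from anywhere in the application; a small sketch:

```groovy
import java.lang.management.ManagementFactory

// Print the heap actually granted to the JVM, to confirm that the
// -Xmx value in Tomcat's startup parameters took effect.
def heap = ManagementFactory.memoryMXBean.heapMemoryUsage
def mb = { long bytes -> bytes.intdiv(1024 * 1024) }
println "heap max: ${mb(heap.max)} MB " +
        "(committed: ${mb(heap.committed)} MB, used: ${mb(heap.used)} MB)"
```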
