Performance of a Vaadin front end on Apache Tomcat

The main form of my Vaadin application (running on an Ubuntu server with Tomcat) takes an enormous amount of time to load (more than 1 minute).
It's very simple and the web server is not under load (only a couple of users access it).
How can I find out why this performance problem occurs (and where the bottleneck is)?

Vaadin by itself takes just a couple of kilobytes per application. Try not to load lots of views and data into memory upfront in init; instead, do that lazily.
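For instance, here is a minimal sketch (Vaadin 7 style; MyUI and buildHeavyReportTable are hypothetical names) of deferring an expensive view until the user actually opens it:

import com.vaadin.server.VaadinRequest;
import com.vaadin.ui.Component;
import com.vaadin.ui.Table;
import com.vaadin.ui.TabSheet;
import com.vaadin.ui.UI;
import com.vaadin.ui.VerticalLayout;

public class MyUI extends UI {
    @Override
    protected void init(VaadinRequest request) {
        final TabSheet tabs = new TabSheet();
        // Cheap placeholder; the expensive table is built on first selection.
        final VerticalLayout reportTab = new VerticalLayout();
        tabs.addTab(reportTab, "Reports");
        tabs.addSelectedTabChangeListener(new TabSheet.SelectedTabChangeListener() {
            @Override
            public void selectedTabChange(TabSheet.SelectedTabChangeEvent event) {
                if (event.getTabSheet().getSelectedTab() == reportTab
                        && reportTab.getComponentCount() == 0) {
                    reportTab.addComponent(buildHeavyReportTable());
                }
            }
        });
        setContent(tabs);
    }

    private Component buildHeavyReportTable() {
        // Query the backend and populate the table only now, not in init().
        return new Table("Report");
    }
}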

I am also relatively new to Vaadin, but I can tell you this is not normal. We are running our Vaadin application on both Tomcat 6 and 7 with only a few seconds of start-up time (macOS). We deploy to Fedora for production.
Vaadin does take a lot of memory, and I would suggest that you check your Tomcat startup parameters to see how much is allocated, and maybe increase it. This is the -Xmx512m switch when you run Tomcat or any Java app. I would say that 512m is really an absolute minimum for Tomcat/Vaadin for testing, and 5 to 10x that or more would be used for a production environment.
Java memory defaults depend on your version of Java and could be insufficient. The default maximum heap size is the smaller of 1/4 of the physical memory or 1 GB; before J2SE 5.0, the default maximum heap size was 64 MB.
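For example, a hedged sketch of raising the heap for Tomcat (sizes are placeholders; catalina.sh sources $CATALINA_BASE/bin/setenv.sh on startup):

export CATALINA_OPTS="-Xms512m -Xmx2048m"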

Related

web application runs much faster in embedded tomcat than in standalone tomcat

I have a Spring Boot web application (mostly used through REST calls) that I can run using mvn exec, which starts an embedded Tomcat (8.5.11), or build as a WAR and deploy into a standalone Tomcat (Debian stock 8.5.14-1~bpo8+1). Both are configured the same way.
To our utmost surprise, the embedded Tomcat seems to be much faster under high load (a small test sequence with 200+ threads using JMeter). At 600 threads, for example:
The standalone Tomcat has very large response times while having a relatively low load of 50-70 (the server has 64 cores and can run 128 threads), and low I/O usage.
The embedded Tomcat has a load of 150-200 and faster response times, and high I/O usage (it seems that the database is the limiting factor here, but it degrades gracefully: 600 threads is twice as slow as 300 threads).
The configuration is supposedly the same for both Tomcats, so currently I am quite troubled by this. I really would not like to run embedded Tomcat in production if I can help it.
Does anyone have an idea:
what the cause for this performance disparity may be, and
how we can reliably compare the configuration for two tomcats?
Update
I ran some more tests and discovered a significant difference after looking through the garbage collector logs: with 600 JMeter threads, the embedded Tomcat spent about 5% of its time GCing, while the standalone Tomcat spent about 50% of its time GCing. I calculated these numbers with an awk script, so they may be slightly mis-parsed, but manually checking the GC logs seems to corroborate them. It still does not explain why one of them is GCing all the time and the other is not...
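As a cross-check that does not depend on parsing log formats, the JVM also exposes cumulative GC counts and times through JMX; a small Java sketch:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // One bean per collector (e.g. young and old generation).
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: count=%d, time=%dms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}

Sampling this periodically (or reading the same beans over remote JMX) gives the GC share of wall-clock time without any log parsing.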
One more update
I managed to speed up the standalone Tomcat by switching the garbage collector to G1. Now it uses about 20% of elapsed time for garbage collection and never exceeds 1s for any single GC run, and the standalone Tomcat is only 20-30% slower than the embedded one. Interestingly, using G1 in the embedded Tomcat had no real effect on its performance; GC overhead is still around 15% there.
This is by no means a solution, but it helped close the gap between the two Tomcats, so the problem is no longer so critical.
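For reference, the switch itself is just JVM flags, e.g. in setenv.sh (the GC log path is an assumption; the Print* flags are the Java 8 era names):

export CATALINA_OPTS="-XX:+UseG1GC -Xloggc:/var/log/tomcat/gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"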
Check the memory parameters for your standalone Tomcat and your spring boot application, especially the java heap size.
My guess is that your standalone Tomcat has a value for Xmx set in the startup script (catalina.sh and/or setenv.sh), say for example 1 GB, which is much lower than what your Spring Boot app is using.
If you haven't specified a value for Xmx on the command line for your Spring Boot app, it will default to 25% of your physical memory. If your server has 16 GB of RAM, that'll be 4 GB...
I'd recommend running your tests again after making sure the same JVM parameters are in use (Xms, Xmx, various GC options, ...). If unsure, inspect the running VMs with VisualVM or a similar tool.
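A reliable way to compare the two (assuming a JDK is available on each machine) is to dump the effective JVM flags and command lines of both running processes and diff the output:

jcmd <pid> VM.flags
jcmd <pid> VM.command_line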

Grails Quartz job performance degrades / slows over time

We have a Grails 2.3.11 based application that uses Quartz (version 2.2.1 / Grails Quartz plugin version 1.0.2) jobs to run certain long processes (1-5 minutes) in the background, so that a polling service allows the browser to fetch the progress. This is used primarily for import and export of data from the application. For example, when the application first starts, the export of 200,000+ rows takes approx. 2 minutes. The following day the export takes 3+ minutes. The third day the export takes more than 6 minutes.
We have narrowed the problem down to just the Quartz jobs. When the system is in the degraded state, all other web pages respond with nearly identical response times as when the system is in optimal condition. The Quartz jobs appear to slow down linearly or incrementally over a period of 2 to 3 days; we are not certain whether this is related to usage or to elapsed time.
We are familiar with the memory leak bug reported by Burt Beckwith and added the fix to our code. We were experiencing the memory leak before, but now memory management appears to be healthy, even when the job performance is 5-10x slower than normal.
The jobs use GORM for most of the queries. We've optimized some to use criteria queries with projections so they are lightweight, but we haven't been able to change all the logic over, so there are still a number of GORM objects. In the case of the exports we've changed the queries to be read-only. The logic also clears out the Hibernate session appropriately to limit the number of objects in memory.
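Since GORM sits on Hibernate, the read-only, flush/clear export pattern looks roughly like this in plain Hibernate terms (a sketch; the entity name, writer, and batch size are assumptions):

import org.hibernate.ScrollMode;
import org.hibernate.ScrollableResults;
import org.hibernate.Session;
import org.hibernate.SessionFactory;

public class ExportJob {
    private final SessionFactory sessionFactory; // injected

    public ExportJob(SessionFactory sessionFactory) {
        this.sessionFactory = sessionFactory;
    }

    public void export() {
        Session session = sessionFactory.getCurrentSession();
        ScrollableResults rows = session.createQuery("from ExportRow")
                .setReadOnly(true)                // no dirty checking for these rows
                .scroll(ScrollMode.FORWARD_ONLY); // stream instead of loading all rows
        int count = 0;
        while (rows.next()) {
            writeRow(rows.get(0));                // hypothetical writer
            if (++count % 100 == 0) {
                session.flush();
                session.clear();                  // detach objects so the session stays small
            }
        }
        rows.close();
    }

    private void writeRow(Object row) {
        // append the row to the export file
    }
}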
Here are a few additional details:
The level-2 cache is disabled
Running on Tomcat 7
Using MySQL 5.6
Using Java JDK 1.7.0_72
Linux
System I/O, swapping and CPU are all well within reason
Heap and PermGen memory usage is reasonable
Garbage collection is stable and reasonably consistent based on usage levels
The issue occurs even when there is only a single user
We have done periodic stack/thread dump analysis
We have been profiling the application with XRebel (development) and AppDynamics (production), and we also have JavaMelody installed in the application
We had this problem with Grails 1.3.8 but recently upgraded to 2.3.11, which may have exacerbated the problem.
Any suggestions would be greatly appreciated.
Thanks,
John

How to increase Grails 2.0.4 performance in development mode?

I am working with the Grails framework. It takes too much time to respond to a request from the browser, and due to this issue I have to restart the server many times. I would highly appreciate an accurate answer.
You may be suffering from performance issues due to insufficient memory (heap). In that case you will need to increase the maximum heap size. You can use the Grails JavaMelody plugin, which integrates the JavaMelody monitoring tool into a Grails application, in order to track such issues.
To increase the maximum heap size when running the application with the run-app command, add the following VM option: -Xmx1024m
If running with the run-war command add the following to your BuildConfig.groovy:
grails.tomcat.jvmArgs = ['-Xmx1g']
In both cases set the maximum heap size according to your needs.
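With Grails 2.x, one common way to pass the option to run-app is through the GRAILS_OPTS environment variable, which the grails startup script reads (sizes here are examples only):

export GRAILS_OPTS="-Xmx1024m -XX:MaxPermSize=512m"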

Memory footprint for large systems in Vaadin

I'm working in the financial sector and we are about to select Vaadin 7 for the development of a large, heavy-load system.
But I'm a bit worried about Vaadin's memory footprint for large systems, since Vaadin keeps all state in the session. It means that for every new user all application state will be stored in memory, won't it?
We cannot afford to build a monolithic system; it must be scalable and agile instead. Since we have a huge client base it must be easy to customize and ready to grow.
Could anyone please share the experience and possible workarounds how to minimize or eliminate those problems in Vaadin?
During the development of our products we faced the problem of large memory footprint using the default Vaadin architecture.
The Vaadin architecture is based on components driven by events. With components it is fairly easy to create a tightly coupled application, because components are structured into a hierarchy. It's like a pyramid: the larger the application you build, the larger the pyramid stored in the session for each user.
In order to significantly reduce the memory allocation, we created a page-based approach for the application with a comprehensive event model in the background, using old-school state management. It is based on Statechart notation in XML format.
As a result, the session keeps only the pages visited during the user workflow, as described by the Statechart configuration. When the user finishes the workflow, all the pages are released to be collected by the garbage collector.
To see the difference we have done some tests to compare memory allocated for the user working with the application.
The applications developed:
with the tightly coupled approach consume from 5 to 15 MB of heap per user
with the loosely coupled approach, up to 2 MB
We are quite happy with the results, since this lets us scale a large system on 4 GB RAM up to 1000-1500 concurrent users per server.
Almost forgot: we used the Lexaden Web Flow library. It is under the Apache license.
I think you should have a look here: https://vaadin.com/blog/-/blogs/vaadin-scalability-study-quicktickets
Plus, I have found the following info from people who run Vaadin in production.
Balázs Hódossy:
We have a back office system with more than 10,000 users. The daily user count is about 3,000, but half of them use the system for 8 hours without logging out. We use a Liferay 6.0.5 Tomcat bundle and Vaadin as a portlet. Our two servers have 48 GB RAM and we give Tomcat a 24 GB heap; the DB gets 18 GB and the system the rest. Measure the heap against the session size, concurrent users, and activity: more memory causes rarer but longer full GCs. We plan to increase the number of Tomcat workers and reduce the heap. When you measure your server, try to add a little bit more memory. If cost matters, reduce the processor cost and buy more RAM; most of the time that is worthwhile with a little tuning.
Pierre-Emmanuel Gros:
For 1,000 daily users, heavily used, a pure Vaadin application: a server with 3 GB and 2 cores, Jetty with ulimit set to 50000, PostgreSQL 9 with 50 concurrent users (a connection pool is used). On the software side, I also used Ehcache to cache DTO objects, and pure JDBC.

Web application very slow in Tomcat 7

I implemented a web application; when the Tomcat service starts it works very quickly, but after several hours, and as more users come in (up to approx. 15 users), it gets slow.
Checking usage statistics: RAM at 20%, CPU at 25%.
Server Features:
RAM 8GB
Processor i7
Windows Server 2008 64bit
Tomcat 7
MySQL 5.0
Struts2
-Xms1024m
-Xmx1024m
PermGen = 1024
MaxPermGen = 1024
I do not use a separate web server; we publish directly on Tomcat.
Even at midnight the slowness persists (with only 1 user online).
The only fix I have is to restart the Tomcat service, after which the response time is excellent again.
Is there anyone who has experienced this issue? Any clue would be appreciated.
Not enough details provided. Need more information :(
Use htop or top to find memory and CPU usage per process & per thread.
CPU
A constant 25% CPU usage on a 4-core system can indicate that a single-threaded application is running at 100% CPU on the only core it is able to use.
Which application is eating the CPU?
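On Linux you can answer that per thread (a hedged recipe; <pid> and <tid> are placeholders): list per-thread CPU with top, convert the hottest thread id to hex, and find it in a thread dump:

top -H -p <pid>
printf '%x\n' <tid>
jstack <pid> | grep -A 20 'nid=0x<tid-in-hex>'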
Memory
20% memory is ~1.6 GB. That is a bit more than I would expect for an idle server running only Tomcat + MySQL, but the -Xms1024m tells the JVM to preallocate 1 GB of heap, so that explains it.
Change the Tomcat settings to -Xms512m and -Xmx2048m. Watch Tomcat's memory usage while you throw some users at it. If it keeps growing until it reaches 2 GB... then freezes, that can indicate a memory leak.
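To watch heap and GC behaviour over time (assuming a JDK on the server), jstat sampling every 5 seconds works well:

jstat -gcutil <pid> 5000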
Disk
Use df -h to check disk usage. A full partition can cause the issues you are experiencing.
Filesystem Size Used Avail Usage% Mounted on
/cygdrive/c 149G 149G 414M 100% /
(If you just noticed from this example that my laptop is running out of space, you're doing it right :D)
Logs
Logs are awesome. Yet they have a bad habit of filling up the disk. Check the logs' disk usage. Are logs being written/erased/rotated properly when new users connect? Does erasing the logs fix the issue? (Copy them somewhere for future analysis before you erase them.)
If not, logs are STILL awesome. They have the good habit of helping you track bugs. Check the Tomcat logs. You may want to set the logging level to debug. What happens last before the website dies? Any useful error message? Are user connections still received and accepted by Tomcat?
Application
I suppose that the 25% CPU goes to Tomcat (and not MySQL). Tomcat doesn't fail by itself; the application running on it must be failing. Try removing the application from Tomcat (you can put a hello-world app there instead). Can Tomcat keep working overnight without your application? It probably can, in which case the fault is in the application.
Enable full debug logging in your application and try to track the issue. Run it straight from Eclipse in debug mode and throw users at it. Does it fail consistently in the same way?
If yes, hit "pause" in the Eclipse debugger and check what the application is doing. Look at the piece of code each thread is currently running, plus its call stack. Repeat that a few times. If there is a deadlock, an infinite loop, or similar, you can find it this way.
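If you cannot attach a debugger (e.g. in production), command-line thread dumps give the same picture; take several a few seconds apart and compare which stacks recur:

jstack -l <pid>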
You will have found the issue by now if you are lucky. If not, it's a tricky bug that might be deep inside the application and hard to trace. Determination will lead to success. Good luck =)
For performance-related issues, we need to follow some basic rules:
Set Xms and Xmx to the same size for effectiveness:
-Xms2048m
-Xmx2048m
You can also enable PermGen to be garbage collected:
-XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
If a page changes too frequently for static caching to make sense, try temporarily caching the dynamic content so that it doesn't need to be regenerated over and over again. Any technique you can use to cache work that's already been done instead of doing it again should be used; this is the key to achieving the best Tomcat performance. A sketch of this idea follows below.
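As an illustration only, a whole-response cache can be written as a servlet filter. This is a naive sketch, not production code (no eviction, ignores headers and content types; a real deployment would more likely use a library such as Ehcache's web caching filter):

import java.io.CharArrayWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

public class SimpleCacheFilter implements Filter {
    private static final long TTL_MS = 60000; // serve cached copies for one minute
    private final ConcurrentHashMap<String, CachedPage> cache =
            new ConcurrentHashMap<String, CachedPage>();

    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        // Naive cache key: URI plus query string (may literally contain "null").
        String key = request.getRequestURI() + "?" + request.getQueryString();

        CachedPage page = cache.get(key);
        if (page != null && System.currentTimeMillis() - page.created < TTL_MS) {
            response.getWriter().write(page.body); // cache hit: skip regeneration
            return;
        }

        // Cache miss: capture the generated body so it can be stored and reused.
        final CharArrayWriter buffer = new CharArrayWriter();
        HttpServletResponseWrapper wrapper = new HttpServletResponseWrapper(response) {
            @Override
            public PrintWriter getWriter() {
                return new PrintWriter(buffer);
            }
        };
        chain.doFilter(request, wrapper);

        String body = buffer.toString();
        cache.put(key, new CachedPage(body));
        response.getWriter().write(body);
    }

    private static class CachedPage {
        final String body;
        final long created = System.currentTimeMillis();
        CachedPage(String body) { this.body = body; }
    }
}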
If there are any database-related issues, follow SQL query performance tuning.
You can rotate the catalina.out log file without restarting Tomcat. In detail, there are two ways.
The first, which is more direct, is that you can rotate catalina.out by adding a simple pipe to the log rotation tool of your choice in Catalina's startup shell script (replacing the default >> "$CATALINA_BASE"/logs/catalina.out 2>&1 & redirection). This will look something like:
2>&1 | WeaponOfChoice "$CATALINA_BASE"/logs/catalina.out &
Simply replace "WeaponOfChoice" with your favorite log rotation tool.
The second way is less direct, but ultimately better. The best way to handle the rotation of catalina.out is to make sure it never needs to rotate: simply set the "swallowOutput" property to true for all Contexts in server.xml.
This will route System.err and System.out to whatever logging implementation you have configured, or to JULI if you haven't configured one.
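For example (the path and docBase are placeholders), inside the Host element of server.xml:

<Context path="/myapp" docBase="myapp" swallowOutput="true"/>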
See more at: Tomcat Catalina Out
I experienced a very slow stock Tomcat dashboard on a clean CentOS 7 install and found the following cause and solution:
Slow start-up times for Tomcat are often related to Java's SecureRandom implementation. By default, it uses /dev/random as an entropy source. This can be slow, as it uses system events to gather entropy (e.g. disk reads, key presses, etc.). As the urandom manpage states:
When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered.
Source: https://www.digitalocean.com/community/questions/tomcat-8-5-9-restart-is-really-slow-on-my-centos-7-2-droplet
Fix it by adding the following configuration option to your tomcat.conf or (preferred) a custom file in /tomcat/conf/conf.d/:
JAVA_OPTS="-Djava.security.egd=file:/dev/./urandom"
We encountered a similar problem; the cause was catalina.out, the standard destination log file for System.out and System.err. Its size kept increasing, slowing things down, and ultimately Tomcat crashed. The problem was solved by rotating catalina.out. We were using Red Hat, so we made a shell script to rotate catalina.out.
Here are some links:
Mulesoft article on catalina (also contains two methods of rotating):
Tomcat Catalina Introduction
If "catalina.out" is not the problem then try this instead:-
Mulesoft article on optimizing tomcat:
Tuning Tomcat Performance For Optimum Speed
We had a problem which looks similar to yours. Tomcat was slow to respond, but the access log showed just milliseconds per answer. The problem was streaming responses: one of our services returned real-time data that users could subscribe to, the epoll structures were becoming bloated, and network requests couldn't get through to Tomcat. What's more interesting, the CPU was mostly idle (since no one could ask the server to do anything) and the acceptor/poller threads were sitting in WAIT, not RUNNING or IN_NATIVE.
At the time we just limited the number of such requests and everything went back to normal.
