We have a large TeamCity server (10.0.3), with around 2,000 build configurations and around 50 build agents.
Frequently, we encounter performance issues related to garbage collection.
Inside the teamcity-server.log, we found this:
[2017-11-28 12:30:54,339] WARN - jetbrains.buildServer.SERVER - GC usage exceeded 50% threshold and is now 60%. GC was fired 82987 times since server start and consumed total 18454595ms. Current memory usage: 1.09 GB.
We are unable to figure out the source of the issue.
According to the documentation, a 64-bit version of Java should be used, with only 4 GB of memory. We encountered some issues, and decided to use the -Xmx6g parameter instead.
Do you know where we can enable or find more traces in order to figure out the source of our memory over-consumption?
First, you can try disabling third-party plugins and see if it helps.
Then you can try benchmarking the server according to this blog post and see whether increased memory limits improve the situation.
But the best way to investigate memory over-consumption is to capture a memory dump and inspect its contents with profiling tools. You can create a memory dump from the Administration | Server Administration | Diagnostics page of your TeamCity web UI using the Dump Memory Snapshot button.
You can investigate the dump on your own or send it to JetBrains for investigation.
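If the web UI is too unresponsive to use, a heap dump can also be captured with the standard JDK tools. A minimal sketch, assuming a JDK is available on the server; <pid> is a placeholder for the TeamCity server process id:

jps -l
jmap -dump:live,format=b,file=teamcity-heap.hprof <pid>

The first command lists running JVMs so you can find the server's pid; the second writes a binary heap dump you can open in a profiler.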
In WSO2 EI 6.6, a proxy stopped working abruptly. Upon analysis, we observed the error "GC Overhead limit exceeded" in the WSO2 Carbon log; after this error, nothing happens in the EI.
The proxy logic fetches data from a SQL Server table, builds an XML message, and sends it to an external API. The proxy runs at a 5-minute interval, and in each interval a maximum of 5 records is pushed to the API.
After restarting the WSO2 Carbon services, the proxy starts working again. Currently we restart the services every 3 days to avoid this issue.
We need to know how to identify the underlying issue and resolve it.
This means the JVM has run out of allocated memory. There can be many reasons for this. For example, if you haven't allocated enough memory to the JVM, you can easily run out of it. If that's not the case, you need to analyze a memory dump and see what is occupying the memory and causing it to fill up.
Generally, when you see the mentioned error, the JVM automatically creates a heap dump (heap-dump.hprof) in the <EI_HOME>/repository/logs directory. You can try analyzing that dump to find the root cause. If the server doesn't generate a dump, manually take one when memory usage is higher than expected and analyze it.
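To make sure a dump is produced on the next occurrence, the standard HotSpot flags can be added to the server's JVM options. A sketch; the dump path, and exactly where the options live in your EI startup script, are assumptions:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/path/to/dumps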
We are migrating from WebSphere Application Server to WebSphere Liberty.
When our application is deployed on WAS, the CPU utilisation is 8%. When the same application is deployed on WLP, the CPU utilisation is more than 50% and fluctuates.
Can anyone advise how to debug this issue and which parameters to check to minimise the CPU utilisation?
My advice would be to use your favorite monitoring / profiling tool:
Check that your application isn't spending a lot of time garbage collecting. That could be a sign of the heap being too small, or another GC tuning problem.
Check which non-GC threads are using a lot of time. Does that tell you something unexpected?
Profile the code to look for performance hotspots.
Without knowing the cause, we can't suggest JVM parameter changes.
I hope you have verified that it is the Liberty process hogging the CPU.
Can you turn on verbose GC in the Liberty profile and check the GC logs?
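For reference, Liberty picks up JVM options from a jvm.options file, one option per line. A minimal sketch, assuming an IBM/OpenJ9 JVM; the log name and rotation values are illustrative:

# ${server.config.dir}/jvm.options
-verbose:gc
-Xverbosegclog:logs/verbosegc.%seq.log,5,10000

The second option writes rotating verbose GC logs (here, 5 files of 10000 GC cycles each) under the server's logs directory.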
In our application there are more than 2,000 pages deployed on the production server. Sometimes, when users browse certain URLs, the CPU spikes to more than 70%. I cannot find out when this occurs or which URL causes it. Can anyone recommend a good open-source tool to monitor the w3wp.exe process and log its CPU utilization, along with the request URLs, when the CPU spikes above 50%?
procdump + windbg
There is a Sysinternals tool called procdump which can automatically create a memory dump of your process for analysis when CPU exceeds a threshold.
From the command line usage:
-c CPU threshold at which to create a dump of the process.
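For example, the following (with illustrative values) writes up to 3 full dumps of w3wp whenever it stays above 50% CPU for 10 consecutive seconds:

procdump -ma -c 50 -s 10 -n 3 w3wp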
Once you have a process dump you will need to load it into WinDbg in order to analyze what's taking up all the CPU cycles. Covering WinDbg fully is a big topic, but briefly, here's what you need to do:
load the SOS dll (managed debug extension)
call the !runaway command to get a list of long-running threads
dive into a long-running thread by selecting it and calling the !clrstack command
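Put together, the core command sequence looks something like this (the thread number is illustrative; on .NET 2.0 you would load SOS from mscorwks instead of clr):

.loadby sos clr
!runaway
~5s
!clrstack

!runaway prints user-mode CPU time per thread; pick one of the top threads, switch to it with ~Ns, and !clrstack shows its managed call stack.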
There are many blogs on using WinDbg; here is one example. A great resource on analyzing these types of issues is Tess Ferrandez's blog.
perfmon + procdump + windbg
Perfmon can help you see if the issue is related to high rates of memory allocation that are causing garbage collection. You can look at CPU for w3wp as well as allocation rates for the process and the number of Gen 2 collections occurring. A Gen 2 collection also collects Gen 1 and Gen 0, so it can be an expensive operation. Counters to look at:
# Gen 2 Collections
% Time in GC
Allocated Bytes/sec
If you see some very high allocation rates, you will still need a memory dump (procdump) and WinDbg to analyse what the root cause is.
Again - Tess Ferrandez has a blog post on this flavor of high CPU. In that post the issue is allocating large objects onto the heap.
perfmon + appcmd
I haven't tried this myself but in theory it should work, and it is simpler than the other options - though it will not produce the same level of detail. You can configure perfmon alerts on CPU for w3wp.exe. The alerts can be configured to run a task. You can create a batch file which runs the appcmd IIS tool and tells it to dump all the running requests:
appcmd list requests > c:\temp\high-cpu-requests.txt
This way you will get a list of long-running requests whenever the CPU is high, and hopefully be able to work out the offending page from there.
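A sketch of such a batch file; the output path is illustrative, and appcmd's full path is spelled out so the scheduled task doesn't depend on PATH:

@echo off
rem Dump currently executing IIS requests when the perfmon alert fires
%windir%\system32\inetsrv\appcmd list requests > c:\temp\high-cpu-requests.txt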
IIS Advanced Logging may help you here.
Whilst it will not give you CPU utilisation per request, it can log CPU utilisation in general. What you could do is try to match these spikes to the requests that come just before them.
I implemented a web application; when the Tomcat service starts it works very quickly, but after a few hours, as more users come in, it gets slow (at approximately 15 users).
Checking usage statistics: RAM (20%), CPU (25%).
Server Features:
RAM 8GB
Processor i7
Windows Server 2008 64bit
Tomcat 7
MySql 5.0
Struts2
-Xms1024m
-Xmx1024m
PermGen = 1024
MaxPermGen = 1024
We do not use a separate web server; we publish directly on Tomcat.
Even at midnight the slowness persists (with only 1 user online).
The only solution I have is to restart the Tomcat service, after which response time is excellent again.
Is there anyone who has experienced this issue? Any clue would be appreciated.
Not enough details provided. Need more information :(
Use htop or top to find memory and CPU usage per process & per thread.
CPU
A constant 25% CPU usage on a 4-core system can indicate that a single-core application/thread is running at 100% CPU on the only core it is able to use.
Which application is eating the CPU?
Memory
20% memory is ~1.6 GB. That is a bit more than I would expect for an idle server running only Tomcat + MySQL. The -Xms1024m setting tells the JVM to preallocate 1 GB of memory, so that explains it.
Change the Tomcat settings to -Xms512m and -Xmx2048m. Watch Tomcat's memory usage while you throw some users at it. If it keeps growing until it reaches 2 GB... then freezes, that can indicate a memory leak.
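On Windows, a common place for these settings is a setenv.bat file, which Tomcat's startup scripts pick up automatically. A sketch; create the file if it doesn't exist:

rem %CATALINA_BASE%\bin\setenv.bat
set CATALINA_OPTS=-Xms512m -Xmx2048m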
Disk
Use df -h to check disk usage. A full partition can cause the issues you are experiencing.
Filesystem Size Used Avail Usage% Mounted on
/cygdrive/c 149G 149G 414M 100% /
(If you just noticed from this example that my laptop is running out of space, you're doing it right :D)
Logs
Logs are awesome. Yet they have a bad habit of filling up the disk. Check the logs' disk usage. Are logs being written/erased/rotated properly when new users connect? Does erasing the logs fix the issue? (Copy them somewhere for future analysis before you erase them.)
If not, logs are STILL awesome. They have the good habit of helping you track bugs. Check the Tomcat logs. You may want to set the logging level to debug. What happens last before the website dies? Any useful error messages? Are user connections still received and accepted by Tomcat?
Application
I suppose that the 25% CPU goes to Tomcat (and not MySQL). Tomcat doesn't fail by itself; the application running on it must be failing. Try removing the application from Tomcat (you can put a hello-world application there instead). Can Tomcat keep working overnight without your application? It probably can, in which case the fault lies with the application.
Enable full debug logging in your application and try to track the issue. Run it straight from Eclipse in debug mode and throw users at it. Does it fail consistently in the same way?
If yes, hit "pause" in the Eclipse debugger and check what the application is doing. Look at the piece of code each thread is currently running, plus its call stack. Repeat that a few times. If there is a deadlock, an infinite loop, or something similar, you can find it this way.
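If you cannot attach a debugger (e.g. in production), repeated thread dumps give you the same picture. A sketch using the standard JDK tools; <pid> is a placeholder for the Tomcat process id:

jps -l
jstack <pid> > threads-1.txt

Take a few dumps some seconds apart; a thread showing the same deep stack in every dump is a good suspect.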
You will have found the issue by now if you are lucky. If not, you're unfortunate and it's a subtle bug that might be deep inside the application; that can be hard to trace. Determination will lead to success. Good luck =)
For performance-related issues, we need to follow these guidelines:
You can set -Xms and -Xmx to the same size for effectiveness:
-Xms2048m
-Xmx2048m
You can also enable the PermGen to be garbage collected.
-XX:+UseConcMarkSweepGC -XX:+CMSPermGenSweepingEnabled -XX:+CMSClassUnloadingEnabled
If the page changes too frequently to make this option logical, try temporarily caching the dynamic content so that it doesn't need to be regenerated over and over again. Any technique you can use to cache work that's already been done instead of doing it again should be used - this is the key to achieving the best Tomcat performance.
If there is any database-related issue, you can follow SQL query performance tuning.
Rotate the catalina.out log file without restarting Tomcat.
In detail, there are two ways.
The first, which is more direct, is to rotate catalina.out by adding a simple pipe to the log rotation tool of your choice in Catalina's startup shell script. This will look something like:
"$CATALINA_BASE"/logs/catalina.out WeaponOfChoice 2>&1 &
Simply replace "WeaponOfChoice" with your favorite log rotation tool. This line replaces the default >> "$CATALINA_BASE"/logs/catalina.out 2>&1 & redirection in catalina.sh.
The second way is less direct, but ultimately better. The best way to handle the rotation of catalina.out is to make sure it never needs to rotate: simply set the "swallowOutput" property to true for all Contexts in server.xml.
This will route System.err and System.out to whatever logging implementation you have configured, or to JULI if you haven't configured one.
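A sketch of what that looks like in server.xml; the context path and docBase are illustrative:

<Context path="/myapp" docBase="myapp" swallowOutput="true"/>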
See more at: Tomcat Catalina Out
I experienced a very slow stock Tomcat dashboard on a clean Centos7 install and found the following cause and solution:
Slow start up times for Tomcat are often related to Java's
SecureRandom implementation. By default, it uses /dev/random as an
entropy source. This can be slow as it uses system events to gather
entropy (e.g. disk reads, key presses, etc). As the urandom manpage
states:
When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered.
Source: https://www.digitalocean.com/community/questions/tomcat-8-5-9-restart-is-really-slow-on-my-centos-7-2-droplet
Fix it by adding the following configuration option to your tomcat.conf or (preferred) a custom file into /tomcat/conf/conf.d/:
JAVA_OPTS="-Djava.security.egd=file:/dev/./urandom"
We encountered a similar problem; the cause was catalina.out, the standard destination log file for System.out and System.err. Its size kept increasing, slowing things down, and ultimately Tomcat crashed. The problem was solved by rotating catalina.out. We were using Red Hat, so we made a shell script to rotate catalina.out.
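On Red Hat-style systems, logrotate can do this without a custom script. A sketch; the Tomcat path is illustrative, and copytruncate matters because Tomcat keeps the file handle open:

# /etc/logrotate.d/tomcat
/opt/tomcat/logs/catalina.out {
    daily
    rotate 7
    compress
    missingok
    copytruncate
}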
Here are some links:
Mulesoft article on catalina (also contains two methods of rotating):
Tomcat Catalina Introduction
If "catalina.out" is not the problem then try this instead:-
Mulesoft article on optimizing tomcat:
Tuning Tomcat Performance For Optimum Speed
We had a problem which looks similar to yours. Tomcat was slow to respond, but the access log showed answers taking just milliseconds. The problem was streaming responses: one of our services returned real-time data that users could subscribe to. The EPOLL set was becoming bloated, and network requests couldn't get through to Tomcat. What's more interesting, the CPU was mostly idle (since no one could ask the server to do anything) and the acceptor/poller threads were sitting in WAIT, not RUNNING or IN_NATIVE.
At the time we just limited the number of such requests and everything went back to normal.
Disclaimer: I am more of a programmer and have little knowledge of JBOSS.
When we deployed the system, it worked properly in the test environment. However in production, where there are multiple users and a lot of data being updated/saved, some issues occurred: double updates are being created, and some functions do not work unless the server is restarted. I'm thinking this may be corrected by modifying whatever session or memory parameter JBoss has, so we could avoid restarting the server every time an error occurs.
Question: What parameters or JBoss configuration should we edit to accommodate multiple users and a large number of transactions?
You need to investigate the reason for your application not behaving the way you want it to. Some points you can consider:
Log a request with JBoss support.
Try increasing the Java heap size (this can be done by editing the entry in standalone.conf; see the sketch after this list).
Try enabling GC logs to see whether your garbage is being collected properly.
Check for memory leaks in the code.
Try analyzing thread dumps to check whether some of your threads are being blocked.
Check whether your server's CPU and memory utilization is high.
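A sketch of both the heap and GC-log settings in bin/standalone.conf; the sizes and log path are illustrative, and the GC-logging flags shown apply to Java 8-era JVMs:

JAVA_OPTS="$JAVA_OPTS -Xms1024m -Xmx2048m"
JAVA_OPTS="$JAVA_OPTS -verbose:gc -Xloggc:/var/log/jboss/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps"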
I'm not sure what version of JBoss you are using, but if you want to increase the JVM memory you can modify the following line in the run.bat file in your bin folder:
set JAVA_OPTS=%JAVA_OPTS% -Xms128m -Xmx512m
-Xms is the minimum heap size and -Xmx is the maximum. If you think the problem is due to a lack of resources, you may want to increase it to something like this:
set JAVA_OPTS=%JAVA_OPTS% -Xms512m -Xmx1024m
Does everything work normally with only 1 or very few users using the system? To me it seems more like a coding issue.
Check the JVM heap size and set it according to the RAM available on the machine.
You must do performance testing before you deploy to production. You can check the behaviour of your code during performance testing using a monitoring tool or a JMX tool.
Tune parameters like the heap size and the GC algorithm; you might want to define a fixed size for the young generation, as sketched below.
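For example, on a HotSpot JVM a fixed young-generation size can be set like this (the value is illustrative and must be tuned against your total heap):

-Xmn512m

-Xmn is shorthand for setting -XX:NewSize and -XX:MaxNewSize to the same value.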
Tune the thread pools as well:
https://developer.jboss.org/wiki/ThreadPoolConfiguration?_sscc=t