Unusably slow Jenkins: Normal CPU but high disk activity on Jenkins web page access

On Jenkins v1.528, after a few hundred builds, web page access and build job completion become many times slower than normal, to the point of being unusable. A restart fixes the issue. The host machine, a Mac from around 2012, has gigabytes of memory available, and CPU usage is normal.
But I notice a persistent spike in disk activity when accessing a Jenkins web page (such as a page for a Jenkins job/task/build). My suspicion is that Jenkins may have run out of heap space. Yet I can't think of anything that has changed on this Jenkins instance or on the host machine that would cause the issue (it wasn't a problem before).
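If it helps, this is roughly how I could check whether the Jenkins JVM is under heap/GC pressure (just a sketch, assuming the JDK's jstat tool is on the PATH; the process id below is a placeholder):

    import subprocess
    import time

    JENKINS_PID = "12345"  # placeholder: the Jenkins JVM's process id

    # Sample GC utilization once per second. If old-gen occupancy (O) stays near
    # 100% and the full-GC count (FGC) keeps climbing, the heap is exhausted.
    for _ in range(60):
        out = subprocess.run(["jstat", "-gcutil", JENKINS_PID],
                             capture_output=True, text=True, check=True)
        print(out.stdout.strip().splitlines()[-1])
        time.sleep(1)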
Also, I've seen some fixes for major slowness recently, but they went into prior versions.

We have a similar issue after a restart. This is what I've figured out as of today.
Your issue seems to be related to these issues:
Jenkins is slow responding to main page request
Jenkins gets very slow after saving some Jobs
Jenkins' homepage loading very slowly
Jenkins is very slow on first visit

Related

Issues with GitLab Runner on 32-bit Windows

I have a problem with GitLab Runner on 32-bit Windows. The runners are at version 14.4.0 and our GitLab instance is at version 14.4.1-ee. The runners are tied to specific machines running 32-bit Windows 10 Pro (10.0.19043), use shell executors (PowerShell), and run with full administrative privileges (i.e., as the local system user). This is outside my control.
Sporadically, and for no discernible reason, the runners stop sending log traffic to our GitLab instance. They should be uploading several MB worth of logs. I don't see failed attempts to upload logs in debug mode. I don't see any of the network traffic I expect in Wireshark. This might correlate with issues loading a custom driver, but I can't say for sure.
The workaround is even more perplexing. The following protocol fixes the issue: remove all the runners using the GitLab CI interface; uninstall the malfunctioning runner; download a new runner binary, register and install it. If I repeat the same steps, except without downloading a new binary, the issue persists. The files are identical when I run a binary diff on them.
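For what it's worth, this is roughly how I compare the binaries (a sketch; the file paths are placeholders, not our actual install locations):

    import hashlib

    # Placeholder paths: the freshly downloaded binary and the one already installed.
    NEW_BINARY = r"C:\temp\gitlab-runner-windows-386.exe"
    OLD_BINARY = r"C:\GitLab-Runner\gitlab-runner.exe"

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    new_hash, old_hash = sha256(NEW_BINARY), sha256(OLD_BINARY)
    print("new:", new_hash)
    print("old:", old_hash)
    print("identical" if new_hash == old_hash else "different")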
I haven't been able to extract any relevant information from the system event logs or network traffic. The issue only affects our runners on 32-bit Windows. It doesn't affect 64-bit Windows or runners on Linux, regardless of architecture. It seems to happen sporadically, in the sense that I can't correlate it with anything interesting happening on the affected machines.
Clearly, something about our 32-bit Windows environments is different and is causing the runners to malfunction. I just don't know what it is. I would appreciate any direction in figuring out the source of this problem. The fact that downloading new binaries makes the difference has me worried, but I don't have any reason to suspect our machines have been compromised.
This problem was resolved by running tests remotely over SSH. It's almost certainly a bug with the 32-bit Windows distribution of gitlab-runner.

Teamcity slow; 100% disk usage

On one of my servers, TeamCity is running all build steps extremely slowly.
Using perfmon, I traced it to 100% disk usage while doing simple tasks like running a NAnt script with a copy.
How do I fix this?
It turned out a type of Windows logging was enabled on the server. TeamCity does a lot of things, so the logging of what TeamCity was doing was disk-heavy. The type of logging was identified by looking at which process was hogging disk usage in Task Manager. I can't remember exactly what it was, but switching off this Windows logging solved the problem.
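As an alternative to Task Manager, something like this can show which processes are doing the most disk I/O over a short window (a sketch using the third-party psutil package, not anything TeamCity-specific):

    import time
    import psutil  # third-party: pip install psutil

    def io_snapshot():
        snap = {}
        for p in psutil.process_iter(["name"]):
            try:
                io = p.io_counters()
                snap[p.pid] = (p.info["name"], io.read_bytes + io.write_bytes)
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass
        return snap

    before = io_snapshot()
    time.sleep(10)  # sampling window in seconds
    after = io_snapshot()

    # Rank processes by bytes read + written during the window.
    deltas = [(after[pid][1] - before[pid][1], after[pid][0], pid)
              for pid in after if pid in before]
    for delta, name, pid in sorted(deltas, reverse=True)[:10]:
        print(f"{delta / 1e6:10.1f} MB  {name} (pid {pid})")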

Web access is extremely slow

I have TFS 2015 installed on one of the company's servers. I try to access TFS through web access and it is extremely slow; it takes more than 5 minutes for a page to load, and sometimes even longer. If I restart the server, TFS becomes a little faster (a page needs only a minute or so to load), but it soon becomes slow again.
The server itself is fine. The CPU and memory are not even close to fully utilized (roughly 20%-40%).
Other applications that are installed on the server are working fine, so it's just TFS.
Any suggestions?
Log in to the application-tier machine and try the web access locally to see whether you get the same behavior.
Check the network connection between the application-tier machine and the data-tier machine if you set up TFS in a multi-server configuration. You may also try temporarily turning off the firewall and anti-virus software on the machines.
Clean the cache folder on the application tier; it is usually located at C:\TfsData\ApplicationTier\_fileCache (see the sketch below).
Check the requirements and compatibility documentation to see whether your TFS is set up in an appropriate environment.
If the items above do not help, you may need to consider moving your TFS deployment to different hardware.
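For the cache-cleaning step, something like this would do it (a sketch; stop the TFS application pool first, and note the path below is the default, which may differ on your server):

    import os
    import shutil

    # Default TFS application-tier file cache location; adjust if yours differs.
    CACHE_DIR = r"C:\TfsData\ApplicationTier\_fileCache"

    # Delete the contents but keep the folder itself so TFS can repopulate it.
    for entry in os.listdir(CACHE_DIR):
        path = os.path.join(CACHE_DIR, entry)
        if os.path.isdir(path):
            shutil.rmtree(path)
        else:
            os.remove(path)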

WaIISHost flatlining web-role

First off, I'm very new to Azure.
I've successfully deployed an ASP.NET MVC 3 web application to Azure, using a web role. The app uses Entity Framework and SQL Azure.
Recently I've made some changes (including adding app settings) and tried to upgrade the application. The upgrade took quite a long time before aborting. I've always deployed through the management portal Silverlight application at http://windows.azure.com.
After trying again to no avail, I set up remote desktop and deployed again. The remote desktop session was extremely slow, and it turned out that WaIISHost was pushing the CPU to 100%.
IIS Manager shows that the application is deployed and 'started', but I cannot browse to the site in the VM, and the deployment constantly seems to be trying to update without success, eventually aborting and retrying (as I write this, it's currently Busy and 'Waiting for role to start...').
Does anyone have any ideas as to what the problem could be?
I believe all the right dependencies are set to Copy Local, which is one possible source of problems. It is extremely hard to debug this issue, as the remote desktop session hangs so often due to the 100% CPU utilization and the web role recycling/restarting/re-updating from time to time.
Thanks,
James
P.S. Hope some of that made at least some sense...
I suspect there is something in your WebRole.OnStart and/or Run that is causing WaIISHost to use 100% CPU. Can you remove all the code from WebRole.OnStart and/or Run and try again?
It might also be helpful to turn on IntelliTrace when deploying, so that you can download the trace and find any exceptions that occurred when your application started, even before the website started.

VS 2008/AJAX Project Fails Under Stress

I've been working on a VB.NET/VS2008/AJAX/SQL Server project for over 2 years now without any real issues coming up. However, we're in the last week of the project doing some heavy stress testing, and the project starts failing once I get to about 150 simultaneous users. I've even gone so far as to create a stripped-down version of the site which only logs in a user, pulls up their profile and then logs off. That still fails under stress. When I say "fails" I mean the CPUs spike and the App Pool eventually crashes. This is running on a Windows Server 2008 R2 machine with dual quad-core CPUs and 16 GB of memory. The memory never spikes, but the CPU tops out.
I ran YSlow on the site and it pointed out that I needed to compress the .axd files, etc. I did that by enabling gzip compression on everything, and that's what got me to 150 users. When I run YSlow now, it says everything is an "A".
I'm really not sure where to go from here. I'd be more than willing to share the stripped down version of the site for anyone to review. I'm not sure if it's the server, my code or the web.config.
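For context, the load scenario is roughly equivalent to something like this (a sketch only; the URL and page names are placeholders, and the real test was run with a proper load-testing tool):

    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    # Placeholders standing in for the stripped-down site: log in, load the
    # user's profile page, log off.
    BASE_URL = "http://test-server/strippeddown"
    PAGES = ["/Login.aspx", "/Profile.aspx", "/Logout.aspx"]
    USERS = 150  # the level at which the CPUs spike and the app pool crashes

    def one_user_session(user_id):
        for page in PAGES:
            with urllib.request.urlopen(BASE_URL + page, timeout=30) as resp:
                resp.read()
        return user_id

    with ThreadPoolExecutor(max_workers=USERS) as pool:
        for uid in pool.map(one_user_session, range(USERS)):
            print(f"user {uid} finished")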
I know it is a bit late, but have you considered increasing the number of worker processes in the application pool of your site to form a web garden? You can do this in IIS Manager.
