TeamCity slow; 100% disk usage

On one of my servers, TeamCity is running all build steps extremely slowly.
Using TeamCity's performance monitor, I traced it to 100% disk usage during simple tasks such as running a NAnt script that performs a copy.
How do I fix this?

It turned out that a type of Windows logging had been switched on on the server. TeamCity does a lot of things, so logging everything TeamCity did was disk heavy. The culprit was identified by looking at which process was hogging disk usage in Task Manager. I can't remember exactly which logging feature it was, but switching off this Windows-related logging solved the problem.
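For anyone hitting the same symptom: a quick way to find the process responsible is to sort per-process I/O, either in Task Manager's Processes/Details view, in Resource Monitor's Disk tab, or from PowerShell. A minimal sketch, run from an elevated prompt (the properties come from the standard Win32_Process class and are cumulative totals since each process started, not a live rate):

    # List the ten processes with the most cumulative disk I/O (bytes read + written)
    Get-CimInstance Win32_Process |
        Sort-Object { [uint64]$_.ReadTransferCount + [uint64]$_.WriteTransferCount } -Descending |
        Select-Object -First 10 Name, ProcessId, ReadTransferCount, WriteTransferCount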

Related

Issues with GitLab Runner on 32-bit Windows

I have a problem with GitLab Runner on 32-bit Windows. The runners are at version 14.4.0 and our GitLab instance is at version 14.4.1-ee. The runners are tied to specific machines running 32-bit Windows 10 Pro (10.0.19043), use shell executors (PowerShell), and run with full administrative privileges (i.e., as the local system user). This is outside my control.
Sporadically, and for no discernible reason, the runners stop sending log traffic to our GitLab instance. They should be uploading several MB worth of logs. I don't see failed attempts to upload logs in debug mode, and I don't see any of the network traffic I expect in Wireshark. This might correlate with issues loading a custom driver, but I can't say for sure.
The workaround is even more perplexing. The following protocol fixes the issue: remove all the runners using the GitLab CI interface; uninstall the malfunctioning runner; download a new runner binary, register and install it. If I repeat the same steps, except without downloading a new binary, the issue persists. The files are identical when I run a binary diff on them.
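For reference, that reinstall protocol looks roughly like the following on the affected machine, in an elevated PowerShell prompt. This is only a sketch: the GitLab URL and registration token are placeholders, the download URL is the 32-bit Windows location from GitLab's install docs (verify it, and pin the version you need rather than "latest"), and it unregisters from the CLI rather than through the web UI:

    # Stop and remove the existing runner installation
    .\gitlab-runner.exe stop
    .\gitlab-runner.exe unregister --all-runners
    .\gitlab-runner.exe uninstall

    # Download a fresh binary, then register and install it again
    Invoke-WebRequest -Uri "https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-windows-386.exe" -OutFile gitlab-runner.exe
    .\gitlab-runner.exe register --non-interactive `
        --url "https://gitlab.example.com/" `
        --registration-token "REGISTRATION_TOKEN" `
        --executor shell --shell powershell
    .\gitlab-runner.exe install
    .\gitlab-runner.exe start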
I haven't been able to extract any relevant information from the system event logs or network traffic. The issue only affects our runners on 32-bit Windows. It doesn't affect 64-bit Windows or runners on Linux, regardless of architecture. It seems to happen sporadically, in the sense that I can't correlate it with anything interesting happening on the affected machines.
Clearly, something about our 32-bit Windows environments is different and causing the runners to malfunction. I just don't know what it is. I would appreciate any direction figuring out the source of this problem. The fact that downloading new binaries makes the difference has me worried, but I don't have any reason to suspect our machines have been compromised.
This problem was resolved by running tests remotely over SSH. It's almost certainly a bug with the 32-bit Windows distribution of gitlab-runner.

Task Manager shows Hard drive at 100%

My hard drive is at 100% in Task Manager.
I disabled Windows Search and Superfetch, and the hard drive is still at 100%.
I am using Windows 10.
Any suggestions would be helpful.
Update: Task Manager doesn't show which process is driving the hard drive to 100%; no single process appears to account for a significant share of the disk activity.
I suggest you check the Processes tab and see whether any process is doing heavy reads/writes on your hard drive.
Disable the Indexing service, which sometimes uses a lot of resources, and disable any startup programs that might be consuming system resources.
Press Windows + R to open the Run menu, type msconfig, and review the startup entries; disable any program that seems suspicious.
You can try some other repair methods as well (a couple of them are sketched below):
Perform a disk check
Reset virtual memory
Temporarily disable antivirus software
Change the settings in Google Chrome & Skype
Fix your StorAHCI.sys driver
Update your device drivers
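A couple of those steps, sketched as commands from an elevated PowerShell prompt (the service names assume a stock Windows 10 install; on older builds the Superfetch service is named Superfetch rather than SysMain):

    # Stop and disable the search indexer and Superfetch/SysMain services
    Stop-Service WSearch; Set-Service WSearch -StartupType Disabled
    Stop-Service SysMain; Set-Service SysMain -StartupType Disabled

    # Run a disk check on the system drive; because the drive is in use,
    # chkdsk will offer to schedule the check for the next reboot
    chkdsk C: /f /r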
Win10 100% disk usage
I had the same issue on my Windows 10 system and tried a lot of things, like turning off the Windows search indexing feature, but nothing worked. Here is what worked for me. I opened Task Manager and found a task called Microsoft Compatibility Telemetry (CompatTelRunner.exe). It is a Windows process designed to collect usage and performance data and send it regularly to Microsoft, which uses it to analyze and improve the user experience and to identify compatibility issues before installing the latest Windows version. However, Microsoft Compatibility Telemetry can eat CPU and disk by scanning files on the computer and checking their compatibility with Windows 10 when an update is initiated.
I simply clicked End Task on Microsoft Compatibility Telemetry and my disk usage went from 98% to 15% within a few seconds. I hope this helps others experiencing the same issue.
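Ending the task only helps until the Task Scheduler starts it again. A longer-lasting option, assuming the default task names on Windows 10, is to disable the scheduled tasks that launch CompatTelRunner.exe (from an elevated PowerShell prompt):

    # Disable the scheduled tasks behind Microsoft Compatibility Telemetry
    Disable-ScheduledTask -TaskPath "\Microsoft\Windows\Application Experience\" -TaskName "Microsoft Compatibility Appraiser"
    Disable-ScheduledTask -TaskPath "\Microsoft\Windows\Application Experience\" -TaskName "ProgramDataUpdater"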
I had the same issue with Windows 10 on a laptop.
I set the Windows Update service from Automatic to Manual, and now disk usage is always under 5%.
Open Administrative Tools in Control Panel, click Services, and set Windows Update to Manual.
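The same change from an elevated PowerShell prompt (wuauserv is the service name for Windows Update):

    # Set the Windows Update service to manual start and stop it for now
    Set-Service -Name wuauserv -StartupType Manual
    Stop-Service -Name wuauserv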
I had the same problem for months. I deactivated SrTasks.exe (the System Protection background task) and disk usage went back to normal.
However, this task is clearly something important, so I don't think it's recommended to stop it.

Unusably slow Jenkins: Normal CPU but high disk activity on Jenkins web page access

On Jenkins v1.528, after a few hundred builds, web page access and build job completion become many times slower than normal, to the point of being unusable. A restart fixes the issue. The host machine, a Mac circa 2012, has gigabytes of available memory, and CPU usage is normal.
But I notice a persistent spike in disk usage when accessing a Jenkins web page (such as one for a Jenkins job/task/build). It seems that Jenkins has possibly run out of heap space. Yet I can't think of anything that has changed on this Jenkins instance or on the host machine to cause the issue (it didn't used to be a problem).
Also, I've seen some fixes for major slowness recently, but they went into prior versions.
We have a similar issue after a restart. This is what I have figured out so far.
Your issue seems to be related to these issues:
jenkins is slow responding to main page request
Jenkins gets very slow after saving some Jobs
Jenkins' homepage loading very slowly
Jenkins is very slow on first visit
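If the heap-space suspicion from the question is right, it may also help to give the JVM more headroom and capture a dump when the heap is exhausted. A minimal sketch, assuming Jenkins is started directly from the WAR (these are standard JVM flags; adjust the heap size and dump path to your setup):

    # Start Jenkins with a 2 GB heap and dump the heap if it ever runs out
    java -Xmx2g -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp/jenkins.hprof -jar jenkins.war

The resulting .hprof file can be opened in a heap analyzer such as Eclipse MAT to see what is filling the heap.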

WaIISHost flatlining web-role

First off, I'm very new to Azure.
I've successfully deployed an ASP.NET MVC 3 web application to Azure, using a web role. The app uses Entity Framework and SQL Azure.
Recently I've made some changes (some of which involved adding appsettings) and tried to upgrade the application. The upgrade took quite a long time before aborting. I've always deployed through the management portal Silverlight application at http://windows.azure.com.
When trying again to no avail, I set up Remote Desktop and deployed again. The Remote Desktop session was extremely slow, and it turned out this was because WaIISHost was pushing the CPU to 100%.
The IIS Manager shows that the application is deployed and 'started', however I cannot navigate to the site in the VM, and the deployment constantly seems to be trying to update without success and eventually aborting and retrying, (as I write this, it's currently Busy and Waiting for role to start...).
Does anyone have any ideas as to what the problem could be?
I believe all the right dependencies are set to Copy Local, which would otherwise be a possible cause. It is extremely hard to debug this issue, because the Remote Desktop session hangs so often due to the 100% CPU utilization and the periodic recycling/restarting/re-updating of the web role.
Thanks,
James
P.S. Hope some of that made at least some sense...
I suspect there is something in your WebRole.OnStart and/or Run that is causing WaIISHost to use 100% CPU. Can you remove all code from WebRole.OnStart and/or Run and try again?
It might also be helpful to turn on IntelliTrace when deploying, so that you can download the trace and find any exceptions that occurred when your application started, even before the website started.

Is it possible to run Teamcity on Linux and use Windows as a Build Agent?

I would like to run TeamCity (with a build agent) in a Linux VM to handle our non-.NET projects. At the same time, I'd like to have a build agent set up on a Windows server to handle all of the .NET projects.
I can't think of any reason why this wouldn't work, but does anyone have experience with this, or ideas about problems I might encounter, before I spend too much real time on it?
Ta
It's fully supported. TeamCity also knows which agents to route builds to.
This is a very normal scenario, and many projects I know of do this without any problems. Just make sure that, via each build configuration's Agent Requirements, you direct the appropriate jobs to the appropriate agents. One criterion can be that agent.os.name should contain Windows or Linux, etc.
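For example (a sketch; agent.purpose is a made-up property name), you can also tag the Windows agent yourself by adding a custom property to its conf/buildAgent.properties and referencing it in the Agent Requirements of the .NET build configurations:

    # conf/buildAgent.properties on the Windows agent
    # "agent.purpose" is a hypothetical custom property; any name will do
    agent.purpose=dotnet

Then add a requirement such as agent.purpose equals dotnet, and those builds will only be routed to that agent.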
