MSBuild performance problems after server migration

We are running TeamCity with about 20 .NET build configurations, all of which use MSBuild 4.0. We recently moved one of our build agents from a machine running Windows Server 2008 to a new physical server running Windows Server 2012, and since the migration all the builds take almost twice as long to finish as they did on the old server! The new server is more powerful than the old one in terms of CPU, RAM, and disk, and the benchmarks we have run on both servers all confirm that the new server should be more capable.
According to the build logs, all stages of the build are slower, so it's not just a single build step.
The first thing we did was check CPU utilization during builds, and it is suspiciously low: only 3-6% usage on some of the cores. RAM usage was also very low. Could there be some build agent configuration we have overlooked that is slowing down the builds?
Note: The new server is running as a virtual machine. At first we thought that was the reason, but then that should have been reflected in the benchmarks. This is the only virtual machine running on this physical server, and it has almost all hardware resources dedicated to it. It would be interesting to hear if any of you have had similarly bad experiences running a build server on a virtual machine. We have also tried booting "natively" from the VHD image, without any difference in build times.
I know this could be a very tricky one to debug for "external" people, but I was hoping someone could offer some good suggestions on where to look for the issue, as we are kind of stuck right now.
Edit: We tried activating the Performance Monitor tool in TeamCity, and it shows CPU, RAM, and disk usage all comfortably below 10% (disk access peaked at 35% a couple of times during the build).

Related

Xcode Server Bots not running tests

I'm trying, but failing, to set up a reliable continuous integration environment using Xcode Server.
I have a git repository on a headless Mac mini server running the Xcode Server service. The server has a separate development user account with administrator privileges that is used by Xcode.
I have set up my schemes, with testing included, and shared them to the repository.
The bots run, check out code, build, analyze, and archive, but only seem to run tests when they feel like it, which is almost never. I've checked the schemes, and they have not changed between the runs where Xcode ran the tests and the runs where it didn't.
On first setting them up, the tests wouldn't run at all until I added administrator privileges to the development account. Then the tests ran a couple of times before Xcode Server decided to stop running them again.
I can't find any reason why the tests aren't run. Sometimes the bots fail because of a crash during setup and an error is reported, but mostly the bots seem to run; they just don't execute the tests, and no error is reported.
I've logged in remotely to the server, and the simulator is running, but never seems to do anything.
Here's a screenshot of an example bot. You can see the tests used to run, and you can see where I've reduced my warnings and got rid of an analysis issue. You can also see where no tests ran, with no kind of warning or error given as to why.
I've tried restarting the server, nope.
I've tried restarting the client, nope.
It's really frustrating, and I can't find any recent issues that offer a proper solution to this. The server is in constant use running backups and other tasks, so I'd rather not have a solution that involves logging in to the server and restarting something every time there's a problem, which is always. It defeats the whole point of bots if I'm spending more time logging in to my server trying to get them to work than they spend actually running.
Anyone have similar issues and a solution?
Edit: I noticed that my memory usage was very high on the server; memory pressure was practically always amber. So I went out and got some memory today, increased the Mac mini's memory from 4GB to 16GB, and now the tests have started running again. The whole process is also much faster (less than surprising, I guess).
Could it just be low memory causing problems with the simulator? I've only just installed the memory and restarted, so I'll give it a few test runs before I confirm this solution; it's stopped working before...
This seems to have been a memory issue: I upgraded the server's memory from 4GB to 16GB, as Activity Monitor was showing significant memory pressure.
Since doing this, the bots started running tests again, and the total running time for the bot is a quarter that it was.
As per my edit, I've been running the bots for a day now, including bots that run on multiple simulators, and everything seems to be fine.
It's not very good that Xcode gives no obvious indication as to why the tests didn't run.
For reference and to see if this might fix your problems, original server specs were :
Mac Mini Server edition (late 2012)
2.3 GHz Intel Core i7
4GB memory
2x1TB drives
Replaced the 2x2GB memory sticks with 2x8GB sticks (The maximum allowed for the model)
EDIT: After a month of running with no problems, increasing the memory has solved the problem permanently.

What hardware improvements should we make to speed up our build machine?

We have a build machine running in our development department, which we've set up to build continuously throughout the working day.
What this does is:
Deletes the source code previously checked out (5 minutes)
Does a clean checkout from subversion (15 minutes)
Builds a whole bunch of C++ and .NET code (35 minutes)
Builds installers and run unit tests (5 minutes)
Given the above, what sort of impact would adding different hardware have on improving the time it takes to do the above?
For example, I was thinking about using an SSD for the hard disk, as compiling involves a lot of random disk access.
The subversion server is currently a virtual machine - would switching it to be a physical machine help the slow checkout?
What impact would upgrading from a Core 2 Duo processor to an i7 make?
Any other suggestions on speeding up the above?
One trick that might speed up the SVN checkout process could be to keep a working copy on the build machine, update the working copy, and do an svn export from the working copy to the build directory. This should reduce the load on the SVN server and reduce network traffic; a minimal sketch follows below.
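A minimal sketch of that trick, assuming the svn command-line client is on the PATH and using placeholder paths for the agent layout:

```python
import subprocess

# Placeholder paths -- adjust to your build agent's layout.
WORKING_COPY = r"C:\buildagent\svn-wc"   # persistent working copy
BUILD_DIR = r"C:\buildagent\work\src"    # clean tree handed to the build

# Pull only the changed files over the network...
subprocess.run(["svn", "update", WORKING_COPY], check=True)

# ...then produce a pristine, .svn-free tree locally for the build.
subprocess.run(["svn", "export", "--force", WORKING_COPY, BUILD_DIR], check=True)
```

The network cost drops to an incremental update; the export is a purely local copy.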
Another trick, to reduce the first 5 minutes of cleaning, could be to move the old build directory to a temp folder on the same disk and then use a background task to delete the old build directory when the main build completes (this could be a nightly cleanup task); a sketch follows.
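A sketch of that deferred-delete idea; it assumes the trash folder sits on the same volume as the build directory so the rename is near-instant:

```python
import os
import shutil
import uuid

# Placeholder paths; TRASH_DIR must be on the same volume as BUILD_DIR.
BUILD_DIR = r"C:\buildagent\work\src"
TRASH_DIR = r"C:\buildagent\trash"

# Before the build: park the old tree with a cheap rename instead of a
# slow recursive delete.
os.makedirs(TRASH_DIR, exist_ok=True)
if os.path.exists(BUILD_DIR):
    os.rename(BUILD_DIR, os.path.join(TRASH_DIR, uuid.uuid4().hex))

# Later, off the critical path (e.g. in a nightly cleanup task): purge
# the parked trees.
for entry in os.listdir(TRASH_DIR):
    shutil.rmtree(os.path.join(TRASH_DIR, entry), ignore_errors=True)
```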
I think you've made good suggestions yourself. Definitely add a faster hard-drive (SSD or otherwise) and upgrade the CPU as well. I think your code repository (Subversion) should definitely be on a physical machine, ideally separate from your build machine. I think you'll notice a big difference after upgrading the hardware. Also, make sure the machine doesn't have any other large tasks running at the same time as the build tasks (e.g. virus scanning) so that the build tasks aren't slowed down.
How is your build machine set up to execute its tasks? Are you using continuous integration software? Is the machine itself a server or just a regular desktop machine?
Another way to speed up SVN is to use the binary svn:// protocol instead of HTTP, as sketched below.
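If the server also runs svnserve, repointing an existing working copy is a one-liner with Subversion 1.7+; the URLs here are hypothetical (older clients use `svn switch --relocate` instead):

```python
import subprocess

# Repoint the working copy from HTTP/WebDAV to the svnserve protocol.
subprocess.run(["svn", "relocate",
                "http://svnserver/repos/project/trunk",
                "svn://svnserver/project/trunk"],
               check=True, cwd=r"C:\buildagent\svn-wc")
```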
It looks like the build itself is the most time-consuming part, so that's the best candidate for optimization. What about a parallel build spread over other machines in the office? Products like IncrediBuild can significantly improve compilation time.

IIS performance improvements

What can be done to improve performance in IIS? When I deploy my web application to my local IIS machine, it runs much slower than when I run the solution in Visual Studio without debugging. The difference is remarkable: roughly twice as fast in Visual Studio.
Some things I can think of -
Check your processor usage and memory usage - this can be extremely important.
Check whether Gzip compression is enabled - enabling it on an already overloaded CPU can further degrade performance, but over the network it saves bandwidth, so it's a good option to enable when the processor is not overloaded (a sketch of toggling this is below).
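As a sketch, on IIS 7+ compression can be toggled server-wide with appcmd; treat the flags here as illustrative rather than a tuning recommendation:

```python
import subprocess

APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

# Enable static and dynamic compression. Dynamic compression costs CPU
# per request, so leave it off if the processor is already busy.
subprocess.run([APPCMD, "set", "config", "-section:urlCompression",
                "/doStaticCompression:true",
                "/doDynamicCompression:true"], check=True)
```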
Is this on the same machine or on different machines? If the machine running IIS is less capable than your development machine (which is sometimes the case), that could add to the problem.
Also, have you considered the initial compile time for the websites? At least the first page load in local IIS will be somewhat slower because of the compilation that takes place when the page is first requested. In Visual Studio you see this separately as a build step, which can create the illusion that the VS dev server is faster than IIS. A sketch follows.
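One way to take that initial hit out of the first request is to precompile the site at deploy time with aspnet_compiler; the paths, framework version, and app name below are assumptions:

```python
import subprocess

# .NET 2.0 compiler path shown; adjust for your framework version.
ASPNET_COMPILER = (r"C:\Windows\Microsoft.NET\Framework"
                   r"\v2.0.50727\aspnet_compiler.exe")

# Precompile the hypothetical /MyApp site into a deployable target folder.
subprocess.run([ASPNET_COMPILER,
                "-v", "/MyApp",                      # virtual path
                "-p", r"C:\inetpub\wwwroot\MyApp",   # physical source
                r"C:\deploy\MyApp_precompiled"],     # output target
               check=True)
```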

Virtual Development Environment Performance - .NET Development

I have the following setup for my daily/main/only development environment
Hardware/Tin = 4 GB RAM, 2.6 GHz dual-core CPU, 2x250 GB HDs, and the usual array of peripherals.
On the tin above I currently have Windows XP installed; in Windows XP I have VMware Workstation installed, and I run a Windows Server 2003 development environment. This includes Visual Studio 2003/2005/2008, SQL Server 2005/2008, the full MS Office suite, and some productivity tools (e.g. Redgate SQL/Data Compare, DevExpress CodeRush, TestDriven.NET, etc.).
I have problems with this: it runs slowly (15 minutes to boot), the Watch/Autos windows in VS freeze up when debugging, I can't have more than 2-3 copies of VS open, the Errors window freezes up, and WinGrep and COM+ constantly run out of Virtual Desktop Memory, and so forth. (In fact, I would attribute most of the issues to Virtual Desktop Memory.)
Now, I've tried every tweak in the book: I have a second HD for VMware, my paging file is on a different drive, I've adjusted my RAM split between guest and host, and I've hacked the registry key for Virtual Desktop Memory, all to no avail.
The obvious fix would be to increase my RAM or CPU, but I'm not able to.
My question is, has anybody experienced the above, and if so, how did you solve it? Did you try ESXi? or shift your environment to raw tin?
IMHO, you've tried just about every tweak in the book. I'd suggest that you should just move to native for your main setup, and restrict VM use for testing.
I use a VM as my main dev env, but I don't run as much stuff as you, so I don't hit a big performance wall.
I guess the trick you didn't try was running fewer things on your VM. 2-3 copies of VS are a recipe for slowness; running SQL Server, same thing. Bumping up the memory would be good, but at the very least run services (IIS, SQL Server) on another VM, or better yet, another box. You are taxing your VM way too much; it is not the VM's fault.
The problem you run into most of the time with a VM is I/O wait.
Do you run your virtual machine off of a disk image? If so, try defragmenting your drive.
Or did you dedicate a partition to it?
Edit:
I would suggest one of the following:
either defragment the drive that holds the disk image,
or dedicate a partition to the virtual machine instead of using a disk image altogether (ideally the first partition on the drive, since it has the lowest random access time).
Running off a disk image works, but since you're working on top of a filesystem, the disk image may be fragmented across the disk.
Good luck, hope it helps...

Significant Performance Decrease when moving from Windows Server 2003 to 2008 (IIS 6 to IIS 7)

Our ASP.NET 2.0 web app was running happily along on Windows Server 2003. We were starting to see some of the limits of the environment approaching, such as memory and CPU usage spikes, and as we're getting ready to scale, we decided it was time for a larger server with higher availability.
We decided to move to Windows Server 2008 to take advantage of IIS 7's shared configuration. In our development and integration environments, we reproduced the OS and app on 2008/IIS 7 and everything seemed fine. But truth be told, we don't have a good way of simulating production-like loads yet, nor can we reproduce our prod environment accurately (we're small, with limited resources). So once we rolled out to production, we were surprised to find performance significantly worse on 2008 than it was on 2003.
We also moved from a 32-bit environment to 64-bit in the process, and we've incorporated ASP.NET 3.5 DLLs into the project.
Memory usage is through the roof, but I'm not as worried about that. We believe this is partly due to Server 2008's higher memory overhead, so throwing more RAM at it may solve that issue. The troubling thing is that we're seeing processor spikes to 99% CPU utilization, which we never saw in the 2003/IIS 6 environment.
Has anyone encountered these issues before and are there any suggestions for a solution/places to look? Right now we're doing the following:
1) Buying time by adding memory.
2) Buying time by setting app pool limits: shut down w3wp.exe when CPU hits 99% load. Since there is no option to recycle (rather than shut down) app pools on a CPU limit, I have a scheduled task running that restarts any stopped app pools (see the sketch after this list).
3) Profiling the app pools under Classic and Integrated modes to see which may perform better.
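A sketch of that watchdog task, assuming IIS 7's appcmd at its default location; the pool names come from appcmd itself:

```python
import subprocess

APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

# List app pools that are currently stopped, one name per line...
stopped = subprocess.run(
    [APPCMD, "list", "apppool", "/state:Stopped", "/text:name"],
    capture_output=True, text=True, check=True).stdout

# ...and start each one again. Schedule this to run every few minutes.
for name in filter(None, (line.strip() for line in stopped.splitlines())):
    subprocess.run([APPCMD, "start", "apppool", name], check=True)
```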
Any other ideas are completely welcome.
Our experience is that code runs much faster on a 64-bit Windows Server 2008 than on a 32-bit Windows Server 2003 server.
I am wondering if something else is also running on the machine. For example, is SQL Server installed with a maintenance plan that could cause the CPU spikes?
I would check the following:
Which process is using the CPU?
Is there a change in the code? Try installing the new code on the old machine.
Is it something to do with the compile options? Is the CPU usage caused by recompilation?
Are there any errors in the event log?
In our case, since we have 4 processors, we increased the number of worker processes to 4; it's working well so far compared to before. A sketch of the appcmd change is below the snapshot.
Here's a snapshot:
http://pic.gd/c3661a
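For reference, raising the worker-process count is a one-line appcmd change; the pool name here is a placeholder, and a web garden only helps if the app doesn't rely on in-process session state:

```python
import subprocess

APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

# Turn the hypothetical pool "MyAppPool" into a web garden of 4 worker
# processes, one per CPU in this case.
subprocess.run([APPCMD, "set", "apppool", "MyAppPool",
                "/processModel.maxProcesses:4"], check=True)
```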
You can use the application pool "Recycle" option in IIS 7+ to configure physical and virtual memory limits for application pools; once these are reached, the process will recycle and the resources will be released (a sketch is below the note). Unfortunately, the option to recycle based on CPU usage has been removed from IIS 7+ (someone correct me if I'm wrong). If you have other apps on the server and want to avoid them competing for resources when this condition happens, you can implement Windows System Resource Manager and its IIS policy (here is a good tutorial: http://learn.iis.net/page.aspx/449/using-wsrm-to-manage-iis-70-apppool-cpu-utilization/).
Note: WSRM is only available on the Enterprise and Datacenter editions.
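A sketch of setting those memory-based recycling thresholds with appcmd; the pool name is a placeholder and the limits (expressed in KB) are purely illustrative:

```python
import subprocess

APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"

# Recycle the pool when private bytes exceed ~1.5 GB or virtual memory
# exceeds ~3 GB (both limits are given in kilobytes).
subprocess.run([APPCMD, "set", "apppool", "MyAppPool",
                "/recycling.periodicRestart.privateMemory:1572864",
                "/recycling.periodicRestart.memory:3145728"], check=True)
```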
