Web access is extremely slow - performance

I have TFS 2015 installed on one of the company's servers. When I access TFS through web access it is extremely slow: a page takes more than five minutes to load, sometimes even longer. If I restart the server, TFS becomes a little faster (a page needs only a minute or so to load), but it soon slows down again.
The server itself is fine. CPU and memory are not even fully utilized (roughly 20% to 40%).
Other applications that are installed on the server are working fine, so it's just TFS.
Any suggestions?

Log on to the application tier machine and open web access locally to see whether you get the same behavior.
If you set up TFS in a multiple-server configuration, check the network connection between the application tier machine and the data tier machine. You may also try temporarily turning off the firewall and anti-virus software on those machines.
Clean the cache folder on the application tier; it is usually located at C:\TfsData\ApplicationTier\_fileCache (a small script like the one sketched below can automate this).
Check the requirements and compatibility documentation to confirm that your TFS is set up in an appropriate environment.
If none of the items above helps, you may need to consider moving your TFS to other hardware.
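As a convenience, here is a minimal sketch of the cache cleanup step in Python. It assumes the default single-server cache path mentioned above and should be run from an elevated prompt, ideally with the TFS web site stopped first; none of this comes from the original answer, so adjust it to your environment.

import shutil
from pathlib import Path

# Assumed default TFS application-tier file cache location (adjust for your install).
CACHE_DIR = Path(r"C:\TfsData\ApplicationTier\_fileCache")

def clear_tfs_file_cache(cache_dir: Path) -> None:
    """Delete everything inside the file cache folder while keeping the folder itself."""
    if not cache_dir.exists():
        print(f"Cache folder not found: {cache_dir}")
        return
    for entry in cache_dir.iterdir():
        if entry.is_dir():
            shutil.rmtree(entry, ignore_errors=True)
        else:
            entry.unlink(missing_ok=True)  # missing_ok requires Python 3.8+
    print(f"Cleared {cache_dir}")

if __name__ == "__main__":
    clear_tfs_file_cache(CACHE_DIR)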

Related

Repeated loss of session variables in a web application after moving to a new server

I have an old web application which formerly ran on a Windows 2003 server. When I moved it to a new Windows 2008 server, I started receiving an error that I never had before. The app uses a Windows login. Upon accessing the app, the user is asked for their login. After that, they are free to use the application. However, the issue is that after using it for some time, the user will be booted out and asked to log in again. The system is also much slower than it was previously. It is running on IIS 7. It seems to me that session variables are being lost, but I am unsure why that would be the case.
Interestingly, when the user logs in again, they can generally use the application for a longer period of time before being booted out and asked to log in again. It is also worth mentioning that it seems like the more users there are on the server, the less prominent the issue is.
It is also worth mentioning that I tried moving the application to another 2008 server, and it worked perfectly fine on that one. This leads me to believe that the issue lies somewhere in the settings on the server. I looked at the settings of the two 2008 servers side by side and noted the differences, but was unable to find a difference that would cause this sort of error. One difference that might be worth noting is that the server which does not work properly is 32-bit, whereas the server which does work is 64-bit. That said, I don't see how that difference could lead to the application losing session variables while otherwise working.
Additional information:
The code in the application on each server is identical, so that leads me to believe that the error is on the server level and not within the application itself.
Given that the code is identical, I do not believe this to be a result of Session.Abandon() being called from anywhere.
I do not believe this is due to a session timeout.
I have read that other people experience a loss of session variables due to app pool recycling, and that often the app pool recycling is from the config files being accessed (whether it be from a user or from something like an anti-virus software). I have no reason to believe that this is the case here, because all servers are under the same anti-virus and the application works fine on them.
On the server which works, the IIS authentication settings are configured such that Windows authentication is disabled and anonymous authentication is enabled. On the other server, the opposite is true.
Any help with this issue would be appreciated.
Thank you.
Make sure your app pool is running under the .NET Framework 4.0 and also check your application pool identity. When you're using IIS 7.0, make sure you use integrated pipeline mode.
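If you want to apply those settings from a script rather than the IIS Manager UI, a minimal sketch (in Python, shelling out to appcmd.exe) might look like this. It assumes appcmd.exe is in its default location and that the pool is named "MyAppPool"; substitute your own pool name.

import subprocess

# Assumed default path to appcmd.exe and a placeholder application pool name.
APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"
POOL = "MyAppPool"

# Switch the pool to .NET Framework 4.0 with the integrated pipeline, as suggested above.
subprocess.run(
    [APPCMD, "set", "apppool", POOL,
     "/managedRuntimeVersion:v4.0",
     "/managedPipelineMode:Integrated"],
    check=True,
)

# Print the pool's current settings so the identity can be reviewed as well.
subprocess.run([APPCMD, "list", "apppool", POOL], check=True)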

Visual Studio Team Services when someone leaves the company

We're transitioning from Rackspace dedicated boxes to a completely cloud-based Azure environment, for both production and development servers, and as an MS shop we're going to be using Visual Studio Team Services. As an MS ISV partner we have a bunch of MSDN seats, so our developers will all have an MSDN w/ VS Premium account which we'll use with Team Services/TFS. We're replicating our production web server on a virtual machine, but after some refactoring we will eventually move to an Azure website.
My question is about when users leave the company. Right now we have everyone log into a development server using RDP. They develop on that server. When someone is gone we shut their access off to that server.
With Team Services when the user opens up a project do they automatically get the entire project downloaded to their local development environment/machine? If someone leaves the company is there a process using VSO that secures that code and removes it from them or makes it inaccessible? Any way to lock it down when we need to? I can't seem to find a procedure to do this.
To add or remove someone from the account, go to the Users hub on the home page for your account. If you remove a user from it, that user will no longer be able to access your account.
When users connect to your account, they'll need to take some action to get source code. That would be cloning in the case of using Git or creating a workspace and running get for TFVC.
If the user has source code, for example, on a machine, there is no way to remotely remove it. They won't be able to get updates, etc., but there's nothing running on the computer that would be able to erase the code the user has already obtained.
All source code sharing tools I know of allow zipping up or browsing the local repository, including VS Team Services.
Daniel Mann is correct. Developing on shared servers via RDP is terrible for productivity: development is graphics- and disk-intensive, often requires admin rights and reboots, crashes happen, debugging triggers system interrupts, and out-of-memory loops on a shared machine affect everybody else. (Even with RDP you can copy and paste, map a network drive locally, or upload code to the net.)
If you're doing critical work, the only thing that really works is physically bringing people in to a non-internet-connected machine/network with USB disabled. However, these mechanisms, especially denying internet access, will halve productivity.
This is why most organizations rely on legal contracts. On a 2M project, is it worth making it a 4M project? There are cases where this is required, normally around national security / CIA / defence, but not for ordinary IP; there are better / trickier ways.
Pretty much all binaries can be reverse engineered with little effort if someone really wants to; obfuscation does very little.

Where should I install my Continuous Integration Server?

I'm the lead developer at a startup and we currently have the following setup:
- Development Server
- Staging Server
- Production Server
- Paid Subversion Hosting
- My local machine
- 2 other developers' local machines
Where is the best place to host the CI server? On an entirely new server? Or is my local machine sufficient for this?
Definitely not your local machine. I'd suggest a separate server unless you don't mind slowing down your dev server.
I say not your local machine because the last thing you want to be hindered by is builds. Nothing is more frustrating than a slow machine. And you should generally keep official builds generated off of a separate server.
Generally not the local machine (when other options are available), as you mostly want to have the same "stuff" installed (or not installed) on the build server as you have on the production server, so that whatever is running on the build server is running in as realistic a scenario as possible.
Speaking from a .NET point of view, this means that I don't want (for example) Visual Studio running on the build server, ruling out my local machine.
It would also be a good idea to be sure someone on your team has access to the machine and can perform actions on it, thus potentially ruling out the hosted solution.
Aside from that, as long as it's on a box with a half decent spec, I don't think it really matters.
I would put it on the development server, the staging server, or the paid Subversion hosting instance, if possible.

Do you have performance problems when you work on Visual Studio projects via a network share?

We have tremendous problems with Visual Studio (2008, if that matters) locking up and slowing down when accessing projects over a network drive. It can take several minutes to open a large Web site project through a mapped drive, and saving even a single file can take a minute or more.
I fired up Wireshark and watched the traffic. VS, it seems, requests a massive number of files from the network -- there's an enormous amount of SMB traffic. I've done some research, and this traffic seems to stem from two situations.
VS has to have everything in its own process to provide Intellisense.
VS needs to have all the source in order to compile the project.
All the advice I've read seems to boil down to the same thing: work locally, not on a remote machine, then push your code to an integration server via source control.
This would certainly solve our problems (VS is quite fast working locally), but what if you can't work locally? What if the project and the infrastructure required to run it are too large and complicated to be replicated on everyone's individual machines?
We've gone 'round this problem a couple times, and the only way we can figure to work on these projects is direct access via a mapped drive. However, the VS slowness and lockups are really becoming a problem.
One solution: we installed VS on the server and work on the projects directly on the servers via RDP. Seriously.
So, I ask:
What does everyone else do? Do you work via the network, or do you replicate projects locally? If remotely, do you suffer from VS performance issues?
We work locally and use SVN to keep all our code on the server.
I find VS 2008 quite slow working locally sometimes so I wouldn't fancy working on a network share.
Trying to compile over a network share is horribly slow in Visual Studio. Your start times will be bad as the IntelliSense database is regenerated. Each compilation has to go over the network multiple times. Linking takes forever.
If you need the output of your compilation on the network, I'd recommend doing your compile locally and defining a post-build command to copy the results to your share.
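If you take that route, the post-build copy can be as simple as the following sketch. In Visual Studio itself this would usually be an xcopy line in the project's post-build event; the Python version below just illustrates the idea, and the local output and share paths are placeholders.

import shutil
from pathlib import Path

# Placeholder paths: local build output and the drop folder on the network share.
LOCAL_OUT = Path(r"C:\projects\MyWebSite\bin\Release")
SHARE_DROP = Path(r"\\buildserver\drops\MyWebSite")

def copy_build_output(src: Path, dest: Path) -> None:
    """Mirror the locally built output onto the network share after the build finishes."""
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copytree(src, dest, dirs_exist_ok=True)  # dirs_exist_ok requires Python 3.8+

if __name__ == "__main__":
    copy_build_output(LOCAL_OUT, SHARE_DROP)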
If, as you say, you cannot pull everything locally, then I'd suggest your project is too big and needs to be broken up into more manageable chunks. For a multi-tier application, break it up by tier and invest in some form of continuous integration (e.g. CruiseControl) to automatically build individual pieces. That way you can work locally on a particular piece and pull the pre-built portions from CI for the other pieces of the application.
I'm not terribly surprised that using VS to load projects over a network share has performance issues. VS (in any language) is constantly getting information from files in the project. Once you start loading this over a network you're at the mercy of the underlying network connection. All lags and access issues will directly translate into VS having an issue loading file contents.
I would advise copying the solution locally and using some form of source code control to sync the project on the share.
If the code is too complicated to install on everyone's machine, then don't put it on everyone's machine. Does everyone need to have everything in order to do productive work?
I have 79 projects in my solution that I work with. Several hundred thousand lines of code. I pull my source down every day from TFS and build it; it's a lot of code, but it's a far better solution than trying to work over a network share.
A more legitimate situation for having the source code on a share is when one has a non-Windows host on which a number of virtual Windows machines are running.
I have this exact situation where my desktop machine (the host) is running Debian and I use VMware to run various virtual Windows machines (the guests), including one that has Visual Studio installed so that I can target Windows OS's. Having the source code on a Samba share on the host machine has the following pro's:
The source is not duplicated, so there is no way to confuse different copies while working on several virtual machines at the same time.
I have full control over the source from my preferred OS.
I can turn on and off any of the virtual machines, or roll back to a snapshot, without the risk of losing changes.
I can build (etc.) from the same source on several machines without having to commit changes before the source is fully tested (reason: I have to use Subversion < 1.5).
The only problem with this setup is that Visual Studio (6,7,8,9) is painfully slow.
I have mounted the partition (on which the share lies) with "relatime", and this helps insofar as the disk activity on the share stays moderate, but Visual Studio keeps the (virtual) network card occupied all the time.
Any solutions to this would be very appreciated.
I encountered similar problems every time I worked (where "work" means anything other than just copying / pasting files) over a network drive. The problem occurred with Zend Studio and Eclipse.
Why not use any kind of source control?
When working on Windows based projects I've always worked locally.
Once, at a Unix shop (AIX, IIRC), developers would work via an NFS mount and check in/check out via RCS...
I'm using VS2005 across a network share and not having any performance issues. However, it is a new server (Windows Server 2008). I don't have any other data points for VS since using it at work is relatively new for me.
However, some data points from using NetBeans for previous projects on a network share... Local build time for my project was 2 minutes on Vista, on a fast dual-core AMD 64-bit machine. For a network share project, on a Server 2003 box, it was 20 minutes. Building that same project locally on an ancient Tablet PC (1 GHz, single core) running XP took around 5 minutes. Interestingly enough, the Tablet PC could build on the Server 2003 box in the same 5 minutes.
For those asking "why" on the network share. The network share is automatically backed up, archived, etc. Also, that way I can very easily look at the same projects from multiple machines without having to worry about pushing back into the repository, etc. Once you've gone to having your dev stuff on a device where you can get to it from anywhere/anything, you'll never want to do local storage again!
I have performance problems with anything over the network; networks just aren't good enough yet.
I thought it was common knowledge that disk-speed is one of the major "slowness" factors when it comes to using VS in Windows. Most dev machines I've built have had projects located on 10k RPM RAID0 drives, or at least a single 10k RPM drive. And even then it seems slow sometimes. Just the way it is, I suppose, until VS2009/VS2010 fixes it? :)
From my experience, this lag when working on a network share is 99% due to Intellisense. Disable it and you'll see.
Disabling IntelliSense indeed speeds up saving and opening files through a UNC share dramatically:
http://blogs.msdn.com/saraford/archive/2007/12/03/did-you-know-how-to-turn-off-intellisense-by-default.aspx
but then again, as stated in other comments, you might as well use a good text editor
I've also experienced the problems with performance mentioned above. It seems to vary from project to project, but I did find one way of speeding up performance significantly for some project types.
Following the advice in this article made a previously unusable project on a network location (it would take minutes to open one file) perform almost like a local project. The basic gist is that you need to grant FULL TRUST to the network location:
To grant permission to all your projects in your Visual Studio Projects folder located on the network, follow these 8 steps:
1. Open Microsoft .NET Framework 1.1 (or 2.0) Configuration, which you'll find under Administrative Tools in the Control Panel.
2. Expand Runtime Security Policy | Machine | Code Groups | All_Code | LocalIntranet_Zone.
3. In the right-hand pane, click Add a Child Code Group.
4. In the dialog that follows, choose Create a new code group and fill in a Name like Visual Studio Projects.
5. Optionally, provide a Description for the Code Group. (You'll see the description when you click a Code Group in the left tree, helping you identify the various Code Groups you may have.)
6. In the Condition Type drop-down, choose URL.
7. For the URL field, type something like this: file://YourServer/My Documents/Visual Studio Projects/*
8. Under Use existing permission set, choose FullTrust (that is, if you trust your own applications; if you don't, choose a different permission set or create a new one).
Not sure why this works, but it made a previously unusable .NET 2.0 project perform significantly better.
Original article: http://imar.spaanjaars.com/364/how-do-i-allow-my-visual-studio-net-projects-to-run-from-a-network-location
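For what it's worth, the same trust grant can also be scripted with the caspol.exe tool instead of the GUI. The command below is a sketch reconstructed from the walkthrough above, not a verified command line: the caspol path, code-group name, and URL are assumptions you should adapt.

import subprocess

# Assumed caspol.exe path for .NET 2.0; adjust for the framework version you target.
CASPOL = r"C:\Windows\Microsoft.NET\Framework\v2.0.50727\caspol.exe"

# Add a child code group under LocalIntranet_Zone granting FullTrust to the share,
# mirroring the GUI walkthrough above. The URL and group name are placeholders.
subprocess.run(
    [CASPOL, "-m",
     "-ag", "LocalIntranet_Zone",
     "-url", "file://YourServer/My Documents/Visual Studio Projects/*",
     "FullTrust",
     "-n", "Visual Studio Projects"],
    check=True,
)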
I was having the same problem. I have a local copy of our build system, which expects certain drive letters, and was also experiencing slowness.
I have solved the problem by adding the following registry keys:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices]
"R:"="\\DosDevices\\D:\\devel\\build"
"S:"="\\DosDevices\\D:\\devel\\src"
Note that the double '\'s above are part of the .reg file format. When using regedit use single '\' throughout.
My build times were divided by 3. :)
I found the info in the wikipedia article on the SUBST command.
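If you prefer to script those persistent mappings instead of hand-editing a .reg file, the sketch below writes the same values with Python's winreg module. It needs an elevated prompt, the new device names only appear after a reboot or logoff, and the paths are the same placeholders as above.

import winreg

# Persistent DOS device mappings, equivalent to the .reg file above.
MAPPINGS = {
    "R:": r"\DosDevices\D:\devel\build",
    "S:": r"\DosDevices\D:\devel\src",
}

KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Session Manager\DOS Devices"

# CreateKeyEx opens the key (creating it if needed); this requires administrator rights.
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
    for drive, target in MAPPINGS.items():
        winreg.SetValueEx(key, drive, 0, winreg.REG_SZ, target)
        print(f"Mapped {drive} -> {target}")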

Does Visual Studio Team Foundation Server really need to be on its own machine?

So we decided to go with Visual Studio Team Foundation Server for version control, etc. Getting ready to deploy today, I read the following in the installation guide:
"You cannot install Team Foundation Server on a domain controller or a computer that is running other server products such as Exchange Server or Host Integration Server."
That and other comments in the guide lead me to think MS does not want me to install TFS on anything other than a server dedicated solely to hosting TFS (i.e. don't put it on one of my front-end web servers or my back-end DC).
I am planning on doing a single-server deployment (mostly for simplicity). Can anyone verify that TFS has to be on a dedicated machine? If so, should I virtualize it and hang it off one of the front-end machines?
Thanks all...
Performance is pretty important for TFS - check-ins, for example, should be pretty much instantaneous, or it can have a dramatic impact on developer productivity.
That said, it doesn't need a lot of horsepower - here's a link to the Server Requirements. My current client is going "virtual" - there should be no reason not to, assuming you know how to tune your virtual servers to perform equivalently to the stated hardware specs.
One of the key things to remember is that ALL data in TFS is stored in SQL Server, so anything running on the same hardware that can affect SQL Server performance will affect TFS's performance. That is why it's important to have build server(s) distributed on another machine. Software builds are VERY file-system-intensive operations and can have a very negative impact on SQL Server performance - hence why it's important to move that off to another box.
From my experience this is because of the user/group membership that comes with a domain controller, where creating the necessary TFS groups on the domain controller gives them incorrect permissions.
However, there is a workaround:
Installation of the TFS Data Tier Components on a Domain Controller:
Copy the contents of \dt to a temporary directory, e.g. C:\TEMP\dt.
Open the file hcpackage.xml in Notepad or any XML-capable editor.
Search for the phrase "domain controller".
Change the first WQL element after the first match to:
<WQL
  namespace="\\.\root\cimv2"
  query="SELECT * FROM Win32_ComputerSystem WHERE Domain !='' AND DomainRole >3"
  action="="
  count="1"
/>
You have to change count="0" to count="1".
Restart the setup.
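Before editing hcpackage.xml, it can help to confirm what the machine actually reports for Win32_ComputerSystem.DomainRole, since the modified WQL matches when that value is greater than 3 (4 = backup domain controller, 5 = primary domain controller). A quick check, assuming wmic is available, might look like this:

import subprocess

# Query Win32_ComputerSystem.DomainRole via wmic (assumes wmic.exe is still available).
result = subprocess.run(
    ["wmic", "computersystem", "get", "DomainRole"],
    capture_output=True, text=True, check=True,
)

# The output is a header line followed by the numeric role value.
lines = [line.strip() for line in result.stdout.splitlines() if line.strip()]
role = int(lines[1])

# DomainRole values above 3 indicate a backup (4) or primary (5) domain controller,
# which is exactly what the modified WQL condition matches.
print(f"DomainRole = {role}; domain controller: {role > 3}")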
