Does Visual Studio Team Foundation Server really need to be on its own machine? - visual-studio

So we decided to go with Visual Studio Team Foundation Server for version control, etc. Getting ready to deploy today, I read in the installation guide:
"You cannot install Team Foundation Server on a domain controller or a computer that is running other server products such as Exchange Server or Host Integration Server."
That and other comments in the guide lead me to think Microsoft does not want me to install TFS on anything other than a server dedicated solely to hosting TFS (i.e. don't put it on one of my front-end web servers or my back-end DC).
I am planning on doing a single-server deployment (mostly for simplicity). Can anyone verify that TFS has to be on a dedicated machine? If so, should I virtualize it and hang it off one of the front-end machines?
Thanks all...

Performance is pretty important for TFS - check-ins, for example, should be pretty much instantaneous, or it can have a dramatic impact on developer productivity.
That said, it doesn't need a lot of horsepower - here's a link to the Server Requirements. My current client is going "virtual" - there should be no reason not to, assuming you know how to tune your virtual servers to perform equivalently to the stated hardware specs.
One of the key things to remember is that ALL data in TFS is stored in SQL Server, so anything running on the same hardware that can affect SQL Server performance will affect TFS's performance. That is why it's important to have the Build Server(s) distributed on another machine. Software builds are VERY file-system-intensive operations and can have a very negative impact on SQL Server performance - hence why it's important to move that off to another box.

In my experience this is because of the user membership that comes with a domain controller: creating the necessary TFS groups on a domain controller gives them incorrect permissions.
However, there is a workaround:
Installation of the TFS Data Tier Components on a Domain Controller
1. Copy the contents of \dt to a temporary directory, e.g. C:\TEMP\dt.
2. Open the file hcpackage.xml in Notepad or any XML-capable editor.
3. Search for the phrase "domain controller".
4. Change the first WQL element after the first match to:

   <WQL
     namespace="\\.\root\cimv2"
     query="SELECT * FROM Win32_ComputerSystem WHERE Domain != '' AND DomainRole > 3"
     action="="
     count="1"
   />

   That is, you have to change count="0" to count="1".
5. Restart the setup.
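If you prefer to script the edit (steps 2-4), here is a minimal PowerShell sketch; it uses the temporary path from step 1 and assumes hcpackage.xml has no XML namespace (adjust the XPath if it does):

$path = 'C:\TEMP\dt\hcpackage.xml'
[xml]$package = Get-Content -Path $path -Raw

# Find the first WQL check that inspects DomainRole (the domain-controller detection).
$wql = $package.SelectNodes('//WQL') |
    Where-Object { $_.query -match 'DomainRole' } |
    Select-Object -First 1

if ($wql -and $wql.GetAttribute('count') -eq '0') {
    $wql.SetAttribute('count', '1')   # same effect as the manual count="0" -> count="1" edit
    $package.Save($path)
}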

Related

Web access is extremely slow

I have TFS 2015 installed on one of the company's servers. When I access TFS through web access it is extremely slow: it takes more than 5 minutes for a page to load, and sometimes even longer. If I restart the server, TFS becomes a little faster (a page needs only a minute or so to load), but it soon becomes slow again.
The server itself is okay; CPU and memory are nowhere near fully utilized (around 20%-40%).
Other applications that are installed on the server are working fine, so it's just TFS.
Any suggestions?
Log on to the application-tier machine and try the web access from there, to see whether you get the same behavior.
Check the network connection between the application-tier machine and the data-tier machine if you set up TFS in a multiple-server configuration. You may try turning off the firewall and anti-virus software on the machines.
Clean the cache folder on the application tier; the folder is usually located at C:\TfsData\ApplicationTier\_fileCache (a sketch of this step follows these suggestions).
Check the requirements and compatibility to see whether your TFS is set up in an appropriate environment.
If the items above are not helpful, you may need to consider moving your TFS to other hardware.
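A rough sketch of the cache cleanup step in PowerShell (the path below is the default one mentioned above; IIS is stopped first so the cache files are not in use):

iisreset /stop
$cache = 'C:\TfsData\ApplicationTier\_fileCache'
# Remove only the cached content; TFS rebuilds the cache on demand.
Remove-Item -Path (Join-Path $cache '*') -Recurse -Force
iisreset /start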

Visual Studio Team Services when someone leaves the company

We're transitioning from Rackspace dedicated boxes to a completely cloud-based Azure environment, for both production and development servers, and as an MS shop we're going to be using Visual Studio Team Services. As an MS ISV partner we have a bunch of MSDN seats, so our developers are all going to have an MSDN w/ VS Premium account, which we'll use with Team Services/TFS. We're replicating our production web server on a virtual machine, but after some refactoring will eventually move to an Azure website.
My question is about when users leave the company. Right now we have everyone log into a development server using RDP. They develop on that server. When someone is gone we shut their access off to that server.
With Team Services when the user opens up a project do they automatically get the entire project downloaded to their local development environment/machine? If someone leaves the company is there a process using VSO that secures that code and removes it from them or makes it inaccessible? Any way to lock it down when we need to? I can't seem to find a procedure to do this.
To add or remove someone from the account, go to the Users hub on the home page for your account. If you remove a user from it, that user will no longer be able to access your account.
When users connect to your account, they'll need to take some action to get source code. That would be cloning in the case of using Git or creating a workspace and running get for TFVC.
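For reference, those commands look roughly like this (the account, collection and project names are made up):

# Git: clone the repository to the local machine.
git clone https://fabrikam.visualstudio.com/DefaultCollection/_git/MyProject

# TFVC: create a workspace and get the latest sources.
tf workspace /new /collection:https://fabrikam.visualstudio.com/DefaultCollection MyWorkspace
tf get /recursive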
If the user has source code, for example, on a machine, there is no way to remotely remove it. They won't be able to get updates, etc., but there's nothing running on the computer that would be able to erase the code the user has already obtained.
All the source code hosting services I know of, including VS Team Services, allow zipping up or browsing the local repository.
Daniel Mann is correct. Developing on shared servers via RDP is terrible for productivity: development is graphics- and disk-intensive, often requires admin rights and reboots, crashes and debugging trigger system interrupts, and out-of-memory loops are fun on a shared machine, i.e. they stuff everybody else around. (Even with RDP you can copy and paste, map a network drive locally, or upload to the net.)
If you're doing critical stuff, the ONLY thing that really works is physically bringing them in to a non-internet-connected machine/network with USB disabled. However, these mechanisms, especially denying internet access, will halve productivity.
This is why most organizations rely on legal contracts. On a 2M project, is it worth making it a 4M project? There are cases where this is required, normally around national security/CIA/defence, but not for IP; there are better/trickier ways.
Pretty much all binaries are reverse-engineerable with little effort if you really want to; obfuscation does very little.

Automatic application deployment

I want to automate application/role/feature deployment (unattended) on a Windows 2012 R2 infrastructure. This project needs many hours of programming, which is why I'm asking here.
I want to deploy the following applications and roles: Active Directory, DNS, SQL Server 2012, Citrix XenApp Server, Citrix XenDesktop server, Citrix Datacollector, Citrix Licence server, Citrix Storefront server.
So the basic deployment will be on 8 servers (already installed on ESXi, with IP configuration only).
I imagined this scenario:
I will fill in a CSV file that contains all of the information, and execute PowerShell scripts to deploy everything; we can imagine one script that calls different scripts for each component (SQL, AD, DNS, Citrix, etc.).
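Something along these lines (the CSV columns, script names and WinRM connectivity are assumptions):

# servers.csv is assumed to have ComputerName and Role columns, e.g. "sql01","SqlServer".
$servers = Import-Csv -Path 'C:\deploy\servers.csv'

foreach ($server in $servers) {
    # One deployment script per role: Deploy-ActiveDirectory.ps1, Deploy-SqlServer.ps1, ...
    $script = Join-Path 'C:\deploy\scripts' ("Deploy-{0}.ps1" -f $server.Role)
    Invoke-Command -ComputerName $server.ComputerName -FilePath $script
}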
I don't want to depend on any tool (SCCM, Puppet or whatever), which is why I want to create it from scratch - but maybe I'm wrong.
I also read that there is a new feature called PowerShell DSC to simplify application deployment: http://blogs.technet.com/b/privatecloud/archive/2013/08/30/introducing-powershell-desired-state-configuration-dsc.aspx
Here is a simple example: if you need four IIS web servers, execute this code:
Configuration DeployWebServers
{
    Node ("test1.domain.com","test2.domain.com","test3.domain.com","test4.domain.com")
    {
        WindowsFeature IIS
        {
            Name = "Web-Server"
            Ensure = "Present"
        }
    }
}
DeployWebServers -OutputPath "C:\scripts"
Start-DscConfiguration -Path "C:\scripts" -Verbose -Wait -Force
But in my case I'll have only one server per application, role or feature. If I understand correctly, this feature is interesting only if you need to deploy the same configuration on (x) servers.
What's your advice? Should I write PowerShell scripts from scratch, or choose a solution like Puppet or Chef (much easier), even though in that case I'll be dependent on a tool?
The best solution would be to use a SQL database: the final goal of my project is a web application with a database that will execute my PowerShell scripts to deploy my infrastructure.
Of course, from this web application I will populate the database through forms, and my PowerShell scripts will query this database to get information (IP address, client name, domain name, password, users...).
Thank you for your advice.
Chef or Puppet will be the easiest way forward, and both tools have been around long enough for you not to worry about them disappearing off the internet. Both work pretty much out of the box and will get you up and running in considerably less time than if you were to design your own system.
Having said that, a benefit of going with a PS solution is that it doesn't require any agents installed on the destination boxes (connectivity is handled by WinRM). Ultimately you can wrap it up as a PowerShell module, hand it out to your sysadmins, and retain full control of what's going on under the hood.
A PS solution will give you full control and a better understanding of the underlying process, but that will come at the cost of time and other design headaches.
To sum up: if you have the time, the will or a specific use case, then go with PS. Otherwise do what the big boys do and save yourself reinventing the wheel - or seventeen.
Disclaimer: I did the PS thing for a previous employer.
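On the question's point that DSC only pays off when many servers share one configuration: configuration data lets a single configuration assign a different role to each node. A minimal sketch (the node names and roles below are placeholders):

$configData = @{
    AllNodes = @(
        @{ NodeName = 'dc01.domain.com';  Role = 'AD-Domain-Services' }
        @{ NodeName = 'dns01.domain.com'; Role = 'DNS' }
        @{ NodeName = 'web01.domain.com'; Role = 'Web-Server' }
    )
}

Configuration DeployRoles
{
    Node $AllNodes.NodeName
    {
        # Each node installs only the Windows feature listed for it in $configData.
        WindowsFeature RoleFeature
        {
            Name   = $Node.Role
            Ensure = 'Present'
        }
    }
}

DeployRoles -ConfigurationData $configData -OutputPath 'C:\scripts'
Start-DscConfiguration -Path 'C:\scripts' -Verbose -Wait -Force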
If you're looking for a repeatable deployment solution, and you don't mind using some light, free infrastructure, I propose you use Windows ADK 8.1 and MDT 2013 (if you're using Windows Server 2012 R2). You can develop a front end to choose a deployment type. Rather than using a CSV file, all the information can be contained within the Task Sequence, which can be configured to trigger tasks on different conditions.
Johan Arwidmark from deploymentresearch.com has developed a great example of this, called the Hydration Kit, with a full step-by-step guide that sets up a Configuration Manager 2012 R2 infrastructure running on Windows Server 2012 R2 and SQL Server 2012 SP1, in either Hyper-V or VMware. If you ask him nicely, he might allow you to use his work as a base for your project.

Where should I install my Continuous Integration Server?

I'm the lead developer at a startup and we currently have the following setup:
- Development Server
- Staging Server
- Production Server
- Paid Subversion Hosting
- My local machine
- 2 other developers' local machines
Where is the best place to host the CI server? On an entire new server? Or is my local machine sufficient for this?
Definitely not your local machine. I'd suggest a separate server unless you don't mind slowing down your dev server.
I say not your local machine because the last thing you want to be hindered by is builds. Nothing is more frustrating than a slow machine, and official builds should generally be produced on a separate server.
Generally not your local machine (when other options are available), as you mostly want to have the same "stuff" installed (or not installed) on the build server as on the production server, so that whatever runs on the build server runs in as realistic a scenario as possible.
Speaking from a .NET point of view, this means that I don't want (for example) Visual Studio running on the build server, ruling out my local machine.
It would also be a good idea to be sure someone on your team has access to the machine and can perform actions on it, thus potentially ruling out the hosted solution.
Aside from that, as long as it's on a box with a half decent spec, I don't think it really matters.
I would put it on the development server, the staging server, or the paid Subversion hosting instance, if possible.

Why is there a Red Cross against my User Group in Team Explorer > Team Members?

Recently our Development user group (Windows) has started showing a red cross in Team Explorer and we cannot expand it anymore.
I have tried removing and re-adding the group but to no avail.
Does anyone know why it would display like this?
We are using TFS 2010 with VS2010 SP1 and August's Power Toys.
BTW, "Technical Testing Team" is another Windows domain user group, just like Development, and that one works OK.
In general, the red crosses on particular services are caused either by that service being unavailable or by permissions issues...
Are you still able to perform actions that require admin permissions? Does this apply to a single project or all?
How are you defining your developers? A Windows domain group? If so, is the TFS server able to see the DC?
I'd suggest you try installing Team Explorer on the TFS server and running it when logged on as yourself - see if you have the same problem. If not, it may be network or firewall problems between your dev machine and the server. At least it would narrow the problem down.
Edit 1:
Do reports work properly? (Specifically, do the graphs show up in reports)?
What auth are you using? Kerberos?
What account is TFS running as? What permissions (if any) does that account have on the network?
Can you see the security information you'd expect in the TFS_Configuration database? (Try tbl_SecurityAccessControlEntry) [Usual "Change nothing, do it at your own risk" disclaimer]
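For example, a read-only check along those lines (the server instance is a placeholder, the database name is the one mentioned above, and it requires the SQL Server PowerShell module):

# Read-only query: confirms the ACE table exists, can be read, and has entries.
Invoke-Sqlcmd -ServerInstance 'TFSSQL01' -Database 'Tfs_Configuration' `
    -Query 'SELECT TOP (10) * FROM tbl_SecurityAccessControlEntry'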
Edit 2:
As per the install docs, the TFS service should be running under its own account (IIRC they suggest Domain\TFS.Service). Check the permissions on the Windows services on the TFS server and see who they're running as. Make sure the permissions for that user are correct as per the installation instructions.
NTLM can cause problems as it doesn't allow credentials to be delegated/relayed the way Kerberos does (and has some picky setup requirements) - but that's obviously not why it's broken all of a sudden (and that usually manifests as graphs not displaying in reports).
WRT the SecurityAccessControlEntry table: I was more interested in making sure there were entries and that the table could be read properly than in its contents.
I assume you've tried deleting/recreating groups - if not, give it a shot (deleting the domain group may be an issue for other services, but try using a different (new) group and removing the old one from TFS entirely).
I have to admit I'm running out of ideas after that. If it were me, I'd try a clean install on a new server/VM and either point the new install at the old data store [multiple server setup] or export/import projects [single server setup].
For Multiple server setups, this would determine if it's a TFS installation issue/data corruption. For single-server, there's a good chance this would just clean up the problem. You could, of course, also ex/import on multi-server too if it does turn out to be a data thing.
You may want to hang on to see if someone has a less drastic solution.
Looking in the General tab of the VS Output window, there is a message:
Skipping loading group Development into Team Members because it has 102 members.
Looks like VS has a limit here.
