Azure DevOps on-premises, workspace mapping really slow - performance

We are using the on-premises version of Azure DevOps Server 2019 (currently Update 1) with self-hosted agents (the agents are on the latest version available from GitHub) in combination with TFVC (2019).
The DevOps server runs in one virtual machine and the TFVC server runs in a different virtual machine.
The communication between them is fast; I already tested this by simply copying large test data from one machine to the other over the network. There is no problem there.
At the very beginning of each and every run, the workspace mapping from the previous run gets deleted, a new workspace is created, and then a new workspace mapping to every source path defined in the repository is established. This takes about 30-60 minutes on each and every pipeline run.
We don't have just a single path defined in the repository; there are a lot of mappings, so that the amount of code pulled from TFS stays small and only contains the source code needed by the solution being built.
This can't be changed and has to stay as it is, and we also can't simply move to GitHub. (Just saying, in case someone would like to advise moving to GitHub :))
Has anyone experienced the same behaviour in the past, where the repository path mapping in the first build step takes about 30-60 minutes when a build is executed?
Thanks in advance for any hints.

The solution in the end was to install everything from scratch on a new machine.
After that, the mappings run in a tenth of the time they took before.
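For anyone hitting the same slowdown before resorting to rebuilding the agent machine, here is a minimal diagnostic sketch (assuming tf.exe is on the agent's PATH; the collection URL is a placeholder) that lists the TFVC workspaces registered for the agent so stale ones left over from old builds can be removed:

    # Minimal sketch: list and clean up stale TFVC workspaces on a build agent.
    # Assumptions: tf.exe is on PATH and the collection URL below is a placeholder.
    import subprocess

    COLLECTION = "http://tfs.example.local:8080/tfs/DefaultCollection"  # hypothetical collection URL

    def list_workspaces():
        # /owner:* and /computer:* list every workspace the server has registered for this machine
        result = subprocess.run(
            ["tf", "workspaces", f"/collection:{COLLECTION}", "/owner:*", "/computer:*", "/format:brief"],
            capture_output=True, text=True, check=True)
        print(result.stdout)

    def delete_workspace(name, owner):
        # Removes the server-side workspace record (use with care on a build agent)
        subprocess.run(
            ["tf", "workspace", "/delete", f"{name};{owner}", f"/collection:{COLLECTION}"],
            check=True)

    if __name__ == "__main__":
        list_workspaces()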

Related

How to solve Microsoft TFS (Team Foundation Server) update problem

How to solve a Microsoft TFS (Team Foundation Server) update problem when you want to avoid the time-consuming TFS override option.
Problem:
You want to update a location in TFS named $abc\code. You know there are definitely updated files in that location, but because of some TFS limitation it does not update the proper files and reports that everything is up to date. In reality, some files are not the latest. This sometimes happens when you go back to an old snapshot in a virtual machine such as Oracle VM VirtualBox; it may also happen in some other scenarios.
N.B. You might also face a similar problem in "Azure DevOps Server".
A hard solution:
Update the $abc\code folder using the TFS override option. In this case, if the update takes two days for example, you have to keep your computer active for two days and cannot stop the process in the middle; otherwise you have to do the full process again, which will take another two days. Note that if the contents at $abc\code are not very large, this approach is the easiest.
An easy solution:
There is a TFS workspace drop-down in the Visual Studio user interface. Delete the current workspace and create a new workspace with a different name such as Name_Year_Month_Day (for example "Workspace20200514").
When you create the new workspace, map the TFS location to exactly the same directory as the previous workspace. If TFS asks you to update everything, click "No" to cancel the immediate update.
Now get the latest version ("Get Latest") for the $abc\code location.
In this case you can cancel the update at any time, shut down your computer, and get latest again later. Getting latest a second or third (or any number of) times will fetch only the remaining files that are not up to date; it will not download the already updated files again, so it saves time.
So you can split your update into parts and your task becomes much easier.
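If you prefer scripting this instead of clicking through Visual Studio, here is a minimal sketch of the same idea with tf.exe (the collection URL, workspace name and local path are placeholders, and the server path is written here as $/abc/code):

    # Minimal sketch of the "new workspace, then scoped get latest" approach with tf.exe.
    # Assumptions: tf.exe is on PATH; the collection URL, server path and local path are placeholders.
    import datetime
    import pathlib
    import subprocess

    COLLECTION = "http://tfs.example.local:8080/tfs/DefaultCollection"  # hypothetical
    LOCAL_DIR = r"C:\src\abc\code"                                      # same directory as the old workspace
    ws_name = "Workspace" + datetime.date.today().strftime("%Y%m%d")    # e.g. Workspace20200514

    # Create the new workspace without triggering an immediate full get
    subprocess.run(["tf", "workspace", "/new", ws_name,
                    f"/collection:{COLLECTION}", "/noprompt"], check=True)

    # Map only the folder you care about into the same local directory as before
    subprocess.run(["tf", "workfold", "/map", "$/abc/code", LOCAL_DIR,
                    f"/workspace:{ws_name}"], check=True)

    # Get latest for that scope; this step can be cancelled and re-run, and files
    # that are already up to date are skipped on the next run
    pathlib.Path(LOCAL_DIR).mkdir(parents=True, exist_ok=True)
    subprocess.run(["tf", "get", "$/abc/code", "/recursive"], cwd=LOCAL_DIR, check=True)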

Does TFS with VS2012/VS2013 track every local save?

I'm thinking about using Team Foundation Server with VS2012 or VS2013. Does TFS track every single local change? For example, if I save a file 10 times locally without checking in, will every one of those local saves be sent to the TFS server where anyone on the team can see them? Or does it just send the latest saved version of the file to the server when you check in? I'm not looking for it to do this, because it wouldn't seem to make sense unless a manager wanted to track your hours or something.
MSDN describes the check-in process over here. It states that...
... all the included file changes from your workspace along with the comment, check-in notes, and links to related work items are stored on the server as a single changeset on your server.
That means only the last version of your local changes makes it into the changeset, onto the server, and to your coworkers. You can save as often as you want beforehand; the server won't notice.
Short answer: No.
Longer answer:
I can't think of any SCM that works that way.
Here's how it works, with pretty much any version control system (the terminology will differ from SCM to SCM, but the concepts are the same):
You start modifying a file.
You change the file as much as you want. When you're done, you commit it/check it in.
The contents of the file at that point in time are what's stored in the SCM. No intervening changes are stored.
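As a tiny illustration of that point (hedged: the workspace path below is hypothetical, and tf.exe is assumed to be on PATH), you can save a file locally any number of times and the server only ever sees the single changeset created at check-in:

    # Tiny illustration: many local saves, one changeset on the server.
    # Assumptions: C:\src\project is a mapped TFVC workspace (hypothetical) and tf.exe is on PATH.
    import pathlib
    import subprocess

    f = pathlib.Path(r"C:\src\project\readme.txt")

    subprocess.run(["tf", "checkout", str(f)], check=True)    # pend a single edit
    for i in range(10):
        f.write_text(f"local save number {i}\n")              # ten local saves; the server never sees these

    # Only the file's content at this moment goes into the single changeset
    subprocess.run(["tf", "checkin", str(f), "/comment:final version only", "/noprompt"], check=True)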

SonarQube Incremental mode gives different results on different machines with same code

I am running a new SonarQube server with one project and one quality profile.
I have a "runner" machine that continuously runs full scans (after getting the latest code),
and developer machines where they would like to run incremental analyses before checking in.
When running an incremental analysis on the "runner" machine without making any changes, I get 0 new issues as expected (but I am also getting a lot of "resolved" issues - what is the deal with that? I expected 0 new and 0 resolved since I changed NOTHING).
BUT when running an incremental analysis on a developer machine (after getting the latest code), I get a huge number of new issues, even though they also made no changes to the code.
To make sure I am not making any mistakes, I used TFS to compare the two project directories (the folders the analysis uses, on the runner and on the developer's local machine) and verified that both are identical (except for the Sonar-generated files).
So to sum it up:
What could cause such behavior?
Why would I get resolved issues if I did not make any changes to the code?
Could it have anything to do with the machine clocks? (I am desperate...)
If you told me there is no chance in hell that such a problem can occur, I would go back and check my own setup, but I am running such a simple configuration that I don't think I am missing anything.
Thank you for your help.
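For reference, here is a rough sketch of how the incremental analysis described above might be invoked identically on both machines to rule out configuration drift. This assumes the old sonar-runner CLI and the sonar.analysis.mode property from SonarQube 4.x-era versions; property names differ between versions, and the server URL is a placeholder.

    # Rough sketch: run the analysis with identical settings on the runner and developer machines.
    # Assumptions: the (old) sonar-runner CLI is on PATH, the project contains a sonar-project.properties,
    # and sonar.analysis.mode=incremental is the 4.x-era property; names may differ in other versions.
    import subprocess

    SONAR_HOST = "http://sonar.example.local:9000"   # hypothetical server URL

    def run_analysis(incremental=True):
        cmd = ["sonar-runner", f"-Dsonar.host.url={SONAR_HOST}"]
        if incremental:
            cmd.append("-Dsonar.analysis.mode=incremental")   # local preview of new issues
        subprocess.run(cmd, check=True)

    run_analysis(incremental=True)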

Visual Source Safe - Removing files from web projects

I'll try to make this as straightforward as possible.
Currently our team has a VSS database where our projects are stored.
Developers grab the code, place it on their local machine, and develop locally.
A designated developer grabs the latest version and pushes it to the development server.
The problem is that when a file is removed from the project (by deleting it in VS2008), the next time another developer (not the one who deleted it) checks in, it prompts them to check in those deleted files because they still have a copy on their local machine.
Is there a way around this? Can VSS instruct the client machines to remove these files and not prompt for them to be checked back in? What is the preferred approach for this?
Edit Note(s):
I agree SVN is better than VSS
I agree a Web Application project is better than a Web Site project
Problem: the same thing happens with files that are removed from class libraries.
Your number one way around this is to stop using Web Site projects. Web Site projects cause Visual Studio to automatically add anything it finds in the project path to the project.
Instead, move to Web Application projects, which don't have this problem.
Web Site projects are only good for single-person development.
UPDATE:
VB shops from days gone by had similar issues, in that whatever they had installed affected the build process. You might take a page from their playbook and have a "clean" build machine: prior to doing a deployment, delete all of the project folders, then do a get latest. This way you can be sure that the only thing deployed is what you have in source control (see the sketch below).
Incidentally, this is also how the TFS build server works: it deletes the workspace, then creates a new one and downloads the necessary project files.
Further, you might consider using something like CruiseControl to handle builds.
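A minimal sketch of the clean-machine step mentioned above, sketched here with tf.exe rather than the VSS command line; the local path is a placeholder:

    # Minimal sketch of the "clean build machine" idea: wipe the local folders, then get everything fresh,
    # so the build contains only what is in source control.
    # Assumptions: tf.exe is on PATH and BUILD_DIR is an existing workspace mapping (path is a placeholder).
    import pathlib
    import shutil
    import subprocess

    BUILD_DIR = pathlib.Path(r"C:\build\myproject")   # hypothetical local mapping

    if BUILD_DIR.exists():
        shutil.rmtree(BUILD_DIR)                      # delete all of the project folders
    BUILD_DIR.mkdir(parents=True)

    # Force a full download of everything under the mapping
    subprocess.run(["tf", "get", "/recursive", "/force"], cwd=str(BUILD_DIR), check=True)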
Maybe the developers should take care to only check in or add things they have been working on. It's kind of sloppy if they are adding things they were not even using.
Your best solution would be to switch to a better version control system, like SVN.
At my job we recently acquired a project from an outsourcing company who did use VSS as their version control. We were able to import all of the change history into SVN from VSS, and get up and running pretty quickly with SVN at that point.
And with SVN you can set up ignores for files and folders, so those files in your web projects don't get put into SVN, and the ignore properties are checked out onto each developer's machine (see the sketch below).
I believe we used VSSMigrate to do the migration to SVN http://www.poweradmin.com/sourcecode/vssmigrate.aspx
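A rough illustration of the ignore setup mentioned above; the folder name and patterns are made-up examples, and the svn CLI is assumed to be on PATH:

    # Rough illustration: set svn:ignore so generated web files stay out of the repository,
    # and commit the property so the rule reaches every developer's working copy.
    # Assumptions: the svn CLI is on PATH; the folder and patterns are hypothetical examples.
    import subprocess

    patterns = "bin\nobj\n*.user\n*.suo"   # newline-separated ignore patterns
    subprocess.run(["svn", "propset", "svn:ignore", patterns, "WebProject"], check=True)
    subprocess.run(["svn", "commit", "--depth=empty", "-m",
                    "Add ignore rules for generated web files", "WebProject"], check=True)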
VSS is an awful versioning system and you should switch to SVN, but that has nothing to do with the crux of the problem. The project file contains the references to the files that are actually part of the project. If the Visual Studio project file isn't checked in along with the changes to it, there's no way for any other developer to be fully updated, hence the prompts to delete files when they grab the latest from VSS. From there you've got multiple choices...
Make the vbproj part of the repository. Any project-level changes will be part of the commit and other developers can be notified. The problem here is that it's also going to be on the dev server. Ideally you could use nearly the same process to deploy to dev as you would to deploy a release. This leads into the other way...
SVN gives you hooks for almost all major events, where a hook is literally just a properly named batch file or exe. For your purposes, you could use a post-commit hook to push the appropriate files, say via FTP, to the server on every commit. File problems solved, and more importantly you move closer to the concept of continuous integration.
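A rough sketch of what such a post-commit hook could look like; the hook file itself would just call this script. The FTP host and credentials are placeholders, and deleted files and remote directory creation are left out for brevity:

    # Rough sketch of a post-commit hook body that pushes changed files to a server via FTP.
    # Assumptions: svnlook is on PATH; the FTP host and credentials are placeholders;
    # deleted files and remote directory creation are not handled here.
    import ftplib
    import io
    import subprocess
    import sys

    repo, rev = sys.argv[1], sys.argv[2]   # Subversion passes these two arguments to the hook

    # Ask svnlook which paths were touched in this revision
    changed = subprocess.run(["svnlook", "changed", repo, "-r", rev],
                             capture_output=True, text=True, check=True).stdout.splitlines()

    with ftplib.FTP("dev.example.local", "deploy", "secret") as ftp:   # hypothetical server/credentials
        for line in changed:
            flags, path = line.split(None, 1)
            if flags.startswith(("A", "U")) and not path.endswith("/"):
                # Export the committed version of the file and upload it
                content = subprocess.run(["svnlook", "cat", repo, path, "-r", rev],
                                         capture_output=True, check=True).stdout
                ftp.storbinary(f"STOR {path}", io.BytesIO(content))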
Something you may want to consider doing:
Get Latest (Recursive)
Check In ...
It's a manual process, but it may give you the desired result; plus, if VS mentions deleted files, you know they should be deleted from the local machine in step 1.

How to speed up the eclipse project 'refresh'

I have a fairly large PHP codebase (10k files) that I work with using Eclipse 3.4/PDT 2 on a Windows machine, while the files are hosted on a Debian file server. I connect via a mapped drive on Windows.
Despite having a 1 Gbit Ethernet connection, doing an Eclipse project refresh is quite slow - up to 5 minutes - and I am blocked from working while this happens.
This normally wouldn't be such a problem, since Eclipse theoretically shouldn't have to do a full refresh very often. However, I use the Subclipse plugin, which triggers a full refresh each time it completes a switch/update.
My hunch is that the slowest part of the process is Eclipse checking the 10k files one by one for changes over Samba.
There is a large number of files in the codebase that I would never need to access from Eclipse, so I don't need it to check them at all. However, I can't figure out how to prevent it from doing so. I have tried marking them 'derived', which prevents them from being included in the build process etc., but it doesn't seem to speed up the refresh at all; Eclipse still appears to check their changed status.
I've also removed the unneeded folders from PDT's 'build path'. This does speed up the 'building workspace' step, but it doesn't speed up the actual refresh that precedes building (which is what takes the most time).
Thanks, all, for your suggestions. Basically, JW was on the right track: work locally.
To that end, I discovered a plugin called FileSync:
http://andrei.gmxhome.de/filesync/
This automatically copies the changed files to the network share. Works fantastically. I can now do a complete update/switch/refresh from within Eclipse in a couple of seconds.
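For anyone who can't use the plugin, here is a rough illustration of the same idea (not the plugin itself): work on a local copy and mirror changed files to the network share. The local and share paths are placeholders, and this naive version only copies new or newer files and never deletes anything.

    # Rough illustration of the FileSync idea: mirror locally edited files to the network share.
    # Assumptions: the local and share paths are placeholders; this is a naive polling sync, newest-wins, no deletes.
    import shutil
    import time
    from pathlib import Path

    LOCAL = Path(r"C:\work\myproject")           # hypothetical local working copy
    SHARE = Path(r"\\fileserver\www\myproject")  # hypothetical network share

    def sync_once():
        for src in LOCAL.rglob("*"):
            if src.is_file():
                dst = SHARE / src.relative_to(LOCAL)
                if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
                    dst.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(src, dst)       # copy only files that are new or newer locally

    while True:
        sync_once()
        time.sleep(2)                            # poll every couple of seconds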
Do you have to store the files on a share? Maybe you can set up some sort of automatic mirroring, so you work with the files locally, and they get automatically copied to the share. I'm in a similar situation, and I'd hate to give up the speed of editing files on my own machine.
Given it's in Subversion, why not keep the files locally and use a post-commit hook to update to the latest version on the dev server after every commit? (Or put a specific string in the commit log (e.g. '##DEPLOY##') when you want to update dev, and only run the update when the post-commit hook sees this string - see the sketch below.)
Apart from refresh speed-ups, the advantage of this technique is that you can have broken files that you are working on in Eclipse while the dev server is still OK (albeit with an older version of the code).
The disadvantage is that you have to do a commit to push your saved files onto the dev server.
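A rough sketch of the hook described above; the hook file would call this script. The dev checkout path is a placeholder, and if the dev server is a separate machine the update would need to run over ssh instead:

    # Rough sketch of a post-commit hook body: only update the dev server's working copy
    # when the commit log contains the marker string.
    # Assumptions: svnlook and svn are on PATH and the dev checkout path is a placeholder.
    import subprocess
    import sys

    repo, rev = sys.argv[1], sys.argv[2]   # Subversion passes these to the hook
    DEV_CHECKOUT = "/var/www/dev-site"     # hypothetical working copy on the dev server

    log = subprocess.run(["svnlook", "log", repo, "-r", rev],
                         capture_output=True, text=True, check=True).stdout

    if "##DEPLOY##" in log:
        subprocess.run(["svn", "update", DEV_CHECKOUT], check=True)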
I solved this problem by changing the "File transfer buffer size" at:
Window -> Preferences -> Remote Systems -> Files
and setting the Download (KB) and Upload (KB) values of "File transfer buffer size" to a high value. I set it to 1000 KB; by default it is 40 KB.
Use the offline folders feature in Windows by right-clicking the share and selecting "Make available offline".
It can save a lot of time and round-trip delay in the file sharing protocol.
Using svn:externals with a pinned revision for the non-changing stuff might prevent Subclipse from refreshing those files on update. Then again, it might not. Since you'd have to make some changes to the structure of your Subversion repository to get it working, I would suggest doing some simple testing before doing it for real.
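A rough sketch of what pinning an external to a fixed revision could look like; the folder name, URL and revision are placeholders, and this uses the old-style externals format:

    # Rough sketch: pin a non-changing folder to a fixed revision with svn:externals,
    # so updates don't need to touch it. The folder, URL and revision are placeholders.
    # Assumptions: the svn CLI is on PATH and the property is set on the parent working-copy directory.
    import subprocess

    # Old-style externals definition: local-dir -r REV URL
    external = "vendor -r 1500 http://svn.example.local/repos/vendor/trunk"
    subprocess.run(["svn", "propset", "svn:externals", external, "."], check=True)
    subprocess.run(["svn", "commit", "--depth=empty", "-m",
                    "Pin vendor external to a fixed revision", "."], check=True)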
