Code Coverage Overview in TFS different from .coverage file - visual-studio

We are trying to add Visual Studio code coverage to our CI, and so far it is working, but when I open the build, the overview shows a different coverage figure than the downloadable .coverage file.
This is how it looks in the overview:
and this is the downloaded .coverage file when I open it in VS:
As you can see, the absolute numbers of covered lines and blocks are the same, but locally I seem to get a different total line count, so the percentage is much higher. I am fairly sure that the local figure is the correct one, as ReSharper also reports 67% coverage. I read that building in Release can change the coverage, so I built in both Debug and Release, locally and in the CI, but the coverage stays more or less the same (about 0.1% off).
We also run our integration tests on a different machine, so I had to copy the PDB files over to that machine to be able to collect coverage for those tests there. This also works and the results get merged correctly, but somehow the overall line/block count is still off.
We are using TFS 2017 and VS 2019.
Did anybody have the same problem? Is there a workaround that does not require a third-party coverage tool?
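For reference, the part of a .runsettings file that controls which binaries are counted and where the collector looks for symbols is the <CodeCoverage> section. The sketch below uses placeholder module names and paths rather than our actual configuration, but aligning ModulePaths on both machines and pointing SymbolSearchPaths at the folder the PDBs were copied to is the kind of setting that determines the overall line/block totals:

    <CodeCoverage>
      <ModulePaths>
        <Include>
          <!-- Placeholder: count only our own assemblies -->
          <ModulePath>.*MyCompany\..*\.dll$</ModulePath>
        </Include>
        <Exclude>
          <!-- Placeholder: keep test assemblies out of the totals -->
          <ModulePath>.*Tests\.dll$</ModulePath>
        </Exclude>
      </ModulePaths>
      <SymbolSearchPaths>
        <!-- Placeholder: folder the PDBs were copied to on the integration test machine -->
        <Path>D:\CoverageSymbols</Path>
      </SymbolSearchPaths>
    </CodeCoverage>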

Related

VS Code Go test coverage highlighting absent

I have 2 VS Code workspaces for Go development, in both cases using Go modules.
In the first workspace I get red/green bars in the gutter indicating test code coverage. All very nice.
In the second I cannot get the coverage to show. I know the tests are running (VS Code runs them with go test -coverprofile...), and when I check the output it looks good. I originally had a multi-root workspace, but I tried creating a new one with just a single root for the module I'm working on. I also tried making sure the root path did not include symlinks.
I have spent several hours now looking at the differences between the settings at user, workspace and folder level for the two workspaces and cannot find any differences.
I would appreciate any suggestions on how to track down why the coverage highlighting is not showing.
I'm using go 1.13.15.
Update
I have updated to VSCode 1.51.1 and go 1.15.2. No change.
However, when I view a git diff I do see the coverage in the gutter, just not in normal editor views.
I had a similar issue - I migrated from dev machine #1 to dev machine #2, and had the exact same environment, software/tools versions, etc. Code coverage worked as expected on dev1, but not on dev2.
The critical difference was that, on dev2, I had opened a VS Code workspace that was a symlink of my GOPATH, whereas on dev1 that path had been a proper directory. When I pointed the VS Code workspace on dev2 at the underlying directory instead of the symlinked version, code coverage began to display correctly.

Change Set Management With Visual Studio Web Projects

I developed, and have been following for many years, a development change (aka Change Request / CR) methodology for all my web development, which includes the following procedural constraints:
Version control Trunk equals Production at all times except for small windows during production releases.
Development code is stored in its own independent directory structure and contains all files necessary for deployment of the changes to dev, test or production. This includes all DB schema and web resources in that change set.
Each CR directory will NOT contain any files that are not being modified in the change set.
In a team based environment, these change set directories would be branches in version control so all team members could see all open changes and also enable automated validation / CI.
Each Change is named with a unique identifier like a date/timestamp or a formal change identifier in an issue tracker like Jira.
When deploying changes to dev, test or prod, the web directory can simply be pushed over to the target web server via SCP/SFTP, sometimes using the handy WinSCP directory syncing tool.
When production release is complete and validated, files can be copied / merged to Version Control Trunk and the CR folder moved to an archive for future reference.
I am now trying to get my head around how to maintain large web projects using Visual Studio whilst following my change set methodology, and I have hit a wall on how to make this work. It would be great to keep my changes separate but still be able to use Visual Studio to step through code and debug when log-based debugging is insufficient. The web projects I am maintaining are massive.
Has anybody come across a way in Visual Studio to have a Web Project based on two separate directory structures, such as WEB_BASE (containing the entire website) and CR1234 (containing the code under active development), and have Visual Studio use files in the following order of precedence: any files from CR1234 first, then WEB_BASE second, ignoring the duplicates in WEB_BASE?
Are there any other ways to keep track of the discrete changes required for change sets whilst using Visual Studio and minimizing the UI noise of scrolling through resources that you are not changing to find the files you are working on?

How to make a Visual Studio build fail on a ReSharper error

How can I make my build fail when ReSharper detects an "Error" during code inspection?
I am using C# in Visual Studio 2017 along with ReSharper. I have set the inspection severity of Possible 'System.NullReferenceException' to show as "Error". This setting only shows a red underline under the offending code; the VS build still succeeds if I simply ignore it. I want the build to fail if a developer ignores such errors detected by ReSharper inspection.
I'm afraid ReSharper does not seem to support this for now.
1. In my opinion, the error level in C# \ Potential Code Quality Issues mostly serves to show a red underline indicating where there may be a risk, to help improve your code, with red meaning the issue deserves attention. It is something controlled by us: we decide whether to treat these inspections as errors (red underline) or warnings (blue underline?).
But such potential code issues cannot be recognized by MSBuild (the build system in VS), so the build ignores them and succeeds.
2. For build settings in ReSharper, I tried MSBuild settings and compiler settings like the ones below:
I set every element in Potential Code Quality Issues to Error. I also set the null-reference-related settings like below:
After that I created a simple null reference, but the build ignored it and succeeded. I got the same result when using ReSharper Build (ReSharper options => Tools => Build => Build engine). So I'm afraid the answer is probably no :(
This isn't an ideal solution, but JetBrains provide a command line tool called InspectCode which runs their code inspections on your solution and outputs the results in XML or other formats. You could add a custom MSBuild step which runs InspectCode.exe MySolution.sln -o=output.xml, examines output.xml for errors, and fails the build if any are found.
Unfortunately InspectCode is slow and even though the analyses seem to be cached across runs it still takes a significant amount of time. For example, on my solution of 700k lines of code the tool takes 60 seconds on the second run, i.e. with a warm cache. So I don't think this is a viable solution to run on developer machines on every build. It might be acceptable in an automated build system.
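To make that concrete, a custom target along those lines might look roughly like the sketch below. The target name, the report file name, and the XPath query are assumptions and would need to be checked against the actual InspectCode report schema before relying on it:

    <!-- Sketch of a custom build step. Assumptions: InspectCode.exe is on the PATH,
         MySolution.sln is a placeholder, and the report lists findings as <Issue>
         elements with a TypeId attribute; adjust to the real report format. -->
    <Target Name="FailOnReSharperIssues" AfterTargets="Build">
      <!-- Run the JetBrains command line inspections and write an XML report. -->
      <Exec Command="InspectCode.exe MySolution.sln -o=output.xml" />
      <!-- Pull any reported issues out of the XML report. -->
      <XmlPeek XmlInputPath="output.xml" Query="//Issue/@TypeId">
        <Output TaskParameter="Result" ItemName="InspectionIssue" />
      </XmlPeek>
      <!-- Fail the build if the report contains any issues at all. -->
      <Error Condition="'@(InspectionIssue)' != ''"
             Text="ReSharper InspectCode reported issues; failing the build." />
    </Target>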

No code coverage results on TFS 2013 Build Server

So our TFS 2013 builds do everything correctly except report code coverage. I've seen similar questions here, e.g. TFS 2013 - No Code Coverage Results, but we've already tried the suggested fixes with no results.
UPDATE 1 — we've taken more steps to try to resolve this; here's the complete list:
Ensured Visual Studio Ultimate was installed on the build server
Tried setting the build definition's "CodeCoverageEnabled" to true as well as the "Code coverage is enabled" setting
Ensured the build was using the Debug configuration and that PDBs were being generated for the DLLs to be tested
Added a .runsettings file with the correct ModulePath included under <CodeCoverage> (verified in build log that the module path was being interpreted correctly; it would produce errors if we intentionally malformed it)
Checked in .runsettings file
Set build definition to "Custom" and pointed to .runsettings file
The build process itself works fine. We can get code coverage results when we build the project locally in the IDE. On the build server, both MSTest and NUnit test projects run fine, and we see pass/fail results as expected. The "No Code Coverage Results" message still plagues us though.
Update 2 -
Here is what we see in the run log:
Somebody suggested a homegrown code-coverage calculator in https://stackoverflow.com/a/16198120/141508, but it'd be a crime to spend $150 bazillion-thousand dollars on TFS 2013 & VS Ultimate 2013 with MSDN and still not have this one basic function working.
Add a run settings file to source control. Set the tests to custom and point to the run settings file. More info on using the .runsettings file can be found on msdn: http://msdn.microsoft.com/en-us/library/jj159530.aspx
I was experiencing the same problem. My issue was with the ModulePath. The MSDN examples suggest you can just use the name of a target binary, but that was not working for me. However, when I made the name a regular expression, it worked. I am also dumping all build output into one folder so that PDB and other reference files are found. Hope that helps.
<ModulePath>.*Administration\.dll.*</ModulePath>
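For completeness, here is roughly where that element sits in a .runsettings file. This is a sketch based on the MSDN page linked above, so double-check the surrounding element names against the documentation:

    <?xml version="1.0" encoding="utf-8"?>
    <RunSettings>
      <DataCollectionRunSettings>
        <DataCollectors>
          <!-- Data collector for Visual Studio code coverage -->
          <DataCollector friendlyName="Code Coverage">
            <Configuration>
              <CodeCoverage>
                <ModulePaths>
                  <Include>
                    <!-- Regular expression matching the binaries to instrument -->
                    <ModulePath>.*Administration\.dll.*</ModulePath>
                  </Include>
                </ModulePaths>
              </CodeCoverage>
            </Configuration>
          </DataCollector>
        </DataCollectors>
      </DataCollectionRunSettings>
    </RunSettings>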
I am using local build server with Visual Studio online with a .runsettings file and I had exactly the same issue.
None of the trickery above helped, so I tested the build script on the hosted build controller and it worked fine, so I decided the problem must be the build server itself.
I changed the Build Service account from "Network Service" to a regular windows user account in the TFS Configuration Tool and now code coverage is collected. Note that this user will need access to the TFS build directories.
I found this question because I saw something peculiar in this article (look for the "Delay" setting that defaults to 60):
d. Add a new argument ‘Delay’, enter details as mentioned below
Name – Delay, Direction – In, ArgumentType – Int32, Default Value – 60
This argument is required to delay the coverage check so that the required build details are filled in by the build agent. This delay varies from system to system; in some cases it might not be required at all.
http://www.prowareness.com/blog/failing-build-on-insufficient-code-coverageblock-coverage-part-3/
Maybe try putting a "Delay" workflow activity in the build process template you are using.
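If you edit the build process template XAML directly rather than through the workflow designer, the activity the article describes boils down to something like the fragment below. This is only a sketch: the Int32 'Delay' argument is the one defined above, and the surrounding sequence of the template is omitted:

    <!-- Sketch: a WF4 Delay activity inside the build template, pausing for the
         number of seconds held in the template's Int32 'Delay' argument. -->
    <Delay Duration="[TimeSpan.FromSeconds(Delay)]" />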

Unexplained results with VS2008 "Get everything..." option

We've only recently begun using TFS (2008) with Visual Studio (2008). A couple of developers discovered the "Get everything when a solution or project is opened" option in VS and decided it was a good idea--and it would seem to be.
However, we've been getting some curious results when opening some solutions. The solutions in question contain several projects of mixed types--mostly class libraries and web apps. The curious part is the list of files in the "Get" dialog box that comes up.
Here's what I've found out so far about the files in the list:
The list is incomplete; not every controlled file in the solution is listed.
The version in the workspace matches the version in source control.
They are not missing from the workspace.
There are files from each of the projects in the solution, though not every file in each project is included.
The list of files is the same for three separate developers on three separate machines.
Running a tf get from a command line does not yield the same results.
Any insight into this would be greatly appreciated. As I mentioned, this option seems like a good idea, but we're a bit hesitant to rely on it when the results are unexpected.
Thanks.
I know that any files that are not part of a project will not be pulled down by TFS when you do a get-latest at the solution level. My guess is that this is part of your mixed/unexpected results.
I personally do not have that option checked. I always pull everything down from source control first thing. Whenever I check in source code, I also pull down everything again, compile it and run it first. That way I do not introduce any issues into TFS.
I would make sure that everyone on your development team is using the same general settings for TFS source control. I always have it prompt for check out (saving/editing) and get latest version of item on check out.
Have you applied the latest service pack for TFS 2008 (SP1, last I remember)? And SP1 on each developer's machine as well?
