StackOverflowException with JetBrains dotCover in TeamCity

I use TeamCity to support continuous integration of a project.
Currently, when I run the .NET Code Coverage: JetBrains dotCover runner for a specific project, dotCover crashes with a StackOverflowException.
The unit test 'TestSomething' itself is correct and always runs successfully.
Here is a snapshot of the error log from the build:
[MyAssembly.dll] MyAssembly.MyTester.TestSomething (12s)
[18:28:37][MyAssembly.MyTester.TestSomething]
[18:28:37][MyAssembly.MyTester.TestSomething] Process is terminated due to StackOverflowException.
[18:28:38][MyAssembly.MyTester.TestSomething] [JetBrains dotCover] Coverage session finished [12/22/2015 6:28:38 PM]
[18:28:38][MyAssembly.MyTester.TestSomething] [JetBrains dotCover] Analysed application exited with code '-1073741571'
[18:28:38][MyAssembly.MyTester.TestSomething] [JetBrains dotCover] Coverage session finished but no snapshots were created.
[18:28:39][MyAssembly.MyTester.TestSomething] ##teamcity[importData type='dotNetCoverage' tool='dotcover' file='C:\TeamCity\buildAgent\temp\buildTmp\coverage_dotcover33181917853826188801.data']
[18:28:37][Step 3/10]
[18:28:37][Step 3/10] Process is terminated due to StackOverflowException.
I really cannot understand why this happens.
Any help from experts?

This was a bug in JetBrains dotCover. The exit code '-1073741571' is the signed form of 0xC00000FD (STATUS_STACK_OVERFLOW on Windows), so the profiled test process was killed by the stack overflow, which is why the coverage session finished without producing any snapshots.

Related

dotCover on TeamCity stops working without any error

I have configured dotCover on my build on TeamCity, but sometimes it just stops working. There is no error; I just don't get any new entries in the log. The build ends with a timeout (the timeout is configured for the whole build, not a dotCover-specific timeout).
Part of the log from one of the failed builds:
! 14:55:14 Starting test execution, please wait...
14:55:14 Starting test execution, please wait...
14:55:14 Starting test execution, please wait...
14:55:14 A total of 1 test files matched the specified pattern.
14:55:14 Starting test execution, please wait...
14:55:14 A total of 1 test files matched the specified pattern.
14:55:14 Starting test execution, please wait...
14:55:14 A total of 1 test files matched the specified pattern.
14:55:15 A total of 1 test files matched the specified pattern.
14:55:15 A total of 1 test files matched the specified pattern.
15:01:37 The build MyApp::Build and Test #1.6.3-rc.3224 {buildId=634051} has been running for more than 8 minutes. Terminating...
15:01:37 Execution timeout
Previously it also happened in the middle of running tests. Today, after the dotCover update, I tried to reproduce the problem: in 30 builds it happened once, but sometimes it is more frequent (I don't have enough old data to verify this).
I updated dotCover to 2022.2.4 Cross-Platform (the newest Cross-Platform version available on TeamCity). I'm using TeamCity 2021.2.1 (build 99602) (I'm a little scared about updating it).
I have the newest version of xUnit (2.4.2).
As a temporary solution, I configured a build retry on any failure (I'm wondering if there is a possibility to retry only on timeout); a configuration sketch is below.
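For reference, this kind of automatic retry can also be expressed in TeamCity's Kotlin DSL via the "Retry build" build feature. The sketch below is only an illustration and makes assumptions: the build configuration name is made up, and the exact DSL package and parameter names vary between TeamCity versions, so check the DSL documentation for your server version (2021.2 here).
import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.buildFeatures.retryBuild

// Hypothetical build configuration; VCS roots and build steps omitted.
object BuildAndTest : BuildType({
    name = "Build and Test"
    features {
        retryBuild {
            attempts = 1        // re-queue the build once after a failure
            delaySeconds = 60   // wait before adding it back to the queue
        }
    }
})
As far as I can tell, this feature retries on any failure, which matches the behaviour described above; restricting the retry to execution timeouts only would need something custom.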

Test framework quit unexpectedly; agent library failed to init: instrument

Every time I start a debug session in IntelliJ I receive:
Error occurred during initialization of VM
agent library failed to init: instrument
Failed to find Premain-Class manifest attribute in /Users/me/.m2/repository/org/jetbrains/kotlinx/kotlinx-coroutines-core/1.4.3/kotlinx-coroutines-core-1.4.3.jar
Process finished with exit code 1
I have already:
Restarted and invalidated caches
Updated all dependencies in the POM
Re-downloaded the project
Reset the project to various old branches
Deleted local maven cache
It only occurs when I want to debug. Test, compile, run all work; just debug doesn't. Debugging works as usual in all other projects.
Does anyone have an idea what the hell is going on?
There was an issue: KTIJ-17927 Debugger: "Failed to find Premain-Class manifest attribute" when debugging main function in jvmMain in MPP with coroutines
The workaround is to enable the File | Settings | Build, Execution, Deployment | Debugger | Data Views | Kotlin | Disable coroutine agent option, or to update to the latest IDE and Kotlin plugin versions, where it should be fixed.
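Some background on why the VM refuses to start: with the coroutine agent enabled, the IDE passes a kotlinx-coroutines jar to the JVM as a -javaagent, and the JVM requires that jar's META-INF/MANIFEST.MF to declare a Premain-Class entry point. The jar the IDE picked here has no such attribute (per the error message), so the VM aborts before main() ever runs. A minimal sketch of the contract a valid agent jar fulfils, with purely illustrative names that are not part of kotlinx-coroutines:
import java.lang.instrument.Instrumentation

// Hypothetical agent class, only to illustrate what -javaagent expects.
object DemoAgent {
    @JvmStatic
    fun premain(agentArgs: String?, inst: Instrumentation) {
        // Called by the JVM before main() when the containing jar is passed via
        // -javaagent and its manifest contains "Premain-Class: DemoAgent".
        // Without that manifest attribute the JVM aborts with
        // "agent library failed to init: instrument", as in the error above.
        println("Demo agent loaded, args=$agentArgs, loaded classes=${inst.allLoadedClasses.size}")
    }
}
Disabling the coroutine agent option stops the IDE from adding that -javaagent argument, which is presumably why debugging works again after the workaround.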

TFS 2015 running SoapUI tests as unit tests - Error: Could not create the Java Virtual Machine

I'm trying to run SoapUI tests as unit tests as described here:
http://blog.simplecode.eu/post/Soap-UI-testing-with-MsTest
Everything works fine when I run the tests locally in Visual Studio.
But when I try to run those tests during the build process on TFS 2015, I get "Error: Could not create the Java Virtual Machine":
My other unit tests in that build run correctly.
Has anyone had a similar issue?
It's not really a build error in TFS. You can see the result in your screenshot: the build partially succeeded.
Currently, by default the build result is "Failed" if anything failed to compile, "Partially Succeeded" if there are any unit test failures, and "Succeeded" otherwise.
A build is reported as "Partially Succeeded" when the "Test Success" property is set to "False" and "CompilationSuccess" is "True".
So your error is more likely something in your code failing the test, or some test-related configuration in the build definition or the build agent environment, since everything works fine when you run the tests locally in Visual Studio.
It seems disabling code coverage fixed the issue.
Managed to fix it: the problem was that code coverage was enabled. Disabling it fixed the problem.

How does Hudson/Jenkins determine job result status?

I have a Hudson server that runs Maven 3 jobs to compile Java code. In one instance, a particular job's build log indicates that the compilation and unit tests all ran successfully, the build completed in a successful state, and another chained job was triggered; all of this indicated that Hudson believed the job to be successful. The log shows:
20:44:11 [INFO] BUILD SUCCESS
20:44:11 [INFO] ------------------------------------------------------------------------
20:44:11 [INFO] Total time: 1:35:43.774s
20:44:11 [INFO] Finished at: Mon Mar 24 20:44:11 CDT 2014
20:44:40 [INFO] Final Memory: 51M/495M
20:44:40 [INFO] ------------------------------------------------------------------------
20:44:42 channel stopped
20:44:42 [locks-and-latches] Releasing all the locks
20:44:42 [locks-and-latches] All the locks released
20:44:43 Archiving artifacts
20:45:33 Updating JIRA-1234
20:45:33 Description set: 1.23.567
20:45:33 Labelling Build in Perforce using ${BUILD_TAG}
20:45:33 [jobname] $ /usr/local/bin/p4 -s label -i
20:45:33 Label 'hudson-jobname-45' successfully generated.
20:45:33 [DEBUG] Skipping watched dependency update; build not configured with trigger: jobname #45
20:45:33 Finished: SUCCESS
However, the Hudson job page shows a "red ball", and that run is listed as "Last failed build (#45)". When I look at the hudson@hudson:~hudson/jobs/jobname/builds/45/build.xml file, there is a line that says
<result>FAILURE</result>
Assuming this was where the final result was captured, I changed this to
<result>SUCCESS</result>
and reloaded the page, but the red ball is still showing on the job page for that instance. I have not restarted the server to attempt to re-read the info from disk.
To be fair, there were some environmental issues around this build. Some hours later, I got a message that more than one Hudson server instance was running against the same disk image and confirmed this with
ps -ef | grep hudson.war
on the server console, showing two running processes. The job page says this run "Took 0 ms", even though the log says "Total time: 1:35:43.774s". This is Hudson ver. 2.2.1 running on "CentOS release 5.4 (Final)".
My questions:
What are the criteria for each of the job statuses? (Stable, Failed, Unstable, Aborted and Disabled statuses)?
When and where is that data captured in the running server, and can it be modified?
You have too many questions in one post.
As for "What are the criteria for each of the job statuses? (Stable, Failed, Unstable, Aborted and Disabled statuses)?"
Disabled is when a project/job is disabled from configuration (done by user with permissions). This is not a run status.
The rest are Job Run statuses:
Aborted is when a run has been cancelled/aborted. This happens when a user (with permissions) clicks the red cross button to cancel a running build. I believe SCM checkout failure also causes aborted status (but not too sure about that)
Unstable is a special status that can be applied to the job run. Usually this is done by the job configuration (such as Maven) or through plugins such as the Text-finder plugin. I am not aware of a way to induce unstable status through the command line; maybe through a Groovy script, as plugins do. Most of the time, unstable is set by the job configuration itself, indicating failed tests.
Stable and Failed are the direct result of a Build Step's exit code. If a build step, such as Execute Shell, exits with exit code 0, the step is considered successful; if the process exits with anything other than 0, the step is considered failed. If there are multiple build steps (Free-style project jobs), the last Build Step's exit code marks the status of the whole run; again, any Build Step in between that exits with anything other than 0 marks the build as failed.
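As a tiny illustration of that exit-code convention (sketched in Kotlin purely for demonstration; a real build step is usually a shell command), whatever the process prints, only its exit code decides Stable vs Failed:
import kotlin.system.exitProcess

// Hypothetical "build step" program: Hudson/Jenkins only inspects the exit code.
fun main(args: Array<String>) {
    val testsPassed = args.firstOrNull() != "fail"
    if (!testsPassed) {
        System.err.println("Simulated test failure")
        exitProcess(1)  // any non-zero exit code marks the step, and thus the run, FAILED
    }
    println("Simulated success")
    // returning from main() normally gives exit code 0, so the step counts as successful
}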
Post-build actions, such as the Text-finder plugin, can also change the status of the job run.
You are not supposed to be tinkering with job history files to change the status of previous jobs. If you need to change the job result, use post-build actions such as the Text-finder plugin, but even then, it only allows you to downgrade the result.
Here's a one-liner for printing the commit SHA1 in a Jenkins job using a Groovy post-build script:
println "git rev-parse HEAD".execute().text
You can save this to a variable or export it as a build parameter.

Why bamboo stops build if one stage failed?

I have 3 Bamboo stages: testing, staging and production. All of them extract the sources, compile and run tests. In a previous version of Bamboo, the build continued to the next stage if one failed (e.g. if one of the tests failed). In the latest version it does not go to the next stage and stops the build. How can I override this behaviour in order to continue the build even if one stage fails?
Output:
simple 17-Sep-2013 17:56:12 Failing task since return code of [c:\dev\maven3\bin\mvn.bat --batch-mode -Djava.io.tmpdir=C:\Program Files\Bamboo\temp\CS-AND-JOB1 clean install -P envbuild -DbuildNumber=4] was 1 while expected 0
simple 17-Sep-2013 17:56:12 Parsing test results...
simple 17-Sep-2013 17:56:12 Finished task 'Maven 3.x'
simple 17-Sep-2013 17:56:12 Running post build plugin 'NCover Results Collector'
simple 17-Sep-2013 17:56:12 Running post build plugin 'Clover Results Collector'
simple 17-Sep-2013 17:56:12 Running post build plugin 'Artifact Copier'
simple 17-Sep-2013 17:56:12 Finalising the build...
simple 17-Sep-2013 17:56:12 Stopping timer.
simple 17-Sep-2013 17:56:12 Build CS-AND-JOB1-4 completed.
In short, you can't override this behaviour; this is how Bamboo (like other build tools) is designed to work.
What you should be doing is:
Build once and save the output as an artifact which can be used by later stages
Test your build output by retrieving the artifact from the first stage
Deploy to X environment
You shouldn't be deploying to an environment if any of the preceding steps have failed; the point of a failing stage is to indicate that something is not right, so you should either fix the broken tests or exclude them.
I've contacted Atlassian support and they confirmed it's a bug - stages should be independent. They are investigating the problem.
