I set up Jenkins and now want to start some serious testing. We are developing a C program, so we cannot handle every possible exception. What we want to do is run some smoke tests: run our program with artificial data that exercises every part of the program to test for access violations, etc.
I wonder now: what will happen if the job causes the program to crash?
Can I catch that, or does the job hang? How can I handle these failures inside a pipeline?
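For what it's worth, a crash does not hang the job: the process terminates, the shell step sees a non-zero exit code, and the step fails. You can catch that in a pipeline instead of letting the whole build abort. A minimal declarative-pipeline sketch, where ./smoke_test and the input file are hypothetical names:

pipeline {
    agent any
    stages {
        stage('Smoke test') {
            steps {
                script {
                    // returnStatus: true returns the exit code instead of
                    // failing the step immediately.
                    def rc = sh(script: './smoke_test data/artificial_input.bin', returnStatus: true)
                    if (rc != 0) {
                        // On Linux, a crash from a signal (e.g. SIGSEGV)
                        // usually surfaces as 128 + signal number.
                        unstable("smoke test exited with code ${rc}")
                    }
                }
            }
        }
    }
}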
I have several Cypress spec files running against a web app in CI/CD build pipelines.
For whatever reason, there is a gap in time between each spec file being run in the pipeline, so the more spec files we add, the slower the build runs. I can see in the logs that it's about 30 seconds to a full minute between each spec file (I turned off the video recording option to make sure that wasn't somehow related). Recently, it has begun to stall out completely, and the build step fails from timing out.
To verify it wasn't related to the number of tests, I did an experiment by combining all the different tests into a single spec file and running only that file. This worked perfectly - because there was only a single spec file to load, the build did not experience any of those long pauses in between running multiple spec files.
Of course, placing all our tests into a single file is not ideal. I know that with the Cypress Test Runner there is a way to run all tests across multiple spec files as if they were in a single file, using the "Run all specs" button. From the Cypress docs:
"But when you click on "Run all specs" button after cypress open, the Test Runner bundles and concatenates all specs together..."
I want to accomplish the exact same thing through the command line. Does anyone know how to do this? Or accomplish the same thing in another way?
Using cypress run is not the equivalent. Although this command runs all tests, it still fires up each spec file separately (hence the delay issue in the pipeline).
It seems like they don't want to do it that way. Sorry for not having a better answer.
https://glebbahmutov.com/blog/run-all-specs/
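One workaround in the spirit of that post (not officially supported by Cypress, so treat it as a sketch) is a single "barrel" spec that imports every other spec file, so only one spec has to be bundled and started. The file and spec names here are made up:

// cypress/integration/all.spec.js
import './login.spec'
import './checkout.spec'
import './profile.spec'

Then run only that file in the pipeline:

cypress run --spec cypress/integration/all.spec.js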
What is an elegant approach to debugging a large E2E test?
I am using the TestCafe automation framework, and I'm currently facing multiple tests that are flaky and require fixes.
The problem is that every time I modify something in the test code, I need to run the entire test from the start to see whether the update succeeds or not.
I would like to hear ideas about strategies for debugging an E2E test without losing your mind.
Current debug methods:
Using the built-in TestCafe debugging mechanism at the problematic area of the code and trying to comment out everything before that line.
But that really doesn't feel like the best approach.
When there is prerequisite data such as user credentials, URLs, etc., I manually declare it again just before the debug() call.
PS: I know that tests should be as focused and as small as possible, but this is what we have now.
Thanks in advance
You can try using the flag
--debug-on-fail
This pauses the test when it fails and allows you to view the tested page and determine the cause of the fail.
Also use test.only to specify that only a particular test or fixture should run while all others are skipped.
https://devexpress.github.io/testcafe/documentation/using-testcafe/command-line-interface.html#--debug-on-fail
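For example (the browser and file names are placeholders):

testcafe chrome tests/flaky-test.js --debug-on-fail

test.only('the one test I am fixing', async t => {
    // ...test steps...
    await t.debug() // pauses here so the page can be inspected
})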
You can use the takeScreenshot action to capture the current state of the application during the test. TestCafe stores the screenshots inside the screenshots sub-directory and names each file with a timestamp. Alternatively, you can pass the takeOnFails option to the screenshots command-line flag to automatically capture the screen whenever a test fails, i.e. at the point of failure.
Another option is to slow down the test so it's easier to observe while it is running. You can adjust the speed using the --speed command-line flag: 1 is the fastest speed and 0.01 the slowest. You can then record the test run using the --video command-line flag, but you need to set up FFmpeg for this.
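Put together, a run using these options might look like the following (the paths are placeholders, and the key=value screenshot syntax assumes a recent TestCafe version; --video requires FFmpeg to be available):

testcafe chrome tests/ --screenshots path=artifacts/screenshots,takeOnFails=true --speed 0.5 --video artifacts/videos

And inside a test:

await t.takeScreenshot() // saved with a timestamped name under the screenshots directory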
I am using PhantomJS to execute my Jasmine specs (v2.0) on TeamCity as part of my CI process. The problem I have is that the process never exits when it has finished running the specs. I have also seen it exit early with a timeout. I am using the TeamCity Jasmine reporter.
More detail: I have tried a basic setTimeout-based mechanism found in a blog post (will add when I find the URL) and the standard waitFor-driven script from the PhantomJS.org page. In both cases, the run never finishes. When running it through TeamCity, the build never finishes, though the log shows that all the specs have been run; if I run it directly from the command line, I see one of two things: either it finishes and then sits there indefinitely, or I get a waitFor timeout message in the middle of running the specs (i.e. it does not finish).
Final detail: this only happens when I include two particular specs. One is a long spec containing dozens of tests, the other a spec for a simple custom collection type. The order in which I include them makes no difference. If I run either spec alone, Phantom exits correctly. If I include either of these specs with all the others, Phantom exits correctly. It only seems to hang when BOTH these two particular specs are included.
I have checked that I have no Jasmine clocks left uninstalled.
The quantity of JS involved is too large to post.
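For reference, the waitFor-driven harness mentioned above boils down to polling the spec-runner page until the Jasmine reporter signals completion and then calling phantom.exit(); if that signal is never observed, Phantom sits there forever or reports the timeout. A stripped-down sketch (the selector and timeout are assumptions and vary by Jasmine version):

var page = require('webpage').create();

function waitFor(test, onReady, timeoutMs) {
    var start = Date.now();
    var interval = setInterval(function () {
        if (test()) {
            clearInterval(interval);
            onReady();
        } else if (Date.now() - start > timeoutMs) {
            console.log('waitFor timeout');
            phantom.exit(1);
        }
    }, 100);
}

page.open('SpecRunner.html', function () {
    waitFor(function () {
        return page.evaluate(function () {
            // assumed: the HTML reporter adds a duration element when done
            return !!document.querySelector('.jasmine-duration, .duration');
        });
    }, function () {
        phantom.exit(0);
    }, 60000);
});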
I have an issue with Jenkins running a PowerShell script. Long story short: the script takes about 8x the execution time under Jenkins compared to running it manually on the server (slave), where it takes just a few minutes.
I'm wondering why.
The script contains functions which invoke commands like & msbuild.exe or & svn commit. I found out that the script hangs on the lines where the aforementioned commands are executed. The result is that Jenkins times out because the script takes that long. I could raise the timeout threshold in the Jenkins job configuration, but I don't think that is the solution to the problem.
There is no error output or any information about why it takes that long, and I have no further ideas about the reason. Maybe one of you could tell me how Jenkins invokes those commands internally.
This is what Jenkins does (Windows batch plugin):
powershell -File %WORKSPACE%\ScriptHead\DeployOrRelease.ps1
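One thing worth ruling out first, since & msbuild.exe and & svn commit hang exactly where a console prompt would appear: under Jenkins there is no interactive console, so anything that prompts (SVN asking for credentials or certificate acceptance, for example) blocks forever. A hedged variant of the invocation that refuses to prompt, plus a non-interactive svn call for inside the script:

powershell -NonInteractive -NoProfile -ExecutionPolicy Bypass -File %WORKSPACE%\ScriptHead\DeployOrRelease.ps1

& svn commit --non-interactive -m "automated commit" 2>&1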
I had created my own PowerShell CI service before I found that Jenkins has its own plugin for this. But in my implementation, and in my current job configs, we follow a simple segregation principle: more, smaller steps are better. I found that my CI service works better when it is separated into different steps (in case of an error, it is also a lot easier to do root-cause analysis). The single responsibility principle is helpful here too. So, as in Jenkins, we have pre-, post-, build, and email steps as separate scripts.
Regarding msbuild.exe: as far as I remember, in my case there were issues related to operations on file-system paths. When the script was divided into separate functions (with additional parameter checks), we got better performance.
Use "divide and conquer" technique. You have two choices: modify your script so that will display what is doing and how much it takes for every step. Second option is to make smaller scripts to perform actions like:
get the code source,
compile/build the application,
run the test,
create a package,
send the package,
archive the logs
send notification.
The most problematic step is usually the first one: getting the source code from Git, SVN, Mercurial, or whatever version control system you have. Make sure this step is not embedded in your script.
During the job run, Jenkins captures the output and uses AJAX to display the result in your browser. In the script, make sure you flush standard output after every step (or every few steps). Some languages buffer standard output, so you only see the results at the end.
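In PowerShell this mostly means writing progress to the output stream per step rather than collecting it at the end, for example (the solution name is a placeholder):

Write-Output "=== build started $(Get-Date -Format o) ==="
& msbuild.exe MySolution.sln /verbosity:minimal 2>&1 | ForEach-Object { Write-Output $_ }
Write-Output "=== build finished, exit code $LASTEXITCODE ==="

Piping through ForEach-Object streams the build output line by line, so the Jenkins console stays live instead of going silent for minutes.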
You can also create log files, which are helpful to archive and to verify the activity status of older runs. From my experience, using Jenkins with more than 10 steps requires you to create a specialized application that can run multiple steps, like Robot Framework.
I'm using MSTest in Visual Studio 2010 and have the need to restore my database after all tests have run.
What I did was decorate a method with the AssemblyCleanupAttribute attribute.
<AssemblyCleanupAttribute()>
Shared Sub AssemblyCleanup()
' Restore my databases which takes a long time...
End Sub
The problem is that the cleanup takes a considerable amount of time, so much so that the timeout is reached.
The only reason I realized that a timeout occurred is that in debug mode the Output window reports "...QTAgent32.exe, AgentObject: Cleanup: Timeout reached in cleaning up the agent.". Hence it fails very quietly, and I would have loved it if MSTest reported a test run error.
What is the best way to detect and report the timeout? My ideal solution would be to report the timeout as a test run error.
In short, you cannot cause MSTest to report an error if AssemblyCleanup times out.
If you are encountering this issue, then at this point you need to consider whether this limitation of MSTest is too great for you. There are other, and IMHO better, test frameworks out there.
If you decide to stick with MSTest and just want to ensure, at least, that the code/script in AssemblyCleanup runs to completion, then you can run the cleanup code as a separate process. That way, even if AssemblyCleanup internally triggers a Thread.Abort, your process runs to completion. It's messy, though...
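A minimal sketch of that workaround, where restore-db.cmd is a hypothetical script that does the actual restore:

Imports System.Diagnostics

<AssemblyCleanupAttribute()>
Shared Sub AssemblyCleanup()
    ' Launch the restore in its own process so it survives even if
    ' MSTest aborts this thread when the cleanup timeout is hit.
    ' "restore-db.cmd" is a hypothetical script.
    Dim psi As New ProcessStartInfo("cmd.exe", "/c restore-db.cmd")
    psi.UseShellExecute = False
    Process.Start(psi)
    ' Deliberately not calling WaitForExit(): waiting would just
    ' re-expose the timeout; the restore finishes after the run ends.
End Sub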
Why not wrap the contents of each test in a transaction and roll back the transaction at the end of the test? See here for more information: http://msdn.microsoft.com/en-us/library/bb381703(v=vs.80).aspx#dtbunttsttedp_topic7
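A sketch of that pattern with TransactionScope, in VB to match the snippet above (disposing the scope without calling Complete() rolls everything back, so the slow restore in AssemblyCleanup becomes unnecessary):

Imports System.Transactions

<TestClass()>
Public Class DatabaseTests
    Private _scope As TransactionScope

    <TestInitialize()>
    Public Sub Setup()
        ' Everything the test changes happens inside this transaction.
        _scope = New TransactionScope()
    End Sub

    <TestCleanup()>
    Public Sub Teardown()
        ' Dispose without Complete() => automatic rollback.
        _scope.Dispose()
    End Sub
End Class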