Why does NUnit not start executing the next test when a thread is suspended? - async-await

I would like to know why NUnit does not start the next async test when await Task.Delay is called.
I have a test project where each test submits a job to a service in the cloud, polls for the results, and then asserts the results that come back. The cloud service can run thousands of jobs in parallel. In production, a job can take up to a couple hours to run, but the test jobs I am submitting each run in less than a minute.
Currently up to 10 tests run in parallel, but each test synchronously waits on its results to come back before the next test can start.
I am trying to think of a way to make all the tests complete faster (I understand the case for unit tests; we have those as well, but these tests serve a different purpose). One idea is to use the async/await functionality built into .NET to suspend the thread one test is running on and start the next test while the first test is waiting on results from the cloud service. I built a little test project to see if this idea would work.
These tests were written on .NET Framework v4.7.2, using NUnit 3.13.2.
using NUnit.Framework;
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

namespace TestNUnit
{
    [TestFixture]
    public class Test
    {
        private static ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();

        [Test]
        public async Task Test1()
        {
            _queue.Enqueue($"Starting test 1 {DateTime.Now}");
            await Task.Delay(12000);
            Assert.IsTrue(true);
            _queue.Enqueue($" Ending test 1 {DateTime.Now}");
        }

        [Test]
        public async Task Test2()
        {
            _queue.Enqueue($"Starting test 2 {DateTime.Now}");
            await Task.Delay(10000);
            Assert.IsTrue(true);
            _queue.Enqueue($" Ending test 2 {DateTime.Now}");
        }

        [Test]
        public async Task Test3()
        {
            _queue.Enqueue($"Starting test 3 {DateTime.Now}");
            await Task.Delay(14000);
            Assert.IsTrue(true);
            _queue.Enqueue($" Ending test 3 {DateTime.Now}");
        }

        [OneTimeTearDown]
        public void Cleanup()
        {
            File.AppendAllLines("C:\\temp\\nunit.txt", _queue.ToArray());
        }
    }
}
In my AssemblyInfo file I added the following lines to run 2 tests in parallel.
[assembly: Parallelizable(ParallelScope.All)]
[assembly: LevelOfParallelism(2)]
I hoped to see all 3 tests start before any of them finished. Instead, what I see is that 2 tests start and the third starts only after one of the first two finishes. I tried running both in Visual Studio and from the command line using the NUnitLite NuGet package.
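For reference, the NUnitLite run uses the usual self-executing entry point (a minimal sketch, assuming the standard AutoRun class from the NUnitLite package):
public static class Program
{
    // NUnitLite entry point: runs the tests in this assembly, passing command-line args through
    public static int Main(string[] args) => new NUnitLite.AutoRun().Execute(args);
}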
I tried the same tests with xUnit and MSTest v2. When running the command-line version of xUnit, I saw the behavior I wanted. Everywhere else, though, one test has to finish before the third test starts.
Does anyone know why this only works when running the command-line version of xUnit? Thanks!
Here is the output when the test framework waits for 1 test to finish before starting the third test:
Starting test 2 3/4/2022 7:40:18 AM
Starting test 1 3/4/2022 7:40:18 AM
Ending test 2 3/4/2022 7:40:28 AM
Starting test 3 3/4/2022 7:40:28 AM
Ending test 1 3/4/2022 7:40:30 AM
Ending test 3 3/4/2022 7:40:42 AM
Here is the output when all three tests start before any finishes:
Starting test 1 3/3/2022 4:19:09 PM
Starting test 3 3/3/2022 4:19:09 PM
Starting test 2 3/3/2022 4:19:09 PM
Ending test 2 3/3/2022 4:19:19 PM
Ending test 1 3/3/2022 4:19:21 PM
Ending test 3 3/3/2022 4:19:23 PM

NUnit (and most likely the other framework runners where this didn't work for you) creates the number of threads you specify and dispatches tests onto them. So when every one of those threads is occupied by a test that has entered a wait state, no further tests can start.
The assumption in this design is that wait states are rare and short, which is almost always true for unit tests, which was the original intended application for most of these runners - most certainly for nunitlite.
For a runner to continue after all the originally allocated threads were waiting, it would have to be re-designed to create new threads dynamically.
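Conceptually, the dispatcher behaves like the sketch below: a fixed pool of dedicated worker threads, each of which runs one test at a time to completion, including any awaits. (This is a deliberately simplified illustration, not NUnit's actual source.)
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class FixedWorkerDispatcher
{
    private readonly BlockingCollection<Func<Task>> _tests = new BlockingCollection<Func<Task>>();

    public FixedWorkerDispatcher(int workers)
    {
        for (int i = 0; i < workers; i++)
        {
            // Each worker is a dedicated thread, created up front (no ThreadPool).
            new Thread(() =>
            {
                foreach (var test in _tests.GetConsumingEnumerable())
                {
                    // The worker blocks here until the whole test, including its
                    // awaits, has finished; it cannot pick up another test while
                    // this one is merely waiting on Task.Delay or a network call.
                    test().GetAwaiter().GetResult();
                }
            }) { IsBackground = true }.Start();
        }
    }

    public void Enqueue(Func<Task> test) => _tests.Add(test);
}
With 2 workers and 3 queued tests, the third test cannot start until a worker finishes one of the first two, which is exactly the behavior you observed.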
UPDATE:
Responding to the comment about the ThreadPool... The NUnit parallel dispatcher doesn't use the ThreadPool. At the time I wrote it, the ThreadPool wasn't available on all platforms we supported, so each TestWorker creates its own thread. If it were written today, it might well use the ThreadPool as you imagined.
I'm afraid that this may answer the question of your title without actually helping you. But it does suggest a workaround...
Since the only threads you will ever get are those initially allocated, you may simply increase the number of threads. For example, if you estimate that your particular set of tests will be in a wait state 50% of the time, use 20 threads rather than 10. Experiment with the number until you hit the best level for your workload. If the ideal number turns out (as is likely) to be different on your desktop versus your CI/CD environment, then pass it as an option on the command line rather than using the attribute.
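For illustration, a minimal sketch of that workaround (the count of 20 is just an assumed starting point to tune; --workers is the NUnit console/NUnitLite option for overriding the worker count per run):
// AssemblyInfo.cs: allow far more worker threads than cores,
// since these tests spend most of their time waiting on the cloud service
[assembly: Parallelizable(ParallelScope.All)]
[assembly: LevelOfParallelism(20)]
Or, instead of hard-coding the attribute, pass the number on the command line, e.g. --workers=20, so your desktop and CI/CD environments can use different values.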

Related

How to run a list of multiple test cases in Cypress and how to make the command string shorter

Let's say I have 300 test cases and 100 of them are failing. I now want to run those 100 test cases again. (Note: I have already rerun the Cypress tests with the appropriate option, and even rerun them to find flaky test cases.)
I now have the list of the 100 failing test cases in a notepad or an Excel sheet.
Is there any mechanism to run these test cases in Cypress?
If I go with
cypress run --spec "cypress/integration/one.spec.ts,cypress/integration/two.spec.ts"
then those 100 test cases make for a big string, and it looks like
cypress run --spec "cypress/integration/one.spec.ts,cypress/integration/two.spec.ts, ..... hundred.spec.ts"
which leaves the command as a huge piece of text that is complex to maintain. So is there any way to run only the list of failing test cases, at whatever time I want, after fixing the application code or data?
Any suggestions would be helpful.
More info
I am looking for a way to run multiple test cases referenced in one text file or dictionary.
For example, if I run all 100 test cases and 20 of them fail, I would record the failing file names and paths in a file or dictionary,
and I then want Cypress to take this file and run all the test case references that are failing, thereby running only those specific failing test cases.
(Note: I am aware of the retries that can be configured for the execution.)
npx cypress run --spec "cypress/e2e/folderName" run from the command line executes all the specs in that folder.
To run only a specific suite, use describe.only:
describe.only("2nd describe", () => {
  it("Should check xx", () => { });
  it("Should check yy", () => { });
});
To run a specific test case, use it.only:
it.only("Should check yy", () => { });

How to performance test workflow execution?

I have 2 APIs:
Create a workflow (HTTP POST request)
Check workflow status (HTTP GET request)
I want to performance test how much time a workflow takes to complete.
I tried two ways:
Option 1: Created a Java test that triggers the workflow create API and then polls the status API to check if the status turns to CREATED. I measure the time taken in this process, which gives me the performance results.
Option 2: Used Gatling to do the same:
val createWorkflow = http("create").post("")
  .body(ElFileBody("src/main/resources/weather.json")).asJson
  .check(status.is(200))
  .check(jsonPath("$.id").saveAs("id"))

val statusWorkflow = http("status").get("/${id}")
  .check(jsonPath("$.status").saveAs("status")).asJson.check(status.is(200))

val scn = scenario("CREATING")
  .exec(createWorkflow)
  .repeat(20) { exec(statusWorkflow) }
The Gatling one didn't really work (or I am doing it in some wrong way). Is there a way in Gatling I can merge multiple requests and do something similar to Option 1?
Is there some other tool that can help me performance test such scenarios?
I think something like the following should work, using Gatling's tryMax:
.tryMax(100) {
  pause(1)
    .exec(http("status").get("/${id}")
      .check(jsonPath("$.status").saveAs("status")).asJson.check(status.is(200)))
}
Note: I didn't try this out locally. More information about tryMax:
https://medium.com/@vcomposieux/load-testing-gatling-tips-tricks-47e829e5d449 (Polling: waiting for an asynchronous task)
https://gatling.io/docs/current/advanced_tutorial/#step-05-check-and-failure-management

Is it possible to run threads at different time intervals - JMeter

I have 8 threads in JMeter, which I execute every 5 minutes using the Task Scheduler.
Now I have included 2 more threads which should run only 5 times per day (e.g. at 12 AM, 5 AM, 10 AM, ...).
At those moments the execution shall be 8 + 2 threads; the remaining time it shall be only 8 threads.
Is it possible to configure such a use case in JMeter?
If you're going to use the same .jmx script and want to execute either 8 or 10 "threads" (whatever that means in your setup), you can go for:
If Controller - for conditional execution of this or that test element
__groovy() function to check the current time; an example condition which triggers the test at, for instance, 5 AM would be:
${__groovy(Calendar.getInstance().get(Calendar.HOUR_OF_DAY) == 5 && Calendar.getInstance().get(Calendar.MINUTE) == 0,)}

How to match the number of tests run in the Maven log with the source code?

I am trying to link the number of tests run reported in the log file https://api.travis-ci.org/v3/job/29350712/log.txt of Facebook's presto project with the actual tests in the source code.
The source code for this build run is located at the following link: https://github.com/prestodb/presto/tree/bd6f5ff8076c3fdb2424406cb5a10f506c7bfbeb/presto-raptor/src/test/java/com/facebook/presto/raptor
I am counting the number of places where '@Test' appears in the source code; that should match the number of tests run in the log file.
In most cases this works. But for some subprojects, like 'presto-raptor', the log reports 329 tests run while I find only 27 occurrences of @Test in the source code.
I noticed that some tests are preceded by @Test(singleThreaded = true).
Here is an example at the following link:
https://github.com/prestodb/presto/blob/bd6f5ff8076c3fdb2424406cb5a10f506c7bfbeb/presto-raptor/src/test/java/com/facebook/presto/raptor/metadata/TestRaptorSplitManager.java
@Test(singleThreaded = true)
public class TestRaptorSplitManager
{
I expected to have the same number of tests run as in the log file, but it seems the tests run in parallel (multi-threaded).
My question is: how do I match the 329 tests run with the actual test cases in the source code?
TestNG counts the number of tests based on the following (apart from the regular way of counting tests):
Data-driven tests are counted as new tests. So if you have a @Test that is powered by a data provider (and let's say the data provider yields 5 sets of data), then to TestNG there were 5 tests run.
Tests with multiple invocation counts are also counted as individual tests. So, for example, if you have @Test(invocationCount = 5), then TestNG reports this test as 5 tests, which is what the Maven console is showing as well.
So I am not sure how you could build a matching capability that cross-checks this against the source code (especially when your tests involve a data provider).

Testing a long process in Xamarin Test Cloud

I have a question about Xamarin Test Cloud, hope someone can point me in the right direction.
When a user taps a button in my app, a process runs for around 30 minutes. I added a unit test project and it runs perfectly in the emulator.
However, I need to test it on real devices, so I decided to use Xamarin Test Cloud. When I run the test there, it doesn't complete. As I said, it should take 30 minutes, but the test finishes almost immediately.
Here is the code of my Test:
[Test]
[Timeout(Int32.MaxValue)]
public async void Optimize()
{
    await Task.Run(async () =>
    {
        app.Screenshot("Start " + DateTime.Now);
        app.Tap(x => x.Marked("btnOptimize"));
        await Task.Delay(120000);
        app.Screenshot("End " + DateTime.Now);
    });
}
If I run the test in the emulator, the screenshot names are (for instance) "Start 12:00:00" and "End 12:30:00" respectively (so it runs for 30 minutes, as expected). However, in Test Cloud I get (for instance) "Start 12:00:00" and "End 12:02:00", which means the test runs for only 2 minutes, and that is only because I added the delay. Without the delay, it runs for only 5 seconds.
Is this the right approach? I can use a delay of 1800000 ms so the test completes in 30 minutes, but what if I don't know the time in advance?
Thank you and sorry if it's a basic question
Something like this should do the job:
[Test]
public void Optimize()
{
    app.Screenshot("Start");
    app.Tap("btnOptimize");
    app.WaitForElement("otherButton", "Timed out waiting for Button", new TimeSpan(0, 30, 0));
    app.Screenshot("End");
}
Where "otherButton" becomes visible when the task is done. There are other Wait methods available in the API.
But, note that the Xamarin Test Cloud has a thirty minute maximum per test by default. That default can be modified by contacting Xamarin Test Cloud support.
Also, it is not good practice to include non-deterministic information, or any information that may vary per device or run, in your screenshot titles. When you run on more than one device, the test report steps and screenshots are collated partially by screenshot title, so titles should match across devices for best results.
While I have never tried a timeout length of 30 minutes, Calabash allows you to wait for a condition using wait_for:
The following snippet is an example of how to use wait_for to detect the presence of a button on the screen:
wait_for(timeout: 60, timeout_message: "Could not find 'Sign in' button") do
  element_exists("button marked:'Sign in'")
end
Ref: https://docs.xamarin.com/guides/testcloud/calabash/working-with/timeouts/
Just an FYI: 30 minutes is a really long time for a mobile device to be "background processing" without user interface interaction. If you are targeting iOS/the Apple App Store, this is a death sentence for getting Apple submission approval, as they will never wait that long for an app to process something.
You need to identify the tap target by Id and add WaitForElement (the first argument is the query you want to wait on) with the correct syntax, as given below. It should work for you.
app.Screenshot("Start " + DateTime.Now);
app.WaitForElement(x => x.Id("btnOptimize"), "Optimization timed out", new System.TimeSpan(0, 30, 0), null, null);
app.Tap(x => x.Id("btnOptimize"));
app.Screenshot("End " + DateTime.Now);
