I am using Katalon Studio for web and API testing. For performance testing, I use JMeter, as Katalon Studio does not support performance tests.
I want to know if the following is possible. Every test case is written in Katalon Studio for web testing. I need a way for the test cases, when they run in Katalon Studio, to also perform performance/load testing. It would be hard to rewrite everything in JMeter.
Is there any tool that watches the running test cases and also performs performance measurements? For example, one test case covers the login page: the user logs in to the page and then logs out. For that, I want to get information about how long the login and the logout took.
You can do some kind of performance testing even with Katalon Studio itself, for example by using the System.currentTimeMillis() method like this:
long ts1 = System.currentTimeMillis()
WebUI.openBrowser("")
// test steps
WebUI.closeBrowser()
long ts2 = System.currentTimeMillis()
println("Test duration: "+(ts2-ts1)+ " miliseconds.")
Basically, you capture the current time at any two moments during your test (that is ts1 and ts2) and measure the difference between them.
So, login duration test might look something like this:
long ts1 = System.currentTimeMillis()
WebUI.setText(findTestObject('username-test-object'), 'username')
WebUI.setText(findTestObject('password-test-object'), 'password')
WebUI.click(findTestObject('login-button-test-object'))
// wait (up to 30 s) for the login form to disappear before stopping the clock
WebUI.waitForElementNotPresent(findTestObject('login-button-test-object'), 30)
long ts2 = System.currentTimeMillis()
println("Login duration: " + (ts2 - ts1) + " milliseconds.")
I would like to know why NUnit does not start the next async test when await Task.Delay is called.
I have a test project where each test submits a job to a service in the cloud, polls for the results, and then asserts the results that come back. The cloud service can run thousands of jobs in parallel. In production, a job can take up to a couple hours to run, but the test jobs I am submitting each run in less than a minute.
Currently up to 10 tests run in parallel, but each test synchronously waits for its results to come back before the next test can start.
I am trying to think of a way to make all the tests complete faster (I understand these are not unit tests; we have those as well, but these tests have a different purpose). One idea is to use the async/await functionality built into .NET to suspend the thread one test is running on and start the next test while the first test is waiting on results from the cloud service. I built a little test project to see if this idea would work.
These tests were written on .NET Framework v4.7.2, using NUnit 3.13.2.
using NUnit.Framework;
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

namespace TestNUnit
{
    [TestFixture]
    public class Test
    {
        private static ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();

        [Test]
        public async Task Test1()
        {
            _queue.Enqueue($"Starting test 1 {DateTime.Now}");
            await Task.Delay(12000);
            Assert.IsTrue(true);
            _queue.Enqueue($" Ending test 1 {DateTime.Now}");
        }

        [Test]
        public async Task Test2()
        {
            _queue.Enqueue($"Starting test 2 {DateTime.Now}");
            await Task.Delay(10000);
            Assert.IsTrue(true);
            _queue.Enqueue($" Ending test 2 {DateTime.Now}");
        }

        [Test]
        public async Task Test3()
        {
            _queue.Enqueue($"Starting test 3 {DateTime.Now}");
            await Task.Delay(14000);
            Assert.IsTrue(true);
            _queue.Enqueue($" Ending test 3 {DateTime.Now}");
        }

        [OneTimeTearDown]
        public void Cleanup()
        {
            File.AppendAllLines("C:\\temp\\nunit.txt", _queue.ToArray());
        }
    }
}
In my AssemblyInfo file I added the following lines to run 2 tests in parallel.
[assembly: Parallelizable(ParallelScope.All)]
[assembly: LevelOfParallelism(2)]
I hoped to see that all 3 tests started before any test finished. Instead, what I see is that 2 tests start and the third test starts only after one of the other two tests finishes. I tried running in both Visual Studio and on the command line using the NUnitLite NuGet package.
I tried the same test with xUnit and MSTest v2. When running the command line version of xUnit, I saw the behavior I wanted to see. Everywhere else, though, I see that one test has to finish before the third test starts.
Does anyone know why this would only work when running the command line version of xUnit? Thanks!
Here is the output when the test framework waits for 1 test to finish before starting the third test:
Starting test 2 3/4/2022 7:40:18 AM
Starting test 1 3/4/2022 7:40:18 AM
Ending test 2 3/4/2022 7:40:28 AM
Starting test 3 3/4/2022 7:40:28 AM
Ending test 1 3/4/2022 7:40:30 AM
Ending test 3 3/4/2022 7:40:42 AM
Here is the output when all three tests start before any finishes:
Starting test 1 3/3/2022 4:19:09 PM
Starting test 3 3/3/2022 4:19:09 PM
Starting test 2 3/3/2022 4:19:09 PM
Ending test 2 3/3/2022 4:19:19 PM
Ending test 1 3/3/2022 4:19:21 PM
Ending test 3 3/3/2022 4:19:23 PM
NUnit (and most likely the other framework runners where this didn't work for you) sets up the number of threads you specify and uses them to run tests. So, when you reach a point where the tests on all of those threads have entered a wait state, no further tests can be started.
The assumption in this design is that wait states are rare and short, which is almost always true for unit tests, which was the original intended application for most of these runners - most certainly for nunitlite.
For a runner to continue after all the originally allocated threads were waiting, it would have to be re-designed to create new threads dynamically.
UPDATE:
Responding to the comment about the ThreadPool... The NUnit parallel dispatcher doesn't use the ThreadPool. At the time I wrote it, the ThreadPool wasn't available on all platforms we supported, so each TestWorker creates its own thread. If it were written today, it might well use the ThreadPool as you imagined.
I'm afraid that this may answer the question of your title without actually helping you. But it does suggest a workaround...
Since the only threads you will ever get are those initially allocated, you may simply increase the number of threads. For example... if you estimate that your particular set of tests will be in a wait state 50% of the time, use 20 threads rather than 10. Experiment with the number until you hit the best level for your workload. If the ideal number turns out (as is likely) to be different on your desktop versus your CI/CD environment, then provide it as an option to the command rather than using the attribute.
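For example, with the NUnit 3 console runner (or the NUnitLite runner mentioned in the question) the worker count can be passed on the command line, which makes it easy to use a different value per environment rather than baking it into the assembly. This is only a sketch, assuming the test assembly is named TestNUnit.dll and that your runner version supports the --workers option:
nunit3-console.exe TestNUnit.dll --workers=20
Here 20 worker threads are requested on the assumption that the tests spend roughly half their time waiting; adjust the number until you find the best level for your workload.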
I have 2 APIs:
- Create a workflow (HTTP POST request)
- Check workflow status (HTTP GET request)
I want to performance-test how much time the workflow takes to complete.
Tried two ways:
Option 1: Created a Java test that triggers the workflow-create API and then polls the status API to check whether the status turns to CREATED. I measure the time taken by this process, which gives me my performance results.
Option 2: Tried to do the same using Gatling:
val createWorkflow = http("create").post("").body(ElFileBody("src/main/resources/weather.json")).asJson.check(status.is(200))
  .check(jsonPath("$.id").saveAs("id"))

val statusWorkflow = http("status").get("/${id}")
  .check(jsonPath("$.status").saveAs("status")).asJson.check(status.is(200))

val scn = scenario("CREATING")
  .exec(createWorkflow)
  .repeat(20) { exec(statusWorkflow) }
The Gatling one didn't really work (or I am doing it some wrong way). Is there a way in Gatling to combine multiple requests and do something similar to Option 1?
Is there some other tool that can help me out to performance test such scenarios?
I think something like below should work when using Gatling's tryMax
.tryMax(100) {
  pause(1)
    .exec(
      http("status").get("/${id}").asJson
        .check(status.is(200))
        .check(jsonPath("$.status").saveAs("status"))
        // fails while the status is not yet CREATED, so tryMax keeps retrying
        .check(jsonPath("$.status").is("CREATED"))
    )
}
Note: I didn't try this out locally. More information about tryMax:
https://medium.com/@vcomposieux/load-testing-gatling-tips-tricks-47e829e5d449 (Polling: waiting for an asynchronous task)
https://gatling.io/docs/current/advanced_tutorial/#step-05-check-and-failure-management
Is there any way to calculate the count of requests meeting an SLA in JMeter from the UI? For example, the count of requests whose response time is < 400 ms?
I had a similar problem a while ago and wrote a little tool - see https://github.com/sgoeschl/jmeter-sla-report
The simplest solution is to use a Simple Data Writer to save Label, Elapsed Time and/or Latency to a CSV file, which will generate raw output like this:
elapsed,label
423,sampler1
452,sampler2
958,sampler1
152,sampler1
And from here you can take it to any other tool (awk, Excel, etc.) to filter results you want.
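For instance, a quick sketch with awk (assuming the output above is saved as results.csv, with the header line shown) that counts the samples faster than 400 ms:
awk -F, 'NR > 1 && $1 < 400 { count++ } END { print count + 0 }' results.csv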
Another option is to use a BeanShell Listener to generate such a report on the fly. Something like this:
// sampleResult is the result of the sample this listener is attached to
long responseTime = sampleResult.getEndTime() - sampleResult.getStartTime();
if (responseTime < 400) {
    // append the label and response time of the matching sample to the report
    FileOutputStream f = new FileOutputStream("myreport.csv", true);
    PrintStream p = new PrintStream(f);
    this.interpreter.setOut(p);
    print(sampleResult.getSampleLabel() + "," + responseTime);
    f.close();
}
This method, though, may not be performant enough if you are planning to run a stress test with many (more than 200-300) users and many operations that "fit" the filter.
JMeter provides, out of the box, a web report that gives tons of information about your load test using standard metrics like APDEX, percentiles, etc.
See this:
http://jmeter.apache.org/usermanual/generating-dashboard.html
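For reference, that dashboard can also be generated from an existing results file after the run; a sketch, assuming JMeter 3.0 or later and a results file named results.jtl:
jmeter -g results.jtl -o dashboard
Here -g points at the results file and -o is the (empty or non-existent) folder where the HTML report is written.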
If you still want this, do the following:
Add a Duration Assertion as a child of your request.
Any response that takes longer than the configured duration will be marked as failed.
And in the report, you'll have the count of successful requests, i.e. those meeting this SLA criterion.
I have a question about Xamarin Test Cloud, hope someone can point me in the right direction.
When a user taps a button in my app, a process runs for around 30 minutes. I added a Unit Test project and it runs perfectly in the emulator.
However, I need to test it on real devices, so I decided to use Xamarin Test Cloud. When I run the test there, it doesn't complete. As I said, it should take 30 minutes, but the test finishes almost immediately.
Here is the code of my Test:
[Test]
[Timeout(Int32.MaxValue)]
public async void Optimize()
{
    await Task.Run(async () =>
    {
        app.Screenshot("Start " + DateTime.Now);
        app.Tap(x => x.Marked("btnOptimize"));
        await Task.Delay(120000);
        app.Screenshot("End " + DateTime.Now);
    });
}
If I run the test in the emulator, the screenshot names are (for instance) "Start 12:00:00" and "End 12:30:00" respectively (so it means that it runs for 30 minutes, as expected). However, in Test Cloud I get (for instance) "Start 12:00:00" and "End 12:02:00", which means that the test runs for only 2 minutes. But that's because I added the delay. Without the delay, it will run for only 5 seconds.
Is that what I need to do? I could set the delay to 1800000 so the test completes in 30 minutes, but what if I don't know the time in advance?
Thank you and sorry if it's a basic question
Something like this should do the job:
[Test]
public void Optimize()
{
    app.Screenshot("Start");
    app.Tap("btnOptimize");
    app.WaitForElement("otherButton", "Timed out waiting for Button", new TimeSpan(0, 30, 0));
    app.Screenshot("End");
}
Where "otherButton" becomes visible when the task is done. There are other Wait methods available in the API.
But, note that the Xamarin Test Cloud has a thirty minute maximum per test by default. That default can be modified by contacting Xamarin Test Cloud support.
Also, it is not good practice to include non-deterministic information, or any information that may vary per device or run, in your screenshot titles. When you run on more than one device, the test report steps and screenshots are collated partially by the screenshot titles, so they should match across devices for best results.
While I have never tried a timeout as long as 30 minutes, Calabash allows you to wait for a condition using wait_for:
The following snippet is an example of how to use wait_for to detect the presence of a button on the screen:
wait_for(timeout: 60, timeout_message: "Could not find 'Sign in' button") do
  element_exists("button marked:'Sign in'")
end
Ref: https://docs.xamarin.com/guides/testcloud/calabash/working-with/timeouts/
Just an FYI: 30 minutes is a really long time for a mobile device to be "background processing" without user interface interaction. If you are targeting iOS/the Apple App Store, this is a death sentence for getting Apple submission approval, as they will never wait that long for an app to process something....
You need to identify the element to tap by Id and add WaitForElement (the first argument is the query you want to wait on) with the correct syntax, as given below. It should work for you.
app.Screenshot("Start " + DateTime.Now);
app.WaitForElement(x => x.Id("btnOptimize"),"Optimization timed out",new System.TimeSpan(0,30,0),null,null);
app.Tap(x => x.Id("btnOptimize"));
app.Screenshot("End " + DateTime.Now);
I am using JMeter for load testing. I have noticed that the response time it shows keeps increasing until the test plan has finished running.
I have 3 thread groups with the following settings:
Number of threads: 900, 180, 180
Rampup: 0
Loop count: 20
Each of the threads has a constant throughput controller with the following settings:
Throughput: 900, 180, 180
JMeter test plan: http://cl.ly/UPhC/jmeter_test_plan.png
I don't understand why the response time keeps increasing from the beginning until the end of the test plan execution.
Maybe the system under test is creating objects and provoking garbage collections. That won't be a problem at first, but it will get worse as the test progresses. Profile the JVM of the system under test with VisualVM or a similar tool.
The problem could be the target system.
But if you are running your plan in GUI mode, that can explain your issue, particularly the View Results Tree listener, which exists for DEBUGGING a test plan and absolutely not for running a load test from the GUI.
Read this:
http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
So the fix is:
- Run your test in non-GUI mode and keep only the Summary Report
You can reload the result file after the test.
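A minimal non-GUI invocation looks something like this (the test plan and results file names are placeholders):
jmeter -n -t test_plan.jmx -l results.jtl
-n runs JMeter in non-GUI mode, -t points at the test plan, and -l is the results file you can later reload into a listener or turn into the dashboard report mentioned above.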
You may also see this:
http://www.ubik-ingenierie.com/blog/automatically-generating-nice-graphs-at-end-of-your-load-test-with-apache-jmeter-and-jmeter-plugins/