Random POST requests being made after test case is executed - cypress

I began writing a portion of a test in which an input on a form gets filled out. But for some reason, after that part of the test ends, the Cypress logs show a long series of POST requests, each with a little yellow box containing a number, which I am guessing is the count of requests being made. As a result, the CPU usage on my laptop is very high, the laptop overheats, and at times the Cypress browser showing these logs breaks. Does anyone have any idea why this could be happening? The only thing this test case does so far is fill out the name input on a form. I have provided an image below to show you.
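One way to find out what those POSTs actually are is to intercept them and log their targets. This is only a sketch; the '/form' URL, the input selector, and the 'anyPost' alias are placeholders rather than anything from the original post:

cy.intercept('POST', '**').as('anyPost')   // match every POST the app makes
cy.visit('/form')                          // placeholder page URL
cy.get('input[name="name"]').type('Jane')  // placeholder selector for the name input
cy.wait('@anyPost').then((interception) => {
  // The request URL and body usually reveal which call is firing repeatedly
  console.log(interception.request.url, interception.request.body)
})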

Related

Power Automate keyword not always triggering flow

I have a pretty simple flow. I have tried typing "mytest" in the test channel every 30 seconds after the previous run fires. As you can see, sometimes it fires and sometimes it doesn't; the keyword trigger seems sporadic. Any ideas why? What other info would you need to assist?
It seems like I'm hitting some type of limit. I was able to run a bunch of keyword tests today and got the expected results, but after a few tests it stops working again.

Why is the JMeter result different from the user experience result?

We are currently conducting performance tests on both of our web apps: one runs within a private network and the other is publicly accessible. For both apps, a single page load of the landing or initial page takes only 2-3 seconds from a user's point of view, but when we use blaze and JMeter, the results are between 15-20 seconds. Am I missing something? The 15-20 second result comes from the Load time/Sample time in JMeter, and from the Elapsed column when exported to .csv. Please help, as I'm stuck.
We have tried running the tests on multiple PCs within the office premises, as well as on a PC remotely accessed at another site, and we still get the same results. The number of threads and the ramp-up period are both set to 1, to imitate a single user only.
Where a delta exists, it almost certainly means that two different things are being timed. It would help to understand what you are timing to on your front end: a standard metric such as W3C domComplete, Time to Interactive, First Contentful Paint, or some other point, and then compare where that point falls in the drill-down on the Performance tab of Chrome. Odds are that a lot is occurring that is not visible to you but is being captured by JMeter.
You might also look for other threads on here about how JMeter operates compared to a "real browser". There are differences that could come into play and affect your page comparisons, particularly if you have dozens or hundreds of elements that need to be downloaded to complete your page. Also, pay attention to third-party components where you do not have permission to test their servers.
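If you want to check which of those metrics your 2-3 second figure corresponds to, the browser's own navigation timings can be read straight from the DevTools console; this uses the standard Performance API, nothing JMeter-specific:

const [nav] = performance.getEntriesByType('navigation');
// responseStart is roughly time to first byte (closest to JMeter's Latency);
// domComplete and loadEventEnd are closer to what a user perceives as "loaded"
console.log('TTFB:', nav.responseStart, 'domComplete:', nav.domComplete, 'load:', nav.loadEventEnd);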
I can think of two possible causes:
1. Clear your browser history, especially the browser cache. It might be the case that you're getting HTTP status 304 for all requests in the browser because responses are served from the browser cache and no actual requests are made, while JMeter always uses a "clean" session.
2. Pay attention to the Connect Time and Latency metrics, as it might be the case that the server response time is low but the time for network packets to travel back and forth is very high.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
So basically "Elapsed time = Connect Time + Latency + Server Processing Time"
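As an illustration of that formula with made-up numbers: if Elapsed is 15,000 ms while Connect Time is 4,000 ms and Latency is 10,000 ms, the server itself only spent about 1,000 ms processing, and the gap between a 2-3 second browser experience and a 15-20 second JMeter figure would be mostly connection and network time rather than server time.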
In general, given:
- the same machine,
- a clean browser session,
- and JMeter configured to behave like a real browser,
you should get similar or equal timings for the same page.
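For reference, "configured to behave like a real browser" usually comes down to settings along these lines (element names as they appear in the JMeter GUI; the values are only illustrative):
- HTTP Request Defaults → Advanced: tick "Retrieve All Embedded Resources" and "Parallel downloads", e.g. 6
- an HTTP Cache Manager with "Clear cache each iteration?" ticked
- an HTTP Cookie Manager
- an HTTP Header Manager sending a browser-like User-Agent and Accept-Encoding: gzip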

I have random timeouts in Cypress tests

I have been working with Cypress for 3 months now, and I have been trying to fix this problem for 2 months, and I really don't know how to fix it.
When I run all my tests, a lot of them fail, and every time it's a different (random) test.
The application I'm testing has a button that is disabled until the fields are filled with text, at which point the button becomes active.
The problem is that Cypress clicks the button while it is still disabled; the button needs some time to become active. For now I have put the following in the code:
cy.wait('#budgetblindsPost')
cy.wait(500)
But this is not working either. I get fewer errors, but I still get errors.
Here is an example of an error I get
Here is also an example of my code
Using cy.wait() all over the place may eventually solve timeout-related issues, but it will make your test suite unnecessarily slow. Instead, you should increase the timeout(s).
One-off
This command will only fail after 30 seconds of not being able to find the object, or, once it is found, after a further 30 seconds of not being able to click it.
cy.get('#model_save', {timeout: 30000}).click({timeout: 30000});
Please note that your value of 500 means half a second, which may not be enough.
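Since the underlying problem is a button that starts out disabled, you can also make the retry explicit by asserting on its state before clicking. This is a sketch using standard Cypress assertions, reusing the '#model_save' selector from the example above:

cy.get('#model_save', {timeout: 30000}).should('be.enabled').click();

The .should('be.enabled') assertion retries until the timeout, so the click only happens once the button has actually become active.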
Global
If you find yourself overriding the timeout with the same value in a lot of places, you may wish to increase it once, globally, in the config.
defaultCommandTimeout: 4000
Time, in milliseconds, to wait until most DOM based commands are considered timed out
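In current Cypress (10 and later) that global override lives in cypress.config.js; in older versions the same key goes in cypress.json. A minimal sketch, with 10000 as an arbitrary example value:

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  // raise the 4000 ms default for most DOM-based commands
  defaultCommandTimeout: 10000,
})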

Does JMeter show the correct average response time for the first page it hits for many virtual users?

I'm load testing a system with 500 virtual users. I've set the "Ramp-Up period (in seconds)" option to zero. As I understand it, JMeter will then hit the system with all 500 virtual users at the same time. Please correct me if I'm wrong here.
Now, the summary report shows an average response time of ~100 seconds for the first page, which is more than a minute and a half of wait time. But while JMeter was running, I manually went to the same page/URL in a browser and didn't have to wait anywhere near that long; the page response was almost immediate for me.
My question is: is there any known issue with the average response time of the first page? Or is it JMeter taking that long to start that many users?
Thanks in advance.
--Ishtiaque
There is no issue in JMeter related to first page response time.
The Summary Report shows all response time details in milliseconds; for your "100 seconds" value, have you converted from milliseconds to seconds?
Also, to make sure that all 500 users hit the server concurrently, use a Synchronizing Timer (with "Number of Simulated Users to Group by" set to 500, all threads are released together).
Hope this helps.
While the response times will be accurate, you need to consider the effect of starting so many threads at once on both your server and your client.
Starting 500 threads at once is not insignificant on the client. If your server has the connections, it will be handling 500 threads as well.
Ramping up over a period of time is more realistic load-wise, but it is still not really indicative of server capability until all the threads have started and settled in.
Databases can also require a settling-in period, which can affect response times.
An alternative to ramping is to introduce a random wait at the start of each thread before it fires its first sample. You can then choose not to ramp over time, but you should still expect client resources to suddenly come under load, and change the settings if you hit limits. This makes the entire run much more representative of typical behaviour. However, you need to determine whether your use cases are typical.
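One concrete way to do this (assuming JMeter 5.x element names) is a Flow Control Action placed as the first step in the Thread Group, with its Pause duration set to ${__Random(0,30000,)} so that each thread waits a random amount of up to 30 seconds before its first sample.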
Although the heap size was increased, I noticed the reported time was still longer than the actual response time. Later I realised it was the probe effect (the extra time a tool adds due to the test execution itself).

Reproducing load from access-log with respect to timestamp differences (JMeter or similar)

Context
I am using JMeter with JMeter Plugins for load testing. So far I have either modelled the traffic myself (ramp-up periods, bursts, etc.) or simply done full load testing. Now, however, I need to reproduce exactly the traffic recorded in an access log. For example, if there were three requests at 13:00:01, 13:00:03 and 13:00:06, the sampler has to issue those requests with the same spacing: the second 2 seconds after the first, and the third 3 seconds after the second.
I searched the web for solutions, but the only hint I found was to write a custom LogParser to extract the timestamps and their differences. That still doesn't control when the sampler actually sends each request, since the timing is governed by the ThreadGroup.
Summary
This leads me to my question: how can I reproduce exactly the same traffic (with respect to the time differences between requests) as in the access log? I don't want to model a similar distribution (Gaussian, etc.); I need an exact copy of the traffic.
If it's impossible in JMeter, please direct me to the right tool.
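One way to realize the LogParser hint above without modelling anything: precompute the gap before each request offline and feed it to a timer. Below is a sketch, assuming common-log-format timestamps like [13/Feb/2024:13:00:06 +0000] and Node.js at hand; the file names are made up:

const fs = require('fs');

const months = {Jan:0,Feb:1,Mar:2,Apr:3,May:4,Jun:5,Jul:6,Aug:7,Sep:8,Oct:9,Nov:10,Dec:11};
const stamps = fs.readFileSync('access.log', 'utf8').trim().split('\n').map(line => {
  const m = line.match(/\[(\d{2})\/(\w{3})\/(\d{4}):(\d{2}):(\d{2}):(\d{2})/);
  return Date.UTC(+m[3], months[m[2]], +m[1], +m[4], +m[5], +m[6]);
});
// delay before each request = gap to the previous one, in milliseconds
const delays = stamps.map((t, i) => (i === 0 ? 0 : t - stamps[i - 1]));
fs.writeFileSync('delays.csv', 'delay\n' + delays.join('\n'));

In the test plan, a CSV Data Set Config reading delays.csv into ${delay}, plus a Constant Timer (or a Flow Control Action pause) set to ${delay} before the sampler, then replays the original inter-arrival gaps with a single thread.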
