Decrease Protractor/Jasmine test delay between it() - jasmine

I wanted to know if there is a way to decrease the delay between each it() function. It looks like the default is around 3 seconds.

Related

How to make requests not coincide in load testing?

First of all, I apologize for my bad English; I'll try to be clear.
I tried to run a test in JMeter with 60 threads and a 600-second ramp-up, running for, say, 15 minutes. The problem is that, as I understand it, JMeter should spread the load out to at most 1 hit per second, which is not happening.
I'm using a Constant Throughput Timer with "Calculate Throughput based on: this thread only" set to 1 sample per minute. Once the minute passes, the requests begin to coincide, reaching up to 10 hits per second.
I understand that this happens because the first thread executes a request, and when its minute passes another thread executes a request, and so on until the ramp-up time is over.
The question is: is there any way to limit the hits per second so that, for example, in a test of 180 requests per minute, the hits per second reach a maximum of 3, with the load distributed exactly evenly?
I hope I was clear.
Thanks!
Yes, but you're using:
A timer that is not very accurate.
A timer that is not properly configured: it limits the number of requests per minute for one thread only and does not take the other threads into account. Due to concurrency you can therefore get more than one hit per second; you need to switch "Calculate Throughput based on" to "All active threads (shared)".
It will be much easier to use either the Throughput Shaping Timer or the Precise Throughput Timer; they are more accurate and self-explanatory.
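As a back-of-the-envelope illustration of the two modes (plain arithmetic written out in Ruby, not JMeter configuration; the numbers are the ones from the question):
threads               = 60
configured_per_thread = 1        # samples per minute with "this thread only", as in the question
average_total_rps     = threads * configured_per_thread / 60.0   # => 1.0 hit/s on average
# "this thread only" throttles each thread separately, so nothing prevents
# several threads from landing in the same second - hence bursts of up to
# 10 hits/s even though the average works out to 1 hit/s.
# "All active threads (shared)" applies the limit to the whole thread group:
# for the 180-requests-per-minute example, the shared target is simply
desired_total_per_min = 180
shared_hits_per_sec   = desired_total_per_min / 60.0              # => 3.0, spread evenly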

Constant Throughput Timer seems to only gauge for 1 min

I am trying to run a POST request in JMeter. I want 10 requests to fire per second over a period of 1 hour. How can I achieve this?
Looking around, the Constant Throughput Timer seems to be the popular option.
But for some reason, no matter what I switch around, I end up with only 500 requests. Can I please get some guidance as to why? It feels like such a basic option, yet I simply can't figure it out. I've been at it for hours and am just not getting anywhere.
My settings (for testing I'm just trying 2 minutes, so I expect to end up with 1200 requests):
Thread Group:
Number of threads: 20
Ramp Up Period: 1
Scheduler checked.
Duration set to 120 seconds (2 mins).
I then go on to add the Constant Throughput Timer and set its value to 600 (thus 10 requests per second).
As mentioned above, running this gives me 500 requests... I was expecting 1200 requests. Why? Even if I extend my duration to 3 minutes, it would still be 500. Please help.
The Constant Throughput Timer can only pause the threads down to the desired throughput, so if you want to achieve 10 requests per second with 20 users your application must respond within 2 seconds (20 threads / 10 requests per second); if the response time is higher, the number of requests per unit of time will be proportionally lower.
So first of all, try increasing the number of threads.
Make sure to follow JMeter Best Practices (just in case JMeter is not capable of sending requests fast enough).
You may find the Concurrency Thread Group and Throughput Shaping Timer combination more convenient and precise; moreover, it can kick off extra threads if the current amount is not enough to reach or maintain the defined throughput.
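As a rough sanity check of those numbers (plain Ruby arithmetic, assuming the simple model that throughput is at most threads / response time, and ignoring the 1-second ramp-up):
observed_requests = 500
duration_s        = 120.0
threads           = 20
observed_rps      = observed_requests / duration_s      # ~4.2 requests per second actually achieved
implied_rt_s      = threads / observed_rps               # ~4.8 s average response time implied
target_rps        = 10
needed_threads    = (target_rps * implied_rt_s).ceil     # ~48 threads to sustain 10 req/s at that response time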

The correct use of timers in a thread group (so far my timers get ignored)

My goal is to simulate 500 users that perform certain requests on the website over a period of five minutes.
To make the test come as close as possible to reality, I want to add a think time between requests (here: two seconds). The problem is that no matter what I do, the timers get ignored. To give you an example, I would like to perform a login request every 2 seconds. Here is the data of the thread group:
Number of Threads: 500
Ramp-Up Period: 300
Loop Count: 1
So here is what I have done so far to achieve this:
I used the Constant Timer and put it as a child of my request; that didn't work, the timer just gets ignored no matter what value I use.
I tried the Constant Throughput Timer, but that didn't work either; the values get ignored.
What am I doing wrong? I added a screenshot so you can see where I put the Constant Timer in my test plan.
Screenshots of my test plan:
In your case you can work without timers: you can set the Ramp-Up Period to Number of Threads * 2 (seconds) to start a thread approximately every 2 seconds.
So in your case just put Ramp-Up Period: 1000 (and remove the timer).
You are using the wrong timer: the Constant Timer just adds a delay of 5 seconds before each request. If you want JMeter to perform a login every 2 seconds you should consider switching to the Constant Throughput Timer.
Remember that the Constant Throughput Timer is only precise enough at the minute level, so you might need to play with the ramp-up period at the Thread Group level in order to limit the thread execution rate during the first 60 seconds. Alternatively, you can consider using the Throughput Shaping Timer plugin.
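For completeness, the arithmetic behind both suggestions (Ruby used purely as a scratchpad, with the numbers from the question; this assumes "one login every 2 seconds" is meant across all threads combined):
threads      = 500
think_time_s = 2
# First answer: stagger thread starts instead of using a timer.
ramp_up_s    = threads * think_time_s    # => 1000 s, i.e. one thread starting roughly every 2 s
# Second answer: the Constant Throughput Timer works in samples per minute,
# so one login every 2 seconds translates to
ctt_target   = 60 / think_time_s         # => 30 samples per minute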

Reliable timing with EventMachine periodic timers

My objective is to have a system that broadcasts an ad every 10 minutes for 37,500 cities. It takes around 5 minutes to do the DB queries, calculations, and AMQP IO for all cities.
The code is roughly structured like:
EventMachine.add_periodic_timer(600) do   # 600 seconds = 10 minutes
  the_task_that_takes_five_minutes
end
What I'm finding is that even though the timer is set for 10-minute intervals, and even though the task takes less than ten minutes, the command fires at 15-minute intervals (the time it takes to complete the task + the EM period).
If we make the assumption that the task will never take longer than 10 minutes, how would I go about ensuring that the period of the timer is always exactly 10 minutes from the previous run, regardless of the task processing time?
EM basically seems to set the next timer after the task has run, not before.
I've tried a simple EM.defer around the task batch itself. I assumed this would free up the thread to set the next timer, but it doesn't solve the issue.
Can I get away with the following?
def do_stuff
  EventMachine.add_timer(600) do   # 600 seconds = 10 minutes
    do_stuff
  end
  the_task_that_takes_five_minutes
end
do_stuff
I know I can do that sort of thing in JavaScript because the timer wouldn't execute inside the do_stuff call stack. Is this true for EventMachine as well?
This is just an idea, but maybe the timer could fire a function that only broadcasts the ad, rather than one that also does all the calculations.
calculations = Calc.new                     # compute once before the first broadcast
EventMachine.add_periodic_timer(600) do     # 600 seconds = 10 minutes
  ad_broadcasting(calculations)             # broadcast the results of the previous run
  EM.defer { calculations = Calc.new }      # recompute off the reactor for the next run
end
I'm not sure whether deferring the calculations there actually keeps EM from waiting for them.
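On that last point: EM.defer hands the block to EventMachine's thread pool, so the reactor (and therefore the periodic timer) is not blocked while the heavy work runs. A minimal runnable sketch of the same idea, where heavy_calculations and broadcast_ads are hypothetical stand-ins for the real calculation and broadcasting steps:
require 'eventmachine'

# Hypothetical stand-ins for the real work described in the question.
def heavy_calculations
  sleep 300          # pretend this is the ~5 minutes of DB queries, calculations and AMQP IO
  :fresh_ad_data
end

def broadcast_ads(results)
  puts "broadcasting #{results.inspect}"
end

EM.run do
  results = nil                          # output of the previous calculation run

  EM.add_periodic_timer(600) do          # ticks every 10 minutes on the reactor, regardless of work time
    broadcast_ads(results) if results    # fast step: publish what was computed last time
    EM.defer(
      proc { heavy_calculations },       # slow job runs on EM's thread pool, reactor stays free
      proc { |r| results = r }           # callback runs back on the reactor with the result
    )
  end
end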

jMeter - performance degrading with higher loop count

I need a little help on how to debug this. My current JMeter scenario seems to run fine as long as I keep the loop count at 1; when I add more loops the performance starts to degrade a lot.
I have a thread group with 225 threads, 110 s ramp-up, and loop count 1 - my total response time is ca. 8-9 seconds. I ran this several times to confirm; each run shows similar response times.
Now, I did the same test, just with the loop count changed to 3 and all other parameters unchanged, and the performance went south: the total response time is ca. 30-40 s.
I was under the impression that 3 runs of 1 loop would be, more or less, equivalent to 1 run of 3 loops. It seems that is not the case. Could anyone explain to me why that is?
Or, if these should be equivalent, any idea where to look for the culprit of the degrading performance?
What you're saying is that the response times degrade if you increase the throughput (as in requests per second).
Based on 225 threads each making a single request with a ramp-up of 110 seconds, your throughput is going to be in the region of 2 requests per second. Increasing the loop count to 3 is going to raise that by roughly a factor of 3, to about 6 requests per second (assuming no timers). Except, of course, that if the response times are increasing then you will not reach this level of throughput, which is your problem.
Given that this request already takes 8-9 seconds, which is not especially fast, it can be assumed that there is some heavy thinking going on behind the scenes and that you have simply hit a bottleneck somewhere...
Try using fewer threads and a longer ramp-up, then monitor the response times and the throughput rate. At some point, as the load increases, you will see the response times start to degrade, and at that point you need to roll up your sleeves and have a look at what is happening in your AUT.
Note: 3 x 1 loop is not the same as 1 x 3 loops. The delay between iterations will cause one thread with multiple iterations to have a different throughput than more threads with one iteration, where the throughput is decided by the ramp-up, not the delay. That said, this is not what you describe in your question - you mention that the number of threads stays the same.
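To make the arithmetic in the answer above explicit (a Ruby scratchpad for the numbers already quoted, assuming no timers):
threads   = 225
ramp_up_s = 110.0
# With loop count 1, roughly one new thread (and therefore one request)
# starts every ramp_up_s / threads seconds.
offered_rps_1_loop  = threads / ramp_up_s         # ~2.0 requests per second
# With loop count 3, each thread fires 3 requests in roughly the same window,
# so the offered load roughly triples.
offered_rps_3_loops = 3 * threads / ramp_up_s     # ~6.1 requests per second
# If response times climb from ~8-9 s to ~30-40 s, the achieved throughput
# falls short of this offered load - that is the bottleneck showing up.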
In addition to the answer from Oliver: try using a custom listener like the Active Threads Over Time listener to monitor your load scenario.
You can also retry both of the scenarios described above with this listener - you'll definitely see the difference in the graphs.

Resources