I am running a 1,000-user test, and some of the flows have 25 users, with an expected throughput of 0.000011574 per second.
The client is suggesting that I run it with a think time of about 1,800 seconds.
Using Little's Law, I get a think time of 2,160,000 seconds.
I am suggesting that we just use 1 user with a 600-second think time, even though the calculation gives me 86,400 seconds of think time, since the flow has to be tested while under load.
What would be the correct approach: go with the client's suggestion, or go with mine?
Let me know your valuable thoughts.
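For context, Little's Law for a closed workload is N = X * (R + Z): concurrent users equal throughput times the sum of response time and think time. Here is a quick check of the arithmetic above, as a minimal sketch; it assumes R is negligible next to Z at this rate, and leaves the units of the throughput figure unspecified, as in the requirement:

```python
# Little's Law: N = X * (R + Z)  =>  Z = N / X - R
# 0.000011574 "per second" is roughly one occurrence per day (1/86,400 s).
# R is assumed negligible next to Z at this rate.
throughput = 0.000011574  # per second (units unspecified in the requirement)

for users in (25, 1):
    think_time = users / throughput  # seconds, ignoring R
    print(f"{users} user(s): think time ~ {think_time:,.0f} s "
          f"(~{think_time / 86400:.1f} days)")
```

Both computed values dwarf the client's 1,800-second suggestion and the 600-second compromise: at this throughput the flow should fire roughly once a day across the whole group, so neither shorter think time reproduces the target rate.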
0.000011574 of what per second?
This reads like a requirement from a server admin and not from "the business."
I am completely new to Performance testing and JMeter and hence my question may sound silly to some people.
We have identified some flows of the application: Login, SignUp, and Perform Transaction. Basically, we are trying to test our APIs' performance, so we have used the HTTP Request sampler heavily. If I have scripted all these flows in JMeter, how can I get answers to the following?
How can we decide on a benchmark for this system? There is no one in the organisation who can help with numbers right now, and we have to identify the number of users beyond which our system will crash.
For example, if we say that 100,000 users are expected to visit our website within one hour, how can we execute this in JMeter? Should a Forever loop be used with a ramp-up of 3,600 seconds (60 minutes), or should I go with Number of Threads as 100,000, Ramp-Up as 3,600, and Loop Count as 1? What is the ideal way to test this?
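For scale, converting the hourly figure into an arrival rate and applying Little's Law gives a rough thread count. A minimal sketch, where the 60-second average session length is purely an illustrative assumption:

```python
# Little's Law: concurrency = arrival_rate * average_session_duration
users_per_hour = 100_000
arrival_rate = users_per_hour / 3600   # ~27.8 new users per second

avg_session_s = 60                     # assumption: average visit length
concurrent_threads = arrival_rate * avg_session_s

print(f"arrival rate : {arrival_rate:.1f} users/s")
print(f"threads      : {concurrent_threads:.0f}")   # ~1,667 looping threads
```

In other words, the test usually needs enough looping threads to sustain roughly 28 arrivals per second for the hour, rather than 100,000 threads each run once.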
What has been done till now?
1. We used to run the above-mentioned flows with Loop Count as 1. However, as far as I know, it's completely based on how much ramp-up time I give, and JMeter decides accordingly how many threads it requires in parallel to complete the task. The results were not helpful in our case, as there was not much load on the system.
2. Then we changed the approach, tried Loop Count as Forever for some 100 users, and ran the test for a duration of 10 minutes. After continuing with such tests for some time, we got a high standard deviation in JMeter's Summary Report, which was fixed by tuning our DB and applying some indexes. We continued this way, but I am still confused about whether this really simulates a realistic scenario.
Thanks in advance!
Please refer to my answer and comments on the similar question below:
performance-testing-in-production-environment-using-jmeter
Recently in an interview I was asked to find the number of Vusers from throughput and response time.
Question.
Find the number of Vusers for a throughput of 1260 bits per second and a response time of 2 milliseconds. The duration of the test run to achieve these results was 1 hour.
When I asked, he said there was no think time or pacing, so it's zero.
So, as per Little's Law, I calculated it as response time * throughput:
1260 * 0.002 = 2.52, or 3. He said it's wrong.
Is there anything I am missing here? If so, please let me know. Given that the response time is 2 milliseconds, which is rare, I think 3 users should be OK. But if I am wrong, what is the correct calculation?
You do not want to work for this person.
By collapsing the time between requests to zero, your interviewer has collapsed the client-server model, which is predicated on a delay between requests from any single client, during which requests from other clients can be serviced.
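For what it's worth, the arithmetic in the question is the textbook zero-think-time form of Little's Law, assuming the 1260 figure really means requests per second (the "bits per second" wording leaves that doubtful):

```python
# Little's Law with zero think time: N = X * R
throughput = 1260       # assumed to mean requests/second, not bits
response_time = 0.002   # seconds (2 ms)

vusers = throughput * response_time
print(vusers)           # 2.52 -> round up to 3 virtual users
```

Note that the one-hour test duration plays no part in the steady-state calculation; that may be the distraction the interviewer was fishing for.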
We are building a new application on Parse and are trying to estimate our requests/second so we can optimize the application to keep it below the 30/second limit. Our app, still in development, makes various calls to Parse. Some actions use only 1 request, and a few use as many as 5 requests. We have tested and verified this in the analytics/events/api requests tab.
However, when I go to the analytics/performance/total requests section, the requests/second figure rarely goes above 0.2 and is often much lower. I assume this is because it is an average over a minute or more. So I have two questions:
1) Does anyone know what the number on this total requests/second screen represents? Is it an average over a certain time period? If so, over how long?
2) When Parse denies a request due to the rate limit, does it deny based on the actual per-second rate, or based on an average over a certain time period?
Thanks!
I suppose you have your answer by now, but just in case:
You're allowed 30 reqs/sec on the free plan, but Parse actually counts it on a per-minute basis, i.e. 1,800 requests per minute.
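If you need to stay inside that budget client-side, a simple minimum-interval throttle is usually enough. A minimal sketch, assuming a synchronous client; `send_request` is a hypothetical stand-in for your actual Parse call:

```python
import time

LIMIT_PER_MINUTE = 1800                  # 30 req/s, counted per minute
MIN_INTERVAL = 60.0 / LIMIT_PER_MINUTE   # ~0.033 s between requests

_last_sent = 0.0

def throttled(send_request):
    """Run send_request, sleeping first so we stay under ~1800 req/min."""
    global _last_sent
    wait = MIN_INTERVAL - (time.monotonic() - _last_sent)
    if wait > 0:
        time.sleep(wait)
    _last_sent = time.monotonic()
    return send_request()
```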
My customer gave me a traffic figure of 600 requests/second, with an RX of 30 Mbps.
Please help me decide on a suitable test-plan scenario for this requirement.
My customer and I are in different countries, so will the network affect the results?
Many thanks for your pointers.
First of all, a rate of 600 requests/second isn't something that is recommended to be generated from a single node.
You need to consider JMeter Remote Testing which assumes running the test from multiple JMeter instances. Make sure that you're following JMeter Performance and Tuning Tips guidelines while developing your test.
In order to achieve a rate of exactly 600 requests/second, not more, not less, you need to use the Constant Throughput Timer.
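Two details worth remembering here: the Constant Throughput Timer is configured in samples per minute, and it can only slow threads down, never spawn more, so the thread pool has to be large enough on its own. A rough sizing sketch, where the 0.5-second response time is an assumption for illustration:

```python
# The Constant Throughput Timer takes its target in samples per MINUTE.
target_rps = 600
samples_per_minute = target_rps * 60   # 36,000 -> the value to enter

# The timer only delays threads, so size the pool with Little's Law:
assumed_response_time = 0.5            # seconds; an illustrative assumption
min_threads = target_rps * assumed_response_time

print(samples_per_minute)              # 36000
print(min_threads)                     # 300.0 -> at least 300 threads overall
```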
Can someone please explain the correlation between requests per second and response time? Which do you try to improve first? If your competitor offers fewer 'requests per second' on their most-used functionality than you, is your application performing better in terms of end-user performance?
Can someone please explain the correlation between requests per second and response time?
Think of this situation as if it were a gas station. Cars arrive at various intervals and occupy a pump; they spend some time filling up, and then they leave.
Each car that arrives and occupies a pump is a request.
The time it takes to fill up is your response time.
You can improve things in two ways:
If you add more pumps, you can service additional cars at once because there will be more capacity.
If you make all your pumps faster, you can service more cars over time with the same number of pumps, because each car will finish sooner.
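In steady state the analogy boils down to throughput = pumps / fill-up time, which makes the two levers easy to compare; the numbers below are made up for illustration:

```python
# Steady-state gas station: throughput (cars/min) = pumps / fill_time
def cars_per_minute(pumps, fill_time_min):
    return pumps / fill_time_min

print(cars_per_minute(8, 2.0))    # baseline: 8 pumps, 2-minute fills -> 4.0
print(cars_per_minute(16, 2.0))   # double the pumps                  -> 8.0
print(cars_per_minute(8, 1.0))    # halve the fill time               -> 8.0
```

Both changes double throughput, but only the faster fill also shortens each individual car's response time.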
Which do you try to improve first?
That depends. Do you want to serve people faster (improving their experience while making some others wait) and thus more people overall, or do you want to serve more people at once (at the possible expense of request time)? Ideally, get both metrics as good as possible.
It all depends on what sort of load your system will be under.
If you have millions of users, then you need to handle more requests per second, possibly at the expense of response time; otherwise users may not be able to connect when they want to.
However, if you are only going to have 30 users then it's more important to them that your system responds quickly than it being able to handle a thousand requests a second.
Requests per second may be high while offering an awful user experience. You might have a lot of users buying thousands of concert tickets per second but the response time for each user is over 30 seconds.
For a high-performing, enjoyable website, you need both a high number of requests per second and a low maximum response time. As a user, I like 5 seconds or less.
If your competitor offers fewer 'requests per second' on their most-used functionality than you, is your application performing better in terms of end-user performance?
I wouldn't agree with that. Look at Google. They serve thousands of requests a second - hell, I think it's something like 100 million per day and 3 billion per month.
To answer your question, I think response time is more important than requests per second. Sure you can optimize/minimize the number of requests made, but if your product scales to handle unlimited requests (just by throwing more hardware at the problem) then I think that is more valuable.