I run the same API 4 times in the same JMeter script. On the first run the API takes a high time, and after that the same API takes much lower times.
User Create API - 2067 ms
User Create API 1- 948 ms
User Create API 2- 869 ms
User Create API 3- 902 ms
User Create API 4- 993 ms
Why does this kind of scenario occur in JMeter?
JMeter only sends requests, waits for responses, measures the time in between, and records the performance metrics and KPIs.
If the first request takes longer than the following ones, the reasons could include:
Your application under test uses lazy initialization pattern
Your application under test needs to warm up its caches
The first request takes longer due to the process of establishing the connection (TCP handshake and, for HTTPS, the TLS handshake); subsequent requests simply reuse the connection if you're sending a Keep-Alive header
Your API endpoint's response is cached at the database or in-memory level
etc. The reasons could be numerous; you need to monitor everything you can on both the JMeter side and the system-under-test side to understand this.
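To illustrate the lazy-initialization point above, here is a minimal standalone Java sketch of a hypothetical service (not your actual application): the first call pays the one-time setup cost, and every later call reuses the cached state, which is exactly the "first sample is slow" pattern.

```java
import java.util.HashMap;
import java.util.Map;

public class LazyService {
    private Map<String, String> cache;   // built on first use only
    static int initCount = 0;            // counts how many times init runs

    String lookup(String key) {
        if (cache == null) {             // first request triggers the expensive build
            cache = new HashMap<>();
            cache.put("user", "created");
            initCount++;
        }
        return cache.getOrDefault(key, "miss");
    }

    public static void main(String[] args) {
        LazyService s = new LazyService();
        s.lookup("user");
        s.lookup("user");
        s.lookup("user");
        System.out.println(initCount);   // initialized once despite three calls
    }
}
```

Only the first call pays the initialization cost; in a real application that cost might be opening connection pools, compiling queries, or JIT warm-up.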
For the first request JMeter has to initialize the TCP connection and perform the SSL handshake. For subsequent requests it reuses connections according to its configuration properties httpclient4.time_to_live and httpclient.reset_state_on_thread_group_iteration.
You can refer to JMeter's properties reference for more information.
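For example, these two properties can be overridden in user.properties; the values shown below are, to the best of my knowledge, the JMeter defaults (check your version's properties reference before relying on them):

```properties
# user.properties -- connection-reuse settings mentioned above
# Keep persistent connections alive for up to 60 seconds:
httpclient4.time_to_live=60000
# Reset connection state (close connections) on each Thread Group iteration:
httpclient.reset_state_on_thread_group_iteration=true
```

With reset_state_on_thread_group_iteration=true, every iteration pays the connection-setup cost again, which makes each iteration look like a "first request".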
I am working on migrating scripts from Performance Center to JMeter 5.2.1.
As part of this migration, we are using the same functional flow that we had in Performance Center.
My scenario consists of users logging in to the web application, performing 10-15 iterations, and then logging out.
This is my test plan:
TestPlan
--ThreadGroup1
----Once Only Controller (login of users)
----Loop Controller (10 iterations)
------HTTP1
------HTTP2
------HTTP3
------...
----Once Only Controller (logout of users)
--CSV Data Set Config (username/password)
--CSV Data Set Config (unique data for the Loop Controller)
With this approach I am noticing that the time taken to complete the test in JMeter is much longer than what we have in Performance Center (I took care of think times and added similar values).
Why is my test run slow in JMeter?
Is the Loop Controller sequential, meaning that at a given time it can run only one request?
If not the Loop Controller, what other options do we have to satisfy this scenario?
If I use different thread groups, JSESSIONIDs would need to be carried across thread groups, which is not a best practice.
Update: comparison between Performance Center and JMeter settings
Below are the settings in JMeter (screenshots omitted):
Thread Group settings
Test Plan:
--HTTP Cookie Manager in the Thread Group
--CSV data files at the Test Plan level
--Once Only Controllers for login and logout
--Loop Controller for iterations
HTTP Request Defaults (even without checking "Retrieve all embedded resources" and "Parallel downloads", it is taking more than an hour for 3 users)
Performance Center results:
--Every sampler has an HTTP Header Manager
--Entire test plan (screenshot omitted)
Given that you send the same requests, you should get the same response times, no matter which tool is being used under the hood.
It's hard to say what the differences are without seeing the full scripts from both tools, so the generic advice is to use a third-party sniffer tool like Wireshark or Fiddler in order to identify the differences and configure JMeter to behave exactly like the "performance center" (whatever it is).
For example, I fail to see an HTTP Cache Manager; without one, JMeter will download embedded resources (images, scripts, styles, sounds, fonts) for each and every HTTP request, while a real browser does it only once.
I also don't see an HTTP Header Manager, which might be very important; for example, if you send an Accept-Encoding header the server will be aware that the client can understand gzipped responses, which will greatly reduce network traffic.
More information: How to make JMeter behave more like a real browser
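To show why the gzip point matters for network traffic, here is a standalone Java sketch (not JMeter-specific) compressing a repetitive HTML-like payload, similar to what a server returns when the client advertises gzip support:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPOutputStream;

public class GzipDemo {
    public static void main(String[] args) throws Exception {
        // A repetitive payload, similar to the HTML/CSS/JS served by web applications
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1000; i++) sb.append("<div class=\"row\">cell</div>\n");
        byte[] raw = sb.toString().getBytes("UTF-8");

        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);                     // compress the whole payload
        }
        byte[] compressed = bos.toByteArray();

        // Repetitive markup compresses dramatically; here by more than 10x
        System.out.println(raw.length > 10 * compressed.length);
    }
}
```

Real-world compression ratios depend on the content, but text-heavy responses routinely shrink several-fold, which is why a missing Accept-Encoding header can skew a load test's network profile.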
I am new to JMeter; I have only been using it for two weeks, and I am running into some issues with a test I have created.
The test is designed to hit a Lambda in AWS to generate a pre-signed URL via an API call, which is required for placing an object into an S3 bucket; for this to be successful, a signature is required.
Below is the JMeter test:
bzm - Concurrency Thread Group
--User Defined Variables
--HTTP Header Manager
--jp@gc - Throughput Shaping Timer
--HTTP Request
----JSR223 PreProcessor (generates a random GUID for the object)
----JSR223 PreProcessor (generates the required signature)
I am using the above to perform the following load test: start with a baseline of 1 request per second, and every 20 minutes increase to 30 requests per second for two minutes, then return to 1 request per second; this cycle repeats over a 2-hour period.
This test is running across 10 Fargate tasks, so the total number of requests hitting the Lambda should be 10 requests per second at the baseline and 300 requests per second during the burst.
My problem is that when I get to the third burst in the cycle my test starts returning a 403 error; checking JMeter, it reports a 'Signature expired is now earlier than' message for the 403.
I am unclear why my requests suddenly start to fail with this error after running successfully for an hour. The only information I have been able to find relating to the root cause was a clock skew issue; however, as the test runs successfully for an hour before this happens and everything is hosted in AWS, I don't believe this is a clock skew issue, and if it is, how do I resolve it?
Has anyone else run into similar problems?
As per Authenticating Requests (AWS Signature Version 4) article:
Protect against reuse of the signed portions of the request – The signed portions (using AWS Signatures) of requests are valid within 15 minutes of the timestamp in the request. An unauthorized party who has access to a signed request can modify the unsigned portions of the request without affecting the request's validity in the 15 minute window. Because of this, we recommend that you maximize protection by signing request headers and body, making HTTPS requests to Amazon S3, and by using the s3:x-amz-content-sha256 condition key (see Amazon S3 Signature Version 4 Authentication Specific Policy Keys) in AWS policies to require users to sign S3 request bodies.
So you need to check the timestamp field of your request and compare it to the current time on the machine.
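As a sketch of that check (the timestamps are hypothetical, in the SigV4 X-Amz-Date format), you can compare the signed timestamp against the current time and the 15-minute window:

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class SigWindowCheck {
    // SigV4 X-Amz-Date format, e.g. 20240101T115000Z (UTC)
    static final DateTimeFormatter AMZ = DateTimeFormatter.ofPattern("yyyyMMdd'T'HHmmss'Z'");

    // True if the signed timestamp is no more than 15 minutes old
    static boolean isStillValid(String amzDate, LocalDateTime nowUtc) {
        LocalDateTime signed = LocalDateTime.parse(amzDate, AMZ);
        Duration age = Duration.between(signed, nowUtc);
        return !age.isNegative() && age.compareTo(Duration.ofMinutes(15)) <= 0;
    }

    public static void main(String[] args) {
        LocalDateTime now = LocalDateTime.of(2024, 1, 1, 12, 0, 0);
        System.out.println(isStillValid("20240101T115000Z", now)); // 10 min old
        System.out.println(isStillValid("20240101T114000Z", now)); // 20 min old
    }
}
```

If your PreProcessor caches the signature (or the timestamp it is derived from) across samples instead of regenerating it per request, the signature will age past the 15-minute window mid-test, which matches the "fails after an hour" symptom.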
Also be aware that you can create a GUID using the __UUID() function, so there is no need to write custom code.
Make sure to use the Groovy language, tick the "Cache compiled script if available" box, and avoid inlining JMeter functions or variables into your script body.
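For reference, JMeter's __UUID() function returns the same canonical GUID form as Java's built-in generator, so calling the standard library from a JSR223 element is equivalent (standalone sketch):

```java
import java.util.UUID;

public class UuidDemo {
    public static void main(String[] args) {
        // Same canonical form that ${__UUID()} produces in JMeter
        String guid = UUID.randomUUID().toString();
        System.out.println(guid.length());           // canonical form is 36 chars
        System.out.println(guid.split("-").length);  // 5 hyphen-separated groups
    }
}
```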
You can see an example of generating an AWS signature in How to Handle Dynamic AWS SigV4 in JMeter for API Testing article
I'm trying to set up a performance test for a WebSocket application using JMeter.
The request is {"type":"subscribe_rq","id":1,"ts":"2018-10-16T00:00:00","data":{"sinceSeq":0}}.
The response is multipart and sequential: an initial response, then an update every second as long as the connection is open (I checked this with "WebSocket Test Client", a Chrome extension).
Currently I only get the first main response, but not the updates, and I'm rather unsure how to get them. How can I achieve this in JMeter? That is, how do I keep the connection open for a specified period (say 5 seconds), receive the multiple responses during that period, and assert them?
To keep the connection open I have a Constant Timer with 5 seconds as the thread delay. Not sure if this will work...
Going forward, please remember to include the essential parts of your query in the question itself; i.e. the output from the "Network" tab of the browser developer tools or a screenshot of this WebSocket Test Client (whatever it is) could tell the full story.
In the meantime, there is a project, JMeter WebSocket Samplers by Peter Doornbosch, which has many useful sample JMeter test plans, including one you can use as the basis: Single read sample.jmx, which reads the data in a loop over a single WebSocket connection.
Check out JMeter WebSocket Samplers - A Practical Guide article to get started with the WebSocket Samplers.
I am trying to stress test a web application which is composed of login, a view page, other pages, and logout. The full flow contains 14 requests and I have created 300 users to complete the flow.
I have the following Thread Group configuration:
According to the online resources, since I have 300 users and the ramp-up period is 6 seconds, 50 threads will be added each second. Therefore all 300 threads will be up and running after 6 seconds.
So can I conclude that after 6 seconds JMeter will have 300 active threads accessing the website at the same time?
My second question: when I execute a load test with more than 100 users and view the Sampler result tab in the View Results Tree listener, the following error is triggered, but only for JS and CSS files; when I open the Response data tab for the same request, it is displayed correctly.
Response code: 200
Response message: Embedded resource download
javax.net.ssl.SSLHandshakeException message: Non HTTP response message: Remote host closed connection during handshake
Is it a performance issue with my website, or is it that JMeter cannot download all the JS/CSS files?
Thanks in advance
With regards to your thread configuration, the actual concurrency will depend on your application's response time. JMeter acts as follows:
Each second, JMeter will start 50 users
Each of those 50 users will start executing your 14 requests, one by one from top to bottom
When a user finishes executing all the requests, it is shut down
So given that your application's average response time across all 14 requests is > 500 ms, you should have 300 concurrent users. You can always check how many users were online using the Active Threads Over Time listener. See the JMeter Test Results: Why the Actual Users Number is Lower than Expected article for more detailed information on the topic.
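A back-of-the-envelope version of that reasoning, using the numbers from the question (300 users, 6 s ramp-up, 14 requests per user): all users overlap only if one full iteration outlasts the ramp-up period.

```java
public class ConcurrencyEstimate {
    public static void main(String[] args) {
        int users = 300;
        double rampUpSec = 6.0;
        int requestsPerUser = 14;

        // Threads started per second during ramp-up
        double startRate = users / rampUpSec;
        System.out.println(startRate);

        // For the first user to still be running when the last one starts,
        // one iteration must take at least rampUpSec, i.e. this many ms per request:
        System.out.println(Math.round(1000 * rampUpSec / requestsPerUser));
    }
}
```

So if each request averages roughly 430 ms or more (about the "> 500 ms" figure above), the first thread is still busy when the 300th starts, and you briefly reach full 300-thread concurrency; faster responses mean early threads finish and exit before the ramp-up completes.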
There is too little information to provide an answer; check jmeter.log and your application under test's logs for any clues. One thing is obvious: you should definitely NOT be running JMeter in GUI mode, especially with the View Results Tree listener enabled, as it is too resource-intensive and the side effects might be unpredictable. So repeat your test in non-GUI mode with all listeners disabled, and if the issue remains, inspect the log files.
We have built a web service to serve map tiles, like Google Maps, based on ASP.NET.
The client requires that the response time for 1000 concurrent requests must be less than 1 second.
Currently we use load-balancing hardware: we deploy the service to 4 servers using IIS, then use the load balancer to distribute the requests across the servers.
However, someone suggested that we should not use the load balancer because of browser request limits.
It is said that for a given domain, the number of requests the browser can send at the same time is limited (maybe 10 or more).
So we should make our client application request tiles from the different tile servers directly.
Now I am confused: which is the right way?
It is said that for a given domain, the number of requests the browser can send at the same time is limited (maybe 10 or more)
This is only sort of true. Most browsers won't make more than a few concurrent requests to the same domain. However, there is no set standard or defined limit, and it is often configurable.
How do you know all your users will be accessing your service through the browser?
What happens if you have 1000 concurrent users?
Use a load balancer