How to perform load testing for many users? - jmeter

I am new to JMeter. I am trying to do load testing for 20 users for the scenario below but am facing some issues.
There are 3 sequential URLs: the request is sent in the first URL, and the response message ("response is processed") is obtained in the 3rd URL.
To obtain the response message we have to refresh the 3rd URL, i.e. the 3rd URL is refreshed until we get a response message like "response is processed". If I am doing load testing for 3 users, the first user may get the response message on the 3rd refresh of the 3rd URL, the 2nd user on the 5th refresh, and the 3rd user on the 8th or 10th refresh, so the response message arrives on some nth refresh of the 3rd URL.
The sample time is calculated for each HTTP request, however I need to calculate how long it takes for 1 user, i.e. the time taken starting from the 1st request until the response message is obtained in the 3rd URL.
There are 2 issues:
I don't know how to set a condition so that the 3rd URL keeps being requested until the response message is obtained.
How can I get the time taken for 1 user across the 3 URLs, i.e. from sending the request to the 1st URL to obtaining the response message in the 3rd URL, instead of the sample time for each URL (HTTP request)?
Can somebody please help me with these issues?

Put your "3rd URL" request under a While Controller and specify the condition so that the request keeps looping until the response matches your expectations.
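A minimal sketch of such a condition, assuming the success text is literally "response is processed" (that exact text is an assumption about your application), using the __groovy function to inspect the previous sampler's response:
${__groovy(!prev.getResponseDataAsString().contains("response is processed"))}
The loop keeps running while that text is absent; adding a timer (e.g. a Constant Timer) under the sampler gives the server a pause between refreshes.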
You can measure the whole sequence's execution time by putting all 3 requests under a Transaction Controller.

Related

JMeter View results tree listener displaying duplicate https requests

The View Results Tree listener is displaying duplicate HTTPS requests: 1 request is without a response and 1 request has a response. I need only 1 request. How can I fix it?
The sample with the response message "Number of samples in transaction : 1, number of failing samples : 0" is for the Transaction Controller, while the other is the actual request with its response.
Select the "Generate parent sample" checkbox in the Transaction Controller and you will be able to see the desired results.
The Transaction Controller will always combine the response times of the underlying HTTP requests. If you don't want to see this in the results, use a Simple Controller instead.
Look into the "Sampler Result" tab of the View Results Tree listener.
If you see one of the HTTP 3xx status codes, it means you're being redirected, so it's absolutely normal to see a blank response in the case of a redirection.
You can control how JMeter handles redirect responses via the "Redirect Automatically" and "Follow Redirects" checkboxes in the HTTP Request sampler.
However, remember that you don't "need only 1 request"; you need exactly the same number of requests as a real browser sends. Inspect how many requests the real browser makes using your favourite browser developer tools and ensure that JMeter sends the same number of requests, with the same nature.

Mailchimp API Get last campaign sent

Is there any direct way to get the last campaign sent on a Mailchimp account via Mailchimp API V3?
So far the only way I found was to iterate over the campaigns and get the last one but it takes too much time.
Thanks in advance.
This is what I did:
/campaigns?sort_field=send_time&sort_dir=DESC&status=sent&count=1
Or for anyone who needs the last campaign from a particular folder:
/campaigns?folder_id=[FOLDER ID]&sort_field=send_time&sort_dir=DESC&status=sent&count=1
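For reference, a minimal sketch of that call in Python using the requests library (the us1 data-center prefix and the API key are placeholders, not values from the original answer):
import requests

DC = "us1"                # placeholder: your account's data center
API_KEY = "your-api-key"  # placeholder

resp = requests.get(
    f"https://{DC}.api.mailchimp.com/3.0/campaigns",
    params={"sort_field": "send_time", "sort_dir": "DESC", "status": "sent", "count": 1},
    auth=("anystring", API_KEY),  # Mailchimp v3 accepts any username with the API key
)
last_campaign = resp.json()["campaigns"][0]
print(last_campaign["id"], last_campaign["send_time"])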
I don't think you can get only the last campaign sent using only one request, but you can achieve this by making two requests to the following endpoint
/campaigns
as described here. The parameters that you need are count, status, and offset.
For the first request, set the count parameter to 1 and the status parameter to sent. You will get the first sent campaign, but you will also get total_items in the response body. total_items indicates the total number of sent campaigns in your MailChimp account regardless of pagination, and that's what you need to make the second request.
For the second request, set the count parameter to 1, the status parameter to sent, and the offset parameter to total_items - 1. For example, if total_items from the first request is 150, set offset to 149, which skips the first 149 sent campaigns. The campaigns field in the response of the second request will then contain the last campaign sent from your MailChimp account, which is what you're looking for. This is much quicker than enumerating all of the sent campaigns.
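A minimal sketch of this two-request approach in Python (same placeholder data center, API key, and requests library assumptions as above):
import requests

DC, API_KEY = "us1", "your-api-key"  # placeholders
BASE = f"https://{DC}.api.mailchimp.com/3.0/campaigns"
AUTH = ("anystring", API_KEY)

# First request: only needed to read total_items from the response body
total = requests.get(BASE, params={"count": 1, "status": "sent"}, auth=AUTH).json()["total_items"]

# Second request: skip all but the last sent campaign
last = requests.get(
    BASE,
    params={"count": 1, "status": "sent", "offset": total - 1},
    auth=AUTH,
).json()["campaigns"][0]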

HTTP GET vs POST for Idempotent Reporting

I'm building a web-based reporting tool that queries but does not change large amounts of data.
In order to verify the reporting query, I am using a form for input validation.
I know the following about HTTP GET:
It should be used for idempotent requests
Repeated requests may be cached by the browser
What about the following situations?
The data being reported changes every minute and must not be cached?
The query string is very large and greater than the 2000 character URL limit?
I know I can easily just use POST and "break the rules", but are there definitive situations in which POST is recommended for idempotent requests?
Also, I'm submitting the form via AJAX and the framework is Python/Django, but I don't think that should change anything.
I think that using POST for this sort of situation is acceptable. Citing the HTTP 1.1 RFC:
The action performed by the POST method might not result in a
resource that can be identified by a URI. In this case, either 200
(OK) or 204 (No Content) is the appropriate response status,
depending on whether or not the response includes an entity that
describes the result.
In your case a "search result" resource is created on the server, which adheres to the HTTP POST request specification. You can either return the result resource as the response, or return a separate URI to the just-created resource, which can be deleted once it is no longer necessary after one minute (i.e. as you said, the data changes every minute).
The data being reported changes every minute
Based on your statement above, every request you make is going to create a new resource.
Additionally, you can return a 201 status and a URL to retrieve the search result resource. I'm not sure whether you want this sort of behavior, but I am providing it as a side note.
The second part of your first question says results must not be cached. This is something you configure on the server: return the necessary HTTP headers (Cache-Control, Expires, etc.) to force intermediary proxies and clients not to cache the result.
Your second question is already answered: you have to use a POST request instead of a GET request because of the URL character limit.
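Since the question mentions Python/Django, here is a minimal sketch of a POST-only report view that also sends the no-cache headers; the view name and the run_report_query helper are hypothetical:
from django.http import JsonResponse
from django.views.decorators.cache import never_cache
from django.views.decorators.http import require_POST

@never_cache   # adds Cache-Control/Expires headers so clients and proxies don't cache the result
@require_POST  # the large query travels in the request body instead of the URL
def report(request):
    filters = request.POST                 # validated form data from the AJAX submission
    data = run_report_query(filters)       # hypothetical helper running the read-only query
    return JsonResponse({"results": data})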

Handling processing overhead due to request time out

Consider a service running on a server for a customer c1. Customer c1 times out after 'S' seconds for whatever reason, so the customer fires the same request again; the server then runs a duplicate query and gets overloaded. How can this be resolved? Please help!
I assume you are on the server side and hence cannot control multiple requests coming in from the same client.
Every client should have an IP address associated with it. In your load balancer (if you have one) or in your server, keep an in-memory cache that tracks all requests: their IP addresses, the timestamp when the request originated, and the timestamp when request processing finished. Next, define an appropriate time window, which should be roughly the 70-80th percentile of the processing time across all your requests. Let's say X seconds.
Now, before you accept any request at your load balancer/server, check this in-memory cache to see whether the same IP has already sent the same request and the time elapsed since that request is less than X. If so, do not accept this request; instead send a custom error stating something like "previous request still under processing, please try again after some time".
If an IP address is not enough to identify a client, because the same client may be sending requests to different endpoints on your server for different services, then store another identifier, which may be a kind of token/session identifier, such as c1 or the customer ID in your case. Ideally, a customer can send only 1 request from 1 IP address to an endpoint at any one point in time. If you also have mobile and web interfaces, you can add the channel type (web/mobile/tablet) to the list of identifying parameters as well.
So now a combination of customer ID (c1), IP address, request URL, request time, and channel type will always be unique for an incoming request. Using a key built from these parameters in your cache to look up information about a request, and validating whether to start processing it or to send a custom error message, prevents overloading the server with re-requests and should solve the problem defined above; a sketch of this check follows below.
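A minimal sketch of that check in Python; the key fields, the X-second window, and the plain in-process dict are illustrative assumptions (a shared cache such as Redis would be more realistic behind a load balancer):
import time

X = 30           # seconds; roughly the 70-80th percentile of processing time (assumption)
in_flight = {}   # request key -> timestamp when processing started

def request_key(customer_id, ip, url, channel):
    # Combination of identifying parameters described above
    return (customer_id, ip, url, channel)

def should_process(customer_id, ip, url, channel):
    key = request_key(customer_id, ip, url, channel)
    started = in_flight.get(key)
    if started is not None and time.time() - started < X:
        return False  # duplicate re-request: reply with a custom "still processing" error
    in_flight[key] = time.time()
    return True

def mark_finished(customer_id, ip, url, channel):
    # Call once processing completes so later, genuine requests are accepted
    in_flight.pop(request_key(customer_id, ip, url, channel), None)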
Note: 'S' seconds, i.e. the client-side timeout, is not in our control, so it should not concern the server side and has no bearing on the design detailed above.

POST call with same headers and same request body gives different response?

I was using a website (mysite1.com), and 3 screens deep from the login screen is the screen that I want to reach using bash and curl, simulating the exact same request that would have gone through the browser. By simulating I mean sending exactly the same headers (including Referer and Origin).
Here is what is happening:
I was able to get past the login screen and screen 2 by simulating the browser behaviour.
Now I am stuck at screen 3. The POST call that goes to the mysite1 server is the same as the one the browser would send, every bit of it.
To make the POST call for the 3rd screen, I create a form on localhost with action="URLOf3rdScreenOnMysite1" and method=post. Before submitting, I change the Referer, Origin, and other headers using a browser extension.
This is generating the request that I mentioned in point 2.
However, the first two calls, for screens 1 and 2, were made in bash.
No cookies are used by mysite1; the session_id is passed as a GET query string parameter. I assumed that the server was probably keeping track of the flow of URLs requested, but I got the error response even when I followed the flow using bash.
The POST call for the 3rd screen returns a different response (an error response) when I try to simulate it, even when the flow of URLs requested is the same in both cases. How can this be possible? How does the server know that these requests are different, the one from the browser and the other from bash + last-screen-from-browser? Are there any other parameters involved besides the headers + POST data + URLs requested? Maybe a different connection is established when calling the 3rd screen from the browser?
