How to capture the response time of calls to other APIs within an API with JMeter

I am currently using JMeter to test the response time of an API. Let's call it API A. If API A calls API B, which is hosted on the same server but on a different port, is there a way for me to capture the response time of API B using JMeter?
I realize there is a similar question here that tries to accomplish the same thing, but it does not work for me: I don't see the internal call to API B.

JMeter knows nothing about what's going on under the hood of your application: it sends an HTTP request, waits for the response, and measures the time taken along with some other metrics.
If there is extra activity under the hood of an API call, the only way to capture it is with a profiler tool or an APM tool on the application-under-test side.

You cannot. JMeter is an outsider: it only knows that API A exists, and doesn't know the internal implementation (that API A calls API B).
A better design would be for each API to measure its own total run time and log it to a metrics server. There are a lot of server-side metrics systems you could explore.
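As a minimal sketch of that idea, assuming a Java servlet stack (the filter name and the logging destination below are placeholders for whatever metrics backend you pick):

    import javax.servlet.*;
    import java.io.IOException;

    // Placeholder filter: API B times its own request handling, so the measurement
    // exists regardless of what the caller (or JMeter) can observe from outside.
    public class TimingFilter implements Filter {
        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            long start = System.nanoTime();
            try {
                chain.doFilter(req, res); // the actual request processing
            } finally {
                long elapsedMs = (System.nanoTime() - start) / 1_000_000;
                // Replace with a push to your metrics backend (StatsD, InfluxDB, ...)
                System.out.println("API request took " + elapsedMs + " ms");
            }
        }
    }

Registered in API B (and any other downstream API), this gives you per-service timings that JMeter alone can never see.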

Related

Performance Testing a Non-RESTful Spring Boot App

I have a Spring Boot app that listens to a queue, so it is not RESTful. Is there an easy way to get a breakdown of the amount of time spent in each service?
The options are:
Use a profiler tool when running your application; this way you will be able to trace the time down even to a single function call.
If you have an APM tool in place, you can collect the same information plus metrics from the operating system, database, message broker, etc.
In case you need to generate messages and "feed" them to your application, this can be done with e.g. the Apache JMeter tool; see the Building a JMS Testing Plan article for more information if needed.
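If a full profiler or APM isn't available, a crude alternative is to time each service call in the listener yourself. A minimal sketch, assuming a Spring @JmsListener and hypothetical services:

    import org.springframework.jms.annotation.JmsListener;
    import org.springframework.stereotype.Component;

    // Hypothetical services standing in for whatever the listener actually calls.
    interface ValidationService { void validate(String payload); }
    interface EnrichmentService { void enrich(String payload); }

    // Logs a per-service time breakdown for every message consumed from the queue.
    @Component
    public class TimedOrderListener {
        private final ValidationService validation;
        private final EnrichmentService enrichment;

        public TimedOrderListener(ValidationService validation, EnrichmentService enrichment) {
            this.validation = validation;
            this.enrichment = enrichment;
        }

        @JmsListener(destination = "orders")
        public void onMessage(String payload) {
            long t0 = System.nanoTime();
            validation.validate(payload);
            long t1 = System.nanoTime();
            enrichment.enrich(payload);
            long t2 = System.nanoTime();
            System.out.printf("validate=%dms enrich=%dms total=%dms%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, (t2 - t0) / 1_000_000);
        }
    }

A profiler gives you this breakdown without touching the code, so prefer that if instrumenting every listener is impractical.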

Single API endpoint for entire application using Lambda and API Gateway

Currently we are running a Node.js web app using Serverless. API Gateway uses a single endpoint for the entire application, and routing is handled internally. So basically there is a single HTTP {any+} endpoint for the entire application.
My questions are:
1. What is the disadvantage of this method? (I know Lambda is built for FaaS, but right now we are handling it as a monolithic function.)
2. How many instances can Lambda run at a time if we follow this method? Can it handle a million+ requests at a single time?
Any help would be appreciated. Thanks!
The disadvantage would be, as you say, that it's monolithic, so you've not modularised your code at all. The idea is that adjusting one function shouldn't affect the rest, but in this case it can.
You can run as many as you like concurrently; you can set limits, though (and there are some limits in place initially for safety, which can be raised).
If you are invoking the function regularly it should also 'warm start', i.e. have a shorter boot time after the first invocation.
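For illustration only (the handler and routes below are made up, and rendered in Java although the question's app is Node.js), the monolithic pattern under discussion looks roughly like this, which is why a change to any one branch ships, and can fail, together with all the others:

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
    import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

    // Hypothetical monolithic handler: one function receives every request from
    // the single proxy resource and routes internally.
    public class MonolithHandler
            implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

        @Override
        public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent req, Context ctx) {
            APIGatewayProxyResponseEvent res = new APIGatewayProxyResponseEvent();
            String path = req.getPath();
            if ("/orders".equals(path)) {
                return res.withStatusCode(200).withBody("orders...");
            } else if ("/users".equals(path)) {
                return res.withStatusCode(200).withBody("users...");
            }
            return res.withStatusCode(404).withBody("not found");
        }
    }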

Performance improvement for web services

We have a web service that is called to provide the delivery date of a product during purchase on an eCommerce website.
We are using IBM Sterling Order Management on the backend, with its out-of-the-box (OOB) web service.
This web service (WSDL) is taking too long, more than 40 seconds, which causes timeout exceptions in other integrated systems (middleware).
So we want to improve the performance of this web service. Could you please suggest ways to improve it? Would upgrading the server's spec help? As it is an OOB service, we can't customize it.
First of all you need to figure out the performance bottleneck. To start with, you could put a verbose trace on the OOB web service. Use the logs to see if you can zero in on any particular component or SQL statement consuming the majority of the time. If it's SQL, you can tune/baseline the OOB queries/tables using indexes.
If you have any user exits implemented (for the OOB API), ensure they are lean and aren't making expensive API calls like the changeOrder API.
One question to ask here is whether the web service needs to respond with the actual processing results, or whether it could move the processing to the background (e.g. a separate integration server) and just respond with a simple acknowledgement of the request. If an acknowledgement is enough, you could move the processing to a separate async service, as sketched below.
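As a rough sketch of that acknowledge-then-process idea (the class below is hypothetical; in practice the background work would go to a Sterling integration server or a message broker rather than an in-memory thread pool):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical pattern: the caller gets an immediate ack while the expensive
    // delivery-date computation runs off the request path.
    public class DeliveryDateService {
        private final ExecutorService worker = Executors.newFixedThreadPool(4);

        public String handleRequest(String orderXml) {
            worker.submit(() -> computeDeliveryDate(orderXml)); // slow part, now async
            return "<ack>received</ack>";                       // returns in milliseconds
        }

        private void computeDeliveryDate(String orderXml) {
            // ... the 40-second OOB processing would happen here, in the background
        }
    }

Whether this is acceptable depends on whether the eCommerce front end truly needs the delivery date synchronously in the purchase flow.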
First try to find out where the actual problem is; here are a few pointers:
1) Check in OMS how much time the service takes with the same input you are using to invoke the web service.
2) If the response time from the OMS end is fine, check the network latency/bandwidth.
3) Check the CPU usage while hitting the web service.

Elasticsearch Load Testing

I have a single-node Elasticsearch server running on EC2. I want to do some load testing using search requests with random queries. I am using JMeter for the load testing, with two different approaches:
HTTP client - when I test using this client with 10k/20k/50k requests, it works fine.
ES TransportClient - this works fine only up to approximately 2k requests.
Here are the steps I have followed:
Instantiate the client at the start of every run and close it once the test has finished.
Once the client is instantiated, start the JMeter sampling and send the search requests.
After the run, stop the sampling.
I am getting a NoNodeAvailableException after about 2k requests with the TransportClient.
The ES server is running with 3 GB of memory, and the load tester has been given 6 GB.
Please help me work out whether some config modification is required, or whether I am not using the correct approach to test the load.
Thanks in advance.
What kind of responses are you getting from the HTTP test? Have you verified that you are getting valid responses for all 10k-50k requests? Perhaps your cluster cannot take the load you're putting on it in either test. Since TransportClient is more intimately coupled to the ES server, you will explicitly see the errors it returns, but if you're simply sending requests via HTTP without validating the responses, it's easy to miss issues.
That said, before taking a stab in the dark like I just did, I would also check what QPS you are getting with the HTTP method vs. the TransportClient method, what your CPU/memory look like throughout both tests, what the response times look like, etc. It helps to monitor the health of your system throughout the process to detect any symptoms that might explain the cause.
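As a quick way to rule that out, a standalone check along those lines might look like the sketch below (the URL and index name are placeholders); inside a JMeter test plan, a Response Assertion achieves the same thing:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Hypothetical sanity check: fire N search requests and count anything that
    // is not a 200, so silent failures don't hide inside an HTTP load test.
    public class ResponseCheck {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            int failures = 0;
            for (int i = 0; i < 1000; i++) {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("http://localhost:9200/myindex/_search?q=test"))
                        .GET()
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() != 200) {
                    failures++;
                }
            }
            System.out.println("non-200 responses: " + failures);
        }
    }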

Stress test a server by launching multiple processes

I need to stress test a server with around 3000 users connecting to it concurrently via SyncML clients. To simulate each user, an application needs to be launched, which then connects to the server and performs some operations.
Each user corresponds to one process.
The processes are Unix-based and perform HTTP transactions following the SyncML protocol.
I need to run the load with these 3000 processes for an hour or so.
Can you suggest the best industry methods to fulfil such requirements?
Can JMeter or Locust help me with this?
Regards
You can definitely use Locust for this.
I wouldn't recommend starting processes to generate the load (even though it's possible), mainly because you won't get detailed statistics on what requests are made, how long they take to complete, etc.
Either you could manually do the HTTP POST requests containing the SyncML data with the built-in Locust HTTP client, or you could take something like pysyncml and make your own SyncML client that reports the requests it makes to Locust. It's fairly simple to do; you can read more about it, and see an example, on the documentation page about custom clients.
Yes, JMeter can do this, though it's not clear to me what exactly the Unix-based processes need to do.
JMeter can natively make HTTP POST requests and send XML data (a rough illustration of such a request follows this answer). Unless you have some very custom logic for making the requests, stick to JMeter on its own.
If you must, you CAN execute a local process, but then you're severely limiting the number of users you can simulate per machine.
http://jmeter.apache.org/usermanual/component_reference.html#OS_Process_Sampler
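For reference, each simulated user's transaction is ultimately just an XML POST over HTTP; a minimal standalone sketch (the endpoint and payload below are placeholders, not real SyncML):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Hypothetical illustration of the HTTP transaction each simulated user
    // performs: a SyncML-style XML payload POSTed to the server.
    public class SyncMlPost {
        public static void main(String[] args) throws Exception {
            String body = "<SyncML><SyncHdr><VerDTD>1.2</VerDTD></SyncHdr></SyncML>";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/syncml"))
                    .header("Content-Type", "application/vnd.syncml+xml")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("status: " + response.statusCode());
        }
    }

In JMeter this maps to a single HTTP Request sampler with the body and Content-Type header set, multiplied across the 3000 threads.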
