TTFB too high on Laravel API requests

When I execute an HTTP request to Laravel's API (e.g. /api/devices) via Postman, the execution time is ~1000 ms.
When the same HTTP request is executed from the React-Redux front end, the TTFB (time to first byte) rises to 3000-7000 ms.
SQL query logging shows times of up to 50 ms per query (~10 queries), but the entry point (public/index.php) reports an execution time of only 1-3 ms.
Where should I look for the problem?

Use Barryvdh's Laravel Debugbar.
It will give you full profiling information for your app and show you where the slowdown is.
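The Debugbar can be pulled in with composer require barryvdh/laravel-debugbar --dev. If you first want a rough measurement without extra tooling, here is a minimal sketch of a timing middleware (the class name is made up, and it would need to be registered in the api middleware group in app/Http/Kernel.php); it logs how long each API request spends inside the framework, which you can compare against the SQL times you already measured:

    <?php

    namespace App\Http\Middleware;

    use Closure;
    use Illuminate\Http\Request;
    use Illuminate\Support\Facades\Log;

    class LogRequestTiming
    {
        public function handle(Request $request, Closure $next)
        {
            $response = $next($request);

            // LARAVEL_START is defined at the top of public/index.php, so this
            // spans autoloading, framework boot, middleware and the controller.
            $elapsedMs = (microtime(true) - LARAVEL_START) * 1000;

            Log::info(sprintf('%s /%s took %.1f ms', $request->method(), $request->path(), $elapsedMs));

            return $response;
        }
    }

If the logged total is much larger than the summed query times, the time is going somewhere other than SQL, which is exactly what the Debugbar timeline will break down further.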

Related

JMeter response time confusion: in load testing, what time do I need to consider as the response time?

I am using JMeter for performance testing. I believe elapsed time is the response time, which is what I am considering (i.e., 85 milliseconds). When I send the same request from Postman it takes much less time (i.e., 35 milliseconds), so I want to know whether my JMeter results are correct or not.
Elapsed time consists of:
Connect time (it might include SSL handshake)
Latency
Application response time
Given you're running the same request (URL, body, headers) from the same machine, you should see similar results.
Try running the request more times, e.g. 10 or 100, using newman and JMeter (set the number of iterations in the Thread Group to 100). If you still see differences, consider comparing the requests using an external sniffer tool like Wireshark; it will give you more insight into what's going on under the hood.
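If you want a third measurement that is independent of both JMeter and Postman, you can also repeat the request from the same machine with a small script and average the timing phases yourself. This is only a sketch in plain PHP using curl, with a placeholder URL and iteration count:

    <?php
    // Repeat one request N times and average curl's timing phases,
    // as an independent cross-check against JMeter/newman numbers.
    // $url and $iterations are placeholders - adjust for your endpoint.
    $url = 'https://example.com/api/endpoint';
    $iterations = 100;

    $sums = array('connect' => 0.0, 'ttfb' => 0.0, 'total' => 0.0);

    for ($i = 0; $i < $iterations; $i++) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);

        $info = curl_getinfo($ch);
        $sums['connect'] += $info['connect_time'];         // TCP connect
        $sums['ttfb']    += $info['starttransfer_time'];   // time to first byte
        $sums['total']   += $info['total_time'];           // full elapsed time
        curl_close($ch);
    }

    foreach ($sums as $phase => $sum) {
        printf("avg %-7s %8.1f ms\n", $phase, $sum / $iterations * 1000);
    }

If the averages from this script sit close to one tool but not the other, the difference lies in how that tool builds or times the request rather than in the server.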

Why is the JMeter result different from the user experience result?

We are currently conducting performance tests on both of our web apps; one runs within a private network and the other is publicly accessible. For both apps, a single page load of the landing (initial) page takes only 2-3 seconds from a user's point of view, but when we use BlazeMeter and JMeter, the results are 15-20 seconds. Am I missing something? The 15-20 second result comes from the Load time/Sample time in JMeter, and from the Elapsed column when exported to .csv. Please help, as I'm stuck.
We have tried conducting tests on multiple PCs within the office premises, as well as a remotely accessed PC at another site, and we still get the same results. The number of threads and the ramp-up period are both set to 1 to imitate a single user only.
Where a delta exists, it is certain to mean that two different items are being timed. It would help to understand whether, on your front end, you are timing to a standard metric such as W3C domComplete, Time to Interactive, or First Contentful Paint, or to some other point, and then to compare where this comes into play in the drill-down on the Performance tab of Chrome. Odds are that there is a lot occurring that is not visible to you but is being captured by JMeter.
You might also look at other threads here on how JMeter operates compared to a "real browser". There are differences that could come into play and affect your page comparisons, particularly if you have dozens or hundreds of elements that need to be downloaded to complete your page. Also, pay attention to third-party components where you do not have permission to test their servers.
I can think of 2 possible causes:
Clear your browser history, especially the browser cache. It might be the case that you're getting HTTP status 304 for all requests in the browser because responses are being returned from the browser cache and no actual requests are being made, while JMeter always uses a "clean" session.
Pay attention to the Connect Time and Latency metrics, as it might be the case that the server response time is low but the time for network packets to travel back and forth is very high.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
So basically "Elapsed time = Connect Time + Latency + Server Processing Time"
In general given:
the same machine
clean browser session
and JMeter configured to behave like a real browser
you should get similar or equal timings for the same page

JMeter - How to set max value in Aggregate report

I have a test plan for a REST API with one thread group containing 2 samplers.
While running a load test with
Number of threads (users): 80
Ramp-up period: 1
I get "Response code: 504 Response message: GATEWAY_TIMEOUT" in JMeter.
I observed that when the Max value in the Aggregate graph reaches 60000 ms, all responses get timed out.
What needs to be done to prevent the timeout issue?
The load test works fine when I use 50 users or fewer.
I think you are getting timeouts because at a load of 80+ users the response time shoots up, but your application or REST APIs have a shorter timeout duration configured. Because of the heavy response times you exceed the timeout duration and get those errors.
The simplest solution would be to increase the timeout values, if possible.
Otherwise you need to improve the response time of those REST APIs so that you won't get timeouts.
While doing this, monitor system utilization to make sure the changes are not causing problems elsewhere.
From what you are saying, it seems your application's limit is a load of roughly 60 users with the given configuration.
Please check your ELB settings or application server settings (GlassFish/Apache). The ELB has a default idle timeout of 60 seconds; after that, the ELB expires your request.
However, you can still see the results of those requests in the DB, because the server may have kept processing them even though they took too long to respond.

Does JMeter show the correct average response time for the first page it hits for many virtual users?

I'm load testing a system with 500 virtual users. I've set the "Ramp-Up period (in seconds)" option to zero. So, as I understand it, JMeter will hit the system with 500 virtual users all at the same time. Please correct me if I'm wrong here.
Now, the Summary Report shows that the average response time for the first page is ~100 seconds, which is more than a minute and a half of wait time. But while JMeter was running, I manually went to the same page/URL using a browser and didn't have to wait that long. It was not even close; the page response was almost immediate for me.
My question is: is there any known issue with the average response time of the first page? Is it JMeter that is taking so long to trigger that many users?
Thanks in advance.
--Ishtiaque
There is no issue in JMeter related to first-page response time.
The Summary Report shows all response time details in milliseconds; for your "100" seconds, have you converted milliseconds to seconds (a report value of 100000 is 100 s, while a value of 100 is only 0.1 s)?
Also, in order to make sure that 500 users hit concurrently, use a Synchronizing Timer.
Hope this will help.
While the response times will be accurate, you need to consider the effect of starting so many threads at once on both your server and your client.
Starting 500 threads at once is not insignificant on the client. If your server has the connections, it will start 500 threads as well.
Ramping up over a period of time is more realistic load-wise, but still not really indicative of server capability until the threads have all started and settled in.
Databases can also require a settling-in period, which can affect response times.
An alternative to ramping is introducing a random wait at the start of each thread before firing the first sample. You can then choose not to ramp over time, but you should still expect resources on the client to suddenly come under load, and change the settings if you hit limits. This will make the entire run much more representative of typical behaviour. However, you need to determine whether your use cases are typical.
Although the heap size was increased, I noticed the reported time was still longer than the actual response time. Later I realised it was the probe effect (the extra time a tool adds due to test execution).

Large Waiting time for an HTTP request

I'm developing a website using CakePHP. I'm analyzing the website now using Firebug + YSlow and the Google Chrome developer tools. For an Ajax request I get a large waiting time of about 6 s, while the receiving time is very small at 66 ms, which causes great latency in the request. Does anybody know why the waiting time is so large?
Waiting time: from the time the request is sent until the first byte is received, which involves a round-trip time. There can be latency if your server is far away from your machine. Usually it requires 3 round trips: 1 for the DNS lookup, 1 for establishing the TCP connection, and 1 for the request/response pair.
Receiving time: it will be small if only a small amount of data is being downloaded from the server to the client.
For further reference: http://www.webperformancematters.com/journal/2007/7/24/latency-bandwidth-and-response-times.html
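To put rough numbers on that: with, say, a 150 ms round trip to the server (a figure chosen purely for illustration), the DNS lookup, TCP handshake and request/response pair together account for only about 450 ms. A waiting time of 6 s alongside a receiving time of 66 ms therefore points mainly at server-side processing of the Ajax request rather than at network distance.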
My guess is that you might be performing an SQL query as part of the resource that you are calling via Ajax. If this is the case, you may need to tune your query or indexes to improve the speed of the query. Can you post some code so we can review it?
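If you want to confirm that before posting the code, a minimal sketch along these lines narrows down whether the 6 s is spent in the query itself. The action name, model and conditions below are placeholders for illustration, not taken from the question:

    // Inside the CakePHP controller action that serves the Ajax request.
    public function devices_ajax()
    {
        $start = microtime(true);

        // The suspected expensive query (placeholder model and conditions).
        $rows = $this->Device->find('all', array(
            'conditions' => array('Device.active' => 1),
        ));

        error_log(sprintf('devices_ajax query took %.1f ms', (microtime(true) - $start) * 1000));

        $this->set('rows', $rows);
    }

If the logged time accounts for most of the 6 s, an EXPLAIN on that query (and possibly an index on the filtered columns) is the next step; if not, the time is going into the rest of the action's processing.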
