Large time difference for data to reach localhost - laravel

In my Laravel application, I am making an HTTP request to a remote server using the Guzzle library. However, it takes a long time for the data to reach my local host.
The response I get in the browser shows the request taking around a second.
However, if I run ping server_IP I get roughly 175 ms as the average round-trip time.
I also monitored CPU usage while making requests in an infinite loop, but I couldn't see much usage.
I also tried hosting my Laravel application on an nginx server, but I still observe around 1-1.1 seconds of overhead.
What could be causing this delay and how can I reduce it?

There are a few potential reasons.
Laravel is not the fastest framework; a few hundred files need to be loaded on every single request. If your server doesn't have an SSD, your performance will be terrible. I'd recommend creating a RAM disk and serving the files from there.
Network latency. Open up Wireshark and look at all the requests that need to be made. All of them hurt performance, and some of them you cannot get around (DNS, ...). Try to combine the CSS and JS files so that the number of requests is minimised.
The database connection on the server side takes a long time to set up, retrieves lots of data, ...
Bear in mind that in most situations the latency is due to I/O constraints rather than CPU usage, especially in test/preproduction environments where the number of requests per second the server receives rounds to zero.
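One way to check which of these is the culprit is to time the same request from the command line on the machine that runs the Laravel app; curl can break the total time into DNS, connect, TLS and first-byte phases. This is only a sketch, and the URL is a placeholder for the remote endpoint you call through Guzzle.

    # Rough sketch: run from the host that executes the Guzzle call.
    # The URL is a placeholder for your actual remote endpoint.
    curl -o /dev/null -s -w \
      "dns: %{time_namelookup}s  connect: %{time_connect}s  tls: %{time_appconnect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" \
      "https://remote-server.example.com/api/endpoint"

If curl's total is close to the 175 ms ping time while the Guzzle call still takes about a second, the overhead is on the application side; if curl is also slow, the time is going into DNS, the TLS handshake, or the remote server itself.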

Related

Ajax Call reduce response time

I'm trying to reduce the AJAX response time of my website, deployed on Hostinger. It takes around 1.3 seconds to get a response on the live site, whereas it takes 220 ms on my local machine. Is there any way to make it faster on live too? Even with null data it takes that long on live.
The speed of an AJAX response depends on several factors:
The speed of the remote machine (the host). Most of the time the different sites are only virtual hosts multiplexed onto a bigger machine, so if you share your space with other high-traffic sites, yours will not respond quickly. You can ask your provider to give you a spot on another of their servers - that has helped me several times.
The speed of your connection, or your provider's connection, may be the problem. If you have access to another test machine for routing measurements, you can check whether your connection is really the slow part. If it is not, you can ask your provider to re-route the connection if they are able to.
Does your AJAX call make heavy use of databases (e.g. MySQL)? It could be a slow database installation (maybe too many users). Normally you can move the database yourself or contact the provider to move your databases to another virtual machine.
You see - if it runs fast on your computer and slowly at your provider, then most of the time the problem is on the provider's side and you should contact them.
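A quick way to tell whether the 1.3 seconds comes from the host itself or from the PHP/database work is to compare the time-to-first-byte of a static file against the AJAX endpoint on the live server. This is only a sketch; both URLs are placeholders for your real site and endpoint.

    # Placeholders: substitute your real domain and AJAX endpoint.
    # If the static file answers quickly but the endpoint does not,
    # the time is going into PHP/database work, not the connection to the host.
    curl -o /dev/null -s -w "static  ttfb: %{time_starttransfer}s\n" "https://your-site.example.com/robots.txt"
    curl -o /dev/null -s -w "ajax    ttfb: %{time_starttransfer}s\n" "https://your-site.example.com/your/ajax-endpoint"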

Testing 10.000 VU in JMeter in 10 seconds

I need to test 200,000 VU hitting an app in 10 seconds, so I started with a test of 10,000 VU, running JMeter in non-GUI mode, to see how my computer, my internet connection and the site respond, but I got 83.50% errors.
95% of the errors were these:
Non HTTP response code: java.net.ConnectException/Non HTTP response message: Connection timed out: connect
Does this mean that the internet connection was not enough for the short duration of the test?
Thanks.
Running 200K users
Generally speaking, in traditional HTTP, running 200,000 users from one machine is impossible: there aren't that many ports. Even if you maximize your port usage (and you will likely need to change OS settings to do that, since the OS usually limits the number of open ports to somewhere between 1,000 and 10,000), JMeter will have about 64,500 ports to run requests on. Each JMeter HTTP sampler needs a separate port, so you need 200K ports, and thus at least 4 machines to run 200K requests concurrently.
But that may not be enough: if you run more than one request sequentially (as most performance tests do), you will be able to run even fewer concurrent requests, since ports are usually not closed right away after a request finishes, so the next request has to use a different port.
Don't forget that the server must also be able to receive a similar load.
But even that may not be enough: JMeter needs enough memory to accommodate 10-30K threads, and the size of a thread in memory depends on a few things, the design of your script among them.
Bottom line: even with all the tweaking, port availability realistically limits the number of concurrent users JMeter can run from one machine to 10-30K. So to test 200K users, you need about 7-20 JMeter machines.
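Assuming a Linux load generator, you can at least check and widen the ephemeral port range with sysctl, and shorten how long half-closed sockets linger, before adding more machines. The values below are illustrative placeholders, not recommendations.

    # Show the current ephemeral port range (often 32768-60999 by default).
    sysctl net.ipv4.ip_local_port_range

    # Widen it to roughly 64K usable ports and shorten FIN-WAIT-2 linger.
    # Illustrative values only - tune for your own environment.
    sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
    sudo sysctl -w net.ipv4.tcp_fin_timeout=30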
Running 10K users
If you were testing in a dedicated environment (where clients and servers sit next to each other with an optimized network between them), you should be able to run 10K users from one machine, provided other limits, e.g. memory and max ports, were properly tweaked. But it sounds like you are trying to test over an internet connection?
Well, there are 2 problems here:
Performance testing over an internet connection is absolutely pointless. You don't know what sits between you and the servers, or how those things in between change the shape of the load. You won't know whether it was 10K concurrent requests or 10K sequential requests, and the results will only tell you how fast your internet connection is.
Any ISP will have a limit on the number of connections from one IP, and it will be well below 10K. Not to mention that some ISPs may flag or temporarily ban your IP for such a flood.
Bottom line: whoever asked you to test 10K or 200K concurrent users should also provide a set of JMeter machines to run the test from. Those machines should be close to the tested servers, preferably without any extra routing in between (or with well-known and well-configured routing).
I don't think that stressing your application by kicking off 200k users at once is a good idea (the same applies to 10k users), as the results, even in case of success, won't tell the full story. Moreover, in case of errors you will only be able to state that 10k users in 10 seconds is not possible; you won't have information such as:
What was the number of users when errors started occurring
What is the correlation between number of concurrent users and response time and/or throughput
What is the saturation point (the maximum system performance)
So I would recommend re-running your test, increasing the load gradually from one virtual user to 10,000, and seeing where it breaks. The breaking point is the bottleneck, and its cause can be tracked down as follows:
First of all, make sure you're following the JMeter Best Practices, as the default JMeter configuration is not suitable for high loads, and if JMeter cannot send requests fast enough you will not get accurate results. Most probably you will have to run JMeter in distributed mode; it is highly unlikely you will be able to mimic 20k requests per second from a single machine (or it would have to be a very powerful one). See the command sketch after this list.
Set up monitoring of the application under test in order to ensure that it has enough headroom in terms of CPU, RAM, disk, etc. You can use the JMeter PerfMon Plugin for this.
Check your application infrastructure: like JMeter, most middleware components (web/application servers, load balancers, databases, etc.) ship with default configurations that are suitable for development and debugging; they need to be tuned for high throughput.
Check your application code using profiler tool telemetry; the cause could be, for example, a slow DB query, an inefficient algorithm, a large object, a heavy function, etc.
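As a sketch of the non-GUI / distributed run mentioned above; the test plan name, heap size and remote host IPs are all placeholders.

    # Placeholders: plan.jmx, the heap size and the remote host IPs.
    # -n = non-GUI, -t = test plan, -l = results log, -e -o = HTML report.
    JVM_ARGS="-Xms4g -Xmx4g" jmeter -n -t plan.jmx -l results.jtl -e -o report/

    # Distributed run: jmeter-server must already be running on each remote host.
    jmeter -n -t plan.jmx -R 10.0.0.11,10.0.0.12,10.0.0.13 -l results.jtl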

Web app initial load time

I am using a shared hosting plan at Bluehost to host a golf tournament live scoring mobile web app. I am caching everything I can on Cloudflare, and have spent quite some time on overall optimization of the initial download and rendering times. There might be more I could do, but without question my single biggest issue is the initial call to my website: www.spanishpointscup.org. Sometimes this seems to be related to DNS lookup and other times to Waiting (TTFB).
I took two screenshots of the network calls that show the variation in loading my index.html. Sometimes this initial file load can be even longer. Very rarely do any of the other files create a long delay, so my only focus now is the initial file load. I think that even with server-side rendering I would still have this issue.
Does anyone have specific recommendations that they are confident will help me? Switch to a VPS or another host? Thank you.
This is typical when you use a shared server.
DNS has nothing to do with the issue; DNS affects the request, not the response. It is the browser that must resolve the domain name to an IP address.
The delay you are seeing is due to the server being busy: your page sits in a queue waiting behind other processes. Possibly you have a CPU-grabbing neighbor on your shared server, or Bluehost has some performance issues.
You will likely notice some image files take an excessively long time to transmit. Which image is slow will appear to be random with each fresh (not in cache) page load.
UPDATE
After further review I noticed the "wait" times are excessive. Wait time is shown in green on your waterfall; notice how the transmit time (blue) is short. The wait time is how long it takes the server to retrieve the page from disk and put it into the transmit buffer, and 300-400 milliseconds of it is excessive.
Find a new service provider.
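If you want to confirm that diagnosis before switching, a sketch with curl (using the site named in the question) is to sample the time-to-first-byte of the same page a handful of times and watch the variance.

    # Sample index.html a few times and watch the two columns.
    # Large, erratic ttfb values point at server-side queueing on the shared host;
    # consistently large dns values would instead point at name resolution.
    for i in 1 2 3 4 5; do
      curl -o /dev/null -s -w "dns: %{time_namelookup}s  ttfb: %{time_starttransfer}s\n" \
        "http://www.spanishpointscup.org/"
    done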

It is not possible to download large files from a Jetty server

I ran a few download tests against a Jetty 9 server, making multiple simultaneous downloads of a single file of roughly 80 MB. With a smaller number of downloads, where no download takes longer than 55 seconds, everything usually completes. However, if any download is still in progress after 55 seconds, its network flow simply stops and nothing more arrives.
I have already tried setting the Jetty timeout and buffer, but this has not worked. Has anyone had this problem, or have any suggestions on how to solve it? The same tests against IIS and Apache work very well. I use JMeter for the testing.
Marcus, maybe you are just hitting Jetty bug 472621?
Edit: The mentioned bug concerns a separate timeout in Jetty that applies to the total operation, not just idle time. So by setting the http.timeout property you essentially define the maximum time any download is allowed to take, which in turn may cause timeout errors for slow clients and/or large downloads.
Cheers,
momo
A timeout means your client isn't reading fast enough.
JMeter isn't reading the response data fast enough, so the connection sits idle long enough that it hits the idle timeout and disconnects.
We test with 800MB and 2GB files regularly.
Using HTTP/1.0, HTTP/1.1, and HTTP/2 protocols.
Using normal (plaintext) connections, and secured TLS connections.
With responses being delivered in as many Transfer-Encodings and Content-Encodings as we can think of (compressed, gzip, chunked, ranged, etc.).
We do all of these tests using our own test infrastructure, often spinning up many, many Amazon EC2 nodes to perform a load test that can sufficiently exercise the server (a typical test is 20 client nodes to 1 server node).
When testing large responses, you'll need to be aware of the protocol (HTTP/1.x vs HTTP/2) and how the persistence behavior of that protocol can change the request/response latency. In the real world you won't have multiple large requests back to back on the same persisted connection via HTTP/1 (on HTTP/2 the multiple requests would be parallel and sent at the same time).
Be sure you set up JMeter to use HTTP/1.1 and not use persistent connections (see the JMeter documentation for help with that).
Also be aware of your bandwidth when testing; it's very common to blame a server (any server) for not performing fast enough when the test itself is sloppily set up and has expectations that far exceed the bandwidth of the network itself.
Next, don't test from the same machine; this sort of load test needs multiple machines (1 for the server and 4+ for the clients).
Lastly, when load testing, you'll want to become intimately aware of the networking configuration on your server (and, to a lesser extent, your client test machines) to maximize it for high load. Default OS configurations are rarely sufficient for proper load testing.
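If you want to reproduce the slow-client scenario outside of JMeter, a sketch is to throttle a single download with curl and see whether the server cuts it off once it outlives the suspected timeout. The URL and rate are placeholders; at about 1 MB/s an 80 MB file takes roughly 80 seconds, comfortably past the 55-second mark described in the question.

    # Placeholder URL; --limit-rate throttles the client read side so the
    # download deliberately outlives the suspected total-operation timeout.
    curl -o /dev/null --limit-rate 1m "http://your-jetty-host:8080/files/big-80mb.bin"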

How many users should an EC2 micro instance be able to handle with only an nginx server?

I have an iOS social app.
This app talks to my server to do updates and retrieval fairly often, mostly small text as JSON. Sometimes users will upload pictures that my web server will then upload to an S3 bucket. No pictures or any other type of file will be retrieved from the web server.
The EC2 micro Ubuntu 13.04 instance runs PHP 5.5, PHP-FPM and NGINX. Caching is handled by ElastiCache using Redis, and the app connects to a separate m1.large MongoDB server for its database. The content can be fairly dynamic, as the newsfeed is dynamic.
I am a total newbie in regards to configuring NGINX for performance and I am trying to see whether I've configured my server properly or not.
I am using Siege to test my server load, but I can't find any statistics on how many concurrent users / page loads my system should be able to handle, so I don't know whether I've done something right or wrong.
How many concurrent users / page loads should my server be able to handle?
I guess, if I can't get hold of statistics from experience, what would count as easy, medium, and extreme load for my micro instance?
I am aware that there are several other questions asking similar things. But none provide any sort of estimates for a similar system, which is what I am looking for.
I haven't tried nginx on a micro instance, for the reasons Jonathan pointed out: if you consume your CPU burst you will be throttled very hard and your app will become unusable.
If you want to follow that path, I would recommend:
Try to cap CPU usage for nginx and php5-fpm to make sure you do not go over the threshold that triggers CPU penalties. I have no idea what that threshold is. I believe the main problem with a micro instance is maintaining consistent CPU availability; if you go over the cap you are screwed.
Try to use fastcgi_cache if possible; you want to hit php5-fpm only when really needed (see the sketch after this list).
Keep in mind that gzipping on the fly will eat a lot of CPU. I mean a lot of CPU, for an instance that has almost no CPU power. If you can use gzip_static, do it, but I believe you cannot.
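A minimal fastcgi_cache sketch for the point above, assuming a stock nginx + php5-fpm setup; the paths, zone size, socket path and cache validity are illustrative placeholders rather than recommendations.

    # http {} context: define the cache zone (path and sizes are placeholders).
    fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=APPCACHE:10m
                       inactive=10m max_size=64m;

    # server {} context: serve PHP through the cache so php5-fpm is only hit on a miss.
    location ~ \.php$ {
        fastcgi_cache       APPCACHE;
        fastcgi_cache_key   "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 1m;
        fastcgi_pass        unix:/var/run/php5-fpm.sock;   # placeholder socket path
        include             fastcgi_params;
        fastcgi_param       SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }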
As for statistics, you will need to gather them yourself. I have statistics for m1.small but none for micro. Start by making nginx serve a static HTML file of only a few KB. Run siege in benchmark mode with 10 concurrent users for 10 minutes and measure. Make sure you are sieging from a stronger machine.
siege -b -c10 -t600s 'http://private-ip/test.html'
You will probably see the effects of CPU throttling just by doing that! What you want to keep an eye on is the transactions per second and how much throughput nginx can serve. Keep in mind that the m1.small maxes out around 35 MB/s, so the micro will be even less.
Then move to a JSON response. Try gzipping. See how many concurrent requests per second you can get.
And don't forget to come back here and report your numbers.
Best regards.
Micro instances are unique in that they use a burstable profile. While you may get up to 2 ECUs of performance for a short period of time, once the instance uses its burst allotment it will be limited to around 0.1 or 0.2 ECU. Eventually the allotment resets and you can burst to 2 ECUs again.
Much of this is going to come down to how CPU/Memory heavy your application is. It sounds like you have it pretty well optimized already.
