Remote API server slowness - performance

Our server calls api.twitter.com and uses Twitter's REST API. Until 3 days ago we had no problems, but since then we have been seeing severe slowness.
According to the Twitter API status page there is no problem, but we see very large delays.
We make 350-400 requests per minute.
Before, requests took 600-700 ms each. (Snapshot image)
Now they take 3600-4000 ms each. (Snapshot image)
It doesn't look like temporary slowness, because it has persisted for nearly 3 days.
What I have checked:
- I haven't made any significant code changes in our repo. Even a minimal request of just one line shows the same slowness.
- I checked server bandwidth with Ookla's Speedtest. It looks good: 800 Mb/s download, 250 Mb/s upload.
- We don't have any CPU, RAM, or disk problems. CPU average is 30%, RAM is 50% used, disk I/O is around 4-5%.
So what are the probable causes?
I can check them and update the question.
(CentOS 6.5, PHP 5.4.36, Nginx 1.6, Apache 2.2.15 running PHP as an Apache module, XCache 3.2.0)
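One way to narrow this down is to time the individual phases of a single request (DNS lookup, TCP connect, TLS setup, time to first byte). A minimal cURL probe along the lines below can show whether the extra ~3 s is spent resolving the name, opening the connection, or waiting for Twitter's response; the endpoint is only an example and any authenticated API call would do (sketch, assumes the PHP cURL extension is available):

```php
<?php
// Rough timing probe for a single HTTPS request (illustrative; swap in the
// real Twitter API call and authentication used by the application).
$ch = curl_init('https://api.twitter.com/1.1/application/rate_limit_status.json');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT        => 30,
]);
curl_exec($ch);
$info = curl_getinfo($ch);
curl_close($ch);

// All values are in seconds; comparing them shows where the time goes.
printf("DNS lookup:         %.3f\n", $info['namelookup_time']);
printf("TCP connect:        %.3f\n", $info['connect_time']);
printf("Pre-transfer (TLS): %.3f\n", $info['pretransfer_time']);
printf("Time to first byte: %.3f\n", $info['starttransfer_time']);
printf("Total:              %.3f\n", $info['total_time']);
```

If almost all of the time sits in "Time to first byte", the delay is on the remote side or in rate limiting; if it is in DNS or connect, the server's resolver or network path is the more likely culprit.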

Related

Random requests to Laravel app take too long and get 504 Nginx error code

My Laravel app randomly (on average 1-3 in 9000 requests) gets 504 Nginx Gateway Time-out errors on various Livewire API calls (various components as well). Sometimes a request gets "stuck" even when just logging in.
The whole app's interface is Livewire-heavy, so most of the "stuck" requests happen in Livewire calls, but to my understanding a 504 error shouldn't be front-end (Livewire) related. I only have basic knowledge of servers, so I'm not sure if that's the case.
The app is hosted on Cloudways. After I opened a ticket, they just raised the request execution limit to 3600 sec and the memory limit to 2048 MB, but nothing changed, because the failing requests weren't memory-demanding in the first place. Later they disabled Varnish, but that didn't help either. After that they said the server is fully optimized and they are no longer looking for the problem; they will only change server settings if I name which ones to change.
While testing, I ran a script that adds and removes one character in a customer search bar, so I'm always making the same two queries. What I found out:
- Most responses are fast, but sometimes the same request (same query and everything) takes 3+ minutes or even longer and gets a "504 Nginx" code.
- Laravel Debugbar shows that the app took only 41.13 ms (using 6 MB) to run that request and received it at, for example, 2022-05-16 16:38:51, but the browser shows "Requested on 2022-05-16 16:35:24". And it's true, because I had waited 3 min 44 s for that simple response.
- When "stuck", the same request can be lost for different durations - it can be 1 min or 15 min - but the moment I make another request from a different Livewire component, both the new and the old ("stuck") request instantly get responses.
- [Probably a Livewire feature] While waiting for a response, Livewire tracks all changes in the search bar, and after the response or 504 error it sends a request with an array called "payload" containing all of the recorded changes, sometimes hundreds of items (if I didn't stop the script while the request was "stuck").
I haven't encountered this issue on my local machine while using Laravel Valet.
Server services (Cloudways doesn't show much):
- Apache
- Memcached
- MySQL
- Nginx
- PHP-FPM
- Redis
- Varnish (disabled)
Basic info:
- PHP 8.0
- MariaDB 10.4
- Laravel v9.4.1
- Livewire v2.10.4
Available settings (also not many):
- EXECUTION LIMIT: 300 sec
- MEMORY LIMIT: 1024 MB
PHP:
- MAX INPUT VARIABLES: 12500
- MAX INPUT TIME: 299 sec
- OPCACHE MEMORY: 254 MB
- XDEBUG: disabled
NGINX:
- STATIC CACHE EXPIRY: 525600 min
- TLS VERSIONS: 1.2, 1.3
Can it still be Livewire's problem? What should I try changing in the server settings?
Before trying to migrate to another hosting provider, I'd like to try fixing it.
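Since Debugbar already shows the application handling the request in ~41 ms, one useful data point is to log, for every request, how long PHP actually spent on it. If those logged durations stay small while the browser waits minutes, the time is being lost in front of PHP (Nginx/PHP-FPM queueing) rather than inside Laravel. A rough middleware sketch (class name and threshold are purely illustrative, not existing code):

```php
<?php
// app/Http/Middleware/LogRequestTiming.php - illustrative sketch
namespace App\Http\Middleware;

use Closure;
use Illuminate\Support\Facades\Log;

class LogRequestTiming
{
    public function handle($request, Closure $next)
    {
        $phpStart  = $_SERVER['REQUEST_TIME_FLOAT']; // when PHP started handling this request
        $response  = $next($request);
        $elapsedMs = (microtime(true) - $phpStart) * 1000;

        // Hypothetical threshold: anything over 1 s is worth correlating with the
        // Nginx logs and the browser's "Requested on" timestamp.
        if ($elapsedMs > 1000) {
            Log::warning('Slow request inside PHP', [
                'url'         => $request->fullUrl(),
                'duration_ms' => round($elapsedMs, 1),
            ]);
        }

        return $response;
    }
}
```

Registered as global middleware in app/Http/Kernel.php, it covers every request. If nothing over the threshold is ever logged while the 504s keep appearing, the queueing is happening before PHP, and the PHP-FPM pool size (pm.max_children) and Nginx's FastCGI timeouts are the settings worth asking Cloudways about.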

Elasticsearch speed vs. Cloud (localhost to production)

I have a single ELK stack with a single node running in a Vagrant virtual box on my machine. It has 3 indexes, which are 90 MB, 3.6 GB, and 38 GB.
At the same time, I also have a JavaScript application running on the host machine, consuming data from Elasticsearch, which runs with no problem; speed and everything is perfect (locally).
The issue comes when I put my JavaScript application into production, as the Elasticsearch endpoint in the application has to change from localhost:9200 to MyDomainName.com:9200. The application runs fine within the company, but when I access it from home the speed decreases drastically and it often crashes. However, when I go to Kibana from home, running queries there is fine.
The company uses BT broadband with a download speed of 60 Mb/s and 20 Mb/s upload. It doesn't use a fixed IP, so I have to update the A record manually whenever the IP changes, but I don't think that is relevant to the problem.
Is the internet speed the main issue affecting the loading speed outside of the company? How do I improve this? Is the cloud (a CDN?) the only option that would make things run faster? If so, how much would it cost to host it in the cloud, assuming I would index a lot of documents the first time but do at most 10 MB of indexing daily afterwards?
UPDATE 1: Metrics from sending a request from home, using Chrome > Network
Queued at 32.77s
Started at 32.77s
Resource Scheduling
- Queueing 0.37 ms
Connection Start
- Stalled 38.32s
- DNS Lookup 0.22ms
- Initial Connection
Request/Response
- Request sent 48 μs
- Waiting (TTFB) 436.61 ms
- Content Download 0.58 ms
UPDATE 2:
The stalling period seems to be much shorter when I use a VPN?
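For scale (illustrative numbers only): raw bandwidth alone would not explain a 38 s stall. Even a 10 MB response over the office's 20 Mb/s uplink transfers in roughly 10 × 8 / 20 = 4 s, so most of the wait recorded above is connection setup or queueing rather than transfer time.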

Yii2 basic application page - load testing with Apache JMeter freezes the server

I configured the Yii2 basic application template on a Windows server (dual-core processor, 8 GB RAM), with no extra code written other than just installing it. When testing with Apache JMeter with 100 concurrent users over 10 minutes, CPU usage hits 99% and the server freezes. A normal static PHP page placed outside the framework works without any issues under the same test; it takes around 2-3 percent CPU utilisation.
If you're allowing 100 concurrent PHP processes to run on a 2-core CPU, this is more likely an issue with your server configuration - each process gets less than 1% of your CPU, which makes everything really slow. You should limit the number of PHP processes (in the php-fpm config, for example) and queue requests at the web server level - it is better to process 20 concurrent requests at the same time and do it fast than to process 100 and do it slowly.
You should start with the guide's tutorial on Yii optimization:
- Definitely disable debug mode.
- Use a more efficient backend for the cache (like APCu or Redis/Memcached) and for sessions; see the config sketch after this list.
- Disable the Yii autoloader and use the optimized autoloader from Composer: https://www.yiiframework.com/doc/guide/2.0/en/concept-autoloading#using-other-autoloaders
You may also look at the application prepared for basic benchmarks and compare configs.
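As a rough illustration of the debug-mode and cache points above (a sketch against the Yii2 basic template; APCu is only an example backend, Redis or Memcached are configured the same way):

```php
<?php
// config/web.php (sketch) - swap the default FileCache for a shared-memory cache.
// Debug mode is controlled in web/index.php: define('YII_DEBUG', false) for production.
return [
    'id' => 'basic',
    'basePath' => dirname(__DIR__),
    'components' => [
        'cache' => [
            'class'   => 'yii\caching\ApcCache',
            'useApcu' => true, // requires the APCu PHP extension
        ],
        // db, session, etc. unchanged
    ],
];
```

For the autoloader point, composer dump-autoload -o (or installing with --optimize-autoloader) builds Composer's optimized class map.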
As per Patrick's comment, you're comparing a simple PHP page with a more complex framework.
There are tons of possible reasons for your issues:
- misconfiguration of Yii2
- an unrealistic JMeter test (you don't give any information about it)
- an issue in Yii2 itself

Why is the latency of my GAE app serving static files so high?

I was checking the performance of my Go application on GAE, and I thought that the response time for a static file was quite high (183ms). Is it? Why is it? What can I do about it?
64.103.25.105 - - [07/Feb/2013:04:10:03 -0800] "GET /css/bootstrap-responsive.css
HTTP/1.1" 200 21752 - "Go http package" "example.com" ms=183 cpu_ms=0
"Regular" 200 ms seems on the high side of things for static files. I serve a static version of the same "bootstrap-responsive.css" from my application and I can see two types of answer times:
50-100ms (most of the time)
150-500ms (sometimes)
Since I have a ping roundtrip of more or less 50ms to google app engine, it seems the file is usually served within 50ms or so.
I would guess the 150-300ms response time is related to google app engine frontend server being "cold cached". I presumed that retrieving the file from some persistent storage, involves higher latencies than if it is in the frontend server cache.
I also assume that you can hit various frontend servers and get sporadic higher latencies.
Lastly, the overall perceived latency from a browser should be closely approximated by:
(tc)ping round trip + tcp/http queuing/buffering at the frontend server + file serving application time (as seen in your google app logs) + time to transfer the file.
If the frontend server is not overloaded and the file is small, the latency should be close to ping + serving time.
In my case, 50ms (ping) + 35ms (serving) = 85ms, is quite close to what I see in my browser 95ms.
Finally, If your app is serving a lot of requests, they maybe get queued, introducing a delay that is not "visible" in the application logs.
For comparison, I tested a site using tools.pingdom.com.
Pingdom reported a load time of 218 ms.
Here was the result from the logs:
2013-02-11 22:28:26.773 /stylesheets/bootstrap.min.css 200 35ms 45kb
Another test resulted in 238 ms from Pingdom and 2 ms in the logs.
Therefore, I would say that your 183 ms seems relatively good. There are many factors at play:
- your location relative to the server
- whether the server serving the resource is overloaded
You could try serving the files using a Go instance instead of App Engine's static file server. I tested this some time ago: the results were occasionally faster, but the speeds were less consistent. Response time also increased under load, due to an App Engine instance being limited to 10 concurrent requests. Not to mention you will be billed for the instance time.
Edit:
For a comparison with other cloud/CDN providers, see Cedexis's Free Country Reports.
You should try setting caching on static files.

Performance of a Zend SOAP service on LAMP

I have developed 2 SOAP web services in my Zend application. In my development environment (MAMP on a Mac, 8 GB RAM, i7 processor) the performance is really good. When I deploy it to my Ubuntu LAMP server (1 GB RAM, 1 processor) the performance decreases a lot; it's more than 10 times slower.
I have a Java client (an Eclipse auto-generated client from the WSDL). The problem is that the first call is always 4 times slower than the second one. This goes for both my MAMP and LAMP setups.
MAMP
- First call 400 ms
- Second call 100 ms
LAMP
- First call 2 000 ms
- Second call 400 ms
I simply duplicate the request, so the request is exactly the same for the first and second call.
If I manually run the LAMP client several times, the first call comes down to around 900 ms. It feels as if the Zend application has to "start up" something during the first call.
Does anyone have any clue how I can get around this? What I've tried:
- made sure the WSDL is cached
- installed XCache (not shipped with LAMP)
- read tuning tutorials
Thanks in advance!
This performance issue often occurs when you use Zend_Soap_AutoDiscover for WSDL generation. If that is the case in your code, you should consider storing your generated WSDL as a separate XML file and loading it in the Zend_Soap_Server constructor.
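A sketch of that approach for Zend Framework 1 (the service class name and file path are placeholders): generate the WSDL once, write it to disk, and point Zend_Soap_Server at the file so autodiscovery doesn't run on every request.

```php
<?php
// Illustrative sketch - MySoapService and the WSDL path are placeholder names.
$wsdlFile = APPLICATION_PATH . '/../data/my-service.wsdl';

if (!file_exists($wsdlFile)) {
    // One-off generation (e.g. at deploy time) instead of autodiscovery per request.
    $autodiscover = new Zend_Soap_AutoDiscover();
    $autodiscover->setClass('MySoapService');
    $autodiscover->setUri('https://example.com/soap');
    $autodiscover->dump($wsdlFile);         // write the generated WSDL to disk
}

$server = new Zend_Soap_Server($wsdlFile);  // load the pre-generated WSDL
$server->setClass('MySoapService');
$server->handle();
```

With the WSDL stored on disk (and soap.wsdl_cache_enabled left on in php.ini), the autodiscovery reflection no longer runs on every request.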
This looks like a problem with the opcode cache. Without an opcode cache, Zend is really slow, and it gets a nice boost when one is used.
I'd look at Zend Optimizer, eAccelerator, or similar...
That would also explain why it slows down after some idle time (classes/files are wiped from the I/O cache).
