Magento Redis high CPU usage (redis-rdb-bgsave)

Out of the blue my Magento 2.3 installation crashed: CPU usage suddenly spikes and eventually Magento crashes. There were no updates or anything like that, so that can't be the cause.
The main cause seems to be Redis; after setting stop-writes-on-bgsave-error no in Redis the system didn't crash anymore, but memory and CPU usage are still high.
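For reference, the setting can go in redis.conf or be applied at runtime; only the stop-writes line comes from the report above, the redis-cli call is just the standard runtime equivalent:

stop-writes-on-bgsave-error no

redis-cli config set stop-writes-on-bgsave-error no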

Found the cause: Googlebot was doing something like a DDoS attack, 150k requests per day...
Adjusted robots.txt so less of the site gets crawled.
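For illustration, a minimal robots.txt sketch that reduces crawling; the Disallow paths are examples, not taken from the site above. Note that Googlebot ignores Crawl-delay, so its crawl rate has to be lowered in Google Search Console instead:

User-agent: *
Crawl-delay: 10
Disallow: /catalogsearch/
Disallow: /checkout/
Disallow: /customer/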

Related

High memory and CPU consumption for Rails application on Google Cloud

I have a Compute Engine instance on Google Cloud with a 4-core Ivy Bridge CPU and 15 GB RAM, and on it I have deployed my Rails application.
Before this I hosted my Rails application on DigitalOcean, where I was getting good throughput and CPU and memory consumption were minimal.
Memory consumption never crossed 3 GB on DigitalOcean and CPU consumption maxed out at around 50-55%.
On DigitalOcean I had a single instance with a 4-core CPU and 8 GB RAM; I was even running MySQL, Redis and Sidekiq on the same instance, and it still handled the load easily.
But when I moved to Google Cloud I started facing problems with the same code.
I was actually expecting more throughput from Google Cloud, since Google has data centers in Asia, but instead I ran into issues.
When I restart Apache everything comes back to normal, but after 2-3 hours it starts consuming memory and CPU again, and finally the instance stops responding to requests altogether.
I checked the logs and there is no significant increase in traffic; I also checked the logs during the high-load periods to see whether someone was attacking the servers.
But all the requests I found are from valid browsers with valid user agents.
I don't understand why this is happening.
At first I suspected a DDoS/DoS attack, but didn't find anything suspicious in the logs (Apache access logs and Rails logs).
Please help me.
Hoping for a good solution that I can try, and a way to debug the issue.
Thanks :)
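Since restarting Apache temporarily fixes it, the usual suspect is worker processes leaking memory between restarts. A minimal sketch to confirm that, assuming shell access (the 5-minute interval and the mem.log name are arbitrary):

while true; do
  date >> mem.log
  ps -eo pid,rss,pcpu,comm --sort=-rss | head -15 >> mem.log
  sleep 300
done

If the same Apache/Rails worker PIDs show steadily growing RSS between restarts, it's a leak (or too many workers for 15 GB), not traffic.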

How does memcached work? (details inside)

As I understand it, memcached caches data in memory. I have a WordPress installation with W3TC installed, and my server can use either disk or memcached as the cache backend, so I went with memcached.
When I check the memory usage in cPanel, it's at 0 (physical is at ~100 MB). When I try to load the site, memory usage jumps to 100-300 MB (different values for the two types of memory, but both end up around 300 MB). CPU usage jumps to 100%. It stays like that for a few minutes.
So how does memcached work then? It doesn't make sense to me. Would I be better off using the disk cache instead? The site is also utterly slow, unless I'm reloading it or revisiting pages I've already visited; then it's lightning-fast. The disk cache, however, seems slow-ish in general too...
What am I supposed to do? Is there a way of "fixing" this, if there is indeed something to fix in the first place?
Any info or insight is appreciated,
Thanks!
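One way to check whether memcached is actually being hit is to query its stats counters; a sketch, assuming it listens on the default 127.0.0.1:11211:

printf 'stats\nquit\n' | nc 127.0.0.1 11211 | grep -E 'get_hits|get_misses|curr_items'

Rising get_hits on reloads means the object cache is working; the slow first load is the cache being populated, which would match the fast-on-revisit behavior described above.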

How many users should an EC2 Micro Instance be able to handle with only an nginx server?

I have an iOS social app.
This app talks to my server to do updates and retrieval fairly often, mostly small text as JSON. Sometimes users upload pictures that my web server then uploads to an S3 bucket. No pictures or any other type of file are served from the web server.
The EC2 Micro Ubuntu 13.04 instance runs PHP 5.5, PHP-FPM and nginx. Caching is handled by ElastiCache using Redis, and the database connection goes to a separate m1.large MongoDB server. The content can be fairly dynamic, as the newsfeed is dynamic.
I am a total newbie when it comes to configuring nginx for performance, and I am trying to see whether I've configured my server properly or not.
I am using Siege to load-test my server, but I can't find any statistics on how many concurrent users / page loads my system should be able to handle, so I can't tell whether I've done something right or wrong.
How many concurrent users / page loads should my server be able to handle?
If I can't get hold of statistics from experience, what should count as easy, medium, and extreme load for my micro instance?
I am aware that there are several other questions asking similar things. But none provide any sort of estimates for a similar system, which is what I am looking for.
I haven't tried nginx on a micro instance, for the reasons Jonathan pointed out. If you consume your CPU burst you will be throttled very hard and your app will become unusable.
If you want to follow that path I would recommend:
Try to cap CPU usage for nginx and php5-fpm to make sure you do not go over the threshold that triggers CPU penalties. I have no idea what that threshold is. I believe the main problem with a micro instance is maintaining consistent CPU availability; if you go over the cap you are screwed.
Try to use fastcgi_cache if possible. You want to hit php5-fpm only when really needed (see the sketch after this list).
Keep in mind that gzipping on the fly will eat a lot of CPU, and I mean a lot of CPU (for an instance that has almost no CPU power). If you can use gzip_static, do it. But I believe you cannot.
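A minimal nginx sketch of the fastcgi_cache and gzip_static points; the cache path, zone name, TTL and socket path are example values, not from the answer above:

fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=APP:10m inactive=60m;

server {
    listen 80;
    gzip_static on;                    # serve pre-compressed .gz files if present

    location ~ \.php$ {
        fastcgi_cache APP;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 60s;   # cache successful responses briefly
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

For the CPU cap, one option is the cpulimit tool, e.g. cpulimit -l 50 -p <pid> to hold a process at 50% of one core.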
As for statistics, you will need to gather those yourself. I have statistics for m1.small but none for micro. Start by making nginx serve a static HTML file of just a few KB. Run siege in benchmark mode with 10 concurrent users for 10 minutes and measure. Make sure you are sieging from a stronger machine.
siege -b -c10 -t600s 'http://<private-ip>/test.html'
You will probably see the effects of the CPU throttle just by doing that! What you want to keep an eye on is transactions per second and how much throughput nginx can serve. Keep in mind that m1.small maxes out at about 35 MB/s, so a micro will be even less.
Then move on to a JSON response. Try gzipping. See how many concurrent requests per second you can get.
And don't forget to come back here and report your numbers.
Best regards.
Micro instances are unique in that they use a burstable profile. While you may get up to 2 ECUs of performance for a short period of time, after the burstable allotment is used up the instance is limited to around 0.1 or 0.2 ECU. Eventually the allotment resets and you can get 2 ECUs again.
Much of this is going to come down to how CPU/Memory heavy your application is. It sounds like you have it pretty well optimized already.

High CPU load but low CPU usage and RAM usage

I am running a mobile website that gives the live running status of any train in India: http://www.spoturtrain.com. The code is written in PHP, nginx is used as the web server and php-fpm as the application server; all PHP requests are proxied to the app server. During peak traffic hours in the morning the system load shoots up to 4, but CPU% and memory usage stay low. Please take a look at the snapshot of the top command on the server.
The %CPU displayed in the bottom section is per-thread, i.e. the percentage of one CPU core used by the indicated thread. The CPU(s) section shows the total amount of available CPU being utilized, so it is possible for one thread to report 100% CPU while only 25% (4-core) or 12.5% (8-core) of the overall CPU cycles are being consumed.
Analyzing thread CPU usage on Linux
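To look at per-thread usage yourself, a quick sketch (replace <pid> with the php-fpm or nginx process id):

top -H -p <pid>

or, as a one-shot listing of the busiest threads system-wide:

ps -eLo pid,tid,pcpu,comm --sort=-pcpu | head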
You don't really ask a question, so it's hard to tell whether you want advice or just an explanation of the numbers. As Charles states, a typical "acceptable" load is 1 per CPU core before noticeable performance degradation occurs, but in the case of PHP running on most web servers you may (though probably won't in most cases) start noticing problems at anything above 1. Whether you do will largely depend on your disk and network I/O.
Whether the performance is acceptable for your application isn't something I can answer, but you can take a look at this thread for more entry points into the options for getting your web server to thread requests.
What is thread safe or non thread safe in PHP
Whether or not you can do anything about it depends on your hosting situation.

Performance of memcache on a shared server

Lately I've been experimenting with improving performance on my blog, and not just one-click fixes: I'm also looking at code, in addition to other things like a CDN, caching, etc.
I talked to my host about installing memcache so I can enable it in W3 Total Cache, and he seems to think it will actually hinder my site, as it will instantly max out my RAM usage (which is 1 GB).
Do you think he is right, and should I try it anyway? My blog and forum (MyBB) get a combined 200,000 pageviews a month.
In fact, with 200,000 pageviews a month I would move away from a shared host and buy a VPS or dedicated server. Memcache(d) is a good tool indeed, but there are lots of other ways to get better performance.
Memcached is good if you know how to use it correctly (the W3 Total Cache memcached integration doesn't really do the job).
As a performance engineer I think a lot about speed, but also about server load. I work a lot with WordPress sites, and the way I push performance to the maximum on my servers is to generate static HTML pages of my WordPress sites; this results in zero or minimal access to the PHP handler itself, which increases performance a lot.
On top of that you can add a caching proxy such as Varnish in front of the web server, which caches responses so requests never even touch the web server.
When a client requests your page, Varnish serves the already-processed page directly from memory, which is very fast. You put a TTL on the cached pages, and it can be as low as 50 seconds. 50 seconds doesn't sound like a lot, but 200k pageviews a month averages out to only about 4.6 pageviews per minute if traffic were spread evenly, so even a short TTL absorbs most requests outside peak hours.
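A minimal Varnish sketch of that idea, assuming Varnish 4+ VCL syntax and a backend web server on port 8080 (both assumptions, not from the answer above):

vcl 4.0;

backend default {
    .host = "127.0.0.1";    # web server sitting behind Varnish
    .port = "8080";
}

sub vcl_backend_response {
    set beresp.ttl = 50s;   # cache every response for 50 seconds
}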
When you do one page view, there is a lot of processing going on:
making the first request to the web server, starting the PHP process, grabbing data from the DB, processing the data, rendering the PHP page, and so on. If we only have to do all this for a few requests instead of every one, performance improves a lot.
Often you should be able to generate HTML files of your forum too, renewed every 1-2 minutes when there is a request for the file. Then one request gets processed instead of 4-9 (if not more).
You can limit the amount of memory memcached uses. When the memory is maxed out, the oldest entries are pruned. On Debian the settings live in /etc/memcached.conf (CentOS uses /etc/sysconfig/memcached), and you can set the maximum memory with the -m flag.
In my experience 64 MB or even 32 MB of memcached memory is enough for WordPress and makes a huge difference. Be sure not to cache whole pages (that fills the cache pretty fast); instead use memcache for the WordPress object cache.
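A sketch of that cap, assuming Debian's /etc/memcached.conf layout:

# /etc/memcached.conf
-m 64          # cap cache memory at 64 MB
-p 11211       # default port
-l 127.0.0.1   # listen locally only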
For general performance: make sure you have a recent PHP version (5.3+) and APC installed. For database queries I would skip W3TC and go directly for the MySQL query cache.
