How does memcached work? (details inside)

As I understand it, memcached caches data in RAM (virtual/physical memory). I have a WordPress installation with W3TC installed, and my server can use either disk or memcached for caching, so I went with memcached.
When I check the memory usage in cPanel, it's at 0 (physical is at ~100 MB). When I try to load the site, memory usage jumps to 100-300 MB (different values for the two types of memory, but both end up around ~300 MB) and CPU usage jumps to 100%. It stays like that for a few minutes.
So how does memcached work, then? This doesn't make sense to me. Would I be better off using the disk cache instead? The site is also utterly slow, unless I'm reloading it or revisiting pages I've already visited - then it's lightning-fast. The disk cache, however, seems slow-ish in general too...
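For what it's worth, here is one way to check whether memcached is actually being hit (a sketch, assuming shell access to the server; 11211 is memcached's default port):

# ask memcached for its counters; non-zero get_hits/get_misses mean W3TC is talking to it
echo -e "stats\nquit" | nc 127.0.0.1 11211 | grep -E "get_hits|get_misses|curr_items|bytes"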
What am I supposed to do? Is there a way of "fixing" this, if there is something to fix in the first place of course?
Any info or insight is appreciated,
Thanks!

Related

Magento Redis CPU usage High redis-rdb-bgsave

Out of the blue my Magento 2.3 installation crashed: the CPU suddenly spikes and eventually Magento crashes. There were no updates or anything like that, so that can't be the cause.
The main cause seems to be Redis; after setting stop-writes-on-bgsave-error no in Redis the system didn't crash anymore, but memory and CPU usage are still high.
Found the cause: Google was doing something like a DDoS attack - 150k requests per day...
Adjusted robots.txt to reduce crawling; a sketch of what that can look like is below.
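(The paths here are hypothetical examples for a Magento shop; also note that Googlebot ignores Crawl-delay, so its rate has to be limited via Search Console instead.)

# robots.txt - keep crawlers out of URL variations that explode into endless requests
User-agent: *
Disallow: /catalogsearch/
Disallow: /*?SID=
Disallow: /*?limit=
# honored by Bing/Yandex etc., ignored by Googlebot
Crawl-delay: 10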

Is a hard restart of redis required to free memory?

I recently came upon an SO question where the OP asked in which scenarios Redis frees up memory. It seems they were told a hard restart is a potential way, but this is untested in the case of Redis. Can anyone tell me for sure whether this works?
I have a live environment and I don't want to have to restart redis-server, but its memory footprint is debilitating now and I'm on the verge of a server migration. So it's important for me to remove as much bloat as possible (and there's a ton of bloat).
I'm not sure what you mean by "bloat", but attaching your server's INFO ALL output may be helpful.
By default, Redis uses jemalloc as a memory allocator. The allocator is in charge of actually freeing RAM for the OS to reclaim, after Redis frees it. Redis v4 and above include the ability to force the allocator to purge the freed RAM (MEMORY PURGE, see https://github.com/antirez/redis-doc/pull/851).
Regardless of purge, there's also the matter of memory fragmentation. While v4 has the experimental active defrag feature, a restart is the way to "fix" that in prior versions.
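For example (a sketch, assuming Redis 4+ on the default jemalloc allocator; activedefrag only works if your build supports it):

# check fragmentation first: a mem_fragmentation_ratio well above 1.0 means the allocator is holding freed RAM
redis-cli INFO memory | grep -E "used_memory_human|used_memory_rss_human|mem_fragmentation_ratio"
# ask the allocator to hand freed pages back to the OS
redis-cli MEMORY PURGE
# optionally enable the experimental active defragmentation
redis-cli CONFIG SET activedefrag yes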
To avoid the downtime of a full restart, use Redis' replication to create a slave and fail your apps over to it before restarting the original master.
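A rough sketch of that failover (hostnames are placeholders; SLAVEOF is the pre-v5 name of the replication command):

# on the standby box: start replicating from the current master
redis-cli -h new-host SLAVEOF old-master 6379
# wait until INFO replication reports master_link_status:up
redis-cli -h new-host INFO replication
# repoint the apps at new-host, then promote it to master
redis-cli -h new-host SLAVEOF NO ONE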

Cache a static file in memory forever on Nginx?

I have Nginx running in a Docker container, and it serves some static files. The files will never change at runtime - if they actually do change, the container will be stopped, the image will be rebuilt, and a new container will be started.
So, to improve performance, it would be perfect if Nginx would read the static files only a single time from disk and then serve them from memory forever. I have found some configuration options for caching, but at least from what I have seen, none of them provides this "forever" behavior that I'm looking for.
Is this possible at all? If so, how do I need to configure Nginx to achieve this?
Nginx as an HTTP server cannot do memory-caching of static files or pages.
Nginx is a capable and mature HTTP and proxy server, but there seems to be some confusion about its capabilities with respect to caching. Nginx cannot memory-cache files when running as a pure web server. And... wait, what!? Let me rephrase: the Nginx HTTP server cannot memory-cache files or pages.
Possible Workaround
The Nginx community's answer is: no problem, let the OS do memory caching for you! The OS is written by smart people (true) and knows the what, when, where, and how of caching (a mere opinion). So, they say, cat your static files to /dev/null periodically and just trust it to cache your stuff for you! For those wondering what cat-ing files to /dev/null has to do with caching: read on to find out more (hint: don't do it!).
How does it work?
It turns out that Linux is a fine-tuned beast that's hawk-eyed about what goes in and out of its cache. That cache is called the page cache: the memory store where frequently-accessed files are partially or entirely kept so they're quickly accessible. The kernel keeps track of which files are cached in memory, when they need to be updated, and when they need to be evicted. The more free RAM is available, the larger the page cache and the "better" the caching.
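You can watch the page cache at work with a quick test (the path is just an example; timings are illustrative):

# read the same file twice: the first read hits the disk, the second is served from the page cache
time cat /var/www/static/big.file > /dev/null
time cat /var/www/static/big.file > /dev/null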
The operating system does in-memory caching by default; it's called the page cache. In addition, you can enable sendfile to avoid copying data between kernel space and user space.
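A sketch of the relevant Nginx settings (note that open_file_cache caches file descriptors and stat() metadata, not file contents - the contents themselves live in the OS page cache):

http {
    # hand static files to the kernel instead of copying them through userspace
    sendfile on;
    tcp_nopush on;
    # cache open file descriptors and metadata, not the file data itself
    open_file_cache max=1000 inactive=60s;
    open_file_cache_valid 30s;
    open_file_cache_errors on;
}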

How many users should an EC2 micro instance be able to handle with only an Nginx server?

I have an iOS social app.
This app talks to my server to do updates & retrievals fairly often, mostly small text as JSON. Sometimes users will upload pictures, which my web server then uploads to an S3 bucket. No pictures or any other type of file will be served from the web server.
The EC2 micro Ubuntu 13.04 instance runs PHP 5.5, PHP-FPM and Nginx. Caching is handled by ElastiCache using Redis, and the database is a separate m1.large MongoDB server. The content can be fairly dynamic, since the newsfeed is dynamic.
I am a total newbie in regards to configuring NGINX for performance and I am trying to see whether I've configured my server properly or not.
I am using Siege to test my server load, but I can't find any statistics on how many concurrent users / page loads my system should be able to handle, so I don't know whether I've done something right or something wrong.
What number of concurrent users / page loads should my server be able to handle?
And if I can't get hold of statistics from experience, what would count as easy, medium, and extreme load for my micro instance?
I am aware that there are several other questions asking similar things. But none provide any sort of estimates for a similar system, which is what I am looking for.
I haven't tried nginx on a micro instance, for the reasons Jonathan pointed out. If you consume your CPU burst you will be throttled very hard and your app will become unusable.
If you want to follow that path, I would recommend:
Try to cap CPU usage for nginx and php5-fpm to make sure you do not go over the threshold of CPU penalties. I have no idea what that threshold is. I believe the main problem with a micro instance is maintaining consistent CPU availability; if you go over the cap you are screwed.
Try to use fastcgi_cache if possible - you want to hit php5-fpm only when really needed (see the sketch after this list).
Keep in mind that gzipping on the fly will eat a lot of CPU. I mean a lot of CPU (for an instance that has almost no CPU power). If you can use gzip_static, do it. But I believe you cannot.
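A minimal fastcgi_cache sketch (zone name, path and TTLs are hypothetical; tune them to your setup):

# in the http block: a small key zone backed by disk (which the page cache keeps hot)
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m inactive=60m;

# in the server block: serve cached PHP responses instead of waking php5-fpm
location ~ \.php$ {
    fastcgi_cache microcache;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";
    fastcgi_cache_valid 200 60s;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    include fastcgi_params;
}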
As for statistics, you will need to gather them yourself. I have statistics for m1.small but none for micro. Start by making nginx serve a static HTML file of a few KB. Run a siege benchmark with 10 concurrent users for 10 minutes and measure. Make sure you are sieging from a stronger machine.
siege -b -c10 -t600s 'http://<private-ip>/test.html'
You will probably see the effects of the CPU throttle just by doing that! What you want to keep an eye on is the transactions per second and how much throughput nginx can push. Keep in mind that the m1.small max is 35 MB/s, so the m1.micro will be even less.
Then move to a JSON response and try gzipping, and see how many concurrent requests per second you can get.
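Something like this for the JSON round (URL and endpoint are placeholders):

# same benchmark, but request a compressed response to measure the gzip CPU cost
siege -b -c10 -t600s --header="Accept-Encoding: gzip" 'http://<private-ip>/test.json'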
And don't forget to come back here and report your numbers.
Best regards.
Micro instances are unique in that they use a burstable profile. While you may get up to 2 ECUs of performance for a short period of time, after the instance uses its burst allotment it will be limited to around 0.1 or 0.2 ECU. Eventually the allotment resets and you can get 2 ECUs again.
Much of this is going to come down to how CPU/Memory heavy your application is. It sounds like you have it pretty well optimized already.
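One quick way to see the throttling happen is to watch the CPU "steal" column while the instance is under load (on a throttled micro instance, st shoots up):

# print CPU stats once a second for 10 seconds; the last column (st) is CPU time stolen by the hypervisor
vmstat 1 10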

Performance of memcache on a shared server

Lately I've been experimenting with increasing performance on my blog: not just one-click fixes but also looking at the code, in addition to other things like a CDN, caching, etc.
I talked to my host about installing memcache so I can enable it in W3 Total Cache and he seems to think it will actually hinder my site as it will instantaneously max out my RAM usage (which is 1GB).
Do you think he is accurate, and should I try it anyway? My blog and forum (MyBB) get a combined 200,000 pageviews a month.
In fact, with 200,000 pageviews a month, I would move away from a shared host and buy a VPS or dedicated server or something. Memcache(d) is a good tool indeed, but there are lots of other ways you can get better performance.
Memcached is good if you know how to use it correctly (the W3 Total Cache memcached integration doesn't do the job).
As a performance engineer, I think a lot about speed, but also about server load and the like. I work a lot with WordPress sites, and the way I push performance to the maximum on my servers is to generate static HTML pages of my WordPress sites; this results in zero or minimal access to the PHP handler itself, which increases performance a lot.
What you can then do is add another caching proxy in front of the web server, e.g. Varnish, which caches responses, meaning you never touch the web server either.
What it does is this: when a client requests your page, it serves the already-processed page directly from memory, which is pretty fast. You then have a TTL on your files, which can be as low as 50 seconds (the default). 50 seconds doesn't sound like a lot, but with 200k pageviews a month you average only about 4-5 pageviews per minute if the traffic were spread evenly (peak hours will of course be higher).
When you do one page view, there is a lot of processing going on:
making the first request to the web server, starting the PHP process, grabbing stuff from the DB, processing the data, rendering the PHP page, and so on. If we only have to do this for a few requests, performance improves a lot.
Often you should be able to generate HTML files of your forum too, renewed every 1-2 minutes when there is a request for the file. That means one request gets fully processed instead of 4-9 (if not more). A sketch of this pattern is below.
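In Nginx, the "serve the generated HTML if it exists, otherwise fall back to PHP" pattern can look like this (the cache path mimics what page-cache plugins such as WP Super Cache generate; treat it as a hypothetical example):

# serve a pre-generated HTML copy when present; only hit PHP when there is none
location / {
    try_files /wp-content/cache/supercache/$http_host$request_uri/index.html $uri $uri/ /index.php?$args;
}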
You can limit the amount of memory that memcached uses; if the memory is maxed out, the least-recently-used entries are evicted. On Debian/Ubuntu the settings live in /etc/memcached.conf (on CentOS in /etc/sysconfig/memcached), and you set the maximum memory with the -m flag.
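For example, to cap memcached at 64 MB (a sketch for Debian/Ubuntu; file locations vary by distro):

# /etc/memcached.conf - limit the cache to 64 MB of RAM
-m 64

# restart and verify the limit took effect (64 MB = 67108864 bytes)
sudo service memcached restart
echo -e "stats\nquit" | nc 127.0.0.1 11211 | grep limit_maxbytes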
In my experience, 64 MB or even 32 MB of memcached memory is enough for WordPress and makes a huge difference. Be sure not to cache whole pages (that fills the cache pretty fast); instead, use memcache for the WordPress object cache.
For general performance: make sure you have a recent PHP version (5.3+) and APC installed. For database queries I would skip W3TC and go directly for the MySQL query cache.
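For example, in my.cnf (sizes are just a starting point; note the query cache exists in MySQL 5.x but was removed in MySQL 8.0):

# my.cnf - enable the query cache with a modest size
[mysqld]
query_cache_type = 1
query_cache_size = 32M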
