I have Nginx running in a Docker container, and it serves some static files. The files will never change at runtime - if they actually do change, the container will be stopped, the image will be rebuilt, and a new container will be started.
So, to improve performance, it would be perfect if Nginx read the static files from disk only once and then served them from memory forever. I have found some configuration options for caching, but none of the ones I have seen provide this "forever" behavior that I'm looking for.
Is this possible at all? If so, how do I need to configure Nginx to achieve this?
Nginx as an HTTP server cannot do memory-caching of static files or pages.
Nginx is a capable and mature HTTP and proxy server, but there seems to be some confusion about its capabilities with respect to caching: when running as a pure web server, Nginx cannot memory-cache files. And…wait, what!? Let me rephrase: the Nginx HTTP server cannot memory-cache files or pages.
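(To be precise, Nginx does offer the open_file_cache directive, but it caches open file descriptors and metadata such as sizes and modification times, not file contents. A minimal sketch, with illustrative numbers:

open_file_cache          max=1000 inactive=20s;  # cache up to 1000 open file descriptors
open_file_cache_valid    30s;                    # revalidate cached entries every 30s
open_file_cache_min_uses 2;                      # only cache files requested at least twice
open_file_cache_errors   on;                     # also cache lookup errors such as 404s

This saves open()/stat() syscalls, but the file data itself still comes from the OS, not from Nginx's own memory.)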
Possible Workaround
The Nginx community’s answer is: no problem, let the OS do memory caching for you! The OS is written by smart people (true) and knows the what, when, where, and how of caching (a mere opinion). So, they say, cat your static files to /dev/null periodically and just trust the OS to cache your stuff for you! For those wondering what catting files to /dev/null has to do with caching: read on to find out more (hint: don’t do it!).
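For illustration only, here is roughly what that community suggestion amounts to, assuming your document root is /usr/share/nginx/html (a placeholder path):

# pre-warm the kernel page cache by reading every static file once
find /usr/share/nginx/html -type f -exec cat {} + > /dev/null

Again: read on before you put that in a cron job.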
How does it work?
It turns out that Linux is a fine-tuned beast that’s hawk-eyed about what goes in and out of its cache thingy. That cache thingy is called the Page Cache: the memory store where frequently accessed files are partially or entirely kept so they’re quickly accessible. The kernel keeps track of which files are cached in memory, when they need to be updated, and when they need to be evicted. The more free RAM is available, the larger the page cache, and the “better” the caching.
The operating system does in-memory caching by default; it's called the page cache. In addition, you can enable sendfile to avoid copying data between kernel space and user space.
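A minimal sketch of those directives (http-level, values illustrative):

http {
    sendfile   on;   # kernel copies file data straight from the page cache to the socket
    tcp_nopush on;   # with sendfile, send response headers and file start in one packet
}

With sendfile on, nginx never copies the file contents into user space at all; the kernel streams them from the page cache directly to the network socket.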
Related
I'm running a WordPress website with a cache plugin enabled.
The site is running slowly, so I decided to check which elements take the most time to load.
Straight to F12 (Chrome DevTools) and from there to the Network tab.
What I see and don't understand is why some of the files load from disk cache and others don't.
Please see the attached image (column "Size")
So, if you know the answer guys, please share it.
Thank you!
Memory cache - stores and loads resources from Random Access Memory (RAM). It is fast because it is cheap to load resources from RAM. These resources persist until you close the browser or manually clear the cache.
Disk cache - stores and loads resources from disk, so it is persistent. It does not contact the web server over the network to get the data, and it survives browser restarts because it lives on the hard disk.
I guess the browser decides the type of cache storage based on the type of the resource or on how frequently it is used.
Sometimes we use assets or resources from other sites (third party); those contents are transferred over the network, and their size is denoted in bytes (B).
It seems that all resources are loaded from cache. The difference is that some resources are read from the disk cache, some from the memory cache, and the rest come back as 304s.
ETag and Cache-Control decide whether a resource can be read from the local disk/memory cache or needs to be revalidated (304). If the resource has expired, Chrome sends a request to the server to check whether the file needs to be updated. The size shown for a 304 is just the size of the headers exchanged, not the size of your source file.
If the resource hasn't expired, Chrome reads it from the memory/disk cache and doesn't send any request to the server.
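For the server side of this, a hedged nginx sketch of headers that produce exactly that behavior (the /static/ location and 30-day lifetime are just examples):

location /static/ {
    etag    on;                # on by default since nginx 1.3.3
    expires 30d;               # emits Expires and Cache-Control: max-age headers
    add_header Cache-Control "public";
}

While the resource is fresh, the browser serves it from the memory/disk cache with no request at all; once it expires, the browser revalidates via the ETag and gets a 304 if the file is unchanged.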
It is unclear exactly how the browser decides which cache type to use.
From some documentation and observation, Chrome seems to prefer saving CSS files into the disk cache and img/font/js files into the memory cache.
We're using nginx as a local proxy on a number of deployed sites. We're trying to add caching, but it appears this isn't supported on Windows (http://nginx.org/en/docs/windows.html#known_issues).
The problem seems to be with shared memory support; which is used to allow very fast cache key lookup. In our situation, we have a small number of clients connecting through the proxy to download some large files. We don't need very fast cache key lookup.
Is there any way to tell nginx not to use shared memory for its cache key lookup?
Thanks,
Alastair
(P.S. we have limited control over the target deployment, so we cannot run a Linux version, even within a VM. It has to be a Windows app.)
If your cache key set is relatively limited and not dynamic, you can try turning on the proxy cache with a recent Nginx and increasing the keys_zone size to be large enough to contain the whole key set. On some machines you may need to turn off ASLR (e.g. with EMET), but from experience it may work as is.
See https://stackoverflow.com/a/40965027/3624545 for limits and behavior.
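A hedged sketch of what that could look like on Windows (path, zone name, and sizes are placeholders to tune for your key set):

proxy_cache_path  c:/nginx/cache  levels=1:2  keys_zone=static_cache:100m
                  max_size=10g  inactive=7d  use_temp_path=off;

server {
    location / {
        proxy_pass        http://backend;    # assumed upstream
        proxy_cache       static_cache;      # the zone defined above
        proxy_cache_valid 200 7d;            # cache successful responses for a week
    }
}

The key point for the Windows case is sizing keys_zone generously up front, since the shared-memory behavior there is the fragile part.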
Do a stress test for the desired key set with HIT/MISS monitoring, e.g.
log_format cachelog '$upstream_cache_status "$request" $status';
access_log logs/access_cache.log cachelog;
to ensure it works properly and does not crash or consume more memory than expected.
As I understand it, memcached caches things in virtual/physical memory. I have a WordPress installation with W3TC installed, and my server can use either disk or memcached, so I went for memcached.
When I check the memory usage in cPanel, it's on 0 (physical's on ~100Mb). When I try to load the site, memory usage jumps to 100-300Mb (different values for both types of memory, but they are both at ~300Mb). CPU usage jumps to 100%. It stays like that for a few minutes.
So how does memcached work then? It doesn't make sense to me. Would I be better off using Disk cache instead? The site's utterly slow too, unless I'm reloading it or revisiting pages I've already visited - then it's lightning-fast. Disk cache however, seems slow-ish too in general...
What am I supposed to do? Is there a way of "fixing" this, if there is something to fix in the first place of course?
Any info or insight is appreciated,
Thanks!
One of my Railo web applications generates too many I/O requests.
Since it's hosted on an Amazon EC2 instance, that directly hurts my billing because of EBS disk activity (hundreds of millions of operations).
How can I monitor I/O requests? The perfect tool would allow me to find which template/component makes intensive I/O.
I'm already using FusionReactor and that's great for profiling memory spaces and so on, but it doesn't have anything for I/O.
You could start out by using the operating-system monitoring tools to see whether you mainly have reads or writes. The next step is looking at memory, even though it presents as a disk I/O issue: maybe your servers are low on memory and thrashing the drives as they swap pages in and out.
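A hedged sketch of the OS-side tools on Linux (iostat and pidstat come from the sysstat package; availability varies by distro):

iostat -x 5     # per-device utilization and read/write throughput, every 5 seconds
vmstat 5        # watch the si/so columns for swap activity
pidstat -d 5    # per-process read/write bytes, to find the hungry process
iotop -o        # interactive view of only the processes currently doing I/O

If si/so are non-zero you are swapping, and the fix is memory, not disk.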
If you have not done so, turn on the template cache; this stops Railo checking the file system on every page request (provided you have the memory).
If you have plenty of memory (both for your OS and for the JVM) and template caching is on, start looking for your busy pages in FusionReactor, and check for cffile, cfdirectory, and other file-access tags in those pages. Good luck.
Also, query-of-queries is often a culprit in high disk I/O: internally a database is used which, if I remember correctly, pages to disk on large result sets.
I have an iOS social app.
This app talks to my server to do updates and retrievals fairly often, mostly small text as JSON. Sometimes users will upload pictures that my web server then uploads to an S3 bucket. No pictures or any other type of file will be retrieved from the web server.
The EC2 micro Ubuntu 13.04 instance runs PHP 5.5, PHP-FPM and NGINX. Caching is handled by ElastiCache using Redis, and the app connects to a separate m1.large MongoDB server. The content can be fairly dynamic, as the newsfeed changes often.
I am a total newbie in regards to configuring NGINX for performance and I am trying to see whether I've configured my server properly or not.
I am using Siege to test my server load, but I can't find any statistics on how many concurrent users / page loads a system like this should handle, so I don't know whether I've done something right or wrong.
How many concurrent users / page loads should my server be able to handle?
And if I can't get hold of statistics from experience, what should count as easy, medium, and extreme load for my micro instance?
I am aware that there are several other questions asking similar things. But none provide any sort of estimates for a similar system, which is what I am looking for.
I haven't tried nginx on a micro instance, for the reasons Jonathan pointed out. If you consume your CPU burst allotment you will be throttled very hard and your app will become unusable.
If you want to follow that path, I would recommend:
Try to cap CPU usage for nginx and php5-fpm to make sure you do not go over the threshold that triggers CPU penalties. I have no idea what that threshold is. I believe the main problem with a micro instance is maintaining consistent CPU availability; if you go over the cap you are screwed.
Try to use fastcgi_cache, if possible. You want to hit php5-fpm only if really needed (see the sketch after this list).
Keep in mind that gzipping on the fly will eat a lot of CPU, and I mean a lot (for an instance that has almost no CPU power). If you can use gzip_static, do it. But I believe you cannot.
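A hedged sketch of the fastcgi_cache and gzip_static ideas above (zone name, cache path, socket path, and lifetimes are all placeholders):

fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=appcache:10m inactive=60m;
fastcgi_cache_key  "$scheme$request_method$host$request_uri";

server {
    # gzip_static serves pre-compressed .gz files without burning CPU, but it
    # only helps for static assets you can compress at build time
    gzip_static on;

    location ~ \.php$ {
        fastcgi_pass  unix:/var/run/php5-fpm.sock;   # assumed php5-fpm socket path
        include       fastcgi_params;
        fastcgi_cache       appcache;    # serve repeat hits without touching php5-fpm
        fastcgi_cache_valid 200 60s;     # short TTL, since the newsfeed is dynamic
        add_header X-Cache  $upstream_cache_status;  # HIT/MISS header for debugging
    }
}

Every HIT here is CPU not spent in PHP, which is exactly what a micro instance needs.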
As for statistics, you will need to gather them yourself. I have statistics for m1.small but none for micro. Start by making nginx serve a static HTML file of very few kB. Run siege in benchmark mode with 10 concurrent users for 10 minutes and measure. Make sure you are sieging from a stronger machine.
siege -b -c10 -t600s 'http://private-ip/test.html'
You will probably see the effects of CPU throttling just by doing that! What you want to keep an eye on is the transactions per second and how much throughput nginx can serve. Keep in mind that m1.small maxes out around 35 mb/s, so m1.micro will be even less.
Then move to a JSON response. Try gzipping. See how many concurrent requests per second you can get.
And don't forget to come back here and report your numbers.
Best regards.
Micro instances are unique in that they use a burstable profile. While you may get up to 2 ECUs of performance for a short period of time, after the instance uses its burst allotment it will be limited to around 0.1 or 0.2 ECU. Eventually the allotment resets and you can get 2 ECUs again.
Much of this comes down to how CPU- and memory-heavy your application is. It sounds like you have it pretty well optimized already.