Caching on Nginx for Windows 7?

We're using nginx as a local proxy on a number of deployed sites. We're trying to add caching, but it appears that this isn't supported on Windows (http://nginx.org/en/docs/windows.html#known_issues).
The problem seems to be with shared memory support, which is used to allow very fast cache-key lookup. In our situation, we have a small number of clients connecting through the proxy to download some large files. We don't need very fast cache-key lookup.
Is there any way to tell nginx not to use shared memory for its cache key lookup?
Thanks,
Alastair
(P.S. We have limited control over the target deployment, so we cannot run a Linux version, even within a VM. It has to be a Windows app.)

If your cache key set is relatively limited and not dynamic, you can try turning on the proxy cache on a recent nginx and increasing the keys_zone size so it is large enough to contain the key set. On some machines you may need to turn off ASLR (e.g. with EMET), but in my experience it may work as is.
See https://stackoverflow.com/a/40965027/3624545 for limits and behavior.
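A minimal sketch of what that could look like (the upstream address, cache path, and sizes are illustrative, not from your setup):
proxy_cache_path  cache  levels=1:2  keys_zone=static_cache:50m  max_size=10g  inactive=7d;

server {
    listen 8080;
    location / {
        proxy_pass        http://127.0.0.1:9000;                        # your upstream
        proxy_cache       static_cache;
        proxy_cache_valid 200 206 24h;                                  # cache full and partial downloads for a day
        add_header        X-Cache-Status $upstream_cache_status;        # expose HIT/MISS for monitoring
    }
}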
Stress-test with the desired key set and monitor HIT/MISS status, e.g.
log_format cachelog '$upstream_cache_status "$request" $status';
access_log logs/access_cache.log cachelog;
to make sure it works properly, does not crash, and does not consume more memory than expected.
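With the format above, a cache hit for one of the large downloads would show up roughly like this (an illustrative line, not real output):
HIT "GET /downloads/big-file.zip HTTP/1.1" 200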

Related

Dynacache - Caching everything

I have taken over an application that serves around 180 TPS. The responses are always SOAP XML responses with a size of around 24000 bytes. We have been told that we have a dynacache, and I can see that we have a cachespec.xml. But I am unable to work out how many entries it currently holds and what its maximum limit is.
How can I check this? I have tried DynamicCacheAccessor.getDistributedMap().size() but this always returns 0.
We have a lot of data inconsistencies because of internal Java HashMap caching layers. What are your thoughts on increasing dynacache and eliminating the internal caching? How much server memory might this consume?
Thanks in advance
The DynamicCacheAccessor accesses the default servlet cache instance, baseCache. If size() always returns zero then your cachespec.xml is configured to use a different cache instance.
Look for a directive in the cachespec.xml:
<cache-instance name="cache_instance_name"></cache-instance> to determine what cache instance you are using.
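If a named instance is in use, a minimal sketch of checking its size via a JNDI lookup (the JNDI name here is illustrative; use whatever your cachespec.xml and server configuration declare, assuming the instance is bound in JNDI as WebSphere object cache instances normally are):
import javax.naming.InitialContext;
import com.ibm.websphere.cache.DistributedMap;

public class CacheSizeChecker {
    // jndiName is whatever <cache-instance> points at,
    // e.g. "services/cache/instance_one" (illustrative)
    public static int cacheSize(String jndiName) throws Exception {
        InitialContext ctx = new InitialContext();
        DistributedMap cache = (DistributedMap) ctx.lookup(jndiName);
        return cache.size();   // number of entries currently in that instance
    }
}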
Also install the Cache Monitor from the installableApps directory; see the Monitoring and CacheMonitor documentation. The Cache Monitor is an invaluable tool when developing or maintaining an app that uses servlet caching.
If you are using Liberty, install the webCacheMonitor-1.0 feature.

Cache a static file in memory forever on Nginx?

I have Nginx running in a Docker container, and it serves some static files. The files will never change at runtime - if they actually do change, the container will be stopped, the image will be rebuilt, and a new container will be started.
So, to improve performance, it would be perfect if Nginx read the static files from disk only once and then served them from memory forever. I have found some configuration options for caching, but at least from what I have seen, none of them provides the "forever" behavior I'm looking for.
Is this possible at all? If so, how do I need to configure Nginx to achieve this?
Nginx as an HTTP server cannot do memory-caching of static files or pages.
Nginx is a capable and mature HTTP and proxy server, but there seems to be some confusion about its capabilities with respect to caching: when running as a pure web server, Nginx cannot memory-cache files or pages.
Possible Workaround
The Nginx community's answer is: no problem, let the OS do memory caching for you! The OS is written by smart people (true) and knows the what, when, where, and how of caching (a mere opinion). So, they say, cat your static files to /dev/null periodically and just trust the OS to cache your stuff for you. If you're wondering what cat-ing files to /dev/null has to do with caching, read on (hint: don't do it!).
How does it work?
It turns out that Linux is a fine-tuned beast that's hawk-eyed about what goes in and out of its cache. That cache is called the page cache: the memory store where frequently accessed files are partially or entirely kept so they're quickly accessible. The kernel is responsible for keeping track of which files are cached in memory, when they need to be updated, and when they need to be evicted. The more free RAM is available, the larger the page cache and the "better" the caching.
The operating system does in-memory caching by default; it's called the page cache. In addition, you can enable sendfile to avoid copying data between kernel space and user space.
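A hedged sketch of the relevant nginx directives; note that open_file_cache only caches file descriptors and metadata, not file contents, so the content caching itself is still left to the OS page cache:
sendfile                on;                       # let the kernel copy file data straight to the socket
tcp_nopush              on;                       # send the response headers and the start of the file together
open_file_cache         max=1000 inactive=60s;    # cache open file descriptors and metadata, not contents
open_file_cache_valid   120s;                     # re-validate cached descriptors every two minutes
open_file_cache_errors  on;                       # also cache "not found" lookups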

How to reduce memory usage on windows azure shared website?

I have a site hosted on Windows Azure shared websites. It just got suspended for going over memory usage limit of 512MB/hour.
I do use .net caching rather heavily (to prevent multiple calls to database/external APIs, etc...).
Is that caching a no-no in shared websites on Windows Azure?
Do you use System.Runtime.Caching? You should be able to limit the amount of memory that e.g. the MemoryCache object uses. See http://msdn.microsoft.com/en-us/library/dd941874.aspx for more information.
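For example, the default MemoryCache can be capped from web.config (a sketch; the 100 MB limit and the polling interval are illustrative values):
<configuration>
  <system.runtime.caching>
    <memoryCache>
      <namedCaches>
        <!-- illustrative limits: cap MemoryCache.Default at ~100 MB, re-check every 2 minutes -->
        <add name="Default"
             cacheMemoryLimitMegabytes="100"
             physicalMemoryLimitPercentage="0"
             pollingInterval="00:02:00" />
      </namedCaches>
    </memoryCache>
  </system.runtime.caching>
</configuration>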
Even if you stop using the Cache, it can still be used by the framework and libraries. I have the same problem (interestingly, in free mode the memory limit is 1024 MB, but in shared mode it is lowered to 512 MB).
From what I can see, the memory amount that Azure shows on the portal is very close to the System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize value.
At the moment I'm experimenting with the caching settings to cap the maximum memory:
<system.web>
  <caching>
    <cache privateBytesLimit="250000000" privateBytesPollTime="00:00:15"/>
  </caching>
</system.web>
Several days ago I set 300 MB, but a few minutes ago I got suspended again :(, so I'm lowering it to 250 MB.
But anyway, this is a very unclear, strange and "wrong" solution, IMHO.
UPDATE
Got suspended again this morning. I've temporarily converted to standard mode with a small instance (1.7 GB RAM).
My WorkingSet counter is now about 200 MB (with a PeakWorkingSet of 330 MB). BUT! The GC's CollectionCount has increased roughly 8 times (Gen0 is at 1800 collections instead of 250, in less than a day).
My current theory is that in "shared" mode websites run inside a "big" VM with a lot of memory, so the garbage collector simply doesn't need to run often, leading to a longer "garbage life" and more memory consumption.
I have no access to my developer machine right now to verify this, but I'm planning to convert the site to a web role in a cloud service ASAP, with an extra-small instance (the cost is comparable to the shared website cost)...
It might be worth profiling with perfmon on your local machine first to see whether it is hitting the limits there, then look at configuring logging on Azure and digging through that.
Also, ensuring everything is precompiled and that you're not loading modules you don't need can really affect performance on Azure.
I think what you might want to try here is scaling out instead of up. If you add a second instance, that will double your resource limit.

What is the named.exe process and how can I stop it from consuming so much CPU?

I have a Windows Server 2008 with Plesk running two web sites.
Sometimes the server slows down and there is a named.exe process pushing the CPU to 100%.
It lasts a short period of time and after a while it comes back.
I would like to know what this process is for and how to configure it so it doesn't consume this much CPU and slow my sites down.
This must be the DNS service, also known as BIND. High CPU usage may indicate one of the following:
DNS is re-reading its configuration. In this case the high CPU usage should coincide with your activities in Plesk, i.e. adding and removing domains.
Someone (normally another DNS server) is pulling data from your DNS server. This is a normal process, and since you say it lasts only a short period of time, it doesn't look like a DNS DDoS.
AFAIK there is no default way in Windows to restrict software from taking 100% CPU if no other apps require CPU at the moment.
See "DNS Treewalk Suite" system, off the process, and uses the antivirus.
Check the error "log" in the system.

Common Issues in Developing Cluster Aware non-web-based Enterprise Applications

I have to move a Windows-based multi-threaded application (which uses global variables as well as an RDBMS for storage) to an NLB (i.e., network load balancer) cluster. The common architectural issues that immediately come to mind are:
Global variables (which are both read and written) will have to be moved to shared storage. What are the best practices here? Is there anything available in the Windows Clustering API to manage such things?
My application uses sockets, and persistent connections are the norm in the field I work in. I believe persistent connections cannot be load balanced. Again, what are the architectural recommendations in this regard?
I'll answer the persistent connection part of the question first since it's easier. All good network load-balancing solutions (including Microsoft's NLB service built into Windows Server, but also including load balancing devices like F5 BigIP) have the ability to "stick" individual connections from clients to particular cluster nodes for the duration of the connection. In Microsoft's NLB this is called "Single Affinity", while other load balancers call it "Sticky Sessions". Sometimes there are caveats (for example, Microsoft's NLB will break connections if a new member is added to the cluster, although a single connection is never moved from one host to another).
Re: global variables, they are the bane of load-balanced systems. Most designers of load-balanced apps will do a lot of re-architecting to minimize dependence on shared state, since it impedes the scalability and availability of a load-balanced application. Most of these approaches come down to a two-step strategy: first, move shared state to a highly available location; second, change the app to minimize the number of times that shared state must be accessed.
Most clustered apps I've seen will store shared state (even shared, volatile state like global variables) in an RDBMS. This is mostly out of convenience. You can also use an in-memory database for maximum performance. But the simplicity of using an RDBMS for all shared state (transient and durable), plus the use of existing database tools for high-availability, tends to work out for many services. Perf of an RDBMS is of course orders of magnitude slower than global variables in memory, but if shared state is small you'll be reading out of the RDBMS's cache anyways, and if you're making a network hop to read/write the data the difference is relatively less. You can also make a big difference by optimizing your database schema for fast reading/writing, for example by removing unneeded indexes and using NOLOCK for all read queries where exact, up-to-the-millisecond accuracy is not required.
I'm not saying an RDBMS will always be the best solution for shared state, only that improving shared-state access times are usually not the way that load-balanced apps get their performance-- instead, they get performance by removing the need to synchronously access (and, especially, write to) shared state on every request. That's the second thing I noted above: changing your app to reduce dependence on shared state.
For example, for simple "counters" and similar metrics, apps will often queue up their updates and have a single thread in charge of updating shared state asynchronously from the queue.
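A minimal Java sketch of that queue-and-single-updater pattern (the class and names are illustrative, not from any particular framework):
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncCounter {
    private final BlockingQueue<Integer> updates = new LinkedBlockingQueue<>();
    private volatile long total = 0;

    public AsyncCounter() {
        Thread updater = new Thread(() -> {
            try {
                while (true) {
                    // the single updater thread is the only writer of the shared value
                    total += updates.take();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        updater.setDaemon(true);
        updater.start();
    }

    // request threads just enqueue their deltas; they never write shared state directly
    public void add(int delta) {
        updates.offer(delta);
    }

    public long currentTotal() {
        return total;   // may lag slightly behind the queued updates
    }
}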
For more complex cases, apps may switch from pessimistic concurrency (checking that a resource is available beforehand) to optimistic concurrency (assuming it's available, and then backing out the work later if it turns out, for example, that you sold the same item to two different clients!).
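A hedged Java sketch of the optimistic variant (hypothetical names), using compare-and-set instead of locking up front:
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticInventory {
    private final AtomicInteger stock = new AtomicInteger(10);

    // Try to sell one unit optimistically: read the current stock, and only
    // commit the decrement if nobody else changed it in the meantime.
    public boolean sellOne() {
        while (true) {
            int current = stock.get();
            if (current <= 0) {
                return false;                      // sold out: back out of the sale
            }
            if (stock.compareAndSet(current, current - 1)) {
                return true;                       // commit succeeded
            }
            // another thread (or node) won the race; re-read and retry
        }
    }
}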
Net-net, in load-balanced situations, brute force solutions often don't work as well as thinking creatively about your dependency on shared state and coming up with inventive ways to prevent having to wait for synchronous reading or writing shared state on every request.
I would not bother with using MSCS (Microsoft Cluster Service) in your scenario. MSCS is a failover solution, meaning it's good at keeping a one-server app highly available even if one of the cluster nodes goes down, but you won't get the scalability and simplicity you'll get from a true load-balanced service. I suspect MSCS does have ways to share state (on a shared disk) but they require setting up an MSCS cluster which involves setting up failover, using a shared disk, and other complexity which isn't appropriate for most load-balanced apps. You're better off using a database or a specialized in-memory solution to store your shared state.
Regarding persistent connections, look into the port rules, because port rules determine which TCP/IP ports are handled and how.
MSDN:
When a port rule uses multiple-host load balancing, one of three client affinity modes is selected. When no client affinity mode is selected, Network Load Balancing load-balances client traffic from one IP address and different source ports on multiple cluster hosts. This maximizes the granularity of load balancing and minimizes response time to clients. To assist in managing client sessions, the default single-client affinity mode load-balances all network traffic from a given client's IP address on a single cluster host. The class C affinity mode further constrains this to load-balance all client traffic from a single class C address space.
In an ASP.NET app, what allows session state to be persistent is enabling the client affinity parameter: the NLB directs all TCP connections from one client IP address to the same cluster host, which allows session state to be maintained in host memory.
The client affinity parameter makes sure that a connection is always routed to the server it initially landed on, thereby maintaining the application state.
Therefore I believe the same would happen for your Windows-based multi-threaded app if you use the affinity parameter.
Network Load Balancing Best Practices and Web Farming with the Network Load Balancing Service in Windows Server 2003 might give you some insight.
A few other issues to keep in mind:
Concurrency (check out Apache Cassandra, et al.)
Speed-of-light issues (if going cross-country or international you'll want heavy use of transactions)
Backups and deduplication (companies like FalconStor or EMC can help here in a distributed system; I wouldn't underestimate the need for consulting here)
