One of my Railo web applications generates too many I/O requests.
Since it's hosted on an Amazon EC2 instance, this directly inflates my bill because of EBS disk activity (hundreds of millions of operations).
How can I monitor I/O requests? The perfect tool would let me find which template/component is doing the intensive I/O.
I'm already using FusionReactor and that's great for profiling memory spaces and so on, but it doesn't have anything for I/O.
You could start out by using the operating system's monitoring tools to see whether you have mainly reads or writes. The next step is to look at memory, even though it appears to be a disk I/O issue: your server may be low on memory and thrashing the drives as it swaps pages in and out.
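For example, on a Linux instance something like the following would show whether the box is read-heavy, write-heavy, or swapping (iostat and iotop come from the sysstat and iotop packages; availability depends on your distribution):

iostat -x 5    # per-device reads/writes per second and utilization, sampled every 5s
vmstat 5       # the si/so columns show pages swapped in/out of memory
iotop -o       # per-process I/O, to identify the offending process (e.g. the JVM)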
If you have not done so already, turn on the template cache; this will stop Railo checking the file system on every page request (provided you have the memory for it).
If you have plenty of memory (both for your OS and for the JVM) and template caching is on, start looking for your busy pages in FusionReactor, and check for cffile, cfdirectory and other file-related tags in those pages. Good luck.
Use of query-of-queries is also often a culprit in high disk I/O: internally a database is used which, if I remember correctly, pages large result sets to disk.
I read somewhere that "a modern server has 144GB RAM". Is all of that 144GB used as cache?
When we talk about a server's cache, does that mean the server's memory?
It all depends on the caching method utilized by the applications that run on the server. There are numerous caching methods, but two methods frequently used are persistent caching and in-memory caching.
With persistent caching, the application stores cached values somewhere intended to be “permanent”, such as the file system, a database, or similar.
With in-memory caching, the application uses memory (i.e. RAM; the 144GB in your question) to store data. Data cached this way is semi-permanent and will not persist across reboots, application recycles, or the like.
If, when coding, you allocate a new object, dictionary, list or the like, these objects are stored in memory. Additionally, not all of a server's memory is available to the applications that run on it: the operating system and every other installed process share the same RAM. It's therefore common for a device with 4GB of RAM to have only 2GB reasonably usable, as the other 2GB is used by the operating system. Of course, these numbers depend on a lot of factors.
I have an API (Express.js-driven) that doesn't do any disk operations; it only reads from and writes to a DB. Would there be a difference if the machine ran an SSD rather than a standard disk?
Does it influence performance? I ask because I believe require loads files only once, not on every request.
My Azure cloud service reads and writes to blobs using the .NET storage library (1.7). The blobs are in the same data centre as the service. In my first container, operations are fast (on the order of 10ms). In my second container they are very slow (typically about 2s or 14s, with not much in between). Both are transferring the data using CloudBlob.DownloadToStream() into a MemoryStream. File sizes are typically less than 100kB.
Now I admit I haven't set up a proper test to be able to demonstrate all the above - I'm just going by my log files, so there could be some subtle difference in the way I am accessing the blobs. Apologies if this turns out to be the case.
Anyway, the only relevant difference between these two containers seems to be:
The fast container is accessed frequently (tens of thousands of requests per day), and the slow container quite infrequently (perhaps 200 requests per day).
The fast container typically stores items that are fetched soon afterwards. The slow container is often loading things that might have been stored days ago.
Question: What factors affect blob performance for infrequently-accessed blobs? What can I do to make it faster?
(I don't know how Azure blob storage is implemented, but based on the above I'm going to guess that the data is saved into a storage array and accessed via a dynamically scaling collection of VMs, each of which implements in-memory caching of blobs. Thus the ~14s delay occurs when Azure finds it needs to spin up the VMs. The ~2s delay occurs when a VM is available, but it needs to hunt down the data on a physical disk (seems rather slow), and the 10ms delay occurs when the item is stored in an in-memory cache, or something like that.)
Windows Azure Storage is not architected the way you are describing (with an expanding number of cache VMs), so there would be no impact from some data being cached and other data not being cached on the Azure Storage server side. See Windows Azure Storage Architecture Overview for a good overview, or the SOSP paper Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency for a more in-depth look.
To determine why your blob requests are slower, the first thing to do would be to determine whether the slow performance is server side or client side. Fortunately Azure Storage makes this easy via Storage Analytics (Windows Azure Storage Logging: Using Logs to Track Storage Requests) - just compare the End to End (E2E) latency and the Server latency. I suspect you will see one of two things:
Low E2E and low Server latency. This would indicate that either the request is getting delayed being sent from the client (i.e. not enough worker threads), or your logging is providing incorrect data.
High E2E and low Server latency. This would indicate a problem on the client side in processing the request (not enough worker threads to process the response, slow processing of the memory stream, etc.).
I have a site hosted on Windows Azure shared websites. It just got suspended for going over memory usage limit of 512MB/hour.
I do use .NET caching rather heavily (to prevent multiple calls to the database/external APIs, etc...).
Is that caching a no-no in shared websites on Windows Azure?
Do you use System.Runtime.Caching? You should be able to limit the amount of memory that e.g. the MemoryCache object uses. See http://msdn.microsoft.com/en-us/library/dd941874.aspx for more information.
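As a sketch, capping the default MemoryCache can be done in web.config like this (the limit values below are illustrative, not recommendations):

<system.runtime.caching>
  <memoryCache>
    <namedCaches>
      <add name="Default"
           cacheMemoryLimitMegabytes="100"
           physicalMemoryLimitPercentage="50"
           pollingInterval="00:02:00" />
    </namedCaches>
  </memoryCache>
</system.runtime.caching>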
Even if you stop using Cache yourself, it can still be used by the framework/libraries. I have the same problem (interestingly, in free mode the memory limit is 1024MB, but in shared mode it is lowered to 512MB).
As far as I can see, the memory amount that Azure shows on the portal is very close to the System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize value.
At the moment I'm experimenting with the caching settings to set a maximum memory limit:
<system.web>
  <caching>
    <cache privateBytesLimit="250000000" privateBytesPollTime="00:00:15"/>
  </caching>
</system.web>
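(Note that privateBytesLimit is specified in bytes, so 250000000 is roughly 250MB, and privateBytesPollTime controls how often ASP.NET polls the worker process's private bytes.)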
Several days ago I set 300MB, but a few minutes ago I got suspended again :(, so I'm lowering it to 250MB.
But anyway, this is a very unclear, strange and "wrong" solution, IMHO.
UPDATE
Got suspended again this morning. I have temporarily converted to standard mode with a small instance (1.7 GB RAM).
My WorkingSet counter is now about 200MB (with a PeakWorkingSet of 330MB). BUT! The GC's CollectionCount has increased roughly 8 times (Gen0 is at about 1800 collections instead of 250, in less than a day).
My current theory is that in "shared" mode, websites run inside a "big" VM with a lot of memory, so the garbage collector simply doesn't need to run often, leading to a longer "garbage life" and higher memory consumption.
I have no access to my developer machine right now for verification, but I'm planning to convert the site to a web role in a cloud service ASAP - with an extra small instance (the cost is comparable to that of a shared web site)...
It might be worth profiling with perfmon on your local machine first, to see whether it's hitting the limits under normal conditions, then look at configuring the logging on Azure and digging through that.
Also, ensuring everything is precompiled, and that you're not loading modules etc. you don't need, can really affect performance on Azure.
I think what you might want to try here is scaling out instead of up. If you add a second instance, that will double your resource limit.
We are deploying a large-scale web application that uses only Redis as a data store. I notice that the benchmark of our Redis master is around 8,000 transactions per second on EC2, far less than the benchmarks stated for dedicated hardware.
I understand that there is a performance penalty for running Redis on a virtual machine like EC2, but I would love some pointers from people who have deployed Redis in production on EC2: what setup have you found most effective for getting more out of Redis?
Thanks.
EC2 is probably not the best environment to run Redis in, because of the virtualized hardware, but it is a popular one, and there are a number of points worth knowing to get the best from Redis on this platform.
I'm one of the authors of http://redis.io/topics/benchmarks and http://redis.io/topics/latency which cover most of the topics I present below. This is just a summary of the main points.
Virtualization toll
It is not specific to EC2, but Redis is significantly slower when running on a VM (in terms of maximum supported throughput). This is due to the fact that, for basic operations, Redis does not add much overhead to the epoll/read/write system calls required to handle client connections (like memcached, or other efficient key/value stores). System calls are typically more expensive on a VM, and they represent a significant part of Redis activity (especially in benchmarks). Under those conditions, a 50% decrease in maximum throughput compared to bare metal is not uncommon.
Of course, it also depends on the quality of the hypervisor. For EC2, Xen is used.
Benchmarking in good conditions
Benchmarking can be tricky, especially on a platform like EC2. One point that is often forgotten is to ensure a proper configuration for both the benchmark client and the server. For instance, do not run redis-benchmark on a CPU-starved micro-instance (which will likely be throttled down by Amazon) while targeting your Redis server. Both machines are equally important to getting a good maximum throughput.
Actually, to evaluate Redis performance, you need to:
run redis-benchmark locally (on the same machine as the server), assuming you have more than one vCPU core;
run redis-benchmark remotely (from a different VM), on a machine whose QoS configuration is equivalent to that of the server machine.
That way you can evaluate and compare the performance of the machines and of the network.
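For example (the host address and request counts here are just placeholders):

# local benchmark, run on the same box as the Redis server
redis-benchmark -q -n 100000 -c 50

# remote benchmark, run from a separate VM of comparable capacity
redis-benchmark -q -n 100000 -c 50 -h <redis-server-ip> -p 6379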
On EC2, you will have the best results with second-generation M3 instances (or high-memory, or cluster compute instances), so you can benefit from HVM (hardware virtualization) instead of relying on slower paravirtualization.
The fork issue
This is not specific to EC2, but to Xen: forking a large process can be really slow on Xen (it looks better with KVM). For Redis this is a big problem if you plan to use persistence: both persistence options (RDB and AOF) require the main thread to fork and launch background save or rewrite processes.
In some cases, fork latency can freeze the Redis event loop for several seconds. The more memory managed by the Redis instance, the more latency.
On EC2, be sure to use an HVM-enabled instance (M3, high-memory, cluster); it will mitigate the issue.
Then, if you have large memory requirements and your application can tolerate it, consider running several smaller Redis instances on the same machine and sharding your data. This can decrease the latency due to fork operations to an acceptable level.
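A minimal sketch of running two instances side by side (ports and paths are illustrative; distributing keys across them is left to the client):

# /etc/redis/redis-6379.conf
port 6379
dir /var/lib/redis/6379

# /etc/redis/redis-6380.conf
port 6380
dir /var/lib/redis/6380

# start each instance with its own configuration file
redis-server /etc/redis/redis-6379.conf
redis-server /etc/redis/redis-6380.conf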
Persistence configuration
This is a key point in getting good performance from Redis (both on VMs and on bare metal), so please take the time to carefully read http://redis.io/topics/persistence
If you use RDB, keep in mind that the memory copy-on-write mechanism will start duplicating pages once the background save process has been forked off. So you need to ensure there is enough memory for Redis itself, plus some margin to cope with the COW. The amount of extra memory depends on your workload: the more you write to the instance, the more extra memory you need.
Please note that writing a file may also consume some memory (because of the filesystem cache), so during a Redis background save you need to account for the Redis memory, the COW overhead, and the size of the dump file.
The machine running the Redis server must never swap. If it does, the result will be catastrophic. Contrary to some other stores, Redis is not virtual memory friendly.
On Linux, be sure to set sensible system parameters: vm.overcommit_memory=1 and vm.swappiness=0 (or a very low value anyway). Do not use old kernel versions: they are quite bad at enforcing a low swappiness (resulting in swapping when a large file is written).
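For instance, on a typical Linux box (run as root; the sysctl.conf path may vary by distribution):

# apply immediately
sysctl -w vm.overcommit_memory=1
sysctl -w vm.swappiness=0

# persist across reboots
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
echo "vm.swappiness = 0" >> /etc/sysctl.conf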
If you use AOF, review the fsync options. It is a tradeoff between raw performance and the durability of write operations. You need to make a choice and define a strategy.
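The relevant redis.conf directives look like this (pick the appendfsync policy that matches your durability needs):

appendonly yes
# appendfsync always   # safest, slowest: fsync after every write
appendfsync everysec   # common tradeoff: fsync at most once per second
# appendfsync no       # fastest: let the OS decide when to flush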
You also need to get familiar with the EC2 storage options. On some VMs, you have the choice between ephemeral storage and EBS. On others, you only have EBS.
Ephemeral storage is generally faster, and you will probably have fewer issues with it than with EBS, but you can easily lose your data in case of a disk failure, a reboot of the host, etc. One option is to put RDB snapshots on ephemeral storage and then copy the resulting files to EBS directories, as a tradeoff between performance and robustness.
EBS is remote storage: it may eat into the standard network bandwidth allocated to the VM and impact the maximum throughput of Redis. If you plan to use EBS, consider selecting the "EBS-optimized" option to establish QoS between the standard network and the storage links.
Finally, a very common setup for performance-demanding instances on EC2 is to deactivate persistence on the master and only activate it on a slave instance. It is probably less safe for the data, but it may prevent a lot of potential latency issues on the master.
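A minimal sketch of that split (the master IP is a placeholder):

# master redis.conf: persistence disabled
save ""
appendonly no

# slave redis.conf: replicate from the master and persist
slaveof <master-ip> 6379
save 900 1
appendonly yes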