According to a comment on a StackOverflow answer,
In a web or worker role you have to use an Azure Drive - which has
much lower performance than an Azure Disk which you get with a VHD.
Reference: blogs.msdn.com/b/windowsazurestorage/archive/2012/06/28/…
– Matt Johnson Feb 19 at 20:15
However, I've read through this reference link and other related documentation, and I cannot find anything to support the assertion that a PaaS Cloud Drive is slower than an IaaS disk. In fact, the only thing I do see is that drives work on 2 MB chunks, whereas disks work on 128 KB chunks. I would therefore assume that drives would be more performant than disks.
Drives: an IO < 2 MB is 1 transaction; an IO >= 2 MB is broken into transactions of 2 MB or smaller
Disks: an IO < 128 KB is 1 transaction; an IO >= 128 KB is broken into transactions of 128 KB or smaller
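To make the arithmetic concrete, here is a minimal sketch (my own illustration, not taken from the referenced article) of how many storage transactions a single IO would generate under each chunk size:

    import math

    def transactions(io_bytes: int, chunk_bytes: int) -> int:
        # An IO smaller than the chunk size is 1 transaction; larger IOs
        # are split into chunk-sized pieces, each its own transaction.
        return max(1, math.ceil(io_bytes / chunk_bytes))

    io = 1 * 1024 * 1024                        # a single 1 MB read or write
    print(transactions(io, 2 * 1024 * 1024))    # Drive, 2 MB chunks:  1 transaction
    print(transactions(io, 128 * 1024))         # Disk, 128 KB chunks: 8 transactions

On this reading, a given IO never produces more transactions on a drive than on a disk, which is what makes the "drives are slower" claim surprising.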
Does anyone have any real-world metrics, or links to some, that indicate the performance difference between these two options?
The two features are currently implemented differently.
Azure Drive is a filesystem filter that intercepts the NTFS calls, converts them to REST, and forwards them to the Azure Blob backing the drive (a Page Blob). This network IO counts against the quota of the VM (each core of a VM gets 100 Mbps).
Data disks are implemented within the Azure Hypervisor and are presented to the guest OS as a mountable drive. The basic idea is the same: the calls to the drive are converted to REST, which interacts with the Azure Blob backing the disk (still a Page Blob). The network IO for the call to storage does not count against the guest OS, so you could still have 100 Mbps per core for 'regular' network traffic while making calls to the data disk.
For both, there are local caching options whose impact will vary with the specific workload and IO pattern.
I would recommend a quick read of the following for more details:
http://blogs.msdn.com/b/windowsazurestorage/archive/2012/11/04/windows-azure-s-flat-network-storage-and-2012-scalability-targets.aspx
http://blogs.msdn.com/b/windowsazurestorage/archive/2012/06/28/exploring-windows-azure-drives-disks-and-images.aspx
How does one use --parallel-level with azcopy on Ubuntu 14.04 to speed up download performance?
I chose a value of 100, but for no particular reason (just to see what happens); I can't find related documentation online.
I'm using it to transfer files from an Azure blob to an AWS EC2 VM. It's just a t2.micro instance; however, I'm using it for testing purposes, and once I get the hang of azcopy I'm open to using a bigger instance. I have to transfer ~50 GB of data, mostly low-res images (i.e. lots of files).
https://learn.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-linux?toc=%2fazure%2fstorage%2fblobs%2ftoc.json
Option --parallel-level specifies the number of concurrent copy operations. By default, AzCopy starts a certain number of concurrent operations to increase the data transfer throughput. The number of concurrent operations is equal to eight times the number of processors you have. If you are running AzCopy across a low-bandwidth network, you can specify a lower number for --parallel-level to avoid failure caused by resource competition.
In most cases you don't need to specify this option; only when you're running AzCopy across a low-bandwidth network should you specify a lower value for it.
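As a rough sketch of that default (the --parallel-level flag is from the documentation above; the other flags are my recollection of the Linux AzCopy syntax of that era, so verify with azcopy --help):

    import os
    import subprocess

    # Default concurrency per the docs: 8 x the number of processors.
    default_parallel = 8 * (os.cpu_count() or 1)
    print(f"default --parallel-level on this machine: {default_parallel}")

    # Hypothetical invocation; the URLs and key are placeholders.
    subprocess.run([
        "azcopy",
        "--source", "https://myaccount.blob.core.windows.net/mycontainer",
        "--destination", "/data/images",
        "--source-key", "<storage-account-key>",
        "--recursive",
        "--parallel-level", str(default_parallel // 2),  # halve it on a slow link
    ], check=True)

On a t2.micro (1 vCPU) the default would be 8 concurrent operations, so a value of 100 may mostly add contention rather than throughput.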
One of my Railo web applications generates too many I/O requests.
Since it's hosted on an Amazon EC2 instance, that badly affects my billing, because of EBS disk activity (hundreds of millions of operations).
How can I monitor I/O requests? The perfect tool would allow me to find which template/component makes intensive I/O.
I'm already using FusionReactor and that's great for profiling memory spaces and so on, but it doesn't have anything for I/O.
You could start by using the operating-system monitoring tools to see whether you have mainly reads or writes. The next step is looking at memory, despite this being a disk IO issue: maybe your servers are low on memory and thrashing the drives as they swap pages in and out.
If you have not done so, turn on the template cache; this will stop Railo checking the file system on every page request (provided you have the memory).
If you have plenty of memory (both for your OS and for the JVM) and you have template caching on, start looking for your busy pages in FusionReactor, and check for cffile, cfdirectory, and other such tags in those pages. Good luck.
Also, query-of-queries is often a culprit in high disk IO, as internally a database is used which, if I remember correctly, pages to disk on large result sets.
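To get actual numbers rather than guesses, one starting point (an assumption on my part, using the third-party psutil package; nothing Railo-specific) is to snapshot the OS disk counters around a suspect request:

    import time
    import psutil  # third-party: pip install psutil

    def snapshot():
        io = psutil.disk_io_counters()
        return io.read_count, io.write_count

    r0, w0 = snapshot()
    time.sleep(60)  # ...or drive traffic at the suspect template/component here
    r1, w1 = snapshot()
    print(f"reads: {r1 - r0}, writes: {w1 - w0} during the sample window")

Comparing sample windows with and without a given page in play helps narrow down which template is responsible.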
My Azure cloud service reads and writes to blobs using the .Net storage library (1.7). The blobs are in the same data centre as the service. In my first container, operations are fast (order of 10ms). In my second container they are very slow (typically about 2s or 14s, not much in between). Both are transferring the data using CloudBlob.DownloadToStream() into a MemoryStream. File sizes are typically less than 100kB.
Now I admit I haven't set up a proper test to be able to demonstrate all the above - I'm just going by my log files, so there could be some subtle difference in the way I am accessing the blobs. Apologies if this turns out to be the case.
Anyway, the only relevant difference between these two containers seems to be:
The fast container is accessed frequently (tens of thousands of requests per day), and the slow container quite infrequently (perhaps 200 requests per day).
The fast container typically stores items that are fetched soon afterwards. The slow container is often loading things that might have been stored days ago.
Question: What factors affect blob performance for infrequently-accessed blobs? What can I do to make it faster?
(I don't know how Azure blob storage is implemented, but based on the above I'm going to guess that the data is saved into a storage array and accessed via a dynamically scaling collection of VMs, each of which implements in-memory caching of blobs. Thus the ~14s delay occurs when Azure finds it needs to spin up the VMs. The ~2s delay occurs when a VM is available, but it needs to hunt down the data on a physical disk (seems rather slow), and the 10ms delay occurs when the item is stored in an in-memory cache, or something like that.)
Windows Azure Storage is not architected the way you describe (with an expanding pool of cache VMs), so some data being cached and other data not would have no impact on the Azure Storage server side. See Windows Azure Storage Architecture Overview for a good overview, or the SOSP paper "Windows Azure Storage: A Highly Available Cloud Storage Service with Strong Consistency" for a more in-depth look.
To determine why your blob requests are slower, the first thing to do is determine whether the slow performance is server side or client side. Fortunately, Azure Storage makes this easy via Storage Analytics (see Windows Azure Storage Logging: Using Logs to Track Storage Requests): just compare the End-to-End (E2E) latency and the Server latency. I suspect you will see one of two things:
Low E2E and Low Server. This would indicate that either the request is getting delayed being sent from the client (i.e. not enough worker threads), or your logging is providing incorrect data.
High E2E and Low Server. This would indicate a problem on the client side in processing the request (not enough worker threads to process the response, slow processing of the memory stream, etc.).
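As a sketch of that comparison: Storage Analytics logs are semicolon-delimited text, and assuming the version 1.0 layout puts end-to-end latency and server latency in the sixth and seventh fields (worth double-checking against the logging article above), you could flag suspicious requests like this:

    # Field indexes assume the v1.0 Storage Analytics log format; verify first.
    E2E_FIELD, SERVER_FIELD = 5, 6

    with open("storage-analytics.log") as f:
        for line in f:
            fields = line.rstrip("\n").split(";")
            e2e_ms = int(fields[E2E_FIELD])
            server_ms = int(fields[SERVER_FIELD])
            if e2e_ms - server_ms > 1000:  # over 1s spent outside the service
                print(f"client/network delay: e2e={e2e_ms}ms, server={server_ms}ms")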
So far I get an average of 700 kilobytes per second for downloads via Chrome hitting an EC2 instance in Virginia (us-east region). If I download directly from S3 in Virginia (us-east region), I get 2 megabytes per second.
I've simplified this way down to simply running Apache and reading a file from a mounted EBS volume. Less than one percent of the time I've seen the download hit around 1,800 kilobytes per second.
I also tried nginx; no difference. I also tried running a large instance with 7 GB of RAM. I tried allocating 6 GB of RAM to the JVM and running Tomcat, streaming the files in memory from S3 to avoid the disk. I tried enabling sendfile in Apache. None of this helps.
When I run from Apache reading from the file system and use a download manager such as DownThemAll, I always get 2 megabytes per second when downloading from an EC2 instance in Virginia (us-east region). It's as if my Apache is configured to only allow 700 kilobytes per second per thread, though I don't see any configuration options relating to this.
What am I missing here? I also benchmarked Dropbox downloads, as they use EC2 as well, and I noticed I get roughly 700 kilobytes per second there too, which is way slow as well. I imagine they must host their EC2 instances in the Virginia / us-east region too, based on the speed. If I use a download manager to download files from Dropbox, I get 2 megabytes a second as well.
Is this just the case with TCP, where if you are far away from the server you have to split transfers into chunks and download them in parallel to saturate your network connection?
I think your last sentence is right: your 700 KB/s is probably a limitation of a given TCP connection ... maybe a throttle imposed by EC2, or perhaps your ISP, or the browser, or a router along the way -- dunno. Download managers likely split the request over multiple connections (I think this is called "multi-source"), gluing things together in the right order after they arrive. Whether this is the case depends on the software you're using, of course.
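To test whether a single connection is the bottleneck, you can emulate a download manager with HTTP Range requests. This sketch (third-party requests library; the URL is a placeholder, and it assumes the server honors Range headers and reports Content-Length) fetches fixed-size chunks over several parallel connections and writes them back in order:

    from concurrent.futures import ThreadPoolExecutor
    import requests  # third-party: pip install requests

    URL = "https://example.com/bigfile.bin"  # placeholder
    CHUNK = 4 * 1024 * 1024                  # 4 MB per range request

    def fetch(rng):
        start, end = rng
        r = requests.get(URL, headers={"Range": f"bytes={start}-{end}"})
        r.raise_for_status()
        return r.content

    size = int(requests.head(URL, allow_redirects=True).headers["Content-Length"])
    ranges = [(i, min(i + CHUNK, size) - 1) for i in range(0, size, CHUNK)]

    with ThreadPoolExecutor(max_workers=4) as pool, open("bigfile.bin", "wb") as out:
        for chunk in pool.map(fetch, ranges):  # map preserves input order
            out.write(chunk)

If four connections roughly quadruple your throughput, the limit is per-connection (TCP window or a per-stream throttle) rather than your line.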
Just a question about Azure.
Yes, I know roughly about Azure and cloud computing. I will put it this way:
Say, in the normal way, I build a program listening on a TCP port, and I run this server program on a server. I also build a client program, which connects to the server through the specified port. Once a client is connected, my server program computes something and returns it to the client.
The above is the normal model, or rather my program's model.
Now I want to use Azure, because my clients are too many, let's say 1 million a day, and I don't want to rent 1000 servers and maintain them (that number of clients is just an assumption).
I have looked at the Azure pricing plan. It talks about CPU and about small, medium, and large instances.
I don't know what these mean. For example, in my assumed case above, how many instances do I need? And what is the most I can get from Azure, an extra-large instance (8 small instances)?
How does Azure scale for my program? If I choose a small instance (my server program is very small; it just computes some data and returns it to clients), will Azure scale for me? Or will Azure just give me one virtual server and let it overload?
Please consider the CPU only, not storage or network traffic.
You choose two things: what size of VM to run (small, medium, large) and how many of those VMs to run. That means you could choose a small VM (single processor) and run 100 "instances" of it (100 VMs), or you could choose a large VM (eight processors on the same server) and run 10 instances of it (10 VMs).
Today, Windows Azure doesn't automatically adjust your scale, so it's up to you to use the web portal or the Service Management API to increase the number of instances as your need increases.
One factor to consider is whether your app can take advantage of multi-core environments (multi-threading, shared memory, etc.) to improve its scale. If it can, it may be better to use five 2-core (i.e. medium) VMs than ten 1-core (small) VMs. You may find in some cases that two 4-core VMs perform better than five 2-core VMs.
If your app is not parallel/multi-core, then you could just run some number 'x' of small VMs. The charges are linear anyway - i.e. a 2-core VM is twice the cost of a single-core VM.
Other factors would include the scratch disk size & memory available in the VM.
One other suggestion: you may want to look into leveraging Azure queues (i.e. have the client post to a queue and the workers pull from there). This would allow you to transparently (to the client) increase or decrease the workers without worrying about connections, etc. Also, if a processing step failed and crashed your instance, the message would persist and be picked up by one of the others.
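As a minimal sketch of that queue pattern (using today's Python azure-storage-queue package, which postdates this discussion; the connection string and queue name are placeholders):

    from azure.storage.queue import QueueClient  # pip install azure-storage-queue

    queue = QueueClient.from_connection_string("<connection-string>", "work-items")

    # Client side: post a work item instead of holding a TCP connection open.
    queue.send_message("compute-job-42")

    # Worker side: any number of worker instances pull and process independently.
    for msg in queue.receive_messages():
        result = msg.content       # process the job here
        queue.delete_message(msg)  # delete only after success, so a crashed
                                   # worker leaves the message for another

Because workers only share the queue, you can add or remove instances without the client noticing, which is exactly the decoupling described above.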
I suggest you also monitor, evaluate, and perfect the results of your Azure configuration.
For "Monitoring Applications in Windows Azure" (and performance) please reference
http://channel9.msdn.com/learn/courses/Azure/Deployment/DeployingApplicationsinWindowsAzure/Exercise-3-Monitoring-Applications-in-Windows-Azure/
There is also a good blog entry titled "Visualizing Windows Azure diagnostic data"
Check out http://www.paraleap.com - a simple service for automatically adjusting the number of instances you have according to demand.