Is there any data on how fast Azure VM local drives are?

I'm experimenting with OnStart() in my Azure role using "small" instances. It turns out it takes about two minutes to unpack a 400-megabyte ZIP file located in "local storage" on drive D into a folder on drive E.
I thought maybe I should do it some other way around, but I can't find any data about how fast the local disks on Azure VMs typically are.
Are there any test results for how fast Azure VM local disks are?
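For a rough point of reference, here is a minimal way to time the unpack yourself. This is only a sketch: the paths are placeholders standing in for the drive letters mentioned above, and the result includes CPU time spent decompressing, not just disk speed.

    import time
    import zipfile

    # Placeholder paths; point these at your role's local storage
    # resource on D: and the target folder on E:.
    SRC_ZIP = r"D:\LocalStorage\payload.zip"
    DEST_DIR = r"E:\approot\unpacked"

    with zipfile.ZipFile(SRC_ZIP) as zf:
        uncompressed_mb = sum(i.file_size for i in zf.infolist()) / 2**20
        start = time.perf_counter()
        zf.extractall(DEST_DIR)  # the cross-drive unpack, D: -> E:
        elapsed = time.perf_counter() - start

    print(f"Unpacked {uncompressed_mb:.0f} MB in {elapsed:.1f} s "
          f"({uncompressed_mb / elapsed:.1f} MB/s effective)")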

I just ran a comparison of disk performance between Azure and Amazon EC2. You can read it here, although you will probably want to translate it from Norwegian :-)
The interesting parts, though, are the HD Tune screenshots in that post: one of a small instance at Amazon EC2 running Windows Server 2008, and one of a small instance on Azure running Windows Server 2012.
This isn't a fair comparison, as some of the differences may be due to missing Windows 2012 drivers, but you may still find it useful.
As pointed out by Sandrino, though, small instances at Azure only get "moderate" I/O performance, and this may be an argument in favor of Amazon.

It all depends on your VM size: https://www.windowsazure.com/en-us/pricing/details/#cloud-services. As you can see, a small instance will give you moderate I/O performance, while medium, large, and extra-large instances will give you high I/O performance.
If you want specifics, I suggest you read through this blog post: Microsoft SQL Server 2012 VM Performance on Windows Azure Virtual Machines – Part I: I/O Performance Results. It covers the SQLIO tool, which can help people decide on moving their SQL Server infrastructure to Windows Azure VMs.
This tool is interesting since it might just give you the info you need (read and write MB/s).
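If you just want a quick, rough number before setting up SQLIO, a crude sequential read/write probe along these lines gives an approximate MB/s figure. This is only a sketch: the test file path is a placeholder, and the read figure is optimistic because the OS file cache is not bypassed, something proper tools like SQLIO and HD Tune take care of.

    import os
    import time

    TEST_FILE = r"E:\disk_bench.tmp"  # placeholder: a path on the drive to test
    BLOCK = 1024 * 1024               # 1 MB blocks
    TOTAL_MB = 512

    buf = os.urandom(BLOCK)

    # Sequential write, flushed to disk at the end.
    start = time.perf_counter()
    with open(TEST_FILE, "wb") as f:
        for _ in range(TOTAL_MB):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reaches the disk
    write_mbs = TOTAL_MB / (time.perf_counter() - start)

    # Sequential read; the OS file cache makes this figure optimistic.
    start = time.perf_counter()
    with open(TEST_FILE, "rb") as f:
        while f.read(BLOCK):
            pass
    read_mbs = TOTAL_MB / (time.perf_counter() - start)

    os.remove(TEST_FILE)
    print(f"write: {write_mbs:.0f} MB/s, read: {read_mbs:.0f} MB/s")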

Related

What if I reduce the amount of RAM allocated to Elastic Search for Azure DevOps Server Express?

I just installed Azure DevOps Server Express on my computer with 32GB of RAM (running Windows 10), and I noticed that the Elastic Search instance installed as part of the Azure DevOps setup now consumes 10GB of RAM.
This Azure Devops installation is for my personal use and will only see low usage.
I reduced the amount of RAM to 512MB initial/1GB max via Elastic Search manager utility.
However, I am not sure what the outcome of this will be.
Does it mean the search will be slower (I am perfectly fine with this)?
Or does it mean that search functionality will just not work or will be "partially broken" (i.e. "in-memory index" will be incomplete or something like that)?
For evaluation or personal use, you can use a basic configuration with as little as 2 GB of RAM; please refer to the documentation.
The amount of RAM doesn't affect search results, but too little of it will cause performance issues such as slower searches and longer load times.
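If you want to see whether the reduced heap is actually under pressure, you can poll Elasticsearch's node stats API while running some searches. This is only a sketch: it assumes the bundled instance listens on localhost:9200 without authentication, which may not hold for the Azure DevOps setup.

    import json
    import urllib.request

    # Assumption: the bundled Elasticsearch listens on localhost:9200
    # without authentication; the Azure DevOps setup may differ.
    URL = "http://localhost:9200/_nodes/stats/jvm"

    with urllib.request.urlopen(URL) as resp:
        stats = json.load(resp)

    for node_id, node in stats["nodes"].items():
        mem = node["jvm"]["mem"]
        print(f"{node.get('name', node_id)}: "
              f"heap {mem['heap_used_in_bytes'] / 2**20:.0f} MB used of "
              f"{mem['heap_max_in_bytes'] / 2**20:.0f} MB max "
              f"({mem['heap_used_percent']}%)")

Roughly speaking, if the heap percentage hovers near the maximum while you search, expect slower responses from garbage-collection pressure before anything outright breaks.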

Random 1104 Error Reading File Errors in Multiple VFP Applications

We have multiple applications developed in Visual FoxPro 8.0 running in a data center on Windows 2008 R2 on VMware. We also have a Citrix farm on the same network where users run yet another VFP 8.0 application in Citrix sessions. All applications share the same set of data tables located on a file server (also a Windows 2008 R2 VM). The virtual hosts are connected by a 10Gb LAN (managed switch).
Since mid-July we have been seeing random 1104 "Error reading file..." errors in multiple different applications on multiple servers. All of them reference different files on the file server.
The problem started mid-July and its frequency has gradually increased. At first it was most frequent in the afternoons around 3 pm; now it happens from early morning till late afternoon. It affects EDI servers (these run batch jobs in unattended mode), Citrix servers, and a variety of applications. It occurs when a VFP application (any of them) tries to open a database container file or individual tables, most often with the USE command but sometimes while executing a SQL SELECT statement, or when loading a VFP form that opens tables in its DataEnvironment.
We caught a moment when the exact same error happened on two different servers running different applications at the same moment (to within a second). We also saw two different applications running on the same computer erroring out at the same moment.
We replaced the file server with a new virtual machine with no relief (we have since changed back to the old file server).
We disabled the antivirus.
We updated VMware on all hosts to the latest version.
Sysinternals Process Monitor shows an "INVALID_NETWORK_RESPONSE" event when the error occurs.
We captured traffic on both the server side and the client side when the error occurred and had it analyzed by a network analysis specialist. He observed a peculiar pattern where the client OS starts retrieving the file in question from the file server AFTER the VFP application has thrown an error. It seems that the VFP application requests a file from the OS, then either gets an abnormal response or just times out, and only after that does the OS send packets requesting the file. Again, this happens sporadically.
OpLocks and SMB2 have been disabled on all computers, on both the server and client sides of the equation, for many years, and everything ran smoothly until now...
Any advice would be greatly appreciated.
My first piece of advice would be to re-enable OpLocks and SMB2. There is no reason to mess with either of those items as things stand today, and you are losing a huge amount of performance running at the SMB1 level (a sketch for checking the current registry state follows the list below).
In my experience, these issues have almost always been caused by one of the following:
Antivirus/antimalware software.
Replication or online backup software like MozyPro.
The Windows Search indexing service.
You should consider installing the Windows 7 / Server 2008 R2 Enterprise Hotfix Rollup if you haven't already.
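For reference, here is a minimal sketch for checking the current server-side state, assuming the classic LanmanServer\Parameters registry values were used to disable these features (run it on the file server):

    import winreg

    # The classic server-side switches live under LanmanServer\Parameters;
    # an absent value means the default, i.e. the feature is enabled.
    KEY = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        for value_name in ("Smb2", "EnableOplocks"):
            try:
                value, _ = winreg.QueryValueEx(key, value_name)
                print(f"{value_name} = {value}  (0 means disabled)")
            except FileNotFoundError:
                print(f"{value_name} not set (default: enabled)")

Re-enabling would mean setting the values to 1 (or deleting them to restore the defaults) and restarting the Server service.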
That problem is mostly related to SMB2.
Some antivirus software can cause it as well.
Windows updates are another factor. If your VFP apps work directly against DBF/DBC files, be careful about updating your system/OS; that is my personal suggestion. Windows Server 2012+ and Windows 10+ will probably cause big problems in the near future.
And the most likely culprit is I/O load:
What is your I/O request rate per second? If you see more than roughly 1000-2000 requests per second against a single DBF file, that is a bottleneck, and if your storage device is an HDD you need to switch to an SSD; I suggest an M.2 SSD.
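As a crude way to gauge what your current storage sustains, a random-read probe along these lines gives an upper-bound number. This is only a sketch: the path is a placeholder, and cached reads will inflate the result.

    import os
    import random
    import time

    # Placeholder path: point this at one of your larger DBF tables.
    # The probe is read-only, but cached reads inflate the number,
    # so treat the result as an upper bound.
    PATH = r"\\fileserver\data\bigtable.dbf"
    BLOCK = 8192
    SECONDS = 10

    size = os.path.getsize(PATH)
    ops = 0
    deadline = time.perf_counter() + SECONDS
    with open(PATH, "rb") as f:
        while time.perf_counter() < deadline:
            f.seek(random.randrange(0, max(size - BLOCK, 1)))
            f.read(BLOCK)
            ops += 1

    print(f"~{ops / SECONDS:.0f} random reads/s")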

Moving TFS working folder to network share

Our IT folks are telling us (the dev group) we shall not have ANY files stored on our local hard drives, including our TFS working folders. This is ridiculous for a variety of reasons, but until I'm convinced it's a good idea I'll play along, and when no one is looking make a local working folder.
Does anyone have their working folder on a network share? How well does it work? Each developer would have their own folder in the share, but it would be on the network. My main concerns are performance and the fact that we would need to be connected at all times in order to work.
From a TFS point of view it works without issue, but stay away from the local workspaces of TFS/VS 11.
I strongly feel for you on the compiling front: compiling a solution stored on the network is an absolute disaster in terms of performance.
You did not mention it, but I assume your network share is accessed through a mapped network drive.
By the way, may I ask why these guys don't want you to store files locally?
While it's not something I would typically recommend, if that is the policy and you have to adhere to it, it might be worthwhile to consider simply having server-side development VMs that your devs RDP into. I've seen companies do this before, and the big downside is that if you're not connected to the network you can't do anything.
There are some upsides too, though. You can easily increase resources (RAM, disk space, CPU, etc.) thanks to the virtualization infrastructure. If somebody's laptop dies, they are not out of commission: just find a loaner machine, RDP into their VM, and they're up and running. If somebody leaves, you have a copy of their entire working machine that you can give to their replacement. All machines can be easily backed up. And compiling, and working within VS in general, should be much faster than trying to work with a local Visual Studio reading and writing to a network drive.

Ideas on how to save space with Windows 2008 R2 server on Hyper-V?

I ran into this question a while ago, and it still bothers me.
I work with a few virtual machines running Windows 2008 server, mostly demo VMs and test machines. Since most devs use them, I prefer to not have individual setups here and there and maintain a catalog of exported VMs and hard drive images instead.
Thanks to side-by-side assemblies and Windows updates, each server carries an overhead of about 6-12 GB in the side-by-side folder (winsxs) and Windows Update data.
Suppose I have 50 exported VMs (with their images), each with about 3 GB of payload data (OS, programs, data) and about 12 GB of shared overhead, which is mostly the same for all these VMs. Then most of my storage space (about 600 GB of overhead in total) is wasted, not to mention the network overhead of pushing this redundant data around when a dev wants to download a new VM snapshot.
So I am thinking of a way to consolidate the winsxs folder across multiple VMs. Ideally, I'd like to come up with some shared drive or something; I am even willing to designate a physical device for this.
I realize that Windows server has minimum requirements and these files cannot be deleted (http://social.technet.microsoft.com/Forums/en-US/itprovistasetup/thread/9411dbaa-69ac-43a1-8915-749670cec8c3).
I also found a post on moving the winsxs folder, but it does not appear to be a reliable solution.
Does this sound even remotely feasible? What are the best practices for consolidating resources across VMs?
Thank you almighty stackoverflow gurus for your prompt attention ;-)
Don't touch the WinSXS folder.
It's not as big as it looks (a lot of it is hard links to files that live elsewhere; see the sketch below).
If you want to consolidate space, use differencing disks: create one VM with Windows on it, and then use that disk as the base for the rest. Each child disk will only store the delta between the original image and wherever that VM goes after that.
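If you want to verify the hard-link claim on one of your images, here is a quick sketch. It assumes the default winsxs location, and relies on Python 3 reporting the NTFS link count on Windows.

    import os

    # Sample files under winsxs and count how much of their apparent size
    # is shared via hard links (st_nlink > 1). Run elevated; capped at a
    # fixed sample so it finishes quickly.
    WINSXS = r"C:\Windows\winsxs"
    SAMPLE = 20000

    total = linked = scanned = 0
    for root, _dirs, files in os.walk(WINSXS):
        for name in files:
            try:
                st = os.stat(os.path.join(root, name))
            except OSError:
                continue  # some files deny access even to administrators
            scanned += 1
            total += st.st_size
            if st.st_nlink > 1:
                linked += st.st_size
        if scanned >= SAMPLE:
            break

    print(f"sampled {scanned} files: {total / 2**30:.2f} GB apparent, "
          f"{linked / 2**30:.2f} GB of that is hard-linked elsewhere")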
It is not possible to share WinSxS folders across installations.
If you want to know more about how WinSxS works, check out my blog post: http://fearthecowboy.com/post/CoApp-FAQ-Can-you-explain-how-Side-by-side-%28WinSxS%29-works.aspx

best way to set up a VM for development (regarding performance)

I am trying to set up a clean VM that I will use in many of my dev projects. Hopefully I will use it many times and for a long time, so I want to get it right and set it up so performance is as good as possible. I have searched for a list of things to do, but strangely found only older posts, and none here.
My requirements are:
My host is Vista 32b and the guest is Windows 2008 64b, using VMware Workstation.
The VM should also be able to run on VMware ESX.
I cannot move to other products (VirtualBox etc.), but info about the performance of each one is welcome for reference. Anyway, I guess most advice would apply to other OSs and other VM products.
I need network connectivity to my LAN
When developing/testing, guest will run several java processes, a DB and perform some file I/O
What I have found so far is:
HOWTO: Squeeze Every Last Drop of Performance Out of Your Virtual PCs: it's an old post, and about Virtual PC, but I guess most things still apply (and also apply to VMware).
I guess it makes a difference to disable all unnecessary services, but the ones mentioned in (1) seem like too few; I specifically always disable Windows Search. Are there any other services I should disable?
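A quick way to enumerate candidates is to list the services that start automatically and are currently running. This is a sketch using the third-party psutil package, which exposes Windows services.

    import psutil  # third-party: pip install psutil

    # List services configured to start automatically and currently
    # running; these are the candidates worth reviewing.
    for svc in psutil.win_service_iter():
        try:
            if svc.start_type() == "automatic" and svc.status() == "running":
                print(f"{svc.name():<30} {svc.display_name()}")
        except psutil.Error:
            pass  # some services deny access to their configuration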
You can try to run the CD/DVD through vLite to remove unwanted crap. I'm not 100% sure if Windows 2008 server is supported but you could give it a try. I've successfully stripped down XP with nLite to about 200MB with only the bare minimum I need for testing software. You might be able to do something similar to Windows 2008 with vLite.
My host is Vista 32b and the guest is Windows 2008 64b,
First mistake. Seriously, why not run a 64-bit host instead of 32-bit Vista? That would give your VM a good memory space to work with, while now, even if it is possible with VMware, it goes through really nasty APIs in the Windows layer.
That said, why use Vista as the host at all? Why not directly load a 2008 R2 host, configure it into workstation mode (heck, you even get our friendly Aero if you install all the things the server leaves out by default) and be happy with it?
I guess it makes a difference to disable all unnecessary services,
Hm, seriously? I run a couple of Hyper-V hosting servers on top of physical domain controllers without any reconfiguration and with good enough (i.e. great) performance. It helps that I don't have the typical workstation bottleneck (i.e. one overloaded hard disk). I have never found a reason to disable any service to squeeze out the last bit of performance.
Guest will run many java processes, a DB and perform lots of file I/O
Well, get proper hardware for that, i.e. a hardware RAID controller and a LOT of drives, in accordance with your needs. A DB is I/O sensitive. VERY sensitive.
