http://aws.amazon.com/ec2/#pricing
I can't understand this. What is an instance? ("On-Demand Instances let you pay for compute capacity by the hour with no long-term commitments.")
Does this mean that I can use the whole of the following as my virtual server (like a VMware server):
(Extra Large Instance)
15 GB memory
8 EC2 Compute Units (4 virtual cores with 2 EC2 Compute Units each)
1,690 GB instance storage
64-bit platform
I/O Performance: High
API name: m1.xlarge
For $0.96 per hour?
Or does it mean something like only one operation? What exactly is an instance?
An instance is an operating system instance (a virtual machine). Using virtualization, Amazon (and cloud providers in general) offer you a virtualized environment in which OS instances run, and you have full control over the operating system inside that environment. "Per hour" means that you pay that much for using your instance's resources for a single hour. That page has almost all the details about pricing.
An instance is a virtual machine. For example, you can start up an Ubuntu instance, SSH into it, and do whatever you want.
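To make that concrete, here is a minimal sketch of starting an instance programmatically with boto3 (Python). The region, AMI ID, and key pair name are placeholder values you would replace with your own; treat it as an illustration, not a copy-paste recipe.

    import boto3

    # Placeholder values: region, AMI ID, and key pair name are examples only.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-00000000",      # hypothetical Ubuntu AMI ID
        InstanceType="m1.xlarge",    # the instance size you pay for per hour
        KeyName="my-keypair",        # hypothetical SSH key pair
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched instance:", instance_id)

    # Once it is running, you SSH to it like any other Linux box:
    #   ssh -i my-keypair.pem ubuntu@<public-dns-of-instance>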
Related
I am interested in understanding the way in which the hardware resources (CPU, disk, network, etc.) of an AWS physical server are shared between different applications. Have people experienced inexplicable performance changes in services running on AWS that you successfully attributed to another application sharing the physical resources? If so, how did you go about debugging it?
In particular, I am interested in more complicated interactions between the resources, such as CPU->Memory bandwidth. If you run 15 VMs on a single machine, you will surely have worse performance than if you ran 2 VMs.
Perhaps this is a more general question about Xen virtualization, but I don't know if there is some kind of AWS magic happening under the hood that I don't know about.
I am not sure if this is the right forum for this kind of question; if not, it would be helpful if you could point me towards a resource or another forum.
Amazon EC2 instances are not susceptible to "noisy neighbour" problems.
Based upon the Instance Type selected, the EC2 instance receives CPU, memory and (for some instance types) locally attached disk storage. These resources are dedicated to the instance and will not be impacted by other users or other virtual machines. (Exceptions to this are the t1 and t2 instance types.)
Specifically:
The instance is allocated a number of vCPUs. These are provided to the instance and no other instance can use these vCPUs (see note about t1 and t2 below). The EC2 Instance Type page defines a vCPU as:
Each vCPU is a hyperthread of an Intel Xeon core for M4, M3, C4, C3, R3, HS1, G2, I2, and D2.
The instance is allocated an amount of RAM. No other instance can use this RAM. There is no oversubscription of CPU nor RAM.
The instance might be allocated locally-attached disk storage, known as Instance Store or Ephemeral Storage. This disk storage does not persist when the instance is Stopped or Terminated, so only store temporary data or data that is replicated elsewhere.
The instance is allocated network bandwidth that is dedicated to that instance. No other instance can impact this network bandwidth. The network performance is based upon the selected instance type. Basically, larger instances receive more network performance.
None of the above factors are impacted by other instances (virtual machines) running on the same host.
t1 and t2 instance types
Exceptions to the above statement are:
t1.micro instances "provide a small amount of consistent CPU resources and allow you to increase CPU capacity in short bursts when additional cycles are available".
t2 instances provide burst capacity based upon a system of CPU Credits. CPU Credits are earned at a constant rate depending upon instance type, and these credits can be used to burst the CPU when necessary.
For both these instance types, I would assume that this burst capacity is shared between instances, so it is possible that CPU burst might be impacted by other instances also wishing to burst. The t2 instances, however, would make this 'fair' by only consuming CPU credits when the CPU did actually burst.
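If you want to see whether a t2 instance is running low on burst capacity, one way (a sketch, assuming boto3 and a placeholder instance ID) is to read the CPUCreditBalance metric that these instance types publish to CloudWatch:

    import boto3
    from datetime import datetime, timedelta

    # Hypothetical instance ID; replace with your own t2 instance.
    INSTANCE_ID = "i-0123456789abcdef0"

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUCreditBalance",   # credits available for bursting
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        StartTime=datetime.utcnow() - timedelta(hours=6),
        EndTime=datetime.utcnow(),
        Period=300,                      # 5-minute data points
        Statistics=["Average"],
    )

    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], round(point["Average"], 1))

A steadily falling balance means the instance is spending credits faster than it earns them and will eventually be throttled to its baseline rate.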
Dedicated Instances and Dedicated Hosts
Dedicated instances are "Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer." Basically, your AWS account will be the only account running instances on that host computer.
A Dedicated Host is a "physical server with EC2 instance capacity fully dedicated to your use. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software licenses, including Windows Server, Microsoft SQL Server, SUSE, Linux Enterprise Server, and so on." Basically, you pay for the entire host computer and then launch individual instances on the host (at no additional charge).
The use of a Dedicated Instance or a Dedicated Host has no impact on resources allocated to each instance. They would receive the same resources as when running as a normal Shared Instance.
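For completeness, a Dedicated Host can also be allocated through the API. This is a sketch using boto3; the Availability Zone and instance type are example values you would choose to match your licensing.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Example values only: pick the AZ and instance type you actually need.
    response = ec2.allocate_hosts(
        AvailabilityZone="us-east-1a",
        InstanceType="m4.large",   # the host accepts instances of this type
        Quantity=1,
    )
    print("Allocated host:", response["HostIds"][0])

    # Instances can then be launched onto this host (at no additional
    # per-instance charge) by targeting the host in their placement settings.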
My company is running into a network performance problem that seemingly has all of the "experts" we're working with (VMware support, RHEL support, our managed services hosting provider) stumped.
The issue is that network latency between our VMs (even VMs residing on the same physical host) increases--up to 100x or more!--with network throughput. For example, without any network load, latency (measured by ping) might be ~0.1 ms. Start transferring a couple of 100 MB files, and latency grows to 1 ms. Initiate a bunch (~20 or so) of concurrent data transfers between two VMs, and the latency between them can increase to upwards of 10 ms.
This is a huge problem for us because we have application server VMs hosting processes that might issue 1 million or so queries against a database server (a different VM) per hour. Adding a millisecond or two to each query therefore adds roughly 17 to 33 minutes of pure latency per hour of work, which increases our runtime substantially--sometimes doubling or tripling our expected durations.
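A rough sketch of the kind of measurement described above, for anyone who wants to reproduce it: ping round-trip times are compared with and without a transfer running. The peer address and the load command are placeholder values.

    import re
    import statistics
    import subprocess
    import threading

    # Placeholders: the peer VM's address and a command that sends traffic to it.
    PEER = "10.0.0.12"
    LOAD_CMD = ["scp", "bigfile.bin", "user@10.0.0.12:/tmp/"]

    def ping_latencies(count=20):
        """Return the individual round-trip times (ms) from `ping -c count`."""
        out = subprocess.run(["ping", "-c", str(count), PEER],
                             capture_output=True, text=True).stdout
        return [float(ms) for ms in re.findall(r"time=([\d.]+)", out)]

    idle = ping_latencies()

    # Start the transfer in the background, then ping again while it runs.
    load = threading.Thread(target=lambda: subprocess.run(LOAD_CMD, check=False))
    load.start()
    busy = ping_latencies()
    load.join()

    print(f"idle latency:   {statistics.mean(idle):.2f} ms")
    print(f"loaded latency: {statistics.mean(busy):.2f} ms")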
We've got what I would think is a pretty standard environment:
ESXi 6.0u2
4 Dell M620 blades with 2x Xeon E5-2650v2 processors and 128GB RAM
SolidFire SAN
And our base VM configuration consists of:
RHEL7, minimal install
Multiple LUNs configured for mount points at /boot, /, /var/log, /var/log/audit, /home, /tmp and swap
All partitions except /boot encrypted with LUKS (over LVM)
Our database server VMs are running Postgres 9.4.
We've already tried the following:
Changing the virtual NIC from VMXNET3 to e1000 and back
Adjusting RHEL Ethernet stack settings
Using ESXi's "low latency" option for the VMs
Upgrading our hosts and vCenter from ESXi 5.5 to 6.0u2
Creating bare-bones VMs (set up as above with LUKS, etc., but without any of our production services on them) for testing
Moving the datastore from the SSD SolidFire SAN to local (on-blade) spinning storage
None of these improved network latency. The only test that showed expected (non-deteriorating) latency was when we set up a second pair of bare-bones VMs without LUKS encryption. Unfortunately, we need fully encrypted partitions (for which we manage the keys) because we are dealing with regulated, sensitive data.
I don't see how LUKS--in and of itself--can be to blame here. Rather, I suspect that LUKS running with some combination of ESX, our hosting hardware, and/or our VM hardware configuration is to blame.
I performed a test in a much wimpier environment (MacBook Pro, i5, 8 GB RAM, VMware Fusion 6.0, CentOS 7 VMs configured similarly with LUKS on LVM and the same testing scripts) and was unable to reproduce the latency issue. Regardless of how much network traffic I sent between the VMs, latency remained steady at about 0.4 ms. And this was on a laptop with a ton of other things going on!
Any pointers/tips/solutions will be greatly appreciated!
After much scrutiny and comparing the non-performing VMs against the performant VMs, we identified the issue as a bad selection for the advanced "Latency Sensitivity" setting.
For our poorly performing VMs, this was set to "Low". After changing the setting to "Normal" and restarting the VMs, latency dropped by ~100x and throughput (which we hadn't originally noticed was also a problem) increased by ~250x!
Storage
Trying to launch an Ubuntu 13.10 Amazon AWS instance, I've started with a General Purpose m1.medium instance, which comes with 1 x 410 GB of instance storage. When I get to tab 4 (Add Storage), it shows 8 GB on the root device and N/A on another device called Instance Store 0. However, I can increase the root device to up to 1024 GB. I can also change Instance Store 0 to an EBS volume and get 1024 GB there.
How am I able to select more space than I have available (410 GB)? Am I charged for that? Where can I see how much each of my instances is costing? If I set the root device to 410 GB, will I then be charged exactly as on the Amazon price list?
Reserved Instance
I also have purchased a Reserved Instance. How can I verify that the EC2 instance I have just created is actually using my reserved instance?
Storage
You have two storage options with instances: EBS and instance storage.
EBS is somewhat like a SAN volume. It exists outside the instance and is accessed via dedicated Ethernet (this bandwidth is shared between instances based on I/O priority). Volumes are billed based on their provisioned size. EBS has some benefits over instance storage in that it can be easily snapshotted and moved from instance to instance. Its limits are practical ones: while you can attach quite a few volumes (up to 1 TB each), the available bandwidth will likely be saturated before you can take advantage of all of them.
Instance storage is a disk attached directly to the host hardware. It's included in the instance cost, but it should not be treated as persistent: you will lose any data stored on this volume if your instance is stopped or fails for any reason. This is because at each boot, your instance is assigned to an available host in a pool instead of being locked to a specific host.
You can use both types of storage on any instance (except micro, which has no instance storage available). Instance storage options, however, need to be set at launch and can't be changed later; EBS volumes can be added or removed at any time. In most cases, instance storage is disabled by default and needs to be explicitly enabled, which helps avoid the "Why did all my data get deleted?" situation.
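Since EBS volumes can be added at any time, here is a minimal sketch of doing it with boto3. The region, instance ID, Availability Zone, size, and device name are placeholder values; the volume must be created in the same AZ as the instance.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholder values; the volume must live in the same AZ as the instance.
    INSTANCE_ID = "i-0123456789abcdef0"
    AZ = "us-east-1a"

    # Size is in GiB; you are billed on the provisioned size.
    volume = ec2.create_volume(AvailabilityZone=AZ, Size=100)
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])

    # Attach the new volume to the running instance as an extra block device.
    ec2.attach_volume(
        VolumeId=volume["VolumeId"],
        InstanceId=INSTANCE_ID,
        Device="/dev/sdf",   # example device name; format and mount it in the OS
    )
    print("Attached", volume["VolumeId"], "to", INSTANCE_ID)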
Reserved Instance
Reservations are a billing feature. As long as your instance matches the parameters of the reservation, you will be billed at the reservation rate. You should be able to verify this in your account activity.
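Another way to check (a sketch, assuming boto3) is to list your active reservations and compare their instance type and Availability Zone against the instance you launched; if they match, the reserved rate is applied automatically.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # List active reservations and their parameters.
    reservations = ec2.describe_reserved_instances(
        Filters=[{"Name": "state", "Values": ["active"]}]
    )
    for ri in reservations["ReservedInstances"]:
        print("Reservation:", ri["ReservedInstancesId"],
              ri["InstanceType"], ri.get("AvailabilityZone", "any AZ"),
              "count:", ri["InstanceCount"])

    # Compare against your running instances; a match in type (and AZ, if the
    # reservation is zonal) means the reservation's billing rate applies.
    instances = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for res in instances["Reservations"]:
        for inst in res["Instances"]:
            print("Instance:", inst["InstanceId"], inst["InstanceType"],
                  inst["Placement"]["AvailabilityZone"])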
Reading http://aws.amazon.com/autoscaling/ , it looks like Amazon lets you automatically create more virtual machines (EC2 instances) when the load on your existing machine gets high.
However, that's not what I want. I want a single virtual machine that becomes more powerful (more RAM, CPU, etc) when the machine load/memory usage is high. How do I do this?
vps.net appears to offer this:
http://vps.net/product/cloud-servers/
under "scale with demand", but I'd like to find an Amazon equivalent.
You can scale an EC2 instance up and down, but
Any automated trigger for this would need to be written by you, calling the EC2 API to perform the scaling
Moving the EC2 instance to a larger or smaller instance type requires stopping and restarting the instance.
The basic method is (a boto3 sketch follows the list):
stop (not terminate) the instance
modify-instance-attribute to change the type
start the instance
reassociate the Elastic IP address (if any).
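Here is a sketch of those steps using boto3. The instance ID, target type, and Elastic IP are placeholders; for a VPC Elastic IP you would pass AllocationId instead of PublicIp.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholders: replace with your instance ID, target type, and Elastic IP.
    INSTANCE_ID = "i-0123456789abcdef0"
    NEW_TYPE = "m1.xlarge"
    ELASTIC_IP = "203.0.113.10"

    # 1. Stop (not terminate) the instance.
    ec2.stop_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[INSTANCE_ID])

    # 2. Change the instance type while it is stopped.
    ec2.modify_instance_attribute(InstanceId=INSTANCE_ID,
                                  InstanceType={"Value": NEW_TYPE})

    # 3. Start it again.
    ec2.start_instances(InstanceIds=[INSTANCE_ID])
    ec2.get_waiter("instance_running").wait(InstanceIds=[INSTANCE_ID])

    # 4. Re-associate the Elastic IP, if you use one.
    ec2.associate_address(InstanceId=INSTANCE_ID, PublicIp=ELASTIC_IP)
    print("Instance", INSTANCE_ID, "is now", NEW_TYPE)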
I've written an article that provides more information, sample commands, and things to watch out for when performing this resize:
http://alestic.com/2011/02/ec2-change-type
I'm using a small-size EC2 instance.
It's noticeably slower than my sub-$800 home Linux machine
(an average machine, purchased about 6 months ago).
I don't know whether the CPU or the hard disk is the bottleneck.
I wonder if there's a way to tell which.
Yes. If you want to monitor your EC2 instance, consider using Amazon CloudWatch ( http://aws.amazon.com/cloudwatch/ ). It reports instance metrics such as CPU utilization, disk I/O, and network traffic out of the box (memory usage requires publishing a custom metric from within the instance). Basic monitoring is also free in the Amazon free tier.
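As a minimal sketch (assuming boto3 and a placeholder instance ID), you can pull the CPUUtilization metric from CloudWatch; sustained values near 100% point at the CPU, while low CPU with slow jobs suggests looking at disk activity instead.

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Placeholder instance ID; replace with your own.
    INSTANCE_ID = "i-0123456789abcdef0"

    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",   # percent of the instance's allocated CPU
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average", "Maximum"],
    )

    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"],
              "avg %.1f%%" % point["Average"],
              "max %.1f%%" % point["Maximum"])

The same call with MetricName="DiskReadOps" or "DiskWriteOps" gives a first look at instance-store disk activity; EBS volumes report their own metrics under the AWS/EBS namespace.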
If you're looking for more detailed monitoring, consider the Server Density service ( http://www.serverdensity.com/cloud-monitoring/ ). It can monitor software installed on the server itself, such as the Apache service.