What happens when I reboot an EC2 instance?

When I reboot an EC2 instance, do I get the initial image again, or is the state of the hard disk before the reboot kept?
And what happens with billing, does the hour start again, or do I continue with the fraction of the hour I was in when I rebooted?

Rebooting an instance is like rebooting a PC. The hard disk isn't affected: you don't go back to the image's original state; the disks keep the contents they had just before the reboot.
Rebooting isn't associated with billing. Billing starts when you launch an instance from an image and stops when you terminate it; rebooting in between has no effect.

Rebooting keeps the disks intact.
If you shut down the instance and power up a new one, the disks will be reset to their initial states.
This doesn't apply to the EBS disks, which persist even across shutdowns.

As per AWS Documentation:
An instance reboot is equivalent to an operating system reboot. In most cases, it takes only a few minutes to reboot your instance. When you reboot an instance, it remains on the same physical host, so your instance keeps its public DNS name (IPv4), private IPv4 address, IPv6 address (if applicable), and any data on its instance store volumes. Rebooting an instance doesn't start a new instance billing hour, unlike stopping and restarting your instance.
Further, they recommend:
We recommend that you use Amazon EC2 to reboot your instance instead of running the operating system reboot command from your instance. If you use Amazon EC2 to reboot your instance, we perform a hard reboot if the instance does not cleanly shut down within four minutes.
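For reference, here is a minimal sketch of triggering the reboot through EC2 rather than the OS. It uses the modern AWS CLI (the quoted documentation predates it), and the instance ID is a placeholder:

    # Reboot through the EC2 API instead of running `reboot` on the box.
    # Per the documentation quoted above, EC2 falls back to a hard reboot
    # if the OS doesn't shut down cleanly within a few minutes.
    aws ec2 reboot-instances --instance-ids i-0123456789abcdef0

    # Optionally confirm the instance is back in the "running" state.
    aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
        --query 'Reservations[0].Instances[0].State.Name'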

When you reboot an instance, it stays on the same hypervisor and the VM restarts just like a normal Linux reboot.
If you created a VM with ephemeral (instance store) volumes, you will not lose that ephemeral storage when you reboot the instance.
As mentioned above, rebooting will not affect billing.

Go to the instance and reboot it. I just did, and all my state and data were intact. Wait a few minutes for everything to come back to normal.

Related

EC2 high storage instances: can they be used for CouchDB or another DB?

As far as I know, instance storage on EC2 is ephemeral, i.e. it is lost when the instance is restarted. Is this different for high storage instances? If not, it would seem that you would lose any database data stored on the instance upon restart.
EBS-backed volumes should be used for storage; you can choose either standard or provisioned IOPS volumes. This avoids data loss from the instance going down.
When you attach the EBS volume to the instance, make sure an entry is made in the fstab file, so that when you reboot the instance the volume is mounted again. When you stop an instance the data is still present but the hardware is released, whereas a reboot is just like your system restarting.
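As a rough illustration of the fstab point above, a sketch assuming an ext4 filesystem and the /dev/xvdf device name (newer instance types may expose the volume under a different or NVMe-style name):

    # One-time setup: format the attached EBS volume and mount it.
    sudo mkfs -t ext4 /dev/xvdf
    sudo mkdir -p /data
    sudo mount /dev/xvdf /data

    # fstab entry so the volume is mounted again after every reboot;
    # "nofail" keeps the instance bootable if the volume is ever missing.
    echo '/dev/xvdf /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab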
All ephemeral storage is lost if the instance is stopped. You can still use a database with the high IO storage instances, but you will need to replicate the data between several instances to attain persistence.

How should I approach my EC2 contingency plan?

I've browsed through other questions on SO concerning backing up EC2, and it has provided me with a good basis, but I'm still a bit confused as to how I should approach my solution and develop a contingency plan. Most questions are fairly specific, but I have a pretty vanilla setup, and I think this information will be beneficial to future users. Let me provide my basic setup:
Basic small instance
Pushing files to S3
Running MongoDB
Running nginx
Now, due to the ephemeral nature of EC2, it's apparent that I need to bind my EC2 instance to EBS to ensure persistent storage. The reason I'm attempting to develop a contingency plan is that I'm worried my instance may disappear at any time (due to outages, etc.). If my instance were to disappear, I'm concerned that I would have to spin up a new instance and reinstall all my applications before getting everything up and running again. A few questions:
How do I backup my instance to ensure that if it were to disappear, I could quickly bring it back up (preferably without having to reinstall all previous software)? I don't need a series of backups, just the previous days (or weeks) backup to ensure that a previous working version exists that can be started quickly.
If I use EBS instead of instance storage, it essentially acts in place of my hard drive, right? So if I have MongoDB installed, I'm assuming the database it writes to can be placed on EBS?
If I go with the small instance with 160 GB of storage and I use EBS, would I need to allocate 160 GB of EBS out of the gate, or is the 160 GB for instance storage only?
So, in summary, I would like a solution (either manual or automatic) that can create a snapshot of my EC2 instance to ensure that if it were to disappear, it could be reconstructed without having to spend the time to manually reconstruct everything.
In an ideal world, if my instance were to disappear, I could spin up a version of my instance with everything intact (up to the point where things were backed up). Any resources or suggestions? Thanks in advance.
OK, here we go:
For backups:
Create your instance from one of the stock AWS images. Make sure it is an EBS-backed VM - depending on the size of the VM you pick, you'll get a volume assigned with 'n' GB of space, attached as the boot volume (/dev/sda1).
Configure the VM with whatever software you need, apply patches, tune for disk fragmentation, CPU consumption (task priorities, etc), and any other configuration you need that makes the VM tailored to your requirements.
Stop the VM and take a snapshot of the EBS volume, then restart it (re-assigning the Elastic IP if there is one). This is your backup snapshot - repeat at whatever frequency you like. Remember to stop the VM when you snapshot it, to prevent the OS writing to the volume whilst you're taking a copy of it.
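A hedged sketch of that backup step using the current AWS CLI (instance and volume IDs are placeholders; this answer originally assumed the older command-line tools):

    # Stop the instance so the snapshot is consistent.
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

    # Snapshot the boot volume.
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
        --description "nightly backup"

    # Start the instance again and re-associate the Elastic IP if you use one.
    aws ec2 start-instances --instance-ids i-0123456789abcdef0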
For recovery:
Your VM will fail, eventually. You'll break something and render it damaged or inoperative, or the hardware it's running on will suffer a fault. It will happen.
When it does, terminate it (if it didn't self-terminate) and spin up a new VM of the same type from the AWS stock list. Wait until it shows as 'Running', and then stop it.
Detach its EBS volume and delete it.
Create a new EBS volume from whichever backup snapshot you last created, and attach that new volume to the VM as /dev/sda1.
Start the VM and assign your EIP if appropriate.
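A sketch of those recovery steps with the AWS CLI. All IDs, the AMI, the instance type, and the availability zone are placeholders, and the restored volume must be created in the same availability zone as the new instance:

    # 1. Launch a replacement instance of the same type, then stop it.
    aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.small
    aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaaa
    aws ec2 wait instance-stopped --instance-ids i-0aaaaaaaaaaaaaaaa

    # 2. Detach and delete its fresh boot volume.
    aws ec2 detach-volume --volume-id vol-0bbbbbbbbbbbbbbbb
    aws ec2 delete-volume --volume-id vol-0bbbbbbbbbbbbbbbb

    # 3. Restore the backup snapshot as a new volume and attach it as the boot device.
    aws ec2 create-volume --snapshot-id snap-0ccccccccccccccc --availability-zone us-east-1a
    aws ec2 attach-volume --volume-id vol-0ddddddddddddddd \
        --instance-id i-0aaaaaaaaaaaaaaaa --device /dev/sda1

    # 4. Boot from the restored volume and assign the Elastic IP if appropriate.
    aws ec2 start-instances --instance-ids i-0aaaaaaaaaaaaaaaa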
About EBS storage:
It's a chunk of storage space. If you format it to look like a standard disk, you can use it exactly as you would a physical disk. Install stuff on it, point software at it for use as storage space, whatever.
You have two options (though neither is exactly what you want):
1- Attach an 'external' EBS volume to your EC2 instance and take snapshots of it manually (or automatically through cron jobs). But this isn't quite what you want. Why? If your EC2 instance disappears, you still have to recreate your whole environment and re-attach the EBS volume. So it's a good way of backing up huge amounts of data on your EC2 instance, but your environment itself is lost.
2- The better, though still imperfect, way: after you finish configuring your EC2 instance, make a private AMI from it, so you can launch identical instances from that AMI at any time and everything is cloned. The downside is that every time you change the instance's configuration you need to make a new AMI, and every time you make a new AMI the instance is rebooted to guarantee data integrity in the new private AMI.
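A hedged CLI sketch of both options (all IDs and names are placeholders; note that create-image does reboot the instance by default, as described above, unless you pass --no-reboot):

    # Option 1: snapshot the attached data volume on a schedule, e.g. nightly from cron.
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
        --description "nightly data backup"

    # Option 2: bake the fully configured instance into a private AMI.
    # By default the instance is rebooted so the image is consistent.
    aws ec2 create-image --instance-id i-0123456789abcdef0 \
        --name "myapp-baseline-$(date +%Y%m%d)"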
I also recommend taking a closer look at reserved EC2 instances, which have better stability than normal ones, but you can still have hardware disasters with them, just as with normal instances.

EC2 dashboard shows a running instance, even when the instance is not running

The EC2 dashboard shows a running instance, even though the instance is not running. I also see an EBS volume in an 'in-use' status. I am confused: is the machine running or not?
I have seen that happen when shutting down a Linux instance from within the machine (with shutdown now from the command line).
If the console says that the instance is running even though you shut it down you should probably shut it down from the console (to avoid being billed).
Sometimes there are problems with the hardware on the server. The instance is showing as running but you cannot connect and you cannot use any services on that instance. The best thing to do in this situation is post a message on EC2's forums and ask them to look at your instance.
They're usually pretty quick to respond, though they don't make any guarantees. They can force the machine into a stopped state; whether or not they can fix the issue without you losing your data will depend on what is actually wrong with the instance.
This happens from time to time with my instances as well.

Is the Azure role host actually restarted when a role crashes or is restarted via management API?

Suppose my Azure role somehow exhausts system-wide resources. For example it spawns many processes and all those processes hang and consume all virtual memory in the system. Or it creates a gazillion of Windows API event objects and fails to release them and no more such object can be created. I mean anything except trashing the filesystem.
Now the changes I describe are cancelled out once normal Windows machine restarts - processes are terminated, virtual memory is "recycled", events and other similar objects are "recycled" and so on.
Yet there's a concern. What if the host is not actually restarted, but goes through some other process when I hit "reboot" or "stop", then "start"?
Is the host actually rebooted when I restart a role or reboot an instance?
When you reboot the instance, the VM is actually rebooted. When you stop and start, the VM is not rebooted, but the process is restarted.

Single instance Amazon EC2

We're running a lightweight web app on a single EC2 server instance, which is fine for our needs, but we're wondering about monitoring and restarting it if it goes down.
We have a separate non-Amazon server we'd like to use to monitor the EC2 and start a fresh instance if necessary and shut down the old one. All our user data is on Elastic Storage, so we're not too worried about losing anything.
I was wondering if anyone has any experience of using EC2 in this way, and in particular of automating the process of starting the new instance? We have no problem creating something from scratch, but it seems like it should be a solved problem, so I was wondering if anyone has any tips, links, scripts, tutorials, etc to share.
Thanks.
You should have a look at Puppet and its support for AWS. I would also look at the RightScale AWS library, as well as this post about starting a server with the RightScale scripts. You may also find this article on web serving with EC2 useful. I have done something similar to this, but without the external monitoring: the node monitored itself and shut down when it was no longer needed, then a new one would start up later when there was more work to do.
Couple of points:
You MUST MUST MUST back up your Amazon EBS volume.
They claim "better" reliability, but not 100%, and it's SEVERAL orders of magnitude off of S3's "12 9's" of durability. S3 durability >> EBS durability. That's a fact. EBS supports a "snapshots" feature which backs up your storage efficiently and incrementally to S3. Also, with EBS snapshots, you only pay for the compressed deltas, which is typically far far less than the allocated volume size. In another life, I've sent lost-volume emails to smaller customers like you who "thought" that EBS was "durable" and trusted it with the only copy of a mission-critical database... it's heartbreaking.
Your Q: automating start-up of a new instance
The design path you mention is relatively untraveled; here's why... Lots of companies run redundant "hot-spare" instances where the second instance is booted and running. This allows rapid failover (seconds) in the event of "failure" (could be hardware or software). The issue with a "cold-spare" is that it's harder to keep the machine up to date and ready to pick up where the old box left off. More important, it's tricky to VALIDATE that the spare is capable of successfully recovering your production service. Hardware is more reliable than untested software systems. TEST TEST TEST. If you haven't tested your fail-over, it doesn't work.
The simple automation of starting a new EBS-backed instance is easy, bordering on trivial. It's just a one-line bash script calling the EC2 command-line tools. What's tricky is everything on top of that. Such a solution pretty much implies a fully 100% automated deployment process, and this is all specific to your application. Can your app pull down all the data it needs to run (maybe it's stored in S3)? Can you kill your instance today and boot a new instance with 0.000 manual setup/install steps?
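For illustration, a minimal sketch of that one-liner using the modern AWS CLI rather than the old ec2-* tools; the AMI, key pair, security group, and instance type are placeholders:

    # Launch a replacement instance from a pre-built AMI.
    aws ec2 run-instances \
        --image-id ami-0123456789abcdef0 \
        --instance-type t2.small \
        --key-name my-key \
        --security-group-ids sg-0123456789abcdef0 \
        --count 1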
Or, you may be talking about a scenario I'll call "re-instancing an EBS volume":
EC2 box dies (root volume is EBS)
Force detach EBS volume
Boot new EC2 instance with the EBS volume
... That mostly works (a CLI sketch of this path follows after the gotchas). The gotchas:
Doesn't protect against EBS failures, either total volume loss or an availability loss
Recovery time is O(minutes) assuming everything works just right
Your services need to be configured to restart automatically. It does no good to bring the box back if Nginx isn't running.
Your DNS routes or other services or whatever need to be OK with the IP address changing. This can be worked around with an Elastic IP.
How are your host SSH keys handled? Same name, new host key can break SSH-based automation when it gets the strong-warning for host-key-changed.
I don't have proof of this (other than seeing it happen once), but I believe that EC2/EBS _already does this_ automatically for boot-from-EBS instances.
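A sketch of that force-detach / re-attach path with the AWS CLI (IDs are placeholders; the replacement instance must be stopped and in the same availability zone as the volume):

    # Pull the root volume off the dead instance.
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0 --force

    # Attach it as the root device of a freshly launched (and stopped) instance,
    # then boot from it.
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0aaaaaaaaaaaaaaaa --device /dev/sda1
    aws ec2 start-instances --instance-ids i-0aaaaaaaaaaaaaaaa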
Again, the hard part here is on your plate. Can you stop your production service today and bring it up RELIABLY on a new instance? If so, the EC2 part of the story is really really easy.
As a side point:
All our user data is on Elastic Storage, so we're not too worried about losing anything.
I'd strongly suggest to regularly snapshot your EBS (Elastic Block Storage) to S3 if you are not doing that already.
You can use an Auto Scaling group with a min/max/desired quantity of 1. Place the instance behind an ELB and have the Auto Scaling group be triggered by the ELB healthy host count. This gives you built-in monitoring via CloudWatch and the ELB health check. Any time there is an issue, the instance will be replaced by the Auto Scaling service.
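A hedged sketch of that "Auto Scaling group of one" setup with the AWS CLI; the names, AMI, instance type, and availability zone are placeholders, and a launch configuration plus a classic ELB is just one way to wire it up:

    # Launch configuration describing how to build the replacement instance.
    aws autoscaling create-launch-configuration \
        --launch-configuration-name web-lc \
        --image-id ami-0123456789abcdef0 \
        --instance-type t2.small

    # Group of exactly one instance, replaced automatically when the
    # ELB health check marks it unhealthy.
    aws autoscaling create-auto-scaling-group \
        --auto-scaling-group-name web-asg \
        --launch-configuration-name web-lc \
        --min-size 1 --max-size 1 --desired-capacity 1 \
        --availability-zones us-east-1a \
        --load-balancer-names web-elb \
        --health-check-type ELB \
        --health-check-grace-period 300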
If you have not checked 'Protect against accidental termination' you might want to do so.
Even if you have disabled 'Detailed Monitoring' for your instance, you should still see the 'StatusCheckFailed' metric for your instance, over which you can configure an alarm (in the CloudWatch dashboard).
Your application (hosted in a different server) should receive the alarm and start the instance using the AWS API (or CLI)
Since you have protected against accidental termination, you would never need to spawn a new instance.
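A hedged sketch of those steps with the AWS CLI; the instance ID and SNS topic ARN are placeholders, and the external server could just as well call the same API through an SDK:

    # 1. Protect against accidental termination.
    aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
        --disable-api-termination

    # 2. Alarm on StatusCheckFailed (available even without detailed monitoring);
    #    the SNS topic notifies your monitoring application.
    aws cloudwatch put-metric-alarm \
        --alarm-name web-status-check-failed \
        --namespace AWS/EC2 \
        --metric-name StatusCheckFailed \
        --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
        --statistic Maximum \
        --period 300 \
        --evaluation-periods 2 \
        --threshold 1 \
        --comparison-operator GreaterThanOrEqualToThreshold \
        --alarm-actions arn:aws:sns:us-east-1:123456789012:alerts

    # 3. From the other server, bring the instance back via the API
    #    (start it if it stopped, or reboot it if it is hung while "running").
    aws ec2 start-instances --instance-ids i-0123456789abcdef0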
