Stopping an EC2 instance causes a new one to be generated

I have set up an environment with Amazon's Elastic Beanstalk, which has generated an EC2 instance. The storage is too small and I wish to increase it, so my plan was to stop the instance (termination protection is on), snapshot the volume, create a new, bigger volume from the snapshot, and re-attach it to the instance.
The issue I'm having is that when I stop the instance, which happens successfully, another instance gets automatically generated! How do I stop this behaviour?

This is because the default Auto Scaling minimum group size is 1. You can set the Auto Scaling group's minimum and maximum size to 0 using ebextensions.
Create a folder called .ebextensions in your app source. In this folder, create a file named 01-asg.config and add the following to it. Note that the file is in YAML format, so indentation is important.
Resources:
  AWSEBAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: 0
      MaxSize: 0
Zip the app source and deploy this new version to your environment. The instance should go away.
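For reference, a minimal sketch of that deployment using the AWS CLI (the bucket, application, environment, and version names are placeholders; the EB CLI's eb deploy does the same in one step):
# Package the app source (run from the project root)
zip -r app-v2.zip .
# Upload the bundle and register it as a new application version
aws s3 cp app-v2.zip s3://my-bucket/app-v2.zip
aws elasticbeanstalk create-application-version --application-name my-app \
    --version-label v2 --source-bundle S3Bucket=my-bucket,S3Key=app-v2.zip
# Deploy the new version to the environment
aws elasticbeanstalk update-environment --environment-name my-env --version-label v2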

Don't manage Elastic Beanstalk instances from the EC2 console; use the Elastic Beanstalk console instead.

Related

How to update a Laravel project in AWS Elastic Beanstalk while keeping the same storage

I want to update my Laravel project in AWS Elastic Beanstalk, but the problem is that the storage in Elastic Beanstalk is now different, and I want to keep it. I don't know how, because my project contains a storage folder, but it's empty, and if I update it, I'll lose all the files.
How can I update the code but keep the storage?
Your application should be designed to be stateless. The reason is that your EB instances always run in an Auto Scaling group.
This means that they can be terminated and replaced at any time, without your knowledge or involvement. There are many scenarios in which that may happen: Availability Zone rebalancing, migration to new physical hardware, scaling in and out, or instance health degradation.
Consequently, you are always at risk of losing your storage, whether you like it or not.
Therefore your application should be designed to be stateless, meaning that it does not store any data on the instance. This is usually achieved by storing the data in external storage such as EFS:
How can I mount an Amazon EFS volume to an instance in my Elastic Beanstalk environment?
But if you still want to keep your design, you can always use .ebextensions scripts to help you preserve the storage folder. Specifically, in commands you would make a copy of your storage folder to a safe location at the start of the new deployment. Then in container_commands you would copy the files back into your new application folder, just before the new deployment completes, as sketched below.
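A minimal .ebextensions sketch of that idea (the file name and paths are illustrative, not from the question; commands run against the currently deployed version, while container_commands run from the new version's staging directory just before it goes live):
# .ebextensions/01-preserve-storage.config
commands:
  01_backup_storage:
    # Copy the live storage folder aside; ignore failure on the very first deploy
    command: "cp -a /var/app/current/storage /tmp/storage-backup || true"
container_commands:
  01_restore_storage:
    # Restore the saved files into the new application folder
    command: "rm -rf storage && cp -a /tmp/storage-backup storage"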

Attach EBS volume to this AWS ec2 instance?

I'm running a website on an m3.2xlarge AWS EC2 instance which is EBS-backed.
I want to change the instance type to an m2.2xlarge instance.
This new instance type comes with 850 GB of HDD instance storage.
My question is: can I attach the existing EBS volume to the new m2.2xlarge instance and boot into the website?
There are two possibilities.
1
Stop your running instance and detach the volume (or take a snapshot of it).
Now, while launching the new instance, you can add this volume as an additional volume alongside the root volume of the new instance. Just keep in mind that you cannot delete or exclude the root volume of the new instance while launching; you can only add an extra volume alongside it.
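A minimal CLI sketch of this option (the instance and volume IDs and the device name are placeholders):
aws ec2 stop-instances --instance-ids i-0OLDINSTANCE
aws ec2 wait instance-stopped --instance-ids i-0OLDINSTANCE
aws ec2 detach-volume --volume-id vol-0EXAMPLE
# launch the new instance, then attach the old volume as an extra device
aws ec2 attach-volume --volume-id vol-0EXAMPLE --instance-id i-0NEWINSTANCE --device /dev/sdf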
2
Stop your running instance.
Go to Actions, select Instance Settings, and then click Change Instance Type.
Select your desired instance type.
After changing the instance type, you can restart the instance.
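A minimal CLI sketch of this option (the instance ID is a placeholder):
aws ec2 stop-instances --instance-ids i-0EXAMPLE
aws ec2 wait instance-stopped --instance-ids i-0EXAMPLE
# the instance type can only be changed while the instance is stopped
aws ec2 modify-instance-attribute --instance-id i-0EXAMPLE --instance-type "{\"Value\": \"m2.2xlarge\"}"
aws ec2 start-instances --instance-ids i-0EXAMPLE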
Your existing EBS volume will remain attached after resizing. However, you will not get the instance store that comes with the m2.2xlarge; it does not get added to a resized instance. This is based on the documentation here:
When you resize an instance, the resized instance usually has the same number of instance store volumes that you specified when you launched the original instance. If you want to add instance store volumes, you must migrate your application to a new instance with the instance type and instance store volumes that you need.

How to mount PVC on pod in OpenShift Online 3

I just ported my application to OpenShift Online 3 (from version 2), and now I'm struggling to understand how to manage persistent, "shared" data, that is not wiped after each build.
After reading the documentation about Persistent Volume Claims, I created a new PVC inside my project, of type RWO, using the Web dashboard. At this point I tried to understand how to access this storage from inside each pod, or if I needed to do something to mount it, and I ended up doing this:
$ oc volume dc/myapp --add --type=persistentVolumeClaim --claim-name=pvcname --mount-path=/usr/share/data
After this, it looks like the new configuration was successfully registered:
$ oc volume dc --all
deploymentconfigs/myapp
pvc/pvcname (allocated 1GiB) as volume-jh1jf
mounted at /usr/share/data
I could also see the new /usr/share/data directory from inside the pods created by the new builds.
However, after making this change, all deployments started failing with this error:
Failed to attach volume "pvc-0b747c80-a687-11e7-9eb0-122631632f42" on node "ip-172-31-48-134.ec2.internal" with: Error attaching EBS volume "vol-0008c8127ff0f4617" to instance "i-00195cc4e1d31f8ce": VolumeInUse: vol-0008c8127ff0f4617 is already attached to an instance status code: 400, request id: 722f3797-f486-4739-ab4e-fe1826ae53af. The volume is currently attached to instance "i-089e2a60e525f447c"
from which it looks like my latest change had the effect of attaching the volume to a specific instance. But then how can I mount the volume to my pods so that it survives each build and deploy?
Because you are using an EBS volume type, you must set the deployment strategy on the deployment config to Recreate instead of Rolling. This is because an EBS volume can only be mounted on a single node in the cluster at a time. This means you cannot use a rolling deployment, nor scale your application above 1 replica, as both result in more than one instance and there is no guarantee they will be deployed to the same node.
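A one-line sketch of that change with the CLI (using the deployment config name from the question):
oc patch dc/myapp -p '{"spec":{"strategy":{"type":"Recreate"}}}'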

Rebooting my EC2 instance empties my www/html folder

I have created an environment with Elastic Beanstalk with a EC2 instance with PHP installed: my files are in /var/www/html.
First I enabled Auto Scaling / the load balancer, but when the auto scaling triggered, it created another instance and terminated the old one. And then I realized the new one was not a clone of the old one: I lost all my config and my files, even though I had attached an SSD root volume in my EB config.
I tried again and I created an AMI image which I included in my EB config (in Custom AMI ID). This time my config stays but my folder /var/www/html is emptied and replaced by default index.html files.
1. Is this supposed to happen? I thought auto scaling created a clone of the instance.
So I decided to disable auto scaling / the load balancer and to work in single-instance mode. But then even when I reboot my EC2 instance, the config is preserved but my whole folder /var/www/html is emptied again and only the default files are inside.
2. Why? There is an EBS volume attached to my instance (EB did that automatically), so this should not happen, if I understand correctly how it works.
Maybe it is the same issue for both but I really don't get why my files are deleted.
Thanks a lot for your help !
Romain
Autoscaling uses an AMI to launch new instances, and AMIs are no more than snapshots of EC2 instances at a certain point in time. Because of this, every time Autoscaling launches a new instance, any differences between the AMI and the current desired state must be applied at boot time, before the instance receives new traffic.
ElasticBeanstalk provides tools to manage application deployments integrated into the Autoscaling dynamic, and also to manage instance configurations. Sometimes these configurations become too complex to achieve during bootstrap using the EB tools, and that is when custom AMIs come in handy.
If you SSH into an autoscaling instance and start manually performing actions outside the scope of the ElasticBeanstalk toolstack, all of those changes will be lost in the next Autoscaling event unless you save an updated AMI from your instance and apply it to your Autoscaling Group.
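For example, a hedged sketch of baking the instance's current state into a new AMI with the AWS CLI (the instance ID and image name are placeholders):
# Create an AMI from the running instance without rebooting it
aws ec2 create-image --instance-id i-0EXAMPLE --name my-app-baseline --no-reboot
# Then set the resulting AMI ID as the Custom AMI ID in the EB configuration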

How can I connect my autoscaling group to my ecs cluster?

In all tutorials for ECS you need to create a cluster and after that an autoscaling group that will spawn instances. Somehow in all these tutorials the instances magically show up in the cluster, but no one gives a hint as to what connects the autoscaling group and the cluster.
My autoscaling group spawns instances as expected, but they just don't show up in my ECS cluster, which holds my Docker definitions.
Where is the connection I'm missing?
I was struggling with this for a while. The key to getting the instances in the autoscaling group associated with your ECS cluster is in the user data. When you are creating your launch configuration and get to step 3, "Configure Details", open the Advanced tab and enter a simple bash script like the following as your user data.
#!/usr/bin/env bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
All the available parameters for agent configuration can be found here http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html
An autoscaling group is not strictly associated to a cluster. However, an autoscaling group can be configured such that each instance launched registers itself into a particular cluster.
Registering an instance into a cluster is the responsibility of the ECS Agent running on the instance. If you're using the Amazon ECS-optimized AMI, the ECS Agent will launch when the instance boots and register itself into the configured cluster. However, you can also use the ECS Agent on other Linux AMIs by following the installation instructions.
Well, I found out.
It's all about the ecs-agent and its config file, /etc/ecs/ecs.config.
(This file will be created through the user data field when creating EC2 instances, even from an autoscaling configuration.)
Read about its configuration options here: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html
But you can even copy an ecs.config stored on Amazon S3; do it like this (the following lines go into the user data field):
#!/bin/bash
yum install -y aws-cli
aws configure set default.s3.signature_version s3v4
aws configure set default.s3.addressing_style path
aws configure set default.region eu-central-1
aws s3 cp s3://<bucketname>/ecs.config /etc/ecs/ecs.config
Note: signature version v4 is specific to some regions, like eu-central-1.
Of course, this only works if the IAM role for the instance (in my case ecsInstanceRole) has the right AmazonS3ReadOnlyAccess permission.
The AWS GUI console way for that would be:
Use the cluster wizard at https://console.aws.amazon.com/ecs/home#/firstRun .
It will create an autoscaling group for your cluster and a load balancer in front of it, and connect it all nicely.
This question is old but the answer is not complete. There are 2 parts to getting your own auto-scaling group to show up in your cluster (as of Jan 2022).
You need to ensure your cluster name is set for ECS_CLUSTER variable in /etc/ecs/ecs.config as mentioned in this answer: https://stackoverflow.com/a/35324937/583875
You need to create a new capacity provider for the cluster and attach this auto scaling group. To do this, go to Cluster -> Capacity Provider -> Create, and select your auto scaling group under Auto Scaling group. (A CLI sketch of this step follows at the end of this answer.)
Another tricky part is getting your service to use the instances (if you have a service running). You need to edit the Service, and change the Capacity provider strategy. Click on Add another provider and choose the new capacity provider you created in (2) above.
That's all! To ensure things are working properly: you should see your capacity provider under Graph -> Capacity Providers and you should see instances from your auto scaling group under Graph -> ECS Instances.
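For completeness, a hedged CLI sketch of step 2 (the cluster and provider names are placeholders, and the Auto Scaling group ARN must be your own):
# Create a capacity provider backed by the auto scaling group
aws ecs create-capacity-provider --name my-asg-provider \
    --auto-scaling-group-provider "autoScalingGroupArn=<your-asg-arn>,managedScaling={status=ENABLED}"
# Attach it to the cluster and make it the default strategy
aws ecs put-cluster-capacity-providers --cluster my-cluster \
    --capacity-providers my-asg-provider \
    --default-capacity-provider-strategy capacityProvider=my-asg-provider,weight=1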
