Rebooting my EC2 instance empties my www/html folder - amazon-ec2

I have created an Elastic Beanstalk environment with an EC2 instance running PHP: my files are in /var/www/html.
First I enabled Auto Scaling with a load balancer, but when auto scaling triggered, it created another instance and terminated the old one. Then I realized the new one was not a clone of the old one: I lost all my config and my files, even though I had attached an SSD root volume in my EB config.
I tried again and created an AMI, which I included in my EB config (as the Custom AMI ID). This time my config stays, but my /var/www/html folder is emptied and replaced by the default index.html files.
1. Is this supposed to happen? I thought auto scaling created a clone of the instance?
So I decided to disable auto scaling and the load balancer and work in single-instance mode. But even when I simply reboot my EC2 instance, the config is preserved yet my whole /var/www/html folder is emptied again and only the default files remain.
2. Why? There is an EBS volume attached to my instance (EB did that automatically), so this should not happen if I understand correctly how it works.
Maybe it is the same issue in both cases, but I really don't get why my files are deleted.
Thanks a lot for your help!
Romain

Auto Scaling uses an AMI to launch new instances, and an AMI is no more than a snapshot of an EC2 instance at a certain point in time. Because of this, every time Auto Scaling launches a new instance, any differences between the AMI and the current desired state must be applied at boot time, before the instance receives new traffic.
Elastic Beanstalk provides tools to manage application deployments that are integrated with this Auto Scaling behaviour, and to manage instance configuration as well. Sometimes that configuration becomes too complex to achieve during bootstrap with the EB tools, and that is when custom AMIs come in handy.
If you SSH into an auto-scaled instance and start manually performing actions outside the scope of the Elastic Beanstalk toolstack, all of those changes will be lost at the next Auto Scaling event unless you save an updated AMI from your instance and apply it to your Auto Scaling group.
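For example, instead of editing /var/www/html over SSH, here is a minimal sketch of pushing changes through the EB toolstack, assuming the EB CLI is installed and the application directory is a git repository (application, environment and region names are placeholders):

# one-time setup of the EB CLI in the application directory
eb init my-php-app --platform php --region eu-west-1
# bundle the committed code and deploy it as a new application version
eb deploy my-php-env

Every instance that Auto Scaling launches afterwards receives that application version during bootstrap, so /var/www/html is rebuilt consistently on each new instance.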

Related

Laravel AWS EBS Auto Scaling

I'm looking for some advice. This may seem like a silly question, but I am having some issues understanding how AWS EBS auto scaling works and what its best practices are.
I have a Laravel application that is deployed to AWS EBS through Bitbucket Pipelines. This all works and deploys successfully.
My issue is that when auto scaling triggers, it brings up a new EC2 instance and then load-balances the traffic. The problem is that the new EC2 instance in the fleet is built from a blank Amazon Linux 2 AMI, so it just shows the nginx welcome page.
I think the issue is that it's using a blank AMI and not getting my application. I am guessing I could create an image from the EC2 instance running my application and then scale with that, but I would have to do that every time I do a deployment.
Can you configure the auto scaling group to replicate the running EC2 instance?
Any help or advice as to the best way to accomplish autoscaling with my application would be great.
It depends on the AMI selected in the Launch Configuration.
You need to create an AMI of your live EC2 instance after you have installed all the required software, databases and configuration, and verified (tested) that it works properly.
Then add this AMI to the Auto Scaling Launch Configuration.
You don't need to create an AMI for each deployment.
However, whenever you make changes on the EC2 server or update your app source code, you do need to create a new AMI and specify that AMI in the Auto Scaling launch configuration, as sketched below.
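As a rough illustration of that flow with the AWS CLI (all IDs and names are placeholders, and the flags shown are the commonly used ones rather than a definitive recipe):

# create an AMI from the configured live instance
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "app-v2" --no-reboot
# point a new launch configuration at that AMI and switch the Auto Scaling group over to it
aws autoscaling create-launch-configuration --launch-configuration-name app-v2-lc \
  --image-id ami-0123456789abcdef0 --instance-type t3.small --key-name my-key \
  --security-groups sg-0123456789abcdef0
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg \
  --launch-configuration-name app-v2-lc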
Best practice is to configure the Auto Scaling group with a user data script. When a new instance boots from the AMI during auto scaling, it reads the user data (cloud-init/upstart). The user data script can pull the code from git or whatever source control you use and run the necessary pre-deployment commands.
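A minimal sketch of such a user data script, assuming a Laravel app on Amazon Linux 2 with Composer, PHP-FPM and nginx already baked into the AMI; the repository URL, branch, paths and service names are placeholders to adjust for your stack:

#!/bin/bash
# user data runs as root on first boot (cloud-init)
yum install -y git
git clone --branch main https://github.com/example/my-laravel-app.git /var/www/app
cd /var/www/app
composer install --no-dev --no-interaction   # assumes Composer is already on the AMI
php artisan config:cache                     # typical Laravel pre-deployment steps
php artisan migrate --force
chown -R nginx:nginx /var/www/app
systemctl restart php-fpm nginx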

AWS - EC2 instance stopped and started removed the application code

I usually just restart the node instance, but since the client was complaining about a speed issue, I read something online and stopped and started the instance instead.
That changed the IP of my instance, and the code is all gone.
Now when I SSH in, I can see /srv/www, but I am not able to get into the www folder. I changed the permissions and owner, but www still behaves strangely.
It's a RoR application, deployed through AWS OpsWorks.
@Undo has the right idea. I haven't checked this in the latest version of OpsWorks, but by default OpsWorks-managed instances use instance store, meaning the storage is destroyed when the instance is stopped.
In OpsWorks, in the settings for a Layer, you can specify EBS volumes. EBS means Elastic Block Store, which persists even if the instance is stopped and a new instance is started.
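If you prefer scripting it, here is a hedged sketch of defining such a volume on a layer with the AWS CLI; the stack ID, names and mount point are placeholders:

aws opsworks create-layer --stack-id 12345678-1234-1234-1234-123456789012 \
  --type custom --name "Rails App Server" --shortname rails-app \
  --volume-configurations MountPoint=/srv/www,NumberOfDisks=1,Size=100
# instances launched into this layer get a 100 GB EBS volume mounted at /srv/www that survives stop/start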

Change instance type of a cluster-registered EC2 instance

I have an Amazon EC2 instance which is registered to an Amazon ECS cluster.
I want to change this instance's type from c4.large to c4.8xlarge.
I'm able to change its type from c4.large to c4.8xlarge in the AWS console. But after the change, I found
[ERROR] Could not register module="api client" err="ClientException: Container instance type changes are not supported. Container instance XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX was previously registered as c4.large.
being printed in /var/log/ecs/ecs-agent.log.20XX-XX-XX-XX file.
Is it possible to change ec2 instance type and re-register it to a cluster?
I think deregistering it first and then registering it again might work, but I'm afraid this may cause something irreversible in my AWS environment, so I haven't tried this method yet.
To solve this registration problem between the agent and the cluster, just delete the file /var/lib/ecs/data/ecs_agent_data.json and restart Docker and the ECS agent.
After that, a new container instance will be created in your cluster with the new size.
sudo rm /var/lib/ecs/data/ecs_agent_data.json
sudo service docker restart
sudo start ecs
Then you can go to the ECS cluster console and deregister the old container instance.
UPDATE:
According to @florins and @MBear in the comments, AWS has since changed the data file on ECS instances, so the commands are now:
sudo rm /var/lib/ecs/data/agent.db
sudo service docker restart
sudo start ecs
As of March 2021 (AMI ami-0db98e57137013b2d), the file /var/lib/ecs/data/ecs_agent_data.json mentioned in the answer above does not exist. For me, the commands to execute on the changed instance were:
sudo rm /var/lib/ecs/data/agent.db
sudo service docker restart
After that, it was possible to deploy containers to the instance without a fresh manual registration (AWS automatically registered a second ECS container instance of the new type). I did have a leftover container instance with the resources of the old instance type to remove.
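For that cleanup, here is a hedged AWS CLI sketch (the cluster name and container instance ARN are placeholders); draining first is optional but avoids stopping tasks abruptly:

# drain the stale container instance, then deregister it
aws ecs update-container-instances-state --cluster my-cluster \
  --container-instances arn:aws:ecs:eu-west-1:123456789012:container-instance/my-cluster/0123456789abcdef \
  --status DRAINING
aws ecs deregister-container-instance --cluster my-cluster \
  --container-instance arn:aws:ecs:eu-west-1:123456789012:container-instance/my-cluster/0123456789abcdef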
You can't do this. Per their docs:
The type of EC2 instance that you choose for your container instances determines the resources available in your cluster. Amazon EC2 provides different instance types, each with different CPU, memory, storage, and networking capacity that you can use to run your tasks. For more information, see Amazon EC2 Instances.
This means that when you launch a container on an instance, the agent gathers a bunch of metadata about the instance in order to run it. If you change the instance type, much of that metadata (CPU units, memory, etc.) no longer matches. The agent is aware of this and will report it as an error.
You should spin up a new instance of the new type, register it to the cluster, and let the task run on it. If it's a service, just terminate the old instance and let the service run the task on the new one.
I can't think of any real reason why terminating your old instance would cause something irreversible, unless it is misconfigured or fragile due to user-specific settings; by default this would not cause anything destructive.
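If you go the replacement-instance route, here is a minimal sketch of the user data that makes the new instance join the existing cluster, assuming the ECS-optimized AMI (the cluster name is a placeholder):

#!/bin/bash
# point the ECS agent on the new, larger instance at the existing cluster
echo "ECS_CLUSTER=my-cluster" >> /etc/ecs/ecs.config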
As an alternative approach, if the EC2 instance does not store anything valuable, a new instance can be started using the old instance as a template. This carries over all existing settings and can be done with just a few clicks in minutes.
Select the EC2 instance and then "Actions -> Images and templates -> Launch more like this". Just change the instance type.
When the instance is running, go to the "ECS instances" tab of the ECS cluster and activate the newly created instance.
Shut down the old instance.
Update your task definition (it can now request more CPU and memory) and update the service to use the new task revision.

How does Amazon EC2 Auto Scaling work?

I am trying to understand how Amazon implements the auto scaling feature. I can understand how it is triggered, but I don't know what exactly happens during the scaling. How does it expand? For instance:
Say I set the triggering condition as CPU > 90. Once the VM's CPU usage increases above 90:
Does it have a template image which will be copied to the new machine and started?
How long will it take to start servicing new requests?
Will the old VM have any downtime?
I understand that it has the capability to provide load balancing between the VMs, but I cannot find any links/papers that explain how Amazon auto scaling works. It would be great if you could provide me some information about this. Thank you.
Essentially, during setup you register an AMI and a set of EC2 start parameters, a launch configuration (instance size, user data, security group, region, availability zone, etc.). You also set up scaling policies.
Your scaling trigger fires.
Policies are examined to determine which launch configuration applies.
EC2 RunInstances is called with the registered AMI and the launch configuration parameters.
At this point, an instance is started that is a combination of the AMI and the launch configuration. It registers itself with an IP address in the AWS environment.
As part of the initial startup (done by ec2config or ec2run, going from memory here), the newly starting instance can connect to the instance metadata and run the script stored in "userdata". This script can bootstrap software installation, operating system configuration, settings, anything really that you can do with a script.
Once it's completed, you've got a newly created instance.
Now, if this process was kicked off by Auto Scaling with Elastic Load Balancing, then at the point the instance reports "Windows is ready" (check ec2config.log), the load balancer will add the instance to itself. Once it's responding to requests, it will be marked healthy and the ELB will start routing traffic to it.
The gold standard is to have a generic AMI and use your bootstrap script to install all the packages / MSIs / gems or whatever you need onto the server. But what often happens is that people build a golden image and register that AMI for scaling.
The downside to the latter method is that every release requires a new AMI to be created and the launch configurations to be updated.
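To make the question's CPU > 90 trigger concrete, here is a hedged AWS CLI sketch of a simple scale-out policy plus the CloudWatch alarm that fires it (group and policy names are placeholders; the policy ARN printed by the first command goes into the second):

aws autoscaling put-scaling-policy --auto-scaling-group-name my-asg \
  --policy-name scale-out --scaling-adjustment 1 --adjustment-type ChangeInCapacity
aws cloudwatch put-metric-alarm --alarm-name cpu-high --namespace AWS/EC2 \
  --metric-name CPUUtilization --statistic Average --period 300 \
  --evaluation-periods 1 --threshold 90 --comparison-operator GreaterThanThreshold \
  --dimensions Name=AutoScalingGroupName,Value=my-asg \
  --alarm-actions <policy-arn-from-previous-command>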
Hope that gives you a bit more info.
Maybe this can help you:
http://www.cardinalpath.com/autoscaling-your-website-with-amazon-web-services-part-2/
http://www.cardinalpath.com/autoscaling-your-website-with-amazon-web-services-part-1/
These posts helped me achieve this.
Have a read of this chap's blog; it helped me when I was doing some research on the subject.
http://www.codebelay.com/blog/2009/08/02/how-to-load-balance-and-auto-scale-with-amazons-ec2/

Amazon EC2: is mount information stored in the AMI or the snapshot?

I have an EC2 instance set up where the MySQL DB is mounted on a separate volume
(as detailed in http://aws.amazon.com/articles/1663).
I want to duplicate this instance setup so that the application servers on the duplicated instances share the DB volume attached to the already running EC2 instance (I can specify the MySQL IP through a configuration file).
Since almost everything except the MySQL IP is identical, I'd like to create an AMI from the first instance and modify it slightly to create the 2nd and 3rd instances.
The question is whether the mount information stored in the first instance will take effect when I launch the 2nd instance.
To elaborate on the question:
1. I read that a volume cannot be attached to more than one EC2 instance at the same time.
2. The running instance attaches/mounts a volume to itself on startup (so it seems).
3. If I were to create an AMI from the first instance and use it to launch other instances, how would the auto attach/mount information (which, I assume, is stored in the AMI) affect the other instances?
Eugene,
Mounting the same volume on several servers is not possible, so you had better forget about that option.
The best solution is to:
1. Create a copy of your master instance.
2. Detach the data volume from the copy. We are going to create an image from this new instance, and you don't want a useless copy of the drive to be re-created every time (a command sketch follows these steps).
3. Change the settings you need to change in order to make this server rely on the remote (master) MySQL server.
4. Once you are satisfied with the outcome, create an image from this instance.
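A hedged sketch of step 2 on the copied instance, assuming the data volume is mounted at /vol as in the linked article (the mount point and volume ID are placeholders):

sudo umount /vol                            # unmount the data volume on the copy
sudo sed -i '\|/vol|d' /etc/fstab           # remove its fstab entry so the AMI doesn't try to mount it on new instances
aws ec2 detach-volume --volume-id vol-0123456789abcdef0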
Good luck!
Dotan
