We've created an Auto Scaling group with an AMI of our own. That AMI contains a server and an automated OSSEC service that reports to a Slack channel. The problem is that when a new instance is launched, OSSEC sends a lot of alerts because the file signatures differ, which is expected: each new instance recreates the AMI on new volumes.
How can I keep OSSEC installed in those AMIs but avoid all the file-changed alerts when a new instance launches?
I tried restarting the OSSEC service when a new instance is launched, but the behavior was the same: OSSEC alerted that all the files had changed.
Since your OSSEC agent is baked into your AMI, I don't think it is possible to stop these alerts on first boot, because, as you said, this is simply what OSSEC does when it detects any change. Instead of including the OSSEC agent in the image, I would suggest installing it using User Data whenever the Auto Scaling group creates a new instance. This adds some extra boot time, but it may fix your problem.
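A minimal sketch of that idea (the package URL and install method below are placeholder assumptions, not part of the original setup):

#!/bin/bash
# Hypothetical User Data sketch: install the OSSEC agent at first boot so
# the initial syscheck baseline is built on the freshly launched instance
# rather than inherited from the AMI. The package URL is a placeholder.
set -e
wget -O /tmp/ossec-hids-agent.rpm https://packages.example.com/ossec-hids-agent.rpm
yum localinstall -y /tmp/ossec-hids-agent.rpm
/var/ossec/bin/ossec-control start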
One way to solve this could be to use a cron job or a systemd unit to restart or start the hybrid OSSEC process.
In our case, we decided to add those folders as exceptions so that OSSEC does not scan them.
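For reference, a minimal ossec.conf sketch of that approach (the paths below are placeholders for whichever directories legitimately change on every launch):

<syscheck>
  <!-- Placeholder paths: directories that differ per instance by design -->
  <ignore>/var/www/releases</ignore>
  <ignore type="sregex">^/tmp</ignore>
</syscheck>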
I have spent the last few days implementing OSSEC in our golden AMIs in AWS, and it has been a huge pain because of the file-change alarms generated every time an EC2 instance is created.
There are two big points here:
An EC2 instance is provisioned using cloud-init. I had to create a new systemd unit for the OSSEC service so it starts after cloud-final.service:
[Unit]
Description=OSSEC
After=network.target cloud-final.service
After=multi-user.target
[Service]
Type=forking
ExecStart=/var/ossec/bin/ossec-control start
ExecStop=/var/ossec/bin/ossec-control stop
ExecReload=/var/ossec/bin/ossec-control restart
Restart=always
[Install]
WantedBy=multi-user.target
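After dropping that unit into place (for example /etc/systemd/system/ossec.service; the exact path is an assumption and depends on your distribution), reload systemd and enable it:

sudo systemctl daemon-reload
sudo systemctl enable ossec.service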
OSSEC works using queues. At the moment you generate the AMI, you have to ensure the OSSEC service is stopped and the following directories are cleaned (see the sketch after the list):
/var/ossec/queue/diff/local/*
/var/ossec/queue/syscheck/*
/var/ossec/logs/alerts/*
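A minimal pre-bake cleanup sketch, using exactly the directories listed above:

#!/bin/bash
# Run immediately before creating the AMI so that new instances start
# with empty syscheck/diff queues and no stale alert logs.
/var/ossec/bin/ossec-control stop
rm -rf /var/ossec/queue/diff/local/*
rm -rf /var/ossec/queue/syscheck/*
rm -rf /var/ossec/logs/alerts/*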
It took a big effort to achieve that, so I hope this answer will be helpful to others in the future.
I'm looking for some advice. This may seem like a silly question, but I'm having trouble understanding how AWS Elastic Beanstalk autoscaling works and what its best practices are.
I have a Laravel application that is deployed to AWS Elastic Beanstalk through Bitbucket Pipelines. This all works and deploys successfully.
My issue is that when autoscaling triggers, it brings up a new EC2 instance and load-balances the traffic. The problem is that the new EC2 instance in the fleet is a blank Amazon Linux 2 AMI, so it just shows the nginx welcome page.
I think the issue is that it's using a blank AMI and not getting my application. I'm guessing I could create an image from the EC2 instance running my application and then scale with that, but I would have to do that every time I deploy.
Can you configure the Auto Scaling group to replicate the running EC2 instance?
Any help or advice on the best way to accomplish autoscaling with my application would be great.
It depends on the AMI selected in the Launch Configuration.
You need to create an AMI of your live EC2 instance after you have installed all required software, databases, and configuration and verified (tested) that everything works properly.
Then add this AMI to the Auto Scaling Launch Configuration.
You don't need to create an AMI for each deployment.
Only when you change something on the EC2 server or update your app source code do you need to create a new AMI and specify it in the Auto Scaling launch configuration.
The best practice is to configure the Auto Scaling group with a user data script. When a new instance boots from the AMI during autoscaling, it reads the user data (cloud-init/upstart). The user data script can pull the code from Git or whatever source control you use and run the necessary pre-deployment commands.
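A minimal sketch of such a user data script for a Laravel app (the repository URL, paths, and tooling steps are placeholder assumptions):

#!/bin/bash
# Hypothetical first-boot deployment: turn a blank AMI into a working
# web node by pulling the application from source control.
set -e
yum install -y git
git clone https://bitbucket.org/your-org/your-app.git /var/www/html
cd /var/www/html
composer install --no-dev     # assumes composer exists on the image
php artisan migrate --force   # run any pending pre-deployment steps
systemctl restart nginx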
I have created my own EC2 instance in AWS. That AMI is the AWS ECS-optimized AMI, used for launching an ECS service from my EC2 instance. I previously discussed the same thing and tried that approach; the link is below:
Microservice Deployment Using AWS ECS Service
I created my cluster and configured its name when creating the optimized AMI, using the following snippet in the advanced user data section:
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
I followed the cluster creation documentation at the following link:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html
But no result: when I create the cluster and the ECS task definitions, it creates and launches one EC2 instance, and then creates another EC2 instance from the user data above, so there are two EC2 instances in total, even though I already created my own ECS-optimized AMI.
What I am looking for is how to launch an ECS service from my own AMI (the one I created). I need to launch my ECS service on my own EC2 instance (the machine I created from the Amazon ECS-optimized AMI).
The reason behind this requirement is that I don't want to launch my services on a machine owned by others; I need to launch them from my own machine. I also need to host my Angular application on that same machine, so I need control of it. How can I do this?
Sounds like you just need to create a Launch Configuration. With this you can specify the User Data settings that should be applied when a host is set up.
After you create your Launch Configuration, create a new Auto Scaling Group based on it (there's a drop-down to select the launch configuration you want to use).
From there, any new instances launched under that ASG will apply the settings you've configured in the associated Launch Configuration.
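If you prefer the CLI, a rough sketch of both steps (all names, the AMI ID, and the subnets are placeholders):

# Create a launch configuration whose user data registers the instance
# with your ECS cluster (userdata.sh would contain the echo shown above).
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-ecs-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.medium \
  --iam-instance-profile ecsInstanceRole \
  --user-data file://userdata.sh

# Create the Auto Scaling group from that launch configuration.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-ecs-asg \
  --launch-configuration-name my-ecs-lc \
  --min-size 1 --max-size 3 --desired-capacity 1 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"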
I have created an environment with Elastic Beanstalk with an EC2 instance with PHP installed; my files are in /var/www/html.
First I enabled auto scaling / load balancing, but when the autoscaling triggered, it created another instance and terminated the old one. Then I realized the new one was not a clone of the old one: I lost all my config and my files, even though I had attached an SSD root volume in my EB config.
I tried again and created an AMI image, which I included in my EB config (in Custom AMI ID). This time my config stays, but my folder /var/www/html is emptied and replaced by default index.html files.
1. Is this supposed to happen? I thought autoscaling created a clone of the instance?
So I decided to disable auto scaling / load balancing and work in single-instance mode. But then even when I reboot my EC2 instance, the config is preserved but my whole folder /var/www/html is emptied again and only the default files are inside.
2. Why? There is an EBS volume attached to my instance (EB did that automatically), so this should not happen, if I understand correctly how it works.
Maybe it is the same issue for both, but I really don't get why my files are deleted.
Thanks a lot for your help!
Romain
Autoscaling uses an AMI to launch new instances, and AMIs are no more than snapshots of EC2 instances at a certain point in time. Because of this, every time Autoscaling launches a new instance, any differences between the AMI and the current desired state must be applied at boot time, before the instance receives new traffic.
Elastic Beanstalk provides tools to manage application deployments integrated into the Autoscaling lifecycle and to manage instance configuration. Sometimes these configurations become too complex to achieve during bootstrap using the EB tools, and that is when custom AMIs come in handy.
If you SSH into an autoscaled instance and manually perform actions outside the Elastic Beanstalk toolstack's scope, all of those changes will be lost in the next Autoscaling event unless you save an updated AMI from your instance and apply it to your Auto Scaling Group.
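Saving that updated AMI is a one-liner if you want to script it (the instance ID and image name below are placeholders):

# Snapshot the current instance as a new AMI to feed back into the ASG.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "my-app-$(date +%Y%m%d%H%M)" \
  --no-reboot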
I currently have a try-out environment with ~16 services spread over 4 micro instances. The instances are managed by an Auto Scaling group (ASG). When I need to update the AMI of my cluster instances, I currently do the following:
Create a new launch config and edit the ASG to use it.
Detach all instances (with the replacement option) from the ASG and wait until the new ones are listed in the cluster instance list.
MANUALLY find and deregister the old instances from the ECS cluster (very tricky).
Now the services are killed by ECS because the instances were deregistered :(
Wait 3 minutes until the services are restarted on the new instances.
MANUALLY find the old EC2 instances in the EC2 instance list and terminate them (being very careful not to terminate the new ones).
With this approach I have about 3 minutes of downtime, and I shiver at the idea of doing this in production environments. Is there a way to do this without downtime while keeping the overall number of instances the same (so without 200% scaling settings, etc.)?
You can update the Launch Configuration with the new AMI and then assign it to the ASG. Make sure to include the following in the user-data section:
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
Then terminate one instance at a time, and wait until the new one is up and automatically registered before terminating the next.
This could be scripted and automated, too; a rough sketch follows.
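A hedged automation sketch of the one-at-a-time replacement described above (the ASG and cluster names are placeholders, error handling is omitted, and the sleep is a crude stand-in for proper draining):

#!/bin/bash
# Replace ASG instances one at a time; the ASG launches each substitute
# from the updated launch configuration.
ASG=my-ecs-asg
CLUSTER=my-ecs-cluster

IDS=$(aws autoscaling describe-auto-scaling-groups \
        --auto-scaling-group-names "$ASG" \
        --query 'AutoScalingGroups[0].Instances[].InstanceId' --output text)
WANT=$(echo "$IDS" | wc -w)   # keep the fleet at its current size

for id in $IDS; do
  aws autoscaling terminate-instance-in-auto-scaling-group \
    --instance-id "$id" --no-should-decrement-desired-capacity
  sleep 120   # crude grace period while the old instance drops out
  # Wait until the replacement has registered with the ECS cluster.
  until [ "$(aws ecs list-container-instances --cluster "$CLUSTER" \
               --status ACTIVE --query 'length(containerInstanceArns)' \
               --output text)" -ge "$WANT" ]; do
    sleep 15
  done
done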
I have an Amazon EC2 instance which is registered to a cluster of Amazon ECS.
And I want to change this instance's type from c4.large to c4.8xlarge.
I was able to change its type from c4.large to c4.8xlarge in the AWS console, but after the change I found
[ERROR] Could not register module="api client" err="ClientException: Container instance type changes are not supported. Container instance XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX was previously registered as c4.large.
being printed in /var/log/ecs/ecs-agent.log.20XX-XX-XX-XX file.
Is it possible to change the EC2 instance type and re-register the instance to a cluster?
I think deregistering it first and then registering it again might work, but I'm afraid this may cause something irreversible in my AWS environment, so I haven't tried that method yet.
To solve this connection problem between the agent and the cluster, just delete the file /var/lib/ecs/data/ecs_agent_data.json and restart Docker and the ECS agent.
After that, a new container instance will be created in your cluster with the new size.
sudo rm /var/lib/ecs/data/ecs_agent_data.json
sudo service docker restart
sudo start ecs
Then you can go to the ECS cluster console and deregister the old container instance.
UPDATE:
According to @florins and @MBear in the comments below, AWS has since changed the data file on ECS instances:
sudo rm /var/lib/ecs/data/agent.db
sudo service docker restart
sudo start ecs
As of March 2021 / AMI image ami-0db98e57137013b2d, the file /var/lib/ecs/data/ecs_agent_data.json mentioned in the previous answer does not exist. For me, the commands to execute on the changed instance were:
sudo rm /var/lib/ecs/data/agent.db
sudo service docker restart
After that, it was possible to deploy containers to the instance without a fresh registration (AWS automatically registered a second ECS container instance of the new type). I did have to remove a leftover container instance that still listed the resources of the old instance type.
You can't do this. Per their docs:
The type of EC2 instance that you choose for your container instances determines the resources available in your cluster. Amazon EC2 provides different instance types, each with different CPU, memory, storage, and networking capacity that you can use to run your tasks. For more information, see Amazon EC2 Instances.
This means that when you launch a container on an instance, the agent gathers a set of metadata about the instance in order to run it. If you change the instance type, much of that metadata (CPU units, memory, etc.) is invalidated. The agent is aware of this and reports it as an error.
You should spin up a new instance of the new type, register it to the cluster, and let the tasks run on it. If it's a service, just terminate the old instance and let the service schedule its tasks onto the new one.
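If you want to do the swap from the CLI, a rough sketch (the cluster name, container-instance ARN, and instance ID are placeholders):

# Deregister the old container instance, then terminate the underlying EC2
# instance; a service will reschedule its tasks onto the new instance.
aws ecs deregister-container-instance \
  --cluster my-cluster \
  --container-instance arn:aws:ecs:us-east-1:123456789012:container-instance/abcd1234 \
  --force
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0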
I can't think of any real reason why terminating your old instance would cause something irreversible, unless it is misconfigured or fragile due to user-specific settings; by default this would not cause anything destructive.
As an alternative approach, if the EC2 instance does not store any valuable data, you could start a new instance using the old instance as a template. This takes over all existing settings and can be achieved with just a few clicks in minutes.
Select the EC2 instance and then "Actions -> Images and templates -> Start more like this". Just change the instance type.
When the new instance is running, go to the ECS cluster, open the "ECS instances" tab, and activate the newly created instance.
Shut down the old instance.
Update your task definition if needed (for example, to use more CPU and memory) and update the service to use the new task revision.