I have a requirement to launch multiple EC2 instances in the Tokyo region, based on the number of AMIs owned by our account in that same region. The AMIs are backed up daily from another region.
What this CloudFormation template needs to achieve is:
Retrieve a list of AMIs created today
Attempt to launch each of them in the same region
For example, if 10 different AMIs are created in the Tokyo region today, then CloudFormation should create 10 EC2 instances based on those 10 AMIs.
I have looked at some examples in Walkthrough: Looking Up Amazon Machine Image IDs - AWS CloudFormation, but found that the code does not suit the requirement.
I already have the Lambda function retrieve-today-ami.py; the challenge is to include it in the CloudFormation template from Walkthrough: Looking Up Amazon Machine Image IDs - AWS CloudFormation.
Normally, CloudFormation is used to launch pre-defined infrastructure. Your requirement to launch a variable number of instances, with information that changes for each instance every day, does not match the model CloudFormation is built for.
Based on your use case, I would recommend writing a script to perform the operation you want.
For example, a Python script that lists the AMIs, identifies the ones you want to use, and then launches EC2 instances from those AMIs.
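The same idea as a rough AWS CLI sketch (assuming the AMIs are owned by the account; the instance type is a placeholder):

#!/bin/bash
# Sketch: launch one EC2 instance per AMI created today in ap-northeast-1 (Tokyo).
# Assumes the AWS CLI is configured with permissions to describe images and run instances.
REGION="ap-northeast-1"
TODAY=$(date -u +%Y-%m-%d)

# List AMIs owned by this account whose CreationDate starts with today's date.
AMI_IDS=$(aws ec2 describe-images \
  --owners self \
  --region "$REGION" \
  --query "Images[?starts_with(CreationDate, '${TODAY}')].ImageId" \
  --output text)

# Launch one instance per AMI.
for AMI_ID in $AMI_IDS; do
  echo "Launching instance from ${AMI_ID}"
  aws ec2 run-instances \
    --region "$REGION" \
    --image-id "$AMI_ID" \
    --instance-type t3.micro \
    --count 1
done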
You might be able to achieve this by using a Lambda-backed custom resource to fetch the names of the AMIs. Then, the outputs of your custom resource could be used in the EC2 sections of the template. You could have the one template defining the Lambda export the values and import them in your EC2 templates.
I have created an environment in AWS. The environment has networking (VPC), EC2 instances, RDS (MySQL), Redis, ALB, S3, etc.
Now I want to have a shell script (bash) that will show the following:
EC2 instances - instance types, IPs, termination protection, etc.
Networking - VPC and subnet CIDRs, DNS hostnames (enabled or disabled), etc.
S3 - details like policy, bucket name, default encryption, replication rules, etc.
RDS - ARN, endpoints, reader and writer instances, version, etc.
Redis - version, node type, shards, total nodes, etc.
ALB - DNS name, listeners, etc.
and I need all of this in a file as output.
Note: I have to give only the AWS account number, region, and tags as input.
FYI, these input values have to be read from a JSON or CSV file.
Can you please help me?
I tried some scripts, but they did not work properly.
Currently, I am updating and checking everything manually.
Note: I have this environment, created through Terraform, that contains networking, a bastion, the backend, a worker node, RDS, S3, and an ALB. Now I want to validate all of these as part of a checklist through automation, and I need it in the form of a shell script that reports PASS or FAIL.
This is what IaC (Infrastructure as Code) tools such as Terraform were invented for.
You can write down the specifics of your cloud resources (such as S3, Lambda, etc.) and manage versions, configuration, and backends based on your environment settings.
Here are some common AWS services written in Terraform that you can look at as a reference to get started with Terraform.
We use terraform.env.tfvars to pass environment-specific variables and automate the whole thing using some bash scripts. The reference repo is an actual project from which you can get an idea of how it's done.
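If what you ultimately need is a PASS/FAIL check of the live environment against the Terraform configuration, one rough option (a sketch, assuming terraform init has already been run and terraform.env.tfvars exists) is terraform plan's -detailed-exitcode flag:

#!/bin/bash
# Sketch: report PASS if the live environment matches the Terraform configuration, FAIL otherwise.
terraform plan -detailed-exitcode -var-file=terraform.env.tfvars -input=false > /dev/null
case $? in
  0) echo "PASS: no drift detected" ;;
  2) echo "FAIL: resources differ from the Terraform configuration" ;;
  *) echo "FAIL: terraform plan returned an error" ;;
esac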
Best wishes.
I'm trying to set up a way to monitor disk usage for instances belonging to an Auto Scaling group, and to add an alarm when the volumes attached to the instances are almost full.
Since it seems there are no metrics normally offered by Amazon for that, I resorted to using the CloudWatch agent to get what I wanted. So far so good: I can create graphs and alarms for the metrics I want using the CloudWatch console.
My issue is how to automate everything with CDK. How can I automate the creation of the metric for each instance without knowing the instance ID beforehand? Is there a solution for this?
You can install and configure the CloudWatch agent via EC2 user data, and the Auto Scaling group can use a launch template to launch the EC2 instances. All of this can be done with the AWS CDK.
There is an example in this open source project for your reference.
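As a rough illustration, the user data in the launch template could look something like this (a sketch for Amazon Linux 2; the package name, paths, and config file location may differ on other distributions):

#!/bin/bash
# Sketch: install the CloudWatch agent and report disk used_percent for the root volume,
# with an extra roll-up by Auto Scaling group name so no instance IDs are needed.
yum install -y amazon-cloudwatch-agent

cat > /opt/aws/amazon-cloudwatch-agent/etc/disk-config.json <<'EOF'
{
  "metrics": {
    "append_dimensions": { "AutoScalingGroupName": "${aws:AutoScalingGroupName}" },
    "aggregation_dimensions": [["AutoScalingGroupName"]],
    "metrics_collected": {
      "disk": {
        "measurement": ["used_percent"],
        "resources": ["/"]
      }
    }
  }
}
EOF

/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 \
  -c file:/opt/aws/amazon-cloudwatch-agent/etc/disk-config.json -s

Alarming on the AutoScalingGroupName dimension (rather than InstanceId) is one way to define the metric and alarm in CDK without knowing the instance IDs in advance.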
Another approach you could take is using AWS Systems Manager. Essentially, you install the SSM agent on your instances and create an SSM Document (think shell/Python script) that will run your setup script/automation.
You then create a State Manager association, tying the SSM Document to your instances based on EC2 tags, e.g. Application=MyApp or Team=MyTeam. This way, you don't have to provide any resource IDs, just a tag key-value pair, which can cover multiple instances and future instance replacements. You can schedule it to run at specific times (cron) or at a certain frequency (rate) to enforce state.
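For example (a sketch with the AWS CLI; the document name, tag, and schedule below are placeholders, and the custom SSM document is assumed to already exist):

#!/bin/bash
# Sketch: associate a custom SSM document with every instance tagged Application=MyApp,
# re-applying it every 30 minutes. Assumes the SSM agent is running on the instances.
aws ssm create-association \
  --name "MyApp-CloudWatchAgentSetup" \
  --association-name "myapp-cloudwatch-agent-setup" \
  --targets "Key=tag:Application,Values=MyApp" \
  --schedule-expression "rate(30 minutes)"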
I have created my own EC2 instance in AWS. The AMI is an AWS ECS-optimized AMI, meant for launching an ECS service from my EC2 instance. I previously discussed the same thing and tried that approach. The link is below:
Microservice Deployment Using AWS ECS Service
I created my cluster and configured the cluster name when creating the instance from the optimized AMI, using the following code snippet in the advanced user data section:
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
I followed the cluster creation documentation at the following link:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html
But no result: when I create the cluster and the ECS task definitions, it creates and launches into one EC2 instance, and another EC2 instance is created again by specifying the code above, so there are two EC2 instances in total. I had already created my own ECS-optimized instance.
I am looking for a way to launch the ECS service from my own AMI (the one I created). Actually, I need to launch my ECS service on my own EC2 instance (the machine I created from the Amazon ECS-optimized AMI).
The reason behind this requirement is that I don't want to launch my services on a machine owned by others; I need to launch them from my own machine. I also need to host my Angular application on the same machine, so I need control of the machine. How can I do this?
Sounds like you just need to create a Launch Configuration. With this you can specify the User Data settings that should be applied when a host is set up.
After you create your Launch Configuration, create a new Auto Scaling Group based on it (there's a drop-down to select the launch configuration you want to use).
From here, any new instances launched under that ASG will apply the settings you've configured in the associated Launch Configuration.
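For example (a sketch with the AWS CLI; the AMI ID, instance profile, subnet, and names below are placeholders):

#!/bin/bash
# Sketch: a launch configuration whose user data joins instances to your ECS cluster,
# plus an Auto Scaling group that uses it. IDs, names, and the instance profile are placeholders.
cat > userdata.sh <<'EOF'
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
EOF

aws autoscaling create-launch-configuration \
  --launch-configuration-name my-ecs-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.medium \
  --iam-instance-profile ecsInstanceRole \
  --user-data file://userdata.sh

aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-ecs-asg \
  --launch-configuration-name my-ecs-lc \
  --min-size 1 --max-size 1 --desired-capacity 1 \
  --vpc-zone-identifier "subnet-0123456789abcdef0"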
I wrote a simple CloudFormation template in JSON that brings up an EC2 instance from a pre-existing AMI. After the instance is up, I want to make sure that specific services (SQL services) are up and running on that EC2 instance.
How can I do that in my CloudFormation template?
Any pointers?
CloudFormation has CreationPolicies for that use case.
CreationPolicy Attribute
Here you can define "wait for signal" criteria that must be met before the stack continues to create other resources.
For that signal, you need to implement a script on your EC2 instance which checks for the needed service and sends a signal if the check is successful.
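For example, a user data sketch (assuming a Linux instance with the aws-cfn-bootstrap helper scripts installed and a SQL service managed by systemd; the service, stack, resource, and region names are placeholders):

#!/bin/bash
# Sketch: wait until the SQL service is active, then signal the instance's CreationPolicy.
for i in $(seq 1 30); do
  if systemctl is-active --quiet mssql-server; then
    /opt/aws/bin/cfn-signal --exit-code 0 \
      --stack my-stack --resource MyEC2Instance --region us-east-1
    exit 0
  fi
  sleep 10
done

# Timed out: send a failure signal so the stack rolls back instead of waiting for the timeout.
/opt/aws/bin/cfn-signal --exit-code 1 \
  --stack my-stack --resource MyEC2Instance --region us-east-1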
I want to use the power of the cloud, where a master or main EC2 instance creates multiple instances based on need and then destroys them.
I need to create multiple instances from the same AMI.
I want to know the best way to accomplish this.
Thanks
You can utilize EC2 APIs for this purpose.
ec2-run-instances (http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RunInstances.html) is a command that allows you to create a new instance from your own (or public) AMI. You can also specify the number of instances you wish to create.
There is also a web service API operation (RunInstances) for this purpose:
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-RunInstances.html
Which one to use is up to you. However, I don't think starting new instances from a master instance is a good practice in AWS. You can rely on Elastic Load Balancing (http://aws.amazon.com/elasticloadbalancing/) and Auto Scaling (http://aws.amazon.com/autoscaling/) to scale your server fleet up and down depending on incoming traffic or the health of your running instances.
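With the current AWS CLI, the equivalent call looks roughly like this (a sketch; the AMI ID, key pair name, count, and instance IDs are placeholders):

#!/bin/bash
# Sketch: launch 3 instances from the same AMI in one call.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --count 3 \
  --key-name my-key-pair

# Later, terminate them by instance ID when they are no longer needed.
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0 i-0fedcba9876543210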