I have this bash script which lists all active EC2 instances across regions:
for region in `aws ec2 describe-regions --output text | cut -f3`; do
    echo -e "\nListing instances in region: '$region'..."
    aws ec2 describe-instances --region "$region"
done
I'd like to port this to a Lambda function on AWS. What would be the best approach today? Do I have to use a wrapper or similar, e.g. Node? I googled and found what looked like mostly workarounds, but they were a couple of years old. I'd appreciate up-to-date guidance.
Two ways to do it:
Using custom runtimes and layers: https://github.com/gkrizek/bash-lambda-layer
'Exec'ing from another runtime: https://github.com/alestic/lambdash
You should write it in a language that has an AWS SDK, such as Python.
You should also think about what the Lambda function should do with the output, since at the moment it just retrieves information but doesn't do anything with it.
Here's a sample AWS Lambda function:
import boto3

def lambda_handler(event, context):
    instance_ids = []

    # Get a list of regions
    ec2_client = boto3.client('ec2')
    response = ec2_client.describe_regions()

    # For each region
    for region in response['Regions']:
        # Get a list of instances
        ec2_resource = boto3.resource('ec2', region_name=region['RegionName'])
        for instance in ec2_resource.instances.all():
            instance_ids.append(instance.id)

    # Return the list of instance IDs
    return instance_ids
Please note that it takes quite a bit of time to call all the regions sequentially. The above can take 15-20 seconds.
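If those 15-20 seconds matter (for example, to stay well under your Lambda timeout), the per-region calls can run concurrently instead of sequentially. A minimal sketch using Python's concurrent.futures, assuming the same boto3 setup as above:
import boto3
from concurrent.futures import ThreadPoolExecutor

def list_instances_in_region(region_name):
    # Give each thread its own resource object; boto3 sessions and
    # resources are not guaranteed to be thread-safe when shared.
    ec2 = boto3.resource('ec2', region_name=region_name)
    return [instance.id for instance in ec2.instances.all()]

def lambda_handler(event, context):
    regions = [r['RegionName']
               for r in boto3.client('ec2').describe_regions()['Regions']]
    instance_ids = []
    # Query all regions in parallel and flatten the per-region lists
    with ThreadPoolExecutor(max_workers=len(regions)) as executor:
        for ids in executor.map(list_instances_in_region, regions):
            instance_ids.extend(ids)
    return instance_ids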
Related
I'm currently working on a CloudFormation template that requires a lot of MSK configuration details to run. I am writing a Makefile that runs the command
#aws kafka list-clusters
This returns a JSON structure containing most of the details I require. Is there a way to retrieve each of them without having to save the output and then parse through the structure? All of this would be done within the Makefile, so that it can be plugged directly into the CloudFormation template and wouldn't require manual input or hardcoded values.
I hope I'm not missing something simple. Thanks!
Right, it seems it was as simple as using --query.
So let's say I wanted to get the ARN of the cluster (assuming there is only one in the region, which there is in my case).
I'd call
CLUSTER_ARN := $(shell aws kafka list-clusters --query 'ClusterInfoList[0].ClusterArn' --output text)
This runs the shell command, lists only the first Kafka cluster, and stores the ClusterArn value in CLUSTER_ARN for use throughout the Makefile. Note the $(shell ...) wrapper: in Make, # starts a comment, so the original CLUSTER_ARN = #aws ... form assigns nothing. Adding --output text strips the quotes that JSON output would otherwise leave around the ARN.
See the AWS CLI documentation on filtering output with --query for more details. :)
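If you later need the same lookup from Python instead of the Makefile, boto3's kafka client exposes the same data. A minimal sketch, assuming a single cluster in the region as above:
import boto3

kafka = boto3.client('kafka')

# list_clusters returns the same structure as the CLI command;
# by assumption the region has exactly one cluster, so take index 0.
cluster_arn = kafka.list_clusters()['ClusterInfoList'][0]['ClusterArn']
print(cluster_arn)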
My current Terraform setup consists of 3 template files. Each template file is linked to a launch configuration resource, which is then used to launch instances via auto scaling events. In each template file there is an AWS CLI command used to attach an existing EBS volume to the new instance being launched via autoscaling. I am having trouble writing a conditional expression that passes the right volume ID into this AWS CLI command. Since I have 3 template files and 3 EBS volumes, one to attach to each instance in its own autoscaling group, and a conditional expression only chooses between 2 values, I don't see how to cover the third volume. Any advice on how I can achieve this would be helpful.
Template_file
data "template_file" "ML_10_user_data" {
  count    = "${(var.enable ? 1 : 0) * var.number_of_zones}" // 3 templates
  template = "${file("userdata.sh")}"

  vars {
    ebs_volume = "${count.index == 0 ? "vol-xxxxxxxxxxxxxxxxx" : "vol-xxxxxxxxxxxxxxxxx"}" // how to include 3rd EBS volume?
  }
}
Userdata.sh
#!/bin/bash
# the ebs_volume value is substituted by Terraform's template rendering
EBS_VOLUME=${ebs_volume}
aws ec2 attach-volume --volume-id $EBS_VOLUME --instance-id `curl http://169.254.169.254/latest/meta-data/instance-id` --device /dev/sdf
The best way would be to put it in a list:
variable "volumes" {
  default = ["vol-1111", "vol-2222", "vol-333"]
}

data "template_file" "user_data" {
  count    = "${(var.enable ? 1 : 0) * var.number_of_zones}"
  template = "${file("userdata.sh")}"

  vars {
    ebs_volume = "${var.volumes[count.index]}"
  }
}
However, if these instances are meant to be in an ASG, this is not a very good design. Instances in an ASG should be identical, interchangeable, and disposable. They can be terminated at any time by AWS or by scaling activities, so you should treat them as a group, not as individual entities.
My understanding is I'm supposed to use resource when using Boto3 :)
The following returns all the key/value pairs; how would I get a specific key's value? I'm looking to print out the name given to the instance.
ec2 = boto3.resource('ec2')
for instance in ec2.instances.all():
    print(instance.tags)
You can use either the boto3 resource or client interface. The resource interface is higher level and easier to work with; the client interface is lower level and gives you more fine-grained control. Start off with the resource interface and switch to the client interface as you better understand Python, boto3, and the AWS SDKs.
Here is an example that prints each tag's Value.
The key part to understand is that instance.tags is a list of Python dicts (dictionaries). You need to loop through this list to get to each value. When accessing a dict you use the syntax ['name_of_item'].
AWS stores tags as Key and Value. These are the names to use when indexing each dict.
import boto3

ec2 = boto3.resource('ec2')
for instance in ec2.instances.all():
    print(instance.tags)
    for tag in instance.tags:
        print(tag['Value'])
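Since you specifically want the name given to the instance, note that the console's "Name" column is just a tag whose Key is Name. Here is a minimal extension of the loop above that picks it out (instance.tags can be None for an instance with no tags at all, hence the "or []"):
import boto3

ec2 = boto3.resource('ec2')
for instance in ec2.instances.all():
    # tags is None when the instance has no tags, so fall back to []
    for tag in instance.tags or []:
        if tag['Key'] == 'Name':
            print(instance.id, tag['Value'])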
I have been unable to find a simple example which shows me how to use boto to terminate an Amazon EC2 instance using an alarm (without using AutoScaling). I want to terminate the specific instance that has a CPU usage less than 1% for 10 minutes.
Here is what I've tried so far:
import boto.ec2
import boto.ec2.cloudwatch
from boto.ec2.cloudwatch import MetricAlarm
conn = boto.ec2.connect_to_region("us-east-1", aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
cw = boto.ec2.cloudwatch.connect_to_region("us-east-1", aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
reservations = conn.get_all_instances()
for r in reservations:
    for inst in r.instances:
        alarm = boto.ec2.cloudwatch.MetricAlarm(name='TestAlarm', description='This is a test alarm.', namespace='AWS/EC2', metric='CPUUtilization', statistic='Average', comparison='<=', threshold=1, period=300, evaluation_periods=2, dimensions={'InstanceId': [inst.id]}, alarm_actions=['arn:aws:automate:us-east-1:ec2:terminate'])
        cw.put_metric_alarm(alarm)
Unfortunately it gives me this error:
dimensions={'InstanceId':[inst.id]}, alarm_actions=['arn:aws:automate:us-east-1:ec2:terminate'])
TypeError: __init__() got an unexpected keyword argument 'alarm_actions'
I'm sure it's something simple I'm missing.
Also, I am not using CloudFormation, so I cannot use the AutoScaling feature. This is because I don't want the alarm to use a metric across the entire group, rather only for a specific instance, and only terminate that specific instance (not any instance in that group).
Thanks in advance for your help!
The alarm actions are not passed as a constructor keyword argument; they are added to the MetricAlarm object after it has been created. In your code you need to do the following:
alarm = boto.ec2.cloudwatch.MetricAlarm(name='TestAlarm', description='This is a test alarm.', namespace='AWS/EC2', metric='CPUUtilization', statistic='Average', comparison='<=', threshold=1, period=300, evaluation_periods=2, dimensions={'InstanceId':[inst.id]})
alarm.add_alarm_action('arn:aws:automate:us-east-1:ec2:terminate')
cw.put_metric_alarm(alarm)
You can also see this in the boto documentation here:
http://docs.pythonboto.org/en/latest/ref/cloudwatch.html#module-boto.ec2.cloudwatch.alarm
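For reference, if you move to the newer boto3 library, the actions are passed directly to put_metric_alarm. A minimal sketch, assuming the same threshold as above and a hypothetical instance_id variable for the instance to watch:
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

instance_id = 'i-0123456789abcdef0'  # hypothetical; substitute the instance you want to watch

cloudwatch.put_metric_alarm(
    AlarmName='TestAlarm',
    AlarmDescription='This is a test alarm.',
    Namespace='AWS/EC2',
    MetricName='CPUUtilization',
    Statistic='Average',
    ComparisonOperator='LessThanOrEqualToThreshold',
    Threshold=1.0,
    Period=300,
    EvaluationPeriods=2,
    Dimensions=[{'Name': 'InstanceId', 'Value': instance_id}],
    # the same terminate action as in the boto example above
    AlarmActions=['arn:aws:automate:us-east-1:ec2:terminate'],
)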
I'm writing some Ruby scripts to wrap AWS ELB command line calls, mostly so that I can act on several ELB instances simultaneously. One task is to use the elb-describe-instance-health call to see what instance IDs are attached to this ELB.
I want to match each instance ID to a nickname we have set up for those instances, so that I can see at a glance which machines are connected to the ELB without having to look up the instance names.
So I am issuing:
cmd = "elb-describe-instance-health #{elbName}"
value = `#{cmd}`
This passes the ELB name into the call and returns output such as:
INSTANCE_ID i-jfjtktykg InService N/A N/A
INSTANCE_ID i-ujelforos InService N/A N/A
One line appears for each instance in the ELB, with two spaces between each field.
What I need is the second field, which is the actual instance ID. Basically I'm trying to take each line returned, turn it into an array, and get the 2nd field, which I can then use to look up our server nickname.
Not sure if this is the right approach, but any suggestions on how to get this done are very welcome.
The newly released aws-sdk gem supports Elastic Load Balancing (AWS::ELB). If you want to get a list of instance ids attached to your load balancer you can do the following:
AWS.config(:access_key_id => '...', :secret_access_key => '...')
elb = AWS::ELB.new
instance_ids = elb.load_balancers['LOAD_BALANCER_NAME'].instances.collect(&:id)
You could also use EC2 to store your instance nicknames.
ec2 = AWS::EC2.new
ec2.instances['INSTANCE_ID'].tags['nickname'] = 'NICKNAME'
Assuming your instances are tagged with their nicknames, you could collect them like so:
elb = AWS::ELB.new
elb.load_balancers['LOAD_BALANCER_NAME'].instances.collect{|i| i.tags['nickname'] }
A simple way to extract the second column would be something like this:
ids = value.split("\n").collect { |line| line.split(/\s+/)[1] }
This will leave the second column's values in the array ids. All it does is break the value into lines, break each line into whitespace-delimited columns, and extract the second column.
There's probably no need to try to be too clever for something like this, a simple and straight forward solution should be sufficient.
References:
Array#collect
String#split