I need to find the volume ID of the root volume of my EC2 instance in boto3.
I tried listing volumes with describe_volumes, but there is no identifier that marks a volume as the root volume.
You can use aws ec2 describe-instances to view attached disks. The volumes will appear in the BlockDeviceMappings section:
"BlockDeviceMappings": [
{
"DeviceName": "/dev/xvda",
"Ebs": {
"AttachTime": "2016-01-24T06:46:06+00:00",
"DeleteOnTermination": true,
"Status": "attached",
"VolumeId": "vol-686feca2"
}
}
],
If the DeviceName of the volume matches the instance's root_device_name, then that volume is the root volume.
import boto3

session = boto3.session.Session()
ec2 = session.resource('ec2')
instance_iterator = ec2.instances.all()

for instance in instance_iterator:
    print(instance.id)
    for device in instance.block_device_mappings:
        if device['DeviceName'] == instance.root_device_name:
            print("The root volume is", device['Ebs']['VolumeId'])
        else:
            print("The additional EBS volume is", device['Ebs']['VolumeId'])
Regarding "there is no identifier for root volume": the most likely reason is that the volume is not attached to any instance.
By default, describe-volumes returns all volumes. If a volume is attached to an instance, the attachment is reflected in the output. If the volume is not attached, then there is no Attachments information and no device name for it.
To list only attached volumes, you can use a filter:
aws ec2 describe-volumes \
--filters Name=attachment.status,Values=attached
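The same filter can be applied in boto3 (a minimal sketch; it assumes default credentials and region are already configured):
import boto3

ec2 = boto3.client('ec2')

# List only volumes that are currently attached to an instance
response = ec2.describe_volumes(
    Filters=[{'Name': 'attachment.status', 'Values': ['attached']}]
)
for volume in response['Volumes']:
    for attachment in volume['Attachments']:
        print(volume['VolumeId'], attachment['InstanceId'], attachment['Device'])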
The following code snippet uses the latest version of boto3 and looks for all "running" instances in ap-east-1; the client is created with that specific region (ap-east-1):
try:
    running_instances = ec2.describe_instance_status(
        Filters=[
            {
                "Name": "instance-state-name",
                "Values": ["running"],
            },
        ],
        InstanceIds=<list of instance_ids>,
    )
except ClientError as e:
    <catch exception>
The result is an empty list even though there are running EC2 instances.
The above snippet works for all other regions, though.
The AWS command aws ec2 describe-instance-status --region ap-east-1 --filters Name="instance-state-name",Values="running" --instance-ids <list of instance ids> returns the running instances with the same filter.
What am I missing for this region specifically while using boto3?
Is there a specific version of boto3 that works for the ap-east-1 region?
https://github.com/boto/boto3/issues/3575 - I asked the question here and the maintainers helped with debugging it.
Regional STS had to be enabled at the organization payer account level for opt-in regions such as ap-east-1.
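As a quick sanity check (a sketch; it assumes your credentials are already configured), you can call STS through the ap-east-1 regional endpoint. If regional STS is not enabled for the account, this call fails, which would explain the empty results from the EC2 API in that region:
import boto3
from botocore.exceptions import ClientError

# Force the regional STS endpoint for ap-east-1 (an opt-in region)
sts = boto3.client("sts", region_name="ap-east-1",
                   endpoint_url="https://sts.ap-east-1.amazonaws.com")
try:
    identity = sts.get_caller_identity()
    print("Credentials are valid in ap-east-1 for account", identity["Account"])
except ClientError as e:
    print("STS call failed in ap-east-1:", e)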
I have code which runs in Lambda, but the same code does not work on my system.
asgName="test"
def lambda_handler(event, context):
client = boto3.client('autoscaling')
asgName="test"
response = client.describe_auto_scaling_groups(AutoScalingGroupNames=[asgName])
if not response['AutoScalingGroups']:
return 'No such ASG'
...
...
...
The code below is what I try to run on Linux, but it reports the error "No such ASG":
asgName="test"
client = boto3.client('autoscaling')
response = client.describe_auto_scaling_groups(AutoScalingGroupNames=[asgName])
if not response['AutoScalingGroups']:
return 'No such ASG'
The first thing to check is that you are connecting to the correct AWS region. If not specified, it defaults to us-east-1 (N. Virginia). A region can also be specified in the credentials file.
In your code, you can specify the region with:
client = boto3.client('autoscaling', region_name='us-west-2')
The next thing to check is that the credentials are associated with the correct account. The AWS Lambda function is obviously running in your desired account, but you should confirm that the code running "in linux" is using the same AWS account.
You can do this by using the AWS Command-Line Interface (CLI), which will use the same credentials as your Python code on the Linux computer. Run:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names test
It should give the same result as the Python code running on that computer.
You might need to specify the region:
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names test --region us-west-2
(Of course, change your region as appropriate.)
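You can run a similar check from Python itself (a small sketch; it uses the same default credentials and config as the rest of your code) to see which account and region your local code actually resolves to:
import boto3

session = boto3.session.Session()
print("Region: ", session.region_name)
print("Account:", boto3.client('sts').get_caller_identity()['Account'])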
I am attempting to programmatically put data into a locally running DynamoDB container by triggering a Python Lambda function.
I'm trying to follow the template provided here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.Python.03.html
I am using the amazon/dynamodb-local image, which you can download here: https://hub.docker.com/r/amazon/dynamodb-local
Using Ubuntu 18.04.2 LTS to run the container and the Lambda server
AWS SAM CLI to run my Lambda API
Docker version 18.09.4
Python 3.6 (you can see this in the SAM logs below)
The startup command for the Python Lambda is just "sam local start-api"
First, my Lambda code:
import json
import boto3

def lambda_handler(event, context):
    print("before grabbing dynamodb")
    # dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000",
    #                           region_name='us-west-2',
    #                           aws_access_key_id='RANDOM',
    #                           aws_secret_access_key='RANDOM')
    dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")
    table = dynamodb.Table('ContactRequests')
    try:
        response = table.put_item(
            Item={
                'id': "1234",
                'name': "test user",
                'email': "testEmail@gmail.com"
            }
        )
        print("response: " + str(response))
    except Exception as e:
        print("put_item failed: " + str(e))
        raise
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": "hello world"
        }),
    }
I know that I should have this ContactRequests table available at localhost:8000, because I can run the following command to view my Docker container's DynamoDB tables. (I have also tested the boto3.resource call with a variety of values for the access keys, region name, and secret keys, with no improvement in the result.)
dev#ubuntu:~/Projects$ aws dynamodb list-tables --endpoint-url http://localhost:8000
{
"TableNames": [
"ContactRequests"
]
}
I am also able to successfully hit the localhost:8000/shell that DynamoDB local offers.
Unfortunately, when I hit the API endpoint that triggers this handler, I get a timeout that logs like so:
Fetching lambci/lambda:python3.6 Docker container image......
2019-04-09 15:52:08 Mounting /home/dev/Projects/sam-app/.aws-sam/build/HelloWorldFunction as /var/task:ro inside runtime container
2019-04-09 15:52:12 Function 'HelloWorldFunction' timed out after 3 seconds
2019-04-09 15:52:13 Function returned an invalid response (must include one of: body, headers or statusCode in the response object). Response received:
2019-04-09 15:52:13 127.0.0.1 - - [09/Apr/2019 15:52:13] "GET /hello HTTP/1.1" 502 -
Notice that none of my print statements are reached; if I remove the call to table.put_item, the print statements are called successfully.
I've seen similar questions on Stack Overflow, such as "lambda python dynamodb write gets timeout error", which state that the problem is that I am using a local DB. But shouldn't I still be able to write to a local DB with boto3 if I point it at my locally running DynamoDB instance?
Your Docker container running the Lambda function can't reach DynamoDB at 127.0.0.1. Instead, use the name of your DynamoDB local Docker container as the host name in the endpoint:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://<DynamoDB_LOCAL_NAME>:8000")
You can use docker ps to find the <DynamoDB_LOCAL_NAME> or give it a name:
docker run --name dynamodb amazon/dynamodb-local
and then connect:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://dynamodb:8000")
Found the solution to the problem here: connecting AWS SAM Local with dynamodb in docker
The question asker noted that he had seen online that he may need to connect the containers to the same Docker network using:
docker network create local-lambda
So I created this network, then updated my SAM command and my Docker commands to use it, like so:
docker run --name dynamodb -p 8000:8000 --network=local-lambda amazon/dynamodb-local
sam local start-api --docker-network local-lambda
After that, I no longer experienced the timeout issue.
I'm still working on understanding exactly why this was the issue.
To be fair, though, it was also important that I use the dynamodb container name as the host in my boto3 resource call.
So in the end, it was a combination of the solution above and the answer provided by "Reto Aebersold" that created the final solution:
dynamodb = boto3.resource('dynamodb', endpoint_url="http://<DynamoDB_LOCAL_NAME>:8000")
I'm trying to build a CentOS Amazon AMI with Packer, using the amazon-chroot builder.
The AMI exists, but I am getting this build error:
[root@ip-10-32-11-16 retel-base]# packer build retel-base.json
amazon-chroot output will be in this color.
==> amazon-chroot: Gathering information about this EC2 instance...
==> amazon-chroot: Inspecting the source AMI...
==> amazon-chroot: Couldn't find root device!
Build 'amazon-chroot' errored: Couldn't find root device!
==> Some builds didn't complete successfully and had errors:
--> amazon-chroot: Couldn't find root device!
==> Builds finished but no artifacts were created.
cat retel-base.json
{
  "variables": {
    "ACCESS_KEY_ID": "{{env `ACCESS_KEY_ID`}}",
    "SECRET_ACCESS_KEY": "{{env `SECRET_ACCESS_KEY`}}"
  },
  "builders": [{
    "type": "amazon-chroot",
    "access_key": "{{user `ACCESS_KEY_ID`}}",
    "secret_key": "{{user `SECRET_ACCESS_KEY`}}",
    "source_ami": "ami-a40df4cc",
    "ami_name": "base image built with packer {{timestamp}}"
  }]
}
I think this might be due to a mismatch between the name of the root device and the block device mapping.
In the official CentOS AMIs, the root device is named /dev/sda but the block device mapping only lists /dev/sda1, which is apparently a partition on the root device.
The Aminator by Netflix has a similar problem with partitioned volumes: https://github.com/Netflix/aminator/issues/129
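To confirm the mismatch for a given source AMI, you can compare the image's root device name with its block device mappings (a small boto3 sketch; it assumes you query the region the AMI lives in):
import boto3

ec2 = boto3.client("ec2")

image = ec2.describe_images(ImageIds=["ami-a40df4cc"])["Images"][0]
print("Root device name:", image["RootDeviceName"])
print("Mapped devices:  ", [m["DeviceName"] for m in image["BlockDeviceMappings"]])
# If the root device (e.g. /dev/sda) does not appear among the mapped devices
# (e.g. only /dev/sda1 does), the amazon-chroot builder cannot find the root device.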
$ aws ec2 describe-images --image-ids ami-6a70e303
{
    "Images": [],
    "ResponseMetadata": {
        "RequestId": "348eb2b0-b975-4632-915e-f2e344d275bd"
    }
}
From us-east-1.
This ami is for Amazon Windows_Server-2008-R2_SP1-English-64Bit-Base-2013.02.22
Any ideas as to why it is not returning data?
This image has apparently been deregistered. The command only describes images that are available to you; since this one has been deleted, it is no longer returned.
More info here: http://thecloudmarket.com/image/ami-6a70e303--windows-server-2008-r2-sp1-english-64bit-base-2013-02-22#/definition
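If you want to handle this case in code (a small sketch; depending on how the AMI was removed, the API may return an empty list or raise an error), you can check for both:
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")
try:
    images = ec2.describe_images(ImageIds=["ami-6a70e303"])["Images"]
    if not images:
        print("AMI not found: it was likely deregistered or is not visible to this account")
    else:
        print("AMI exists:", images[0].get("Name"))
except ClientError as e:
    # A deregistered or unknown image ID can also surface as an InvalidAMIID error
    print("describe_images failed:", e)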