How to find an EC2 instance by public IP? - amazon-ec2

I'm trying to write a command line tool to manage my EC2 instances.
In the environment where I will be running the tool, only the public IPs of the instances are available, so I need a way to look up EC2 instance IDs by public IP so that I can call methods like reboot.
I've checked the documentation. There's a method called filter that looks promising, but I can't find documentation showing how to use it to filter by public IP.
How can I do this?

Here is an example using the boto3 SDK:

import boto3

client = boto3.client('ec2')
response = client.describe_instances(
    Filters=[
        {
            'Name': 'ip-address',
            'Values': [
                '54.x.x.x',
            ]
        },
    ]
)
response['Reservations'][0]['Instances'][0]['InstanceId']
# 'i-0aaxxxxxxxxxxx'
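Once you have the instance ID you can pass it straight to the other EC2 calls the question mentions. Continuing from the response above, a minimal reboot sketch:

instance_id = response['Reservations'][0]['Instances'][0]['InstanceId']
client.reboot_instances(InstanceIds=[instance_id])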
Reference -
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Client.describe_instances

Related

How can I get the list of EC2 instances for an app using tags?

I am trying to get all the instance (server) IDs based on the app. Let's say I have an app on a server. How do I know which apps belong to which server? I want my code to find all the instances (servers) that belong to each app. Is there any way to look up the app in the EC2 console and figure out which servers are associated with it, preferably using tags?
import boto3
client = boto3.client('ec2')
my_instance = 'i-xxxxxxxx'
(Disclaimer: I work for AWS Resource Groups)
Seeing from your comments that you use tags for all apps, you can use AWS Resource Groups to create a group. The example below assumes you tagged your instances with App:Something; it first creates a Resource Group and then lists all the members of that group.
Using this group you can, for example, automatically get a CloudWatch dashboard for those resources, or use the group as a target in Run Command.
import json
import boto3

RG = boto3.client('resource-groups')

RG.create_group(
    Name='Something-App-Instances',
    Description='EC2 Instances for Something App',
    ResourceQuery={
        'Type': 'TAG_FILTERS_1_0',
        'Query': json.dumps({
            'ResourceTypeFilters': ['AWS::EC2::Instance'],
            'TagFilters': [{
                'Key': 'App',
                'Values': ['Something']
            }]
        })
    },
    Tags={
        'App': 'Something'
    }
)

# List all resources in a group using a paginator
paginator = RG.get_paginator('list_group_resources')
resource_pages = paginator.paginate(GroupName='Something-App-Instances')
for page in resource_pages:
    for resource in page['ResourceIdentifiers']:
        print(resource['ResourceType'] + ': ' + resource['ResourceArn'])
Another option, if you just want the list without saving it as a group, is to use the Resource Groups Tagging API directly.
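A rough sketch of the same lookup via the Resource Groups Tagging API (no group is created; the App=Something tag filter is carried over from the example above):

import boto3

tagging = boto3.client('resourcegroupstaggingapi')
paginator = tagging.get_paginator('get_resources')
pages = paginator.paginate(
    TagFilters=[{'Key': 'App', 'Values': ['Something']}],
    ResourceTypeFilters=['ec2:instance'],
)
for page in pages:
    for resource in page['ResourceTagMappingList']:
        print(resource['ResourceARN'])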
What you install on an Amazon EC2 instance is totally up to you. You do this by running code on the instance itself. AWS is not involved in the decision of what you install on the instance, nor does it know what you installed on an instance.
Therefore, you will need to keep track of "what apps are installed on what server" yourself.
You might choose to take advantage of tags on instances to add some metadata, such as the purpose of the server. You could also use AWS Systems Manager to run commands on instances (e.g. to install software), or even use AWS CodeDeploy to roll out software to fleets of servers.
However, even with all of these deployment options, AWS cannot track what you have put on each individual server. You will need to do that yourself.
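To make the Systems Manager option mentioned above concrete, here is a rough boto3 sketch. It assumes the instances are registered with SSM and carry an App tag; the tag value and the command are placeholders:

import boto3

ssm = boto3.client('ssm')
# Run a shell command on every instance tagged App=Something
response = ssm.send_command(
    Targets=[{'Key': 'tag:App', 'Values': ['Something']}],
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': ['uptime']},
)
print(response['Command']['CommandId'])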
Update: You can use AWS Resource Groups to view/manage resources by tag.
Here's some sample Python code to list tags by instance:
import boto3

ec2_resource = boto2.resource('ec2', region_name='ap-southeast-2') if False else boto3.resource('ec2', region_name='ap-southeast-2')

instances = ec2_resource.instances.all()
for instance in instances:
    for tag in instance.tags or []:  # instance.tags is None when an instance has no tags
        print(instance.instance_id, tag['Key'], tag['Value'])
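If you only want the instances carrying a particular tag, the resource API can also filter server-side; a small self-contained sketch (the App=Something tag is an assumption carried over from the earlier examples):

import boto3

ec2_resource = boto3.resource('ec2', region_name='ap-southeast-2')
# Only instances tagged App=Something are returned
for instance in ec2_resource.instances.filter(
        Filters=[{'Name': 'tag:App', 'Values': ['Something']}]):
    print(instance.instance_id)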

Filter AWS resources using regex in aws-sdk-go

So I have several different types of AWS resources tagged as xxx/yyy/<generated_id>, and I need to fetch them using the Go SDK.
Here is sample code for subnets; the filters look the same for every other resource.
This doesn't work.
var resp *ec2.DescribeSubnetsOutput
resp, err = d.ec2Client().DescribeSubnets(&ec2.DescribeSubnetsInput{
    Filters: []*ec2.Filter{
        {
            Name:   aws.String("vpc-id"),
            Values: []*string{&d.VpcId},
        },
        {
            Name:   aws.String(fmt.Sprintf(`tag:"xxx/yyy.[*]"`)),
            Values: []*string{aws.String("owned")},
        },
    },
})
This does:
aws ec2 describe-subnets --filters 'Name=tag:"xxx/yyy.[*]",Values=owned'
I'm obviously doing something wrong, can someone point out what?
There is nothing in the API documentation to suggest that DescribeSubnets accepts a regular expression in filter names: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_DescribeSubnets.html
If it works in the CLI, that's likely something the CLI is doing on top of what the SDK offers. The Go SDK is like any other AWS SDK; it exposes the AWS API in a language-specific way. The AWS CLI adds convenience features on top of the API to make it more useful on the command line, but that doesn't mean those features are exposed by the API or any published SDK.
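If the server-side filters won't do what you need, one workaround is to filter on the supported keys via the API and apply the regular expression yourself after the call. The sketch below uses boto3 purely for brevity; the same two steps translate directly to the Go SDK (the VPC ID and the pattern are placeholders):

import re
import boto3

ec2 = boto3.client('ec2')
# Server-side: restrict to the VPC; client-side: match tag keys against the regex
pattern = re.compile(r'^xxx/yyy\.')
subnets = ec2.describe_subnets(
    Filters=[{'Name': 'vpc-id', 'Values': ['vpc-xxxxxxxx']}]
)['Subnets']
matching = [
    s for s in subnets
    if any(pattern.match(t['Key']) and t['Value'] == 'owned'
           for t in s.get('Tags', []))
]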
I ran into this problem recently; my issue was the version of the SDK I was using.
Filters: []*ec2.Filter{
is for the v1 SDK module, and it was not working because I was importing github.com/aws/aws-sdk-go-v2/aws, while
Filters: []types.Filter{
is for v2, and this one worked in my case.
https://aws.amazon.com/blogs/developer/aws-sdk-for-go-version-2-general-availability/

JovoFramework - LAUNCH - isNewUser() is always false on AWS Lambda

I'm using jovo framework (version 1.0.0) and I'm facing the following problem:
In app.js:
app.setHandler({
    'LAUNCH': function() {
        if (this.user().isNewUser()) {
            this.tell('This will never be told on AWS Lambda.');
        }
    }
});
Running locally I can distinguish between (isNewUser === true) and (isNewUser === false), but as soon as I execute it as a Lambda function on AWS, isNewUser is always false. Why is that?
and additionally
'NEW_USER': function() {
}
isn't triggered either.
System Environment on local machine:
Windows 10 Home
NodeJS: v8.9.1
Lambda function:
NodeJS 6.10
I really appreciate any help you can provide.
Both 'NEW_USER' and this.user().isNewUser() need to have access to a database, where the number of sessions is stored for each user.
When you're prototyping locally, it uses the default File Persistence database integration, which saves the data to a local db/db.json file.
However, on AWS Lambda, the local db doesn't work, so you need to set up a DynamoDB configuration. Learn more here: Jovo Framework Docs > Database Integrations > DynamoDB.
Please remember to give your Lambda function role the right permission to access DynamoDB data: AWS Lambda Permissions Model.

Fetch boto3 credentials only from EC2 instance profile

The boto3 documentation lists the order in which credentials are searched, and the EC2 instance metadata service is consulted only at the very end.
How do I force boto3 to fetch the credentials only from the EC2 instance profile or the instance metadata service?
I came across this which lets me get the temporary credentials from the metadata service and then I could pass this on to create a boto3 session.
However, my question is whether there is a better way to do this. Is it possible to create a boto3 session by specifying the provider to use, i.e. InstanceMetadataProvider (link)? I tried searching the docs a lot, but couldn't figure it out.
The reason - the context under which this script runs also has environment variables with AWS keys set which would obviously take precedence, however I need the script to run only with the IAM role assigned to the EC2 instance.
So I ended up doing the following, which works as expected. It always uses the temporary credentials from the instance role, and the script is short-lived, so the validity of the creds is not an issue.
import boto3
from botocore.credentials import InstanceMetadataProvider, InstanceMetadataFetcher

# Credentials come straight from the instance metadata service (IAM role)
provider = InstanceMetadataProvider(iam_role_fetcher=InstanceMetadataFetcher(timeout=1000, num_attempts=2))
creds = provider.load().get_frozen_credentials()
client = boto3.client('ssm', region_name='us-east-1', aws_access_key_id=creds.access_key,
                      aws_secret_access_key=creds.secret_key, aws_session_token=creds.token)
If there is a better way to do, please feel free to post.
You could also use boto3.
>>> session = boto3.Session(region_name='foo_region')
>>> credentials = session.get_credentials()
>>> credentials = credentials.get_frozen_credentials()
>>> credentials.access_key
u'ABC...'
>>> credentials.secret_key
u'DEF...'
>>> credentials.token
u'ZXC...'
>>> access_key = credentials.access_key
>>> secret_key = credentials.secret_key
It's a similar idea, but I find it returns much faster:
import boto3
import botocore

botocore_session = botocore.session.get_session()
# Pull out the credential resolver and promote the instance-metadata ('iam-role')
# provider so it is consulted before environment variables
credential_provider = botocore_session.get_component('credential_provider')
instance_metadata_provider = credential_provider.get_provider('iam-role')
credential_provider.insert_before('env', instance_metadata_provider)

boto3_session = boto3.Session(botocore_session=botocore_session)
client = boto3_session.client(...)
resource = boto3_session.resource(...)

runtime configuration for AWS Lambda function

I have an AWS Lambda function that needs to connect to a remote TCP service. Is there any way to configure the Lambda function with the IP address of the remote service after the function has been deployed to AWS, or do I have to bake the configuration into the packaged Lambda function before it's deployed?
I found an approach that I use to support a test environment and a production environment which should help you.
I call the test version of the function TEST-ConnectToRemoteTcpService and the production version PRODUCTION-ConnectToRemoteTcpService. This allows me to pull out the environment name using a regular expression.
Then I store config/test.json and config/production.json in the zip file that I upload as the function's code. The zip file is extracted into the directory process.env.LAMBDA_TASK_ROOT when the function runs, so I can load that file and get the config I need.
Some people don't like storing the config in the code zip file, which is fine - you can just load a file from S3 or use whatever strategy you like.
Code for reading the file from the zip:
const fs = require('fs');

const readConfiguration = () => {
    return new Promise((resolve, reject) => {
        // The TEST-... / PRODUCTION-... prefix of the function name gives the environment
        let environment = /^(.*?)-.*/.exec(process.env.AWS_LAMBDA_FUNCTION_NAME)[1].toLowerCase();
        console.log(`environment is ${environment}`);
        fs.readFile(`${process.env.LAMBDA_TASK_ROOT}/config/${environment}.json`, 'utf8', function (err, data) {
            if (err) {
                reject(err);
            } else {
                var config = JSON.parse(data);
                console.log(`configuration is ${data}`);
                resolve(config);
            }
        });
    });
};
Support for environment variables was added for AWS Lambda starting November 18, 2016. Adding a variable to an existing function can be done via command line as shown below or from the AWS Console.
aws lambda update-function-configuration \
--function-name MyFunction \
--environment Variables={REMOTE_SERVICE_IP=100.100.100.100}
Documentation can be found here.
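Inside the function the value is then read from the environment like any other variable; a minimal sketch assuming a Python runtime (the handler name and return shape are illustrative):

import os

def lambda_handler(event, context):
    # REMOTE_SERVICE_IP is the variable set with update-function-configuration above
    remote_ip = os.environ['REMOTE_SERVICE_IP']
    return {'remote_service_ip': remote_ip}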
You can invoke the Lambda function via SNS topic subscription and have it configure itself from the payload inside the SNS event.
Here's the official guide on how to do that Invoking Lambda via SNS.
A few options, depending on the use case:
If your config will not change, you can use S3 objects and access them from the Lambda function, or have the Lambda triggered by config changes. (Though this is the cheapest way, you are limited in what you can do compared to the other alternatives.)
If the config changes constantly, then DynamoDB key/value storage is an alternative.
If DynamoDB is too expensive for the frequent reads/writes, you can have the TCP service post its config to an SQS queue (or to an SNS topic if you want to trigger the Lambda whenever the service posts a new config).
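As a rough sketch of the DynamoDB option (the table name, key and attribute below are invented for illustration):

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('lambda-config')                        # hypothetical table
item = table.get_item(Key={'name': 'remote-service'}).get('Item', {})
remote_ip = item.get('ip')                                     # hypothetical attribute holding the IP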
