Get API Gateway's IP address in Lambda Function (by Python) - aws-lambda

I have a Lambda function with an API Gateway trigger and wonder if there is any way to get the API's IP address.
I can already get the request host, e.g. 'xxxxxxx.execute-api.ap-xxxx-1.amazonaws.com', from the event with the following code:
import json

def lambda_handler(event, context):
    result = str(event.get('params').get('header').get('Host'))
    return result

The requester's IP address is available here:
event['requestContext']['identity']['sourceIp']
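Putting both together, here is a minimal handler sketch, assuming the Lambda proxy integration event format (where the caller's IP and the Host header are both available):

import json

def lambda_handler(event, context):
    # Minimal sketch assuming the Lambda proxy integration event format.
    source_ip = event['requestContext']['identity']['sourceIp']  # caller's IP
    host = event['headers'].get('Host')  # e.g. xxxxxxx.execute-api.ap-xxxx-1.amazonaws.com
    return {
        'statusCode': 200,
        'body': json.dumps({'sourceIp': source_ip, 'host': host})
    }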

Related

method: "hardhat_impersonateAccount" - What happens when you call this method with an address that doesn't exist?

async function impersonateAccount(acctAddress) {
  await hre.network.provider.request({
    method: "hardhat_impersonateAccount",
    params: [acctAddress],
  });
  return await ethers.getSigner(acctAddress);
}
When forking the blockchain locally with Hardhat, the function above allows developers to impersonate the address passed to it as an argument.
So you can create transactions as if you were the owner of the account.
What happens when you fork mainnet and pass an address that does not exist on mainnet as an argument?
Would it throw an error?
Does it create the account for you locally and give you access?
It will create the account locally with a balance of 0 ETH.
I tried this with the Ropsten address 0xFD391b604E9456c0Ec4aC13Cc881FbAF68868eB2, which currently has 210 testnet ETH and does not exist on the mainnet.
With your code example it will return a valid signer, and if you check the balance of the signer's address it will have 0 ETH.

Create CloudWatch alarm that sets an instance to standby via SNS/Lambda

What I am looking to do is put an instance into standby mode when it hits an alarm state. I already have an alarm set up to detect when my instance hits 90% CPU for a while. The alarm currently sends a Slack message and a text message via SNS, which calls a Lambda function. What I would like to add is having the instance go into standby mode. The instances are in an Auto Scaling group.
I found that you can do this through the CLI using the command:
aws autoscaling enter-standby --instance-ids i-66b4f7d5be234234234 --auto-scaling-group-name my-asg --should-decrement-desired-capacity
You can also do this with boto3:
response = client.enter_standby(
    InstanceIds=[
        'string',
    ],
    AutoScalingGroupName='string',
    ShouldDecrementDesiredCapacity=True|False
)
I assume I need to write another Lambda function that will be triggered by SNS that will use the boto3 code to do this?
Is there a better/easier way before I start?
I already have the InstanceId passed to the Lambda in the event, so I would have to add the ASG name to the event as well.
Is there a way to get the ASG name in the Lambda function when I already have the instance ID, so that I do not have to pass it in with the event?
Thanks!
Your question has a couple of sub-parts, so I'll try to answer them in order:
I assume I need to write another Lambda function that will be triggered by SNS that will use the boto3 code to do this?
You don't need to, you could overload your existing function. I could see a valid argument for either separate functions (separation of concerns) or one function (since "reacting to CPU hitting 90%" is basically "one thing").
Is there a better/easier way before I start?
I don't know of any other way you could do it, other than Cloudwatch -> SNS -> Lambda.
Is there a way to get the ASG name in the Lambda function when I already have the Instance ID?
Yes, see this question for an example. It's up to you whether looking it up in the Lambda or passing an additional parameter is the cleaner option.
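For illustration, here is a minimal sketch of an alternative lookup using boto3's describe_auto_scaling_instances (this is not from the original answer; the full code below reads the aws:autoscaling:groupName tag instead):

import boto3

asg_client = boto3.client('autoscaling')

def get_asg_name(instance_id):
    # Returns the Auto Scaling group name for the instance,
    # or None if it is not in an Auto Scaling group.
    response = asg_client.describe_auto_scaling_instances(InstanceIds=[instance_id])
    instances = response['AutoScalingInstances']
    return instances[0]['AutoScalingGroupName'] if instances else None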
For anyone interested, here is what I came up with for the Lambda function (in Python):
# Puts the instance in the standby mode which takes it off the load balancer
# and a replacement unit is spun up to take its place
#
import json
import boto3

ec2_client = boto3.client('ec2')
asg_client = boto3.client('autoscaling')

def lambda_handler(event, context):
    # Get the id from the event JSON
    msg = event['Records'][0]['Sns']['Message']
    msg_json = json.loads(msg)
    id = msg_json['Trigger']['Dimensions'][0]['value']
    print("Instance id is " + str(id))

    # Capture all the info about the instance so we can extract the ASG name later
    response = ec2_client.describe_instances(
        Filters=[
            {
                'Name': 'instance-id',
                'Values': [str(id)]
            },
        ],
    )

    # Get the ASG name from the response JSON
    #autoscaling_name = response['Reservations'][0]['Instances'][0]['Tags'][1]['Value']
    tags = response['Reservations'][0]['Instances'][0]['Tags']
    autoscaling_name = next(t["Value"] for t in tags if t["Key"] == "aws:autoscaling:groupName")
    print("Autoscaling name is - " + str(autoscaling_name))

    # Put the instance in standby
    response = asg_client.enter_standby(
        InstanceIds=[
            str(id),
        ],
        AutoScalingGroupName=str(autoscaling_name),
        ShouldDecrementDesiredCapacity=False
    )

glue job times out when calling aws boto3 client api

I am using the Glue console, not a dev endpoint. The Glue job is able to access the Glue catalog and table using the code below:
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "glue-db", table_name = "countries")
print "Table Schema:", datasource0.schema()
print "datasource0", datasource0.show()
Now I want to get the metadata for all tables from the Glue database glue-db.
I could not find a function for this in the awsglue.context API, so I am using boto3.
client = boto3.client('glue', 'eu-central-1')

responseGetDatabases = client.get_databases()
databaseList = responseGetDatabases['DatabaseList']

for databaseDict in databaseList:
    databaseName = databaseDict['Name']
    print("databaseName:{}".format(databaseName))

    responseGetTables = client.get_tables(DatabaseName=databaseName, MaxResults=123)
    print("responseGetDatabases{}".format(responseGetTables))
    tableList = responseGetTables['TableList']
    print("response Object{0}".format(responseGetTables))

    for tableDict in tableList:
        tableName = tableDict['Name']
        print("-- tableName:{}".format(tableName))
The code runs in a Lambda function, but fails within the Glue ETL job with the following error:
botocore.vendored.requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='glue.eu-central-1.amazonaws.com', port=443): Max retries exceeded with url: / (Caused by ConnectTimeoutError(, 'Connection to glue.eu-central-1.amazonaws.com timed out. (connect timeout=60)'))
The problem seems to be in the environment configuration. The Glue VPC has two subnets:
private subnet: has an S3 endpoint for Glue and allows inbound traffic from the RDS security group.
public subnet: in the Glue VPC with a NAT gateway; the private subnet is reachable through the NAT gateway. I am not sure what I am missing here.
Try using a proxy while creating the boto3 client:
import boto3
from botocore.config import Config
from pyhocon import ConfigFactory

service_name = 'glue'
default = ConfigFactory.parse_file('glue-default.conf')
override = ConfigFactory.parse_file('glue-override.conf')
host = override.get('proxy.host', default.get('proxy.host'))
port = override.get('proxy.port', default.get('proxy.port'))

config = Config()
if host and port:
    config.proxies = {'https': '{}:{}'.format(host, port)}

client = boto3.Session(region_name=region).client(service_name=service_name, config=config)
glue-default.conf and glue-override.conf are deployed to the cluster by Glue during spark-submit, into the /tmp directory.
I had a similar issue and did the same by using the public library from Glue:
s3://aws-glue-assets-eu-central-1/scripts/lib/utils.py
Can you please try creating the boto3 client as below, specifying the region explicitly?
client = boto3.client('glue',region_name='eu-central-1')
I had a similar problem when I was running this from a Glue Python Shell job.
So I created an endpoint (VPC -> Endpoints) for the Glue service (service name: "com.amazonaws.eu-west-1.glue") and assigned it to the same subnet and security group as the Glue connection used in the Glue Python Shell job.
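For reference, here is a minimal sketch of creating such an interface endpoint with boto3 instead of the console; the VPC, subnet, and security group IDs are placeholders and would be those of your Glue connection:

import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')

# Placeholder IDs: use the VPC, subnet, and security group of the Glue connection.
response = ec2.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='vpc-xxxxxxxx',
    ServiceName='com.amazonaws.eu-west-1.glue',
    SubnetIds=['subnet-xxxxxxxx'],
    SecurityGroupIds=['sg-xxxxxxxx'],
    PrivateDnsEnabled=True,
)
print(response['VpcEndpoint']['VpcEndpointId'])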

AWS Lambda Not Getting Query Parameters

I am setting up my first Lambda function on AWS. I use Python 3.6. My code is as follows:
def lambda_handler(event, context):
    result = {}
    result["Log stream name:"] = context.log_stream_name
    result["Log group name:"] = context.log_group_name
    result["Request ID:"] = context.aws_request_id
    result["Mem. limits(MB)"] = context.memory_limit_in_mb
    result["size of event"] = len(event)
    result["type of event"] = str(type(event))
    return result
I also set up an API Gateway to test the Lambda.
However, no matter what query parameters I pass to the API Gateway, the event is always an empty dict. Below is a sample response. What am I missing?
Request: /test/number?input=5
Status: 200
Latency: 223 ms
Response Body
{
    "Log stream name:": "2018/12/05/[$LATEST]9d9fd5dd157046b4a67792aa49f5d71c",
    "Log group name:": "/aws/lambda/test",
    "Request ID:": "dce7beaf-f8c9-11e8-9cc4-85afb50a0e0c",
    "Mem. limits(MB)": "128",
    "size of event": 0,
    "type of event": "<class 'dict'>"
}
Assuming you don't have request mapping templates, you should turn Lambda Proxy integration on.
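For example, once Lambda Proxy integration is enabled, the query string arrives in the event under queryStringParameters; a minimal sketch, assuming the /test/number?input=5 request above:

import json

def lambda_handler(event, context):
    # With Lambda Proxy integration, query parameters arrive here
    # (the key is None when the request has no query string).
    params = event.get('queryStringParameters') or {}
    input_value = params.get('input')  # '5' for /test/number?input=5
    # Proxy integration expects a response with statusCode and a string body.
    return {
        'statusCode': 200,
        'body': json.dumps({'input': input_value})
    }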

AWS Lambda Chalice: "The request could not be satisfied" Error

I want my lambda function to return the response of another lambda function invoked via AWS API Gateway.
Both functions are deployed by Lambda Chalice to different APIs.
When the first function sends a request to the 2nd function's API endpoint, I get an error response saying "The request could not be satisfied".
Any help is appreciated.
Edit to include some code as requested; shortened for brevity:
import requests
from chalice import Chalice

app = Chalice(app_name='app')  # placeholder app name; omitted in the original snippet

@app.route('/verify_user_token', methods=['GET'], cors=True)
def verify_user_token():
    request = app.current_request
    params = request.query_params or {}
    # do your things here; if all goes well:
    r = requests.get(ANOTHER_AWS_API_GATEWAY_ENDPOINT_URL, data=params)
    return r.text
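One detail worth noting about the snippet above, as a side observation rather than a confirmed cause of the error: requests.get(..., data=params) sends the values as a request body, not as a query string. If the second endpoint reads query_params, a sketch like the following (the URL and values are placeholders) would put them in the URL instead:

import requests

# Placeholder URL, standing in for the constant used in the original snippet.
ANOTHER_AWS_API_GATEWAY_ENDPOINT_URL = 'https://xxxxxxx.execute-api.ap-xxxx-1.amazonaws.com/api/some_route'

params = {'token': 'abc123'}  # example values, for illustration only

# params= URL-encodes the values into the query string of the GET request,
# whereas data= would send them as a request body.
r = requests.get(ANOTHER_AWS_API_GATEWAY_ENDPOINT_URL, params=params)
print(r.status_code, r.text)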
