I have an AWS Lambda function and have created an event source mapping for it. When I delete the function using Python boto3, does the event source mapping also get deleted along with it?
No. A Lambda event source mapping is a separate, customer-managed resource: it has its own CRUD API and its own CloudFormation resource type (AWS::Lambda::EventSourceMapping). You must delete it yourself with delete_event_source_mapping:
# client = boto3.client("lambda"); queue_arn and function_name defined elsewhere
res = client.list_event_source_mappings(EventSourceArn=queue_arn, FunctionName=function_name)
assert len(res["EventSourceMappings"]) == 1
mapping_uuid = res["EventSourceMappings"][0]["UUID"]

# Deleting the function leaves the mapping behind
client.delete_function(FunctionName=function_name)
res = client.list_event_source_mappings(EventSourceArn=queue_arn, FunctionName=function_name)
assert len(res["EventSourceMappings"]) == 1

# The mapping has to be deleted explicitly
client.delete_event_source_mapping(UUID=mapping_uuid)
res = client.list_event_source_mappings(EventSourceArn=queue_arn, FunctionName=function_name)
assert len(res["EventSourceMappings"]) == 0  # wait a few seconds for deletion to finish
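If you want the mappings cleaned up whenever the function is removed, a small helper along these lines should work (a sketch: delete_function_with_mappings is a hypothetical name, and the paginator assumes the default boto3 Lambda client):

import boto3

client = boto3.client("lambda")

def delete_function_with_mappings(function_name):
    # Delete every event source mapping attached to the function first,
    # then delete the function itself.
    paginator = client.get_paginator("list_event_source_mappings")
    for page in paginator.paginate(FunctionName=function_name):
        for mapping in page["EventSourceMappings"]:
            client.delete_event_source_mapping(UUID=mapping["UUID"])
    client.delete_function(FunctionName=function_name)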
What I am looking to do is set an instance to standby mode when it hits an alarm state. I already have an alarm set up to detect when my instance hits 90% CPU for a while. The alarm currently sends a Slack and text message via SNS, which calls a Lambda function. What I would like to add is having the instance go into standby mode. The instances are in an autoscaling group.
I found that you can perform this through the CLI using the following command:
aws autoscaling enter-standby --instance-ids i-66b4f7d5be234234234 --auto-scaling-group-name my-asg --should-decrement-desired-capacity
You can also do this with boto3:
response = client.enter_standby(
    InstanceIds=[
        'string',
    ],
    AutoScalingGroupName='string',
    ShouldDecrementDesiredCapacity=True|False
)
I assume I need to write another Lambda function that will be triggered by SNS that will use the boto3 code to do this?
Is there a better/easier way before I start?
I already have the InstanceId passed in the event to the Lambda, so I would have to add the ASG name to the event as well.
Is there a way to get the ASG name in the Lambda function when I already have the Instance ID? Then I do not have to pass it in with the event.
Thanks!
Your question has a couple of sub-parts, so I'll try to answer them in order:
I assume I need to write another Lambda function that will be triggered by SNS that will use the boto3 code to do this?
You don't need to; you could overload your existing function. I could see a valid argument for either separate functions (separation of concerns) or one function (since "reacting to CPU hitting 90%" is basically "one thing").
Is there a better/easier way before I start?
I don't know of any other way you could do it, other than CloudWatch -> SNS -> Lambda.
Is there a way to get the ASG name in the Lambda function when I already have the Instance ID?
Yes, see this question for an example. It's up to you whether doing it in the Lambda or passing an additional parameter looks like the cleaner option.
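For instance, a minimal sketch using the Auto Scaling API directly (the instance id below is a placeholder for the one you already extract from the event):

import boto3

asg_client = boto3.client('autoscaling')

instance_id = 'i-0123456789abcdef0'  # hypothetical; comes from the SNS event in practice

# Look up the ASG membership for a single instance; the list comes back
# empty if the instance is not part of any Auto Scaling group.
info = asg_client.describe_auto_scaling_instances(InstanceIds=[instance_id])
asg_name = info['AutoScalingInstances'][0]['AutoScalingGroupName']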
For anyone interested, here is what I came up with for the Lambda function (in Python):
# Puts the instance in standby mode, which takes it off the load balancer;
# a replacement unit is spun up to take its place.
import json
import boto3

ec2_client = boto3.client('ec2')
asg_client = boto3.client('autoscaling')

def lambda_handler(event, context):
    # Get the id from the event JSON
    msg = event['Records'][0]['Sns']['Message']
    msg_json = json.loads(msg)
    id = msg_json['Trigger']['Dimensions'][0]['value']
    print("Instance id is " + str(id))

    # Capture all the info about the instance so we can extract the ASG name later
    response = ec2_client.describe_instances(
        Filters=[
            {
                'Name': 'instance-id',
                'Values': [str(id)]
            },
        ],
    )

    # Get the ASG name from the instance's aws:autoscaling:groupName tag
    #autoscaling_name = response['Reservations'][0]['Instances'][0]['Tags'][1]['Value']
    tags = response['Reservations'][0]['Instances'][0]['Tags']
    autoscaling_name = next(t["Value"] for t in tags if t["Key"] == "aws:autoscaling:groupName")
    print("Autoscaling name is - " + str(autoscaling_name))

    # Put the instance in standby
    response = asg_client.enter_standby(
        InstanceIds=[
            str(id),
        ],
        AutoScalingGroupName=str(autoscaling_name),
        ShouldDecrementDesiredCapacity=False
    )
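If you later want to bring the instance back into service, the counterpart call is exit_standby; a rough sketch, reusing asg_client, id, and autoscaling_name from the function above:

    # Move the instance out of standby and back into the Auto Scaling group
    response = asg_client.exit_standby(
        InstanceIds=[str(id)],
        AutoScalingGroupName=str(autoscaling_name)
    )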
I am trying to add data to a Kinesis Firehose delivery stream using put_record with Python 3.6 on AWS Lambda. When calling put_record on the stream I get the following exception:
An error occurred (ResourceNotFoundException) when calling the PutRecord operation: Stream MyStream under account 123456 not found.
I am executing the following Python code to add data to the stream.
import boto3
import json

def lambda_handler(event, context):
    # key_id and access_key are defined elsewhere (omitted here)
    session = boto3.Session(aws_access_key_id=key_id, aws_secret_access_key=access_key)
    kinesis_client = session.client('kinesis', region_name='ap-south-1')
    records = event['Records']
    write_records = list()
    count = 0
    for record in records:
        count += 1
        if str(record['eventName']).lower() == 'insert':
            rec = record['dynamodb']['Keys']
            rec.update(record['dynamodb']['NewImage'])
            new_record = dict()
            new_record['Data'] = json.dumps(rec).encode()
            new_record['PartitionKey'] = 'PartitionKey' + str(count)
            # The following line throws the exception
            kinesis_client.put_record(StreamName="MyStream", Data=new_record['Data'], PartitionKey='PartitionKey' + str(count))
        elif str(record['eventName']).lower() == 'modify':
            pass
    write_records = json.dumps(write_records)
    print(write_records)
MyStream's status is active and the source for the stream data is set to "Direct PUT and other sources".
If you are sure that the stream name is correct, you can create the client with the regional endpoint of Kinesis:
kinesis_client = session.client('kinesis', region_name='ap-south-1', endpoint_url='https://kinesis.ap-south-1.amazonaws.com/')
AWS Service Endpoints List
https://docs.aws.amazon.com/general/latest/gr/rande.html
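A quick way to sanity-check the name and region is to list the streams the client can actually see, for example (region name taken from the question):

import boto3

kinesis_client = boto3.client('kinesis', region_name='ap-south-1')

# If "MyStream" is not in this list, the client is pointed at a different
# region or account than the one the stream was created in.
print(kinesis_client.list_streams()['StreamNames'])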
Hope this helps!
Below is my boto3 code snippet for Lambda. My requirement is to read the entire set of CloudWatch logs and, based on certain criteria, push them to S3.
I have used the below snippet to read the CloudWatch logs from each stream. This works absolutely fine for smaller amounts of data. However, for massive logs inside each LogStream it throws a
Throttle exception - (reached max retries: 4)
The default/max value of limit is 50.
I tried giving certain other values but to no avail. Please check and let me know if there is any alternative for this.
while v_nextToken is not None:
    cnt += 1
    loglist += '\n' + "No of iterations inside describe_log_streams 2nd stage - Iteration Cnt" + str(cnt)
    # Note: the max value of limit is 50, which is also the default
    try:
        desc_response = client.describe_log_streams(logGroupName=vlog_groups, orderBy='LastEventTime', nextToken=v_nextToken, descending=True, limit=50)
    except Exception as e:
        print("Throttling error" + str(e))
You can use a CloudWatch Logs subscription filter for Lambda, so the Lambda will be triggered directly from the log stream. You can also consider subscribing a Kinesis stream, which has some advantages.
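A minimal sketch of wiring that up with boto3 (the log group name, ARNs, statement ID, and filter name below are placeholders):

import boto3

logs_client = boto3.client('logs')
lambda_client = boto3.client('lambda')

log_group = '/aws/my-app/example-log-group'
lambda_arn = 'arn:aws:lambda:us-east-1:123456789012:function:process-logs'

# CloudWatch Logs must be allowed to invoke the function
lambda_client.add_permission(
    FunctionName=lambda_arn,
    StatementId='cwlogs-subscription',
    Action='lambda:InvokeFunction',
    Principal='logs.amazonaws.com',
    SourceArn='arn:aws:logs:us-east-1:123456789012:log-group:' + log_group + ':*',
)

# Subscribe the Lambda to the log group; an empty filter pattern matches every event
logs_client.put_subscription_filter(
    logGroupName=log_group,
    filterName='forward-to-lambda',
    filterPattern='',
    destinationArn=lambda_arn,
)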
I use the GitHub API v3 to get the forks count for a repository; I use:
GET /repos/:owner/:repo/forks
The request brings me only 30 results even if a repository contains more. I googled a little and found that, due to a memory restriction, the API returns only 30 results per page, and if I want the next results I have to specify the page number.
But I don't need all this information; all I need is the number of forks.
Is there any way to get only the number of forks?
If I start to loop page by page, my script risks crashing if a repository contains thousands of results.
You can try and use a search query.
For instance, for my repo VonC/b2d, I would use:
https://api.github.com/search/repositories?q=user%3AVonC+repo%3Ab2d+b2d
The JSON answer gives me "forks_count": 5.
Here is one with more than 4000 forks (consider only the first result, meaning the one whose "full_name" is actually "strongloop/express"):
https://api.github.com/search/repositories?q=user%3Astrongloop+repo%3Aexpress+express
"forks_count": 4114,
I had a job where I needed to get all forks of a GitHub project as git remotes.
I wrote this simple Python script: https://gist.github.com/urpylka/9a404991b28aeff006a34fb64da12de4
At the base of the program is a recursive function for getting the forks of a fork, and I met the same problem (the GitHub API was returning only 30 items).
I solved it by incrementing the ?page= parameter and adding a check for an empty response from the server.
import requests

def get_fork(username, repo, forks, auth=None):
    page = 1
    while 1:
        r = None
        request = "https://api.github.com/repos/{}/{}/forks?page={}".format(username, repo, page)
        if auth is None: r = requests.get(request)
        else: r = requests.get(request, auth=(auth['login'], auth['secret']))
        j = r.json()
        r.close()
        if 'message' in j:
            print("username: {}, repo: {}".format(username, repo))
            print(j['message'] + " " + j['documentation_url'])
            if str(j['message']) == "Not Found": break
            else: exit(1)
        if len(j) == 0: break
        else: page += 1
        for item in j:
            forks.append({'user': item['owner']['login'], 'repo': item['name']})
            if auth is None:
                get_fork(item['owner']['login'], item['name'], forks)
            else:
                get_fork(item['owner']['login'], item['name'], forks, auth)
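Usage might look like this (the repo is the one from the first answer above; unauthenticated requests are heavily rate-limited, so pass auth for anything large):

forks = []
get_fork("strongloop", "express", forks)
print("collected {} forks".format(len(forks)))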
I have a Sinatra app that runs inside of EventMachine. Currently, I am taking a POST request of JSON data, deferring storage, and returning a 200 OK status code. The deferred task simply pushes the data to a queue and increments a stats counter. The code is similar to:
class App < Sinatra::Base
  ...
  post '/' do
    json = request.body.read
    operation = lambda do
      push_to_queue(json)
      incr_incoming_stats
    end
    callback = lambda {}
    EM.defer(operation, callback)
  end
  ...
end
My question is, how do I test this functionality? If I use Rack::Test::Methods, then I have to put in something like sleep 1 to make sure the deferred task has completed before checking the queue and stats, so my test may look like:
it 'should push data to queue with valid request' do
  post('/', @json)
  sleep 1
  @redis.llen("#{@opts[:redis_prefix]}-queue").should > 0
end
Any help is appreciated!
The solution was pretty simple and once I realized it, I felt kind of silly. I created a test-helper that contained the following:
module EM
  def self.defer(op, callback)
    callback.call(op.call)
  end
end
Then just include this in your test files. This way the defer method will just run the operation and callback on the same thread.