Cannot delete Lambda@Edge function created by CloudFormation - aws-lambda

I cannot delete a Lambda@Edge function created by CloudFormation. During stack creation an error occurred and a rollback was executed, but at the end the rollback could not remove the Lambda function it had created. We fixed the CloudFormation problem, renamed the resource, and CloudFormation created a new Lambda function, but the old one is still there. There is no CloudFront distribution or any other resource linked to the old Lambda, and we still can't remove it. When we try to delete it we receive this message:
An error occurred when deleting your function: Lambda was unable to delete arn:aws:lambda:us-east-1:326353638202:function:web-comp-cloud-front-fn-prod:2 because it is a replicated function. Please see our documentation for Deleting Lambda@Edge Functions and Replicas.
I know that if there are no resources linked to the Lambda@Edge function, the replicas are deleted after some minutes. But we can't find any linked resources.
Thank you in advance for your help.

I had a similar issue where I simply wasn't able to delete a Lambda@Edge function, and the following helped:
Create a new CloudFront distribution and associate your Lambda@Edge function with it.
Wait for the distribution to be fully deployed.
Remove the Lambda@Edge association from the CloudFront distribution you just created.
Wait for the distribution to be fully deployed.
Additionally, wait for a few more minutes.
Then, try to delete your Lambda@Edge function; a small retry sketch follows below.
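If the delete still fails at this point, the replicas may simply not have been cleaned up yet. As a rough illustration (a minimal boto3 sketch, with a placeholder function name and arbitrary wait times, not an official procedure), you can retry the delete periodically:

```python
import time

import boto3
from botocore.exceptions import ClientError

# Lambda@Edge functions are managed in us-east-1.
lambda_client = boto3.client("lambda", region_name="us-east-1")
FUNCTION_NAME = "my-edge-function"  # placeholder: use your function's name

def delete_when_replicas_gone(max_attempts=12, wait_seconds=300):
    """Retry deletion until CloudFront has cleaned up the edge replicas."""
    for attempt in range(1, max_attempts + 1):
        try:
            lambda_client.delete_function(FunctionName=FUNCTION_NAME)
            print("Function deleted.")
            return
        except ClientError as err:
            # While replicas still exist, the delete is rejected with the
            # "replicated function" error; wait and try again.
            print(f"Attempt {attempt} failed ({err}); retrying...")
            time.sleep(wait_seconds)
    raise RuntimeError("Replicas were not cleaned up within the wait window.")

delete_when_replicas_gone()
```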

The error message clearly indicates that the function is still replicated at the edge, which is why you cannot delete it. So you first have to remove the Lambda@Edge association before deleting the function. If both are created in the same stack, the easiest way is probably to set the Lambda function's DeletionPolicy to Retain and to remove it manually afterwards.
Keep in mind that it can take up to a few hours before the replicas are deleted, not just some minutes. Usually I just wait until the next day to remove them.
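If, like the original poster, you can't find which resource is still linked, one way to check is to scan your CloudFront distributions for Lambda function associations. A minimal boto3 sketch, assuming a placeholder function ARN:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Placeholder ARN: matching by prefix also catches numbered versions.
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:my-edge-function"

# Walk every distribution and report cache behaviors that still reference
# the function, including the default cache behavior.
paginator = cloudfront.get_paginator("list_distributions")
for page in paginator.paginate():
    for dist in page["DistributionList"].get("Items", []):
        behaviors = [dist["DefaultCacheBehavior"]]
        behaviors += dist.get("CacheBehaviors", {}).get("Items", [])
        for behavior in behaviors:
            assocs = behavior.get("LambdaFunctionAssociations", {}).get("Items", [])
            for assoc in assocs:
                if assoc["LambdaFunctionARN"].startswith(FUNCTION_ARN):
                    print(dist["Id"], assoc["LambdaFunctionARN"])
```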

Related

middy-ssm not picking up changes to the lambda's execution role

We're using middy-ssm to fetch and cache SSM parameter values during Lambda initialization. We ran into a situation where the Lambda's execution role did not have permission to perform SSM::GetParameters on the path it attempted to fetch. We updated a policy on the role to allow access, but the Lambda function never seemed to pick up the permission change; instead it kept failing due to missing permissions until the end of its lifecycle (closer to an hour, as requests kept coming in).
I then ran a test where I fetched parameters using both the AWS SDK directly and middy-ssm. Initially the Lambda role didn't have permissions and both methods failed. We updated the policy, and after a couple of minutes the code that used the SDK was able to retrieve the parameter, but the middy middleware kept failing.
I tried to read through the middy-ssm implementation to figure out whether the error result is somehow cached or what is going on there, but couldn't really pinpoint the issue. Any insight and/or suggestions on how to overcome this are welcome! Thanks!
So, as pointed out by Will in the comments, this turned out to be a bug.
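For what it's worth, here is a sketch of the general pitfall such a bug resembles (illustrative Python, not middy's actual implementation): if a failed fetch is cached at module level, warm invocations keep re-raising the old error even after the IAM policy is fixed.

```python
import boto3
from botocore.exceptions import ClientError

ssm = boto3.client("ssm")

# Module-level cache: it survives across warm invocations of the container.
_cache = {}

def get_parameter(name):
    if name not in _cache:
        try:
            value = ssm.get_parameter(Name=name)["Parameter"]["Value"]
            _cache[name] = ("ok", value)
        except ClientError as err:
            # The pitfall: caching the failure means every later warm
            # invocation re-raises it, even after permissions are fixed.
            _cache[name] = ("error", err)
    status, result = _cache[name]
    if status == "error":
        raise result
    return result

def handler(event, context):
    return {"value": get_parameter("/my/app/param")}  # placeholder path
```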

Terraform and OCI: "The existing Db System with ID <OCID> has a conflicting state of UPDATING" when creating multiple databases

I am trying to create 30 databases (oci_database_database resource) under 5 existing db_homes. All of these resources live under a single DB System.
When applying my code, the first database is created successfully, but when Terraform attempts to create the second one I get the following error message: "Error: Service error: IncorrectState. The existing Db System with ID has a conflicting state of UPDATING", which causes the execution to stop.
If I re-apply my code, the second database is created, and then I get the same error when Terraform attempts to create the third one.
I assume I get this message because Terraform starts creating the next database as soon as the previous one is created, while the DB System's status is not yet up to date (still 'UPDATING' instead of 'AVAILABLE').
A good way for the OCI provider to avoid this issue would be to consider a database creation complete only when the creation has indeed finished AND the associated db home and DB System are back in the 'AVAILABLE' state.
Any suggestion on how to address the issue I am encountering?
Feel free to ask if you need any additional information.
Thank you.
As mentioned above, it looks like you have opened a ticket about this on GitHub. What you are experiencing should not happen, as Terraform should retry after seeing the error. As per your GitHub post, the person helping you needs your log with timestamps so they can troubleshoot further. At this stage I would recommend following up there and sharing the requested information.

Can't delete EC2 instance or EBS volume

I'm trying to remove an EC2 instance that was added by mistake while trying to figure out how to deploy my API code.
Every time I terminate it, another one appears.
I now have a list of terminated instances, and 1 too many running instances.
I also have an extra EBS volume which I need to remove, but can't because the Delete option is disabled.
When I read the docs, they say this should work; there is no mention of the Delete option being unavailable.
I can detach the volume, but then another one appears.
This question is similar to another one, which suggests the problem may be caused by a cluster, but I don't have any. I followed the instructions just in case, but none are listed.
There is likely an Auto Scaling group that is recreating it. Open the EC2 console and click Auto Scaling Groups in the left-side menu. Delete the ASG, and any remaining instances will be terminated automatically.
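If you prefer the SDK over the console, a minimal boto3 sketch (the group name below is a placeholder you would take from the listing):

```python
import boto3

autoscaling = boto3.client("autoscaling")

# List the Auto Scaling groups to spot the one recreating your instance.
for group in autoscaling.describe_auto_scaling_groups()["AutoScalingGroups"]:
    print(group["AutoScalingGroupName"], group["DesiredCapacity"])

# ForceDelete=True deletes the group and terminates its instances.
autoscaling.delete_auto_scaling_group(
    AutoScalingGroupName="my-asg",  # placeholder: the name printed above
    ForceDelete=True,
)
```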

Trigger AWS Lambda function whenever a new file arrives on two different S3 prefixes

Every day we receive one incremental file from each of multiple sources, and the sources place these files under two different S3 prefixes, but the files arrive at different times. We want to process both files in one go and generate a report from them. For this I will be using AWS Lambda and AWS Data Pipeline: the Data Pipeline is started by Lambda, and Lambda is triggered whenever a new file arrives.
We were able to do this with a single source: we created an S3 event trigger for the Lambda, and whenever the file arrives the Lambda is triggered, the pipeline starts, the EMR activity runs, and at the end the report is generated.
Now we have a second source as well, and we want to start the activity only when both files have arrived/been uploaded.
I'm not sure whether an AWS Lambda function can be triggered with more than one dependency. I know this can be done through Step Functions; I might go that route if triggering Lambda with multiple dependencies isn't supported.
In short: trigger the AWS Lambda function whenever new files have arrived on both of the two S3 prefixes, but don't trigger it if a file has arrived at only one S3 location and not the other.
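One common pattern (a hedged sketch with placeholder bucket and prefix names, not from the original thread) is to trigger the Lambda on uploads to either prefix and have it check whether the counterpart file already exists before starting the pipeline:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Placeholders: adjust to your bucket and the two incoming prefixes.
BUCKET = "my-data-bucket"
PREFIXES = ("source-a/", "source-b/")

def handler(event, context):
    """Fires on uploads to either prefix; proceeds only when both files exist."""
    key = event["Records"][0]["s3"]["object"]["key"]
    arrived = next(p for p in PREFIXES if key.startswith(p))
    other = next(p for p in PREFIXES if p != arrived)
    counterpart = other + key[len(arrived):]  # same name under the other prefix

    try:
        s3.head_object(Bucket=BUCKET, Key=counterpart)
    except ClientError:
        # The other file hasn't arrived yet; its own S3 event will re-check.
        return {"started": False}

    # Both files are present: start the Data Pipeline here, e.g. with
    # boto3's datapipeline client and its activate_pipeline call.
    return {"started": True}
```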

What happens to leftover files created by a Lambda function

If I write a file to disk inside a Lambda function, what happens to it after I'm done with the function? Do I have to remove it explicitly, or will Amazon automatically delete everything after the function finishes running?
Lambda functions that you execute on AWS run within an isolated environment called a container, which is provisioned just for you and that function. AWS may not clean up this container immediately, in order to make subsequent executions of your Lambda function faster (as the container is already provisioned).
When your Lambda function has not been executed for some amount of time, the container is cleaned up by AWS. If you publish a new revision of your code, old containers are cleaned up and a new one is provisioned for your Lambda function on the next execution.
What is important to keep in mind is that any files you write to /tmp, and any variables you declare outside of the handler code, will still be present on subsequent executions in the same container.
Knowing this, you should consider designing your code to ensure a clean exit (even under a failure condition) if left-overs from past executions could cause a conflict.
It's also important never to assume a container will still exist on the next execution.
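As a quick illustration of that container reuse (a minimal sketch; the marker file name is arbitrary), a file written to /tmp by one invocation can still be visible to the next warm invocation:

```python
import os

MARKER = "/tmp/marker.txt"  # arbitrary file name for the demonstration

def handler(event, context):
    # True on a warm container where a previous invocation left the file.
    container_reused = os.path.exists(MARKER)
    open(MARKER, "w").close()
    # If leftovers could cause conflicts, delete them explicitly instead,
    # e.g. in a try/finally wrapped around the real work.
    return {"container_reused": container_reused}
```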
You can check out some official documentation on this here:
http://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction.html
I hope this helps!
