middy-ssm not picking up changes to the lambda's execution role - aws-lambda

We're using middy-ssm to fetch & cache SSM parameter values during Lambda initialization. We ran into a situation where the execution role of the Lambda did not have permission to perform ssm:GetParameters on the path it attempted to fetch. We updated a policy on the role to allow access, but the Lambda function never seemed to pick up the permission change; it kept failing with the same missing-permissions error until the end of its lifecycle (close to an hour, as requests kept coming in).
I then ran a test where I fetched parameters both with the AWS SDK directly and via middy-ssm. Initially the Lambda role didn't have permissions and both methods failed. We updated the policy, and after a couple of minutes the code that used the SDK directly was able to retrieve the parameter, but the middy middleware kept failing.
I read through the middy-ssm implementation to figure out whether the error result is somehow cached or what else might be going on, but couldn't really pinpoint the issue. Any insight and/or suggestions on how to overcome this are welcome! Thanks!

So as pointed out by Will in the comments, this turned out to be a bug.
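For context, our middleware setup looks roughly like the sketch below (this assumes @middy/ssm v3-style options; the parameter path and cache interval are illustrative, not our real config). With these options the fetched values are cached indefinitely by default, so giving cacheExpiry a finite value at least bounds how long any cached result, good or bad, is reused before the middleware calls SSM again:

import middy from '@middy/core'
import ssm from '@middy/ssm'

const lambdaHandler = async (event: unknown, context: any) => {
  // With setToContext: true the fetched value is exposed on the context object.
  console.log('got parameter:', context.dbPassword)
  return { statusCode: 200 }
}

export const handler = middy(lambdaHandler).use(
  ssm({
    // Illustrative parameter path; replace with your own.
    fetchData: { dbPassword: '/my-app/prod/db-password' },
    // Re-fetch (instead of reusing the cached result) after 5 minutes.
    cacheExpiry: 5 * 60 * 1000,
    setToContext: true,
  })
)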

Related

Oracle ORDS: GET request returns old data, then after a period of time the changed data

I am having a problem with Oracle REST Data Services (ORDS for short) and I can't find a solution.
The problem is as follows:
We are using ORDS via a Tomcat web server, and I have two endpoints defined, one to update a dataset and one to get all datasets from this table.
If I update a value via my endpoint, the change is written to the table, but if I then try to get the table, ORDS responds only with the old, unchanged data. After a certain period of time, while constantly retrying the GET, it responds with the expected values (after at most one minute, sometimes earlier).
Because of this behaviour I suspected some kind of caching, but I can't find any such configuration in the Oracle database or in Tomcat.
Another point in favour of this theory: I logged what happens in my GET procedure and found that only the one request with the correct values gets logged, as if the others never even happened.
The requests returning the old value come back in the 4-8 ms range, while the request with the correct data takes 100-200 ms.
Thanks for your help :)
I tried logging what happens, but only the request with the fresh values was logged.
I tried restarting the Tomcat web server to make sure the cache was cleared, but this didn't fix the problem.
I searched for a configuration in ORDS or Oracle where a cache would be defined, but none was set.
I tried setting the value via a SQL UPDATE instead of an endpoint, but even then the change shows up only after a delay.
Do you have a full overview of the communication path? Maybe there is a proxy in between?
If Tomcat has no caching configuration and you restarted the web server during your tests yet still see the same issue, then there may be more to it; a quick client-side check is sketched below.
Kind regards
M-Achilles
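If a proxy or HTTP cache really is sitting between the client and ORDS, a check along these lines can help confirm it (the endpoint URL is illustrative, and which headers the cache adds, if any, depends on the proxy in use):

const url = 'https://example.com/ords/myschema/mytable/'; // illustrative ORDS endpoint

async function probe(label: string, headers: Record<string, string> = {}) {
  const start = Date.now();
  const res = await fetch(url, { headers }); // global fetch (Node 18+)
  await res.text(); // consume the body so the timing covers the full response
  console.log(
    label,
    `${Date.now() - start} ms`,
    'age:', res.headers.get('age'),
    'x-cache:', res.headers.get('x-cache'),
    'via:', res.headers.get('via'),
  );
}

async function main() {
  // A normal request, then one that asks intermediaries not to serve a cached copy.
  await probe('default ');
  await probe('no-cache', { 'Cache-Control': 'no-cache', Pragma: 'no-cache' });
}

main();

If the fast 4-8 ms responses carry cache-related headers (or disappear when no-cache is sent), the stale data is coming from an intermediary rather than from ORDS itself.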

Cannot delete Lambda@Edge created by CloudFormation

I cannot delete a Lambda@Edge function created by CloudFormation. During stack creation an error occurred and a rollback was executed. Afterwards we couldn't remove the Lambda that had been created; we resolved the CloudFormation problem, renamed the resource, and CloudFormation created a new Lambda, but the old one is still there. There is no CloudFront distribution or other resource linked to the old Lambda, and still we can't remove it. When we try to remove it we receive this message:
An error occurred when deleting your function: Lambda was unable to delete arn:aws:lambda:us-east-1:326353638202:function:web-comp-cloud-front-fn-prod:2 because it is a replicated function. Please see our documentation for Deleting Lambda@Edge Functions and Replicas.
I know that if there are no resources linked to the Lambda@Edge function, the replicas are deleted after some minutes, but we can't find the linked resources.
Thank you in advance for your help.
I had a similar issue where I simply wasn't able to delete a Lambda@Edge function, and the following helped:
Create a new CloudFront distribution, and associate your Lambda@Edge function with this new distribution.
Wait for the distribution to be fully deployed.
Remove the association of your Lambda@Edge function from the CloudFront distribution that you just created.
Wait for the distribution to be fully deployed.
Additionally, wait for a few more minutes.
Then, try to delete your Lambda@Edge function.
The error message clearly indicates that the function is still replicated at the edge, which is why you cannot delete it. So you first have to remove the Lambda@Edge association before deleting the function. If they are created in the same stack, the easiest way is probably to set the Lambda function's DeletionPolicy to Retain and to remove it manually afterwards.
Keep in mind that it can take up to a few hours, not just a few minutes, before the replicas are deleted. Usually I just wait until the next day to remove them.
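If you are unsure which distributions still reference the function, a sketch like the following (AWS SDK for JavaScript v3) scans every distribution's cache behaviors for an association. The function ARN is the one from the error message above; matching without the version suffix catches every published version:

import {
  CloudFrontClient,
  ListDistributionsCommand,
} from '@aws-sdk/client-cloudfront';

// ARN taken from the error message above, without the ":2" version suffix.
const functionArn =
  'arn:aws:lambda:us-east-1:326353638202:function:web-comp-cloud-front-fn-prod';

const cf = new CloudFrontClient({});

async function findAssociations() {
  let marker: string | undefined;
  do {
    const page = await cf.send(new ListDistributionsCommand({ Marker: marker }));
    for (const dist of page.DistributionList?.Items ?? []) {
      // Check the default behavior plus every additional cache behavior for Lambda associations.
      const behaviors = [dist.DefaultCacheBehavior, ...(dist.CacheBehaviors?.Items ?? [])];
      const associated = behaviors.some((b) =>
        (b?.LambdaFunctionAssociations?.Items ?? []).some((a) =>
          a.LambdaFunctionARN?.startsWith(functionArn),
        ),
      );
      if (associated) {
        console.log('Still associated with distribution:', dist.Id);
      }
    }
    marker = page.DistributionList?.IsTruncated ? page.DistributionList.NextMarker : undefined;
  } while (marker);
}

findAssociations();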

Terraform and OCI : "The existing Db System with ID <OCID> has a conflicting state of UPDATING" when creating multiple databases

I am trying to create 30 databases (oci_database_database resource) under 5 existing db_homes. All of these resources are under a single DB System.
When applying my code, the first database is successfully created; then, when Terraform attempts to create the second one, I get the following error message: "Error: Service error: IncorrectState. The existing Db System with ID has a conflicting state of UPDATING", which causes the execution to stop.
If I re-apply my code, the second database is created, and then I get the same error when Terraform attempts to create the third one.
I am assuming I get this message because Terraform starts creating the next database as soon as the previous one is created, while the DB System's status is not yet up to date (still UPDATING instead of AVAILABLE).
A good way for the OCI provider to avoid this issue would be to consider a database creation complete only when the creation itself has finished AND the associated DB Home and DB System statuses are back to AVAILABLE.
Any suggestion on how to address the issue I am encountering?
Feel free to ask if you need any additional information.
Thank you.
As mentioned above, it looks like you have opened a ticket regarding this on GitHub. What you are experiencing should not happen, as Terraform should retry after seeing the error. As per your GitHub post, the person helping you needs your log with timestamps so they can troubleshoot further. At this stage I would recommend following up there and sharing the requested info.

How to invoke a Step Function from a Lambda which is inside a VPC?

I am trying to invoke a Step Function from a Lambda which is inside a VPC.
I get an exception that the HTTP request timed out.
Is it possible to access Step Functions from a Lambda in a VPC?
Thanks,
If your Lambda function is running inside a VPC, you need to add a VPC endpoint for Step Functions.
In the VPC console, under Endpoints > Create Endpoint, the service name for Step Functions is com.amazonaws.us-east-1.states (the region part varies).
Took me a while to find this in the documentation.
It is possible, but it depends on how you are trying to access Step Functions. If you are using the AWS SDK, it takes care of signing the requests for you (see the sketch below); if you are issuing raw HTTP calls instead, you will need to deal with the AWS authentication headers yourself.
The other thing you will need to look at is the role that the Lambda executes under. Without seeing how you have things configured I can only suggest things I encountered: you may need to adjust your policies so the role has the sts:AssumeRole action, and another possibility is adding iam:PassRole to the same execution role.
The easiest approach is to grant your execution role administrator privileges, test it out, then work backwards to lock down the access. Remember to treat your Lambda function like another API user account and set privileges appropriately.
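For reference, a minimal sketch of starting an execution from handler code with the AWS SDK for JavaScript v3. The state machine ARN coming from an environment variable is an assumption here, and this call will time out exactly as described in the question unless the VPC endpoint (or a NAT route) is in place:

import { SFNClient, StartExecutionCommand } from '@aws-sdk/client-sfn';

// Region and credentials are picked up from the Lambda runtime environment.
// The execution role also needs states:StartExecution on the target state machine.
const sfn = new SFNClient({});

export const handler = async (event: unknown) => {
  const result = await sfn.send(
    new StartExecutionCommand({
      // Illustrative: the target state machine ARN is passed in via configuration.
      stateMachineArn: process.env.STATE_MACHINE_ARN,
      input: JSON.stringify(event),
    })
  );
  return { executionArn: result.executionArn };
};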

What happens to leftover files created by lambda function

If I write a file to disk inside a Lambda function, what happens to it after the function is done? Do I have to remove it explicitly, or will Amazon automatically delete everything after the function finishes running?
Lambda functions that you execute on AWS run within an isolated space called a container, which is provisioned just for you and that function. AWS may not clean up this container immediately for the purpose of making subsequent executions of your lambda function faster (as the container is already provisioned).
When your Lambda function is not executed for "an amount of time" the container will be cleaned up by AWS. If you publish a revision of your code then old containers are cleaned up and a new one is provisioned for your Lambda function on next execution.
What is important to keep in mind is that the files you mention, and any variables you declare outside of the handler code, will still be present on subsequent executions. The same goes for your /tmp files.
Knowing this, you should consider redesigning your code to ensure a clean exit (even under a failure condition) if left-overs from past executions could cause a conflict, as sketched below.
It's also important to make sure that you never assume a container will still exist on next execution.
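A small sketch of what that clean-up discipline might look like in a Node.js handler (the scratch file name is illustrative):

import { promises as fs } from 'fs';
import * as path from 'path';

// Illustrative scratch file; /tmp is the only writable path in the Lambda runtime.
const TMP_FILE = path.join('/tmp', 'scratch.json');

export const handler = async (event: unknown) => {
  try {
    // The container may be reused, so never assume /tmp starts out empty.
    await fs.writeFile(TMP_FILE, JSON.stringify(event));
    // ... do whatever work needs the file ...
    return { statusCode: 200 };
  } finally {
    // Delete the scratch file even on failure so left-overs can't affect the next invocation.
    await fs.rm(TMP_FILE, { force: true });
  }
};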
You can check out some official documentation on this here:
http://docs.aws.amazon.com/lambda/latest/dg/lambda-introduction.html
I hope this helps!
