AWS Lambda Layer: No module named 'psycopg2._psycopg'

I have a library that I downloaded:
psycopg2
I tried all the Stack Overflow suggestions so far, but they didn't work.
I placed it in a folder, then zipped it into python.zip on Windows. The libraries inside are unzipped.
Then I created a Lambda layer from that zip.
I've made sure that the runtime for the layer and the function are the same. Can someone please assist? I've been struggling with this for more than a day.

AWS Lambda runs on an Amazon Linux environment, so a zip of dependencies built on Windows may not work when your Lambda function runs: psycopg2 includes a compiled C extension (the missing _psycopg module), which has to be built for the Lambda platform. It is better to build the layer in a Docker environment. Please check below:
https://www.geeksforgeeks.org/how-to-install-python-packages-for-aws-lambda-layers/
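For example, a minimal sketch of building the layer inside the public SAM build image (assuming Docker is installed; adjust the Python version to match your runtime):

# Install psycopg2-binary into a "python" directory using an Amazon Linux based image
docker run --rm -v "$PWD":/out public.ecr.aws/sam/build-python3.9 \
    pip install psycopg2-binary -t /out/python
# Zip the "python" directory; this is the archive you upload as the layer
zip -r python.zip python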

You need to compile it on an architecture similar to the Lambda runtime. I would log into an Amazon Linux EC2 instance, install psycopg2 there into a specific directory, then copy those files into your Lambda layer zip on your Windows machine.
I can send more specific steps if you need them.
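As a rough sketch of what that looks like on the EC2 instance (assuming a Python version matching your Lambda runtime; paths are placeholders):

# On the Amazon Linux instance: install into the directory layout Lambda layers expect
mkdir -p layer/python
pip3 install psycopg2-binary -t layer/python
cd layer && zip -r ../python.zip python
# Copy python.zip back to your machine (e.g. with scp) and upload it as the layer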

Related

Deploying a lambda with terraform that contains multiple files?

I have a Python Lambda I want to deploy that depends on some other Python scripts; the Lambda can't run without them. Looking at the docs, I don't see a way to package that entire folder as the Lambda and deploy it that way. I understand I can easily add that specific Lambda to my Step Function later, but I need the other scripts to go with it so it can actually run. I know how to use the archive provider to archive the entire folder; could that help here? Thanks.
You need to use the Terraform archive provider to create a zip of the entire folder, then reference that zip file as your Lambda function's source. For example:
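A minimal sketch, assuming the function code lives in ./src with a main.handler entry point (names, runtime, and the IAM role are placeholders):

data "archive_file" "lambda_zip" {
  type        = "zip"
  source_dir  = "${path.module}/src"      # the whole folder, handler plus helper scripts
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "example" {
  function_name    = "example"
  runtime          = "python3.9"
  handler          = "main.handler"
  filename         = data.archive_file.lambda_zip.output_path
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
  role             = aws_iam_role.lambda_role.arn   # assumed to be defined elsewhere
}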

State Machine: How to add code for java lambda function?

I am trying to implement a state machine using Java Lambda functions. I have created a state machine and some Java Lambda functions, but the code editor does not support Java.
The "Upload from" option is available with two different formats:
.zip or .jar file
Amazon S3 location
What kind of file do we need to upload here? Can anyone show me some sample files? Is there a pom file we need to upload for the state machine to work?
For Java Lambdas you can upload a jar file as well as a zip, which can be created by the Gradle and Maven plugins mentioned in the article.
Lambda now also supports container images, so you can package your function as a container as well.
There are also a few popular frameworks you can use to deploy a Java Lambda as a native image, like Quarkus or Micronaut.
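For example, assuming the Maven Shade plugin is configured in your pom.xml, a rough sketch of building and uploading the jar (function name and artifact path are placeholders):

# Build an uber-jar containing your handler and its dependencies
mvn clean package
# Upload it to an existing function (or choose ".zip or .jar file" in the console)
aws lambda update-function-code \
    --function-name my-state-machine-task \
    --zip-file fileb://target/my-function-1.0.jar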

Install azure-cli package on aws lambda

I'm trying to install azure-cli on AWS Lambda for integration purposes. The azure-cli package seems to be too large for AWS Lambda, and I'm unable to upload the zip file.
I want to create a service principal (client secret) in Azure from a Lambda function using Python.
The only way I've found to create a service principal is through azure-cli.
Is there any other way to create the client secret? Or can the azure-cli package size be handled so it can be uploaded to AWS Lambda?
I have gone through many blogs online, but azure-cli appears to be required to create the client secret.
install azure-cli on aws lambda
Do you mean pip installing a Python package within AWS Lambda?
If so: one of the great things about using Python is the availability of a huge number of libraries that help you implement solutions quickly without having to code every class and function from scratch. As mentioned before, AWS Lambda offers a list of Python libraries that you can import into your function. The problem starts when you have to use libraries that are not available. One way to do it is to install the library locally inside the same folder as your lambda_function.py file, zip the files, and upload the archive to your AWS Lambda console. Installing libraries locally and uploading them every time you create a new Lambda function quickly becomes laborious and inconvenient.
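For illustration, a sketch of that zip-based approach (the package name "requests" and the function name are placeholders):

# Install the dependency next to lambda_function.py
pip install requests -t .
# Zip everything, including the installed packages
zip -r function.zip .
# Upload the zip to an existing function
aws lambda update-function-code --function-name my-function --zip-file fileb://function.zip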
To make your life easier, Amazon offers the possibility to upload your libraries as AWS Lambda layers: a file structure in which you store your libraries, upload independently to Lambda, and use in your code whenever needed. Once you create a Lambda layer, it can be used by any other new Lambda function.
The basic workflow for getting started with AWS Lambda layers for Python is to build the layer directory locally, zip it, and publish it as a layer version.
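A minimal sketch of that workflow (layer name, package, and runtime are placeholders):

# Lambda expects Python layer dependencies under a top-level "python" directory
mkdir -p python
pip install some-package -t python/
zip -r layer.zip python
# Publish the layer so any function can reference it
aws lambda publish-layer-version --layer-name my-deps \
    --zip-file fileb://layer.zip --compatible-runtimes python3.9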

AWS Lambda: SAM deployment ... identifying and removing old S3 package versions?

I'm relatively new to AWS Lambda and SAM, and now that I've got things working I have a seemingly simple question I can't find an answer to.
I've spent the last week getting a Lambda app up and running using SAM (build, package, deploy numerous times until it works).
Problem
So now the S3 bucket I'm uploading to has numerous (100 or so) previously uploaded (by sam package) versions of my zipped-up code.
Question
How can you identify which zipped-up packages are the current ones (i.e. used by a current function and/or layer) and remove all the old, obsolete ones?
Is there a way in SAM (cmd line options or in the template files) to
have it automatically delete old versions of your package when you
'sam package' upload a new version?
Is there somewhere in the AWS console to find the key of the zip file in your bucket that a current function or layer is using? (I tried everywhere to find that but couldn't manage to; it's easy to get the ARNs, but not the actual URI in your bucket that they map to.)
Slight Complication
In the bucket I'm using to store the lambda packages, I've also got a custom layer.
So if it were just the app packages, I could easily (right now) just go in, delete everything in the bucket, then do a re-build/package/deploy to clean it up. But that would also delete my layer (and, same problem, I'm not sure which zip file in the bucket the layer is using).
But that approach wouldn't work long term anyway, as I'm planning to put together approx 10-15 different packages/functions, so deleting everything in the bucket when just one of them is updated is not going to work.
thanks for any thoughts, ideas and help!
1. In your packaged.yaml file (generated after invoking sam package) you can see, under each Lambda function, a CodeUri with a unique path s3://your-bucket/id. That id is the one used by the current function and resides in your bucket.
For a layer it's ContentUri.
2. Automatically delete old versions of your package when you 'sam package' upload a new version: I'm not aware of anything like that.
3. Through the AWS console you can see your layer version, but I don't think there is any indication of your function/layer CodeUri/ContentUri.
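As a quick sketch for point 1, you can pull those references straight out of the generated file (assuming it is named packaged.yaml):

# List the S3 URIs the last "sam package" run wired into the template
grep -E "CodeUri|ContentUri" packaged.yaml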
You can try to compare the currently deployed stack with what you've stored in S3. Let's assume you have a stack called test-stack, then you can retrieve the processed stack from CloudFormation using the AWS CLI like this:
AWS_PAGER="" aws cloudformation get-template --stack-name test-stack \
--output json --template-stage Processed
To only get the processed template body, you may want to pipe the output again through
jq -r ".TemplateBody"
Now you have the processed CFN template that tells you which S3 buckets and keys it is using. Here is an example for a lambda function:
MyLambda:
  Type: 'AWS::Lambda::Function'
  Properties:
    Code:
      S3Bucket: my-bucket
      S3Key: 0c53a7ccb1c1762eaeebd96555d13a20
You can then try to delete s3 objects that are not referenced by the current stack.
There used to be a GitHub issue requesting some sort of automatic cleanup mechanism, but it was closed as out of scope: https://github.com/aws/serverless-application-model/issues/557#issuecomment-417867028
It may be worth noting that you could also set up an S3 lifecycle rule to automatically clean up old S3 objects, as suggested here: https://github.com/aws/aws-sam-cli/issues/648 However, I don't think this will always be a suitable solution.
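A hedged sketch of such a lifecycle rule via the CLI (bucket name and retention period are placeholders; note the rollback caveat below):

# Expire deployment artifacts that are older than 30 days
aws s3api put-bucket-lifecycle-configuration --bucket my-sam-artifacts-bucket \
    --lifecycle-configuration '{"Rules":[{"ID":"expire-old-packages","Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":30}}]}'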
Last but not least, there has been an attempt to include an automatic cleanup approach in the SAM documentation, but it was dismissed because:
[...] there are certain use cases that require these packaged S3 objects to persist, and deleting them would cause significant problems. One such example is the "CloudFormation stack deployment rollback" scenario: 1) Deploy version N of a stack, 2) Delete the packaged S3 object that version N uses, 3) Deploy version N+1 with a "bad" template file that triggers a CloudFormation rollback.
https://github.com/awsdocs/aws-sam-developer-guide/pull/3#issuecomment-462993286
So while it is possible to identify obsolete S3 packaged versions, it might not always be a good idea to delete them after all...
Actually, CloudFormation (which SAM is based on) uses S3 as temporary storage only. When you create or update the Lambda function, a copy of the code is made, so you could delete all objects from the bucket and the Lambda function would still work correctly.
Caveat: there are cases where the S3 object may still be required, for example the "CloudFormation stack deployment rollback" scenario (reference):
Deploy version N of a stack
Delete the packaged S3 object that version N uses
Deploy version N+1 with a "bad" template file that triggers a CloudFormation rollback

How to decrease the size of serverless deploy?

I'm deploying an AWS Lambda function using the Serverless Framework. My problem is that a large file (44 MB) is deployed every time I run sls deploy -f any_fn. I've had similar problems when there is a node_modules folder (which can be quite big).
Is there a way to reduce the upload size by uploading the common files only once (and for all functions)? Right now it keeps zipping and deploying that same binary file again and again even though it never changes.
There's no way to do what you propose. AWS Lambda requires you to upload the entire package including all dependencies each time. Everything has to be inside the zip file that is deployed to Lambda.
This was my solution:
Upload the large files to an S3 bucket.
Download the S3 files in code executed in the global scope, not inside exports.handler, so it runs only once per container.
To make sure the container is re-used, keep the Lambda warm using a CloudWatch timer.
This allows you to deploy only the small files. A sketch of the global-scope download is shown below.
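A minimal sketch of that pattern in Node.js (bucket, key, and file names are placeholders; this assumes the aws-sdk v2 client that older Node.js runtimes bundle):

const AWS = require("aws-sdk");
const fs = require("fs");
const s3 = new AWS.S3();

// Global scope: this runs once per container, not on every invocation
const ready = s3.getObject({ Bucket: "my-assets-bucket", Key: "large-file.bin" })
  .promise()
  .then((obj) => fs.promises.writeFile("/tmp/large-file.bin", obj.Body));

exports.handler = async (event) => {
  await ready; // warm invocations resolve immediately
  // ... use /tmp/large-file.bin here ...
  return { statusCode: 200 };
};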
You can try using Lambda layers. All you need to do is create a separate Serverless project for dependency management (e.g. node_modules), and the rest of the services will refer to it (follow the docs). This should reduce the deployment/package size of each individual Lambda significantly.
Use Lambda container images and your problems will be solved! Lambda container images have a 10 GB image size limit, so you can add anything you want in there. I've made many Express apps with serverless-http and Lambda containers.
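A minimal sketch of such a container image for a Node.js function (base image tag and handler name are placeholders):

FROM public.ecr.aws/lambda/nodejs:18
# Install dependencies and copy the app into the task root
COPY package*.json ${LAMBDA_TASK_ROOT}/
RUN npm install --production
COPY . ${LAMBDA_TASK_ROOT}/
# Handler in the form "file.exportedFunction"
CMD ["index.handler"]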
You can also attach an EFS file system to your Lambda and access your files from there. Check this tutorial.
