Install azure-cli package on AWS Lambda

I'm trying to install azure-cli on AWS Lambda for integration purposes. The azure-cli package seems to be too large for AWS Lambda, and I am unable to upload the zip file.
I want to create a service principal (client secret) in Azure from a Lambda function written in Python.
The only way to create a service principal seems to be through azure-cli.
Is there any other way to create a client secret, or can the azure-cli package size be reduced enough to upload it to AWS Lambda?
I have gone through many blogs online, but they all require azure-cli to create a client secret.

"install azure-cli on aws lambda"
Do you mean pip installing a Python package for use within AWS Lambda?
If so: one of the great things about using Python is the huge number of available libraries, which help you implement solutions quickly without coding every class and function from scratch. AWS Lambda bundles a set of Python libraries that you can import into your function out of the box. The problem starts when you need libraries that are not available. One option is to install the library locally in the same folder as your lambda_function.py file, zip the files, and upload the archive to the AWS Lambda console. Installing libraries locally and re-uploading them every time you create a new Lambda function quickly becomes laborious and inconvenient.
To make your life easier, AWS lets you upload your libraries as Lambda layers: a fixed file structure in which you store your libraries, upload them to Lambda independently of your function code, and use them in your code whenever needed. Once you create a Lambda layer, it can be reused by any new Lambda function.
Here are the steps for getting started with AWS Lambda layers for Python.
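For Python, Lambda looks for layer content under a top-level python/ directory inside the layer zip. A minimal sketch of the workflow, where the dependency (requests), layer name (my-deps), and runtime version are placeholders:

# Install the dependency into the folder structure Lambda expects for Python layers
pip install requests -t python/

# Zip the python/ directory itself, not just its contents
zip -r layer.zip python/

# Publish the zip as a layer version
aws lambda publish-layer-version --layer-name my-deps --zip-file fileb://layer.zip --compatible-runtimes python3.12

Attach the resulting layer ARN to your function and the import works as usual. Note for the original question, though: a function plus all of its layers is capped at 250 MB unzipped, so a very large package like azure-cli may still not fit even as a layer.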

Related

Running/Testing an AWS Serverless API written in Terraform

There is no clear path to doing development in a serverless environment.
I have an API Gateway backed by Lambda functions, all declared in Terraform. I deploy to the cloud and everything is fine, but how do I set up a proper workflow for development? It is a struggle to push every small code change to the cloud just to run your code. Terraform has started getting some support from the AWS SAM CLI for running your Lambda functions locally (https://aws.amazon.com/blogs/compute/better-together-aws-sam-cli-and-hashicorp-terraform/), but there is still no way to simulate a local server and test your endpoints in Postman, for example.
First of all, I use the Serverless Framework rather than Terraform, so my answer is based on what you provided and what I found around the topic.
From what I understand of the documentation you linked, you can run the SAM CLI against a Terraform project (see the "Local testing" chapter).
You can follow that documentation to invoke your functions locally.
I recommend using JSON files for your test cases rather than injecting payloads through stdin.
The first step is to create your payload in a JSON file and invoke your Lambda with it, like:
sam local invoke "YOUR_LAMBDA_NAME" -e ./path/to/yourjsonfile.json
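A minimal sketch, assuming a hypothetical API Gateway-style event (the function name, file path, and event fields are placeholders; a real event should mirror whatever your trigger actually sends):

# Create a reusable test event
mkdir -p events
cat > events/test-event.json <<'EOF'
{
  "httpMethod": "GET",
  "path": "/users/42"
}
EOF

# Invoke the function locally with that payload
sam local invoke "YOUR_LAMBDA_NAME" -e events/test-event.json

For the Postman use case, the SAM CLI's Terraform support should also be able to start a local HTTP server with sam local start-api --hook-name terraform, which routes requests to your locally emulated functions; check the linked documentation for the exact flags your SAM CLI version supports.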

AWS Lambda Layer: No module named 'psycopg2._psycopg'

I have a library that I downloaded here:
psycopg2
I tried all the Stack Overflow suggestions so far, but they didn't work.
I placed it in a folder like this and then zipped it into python.zip on Windows. The libraries inside are unzipped.
Then I created a Lambda layer like this:
I've made sure that the runtimes for the layer and the function are the same. Can someone please assist? I've been struggling with this for more than a day.
AWS Lambda runs on Amazon Linux, so a zip file of dependencies built on Windows may not work when your Lambda function runs: psycopg2._psycopg is a compiled C extension, and a Windows build doesn't match Lambda's environment. It is better to build the layer in a Docker environment. Please check below:
https://www.geeksforgeeks.org/how-to-install-python-packages-for-aws-lambda-layers/
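A hedged sketch of the Docker route, following the pattern in that article (the image tag must match your function's runtime, and psycopg2-binary is an assumption: it ships precompiled wheels, which avoids the missing _psycopg C extension):

# Run pip inside an Amazon Linux build image so the compiled bits match Lambda
docker run -v "$PWD":/var/task "public.ecr.aws/sam/build-python3.9" /bin/sh -c "pip install psycopg2-binary -t python/; exit"

# Zip the resulting python/ directory and upload it as the layer
zip -r psycopg2-layer.zip python/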
You need to compile it on the same architecture as the Lambda runtime. I would log into an Amazon Linux EC2 instance, pip install psycopg2 there into a specific directory, then copy those files into your Lambda layer on your Windows machine.
I can send more specific steps if you need them.

Is there a way to deploy a terraform file via an AWS lambda function?

As the title suggests, I am looking for a way to deploy a Terraform file via an AWS Lambda function, triggered by a time-based event. This is my first time working with Terraform and I cannot seem to find anything pertaining to this specific use case.
I am much more versed in CloudFormation, so normally I would use the boto3 library in a Lambda function to deploy a CloudFormation stack. Does anyone know how to do the equivalent with a Terraform file?
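For what it's worth, a hedged sketch of one approach: bundle the terraform binary with the function (realistically as a Lambda container image, given its size) and have the scheduled handler shell out to a script like the one below. The bucket and key names are placeholders:

# Fetch the Terraform configuration from S3; /tmp is Lambda's only writable path
cd /tmp
aws s3 cp s3://my-terraform-bucket/main.tf .

# Initialize and apply non-interactively
terraform init -input=false
terraform apply -auto-approve -input=false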

How to install and run the acme.sh inside AWS lambda?

I am interested in running acme.sh, a Let's Encrypt bash client, within AWS Lambda to generate an ECDSA wildcard SSL cert.
I read that AWS Lambda now supports bash via layers.
The AWS Lambda developer guide doesn't really paint a clear picture of how to do this.
So I was wondering if somebody could help make the developer guide clearer for me in this particular context.
This script is a bit heavy for Lambda. I'd suggest trying AWS Fargate instead, which lets you spin up containers on demand; there's a Dockerfile already in the repo, so start from there.
You can run certbot (which is written in Python) on AWS Lambda's Python runtime to generate wildcard SSL certs using the DNS challenge.
You can also check the complete certbot-lambda script, which generates certs and exports them to AWS Secrets Manager.

How to decrease the size of serverless deploy?

I'm deploying an AWS Lambda function using the Serverless Framework. My problem is that a large file (44 MB) is deployed every time I run sls deploy -f any_fn. I've had similar problems when there is a node_modules folder (which can be quite big).
Is there a way to reduce the upload size by uploading the common files only once, for all functions? Right now it keeps zipping and deploying that same binary file again and again, even though it never changes.
There's no way to do what you propose. AWS Lambda requires you to upload the entire package including all dependencies each time. Everything has to be inside the zip file that is deployed to Lambda.
This was my solution:
1. Upload the large files to an S3 bucket.
2. Download the S3 files in code that runs at the global scope, not inside exports.handler, so it executes only once per container.
3. To make sure the container is re-used, keep the Lambda warm using a CloudWatch timer; a sketch follows this list.
4. This allows you to deploy only the small files.
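A hedged sketch of the keep-warm timer with the AWS CLI (function name, region, and account ID are placeholders):

# A scheduled rule that fires every 5 minutes
aws events put-rule --name keep-warm --schedule-expression "rate(5 minutes)"

# Point the rule at the function
aws events put-targets --rule keep-warm --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:my-fn"

# Allow the Events service to invoke the function
aws lambda add-permission --function-name my-fn --statement-id keep-warm --action lambda:InvokeFunction --principal events.amazonaws.com --source-arn arn:aws:events:us-east-1:123456789012:rule/keep-warm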
You can try using Lambda layers. All you need to do is create a separate Serverless project for dependency management (for example, node_modules), and the rest of the services will refer to it (follow the docs). This should significantly reduce the deployment package size of each individual Lambda.
Use Lambda container images and your problems will be solved! Lambda containers have a 10 GB image size limit, so you can add anything you want in there. I've made many Express apps with
Serverless http
and Lambda containers.
You can also attach an EFS volume to your Lambda and access your files from there.
Check this tutorial
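A hedged sketch of publishing a container-image Lambda (account ID, region, repository, and role names are placeholders):

# Build the image and create a repository for it
docker build -t my-fn .
aws ecr create-repository --repository-name my-fn

# Authenticate Docker to ECR, then tag and push
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-fn:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-fn:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-fn:latest

# Create the function from the pushed image
aws lambda create-function --function-name my-fn --package-type Image --code ImageUri=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-fn:latest --role arn:aws:iam::123456789012:role/my-lambda-role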
