I want to use ffmpeg with my Lambda function, but I don't know its runtime environment (which Unix variant?), so I can't just copy the binary in directly.
I could install the library on every invocation of my Lambda, but that seems like wasted effort and money. Are there any alternatives?
Lambda offers the ability to prepackage shared code via layers. These layers can be shared publicly, and fortunately an ffmpeg layer has already been built:
https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:145266761615:applications~ffmpeg-lambda-layer
Use this as a layer in your Lambda function, and you'll be able to use ffmpeg without worrying about the details.
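Once attached, the layer's contents are unpacked under /opt. Here is a minimal sketch of calling it from a Python handler, assuming this layer places the binary at /opt/bin/ffmpeg:

    # Minimal sketch of calling ffmpeg from a Python handler.
    # Assumption: the layer unpacks the binary to /opt/bin/ffmpeg
    # (layer contents land under /opt, and /opt/bin is on the PATH).
    import subprocess

    def lambda_handler(event, context):
        # /tmp is the only writable path in Lambda, so read/write media there
        result = subprocess.run(
            ["/opt/bin/ffmpeg", "-i", "/tmp/input.mp4", "/tmp/output.webm"],
            capture_output=True,
            text=True,
        )
        return {"returncode": result.returncode, "stderr": result.stderr[-1000:]}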
AWS Lambda runs on Amazon Linux or Amazon Linux 2, depending on the function runtime; the full list is in the Lambda runtimes documentation.
You can launch an EC2 instance running either of them to compile and test your executable.
The Serverless Video Preview and Analysis Service project uses ffmpeg in a Lambda function for video processing; it shows one way to package an ffmpeg executable with a Lambda function.
AWS Lambda suggests using ADOT (AWS Distro for OpenTelemetry) over the AWS X-Ray SDK to instrument Python code in Lambda. The documentation that suggests this is here:
https://docs.aws.amazon.com/lambda/latest/dg/python-tracing.html
However, the ADOT documentation does not explain how to properly instrument container-image-based Lambda functions.
How can this be achieved using Lambda container images?
ADOT provides this solution as a Lambda layer. Since Lambda functions packaged as container images do not support adding Lambda layers, we developers take on the responsibility of packaging the preferred runtimes and dependencies as part of the container image during the build process.
My understanding is that for ADOT to instrument the code, ADOT needs to be invoked so it can wrap the subsequent network calls and ensure they are traced. The Lambda Python base container image uses a shell script as its entrypoint, so how will copying the layer data to /opt ensure that the ADOT wrappers get called when the Lambda is invoked?
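One way to sidestep the entrypoint question entirely is to initialize the instrumentation yourself at the top of the handler module instead of relying on the layer's wrapper script (which is normally hooked in via the AWS_LAMBDA_EXEC_WRAPPER environment variable). A minimal sketch, assuming the upstream opentelemetry-sdk, opentelemetry-exporter-otlp, and opentelemetry-instrumentation-botocore packages were pip-installed into the image at build time:

    # Sketch: manual OpenTelemetry setup inside a container-image Lambda,
    # instead of relying on the layer's /opt wrapper script.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
    from opentelemetry.instrumentation.botocore import BotocoreInstrumentor

    # Module-level setup runs once per cold start.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)
    BotocoreInstrumentor().instrument()  # wraps boto3/botocore network calls

    import boto3

    def handler(event, context):
        tracer = trace.get_tracer(__name__)
        with tracer.start_as_current_span("handler"):
            boto3.client("s3").list_buckets()  # this call is now traced
        return {"statusCode": 200}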
Recently I've been surveying serverless platforms, and I've noticed differences in the architectures of platforms such as AWS Lambda and OpenWhisk. Some platforms use containers as the function executor (most of these are open source, like Fn).
However, commercial serverless platforms like Alibaba Cloud Function Compute prefer a combination of virtual machines and containers, where the containers run on host VMs.
This confuses me, so I want to ask: why do these platforms need an extra host VM rather than plain containers? What's the benefit? One possible answer, I guess, is stronger security.
I am building a Lambda function to be deployed on a Greengrass core device, and it has additional dependencies such as NumPy. I have followed the instructions in the official documentation but have not been able to get it working.
I created a virtual environment, installed all of the dependencies, and zipped the library files and directories together with the main code file that contains the function handler.
Can anyone help me out regarding this issue?
Make sure the hierarchy of your deployment package is correct: the Python dependencies should be at the top level of the zip, alongside your handler module, not nested inside a virtual-environment directory.
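For example, the deployment zip for a NumPy-dependent handler should unpack to something like this (names are illustrative):

    my-function.zip
    ├── lambda_function.py   <- module containing the handler
    ├── numpy/               <- dependencies at the zip root,
    ├── numpy.libs/             not under venv/lib/.../site-packages
    └── ...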
My strongest suggestion is to use a framework to deploy your Lambdas. We're using the Serverless Framework, which has many benefits; in particular, it creates this hierarchy behind the scenes:
https://www.serverless.com/blog/serverless-python-packaging
Disclosure: I'm working for Lumigo, a 3rd party company that helps you to debug your serverless architecture.
I am trying to deploy a binary utility along with my Python-based lambda to a Greengrass group.
Using a layer seemed like a natural way to accomplish this. No errors are encountered during the deployment, but I cannot find any evidence that the layer exists on the edge device (Ubuntu 18.04).
I have also been unable to find any documentation that confirms or denies layers work with Greengrass.
To move forward, I am bundling the utility within the Lambda deployment package itself. This seems workable, but a layer seems more flexible...
Any clarification would be appreciated.
Just from my experience: no, layers do not get pushed to the Greengrass host, so if your Lambda function depends on libraries in a layer, it will fail to run, even though the deployment goes through without issue.
I believe AWS is no longer updating Greengrass V1, as they're focused on Greengrass V2, so I doubt this will be addressed. As a workaround, package everything within the Lambda function itself and don't use layers for functions pushed to Greengrass.
Can someone explain how AWS lambda would know about any custom classes or methods I have written? Do you need to copy them into the zip file?
Basically I am outsourcing part of my server to Lambda where I have custom classes and methods.
Thanks
Just include the Python files with the additional classes in your zip file, and then you can do a regular import foo from your Lambda. This is handy if you have a custom library that you need to use in several Lambdas. See Creating a Deployment Package (Python).
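For example, if the zip holds the handler plus a foo.py with your custom classes (file and class names here are illustrative):

    # foo.py -- custom classes, sitting next to the handler at the zip root
    class Greeter:
        def greet(self, name):
            return f"Hello, {name}"

    # lambda_function.py -- the zip root is on sys.path, so a plain import works
    from foo import Greeter

    def lambda_handler(event, context):
        return {"message": Greeter().greet(event.get("name", "world"))}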
To do this in C# and create a not-so-micro-service Lambda (yes, I've done this), you can reference the other projects. The trick is that those other projects also have to be AWS "Lambdas", so that they share the same underlying .NET Core references. I don't think it is possible to take a standard .NET Framework DLL and reference it.
Can you do this via reflection, without a reference? I don't know. If you need that, you might as well create another Lambda and call it directly from the first one. Lambda-to-Lambda communication is very fast; Lambda-to-API-Gateway-to-Lambda, not so much, because of the added latency. You could also use Step Functions, which is essentially Lambda-to-Lambda with state.
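For illustration (in Python rather than C#, since that's what the rest of this page uses), a direct Lambda-to-Lambda call is just an SDK invoke; the target function name here is hypothetical:

    # Sketch: synchronous Lambda-to-Lambda invocation with boto3.
    import json
    import boto3

    lambda_client = boto3.client("lambda")

    def lambda_handler(event, context):
        response = lambda_client.invoke(
            FunctionName="my-other-function",    # hypothetical target
            InvocationType="RequestResponse",    # synchronous call
            Payload=json.dumps({"key": "value"}),
        )
        return json.loads(response["Payload"].read())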