I am trying to implement a state machine using Java Lambda functions. I have created a state machine and some Java Lambda functions, but the code editor does not support Java.
The "Upload from" option is available here with two different formats:
.zip or .jar file
Amazon S3 location
What kind of file do we need to upload here? Can anyone show me some sample files? Is there a pom file we need to upload for the state machine to work?
For Java Lambdas you can upload a jar file as well as a zip, which can be created by the Gradle and Maven plugins mentioned in the article.
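For example, a minimal handler class that you would compile and package into that jar or zip (package and class names are just placeholders; it needs the aws-lambda-java-core dependency):

package example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// Minimal Lambda handler: Lambda deserializes the event into `input`
// and serializes the return value back to the caller.
public class Handler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String input, Context context) {
        return "Processed: " + input;
    }
}

Note that the pom itself is not uploaded; it is only used at build time (e.g. with the maven-shade-plugin) to produce the artifact you upload.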
Lambda now also supports containers, so you can use a container image as well.
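A minimal sketch of such an image, assuming the handler class above and Maven output paths (adjust to your build layout):

FROM public.ecr.aws/lambda/java:11
# Copy compiled classes and dependency jars into the Lambda task root (Maven paths assumed).
COPY target/classes ${LAMBDA_TASK_ROOT}
COPY target/dependency/*.jar ${LAMBDA_TASK_ROOT}/lib/
CMD ["example.Handler::handleRequest"]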
There are also a few popular frameworks, such as Quarkus or Micronaut, that you can use to deploy a Java Lambda as a native image.
I am migrating existing Lambda functions created using the AWS GUI to a Serverless Framework project for better version control.
A few functions have layers; now I am trying to add the layer in the config file by directly using the ARN of the layer. This layer was created using the GUI, not using the framework.
functions:
  functionName:
    handler: handlerFile.handler
    layers:
      - arn:aws:lambda:...:...:layer:layername:version # Using the ARN directly here, no layer config present in this project
Now when I try to deploy the project, I am getting "Module not found. Can't resolve 'sharp'" and "node_modules doesn't exist or is not a directory", so the layer is not working and the function cannot access the modules (the sharp library is in the layer). All the online tutorials and documentation add the layer files manually to the project, deploy a new layer, and then use that. Is it not possible to use the ARN of an existing layer? It is happening at the webpack compilation step of the deployment. This is the webpack config file:
module.exports = {
  target: 'node',
  mode: 'none'
};
The layer uses the folder structure mentioned in the docs, and it works fine in the existing Lambda function that I created in the GUI. I am using multiple layers, so I didn't want to add the layer files to the serverless project, to keep it clean. The last thing to try would be to manually create the layer directories and deploy the layers first using the Serverless Framework; then it might work (though I am not sure).
Is it possible to use the ARN of an existing layer directly in the serverless function config, given that the layers have already been created using the GUI and not using the framework?
Serverless Framework version: 3
Layer type: Node.js 16
Yes, it is possible to use existing layers exactly in the way you added them; you should be able to use both existing layers via ARN and ones created by the Framework. Could you please share the full error and tell us what version of the Framework you are using?
On a side note: "module not found" might suggest that the handler cannot be found. I see you have hanlerFile in the config instead of (probably) handlerFile. Maybe this typo is causing the problem here?
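If the typo is not it: since the failure happens at the webpack compilation step, webpack tries to resolve every require at build time, including modules that the layer only provides at runtime. One thing worth trying (just a sketch, adjust to your setup) is to mark those modules as externals so webpack does not bundle them:

// webpack.config.js: leave layer-provided modules (e.g. sharp) unresolved
// at build time; they will be available from the layer at runtime.
module.exports = {
  target: 'node',
  mode: 'none',
  externals: ['sharp']
};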
I have a library that I downloaded here:
psycopg2
I tried all the Stack Overflow suggestions so far, but they didn't work.
I placed it in a folder and then zipped it into a python.zip file on Windows (the libraries inside are unzipped).
Then I created a Lambda layer from that zip.
I've made sure that the runtimes for the layer and the function are the same. Can someone please assist? I've been struggling with this for more than a day.
AWS Lambda uses the Amazon Linux environment, so if you create a zip file of dependencies on Windows, it might not work when you run your Lambda function. It is better to build the layer in a Docker environment. Please check the guide below:
https://www.geeksforgeeks.org/how-to-install-python-packages-for-aws-lambda-layers/
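A rough sketch of that approach from a bash shell (the SAM build image tag and Python version are assumptions; match them to your function's runtime):

# Install the dependency into a python/ folder using an Amazon Linux based image.
docker run --rm -v "$PWD":/var/task public.ecr.aws/sam/build-python3.9 \
    /bin/sh -c "pip install psycopg2-binary -t python/"

# Zip the python/ folder into the layer archive.
zip -r python.zip python/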
You need to compile it on the same architecture as the Lambda runtime. I would log into an Amazon Linux EC2 instance, install psycopg2 there into a specific directory, and then copy those files to your Lambda layer from your Windows machine.
I can send more specific steps if you need them.
I have a Python Lambda I want to deploy that depends on some other Python scripts. The Lambda itself can't run without them. Looking at the docs, I don't see a way for me to package that entire "folder" as the Lambda and deploy it that way. I understand I can easily add that specific Lambda to my step function later, but I need the other scripts to go with it so I can actually run it. I know how to use the archive provider to archive the entire folder; could that be helpful here? Thanks.
You need to use the Terraform archive provider to create a zip of the entire folder, then reference that zip file as your Lambda function's source.
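A minimal sketch, assuming the Lambda code and its helper scripts live in a src/ folder next to your Terraform files (all names are illustrative):

# Zip the whole folder, helper scripts included.
data "archive_file" "lambda" {
  type        = "zip"
  source_dir  = "${path.module}/src"
  output_path = "${path.module}/lambda.zip"
}

resource "aws_lambda_function" "this" {
  function_name    = "my-function"
  filename         = data.archive_file.lambda.output_path
  source_code_hash = data.archive_file.lambda.output_base64sha256
  handler          = "handler.lambda_handler"  # file handler.py inside src/
  runtime          = "python3.9"
  role             = aws_iam_role.lambda.arn   # assumed to be defined elsewhere
}

As long as the helper scripts are in the zip alongside the handler, the handler can import them as normal Python modules.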
I want to generate a Lambda's config file dynamically (basically application config) during AWS stack creation.
Only once all the configs are ready should the particular Lambda be created, along with that newly generated file. Can I achieve this using custom resources in AWS CloudFormation?
I searched, but there are only custom resources backed by Lambda, a command runner, or SNS topics; there is no custom resource to write or modify local files. Could someone provide a sample or some guidance on how to do this?
Here are some options I see for your use case:
Use a Lambda-based CloudFormation custom resource for your config file logic. Load base files from S3 or check them out from version control (git) within the custom resource's Lambda function.
Execute a custom script within your build/deploy process (see the sketch after this list). E.g. you have a build.sh script that contains the commands to deploy the CloudFormation templates, but first you execute another script that creates the config file and places it in the source folder for the Lambda function.
Use a Docker-image-based Lambda function and include your config file logic in the Dockerfile. You can also use AWS SAM to build the Docker image within the CloudFormation deployment.
Use AWS CDK and its concept of bundling for Lambda functions.
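As an illustration of the second option, a hypothetical build.sh (all file and stack names are made up):

#!/bin/bash
set -e

# Generate the application config before packaging the stack.
python generate_config.py > src/my-function/app-config.json

# Then deploy the CloudFormation template as usual.
aws cloudformation deploy \
    --template-file template.yml \
    --stack-name my-stack \
    --capabilities CAPABILITY_IAM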
I am trying to create VDMs using an EDMX from SFSF, following this blog.
I create an SCP Business Application template, and then in the srv module I try to add a new data model from an external source, in this case the API Business Hub.
I am trying to use SuccessFactors Employee Central - Personal Information:
https://api.sap.com/api/ECPersonalInformation/overview
The process starts and fails with the message: "OData models with multiple schemas are not supported" and then "Could not generate Virtual Data Model classes."
The external folder is generated as expected, with the XML in the edmx folder, but the csn folder is empty.
As I understand it, this should work with any API from the API Business Hub. Am I doing something wrong, or am I missing something?
Thanks.
Update:
There seems to be an issue with the conversion from EDMX into CSN used by the Web IDE (which is not part of the SAP Cloud SDK).
The Java VDM generated by the OData Generator from the SAP Cloud SDK (used as a component by the Web IDE) should work without any problem.
This looks like an unexpected behavior. We will investigate this further.
In the meantime, as a workaround, you can use our Maven plugin or the CLI to create the data model for you. This is described in detail in this blog post.
The tl;dr version (for the CLI) is:
Determine which version of the SAP Cloud SDK you are using (search for sdk-bom in your parent pom.xml). I assume this to be version 2.16.0 for this example.
Download the CLI library from maven central: https://search.maven.org/artifact/com.sap.cloud.s4hana.datamodel/odata-generator-cli/2.16.0/jar
Download the metadata file (edmx) from the API Business Hub (as linked in your question)
Run the CLI with e.g. the following command:
java -jar odata-generator-cli-2.16.0.jar -i <input-directory> -o <output-directory> -b <base-path>
The <base-path> is the (service-independent) prefix that goes between your host configuration and the actual service name (see the example invocation after these steps).
Add the generated code manually to your project.
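For example, an illustrative invocation (directory names are placeholders; for this SuccessFactors API the base path would typically be the OData v2 prefix, e.g. /odata/v2):

java -jar odata-generator-cli-2.16.0.jar -i ./edmx -o ./vdm -b /odata/v2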
I will update this answer with the results of the investigation.