Use existing Lambda layer (AWS) in Serverless Framework project

I am migrating existing lambda functions created using the AWS GUI to a serverless framework project for better version control.
A few of the functions have layers. Now I am trying to add a layer in the config file by directly using the ARN of the layer. This layer was created using the GUI, not using the framework.
functions:
  functionName:
    handler: handlerFile.handler
    layers:
      - arn:aws:lambda:...:...:layer:layername:version # Using the ARN directly here, no layer config present in this project
Now when I try to deploy the project, I get Module not found. Can't resolve 'sharp', so the layer is not working and the modules cannot be accessed; the sharp library is in the layer. I also see node_modules doesn't exist or is not a directory.
All the online tutorials and documentation add the layer files manually to the project, deploy a new layer, and then use that. Is it not possible to use the ARN of an existing layer?
It is happening at the webpack compilation step of the deployment. This is the webpack config file:
module.exports = {
  target: 'node',
  mode: 'none'
}
The layer uses the folder structure mentioned in the docs, and it works fine in the existing Lambda function that I created in the GUI. I am using multiple layers, so I didn't want to add the layer files to the serverless project, to keep it clean. The last thing to try would be to manually create the layer directories and deploy the layers first using the Serverless Framework; then it might work (though I'm not sure).
Is it possible to use the ARN of an existing layer directly in the serverless function config given that the layers have already been created using the GUI and not using the framework?
Serverless Framework version: 3
Layer type: Node.js 16

Yes, it is possible to use existing layers exactly the way you added them; you should be able to use both existing layers referenced via ARN and layers created by the Framework. Could you please share the full error and tell us which version of the Framework you are using?
On a side note, module not found might suggest that the handler cannot be found. I see you have hanlerFile in the config instead of (probably) handlerFile. Maybe this typo is causing the problem here?
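Since the failure happens at the webpack compilation step, one pattern worth trying (an assumption, not something stated in the question or answer above) is to tell webpack not to resolve or bundle modules that the layer provides at runtime, by declaring them as externals. A minimal sketch of the webpack config with that change:

// webpack.config.js -- minimal sketch; 'sharp' is assumed to be the only
// module supplied by the layer rather than by local node_modules
module.exports = {
  target: 'node',
  mode: 'none',
  externals: ['sharp'] // layer-provided modules, listed by name
};

With the module marked as external, webpack leaves the require('sharp') call intact, and at execution time the Lambda runtime resolves it from the layer's /opt/nodejs/node_modules directory.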

Related

Using Helm For Deploying Spring Boot Microservice to K8s

We have built a few microservices (MS) which have been deployed to our company's K8s clusters.
Currently, each of our MSs is built as a Docker image and deployed manually using the following steps, and it works fine:
Create a ConfigMap
Install a Service.yaml
Install a Deployment.yaml
Install an Ingress.yaml
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions, and I hope to get an answer here before absorbing the entire doc along with Go and Sprig and then finding out it won't fit our needs.
Our Spring MS has 5 separate application.properties files that are specific to each of our 5 environments. These properties files are simple multi-line key=value format with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I created a chart called ./deploy in the root directory, which auto-generated ./templates and a values.yaml.
The problem is that I need to access the application.properties files outside of the Chart's ./deploy directory.
From helm, I'd like to reference these 2 files from within my configmap.yaml's Data: section.
./src/main/resource/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Putting this as an answer as there's not enough space in the comments!
Check the 12-factor app link I shared above, in particular the section on configuration... The explanation there is not great, but the idea behind it is to build one container and deploy that container in any environment without having to modify it, plus to have the ability to change the configuration without the need to create a new release (the latter cannot be done if the config is baked into the container). This allows, for example, changing a DB connection pool size (or any other config parameter) without a release. It's also good from a security point of view, as you might not want the container running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc.). This approach is similar to the Continuous Delivery principle of build once, deploy anywhere.
I assume that when you run the app locally, you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties), and have the parameters that change between environments in helm environment variables. I know you mentioned you don't want to do this, but this is considered a good practice nowadays (might be considered otherwise in the future...).
I also think it's important to be pragmatic, if in your case you don't feel the need to have the configuration outside of the container, then don't do it, and probably using the suggestion I gave to change a command line parameter to pick the config file works well. At the same time, keep in mind the 12 factor-app approach in case you find out you do need it in the future.
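For the concrete question of embedding the properties files as-is, a minimal configmap.yaml sketch is shown below. It assumes the files are copied into the chart (for example under deploy/config/), because Helm's .Files.Get can only read files that live inside the chart directory, and .Values.environment is an illustrative value holding the target environment name:

# deploy/templates/configmap.yaml -- minimal sketch
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  application.properties: |-
{{ .Files.Get (printf "config/%s/application.properties" .Values.environment) | indent 4 }}
  logback.xml: |-
{{ .Files.Get "config/logback.xml" | indent 4 }}

The |- block scalar keeps the key=value content exactly as written, so the files do not have to be rewritten in YAML or JSON.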

State Machine: How to add code for a Java Lambda function?

I am trying to implement a state machine using Java Lambda functions. I have created a state machine and some Java Lambda functions, but the code editor does not support Java.
An Upload from option is available here with 2 different formats:
.zip or .jar file
Amazon S3 location
What kind of file do we need to upload here? Can anyone show me some sample files? Is there any pom file we need to upload for the state machine function to work?
For Java Lambdas you can upload a jar file as well as a zip, which can be created with the Gradle and Maven plugins mentioned in the article.
Lambda also supports containers now, so you can also use a container image.
There are also a few popular frameworks you can use to deploy a Java Lambda as a native image, like Quarkus or Micronaut.
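To make the sample file part concrete: the artifact you upload is a jar (or zip) containing a handler class and its dependencies. A minimal sketch of such a handler, assuming the aws-lambda-java-core dependency is on the classpath and with purely illustrative names:

package example;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// Minimal Lambda handler that a Step Functions Task state could invoke.
// The function's handler setting would be: example.Hello::handleRequest
public class Hello implements RequestHandler<Map<String, Object>, Map<String, Object>> {

    @Override
    public Map<String, Object> handleRequest(Map<String, Object> input, Context context) {
        // Echo the state input back with a marker so the next state can see it ran.
        input.put("processedBy", context.getFunctionName());
        return input;
    }
}

Packaging this with the Maven Shade plugin (or the equivalent Gradle Zip task) produces the .jar/.zip you upload; the pom.xml itself is not uploaded, it only drives the build.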

Create services taking an existing subscription as reference

Is it possible to take the existing services in an Azure subscription as a reference and create similar services, with updated parameters, in another subscription, either by using PowerShell or an ARM template?
We are missing a few details when capturing the reference details manually and then creating the services using ARM templates. We want this to be end-to-end automation.
You can export the ARM template from existing resources using Export-AzureRmResourceGroup or Save-AzureRmResourceGroupDeploymentTemplate (https://azure.microsoft.com/en-us/blog/export-template/) and then redeploy that template to a new environment.
However, if you are using Export-AzureRmResourceGroup to try to dynamically create an ARM template from existing resources then the generated template will likely not be ready to automatically redeploy. There may be issues with resource dependencies, resources not getting exported correctly, template limitations, etc. It will take a fair bit of manual effort to update the generated ARM template to get it to a point where it can be correctly redeployed into another subscription.
If you are able to use Save-AzureRmResourceGroupDeploymentTemplate (i.e. if your existing resources were all deployed via ARM templates with no post-deployment ad-hoc changes) then the templates should be ready to deploy.
For future reference, the best solution is to always deploy all of your resources via ARM templates (or something like Terraform) where your configuration is all saved in a source repository and you are deploying via a CI/CD pipeline.
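A minimal sketch of that flow with the AzureRM cmdlets named above (the resource group names and subscription id are placeholders):

# Export the current state of a resource group to an ARM template
Export-AzureRmResourceGroup -ResourceGroupName "source-rg" -Path .\template.json

# After reviewing and fixing up the generated template, switch to the
# target subscription and redeploy it there
Select-AzureRmSubscription -SubscriptionId "<target-subscription-id>"
New-AzureRmResourceGroup -Name "target-rg" -Location "westeurope"
New-AzureRmResourceGroupDeployment -ResourceGroupName "target-rg" -TemplateFile .\template.json

The newer Az module equivalents (Export-AzResourceGroup, New-AzResourceGroupDeployment, etc.) follow the same pattern.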

Cannot create Virtual Data Model classes using Cloud SDK

I am trying to create VDMs using an EDMX from SFSF, following this blog.
I create an SCP Business Application template, and then in the srv module I try to add a new data model from an external source, in this case the API Business Hub.
I try to use SuccessFactors Employee Central - Personal Information.
https://api.sap.com/api/ECPersonalInformation/overview
The process starts and fails with the message: "OData models with multiple schemas are not supported" and then "Could not generate Virtual Data Model classes."
The external folder is generated as expected with the XML in the EDMX folder but the csn folder is empty.
As I understand it, this should work with any API from the Business Hub. Am I doing something wrong, or am I missing something?
Thanks.
Update:
There seems to be an issue with the conversion from EDMX into CSN used by the Web IDE (which is not part of the SAP Cloud SDK).
The Java VDM generated by the OData Generator from the SAP Cloud SDK (used as a component by the Web IDE) should work without any problem.
This looks like an unexpected behavior. We will investigate this further.
In the meantime, as a workaround, you can use our maven plugin or CLI to create the data model for you. This is described in detail in this blog post.
The tl;dr version (for the CLI) is:
Determine which version of the SAP Cloud SDK you are using (search for sdk-bom in your parent pom.xml). I assume this to be version 2.16.0 for this example.
Download the CLI library from maven central: https://search.maven.org/artifact/com.sap.cloud.s4hana.datamodel/odata-generator-cli/2.16.0/jar
Download the metadata file (edmx) from the API Business Hub (as linked in your question)
Run the CLI with e.g. the following command:
java -jar odata-generator-cli-2.16.0.jar -i <input-directory> -o <output-directory> -b <base-path>
The <base-path> is the service-independent prefix that sits between your host configuration and the actual service name.
Add the generated code manually to your project.
I will update this answer with the results of the investigation.

AWS: multiple Lambdas in the same project

My team is in the process of creating a project that follows a serverless architecture; we are using AWS Lambda, Node.js, and the Serverless Framework.
The application will be a set of services each one will be handled as a separate function.
I found examples combining multiple functions under the same project and then using CloudFormation to deploy them all at once, but with some defects we don't want, like having the resources of different modules deployed with each Lambda function. This causes some redundancy, and if we want to change one file the change is not reflected in all Lambda functions, as it's local to the hosting Lambda function.
https://github.com/serverless/examples/tree/master/aws-node-rest-api-with-dynamodb
My question:
Do you know the best way to organize a project containing multiple functions, each with its own .yaml and configuration, with the ability to deploy all of them when needed or to selectively deploy only the updated functions?
I think I found a good way to do this, similar to the one mentioned here: https://serverless.readme.io/docs/project-structure
I created a service containing some Lambda functions, each one contained in a separate folder. I also have a libs folder at the root level containing all the common modules that can be used by my Lambda functions.
So my structure looks like:
Root
  functions
    function1
    function2
  libs
  tests
  resources
  serverless.yml (root level)
and in my yml file I'm pointing to the Lambdas with relative paths like:
functions:
  hello1:
    handler: functions/function1/hello1.hello
Now I can deploy all functions with one Serverless command, or selectively deploy a specific changed function, and the deployed Lambda will only contain the required code.
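With current Framework versions, one way to make the "only contain the required code" part explicit is to package functions individually, excluding all function folders at the service level and re-including the right one per function. A minimal sketch, reusing the illustrative paths from above:

service: my-service

package:
  individually: true
  patterns:
    - '!functions/**'   # exclude all function folders by default

functions:
  hello1:
    handler: functions/function1/hello1.hello
    package:
      patterns:
        - functions/function1/**
        - libs/**
  hello2:
    handler: functions/function2/hello2.hello
    package:
      patterns:
        - functions/function2/**
        - libs/**

serverless deploy then packages and deploys everything, while serverless deploy function -f hello1 redeploys just that single function.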
