I am trying to understand if we can create a Lambda function which, when executed, will create a new Lambda function (if it doesn't exist) or publish a new version if it already exists. The new Lambda function's code will be stored in CodeCommit, so the invoking Lambda function should be able to clone the CodeCommit repo and use that cloned code to create the new one. Any suggestions?
Here is how I would do it:
CodeCommit (Trigger) --> Lambda (Trigger External Process to build and zip it to S3) --> S3 (Create Trigger) --> Lambda (Update Lambda Code [Self or Other Lambda])
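For the last step of this flow, a minimal boto3 sketch of the handler could look like the following. The function name, role ARN, runtime, handler, bucket and key are placeholders I've assumed, not part of the original answer.

import boto3

lambda_client = boto3.client("lambda")

# Placeholder values -- adjust to your own account and build layout.
TARGET_FUNCTION = "my-generated-function"
TARGET_ROLE_ARN = "arn:aws:iam::123456789012:role/my-lambda-role"
BUCKET = "my-build-artifacts"
KEY = "builds/my-generated-function.zip"


def create_or_update_function(bucket, key):
    """Create the target Lambda if it doesn't exist, otherwise update its code."""
    try:
        lambda_client.get_function(FunctionName=TARGET_FUNCTION)
    except lambda_client.exceptions.ResourceNotFoundException:
        # Function does not exist yet: create it from the zip in S3.
        lambda_client.create_function(
            FunctionName=TARGET_FUNCTION,
            Runtime="python3.12",
            Role=TARGET_ROLE_ARN,
            Handler="handler.handler",
            Code={"S3Bucket": bucket, "S3Key": key},
        )
        return
    # Function exists: push the new zip and publish a new version.
    lambda_client.update_function_code(
        FunctionName=TARGET_FUNCTION,
        S3Bucket=bucket,
        S3Key=key,
        Publish=True,
    )


def handler(event, context):
    create_or_update_function(BUCKET, KEY)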
With CodeCommit you can also pass in parameters that identify the repository, so different repositories can trigger different build processes.
Hope it helps.
Related
I would like to trigger an AWS Lambda function whenever a new file is added to Amazon FSx, in order to perform an action on the file using the Lambda function that gets notified.
While considering AWS CloudTrail, EventBridge and CloudWatch to trigger the Lambda function, I was unable to find Amazon FSx among the data source options for these monitoring services. Any suggestion on what tool can be used?
When I try to deploy to an existing Lambda function configured in serverless.yml as follows, it says "An error occurred: ApiLambdaFunction - an-existing-function-name-created-by-my-devops already exists."
functions:
  api:
    name: an-existing-function-name-created-by-my-devops
So is it not possible to deploy to an existing Lambda function that was not created by Serverless?
As Serverless manages your resources via a CloudFormation stack, you could probably import the Lambda function within the UI ("Import Existing Resources into a CloudFormation Stack") and do the deploy again afterwards.
I have not tried this, and there is most probably a better solution, though.
Edit: a precondition is that you successfully created your stack before adding your desired function.
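If you prefer to script the import instead of using the console, a rough boto3 sketch could look like this. The stack name, change set name and template file are assumptions on my part; the logical ID matches the one in the error message above, and the template you pass in must describe the function being imported.

import boto3

cfn = boto3.client("cloudformation")

# Placeholder names -- your stack name and template file will differ.
STACK_NAME = "my-service-dev"
CHANGE_SET_NAME = "import-existing-lambda"

# The template must contain the resource being imported
# (an AWS::Lambda::Function with logical ID ApiLambdaFunction).
with open("template-with-imported-lambda.yml") as f:
    template_body = f.read()

cfn.create_change_set(
    StackName=STACK_NAME,
    ChangeSetName=CHANGE_SET_NAME,
    ChangeSetType="IMPORT",
    TemplateBody=template_body,
    ResourcesToImport=[
        {
            "ResourceType": "AWS::Lambda::Function",
            "LogicalResourceId": "ApiLambdaFunction",
            "ResourceIdentifier": {
                "FunctionName": "an-existing-function-name-created-by-my-devops"
            },
        }
    ],
    Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],
)

# Wait for the change set to be created, then execute it.
cfn.get_waiter("change_set_create_complete").wait(
    StackName=STACK_NAME, ChangeSetName=CHANGE_SET_NAME
)
cfn.execute_change_set(StackName=STACK_NAME, ChangeSetName=CHANGE_SET_NAME)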
I have a Lambda function defined in a CloudFormation template with a reference to an S3 bucket and key where I have saved a zip file containing the Lambda source in the usual fashion. I have a separate CI build process building the Lambda function and dumping it into S3. Now I want the S3 key within the CloudFormation template to be static; I don't want to be changing it for every Lambda commit and rebuild. But CloudFormation thinks the Lambda hasn't changed because the S3 key hasn't changed, even though the contents of the zip file have changed.
Must I change the S3 key each time to trigger Lambda redeployment, or is there a way to force Lambda redeployment via CloudFormation whilst retaining the static key?
You are right: CloudFormation doesn't detect the change, since the S3 key remains the same even though the content behind it has changed.
As you mentioned, you can use an S3 key different from the one in the previous template execution so that the Lambda code gets redeployed. You would have to expose the S3 key as a template parameter.
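As an illustration of that parameter approach, here is a minimal boto3 sketch. It assumes your template exposes parameters named LambdaS3Bucket and LambdaS3Key (hypothetical names); the CI job would pass a fresh key on every deploy.

import boto3

cfn = boto3.client("cloudformation")

# Placeholder stack and parameter names -- they must match your template.
STACK_NAME = "my-lambda-stack"
NEW_S3_KEY = "builds/my-function-abc123.zip"

cfn.update_stack(
    StackName=STACK_NAME,
    UsePreviousTemplate=True,  # the template itself is unchanged, only the key moves
    Parameters=[
        {"ParameterKey": "LambdaS3Bucket", "UsePreviousValue": True},
        {"ParameterKey": "LambdaS3Key", "ParameterValue": NEW_S3_KEY},
    ],
    Capabilities=["CAPABILITY_IAM"],
)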
Otherwise, try using SAM packaging in AWS CodeBuild and deploy through CodeDeploy with CloudFormation.
There, the code location is not hard-coded as a zip; instead the build takes the code path, packages it, and the template gets updated with the new deployment package location every time (see buildspec.yml in CodeBuild).
References:
https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-deploying.html
Hope this helps.
Is it possible for one Lambda function to deploy a new Lambda function, with serverless.yml and handler.py stored in S3 or GitHub?
I am using the Serverless Framework and Python.
AWS Lambda can deploy another Lambda function with an S3 trigger. The simple flow is like this:
Bundle project and upload to S3 --> S3 Trigger --> Lambda (Create or Update Function Code)
Here is the complete documentation
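To wire up the "S3 Trigger" step of that flow, something along these lines should work with boto3. The bucket and function names are placeholders, and note that put_bucket_notification_configuration replaces any existing notification configuration on the bucket.

import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# Placeholder names -- replace with your own bucket and deployer function.
BUCKET = "my-deployment-bundles"
DEPLOYER_FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:lambda-deployer"

# Allow S3 to invoke the deployer Lambda.
lambda_client.add_permission(
    FunctionName=DEPLOYER_FUNCTION_ARN,
    StatementId="AllowS3Invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn=f"arn:aws:s3:::{BUCKET}",
)

# Invoke the deployer whenever a new .zip bundle is uploaded.
# Note: this call overwrites the bucket's existing notification configuration.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "Id": "deploy-on-upload",
                "LambdaFunctionArn": DEPLOYER_FUNCTION_ARN,
                "Events": ["s3:ObjectCreated:Put"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "suffix", "Value": ".zip"}]}
                },
            }
        ]
    },
)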
So I have an existing CloudFormation stack up and running. However, I haven't found a solution for my problem, which is that I want my resources, for example EC2 and Lambda, to have up-to-date code.
It seems that a CloudFormation stack doesn't update if the template doesn't have any changes. I'm holding my code inside an S3 bucket as a zip file, but if this file gets changed, CloudFormation doesn't notice it.
Is my best bet creating a git hook script that uses the AWS CLI and updates the EC2 and Lambda code, or is there some 'elegant' way for CloudFormation to notice these changes?
Create a new Lambda function that updates your existing Lambda and EC2, or that calls CloudFormation to update them. On your S3 bucket, create an object Put event and have it invoke that new Lambda function. Then whenever a new zip file is put in S3, your EC2 and Lambda get updated.
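As a rough sketch of the Lambda half of that idea (the EC2 side would need its own mechanism), this is how the S3-triggered updater could look. It assumes a naming convention where the zip's base name is the target function's name, which is my assumption, not something stated above.

import os
import urllib.parse

import boto3

lambda_client = boto3.client("lambda")


# Hypothetical convention: "builds/my-api-function.zip" updates "my-api-function".
def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        function_name = os.path.splitext(os.path.basename(key))[0]

        # Point the existing function at the new zip in S3 and publish a version.
        lambda_client.update_function_code(
            FunctionName=function_name,
            S3Bucket=bucket,
            S3Key=key,
            Publish=True,
        )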