I'm working on an AWS Lambda function using the Serverless Framework. I'm trying to get my Lambda function to run on an EventBridge trigger, more specifically a cron schedule: every 30 minutes between 20:00 and 07:00, Monday to Friday. That's fine for production, but in QA/UAT I most likely want a different cron schedule. So I'm looking to implement stage-based cron triggers, so that I can schedule it during the day in QA/UAT but in the evening in production.
I originally tried a single cron schedule trigger of cron(0/30 20:30-06:30 ? * 1-5 *) for UAT, but that didn't work for some reason: my Lambda function only ran twice after 20:30, and I've yet to figure out why.
My serverless file contains:
custom:
  stage: "${opt:stage, self:provider.stage, 'dev'}"
  stages:
    - dev
    - uat
  eveningSchedule:
    dev: cron(0/30 08:00-12:59 ? * 1-5 *)
    uat: cron(0/30 20:30-23:59 ? * 1-5 *)
  morningSchedule:
    dev: cron(0/30 12:01-17:30 ? * 1-5 *)
    uat: cron(0/30 00:30-06:30 ? * 1-5 *)
The function is defined as:
functions:
  handler123:
    handler: foo::bar::functionName
    package:
      artifact: ./bin/Release/net6.0/foo.bar.zip
    events:
      - schedule: "${self:custom.eveningSchedule.stage}"
      - schedule: "${self:custom.morningSchedule.stage}"
The error I get when running sls deploy is:
Cannot resolve variable at "functions.CifFileRetriever.events.0": Value not found at "self" source and "functions.CifFileRetriever.events.1": Value not found at "self"
Would be massively grateful for any help on this one.
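For completeness, the Serverless Framework does support nesting one variable inside another, so my understanding is that the per-stage lookup should use the resolved stage as the map key rather than the literal key stage. An untested sketch based on the custom block above:

functions:
  handler123:
    handler: foo::bar::functionName
    package:
      artifact: ./bin/Release/net6.0/foo.bar.zip
    events:
      # resolve the current stage first, then use it as the key into each map
      - schedule: "${self:custom.eveningSchedule.${self:custom.stage}}"
      - schedule: "${self:custom.morningSchedule.${self:custom.stage}}"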
Related
Any help on the following would be greatly appreciated.
I am using a pipe to trigger a Jenkins job from Bitbucket (BB) Pipelines, with the below code in my bitbucket-pipelines.yml:
- step: &functionalTest
    name: functional test
    image: python:3.9
    script:
      - pipe: atlassian/jenkins-job-trigger:0.1.1
        variables:
          JENKINS_URL: '<<myJenkinsURL>>'
          JENKINS_USER: '<<myJenkinsUser>>'
          JENKINS_TOKEN: $JENKINS_USER_TOKEN
          JOB_NAME: '<<myJenkinsJob>>'
          WAIT: 'true'
          WAIT_MAX_TIMEOUT: 500
It was working fine until last week. However, since Friday I can see a number of failures in the BB pipeline with a timeout, even though the Jenkins job is successful and took only 4 mins 56 secs to execute all test cases. I also have WAIT_MAX_TIMEOUT: 500 (just over 8 minutes).
Exception:
✖ Timeout while waiting for jenkins job with build number 254 to be completed
PS: the Jenkins job for this build number completed successfully in 5 mins (including the Sonar report generation).
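The only workaround I can think of so far (untested, and it would not explain why this started on Friday) is to raise WAIT_MAX_TIMEOUT in case the job now sits in the Jenkins queue before it starts, e.g.:

- pipe: atlassian/jenkins-job-trigger:0.1.1
  variables:
    JENKINS_URL: '<<myJenkinsURL>>'
    JENKINS_USER: '<<myJenkinsUser>>'
    JENKINS_TOKEN: $JENKINS_USER_TOKEN
    JOB_NAME: '<<myJenkinsJob>>'
    WAIT: 'true'
    WAIT_MAX_TIMEOUT: 1200  # seconds; bumped to ~20 mins to rule out queueing/polling delays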
I am using "serverless": "^2.43.1".
I am following official docs from:
https://www.serverless.com/examples/aws-node-scheduled-cron
but this does not seem to work at all.
All I get is:
schedule rate "cron" not yet supported!
scheduler: invalid, schedule syntax
Scheduling [my lambda name] cron: [undefined] input: undefined
I tried the official example:
functions:
  myHandler:
    handler: handler.run
    events:
      - schedule: cron(0/2 * ? * MON-FRI *)
Or even just to invoke it every minute:
functions:
  myHandler:
    handler: handler.run
    events:
      - schedule: cron(* * * * *)
but the error persists and the Lambda is not invoked, either locally (serverless-offline) or on AWS.
Can you help me with that?
Use the rate format!
- schedule: rate(1 minute)
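In context, that event sits under the same function definition from the question, with the cron expression swapped for a rate expression:

functions:
  myHandler:
    handler: handler.run
    events:
      # every minute; rate() avoids the "cron not yet supported" error above
      - schedule: rate(1 minute)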
1. Install the serverless-offline-scheduler npm package (e.g. npm install --save-dev serverless-offline-scheduler).
2. Add the same plugin to your serverless.yml file:
plugins:
  - serverless-webpack
  - serverless-offline
  - serverless-offline-scheduler
I am running a Symfony 4 (PHP) application on AWS Lambda using Bref (which uses Serverless).
Bref provides a layer for Symfony's bin/console binary. The Serverless config for the Lambda function looks like this:
functions:
  console:
    handler: bin/console
    name: 'mm-console'
    description: 'Symfony 4 console'
    timeout: 120 # in seconds
    layers:
      - ${bref:layer.php-73} # PHP
      - ${bref:layer.console} # The "console" layer
Using the above, I can run vendor/bin/bref cli mm-console -- mm:find-matches to run bin/console mm:find-matches on Lambda.
What if I want to run the mm:find-matches console command on a schedule on Lambda?
I tried this:
functions:
  mm-find-matches:
    handler: "bin/console mm:find-matches"
    name: 'mm-find-matches'
    description: 'Find mentor matches'
    timeout: 120
    layers:
      - ${bref:layer.php-73} # PHP
      - ${bref:layer.console} # The "console" layer
    schedule:
      rate: rate(2 hours)
However "bin/console mm:find-matches" is not a valid handler.
How can I pass mm:find-matches command to the bin/console function on a schedule?
You can pass command line arguments via the schedule event input like so:
functions:
  console:
    handler: bin/console
    name: 'mm-console'
    description: 'Symfony 4 console'
    timeout: 120 # in seconds
    layers:
      - ${bref:layer.php-73} # PHP
      - ${bref:layer.console} # The "console" layer
    events:
      - schedule:
          input:
            cli: "mm:find-matches --env=test"
          rate: rate(2 hours)
          enabled: true
That said, there is some discussion on this Bref GitHub issue about whether invoking the CLI console application is the best solution, versus writing PHP functions that bootstrap the kernel and do the specific thing you want the command to do.
Earlier my function in serverless was:
functions:
  fun:
    handler: file.handler
    name: ${opt:stage, self:provider.stage}-lambda-fun
    environment: ${file(env.yml):${self:provider.stage}.lambda-fun}
    timeout: 180
    memorySize: 1024
I want to change fun to a more meaningful name, so I changed it to:
functions:
  my-fun:
    handler: file.handler
    name: ${opt:stage, self:provider.stage}-lambda-fun
    environment: ${file(env.yml):${self:provider.stage}.lambda-fun}
    timeout: 180
    memorySize: 1024
Now, when I deploy this function through Serverless, I get the below error:
An error occurred while provisioning your stack: my-funLogGroup
- /aws/lambda/lambda-fun already exists
Please help me with what more I can do to make this work.
Try removing the stack first using serverless remove and then redeploy.
It's not the exact same issue, but this GitHub issue gives an alternative solution: Cannot rename Lambda functions #108
I commented out the function definition I wanted to rename and the resources that reference it, ran sls deploy, then uncommented and ran sls deploy again.
The problem with this is that the first deploy will delete the function, so you have to take this downtime into account.
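Another option (an assumption about the root cause, since the log group /aws/lambda/<function name> is derived from the function's name property, which was left unchanged here) is to change name along with the function key, so the new stack no longer collides with the existing log group. A sketch:

functions:
  my-fun:
    handler: file.handler
    # a new physical name means a new /aws/lambda/... log group,
    # so the stack no longer tries to re-create the existing one
    name: ${opt:stage, self:provider.stage}-lambda-my-fun
    environment: ${file(env.yml):${self:provider.stage}.lambda-fun}
    timeout: 180
    memorySize: 1024

Note that the old function and its log group from the previous deploy will still be left behind and may need manual cleanup.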
I have a Google Cloud Build build that times out after 10 min, 3 sec. Is there a way to extend that timeout?
The build status is set to "Build failed (timeout)" and I'm okay with it taking longer than 10 minutes.
In cloudbuild.yaml you have to add something like timeout: 660s.
E.g.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/[PROJECT_ID]/[CONTAINER_IMAGE]', '.']
images:
  - 'gcr.io/[PROJECT_ID]/[CONTAINER_IMAGE]'
timeout: 660s
If you defined your build using a cloudbuild.yaml, you can just set the timeout field; see the full definition of a Build Resource in the documentation.
If you are using the gcloud CLI, it takes a --timeout flag; try gcloud builds submit --help for details.
Example: gcloud builds submit --timeout=900s ...