Serverless framework - schedule rate "cron" not yet supported - aws-lambda

I am using "serverless": "^2.43.1".
I am following official docs from:
https://www.serverless.com/examples/aws-node-scheduled-cron
but this does not seem to work at all.
All I get is:
schedule rate "cron" not yet supported!
scheduler: invalid, schedule syntax
Scheduling [my lambda name] cron: [undefined] input: undefined
I tried the official example:
functions:
  myHandler:
    handler: handler.run
    events:
      - schedule: cron(0/2 * ? * MON-FRI *)
Or even just to invoke it every minute:
functions:
  myHandler:
    handler: handler.run
    events:
      - schedule: cron(* * * * *)
but the error persists and the Lambda is not invoked, either locally (serverless-offline) or on AWS.
Can you help me with that?

Use the rate format instead:
- schedule: rate(1 minute)
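For context, here is roughly how the full function definition from the question would look with a rate expression instead of a cron expression (handler and function names taken from the question):

functions:
  myHandler:
    handler: handler.run
    events:
      - schedule: rate(1 minute)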

Install the serverless-offline-scheduler npm package and add it to the plugins section of your serverless.yml file:
plugins:
  - serverless-webpack
  - serverless-offline
  - serverless-offline-scheduler
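As a rough sketch of how the pieces fit together (reusing the handler from the question), the relevant parts of serverless.yml would then look something like this. Note that AWS schedule expressions use six fields, so the five-field cron(* * * * *) from the question would be rejected on AWS either way:

plugins:
  - serverless-webpack
  - serverless-offline
  - serverless-offline-scheduler

functions:
  myHandler:
    handler: handler.run
    events:
      - schedule: cron(0/2 * ? * MON-FRI *)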

Related

AWS Lambda - Stage based event triggers

I'm working on an AWS Lambda using the Serverless Framework. I'm trying to get my Lambda function to run on an EventBridge trigger, more specifically a cron schedule. The schedule is to run every 30 minutes between 20:00 and 07:00, Monday to Friday. However, while this is great for prod, in QA/UAT I most likely want a different cron schedule. So I'm looking to implement stage-based cron triggers, scheduling it for the daytime in QA/UAT but for the evening in production.
I originally tried a single cron schedule trigger of cron (0/30 20:30-06:30 ? * 1-5 *) for UAT, but that didn't work for some reason. My Lambda function ran only twice after 20:30, which I've yet to figure out.
My serverless file contains:
custom:
  stage: "${opt:stage, self:provider.stage, 'dev'}"
  stages:
    - dev
    - uat
  eveningSchedule:
    dev: cron(0/30 08:00-12:59 ? * 1-5 *)
    uat: cron(0/30 20:30-23:59 ? * 1-5 *)
  morningSchedule:
    dev: cron(0/30 12:01-17:30 ? * 1-5 *)
    uat: cron(0/30 00:30-06:30 ? * 1-5 *)
The function is defined as:
functions:
  handler123:
    handler: foo::bar::functionName
    package:
      artifact: ./bin/Release/net6.0/foo.bar.zip
    events:
      - schedule: "${self:custom.eveningSchedule.stage}"
      - schedule: "${self:custom.morningSchedule.stage}"
The error I get when running sls deploy is:
Cannot resolve variable at "functions.CifFileRetriever.events.0": Value not found at "self" source and "functions.CifFileRetriever.events.1": Value not found at "self"
Would be massively grateful for any help on this one.
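For what it's worth, ${self:custom.eveningSchedule.stage} looks up a literal key named stage under eveningSchedule, which doesn't exist in the custom block above. A minimal sketch of a nested lookup keyed on the resolved stage (assuming nested variable resolution is available in the framework version in use) would be:

functions:
  handler123:
    handler: foo::bar::functionName
    events:
      - schedule: "${self:custom.eveningSchedule.${self:custom.stage}}"
      - schedule: "${self:custom.morningSchedule.${self:custom.stage}}"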

Percy not running in CircleCI orbs (w/ Cypress)

I'm trying to get Percy.io to take snapshots of a simple test written in Cypress, building in CircleCI. However, the 'builds' are showing up as failed in the Percy dashboard despite the test/build passing in CircleCI. In the Cypress test runner it is showing 'Percy not running' where my snapshots are placed.
I've followed the tutorials on the Percy and Cypress sites. I can get Percy to work locally by running percy exec -- cypress run, but the CircleCI config doesn't run Cypress via the command cypress run; it runs it via the cypress orb.
It seems like the two orbs, Cypress and Percy, don't know the other exists.
Here's my CircleCI config file:
version: 2.1
orbs:
  node: circleci/node@4.5.1
  cypress: cypress-io/cypress@1.28.0
  slack: circleci/slack@4.4.2
  percy: percy/agent@0.1.3
workflows:
  version: 2
  commit-workflow:
    jobs:
      - cypress/run:
          name: Smoke Tests
          record: true
          store_artifacts: true
          spec: cypress/integration/E2E/*
          post-steps:
            - store_test_results:
                path: test-results
            - slack/notify:
                channel: general
                event: fail
                template: basic_fail_1
                mentions: '#Jac'
            - slack/notify:
                channel: general
                event: pass
                template: basic_success_1
                mentions: '#Jac'
      - percy/finalize_all:
          requires:
            - Smoke Tests
The 'Run Cypress Tests' step doesn't make any mention of Percy, so I'm assuming it simply isn't running. Despite using the Percy orb, is there some sort of config I'm missing?
Apologies, I keep finding answers to my questions after posting to Stack Overflow! I obviously don't know the properties of cypress/run well enough. But essentially, there's a custom command-prefix property that can be added for the purpose of amending the command used to run Cypress. In fact, Percy is the example used in the Cypress docs.
Config now looks like:
version: 2.1
orbs:
  node: circleci/node@4.5.1
  cypress: cypress-io/cypress@1.28.0
  slack: circleci/slack@4.4.2
  percy: percy/agent@0.1.3
workflows:
  version: 2
  commit-workflow:
    jobs:
      - cypress/run:
          name: Smoke Tests
          record: true
          store_artifacts: true
          spec: cypress/integration/E2E/*
          command-prefix: npx percy exec --
          post-steps:
            - store_test_results:
                path: test-results
            - slack/notify:
                channel: general
                event: fail
                template: basic_fail_1
                mentions: '#Jac'
            - slack/notify:
                channel: general
                event: pass
                template: basic_success_1
                mentions: '#Jac'
      - percy/finalize_all:
          requires:
            - Smoke Tests

How do I pass command line arguments to a Lambda function in Serverless/Bref?

I am running a Symfony 4 (PHP) application on AWS Lambda using Bref (which uses Serverless).
Bref provides a layer for Symfony's bin/console binary. The Serverless config for the Lambda function looks like this:
functions:
  console:
    handler: bin/console
    name: 'mm-console'
    description: 'Symfony 4 console'
    timeout: 120 # in seconds
    layers:
      - ${bref:layer.php-73} # PHP
      - ${bref:layer.console} # The "console" layer
Using the above, I can run vendor/bin/bref cli mm-console -- mm:find-matches to run bin/console mm:find-matches on Lambda.
What if I want to run the mm:find-matches console command on a schedule on Lambda?
I tried this:
functions:
  mm-find-matches:
    handler: "bin/console mm:find-matches"
    name: 'mm-find-matches'
    description: 'Find mentor matches'
    timeout: 120
    layers:
      - ${bref:layer.php-73} # PHP
      - ${bref:layer.console} # The "console" layer
    schedule:
      rate: rate(2 hours)
However "bin/console mm:find-matches" is not a valid handler.
How can I pass mm:find-matches command to the bin/console function on a schedule?
You can pass command line arguments via the schedule event input like so:
functions:
  console:
    handler: bin/console
    name: 'mm-console'
    description: 'Symfony 4 console'
    timeout: 120 # in seconds
    layers:
      - ${bref:layer.php-73} # PHP
      - ${bref:layer.console} # The "console" layer
    events:
      - schedule:
          input:
            cli: "mm:find-matches --env=test"
          rate: rate(2 hours)
          enabled: true
There is, however, some discussion on this Bref GitHub issue about whether using the CLI console application is the best solution, versus writing PHP functions that bootstrap the kernel and do the specific thing you want the command to do.

How to rename an AWS Lambda function without changing anything in it

Earlier my function in serverless was:
functions:
  fun:
    handler: file.handler
    name: ${opt:stage, self:provider.stage}-lambda-fun
    environment: ${file(env.yml):${self:provider.stage}.lambda-fun}
    timeout: 180
    memorySize: 1024
I want to change fun to some more meaningful name, so I changed it to:
functions:
  my-fun:
    handler: file.handler
    name: ${opt:stage, self:provider.stage}-lambda-fun
    environment: ${file(env.yml):${self:provider.stage}.lambda-fun}
    timeout: 180
    memorySize: 1024
Now when I deployed this function through serverless, I got the below error:
An error occurred while provisioning your stack: my-funLogGroup
- /aws/lambda/lambda-fun already exists
Please help me understand what more I can do to make this work.
Try removing the stack first using serverless remove and then redeploy.
It's not exactly the same issue, but this GitHub issue gives an alternative solution: Cannot rename Lambda functions #108
I commented out the function definition I wanted to rename and the resources referencing it, then ran sls deploy, uncommented them, and ran sls deploy again.
The problem with this is that the first deploy deletes the function, so you have to take this downtime into account.

Google Cloud Build timing out

I have a Google Cloud Build build that times out after 10 min, 3 sec. Is there a way to extend that timeout?
The build status is set to "Build failed (timeout)" and I'm okay with it taking longer than 10 minutes.
In cloudbuild.yaml you have to add something like timeout: 660s.
E.g.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: [ 'build', '-t', 'gcr.io/[PRODUCT_ID]/[CONTAINER_IMAGE]', '.' ]
images:
  - 'gcr.io/[PRODUCT_ID]/[CONTAINER_IMAGE]'
timeout: 660s
If you defined your build using a cloudbuild.yaml, you can just set the timeout field; see the full definition of a Build Resource in the documentation.
If you are using the gcloud CLI, it takes a --timeout flag; try gcloud builds submit --help for details.
Example: gcloud builds submit --timeout=900s ...
