How do I pass command line arguments to a Lambda function in Serverless/Bref? - aws-lambda

I am running a Symfony 4 (PHP) application on AWS Lambda using Bref (which uses Serverless).
Bref provides a layer for Symfony's bin/console binary. The Serverless config for the Lambda function looks like this:
functions:
    console:
        handler: bin/console
        name: 'mm-console'
        description: 'Symfony 4 console'
        timeout: 120 # in seconds
        layers:
            - ${bref:layer.php-73} # PHP
            - ${bref:layer.console} # The "console" layer
Using the above, I can run vendor/bin/bref cli mm-console -- mm:find-matches to run bin/console mm:find-matches on Lambda.
What if I want to run the mm:find-matches console command on a schedule on Lambda?
I tried this:
functions:
    mm-find-matches:
        handler: "bin/console mm:find-matches"
        name: 'mm-find-matches'
        description: 'Find mentor matches'
        timeout: 120
        layers:
            - ${bref:layer.php-73} # PHP
            - ${bref:layer.console} # The "console" layer
        schedule:
            rate: rate(2 hours)
However, "bin/console mm:find-matches" is not a valid handler.
How can I pass the mm:find-matches command to the bin/console function on a schedule?

You can pass command line arguments via the schedule event input like so:
functions:
    console:
        handler: bin/console
        name: 'mm-console'
        description: 'Symfony 4 console'
        timeout: 120 # in seconds
        layers:
            - ${bref:layer.php-73} # PHP
            - ${bref:layer.console} # The "console" layer
        events:
            - schedule:
                input:
                    cli: "mm:find-matches --env=test"
                rate: rate(2 hours)
                enabled: true
There is some discussion on this Bref GitHub issue about whether invoking the CLI console application is the best solution, versus writing PHP functions that bootstrap the kernel and do the specific work the command would do.
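If several console commands need their own schedules, the same bin/console handler can back multiple function entries, each with its own cli input. A sketch of that pattern (the second command name below is hypothetical, not from the original project):

```yaml
functions:
    find-matches:
        handler: bin/console
        layers:
            - ${bref:layer.php-73}
            - ${bref:layer.console}
        events:
            - schedule:
                rate: rate(2 hours)
                input:
                    cli: "mm:find-matches"
    cleanup:
        handler: bin/console
        layers:
            - ${bref:layer.php-73}
            - ${bref:layer.console}
        events:
            - schedule:
                rate: rate(1 day)
                input:
                    cli: "mm:cleanup" # hypothetical second command
```

Each function then shows up separately in CloudWatch, which makes per-command logs and metrics easier to follow than one shared console function.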

Related

Serverless framework - schedule rate "cron" not yet supported

I am using "serverless": "^2.43.1".
I am following official docs from:
https://www.serverless.com/examples/aws-node-scheduled-cron
but this does not seem to work at all.
All I get is:
schedule rate "cron" not yet supported!
scheduler: invalid, schedule syntax
Scheduling [my lambda name] cron: [undefined] input: undefined
I tried the official example:
functions:
    myHandler:
        handler: handler.run
        events:
            - schedule: cron(0/2 * ? * MON-FRI *)
Or even just to invoke it every minute:
functions:
    myHandler:
        handler: handler.run
        events:
            - schedule: cron(* * * * *)
but the error persists and the lambda is not invoked, either locally (serverless offline) or on AWS.
Can you help me with that?
Use the rate format instead (note that AWS schedule expressions use a six-field cron syntax, so cron(* * * * *) is not valid there anyway):
- schedule: rate(1 minute)
1. Install the serverless-offline-scheduler npm package.
2. Add the same plugin to your serverless.yml file:
plugins:
    - serverless-webpack
    - serverless-offline
    - serverless-offline-scheduler
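Putting the two parts together, a minimal serverless.yml sketch (an illustration based on the question's handler, not a verified config):

```yaml
plugins:
    - serverless-offline
    - serverless-offline-scheduler

functions:
    myHandler:
        handler: handler.run
        events:
            - schedule: rate(1 minute)
```

With the scheduler plugin installed, the rate expression should also fire during local runs, not only after deploying to AWS.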

Unable to deploy pre built image in app engine standard environment (GCP)

My Spring Boot application was building fine in Cloud Build and deploying without any issue until September.
Now my trigger fails at the gcloud app deploy step.
Step #4: ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: Deployment cannot use a pre-built image. Pre-built images are only allowed in the App Engine Flexible Environment.
app.yaml
runtime: java11
env: standard
service: service
handlers:
    - url: /.*
      script: this field is required, but ignored
cloudbuild.yaml
steps:
    # backend deployment
    # Step 1:
    - name: maven:3-jdk-14
      entrypoint: mvn
      dir: 'service'
      args: ["test"]
    # Step 2:
    - name: maven:3-jdk-14
      entrypoint: mvn
      dir: 'service'
      args: ["clean", "install", "-Dmaven.test.skip=true"]
    # Step 3:
    - name: docker
      dir: 'service'
      args: ["build", "-t", "gcr.io/service-base/base", "."]
    # Step 4:
    - name: "gcr.io/cloud-builders/docker"
      args: ["push", "gcr.io/service-base/base"]
    # Step 5:
    - name: 'gcr.io/cloud-builders/gcloud'
      dir: 'service/src/main/appengine'
      args: ['app', 'deploy', "--image-url=gcr.io/service-base/base"]
      timeout: "30m0s"
    # Step 6:
    # dispatch.yaml deployment
    - name: "gcr.io/cloud-builders/gcloud"
      dir: 'service/src/main/appengine'
      args: ["app", "deploy", "dispatch.yaml"]
      timeout: "30m0s"
timeout: "100m0s"
images: ["gcr.io/service-base/base"]
Cloud build error
Thanks in advance. I'm confused: the build was working fine before, so what am I doing wrong now?
You can't deploy a custom container on App Engine standard. You have to provide your code and the runtime environment; a Buildpack is then used to create a standard container on Google's side (for information, a new Cloud Build job is run for this), which is deployed to App Engine.
I recommend you have a look at Cloud Run for your custom container. It's very close to App Engine (and even better on many points!) and very customizable.
What your cloudbuild.yaml comments refer to as Step 5 corresponds to Step #4 in the error, because the system numbers steps from 0.
The error message is accurate; App Engine standard differs from App Engine flexible in that only the latter (flexible) permits container image deployments. App Engine standard deploys from source.
See Google's example.
It's possible that something changed on Google's side that's causing the issue but, the env: standard in your app.yaml suggests the build configuration itself has changed.
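If you follow the Cloud Run suggestion, the App Engine deploy step in the cloudbuild.yaml above could be swapped for a gcloud run deploy step. A sketch only; the service name, region, and flags here are assumptions, not taken from the original build:

```yaml
    # Step 5 (alternative): deploy the already-pushed image to Cloud Run
    - name: 'gcr.io/cloud-builders/gcloud'
      args: ['run', 'deploy', 'service',
             '--image', 'gcr.io/service-base/base',
             '--region', 'us-central1',
             '--platform', 'managed',
             '--allow-unauthenticated']
```

Unlike App Engine standard, Cloud Run accepts exactly the kind of pre-built image this pipeline already produces in Steps 3 and 4.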

Cloud function e2e

How do I write e2e or integration testing for a cloud function? So far
I've been able to use a bash automation script, but I cannot easily detect when the deployment has completed.
gcloud functions deploy MyFunction --entry-point MyFunction --runtime go111 --trigger-http
Bash is a good starting point, but consider using dedicated e2e testing tools. For instance,
with the endly e2e workflow runner, your deployment workflow may look like the following:
pipeline:
    deploy:
        action: exec:run
        comments: deploy HelloWorld triggered by http
        target: $target
        sleepTimeMs: 1500
        terminators:
            - Do you want to continue
        errors:
            - ERROR
        env:
            GOOGLE_APPLICATION_CREDENTIALS: ${env.HOME}/.secret/${gcSecrets}.json
        commands:
            - cd $appPath
            - export PATH=$PATH:${env.HOME}/google-cloud-sdk/bin/
            - gcloud config set project $projectID
            - ${cmd[4].stdout}:/Do you want to continue/ ? Y
            - gcloud functions deploy HelloWorld --entry-point HelloWorld --runtime go111 --trigger-http
        extract:
            - key: triggerURL
              regExpr: (?sm).+httpsTrigger:[^u]+url:[\s\t]+([^\r\n]+)
    validateTriggerURL:
        action: validator:assert
        actual: ${deploy.Data.triggerURL}
        expected: /HelloWorld/
post:
    triggerURL: ${deploy.Data.triggerURL}
You can also achieve the same using cloudfunctions service API calls:
defaults:
    credentials: $gcSecrets
pipeline:
    deploy:
        action: gcp/cloudfunctions:deploy
        '#name': HelloWorld
        entryPoint: HelloWorldFn
        runtime: go111
        source:
            URL: ${appPath}/hello/
Finally, you can look into practical serverless e2e testing examples (cloudfunctions, lambda, firebase, firestore, dynamodb, pubsub, sqs, sns, bigquery, etc.):
serverless_e2e
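The extract step above pulls the trigger URL out of gcloud's deploy output with a regex. A minimal bash equivalent of that extraction, run here against abridged, hypothetical deploy output rather than a live gcloud call:

```shell
# Abridged, hypothetical output of `gcloud functions deploy` (gen1 format)
output='status: ACTIVE
httpsTrigger:
  url: https://us-central1-my-project.cloudfunctions.net/HelloWorld
timeout: 60s'

# Print only the line carrying the url, with the "url:" prefix stripped
trigger_url=$(printf '%s\n' "$output" | sed -n 's/^ *url: *//p')
echo "$trigger_url"
```

In a real script you would pipe the actual deploy output in, then curl the extracted URL as the e2e assertion.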

How to rename the aws lambda function without changing anything in it

Earlier my function in serverless was:
functions:
    fun:
        handler: file.handler
        name: ${opt:stage, self:provider.stage}-lambda-fun
        environment: ${file(env.yml):${self:provider.stage}.lambda-fun}
        timeout: 180
        memorySize: 1024
I want to replace fun with a more meaningful name, so I changed it to:
functions:
    my-fun:
        handler: file.handler
        name: ${opt:stage, self:provider.stage}-lambda-fun
        environment: ${file(env.yml):${self:provider.stage}.lambda-fun}
        timeout: 180
        memorySize: 1024
Now when I deploy this function through serverless, I get the error below:
An error occurred while provisioning your stack: my-funLogGroup
- /aws/lambda/lambda-fun already exists
Please help me understand what more I can do here.
Try removing the stack first using serverless remove and then redeploy.
It's not the exact same issue, but this GitHub issue gives an alternative solution: Cannot rename Lambda functions #108
I commented out the function definition I wanted to rename (and the resources referencing it), ran sls deploy, then uncommented it and ran sls deploy again.
The problem with this is that the first deploy deletes the function, so you have to account for that downtime.
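One detail worth checking in the config from the question: only the logical key changed (fun to my-fun) while the physical name property kept the same value, so CloudFormation tries to create a fresh log group whose name, /aws/lambda/<name>, already exists. A sketch that also renames the deployed function (the new suffix is just an example, not from the original project) sidesteps that collision:

```yaml
functions:
    my-fun:
        handler: file.handler
        # new physical name -> new /aws/lambda/<name> log group
        name: ${opt:stage, self:provider.stage}-lambda-my-fun
        environment: ${file(env.yml):${self:provider.stage}.lambda-fun}
        timeout: 180
        memorySize: 1024
```

Note this still creates a brand-new function on AWS; the old one (and its log group) is removed by CloudFormation as part of the update.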

Google Cloud Build timing out

I have a Google Cloud Build build that times out after 10 min, 3 sec. Is there a way to extend that timeout?
The build status is set to "Build failed (timeout)" and I'm okay with it taking longer than 10 minutes.
In cloudbuild.yaml you have to add a top-level timeout, e.g. timeout: 660s:
steps:
    - name: 'gcr.io/cloud-builders/docker'
      args: ['build', '-t', 'gcr.io/[PRODUCT_ID]/[CONTAINER_IMAGE]', '.']
images:
    - 'gcr.io/[PRODUCT_ID]/[CONTAINER_IMAGE]'
timeout: 660s
If you defined your build using a cloudbuild.yaml, you can just set the timeout field; see the full definition of a Build Resource in the documentation.
If you are using the gcloud CLI, it takes a --timeout flag; try gcloud builds submit --help for details.
Example: gcloud builds submit --timeout=900s ...
