I got serverless-plugin-warmup 4.2.0-rc.1 working fine with Serverless version 1.36.2, but it only executes a single warmup call instead of the configured five.
Is there a problem in my serverless.yml config?
It is also strange that I have to add warmup: true to the function section to get the function warmed up. According to the docs at https://github.com/FidelLimited/serverless-plugin-warmup, the config in the custom section should be enough.
plugins:
  - serverless-prune-plugin
  - serverless-plugin-warmup

custom:
  warmup:
    enabled: true
    concurrency: 5
    prewarm: true
    schedule: rate(2 minutes)
    source: { "type": "keepLambdaWarm" }
    timeout: 60

functions:
  myFunction:
    name: ${self:service}-${opt:stage}-${opt:version}
    handler: myHandler
    environment:
      FUNCTION_NAME: myFunction
    warmup: true
In AWS CloudWatch I only see one execution every 2 minutes. I would expect to see 5 executions every 2 minutes, or do I misunderstand something here?
EDIT:
Now, using the master branch, concurrency works, but the context delivered to the function being warmed is broken. With Spring Cloud Functions I get: "Error parsing Client Context as JSON".
Looking at the JS of the generated warmup function, the delivered source does not look right:
const functions = [{"name":"myFunction","config":{"enabled":true,"source":"\"\\\"{\\\\\\\"source\\\\\\\":\\\\\\\"serverless-plugin-warmup\\\\\\\"}\\\"\"","concurrency":3}}];
Config is:
custom:
  warmup:
    enabled: true
    concurrency: 3
    prewarm: true
    schedule: rate(5 minutes)
    timeout: 60
Adding the property sourceRaw: true to the warmup config generates a clean source in the function JS:
const functions = [{"name":"myFunctionName","config":{"enabled":true,"source":"{\"type\":\"keepLambdaWarm\"}","concurrency":3}}];
Config:
custom:
  warmup:
    enabled: true
    concurrency: 3
    prewarm: true
    schedule: rate(5 minutes)
    source: { "type": "keepLambdaWarm" }
    sourceRaw: true
    timeout: 60
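For completeness: whatever the runtime, the warmed function itself should short-circuit warmup invocations so they don't run any business logic. A minimal Node.js sketch of that guard, assuming the custom source { "type": "keepLambdaWarm" } above arrives as the event payload (with the plugin's default source you would check event.source === 'serverless-plugin-warmup' instead):

// handler.js - hypothetical guard for warmup invocations
module.exports.myHandler = async (event) => {
  // Exit early on warmup calls so no real work is done.
  if (event && event.type === 'keepLambdaWarm') {
    console.log('WarmUp - lambda is warm!');
    return 'lambda is warm';
  }
  // ... actual business logic ...
};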
I want to schedule a Lambda via AWS EventBridge. The issue is that I want to read the number used in the ScheduleExpression from the SSM parameter GCHeartbeatInterval.
The code I used is below:
heartbeat-check:
  handler: groupconsultation/heartbeatcheck.handler
  description: ${self:custom.gitVersion}
  timeout: 15
  memorySize: 1536
  package:
    include:
      - groupconsultation/heartbeatcheck.js
      - shared/*
      - newrelic-lambda-wrapper.js
  events:
    - eventBridge:
        enabled: true
        schedule: rate(2 minutes)

resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: "1"
        Description: Value in minutes; needs to be converted to seconds/milliseconds
Is this possible to achieve in serverless.yml?
The reason for reading it from SSM is that this is a heartbeat service: the same value will be used by the frontend to send a heartbeat at a set interval, and the backend Lambda needs to be triggered after 2x the heartbeat interval.
It turns out it's not possible. The only solution was to pass the value as a command-line option, something like below:
custom:
  mySchedule: ${opt:mySchedule, 1} # Allow overrides from CLI
...
schedule: ${self:custom.mySchedule}
...
resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: ${self:custom.mySchedule}
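For reference, a hypothetical deploy that overrides the default from the CLI:

sls deploy --stage dev --mySchedule 2

Without the --mySchedule flag, the ${opt:mySchedule, 1} fallback resolves to 1.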
Even if we made the SSM approach work, we would still have to redeploy the application, just as we have to with this approach.
I am trying to create a scheduled Lambda function using the Serverless Framework and to send it different parameters from different events.
Here is my serverless configuration:
functions:
  profile:
    timeout: 10
    handler: profile.profile
    events:
      - schedule:
          rate: rate(1 minute)
          params:
            hello: world
The issue is that when I run sls deploy, I get the following error:
Serverless: at 'functions.profile.events[0]': unrecognized property 'params'
This is basically copied from the documentation here, so it should work... Am I missing something?
The documentation you're referencing is for Apache OpenWhisk.
If you're using AWS, you'll need to use input, as shown in the AWS documentation:
functions:
  aggregate:
    handler: statistics.handler
    events:
      - schedule:
          rate: rate(10 minutes)
          enabled: false
          input:
            key1: value1
            key2: value2
            stageParams:
              stage: dev
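With input, the event your handler receives is exactly the configured object. A minimal sketch of a hypothetical statistics.handler reading it:

// statistics.js - the scheduled invocation's event is the literal input object
module.exports.handler = async (event) => {
  // event === { key1: 'value1', key2: 'value2', stageParams: { stage: 'dev' } }
  console.log(event.key1, event.stageParams.stage);
};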
The documentation that you referred to is for OpenWhisk: https://www.serverless.com/framework/docs/providers/openwhisk/events/schedule/#schedule/.
CloudWatch Events (now rebranded as EventBridge) is documented at https://www.serverless.com/framework/docs/providers/aws/events/schedule/#enabling--disabling. Sample code for reference:
functions:
  aggregate:
    handler: statistics.handler
    events:
      - schedule:
          rate: rate(10 minutes)
          enabled: false
          input:
            key1: value1
            key2: value2
            stageParams:
              stage: dev
      - schedule:
          rate: cron(0 12 * * ? *)
          enabled: false
          inputPath: '$.stageVariables'
      - schedule:
          rate: rate(2 hours)
          enabled: true
          inputTransformer:
            inputPathsMap:
              eventTime: '$.time'
            inputTemplate: '{"time": <eventTime>, "key1": "value1"}'
Official docs at https://docs.aws.amazon.com/eventbridge/latest/userguide/scheduled-events.html
One of my configurations looks something like the below; there we use parameters instead of params.
functions:
  test_function:
    handler: handler.test_function
    memorySize: 512
    timeout: 60
    events:
      - http:
          path: get-hello
          method: get
          request:
            parameters:
              querystrings:
                name: true
I want to execute my Lambda function locally on an SQS event from a queue in my AWS account. I have defined the required event, but the function is not getting triggered.
How can this be achieved?
I am able to send messages to the same queue using a cron event from my local environment.
Here are a few things I tried, but they didn't work for me.
functions:
  account-data-delta-test:
    handler: functions/test/data/dataDeltatestGenerator.handler
    name: ${self:provider.stage}-account-data-delta-test
    description: account delta update - ${self:provider.stage}-account-data-delta-test
    tags:
      Name: ${self:provider.stage}-account-data-delta-test
    # keeping 5 minute function timeout just in case large volume of data.
    timeout: 300
    events:
      - sqs:
          arn:
            Fn::GetAtt: [ testGenerationQueue, Arn ]
          batchSize: 10
----------
Policies:
  - PolicyName: ${self:provider.stage}-test-sqs-policy
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action:
            - sqs:ReceiveMessage
            - sqs:DeleteMessage
            - sqs:GetQueueAttributes
            - sqs:ChangeMessageVisibility
            - sqs:SendMessage
            - sqs:GetQueueUrl
            - sqs:ListQueues
          Resource: "*"
----------
Resources:
  testGenerationQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: ${self:provider.stage}-account-test-queue
      VisibilityTimeout: 60
      Tags:
        - Key: Name
          Value: ${self:provider.stage}-account-test-queue
----------
const AWS = require('aws-sdk');

const sqs = new AWS.SQS({
  region: process.env.REGION,
});

exports.handler = async (event) => {
  console.error('------------ >>>>CRON:START: Test delta Job run.', event);
};
You can't trigger your local Lambda function from your remote context, because they have nothing in common.
I suppose your goal is to test the logic of your Lambda function; if so, you have two options.
Option 1
A faster way could be to invoke the function locally using sam local invoke. You can pass this command several arguments; one of them is the event source input, i.e., the event payload that SQS will send to the Lambda when it is triggered:
sam local invoke -e sqs.input.json account-data-delta-test
Your sqs.input.json would look roughly like the sample below (generate it with sam local generate-event sqs receive-message):
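{
  "Records": [
    {
      "messageId": "19dd0b57-b21e-4ac1-bd88-01bbb068cb78",
      "receiptHandle": "MessageReceiptHandle",
      "body": "Hello from SQS!",
      "attributes": {
        "ApproximateReceiveCount": "1",
        "SentTimestamp": "1523232000000",
        "SenderId": "123456789012",
        "ApproximateFirstReceiveTimestamp": "1523232000001"
      },
      "messageAttributes": {},
      "md5OfBody": "7b270e59b47ff90a553787216d55d91d",
      "eventSource": "aws:sqs",
      "eventSourceARN": "arn:aws:sqs:us-east-1:123456789012:MyQueue",
      "awsRegion": "us-east-1"
    }
  ]
}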
In this way you actually test your Lambda locally.
Pros: it's fast.
Cons: you still have to test the trigger once you deploy to AWS.
Option 2
In the second scenario you sacrifice the binding between the queue and the Lambda: you trigger your function at a fixed interval and explicitly call ReceiveMessage in your code, as sketched after the pros and cons below.
Pro: you can read a real message from a real queue.
Con: you have to invoke the function at a regular interval, which is not handy.
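A minimal sketch of this polling approach, assuming the AWS SDK v2 and a hypothetical QUEUE_URL environment variable:

const AWS = require('aws-sdk');

const sqs = new AWS.SQS({ region: process.env.REGION });

exports.handler = async () => {
  // Poll the queue explicitly instead of relying on an event source mapping.
  const { Messages = [] } = await sqs.receiveMessage({
    QueueUrl: process.env.QUEUE_URL,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 5,
  }).promise();

  for (const message of Messages) {
    console.log('Processing message', message.Body);
    // Delete each message once handled so it is not redelivered.
    await sqs.deleteMessage({
      QueueUrl: process.env.QUEUE_URL,
      ReceiptHandle: message.ReceiptHandle,
    }).promise();
  }
};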
I have a SAM template file that throws an error during sam build: [InvalidResourceException('MyFunction', "Type of property 'Events' is invalid.")]
First off, at the top of my file (at the same level as Globals) I have this event; the idea is to define a CloudWatch schedule that fires every 15 minutes and invokes a Lambda:
Events:
  Type: Schedule
  Properties:
    Schedule: rate(15 mins)
    name: InvokeEvery15MinutesSchedule
    Description: Invoke the target every 15 mins
    Enabled: True
And here's what the function looks like:
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: ./path-to-code
    Events:
      - !Ref InvokeEvery15MinutesSchedule
I was doing this because I saw earlier that the following syntax is valid:
Globals:
  Function:
    Layers:
      - !Ref Layer1
      - !Ref Layer2
So, I thought that if I defined an event at the top level and referenced it inside the Lambda, it would work. I want to keep it outside of the Lambda declaration because I want it to apply to several functions.
Can someone help with what I'm doing wrong?
"Events" is a lambda source object that defines the events that trigger this function. The object describing the source of events which trigger the function.
Try this:
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: ./path-to-code
    Events:
      RateSchedule:
        Type: Schedule
        Properties:
          Schedule: rate(15 minutes)
          Name: InvokeEvery15MinutesSchedule
          Description: Invoke the target every 15 mins
          Enabled: True
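As far as I know, SAM's Globals section deliberately does not support Events (unlike Layers), so a shared schedule can't be hoisted there either; the event has to be repeated on each function that needs it.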
I am writing a CloudFormation/Serverless YAML config for a Lambda function. I need the reservedConcurrency parameter to be 100 if IsProduction is true and 20 if it is false, but an error occurs when I deploy the YAML file:
You should use integer as reservedConcurrency value on function
resources:
  Conditions:
    IsProduction:
      Fn::Equals:
        - ${self:provider.stage}
        - production

functions:
  somefunction:
    handler: functions/somefunction
    timeout: 300
    events:
      - sqs:
          arn:
            Fn::GetAtt: [ somequeue, Arn ]
          batchSize: 10
    reservedConcurrency:
      Fn::If:
        - IsProduction
        - 100
        - 20
You can't use CloudFormation intrinsic functions within the functions block of serverless.yml.
Instead, try nested variables:
custom:
  concurrency:
    prod: 100

functions:
  somefunction:
    handler: functions/somefunction
    timeout: 300
    events:
      - sqs:
          arn:
            Fn::GetAtt: [ somequeue, Arn ]
          batchSize: 10
    reservedConcurrency: ${self:custom.concurrency.${self:provider.stage}, 20}
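The inner ${self:provider.stage} resolves first: for the prod stage the lookup yields 100, and for any other stage the missing custom.concurrency.<stage> key falls through to the 20 default, so no CloudFormation condition is needed.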