How to trigger an AWS Lambda by sending an event to EventBridge - aws-lambda

I have an AWS Lambda function whose trigger is an EventBridge rule.
The rule looks like this:
{
  "detail-type": ["ECS Task State Change"],
  "source": ["aws.ecs"],
  "detail": {
    "stopCode": ["EssentialContainerExited", "UserInitiated"],
    "clusterArn": ["arn:aws:ecs:.........."],
    "containers": {
      "name": ["some name"]
    },
    "lastStatus": ["DEACTIVATING"],
    "desiredStatus": ["STOPPED"]
  }
}
This event normally fires when an ECS task's status changes (in this case, when a task is killed).
My questions are:
Can I simulate this event from command line?
maybe by running aws events put-events --entries file://putevents.json
(What should I write in the putevents.json file?)
Can I simulate this event from Javascript code?

TL;DR Yes and yes, provided you deal with the limitation that user-generated events cannot have a source that begins with aws.
Send custom events to EventBridge with the PutEvents API. The API is available in the CLI as well as in the SDKs (see the AWS JS SDK). Each entry you pass in the entries parameter (this is what goes in your putevents.json) must have at least three fields:
[
  {
    "Source": "my-custom-event", // cannot start with "aws."
    "DetailType": "ECS Task State Change",
    "Detail": "{}" // a JSON string; copy the body from the ECS sample events docs
  }
]
The ECS task state change event samples in the ECS documentation make handy templates for your custom events. You can safely prune any non-required field that you don't need for pattern matching.
Custom events are not permitted to mimic the aws system event sources. So amend your rule to also match on your custom source name:
"source": ["aws.ecs", "my-custom-event"],

Related

How to configure an Azure custom handler with a timer trigger?

I'm trying to configure a new function in my Golang custom handler that uses a timer trigger. But I haven't been able to find any documentation for it.
I've reviewed the examples in the Azure/Azure-Functions GitHub organization, but a timer trigger example is missing: https://github.com/Azure/Azure-Functions
I've also reviewed the custom handler documentation on Microsoft Learn, but it only covers HTTP triggers: https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-other?tabs=go%2Cwindows
It's unclear how the function is executed in main.go on the cron schedule configured in function.json.
The intent is to execute the function once an hour. This is the /functionname/function.json file I'm using:
{
  "bindings": [
    {
      "name": "timer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 * * * *"
    }
  ]
}
There are a few differences between configuring a custom handler with a timer trigger and an HTTP trigger.
These are the differences that I noticed while figuring out how to get the timer trigger function up and running:
The /local.settings.json file requires a field called "AzureWebJobsStorage" when configuring a timer trigger; it's not required for an HTTP trigger. If it's missing, the function app fails on startup. This field stores the connection string for the storage account used by the function app. (Obviously add this file to .gitignore.)
When the function app attempts to call the timer-triggered function, it expects it at the endpoint /functionName (sketched below). This is different from the HTTP trigger, which executes function handlers at /api/functionName.
Note: Timer-triggered functions don't appear to be able to set an outbound HTTP binding. So even though the function sets a response, it isn't used and doesn't get sent back through the function host. It's unclear why timer triggers ignore the outbound HTTP binding, or how to set a response without using one.
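The question is about a Go custom handler; purely as an illustration of the same idea, here is a minimal sketch in Python (a custom handler is just an HTTP server). The host sets FUNCTIONS_CUSTOMHANDLER_PORT and POSTs to /<functionName> on the cron schedule; the handler replies with an Outputs/Logs payload. The function name hourlyjob is a placeholder:

import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The Functions host POSTs to /<functionName> (e.g. /hourlyjob) on the schedule.
        length = int(self.headers.get("Content-Length", 0))
        _timer_payload = self.rfile.read(length)   # timer metadata from the host (unused here)
        if self.path == "/hourlyjob":              # placeholder function name
            # ... do the hourly work here ...
            body = json.dumps({"Outputs": {}, "Logs": ["hourlyjob ran"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # The host tells the custom handler which port to listen on.
    port = int(os.environ.get("FUNCTIONS_CUSTOMHANDLER_PORT", 8080))
    HTTPServer(("127.0.0.1", port), Handler).serve_forever()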

/actuator/health does not detect stopped binders in Spring Cloud Stream

We are using Spring Cloud Streams with multiple bindings based on Kafka Streams binders.
The output of /actuator/health correctly lists all our bindings and their state (RUNNING) - see example below.
Our expectation was that when a binding is stopped using
curl -d '{"state":"STOPPED"}' -H "Content-Type: application/json" -X POST http://<host>:<port>/actuator/bindings/mystep1
it would still be listed, but with threadState = NOT_RUNNING or SHUTDOWN, and that the overall health status would be DOWN.
This is not the case! After stopping a binder, it is removed from the list and the overall state of /actuator/health is still UP.
Is there a reason for this? We would like to have an alert on this execution state of our application.
Are there code examples showing how we could achieve this with a customized solution based on KafkaStreamsBinderHealthIndicator?
Example output of /actuator/health with Kafka Streams:
{
  "status": "UP",
  "components": {
    "binders": {
      "status": "UP",
      "components": {
        "kstream": {
          "status": "UP",
          "details": {
            "mystep1": {
              "threadState": "RUNNING",
              ...
            },
            "mystep2": {
              "threadState": "RUNNING",
              ...
            },
            ...
          }
        }
      }
    },
    "refreshScope": {
      "status": "UP"
    }
  }
}
UPDATE on the exact situation:
We do not stop the binding manually via the bindings endpoint.
We have implemented integrated error queues for runtime errors within all processing steps based on StreamBridge.
The solution also has a kind of circuit-breaker feature: it stops a binding from within the code when a configurable limit of consecutive runtime errors is reached, because we do not want to flood our internal error queues.
Our application is monitored by Icinga via /actuator/health, therefore we would like to get an alarm when one of the bindings is stopped.
Switching Icinga to another endpoint like /actuator/bindings cannot be done easily by our team.
Presently, the Kafka Streams binder health indicator only considers the currently active Kafka Streams processors for the health check, so what you are seeing in the output when the binding is stopped is expected. Since you used the bindings endpoint to stop the binding, you can use /actuator/bindings to get the status of the bindings; there you will see the state of all the bindings in the stopped processor as stopped. Does that satisfy your use case?
If not, please add a new issue in the repository and we could consider making some changes in the binder so that the health indicator is configurable by the users. At the moment, applications cannot customize the health check implementation. We could also consider adding a property with which you can force the stopped/inactive Kafka Streams processors to be included in the health check output. This is going to be tricky, though: for example, what should the overall health status be if some processors are down?
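If switching Icinga's check endpoint is the blocker, one workaround outside the binder is a small check script that Icinga executes against /actuator/bindings. This is only a sketch: the endpoint URL and the per-binding state field are assumptions you would need to verify against your actual /actuator/bindings output:

import json
import sys
import urllib.request

# Hypothetical actuator URL; replace host/port with your application's.
BINDINGS_URL = "http://localhost:8080/actuator/bindings"

def main() -> int:
    with urllib.request.urlopen(BINDINGS_URL, timeout=5) as resp:
        bindings = json.load(resp)
    # Assumption: each entry exposes a "state" field such as "running" or "stopped".
    stopped = [b for b in bindings if str(b.get("state", "")).lower() != "running"]
    if stopped:
        print("CRITICAL: stopped bindings:", [b.get("name") for b in stopped])
        return 2  # Icinga/Nagios "critical" exit code
    print("OK: all bindings running")
    return 0

if __name__ == "__main__":
    sys.exit(main())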

How to send a CloudWatchEvent from a lambda to an EventBridge destination

I have a Lambda which is triggered by an EventBridge custom bus. I want to send another event to the custom bus at the end of the function's processing, so I created a destination on the Lambda pointing to the same custom bus.
I have the following code, where the function handler returns a CloudWatchEvent. This is not working.
public async Task<CloudWatchEvent<object>> FunctionHandler(CloudWatchEvent<object> evnt, ILambdaContext context)
{
    return await ProcessMessageAsync(evnt, context);
}
My Lambda was being triggered by an S3 input event (which is asynchronous). I tried adding a destination on Lambda "success" pointing to an EventBridge bus, and created a rule to capture that event and send it to CloudWatch Logs, but it didn't seem to work.
Turns out, while creating the Rule in EventBridge, event pattern was set to:
{
  "source": ["aws.lambda"]
}
Which is what you get if you are using the console and selecting AWS Lambda as the AWS Service.
Infuriated, I couldn't seem to get it to work even with a simple event. On further inspection, I looked at the input event and realized that it wants lambda and not aws.lambda. It is also mentioned in the documentation: https://docs.aws.amazon.com/lambda/latest/dg/invocation-async.html
So to fix it, I changed it to
{
  "source": ["lambda"]
}
and it worked for me.
Have you given AWS Lambda destinations a shot? There are 4 types of destinations supported (a configuration sketch follows the list):
SQS queue
SNS topic
EventBridge event bus
Another Lambda function
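For reference, here is a hedged boto3 sketch of attaching an on-success EventBridge destination to a function; the function name and bus ARN are placeholders, and the function's execution role also needs events:PutEvents permission on that bus:

import boto3

lambda_client = boto3.client("lambda")

# Placeholder names/ARNs - replace with your function and custom bus.
lambda_client.put_function_event_invoke_config(
    FunctionName="my-function",
    MaximumRetryAttempts=0,
    DestinationConfig={
        "OnSuccess": {
            "Destination": "arn:aws:events:us-east-1:123456789012:event-bus/my-custom-bus"
        }
    },
)

Remember that destinations only apply to asynchronous invocations, and that the rule on the bus has to match "source": ["lambda"], as noted above.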

Lambda Step Functions: Fire & Forget pattern

I have a Python-based Lambda (core Lambda) serving a synchronous API. The API is triggered from an interactive user application. I now need to add some logging & metrics (slightly compute-intensive) to the Lambda, and I don't want the core Lambda to be delayed by this, so I want to push that work into a new Lambda (logging Lambda). What I want is: the core Lambda completes its work, triggers the logging Lambda (fire & forget), and returns the response to the API call immediately. The end state (success/failure) of the logging Lambda is irrelevant.
Can Step Functions achieve this? The core and logging Lambdas have their own end states, and I'm not sure whether the Step Functions pattern can accommodate this.
You can start an asynchronous Lambda function invocation using "InvocationType": "Event" in your Invoke parameters. To do that in Step Functions, the ASL code looks like this:
{
  "StartAt": "Invoke Lambda function asynchronously",
  "States": {
    "Invoke Lambda function asynchronously": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "myFunction",
        "Payload.$": "$",
        "InvocationType": "Event"
      },
      "End": true
    }
  }
}
Having an async Lambda Task (as shown above) after your core Lambda Task seems like it should work. To make sure the logging Lambda failing doesn't affect the overall workflow, you can add a Catcher to it on States.ALL and redirect to a Succeed state.
https://docs.aws.amazon.com/step-functions/latest/dg/concepts-error-handling.html#error-handling-fallback-states
If the secondary Lambda is purely invoked for logging purposes and the state machine is not dependent on its output, you could invoke the secondary Lambda asynchronously from within your primary Lambda and then return from the primary Lambda (see the sketch below). This way your state machine doesn't need to know about the logging steps, and you can "fire and forget" before resuming your workflow.
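A minimal sketch of that fire-and-forget call from inside the core (Python) Lambda; logging-lambda and do_core_work are placeholders for your own function name and business logic:

import json
import boto3

lambda_client = boto3.client("lambda")

def do_core_work(event):
    # placeholder for the core Lambda's existing business logic
    return {"status": "done"}

def handler(event, context):
    result = do_core_work(event)

    # Fire and forget: "Event" returns as soon as the invocation is queued,
    # so the API response is not delayed by the logging Lambda.
    lambda_client.invoke(
        FunctionName="logging-lambda",      # placeholder name
        InvocationType="Event",
        Payload=json.dumps({"request": event, "result": result}),
    )
    return result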

Is there any way to trigger an AWS Lambda function at the end of an AWS Glue job?

Currently I'm using an AWS Glue job to load data into Redshift, but after that load I need to run some data-cleansing tasks, probably using an AWS Lambda function. Is there any way to trigger a Lambda function at the end of a Glue job? Lambda functions can be triggered by SNS messages, but I couldn't find a way to send an SNS message at the end of the Glue job.
@oreoluwa is right, this can be done using CloudWatch Events.
From the CloudWatch dashboard:
Click on 'Rules' from the left menu
For 'Event Source', choose 'Event Pattern' and in 'Service Name' choose 'Glue'
For 'Event Type' choose 'Glue Job State Change'
On the right side of the page, in the 'Targets' section, click 'Add Target' -> 'Lambda Function' and then choose your function.
The event you'll get in Lambda will be of the format:
{
  "version": "0",
  "id": "a9bc90be-xx00-03e0-9bc5-a0a0a0a0a0a0",
  "detail-type": "Glue Job State Change",
  "source": "aws.glue",
  "account": "xxxxxxxxxx",
  "time": "2018-05-10T16:17:03Z",
  "region": "us-east-2",
  "resources": [],
  "detail": {
    "jobName": "xxxx_myjobname_yyyy",
    "severity": "INFO",
    "state": "SUCCEEDED",
    "jobRunId": "jr_565465465446788dfdsdf546545454654546546465454654",
    "message": "Job run succeeded"
  }
}
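If it helps, here is a minimal sketch of a Python handler that picks the relevant fields out of the event above; the cleanup call is a placeholder for your own data-cleansing logic:

import json

def lambda_handler(event, context):
    # Fields come from the Glue Job State Change event shown above.
    detail = event.get("detail", {})
    job_name = detail.get("jobName")
    state = detail.get("state")
    print(f"Glue job {job_name} finished with state {state}")

    if state == "SUCCEEDED":
        run_redshift_cleanup(job_name)   # placeholder for your data-cleansing logic
    return {"statusCode": 200, "body": json.dumps({"jobName": job_name, "state": state})}

def run_redshift_cleanup(job_name):
    # placeholder - e.g. run your cleansing SQL against Redshift here
    pass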
Since AWS Glue has started supporting Python, you can follow the path below to achieve what you want. The sample script shows how:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
import boto3 ## Step-2
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## Do all ETL stuff here
## Once the ETL completes
lambda_client = boto3.client('lambda') ## Step-3
response = lambda_client.invoke(FunctionName='string') ## Step-4
Create a Python-based Glue job (to perform the ETL on Redshift).
In the job script, import boto3 (you need to add this package as a script library).
Create a Lambda client using boto3.
Invoke the Lambda function with the boto3 invoke() call once the ETL completes.
Please make sure that the role you use to create the Glue job has permission to invoke Lambda functions.
Refer to the Boto3 documentation for Lambda.
No. Currently you can't trigger a Lambda function at the end of a Glue job, because AWS doesn't yet provide Glue as a Lambda trigger. If you look at the list of AWS Lambda triggers after you create a Lambda function, you will see that it includes most AWS services, but not AWS Glue. So, for now, it is not possible, but maybe it will be in the future.
But I would like to mention that you can actually control the flow of Glue scripts from your Lambda Python script (I did it using Python; I am sure other languages support this too). My use case was: whenever I upload an object to an S3 bucket, the Lambda function is triggered, reads the object file and starts my Glue job. Once the Glue job completed, I would write my file back to the S3 bucket linked to this Lambda function.
@ace and @adeel have part of the solution, but you can get this resolved by creating the CloudWatch rule with the following event pattern (a boto3 version is sketched after the pattern):
{
  "source": ["aws.glue"],
  "detail-type": ["Glue Job State Change"],
  "detail": {
    "jobName": ["<YourJobName>"],
    "state": ["SUCCEEDED"]
  }
}
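If you prefer to set this up from code rather than the console, a boto3 sketch could look like the following; the rule name, function ARN and job name are placeholders:

import json
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:glue-cleanup"  # placeholder

pattern = {
    "source": ["aws.glue"],
    "detail-type": ["Glue Job State Change"],
    "detail": {"jobName": ["<YourJobName>"], "state": ["SUCCEEDED"]},
}

# Create the rule that matches the Glue job state change.
rule = events.put_rule(Name="glue-job-succeeded", EventPattern=json.dumps(pattern))

# Allow EventBridge/CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName=LAMBDA_ARN,
    StatementId="glue-job-succeeded-rule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)

# Point the rule at the Lambda function.
events.put_targets(
    Rule="glue-job-succeeded",
    Targets=[{"Id": "glue-cleanup-lambda", "Arn": LAMBDA_ARN}],
)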
Lambda can be triggered on an S3 put. You can write a dummy file to S3 as the last step of the Glue job, which in turn triggers the Lambda. I have tested this.
You can orchestrate your AWS Glue Jobs and AWS Lambda functions by using AWS Step Functions. Here is a blog post that explains how to do it and gives an example: https://aws.amazon.com/blogs/big-data/orchestrate-multiple-etl-jobs-using-aws-step-functions-and-aws-lambda/
In essence, when a Glue job finishes (success or fail), your Step Function workflow can catch the event and invoke your Lambda function.
Yes, it is possible, but for this we have to take the help of EventBridge.
Please follow the instructions below.
Go to EventBridge; under Events you will find Rules. Click on it, then click Create rule. Give your rule a suitable name, make sure the radio button is set to "Rule with an event pattern", then click Next. The event source will be "AWS events or EventBridge partner events"; under creation method select "Use pattern form".
In the event pattern, select "AWS service" as the event source, select Glue as the AWS service, and in the new dropdown that appears select "Glue Job State Change".
Then on the right side, click "Edit pattern" and change it as needed, for example:
{
  "detail-type": ["Glue Job State Change"],
  "source": ["aws.glue"],
  "detail": {
    "jobName": ["Your glue Name"],
    "state": ["FAILED"]
  }
}
For state you can choose any of: STARTING, RUNNING, STOPPING, STOPPED, SUCCEEDED, FAILED, ERROR, WAITING and TIMEOUT.
Don't use any other field; the exception is when you are matching EC2 instances, where you have to use the resources field, which you can place next to source.
Then click Next, select "AWS service" as the target type, select "Lambda function", choose your Lambda function's name in the dropdown that appears, then Next, Next and save.
Congrats, you have successfully created the configuration to trigger a Lambda function based on a Glue job.
