I am building a GraphQL server on AWS WebSocket API Gateway + Lambda to support subscriptions. The Lambda saves all subscribers' data in a database. When there is data to publish, it needs to construct a payload and send it to the WebSocket API Gateway, which forwards the payload to the Apollo client.
I have made the subscribe request work, but my question is how to construct the payload the Apollo client expects when the server wants to publish data to subscribers.
I build all of this in Go using the library github.com/graph-gophers/graphql-go. Any Go library that supports building the payload works for me.
We are sending emails using AWS SES. I know how to configure notifications for bounces and complaints using SNS. In fact, I have configured SNS to deliver those notifications to an HTTPS endpoint.
I have a Spring Boot REST service that I would like to receive those notifications in. Can anybody point to a sample Spring Boot REST controller to receive and process these notifications? Any help will be greatly appreciated.
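Not an official sample, but a minimal sketch of such a controller, assuming Jackson and spring-web are on the classpath (the class name and endpoint path are placeholders, and SNS message signature verification is omitted). Two points worth knowing: SNS posts with Content-Type text/plain, so the body should be read as a raw String, and for SES the SNS "Message" field is itself a JSON document with a notificationType of "Bounce" or "Complaint":

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.net.URL;

@RestController
@RequestMapping("/ses")
public class SesNotificationController {

    private final ObjectMapper mapper = new ObjectMapper();

    @PostMapping("/notifications")
    public ResponseEntity<Void> handle(
            @RequestHeader("x-amz-sns-message-type") String messageType,
            @RequestBody String body) throws Exception {

        JsonNode sns = mapper.readTree(body);

        if ("SubscriptionConfirmation".equals(messageType)) {
            // Confirm the subscription by fetching the SubscribeURL once.
            try (var in = new URL(sns.get("SubscribeURL").asText()).openStream()) {
                in.readAllBytes();
            }
        } else if ("Notification".equals(messageType)) {
            // For SES, the "Message" field is itself a JSON document.
            JsonNode ses = mapper.readTree(sns.get("Message").asText());
            String type = ses.path("notificationType").asText();
            if ("Bounce".equals(type)) {
                ses.path("bounce").path("bouncedRecipients")
                   .forEach(r -> System.out.println("Bounced: " + r.path("emailAddress").asText()));
            } else if ("Complaint".equals(type)) {
                ses.path("complaint").path("complainedRecipients")
                   .forEach(r -> System.out.println("Complaint: " + r.path("emailAddress").asText()));
            }
        }
        return ResponseEntity.ok().build();
    }
}
```

In production you should also verify the SNS message signature before trusting the payload.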
I want to use AWS X-Ray with Apollo Federation. The Apollo Gateway is hosted on AWS Lambda and it calls subservices which are also hosted on AWS Lambda.
I activated tracing for every lambda (gateway & subservices) in serverless.yml:
tracing:
  apiGateway: true
  lambda: true
And I instrumented every lambda to capture HTTPs calls globally:
const AWSXRay = require("aws-xray-sdk");
AWSXRay.captureHTTPsGlobal(require("https"));
The traces for the gateway and the subservices work well on their own. Below are the gateway trace and a subservice trace (screenshots omitted):
However, it seems the subservice traces use a different trace ID, even though the x-amzn-trace-id header is correctly passed from the gateway to the subservice:
The picture above is a screenshot of the CloudWatch logs of one subservice. The x-amzn-trace-id header is correctly passed from the gateway (1st & 2nd red rectangles), but it differs from the trace ID used for the Lambda (rectangle at the bottom). Hence, the two traces cannot be linked together.
Am I missing something here?
I'm trying to send error messages from Lambda to a DLQ. I can see the error messages in CloudWatch Logs but cannot see any entries in the DLQ. I followed the steps below:
1- Created the Lambda function.
2- Added API Gateway as a trigger.
3- Created a standard queue in SQS.
4- Assigned the SQS full access policy to the Lambda role.
5- Configured asynchronous invocation on the Lambda function and set the dead-letter queue to the queue I created in step 3.
Do I need to do anything additional on API Gateway or Lambda?
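One detail worth knowing here: Lambda routes failed events to the DLQ only for asynchronous invocations, and a (default) API Gateway trigger invokes the function synchronously, so errors from API Gateway requests never go through the DLQ machinery. A way to exercise the DLQ path directly from the CLI (function and queue names/URLs are placeholders):

```shell
# Invoke the function asynchronously (--invocation-type Event); only this
# path goes through Lambda's async retry/DLQ machinery.
aws lambda invoke \
  --function-name my-function \
  --invocation-type Event \
  --payload '{"force":"error"}' \
  out.json

# After the automatic retries fail, the event should appear in the queue:
aws sqs receive-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-dlq
```

If you need API Gateway errors in a queue, you would have to catch them in the function code and send them to SQS yourself.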
Per the link here, the Azure Functions Service Bus trigger lets you listen on Azure Service Bus. We currently use mostly AWS for our cloud services, and we are working with a vendor who provides real-time notifications via Azure Service Bus. I would like to know if there is any way to connect to Service Bus from Lambda. Any time there is a new message on the bus, we would like our AWS Lambda to be invoked and take it from there.
It's not possible directly. However, you can use Azure Functions (Azure's serverless offering) triggered by Azure Service Bus to consume the messages.
If you really want a cross-vendor trigger, then you need to consume the Azure Service Bus message, convert it into an HTTP payload, and trigger the AWS Lambda with an HTTP payload that carries the message contents.
CloudWatch Event Rule: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/Create-CloudWatch-Events-Rule.html
You specify your event source (a supported service and an action/API call) and the targets, set up the required IAM permissions (e.g. the Lambda invoke permission, if you create it from IaC tools like Terraform), and you are good to go!
Then, as long as the CloudWatch event rule is up, every event that matches the rule you specified will trigger your Lambda.
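For reference, the same setup via the AWS CLI (rule name, function name, and ARNs are placeholders; a pattern-based rule would use --event-pattern instead of the schedule):

```shell
# 1. Create the rule -- here a schedule, but --event-pattern works the same way.
aws events put-rule --name poll-vendor --schedule-expression "rate(1 minute)"

# 2. Allow CloudWatch Events to invoke the function.
aws lambda add-permission \
  --function-name my-function \
  --statement-id allow-cloudwatch-events \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/poll-vendor

# 3. Point the rule at the function.
aws events put-targets --rule poll-vendor \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:my-function"
```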
An event rule can also be used as a "cron schedule" for Lambda, which is how I have been using it. I did very rarely encounter some delay, though.
Update: to make it as close to real time as possible, you would need your vendor to modify their Azure account to push messages to an AWS endpoint (an API Gateway), which I assume is a no. Other than that, an AWS-self-contained solution is to set up a CloudWatch event rule that polls your vendor's Azure HTTP endpoint every minute and stores the messages in your own SQS queue.