AWS sample test JSON for an SQS message with SNS notifications in it, to hit a Lambda - aws-lambda

I am running into some trouble understanding how to create a sample test JSON to test my Lambda. Currently the workflow is SNS -> SQS -> Lambda. I am trying to test the Lambda in the console with a sample JSON. I have tried putting the SNS message under the "body" field of the SQS record, both as a JSON object and as a JSON string, but I keep running into parsing issues. I have referred to a few other SO answers (ref: Amazon SNS -> SQS message body), but the suggestion there was to use the raw-messaging option, which my subscribers do not use. Can someone post a sample JSON structure to test with, for SQS records with SNS notifications in them?
PS:
I tried the test event below (without a JSON string in the body). I also tried using a JSON string for the body instead.
{
    "Records": [
        {
            "messageId": "19dd0b57-b21e-4ac1-bd88-01bbb068cb78",
            "receiptHandle": "MessageReceiptHandle",
            "body": {
                "Type": "Notification",
                "MessageId": "84102bd5-8890-4ed5-aeba-c15fafc926dc",
                "TopicArn": "arn:aws:sns:eu-west-1:534706846367:HelloWorld",
                "Message": "hello World",
                "Timestamp": "2012-06-05T13:44:22.360Z",
                "SignatureVersion": "1",
                "Signature": "Qzh0qXhijBKylaFwc9PGE+lQQDwHGWkIzCW2Ld1eVrxNfSem4yyBTgouqGX26V0m1qhFD4RQcBzE3oNqx5jFhJfV4hN45FNcsFVnmfLPGNUTmJWblSk8f6znWgTy8UtK9xrTeNYzK59k3VJ4WTJ5kCEj+2vH7sBV15fAXeCAtdQ=",
                "SigningCertURL": "https://sns.eu-west-1.amazonaws.com/SimpleNotificationService-f3ecfb7224c7233fe7bb5f59f96de52f.pem",
                "UnsubscribeURL": "https://sns.eu-west-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:eu-west-1:534706846367:HelloWorld:8a3acde2-cb0b-4a56-9b9c-b75ed7307556"
            },
            "attributes": {
                "ApproximateReceiveCount": "1",
                "SentTimestamp": "1523232000000",
                "SenderId": "123456789012",
                "ApproximateFirstReceiveTimestamp": "1523232000001"
            },
            "messageAttributes": {},
            "md5OfBody": "{{{md5_of_body}}}",
            "eventSource": "aws:sqs",
            "eventSourceARN": "arn:aws:sqs:us-east-1:123456789012:MyQueue",
            "awsRegion": "us-east-1"
        }
    ]
}
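For what it's worth, the body of an SQS record is always delivered to Lambda as a string, so a test event that mirrors what the function actually receives would carry the SNS notification as an escaped JSON string rather than a nested object. A minimal sketch, reusing the example values above (only a few SNS fields kept for brevity):

{
    "Records": [
        {
            "messageId": "19dd0b57-b21e-4ac1-bd88-01bbb068cb78",
            "receiptHandle": "MessageReceiptHandle",
            "body": "{\"Type\":\"Notification\",\"MessageId\":\"84102bd5-8890-4ed5-aeba-c15fafc926dc\",\"TopicArn\":\"arn:aws:sns:eu-west-1:534706846367:HelloWorld\",\"Message\":\"hello World\",\"Timestamp\":\"2012-06-05T13:44:22.360Z\"}",
            "attributes": {
                "ApproximateReceiveCount": "1",
                "SentTimestamp": "1523232000000",
                "SenderId": "123456789012",
                "ApproximateFirstReceiveTimestamp": "1523232000001"
            },
            "messageAttributes": {},
            "md5OfBody": "{{{md5_of_body}}}",
            "eventSource": "aws:sqs",
            "eventSourceARN": "arn:aws:sqs:us-east-1:123456789012:MyQueue",
            "awsRegion": "us-east-1"
        }
    ]
}

The handler would then JSON.parse(record.body) and read the Message field from the parsed SNS notification.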

Related

Cannot subscribe WebSocket for inverse contract in ByBit unified v3 API

I'm trying to use the ByBit derivatives v3 API to subscribe to public market data over WebSocket.
I first query the instruments of the inverse contract BTCUSD through /derivatives/v3/public/instruments-info, and I get this:
{
    "symbol": "BTCUSD",
    "contractType": "InversePerpetual",
    "status": "Trading",
    "baseCoin": "BTC",
    "quoteCoin": "USD",
    "launchTime": "0",
    "deliveryTime": "0",
    "deliveryFeeRate": "",
    "priceScale": "2",
    "leverageFilter": {
        "minLeverage": "1",
        "maxLeverage": "100",
        "leverageStep": "0.01"
    },
    "priceFilter": {
        "minPrice": "0.50",
        "maxPrice": "999999.00",
        "tickSize": "0.50"
    },
    "lotSizeFilter": {
        "maxTradingQty": "1000000",
        "minTradingQty": "1",
        "qtyStep": "1"
    }
}
Then I follow the WebSocket data documentation, using the endpoint wss://stream.bybit.com/contract/usdt/public/v3 to subscribe to the topic orderbook.25.BTCUSD, and I get:
{"success":false,"ret_msg":"error:handler not found,topic:orderbook.25.BTCUSD","conn_id":"027f109e-a7fb-4af0-8b69-78bbb293e34b","req_id":"","op":"subscribe"}
The topic orderbook.25.BTCUSDT works. I know there is usdt in the WebSocket endpoint, but the documentation offers no other choice. I tried usd/public/v3 and unified/public/v3; neither of them works.
Subscribe to wss://stream.bybit.com/realtime and send this message:
{"op":"subscribe","args":["trade.BTCUSD"]}
See https://bybit-exchange.github.io/docs/futuresV2/inverse/#t-websocket for the documentation.
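A minimal Node.js sketch of that subscription, assuming the ws package is installed (the endpoint and topic are the ones from the answer above):

// Connect to the v2 inverse-contract stream and subscribe to BTCUSD trades.
const WebSocket = require('ws');

const socket = new WebSocket('wss://stream.bybit.com/realtime');

socket.on('open', () => {
    socket.send(JSON.stringify({ op: 'subscribe', args: ['trade.BTCUSD'] }));
});

socket.on('message', (msg) => {
    // Each message is a JSON string with the subscription ack or trade data.
    console.log(msg.toString());
});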

Publishing Avro messages using Kafka REST Proxy throws "Conversion of JSON to Avro failed"

I am trying to publish a message which has a union type for one field, defined as
{
    "name": "somefield",
    "type": [
        "null",
        {
            "type": "array",
            "items": {
                "type": "record",
Publishing the message using the Kafka REST Proxy keeps throwing the following error when somefield has a populated array:
{
    "error_code": 42203,
    "message": "Conversion of JSON to Avro failed: Failed to convert JSON to Avro: Expected start-union. Got START_ARRAY"
}
The same schema with somefield: null works fine.
The Java classes are generated from the Avro schemas in the Spring Boot project using the Gradle plugin. When I use the generated Java classes and publish a message with the array populated using the Spring KafkaTemplate, the message is published correctly with the correct schema. (The schema is taken from the generated Avro specific record.) When I copy the same JSON value and schema and publish via the REST proxy, it fails with the above error.
I have these content types in the API call:
accept:application/vnd.kafka.v2+json, application/vnd.kafka+json, application/json
content-type:application/vnd.kafka.avro.v2+json
What am I missing here? Any pointers to troubleshoot the issue are appreciated.
The messages I tested were:
{
    "somefield": null
}
and
{
    "somefield": [
        { "field1": "hello" }
    ]
}
However, it should instead be passed as:
{
    "somefield": {
        "array": [
            { "field1": "hello" }
        ]
    }
}
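In other words, the non-null union member has to be wrapped with its Avro type name ("array" for an unnamed array type). For context, a full REST Proxy request using that encoding might look roughly like this sketch (the value_schema_id of 1 is a placeholder; the field names come from the examples above):

{
    "value_schema_id": 1,
    "records": [
        {
            "value": {
                "somefield": {
                    "array": [
                        { "field1": "hello" }
                    ]
                }
            }
        }
    ]
}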

How do I post a test Kinesis event from Postman to a local Lambda function running on serverless?

Sorry, I wasn't sure how to make the question itself brief enough...
I can post data from Postman to my local Lambda function. The issue is that when running locally, I have to use this line of code...
event = JSON.parse(event.body);
...so that I can do this...
event.Records.forEach(function(record)
{
    // do some stuff
});
But when I deploy the function to AWS, parsing event.body is unnecessary. In fact, it throws an error.
I was assuming that there is something different about the JSON (or other aspects of the request) that I'm posting from Postman to my local app compared to what Kinesis actually sends. But the JSON blob I'm posting locally was logged directly from Lambda on AWS to CloudWatch.
I'm missing something.
TBH, this only matters because having to comment out that line as a step in the deployment process is annoying and error-prone.
Here's the JSON (names have been changed to protect the innocent):
{
    "Records": [
        {
            "kinesis": {
                "kinesisSchemaVersion": "1.0",
                "partitionKey": "Thursday, 11 April 2019",
                "sequenceNumber": "49594660145138471912435706107712688932829223550684495922",
                "data": "some base 64 stuff",
                "approximateArrivalTimestamp": 1555045874.83
            },
            "eventSource": "aws:kinesis",
            "eventVersion": "1.0",
            "eventID": "shardId-000000000003:1234123412341234123412341234123412341234123412341234",
            "eventName": "aws:kinesis:record",
            "invokeIdentityArn": "arn:aws:iam::1234123412341234:role/lambda-kinesis-role",
            "awsRegion": "us-west-2",
            "eventSourceARN": "arn:aws:kinesis:us-west-2:1234123412341234:stream/front-end-requests"
        }
    ]
}
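A minimal sketch of one way to avoid toggling that parse line, assuming a Node.js handler and the two event shapes described above: when the function is invoked through an HTTP wrapper (e.g. Postman against a local serverless setup) the Kinesis-style payload arrives as a JSON string in event.body, whereas a real Kinesis invocation puts Records at the top level.

exports.handler = async (event) => {
    // Unwrap the HTTP body when present; otherwise use the event as-is.
    const payload = typeof event.body === 'string' ? JSON.parse(event.body) : event;

    payload.Records.forEach((record) => {
        // Kinesis record data is base64-encoded.
        const data = Buffer.from(record.kinesis.data, 'base64').toString('utf8');
        // do some stuff with data
    });
};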

Error when attempting to create VSTS API Service Hook Subscription

Everyone,
I have been going around and around trying to create Visual Studio Team Services Service Hook Subscriptions automagically via the API. I can get the TFS publisher to work, but not the RM publisher. I am trying to create a hook subscription for Release Deployment Completed. Any examples I can find simply don't work, even when swapped with my data. The TFS provider works perfectly. Sample with response...
POST https://project.visualstudio.com/_apis/hooks/subscriptions?api-version=4.1-preview
Body:
{
    "publisherId": "rm",
    "eventType": "ms.vss-release.deployment-completed-event",
    "consumerId": "slack",
    "consumerActionId": "postMessageToChannel",
    "publisherInputs": {
        "releaseEnvironmentId": "777",
        "releaseDefinitionId": "1",
        "releaseEnvironmentStatus": "4",
        "projectId": "6ce954b1-ce1f-45d1-b94d-e6bf2464ba2c"
    },
    "consumerInputs": {
        "url": "https://hooks.slack.com/services/myservice/crazyGuidxxxxxxxxxxxxxxxx"
    }
}
Response:
{
    "$id": "1",
    "innerException": null,
    "message": "There is no registered handler for the service hooks event type ms.vss-release.deployment-completed-event.",
    "typeName": "System.InvalidOperationException, mscorlib",
    "typeKey": "InvalidOperationException",
    "errorCode": 0,
    "eventId": 0
}
It's because you missed vsrm in your URL for a release. Use the following API and you'll get a successful response:
POST https://project.vsrm.visualstudio.com/_apis/hooks/subscriptions?api-version=4.1-preview
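A hypothetical sketch of the corrected call from Node.js 18+ (the VSTS_PAT environment variable name and the subscription variable are placeholders; the body is the same JSON shown in the question):

// Basic auth with a personal access token; subscription holds the request body above.
const pat = Buffer.from(':' + process.env.VSTS_PAT).toString('base64');

const response = await fetch(
    'https://project.vsrm.visualstudio.com/_apis/hooks/subscriptions?api-version=4.1-preview',
    {
        method: 'POST',
        headers: {
            'Authorization': 'Basic ' + pat,
            'Content-Type': 'application/json'
        },
        body: JSON.stringify(subscription)
    }
);
console.log(await response.json());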

Kinesis agent to Lambda, how to get origin file and server

I have a Kinesis agent that streams a lot of log file information to Kinesis streams, and I have a Lambda function that parses the info.
In Lambda, in addition to the string, I need to know the source file name and machine name. Is that possible?
You can add it to the data that you send to Kinesis.
Lambda gets Kinesis records as base64 strings; you can encode into each string a JSON of this form:
{
    "machine": [machine],
    "data": [original data]
}
And then, when processing the records in Lambda (Node.js):
// Decode the base64 record, then pick the fields back out of the wrapper.
let record_object = JSON.parse(Buffer.from(event.Records[0].kinesis.data, 'base64').toString('utf8'));
let machine = record_object.machine;
let data = record_object.data;
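On the producer side, a minimal sketch of wrapping each record this way before putting it on the stream, assuming the aws-sdk v2 Kinesis client (the stream name is a placeholder):

const os = require('os');
const AWS = require('aws-sdk');

const kinesis = new AWS.Kinesis();

// Wrap the original payload with the machine name so the consumer above
// can recover both fields after base64-decoding the record.
function putLogLine(line) {
    const payload = JSON.stringify({ machine: os.hostname(), data: line });
    return kinesis.putRecord({
        StreamName: 'my-log-stream',
        PartitionKey: os.hostname(),
        Data: payload
    }).promise();
}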
Assuming you are using the Kinesis Agent to produce the data stream: the open-source community has added ADDEC2METADATA as a preprocessing option in the agent (see the source code).
Make sure that the source content file is in JSON format. If the original format is CSV, use the CSVTOJSON transformer first to convert it to JSON, and then pipe it to the ADDEC2METADATA transformer as shown below.
Open agent.json and add the following:
"flows": [
{
"filePattern": "/tmp/app.log*",
"kinesisStream": "my-stream",
"dataProcessingOptions": [
{
"optionName": "CSVTOJSON",
"customFieldNames": ["your", "custom", "field", "names","here", "if","origin","file","is","csv"],
"delimiter": ","
},
{
"optionName": "ADDEC2METADATA",
"logFormat": "RFC3339SYSLOG"
}
]
}
]
}
If your code is running in a container/ECS/EKS etc., where the originating info is not as simple as collecting info about a bare-metal EC2 instance, then use the "ADDMETADATA" option as shown below in agent.json:
{
    "optionName": "ADDMETADATA",
    "timestamp": "true/false",
    "metadata": {
        "key": "value",
        "foo": {
            "bar": "baz"
        }
    }
}
