I have an AWS Lambda function that is configured to receive messages from an Amazon MQ (Apache ActiveMQ) broker via an MQ trigger. I depend on the destination object, shown in the example MQ record event in the documentation, to know which queue each message is coming from. However, in reality the message object has an empty destination object, and I am unsure why. The object looks fine otherwise.
This is what the example looks like in the AWS documentation.
{
    "eventSource": "aws:mq",
    "eventSourceArn": "arn:aws:mq:us-west-2:111122223333:broker:test:b-9bcfa592-423a-4942-879d-eb284b418fc8",
    "messages": [
        {
            "messageID": "ID:b-9bcfa592-423a-4942-879d-eb284b418fc8-1.mq.us-west-2.amazonaws.com-37557-1234520418293-4:1:1:1:1",
            "messageType": "jms/text-message",
            "deliveryMode": 1,
            "replyTo": null,
            "type": null,
            "expiration": "60000",
            "priority": 1,
            "correlationId": "myJMSCoID",
            "redelivered": false,
            "destination": {
                "physicalname": "testQueue"
            },
            "data": "QUJDOkFBQUE=",
            "timestamp": 1598827811958,
            "brokerInTime": 1598827811958,
            "brokerOutTime": 1598827811959
        },
        {
            "messageID": "ID:b-9bcfa592-423a-4942-879d-eb284b418fc8-1.mq.us-west-2.amazonaws.com-37557-1234520418293-4:1:1:1:1",
            "messageType": "jms/bytes-message",
            "deliveryMode": 1,
            "replyTo": null,
            "type": null,
            "expiration": "60000",
            "priority": 2,
            "correlationId": "myJMSCoID1",
            "redelivered": false,
            "destination": {
                "physicalname": "testQueue"
            },
            "data": "LQaGQ82S48k=",
            "timestamp": 1598827811958,
            "brokerInTime": 1598827811958,
            "brokerOutTime": 1598827811959
        }
    ]
}
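For context, this is roughly how I read the queue name in the handler, shown here as a minimal Java sketch that assumes the documented event shape above (the class name is just illustrative, and the event is read as a raw Map so the structure is easy to inspect):

    // Minimal sketch of a Lambda handler for the Amazon MQ trigger, assuming the
    // event shape from the AWS docs. The queue name is expected under
    // destination.physicalname -- the field that comes back empty for me.
    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    import java.util.List;
    import java.util.Map;

    public class MqQueueRouter implements RequestHandler<Map<String, Object>, Void> {

        @Override
        @SuppressWarnings("unchecked")
        public Void handleRequest(Map<String, Object> event, Context context) {
            List<Map<String, Object>> messages =
                    (List<Map<String, Object>>) event.get("messages");

            for (Map<String, Object> message : messages) {
                Map<String, Object> destination =
                        (Map<String, Object>) message.get("destination");
                String queueName = destination == null
                        ? null
                        : (String) destination.get("physicalname");

                context.getLogger().log("Message " + message.get("messageID")
                        + " came from queue: " + queueName);
            }
            return null;
        }
    }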
This is what the event object actually looks like when logged as JSON.
This is the MQ trigger.
This is what the object looks like in the ActiveMQ console.
I have a Teams bot that can answer 1-to-1 voice calls. During the call I want the bot to be able to send chat messages to the user and to reference user data (like their name).
Although an incoming call does have an encrypted source identity, from my experiments it appears this is not a valid user id for proactive messaging.
Interestingly enough, this is easily possible in group calls, since those start passing you participant lists (which I've done before), but 1-to-1 calls appear to rely on the source field, which effectively leaves the user anonymous.
{
    "@odata.type": "#microsoft.graph.commsNotifications",
    "value": [
        {
            "@odata.type": "#microsoft.graph.commsNotification",
            "changeType": "created",
            "resource": "/app/calls/4a1f2c00-831f-4e4e-9d7c-1648b6dddb73",
            "resourceUrl": "/communications/calls/4a1f2c00-831f-4e4e-9d7c-1648b6dddb73",
            "resourceData": {
                "@odata.type": "#microsoft.graph.call",
                "state": "incoming",
                "direction": "incoming",
                "callbackUri": "https://...",
                "source": {
                    "@odata.type": "#microsoft.graph.participantInfo",
                    "id": "7684a0ea-7db6-4f3e-a339-eb46e16d57f0",
                    "identity": {
                        "@odata.type": "#microsoft.graph.identitySet",
                        "encrypted": {
                            "@odata.type": "#microsoft.graph.identity",
                            "id": "1g7qrdwga2udafuebrjcyobchnq7r4xigupowjluuccfdceufmew6ush6wlx-kellf96ky2nnhsl084rn6vegqmwawiqpux0kk5aw5lqq9oydrewxe9awkrk_uh_0nxat", // <-- not a valid chat user
                            "tenantId": "{tenancyId}",
                            "identityProvider": "None"
                        }
                    },
                    "endpointType": "default",
                    "region": "apac",
                    "languageId": "en-us"
                },
                "targets": [
                    {
                        "@odata.type": "#microsoft.graph.invitationParticipantInfo",
                        "identity": {
                            "@odata.type": "#microsoft.graph.identitySet",
                            "application": {
                                "@odata.type": "#microsoft.graph.identity",
                                "id": "a2716ab5-9b38-4364-8869-b9b8deeff897",
                                "identityProvider": "AAD"
                            }
                        },
                        "endpointType": "default",
                        "id": "023126f0-904f-4c01-a78d-03f28e77e7a7",
                        "region": null,
                        "languageId": null
                    }
                ],
                "tenantId": "{Azure Tenancy}",
                "myParticipantId": "023126f0-904f-4c01-a78d-03f28e77e7a7",
                "callChainId": "37de77c7-54b3-4d04-9e9c-181e5f5b5773",
                "incomingContext": {
                    "@odata.type": "#microsoft.graph.incomingContext",
                    "sourceParticipantId": "7684a0ea-7db6-4f3e-a339-eb46e16d57f0"
                },
                "id": "4a1f2c00-831f-4e4e-9d7c-1648b6dddb73"
            }
        }
    ]
}
Yes, we can send chat messages by creating a Responder Call handler in the bot.
Could you please try to implement the sample code?
The sample code has a class named "ResponderCallHandler.cs"; please have a look.
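If you are not working from the C# sample directly, the heart of the responder handler is answering the incoming call notification via the Graph "answer" action. Below is a rough sketch of that HTTP call in Java; the endpoint and body shape follow the Graph calls/answer documentation, while token acquisition and parsing of the notification (to get the call id from resourceData.id) are assumed and not shown:

    // Rough sketch: answer an incoming call notification via Microsoft Graph.
    // Assumes an app access token is already available (acquisition not shown).
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class AnswerIncomingCall {

        public static void answer(String callId, String accessToken, String callbackUri)
                throws Exception {
            // Body shape per the Graph "call: answer" action; service-hosted media,
            // audio only, matching a simple responder bot.
            String body = "{"
                    + "\"callbackUri\": \"" + callbackUri + "\","
                    + "\"acceptedModalities\": [\"audio\"],"
                    + "\"mediaConfig\": { \"@odata.type\": \"#microsoft.graph.serviceHostedMediaConfig\" }"
                    + "}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://graph.microsoft.com/v1.0/communications/calls/"
                            + callId + "/answer"))
                    .header("Authorization", "Bearer " + accessToken)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // 202 Accepted means Graph took the request; subsequent call-state
            // updates arrive as further notifications on the callbackUri.
            System.out.println("Answer returned HTTP " + response.statusCode());
        }
    }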
I am trying to understand the model received from Kafka, which is emitted by Neo4j Streams to a Kafka topic.
The data looks like the below.
{
    "meta": {
        "timestamp": 1608725179199,
        "username": "neo4j",
        "txId": 16443,
        "txEventId": 0,
        "txEventsCount": 1,
        "operation": "created",
        "source": {
            "hostname": "9945"
        }
    },
    "payload": {
        "id": "1000",
        "before": null,
        "after": {
            "properties": {
                "name": "sdfdf"
            },
            "labels": [
                "aaq"
            ]
        },
        "type": "node"
    },
    "schema": {
        "properties": {
            "name": "String"
        },
        "constraints": []
    }
}
So, to consume this kind of complex structured data from Kafka in Spring Boot, do we need to create a nested model?
I mean roughly four classes nested inside each other?
From my understanding, I am trying to create the classes below (a code sketch follows after the question details).
meta (1st class)
    operation: "created"
payload (2nd class, alongside meta inside the top-level event)
    id
    before
    after (3rd class, nested inside payload)
        properties (4th class, nested within after; this is the only data we need to store)
        labels
    type
I haven't faced this kind of nesting before, so I don't have a clear idea of how to proceed.
Is the above approach right, or are there other possibilities?
The ultimate goal is to consume the data from the Kafka topic emitted by Neo4j Streams.
Language: Java
Framework: Spring Boot
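This is roughly what I have in mind, a minimal sketch assuming Jackson for the JSON mapping and spring-kafka for consumption (class, topic, and group names are just placeholders):

    // Sketch of the nested model for the Neo4j Streams CDC event, plus a listener.
    // meta and payload are siblings in the event, so they are separate classes
    // nested inside the top-level event class rather than inside each other.
    import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    import java.util.List;
    import java.util.Map;

    @Component
    public class Neo4jStreamsConsumer {

        private final ObjectMapper mapper = new ObjectMapper();

        @JsonIgnoreProperties(ignoreUnknown = true)
        public static class StreamEvent {
            public Meta meta;
            public Payload payload;
        }

        @JsonIgnoreProperties(ignoreUnknown = true)
        public static class Meta {
            public long timestamp;
            public String username;
            public String operation;   // "created", "updated", "deleted"
        }

        @JsonIgnoreProperties(ignoreUnknown = true)
        public static class Payload {
            public String id;
            public NodeState before;
            public NodeState after;
            public String type;        // "node" or "relationship"
        }

        @JsonIgnoreProperties(ignoreUnknown = true)
        public static class NodeState {
            public Map<String, Object> properties;  // e.g. {"name": "sdfdf"}
            public List<String> labels;             // e.g. ["aaq"]
        }

        // Topic and group are placeholders; use the topic Neo4j Streams publishes to.
        @KafkaListener(topics = "neo4j-topic", groupId = "neo4j-consumer")
        public void consume(String message) throws Exception {
            StreamEvent event = mapper.readValue(message, StreamEvent.class);
            System.out.printf("op=%s node=%s props=%s%n",
                    event.meta.operation,
                    event.payload.id,
                    event.payload.after != null ? event.payload.after.properties : null);
        }
    }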
I have a PostConfirmation Lambda trigger that adds data to DynamoDB via GraphQL (an AppSync HTTPS call), and then I query for that info in the PreTokenGeneration Lambda.
When I test manually via my app's UI, things work.
But when executing via Jest tests, more than 50% of the time I get an error because the expected record is not found in DynamoDB.
The problem doesn't occur when I test manually via the app UI, only when executing via Jest tests.
I checked the CloudWatch timestamps for the PostConfirmation DynamoDB record addition and for PreTokenGeneration, and checked the createdAt in DynamoDB. The timestamps look OK.
For instance:
The PostConfirmation log entry that says the record was added has a timestamp of 2020-08-24T17:51:06.463Z.
The DynamoDB createdAt for the added record says it was created at 2020-08-24T17:51:06.377Z.
The PostConfirmation Lambda's "END RequestId: xxxxx" entry has a timestamp of 2020-08-24T17:51:06.465-05:00.
The PreTokenGeneration Lambda starts at 2020-08-24T17:51:12.866Z, and at 2020-08-24T17:51:13.680Z the query result says it didn't find any record.
Can someone help me or give me a hint about why this happens and/or how I can troubleshoot this problem? Thank you in advance.
Taking into account the answer from @noel-llevares, I modified the VTL template to include ConsistentRead=true, but the problem remains.
Here is the RequestMapping logged for the save operation:
{
    "logType": "RequestMapping",
    "path": [
        "createAccountMember"
    ],
    "fieldName": "createAccountMember",
    "resolverArn": "arn:aws:appsync:xx-xxxx-x:111111111:apis/<redacted>/types/Mutation/resolvers/createAccountMember",
    "requestId": "<redacted>",
    "context": {
        "arguments": {
            "input": {
                "id": "<redacted>",
                "userID": "<redacted>",
                "accountID": "<redacted>",
                "membershipStatus": "active",
                "groupsEnrolledIn": [
                    <redacted>
                ],
                "recordOwner": "<redacted>",
                "createdAt": "2020-08-25T05:11:10.917Z",
                "updatedAt": "2020-08-25T05:11:10.917Z",
                "__typename": "AccountMember"
            }
        },
        "stash": {},
        "outErrors": []
    },
    "fieldInError": false,
    "errors": [],
    "parentType": "Mutation",
    "graphQLAPIId": "<redacted>",
    "transformedTemplate": "\n\n\n\n\n\n\n\n{\n \"version\": \"2018-05-29\",\n \"operation\": \"PutItem\",\n \"key\": {\n \"id\": {\"S\":\"<redacted>\"}\n} ,\n \"attributeValues\": {\"accountID\":{\"S\":\"<redacted>\"},\"createdAt\":{\"S\":\"2020-08-25T05:11:10.917Z\"},\"recordOwner\":{\"S\":\"<redacted>\"},\"__typename\":{\"S\":\"AccountMember\"},\"id\":{\"S\":\"<redacted>\"},\"membershipStatus\":{\"S\":\"active\"},\"userID\":{\"S\":\"<redacted>\"},\"groupsEnrolledIn\":{\"L\":[{\"S\":\"<redacted>\"},{\"S\":\"<redacted>\"},{\"S\":\"<redacted>\"}]},\"updatedAt\":{\"S\":\"2020-08-25T05:11:10.917Z\"}},\n \"condition\": {\"expression\":\"attribute_not_exists(#id)\",\"expressionNames\":{\"#id\":\"id\"}}\n}\n"
}
The ResponseMapping logged for the save operation
{
    "logType": "ResponseMapping",
    "path": [
        "createAccountMember"
    ],
    "fieldName": "createAccountMember",
    "resolverArn": "<redacted>",
    "requestId": "<redacted>",
    "context": {
        "arguments": {
            "input": {
                "id": "<redacted>",
                "userID": "<redacted>",
                "accountID": "<redacted>",
                "membershipStatus": "active",
                "groupsEnrolledIn": [
                    "<redacted>",
                    "<redacted>",
                    "<redacted>"
                ],
                "recordOwner": "<redacted>",
                "createdAt": "2020-08-25T05:11:10.917Z",
                "updatedAt": "2020-08-25T05:11:10.917Z",
                "__typename": "AccountMember"
            }
        },
        "result": {
            "accountID": "<redacted>",
            "createdAt": "2020-08-25T05:11:10.917Z",
            "recordOwner": "<redacted>",
            "__typename": "AccountMember",
            "id": "<redacted>",
            "membershipStatus": "active",
            "userID": "<redacted>",
            "groupsEnrolledIn": [
                "<redacted>",
                "<redacted>",
                "<redacted>"
            ],
            "updatedAt": "2020-08-25T05:11:10.917Z"
        },
        "stash": {},
        "outErrors": []
    },
    "fieldInError": false,
    "errors": [],
    "parentType": "Mutation",
    "graphQLAPIId": "<redacted>",
    "transformedTemplate": "{\"accountID\":\"<redacted>\",\"createdAt\":\"2020-08-25T05:11:10.917Z\",\"recordOwner\":\"<redacted>\",\"__typename\":\"AccountMember\",\"id\":\"<redacted>\",\"membershipStatus\":\"active\",\"userID\":\"<redacted>\",\"groupsEnrolledIn\":[\"<redacted>\",\"<redacted>\",\"<redacted>\"],\"updatedAt\":\"2020-08-25T05:11:10.917Z\"}\n"
}
Here is the RequestMapping logged for the list operation. You can see consistentRead=true:
{
    "logType": "RequestMapping",
    "path": [
        "listAccountMembers"
    ],
    "fieldName": "listAccountMembers",
    "resolverArn": "<redacted>",
    "requestId": "<redacted>",
    "context": {
        "arguments": {
            "filter": {
                "userID": {
                    "eq": "<redacted>"
                }
            }
        },
        "stash": {},
        "outErrors": []
    },
    "fieldInError": false,
    "errors": [],
    "parentType": "Query",
    "graphQLAPIId": "<redacted>",
    "transformedTemplate": " \n{\"version\":\"2018-05-29\",\"limit\":100,\"consistentRead\":true,\"filter\":{\"expression\":\"(#userID = :userID_eq)\",\"expressionNames\":{\"#userID\":\"userID\"},\"expressionValues\":{\":userID_eq\":{\"S\":\"<redacted>\"}}},\"operation\":\"Scan\"}"
}
Here is the ResponseMapping logged. You can see the result is an empty array (items: []) even though the record was added previously and we specified consistentRead=true for the query.
{
    "logType": "ResponseMapping",
    "path": [
        "listAccountMembers"
    ],
    "fieldName": "listAccountMembers",
    "resolverArn": "<redacted>",
    "requestId": "<redacted>",
    "context": {
        "arguments": {
            "filter": {
                "userID": {
                    "eq": "<redacted>"
                }
            }
        },
        "result": {
            "items": [],
            "nextToken": "<redacted>",
            "scannedCount": 100
        },
        "stash": {},
        "outErrors": []
    },
    "fieldInError": false,
    "errors": [],
    "parentType": "Query",
    "graphQLAPIId": "<redacted>",
    "transformedTemplate": "\n{\"items\":[],\"nextToken\":\"<redacted>\",\"scannedCount\":100,\"startedAt\":null}\n"
}
What else could I be missing?
UPDATE02
I found the possible cause. It's because I'm new to how DynamoDB works. A Query gets results by key; a Scan with no key involved reads all the records up to the limit (100 in my case) and only then applies the filter. So if the added record is not in the first 100 results, it can't be found unless I go through the paging (not good for my specific need).
TL;DR: I changed the query to use a @key directive with userID as the key field, and the problem is gone because that field is backed by a GSI and the number of records I expect to retrieve within such a partition is much less than the 100 limit. I'll add this as part of the answer as soon as I finish undoing the tweaks I previously made.
DynamoDB is eventually consistent by default.
According to the documentation,
When you read data from a DynamoDB table, the response might not reflect the results of a recently completed write operation. The response might include some stale data. If you repeat your read request after a short time, the response should return the latest data.
If you need to read what you just wrote immediately, you can opt for strongly consistent reads. This is usually done by setting ConsistentRead to true in your DynamoDB calls.
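For illustration only, this is what a strongly consistent read looks like with the AWS SDK for Java v2 (table name and key are placeholders); in an AppSync VTL resolver the equivalent is the "consistentRead": true field:

    // Illustration: a strongly consistent GetItem with the AWS SDK for Java v2.
    import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
    import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
    import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;
    import software.amazon.awssdk.services.dynamodb.model.GetItemResponse;

    import java.util.Map;

    public class ConsistentReadExample {
        public static void main(String[] args) {
            try (DynamoDbClient dynamo = DynamoDbClient.create()) {
                GetItemRequest request = GetItemRequest.builder()
                        .tableName("AccountMember")   // placeholder table name
                        .key(Map.of("id", AttributeValue.builder().s("some-id").build()))
                        .consistentRead(true)         // read-after-write on the base table
                        .build();

                GetItemResponse response = dynamo.getItem(request);
                System.out.println(response.item());
            }
        }
    }

Note that strongly consistent reads are only supported against a table's primary index; queries on a global secondary index are always eventually consistent.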
I found the root cause. A Query gets results by key; a Scan with no key involved reads all the records up to the limit (100 in my case) and only then applies the filter. So if the added record is not in the first 100 results, it can't be found unless I go through paging (not good for my specific need). I didn't notice this because I'm new to how DynamoDB works, but thanks to @noel-llevares I did some more in-depth research and found a solution.
The solution was to change the query to use the @key directive named "byUsername" that exists on the AccountMember type, with userID as the key field. The problem is gone because that field is backed by a GSI, and the number of records I expect to retrieve within such a partition is much less than the 100 limit.
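To illustrate the difference from the Scan above, the @key-based resolver effectively issues a Query against that GSI. Roughly the equivalent with the AWS SDK for Java v2 (table and index names are illustrative; note that GSI queries are always eventually consistent, but the few seconds between the two triggers has been enough in my case so far):

    // Illustration: querying the "byUsername" GSI by userID instead of scanning
    // the whole table and filtering afterwards.
    import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
    import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
    import software.amazon.awssdk.services.dynamodb.model.QueryRequest;
    import software.amazon.awssdk.services.dynamodb.model.QueryResponse;

    import java.util.Map;

    public class QueryByUserIdExample {
        public static void main(String[] args) {
            try (DynamoDbClient dynamo = DynamoDbClient.create()) {
                QueryRequest request = QueryRequest.builder()
                        .tableName("AccountMember")   // placeholder table name
                        .indexName("byUsername")      // GSI with userID as the partition key
                        .keyConditionExpression("#userID = :userID")
                        .expressionAttributeNames(Map.of("#userID", "userID"))
                        .expressionAttributeValues(Map.of(
                                ":userID", AttributeValue.builder().s("some-user-id").build()))
                        .build();

                // Only the items in this user's partition are read, so the 100-item
                // page limit no longer hides the newly created record.
                QueryResponse response = dynamo.query(request);
                System.out.println(response.items());
            }
        }
    }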
I am not getting "nextPageToken" in the response object when trying to retrieve the list of users who subscribed to our channels using the YouTube Data API (v3) Subscriptions endpoint. For some reason YouTube is not returning "nextPageToken" even though the channel below has more than 100K subscribers, so could you please advise me on how to fetch the next pages of subscribers? The same behavior happens with any of the channels from our CMS account:
Request:
https://www.googleapis.com/youtube/v3/subscriptions?onBehalfOfContentOwner=xxxx&onBehalfOfContentOwnerChannel=xxxxxxxxxxx&fields=items(contentDetails,id,snippet(publishedAt,channelId),subscriberSnippet(title,description)),nextPageToken,pageInfo,tokenPagination&maxResults=50&mySubscribers=true&part=id,snippet,contentDetails,subscriberSnippet&key=xxxxxxxxxxxxxxxx
Here is a sample response snippet (I trimmed the other 48 items from the list below and intentionally masked the subscriber details):
{
    "items": [
        {
            "snippet": {
                "channelId": "UCUR8UieACc2QXl7waH821hQ",
                "publishedAt": "2014-05-20T19:50:44.000Z"
            },
            "contentDetails": {
                "newItemCount": 0,
                "activityType": "all",
                "totalItemCount": 51
            },
            "subscriberSnippet": {
                "description": "",
                "title": "Sebastian Brentsworth"
            },
            "id": "MVPSEm5kMooIHMvcBKqbtFJAp1dHw0GeHza2Iq5KXP"
        },
        {
            "snippet": {
                "channelId": "UCYs04YSyy1soNzyvsDljYVg",
                "publishedAt": "2014-05-28T22:39:30.000Z"
            },
            "contentDetails": {
                "newItemCount": 0,
                "activityType": "all",
                "totalItemCount": 51
            },
            "subscriberSnippet": {
                "description": "",
                "title": "Jason Chan"
            },
            "id": "Xd7_fS3FIA4rnSu6NXEfxF8trXzL8-LspvIuYtDMmc0"
        }
    ],
    "pageInfo": {
        "resultsPerPage": 50,
        "totalResults": 144403
    }
}
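For reference, this is the pagination pattern I expect to use, shown as a rough Java sketch (BASE_URL stands for the full subscriptions request above, and the JSON handling is deliberately crude). Because nextPageToken never appears, this loop stops after the first page:

    // Rough sketch: keep requesting with the pageToken parameter until the
    // response no longer contains nextPageToken.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class SubscriberPager {

        // Placeholder for the full request shown above (key, part, mySubscribers,
        // onBehalfOfContentOwner, maxResults, ...).
        private static final String BASE_URL =
                "https://www.googleapis.com/youtube/v3/subscriptions?...";
        private static final Pattern NEXT_PAGE =
                Pattern.compile("\"nextPageToken\"\\s*:\\s*\"([^\"]+)\"");

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            String pageToken = null;

            do {
                String url = BASE_URL + (pageToken == null ? "" : "&pageToken=" + pageToken);
                HttpResponse<String> response = client.send(
                        HttpRequest.newBuilder(URI.create(url)).GET().build(),
                        HttpResponse.BodyHandlers.ofString());

                // ... process response.body() items here ...

                Matcher m = NEXT_PAGE.matcher(response.body());
                pageToken = m.find() ? m.group(1) : null;  // currently always null
            } while (pageToken != null);
        }
    }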
"Known" (Hopefully also to Google) bug:
https://code.google.com/p/gdata-issues/issues/detail?id=7163 and youtube.subscriptions.list (api v3) - nextPageToken isn't available
For the time being, I've come up with a token generator as a workaround (see the other SO post, or here: https://gist.github.com/pulsar256/f5621e85ef50711adc6f).