Detect the speaker of Google Home or Amazon's Alexa - Slack

I would like to detect who is interacting with my agent.
For example, I read that Alexa should be able to detect different users. The Google Home advertising also makes me think that it should detect who is talking. So how can I see who is talking?
In Slack it seems to be easier, since it is well known who is writing. However, I cannot see how to get the current user.

I found out how to detect the user in Slack: if you implement the fulfillment webhook, you will receive JSON like this example:
{
  "id": "f7912345-e21c-450f-a8ca-d01e38012345",
  "timestamp": "2016-12-20T06:53:51.071Z",
  "result": {
    "source": "agent",
    "resolvedQuery": "echo hallo welt",
    "speech": "",
    "action": "",
    "actionIncomplete": false,
    "parameters": {
      "myInput": "hallo welt"
    },
    "contexts": [{
      "name": "generic",
      "parameters": {
        "slack_user_id": "U0AT12345",
        "myInput": "hallo welt",
        "slack_channel": "D3DR12345",
        "myInput.original": "hallo welt"
      },
      "lifespan": 4
    }],
    "metadata": {
      "intentId": "06212345-06a0-40fe-bbeb-9189db412345",
      "webhookUsed": "true",
      "webhookForSlotFillingUsed": "false",
      "intentName": "Response"
    },
    "fulfillment": {
      "speech": "",
      "messages": [{
        "type": 0,
        "speech": ""
      }]
    },
    "score": 0.75
  },
  "status": {
    "code": 200,
    "errorType": "success"
  },
  "sessionId": "10612345-c681-11e6-af08-875120912345",
  "originalRequest": {
    "source": "slack_testbot",
    "data": {
      "channel": "D3DR12345",
      "match": ["echo hallo welt"],
      "text": "echo hallo welt",
      "team": "T04H12345",
      "type": "message",
      "event": "direct_message",
      "user": "U0AT12345",
      "ts": "1482216830.000005"
    }
  }
}
So in the case of Slack you can access result->contexts[0]->parameters->slack_user_id (the same id also appears under originalRequest->data->user).
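As a minimal sketch of reading that field in a fulfillment webhook (assuming Flask and the v1-style speech/displayText reply; both are only for illustration, not part of the original setup):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    slack_user_id = None
    # Look for the "generic" context that carries the Slack metadata.
    for context in body.get("result", {}).get("contexts", []):
        params = context.get("parameters", {})
        if "slack_user_id" in params:
            slack_user_id = params["slack_user_id"]
            break
    # Fallback: originalRequest -> data -> user holds the same Slack user id.
    if slack_user_id is None:
        slack_user_id = body.get("originalRequest", {}).get("data", {}).get("user")
    reply = "You are {}".format(slack_user_id)
    return jsonify({"speech": reply, "displayText": reply})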

Google Home does not (at least currently) have a way to handle multiple users on the same device.

Google Home keeps improving (recent updates have even removed development hurdles I had faced). It can now be trained to recognize your voice versus someone else's voice.
Tomato, tomahto: Google Home now supports multiple users.

Related

Body always empty when answering an invoke

I have a live Microsoft Teams application with a couple of working functionalities. Right now I am working on a new feature where the user clicks a button in a previously sent Adaptive Card and I open another Adaptive Card with a form for the user to fill in. I am using this documentation as a base, which has an example really similar to what I am trying to achieve here.
Since I am working with an Adaptive Card, the button that opens my form card is an Action.Submit button with "msteams": {"type": "task/fetch"} inside its data. When I click that button I receive the correct invoke call; the body looks like this:
{
  "type": "invoke",
  "timestamp": "2021-02-01T20:19:34.327Z",
  "localTimestamp": "2021-02-01T15:19:34.327-05:00",
  "id": "f:955407977095344101",
  "channelId": "msteams",
  "serviceUrl": "https://smba.trafficmanager.net/amer/",
  "from": {
    "id": "censured",
    "name": "censured",
    "aadObjectId": "censured"
  },
  "conversation": {
    "conversationType": "personal",
    "tenantId": "censured",
    "id": "censured"
  },
  "recipient": {
    "id": "censured",
    "name": "Tcensured"
  },
  "entities": [
    {
      "locale": "censured",
      "country": "censured",
      "platform": "censured",
      "timezone": "censured",
      "type": "clientInfo"
    }
  ],
  "channelData": {
    "tenant": {
      "id": "censured"
    },
    "source": {
      "name": "message"
    },
    "legacy": {
      "replyToId": "censured"
    }
  },
  "replyToId": "censured",
  "value": {
    "commandId": "reply_feedback",
    "requested_feedback_id": 1,
    "type": "composeExtension/fetchTask"
  },
  "locale": "censured",
  "localTimezone": "censured"
}
It seems like the invoke call is correct, so the next step is to answer the call with my Adaptive Card. However, no matter what I answer this call with, when I inspect the invoke call in my browser the response is always empty, even if I answer with a really simple task response like this one (which I got from the documentation):
{
  "task": {
    "type": "continue",
    "value": {
      "title": "Task module title",
      "height": 500,
      "width": "medium",
      "url": "https://contoso.com/msteams/taskmodules/newcustomer",
      "fallbackUrl": "https://contoso.com/msteams/taskmodules/newcustomer"
    }
  }
}
Or even
{
  "task": {
    "type": "message",
    "value": "Test"
  }
}
I still get an empty response on the front-end side. I am fairly confident that I am properly answering the call with data on my side; a lot of other features in this same application work, so I don't think the problem is that I am actually returning an empty body. Maybe for this specific invoke call I need to answer in a different manner?
Note: I am using Python with no SDKs, so I build the Adaptive Cards and interpret the requests in my application myself.
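To make concrete what "answering the call" looks like on my side, here is a minimal sketch (Flask is used purely for illustration; the relevant part is that the task object is returned as the JSON body of the HTTP 200 response to the invoke request itself, not posted as a new activity):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/messages", methods=["POST"])
def messages():
    activity = request.get_json()
    if activity.get("type") == "invoke":
        # Reply to the invoke directly: status 200, JSON body = the task response.
        return jsonify({
            "task": {
                "type": "continue",
                "value": {
                    "title": "Task module title",
                    "height": 500,
                    "width": "medium",
                    "url": "https://contoso.com/msteams/taskmodules/newcustomer",
                    "fallbackUrl": "https://contoso.com/msteams/taskmodules/newcustomer"
                }
            }
        }), 200
    # Other activity types are acknowledged with an empty 200.
    return "", 200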

AdaptiveCard not rendering in Bot Framework Emulator / Web Chat

Really hoping someone can help out on this.
What I'm trying to achieve: a no-code chat bot using QnAMaker.ai and Azure Bot Services, with Adaptive Cards to serve rich content.
I have a knowledge base set up and published, and a bot in Azure set up to serve that content; it seems to work okay at this first stage.
Now I'm trying to add Adaptive Cards without opening and editing the solution in VS Code - I really want to keep all this contained in a no-code solution.
I Googled how to add custom cards/content and found this post by LiveTiles - excellent - I thought, I can just add minified JSON and it will render what I want - lovely stuff!
However, despite there being a live output render on the LiveTiles site, when I take that JSON I cannot get it to render in either Web Chat or the Bot Framework Emulator.
I've tried...
Copy/pasting the raw JSON into a QnAPair
{
  "contentType": "application/vnd.microsoft.card.adaptive",
  "content": {
    "type": "AdaptiveCard",
    "version": "1.0",
    "body": [
      {
        "type": "Image",
        "url": "",
        "size": "stretch",
        "selectAction": {
          "type": "Action.OpenUrl",
          "title": "Test",
          "url": "https://www.livetiles.nyc/"
        }
      },
      {
        "type": "TextBlock",
        "text": "This is an adaptive card - if this renders it means it's worked!",
        "wrap": true
      }
    ],
    "actions": [
      {
        "type": "Action.Submit",
        "title": "Let's get started!",
        "url": "Let's get started!"
      }
    ]
  }
}
Copy/pasting minified JSON into a QnAPair
{"contentType":"application/vnd.microsoft.card.adaptive","content":{"type":"AdaptiveCard","version":"1.0","body":[{"type":"Image","url":"","size":"stretch","selectAction":{"type":"Action.OpenUrl","title":"Test","url":"https://www.livetiles.nyc/"}},{"type":"TextBlock","text":"This is an adaptive card - if this renders it means it's worked!","wrap":true}],"actions":[{"type":"Action.Submit","title":"Let's get started!","url":"Let's get started!"}]}}
Making a Source Excel File (which includes the JSON) and adding that to the knowledge base
All my attempts end up with the bot spitting the actual JSON back at me when I ask it, not the lovely rendered card I wanted.
(Screenshots: the card renders on the LiveTiles site, but does not render in the Emulator, in Web Chat, or in the QnAMaker.ai test function.)
Really hoping someone can offer some insight or advice on this.
Please try the JSON below, it works for me:
{
  //"contentType": "application/vnd.microsoft.card.adaptive",
  //"content": {
  "type": "AdaptiveCard",
  "version": "1.0",
  "body": [
    {
      "type": "Image",
      "url": "",
      "size": "stretch",
      "selectAction": {
        "type": "Action.OpenUrl",
        "title": "Test",
        "url": "https://www.livetiles.nyc/"
      }
    },
    {
      "type": "TextBlock",
      "text": "This is an adaptive card - if this renders it means it's worked!",
      "wrap": true
    }
  ],
  "actions": [
    {
      "type": "Action.Submit",
      "title": "Let's get started!",
      "url": "Let's get started!"
    }
  ]
  //}
}
I have added a screenshot below, please check.
Code for sending the card as an attachment:
var cardAttachment = Common.CreateAdaptiveCardAttachment();
await turnContext.SendActivityAsync(MessageFactory.Attachment(cardAttachment), cancellationToken);

Why is my Alexa ReportState directive response not working?

I want to enable Alexa voice control for my smart home device. I was able to discover the device, and all devices are now showing in the Alexa app. But when I try to turn on the device from the Alexa app it gets stuck: the loader keeps spinning indefinitely. At that point Alexa is actually calling the ReportState directive.
This is the JSON that I am getting from the Alexa app for a light. The light only has turn-on and turn-off capabilities.
{
"directive": {
"endpoint": {
"cookie": {
"detail1": "For simplicity, this is the only appliance",
"detail2": "that has some values in the additionalApplianceDetails"
},
"endpointId": "endpoint-001",
"scope": {
"token": "weza|IwEBIGu_tmpSTQaEPvhm0OYy-4ncjve_Au1788TAWR2DC8b7xJlPDiX3HV3rJUtG0qyauIlman4bX4ZCK0-6NvKWagqXNLSdH3bDBLxD_9VtgCQo6wUlEd4DNmL9Yf5sWuUCkV1ALAxxbhqPs3QlTofubxtpSnF05ZWOSjyNUlM3ShryLh7owTywFa_7oXCCaLdLCTiqOm27aPn-yyJEDNG57Sc9iysrZkJHaxVPbdZdcqRmaw9zFGVWOqsgjqiojkKrfztslVL1Ggo6v7Teg8isrZD8osr5HFkWAmZHi8K7UrHmwQnsD9CosgSxSG0avnUoomdsZx3_LPjLJKf5twJrN1vbLolzOgxUbVuAVPVrs8UN40KFEu6eCv_7rYz9AER_61di-4w1K27kjeJvzPMIKlLXLvv6Z-2GyuQq_8M1fUdM0SgiAkqjf92S9SNxezTUiDYdOjB1JrktbQc0WM6OYYXOMjtXcCPx3bqNwWoPZWBk7qptLTurCHcYnnDl27Q0RcJ3u1vFvMaT8l0x87K6wqW2",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "AAAAAAQAeXUb9VLQcUVXClbXZQBvIDAIAAAAAAAAiBMdYahxBjRIHYbFACdRe+68uyc0KiCkClvpOCfh5dZw7NlTHoqnbbjPPydl4Nmkh4KLuFtKboYiwENwsVa9Q2WwAgRlEM+SR9PSNrWqnKvKDtulnkVXuTDkHf8f4LskbFd4VhX6cN518TA0MaZZvSfli9CN7KNY7m07P+eIv71nwxUFP5UN4xe4Jsz1V6nLzUGAG2jJIW4Lg0ARHENqDhbFtra4SV+vPXUN8L4qIwvC5xD6/mjsdN7B1ihGy/8djQA2+cxZ3XOEz2UOATyPEDlpVw5PBasQiJbRiSFSZZqEvQ0NHNfPWAWz5ieQXO1z1NAE5RMgn9d5gcEfDecjScP9DE2Yw43MypX/3VMDJmbjuTlhg9AabxLTQndKV8w9JNM1lLXcdp7i2JShOLO0bDDBPqJH1zsiZGJ93zWn+VDOTzDt+482V/AWgcHOWYnB+UZnL9GZFwEKVWTcQ20u2inFK9J11M5wr3ia57WDP6SQ7zkAmERDGfL0wswN/j0vFpqw+0/G7vjAUs2hGyg9oOy7fN2PFntk6IHV8mh47sC+ENj9dujJ9+ENwfEwEi792m7WlA8PGtvxdEqyVib5hY3qfNirqPMhMmPBf2hZlpbUfpf69q9R8GNFq41EZnTlg/AxSBjjLUJazaKQ8RU1VgipcdK1aGupJf5Oi85uEuYWN96OoEtivhUTZXg==",
"messageId": "dd8670d5-3afa-483a-93a3-f0fff0ab6572",
"name": "ReportState",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
This is the response I am sending from my Lambda function, which is written in Python 3.6:
{
"event": {
"context": {
"properties": [
{
"name": "powerState",
"namespace": "Alexa.PowerController",
"timeOfSample": "2018-12-17T18:17:35.00Z",
"uncertaintyInMilliseconds": 500,
"value": "ON"
}
]
},
"endpoint": {
"cookie": {
"detail1": "For simplicity, this is the only appliance",
"detail2": "that has some values in the additionalApplianceDetails"
},
"endpointId": "endpoint-001",
"scope": {
"token": "weza|IwEBIGu_tmpSTQaEPvhm0OYy-4ncjve_Au1788TAWR2DC8b7xJlPDiX3HV3rJUtG0qyauIlman4bX4ZCK0-6NvKWagqXNLSdH3bDBLxD_9VtgCQo6wUlEd4DNmL9Yf5sWuUCkV1ALAxxbhqPs3QlTofubxtpSnF05ZWOSjyNUlM3ShryLh7owTywFa_7oXCCaLdLCTiqOm27aPn-yyJEDNG57Sc9iysrZkJHaxVPbdZdcqRmaw9zFGVWOqsgjqiojkKrfztslVL1Ggo6v7Teg8isrZD8osr5HFkWAmZHi8K7UrHmwQnsD9CosgSxSG0avnUoomdsZx3_LPjLJKf5twJrN1vbLolzOgxUbVuAVPVrs8UN40KFEu6eCv_7rYz9AER_61di-4w1K27kjeJvzPMIKlLXLvv6Z-2GyuQq_8M1fUdM0SgiAkqjf92S9SNxezTUiDYdOjB1JrktbQc0WM6OYYXOMjtXcCPx3bqNwWoPZWBk7qptLTurCHcYnnDl27Q0RcJ3u1vFvMaT8l0x87K6wqW2",
"type": "BearerToken"
}
},
"header": {
"correlationToken": "AAAAAAQAeXUb9VLQcUVXClbXZQBvIDAIAAAAAAAAiBMdYahxBjRIHYbFACdRe+68uyc0KiCkClvpOCfh5dZw7NlTHoqnbbjPPydl4Nmkh4KLuFtKboYiwENwsVa9Q2WwAgRlEM+SR9PSNrWqnKvKDtulnkVXuTDkHf8f4LskbFd4VhX6cN518TA0MaZZvSfli9CN7KNY7m07P+eIv71nwxUFP5UN4xe4Jsz1V6nLzUGAG2jJIW4Lg0ARHENqDhbFtra4SV+vPXUN8L4qIwvC5xD6/mjsdN7B1ihGy/8djQA2+cxZ3XOEz2UOATyPEDlpVw5PBasQiJbRiSFSZZqEvQ0NHNfPWAWz5ieQXO1z1NAE5RMgn9d5gcEfDecjScP9DE2Yw43MypX/3VMDJmbjuTlhg9AabxLTQndKV8w9JNM1lLXcdp7i2JShOLO0bDDBPqJH1zsiZGJ93zWn+VDOTzDt+482V/AWgcHOWYnB+UZnL9GZFwEKVWTcQ20u2inFK9J11M5wr3ia57WDP6SQ7zkAmERDGfL0wswN/j0vFpqw+0/G7vjAUs2hGyg9oOy7fN2PFntk6IHV8mh47sC+ENj9dujJ9+ENwfEwEi792m7WlA8PGtvxdEqyVib5hY3qfNirqPMhMmPBf2hZlpbUfpf69q9R8GNFq41EZnTlg/AxSBjjLUJazaKQ8RU1VgipcdK1aGupJf5Oi85uEuYWN96OoEtivhUTZXg==",
"messageId": "dd8670d5-3afa-483a-93a3-f0fff0ab6572",
"name": "StateReport",
"namespace": "Alexa",
"payloadVersion": "3"
},
"payload": {}
}
}
Please help me. I have been stuck on this for the last 2 days.
Not sure if this is related to your problem. In your response, the context element is inside event. But according to the documentation and code sample, context and event should be at the same level.
{
  "context": {
    "properties": [...]
  },
  "event": {
    "header": ...,
    "endpoint": ...,
    "payload": {}
  }
}
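A minimal sketch of what that corrected shape could look like in a Python 3.6 Lambda handler (the handler name and the hard-coded power state are illustrative only):

import copy

def lambda_handler(request, context):
    directive = request["directive"]
    header = copy.deepcopy(directive["header"])
    header["name"] = "StateReport"  # a ReportState directive is answered with a StateReport event
    return {
        "context": {
            "properties": [{
                "namespace": "Alexa.PowerController",
                "name": "powerState",
                "value": "ON",  # illustrative; report the real device state here
                "timeOfSample": "2018-12-17T18:17:35.00Z",
                "uncertaintyInMilliseconds": 500
            }]
        },
        "event": {
            "header": header,
            "endpoint": directive["endpoint"],
            "payload": {}
        }
    }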

Alexa Skill: Interaction model is not being updated during test

I am developing an Alexa Skill and I have an intent named NewAppointmentIntent which originally had 7 slots.
Yesterday I added a new slot named Doctor and successfully built the skill.
When I invoke that intent, it still has 7 slots and not 8. The Doctor slot does not appear in the request and response output.
The intent in images:
The output when invoking the intent, where the Doctor slot is expected in the slots attribute:
"request": {
"type": "IntentRequest",
"requestId": "amzn1.echo-api.request.9529849e-190d-4278-95a8-3702b3ee4d1c",
"timestamp": "2018-12-12T10:05:14Z",
"locale": "en-US",
"intent": {
"name": "NewAppointmentIntent",
"confirmationStatus": "NONE",
"slots": {
"Status": {
"name": "Status",
"confirmationStatus": "NONE"
},
"Comment": {
"name": "Comment",
"confirmationStatus": "NONE"
},
"ReasonForVisit": {
"name": "ReasonForVisit",
"confirmationStatus": "NONE"
},
"Time": {
"name": "Time",
"confirmationStatus": "NONE"
},
"EmergencyType": {
"name": "EmergencyType",
"confirmationStatus": "NONE"
},
"PatientNumber": {
"name": "PatientNumber",
"confirmationStatus": "NONE"
},
"Day": {
"name": "Day",
"confirmationStatus": "NONE"
}
}
},
"dialogState": "STARTED"
}
So I would like to know how to refresh the skill.
Close the window, re-open it, and in your console: Save the model -> Build the model. Then test again.
If it still doesn't show, click on the JSON Editor (the last option in the list of your interaction model). In your NewAppointmentIntent, can you see the Doctor slot in the slots array?
If not, then maybe something broke when you created the slot. Delete the Doctor slot and re-add it, then verify it again in the JSON editor; this should solve the problem.
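If you also want to check from the code side, a small sketch (no SDK assumed; names taken from the request above) that logs which slots actually arrive, so you can see when the rebuilt model with the Doctor slot goes live:

def log_incoming_slots(event):
    request = event["request"]
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "NewAppointmentIntent":
        slots = request["intent"].get("slots", {})
        # Should list 8 names, including "Doctor", once the new model is active.
        print("Slots received:", sorted(slots.keys()))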

@mention via incoming webhook in MS Teams

I'm trying to mention a user from an incoming webhook.
I tried a few iterations via Postman of
{
  "text": "test @user"
}
or
{
  "text": "test @user@email.com"
}
but none of these seem to work.
Is this simple but very important thing just not possible?
Thanks.
I'm afraid this isn't possible yet - the only way to do @mentions is by using the full Bot Framework APIs.
You're not the only one to have asked for this though, so I'll get it on the backlog.
This is now supported and documented here (https://learn.microsoft.com/en-us/microsoftteams/platform/task-modules-and-cards/cards/cards-format?tabs=adaptive-md%2Cconnector-html#user-mention-in-incoming-webhook-with-adaptive-cards).
Sample:
{
  "type": "message",
  "attachments": [
    {
      "contentType": "application/vnd.microsoft.card.adaptive",
      "content": {
        "type": "AdaptiveCard",
        "body": [
          {
            "type": "TextBlock",
            "size": "Medium",
            "weight": "Bolder",
            "text": "Sample Adaptive Card with User Mention"
          },
          {
            "type": "TextBlock",
            "text": "Hi <at>Adele UPN</at>, <at>Adele AAD</at>"
          }
        ],
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "version": "1.0",
        "msteams": {
          "entities": [
            {
              "type": "mention",
              "text": "<at>Adele UPN</at>",
              "mentioned": {
                "id": "AdeleV@contoso.onmicrosoft.com",
                "name": "Adele Vance"
              }
            },
            {
              "type": "mention",
              "text": "<at>Adele AAD</at>",
              "mentioned": {
                "id": "87d349ed-44d7-43e1-9a83-5f2406dee5bd",
                "name": "Adele Vance"
              }
            }
          ]
        }
      }
    }
  ]
}
If it helps anybody: after looking into this and seeing it couldn't be done (still!?), a workaround for me was to change the channel notification settings to banner + feed for all new posts for the relevant users in the channel. This eliminates the need to use the tag (if tagging the team).
It is supported in Teams now; however, the sample code does not work in the Microsoft card playground. I didn't know the code actually worked until I tried it in Postman.
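For anyone testing outside Postman, a minimal sketch of posting the sample above from Python (the requests library and the placeholder webhook URL are assumptions, not from the original posts):

import requests

WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/..."  # placeholder: your incoming webhook URL

payload = {
    "type": "message",
    "attachments": [{
        "contentType": "application/vnd.microsoft.card.adaptive",
        "content": {
            "type": "AdaptiveCard",
            "version": "1.0",
            "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
            "body": [{"type": "TextBlock", "text": "Hi <at>Adele UPN</at>"}],
            "msteams": {
                "entities": [{
                    "type": "mention",
                    "text": "<at>Adele UPN</at>",
                    "mentioned": {
                        "id": "AdeleV@contoso.onmicrosoft.com",
                        "name": "Adele Vance"
                    }
                }]
            }
        }
    }]
}

response = requests.post(WEBHOOK_URL, json=payload)
print(response.status_code, response.text)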
