google assistant saying "sorry, it looks like the camera doesn't support streaming to remote screens" - ip-camera

I am trying to integrate my camera with Google Assistant for streaming. After syncing, I asked Google Assistant to "play hall camera on office TV" so that an EXECUTE command would reach my webhook, but I am not receiving any EXECUTE request. It just says "sorry, it looks like the hall camera doesn't support streaming to remote screens".
Below is the SYNC response:
{
"payload": {
"agentUserId": "USER3103",
"devices": [
{
"traits": [
"action.devices.traits.CameraStream"
],
"willReportState": false,
"name": {
"name": "hall camera"
},
"attributes": {
"cameraStreamNeedAuthToken": true,
"cameraStreamSupportedProtocols": [
"webrtc"
],
"cameraStreamNeedDrmEncryption": false
},
"id": "210XXXXXXXX",
"type": "action.devices.types.CAMERA",
"deviceInfo": {
"swVersion": "0.0.0",
"model": "11010",
"manufacturer": "WIFICAMERA",
"hwVersion": "HD_1.2.3"
}
}
]
},
"requestId": "ff36a3cc-ec34-11e6-b1a0-64510650abcf"
}

Remote screens such as Chromecast don't support the WebRTC protocol. In the above case cameraStreamSupportedProtocols contains only webrtc, which is why it says "Sorry, it looks like the camera doesn't support streaming to remote screens". Other protocols such as hls and progressive_mp4 will work fine.
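For instance, the attributes block of the SYNC response above could additionally declare a Chromecast-friendly protocol; the protocol list below is just an illustrative sketch, and your EXECUTE handler then has to return a stream URL for whichever protocol the Assistant requests:
"attributes": {
  "cameraStreamNeedAuthToken": true,
  "cameraStreamSupportedProtocols": [
    "webrtc",
    "hls",
    "progressive_mp4"
  ],
  "cameraStreamNeedDrmEncryption": false
}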


AdaptiveCard not rendering in Bot Framework Emulator / Web Chat

Really hoping someone can help out on this.
What I'm trying to achieve is a no-code chat bot using QnAMaker.ai and Azure Bot Services, with AdaptiveCards to serve rich content.
I have a knowledge base set up and published, and a bot in Azure set up to serve that content, and it seems to work okay at this first stage.
Now I'm trying to add AdaptiveCards without opening and editing the solution in VSCode - I really want to keep all this contained in a no-code solution.
I Googled how to add custom cards/content and found this post by LiveTiles - excellent - I thought, I can just add minified JSON and it will render what I want - lovely stuff!
However, despite there being a live output render on the LiveTiles site, when I take that JSON I cannot get it to render through either Web Chat or the Bot Framework Emulator.
I've tried...
Copy/pasting the raw JSON into a QnAPair
{
"contentType": "application/vnd.microsoft.card.adaptive",
"content": {
"type": "AdaptiveCard",
"version": "1.0",
"body": [
{
"type": "Image",
"url": "",
"size": "stretch",
"selectAction": {
"type": "Action.OpenUrl",
"title": "Test",
"url": "https://www.livetiles.nyc/"
}
},
{
"type": "TextBlock",
"text": "This is an adaptive card - if this renders it means it's worked!",
"wrap": true
}
],
"actions": [
{
"type": "Action.Submit",
"title": "Let's get started!",
"url": "Let's get started!"
}
]
}
}
Copy/pasting minified JSON into a QnAPair
{"contentType":"application/vnd.microsoft.card.adaptive","content":{"type":"AdaptiveCard","version":"1.0","body":[{"type":"Image","url":"","size":"stretch","selectAction":{"type":"Action.OpenUrl","title":"Test","url":"https://www.livetiles.nyc/"}},{"type":"TextBlock","text":"This is an adaptive card - if this renders it means it's worked!","wrap":true}],"actions":[{"type":"Action.Submit","title":"Let's get started!","url":"Let's get started!"}]}}
Making a Source Excel File (which includes the JSON) and adding that to the knowledge base
All my attempts end up with the bot spitting the actual JSON at me when I ask it. Not the lovely rendered card I wanted.
It renders on the LiveTiles site, but it doesn't render in the Emulator, in Web Chat, or in the QnAMaker.ai test function.
Really hoping someone can offer some insight or advice on this.
Please try the JSON below; it works for me:
{
//"contentType": "application/vnd.microsoft.card.adaptive",
//"content": {
"type": "AdaptiveCard",
"version": "1.0",
"body": [
{
"type": "Image",
"url": "",
"size": "stretch",
"selectAction": {
"type": "Action.OpenUrl",
"title": "Test",
"url": "https://www.livetiles.nyc/"
}
},
{
"type": "TextBlock",
"text": "This is an adaptive card - if this renders it means it's worked!",
"wrap": true
}
],
"actions": [
{
"type": "Action.Submit",
"title": "Let's get started!",
"url": "Let's get started!"
}
]
//}
}
I have added a screenshot below, please check.
Code for sending the card as an attachment:
var cardAttachment = Common.CreateAdaptiveCardAttachment();
await turnContext.SendActivityAsync(MessageFactory.Attachment(cardAttachment), cancellationToken);
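Common.CreateAdaptiveCardAttachment() isn't shown above; a minimal sketch of such a helper, assuming the card JSON is saved in a file called adaptiveCard.json next to the bot, could look like this:
using System.IO;
using Microsoft.Bot.Schema;
using Newtonsoft.Json;

public static class Common
{
    // Loads the card JSON from disk and wraps it in a Bot Framework attachment.
    // The file name is an assumption; point it at wherever your card JSON lives.
    public static Attachment CreateAdaptiveCardAttachment()
    {
        var cardJson = File.ReadAllText("adaptiveCard.json");
        return new Attachment
        {
            ContentType = "application/vnd.microsoft.card.adaptive",
            Content = JsonConvert.DeserializeObject(cardJson),
        };
    }
}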

how to make callto: work in an adaptive card

I am trying to initiate a call when pressing a button within an adaptive card (in Microsoft Teams).
I added the URL as callto:[useremail].
Even though it works when I type it into the search bar in Chrome, pressing the button in the adaptive card gives me an error on the Chrome page.
Any idea why this might happen?
Edit:
Here's a sample card of what I used:
{
"$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
"type": "AdaptiveCard",
"version": "1.0",
"body": [
{
"type": "Container",
"items": [
{
"type": "ColumnSet",
"columns": [
{
"type": "Column",
"width": "auto",
"items": [
{
"size": "small",
"style": "person",
"type": "Image",
"url": "https://pbs.twimg.com/profile_images/3647943215/d7f12830b3c17a5a9e4afcc370e3a37e_400x400.jpeg",
"selectAction":{
"type": "Action.OpenUrl",
"url": "callto:rex#gmail.com"
}
}
]
}
]
}
]
}
]
}
It looks like links inside Teams don't support any protocols other than the usual HTTP/S. You can turn your callto link into an https link using a redirect URL service like these: https://www.cnet.com/news/10-links-to-shorten-your-links/
If you need to generate your callto links dynamically, I'm not sure how many of those services have APIs your bot can use. TinyURL does though.
It's also pretty simple to just host your own redirection service in your own domain. You could even use the same domain that your bot is running on, so your link might end up looking something like this: https://rexbot.azurewebsites.net/api/callto/rex#gmail.com
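As a rough illustration of that self-hosted approach, an ASP.NET Core minimal-API project could expose a single redirect route like the sketch below (the route shape mirrors the example link above and is only an assumption):
// Program.cs of a hypothetical ASP.NET Core minimal-API project hosted on your bot's domain.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// e.g. GET /api/callto/someone%40example.com -> 302 redirect to callto:someone@example.com
// (the address segment must be URL-encoded by whoever builds the card).
app.MapGet("/api/callto/{address}", (string address) =>
    Results.Redirect($"callto:{address}"));

app.Run();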
Also, you might consider getting support for this from Teams directly. You can request that they support more URL protocols.

Skype not showing images received

I am trying to create a bot and deploy it onto different platforms, but when I return images in the carousel for the chatbot, Skype doesn't render them, while the same works for Facebook and even on the web widget provided by Gupshup.
In case you want to know, the chatbot platform I'm using is Gupshup with API.AI hooked in for NLP. Do you know what problem I am encountering?
I have tried different ways of getting the image: first I got it from the site I am making the chatbot for, then I tried shortening the URL using Google, and finally I tried uploading the image to Google Drive. The image format is JPG; could that be the issue?
http://imgur.com/a/19aG3
[Update:7/5/2017] - Skype issue is now resolved. The carousel code that is working for Facebook Messenger will also work for Skype.
Sample JSON for Carousel on Skype:
{
"type": "catalogue",
"msgid": "cat_212",
"items": [{
"title": "White T Shirt",
"subtitle": "Soft cotton t-shirt \nXs, S, M, L \n$10",
"imgurl": "https://pixel.nymag.com/imgs/fashion/daily/2016/06/02/t-shirt/everlane.w710.h473.2x.jpg",
"options": [{
"type": "url",
"title": "View Details",
"url": "https://pixel.nymag.com/imgs/fashion/daily/2016/06/02/t-shirt/everlane.w710.h473.2x.jpg"
},
{
"type": "text",
"title": "Buy"
}
]
},
{
"title": "Grey T Shirt",
"subtitle": "Soft cotton t-shirt \nXs, S, M, L \n$12",
"imgurl": "https://cdn.shopify.com/s/files/1/0407/0829/products/Grey_t-shirt_front_1024x1024.jpg?v=1466588290",
"options": [{
"type": "url",
"title": "View Details",
"url": "https://cdn.shopify.com/s/files/1/0407/0829/products/Grey_t-shirt_front_1024x1024.jpg?v=1466588290"
},
{
"type": "text",
"title": "Buy"
}
]
}
]}
Result:
Images are not loading for the carousel on Skype because there have been recent changes in the Microsoft framework which are not yet incorporated into Gupshup's bot platform. However, next week you should be able to see those images load once we at Gupshup push the fix into production. There will be no need to change the code on your end, as the same carousel code works across the supported platforms.
I will update this answer once the fix is live in production.
PS: I work for Gupshup.

Detect the speaker of Google Home or Amazon's Alexa

I would like to detect who is interacting with my agent.
For example, I read that Alexa should be able to detect different users. The Google Home advertisement also led me to think that it should detect who is talking. So how can I see who is talking?
In Slack it seems to be easier, since it is well known who is writing. However, I cannot see how to get the current user.
I found out how to detect the user in Slack: if you implement the webhook you will get this example JSON:
{
"id": "f7912345-e21c-450f-a8ca-d01e38012345",
"timestamp": "2016-12-20T06:53:51.071Z",
"result": {
"source": "agent",
"resolvedQuery": "echo hallo welt",
"speech": "",
"action": "",
"actionIncomplete": false,
"parameters": {
"myInput": "hallo welt"
},
"contexts": [{
"name": "generic",
"parameters": {
"slack_user_id": "U0AT12345",
"myInput": "hallo welt",
"slack_channel": "D3DR12345",
"myInput.original": "hallo welt"
},
"lifespan": 4
}],
"metadata": {
"intentId": "06212345-06a0-40fe-bbeb-9189db412345",
"webhookUsed": "true",
"webhookForSlotFillingUsed": "false",
"intentName": "Response"
},
"fulfillment": {
"speech": "",
"messages": [{
"type": 0,
"speech": ""
}]
},
"score": 0.75
},
"status": {
"code": 200,
"errorType": "success"
},
"sessionId": "10612345-c681-11e6-af08-875120912345",
"originalRequest": {
"source": "slack_testbot",
"data": {
"channel": "D3DR12345",
"match": ["echo hallo welt"],
"text": "echo hallo welt",
"team": "T04H12345",
"type": "message",
"event": "direct_message",
"user": "U0AT12345",
"ts": "1482216830.000005"
}
}
}
So in the case of Slack you can access result->contexts[0]->parameters->slack_user_id.
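To illustrate, with Newtonsoft.Json on the webhook side, a hypothetical helper that digs that field out of the payload above could look like this:
using Newtonsoft.Json.Linq;

public static class WebhookHelpers
{
    // Hypothetical helper: extracts the Slack user id from the API.AI webhook
    // payload shown above; returns null when the request did not come from Slack.
    public static string GetSlackUserId(string requestBody)
    {
        var request = JObject.Parse(requestBody);
        // result -> contexts[0] -> parameters -> slack_user_id
        return (string)request.SelectToken("result.contexts[0].parameters.slack_user_id");
    }
}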
Google Home does not (at least currently) have a way to handle multiple users on the same device.
Google Home keeps improving (even removing development hurdles I've faced with their recent updates). It can now be trained to know your voice vs someone else's voice.
Tomato, tomahto. Google Home now supports multiple users

How can I interpret/perceive the presence of buttons in the response of the Direct Line API?

Consider that there is an action card response from the MS bot, and it looks as follows in Skype:
When this same response comes through the REST API, i.e. using the Direct Line API, the following is the relevant part of the JSON response:
{
"id": "1t90Ym3PEry|000000000000000014",
"conversationId": "1t90Ym3PEry",
"created": "2016-12-06T09:34:55.6280699Z",
"from": "rich3cards",
"images": [
"https://upload.wikimedia.org/wikipedia/commons/thumb/7/7c/Seattlenighttimequeenanne.jpg/320px-Seattlenighttimequeenanne.jpg"
],
"attachments": [],
"eTag": "W/\"datetime'2016-12-06T09%3A34%3A54.94083Z'\""
},
{
"id": "1t90Ym3PEry|000000000000000014",
"conversationId": "1t90Ym3PEry",
"created": "2016-12-06T09:34:55.6280699Z",
"from": "rich3cards",
"text": "Hero Card\n\nSpace Needle\n\nThe <b>Space Needle</b> is an observation tower in Seattle, Washington, a landmark of the Pacific Northwest, and an icon of Seattle.\n\n(Current Weather) action?weather=Seattle, WA",
"images": [],
"attachments": [],
"eTag": "W/\"datetime'2016-12-06T09%3A34%3A54.94083Z'\""
}
Now, the question is: how do we parse this JSON to get the button data [(Current Weather) action?weather=Seattle, WA] out of the text attribute? Is pattern matching the only way?
Has anyone faced this or knows of a solution? Please shed some light here too ;)
Update: If it's a different channel like Skype/Web Chat/etc., the JSON response looks much more proper to consume; the following is the sample JSON:
{
"type": "message",
"id": "5AdoK89rtSc|000000000000000018",
"timestamp": "2016-12-06T09:53:20.4777291Z",
"channelId": "webchat",
"from": {
"id": "rich3cards",
"name": "RichCards"
},
"conversation": {
"id": "5AdoK89rtSc"
},
"attachments": [
{
"contentType": "application/vnd.microsoft.card.hero",
"content": {
"title": "Hero Card",
"subtitle": "Space Needle",
"text": "The <b>Space Needle</b> is an observation tower in Seattle, Washington, a landmark of the Pacific Northwest, and an icon of Seattle.",
"images": [
{
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/7/7c/Seattlenighttimequeenanne.jpg/320px-Seattlenighttimequeenanne.jpg"
}
],
"buttons": [
{
"type": "postBack",
"value": "action?weather=Seattle, WA",
"title": "Current Weather"
}
]
}
}
]
}
As mentioned in the comments, you are using DirectLine v1.1. Unfortunately, v1.1 doesn't support attachments/cards and so there isn't a good way to understand/parse the card.
You might want to consider moving to DirectLine v3 which has full support for attachments.
Alternatively, if you want to support cards, you might have to do something custom as shown in the DirectLine sample. There, the bot sends the hero card through the ChannelData field and the client parses it accordingly. However, you might have to add logic to detect who is talking to the bot, so that you send the cards as ChannelData only if the caller is Direct Line and not one of the other clients (such as Skype).
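Once you are on Direct Line v3 and receive the webchat-style activity shown above, reading the buttons back out is straightforward; here is a hypothetical sketch using Newtonsoft.Json:
using System;
using Newtonsoft.Json.Linq;

public static class ActivityHelpers
{
    // Hypothetical helper: prints title/value for every hero-card button
    // in a Direct Line v3 activity such as the webchat sample above.
    public static void PrintHeroCardButtons(string activityJson)
    {
        var activity = JObject.Parse(activityJson);
        foreach (var attachment in activity["attachments"] ?? new JArray())
        {
            if ((string)attachment["contentType"] != "application/vnd.microsoft.card.hero")
                continue;

            foreach (var button in attachment["content"]?["buttons"] ?? new JArray())
            {
                // e.g. "Current Weather -> action?weather=Seattle, WA"
                Console.WriteLine($"{button["title"]} -> {button["value"]}");
            }
        }
    }
}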
