Search message extension OAuth card styling - microsoft-teams

I have a requirement to style the auth card in a search message extension: change the link color (instead of the default blue) and change the text. I tried passing all the parameters, but nothing appears to have an effect. I also tried generating an Adaptive Card with text containing a link, but the link doesn't have a styling option.
This is the code I am using right now:
return new MessagingExtensionResponse
{
    ComposeExtension = new MessagingExtensionResult
    {
        Type = "auth",
        SuggestedActions = new MessagingExtensionSuggestedAction
        {
            Actions = new List<CardAction>
            {
                new CardAction
                {
                    Type = ActionTypes.OpenUrl,
                    Value = signInLink,
                    Title = "Sign in"
                },
            },
        },
    },
};
Adaptive Card JSON tried:
{
    "type": "TextBlock",
    "text": "Please [Sign in](https://adaptivecards.io)"
}
I tried to replace the card action as suggested below, but it didn't work.
private async Task<MessagingExtensionResponse> GetAuthCard(ITurnContext turnContext, CancellationToken cancellationToken)
{
    // Retrieve the OAuth sign-in link
    string signInLink = await GetSignInLinkAsync(turnContext, cancellationToken).ConfigureAwait(false);

    AuthCardModel auth = new();
    auth.AdaptiveCardPath = Path.Combine(".", "Resources", "AuthCardTemplate.json");
    auth.signinLink = signInLink;

    Attachment adaptiveCard = CreateAdaptiveCardActivity(auth.AdaptiveCardPath, auth);
    _logger.LogWarning($"[TeamsBot LogIn adaptive card] -> {JsonConvert.SerializeObject(adaptiveCard)}");

    MessagingExtensionAttachment attachment = new MessagingExtensionAttachment
    {
        ContentType = ThumbnailCard.ContentType,
        Content = adaptiveCard.Content,
    };

    return new MessagingExtensionResponse
    {
        ComposeExtension = new MessagingExtensionResult
        {
            Type = "auth",
            AttachmentLayout = "list",
            Attachments = new List<MessagingExtensionAttachment> { attachment },
        },
    };
}
AuthCardTemplate.json:
{
    "type": "AdaptiveCard",
    "body": [
        {
            "type": "TextBlock",
            "text": "__ * * Sign in * * __"
        }
    ],
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "version": "1.3"
}

Teams does not support card action button styling in cards; this is by design. Markdown is a simple way to format text that looks great on any device. It doesn't do anything fancy like change the font size, color, or type; it covers just the essentials.
Documentation: https://learn.microsoft.com/en-us/adaptive-cards/authoring-cards/text-features#markdown-commonmark-subset
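To illustrate what the supported CommonMark subset can and cannot do, here is a minimal TextBlock sketch (the sign-in URL is a placeholder): bold and italics render, but the link keeps the host theme's default color, and there is no property to override it.

```json
{
    "type": "AdaptiveCard",
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "version": "1.3",
    "body": [
        {
            "type": "TextBlock",
            "text": "Please **[Sign in](https://example.com/signin)** to continue",
            "wrap": true
        }
    ]
}
```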

Related

Adding prompt options hides text on Android devices

I am sending a message activity and choice prompts using a Hero card, as below:
firstMessage.inputHint = InputHints.IgnoringInput;
await stepContext.context.sendActivity(firstMessage);

secondMessage = MessageFactory.carousel(boltOnDataMap);
await stepContext.context.sendActivity(secondMessage);

let getNameByType: Object = {
    intent: "intent.bundle.get_name_by_type",
    entities: [
        {
            entity: "giga",
            type: "ent.mobile_data",
            score: 1,
            canon: "giga"
        }
    ]
};

let promptChoices: CardAction[];
promptChoices = [
    {
        value: ssoUrl,
        type: ActionTypes.OpenUrl,
        title: getText("show-button:button.txt"),
    },
    {
        value: getNameByType,
        type: ActionTypes.PostBack,
        title: getText("show-button-2:button2.txt"),
    },
];

const attachment: Partial<Activity> = MessageFactory.attachment(
    CardFactory.heroCard(null, null, promptChoices)
);

const promptOptions: PromptOptions = {
    prompt: {
        ...attachment,
        inputHint: InputHints.ExpectingInput,
    },
    choices: ChoiceFactory.toChoices(promptChoices),
};

return await stepContext.prompt(
    this.promptsNames.CHOOSE_BOLTONS,
    promptOptions
);
Everything is displayed properly on iOS devices, but on Android the firstMessage does not get displayed (see the attached screenshot).
If I remove the prompt options, firstMessage is displayed on the Android device as well.
I checked this link, Waiting method in Bot Framework v4, to confirm I am adding the prompt options correctly, and the code looks similar.
Am I doing something wrong that is causing the text (firstMessage) not to be displayed on Android?

C# Microsoft Bot Schema Teams Image cropping off the top and bottom

I had a question regarding the Microsoft Bot Framework for Teams. Whenever my bot sends an adaptive card, the top and bottom of the photo get cut off. Inside the adaptive card is the hero card image, and it seems I'm unable to resize it to make it fit. I've tried making the image smaller and larger to see whether that would fix the issue. Below is a screenshot of the issue I am having.
I'm hoping someone has run into the same issue and can say whether this is fixable or not. Thank you.
Image being used: https://imgur.com/a/hkcSkrJ
public async Task<SendResult> SendAsync(NotificationTeamsAttempt attempt)
{
    try
    {
        if (string.IsNullOrWhiteSpace(attempt.ConversationId))
            throw new Exception("Conversation Id is required.");
        if (string.IsNullOrWhiteSpace(attempt.ServiceUrl))
            throw new Exception("Service Url is required.");

        using (var connector = new ConnectorClient(new Uri(attempt.ServiceUrl), _clientId, _clientSecret))
        {
            var activity = MessageFactory.Text("");
            activity.Attachments.Add(attempt.Attachment());
            activity.Summary = attempt.Summary();
            var response = await connector.Conversations.SendToConversationAsync(attempt.ConversationId, activity);
            return new SendResult
            {
                IsSuccess = true,
                DispatchId = response.Id
            };
        }
    }
    catch (Exception exception)
    {
        return new SendResult
        {
            IsSuccess = false,
            Exception = exception
        };
    }
}
public override Attachment Attachment()
{
    var card = new ThumbnailCard
    {
        Title = "Post submitted for review by " + DraftAuthor,
        Subtitle = DraftTitle,
        Text = DraftDescription,
        Images = new List<CardImage>(),
        Buttons = new List<CardAction>()
    };
    if (!string.IsNullOrWhiteSpace(TeamsUrl))
    {
        card.Buttons.Add(new CardAction
        {
            Type = "openUrl",
            Title = "Review in Teams",
            Value = TeamsUrl.Replace("null", $"%22post%7C{DraftId}%7C{DraftId}%22")
        });
    }
    if (!string.IsNullOrWhiteSpace(SPUrl))
    {
        card.Buttons.Add(new CardAction
        {
            Type = "openUrl",
            Title = "Review in SharePoint",
            Value = $"{SPUrl}?postId={DraftId}&sourceId={DraftId}"
        });
    }
    return card.ToAttachment();
}
Please disregard the black lines I've added; below you can see where the image is cropping off.
(Image of the cropping.)
Moving comment to answer:
We are using the JSON below and we get the perfect image without cropping:
{
    "type": "AdaptiveCard",
    "body": [
        {
            "type": "TextBlock",
            "size": "Medium",
            "weight": "Bolder",
            "text": "card image test"
        },
        {
            "type": "Container",
            "items": [
                {
                    "title": "Public test 1",
                    "type": "Image",
                    "url": "https://i.imgur.com/OiJNN03.jpeg"
                }
            ]
        }
    ],
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "version": "1.0"
}
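To actually send a card like this from a bot, the JSON needs to be wrapped in an attachment carrying the Adaptive Card content type. A minimal sketch in plain JavaScript; `toAdaptiveCardAttachment` is a hypothetical helper, equivalent to `CardFactory.adaptiveCard` from the botbuilder package, and the card body is trimmed down to the image.

```javascript
// Wrap Adaptive Card JSON in a Bot Framework attachment object.
// (Hypothetical helper; botbuilder's CardFactory.adaptiveCard does the same.)
function toAdaptiveCardAttachment(card) {
  return {
    contentType: "application/vnd.microsoft.card.adaptive",
    content: card,
  };
}

const card = {
  type: "AdaptiveCard",
  $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
  version: "1.0",
  body: [
    { type: "Image", url: "https://i.imgur.com/OiJNN03.jpeg" }
  ],
};

const attachment = toAdaptiveCardAttachment(card);
// attachment can then be passed to MessageFactory.attachment(...) and sent.
```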

Initiate message action on MS Teams and read the content of the message

I have a working composeExtension (message extension) in Microsoft Teams.
One of the commands is a query; selecting an item in the list posts a hero card in the chat.
The next thing I am looking for is triggering an action command on the message and generating an adaptive card preloaded with information from the message itself.
After doing some research and going over the MS documentation - Define messaging extension action commands.
The solution below is in Node.js.
Step 1: Update Manifest File
...
"composeExtensions": [
    ...
    "commands": [
        ...
        {
            "id": "Action",
            "type": "action",
            "title": "Action",
            "description": "Test command to run action on message context (message sharing)",
            "context": [ "message" ]
        }
        ...
    ]
    ...
]
...
Step 2: Handling the request
async handleTeamsMessagingExtensionFetchTask(context, action) {
    ...
    if (action.commandId === 'Action') {
        // Here you can return anything, e.g. an adaptive card.
        // The helper function below reads the content of the message.
        let cardContent = await this.getAdaptiveCardContent(context);
        let cardData = JSON.parse(cardContent);
    }
    ...
}

async getAdaptiveCardContent(context) {
    if (!context.hasOwnProperty("_activity") || !context._activity.hasOwnProperty("value")) {
        return null;
    }
    let messagePayload = context._activity.value.messagePayload;
    if (messagePayload) {
        for (let i = 0, attachment; i < messagePayload.attachments.length; i++) {
            attachment = messagePayload.attachments[i];
            // Match the card type of the selected message (a hero card in my case).
            if (attachment.contentType === "application/vnd.microsoft.card.hero") {
                return attachment.content;
            }
        }
    }
    return null;
}
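For completeness, the fetch-task handler is expected to return a task module response. Below is a minimal sketch of that shape in plain JavaScript, assuming the message content has already been extracted as above; the card body and `buildFetchTaskResponse` helper are illustrative, not part of the SDK.

```javascript
// Build a messaging-extension fetch-task response that opens a task module
// containing an Adaptive Card prefilled from the selected message.
function buildFetchTaskResponse(prefilledText) {
  const card = {
    contentType: "application/vnd.microsoft.card.adaptive",
    content: {
      type: "AdaptiveCard",
      version: "1.3",
      body: [
        // Prefill the input with content read from the message payload.
        { type: "Input.Text", id: "sharedText", value: prefilledText }
      ],
      actions: [{ type: "Action.Submit", title: "Send" }],
    },
  };
  return {
    task: {
      type: "continue",
      value: { card: card, title: "Share message", height: 250, width: 400 },
    },
  };
}

const response = buildFetchTaskResponse("text from the selected message");
```

The `type: "continue"` wrapper is what tells Teams to render the card in a task module rather than treating it as a final result.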

Microsoft Bot returns error on Facebook for HeroCard

I created a bot using C# that sends a hero card with buttons. It works fine with Web Chat and the Emulator, but not on Facebook: I'm getting the error below whenever I send the message with an attachment to Facebook.
SendActivityToUserAsync FAILED:
{
    "error": {
        "message": "(#100) Web url cannot be empty for url type button",
        "type": "OAuthException",
        "code": 100,
        "error_subcode": 2018041,
        "fbtrace_id": "C0s4Cxs3g+s"
    }
}
Operation returned an invalid status code 'BadRequest'
// code for the attachment
var heroCard = new HeroCard
{
    // title of the card
    Title = "",
    // subtitle of the card
    Subtitle = "",
    // list of buttons
    Buttons = new List<CardAction>
    {
        new CardAction()
        {
            Title = "Show my calendar",
            Text = "Show my calendar",
            Type = ActionTypes.ImBack
        },
        new CardAction()
        {
            Title = "Show my day",
            Text = "Show my day",
            Type = ActionTypes.ImBack
        }
    }
};
You are missing the Value field in your card actions. The following code works; the Value field is what provides the payload for your ImBack action.
var heroCard = new HeroCard
{
    // title of the card
    Title = "",
    // subtitle of the card
    Subtitle = "",
    // list of buttons
    Buttons = new List<CardAction>
    {
        new CardAction()
        {
            Title = "Show my calendar",
            Text = "Show my calendar",
            Type = ActionTypes.ImBack,
            Value = "123"
        },
        new CardAction()
        {
            Title = "Show my day",
            Text = "Show my day",
            Type = ActionTypes.ImBack,
            Value = "456"
        }
    }
};
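Since the Facebook channel rejects buttons whose payload or URL is empty, it can help to validate card actions before sending. A small sketch in plain JavaScript; `findInvalidActions` is a hypothetical helper, not part of the SDK.

```javascript
// Return the card actions that lack a usable value, since some channels
// (e.g. Facebook) reject buttons whose payload or URL is empty.
function findInvalidActions(actions) {
  return actions.filter(
    (a) => a.value === undefined || a.value === null || a.value === ""
  );
}

const buttons = [
  { type: "imBack", title: "Show my calendar", value: "show calendar" },
  { type: "imBack", title: "Show my day" }, // missing value -> would be rejected
];

const invalid = findInvalidActions(buttons);
```

Running such a check before `SendToConversationAsync` turns the opaque channel error into an actionable one.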

Trouble with an undefined object in Alexa Skills Custom Intents

I am currently attempting to make an Alexa skill that will issue a "Find My iPhone" alert to my Apple devices when I give Alexa the correct prompts. I am quite new to developing for the Alexa skill set, and to coding in general (especially in Node.js). Here is my code:
var phoneId = "I have my values here";
var ipadId = "I have my values here";
var macId = "I have my values here";
var deviceId = "";

var APP_ID = ''; // replace with "amzn1.echo-sdk-ams.app.[your-unique-value-here]";

var AlexaSkill = require('./AlexaSkill');
var alexaResponse;

// Import Apple.js
var Apple = require('./Apple');
var apple = new Apple();

var alertSuccess = "Alert sent to Kenny's phone";
var alertFailed = "Alert couldn't be sent to Kenny's phone. Good luck finding it.";

var FindDevice = function () {
    AlexaSkill.call(this, APP_ID);
};

// Extend AlexaSkill
FindDevice.prototype = Object.create(AlexaSkill.prototype);
FindDevice.prototype.constructor = FindDevice;

FindDevice.prototype.eventHandlers.onSessionStarted = function (sessionStartedRequest, session) {
    console.log("Quote onSessionStarted requestId: " + sessionStartedRequest.requestId
        + ", sessionId: " + session.sessionId);
    // any initialization logic goes here
};

FindDevice.prototype.eventHandlers.onLaunch = function (launchRequest, session, response) {
    console.log("Quote onLaunch requestId: " + launchRequest.requestId + ", sessionId: " + session.sessionId);
    getWelcomeResponse(response);
};

FindDevice.prototype.eventHandlers.onSessionEnded = function (sessionEndedRequest, session) {
    console.log("Quote onSessionEnded requestId: " + sessionEndedRequest.requestId
        + ", sessionId: " + session.sessionId);
    // any cleanup logic goes here
};

FindDevice.prototype.intentHandlers = {
    // register custom intent handlers
    "FindDeviceIntent": function (intent, session, response) {
        determineDevice(intent, session, response);
    }
};

/**
 * Returns the welcome response for when a user invokes this skill.
 */
function getWelcomeResponse(response) {
    // If we wanted to initialize the session to have some attributes we could add those here.
    var speechText = "Welcome to the Lost Device. Which device shall I find?";
    var repromptText = "<speak>Please choose a category by saying, " +
        "iPhone <break time=\"0.2s\" /> " +
        "Mac <break time=\"0.2s\" /> " +
        "iPad <break time=\"0.2s\" /></speak>";
    var speechOutput = {
        speech: speechText,
        type: AlexaSkill.speechOutputType.PLAIN_TEXT
    };
    var repromptOutput = {
        speech: repromptText,
        type: AlexaSkill.speechOutputType.SSML
    };
    response.ask(speechOutput, repromptOutput);
}

function determineDevice(intent, session, response) {
    var deviceSlot = intent.slots.Device;
    if (deviceSlot == "iPhone") {
        deviceId = phoneId;
        pingDevice(deviceId, response);
    } else if (deviceSlot == "iPad") {
        deviceId = ipadId;
        pingDevice(deviceId, response);
    } else if (deviceSlot == "Mac") {
        deviceId = macId;
        pingDevice(deviceId, response);
    } else {
        var speechText = "None of those are valid devices. Please try again.";
        var speechOutput = {
            speech: speechText,
            type: AlexaSkill.speechOutputType.PLAIN_TEXT
        };
        response.tell(speechOutput);
    }
}

// Note: response must be passed in; it is not otherwise in scope here.
function pingDevice(deviceId, response) {
    apple.sendAlert(deviceId, 'Glad you found your phone.', function (success, result) {
        if (success) {
            console.log("Alert Sent Successfully");
            var speechOutput = alertSuccess;
            response.tell(speechOutput);
        } else {
            console.log("Alert Unsuccessful");
            console.log(result);
            var speechOutput = alertFailed;
            response.tell(speechOutput);
        }
    });
}

// Create the handler that responds to the Alexa Request.
exports.handler = function (event, context) {
    // Create an instance of the FindDevice skill.
    var findDevice = new FindDevice();
    findDevice.execute(event, context);
};
Here is the error from Lambda:
{
    "errorMessage": "Cannot read property 'PLAIN_TEXT' of undefined",
    "errorType": "TypeError",
    "stackTrace": [
        "getWelcomeResponse (/var/task/index.js:87:42)",
        "FindDevice.eventHandlers.onLaunch (/var/task/index.js:58:5)",
        "FindDevice.LaunchRequest (/var/task/AlexaSkill.js:10:37)",
        "FindDevice.AlexaSkill.execute (/var/task/AlexaSkill.js:91:24)",
        "exports.handler (/var/task/index.js:137:16)"
    ]
}
I understand that there is an undefined object here, but for the life of me I can't figure out where the code is going wrong. I am trying to take the slot from my intent and then change the device to ping based on the slot word used. Because I am so new to this, a lot of the coding is being done by patching things together. I did find that when I removed the .PLAIN_TEXT lines altogether, the code ran in Lambda but then broke in the Alexa Skills test area. I have a hunch I don't understand how slots from intents are passed, but I am having trouble finding material I can understand on that matter. Any help would be fantastic!
Within the determineDevice function, you're accessing the "Device" slot object directly rather than the actual value passed in, so it will never match your pre-defined set of device names.
A slot object has a name and a value - if you take a look at the Service Request JSON in the Service Simulator within the Developer Console when you're testing your Alexa skill, you'll see something like the following:
{
    "session": {
        "sessionId": "SessionId.dd05eb31-ae83-4058-b6d5-df55fbe51040",
        "application": {
            "applicationId": "amzn1.ask.skill.61dc6132-1727-4e56-b194-5996b626cb5a"
        },
        "attributes": {},
        "user": {
            "userId": "amzn1.ask.account.XXXXXXXXXXXX"
        },
        "new": false
    },
    "request": {
        "type": "IntentRequest",
        "requestId": "EdwRequestId.6f083909-a831-495f-9b55-75be9f37a9d7",
        "locale": "en-GB",
        "timestamp": "2017-07-23T22:14:45Z",
        "intent": {
            "name": "AnswerIntent",
            "slots": {
                "person": {
                    "name": "person",
                    "value": "Fred"
                }
            }
        }
    },
    "version": "1.0"
}
Note I have a slot called "person" in this case, but to get the value in the slot, you need to access the "value" property. In your example, you would change the first line of the determineDevice function to:
var deviceSlot = intent.slots.Device.value;
As an aside, I've found the Alexa-cookbook GitHub repository to be an essential resource for learning how to work with the Alexa SDK; it has examples for most scenarios.
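As a defensive variant of that fix: a slot's value can be missing entirely when Alexa fails to capture one, so it is worth guarding the lookup. A small sketch in plain JavaScript; `getSlotValue` is a hypothetical helper, and the intent object mimics the request JSON shown above.

```javascript
// Safely read a slot's value from an IntentRequest intent, returning null
// when the slot or its value is absent (e.g. the user said something unexpected).
function getSlotValue(intent, slotName) {
  if (!intent || !intent.slots || !intent.slots[slotName]) {
    return null;
  }
  return intent.slots[slotName].value || null;
}

const intent = {
  name: "FindDeviceIntent",
  slots: { Device: { name: "Device", value: "iPhone" } },
};

const device = getSlotValue(intent, "Device"); // "iPhone"
const missing = getSlotValue(intent, "Color"); // null
```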
