Bot Framework Web Chat + speech + specific voice

I'm trying to get Web Chat with my bot (a V4 bot and Web Chat) to work with the Speech cognitive service, using a specific voice. I've got it almost working, as per this sample and others in the same folder (https://github.com/Microsoft/BotFramework-WebChat/blob/master/samples/06.c.cognitive-services-speech-services-js/index.html).
The only part of the equation I'm missing is whether I can specify the voice. I can't find any way to specify the voice in the samples or in the Web Chat source code.
This page, linked from the Speech cognitive service documentation (https://learn.microsoft.com/en-gb/azure/cognitive-services/speech-service/speech-synthesis-markup), mentions specifying the voice inside the SSML, but I don't want to have to crack open and modify the SSML generated by the bot if I can avoid it.
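For reference, the SSML route would mean the bot itself setting each outgoing activity's speak property. A minimal Node.js sketch of what I'm trying to avoid (the voice name is just the one I happen to want):

// Sketch: per-activity SSML in the "speak" property (the thing I'd rather avoid)
await context.sendActivity({
  type: 'message',
  text: 'Hello!',
  speak: `<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
    <voice name='Microsoft Server Speech Text to Speech Voice (en-US, BenjaminRUS)'>Hello!</voice>
  </speak>`
});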
Does anyone have any idea if this is possible, and if so how?
Thanks
Lee

OK, I came to an answer on this myself after looking at the ponyfill code. Partial snippet below. Update the list of voice-to-locale mappings to match the voice you want to use for each locale.
async function createSpeechPonyfillFactory({ authorizationToken, region }) {
  const speechServicesPonyfillFactory = await window.WebChat.createCognitiveServicesSpeechServicesPonyfillFactory({ authorizationToken, region });

  return options => {
    const ponyfill = speechServicesPonyfillFactory(options);
    const speechSynthesisUtterance = ponyfill.SpeechSynthesisUtterance;
    const speechSynthesis = ponyfill.speechSynthesis;

    // Override getVoices so Web Chat only ever "finds" the voice we want
    speechSynthesis.getVoices = function () {
      return [
        { lang: 'en-US', gender: 'Male', voiceURI: 'Microsoft Server Speech Text to Speech Voice (en-US, BenjaminRUS)' }
      ];
    };

    return {
      SpeechGrammarList: ponyfill.SpeechGrammarList,
      SpeechRecognition: ponyfill.SpeechRecognition,
      speechSynthesis: speechSynthesis,
      SpeechSynthesisUtterance: speechSynthesisUtterance
    };
  };
}
...
// Do the usual stuff from the sample to get the auth token and region, then:
const ponyfillFactory = await createSpeechPonyfillFactory({ authorizationToken, region });
...
window.WebChat.renderWebChat({
  directLine: directLine,
  webSpeechPonyfillFactory: ponyfillFactory,
  store
}, document.getElementById('webchat'));
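If you need different voices for different locales, the getVoices override above can return one entry per locale; Web Chat should then match on the activity's language. A sketch, with the en-GB voice name purely illustrative:

// Sketch: one voice entry per locale inside the getVoices override.
// The en-GB voice name below is illustrative only.
speechSynthesis.getVoices = function () {
  return [
    { lang: 'en-US', gender: 'Male', voiceURI: 'Microsoft Server Speech Text to Speech Voice (en-US, BenjaminRUS)' },
    { lang: 'en-GB', gender: 'Female', voiceURI: 'Microsoft Server Speech Text to Speech Voice (en-GB, HazelRUS)' }
  ];
};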

Related

Add support for various accents in Directline speech for Azure chat bot

We are working on a requirement where we need to support various accents for an Azure chat bot. Currently we have Direct Line Speech enabled as below.
(async function () {
  var authorizationToken;

  var speechServicesTokenRes = await fetch(
    'https://eastus.api.cognitive.microsoft.com/sts/v1.0/issuetoken',
    {
      method: 'POST',
      headers: {
        'Ocp-Apim-Subscription-Key': '***************'
      }
    });

  if (speechServicesTokenRes.status === 200) {
    authorizationToken = await speechServicesTokenRes.text();

    var webSpeechPonyfillFactory = await window.WebChat.createCognitiveServicesSpeechServicesPonyfillFactory({
      authorizationToken: authorizationToken,
      region: 'eastus'
    });

    window.WebChat.renderWebChat({
      directLine: window.WebChat.createDirectLine({
        secret: '********************'
      }),
      webSpeechPonyfillFactory: webSpeechPonyfillFactory
    }, document.getElementById('webchat'));
  }
})().catch(err => console.error(err));
Can anyone guide me on whether there is any way to customize Direct Line Speech for accents?
Unfortunately, Direct Line Speech does not support accents. The full list of supported languages, including regional variations (for example Spanish (Honduras) vs Spanish (Panama)) can be found here.
Direct Line Speech does support SSML (Speech Synthesis Markup Language), which has a rich set of associated features. Some options that may help you are (see the sketch after this list):
Adjusting speaking style to represent mood
Adding / removing breaks or pauses
Using phonemes to adjust pronunciation
Using custom lexicons to adjust pronunciation
Adjusting prosody (i.e., pitch, rate, volume, etc.)
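For example, a bot can shape its spoken output per activity through the activity's speak property, which Direct Line Speech synthesizes in place of the plain text. A minimal Node.js sketch; the voice name and the prosody/break values are illustrative only:

// Sketch: SSML in the outgoing activity's "speak" property.
// Voice name and adjustment values are illustrative.
await context.sendActivity({
  type: 'message',
  text: 'Welcome back! How can I help?',
  speak: `<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
    <voice name='Microsoft Server Speech Text to Speech Voice (en-US, JessaRUS)'>
      Welcome back!
      <break time='300ms' />
      <prosody rate='-10%' pitch='+5%'>How can I help?</prosody>
    </voice>
  </speak>`
});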
One last option is to create a custom neural voice, which "lets you create a one-of-a-kind customized synthetic voice for your applications."
In this case, you would provide audio and/or text samples in order to train the custom voice.
Hope this helps!

OnMembersAddedAsync is not getting called

I have a bot developed in .NET Core 3.1 (C#). I am sending an Adaptive Card in OnMembersAddedAsync. It is expected that as soon as the end customer opens the chat control, it should send the Adaptive Card. This works well in the Azure web chat control and the Emulator, but when I added it to a web site and open the chat control there, it does not work. The control waits for the end customer to send some message and only then sends the card. In the console log of BotFrame.html I can see that the DIRECT_LINE connection is established:
DIRECT_LINE/CONNECT_PENDING
DIRECT_LINE/UPDATE_CONNECTION_STATUS
DIRECT_LINE/UPDATE_CONNECTION_STATUS
DIRECT_LINE/CONNECT_FULFILLING
DIRECT_LINE/CONNECT_FULFILLED
My code for BotFrame.html is:
const store = window.WebChat.createStore({}, ({ dispatch }) => next => async action => {
  console.log(action.type);
  if (action.type === 'DIRECT_LINE/INCOMING_ACTIVITY') {
    …
  }
});
Even in the bot logs I cannot see OnMembersAddedAsync being called unless the customer sends a message. Am I missing anything? The same works well in the Azure web chat control and the Emulator.
This sample, 04.api/a.welcome-event, demonstrates the best practice for setting up Web Chat and your bot to send a welcome message.
In short, as shown in the sample, you send an event to your bot when Direct Line connects your bot and the client.
When the bot receives the event, it sends the actual welcome message/card/activity to the user via the client.
In Web Chat
const store = window.WebChat.createStore({}, ({ dispatch }) => next => action => {
  if (action.type === 'DIRECT_LINE/CONNECT_FULFILLED') {
    // When we receive DIRECT_LINE/CONNECT_FULFILLED, send an event activity using WEB_CHAT/SEND_EVENT
    dispatch({
      type: 'WEB_CHAT/SEND_EVENT',
      payload: {
        name: 'webchat/join',
        value: { language: window.navigator.language }
      }
    });
  }

  // Always pass the action along to the next middleware
  return next(action);
});
In Your Bot
(The sample bot is JavaScript; in a C# ActivityHandler, the equivalent is to override OnEventActivityAsync and check the incoming activity's Name.)
if (context.activity.name === 'webchat/join') {
  await context.sendActivity('Welcome, friend!!');
}
Hope this helps!

User and Bot messages appear on same side of chat container

I built a QnA Maker bot and integrated it into my website via Direct Line, using BotFramework-WebChat for styling.
Messages from the user and the bot appear on the same side of the chat container, and I can't figure out why.
This is how it currently looks:
This is the code I'm using:
<script>
  const styleSet = window.WebChat.createStyleSet({
    bubbleFromUserBackground: 'rgba(227, 227, 227, .1)',
    hideUploadButton: true,
    botAvatarInitials: 'WD',
    sendTypingIndicator: true,
    userAvatarInitials: 'you'
  });

  styleSet.textContent = Object.assign(
    {},
    styleSet.textContent,
    {
      fontFamily: '\'Lato\', sans-serif'
    }
  );

  window.WebChat.renderWebChat(
    {
      directLine: window.WebChat.createDirectLine({
        token: 'xxxxxx'
      }),
      styleSet,
      userID: 'qna-homepage-bot',
      username: 'Web Chat User',
      locale: 'en-US'
    },
    document.getElementById('webchat')
  );

  document.querySelector('#webchat > *').focus();
</script>
I wasn't able to reproduce this, but I suspect you are setting your user ID to the same value as the bot ID. When Web Chat receives an activity, it sets the role property in the activity's from attribute based on the ID (you can take a look at the source code here). Web Chat then uses the role to determine how the activity is styled. If the bot ID equals the user ID, Web Chat confuses the role attribute and applies the wrong CSS styling. Try changing the userID value in the renderWebChat options to something else.
Note that the userID value should be unique for each user; otherwise, every conversation will share the same user state.
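For example (the exact value is illustrative; anything that differs from the bot's ID and stays stable per user works):

window.WebChat.renderWebChat(
  {
    directLine: window.WebChat.createDirectLine({ token: 'xxxxxx' }),
    styleSet,
    userID: 'dl_user_12345', // illustrative: must differ from the bot ID and be unique per user
    username: 'Web Chat User',
    locale: 'en-US'
  },
  document.getElementById('webchat')
);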
Hope this helps!

How to use Azure Speech Service in Bot Framework Web Chat

I am using Bot Framework Web Chat and have correctly set up a front end for the user to chat with my bot. I am trying to enable speech for it, following the tutorial here: https://learn.microsoft.com/en-us/azure/bot-service/bot-service-channel-connect-webchat-speech?view=azure-bot-service-3.0
The problem is with Azure Speech Service: I set up my service correctly and set the key, but I am not sure where to get CognitiveServices. The tutorial doesn't specify where it comes from.
Here is my code:
<div id="bot"/>
<script src="https://cdn.botframework.com/botframework-webchat/latest/botchat.js"></script>
<script>
const speechOptionsRemote = {
speechRecognizer: new CognitiveServices.SpeechRecognizer({ subscriptionKey: '...' }),
speechSynthesizer: new CognitiveServices.SpeechSynthesizer({
gender: CognitiveServices.SynthesisGender.Female,
subscriptionKey: '...',
voiceName: 'Microsoft Server Speech Text to Speech Voice (en-US, JessaRUS)'
})
};
BotChat.App({
directLine: { secret: '...' },
user: { id: 'WebChat' },
bot: { id: '...' },
resize: 'detect',
speechOptions: speechOptionsRemote,
showUploadButton: false
}, document.getElementById("bot"));
var header = document.getElementsByClassName("wc-header");
header[0].innerHTML = "<span ><p align='center' >My Bot</p></span>"
</script>
It complains that CognitiveServices is not found when I navigate to the page. Where do I get it?
Your code sample is using v3 of Web Chat, which is now deprecated; see here. There is a v4 of BotFramework-WebChat on the GitHub repository, updated a few days ago.
So when your code downloads cdn.botframework.com/botframework-webchat/latest/botchat.js, it gets the v4: that explains why it can't find CognitiveServices, which has been refactored away.
For using Cognitive Services Speech in v4, have a look at the dedicated sample: https://github.com/Microsoft/BotFramework-WebChat/tree/master/samples/speech-cognitive-services-bing-speech
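In v4, the speech wiring looks roughly like this (a minimal sketch, assuming you already have an authorization token and region for your Speech resource, as in the earlier answers on this page):

// Sketch: v4 replaces CognitiveServices/speechOptions with a web speech ponyfill factory
const webSpeechPonyfillFactory = await window.WebChat.createCognitiveServicesSpeechServicesPonyfillFactory({
  authorizationToken,
  region
});

window.WebChat.renderWebChat({
  directLine: window.WebChat.createDirectLine({ token: '...' }),
  webSpeechPonyfillFactory
}, document.getElementById('webchat'));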

How capture audio message receive or image receive in BotKit Facebook

I have been using Botkit for Facebook Messenger and can receive text messages from Facebook perfectly; however, I cannot capture audio messages, images, or attachments.
Has anyone been able to capture these types of messages?
var Botkit = require('botkit');

var controller = Botkit.facebookbot({
  access_token: process.env.access_token,
  verify_token: process.env.verify_token
});

var bot = controller.spawn({});

// if you are already using Express, you can use your own server instance...
// see "Use BotKit with an Express web server"
controller.setupWebserver(process.env.port, function (err, webserver) {
  controller.createWebhookEndpoints(controller.webserver, bot, function () {
    console.log('This bot is online!!!');
  });
});

// this is triggered when a user clicks the send-to-messenger plugin
controller.on('facebook_optin', function (bot, message) {
  bot.reply(message, 'Welcome to my app!');
});

// user said hello
controller.hears(['hello'], 'message_received', function (bot, message) {
  bot.reply(message, 'Hey there.');
});

controller.hears(['cookies'], 'message_received', function (bot, message) {
  bot.startConversation(message, function (err, convo) {
    convo.say('Did someone say cookies!?!!');
    convo.ask('What is your favorite type of cookie?', function (response, convo) {
      convo.say('Golly, I love ' + response.text + ' too!!!');
      convo.next();
    });
  });
});
There is an example for stickers, images, and audio replies in the Facebook starter project: https://github.com/howdyai/botkit-starter-facebook/blob/master/skills/sample_events.js
If you have trouble using them, feel free to open an issue on GitHub!
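As a rough sketch of the idea (an assumption to verify against the starter project: field names follow the Facebook webhook payload, and the attachments array may differ across Botkit versions):

// Sketch: non-text Facebook messages carry an `attachments` array on the
// message object; inspect it in the normal message handler.
controller.on('message_received', function (bot, message) {
  if (message.attachments && message.attachments.length) {
    message.attachments.forEach(function (attachment) {
      // attachment.type is e.g. 'image', 'audio', 'video' or 'file';
      // media attachments carry a payload.url pointing at the content
      bot.reply(message, 'Got a ' + attachment.type + ' attachment!');
    });
  }
});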
