Is the speech support in Bot Framework also available for the German language?
Kind regards
If you read through this blog, in the section "Cross platform speech support in your app using the DirectLine channel" there is this code snippet:
_botClient = new Microsoft.Bot.Client.BotClient(
    BotConnection.DirectLineSecret,
    BotConnection.ApplicationName
)
{
    // We used the Cognitive Services Speech-to-Text API (with speech priming support) as the speech recognizer, and the Text-to-Speech API as the synthesizer.
    // Alternate/custom speech recognizer & synthesizer implementations are supported as well.
    SpeechRecognizer = new CognitiveServicesSpeechRecognizer(BotConnection.BingSpeechKey),
    SpeechSynthesizer = new CognitiveServicesSpeechSynthesizer(BotConnection.BingSpeechKey, Microsoft.Bot.Client.SpeechSynthesis.CognitiveServices.VoiceNames.Jessa_EnUs)
};
In theory you could replace this line:
SpeechSynthesizer = new CognitiveServicesSpeechSynthesizer(BotConnection.BingSpeechKey, Microsoft.Bot.Client.SpeechSynthesis.CognitiveServices.VoiceNames.Jessa_EnUs)
with any synthesizer you want. Cognitive Services does offer German voices (for example, the de-DE voice "Hedda"), so the answer to your question is yes.
I need to add a speech-to-text feature in RASA, where the user can ask questions using his voice and the bot will answer him via chat. Does anyone know how I can do this in RASA?
My front end will be an Android application, so kindly tell me how to do it.
Thanks in advance.
You can build a voice bot with Rasa Open Source as long as you use a Speech-to-Text (STT) API, since Rasa will only process text. This involves building a custom channel that takes the voice as input, sends it to an STT API, and returns the text to Rasa.
You can find some detailed examples on the Rasa blog:
https://blog.rasa.com/how-to-build-a-voice-assistant-with-open-source-rasa-and-mozilla-tools/
https://blog.rasa.com/how-to-build-a-mobile-voice-assistant-with-open-source-rasa-and-aimybox/
If you don't mind using something closed source, integrating the Google Speech API is also an option.
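The flow such a custom channel implements can be sketched as below. This is a minimal sketch, not a full Rasa channel: the STT backend is injected as a plain callable (so any of the services above can be plugged in), and the URL assumes Rasa's default REST channel endpoint. Rasa itself only ever sees the transcribed text.

```python
import json
from urllib import request

# Assumed default endpoint of Rasa's built-in REST channel.
RASA_REST_URL = "http://localhost:5005/webhooks/rest/webhook"

def build_rasa_payload(audio_bytes, sender_id, stt):
    """Transcribe the audio with the injected STT callable and wrap it for Rasa."""
    return {"sender": sender_id, "message": stt(audio_bytes)}

def send_to_rasa(payload, url=RASA_REST_URL):
    """POST the transcribed message to Rasa's REST channel and return its replies."""
    req = request.Request(url, data=json.dumps(payload).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)  # a JSON list of bot messages, e.g. [{"text": "..."}]
```

In a real channel, `stt` would wrap whichever speech API you chose; the Android app would record the audio and upload it to this front end.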
I am using Bot Framework SDK v4 with Google Dialogflow as the NLP. I need to find the intent and its fulfillment text through a recognizer. The code given below works in SDK v3.
Is there any substitute class for IntentDialog in SDK v4, so that the code will work?
var intents = new builder.IntentDialog({ recognizers: [recognizer] });
There is an equivalent to IntentDialog being developed for inclusion in a future build. Unfortunately, at this time there is no substitute.
I have developed a FAQ Bot using C# and Bot Builder SDK 3.15.3. We have a large set of question/answer pairs which are uploaded to a QNA Maker Service. I have enabled the Direct Line Channel and the bot is displayed on a web page. I have used the Web Chat control provided by Microsoft with some customization and skinning.
Now I want to enable voice interaction with the bot, for that I decided to use the Microsoft Speech to Text Cognitive Service.
What I want is that whenever the user speaks an utterance, it is sent to my bot service just like text is sent. Then, inside the C# code, I want to run Speech to Text, do a spell check on the retrieved text, and finally send it to the QnA Maker Service. For now the response will only be shown as text, but I could also opt to read the response out to the user.
Kindly guide me on how this can be achieved; after looking at CognitiveService.js and other articles on enabling speech, I noticed that the Web Chat control sends the voice input directly to the Speech to Text service.
You can make a hybrid between a calling bot which utilizes speech to text and a QnA bot to achieve your goal. For the calling bot, look over the SimpleIVRbot sample to get you going. For QnAMaker, you can reference the SimpleQnABot. It shouldn't take too much work bridging the two into a single unified bot. Just be sure to remove duplicate code and combine files where necessary.
Hope this helps!
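The server-side flow described in the question can be sketched as a small pipeline with the three stages injected as callables. The stage names below are placeholders for whatever clients you wrap around the speech-to-text, spell-check, and QnA Maker services; the stubs in the example only illustrate the data flow:

```python
def answer_voice_query(audio_bytes, stt, spell_check, qna):
    """Speech -> text -> spell-corrected text -> QnA Maker answer."""
    text = stt(audio_bytes)        # 1. transcribe the utterance
    corrected = spell_check(text)  # 2. clean up recognition errors
    return qna(corrected)          # 3. query the knowledge base

# Example with stub stages standing in for the real service calls:
reply = answer_voice_query(
    b"<audio>",
    stt=lambda audio: "waht are your office hours",
    spell_check=lambda text: text.replace("waht", "what"),
    qna=lambda question: "We are open 9 to 5.",
)
```

Structuring the bot this way keeps each service call replaceable, which matters here since the question leaves open whether the answer is shown as text or read aloud.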
How can I convert speech to text without using the IBM Watson API?
That means I need another API for the conversion.
You can try:
Google Cloud Speech: https://cloud.google.com/speech-to-text/ provides a dictation mode and also lets you select the context of the speech (e.g. medical, school, etc.)
Bing Speech: https://azure.microsoft.com/en-us/services/cognitive-services/speech/ likewise provides a dictation mode and speech-context selection
Microsoft Custom Speech Service: https://azure.microsoft.com/en-us/services/cognitive-services/custom-speech-service/ lets you build your own language model and acoustic model by uploading training data to Azure
CMUSphinx: https://cmusphinx.github.io/ is open source; you can build your own language model, acoustic model, dictionary, etc., but you have to handle everything yourself. (Highly recommended)
The first version of the Bot Framework advertised automatic language translation as a major Bot Connector feature as outlined in the v1 Bot Framework Overview page.
However, the v3 documentation doesn't mention it. I was wondering whether this feature is no longer available, or whether we should use the Cognitive Services Text APIs instead.
Here's what I tried with a Skype Bot:
I would like to speak in German
May I speak in French
fr-FR
Nothing I've tried has worked.
This is something I looked into while building my own bot as well. The feature is no longer available in V3; you will have to call the Translator API directly. https://github.com/Microsoft/BotBuilder/issues/1156
In V4, you can use the following code:
Resources.Global.Culture = new System.Globalization.CultureInfo("en-US");