In actions-on-google, both the request and the response object need to be provided as input to the library, but in a Lambda function only the request object exists.
So how can I override this?
In AWS Lambda, the handler has this format:
exports.handler = function (event, context, callback) {
    // event is the request object; the response is returned via the callback() function
};
The actions-on-google app object is created as:
const DialogflowApp = require('actions-on-google').DialogflowApp;
const app = new DialogflowApp({ request: request, response: response });
To get a Google Action to work on AWS Lambda, you need to do two things:
Code your app in a way that it's executable on Lambda
Create an API Gateway to your Lambda Function which you can then use for Dialogflow Fulfillment
I believe the first step can't be done off-the-shelf with the Actions SDK. If you're using a framework like Jovo, you can create code that works for both Amazon Alexa and Google Assistant, and host it on AWS Lambda.
You can find a step-by-step tutorial about setting up a "Hello World" Google Action, hosting it on Lambda, and creating an API Gateway here: https://www.jovo.tech/blog/google-action-tutorial-nodejs/
Disclaimer: I'm one of the founders of Jovo. Happy to answer any further questions.
This is only a half answer:
OK, so I don't think I can tell you how to make the Actions on Google SDK work correctly on AWS Lambda. Maybe it's easy; I just don't know and would need to read everything to find out.
My "easy to get going" solution, which may mean more work for you in the end, is to interpret the request JSON yourself and respond with a message, as shown below.
Here is an extremely trivial JavaScript function that creates an extremely trivial JSON response.
Parameters:
message is the string you would like to send as the answer.
slots should be an array that can be used to bias the speech recognition (you can pass an empty array to this function if you don't want to bias the speech).
state is any kind of serializable JavaScript object; it is yours to use for maintaining state or anything else, and it will be transferred between all the intents.
This is a standard response to a speech request. You can target platforms other than speech by adding different initial prompts; see the JSON tabs in the documentation:
https://developers.google.com/actions/assistant/responses#json
function answerWithMessage(message, slots, state) {
    let newmessage = message.toLowerCase();
    let jsonResponse = {
        // The state round-trips through the conversationToken field.
        conversationToken: JSON.stringify(state),
        expectUserResponse: true,
        expectedInputs: [
            {
                inputPrompt: {
                    initialPrompts: [
                        {
                            textToSpeech: newmessage
                        }
                    ],
                    noInputPrompts: []
                },
                possibleIntents: [
                    {
                        intent: "actions.intent.TEXT"
                    }
                ],
                // The slots array biases the speech recognition.
                speechBiasingHints: slots
            }
        ]
    };
    return JSON.stringify(jsonResponse, null, 4);
}
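To tie this back to the Lambda handler format from the question: here is a minimal sketch of wiring answerWithMessage() into a handler, assuming the function sits behind an API Gateway proxy integration (so the raw body arrives as a string and the response needs a statusCode/body envelope). The conversationToken handling is an assumption about where you keep your state:
exports.handler = function (event, context, callback) {
    // With the proxy integration, the fulfillment request body arrives as a JSON string.
    const request = JSON.parse(event.body);

    // Restore the state stashed in the previous turn (absent on the first turn).
    const token = request.conversation && request.conversation.conversationToken;
    const state = token ? JSON.parse(token) : {};

    callback(null, {
        statusCode: 200,
        headers: { 'Content-Type': 'application/json' },
        body: answerWithMessage('Hello from Lambda', [], state)
    });
};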
I am beginning with MassTransit, for a publisher/consumer scenario. In production we will be using SQS; however, I would like to be able to use the in-memory transport for development locally.
I am having trouble forming the correct Uri for the call to ISendEndpointProvider.GetSendEndpoint(), as per:
//THE SET UP CODE:
services.AddMassTransit(x =>
{
    x.AddConsumer<MTConsumer, MTMessageConsumerDefinition>()
        .Endpoint(e =>
        {
            // override the default endpoint name
            e.Name = "process-input-item";
            //... more configurations as per docs here...
        });

    x.UsingInMemory((context, cfg) =>
    {
        cfg.ConfigureEndpoints(context);
    });
});
//The Publish Code:
var endpoint = await SendEndpointProvider.GetSendEndpoint(new Uri("/ProcessInputItem"));
await endpoint.Send(new MTMessage { InputItemId = item.Id });
Note:
I have tried various cases for the endpoint string.
I do not want to capture the instance of IBus to call Send, as that is not the 'closest' instance to the consumer, which according to the docs is important to consider.
MassTransit documentation reference: https://masstransit-project.com/usage/configuration.html#receive-endpoints
Thank you for any guidance with this,
Dylan
As explained in the documentation, there are short endpoint addresses which can be used. In your case:
await SendEndpointProvider.GetSendEndpoint(new Uri("queue:process-input-item"));
The queue: scheme is transport-independent and resolves by queue name, so it matches the process-input-item endpoint name configured above.
I have a question regarding a small issue I'm having. I've created a widget that will live on the Service Portal to allow an admin to accept or reject requests.
The data for the widget is pulled from the Approvals (sysapproval_approver) table. In my GlideRecord, I have a query that checks for the state "requested" (e.g. addQuery('state', 'requested')).
To narrow down the search, I tried adding addQuery('sys_id', current.sys_id). When I use this query, my script breaks and I get an error on the Service Portal end.
Here's a sample of the GlideRecord script I've written to accept:
//Accept Request
if (input && input.action == "acceptApproval") {
    var inRec1 = new GlideRecord('sysapproval_approver');
    inRec1.addQuery('state', 'requested');
    //inRec1.get('sys_id', current.sys_id);
    inRec1.query();
    if (inRec1.next()) {
        inRec1.setValue('state', 'Approved');
        inRec1.setValue('approver', gs.getUserID());
        gs.addInfoMessage("Accept Approval Processed");
        inRec1.update();
    }
}
I've researched the web and tried using $sp.getParameter() as a workaround, with no change.
I would really appreciate any help or insight on what I can do differently to get the script to work and filter the right records.
If I understand your question correctly, you are asking how to get the sys_id of the sysapproval_approver record from the client side in a widget.
Unless you have defined current elsewhere in your server script, current is undefined. Secondly, $sp.getParameter() is used to retrieve URL parameters, so unless you've included the sys_id as a URL parameter, that will not get you what you are looking for.
One pattern that I've used is to pass an object to the client after the initial query that gets the list of requests.
When you're ready to send input to the server from the client, you can add relevant information to the input object. See the simplified example below. For the sake of brevity, the code below does not include error handling.
// Client-side function (invoked from the widget template, e.g. ng-click="approveRequest(request.sysId)")
approveRequest = function(sysId) {
    $scope.server.get({
        action: "acceptApproval",
        sysId: sysId
    })
    .then(function(response) {
        console.log("Request approved");
    });
};
// Server-side
var requestsGr = new GlideRecord('sysapproval_approver');
requestsGr.addQuery("SOME_QUERY");
requestsGr.query(); // Retrieve initial list of requests to display in the template

data.requests = []; // Array of requests passed to the client via the data object
while (requestsGr.next()) {
    data.requests.push({
        "number": requestsGr.getValue("number"),
        "state": requestsGr.getValue("state"),
        "sysId": requestsGr.getValue("sys_id")
    });
}
if (input && input.action == "acceptApproval") {
    var sysapprovalGr = new GlideRecord('sysapproval_approver');
    if (sysapprovalGr.get(input.sysId)) {
        sysapprovalGr.setValue('state', 'Approved');
        sysapprovalGr.setValue('approver', gs.getUserID());
        sysapprovalGr.update();
        gs.addInfoMessage("Accept Approval Processed");
    }
    ...
}
In our Teams calling bot, we would like to transfer certain calls to specific Teams users and PSTN numbers, but also to another Teams calling bot and/or voicemail.
For specific Teams users and PSTN we got it working. If we want to transfer a call to another application, we can do so by using its PSTN number, but ideally we would also like to transfer using its objectId.
I tried using a transfer request like this:
var requestBody = new CallTransferRequestBody()
{
    TransferTarget = new InvitationParticipantInfo()
    {
        Identity = new IdentitySet()
        {
            AdditionalData = new Dictionary<string, object>()
        }
    }
};

requestBody.TransferTarget.Identity.Application = new Identity { Id = transferTargetId };
// this line does not make any difference
requestBody.TransferTarget.Identity.Application.SetTenantId(tenantId);
But this results in a "Request authorization tenant mismatch." error. Is it possible to directly transfer to another application?
I haven't tried voicemail boxes yet, but any info on how to transfer to those is appreciated.
Basically, we can transfer an active peer-to-peer call. This is only supported if both the transferee and the transfer target are Microsoft Teams users that belong to the same tenant.
However, for redirecting a call to a call queue or auto attendant, you can use the "applicationInstance" identity. The bot is expected to redirect the call before the call times out; the current timeout value is 15 seconds.
const requestBody = {
    "targets": [{
        "@odata.type": "#microsoft.graph.invitationParticipantInfo",
        "identity": {
            "@odata.type": "#microsoft.graph.identitySet",
            "applicationInstance": {
                "@odata.type": "#microsoft.graph.identity",
                "displayName": "Call Queue",
                "id": queueId
            }
        }
    }]
};
Please refer to the documentation here: https://learn.microsoft.com/en-us/graph/api/call-redirect?view=graph-rest-beta&tabs=csharp#request
The redirect API still has that limitation, as far as I understand.
But that should work with the new Transfer API:
https://learn.microsoft.com/en-us/graph/api/call-transfer?view=graph-rest-beta&tabs=http
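For reference, a transfer request following the shape shown in the call-transfer documentation would look roughly like this. This is a sketch only: applicationInstanceId is a placeholder for the target application instance's object id, and whether a cross-application target is accepted still depends on the tenant rules above.
// POST https://graph.microsoft.com/beta/communications/calls/{id}/transfer
const transferRequestBody = {
    "transferTarget": {
        "@odata.type": "#microsoft.graph.invitationParticipantInfo",
        "identity": {
            "@odata.type": "#microsoft.graph.identitySet",
            "applicationInstance": {
                "@odata.type": "#microsoft.graph.identity",
                "displayName": "Target application",
                "id": applicationInstanceId // placeholder
            }
        }
    }
};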
I've created a lambda that retrieves user attributes (username, email, name, etc.). However, I wonder how it's possible to get the user attributes without explicitly hardcoding the sub value. Do I need to decode the Cognito JWT token in the frontend and pass it to the lambda to determine the correct user and retrieve the related attributes?
Here is my lambda in Node.js:
const AWS = require('aws-sdk');

exports.handler = function(event, context) {
    var cog = new AWS.CognitoIdentityServiceProvider();
    var filter = "sub = \"" + "UserSUB" + "\"";
    var req = {
        "Filter": filter,
        "UserPoolId": 'POOL here',
    };
    cog.listUsers(req, function(err, data) {
        if (err) {
            console.log(err);
        }
        else {
            if (data.Users.length === 1) {
                var user = data.Users[0];
                var attributes = data.Users[0].Attributes;
                console.log(JSON.stringify(attributes));
            } else {
                console.log("error.");
            }
        }
    });
};
I think the proper way to do this depends on whether you want to use API Gateway or not (it will make things simpler, IMHO).
If you don't want to use API Gateway and you are calling the lambda directly using temporary credentials, then you should pass the entire ID token and have the lambda do all of the validation and decoding (probably using a third-party library for JWTs), as sketched below. It's not safe to do this in the frontend: that would mean the lambda blindly accepts the attributes as facts from the frontend, and a malicious user could change them.
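A minimal sketch of that lambda-side validation, assuming the jsonwebtoken and jwks-rsa npm packages; REGION, USER_POOL_ID, and CLIENT_ID are placeholders for your pool's values, and event.idToken is an assumed field name for however you pass the token in:
const jwt = require('jsonwebtoken');
const jwksClient = require('jwks-rsa');

// Cognito publishes the pool's public signing keys at a well-known JWKS endpoint.
const client = jwksClient({
    jwksUri: 'https://cognito-idp.REGION.amazonaws.com/USER_POOL_ID/.well-known/jwks.json'
});

// Resolve the signing key that matches the token's key id (kid).
function getKey(header, callback) {
    client.getSigningKey(header.kid, function (err, key) {
        if (err) return callback(err);
        callback(null, key.getPublicKey ? key.getPublicKey() : (key.publicKey || key.rsaPublicKey));
    });
}

exports.handler = function (event, context, callback) {
    jwt.verify(event.idToken, getKey, { audience: 'CLIENT_ID' }, function (err, claims) {
        if (err) return callback(err);
        // The claims are now trustworthy; no listUsers call or hardcoded sub needed.
        callback(null, { sub: claims.sub, email: claims.email, name: claims.name });
    });
};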
If you are using API Gateway to put lambdas behind an API, then I would create a Cognito authorizer based on the user pool, create a resource/method configured to use that authorizer, and enable Use Lambda Proxy Integration for the Integration Request. All of the token's claims enabled for the client will be passed through on event.requestContext.authorizer.claims, as long as the token is valid.
There are some AWS docs here, although this does not use proxy integration. If you use proxy integration then you can skip 6b, as API Gateway will set the values for you. This is described in an answer here.
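With that setup, the lambda can read the caller's attributes straight off the proxy event; a minimal sketch (which claims appear depends on what's enabled for your app client):
exports.handler = function (event, context, callback) {
    // Populated by the Cognito authorizer once the token has been validated.
    var claims = event.requestContext.authorizer.claims;

    callback(null, {
        statusCode: 200,
        body: JSON.stringify({
            sub: claims.sub,
            email: claims.email,
            name: claims.name
        })
    });
};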
I am currently working on a project where visitors normally use both English and Chinese to talk to each other.
Since LUIS does not support multiple languages very well (yes, I know it can in certain ways, but I want a better service), I would like to build my own neural network behind a REST API so that, when someone submits their text, we can simply predict the "intent", while still using the MS Bot Framework (Node.js).
By doing this we can bypass LUIS and use our own language-understanding service.
Here are my two questions:
Has anyone done this before? Is there any GitHub link I can reference?
If I do that, which Bot Framework API should I use? There is a recognizer called "Custom Recognizer" and I wonder if it really works.
Thank you very much in advance for all your help.
Another option, apart from Alexandru's suggestions, is to add a middleware that calls the NLP service of your choice every time the bot receives a chat request.
Botbuilder allows middleware functions to be applied before any dialog is handled; I created sample code below for better understanding.
const bot = new builder.UniversalBot(connector, function (session) {
    // pass straight to the root dialog
    session.replaceDialog('root_dialog');
});

// dummy NLP service call
let callNLP = (text) => {
    return new Promise((resolve, reject) => {
        // do your NLP service API call here and resolve the result
        resolve({});
    });
};

let specialCommandHandler = (session, next) => {
    // the user's message
    let userMessage = session.message.text;

    callNLP(userMessage).then(nlpResult => {
        // you can save your NLP result to the session
        session.conversationData.nlpResult = nlpResult;

        // continue to the bot dialog, in this case the root dialog
        next();
    }).catch(err => {
        // handle errors
        session.error(err);
    });
};

// custom middleware, applied before any dialog is handled
bot.use({
    botbuilder: specialCommandHandler
});

// root dialog
bot.dialog('root_dialog', [(session, args, next) => {
    // your NLP call result
    let nlpResult = session.conversationData.nlpResult;

    // do any operations with the result here, e.g. redirect to a dialog
    // for a specific intent/entity
    session.endDialog('NLP result: ' + JSON.stringify(nlpResult));
}]);
For the Node.js Bot Framework implementation you have at least two ways:
Use LuisRecognizer as a starting point to create your own recognizer. This approach works with single-intent NLUs and entity arrays (just like LUIS); see the sketch below.
Create a SimpleDialog with a single handler function that calls the desired NLU API.
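A minimal sketch of the first option, assuming botbuilder v3 and a hypothetical nlpClient.predict() wrapper around your own REST service that resolves to { intent, score, entities }:
const builder = require('botbuilder');
const nlpClient = require('./nlp-client'); // hypothetical wrapper around your NLU REST API

const connector = new builder.ChatConnector({ appId: '', appPassword: '' });
const bot = new builder.UniversalBot(connector);

// A custom recognizer only needs a recognize(context, done) function.
bot.recognizer({
    recognize: function (context, done) {
        nlpClient.predict(context.message.text)
            .then(result => {
                // Shape the result the way botbuilder expects: intent + score (+ entities).
                done(null, {
                    intent: result.intent,
                    score: result.score,
                    entities: result.entities || []
                });
            })
            .catch(err => done(err));
    }
});

// Dialogs are then triggered by the recognized intent, just like with LUIS.
bot.dialog('greeting_dialog', (session) => {
    session.endDialog('Hello!');
}).triggerAction({ matches: 'Greeting' });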