I’m using multiple live queries in my JS app and was wondering what the best practices are on this subject.
I read somewhere that opening multiple WebSockets at a time should be avoided. What are the recommendations when using Parse.com live queries?
Currently, I'm calling the following function three times (changing the class and function name each time):
startWebserver1: async function(param) {
  Parse.initialize(this.parse.initialize)
  Parse.serverURL = this.parse.serverURL
  const snapshots = Parse.Object.extend("myDocument1");
  const query = new Parse.Query(snapshots)
  const subscriptionSnapshots = await query.subscribe()
  subscriptionSnapshots.on('open', () => {
    console.log(' -> Snapshots subscription opened');
  });
}
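(For reference, the Parse JS SDK also exposes a Parse.LiveQueryClient that can be shared across several subscriptions so they ride on a single connection. A rough, untested sketch, where the application ID, JavaScript key and serverURL are placeholders:)

// Untested sketch: one shared LiveQueryClient (one WebSocket) for all subscriptions.
// 'YOUR_APP_ID', 'YOUR_JS_KEY' and the wss URL below are placeholders for your own config.
const client = new Parse.LiveQueryClient({
  applicationId: 'YOUR_APP_ID',
  serverURL: 'wss://example.com/parse', // must be the ws/wss LiveQuery URL
  javascriptKey: 'YOUR_JS_KEY'
});
client.open();

const q1 = new Parse.Query(Parse.Object.extend('myDocument1'));
const q2 = new Parse.Query(Parse.Object.extend('myDocument2'));

const sub1 = client.subscribe(q1);
const sub2 = client.subscribe(q2);

sub1.on('open', () => console.log('myDocument1 subscription opened'));
sub2.on('open', () => console.log('myDocument2 subscription opened'));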
I am building a Slack bot that will remind people in my organisation to perform certain admin tasks (hours, expenses, etc.) every week. I know this can very easily be done by each person creating a recurring reminder. What I want is to create a bot that will send a preconfigured message to people every week. I've looked online extensively and haven't yet found out how a Slack bot can send a message without an event or being otherwise prompted.
I'm currently testing this locally through an ngrok tunnel with the following backend:
const { WebClient } = require('@slack/web-api');
const { createEventAdapter } = require('@slack/events-api');

const slackSigningSecret = process.env.SLACK_SIGNING_SECRET;
const slackToken = process.env.SLACK_TOKEN;
const port = process.env.SLACK_PORT || 3000;

const slackEvents = createEventAdapter(slackSigningSecret);
const slackClient = new WebClient(slackToken);

slackEvents.on('app_mention', (event) => {
  console.log(`Got message from user ${event.user}: ${event.text}`);
  (async () => {
    try {
      await slackClient.chat.postMessage({
        channel: event.channel,
        text: `Hello <@${event.user}>! Have you completed your Time sheets for this week yet?`
      });
    } catch (error) {
      console.log(error.data);
    }
  })();
});

slackEvents.on('error', console.error);

slackEvents.start(port).then(() => {
  console.log(`Server started on port ${port}`);
});
Once this reminder is done, I intend to build upon it (more features; I just need a beginning), so please don't recommend alternative ways my organisation can send reminders to people.
You can try using the chat.scheduleMessage method instead (https://api.slack.com/methods/chat.scheduleMessage). Since you won't be relying on an event, you may want to store the necessary conversation IDs so that they're ready when the app needs to call the method.
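A rough, untested sketch of what that could look like (the channel ID and the "Monday 9:00" scheduling are placeholders; post_at is a Unix timestamp in seconds):

const { WebClient } = require('@slack/web-api');
const slackClient = new WebClient(process.env.SLACK_TOKEN);

// Untested sketch: schedule a reminder without waiting for an event.
// 'C0123456789' is a placeholder channel ID.
async function scheduleWeeklyReminder() {
  const nextMonday = new Date();
  nextMonday.setDate(nextMonday.getDate() + ((8 - nextMonday.getDay()) % 7 || 7));
  nextMonday.setHours(9, 0, 0, 0); // 9:00 on the next Monday

  await slackClient.chat.scheduleMessage({
    channel: 'C0123456789',
    text: 'Reminder: please complete your timesheets and expenses for this week.',
    post_at: Math.floor(nextMonday.getTime() / 1000)
  });
}

scheduleWeeklyReminder().catch(console.error);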
My organisation is starting to experiment with the Microsoft bot framework. One of the questions our enterprise architect has asked is as follows:
How do we identify questions that the bot was unable to answer?
I've checked the documentation but I'm still unclear. Can anyone elaborate on the techniques that they use to identify unanswered questions? We feel this is important as it identifies opportunities for further growth.
You can achieve this using a number of techniques. Essentially, what you are trying to do is store, for analysis, any questions the bot has not been able to answer.
You can do this by using the scoring mechanism in QnAMaker. For example, if QnAMaker returns a score of zero, an answer doesn't exist, so we need to write that question back to storage for analysis.
You can use a number of storage solutions for this in the Azure stack, such as Application Insights, Cosmos DB, Blob Storage, SharePoint lists, etc.
In the example below (code trimmed for brevity), I'm using Application Insights to store this information. I have imported the botbuilder-applicationinsights package and have created a simple custom event to capture any responses that score zero against the QnAMaker.
const { ApplicationInsightsTelemetryClient, ApplicationInsightsWebserverMiddleware } = require('botbuilder-applicationinsights');
const { MessageFactory, CardFactory } = require('botbuilder');
const { QnAServiceHelper } = require('../helpers/qnAServiceHelper');
const { CardHelper } = require('../helpers/cardHelper');
const { FunctionDialogBase } = require('./functionDialogBase');

// Setup Application Insights
const settings = require('../settings').settings;
const appInsightsClient = new ApplicationInsightsTelemetryClient(settings.instrumentationKey);

class QnADialog extends FunctionDialogBase {

    constructor() {
        super('qnaDialog');
    }

    async processAsync(oldState, activity) {
        var newState = null;
        var query = activity.text;
        var qnaResult = await QnAServiceHelper.queryQnAService(query, oldState);
        var qnaAnswer = qnaResult[0].answer;
        var qnaNonResponse = qnaResult[0].score;
        var prompts = null;

        if (qnaResult[0].context != null) {
            prompts = qnaResult[0].context.prompts;
        }

        var outputActivity = null;

        if (prompts == null || prompts.length < 1) {
            outputActivity = MessageFactory.text(qnaAnswer);
        } else {
            newState = {
                PreviousQnaId: qnaResult[0].id,
                PreviousUserQuery: query
            };
            outputActivity = CardHelper.GetHeroCard(qnaAnswer, prompts);
        }

        // A score of zero means QnAMaker had no answer, so record the question.
        if (qnaNonResponse === 0) {
            const { NonResponseCard } = require('../dialogs/non-response');
            const quicknonresponseCard = CardFactory.adaptiveCard(NonResponseCard);
            outputActivity = ({ attachments: [quicknonresponseCard] });

            console.log("Cannot find QnA response for " + query);

            appInsightsClient.trackEvent({
                name: "Non-response",
                properties: {
                    question: query
                }
            });
        }

        return ([newState, outputActivity, null]);
    }
}

module.exports.QnADialog = QnADialog;
I can then hook the Application Insights query up in Power BI to surface those non-answered questions.
There are multiple ways to achieve this, but this was one I ended up going with.
Depending on the size and complexity of your model, you will want to use either LUIS or QnAMaker. If your model is very simple, QnAMaker will work; for something a bit more complex, especially if you want to make use of entities, LUIS is definitely the way to go. Each of them has its own technique, and #steviebleeds describes how to do it with QnAMaker. For LUIS, you are going to look at your confidence threshold, and you should record any query that falls below the confidence threshold you have set. Each time you get a prediction from LUIS, it sends you a list of intents, each with a confidence percentage for the prediction. You should assess this confidence percentage and decide, depending on your threshold, whether or not to answer your users. You also want to look at all questions that return the None intent.
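As a rough, untested sketch of that threshold check using the botbuilder-ai LuisRecognizer (the 0.5 threshold and the appInsightsClient are assumptions for illustration, not part of the original answer):

const { LuisRecognizer } = require('botbuilder-ai');

// Untested sketch: flag utterances whose top LUIS intent scores below a threshold.
const CONFIDENCE_THRESHOLD = 0.5; // assumption: tune this for your own model

async function recognizeWithFallback(recognizer, context, appInsightsClient) {
  const result = await recognizer.recognize(context);
  const topIntent = LuisRecognizer.topIntent(result);
  const topScore = result.intents[topIntent] ? result.intents[topIntent].score : 0;

  if (topIntent === 'None' || topScore < CONFIDENCE_THRESHOLD) {
    // Record the unanswered question for later analysis.
    appInsightsClient.trackEvent({
      name: 'Non-response',
      properties: { question: context.activity.text, topIntent, topScore }
    });
    return null; // caller decides how to respond (e.g. a fallback message)
  }
  return topIntent;
}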
I was trying to use Lambda@Edge to handle A/B testing on my site.
I wonder whether there is a way to let Lambda@Edge functions load external config data from a URL. For example, I created an API that returns the traffic ratios of the A/B channels; I want to control that config data externally, so that I can dynamically adjust how traffic flows to the A or B channel without modifying the Lambda function.
What I have now is:
var versions = [];
var isLoadingVersionData = false;
const https = require('https');

function loadVersions() {
    if (isLoadingVersionData)
        return null;
    isLoadingVersionData = true;

    https.get('https://example.com/getAbTestConfig', (res) => {
        res.on('data', (d) => {
            var parsedBody = JSON.parse(d);
            if (parsedBody.status)
                versions = parsedBody.data;
        });
    }).on('error', (e) => {
        console.log(e);
    });
}

// ...and call the function in the handler
exports.handler = (event, context, callback) => {
    context.callbackWaitsForEmptyEventLoop = false;
    loadVersions();
};
I wonder whether the variable "versions" can be loaded correctly and shared across later requests.
Do you have any more effective solutions?
Why not maintain this data in S3 and use your Lambda@Edge function to get the configuration from there? Further, to reduce latency, you can front the S3 bucket containing the traffic ratio with CloudFront and have your Lambda@Edge make a call to CloudFront to get the desired value.
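A rough, untested sketch of that approach (the bucket name, object key, region, and the module-scoped caching of the config between warm invocations are all placeholders/assumptions):

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' }); // placeholder: region of the config bucket

// Module-scoped cache so warm invocations can reuse the config without another S3 call.
let cachedConfig = null;

async function getAbTestConfig() {
  if (cachedConfig) return cachedConfig;
  // 'my-config-bucket' and 'ab-test-config.json' are placeholder names.
  const obj = await s3.getObject({
    Bucket: 'my-config-bucket',
    Key: 'ab-test-config.json'
  }).promise();
  cachedConfig = JSON.parse(obj.Body.toString('utf-8'));
  return cachedConfig;
}

exports.handler = async (event) => {
  const config = await getAbTestConfig();
  const request = event.Records[0].cf.request;
  // ...use config (e.g. traffic ratios) to route the request to channel A or B here...
  return request;
};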
I was facing the same issue, but not for A/B testing. I just created a JSON file inside my Lambda function to avoid the delay of making HTTP calls inside Lambda functions. It works, but the maintenance is not good, as every time I need to change the JSON file I have to deploy the Lambda function again.
While I was searching for it, I found the same solution described by Mr. Ocean above; it sounds like a good alternative to maintain the data in S3.
I am currently working on a project where visitors are normally using both English and Chinese to talk to each other.
Since LUIS does not support multi-language very well (yes, I know it can support it in certain ways, but I want a better service), I would like to build my own neural network behind a REST API so that, when someone submits their text, we can simply predict the "Intent", while still using the MS Bot Framework (Node.js).
By doing this we can bypass MS LUIS and use our own language understanding service.
Here are my two questions:
Has anyone done this before? Is there a GitHub link I can refer to?
If I did that, which Bot Framework API should I use? There is a recognizer called "Custom Recognizer" and I wonder if it really works.
Thank you very much in advance for all your help.
Another option, apart from Alexandru's suggestions, is to add a middleware that calls the NLP service of your choosing every time the bot receives a message/request.
Botbuilder allows middleware functions to be applied before any dialogs are handled. I've put together some sample code below for a better understanding.
const bot = new builder.UniversalBot(connector, function(session) {
    // pass to root
    session.replaceDialog('root_dialog');
});

// custom middleware
bot.use({
    botbuilder: specialCommandHandler
});

// dummy call to an NLP service
let callNLP = (text) => {
    return new Promise((resolve, reject) => {
        // do your NLP service API call here and resolve the result
        resolve({});
    });
};

let specialCommandHandler = (session, next) => {
    // user message here
    let userMessage = session.message.text;

    callNLP(userMessage).then(nlpResult => {
        // you can save your NLP result to the session
        session.conversationData.nlpResult = nlpResult;

        // this will continue to the bot dialog, in this case the root dialog
        next();
    }).catch(err => {
        // handle errors
        console.error(err);
    });
};

// root dialog
bot.dialog('root_dialog', [(session, args, next) => {
    // your NLP call result
    let nlpResult = session.conversationData.nlpResult;
    // do any operations with the result here, e.g. redirecting to a new dialog
    // for a specific intent/entity
}]);
For a Node.js Bot Framework implementation you have at least two ways:
Use LuisRecognizer as a starting point to create your own recognizer (see the sketch below). This approach works with single-intent NLUs and entity arrays (just like LUIS);
Create a SimpleDialog with a single handler function that calls the desired NLU API.
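For the first option, a rough, untested sketch of what a custom recognizer wired into a v3 bot might look like (callNLP and the shape of its result are placeholders for whatever your own NLU service returns):

// Untested sketch: a custom recognizer for Bot Builder v3 (Node.js).
// callNLP is a placeholder for your own NLU REST call; it is assumed to resolve
// to an object like { intent: 'greeting', score: 0.87, entities: [...] }.
bot.recognizer({
    recognize: function (context, done) {
        if (context.message.text) {
            callNLP(context.message.text)
                .then(result => {
                    done(null, {
                        score: result.score,
                        intent: result.intent,
                        entities: result.entities
                    });
                })
                .catch(err => done(err));
        } else {
            done(null, { score: 0.0, intent: null });
        }
    }
});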
const TeleBot = require('telebot');

const bot = new TeleBot({
    token: 'i9NhrhCQGq7rxaA' // Telegram Bot API token.
});

bot.on(/^([Hh]ey|[Hh]oi|[Hh]a*i)$/, function (msg) {
    return bot.sendMessage(msg.from.id, "Hello Commander");
});

var Historiepics = ['Schoolfotos/grr.jpg', 'Schoolfotos/boe.jpg', 'Schoolfotos/tobinsexy.jpg'];

console.log('Historiepics');
console.log(Math.floor(Math.random() * Historiepics.length));

var foto = Historiepics[Math.floor(Math.random() * Historiepics.length)];

bot.on(/aap/, (msg) => {
    return bot.sendPhoto(msg.from.id, foto);
});

bot.start();
The result I'm getting from this is just one picture every time; if I ask for another random picture it keeps showing me the same one without change.
I recently figured this out, so I'll drop an answer for anyone that runs into this issue.
The problem is with Telegram's cache. They cache images server side so that they don't have to do multiple requests to the same url. This protects them from potentially getting blacklisted for too many requests, and makes things snappier.
Unfortunately, if you're using an API like The Cat API, this means you will be sending the same image over and over again. The simplest solution is just to somehow make the link a little different every time. This is most easily accomplished by including the current epoch time as part of the URL.
For your example in JavaScript, this can be accomplished with the following modification:
bot.on(/aap/, (msg) => {
    let epoch = (new Date).getTime();
    return bot.sendPhoto(msg.from.id, foto + "?time=" + epoch);
});
Or something similar. The main point is, as long as the URL is different you won't receive a cached result. The other option is to download the file and then send it locally. This is what Telebot does if you pass the serverDownload option into sendPhoto.
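For completeness, a rough sketch of that second option (this assumes foto holds a URL rather than a local path, and relies on the serverDownload option mentioned above):

bot.on(/aap/, (msg) => {
    // Ask telebot to download the file and upload it itself,
    // which sidesteps Telegram's URL-based cache.
    return bot.sendPhoto(msg.from.id, foto, { serverDownload: true });
});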