I am trying to increase the performance of my code by switching from XMLHttpRequest to fetch requests using asynchronous code (async/await).

I want to confirm that I am not slowing down the code below. My goal is to increase the performance of the application. I was considering Promise.all, but I'm not sure it is necessary: as I understand it, the way the code is written now, all the fetch requests already run simultaneously?
The test functions don't need to wait for each other. I don't want to create a funnel where each test function waits for the previous one to finish, or where each fetch request waits for the previous one to finish. The goal is to have them all running together. Is my code doing that at the moment? Thank you for your assistance.
function test1() {
  // some code here
  fetch(URL)
    .then(checkStatusandContentType)
    .then(HtmlToObject)
    .then(subTest1 => { /* work with the data here */ })
    .catch(error => { console.log('Request failed', error); });
}
function test2() {
  // some code here
  fetch(URL)
    .then(checkStatusandContentType)
    .then(HtmlToObject)
    .then(subTest2 => { /* work with the data here */ })
    .catch(error => { console.log('Request failed', error); });
  fetch(URL)
    .then(checkStatusandContentType)
    .then(HtmlToObject)
    .then(subTest3 => { /* work with the data here */ })
    .catch(error => { console.log('Request failed', error); });
  fetch(URL)
    .then(checkStatusandContentType)
    .then(HtmlToObject)
    .then(subTest4 => { /* work with the data here */ })
    .catch(error => { console.log('Request failed', error); });
}
// ...and hundreds of test functions like the above down here
const checkStatusandContentType = async response => {
  const isJson = response.headers.get('content-type')?.includes('application/json');
  const isHtml = response.headers.get('content-type')?.includes('text/html');
  const data = isJson ? await response.json()
    : isHtml ? await response.text()
    : null;
  // check for error response
  if (!response.ok) {
    // get error message from body or default to response status
    const error = (data && data.message) || response.status;
    return Promise.reject(error);
  }
  return data;
}
const HtmlToObject = data => {
  const stringified = data;
  const processCode = stringified.substring(stringified.lastIndexOf("<X1>") + 4, stringified.indexOf("</X1>"));
  // CONTENT EXTRACT
  data = JSON.parse(processCode);
  return data;
};

TL;DR: fetch and XMLHttpRequest perform the same.
You said you want to increase your application's performance. It's usually wise to decide how you will measure that performance before you start.
Your performance measurement may be for just one desktop user with an excellent connection to the network, or it may be for hundreds of mobile devices using your app at the same time.
Browser HTML/JavaScript apps using XMLHttpRequest (XHR for short) or fetch requests are often designed to display something useful to your user, and then use the received data to make that display even more useful. If your measure of performance is how long it takes for a single user to see something useful, you may already have acceptable performance. But if you have hundreds of mobile users, performance is harder to define.
You asked whether XHR or fetch has better performance. From the point of view of your server, they are the same: both generate identical requests that your server must satisfy, and your server can't tell them apart. fetch requests are easier to code, as you have discovered.
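To illustrate, here is the same GET request written both ways; EXAMPLE_URL is a placeholder, and the server sees identical traffic either way:

// XMLHttpRequest version
const xhr = new XMLHttpRequest();
xhr.open('GET', EXAMPLE_URL);
xhr.onload = () => console.log(xhr.responseText);
xhr.onerror = () => console.log('Request failed');
xhr.send();

// fetch version: same request on the wire, less code
fetch(EXAMPLE_URL)
  .then(response => response.text())
  .then(text => console.log(text))
  .catch(error => console.log('Request failed', error));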
Your code runs many requests. You showed us a few, but you said you have many more. Browsers restrict the number of concurrent outbound requests; the rest wait for an available slot. Most browsers allow six concurrent requests to any given domain, and ten overall.
So, your concurrent fetch operations (or concurrent XHR operations; it doesn't matter which) will hit your server with six connections at once. That's fine for low-volume applications with good bandwidth, but if your app scales up to many users or must work over limited (mobile) bandwidth, you should consider whether you will overload your users' networks or your server.
Reducing the number of requests coming from your browser app, and perhaps returning more complete information in each request, is a good way to manage this server and network load. If you also want to cap how many requests your code keeps in flight at once, see the sketch below.
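Here is a minimal sketch of such a client-side cap; the urls array, the limit value, and fetchAllLimited are illustrative names, not part of the original code:

// Run an array of fetch calls at most `limit` at a time.
async function fetchAllLimited(urls, limit = 6) {
  const results = [];
  let next = 0;
  // Start `limit` workers; each pulls the next URL until none remain.
  const workers = Array.from({ length: limit }, async () => {
    while (next < urls.length) {
      const i = next++; // safe: no await between the check and the increment
      results[i] = await fetch(urls[i])
        .then(checkStatusandContentType)
        .catch(error => { console.log('Request failed', error); });
    }
  });
  await Promise.all(workers);
  return results;
}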

Related

connect ETIMEDOUT 137.116.128.188:443 for Bot Framework, can it be extended?

So I have a request that is expected to run for at least 1 minute before it gives a response.
To reassure the user while my request is still running, I send some typing activities.
With production code's work-sensitive information censored, this is generally how my code looks:
var queryDone = "No";
var xmlData = '';
let soappy = soapQuery("123", "456", "789","getInfo");
soappy.then(function (res) {
queryDone = 'Yes';
xmlData = res;
console.log(xmlData);
}, function (err) {
queryDone = 'Timeout';
})
while (queryDone == 'No') {
await step.context.sendActivity({ type: 'typing' });
}
where soapQuery() is a function that sends the POST request and returns a promise, and looks like this:
return new Promise(function (resolve, reject) {
  request.post(options, function (error, response, body) {
    if (!error && response.statusCode == 200) {
      resolve(body);
    }
    else {
      reject(error);
    }
  })
})
The problem comes from this 1-minute response time (it's not really negotiable, as the server requires at least 1 minute to process the request, due to the large amount of data and the validation of my request).
After 1 minute the console does print the response, but sadly the bot has already timed out before then.
Any suggestions on how to fix this, or how to extend the bot's timeout?
I need the typing activity so that the user understands the request is not yet done. And again, it really takes 1 minute before my request responds.
Thank you!
So, the reason this happens is that HTTP requests work a little differently in the Bot Framework than you might expect. What's happening in your scenario is:
1. User sends HTTP POST
2. Bot calls your soapQuery
3. Bot starts sending typing indicators
4. soapQuery completes
5. Bot finally sends an HTTP response to the HTTP POST from step #1, after that request has already timed out (which happens after 15 seconds)
To fix this, I would:
Use ShowTypingMiddleware to send the typing indicator continuously and automatically until the bot sends another message (this gets rid of your blocking while loop)
Once soapQuery finishes, the bot will have lost the context of the conversation, so your soappy.then() callback will need to send a proactive message. To do so, you'll need to save a reference to the conversation prior to calling soappy(), and then within the then() callback, use that conversationReference to send the proactive message to the user.
Note, however, that the bot in the sample I linked above sends the proactive message after receiving a request on the api/notify endpoint. Yours doesn't need to do that; it just needs to send the proactive message using similar code. Here's some more documentation on Proactive Messages.
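A minimal sketch of both pieces, assuming the Node.js SDK v4 (the adapter setup, the onMessage wiring, and the soapQuery arguments are placeholders, not a complete bot):

const { ShowTypingMiddleware, TurnContext } = require('botbuilder');

// 1) Send typing indicators automatically while a turn is being processed.
adapter.use(new ShowTypingMiddleware());

// 2) Save a conversation reference during the turn, then reply
//    proactively once the slow soapQuery finally resolves.
async function onMessage(context) {
  const conversationReference = TurnContext.getConversationReference(context.activity);
  soapQuery("123", "456", "789", "getInfo").then(res => {
    adapter.continueConversation(conversationReference, async (ctx) => {
      await ctx.sendActivity('Here is your result: ' + res);
    });
  });
}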

How to manage user inputs in a short time interval?

I would like to implement a way of managing a user sending many messages within a time interval (for example, 3 seconds), so that the chatbot only responds to the last one.
Example of inputs (in a gap of 3 seconds):
-Hi
-Hi
-Hi
-Help
Result: The chatbot only responds to the Help message.
Thanks in advance.
You can leverage the Middleware feature to intercept every message. With it you can store every message from every user in a cache; when your bot receives a new message, you can compare it with the info in the cache and then decide whether the flow needs to go forward.
Node.js code snippet for a quick test:
const moment = require('moment');
let lastMessage = null;
let lastMessageTime = null;
bot.use({
  receive: (session, next) => {
    let currentMessage = session; // the incoming message event
    if (currentMessage.text !== lastMessage) {
      lastMessage = currentMessage.text;
      lastMessageTime = currentMessage.timestamp;
      next();
    } else {
      // same text: only let it through if 3+ seconds have passed
      if (moment(currentMessage.timestamp) - moment(lastMessageTime) >= 3000) {
        lastMessageTime = currentMessage.timestamp;
        next();
      }
    }
  }
})
One thing to pay attention to: in a production environment you need to store the messages keyed by session/user ID, e.g. using the session/user ID as a prefix of the message and timestamp keys in the cache, as in the sketch below.
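A minimal sketch of that per-user keying, assuming an in-memory Map and the Bot Builder v3 message shape (a real deployment would use an external cache):

// Keyed by user ID so concurrent users don't overwrite each other's state.
const lastByUser = new Map();

function shouldProcess(message) {
  const userId = message.address.user.id;
  const last = lastByUser.get(userId);
  const duplicate = last && last.text === message.text
    && (new Date(message.timestamp) - new Date(last.timestamp)) < 3000;
  lastByUser.set(userId, { text: message.text, timestamp: message.timestamp });
  return !duplicate; // skip repeats arriving within 3 seconds
}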
Please refer to https://learn.microsoft.com/en-us/bot-framework/dotnet/bot-builder-dotnet-middleware for how to intercept messages in C#,
and refer to https://learn.microsoft.com/en-us/bot-framework/nodejs/bot-builder-nodejs-intercept-messages for Node.js version.
Hope it helps.

Telegram Bot Random Images (How to send random images with Telegram-Bot)

const TeleBot = require('telebot');
const bot = new TeleBot({
  token: 'i9NhrhCQGq7rxaA' // Telegram Bot API token.
});
bot.on(/^([Hh]ey|[Hh]oi|[Hh]a*i)$/, function (msg) {
  return bot.sendMessage(msg.from.id, "Hello Commander");
});
var Historiepics = ['Schoolfotos/grr.jpg', 'Schoolfotos/boe.jpg', 'Schoolfotos/tobinsexy.jpg'];
console.log('Historiepics');
console.log(Math.floor(Math.random() * Historiepics.length));
var foto = Historiepics[Math.floor(Math.random() * Historiepics.length)];
bot.on(/aap/, (msg) => {
  return bot.sendPhoto(msg.from.id, foto);
});
bot.start();
The result I'm getting from this is just one picture every time; if I ask for another random picture, it keeps showing me the same one without change.
I recently figured this out, so I'll drop an answer for anyone who runs into this issue.
The problem is Telegram's cache. They cache images server-side so that they don't have to make multiple requests to the same URL. This protects them from potentially getting blacklisted for making too many requests, and makes things snappier.
Unfortunately, if you're using an API like The Cat API, this means you will be sending the same image over and over again. The simplest solution is to make the link slightly different every time, which is most easily accomplished by including the current epoch time as part of the URL.
For your example in JavaScript, this can be accomplished with the following modification:
bot.on(/aap/, (msg) => {
  let epoch = (new Date).getTime();
  return bot.sendPhoto(msg.from.id, foto + "?time=" + epoch);
});
Or something similar. The main point is that as long as the URL is different, you won't receive a cached result. The other option is to download the file and then send it locally; this is what Telebot does if you pass the serverDownload option into sendPhoto.
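For completeness, a minimal sketch of that second option using the serverDownload option mentioned above; moving the random pick inside the handler (so each request can choose a different photo) is my addition, not part of the original answer:

bot.on(/aap/, (msg) => {
  // Pick a fresh random photo on every request, then let Telebot
  // download and re-upload it so Telegram's cache is bypassed.
  const foto = Historiepics[Math.floor(Math.random() * Historiepics.length)];
  return bot.sendPhoto(msg.from.id, foto, { serverDownload: true });
});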

Does Relay.js support isomorphic server-side rendering with multiple sessions?

Last time I checked, Relay.js did not support a session-based NetworkLayer (only one NetworkLayer could be used at a time).
Thus, a queue hack (https://github.com/codefoundries/isomorphic-material-relay-starter-kit/blob/master/webapp/renderOnServer.js#L66) was required to support multiple sessions. It cannot be used in production, as each render completely blocks every other render (including data fetching).
What's the current status of this issue?
Where can I follow the progress (GitHub issues) and possibly help?
This is the GitHub issue you're looking for; great progress has been made on making most of Relay "contextual" at this point. See that issue for more details.
Since version 0.6, isomorphic-relay (which isomorphic-material-relay-starter-kit uses under the hood) supports per-HTTP-request network layers, allowing you to pass session data on to the GraphQL server. Importantly, it uses an isolated Relay store for each request, so no user can see another user's private data.
Example usage:
app.get('/', (req, res, next) => {
  // Pass the user cookies on to the GraphQL server:
  const networkLayer = new Relay.DefaultNetworkLayer(
    'http://localhost:8080/graphql',
    { headers: { cookie: req.headers.cookie } },
  );
  // Pass the network layer to IsomorphicRelay.prepareData:
  IsomorphicRelay.prepareData(rootContainerProps, networkLayer).then(({ data, props }) => {
    const reactOutput = ReactDOMServer.renderToString(
      <IsomorphicRelay.Renderer {...props} />
    );
    res.render('index.ejs', {
      preloadedData: JSON.stringify(data),
      reactOutput
    });
  }).catch(next);
});
Sounds like the problem is in Relay.js, which means you should start on their GitHub page if you want to help.

XHR Bandwidth reduction

So we're using XHR to validate that pages exist and have content, but since we make a lot of requests we wanted to trim down some of the bandwidth used.
We thought about using a HEAD request to check for a non-200 status, then realized that's still two requests if the page exists, and came up with this sample code:
Ajax.prototype.get = function (location, callback)
{
  var Request = new XMLHttpRequest();
  Request.open("GET", location, true);
  Request.onreadystatechange = function ()
  {
    if (Request.readyState === Request.HEADERS_RECEIVED)
    {
      if (Request.status != 200)
      {
        // Ignore the data to save bandwidth
        callback(Request);
        Request.abort();
      }
      else
      {
        // Override the callback here to ensure a single callback fires
        Request.onreadystatechange = function ()
        {
          if (Request.readyState === Request.DONE)
          {
            callback(Request);
          }
        }
      }
    }
  }
  Request.send(null);
}
What I would like to know is: does this actually work, or does the response body always come back to the client anyway?
Thanks
I won't give a definitive answer, but I have some thoughts that are too long for a comment.
Theoretically, aborting the request should cause the underlying connection to be closed. Assuming TCP-based communication, that means sending a FIN to the server, which should then stop sending data and ACK the FIN. But this is HTTP, and there might be other magic going on (connection pipelining, etc.)...
Anyway, when you close the connection early, the client will still receive all the data that was sent during the round-trip delay, as the server keeps sending at least until it gets the stop signal. With a medium delay and a high-bandwidth connection, that could be a lot of data, and depending on the total size it may well be a good portion of the complete response.
Note that while your code will not receive any of this data, it is still transferred to the client's network device and passed at least part of the way up the network stack. So, while this data never reaches your application level, the bandwidth is consumed anyway.
My (educated) guess is that it will not save as much as you would like (under "normal" conditions). I would suggest you do a real-world test and see if it is worth the effort.
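As an aside, a minimal sketch of the same idea with the modern fetch API, whose promise resolves as soon as the headers arrive, so AbortController can cancel the body download (the caveat above about in-flight data still applies; checkPage and its url argument are illustrative names):

// Inspect the status as soon as headers arrive; abort before reading
// the body when the status is not 200.
async function checkPage(url) {
  const controller = new AbortController();
  const response = await fetch(url, { signal: controller.signal });
  if (response.status !== 200) {
    controller.abort(); // cancel the body download
    return null;
  }
  return response.text(); // page exists: read the content
}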
