XHR Bandwidth reduction - ajax

So we're using XHR to validate that pages exist and have content, but since we make a lot of requests we wanted to trim down the bandwidth used.
We thought about using a HEAD request to check for a non-200 status, then realized that's still two requests if the page exists, and came up with this sample code:
Ajax.prototype.get = function (location, callback)
{
    var Request = new XMLHttpRequest();
    Request.open("GET", location, true);
    Request.onreadystatechange = function ()
    {
        if (Request.readyState === Request.HEADERS_RECEIVED)
        {
            if (Request.status != 200)
            {
                // Ignore the data to save bandwidth
                callback(Request);
                Request.abort();
            }
            else
            {
                // Override the callback here to ensure a single callback fires
                Request.onreadystatechange = function ()
                {
                    if (Request.readyState === Request.DONE)
                    {
                        callback(Request);
                    }
                };
            }
        }
    };
    Request.send(null);
};
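For reference, the method would be called roughly like this (a hypothetical call site; the Ajax constructor itself is not shown here):

// Hypothetical usage, assuming an Ajax constructor exists
// as the prototype assignment above suggests:
var ajax = new Ajax();
ajax.get("/some/page.html", function (request) {
    if (request.status === 200) {
        // Fired at DONE, so the full body is available.
        console.log("Page exists:", request.responseText.length, "bytes");
    } else {
        // Fired at HEADERS_RECEIVED and then aborted, so no body was kept.
        console.log("Page missing, status:", request.status);
    }
});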
What I would like to know is: does this actually work, or does the response body always come back to the client anyway?
Thanks

I won't give a definitive answer, but I have some thoughts that are too long for a comment.
Theoretically, aborting the request should cause the underlying connection to be closed. Assuming TCP-based communication, that means sending a FIN to the server, which should then stop sending data and ACK the FIN. But this is HTTP, and there might be other magic going on (connection pipelining, etc.)...
Anyway, when you close the connection early, the client will still receive all the data that was sent during the round-trip delay, as the server will keep sending at least until it gets the stop signal. If you have a medium delay and a high-bandwidth connection, this could be a lot of data and will, depending on the total amount, quite possibly be a good portion of the complete response.
Note that, while your code will not receive any of this data, it is still transferred to the client's network device and passed at least partway up the network stack. So, even though this data never reaches your application layer, the bandwidth is consumed anyway.
My (educated) guess is that it will not save as much as you would like (under "normal" conditions). I would suggest you do a real-world test and see if it is worth the effort.
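For what it's worth, the same early-abort idea looks like this with fetch and AbortController (a minimal sketch of my own, not from the question; whether it actually saves bandwidth is subject to exactly the transport behaviour described above):

// The fetch promise resolves once the headers arrive, so the status
// can be checked before any of the body has been read.
async function checkPage(location) {
    const controller = new AbortController();
    const response = await fetch(location, { signal: controller.signal });
    if (response.status !== 200) {
        // Abort to stop the body download; same caveats as XHR's abort().
        controller.abort();
        return response;
    }
    // Page exists: consume the body as usual.
    return response.text();
}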

I am trying to increase the performance of my code by switching from XMLHttpRequest to fetch requests with asynchronous code

I want to confirm that I am not slowing down the code below. My goal is to increase the performance of the application. I was considering Promise.all, but I'm not sure it is necessary, as I believe that with the code written as it is now, all the fetch requests run simultaneously?
The test functions don't need to wait for each other. I don't want to create a funnel where each test function waits for the previous one to finish, or where each fetch request waits for the previous one to finish. The goal is to have them all running together. Is my code doing that at the moment? Thank you for your assistance.
function test1() {
    // some code here
    fetch(URL)
        .then(checkStatusandContentType)
        .then(HtmlToObject)
        .then(subTest1 => { /* work with the data here */ })
        .catch(error => { console.log('Request failed', error); });
}
function test2() {
    // some code here
    fetch(URL)
        .then(checkStatusandContentType)
        .then(HtmlToObject)
        .then(subTest2 => { /* work with the data here */ })
        .catch(error => { console.log('Request failed', error); });
    fetch(URL)
        .then(checkStatusandContentType)
        .then(HtmlToObject)
        .then(subTest3 => { /* work with the data here */ })
        .catch(error => { console.log('Request failed', error); });
    fetch(URL)
        .then(checkStatusandContentType)
        .then(HtmlToObject)
        .then(subTest4 => { /* work with the data here */ })
        .catch(error => { console.log('Request failed', error); });
}
// ....and hundreds like the above test functions down here
const checkStatusandContentType = async response => {
    const isJson = response.headers.get('content-type')?.includes('application/json');
    const isHtml = response.headers.get('content-type')?.includes('text/html');
    const data = isJson ? await response.json()
        : isHtml ? await response.text()
        : null;
    // check for error response
    if (!response.ok) {
        // get error message from body or default to response status
        const error = (data && data.message) || response.status;
        return Promise.reject(error);
    }
    return data;
}
const HtmlToObject = data => {
    const stringified = data;
    // CONTENT EXTRACT: pull the JSON out from between the <X1> tags
    const processCode = stringified.substring(stringified.lastIndexOf("<X1>") + 4, stringified.indexOf("</X1>"));
    data = JSON.parse(processCode);
    return data;
};
TL;DR: fetch and XMLHttpRequest perform the same.
You said you want to increase your application's performance. It's usually wise to dream up a way of measuring the performance when you do that.
Your performance measurement may be for just one desktop user with an excellent connection to the network. Or, it may be for hundreds of mobile devices using your app at the same time.
Browser HTML / Javascript apps using XMLHttpRequest (XHR for short) or fetch requests are often designed to display something useful to your user, and then use the received data to make that display even more useful. If your measure of performance is how long it takes for a single user to see something useful, you may already have acceptable performance. But if you have hundreds of mobile users, performance is harder to define.
You asked about whether XHR or fetch has better performance. From the point of view of your server, they are the same: both generate requests that your server must satisfy. They both generate the same requests; your server can't tell them apart. fetch requests are easier to code, as you have discovered.
Your code runs many requests. You showed us three, but you said you have many more. Browsers restrict the number of concurrent outbound requests; the rest wait for an available slot. Most browsers allow six concurrent requests to any given domain, and ten overall.
So, your concurrent fetch operations (or concurrent XHR operations, it doesn't matter which) will hit your server with six connections at once. That's fine for low-volume applications with good bandwidth. But if your app scales up to many users or must work over limited (mobile) bandwidth, you should consider whether you will overload your users' networks or your server.
Reducing the number of requests coming from your browser app, and perhaps returning more complete information in each request, is a good way to manage this server and network load.
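As for Promise.all: the fetch calls in your code already start concurrently, so Promise.all would only change how you collect the results, not how the requests are scheduled. A minimal sketch (the urls array is a hypothetical placeholder):

const urls = [/* your test URLs */];
// Each fetch starts immediately; the browser queues any that exceed
// its per-domain connection limit. Promise.all just gathers the results.
Promise.all(urls.map(u =>
    fetch(u)
        .then(checkStatusandContentType)
        .then(HtmlToObject)
))
    .then(results => { /* work with all the data here */ })
    .catch(error => { console.log('Request failed', error); });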

connect ETIMEDOUT 137.116.128.188:443 for Bot Framework bot - can the timeout be extended?

So I have a request that is expected to run for at least 1 minute before it gives a response.
To reassure the user that something is happening while my request is still running, I send some typing activities.
With the production code censored for work-sensitive information, this is generally what my code looks like:
var queryDone = "No";
var xmlData = '';
let soappy = soapQuery("123", "456", "789", "getInfo");
soappy.then(function (res) {
    queryDone = 'Yes';
    xmlData = res;
    console.log(xmlData);
}, function (err) {
    queryDone = 'Timeout';
})
while (queryDone == 'No') {
    await step.context.sendActivity({ type: 'typing' });
}
where soapQuery() is a function that sends and returns the POST request which looks like this:
return new Promise(function (resolve, reject) {
    request.post(options, function (error, response, body) {
        if (!error && response.statusCode == 200) {
            resolve(body);
        }
        else {
            reject(error);
        }
    })
})
The problem comes from this 1-minute response time (it's not really negotiable, as the server requires at least 1 minute to process the request, due to the large amount of data and the validation involved).
After 1 minute the console does print the response, but sadly, even before this, the bot has already timed out.
Any suggestions on how to fix this, or how to extend the bot's timeout?
I need the sendTyping activity so that the user understands the request is not yet done. And again, it really takes 1 minute before my request responds.
Thank you!
So, the reason this happens is that HTTP requests work a little differently in the Bot Framework than you might expect. Here's what's happening in your scenario:
1. The user sends an HTTP POST.
2. The bot calls your soapQuery.
3. The bot starts sending typing indicators.
4. soapQuery completes.
5. The bot finally sends an HTTP response to the POST from step 1, after the request has already timed out (which happens after 15 seconds).
To fix this, I would:
Use showTypingMiddleware to send the typing indicator continuously and automatically until the bot sends another message (this gets rid of your blocking while loop)
Once soapQuery finishes, the bot will have lost the context for the conversation, so your soappy.then() function will need to send a proactive message. To do so, save a reference to the conversation before calling soappy(), and then, inside the then() function, use that conversationReference to send the proactive message to the user (see the sketch below).
Note, however, that the bot in the sample I linked above sends the proactive message after receiving a request on the api/notify endpoint. Yours doesn't need to do that; it just needs to send the proactive message using similar code. Here's some more documentation on Proactive Messages.
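A minimal sketch of that flow with the Node.js SDK (assuming Bot Framework SDK v4, and that adapter is your bot's adapter instance in scope; the exact wiring around your dialog step will differ):

const { TurnContext } = require('botbuilder');

// Capture the conversation reference *before* starting the long call.
const reference = TurnContext.getConversationReference(step.context.activity);

soapQuery("123", "456", "789", "getInfo").then(function (res) {
    // The original turn has ended by now, so re-enter the conversation
    // proactively using the saved reference.
    return adapter.continueConversation(reference, async (turnContext) => {
        await turnContext.sendActivity('Your query finished: ' + res);
    });
}, function (err) {
    return adapter.continueConversation(reference, async (turnContext) => {
        await turnContext.sendActivity('Sorry, the query failed or timed out.');
    });
});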

RealCall.cancel appears that it 'could' close my connection

For some reason, at 27 requests per second we sometimes start to see issues with okhttp, and we noticed a 5-requests-per-host limit. We are talking to an API that is sometimes flaky / times out, etc.
I noticed that we are not cancelling requests on timeout, and they seem to still be in flight (i.e. I want to start using RealCall.cancel).
In looking into this, though, RealCall.cancel calls engine.cancel() if the engine is not null, which calls streamAllocation.cancel(), which runs the following code...
public void cancel() {
    HttpStream streamToCancel;
    RealConnection connectionToCancel;
    synchronized (connectionPool) {
        canceled = true;
        streamToCancel = stream;
        connectionToCancel = connection;
    }
    if (streamToCancel != null) {
        streamToCancel.cancel();
    } else if (connectionToCancel != null) {
        connectionToCancel.cancel();
    }
}
This looks extremely scary, as I just wanted to cancel the one request, not the entire connection. I.e. maybe just the HTTP/2 stream, but I definitely want the connection kept alive (I think).
thanks,
Dean
If you cancel before there is a stream, such as during the TLS handshake, canceling will cancel the entire connection. Once you have a stream, canceling only cancels the stream.

ZeroMQ choose recipient

I'm new to ZeroMQ (and to networking in general), and have a question about using ZeroMQ in a setup where multiple clients connect to a single server. My situation is as follows:
--1 server
--multiple clients
--Clients send messages to server: I've already figured out how to do this part.
--Server sends messages to a specific client: This is the part I'm having trouble with. When certain events get handled on the server, the server will need to send a message to a specific client -- not all clients. In other words, the server will need to be able to choose which client to send a given message to.
Right now, this is my server code:
using (NetMQContext ctx = NetMQContext.Create())
{
    using (var server = ctx.CreateResponseSocket())
    {
        server.Bind(@"tcp://127.0.0.1:5555");
        while (true)
        {
            string fromClientMessage = server.ReceiveString();
            Console.WriteLine("From Client: {0}", fromClientMessage);
            // There is no overload for the 'Send' method
            // that takes an IP address as an argument!
            server.Send("ack");
        }
    }
}
I have a feeling that the problem is that my design is wrong, and that the ResponseSocket type isn't meant to be used in the way that I want to use it. Since I'm new to this, any advice is very much appreciated!
When using the response socket you are always replying to the client that sent you the message, so the request and response socket types together only give you simple request-reply.
For more complicated scenarios you probably want to use dealer-router.
With a router socket, the first frame of each message is the routing id (the identity of the client that sent you the message),
so your example with a router will look like:
using (NetMQContext ctx = NetMQContext.Create())
{
    using (var server = ctx.CreateRouterSocket())
    {
        server.Bind(@"tcp://127.0.0.1:5555");
        while (true)
        {
            byte[] routingId = server.Receive();
            string fromClientMessage = server.ReceiveString();
            Console.WriteLine("From Client: {0}", fromClientMessage);
            server.SendMore(routingId).Send("ack");
        }
    }
}
I also suggest reading the ZeroMQ guide; it will probably answer most of your questions.

Async sends in .NET ActiveMQ

I'm looking to increase the performance of a high-throughput producer that I'm writing against ActiveMQ, and according to this, useAsyncSend will:
Forces the use of Async Sends which adds a massive performance boost;
but means that the send() method will return immediately whether the
message has been sent or not which could lead to message loss.
However I can't see it making any difference to my simple test case.
Using this very basic application:
const string QueueName = "....";
const string Uri = "....";
static readonly Stopwatch TotalRuntime = new Stopwatch();

static void Main(string[] args)
{
    TotalRuntime.Start();
    SendMessage();
    Console.ReadLine();
}

static void SendMessage()
{
    var session = CreateSession();
    var destination = session.GetQueue(QueueName);
    var producer = session.CreateProducer(destination);
    Console.WriteLine("Ready to send 700 messages");
    Console.ReadLine();
    var body = new byte[600 * 1024];
    Parallel.For(0, 700, i => SendMessage(producer, i, body, session));
}

static void SendMessage(IMessageProducer producer, int i, byte[] body, ISession session)
{
    var message = session.CreateBytesMessage(body);
    var sw = new Stopwatch();
    sw.Start();
    producer.Send(message);
    sw.Stop();
    Console.WriteLine("Running for {0}ms: Sent message {1} blocked for {2}ms",
        TotalRuntime.ElapsedMilliseconds,
        i,
        sw.ElapsedMilliseconds);
}

static ISession CreateSession()
{
    var connectionFactory = new ConnectionFactory(Uri)
    {
        AsyncSend = true,
        CopyMessageOnSend = false
    };
    var connection = connectionFactory.CreateConnection();
    connection.Start();
    var session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge);
    return session;
}
I get the following output:
Ready to send 700 messages
Running for 2430ms: Sent message 696 blocked for 12ms
Running for 4275ms: Sent message 348 blocked for 1858ms
Running for 5106ms: Sent message 609 blocked for 2689ms
Running for 5924ms: Sent message 1 blocked for 2535ms
Running for 6749ms: Sent message 88 blocked for 1860ms
Running for 7537ms: Sent message 610 blocked for 2429ms
Running for 8340ms: Sent message 175 blocked for 2451ms
Running for 9163ms: Sent message 89 blocked for 2413ms
.....
This shows that each message takes about 800ms to send and that the call to producer.Send() blocks for about two and a half seconds, even though the documentation says that
"send() method will return immediately"
Also, these numbers are basically the same whether I change the Parallel.For to a normal for loop or change AsyncSend = true to AlwaysSyncSend = true, so I don't believe the async switch is working at all...
Can anyone see what I'm missing here to make the send asynchronous?
After further testing:
According to the ANTS performance profiler, the vast majority of the runtime is being spent waiting for synchronization. It appears that the issue is that the various transport classes block internally through monitors. In particular, I seem to get hung up on the MutexTransport's OneWay method, which only allows one thread to access it at a time.
It looks as though the call to Send will block until the previous message has completed; this explains why my output shows that the first message blocked for 12ms while the next took 1858ms. I can get multiple transports by implementing a connection-per-message pattern, which improves matters and makes the message sends work in parallel, but it greatly increases the time to send a single message and uses up so many resources that it doesn't seem like the right solution.
I've retested all of this with 1.5.6 and haven't seen any difference.
As always, the best thing to do is update to the latest version (1.5.6 at the time of this writing). A send can block if the broker has producer flow control enabled and you've reached a queue size limit, although with async send this shouldn't happen unless you are sending with a producerWindowSize set. One good way to get help is to create a test case and submit it via a Jira issue to the NMS.ActiveMQ site so that we can look into it using your test code. There have been many fixes since 1.5.1, so I'd recommend giving the new version a try, as this could already be a non-issue.
