HTTP Synchronous nature - ajax

I have read that HTTP is a synchronous protocol: the client sends a request and waits for a response, and it has to wait for the first response before sending the next request. Ajax uses the HTTP protocol but is asynchronous, in contrast. I also read that
a synchronous request blocks the client until the operation completes, from here. I am confused, and my questions are:
What is the definition of synchronous when talking about the HTTP protocol?
Is synchronous associated with blocking?

HTTP as a protocol is synchronous. You send a request, you wait for a response. As opposed to other protocols where you can send data in rapid succession over the same connection without waiting for a response to your previous data. Note that HTTP/2 is more along those lines actually.
Having said that, you can send multiple independent HTTP requests in parallel over separate connections. There's no "global" lock for HTTP requests, it's just a single HTTP request/response per open connection. (And again, HTTP/2 remedies that limit.)
Now, from the point of view of a Javascript application, an HTTP request is asynchronous. Meaning, Javascript will send the HTTP request to the server, and its response will arrive sometime later. In the meantime, Javascript can continue to work on other things, and when the HTTP response comes in, it will continue working on that. That is asynchronous Javascript execution. Javascript could opt to wait until the HTTP response comes back, blocking everything else in the meantime; but that is pretty bad, since an HTTP response can take a relative eternity compared to all the other things you could get done in the meantime (like keeping the UI responsive).

Asynchronous means you make an HTTP request, but you do not wait until the answer arrives. You handle it when it arrives, and you are free to do other work in between. Meaning: you are not blocking your application from doing anything else.
Synchronous, on the other hand, means you make a request and wait for the answer before you do anything else. Meaning: you are blocking your application from doing anything else.
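The asynchronous behavior can be sketched in a few lines of JavaScript. The network request is simulated here with a Promise that resolves after a short delay (a stand-in for fetch() or XMLHttpRequest); the point is that the code after the request runs before the response arrives:

```javascript
// Simulated HTTP request: resolves ~10 ms later, like a network round trip.
function sendRequest(url) {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`response from ${url}`), 10)
  );
}

const order = [];
sendRequest("/api/data").then((res) => order.push(res)); // handled later
order.push("kept working"); // runs immediately, before the response arrives

setTimeout(() => console.log(order), 50);
// → [ 'kept working', 'response from /api/data' ]
```

A synchronous request would invert that order: nothing after the call would run until the response came back.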

Related

How to capture multiple responses for the same request in JMeter

I am initiating the request from JMeter using an HTTP Request sampler. In the body data I am sending the request, and the application server sends back two different responses for the same request. After receiving the first response, JMeter will close the request, but in my scenario I need to capture the second response as well. Kindly share your ideas on this.
Which protocol? In the case of HTTP, this is exactly how it is supposed to work: one request -> one response. You can keep the underlying TCP connection alive so JMeter will re-use the connection for sending the next request, but the HTTP Request sampler won't expect any additional responses.
In the case of Server-Sent Events you will need to do some scripting in order to handle the situation, as described in the How to Load Test SSE Services with JMeter article.
In the case of WebSockets, take a look at the Read continuation frames.jmx example test plan.

ZeroMQ Request/Response pattern with Node.js

I'm implementing a distributed system for a project and am a bit confused as to how I should properly implement the Req/Res pattern. Basically I have a few endpoints that will send a request to a client for processing tasks and responding.
So basically:
Incoming request is received
The endpoint opens a req and res socket type with the broker
Broker receives the request, proxies it to an available worker
The worker responds, and the endpoint receives the processed value and reports it back to the original caller.
I've found a decent load balance broker script here: http://zguide.zeromq.org/js:lbbroker. There's also an async client/server pattern I'm interested in implementing: http://zguide.zeromq.org/js:asyncsrv which I might adapt into a load balanced implementation.
My question is perhaps a bit simplistic, but: should each endpoint open a new socket on EVERY request, or maintain one open socket for all requests? The former would mean n connections for every request made to the endpoint.
You'd keep the sockets open; there's no need to close them after each request. And there'd be a single socket on every endpoint (client and server). At the server end you read a request from the socket and write your response back to the socket; zmq takes care of ensuring that the response goes back to the right client.

How to issue http request with golang context capability but not by golang http client?

I found golang context is useful for canceling the processing of the server during a client-server request scope.
I can use http.Request.WithContext method to issue the http request with context, but if the client side is NOT using golang, is it possible to achieve that?
Thanks
I'm not 100% sure what you are asking, but using a context for something like a timeout is possible both for handling incoming requests and for outbound requests.
For incoming requests you can use the context and send back a timeout HTTP status code, indicating that the server wasn't able to process the request. It doesn't matter what the client sends you; you decide the timeout on your own within the server.
For outgoing requests, the server doesn't even need to know you have a timeout. You simply set a timeout and have your request cancel if it doesn't get a response back in a set time. This means you likely won't get any response from the server, because your code would cancel the outgoing request.
Now, are you asking for an example of how to code one of these? Or just whether both are possible?

Long-polling vs websocket when expecting one-time response from server-side

I have read many articles on real-time push notifications, and the summary is that websockets are generally the preferred technique as long as you are not concerned about 100% browser compatibility. And yet, one article states that
Long polling - potentially when you are exchanging a single call with the server, and the server is doing some work in the background.
This is exactly my case. The user presses a button which initiates some complex calculations on server-side, and as soon as the answer is ready, the server sends a push-notification to the client. The question is, can we say that for the case of one-time responses, long-polling is better choice than websockets?
Or, unless we are concerned about supporting obsolete browsers, and if I am starting the project from scratch, should websockets ALWAYS be preferred to long-polling when it comes to push protocols?
The question is, can we say that for the case of one-time responses, long-polling is a better choice than websockets?
Not really. Long polling is inefficient (multiple incoming requests, multiple times your server has to check on the state of the long running job), particularly if the usual time period is long enough that you're going to have to poll many times.
If a given client page is only likely to do this operation once, then you can really go either way. There are some advantages and disadvantages to each mechanism.
At a response time of 5-10 minutes, you cannot assume that a single http request will stay alive that long awaiting a response, even if you make sure the server side will stay open that long. Clients or intermediate network equipment (proxies, etc...) may simply not keep the initial http connection open that long. That would have been the most efficient mechanism if you could have done it, but I don't think you can count on it for a random network configuration and client configuration that you do not control.
So, that leaves you with several options which I think you already know, but I will describe here for completeness for others.
Option 1:
Establish websocket connection to the server by which you can receive push response.
Make http request to initiate the long running operation. Return response that the operation has been successfully initiated.
Receive websocket push response some time later.
Close webSocket (assuming this page won't be doing this again).
Option 2:
Make http request to initiate the long running operation. Return response that the operation has been successfully initiated and probably some sort of taskID that can be used for future querying.
Using http "long polling" to "wait" for the answer. Since these requests will likely "time out" before the response is received, you will have to regularly long poll until the response is received.
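Option 2's client side can be sketched as follows; the /start-job and /status endpoints, the taskID field, and the JSON shape are assumptions for illustration, not a real API:

```javascript
// Hypothetical endpoints: POST /start-job kicks off the work and returns
// a taskID; GET /status/<taskID> is held open by the server (the long
// poll) until the result is ready or the request times out.
async function runLongRunningJob() {
  const start = await fetch("/start-job", { method: "POST" });
  const { taskID } = await start.json();
  while (true) {
    const res = await fetch(`/status/${taskID}`); // server holds this open
    const body = await res.json();
    if (body.done) return body.result; // the answer finally arrived
    // Not done yet (the long poll timed out server-side): poll again.
  }
}
```

Each iteration of the loop is one long poll; the re-polling is what makes this less efficient than a single persistent webSocket.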
Option 3:
Establish webSocket connection.
Send message over webSocket connection to initiate the operation.
Receive response some time later that the operation is complete.
Close webSocket connection (assuming this page won't be using it any more).
Option 4:
Same as option 3, but using socket.io instead of plain webSocket to give you heartbeat and auto-reconnect logic to make sure the webSocket connection stays alive.
If you're looking at things purely from the networking and server efficiency point of view, then options 3 or 4 are likely to be the most efficient. You only have the overhead of one TCP connection between client and server and that one connection is used for all traffic and the traffic on that one connection is pretty efficient and supports actual push so the client gets notified as soon as possible.
From an architecture point of view, I'm not a fan of option 1 because it just seems a bit convoluted when you initiate the request using one technology and then send the response via another and it requires you to create a correlation between the client that initiated an incoming http request and a connected webSocket. That can be done, but it's extra bookkeeping on the server. Option 2 is simple architecturally, but inefficient (regularly polling the server) so it's not my favorite either.
There is an alternative that doesn't require polling or keeping a socket connection open all the time.
It's called web push.
The Push API gives web applications the ability to receive messages pushed to them from a server, whether or not the web app is in the foreground, or even currently loaded, on a user agent. This lets developers deliver asynchronous notifications and updates to users that opt in, resulting in better engagement with timely new content.
Some caveats are:
You need to ask for notification permission
Your site needs to have a service worker registered
Having a service worker also means you need SSL / HTTPS

What's the right way to retry a proxy request that failed

I have a proxy servlet that is implemented using Jetty's AsyncProxyServlet.Transparent (Jetty 9). Proxied requests are occasionally failing with EarlyEOF exceptions because of the way the remote server sometimes closes connections. In these cases, I would like the proxy to retry the request on behalf of the client instead of returning a 502 status response. What is the correct way to do this?
I assume I need to override AbstractProxyServlet's onProxyResponseFailure method and implement my own error handling, but I'm not sure how to create and send a new proxy request and associate it with the original request from the client.
Proxy retry with AsyncProxyServlet isn't feasible.
The Async nature of both the browser HTTP exchange and the proxied HTTP exchange means they are tied at the hip to each other.
If one fails, both fail, automatically.
It's very difficult to retry, as the browser HTTP exchange is already committed and partially completed as well.
In essence, the browser HTTP exchange would need to be suspended, and then the proxy HTTP exchange would need to be restarted, from scratch, then you'll need to "catch up" the exchange on the proxy side to the point where you are on the browser side. Once you are caught up, you'll have to adapt the proxy response to match the techniques for the browser response (things like known content-length, gzip state, chunking, etc..)
This is further complicated if the proxy response changes between requests, even in minor ways (response headers, sizes, compression, content, etc..)
The only way you can accomplish retry is to NOT use async, but use full caching of the proxy response BEFORE you send the response to the client (but this is actually more difficult to implement than the Async proxy techniques, as you have to deal with complex memory, http caching, and timeout concerns)
