I have a proxy servlet that is implemented using Jetty's AsyncProxyServlet.Transparent (Jetty 9). Proxied requests are occasionally failing with EarlyEOF exceptions because of the way the remote server sometimes closes connections. In these cases, I would like the proxy to retry the request on behalf of the client instead of returning a 502 status response. What is the correct way to do this?
I assume I need to override AbstractProxyServlet's onProxyResponseFailure method and implement my own error handling, but I'm not sure how to create and send a new proxy request and associate it with the original request from the client.
Proxy retry with AsyncProxyServlet isn't feasible.
The async nature of both the browser HTTP exchange and the proxied HTTP exchange means the two are joined at the hip.
If one fails, both fail, automatically.
It's very difficult to retry, as the browser HTTP exchange is already committed and partially completed as well.
In essence, the browser HTTP exchange would need to be suspended, the proxy HTTP exchange restarted from scratch, and then the proxy-side exchange "caught up" to the point the browser side has already reached. Once you are caught up, you have to adapt the proxy response to match what has already been committed to the browser (things like known Content-Length, gzip state, chunking, etc.).
This is further complicated if the proxy response changes between requests, even in minor ways (response headers, sizes, compression, content, etc.).
The only way you can accomplish a retry is to NOT use async, but instead fully cache the proxy response BEFORE you send anything to the client (which is actually more difficult to implement than the async proxy techniques, as you have to deal with complex memory, HTTP caching, and timeout concerns).
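To make that last point concrete, here is a rough, language-agnostic illustration of the "fully buffer the upstream response before committing anything to the client, then retry on failure" idea, sketched with Go's standard library rather than Jetty. The backend address, retry count, and GET-only handling are placeholder assumptions, and the memory and timeout concerns mentioned above still apply:

```go
package main

import (
	"io"
	"net/http"
	"time"
)

// fetchWithRetry buffers the full upstream response in memory, so nothing
// has been committed to the client yet if an attempt fails part-way through.
func fetchWithRetry(upstreamURL string, attempts int) (*http.Response, []byte, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(upstreamURL)
		if err != nil {
			lastErr = err
			time.Sleep(200 * time.Millisecond) // crude backoff
			continue
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil { // e.g. the upstream closed the connection early
			lastErr = err
			continue
		}
		return resp, body, nil
	}
	return nil, nil, lastErr
}

func handler(w http.ResponseWriter, r *http.Request) {
	// Only GET is handled here; the backend address is a placeholder.
	resp, body, err := fetchWithRetry("http://backend.example:8080"+r.URL.RequestURI(), 3)
	if err != nil {
		http.Error(w, "upstream failed", http.StatusBadGateway)
		return
	}
	// Only now, with the complete body in hand, commit anything to the client.
	for k, vs := range resp.Header {
		for _, v := range vs {
			w.Header().Add(k, v)
		}
	}
	w.WriteHeader(resp.StatusCode)
	w.Write(body)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8000", nil)
}
```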
I have read that HTTP is a synchronous protocol: the client sends a request and waits for a response, and has to wait for the first response before sending the next request. Ajax uses the HTTP protocol but is asynchronous, in contrast. I also read that
a synchronous request blocks the client until the operation completes (from here). I am confused, and my questions are:
What is the definition of synchronous when talking about the HTTP protocol?
Is synchronous associated with blocking?
HTTP as a protocol is synchronous. You send a request, you wait for a response. As opposed to other protocols where you can send data in rapid succession over the same connection without waiting for a response to your previous data. Note that HTTP/2 is more along those lines actually.
Having said that, you can send multiple independent HTTP requests in parallel over separate connections. There's no "global" lock for HTTP requests, it's just a single HTTP request/response per open connection. (And again, HTTP/2 remedies that limit.)
Now, from the point of view of a Javascript application, an HTTP request is asynchronous. Meaning, Javascript will send the HTTP request to the server, and its response will arrive sometime later. In the meantime, Javascript can continue to work on other things, and when the HTTP response comes in, it will continue working on that. That is asynchronous Javascript execution. Javascript could opt to wait until the HTTP response comes back, blocking everything else in the meantime; but that is pretty bad, since an HTTP response can take a relative eternity compared to all the other things you could get done in the meantime (like keeping the UI responsive).
Asynchronous means you make an HTTP request but do not wait until the answer arrives. You handle it when it arrives and are free to do other things in between. Meaning: you are not blocking your application from doing anything else.
Synchronous, on the other hand, means you make a request and wait for the answer before you do anything else. Meaning: you are blocking your application from doing anything else.
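The same blocking vs. non-blocking distinction can be shown outside the browser; here is a minimal Go sketch (the URLs are placeholders): the first request blocks the caller until the response arrives, while the second runs in a goroutine so the caller can keep working and handle the response later.

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Synchronous: this line blocks until the response (or an error) arrives.
	resp, err := http.Get("https://example.com/")
	if err == nil {
		fmt.Println("sync status:", resp.Status)
		resp.Body.Close()
	}

	// Asynchronous: fire the request in a goroutine and keep working.
	done := make(chan string, 1)
	go func() {
		resp, err := http.Get("https://example.com/other")
		if err != nil {
			done <- "request failed: " + err.Error()
			return
		}
		defer resp.Body.Close()
		done <- "async status: " + resp.Status
	}()

	fmt.Println("doing other work while the request is in flight...")
	fmt.Println(<-done) // handle the response when it arrives
}
```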
I have an unknown app consuming my Spring web services.
The app sets a timeout on every web service call.
The server, regardless of the app's timeout, keeps processing.
Is there a risk that some other web service call receives the wrong response (the response to the timed-out call)? How does Spring manage this? Doesn't the HTTP protocol take care of this, given that each connection is open for one particular call to the web service, and if it is broken it shouldn't be possible to retrieve the response?
As a developer, you should try to make all HTTP requests to your web server idempotent. That means the client side has to be able to retry a failed request without causing new errors, despite not knowing the result of the previous (timed-out) request.
The client side should handle HTTP client timeouts itself and (by default) treat a timeout error as a failure. Your client side may repeat the request later, and the server side should be able to handle the same request again.
The solutions vary for different tasks depending on their complexity (from an INSERT statement into the database to scheduling a new cron job while avoiding duplication).
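As a sketch of that client-side behaviour in Go (the endpoint and the Idempotency-Key header are assumptions; any scheme that lets the server recognise a repeated request would do): the client treats a timeout as a failure and retries the same request, relying on the server handling the repeat idempotently.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second} // client-side timeout

	payload := []byte(`{"orderId": 42}`)
	// The same key is sent on every attempt so the server can detect and
	// de-duplicate a retried request (assumes the server supports this).
	idempotencyKey := "order-42"

	var resp *http.Response
	var err error
	for attempt := 1; attempt <= 3; attempt++ {
		req, _ := http.NewRequest(http.MethodPost, "https://api.example.com/orders", bytes.NewReader(payload))
		req.Header.Set("Content-Type", "application/json")
		req.Header.Set("Idempotency-Key", idempotencyKey)

		resp, err = client.Do(req)
		if err == nil {
			break // got a response; timed-out attempts were treated as failures
		}
		time.Sleep(time.Duration(attempt) * time.Second) // back off before retrying
	}
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```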
I found that Go's context is useful for canceling server-side processing within the scope of a client-server request.
I can use the http.Request.WithContext method to issue an HTTP request with a context, but if the client side is NOT using Go, is it possible to achieve that?
Thanks
I'm not 100% sure what you are asking, but using a context for something like a timeout is possible both when handling incoming requests and when making outbound requests.
For incoming requests you can use the context and send back a timeout HTTP status code indicating that the server wasn't able to process the request. It doesn't matter what the client sends you; you get to decide the timeout on your own on the server.
For outgoing requests you don't need the server to even know you have a timeout. You simply set a timeout and have your request just cancel if it doesn't get a response back in a set time. This means you likely won't get any response from the server because your code would cancel the outgoing request.
Now, are you asking for an example of how to code one of these, or just whether both are possible?
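In case an example helps, here is a minimal Go sketch of both sides (the port, the 1- and 2-second limits, and the handler's fake workload are all assumptions): the handler enforces its own deadline regardless of the client, and the outbound request is cancelled by its own context.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// Incoming request: the server enforces its own deadline, no matter what
// timeout the (unknown) client uses, and answers with a timeout status itself.
func slowHandler(w http.ResponseWriter, r *http.Request) {
	ctx, cancel := context.WithTimeout(r.Context(), 1*time.Second)
	defer cancel()

	select {
	case <-time.After(5 * time.Second): // stands in for the real work
		fmt.Fprintln(w, "done")
	case <-ctx.Done(): // deadline hit, or the client went away
		http.Error(w, "processing took too long", http.StatusGatewayTimeout)
	}
}

func main() {
	go http.ListenAndServe(":8080", http.HandlerFunc(slowHandler))
	time.Sleep(100 * time.Millisecond) // give the sketch server a moment to start

	// Outgoing request: the client cancels on its own; the server never needs
	// to know that a timeout was set.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "http://localhost:8080/", nil)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request cancelled:", err) // e.g. context deadline exceeded
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // 504 from the handler's own timeout
}
```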
I want to run a simple webserver in Go doing some basic authorisation and routing to multiple apps.
Is it possible to have the webserver running as a standalone executable and pass the response writer and http request to other executables?
The idea is that the app binaries can hopefully be compiled and deployed independently of the webserver.
Memory areas of running applications are isolated: a process cannot just read or write another application's memory (Wikipedia: Process isolation).
So just passing the response writer and the HTTP request is not so easy. And even if you implemented it (e.g. serializing them into binary or text data, sending/passing them over somehow, and reconstructing them on the other side), serving an HTTP request in the background is more than just interacting with the ResponseWriter and Request objects: it involves reading from and writing to the underlying TCP connection... so you would also have to "pass" the TCP connection or create a bridge between the real HTTP client and the application you forward to.
Another option would be to send a redirect back to the client (HTTP 3xx status codes) after doing the authentication and routing logic. With this solution you could have authentication and certain routing logic implemented in your app, but you would lose further routing possibilities, because subsequent requests would go directly to the designated host.
Essentially what you are trying to create is the functionality of a proxy server, of which there are plenty of implementations out there. Given the complexity of a good proxy server, it is probably not feasible to reproduce one.
I suggest either utilizing an existing proxy server or "refactoring" your architecture to avoid this kind of segmentation.
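That said, Go's standard library already ships a basic reverse proxy (net/http/httputil), so a thin authorising router in front of independently deployed app binaries can be sketched without reimplementing a full proxy. The ports, paths, and the authorisation check below are placeholders:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func mustProxy(raw string) *httputil.ReverseProxy {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	// Each app is a separate executable, deployed and restarted on its own,
	// listening on its own local port (addresses are assumptions).
	app1 := mustProxy("http://127.0.0.1:9001")
	app2 := mustProxy("http://127.0.0.1:9002")

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Placeholder authorisation check.
		if r.Header.Get("Authorization") == "" {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		// Basic path-based routing to the app processes.
		switch {
		case strings.HasPrefix(r.URL.Path, "/app1/"):
			app1.ServeHTTP(w, r)
		case strings.HasPrefix(r.URL.Path, "/app2/"):
			app2.ServeHTTP(w, r)
		default:
			http.NotFound(w, r)
		}
	})

	http.ListenAndServe(":8080", nil)
}
```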
Is there a way to have haproxy or squid run a (bash) script (or make another HTTP request) before proxying an incoming request?
I want to host a userX-specific HTTP server (and service) at userX.mydomain.com, but these kinds of services may or may not be running, depending on the load of the machine that hosts them.
So the first time in the day that userX accesses the URL userX.mydomain.com, the HTTP server hosting serviceX has to be started.
I already managed, thanks to haproxy, xinetd, some bash scripting, and the HTTP Refresh header directive, to perform a refresh after the HTTP server/service starts.
But now I would like to make it even better: the service startup should be transparent to the client issuing a GET, a PUT, or a POST, and the correct service response should come back immediately, even on the very first HTTP request.
So I need to start the service and then immediately proxy the request to the service that was just started.
I already tried the "http-check" and "check" options in haproxy, but I don't think they can help me, because the health checks are asynchronous with respect to haproxy's request handling. Instead, I need this script to run for each request, before haproxy proxies the request.
If squid allows this kind of action, I could even have haproxy proxy the request to squid, which could then start the service and proxy the request.
Does someone have an idea of how to achieve this?
Thanks in advance.
This can be done using proxymachine - https://github.com/mojombo/proxymachine
Basically proxymachine can intercept the HTTP request, parse the headers, run arbitrary Ruby code, and then forward the connection.
You would need to terminate the SSL prior to proxymachine getting the connection - e.g. using haproxy (with the new SSL capability).
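For completeness, the same "start the service on the first request, then proxy to it" behaviour can also be sketched as a tiny custom reverse proxy in Go. Everything below (the start script path, the per-user port lookup, the readiness polling) is an assumption about the setup, not a drop-in solution:

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"net/http/httputil"
	"net/url"
	"os/exec"
	"strings"
	"sync"
	"time"
)

var startMu sync.Mutex

// ensureRunning starts the per-user service if nothing is listening yet,
// then waits until its port accepts connections.
func ensureRunning(user string, port int) error {
	startMu.Lock()
	defer startMu.Unlock()

	addr := fmt.Sprintf("127.0.0.1:%d", port)
	if conn, err := net.DialTimeout("tcp", addr, 200*time.Millisecond); err == nil {
		conn.Close()
		return nil // already up
	}
	// Hypothetical start script; replace with however serviceX is launched.
	if err := exec.Command("/usr/local/bin/start-user-service.sh", user).Start(); err != nil {
		return err
	}
	// Poll until the service is ready (or give up after ~10 seconds).
	for i := 0; i < 50; i++ {
		if conn, err := net.DialTimeout("tcp", addr, 200*time.Millisecond); err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(200 * time.Millisecond)
	}
	return fmt.Errorf("service for %s did not come up", user)
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// userX.mydomain.com -> "userX"; the port assignment is an assumption.
		user := strings.Split(r.Host, ".")[0]
		port := 9000 // look up the real per-user port here

		if err := ensureRunning(user, port); err != nil {
			http.Error(w, "backend unavailable", http.StatusBadGateway)
			return
		}
		target, _ := url.Parse(fmt.Sprintf("http://127.0.0.1:%d", port))
		httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
	})
	http.ListenAndServe(":8080", nil)
}
```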