sending multiple responses to client - go

I have a web client and a Go server. The client sends some JSON data, which is processed, and the server then returns a JSON response.
But what can I do when I want to inform the client about the results of a very slow process, and even allow the client to stop the process?
I've been thinking I could have the client send a new request every 5-10 seconds to poll for updates, but that doesn't seem very efficient, and it wouldn't allow me to stop a process I started with go mySlowFunc().

You can create "guards" for slow functions. A guard limits execution time: if the function succeeds within the limit, its result is returned; if not, a default value is returned and the function is cancelled.
Example of code:
select {
case result := <-successChan:
    return result, nil // the function finished within the limit
case <-timeoutChan:
    return "", nil // timed out: fall back to the default value
}
Example of usage: https://github.com/lisitsky/go-site-search-string
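A minimal, self-contained sketch of such a guard (the names guard and slowFunc are illustrative, not from the linked repository; it assumes the slow function accepts a context.Context so cancellation can actually stop the work):

package main

import (
    "context"
    "fmt"
    "time"
)

// guard runs slowFunc with a time limit. If slowFunc finishes in time,
// its result is returned; otherwise the context is cancelled (telling
// slowFunc to stop working) and a default value is returned instead.
func guard(timeout time.Duration, slowFunc func(context.Context) string) string {
    ctx, cancel := context.WithTimeout(context.Background(), timeout)
    defer cancel()

    successChan := make(chan string, 1) // buffered so the goroutine never leaks
    go func() {
        successChan <- slowFunc(ctx)
    }()

    select {
    case result := <-successChan:
        return result
    case <-ctx.Done():
        return "" // default value on timeout
    }
}

func main() {
    result := guard(2*time.Second, func(ctx context.Context) string {
        select {
        case <-time.After(5 * time.Second): // simulated slow work
            return "finished"
        case <-ctx.Done(): // the guard gave up; stop working
            return ""
        }
    })
    fmt.Printf("got %q\n", result)
}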

Related

Laravel API check if client http connection is still alive

In Laravel we have an API endpoint that may take a few minutes. It processes an input in batches and returns a response when all batches are processed. Pseudo-code below.
Sometimes it takes too long for the user, so the user navigates away and the connection is killed client-side. However, the backend keeps processing until it tries to return the response, which then fails with a broken pipe error.
To save resources, we're looking for a way to check after each batch whether the client is still connected, using a function like check_if_client_is_still_connected() below. If the client is gone, an error is raised and processing stops. Is there a way to achieve this?
function myAPIEndpoint($all_batches) {
    $result = [];
    foreach ($all_batches as $batch) {
        $batch_result = do_something_long($batch);
        $result = $result + $batch_result;
        check_if_client_is_still_connected();
    }
    return $result;
}
PS: I know async tasks or web sockets could be more appropriate for long requests, but we have good reasons to use a standard http endpoint for this.
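The question is about Laravel, but for comparison with the Go questions on this page: in Go the request's context is cancelled as soon as the client disconnects, so the per-batch check comes for free. A minimal sketch, with hypothetical Batch and doSomethingLong placeholders standing in for the pseudo-code above:

package main

import "net/http"

// Batch and doSomethingLong are hypothetical stand-ins for the
// pseudo-code's $all_batches elements and do_something_long().
type Batch []byte

func doSomethingLong(b Batch) []byte { return b }

func myAPIEndpoint(w http.ResponseWriter, r *http.Request, batches []Batch) {
    var result []byte
    for _, batch := range batches {
        result = append(result, doSomethingLong(batch)...)
        // r.Context() is cancelled as soon as the client disconnects.
        select {
        case <-r.Context().Done():
            return // client went away; stop processing remaining batches
        default: // still connected; continue with the next batch
        }
    }
    w.Write(result)
}

func main() {
    http.HandleFunc("/endpoint", func(w http.ResponseWriter, r *http.Request) {
        myAPIEndpoint(w, r, []Batch{Batch("a"), Batch("b")})
    })
    http.ListenAndServe(":8080", nil)
}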

How to create a shared queue in Go?

I am trying to implement the least-connections algorithm for a load balancer. I am using a priority queue to keep the connection count per server in sorted order.
Here is the code:
server = spq[0]
serverNumber = server.value
updatedPriority = server.priority + 1 // Increment connection count for server
spq.update(server, serverNumber, updatedPriority)

targetUrl, err := url.Parse(configuration.Servers[serverNumber])
if err != nil {
    log.Fatal(err)
}

// Send the request to the selected server
httputil.NewSingleHostReverseProxy(targetUrl).ServeHTTP(w, r)

updatedPriority = server.priority - 1 // Decrement connection count for server
spq.update(server, serverNumber, updatedPriority)
where spq is my priority queue. This code runs for every request the balancer receives.
But I am not getting correct results when I log the state of the queue after each request. For example, in one case the queue contained the same server twice, with different priorities.
I am sure this has something to do with synchronising and locking the queue across requests, but I am not sure what the correct approach is in this particular case.
If this is really your code and it runs in multiple goroutines, then you clearly have a race.
I do not understand spq.update. At first it looks like a function that reorders the queue so the server with the minimum number of calls sits at element 0, but then why does it need both server and serverNumber? serverNumber appears to be a unique ID for the server, and since you already have server, why do you need it?
In any case, you should have a sync.Mutex shared by all goroutines. Lock the mutex before the first line and unlock it after spq.update; then lock it again after the proxy call and unlock it when all is done. Also note that the line that subtracts 1 from server.priority will only work if server is a pointer. If it is not a pointer, you're losing all the updates to server that happened during the call.
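A minimal sketch of that locking discipline, reusing the question's spq, configuration, and update names (so this is a fragment, not a complete program; it assumes server is a pointer into the queue and that "sync" is imported):

var mu sync.Mutex // one mutex shared by all request goroutines

func balance(w http.ResponseWriter, r *http.Request) {
    // Choose the least-loaded server and bump its count atomically.
    mu.Lock()
    server := spq[0] // must be a pointer so the later decrement sticks
    serverNumber := server.value
    spq.update(server, serverNumber, server.priority+1)
    mu.Unlock()

    targetUrl, err := url.Parse(configuration.Servers[serverNumber])
    if err != nil {
        log.Fatal(err)
    }
    httputil.NewSingleHostReverseProxy(targetUrl).ServeHTTP(w, r)

    // The proxied request is done: release the connection count.
    mu.Lock()
    spq.update(server, serverNumber, server.priority-1)
    mu.Unlock()
}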

Request body too large causing connection reset in Go

I have a simple multipart form which uploads to a Go app. I wanted to set a restriction on the upload size, so I did the following:
func myHandler(rw http.ResponseWriter, request *http.Request) {
    request.Body = http.MaxBytesReader(rw, request.Body, 1024)
    err := request.ParseMultipartForm(1024)
    if err != nil {
        // Some response.
    }
}
Whenever an upload exceeds the maximum size, I get a connection reset, and yet the code continues executing. I can't seem to provide any feedback to the user. Instead of severing the connection I'd prefer to respond with "You've exceeded the size limit". Is this possible?
This code works as intended. From the documentation of http.MaxBytesReader:
MaxBytesReader is similar to io.LimitReader but is intended for limiting the size of incoming request bodies. In contrast to io.LimitReader, MaxBytesReader's result is a ReadCloser, returns a non-EOF error for a Read beyond the limit, and closes the underlying reader when its Close method is called.
MaxBytesReader prevents clients from accidentally or maliciously sending a large request and wasting server resources.
You could use io.LimitReader to read just N bytes and then do the handling of the HTTP request on your own.
The only way to force a client to stop sending data is to forcefully close the connection, which is what you're doing with http.MaxBytesReader.
You could use an io.LimitReader wrapped in an ioutil.NopCloser and notify the client of the error state. You could then check for more data and try to drain the connection up to another limit to keep it open. However, clients that aren't responding correctly to MaxBytesReader may not work in this case either.
The graceful way to handle something like this is with Expect: 100-continue, but that only really applies to clients other than web browsers.
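A minimal sketch of that approach, reading the raw body instead of using ParseMultipartForm; the one-megabyte drain limit is an arbitrary choice:

package main

import (
    "io"
    "io/ioutil"
    "net/http"
)

const maxUpload = 1024

func myHandler(rw http.ResponseWriter, request *http.Request) {
    // Read one extra byte so "exactly at the limit" and "over the
    // limit" can be told apart.
    body, err := ioutil.ReadAll(io.LimitReader(request.Body, maxUpload+1))
    if err != nil {
        http.Error(rw, "error reading request body", http.StatusInternalServerError)
        return
    }
    if len(body) > maxUpload {
        // Drain up to another limit so the connection can stay open,
        // then tell the client what went wrong instead of resetting.
        io.CopyN(ioutil.Discard, request.Body, 1<<20)
        http.Error(rw, "You've exceeded the size limit", http.StatusRequestEntityTooLarge)
        return
    }
    rw.Write([]byte("upload accepted")) // body is within the limit; process it
}

func main() {
    http.HandleFunc("/upload", myHandler)
    http.ListenAndServe(":8080", nil)
}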

How can I orchestrate concurrent request-response flow?

I'm new to concurrent programming, and have no idea what concepts to start with, so please be gentle.
I am writing a web service as a front-end to a TCP server. The server listens on the port I give it and, for each request, returns the response over the TCP connection.
Here is why I'm writing a web-service front-end for this server:
The server can handle only one request at a time, and I'm trying to make it process several inputs concurrently by launching multiple processes, each listening on a different port. For example, I want to launch 30 instances and tell them to listen on ports 20000-20029.
Our team uses PHP, and PHP cannot launch server instances and maintain them concurrently, so I'm trying to write an API they can just send HTTP requests to.
So, here is the structure I have thought of.
1. I will have a main() function. This function launches the server processes concurrently, then starts an HTTP server on port 80 and listens.
2. I will have an http.Handler that adds the content of each request to a channel.
3. I will have goroutines, one per server instance, each running an infinite loop.
The code for the function mentioned in item three would be something like this:
func handleRequest(queue chan string) {
    for {
        request := <-queue // block until the handler queues a request
        conn, err := connectToServer()
        err = sendRequestToServer(conn, request)
        response, err := readResponseFromServer(conn)
        // TODO: check err and return response to the http.Handler
    }
}
So, my http.Handler can simply do something like queue <- request to add the request to the queue, and handleRequest, which is blocked waiting for the channel to have something, will get the request and continue. When done, the loop comes back to request := <-queue and the same thing continues.
My problem starts in the http.Handler. It makes perfect sense to put requests in a channel, because multiple goroutines are all listening to it. However, how can these goroutines return the result to my http.Handler?
One way is to use a channel, let's call it responseQueue, that all of these goroutines would write to. The problem is that when a response is added to the channel, I don't know which request it belongs to. In other words, when multiple http.Handlers send requests, each executing handler will not know which response the current message in the channel belongs to.
Is there a best practice, or a pattern, for sending data to a goroutine from another goroutine and receiving the data back?
Create a per-request response channel and include it in the value sent to the worker. The handler receives from the channel; the worker sends the result to the channel.
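A minimal sketch of that pattern, with the TCP round trip replaced by a placeholder (the job type and the worker/handler names are illustrative):

package main

import (
    "fmt"
    "net/http"
)

// job pairs a request payload with the channel its response goes to.
type job struct {
    payload  string
    response chan string
}

var queue = make(chan job)

// worker owns one backend server instance and serves queued jobs one
// at a time.
func worker(id int, queue chan job) {
    for j := range queue {
        // Here you would dial the TCP server, send j.payload, and
        // read the reply; this placeholder just echoes it.
        j.response <- fmt.Sprintf("worker %d handled %q", id, j.payload)
    }
}

func handler(w http.ResponseWriter, r *http.Request) {
    j := job{payload: r.URL.Path, response: make(chan string, 1)}
    queue <- j                    // hand off to whichever worker is free
    fmt.Fprintln(w, <-j.response) // block until that worker replies
}

func main() {
    for i := 0; i < 30; i++ { // one worker per server instance
        go worker(i, queue)
    }
    http.HandleFunc("/", handler)
    http.ListenAndServe(":80", nil)
}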

Long Running Wicket Ajax Request

I occasionally have some long running AJAX requests in my Wicket application. When this occurs the application is largely unusable as subsequent AJAX requests are queued up to process synchronously after the current request. I would like the request to terminate after a period of time regardless of whether or not a response has been returned (I have a user requirement that if this occurs we should present the user an error message and continue). This presents two questions:
1. Is there any way to specify a timeout that applies to a specific AJAX request, or to all AJAX requests?
2. If not, is there any way to kill the current request?
I've looked through the wicket-ajax.js file and I don't see any mention of a request timeout whatsoever.
I've even gone so far as to try re-loading the page after some timeout on the client side, but unfortunately the server is still busy processing the original AJAX request and does not return until the AJAX request has finished processing.
Thanks!
I don't think it will help you to let the client 'cancel' the request (although that could be made to work).
The point is that the server is busy processing a request that is no longer required. If you want to time out such operations, you have to implement the timeout on the server side. If the operation takes too long, the server aborts it and returns some error value as the result of the Ajax request.
Regarding your queuing problem: consider using asynchronous requests instead of synchronous ones. The client first sends a request that starts the long-running process and returns immediately. The client then periodically polls the server to ask whether the process has finished; each poll also returns immediately, saying either that the process is still running or that it has finished with a certain result.
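The pattern is language-agnostic; below is a minimal sketch of the start-then-poll flow in Go, with a hypothetical longRunningProcess and a naive in-memory job store:

package main

import (
    "fmt"
    "net/http"
    "sync"
    "time"
)

var (
    mu      sync.Mutex
    results = map[string]string{} // job ID -> result; absent until done
    nextID  int
)

// longRunningProcess stands in for the slow server-side operation.
func longRunningProcess() string { time.Sleep(30 * time.Second); return "done" }

// start kicks off the work and returns a job ID immediately.
func start(w http.ResponseWriter, r *http.Request) {
    mu.Lock()
    nextID++
    id := fmt.Sprintf("job-%d", nextID)
    mu.Unlock()

    go func() {
        out := longRunningProcess()
        mu.Lock()
        results[id] = out
        mu.Unlock()
    }()
    fmt.Fprintln(w, id)
}

// poll also returns immediately, reporting the job's current state.
func poll(w http.ResponseWriter, r *http.Request) {
    mu.Lock()
    out, done := results[r.URL.Query().Get("id")]
    mu.Unlock()
    if !done {
        fmt.Fprintln(w, "still running")
        return
    }
    fmt.Fprintln(w, out)
}

func main() {
    http.HandleFunc("/start", start)
    http.HandleFunc("/poll", poll)
    http.ListenAndServe(":8080", nil)
}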
Failed solution: after a given setTimeout I kill the active transports and restart the channel, which handles everything on the client side. I avoided request conflicts by tying each request to an ID and checking it against a global reference that increments each time a request is made and each time one completes.
function longRunningCallCheck(refId) {
    // Make sure the reference id matches the global id. This
    // indicates that we are still processing the long running
    // ajax call.
    if (refId == id) {
        // perform client processing here

        // kill all active transport layers
        var t = Wicket.Ajax.transports;
        for (var i = 0; i < t.length; ++i) {
            if (t[i].readyState != 0) {
                t[i].onreadystatechange = Wicket.emptyFunction;
                t[i].abort();
            }
        }

        // process the default channel
        Wicket.channelManager.done('0|s');
    }
}
Unfortunately, this still left the PageMap blocked, and any subsequent calls waited for the request to complete on the server side.
My solution at this point is instead to give the user the option to log out using a BookmarkablePageLink (which instantiates a new page, thus avoiding contention on the PageMap). Definitely not optimal.
Any better solutions are more than welcome, but this is the best one I could come up with.
