Does Overlapped I/O on Windows complete in FIFO order? - winapi

When I issue multiple Overlapped I/O requests on the same I/O handle, does Windows complete the first request before starting the next one, or is the order unpredictable?
If the former, what happens when an error occurs on the request currently being processed? Will Windows move on to the next request, or will it cancel all outstanding requests?

Related

Best way to handle timeout / delay event?

I want to implement a timeout event in Quarkus and I am looking for the best way to do that.
Problem summary:
I have a process that waits for an answer from a REST service.
If the service is called, I go to the next process.
If the service isn't called before the delay expires, I must not validate the process, and I go to the next process.
So I'm thinking of using the Quarkus event bus with a delayed message. If the message is sent, I close the process and go to the next one. If the client answers before the delay, the message must never be sent (how can I do that? See the sketch below.)
Thank you.
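One way to get that "never send the message" behaviour is to lean on the Vert.x timer API that Quarkus ships with: schedule a one-shot timer when the process starts waiting, and cancel it if the REST answer arrives in time. Below is a minimal sketch under that assumption; the processId key and the onTimeout/onSuccess callbacks are hypothetical placeholders.

    import io.vertx.core.Vertx;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class ProcessTimeoutHandler {
        private final Vertx vertx; // in Quarkus this can be injected
        // Pending timeout timers, keyed by a hypothetical process id.
        private final ConcurrentMap<String, Long> timers = new ConcurrentHashMap<>();

        public ProcessTimeoutHandler(Vertx vertx) {
            this.vertx = vertx;
        }

        // Called when the process starts waiting for the REST answer.
        public void startWaiting(String processId, long delayMs) {
            long timerId = vertx.setTimer(delayMs, id -> {
                timers.remove(processId);
                onTimeout(processId); // delay expired: do not validate, move on
            });
            timers.put(processId, timerId);
        }

        // Called when the REST service answers in time.
        public void onAnswer(String processId) {
            Long timerId = timers.remove(processId);
            if (timerId != null && vertx.cancelTimer(timerId)) {
                onSuccess(processId); // timer cancelled: the timeout never fires
            }
        }

        private void onTimeout(String processId) { /* close, go to next process */ }
        private void onSuccess(String processId) { /* validate, go to next process */ }
    }

Cancelling the timer is the direct equivalent of "the message will never be sent": the timeout callback simply never runs.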

What may happen if boost::asio::ip::tcp::socket::cancel() is called while the socket is sending a message to the remote end?

The official documentation says:
basic_stream_socket::cancel causes all outstanding asynchronous connect, send and receive operations to finish immediately.
But it does not say where the data that has not yet been sent out goes.
Must I resend the entire message?
Or just resend the data that has not been sent out (in this case, how can I know how many bytes have been sent and how many have not)?
And I know that this API will only cancel asynchronous operations that were initiated in the current thread.

Front-facing REST API with an internal message queue?

I have created a REST API - in a few words, my client hits a particular URL and she gets back a JSON response.
Internally, quite a complicated process starts when the URL is hit, and there are various services involved as a microservice architecture is being used.
I was observing some performance bottlenecks and decided to switch to a message queue system. The idea is that now, once the user hits the URL, a request is published on an internal message queue, waiting to be consumed. A consumer will process it and publish back on a queue, and this will happen quite a few times until, finally, the same node servicing the user receives back the processed response to be delivered to the user.
An asynchronous "fire-and-forget" pattern is now being used. But my question is: how can the node servicing a particular person remember who it was servicing once the processed result arrives back, and do so without blocking (i.e. handle several requests while the response is pending)? If it makes any difference, my stack looks a little like this: Tomcat, Spring, Kubernetes and RabbitMQ.
In summary, how can the request node (whose job is to push items on the queue) maintain an open connection with the client who requested a JSON response (i.e. client is waiting for JSON response) and receive back the data of the correct client?
You have a few different scenarios, depending on how much control you have over the client.
If the client behaviour cannot be changed, you will have to keep the session open until the request has been fully processed. This can be achieved by employing a pool of workers (futures/coroutines, threads or processes) where each worker keeps the session open for a given request.
This method has a few drawbacks, and I would keep it as a last resort. Firstly, you will only be able to serve a limited number of concurrent requests, proportional to your pool size. Secondly, as your processing is behind a queue, your front end won't be able to estimate how long it will take for a task to complete. This means you will have to deal with long-lasting sessions, which are prone to fail (what if the user gives up?).
If the client behaviour can be changed, the most common approach is to use a fully asynchronous flow. When the client initiates a request, it is placed in the queue and a Task Identifier is returned. The client can use the given TaskId to poll for status updates. Each time the client requests updates about a task, you simply check whether it has completed and respond accordingly. A common pattern while a task is still in progress is to have the front end tell the client the estimated amount of time to wait before trying again. This lets your server control how frequently clients poll. If your architecture supports it, you can go the extra mile and provide information about progress as well (a sketch follows the example response below).
Example response when a task is in progress:

    {
      "status": "in_progress",
      "retry_after_seconds": 30,
      "progress": "30%"
    }
A more complex yet elegant solution consists of using HTTP callbacks. In short, when the client makes a request for a new task, it provides a tuple (URL, Method) that the server can use to signal that processing is done. It then waits for the server to send the signal to the given URL. You can see a better explanation here. In most cases this solution is overkill, yet I think it's worth mentioning.
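As a rough illustration of the server side of such a callback (the CallbackNotifier class and its parameters are hypothetical; the URL and method are whatever the client supplied when creating the task):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CallbackNotifier {
        private final HttpClient http = HttpClient.newHttpClient();

        // Invoked once the task finishes; url and method were supplied by the
        // client when it created the task.
        public void notifyClient(String url, String method, String resultJson) {
            HttpRequest callback = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .header("Content-Type", "application/json")
                    .method(method, HttpRequest.BodyPublishers.ofString(resultJson))
                    .build();
            // Fire and forget; a real system would retry on failure.
            http.sendAsync(callback, HttpResponse.BodyHandlers.discarding());
        }
    }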
One option would be to use DeferredResult, provided by Spring, but that means you need to maintain a pool of threads in the request-serving node, and the maximum number of active threads will decide the throughput of your system. For more details on how to implement DeferredResult, see https://www.baeldung.com/spring-deferred-result
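A minimal sketch of that DeferredResult pattern, assuming hypothetical correlation-id bookkeeping and a RabbitMQ listener that calls onQueueResponse when the processed result comes back:

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;
    import org.springframework.web.context.request.async.DeferredResult;

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    @RestController
    public class AsyncController {

        // Pending responses keyed by a hypothetical correlation id.
        private final Map<String, DeferredResult<String>> pending = new ConcurrentHashMap<>();

        // The servlet thread is released immediately; the client connection stays open.
        @GetMapping("/process")
        public DeferredResult<String> process() {
            String correlationId = UUID.randomUUID().toString();
            DeferredResult<String> result =
                    new DeferredResult<>(30_000L, "{\"status\":\"timeout\"}");
            result.onCompletion(() -> pending.remove(correlationId));
            pending.put(correlationId, result);
            // publishToQueue(correlationId, requestPayload); // hypothetical RabbitMQ publish
            return result;
        }

        // Invoked by a RabbitMQ listener when the processed result comes back.
        public void onQueueResponse(String correlationId, String json) {
            DeferredResult<String> result = pending.remove(correlationId);
            if (result != null) {
                result.setResult(json); // resumes the suspended request; JSON goes to the client
            }
        }
    }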

Sending the same HTTP request concurrently in JMeter

We have a test scenario where we want to upload a file in a multithreaded manner, i.e. concurrently upload a file based on the thread count.
The transaction involves a couple of HTTP requests, where only the upload HTTP request needs to be triggered simultaneously (not in a loop).
The Throughput Controller does not help, since it still sends the HTTP requests one after the other.
The Synchronizing Timer does not work either, since the same HTTP request needs to be triggered concurrently from a single thread (see the sketch below).
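Outside of JMeter's per-thread model, the underlying pattern is a barrier release: build N identical requests and let them all fire at the same instant. A plain-Java sketch of that idea (the endpoint URL, file name and concurrency count are placeholders), which could also be adapted to run inside a JSR223 Sampler:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Path;
    import java.util.concurrent.CyclicBarrier;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ConcurrentUpload {
        public static void main(String[] args) throws Exception {
            int concurrency = 5; // desired number of simultaneous uploads (assumption)
            HttpClient client = HttpClient.newHttpClient();
            CyclicBarrier barrier = new CyclicBarrier(concurrency);
            ExecutorService pool = Executors.newFixedThreadPool(concurrency);

            for (int i = 0; i < concurrency; i++) {
                pool.submit(() -> {
                    HttpRequest upload = HttpRequest.newBuilder()
                            .uri(URI.create("https://example.com/upload")) // hypothetical endpoint
                            .POST(HttpRequest.BodyPublishers.ofFile(Path.of("file.bin")))
                            .build();
                    barrier.await(); // all workers release at the same instant
                    HttpResponse<String> response =
                            client.send(upload, HttpResponse.BodyHandlers.ofString());
                    System.out.println(Thread.currentThread().getName()
                            + " -> " + response.statusCode());
                    return null;
                });
            }
            pool.shutdown();
        }
    }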

Does the current thread on the server change after leaving the server?

In an MVC application, when control goes to the client and returns via AJAX, is the current thread from before the same thread when I return, or is it a new thread from the thread pool?
It might be, but there is no guarantee; if that happens it would be mere luck.
From Performing Asynchronous Work, or Tasks, in ASP.NET Applications
New requests are received by HTTP.sys, a kernel driver. HTTP.sys posts the request to an I/O completion port on which IIS listens. IIS picks up the request on one of its thread pool threads and calls into ASP.NET where ASP.NET immediately posts the request to the CLR ThreadPool and returns a pending status to IIS.
and it continues
To execute it, we raise all of the pipeline events and the modules and handlers in the pipeline work on the request, typically while remaining on the same thread, but they can alternatively handle these events asynchronously.
So for one request you can be on the same thread.
As your AJAX request will be a new call into the http.sys kernel driver, followed by the handover to the managed thread pool, it is highly unlikely that the same thread will be re-used. If it did, your web application would need far more threads than it is capable of, slowing the web server to a crawl.
If you want to mimic that the same thread is used, you must hook into one of the state-management-related life-cycle events.
