How to respond to HTTP and keep the goroutine running? (Go)

I'm wondering if it's possible to respond to an HTTP request using the standard http package and still keep the goroutine alive (e.g. to run a long, compute-intensive task). The use case is that I need to receive an HTTP request and then call back that service after a few minutes.

Just spawn a new goroutine from your handler and keep that alive as long as you like.
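For illustration, a minimal sketch of that approach; the callback URL, route and delay below are invented, not taken from the question. The handler completes the HTTP response as soon as it returns, while the spawned goroutine lives on and performs the callback later:

package main

import (
    "log"
    "net/http"
    "time"
)

func handler(w http.ResponseWriter, r *http.Request) {
    // Respond immediately; the HTTP response completes when the handler returns.
    w.WriteHeader(http.StatusAccepted)

    // The goroutine outlives the handler. Don't use r.Context() inside it:
    // that context is canceled as soon as the handler returns.
    go func() {
        time.Sleep(3 * time.Minute) // "after a few minutes"
        resp, err := http.Post("http://example.com/callback", "application/json", nil)
        if err != nil {
            log.Println("callback failed:", err)
            return
        }
        resp.Body.Close()
    }()
}

func main() {
    http.HandleFunc("/task", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}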

Related

Best way to handle timeout / delay event?

I would like to implement a timeout event in Quarkus and I'm searching for the best way to do that.
Problem summary:
I have a process that waits for an answer from a REST service.
If the service is called, I go to the next process.
If the service isn't called before the delay expires, I must not validate the process, and I still go to the next process.
So I'm thinking of using the Quarkus event bus with a delayed message. If the message is sent, I close the process and go to the next process. If the client answers before the delay, the message must never be sent (how can I do that?).
Thank you
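Stripped of the framework, what is being described is a cancel-a-delayed-action pattern: schedule the timeout action, and cancel it if the answer arrives first. A minimal, framework-agnostic sketch of that idea (shown in Go rather than Quarkus, with invented names, purely to illustrate the shape of the solution):

package main

import (
    "fmt"
    "time"
)

// startTimeout schedules the "delay expired" action and returns a cancel
// function; calling it before the delay fires means the action never runs.
func startTimeout(delay time.Duration, onExpire func()) (cancel func() bool) {
    t := time.AfterFunc(delay, onExpire)
    return t.Stop // Stop reports whether it prevented the action from firing.
}

func main() {
    cancel := startTimeout(5*time.Second, func() {
        fmt.Println("delay expired: invalidate the process and move on")
    })

    // Simulate the REST service answering before the delay.
    time.Sleep(1 * time.Second)
    if cancel() {
        fmt.Println("service answered in time: timeout message never sent")
    }
    time.Sleep(6 * time.Second) // the canceled timer will not fire
}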

How to handle http stream responses from within a Substrate offchain worker?

Starting from the Substrate's Offchain Worker recipe that leverages the Substrate http module, I'm trying to handle http responses that are delivered as streams (basically interfacing a pubsub mechanism with a chain through a custom pallet).
Non-stream responses are perfectly handled as-is and reflecting them on-chain with signed transactions is working for me, as advertised in the doc.
However, when the responses are streams (meaning the http requests are never completed), I can only see the stream data logs in my terminal when I shut down the Substrate node. Trying to reflect each received chunk as a signed transaction doesn't work either: I can also see my logs only on node shut down, and the transaction is never sent (which makes sense since the node is down).
Is there an existing pattern for this use case? Is there a way to get the stream observed in background (not in the offchain worker runtime)?
Actually, would it be good practice to keep the worker instance running indefinitely for this http request? (Knowing that, in my configuration, the http request is sent only once, via a command-queue scheme, stored in the pallet storage, that gets cleaned at each block import.)

How can I trigger a Spring Batch execution from Vue.js?

I am trying to trigger a Spring Batch execution from an endpoint. I have implemented a service at the backend, so from Vue I am trying to make a call to that endpoint.
async trigger(data) {
  // POST to the backend endpoint that launches the batch job
  // (the endpoint URL is left blank in the original question).
  let response = await Axios.post('', data);
  console.log(response.data.message);
}
My backend service returns the response "Batch started" and runs the execution in the background, since it is asynchronous, but it does not respond again once the job has been executed (I see the status only in the console). In such a scenario, how can I await the call from Vue until the service execution completes? I understand that the service sends no response once execution is complete/failed. Are there any changes I need to make, either at the backend or the frontend, to support this? Please let me know your thoughts.
It's like you said: the backend service is asynchronous, which means that once a line of code has been executed, it moves on to the next line. If there is no next line, the function exits, the script closes, and the server sends an empty response back to the frontend.
Your options are:
Implement a websocket that broadcasts back when the service has completed, and use that instead;
Use a timeout function to watch for a flag change within the service that indicates that the service has finished its duties; or
Don't use an asynchronous service.
how can I await the call from Vue until the service execution completes
I would not recommend that, since the job may take too long to complete, and you don't want your web client to wait that long for a reply. When configured with an asynchronous task executor, the job launcher immediately returns a JobExecution with an ID which you can inspect later on.
Please check the Running Jobs from within a Web Container section of the Spring Batch reference documentation for more details and code examples.
My suggestion is that you make the front-end query for the job status instead of waiting for the job to complete and respond, because the job may take very long to complete.
Your API that triggers the job should return the job ID; you can get the job ID from the JobExecution object, which is returned when you call JobLauncher.run.
You then implement a Query API in your backend to get the status of the job by job ID. You can implement this using the Spring JobExplorer.
Your front-end can then call this Query API to get the job status, as the sketch below illustrates. You should do this at an interval (e.g. 30 secs, 5 mins, etc., depending on your job). This will prevent your app from getting stuck waiting for the job, and will avoid time-out errors.
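For illustration, here is a minimal sketch of such a polling loop, written generically in Go rather than in the asker's Vue/Axios front end. The status URL and the JSON field name are assumptions; the "COMPLETED"/"FAILED" values mirror Spring Batch's BatchStatus names:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

// pollStatus checks a hypothetical status endpoint every interval until the
// job reports a terminal state.
func pollStatus(url string, interval time.Duration) (string, error) {
    for {
        resp, err := http.Get(url)
        if err != nil {
            return "", err
        }
        var body struct {
            Status string `json:"status"`
        }
        err = json.NewDecoder(resp.Body).Decode(&body)
        resp.Body.Close()
        if err != nil {
            return "", err
        }
        if body.Status == "COMPLETED" || body.Status == "FAILED" {
            return body.Status, nil
        }
        time.Sleep(interval) // e.g. 30 seconds, per the advice above
    }
}

func main() {
    status, err := pollStatus("http://localhost:8080/jobs/status?id=42", 30*time.Second)
    if err != nil {
        fmt.Println("polling failed:", err)
        return
    }
    fmt.Println("job finished with status:", status)
}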

Front-facing REST API with an internal message queue?

I have created a REST API - in a few words, my client hits a particular URL and she gets back a JSON response.
Internally, quite a complicated process starts when the URL is hit, and there are various services involved as a microservice architecture is being used.
I was observing some performance bottlenecks and decided to switch to a message-queue system. The idea is that now, once the user hits the URL, a request is published on an internal message queue, waiting to be consumed. The consumer processes it and publishes back on a queue, and this happens quite a few times until, finally, the same node servicing the user receives the processed response to be delivered to the user.
An asynchronous "fire-and-forget" pattern is now being used. But my question is: how can the node servicing a particular person remember who it was servicing once the processed result arrives back, and do so without blocking (i.e. so it can handle several requests while a response is pending)? If it makes any difference, my stack looks a little like this: Tomcat, Spring, Kubernetes and RabbitMQ.
In summary, how can the request node (whose job is to push items onto the queue) maintain an open connection with the client who requested a JSON response (i.e. the client is waiting for the JSON response) and receive back the data for the correct client?
You have a few different scenarios, depending on how much control you have over the client.
If the client behaviour cannot be changed, you will have to keep the session open until the request has been fully processed. This can be achieved by employing a pool of workers (futures/coroutines, threads or processes) where each worker keeps the session open for a given request.
This method has a few drawbacks, and I would keep it as a last resort. Firstly, you will only be able to serve a limited number of concurrent requests, proportional to your pool size. Secondly, as your processing is behind a queue, your front-end won't be able to estimate how long a task will take to complete. This means you will have to deal with long-lasting sessions, which are prone to fail (what if the user gives up?).
If the client behaviour can be changed, the most common approach is to use a fully asynchronous flow. When the client initiates a request, it is placed in the queue and a Task Identifier is returned. The client can use the given TaskId to poll for status updates. Each time the client requests updates about a task, you simply check whether it has completed and respond accordingly. A common pattern when a task is still in progress is to have the front-end return to the client the estimated amount of time to wait before trying again. This lets your server control how frequently clients poll. If your architecture supports it, you can go the extra mile and provide information about the progress as well.
Example response when task is in progress:
{"status": "in_progress",
"retry_after_seconds": 30,
"progress": "30%"}
A more complex yet elegant solution would consist of using HTTP callbacks. In short, when the client makes a request for a new task, it provides a tuple (URL, Method) the server can use to signal that the processing is done. The client then waits for the server to send the signal to the given URL. You can see a better explanation here. In most cases this solution is overkill, yet I think it's worth mentioning.
One option would be to use DeferredResult, provided by Spring, but that means you need to maintain a pool of threads in the request-serving node, and the maximum number of active threads will determine the throughput of your system. For more details on how to implement DeferredResult, refer to this link: https://www.baeldung.com/spring-deferred-result

Is the HTTP server in the Go standard library nonblocking?

I want a nonblocking HTTP server for RESTful endpoints for my Go project. Will the server included in the Go standard library do the trick?
The Go http package is concurrent, rather than nonblocking in the node.js sense. This means that the request handlers will not delay the processing of other requests even if they perform blocking operations. As Dave C said, it creates a new goroutine for each request. In practice, this means that you get the benefits of a nonblocking server without needing to worry about whether the code you write is blocking.
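A small demonstration of that behavior, using nothing beyond net/http: the slow handler blocks for five seconds, yet concurrent requests to the fast handler are answered immediately, because the server runs each request in its own goroutine.

package main

import (
    "io"
    "log"
    "net/http"
    "time"
)

func main() {
    // A deliberately blocking handler.
    http.HandleFunc("/slow", func(w http.ResponseWriter, r *http.Request) {
        time.Sleep(5 * time.Second) // blocking work
        io.WriteString(w, "slow done\n")
    })
    // Served concurrently: requests here are not delayed by /slow,
    // because net/http handles each request in a separate goroutine.
    http.HandleFunc("/fast", func(w http.ResponseWriter, r *http.Request) {
        io.WriteString(w, "fast done\n")
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}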
