Set exit code when receiving SIGTERM in Quarkus - quarkus

I’d like to know how I can tell Quarkus to gracefully exit with code 0 upon receiving SIGTERM. I’d still like to react differently (i.e. with an “error” exit code, meaning any number strictly greater than 0) to other exit conditions (such as receiving another signal than SIGTERM).
I have read Graceful Shutdown but it does not talk about specifically reacting to SIGTERM.
I suppose that this might involve ApplicationLifecycleManager or Quarkus#asyncExit() but I’d appreciate having some official doc or some best practices advice rather than try / error, if this is available somewhere. (It is currently unclear to me exactly when and how I am supposed to use ApplicationLifecycleManager, if this is indeed the recommended way, or when exactly it is permitted to call Quarkus#asyncExit().)
The reason for this question is that the free hosting plan of Render sends SIGTERM to applications after 15 minutes of inactivity, and Render will consider it an error if the application exits with a non-zero code and will log it accordingly. I’d like to distinguish error conditions from usual, expected activity.

Related

How to terminate long running function after a timeout

So I am attempting to shut down a long-running function if something takes too long. Maybe this is just treating the symptoms rather than the cause, but in any case, for my situation it didn't really work out.
I did it like this:
func foo(abort <-chan struct{}) {
    for {
        select {
        case <-abort:
            return
        default:
            // long running code
        }
    }
}
And in a separate function I close the passed channel after some time, which does happen: if I cut out the body of the default branch, the function returns. However, if there is some long-running code there, closing the channel does not affect the outcome; the function simply continues the work as if nothing had happened.
I am pretty new to Go, and it feels like this should work, but it does not. Is there anything I am missing? After all, router frameworks have timeout functions, after which whatever is running is terminated. So maybe this is just out of curiosity, but I would really like to know how to do it.
Your code only checks whether the channel was closed once per iteration, before executing the long-running code. There's no opportunity to check the abort channel after the long-running code starts, so it will run to completion.
You need to occasionally check whether to exit early within the body of the long-running code, and this is more idiomatically accomplished using context.Context and WithTimeout; see, for example: https://pkg.go.dev/context#example-WithTimeout
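For illustration, here is a minimal sketch of that approach (the function name and the chunk of work are placeholders, not from the original post): the context carries the deadline, and the worker checks ctx.Done() between chunks.

package main

import (
    "context"
    "fmt"
    "time"
)

// foo checks ctx.Done() between chunks of the long-running work, so a
// timeout or an explicit cancel is noticed at the next check.
func foo(ctx context.Context) error {
    for i := 0; ; i++ {
        select {
        case <-ctx.Done():
            return ctx.Err() // context.DeadlineExceeded or context.Canceled
        default:
        }
        time.Sleep(100 * time.Millisecond) // stands in for one chunk of real work
        fmt.Println("finished chunk", i)
    }
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
    defer cancel()
    fmt.Println(foo(ctx)) // prints "context deadline exceeded" after about a second
}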
In your "long running code" you have to periodically check that abort channel.
The usual approach to implement that "periodically" is to split the code into chunks each of which completes in a reasonably short time frame (given that the system the process runs on is not overloaded).
After executing each such chunk you check whether the termination condition holds and then terminate execution if it is.
The idiomatic approach to perform such a check is "select with default":
select {
case <-channel:
    // terminate processing
default:
}
Here, the default no-op branch is immediately taken if channel is not ready to be received from (or closed).
Some algorithms make such chunking easier because they employ a loop where each iteration takes roughly the same time to execute.
If your algorithm is not like this, you'd have to chunk it manually; in this case, it's best to create a separate function (or a method) for each chunk.
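As a rough sketch of that manual chunking (the chunk functions here are hypothetical placeholders), each step finishes quickly and the abort check happens between steps:

package main

import (
    "errors"
    "fmt"
)

// Hypothetical chunks of a long-running algorithm; each one returns quickly.
func loadInput()   { fmt.Println("load input") }
func transform()   { fmt.Println("transform") }
func writeOutput() { fmt.Println("write output") }

// process runs the chunks in order, checking the abort channel between them.
func process(abort <-chan struct{}) error {
    for _, step := range []func(){loadInput, transform, writeOutput} {
        select {
        case <-abort:
            return errors.New("processing aborted")
        default:
        }
        step()
    }
    return nil
}

func main() {
    abort := make(chan struct{})
    close(abort)                // simulate an abort signalled before processing starts
    fmt.Println(process(abort)) // prints "processing aborted"
}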
Further points.
Consider using contexts: they provide a useful framework to solve the style of problems like the one you're solving.
Better still, the fact that they can "inherit" from one another allows one to easily implement two neat things:
You can combine various ways to cancel contexts: say, it's possible to create a context which is cancelled either when some timeout passes or explicitly by some other code.
They make it possible to create "cancellation trees" — when cancelling the root context propagates this signal to all the inheriting contexts — making them cancel what other goroutines are doing.
Sometimes, when people say "long-running code", they do not mean code actually crunching numbers on a CPU all that time, but rather code which performs requests to slow entities (such as databases, HTTP servers, and so on), in which case the code is not actually running but sleeping on I/O, waiting for data to be delivered for processing.
If this is your case, note that all well-written Go packages (this of course includes all the packages of the Go standard library which deal with networked services) accept contexts in those API functions which actually make calls to such slow entities. This means that if you make your function accept a context, you can (and actually should) pass this context down the call stack where applicable, so that all the code you call can be cancelled in the same way as yours.
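For that I/O-bound case, a small sketch (the URL is only a placeholder): the context that bounds your function is handed to the HTTP client, so hitting the deadline or cancelling the context also abandons the in-flight request.

package main

import (
    "context"
    "fmt"
    "net/http"
    "time"
)

// fetch passes its context down to the HTTP request, so cancelling the
// context (or hitting its deadline) cancels the request as well.
func fetch(ctx context.Context, url string) error {
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
        return err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err // a timeout surfaces here as a context deadline error
    }
    defer resp.Body.Close()
    fmt.Println("status:", resp.Status)
    return nil
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()
    if err := fetch(ctx, "https://example.com"); err != nil {
        fmt.Println("request failed:", err)
    }
}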
Further reading:
https://go.dev/blog/pipelines
https://blog.golang.org/advanced-go-concurrency-patterns

Use Cases for LRA

I am attempting to accomplish something along these lines with Quarkus and Narayana:
client calls service to start a process that takes a while: /lra/start
This call sets off an LRA, and returns an LRA id used to track the status of the action
client can keep polling some endpoint to determine status
service eventually finishes and marks the action done through the coordinator
client sees that the action has completed, is given the result or makes another request to get that result
Is this a valid use case? Am I visualizing the correct way this tool can work? Based on how the linked guide reads, it seems that the endpoints are more of a passthrough to the coordinator, notifying it that we start and end an LRA. Is there a more programmatic way to interact with the coordinator?
Yes, it might be a valid use case, but in every case please read the MicroProfile LRA specification - https://github.com/eclipse/microprofile-lra.
The idea you describe is more or less one LRA participant executing in a new LRA and polling the status of this execution. This is not totally what the LRA is intended for, but surely can be used this way.
The main idea of LRA is the composition of distributed transactions based on the saga pattern. Basically, the point is to coordinate multiple services to achieve consistent results with an eventual consistency guarantee. So you see that the main benefit arises when you can propagate LRA through different services that either all complete their actions or all of their compensation callbacks will be called in case of failures (and, of course, only for the services that executed their actions in the first place). Here is also an example with the LRA propagation https://github.com/xstefank/quarkus-lra-trip-example.
EDIT: Sorry, I forgot to add the programmatic API that allows the same interactions as the annotations - https://github.com/jbosstm/narayana/blob/master/rts/lra/client/src/main/java/io/narayana/lra/client/NarayanaLRAClient.java. However, note that it is not in the specification and is specific to Narayana.

Co-ordinating processes in a microservices world

I was reading the accepted answer on this SO post: Orchestrating microservices, and my question is: how does one monitor a process using the choreographed approach? The author of the quoted book writes:
One approach I like for dealing with this is to build a monitoring system that explicitly matches the view of the business process in [the workflow], but then tracks what each of the services does as independent entities, letting you see odd exceptions mapped onto the more explicit process flow.
What I would like to know is, how exactly does this monitoring system work? I have tried to research this but wasn't able to find anywhere that properly describes what I am trying to understand.
The way I am thinking of this is that we store some kind of representation of the process, like "here is the work that needs to be done", and then, as that work is done, have each service update it accordingly. We can then have something like a cron job that monitors this and sends another message if it has not been completed. After trying five times, say, and it has still not been done, we can deduce that the process has failed and reply accordingly to the caller. Is this an accurate interpretation of what the author is alluding to?

Windows: Child Process with Redirected Input and Output

I'm trying to create a Child Process with Redirected Input and Output (as described here - http://msdn.microsoft.com/en-us/library/ms682499(VS.85).aspx).
For the people that don't want to bother reading the source code on that page, the author is using anonymous pipes to redirect the child's input and output. The parent process writes to the child process's input and reads from the child process's output.
In that code however, the program is closing the pipes after reading and writing (in WriteToPipe and ReadFromPipe), so actually the program just reads a file, dumps it on the child process input stream and then reads the child process response.
Now, what I'm looking for is a code where we will not close the pipes, but we will continuously post requests and read the child process response (in contrast to making just 1 request).
I've tried several modifications to the source code given at the link posted above, but no matter what I try, the program always hangs when calling ReadFile() in the ReadFromPipe() function (it probably waits for the child to quit - but as I said, I'd like to get the child's response and then send other requests to it).
Any ideas on how I can get over this?
Update:
Can anyone at least tell me whether using the .NET Process class with RedirectStandardInput and RedirectStandardOutput is a good option?
Had exactly the same problem, and solved it by using PeekNamedPipe (which, according to MSDN, is also fine for anonymous read pipes) to check for available data before each call to ReadFile. That removed the blocking issues I was getting, and allowed my GetExitCodeProcess() call to see that the process had exited and to clean up the pipes.
Yes - the .Net Process class redirects the standard input / output of the child process with anonymous pipes in a very similar way to the linked sample if you use RedirectStandardInput and RedirectStandardOutput, so this is probably a fairly good option.
As for why ReadFile is hanging - it sounds like this function is waiting for the child process to either send some data back, or to close the pipe. If you want to continuously post requests to the child process then you need to either:
Know exactly when it is appropriate to read so that you are not left waiting / blocked for the child process (so for example you only read immediately after a request has been sent). This strategy is very risky as there is always a chance that the child process doesn't behave as expected and you are left waiting on the child process indefinitely.
Have a dedicated thread for reading - if you have a dedicated thread for reading then it doesn't matter that the thread may wait indefinitely for the child process as your other threads are still able to send requests and operate as normal. This method is more complex than the first option, however when done properly is far more robust. The only other drawback to this approach is that it requires you have an additional read thread for each child process, meaning that it doesn't scale very well if you need to communicate with a large number of child processes.
Use asynchronous IO - It is possible to call the ReadFile function in a way such that it always returns immediately, and then notifies you when a read has completed (I'm a little fuzzy on the exact details of how this works as I'm more used to C#). This is known as asynchronous IO and is the most versatile of these 3 methods, as it allows you to communicate with many child processes without needing a dedicated thread for each one. The tradeoff, however, is that it is also the most complex to do correctly (at least in my opinion).

How can I call a long-running external program from Excel / VBA?

What is the best way to run an external program from Excel? It might run for several minutes. What's the best practice for how to do this? Ideally:
A modal dialog box that lets the user know that the process is executing.
If the executable fails, the user should receive a notification.
A timeout should be enforced.
A cancel button should be on the dialog box.
But any best-practices are welcome. I'm interested in solutions with calling either a .dll or an .exe. Preferably something that works with Excel '03 or earlier, but I'd love to hear a reason to move to a later version as well.
You should check out these two Microsoft KB articles
How to launch a Win32 Application from Visual Basic
and
How To Use a 32-Bit Application to Determine When a Shelled Process Ends
They both quickly give you the framework to launch a process and then check on its completion. Each of the KB articles have some additional references that may be relevant.
The latter knowledgebase article assumes that you want to wait for an infinite amount of time for your shell process to end.
You can modify the ret& = WaitForSingleObject(proc.hProcess, INFINITE) function call to return after some finite amount of time in milliseconds--replace INFINITE with a positive value representing milliseconds and wrap the whole thing in a Do While loop. The return value tells you if the process completed or the timer ran out. The return value will be zero if the process ended.
If the return value is non-zero then the process is still running, but control is given back to your application. During this time while you have positive control of your application, you can determine whether to update some sort of UI status, check on cancellation, etc. Or you can loop around again and wait some more.
There are even additional options if the program you are shelling to is something that you wrote. You could hook into one of its windows and have the program post messages that you can attach to and use as status feedback. This is probably best left for a separate item if you need to consider it.
You can use the process structure to get a return value from the called process. Your process does need to return a value for this to be useful.
My general approach to this kind of need is to:
Give the user a non-modal status dialog at the start of the process with a cancel button which, when clicked, will set a flag to be checked later. Providing the user with any real progress status is most likely impossible, so giving them an elapsed time or one of those animated GIFs might be helpful in managing expectations.
Start the process
Wait for the process to complete, allowing cancellation check every 500ms
If the process is complete close the dialog and continue along.
If the process is not complete, see if the user hit cancel and if so send a close message to the process' window. This may not work and terminating the process might be required--careful if you do this. Your best bet may be to abandon the process if it won't close properly. You can also check for a timeout at this point and try to take the same path as if the user hit cancel.
Hope this helps,
Bill.

Resources