Managing the lifetime of a process I don't control - winapi

I'm using Chromium Embedded Framework 3 (via CEFGlue) to host a browser in a third-party process via a plugin. CEF spins up various external processes (e.g. the renderer process) and manages the lifetime of these.
When the third-party process exits cleanly, CefRuntime.Shutdown is called and all the processes exit cleanly. When the third-party process exits badly (for example, it crashes), I'm left with CEF executables still running, and this (sometimes) causes problems with the host application, meaning it doesn't start again.
I'd like a way to ensure that, however the host application exits, CefRuntime.Shutdown is called and the user doesn't end up with spurious processes running.
I've been pointed in the direction of job objects (see here), but this seems like it might be difficult to ship in a real solution, as on some versions of Windows it requires administrative rights.
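For reference, the job-object idea boils down to creating a job with the kill-on-close limit and assigning each child process to it; when the last handle to the job goes away (because the host died, cleanly or not), the kernel terminates everything in the job. A minimal sketch, assuming you can obtain a handle to each CEF child process:

#include <windows.h>

// Create a job object whose processes are all killed when the last
// handle to the job is closed -- i.e. when this process exits for any
// reason, including a crash.
HANDLE CreateKillOnCloseJob()
{
    HANDLE hJob = CreateJobObject(NULL, NULL);
    if (hJob == NULL)
        return NULL;

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = {};
    info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE;
    if (!SetInformationJobObject(hJob, JobObjectExtendedLimitInformation,
                                 &info, sizeof(info))) {
        CloseHandle(hJob);
        return NULL;
    }
    return hJob;
}

// Usage (hypothetical): after CEF launches a child process,
//   AssignProcessToJobObject(hJob, hChildProcess);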
I could also set CEF to run in single process mode, but the documentation specifies that this is really for "debugging" only, so I'm assuming shipping this in production code is bad for some reason (see here).
What other options do I have?
Following on from the comments, I've tried passing the PID of the host process through to the client (I can do this by overriding OnBeforeChildProcessLaunch). I've then created a simple watchdog with the following code:
ThreadPool.QueueUserWorkItem(_ => {
    // Poll the host process, logging every 5 seconds until it exits.
    var process = Process.GetProcessById(pid);
    while (!process.WaitForExit(5000)) {
        Console.WriteLine("Waiting for external process to die...");
    }
    // Host has exited (cleanly or not): take this process down too.
    Process.GetCurrentProcess().Kill();
});
I can verify in the debugger that this code executes and that the PID I'm passing into it is correct. However, if I terminate the host process, I find that the thread simply dies in a way that I can't control, and the lines following the while loop are never executed (even if I replace the Kill call with a Console.WriteLine, I never see any more messages printed from this thread).
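For comparison, here is what the same watchdog looks like expressed directly against the Win32 API (a sketch; it assumes the host PID has been handed to the child, as above). Waiting on the process handle avoids polling entirely:

#include <windows.h>

// Watchdog thread: block until the host process terminates (however it
// terminates), then take this process down with it.
DWORD WINAPI WatchdogThread(LPVOID param)
{
    DWORD hostPid = *static_cast<DWORD*>(param);

    // SYNCHRONIZE is the only access right needed to wait on a process.
    HANDLE hHost = OpenProcess(SYNCHRONIZE, FALSE, hostPid);
    if (hHost == NULL)
        return 1; // Host already gone, or the PID was wrong.

    WaitForSingleObject(hHost, INFINITE);
    CloseHandle(hHost);

    ExitProcess(0); // Host has exited; exit too.
}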

For posterity, the solution suggested by @IInspectable worked, but in order to make it work I had to switch the implementation of the external process to use the non-multi-threaded message loop.
// Run the CEF message loop on the UI thread instead of a CEF-managed thread.
settings.MultiThreadedMessageLoop = false;
CefRuntime.Initialize(mainArgs, settings, cefWebApp, IntPtr.Zero);
Application.Idle += (sender, e) => {
    // Check the parent on every idle pass; bail out if it has died.
    if (parentProcess.HasExited) Process.GetCurrentProcess().Kill();
    // Pump CEF's work manually, since MultiThreadedMessageLoop is off.
    CefRuntime.DoMessageLoopWork();
};
Application.Run();

Related

Automatically re-spawn a ruby script from within node.js when it fails

I have a Node.js app which, when started, spawns a Ruby script to connect to a streaming data service and captures the output via STDOUT, which is then served to the client via websocket.
Every now and again the Ruby script will fail (normally due to a disconnect from the far end), and while the Node script will carry on running, it's obviously not aware that the spawned Ruby script has died.
Is there any way I can automate recovery of the spawned Ruby script from within Node or Ruby where I don't have to restart the entire Node instance (thus not booting the clients off) and the script will re-spawn attached to the correct instance of Node?
The script is spawned using the following;
var cp = require('child_process');
var tail = cp.spawn('/var/www/html/mapper/test/feed1-db.rb');
tail.stdout.on('data', function(chunk) {
    // <more stuff here where data is split and emitted from the socket>
});
I've finally had more time to look into this and have decided that it's probably a very bad idea to automatically re-spawn failed scripts! (More on that later.)
I have found that I can catch both errors and exits of the child process by using the following:
tail.on('exit', function (code) {
console.log('child process exited with code ' + code);
});
Which will give me the exit code of the child script.
I also found out that I can catch any other errors using:
tail.stderr.on('data', (data) => {
console.error(`child stderr:\n${data}`);
});
Both of these output their errors to the console, meaning you can still back-trace any issues. I've also expanded the error-detection code to output a failure notice to connected clients on the web socket.
Now on to why I decided that auto re-spawning the script was a bad idea...
Up to now, most of my underlying issues were caused upstream, where I might get some invalid data which would choke my script (I know I should handle that elsewhere, but I'm kinda new to this!), or by fat-fingered problems caused by me!
Without a lot of extra work, if the script died due to some invalid data from upstream, it would simply reconnect and try to consume the same bad data over and over again, until the script got blocked for continuously connecting to and disconnecting from the messaging server.
If it was something caused by a fat-fingered moment, like a bad variable name in a code path that isn't often called, then I'd have the same problem as above, but it could end up bringing down the local server running this script rather than the messaging server. Either way, neither of those outcomes is a good way to go!
Unless you are catching very specific exit codes or failures which you know are not 'damaging', I wouldn't go down this route. The two code blocks above at least allow me to catch the exit/error and notify someone so they can intervene and see what triggered it. It also means my online users are made aware of a background failure, where they might otherwise see data that appears valid but is actually not updating.
Hopefully this insight helps someone else.

In Windows 7, how to send a Ctrl-C or Ctrl-Break to a separate process

Our group has long running processes which run daily. The processes are typically started at 9pm on any given day and run until 7pm the next day. Thus they typically run 22hrs/day. They are started by scheduled tasks on servers under a particular generic user ID, and they start and run regardless of whether or not that user ID is logged on. Thus, they are windowless console executables.
The tasks orchestrate computations running on a large server farm. Generally these controlling tasks run uninterrupted for the full 22hrs/day. However, we often have a need to stop and restart these processes. Because they control a multitude of tasks running on our server farm, it is important that they be shut down cleanly, so that they can stop and shut down all the server farm processes. Which brings me to our problem.
The controlling process has been programmed to respond to ctrl-C and ctrl-break signals. This works fine when the process is manually started in a console where we have access to the console and can "type" ctrl-c or ctrl-break in the console window. However, as mentioned, the processes typically run as windowless scheduled tasks, so we cannot "type" anything into a non-existent console window. Because they are console processes that execute without a logon session, they also must be able to execute in a completely windowless environment. So, how do we set up the process to listen for a shut-down signal?
While the process does indeed listen for a ctrl-C and ctrl-break signal, I can see no way to send that signal to a process. This seems to be a fundamental problem in Windows, or am I wrong? I am aware of SendSignal.exe, but so far have been unable to get it to work. It fails as follows:
>SendSignal 26320
Sending signal to process 26320...
CreateRemoteThread failed with 0x00000005.
StartRemoteThread failed with 0x00000005.
0x00000005 == Access is denied.
Trying "taskkill" without -F results in:
>taskkill /PID 24840
ERROR: The process with PID 24840 could not be terminated.
Reason: This process can only be terminated forcefully (with /F option).
All other "kill" functions kill the process immediately rather than sending a signal.
One possible solution would be file-watch based: create a watch for some modification of a specific file. But this is a hack, and we would prefer to do it with appropriate signaling. Has anyone solved this issue? It seems to be such basic functionality, and it is certainly trivial to do in a Unix environment. Surely Microsoft has provided SOME mechanism to allow clean shut-down of a windowless executable?
I am aware of the thread below, whose question is virtually identical (save for the specification of why the answer is necessary, i.e. why one needs to be able to do this for a windowless, console-less process), but there is no answer there except for "use SendSignal", which, as I said, does not work for us:
Can I send a ctrl-C (SIGINT) to an application on Windows?
There are other similar questions, but no answers as yet.
Any help appreciated.
[Upgrading @Anon's comment to an answer for visibility]
windows-kill worked perfectly and resolved the access-denial issues we faced with SendSignal. Of course, a privileged user has to run it.
windows-kill also supports both ctrl-c and ctrl-break signals.
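For the curious, one well-known way to deliver Ctrl-C to another console process from native code is to attach to the target's console and raise the event there. A sketch, with error handling mostly omitted (the target must own a console, even an invisible one):

#include <windows.h>

// Deliver Ctrl-C to every process attached to targetPid's console.
bool SendCtrlC(DWORD targetPid)
{
    FreeConsole();                     // Detach from our own console, if any.
    if (!AttachConsole(targetPid))     // Attach to the target's console.
        return false;

    SetConsoleCtrlHandler(NULL, TRUE); // Ignore the event in this process.

    // Process group 0 == all processes sharing the attached console.
    bool ok = GenerateConsoleCtrlEvent(CTRL_C_EVENT, 0) != 0;

    FreeConsole();
    return ok;
}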

Start and monitor multiple instances of one process in Windows

I have a Windows application of which I need multiple instances running, with different command line parameters. The application is quite unstable and tends to crash every 48 hours or so.
Since manually checking for failures and restarting after one isn't something I love doing, I want to write a "manager program" for this. It would launch the program (all of its instances) and then watch them. In case a process crashes, it would be restarted.
In Linux I could achieve this with fork()s and pids, but this obviously is not available in Windows. So, should I try to implement a CreateProcess version or is there a better way?
When you call CreateProcess, you are returned a handle to the new process in the hProcess member of the process information struct that you pass to CreateProcess. You can use this handle to detect when the process terminates.
For instance, you can create another thread and call WaitForSingleObject(hProcess) and block until the process terminates. Then you can decide whether or not to restart it.
Or you could call GetExitCodeProcess(hProcess, &exitcode) and test exitcode. If it has the value STILL_ACTIVE then your process has not terminated. This approach, based on GetExitCodeProcess, necessitates polling.
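A minimal sketch of the wait-and-restart approach described above (names are illustrative):

#include <windows.h>

// Launch cmdLine, wait for it to terminate, and relaunch it. CreateProcess
// may modify the command-line buffer, so it must be writable.
void RunAndRestart(wchar_t* cmdLine)
{
    for (;;) {
        STARTUPINFOW si = { sizeof(si) };
        PROCESS_INFORMATION pi = {};

        if (!CreateProcessW(NULL, cmdLine, NULL, NULL, FALSE,
                            0, NULL, NULL, &si, &pi))
            return; // Launch failed; log and give up (or retry with backoff).

        CloseHandle(pi.hThread);

        // Block until the process terminates -- no polling needed.
        WaitForSingleObject(pi.hProcess, INFINITE);

        DWORD exitCode = 0;
        GetExitCodeProcess(pi.hProcess, &exitCode);
        CloseHandle(pi.hProcess);

        // Inspect exitCode here to decide whether a restart is warranted;
        // this sketch restarts unconditionally.
    }
}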
If it can be run as a daemon, the simplest way to ensure it keeps running is the Non-Sucking Service Manager.
It allows applications not designed as services to run as Win32 services, and it will monitor them and restart them if necessary. The source code is included, if any customization is needed.
All you need to do is define each of your instances as a service, with the required parameters, and it will do the rest.
If you have some kind of security policy limitation and can't use third-party tools, then coding will be necessary. The answer from David Heffernan gives you the appropriate direction.
Or it can be done in batch, VBS or JS without needing anything beyond what ships with the system. The WMI Win32_Process class should allow you to handle it.

How to keep a Win64 C++ console app running like a service as it controls other executables?

I have a series of Win64 console apps, one is the "master" and 16 others are the "slaves".
The original version of this logic had only one executable. When launched with the command line parameter "init", it would initialize a very large data set (3 GB) and then sit on the Windows message pump, waiting for messages requesting analysis of that large data set. When analysis of the data set is needed, that same executable is launched with analysis-request parameters; the newly launched executable finds the window handle of the already-initialized instance of itself and sends the analysis request and parameters to that instance via the WM_COPYDATA Windows message. This works like a charm for a single-executable architecture.
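(For illustration, the hand-off described here typically looks something like the following on the sending side; the window class name is hypothetical.)

#include <windows.h>
#include <cwchar>

// Forward an analysis request to the already-initialized instance.
bool ForwardRequest(const wchar_t* params)
{
    // Locate the initialized instance by its (assumed) window class.
    HWND hExisting = FindWindowW(L"AnalysisHostClass", NULL);
    if (hExisting == NULL)
        return false; // No initialized instance is running.

    COPYDATASTRUCT cds = {};
    cds.dwData = 1; // Application-defined request id.
    cds.cbData = (DWORD)((wcslen(params) + 1) * sizeof(wchar_t));
    cds.lpData = (void*)params;

    // SendMessage blocks until the receiver's WndProc handles WM_COPYDATA.
    SendMessageW(hExisting, WM_COPYDATA, 0, (LPARAM)&cds);
    return true;
}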
However, now I have a more powerful system, and I want to run multiple analysis executables at once, each on a different core. So I made a new architecture where there are 16 analysis executables that act as "slaves" to a Manager console application which acts as "master". (FYI, one analysis request can take anywhere from 0.5 to 4.0 seconds - hence my desire to run multiples at once.)
Still using Windows messages as my communication mechanism, I found that SendMessage() blocks the caller until the message is handled by the receiver. That's no good, because I want these messages to be handled concurrently. So I tried the asynchronous Windows message functions, such as SendMessageCallback() and SendNotifyMessage(). They either failed or executed in a blocking manner, not concurrently.
More research led me to "Named Shared Memory" (essentially a memory-mapped file) as a means of communicating between my executables. So I set that up: my master now creates the Named Shared Memory block, all my executables request views of that same memory, and via state-machine handling of the data in the Named Shared Memory I have synchronization between the master and the slaves.
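(A sketch of that setup, with a hypothetical name and layout:)

#include <windows.h>

struct SlaveState { int command; int status; }; // assumed layout

// Master side: create the named mapping, backed by the page file.
SlaveState* CreateSharedState(HANDLE* outMapping)
{
    HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, NULL,
                                     PAGE_READWRITE, 0,
                                     sizeof(SlaveState) * 16,
                                     L"Local\\AnalysisSharedState");
    if (hMap == NULL)
        return NULL;

    *outMapping = hMap;
    return (SlaveState*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
}

// Slave side: open the same mapping by name.
//   HANDLE hMap = OpenFileMappingW(FILE_MAP_ALL_ACCESS, FALSE,
//                                  L"Local\\AnalysisSharedState");
//   SlaveState* states = (SlaveState*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);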
HOWEVER, I am finding that the master and the slaves do not appear to be running continuously.
I'm still using the basic idea that the master is launched via a command line holding analysis parameters; that executable sends a message to the already-running version of itself with the Named Shared Memory setup plus the current state of all the slaves, then an available slave is selected and its state machine, resident in the Named Shared Memory, receives the analysis request.
This portion is working fine. However, the slave(s) appear to be in a sleep or other dormant state, because my modified Windows Message Loop does not appear to be looping.
Here's what my current Windows Message handling loop looks like:
while (1) {
    int status = ::GetMessage(&msg, 0, 0, 0);
    if (status != 0) {
        if (status == -1) return -1; // GetMessage failed
        ::DispatchMessage(&msg);
    }
    else if (status == 0)
        break; // WM_QUIT received
    HandleSharedMemoryDeliveredTasks(); // NOTE: checking Named Shared Memory
}
A breakpoint placed inside this loop is not hit unless a Windows message is received.
So, I'm wondering how to keep my loop alive so the checking of Named Shared Memory continues without having to send a (blocking) message.
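One well-known pattern for exactly this situation (sketched below; not from the original code) is to replace the blocking GetMessage call with MsgWaitForMultipleObjects, waiting on a kernel event and the message queue at the same time. The master would signal the (hypothetical) named event after writing to a slave's state machine, and the slave wakes immediately:

#include <windows.h>

void HandleSharedMemoryDeliveredTasks(); // as in the loop above

// hWorkEvent: e.g. CreateEventW(NULL, FALSE, FALSE, L"Local\\SlaveWorkEvent"),
// signaled by the master via SetEvent after it updates shared memory.
int MessageAndWorkLoop(HANDLE hWorkEvent)
{
    MSG msg;
    for (;;) {
        DWORD wait = MsgWaitForMultipleObjects(1, &hWorkEvent, FALSE,
                                               INFINITE, QS_ALLINPUT);
        if (wait == WAIT_OBJECT_0) {
            // The master signaled work: check Named Shared Memory.
            HandleSharedMemoryDeliveredTasks();
        }
        else if (wait == WAIT_OBJECT_0 + 1) {
            // One or more messages arrived: drain the queue.
            while (PeekMessage(&msg, 0, 0, 0, PM_REMOVE)) {
                if (msg.message == WM_QUIT)
                    return (int)msg.wParam;
                ::DispatchMessage(&msg);
            }
        }
        else {
            return -1; // WAIT_FAILED
        }
    }
}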
I'm aware that I'm operating in a problem space where I should convert my master and slave executables into Windows Services. However, I am very, very close to getting this working (it seems), and rewriting to a Windows Service is an area in which I have no experience.
Also, I am using this in my executables to keep them active (but it does not appear to be helping):
// during program init:
SetThreadExecutionState(ES_CONTINUOUS | ES_SYSTEM_REQUIRED | ES_AWAYMODE_REQUIRED);
Any suggestions how I can "wake up" a slave asynchronously when the master sets their state machine for work, and likewise have the slave "wake up" the master when the work is completed?

Windows Service exits when calling a child process using _execv()

I have a C++ Windows application that was designed to be a Windows service. It executes an updater periodically to see if there's a new version. To execute the updater, _execv() is used. The updater looks for new versions, downloads them, stops the Windows service (all of these actions are logged), replaces the files, and starts the service again. Doing that in CLI mode (not going into service mode) works fine. According to my log files, when run as a service the child process is launched, but the parent process (the Windows service) exits.
Is it even "allowed" to launch child processes from a Windows service, and why does the service exit unexpectedly? My log files show no error (I am even monitoring for segfaults etc., which would be written to the log).
Why are you using _execv() rather than doing it the Windows way and using CreateProcess()?
I assume you've put some debug into your service and you aren't getting past the point where you call _execv() in your service?
_execv replaces the existing process with a new one running the file you pass as the parameter. Under Unix (and similar) that's handled directly/natively. Windows, however, doesn't support that directly -- so it's emulated by having the parent process exit and arranging for the new process to be started as soon as it does.
IOW, it sounds like _execv is doing exactly what it's designed to -- but in this case, it's probably not what you really want. You can spawn a process from a service, but you generally want to use CreateProcessAsUser to create it under a specified account instead of the service account (which has a rather unusual set of rights assigned to it). The service process will then exit and restart when it's asked to by the service manager when your updater calls ControlService, CreateService, etc.
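A sketch of the CreateProcess-based alternative (the updater path is hypothetical): the service launches the updater and keeps running; the updater then stops and restarts the service itself, as described above.

#include <windows.h>

bool LaunchUpdater()
{
    // CreateProcess may modify the command-line buffer, so use a
    // writable array rather than a pointer to a string literal.
    wchar_t cmdLine[] = L"C:\\MyApp\\updater.exe"; // assumed path

    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};

    if (!CreateProcessW(NULL, cmdLine, NULL, NULL, FALSE,
                        0, NULL, NULL, &si, &pi))
        return false;

    // Don't wait: the service keeps running while the updater works.
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return true;
}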
