IContextMenu::InvokeCommand, break away from job? - winapi

I have a child process in a job that has JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE specified.
When I invoke IContextMenu::InvokeCommand, though, any processes that are started are automatically killed when my child process exits, because they are automatically included in the same job.
How can I prevent this from happening?

The solution I've found is to specify
JOB_OBJECT_LIMIT_BREAKAWAY_OK | JOB_OBJECT_LIMIT_SILENT_BREAKAWAY_OK
for the child process, to allow its children to automatically break away from the job.


Is it possible to make a console wait on another child process?

Usually when a program is run from the Windows console, the console will wait for the process to exit and then print the prompt and wait for user input. However, if the process starts a child process, the console will still only wait for the first process to exit. It will not wait for the child as well.
Is there a way for the program to get the console to wait on another child process instead of (or as well as) the current process?
I would assume it's impossible because presumably the console is waiting on the process' handle and there's no way to replace that handle. However, I'm struggling to find any confirmation of this.
Is there a way for the program to get the console to wait on another child process instead of (or as well as) the current process?
No. As you noted, as soon as the 1st process the console creates has exited, the console stops waiting. It has no concept of any child processes being created by that 1st process.
So, what you can do instead is either:
simply have the 1st process wait for any child process it creates before then exiting itself.
if that is not an option, then create a separate helper process that creates a Job Object and then starts the main process and assigns it to that job. Any child processes it creates will automatically be put into the same job as well [1]. The helper process can then wait for all processes in the job to exit before then exiting itself. Then, you can have the console run and wait on the helper process rather than the main process.
[1]: By default - a process spawner can choose to break a new child process out of the current job, if the job is set up to allow that.

Does killing the parent PID kill all child processes associated with it at the same time?

I have tried killing the parent PID, which terminates the parent process (and should also kill the child PIDs). The parent reports its termination back to my console within seconds, but the child processes take much longer to respond to the termination. Does anyone have any idea why this is happening?
Whenever the parent process gets killed, the child processes become orphans, and the init process becomes their parent. init is designed so that whenever a process gets killed, all of its children are adopted and taken care of by init until they finish.
It looks like the parent process did not catch any signals, while the child processes did.
Alternatively, the child processes have resources open and are attempting a graceful exit, making sure those resources are properly taken care of.
In this case you may need to rewrite the parent process to catch the signal, forward it to its children, and then wait() for them to finish, and exit.

Ensure orphaned processes are killed when the parent process dies

In Ruby, how do I ensure that child processes spawned from my program don't keep running when my main process exits or is killed?
Initially I thought I could just use at_exit in the main process, but that won't work if my main process gets kill -9ed or calls Kernel.exec. I need a solution that is (basically) foolproof, and cross-platform.
If you have to handle kill -9 termination for your parent app, then you have only a couple of choices that I can see:
1. Create a work queue manager and spawn/kill child processes from the work queue manager. If you can't guarantee that the work queue manager won't also be killed without warning, then option 2 is your only choice, I think, since the only thing you know for sure is that the child processes are still running.
http://www.celeryproject.org/
http://aws.amazon.com/elasticbeanstalk/ (a more aggressive approach - basically spawning off whole OS instances - but they'll definitely get killed off within your parameters for operation)
2. Have the child processes check a "heartbeat" from the parent process, either through RPC, by monitoring the parent PID in memory, or by watching the date/time on a keep-alive file in /tmp to make sure it's current.
If the child processes fail to see the parent process doing its job - responding to RPC messages, staying in memory itself, or keeping the file date/time current - the child processes must kill themselves.

Resque: Does dequeueing kill the process?

I'm implementing resque on this project where I need the feature of killing whatever gets enqueued to resque. So, I've seen that there is a dequeuing method, which will remove the jobs from the queue. But, if this job has already been started, and is currently running, does dequeuing kill the process?
Also important: If a job gets dequeued, do I get a handle where I can do something, or is an exception thrown?
As far as I know it doesn't kill the process; it just removes the job from the queue if it exists (check here).
But if you want to kill a running job, then you need to use one of the various signals that Resque provides.
Here is a list of them:
Resque workers respond to a few different signals:
QUIT - Wait for child to finish processing then exit
TERM / INT - Immediately kill child then exit
USR1 - Immediately kill child but don't exit
USR2 - Don't start to process any new jobs
CONT - Start to process new jobs again after a USR2
In your case it would be USR1.
Hope this helps.
The answer to this issue was actually using one of the many extensions for the resque gem, called resque-status. This handles worker instances, assigns a unique id to each of them (which I can use to identify them - the feature I needed the most), and provides me with a kill method to be called on a job, which guarantees that the job will process the kill signal the next time I call a certain method of their API (not exactly a kill with an exception raised, but it's better than nothing).

Why is there a timing problem when forking child processes?

When I took a look at the 'Launching Jobs' section of the reference on gnu.org, I didn't get this part.
The shell should also call setpgid to put each of its child processes into the new process group. This is because there is a potential timing problem: each child process must be put in the process group before it begins executing a new program, and the shell depends on having all the child processes in the group before it continues executing. If both the child processes and the shell call setpgid, this ensures that the right things happen no matter which process gets to it first.
There are two functions on the linked page, launch_job() and launch_process().
They both call setpgid in order to prevent the timing problem.
But I didn't get why there is such a problem.
I guess "new program" means the result of execvp (p->argv[0], p->argv); in launch_process(). And before execvp runs, setpgid (pid, pgid); is always executed, so the same call in launch_job() seems redundant.
So again, why is there such a problem? (Why do we have to call setpgid() in launch_job() as well?)
The problem is that the shell wants the process to be in the right process group. If the shell doesn't call setpgid() on its child process, there is a window of time during which the child process is not part of the process group, while the shell execution continues. (By calling setpgid() the shell can guarantee that the child process is part of the process group after that call).
There is another problem, which is that the child process may execute the new program (via exec) before its process group id has been properly set (i.e. before the parent calls setpgid()). That is why the child process should also call setpgid() (before calling exec()).
The description is admittedly pretty bad. There isn't just one problem being solved here; it's really two separate problems. One - the parent (i.e. the shell) wants to have the child process in the right process group. Two - the new program should begin execution only once its process has already been put into the right process group.
