How does HAWQ handle different kill signals?

If I send kill -QUIT to the master resource manager process, it restarts and then runs normally. But if I send kill -ABRT to the master resource manager process, all the processes on the HAWQ master restart. So how does HAWQ handle different kill signals?

There are multiple processes started by the postmaster:
1. Resource Manager
2. master logger process
3. stats collector process
4. writer process
5. checkpoint process
6. seqserver process
7. WAL Send Server process
8. DFS Metadata Cache Process.
Different processes handle these signals differently.
The master logger process is independent of the other processes: restarting another process does not affect it, and restarting it does not affect the others. It ignores SIGQUIT; on SIGABRT or SIGKILL it is restarted.
The stats collector process does not affect other processes, but it is affected by their restarts. On SIGQUIT, SIGABRT, or SIGKILL it restarts itself.
The resource manager process restarts itself on SIGQUIT; on SIGABRT or SIGKILL, all the other sub-processes are restarted as well.
The remaining five processes, on SIGQUIT, SIGABRT, or SIGKILL, restart themselves and all other sub-processes except the master logger process.

The HAWQ resource manager catches SIGQUIT by registering a handler function named quitResManager, so on that signal the process quits gracefully. For SIGABRT there is no custom handler, so the resource manager follows the signal's default behavior and generates a core dump.
Since the resource manager process is forked by the postmaster, the postmaster keeps watching its sub-processes: it restarts only the sub-process itself when that sub-process exits normally, but restarts all the sub-processes when it finds that a sub-process exited with an error.
That is why only the resource manager process is restarted after you send kill -QUIT, while all the processes on the HAWQ master are restarted after you send kill -ABRT.
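The distinction can be sketched in a few lines of Python (an illustration of the pattern, not HAWQ code; quitResManager is the real handler name, but the child script below is hypothetical): a process that catches SIGQUIT exits cleanly, while SIGABRT is left at its default action (abort, with a core dump where core limits allow).

```python
import os
import signal
import subprocess
import sys

# Child process: install a graceful SIGQUIT handler, leave SIGABRT alone.
child_src = """
import signal, sys, time

def quit_gracefully(signum, frame):
    # Clean up resources here, then exit normally; a supervising
    # parent (like the postmaster) would restart only this process.
    sys.exit(0)

signal.signal(signal.SIGQUIT, quit_gracefully)
print("ready", flush=True)
time.sleep(30)
"""

child = subprocess.Popen([sys.executable, "-c", child_src],
                         stdout=subprocess.PIPE, text=True)
child.stdout.readline()              # wait until the handler is installed
os.kill(child.pid, signal.SIGQUIT)   # like `kill -QUIT <pid>`
child.wait()
print("child exit code:", child.returncode)   # 0 = clean, graceful quit
```

Sending SIGABRT instead would terminate the child with a signal-indicating (abnormal) status, which is exactly the kind of exit that makes a supervisor such as the postmaster restart everything.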

I wonder: if the hawq resource manager process is killed by SIGKILL, will the parent postmaster process also exit?

#huan, the parent postmaster will not restart itself if any sub-process hits an error. But if the postmaster itself hits an error, it will not restart anything; all the sub-processes will be killed.

Related

Windows service cannot be killed

I've got a service that needs to be restarted, but all attempts to kill it fail.
I have tried everything I've found online and nothing has seemed to work.
The core issue seems to be that Services is holding onto the process and not allowing it to be killed:
ERROR: The process with PID 11204 (child process of PID 572) could not be terminated.
Reason: There is no running instance of the task.
This happens when I try to force-kill the task using taskkill /f /pid 11204 /t.
PID 572 is the Services process, so I cannot kill it without crashing Windows.
There is also an Interactive Services Detection dialog that activates, but it just leads to a blank screen I can't exit (since the process is dead); turning it off still doesn't let me kill the process.
I've found similar issues elsewhere, but none of them involve the program being a child of Services, where the parent can't be killed.
Is a system restart the ONLY option here? This is a production server, so restarting can happen only at scheduled downtime; I'm looking for other options.
Services should be controlled via the services APIs or the SC command-line tool. Try the sc stop command.
On a call to ControlService[Ex] with SERVICE_CONTROL_STOP, whether explicitly from your software or from the SC tool, the service's Handler[Ex] should receive SERVICE_CONTROL_STOP. At this point the service should:
1. Stop all the threads it started and free the resources it allocated (if this takes long, it should first call SetServiceStatus with SERVICE_STOP_PENDING)
2. Call SetServiceStatus with SERVICE_STOPPED to inform the system that it is no longer running
3. Return from Handler[Ex]
If the service was the only service in its process, StartServiceCtrlDispatcher is likely to return shortly, and at this point service process should exit. If there are other services in the process, StartServiceCtrlDispatcher will not return, and process should not exit, but the service being stopped is considered stopped anyway.

Laravel Forge: How to stop queue workers?

I have configured a queue worker as a daemon on Forge, then used the recommended deployment script command (php artisan queue:restart).
How do I manually stop and restart the queue worker? If I stop it, supervisor will just restart it. Do I need to kill the active worker in Forge first?
This may be required on an ad-hoc basis. For example, if I want to clear a log file that the queue has open.
I've been pretty vocal in deployment discussions, and I always tell people to stop their worker processes with the supervisorctl command.
supervisorctl stop <name of task>
Using the queue:restart command doesn't actually restart anything. It sets an entry in the cache which the worker processes check, and shutdown. As you noticed, supervisor will then restart the process.
This means that queue:restart has one huge problem (ignoring the naming, and the fact that it doesn't actually restart anything): it will cause all worker processes on all servers that use the same cache to restart. I think this is wrong; a deployment should only affect the server currently being deployed to.
If you're using a per-server cache, like the file cache driver, then this has another problem; what happens if your deployment entirely removes the website folder? The cache would change, the queues would start again, and the worker process may have a mix of old and new code. Fun things to debug...
Supervisor will signal the process when it is shutting down, and wait for it to shut down cleanly, and if it doesn't, forcefully kill it. These timeouts can be configured in the supervisor configuration file. This means that using supervisorctl to stop the queue process will not terminate any jobs "half-way through", they will all complete (assuming they run for a short enough time, or you increase the timeouts).
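For reference, a typical supervisor program entry looks something like this (the program name, paths, and timeout value here are placeholders, not Forge's actual generated config):

```ini
[program:laravel-worker]
command=php /home/forge/example.com/artisan queue:work --tries=3
autostart=true
autorestart=true
stopwaitsecs=60   ; give running jobs up to 60s to finish before SIGKILL
```

With an entry like that, supervisorctl stop laravel-worker stops the worker and keeps it stopped until you run supervisorctl start laravel-worker again.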

How do you signal a process (PID) to restart in the same shell where it is running, from another shell using bash?

I have one terminal open with a Java server running. I can easily get the process ID (PID) of this JAR process. From another terminal, I want to signal this process to stop and restart in that same terminal where it is currently running. Is this possible using bash? Is there any signal I can send to the PID to make it stop and start again?
From another terminal, I want to signal this process to stop and restart in that same terminal where it is currently running. Is this possible using bash?
You can send signals to a process via the kill command. You may need to either be a privileged user or have the same UID as the process you're signaling.
But read on ...
Is there any signal I can send to the PID to make it stop and start again?
If you mean "stop" as in terminate and "[re]start" as in run from the beginning as if a new process, then there is no signal for which that is the default handling. A process could provide such behavior in response to a signal of its choice, but you cannot evoke it from an arbitrary process.
On the other hand, if you want to "stop" in the sense of temporarily suspending operations, and to "start" in the sense of resuming from such a stop, then there are SIGSTOP and SIGCONT, for which the default handling should suffice. Be aware, however, that stopping a process in this sense probably will not take it out of the foreground (if it is in the foreground), and that processes can block, ignore, or provide their own handling for those signals.
You can send signals with kill.
kill -s <SIGNAL> <PID>
will send the signal <SIGNAL> to the process with pid <PID>.
To stop and continue, you can use SIGSTOP and SIGCONT, which act basically like hitting Ctrl-Z to pause a program on the terminal.
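A minimal POSIX sketch of the stop/continue cycle (using Python's os and signal modules here rather than bash, with a sleeping child standing in for the Java server):

```python
import os
import signal
import time

# Fork a child that just sleeps, standing in for a long-running server.
pid = os.fork()
if pid == 0:
    time.sleep(30)
    os._exit(0)

os.kill(pid, signal.SIGSTOP)                 # pause it, like Ctrl-Z
_, status = os.waitpid(pid, os.WUNTRACED)    # reap the "stopped" report
print("stopped:", os.WIFSTOPPED(status))

os.kill(pid, signal.SIGCONT)                 # resume it
os.kill(pid, signal.SIGTERM)                 # and finally terminate it
_, status = os.waitpid(pid, 0)
print("terminated by signal:", os.WIFSIGNALED(status))
```

Note that SIGSTOP and SIGKILL cannot be caught, blocked, or ignored, which is why they work on any process regardless of its own signal handling.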

Tuxedo tmshutdown stops server but process still exists

I've got a problem with the Tuxedo tmshutdown command. One of the server processes keeps running (with huge CPU usage) even though tmshutdown stops it successfully. There is also one open IPC shared-memory segment, which I can only remove after killing the surviving process. There are other servers, but only this one is problematic. Is it possible that the problem is in the code (tpsvrdone is exiting without errors)?
tmshutdown normally sends a SIGTERM signal to Tuxedo servers unless you use -k KILL (which sends SIGKILL).
If the source code of the Tuxedo server installs a handler for that signal, you could get the behavior you described.
http://www.thegeekstuff.com/2012/03/catch-signals-sample-c-code/
Also, if it is not possible to shut down a server or remove a service advertisement, a diagnostic is written to the ULOG.
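To illustrate how a custom handler can mask SIGTERM and leave the process alive (plain Python as a stand-in, not Tuxedo code):

```python
import os
import signal

caught = []

def on_term(signum, frame):
    # A handler that merely records the signal and returns replaces the
    # default "terminate" action, so the process survives SIGTERM.
    caught.append(signum)

signal.signal(signal.SIGTERM, on_term)
os.kill(os.getpid(), signal.SIGTERM)   # what tmshutdown would deliver
print("still alive, caught SIGTERM:", bool(caught))
```

A server stuck in a busy loop with such a handler would match the symptoms described: tmshutdown reports success, but the process lingers and keeps its IPC resources.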

Can I handle the killing of my windows process through the Task Manager?

I have a Windows C++ application (app.exe). When the app is closed, I need to perform some cleanup tasks specific to my application. What happens when this process (app.exe) is killed through the Task Manager? Assuming that the application is still responsive, can I somehow handle this situation in my app.exe?
I am looking for something similar to how kill <pid> in Linux will send the SIGTERM signal to the process indicated by pid. I could then register my own signal handler for SIGTERM and perform the cleanup.
There are two ways to kill an application in Task Manager.
Killing through the Applications tab is roughly the equivalent of SIGTERM. The application may intercept it and do more processing, since this basically sends a "close window" message; the message to catch is WM_CLOSE.
Killing through the Processes tab is roughly the equivalent of SIGKILL. There is nothing you can do to intercept that, short of monitoring the user's actions in Task Manager's list box and End Process button, or having a watchdog process that notices when the first one is killed.
Alternatively, design the application in a way that does not require cleanup, or in a way that it will perform cleanup at startup.
I think you will need another process that monitors the PID of your app.exe and does the necessary work when it dies.
That depends: if the user chooses to "End Task" your application, you will be notified and you can handle it (see this).
But if the user chooses to end the process, you have no way to handle it in your application. The easiest way would be a second process, or you can inject into Task Manager and hook the TerminateProcess API.
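A portable sketch of that watchdog idea (Python instead of C++; the spawned command below is just a placeholder for app.exe):

```python
import subprocess
import sys

# Watchdog: launch the app and block until it exits for ANY reason
# (clean exit, crash, or a forceful kill from Task Manager), then run
# the cleanup the app itself may never have gotten to do.
app = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])
app.wait()   # returns even if the process is killed externally
print("app exited with code", app.returncode, "- running cleanup now")
```

Because the watchdog only observes the child's exit, it works even when the child is terminated without any chance to run its own handlers.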
