How is signal transmitted when pressing ctrl+c - linux-kernel

As I understand it, each process running in bash is a child process of that bash.
For example, if I run an infinite loop in bash, the OS will fork bash and create a new child process to run that loop.
Then if I press ctrl+c, the child process would be killed.
Now I am confused about who the sender of the signal is, since I think the parent bash process is waiting now.
Is it the child process itself or the kernel? As far as I know, keyboard activity causes hardware interrupts, which can only be handled by the kernel. Or does the child process switch to kernel mode when the key is pressed?

Your understanding is incorrect.
It's not "the OS forking bash"; it is bash forking itself, and in the trivial case you had in mind there is no fork in the first place.
Either way, the tty driver "sends" the signal to the foreground process group (here: bash), and the receiving process can catch it and act on it, e.g. break the loop.

Related

Job control in Ruby - SIGCONT handlers not working, SIGTSTP handler working only for irb. What am I missing?

I was working on trying to implement some kind of shell job control for a custom event loop handler with the GLib2 API in Ruby-GNOME. Ideally, this would be able to handle SIGTSTP and SIGCONT signals, to background the process at a TTY when running under a shell and to resume the background process on 'fg' from the shell.
I've not been able to figure out how to completely approach this with the API available in Ruby.
For a simpler usage case, I thought that I'd try adding a similar job support for IRB. I've added the following to my ~/.irbrc. The SIGTSTP handler seems to work, but the process remains suspended even after SIGCONT from fg in BASH.
## conditional section for ~/.irbrc
## can be activated with `IRB_JOBS_TEST=Defined irb`
if ENV['IRB_JOBS_TEST']
  module Jobs
    TSTP_HDLR_ORIG ||= Signal.trap("TSTP") do
      STDERR.puts "\nJobs: backgrounding #{Process.pid} (#{TSTP_HDLR_ORIG.inspect}, #{CONT_HDLR_ORIG.inspect})"
      Process.setpgid(0, Process.ppid)
      TSTP_HDLR_ORIG.call if TSTP_HDLR_ORIG.respond_to?(:call)
    end
    CONT_HDLR_ORIG ||= Signal.trap("CONT") do
      Process.setpgid(0, Process.pid)
      STDERR.puts "Continuing in #{Process.pid}" ## not reached, not shown
      IRB.CurrentContext.thread.wakeup ## no effect
      CONT_HDLR_ORIG.call if CONT_HDLR_ORIG.respond_to?(:call)
    end
  end
end
I'm testing this on FreeBSD 13.1. I've read the FreeBSD termios(4), tcsetpgrp(3), and fcntl(2) manual pages. I'm not sure how much of the terminal-related API is available in Ruby.
The TSTP handler here seems to work, but the CONT handler is apparently not ever reached. I'm not sure if the TSTP handler is actually doing enough for - in effect - backgrounding the process in the shell's process group and relinquishing the controlling terminal.
With that TSTP handler, I can then background the IRB process in the shell with Ctrl-z. I can also foreground the process with 'fg' or BASH '%', but then the process is unresponsive. FreeBSD's Ctrl-t handler shows the process as suspended. Apparently nothing in my CONT handler is reached.
I'm really stumped about what's failing in this approach - what my TSTP/CONT handlers are missing, what's available in Ruby, and why the process stays suspended after 'fg' in the shell.
In a more complex example, with the code I've written for glib2 it was apparently not enough to just call
Process.setpgid(0, Process.ppid)
as the process was not being backgrounded then. This would probably need another question though, as the example code for it isn't quite so short. So, I thought I'd try starting with IRB ...
After trying to foreground the process and then pressing Ctrl-t at the TTY on FreeBSD, I'm seeing the following:
$ %
IRB_JOBS_TEST=Defined irb
load: 0.16 cmd: ruby31 4076 [suspended] 2.36r 0.19u 0.03s 1% 23828k
mi_switch+0xc2 thread_suspend_check+0x260 sleepq_catch_signals+0x113 sleepq_wait_sig+0x9 _cv_wait_sig+0xec tty_wait_background+0x30d ttydev_ioctl+0x14b devfs_ioctl+0xc6 vn_ioctl+0x1a4 devfs_ioctl_f+0x1e kern_ioctl+0x25b sys_ioctl+0xf1 amd64_syscall+0x10c fast_syscall_common+0xf8
So, it's blocking in an ioctl on resume?
Update
After a few hours of ineffectual hacking about this, I've removed the SIGTSTP and SIGCONT signal handlers from my GLib example code and now it "Just Works". I can background the example app at the console ... at least when it's not running under IRB ... and I can bring it back to the process group foreground with the shell. It resumes running on SIGCONT and everything looks alright in the logging from its main event loop.
I'm still not certain what the missing parts may have been in my handlers/hacks for SIGTSTP and SIGCONT with IRB. Of course, with the input history recording in IRB, it's typically simple enough to just restart the process.
Looking at how other applications have approached job control at the console, I think Emacs wraps its TTY I/O streams in some kind of encapsulated struct? (Looking mainly at Emacs' terminal.c.)
I'd be glad to learn if there's job control available in Ruby, though, and apparently it does not even need a custom signal handler for some applications.

kill child process - exec.Command

How do you kill child processes?
I have a long running application starting a new process with "exec.Command":
// ...I am a long running application in the background
// Now I am starting a child process that should be killed together with the parent application.
cmd := exec.Command("sh", "-c", execThis)
// create a new process group
// cmd.SysProcAttr = &syscall.SysProcAttr{Setpgid: true}
Now if I kill <pid of long running application in the background>, it does not kill the child process. Do you know how to kill both together?
There are quite a few things to be teased apart here.
First, there's what the OS itself does. Then, once we know what the OS is and what it does, there's what the program does.
What the OS does is, obviously, OS-dependent. POSIX-flavored OSes have two kinds of kill though: plain kill and process-group-based kill, or killpg. The killpg variety of this function is the only one that sends a signal to an entire process group; plain kill just sends a signal to a single process.
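To make the kill/killpg distinction concrete, here is a sketch in C that puts a child into its own process group and then signals the whole group (the function name run_and_killpg_demo is made up for illustration; killpg(pgrp, sig) is equivalent to kill(-pgrp, sig)):

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child into its own process group, then signal the whole group
   with killpg(). Returns the signal that terminated the child. */
int run_and_killpg_demo(void) {
    pid_t child = fork();
    if (child == 0) {
        setpgid(0, 0);          /* child becomes leader of a new group */
        pause();                /* wait for a signal */
        _exit(0);
    }
    setpgid(child, child);      /* set it from the parent too, avoiding a race */
    killpg(child, SIGTERM);     /* signals every process in the group */
    int status;
    waitpid(child, &status, 0);
    return WIFSIGNALED(status) ? WTERMSIG(status) : -1;
}
```

A plain kill(child, SIGTERM) would have reached only that one process; any grandchildren it had spawned into the same group would survive, which is exactly the problem in the question.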
When a program is run from a controlling terminal, keyboard signals (^C, ^Z, etc) get sent to the foreground process group of that control terminal (see the linked page for a reasonably good description of these and note that BSD/macOS has ^T and SIGINFO as well). But if the signals are being sent from some other program, rather than from a controlling terminal, it is up to that program whether to call killpg or kill, and what signal(s) to send.
Some signals cannot be caught; this is the case for SIGKILL and SIGSTOP. These signals should not be sent willy-nilly; they should be reserved as a last resort. Instead, programs that want another program to stop should generally send one of SIGINT, SIGTERM, SIGHUP, or (rarely) SIGQUIT. Go tends to tie SIGQUIT to debugging (in that the runtime on POSIX systems makes ^\ dump the stacks of the various goroutines), so that one is not a good choice. However, it's not up to the Go program you write here, which can only try to catch the signal. The choice of what to send is up to the sender.
The "Go way" to catch the signal is to use a goroutine and a channel. The signal.Notify function turns an OS-level signal into an event on the channel. What you do not (and cannot) know is whether the signal reached your process through kill or killpg (though if it came from a controlling terminal interaction, the POSIX-y kernel sent it via the equivalent of killpg). If you want to propagate that signal on your own, simply use the notification event to invoke code that makes an OS-level kill call. When using the os/exec package, use cmd.Process.Signal: note that this invokes the POSIX kill, not its killpg, but you would not want to use killpg here since we're assuming a non-process-group signal in the first place (a pgroup-based signal presumably needs no propagation).
There is no fully portable way to send a signal to a POSIX process group (which is not surprising, since this isn't portable to non-POSIX systems). Sadly, there's no direct Unix or POSIX specific way to do that either, it seems, in Go.
On non-POSIX systems, everything is quite different. See the discussion near the front of the os/signal package.

fork()/exec() in XWindow application

How do I execute xterm from an X Window program, embed its window in mine, but continue execution both while xterm is active and after it has closed?
In my XWindows (XLib over XCB) application I want to execute xterm -Into <handle>. So that my window contains xterm window in it. Unfortunately something wrong is happening.
pseudo code:
if (fork() == 0) {
    pipe = popen('xterm -Into ' + handle);
    while (!feof(pipe)) gets(pipe);
    exit(0);
}
I tried system() and execvp() as well. Everything is fine until I exit from the bash that runs in xterm; then my program exits. I guess that the connection to the X server is lost because it is shared between parent and child.
UPDATE: here is what is shown on the terminal after the program exits (or rather crashes).
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0.0"
after 59 requests (59 known processed) with 1 events remaining.
[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
y: ../../src/xcb_io.c:274: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.
Aborted
One possibility is that you are terminating due to the SIGCHLD signal not being ignored, causing your program to abort.
signal(SIGCHLD, SIG_IGN);
Another is, as you suspect, something actively closing the X session. Just closing the socket itself should not matter, but if you are using a library that registers an atexit call, it could cause an issue.
Since, from your snippet, it looks like you don't actually care about the stdout of the xterm, a better way would be to actually close fds 0, 1, and 2. Also, since it looks like you don't need to do anything in the child process after xterm terminates, you can use exec rather than popen to fully replace the child process with that of the xterm, including any cleanup handlers that were left around. Though, I am not sure how pruned your snippet is from what you want to do, as obviously the call to gets is not what you want.
To make sure the X connection is closed, you can set its close-on-exec flag with the following. (This will work on POSIX systems, where the X connection number is the fd of the server socket.)
fcntl(XConnectionNumber(display), F_SETFD, fcntl(XConnectionNumber(display), F_GETFD) | FD_CLOEXEC);
Also note that popen itself forks in the background in addition to your fork. I think you probably want to do an execvp there, then use waitpid(..., WNOHANG) to check for the child's termination in your main X11 loop, if you care to know when it exited.
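Sketching that suggestion (the helper names spawn and child_exited are mine, not from any library): fork() plus execvp() replaces popen's intermediate shell, and a non-blocking waitpid() fits naturally in an event loop.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Spawn a command with fork()+execvp() -- without the extra shell that
   popen() would interpose -- and return the child's pid for polling. */
pid_t spawn(char *const argv[]) {
    pid_t pid = fork();
    if (pid == 0) {
        execvp(argv[0], argv);
        _exit(127);              /* exec failed */
    }
    return pid;
}

/* Non-blocking check, suitable for an X11 event loop: returns 1 once
   the child has exited, 0 while it is still running. */
int child_exited(pid_t pid) {
    int status;
    return waitpid(pid, &status, WNOHANG) == pid;
}
```

In the X11 main loop you would call child_exited() alongside your event processing rather than blocking on the child.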

Best way to handle SIGKILL in Linux kernel

I'm writing a syscall in Linux 3.0, and while I wait for some event to occur (using a waitqueue), I would like to check for a pending SIGKILL, and if one occurs, I would like the current task to die as soon as possible. As far as I can tell, as soon as I return from the syscall (well, really: as soon as the process is about to return to user mode), the kernel checks for pending signals and, upon seeing the SIGKILL, kills current before it returns to user mode.
Question: Is my above assumption correct about how SIGKILL works? My other option is to see that the fatal SIGKILL is pending, and instead of returning from the syscall, I just perform a do_exit(). I'd like to be as consistent as possible with other Linux use cases...and it appears that simply returning from the syscall is what other code does. I just want to ensure that the above assumption about how SIGKILL kills the task is correct.
Signal checking happens after system call exit, yes.
See e.g. ret_from_sys_call at arch/x86/kernel/entry_64.S.
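If you want the wait itself to wake up on SIGKILL rather than only noticing it on the way out, the kernel provides killable waits. A rough, kernel-side sketch of the idiom (wait_event_killable() exists in 3.0-era kernels; the function name here is illustrative, and this is not userspace-buildable code):

```c
/* Kernel-side sketch only -- not buildable as a userspace program. */
#include <linux/wait.h>
#include <linux/sched.h>

long my_syscall_wait(wait_queue_head_t *wq, int *done)
{
    /* Sleep until *done becomes true, but wake early for a fatal signal
       such as SIGKILL; the normal signal path then kills the task on the
       way back to user mode, consistent with other kernel code. */
    if (wait_event_killable(*wq, *done))
        return -ERESTARTSYS;   /* fatal signal pending: just return */
    return 0;
}
```

This matches the convention the answer describes: return from the syscall and let the exit path deliver the fatal signal, rather than calling do_exit() yourself.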

How can a C/C++ program put itself into background?

What's the best way for a running C or C++ program that's been launched from the command line to put itself into the background, equivalent to if the user had launched from the unix shell with '&' at the end of the command? (But the user didn't.) It's a GUI app and doesn't need any shell I/O, so there's no reason to tie up the shell after launch. But I want a shell command launch to be auto-backgrounded without the '&' (or on Windows).
Ideally, I want a solution that would work on any of Linux, OS X, and Windows. (Or separate solutions that I can select with #ifdef.) It's ok to assume that this should be done right at the beginning of execution, as opposed to somewhere in the middle.
One solution is to have the main program be a script that launches the real binary, carefully putting it into the background. But it seems unsatisfying to need these coupled shell/binary pairs.
Another solution is to immediately launch another executed version (with 'system' or CreateProcess), with the same command line arguments, but putting the child in the background and then having the parent exit. But this seems clunky compared to the process putting itself into background.
Edited after a few answers: Yes, a fork() (or system(), or CreateProcess on Windows) is one way to sort of do this, that I hinted at in my original question. But all of these solutions make a SECOND process that is backgrounded, and then terminate the original process. I was wondering if there was a way to put the EXISTING process into the background. One difference is that if the app was launched from a script that recorded its process id (perhaps for later killing or other purpose), the newly forked or created process will have a different id and so will not be controllable by any launching script, if you see what I'm getting at.
Edit #2:
fork() isn't a good solution for OS X, where the man page for 'fork' says that it's unsafe if certain frameworks or libraries are being used. I tried it, and my app complains loudly at runtime: "The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec()."
I was intrigued by daemon(), but when I tried it on OS X, it gave the same error message, so I assume that it's just a fancy wrapper for fork() and has the same restrictions.
Excuse the OS X centrism, it just happens to be the system in front of me at the moment. But I am indeed looking for a solution to all three platforms.
My advice: don't do this, at least not under Linux/UNIX.
GUI programs under Linux/UNIX traditionally do not auto-background themselves. While this may occasionally be annoying to newbies, it has a number of advantages:
Makes it easy to capture standard error in case of core dumps / other problems that need debugging.
Makes it easy for a shell script to run the program and wait until it's completed.
Makes it easy for a shell script to run the program in the background and get its process id:
gui-program &
pid=$!
# do something with $pid later, such as check if the program is still running
If your program forks itself, this behavior will break.
"Scriptability" is useful in so many unexpected circumstances, even with GUI programs, that I would hesitate to explicitly break these behaviors.
Windows is another story. AFAIK, Windows programs automatically run in the background--even when invoked from a command shell--unless they explicitly request access to the command window.
On Linux, daemon() is what you're looking for, if I understand you correctly.
The way it's typically done on Unix-like OSes is to fork() at the beginning and exit from the parent. This won't work on Windows, but where fork() exists it is much more elegant than launching another process.
Three things need doing,
fork
setsid
redirect STDIN, STDOUT and STDERR to /dev/null
This applies to POSIX systems (all the ones you mention claim to be POSIX (but Windows stops at the claiming bit))
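A minimal sketch of those three steps (daemonize_sketch is a hypothetical helper; a real daemon would _exit(0) in the original process instead of returning):

```c
#include <fcntl.h>
#include <unistd.h>

/* The three steps: fork, setsid, point stdio at /dev/null.
   Returns the child's pid in the original process (which would normally
   _exit(0)) and 0 in the detached child. */
pid_t daemonize_sketch(void) {
    pid_t pid = fork();
    if (pid != 0)
        return pid;            /* original process: caller should _exit(0) */
    setsid();                  /* new session: no controlling terminal */
    int fd = open("/dev/null", O_RDWR);
    dup2(fd, STDIN_FILENO);    /* redirect stdin */
    dup2(fd, STDOUT_FILENO);   /* redirect stdout */
    dup2(fd, STDERR_FILENO);   /* redirect stderr */
    if (fd > 2) close(fd);
    return 0;
}
```

The setsid() call is what detaches the process from the controlling terminal, so a later SIGHUP from the terminal cannot reach it.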
On UNIX, you need to fork twice in a row and let the parent die.
A process cannot put itself into the background, because it isn't the one in charge of background vs. foreground. That would be the shell, which is waiting for process exit. If you launch a process with an ampersand "&" at the end, then the shell does not wait for process exit.
But the only way the process can escape the shell is to fork off another child and then let its original self exit back to the waiting shell.
From the shell, you can background a process with Control-Z, then type "bg".
Backgrounding a process is a shell function, not an OS function.
If you want an app to start in the background, the typical trick is to write a shell script to launch it that launches it in the background.
#! /bin/sh
/path/to/myGuiApplication &
To followup on your edited question:
I was wondering if there was a way to put the EXISTING process into the background.
In a Unix-like OS, there really is not a way to do this that I know of. The shell is blocked because it is executing one of the variants of a wait() call, waiting for the child process to exit. There is not a way for the child process to remain running but somehow cause the shell's wait() to return with a "please stop watching me" status. The reason you have the child fork and exit the original is so the shell will return from wait().
Here is some pseudocode for Linux/UNIX:
initialization_code()
if(failure) exit(1)
if( fork() > 0 ) exit(0)
setsid()
setup_signal_handlers()
for(fd=0; fd<NOFILE; fd++) close(fd)
open("/dev/null", O_RDONLY)
open("/dev/null", O_WRONLY)
open("/dev/null", O_WRONLY)
chdir("/")
And congratulations, your program continues as an independent "daemonized" process without a controlling TTY and without any standard input or output.
Now, in Windows you simply build your program as a Win32 application with WinMain() instead of main(), and it runs without a console automatically. If you want to run as a service, you'll have to look that up because I've never written one and I don't really know how they work.
You edited your question, but you may still be missing the point that your question is a syntax error of sorts -- if the process wasn't put in the background to begin with and you want the PID to stay the same, you can't ignore the fact that the program which started the process is waiting on that PID and that is pretty much the definition of being in the foreground.
I think you need to think about why you want to both put something in the background and keep the PID the same. I suggest you probably don't need both of those constraints.
As others mentioned, fork() is how to do it on *nix. You can get fork() on Windows by using MingW or Cygwin libraries. But those will require you to switch to using GCC as your compiler.
In pure Windows world, you'd use CreateProcess (or one of its derivatives CreateProcessAsUser, CreateProcessWithLogonW).
The simplest form of backgrounding is:
if (fork() != 0) exit(0);
In Unix, if you want to background and disassociate from the tty completely, you would do:
Close all descriptors which may access a tty (usually 0, 1, and 2).
if (fork() != 0) exit(0);
setpgid(0, getpid()); /* Might be necessary to prevent a SIGHUP on shell exit. */
signal(SIGHUP,SIG_IGN); /* just in case, same as using nohup to launch program. */
fd=open("/dev/tty",O_RDWR);
ioctl(fd,TIOCNOTTY,0); /* Disassociates from the terminal */
close(fd);
if (fork() != 0) exit(0); /* just for good measure */
That should fully daemonize your program.
The most common way of doing this under Linux is via forking. The same should work on Mac, as for Windows I'm not 100% sure but I believe they have something similar.
Basically what happens is the process splits itself into two processes, and then the original one exits (returning control to the shell or whatever), and the second process continues to run in the background.
I'm not sure about Windows, but on UNIX-like systems, you can fork() then setsid() the forked process to move it into a new process group that is not connected to a terminal.
Under Windows, the closest thing you're going to get to fork() is loading your program as a Windows service, I think.
Here is a link to an intro article on Windows services...
CodeProject: Simple Windows Service Sample
So, as you say, just fork()ing will not do the trick. What you must do is fork() and then re-exec(), as this code sample does:
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <CoreFoundation/CoreFoundation.h>

int main(int argc, char **argv)
{
    int i, j;
    for (i = 1; i < argc; i++)
        if (strcmp(argv[i], "--daemon") == 0)
        {
            for (j = i + 1; j < argc; j++)
                argv[j - 1] = argv[j];
            argv[argc - 1] = NULL;
            if (fork()) return 0;
            execv(argv[0], argv);
            return 0;
        }
    sleep(1);
    CFRunLoopRun();
    CFStringRef hello = CFSTR("Hello, world!");
    printf("str: %s\n", CFStringGetCStringPtr(hello, CFStringGetFastestEncoding(hello)));
    return 0;
}
The loop is to check for a --daemon argument, and if it is present, remove it before re-execing so an infinite loop is avoided.
I don't think this will work if the binary is put into the path because argv[0] is not necessarily a full path, so it will need to be modified.
/** Daemonize */
pid_t pid;
pid = fork(); /** father makes a little daemon (son) */
if (pid > 0)
    exit(0); /** father dies */
while (1) {
    printf("Hello, I'm your little daemon %d\n", pid); /** the child daemon goes on */
    sleep(1);
}
/** try 'nohup' in Linux (usage: nohup <command> &) */
In Unix, I have learned to do that using fork().
If you want to put a running process into the background, fork it twice.
I was trying the solution.
Only one fork is needed from the parent process.
The most important point is that, after fork, the parent process must die by calling _exit(0); and NOT by calling exit(0);
When _exit(0); is used, the command prompt immediately returns on the shell.
This is the trick.
If you need a script to have the PID of the program, you can still get it after a fork.
When you fork, save the PID of the child in the parent process. When you exit the parent process, either output the PID to STD{OUT,ERR} or simply have a return pid; statement at the end of main(). A calling script can then get the pid of the program, although it requires a certain knowledge of how the program works.
