"gnome-terminal --" exits with all forks terminated - fork

I wrote a simple C program to create an orphan process:
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* child: replace this process with firefox */
        execl("/usr/bin/firefox", "firefox", (char *)0);
        _exit(1); /* only reached if execl fails */
    } else {
        /* parent: wait two seconds, then exit */
        sleep(2);
        return 0;
    }
}
I compiled this program to a.out and ran the following command in a terminal:
gnome-terminal -- ./a.out
This opens a new terminal window and Firefox, but after 2 seconds the terminal exits and Firefox is terminated with it. I want Firefox to survive as an orphan process after the terminal exits.
My program seems correct, because when I run
./a.out
directly in a terminal, Firefox opens, and when I later close that terminal manually, Firefox is still running. So the problem must be with gnome-terminal -- ....
I also replaced gnome-terminal -- with xterm -e, but the behavior is the same.
Is there any way to open a new terminal, run a.out in that window, and leave Firefox behind as an orphan? (I know how to run a.out in a new terminal and keep the terminal open after a.out returns, but I want the new terminal to exit while Firefox keeps running as an orphan.)

Firefox is getting killed by SIGHUP because its controlling terminal goes away when gnome-terminal or xterm exits. You have two options to stop this:
1. Do what nohup does: make Firefox ignore SIGHUP by calling signal(SIGHUP, SIG_IGN); before your execl. A disposition of "ignored" survives the exec.
2. Call setsid() before your execl so that the process has no controlling terminal. Note that Firefox might still acquire a controlling terminal later if it happens to open a tty for some reason.
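For example, here is a minimal sketch of the fixed program combining both suggestions (either change on its own should be enough to keep Firefox running after the terminal exits):
#include <signal.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* child: start a new session, so there is no controlling terminal */
        setsid();
        /* and, like nohup, ignore SIGHUP (the disposition survives execl) */
        signal(SIGHUP, SIG_IGN);
        execl("/usr/bin/firefox", "firefox", (char *)0);
        _exit(1); /* only reached if execl fails */
    }
    sleep(2); /* parent exits after 2 seconds, as before */
    return 0;
}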

Related

Is it possible to execute a bash script after killing a terminal?

I know there is a file called .bash_profile that executes code (a bash script) when you open a terminal.
And there is another file, called .bash_logout, that executes code when you exit the terminal.
How would I execute some script when the terminal is killed?
(.bash_logout does not cover the case where the terminal is killed.)
How would I execute some script when the terminal is killed?
I interpret this as "execute a script when the terminal window is closed". To do so, add the following inside your .bashrc or .bash_profile:
trap '[ -t 0 ] || command to execute' EXIT
Of course you can replace command to execute with source ~/.bash_exit and put all the commands inside the file .bash_exit in your home directory.
The special EXIT trap is executed whenever the shell exits (e.g. by closing the terminal, but also by pressing Ctrl+D on the prompt, or executing exit, or ...).
[ -t 0 ] checks whether stdin is connected to a terminal. Due to || the next command is executed only if that test fails, which it does when closing the terminal, but doesn't for other common ways to exit bash (e.g. pressing Ctrl+D on the prompt or executing exit).
Failed attempts (read only if you are trying to find an alternative)
In the terminals I have heard of, bash always receives a SIGHUP signal when the window is closed. Sometimes there are even two SIGHUPs; one from the terminal, and one from the kernel when the pty (pseudoterminal) is closed. However, sometimes both SIGHUPs are lost in interactive sessions, because bash's readline temporarily uses its own traps. Strangely enough, the SIGHUPs always seem to get caught when there is an EXIT trap; even if that EXIT trap does nothing.
However, I strongly advise against setting any trap on SIGHUP. Bash processes non-EXIT traps only after the current command has finished. If you ran sh -c 'while true; do true; done' and closed the terminal, bash would continue to run in the background as if you had used disown or nohup.

Close a Terminator terminal opened from bash when the program in it exits

I have a udev rule that calls a script whenever I insert a certain USB device. That script launches a terminal using the following command:
terminator -e "...some_program" & exit
(Could have also been xterm, doesn't matter as far as I can tell.)
Once 'some_program' finishes doing what it should, it exits (from inside that program, not from bash), but the Terminator terminal remains open unless I Ctrl+C it, in which case it closes. But I don't want to Ctrl+C it; that's the whole point.
I have another udev rule that runs when the USB device is removed, but that rule won't trigger until the terminal opened by the 'insert usb' rule closes (even though I used & exit after launching the script from the 'insert usb' rule).
I don't have any more ideas; I've searched high and low for a solution, but nothing worked.
I tried sending SIGINT from inside some_program instead of using exit(1); it didn't work. The program terminated, but the terminal stayed open.
I tried getting the terminal's PID and killing it; it didn't work.
I tried opening another terminal and killing that PID from there; it didn't work either.
You might want to try this:
terminator -e "bash -c 'yourcommand'"
At least when I call ls this way it automatically closes:
# this closes automatically:
terminator -e "bash -c 'ls'"
# to test, this closes when the less command is ended (eg. by hitting q):
terminator -e "bash -c 'ls | less'"
Apparently terminator doesn't initialize its own shell this way, and as soon as the command passed with the -c option ends, the shell terminates and terminator automatically closes the window.
Solved it; no need to use 'bash -c'.
'some_program' is a ROS node, so all I needed to do was kill the rosmaster...
$ killall -9 rosmaster
and it works now.

How to immediately trap a signal to an interactive Bash shell?

I am trying to send a signal from one terminal A to another terminal B. Both run an interactive shell.
In terminal B, I trap the SIGUSR1 signal like so:
$ trap 'source ~/mycommand' SIGUSR1
Now in terminal A, I send the signal like so:
$ kill -SIGUSR1 pidOfB
Unfortunately, nothing happens in B. If I want my command to be executed, I need to switch to B and either enter a new command or press Enter.
How can I avoid this drawback and have my command executed immediately?
EDIT:
It's important to note that I want to interact directly with the interactive shell in terminal B from terminal A.
For this reason, every solution where the trap command would be executed in a subshell would not work for me.
Also, terminal B must stay interactive.
The shell may simply be stuck in a blocking read, waiting for command-line input. Hitting Enter causes the handler to execute before the entered command. If you instead run an interruptible command like wait:
$ sleep 60 & wait
then sending the signal causes wait to terminate immediately, followed by the output of the handler.
Based on the answers and my numerous attempts to solve this, I don't think it's possible to catch a trapped signal immediately in an interactive bash session.
For the trap to trigger, there must be an interaction from the user.
This is because readline blocks until a newline is entered, and there is no way to interrupt that read.
My solution is to use dtach, a small program that emulates the detach feature of screen.
dtach can run a fully interactive shell and, in its latest version, offers a way to communicate with that shell (or whatever program you launch) via a custom socket.
To start a new dtach session running an interactive bash, in terminal B:
$ dtach -A /tmp/MySocket bash -i
Now, from terminal A, we can send a message to the bash session in terminal B like so:
$ echo 'echo hello' | dtach -p /tmp/MySocket
In terminal B, we now see:
$ echo hello
hello
To expand on that, if I now do this in terminal A:
$ trap 'echo "cd $(pwd)" | dtach -p /tmp/MySocket' DEBUG
I'll have the working directories of the two terminals kept in sync.
PS: I'd still like to know if there is a way to do this in pure bash.
I use a similar trap so that periodically I can (from a separate cron job) force all idle bash processes to do a 'history -a'. I found that if I trap SIGALRM instead of SIGUSR1, then the bash blocking read seems not to be a problem: the trap runs now, rather than next time one hits return. I tried SIGINT, but that caused an annoying "^C", followed by a new prompt line, to be displayed. I haven't yet found any drawbacks of using SIGALRM, but perhaps they will arise.
It may be buffering.
As a test, try installing a loop trigger. In window A:
{ trap 'ls' USR1; while sleep 1; do echo > /dev/null; done; } &
[1] 7316
in window B:
kill -usr1 7316
Back in window A, the ls fires when the loop does an echo.
Don't know if that will help, but it's something.

Run vim using NSTask

I'm writing a console program. I want to launch vim from that program, wait until the user exits it, and then continue execution.
let editorTask = NSTask()
editorTask.currentDirectoryPath = "/Users/vbezhenar/Documents"
editorTask.launchPath = "/usr/bin/vim"
editorTask.arguments = ["/Users/vbezhenar/Documents/file"]
editorTask.launch()
editorTask.waitUntilExit()
I'm running this program from a terminal. I can see vim running with ps aux | grep vim in another terminal, but I don't see any vim user interface. The console just hangs until I press Ctrl+C.
It seems like a problem with stdout or stdin, but the documentation clearly states that by default those file descriptors are inherited from the launching process, so there shouldn't be any problem. I don't alter the environment either, so it should be inherited too.
I also tried launching "/bin/sh"; it didn't work either.

Shell script behaves strangely when called via an Erlang port

When calling shell scripts from Erlang, I generally need their exit status (0 or something else), so I run them using this function:
%% in module util
os_cmd_exitstatus(Action, Cmd) ->
    ?debug("~ts starting... Shell command: ~ts", [Action, Cmd]),
    try erlang:open_port({spawn, Cmd}, [exit_status, stderr_to_stdout]) of
        Port ->
            os_cmd_exitstatus_loop(Action, Port)
    catch
        _:Reason ->
            case Reason of
                badarg ->
                    Message = "Bad input arguments";
                system_limit ->
                    Message = "All available ports in the Erlang emulator are in use";
                _ ->
                    Message = file:format_error(Reason)
            end,
            ?error("~ts: shell command error: ~ts", [Action, Message]),
            error
    end.

os_cmd_exitstatus_loop(Action, Port) ->
    receive
        {Port, {data, Data}} ->
            ?debug("~ts... Shell output: ~ts", [Action, Data]),
            os_cmd_exitstatus_loop(Action, Port);
        {Port, {exit_status, 0}} ->
            ?info("~ts finished successfully", [Action]),
            ok;
        {Port, {exit_status, Status}} ->
            ?error("~ts failed with exit status ~p", [Action, Status]),
            error;
        {'EXIT', Port, Reason} ->
            ?error("~ts failed with port exit: reason ~ts",
                   [Action, file:format_error(Reason)]),
            error
    end.
This worked fine, until I used this to start a script which forks off a program and exits:
#!/bin/sh
FILENAME=$1
eog $FILENAME &
exit 0
(In the actual use case there are quite a few more arguments, and some massaging before they are passed to the program.) When run from a terminal, it shows the image and exits immediately, as expected.
But running from Erlang, it doesn't. In the log file I see that it starts fine:
22/Mar/2011 13:38:30.518 Debug: Starting player starting... Shell command: /home/aromanov/workspace/gmcontroller/scripts.dummy/image/show-image.sh /home/aromanov/workspace/media/images/9e89471e-eb0b-43f8-8c12-97bbe598e7f7.png
and the eog window appears. But I don't get
22/Mar/2011 13:47:14.709 Info: Starting player finished successfully
until killing the eog process (with kill or just closing the window), which isn't suitable for my requirements. Why the difference in behavior? Is there a way to fix it?
Normally, if you run a command in the background with & in a shell script and the shell script terminates before the command does, the command gets orphaned. It might be that Erlang tries to prevent orphaned processes in open_port and waits for eog to terminate. Normally, if you want to run something in the background from a shell script, you would put a wait at the end of the script to wait for your background processes to finish. But that is exactly what you don't want to do here.
You might try the following in your shell script:
#!/bin/sh
FILENAME=$1
daemon eog $FILENAME
# exit 0 not needed: daemon returns 0 if everything is ok
That is, if your operating system has a daemon command. I checked on FreeBSD and it has one: daemon(8).
This command is not available on all Unix-like systems, but there might be a different command that does the same thing on your operating system.
The daemon utility detaches itself from the controlling terminal and executes the program specified by its arguments.
I'm not sure if this solves your problem, but I suspect that eog somehow stays attached to stdin/stdout as a kind of controlling terminal. Worth a try anyway.
This should also work around the possibility that job control is erroneously enabled, which could also cause the problem. Since daemon exits normally, your shell can't try to wait for the background job on exit, because there is none in the shell's view.
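For illustration, the detaching that daemon(8) performs boils down to roughly the following C sketch (a simplified approximation, not the real daemon implementation; a real daemonizer would also redirect the standard file descriptors and handle errors more carefully):
#include <unistd.h>

int main(int argc, char *argv[]) {
    if (argc < 2)
        return 1;              /* usage: detach <program> [args...] */
    pid_t pid = fork();
    if (pid < 0)
        return 1;              /* fork failed */
    if (pid > 0)
        return 0;              /* parent returns to the caller immediately */
    setsid();                  /* child: new session, no controlling terminal */
    execvp(argv[1], &argv[1]); /* run the requested program, e.g. eog */
    _exit(1);                  /* only reached if execvp fails */
}
The key point, as in the setsid() answer to the first question above, is that the executed program ends up in a session without a controlling terminal, so nothing sends it SIGHUP when the terminal or the calling shell goes away.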
Having said all this: why not just keep the port open in Erlang while eog runs?
Start it with:
#!/bin/sh
FILENAME=$1
exec eog $FILENAME
Calling it with exec doesn't fork: exec replaces the shell process with eog. The exit status you'll see in Erlang will then be eog's exit status when it terminates. You also have the option of closing the port and terminating eog from Erlang if you want to.
Perhaps your /bin/sh doesn't support job control when it isn't run interactively? At least the /bin/sh (actually dash(1)!) on my Ubuntu system mentions:
-m monitor Turn on job control (set automatically
when interactive).
When you run the script from a terminal, the shell probably recognizes that it is being run interactively and supports job control. When you run the shell script as a port, the shell probably runs without job control.
