run xterm -e without terminating - bash

I want to run xterm -e file.sh without the window terminating. In the script, I send commands to the background, and they are still running when the script itself finishes.
What I'm doing currently is:
(cd /myfolder; /xterm -ls -geometry 115x65 -sb -sl 1000)
and then after the window pops up
sh file.sh
exit
What I want to do is something like:
(cd /myfolder; /xterm -ls -geometry 115x65 -sb -sl 1000 -e sh file.sh)
without the window terminating, and wait until the background commands finish.
Anyone know how to do that?

Use hold option:
xterm -hold -e file.sh
-hold  Turn on the hold resource, i.e., xterm will not immediately destroy its window when the shell command completes. It will wait until you use the window manager to destroy/kill the window, or until you use the menu entries that send a signal, e.g., HUP or KILL.

I tried -hold, and it leaves xterm in an unresponsive state that requires closing through non-standard means (the window manager, a kill command). If you would rather have an open shell from which you can exit, try adding that shell to the end of your command:
xterm -e "cd /etc; bash"
I came across the answer on Super User.
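If the goal is to run the asker's script and then keep an interactive shell in the same window, the two ideas combine naturally (a sketch using the same construct as above; file.sh is the asker's script):
xterm -e "sh file.sh; bash"
The window stays open with a normal shell after file.sh finishes, and a plain exit closes it.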

Use the wait built-in in your shell script. It will wait until all the background jobs have finished.
Working Example:
#!/bin/bash
# Script to show usage of wait
sleep 20 &
sleep 20 &
sleep 20 &
sleep 20 &
sleep 20 &
wait
The output:
sgulati@maverick:~$ bash test.sh
[1]   Done                    sleep 20
[2]   Done                    sleep 20
[3]   Done                    sleep 20
[4]-  Done                    sleep 20
[5]+  Done                    sleep 20
sgulati@maverick:~$
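Applied to the original xterm question, putting wait at the end of file.sh means the xterm started with -e stays open until everything launched in the background has finished. A sketch, where cmd1 and cmd2 stand in for the asker's actual background commands:
#!/bin/sh
# file.sh -- hypothetical contents mirroring the asker's setup
cmd1 &
cmd2 &
wait   # block until all background jobs have exited
(cd /myfolder; xterm -ls -geometry 115x65 -sb -sl 1000 -e sh file.sh)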

Building on a previous answer, if you specify $SHELL instead of bash, it will use the user's preferred shell.
xterm -e "cd /etc; $SHELL"

With respect to creating the separate shell, you'll probably want to run it in the background so that you can continue executing commands in the current shell, independent of the separate one. In that case, just add the & operator:
xterm -e "cd /etc; bash" &
PID=$!
<"do stuff while xterm is still running">
wait $PID
The wait command at the end will prevent your primary shell from exiting until the xterm shell does. Without the wait, your xterm shell will still continue to run even after the primary shell exits.

Related

Run command in new background tmux window and wait for process to finish

I'm trying to use tmux in a script, so that it runs a command that takes some time (let's say 'ping -c 5 8.8.8.8', for example) in a new hidden pane, while blocking the current script itself until the ping ends.
By "hidden pane", I mean running the command in a new pane that would be sent in background, and is still accessible by switching panes in order to monitor and/or interact with it (not necessarily ping).
(cf. EDIT)
Here is some pseudo bash code to show more clearly what I'm trying to do:
echo "Waiting for ping to finish..."
echo "Ctrl-b + p to switch pane and see running process"
tmux new-window -d 'ping -c 5 8.8.8.8' # run command in new "background" window
tmux wait-for # display "Done!" only when ping command has finished
echo "Done!"
I know the tmux commands here don't really have any sense like this, but this is just to illustrate.
I've looked at different solutions to either send a command to the background or wait until a process in another pane has finished, but I still haven't found a way to do both correctly.
EDIT
Thanks to Nicholas Marriott for pointing out that the -d option exists when creating a new window, which avoids switching to it automatically. Now the only issue is blocking the main script until the command ends.
I tried the following, hoping it would work, but it doesn't either (the script doesn't resume).
tmux new-window -d 'ping -c 5 8.8.8.8; tmux wait -S ping' &
tmux wait $!
Maybe there is a way by playing with processes (using fg,bg...), but I still haven't figured it out.
Similar questions:
[1] Make tmux block until programs complete
[2] Bash - executing blocking scripts in tmux
[3] How do you hide a tmux pane
[4] how to wait for first command to finish?
You can use wait-for but you need to give it a channel and signal that channel when your process is done, something like:
tmux neww -d 'ping blah; tmux wait -S ping'
tmux wait ping
echo done
If you think you might run the script several times in parallel, I suggest making a channel name using mktemp or similar (and removing the file when wait-for returns).
wait-for can't automatically wait for stuff like pane or windows exiting, silence in a pane, and so on, but I would like to see that implemented at some point.
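A sketch of that suggestion, assuming mktemp is available: with -u, mktemp only generates a unique name without creating a file, which is all a wait-for channel needs, so there is nothing to remove afterwards.
chan=$(mktemp -u tmux-wait.XXXXXX)   # unique channel name, no file created
tmux neww -d "ping -c 5 8.8.8.8; tmux wait -S $chan"
tmux wait "$chan"
echo done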
The other answers only work if you're already inside a tmux session. If you are outside one, you have to use something like this:
tmux new-session -d 'vi /etc/passwd' \; split-window -d 'vi /etc/group' \; attach
If you want to call this from within a script, you should check whether "$TMUX" is set (or just unset it to force a nested tmux window).
#!/bin/sh
export com1="vi /etc/passwd"
export com2="vi /etc/group"
if [ -z "$TMUX" ]   # quote the expansion so the test works when TMUX is unset
then
    export doNewSession="new-session -d 'exit 0'"
else
    export doNewSession=""
fi
tmux $doNewSession \; split-window -d "$com1" \; split-window -d "$com2" \; attach
[ -z "$TMUX" ] && exit 0
My solution was to make a named pipe and then wait for input using read:
#!/bin/sh
rm -f /wait
mkfifo /wait
tmux new-window -d '/bin/sh -c "ping -c 5 8.8.8.8; echo . > /wait"'
read -t 10 WAIT <>/wait
[ -z "$WAIT" ] &&
echo 'The operation failed to complete within 10 seconds.' ||
echo 'Operation completed successfully.'
I like this approach because you can set a timeout and, if you wanted, you could extend this further with other tmux controls to kill the ongoing process if it doesn't end the way you want.
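For example, a hedged sketch of that extension: naming the window (pingwin here is made up for illustration) lets you kill it if the timeout hits.
#!/bin/sh
rm -f /wait
mkfifo /wait
tmux new-window -d -n pingwin '/bin/sh -c "ping -c 5 8.8.8.8; echo . > /wait"'
read -t 10 WAIT <>/wait
[ -z "$WAIT" ] && tmux kill-window -t pingwin   # give up and kill the stuck window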

Command file (when double clicked) doesn't work if this code is not inserted...why?

When I double-click my .command file, it runs every line until it hits the nohup command, which terminates the application (Terminal) as it runs in the background.
Now, when I enter the code below at the top of the script, the whole script works. Why on earth would that be happening?
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>/Users/$username/Desktop/log.out 2>&1
Apparently, when I run this command in the script:
nohup osascript -e 'tell application "Terminal" to do script "'"$DIR"'"' & pkill -f -a Terminal
terminating the application with pkill creates issues for the overall script, since closing Terminal delivers SIGHUP to it. My solution is the following:
trap '' 1 # this blocks sighup so we don't die when terminal is closed
Thanks to izabera from Freenode Chat - #bash channel.
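A minimal sketch of where that trap sits, assuming the rest of the script is as in the question ($DIR comes from the original script):
#!/bin/bash
trap '' 1   # ignore SIGHUP so the script survives Terminal being killed
nohup osascript -e 'tell application "Terminal" to do script "'"$DIR"'"' & pkill -f -a Terminal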

Execute a timed function in bash

I am trying to implement a timed function. If the timer expires, the function/command should be killed. If the function/command finishes first, bash should not wait for the timer to expire.
(cmdpid=$BASHPID; \
( sleep 60; kill $cmdpid 2>/dev/null) & \
child_pid=$!; \
ssh remote_host /users/jj/test.sh; \
kill -9 $child_pid)
test.sh may or may not finish within 60 seconds. This worked fine.
But when I want to capture the result of test.sh, which echoes "SUCCESS" or "FAILURE", I tried:
result=$(cmdpid=$BASHPID; \
( sleep 60; kill $cmdpid 2>/dev/null) & \
child_pid=$!; \
ssh remote_host /users/jj/test.sh; \
kill -9 $child_pid)
Here it waits for the timer to expire. Using set -x I can see that "kill -9 $child_pid" is executed, but the kill does not actually kill the sub-shell.
One way to tackle this problem is to run the timer in a separate script, say MyTimerTest, which is called from (say) MainScriptTest but runs separately; whichever script finishes first then "kills" the other. For example:
On MainScriptTest you could put this at the beginning:
nohup /folder/MyTimerTest > /dev/null 2>&1 &
On MainScriptTest you could put this at the very end:
killall MyTimerTest > /dev/null 2>&1
The MyTimerTest could be something like this:
#!/bin/bash
sleep 60
killall MainScriptTest > /dev/null 2>&1
exit 0
Note: the long script names with mixed capital and lowercase letters (e.g., MainScriptTest) are deliberate: killall is case sensitive, which helps keep it from killing something it should not. To be extra safe, you might even add a token to the long name, like MainScriptTest88888 or something similar.
Edit: Thanks to gilez, who suggested using the timeout command. If it is available on your system, you can use a quick one-liner like this:
timeout 60 bash -c "/folder/MainScriptTest"
Using timeout is convenient. However, if MainScriptTest creates independent child processes (for example by calling: nohup /folder/OtherScript &) then timeout would not kill those child processes, and the exit would not be clean.
The first solution I gave is longer, but it could be customized to kill those child processes (or any other processes you want) by adding them to the MainScriptTest, like for example:
killall OtherScript > /dev/null 2>&1
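For the asker's original goal of capturing the output while enforcing the limit, timeout also composes with command substitution. A sketch, assuming GNU coreutils timeout, which exits with status 124 when the time limit is hit:
result=$(timeout 60 ssh remote_host /users/jj/test.sh)
if [ $? -eq 124 ]; then   # 124 = timeout fired and killed the command
    echo "test.sh timed out"
else
    echo "result: $result"
fi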
Found another way:
result=$( ssh $remote_host /users/jj/test.sh ) & mypid=$!
( sleep 10; kill -9 $mypid ) &
wait $mypid
Note that because the assignment itself is backgrounded, result is only set inside that background subshell; this enforces the timeout but does not make the output visible to the parent shell.

Bash script to wait for gnome-terminal to finish before continuing script, only works for first instance of script

I have a bash script that opens a new gnome-terminal with two tabs, each running another script. After the scripts in the two tabs finish, the main script in the parent terminal continues.
When I run multiple instances of this bash script, it no longer waits for the additional gnome-terminals to finish before continuing the parent script.
How do I fix it so that additional instances of the script behave just like the first one?
Here is the bash script that I'm running. I run additional instances of this by typing sh scriptname.sh in a new terminal.
gnome-terminal --tab --command="expect launchneuron.exp" --tab --command="expect launchmpj.exp"
echo "Simulation Complete"
echo "Plotting Results"
expect -c "
set timeout -1
spawn ssh $username@server
expect \"password\"
send \"$password\r\"
expect \"$ \"
send \"qsub -I -q abc -A lc_tb -l nodes=1 -l walltime=24:00:00 -d .\r\"
expect \"$ \"
send \"sh plotgraph.sh\r\"
expect \"$ \"
send \"exit\r\"
"
#!/bin/bash
date
bash -c "sleep 7" &
bash -c "sleep 5" &
wait
date
As you can see when running this script, both sleep commands run in parallel, but the main script blocks while they are running.
Sat Jul 27 01:11:49 2013
Sat Jul 27 01:11:56 2013
Replace sleep 7 with expect launchneuron.exp
and sleep 5 with expect launchmpj.exp
and add your plot commands after calling "wait":
echo "Simulation Complete"
...(your code to plot results)
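Putting this together with the original script, the assembled top of scriptname.sh would look roughly like this (launchneuron.exp and launchmpj.exp are the asker's expect scripts):
#!/bin/bash
expect launchneuron.exp &
expect launchmpj.exp &
wait   # block until both expect scripts have exited
echo "Simulation Complete"
echo "Plotting Results"
# ...the ssh/qsub plotting block from the original script goes here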

How to suppress Terminated message after killing in bash?

How can you suppress the Terminated message that comes up after you kill a
process in a bash script?
I tried set +bm, but that doesn't work.
I know another solution involves calling exec 2> /dev/null, but is that
reliable? How do I reset it back so that I can continue to see stderr?
In order to silence the message, you must be redirecting stderr at the time the message is generated. Because the kill command sends a signal and doesn't wait for the target process to respond, redirecting stderr of the kill command does you no good. The bash builtin wait was made specifically for this purpose.
Here is a very simple example that kills the most recent background command ($! expands to the PID of the most recently backgrounded job):
kill $!
wait $! 2>/dev/null
Because both kill and wait accept multiple pids, you can also do batch kills. Here is an example that kills all background processes (of the current process/script of course).
kill $(jobs -rp)
wait $(jobs -rp) 2>/dev/null
I was led here from bash: silently kill background function process.
The short answer is that you can't: bash always prints the status of foreground jobs. The monitoring flag applies only to background jobs, and only in interactive shells, not scripts.
see notify_of_job_status() in jobs.c.
As you say, you can redirect so standard error is pointing to /dev/null but then you miss any other error messages. You can make it temporary by doing the redirection in a subshell which runs the script. This leaves the original environment alone.
(script 2> /dev/null)
which will lose all error messages, but just from that script, not from anything else run in that shell.
You can save and restore standard error, by redirecting a new filedescriptor to point there:
exec 3>&2 # 3 is now a copy of 2
exec 2> /dev/null # 2 now points to /dev/null
script # run script with redirected stderr
exec 2>&3 # restore stderr to saved
exec 3>&- # close saved version
But I wouldn't recommend this: its only upside over the first approach is that it saves a subshell invocation, while it is more complicated and may even alter the script's behavior if the script itself manipulates file descriptors.
EDIT: For a more appropriate answer, see the one given by Mark Edgar.
Solution: use SIGINT (works only in non-interactive shells)
Demo:
cat > silent.sh <<"EOF"
sleep 100 &
kill -INT $!
sleep 1
EOF
sh silent.sh
http://thread.gmane.org/gmane.comp.shells.bash.bugs/15798
Maybe detach the process from the current shell process by calling disown?
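A sketch of that idea: grab the PID before disowning, since the shell forgets the job once it is disowned.
sleep 100 &
pid=$!      # save the PID first; %1 stops working after disown
disown      # the shell no longer monitors the job, so no Terminated is reported
kill "$pid"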
The Terminated message is logged by the default signal handler of bash 3.x and 4.x. Just trap the TERM signal at the very start of the child process:
#!/bin/sh
## assume script name is test.sh
foo() {
trap 'exit 0' TERM ## here is the key
while true; do sleep 1; done
}
echo before child
ps aux | grep 'test\.s[h]\|slee[p]'
foo &
pid=$!
sleep 1 # wait trap is done
echo before kill
ps aux | grep 'test\.s[h]\|slee[p]'
kill $pid ## no need to redirect stdin/stderr
sleep 1 # wait kill is done
echo after kill
ps aux | grep 'test\.s[h]\|slee[p]'
Is this what we are all looking for?
Not wanted:
$ sleep 3 &
[1] 234
<pressing enter a few times....>
$
$
[1]+ Done sleep 3
$
Wanted:
$ (set +m; sleep 3 &)
<again, pressing enter several times....>
$
$
$
$
$
As you can see, no job end message. Works for me in bash scripts as well, also for killed background processes.
'set +m' disables job control (see 'help set') for the current shell, so if you run your command in a subshell (as done here with the parentheses) you don't affect the job-control settings of the current shell. The only disadvantage is that you need to get the PID of your background process back to the current shell if you want to check whether it has terminated or evaluate its return code.
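One way to get that PID back is to echo it from the subshell via command substitution. A sketch; note the redirection on the background command, without which the substitution would hang until the job releases stdout:
pid=$( set +m; sleep 300 >/dev/null 2>&1 & echo $! )
kill "$pid"   # silent: the job belonged to the already-exited subshell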
This also works for killall (for those who prefer it):
killall -s SIGINT (yourprogram)
suppresses the message. I was running mpg123 in the background; it could only be killed silently by sending SIGINT (Ctrl-C's signal) instead of the default SIGTERM.
disown did exactly the right thing for me; the exec 3>&2 approach is risky for a lot of reasons, and set +bm didn't seem to work inside a script, only at the command prompt.
I had success with adding 'jobs 2>&1 >/dev/null' to the script. I'm not certain it will help anyone else's script, but here is a sample:
while true; do echo $RANDOM; done | while read line
do
echo Random is $line the last jobid is $(jobs -lp)
jobs 2>&1 >/dev/null
sleep 3
done
Another way to disable job notifications is to place your command to be backgrounded in a sh -c 'cmd &' construct.
#!/bin/bash
# ...
pid="`sh -c 'sleep 30 & echo ${!}' | head -1`"
kill "$pid"
# ...
# or put several cmds in sh -c '...' construct
sh -c '
sleep 30 &
pid="${!}"
sleep 5
kill "${pid}"
'
I found that putting the kill command in a function and then backgrounding the function suppresses the termination output
function killCmd() {
kill $1
}
killCmd $somePID &
Simple:
{ kill $!; } 2>/dev/null
Advantage? You can use any signal, e.g.:
{ kill -9 $PID; } 2>/dev/null
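Combined with the wait-based answer at the top of this thread, a self-contained sketch that also reaps the job quietly:
sleep 100 &
pid=$!
{ kill -9 "$pid" && wait "$pid"; } 2>/dev/null   # both the kill and the reap stay silent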