There is this program called sshuttle that can connect to a server and create a tunnel.
I wish to create a bash function that sequentially:
opens a tunnel to a remote server (sshuttle -r myhost 0/0),
runs one arbitrary command line,
kill -s TERM <pidOfTheAboveTunnel>.
A basic idea (which works, but the 5-second delay is a problem) is something like sshuttle -r myhost 0/0 & sleep 5; mycommand; kill -s TERM $(pgrep sshuttle)
Could expect be used to wait for the string "c : Connected to server." that sshuttle writes to stderr here? My attempts as a newbie were met with nothing but failure, and the man page is rather daunting.
When you use expect to control another program, it connects to that program through a pseudo-terminal (pty), so expect sees the same output from the program as you would on a terminal, in particular there is no distinction between stdout and stderr. Assuming that your mycommand is to be executed on the local machine, you could use something like this as an expect (not bash) script:
#!/usr/bin/expect
spawn sshuttle -r myhost 0/0
expect "Connected to server."
exec mycommand
exec kill [exp_pid]
close
The exec kill may not be needed if sshuttle exits when its stdin is closed, which will happen on the next line.
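If you prefer to keep everything in a bash function, a rough alternative (my own sketch, not part of the answer above; the banner string, host name and timeout are assumptions) is to background sshuttle, send its output to a temporary log, and poll that log for the banner instead of sleeping a fixed five seconds:

#!/bin/bash
# Sketch: open the tunnel, wait for the "Connected" banner, run one arbitrary
# command, then tear the tunnel down. Adjust the banner to whatever your
# sshuttle version actually prints.
tunnel_run() {
    local log pid
    log=$(mktemp)
    sshuttle -r myhost 0/0 >"$log" 2>&1 &
    pid=$!
    # Poll for up to ~30 seconds instead of a fixed 5-second sleep.
    for _ in $(seq 1 30); do
        grep -q "Connected to server." "$log" && break
        sleep 1
    done
    "$@"                       # the arbitrary command line
    kill -s TERM "$pid"
    wait "$pid" 2>/dev/null
    rm -f "$log"
}
# Usage: tunnel_run mycommand arg1 arg2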
I am looking for an alternative to something like ssh user@node1 uptime && ssh user@node2 uptime, where both SSH commands run simultaneously. As each blocks until its command returns, && and ; between them don't work.
My goal is to run infinite while loops on both nodes via SSH, so the first one would never return and the second one would never be run. I would then like to terminate the loops with Ctrl+C, save the output to a log file, and read that file via Python.
Is there an easy solution to this?
Thanks in advance!
Capturing SSH output
On the one hand, you need to capture the ssh output/error and store it in a file so that you can process it afterwards with Python. For this purpose you can:
1- Store output and error directly into a file
ssh user@node cmd > session.log 2>&1
2- Show output/error in the console while storing it into a file (I would recommend this one)
ssh user@node cmd 2>&1 | tee session.log
Check this for further information about the tee command.
Running commands in parallel
On the other hand, you want to run both commands in parallel and block the current bash process. You can achieve this by:
1- Blocking the current bash process until its children are done.
cmd1 & cmd2 & wait
Check this for further information about the wait command.
2- Spawning the child processes and letting the current bash process continue. Notice that the processes will be kept alive even after the main process ends.
nohup cmd1 & nohup cmd2 &
The whole thing
I would recommend combining both approaches using tee (so you can still see the ssh outputs on your terminal) and blocking the current process until everything is done (so that when you kill the main process all the processes are killed too).
ssh user@node1 uptime 2>&1 | tee session1.log & ssh user@node2 uptime 2>&1 | tee session2.log & wait
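Putting it together for the infinite-loop case from the question (a sketch only; host names, loop bodies and log names are placeholders, and stopping the remote loops themselves may need extra care):

#!/bin/bash
# Run both remote loops in parallel, tee their output into local log files,
# and make Ctrl+C signal both background pipelines before the script exits.
trap 'kill 0' INT    # kill 0 signals the whole process group, including this script
ssh user@node1 'while true; do uptime; sleep 1; done' 2>&1 | tee session1.log &
ssh user@node2 'while true; do uptime; sleep 1; done' 2>&1 | tee session2.log &
wait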
I've recently run into some slightly odd behaviour when running commands over ssh. I would be interested to hear any explanations for the behaviour below.
Running ssh localhost 'touch foobar &' creates a file called foobar as expected:
[bob@server ~]$ ssh localhost 'touch foobar &'
[bob@server ~]$ ls foobar
foobar
However running the same command but with the -t option to force pseudo-tty allocation fails to create foobar:
[bob@server ~]$ ssh -t localhost 'touch foobar &'
Connection to localhost closed.
[bob@server ~]$ echo $?
0
[bob@server ~]$ ls foobar
ls: cannot access foobar: No such file or directory
My current theory is that because the touch process is being backgrounded, the pseudo-tty is allocated and deallocated before the process has a chance to run. Indeed, adding a one-second sleep allows touch to run as expected:
[bob@pidora ~]$ ssh -t localhost 'touch foobar & sleep 1'
Connection to localhost closed.
[bob@pidora ~]$ ls foobar
foobar
If anyone has a definitive explanation I would be very interested to hear it. Thanks.
Oh, that's a good one.
This is related to how process groups work, how bash behaves when invoked as a non-interactive shell with -c, and the effect of & in the input commands.
The answer assumes you're familiar with how job control works in UNIX; if you're not, here's a high level view: every process belongs to a process group (the processes in the same group are often put there as part of a command pipeline, e.g. cat file | sort | grep 'word' would place the processes running cat(1), sort(1) and grep(1) in the same process group). bash is a process like any other, and it also belongs to a process group. Process groups are part of a session (a session is composed of one or more process groups). In a session, at most one process group is the foreground process group, and there may be many background process groups. The foreground process group has control of the terminal (if there is a controlling terminal attached to the session); the session leader (bash) moves processes from background to foreground and from foreground to background with tcsetpgrp(3). A signal sent to a process group is delivered to every process in that group.
If the concept of process groups and job control is completely new to you, I think you'll need to read up on that to fully understand this answer. A great resource to learn this is Chapter 9 of Advanced Programming in the UNIX Environment (3rd edition).
That being said, let's see what is happening here. We have to fit together every piece of the puzzle.
In both cases, the ssh remote side invokes bash(1) with -c. The -c flag causes bash(1) to run as a non-interactive shell. From the manpage:
An interactive shell is one started without non-option arguments and
without the -c option whose standard input and error are both
connected to terminals (as determined by isatty(3)), or one started
with the -i option. PS1 is set and $- includes i if bash is
interactive, allowing a shell script or a startup file to test this
state.
Also, it is important to know that job control is disabled when bash is started in non-interactive mode. This means that bash will not create a separate process group to run the command: since job control is disabled, there is no need to move the command between foreground and background, so it might as well just remain in the same process group as bash. This will happen whether or not you forced PTY allocation on ssh with -t.
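A quick way to see this for yourself (my own check; the exact flag string differs between setups) is to print $-, which contains m only when job control is enabled:

ssh localhost 'echo $-'    # non-interactive: something like "hBc", no "m"
echo $-                    # typed in your interactive shell: includes "i" and "m"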
However, the use of & has the side effect of causing the shell not to wait for command termination (even if job control is disabled). From the manpage:
If a command is terminated by the control operator &, the shell
executes the command in the background in a subshell. The shell does
not wait for the command to finish, and the return status is 0.
Commands separated by a ; are executed sequentially; the shell waits
for each command to terminate in turn. The return status is the exit
status of the last command executed.
So, in both cases, bash will not wait for command execution, and touch(1) will be executed in the same process group as bash(1).
Now, consider what happens when a session leader exits. Quoting from setpgid(2) manpage:
If a session has a controlling terminal, and the CLOCAL flag for that
terminal is not set, and a terminal hangup occurs, then the session
leader is sent a SIGHUP. If the session leader exits, then a SIGHUP
signal will also be sent to each process in the foreground process
group of the controlling terminal.
(Emphasis mine)
When you don't use -t
When you don't use -t, there is no PTY allocation on the remote side, so bash is not a session leader, and in fact no new session is created. Because sshd is running as a daemon, the bash process that is forked + exec()'d will not have a controlling terminal. As such, even though the shell terminates very quickly (probably before touch(1)), there is no SIGHUP sent to the process group, because bash wasn't a session leader (and there is no controlling terminal). So everything works.
When you use -t
-t forces PTY allocation, which means that the ssh remote side will call setsid(2), allocate a pseudo-terminal + fork a new process with forkpty(3), connect the PTY master device input and output to the socket endpoints that lead to your machine, and finally execute bash(1). forkpty(3) opens the PTY slave side in the forked process that will become bash; since there's no controlling terminal for the current session, and a terminal device is being opened, the PTY device becomes the controlling terminal for the session and bash becomes the session leader.
Then the same thing happens again: touch(1) is executed in the same process group, etc., yadda yadda. The point is, this time, there is a session leader and a controlling terminal. So, since bash does not bother waiting because of the &, when it exits, SIGHUP is delivered to the process group and touch(1) dies prematurely.
About nohup
nohup(1) doesn't work here because there is still a race condition: if bash(1) terminates before nohup(1) has the chance to set up the necessary signal handling and file redirection, it will have no effect (which is probably what happens).
A possible fix
Forcefully re-enabling job control fixes it. In bash, you do that with set -m. This works:
ssh -t localhost 'set -m ; touch foobar &'
Or force bash to wait for touch(1) to complete:
ssh -t localhost 'touch foobar & wait `pgrep touch`'
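Alternatively (my own variant, not part of the original answer), a bare wait makes the non-interactive shell wait for all of its background jobs, so the pgrep lookup isn't needed:

ssh -t localhost 'touch foobar & wait'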
@Filipe Gonçalves's answer is great, but part of it is wrong. I don't have enough reputation to comment there, so I correct/enrich the content here:
When you don't use -t,
@Filipe says:
When you don't use -t, there is no PTY allocation on the remote side, so bash is not a session leader, and in fact no new session is created. ...
Actually, bash is a session leader and a new session is created.
Let us test this:
# run sleep background process first, then call ps directly:
[root@90fb1c3f30ce ~]# ssh localhost 'sleep 66 & ps -o pid,ppid,pgid,sess,tpgid,tty,args'
   PID   PPID   PGID   SESS  TPGID TT       COMMAND
184074     67 184074 184074     -1 ?        sshd: root@notty
184076 184074 184076 184076     -1 ?        bash -c sleep 66 & ps -o pid,ppid,pgid,sess,tpgid,tty,args
184081 184076 184076 184076     -1 ?        sleep 66
184082 184076 184076 184076     -1 ?        ps -o pid,ppid,pgid,sess,tpgid,tty,args
Notice        ^^^^^^ ^^^^^^
We can see that the bash/sleep/ps processes have the same PGID/SESS, which equals the PID 184076 of the bash process, while the parent sshd process has a different PGID/SESS. Here, the bash process is the leader of a new session, and the bash/sleep/ps processes belong to their own process group.
In addition, we can see that the ssh command does not return right away; it still waits about 66 seconds. You can find the reason here: Getting ssh to execute a command in the background on target machine
While the ssh command is waiting, we can open another session and run:
[root@90fb1c3f30ce ~]# ps -eo pid,ppid,pgid,sess,tpgid,tty,args
   PID   PPID   PGID   SESS  TPGID TT       COMMAND
# unrelated lines removed #
184074     67 184074 184074     -1 ?        sshd: root@notty
184081      1 184076 184076     -1 ?        sleep 66
Notice        ^^^^^^ ^^^^^^
[root@90fb1c3f30ce ~]# ps -e | grep 184076
[root@90fb1c3f30ce ~]#
We can see the bash process (pid 184076) is already gone, but the PGID/SESS of the background sleep process is unchanged. That doesn't matter; quoting APUE section 9.4:
Each process group can have a process group leader. The leader is identified by its process group ID being equal to its process ID.
It is possible for a process group leader to create a process group, create processes in the group, and then terminate. The process group still exists, as long as at least one process is in the group, regardless of whether the group leader terminates.
So, why doesn't this sleep process die?
When you don't use -t, there is no PTY allocation on the remote side, so the process group on the remote side is not a foreground process group (without a terminal, foreground and background have no meaning). As such, even though the shell terminates very quickly, no SIGHUP is sent to its process group, because that group is not the foreground process group of a controlling terminal. (A SIGHUP signal is sent to each process in the foreground process group of the controlling terminal.)
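You can see the contrast by repeating the experiment with a forced PTY (my own check, output omitted): this time bash is a session leader with a controlling terminal, so when it exits the background sleep is hung up along with the rest of the foreground process group:

ssh -t localhost 'sleep 66 & ps -o pid,ppid,pgid,sess,tpgid,tty,args'
ssh localhost 'pgrep sleep'    # should print nothing this time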
The key is decoupling the stdin/stdout/stderr streams of the child process from the originating bash/ssh session; then pseudo-tty allocation (ssh -t) is no longer required to allow the child to survive the termination of the ssh connection. See here for a complete answer...
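A minimal sketch of that decoupling (the command name and log path are placeholders):

ssh myhost 'mycommand </dev/null >/tmp/mycommand.log 2>&1 &'
# With stdin, stdout and stderr detached from the ssh session, the remote
# shell can exit immediately and mycommand keeps running, no -t needed.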
From myhost.mydomain.com, I start a nc listener. Then I log in to another host and start a netcat push back to my host:
nc -l 9999 > data.gz &
ssh repo.mydomain.com "cat /path/to/file.gz | nc myhost.mydomain.com 9999"
These two commands are part of a script. Only 32K bytes are sent to the host before the ssh command terminates; the nc listener then gets an EOF and terminates as well.
When I run the ssh command on the command line (i.e. not as part of the script) on myhost.mydomain.com the complete file is downloaded. What's going on?
I think there is something else that happens in your script which causes this effect. For example, if you run the second command in the background as well and terminate the script, your OS might kill the background commands during script cleanup.
Also look for set -o pipefail (typically combined with set -e), which can make the script abort when a command in a pipeline exits with a non-zero status.
On a second note, the approach looks overly complex to me. Try to reduce it to
ssh repo.mydomain.com "cat /path/to/file.gz" > data.gz
(ssh connects the stdout of the remote command to the local stdout). It's clearer when you write it like this:
ssh > data.gz repo.mydomain.com "cat /path/to/file.gz"
That way, you can get rid of nc. As far as I know, nc is synchronous, so the second invocation (which sends the data) should only return after all the data has been sent and flushed.
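Either way, a quick end-to-end check (a sketch; the two sums simply have to match) will tell you whether the whole file made it across:

ssh repo.mydomain.com "md5sum /path/to/file.gz"
md5sum data.gz    # should print the same checksum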
I have a previously running process (process1.sh) that is running in the background with a PID of 1111 (or some other arbitrary number). How could I send something like command option1 option2 to that process with a PID of 1111?
I don't want to start a new process1.sh!
Named Pipes are your friend. See the article Linux Journal: Using Named Pipes (FIFOs) with Bash.
Based on the answers:
Writing to stdin of background process
Accessing bash command line args $@ vs $*
Why my named pipe input command line just hangs when it is called?
Can I redirect output to a log file and background a process at the same time?
I wrote two shell scripts to communicate with my game server.
This first script is run when the computer starts up. It starts the server and configures it to read/receive my commands while it runs in the background:
start_czero_server.sh
#!/bin/sh
# Go to the game server application folder where the game application `hlds_run` is
cd /home/user/Half-Life
# Set up a pipe named `/tmp/srv-input`
rm -f /tmp/srv-input
mkfifo /tmp/srv-input
# At least one process must keep the fifo open for writing so that the server
# never receives an EOF on it.
cat > /tmp/srv-input &
# The PID of this `cat` command is saved in the /tmp/srv-input-cat-pid file
# for a later kill.
#
# To send an EOF to your server, kill the `cat > /tmp/srv-input` process
# whose PID has been saved in the `/tmp/srv-input-cat-pid` file.
echo $! > /tmp/srv-input-cat-pid
# Start the server reading from the pipe named `/tmp/srv-input`
# And also output all its console to the file `/home/user/Half-Life/my_logs.txt`
#
# Replace the `./hlds_run -console -game czero +port 27015` by your application command
./hlds_run -console -game czero +port 27015 > my_logs.txt 2>&1 < /tmp/srv-input &
# Successful execution
exit 0
This second script is just a wrapper which allows me to easily send commands to my server:
send.sh
#!/bin/sh
half_life_folder="/home/jack/Steam/steamapps/common/Half-Life"
half_life_pid_tail_file_name=hlds_logs_tail_pid.txt
half_life_pid_tail="$(cat $half_life_folder/$half_life_pid_tail_file_name)"
if ps -p $half_life_pid_tail > /dev/null
then
echo "$half_life_pid_tail is running"
else
echo "Starting the tailing..."
tail -2f $half_life_folder/my_logs.txt &
echo $! > $half_life_folder/$half_life_pid_tail_file_name
fi
echo "$#" > /tmp/srv-input
sleep 1
exit 0
Now every time I want to send a command to my server I just do on the terminal:
./send.sh mp_timelimit 30
This script lets me keep tailing the server output on my current terminal, because every time I send a command it checks whether a tail process is already running in the background. If not, it starts one, and whenever the server prints output I can see it on the terminal I used to send the command, just like applications you run with the & operator.
You could also keep another terminal open just to watch the server console. To do that, use the tail command with the -f flag to follow the server's console output:
tail -f /home/user/Half-Life/my_logs.txt
If you don't want to be limited to signals, your program must support one of the inter-process communication (IPC) methods. See the corresponding Wikipedia article.
A simple method is to make it listen for commands on a Unix domain socket.
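For example, if your netcat is the OpenBSD variant (which supports Unix sockets via -U), a rough sketch of such a command channel could look like this (socket path and command handling are placeholders):

# Server side: keep listening on a Unix socket and act on each line received.
rm -f /tmp/process1.sock
nc -klU /tmp/process1.sock | while read -r cmd args; do
    echo "received: $cmd $args"    # replace with real command handling
done

# Client side: send "command option1 option2" to the running listener.
# Depending on the nc flavor you may need -N (or -q 0) so the client exits
# once stdin is exhausted.
echo "command option1 option2" | nc -U /tmp/process1.sock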
For how to send commands to a server via a named pipe (fifo) from the shell see here:
Redirecting input of application (java) but still allowing stdin in BASH
How do I use exec 3>myfifo in a script, and not have echo foo>&3 close the pipe?
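Both links boil down to the same trick used in start_czero_server.sh above: keep one write descriptor on the fifo open for the whole lifetime of the server so that individual writers do not deliver an EOF. A compact sketch (my_server is a placeholder; opening a FIFO read-write without blocking is Linux behaviour, POSIX leaves it unspecified):

mkfifo /tmp/srv-input
exec 3<>/tmp/srv-input      # read-write open: does not block, keeps a writer alive
./my_server </tmp/srv-input &
echo "status" >&3           # any number of commands can be written via fd 3
echo "mp_timelimit 30" >&3
exec 3>&-                   # closing fd 3 is what finally sends EOF to the server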
You can use bash's coproc command (available only in 4.0+); it's like ksh's |&.
Check this for examples: http://wiki.bash-hackers.org/syntax/keywords/coproc
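A minimal coproc sketch (bash 4+; bc here is just a stand-in for any long-running process you want to talk to over stdin/stdout):

coproc CALC { bc -l; }
echo '2^10' >&"${CALC[1]}"     # write to the coprocess's stdin
read -r result <&"${CALC[0]}"  # read one line of its output
echo "$result"                 # prints 1024
kill "$CALC_PID"               # done with the coprocess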
You can't send new arguments to a running process.
But if you are implementing this process yourself, or it's a process that can take arguments from a pipe, then the other answers would help.