starting a job remotely with bash [duplicate]

This question already has answers here:
Getting ssh to execute a command in the background on target machine
(19 answers)
Closed 6 years ago.
I'm trying to run a bash script on a remote machine, and I'd like it to return immediately after starting the script in the background on the remote machine. For instance:
$ cat foo.txt
sleep 2000 &
then when I tried to do:
$ ssh x.x.x.x 'bash -s' < foo.txt
the command never returns. Is there a way to make it return while sleep runs in the background on the remote machine?

Maybe try changing foo.txt to:
sleep 2000 >&- 2>&- <&- &
>&- means close stdout.
2>&- means close stderr.
<&- means close stdin.
& means run in the background
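Putting it together, a minimal sketch of the whole flow (assuming foo.txt now contains the adjusted sleep line):
$ cat foo.txt
sleep 2000 >&- 2>&- <&- &
$ ssh x.x.x.x 'bash -s' < foo.txt
The ssh command now returns right away, because nothing on the remote side keeps the connection's stdout, stderr, or stdin open while sleep carries on in the background.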

Execute a script through ssh and store its pid in a file on the remote machine [duplicate]

This question already has answers here:
How to pass argument with exclamation mark on Linux?
(3 answers)
Closed 3 years ago.
I am not able to store the PID in a file on the remote machine when running a script in the background through ssh.
I need to store the PID of the script process in a file so that I can kill it whenever needed. The exact same command works when run directly on the remote machine, so why does it not work through ssh?
What is wrong with the following command:
ssh user@remote_machine "nohup ./script.sh > /dev/null 2>&1 & echo $! > ./pid.log"
Result: The file pid.log is created but empty.
Expected: The file pid.log should contain the PID of the running script.
Use
ssh user@remote_machine 'nohup ./script.sh > /dev/null 2>&1 & echo $! > ./pid.log'
OR
ssh user@remote_machine "nohup ./script.sh > /dev/null 2>&1 & echo \$! > ./pid.log"
Issue:
Your $! was getting expanded locally, before ssh was called at all.
Worse: if a process had been started in the background before the ssh command was run, $! would have expanded to that process's PID, and the complete ssh command would have contained that PID as the argument to echo.
e.g.
$ ls &
[1] 12342 <~~~~ 12342 is the PID of ls
$ <~~~~ Prompt returns immediately because ls was started in the background.
myfile1 myfile2 <~~~~ Output of ls.
[1]+ Done ls
#### At this point, $! contains 12342
$ ssh user@remote "command & echo $! > pidfile"
# before even calling ssh, shell internally expands it to:
$ ssh user@remote "command & echo 12342 > pidfile"
And it will put the wrong PID in the pidfile.
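Once the correct PID lands in pid.log, stopping the script later is a one-liner; a sketch reusing the same paths as above:
ssh user@remote_machine 'kill "$(cat ./pid.log)"'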

Run two xterm commands at the same time in BASH [duplicate]

This question already has answers here:
How do you run multiple programs in parallel from a bash script?
(19 answers)
Closed 4 years ago.
I am trying to run two different programs in xterm windows from the same automation script. My current code looks like this:
#!/bin/bash
echo "STARTING PROGRAM ONE"
# change into correct directory
cd ~/myProjects/ProgramOne
xterm -e myProg1 -a P1 &> /tmp/ProgramOne/P1.txt
echo "STARTING PROGRAM TWO"
# change into correct directory
cd ~/myProjects/ProjectTwo
xterm -e myProg2 -a P2 &> /tmp/ProgramTwo/P2.txt
# Code to kill the xterm process?
echo "******************************************"
echo "START AUTOMATION COMPLETE"
echo "******************************************"
What I am looking to accomplish is to have two separate programs, in different directories, run in two different xterm windows so I can demonstrate to the end user that the programs are running appropriately.
Currently, the first program executes fine, and when I Ctrl + C it, the second kicks off just fine. However, I would like both to execute at the same time.
I have looked at a few resources on SO but have not found anything to help me with this problem.
I am on a CentOS7 system, trying to automate this process. Any help or advice would be great.
Thanks!
Start them in the background and wait for them to finish:
#!/bin/bash
echo "STARTING PROGRAM ONE"
# change into correct directory
cd ~/myProjects/ProgramOne
xterm -e myProg1 -a P1 &> /tmp/ProgramOne/P1.txt &
echo "STARTING PROGRAM TWO"
# change into correct directory
cd ~/myProjects/ProjectTwo
xterm -e myProg2 -a P2 &> /tmp/ProgramTwo/P2.txt &
# Code to kill the xterm process?
wait
echo "******************************************"
echo "START AUTOMATION COMPLETE"
echo "******************************************"
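If you would rather end the demo from the script instead of waiting for the user to close both windows, a variant sketch is to keep the xterm PIDs and kill them after a fixed time (the 60-second delay is only an illustrative placeholder):
xterm -e myProg1 -a P1 &> /tmp/ProgramOne/P1.txt &
pid1=$!
xterm -e myProg2 -a P2 &> /tmp/ProgramTwo/P2.txt &
pid2=$!
# let the demo run for a while, then close both windows
sleep 60
kill "$pid1" "$pid2"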

How to put a command in nohup in shell script? [duplicate]

This question already has answers here:
How to include nohup inside a bash script?
(6 answers)
Closed 4 years ago.
I am trying to put a shred command in nohup in a shell script: once the script is run, it has to take input and run the shred operation on a device in nohup mode. The problem is that when I add nohup to the command, the script does not exit and run the command in the background. I am also trying to send success and failure mails with the output once the shred operation is completed.
What I tried so far:
nohup shred -n 2 "device name" > success.out 2>failure.out
if [ $? -eq 0 ]
then
mail -s "success" -a success.out "EmailID" <.
exit 0
else
mail -s "failure" -a failure.out "EmailID" <.
exit 1
fi
I am getting the success email with the attachment, but the attachment is in an unreadable format. Is there any other way?
"the script does not exit and run the command in background".
You do not tell the script to background it. Add &.
So:
nohup shred -n 2 "device name" > success.out 2> failure.out &
Will do it.
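Note that once shred is backgrounded, $? no longer reflects its exit status, so the mail logic has to move into the backgrounded part as well. A sketch, reusing the file names and the "EmailID" placeholder from the question (and assuming an empty mail body via < /dev/null):
nohup bash -c '
if shred -n 2 "device name" > success.out 2> failure.out; then
    mail -s "success" -a success.out "EmailID" < /dev/null
else
    mail -s "failure" -a failure.out "EmailID" < /dev/null
fi
' > /dev/null 2>&1 &
This way the script itself returns immediately, and the mail is sent whenever the shred finishes.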

Linux bash nohup + background in a for loop hang [duplicate]

This question already has answers here:
Getting ssh to execute a command in the background on target machine
(19 answers)
Closed 7 years ago.
Good morning! I'm trying to start a script on multiple servers with nohup and keep it running in the background.
The command (provided by my vendor) runs as expected when inputting directly on each server (the python script scrapes the log file and sends relevant info to another server via UDP):
nohup tail -f /log/log.log | python /test/deliver.py > /dev/null 2>&1 &
However, when placing into for loop to reach many servers, I must press Ctrl-C between each server to keep the loop going.
Please assist if possible:
for i in `cat /etc/hosts`; do ssh $i nohup tail -f /log/log.log | python /test/deliver.py > /dev/null 2>&1 &; done
Solution (Thank you all for the help):
ssh -f user@host "cd /whereever; nohup ./whatever > /dev/null 2>&1 &"
Had to use the -f in combination with double quotes as described here:
Getting ssh to execute a command in the background on target machine
Like Etan Reisner already pointed out in comments, you need to run the entire command remotely. Additionally, you will need to detach standard input from the ssh command line.
Finally, the for loop is bad form; use while to read from a line-oriented input file.
while read -r host; do
ssh "$host" 'nohup tail -f /log/log.log |
python /test/deliver.py > /dev/null 2>&1 &' </dev/null
done </etc/hosts
(though on the machines I have seen, the format of /etc/hosts is not really suitable for this sort of processing. Perhaps only read the first field and discard the others? That's easy; while read -r host _; do...)
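Putting that together with the -f flag from the accepted solution, a sketch that reads only the first field of each line:
while read -r host _; do
    ssh -f "$host" 'nohup tail -f /log/log.log | python /test/deliver.py > /dev/null 2>&1 &'
done </etc/hosts
(-f implies -n, so ssh does not read from the loop's stdin and the loop keeps going.)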
I'm not having any issues with this:
$ for i in {1..5}; do ssh -i me@ip nohup sleep 10 >/dev/null 2>&1 &
=> done
[7] 34057
[8] 34058
[9] 34059
[10] 34060
[11] 34061
$ #
[7] Done
[8] Done
[9] Done
[10] Done
[11] Done

Starting a process over ssh using bash and then killing it on sigint

I want to start a couple of jobs on different machines using ssh. If the user then interrupts the main script I want to shut down all the jobs gracefully.
Here is a short example of what I'm trying to do:
#!/bin/bash
trap "aborted" SIGINT SIGTERM
aborted() {
kill -SIGTERM $bash2_pid
exit
}
ssh -t remote_machine /foo/bar.sh &
bash2_pid=$!
wait
However, the bar.sh process is still running on the remote machine. If I do the same commands in a terminal window it shuts down the process on the remote host.
Is there an easy way to make this happen when I run the bash script? Or do I need to make it log on to the remote machine, find the right process and kill it that way?
edit:
Seems like I have to go with option B, killing the remote script through another ssh connection.
So now I want to know: how do I get the remote PID?
I've tried something along the lines of:
remote_pid=$(ssh remote_machine '{ /foo/bar.sh & } ; echo $!')
This doesn't work since it blocks.
How do I wait for a variable to print and then "release" a subprocess?
It would definitely be preferable to keep your cleanup managed by the ssh that starts the process rather than moving in for the kill with a second ssh session later on.
When ssh is attached to your terminal, it behaves quite well. However, detach it from your terminal and it becomes (as you've noticed) a pain to signal or manage remote processes. You can shut down the link, but not the remote processes.
That leaves you with one option: Use the link as a way for the remote process to get notified that it needs to shut down. The cleanest way to do this is by using blocking I/O. Make the remote read input from ssh and when you want the process to shut down; send it some data so that the remote's reading operation unblocks and it can proceed with the cleanup:
command & read; kill $!
This is what we would want to run on the remote. We invoke our command that we want to run remotely; we read a line of text (blocks until we receive one) and when we're done, signal the command to terminate.
To send the signal from our local script to the remote, all we need to do now is send it a line of text. Unfortunately, Bash does not give you a lot of good options, here. At least, not if you want to be compatible with bash < 4.0.
With bash 4 we can use co-processes:
coproc ssh user@host 'command & read; kill $!'
trap 'echo >&"${COPROC[1]}"' EXIT
...
Now, when the local script exits (don't trap on INT, TERM, etc., just EXIT) it sends a newline to the file descriptor in the second element of the COPROC array. That descriptor is a pipe connected to ssh's stdin, effectively routing our line to ssh. The remote command reads the line, the read ends, and it kills the command.
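A minimal self-contained sketch of the shape this could take (the remote command is just a placeholder):
#!/bin/bash
# bash >= 4: start the remote job through a co-process
coproc ssh user@host 'some_long_running_command & read; kill $!'
# on exit (including an interrupt), send a newline so the remote read unblocks
trap 'echo >&"${COPROC[1]}"' EXIT
# ... local work ...
wait "$COPROC_PID"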
Before bash 4 things get a bit harder since we don't have co-processes. In that case, we need to do the piping ourselves:
mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT
This should work in pretty much any bash version.
Try this:
ssh -tt host command </dev/null &
When you kill the local ssh process, the remote pty will close and SIGHUP will be sent to the remote process.
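Applied to the script from the question, that could look something like this (a sketch; remote_machine and /foo/bar.sh as in the original):
#!/bin/bash
trap 'kill $ssh_pid' INT TERM
ssh -tt remote_machine /foo/bar.sh </dev/null &
ssh_pid=$!
wait
Killing the local ssh on interrupt closes the remote pty, and the remote side delivers SIGHUP to bar.sh.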
Referencing the answer by lhunath and https://unix.stackexchange.com/questions/71205/background-process-pipe-input I came up with this script
run.sh:
#!/bin/bash
log="log"
eval "$@" \&
PID=$!
echo "running" "$@" "in PID $PID" > $log
{ (cat <&3 3<&- >/dev/null; kill $PID; echo "killed" >> $log) & } 3<&0
trap "echo EXIT >> $log" EXIT
wait $PID
The difference being that this version kills the process when the connection is closed, but also returns the exit code of the command when it runs to completion.
$ ssh localhost ./run.sh true; echo $?; cat log
0
running true in PID 19247
EXIT
$ ssh localhost ./run.sh false; echo $?; cat log
1
running false in PID 19298
EXIT
$ ssh localhost ./run.sh sleep 99; echo $?; cat log
^C130
running sleep 99 in PID 20499
killed
EXIT
$ ssh localhost ./run.sh sleep 2; echo $?; cat log
0
running sleep 2 in PID 20556
EXIT
For a one-liner:
ssh localhost "sleep 99 & PID=\$!; { (cat <&3 3<&- >/dev/null; kill \$PID) & } 3<&0; wait \$PID"
For convenience:
HUP_KILL="& PID=\$!; { (cat <&3 3<&- >/dev/null; kill \$PID) & } 3<&0; wait \$PID"
ssh localhost "sleep 99 $HUP_KILL"
Note: kill 0 may be preferred to kill $PID depending on the behavior needed with regard to spawned child processes. You can also kill -HUP or kill -INT if you desire.
Update:
A secondary job control channel is better than reading from stdin.
ssh -n -R9002:localhost:8001 -L8001:localhost:9001 localhost ./test.sh sleep 2
Set job control mode and monitor the job control channel:
set -m
trap "kill %1 %2 %3" EXIT
(sleep infinity | netcat -l 127.0.0.1 9001) &
(netcat -d 127.0.0.1 9002; kill -INT $$) &
"$@" &
wait %3
Finally, here's another approach and a reference to a bug filed on openssh:
https://bugzilla.mindrot.org/show_bug.cgi?id=396#c14
This is the best way I have found to do this. You want something on the server side that attempts to read stdin and then kills the process group when that fails, but you also want a stdin on the client side that blocks until the server side process is done and will not leave lingering processes like <(sleep infinity) might.
ssh localhost "sleep 99 < <(cat; kill -INT 0)" <&1
It doesn't actually seem to redirect stdout anywhere but it does function as a blocking input and avoids capturing keystrokes.
The solution for bash 3.2:
mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT
doesn't work. The ssh command is not on the ps list on the "client" machine. Only after I echo something into the pipe will it appear in the process list of the client machine. The process that appears on the "server" machine would just be the command itself, not the read/kill part.
Writing again into the pipe does not terminate the process.
So summarizing, I need to write into the pipe for the command to start up, and if I write again, it does not kill the remote command, as expected.
You may want to consider mounting the remote file system and running the script from the master box. For instance, if your kernel is compiled with fuse (you can check with the following):
/sbin/lsmod | grep -i fuse
You can then mount the remote file system with the following command:
sshfs user@remote_system: mount_point
Now just run your script on the file located in mount_point.
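When you are done, unmount it again with fusermount (part of fuse):
fusermount -u mount_point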
