Here is the piece of code from a shell script that is causing the problem.
LOG_FILE="/home/sample.log"
PID_FILE="/home/sample.pid"
sudo -u user1 trinidad -e production > "$LOG_FILE" 2>&1 & echo $! > "$PID_FILE"
PARENT_PID=`cat "$PID_FILE"`
pgrep -P "$PARENT_PID" > "$PID_FILE"
But here the last command does not print anything to PID_FILE. So for debugging purposes I tried echoing $PARENT_PID, and it correctly prints output like 1234.
Also, in the shell script, if I do pgrep -P 1234 it prints the child process correctly, but if I do pgrep -P $PARENT_PID it prints nothing.
You are writing stuff into a file and then reading the file back in. While that is merely wasteful rather than an actual explanation of your problem, I would refactor it to:
LOG_FILE="/home/sample.log"
PID_FILE="/home/sample.pid"
sudo -u user1 trinidad -e production > "$LOG_FILE" 2>&1 &
PARENT_PID=$!
pgrep -P "$PARENT_PID" > "$PID_FILE"
I'm guessing your actual problem is that the sudo process doesn't spawn any children. The action of pgrep -P is to print processes which are children of the PID you specify; if your process doesn't spawn any children, it won't print any.
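As a quick sanity check (just a sketch; PARENT_PID is the variable from the refactored snippet above), you can compare pgrep against ps to see whether any children exist at all:
ps --ppid "$PARENT_PID" -o pid,cmd      # lists children of the sudo process, if any
pgrep -P "$PARENT_PID" || echo "no child processes found"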
I've done my homework, but I think I may be mixing apples and oranges here. My script is designed to run a remote inline series of commands, exit, and then run some additional LOCAL commands. It has to be done remotely first, as these services are for a fail-over agent. The problem is that after the remote ssh line disconnects, the entire script just stops. I'm not sure why the disconnect is halting the entire script. Perhaps the exit line is to blame?
#!/bin/bash
#
### Run remote svc restarts and then Local restarts
#
exec ssh -t REMOTEHOST 'stop svc1; restart svc2; start svc3; exit'
(SCRIPT FAILS HERE)
## Run local shell (This works independently, but not in the entire script)
rst=`pgrep -n failoversvc`
echo "Stopping 1st service at `date | awk '{print $2,$3,$4}'`" && service 1 stop >> SYNCLOG.txt
sleep 2
echo "Restarting 2nd service at `date | awk '{print $2,$3,$4}'`" && service 2 restart >> SYNCLOG.txt
if [ "$rst" = "" ]; then
echo "Starting 3rd service at `date | awk '{print $2,$3,$4}'`" && service 3 start >> SYNCLOG.txt
else
echo "3rd Service PID not found! Check for functionality"
fi
I took a look at THIS, but I wasn't able to get the results I was looking for.
exec is a very brutal command: it completely replaces the current process (in this case, your shell that's running the script) with the command you specify. Unless exec fails, nothing after that line in your script will ever run. This is by design; that's what exec is for.
If you want your script to continue after the ssh, simply remove exec.
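Applied to the script in the question, that is literally a one-word change; the remote commands stay the same and the local restarts below it will now run once ssh returns:
### Run remote svc restarts and then Local restarts
ssh -t REMOTEHOST 'stop svc1; restart svc2; start svc3; exit'
## execution now continues here with the local commands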
I am running a minecraft server in a screen session. I am also using a named pipe in order to send commands to the minecraft server from other scripts.
I can see output from the server in the screen session, but I cannot enter any commands there. I expected this anyway, since I am taking input from the named pipe.
Here's the line I run to start everything:
screen -S minecraft sh startup.sh
Here's startup.sh:
#!/bin/bash
rm mct
if [ ! -p mct ]; then
mkfifo mct && chmod 0777 mct
fi
tail -f mct | java -Xincgc -Xmx2048M -jar minecraft_server.jar
I want to be able to enter commands from the screen session and from the named pipe. Is there a way I can accomplish this? I'm just now messing around with bash scripts and have been learning a lot today. I just can't figure out how to do this.
One approach is to run your tail -f mct concurrently with a command that reads from the console and writes to the same anonymous pipe:
( tail -f mct & cat ) | java -Xincgc -Xmx2048M -jar minecraft_server.jar
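Commands can then reach the server either by typing in the screen session or by writing to the named pipe from another script, for example (assuming mct is in the server's working directory and using an ordinary server console command):
echo "say Backup starting in 5 minutes" > mct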
I'd like to execute several commands in sequence on a remote machine, and some of the later commands depend on earlier ones. In the simplest possible example I get this:
ssh my_server "echo this is my_server; abc=2;"
this is my_server
abc=2: Command not found.
ssh my_server "echo this is my_server; abc=2; echo abc is $abc"
abc: undefined variable
For a bit of background info, what I actually want to do is piece together a path and launch a java application:
ssh my_server 'nohup sh -c "( ( echo this is my_server; jabref_exe=`which jabref`; jabref_dir=`dirname $jabref_exe`; java -jar $jabref_dir/../jabref.jar` $1 &/dev/null ) & )"' &
jabref_dir: Undefined variable.
That way, whenever jabref gets updated to a new version on the server, I won't have to manually update the path to the jar file. The jabref executable doesn't take arguments, but launching it with java -jar does, which is why I have to juggle the path a bit.
At the moment I have the list of commands in a separate script file and call
ssh my_server 'nohup sh -c "( ( my_script.sh &/dev/null ) & )"' &
which works, but since the ssh call is already inside one script file it would be nice to have everything together.
In this example
ssh my_server "echo this is my_server; abc=2;"
abc is set on the remote side, so it should be clear why it is not set on your local machine.
In the next example,
ssh my_server "echo this is my_server; abc=2; echo abc is $abc"
your local shell tries to expand $abc in the argument before it is ever sent to the remote host. A slight modification would work as you expected:
ssh my_server 'echo this is my_server; abc=2; echo abc is $abc'
The single quotes prevent your local shell from trying to expand $abc, and so the literal text makes it to the remote host.
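A quick way to see the difference between the two quoting styles (a sketch; any reachable host will do in place of my_server):
abc=local
ssh my_server "echo $abc"   # double quotes: expanded by your local shell, so the remote just echoes "local"
ssh my_server 'echo $abc'   # single quotes: sent literally, so the remote shell does (or fails) the expansion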
To finally address your real question, try this:
jabref_dir=$( ssh my_server 'jabref_exe=$(which jabref); jabref_dir=$(dirname $jabref_exe);
java -jar $jabref_dir/../jabref.jar > /dev/null; echo $jabref_dir' )
This will run the quoted string as a command on your remote server, and output exactly one string: $jabref_dir. That string is captured and stored in a variable on your local host.
With some inspiration from chepner, I now have a solution that works, but only when called from a bash shell or bash script. It doesn't work from tcsh.
ssh my_server "bash -c 'echo this is \$HOSTNAME; abc=2; echo abc is \$abc;'"
Based on this, the code below is a local script which runs jabref on a remote server (although with X-forwarding by default and passwordless authentication the user can't tell it's remote):
#!/bin/bash
if [ -f "$1" ]
then
fname_start=$(echo ${1:0:4})
if [ "$fname_start" = "/tmp" ]
then
scp $1 my_server:$1
ssh my_server "bash -c 'source load_module jdk; source load_module jabref; java_exe=\$(which java); jabref_exe=\$(which jabref); jabref_dir=\$(echo \${jabref_exe%/bin/jabref});eval \$(java -jar \$jabref_dir/jabref.jar $1)'" &
else
echo input argument must be a file in /tmp.
fi
else
echo this function requires 1 argument
fi
And this is the one-line script load_module, since modulecmd sets environment variables and I couldn't figure out how to do that without sourcing a script.
eval `/path/to/modulecmd bash load $1`;
I also looked at heredocs, inspired by How to use SSH to run a shell script on a remote machine? and http://tldp.org/LDP/abs/html/here-docs.html. The nice part is that it works even from tcsh. I got this working from the command line, but not inside a script. That's probably easy enough to fix, but I've got a solution now so I'm happy :-)
ssh my_server 'bash -s' << EOF
echo this is \$HOSTNAME; abc=2; echo abc is \$abc;
EOF
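For reference, the same heredoc inside a bash script might look like this (a sketch; the quoted delimiter means the remote variables no longer need backslash-escaping):
#!/bin/bash
ssh my_server 'bash -s' << 'EOF'
echo this is $HOSTNAME
abc=2
echo abc is $abc
EOF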
This is a follow-on question to the How do you use ssh in a shell script? question. If I want to execute a command on the remote machine that runs in the background on that machine, how do I get the ssh command to return? When I try to just include the ampersand (&) at the end of the command it just hangs. The exact form of the command looks like this:
ssh user@target "cd /some/directory; program-to-execute &"
Any ideas? One thing to note is that logins to the target machine always produce a text banner and I have SSH keys set up so no password is required.
I had this problem in a program I wrote a year ago -- turns out the answer is rather complicated. You'll need to use nohup as well as output redirection, as explained in the Wikipedia article on nohup, copied here for your convenience.
Nohupping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:
nohup myprogram > foo.out 2> foo.err < /dev/null &
This has been the cleanest way to do it for me:
ssh -n -f user@host "sh -c 'cd /whereever; nohup ./whatever > /dev/null 2>&1 &'"
The only thing running after this is the actual command on the remote machine
Redirect fd's
Output needs to be redirected with &>/dev/null which redirects both stderr and stdout to /dev/null and is a synonym of >/dev/null 2>/dev/null or >/dev/null 2>&1.
Parentheses
The best way is to use sh -c '( ( command ) & )' where command is anything.
ssh askapache 'sh -c "( ( nohup chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
Nohup Shell
You can also use nohup directly to launch the shell:
ssh askapache 'nohup sh -c "( ( chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
Nice Launch
Another trick is to use nice to launch the command/shell:
ssh askapache 'nice -n 19 sh -c "( ( nohup chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
If you don't/can't keep the connection open you could use screen, if you have the rights to install it.
user@localhost $ screen -t remote-command
user@localhost $ ssh user@target # now inside of a screen session
user@remotehost $ cd /some/directory; program-to-execute &
To detach the screen session: ctrl-a d
To list screen sessions:
screen -ls
To reattach a session:
screen -d -r remote-command
Note that screen can also create multiple shells within each session. A similar effect can be achieved with tmux.
user@localhost $ tmux
user@localhost $ ssh user@target # now inside of a tmux session
user@remotehost $ cd /some/directory; program-to-execute &
To detach the tmux session: ctrl-b d
To list tmux sessions:
tmux list-sessions
To reattach a session:
tmux attach -t <session number>
The default tmux control key, 'ctrl-b', is somewhat difficult to use but there are several example tmux configs that ship with tmux that you can try.
I just wanted to show a working example that you can cut and paste:
ssh REMOTE "sh -c \"(nohup sleep 30; touch nohup-exit) > /dev/null &\""
You can do this without nohup:
ssh user@host 'myprogram >out.log 2>err.log &'
The quickest and easiest way is to use the 'at' command:
ssh user@target "at now -f /home/foo.sh"
I think you'll have to combine a couple of these answers to get what you want. If you use nohup in conjunction with the semicolon, and wrap the whole thing in quotes, then you get:
ssh user#target "cd /some/directory; nohup myprogram > foo.out 2> foo.err < /dev/null"
which seems to work for me. With nohup, you don't need to append the & to the command to be run. Also, if you don't need to read any of the output of the command, you can use
ssh user#target "cd /some/directory; nohup myprogram > /dev/null 2>&1"
to redirect all output to /dev/null.
This worked for me many times:
ssh -x remoteServer "cd yourRemoteDir; ./yourRemoteScript.sh </dev/null >/dev/null 2>&1 & "
You can do it like this...
sudo /home/script.sh -opt1 > /tmp/script.out &
It appeared quite convenient for me to have a remote tmux session using the tmux new -d <shell cmd> syntax like this:
ssh someone@elsewhere 'tmux new -d sleep 600'
This will launch a new session on the elsewhere host, and the ssh command on the local machine will return to your shell almost instantly. You can then ssh to the remote host and tmux attach to that session. Note that no local tmux is running here, only a remote one!
Also, if you want your session to persist after the job is done, simply add a shell launcher after your command, but don't forget to enclose in quotes:
ssh someone#elsewhere 'tmux new -d "~/myscript.sh; bash"'
Actually, whenever I need to run a command on a remote machine that's complicated, I like to put the command in a script on the destination machine, and just run that script using ssh.
For example:
# simple_script.sh (located on remote server)
#!/bin/bash
cat /var/log/messages | grep <some value> | awk -F " " '{print $8}'
And then I just run this command on the source machine:
ssh user@ip "/path/to/simple_script.sh"
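If you also need the script's output on the local side, you can capture it in a variable as usual (a quick sketch):
result=$(ssh user@ip "/path/to/simple_script.sh")
echo "$result"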
If you run a remote command without allocating a tty, redirecting stdout/stderr works and nohup is not necessary.
ssh user@host 'background command &>/dev/null &'
If you use -t to allocate a tty in order to run an interactive command along with a background command, and the background command is the last command, like this:
ssh -t user@host 'bash -c "interactive command; nohup background command &>/dev/null &"'
It's possible that the background command doesn't actually start. There's a race here:
1. bash exits after nohup starts. As the session leader, bash exiting results in a HUP signal being sent to the nohup process.
2. nohup ignores the HUP signal.
If 1 completes before 2, the nohup process will exit and won't start the background command at all. We need to wait for nohup to start the background command. A simple workaround is to just add a sleep:
ssh -t user@host 'bash -c "interactive command; nohup background command &>/dev/null & sleep 1"'
The question was asked and answered years ago; I don't know if openssh behavior has changed since then. I was testing on:
OpenSSH_8.6p1, OpenSSL 1.1.1g FIPS 21 Apr 2020
I was trying to do the same thing, but with the added complexity that I was trying to do it from Java. So on one machine running java, I was trying to run a script on another machine, in the background (with nohup).
From the command line, here is what worked: (you may not need the "-i keyFile" if you don't need it to ssh to the host)
ssh -i keyFile user@host bash -c "\"nohup ./script arg1 arg2 > output.txt 2>&1 &\""
Note that on my command line, there is one argument after the "-c", which is all in quotes. But for it to work on the other end, it still needs the quotes, so I had to put escaped quotes within it.
From java, here is what worked:
ProcessBuilder b = new ProcessBuilder("ssh", "-i", "keyFile", "user@host", "bash", "-c",
    "\"nohup ./script arg1 arg2 > output.txt 2>&1 &\"");
Process process = b.start();
// then read from process.getInputStream() and close it.
It took a bit of trial & error to get this working, but it seems to work well now.
YOUR-COMMAND &> YOUR-LOG.log &
This should run the command and assign it a process id. You can simply tail -f YOUR-LOG.log to see results written to it as they happen. You can log out at any time and the process will carry on.
If you are using zsh, then program-to-execute &! is a zsh-specific shortcut to both background and disown the process, such that exiting the shell will leave it running.
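For example, in a zsh session on the remote host (just a sketch; myprogram and the log path are placeholders):
myprogram > ~/myprogram.log 2>&1 &!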
A follow-on to @cmcginty's concise working example which also shows how to alternatively wrap the outer command in double quotes. This is how the template would look if invoked from within a PowerShell script (which can only interpolate variables from within double quotes and ignores any variable expansion when wrapped in single quotes):
ssh user@server "sh -c `"($cmd) &>/dev/null </dev/null &`""
The inner double quotes are escaped with a back-tick instead of a backslash. This allows $cmd to be composed by the PowerShell script, e.g. for deployment scripts, automation, and the like. $cmd can even contain a multi-line heredoc if composed with Unix LF line endings.
First follow this procedure:
Log in on A as user a and generate a pair of authentication keys. Do not enter a passphrase:
a@A:~> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa):
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/a/.ssh/id_rsa.
Your public key has been saved in /home/a/.ssh/id_rsa.pub.
The key fingerprint is:
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4 a@A
Now use ssh to create a directory ~/.ssh as user b on B. (The directory may already exist, which is fine):
a@A:~> ssh b@B mkdir -p .ssh
b@B's password:
Finally append a's new public key to b@B:.ssh/authorized_keys and enter b's password one last time:
a@A:~> cat .ssh/id_rsa.pub | ssh b@B 'cat >> .ssh/authorized_keys'
b@B's password:
From now on you can log into B as b from A as a without a password:
a@A:~> ssh b@B
Then this will work without entering a password:
ssh b#B "cd /some/directory; program-to-execute &"
I think this is what you need:
First you need to install sshpass on your machine.
Then you can write your own script:
while read pass port user ip; do
sshpass -p$pass ssh -p $port $user@$ip <<ENDSSH1
COMMAND 1
.
.
.
COMMAND n
ENDSSH1
done <<____HERE
PASS PORT USER IP
. . . .
. . . .
. . . .
PASS PORT USER IP
____HERE