Get the PID of a process started with nohup via ssh - bash

I want to start a process using nohup on a remote machine via ssh. The problem is how to get the PID of the process started with nohup, so the "process actually doing something", not some outer shell instance or the like. Also, I want to store stdout and stderr in files, but that is not the issue here...
Locally, it works flawlessly using
nohup sleep 30 > out 2> err < /dev/null & echo $!
It echoes the exact PID of the command "sleep 30", which I can also see using "top" or "ps aux | grep sleep".
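As a sanity check (assuming a ps that supports the -p and -o options), the PID can also be verified directly instead of grepping:
nohup sleep 30 > out 2> err < /dev/null & echo $!
ps -p "$!" -o pid,comm   # should list exactly the sleep process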
But I'm having trouble doing it remotely via ssh. I tried something like
ssh remote_machine 'nohup bash -c "( ( sleep 30 ) & )" > out 2> err < /dev/null'
but I cannot figure out where to place the "echo $!" so that it is displayed in my local shell. It always shows me the wrong PID, for example that of the "bash" instance, etc.
Does anybody have an idea how to solve this?
EDIT:
OK, the "bash -c" might not be needed here. As Lotharyx pointed out, I get the right PID just fine using
ssh remote 'nohup sleep 30 > out 2> err < /dev/null & echo $!'
but then the problem is that if you substitute "sleep 30" with something that produces output, say "echo Hello World!", that output does not end up in the file "out", neither on the local nor on the remote side. Anybody got an idea why?
EDIT2: My fault! There was just no space left on the other device, that's why the files "out" and "err" stayed empty!
So this is working. In addition, if one wants to call multiple commands in a row, separated by a semicolon (;), one can still use "bash -c", like so:
ssh remote 'nohup bash -c "echo bla;sleep 30;echo blupp" > out 2> err < /dev/null & echo $!'
Then it prints out the PID of the "bash -c" on the local side, which is just fine. (It is impossible in general to get the PID of the "innermost" or "busy" process, because every program can itself spawn new subprocesses; there is no way to find that out.)
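That said, if in a specific case you do want the direct children of the PID you captured, one rough approximation (assuming pgrep is available on the remote machine, and that $pid holds the printed PID) is:
ssh remote "pgrep -P $pid"   # lists the direct children of the bash -c process
This only goes one level down, though, and is inherently racy.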

I tried the following (the local machine is Debian; the remote machine is CentOS), and it worked exactly as I think you're expecting:
~# ssh someone@somewhere 'nohup sleep 30 > out 2> err < /dev/null & echo $!'
someone@somewhere's password:
14193
~#
On the remote machine, I did ps -e, and saw this line:
14193 ? 00:00:00 sleep
So, clearly, on my local machine, the output is the PID of "sleep" executing on the remote machine.
Why are you adding bash to your command when sending it across an SSH tunnel?

Related

Execute a script through ssh and store its pid in a file on the remote machine [duplicate]

This question already has answers here:
How to pass argument with exclamation mark on Linux?
I am not able to store any PID in a file on the remote machine when running a script in the background through ssh.
I need to store the PID of the script process in a file so that I can kill it whenever needed. When running the exact command on the remote machine it works, so why does it not work through ssh?
What is wrong with the following command:
ssh user#remote_machine "nohup ./script.sh > /dev/null 2>&1 & echo $! > ./pid.log"
Result: The file pid.log is created but empty.
Expected: The file pid.log should contain the PID of the running script.
Use
ssh user@remote_machine 'nohup ./script.sh > /dev/null 2>&1 & echo $! > ./pid.log'
OR
ssh user#remote_machine "nohup ./script.sh > /dev/null 2>&1 & echo \$! > ./pid.log"
Issue:
Your $! was getting expanded locally, before ssh was called at all.
Worse: if a process had been started in the background before the ssh command was run, $! would have expanded to that process's PID, and the complete ssh command would have been expanded to contain that PID as the argument to echo.
e.g.
$ ls &
[1] 12342 <~~~~ 12342 is the PID of ls
$ <~~~~ Prompt returns immediately because ls was started in the background.
myfile1 myfile2 <~~~~ Output of ls.
[1]+ Done ls
#### At this point, $! contains 12342
$ ssh user@remote "command & echo $! > pidfile"
# before even calling ssh, the shell internally expands this to:
$ ssh user@remote "command & echo 12342 > pidfile"
And it will put the wrong PID in the pidfile.
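Once the PID lands in pid.log correctly, killing the script later is then simply (a sketch reusing the pid.log path from above):
ssh user@remote_machine 'kill "$(cat ./pid.log)"'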

SSH: Send remote command to local background

So I have a problem similar to how to send ssh job to background.
I have a Windows C# program automated to execute tcpdump on a remote Linux OS using http://sshnet.codeplex.com/. I'm trying to execute tcpdump on the remote Linux machine and leave it running after I disconnect.
I've been doing a lot of debugging using plink, but cannot seem to achieve the desired result. I've tried:
plink root#10.5.1.1 bash -c "tcpdump -i eth0 -w test.cap"
but it holds the SSH client until I press Ctrl+C (which won't work for an automated solution). I've also tried variations of:
plink root#10.5.1.1 bash -c "tcpdump -i eth0 -w test.cap &"
but either the command is not executed at all (test.cap does not exist) or is terminated immediately (test.cap contains 1 line). During testing, I've left a ping going, so the capture should have somthing...
The previously mentioned link solves the problem with screen, but the remote linux os is not configurable and does not have screen. Any suggestions are welcome.
In the latter case, your tcpdump process is probably being aborted when you disconnect. Try:
plink root#10.5.1.1 bash -c "nohup tcpdump -i eth0 -w test.cap &"
See the manpage for nohup. You may also want to consider redirecting stdout and stderr to a file or /dev/null to prevent nohup from writing output to a file:
plink root#10.5.1.1 bash -c "nohup tcpdump -i eth0 -w test.cap >/dev/null 2>&1 &"
I had a similar problem while starting a remote application. This pattern worked for me on Debian servers:
ssh root#server "nohup /usr/local/bin/app -c cfg &; exit"
addition: for another test the above didn't work, ie. the command didn't start on the remote server. Adding a command that returns successfully before the exit seems to work.
ssh root#server "nohup /usr/local/bin/otherapp &; w; exit"
I had a similar situation:
(on a Windows machine) I wanted to create an MS batch script to open an SSH connection to a Raspberry Pi and execute a local script in the background.
I found that combining both Raj's and fahd's answers did the trick for me:
My MS batch script:
plink -load "raspberry Pi" -t -m startCommand.txt
The content of startCommand.txt is as follows:
nohup /home/pi/myscript >/dev/null 2>&1 &
w
exit
The ">/dev/null 2>&1 " is important!
I found out (the hard way) that the RPi's SDcard kept getting full by an extremely large nohup.out file (and with a full SDcard, the RPi couldn't even login properly)
Reasoning:
I used -load to load a saved session in PuTTY (I do this because I am authenticating with public/private keys instead of passwords, but this should be the same as simply typing in the host),
then -t (as recommended by Raj),
then -m to load a list of commands from that file.
Without the "-t" parameter and without the "w" and "exit", my batch script would just run, not execute 'myscript', and close again.
I had the same issue. I had a script in which I had nohup tcpdump .... &. I could not use ssh to run it, as it died when the ssh session finished. The solution I came up with was super simple: I just added sleep 5 to the end of my script, and it works just fine. It seems tcpdump needs a few seconds to go safely into the background before you exit, even with nohup.
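For illustration, the script would look something like this (a sketch; the interface name and capture path are placeholders):
#!/bin/bash
nohup tcpdump -i eth0 -w /tmp/test.cap >/dev/null 2>&1 &
sleep 5   # give tcpdump a moment to detach before the ssh session ends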
I had the same problem, and I found that the "-t" option seems to be important to nohup. I found that nohup wasn't taking effect without the "-t" option.
ssh -t user@remote 'nohup tcpdump -i any -w /tmp/somefile &>/dev/null & sleep 2'
I think that I've nailed it, at least on IBM AIX.
I'm using
ssh -tq user@host "/path/start-tcpdump.ksh"
(authentication is done by public key).
I was having inconsistent results using simple "nohup tcpdump .... &", sometimes it worked, sometimes it did not, sometimes it even blocked and I had to disconnect the session.
So far, this is working ok, I can't really say WHY it is working, but it is...
This is my start-tcpdump.ksh:
#!/usr/bin/ksh
HOST=$(uname -n)
FILTER="port not 22"
(tcpdump -i en1 -w $HOST-en1.cap $FILTER >/dev/null 2>&1 ) &
sleep 2
(tcpdump -i en2 -w $HOST-en2.cap $FILTER >/dev/null 2>&1 ) &
sleep 2
exit 0

Terminating SSH session executed by bash script

I have a script I can run locally to remotely start a server:
#!/bin/bash
ssh user@host.com <<EOF
nohup /path/to/run.sh &
EOF
echo 'done'
After running nohup, it hangs. I have to hit ctrl-c to exit the script.
I've tried adding an explicit exit at the end of the here doc and using the "-t" argument for ssh. Neither works. How do I make this script exit immediately?
EDIT: The client is OSX 10.6, server is Ubuntu.
I think the problem is that nohup can't redirect output when you come in from ssh: it only redirects to nohup.out when it thinks it's connected to a terminal, and I believe the stdin override you have will prevent that, even with -t.
A workaround might be to redirect the output yourself, then the ssh client can disconnect - it's not waiting for the stream to close. Something like:
nohup /path/to/run.sh > run.log &
(This worked for me in a simple test connecting to an Ubuntu server from an OS X client.)
The problem might be that ssh is respecting the POSIX standard when not closing the session if a process is still attached to the tty.
Therefore a solution might be to detach the stdin of the nohup command from the tty:
nohup /path/to/run.sh </dev/null &
See: SSH Hangs On Exit When Using nohup
Yet another approach might be to use ssh -t -t to force pseudo-tty allocation even if stdin isn't a terminal.
man ssh | less -Ip 'multiple -t'
ssh -t -t user@host.com <<EOF
nohup /path/to/run.sh &
EOF
See: BASH spawn subshell for SSH and continue with program flow
Redirecting the stdin of the remote host from a here document while invoking ssh without an explicit command leads to the message: Pseudo-terminal will not be allocated because stdin is not a terminal.
To avoid this message either use ssh's -T switch to tell the remote host there is no need to allocate a pseudo-terminal or explicitly specify a command (such as /bin/sh) for the remote host to execute the commands provided by the here document.
If an explicit command is given to ssh, the default is to provide no login shell in the form of a pseudo-terminal, i.e. there will be no normal login session when a command is specified (see man ssh).
Without a command specified for ssh, on the other hand, the default is to create a pseudo-tty for an interactive login session on the remote host.
- ssh user@host.com <<EOF
+ ssh -T user@host.com <<EOF
+ ssh user@host.com /bin/bash <<EOF
As a rule, ssh -t or even ssh -t -t should only be used if there are commands that expect stdin / stdout to be a terminal (such as top or vim) or if it is necessary to kill the remote shell and its children when the ssh client command finishes execution (see: ssh command unexpectedly continues on other system after ssh terminates).
As far as I can tell, the only way to combine an ssh command that does not allocate a pseudo-tty and a nohup command that writes to nohup.out on the remote host is to let the nohup command execute in a pseudo-terminal not created by the ssh mechanism. This can be done with the script command, for example, and will avoid the tcgetattr: Inappropriate ioctl for device message.
#!/bin/bash
ssh localhost /bin/sh <<EOF
#0<&- script -q /dev/null nohup sleep 10 1>&- &
#0<&- script -q -c "nohup sh -c 'date; sleep 10 1>&- &'" /dev/null # Linux
0<&- script -q /dev/null nohup sh -c 'date; sleep 10 1>&- &' # FreeBSD, Mac OS X
cat nohup.out
exit 0
EOF
echo 'done'
exit 0
You need to add an exit 0 at the end.

Starting a process over ssh using bash and then killing it on sigint

I want to start a couple of jobs on different machines using ssh. If the user then interrupts the main script I want to shut down all the jobs gracefully.
Here is a short example of what I'm trying to do:
#!/bin/bash
trap "aborted" SIGINT SIGTERM
aborted() {
    kill -SIGTERM $bash2_pid
    exit
}
ssh -t remote_machine /foo/bar.sh &
bash2_pid=$!
wait
However, the bar.sh process is still running on the remote machine. If I do the same commands in a terminal window, it shuts down the process on the remote host.
Is there an easy way to make this happen when I run the bash script? Or do I need to make it log on to the remote machine, find the right process and kill it that way?
Edit:
Seems like I have to go with option B, killing the remote script through another ssh connection.
So now I want to know: how do I get the remote PID?
I've tried something along the lines of:
remote_pid=$(ssh remote_machine '{ /foo/bar.sh & } ; echo $!')
This doesn't work since it blocks.
How do I wait for a variable to print and then "release" a subprocess?
It would definitely be preferable to keep your cleanup managed by the ssh that starts the process rather than moving in for the kill with a second ssh session later on.
When ssh is attached to your terminal, it behaves quite well. However, detach it from your terminal and it becomes (as you've noticed) a pain to signal or manage remote processes. You can shut down the link, but not the remote processes.
That leaves you with one option: Use the link as a way for the remote process to get notified that it needs to shut down. The cleanest way to do this is by using blocking I/O. Make the remote read input from ssh and when you want the process to shut down; send it some data so that the remote's reading operation unblocks and it can proceed with the cleanup:
command & read; kill $!
This is what we would want to run on the remote. We invoke our command that we want to run remotely; we read a line of text (blocks until we receive one) and when we're done, signal the command to terminate.
To send the signal from our local script to the remote, all we need to do now is send it a line of text. Unfortunately, Bash does not give you a lot of good options here. At least, not if you want to be compatible with bash < 4.0.
With bash 4 we can use co-processes:
coproc ssh user@host 'command & read; kill $!'
trap 'echo >&"${COPROC[1]}"' EXIT
...
Now, when the local script exits (don't trap on INT, TERM, etc. Just EXIT) it sends a new line to the file in the second element of the COPROC array. That file is a pipe which is connected to ssh's stdin, effectively routing our line to ssh. The remote command reads the line, ends the read and kills the command.
Before bash 4 things get a bit harder since we don't have co-processes. In that case, we need to do the piping ourselves:
mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT
This should work in pretty much any bash version.
Try this:
ssh -tt host command </dev/null &
When you kill the local ssh process, the remote pty will close and SIGHUP will be sent to the remote process.
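Combined with the trap from the question, that might look like this (a sketch along the lines of the original script):
#!/bin/bash
trap 'kill $ssh_pid' SIGINT SIGTERM   # killing ssh closes the remote pty; bar.sh then gets SIGHUP
ssh -tt remote_machine /foo/bar.sh </dev/null &
ssh_pid=$!
wait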
Referencing the answer by lhunath and https://unix.stackexchange.com/questions/71205/background-process-pipe-input I came up with this script
run.sh:
#!/bin/bash
log="log"
eval "$#" \&
PID=$!
echo "running" "$#" "in PID $PID"> $log
{ (cat <&3 3<&- >/dev/null; kill $PID; echo "killed" >> $log) & } 3<&0
trap "echo EXIT >> $log" EXIT
wait $PID
The difference being that this version kills the process when the connection is closed, but also returns the exit code of the command when it runs to completion.
$ ssh localhost ./run.sh true; echo $?; cat log
0
running true in PID 19247
EXIT
$ ssh localhost ./run.sh false; echo $?; cat log
1
running false in PID 19298
EXIT
$ ssh localhost ./run.sh sleep 99; echo $?; cat log
^C130
running sleep 99 in PID 20499
killed
EXIT
$ ssh localhost ./run.sh sleep 2; echo $?; cat log
0
running sleep 2 in PID 20556
EXIT
For a one-liner:
ssh localhost "sleep 99 & PID=\$!; { (cat <&3 3<&- >/dev/null; kill \$PID) & } 3<&0; wait \$PID"
For convenience:
HUP_KILL="& PID=\$!; { (cat <&3 3<&- >/dev/null; kill \$PID) & } 3<&0; wait \$PID"
ssh localhost "sleep 99 $HUP_KILL"
Note: kill 0 may be preferred to kill $PID depending on the behavior needed with regard to spawned child processes. You can also kill -HUP or kill -INT if you desire.
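For example, to take down the whole remote process group with SIGINT instead (an untested variant of the one-liner above):
ssh localhost "sleep 99 & PID=\$!; { (cat <&3 3<&- >/dev/null; kill -INT 0) & } 3<&0; wait \$PID"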
Update:
A secondary job control channel is better than reading from stdin.
ssh -n -R9002:localhost:8001 -L8001:localhost:9001 localhost ./test.sh sleep 2
Set job control mode and monitor the job control channel:
set -m
trap "kill %1 %2 %3" EXIT
(sleep infinity | netcat -l 127.0.0.1 9001) &
(netcat -d 127.0.0.1 9002; kill -INT $$) &
"$#" &
wait %3
Finally, here's another approach and a reference to a bug filed on openssh:
https://bugzilla.mindrot.org/show_bug.cgi?id=396#c14
This is the best way I have found to do this. You want something on the server side that attempts to read stdin and then kills the process group when that fails, but you also want a stdin on the client side that blocks until the server side process is done and will not leave lingering processes like <(sleep infinity) might.
ssh localhost "sleep 99 < <(cat; kill -INT 0)" <&1
It doesn't actually seem to redirect stdout anywhere but it does function as a blocking input and avoids capturing keystrokes.
The solution for bash 3.2:
mkfifo /tmp/mysshcommand
ssh user@host 'command & read; kill $!' < /tmp/mysshcommand &
trap 'echo > /tmp/mysshcommand; rm /tmp/mysshcommand' EXIT
doesn't work. The ssh command is not in the ps list on the "client" machine. Only after I echo something into the pipe does it appear in the process list of the client machine. The process that appears on the "server" machine would just be the command itself, not the read/kill part.
Writing again into the pipe does not terminate the process.
So, summarizing: I need to write into the pipe for the command to start up, and if I write again, it does not kill the remote command as expected.
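One workaround worth trying (my own addition, not part of the original answers) is to open the FIFO read-write through a dedicated file descriptor, so that opening it never blocks and the ssh command starts immediately:
mkfifo /tmp/mysshcommand
exec 3<>/tmp/mysshcommand   # read-write open: does not block waiting for a peer
ssh user@host 'command & read; kill $!' <&3 &
trap 'echo >&3; exec 3>&-; rm -f /tmp/mysshcommand' EXIT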
You may want to consider mounting the remote file system and running the script from the master box. For instance, if your kernel is compiled with FUSE (you can check with the following):
/sbin/lsmod | grep -i fuse
You can then mount the remote file system with the following command:
sshfs user@remote_system: mount_point
Now just run your script on the file located in mount_point.
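A minimal round trip might look like this (a sketch; mount_point is any empty local directory, and fusermount is the usual FUSE unmount helper on Linux):
mkdir -p mount_point
sshfs user@remote_system: mount_point
mount_point/path/to/script.sh   # runs locally against the remote files
fusermount -u mount_point       # unmount when done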

Getting ssh to execute a command in the background on target machine

This is a follow-on question to the How do you use ssh in a shell script? question. If I want to execute a command on the remote machine that runs in the background on that machine, how do I get the ssh command to return? When I try to just include the ampersand (&) at the end of the command it just hangs. The exact form of the command looks like this:
ssh user#target "cd /some/directory; program-to-execute &"
Any ideas? One thing to note is that logins to the target machine always produce a text banner and I have SSH keys set up so no password is required.
I had this problem in a program I wrote a year ago -- turns out the answer is rather complicated. You'll need to use nohup as well as output redirection, as explained in the Wikipedia article on nohup, copied here for your convenience:
Nohupping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:
nohup myprogram > foo.out 2> foo.err < /dev/null &
This has been the cleanest way to do it for me:
ssh -n -f user@host "sh -c 'cd /whereever; nohup ./whatever > /dev/null 2>&1 &'"
The only thing running after this is the actual command on the remote machine.
Redirect fd's
Output needs to be redirected with &>/dev/null which redirects both stderr and stdout to /dev/null and is a synonym of >/dev/null 2>/dev/null or >/dev/null 2>&1.
Parentheses
The best way is to use sh -c '( ( command ) & )' where command is anything.
ssh askapache 'sh -c "( ( nohup chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
Nohup Shell
You can also use nohup directly to launch the shell:
ssh askapache 'nohup sh -c "( ( chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
Nice Launch
Another trick is to use nice to launch the command/shell:
ssh askapache 'nice -n 19 sh -c "( ( nohup chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
If you don't/can't keep the connection open you could use screen, if you have the rights to install it.
user@localhost $ screen -t remote-command
user@localhost $ ssh user@target # now inside of a screen session
user@remotehost $ cd /some/directory; program-to-execute &
To detach the screen session: ctrl-a d
To list screen sessions:
screen -ls
To reattach a session:
screen -d -r remote-command
Note that screen can also create multiple shells within each session. A similar effect can be achieved with tmux.
user@localhost $ tmux
user@localhost $ ssh user@target # now inside of a tmux session
user@remotehost $ cd /some/directory; program-to-execute &
To detach the tmux session: ctrl-b d
To list tmux sessions:
tmux list-sessions
To reattach a session:
tmux attach <session number>
The default tmux control key, 'ctrl-b', is somewhat difficult to use but there are several example tmux configs that ship with tmux that you can try.
I just wanted to show a working example that you can cut and paste:
ssh REMOTE "sh -c \"(nohup sleep 30; touch nohup-exit) > /dev/null &\""
You can do this without nohup:
ssh user@host 'myprogram >out.log 2>err.log &'
The quickest and easiest way is to use the 'at' command:
ssh user@target "at now -f /home/foo.sh"
I think you'll have to combine a couple of these answers to get what you want. If you use nohup in conjunction with the semicolon, and wrap the whole thing in quotes, then you get:
ssh user#target "cd /some/directory; nohup myprogram > foo.out 2> foo.err < /dev/null"
which seems to work for me. With nohup, you don't need to append the & to the command to be run. Also, if you don't need to read any of the output of the command, you can use
ssh user#target "cd /some/directory; nohup myprogram > /dev/null 2>&1"
to redirect all output to /dev/null.
This worked for me many times:
ssh -x remoteServer "cd yourRemoteDir; ./yourRemoteScript.sh </dev/null >/dev/null 2>&1 & "
You can do it like this...
sudo /home/script.sh -opt1 > /tmp/script.out &
It appeared quite convenient for me to have a remote tmux session using the tmux new -d <shell cmd> syntax like this:
ssh someone@elsewhere 'tmux new -d sleep 600'
This will launch a new session on the elsewhere host, and the ssh command on the local machine will return to the shell almost instantly. You can then ssh to the remote host and tmux attach to that session. Note that no tmux is running locally, only remotely!
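For example (assuming the default, unnamed session):
ssh someone@elsewhere   # log in interactively
tmux attach             # reattach to the detached session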
Also, if you want your session to persist after the job is done, simply add a shell launcher after your command, but don't forget to enclose in quotes:
ssh someone#elsewhere 'tmux new -d "~/myscript.sh; bash"'
Actually, whenever I need to run a complicated command on a remote machine, I like to put the command in a script on the destination machine and just run that script using ssh.
For example:
#!/bin/bash
# simple_script.sh (located on the remote server)
cat /var/log/messages | grep <some value> | awk -F " " '{print $8}'
And then I just run this command on the source machine:
ssh user#ip "/path/to/simple_script.sh"
If you run a remote command without allocating a tty, redirecting stdout/stderr works and nohup is not necessary:
ssh user@host 'background command &>/dev/null &'
If you use -t to allocate a tty to run an interactive command along with a background command, and the background command is the last command, like this:
ssh -t user@host 'bash -c "interactive command; nohup background command &>/dev/null &"'
It's possible that the background command doesn't actually start. There's a race here:
1. bash exits after nohup starts. As the session leader, bash's exit results in a HUP signal being sent to the nohup process.
2. nohup ignores the HUP signal.
If 1 completes before 2, the nohup process will exit and won't start the background command at all. We need to wait for nohup to start the background command. A simple workaround is to just add a sleep:
ssh -t user@host 'bash -c "interactive command; nohup background command &>/dev/null & sleep 1"'
The question was asked and answered years ago; I don't know if openssh behavior has changed since then. I was testing on:
OpenSSH_8.6p1, OpenSSL 1.1.1g FIPS 21 Apr 2020
I was trying to do the same thing, but with the added complexity that I was trying to do it from Java. So on one machine running java, I was trying to run a script on another machine, in the background (with nohup).
From the command line, here is what worked (you may not need the "-i keyFile" if you don't need it to ssh to the host):
ssh -i keyFile user@host bash -c "\"nohup ./script arg1 arg2 > output.txt 2>&1 &\""
Note that on my command line, there is one argument after the "-c", which is all in quotes. But for it to work on the other end, it still needs the quotes, so I had to put escaped quotes within it.
From java, here is what worked:
ProcessBuilder b = new ProcessBuilder("ssh", "-i", "keyFile", "user@host", "bash", "-c",
    "\"nohup ./script arg1 arg2 > output.txt 2>&1 &\"");
Process process = b.start();
// then read from process.getInputStream() and close it.
It took a bit of trial & error to get this working, but it seems to work well now.
YOUR-COMMAND &> YOUR-LOG.log &
This should run the command and assign it a process ID. You can simply tail -f YOUR-LOG.log to see the results written to it as they happen, and you can log out at any time and the process will carry on.
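Concretely, over ssh that might look like this (a sketch with placeholder names):
ssh user@host './long-job.sh &> long-job.log &'
ssh user@host 'tail -f long-job.log'   # watch the output as it is written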
If you are using zsh, then program-to-execute &! is a zsh-specific shortcut to both background and disown the process, such that exiting the shell will leave it running.
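For example, assuming the remote shell is zsh (or invoking zsh explicitly):
ssh user@host "zsh -c 'program-to-execute &> out.log &!'"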
A follow-on to @cmcginty's concise working example which also shows how to alternatively wrap the outer command in double quotes. This is how the template would look if invoked from within a PowerShell script (which can only interpolate variables from within double-quotes and ignores any variable expansion when wrapped in single quotes):
ssh user@server "sh -c `"($cmd) &>/dev/null </dev/null &`""
Inner double-quotes are escaped with back-tick instead of backslash. This allows $cmd to be composed by the PowerShell script, e.g. for deployment scripts and automation and the like. $cmd can even contain a multi-line heredoc if composed with unix LF.
First follow this procedure:
Log in on A as user a and generate a pair of authentication keys. Do not enter a passphrase:
a@A:~> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa):
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/a/.ssh/id_rsa.
Your public key has been saved in /home/a/.ssh/id_rsa.pub.
The key fingerprint is:
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4 a@A
Now use ssh to create a directory ~/.ssh as user b on B. (The directory may already exist, which is fine):
a@A:~> ssh b@B mkdir -p .ssh
b@B's password:
Finally append a's new public key to b@B:.ssh/authorized_keys and enter b's password one last time:
a@A:~> cat .ssh/id_rsa.pub | ssh b@B 'cat >> .ssh/authorized_keys'
b@B's password:
From now on you can log into B as b from A as a without a password:
a@A:~> ssh b@B
Then this will work without entering a password:
ssh b@B "cd /some/directory; program-to-execute &"
I think this is what you need:
First, you need to install sshpass on your machine.
Then you can write your own script:
while read pass port user ip; do
sshpass -p$pass ssh -p $port $user@$ip <<ENDSSH1
COMMAND 1
.
.
.
COMMAND n
ENDSSH1
done <<____HERE
PASS PORT USER IP
. . . .
. . . .
. . . .
PASS PORT USER IP
____HERE
