ssh with a set -o vi as first interactive command - bash

I have a small script that I use to log in to a server; I have already exchanged keys.
The default editing mode set by the administrator is emacs, but I've become kind of addicted to
the vi key bindings. I can't log in as myself; I have to log in as a group user.
Most of the time the first thing I do is type `set -o vi`. Sometimes I forget and start using the vi key bindings, but they don't work, so I have to use the emacs key bindings and my muscle memory gets messed up. It would be great to just automagically have the key bindings set when I log in with the login script.
Anyhow, I am trying to add the set command to my ssh script.
This one does not work:
#!/bin/bash
ssh -q -T bighost <<EOF
set -o vi
EOF
This one does not work either:
#!/bin/bash
ssh bighost bash -c "'
set -o vi
'"
This lets me ssh to the host, but vi is not set as the key binding:
#!/bin/bash
ssh -t bighost "$(< set -o vi )"
corp_user@bighost:~$ set -o
allexport off
braceexpand on
emacs on
errexit off
errtrace off
functrace off
hashall on
histexpand on
history on
ignoreeof off
interactive-comments on
keyword off
monitor on
noclobber off
noexec off
noglob off
nolog off
notify off
nounset off
onecmd off
physical off
pipefail off
posix off
privileged off
verbose off
vi off
xtrace off
corp_user@bighost:~$
I even tried something like this:
ssh corp_user@bighost "$( < . ~/woogie)
Where woogie has "set -o vi " in it.
Can this be done?

This script works when I use it here:
#!/bin/bash
ssh [host] -t bash -o vi
where [host] should be the host you want to connect to. The -t option tells ssh to force allocation of a tty; if you don't do that, bash won't behave like a normal interactive shell. The option you were looking for is -o vi, which is the same thing you'd give to set: the bash man page mentions that the options you'd give to set can also be given on the command line.
This does not require you to create any file on the remote host.
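As a local sanity check (an illustrative sketch added here, not part of the original answer), you can confirm without any remote host that -o vi on the bash command line flips the same option that set -o vi toggles:

```shell
# A fresh bash started with -o vi should list the "vi" option as on;
# 'set -o' prints one "name on/off" line per option.
bash -o vi -c 'set -o' | grep -E '^vi[[:space:]]'
```

The grep should print a single line reporting the vi option as on.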

Use Expect
The easiest way to do this in a way that won't impact the other users who share the remote account is with expect. For example:
expect -c 'spawn ssh localhost; expect "$ "; send "set -o vi\r"; interact return'
This will login and wait for a prompt before attempting to set the vi key bindings, and then turn control back over to you.


How to remotely execute a bash script with an option?

I am attempting to remotely execute a Bash script defined as $CONST_FILE while passing an option to it (in this case -u). Unfortunately for me, the Bash interpreter assigns my option to ssh instead of to my script, causing an error since ssh does not have a -u option. The below section of code is causing the problem:
(ssh -o StrictHostKeyChecking=no -l $CONST_USERNAME $HOST_NAME_LOGIN<$CONST_FILE -u)
In previous Bash scripts, I have been able to execute Bash scripts via the above method so long as I was not passing an option with the script I was attempting to execute.
I have tried various placements of {} "" '' [] and other characters without success. What set of characters do I need in order for the Bash Interpreter to understand that -u needs to be consumed by $CONST_FILE instead of ssh?
The usual way is to use command like this:
ssh -o StrictHostKeyChecking=no -l $CONST_USERNAME $HOST_NAME_LOGIN "$CONST_FILE -u"
You can use also format like:
ssh -o StrictHostKeyChecking=no ${CONST_USERNAME}@${HOST_NAME_LOGIN} "$CONST_FILE -u"
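To see why the quotes matter without needing a remote host, here's a minimal local sketch (the stand-in script and its path are hypothetical, invented for this demo): sshd joins ssh's trailing arguments into one command string and hands it to the remote login shell, so quoting "$CONST_FILE -u" keeps the -u attached to the script rather than handing it to ssh:

```shell
# Hypothetical stand-in for the remote $CONST_FILE script
cat > /tmp/demo_script.sh <<'EOF'
#!/bin/sh
echo "got option: $1"
EOF
chmod +x /tmp/demo_script.sh
CONST_FILE=/tmp/demo_script.sh

# What the remote side effectively runs: one command string,
# word-split by the shell into the script name and its option.
sh -c "$CONST_FILE -u"    # prints: got option: -u
```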

Authenticating with user/password *once* for multiple commands? (session multiplexing)

I got this trick from Solaris documentation, for copying ssh public keys to remote hosts.
note: ssh-copy-id isn't available on Solaris
$ cat some_data_file | ssh user@host "cat >/tmp/some_data_file; some_shell_cmd"
I wanted to adapt it to do more involved things.
Specifically I wanted some_shell_command to be a script sent from the local host to execute on the remote host... a script would interact with the local keyboard (e.g. prompt user when the script was running on the remote host).
I experimented with ways of sending multiple things over stdin from multiple sources. But certain things that work in a local shell don't work over ssh, and some things, such as the following, didn't do what I wanted at all:
$ echo "abc" | cat <(echo "def") # echoes: def (I wanted abc\ndef)
$ echo "abc" | cat < <(echo "def") # echoes: def (I wanted abc\ndef)
$ echo "abc" | cat <<-EOF
> echo $(</dev/stdin) #echoes: echo abc (I wanted: abc)
> EOF
# messed with eval for the above but that was a problem too.
@chepner concluded it's not feasible to do all of that in a single ssh command. He suggested a theoretical alternative that didn't work as hoped, but I got it working after some research and tweaking, documented the results, and posted them as an answer to this question.
Without that solution, having to run multiple ssh, and scp commands by default entails being prompted for password multiple times, which is a major drag.
I can't expect all the users of a script I write in a multi-user environment to configure public key authorization, nor expect they will put up with having to enter a password over and over.
OpenSSH Session Multiplexing
    This solution works even when using earlier versions of OpenSSH where the
    ControlPersist option isn't available. (Working bash example at the end of this answer.)
Note: OpenSSH 3.9 introduced Session Multiplexing over a "control master connection" (in 2005), However, the ControlPersist option wasn't introduced until OpenSSH 5.6 (released in 2010).
ssh session multiplexing allows a script to authenticate once and do multiple ssh transactions over the authenticated connection. For example, if you have a script that runs several distinct tasks using ssh, scp, or sftp, each transaction can be carried out over an OpenSSH 'control master' session, identified by the filesystem location of its named socket.
The following one-time-password authentication is useful when running a script that has to perform multiple ssh operations and one wants to avoid users having to password authenticate more than once, and is especially useful in cases where public key authentication isn't viable - e.g. not permitted, or at least not configured.
Most solutions I've seen entail using ControlPersist to tell ssh to keep the control master connection open, either indefinitely, or for some specific number of seconds.
Unfortunately, systems with OpenSSH prior to 5.6 don't have that option, and upgrading them may not be feasible. There doesn't seem to be much documentation or discussion of that limitation online.
Reading through old release docs, I discovered that ControlPersist arrived late to the ssh session-multiplexing scene, implying there must have been a way to configure session multiplexing before that option existed.
Initially, trying to configure persistent sessions from command-line options rather than the config parameter, I ran into one of two problems: either the ssh session terminated prematurely, closing its control-connection client sessions with it, or the connection was held open (keeping the ssh control master alive) but terminal I/O was blocked and the script would hang.
The following clarifies how to accomplish it.
OpenSSH option          ssh flag          Purpose
---------------------   ---------------   -----------------------------------------
-o ControlMaster=yes    -M                Establishes sharable connection
-o ControlPath=path     -S path           Specifies path of connection's named socket
-o ControlPersist=600                     Keep sharable connection open 10 min.
-o ControlPersist=yes                     Keep sharable connection open indefinitely
                        -N                Don't create shell or run a command
                        -f                Go into background after authenticating
                        -O exit           Closes persistent connection

ControlPersist form     Equivalent        Purpose
---------------------   ---------------   -----------------------------------------
-o ControlPersist=yes   ssh -Nf           Keep control connection open indefinitely
-o ControlPersist=300   ssh -f sleep 300  Keep control connection open 5 min.
Note: scp and sftp implement the -S flag differently, and the -M flag not at all, so for those commands the -o option form is always required.
Sketchy Overview of Operations:
Note: This incomplete example doesn't execute as shown.
ctl=<path to dir to store named socket>
ssh -fNMS $ctl user@host # open control master connection
ssh -S $ctl … # example of ssh over connection
scp -o ControlPath=$ctl … # example of scp over connection
sftp -o ControlPath=$ctl … # example of sftp over connection
ssh -S $ctl -O exit # close control master connection
Session Multiplexing Demo
(Try it. You'll like it. Working example - authenticates only once):
Running this script will probably help you understand it quicker than reading it, and it is fascinating.
Note: If you lack access to a remote host, just enter localhost at the "Host...?" prompt if you want to try this demo script.
#!/bin/bash
# This script demonstrates ssh session multiplexing
trap '[ -z "$ctl" ] || ssh -S "$ctl" -O exit "$user@$host"' EXIT # closes conn, removes socket
read -p "Host to connect to? " host
read -p "User to login with? " user
BOLD="\n$(tput bold)"; NORMAL="$(tput sgr0)"
echo -e "${BOLD}Create authenticated persistent control master connection:${NORMAL}"
sshfifos=~/.ssh/controlmasters
[ -d $sshfifos ] || mkdir -p $sshfifos; chmod 755 $sshfifos
ctl=$sshfifos/$user@$host:22 # ssh stores named socket for ctrl conn here
ssh -fNMS $ctl $user@$host # Control master: prompts passwd then persists in background
lcldir=$(mktemp -d /tmp/XXXX)
echo -e "\nLocal dir: $lcldir"
rmtdir=$(ssh -S $ctl $user@$host "mktemp -d /tmp/XXXX")
echo "Remote dir: $rmtdir"
echo -e "${BOLD}Copy self to remote with scp:${NORMAL}"
scp -o ControlPath=$ctl ${BASH_SOURCE[0]} $user@$host:$rmtdir
echo -e "${BOLD}Display 4 lines of remote script, with ssh:${NORMAL}"
echo "====================================================================="
echo $rmtdir | ssh -S $ctl $user@$host "dir=$(</dev/stdin); head -4 \$dir/*"
echo "====================================================================="
echo -e "${BOLD}Do some pointless things with sftp:${NORMAL}"
sftp -o ControlPath=$ctl $user@$host:$rmtdir <<EOF
pwd
ls
lcd $lcldir
get *
quit
EOF
Using a master control socket, you can use multiple processes without having to authenticate more than once. This is just a simple example; see man ssh_config under ControlPath for advice on using a more secure socket.
It's not quite clear what you mean by sourcing somecommand locally; I'm going to assume it is a local script that you want copied over to the remote host. The simplest thing to do is just copy it over to run it.
# Copy the first file, and tell ssh to keep the connection open
# in the background after scp completes
$ scp -o ControlMaster=yes -o ControlPersist=yes -o ControlPath=%C somefile user@host:/tmp/somefile
# Copy the script on the same connection
$ scp -o ControlPath=%C somecommand user@host:
# Run the script on the same connection
$ ssh -o ControlPath=%C user@host somecommand
# Close the connection
$ ssh -o ControlPath=%C -O exit user@host
Of course, the user could use public key authentication to avoid entering their credentials at all, but ssh would still go through the authentication process each time. Here, the authentication process is only done once, by the command using ControlMaster=yes. The other two processes reuse that connection. The last command, with -O exit, doesn't actually connect; it just tells the local connection to close itself.
$ echo "abc" | cat <(echo "def")
The expression <(echo "def") expands to a file name, typically something like /dev/fd/63, that names a (virtual) file containing the text "def". So let's simplify it a bit:
$ echo "def" > def.txt
$ echo "abc" | cat def.txt
This also prints just def.
The pipe does feed the line abc to the standard input of the cat command. But because cat is given a file name on its command line, it doesn't read from its standard input. The abc is just quietly ignored, and the cat command prints the contents of the named file -- which is exactly what you told it to do.
The problem with echo abc | cat <(echo def) is that the <() wins the "providing the input" race. Luckily, bash will allow you to supply many inputs using multiple <() constructs. So the trick is, how do you get the output of your echo abc into the <()?
How about:
$ echo abc | cat <(echo def) <(cat)
def
abc
If you need to handle the input from the pipe first, just switch the order:
$ echo abc | cat <(cat) <(echo def)
abc
def
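Packaged as a runnable sketch, both orderings can be checked locally (bash is assumed throughout, since <( ) is a bashism):

```shell
#!/usr/bin/env bash
# <(cat) turns the pipe's stdin into a named-file argument, so cat
# concatenates its sources in plain argument order.
first=$(echo abc | cat <(cat) <(echo def))    # stdin first:  abc, then def
second=$(echo abc | cat <(echo def) <(cat))   # stdin second: def, then abc
printf '%s\n---\n%s\n' "$first" "$second"
```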

ssh + here document + interactive mode

Can I run a here document script over ssh on remote machine with interactive mode?
Code example is:
ssh -t xijing@ggzshop.com 'bash -s' <<EOF
sudo ls
......Other big scripts......
EOF
Doubling the flag (-tt) doesn't work properly either.
One possible solution:
After a lot of tries, I come up with following answers:
Script=`cat <<'EOF'
sudo ls
.....Big scripts.....
EOF`
ssh -t user@host ${Script}
which will allow user to type password in.
Xijing's solution appears to work OK for me. However, I made a couple of cosmetic changes. First, for readability I used "dollar-parentheses" instead of backticks. Second, something I offer no explanation for: semicolons were needed to separate the multiple commands in the Script snippet even though the commands are written on separate lines. My test:
Script=$( cat <<'HERE'
hostname;
cat /etc/issue;
sudo id
HERE
)
ssh -t user@host ${Script}
Sudo password will be asked in a normal manner, no need to omit that.
No, I don't think you can run interactive scripts like that.
To achieve what you want, you could create dedicated users for your common admin tasks that can run admin commands with sudo without password. Next, setup ssh key authentication to login as the dedicated users and perform the necessary tasks.
It is not necessary to use semicolons to separate multiple commands in Script if there are quotes around it.
Script="$( cat <<'HERE'
hostname;
cat /etc/issue;
sudo id
HERE
)"
- ssh -t user@host ${Script}
+ ssh -t user@host "${Script}"
# alternative (not recommended)
# set IFS variable to null string to avoid deletion of newlines \n in unquoted variable expansion
export IFS=''
ssh -t user@host ${Script}
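A local sketch of the difference (no ssh involved): unquoted expansion word-splits $Script, collapsing its newlines to single spaces, which is exactly why the semicolons were needed in the unquoted version:

```shell
# Capture a multi-line snippet, as in the answers above
Script=$(cat <<'HERE'
echo one;
echo two
HERE
)

# Unquoted: word splitting collapses the newlines to spaces, so the
# remote shell would see one line -- the semicolon is what separates
# the commands.
echo $Script      # -> echo one; echo two

# Quoted: the newlines survive, so the lines stay separate commands.
echo "$Script"
```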

Chain together commands with SSH on ProCurve

I need to back up the configuration of a HP ProCurve and I need to do it through SSH.
The ProCurve shell is interactive and I can't put something like:
$ ssh user@ip 'command1; command2'
...because the shell gives me an error, likely because the machine doesn't actually have a Unix-like shell. I need to connect, input the password, and execute two commands.
With ProCurve/Cisco devices you cannot simply run ssh hostname command. You need to use some tool that can drive interactive ssh sessions, e.g. expect, pexpect, or something similar. Another solution is to use socat:
(sleep 5; echo ${PASSWORD}; sleep 2; echo ; echo command1; sleep 2) \
| socat - EXEC:"ssh ${SWITCH}",setsid,pty,ctty
I'm not sure I entirely understand your question. Are you trying to run multiple commands?
Have you tried sh -c ?
ssh user@ip sh -c "command1; command2"
From man sh:
-c Read commands from the command_string operand instead of from the standard input. Special parameter 0 will be set from the command_name operand
and the positional parameters ($1, $2, etc.) set from the remaining argument operands.
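A quick local illustration of that paragraph from the man page (the operand names here are arbitrary):

```shell
# The operand after the command string becomes $0 (the "command name"),
# and the remaining operands become the positional parameters $1, $2, ...
sh -c 'echo "name=$0 first=$1 second=$2"' myname one two
# prints: name=myname first=one second=two
```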

Getting ssh to execute a command in the background on target machine

This is a follow-on question to the How do you use ssh in a shell script? question. If I want to execute a command on the remote machine that runs in the background on that machine, how do I get the ssh command to return? When I try to just include the ampersand (&) at the end of the command it just hangs. The exact form of the command looks like this:
ssh user@target "cd /some/directory; program-to-execute &"
Any ideas? One thing to note is that logins to the target machine always produce a text banner and I have SSH keys set up so no password is required.
I had this problem in a program I wrote a year ago -- turns out the answer is rather complicated. You'll need to use nohup as well as output redirection, as explained in the Wikipedia article on nohup, copied here for your convenience.
Nohupping backgrounded jobs is for example useful when logged in via SSH, since backgrounded jobs can cause the shell to hang on logout due to a race condition [2]. This problem can also be overcome by redirecting all three I/O streams:
nohup myprogram > foo.out 2> foo.err < /dev/null &
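The same recipe can be exercised locally (the job and log file here are stand-ins): with all three standard streams detached, the shell has nothing left to wait on at logout.

```shell
# A short local job with stdin, stdout, and stderr all detached,
# following the nohup recipe quoted above.
log=$(mktemp)
nohup sh -c 'echo started; echo done' > "$log" 2>&1 < /dev/null &
wait $!        # only so this demo can inspect the log afterwards
cat "$log"
```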
This has been the cleanest way to do it for me:
ssh -n -f user@host "sh -c 'cd /whereever; nohup ./whatever > /dev/null 2>&1 &'"
The only thing running after this is the actual command on the remote machine
Redirect fd's
Output needs to be redirected with &>/dev/null which redirects both stderr and stdout to /dev/null and is a synonym of >/dev/null 2>/dev/null or >/dev/null 2>&1.
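A minimal local check of that equivalence (bash assumed, since &> is a bashism):

```shell
#!/usr/bin/env bash
# Both writes are silenced by &>/dev/null; only the final marker survives.
out=$({ echo to-stdout; echo to-stderr >&2; } &>/dev/null; echo visible)
echo "$out"    # -> visible
```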
Parentheses
The best way is to use sh -c '( ( command ) & )' where command is anything.
ssh askapache 'sh -c "( ( nohup chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
Nohup Shell
You can also use nohup directly to launch the shell:
ssh askapache 'nohup sh -c "( ( chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
Nice Launch
Another trick is to use nice to launch the command/shell:
ssh askapache 'nice -n 19 sh -c "( ( nohup chown -R ask:ask /www/askapache.com &>/dev/null ) & )"'
If you don't/can't keep the connection open you could use screen, if you have the rights to install it.
user@localhost $ screen -t remote-command
user@localhost $ ssh user@target # now inside of a screen session
user@remotehost $ cd /some/directory; program-to-execute &
To detach the screen session: ctrl-a d
To list screen sessions:
screen -ls
To reattach a session:
screen -d -r remote-command
Note that screen can also create multiple shells within each session. A similar effect can be achieved with tmux.
user@localhost $ tmux
user@localhost $ ssh user@target # now inside of a tmux session
user@remotehost $ cd /some/directory; program-to-execute &
To detach the tmux session: ctrl-b d
To list tmux sessions:
tmux list-sessions
To reattach a session:
tmux attach -t <session name>
The default tmux control key, 'ctrl-b', is somewhat difficult to use but there are several example tmux configs that ship with tmux that you can try.
I just wanted to show a working example that you can cut and paste:
ssh REMOTE "sh -c \"(nohup sleep 30; touch nohup-exit) > /dev/null &\""
You can do this without nohup:
ssh user@host 'myprogram >out.log 2>err.log &'
The quickest and easiest way is to use the at command:
ssh user@target "at now -f /home/foo.sh"
I think you'll have to combine a couple of these answers to get what you want. If you use nohup in conjunction with the semicolon, and wrap the whole thing in quotes, then you get:
ssh user@target "cd /some/directory; nohup myprogram > foo.out 2> foo.err < /dev/null"
which seems to work for me. With nohup, you don't need to append the & to the command to be run. Also, if you don't need to read any of the output of the command, you can use
ssh user@target "cd /some/directory; nohup myprogram > /dev/null 2>&1"
to redirect all output to /dev/null.
This worked for me many times:
ssh -x remoteServer "cd yourRemoteDir; ./yourRemoteScript.sh </dev/null >/dev/null 2>&1 & "
You can do it like this...
sudo /home/script.sh -opt1 > /tmp/script.out &
It appeared quite convenient for me to have a remote tmux session using the tmux new -d <shell cmd> syntax like this:
ssh someone@elsewhere 'tmux new -d sleep 600'
This will launch a new session on the elsewhere host, and the ssh command on the local machine will return to the shell almost instantly. You can then ssh to the remote host and tmux attach to that session. Note that no local tmux is running, only the remote one!
Also, if you want your session to persist after the job is done, simply add a shell launcher after your command, but don't forget to enclose in quotes:
ssh someone@elsewhere 'tmux new -d "~/myscript.sh; bash"'
Actually, whenever I need to run a command on a remote machine that's complicated, I like to put the command in a script on the destination machine, and just run that script using ssh.
For example:
#!/bin/bash
# simple_script.sh (located on remote server)
cat /var/log/messages | grep "<some value>" | awk -F " " '{print $8}'
And then I just run this command on the source machine:
ssh user@ip "/path/to/simple_script.sh"
If you run remote command without allocating tty, redirect stdout/stderr works, nohup is not necessary.
ssh user@host 'background command &>/dev/null &'
If you use -t to allocate a tty to run an interactive command along with a background command, and the background command is the last command, like this:
ssh -t user@host 'bash -c "interactive command; nohup background command &>/dev/null &"'
it's possible that the background command doesn't actually start. There's a race here:
1. bash exits after nohup starts. As the session leader, bash's exit results in a HUP signal being sent to the nohup process.
2. nohup ignores the HUP signal.
If 1 completes before 2, the nohup process will exit and won't start the background command at all. We need to wait for nohup to start the background command. A simple workaround is to just add a sleep:
ssh -t user@host 'bash -c "interactive command; nohup background command &>/dev/null & sleep 1"'
The question was asked and answered years ago, I don't know if openssh behavior changed since then. I was testing on:
OpenSSH_8.6p1, OpenSSL 1.1.1g FIPS 21 Apr 2020
I was trying to do the same thing, but with the added complexity that I was trying to do it from Java. So on one machine running java, I was trying to run a script on another machine, in the background (with nohup).
From the command line, here is what worked: (you may not need the "-i keyFile" if you don't need it to ssh to the host)
ssh -i keyFile user@host bash -c "\"nohup ./script arg1 arg2 > output.txt 2>&1 &\""
Note that on my command line, there is one argument after the "-c", which is all in quotes. But for it to work on the other end, it still needs the quotes, so I had to put escaped quotes within it.
From java, here is what worked:
ProcessBuilder b = new ProcessBuilder("ssh", "-i", "keyFile", "user@host", "bash", "-c",
"\"nohup ./script arg1 arg2 > output.txt 2>&1 &\"");
Process process = b.start();
// then read from process.getInputStream() and close it.
It took a bit of trial & error to get this working, but it seems to work well now.
YOUR-COMMAND &> YOUR-LOG.log &
This should run the command and give you its process ID; you can simply tail -f YOUR-LOG.log to see results written to it as they happen. You can log out at any time and the process will carry on.
If you are using zsh, then program-to-execute &! is a zsh-specific shortcut to both background and disown the process, such that exiting the shell will leave it running.
A follow-on to @cmcginty's concise working example which also shows how to alternatively wrap the outer command in double quotes. This is how the template would look if invoked from within a PowerShell script (which can only interpolate variables within double quotes and ignores any variable expansion when wrapped in single quotes):
ssh user@server "sh -c `"($cmd) &>/dev/null </dev/null &`""
Inner double-quotes are escaped with back-tick instead of backslash. This allows $cmd to be composed by the PowerShell script, e.g. for deployment scripts and automation and the like. $cmd can even contain a multi-line heredoc if composed with unix LF.
First follow this procedure:
Log in on A as user a and generate a pair of authentication keys. Do not enter a passphrase:
a@A:~> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa):
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/a/.ssh/id_rsa.
Your public key has been saved in /home/a/.ssh/id_rsa.pub.
The key fingerprint is:
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4 a@A
Now use ssh to create a directory ~/.ssh as user b on B. (The directory may already exist, which is fine):
a@A:~> ssh b@B mkdir -p .ssh
b@B's password:
Finally append a's new public key to b@B:.ssh/authorized_keys and enter b's password one last time:
a@A:~> cat .ssh/id_rsa.pub | ssh b@B 'cat >> .ssh/authorized_keys'
b@B's password:
From now on you can log into B as b from A as a without password:
a@A:~> ssh b@B
then this will work without entering a password
ssh b@B "cd /some/directory; program-to-execute &"
I think this is what you need:
At first you need to install sshpass on your machine.
then you can write your own script:
while read pass port user ip; do
sshpass -p$pass ssh -p $port $user@$ip <<ENDSSH1
COMMAND 1
.
.
.
COMMAND n
ENDSSH1
done <<____HERE
PASS PORT USER IP
. . . .
. . . .
. . . .
PASS PORT USER IP
____HERE
