Loads of similar questions and answers are floating around out there. Many seem like they might help, but none do.
First, my situation / caveats: I am a researcher using publicly available data and I do not care one iota about security. I'm only concerned with getting my application up and running to do the math I need.
Second, what I'm doing: I will be running an asynchronous MPI algorithm on an HPC cluster. Right now I'm simply trying to automatically provision that cluster.
Third, Platform: I am using Microsoft Azure virtual machines (not Batch) running CentOS 7.6 and IntelMPI. I am using a custom image, which is unchanged from the stock Azure image except for the pre-installation of sshpass.
Fourth, the goal: In order for the VMs in the cluster to communicate, they require password-less ssh. I can set everything up manually without any trouble, but as n, the size of the cluster, grows, the number of connections that must be confirmed with a password grows as O(n^2): a cluster of 10 requires 100 password inputs, 20 requires 400, and so on. Therefore this must be done in a script.
Fifth, the problem: Initial set-up of passwordless ssh from a script is working fine, BUT the first connection between every pair of machines still requires a password; subsequent connections do not. It is THIS initial connection I am trying to make in my script without a password, and I am failing. Some tutorials / answers online don't even acknowledge that an initial connection must be made using the password (e.g. https://netbeez.net/blog/connect-to-ssh-without-password/). Perhaps, because the number of connections grows linearly in their applications, they simply ignore it as a minor inconvenience.
Here is a simple script that illustrates the problem.
#!/usr/bin/bash
ssh -o StrictHostKeyChecking=no $connectHostName #Of course this asks for a password.
#The following options don't work either
#echo -e "${pswd}\n" | ssh -o StrictHostKeyChecking=no $connectHostName
#sshpass -p "${pswd}" ssh -o StrictHostKeyChecking=no $connectHostName
Before running this script all key pairs were generated and copied to the other VMs in a previous script. That script is working fine without requiring password intervention. Here is an excerpt:
#..........
#Adds shared username to the group wheel
echo -e "${pswd}\n" | sudo -S usermod -aG wheel "${user}"
#Generates an ssh key pair for the machine running the script
echo | ssh-keygen -t rsa -P ''
#Looping through list of IP's
i=1
myHostName=`uname -n`
while IFS= read -r IP; do
    thisHostName=$(sed -n "${i}p" hosts)
    let "i++"
    if [ "$myHostName" == "$thisHostName" ]; then
        continue
    fi
    sshpass -p "${pswd}" ssh-copy-id -o StrictHostKeyChecking=no "${user}@${IP}"
done < Init_IPs
i=1 #resetting in case I use it later
# Loop Ends Here
echo -e "${pswd}\n" | eval `ssh-agent`
echo -e "${pswd}\n" | ssh-add ~/.ssh/id_rsa
#......
We have around 3000 VMs and 450 physical servers, all Linux-based (a few run Ubuntu starting from 9.x, a few run SUSE starting from 8.x, and the majority are RHEL from 4.x through 7.4). On all of them I need to add a few hostname entries with IP details to their respective /etc/hosts files.
I have different users on each server, each with full sudoers access, which I can use.
Hence I've created a CSV file, "hostname_logins.csv", in hostname, username, password format, which contains the details required to log in.
I need to upload a file (i.e. hostname_list) to each of these servers and then append those same details to each server's hosts file.
I'll be running this script from one RHEL 6 server. (All of the other hosts are resolvable and reachable from this server; I've confirmed that already.)
The script is working, but it asks me to accept the host key once and asks for the password twice; the third time it does not ask for a password and works automatically, I guess. I need to ensure it never asks to accept the host key or for a password:
#!/bin/bash
runing_ssh()
{
    while read hostname_login user_name user_password
    do
        ssh -vveS -ttq rishee:rishee@192.168.1.105 "sudo -S -ttq < ./.pwtmp cp -p /etc/hosts /etc/hosts.$(date +%Y-%m-%d_%H:%M:%S).bkp && sudo -S bash -c 'cat ./hostname_list >> /etc/hosts' && rm -f ./.pwtmp ./hostname_list"
    done < hostname_logins.csv
}

while read hostname_login user_name user_password
do
    echo $user_password > ./.pwtmp
    cat ./.pwtmp
    scp -p ./.pwtmp ./hostname_list $user_name@$hostname_login:
    runing_ssh
done < hostname_logins.csv
I need to make this into a single script that will work on all these servers. Thanks in advance.
You are executing the original copy from /tmp with sudo, but nothing else.
while read hostname_login user_name user_password
do
    echo $user_password > .pwtmp
    scp -p ./.pwtmp ./hostname_list $user_name:$user_password@$hostname_login:
    ssh -etS $user_name:$user_password@$hostname_login "sudo -S <.pwtmp cp -p /etc/hosts /etc/hosts.bkp && sudo -S <.pwtmp cat ./hostname_list >> /etc/hosts && rm -f ./.pwtmp ./hostname_list"
done < hostname_logins.csv
I dropped the explicit send to /tmp and the cp back to your home dir, and defaulted the location (to $user_name's home dir) by not passing anything to scp after the colon. Fix that if it doesn't work for you.
I created a password file for improved security and code reuse, and sent it along with the hosts list. I added a sudo -S to each relevant command, reading from the password file.
That [bash -c ...] syntax doesn't work on my implementation, so I took it out.
Hope that helps.
Update
Added -t to ssh call. Try that.
I have a shell script, which I am using to access the SMB Client:
#!/bin/bash
cd /home/username
smbclient //link/to/server$ password -W domain -U username
recurse
prompt
mput backupfiles
exit
Right now, the script runs, accesses the server, and then asks for a manual input of the commands.
Can someone show me how to get the recurse, prompt, mput backupfiles and exit commands to be run by the shell script, please?
I worked out a solution to this and am sharing it for future reference.
#!/bin/bash
cd /home/username
smbclient //link/to/server$ password -W domain -U username << SMBCLIENTCOMMANDS
recurse
prompt
mput backupfiles
exit
SMBCLIENTCOMMANDS
This will feed the commands between the two SMBCLIENTCOMMANDS markers to the smbclient prompt.
smbclient accepts the -c flag for this purpose.
-c|--command command string
command string is a semicolon-separated list of commands to be executed instead of
prompting from stdin.
-N is implied by -c.
This is particularly useful in scripts and for printing stdin to the server, e.g.
-c 'print -'.
For instance, you might run
$ smbclient -N \\\\Remote\\archive -c 'put /results/test-20170504.xz test-20170504.xz'
smbclient disconnects when it is finished executing the commands.
smbclient //link/to/server$ password -W domain -U username -c "recurse;prompt;mput backupfiles"
I would have commented on Calchas's answer, which is the correct approach but did not directly answer the OP's question, but I am new and don't have the reputation to comment.
Note that the -c argument listed above is a semicolon-separated list of commands (as documented in the other answers); adding recurse and prompt enables mput to copy without prompting.
You may also consider using the -A flag to read the credentials from a file (or from a command that decrypts such a file) to fully automate this script:
smbclient //link/to/server$ -A ~/.smbcred -c "recurse;prompt;mput backupfiles"
Where the file format is:
username = <username>
password = <password>
domain = <domain>
workgroup = <workgroup>
workgroup is optional, as is domain, but usually needed if not using a domain\username formatted username.
I suspect this post is WAY too late to be useful to this particular need, but maybe useful to other searchers, since this thread led me to the more elegant answer through -c and semicolons.
I would take a different approach and use autofs with SMB. Then you can eliminate the smbclient/FTP-like approach and refactor your shell script to use tools like rsync to move your files around. This way your credentials aren't stored in the script itself either; you can bury them somewhere on your filesystem and make the file readable only by root and no one else.
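As a rough sketch of what that can look like (the mount point, map file name, share name, and credential file path below are all assumptions to adapt to your system):
# /etc/auto.master -- delegate the /mnt/smb mount point to a custom map
/mnt/smb    /etc/auto.smb.shares    --timeout=60

# /etc/auto.smb.shares -- one CIFS share per line, credentials kept out of the script
archive    -fstype=cifs,rw,credentials=/root/.smbcred    ://link/to/server/archive

# /root/.smbcred -- chmod 600, owned by root
username=<username>
password=<password>
domain=<domain>

# the backup step in the script then becomes an ordinary local copy, e.g.:
rsync -av /home/username/backupfiles/ /mnt/smb/archive/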
Regardless of security issues, I want to automate ssh login by putting the password into a script file (in plaintext). For example, I tried the following, without success...
echo "mypassword" | ssh -X root#remote_node_address
It still prompts for the password...
Edit: I am aware of setting up passphraseless ssh (and have actually done this). What my question really is about is how to automate the process of setting up passphraseless ssh...
Automate with Expect
You can use Expect to drive password authentication with SSH. For example:
#!/usr/bin/expect -f
set timeout -1
spawn ssh -o PubkeyAuthentication=no host.example.com
expect -exact "Password: "
send -- "secret\r"
expect -re {\$\s*} { interact }
This script is a very basic example, and not especially robust in the face of failure or when running under a non-standard remote TERM like GNU screen, but it works for the common case. You can also use /usr/bin/autoexpect from the expect-dev package to generate your own custom scripts based on a manual session.
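For example (the host name here is just a placeholder), recording a manual session can be as simple as:
autoexpect ssh -o PubkeyAuthentication=no host.example.com
autoexpect writes the recorded session to script.exp by default, which you can then trim down to just the login exchange.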
You will need to use public key authentication; see
http://www.ece.uci.edu/~chou/ssh-key.html
In order to add new keys for existing hosts, you will need to automate updating the public keys in ~/.ssh/authorized_keys on the remote machine.
It is easy to do with:
ssh-keygen -t rsa -b 1024 -f ~/.ssh/new-key -P ""
cat ~/.ssh/new-key.pub | ssh root@target-host 'cat >> ~/.ssh/authorized_keys'
Then you can use the new key to access the host with:
ssh -i ~/.ssh/new-key root@remote-host
I ran into empty recently. I am surprised that it does not seem to be well known, since it is rarely mentioned when problems like "how to automate ssh" come up.
I use it on OpenWrt; its package is about 7 KB in size with no dependencies, while the Tcl package is around 440 KB. And you can use it directly from the shell.
"empty is an utility that provides an interface to execute and/or interact with processes under pseudo-terminal sessions (PTYs). This tool is definitely useful in programming of shell scripts designed to communicate with interactive programs like telnet, ssh, ftp, etc. In some cases empty can be the simplest replacement for TCL/expect or other similar programming tools "
For example:
#!/bin/sh
empty -f -i in -o out telnet foo.bar.com
empty -w -i out -o in "ogin:" "luser\n"
empty -w -i out -o in "assword:" "TopSecret\n"
empty -s -o in "who am i\n"
empty -s -o in "exit\n"
I often have to login to one of several servers and go to one of several directories on those machines. Currently I do something of this sort:
localhost ~]$ ssh somehost
Welcome to somehost!
somehost ~]$ cd /some/directory/somewhere/named/Foo
somehost Foo]$
I have scripts that can determine which host and which directory I need to get into but I cannot figure out a way to do this:
localhost ~]$ go_to_dir Foo
Welcome to somehost!
somehost Foo]$
Is there an easy, clever or any way to do this?
You can do the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted ; bash --login"
This way, you will get a login shell right inside directory_wanted.
Explanation
-t Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services.
Multiple -t options force tty allocation, even if ssh has no local tty.
If you don't use -t then no prompt will appear.
If you don't add ; bash, the connection will be closed and control returned to your local machine.
If you don't add bash --login, it will not load your configs, because it's not a login shell.
You could add
cd /some/directory/somewhere/named/Foo
to your .bashrc file (or .profile or whatever you call it) at the other host. That way, no matter what you do or where you ssh from, whenever you log onto that server, it will cd to the proper directory for you, and all you have to do is use ssh like normal.
Of course, rogeriopvl's solution works too, but it's a tad more verbose, and you have to remember to do it every time (unless you make an alias), so it seems a bit less "fun".
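If you only want that to happen for interactive logins (so that non-interactive ssh commands that source .bashrc are left alone), a guarded variant along these lines should work, assuming bash on the remote host:
case $- in
    *i*) cd /some/directory/somewhere/named/Foo ;;
esac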
My preferred approach is using the SSH config file (described below), but there are a few possible solutions depending on your usages.
Command Line Arguments
I think the best answer for this approach is christianbundy's reply to the accepted answer:
ssh -t example.com "cd /foo/bar; exec \$SHELL -l"
Using double quotes will allow you to use variables from your local machine, unless they are escaped (as $SHELL is here). Alternatively, you can use single quotes, and all of the variables you use will be the ones from the target machine:
ssh -t example.com 'cd /foo/bar; exec $SHELL -l'
Bash Function
You can simplify the command by wrapping it in a bash function. Let's say you just want to type this:
sshcd example.com /foo/bar
You can make this work by adding this to your ~/.bashrc:
sshcd () { ssh -t "$1" "cd \"$2\"; exec \$SHELL -l"; }
If you are using a variable that exists on the remote machine for the directory, be sure to escape it or put it in single quotes. For example, this will cd to the directory that is stored in the JBOSS_HOME variable on the remote machine:
sshcd example.com \$JBOSS_HOME
SSH Config File
If you'd like to see this behavior all the time for specific (or any) hosts with the normal ssh command without having to use extra command line arguments, you can set the RequestTTY and RemoteCommand options in your ssh config file.
For example, I'd like to type only this command:
ssh qaapps18
but want it to always behave like this command:
ssh -t qaapps18 'cd $JBOSS_HOME; exec $SHELL'
So I added this to my ~/.ssh/config file:
Host *apps*
    RequestTTY yes
    RemoteCommand cd $JBOSS_HOME; exec $SHELL
Now this rule applies to any host with "apps" in its hostname.
For more information, see http://man7.org/linux/man-pages/man5/ssh_config.5.html
I've created a tool to SSH and CD into a server consecutively – aptly named sshcd. For the example you've given, you'd simply use:
sshcd somehost:/some/directory/somewhere/named/Foo
Let me know if you have any questions or problems!
Based on additions to @rogeriopvl's answer, I suggest the following:
ssh -t xxx.xxx.xxx.xxx "cd /directory_wanted && bash"
Chaining commands with && will make the next command run only when the previous one succeeded (as opposed to using ;, which executes the commands unconditionally in sequence). This is particularly useful when you need to cd into a directory before performing a command there.
Imagine doing the following:
/home/me$ cd /usr/share/teminal; rm -R *
The directory teminal doesn't exist, which causes you to stay in the home directory and remove all the files in there with the following command.
If you use &&:
/home/me$ cd /usr/share/teminal && rm -R *
The command will fail after not finding the directory.
In my very specific case, I just wanted to execute a command on a remote host, inside a specific directory, from a Jenkins slave machine:
ssh myuser@mydomain
cd /home/myuser/somedir
./commandThatMustBeRunInside_somedir
exit
But my machine couldn't perform the ssh (it couldn't allocate a pseudo-tty, I suppose) and kept giving me the following error:
Pseudo-terminal will not be allocated because stdin is not a terminal
I could get around this issue by passing "cd to dir + my command" as a parameter of the ssh command (so no pseudo-terminal needs to be allocated) and by passing the option -T to explicitly tell ssh that I didn't need pseudo-terminal allocation.
ssh -T myuser@mydomain "cd /home/myuser/somedir; ./commandThatMustBeRunInside_somedir"
I use the environment variable CDPATH.
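For example (a sketch using the directory names from the question), exporting CDPATH on the remote host lets a short relative cd resolve against a list of parent directories:
# in ~/.bashrc on the remote host
export CDPATH=.:/some/directory/somewhere/named

# then, once logged in (or combined with the -t trick from the other answers):
cd Foo    # resolves to /some/directory/somewhere/named/Foo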
Going one step further with the -t idea: I keep a set of scripts calling the one below to go to specific places on my frequently visited hosts. I keep them all in ~/bin and keep that directory in my path.
#!/bin/bash
# does ssh session switching to particular directory
# $1, hostname from config file
# $2, directory to move to after login
# can save this as say 'con' then
# make another script calling this one, e.g.
# con myhost repos/i2c
ssh -t $1 "cd $2; exec \$SHELL --login"
My answer may differ from what you really want, but I am writing it here as it may be useful for some people. In my solution you have to enter the directory once, and then every new ssh session goes to the same dir (after the first logout).
How to ssh into the same directory you were in during your last login:
(I assume you use bash on the remote node.)
Add this line to your ~/.bash_logout on the remote node(!):
echo $PWD > ~/.bash_lastpwd
and these lines to the ~/.bashrc file (still on the remote node!)
if [ -f ~/.bash_lastpwd ]; then
cd "$(cat ~/.bash_lastpwd)"
fi
This way you save your current path on every logout, and .bashrc puts you into that directory after login.
PS: You can tweak it further, e.g. by using the SSH_CLIENT variable to decide whether to go into that directory or not, so you can differentiate between local logins and ssh, or even between different ssh clients.
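For instance, a possible ~/.bashrc variant along those lines (a sketch, assuming bash on the remote node) that only restores the saved directory for ssh sessions:
if [ -n "$SSH_CLIENT" ] && [ -f ~/.bash_lastpwd ]; then
    cd "$(cat ~/.bash_lastpwd)"
fi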
Another way of going directly to a directory after logging in is to create an alias. When you log in to your system, just type that alias and you will be in that directory.
Example: alias myfolder='cd /var/www/Folder'
After you log in to your system, type that alias (this works from anywhere on the system).
If the alias is not in .bashrc, it will only work for the current session, so you can also add it to .bashrc to use it in the future (a one-liner for that is sketched below).
$ myfolder => takes you to that folder
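A minimal way to make the alias permanent (assuming bash, and that ~/.bashrc is sourced for your login shell):
echo "alias myfolder='cd /var/www/Folder'" >> ~/.bashrc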
I know this was answered ages ago, but I found the question while trying to incorporate an ssh login into a bash script that, once logged in, runs a few commands, logs back out, and continues with the script. The simplest way I found, which hasn't been mentioned elsewhere because it is so trivial, is this:
#!/bin/bash
sshpass -p "password" ssh user#server 'cd /path/to/dir;somecommand;someothercommand;exit;'
Connect With User
In case you don't know this, you can connect by specifying both the user and the host:
ssh -t <user>@<host domain or IP> "cd /path/to/directory; bash --login"
Example: ssh -t admin@test.com "cd public_html; bash --login"
You can also add further commands to be executed on every login by appending them inside the double quotes, with a ; before each command, as in the example below.
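For instance (the extra ls -la here is just a placeholder for whatever you need to run):
ssh -t admin@test.com "cd public_html; ls -la; bash --login"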
Unfortunately, the suggested solution (of @rogeriopvl) doesn't work when you use multiple hops, so I found another one.
On remote machine add into ~/.bashrc the following:
[ "x$CDTO" != "x" ] && cd $CDTO
This allows you to specify the desired target directory on command line in this way:
ssh -t host1 ssh -t host2 "CDTO=/desired_directory exec bash --login"
Sure, this way can be used for a single hop too.
This solution can be combined with the useful tip from @redseven for greater flexibility (if there is no $CDTO, go to the saved directory, if it exists), as sketched below.
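A possible combined ~/.bashrc snippet (a sketch, assuming bash on the remote machine):
if [ -n "$CDTO" ]; then
    cd "$CDTO"
elif [ -f ~/.bash_lastpwd ]; then
    cd "$(cat ~/.bash_lastpwd)"
fi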
SSH itself only provides a means of communication; it does not know anything about directories. Since you can specify which remote command to execute (by default, your shell), I'd start there.
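For example (mirroring the other answers, using the directory from the question):
ssh -t somehost 'cd /some/directory/somewhere/named/Foo && exec "$SHELL" -l'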
Simply modify your home directory with the command:
usermod -d /newhome username