Not sure of the best way to title this question... I have a bash script on Server A.
Work Ubuntu Desktop:
ssh -l USER host
*password*
coolscript var1 var2 var3
Server A (coolscript):
command1 $1
command2 $2
Now at this point, I need to trigger coolscript2 on Server B with the third argument passed. Something like:
run_remote_command_on_server_b coolscript2 $3
Server B (coolscript2):
command3 $3
However, I need this to happen without having to enter a username/password for the second server.
If I understand your question correctly, you need to set up SSH keys.
The Arch Linux Wiki has a great article on using SSH keys.
You can also read a shorter HOWTO here.
Basically, when you log in from host A to host B via SSH, you can omit password authentication by generating a private/public key pair. The private key is stored on host A, and the public key is copied to host B.
Please note that there is an option to secure the SSH private key with a passphrase; in your case, you wouldn't do that.
So, just generate keys on desktop:
$ ssh-keygen
Then copy the public key to Server A and Server B:
$ ssh-copy-id -i ~/.ssh/id_rsa.pub HOST_A
$ ssh-copy-id -i ~/.ssh/id_rsa.pub HOST_B
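With the keys in place, coolscript on Server A can then call coolscript2 on Server B non-interactively. Note that for the Server A to Server B hop you either generate a key pair on Server A as well (and ssh-copy-id it to Server B), or log in to Server A with agent forwarding (ssh -A). A minimal sketch of coolscript, with USER and HOST_B as placeholders:
#!/bin/bash
# coolscript on Server A
command1 "$1"
command2 "$2"
# no password prompt here, because Server A can authenticate to Server B with a key
ssh USER@HOST_B coolscript2 "$3"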
Related
I want to write a shell script and put it in a cron. This shell script will copy one particular directory from my server to another server once every day. I don't want it to prompt for passwords, so is there something I can add to my script so that it won't ask for a password every day?
You need to have passwordless SSH login between your Unix boxes.
The link below describes how to set up passwordless SSH login:
http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
You can use FTP or NDM to transfer the files.
In this way you can achieve your requirement.
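As a sketch, once the passwordless login works, a crontab entry like the following would copy a directory once a day over scp (paths and hostname are placeholders):
# crontab entry: copy the directory every day at 02:00; -r copies it recursively
0 2 * * * /usr/bin/scp -r /data/export user@backuphost:/backups/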
Using the script below, I am able to achieve what I mentioned:
#!/bin/bash
# sshpass supplies the password non-interactively (note: the password is stored in cleartext)
sshpass -p Password0 scp arul@172.25.184.93:/home/arul/test.sh .
You can also use an RSA key for this. Using an RSA key you can authorize your second server on the first server. This is a one-time operation.
ssh-copy-id -i ~/.ssh/id_rsa.pub [Your 2nd server IP]
Example:-
[root@vasmon home]# ssh-copy-id -i ~/.ssh/id_rsa.pub xxx.xxx.xxx.xxx
root@xxx.xxx.xxx.xxx's password:
Now try logging into the machine, with "ssh 'xxx.xxx.xxx.xxx'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[root@vasmon home]#
I've set up ssh keys from server A to server B and I can log in to server B without a password. I'm trying to set up a reverse ssh tunnel in a bash script. From the command line, if I do
ssh -N -R 1234:localhost:22 user@mydomain.co.uk -p 22
from server A it works as expected, i.e. no password required. However, if I use it in a script
#!/bin/bash
/usr/bin/ssh -N -R 1234:localhost:22 user@mydomain.co.uk -p 22
I get asked for the password
user@mydomain.co.uk's password:
How do I make it so it uses the keys?
You need to let ssh know where to look for the keys if they are not in the standard location and are not passphrase-protected. The easiest way is to pass the -i switch directly to ssh:
/usr/bin/ssh -i /path/to/key -N -R 1234:localhost:22 user@mydomain.co.uk -p 22
A cleaner way is to put this in your ~/.ssh/config:
Host mydomain.co.uk
IdentityFile /path/to/key
But make sure the script runs under your user account, so that it can see the configuration file.
If your keys are in the standard location (~/.ssh/id_rsa), your code should work just fine. It should also work if your keys are loaded in ssh-agent, which you can verify with ssh-add -L before starting the script; ssh-agent also solves the problem if the keys are passphrase-protected.
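For example, to load a passphrase-protected key into the agent before starting the script (the key path is a placeholder):
eval "$(ssh-agent -s)"   # start the agent and export its environment variables
ssh-add /path/to/key     # prompts for the passphrase once
ssh-add -L               # verify the key is loaded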
How do I store a string in a remote file?
I need to overwrite the content of a file on a remote machine (I have the username, password and IP, and can access it through ssh) with some string from the command line.
How do I achieve this on Linux?
You can modify the file in one operation with ssh:
ssh user@host "echo \"$local_variable\" > /path/to/file"
However, this is risky: what if there's a double-quote character in the local variable? In Bash, you can get around it by quoting the value:
ssh user@host "echo $(printf %q "$local_variable") > /path/to/file"
The much simpler and safer way to do this, avoiding any weird escaping problems, is to save the contents to a file locally and then copy it over with scp or rsync.
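A sketch of that approach (filenames are placeholders):
# write the string to a local file, then push it to the remote path
printf '%s\n' "$local_variable" > /tmp/payload.txt
scp /tmp/payload.txt user@host:/path/to/file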
If you have access to ssh you can execute any command on the remote machine:
ssh username@remotehost.com /usr/bin/mycommand
(For example you could echo a string into a file if you like)
If you want to get rid of the password prompt, you could use SSH key authentication:
ssh-keygen -t rsa
scp .ssh/id_rsa.pub username@remotehost.com:~/.ssh/authorized_keys2
Once your key is on the remote machine, you can use ssh without a password. (Warning: using a key which is not passphrase-protected can be risky.)
I can connect to a server via SSH using the -i option to specify the private key:
ssh -i ~/.ssh/id_dsa user@hostname
I am creating a script that takes the id_dsa text from a database, but I am not sure how I can give that string to SSH. I would need something like:
ssh --option $STRING user@hostname
where $STRING contains the value of id_dsa. I need to know the --option, if there is one.
Try the following:
echo "$KEY" | ssh -i /dev/stdin username@host command
The key doesn't appear in ps output, but because stdin is redirected this is only useful for single commands or tunnels.
There is no such switch; it would leak sensitive information. If there were, anyone could get your private key by running a simple ps command.
EDIT (because of the added details in the comment):
You really should store the key in a temporary file. Make sure you set the permissions correctly before writing to the file if you do not use a command like mktemp to create it.
Make sure you run the broker (or agent, in the case of OpenSSH) process and load the key using <whatever command you use to fetch it from the database> | ssh-add -
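A sketch of the temporary-file variant, where fetch-key-from-db stands in for whatever command retrieves the key:
keyfile=$(mktemp)                # mktemp creates the file with mode 0600
fetch-key-from-db > "$keyfile"   # placeholder for your retrieval command
ssh -i "$keyfile" user@hostname
rm -f "$keyfile"                 # delete the key as soon as the session ends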
Passing a crypto key as a string is not advisable, but for the sake of the question: I came across the same situation, where I needed to pass the key as a string in a script. I could have used a key stored in a file too, but the script had to be very flexible and self-contained, so I assigned the key to a variable and echoed it into ssh as follows:
#!/bin/bash
KEY="${ YOUR SSH KEY HERE INSIDE }"
echo "${KEY}" | ssh -q -i /dev/stdin username#IP 'hostnamectl'
exit 0
Notes:
-q suppresses most warning and diagnostic messages
By the way, the catch in the above script: since we are using echo, you might expect the key to be printed, but it is not. The output of echo goes straight into ssh's stdin rather than to the terminal, and because echo is a shell builtin the key never shows up in ps output either. Inserting a grep -q stage between the echo and the ssh, as is sometimes suggested to hide the key, does not work: grep -q suppresses all output, so ssh would receive an empty key on stdin. The key does remain in the script file itself, so keep the script's permissions tight.
This worked for me.
I was looking at the same problem. Adding the private key content to the ssh command via stdin did not work for me. I found out that it's possible to add the private key file contents to ssh-agent using the command ssh-add. This lets you ssh into the remote host without explicitly specifying the identity file. My particular use case was that I didn't want to store the SSH key in cleartext on my machine and was dynamically getting it from a secrets vault. This answer is mostly a collection of other answers on StackOverflow.
ssh-agent is a program to hold private keys used for public key authentication. Through use of environment variables the agent can be located and automatically used for authentication when logging in to other machines using ssh
Source
This is what I have done.
First start the ssh-agent.
You can start it from your terminal by simply executing ssh-agent.
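Note that ssh-agent prints the environment variables it needs rather than setting them; to have them take effect in the current shell, the usual invocation is:
eval "$(ssh-agent -s)"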
OPTIONAL: If you'd like to make sure ssh-agent is running on every login, you can add something like the following to your shell config.
This is what I have added to my ~/.bashrc file.
# set SSH_AUTH_SOCK env var to a fixed value
export SSH_AUTH_SOCK=~/.ssh/ssh-agent.sock
# test whether $SSH_AUTH_SOCK is valid
ssh-add -l 2>/dev/null >/dev/null
# if not valid, then start ssh-agent using $SSH_AUTH_SOCK
[ $? -ge 2 ] && ssh-agent -a "$SSH_AUTH_SOCK" >/dev/null
Source
(This particular snippet also makes sure new ssh-agent processes are not getting created when there's one already running.)
Now you have the ssh-agent running.
Since we're interested in loading the SSH key as a string, I'll assume a scenario where the private key contents have already been loaded into a variable, $SSH_PRIVATE_KEY.
I can now add the key contents to the ssh-agent by executing the following command.
ssh-add - <<< "${SSH_PRIVATE_KEY}"
This can just be added to the bashrc file as well.
You can confirm that your key has been added by listing all keys with ssh-add -l. And you're done.
Try connecting to the remote host; you no longer need a private key file.
ssh username@hostname
This does come with extra security risks. These are some I could think of:
Adding the private key to the ssh-agent lets any process on the machine use the agent to authenticate to remote hosts without explicitly providing any key material.
Since the goal is to load the private key as a string, the key will either be stored in a variable or embedded directly in the command. This might make it available in the command history, in shell variables, and in other places.
When you're using pscp to send files to a single machine it's not a big deal, because you will get the rsa fingerprint prompt once and never again after that. But if you want to connect to 200 machines, you definitely don't want to type "yes" 200 times...
I'm using pscp on a Windows machine and I really don't care about the fingerprint, I only want to accept it. I'm using Amazon EC2 and the fingerprint changes every time I restart the machines...
If there is a way to avoid it using pscp or a different tool, please let me know!
Thanks!
See Putty won't cache the keys to access a server when run script in hudson
On Windows you can prefix your command with echo y |, which will blindly accept any host key every time. However, a more secure solution is to run interactively the first time, or to generate a .reg file that can be run on any client machine.
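For example (key file, source file and destination are placeholders):
echo y | pscp -i mykey.ppk file.txt user@host:/tmp/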
I do not completely agree with the last answer. The first time you accept an SSH key, you know nothing about the remote host, so automatically accepting it makes no difference.
What I would do is auto-accept the key the first time I connect to a host. I've read that doing something like yes yes | ssh user@host works, but it doesn't, because SSH does not read from stdin but from the terminal.
What does work is to pass, the first time you connect, the following ssh option (it works for both scp and ssh):
scp -oStrictHostKeyChecking=no user@host1:file1 user@host2:file2
This command adds the key the first time you run it. But, as Eric says, doing this once you have already accepted the key is dangerous (man in the middle is uncool). If I were you, I'd add it to a script that checks ~/.ssh/known_hosts for an existing line for that host, in which case I wouldn't add the option; if there was no line, I would ;).
If you are dealing with a hashed known_hosts file, try
ssh-keygen -F hostname
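ssh-keygen -F prints any matching known_hosts entries and exits with a non-zero status when the host is not found, so it works directly in a test (hostname is a placeholder):
if ssh-keygen -F myhost.example.com > /dev/null
then
    echo 'host already in known_hosts'
fi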
Here's something I'm actually using (a function receiving the following arguments: user, host, source_file):
deployToServer() {
    echo "Deploying to $1@$2 from $3"
    # auto-accept the host key only if the host is not yet in known_hosts
    if [ -z "$(grep "$2" ~/.ssh/known_hosts)" ] && [ -z "$(ssh-keygen -F "$2")" ]
    then
        echo 'Auto accepting SSH key'
        scp -oStrictHostKeyChecking=no "$3"* "$1@$2:."
    else
        scp "$3"* "$1@$2:."
    fi
}
Hope this helped ;)
The host ssh key fingerprint should not change if you simply reboot or stop/start an instance. If it does, then the instance/AMI is not configured correctly or something else (malicious?) is going on.
Good EC2 AMIs are set up to create a random host ssh key on first boot. Most popular AMIs will output the fingerprint to the console output. For security, you should be requesting the instance console output through the EC2 API (command line tool or console) and comparing that to the fingerprint in the ssh prompt.
By saying you "don't care about the fingerprint" you are saying that you don't care about encrypting the traffic between yourself and the instance, and that it's OK for anybody in between to see that communication. It may even be possible for a man-in-the-middle to take over the ssh session and gain control of your instance.
With ssh on Linux you can turn off the ssh fingerprint check with a command line or config file option. I hesitate to publish how to do this as it is not recommended and seriously reduces the safety of your connections.
A better option is to have your instances set up their own host ssh key to a secret value that you know. You can save the public side of the host ssh key in your known hosts file. This way your traffic is encrypted and safe, and you don't have to continually answer the prompt about the fingerprints when connecting to your own machine.
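As a sketch, once you have verified the fingerprint from the console output, you could pin the host key so future connections never prompt (hostname is a placeholder):
# fetch the host key, compare its fingerprint against the console output,
# then append it to known_hosts
ssh-keyscan -t rsa ec2-host.example.com >> ~/.ssh/known_hosts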
I created an expect file with the following commands in it:
spawn ssh -i ec2Key.pem ubuntu@ec2IpAddress
expect "Are you sure you want to continue connecting (yes/no)?" { send "yes\n" }
interact
I was able to ssh into the EC2 instance without disabling the rsa fingerprint check. The EC2 host was added to my machine's known hosts.
I hope it helps.