SSH and agent for Ubuntu file transfer automation - shell

I have a script that creates database dumps and transfers the files from an Ubuntu server to a Linux machine. I use scp for the transfer, but it prompts for a password every time, and I need to automate it. I have the RSA public key of the Linux machine in the Ubuntu machine's authorized_keys, but when I scp it says Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password). I checked the permissions and everything like PasswordAuthentication being off etc., with no luck.
Can I just write the password in my script and use it regardless of security, since I will give the script 700 permissions and no one but me, the root user, can access it?
This is my script:
#!/bin/bash
export DB_DUMP_DIR=/home/database_dump
export DB_NAME=database_name_$(date '+%Y_%m_%d').sql
mysqldump -u root mysql > ${DB_DUMP_DIR}/${DB_NAME}
if [ $? -eq 0 ]; then
    scp -i /root/.ssh/id_rsa ${DB_DUMP_DIR}/${DB_NAME} root@192.0.0.0:
else
    echo "Error generating database dump"
fi

The first things that come to mind are:
Is the server set to allow key authentication? (that's PubkeyAuthentication yes in sshd_config)
Is the server allowing RSA keys? (this might look like RSAAuthentication no in your sshd_config)
Is root's ~/.ssh directory set to 700? (or tighter)
Is root's ~/.ssh/authorized_keys set to 600? (or tighter)
Is the remote machine allowing you to log in as root? (the PermitRootLogin no option in sshd_config)
Is it really the right key you're sending here? Did you try with a different key you created just to test this?
Lastly, it is never, ever a good idea to write the password down in a script. Just don't do it. Fix the problem you have with key authentication here instead.
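If those all check out, ssh's verbose output usually pinpoints the failure. A minimal sketch, reusing the host and key from the script above:
# on the client: show which keys are offered and why authentication fails
ssh -vvv -i /root/.ssh/id_rsa root@192.0.0.0 true
# on the remote machine: tighten the usual permission problems
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys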

Related

Shell script to copy one directory from one server to another without asking for password

I want to write a shell script and put it in a cron job. This shell script will copy one particular directory from my server to another server once every day, so I don't want it to prompt for passwords. Is there something I can add to my script so that it won't ask for a password every day?
You need to have passwordless SSH login between your Unix boxes.
The link below describes how to set up passwordless SSH login:
http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
Alternatively, you can use FTP or NDM to transfer the files.
This way you can achieve your requirement.
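In outline, the passwordless setup from that link is three commands. A minimal sketch, where user@remote-host is a placeholder for your target box:
ssh-keygen -t rsa               # generate a key pair; accept the defaults
ssh-copy-id user@remote-host    # install the public key on the remote machine
ssh user@remote-host            # should now log in without a password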
Using the below script, I am able to achieve what I mentioned:
#!/bin/bash
# sshpass supplies the password non-interactively; no need for eval here
sshpass -p Password0 scp arul@172.25.184.93:/home/arul/test.sh .
You can also use the RSA key option for this. Using an RSA key you can authorize your second server on the first server. This is a one-time operation.
ssh-copy-id -i ~/.ssh/id_rsa.pub [Your 2nd server IP]
Example:-
[root@vasmon home]# ssh-copy-id -i ~/.ssh/id_rsa.pub xxx.xxx.xxx.xxx
root@xxx.xxx.xxx.xxx's password:
Now try logging into the machine, with "ssh 'xxx.xxx.xxx.xxx'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[root@vasmon home]#

Script to access a remote server, get the error log and rename it automatically

Hi, my name is Evan, a newbie on UNIX :)
I want to ask about scripting on Unix. Here is the case:
I have 4 Unix servers (running FreeBSD); let's call them the "Gorillas".
And one gateway server (also running FreeBSD); let's call this one "Monkey".
If I want to access and log in to a Gorilla server, I have to use PuTTY to access Monkey and then make an SSH connection from Monkey to enter the Gorilla server.
The case is, my boss is asking me to get the Apache error log, every day, from each of the four Gorilla servers.
All this time I have been doing it manually: PuTTY to Monkey, ssh to the Gorillas, copy the error log onto the Monkey server with the scp command, and then fetch the error log from Monkey with WinSCP.
The problems are:
How do I make a script for this case?
How do I rename the error log automatically? The error log on every server has the same name, "01_error.log", so I have to rename each one manually to keep them from overwriting each other.
I hope somebody can help me with this.
Thank you all for your help and time, and sorry for the bad English. :)
The easiest way to accomplish this would be to set up an automated job on Gorilla4.
Your first problem is that you'll need to set up passwordless SSH access between Gorilla4 and Monkey, so no person has to physically type in the password.
While you can do this with the 'root' user, I would STRONGLY recommend against it.
Instead, create a maintenance user on BOTH hosts:
$ useradd -m maintuser
Then switch to the new user and create SSH key on Gorilla4:
$ ssh-keygen -t rsa -b 2048
Accept the defaults when prompted. Then append the contents of the id_rsa.pub file to ~/.ssh/authorized_keys of the maintuser on Monkey.
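A minimal sketch of that step, assuming "monkey" resolves to the gateway host:
ssh-copy-id maintuser@monkey
# or, done by hand:
cat ~/.ssh/id_rsa.pub | ssh maintuser@monkey 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'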
Now, when you are the "maintuser" on Gorilla4, you can SSH to Monkey without a password.
Then you can create a script called "copy_log.sh":
#!/bin/bash
# copy_log.sh
log_path="/path/to/logdir"
log_name="01_error.log"
target_host="monkey"
echo "copying ${log_name} to ${target_host}..."
# note: $(hostname) below prefixes the file name with "Gorilla4",
# so logs from different servers don't overwrite each other
scp ${log_path}/${log_name} maintuser@${target_host}:/path/to/dest/$(hostname)_${log_name} || {
    echo "Failed to scp file"
    exit 2
}
echo "completed successfully"
Make it executable:
$ chmod +x copy_log.sh
Add it to the maintuser's crontab on Gorilla4 to run at whatever time you would normally do it yourself, say 8am every day:
00 08 * * * /path/to/copy_log.sh >> /some/log/dir/copy_log.out 2>&1
Hope this helps; if nothing else, it will give you plenty to Google :)

ssh to another server with different username

I have to write a shell script that SSHes to another server with another username, without actually asking the user for a password.
Due to constraints I cannot use key based authentication.
let,
Source Server -- abc.efg.com
Source UserName -- tom
Source Password -- tom123
Destination Server -- xyz.efc.com
Destination UserName -- bob
Destination Password -- bob123
I have to place the bash script in source server.
Please let me know if something could be done using expect tool and/or sshpass.
It is okay for me to hardcode the password for the destination server in the bash script, but I cannot bear an interactive session: when I simply run the script, I want to see the destination server logged in with the other username.
Thanks in Advance.
You want to use key authentication: http://ornellas.apanela.com/dokuwiki/pub:ssh_key_auth
Generate your keys: ssh-keygen
Copy the keys to your new box: ssh-copy-id -i ~/.ssh/id_rsa.pub me@otherhost.com
ssh to the other host without a password: ssh me@otherhost.com
You can use expect to wrap ssh, but it's pretty hectic and fails easily when there are network errors, so test it well or use a script specifically designed for wrapping ssh passwords. Key-based authentication is better.
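For illustration only, a minimal expect sketch using the hostname and password from the question (hardcoding the password like this is exactly the insecure part):
#!/usr/bin/expect -f
set timeout 30
spawn ssh bob@xyz.efc.com
expect "assword:"
send "bob123\r"
interact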
You can prevent interactive sessions by redirecting standard input from the null device, i.e.
ssh me@destination destination-command < /dev/null
About placing the script on the source server: if the script you are running is local rather than remote, you can pass it on standard input instead of the command line:
cat bashscript.sh | ssh me@destination
You can install the sshpass program, which lets you write a script like
#!/bin/bash
sshpass -p bob123 ssh bob@xyz.efc.com
The answer is that you can't, as OpenSSH actively prevents headless password-based authentication. Use key-based authentication.
You may be able to fork the OpenSSH client code and patch it, but I think that is a bit excessive.

Automatically accept rsa fingerprint using pscp

When you're using pscp to send files to a single machine it's not a big deal, because you get the RSA fingerprint prompt once and never again. But if you want to connect to 200 machines, you definitely don't want to type "yes" 200 times...
I'm using pscp on a Windows machine and I really don't care about the fingerprint, I only want to accept it. I'm using Amazon EC2 and the fingerprint changes every time I restart the machines...
If there is a way to avoid it using pscp or a different tool, please let me know!
Thanks!
See Putty won't cache the keys to access a server when run script in hudson
On Windows you can prefix your command with echo y |, which will blindly accept any host key every time. However, a more secure solution is to run interactively the first time, or to generate a .reg file that can be run on any client machine.
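For example, a sketch with a hypothetical host and file (-pw is pscp's password option):
echo y | pscp -pw secret file.txt user@203.0.113.5:/tmp/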
I do not completely agree with the previous answer. The first time you accept an SSH key, you know nothing about the remote host, so automatically accepting it makes no difference.
What I would do is auto-accept the key the first time you connect to a host. I've read that doing something like yes yes | ssh user@host works, but it doesn't, because SSH does not read from stdin but from a terminal.
What does work is to pass, the first time you connect, the following ssh option (it works for both scp and ssh):
scp -oStrictHostKeyChecking=no user@host1:file1 user@host2:file2
This command adds the key the first time you run it. But, as Eric says, doing this once you have already accepted the key is dangerous (man-in-the-middle is uncool). If I were you, I'd add it to a script that checks ~/.ssh/known_hosts for an existing line for that host; if there is one, I wouldn't add the option, and if there isn't, I would ;).
If you are dealing with a hashed version of known_hosts, try
ssh-keygen -F hostname
Here's something I'm actually using (the function receives the following arguments: user, host, source_file):
deployToServer() {
    echo "Deploying to $1@$2 from $3"
    # auto-accept the host key only if it is not already known
    if [ -z "$(grep "$2" ~/.ssh/known_hosts)" ] && [ -z "$(ssh-keygen -F "$2")" ]
    then
        echo 'Auto accepting SSH key'
        scp -oStrictHostKeyChecking=no "$3"* "$1@$2":.
    else
        scp "$3"* "$1@$2":.
    fi
}
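A hypothetical invocation, assuming the function has been sourced into the current shell (arguments: user, host, source prefix):
deployToServer deploy 198.51.100.7 ./release/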
Hope this helped ;)
The host ssh key fingerprint should not change if you simply reboot or stop/start an instance. If it does, then the instance/AMI is not configured correctly or something else (malicious?) is going on.
Good EC2 AMIs are set up to create a random host ssh key on first boot. Most popular AMIs will output the fingerprint to the console output. For security, you should be requesting the instance console output through the EC2 API (command line tool or console) and comparing that to the fingerprint in the ssh prompt.
By saying you "don't care about the fingerprint" you are saying that you don't care about encrypting the traffic between yourself and the instance, and that it's OK for anybody in between to see that communication. It may even be possible for a man-in-the-middle to take over the ssh session and gain control of your instance.
With ssh on Linux you can turn off the ssh fingerprint check with a command-line or config-file option. I hesitate to publish how to do this, as it is not recommended and seriously reduces the safety of your connections.
A better option is to have your instances set up their own host ssh key to a secret value that you know. You can save the public side of the host ssh key in your known hosts file. This way your traffic is encrypted and safe, and you don't have to continually answer the prompt about the fingerprints when connecting to your own machine.
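One hedged way to record a host key you have already verified out of band (for example against the EC2 console output mentioned above); the hostname is a placeholder:
ssh-keyscan -t rsa ec2-host.example.com >> ~/.ssh/known_hosts
# then confirm the recorded fingerprint matches the trusted one:
ssh-keygen -lf ~/.ssh/known_hosts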
I created an expect file with the following commands in it:
spawn ssh -i ec2Key.pem ubuntu@ec2IpAddress
expect "Are you sure you want to continue connecting (yes/no)?" { send "yes\r" }
interact
I was able to ssh into the EC2 instance without disabling the RSA fingerprint check. My machine was added to the known hosts of this EC2 instance.
I hope it helps.

Using a variable's value as password for scp, ssh etc. instead of prompting for user input every time

AFAIK, the ssh and scp commands do not have/take a password parameter; otherwise I could keep the password in a shell variable and get rid of the enter-password prompt. If I write an scp command in my shell script, it prompts the user for the password. I have multiple ssh and scp commands in my script and I do not want the user to enter the password every time. I would prefer to ask for the password once at the beginning, save it in a shell variable, and then use it for every ssh or scp.
I read about "public key identification" in this question. Is it related to the solution I am looking for?
Update
I read in How to use ssh command in shell script? why it is unsafe to specify passwords on the command line. Does using expect also store the password somewhere world-visible (using ps aux)? Is that the security issue with using expect?
Further Explanation
To make it clearer: I am writing this shell script to automate code and database backup, code upload, running the necessary database queries, and everything else needed for a new version release of a LAMP project from a developer system to a remote live server. My shell script will live inside the main codebase of the project in every developer instance.
Requirement
I want all developers (who may be working from different remote systems) who know the SSH/FTP password to be able to use the shell script by entering that ssh/ftp password once at run time.
Note - I do not want other developers who don't know the SSH password to be able to use it (so I guess public key authentication will not work, because it stores the keys on the systems).
I do not want any command-line solution that stores the password in some log on the system or leaves it world-visible via ps aux or the like.
Opening Bounty
From all the answers so far and my analysis of those solutions, it looks like everything other than public key authentication is insecure. I am not yet sure whether using expect is insecure; if it isn't, I think it is the correct solution for me. In that case, I am getting "command not found" errors while trying to do that, as already commented on one of the answers.
From http://www.debianadmin.com/sshpass-non-interactive-ssh-password-authentication.html -
First and foremost, users of sshpass should realize that ssh's insistence on only getting the password interactively is not without reason. It is close to impossible to securely store the password, and users of sshpass should consider whether ssh's public key authentication provides the same end-user experience, while involving less hassle and being more secure.
So, is it not possible to securely run multiple ssh/scp commands by entering the ssh/ftp password only once at runtime? Please read my Requirement section again.
Also, can anyone explain this -
In particular, people writing programs that are meant to communicate the password programmatically are encouraged to use an anonymous pipe and pass the pipe's reading end to sshpass using the -d option.
Does this mean anything is possible?
Indeed, you'll definitely want to look into setting up ssh keys rather than saving a password in a bash script. If the key is passwordless, then no user input is required to ssh/scp. You just set it up to use the key on both ends and voila, secured communication.
However, I'll get downvoted to hell if I don't say this: many consider passwordless ssh keys to be a Bad Idea(TM). If anybody gets their hands on the keys, they have full access. This means you are relying on other security measures, such as file permissions, to keep your password safe.
Also, look into ssh-agent. It lets you have a password-protected ssh key while only typing the passphrase once: the agent holds the unlocked key and uses it when necessary. On my Linux box at home, I have ssh-agent set up to run in my .xinitrc file so that it prompts me once and then starts X. YMMV.
UPDATE:
With regards to your requirements, password-protected public key authentication + ssh-agent still seems to fit. Only the developers privy to the SSH/FTP password could start up ssh-agent and type in the password; ssh-agent would then manage the keys for the rest of the session, never requiring interaction again.
Of course, how it stores them is another matter entirely. IANASE, but for more information on the security concerns of using ssh-agent, I found Symantec's article to be pretty informative: http://www.symantec.com/connect/articles/ssh-and-ssh-agent
"The ssh-agent creates a unix domain
socket, and then listens for
connections from /usr/bin/ssh on this
socket. It relies on simple unix
permissions to prevent access to this
socket, which means that any keys you
put into your agent are available to
anyone who can connect to this socket.
[ie. root]" ...
"however, [..] they are only usable
while the agent is running -- root
could use your agent to authenticate
to your accounts on other systems, but
it doesn't provide direct access to
the keys themselves. This means that
the keys can't be taken off the
machine and used from other locations
indefinitely."
Hopefully you're not in a situation where you have to work on a system whose root you don't trust.
The right way to do that is as follows:
Ensure that all your users are using ssh-agent (nowadays this is the default on most Linux systems). You can check by running the following command:
echo $SSH_AUTH_SOCK
If that variable is not empty, the user is using ssh-agent.
Create a pair of authentication keys for every user, ensuring they are protected by a non-empty passphrase.
Install the public part of the authentication keys on the remote host so that users can log in there.
You are done!
Now, the first time a user wants to log into the remote machine from some session, they will have to enter the passphrase for their private key.
In later logins from the same session, ssh-agent will provide the unlocked key for authentication on behalf of the user, who will not be required to enter the passphrase again.
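A minimal sketch of the whole flow, with user@remote-host as a placeholder:
ssh-keygen -t rsa -b 4096       # choose a non-empty passphrase when prompted
ssh-copy-id user@remote-host    # install the public key on the remote host
ssh-add                         # unlock the key once; the agent caches it
ssh user@remote-host            # no passphrase prompt for the rest of the session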
Ugh. I hit the man pages hard for this. Here's what I got:
Use this code near the beginning of the script to silently get the ssh password:
read -p "Password: " -s SSHPASS # *MUST* be SSHPASS
export SSHPASS
And then use sshpass for ssh like so:
sshpass -e ssh username@hostname
Hope that helps.
You can do this with expect (see Using expect to pass a password to ssh), or, as said already, use public key authentication instead if that's a viable option.
For password authentication, as you mentioned in your description, you can use sshpass. On Ubuntu, you can install it with sudo apt-get install sshpass.
For public/private key-pair based authentication:
First generate keys using ssh-keygen.
Then copy your key to the remote machine, using ssh-copy-id username@remote-machine.
Once copied, subsequent logins should not ask for a password.
Expect is insecure
It drives an interactive session. If you were to pass a password via expect, it would be no different from typing the password on the command line, except that the expect script has to retrieve the password from somewhere. It's typically insecure because people put the password in the script or in a config file.
It's also notoriously brittle, because it waits on particular output as the event mechanism for input.
ssh-agent
ssh-agent is a fine solution if the script will always be driven manually. If someone is logged in to drive the execution of the script, then an agent is a good way to go. It is not a good solution for automation, because an agent implies a session; you usually don't initiate a session to automatically kick off a script (e.g. cron).
ssh command keys
SSH command keys are your best bet for an automated solution. They don't require a session, and the command key restricts what runs on the server to only the command specified in authorized_keys. They are also typically set up without passwords. This can be a difficult solution to manage if you have thousands of servers, but if you only have a few it's pretty easy to set up and manage; see the sketch below.
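For illustration, a forced-command entry in the remote ~/.ssh/authorized_keys might look like this (the key material and script path are hypothetical):
command="/usr/local/bin/nightly-backup.sh",no-port-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3... batch@client
With this entry, the key can only ever run nightly-backup.sh, regardless of what command the client requests.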
service ssh accounts
I've also seen setups with password-less service accounts. Instead of the command entry in the authorized_keys file, an alternative mechanism is used to restrict access/commands. These solutions often use sudo or restricted shells, but I think they are more complicated to manage correctly and therefore tend to be more insecure.
host-to-host automatic authentication
You can also set up host-to-host automatic authentication, but there are a lot of things to get right: setting up your network properly, using a bastion host for host key dissemination, proper ssh server configuration, and so on. As a result, this is not a solution I recommend unless you know what you're doing and have the capacity and ability to set everything up correctly and maintain it.
For those for whom setting up a key pair is not an option and who absolutely need to perform password authentication, use $SSH_ASKPASS:
SSH_ASKPASS - If ssh needs a passphrase, it will read the passphrase from the current terminal if it was run from a terminal. If ssh does not have a terminal associated with it but DISPLAY and SSH_ASKPASS are set, it will execute the program specified by SSH_ASKPASS and open an X11 window to read the passphrase. This is particularly useful when calling ssh from a .xsession or related script. (Note that on some machines it may be necessary to redirect the input from /dev/null to make this work.)
E.g.:
$ cat <<EOF >password.sh
#!/bin/sh
echo 'password'
EOF
$ chmod 500 password.sh
$ echo $(DISPLAY=bogus SSH_ASKPASS=$(pwd)/password.sh setsid ssh user@host id </dev/null)
See also Tell SSH to use a graphical prompt for key passphrase.
Yes, you want pubkey authentication.
Today, the only way I was able to do this in a bash script run via crontab was like this:
eval $(keychain --eval --agents ssh id_rsa id_dsa id_ed25519)
source $HOME/.keychain/$HOSTNAME-sh
This is with the ssh agent already running; getting it to that state required entering the passphrase once.
ssh, ssh-keygen, ssh-agent, ssh-add and a correct sshd configuration on the remote systems (typically /etc/ssh/sshd_config) are the necessary ingredients for securing access to remote systems.
First, a private/public key pair needs to be generated with ssh-keygen. The result of the keygen process is two files: the public key and the private key.
The public key file, usually stored in ~/.ssh/id_dsa.pub (or ~/.ssh/id_rsa.pub for RSA keys), needs to be copied to each remote system that will be granting remote access to the user.
The private key file should remain on the originating system, or on a portable USB ("thumb") drive that is referenced from the sourcing system.
When generating the key pair, a passphrase is used to protect it from usage by non-authenticated users. When establishing an ssh session for the first time, the private key can only be unlocked with the passphrase. Once unlocked, it is possible for the originating system to remember the unlocked private key with ssh-agent. Some systems (e.g., Mac OS X) will automatically start up ssh-agent as part of the login process, and then do an automatic ssh-add -k that unlocks your private ssh keys using a passphrase previously stored in the keychain file.
Connections to remote systems can be direct, or proxied through ssh gateways. In the former case, the remote system only needs to have the public key corresponding to the available unlocked private keys. In the case of using a gateway, the intermediate system must have the public key as well as the eventual target system. In addition, the original ssh command needs to enable agent forwarding, either by configuration in ~/.ssh/config or by command option -A.
For example, to login to remote system "app1" through an ssh gateway system called "gw", the following can be done:
ssh -At gw ssh -A app1
or the following stanzas placed in the ~/.ssh/config file:
Host app1
    ForwardAgent = yes
    ProxyCommand = ssh -At gw nc %h %p 2>/dev/null
which runs netcat (aka nc) on the ssh gateway as a network pipe.
The above setup will allow very simple ssh commands, even through ssh gateways:
ssh app1
Sometimes, even more important than terminal sessions are scp and rsync commands for moving files around securely. For example, I use something like this to synchronize my personal environment to a remote system:
rsync -vaut ~/.env* ~/.bash* app1:
Without the config file and nc proxy command, the rsync gets a little more complicated (the ProxyCommand is given inline; %h and %p are expanded by ssh):
rsync -vaut -e 'ssh -A -o ProxyCommand="ssh -At gw nc %h %p"' ~/.env* ~/.bash* app1:
None of this will work correctly unless the remote systems' sshd_config is configured correctly. One such configuration is to remove "root" access via ssh, which improves tracking and accountability when several staff members can perform root functions.
In unattended batch scripts, a special ssh key pair needs to be generated for the non-root userid under which the scripts are run. Just as with ssh session management, the batch user's key pair needs to be deployed similarly, with the public key copied to the remote systems and the private key residing on the source system.
The private key can be locked with a passphrase or left unlocked, as desired by the system managers and/or developers. The way to use the special batch ssh key, even in a script running under root, is to add the "ssh -i ~batch/.ssh/id_dsa" command option to all remote access commands. For example, to copy a file within a script using the special "batch" user access:
rsync -vaut -e 'ssh -i ~batch/.ssh/id_dsa -A -o ProxyCommand="ssh -At gw nc %h %p"' $sourcefiles batch@app2:/Sites/www/
This causes rsync to use a special ssh command as the remote access shell. The special-case ssh command uses the "batch" user's DSA private key as its identity. The rsync command's target remote system will be accessed using the "batch" user.
