Ansible task command with credentials - ansible

I am wondering if during an Ansible task is it safe to send credentials (password, api key) in a command line task?
No one on the remote server should see the command line (and even less credentials).
Thank you.

If you do not trust the remote server, you should never expose sensitive credentials to it, since anyone with root access on that server can easily intercept traffic, files and memory allocated by you on that server. The easiest way for someone to get your secrets would be to dump the temporary files that Ansible creates to do its job on the remote server, since that requires only the privileges of the user you are connecting as!
There is a special environment variable called ANSIBLE_KEEP_REMOTE_FILES=1 used to troubleshoot problems. It should give you an idea about what information is actually stored by Ansible on remote disks, even if only for a few seconds. I executed the
ANSIBLE_KEEP_REMOTE_FILES=1 ansible -m command -a "echo 'SUPER_SECRET_INFO'" -i 127.0.0.1, all
command on my machine to see the files Ansible creates on the remote machine. After it ran, I saw a temporary file in my home directory, named ~/.ansible/tmp/ansible-tmp-1492114067.19-55553396244878/command.py
So let's grep out secret info:
grep SUPER_SECRET ~/.ansible/tmp/ansible-tmp-1492114067.19-55553396244878/command.py
Result:
ANSIBALLZ_PARAMS = '{"ANSIBLE_MODULE_ARGS": {"_ansible_version": "2.2.2.0", "_ansible_selinux_special_fs": ["fuse", "nfs", "vboxsf", "ramfs"], "_ansible_no_log": false, "_ansible_module_name": "command", "_raw_params": "echo \'SUPER_SECRET_INFO\'", "_ansible_verbosity": 0, "_ansible_syslog_facility": "LOG_USER", "_ansible_diff": false, "_ansible_debug": false, "_ansible_check_mode": false}}'
As you can see, nothing is safe from prying eyes! So if you are really concerned about your secrets, don't use anything critical on suspect hosts; use one-time passwords, keys or revocable tokens to mitigate this issue.
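To illustrate the point, here are a couple of generic checks any local user (or root) could run on the remote host while your task executes; the PID is a placeholder:
# Command lines of running processes are visible to every local user:
ps -eo user,pid,args | grep -i secret
# root can additionally read a process's environment (and, with more effort, its memory):
sudo cat /proc/1234/environ | tr '\0' '\n'   # 1234 is a placeholder PID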

It depends on how paranoid you are about these credentials. In general: no, it is not safe.
I guess the root user on the remote host can see anything.
For example, run strace -f -p$(pidof -s sshd) on the remote host and try to execute any command via ssh.
By default, Ansible writes all module invocations to syslog on the remote host; you can set no_log: true for a task to avoid this.
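To see what actually ends up in that log, a quick check on the managed host (log path and syslog tag are assumptions and vary by distro and Ansible version):
# Debian/Ubuntu style syslog file:
sudo grep 'ansible-command' /var/log/syslog
# systemd-journald systems:
sudo journalctl -t ansible-command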

Related

What is the usage of local-exec provisioner?

I am getting familiar with Terraform and Ansible through books. Could someone enlighten me about the following block of code?
provisioner "local-exec" {
command = "ansible-playbook -u ubuntu --key-file ansible-key.pem -T 300 -i '${self.public_ip},', app.yml"
}
The short answer is local-exec is for anything you want to do on your local machine instead of the remote machine.
You can do a bunch of different things:
write an ssh key into your ~/.ssh to access the server
run a sleep 30 or something to make sure the next commands wait a bit for your machine to provision
write logs to your local directory (last run, date completed, etc.)
write some env_vars to your local machine you can use to access the machine
the ansible example you provided
FYI, HashiCorp hates local-exec and remote-exec. If you talk to one of their devs, they will tell you it is a necessary evil. Other than maybe a sleep or writing this or that, avoid it for any stateful data.
I would interpret that as Terraform should execute a local command on the Control Node.
Reading the documentation about local-exec Provisioner it turns out that
The local-exec provisioner invokes a local executable after a (annot.: remote) resource is created. This invokes a process on the machine running Terraform ...
and not on the Remote Resource.
So after Terraform has, for example, created a virtual machine, it calls an Ansible playbook to continue provisioning it.
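For illustration, this is roughly what that provisioner ends up running on the machine executing Terraform once the instance exists (the IP is a placeholder standing in for ${self.public_ip}); note the trailing comma, which tells ansible-playbook to treat the value as an inline inventory list rather than the path to an inventory file:
ansible-playbook -u ubuntu --key-file ansible-key.pem -T 300 \
  -i '203.0.113.10,' app.yml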

How to repair sshkey pairs after recreating global ssh keys with Ansible

In a nutshell, after deleting then recreating new global ssh keys on a managed host as part of an ansible play, the shared ssh keys between the controller and the host break. I would like to know a superior method to "fix" this issue and regain the original ssh key trust using ansible itself. Unfortunately this will require some explanation.
Basically, as a start, right now I don't have Ansible set up when a new image is deployed. To remedy that, I have created a bash script utilizing expect, which neatly does two things on that new managed host:
Creates an ansible account with appropriate sudo permissions
Creates an ssh key pair between the controller and the managed host.
That's it, and that's all; however, it does require manual input at this time as to the IP of the host to be run on. We now have a desired state from which Ansible works well via ssh. However, at 328 lines of code, the script seems cumbersome for checking and doing this procedure; more on this later.
The issue starts because the host/server is deployed from an image, so the global ssh host keys need to be recreated on each one so that they do not share the same set. The fix for this part of the issue is a simple two steps (sketched below):
Find and delete all ssh_host_* files in the directory /etc/ssh/
Run the command /usr/bin/ssh-keygen -A to generate new global ssh keys.
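A minimal shell sketch of those two steps, run as root on the managed host (the sshd restart is an assumption so the new keys are served immediately; the unit may be named ssh or sshd depending on the distro):
rm -f /etc/ssh/ssh_host_*    # step 1: remove the old host keys
/usr/bin/ssh-keygen -A       # step 2: generate a fresh set of host keys
systemctl restart sshd       # assumed: make sshd pick up the new keys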
However, we now have a problem: once the current ssh connection to the managed host is broken, we can no longer connect to our managed hosts, as the known_hosts file on the controller now has keys that don't match. If you do nothing else, you get a prompt to verify the remote key again because it has "changed", and you can't continue until you do (stopping all playbooks from functioning). Or, if you try to clear the IP out of the known_hosts file on the controller and put it back in, you get the lovely message below:
"changed": false,
"msg": "Failed to connect to the host via ssh: ###########################################################\r\n# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! ***SNIP*** You can use following command to remove the offending key:\r\nssh-keygen -R 10.200.5.4 -f /home/ansible/.ssh/known_hosts\r\nECDSA host key for 10.200.5.4 has changed and you have requested strict checking.\r\nHost key verification failed.",
"unreachable": true
So now I have an issue, and there must be a few commands I can use with ssh-keygen and/or ssh-keyscan to fix this mess cleanly. However, for the life of me I can't figure it out. My only recourse now is to re-run the bash script which initially sets this all up and replace everything on the controller/host ssh-key-wise. This seems like overkill; I can't believe that is necessary.
My only hope now is that someone else has an idea how to solve this cleanly and permanently without manual intervention. Otherwise, the only thing I can do is set the ansible_ssh_common_args: "-o StrictHostKeyChecking=no" fact and run the commands my script does, only in playbook form. I can't believe there aren't any modules which can accomplish this. I tried the known_hosts module, but either I don't know how to use it properly, or it doesn't have this functionality. (It also has the annoying property of changing my known_hosts file to root ownership, which I must then change back.)
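For reference, a minimal sketch of the ssh-keygen/ssh-keyscan combination mentioned above, run on the controller (the IP is borrowed from the error message as a placeholder); note that ssh-keyscan blindly trusts whatever key the host presents at scan time:
HOST=10.200.5.4                                 # placeholder managed-host IP
ssh-keygen -R "$HOST" -f ~/.ssh/known_hosts     # drop the stale entry
ssh-keyscan -H "$HOST" >> ~/.ssh/known_hosts    # record the regenerated host keys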
If anyone can help that would be fantastic! Thanks in advance!
The below is not strictly needed as it's extra text clogging up the works, but it does illustrate how the bash script fixes this issue and maybe give some insight on a better solution:
In short, it generates an ssh public and private key, attaches the hostname to them, creates an ssh config identity file using a heredoc, puts them in the proper spots, and then copies the public key over to the managed host in question.
The code snippets below show how this is accomplished. This is not the entire script, just the relevant parts:
#HOMEDIR is /home/ansible
#THISHOST is the IP of the managed host in the run. Yes, we ONLY use IPs, there is no DNS.
cd "$HOMEDIR"
rm -f $HOMEDIR/.ssh/id_rsa
ssh-keygen -t rsa -f "$HOMEDIR"/.ssh/id_rsa -q -P ""
sudo mkdir -p "$HOMEDIR"/.ssh/rsa_inventory && sudo chown ansible:users "$HOMEDIR"/.ssh/rsa_inventory
cp -p "$HOMEDIR"/.ssh/id_rsa "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa
cp -p "$HOMEDIR"/.ssh/id_rsa.pub "$HOMEDIR"/.ssh/rsa_inventory/$THISHOST-id_rsa.pub
#Heredocs implementation of the ssh config identity file:
cat <<EOT >> /home/ansible/.ssh/config
Host $THISHOST $THISHOST
HostName $THISHOST
IdentityFile ~/.ssh/rsa_inventory/${THISHOST}-id_rsa
User ansible
EOT
#Define the variable earlier, before the expect script is run, so it makes sense in the next snippet:
ssh_key=$( cat "$HOMEDIR"/.ssh/id_rsa.pub )
#Snippet in the expect script where it echoes the public ssh key over to the managed host from the controller.
send "sudo echo '"$ssh_key"' >> /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
send "sudo chmod 644 /home/ansible/.ssh/authorized_keys\n"
expect -re {:~> *$}
#etc etc, so on and so forth, properly setting attributes on this file.
Now things work with passwordless ssh as they should. Until they are re-ruined by the global ssh key replacement.

Ansible: transfer a file from Windows server A to Windows server B after running an Ansible playbook on server_ansible

Good day. I have a Linux Ansible server (called server_ansible), and I would like to transfer a file from Windows server A to Windows server B.
How could I do this?
I tried win_copy but it did not work.
Thank you.
I've had the same problem and resorted to a task with a shell command, using scp with the -3 option to transfer between two remote hosts:
scp -3 remote_user@ip_remote1:/path_to_source remote_user@remote2:/path_to_destination
You do, however, need to set up ssh keys first (with -3, from the machine running scp to both hosts).
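A minimal sketch of that approach, run from the machine doing the copy (usernames, addresses and paths are placeholders, and both servers are assumed to accept key-based SSH); with -3 the data is relayed through this machine, so only it needs keys to both endpoints:
ssh-copy-id -i ~/.ssh/id_rsa.pub remote_user@ip_remote1
ssh-copy-id -i ~/.ssh/id_rsa.pub remote_user@remote2
scp -3 remote_user@ip_remote1:/path_to_source remote_user@remote2:/path_to_destination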

Terminate scp session if key is invalid, without prompting for password

I have a script to monitor the remote status of my servers. For this I run a script on each remote machine, which logs information about uptime, CPU load and distro version to a log file. The script on my master server queries these servers by doing an scp of the files like this: scp -i ~/.ssh/id_rsa pingmonitor@IP:~/pingmon.dat ./. I have some machines from which this information is to be queried, and others where the remote servers do not supply this information.
What I want to do is run an scp with the private key on the master server, and if successful, the file is received. If the key has not been added to the remote server, scp should fail, and no more is to be done.
The problem is that if the server has not been setup for ssh access previously (not available in known_hosts), it stalls the script, and prompts: "The authenticity of host can't be established. Are you sure you want to continue connecting (yes/no)?". If yes was entered, it then demands the user password about 3 times.
I tried to remove the yes/no and password prompts by piping from a file which contains three newlines, but that does not stop it from asking for a password.
How can I automate the script such that if a private key is invalid, it stops the session and returns control to the script, without stalling and asking for a password?
I'm using Debian Wheezy.
Set PasswordAuthentication=no in your ssh config (most likely in ~/.ssh/config). In case this doesn't work, you can also set NumberOfPasswordPrompts=0 so you won't be prompted for a password.
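A hedged sketch combining those options with the question's scp command (the options can also go in ~/.ssh/config); BatchMode additionally makes ssh fail instead of prompting when the host key is unknown:
# Fails immediately instead of prompting if the key is rejected or the host is unknown.
scp -o BatchMode=yes -o PasswordAuthentication=no -o NumberOfPasswordPrompts=0 \
    -i ~/.ssh/id_rsa pingmonitor@IP:~/pingmon.dat ./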

Using a variable's value as password for scp, ssh etc. instead of prompting for user input every time

AFAIK, the commands ssh or scp do not have/take a password parameter. Otherwise I could keep the password in a shell variable and probably get rid of the enter password prompt. If I write an scp command in my shell script, it prompts the user to input the password. I have multiple ssh and scp commands in my script and I do not want the user to enter the password every time. I would prefer to save the password in a shell variable in the beginning (by asking password once), then use it for every ssh or scp.
I read about "public key identification" in this question. Is it related to the solution I am looking for?
Update
I read in How to use ssh command in shell script? why it is unsafe to specify passwords on the command line. Does using expect also store the password somewhere world-visible (e.g. via ps aux)? Is that the security issue with using expect?
Further Explanation
To make it clearer: I am writing this shell script to automate code and database backups, code upload, running the necessary database queries, and everything else needed for a new version release of a LAMP project from a developer system to a remote live server. My shell script will live inside the main codebase of the project in every developer instance.
Requirement
I want all developers (who may be working from different remote systems) that know the SSH/FTP password to be able to use the script by entering that password only once, at run time. I would prefer the password to be the ssh/ftp password.
Note - I do not want other developers who don't know the SSH password to be able to use it (So I guess public key authentication will not work because it stores the passwords in the systems).
I do not want any command line solution which stores the password in some log in the system and can be world visible using ps aux or something.
Opening Bounty
From all the answers so far and my analysis of those solutions, it looks like, other than public key authentication, all others are insecure. I am not yet sure if using expect is insecure; otherwise, I think it is the correct solution for me. In that case, I am getting "command not found" errors while trying to do that, as already commented on one of the answers.
From http://www.debianadmin.com/sshpass-non-interactive-ssh-password-authentication.html -
First and foremost, users of sshpass should realize that ssh's insistence on only getting the password interactively is not without reason. It is close to impossible to securely store the password, and users of sshpass should consider whether ssh's public key authentication provides the same end-user experience, while involving less hassle and being more secure.
So, is it not possible to securely run multiple ssh/scp commands by entering the ssh/ftp password only once at runtime? Please read my Requirement section again.
Also, can anyone explain this -
In particular, people writing programs that are meant to communicate the password programmatically are encouraged to use an anonymous pipe and pass the pipe's reading end to sshpass using the -d option.
Does this mean anything is possible?
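For illustration only, a minimal bash sketch of what that quote describes: the password travels over an anonymous pipe whose read end (file descriptor 3 here) is handed to sshpass via -d, so it never appears in argv or in a file (user@host is a placeholder):
read -r -s -p "Password: " SSH_PASSWORD; echo
# fd 3 becomes the read end of an anonymous pipe fed by process substitution
sshpass -d3 ssh user@host uptime 3< <(printf '%s\n' "$SSH_PASSWORD")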
Indeed, you'll definitely want to look into setting up ssh keys, over saving a password in a bash script. If the key is passwordless, then no user input will be required to ssh/scp. You just set it up to use the key on both ends and voila, secured communication.
However, I'll get downvoted to hell if I don't say this. Many consider passwordless ssh keys to be a Bad Idea(TM). If anybody gets their hands on the keys, they have full access. This means that you are relying on other security measures, such as file permissions, to keep your password safe.
Also, look into ssh-agent. It allows you to set it up so that you have a password protected ssh-key, but you only need to type it in once and it will manage the password for the key for you and use it when necessary. On my linux box at home, I have ssh-agent set up to run in my .xinitrc file so that it prompts me once and then starts X. YMMV.
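A rough sketch of the same idea for a session startup script (the key path is a placeholder; an .xinitrc variant would typically do this before launching the window manager):
# Start an agent for this session and load the key once; later ssh/scp calls reuse it.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa    # prompts for the passphrase a single time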
UPDATE:
With regards to your requirements, password-protected public key authentication + ssh-agent still seems to fit. Only the developers privy to the SSH/FTP password could start up ssh-agent and type in the passphrase, and ssh-agent would manage the passphrases for the keys for the rest of the session, never requiring interaction again.
Of course, how it stores it is another matter entirely. IANASE, but for more information on security concerns of using ssh-agent, I found symantec's article to be pretty informative: http://www.symantec.com/connect/articles/ssh-and-ssh-agent
"The ssh-agent creates a unix domain
socket, and then listens for
connections from /usr/bin/ssh on this
socket. It relies on simple unix
permissions to prevent access to this
socket, which means that any keys you
put into your agent are available to
anyone who can connect to this socket.
[ie. root]" ...
"however, [..] they are only usable
while the agent is running -- root
could use your agent to authenticate
to your accounts on other systems, but
it doesn't provide direct access to
the keys themselves. This means that
the keys can't be taken off the
machine and used from other locations
indefinitely."
Hopefully you're not in a situation where you're trying to use an untrusted root's system.
The right way to do that is as follows:
Ensure that all your users are using ssh-agent (nowadays this is the default on most Linux systems). You can check by running the following command:
echo $SSH_AUTH_SOCK
If that variable is not empty, it means that the user is using ssh-agent.
Create a pair of authentication keys for every user, ensuring they are protected by a non-empty passphrase.
Install the public part of the authentication keys on the remote host so that users can log in there.
You are done!
Now, the first time a user wants to log into the remote machine from a session, they will have to enter the passphrase for their private key.
In later logins from the same session, ssh-agent will provide the unlocked key for authentication on behalf of the user, who will not be required to enter the passphrase again.
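A minimal sketch of steps 2 and 3 plus the one-time unlock (key type, paths and host are placeholders):
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519             # choose a non-empty passphrase when asked
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@remote-host  # install the public part on the remote host
ssh-add ~/.ssh/id_ed25519                              # unlock once; ssh-agent serves it afterwards
ssh user@remote-host                                   # no further passphrase prompt this session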
Ugh. I hit the man pages hard for this. Here's what I got:
Use this code near the beginning of the script to silently get the ssh password:
read -p "Password: " -s SSHPASS # *MUST* be SSHPASS
export SSHPASS
And then use sshpass for ssh like so:
sshpass -e ssh username@hostname
Hope that helps.
You can do this by using expect to pass a password to ssh, or, as said already, use public key authentication instead if that's a viable option.
For password authentication, as you mentioned in your description, you can use "sshpass". On Ubuntu, you can install it with "sudo apt-get install sshpass".
For public/private key-pair base authentication,
First generate keys using "ssh-keygen"
Then copy your key to the remote machine using "ssh-copy-id username@remote-machine"
Once copied, subsequent logins should not ask for a password (a quick check is sketched below).
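A quick way to verify that key-based login actually works (hostname is a placeholder); BatchMode makes ssh fail rather than fall back to a password prompt:
ssh -o BatchMode=yes username@remote-machine true && echo "key-based login OK"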
Expect is insecure
It drives an interactive session. If you were to pass a password via expect, it would be no different from typing a password on the command line, except that the expect script would have to retrieve the password from somewhere. It's typically insecure because people will put the password in the script, or in a config file.
It's also notoriously brittle because it waits on particular output as the event mechanism for input.
ssh-agent
ssh-agent is a fine solution if this is a script that will always be driven manually. If there is someone who will be logged in to drive the execution of the script, then an agent is a good way to go. It is not a good solution for automation, because an agent implies a session. You usually don't initiate a session to automatically kick off a script (i.e. cron).
ssh command keys
SSH command keys are your best bet for an automated solution. They don't require a session, and the command key restricts what runs on the server to only the command specified in authorized_keys (see the sketch below). They are also typically set up without passwords. This can be a difficult solution to manage if you have thousands of servers, but if you only have a few then it's pretty easy to set up and manage.
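For illustration, a forced-command entry in ~/.ssh/authorized_keys could look like the line below (the script path and key material are placeholders); whatever the client asks to run, only the listed command executes:
# ~/.ssh/authorized_keys on the server (single line):
command="/usr/local/bin/run-backup.sh",no-pty,no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAAC3...restofkey batch@controller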
service ssh accounts
I've also seen setups with password-less service accounts. Instead of the command entry in the authorized_keys file, an alternative mechanism is used to restrict access/commands. These solutions often use sudo or restricted shells. However, I think these are more complicated to manage correctly, and therefore tend to be more insecure.
host to host automatic authentication
You can also set up host-to-host automatic authentication, but there are a lot of things to get right to do this correctly: setting up your network properly, using a bastion host for host key dissemination, proper ssh server configuration, etc. As a result, this is not a solution I recommend unless you know what you're doing and have the capacity and ability to set everything up correctly and maintain it as such.
For those for whom setting up a keypair is not an option and who absolutely need to perform password authentication, use $SSH_ASKPASS:
SSH_ASKPASS - If ssh needs a passphrase, it will read the passphrase from the current terminal if it was run from a terminal. If ssh does not have a terminal associated with it but DISPLAY and SSH_ASKPASS are set, it will execute the program specified by SSH_ASKPASS and open an X11 window to read the passphrase. This is particularly useful when calling ssh from a .xsession or related script. (Note that on some machines it may be necessary to redirect the input from /dev/null to make this work.)
E.g.:
$ cat <<EOF >password.sh
#!/bin/sh
echo 'password'
EOF
$ chmod 500 password.sh
$ echo $(DISPLAY=bogus SSH_ASKPASS=$(pwd)/password.sh setsid ssh user@host id </dev/null)
See also Tell SSH to use a graphical prompt for key passphrase.
Yes, you want pubkey authentication.
Today, the only way I was able to do this in a bash script via crontab was like that:
eval $(keychain --eval --agents ssh id_rsa id_dsa id_ed25519)
source $HOME/.keychain/$HOSTNAME-sh
This is with the ssh agent already running; getting it to that state required entering the passphrase once.
ssh, ssh-keygen, ssh-agent, ssh-add and a correct configuration in /etc/ssh/sshd_config on the remote systems are the necessary ingredients for securing access to remote systems.
First, a private/public keypair needs to be generated with ssh-keygen. The result of the keygen process are two files: the public key and the private key.
The public key file, usually stored in ~/.ssh/id_dsa.pub (or ~/.ssh/id_rsa.pub, for RSA encryptions) needs to be copied to each remote system that will be granting remote access to the user.
The private key file should remain on the originating system, or on a portable USB ("thumb") drive that is referenced from the sourcing system.
When generating the key pair, a passphrase is used to protect it from usage by non-authenticated users. When establishing an ssh session for the first time, the private key can only be unlocked with the passphrase. Once unlocked, it is possible for the originating system to remember the unlocked private key with ssh-agent. Some systems (e.g., Mac OS X) will automatically start up ssh-agent as part of the login process, and then do an automatic ssh-add -k that unlocks your private ssh keys using a passphrase previously stored in the keychain file.
Connections to remote systems can be direct, or proxied through ssh gateways. In the former case, the remote system only needs to have the public key corresponding to the available unlocked private keys. In the case of using a gateway, the intermediate system must have the public key as well as the eventual target system. In addition, the original ssh command needs to enable agent forwarding, either by configuration in ~/.ssh/config or by command option -A.
For example, to login to remote system "app1" through an ssh gateway system called "gw", the following can be done:
ssh -At gw ssh -A app1
or the following stanzas placed in the ~/.ssh/config file:
Host app1
ForwardAgent = yes
ProxyCommand = ssh -At gw nc %h %p 2>/dev/null
which runs "net cat" (aka nc) on the ssh gateway as a network pipe.
The above setup will allow very simple ssh commands, even through ssh gateways:
ssh app1
Sometimes, even more important than terminal sessions are scp and rsync commands for moving files around securely. For example, I use something like this to synchronize my personal environment to a remote system:
rsync -vaut ~/.env* ~/.bash* app1:
Without the config file and nc proxy command, the rsync would get a little more complicated:
rsync -vaut -e 'ssh -A gw' ~/.env* ~/.bash* app1:
None of this will work correctly unless the remote systems' /etc/ssh/sshd_config is configured correctly. One such configuration is to remove "root" access via ssh, which improves tracking and accountability when several staff members can perform root functions.
In unattended batch scripts, a special ssh key-pair needs to be generated for the non-root userid under which the scripts are run. Just as with ssh session management, the batch user ssh key-pair needs to be deployed similarly, with the public key copied to the remote systems, and the private key residing on the source system.
The private key can be locked with a passphrase or unlocked, as desired by the system managers and/or developers. The way to use the special batch ssh key, even in a script running under root, is to use the "ssh -i ~/.ssh/id_dsa" command options with all remote access commands. For example, to copy a file within a script using the special "batch" user access:
rsync -vaut -e 'ssh -i ~batch/.ssh/id_dsa -A gw' $sourcefiles batch@app2:/Sites/www/
This causes rsync to use a special ssh command as the remote access shell. The special-case ssh command uses the "batch" user's DSA private key as its identity. The rsync command's target remote system will be accessed using the "batch" user.
