Ansible root/password login

I'm trying to use Ansible to provision a server and the first thing I want to do is test the ssh access. If I use ssh directly I can log in fine...
ssh root@server
root@backups's password:
If I use Ansible I can't...
user@ansible:~$ ansible backups -m ping --user root --ask-pass
SSH password:
backups | UNREACHABLE! => {
    "changed": false,
    "msg": "Invalid/incorrect password: Permission denied, please try again.",
    "unreachable": true
}
The password I'm using is correct - 100%.
Before anyone suggests using SSH keys - that's part of what I'm looking to automate.

The issue was caused by the getting started documentation setting a trap.
It instructs you to create an inventory file listing your servers, to use ansible all -m ping to ping those servers, and to use the -u switch to change the remote user.
What it doesn't tell you is that if, like me, not all your servers have the same user, the advised way to specify a user per server is in the inventory file...
server1 ansible_connection=ssh ansible_user=user1
server2 ansible_connection=ssh ansible_user=user2
server3 ansible_connection=ssh ansible_user=user3
I was provisioning a server, and the only user I had available to me at the time was root. But trying to do ansible server3 --user root --ask-pass failed to authenticate. After a couple of wasted hours I discovered the --user switch is only effective if the inventory file doesn't specify a user. This is intended precedence behaviour. There are a few gripes about this in GitHub issues, but a firm 'intended behaviour' mantra is the response you get if you challenge it. It seems to go against the grain to me.
I subsequently discovered that you can specify -e 'ansible_ssh_user=root' to override the inventory user - I will see about creating a pull request to improve the docs.
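For example, the ad-hoc ping that failed with --user alone should go through once the inventory user is overridden explicitly (same server3 host as above; a sketch, not a definitive invocation):
ansible server3 -m ping --ask-pass -e 'ansible_ssh_user=root'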
While you're here, I might be able to save you some time with some further gotchas. This behaviour is the same if you use playbooks: there you can specify a remote_user, but this isn't honoured either - presumably also because of precedence. Again, you can override the inventory user with -e 'ansible_ssh_user=root'.
Finally, until I realised Linode could provision a server with an SSH key deployed, I was trying to pass the root password to an ad-hoc command. You have to supply the password as a hash, which gives you a long string that is almost certainly going to include $ characters, which bash will treat as substitutions. Make sure you escape (or single-quote) these.
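A minimal sketch of what I mean (values here are hypothetical; generate the hash with whatever tool you prefer, e.g. mkpasswd from the Debian/Ubuntu whois package):
mkpasswd --method=sha-512
# Single-quote the resulting hash on the command line so bash doesn't expand the $ signs in it:
ansible backups -m user -a 'name=deploy password=$6$somesalt$somehashedvalue' --user root --ask-pass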
The lineinfile module behaviour isn't intuitive either.

Write your hosts file like this. It will work.
192.168.2.4
192.168.1.4
[all:vars]
ansible_user=azureuser
Then execute the following command to do a dry run before applying any configuration: ansible-playbook --ask-pass -i hosts main.yml --check
Also create an ansible.cfg file and paste the following contents into it:
[defaults]
inventory = hosts
host_key_checking = False
Note: All three files, namely main.yml, ansible.cfg and hosts, must be in the same folder.
Also, this was tested with devices connected to a private network using private IPs; I haven't checked it with public IPs. If using Azure/AWS, create a test VM and connect it to the VPN of the other devices.
Note: You need to install the sshpass package to be able to authenticate with a password.
For Ubuntu: apt-get install sshpass
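A quick sanity check from the folder containing all three files might then look like this (a sketch reusing the hosts and main.yml above; the inventory path is picked up from ansible.cfg):
ansible all -m ping --ask-pass
ansible-playbook --ask-pass main.yml --check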

Related

Ansible: execute playbook from localhost through bastion host

I am a newbie to Ansible.
We are doing our deployments via Ansible, and a bastion host is provisioned for the deployments.
The current approach I am using is to clone the Ansible repo on the bastion host and run the commands from that folder.
My question: is it possible to run the Ansible code from the local machine through the bastion?
(Basically, avoid keeping the repo on the bastion host.)
Let's say you want to provision a couple of VMs 172.20.0.10 and 172.20.0.11 in your development environment going through your 172.20.0.1 bastion. Your inventory looks a bit like this
[development]
172.20.0.10
172.20.0.11
Then you can edit your ~/.ssh/config and add
Host bastion
    Hostname 172.20.0.1
    User youruser

Host 172.20.*
    ProxyJump bastion
    User youruser
Then you can test with ssh 172.20.0.10, which should land you in your first VM. If it works for SSH, Ansible will work the same way.
Note: you can run ansible with -vvv (or -vvvv) to see the SSH commands Ansible is running.
Note 2: ProxyJump requires a reasonably recent OpenSSH (7.3 or later).
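For instance, once the ~/.ssh/config entries are in place, a quick connectivity check against the development group above (a sketch; "inventory" is whatever file holds that group, and -vvvv shows the underlying SSH invocation):
ansible development -i inventory -m ping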
Using this data
remote host: 10.0.1.121
remote user: application_user
ssh key: app_ssh_key
bastion host: 212.34.345.12
bastion user: bastian_user
ssh key: bastian_ssh_key
and using keys for SSH access (store the keys in secure storage, not alongside the Ansible playbook).
As a single ssh command:
$ ssh application_user@10.0.1.121 -i path/to/app_ssh_key \
-o ProxyCommand="ssh -q bastian_user@212.34.345.12 -i path/to/bastian_ssh_key -W %h:%p"
In Ansible you can use two methods.
Method 1
Use variables per inventory machine/group, in order to have different connection options for different machines/groups.
Add to inventory file:
[remote-vm]
10.0.1.121
[remote-vm:vars]
ansible_ssh_user=application_user
ansible_ssh_private_key_file=path/to/app_ssh_key
ansible_ssh_common_args= -o ProxyCommand="ssh -q bastian_user@212.34.345.12 -i path/to/bastian_ssh_key -W %h:%p"
Method 2
Single configuration valid for all inventory machines.
Add to/replace in ansible.cfg:
[defaults]
remote_user = application_user
[ssh_connection]
ssh_args=-i path/to/app_ssh_key -o ProxyCommand="ssh -q bastian_user@212.34.345.12 -i path/to/bastian_ssh_key -W %h:%p"
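With either method in place, runs from your local machine are tunnelled through the bastion transparently, e.g. (a sketch; the inventory and playbook names are placeholders):
ansible remote-vm -i inventory -m ping
ansible-playbook -i inventory deploy.yml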

Ansible Missing sudo password [duplicate]

How do I specify a sudo password for Ansible in non-interactive way?
I'm running Ansible playbook like this:
$ ansible-playbook playbook.yml -i inventory.ini \
--user=username --ask-sudo-pass
But I want to run it like this:
$ ansible-playbook playbook.yml -i inventory.ini \
--user=username --sudo-pass=12345
Is there a way? I want to automate my project deployment as much as possible.
The docs strongly recommend against setting the sudo password in plaintext:
As a reminder passwords should never be stored in plain text. For information on encrypting your passwords and other secrets with Ansible Vault, see Encrypting content with Ansible Vault.
Instead, you should be using --ask-become-pass on the command line when running ansible-playbook.
Previous versions of Ansible have used --ask-sudo-pass and sudo instead of become.
You can pass a variable on the command line via --extra-vars "name=value". The sudo password variable is ansible_sudo_pass, so your command would look like:
ansible-playbook playbook.yml -i inventory.ini --user=username \
--extra-vars "ansible_sudo_pass=yourPassword"
Update 2017: Ansible 2.2.1.0 now uses the variable ansible_become_pass; either seems to work.
Update 2021: ansible_become_pass still works, but nowadays we should use -e instead of --extra-vars.
Probably the best way to do this - assuming that you can't use the NOPASSWD solution provided by scottod - is to use Mircea Vutcovici's solution in combination with Ansible Vault.
For example, you might have a playbook something like this:
- hosts: all
  vars_files:
    - secret
  tasks:
    - name: Do something as sudo
      service: name=nginx state=restarted
      sudo: yes
Here we are including a file called secret which will contain our sudo password.
We will use ansible-vault to create an encrypted version of this file:
ansible-vault create secret
This will ask you for a password, then open your default editor to edit the file. You can put your ansible_sudo_pass in here.
e.g.: secret:
ansible_sudo_pass: mysudopassword
Save and exit; you now have an encrypted secret file which Ansible is able to decrypt when you run your playbook. Note: you can edit the file with ansible-vault edit secret (and enter the password that you used when creating the file).
The final piece of the puzzle is to provide Ansible with a --vault-password-file which it will use to decrypt your secret file.
Create a file called vault.txt and in that put the password that you used when creating your secret file. The password should be a string stored as a single line in the file.
From the Ansible Docs:
.. ensure permissions on the file are such that no one else can access your key and do not add your key to source control
Finally: you can now run your playbook with something like
ansible-playbook playbook.yml -u someuser -i hosts --sudo --vault-password-file=vault.txt
The above is assuming the following directory layout:
.
|_ playbook.yml
|_ secret
|_ hosts
|_ vault.txt
You can read more about Ansible Vault in the docs: https://docs.ansible.com/playbooks_vault.html (archived), or the current version:
https://docs.ansible.com/ansible/latest/user_guide/vault.html
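Note that newer Ansible versions deprecate the --sudo flag in favour of --become, and the become password variable is ansible_become_pass rather than ansible_sudo_pass; with those renames, the equivalent run would look roughly like this (a sketch using the same file layout):
ansible-playbook playbook.yml -u someuser -i hosts --become --vault-password-file=vault.txt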
Looking at the code (runner/__init__.py), I think you can probably set it in your inventory file :
[whatever]
some-host ansible_sudo_pass='foobar'
There seems to be some provision for it in the ansible.cfg config file too, but it's not implemented right now (constants.py).
I don't think Ansible will let you specify a password in the flags the way you want to.
There may be somewhere in the configs this could be set, but that would make using Ansible less secure overall and would not be recommended.
One thing you can do is to create a user on the target machine and grant them passwordless sudo privileges to either all commands or a restricted list of commands.
If you run sudo visudo and enter a line like the below, then the user 'privilegedUser' should not have to enter a password when they run something like sudo service xxxx start:
%privilegedUser ALL= NOPASSWD: /usr/bin/service
The sudo password is stored as a variable called ansible_sudo_pass.
You can set this variable in a few ways:
Per host, in your inventory hosts file (inventory/<inventoryname>/hosts)
[server]
10.0.0.0 ansible_sudo_pass=foobar
Per group, in your inventory groups file (inventory/<inventoryname>/groups)
[server:vars]
ansible_sudo_pass=foobar
Per group, in group vars (group_vars/<groupname>/ansible.yml)
ansible_sudo_pass: "foobar"
Per group, encrypted (ansible-vault create group_vars/<groupname>/ansible.yml)
ansible_sudo_pass: "foobar"
You can set the password for a group or for all servers at once:
[all:vars]
ansible_sudo_pass=default_sudo_password_for_all_hosts
[group1:vars]
ansible_sudo_pass=default_sudo_password_for_group1
I was tearing my hair out over this one, but I finally found a solution that does what I want:
1 encrypted file per host containing the sudo password
/etc/ansible/hosts:
[all:vars]
ansible_ssh_connection=ssh ansible_ssh_user=myuser ansible_ssh_private_key_file=~/.ssh/id_rsa
[some_service_group]
node-0
node-1
then you create for each host an encrypted var-file like so:
ansible-vault create /etc/ansible/host_vars/node-0
with content
ansible_sudo_pass: "my_sudo_pass_for_host_node-0"
How you supply the vault password (interactively via --ask-vault-pass, or via the config) is up to you.
Based on this, I suspect you could just encrypt the whole hosts file...
A more savvy way to do this is to store your sudo password in a secure vault such as LastPass or KeePass and then pass it to ansible-playbook using -e@. Instead of hardcoding the contents in an actual file, you can use the construct -e@<(...) to run a command in a subshell and redirect its output (stdout) to an anonymous file descriptor, effectively feeding the password to -e@<(...).
Example
$ ansible-playbook -i /tmp/hosts pb.yml \
-e@<(echo "ansible_sudo_pass: $(lpass show folder1/item1 --password)")
The above is doing several things; let's break it down.
ansible-playbook -i /tmp/hosts pb.yml - obviously running a playbook via ansible-playbook
$(lpass show folder1/item1 --password) - runs the LastPass CLI lpass and retrieves the password to use
echo "ansible_sudo_pass: ...password..." - takes the string 'ansible_sudo_pass: ' and combines it with the password supplied by lpass
-e@<(..) - puts the above together, and connects the subshell of <(...) as a file descriptor for ansible-playbook to consume.
Further improvements
If you'd rather not type that every time, you can simplify things like so. First create an alias in your .bashrc:
$ cat ~/.bashrc
alias asp='echo "ansible_sudo_pass: $(lpass show folder1/item1 --password)"'
Now you can run your playbook like this:
$ ansible-playbook -i /tmp/hosts pb.yml -e@<(asp)
References
https://docs.ansible.com/ansible/2.4/ansible-playbook.html#cmdoption-ansible-playbook-e
If you are comfortable with keeping passwords in plain text files, another option is to use a JSON file with the --extra-vars parameter (be sure to exclude the file from source control):
ansible-playbook --extra-vars "@private_vars.json" playbook.yml
Ansible has supported this option since 1.3.
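For illustration, such a private_vars.json could contain nothing more than the become password (the value here is a placeholder):
{
  "ansible_sudo_pass": "yourPassword"
}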
You can write the sudo password for your playbook in the hosts file like this:
[host-group-name]
host-name:port ansible_sudo_pass='*your-sudo-password*'
Ansible vault has been suggested a couple of times here, but I prefer git-crypt for encrypting sensitive files in my playbooks. If you're using git to keep your ansible playbooks, it's a snap. The problem I've found with ansible vault is that I inevitably end up coming across encrypted copies of the file that I want to work with and have to go decrypt it before I can work. git-crypt offers a nicer workflow IMO.
https://github.com/AGWA/git-crypt
Using this, you can put your passwords in a var in your playbook, and mark your playbook as an encrypted file in .gitattributes like this:
my_playbook.yml filter=git-crypt diff=git-crypt
Your playbook will be transparently encrypted on Github. Then you just need to either install your encryption key on the host you use to run ansible, or follow the instruction on the documentation to set it up with gpg.
There's a good Q&A on forwarding gpg keys like your ssh-agent forwards SSH keys here: https://superuser.com/questions/161973/how-can-i-forward-a-gpg-key-via-ssh-agent.
My hack to automate this was to use an environment variable and access it via --extra-vars="ansible_become_pass='{{ lookup('env', 'ANSIBLE_BECOME_PASS') }}'".
Export an env var, but avoid bash/shell history (prepend with a space, or other methods). E.g.:
export ANSIBLE_BECOME_PASS='<your password>'
Lookup the env var while passing the extra ansible_become_pass variable into the ansible-playbook, E.g.:
ansible-playbook playbook.yml -i inventories/dev/hosts.yml -u user --extra-vars="ansible_become_pass='{{ lookup('env', 'ANSIBLE_BECOME_PASS') }}'"
Good alternate answers:
@toast38coza: simply use a vaulted value for ansible_become_pass. This is decent. However, for paranoid teams that need to share Ansible vault passwords and execute Ansible plays with individual accounts, a shared vault password could be used to recover each other's operating system passwords (identity theft). Arguably, you need to trust your own team?
@slm's bash subshell output, sent to a temporary file descriptor and read into the Ansible variable via the @ prefix. Avoids bash history at least. Not sure, but hopefully the subshell echo doesn't get caught and exposed in audit logging (e.g. auditd).
You can use Ansible Vault, which will encrypt your password into an encrypted vault file. After that you can use the variable from the vault in playbooks.
Some documentation on ansible vault:
http://docs.ansible.com/playbooks_vault.html
We use one vault per environment. To edit the vault we run:
ansible-vault edit inventories/production/group_vars/all/vault
If you want to use a vault variable, you have to run ansible-playbook with parameters like:
ansible-playbook -s --vault-password-file=~/.ansible_vault.password
Yes, we store the vault password in a local directory in plain text, but it's no more dangerous than storing a root password for every system. The root passwords are inside the vault file, or you can manage access with a sudoers file for your user/group.
I recommend using a sudoers file on the server. Here is an example for the group admin:
%admin ALL=(ALL) NOPASSWD:ALL
Using Ansible 2.4.1.0, the following will work:
[all]
17.26.131.10
17.26.131.11
17.26.131.12
17.26.131.13
17.26.131.14
[all:vars]
ansible_connection=ssh
ansible_user=per
ansible_ssh_pass=per
ansible_sudo_pass=per
And just run the playbook with this inventory as:
ansible-playbook -i inventory copyTest.yml
You can use the sshpass utility as below:
$ sshpass -p "your pass" ansible pattern -m module -a args \
-i inventory --ask-sudo-pass
After five years, I can see this is still a very relevant subject. Somewhat mirroring leucos's answer, which I find the best in my case, this uses Ansible tools only (without any centralised authentication, tokens or whatever). It assumes you have the same username and the same public key on all servers. If you don't, of course, you'd need to be more specific and add the corresponding variables next to the hosts:
[all:vars]
ansible_ssh_user=ansible
ansible_ssh_private_key_file=home/user/.ssh/mykey
[group]
192.168.0.50 ansible_sudo_pass='{{ myserver_sudo }}'
ansible-vault create mypasswd.yml
ansible-vault edit mypasswd.yml
Add:
myserver_sudo: mysecretpassword
Then:
ansible-playbook -i inv.ini my_role.yml --ask-vault-pass --extra-vars '@mypasswd.yml'
At least this way you don't have to keep writing out the variables that point to the passwords.
Just call your playbook with --extra-vars "become_pass=Password"
become_pass=('ansible_become_password', 'ansible_become_pass')
Just an addendum, so nobody else goes through the annoyance I recently did:
AFAIK, the best solution is one along the general lines of toast38coza's above. If it makes sense to tie your password files and your playbook together statically, then follow his template with vars_files (or include_vars). If you want to keep them separate, you can supply the vault contents on the command line like so:
ansible-playbook --ask-vault-pass -e@<PATH_TO_VAULT_FILE> <PLAYBOOK_FILE>
That's obvious in retrospect, but here are the gotchas:
That bloody @ sign. If you leave it out, parsing will fail silently, and ansible-playbook will proceed as though you'd never specified the file in the first place.
You must explicitly import the contents of the vault, either with a command-line --extra-vars/-e or within your YAML code. The --ask-vault-pass flag doesn't do anything by itself (besides prompt you for a value which may or may not be used later).
May you include your "@"s and save an hour.
The above solution by @toast38coza worked for me, except that sudo: yes is now deprecated in Ansible.
Use become and become_user instead.
tasks:
  - name: Restart apache service
    service: name=apache2 state=restarted
    become: yes
    become_user: root
For newer versions:
Just run your playbook with the -K flag and it will ask you for your sudo password,
e.g. ansible-playbook yourPlaybookFile.yaml -K
From the docs:
To specify a password for sudo, run ansible-playbook with --ask-become-pass (-K for short)
Just a hint at another solution:
You can set up your Ansible user to run sudo without a password (this is the default on GCP VMs):
sudo visudo
Add a line like this (tom is the user):
tom ALL=(ALL) NOPASSWD:ALL
We can also use an expect block in Ansible to spawn a shell and customise it as per your needs:
- name: Run expect to INSTALL TA
  shell: |
    set timeout 100
    spawn /bin/sh -i
    expect -re "$ "
    send "sudo yum remove -y xyz\n"
    expect "$ "
    send "sudo yum localinstall -y {{ rpm_remotehost_path_for_xyz }}\n"
    expect "~]$ "
    send "\n"
    exit 0
  args:
    executable: /usr/bin/expect
If you are using the pass password manager, you can use the passwordstore lookup plugin, which makes this very easy.
Let's say you saved your user's sudo password in pass as
Server1/User
Then you can use the decrypted value like so:
{{ lookup('community.general.passwordstore', 'Server1/User') }}
I use it in my inventory:
---
servers:
  hosts:
    server1:
      ansible_become_pass: "{{ lookup('community.general.passwordstore', 'Server1/User') }}"
Note that you should be running gpg-agent so that you won't see a pinentry prompt every time a 'become' task is run.
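For reference, the entry is created in pass beforehand, and gpg-agent then caches the key passphrase across tasks (Server1/User is just the entry name used above):
pass insert Server1/User        # prompts for the sudo password and stores it encrypted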
You can pass it during playbook execution. The syntax is:
ansible-playbook -i inventory my.yml \
--extra-vars 'ansible_become_pass=YOUR-PASSWORD-HERE'
But that is not a good idea for security reasons. It is better to use Ansible Vault.
First update your inventory file as follows:
[cluster:vars]
k_ver="linux-image-4.13.0-26-generic"
ansible_user=vivek # ssh login user
ansible_become=yes # use sudo
ansible_become_method=sudo
ansible_become_pass='{{ my_cluser_sudo_pass }}'
[cluster]
www1
www2
www3
db1
db2
cache1
cache2
Next, create a new encrypted data file named passwd.yml by running the following command:
$ ansible-vault create passwd.yml
Set the password for the vault. After providing a password, the tool will start whatever editor you have defined with $EDITOR. Append the following:
my_cluser_sudo_pass: your_sudo_password_for_remote_servers
Save and close the file in vi/vim. Finally run playbook as follows:
$ ansible-playbook -i inventory --ask-vault-pass --extra-vars '@passwd.yml' my.yml
To edit the encrypted file again:
ansible-vault edit passwd.yml
To change the password of the encrypted file:
ansible-vault rekey passwd.yml
Very simple - you only need to add these to the variables file.
Example:
$ vim group_vars/all
And add these:
ansible_connection: ssh
ansible_ssh_user: rafael
ansible_ssh_pass: password123
ansible_become_pass: password123
This worked for me...
Create the file /etc/sudoers.d/90-init-users with a NOPASSWD entry:
echo "user ALL=(ALL) NOPASSWD:ALL" > 90-init-users
where "user" is your userid.

Not able to switch user in ansible

We have a system where we have user A and user B. We can switch to user B from A using "sudo su" command only. Direct login to B user is not allowed.
Now, from the Ansible master, we can log in as user A (the Ansible remote user) successfully. Our use case is that we have to run some commands as user B using Ansible, but we are failing to switch to user B and run those commands.
Our yml file looks like this (a task to copy Java to the target host):
- name: Copying Java jdk1.8.0_192
  remote_user: A
  become_user: B
  become: true
  become_method: su
  copy:
    src: /etc/ansible/jboss7-cluster/raw_template/jdk1.8.0_192.zip
    dest: "{{ java_install_dir }}"
Any inputs?
In your case, I believe the become_method should be the default sudo. Have you tried using that? If so, what is the result? Can you copy/paste the result here?
Also, can you try to run an ad hoc command against the host, and post the result here?
Something like this:
ansible -i inventory.ini -u A --become --become-user B -m ping myhost
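If sudo is indeed how A reaches B, the task from the question would look roughly like this with the default become method (a sketch, untested against your environment):
- name: Copying Java jdk1.8.0_192
  remote_user: A
  become: true
  become_user: B
  become_method: sudo
  copy:
    src: /etc/ansible/jboss7-cluster/raw_template/jdk1.8.0_192.zip
    dest: "{{ java_install_dir }}"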
And one more thing: note that there are some restrictions when using become to switch to a non-privileged user:
"In addition to the additional means of doing this securely, Ansible 2.1 also makes it harder to unknowingly do this insecurely. Whereas in Ansible 2.0.x and below, Ansible will silently allow the insecure behaviour if it was unable to find another way to share the files with the unprivileged user, in Ansible 2.1 and above Ansible defaults to issuing an error if it can’t do this securely. If you can’t make any of the changes above to resolve the problem, and you decide that the machine you’re running on is secure enough for the modules you want to run there to be world readable, you can turn on allow_world_readable_tmpfiles in the ansible.cfg file. Setting allow_world_readable_tmpfiles will change this from an error into a warning and allow the task to run as it did prior to 2.1."
And just as a side note: please avoid using sudo su - use sudo -i, or at least sudo su -, instead. These populate the environment correctly, unlike sudo su.

Ansible 2.0.0.2: ansible doesn't respect the "-u" switch

I'm probing a freshly installed Archlinux installation on a Raspberry PI 2 like so:
ansible -i PI2 arch -m setup -c paramiko -k -u alarm -vvvv
This reads to me as: run the setup module against that IP, connecting as the user "alarm" and asking for that user's password. However, the user that eventually attempts to connect is "root".
Here's the debug response:
Loaded callback minimal of type stdout, v2.0
<192.168.1.18> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 192.168.1.18
192.168.1.18 | UNREACHABLE! => {
    "changed": false,
    "msg": "ERROR! Authentication failed.",
    "unreachable": true
}
The inventory looks like this:
[arch]
192.168.1.18
Some things that may or may not be relevant are the following:
ssh logins via root are not permitted
sudo is not installed
default user and pass are "alarm" : "alarm"
no ssh key being copied to the machine hence the paramiko connection attempt
What is NOT ignored and leads to a successful connection is adding ansible_user=alarm to the IP line in the inventory file.
EDIT
Found this interesting passage in the official docs: http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable which states:
Another important thing to consider (for all versions) is that connection specific variables override config, command line and play specific options and directives. For example:
ansible_user will override -u and remote_user
The original question seems to remain though. Without any mention of ansible_user in the inventory, why is root being used instead of the user explicitly given via -u?
EDIT_END
Is this expected behaviour?
Thanks
You don't need to specify paramiko as the connection type, Ansible will figure that part out. You may have a group_vars directory with an ansible_user or ansible_ssh_user variable defined for this host which could be overriding the alarm user.
I was able to replicate your test on ansible 2.0.0.2 without any issues against a raspberry pi 2 running Raspian:
➜ ansible ansible -i PI2 arch -m setup -u alarm -vvvv -k
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback minimal of type stdout, v2.0
<192.168.1.84> ESTABLISH CONNECTION FOR USER: alarm on PORT 22 TO 192.168.1.84
CONNECTION: pid 78534 waiting for lock on 9
CONNECTION: pid 78534 acquired lock on 9
paramiko: The authenticity of host '192.168.1.84' can't be established.
The ssh-rsa key fingerprint is 54e12e8153e0319f450934d606dca7df.
Are you sure you want to continue connecting (yes/no)?
yes
CONNECTION: pid 78534 released lock on 9
<192.168.1.84> EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895 )" )
<192.168.1.84> PUT /var/folders/39/t0dm88q50dbcshd5nc5m5c640000gn/T/tmp5DqywL TO /home/alarm/.ansible/tmp/ansible- tmp-1454502995.07-263327298967895/setup
<192.168.1.84> EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/alarm/.ansible/ tmp/ansible-tmp-1454502995.07-263327298967895/setup; rm -rf "/home/alarm/.ansible/tmp/ansible-tmp-1454502995. 07-263327298967895/" > /dev/null 2>&1
192.168.1.84 | SUCCESS => {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"192.168.1.84"
],
I virtualenv'd myself an Ansible 1.9.4 sandbox, copied over the ansible.cfg and the inventory, and ran the command again. It kinda worked as expected:
⤷ ansible --version
ansible 1.9.4
configured module search path = None
(ANS19TEST)~/Documents/Code/VENVS/ANS19TEST
⤷ ansible -i PI2 arch -m setup -c paramiko -k -u alarm -vvvv
SSH password:
<192.168.1.18> ESTABLISH CONNECTION FOR USER: alarm on PORT 22 TO 192.168.1.18
<192.168.1.18> REMOTE_MODULE setup
From where I'm standing I'd say this is a bug. Maybe somebody can confirm?! This goes to the bugtracker then...
Cheers
EDIT
For brevity I omitted an important part of my inventory file, which ultimately is responsible for the behaviour. It looks like this:
[hypriot]
192.168.1.18 ansible_user=root
[arch]
192.168.1.18
Quote from the Ansible bugtracker:
The names used in the inventory is the key in a dictionary. So everything you put in there as host-specific variables will be merged into one big dictionary. That means that in some conflicting cases variables are superseded by other values.
You can prevent this by using different names for the same host (e.g. using IP address and hostname, or an alias or DNS-alias) and in that case you can still do what you like to do.
So my inventory looks like this now:
[hypriot]
hypriot_local ansible_host=192.168.1.18 ansible_user=root
[archlinux]
arch_local ansible_host=192.168.1.18
This works fine. The corresponding issue on the Ansible tracker is here: https://github.com/ansible/ansible/issues/14268
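With the hosts aliased like this, the original ad-hoc call picks up the -u user as intended (note the group is now called archlinux):
ansible -i PI2 archlinux -m setup -c paramiko -k -u alarm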

Resources