I'm trying to use Ansible to provision a server and the first thing I want to do is test the ssh access. If I use ssh directly I can log in fine...
ssh root@server
root@backups's password:
If I use Ansible I can't...
user@ansible:~$ ansible backups -m ping --user root --ask-pass
SSH password:
backups | UNREACHABLE! => {
"changed": false,
"msg": "Invalid/incorrect password: Permission denied, please try again.",
"unreachable": true
}
The password I'm using is correct - 100%.
Before anyone suggests using SSH keys - that's part of what I'm looking to automate.
The issue was caused by the getting started documentation setting a trap.
It instructs you to create an inventory file with servers, use ansible all -m ping to ping those servers and to use the -u switch to change the remote user.
What it doesn't tell you is that if, like me, not all your servers have the same user, the advised way to specify a user per server is in the inventory file...
server1 ansible_connection=ssh ansible_user=user1
server2 ansible_connection=ssh ansible_user=user2
server3 ansible_connection=ssh ansible_user=user3
I was provisioning a server, and the only user I had available to me at the time was root. But trying to run ansible server3 --user root --ask-pass failed to authenticate. After a couple of wasted hours I discovered the --user switch is only effective if the inventory file doesn't specify a user. This is intended precedence behaviour. There are a few gripes about this in GitHub issues, but a firm 'intended behaviour' mantra is the response you get if you challenge it. It seems to go against the grain to me.
I subsequently discovered that you can specify -e 'ansible_ssh_user=root' to override the inventory user - I will see about creating a pull request to improve the docs.
While you're here, I might be able to save you some time with some further gotchas. This behaviour is the same if you use playbooks: you can specify a remote_user there, but it isn't honoured - presumably also because of precedence. Again, you can override the inventory user with -e 'ansible_ssh_user=root'.
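For example, a sketch of an ad-hoc run with the override (server3 being the host alias from the inventory above):
ansible server3 -m ping --ask-pass -e 'ansible_ssh_user=root'
Because extra vars passed with -e take the highest precedence, this connects as root even though the inventory names a different user.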
Finally, until I realised Linode could provision a server with an SSH key already deployed, I was trying to pass the root password to an ad-hoc command. You have to supply the password in encrypted (hashed) form, which gives you a long string that will almost certainly contain $ characters, which bash treats as substitutions. Make sure you escape or quote these.
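As an illustration, here is a sketch with a made-up, shortened hash (the kind produced by e.g. mkpasswd --method=sha-512); the single quotes stop bash from expanding the $ signs before Ansible sees them:
ansible server3 -m user -a 'name=root password=$6$somesalt$somelonghash' --user root --ask-pass
Without the quoting (or backslash escapes), bash would try to substitute $6, $somesalt and $somelonghash and the module would receive a mangled hash.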
The lineinfile module behaviour isn't intuitive either.
Write your hosts file like this. It will work.
192.168.2.4
192.168.1.4
[all:vars]
ansible_user=azureuser
Then execute the following command to do a dry run before making any changes: ansible-playbook --ask-pass -i hosts main.yml --check
Also create an ansible.cfg file and paste the following contents into it:
[defaults]
inventory = hosts
host_key_checking = False
Note: all three files (main.yml, ansible.cfg and hosts) must be in the same folder; a minimal main.yml sketch is shown at the end of this answer.
Also, this was tested with devices connected to a private network using private IPs. I haven't checked it with public IPs. If you are using Azure/AWS, create a test VM and connect it to the VPN of the other devices.
Note: you need to install the sshpass package to be able to authenticate with a password.
For Ubuntu: apt-get install sshpass
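For reference, a minimal sketch of what main.yml could contain; the ping task is only a placeholder to confirm that SSH and password authentication work, and your real tasks would go here:
- hosts: all
  tasks:
    - name: Verify SSH connectivity and authentication
      ping: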
I know this has been discussed in other questions and I have tried various options, but nothing seems to solve my issue.
I am starting to learn Ansible and trying to run a playbook. I have one role which has tasks in it. I want to run the whole playbook as user john on the remote machine. The playbook begins with a copy task. Everything was working fine until I started using become and become_user. I tried running the role as user john by specifying the following in the main playbook, major.yml:
---
- hosts: target-machine.com
roles:
- role: installSoftware
become: yes
become_user: john
Then when executing the playbook, I run the following :
ansible-playbook -s major.yml -K
which prompts me for
SUDO password:
I enter the password of user john, which exists on the remote target machine. As soon as it starts running the playbook, it hangs at the task which requires user john and complains:
fatal: [target.com]: FAILED! => {"failed": true, "msg": "Failed to get information on remote file (/home/john/software.zip): MODULE FAILURE"}
I also tried the following:
---
- hosts: target-machine.com
become: yes
become_user: john
roles:
- role: installSoftware
so as to run the whole playbook as user john. It again asks me for the SUDO password and then after 5 minutes complains:
fatal: [openam.ansible-target.com]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "\r\nSorry, try again .\r\n[sudo via ansible, key=kchjtpjorgnvaksspqvgwzzkgtjnxsyv] password: \r\nsudo: 1 incorrect password attempt\r\n", "msg": "MODULE FAILURE"}
although the password I have entered is correct. On SO it was suggested to increase the SSH timeout, which I did in the ansible.cfg file:
[ssh_connection]
ssh_args = -o ServerAliveInterval=60 -o ControlMaster=auto -o ControlPersist=1m
All I want to mirror is what I would run on the target machine:
su john
Enter Password:
john$
Any help will be appreciated. Thank You.
By default, become uses sudo, not su. As a result, the password Ansible is prompting you for is the password of the user you're logged in as, not the user you're changing to. You can change this by setting become_method to su.
If you want to always run all your tasks on that host as a different user, you probably instead want to set ansible_user. See the inventory documentation for more details on that.
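A minimal sketch of the first play above with su as the become method, so that the -K prompt now expects john's password (matching the su john behaviour the question wants to mirror):
- hosts: target-machine.com
  roles:
    - role: installSoftware
  become: yes
  become_user: john
  become_method: su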
I'm probing a freshly installed Archlinux installation on a Raspberry PI 2 like so:
ansible -i PI2 arch -m setup -c paramiko -k -u alarm -vvvv
This reads to me as: fire the setup module against the IP, connecting as the user "alarm" and asking for that specific user's password. However, the user that eventually attempts to connect is "root".
Here's the debug response:
Loaded callback minimal of type stdout, v2.0
<192.168.1.18> ESTABLISH CONNECTION FOR USER: root on PORT 22 TO 192.168.1.18
192.168.1.18 | UNREACHABLE! => {
"changed": false,
"msg": "ERROR! Authentication failed.",
"unreachable": true
}
The inventory looks like this:
[arch]
192.168.1.18
Some things that may or may not be relevant are the following:
ssh logins via root are not permitted
sudo is not installed
default user and pass are "alarm" : "alarm"
no ssh key has been copied to the machine, hence the paramiko connection attempt
What is NOT ignored and leads to a successful connection is adding ansible_user=alarm to the IP line in the inventory file.
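In other words, the inventory entry that works looks like this:
[arch]
192.168.1.18 ansible_user=alarm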
EDIT
Found this interesting passage in the official docs: http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable which states:
Another important thing to consider (for all versions) is that connection specific variables override config, command line and play specific options and directives. For example:
ansible_user will override -u and remote_user.
The original question seems to remain though. Without any mention of ansible_user in the inventory, why is root being used instead of the explicitly mentioned user via -u?
EDIT_END
Is this expected behaviour?
Thanks
You don't need to specify paramiko as the connection type; Ansible will figure that part out. You may have a group_vars directory with an ansible_user or ansible_ssh_user variable defined for this host, which could be overriding the alarm user.
I was able to replicate your test on ansible 2.0.0.2 without any issues against a Raspberry Pi 2 running Raspbian:
➜ ansible ansible -i PI2 arch -m setup -u alarm -vvvv -k
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loaded callback minimal of type stdout, v2.0
<192.168.1.84> ESTABLISH CONNECTION FOR USER: alarm on PORT 22 TO 192.168.1.84
CONNECTION: pid 78534 waiting for lock on 9
CONNECTION: pid 78534 acquired lock on 9
paramiko: The authenticity of host '192.168.1.84' can't be established.
The ssh-rsa key fingerprint is 54e12e8153e0319f450934d606dca7df.
Are you sure you want to continue connecting (yes/no)?
yes
CONNECTION: pid 78534 released lock on 9
<192.168.1.84> EXEC ( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895 )" )
<192.168.1.84> PUT /var/folders/39/t0dm88q50dbcshd5nc5m5c640000gn/T/tmp5DqywL TO /home/alarm/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895/setup
<192.168.1.84> EXEC LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/alarm/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895/setup; rm -rf "/home/alarm/.ansible/tmp/ansible-tmp-1454502995.07-263327298967895/" > /dev/null 2>&1
192.168.1.84 | SUCCESS => {
"ansible_facts": {
"ansible_all_ipv4_addresses": [
"192.168.1.84"
],
I virtualenv'd myself an Ansible 1.9.4 sandbox, copied over the ansible.cfg and the inventory, and ran the command again. It kinda worked as expected:
⤷ ansible --version
ansible 1.9.4
configured module search path = None
(ANS19TEST)~/Documents/Code/VENVS/ANS19TEST
⤷ ansible -i PI2 arch -m setup -c paramiko -k -u alarm -vvvv
SSH password:
<192.168.1.18> ESTABLISH CONNECTION FOR USER: alarm on PORT 22 TO 192.168.1.18
<192.168.1.18> REMOTE_MODULE setup
From where I'm standing I'd say this is a bug. Maybe somebody can confirm?! This goes to the bugtracker then...
Cheers
EDIT
For brevity I omitted an important part of my inventory file, which ultimately is responsible for the behavior. It looks like this:
[hypriot]
192.168.1.18 ansible_user=root
[arch]
192.168.1.18
Quote from the Ansible bugtracker:
The names used in the inventory is the key in a dictionary. So everything you put in there as host-specific variables will be merged into one big dictionary. That means that in some conflicting cases variables are superseded by other values.
You can prevent this by using different names for the same host (e.g. using IP address and hostname, or an alias or DNS-alias) and in that case you can still do what you like to do.
So my inventory looks like this now:
[hypriot]
hypriot_local ansible_host=192.168.1.18 ansible_user=root
[archlinux]
arch_local ansible_host=192.168.1.18
This works fine. The corresponding issue on the Ansible tracker is here: https://github.com/ansible/ansible/issues/14268
I am trying to create playbook for deploy with simple scenario: login to server and clone/update open github repo.
All access parameters are written in ~/.ssh/config.
Here are my files:
hosts
[staging]
staging
deploy.yml
- hosts: staging
tasks:
- name: Update code
git: repo=https://github.com/travis-ci-examples/php.git dest=hello_ansible
When I try to run ansible-playbook -s deploy.yml -i hosts, it outputs an error like this:
GATHERING FACTS ***************************************************************
fatal: [staging] => Missing become password
TASK: [Update code] ***********************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
I have tried adding sudo: False and become: False, but it does not seem to have any effect. I assume this operation should not request a sudo password as I am only working with files in the ssh user's home directory.
I am sorry if my question is a bit lame, but I do not have much experience with Ansible.
It is asking for the sudo password because you are using the -s option. It seems like you do not want to use sudo for this task, so try running the command without -s:
ansible-playbook deploy.yml -i hosts
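If you later do need privilege escalation, note that -s (--sudo) is the legacy flag; the current equivalent is --become, combined with -K (--ask-become-pass) so you are prompted for the sudo password, for example:
ansible-playbook deploy.yml -i hosts --become -K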
Ansible asks for the sudo password when running the following code, which tries to create a new postgres user.
Error message:
fatal: [xxx.xxx.xxx.xxx] => Missing sudo password
main.yml
- name: 'Provision a PostgreSQL server'
hosts: "dbservers"
sudo: yes
sudo_user: postgres
roles:
- postgres
create_db.yml
- name: Make sure the PostgreSQL users are present
postgresql_user: name=rails password=secret role_attr_flags=CREATEDB,NOSUPERUSER
sudo_user: postgres
sudo: yes
The remote_user that is used to log in to this machine is a non-root user; it has no password and can only log in using key auth.
The postgres account doesn't have a password either, because the database was just installed.
Since I logged in as a non-root user, of course it will ask for a password when switching to the postgres account in order to create the database user. But no password would be needed if switching to postgres from the root account. So I wonder if there is a way to switch to root first, and then switch to user postgres.
Note: the root account has no public key, no password, and cannot login from SSH.
Try with the options -kK. They will prompt for the passwords:
$ ansible-playbook mail.yml -kK
SSH password:
BECOME password[defaults to SSH password]:
-k, --ask-pass: ask for connection password
-K, --ask-become-pass: ask for privilege escalation password
You can specify the sudo password when running the Ansible playbook:
ansible-playbook playbook.yml -i inventory.ini --extra-vars "ansible_sudo_pass=yourPassword"
Add a file to the /etc/sudoers.d directory on the target machine called postgres with the following contents:
postgres ALL=(ALL) NOPASSWD:ALL
This ensures that the postgres user (provided you are using that as your sudo user) will not be asked for a password when it attempts sudo commands.
If you are using a different user to connect to the target machine, then you'll have to amend the above to give the NOPASSWD permission to that user instead.
See here for further details.
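A sketch of how that drop-in can be created and checked on the target machine (run as root; visudo -cf validates the syntax before sudo starts using the file):
echo 'postgres ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/postgres
chmod 0440 /etc/sudoers.d/postgres
visudo -cf /etc/sudoers.d/postgres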
In my case, I added the information to the servergroup's group variables
So in /etc/ansible/group_vars/{servergroup}/vars
I added
ansible_become: yes
ansible_become_method: sudo
ansible_become_pass: "{{ vault_ansible_password }}"
This article helped me workout the answer https://www.cyberciti.biz/faq/how-to-set-and-use-sudo-password-for-ansible-vault/
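For completeness, a sketch of how vault_ansible_password might be defined, assuming the same group_vars layout as above (the file name vault is just a convention):
ansible-vault create /etc/ansible/group_vars/{servergroup}/vault
# then, in the editor that opens:
vault_ansible_password: YourSudoPasswordHere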
You would need to modify the /etc/sudoers file (via the visudo command) to allow the user you connect to the remote server with to switch to another user without a password prompt.
If all of the above solutions did not work for you, this was my case: my problem was that my ansible_user did not have all the required permissions, and I don't like to allow root to connect over SSH.
My tester user did not have all the sudo permissions needed to perform some operations:
Initial tester_user permission:
tester ALL= NOPASSWD:ALL # bad
changed to:
tester ALL=(ALL:ALL) NOPASSWD:ALL # good
The meaning of these additional fields is:
First “ALL” indicates that the user can run commands as all users.
The second “ALL” indicates that the user can run commands as all groups.
I initially wanted to restrict permissions for maintainers, but it is mandatory that the ansible_user can run commands as all users in order to use become_user in Ansible.
Add this to your /etc/sudoers file
## Allows people in group wheel to run all commands
%wheel ALL=(ALL) ALL
## Same thing without a password
# %wheel ALL=(ALL) NOPASSWD: ALL
username-u-want-to-allow ALL=(ALL) NOPASSWD: ALL
This will happen from the Ansible Tower UI if you select the 'Enable Privilege Escalation' option. You might need to supply the password twice in Ansible Tower.
On your remote server (client server / target server, whatever you call it), run this command as the root user and press Enter:
visudo
Under
User privilege specification
<your-name on (client-server)> ALL=(ALL) NOPASSWD: ALL
Save the file.
Now from your controller server (workstation / Ansible server, whatever you call it), run your command:
ssh <your-user on (client-server)>@ipaddress
SUCCESS
In my case, my user did not have sudo permission on the managed node. By default Ansible was setting become_method: sudo.
I found this out by specifying -vvvv and looking at the logs.
...
remote_user: username
become_method: sudo
inventory: (u'/etc/ansible/hosts',)
...
ansible-playbook -u -b ansible-script.yml -vvvv
To get around the problem, I specify become: no in the Ansible script.
For example:
- name: Ensure the httpd service is running
service:
name: httpd
state: started
become: no
You don't need to specify sudo_user if the ssh_user you use to make the connection belongs to the sudoers group; you only have to supply the sudo_pass.
My solution / workaround for error message:
fatal: [node]: FAILED! => {"msg": "Missing sudo password"}
For me, although the user already existed in the sudoers file on the remote host and could perform commands without a password, I still got this message. What I did was enter the following in the main YAML playbook:
---
- hosts: [your targeted inventory list of hosts]
become_user: [your remote privileged user]
become: true
roles:
- [your playbook role]
Also, in /etc/ansible/ansible.cfg I enabled (uncommented) or changed the following:
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
[defaults]
remote_tmp = /tmp/ansible-$USER
host_key_checking = False
sudo_user = [your remote privileged user]
ask_sudo_pass = False
ask_pass = False
The entry remote_tmp = /tmp/ansible-$USER was to avoid messages like:
OSError: [Errno 13] Permission denied: '/etc/.ansible_tmpLyoZOOyum.conf'
fatal: [node]: FAILED! => {"changed": false, "msg": "The destination directory (/etc) is not writable by the current user. Error was: [Errno 13] Permission denied: '/etc/.ansible_tmpLyoZOOyum.conf'"}
In my case I solved it by adding the command /bin/sh to the relevant line of /etc/sudoers to allow executing commands without a password.
This was the error shown:
BECOME password:
debian | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"module_stderr": "Shared connection to debian9 closed.\r\n",
"module_stdout": "\r\n",
"msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
"rc": 1
}
Only add this:
user ALL= NOPASSWD: /usr/bin/id, /usr/bin/whoami, /bin/sh
For testing purposes I also added id and whoami.
In my case, even though the password was correct, I was getting this error because the playbook had "connection: local" specified. The connection type was set to local because all commands were supposed to run on localhost. After I added a new task which required delegation to a remote host, the connection method was still set to local, which resulted in the Missing sudo password error. The error was fixed by removing "connection: local" from the playbook.
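A rough sketch of the shape that bit me, with names made up; the fix was simply deleting the connection line:
- hosts: all
  # connection: local   <- this was left over from when every task ran on localhost
  tasks:
    - name: Task added later that must run on the remote host
      command: /usr/bin/id
      become: yes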