Missing become password in ansible playbook - ansible

I am trying to create playbook for deploy with simple scenario: login to server and clone/update open github repo.
All access parameters written in ~/.ssh/config
Here are my files:
hosts
[staging]
staging
deploy.yml
- hosts: staging
  tasks:
    - name: Update code
      git: repo=https://github.com/travis-ci-examples/php.git dest=hello_ansible
When I run ansible-playbook -s deploy.yml -i hosts, it fails with an error like this:
GATHERING FACTS ***************************************************************
fatal: [staging] => Missing become password
TASK: [Update code] ***********************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
I have tried adding sudo: False and become: False, but neither seems to have any effect. I assume this operation should not require a sudo password, since I am only working with files in the SSH user's home directory.
I am sorry if my question is a bit lame, but I do not have much experience with Ansible.

It is asking for the sudo password because you are using the -s option (which runs the play with sudo). Since you do not want sudo for this task, try running the command without -s:
ansible-playbook deploy.yml -i hosts
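For reference, the same play in expanded YAML dict syntax (equivalent to the key=value form in the question; no privilege escalation involved):

```yaml
- hosts: staging
  tasks:
    - name: Update code
      git:
        repo: https://github.com/travis-ci-examples/php.git
        dest: hello_ansible
```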

Related

Executing task after being logged as root in ansible

I am trying to subsequently run a task after I am connected using ssh. I am connecting using this in my playbook
- name: connect using password   # task 1; this task connects me as root
  expect:
    command: ssh -o "StrictHostKeyChecking=no" myuser@********
    responses:
      "password:":
        - my password
        - my password
  delegate_to: localhost
That task is fine and I am able to see that I am connected. The problem now is that when I try to run subsequent tasks for example:
- name: copy folder   # task 2 in the same playbook
  copy:
    src: "files/mylocalfile.txt"
    dest: "etc/temp"
    mode: "0777"
I have the following message:
"msg: etc/temp not writable"
How do I continue executing the remaining tasks as root, using the connection made in task 1?
I believe this might not be an Ansible question, but a Linux one.
Is your user in the wheel group (check /etc/group)?
Ansible has the directive become, which lets you execute a task as root, provided the user you connect with is allowed to escalate privileges. The task you want to run with privileges would look something like:
- name: copy folder   # task 2 in the same playbook
  become: yes
  copy:
    src: "files/mylocalfile.txt"
    dest: "etc/temp"
    mode: "0777"
You can use become_user if you need to specify the user to run the task as, and if the privileged user has a password, you can have Ansible prompt for it when running ansible-playbook by passing --ask-become-pass.
The following link offers documentation about privilege escalation in ansible:
https://docs.ansible.com/ansible/latest/user_guide/become.html
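Putting both options together - a minimal sketch, assuming a hypothetical privileged user named deploy whose password you want to be prompted for:

```yaml
- name: copy folder as the deploy user
  become: yes
  become_user: deploy   # hypothetical privileged user
  copy:
    src: "files/mylocalfile.txt"
    dest: "/etc/temp"
    mode: "0644"
```

Run with ansible-playbook playbook.yml --ask-become-pass so Ansible prompts for the escalation password.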

How to run playbook on my application server using sudo?

I am writing a playbook for my application Tomcat node. It will copy, deploy and stop/start Tomcats.
I have a hop box serverA, another hop box serverB and a Tomcat node tomcatC. Manually, using PuTTY, I use the steps below to get onto the Tomcat node:
Login to serverA using userId1
ssh to serverB using userId2
ssh to tomcatC using userId1
sudo to tomcat user.
Also I am able to directly ssh to tomcatC from serverA and my Ansible master is also serverA from where I am running the playbooks.
How do I run my playbook for this? Below is the command I am using as of now, but it's not working:
ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy -e release_version=5.7 -e target_env=tomcatC -u userId1 --ask-pass
And my-V3.yml looks like this:
- hosts: '{{ target_env }}'
  #serial: 1
  remote_user: userId1
  become: yes
  become_user: tomcat
I am now getting this error:
GATHERING FACTS ***************************************************************
fatal: [tomcatC] => Missing become password
You can set the user a command is run as like so:
- cron:
    name: "Clear Root Mail"
    minute: "0"
    hour: "22"
    job: "sudo rm /var/spool/mail/root"
    user: myuser
Or use become: true like so:
- name: Start Server
  shell: "nohup /home/myuser/StartServer.sh &"
  become: true
You can also have Ansible call shell scripts from your hop box that run the commands you need. That said, your problem looks like you don't have the correct SSH key applied.
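Since the play in the question escalates with become: yes / become_user: tomcat, the "Missing become password" error usually just means Ansible was never given the sudo password. A sketch of the original command with --ask-become-pass added (all other flags as in the question):

```shell
ansible-playbook -i my-inventory my-V3.yml --tags=download,copy,deploy \
  -e release_version=5.7 -e target_env=tomcatC \
  -u userId1 --ask-pass --ask-become-pass
```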

Ansible root/password login

I'm trying to use Ansible to provision a server and the first thing I want to do is test the ssh access. If I use ssh directly I can log in fine...
ssh root@server
root@backups's password:
If I use Ansible I can't...
user#ansible:~$ ansible backups -m ping --user root --ask-pass
SSH password:
backups | UNREACHABLE! => {
"changed": false,
"msg": "Invalid/incorrect password: Permission denied, please try again.",
"unreachable": true
}
The password I'm using is correct - 100%.
Before anyone suggests using SSH keys - that's part of what I'm looking to automate.
The issue was caused by the getting started documentation setting a trap.
It instructs you to create an inventory file with servers, use ansible all -m ping to ping those servers and to use the -u switch to change the remote user.
What it doesn't tell you is that if, like me, not all your servers have the same user, the advised way to specify a user per server is in the inventory file...
server1 ansible_connection=ssh ansible_user=user1
server2 ansible_connection=ssh ansible_user=user2
server3 ansible_connection=ssh ansible_user=user3
I was provisioning a server, and the only user available to me at the time was root. But trying ansible server3 --user root --ask-pass failed to authenticate. After a couple of wasted hours I discovered that the --user switch is only effective if the inventory file doesn't specify a user. This is intended precedence behaviour. There are a few gripes about this in GitHub issues, but a firm 'intended behaviour' mantra is the response you get if you challenge it. It seems to go against the grain to me.
I subsequently discovered that you can specify -e 'ansible_ssh_user=root' to override the inventory user - I will see about creating a pull request to improve the docs.
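Spelled out as a full ad-hoc command (the host name and ping module are taken from the examples above):

```shell
ansible server3 -m ping --ask-pass -e 'ansible_ssh_user=root'
```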
While you're here, I might be able to save you some time with some further gotchas. This behaviour is the same if you use playbooks: you can specify a remote_user there, but it isn't honoured - presumably also because of precedence. Again, you can override the inventory user with -e 'ansible_ssh_user=root'.
Finally, until I realised Linode could provision a server with an SSH key deployed, I was trying to pass the root password to an ad-hoc command. You have to encrypt the password, which gives you a long string that is almost certainly going to include $ characters, which bash treats as substitutions. Make sure you escape these.
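A quick illustration of the quoting issue (the hash value below is a made-up placeholder, not a real credential):

```shell
# Inside double quotes bash would try to expand $6, $examplesalt, etc.,
# silently mangling the hash; assigning inside single quotes keeps every
# $ character literal.
hash='$6$examplesalt$examplehashvalue'
echo "$hash"
```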
The lineinfile module behaviour isn't intuitive either.
Write your hosts file like this. It will work.
192.168.2.4
192.168.1.4
[all:vars]
ansible_user=azureuser
Then execute the following command to do a dry run before applying the configuration: ansible-playbook --ask-pass -i hosts main.yml --check
Also create an ansible.cfg file and paste the following contents there:
[defaults]
inventory = hosts
host_key_checking = False
Note: All three files - main.yml, ansible.cfg and hosts - must be in the same folder.
Also, the code is tested for devices connected to a private network using Private IPs. I haven't checked using Public IPs. If using Azure/AWS, create a test VM and connect it to the VPN of the other devices.
Note: You need to install the sshpass package to be able to authenticate with a password.
For Ubuntu: apt-get install sshpass

How to become a user at remote machine when running a role in Ansible playbook

I know this has been discussed in other questions and I have tried various options but nothing seems to be solving my issue.
I am starting to learn Ansible and trying to run a playbook. I have one role which has tasks in it. I want to run the whole playbook as user john on the remote machine. The playbook begins with a copy task. Everything was working fine until I started using become and become_user. I tried running the role as user john by specifying it in the main playbook:
---
- hosts: target-machine.com
  roles:
    - role: installSoftware
      become: yes
      become_user: john
Then, to execute the playbook, I run the following:
ansible-playbook -s major.yml -K
which prompts me for
SUDO password:
I enter the password of user john, which exists on the remote target machine. As soon as it starts running the playbook, it hangs at the task that requires user john and complains:
fatal: [target.com]: FAILED! => {"failed": true, "msg": "Failed to get information on remote file (/home/john/software.zip): MODULE FAILURE"}
I also tried the following:
---
- hosts: target-machine.com
  become: yes
  become_user: john
  roles:
    - role: installSoftware
so as to run the whole playbook as user john. It again asks me for the SUDO password and then after 5 minutes complains:
fatal: [openam.ansible-target.com]: FAILED! => {"changed": false, "failed": true, "module_stderr": "", "module_stdout": "\r\nSorry, try again .\r\n[sudo via ansible, key=kchjtpjorgnvaksspqvgwzzkgtjnxsyv] password: \r\nsudo: 1 incorrect password attempt\r\n", "msg": "MODULE FAILURE"}
although the password I entered is correct. On SO it was suggested to increase the SSH timeout, which I did in the ansible.cfg file:
[ssh_connection]
ssh_args = -o ServerAliveInterval=60 -o ControlMaster=auto -o ControlPersist=1m
All I want to mirror is what I would run on target machine:
su john
Enter Password:
john$
Any help will be appreciated. Thank You.
By default, become uses sudo, not su. As a result, the password Ansible is prompting you for is the password of the user you're logged-in as, not the user you're changing to. You can change this by setting become_method to su.
If you want to always run all your tasks on that host as a different user, you probably instead want to set ansible_user. See the inventory documentation for more details on that.
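Applied to the playbook from the question, the suggested fix might look like this (same host and user names as above; run with --ask-become-pass so Ansible prompts for john's password rather than the login user's):

```yaml
---
- hosts: target-machine.com
  become: yes
  become_user: john
  become_method: su   # su asks for john's password, mirroring "su john"
  roles:
    - role: installSoftware
```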

Ansible Timeout (12s) waiting for privilege escalation prompt

I'm having trouble running my Ansible playbook on AWS instance. Here is my version:
$ ansible --version
ansible 2.0.0.2
I created an inventory file as:
[my_ec2_instance]
default ansible_host=MY_EC2_ADDRESS ansible_user='ubuntu' ansible_ssh_private_key_file='/home/MY_USER/MY_KEYS/MY_KEY.pem'
Testing connection to my server:
$ ansible -i provisioner/inventory my_ec2_instance -m ping
default | SUCCESS => {
"changed": false,
"ping": "pong"
}
Now when running my playbook on this inventory I get the error Timeout (12s) waiting for privilege escalation prompt as follows:
$ ansible-playbook -i provisioner/inventory -l my_ec2_instance provisioner/playbook.yml
PLAY [Ubuntu14/Python3/Postgres/Nginx/Gunicorn/Django stack] *****
TASK [setup] *******************************************************************
fatal: [default]: FAILED! => {"failed": true, "msg": "ERROR! Timeout (12s) waiting for privilege escalation prompt: "}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
default : ok=0 changed=0 unreachable=0 failed=1
If I run the same playbook using .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory as the inventory parameter, it works perfectly on my Vagrant instance (proving, I believe, that there is nothing wrong with the playbook/roles themselves).
Also, if I run it with -vvvv, copy the exec ssh line and run it manually, it indeed connects to AWS without problems.
Do I need to add any other parameter on my inventory file to connect an EC2 instance? What am I missing?
Increase the SSH timeout in /etc/ansible/ansible.cfg:

$ vim /etc/ansible/ansible.cfg

[defaults]
timeout = 60    # default is 10
There is a GitHub issue about this error affecting various versions of Ansible 2.x here: https://github.com/ansible/ansible/issues/13278#issuecomment-216307695
My solution was simply to add timeout=30 to /etc/ansible/ansible.cfg.
This is not a "task" or "role" timeout and was enough to solve the error (I do have some roles/tasks that take much longer than that).
In my case, the root cause was an incorrect entry in /etc/hosts for the localhost, causing a 20s delay for any sudo command.
127.0.0.1 wronghostname
Changing it to the correct hostname fixed it. No more delay for sudo/privileged commands.
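Sketched, with an illustrative hostname (yours will differ):

```
# Before: /etc/hosts names a host that doesn't match the machine's real
# hostname, so sudo stalls resolving it until a ~20s timeout
127.0.0.1 wronghostname

# After
127.0.0.1 localhost myrealhostname
```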
In my case it was because my playbook had

become_method: su
become_flags: "-"

which prompts a password request on the host.
Adding --ask-become-pass to the ansible-playbook invocation and supplying the password solved the issue.
I ran the command as follows and it works:
ansible-playbook -c paramiko httpd.yml
Since the issue is related to the OpenSSL implementation, using paramiko dodges it.
Ansible's default ssh_args setting, as documented at https://docs.ansible.com/ansible/latest/reference_appendices/config.html#ansible-ssh-args, is
-C -o ControlMaster=auto -o ControlPersist=60s
Changing ControlMaster to yes (or no) resolved the issue for me (somehow):
ansible.cfg:
[ssh_connection]
ssh_args = -C -o ControlMaster=yes -o ControlPersist=60s
I had the same issue. I was able to solve it by adding become_exe: sudo su -
- hosts: "{{ host | default('webservers') }}"
  become: yes
  become_user: someuser
  become_method: su
  become_exe: sudo su -
The thread is old but the varied solutions keep coming.
In my case, the issue was that the ansible script had modified the sudoers file in the vagrant vm to add an entry for the vagrant group (%vagrant) after the existing entry for the vagrant user.
That was enough to cause the ansible script to timeout waiting for privilege escalation.
The solution was to force the sudoers entry for the vagrant group to be above the entry for the vagrant user.
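In sudoers the last matching entry wins, which is why ordering mattered here; a sketch with illustrative rules (the actual entries in the VM may differ):

```
# Broken: the group entry was appended after the user entry, so its
# password-requiring rule matched last and won, making sudo prompt:
#   vagrant  ALL=(ALL) NOPASSWD: ALL
#   %vagrant ALL=(ALL) ALL
#
# Fixed: group entry above the user entry, so NOPASSWD wins
%vagrant ALL=(ALL) ALL
vagrant  ALL=(ALL) NOPASSWD: ALL
```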
Sometimes the setup phase takes longer on EC2 instances; change the timeout value in ansible.cfg to something like timeout=40. This sets the timeout to 40 seconds.
I fixed this error for my system because I forgot I had altered the ansible config file:
sudo vim /etc/ansible/ansible.cfg
Try commenting out the privilege escalation parameters that could be trying to sudo to root, like so:
[privilege_escalation]
#become=True
#become_method=su
#become_user=root
#become_ask_pass=False
#become_exe="sudo su -"
The account I was trying to ssh as did not have permission to become root.
I am building secure VM images for AWS, QEMU and VirtualBox on an isolated network with limited DNS support. Increasing the SSH timeout to 40 seconds had limited effect in my situation.
I am using Packer v1.5.5, Ansible v2.9.2 and OpenSSH v7.4p1
My solution was to change the UseDNS parameter in /etc/ssh/sshd_config to no.
I added the following lines to my RHEL/CentOS kickstart configuration, with great results:
%post
# Disable DNS lookups by sshd to address Ansible timeouts
perl -npe 's/^#UseDNS yes/UseDNS no/g' -i /etc/ssh/sshd_config
%end
Check whether the destination server has an old version of sudo. Some old sudo versions do not have the -n option Ansible uses.
