I have a playbook that runs roles and logs in to the server with a user that has sudo privileges. The problem is that, when switching to this user, I still need to use sudo to, say, install packages.
i.e.:
sudo yum install httpd
However, Ansible seems to ignore that and tries to install packages without sudo, which results in a failure.
Ansible will run the following:
yum install httpd
This is the role that I use:
tasks:
  - name: Import du role 'memcacheExtension'
    import_role:
      name: memcacheExtension
    become: yes
    become_method: sudo
    become_user: "{{become_user}}"
    become_flags: '-i'
    tags:
      - never
      - memcached
And this is the task that fails in my context:
- name: Install Memcached
  yum:
    name: memcached.x86_64
    state: present
Am I setting the sudo parameter in the wrong place? Or am I doing something wrong?
Thank you in advance
You can specify become: yes in a few places. Often it is used at the task level, and sometimes it is passed as a command line parameter (--become, -b: run operations with become). It can also be set at the play level:
- hosts: servers
  become: yes
  become_method: enable
  tasks:
    - name: Hello
      ...
You can also enable it in group_vars:
group_vars/example.yml:
ansible_become: yes
For your example, installing software, I would set it at the task level. I think in your case the import is the problem; you should set it in the file you are importing.
I ended up telling Ansible to become root for some of the tasks that were failing (my example wasn't the only one failing), and it worked well. The tweak in my environment is that I can't log in as root, but I can "become" root once logged in as someone else.
Here is what my task looks like now:
- name: Install Memcached
  yum:
    name: memcached.x86_64
    state: present
  become_user: root
Use the shell module instead of yum.
- name: Install Memcached
  shell: sudo yum install -y {{ your_package_here }}
Not as cool as using a module, but it will get the job done.
Your become_user is OK. If you don't use it, you'll end up running the commands in the playbook as the user used to establish the SSH connection (ansible_user, remote_user, or the user executing the playbook).
Related
I am setting up a playbook that automatically configures my workstation. This will hopefully allow me to quickly install Linux somewhere and automatically have all the resources I need.
One of the steps is installing homebrew and I cannot figure out how to do it.
I have created this playbook:
- hosts: localhost
  become: yes
  become_user: myUser
  tasks:
    - name: Download homebrew install script from source
      get_url:
        url: https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh
        dest: ~/Downloads/install_homebrew.sh
        mode: 'u+rwx'
    - name: Install homebrew
      shell: ~/Downloads/install_homebrew.sh
and run it with ansible-playbook myplaybook.yaml.
However, when I execute it, there is a permission denied error. Apparently this is because of how the copy module works (here). So I thought I'd just run sudo ansible-playbook myplaybook.yaml instead. This leads to the exact same permission error. I guess this is because I have become_user: myUser.
However, when I remove become_user, I obviously get another error, Destination /root/Downloads does not exist, because my destination is coded to the user's Downloads directory.
So how can I execute the playbook as the user myUser but with root privileges? This would allow me to access the root stuff but still refer to my home directory. In theory this should be possible, since I can run
sudo ls -a /root && ls ~/
and get both the contents of the root folder and of my home directory. But I don't know how to do this in Ansible.
I'm trying to create a script for work using Ansible, and for privilege elevation we have to use PowerBroker. Due to having issues with PowerBroker and Ansible in the past, I just have a basic playbook created:
tasks:
  - name: Getting list of installed software
    command: yum list installed > /home/<user>/yum_output.txt
    become: yes
    become_method: pbrun
    become_flags: 'su -'
To escalate, you have to use pbrun su - to get to root, and then drop down to the service account that you need.
I've looked through the current Ansible documentation, and tried searching for examples through Google, and I'm hitting a dead end. I wanted to see if anyone else has gone this route.
This is the Ansible page I was using:
https://docs.ansible.com/ansible/2.3/become.html
I've tried with and without the become_flags: 'su - '
Playbook command: ansible-playbook ansible_check_rhel.yml --ask-pass --become-method=pbrun --ask-become-pass -vvvv
I'm having some trouble automating an installation using Ansible.
I use this role (https://github.com/elastic/ansible-elasticsearch) to install Elasticsearch on my Ubuntu 16.04 server.
The role depends on the package python-jmespath, as mentioned in the documentation.
The role DOES NOT install the package itself, so I try to install it before executing the role.
- hosts: elasticsearch_master_servers
  become: yes
  tasks:
    - name: preinstall jmespath
      command: "apt-get install python-jmespath"
    - name: Run the equivalent of "apt-get update" as a separate step
      apt:
        update_cache: yes

- hosts: elasticsearch_master_servers
  become: yes
  roles:
    - role: elastic.elasticsearch
      vars:
        ...
When running the playbook I expect the python-jmespath package to be installed before execution of the role takes place, but the role execution fails with
You need to install "jmespath" prior to running json_query filter
When I check manually whether the package is installed, using dpkg -s python-jmespath, I can see that it is installed correctly.
A second run of the playbook (with the package already installed) doesn't fail.
Am I missing an Ansible configuration that updates the list of installed packages during the playbook run?
Am I doing something wrong in general?
Thanks in advance
FWIW, it's possible to tag the installation tasks and install the packages in a first step. For example:
- name: install packages
  package:
    name: "{{ item.name }}"
    state: "{{ item.state|default('present') }}"
  loop: "{{ packages_needed_by_this_role }}"
  tags: manage_packages
Install the packages first:
shell> ansible-playbook my-playbook.yml -t manage_packages
and then run the playbook:
shell> ansible-playbook my-playbook.yml
Notes
This approach makes checking the playbooks with --check much easier.
Checking idempotency is also easier.
With tags: [manage_packages, never] the package task will be skipped unless explicitly selected, which speeds up the playbook (see the sketch below).
I took over a project that is running on Ansible for server provisioning and management. I'm fairly new to Ansible but thanks to the good documentation I'm getting my head around it.
Still, I'm getting an error with the following output:
failed: [build] (item=[u'software-properties-common', u'python-pycurl', u'openssh-server', u'ufw', u'unattended-upgrades', u'vim', u'curl', u'git', u'ntp']) => {"failed": true, "item": ["software-properties-common", "python-pycurl", "openssh-server", "ufw", "unattended-upgrades", "vim", "curl", "git", "ntp"], "msg": "Failed to lock apt for exclusive operation"}
The playbook is run with sudo: yes, so I don't understand why I'm getting this error (which looks like a permission error). Any idea how to trace this down?
- name: "Install very important packages"
apt: pkg={{ item }} update_cache=yes state=present
with_items:
- software-properties-common # for apt repository management
- python-pycurl # for apt repository management (Ansible support)
- openssh-server
- ufw
- unattended-upgrades
- vim
- curl
- git
- ntp
playbook:
- hosts: build.url.com
  sudo: yes
  roles:
    - { role: postgresql, tags: postgresql }
    - { role: ruby, tags: ruby }
    - { role: build, tags: build }
I just had the same issue on a new VM. I tried many approaches, including retrying the apt commands, but in the end the only thing that worked was removing unattended upgrades.
I'm using raw commands here, since at this point the VM doesn't have Python installed, so I need to install that first, but I need a reliable apt for that.
Since it is a VM and I was testing the playbook by resetting it to a snapshot, the system date was off, which forced me to use the date -s command so that SSL certificate checks would not fail during apt commands. This date -s triggered an unattended upgrade.
So this snippet of a playbook is basically the part relevant to disabling unattended upgrades in a new system. They are the first commands I'm issuing on a new system.
- name: Disable timers for unattended upgrade, so that none will be triggered by the `date -s` call
  raw: systemctl disable --now {{item}}
  with_items:
    - 'apt-daily.timer'
    - 'apt-daily-upgrade.timer'

- name: Reload systemctl daemon to apply the new changes
  raw: systemctl daemon-reload

# Syncing time is only relevant for testing, because of the VM's outdated date.
#- name: Sync time
#  raw: date -s "{{ lookup('pipe', 'date') }}"

- name: Wait for any possibly running unattended upgrade to finish
  raw: systemd-run --property="After=apt-daily.service apt-daily-upgrade.service" --wait /bin/true

- name: Purge unattended upgrades
  raw: apt-get -y purge unattended-upgrades

- name: Update apt cache
  raw: apt-get -y update

- name: If needed, install Python
  raw: test -e /usr/bin/python || apt-get -y install python
Anything else would cause apt commands to randomly fail because of locking issues caused by unattended upgrades.
This is a very common situation when provisioning Ubuntu (and likely some other distributions). You try to run Ansible while automatic updates are running in the background (which is what happens right after setting up a new machine). As APT uses a lock, Ansible gets kicked out.
The playbook is ok and the easiest way to verify is to run it later (after automatic update process finishes).
For a permanent resolution, you might want to:
use an OS image with automatic updates disabled
add an explicit retry loop in the Ansible playbook to repeat the failed task until it succeeds (see the sketch below)
I bumped into the Failed to lock apt for exclusive operation issue:
https://github.com/geerlingguy/ansible-role-apache/issues/50
I posted a lot of details on GitHub.
I googled a lot of "Failed to lock apt for exclusive operation" Ansible complaints, but found no simple answer. Any help?
I'm also getting this error while setting up a couple of new boxes. I'm connecting as root, so I didn't think it was necessary, but it is:
become: yes
Now everything works as intended.
I know this question was answered a long time ago, but for me the solution was different.
The problem was the update_cache step. I had it on every install step, and somehow that caused the "Failed to lock apt" error.
The solution was adding update_cache as a separate step, like so:
tasks:
  - name: update apt list
    apt:
      update_cache: yes
Running the following commands in the sequence shown below should resolve this issue:
sudo rm -rf /var/lib/apt/lists/*
sudo apt-get clean
sudo apt-get update
In my case, Ansible was trying to lock /var/lib/apt/lists/, which did not exist.
The error Ansible gave me was Failed to lock apt for exclusive operation: Failed to lock directory /var/lib/apt/lists/: E:Could not open lock file /var/lib/apt/lists/lock - open (2: No such file or directory)
Creating the lists directory fixed the issue:
- name: packages | ensure apt list dir exists
  file:
    path: /var/lib/apt/lists/
    state: directory
    mode: 0755
The answer from guaka was entirely correct for me. It lacks only one thing: where to put become: yes.
In the following, there are three places where become might go. Only one of them is correct:
- name: setup nginx web server
  #1 become: yes
  include_role:
    name: nginx
    #2 become: yes
  vars:
    ansible_user: "{{ devops_ssh_user }}"
    #3 become: yes
In position #2 you will get an unexpected parameter error, because become is not a parameter for the include_role module.
In position #3 the play will run, but will not execute as sudo.
Only in position #1 does become: yes do any good. become is a parameter for the task, not for the module. It is not a variable like ansible_user. Just to be clear, here's a working configuration:
- name: setup nginx web server
  become: yes
  include_role:
    name: nginx
  vars:
    ansible_user: "{{ devops_ssh_user }}"
I had remote_user: root in my playbook, and this happened a lot. I couldn't figure out why; it happened only on some roles but not others (though always at the same place). I removed remote_user: root and it stopped happening. I just use --become and --become-user=root (which I was already using, but it still randomly failed to get a lock for some reason).
In my hosts file, I have some hosts that use a different user for sudo than the one Ansible uses for SSH, so I had ansible_user and ansible_become_user specified for every host in the inventory.
On the hosts where ansible_user == ansible_become_user I got Failed to lock apt for exclusive operation.
The solution for me was to remove the ansible_become_user line for the cases where the user is the same in both parameters.
OK, so there is nothing wrong with Ansible, SSH, or the role. It is just that apt in Debian can get really confused and lock itself out. As I was using a home-brewed Docker VM, I only had to recreate my image and container to get apt working again.
I faced the same error while using Ansible to set up a Ceph cluster. Just running ansible-playbook without sudo did the trick for me!
Adding become: yes to the task solved my problem:
- name: Install dependencies
  apt:
    name: "{{ packages }}"
    state: present
    update_cache: yes
  vars:
    packages:
      - package1
      - package2
      - package3
  become: yes
In my case, the silly problem was that I had the same host in my inventory, but under two different DNS names.
So apt was legitimately locked in my case, because I had multiple Ansible connections to the same host doing the same thing at the same time. Hah! Doh!
https://serverfault.com/questions/716260/ansible-sudo-error-while-building-on-atlas
Check ps aux | grep apt output for suspicious processes.