I've set up a box with a user david who has sudo privileges. I can ssh into the box and perform sudo operations like apt-get install. When I try to do the same thing using Ansible's "become privilege escalation", I get a permission denied error. So a simple playbook might look like this:
simple_playbook.yml:
---
- name: Testing...
  hosts: all
  become: true
  become_user: david
  become_method: sudo
  tasks:
    - name: Just want to install sqlite3 for example...
      apt: name=sqlite3 state=present
I run this playbook with the following command:
ansible-playbook -i inventory simple_playbook.yml --ask-become-pass
This gives me a prompt for a password, which I give, and I get the following error (abbreviated):
fatal: [123.45.67.89]: FAILED! => {...
failed: E: Could not open lock file /var/lib/dpkg/lock - open (13:
Permission denied)\nE: Unable to lock the administration directory
(/var/lib/dpkg/), are you root?\n", ...}
Why am I getting permission denied?
Additional information
I'm running Ansible 2.1.1.0 and am targeting an Ubuntu 16.04 box. If I use the remote_user and sudo options as per Ansible < v1.9, it works fine, like this:
remote_user: david
sudo: yes
Update
The local and remote usernames are the same. To get this working, I just needed to specify become: yes (see #techraf's answer).
Why am I getting permission denied?
Because APT requires root permissions (see the error: are you root?) and you are running the tasks as david.
Per these settings:
become: true
become_user: david
become_method: sudo
Ansible becomes david using the sudo method. It basically runs its Python script with sudo -u david in front.
the user 'david' on the remote box has sudo privileges.
It means david can execute commands (some or all) using the sudo executable to change the effective user for the child process (the command). If no username is given, this process runs as the root account.
Compare the results of these two commands:
$ sudo whoami
root
$ sudo -u david whoami
david
Back to the APT problem: you (from the CLI) as well as Ansible (connecting over SSH with your account) need to run:
sudo apt-get install sqlite3
not:
sudo -u david apt-get install sqlite3
which fails with the exact message Ansible displayed.
The following playbook will escalate by default to the root user:
---
- name: Testing...
  hosts: all
  become: true
  tasks:
    - name: Just want to install sqlite3 for example...
      apt: name=sqlite3 state=present
In the following playbook, remote_user is david. Call ansible-playbook with --ask-pass and give the password for david. If david doesn't have passwordless sudo, you should also call it with --ask-become-pass (see the example invocation after the playbook).
- name: Testing...
  hosts: all
  remote_user: david
  become: true
  become_method: sudo
  tasks:
    - name: Just want to install sqlite3 for example...
      apt: name=sqlite3 state=present
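For example, with the inventory file from the question, the invocation might look like this (a sketch; adjust the inventory path to yours):
ansible-playbook -i inventory simple_playbook.yml --ask-pass --ask-become-pass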
I had the same issue. It was solved by using remote_user: root together with the --ask-become-pass and --ask-pass options:
- name: Testing...
  hosts: all
  remote_user: root
  become: true
  become_method: sudo
  tasks:
    - name: Just want to install sqlite3 for example...
      apt: name=sqlite3 state=present
ansible-playbook playbook.yml --ask-become-pass --ask-pass
Related
I have a requirement to install the AWS SSM Agent on multiple EC2 instances (of different flavors) using Ansible. Can someone please help me or suggest how to achieve this?
I wrote the following playbook and tried it. It works, but is there a way to achieve this using the package module? I am afraid my approach might do a reinstall even if the agent is already installed. Thanks in advance.
(OR)
Do you think it is better to use shell commands (https://aws.amazon.com/premiumsupport/knowledge-center/install-ssm-agent-ec2-linux/) in the playbook to install the agent, choosing them based on the OS flavour?
---
- hosts: all
  remote_user: ansible
  become: true
  tasks:
    - name: install SSM if REDHAT
      command: "{{ item }}"
      loop:
        - sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
        - sudo systemctl enable amazon-ssm-agent
        - sudo systemctl start amazon-ssm-agent
      when: ansible_os_family == "RedHat"
    - name: install SSM if UBUNTU
      command: "{{ item }}"
      loop:
        - sudo snap install amazon-ssm-agent --classic
        - sudo systemctl start snap.amazon-ssm-agent.amazon-ssm-agent.service
      when: ansible_os_family == "Debian"
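One way to avoid the reinstall concern is to let package modules handle idempotency instead of raw commands. The sketch below is an assumption on my part (using the yum module with the rpm URL, the snap module, and the service names from the commands above); verify it against your Ansible version:
    # Sketch: install via package modules so repeated runs report "ok" instead of re-running shell commands
    - name: Install SSM agent on RedHat family (yum accepts an rpm URL)
      yum:
        name: https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
        state: present
      when: ansible_os_family == "RedHat"

    - name: Enable and start the agent on RedHat family
      service:
        name: amazon-ssm-agent
        state: started
        enabled: yes
      when: ansible_os_family == "RedHat"

    - name: Install SSM agent on Debian family via snap
      snap:
        name: amazon-ssm-agent
        classic: yes
      when: ansible_os_family == "Debian"

    - name: Start the snap-managed agent service on Debian family
      service:
        name: snap.amazon-ssm-agent.amazon-ssm-agent.service
        state: started
      when: ansible_os_family == "Debian"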
When I try to run a pip command through Ansible, I end up with this error:
{"changed": false, "msg": "Unable to find any of pip to use. pip needs to be installed."}
When I debugged on my machine, I found that pip is installed at the latest version. Then I realised my playbook file runs pip with sudo.
If I run which pip, I get the path of pip, but if I run sudo which pip, I get nothing.
I don't know how to change my playbook so that it finds pip without going through sudo. Here is the relevant part:
- name: "Allow newuser for new super user without SUDO password for using rsync:"
lineinfile:
path: /etc/sudoers
state: present
insertafter: '^%sudo'
line: "{{ user }} ALL=(ALL:ALL) NOPASSWD: /usr/bin/rsync"
- pip:
name: opencv-python
state: forcereinstall
executable: pip
I have no idea how to fix this issue.
On the remote host, find out which pip is used by the remote user. Use that executable and become the remote user. For example:
- name: Install opencv-python
  become_user: admin
  become: true
  pip:
    name: opencv-python
    state: forcereinstall
    executable: /home/admin/.local/bin/pip
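To discover that path up front, a quick ad-hoc check connecting as the same remote user might look like this (a sketch; the host pattern and user name are assumptions):
shell> ansible all -u admin -m command -a "which pip"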
See Becoming an Unprivileged User. Pipelining should solve the problems of becoming an unprivileged user.
shell> grep pipe ansible.cfg
pipelining = true
Notes
Whatever you install by pip this way will be available to this particular remote user only.
Preferably, distro packages should be used. For example:
shell> apt-cache search python-opencv
python-opencv - Python bindings for the computer vision library
python-opencv-apps - opencv_apps Robot OS package - Python 2 bindings
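Installing the distro package found above would then be a single apt task, for example (a sketch, assuming the python-opencv package name shown by apt-cache search):
- name: Install OpenCV Python bindings from the distro repository
  become: true
  apt:
    name: python-opencv
    state: present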
See The pip module isn't always idempotent #28952 and the Conclusions in particular.
I have this:
- name: composer install in includes
  become: yes
  become_user: git
  become_method: sudo
  become_flags: '-s'
  chdir: /var/www/html/includes
  command: php /usr/local/bin/composer install
  creates: /var/www/html/includes/vendor
When I run it, I get the error 'Composer could not find a composer.json file in /home/git'. But note that I specified a chdir to /var/www/html/includes, which DOES have a composer.json. Note also that, while trying things out, I added become_method and become_flags; -s should tell sudo not to change directories or load environments.
Does anybody know what I am doing wrong?
Thanks, Ed
I think the task should be written this way:
- name: composer install in includes
  become: yes
  become_user: git
  become_method: sudo
  become_flags: '-s'
  command: php /usr/local/bin/composer install
  args:
    chdir: /var/www/html/includes
    creates: /var/www/html/includes/vendor
For more information you can run:
$ ansible-doc command
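As an aside, there is also a composer module that can run composer install in a given directory, which might be worth comparing (a sketch based on ansible-doc composer; the exact parameters below are my assumption, so check them against your Ansible version):
- name: composer install in includes (using the composer module)
  become: yes
  become_user: git
  composer:
    command: install
    working_dir: /var/www/html/includes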
I'm trying to use Ansible to run the following two commands:
sudo apt-get update && sudo apt-get upgrade -y
I know with ansible you can use:
ansible all -m shell -u user -K -a "uptime"
Would running the following command do it, or do I have to use some sort of raw command?
ansible all -m shell -u user -K -a "sudo apt-get update && sudo apt-get upgrade -y"
I wouldn't recommend using shell for this, as Ansible has the apt module designed for just this purpose. I've detailed using apt below.
In a playbook, you can update and upgrade like so:
- name: Update and upgrade apt packages
  become: true
  apt:
    upgrade: yes
    update_cache: yes
    cache_valid_time: 86400 # One day
The cache_valid_time value can be omitted. Its purpose from the docs:
Update the apt cache if its older than the cache_valid_time. This
option is set in seconds.
So it's good to include if you don't want to update the cache when it has only recently been updated.
To do this as an ad-hoc command you can run:
$ ansible all -m apt -a "upgrade=yes update_cache=yes cache_valid_time=86400" --become
Ad-hoc commands are described in detail here.
Note that I am using --become and become: true. This is an example of typical privilege escalation through Ansible. You can also use -u user and -K (to ask for the privilege escalation password); use whichever works for you, this is just the most common form. An example combining them follows.
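Putting those flags together with the ad-hoc command above, an equivalent invocation could be (a sketch; the user name comes from the question):
$ ansible all -m apt -u user -K --become -a "upgrade=yes update_cache=yes cache_valid_time=86400"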
Just to add some flavour to the answer: this is an executable playbook targeting all the hosts specified in your inventory file.
- hosts: all
  become: true
  tasks:
    - name: Update and upgrade apt packages
      apt:
        upgrade: yes
        update_cache: yes
        cache_valid_time: 86400
Using Ubuntu 16.04, I made a little adjustment:
- name: Update and upgrade apt packages
  become: true
  apt:
    update_cache: yes
    upgrade: 'yes'
I just put the upgrade: yes value in quotes to avoid an annoying warning:
[WARNING]: The value True (type bool) in a string field was converted to u'True' (type string). If this does
not look like what you expect, quote the entire value to ensure it does not change.
In 2020 I would have just commented on the original answer, but I didn't have enough reputation at the time...
Ref:
The value True (type bool) in a string field was converted to u'True' (type string)
We use the following command to update and upgrade all packages:
ansible all -m apt -a "name='*' state=latest update_cache=yes upgrade=yes" -b --become-user root
I'm attempting to run an Ansible playbook and I can't find out, through documentation or by looking at examples, what is wrong with my playbook.
---
- hosts: all
  sudo: yes
  pre_tasks:
    ignore_errors: True
  tasks:
    command: sudo apt-get install build-essential libssl-dev libffi-dev python-dev python-pip
    command: sudo pip install caravel
    command: fabmanager create-admin --app caravel
    command: caravel db upgrade
    command: caravel init
    command: caravel runserver -p 8088
    - copy: src=../zika.db dest=zika.db
      failed_when: false
I've been chasing my tail and I don't understand this error:
The offending line appears to be:
---
- hosts: all
^ here
command is a task, and tasks is a list, so you should prefix every task with a dash.
tasks:
  - command: ....
  - command: ....
  ....
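Applied to the playbook in the question, a corrected version could look like the sketch below. The dashes are the actual fix; replacing the deprecated sudo: yes with become: true, dropping the redundant sudo prefixes (become already escalates), and omitting the pre_tasks block (ignore_errors is normally set per task) are my additional assumptions:
---
- hosts: all
  become: true
  tasks:
    - command: apt-get install build-essential libssl-dev libffi-dev python-dev python-pip
    - command: pip install caravel
    - command: fabmanager create-admin --app caravel
    - command: caravel db upgrade
    - command: caravel init
    - command: caravel runserver -p 8088
    - copy: src=../zika.db dest=zika.db
      failed_when: false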