I am setting up a playbook that automatically configures my workstation. This should let me quickly install Linux somewhere and automatically have all the resources I need.
One of the steps is installing Homebrew, and I cannot figure out how to do it.
I have created this playbook:
- hosts: localhost
  become: yes
  become_user: myUser
  tasks:
    - name: Download homebrew install script from source
      get_url:
        url: https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh
        dest: ~/Downloads/install_homebrew.sh
        mode: 'u+rwx'
    - name: Install homebrew
      shell: ~/Downloads/install_homebrew.sh
and run it with ansible-playbook myplaybook.yaml.
However, when I execute it, there is a permission denied error. Apparently this is because of how the copy module works. So I thought I'd just run sudo ansible-playbook myplaybook.yaml instead. This leads to the exact same permission error, I guess because I have become_user: myUser.
However, when I remove become_user, I obviously get another error, Destination /root/Downloads does not exist, because my destination is hard-coded to the user's Downloads directory.
So how can I execute the playbook as the user myUser but with root privileges? This would allow me to access the root-owned paths but still refer to my own home directory. In theory this should be possible, since I can run
sudo ls -a /root && ls ~/
and get both the contents of the root folder and of my home directory. But I don't know how to do this in Ansible.
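One pattern that matches what is being asked here, as a minimal sketch (only myUser and the download task are taken from the question; the escalated package task is a made-up illustration): run the play as myUser with no play-level escalation, and set become: yes only on the tasks that actually need root, so ~ keeps expanding to myUser's home directory.
- hosts: localhost
  tasks:
    # Runs as myUser, so ~ expands to myUser's home directory
    - name: Download homebrew install script from source
      get_url:
        url: https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh
        dest: ~/Downloads/install_homebrew.sh
        mode: 'u+rwx'

    # Only this task escalates to root; run the playbook with -K
    # so Ansible can prompt for the sudo password
    - name: Install a package that needs root (hypothetical example)
      package:
        name: git
        state: present
      become: yes
Running this as myUser with ansible-playbook myplaybook.yaml -K lets the escalated task authenticate while every other task keeps referring to myUser's home directory.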
Related
I'm trying to create an EC2 instance through an Ansible playbook and then run a few commands on it. The EC2 instance is created successfully and I have also ssh'd into it. Now, when I try to run this command:
- name: Create scripts directory
  ansible.builtin.file:
    path: /scripts
    state: directory
    mode: '0755'
An error is generated:
{"changed": false, "msg": "There was an issue creating /scripts as requested: [Errno 30] Read-only file system: '/scripts'", "path": "/scripts"}
I have specified the mode as 0755, which is read-write-execute permissions. So why is it giving an error, and the same error even for 0777? I'm new to Ansible and Ubuntu in general.
You have to add the become option to use admin privileges (your user has to be a sudoer):
- name: Create scripts directory
  ansible.builtin.file:
    path: /scripts
    state: directory
    mode: '0755'
  become: yes
Note that this creates a folder at the root level; are you sure you want to do that?
To specify a password for sudo, run ansible-playbook with --ask-become-pass (-K for short). If you run a playbook utilizing become and the playbook seems to hang, most likely it is stuck at the privilege escalation prompt. Stop it with CTRL-c, then execute the playbook with -K and the appropriate password.
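For example (playbook.yml is just a placeholder name):
ansible-playbook playbook.yml -K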
Your Ansible user doesn't have the required privileges to create a directory at that level. Either use another user (a bad option) or make sure the user has sudo capabilities and use become: yes to execute the command with sudo.
I've got a CentOS cluster where /home is going to get mounted over NFS. So I think the centos user's home should be moved somewhere that will remain local, maybe /var/lib/centos or something. But given centos is the ansible_user, I can't use:
- hosts: cluster
  become: yes
  tasks:
    - ansible.builtin.user:
        name: centos
        move_home: yes
        home: "/var/lib/centos"
as unsurprisingly it fails with
usermod: user centos is currently used by process 45028
Any semi-tidy workarounds for this, or better ideas?
I don't think you're going to be able to do this with the user module if you're connecting as the centos user. However, if you handle the individual steps yourself it should work:
---
- hosts: centos
  gather_facts: false
  become: true

  tasks:
    - command: rsync -a /home/centos/ /var/lib/centos/

    - command: sed -i 's,/home/centos,/var/lib/centos,' /etc/passwd
      args:
        warn: false

    - meta: reset_connection

    - command: rm -rf /home/centos
      args:
        warn: false
This relocates the home directory, updates /etc/passwd, and then removes the old home directory. The reset_connection is in there to force a new ssh connection: without that, Ansible will be unhappy when you remove the home directory.
In practice, you'd want to add some logic to the above playbook to make it idempotent.
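A sketch of that idempotency logic, assuming it is enough to key every step off whether /home/centos still exists (everything else matches the playbook above):
- name: Check whether the old home directory still exists
  stat:
    path: /home/centos
  register: old_home

- name: Relocate the home directory
  command: rsync -a /home/centos/ /var/lib/centos/
  when: old_home.stat.exists

- name: Point /etc/passwd at the new location
  command: sed -i 's,/home/centos,/var/lib/centos,' /etc/passwd
  when: old_home.stat.exists

- meta: reset_connection

- name: Remove the old home directory
  command: rm -rf /home/centos
  when: old_home.stat.exists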
I'm trying to provision a local kubernetes cluster using Vagrant (v2.1.2) and VirtualBox (v5.2.20).
My Vagrantfile uses the ansible_local provisioner to run some ansible playbooks to provision a K8s cluster using kubeadm.
This was all working perfectly a few months back when I ran this, but it's not working anymore. Not sure why it stopped working, but the Ansible playbook for the master node fails when I try to copy the kube config to the vagrant user's home dir.
The Ansible task that fails is as follows:
- name: Copy kubeconfig for vagrant user
  copy:
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
This is causing the following error: fatal: [master1]: FAILED! => {"msg": "an error occurred while trying to read the file '/etc/kubernetes/admin.conf': [Errno 13] Permission denied: '/etc/kubernetes/admin.conf'"}
The src file does exist. If I ssh into the VM after the failure, I can copy the file with sudo cp /etc/kubernetes/admin.conf /home/vagrant/, but the failure above causes the vagrant provisioning to fail/halt.
FYI, I've tried a few combinations of things at the play and task levels, e.g. become: true and remote_user: root:
---
- hosts: all
  become: true
  tasks:
    ...
... but to no avail.
Permissions on admin.conf are as follows:
vagrant@master1:/etc/kubernetes$ ls -al admin.conf
-rw------- 1 root root 5453 Aug 5 14:07 admin.conf
Full master-playbook.yml can be found here.
How do I get ansible to copy the file?
Quoting from the copy module documentation:
src Local path to a file to copy to the remote server.
The play failed because the user running the Ansible playbook can't read the file at the controller (local path):
permission denied: '/etc/kubernetes/admin.conf'
Use remote_src: yes
remote_src If yes it will go to the remote/target machine for the src
- name: Copy kubeconfig for vagrant user
  copy:
    remote_src: yes
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
From the comment, this seems to be what you want to do:
"The src file does exist. If I ssh into the VM after the failure, I can copy the file with sudo cp /etc/kubernetes/admin.conf /home/vagrant/"
This should work if the remote_user at the VM is allowed to sudo su and escalation is properly set up:
- hosts: all
  become: yes
  become_user: root
  become_method: sudo
It would not be enough to allow the remote_user to run sudo cp only. See "Can't limit escalation to certain commands":
Privilege escalation permissions have to be general. Ansible does not always use a specific command to do something but runs modules (code) from a temporary file name which changes every time. If you have ‘/sbin/service’ or ‘/bin/chmod’ as the allowed commands this will fail ...
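In concrete terms, that means the sudoers rule for the connecting user has to be broad; a sketch of what that could look like (the vagrant user name is taken from the question, the drop-in file path is an assumption):
# /etc/sudoers.d/vagrant -- broad, passwordless escalation as Ansible expects
vagrant ALL=(ALL) NOPASSWD: ALL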
I have a playbook that runs roles and logs in to the server with a user that has sudo privileges. The problem is that, when switching to this user, I still need to use sudo to, say, install packages.
i.e.:
sudo yum install httpd
However, Ansible seems to ignore that and will try to install packages without sudo, which results in a failure.
Ansible will run the following:
yum install httpd
This is the role that I use:
tasks:
  - name: Import the 'memcacheExtension' role
    import_role:
      name: memcacheExtension
    become: yes
    become_method: sudo
    become_user: "{{ become_user }}"
    become_flags: '-i'
    tags:
      - never
      - memcached
And this is the task that fails in my context:
- name: Install Memcached
  yum:
    name: memcached.x86_64
    state: present
Am I setting the sudo parameter in the wrong place? Or am I doing something wrong?
Thank you in advance.
You can specify become: yes in a few places. Often it is used at the task level; sometimes it is passed as a command-line parameter (--become, -b: run operations with become). It can also be set at the play level:
- hosts: servers
  become: yes
  become_method: sudo
  tasks:
    - name: Hello
      ...
You can also enable it in group_vars:
group_vars/example.yml
ansible_become: yes
For your example, using it to install software, I would set it at the task level. I think in your case the import is the problem: you should set it in the file you are importing, as sketched below.
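A sketch of what that could look like inside the imported role (the roles/memcacheExtension/tasks/main.yml path is an assumption about the role layout):
# roles/memcacheExtension/tasks/main.yml (assumed path)
- name: Install Memcached
  yum:
    name: memcached.x86_64
    state: present
  become: yes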
I ended up telling Ansible to become root for some of the tasks that were failing (my example wasn't the only one failing), and it worked well. The tweak in my environment is that I can't log in as root, but I can "become" root once logged in as someone else.
Here is how my task looks now:
- name: Install Memcached
  yum:
    name: memcached.x86_64
    state: present
  become_user: root
Use the shell module instead of yum:
- name: Install Memcached
  shell: sudo yum install -y {{ your_package_here }}
Not as cool as using a module, but it will get the job done.
Your become_user is OK. If you don't use it, you'll end up running the commands in the playbook as the user used to establish the ssh connection (ansible_user, remote_user, or the user executing the playbook).
I am new to Ansible and I am trying to copy some files to a remote machine.
I am able to copy to the remote server's tmp folder, but not to a particular user's folder.
I think it is possible if I can switch to that particular user, but I am not able to do so using the playbook.
Please help me with this.
Regards,
KP
This is a permission issue. The user you use to connect to the host does not have permission to write to that other user's folder.
If you have access to that user's account (e.g. your ssh key is accepted), you can simply define the user per task through remote_user:
- copy: src=...
        dest=...
  remote_user: <SET_OWNER_HERE>
If you do not have access, you can use the sudo flag to execute a task with root permissions. But make sure you set the permissions correctly, or the user might not be able to read/write those files:
- copy: src=...
        dest=...
        owner=<SET_OWNER_HERE>
        group=<SET_GROUP_HERE>
        mode=0644
  sudo: yes
Also, you can define the username that the sudo command is executed as with sudo_user:
- copy: src=...
        dest=...
  sudo: yes
  sudo_user: <SET_OWNER_HERE>
If sudo requires a password from you, you have to provide it or the task will hang forever without any error message.
You can define this globally in ansible.cfg:
ask_sudo_pass=True
Or pass the option when you call your playbook:
ansible-playbook ... --ask-sudo-pass
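Note that in newer Ansible releases the sudo and sudo_user keywords and --ask-sudo-pass are deprecated in favor of become, become_user, and --ask-become-pass; a rough equivalent of the task above (placeholders kept as in the original) would be:
- copy:
    src: ...
    dest: ...
  become: yes
  become_user: <SET_OWNER_HERE>
run with ansible-playbook ... --ask-become-pass.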