Issue with Ansible copying file

I'm trying to provision a local kubernetes cluster using Vagrant (v2.1.2) and VirtualBox (v5.2.20).
My Vagrantfile uses the ansible_local provisioner to run some ansible playbooks to provision a K8s cluster using kubeadm.
This was all working perfectly a few months back when I ran this, but it's not working anymore. I'm not sure why it stopped working, but the Ansible playbook for the master node fails when trying to copy the kube config to the vagrant user's home dir.
The ansible task that fails is as follows:
- name: Copy kubeconfig for vagrant user
  copy:
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
This is causing the following error: fatal: [master1]: FAILED! => {"msg": "an error occurred while trying to read the file '/etc/kubernetes/admin.conf': [Errno 13] Permission denied: '/etc/kubernetes/admin.conf'"}
The src file does exist. If I ssh into the VM after the failure, I can copy the file with sudo cp /etc/kubernetes/admin.conf /home/vagrant/, but the failure above causes the vagrant provisioning to fail/halt.
FYI, I've tried a few combinations of things at the play and task levels (e.g. become: true, remote_user: root), e.g.:
---
- hosts: all
  become: true
  tasks:
    ...
... but to no avail.
Permissions on admin.conf are as follows:
vagrant@master1:/etc/kubernetes$ ls -al admin.conf
-rw------- 1 root root 5453 Aug 5 14:07 admin.conf
Full master-playbook.yml can be found here.
How do I get ansible to copy the file?

Quoting from the copy module documentation:
src: Local path to a file to copy to the remote server.
The play failed because the user running the Ansible playbook can't read the file on the controller (local path):
permission denied: '/etc/kubernetes/admin.conf'
Use remote_src: yes instead:
remote_src: If yes, it will go to the remote/target machine for the src.
- name: Copy kubeconfig for vagrant user
  copy:
    remote_src: yes
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
From the comment, this seems to be what you want to do:
"The src file does exist. If I ssh into the VM after the failure, I can copy the file with sudo cp /etc/kubernetes/admin.conf /home/vagrant/"
This should work if the remote_user on the VM is allowed to sudo su and privilege escalation is set up properly:
- hosts: all
  become: yes
  become_user: root
  become_method: sudo
It would not be enough to allow the remote_user sudo cp only. See Can't limit escalation to certain commands:
Privilege escalation permissions have to be general. Ansible does not always use a specific command to do something but runs modules (code) from a temporary file name which changes every time. If you have ‘/sbin/service’ or ‘/bin/chmod’ as the allowed commands this will fail ...
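Putting the two pieces together, a minimal sketch of the fixed play for the original question might look like the one below; the extra file task that creates /home/vagrant/.kube first is an assumption based on the dest path in the question, not something taken from the original playbook.
- hosts: all
  become: true
  tasks:
    # Assumed step: make sure the target directory exists and is owned by vagrant
    - name: Create .kube directory for vagrant user
      file:
        path: /home/vagrant/.kube
        state: directory
        owner: vagrant
        group: vagrant

    # remote_src: yes tells copy to read admin.conf on the VM, not on the controller
    - name: Copy kubeconfig for vagrant user
      copy:
        remote_src: yes
        src: /etc/kubernetes/admin.conf
        dest: /home/vagrant/.kube/
        owner: vagrant
        group: vagrant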

Related

How to become a user with root privileges in Ansible

I am setting up a playbook that automatically configures my workstation. This will hopefully allow me to quickly install linux somewhere and automatically have all the resources I need.
One of the steps is installing homebrew and I cannot figure out how to do it.
I have created this playbook
- hosts: localhost
  become: yes
  become_user: myUser
  tasks:
    - name: Download homebrew install script from source
      get_url:
        url: https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh
        dest: ~/Downloads/install_homebrew.sh
        mode: 'u+rwx'
    - name: Install homebrew
      shell: ~/Downloads/install_homebrew.sh
and run it with ansible-playbook myplaybook.yaml.
However, when I execute it, there is a permission denied error. Apparently this is because of how the copy module works (here). So I thought I'd just run sudo ansible-playbook myplaybook.yaml instead. This leads to the exact same permission error. I guess this is because I have become_user: myUser.
However, when I remove become_user, I obviously get another error, Destination /root/Downloads does not exist, because my destination is coded to the user's Downloads directory.
So how can I execute the playbook as the user myUser but with root privileges? This would allow me to access the root-owned stuff but still refer to my home directory. In theory this should be possible, since I can run
sudo ls -a /root && ls ~/
and get both the content of the root-folder and of my home directory. But I don't know how to do this in ansible.
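One common pattern, sketched below under the assumption that the playbook is run as myUser without sudo, is to keep the play itself unprivileged (so ~ resolves to /home/myUser) and request escalation only on the individual tasks that need root; the final package task here is purely illustrative and not part of the original playbook.
- hosts: localhost
  tasks:
    - name: Download homebrew install script from source
      get_url:
        url: https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh
        dest: ~/Downloads/install_homebrew.sh
        mode: 'u+rwx'

    - name: Install homebrew
      shell: ~/Downloads/install_homebrew.sh

    # Illustrative task that genuinely needs root: escalate just here
    - name: Install build tools
      package:
        name: build-essential
        state: present
      become: yes
Run it with ansible-playbook myplaybook.yaml -K so Ansible can prompt for the sudo password when it reaches the escalated task.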

Creating a directory on Ubuntu using ansible playbook

I'm trying to create an EC2 instance through an Ansible playbook and then run a few commands on it. The EC2 instance is created successfully and I can also SSH into it. Now, when I try to run this task:
- name: Create scripts directory
  ansible.builtin.file:
    path: /scripts
    state: directory
    mode: '0755'
An error is generated:
{"changed": false, "msg": "There was an issue creating /scripts as requested: [Errno 30] Read-only file system: '/scripts'", "path": "/scripts"}
I have specified the mode as 0755, which grants read/write/execute permissions. So why is it giving an error, and the same error even for 0777? I'm new to Ansible and Ubuntu in general.
You have to add the become option to use admin privileges (your user has to be a sudoer):
- name: Create scripts directory
  ansible.builtin.file:
    path: /scripts
    state: directory
    mode: '0755'
  become: yes
Note that this creates a folder at the root of the filesystem; are you sure you want to do that?
To specify a password for sudo, run ansible-playbook with --ask-become-pass (-K for short). If you run a playbook utilizing become and the playbook seems to hang, most likely it is stuck at the privilege escalation prompt. Stop it with CTRL-c, then execute the playbook with -K and the appropriate password.
Your Ansible user doesn't have the required privileges to create a directory at that level. Either use another user (a bad option) or make sure the user has sudo capabilities and use become: yes to execute the task with sudo.
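A minimal sketch tying the pieces together, assuming the playbook is saved as create_scripts.yml and the connecting user is a sudoer:
- hosts: all
  become: yes           # run the tasks below with sudo
  tasks:
    - name: Create scripts directory
      ansible.builtin.file:
        path: /scripts
        state: directory
        mode: '0755'
Then run ansible-playbook create_scripts.yml -K so Ansible prompts for the sudo password instead of hanging at the escalation prompt.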

Permission denied trying to access file owned by root

I have 2 remote servers, and I need to transfer files from one server to another using synchronize directive.
On serverA I created an SSH key (id_rsa) as the sudo user, and copied the public key onto serverB (into the authorized_keys file of the same sudo user).
Hosts file
[servers]
prod_server ansible_host=IP_prod
new_server ansible_host=IP_new
[servers:vars]
ansible_user=sudo_user
ansible_sudo_pass=sudo_password
ansible_ssh_private_key_file=~/.ssh/id_rsa
Play
- name: Transfer files from prod to new server
  hosts: new_server
  gather_facts: false
  roles:
    - rsync
Task
- name: Copy files to new server
  synchronize:
    src: /etc/letsencrypt/live/domain/fullchain.pem
    dest: /opt
  delegate_to: prod_server
When running the playbook an error shows up:
change_dir \"/etc/letsencrypt/live/domain.tld\" failed: Permission denied
That means the sudo_user doesn't have root privileges to access that file.
If I set become: true it would be possible to access the fullchain.pem file, but the play would then try to transfer the file using the root user, and the SSH key id_rsa is owned by the sudo_user.
What do I have to set to make this work?
If you can "sudo su - " on the remote box and are using -b (become)..
synchronize:
  src: /your local directory path
  dest: /path on remote host owned by root
  rsync_path: "sudo rsync"
  mode: push
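Applied to the paths from the question, that suggestion might look roughly like the sketch below; whether become is actually needed so the delegated host can read the root-owned certificate depends on your setup, so treat this as a starting point rather than a verified configuration.
- name: Copy files to new server
  synchronize:
    src: /etc/letsencrypt/live/domain/fullchain.pem
    dest: /opt
    mode: push
    rsync_path: "sudo rsync"   # run rsync with sudo on the receiving end
  delegate_to: prod_server
  become: true                 # escalate on the delegated host (the -b flag at the play/CLI level)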

Ansible: editing a config file as sudo (CoreOS)

I'm an ansible newbie.
I'm using ansible 2.3.0.0
I have the playbook below to bootstrap nodes for a k8s cluster in openstack:
- name: bootstrap
  hosts: coreos
  become_user: root
  become_method: su
  gather_facts: False
  roles:
    - defunctzombie.coreos-bootstrap
  tasks:
    - lineinfile:
        path: /etc/coreos/update.conf
        state: present
        regexp: '^REBOOT_STRATEGY'
        line: 'REBOOT_STRATEGY=off'
I want to turn off auto-reboots on CoreOS because our OpenStack installation has a problem with reboots not coming back up properly, and having CoreOS reboot often causes instances to have to be manually shut down and restarted.
Anyway, the playbook above doesn't work. I get this error:
"The destination directory (/etc/coreos) is not writable by the current user. Error was: [Errno 13] Permission denied: '/etc/coreos/.ansible_tmppQCJrCupdate.conf'"
So my syntax is wrong (I've tried a few different combinations with no luck).
Could someone point me in the right direction? And feel free to make a suggestion on anything about this playbook.
Thanks!
Instead of executing the playbook as the root user, use a different user with sudo access. Please try this:
- name: bootstrap
  hosts: coreos
  user: <user_name>
  become_method: sudo
  gather_facts: False
  roles:
    - defunctzombie.coreos-bootstrap
  tasks:
    - lineinfile:
        path: /etc/coreos/update.conf
        state: present
        regexp: '^REBOOT_STRATEGY'
        line: 'REBOOT_STRATEGY=off'
Replace <user_name> with your user.
Run your playbook as ansible-playbook <playbook_name> --ask-sudo-pass
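On newer Ansible releases --ask-sudo-pass has been superseded by --ask-become-pass (-K), and become_method only takes effect when become is switched on, so an updated sketch of the same play (user name still a placeholder) would be:
- name: bootstrap
  hosts: coreos
  remote_user: <user_name>
  become: true
  become_method: sudo
  gather_facts: False
  roles:
    - defunctzombie.coreos-bootstrap
  tasks:
    - lineinfile:
        path: /etc/coreos/update.conf
        state: present
        regexp: '^REBOOT_STRATEGY'
        line: 'REBOOT_STRATEGY=off'
Run it with ansible-playbook <playbook_name> --ask-become-pass.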

Ansible text file busy error

I have a Vagrant / Ansible setup on my Windows host. Ansible is set up to run on the Ubuntu guest, as vagrant up executes a shell script which copies the provisioning YAML files onto the guest and also installs Ansible on the guest.
The script runs the Ansible set up on the guest with the following command:
# Ansible installations
sudo apt-get install -y ansible
# Copy all Ansible scripts to the ubuntu guest
sudo cp -rf -v /vagrant/provisioning /home/vagrant/
# cp /vagrant/hosts /home/vagrant/
sudo chmod 666 /home/vagrant/provisioning/hosts
# Install roles
sudo ansible-playbook /home/vagrant/provisioning/local.yml -i /home/vagrant/provisioning/hosts --connection=local
One of the steps in the Ansible configuration process is setting up a fresh copy of Laravel in the /var/www directory. After pulling in Laravel, my script then copies and edits the .env file in the document root (/var/www).
But therein lies the problem: it fails with a "text file busy" message. The copy happens on the guest with both source and destination there, so I guess it has nothing to do with VBox. I have a feeling it has to do with the file name being so unusual, but I have not found an answer.
My task.yml file for Laravel is:
---
- name: Clone git repository
  git: >
    dest=/var/www
    repo=https://github.com/laravel/laravel.git
    update=no
  sudo: yes
  sudo_user: www-data
  register: cloned

- name: copy .env file
  copy: src=env.j2 dest={{ conf_file }}

- name: set APP_DOMAIN={{ server_name }}
  lineinfile: dest=/var/www/.env regexp='^APP_DOMAIN=' line=APP_DOMAIN={{ server_name }}
I have also tried using the template method with the same error:
- name: copy .env file
  template: src=env.j2 dest={{ conf_file }}
My conf file contains:
conf_file: /var/www/.env
It fails at the copy step as follows:
==> default: TASK: [laravel | copy .env file] **********************************************
==> default: failed: [10.0.1.10] => {"failed": true, "md5sum": "a380715fa81750708f7b9b6fea1a48fe"}
==> default: msg: Could not replace file: /root/.ansible/tmp/ansible-tmp-1441176559.19-197929606462535/source to /var/www/.env: [Errno 26] Text file busy
==> default:
==> default: FATAL: all hosts have already failed -- aborting
==> default:
==> default: PLAY RECAP ********************************************************************
==> default: to retry, use: --limit #/root/local.retry
==> default:
==> default: 10.0.1.10 : ok=21 changed=18 unreachable=0 failed=1
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
The .env source file is in a folder called files under the Laravel task folder, the same as other items I configure which work OK. The .env file is not present in the www folder after the failure, so it is not being copied.
There have been some issues with using the Ansible "copy" module on Virtualbox shared folders apparently (cf https://github.com/ansible/ansible/issues/9526) - is the /var/www/ directory a shared folder set up by Vagrant?
Maybe try touching the file first to create it, then copying it:
Change:
- name: copy .env file
  copy: src=env.j2 dest={{ conf_file }}
into:
- name: create .env file
  shell: touch {{ conf_file }}

- name: copy .env file
  copy: src=env.j2 dest={{ conf_file }}
EDIT: This file copy error was fixed in the Ansible 1.9.3 release (July 19th 2015) for some users, but is still a problem for people on Windows hosts running Virtualbox (to do with vboxsf sharing) as of 2016-06-14. The GitHub issue is still closed, but people are still commenting and seem to be supplying possible fixes.
The solution linked below appears to work for most people (several upvotes). It suggests adding the Ansible remote_tmp configuration setting to your local ~/.ansible.cfg which instructs Ansible to use a temp folder on the target (shared) filesystem:
https://github.com/ansible/ansible/issues/9526#issuecomment-199443969
With Ansible 2.2 you can also set unsafe_writes=yes on the task.
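For reference, the two workarounds mentioned above might look roughly like this; the remote_tmp path is only an example and would need to point at a directory on the same (shared) filesystem as the destination.
# ~/.ansible.cfg
[defaults]
remote_tmp = /var/www/.ansible/tmp    # example path on the shared filesystem
and, for the unsafe_writes variant:
- name: copy .env file
  copy: src=env.j2 dest={{ conf_file }} unsafe_writes=yes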

Resources