I have a Vagrant / Ansible setup on my Windows host. Ansible is set up to run on the Ubuntu guest: vagrant up executes a shell script which copies the provisioning YAML files onto the guest and also installs Ansible on the guest.
The script runs the Ansible setup on the guest with the following commands:
# Ansible installations
sudo apt-get install -y ansible
# Copy all Ansible scripts to the ubuntu guest
sudo cp -rf -v /vagrant/provisioning /home/vagrant/
# cp /vagrant/hosts /home/vagrant/
sudo chmod 666 /home/vagrant/provisioning/hosts
# Install roles
sudo ansible-playbook /home/vagrant/provisioning/local.yml -i /home/vagrant/provisioning/hosts --connection=local
One of the steps in the Ansible configuration process is setting up a fresh copy of Laravel in the /var/www directory. After pulling in Laravel, my script then copies and edits the .env file in the document root (/var/www).
But therein lies the problem: it fails with a "Text file busy" message. Both the source and destination of the copy are on the guest, so I guess it has nothing to do with VirtualBox. I have a feeling it has to do with the file name being so unusual, but I have not found an answer.
My task.yml file for Laravel is:
---
- name: Clone git repository
  git: >
    dest=/var/www
    repo=https://github.com/laravel/laravel.git
    update=no
  sudo: yes
  sudo_user: www-data
  register: cloned

- name: copy .env file
  copy: src=env.j2 dest={{ conf_file }}

- name: set APP_DOMAIN={{ server_name }}
  lineinfile: dest=/var/www/.env regexp='^APP_DOMAIN=' line=APP_DOMAIN={{ server_name }}
I have also tried the template module, with the same error:
- name: copy .env file
  template: src=env.j2 dest={{ conf_file }}
My conf file contains:
conf_file: /var/www/.env
It fails at the copy step as follows:
==> default: TASK: [laravel | copy .env file] **********************************************
==> default: failed: [10.0.1.10] => {"failed": true, "md5sum": "a380715fa81750708f7b9b6fea1a48fe"}
==> default: msg: Could not replace file: /root/.ansible/tmp/ansible-tmp-1441176559.19-197929606462535/source to /var/www/.env: [Errno 26] Text file busy
==> default:
==> default: FATAL: all hosts have already failed -- aborting
==> default:
==> default: PLAY RECAP ********************************************************************
==> default: to retry, use: --limit @/root/local.retry
==> default:
==> default: 10.0.1.10 : ok=21 changed=18 unreachable=0 failed=1
The SSH command responded with a non-zero exit status. Vagrant
assumes that this means the command failed. The output for this command
should be in the log above. Please read the output to determine what
went wrong.
The .env source file is in a folder called files off the Laravel task folder, the same as other items I configure, which work fine. The .env file is not found in the www folder after the failure, so it is not copied at all.
There have apparently been some issues with using the Ansible copy module on VirtualBox shared folders (cf. https://github.com/ansible/ansible/issues/9526) - is the /var/www/ directory a shared folder set up by Vagrant?
Maybe try touching the file first to create it, then copying it:
Change:

- name: copy .env file
  copy: src=env.j2 dest={{ conf_file }}

into:

- name: create .env file
  shell: touch {{ conf_file }}

- name: copy .env file
  copy: src=env.j2 dest={{ conf_file }}
EDIT: This file copy error was fixed in the Ansible 1.9.3 release (July 19th 2015) for some users, but as of 2016-06-14 it is still a problem for people on Windows hosts running VirtualBox (to do with vboxsf sharing). The GitHub issue is still closed, but people are still commenting and seem to be supplying possible fixes.
The solution linked below appears to work for most people (several upvotes). It suggests adding the remote_tmp configuration setting to your ~/.ansible.cfg, which instructs Ansible to use a temp folder on the target (shared) filesystem:
https://github.com/ansible/ansible/issues/9526#issuecomment-199443969
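For reference, a minimal sketch of that workaround, assuming the guest's ~/.ansible.cfg and an illustrative path (the exact remote_tmp value to use is the one given in the linked comment):

[defaults]
# illustrative temp path on the shared filesystem - take the actual value from the linked comment
remote_tmp = /vagrant/.ansible/tmp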
With Ansible 2.2 you can also set unsafe_writes=yes on the task.
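A minimal sketch of what that could look like, reusing the copy task from the question (unsafe_writes lets the module fall back to a non-atomic write when the usual atomic rename is not possible):

- name: copy .env file
  copy:
    src: env.j2
    dest: "{{ conf_file }}"
    unsafe_writes: yes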
Related
I'm trying to create an EC2 instance through an Ansible playbook and then run a few commands on it. The EC2 instance is created successfully and I can also SSH into it. Now, when I try to run this task:
- name: Create scripts directory
  ansible.builtin.file:
    path: /scripts
    state: directory
    mode: '0755'
An error is generated:
{"changed": false, "msg": "There was an issue creating /scripts as requested: [Errno 30] Read-only file system: '/scripts'", "path": "/scripts"}
I have specified the mode as 0755, which is read/write/execute permissions. So why is it giving an error, and the same error even for 0777? I'm new to Ansible and Ubuntu in general.
You have to add the become option to use admin privileges (your user has to be a sudoer):
- name: Create scripts directory
  ansible.builtin.file:
    path: /scripts
    state: directory
    mode: '0755'
  become: yes
Here you are creating a folder at the root level - are you sure you want to do that?
To specify a password for sudo, run ansible-playbook with --ask-become-pass (-K for short). If you run a playbook utilizing become and the playbook seems to hang, most likely it is stuck at the privilege escalation prompt. Stop it with CTRL-c, then execute the playbook with -K and the appropriate password.
Your Ansible user doesn't have the required privileges to create a directory at that level. Either use another user (a bad option) or make sure the user has sudo capabilities and use become: yes to execute the command with sudo.
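If more of your tasks need root, become can also be set once at the play level instead of per task; a sketch, assuming the same inventory and a sudo-capable user:

- hosts: all
  become: yes
  tasks:
    - name: Create scripts directory
      ansible.builtin.file:
        path: /scripts
        state: directory
        mode: '0755'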
I'm trying to provision a local kubernetes cluster using Vagrant (v2.1.2) and VirtualBox (v5.2.20).
My Vagrantfile uses the ansible_local provisioner to run some ansible playbooks to provision a K8s cluster using kubeadm.
This was all working perfectly a few months back when I ran it, but it's not working anymore. I'm not sure why it stopped working, but the Ansible playbook for the master node fails when trying to copy the kube config to the vagrant user's home dir.
The ansible task that fails is as follows:
- name: Copy kubeconfig for vagrant user
  copy:
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
This is causing the following error: fatal: [master1]: FAILED! => {"msg": "an error occurred while trying to read the file '/etc/kubernetes/admin.conf': [Errno 13] Permission denied: '/etc/kubernetes/admin.conf'"}
The src file does exist. If I ssh into the VM after the failure, I can copy the file with sudo cp /etc/kubernetes/admin.conf /home/vagrant/, but the failure above causes the vagrant provisioning to fail/halt.
FYI, I've tried a few combinations of things at the play and task levels, e.g. become: true and remote_user: root:
---
- hosts: all
  become: true
  tasks:
    ...
... but to no avail.
permissions on admin.conf are as follows:
vagrant@master1:/etc/kubernetes$ ls -al admin.conf
-rw------- 1 root root 5453 Aug 5 14:07 admin.conf
Full master-playbook.yml can be found here.
How do I get ansible to copy the file?
Quoting from the copy module documentation:
src Local path to a file to copy to the remote server.
The play failed because the user running the Ansible playbook can't read the file on the controller (local path):
permission denied: '/etc/kubernetes/admin.conf'
Use remote_src: yes
remote_src If yes it will go to the remote/target machine for the src
- name: Copy kubeconfig for vagrant user
  copy:
    remote_src: yes
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
From the comment, this seems to be what you want to do
"The src file does exist. If I ssh into the VM after the failure, I can copy the file with sudo cp /etc/kubernetes/admin.conf /home/vagrant/"
This should work if the remote_user on the VM is allowed to sudo su and privilege escalation is properly set:
- hosts: all
  become: yes
  become_user: root
  become_method: sudo
It would not be enough to allow the remote_user sudo cp only. See "Can't limit escalation to certain commands":
Privilege escalation permissions have to be general. Ansible does not always use a specific command to do something but runs modules (code) from a temporary file name which changes every time. If you have ‘/sbin/service’ or ‘/bin/chmod’ as the allowed commands this will fail ...
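Putting the two parts together, a sketch of the task with both remote_src and task-level escalation (assuming passwordless sudo for the remote user, or -K / --ask-become-pass on the command line):

- name: Copy kubeconfig for vagrant user
  become: yes
  copy:
    remote_src: yes
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant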
I am trying to copy a file from my host Mac system to CentOS on a VM through Ansible roles.
I have a folder called Ansible Roles, and under it I have used the ansible-galaxy command to create a role called tomcatdoccfg. helloworld.war is present in the root Ansible Roles folder.
The folder structure is as below:
The role's tasks/main.yml playbook on the Mac is as below:
- name: Copy war file to tmp
  copy:
    src: helloworld.war
    dest: /tmp/helloworld.war
The helloworld.war file should be accessible to user abhilashdk (my default Mac username). The CentOS VM also has a user called abhilashdk. I have configured SSH keys, meaning I have generated keys with ssh-keygen -t rsa and moved them to the CentOS VM using ssh-copy-id, and I am able to ping the VM using the ansible -i hosts node1 -m ping command. I am also able to install Docker on my node1 machine using Ansible.
I have a main.yml file in the root Ansible Roles folder the contents of which is as below:
---
- hosts: node1
  vars:
    webapp:
      app1:
        PORT: 8090
        NAME: webapp1
      app2:
        PORT: 8091
        NAME: webapp2
  become: true
  roles:
    - docinstall
    - tomcatdoccfg
Now when I run the command ansible-playbook -i hosts main.yml I get the below error for Copy war file to tmp:
TASK [tomcatdoccfg : Copy war file to tmp] ************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: /Files/DevOps/Ansible/Ansible_roles/helloworld.war
fatal: [node1]: FAILED! => {"changed": false, "msg": "Could not find or access 'helloworld.war'\nSearched in:\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/files/helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/tasks/files/helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/tomcatdoccfg/tasks/helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/files/helloworld.war\n\t/Files/DevOps/Ansible/Ansible_roles/helloworld.war"}
I do not understand what permissions I should give to the helloworld.war file so that my CentOS VM will be able to access it through the Ansible playbook/roles.
Could anybody help me out with how to solve this issue?
Thanks in advance.
Adding this as an answer, so I can show the non-Latin characters that the log you attached in the question includes:
Note the character right before the helloworld.war.
It could be the reason why Ansible can't find the file on the filesystem.
To be on the safe side, I would delete the whole main.yml and rewrite it.
Try ansible-playbook -i hosts main.yml --ask-sudo-pass or ansible-playbook -i hosts main.yml --ask-pass.
These parameters will ask you for the sudo password for the playbook operations.
Consider the following Ansible playbook code...
# provisioning/roles/operations/tasks/vagrant.yml
---
- block:
    - name: Copy vagrantfile onto dev machine
      local_action:
        template src=Vagrantfile.j2
        dest={{playbook_dir}}/../Vagrantfile

    - name: Boot up VMs
      local_action:
        command vagrant up
      environment:
        VAGRANT_VAGRANTFILE: "{{playbook_dir}}/../Vagrantfile"
      register: vagrantUpResults

    - debug: msg="{{vagrantUpResults}}"
  tags:
    - vagrant
This will correctly generate a valid Vagrantfile and will successfully run the "vagrant up" command. However, by doing this through Ansible, it seems it will not register the newly created VMs. I know this because when I run...
vagrant status
I see lines that declare "VM (not started)".
It turns out that the ".vagrant" directory was being generated in ./provisioning, whereas the Vagrantfile was generated at ./Vagrantfile. Although "VAGRANT_VAGRANTFILE" was correctly pointing to the generated file, the ".vagrant" dir was being created inside my playbook directory (./provisioning).
After further research into the available environment variables, I found a variable that was more useful: "VAGRANT_CWD". This will locate the generated Vagrantfile as well as generate the ".vagrant" directory in the correct place. "vagrant status" now detects the running VMs correctly.
Here is what my code looks like now:
---
- block:
    - name: Copy vagrantfile onto dev machine
      local_action:
        template src=Vagrantfile.j2
        dest={{playbook_dir}}/../Vagrantfile

    - name: Boot up VMs
      local_action:
        command vagrant up
      environment:
        VAGRANT_CWD: "{{playbook_dir}}/.."
      register: vagrantUpResults

    - debug: msg="{{vagrantUpResults}}"
  tags:
    - vagrant
Whenever I run my playbook on my control machine I only see this:
PLAY RECAP *********************************************************************
So I get the feeling ansible is not finding my task file. Here is my directory structure (it's a git project in Eclipse):
ansible
    ansible
        dockerhosts.yml
        hosts
        roles
            dockerhost
                tasks
                    main.yml
My dockerhosts.yml:
---
- hosts: integration
  roles: [dockerhost]
...
My hosts file:
[integration]
192.168.1.8
192.168.1.9
And my main.yml file:
- name: Install Docker CE from added Docker YUM repo
  remote_user: installer
  become: true
  become_user: root
  become_method: sudo
  command: yum -y install docker-ce
Clearly I don't have any syntax errors, as it runs, but for some reason it doesn't appear to find my main.yml file. I tried to see what user Ansible runs as, in case it's a question of file permissions, but I haven't found anything.
I am running ansible-playbook dockerhosts.yml from the /ansible/ansible directory.
What am I doing wrong?
I have a hosts file, but it's not in the default /etc/ansible/hosts location. As I showed in my question, it's actually at the same level as dockerhosts.yml, since this is a Git project.
I used the -vvvv flag, but that didn't tell me much. After running ansible-playbook -h, I tried the -i flag and ran ansible-playbook dockerhosts.yml -i hosts, and that actually did something.
It gave me SSH connection errors, but it did more than just the blank PLAY RECAP I got before, which to me means it's actually running the tasks now.
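As a side note, if you want to avoid passing -i on every run, one option (my own addition, not something covered above) is an ansible.cfg next to dockerhosts.yml that points at the project inventory:

[defaults]
# use the hosts file sitting next to dockerhosts.yml instead of /etc/ansible/hosts
inventory = hosts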