I want to copy a new certificate to Proxmox with Ansible.
My setup
.ssh/config is modified so that ssh machine logs in as root.
scp /Users/dir/key.pem machine:/etc/pve/nodes/machine/pve-ssl.key works fine.
Problem
Ansible fails. I'm running this on an up-to-date MacBook; ansible --version reports ansible 2.2.1.0.
machine.yml
- hosts: machines
  vars:
    ca_dir: /Users/dir/
  tasks:
    - name: copy a pve-ssl.key
      copy:
        src: "{{ ca_dir }}/key.pem"
        dest: /etc/pve/nodes/machine/pve-ssl.key
Permissions?
This works fine:
- hosts: machines
  vars:
    ca_dir: /Users/dir/
  tasks:
    - name: copy a pve-ssl.key
      copy:
        src: "{{ ca_dir }}/key.pem"
        dest: /root/pve-ssl.key
So it's a permissions problem, but why? Ansible logs in to my machine as root (checked with ansible machine -m shell -a 'who').
Probably something to do with group permissions, since:
$ ls -la /etc/pve/nodes/machine/
drwxr-xr-x 2 root www-data 0 Feb 26 01:35 .
[...]
$ ls -la /root
drwx------ 5 root root 4096 Feb 26 12:09 .
[...]
How can I copy the file with ansible?
If the question is "what is the problem?" then the answer is:
It's because of the /dev/fuse filesystem mounted on /etc/pve: Ansible simply cannot move the file from /tmp into the /etc/pve tree, just as a plain mv /tmp/file /etc/pve command fails.
If the question is "how to deal with the problem?" then:
Copy the files elsewhere (e.g. /home/user) with Ansible, then copy them into place with the command module on the Proxmox host and delete the originals.
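A minimal sketch of that workaround (the /root staging path and task names are my assumptions, not part of the original answer):

# stage the key somewhere Ansible can write to normally
- name: stage the key outside /etc/pve
  copy:
    src: "{{ ca_dir }}/key.pem"
    dest: /root/pve-ssl.key

# a plain cp works where the copy module's atomic move from /tmp fails
- name: copy the staged key into /etc/pve
  command: cp /root/pve-ssl.key /etc/pve/nodes/machine/pve-ssl.key

- name: delete the staged copy
  file:
    path: /root/pve-ssl.key
    state: absent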
You could also first touch the file and then copy it:
- name: touch empty file
  file:
    path: /etc/pve/nodes/machine/pve-ssl.key
    state: touch

- name: copy a pve-ssl.key
  copy:
    src: "{{ ca_dir }}/key.pem"
    dest: /etc/pve/nodes/machine/pve-ssl.key
Related
I am trying to emulate a scenario of copying a local file from one directory to another directory on the same machine, but the Ansible copy module always looks for a remote server.
The code I am using:
- name: Configure Create directory
  hosts: 127.0.0.1
  connection: local
  vars:
    customer_folder: "{{ customer }}"
  tasks:
    - file:
        path: /opt/scripts/{ customer_folder }}
        state: directory
    - copy:
        src: /home/centos/absample.txt
        dest: /opt/scripts/{{ customer_folder }}
I am running this playbook like:
ansible-playbook ab_deploy.yml --extra-vars "customer=ab"
So I am facing two problems:
First, it should create a directory called ab under /opt/scripts/, but it creates a folder named { customer_folder }} instead of using ab as the directory name.
Second, as I read the documentation, copy only works for copying files from local to a remote machine, but all I want is to copy from local to local.
How can I achieve this? It might be silly; I am just trying things out.
Please suggest.
I solved it: I used a cp command under the shell module, and then it worked.
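A rough sketch of that approach (my reading of the fix, not verbatim from the post), with the Jinja2 expression corrected and the local-to-local copy done via the shell module:

- name: Configure Create directory
  hosts: 127.0.0.1
  connection: local
  vars:
    customer_folder: "{{ customer }}"
  tasks:
    # corrected expression; the question's version was missing a brace
    - file:
        path: /opt/scripts/{{ customer_folder }}
        state: directory
    # plain cp via the shell module handles the local-to-local copy
    - shell: cp /home/centos/absample.txt /opt/scripts/{{ customer_folder }}/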
I have a use case where I want to create a backup file with the latest timestamp and then copy that same/latest backup file to a destination (I was using item and with_items for that). But Ansible overwrites the existing backup file, creating a new file with the same timestamp, and when I copy the latest backup file it does not move to the destination because a file with the same timestamp already exists there.
---
# understanding with_items for correct syntax
- name: create a back up file
  file:
    path: /test/backup_path/backup_{{ansible_date_time.date}}.bkp
    state: touch
    mode: u=rw,g=r,o=r

- name: get the latest back up file
  shell: ls -1t /test/backup_path | head -1
  register: latest_bkp

- debug: msg="{{ latest_bkp.stdout_lines }}"

- name: Copy the latest file to restore path
  copy:
    src: /test/backup_path/{{ item }}
    dest: /test/restore_path/
    remote_src: yes
    owner: root
    group: root
    mode: '0644'
    force: yes
  with_items: "{{ latest_bkp.stdout_lines }}"
My target node output
[root@localhost ~]# ls -alrt /test/backup_path/
total 0
-rw-r--r--. 1 root root 0 Aug 19 17:33 backup1
-rw-r--r--. 1 root root 0 Aug 19 17:34 backup2
drwxr-xr-x. 5 root root 59 Aug 19 18:03 ..
drwxr-xr-x. 2 root root 65 Aug 19 18:49 .
-rw-r--r--. 1 root root 0 Aug 19 19:07 backup_2019-08-19.bkp
[root@localhost ~]# ls -alrt /test/restore_path/
total 0
-rw-r--r--. 1 root root 0 Aug 19 17:34 backup2
drwxr-xr-x. 5 root root 59 Aug 19 18:03 ..
-rw-r--r--. 1 root root 0 Aug 19 18:49 backup_2019-08-19.bkp
drwxr-xr-x. 2 root root 50 Aug 19 18:49 .
My ultimate goal is to create a backup file with a timestamp and transfer that same file to the destination.
Looking at your example, you create and copy an empty, unmodified file and rerun the playbook several times.
In this case:
1) Ansible recreates the source file in /test/backup_path/ (it touches the same file and sets the permissions), so its timestamp is updated on every playbook run:
- name: create a back up file
  file:
    path: /test/backup_path/backup_{{ansible_date_time.date}}.bkp
    state: touch
    mode: u=rw,g=r,o=r
2) Ansible copies the empty or unmodified source file to /test/restore_path/ only on the first playbook run, or when there is no identical file at the destination (this is Ansible idempotency). For this reason the timestamp of the destination file is not updated when the playbook is rerun, because an older file with the same content already exists. Idempotency means you can be sure of a consistent state in your environment.
- name: Copy the latest file to restore path
  copy:
    src: /test/backup_path/{{ item }}
    dest: /test/restore_path/
    remote_src: yes
    owner: root
    group: root
    mode: '0644'
    force: yes
  with_items: "{{ latest_bkp.stdout_lines }}"
Let's say I add one new line, test, to the source file. In this case the content of the file will be updated:
- name: create a back up file
  file:
    path: /test/backup_path/backup_{{ansible_date_time.date}}.bkp
    state: touch
    mode: u=rw,g=r,o=r

- name: Add data to file
  lineinfile:
    path: /test/backup_path/backup_{{ansible_date_time.date}}.bkp
    line: test
After rerunning the playbook (just once), you can be sure that the source and destination files were updated and have the same content and timestamp (note that this is idempotency too).
To resolve your issue you can use a non-idempotent Ansible module for copying the file (note that this is not best practice).
For example, instead of:
- name: Copy the latest file to restore path
  copy:
    src: /test/backup_path/{{ item }}
    dest: /test/restore_path/
    remote_src: yes
    owner: root
    group: root
    mode: '0644'
    force: yes
  with_items: "{{ latest_bkp.stdout_lines }}"
just use:
- name: Copy the latest file to restore path
  shell: cp -rf /test/backup_path/{{ item }} /test/restore_path/{{ item }}
  with_items: "{{ latest_bkp.stdout_lines }}"

- name: Change file permissions
  file:
    owner: root
    group: root
    mode: '0644'
    path: /test/restore_path/{{ item }}
  with_items: "{{ latest_bkp.stdout_lines }}"
I am trying to copy a script to a remote machine, and I have to preserve the mtime of the file.
I have tried using mode: preserve and mode=preserve as stated in the Ansible documentation, here!
But it seems to preserve permissions and ownership only.
- name: Transfer executable script
  copy:
    src: /home/avinash/Downloads/freeradius.sh
    dest: /home/ssrnd09/ansible
    mode: preserve
With respect to your question and the comment of Zeitounator, I've set up a test with the synchronize module and its times parameter, which worked as required.
---
- hosts: test.example.com
  become: no
  gather_facts: no

  tasks:
    - name: Synchronize passing in extra rsync options
      synchronize:
        src: test.sh
        dest: "/home/{{ ansible_user }}/test.sh"
        times: yes
Thanks to
How to tell rsync to preserve time stamp on files
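For completeness, a hedged alternative sketch that sticks to builtin modules: stat the source on the control machine, copy it, then set the copy's mtime with the file module's modification_time parameter (available in newer Ansible releases). The destination filename and task layout here are assumptions, not part of the answer above.

# stat the source locally so we know its original mtime
- name: stat the local source file
  stat:
    path: /home/avinash/Downloads/freeradius.sh
  delegate_to: localhost
  register: src_stat

- name: Transfer executable script
  copy:
    src: /home/avinash/Downloads/freeradius.sh
    dest: /home/ssrnd09/ansible/freeradius.sh  # assumed full destination path
    mode: preserve

# apply the source mtime to the copy (touch format YYYYMMDDHHMM.SS)
- name: set the copy's mtime to match the source
  file:
    path: /home/ssrnd09/ansible/freeradius.sh
    state: touch
    modification_time: "{{ '%Y%m%d%H%M.%S' | strftime(src_stat.stat.mtime | int) }}"
    access_time: preserve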
How can I copy a file from machine A to machine B and machine C, to different locations on each?
i.e.:
On machine A I have a file abc, and I want to copy it to the /tmp area of machine B and the /op area of machine C.
Assuming an inventory that was structured something like this:
[remote-servers]
192.168.X.1
192.168.X.10
192.168.X.20
192.168.X.30
you could run the following play with a copy task:
- name: copy the file to the remote machine
  hosts: remote-servers
  tasks:
    - copy:
        src: /path/to/file
        dest: /path/to/dest
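If machine B and machine C need different destination paths (as in the question), one sketch is to drive dest from a per-host variable. The dest_dir variable name and host aliases below are illustrative assumptions:

[remote-servers]
machineB dest_dir=/tmp
machineC dest_dir=/op

and then reference the variable in the play:

- name: copy the file to a per-host destination
  hosts: remote-servers
  tasks:
    # dest_dir comes from the inventory above (or host_vars)
    - copy:
        src: /path/to/file
        dest: "{{ dest_dir }}/"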
I've figured out how to use the shell module to create a mount on the network using the following command:
- name: mount image folder share
  shell: "mount -t cifs -o domain=MY_DOMAIN,username=USER,password=PASSWORD //network_path/folder /local_path/folder"
  sudo_user: root
  args:
    executable: /bin/bash
But it seems like it's better practice to use Ansible's mount module to do the same thing.
Specifically, I'm confused about how to provide the options domain=MY_DOMAIN,username=USER,password=PASSWORD. I see there is an opts parameter, but I'm not quite sure what this would look like.
Expanding on @adam-kalnas's answer:
Creating an fstab entry and then calling mount lets you mount the filesystem with a more appropriate permission level (i.e. not 0777). With state: present, Ansible will only create the entry in /etc/fstab, and the share can then be mounted by the user. From the Ansible mount module documentation:
absent and present only deal with fstab but will not affect current
mounting.
Additionally, you'd want to avoid leaving credentials in the /etc/fstab file (as it is readable by everyone, human or service). To avoid that, create a credentials file that is appropriately secured.
All together it looks something like this:
- name: Create credential file (used for fstab entry)
  copy:
    content: |
      username=USER
      password=PASSWORD
    dest: /home/myusername/.credential
    mode: 0600
  become: true
  become_user: myusername

- name: Create fstab entry for product image folder share
  mount:
    state: present
    fstype: cifs
    opts: "credentials=/home/myusername/.credential,file_mode=0755,dir_mode=0755,user"
    src: "//network_path/folder"
    path: "/local_path/folder"
  become: true

- name: Mount product image folder share
  shell: |
    mount "/local_path/folder"
  become: true
  become_user: myusername
Here's the command I ended up going with:
- name: mount product image folder share
  mount:
    state: present
    fstype: cifs
    opts: "domain=MY_DOMAIN,username=USER,password=PASSWORD,file_mode=0777,dir_mode=0777"
    src: "//network_path/folder"
    name: "/local_path/folder"
  sudo_user: root
  sudo: yes
A few notes about it:
I don't think the file_mode=0777,dir_mode=0777 part should be required, but in my situation it was needed in order for me to have write access to the folder. I could read the folder without specifying the permissions, but I couldn't write to it.
This snippet is required right now because of this Ansible bug. I tested this on 1.9.0 and 1.9.1, and it was an issue in both versions:
sudo_user: root
sudo: yes
I have not personally used the mount module, but from the documentation it looks like you'd want to do something like:
- name: mount image folder share
  mount:
    fstype: cifs
    opts: "domain=MY_DOMAIN,username=USER,password=PASSWORD"
    src: "//network_path/folder"
    name: "/local_path/folder"
I'd give that a try, and run it using -vvvv to see all the ansible output. If it doesn't work then the debug output should give you a pretty decent idea of how to proceed.