I have two remote servers, and I need to transfer files from one to the other using the synchronize module.
On serverA I created an SSH key (id_rsa) as the sudo user and copied the public key to serverB (into the authorized_keys file of the same sudo user).
Hosts file
[servers]
prod_server ansible_host=IP_prod
new_server ansible_host=IP_new
[servers:vars]
ansible_user=sudo_user
ansible_sudo_pass=sudo_password
ansible_ssh_private_key_file=~/.ssh/id_rsa
Play
- name: Transfer files from prod to new server
  hosts: new_server
  gather_facts: false
  roles:
    - rsync
Task
- name: Copy files to new server
  synchronize:
    src: /etc/letsencrypt/live/domain/fullchain.pem
    dest: /opt
  delegate_to: prod_server
When running the playbook an error shows up:
change_dir "/etc/letsencrypt/live/domain.tld" failed: Permission denied
That means that the sudo_user doesn't have the privileges to access that file.
If I set become: true, it would be possible to access the fullchain.pem file, but the play would then try to transfer the file as the root user, while the SSH key id_rsa is owned by the sudo_user.
What do I have to set to make this work?
If you can "sudo su -" on the remote box and are using -b (become):
synchronize:
  src: /your local directory path
  dest: /path on remote host owned by root
  rsync_path: "sudo rsync"
  mode: push
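Applied to the question above, a rough, untested sketch might look like the following. The private_key path is an assumption (sudo_user's key on prod_server), and it relies on sudo_user being allowed to run sudo rsync on the receiving host:
- name: Copy files to new server
  synchronize:
    src: /etc/letsencrypt/live/domain/fullchain.pem
    dest: /opt
    mode: push
    rsync_path: "sudo rsync"                   # run rsync under sudo on the receiving host
    private_key: /home/sudo_user/.ssh/id_rsa   # assumption: sudo_user's key, readable by root
  delegate_to: prod_server
  become: true                                 # read /etc/letsencrypt as root on prod_server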
Related
I want to store the output of my playbook's execution on the remote host in a specified directory.
For example, for the playbook below, I want the entire output of the execution on the remote host where the tasks are being executed.
- hosts: localhost
  tasks:
    - name: copy the svn text file to {{tgt1.stdout}} folder
      copy:
        src: /u01/checkout/
        dest: /u01/checkout/
        follow: true
        mode: 0755

    - name: commit the changes
      shell: svn ci --username bamboo --password bamboo
      args:
        chdir: /checkout/checkout/
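A common pattern for this (a sketch, not from the original post) is to register each task's result and write it to a file with the copy module's content parameter; the log path below is a placeholder:
- hosts: localhost
  tasks:
    - name: commit the changes
      shell: svn ci --username bamboo --password bamboo
      args:
        chdir: /checkout/checkout/
      register: svn_result                       # capture stdout/stderr of the task

    - name: store the task output in a chosen directory
      copy:
        content: "{{ svn_result.stdout }}\n{{ svn_result.stderr }}"
        dest: /u01/logs/svn_commit.log           # assumption: any writable log path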
I'm trying to provision a local kubernetes cluster using Vagrant (v2.1.2) and VirtualBox (v5.2.20).
My Vagrantfile uses the ansible_local provisioner to run some ansible playbooks to provision a K8s cluster using kubeadm.
This was all working perfectly a few months back, but it's not working anymore. I'm not sure why it stopped working, but the ansible playbook for the master node fails when I try to copy the kube config to the vagrant user's home dir.
The ansible task that fails is as follows:
- name: Copy kubeconfig for vagrant user
  copy:
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
This is causing the following error: fatal: [master1]: FAILED! => {"msg": "an error occurred while trying to read the file '/etc/kubernetes/admin.conf': [Errno 13] Permission denied: '/etc/kubernetes/admin.conf'"}
The src file does exist. If I ssh into the VM after the failure, I can copy the file with sudo cp /etc/kubernetes/admin.conf /home/vagrant/, but the failure above causes the vagrant provisioning to fail/halt.
FYI, I've tried a few combinations of things at the play and task levels, e.g. become: true and remote_user: root:
---
- hosts: all
  become: true
  tasks:
    ...
... but to no avail.
permissions on admin.conf are as follows:
vagrant@master1:/etc/kubernetes$ ls -al admin.conf
-rw------- 1 root root 5453 Aug 5 14:07 admin.conf
Full master-playbook.yml can be found here.
How do I get ansible to copy the file?
Quoting from the copy module documentation:
src Local path to a file to copy to the remote server.
The play failed because the user running the ansible playbook can't read the file on the controller (local path):
permission denied: '/etc/kubernetes/admin.conf'
Use remote_src: yes
remote_src If yes it will go to the remote/target machine for the src
- name: Copy kubeconfig for vagrant user
  copy:
    remote_src: yes
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
From the comment, this seems to be what you want to do
"The src file does exist. If I ssh into the VM after the failure, I can copy the file with sudo cp /etc/kubernetes/admin.conf /home/vagrant/"
This should work if the remote_user on the VM is allowed to sudo su and privilege escalation is properly set:
- hosts: all
  become: yes
  become_user: root
  become_method: sudo
It would not be enough to allow the remote_user to run sudo cp only. See Can’t limit escalation to certain commands:
Privilege escalation permissions have to be general. Ansible does not always use a specific command to do something but runs modules (code) from a temporary file name which changes every time. If you have ‘/sbin/service’ or ‘/bin/chmod’ as the allowed commands this will fail ...
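Putting the two pieces together for the Vagrant case, a minimal sketch (untested, using the paths from the question) would be:
- hosts: all
  become: yes                        # escalate on the VM so admin.conf is readable
  tasks:
    - name: Copy kubeconfig for vagrant user
      copy:
        remote_src: yes              # read src on the target machine, not the controller
        src: /etc/kubernetes/admin.conf
        dest: /home/vagrant/.kube/
        owner: vagrant
        group: vagrant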
I am able to copy a directory from a remote machine to localhost.
However, when the remote directory contains files/folders that belong to different users (gerrit & hal), I get a permission denied error, since I'm executing the playbook against the remote machine as the gerrit user.
My inventory file:
Target1 ansible_host=X.X.X.X ansible_connection=ssh ansible_user=gerrit
This is my playbook:
- hosts: Target1
gather_facts: false
tasks:
- name: Copy file
synchronize:
src: /dir/path/mydir
dest: .
mode: pull
I expect all data in the mydir folder that belongs to the gerrit user to be copied to localhost, excluding the files/folders that belong to the hal user. But currently, I'm getting a permission denied error.
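One possible workaround (a sketch; hal_stuff/ is a placeholder for whatever paths belong to the hal user) is to exclude those paths so rsync never tries to read them:
- name: Copy file
  synchronize:
    src: /dir/path/mydir
    dest: .
    mode: pull
    rsync_opts:
      - "--exclude=hal_stuff/"       # placeholder: any path owned by hal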
Goal: I'm trying to copy /home/foo/certificates/*/{fullchain.cer,*.key}
from one server (tower.example.com) to other nodes.
When running the task with verbose options, it gets stuck. Output logs can be found here (Ansible 2.5): https://gist.github.com/tristanbes/1be96509d4853d647a49d30259672779
- hosts: staging:web:tower:!tower.example.com
  gather_facts: yes
  become: yes
  become_user: foo
  tasks:
    - name: synchronize the certificates folder
      synchronize:
        src: "/home/foo/certificates/"
        dest: "{{ certificates_path }}/"
        rsync_opts:
          - "--include=fullchain.cer"
          - "--include=*.key"
      delegate_to: tower.example.com
I'm running this playbook on localhost.
What I expect from this is to connect as foo on tower.example.com and then ssh as foo to the servers in the web and staging groups to rsync-push the content of the folder matching the filters.
I can run other playbooks on tower.example.com
I can connect to tower.example.com with my user and then switch to foo with sudo -i && su foo.
As foo, I can connect as foo to the servers under the web and staging groups.
What am I missing?
It looks like tower.example.com may not have ssh access to the other hosts. This would cause Ansible to get stuck as ssh is waiting for a password. You can fix this by
generating a new ssh key on tower.example.com and authorizing it on the other hosts, and/or
setting up ssh-agent forwarding.
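For the ssh-agent forwarding option, one way (a sketch, assuming your local agent holds a key authorized on the web and staging hosts) is to pass ForwardAgent through ansible_ssh_common_args for the delegated host, e.g. in the inventory:
[tower:vars]
ansible_ssh_common_args='-o ForwardAgent=yes'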
Alternatively, you could fetch the certificates to a temp folder on localhost and copy them to the other hosts. This avoids the use of delegate_to, but is not as efficient as syncing the files directly between the remote hosts.
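That alternative could look roughly like this (a sketch, untested; it assumes the connection user can read /home/foo/certificates on tower.example.com and reuses the question's paths):
- hosts: tower.example.com
  gather_facts: no
  tasks:
    - name: Pull the certificates to a temp folder on the controller
      synchronize:
        mode: pull
        src: /home/foo/certificates/
        dest: /tmp/certificates/

- hosts: staging:web
  gather_facts: no
  tasks:
    - name: Push the certificates out to the other hosts
      synchronize:
        mode: push
        src: /tmp/certificates/
        dest: "{{ certificates_path }}/"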
I'm new to Ansible and I'm struggling with creating a new user on a remote machine and copying SSH keys (for git) from the local machine to the remote machine's new user.
Basically, from localmachine/somepath/keys/ to remotemachine/newuser/home/.ssh/.
So far I tried:
- name: Create user
  hosts: remote_host
  remote_user: root
  tasks:
    - name: Create new user
      user: name=newuser ssh_key_file=../keys/newuser
While this creates newuser on the remote machine, it doesn't copy any keys (.ssh is still empty). I also tried authorized_key as a second task, but only got an error message when trying to copy the private key.
Is it even possible to add the keys after I have already run it and newuser already exists? I.e., can I just run it again, or will I have to delete newuser first?
The ssh_key_file is the path used by the generate_ssh_key option of the user module. It's not the path of a local SSH key to upload for the newly created remote user.
If you want to upload the SSH key, you have to use the copy module:
- name: Create user
  hosts: remote_host
  remote_user: root
  tasks:
    - name: Create new user
      user:
        name: newuser

    - name: Create .ssh folder
      file:
        path: ~newuser/.ssh
        state: directory
        owner: newuser
        group: newuser
        mode: 0700

    - name: Upload SSH key
      copy:
        src: ../keys/newuser
        dest: ~newuser/.ssh/id_rsa
        owner: newuser
        group: newuser
        mode: 0700
BTW, it's recommended to use the YAML syntax instead of the key=value (args) syntax.
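For the user task from the question, the two syntaxes compare like this (same ssh_key_file value as in the question, which only takes effect together with generate_ssh_key):
# key=value (args) syntax
- name: Create new user
  user: name=newuser ssh_key_file=../keys/newuser

# equivalent YAML syntax
- name: Create new user
  user:
    name: newuser
    ssh_key_file: ../keys/newuser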