Using a key file in Ansible for a particular task

What I need to achieve:
I need to copy a file from one remote server to another. I am using the YAML below.
- name: fetching file to localhost
  synchronize:
    src: /lcaope01/lca/lst1/logs/{{ inventory_hostname }}
    dest: /lcaope01/lca/FETCHEDLOG/
    mode: pull
  delegate_to: serverX
Now I have five servers (A, B, C, D, E) from which files are copied to serverX. If I set up a passwordless connection between serverX and these servers, it copies easily.
But the problem is I can't create a passwordless connection for this.
Is there any way I can use a private key file for this, specifying it in the playbook or task?
NOTE: Ansible is not running on serverX; it is on a different server.
NOTE: I have a private key for serverX.
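For what it's worth, one way this is commonly handled is the synchronize module's private_key option, which points the underlying rsync-over-ssh connection at a specific key. A minimal sketch, assuming the key already sits on serverX (where rsync runs once the task is delegated) at a hypothetical path:

```yaml
- name: fetching file to serverX using an explicit key
  synchronize:
    src: /lcaope01/lca/lst1/logs/{{ inventory_hostname }}
    dest: /lcaope01/lca/FETCHEDLOG/
    mode: pull
    # private_key must exist on serverX, since that is where rsync runs
    # after delegation; the path below is a placeholder.
    private_key: /home/someuser/.ssh/id_rsa_nodes
  delegate_to: serverX
```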

Related

How can I specify remote destination server when using copy module in ansible

Copying file to second server
Here is a task that I want to execute in order to copy a dump file from the prod server to the test server.
How can I explicitly tell it what the destination is? Or what's the correct approach?
I have already created a destination dir on the test server (/home/{{sys_user}}/prod_db_copy/).
Here is the task that I tried:
- name: "Copy dump file to test server"
when: inventory_hostname in groups['prod']
copy:
src: "/home/{{ sys_user }}/db_copy/{{ db_name }}.dump.gz"
dest: "{{ inventory_hostname=='test' }} /home/{{sys_user}}/prod_db_copy/"
remote_src: yes
The copy module copies files from the Ansible controller to a remote machine, or between two locations on the same remote machine (with remote_src). What you are trying to do is copy files between two different remote machines, so copy won't do the job. You need to use the synchronize module with delegate_to.
Depending on your network and users on the machines, you might need to add some options or add ssh-keys to some servers, but in general, it can look like this:
- hosts: prod
  tasks:
    - name: Transfer file from prod machines to backup-server
      synchronize:
        src: "/home/{{ sys_user }}/db_copy/{{ db_name }}.dump.gz"
        dest: "/home/{{sys_user}}/prod_db_copy/"
        mode: pull
      delegate_to: backup-server
This uses pull mode. You can use push as well, but then you need to run the play against backup-server and delegate the task to the prod machines; see the sketch below.
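A sketch of the push variant under the same assumptions (the prod group and paths come from the example above): the play targets backup-server, and each synchronize task is delegated to a prod machine, which pushes its file across.

```yaml
- hosts: backup-server
  tasks:
    - name: Transfer file from each prod machine to backup-server
      synchronize:
        src: "/home/{{ sys_user }}/db_copy/{{ db_name }}.dump.gz"
        dest: "/home/{{ sys_user }}/prod_db_copy/"
        mode: push
      # In push mode the delegate is the source, so loop the delegation
      # over every host in the prod group.
      delegate_to: "{{ item }}"
      loop: "{{ groups['prod'] }}"
```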

Ansible AWX - Deploy multiple hosts with different credentials with one playbook in one AWX template

The environment consists of three hosts: A, B, and C.
Host A runs Ansible with AWX within a Docker container.
My task is to fetch backup files from host B and transfer them to server C.
In fact, these are two tasks.
Hosts B and C have different credentials.
I know how to combine both tasks in one playbook, but how can I add the second credential for the C server so the copy task works in an AWX template?
Playbook example:
- hosts: B
  tasks:
    - name: Fetch backup data from remote host B to my_path
      fetch:
        src: /origin_path/backup_file.tar.gz
        dest: /my_path/backup_file.tar.gz

- hosts: C
  tasks:
    - name: Copy backup data from my_path to remote host C
      copy:
        src: /my_path/backup_file.tar.gz
        dest: /remote_path/backup_file.tar.gz
For this type of scenario I have created separate playbooks for each host, created a template for each playbook, and then used a workflow template to run the two playbooks in sequence.
I'd prefer a better solution, which would be to allow multiple credentials as you describe.
You can use inventory variables to set the credentials for your hosts (via host vars or group vars):

- Encrypt your passwords using Ansible Vault (https://docs.ansible.com/ansible/latest/user_guide/vault.html#choosing-between-a-single-password-and-multiple-passwords).
- Set the ansible_user and ansible_password host variables in your inventory (note that ansible_password holds the value encrypted with ansible-vault).
- Create a Vault credential in AWX to store the key used to encrypt your password.
- Add the Vault credential to your playbook template in AWX. This ensures the password is decrypted when the playbook is run.

In this way, inventory variables can handle multiple credentials for your servers; a sketch of the host_vars side follows.
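As a sketch of that setup, the host_vars file for host C might look like the following; the user name is a placeholder and the vault ciphertext is truncated example output from ansible-vault encrypt_string:

```yaml
# host_vars/C.yml (sketch)
# Generate the encrypted value with:
#   ansible-vault encrypt_string 'the-real-password' --name 'ansible_password'
ansible_user: c_admin          # placeholder user on host C
ansible_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  6231336561623963...          # truncated example ciphertext
```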

Ansible Synchronize module stuck while testing

Goal: I'm trying to copy /home/foo/certificates/*/{fullchain.cer,*.key}
from one server (tower.example.com) to other nodes.
When running the task with verbose options, it gets stuck. Output logs (Ansible 2.5) can be found here: https://gist.github.com/tristanbes/1be96509d4853d647a49d30259672779
- hosts: staging:web:tower:!tower.example.com
  gather_facts: yes
  become: yes
  become_user: foo
  tasks:
    - name: synchronize the certificates folder
      synchronize:
        src: "/home/foo/certificates/"
        dest: "{{ certificates_path }}/"
        rsync_opts:
          - "--include=fullchain.cer"
          - "--include=*.key"
      delegate_to: tower.example.com
I'm running this playbook from localhost.
What I expect is to connect as foo on tower.example.com and then ssh as foo to the servers in groups web and staging to rsync-push the contents of the folder matching the filters.
I can run other playbooks on tower.example.com.
I can connect to tower.example.com with my user and then sudo -i && su foo to become foo.
As foo, I can connect as foo to the other hosts in groups web and staging.
What am I missing?
It looks like tower.example.com may not have ssh access to the other hosts. This would cause Ansible to get stuck, as ssh sits waiting for a password. You can fix this by:

- generating a new ssh key on tower.example.com and authorizing it on the other hosts, and/or
- setting up ssh-agent forwarding (see the sketch below).
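A minimal sketch of the agent-forwarding route, assuming an ssh-agent is running on the controller with a key loaded that the web/staging hosts accept:

```yaml
# group_vars/all.yml (sketch): forward the controller's ssh-agent so that
# tower.example.com can authenticate onward to the web/staging hosts with
# the controller's loaded key when the synchronize task is delegated to it.
ansible_ssh_common_args: "-o ForwardAgent=yes"
```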
Alternatively, you could fetch the certificates to a temp folder on localhost and then copy them out to the other hosts. This avoids the use of delegate_to, but is not as efficient as syncing the files directly between the remote hosts.
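A sketch of that two-step fallback; note that fetch works on single files, so a real playbook would loop over the known certificate names, and the example.com directory here is a placeholder:

```yaml
- hosts: tower.example.com
  become: yes
  become_user: foo
  tasks:
    - name: fetch a certificate to a temp dir on the controller
      fetch:
        src: /home/foo/certificates/example.com/fullchain.cer
        dest: /tmp/certs/
        flat: yes    # keep the bare filename instead of a per-host tree

- hosts: staging:web
  tasks:
    - name: copy the certificate out to the other hosts
      copy:
        src: /tmp/certs/fullchain.cer
        dest: "{{ certificates_path }}/example.com/fullchain.cer"
```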

Is it possible to use Ansible fetch and copy modules for SCPing files

I'm trying to SCP a .sql file from one server to another, and I am trying to use an Ansible module to do so. I stumbled across the fetch and copy modules, but I am not sure how to specify which host I want to copy the file from.
This is my current Ansible file:
# firstDB database dump happens on a separate host
- hosts: galeraDatabase
  become: yes
  remote_user: kday
  become_method: sudo
  tasks:
    - name: Copy file to the galera server
      fetch:
        dest: /tmp/
        src: /tmp/{{ tenant }}Prod.sql
        validate_checksum: yes
        fail_on_missing: yes
Basically, I want to take the dump file from the firstDB host and then get it over to the other galeraDatabase host. How would I do this? In order to use fetch or copy, I would need to pass it the second hostname to copy the files from, and I don't see any parameter for that in the documentation. Should I be using a different method altogether?
Thanks
Try using the synchronize module with delegate_to, or, if you don't have rsync, fall back to the copy module. There are some good answers relating to this topic already on Stack Overflow.
Also, check the Ansible documentation for more info on the copy and synchronize modules, along with the delegate_to task parameter.
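For this specific case, a minimal sketch, assuming the dump sits on a host named firstDB in your inventory:

```yaml
- hosts: galeraDatabase
  become: yes
  tasks:
    - name: push the dump from firstDB to the galera server
      synchronize:
        src: /tmp/{{ tenant }}Prod.sql
        dest: /tmp/
        mode: push
      # In push mode the delegate (firstDB) is the source of the transfer.
      delegate_to: firstDB
```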
hth.

Ansible: transferring files between hosts

With ansible, I'm trying to copy an application artifact from a remote server "artifacts_host" to a target machine, i.e. a host in my inventory. The play I'm trying to run is something like:
- name: rsync WAR artifact from artifacts host
  synchronize: >
    src={{ artifacts_path }}/{{ artifact_filename }}.war
    dest={{ artifact_installation_dir }}
  delegate_to: "{{ artifacts_host }}"
I came very close to getting this to work by using ansible-vault to encrypt a "secrets.yml" variable file containing the artifacts_host's public key and then installing it in the target machine's authorized_keys file like:
- name: install artifacts_host's public key to auth file
  authorized_key: >
    user={{ ansible_ssh_user }}
    key='{{ artifacts_host_public_key }}'
  sudo: yes
but the problem is that my artifacts_host cannot resolve an IP address from the FQDN that Ansible passes to it. If I were able to tell the artifacts_host which IP to use (what the FQDN should resolve to), I would be fine. I would also be fine having the task fire off on the target machine to pull from the artifacts_host, but I can't find an idempotent way of accomplishing this, nor can I figure out how to feed the target machine a login/password or ssh key to use.
Am I just going to have to template out a script to push to my targets?
For anyone who comes across this with the same question: I did not really figure it out. I just decided to install the private key in the target machines' /etc/ssh directory and chmod it to 0600. I figure it's about as secure as it can get without a transient (in-memory-only) key/password, and it's idempotent. Roughly, the tasks look like the sketch below.
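A sketch of that approach, with placeholder names throughout (the key filename, the deploy user, and the relaxed host-key checking are all assumptions, not details from the original setup); the target installs the key, then pulls the artifact itself with rsync:

```yaml
- name: install the artifacts_host private key on the target
  copy:
    src: files/artifacts_host_key        # placeholder controller-side path
    dest: /etc/ssh/artifacts_host_key
    mode: "0600"
    owner: root
  become: yes

- name: pull the WAR from the artifacts host using that key
  # rsync only copies when the file differs, though the command module
  # itself will always report "changed".
  command: >
    rsync -az -e "ssh -i /etc/ssh/artifacts_host_key -o StrictHostKeyChecking=no"
    deploy@{{ artifacts_host }}:{{ artifacts_path }}/{{ artifact_filename }}.war
    {{ artifact_installation_dir }}/
  become: yes
```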
