Similar questions have been asked before, but none have been answered or are specific to Vagrant.
I have a directory on host master which I would like to synchronize with my vagrant instance. Here's my playbook:
- hosts: master
  vars:
    backup_dir: /var/backups/projects/civi.common.scot/backups/latest/
    dest_dir: /var/import
  tasks:
    - name: Synchronize directories
      synchronize:
        src: "{{ backup_dir }}"
        dest: "{{ dest_dir }}"
        mode: pull
      delegate_to: default
Here is my inventory:
default ansible_host=192.168.121.199 ansible_port=22 ansible_user='vagrant' ansible_ssh_private_key_file='/run/media/daniel/RAIDStore/Workspace/docker/newhume/.vagrant/machines/default/libvirt/private_key'
master ansible_host=hume.common.scot
When I run this play, the process does not seem to copy any files to disk, but does not error or quit either.
With config.ssh.forward_agent = true in my Vagrantfile, I am able to issue the following command from the Vagrant guest:
rsync --rsync-path='sudo rsync' -avz -e ssh $remote_user@$remote_host:$remote_path $local_path
However, the following playbook does not work (same problem as when using the synchronize module):
- name: synchronize directories (bugfix for above)
  command: "rsync --rsync-path='sudo rsync' -avz -e ssh {{ remote_user }}@{{ remote_host }}:{{ backup_directory }} {{ dest_dir }}"
I have also tried using shell instead of command.
How can I copy these files to my vagrant instance?
The "syschronize" Ansible module "is run and originates on the local host where Ansible is being run" (quote from manpage). So it is for copy from local to remote. What you want to do, is to copy from remote A (master) to remote B (default).
To acomplish that you'd have to exchange ssh keys for a specific user from B to A and vice versa for known_hosts. The following should guide you through the process:
- hosts: default
  tasks:
    # transfer local pub-key to remote authorized_keys
    - name: fetch local ssh key from root user
      shell: cat /root/.ssh/id_rsa.pub
      register: ssh_keys
      changed_when: false

    - name: deploy ssh key to remote server
      authorized_key:
        user: "root"
        key: "{{ item }}"
      delegate_to: "master"
      with_items:
        - "{{ ssh_keys.stdout }}"

    # fetch remote host key and add to local known_hosts
    # to omit key accept prompt
    - name: fetch ssh rsa host key from remote server
      shell: cat /etc/ssh/ssh_host_rsa_key.pub
      register: ssh_host_rsa_key
      delegate_to: master
      changed_when: false

    - name: create /root/.ssh/ if not existent
      file:
        path: "/root/.ssh/"
        owner: root
        group: root
        mode: 0700
        state: directory

    - name: add hostkey to root known host file
      lineinfile:
        path: "/root/.ssh/known_hosts"
        line: "{{ master.fqdn }} {{ ssh_host_rsa_key.stdout }}"
        mode: 0600
        create: yes
        state: present

    # now call rsync to fetch from master
    - name: fetch from remote
      shell: rsync --rsync-path='sudo rsync' -avz -e ssh root@{{ master.fqdn }}:{{ backup_directory }} {{ dest_dir }}
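Assuming the play above is saved as sync.yml and the inventory from the question is in a file named inventory (both file names are assumptions), it can be run from the control host with:

ansible-playbook -i inventory sync.yml

The play targets default (the Vagrant guest) and delegates the key deployment and host-key lookup to master, so both hosts must be reachable from the control host.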
I have three hosts:
my local ansible controller
a jump/bastion host (jump_host) for my infrastructure
a target host I want to run ansible tasks against (target_host) which is only accessible through jump_host
As part of my inventory file, I have the details of both jump_host and target_host as follows:
jump_host:
  ansible_host: "{{ jump_host_ip }}"
  ansible_port: 22
  ansible_user: root
  ansible_password: password

target_host:
  ansible_host: "{{ target_host_ip }}"
  ansible_port: 22
  ansible_user: root
  ansible_password: password
  ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q root@{{ jump_host_ip }}"'
How can we configure ansible to use the password mentioned in the jump_host settings from the inventory file instead of using any additional configurations from ~/.ssh/config file?
There is no direct way to provide the password for the jump host as part of the ProxyCommand.
So, I ended up doing the following:
# Generate SSH keys on the controller
- hosts: localhost
  become: false
  tasks:
    - name: Generate the localhost ssh keys
      community.crypto.openssh_keypair:
        path: ~/.ssh/id_rsa
        force: no

# Copy the controller's public key into the jump_host .ssh/authorized_keys file
# to ensure that no password is prompted while logging into jump_host
- hosts: jump_host
  become: false
  tasks:
    - name: make sure public key exists on target for user
      ansible.posix.authorized_key:
        user: "{{ ansible_user }}"
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
        state: present
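With the controller's public key now in the jump host's authorized_keys, the ProxyCommand from the inventory above no longer prompts for the jump host password, so plays against target_host can run unchanged. A quick way to verify the whole chain (a sketch; the inventory file name is an assumption):

ansible target_host -i inventory.yml -m ping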
Our security group asked for all the information we gather from the hosts we manage with Ansible Tower. I want to run the setup module and put the output into a file in a folder I can run ansible-cmdb against. I need to do this in a playbook because we have disabled root login on the hosts and only allow public/private key authentication for the Tower user. The private key is stored in the database, so I cannot run the setup command from the CLI and impersonate the Tower user.
EDIT I am adding my code so it can be tested elsewhere.
gather_facts.yml
---
- hosts: all
  gather_facts: true
  become: true
  tasks:
    - name: Check for temporary dir make it if it does not exist
      file:
        path: /tmp/ansible
        state: directory
        mode: 0755

    - name: Gather Facts into a file
      copy:
        content: '{"ansible_facts": {{ ansible_facts | to_json }}}'
        dest: /tmp/ansible/{{ inventory_hostname }}
cmdb_gather.yml
---
- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: Fetch fact gather files
      fetch:
        src: /tmp/ansible/{{ inventory_hostname }}
        dest: /depot/out/
        flat: yes
CLI:
ansible -i devinventory -m setup --tree out/ all
This would basically look like the following (written on the spot, not tested):
- name: make the equivalent of "ansible somehosts -m setup --tree /some/dir"
  hosts: my_hosts
  vars:
    treebase: /path/to/tree/base
  tasks:
    - name: Make sure we have a folder for tree base
      file:
        path: "{{ treebase }}"
        state: directory
      delegate_to: localhost
      run_once: true

    - name: Dump facts host by host in our treebase
      copy:
        dest: "{{ treebase }}/{{ inventory_hostname }}"
        content: '{"ansible_facts": {{ ansible_facts | to_json }}}'
      delegate_to: localhost
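Assuming the play above is saved as gather_tree.yml (the file name is an assumption) and run against the same inventory as the CLI example, the directory it writes mimics the setup --tree output, so ansible-cmdb can be pointed at it directly:

ansible-playbook -i devinventory gather_tree.yml
ansible-cmdb /path/to/tree/base > overview.html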
OK, I figured it out. While @Zeitounator had a correct answer, I had the wrong idea. Checking the documentation for ansible-cmdb, I noticed that the application can use the Ansible fact cache. I included a custom ansible.cfg in the project folder and added:
[defaults]
fact_caching=jsonfile
fact_caching_connection = /depot/out
ansible-cmdb was able to parse the output correctly using:
ansible-cmdb -f /depot/out > ansible_hosts.html
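With jsonfile caching configured as above, any play that gathers facts (the gather_facts.yml from the question is enough) writes one JSON file per host into /depot/out, so populating the cache and producing the report comes down to:

ansible-playbook -i devinventory gather_facts.yml
ansible-cmdb -f /depot/out > ansible_hosts.html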
I'm trying to create a playbook that has to complete the following tasks:
retrieve hostnames and releases from a file (file)
connect to those hostnames one by one
retrieve the contents of another file (another_file) on each host, which will give us the environment (dev, qa, prod)
So my first task is to retrieve the names of the hosts I need to connect to:
- name: retrieve nodes
  shell: cat file | grep ";;" | grep foo | grep -v "bar" | sort | uniq
  register: nodes

- name: adding nodes
  add_host:
    name: "{{ item }}"
    group: "servers"
  with_items: "{{ nodes.stdout_lines }}"
The next play connects to the hosts:
- hosts: "{{ nodes }}"
become: yes
become_user: user1
become_method: sudo
gather_facts: no
tasks:
- name: check remote file
slurp:
src: /xyz/directory/another_file
register: thing
- debug: msg="{{ thing['content'] | b64decode }}"
But it doesn't work; when I add verbosity I can see the task still being executed as the user running the playbook (local_user):
<node> ESTABLISH SSH CONNECTION FOR USER: local_user
What am I doing wrong?
Ansible version is 2.7.1 on RHEL 6.1.
UPDATE
I've also used remote_user: user1
- hosts: "{{ nodes }}"
remote_user: user1
gather_facts: no
tasks:
- name: check remote file
slurp:
src: /xyz/directory/another_file
register: thing
- debug: msg="{{ thing['content'] | b64decode }}"
no luck so far, same error
You need to set remote_user, which controls the username used when connecting to the server. Only after the connection is established is become_user applied.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html#hosts-and-users
Using become_user is like running a command with sudo on the remote machine. You want to connect to your machines as a specific user, and only after the connection is established issue commands with sudo.
This question has some answers that should help you.
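Applied to the playbook in the question, that means using remote_user for the SSH connection and reserving become/become_user for switching accounts after the connection is made. A minimal sketch (the host pattern is changed to the servers group created by add_host in the question; add become options only if you need a different account than the one you connect as):

- hosts: servers
  remote_user: user1      # user for the SSH connection
  gather_facts: no
  tasks:
    - name: check remote file
      slurp:
        src: /xyz/directory/another_file
      register: thing

    - debug: msg="{{ thing['content'] | b64decode }}"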
I am trying to build an rsync-style backup of multiple servers. I would like to create a backup directory per server locally on my laptop and then back each server up into it. If the directory does not exist, create it.
I start off by running the playbook against localhost so that I can create the directories locally, then the playbook moves on to the backup group. The issue is that I don't know how to populate the hostnames from the backup group: when I run the playbook below, the only directory that gets created is localhost. I need a local directory created for each host in the backup group, and each host backed up into it. What would be the easiest way to make this work?
- hosts: localhost
  become: yes
  #strategy: free
  pre_tasks:
  vars:
    - backupDir: "/Users/user1/Desktop/Fusion/backups/{{ inventory_hostname }}/"
  roles:
  tasks:
    - name: Check if Backup Folder Exists
      stat:
        path: "{{ backupDir }}"
      register: my_folder

    - name: "Ansible Create directory if not exists"
      file:
        path: "{{ backupDir }}"
        state: directory
      when: my_folder.stat.exists == false

- hosts: backup
  tasks:
    - name: Rsync Directories from Remote to Local
      synchronize:
        mode: pull
        src: "{{ item }}"
        dest: "{{ backupDir }}/{{ ansible_date_time.date }}.back"
      with_items:
        - "/home/user1/"
        - "/var/www/html/"
        - "/root/"
      when: my_folder.stat.exists
  handlers:
In that case, I think you're looking for the loop keyword.
Something like this:
- name: "Ansible Create directory if not exists"
file:
path: "{{ backupDir }}"
state: directory
when: my_folder.stat.exists == false
loop: {{ inventory_hostname }}
https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html
In your inventory file you can create groups that tie back to the hosts you're calling on.
[localhost]
127.0.0.1
[backup]
host1
host2
host3
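Putting the two together, one way to get a local directory per host in the backup group and then pull into it is to loop over groups['backup'] in a localhost play, and let the second play build the destination from each host's own inventory_hostname. A sketch assembled from the playbook in the question (the paths and group names are the ones shown above):

- hosts: localhost
  tasks:
    - name: Create one local backup directory per host in the backup group
      file:
        path: "/Users/user1/Desktop/Fusion/backups/{{ item }}/"
        state: directory
      loop: "{{ groups['backup'] }}"

- hosts: backup
  vars:
    backupDir: "/Users/user1/Desktop/Fusion/backups/{{ inventory_hostname }}"
  tasks:
    - name: Rsync directories from remote to local
      synchronize:
        mode: pull
        src: "{{ item }}"
        dest: "{{ backupDir }}/{{ ansible_date_time.date }}.back"
      with_items:
        - "/home/user1/"
        - "/var/www/html/"
        - "/root/"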
With Ansible I am trying to copy a file from one host to another host. I accomplish this using the synchronize module with delegate_to; however, I believe that this is the root of my problems.
- hosts: hosts
  tasks:
    - name: Synchronization using rsync protocol (pull)
      synchronize:
        mode: pull
        src: /tmp/file.tar
        dest: /tmp
      delegate_to: 10.x.x.1
      become_user: root
      become_method: su
      become: yes
My code to get the root password
- name: Set root password for host
  set_fact:
    ansible_become_password: password
My inventory file containing the hosts:
[hosts]
10.x.x.2
Now I'm facing the following error:
fatal: [hosts]: FAILED! => {"msg": "Incorrect su password"}
Playbook:
- name: Fetch root password for host from CA PAM
  shell: ./grabPassword.ksh
  args:
    chdir: /home/virgil
  register: password
  delegate_to: "{{ delegate_host }}"
  become_user: "{{ root_user }}"
  become_method: su

- debug:
    msg: "{{ password.stdout }}"

- name: Set root password for host
  set_fact:
    ansible_become_password: "{{ password.stdout }}"

- name: Copying tar file over (Never existed)
  synchronize:
    src: /tmp/emcgrab_Linux_v4.8.3.tar
    dest: /tmp
  delegate_to: "{{ delegate_host }}"
  register: test
  become: yes
Check that you have not also set the become password on the command line or in your configuration; a password defined there can conflict with the one you fetch and set via set_fact.
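One way to check (a sketch; the commands assume a reasonably recent Ansible and a conventional inventory layout with group_vars/host_vars directories):

# Show every setting that differs from the defaults, including become-related options
ansible-config dump --only-changed

# Look for a become password defined in the inventory or in group/host vars
grep -r ansible_become_pass inventory group_vars host_vars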