I am trying to find an example where I can pull a file from serverA to a group of servers. The group mygroup consists of 10 servers, and I need to copy that file over to all 10 of them.
Here is what I have, but it is not working exactly. I can do a one-to-one copy without the handlers part, no problem.
- hosts: serverA
  tasks:
    - name: Transfer file from serverA to mygroup
      synchronize:
        src: /tmp/file.txt
        dest: /tmp/file.txt
        mode: pull
  handlers:
    - name: to many servers
      delegate_to: $item
      with_items: ${groups.mygroup}
You should carefully read the documentation about how Ansible works (what a host pattern is, what a strategy is, what a handler is, and so on).
Here's the answer to your question:
---
# Push mode (connect to xenial1 and rsync-push to other hosts)
- hosts: xenial-group:!xenial1
  gather_facts: no
  tasks:
    - synchronize:
        src: /tmp/hello.txt
        dest: /tmp/hello.txt
      delegate_to: xenial1

# Pull mode (connect to other hosts and rsync-pull from xenial1)
- hosts: xenial1
  gather_facts: no
  tasks:
    - synchronize:
        src: /tmp/hello.txt
        dest: /tmp/hello.txt
        mode: pull
      delegate_to: "{{ item }}"
      with_inventory_hostnames: xenial-group:!xenial1
Inventory:
[xenial-group]
xenial1 ansible_ssh_host=192.168.168.186 ansible_user=ubuntu ansible_ssh_extra_args='-o ForwardAgent=yes'
xenial2 ansible_ssh_host=192.168.168.187 ansible_user=ubuntu ansible_ssh_extra_args='-o ForwardAgent=yes'
xenial3 ansible_ssh_host=192.168.168.188 ansible_user=ubuntu ansible_ssh_extra_args='-o ForwardAgent=yes'
Keep in mind that synchronize is a wrapper around rsync, so for this setup to work there must be SSH connectivity between the target hosts themselves (usually you only have SSH connectivity between the control host and the target hosts). I use agent forwarding for this.
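If you would rather not repeat ansible_ssh_extra_args for every host, agent forwarding can also be enabled globally in ansible.cfg. A minimal sketch, assuming your key is loaded into a local ssh-agent (note that ssh_args replaces Ansible's default SSH options, so the usual control-persist flags are kept alongside ForwardAgent):

[ssh_connection]
# ForwardAgent lets the delegated host authenticate to the other targets with the controller's key
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o ForwardAgent=yes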
We have two servers in the group web_servers and are trying to fetch logs. When we run the play below, it only returns files for the second server.
-Inventory FILE-
[web_servers]
(IP-redacted) server_name=QA dest_path=(web address-redacted)
(same-IP-redacted) server_name=STAGING dest_path=(web address-redacted)
---
- hosts: web_servers
  become: true
  remote_user: jackansible
  gather_facts: False
  tasks:
    - set_fact: mydate="{{ lookup('pipe', 'date +%Y-%m-%d_%H.%M.%S') }}"
    - debug: var=mydate
    - name: Copy file from {{ server_name }} to local
      ansible.builtin.fetch:
        src: /var/log/{{ item }}
        dest: /Users/jackl/Desktop/{{ mydate }}-{{ server_name }}-{{ item }}.txt
        flat: yes
      register: fetch_output
      loop:
        - syslog
        - faillog
        - dmesg
        - auth.log
        - kern.log
    - debug: var=fetch_output
Yes, @Zeitounator, we modified the inventory file to change the IP address at the beginning of each line (which was the same for both hosts) to the unique web address of each host.
This worked perfectly.
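For reference, the fix amounts to giving each host its own unique address in the inventory. A minimal sketch, with hypothetical hostnames standing in for the redacted values:

[web_servers]
# the hostnames below are placeholders for the redacted web addresses
qa.example.com server_name=QA dest_path=(web address-redacted)
staging.example.com server_name=STAGING dest_path=(web address-redacted)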
I have a task based on a shell command that needs to run on the local computer. As part of the command, I need to insert the IP addresses of some of the groups in my inventory file.
Inventory file:
[server1]
130.1.1.1
130.1.1.2
[server2]
130.1.1.3
130.1.1.4
[server3]
130.1.1.5
130.1.1.6
I need to run the following command from the local computer against the IPs that are part of the server2 and server3 groups:
ssh-copy-id user@<IP>
# <IP> should be 130.1.1.3, 130.1.1.4, 130.1.1.5, 130.1.1.6
Playbook - where I'm missing the IP part:
- hosts: localhost
  gather_facts: no
  become: yes
  tasks:
    - name: Generate ssh key for root user
      shell: "ssh-copy-id user@{{ item }}"
      run_once: True
      with_items:
        - server2 group
        - server3 group
In a nutshell:
- hosts: server2:server3
  gather_facts: no
  become: true
  tasks:
    - name: Push local root pub key for remote user
      shell: "ssh-copy-id user@{{ inventory_hostname }}"
      delegate_to: localhost
Note that I kept your exact shell command, which is actually bad practice since there is a dedicated Ansible module to manage authorized keys. So this could be translated to something like:
- hosts: server2:server3
  gather_facts: no
  become: true
  tasks:
    - name: Push local root pub key for remote user
      authorized_key:
        user: user
        state: present
        key: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
I had our security group ask for all the information we gather from the hosts we manage with Ansible Tower. I want to run the setup module and put its output into files in a folder I can run ansible-cmdb against. I need to do this in a playbook because we have disabled root login on the hosts and only allow public/private key authentication for the Tower user. The private key is stored in the database, so I cannot run the setup command from the CLI and impersonate the Tower user.
EDIT: I am adding my code so it can be tested elsewhere.
gather_facts.yml
---
- hosts: all
  gather_facts: true
  become: true
  tasks:
    - name: Check for temporary dir, make it if it does not exist
      file:
        path: /tmp/ansible
        state: directory
        mode: "0755"
    - name: Gather facts into a file
      copy:
        content: '{"ansible_facts": {{ ansible_facts | to_json }}}'
        dest: /tmp/ansible/{{ inventory_hostname }}
cmdb_gather.yml
---
- hosts: all
  gather_facts: no
  become: true
  tasks:
    - name: Fetch fact gather files
      fetch:
        src: /tmp/ansible/{{ inventory_hostname }}
        dest: /depot/out/
        flat: yes
CLI:
ansible -i devinventory -m setup --tree out/ all
This would basically look like (written on the spot, not tested):
- name: Make the equivalent of "ansible somehosts -m setup --tree /some/dir"
  hosts: my_hosts
  vars:
    treebase: /path/to/tree/base
  tasks:
    - name: Make sure we have a folder for tree base
      file:
        path: "{{ treebase }}"
        state: directory
      delegate_to: localhost
      run_once: true
    - name: Dump facts host by host in our treebase
      copy:
        dest: "{{ treebase }}/{{ inventory_hostname }}"
        content: '{"ansible_facts": {{ ansible_facts | to_json }}}'
      delegate_to: localhost
OK, I figured it out. While @Zeitounator had a correct answer, I had the wrong idea. In checking the documentation for ansible-cmdb I noticed that the application can use the fact cache. When I included a custom ansible.cfg in the project folder and added:
[defaults]
fact_caching=jsonfile
fact_caching_connection = /depot/out
ansible-cmdb was able to parse the output correctly using:
ansible-cmdb -f /depot/out > ansible_hosts.html
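To populate that cache, it is enough to run any play that gathers facts against the inventory. A minimal sketch (the file name cache_facts.yml is hypothetical; it assumes the custom ansible.cfg above sits next to the playbook):

---
- hosts: all
  gather_facts: true
  tasks:
    - name: Nothing to do here; gathering facts above is what fills the jsonfile cache
      debug:
        msg: "Facts for {{ inventory_hostname }} are now cached in /depot/out"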
I am trying to build an rsync-type backup for multiple servers. I would like to create a backup directory per server locally on my laptop and then back them up. If a directory does not exist, it should be created.
I start off by calling the playbook against localhost, so that I can create the directories locally, and then change the playbook to target the backup group. The issue is that I don't know how to populate the hostnames in the backup group. When I run the playbook below, the only directory that gets created is localhost. I need each host in the backup group to get a local directory and then be backed up. What would be the easiest way to make this work?
- hosts: localhost
  become: yes
  #strategy: free
  pre_tasks:
  vars:
    - backupDir: "/Users/user1/Desktop/Fusion/backups/{{ inventory_hostname }}/"
  roles:
  tasks:
    - name: Check if Backup Folder Exists.
      stat:
        path: "{{ backupDir }}"
      register: my_folder
    - name: "Ansible Create directory if not exists"
      file:
        path: "{{ backupDir }}"
        state: directory
      when: my_folder.stat.exists == false

- hosts: backup
  tasks:
    - name: Rsync Directories from Remote to Local
      synchronize:
        mode: pull
        src: "{{ item }}"
        dest: "{{ backupDir }}/{{ ansible_date_time.date }}.back"
      with_items:
        - "/home/user1/"
        - "/var/www/html/"
        - "/root/"
      when: my_folder.stat.exists
  handlers:
In that case, I think you're looking for the loop keyword.
Something like this:
- name: "Ansible Create directory if not exists"
file:
path: "{{ backupDir }}"
state: directory
when: my_folder.stat.exists == false
loop: {{ inventory_hostname }}
https://docs.ansible.com/ansible/latest/user_guide/playbooks_loops.html
In your inventory file you can create groups that tie back to the hosts you're calling on.
[localhost]
127.0.0.1
[backup]
host1
host2
host3
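If the goal is one local directory per host in the backup group, a minimal sketch (an assumption on my part, not tested against the playbook above) would loop over groups['backup'] from the localhost play instead of over inventory_hostname:

- hosts: localhost
  gather_facts: no
  tasks:
    - name: Create a local backup directory per host in the backup group
      file:
        path: "/Users/user1/Desktop/Fusion/backups/{{ item }}"
        state: directory
      loop: "{{ groups['backup'] }}"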
I'm running an Ansible playbook for host_a. Some tasks I delegate to host_b.
Now I would like to use the synchronize module to copy a directory from localhost to host_b. But delegate_to is the wrong option here, since this results in copying from host_b to host_a.
Is there a way to do this?
- hosts: host_a
  tasks:
    - name: rsync directory from localhost to host_b
      synchronize:
        # files on localhost
        src: files/directory
        dest: /directory/on/host_b
      # delegate_to does not work here
      # delegate_to: host_b
The only solution I can think of is deleting the target directory and then using a recursive copy with the copy module.
I couldn't find anything in the module documentation.
(Using ansible 2.4.2.0)
Doing this task in its own play for host_b is also not really an option because the variables I need for this task depend on host_a.
The easiest solution in this case is to use the rsync command with local_action, i.e.:
- hosts: cache1
  tasks:
    - name: rsync directory from localhost to host_b
      local_action: command rsync -az "{{ playbook_dir }}/files/directory" "{{ hostvars['host_b']['ansible_host'] }}:/directory/on/host_b"
{{ playbook_dir }} helps by not hardcoding paths on the local system.
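Note that this relies on ansible_host being defined for host_b in the inventory, since hostvars is used to resolve the address. A minimal sketch of such an entry (the IP address is hypothetical):

host_b ansible_host=192.168.0.42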