Could you please help me add the NTP server address as a variable? This is a VMware NTP server update, and I am trying to pass the NTP server address as a variable.
Var file content:
cat var.yml
NTP_Servers:
  - 192.168.10.20
  - 192.168.10.21
Playbook file:
- name: Add NTP to host on specified portgroup
  local_action:
    module: vmware_host_ntp
    hostname: "{{ vcenter_ip }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: False
    esxi_hostname: "{{ item.value.mgmtip }}"
    ntp_servers: <<<<<<<<<IP ADDRESS>
    state: present
  with_dict: "{{ PayloadNodes | default({}) }}"
  ignore_errors: yes
  tags: NTP
If you want to get the contents of the file as variables, you can either use include_vars at the task level:
- name: Load the NTP variables
  include_vars: var.yml
or use vars_files at the playbook level:
---
- hosts: netservers
  vars_files:
    - var.yml
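With the variables loaded either way, the task from the question can then simply reference the list. A minimal sketch, reusing the same parameters as above with only ntp_servers changed to use the variable:

- name: Add NTP to host on specified portgroup
  local_action:
    module: vmware_host_ntp
    hostname: "{{ vcenter_ip }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    validate_certs: False
    esxi_hostname: "{{ item.value.mgmtip }}"
    # NTP_Servers is the list defined in var.yml
    ntp_servers: "{{ NTP_Servers }}"
    state: present
  with_dict: "{{ PayloadNodes | default({}) }}"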
My goal is to execute a shell command on different hosts with credentials given directly from the playbook. So far I have tried two things.
Attempt 1:
- name: test
  shell:
    command: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
  args:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
Attempt 2:
- name: set default credentials
  set_fact:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
- name: test
  shell:
    command: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
The username in the variable {{ cred_ios_r_user }} is 'User1'. But when I look at the output from Ansible, it tells me it used the default SSH user named "SSH_User".
What do I need to change so that Ansible uses the given credentials?
The first thing you should check is the remote_user attribute of the module:
- name: DigitalOcean | Disallow root SSH access
  remote_user: root
  lineinfile: dest=/etc/ssh/sshd_config
              regexp="^PermitRootLogin"
              line="PermitRootLogin no"
              state=present
  notify: Restart ssh
The next thing is SSH keys. By default, your connections use your own private key, but you can override it via the extra args of the ansible-playbook command, via variables in the inventory file, or via job vars:
vars:
  ansible_ssh_user: "root"
  ansible_ssh_private_key_file: "{{ role_path }}/files/ansible.key"
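For example, the same override can be passed as extra args on the command line; a sketch, where the playbook name and key path are placeholders:

ansible-playbook site.yml -u root --private-key files/ansible.key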
So if you want, you can set custom keys that you keep in vars or files:
- name: DigitalOcean | Add Pub key
  remote_user: root
  authorized_key:
    user: "{{ do_user }}"
    key: "{{ lookup('file', do_key_public) }}"
    state: present
You can find more information in my auto-droplet-digitalocean-role.
Your vars should be... vars available for your task, not arguments passed to the shell module. So in a nutshell:
- name: test
  vars:
    ansible_connection: network_cli
    ansible_network_os: ios
    ansible_user: "{{ cred_ios_r_user }}"
    ansible_password: "{{ cred_ios_r_pass }}"
  shell:
    command: "whoami"
  with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"
  delegate_to: "{{ item.split(';')[0] }}"
Meanwhile, this looks like bad practice. You would normally use add_host to build an in-memory inventory with the correct hosts and credentials, and then run your task using the natural host-batch play loop. Something like:
- name: manage hosts
  hosts: localhost
  gather_facts: false
  tasks:
    - name: add hosts to inventory
      add_host:
        name: "{{ item.split(';')[0] }}"
        groups:
          - my_custom_group
        ansible_connection: network_cli
        ansible_network_os: ios
        ansible_user: "{{ cred_ios_r_user }}"
        ansible_password: "{{ cred_ios_r_pass }}"
      with_items: "{{ lookup('file', '../files/deviceList.txt').splitlines() }}"

- name: run my command on all added hosts
  hosts: my_custom_group
  gather_facts: false
  tasks:
    - name: test command
      shell: whoami
Alternatively, you can create a full inventory from scratch from your CSV file and use it directly, or use/develop an inventory plugin that reads your CSV file directly (like in this example).
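As an illustration, assuming each line of deviceList.txt is semicolon-separated with the device name in the first field (an assumption, since the file's format is not shown; switch01/switch02 are made-up names and User1 is the username from the question), the equivalent static inventory could look like:

[my_custom_group]
switch01
switch02

[my_custom_group:vars]
ansible_connection=network_cli
ansible_network_os=ios
ansible_user=User1
# keep the password out of the inventory, e.g. in an Ansible Vault file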
I want to write the hostname and ssh_host of every host under a specific group in an inventory file to a file on my localhost.
I have tried this:
my_role
- set_fact:
    hosts: "{{ hosts_list | default({}) | combine({inventory_hostname: ansible_ssh_host}) }}"
  when: "'my_group' in group_names"
Playbook
- hosts: all
  become: no
  gather_facts: no
  roles:
    - my_role
But now, if I try to write this to a file in another task, it will create a file for every host in the inventory file.
Can you please suggest a better way to create a file containing a dictionary with inventory_hostname and ansible_ssh_host as key and value respectively, by running the playbook on localhost?
An option would be to use lineinfile and delegate_to localhost.
- hosts: test_01
  tasks:
    - set_fact:
        my_inventory_hostname: "{{ ansible_host }}"

    - tempfile:
      delegate_to: localhost
      register: result

    - lineinfile:
        path: "{{ result.path }}"
        line: "inventory_hostname: {{ my_inventory_hostname }}"
      delegate_to: localhost

    - debug:
        msg: "inventory_hostname: {{ ansible_host }}
              stored in {{ result.path }}"
Note: There is no need to quote the conditions. Conditions are expanded by default.
when: my_group in group_names
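One caveat: if the group name is a literal string rather than a variable, the string keeps its own quotes, and then the whole expression still has to be quoted so YAML does not trip over the leading quote character, e.g.:

when: "'my_group' in group_names"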
Thanks a lot for your answer, @Vladimir Botka.
I came up with another requirement: to write the hosts belonging to a certain group to a file. The code below can help with that (adding to @Vladimir's solution).
- hosts: all
  become: no
  gather_facts: no
  tasks:
    - set_fact:
        my_inventory_hostname: "{{ ansible_host }}"

    - lineinfile:
        path: temp/hosts
        line: "inventory_hostname: {{ my_inventory_hostname }}"
      when: inventory_hostname in groups['my_group']
      delegate_to: localhost

    - debug:
        msg: "inventory_hostname: {{ ansible_host }}"
I am trying to create switch backups, and I want to create dynamic files based on the config output and the switch hostname.
As an example, the configuration of switch1 should be saved in a file named hostname1, the config of switch2 in a file named hostname2, and so on.
I am getting the hostnames of the switches from a file.
My problem is that the config of switch1 gets saved in the files hostname1, hostname2, etc.
How can I loop the variables correctly to get the right config in the right file?
My current playbook looks like this:
---
- hosts: cisco
  connection: local
  gather_facts: false
  vars:
    backup_path: /etc/ansible/tests
    cli:
      host: "{{ inventory_hostname }}"
      username: test
      password: test
  tasks:
    - name: show run on switches
      ios_command:
        commands: show running-config
        provider: "{{ cli }}"
      register: config

    - name: creating folder
      file:
        path: "{{ backup_path }}"
        state: directory
      run_once: yes

    - name: get hostnames
      become: yes
      shell: cat /etc/ansible/tests/hostname_ios.txt
      register: hostnames

    - name: copy config
      copy:
        content: "{{ config.stdout[0] }}"
        dest: "{{ backup_path }}/{{ item }}.txt"
      with_together: "{{ hostnames.stdout_lines }}"
...
Rely on Ansible's native host loop instead of inventing your own.
It's as simple as:
- name: show run on switches
  ios_command:
    commands: show running-config
    provider: "{{ cli }}"
  register: config

- name: copy config
  copy:
    content: "{{ config.stdout[0] }}"
    dest: "{{ backup_path }}/{{ inventory_hostname }}.txt"
Finally got it running.
Defined the names in my inventory as follows:
test-switch1 ansible_host=ip
Changed the host var in my playbook:
vars:
  backup_path: /etc/ansible/tests
  cli:
    host: "{{ ansible_host }}"
    username: test
    password: test
And then executing my tasks:
- name: show run on switches
  ios_command:
    commands: show running-config
    provider: "{{ cli }}"
  register: config

- name: copy config
  copy:
    content: "{{ config.stdout[0] }}"
    dest: "{{ backup_path }}/{{ inventory_hostname }}.txt"
I wrote an Ansible playbook that creates multiple VMs. The playbook is split into two files, Main.yaml and vars.yaml. It is creating the VMs and seems to work nicely. I'm not getting any errors, so I assume it successfully added the created hosts to the inventory. I want to check whether the created hosts were added to the inventory. How can I print/list the hosts of the inventory? My goal is to run scripts on the created VMs at a later stage. Thanks.
**Main.yaml**
#########CREATING VM#########
---
- hosts: localhost
  vars:
    http_port: 80
    max_clients: 200
  vars_files:
    - vars.yaml
  tasks:
    - name: create VM
      os_server:
        name: "{{ item.name }}"
        state: present
        image: "{{ item.image }}"
        boot_from_volume: True
        security_groups: ssh
        flavor: "{{ item.flavor }}"
        key_name: mykey
        region_name: "{{ lookup('env', 'OS_REGION_NAME') }}"
        nics:
          - net-name: private
        wait: yes
      register: instances
      with_items: "{{ instance_definitions }}"

    ############################################
    - name: wait 15 seconds
      pause: seconds=15
      when: instances.changed

    ######DEBUG#################################
    - name: display results
      debug:
        msg: "{{ item }}"
      with_items: "{{ instances.results }}"

    ############################################
    - name: Add new VM to ansible Inventory
      add_host:
        name: "{{ item.server.name }}"
        ansible_host: "{{ item.server.public_v4 }}"
        ansible_user: "{{ ansible_user }}"
        ansible_ssh_common_args: -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
        groups: just_created
      with_items: "{{ instances.results }}"
**vars.yaml**
---
instance_definitions:
  - { name: Debian Jessie, image: Debian Jessie 8, flavor: c1.small, loginame: debian }
  - { name: Debian Stretch, image: Debian Stretch 9, flavor: c1.small, loginame: debian }
This is what magic variables are for.
All your hosts will be in a list:
groups['just_created']
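For example, a quick sanity check from any later play or task; a minimal sketch:

- name: show the hosts that were added at runtime
  debug:
    var: groups['just_created']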
The example below illustrates how to create in-memory hosts and list them. The magic sauce is that add_host needs to be in a separate play.
---
- name: adding host playbook
  hosts: localhost
  connection: local
  tasks:
    - name: add host to in-memory inventory
      add_host:
        name: awesome_host_name
        groups: in_memory

- name: checking hosts
  hosts: in_memory
  connection: local
  gather_facts: false
  tasks:
    - debug: var=group_names
    - debug: msg="{{ inventory_hostname }}"
    - debug: var=hostvars[inventory_hostname]
You can print the content of the inventory file from the playbook using:
- debug: msg="the hosts are {{ lookup('file', '/etc/ansible/hosts') }}"
Alternatively, you can list the hosts from the command line using:
ansible --list-hosts all
or use this command from the playbook:
tasks:
  - name: list hosts
    command: ansible --list-hosts all
    register: hosts

  - debug:
      msg: "{{ hosts.stdout_lines }}"
I have two playbooks. My first playbook iterates over the list of ESXi servers getting the list of all VMs, and then passes that list to the second playbook, which should iterate over the IPs of the VMs. Instead it is still trying to execute on the last ESXi server. I have to switch the host to the VM IP that I'm currently passing to the second playbook. I don't know how to switch... Anybody?
First playbook:
- name: get VM list from ESXi
  hosts: all
  tasks:
    - name: get facts
      vmware_vm_facts:
        hostname: "{{ inventory_hostname }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_pass }}"
      delegate_to: localhost
      register: esx_facts

    - name: Debugging data
      debug:
        msg: "IP of {{ item.key }} is {{ item.value.ip_address }} and is {{ item.value.power_state }}"
      with_dict: "{{ esx_facts.virtual_machines }}"

    - name: Passing data to include file
      include: includeFile.yml ip_address="{{ item.value.ip_address }}"
      with_dict: "{{ esx_facts.virtual_machines }}"
My second playbook:
- name: <<Check the IP received
  debug:
    msg: "Received IP: {{ ip_address }}"

- name: <<Get custom facts
  vmware_vm_facts:
    hostname: "{{ ip_address }}"
    username: root
    password: passw
    validate_certs: False
  delegate_to: localhost
  register: custom_facts
I do receive the correct VM's ip_address but vmware_vm_facts is still trying to run on the ESXi server instead...
If you can't afford to set up a dynamic inventory for the VMs, your playbook should have two plays: one for collecting the VMs' IPs and another for the tasks on those VMs, like this (pseudocode):
---
- hosts: hypervisors
  gather_facts: no
  connection: local
  tasks:
    - vmware_vm_facts:
        ... params_here ...
      register: vmfacts

    - add_host:
        name: "{{ item.key }}"
        ansible_host: "{{ item.value.ip_address }}"
        group: myvms
      with_dict: "{{ vmfacts.virtual_machines }}"

- hosts: myvms
  tasks:
    - apt:
        name: "*"
        state: latest
Within the first play we collect facts from each hypervisor and populate the myvms group of the in-memory inventory. Within the second play we run the apt module for every host in the myvms group.
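The "... params_here ..." placeholder could be filled with the same connection parameters the question's first playbook already used; a minimal sketch, treating the credential variables as placeholders:

    - vmware_vm_facts:
        # same parameters as in the question's "get facts" task
        hostname: "{{ inventory_hostname }}"
        username: "{{ ansible_ssh_user }}"
        password: "{{ ansible_ssh_pass }}"
        validate_certs: False
      register: vmfacts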