Ansible: read variables from bash

I created two playbooks:
deploy a VM (CentOS 8) onto ESX
join the VM to AD
Inside them are plenty of variables that are useful only for one specific VM (besides the info about ESX).
---
- name: create vm from template on ESX
  hosts: localhost
  gather_facts: no
  become: yes
  tasks:
    - name: clone the template
      vmware_guest:
        hostname: "IP.."
        username: "user"
        password: "password"
        validate_certs: false
        name: ll-ansible
        template: ll
        datacenter: "LAB KE"
        folder: "OIR"
        resource_pool: "pool"
        cluster: "PROD cluster"
        networks:
          - name: IS-VLAN1102
            device_type: vmxnet3
            vlan: IS-VLAN1102
            ip: ip.ip.ip.ip
            netmask: 255.255.255.0
            gateway: gw.gw.gw.gw
        customization:
          hostname: ll-ansible
          timezone: timezone
          domain: domain
          dns_servers:
            - ip.ip.ip.ip
            - ip.ip.ip.ip
        disk:
          - size: 60gb
            type: default
            datastore: Dell-OIR
          - size: 10gb
            type: default
            datastore: Dell-OIR
        hardware:
          memory_mb: 4096
          num_cpus: 4
          num_cpu_cores_per_socket: 2
          boot_firmware: efi
        state: poweredon
        wait_for_ip_address: yes
      register: vm
    - debug:
        msg: "{{ vm }}"
My question is:
Is there a way to make a script that reads all the variables needed to deploy the VM from the command line, fills in the fields in the playbook, and then runs the playbook?
Or is it possible to do this with Ansible's own capabilities alone?

There are many ways to pass variables to a playbook.
Let's start with the three most popular options:
env vars
ansible-playbook CLI arguments
var files
See also all 22 options: Understanding variable precedence
Option 1: use lookup('env') in the playbook
Just use "{{ lookup('env', 'ENV_VAR_NAME')}}"
E.g., for your case:
---
- name: create vm from template on ESX
  hosts: localhost
  gather_facts: no
  become: yes
  tasks:
    - name: clone the template
      vmware_guest:
        hostname: "{{ lookup('env', 'IP_ADDRESS') }}"
Option 2: pass vars through CLI arguments
You may pass a variable's value (for example an environment variable) using the --extra-vars CLI argument.
For your case:
playbook.yml
...
    - name: clone the template
      vmware_guest:
        hostname: "{{ ip_address }}"
        user: "{{ user }}"
usage:
ansible-playbook playbook.yml --extra-vars "user=$USER ip_address=10.10.10.10"
(key=value pairs are space-separated; the shell expands $USER before Ansible runs)
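Note that key=value extra vars always arrive as strings; if you need typed values (numbers, lists), JSON can be passed instead. A small illustrative example, not tied to the playbook above:
ansible-playbook playbook.yml --extra-vars '{"num_cpus": 4, "dns_servers": ["8.8.8.8", "8.8.4.4"]}'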
Option 3: use vars_files in the playbook
For your case:
playbook.yml:
---
- name: create vm from template on ESX
  hosts: localhost
  gather_facts: no
  become: yes
  vars_files:
    - "vars/vars.yml"
  tasks:
    - name: clone the template
      vmware_guest:
        hostname: "{{ hostname }}"
vars/vars.yml:
---
hostname: "host.example.com"
Let's combine them all:
Suppose you have two different environments:
staging for the staging environment
prod for the production environment
Then we create two different vars files:
vars/staging.yml
---
hostname: "staging.domain.com"
vars/prod.yml
---
hostname: "prod.domain.com"
playbook.yml:
---
- name: create vm from template on ESX
  hosts: localhost
  gather_facts: no
  become: yes
  vars_files:
    - "vars/{{ env }}.yml"
  tasks:
    - name: clone the template
      vmware_guest:
        hostname: "{{ hostname }}"
Usage
run playbook with staging vars:
ansible-playbook playbook.yml --extra-vars "env=staging"
run playbook with prod vars:
ansible-playbook playbook.yml --extra-vars "env=prod"

Related

Ansible AWX workflow

I am new to Ansible and AWX. I am making good progress with my project but am getting stuck on one part and hoping you can help.
I have a Workflow Template as follows
Deploy VM
Get IP of the VM Deployed & create a temp host configuration
Change the deployed machine hostname
Where I am getting stuck: once I add the host, the host group is missing when the next template kicks off. I assume this is because each template runs in its own scope. How do I move this information around? I need this because later on I want to get into more complicated flows.
What I have so far:
1.
- name: Deploy VM from Template
  hosts: xenservers
  tasks:
    - name: Deploy VM
      shell: xe vm-install new-name-label="{{ vm_name }}" template="{{ vm_template }}"
    - name: Start VM
      shell: xe vm-start vm="{{ vm_name }}"
---
- name: Get VM IP
  hosts: xenservers
  remote_user: root
  tasks:
    - name: Get IP
      shell: xe vm-list name-label="{{ vm_name }}" params=networks | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -1
      register: vm_ip
      until: vm_ip.stdout != ""
      retries: 15
      delay: 5
Here I was adding the host to a group; this works locally within the template, but when the workflow moves on it fails. I tried this both as a task and as a play.
- name: Add host to group
  add_host:
    name: "{{ vm_ip.stdout }}"
    groups: deploy_vm

- hosts: deploy_vm

- name: Set hosts
  hosts: deployed_vm
  tasks:
    - name: Set a hostname
      hostname:
        name: "{{ vm_name }}"
      when: ansible_distribution == 'Rocky Linux'
Thanks
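For reference: add_host only changes the in-memory inventory of the current playbook run, so a group added in one workflow job template is not visible to the next template. Within a single playbook a follow-up play can target the new group directly, and for handing data between separate AWX workflow templates, set_stats artifacts are the usual mechanism. A minimal sketch along those lines, reusing names from the question:
- name: Get VM IP and register it
  hosts: xenservers
  tasks:
    - name: Get IP
      shell: xe vm-list name-label="{{ vm_name }}" params=networks | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -1
      register: vm_ip

    - name: Add host to in-memory group (only lives for this playbook run)
      add_host:
        name: "{{ vm_ip.stdout }}"
        groups: deploy_vm

    - name: Publish the IP as an AWX artifact for later workflow nodes
      set_stats:
        data:
          deployed_vm_ip: "{{ vm_ip.stdout }}"   # shows up as an extra var in subsequent templates

- name: Set the hostname in the same playbook run
  hosts: deploy_vm
  tasks:
    - name: Set a hostname
      hostname:
        name: "{{ vm_name }}"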

Using Ansible to manage ESXi inventory

I am trying to use Ansible to modify the DNS settings on a group of ESXi servers. I've been able to get my playbook to change the settings on a single server like this:
---
- hosts: localhost
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: 'myesxiserver.domain.local'
        username: 'username'
        password: 'password'
        dns_servers:
          - x.x.x.x
          - x.x.x.x
      delegate_to: localhost
How can I get this to work for multiple servers? The Ansible documentation provides this example:
---
- hosts: localhost
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: '{{ esxi_hostname }}'
        username: '{{ esxi_username }}'
        password: '{{ esxi_password }}'
        change_hostname_to: esx01
        domainname: foo.org
        dns_servers:
          - 8.8.8.8
          - 8.8.4.4
      delegate_to: localhost
I'm not clear on how to iterate through a list of hosts and pass the correct values into the variable '{{ esxi_hostname }}' for each of my servers. I'm assuming that the variables can be passed using an inventory file, but I haven't found any good examples of how to do this for ESXi servers.
So I did get this working.
---
- hosts: localhost
  vars_files:
    - vars.yml
    - vars2.yml
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: "{{ item }}"
        username: 'myadmin'
        password: "{{ Password }}"
        validate_certs: no
        change_hostname_to: "{{ item }}"
        domainname: foo.org
        dns_servers:
          - x.x.x.x
          - x.x.x.x
      delegate_to: localhost
      loop: "{{ esxihost }}"
I had to pass a list of host names using vars_files and iterate through it using the loop keyword. I tried to use the {{ inventory_hostname }} variable along with a standard inventory file, but because SSH is not generally enabled by default on ESXi servers, I would get an SSH connection error.
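For completeness, the esxihost list the loop iterates over would live in one of those vars files, something like this (the host names are placeholders):
vars.yml:
---
esxihost:
  - esx01.domain.local
  - esx02.domain.local
  - esx03.domain.local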

Ansible | Deploy VM then run additional playbooks against new host?

I am fairly new to Ansible. I have created an Ansible role that contains the following tasks that will deploy a VM from a template, then configure the VM with custom OS settings:
create-vm.yml
configure-vm.yml
I can successfully deploy a VM from a template via the "create-vm" task. But after that is complete, I would like to continue with the "configure-vm" task. Since the playbooks/role-vm-deploy.yml file contains "localhost" as shown here...
- hosts: localhost
  roles:
    - vm-deploy
  gather_facts: no
  connection: local
... the next task doesn't run successfully because it is attempting to run the task against "localhost" and not the new VM hostname. I have since added the following to the end of the "create-vm" task...
- name: Add host to group 'just_created'
  add_host:
    name: '{{ hostname }}.{{ domain }}'
    groups: just_created
...but I'm not quite sure what to do with it. I can't quite wrap my head around what else I need to do and how to call the new hostname in the "configure-vm" task instead of localhost.
I am executing the playbook via CLI
# ansible-playbook playbooks/role-vm-deploy.yml
I saw this post, which was kind of helpful
I also saw the dynamic inventory documentation, but it's a bit over my head at this juncture. Any help would be appreciated. Thank you!
Here are the contents for the playbooks and tasks
### playbooks -> role-vm-deploy.yml
- hosts: localhost
  roles:
    - vm-deploy
  gather_facts: no
  connection: local

### roles -> vm-deploy -> tasks -> main.yml
- name: Deploy VM
  include: create-vm.yml
  tags:
    - create-vm

- name: Configure VM
  include: configure-vm.yml
  tags:
    - configure-vm

### roles -> vm-deploy -> tasks -> create-vm.yml
- name: Clone the template
  vmware_guest:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_username }}'
    password: '{{ vcenter_pwd }}'
    validate_certs: False
    name: '{{ hostname }}'
    template: '{{ template_name }}'
    datacenter: '{{ datacenter }}'
    folder: '/'
    hardware:
      memory_mb: '{{ memory }}'
      num_cpus: '{{ num_cpu }}'
    networks:
      - label: "Network adapter 1"
        state: present
        connected: True
        name: '{{ vlan }}'
    state: poweredon
    wait_for_ip_address: yes

### roles -> vm-deploy -> tasks -> configure-vm.yml
### This task is what I need to execute on the new hostname, but it attempts to execute on "localhost" ###
# Configure Networking
- name: Configure IP Address
  lineinfile:
    path: '{{ network_conf_file }}'
    regexp: '^IPADDR='
    line: 'IPADDR={{ ip_address }}'

- name: Configure Gateway Address
  lineinfile:
    path: '{{ network_conf_file }}'
    regexp: '^GATEWAY='
    line: 'GATEWAY={{ gw_address }}'
### roles -> vm-deploy -> defaults -> main.yml
- All of the variables reside here including "{{ hostname }}.{{ domain }}"
You got so close! The trick is to observe that a playbook is actually a list of plays (the yaml objects that are {"hosts": "...", "tasks": []}), and the targets of subsequent plays don't have to exist when the playbook starts -- presumably for this very reason. Thus:
- hosts: localhost
  roles:
    - vm-deploy
  gather_facts: no
  connection: local   # or wherever you were executing this -- it wasn't obvious from your question
  post_tasks:
    - name: Add host to group 'just_created'
      add_host:
        name: '{{ hostname }}.{{ domain }}'
        groups: just_created

- hosts: just_created
  tasks:
    - debug:
        msg: hello from the newly created {{ inventory_hostname }}
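From there, the tasks from configure-vm.yml can simply be run in that second play instead of the debug, for example (a sketch that reuses the variables from the question's role):
- hosts: just_created
  tasks:
    - name: Configure IP Address
      lineinfile:
        path: '{{ network_conf_file }}'
        regexp: '^IPADDR='
        line: 'IPADDR={{ ip_address }}'
    - name: Configure Gateway Address
      lineinfile:
        path: '{{ network_conf_file }}'
        regexp: '^GATEWAY='
        line: 'GATEWAY={{ gw_address }}'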

Ansible playbook for Cisco Nexus fails with nxapi in provider

I have not been able to get this playbook working. I am trying to use the 'nxapi' provider method. I have nxapi setup on the target devices. I have a group_vars file with the following settings:
ansible_connection: local
ansible_network_os: nxos
ansible_user: user
ansible_password: password
The playbook code is:
- name: nxos_facts module
  hosts: "{{ myhosts }}"
  vars:
    ssh:
      host: "{{ myhosts }}"
      username: "{{ ansible_user }}"
      password: "{{ ansible_password }}"
      transport: cli
    nxapi:
      host: "{{ myhosts }}"
      username: "{{ ansible_user }}"
      password: "{{ ansible_password }}"
      transport: nxapi
      use_ssl: yes
      validate_certs: no
      port: 8443
  tasks:
    - name: nxos_facts nxapi
      nxos_facts:
        provider: "{{ nxapi }}"
When I debug the failed playbook, I see this line:
ESTABLISH LOCAL CONNECTION FOR USER: < MY USER NAME >
It seems like the Playbook is not using the variables in the group_vars file , which specify a specific user to connect to the nxapi service on the switch.
Make sure your group_vars file name matches a child group in your inventory (host) file. (I can't tell from your question.)
host file:
all:
  hosts:
    SWITCH1:
      ansible_host: 192.168.1.1
  children:
    group1:
      hosts:
        SWITCH1:
You can then put your group variables in one of two files: all.yml or group1.yml.
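For example, the connection settings from the question would then live in group_vars/group1.yml (values copied from the question; adjust user and password):
group_vars/group1.yml:
---
ansible_connection: local
ansible_network_os: nxos
ansible_user: user
ansible_password: password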

How to deploy a custom VM with ansible and run subsequent steps on the guest VM via the host?

I have a playbook that I run to deploy a guest VM onto my target node.
After the guest VM is fired up, it is not available to the whole network, but to the host machine only.
Also, after booting up the guest VM, I need to run some commands on that guest to configure it and make it available to all the network members.
---
- block:
    - name: Verify the deploy VM script
      stat: path="{{ deploy_script }}"
      register: deploy_exists
      failed_when: deploy_exists.stat.exists == False
      no_log: True
  rescue:
    - name: Copy the deploy script from Ansible
      copy:
        src: "scripts/new-install.pl"
        dest: "/home/orch"
        owner: "{{ my_user }}"
        group: "{{ my_user }}"
        mode: 0750
        backup: yes
      register: copy_script
    - name: Deploy VM
      shell: run my VM deploy script

<other tasks>

- name: Run something on the guest VM
  shell: my_other_script
  args:
    chdir: /var/scripts/

- name: Other task on guest VM
  shell: uname -r

<and so on>
How can I run those subsequent steps on the guest VM via the host?
My only workaround is to populate a new inventory file with the VM's details and use the host machine as a bastion host.
[myvm]
myvm-01 ansible_connection=ssh ansible_ssh_user=my_user ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p someuser@host_machine"'
However, I want everything to happen in a single playbook rather than splitting them.
I have resolved it myself.
I managed to dynamically add the host to the inventory and used group vars for the newly created hosts so that they use the VM manager as a bastion host.
Playbook:
---
- hosts: "{{ vm_manager }}"
  become_method: sudo
  gather_facts: False
  vars_files:
    - vars/vars.yml
    - vars/vault.yml
  pre_tasks:
    - name: do stuff here on the VM manager
      debug: msg="test"
  roles:
    - { role: vm_deploy, become: yes, become_user: root }
  tasks:
    - name: Dynamically add newly created VM to the inventory
      add_host:
        hostname: "{{ vm_name }}"
        groups: vms
        ansible_ssh_user: "{{ vm_user }}"
        ansible_ssh_pass: "{{ vm_pass }}"

- name: Run the rest of tasks on the VM through the host machine
  hosts: "{{ vm_name }}"
  become: true
  become_user: root
  become_method: sudo
  post_tasks:
    - name: My first task on the VM
      static: no
      include_role:
        name: my_role_for_the_VM
Inventory:
[vm_manager]
vm-manager.local
[vms]
my-test-01
my-test-02
[vms:vars]
ansible_connection=ssh
ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p username@vm-manager.local"'
Run playbook:
ansible-playbook -i hosts -vv playbook.yml -e vm_name=some-test-vm-name
