I am trying to configure a VyOS VM that I've built from a template. The template is a fresh install without any configuration.
The VM doesn't have an IP address configured, so I can't use the SSH options or the vyos Ansible module. So I'm trying to use the vmware_vm_shell module, which lets me execute commands, but I can't enter configuration mode for VyOS.
I've tried both bash and vbash as the shell. I've tried setting the configuration commands as environment variables to execute, and I've tried with_items, but it doesn't seem that will work with vmware_vm_shell.
The bare minimum I need is to configure an IP address so that I can then SSH in or use the vyos Ansible module to complete the configuration.
conf
set interfaces ethernet eth0 address 192.168.1.251/24
set service ssh port 22
commit
save
---
- hosts: localhost
  gather_facts: no
  connection: local
  vars:
    vcenter_hostname: "192.168.1.100"
    vcenter_username: "administrator@vsphere.local"
    vcenter_password: "SekretPassword!"
    datacenter: "Datacenter"
    cluster: "Cluster"
    vm_name: "router-01"
  tasks:
    - name: Run command inside a virtual machine
      vmware_vm_shell:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: "{{ datacenter }}"
        validate_certs: False
        vm_id: "{{ vm_name }}"
        vm_username: 'vyos'
        vm_password: 'abc123!!!'
        vm_shell: /bin/vbash
        vm_shell_args: 'conf 2> myFile'
        vm_shell_cwd: "/tmp"
      delegate_to: localhost
      register: shell_command_output
This throws the error:
/bin/vbash: conf: No such file or directory
I have an EdgeRouter, which uses a Vyatta-derived operating system. The issue you're having is caused by the fact that conf (or configure, for me) isn't actually the name of a command. The CLI features are implemented through a complex collection of bash functions that aren't loaded when you log in non-interactively.
There is a wiki page on vyos.net that suggests a solution: by sourcing /opt/vyatta/etc/functions/script-template, you prepare the shell environment so that VyOS commands work as expected.
That is, you need to execute a shell script (with vbash) that looks like this:
source /opt/vyatta/etc/functions/script-template
conf
set interfaces ethernet eth0 address 192.168.1.251/24
set service ssh port 22
commit
save
exit
I'm not familiar with the vmware_vm_shell module, so I don't know exactly how you would do that, but for example this works for me to run a single command:
ssh ubnt@router 'vbash -c "source /opt/vyatta/etc/functions/script-template
configure
show interfaces
"'
Note the newlines in the above. That suggests that this might work:
- name: Run command inside a virtual machine
  vmware_vm_shell:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_username }}"
    password: "{{ vcenter_password }}"
    datacenter: "{{ datacenter }}"
    validate_certs: False
    vm_id: "{{ vm_name }}"
    vm_username: 'vyos'
    vm_password: 'abc123!!!'
    vm_shell: /bin/vbash
    vm_shell_cwd: "/tmp"
    vm_shell_args: |-
      -c "source /opt/vyatta/etc/functions/script-template
      configure
      set interfaces ethernet eth0 address 192.168.1.251/24
      set service ssh port 22
      commit
      save"
  delegate_to: localhost
  register: shell_command_output
Related
I am trying to use Ansible to modify the DNS settings on a group of ESXi servers. I've been able to get my playbook to change the settings on a single server like this:
---
- hosts: localhost
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: 'myesxiserver.domain.local'
        username: 'username'
        password: 'password'
        dns_servers:
          - x.x.x.x
          - x.x.x.x
      delegate_to: localhost
How can I get this to work for multiple servers? The Ansible documentation provides this example:
---
- hosts: localhost
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: '{{ esxi_hostname }}'
        username: '{{ esxi_username }}'
        password: '{{ esxi_password }}'
        change_hostname_to: esx01
        domainname: foo.org
        dns_servers:
          - 8.8.8.8
          - 8.8.4.4
      delegate_to: localhost
I'm not clear on how to iterate through a list of hosts and pass the correct value into the variable '{{ esxi_hostname }}' for each of my servers. I'm assuming that the variables can be passed using an inventory file, but I haven't found any good examples of how to do this for ESXi servers.
So I did get this working.
---
- hosts: localhost
  vars_files:
    - vars.yml
    - vars2.yml
  tasks:
    - name: Configure ESXi hostname and DNS servers
      vmware_dns_config:
        hostname: "{{ item }}"
        username: 'myadmin'
        password: "{{ Password }}"
        validate_certs: no
        change_hostname_to: "{{ item }}"
        domainname: foo.org
        dns_servers:
          - x.x.x.x
          - x.x.x.x
      delegate_to: localhost
      loop: "{{ esxihost }}"
I had to pass a list of host names using vars_files and iterate through it using the loop keyword; a sketch of the vars file is shown below. I tried to use the {{ inventory_hostname }} variable along with a standard inventory file, but because SSH is generally not enabled by default on ESXi servers, I would get an SSH connection error.
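A minimal sketch of what that vars file could look like (the esxihost variable name matches the loop above; the hostnames are placeholder assumptions):

# vars.yml (hypothetical contents)
esxihost:
  - esxi01.domain.local
  - esxi02.domain.local
  - esxi03.domain.local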
I'm running an ansible-playbook that provisions an EC2 instance and configures the machine. I set the connection to local for the playbook because there is no machine to manage before the script runs. Once the instance is provisioned, I'm supposed to create a directory on the remote server. Since the playbook runs with a local connection, I set delegate_to: {{ remote_host }} so the directory creation executes on the remote host, but it still creates the directory on the control machine.
- name: provision instance for Apache
  hosts: localhost
  connection: local
  remote_user: ubuntu
  gather_facts: false
  vars_files:
    - vars/env.yml
  vars:
    allow_world_readable_tmpfiles: true
    key_name: ansible-test
    region: us-east-2
    image: ami-0e82959d4ed12de3f # Ubuntu 18.04
    id: "practice-akash-ajay"
    sec_group: "{{ id }}-sec"
    remote_host: ansible-test
    remaining_days: 20
    acme_directory: https://acme-staging-v02.api.letsencrypt.org/directory
    # acme_directory: https://acme-v02.api.letsencrypt.org/directory
    cert_name: "{{ app_slug }}.{{ app_domain }}"
    intermediate_path: /etc/pki/letsencrypt/intermediate.pem
    cert:
      common_name: "{{ app_slug }}.{{ app_domain }}"
      organization_name: PearlThoughts
      email_address: "{{ letsencrypt_email }}"
      subject_alt_name:
        - "DNS:{{ app_slug }}.{{ app_domain }}"
  roles:
    - aws
  tasks:
    - name: Create certificate storage directory
      file:
        dest: "{{ item.path }}"
        mode: 0750
        state: directory
      delegate_to: "{{ remote_host }}"
      with_items:
        - path: ~/lets-seng-test
When you set connection explicitly on the play, it will be used for all tasks in that play. So don't do that. Ansible will by default use a local connection for localhost unless you have explicitly changed that in your inventory (and again, don't do that).
If you remove the connection setting on your play, delegate_to might work the way you expect...but I don't think you want to do that.
If you have your playbook provisioning a new host for you, the way to target that host with Ansible is to have a new play with that host (or its corresponding group) listed in the target hosts: for the play. Conceptually, you want:
- hosts: localhost
  tasks:
    - name: provision an AWS instance
      aws_ec2: [...]
      register: hostinfo

    - add_host:
        name: myhost
        ansible_host: "{{ hostinfo... }}"

- hosts: myhost
  tasks:
    - name: do something on the new host
      command: uptime
You probably need some logic in between provisioning the host and executing tasks against it to ensure that it is up and ready to service requests.
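For example, a minimal sketch of such a check, using the wait_for_connection module as the first task of the second play (continuing the conceptual example above):

- hosts: myhost
  gather_facts: no
  tasks:
    - name: Wait until the new host accepts connections
      wait_for_connection:
        delay: 10      # give the instance a moment to boot
        timeout: 300   # give up after five minutes

    - name: Now it is safe to run real tasks
      command: uptime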
Instead of using the add_host module, a better solution is often to rely on the appropriate inventory plugin.
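For AWS that would be the aws_ec2 inventory plugin. A minimal sketch of an inventory file (the region and the tag filter are placeholder assumptions):

# inventory/myhosts.aws_ec2.yml (the filename must end in aws_ec2.yml)
plugin: aws_ec2
regions:
  - us-east-2
filters:
  tag:Name: practice-akash-ajay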
My script doesn't change the portgroup of a VM network adapter. What am I doing wrong?
Let's assume that I want to change the current portgroup named "A" to a different portgroup named "B".
---
- hosts: localhost
  gather_facts: no
  vars:
    vm_name: VM
  tasks:
    - name: Changing Portgroup for Network adapter 1
      vmware_guest_network:
        hostname: "{{ vc_host }}"
        username: "{{ vc_user }}"
        password: "{{ vc_pass }}"
        validate_certs: no
        name: "{{ vm_name }}"
        gather_network_info: false
        networks:
          - label: "Network adapter 1"
            name: "B"
            state: present
      delegate_to: localhost
      register: network_info
I'm getting output that something changed, but in VM Settings nothing changed.
TASK [Changing Portgroup for Network Adapter 1]
******************************************************************************
changed: [localhost -> localhost]
I found that removing and adding a Network adapter changes the portgroup, but when I do that I cannot add a Network adapter of type Flexible, which is what I had in the first place.
Edit 1: After updating Ansible to 2.9.12, I get an OK output when running the script, so it really isn't changing anything.
TASK [Changing Portgroup for Network Adapter 1] ******************************************************************************
ok: [localhost]
Edit 2: After a few days of searching I found that it isn't possible to just change the portgroup with Ansible, so I used PowerCLI to help me with the task.
---
- hosts: localhost
  gather_facts: no
  vars:
    vm_name: "VM"
  tasks:
    - name: "Changing the portgroup for {{ vm_name }}"
      win_command: 'powershell.exe -ExecutionPolicy ByPass -File C:\Scripts\change_portgroup.ps1 {{ vm_name }}'
      delegate_to: WIN_SRV
With the PowerShell script looking like this:
$OldNetwork = "PG old"
$NewNetwork = "PG new"
Get-VM -Name $args[0] | Get-NetworkAdapter | Where {$_.NetworkName -eq $OldNetwork} | Set-NetworkAdapter -NetworkName $NewNetwork -Confirm:$false
Edit 3:
I got it working with the VMware community module. (Thanks @sky-jokerxx)
First I installed it with command:
ansible-galaxy collection install community.vmware
Then used the module like this:
- name: Change network
  community.vmware.vmware_guest_network:
    validate_certs: no
    hostname: '{{ vc_host }}'
    username: '{{ vc_user }}'
    password: '{{ vc_pass }}'
    name: '{{ vm_name }}'
    label: "Network adapter 1"
    network_name: "B"
    state: present
  delegate_to: localhost
The vmware_guest_network module has a lot of issues:
https://github.com/ansible-collections/community.vmware/issues/378
The following pull request fixes some of those issues:
https://github.com/ansible-collections/community.vmware/pull/401
With the following procedure, you can use that version of the module:
# ansible-galaxy collection install community.vmware -p collections
# mkdir library
# cd library/
# curl -L https://raw.githubusercontent.com/ansible-collections/community.vmware/7ac9ebb9bf5df0f1ead3ef1a3ed35f2d4ad45622/plugins/modules/vmware_guest_network.py -O
# cd ..
Modules placed in a library/ directory next to the playbook take precedence over installed ones, as described here:
https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html
Maybe it's fixed there, so how about trying it?
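Once the fixed module is in ./library/, a minimal sketch of how it could be invoked (same parameters as the Edit 3 example above; Ansible resolves the copy in ./library/ before the installed collection):

- hosts: localhost
  gather_facts: no
  tasks:
    - name: Change network using the module from ./library/
      vmware_guest_network:
        validate_certs: no
        hostname: '{{ vc_host }}'
        username: '{{ vc_user }}'
        password: '{{ vc_pass }}'
        name: '{{ vm_name }}'
        label: "Network adapter 1"
        network_name: "B"
        state: present
      delegate_to: localhost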
I'm playing around with the Ansible VMware modules and tried to get all the information about the ESXi hosts from a vCenter.
With the module vmware_host_facts it should be possible.
But when I run a playbook with the following configuration, I only get the information of one host back, and not all. In this vCenter there are about 20 hosts.
Playbook:
- name: Gather vmware host facts
  vmware_host_facts:
    hostname: vCenter_IP
    username: username
    password: password
  register: host_facts
  delegate_to: localhost
The documentation tells me that the hostname can also be a vCenter IP.
Resource:
http://docs.ansible.com/ansible/latest/modules/vmware_host_facts_module.html#vmware-host-facts
Is that module not the correct one to gather all host information from a vCenter? Or is there a "hidden trick" that I am missing?
I got an answer on another resource:
https://github.com/ansible/ansible/issues/43187
Basically, you have to add the hostnames as a list to the task.
Example:
- name: Gather vmware host facts
  vmware_host_facts:
    hostname: "{{ item.esxi_hostname }}"
    username: "{{ item.esxi_user }}"
    password: "{{ item.esxi_pass }}"
    validate_certs: no
  register: host_facts
  delegate_to: localhost
  with_items:
    - { esxi_hostname: hostname_1, esxi_user: username_host_1, esxi_pass: pass_host_1 }
    - { esxi_hostname: hostname_2, esxi_user: username_host_2, esxi_pass: pass_host_2 }
These hostnames you can gather with another module, vmware_vm_facts, which also reports the ESXi host each VM runs on. I will update this with an example playbook in the near future.
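In the meantime, a minimal sketch of that idea (the structure of the virtual_machines result differs between Ansible versions, so treat the filter chain as an assumption):

- name: Gather VM facts from vCenter
  vmware_vm_facts:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ vcenter_user }}"
    password: "{{ vcenter_pass }}"
    validate_certs: no
  register: vm_facts
  delegate_to: localhost

- name: List the unique ESXi hostnames the VMs run on
  debug:
    msg: "{{ vm_facts.virtual_machines | map(attribute='esxi_hostname') | unique | list }}"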
I use this module with these options. So far I have one issue: if I put in the vCenter address, it only gives me the first ESXi host's output.
- name: Something.
  vmware_host_facts:
    hostname: '{{ vcenter_hostname }}'
    username: '{{ vcenter_user }}'
    password: '{{ vcenter_pass }}'
    validate_certs: no
  register: all_cluster_hosts_facts
  delegate_to: localhost

- debug: var=all_cluster_hosts_facts
I have a playbook that I run to deploy a guest VM onto my target node.
After the guest VM is fired up, it is available only to the host machine, not to the whole network.
Also, after booting up the guest VM, I need to run some commands on that guest to configure it and make it available to all the network members.
---
- block:
    - name: Verify the deploy VM script
      stat: path="{{ deploy_script }}"
      register: deploy_exists
      failed_when: deploy_exists.stat.exists == False
      no_log: True
  rescue:
    - name: Copy the deploy script from Ansible
      copy:
        src: "scripts/new-install.pl"
        dest: "/home/orch"
        owner: "{{ my_user }}"
        group: "{{ my_user }}"
        mode: 0750
        backup: yes
      register: copy_script

- name: Deploy VM
  shell: run my VM deploy script

<other tasks>

- name: Run something on the guest VM
  shell: my_other_script
  args:
    chdir: /var/scripts/

- name: Other task on guest VM
  shell: uname -r

<and so on>
How can I run those subsequent steps on the guest VM via the host?
My only workaround is to populate a new inventory file with the VM's details and use the host as a bastion host:
[myvm]
myvm-01 ansible_connection=ssh ansible_ssh_user=my_user ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p someuser@host_machine"'
However, I want everything to happen in a single playbook rather than splitting them.
I have resolved it myself.
I managed to dynamically add the host to the inventory, and used group vars for the newly created hosts so they use the VM manager as a bastion host.
Playbook:
---
- hosts: "{{ vm_manager }}"
  become_method: sudo
  gather_facts: False
  vars_files:
    - vars/vars.yml
    - vars/vault.yml
  pre_tasks:
    - name: do stuff here on the VM manager
      debug: msg="test"
  roles:
    - { role: vm_deploy, become: yes, become_user: root }
  tasks:
    - name: Dynamically add newly created VM to the inventory
      add_host:
        hostname: "{{ vm_name }}"
        groups: vms
        ansible_ssh_user: "{{ vm_user }}"
        ansible_ssh_pass: "{{ vm_pass }}"

- name: Run the rest of tasks on the VM through the host machine
  hosts: "{{ vm_name }}"
  become: true
  become_user: root
  become_method: sudo
  post_tasks:
    - name: My first task on the VM
      static: no
      include_role:
        name: my_role_for_the_VM
Inventory:
[vm_manager]
vm-manager.local
[vms]
my-test-01
my-test-02
[vms:vars]
ansible_connection=ssh
ansible_ssh_common_args='-oStrictHostKeyChecking=no -o ProxyCommand="ssh -A -W %h:%p username@vm-manager.local"'
Run playbook:
ansible-playbook -i hosts -vv playbook.yml -e vm_name=some-test-vm-name