I can create an interface with something like:
- name: create dummy interface
  community.general.nmcli:
    type: dummy
    conn_name: '{{ item.conn_name }}'
    ifname: '{{ item.ifname }}'
    ip6: '{{ item.ip6 }}'
    state: present
  with_items:
    - '{{ nmcli_dummy }}'
But if on the server I take the interface down with ifconfig dummy0 down, which parameter or option can be used to manage the interface state, for example up or down?
For a connection named, for example, eth1 the equivalents for show, up and down are
nmcli con show eth1
nmcli con up eth1
nmcli con down eth1
While the nmcli module does provide parameters to
Manage the network devices. Create, modify and manage various connection and device type e.g., ethernet, teams, bonds, vlans etc.
a parameter for bringing a network interface up or down is not explicitly named.
This might tempt one to work around it with
- name: Brings the interface up or down
  command:
    cmd: "nmcli con {{ CMD }} eth1"
  register: nmcli_con_cmd_result
as this is almost what the module code is doing under the hood.
However, according to the NetworkManager / Ansible network role it seems that the parameter state can take more values. In your case you could first check how it is implemented there in the project role and then test with state: up and state: down accordingly.
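A minimal sketch of such a test, assuming the installed version of community.general.nmcli actually accepts up and down for state (verify this against the module documentation of your version first):
- name: Bring dummy connection up
  community.general.nmcli:
    conn_name: '{{ item.conn_name }}'
    state: up    # assumed value, not documented for every module version
  with_items:
    - '{{ nmcli_dummy }}'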
Regarding
I can create an interface with ...
it seems that when a connection gets created it is also brought up, and when a connection gets removed it is brought down beforehand.
Further Documentation
Configuring IP Networking with nmcli
NetworkManager Project - Ansible Network Role
community.general/plugins/modules/nmcli.py
Related
I am new to Ansible; I am only using a central machine and a host node on Ubuntu Server, for which I have to deploy a firewall. I was able to make the SSH connections and the execution of the playbook. What I need to know is how to verify that the port I described in the playbook was blocked or opened, either on the controller machine or on the host node. Thanks
According to your question
How to verify that the port I described in the playbook was blocked or opened, either on the controller machine and on the host node?
you may be looking for an approach like
- name: "Test connection to NFS_SERVER: {{ NFS_SERVER }}"
wait_for:
host: "{{ NFS_SERVER }}"
port: "{{ item }}"
state: drained
delay: 0
timeout: 3
active_connection_states: SYN_RECV
with_items:
- 111
- 2049
and also have a look into How to use Ansible module wait_for together with loop?
Documentation
Ansible wait_for Examples
You may also be interested in Manage firewall with UFW and have a look into
Ansible ufw Examples
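As an example, a minimal sketch of opening a port with the community.general.ufw module (the port here is a placeholder; use the port described in your playbook):
- name: Allow incoming SSH on the managed node
  community.general.ufw:
    rule: allow
    port: '22'     # placeholder, the port from your playbook
    proto: tcp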
I poked around a bit here but didn't see anything that quite matched up to what I am trying to accomplish, so here goes.
So I've put together my first Ansible playbook which opens or closes one or more ports on the firewall of one or more hosts, for one or more specified IP addresses. Works great so far. But what I want to do is restart the firewall service after all the tasks for a given host are complete (with no errors, of course).
NOTE: The hostvars/localhost references just hold vars_prompt input from the user in a task list above this one. I store prompted data in hosts: localhost, build a dynamic host list based on what the user entered, and then have a separate task list to actually do the work.
So:
- name: Execute remote firewall-cmd for each host in "dynamically created host group"
  hosts: dynamically_created_host_list
  gather_facts: no
  tasks:
    - set_fact:
        hostList: "{{ hostvars['localhost']['hostList'] }}"
    - set_fact:
        portList: "{{ hostvars['localhost']['portList'] }}"
    - set_fact:
        portStateRequested: "{{ hostvars['localhost']['portStateRequested'] }}"
    - set_fact:
        portState: "{{ hostvars['localhost']['portState'] }}"
    - set_fact:
        remoteIPs: "{{ hostvars['localhost']['remoteIPs'] }}"
    - name: Invoke firewall-cmd remotely
      firewalld:
        # .. module-specific stuff here ...
      with_nested:
        - "{{ remoteIPs.split(',') }}"
        - "{{ portList.split(',') }}"
      register: requestStatus
In my original version of the script, which only did 1 port for 1 host for 1 IP, I just did:
- name: Reload firewalld
  when: requestStatus.changed
  systemd:
    name: firewalld
    state: reloaded
But I don't think that will work as easily here because of the nesting. For example, let's say I want to open port 9999 for a remote IP address of 1.1.1.1 on 10 different hosts, and the 5th host has an error for some reason. I may not want to restart the firewall service at that point.
Actually, now that I think about it, I guess that in that scenario there would be 4 new entries to the firewall config, and 6 that didn't take because of the error. Now I'm wondering if I need to track the successes and have a rescue block within the playbook to back out those entries that did go through.
Grrr.... any ideas? Sorry, new to Ansible here. Plus, I hate YAML for things like this. :D
Thanks in advance for any guidance.
It looks to me like what you are looking for is what Ansible calls handlers.
As we’ve mentioned, modules should be idempotent and can relay when
they have made a change on the remote system. Playbooks recognize this
and have a basic event system that can be used to respond to change.
These ‘notify’ actions are triggered at the end of each block of tasks
in a play, and will only be triggered once even if notified by
multiple different tasks.
For instance, multiple resources may indicate that apache needs to be
restarted because they have changed a config file, but apache will
only be bounced once to avoid unnecessary restarts.
Note that handlers are simply a pair of
A notify attribute on one or multiple tasks
A handler, with a name matching the above-mentioned notify attribute
So your playbook should look like
- name: Execute remote firewall-cmd for each host in "dynamically created host group"
  hosts: dynamically_created_host_list
  gather_facts: no
  tasks:
    # set_fact removed for concision
    - name: Invoke firewall-cmd remotely
      firewalld:
        # .. module-specific stuff here ...
      with_nested:
        - "{{ remoteIPs.split(',') }}"
        - "{{ portList.split(',') }}"
      notify: Reload firewalld
  handlers:
    - name: Reload firewalld
      systemd:
        name: firewalld
        state: reloaded
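Handlers run at the end of the play and only if at least one notifying task reported a change, so the reload happens once per host after all tasks for that host have completed; if a task fails on a host, the notified handler is skipped for that host by default. If you ever need the reload to happen earlier, before later tasks in the same play, one optional addition (a sketch, not required for your use case) is to flush the notified handlers explicitly:
- name: Run any notified handlers at this point in the play
  meta: flush_handlers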
I want to deploy an oVirt template with Ansible, but the problem is that when I want to apply the cloud-init configuration, the template reports a different network interface name every time, that is,
sometimes it is eth0, other times ens33, etc. How could I get this information so that I can apply cloud-init correctly?
Thank you.
How about the ovirt_nic_info module?
If you need to gather data, there are many ovirt *_info modules.
If you have further issues/questions you can open an issue on
https://github.com/ovirt/ovirt-ansible-collection
- ovirt_nic_info:
    auth: "{{ ovirt_auth }}"
    vm: centos8
  register: result

- debug:
    msg: "{{ result.ovirt_nics[0].reported_devices[0].name }}"
I have problems with my playbook, which should create new EC2 instances through the built-in module and connect to them to set some default stuff.
I went through a lot of tutorials/posts, but none of them mentioned the same problem, therefore I'm asking here.
Everything in terms of creating goes well, but once the instances are created and I have successfully waited for SSH to come up, I get an error which says the machine is unreachable.
UNREACHABLE! => {"changed": false, "msg": "ERROR! SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
I tried to connect manually (from a terminal to the same host) and I was successful (while the playbook was waiting for the connection). I also tried to increase the timeout generally in ansible.cfg. I verified that the given hostname is valid (it is) and also tried the public IP instead of the public DNS name, but nothing helps.
Basically my playbook looks like this:
---
- name: create ec2 instances
  hosts: local
  connection: local
  gather_facts: False
  vars:
    machines:
      - { type: "t2.micro", instance_tags: { Name: "machine1", group: "some_group" }, security_group: ["SSH"] }
  tasks:
    - name: launch new ec2 instances
      local_action: ec2
        group={{ item.security_group }}
        instance_type={{ item.type }}
        image=...
        wait=true
        region=...
        keypair=...
        count=1
        instance_tags=...
      with_items: machines
      register: ec2
    - name: wait for SSH to come up
      local_action: wait_for host={{ item.instances.0.public_dns_name }} port=22 delay=60 timeout=320 state=started
      with_items: ec2.results
    - name: add host into launched group
      add_host: name={{ item.instances.0.public_ip }} group=launched
      with_items: ec2.results

- name: with the newly provisioned EC2 node configure basic stuff
  hosts: launched
  sudo: yes
  remote_user: ubuntu
  gather_facts: True
  roles:
    - common
Note: in many tutorials the results from creating EC2 instances are accessed in a different way, but that's probably for a different question.
Thanks
Solved:
I don't know how, but it suddenly started to work. No clue. In case I find some new info, I will update this question.
A couple points that may help:
I'm guessing it's a version difference, but I've never seen a 'results' key in the registered 'ec2' variable. In any case, I usually use 'tagged_instances' -- this ensures that even if the play didn't create an instance (ie, because a matching instance already existed from a previous run-through), the variable will still return instance data you can use to add a new host to inventory.
Try adding 'search_regex: "OpenSSH"' to your 'wait_for' play to ensure that it's not trying to run before the SSH daemon is completely up.
The modified plays would look like this:
- name: wait for SSH to come up
  local_action: wait_for host={{ item.public_dns_name }} port=22 delay=60 timeout=320 state=started search_regex="OpenSSH"
  with_items: ec2.tagged_instances

- name: add host into launched group
  add_host: name={{ item.public_ip }} group=launched
  with_items: ec2.tagged_instances
You also, of course, want to make sure that Ansible knows to use the specified key when SSH'ing to the remote host either by adding 'ansible_ssh_private_key_file' to the inventory entry or specifying '--private-key=...' on the command line.
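For example, a minimal sketch of the inventory-variable variant when adding the new host (the key path and user are placeholders):
- name: add host into launched group with its SSH key
  add_host:
    name: "{{ item.public_ip }}"
    groups: launched
    ansible_ssh_private_key_file: ~/.ssh/my-ec2-key.pem   # placeholder key path
    ansible_user: ubuntu                                   # matches remote_user in the second play
  with_items: ec2.tagged_instances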
I am trying to use a variable in a keepalived.conf.j2 template file that I push to the remote machine. Basically I am trying to insert the remote machine's dynamic IP address for the eth1 interface into keepalived.conf.j2.
Here is the task:
- name: Keepalived config push
  template: src=keepalived.conf.j2 dest=/etc/keepalived/keepalived.conf force=yes owner=root mode=664
  tags: Config push
Here is the content of the jinja2 conf file:
}
vrrp_instance 50 {
    virtual_router_id 50
    advert_int 1
    priority 101
    state MASTER
    interface eth0
    virtual_ipaddress {
        {{ ansible_eth1:network}} dev eth0
What is the best way to implement this, so that every time I push to the remote machine the conf file contains that machine's own eth1 address?
OK, it seems I have figured it out.
Your playbook has to have fact gathering enabled (e.g. gather_facts: yes), and in the j2 template you need to have the following line:
{{ ansible_eth1.ipv4.address }}
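Put together, a minimal sketch of the play side (the host group name is a placeholder; in the template itself only the virtual_ipaddress line changes, to use {{ ansible_eth1.ipv4.address }} as shown above):
- name: Keepalived config push
  hosts: loadbalancers          # placeholder group name
  gather_facts: yes             # required so the ansible_eth1 facts exist
  tasks:
    - template:
        src: keepalived.conf.j2
        dest: /etc/keepalived/keepalived.conf
        owner: root
        mode: '0664'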