Conditionally add properties to task/module in Ansible Playbook - ansible

I'm developing a role where I want to launch a docker container. Among the tasks in my role I have one using the docker_container module to do that:
- name: Launch docker container
  docker_container:
    name: abc
    ...
This works fine but now I want to have a variable that will define whether this container needs to be attached to a particular docker network.
If I require it is fine:
- name: Launch docker container
  docker_container:
    name: abc
    networks:
      - name: '{{ network_name_var }}'
    ...
But I want to allow the users to not define it, in which case no networks: ... property should be added.
I have found no easy way of achieving this. Is there one?
Semantically I want something like this:
- name: Launch docker container
  docker_container:
    name: abc
  {% if network_name_var is defined %}
    networks:
      - name: '{{ network_name_var }}'
    ...
  {% endif %}

Here is a possible approach. The key points:
- We keep your single network_name_var that is exposed to your user. I assume this var can be either undefined or empty.
- We define the full network list dynamically if the var has a value set; the list stays unset otherwise.
- We use the omit placeholder to leave networks undefined in the module when needed.
- name: demo playbook for omit
  hosts: localhost

  tasks:
    - name: set the list of networks for our container
      # don't define my_networks anywhere else: it should only
      # exist if network_name_var is set
      set_fact:
        my_networks:
          - name: '{{ network_name_var }}'
      when: network_name_var | default('') | length > 0

    - name: make sure container is started
      docker_container:
        name: abc
        networks: "{{ my_networks | default(omit) }}"
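If you'd rather skip the extra set_fact task, the same idea can be written inline with a conditional expression. This is a sketch, not tested against your role; the variable name is the one from the question:

```yaml
- name: make sure container is started
  docker_container:
    name: abc
    # build the one-element network list on the fly, or omit the
    # parameter entirely when network_name_var is unset or empty
    networks: "{{ [{'name': network_name_var}]
                  if network_name_var | default('') | length > 0
                  else omit }}"
```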

Related

Ansible - Is it possible to loop over a list of objects in input within a playbook

I am trying to create a playbook that creates some load balancers.
The playbook takes a configuration YAML as input, which is formatted like so:
-----configuration.yml-----
virtual_servers:
  - name: "test-1.local"
    type: "standard"
    vs_port: 443
    description: ""
    monitor_interval: 30
    ssl_flag: true
(omitted)
As you can see, this defines a list of load balancing objects with their specifications.
To create, for example, a monitor instance, which depends on these definitions, I created this task, which is defined within a playbook.
-----Playbook snippet-----
...
- name: "Creator | Create new monitor"
  include_role:
    name: vs-creator
    tasks_from: pool_creator
  with_items: "{{ virtual_servers }}"
  loop_control:
    loop_var: monitor_item
...
-----Monitor Task-----
- name: "Set monitor facts - Site 1"
  set_fact:
    monitor_name: "{{ monitor_item.name }}"
    monitor_vs_port: "{{ monitor_item.vs_port }}"
    monitor_interval: "{{ monitor_item.monitor_interval }}"
    monitor_partition: "{{ hostvars['localhost']['vlan_partition'] | first }}"
...
(omitted)
- name: "Create HTTP monitor - Site 1"
  bigip_monitor_http:
    state: present
    name: "{{ monitor_name }}_{{ monitor_vs_port }}.monitor"
    partition: "{{ monitor_partition }}"
    interval: "{{ monitor_interval }}"
    timeout: "{{ monitor_interval | int * 3 | int + 1 | int }}"
    provider:
      server: "{{ inventory_hostname }}"
      user: "{{ username }}"
      password: "{{ password }}"
  delegate_to: localhost
  when:
    - site: 1
    - monitor_item.name | regex_search(regex_site_1) != None
...
As you can probably already see, I have a few problems with this code. The main thing I would like to optimize is the following:
The creation of a load balancer (virtual_server) involves multiple tasks (creation of a monitor, pool, etc.), and I would need to treat each list element in the configuration as an object to create, with all the necessary definitions.
I would need to do this for different sites that pertain to our datacenters, for which I use regex_site_1 and site: 1 in order to get the correct one... though I realize that this is not ideal.
The script, as of now, does that, but I believe it's not well managed, and I'm at a loss as to what approach I should take in developing this playbook. I was thinking about looping over the playbook with each element from the configuration list, but apparently this is not possible, and I'm wondering if there's any way to do this, if possible with an example.
Thanks in advance for any input you might have.
If you can influence the input data, I advise turning the elements of virtual_servers into hosts.
In that case the inventory will look like this:
virtual_servers:
  hosts:
    test-1.local:
      vs_port: 443
      description: ""
      monitor_interval: 30
      ssl_flag: true
And all the code becomes trivial:
- hosts: virtual_servers
  tasks:
    - name: Do something
      delegate_to: other_host
      debug: msg=done
...
Ansible will create all the loops for you for free (no need for include_role or odd loops), and most things involving variables become very easy: each host has its own set of variables, which you just use.
The part where 'we are doing configuration on a real host, not this virtual one' is handled by delegate_to.
This is idiomatic Ansible, and it's better to follow this approach. Every time you have include_role within a loop, you have almost certainly made a mistake in designing the inventory.
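For illustration, here is a sketch of how the monitor task from the question might look in that layout. The bigip parameters are carried over from the question, and the play is untested:

```yaml
- hosts: virtual_servers
  gather_facts: false
  tasks:
    - name: Create HTTP monitor for this virtual server
      bigip_monitor_http:
        state: present
        # per-host inventory vars are used directly, no loop_var needed
        name: "{{ inventory_hostname }}_{{ vs_port }}.monitor"
        interval: "{{ monitor_interval }}"
      # talk to the real device, not the "virtual" host
      delegate_to: localhost
```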

Using gitlab-ci vars inside an ansible playbook

I want to set up a remote environment inside a docker container using an Ansible playbook. This playbook will run from gitlab-ci with variables I set in the Gitlab CI/CD configuration. How can I achieve that?
Here is the template I want to use. How do I set the user_id and password from the CI/CD variables?
tasks:
  - name: Run XYZ Container
    docker_container:
      name: XYZ
      restart_policy: on-failure
      image: xxxxxxxxxxx
      container_default_behavior: "compatibility"
      env:
        USER_ID= $USER_ID
        PASSWORD= $PASSWORD
Since gitlab-ci variables are just environment variables inside your job, and since your ansible controller runs inside that job, you can use the env lookup to read them on the controller.
Please note that:
- the docker_container module's env parameter expects a dict, not a newline-separated string of bash-like env var definitions as in your example;
- as a security measure, you should either check that the vars are defined before using them (with an assert or fail task) or use a default value in case they're not. My example uses a default value. For more on providing default values, see the ansible documentation (and the original jinja2 documentation to understand that d is an alias for default).
tasks:
  - name: Run XYZ Container
    docker_container:
      name: XYZ
      restart_policy: on-failure
      image: xxxxxxxxxxx
      container_default_behavior: "compatibility"
      env:
        USER_ID: "{{ lookup('env', 'USER_ID') | d('defaultuser', true) }}"
        PASSWORD: "{{ lookup('env', 'PASSWORD') | d('defaultpass', true) }}"
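If you prefer failing fast over default values, the assert variant mentioned above could look like this (a sketch; adjust the variable names to your pipeline):

```yaml
- name: Ensure required CI variables are present
  assert:
    that:
      - lookup('env', 'USER_ID') | length > 0
      - lookup('env', 'PASSWORD') | length > 0
    fail_msg: "USER_ID and PASSWORD must be set in the CI/CD settings"
```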
I wanted to use the CI_JOB_TOKEN, so I used:
tasks:
  - include_role:
      name: role_name
    vars:
      ci_job_token: "{{ lookup('env', 'CI_JOB_TOKEN') }}"

Ansible Check if lxd container with name already exists

Is it possible to check in ansible if a container already exists?
I tried the following:
- name: LXD | Check for already existing container
  lxc_container:
    name: "{{ container_name }}"
    state: absent
  register: check_container_absent

- debug: msg="{{ check_container_absent }}"

- name: LXD | Create dev container
  command: # script to create container #
  when: check_container_absent.exists
But the output of check_container_absent did not change after I created the container.
Another solution would be to check whether a folder with the container name exists in the location where the containers are stored.
Is there a better solution than checking for the folder?
According to the official documentation:
Containers must have a unique name. If you attempt to create a
container with a name that already exists in the users namespace the
module will simply return as “unchanged”.
You should be able to check if the container with name container_name exists or doesn't exist respectively by checking if the task reports as changed.
- name: Do container things
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Delete container if exists
      lxc_container:
        name: "{{ container_name }}"
        state: absent
      register: delete_container

    - name: Reports false if container did not already exist
      debug:
        var: delete_container.changed

    - name: Create container if not already exists
      lxc_container:
        name: "{{ container_name }}"
      register: create_container

    - name: Reports false if container did already exist
      debug:
        var: create_container.changed
Both of the above tasks will actually create/delete the object if it does/does not already exist, though.
If you are just looking to collect data on whether the object exists and conditionally perform some action later, you will not want to use the lxc_container module, as it is intended to create/delete, not to gather information.
Instead you'll probably want to use the command/shell module with changed_when: false and store the output.
- name: Check whether container exists
  # use a word match; grep -v would invert the match and break the rc test
  shell: "lxc list | grep -w {{ container_name }}"
  changed_when: false
  ignore_errors: true
  register: lxc_list

- name: Do thing if container does not exist
  debug:
    msg: "It doesn't exist"
  when: lxc_list.rc != 0
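A more robust variant avoids grep's substring matching by asking the client for JSON and parsing it. This is a sketch; it assumes the lxc CLI with --format json, which current LXD releases provide:

```yaml
- name: List containers as JSON
  command: lxc list --format json
  changed_when: false
  register: lxc_list

- name: Do thing if container does not exist
  debug:
    msg: "It doesn't exist"
  # each entry in the JSON output carries a "name" field
  when: container_name not in (lxc_list.stdout | from_json | map(attribute='name') | list)
```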

How to use lines of a file as a variables in ansible playbook?

I am looking for a way to use the lines of a file as variables in an ansible playbook.
I have a playbook which uses approximately 15-20 variables, so it is very difficult for me to pass all of them at runtime.
To that end I will create a file like:
**** variables.txt *****
Tomcat8
192.168.0.67
8080
8081
8082
8084
Playbook Sample:
---
- hosts: tomcat_server
  vars:
    tomcat_instances:
      - name: foo
        user: tomcatfoo
        group: tomcatfoo
        path: /srv/tomcatfoo
        home: /home/tomcatfoo
        service_name: foo#tomcat
        service_file: foo#.service
        port_ajp: 18009
        port_connector: 18080
        port_redirect: 18443
        port_shutdown: 18005
So if there is any way I can reference a line number to pass the value of a variable in the playbook, it would be very helpful.
Thanks in advance!
I recommend using a structured way of managing variables, like:
tomcat:
  p_port: 8080
  s_port: 8081
  ip: 192.168.0.1
Then read the variables like:
- name: Read all variables
  block:
    - name: Get All Variables
      stat:
        path: "{{ item }}"
      with_fileglob:
        - "/myansiblehome/vars/common/tomcat1.yml"
        - "/myansiblehome/vars/common/tomcat2.yml"
      register: _variables_stat

    - name: Include Variables when found
      include_vars: "{{ item.stat.path }}"
      when: item.stat.exists
      with_items: "{{ _variables_stat.results }}"
      no_log: true
  delegate_to: localhost
  become: false
After that, use like:
- name: My Tomcat install
  mymodule:
    myaction1: "{{ tomcat.p_port }}"
    myaction2: "{{ tomcat.s_port }}"
    myaction3: "{{ tomcat.ip }}"
Hope it helps
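If you are stuck with the flat variables.txt from the question, you could still read it line by line with the file lookup. Indexing by line number is fragile, which is why the structured approach above is preferable; this is an untested sketch:

```yaml
- hosts: tomcat_server
  vars:
    # splitlines() turns the file into a list, indexed from 0
    var_lines: "{{ lookup('file', 'variables.txt').splitlines() }}"
    tomcat_name: "{{ var_lines[0] }}"
    tomcat_ip: "{{ var_lines[1] }}"
    tomcat_port: "{{ var_lines[2] }}"
```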
There are a few ways of doing it, but I would set up a Tomcat role and put the vars in tomcat_role/defaults/main.yml. If you ever need to change your Tomcat config, it's done neatly from one place.
tomcat_role/defaults/main.yml example:
---
tomcat_user: tomcat
tomcat_group: tomcat
tomcat_path: "/path/to/tomcat"
When I want to use the Tomcat config, I declare the role in my play like this:
- name: Set up Tomcat
  hosts: your_host_here
  become: yes
  roles:
    - tomcat_role
I can then call the vars from that role in my play like this:
user: "{{ tomcat_user }}"
path: "{{ tomcat_path }}"
Hope this helps.

Finding the volume mount points for a list of docker volumes in ansible

It's unfortunate that there is currently no API that lets us work with docker volumes. At the moment, if I need to copy data into a docker volume (NB: not a docker container), I must first ensure that some container can access the volume, and then use ansible to run docker cp. But for these kinds of tasks, there may not even be a docker container that has the volume mounted. This is not idempotent, it disallows the vast majority of ansible's generally awesome API, and it complicates the process by adding many extra steps. This is not the ansible way. What if we could simply find the mountpoint for each volume we are interested in, and then have ansible talk to the host's filesystem directly?
So let's say we have a list of names of the docker volumes we'll be using. For each item in the list, we'd like to inspect it using the docker daemon, then use ansible to set a fact about its mountpoint. This is what I have so far:
- name: Get docker volume information
  command: "docker volume inspect {{ item }}"
  register: output
  with_items: "{{ volumes }}"
NB: Command returns something like this:
[
    {
        "Name": "docker_sites-enabled",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/docker_sites-enabled/_data",
        "Labels": null,
        "Scope": "local"
    }
]
Playbook continues:
- name: Set volume facts
  set_fact:
    "{{ item.stdout | from_json | json_query('Name') }}": "{{ item.stdout | from_json | json_query('Mountpoint') }}"
  with_items: "{{ output.results }}"

- name: The following facts are now set
  debug:
    var: "{{ item }}"
  with_items:
    - "{{ volumes }}"
However, this doesn't work as I expected: ansible reports the error "The variable name '' is not valid. Variables must start with a letter or underscore character, and contain only letters, numbers and underscores." It's probably because of the syntax of the JSON query filter I'm using, but I can't find any documentation on how I should be using it.
Not sure why you want to generate root-level variables for each volume.
You can do it like this:
- hosts: docker_host
  become: true
  gather_facts: false
  vars:
    volumes:
      - vol1
      - vol2
      - vol4
  tasks:
    - shell: docker volume inspect {{ volumes | join(' ') }}
      register: vlm_res

    - set_fact: mountpoints={{ dict(vlm_res.stdout | from_json | json_query('[].[Name,Mountpoint]')) }}

    - debug: var=mountpoints['vol2']
mountpoints is a dict, so we can use mountpoints['vol2'] to get vol2's mountpoint.
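To see what that set_fact line does, here is a plain-Python rendering of the json_query('[].[Name,Mountpoint]') projection and the dict() call. The volume names and paths below are made up for the demonstration:

```python
import json

# stand-in for the stdout of `docker volume inspect vol1 vol2`
inspect_output = json.dumps([
    {"Name": "vol1", "Driver": "local",
     "Mountpoint": "/var/lib/docker/volumes/vol1/_data"},
    {"Name": "vol2", "Driver": "local",
     "Mountpoint": "/var/lib/docker/volumes/vol2/_data"},
])

volumes = json.loads(inspect_output)

# the JMESPath '[].[Name,Mountpoint]' projects each object
# into a two-element [name, path] pair
pairs = [[v["Name"], v["Mountpoint"]] for v in volumes]

# dict() over those pairs is the name -> mountpoint lookup table
# that Ansible stores in the `mountpoints` fact
mountpoints = dict(pairs)
print(mountpoints["vol2"])  # /var/lib/docker/volumes/vol2/_data
```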
