Is it possible to check in ansible if a container already exists?
I tried the following:
- name: LXD | Check for already existing container
  lxc_container:
    name: "{{ container_name }}"
    state: absent
  register: check_container_absent

- debug: msg="{{ check_container_absent }}"

- name: LXD | Create dev container
  command: # script to create container #
  when: check_container_absent.exists
But the output of check_container_absent did not change after I created the container.
Another option would be to check the location where the containers are stored and see whether a folder with the container name exists.
Is there a better solution than to check for the folder?
According to the official documentation
Containers must have a unique name. If you attempt to create a
container with a name that already exists in the users namespace the
module will simply return as “unchanged”.
You should be able to check whether the container named container_name exists by checking whether the task reports as changed.
- name: Do container things
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Delete container if exists
      lxc_container:
        name: "{{ container_name }}"
        state: absent
      register: delete_container

    - name: Reports false if container did not already exist
      debug:
        var: delete_container.changed

    - name: Create container if not already exists
      lxc_container:
        name: "{{ container_name }}"
      register: create_container

    - name: Reports false if container did already exist
      debug:
        var: create_container.changed
Note that both of the above tasks have side effects: the first actually deletes the container if it exists, and the second actually creates it if it does not.
If you just want to collect data on whether the container exists and conditionally perform some action later, you will not want to use the lxc_container module, as it is intended to create/delete containers, not gather information.
Instead you'll probably want to just use the command/shell module with changed_when: false and store the output.
- name: Check whether container exists
  shell: "lxc list | grep {{ container_name }}"
  changed_when: false
  ignore_errors: true
  register: lxc_list

- name: Do thing if container does not exist
  debug:
    msg: "It doesn't exist"
  when: lxc_list.rc != 0
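A plain grep can also match a substring of another container's name. A slightly stricter variant (a sketch, assuming the lxc CLI's --format csv and -c n options are available in your version) matches the name column exactly:

```yaml
- name: Check whether container exists (exact name match)
  # -c n prints only the name column; grep -qx requires a whole-line match
  shell: "lxc list --format csv -c n | grep -qx '{{ container_name }}'"
  changed_when: false
  ignore_errors: true
  register: lxc_list

- name: Do thing if container does not exist
  debug:
    msg: "It doesn't exist"
  when: lxc_list.rc != 0
```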
I'm developing a role where I want to launch a docker container, among the tasks in my role I have one using the docker_container module to do that:
- name: Launch docker container
  docker_container:
    name: abc
    ...
This works fine but now I want to have a variable that will define whether this container needs to be attached to a particular docker network.
If I make it required, it's fine:
- name: Launch docker container
  docker_container:
    name: abc
    networks:
      - name: '{{ network_name_var }}'
    ...
But I want to allow the users to not define it, in which case no networks: ... property should be added.
I have found no easy way of achieving this, is there one?
Semantically I want something like this:
- name: Launch docker container
  docker_container:
    name: abc
    {% if network_name_var is defined %}
    networks:
      - name: '{{ network_name_var }}'
    ...
    {% endif %}
Here is a possible approach. The key points:
We keep your single network_name_var exposed to your users. I assumed this var can be either undefined or empty.
We define the full network list dynamically if the var has a value set; the list stays unset otherwise.
We use the omit placeholder to leave networks undefined in the module when needed.
- name: demo playbook for omit
  hosts: localhost
  tasks:
    - name: set the list of networks for our container
      # don't define my_networks anywhere else: it should only exist
      # if network_name_var is set
      set_fact:
        my_networks:
          - name: '{{ network_name_var }}'
      when: network_name_var | default('') | length > 0

    - name: make sure container is started
      docker_container:
        name: abc
        networks: "{{ my_networks | default(omit) }}"
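As a variant (a sketch under the same assumptions), you can skip the intermediate fact and build the value inline, falling back to omit directly:

```yaml
- name: make sure container is started
  docker_container:
    name: abc
    # resolves to a one-element network list when the var is set,
    # and to the omit placeholder (no networks key at all) otherwise
    networks: "{{ [{'name': network_name_var}] if network_name_var | default('') | length > 0 else omit }}"
```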
I use Ansible to create and delete AWS launch configurations, and I add a timestamp to the name.
My problem is that I can create the LC, but when it comes to deletion the timestamp changes, so the deletion playbook can't find the LC to delete.
This is how I use the timestamp variable.
I put this in a file called timestamp_lc.yml:
- set_fact: now="{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}"
and in the playbooks I call it:
- include: timestamp_lc.yml
How can I make the variable now persistent, so that Ansible does not execute the date command every time I use the variable now?
This is the creation task:
- name: Create launch configuration
  sudo: yes
  command: >
    aws autoscaling create-launch-configuration
    --region {{ asg.region }}
    --launch-configuration-name "{{ asg.launch_configuration.name }}_{{ now }}"
The deletion task:
- name: Delete launch configuration
  sudo: yes
  command: >
    aws autoscaling delete-launch-configuration
    --region {{ asg.region }}
    --launch-configuration-name {{ asg.launch_configuration.name }}_{{ now }}
That will happen on every execution, because you take the value from the date command and set a fact from it, so it is regenerated on every run.
One approach I can think of is to save the value to a file on the target server (or the local server); I feel this would be more reliable.
---
- name: test play
  hosts: localhost
  tasks:
    - name: checking the file stats
      stat:
        path: stack_delete_info
      register: delete_file_stat

    - name: tt
      debug:
        var: delete_file_stat

    - name: test
      shell: echo "{{ lookup('pipe', 'date +%Y%m%d%H%M%S') }}" > stack_delete_info
      when: not delete_file_stat.stat.exists
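The deletion playbook can then read the stored value back instead of regenerating it. A minimal sketch, reusing the stack_delete_info file written above:

```yaml
- name: read the timestamp saved at creation time
  command: cat stack_delete_info
  register: saved_timestamp
  changed_when: false

- name: reuse it as the 'now' variable for the delete task
  set_fact:
    now: "{{ saved_timestamp.stdout | trim }}"
```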
I'm creating a playbook that will be applied to new Docker Swarm manager(s). The server(s) are not configured before the playbook runs.
We already have some Swarm managers. I can find all of them (including the new one) with:
- name: 'Search for SwarmManager server IPs'
  ec2_instance_facts:
    region: "{{ ec2_region }}"
    filters:
      vpc-id: "{{ ec2_vpc_id }}"
      "tag:aws:cloudformation:logical-id": "AutoScalingGroupSwarmManager"
  register: swarmmanager_instance_facts_result
Now I can use something like this to get join-token:
- set_fact:
    swarmmanager_ip: "{{ swarmmanager_instance_facts_result.instances[0].private_ip_address }}"

- name: 'Get the docker swarm join-token'
  shell: docker swarm join-token -q manager
  changed_when: False
  register: docker_swarm_token_result
  delegate_to: "{{ swarmmanager_ip }}"
  run_once: true
Successful shell output looks like this, just one line starting with "SWMTKN-1":
SWMTKN-1-11xxxyyyzzz-xxxyyyzzz
But I see some possible problems here with swarmmanager_ip:
it can be a new instance which is still unconfigured,
it can be an instance whose Swarm manager is not working.
So I decided to loop over the results until I get a join-token, but none of the code variants I've tried work. For example, this one runs over the whole list without breaking:
- name: 'Get the docker swarm join-token'
  shell: docker swarm join-token -q manager
  changed_when: False
  register: docker_swarm_token_result
  delegate_to: "{{ item.private_ip_address }}"
  loop: "{{ swarmmanager_instance_facts_result.instances }}"
  # ignore_errors: true
  # until: docker_swarm_token_result.stdout_lines|length == 1
  when: docker_swarm_token_result is not defined or docker_swarm_token_result.stdout_lines is not defined or docker_swarm_token_result.stdout_lines|length == 1
  run_once: true
  check_mode: false
Do you know how to iterate over a list until the first successful shell output?
I use Ansible 2.6.11; an answer for 2.7 is also fine.
P.S.: I've already read How to break `with_lines` cycle in Ansible?, but that approach doesn't work for modern Ansible versions.
It's unfortunate that there is no current API that would allow us to work with Docker volumes. At the moment, if I need to copy data into a Docker volume (NB: not a Docker container), I must first ensure that some container can access the volume, and then use Ansible to run docker cp. But for these kinds of tasks there may not even be a container that has the volume mounted. This is not idempotent, it forgoes the vast majority of Ansible's generally excellent API, and it complicates the process with many extra steps. This is not the Ansible way. What if we could simply find the mountpoint of each volume we are interested in, and then have Ansible talk to the host's filesystem directly?
So let's say we have some list of the names of some docker volumes we'll be using. For each item in the list, we'd like to inspect it using the docker daemon, then use ansible to set a fact about its mountpoint. This is what I have so far:
- name: Get docker volume information
  command: "docker volume inspect {{ item }}"
  register: output
  with_items: "{{ volumes }}"
NB: Command returns something like this:
[
    {
        "Name": "docker_sites-enabled",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/docker_sites-enabled/_data",
        "Labels": null,
        "Scope": "local"
    }
]
Playbook continues:
- name: Set volume facts
  set_fact:
    "{{ item.stdout|from_json|json_query('Name') }}": "{{ item.stdout|from_json|json_query('Mountpoint') }}"
  with_items: "{{ output.results }}"

- name: The following facts are now set
  debug:
    var: "{{ item }}"
  with_items:
    - "{{ volumes }}"
However, this doesn't work as I expected: Ansible reports the error "The variable name '' is not valid. Variables must start with a letter or underscore character, and contain only letters, numbers and underscores." It's probably because of the syntax of the json_query filter I'm using, but I can't find any documentation about how I should be using it.
Not sure why you want to generate a root-level variable for each volume.
You can do it like this:
- hosts: docker_host
  become: true
  gather_facts: false
  vars:
    volumes:
      - vol1
      - vol2
      - vol4
  tasks:
    - shell: docker volume inspect {{ volumes | join(' ') }}
      register: vlm_res

    - set_fact: mountpoints={{ dict(vlm_res.stdout | from_json | json_query('[].[Name,Mountpoint]')) }}

    - debug: var=mountpoints['vol2']
mountpoints is a dict, so we can look up vol2's mountpoint as mountpoints['vol2'].
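With that dict in place, getting data into a volume becomes an ordinary filesystem task on the Docker host. A sketch (the source file name is hypothetical):

```yaml
- name: copy a file straight into the vol2 volume
  copy:
    src: files/example.conf        # hypothetical source file
    dest: "{{ mountpoints['vol2'] }}/example.conf"
```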
I'm creating a deployment playbook for our web services. Each web service is in its own directory, such as:
/webapps/service-one/
/webapps/service-two/
/webapps/service-three/
I want to check to see if the service directory exists, and if so, I want to run a shell script that stops the service gracefully. Currently, I am able to complete this step by using ignore_errors: yes.
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{ item }}"
  ignore_errors: yes
While this works, the output is very messy if one of the directories doesn't exist or a service is being deployed for the first time. I effectively want to do something like one of these:
This:
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{ item }}"
  when: shell: [ -d /webapps/{{ item }} ]
or this:
- name: Stop services
  with_items: services_to_stop
  shell: "/webapps/scripts/stopService.sh {{ item }}"
  stat:
    path: /webapps/{{ item }}
  register: path
  when: path.stat.exists == True
I'd collect facts first and then do only necessary things.
- name: Check existing services
  stat:
    path: "/webapps/{{ item }}"
  with_items: "{{ services_to_stop }}"
  register: services_stat

- name: Stop existing services
  with_items: "{{ services_stat.results | selectattr('stat.exists') | map(attribute='item') | list }}"
  shell: "/webapps/scripts/stopService.sh {{ item }}"
Also note that bare variables in with_items don't work since Ansible 2.2, so you should template them.
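That is, wrap the list variable in Jinja2 braces rather than passing the bare name:

```yaml
# Ansible >= 2.2: templated, not bare (with_items: services_to_stop)
- name: Stop existing services
  with_items: "{{ services_to_stop }}"
  shell: "/webapps/scripts/stopService.sh {{ item }}"
```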
This will let you get a list of existing directory names into the list variable dir_names (use recurse: no to read only the first level under webapps):
---
- hosts: localhost
  connection: local
  vars:
    dir_names: []
  tasks:
    - find:
        paths: "/webapps"
        file_type: directory
        recurse: no
      register: tmp_dirs

    - set_fact: dir_names="{{ dir_names + [item['path']] }}"
      no_log: True
      with_items:
        - "{{ tmp_dirs['files'] }}"

    - debug: var=dir_names
You can then use dir_names in your "Stop services" task via a with_items. It looks like you're intending to use only the name of the directory under "webapps" so you probably want to use the | basename jinja2 filter to get that, so something like this:
- name: Stop services
  with_items: "{{ dir_names }}"
  shell: "/webapps/scripts/stopService.sh {{ item | basename }}"