How to Execute a Role Inside Ansible Blocks

I have written an Ansible playbook which does a deployment on the remote machines:
- name: Deployment part of the script
  vars:
    hostName:
    build_num:
  hosts: "{{ hostName }}"
  become: true
  serial: 1
  tasks:
    # ... this does the deployment ...
After this I want to execute a utility which is on localhost, i.e. the machine from which this playbook is executed.
I have written a role which does this for me if I execute it separately as a playbook:
- name: Roles Demo
  hosts: 127.0.0.1
  connection: local
  vars:
    var1: "sometextvalue"
    var2: "sometextvalue"
    var3: "someurl"
  roles:
    - demorole # role which I created
Now I want to integrate the role into my main playbook mentioned at the top, but I am getting:
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
although it is the same snippet that works fine when run individually.
I also wanted to execute this using Ansible blocks, i.e. when a certain condition is matched, execute a certain role, but for that I am getting the same error. To summarize, what I want to achieve using the blocks is as below:
- name: Deployment part of the script
  vars:
    hostName:
    build_num:
  hosts: "{{ hostName }}"
  become: true
  serial: 1
  tasks:
    # ... this does the complete deployment ...

- name: Task for Doing some check
  hosts: 127.0.0.1
  connection: local
  vars:
    var1: "dakdkadadakdhkahdkahkdh"
    var2: "jdjaldjlaj"
    var3: "djasjdlajdlajdljadljaldjlaj"
  block:
    - name: Doing Check for some1
      roles:
        - role1
      when: x == "somevalue1"
    - block:
        - name: Doing check for some2
          roles:
            - role2
      when: x == "somevalue2"
...
assuming the vars values are the same.
So I am not sure whether this can be achieved.

Using a block outside of the tasks section is not valid.
You can, however, execute roles from within the tasks section, which allows you to use blocks and when conditionals however you choose.
Example:
- name: Task for Doing some check
  hosts: 127.0.0.1
  connection: local
  vars:
    var1: "dakdkadadakdhkahdkahkdh"
    var2: "jdjaldjlaj"
    var3: "djasjdlajdlajdljadljaldjlaj"
  tasks:
    - name: Doing Check for some1
      import_role:
        name: role1
      when: x == "somevalue1"
You will need to decide whether to use import_role or include_role. Take a look at https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse.html#dynamic-vs-static for an explanation of the differences.
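To come back to the question's original goal of grouping conditional roles under blocks, a minimal sketch (assuming, as in the question, that a variable x is defined somewhere, e.g. via -e) could look like this:

- name: Task for Doing some check
  hosts: 127.0.0.1
  connection: local
  tasks:
    - block:
        - name: Doing Check for some1
          include_role:
            name: role1
      when: x == "somevalue1"
    - block:
        - name: Doing check for some2
          include_role:
            name: role2
      when: x == "somevalue2"

Here the when: sits on the block, so every role include inside that block inherits the condition.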

Related

Run tasks on different hosts within an imported task

The calling playbook has:
- hosts: ssh_servers
  tasks:
    - import_tasks: create_files.yml
Then, in create_files.yml, I'd like to run some tasks on hosts other than ssh_servers, such as:
- hosts: other_servers
  tasks:
    - file:
I get: ERROR! conflicting action statements: hosts, tasks.
Is this because I'm trying to run against hosts that were never included in the calling task?
Is there a way to accomplish this other than having the calling playbook use:
- hosts:
    - ssh_servers
    - other_servers
  tasks:
    - import_tasks: create_files.yml
Thank you.
Is this because I'm trying to run against hosts that were never included in the calling task? Is there a way to accomplish this other than in the calling playbook?
I believe the answer is yes, although it'll be weird and could cause some confusion for subsequent folks who interact with your playbook.
Given a hypothetical create_files.yml of:
- name: create /tmp/hello_world on hosts "not_known_at_launch_time"
  file:
    path: /tmp/hello_world
    state: present
  delegate_to: '{{ item }}'
  with_items: '{{ groups["not_known_at_launch_time"] }}'
then the glue needed to bridge them together is the dynamic creation of a group plus that delegate_to: keyword:
- hosts: ssh_hosts
  tasks:
    - add_host:
        groups: not_known_at_launch_time
        name: secret-host-0
        ansible_host: 192.168.1.1 # or whatever
        # ... other hostvars here ...
    - include_tasks: create_files.yml
It may be possible to combine those inside create_files.yml, via some shared vars: that say which host-and-IP should be added to the magic group name, which also has the benefit of keeping the magic group name localized to the file that consumes it.
BE AWARE: I did actually test this, but not extensively, so there may be some weird things, such as needing run_once: yes on the tasks to keep them from being run groups.ssh_hosts|length times, or similar.
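For example, the guard hinted at above would be a one-line addition on the include (a sketch carrying the same untested caveat):

- include_tasks: create_files.yml
  run_once: yes # run the included file once, not once per host in ssh_hosts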
As Vladimir correctly pointed out, what you actually want to do is make that relationship formal:
- hosts: ssh_hosts
  tasks:
    # ... whatever tasks you had before ...
    - add_host: # ... as before ...

- hosts: anonymous_group_name_goes_here
  tasks:
    - include_tasks: create_files.yml

- hosts: ssh_hosts
  tasks:
    - debug:
        msg: and now you are back to the ssh_hosts to pick up what they were supposed to be doing when you stopped to post on SO

Validation of parameters and use of roles/tags in Ansible

I am writing a playbook where I am automating my deployment. On the command line I pass some parameters which need to be validated as mandatory. I am using roles/tags to run my playbook. Below is my command:
ansible-playbook -i my-inventory my-main.yml --tags=copy,deploy -e my_release_version=1.0.0 -e target_env=prod
In my-main.yml I first validate the parameters and then execute the roles. Now, if I pass tags on the command line, it skips the validation and directly executes the tagged roles, which I guess is the way Ansible works.
Is there a way to execute the validation steps in my-main.yml before the tagged roles run?
my-main.yml looks like below:
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    _allowed_envs:
      - dev
      - preprod
      - prod
  pre_tasks:
    - name: Checking if the Target Environment is ok
      fail:
        msg: >-
          Environment "{{ target_env }}" is not allowed.
          Please choose a target environment from "{{ _allowed_envs | join(', ') }}"
      when: not target_env in _allowed_envs
      run_once: true
  roles:
    - role: copy
      tags:
        - copy
    - role: deploy
      tags:
        - deploy
NOTE: My playbook has roles/tags like copy and deploy, but also stoptomcat and starttomcat. When the user only mentions tags like stoptomcat and starttomcat, I want only one input parameter, target_env, to be validated, because in that case I do not need my_release_version.
Any help is appreciated.
There is nothing wrong with the pre_tasks or with your play; the pre_tasks are skipped because you are passing --tags on the command line when calling the playbook. Tags take the highest precedence when invoking the play, which is why your validation is skipped each time. If you want the pre_tasks to run, don't mention the --tags option on the CLI; instead handle the role selection alongside the pre_tasks validation by passing only -e options, something like this:
---
- name: test_play
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    _allowed_envs:
      - dev
      - preprod
      - prod
  pre_tasks:
    - name: Checking if the Target Environment is ok
      fail:
        msg: >-
          Environment "{{ target_env }}" is not allowed.
          Please choose a target environment from "{{ _allowed_envs | join(', ') }}"
      when: not target_env in _allowed_envs
      run_once: true
  tasks:
    - name: include the dynamic role
      include_role:
        name: "{{ ROLE }}"
      tags: "{{ tag_name }}"
and then run it: ansible-playbook -i my-inventory my-main.yml -e my_release_version=1.0.0 -e target_env=dev -e ROLE=copy -e tag_name=copy
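Alternatively (not part of the answer above, but a standard Ansible feature), the validation task can be tagged with the special always tag, which makes it run even when --tags is passed on the command line; it is only skipped if --skip-tags always is given explicitly:

pre_tasks:
  - name: Checking if the Target Environment is ok
    fail:
      msg: >-
        Environment "{{ target_env }}" is not allowed.
        Please choose a target environment from "{{ _allowed_envs | join(', ') }}"
    when: not target_env in _allowed_envs
    run_once: true
    tags: always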
Hope this helps!
Cheers

Stopping ansible from a playbook (site.yml)

I have re-written the question to be more specific, instead of using a generic example of what I am trying to achieve, as per @Zeitounator's suggestion.
I use Ansible to spin up VMs in VMware by adding a new entry in the hosts.ini file and running ansible-playbook -i inventory/dev/hosts.ini --limit SomeGroup playbooks/site.yml
The vmware role (called vmware) will:
* Check to see if the VM already exists.
* If it does, then obviously it does not create the VM.
* If it does not exist, it will create the VM from a template.
To destroy a VM, I run this: ansible-playbook -i inventory/dev/hosts.ini --limit SomeGroup playbooks/site.yml -e 'vmware_destroy=true'
That works as intended. Now for my issue.
When this variable is set (vmware_destroy=true), it will destroy the VM successfully, BUT Ansible will attempt to carry on with the rest of the playbook on the host that has just been destroyed. Obviously it fails. The playbook does actually stop due to the failure, but not gracefully.
Here is an example of the flow of logic:
$ cat playbooks/site.yml
---
- hosts: all
  gather_facts: no
  roles:
    - { role: vmware, tags: vmware }

- hosts: all
  gather_facts: yes
  roles:
    - { role: bootstrap, tags: bootstrap }
    - { role: common, tags: common }

- hosts: AppServers
  gather_facts: no
  roles:
    - { role: application }
# and so on.
$ cat playbooks/roles/vmware/main.yml
---
# Checks to see if the VM exists already.
# A variable `found_vm` is registered in this task.
- import_tasks: find.yml

# Only import this task when all of the `when` conditions are met.
- import_tasks: destroy.yml
  when:
    - vmware_destroy is defined
    - vmware_destroy # Meaning 'True'
    - found_vm

# If the above is true, it will not import this task.
- import_tasks: create.yml
  when:
    - found_vm.failed
    - vmware_destroy is not defined
So, the point is: when I specify -e 'vmware_destroy=true', Ansible will attempt to run the rest of the playbook and fail.
I don't want Ansible to fail. I want it to stop gracefully after completing the vmware role, based on the -e 'vmware_destroy=true' flag provided on the command line.
I am aware I can use a different playbook for this, something like ansible-playbook -i inventory/dev/hosts.ini --limit SomeGroup playbooks/VMWARE_DESTROY.yml, but I would rather use a variable than a separate playbook. If there is a strong argument for splitting out the playbook in this way, please provide references.
Please let me know if more clarification is needed.
Thank you in advance.
A playbook is the top Ansible abstraction layer (playbook -> role -> task -> ...). There are two more layers in AWX (workflow -> job template -> playbook -> ...) to control the playbooks. To follow this architecture, either AWX or some other tool that interfaces with Ansible (ansible-runner, or scripts, for example) should be used to control the playbooks.
Controlling playbooks from inside Ansible is rather awkward. Create two playbooks, vmware-create.yml and vmware-destroy.yml:
$ cat vmware-create.yml
- hosts: all
  gather_facts: yes # include cached variables
  tasks:
    - block:
        - debug:
            msg: Don't create VM this time. End of play.
        - meta: end_play
      when: not hostvars['localhost'].vmware_create
    - debug:
        msg: Create VM.

$ cat vmware-destroy.yml
- hosts: all
  gather_facts: yes # include cached variables
  tasks:
    - block:
        - debug:
            msg: Don't destroy VM this time. End of play.
        - meta: end_play
      when: not hostvars['localhost'].vmware_destroy
    - debug:
        msg: Destroy VM.
and import them into the playbook vmware-control.yml, see below:
$ cat vmware-control.yml
- hosts: localhost
  vars:
    vmware_create: true
    vmware_destroy: false
  tasks:
    - set_fact:
        vmware_create: "{{ vmware_create }}" # cache variable
    - set_fact:
        vmware_destroy: "{{ vmware_destroy }}" # cache variable

- import_playbook: vmware-create.yml
- import_playbook: vmware-destroy.yml
Control the flow with the variables vmware_create and vmware_destroy. Run vmware-control.yml on localhost and declare hosts: all inside vmware-create.yml and vmware-destroy.yml.
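For example, to destroy instead of create, the play-level defaults can be overridden from the command line. Passing the extra vars as JSON keeps them real booleans rather than the strings that key=value syntax would produce (a usage sketch, not part of the original answer):

ansible-playbook vmware-control.yml -e '{"vmware_create": false, "vmware_destroy": true}'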

How to avoid running a playbook when no host matches?

Use case: users can provide a host name and will trigger a playbook run. In case the hostname has a typo, I want the complete playbook run to fail when "no hosts matched". I want it to fail so I can detect the failure in Bamboo (which I use for CI/CD to run the playbook).
I have done quite extensive research. It seems that it is intended behavior that the playbook exits with exit code 0 when no host matches. Here is one indication I found. I agree that the general behavior should be like this.
So for my use case I need an extra check. I tried the following:
- name: Deploy product
  hosts: "{{ target_hosts }}"
  gather_facts: no
  any_errors_fatal: true
  pre_tasks:
    - name: Check for a valid target host
      fail:
        msg: "The provided host is not known"
      when: target_hosts not in groups.tomcat_servers
But since there is no host match, the play does not run. That is okay, but it still ends with exit code 0, so I cannot fail the run in my automation system (Bamboo).
So I am looking for a way to return an exit code != 0 when no host matches.
Add a play which would set a fact if a host matched, then check that fact in a second play:
- name: Check hosts
  hosts: "{{ target_hosts }}"
  gather_facts: no
  tasks:
    - set_fact:
        hosts_confirmed: true
      delegate_to: localhost
      delegate_facts: true

- name: Verify hosts
  hosts: localhost
  gather_facts: no
  tasks:
    - assert:
        that: hosts_confirmed | default(false)

- name: The real play
  hosts: "{{ target_hosts }}"
  # ...
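With this layout, a typo in target_hosts leaves hosts_confirmed unset, the assert in the Verify hosts play fails, and ansible-playbook exits non-zero (typically rc 2 for failed tasks), which Bamboo can detect. For example (playbook and group names are illustrative):

ansible-playbook -i inventory deploy.yml -e target_hosts=tomcat_serverz
echo $? # non-zero, because the assert failed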

Set remote_user for set of tasks in Ansible playbook without repeating it per task

I am creating a playbook which first creates a new username. I then want to run moretasks.yml as that new user. Currently, I'm setting remote_user for every task. Is there a way to set it once for an entire set of tasks? I couldn't find examples of this, nor did any of my attempts to move remote_user around help.
Below is main.yml:
---
- name: Configure Instance(s)
  hosts: all
  remote_user: root
  gather_facts: true
  tags:
    - config
    - configure
  tasks:
    - include: createuser.yml new_user=username
    - include: moretasks.yml new_user=username
    - include: roottasks.yml # some tasks unrelated to username.
moretasks.yml:
---
- name: Task1
  copy:
    src: /vagrant/FILE
    dest: ~/FILE
  remote_user: "{{ newuser }}"

- name: Task2
  copy:
    src: /vagrant/FILE
    dest: ~/FILE
  remote_user: "{{ newuser }}"
First of all, you surely want to use sudo_user (remote_user is the one that logs in; sudo_user is the one who executes the task).
In your case, to execute the tasks as another user (the one previously created), just set:
- include: moretasks.yml
  sudo: yes
  sudo_user: "{{ newuser }}"
and those tasks will be executed as {{ newuser }} (don't forget the quotes).
Remark: in most cases you should consider remote_user a host parameter. It is the user that is allowed to log in to the machine and that has sufficient rights to do things. For operational stuff you should use sudo / sudo_user.
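Note that sudo / sudo_user are the pre-Ansible-2.0 spellings; on current Ansible the same idea is written with become / become_user (and - include: has since been superseded by include_tasks). A direct translation of the snippet above:

- include: moretasks.yml
  become: yes
  become_user: "{{ newuser }}"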
You could split this up into two separate plays (playbooks can contain multiple plays):
---
- name: PLAY 1
  hosts: all
  remote_user: root
  gather_facts: true
  tasks:
    - include: createuser.yml new_user=username
    - include: roottasks.yml # some tasks unrelated to username.

- name: PLAY 2
  hosts: all
  remote_user: username
  gather_facts: false
  tasks:
    - include: moretasks.yml new_user=username
There is a gotcha when using separate plays: you can't use variables set with register: or set_fact: in the first play to do things in the second play (this statement is not entirely true, since the variables are available in hostvars, but I recommend not passing variables between roles this way). Defined variables, as in group_vars and host_vars, work just fine.
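For completeness, a minimal sketch of the hostvars route mentioned above (variable names are illustrative): a fact set in PLAY 1 can be read in PLAY 2, because facts persist across plays within one playbook run:

- name: PLAY 1
  hosts: all
  remote_user: root
  tasks:
    - set_fact:
        created_user: username

- name: PLAY 2
  hosts: all
  remote_user: username
  tasks:
    - debug:
        msg: "{{ hostvars[inventory_hostname].created_user }}"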
Another tip: look into using roles (http://docs.ansible.com/playbooks_roles.html). While it might seem more complicated at first, it's much easier to re-use them (as you seem to be doing with createuser.yml). Looking at the type of things you are trying to achieve, the 'include all the things' path won't last much longer.
Kind of in line with your issue; hope it helps. This came up while updating my playbooks for Ansible 2.5 support for the Cisco IOS network_cli connection.
Credential file created with ansible-vault: auth/secrets.yml
---
creds:
  username: 'ansible'
  password: 'user_password'
Playbook:
---
- hosts: ios
  gather_facts: yes
  connection: network_cli
  become: yes
  become_method: enable
  ignore_errors: yes
  tasks:
    - name: obtain login credentials
      include_vars: auth/secrets.yml
    - name: Set Username/ Password
      set_fact:
        remote_user: "{{ creds['username'] }}"
        ansible_ssh_pass: "{{ creds['password'] }}"
    - name: Find info for "{{ inventory_hostname }}" via ios_facts
      ios_facts:
        gather_subset: all
      register: hardware_fact
Running the playbook without the auth/secrets.yml creds:
ansible-playbook -u ansible -k playbook.yml -l inventory_hostname
