Run a playbook on hosts with a specific parameter - ansible

In my host file I have:
[web]
192.168.1.1:8682 master="yes"
192.168.1.1:8682 master="no"
and in my playbook I would like to run roles only on the server with master=yes like:
---
- name: Switch MySQL master
  hosts: web[master=yes]
  remote_user: andy
  become: yes
  roles:
    - replication_setup_switch_server
...
Is it possible to do that with Ansible?

In Ansible you can create ad-hoc groups based on facts or inventory variables using the group_by module.
Add a new play before Switch MySQL master and create the new group in it; I called that play Create groups by role in the example below. You could also rename the variable master to role to make the playbook more intuitive. The inventory and the playbook would then become:
[web]
192.168.1.1:8682 role="master"
192.168.1.1:8682 role="slave"
---
- name: Create groups by role
  hosts: web
  tasks:
    - name: Group by web role
      group_by:
        key: "web_{{ role }}"

- name: Switch MySQL master
  hosts: web_master
  remote_user: andy
  become: yes
  roles:
    - replication_setup_switch_server
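If you want to verify which hosts ended up in the dynamic group, a minimal sketch like the following extra play (appended to the same playbook; the names are only illustrative) prints the membership. Keep in mind that web_master only exists for the run in which the group_by task was executed, so the check has to happen in the same playbook:

- name: Show dynamic group membership
  hosts: web_master
  gather_facts: false
  tasks:
    - name: Confirm this host was grouped as web_master
      debug:
        msg: "{{ inventory_hostname }} is in {{ groups['web_master'] }}"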

Related

Ansible. Reconnecting the playbook connection

A server is being created. Initially there is only the root user, its password, and SSH on port 22 (the default).
There is a playbook written for, say, a React application.
When you run the playbook, everything gets deployed, but before deploying you need to do a minimal server setup: create a new sudo user, change the SSH port, and copy the SSH key to the server. This is probably needed for any server.
After this setup, a YAML file with the variables for this server (ansible_user, ansible_sudo_pass, etc.) appears in the host_vars directory.
For example, there are 2 roles: initial-server, deploy-react-app.
And the playbook itself (main.yml) for a specific application:
- name: Deploy
  hosts: prod
  roles:
    - role: initial-server
    - role: deploy-react-app
How can I make it so that, when running ansible-playbook main.yml, the initial-server role is executed as the root user with its password, and the deploy-react-app role as the newly created user, connecting with the SSH key instead of the root password? Or is this, in principle, not the correct approach?
Note: using dashes (-) in role names is deprecated, so I fixed that in the examples below.
Basically:
- name: initialize server
  hosts: prod
  remote_user: root
  roles:
    - role: initial_server

- name: deploy application
  hosts: prod
  # This prevents gathering facts twice but is not mandatory
  gather_facts: false
  remote_user: reactappuser
  roles:
    - role: deploy_react_app
You could also set ansible_user in each role's vars within a single play:
- name: init and deploy
  hosts: prod
  roles:
    - role: initial_server
      vars:
        ansible_user: root
    - role: deploy_react_app
      vars:
        ansible_user: reactappuser
There are other possibilities (using an include_role task). This really depends on your precise requirement.
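For completeness, here is a minimal sketch of the include_role variant (a single play; the user names are the same illustrative ones as above):

- name: init and deploy via include_role
  hosts: prod
  gather_facts: false
  tasks:
    - name: Initial server setup as root
      include_role:
        name: initial_server
      vars:
        ansible_user: root

    - name: Deploy the application as the new user
      include_role:
        name: deploy_react_app
      vars:
        ansible_user: reactappuser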

How do I make sure a role is run on host A before running a task on host B?

Let's say I want to set up 1) an Apache webserver and 2) a repository of tarballs on host A, and then download those tarballs over HTTP in some tasks on host B. How would I set up that dependency in Ansible?
You can create two playbook files:
hosts-A.yml
---
- hosts: hosts-A
  gather_facts: yes
  roles:
    - { role: apache }
    - { role: repo_of_tarballs }
hosts-B.yml
---
- hosts: hosts-B
  gather_facts: yes
  roles:
    - { role: download_tarballs }
Afterwards you can create a site.yml file which contains:
---
- import_playbook: hosts-A.yml
- import_playbook: hosts-B.yml
To execute the playbook: ansible-playbook site.yml. Because the imported playbooks run in order, the plays for hosts-A complete before the hosts-B play starts, so the tarball repository is already in place when host B tries to download from it.
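For illustration, a task inside the download_tarballs role could then fetch a tarball published by host A roughly like this (the URL, file name, and destination are assumptions, not from the original question):

- name: Download a tarball from the repository on host A
  get_url:
    url: "http://hosts-A.example.org/repo/app-1.0.tar.gz"
    dest: /tmp/app-1.0.tar.gz
    mode: '0644'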

How to add host to group in Ansible Tower inventory?

How can I add a host to a group using tower_group or tower_host modules?
The following code creates a host and a group, but they are unrelated to each other:
---
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - tower_inventory:
        name: My Inventory
        organization: Default
        state: present
        tower_config_file: "~/tower_cli.cfg"
    - tower_host:
        name: myhost
        inventory: My Inventory
        state: present
        tower_config_file: "~/tower_cli.cfg"
    - tower_group:
        name: mygroup
        inventory: My Inventory
        state: present
        tower_config_file: "~/tower_cli.cfg"
The docs mention the instance_filters parameter ("Comma-separated list of filter expressions for matching hosts."), but do not provide any usage example.
Adding instance_filters: myhost to the tower_group task has no effect.
I solved it using the Ansible shell module and tower-cli. I know that writing a proper Ansible module would be better, but this was a quick solution...
- hosts: awx
  tasks:
    - name: Create Inventory
      tower_inventory:
        name: "Foo Inventory"
        description: "Our Foo Cloud Servers"
        organization: "Default"
        state: present

    - name: Create Group
      tower_group:
        inventory: "Foo Inventory"
        name: Testes
      register: fs_group

    - name: Create Host
      tower_host:
        inventory: "Foo Inventory"
        name: "host"
      register: fs_host

    - name: Associate host with group
      shell: tower-cli host associate --host "{{ fs_host.id }}" --group "{{ fs_group.id }}"
This isn't natively available in the modules included with Tower, which are older and use the deprecated tower-cli package.
But it is available in the newer AWX collection, which uses the awx CLI, as long as you have a recent enough Ansible (2.9 should be fine).
In essence, install the awx collection through a requirements file, or directly like:
ansible-galaxy collection install awx.awx -p ./collections
Add the awx.awx collection to your playbook:
  collections:
    - awx.awx
and then use the hosts: option of tower_group:
- tower_group:
    name: mygroup
    inventory: My Inventory
    hosts:
      - myhost
    state: present
Be aware though that you may need preserve_existing_hosts: True if your group already contains other hosts. Unfortunately there does not seem to be an easy way to remove a single host from a group.
In terms of your example this would probably work:
---
- hosts: localhost
  connection: local
  gather_facts: false
  collections:
    - awx.awx
  tasks:
    - tower_inventory:
        name: My Inventory
        organization: Default
        state: present
        tower_config_file: "~/tower_cli.cfg"
    - tower_host:
        name: myhost
        inventory: My Inventory
        state: present
        tower_config_file: "~/tower_cli.cfg"
    - tower_group:
        name: mygroup
        inventory: My Inventory
        state: present
        tower_config_file: "~/tower_cli.cfg"
        hosts:
          - myhost
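If mygroup may already contain other hosts that should be kept, the same task can be extended with the preserve_existing_hosts flag mentioned above, for example:

- tower_group:
    name: mygroup
    inventory: My Inventory
    state: present
    tower_config_file: "~/tower_cli.cfg"
    preserve_existing_hosts: True
    hosts:
      - myhost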

How do I apply an Ansible task to multiple hosts from within a playbook?

I am writing an ansible playbook to rotate IAM access keys. It runs on my localhost to create a new IAM Access Key on AWS. I want to push that key to multiple other hosts' ~/.aws/credentials files.
---
- name: Roll IAM access keys
  hosts: localhost
  connection: local
  gather_facts: false
  strategy: free
  roles:
    - iam-rotation
In the iam-rotation role, I have something like this:
- name: Create new Access Key
  iam:
    iam_type: user
    name: "{{ item }}"
    state: present
    access_key_state: create
    key_count: 2
  with_items:
    - ansible-test-user
  register: aws_user

- set_fact:
    aws_user_name: "{{ aws_user.results.0.user_name }}"
    created_keys_count: "{{ aws_user.results.0.created_keys | length }}"
    aws_user_keys: "{{ aws_user.results[0]['keys'] }}"
I want to push the newly created access keys out to the Jenkins builders. How would I use the list of hosts from the inventory with with_items in the task? The debug task below is just a placeholder.
# Deploy to all Jenkins builders
- name: Deploy new keys to jenkins builders
  debug:
    msg: "Deploying key to host {{ item }}"
  with_items:
    - "{{ groups.jenkins_builders }}"
Hosts file that includes the list of hosts I want to apply this to:
[jenkins_builders]
builder1.example.org
builder2.example.org
builder3.example.org
I am executing the playbook on localhost, but within the playbook I want one task to execute on the remote hosts that I'm getting from the hosts file. The question was:
How would I use the list of hosts from the inventory with with_items in the task?
Separate the tasks into two roles. Then execute the first role against localhost and the second one against jenkins_builders:
---
- name: Rotate IAM access keys
  hosts: localhost
  connection: local
  gather_facts: false
  strategy: free
  roles:
    - iam-rotation

- name: Push out IAM access keys
  hosts: jenkins_builders
  roles:
    - iam-propagation
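Inside the second play, the facts that iam-rotation registered on localhost are reachable through hostvars, because facts set with set_fact persist across plays in the same run. A minimal sketch of a task in the (hypothetical) iam-propagation role:

- name: Read the keys that were registered on localhost
  debug:
    msg: "{{ hostvars['localhost'].aws_user_keys }}"

From there the individual key fields can be templated into each builder's ~/.aws/credentials, for example with the copy or template module.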
Per AWS best practices recommendations, if you are running an application on an Amazon EC2 instance and the application needs access to AWS resources, you should use IAM roles for EC2 instead of keys:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
You can use delegate_to: servername in the task; it will then run only against that particular server.
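A minimal sketch of the delegation approach, staying inside the original localhost play (the debug task is just the placeholder from the question; with a real module such as copy, each iteration would execute against the delegated builder):

- name: Deploy new keys to jenkins builders
  debug:
    msg: "Deploying key to host {{ item }}"
  delegate_to: "{{ item }}"
  with_items: "{{ groups.jenkins_builders }}"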

How to run only one role of an Ansible playbook?

I have a site.yml which imports several playbooks.
- import_playbook: webservers.yml
- ....
Every playbook "calls" several roles:
- name: apply the webserver configuration
  hosts: webservers
  roles:
    - javajdk
    - tomcat
    - apache
How can I run only the javajdk role ?
This would run all roles...
ansible-playbook -i inventory webservers.yml
I know that there are tags, but how do I assign them to a role in general?
Tags are the natural way to go. Here are three ways of specifying them for roles:
- name: apply the webserver configuration
  hosts: webservers
  roles:
    - role: javajdk
      tags: java_tag
    - { role: tomcat, tags: tomcat_tag }
  tasks:
    - include_role:
        name: apache
      tags: apache_tag
You can then explicitly specify the tags to run:
ansible-playbook example.yml --tags "java_tag"
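Note that a tag applied to an entry in the roles: list is inherited by every task in that role, so --tags "java_tag" runs the whole javajdk role. To preview what a tag selection would execute before running it, you can combine it with --list-tasks, for example:

ansible-playbook -i inventory webservers.yml --tags "java_tag" --list-tasks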
