Ansible Tower/AWX multi credentials

I have Ansible Tower running and I would like to assign multiple credentials (SSH keys) to a job template, so that when I run the job it uses the correct credentials for each machine it connects to, rather than causing user lockouts as it cycles through the available credentials.
I have a job template with multiple credentials added via custom credential types, but how can I tell it to use the correct key for each host?
Thanks

I'm using community Ansible, so forgive me if Ansible Tower behaves differently.
inventory.yml

all:
  children:
    web_servers:
      vars:
        ansible_user: www_user
        ansible_ssh_private_key_file: ~/.ssh/www_rsa
      hosts:
        node1-www:
          ansible_host: 192.168.88.20
        node2-www:
          ansible_host: 192.168.88.21
    database_servers:
      vars:
        ansible_user: db_user
        ansible_ssh_private_key_file: ~/.ssh/db_rsa
      hosts:
        node1-db:
          ansible_host: 192.168.88.20
        node2-db:
          ansible_host: 192.168.88.21
What you can see here is that:
Both servers run both the webapp and the DB.
The webapp and the DB run under different users.
Group host names differ based on the user (this is important).

You have to create a Credential Type per set of hosts and then create a Credential from each of your credential types. You should transform the injector configurations into host variables.
Please see the following post:
https://faun.pub/lets-do-devops-ansible-awx-tower-authentication-power-user-2d4c3c68ca9e
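
As a rough sketch of that approach (the field ids and variable names below are illustrative, not from the original post), a custom credential type in AWX consists of an input configuration and an injector configuration:

# Custom credential type - input configuration
fields:
  - id: web_ssh_user
    type: string
    label: Web SSH Username
  - id: web_ssh_key
    type: string
    label: Web SSH Private Key
    secret: true
    multiline: true
required:
  - web_ssh_user
  - web_ssh_key

# Injector configuration: write the key to a temp file and expose both values as extra vars
file:
  template: "{{ web_ssh_key }}"
extra_vars:
  web_ssh_user: "{{ web_ssh_user }}"
  web_ssh_key_file: "{{ tower.filename }}"

Host or group variables can then map ansible_user to web_ssh_user and ansible_ssh_private_key_file to web_ssh_key_file, so each group authenticates with its own key.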

Related

Ansible where set ansible_user and ansible_ssh_pass in role

I am creating my first role in Ansible to install packages through apt, as it is a task I do on a daily basis.
apt
├── apt_install_package.yaml
├── server.yaml
├── tasks
│   └── main.yaml
└── vars
    └── main.yaml
# file: apt_install_package.yaml
- name: apt role
  hosts: server
  gather_facts: no
  become: yes
  roles:
    - apt

# tasks/main.yaml
- name: install package
  apt:
    name: "{{ package }}"  # apt accepts a list directly; no loop needed
    state: present
    update_cache: true
# vars/main.yaml
---
package:
  - nginx
  - supervisor
# inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
In order to install the packages I must specify an SSH user, but I do not know where to indicate it.
My idea is to pass the parameters as variables, something similar to this:
ansible_user: "{{ myuser }}"
ansible_ssh_pass: "{{ mypass }}"
ansible_become_pass: "{{ mypass }}"
Any suggestions?
Regards,
As per the latest Ansible documentation on inventory sources, an SSH password can be configured with the ansible_password keyword and an SSH user with the ansible_user keyword, for any given host or host group entry.
It is worth mentioning that in order to use SSH password login with Ansible, the sshpass program is required on the controller. Otherwise, plays will fail with an error when injecting the password while initializing the connection.
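For example, on a Debian-based controller it can usually be installed from the distribution repositories:

sudo apt-get install sshpass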
Define these keywords within your existing inventory in the following manner:
# inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
      ansible_user: <username>
      ansible_password: <password>
      ansible_become_password: <password>
The YAML inventory plugin will parse these configuration parameters from the inventory source and append them as host/group variables used for establishing the SSH connection.
The major disadvantage of this scenario is that your authentication secrets are stored in plain text, and Ansible content is usually committed to source control as IaC. That is unacceptable in production environments.
Consider using a lookup plugin such as env or file to render password secrets dynamically from the controller and prevent leakage.
# inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
      ansible_user: "{{ lookup('env', 'MY_SECRET_ANSIBLE_USER') }}"
      ansible_password: "{{ lookup('file', '/path/to/password') }}"
      ansible_become_password: "{{ lookup('env', 'MY_SECRET_BECOME_PASSWORD') }}"
Other lookup plugins may also be of use depending on your use case; see the Ansible lookup plugin documentation for the full list.
Alternatively, you could designate a default remote connection user, connection password file, and become password file within your Ansible configuration file, usually located at /etc/ansible/ansible.cfg.
Read more about the available options in the Ansible configuration reference.
All three mentioned configuration options can be specified under the [defaults] INI section as:
remote_user: default connection username
connection_password_file: default connection password file
become_password_file: default become password file
NOTE: connection_password_file and become_password_file must be a filesystem path containing the password with which Ansible should use to login or escalate privileges. Storing a default password as plain-text string within the configuration file is not supported.
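A minimal sketch of such a configuration, assuming the password files already exist on the controller (the username and paths below are illustrative):

# /etc/ansible/ansible.cfg
[defaults]
remote_user = deploy
connection_password_file = /etc/ansible/.connection_password
become_password_file = /etc/ansible/.become_password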
Another option involves environment variables being present at the time of playbook execution. Either export them or pass them explicitly via the command line.
ANSIBLE_REMOTE_USER: default connection username
ANSIBLE_CONNECTION_PASSWORD_FILE: default connection password file
ANSIBLE_BECOME_PASSWORD_FILE: default become password file
Such as the following:
ANSIBLE_BECOME_PASSWORD_FILE="/path/to/become_password" ansible-playbook ...
This approach is considerably less convenient unless you update the shell profile of the Ansible run user on the controller to set these variables automatically upon login. It is generally not recommended for sensitive data such as passwords.
In regards to security, SSH key-pair authentication is preferred whenever possible. The second-best approach is to vault the sensitive passwords using ansible-vault.
Vault allows you to encrypt data at rest and reference it within inventories/plays without risk of compromise.
Review the Ansible Vault documentation to learn the available procedures and get started.
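For example, a single value can be encrypted for inline use in the inventory; the password and the vault output below are placeholders:

# On the controller:
#   ansible-vault encrypt_string 'MyS3cretPass' --name 'ansible_password'
# Paste the resulting block into the host's variables:
ansible_user: deploy
ansible_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  62313365343266...

The playbook is then run with --ask-vault-pass (or a vault password file) so the value can be decrypted.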

Passing IP address to another ansible playbook

I have two playbooks:
install azure vm
install mongo db and tomcat
I want to integrate both, so the first one sends the IP to the second playbook and the second playbook does its job.
AzurePlaybook.yml

# ---- all the other tasks ----
- azure_rm_publicipaddress:
    resource_group: Solutioning
    allocation_method: Static
    name: PublicIP
  register: output_ip_address

- name: Include another playbook
  import_playbook: install_MongoDb_and_Tomcat.yml
Second playbook, install_MongoDb_and_Tomcat.yml:

---
- name: install Mongo and Tomcat
  hosts: demo1
  become: yes
  become_method: sudo                       # Set become method
  remote_user: azureuser                    # Update username for remote server
  vars:
    tomcat_ver: 9.0.30                      # Tomcat version to install
    ui_manager_user: manager                # User who can access the UI manager section only
    ui_manager_pass: Str0ngManagerP#ssw3rd  # UI manager user password
    ui_admin_username: admin                # User who can access both manager and admin UI sections
    ui_admin_pass: Str0ngAdminP#ssw3rd      # UI admin password
  roles:
    - install_mongodb
    - mongo_post_install
    - install_tomcat
    - tomcat_post_install
I have used import_playbook and I want to pass the IP address instead of taking it from the inventory file; currently install_MongoDb_and_Tomcat.yml takes its hosts from demo1, which is declared in the inventory file.
Declare the variable in group_vars/all.yml, which makes it a site-wide global variable. It will be overwritten during execution. Then reference the variable in the new playbook and account for the possibility that it still holds the default (fail/exit-fast logic), as sketched below.
See Ansible documentation on variable scoping:
https://docs.ansible.com/ansible/2.3/playbooks_variables.html#variable-scopes
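
A minimal sketch of that pattern, assuming the Azure provisioning play runs on localhost and that the registered result exposes the address as state.ip_address (an assumption; check your module version's return values):

# group_vars/all.yml - site-wide default, expected to be overwritten at run time
vm_public_ip: ""

# AzurePlaybook.yml - after registering output_ip_address, publish the address as a fact
- name: Remember the allocated public IP
  set_fact:
    vm_public_ip: "{{ output_ip_address.state.ip_address }}"

# install_MongoDb_and_Tomcat.yml - read the fact back and fail fast on the default
- name: install Mongo and Tomcat
  hosts: demo1
  vars:
    vm_public_ip: "{{ hostvars['localhost']['vm_public_ip'] | default('') }}"
  pre_tasks:
    - name: Abort when no IP address was provided
      fail:
        msg: "vm_public_ip is empty; run via AzurePlaybook.yml or pass it with -e"
      when: vm_public_ip | length == 0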

Ansible AWX - Deploy multiple hosts with different credentials with one playbook in one AWX template

The environment consists of three hosts: A, B and C.
Host A runs Ansible with AWX within a Docker container.
My task is to fetch backup files from host B and transfer them to server C.
In fact, these are two tasks.
Hosts B and C have different credentials.
I know how to combine both tasks in one playbook, but how can I add the second credential for server C to carry out the copy task in an AWX template?
Playbook example:

- hosts: B
  tasks:
    - name: Fetch backup data from remote host B to my_path
      fetch:
        src: /origin_path/backup_file.tar.gz
        dest: /my_path/backup_file.tar.gz
        flat: yes  # place the file exactly at dest, not under a hostname subdirectory

- hosts: C
  tasks:
    - name: Copy backup data from my_path to remote host C
      copy:
        src: /my_path/backup_file.tar.gz
        dest: /remote_path/backup_file.tar.gz
For this type of scenario I have created separate playbooks for each host, created a template for each playbook and then used a workflow template to run the two playbooks in sequence.
I'd like a better solution, which would be to allow multiple credentials as you describe.
You can look into using inventory variables to set the credentials for your hosts (via host vars or group vars), as sketched after these steps:
Encrypt your passwords using Ansible Vault (https://docs.ansible.com/ansible/latest/user_guide/vault.html#choosing-between-a-single-password-and-multiple-passwords).
Set the ansible_user and ansible_password host variables in your inventory (note that ansible_password is the value encrypted using ansible-vault).
Create a Vault credential in AWX which will store the key used to encrypt your passwords.
Add the Vault credential to your playbook template in AWX. This ensures the passwords are decrypted when the playbook is run.
This way you can use inventory variables to handle multiple credentials for your servers.
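A sketch of such host variables, with illustrative user names and truncated vault blobs produced by ansible-vault encrypt_string:

# host_vars/B.yml
ansible_user: backup_user
ansible_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  3964663031...

# host_vars/C.yml
ansible_user: deploy_user
ansible_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  6134656338...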

How to set common ansible_user to all hosts in a group?

Does anyone know if Ansible lets users share the same ansible_user setting across all hosts in a group? This would be particularly useful with cloud computing platforms such as OpenStack, which let users launch multiple instances that share the same configuration, such as user accounts and SSH keys.
There are several behavioral parameters you can configure to modify the way Ansible connects to your host. Among them is the ansible_user variable. You can set it per host or per group. You can also define a general ansible_user variable under the all hosts group, which you override at the group or host level.
If you were writing your inventory in just one hosts.yml file you'd do it like this:
all:
  children:
    ubuntu_linux:
      hosts:
        ubuntu_linux_1:
        ubuntu_linux_2:
    aws_linux:
      hosts:
        aws_linux_host_1:
        aws_linux_host_2:
        aws_linux_host_3:
      vars:
        ansible_user: ec2-user
  vars:
    ansible_user: ubuntu
And, if you are using a directory to build your inventory, you can set it inside the corresponding group_vars file (e.g. ./inventory/group_vars/aws_linux.yml).
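For instance, a minimal group_vars file for the AWS group from the example above:

# ./inventory/group_vars/aws_linux.yml
ansible_user: ec2-user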
Check the "Connecting to hosts: behavioral inventory parameters" section of Ansible docs to see what other parameters you can configure.
I hope it helps

Can I force current hosts group to be identified as another in a playbook include?

The current case is this:
I have a playbook which provisions a bunch of servers and installs apps to these servers.
One of these apps already has its own Ansible playbook which I wanted to use. Now my problem arises from this playbook, as it is limited to hosts: [prod], and the host groups I have in the upper-level playbook are different.
I know I could just use add_host to add the needed hosts to a prod group, but I don't like that solution.
So my question is: Is there a way to add the current hosts to a new host group in the include statement?
Something like - include: foo.yml prod={{ ansible_host_group }}
Or can I somehow include only the tasks from a playbook?
No, there's no direct way to do this.
Now my problem arises from this playbook, as it's limited to
hosts: [prod]
You can set up hosts more flexibly via extra vars:
- name: add role fail2ban
  hosts: '{{ target }}'
  remote_user: root
  roles:
    - fail2ban
Run it:
ansible-playbook testplaybook.yml --extra-vars "target=10.0.190.123"
ansible-playbook testplaybook.yml --extra-vars "target=webservers"
Is this workaround suitable for you?
