Passing IP address to another Ansible playbook

I have two playbooks:
1. install Azure VM
2. install MongoDB and Tomcat
I want to integrate both, so the first one sends the IP to the second playbook and the second playbook does its job.
AzurePlaybook.yml

    ----- All the other tasks -----
    - azure_rm_publicipaddress:
        resource_group: Solutioning
        allocation_method: Static
        name: PublicIP
      register: output_ip_address

- name: Include another playbook
  import_playbook: install_MongoDb_and_Tomcat.yml
Second Playbook
install_MongoDb_and_Tomcat.yml

---
- name: install Mongo and Tomcat
  hosts: demo1
  become: yes
  become_method: sudo                       # Set become method
  remote_user: azureuser                    # Update username for remote server
  vars:
    tomcat_ver: 9.0.30                      # Tomcat version to install
    ui_manager_user: manager                # User who can access the UI manager section only
    ui_manager_pass: Str0ngManagerP#ssw3rd  # UI manager user password
    ui_admin_username: admin                # User who can access both manager and admin UI sections
    ui_admin_pass: Str0ngAdminP#ssw3rd      # UI admin password
  roles:
    - install_mongodb
    - mongo_post_install
    - install_tomcat
    - tomcat_post_install
I have used import_playbook and I want to pass the IP address instead of taking it from the inventory file. Currently install_MongoDb_and_Tomcat.yml gets its target from hosts: demo1, which is declared in the inventory file.

Declare the variable in group_vars/all.yml, which makes it a site-wide global variable; it will be overwritten during execution. Then reference the variable in the second playbook and account for the possibility that it is still the default (fail/exit-fast logic). A minimal sketch follows the documentation link below.
See Ansible documentation on variable scoping:
https://docs.ansible.com/ansible/2.3/playbooks_variables.html#variable-scopes
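A minimal sketch of that pattern, assuming the registered result exposes the address as output_ip_address.state.ip_address and using an illustrative variable name vm_public_ip (adjust both to your actual module output and naming):

# group_vars/all.yml -- site-wide default, deliberately unusable so a missing override is obvious
vm_public_ip: "UNSET"

# AzurePlaybook.yml (fragment)
- name: Create Azure resources
  hosts: localhost
  tasks:
    - azure_rm_publicipaddress:
        resource_group: Solutioning
        allocation_method: Static
        name: PublicIP
      register: output_ip_address

    - name: Register the new VM under the group the second playbook targets
      add_host:
        name: "{{ output_ip_address.state.ip_address }}"
        groups: demo1
        vm_public_ip: "{{ output_ip_address.state.ip_address }}"

- name: Include another playbook
  import_playbook: install_MongoDb_and_Tomcat.yml

# install_MongoDb_and_Tomcat.yml -- fail fast if the variable is still the group_vars default
- name: install Mongo and Tomcat
  hosts: demo1
  pre_tasks:
    - name: Abort if no IP address was passed in
      fail:
        msg: "vm_public_ip is still the group_vars/all.yml default; run AzurePlaybook.yml first"
      when: vm_public_ip == "UNSET"
  roles:
    - install_mongodb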

Related

Ansible: where to set ansible_user and ansible_ssh_pass in a role

I am creating my first role in Ansible to install packages through apt, as it is a task I do on a daily basis.
apt
├── apt_install_package.yaml
├── server.yaml
├── tasks
│   └── main.yaml
└── vars
    └── main.yaml
# file: apt_install_package.yaml
- name: apt role
  hosts: server
  gather_facts: no
  become: yes
  roles:
    - apt

# tasks
- name: install package
  apt:
    name: "{{ package }}"
    state: present
    update_cache: True
  with_items: "{{ package }}"
# vars
---
package:
  - nginx
  - supervisor

# inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
In order to install the packages I must specify an SSH user, but I do not know where to set it.
My idea is to pass these parameters as variables, something similar to this:
ansible_user: "{{ myuser }}"
ansible_ssh_pass: "{{ mypass }}"
ansible_become_pass: "{{ mypass }}"
Any suggestions?
Regards,
As per the latest Ansible documentation on inventory sources, an SSH password can be configured with the ansible_password keyword, while the SSH user is specified with the ansible_user keyword, for any given host or host group entry.
It is worth mentioning that in order to use SSH password login with Ansible, the sshpass program is required on the controller; otherwise plays will fail with an error injecting the password while initializing connections.
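For example, on a Debian/Ubuntu controller (other distributions use their own package manager) that would be:

sudo apt-get install sshpass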
Define these keywords within your existing inventory in the following manner:
# inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
      ansible_user: <username>
      ansible_password: <password>
      ansible_become_password: <password>
The YAML inventory plugin will parse these host configuration parameters from the inventory source and append them as host/group variables used for establishing the SSH connection.
The major disadvantage of this approach is that your authentication secrets are stored in plain text, and Ansible code is usually treated as IaC and committed to source control repositories, which is unacceptable in production environments.
Consider using a lookup plugin such as env or file to render password secrets dynamically from the controller and prevent leakage.
# inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
      ansible_user: "{{ lookup('env', 'MY_SECRET_ANSIBLE_USER') }}"
      ansible_password: "{{ lookup('file', '/path/to/password') }}"
      ansible_become_password: "{{ lookup('env', 'MY_SECRET_BECOME_PASSWORD') }}"
Other lookup plugins may also be of use depending on your use case; see the lookup plugin documentation for further details.
Alternatively, you could designate a default remote connection user, connection user password, and become password separately within your Ansible configuration file, usually located at /etc/ansible/ansible.cfg.
Read more about the available options in the Ansible configuration documentation.
All three mentioned configuration options can be specified under the [defaults] INI section as:
remote_user: default connection username
connection_password_file: default connection password file
become_password_file: default become password file
NOTE: connection_password_file and become_password_file must be a filesystem path to a file containing the password Ansible should use to log in or escalate privileges. Storing a default password as a plain-text string within the configuration file is not supported. An illustrative example follows.
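For illustration only, a [defaults] section using these options might look like the sketch below (the username and file paths are placeholders; connection_password_file and become_password_file require a reasonably recent ansible-core):

# /etc/ansible/ansible.cfg (or a project-local ansible.cfg)
[defaults]
remote_user = deploy
connection_password_file = /etc/ansible/secrets/connection_password
become_password_file = /etc/ansible/secrets/become_password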
Another option involves environment variables being present at playbook execution time. Either export them or pass them explicitly on the command line when running the playbook:
ANSIBLE_REMOTE_USER: default connection username
ANSIBLE_CONNECTION_PASSWORD_FILE: default connection password file
ANSIBLE_BECOME_PASSWORD_FILE: default become password file
Such as the following:
ANSIBLE_BECOME_PASSWORD_FILE="/path/to/become_password" ansible-playbook ...
This approach is considerably less convenient unless you update the shell profile of the Ansible run user on the controller so these variables are set automatically at login, which is generally not recommended for sensitive data such as passwords.
Regarding security, SSH key-pair authentication is preferred whenever possible. The second-best approach is to vault the sensitive passwords using ansible-vault.
Vault allows you to encrypt data at rest and reference it within inventory/plays without risk of compromise.
Review the Ansible Vault documentation to learn the available procedures and get started; a rough sketch of the workflow follows.
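As a sketch of the Vault workflow (the example password, file names, and the truncated vault payload are only placeholders):

# Encrypt a single value and paste the output into the inventory or a group_vars file
ansible-vault encrypt_string 'Sup3rS3cret' --name 'ansible_password'

# group_vars/server.yml -- pasted output (vault payload truncated here)
ansible_user: deploy
ansible_password: !vault |
          $ANSIBLE_VAULT;1.1;AES256
          ...

# Provide the vault password when running the play
ansible-playbook apt_install_package.yaml -i inventory --ask-vault-pass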

Ansible. Reconnecting the playbook connection

The server is being created. Initially there is a root user with a password, and SSH on port 22 (the default).
There is a playbook written for, say, a React application.
When you run the playbook, everything is deployed, but before deploying you need to do minimal server setup: create a new sudo user, change the SSH port, and copy the SSH key to the server. I think this is probably needed for any server.
After this setup, a YAML file appears in the host_vars directory with the variables for this server (ansible_user, ansible_sudo_pass, etc.).
For example, there are 2 roles: initial-server, deploy-react-app.
And the playbook itself (main.yml) for a specific application:
- name: Deploy
  hosts: prod
  roles:
    - role: initial-server
    - role: deploy-react-app
How can I make it so that when you run ansible-playbook main.yml, the initial-server role is executed as the root user with its password, and the deploy-react-app role as the newly created user, connecting with the SSH key rather than the root password? Or is this, in principle, not the correct approach?
Note: using dashes (-) in role names is deprecated; I fixed that in the example below.
Basically:
- name: initialize server
  hosts: prod
  remote_user: root
  roles:
    - role: initial_server

- name: deploy application
  hosts: prod
  # This prevents gathering facts twice but is not mandatory
  gather_facts: false
  remote_user: reactappuser
  roles:
    - role: deploy_react_app
You could also set ansible_user in each role's vars within a single play:
- name: init and deploy
  hosts: prod
  roles:
    - role: initial_server
      vars:
        ansible_user: root
    - role: deploy_react_app
      vars:
        ansible_user: reactappuser
There are other possibilities, for example using include_role tasks as sketched below. This really depends on your precise requirements.
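A sketch of the include_role variant (role and user names taken from the question):

- name: init and deploy
  hosts: prod
  gather_facts: false
  tasks:
    - name: initial setup as root
      include_role:
        name: initial_server
      vars:
        ansible_user: root

    - name: deploy as the application user
      include_role:
        name: deploy_react_app
      vars:
        ansible_user: reactappuser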

Ansible Tower/AWX multi credentials

I have Ansible Tower running, and I would like to assign multiple credentials (SSH keys) to a job template. When I run the job, I would like it to use the correct credentials for the machine it connects to, so it does not cause user lockouts as it cycles through the available credentials.
I have a job template and have added multiple credentials using custom credentials types, but how can I tell it to use the correct key for each host?
Thanks
I'm using community Ansible, so forgive me if Ansible Tower behaves differently.
inventory.yml
all:
  children:
    web_servers:
      vars:
        ansible_user: www_user
        ansible_ssh_private_key_file: ~/.ssh/www_rsa
      hosts:
        node1-www:
          ansible_host: 192.168.88.20
        node2-www:
          ansible_host: 192.168.88.21
    database_servers:
      vars:
        ansible_user: db_user
        ansible_ssh_private_key_file: ~/.ssh/db_rsa
      hosts:
        node1-db:
          ansible_host: 192.168.88.20
        node2-db:
          ansible_host: 192.168.88.21
What you can see here is that:
Both servers are running both the webapp and the DB
The webapp and the DB run as different users
Group host names differ based on the user (this is important)
You have to create a Credential Type per host group and then create a Credential for each of your credential types. You should transform the injector configuration into host variables; a rough sketch follows the link below.
Please see the following post:
https://faun.pub/lets-do-devops-ansible-awx-tower-authentication-power-user-2d4c3c68ca9e
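As a rough illustration of that idea (the field id, variable names, and injector layout below are made up for this example; check the credential type documentation and the post above for the exact syntax):

# Custom credential type -- input configuration
fields:
  - id: www_ssh_key
    label: WWW SSH private key
    type: string
    secret: true
    multiline: true

# Custom credential type -- injector configuration
# Writes the key material to a temporary file and exposes its path as an extra var
file:
  template: "{{ www_ssh_key }}"
extra_vars:
  www_key_file: "{{ tower.filename }}"

# Group variables inside the Tower/AWX inventory
ansible_ssh_private_key_file: "{{ www_key_file }}"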

ansible run role against host name that is returned from a local script

I have a local script that returns the name of the host I need to do an install on. How can I take the result of this script and use it to set the host? Here is an example of the kind of code I thought I could write to achieve this:
---
- hosts: localhost
  roles:
    - getHostScript       # running this sets a variable called hostname

- hosts: "{{ hostvars.localhost.hostname }}"
  roles:
    - install             # this runs the install script
What is the correct way to do this?
Use the add_host module to add the new host to your inventory, and then target that host in another play:
---
- hosts: localhost
  roles:
    - getHostScript       # running this sets a variable called hostname
  tasks:
    - name: add new host to inventory
      add_host:
        name: "{{ hostname }}"
        groups: target

- hosts: target
  roles:
    - install             # this runs the install script
Here I'm adding the new host to a group named target so that I can just refer to the group name, rather than needing to know the actual host name in the following play.

Ansible group variable evaluation with local actions

I have an Ansible playbook that includes a role for creating some Azure cloud resources. Group variables are used to set parameters for the creation of those resources. An inventory file contains multiple groups which reference that play as a descendant node.
The problem is that since the target is localhost for running the cloud actions, all the group variables are picked up at once. Here is the inventory:
[cloud:children]
cloud_instance_a
cloud_instance_b
[cloud_instance_a:children]
azure_infrastructure
[cloud_instance_b:children]
azure_infrastructure
[azure_infrastructure]
127.0.0.1 ansible_connection=local ansible_python_interpreter=python
The playbook contains an azure_infrastructure play that references the actual role to be run.
What happens is that this role is run twice against localhost, but each time the group variables from cloud_instance_a and cloud_instance_b have both been loaded. I want it to run twice, but with cloud_instance_a variables loaded the first time, and cloud_instance_b variables loaded the second.
Is there any way to do this? In essence, I'm looking for a pseudo-host for localhost that makes it think these are different targets. The only way I've been able to work around this is to create two different inventories.
It's a bit hard to guess what your playbook looks like, anyway...
Keep in mind that inventory host/group variables are host-bound, so any host always has only one set of inventory variables (variables defined in different groups overwrite each other).
If you want to execute some tasks or plays on your control machine, you can use connection: local for plays or local_action: for tasks.
For example, for this hosts file:
[group1]
server1
[group2]
server2
[group1:vars]
testvar=aaa
[group2:vars]
testvar=zzz
You can do this:
- hosts: group1:group2
  connection: local
  tasks:
    - name: provision
      azure: ...

- hosts: group1:group2
  tasks:
    - name: install things
      apk: ...
Or this:
- hosts: group1:group2
  gather_facts: no
  tasks:
    - name: provision
      local_action:
        module: azure
        # ... azure module arguments ...

    - name: gather facts
      setup:

    - name: install things
      apk:
In these examples testvar=aaa for server1 and testvar=zzz for server2.
Still, the azure action is executed from the control host.
In the second example you should turn off fact gathering and call setup manually to prevent Ansible from connecting to possibly unprovisioned servers.
