I am creating my first role in ansible to install packages through apt as it is a task I usually do on a daily basis.
apt
├── apt_install_package.yaml
├── server.yaml
├── tasks
│   └── main.yaml
└── vars
    └── main.yaml
# file: apt_install_package.yaml
- name: apt role
  hosts: server
  gather_facts: no
  become: yes
  roles:
    - apt
# tasks
- name: install package
  apt:
    name: "{{ item }}"
    state: present
    update_cache: True
  with_items: "{{ package }}"
#vars
---
package:
  - nginx
  - supervisor
#inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
To install the packages I must specify an SSH user, but I do not know where to set it.
My idea is to pass these parameters as variables, something like this:
ansible_user: "{{ myuser }}"
ansible_ssh_pass: "{{ mypass }}"
ansible_become_pass: "{{ mypass }}"
Any suggestions?
Regards,
As per the latest Ansible documentation on inventory sources, an SSH password can be configured using the ansible_password keyword, while an SSH user is specified with the ansible_user keyword, for any given host or host group entry.
It is worth mentioning that in order to use SSH password login with Ansible, the sshpass program is required on the controller. Plays will otherwise fail with an error about injecting the password while initializing connections.
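On a Debian/Ubuntu controller, for example, it can be installed with the distribution package manager (the package name may differ on other platforms):

sudo apt install sshpass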
Define these keywords within your existing inventory in the following manner:
#inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
      ansible_user: <username>
      ansible_password: <password>
      ansible_become_password: <password>
The YAML inventory plugin will parse your host configuration parameters from the inventory source, and they will be appended as host/group variables used for establishing an SSH connection.
The major disadvantage in this scenario is that your authentication secrets are stored in plain text, and Ansible code is usually IaC committed to source control repositories. That is unacceptable in production environments.
Consider using a lookup plugin such as env or file to render password secrets dynamically from the controller and prevent leakage.
#inventory
---
server:
  hosts:
    new-server:
      ansible_host: 10.54.x.x
      ansible_user: "{{ lookup('env', 'MY_SECRET_ANSIBLE_USER') }}"
      ansible_password: "{{ lookup('file', '/path/to/password') }}"
      ansible_become_password: "{{ lookup('env', 'MY_SECRET_BECOME_PASSWORD') }}"
Other lookup plugins may also be of use to you depending on your use-case, and can be further researched here.
Alternatively, you could designate a default remote connection user, connection user password, and become password separately within your Ansible configuration file, usually located at /etc/ansible/ansible.cfg.
Read more about available configuration options in Ansible here.
All three mentioned configuration options can be specified under the [defaults] INI section as:
remote_user: default connection username
connection_password_file: default connection password file
become_password_file: default become password file
NOTE: connection_password_file and become_password_file must be a filesystem path to a file containing the password Ansible should use to log in or escalate privileges. Storing a default password as a plain-text string within the configuration file is not supported.
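For illustration, a minimal /etc/ansible/ansible.cfg using these options could look like this (the username and paths are placeholders; the two *_password_file options require a reasonably recent ansible-core):

# /etc/ansible/ansible.cfg
[defaults]
remote_user = deploy
connection_password_file = /path/to/connection_password
become_password_file = /path/to/become_password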
Another option involves environment variables being present at playbook execution time. Either export them beforehand or pass them explicitly on the command line when running the playbook.
ANSIBLE_REMOTE_USER: default connection username
ANSIBLE_CONNECTION_PASSWORD_FILE: default connection password file
ANSIBLE_BECOME_PASSWORD_FILE: default become password file
Such as the following:
ANSIBLE_BECOME_PASSWORD_FILE="/path/to/become_password" ansible-playbook ...
This approach is considerably less efficient unless you update the shell profile of the Ansible run user on the controller so that these variables are set automatically at login. That is generally not recommended for sensitive data such as passwords.
With regard to security, SSH key-pair authentication is preferred whenever possible. The second-best approach, however, is to vault the sensitive passwords using ansible-vault.
Vault allows you to encrypt data at rest, and reference it within inventory/plays without risk of compromise.
Review this link to gain understanding of available procedures and get started.
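For example, to produce an encrypted value that can be pasted straight into the inventory shown earlier (the plaintext here is a placeholder):

ansible-vault encrypt_string 'Str0ngP#ssw0rd' --name 'ansible_password'

The command prints an ansible_password: !vault | block that replaces the plain-text value; at runtime, supply the vault secret with --ask-vault-pass or --vault-password-file.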
I have two playbooks:
install azure vm
install mongo db and tomcat
I want to integrate both, so the first one sends an IP to the second playbook and the second playbook does its job.
AzurePlaybook.yml
# ----- All the other tasks -----
- azure_rm_publicipaddress:
    resource_group: Solutioning
    allocation_method: Static
    name: PublicIP
  register: output_ip_address

- name: Include another playbook
  import_playbook: install_MongoDb_and_Tomcat.yml
Second Playbook
install_MongoDb_and_Tomcat.yml
---
- name: install Mongo and Tomcat
  hosts: demo1
  become: yes
  become_method: sudo                        # Set become method
  remote_user: azureuser                     # Update username for remote server
  vars:
    tomcat_ver: 9.0.30                       # Tomcat version to install
    ui_manager_user: manager                 # User who can access the UI manager section only
    ui_manager_pass: Str0ngManagerP#ssw3rd   # UI manager user password
    ui_admin_username: admin                 # User who can access both manager and admin UI sections
    ui_admin_pass: Str0ngAdminP#ssw3rd       # UI admin password
  roles:
    - install_mongodb
    - mongo_post_install
    - install_tomcat
    - tomcat_post_install
I have used import_playbook, and I want to pass the IP address instead of taking it from the inventory file. Currently the install_MongoDb_and_Tomcat.yml playbook takes it from hosts: demo1, which is declared in the inventory file.
Declare the variable in group_vars/all.yml, which makes it a site-wide global variable. It will be overwritten during execution. Then reference the variable in the new playbook and account for the possibility that it still holds the default (fail/exit-fast logic); see the sketch after the link below.
See Ansible documentation on variable scoping:
https://docs.ansible.com/ansible/2.3/playbooks_variables.html#variable-scopes
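A minimal sketch of that pattern, assuming the variable is named vm_public_ip and that azure_rm_publicipaddress returns the address under state.ip_address (both names are assumptions):

# group_vars/all.yml -- sentinel default, visible site-wide
vm_public_ip: ""

# AzurePlaybook.yml -- after registering output_ip_address
- name: Publish the allocated IP for later plays
  set_fact:
    vm_public_ip: "{{ output_ip_address.state.ip_address }}"

# install_MongoDb_and_Tomcat.yml -- fail fast if the default survived
- name: Abort when no IP was passed in
  assert:
    that: vm_public_ip | length > 0
    fail_msg: "vm_public_ip was never set by the Azure play"

Note that set_fact is host-scoped, so if the plays target different hosts, the second playbook may need to read it via hostvars, e.g. hostvars['localhost'].vm_public_ip.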
The scenario is: I have several services running on several hosts. There is one special service, the reverse proxy/load balancer. Every service needs to configure that special service on the host that runs the rp/lb service. During installation/update/deletion of any service via an Ansible role, I need to call the reverse proxy role on that specific host to configure the corresponding vhost.
At the moment the service calls a specific task file in the reverse proxy role to add or remove a vhost, using include_role with some vars set (a very simple example without service- and inventory-specific vars):
- name: "Configure ReverseProxy"
include_role:
name: reverseproxy
tasks_from: vhost_add
apply:
delegate_to: "{{ groups['reverseproxy'][0] }}"
vars:
reverse_proxy_url: "http://{{ ansible_fqdn }}:{{ service_port }}/"
reverse_proxy_domain: "sub.domain.tld"
I have three problems.
I know it's not a good idea to build such dependencies between roles and different hosts. I don't know a better way, though, especially when you consider the situation where you need to do extra work after creating the vhost (e.g. configure the service via a REST API, which needs the external FQDN). With two separate playbooks for the "backend" service and the "reverseproxy" service, I would need a third playbook to configure "hanging" services. I'm also not sure whether I can retrieve the correct backend URL in the reverse proxy role (just think about the HTTP scheme or paths). That does not sound easy, does it?
Earlier I had separate roles for adding/removing vhosts on a reverse proxy. These roles had no dependencies, but I needed to duplicate several defaults, templates, vars, etc., which isn't nice either. Then I changed that to a single role. Of course, in my opinion, this isn't really what a "role" should be. A role is something like "webserver" or "reverseproxy" (a state), not something like "add_vhost_to_reverseproxy" (a verb). That would be more like a playbook, but is calling a parameterized playbook via a role a good idea, and is it even possible? The main problem is that the state of the reverse proxy is the sum of all services in the inventory.
In the case of that single included role, including it also starts all its dependent roles (configure custom settings, firewall, etc.). In that case I also found out that the delegation did not use the facts of the delegated host.
I tested that with the following example - the inventory:
all:
  hosts:
    server1:
      my_var: a
    server2:
      my_var: b
  children:
    service:
      hosts:
        server1:
    reverseproxy:
      hosts:
        server2:
And a playbook which assigns role-a to the service group. role-a has a task like:
- block:
    - setup:
    - name: "Include role b on delegated {{ groups['reverseproxy'][0] }}"
      include_role:
        name: role-b
      delegate_to: "{{ groups['reverseproxy'][0] }}"
      delegate_facts: true # or false or omit - it has no effect on Ansible 2.9 and 2.10
And in role-b, simply outputting my_var from the inventory will print:
TASK [role-b : My_Var on server1] *******************
ok: [server1 -> <ip-of-server2>] =>
    my_var: a
This tells me that role-b, which should run on server2, has the facts of server1. So configuring the "reverseproxy" service is done in the context of the "backend" service, which would have several other issues when you think about firewall dependencies etc. I can avoid that by using tags, but then I need to run the playbook not just with the tag of the service but with all tags I want to configure, and I can no longer use include_tasks with apply: tags inside a role that also includes other roles (the tags get applied to all subtasks). I miss something like include_role limited to specific tags, or one that ignores dependencies. This isn't a bug, but it has possible side effects in combination with delegate_to.
I'm not really sure what the question is. Let's put it this way: what is a good way to handle dependencies between hosts and roles in Ansible, especially when they are not on the same host?
I am sure I do not fully understand your exact problem, but when I was dealing with load balancers I used a template. This was my disable_workers playbook:
---
- hosts: "{{ ip_list | default( 'jboss' ) }}"
  tasks:
    - name: Tag JBoss service as 'disabled'
      ec2_tag:
        resource: "{{ ec2_id }}"
        region: "{{ region }}"
        state: present
        tags:
          State: 'disabled'
      delegate_to: localhost
    - action: setup

- hosts: httpd
  become: yes
  become_user: root
  vars:
    uriworkermap_file: "{{ httpd_conf_dir }}/uriworkermap.properties"
  tasks:
    - name: Refresh inventory cache
      ec2_remote_facts:
        region: "{{ region }}"
      delegate_to: localhost
    - name: Update uriworkermap.properties
      template:
        backup: yes
        dest: "{{ uriworkermap_file }}"
        mode: 0644
        src: ./J2/uriworkermap.properties.j2
Do not expect this to work as-is. It was Ansible 1.8 on AWS hosts, and things may have changed since.
But the point is to set user-defined facts on each host for that host's desired state (enabled, disabled, stopped), reload the facts, and then render the Jinja template that uses those facts.
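For illustration, the template can filter hosts on those facts. A rough sketch along mod_jk lines (the group name, the ec2_tag_State fact, and the property name are assumptions, not the author's actual file):

{# uriworkermap.properties.j2 - illustrative sketch #}
{# Only hosts whose EC2 'State' tag is still 'enabled' stay in the worker list #}
worker.loadbalancer.balance_workers={% for h in groups['jboss'] if hostvars[h].ec2_tag_State | default('enabled') == 'enabled' %}{{ h }}{% if not loop.last %},{% endif %}{% endfor %}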
I have Ansible Tower running, and I would like to be able to assign multiple credentials (SSH keys) to a job template. When I run the job, it should use the correct credentials for each machine it connects to, thus not causing user lockouts as it cycles through the available credentials.
I have a job template and have added multiple credentials using custom credentials types, but how can I tell it to use the correct key for each host?
Thanks
I'm using community Ansible, so forgive me if Ansible Tower behaves differently.
inventory.yml
all:
  children:
    web_servers:
      vars:
        ansible_user: www_user
        ansible_ssh_private_key_file: ~/.ssh/www_rsa
      hosts:
        node1-www:
          ansible_host: 192.168.88.20
        node2-www:
          ansible_host: 192.168.88.21
    database_servers:
      vars:
        ansible_user: db_user
        ansible_ssh_private_key_file: ~/.ssh/db_rsa
      hosts:
        node1-db:
          ansible_host: 192.168.88.20
        node2-db:
          ansible_host: 192.168.88.21
What you can see here is that:
Both servers are running both the webapp and the DB.
The webapp and the DB run under different users.
Group host names differ based on the user (this is important).
You have to create a Credential Type per host group and then create Credentials of those credential types. You should transform the injector configurations into host variables; see the sketch after the link below.
Please see the following post:
https://faun.pub/lets-do-devops-ansible-awx-tower-authentication-power-user-2d4c3c68ca9e
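As a rough sketch, the input and injector configuration of such a custom credential type could look like this (all field and variable names here are made up for illustration):

# Input configuration: the fields the credential stores
fields:
  - id: web_ssh_user
    type: string
    label: Web SSH user
  - id: web_ssh_key
    type: string
    label: Web SSH private key
    secret: true
    multiline: true

# Injector configuration: write the key to a temp file and expose the values
file:
  template: "{{ web_ssh_key }}"
extra_vars:
  web_ssh_user: "{{ web_ssh_user }}"
  web_ssh_key_file: "{{ tower.filename }}"

Group vars in the inventory can then map these onto the connection, e.g. ansible_user: "{{ web_ssh_user }}" and ansible_ssh_private_key_file: "{{ web_ssh_key_file }}", so each host group picks up its own credential.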
In Ansible I have a need to execute a set of tasks and obtain the passwords from a third party (this part was handled) and then use those SSH credentials to connect.
The problem is that the best way to loop through my inventory seems to be to include a list of tasks; that's great. The major problem is that I can only get this to work if I set hosts in my main.yml playbook to localhost (or set it to the name of the server group and specify connection: local). This makes the command module execute locally, which defeats the purpose.
I have tried looking into the SSH module, but it does not seem to register anything, giving me a no_action detected. I am aware I am likely overlooking something glaring.
I will post code closer to the real thing later, but what I have now is:
main.yml
---
- hosts: localhost
  tasks:
    - name: subplay
      include: secondary.yml
      vars:
        user: myUser
        address: "{{ hostvars[item].address }}"
      with_items: "{{ groups['mygroup'] }}"
secondary.yml
---
- name: fetch password
  [...fetchMyPassword, it works]
  register: password

- name:
  [...Need to connect with the fetched user for this task and this task only...]
  command: /my/local/usr/task.sh

I want to connect and execute the script there, but no matter what I try, it either fails to execute at all or executes locally.
Additionally, I might note I checked out https://docs.ansible.com/ansible/latest/plugins/connection/paramiko_ssh.html and
https://docs.ansible.com/ansible/latest/plugins/connection/ssh.html but must be doing something wrong
It looks to me like only your fetch task needs to be delegated to localhost; the rest runs on my_group. Once you have all your connection info, set up the connection with set_fact by assigning ansible_user, ansible_ssh_pass, and ansible_host. Try this:
main.yml
---
- hosts: mygroup # inventory_hostname will loop through all your hosts in my_group
  tasks:
    - name: subplay
      include: secondary.yml
      vars:
        user: myUser
        address: "{{ hostvars[inventory_hostname].address }}"
secondary.yml
---
- name: fetch password
  [...fetchMyPassword, it works]
  delegate_to: localhost # this task only runs on localhost
  register: password

- set_fact: # use the registered password and vars to set up the connection
    ansible_user: "{{ user }}"
    ansible_ssh_pass: "{{ password }}"
    ansible_host: "{{ address }}"

- name: Launch task # this task runs on each host of my_group
  [...Need to connect with the fetched user for this task and this task only...]
  command: /my/local/usr/task.sh
launch this with
ansible-playbook main.yml
Try writing a role from your secondary.yml and a playbook from your main.yml, as sketched below.
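A rough layout for that (the role and file names are made up):

site.yml                  # your main.yml, targeting mygroup
roles/
  fetch_and_run/          # hypothetical role name
    tasks/
      main.yml            # contents of secondary.yml

This keeps the fetch-and-connect logic reusable across playbooks.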
I need to connect to Windows EC2 instances on AWS using a common username and password. Here is my Ansible code:
- win_ping:
  with_items: "{{ Ips }}"
  delegate_to: "{{ item }}"
  ansible_ssh_port: 5986
  ansible_connection: winrm
  ansible_winrm_server_cert_validation: ignore
It shows an error like "ERROR! 'ansible_connection' is not a valid attribute for a Task." When I pass this via a variable file, the playbook itself runs on port 5986. So how can I connect to a Windows host with my user and password from Linux using Ansible?
Thanks
Put them in your inventory, in group_vars or host_vars, as suggested here: https://docs.ansible.com/ansible/intro_windows.html. You can't specify ansible_connection as a task attribute.
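For example, in a group_vars file for your Windows hosts (the group name and values are placeholders):

# group_vars/windows.yml
ansible_user: Administrator
ansible_password: MyC0mmonP@ssw0rd
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_server_cert_validation: ignore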
I would also suggest encrypting the file (also recommended in the link above) so you don't have usernames and passwords lying around in plain text.