Passing variables to ansible roles

I have my directory structure like this:
└── digitalocean
    ├── README.md
    ├── play.yml
    └── roles
        ├── bootstrap_server
        │   └── tasks
        │       └── main.yml
        ├── create_new_user
        │   └── tasks
        │       └── main.yml
        ├── update
        │   └── tasks
        │       └── main.yml
        └── vimserver
            ├── files
            │   └── vimrc_server
            └── tasks
                └── main.yml
When creating a user under the role create_new_user, I was hard-coding the user name like this:
---
- name: Creating a user named username on the specified web server.
  user:
    name: username
    state: present
    shell: /bin/bash
    groups: admin
    generate_ssh_key: yes
    ssh_key_bits: 2048
    ssh_key_file: .ssh/id_rsa

- name: Copy .ssh/id_rsa.pub from the host box to the remote box for user username
  become: true
  copy:
    src: ~/.ssh/id_rsa.pub
    dest: /home/username/.ssh/authorized_keys
    mode: 0600
    owner: username
    group: username
One way of solving this may be to create a vars/main.yml and put the username there. But I wanted something through which I can specify the username at the play.yml level, as I am also using the username in the role vimserver.
I am calling the roles using play.yml
---
- hosts: testdroplets
  roles:
    - update
    - bootstrap_server
    - create_new_user
    - vimserver
Would a template work in this case? I couldn't find much in the SO questions I looked at.

I got it working by doing this in play.yml:
---
- hosts: testdroplets
  roles:
    - update
    - bootstrap_server
    - role: create_new_user
      username: username
    - role: vimserver
      username: username
Although I would love to see a different approach than this.
Docs: http://docs.ansible.com/ansible/playbooks_roles.html#roles
EDIT
I finally settled on a directory structure like this:
$ tree
.
├── README.md
├── ansible.cfg
├── play.yml
└── roles
    ├── bootstrap_server
    │   └── tasks
    │       └── main.yml
    ├── create_new_user
    │   ├── defaults
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── update
    │   └── tasks
    │       └── main.yml
    └── vimserver
        ├── defaults
        │   └── main.yml
        ├── files
        │   └── vimrc_server
        └── tasks
            └── main.yml
where I create a defaults/main.yml file inside each role that needs {{ username }}.
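For example, a minimal sketch of such a defaults file (the value is just a placeholder that play.yml can override):

# roles/create_new_user/defaults/main.yml
username: username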
If someone is interested in the code,
https://github.com/tasdikrahman/ansible-bootstrap-server

You should be able to put username in a vars entry in play.yml.
Variables can also be split out into separate files.
Here is an example which shows both options:
- hosts: all
  vars:
    favcolor: blue
  vars_files:
    - /vars/external_vars.yml
  tasks:
    - name: this is just a placeholder
      command: /bin/echo foo
https://docs.ansible.com/ansible/playbooks_variables.html#variable-file-separation
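For reference, such an external vars file is just an ordinary YAML mapping; a minimal sketch, reusing the username variable from the question:

# /vars/external_vars.yml
username: foobar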
Ansible seems to delight in having different ways to do the same thing, without a comprehensive reference or a rationale discussing the full implications of each approach :). If you didn't remember the above was possible (I'd completely forgotten vars_files), the easiest option to find from the documentation might have been a third way, which is also the most sophisticated one.
There's a prominent recommendation for ansible-examples. You can see a group_vars directory, with files which automatically provide values for hosts according to their groups, including the magic all group. The group_vars directory can be placed in the same directory as the playbook.
https://github.com/ansible/ansible-examples/tree/master/lamp_simple
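A minimal sketch of that layout, assuming a group_vars/all file placed next to play.yml and the username variable from the question:

# group_vars/all
username: foobar

Any role run against those hosts can then reference {{ username }} without further wiring.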

Maybe this is what you want?
---
- hosts: testdroplets
  roles:
    - update
    - bootstrap_server
    - { role: create_new_user, username: 'foobar' }
    - vimserver
https://docs.ansible.com/ansible/2.5/user_guide/playbooks_reuse_roles.html#using-roles

If you use include_role, variables can be passed like below.
- hosts: all_hosts
  tasks:
    - include_role:
        name: "path/to/role"
      vars:
        var1: "var1_value"
        var2: "var2_value"

Can't you just pass the variable from the command line with the -e parameter? That way you can specify the variable even before execution. This also results in the strongest variable declaration, which always takes precedence (see Variable precedence).
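For example (a sketch; the inventory file name and the value are placeholders):

ansible-playbook -i hosts play.yml -e "username=foobar"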
If you want to place it inside your playbook, I suggest defining the username with the set_fact module in the playbook. The variable is then available in all roles and included playbooks as well. Something like:
---
- hosts: testdroplets
  pre_tasks:
    - set_fact:
        username: my_username
  roles:
    - update
    - bootstrap_server
    - create_new_user
    - vimserver

It is all here: http://docs.ansible.com/ansible/playbooks_variables.html
While there are already some good answers, I wanted to add mine because I've done this exact thing.
Here is the role I wrote: https://github.com/jmalacho/ansible-examples/tree/master/roles/users
I use hash_behaviour=merge and Ansible's group_vars to build a dictionary of users (with their keys and groups), so that adding a new user per host or per environment and re-running is easy.
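For illustration only, the variable layout below is a guess at that pattern rather than the exact code from the linked role:

# group_vars/all.yml
users:
  alice:
    groups: ['admin']
    key: 'ssh-rsa AAAA... alice'

# group_vars/webservers.yml (merged into the users dict above when hash_behaviour=merge)
users:
  bob:
    groups: ['deploy']
    key: 'ssh-rsa AAAA... bob'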
I also wrote up how my team uses group variables for environments here: https://www.coveros.com/ansible-environment-design/

Related

Ansible copy module inside a role

I have an Ansible role that provisions an Ubuntu server in Azure. The VM provisioning is working fine for me, but I need to copy a few files from my localhost to this VM. How can I do this?
├── defaults
│   └── main.yml
├── files
│   ├── cloud-init.yaml
│   └── files.txt
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   ├── copyfile.yaml
│   ├── main.yml
│   ├── apachenic.yaml
│   └── apachevm.yaml
└── vars
    └── main.yml
Here I have kept files.txt (the file I want to copy) in the files folder and created a new task file (copyfile.yaml) for copying. The content of the copy task is below.
- name: copy certs
  become: true
  become_user: admin
  ansible_sudo_password: pass123
  copy:
    src: files.txt
    dest: /home/admin/
    owner: admin
    group: admin
I have tried re-arranging and re-creating the task file multiple times, but I keep getting the error message below.
ERROR! conflicting action statements: copy, ansible_sudo_password
To me it looks like your YAML is malformed.
Try this (a top-level play; let's say it's copyFilesTXT.yml):
- hosts: all
  become: true
  become_user: admin
  become_method: sudo
  tasks:
    - name: copy files.txt file to remote machine
      copy:
        src: files.txt
        dest: /home/admin/
        owner: admin
        group: admin
        mode: 0644
This playbook can be started with this console invocation:
/bin/ansible-playbook --ask-become-pass -i hosts.ini copyFilesTXT.yml -vv
It will ask for the sudo password on STDIN, so you can provide it there.
Also, you can omit --ask-become-pass and add this variable to vars/main.yml:
ansible_become_pass: pass123
so the sudo password will be loaded from vars/main.yml.

ansible-playbook: Unable to find subdir in expected paths with fileglob

I am trying to copy files from roles/common/files with fileglob, but ansible-playbook searches for them in roles/common/tasks/files.
Using Roles documentation says:
Any copy, script, template or include tasks (in the role) can
reference files in roles/x/{files,templates,tasks}/ (dir depends on
task) without having to path them relatively or absolutely.
Playbook:
# ./ansible/roles/common/tasks/main.yml
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: setup bashrc
      import_tasks: bashrc.yml
Task:
# ./ansible/roles/common/tasks/bashrc.yml
- name: try to find bashrc libs in roles/common/files/bashrc
  copy:
    src: "{{ item }}"
    dest: /tmp
  with_fileglob:
    - bashrc/*.lib.sh
  # Causes the same error:
  # loop: "{{ lookup('fileglob', 'bashrc/*', wantlist=True) }}"
Files tree:
.
├── ansible
│   └── roles
│       └── common
│           ├── files
│           │   └── bashrc
│           │       ├── shell-aliases.lib.sh
│           │       ├── shell-functions.lib.sh
│           │       └── shell-settings.lib.sh
│           └── tasks
│               ├── bashrc.yml
│               ├── main.retry
│               └── main.yml
Run playbook:
$ ansible-playbook -vvvvv ./ansible/roles/common/tasks/main.yml
...
TASK [try to find bashrc libs in roles/common/files/bashrc] *******...
task path: /home/<user>/git/homedirsync/ansible/roles/common/tasks/bashrc.yml:1
looking for "bashrc" at "/home/<user>/git/homedirsync/ansible/roles/common/tasks/files/bashrc"
looking for "bashrc" at "/home/<user>/git/homedirsync/ansible/roles/common/tasks/bashrc"
looking for "bashrc" at "/home/<user>/git/homedirsync/ansible/roles/common/tasks/files/bashrc"
looking for "bashrc" at "/home/<user>/git/homedirsync/ansible/roles/common/tasks/bashrc"
[WARNING]: Unable to find 'bashrc' in expected paths
...
Ansible version:
ansible 2.6.1
config file = /home/<user>/.ansible.cfg
configured module search path = [u'/var/ansible/library']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.6 (default, Nov 13 2018, 12:45:42) [GCC 4.8.4]
I have seen a lot of examples where it works and issues where it doesn't, but I can't locate the root of the problem in my case. Community, please help.
Q: "Unable to find subdir in expected paths with fileglob"
A: Quoting from fileglob's NOTES
"Matching is against local system files on the Ansible controller. "
Role's feature "Any copy, script, template or include tasks can reference files in roles/x/{files,templates,tasks} ..." does not apply to fileglob.
Instead, it's possible to use special variables. For example
with_fileglob:
  - '{{ role_path }}/files/bashrc/*.lib.sh'
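Put together with the copy task from the question, that might look like this (a sketch):

- name: copy bashrc libs from roles/common/files/bashrc
  copy:
    src: "{{ item }}"
    dest: /tmp
  with_fileglob:
    - '{{ role_path }}/files/bashrc/*.lib.sh'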

Problem assigning variable per environment in Ansible

I'm building an AMI with Packer and provisioning it with Ansible. The AMI needs to be built with different variables depending on the environment. Therefore I used the structure below, based on this article:
├── ansible.cfg
├── bamboo-server-dev.yml
├── environments
├── group_vars
│   ├── bamboo_packer_dev.yml
│   └── bamboo_packer_prod.yml
└── roles/
    ├── bamboo-server/
    │   └── vars/
    │       ├── bamboo-dev.yml
    │       ├── bamboo-prod.yml
    │       └── main.yml
    └── ubuntu-server/
My environment file is where I define my inventory and looks like this:
[bamboo_packer_dev]
localhost
[bamboo_packer_prod]
localhost
The group_vars/bamboo_packer_dev.yml looks like the below:
---
# Setup specific variables per environment
environment: "dev"
The playbook bamboo-server-dev.yml looks like this:
---
- hosts: bamboo_packer_dev
  become: true
  roles:
    - ubuntu-server
    - bamboo-server
And finally the roles/bamboo-server/vars/main.yml looks like this:
---
# Provide variables according to the environment
- name: provide variables
  include_vars:
    file: "bamboo-{{ environment }}.yml"
But when I run the playbook I get the following message:
ERROR! failed to combine variables, expected dicts but got a 'dict' and a 'AnsibleSequence':
{}
[{"name": "provide variables", "include_vars": {"file": "bamboo-{{ environment }}.yml"}}]
It seems the "environment" variable is not getting picked up, and I cannot figure out why. Can someone please suggest what might be wrong with my playbook, or a better way to achieve this? My Ansible version is 2.9.1 running on Mojave.
Your vars file cannot include Ansible tasks, as you have done. Files in vars/ must be a dict and not a list (as Ansible very clearly told you), since the keys of the vars file become variable names, and [] is not a variable name.
Perhaps you meant to put that content into roles/bamboo-server/tasks/main.yml, which will allow you to have include_vars: as an action.
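For example, roles/bamboo-server/tasks/main.yml could contain something like this sketch, reusing the file naming from the question:

---
# roles/bamboo-server/tasks/main.yml
- name: provide variables according to the environment
  include_vars:
    file: "bamboo-{{ environment }}.yml"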

Ansible Custom Role (with custom command module)

I am trying to implement a custom network command module as an Ansible role. This module will run commands on remote devices.
The Ansible connection type is network_cli.
I created the role using the command below,
ansible-galaxy init --type=network test-command-mod
which gave me the default directory structure below:
roles
└── test-command-mod
    ├── cliconf_plugins
    │   └── myos.py (my file)
    ├── defaults
    │   └── main.yml
    ├── files
    ├── library
    │   ├── __init__.py
    │   └── myos_command.py (my file - the command module)
    ├── meta
    │   └── main.yml
    ├── module_utils
    │   └── myos.py (my file)
    ├── README.md
    ├── tasks
    │   ├── execute-commands.yml (a test task to run commands)
    │   └── main.yml
    ├── templates
    ├── terminal_plugins
    │   └── myos.py (my file - emulating the myos terminal)
    ├── tests
    │   ├── inventory
    │   └── test.yml
    └── vars
        └── main.yml
Below is how it is used in an Ansible playbook:
- hosts: my_os_cli
  gather_facts: False
  roles:
    - role: test-command-mod
  tasks:
    - name: run some commands on the device
      myos_command:
        commands:
          - command: 'show version'
          - command: 'show ntp status'
  vars:
    ansible_connection: network_cli
    ansible_network_os: myos
When used from a role, it fails with the message below:
The full traceback is:
Traceback (most recent call last):
  File "/ansible/bin/ansible-connection", line 102, in start
    self.connection._connect()
  File "/ansible/lib/ansible/plugins/connection/network_cli.py", line 338, in _connect
    raise AnsibleConnectionFailure('network os %s is not supported' % self._network_os)
AnsibleConnectionFailure: network os myos is not supported
When running Ansible in debug mode, I saw the following in the log:
unable to load cliconf for network_os myos
Moreover, it is trying to look for plugins/cliconf/myos.py under the default Ansible location instead of my Ansible role (test-command-mod).
I expect it to look in roles/test-command-mod/cliconf_plugins/myos.py
Is this a bug or by design?
Also, the role works if ansible.cfg is updated with
cliconf_plugins = ./roles/test-command-mod/cliconf_plugins
terminal_plugins = ./roles/test-command-mod/terminal_plugins
OR the environment variables below are set
export ANSIBLE_TERMINAL_PLUGINS=./roles/test-command-mod/terminal_plugins
export ANSIBLE_CLICONF_PLUGINS=./roles/test-command-mod/cliconf_plugins
OR the files (my files) are copied to their respective directories in the Ansible installation.
Thoughts please?
EDIT: ansible.cfg as follows
[defaults]
# after suggestion
#cliconf_plugins = cliconf_plugins:./roles/test-command-mod/cliconf_plugins
#terminal_plugins = terminal_plugins:../roles/test-command-mod/terminal_plugins
# before suggestion
cliconf_plugins = ./roles/test-command-mod/cliconf_plugins
terminal_plugins = ./roles/test-command-mod/terminal_plugins
[paramiko_connection]
look_for_keys = False
We have the exact same issue and have been investigating for a week.
We tried the same things as you did, with the same results.
The solution we came up with is to use an install.yml in the tasks directory to copy the terminal and cliconf plugins into their respective system install directories.
---
- block:
    - name: Create terminal target directory
      file:
        path: /usr/share/ansible/plugins/terminal/
        state: directory
        mode: 0755
      connection: local

    - name: install terminal plugin
      copy:
        src: terminal_plugins/myos.py
        dest: /usr/share/ansible/plugins/terminal/myos.py
      connection: local

    - name: Create cliconf target directory
      file:
        path: /usr/share/ansible/plugins/cliconf/
        state: directory
        mode: 0755
      connection: local

    - name: install cliconf plugin
      copy:
        src: cliconf_plugins/myos.py
        dest: /usr/share/ansible/plugins/cliconf/myos.py
      connection: local
  run_once: true
You may then add a call to install.yml in main.yml so that it executes the copy when you use the role:
---
- name: install/update driver
  include: install.yml
Regards,
Stopostit

How can I ignore failures to decrypt a vaulted file?

I have two roles, one of which has a group_vars file that is vaulted, and another that is not. I would like to run the role that does not require any vaulted information, but ansible prompts me for a vault password anyway:
$ tree
├── deploy-home-secure.yml
├── deploy-home.yml
├── group_vars
│   ├── home
│   │   └── unvaulted
│   └── home-secure
│       ├── unvaulted
│       └── vaulted
├── hosts
└── roles
    ├── home
    │   └── tasks
    │       └── main.yaml
    └── home-secure
        └── tasks
            └── main.yaml
$ ansible-playbook --version
ansible-playbook 1.8.2
configured module search path = None
$ ansible-playbook -i hosts deploy-home.yml
ERROR: A vault password must be specified to decrypt vaulttest/group_vars/home-secure/vaulted
$ ansible-playbook --vault-password-file=/dev/null -i hosts deploy-home.yml
ERROR: Decryption failed
I have something like this to solve this kind of problem (mine was for different hosts rather than different roles, but I think the same principle applies):
This is the simplified file structure:
group_vars
    development_vars
    staging_vars
vaulted_vars
    production_vars
This allows you to deploy development or staging without Ansible asking you to decrypt production_vars.
And then, the production playbook goes like this:
- hosts: production
  roles:
    - role...
  vars_files:
    - vaulted_vars/production_vars
The vars_files line where you specify the path to the vaulted var is the key.
Ansible will try to load a group_vars file for any group it encounters in your inventory. If you split the inventory file (hosts) into one for the home group and another for home-secure, then it will not try to decrypt vars it is not supposed to.
$ ansible-playbook -i hosts-home deploy-home.yml
$ ansible-playbook --ask-vault-pass -i hosts-home-secure deploy-home-secure.yml
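For illustration, the split inventories might look something like this (hostnames are placeholders):

hosts-home:
[home]
home.example.com

hosts-home-secure:
[home-secure]
home-secure.example.com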
Here is another option if you don't always need your vaulted variables.
You can have a folder structure like this:
group_vars
├── all
├── prod
├── dev
│   └── vars.yml
└── dev-vault
    └── vault.yml
You store vaulted variables in the '-vault' variant of that inventory.
Then your inventories might be something like the following:
dev:
[servers]
dev.bla.bla
[dev:children]
servers
dev-vault:
[servers]
dev.bla.bla
[dev:children]
servers
[dev-vault:children]
servers
So you're only saving sensitive data in the dev-vault vars. If in most cases you don't actually need the passwords etc., you can run playbooks without extra options you're not really using and without storing the vault password in plaintext for convenience.
So the "normal" command might be:
ansible-playbook -i dev some.yml
And the "vaulted" command could be:
ansible-playbook -i dev-vault some.yml --extra-vars="use_vault=true"
Or you could manage the "include vault variables" behaviour via tagging, including some.yml from some-vault.yml, etc.
