Using Ansible environment and assuming a role with boto3

I've run into an issue assuming a role when using the environment setting to set proxies on a task.
For example, if I use a custom module with proxy_env set:
- name: compare values from api
  my_custom_module:
    module_data: "{{ some_var }}"
  register: compared_vals
  environment: "{{ proxy_env }}"
I get this error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
However, if I remove 'environment: "{{ proxy_env }}"', it works.
This is what proxy_env looks like:
proxy_env:
  https_proxy: "http://corp-proxy.com:80"
  http_proxy: "http://corp-proxy.com:80"
  no_proxy: "internal-apps.com"
Thanks
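One likely culprit (a guess, since no accepted answer is included here): boto3 can fall back to the EC2 instance metadata service at 169.254.169.254 for credentials, and once the proxy variables are set that link-local call gets routed through the corporate proxy and fails. A minimal sketch of a workaround, assuming the task runs on an EC2 instance using instance-profile credentials, is to exempt the metadata address from proxying:

proxy_env:
  https_proxy: "http://corp-proxy.com:80"
  http_proxy: "http://corp-proxy.com:80"
  # 169.254.169.254 is the EC2 metadata endpoint; keeping it out of the proxy
  # lets boto3 fetch instance-profile credentials (assumption, not from the post)
  no_proxy: "internal-apps.com,169.254.169.254"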

Related

Ansible import role run conditionally

I am writing a parent Ansible role that runs another role through import_role. The idea is that this sibling role (staticdev.pyenv) only runs when an argument pyenv_python_versions is passed; otherwise it is skipped.
According to the official documentation, I tried the following approach:
parent/tasks/main.yml
---
- name: Install pyenv
  import_role:
    name: staticdev.pyenv
  vars:
    pyenv_owner: "{{ ansible_env.USER }}"
    pyenv_path: "{{ ansible_env.HOME }}/pyenv"
    pyenv_global: "{{ pyenv_global }}"
    pyenv_python_versions: "{{ pyenv_python_versions }}"
    pyenv_virtualenvs: []
  when: pyenv_python_versions
I am currently using Ansible 4.1.0 (core 2.11.1), and when I test it on Debian 11 (image: cisagov/docker-debian11-ansible:latest) it executes the role anyway, even without any value for pyenv_python_versions. The when condition is not being considered, and I also tried include_role. Complete logs can be found here.
Any idea?
UPDATE: changed the when condition to pyenv_python_versions as suggested by @lonetwin.
The problem was that the role import was replicating variables from the imported role (pyenv_global, pyenv_python_versions and pyenv_virtualenvs); in this case you solve it simply by omitting those imported role params (they will be overridden if you create new defaults for them).
Solution:
---
- name: Install pyenv
  import_role:
    name: staticdev.pyenv
  vars:
    pyenv_owner: "{{ ansible_env.USER }}"
    pyenv_path: "{{ ansible_env.HOME }}/pyenv"
  when: pyenv_python_versions
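If the parent role is also meant to ship its own defaults for these variables (as the note above suggests), a minimal illustrative sketch of parent/defaults/main.yml could look like the following; the empty list keeps when: pyenv_python_versions falsy, so the import is skipped until a caller passes versions (the values here are assumptions, not from the original role):

---
# parent/defaults/main.yml (illustrative sketch)
pyenv_python_versions: []
pyenv_virtualenvs: []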

How to deploy an openstack instance with ansible in a specific project

I've been trying to deploy an instance in OpenStack to a different project than my user's default project. The only way to do this appears to be by passing the project_name within the auth: setting. This works fine, but it is not really compatible with using a clouds.yaml config with the cloud: setting, or even with using the admin-openrc.sh file that OpenStack provides (admin-openrc.sh appears to take precedence over any settings in auth:).
I'm using the current openstack.cloud collection 1.3.0 (https://docs.ansible.com/ansible/latest/collections/openstack/cloud/index.html). Some of the modules have the option to specify a project:, like the network module, but the server module does not.
So this deploys in a named project:
- name: Create instances
  server:
    state: present
    auth:
      auth_url: "{{ auth_url }}"
      username: "{{ username }}"
      password: "{{ password }}"
      project_name: "{{ project }}"
      project_domain_name: "{{ domain_name }}"
      user_domain_name: "{{ domain_name }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
Having sourced admin-openrc.sh, this deploys only to your default project (OS_PROJECT_NAME=<project_name>):
- name: Create instances
  server:
    state: present
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
When I unset OS_PROJECT_NAME but keep all other values from admin-openrc.sh, I can do the following, but this requires working with a non-default setup (unsetting that one environment variable):
- name: Create instances
  server:
    state: present
    auth:
      project_name: "{{ project }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    key_name: "{{ key_name }}"
    timeout: 200
    flavor: "{{ flavor }}"
    network: "{{ network }}"
I'm looking for the most useful way to use a single authorization model (be it clouds.yaml or environment variables) for all my OpenStack modules, while still being able to deploy to a specific project.
(You should upgrade to the latest collection (1.5.3), though this may also be compatible with 1.3.0.)
You can use the cloud property of the server task (openstack.cloud.server). Here is how you can proceed:
All project definitions are stored in clouds.yml (here is part of its content):
clouds:
  tarantula:
    auth:
      auth_url: https://auth.cloud.myprovider.net/v3/
      project_name: tarantula
      project_id: the-id-of-my-project
      user_domain_name: Default
      project_domain_name: Default
      username: my-username
      password: my-password
    regions:
      - US
      - EU1
From a task, you can then refer to the appropriate cloud like this:
- name: Create instances
  server:
    state: present
    cloud: tarantula
    region_name: EU1
    name: "test-instance-1"
We now refrain from using any environment variables and make sure the project id is set in the configuration or in the group_vars. This works because it does not depend on a local clouds.yml file. We basically build an auth object in Ansible that can be used throughout the deployment.
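A minimal sketch of that approach (the os_auth variable name and file layout are illustrative assumptions, not from the original posts): define the auth object once in group_vars and reuse it in every openstack.cloud task.

# group_vars/all.yml (hypothetical)
os_auth:
  auth_url: "{{ auth_url }}"
  username: "{{ username }}"
  password: "{{ password }}"
  project_name: "{{ project }}"
  project_domain_name: "{{ domain_name }}"
  user_domain_name: "{{ domain_name }}"

# any task can then deploy into the chosen project without environment variables
- name: Create instances
  openstack.cloud.server:
    state: present
    auth: "{{ os_auth }}"
    name: "test-instance-1"
    image: "{{ image_name }}"
    flavor: "{{ flavor }}"
    network: "{{ network }}"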

Setting Ansible vars with set_fact results

I'm running Ansible 2.9.18 on RHEL 7.
I am using hvac to retrieve usernames and passwords from a Hashicorp vault.
vars:
  - creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token= url=http://10.80.23.81:8200') }}"

tasks:
  - name: set Cisco creds
    set_fact:
      cisco: "{{ creds['data'] }}"

  - name: Get nxos facts
    nxos_command:
      username: "{{ cisco['username'] }}"
      password: "{{ cisco['password'] }}"
      commands: show ver
      timeout: 30
    register: ver_out

  - debug: msg="{{ ver_out.stdout }}"
But username and password are deprecated, and I am trying to figure out how to pass the username and password as a "provider" variable. This code doesn't work:
vars:
  asa_api:
    - creds: "{{ lookup('hashi_vault', 'secret=tst2/data/cisco token= url=http://10.80.23.81:8200') }}"
      set_fact:
        cisco: "{{ creds['data'] }}"
      username: "{{ cisco['username'] }}"
      password: "{{ cisco['password'] }}"

tasks:
  - name: show run
    asa_command:
      commands: show run
      provider: "{{ asa_api }}"
    register: run
    become: yes
    tags:
      - show_run
I cannot figure out the syntax for making this work. I would greatly appreciate any help.
Thanks,
Steve
Disclaimer: this is a generic answer. I do not have any network device to test this fully, so you might have to adapt a bit after reading the documentation.
You are taking this the wrong way. You don't need set_fact at all, and both methods you are trying to use (user/pass or provider dict) are actually deprecated. Ansible treats your network device like any other host and will use the user and password you have configured if they exist.
In the following example, I'm assuming your playbook only targets network devices and that the login/pass stored in your vault is the same on all devices.
- name: Untested network device connection configuration demo
  hosts: my_network_device_group
  vars:
    # This indicates which connection plugin to use. Default is ssh.
    # Another possible value is httpapi. See the documentation link above.
    ansible_connection: network_cli

    vault_secret: tst2/data/cisco
    vault_token: verysecret
    vault_url: http://10.80.23.81:8200
    vault_options: "secret={{ vault_secret }} token={{ vault_token }} url={{ vault_url }}"

    creds: "{{ lookup('hashi_vault', vault_options).data }}"

    # These are the user and pass used for connection.
    ansible_user: "{{ creds.username }}"
    ansible_password: "{{ creds.password }}"

  tasks:
    - name: Get nxos version
      nxos_command:
        commands: show ver
        timeout: 30
      register: ver_cmd

    - name: show version
      debug:
        msg: "NXOS version on {{ inventory_hostname }} is {{ ver_cmd.stdout }}"

    - name: Another task to play on targets
      debug:
        msg: "Task played on {{ inventory_hostname }}"
Rather than vars at play level, you can store this information in your inventory for all hosts or for a specific group, even for each host. See how to organise your group and host variables if you want to use that feature.
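As a sketch of that inventory-based layout (the group name is reused from the example above; the file path and values are illustrative assumptions):

# group_vars/my_network_device_group.yml (hypothetical file)
ansible_connection: network_cli
# vault_token would typically come from an encrypted variable or the environment
vault_options: "secret=tst2/data/cisco token={{ vault_token }} url=http://10.80.23.81:8200"
creds: "{{ lookup('hashi_vault', vault_options).data }}"
ansible_user: "{{ creds.username }}"
ansible_password: "{{ creds.password }}"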

How to set 2 environments (PATH and proxy_env) in ansible-playbook?

How do I write this task correctly?
- name: Install required python modules
  pip:
    name: "{{ item }}"
    extra_args: "{{ pip_extra_args | default(omit) }}"
  with_items: "{{ pip_python_coreos_modules }}"
  environment:
    PATH: "some path"
  environment: "{{ proxy_env }}"
How do I set both environments (PATH and proxy_env)?
Thanks
Ansible makes it easy for you to configure your environment by using the ‘environment’ keyword. Here is an example:
- hosts: all
  remote_user: root
  tasks:
    - apt: name=cobbler state=installed
      environment:
        http_proxy: http://proxy.example.com:8080
The environment can also be stored in a variable, and accessed like so:
- hosts: all
  remote_user: root
  # here we make a variable named "proxy_env" that is a dictionary
  vars:
    proxy_env:
      http_proxy: http://proxy.example.com:8080
  tasks:
    - apt: name=cobbler state=installed
      environment: "{{ proxy_env }}"
The whole thing is explained in the Ansible docs; you can read it here.
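To set both PATH and the proxy variables on a single task, one option (a sketch, not from the answer above) is to merge them into one dictionary with the combine filter, since environment accepts a single dict per task:

- name: Install required python modules
  pip:
    name: "{{ item }}"
    extra_args: "{{ pip_extra_args | default(omit) }}"
  with_items: "{{ pip_python_coreos_modules }}"
  # proxy_env is merged with the extra PATH entry into one environment dict
  environment: "{{ proxy_env | combine({'PATH': 'some path'}) }}"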

jenkins_configure_proxy role missing in Ansible

I have Ansible 2.4.0. I am trying to configure Jenkins proxy based on Ansible Jenkins DevOps Roles documentation:
- hosts: master
  roles:
    - jenkins_configure_proxy:
        jenkins_home: "{{ jenkins_home }}"
        proxy_host: "{{ proxy_host }}"
        proxy_port: "{{ proxy_port }}"
  become: true
  environment: "{{ proxy_env }}"
When trying to execute ansible-playbook I get:
ERROR! role definitions must contain a role name
The error appears to have been in '/Users/me/projects/jenkins/jenkins.yml': line 10, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
roles:
- jenkins_configure_proxy:
^ here
That's pretty strange, because other roles like jenkins_plugin work fine.
What am I doing wrong?
You are using the role syntax without parameters to apply a role with parameters. See the example:
roles:
  # without or with default parameters
  - jenkins_configure_proxy
  # with custom parameters
  - role: jenkins_configure_proxy
    jenkins_home: "{{ jenkins_home }}"
    proxy_host: "{{ proxy_host }}"
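Applied to the playbook from the question, a corrected version (an untested sketch reusing the same variables) would look like:

- hosts: master
  become: true
  environment: "{{ proxy_env }}"
  roles:
    - role: jenkins_configure_proxy
      jenkins_home: "{{ jenkins_home }}"
      proxy_host: "{{ proxy_host }}"
      proxy_port: "{{ proxy_port }}"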
