I am using the aws_ec2 inventory plugin and would like to pass the boto_profile in as a var at runtime.
I am trying to run the following command:
ansible-playbook playbook.yml --extra-vars profile=foo
Inside my aws_ec2.yml plugin file I have:
boto_profile: "{{ profile }}"
This returns the error:
The config profile ({{ profile }}) could not be found
I am able to use the profile var inside my playbook itself: the ec2 module with profile: "{{ profile }}" works as expected when I define a static inventory.
Is it possible to pass the profile var into the dynamic inventory file?
Jinja2 templates are not applicable to inventory configuration files.
Use environment variable AWS_PROFILE or AWS_DEFAULT_PROFILE to set profile at runtime.
Like: AWS_PROFILE=foo ansible-playbook playbook.yml
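For example, a minimal aws_ec2.yml can leave the profile out entirely and let boto3 pick it up from the environment (the region below is just an illustration):
plugin: aws_ec2
# no boto_profile here; boto3 reads AWS_PROFILE from the environment at runtime
regions:
  - us-east-1
Invoked as in the command above, boto3 resolves the profile named foo from ~/.aws/credentials or ~/.aws/config.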
I'm using this in GitLab CI/CD and had the same issue. However, you can look up the environment variables in the dynamic inventory like this:
plugin: amazon.aws.aws_ec2
aws_access_key: "{{ lookup('env','AWS_ACCESS_KEY_ID') }}"
aws_secret_key: "{{ lookup('env','AWS_SECRET_ACCESS_KEY') }}"
regions:
  - eu-central-1
groups:
  webservers: "'app-server' in tags.Type"
I can set these up in the CI/CD settings under Variables, and they are passed through to the Docker container.
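For reference, the corresponding .gitlab-ci.yml job can be as small as the sketch below (the image name is a placeholder; AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are assumed to be defined under Settings > CI/CD > Variables, which GitLab injects into the job's environment, where the lookup('env', ...) calls read them):
deploy:
  image: my-ansible-image   # placeholder: any image with ansible and boto3 installed
  script:
    - ansible-playbook -i aws_ec2.yml playbook.yml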
Related
Using an Ansible playbook, I have three stages of deployment (test, stage, prod) and want to access a dictionary of settings based on the stage. My ./defaults/main.yaml would have:
prod:
  proxy_url: prod.example.com
stage:
  proxy_url: stage.example.com
test:
  proxy_url: test.example.com
I set a custom fact based on the machine hostname. By convention I use the stage as a suffix, e.g. app1-test:
- set_fact: deployment="{{ ansible_hostname.split('-')[-1] | lower }}"
This sets the fact deployment to test, which I can reference as:
"{{ deployment }}"
Now when I want to access my variables, this works:
"{{ test['proxy_url'] }}"
But this does not:
"{{ (deployment)['proxy_url'] }}"
What am I missing?
Solution, thanks to @mdaniel:
{{ vars[deployment]["proxy_url"] }}
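Put together, a minimal sketch of the lookup (variable and host names as in the question) looks like this:
- set_fact:
    deployment: "{{ ansible_hostname.split('-')[-1] | lower }}"
- debug:
    msg: "{{ vars[deployment]['proxy_url'] }}"   # resolves to test.example.com on app1-test
On newer Ansible versions the vars lookup offers an equivalent spelling: {{ lookup('vars', deployment).proxy_url }}.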
Can a playbook load its inventory list from variables, so that I can easily customize the run based on the chosen environment?
tasks:
  - name: include environment config variables
    include_vars:
      file: "{{ item }}"
    with_items:
      - "../../environments/default.yml"
      - "../../environments/{{ env_name }}.yml"
  - name: set inventory
    set_fact:
      inventory.docker_host = "{{ env_docker_host }}"
Yes. Use the add_host module: https://docs.ansible.com/ansible/latest/modules/add_host_module.html
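A hedged sketch of that approach (group and variable names borrowed from the question, the rest invented for illustration):
- hosts: localhost
  gather_facts: false
  tasks:
    - name: include environment config variables
      include_vars:
        file: "../../environments/{{ env_name }}.yml"
    - name: add the env-specific host to an in-memory group
      add_host:
        name: "{{ env_docker_host }}"
        groups: dockerhosts

- hosts: dockerhosts
  tasks:
    - debug:
        msg: "running against {{ inventory_hostname }}"
The second play then targets whatever hosts were added from your variables, with no inventory file changes needed.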
As I'm on Ansible 2.3 I can't use the add_host module (see Jack's answer and the add_host docs), which would be the superior solution. Instead, I use a different trick: augment an existing Ansible inventory file, reload it, and use it.
hosts.inv
[remotehosts]
main.yml
- hosts: localhost
  pre_tasks:
    - name: include environment config variables
      include_vars:
        file: "{{ item }}"
      with_items:
        - "../environments/default.yml"
        - "../environments/{{ env_name }}.yml"
    - name: inventory facts
      run_once: true
      set_fact:
        my_host: "{{ env_host_name }}"
    - name: update inventory for env
      local_action:
        module: lineinfile
        path: hosts.inv
        regexp: "{{ my_host }}"
        insertafter: "[remotehosts]"
        line: "{{ my_host }}"
    - meta: refresh_inventory

- hosts: remotehosts
  ...
The pre_tasks process the environment YAML files, with all the variable replacement etc., and use the result to populate hosts.inv before it is reloaded via refresh_inventory.
Any tasks defined beneath - hosts: remotehosts would execute on the remote host or hosts.
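For completeness, the environment files only need to define whatever the pre_tasks consume; a hypothetical ../environments/staging.yml could be as small as:
env_host_name: staging-app01.example.com   # placeholder hostname
The answer does not show how env_name is set; passing it via --extra-vars, e.g. ansible-playbook -i hosts.inv main.yml -e env_name=staging, is one option.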
I have a CloudFormation stack that was created using the Ansible cloudformation module, and some masked parameters that were later updated manually by a separate operations team.
Now I would like to update the stack to perform a version upgrade, and while this is easily done in the AWS Console and through the AWS CLI, I can't seem to find a way to do this through the Ansible module.
Based on another post here, it was noted that such upgrades are not possible and that the only option is to simply not use Ansible.
I have tried using the Ansible cloudformation_facts module to fetch the parameters, to no avail. Is there any other method to fetch this data from CloudFormation, or will I have to accept that I cannot use Ansible?
Thank you in advance.
You can fetch all the parameters from CloudFormation using Ansible with something like the below:
---
- name: Get CloudFormation stats
  cloudformation_facts:
    stack_name: "{{ stack_name }}"
    region: "{{ region }}"
  register: my_stack
If you had a parameter called "subnet-id", you could view what the return would look like with this:
---
- name: Get CloudFormation stats
  cloudformation_facts:
    stack_name: "{{ stack_name }}"
    region: "{{ region }}"
  register: my_stack

- debug: msg="{{ my_stack.ansible_facts.cloudformation[stack_name].stack_parameters['subnet-id'] }}"
The return would look like this:
ok: [localhost] => {
"msg": "subnet12345"
}
If values are masked, however, you won't be able to see what they were - so the answer in that case is that you shouldn't be updating CloudFormation directly if you're trying to move over to Ansible. Rather, keep the values in an encrypted file in your source control and build from there with Ansible.
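If you do stay with Ansible for the update itself, one hedged sketch is to feed the fetched parameters straight back into the cloudformation module and override only what changes. The template path, the AppVersion parameter, and the new_version variable below are assumptions, and masked parameters come back as "****", so their real values still have to be supplied separately (e.g. from Ansible Vault):
- name: Get CloudFormation stats
  cloudformation_facts:
    stack_name: "{{ stack_name }}"
    region: "{{ region }}"
  register: my_stack

- name: Update the stack, reusing the existing parameters
  cloudformation:
    stack_name: "{{ stack_name }}"
    region: "{{ region }}"
    template: templates/stack.yml   # assumption: the stack template lives in source control
    # 'AppVersion' and new_version are hypothetical; masked ("****") values must be re-supplied here too
    template_parameters: "{{ my_stack.ansible_facts.cloudformation[stack_name].stack_parameters | combine({'AppVersion': new_version}) }}"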
I have a playbook that provisions a host for use with Rails/rvm/Passenger. I'd like to use the same playbook to set up both test and production.
In testing, the user to add to the rvm group is jenkins. The one in production is passenger. My playbook excerpt below does this based on the inventory_hostname parameter.
It seems like adding a new user:/when: block in the playbook for every testing or production host is the wrong way to go here. Should I be using an Ansible role for this?
Thanks
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add jenkins user to rvm group when on testing
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - jenkins
      when: "'bob.mydomain' in inventory_hostname"
    - name: add passenger user to rvm group when on rails production
      user: name={{ item }}
            shell=/bin/bash
            groups=rvm
            append=yes
      with_items:
        - passenger
      when: "'alice.mydomain' in inventory_hostname"
Create an inventory file called inventories/testing
[web]
bob.mydomain
[testing:children]
web
This will control what hosts are targeted when you run your playbook against your testing environment.
Create another file called group_vars/testing
rvm_user: jenkins
This file will keep all variables required for running a playbook against the testing environment. Your production file should have the same variables, but with different values.
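For example, group_vars/production (using the production user from the question) would simply be:
rvm_user: passenger
and inventories/production would mirror the testing inventory, with a [production:children] group so the group_vars file name lines up:
[web]
alice.mydomain
[production:children]
web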
Finally in your playbook:
---
- hosts: all
  become: true
  ...
  tasks:
    - name: add user to rvm group
      user:
        name: "{{ rvm_user }}"
        shell: "/bin/bash"
        groups: rvm
        append: yes
Now, when you want to run your playbook, you execute it like so:
ansible-playbook -i inventories/testing site.yml
Ansible will do the right thing and look for a file named testing in group_vars (matching the testing group defined in that inventory) and read variables from there. It will ignore group_vars files not named after a group in your inventory, with the exception of a file called all, which is intended for common variables shared across plays.
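The production run is then just a different inventory path (assuming an inventories/production file set up as above):
ansible-playbook -i inventories/production site.yml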
Good luck - Ansible is an amazing tool :)
Does Ansible duplicate role variables for role dependencies that have allow_duplicates enabled?
For example, given a playbook that applies the role application-environment (which allows duplicates) more than once, will Ansible create multiple copies of its variables?
meta/main.yml:
---
allow_duplicates: yes
dependencies:
  - src: git+http://javasource/git/ansible/roles/organization
    version: 1.1.0
vars/main.yml:
---
application_directory: "{{ organization.directory }}/{{ application_name }}"
application_component_directory: "{{ application_directory }}/{{ application_component_name }}"
If Ansible does not create multiple copies of these variables, how could I rework the role so that each inclusion can use its own values?
You may find some helpful information here:
About vars:
Anything in the vars directory of the role overrides previous versions of that variable in namespace.
About defaults:
Tasks in each role will see their own role’s defaults. Tasks defined outside of a role will see the last role’s defaults.
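Based on those rules, the role's vars/ entries share one namespace rather than being copied per inclusion. One common pattern (a sketch with placeholder group and value names) is to pass application_name and application_component_name as role parameters each time the role is applied; since the vars/ entries are only templated when a task uses them, each run resolves application_directory and application_component_directory with that run's parameters:
- hosts: app   # placeholder group
  roles:
    - role: application-environment
      application_name: billing            # placeholder values
      application_component_name: api
    - role: application-environment
      application_name: billing
      application_component_name: worker
With allow_duplicates: yes the role (and its organization dependency) runs once per entry, and each pass sees the parameters given on that entry.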