I need dynamic facts, so as a solution I have written a script that lives in /etc/ansible/facts.d/my_facts.fact.
It has to pull some of its information from a cloud provider and therefore needs an API token, which, for obvious reasons, I do not want to store in plain text on the server.
Ansible is always triggered by a process that exports MY_SECRET_API_TOKEN in its environment.
The Ansible code looks like this:
---
- name: do the things
  import_tasks: my_tasks.yml
  environment:
    MY_SECRET_API_TOKEN: "{{ lookup('env', 'MY_SECRET_TOKEN') }}"
  when: ansible_facts['os_family'] == "Debian"
And the my_facts.fact bash script has a line like this that aborts fact gathering with rc: 99 when the token is not present in the environment:
[[ -z ${MY_SECRET_TOKEN} ]] && { echo "Secret not found"; exit 99; }
curl --silent -XGET -H "Authorization: Bearer ${MY_SECRET_TOKEN}" https://web.site.com/api/lookup
I've tried putting the environment: block at the role level to no avail as well.
How can I get Gather Facts on the remote server to pick up that ENV variable?
You have to set environment at the play level if you want it to affect the play-level fact gathering:
- hosts: all
  gather_facts: true
  environment:
    MY_SECRET_API_TOKEN: "{{ lookup('env', 'MY_SECRET_TOKEN') }}"
  tasks:
    # etc.
If you do not want to (or cannot) modify the play, you can instead add an explicit setup task with the correct environment:
- name: Re-gather local facts
  setup:
    gather_subset: local
  environment:
    MY_SECRET_API_TOKEN: "{{ lookup('env', 'MY_SECRET_TOKEN') }}"
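Once the facts are re-gathered, the output of the .fact script should be available under ansible_local. A quick way to verify (a sketch; the my_facts key assumes the script is named my_facts.fact and prints JSON):

```yaml
- name: Show the dynamic facts produced by my_facts.fact
  debug:
    var: ansible_local.my_facts
```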
I'm running a few tasks in a playbook which runs a bash script and registers its output:
playbook.yml:
- name: Compare FOO to BAZ
  shell: . script.sh
  register: output

- name: Print the generated output
  debug:
    msg: "The output is {{ output }}"

- include: Run if BAZ is true
  when: output.stdout == "true"
script.sh:
#!/bin/bash
FOO=$(curl example.com/file.txt)
BAR=$(cat file2.txt)
if [ "$FOO" == "$BAR" ]; then
  export BAZ=true
else
  export BAZ=false
fi
What happens is that Ansible registers the output of FOO=$(curl example.com/file.txt) instead of export BAZ.
Is there a way to register BAZ instead of FOO?
I tried running another task that would get the exported value:
- name: Register value of BAZ
  shell: echo $BAZ
  register: output
But then I realized that every task opens a separate shell on the remote host and doesn't have access to the variables that were exported in previous steps.
Is there any other way to register the right output as a variable?
I've come up with a workaround, but there must be another way to do this...
I added a line in script.sh and cat the file in a separate task.
script.sh:
...
echo $BAZ > ~/baz.txt
then in the playbook.yml:
- name: Check value of BAZ
  shell: cat ~/baz.txt
  register: output
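A slightly lighter variant of the same workaround (an untested sketch): have script.sh print the value as its last line, then register stdout directly and skip the temporary file:

```yaml
# assumes script.sh ends with:  echo "$BAZ"
- name: Run the comparison script and capture BAZ from stdout
  shell: . script.sh
  register: output

- name: Act on the captured value
  debug:
    msg: "BAZ was true"
  when: output.stdout_lines | last == "true"
```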
This looks a bit like using a hammer to drive a screw... or a screwdriver to plant a nail. Decide if you want to use nails or screws then use the appropriate tool.
Your question misses quite a few details, so I hope my answer won't be too far from your requirements. Meanwhile, here is an (untested and quite generic) example using Ansible to compare your files and run a task based on the result:
- name: compare files and run task (or not...)
  hosts: my_group
  vars:
    reference_url: https://example.com/file.txt
    compared_file_path: /path/on/target/to/file2.txt
    # Next var will only be defined when the two tasks below have run
    file_matches: "{{ reference.content == (compared.content | b64decode) }}"
  tasks:
    - name: Get reference once for all hosts in play
      uri:
        url: "{{ reference_url }}"
        return_content: true
      register: reference
      delegate_to: localhost
      run_once: true

    - name: slurp file to compare from each host in play
      slurp:
        path: "{{ compared_file_path }}"
      register: compared

    - name: run a task on each target host if compared is different
      debug:
        msg: "compared file is different"
      when: not file_matches | bool
Just in case you would be doing all this just to check if a file needs to be updated, there's no need to bother: just download the file on the target. It will only be replaced if needed. You can even launch an action at the end of the playbook if (and only if) the file was actually updated on the target server.
- name: Update file from reference if needed
  hosts: my_group
  vars:
    reference_url: https://example.com/file.txt
    target_file_path: /path/on/target/to/file2.txt
  tasks:
    - name: Update file on target if needed and notify handler if changed
      get_url:
        url: "{{ reference_url }}"
        dest: "{{ target_file_path }}"
      notify: do_something_if_changed

  handlers:
    - name: do whatever task is needed if file was updated
      debug:
        msg: "file was updated: doing some work"
      listen: do_something_if_changed
Some references to go further on the above concepts:
uri module
get_url module
slurp module
delegating tasks in ansible
registering output of tasks
run_once
handlers
I have a playbook that runs correctly when used with ansible-playbook.
It contains an encrypted variable. According to the manual https://docs.ansible.com/ansible/latest/user_guide/vault.html#id16, I can view the variable with
$ ansible localhost -m ansible.builtin.debug -a var="ansible_value" -e
"'debug_playbook.yml" --vault-password-file=./pw_file
But I get an error of
ERROR! failed at splitting arguments, either an unbalanced jinja2 block or quotes: 'debug_playbook.yml
As the playbook itself runs, presumably its syntax is correct.
The playbook is
- name: Run a series of debug tasks to see the value of variables
  hosts: localhost
  vars:
    ansible_password: vault |
      $ANSIBLE_VAULT;1.1;AES256
      63343064633966653833383264346638303466663265363566643062623436383364376639636630
      3032653839323831316361613138333999999999999999999a313439383536353737616334326636
      63616162323230333635663364643935383330623637633239626632626539656434333434316631
      3965373931643338370a393530323165393762656264306130386561376362353863303232346462
      3039
    user: myuser
  tasks:
    - name: show env variable HOME and LOGNAME
      debug:
        msg: "environment variable {{ item }}"
      with_items:
        - "{{ lookup('env','HOME') }}"
        - "{{ lookup('env','LOGNAME') }}"

    - name: now show all of the variables for the current managed machine
      debug:
        msg: "{{ hostvars[inventory_hostname] }}"

    - name: now show all of the hosts in the group from inventory file
      debug:
        msg: "server {{ item }}"
      with_items:
        - "{{ groups.mintServers }}"
        - "{{ groups.centosServers }}"
I have googled the error and nothing jumps out (to me, anyway). Is the manual correct? I have seen other methods where the encrypted variable is echoed into ansible-vault decrypt, but that is all a bit of a bother.
I have run the playbook through yamllint, so I am interested to know what the error means and how to debug it.
Regards
Following my comments: you cannot view an encrypted var inside a playbook with the technique proposed in the documentation, which is, for reference:
ansible localhost -m debug -a "var=your_var" \
    -e @your_file.yml --ask-vault-password
This will only work if your file is a "simple" var file where the top level element is a dictionary.
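For reference, that technique expects a var file shaped like this (a hypothetical example), with plain key/value pairs at the top level rather than a list of plays:

```yaml
# vars_file.yml - hypothetical "simple" var file: the top-level
# element is a dictionary, so `-e @vars_file.yml` can load it
your_var: some_value
other_var: 42
```

A playbook, by contrast, is a top-level list of plays, which is why loading it with -e @... fails.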
What I have done in the past is use the yq command-line tool (a wrapper around jq) that you can easily install with pip install yq. (Note that jq needs to be installed separately and is available in most Linux distribution channels. On Ubuntu: apt install jq.)
Once the prerequisites are available, you can use yq to extract the var from your playbook and decrypt it with ansible-vault directly.
For this to work, you will still need to fix your var value, which is not a valid vault definition as it is missing an exclamation mark in front of the vault marker (!vault, not vault):
vars:
  ansible_password: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    63343064633966653833383264346638303466663265363566643062623436383364376639636630
    3032653839323831316361613138333999999999999999999a313439383536353737616334326636
    63616162323230333635663364643935383330623637633239626632626539656434333434316631
    3965373931643338370a393530323165393762656264306130386561376362353863303232346462
    3039
The solution is not so trivial, as yq returns the vault value with some garbage whitespace and a trailing newline that make ansible-vault literally freak out. With your fixed playbook example above, this would give:
yq -r '.[0].vars.ansible_password|rtrimstr(" \n")' your_playbook.yaml \
| ansible-vault decrypt --ask-vault-pass
In my playbook I have tasks that use the hostname of a server and extrapolate data to set location and environment based on that. But some servers have unique names and I'm not sure how to set variables on those. I'd prefer not to use Ansible facts, since I would like to share the playbook with a team. One way I was thinking of is to do what's listed below, but I'm running into issues. Could someone please guide me?
Create vars_file inventory
---
customservers:
  customhostname1:
    env: test
    location: hawaii
  customhostname2:
    env: prod
    location: alaska
In Playbook.
---
tasks:
  - name: set hostname
    shell: echo "customhostname1"
    register: my_hostname

  - name: setting env var
    set_fact:
      env: "{{ item.value.env }}"
    when: my_hostname == "{{ item.key }}"
    with_dict: "{{ customservers }}"

  - name: outputting env var
    debug:
      msg: the output is {{ env }}
Expected output should be test.
Thank you.
In my playbook i have tasks that use the hostname of a server and
extrapolate data to set location and environment based on that.
Bad Idea.
But some servers have unique names and I'm not sure how to set variables on those
And that is why.
The second Bad Idea is to have TEST and PROD in the same inventory. That's just begging for a disaster. They should be two completely separate inventories, though perhaps under the same parent directory:
inventories/
inventories/test/
inventories/test/hosts
inventories/test/host_vars/
inventories/test/host_vars/customhostname1.yml
inventories/prod/
inventories/prod/hosts
inventories/prod/host_vars/
inventories/prod/host_vars/customhostname2.yml
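Each host then carries its own variables; for example, inventories/prod/host_vars/customhostname2.yml could contain (values taken from the question's vars file):

```yaml
env: prod
location: alaska
```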
So inventories/prod/hosts could look like this (I prefer the ini format):
[customservers]
customhostname2 location=alaska
Or:
[customservers]
customhostname2
[customhostname2:vars]
location=alaska
But in any case, DO NOT combine test and prod inventories.
If you still need that env variable, you can either put it in group_vars/all.yml or right in the hosts file like so:
[all:vars]
env=prod
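The group_vars variant would look like this (a sketch following the inventory layout above):

```yaml
# inventories/prod/group_vars/all.yml
env: prod
```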
I'm trying to create a simple playbook with a common role. Unfortunately I keep getting stymied by Ansible. I have looked up and down the internet for a solution to this error.
The setup:
I am running ansible 2.7.4 on Ubuntu 18.04
directory structure:
~/Ansible_Do
  playbook.yml
  inventory (hosts file)
  /roles
    /common
      /defaults
        main.yml (other variables)
      /tasks
        main.yml
        richard_e.yml
      /vars
        vars_and_stuff.yml (vault)
I have a simple playbook.yml
---
# My playbook 1
- hosts: test

- name: Go to common role to run tasks.
  roles:
    - common
  tasks:
    - name: echo something
      shell: echo $(ip addr | grep inet)
...
I run this command to start the playbook:
~/Ansible_Do$ ansible-playbook -vv --vault-id @prompt -i ~/Ansible_Do/inventory playbook.yml
I enter the vault password and the playbook continues.
The playbook starts and pulls facts from the test group of servers. Then it reads the role and works through /roles/common, which calls the /roles/common/tasks/main.yml file. This is where the error happens.
The error appears to have been in '/home/~/Ansible_Do/roles/common/tasks/main.yml': line 8, column 3
# Common/tasks file
---
- name: Bring variable from vault
  include_vars:
    file: vars_and_stuff.yml
    name: My_password

- name: Super Richard    <====== Error
  become: yes
  vars:
    ansible_become_pass: "{{ My_password }}"

- import_tasks: ./roles/common/tasks/ricahrd_e.yml
...
The ./roles/common/tasks/ricahrd_e.yml is a simple testing task.
---
- name: say hi
  debug:
    msg: "Active server."
...
The error is on "- name". I have checked online and in the Ansible docs to see if there is a key I'm missing. I found an example of include_vars in a /role/tasks file (https://gist.github.com/halberom/ef3ea6d6764e929923b0888740e05211) showing proper syntax (I presume) in a simple role. The code works as parts, but not together.
I have reached the limit of what I can understand. I feel that this error is utterly simple and I am missing something (forest for the trees).
The error means exactly what it says, except the "module name" is not misspelled in your case, but missing altogether.
This...
- name: Super Richard <====== Error
  become: yes
  vars:
    ansible_become_pass: "{{ My_password }}"
... is not a valid task definition, it does not declare an action.
An action in Ansible is a call to a module, hence "misspelled module name".
The error comes after name, because that's where Ansible expects the name of the "module" that you want to call, e.g. shell in your first example.
You are probably assuming that become is a "module", but it is not.
It is a "playbook keyword", in this case applied on the task level, which has the effect that you become another user for this task only.
But as the task has no action, you get this error.
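A minimal way to make that task valid (a sketch; ping here stands in for whatever module you actually want to run as the elevated user) is to give it an action:

```yaml
- name: Super Richard
  ping:  # any module call will do; become applies to this action
  become: yes
  vars:
    ansible_become_pass: "{{ My_password }}"
```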
See docs:
Playbook keywords
Understanding privilege escalation
After a bit of work I got the playbook working. Knowing that 'become' is not a module was the start. I also found out how to pull the proper vars from the vault.
# My first playbook 1
- hosts: test
  become: yes
  vars_files:
    - ./roles/common/vars/vars_and_stuff.yml
  vars:
    ansible_become_pass: "{{ My_password }}"
  roles:
    - common
  tasks:
    - name: echo something
      shell: echo $(ip addr | grep inet)
The vars_files entry reads the vault, and vars: then supplies the password used by become. With become in force, I ran the other tasks in the common role, followed by a last standalone task. Lastly, don't put - name: at the top level of the playbook, as it triggers a hosts undefined error.
I am trying to execute an Ansible playbook using roles. I have some variables, which I defined in main.yaml of the vars section. I am copying these variables (main.yaml) from another remote machine.
My question is: my playbook throws an error the first time, even though it copies the main.yaml file into my vars section. When I run it a second time, it executes fine. My understanding is that, although the file is copied on the first run, it is not read, because it was not present before the playbook started. Is there a way I can run it successfully without an error on the first run?
site.yaml
---
- name: Testing the Mini project
  hosts: all
  roles:
    - test
tasks/main.yaml
---
- name: Converting Mysql to CSV file
  command: mysqldump -u root -padmin -T/tmp charan test --fields-terminated-by=,
  when: inventory_hostname == "ravi"

- name: Converting CSV file to yaml format
  shell: python /tmp/test.py > /tmp/main.yaml
  when: inventory_hostname == "ravi"

- name: Copying yaml file from remote node to vars
  shell: sshpass -p admin scp -r root@192.168.56.101:/tmp/main.yaml /etc/ansible/Test/roles/vars/main.yaml
  when: inventory_hostname == "charan"

- name: Install Application as per the table
  apt: name={{ item.Application }} state=present
  when: inventory_hostname == {{ item.Username }}
  with_items: user_app
/vars/main.yaml This will be copied from remote machine.
---
user_app:
  - {Username: '"ravi"', Application: curl}
  - {Username: '"charan"', Application: git}
Take a look at the include_vars task. It may do what you need. It looks like you need to explicitly include /vars/main.yaml in a task before the apt task where you reference the variables.
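A sketch of what that could look like at the end of tasks/main.yaml (untested; the path is taken from the question, and it assumes the Username values in the copied file are plain strings like ravi rather than '"ravi"'):

```yaml
- name: Load the freshly copied variables before using them
  include_vars:
    file: /etc/ansible/Test/roles/vars/main.yaml

- name: Install Application as per the table
  apt:
    name: "{{ item.Application }}"
    state: present
  when: inventory_hostname == item.Username
  with_items: "{{ user_app }}"
```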