Is it possible to detect within a playbook what the Ansible invocation was? Specifically I'd like to detect whether the "--ask-vault-pass" option was supplied, and if not, exit the playbook.
I think the easiest way might be to use another task to check the content of your file. For example,
- name: check if a string is in the content
  debug:
    var: file.content
  failed_when: "'expected_string' not in file.content"
If the vault password/file is not supplied, the file cannot be decrypted. However, you will also need to make sure the ansible.cfg file is not already configured with a vault password file path.
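A minimal sketch of that pattern, assuming a vault-encrypted vars/canary.yml containing `canary: "expected_string"` (the file and variable names are made up for illustration):

```yaml
- name: Load the vaulted canary file
  include_vars: vars/canary.yml

- name: Fail if the canary did not decrypt to the expected value
  fail:
    msg: "Run this playbook with --ask-vault-pass"
  when: canary != "expected_string"
```

In practice, if no vault password is supplied at all, the include_vars step itself errors out, which also stops the playbook.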
This is the offending line where the error seems to be:
- name: Generate random password
  vars:
    jupyter_pwd: "{{ lookup('password', '/root/jupyter_password length=10') }}"
I'm trying to generate a random password and store it in the jupyter_pwd variable.
This is not how Ansible works. It's not like, e.g., Bash's variable expansion. In Ansible you need to use a specific action (module) to do the work; in this case you need a module that can execute shell commands or scripts. This intermediate layer is needed because Ansible can run your playbook against any target host, remote or local, with the same unaltered playbook, so every task needs to go through such an intermediary, and that is what Ansible's actions are for.
In your case you most likely need command action:
- name: "Generate random password"
  command: command-to-execute
  register: jupyter_pwd
Where command-to-execute is a binary or script on the remote host that you want to run. Whatever command-to-execute writes to standard output is captured and made available as jupyter_pwd.stdout to the rest of your playbook.
See the command module docs for more information.
If the command action is too strict (which it may be if you want to chain several shell tools), then the shell action may be the way to go instead.
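A sketch of the shell variant, assuming openssl is installed on the target host:

```yaml
- name: Generate a random password on the target host
  shell: openssl rand -base64 12 | tr -d '\n'
  register: pwd_result

- name: Use the generated password
  debug:
    msg: "Generated password: {{ pwd_result.stdout }}"
```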
I'm aware I can add host variables dynamically via an inventory script. I'm wondering if I can put scripts in the host_vars directory that will be executed rather than simply read.
I have tried creating a simple script that outputs some variables. It seems only .json, .yml, and extensionless files are read by ansible-playbook. Since these are not executed, the raw source results in a parse error.
So, hence the question. Is this even possible and if not, would you be aware of a method to achieve the same results: Query a (local) dynamic source for variables of a particular host.
I'm pretty sure lookup("pipe") will do what you want; lookup plugins execute on the control machine, so the script needs to be available there:
- set_fact:
    my_vars: '{{ lookup("pipe", "./my_script.py") | from_json }}'
(substituting from_json with from_yaml or whatever is needed to coerce the textual output of the script into a Python data structure; it's possible that Ansible would coerce it automagically, but explicit is better than implicit)
If you want to run a script that lives on a target host instead, you'll likely have to do some hoopjumpery with command: or script: plus register:, and some hostvars ninjary to promote the resulting set_fact: from that host over to all playbook hosts
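A sketch of that target-host variant (the script path is hypothetical):

```yaml
- name: Run the variable-producing script on the target host
  command: /usr/local/bin/my_script.py
  register: script_out

- name: Turn its JSON output into facts
  set_fact:
    my_vars: "{{ script_out.stdout | from_json }}"
```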
I have a rather simple hosts file
[clients]
qas0062
[dbs_server]
qas0063
For the users of the project, we don't want them to modify the hosts file; instead we have a separate user.config.yml file that contains various user-configurable parameters. There we have an entry such as
dbs_server: qas0065
So the question is: is it possible to use a variable in the hosts file that would use a value defined in user.config.yml? And what would be the format?
Pretty sure you can't templatize the actual host key entry in the inventory, but you can templatize the value of its ansible_host connection var to achieve roughly the same effect, eg:
[clients]
clienthost ansible_host="{{ clienthost_var }}"
[dbs_server]
dbsserver ansible_host="{{ dbsserver_var }}"
then set the value of those vars from an external source before the play starts executing (e.g., with the vars_files directive or -e).
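For instance (variable names taken from the inventory sketch above, file contents assumed):

```yaml
# user.config.yml
clienthost_var: qas0062
dbsserver_var: qas0065
```

and run with something like ansible-playbook site.yml -e @user.config.yml (the playbook name is assumed).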
There is another way to do the same thing. We can simply refer to values in the hosts (inventory) file by using the following syntax in our playbook
host={{ groups['dbs_server'][0] }}
This works well when you have one entry in the group (dbs_server in this specific case).
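For example, a client-side task could reference it like this (the config file path and key are made up for illustration):

```yaml
- name: Point clients at the database server
  lineinfile:
    path: /etc/myapp/app.conf
    regexp: '^db_host='
    line: "db_host={{ groups['dbs_server'][0] }}"
```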
In using Ansible, I'm trying to use a vaulted vars file to store private variables, and then using those in another vars file, in the same role. (The idea from 'Vault Pseudo leaf encryption' here.)
e.g. I have one standard vars file, roles/myrole/vars/main.yml:
---
my_variable: '{{ my_variable_vaulted }}'
and then one which is encrypted, roles/myrole/vars/vaulted_vars.yml:
---
my_variable_vaulted: 'SECRET!'
But when I run the playbook I always get "ERROR! 'my_variable_vaulted' is undefined".
I've tried it without encrypting the second file, to make sure it's not an issue with encryption, and I'm getting the same error.
The reason why my_variable_vaulted wasn't available was because I hadn't included the variable file. I'd assumed that all files in a role's vars/ directory were picked up automatically, but I think that's only the case with vars/main.yml.
So, to make the vaulted variables available to all tasks within the role, in roles/myrole/tasks/main.yml I added this before all the tasks:
- include_vars: vars/vaulted_vars.yml
That is not the best way to handle vault in Ansible. A much better approach is outlined in the Ansible vault documentation. You would create your basic variable for the environment in group_vars/all.yml like this:
my_variable: "{{ vault_my_variable }}"
Then in your inventories/main you decide which hosts should load which vault file to satisfy this variable. As an example, you could have this in your inventories/main:
[production:children]
myhost1
[development:children]
myhost2
[production_vault:children]
production
[development_vault:children]
development
Then Ansible will automatically fetch group_vars/production_vault.yml or group_vars/development_vault.yml respectively, depending on which environment the box belongs to. You can then use my_variable in your roles and playbooks as before, without having to worry about fetching it from the right place.
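For example, group_vars/production_vault.yml (kept encrypted with ansible-vault; the value here is a placeholder) would contain just:

```yaml
vault_my_variable: 's3cr3t-prod-value'
```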
I have a numerical config (e.g., a number of milliseconds for some setting) that needs to be set in a standard system file. I don't want to keep the whole config file in version control since it's part of a standard install. Is there a way to add a line to a file, with some variable replacement text in the line that depends on a specified variable (e.g., passed via the command line when the playbook is run using --extra-vars)?
For example, something like the following (my best effort so far):
- name: Set ring delay
  lineinfile:
    dest: /etc/cassandra/cassandra-env.sh
    state: present
    regexp: 'JVM_OPTS="$JVM_OPTS -Dcassandra.ring_delay_ms=.*"'
    line: 'JVM_OPTS="$JVM_OPTS -Dcassandra.ring_delay_ms=${ring_delay}"'
    backrefs: yes
  when: ring_delay is defined
where the playbook is executed with ansible-playbook -e "ring_delay=10000"
The above example works fine if I hard-code the value (e.g., line: 'JVM_OPTS="$JVM_OPTS -Dcassandra.ring_delay_ms=10000"'), but I would like to be able to specify the value manually from the command line when I run the playbook. Is there a good way to do this? Ideally, rerunning the playbook would overwrite ring_delay with the new value.
EDIT: From this link, it appears that the ${ring_delay} notation I used above is not a feature of Ansible, though there are a couple of examples on the web that suggest there is some related functionality for string replacement. The docs refer to "named backreferences", but I'm not sure what those are.
The proper syntax for interpolation is '{{ var }}'. The '${ var }' syntax has been deprecated for some time now.
Changing your task like below should do it :
- name: Set ring delay
  lineinfile:
    dest: /etc/cassandra/cassandra-env.sh
    state: present
    regexp: 'JVM_OPTS="\$JVM_OPTS -Dcassandra\.ring_delay_ms=.*"'
    line: 'JVM_OPTS="$JVM_OPTS -Dcassandra.ring_delay_ms={{ ring_delay }}"'
  when: ring_delay is defined
Note that $ and . are escaped in the regexp so they match literally; unescaped, $ is an end-of-line anchor and the pattern would never match. You don't need backrefs here since there are no capture groups in the regexp.
Good luck.