I created a Custom Credential in Ansible Tower and need to use it in a role.
The credential name is custom_cred and it has two keys: custom username and custom password.
I've tried hostvars[inventory_hostname][custom_cred]['custom username'], but it's not working.
To debug your Custom Credential Types you could use
- hosts: localhost
  gather_facts: yes
  tasks:

    - name: Get environment
      debug:
        msg: "{{ ansible_env }}"
resulting in an output like
TASK [Get environment] *********************************************************
ok: [localhost] => {
"msg": [
{
...
"custom_username": "username",
"custom_password": "********",
...
}
...
if such Custom Test Credentials are configured. This works for AWX/Tower. You can then follow up with
Ansible Tower - How to pass credentials as an extra vars to the job template?
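How the injected values can then be consumed in a role depends entirely on your credential type's injector configuration. A minimal sketch, assuming the injectors expose the two fields as environment variables named MY_CUSTOM_USERNAME and MY_CUSTOM_PASSWORD (hypothetical names, not taken from the question):

- name: Read injected credential values   # env var names are assumptions
  set_fact:
    custom_username: "{{ lookup('env', 'MY_CUSTOM_USERNAME') }}"
    custom_password: "{{ lookup('env', 'MY_CUSTOM_PASSWORD') }}"
  no_log: true

- name: Use the credential
  debug:
    msg: "Connecting as {{ custom_username }}"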
I'm using an Ansible playbook to get information about a server's hardware internals through the iDRAC controller. This is done by a third-party module that uses an API to connect to the device.
Running the task gives me the server's internals (controllers, disks, CPU information, etc.), and I would like to register some values from that output (the output below is shortened with dots).
I kept the main structure of the output to make it clear:
ok: [rac1] => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3"
},
"changed": false,
"invocation": {
"module_args": {
"ca_path": null,
"idrac_ip": "192.168.168.100",
"idrac_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"idrac_port": 443,
...
}
},
"msg": "Successfully fetched the system inventory details.",
"system_info": {
"BIOS": [
{
"BIOSReleaseDate": "09/14/2022",
"FQDD": "BIOS.Setup.1-1",
…
}
],
"CPU": [
{
"CPUFamily": "Intel(R) Xeon(TM)",
"Characteristics": "64-bit capable",
"CurrentClockSpeed": "2.1 GHz",
…
},
{
"CPUFamily": "Intel(R) Xeon(TM)",
"Characteristics": "64-bit capable",
…
}
],
"Controller": [
{
"Bus": "67",
"CacheSize": "8192.0 MB",
"DeviceDescription": "RAID Controller in SL 3",
"FQDD": "RAID.SL.3-1",
"Key": "RAID.SL.3-1",
…
},
I need to get only a couple of values from the output (the PCI slot number where the RAID controller is located):
"DeviceDescription": "RAID Controller in SL 3",
"Key": "RAID.SL.3-1"
But I have no clue which example from the documentation I can use to register the value in a variable.
Consider that this is a third-party module: the task execution is very slow, so it is not easy for me to experiment with it.
Could somebody please suggest which direction I should dig in? I'm not a big expert in Ansible yet.
My role's tasks follow below.
I tried to get nested values using a debug task (just to figure out the key I need to register), like this, but no luck:
### Get inventory key:value pairs and trying to save certain value to variable ###:
- name: Get Inventory
  dellemc.openmanage.idrac_system_info:
    idrac_ip: "{{ idrac_ip }}"
    idrac_user: "{{ idrac_user }}"
    idrac_password: "{{ idrac_password }}"
    validate_certs: False
    register: ansible_facts[system_info][Controller][FQDD].result

### Trying to show my saved variable in this task ###
- name: print registered value
  debug:
    var: RAID slot is at "{{ result }}"
    verbosity: 4
I get this message after launching the playbook:
"msg": "Unsupported parameters for (dellemc.openmanage.idrac_system_info) module: register. Supported parameters include: idrac_ip, timeout, idrac_user, ca_path, idrac_port, validate_certs, idrac_password (idrac_pwd)."
Since you are already providing valid output, how did you generate that? How was it "printed"?
A minimal example playbook
---
- hosts: rac1
  become: false
  gather_facts: false

  vars:

    result:
      system_info: {
        "BIOS": [
          {
            "BIOSReleaseDate": "09/14/2022",
            "FQDD": "BIOS.Setup.1-1"
          }
        ],
        "CPU": [
          {
            "CPUFamily": "Intel(R) Xeon(TM)",
            "Characteristics": "64-bit capable",
            "CurrentClockSpeed": "2.1 GHz"
          },
          {
            "CPUFamily": "Intel(R) Xeon(TM)",
            "Characteristics": "64-bit capable"
          }
        ],
        "Controller": [
          {
            "Bus": "67",
            "CacheSize": "8192.0 MB",
            "DeviceDescription": "RAID Controller in SL 3",
            "FQDD": "RAID.SL.3-1",
            "Key": "RAID.SL.3-1"
          }
        ]
      }

  tasks:

    - name: Show Facts
      debug:
        msg: "{{ result.system_info.Controller }}"
will already result in the expected output of
TASK [Show Facts] *****************************
ok: [rac1] =>
msg:
- Bus: '67'
CacheSize: 8192.0 MB
DeviceDescription: RAID Controller in SL 3
FQDD: RAID.SL.3-1
Key: RAID.SL.3-1
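If you only need those two values, the list could be filtered further. A sketch using the selectattr and first filters, assuming exactly one controller entry matches:

- name: Show only the RAID controller slot information
  vars:
    # assumes a single controller whose description contains "RAID"
    raid_controller: "{{ result.system_info.Controller
                         | selectattr('DeviceDescription', 'search', 'RAID')
                         | first }}"
  debug:
    msg: "{{ raid_controller.DeviceDescription }} ({{ raid_controller.Key }})"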
Regarding
which example from the documentation I can use to register the value in a variable.
you may read about Registering variables. For registering results, even for third-party or custom modules, the structure is
- name: Task
  module_name:
    module_parameter: values
  register: variable_name
That's why you get a syntax error
Unsupported parameters for (dellemc.openmanage.idrac_system_info) module: register.
caused by the incorrect indentation. Therefore, try first
- name: Get Inventory
  dellemc.openmanage.idrac_system_info:
    idrac_ip: "{{ idrac_ip }}"
    idrac_user: "{{ idrac_user }}"
    idrac_password: "{{ idrac_password }}"
    validate_certs: False
  register: inventory

- name: Show Inventory
  debug:
    msg: "{{ inventory }}"
to get familiar with the result set and data structure.
Further documentation which might help: Return Values and idrac_system_info module – Get the PowerEdge Server System Inventory.
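Once you are familiar with the structure, the registered result can be narrowed down in follow-up tasks. A sketch, assuming the Controller list is returned under inventory.system_info as in your output:

- name: Register RAID controller details
  set_fact:
    raid_slot_description: "{{ inventory.system_info.Controller[0].DeviceDescription }}"
    raid_slot_key: "{{ inventory.system_info.Controller[0].Key }}"

- name: Show RAID slot
  debug:
    msg: "RAID slot is at {{ raid_slot_description }} ({{ raid_slot_key }})"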
I have an inventory:
[dbs]
server1.intranet
[webservices]
server2.intranet
[apps]
server3.intranet
And a file with variables:
users:
- { name: user1, ssh_key: <SSH_KEY> }
- { name: user2, ssh_key: <SSH_KEY> }
(1) My first question is: how can I specify in the inventory which users belong to each server, without having to copy and duplicate the user information for every host? Note that users can change and can belong to multiple servers.
(2) The final objective is to run some tasks on each host. For example, to create the users on each host and add the corresponding user's SSH key on each server, something like:
- name: SSH
  ansible.posix.authorized_key:
    user: "item.name"
    state: present
    key: "item.ssh_key"
  with_items: "{{ users[??] }}"
Of course, the users variable should only contain the users for the specific host being iterated over.
How can I do this?
I didn't understand your second point, but this solution could be helpful.
Define destination hosts as an array:
users:
  - { name: user1, ssh_key: <SSH_KEY>, hosts: ['test-001', 'test-002'] }
  - { name: user2, ssh_key: <SSH_KEY>, hosts: ['test-002'] }
Use the selectattr filter in your loop to search for the running host's name in the hosts list defined in the vars:
- name: SSH
  ansible.posix.authorized_key:
    user: "{{ item.name }}"
    state: present
    key: "{{ item.ssh_key }}"
  loop: "{{ users | selectattr('hosts', 'search', inventory_hostname) }}"
ok: [test-001] => (item={'name': 'user1', 'ssh_key': '<SSH_KEY1>', 'hosts': ['test-001', 'test-002']}) => {
"msg": "user1"
}
ok: [test-002] => (item={'name': 'user1', 'ssh_key': '<SSH_KEY1>', 'hosts': ['test-001', 'test-002']}) => {
"msg": "user1"
}
ok: [test-002] => (item={'name': 'user2', 'ssh_key': '<SSH_KEY2>', 'hosts': ['test-002']}) => {
"msg": "user2"
}
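If some entries in users might not define a hosts key at all, the loop can be guarded as well. A sketch; the extra selectattr('hosts', 'defined') is my addition, not part of the run above:

- name: SSH
  ansible.posix.authorized_key:
    user: "{{ item.name }}"
    state: present
    key: "{{ item.ssh_key }}"
  loop: "{{ users
            | selectattr('hosts', 'defined')
            | selectattr('hosts', 'search', inventory_hostname)
            | list }}"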
I'm creating a playbook to install Fluent Bit on Windows hosts. Everything is working properly, but I'm getting an error when creating the service. It doesn't fail the install, since everything is already in place by then, but I would like to figure out how I could leverage conditionals. Could you help me with this? :)
My ad hoc test play, where I've tried to parse the results from the ansible.windows.win_service_info module, is as follows:
---
- name: Check Windows service status
  hosts: win
  gather_facts: True
  tasks:

    - name: Check if a service is installed
      win_service:
        name: fluent-bit
      register: service_info

    - debug: msg="{{ service_info }}"

    - name: Get info for a single service
      ansible.windows.win_service_info:
        name: fluent-bit
      register: service_info

    - debug: msg="{{ service_info }}"

    - name: Get info for a fluent-bit service
      ansible.windows.win_service_info:
        name: logging
      register: service_exists

    - debug: msg="{{ service_exists }}"

    - name: Send message if service exists
      debug:
        msg: "Service is installed"
      when: service_exists.state is not defined or service_exists.name is not defined

    - name: Send message if service exists
      debug:
        msg: "Service is NOT installed"
      when: service_exists.state is not running
I just don't get how I could parse the output so that I could skip a task when the fluent-bit service's exists value is true, like here:
TASK [debug] *****************************************************************************************
ok: [win-server-1] => {
"msg": {
"can_pause_and_continue": false,
"changed": false,
"depended_by": [],
"dependencies": [],
"description": "",
"desktop_interact": false,
"display_name": "fluent-bit",
**"exists": true,**
"failed": false,
"name": "fluent-bit",
"path": "C:\\fluent-bit\\bin\\fluent-bit.exe -c C:\\fluent-bit\\conf\\fluent-bit.conf",
"start_mode": "manual",
"state": "stopped",
"username": "LocalSystem"
}
}
Cheers :)
So, I got it working as I wanted with service_info.exists != True; now it will skip the task if the service is already present.
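For reference, a minimal sketch of how such a guarded task could look, assuming the win_service_info result is registered as service_info and the service is created with ansible.windows.win_service (the path is just the one from your output):

- name: Get info for the fluent-bit service
  ansible.windows.win_service_info:
    name: fluent-bit
  register: service_info

- name: Create the fluent-bit service only if it does not exist yet
  ansible.windows.win_service:
    name: fluent-bit
    path: C:\fluent-bit\bin\fluent-bit.exe -c C:\fluent-bit\conf\fluent-bit.conf
    start_mode: manual
  when: not service_info.exists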
I have a role that is common to both mongo replicas and arbiters, and separate host groups for the replicas and the arbiter, because the role should support the arbiter being on the same host or on a different host.
hosts:
[replicas]
127.0.0.1
127.0.0.2
[arbiter]
127.0.0.2
The tasks inside the role:
- name: Run only on replicas
  debug: msg=" Only on replica"
  when: '"replicas" in group_names'

- name: Run only on the arbiter
  debug: msg="Only on the arbiter"
  when: '"arbiter" in group_names'
playbook:
- hosts: replicas
  roles:
    - role: "common"
    - role: "replica"

- hosts: arbiter
  roles:
    - role: "common"
    - role: "arbiter"
Expected output while running on replicas:
TASK [debug] *********************************************************************************************************************************************
ok: [127.0.0.1] => {
"msg": " Only on replica"
}
ok: [127.0.0.2] => {
"msg": " Only on replica"
}
TASK [debug(arbiter)] *********************************************************************************************************************************************
skipping: [127.0.0.1]
skipping: [127.0.0.2]
But the arbiter task is not skipping as expected, because the same host is also part of the replicas group. Below is the actual output.
Actual output:
TASK [debug] *********************************************************************************************************************************************
ok: [127.0.0.1] => {
"msg": " Only on replica"
}
ok: [127.0.0.2] => {
"msg": " Only on replica"
}
TASK [debug(arbiter)] *********************************************************************************************************************************************
skipping: [127.0.0.1]
ok: [127.0.0.2] => {
"msg": " Only on replica"
}
How do I run a task only for the specific group the play is targeting?
Hello, you can use this method:
Playbook:
- hosts: replicas
  roles:
    - { role: common, vars: { group: "replicas" } }
    - { role: replica, vars: { group: "replicas" } }

- hosts: arbiter
  roles:
    - { role: common, vars: { group: "arbiter" } }
    - { role: arbiter, vars: { group: "arbiter" } }
and inside your role:
- name: Run only on replicas
  debug: msg="Only on replica"
  when: group == "replicas"

- name: Run only on the arbiter
  debug: msg="Only on the arbiter"
  when: group == "arbiter"
I hope that can help you to resolve your issue.
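As a small variant (my addition, not part of the answer above), the variable could also be set once at play level instead of being repeated for every role:

- hosts: arbiter
  vars:
    group: "arbiter"
  roles:
    - common
    - arbiter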
I know how to process Jinja2 template files and let them create files. I also know how to POST to web services using the uri module.
For now I use some code like this, which successfully posts hardcoded JSON to my remote service:
tasks:
  - name: GSA app definition
    uri:
      url: "http://localhost:8764/api/apps?relatedObjects=false"
      method: POST
      force_basic_auth: yes
      user: "{{ admin_name }}"
      password: "{{ admin_pass }}"
      body_format: json
      body: "{\"name\":\"My new app\", \"description\":\"A really great new app\" }"
      follow_redirects: all
      status_code: 200
      timeout: 15
    register: app_gsa_cfg
But the JSON is static. How can I process a Jinja2 template and POST its content? I would prefer not to create temporary files on disk and POST those; what I am looking for is a direct connection, or perhaps an approach that puts the template processing result into a string.
For starters, the Jinja2 template could look like this; later I will add variables too:
{#
This file creates the basic GSA app in Fusion. See https://doc.lucidworks.com/fusion-server/4.2/reference-guides/api/apps-api.html#create-a-new-app for details
#}
{
"name": "GSA",
"description": "Contains all configuration specific to the migrated GSA legacy searches"
}
(I know that this has little advantage over static JSON included in the playbook, but it is easier to edit and gives me the opportunity to have Jinja-style comments in JSON, which is normally not possible.)
In my case, I have an API, so I do the following:
- name: Change API Status
  uri:
    url: "{{ enpoint }}/v1/requests/{{ whatever }}"
    method: PATCH
    user: "{{ tokenid }}"
    password: x
    headers:
      X-4me-Account: "myaccount"
    body: '{ "status":"{{ reqstatus }}" }'
    body_format: json
    status_code:
      - 201
      - 200
    force_basic_auth: true
    validate_certs: false
    return_content: true
Your reqstatus var is then rendered into the body.
You can even write your whole text as YAML, import it into a variable, and convert it with filters: {{ some_variable | to_json }}.
Note: have a look at the formatting, without escaped quotes. That will help.
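If you want to keep the body in a Jinja2 template file but still avoid writing any file to disk, the template lookup can render it straight into a string on the controller. A sketch, assuming a local gsa_app.json.j2 next to the playbook (the file name is just an example):

- name: GSA app definition from a template
  uri:
    url: "http://localhost:8764/api/apps?relatedObjects=false"
    method: POST
    force_basic_auth: yes
    user: "{{ admin_name }}"
    password: "{{ admin_pass }}"
    body_format: json
    # renders the template locally and sends the result as the request body
    body: "{{ lookup('template', 'gsa_app.json.j2') }}"
    status_code: 200
  register: app_gsa_cfg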
It makes no sense to create a file with Jinja2 if you are not going to copy it to a remote host. Ansible supports Jinja natively, and its strength is the possibility to use plugins for better maintainability. There is no difference from the template (or win_template) modules unless (as said) you copy the file somewhere. Look at this example:
---
- name: Adhoc Jinja
  hosts: localhost
  connection: local
  gather_facts: false

  vars:
    mytemplate:
      - name: "GSA"
        description: "Contains all configuration specific to the migrated GSA legacy searches"
      - name: "Another Name"
        description: "Contains Another Var"

  tasks:
    - name: Read Vars Loop
      debug:
        msg: "{{ item | to_json }}"
      with_items: "{{ mytemplate }}"

    - name: Include Vars
      include_vars: adhocjinja2.yml

    - name: Read Vars Loop
      debug:
        msg: "{{ item | to_json }}"
      with_items: "{{ mytemplate }}"
And adhocjinja2.yml:
mytemplate:
  - name: "GSA2"
    description: "Contains all configuration specific to the migrated GSA legacy searches"
  - name: "Another Name 2"
    description: "Contains Another Var"
The output is:
TASK [Read Vars Loop] **************************************************************************************
ok: [localhost] => (item={'name': 'GSA', 'description': 'Contains all configuration specific to the migrated GSA legacy searches'}) => {
"msg": "{\"name\": \"GSA\", \"description\": \"Contains all configuration specific to the migrated GSA legacy searches\"}"
}
ok: [localhost] => (item={'name': 'Another Name', 'description': 'Contains Another Var'}) => {
"msg": "{\"name\": \"Another Name\", \"description\": \"Contains Another Var\"}"
}
TASK [Include Vars] ****************************************************************************************
ok: [localhost]
TASK [Read Vars Loop] **************************************************************************************
ok: [localhost] => (item={'name': 'GSA2', 'description': 'Contains all configuration specific to the migrated GSA legacy searches'}) => {
"msg": "{\"name\": \"GSA2\", \"description\": \"Contains all configuration specific to the migrated GSA legacy searches\"}"
}
ok: [localhost] => (item={'name': 'Another Name 2', 'description': 'Contains Another Var'}) => {
"msg": "{\"name\": \"Another Name 2\", \"description\": \"Contains Another Var\"}"
}
You can manage your variables as you want and create your JSON on the fly, as Ansible has Jinja and JSON at its heart.
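Tying this back to the original uri task, a sketch of how the body could be built from a variable instead of a hardcoded string (the app_definition variable name is just an example):

- name: GSA app definition
  vars:
    app_definition:
      name: "GSA"
      description: "Contains all configuration specific to the migrated GSA legacy searches"
  uri:
    url: "http://localhost:8764/api/apps?relatedObjects=false"
    method: POST
    force_basic_auth: yes
    user: "{{ admin_name }}"
    password: "{{ admin_pass }}"
    body_format: json
    # with body_format: json the dict is serialized to JSON automatically
    body: "{{ app_definition }}"
    status_code: 200
    timeout: 15
  register: app_gsa_cfg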