I'm writing a playbook to create many user accounts across many servers. At the end I want to get output with credentials sorted by username.
I used set_fact with run_once, but it seems the variable defined this way is not playbook-wide.
main.yml
- name: Create users
  import_tasks: creation_task.yml
creation_task.yml
- name: Init variable for creds
  set_fact:
    creds: []
  delegate_to: localhost
  run_once: true

- name: Create specific users
  include: create.yml
  with_items:
    - input_data
    - .......

- name: Print output creds
  debug: var=creds
  run_once: true
create.yml
- name: some actions that actually create users
  ....

- name: add creds to list
  set_fact:
    creds: "{{ creds + [ {'hostname': inventory_hostname, 'username': item.name, 'password': password.stdout} ] }}"

- name: add splitter to list
  set_fact:
    creds: "{{ creds + [ '-----------------------------------------------------' ] }}"
This is actually working, but I get the output sorted by server because (as I think) every host reports its own version of the "creds" variable.
I'd like to create one variable that is visible and writeable across all nested plays, so that the output would be sorted by input data rather than by hostname. Is that possible?
I'd use the following syntax to fetch a variable set on a specific host via hostvars:
- debug:
    msg: "{{ groups['my_host_name'] | map('extract', hostvars, 'my_variable_name') | list | first }}"
  when: groups['my_host_name'] | map('extract', hostvars, 'my_variable_name') | list | first | length > 0
Then, you can loop over your hostnames to create an array of values and sort them.
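A minimal, untested sketch of that loop-and-sort step, assuming the per-host creds lists from the question (the selectattr filter drops the plain-string splitter entries, which have no username attribute to sort on):

```yaml
- name: Collect creds from every host and sort by username
  debug:
    msg: >-
      {{ groups['all']
         | map('extract', hostvars, 'creds')
         | select('defined')
         | flatten
         | selectattr('username', 'defined')
         | sort(attribute='username') }}
  run_once: true
```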
Though, printing all server hostnames, usernames and plain-text passwords into a text file seems like a security risk.
I created a Workflow job in AWX containing 2 jobs:
Job 1 uses the credentials of the Windows server where we get the JSON file from. It reads the content and puts it in a variable using set_stats.
Job 2 uses the credentials of the server the JSON file is uploaded to. It reads the content of the variable set in Job 1's set_stats task and creates a JSON file with that content.
First job:
- name: get content
  win_shell: 'type {{ file_dir }}{{ file_name }}'
  register: content

- name: write content
  debug:
    msg: "{{ content.stdout_lines }}"
  register: result

- set_fact:
    this_local: "{{ content.stdout_lines }}"

- set_stats:
    data:
      test_stat: "{{ this_local }}"

- name: set hostname in a variable
  set_stats:
    data:
      current_hostname: "{{ ansible_hostname }}"
    per_host: no
Second job:
- name: convert to json and copy the file to destination control node.
  copy:
    content: "{{ test_stat | to_json }}"
    dest: "/tmp/{{ current_hostname }}.json"
How can I get the current_hostname, so that the created json file is named <original_hostname>.json? In my case it's concatenating the two hosts which I passed in the first job.
In my case it's concatenating the two hosts which I passed in the first job
... which is precisely what you asked for, since you used per_host: no as a parameter to set_stats to gather the current_hostname stat globally for all hosts, and aggregate: yes is the default.
Anyhow, this is not exactly the intended use of set_stats and you are making this overly complicated IMO.
You don't need two jobs. In this particular case, you can delegate the write task to a linux host in the middle of a play dedicated to windows hosts (and one awx job can use several credentials).
Here is a pseudo, untested playbook to give you the idea. You'll want to read the slurp module documentation; I used it to replace your shell task reading the file (which is a bad practice).
Assuming your inventory looks something like:
---
windows_hosts:
  hosts:
    win1:
    win2:
linux_hosts:
  hosts:
    json_file_target_server:
The playbook would look like:
- name: Gather jsons from win and write to linux target
  hosts: windows_hosts
  tasks:
    - name: Get file content
      slurp:
        src: "{{ file_dir }}{{ file_name }}"
      register: json_file

    - name: Push json content to target linux
      copy:
        content: "{{ json_file.content | b64decode | to_json }}"
        dest: "/tmp/{{ inventory_hostname }}.json"
      delegate_to: json_file_target_server
I created roles for getting different passwords from CyberArk. I noticed that everything in the roles was the same except for the query parameter to find the password, so I decided to make the query parameter a variable, and I send it to a new single CyberArk role as a variable.
Example from Playbook:
- name: Get AVI Dev Or Prod password
  import_role:
    name: /Users/n0118883/python/ansible/roles/cyberark_creds
  vars:
    query_parm: "Username=admin;Address=avi;Environment=Development"

- name: Get Venafi password
  import_role:
    name: /Users/n0118883/python/ansible/roles/cyberark_creds
  vars:
    query_parm: "Username=sahsp-avi-venafi;Address=LM"
I was hoping to call the role, pass the query parameter, and use multiple "set_fact" with a when clause to have the correct password assigned to the correct fact.
Example from Role:
- name: Get PW from cyberark
  cyberark_credential:
    api_base_url: "https://cyberark.lmig.com"
    app_id: "AVI_Cyberark_Automation"
    query: "{{ query_parm }}"
  register: cyberark_command_output

- set_fact:
    admin_password: "{{ cyberark_command_output | json_query('result.Content') }}"
  when: '"Address=avi" in query_parm'

- set_fact:
    venafi_password: "{{ cyberark_command_output | json_query('result.Content') }}"
  when: '"sahsp-avi-venaf" in query_parm'

- name: "Return"
  debug:
    msg: "{{ admin_password }}"

- name: "Return"
  debug:
    msg: "{{ venafi_password }}"
No matter what I change, when I run the playbook I either get the first password twice, or I get the first password and the error "The error was: 'venafi_password' is undefined" for the second.
In one playbook, I go to CyberArk to get three different passwords. I'm not sure if I'm trying to do too much with a role and should go back to three separate roles, or is this possible, but I'm just doing it wrong.
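One hedged, untested alternative to the when-clause matching: pass the name of the fact to set into the role as a variable (fact_name here is a hypothetical variable, not part of the original role). Keys in set_fact are templated, so one task in the role can serve every caller:

```yaml
# In the playbook (sketch):
- name: Get AVI password
  import_role:
    name: cyberark_creds
  vars:
    query_parm: "Username=admin;Address=avi;Environment=Development"
    fact_name: admin_password    # hypothetical: which fact the role should set

# In the role, replacing the conditional set_fact tasks (sketch):
- name: Store the password under the requested fact name
  set_fact:
    "{{ fact_name }}": "{{ cyberark_command_output | json_query('result.Content') }}"
```

This way the role never needs to know which substrings appear in query_parm, and a third or fourth password only needs another import_role call.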
I am trying to create a playbook which is managing to create some load balancers.
The playbook takes a configuration YAML in input, which is formatted like so:
-----configuration.yml-----
virtual_servers:
  - name: "test-1.local"
    type: "standard"
    vs_port: 443
    description: ""
    monitor_interval: 30
    ssl_flag: true
As you can see, this defines a list of load balancing objects with the relative specifications.
If I want to create for example a monitor instance, which depends on these definitions, I created this task which is defined within a playbook.
-----Playbook snippet-----
...
- name: "Creator | Create new monitor"
  include_role:
    name: vs-creator
    tasks_from: pool_creator
  with_items: "{{ virtual_servers }}"
  loop_control:
    loop_var: monitor_item
...
-----Monitor Task-----
- name: "Set monitor facts - Site 1"
  set_fact:
    monitor_name: "{{ monitor_item.name }}"
    monitor_vs_port: "{{ monitor_item.vs_port }}"
    monitor_interval: "{{ monitor_item.monitor_interval }}"
    monitor_partition: "{{ hostvars['localhost']['vlan_partition'] | first }}"
...
(omissis)

- name: "Create HTTP monitor - Site 1"
  bigip_monitor_http:
    state: present
    name: "{{ monitor_name }}_{{ monitor_vs_port }}.monitor"
    partition: "{{ monitor_partition }}"
    interval: "{{ monitor_interval }}"
    timeout: "{{ monitor_interval | int * 3 | int + 1 | int }}"
    provider:
      server: "{{ inventory_hostname }}"
      user: "{{ username }}"
      password: "{{ password }}"
  delegate_to: localhost
  when:
    - site: 1
    - monitor_item.name | regex_search(regex_site_1) != None
...
As you can probably already see, I have a few problems with this code, the main one which I would like to optimize is the following:
The creation of a load balancer (virtual_server) involves multiple tasks (creation of a monitor, pool, etc...), and I would need to treat each list element in the configuration like an object to create, with all the necessary definitions.
I would need to do this for different sites which pertain to our datacenters - for which I use regex_site_1 and site: 1 in order to get the correct one... though I realize that this is not ideal.
The script does that as of now, but I believe it's not well managed, and I'm at a loss as to what approach I should take in developing this playbook: I was thinking about looping over the playbook with each element from the configuration list, but apparently this is not possible, and I'm wondering if there's any way to do it, if possible with an example.
Thanks in advance for any input you might have.
If you can influence the input data, I advise turning the elements of virtual_servers into hosts.
In this case inventory will look like this:
virtual_servers:
  hosts:
    test-1.local:
      vs_port: 443
      description: ""
      monitor_interval: 30
      ssl_flag: true
And all the code will become a bliss:
- hosts: virtual_servers
  tasks:
    - name: Do something
      delegate_to: other_host
      debug: msg=done
...
Ansible will create all the loops for you for free (no need for include_role or odd loops), and most things involving variables become very easy. Each host has its own set of variables which you just ... use.
The part where 'we are doing configuration on a real host, not this virtual one' is handled by delegate_to.
This is idiomatic Ansible and it's better to follow this way. Every time you have include_role within a loop, you have almost certainly made a mistake in designing the inventory.
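As a rough, untested sketch of the asker's monitor task under that inventory (provider credentials and the vs-creator role specifics omitted), each virtual server's variables are simply in scope:

```yaml
- hosts: virtual_servers
  gather_facts: false
  tasks:
    - name: Create HTTP monitor for this virtual server
      bigip_monitor_http:
        state: present
        name: "{{ inventory_hostname }}_{{ vs_port }}.monitor"
        interval: "{{ monitor_interval }}"
        timeout: "{{ monitor_interval | int * 3 + 1 }}"
      delegate_to: localhost
```

Per-site selection could then be done with inventory groups (one group per datacenter) instead of regex matching on names.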
I'm new to Ansible & I've been trying to read the content of a file, split it based on a specific criteria & then I want to copy that content or return that content.
for example, a file sample.txt contains:
userid= "abc"
I want to read the content in sample.txt and split it wherever there's a '=' sign, so that I can extract the creds (userid and abc) and then use them further.
I'm dropping drafts of the code snippets I've tried.
---
- name: extracting creds
  hosts: servers
  tasks:
    - name: read secure value
      lineinfile:
        path: /home/usr/Desktop/sample.txt
      register: creds
      debug:
        msg: "{{ creds.split('=') }}"
Another code I tried:
---
- name: Creds
  hosts: servers
  vars:
    test: /home/usr/Desktop/sample.txt
  tasks:
    - debug:
        msg: "{{ lookup('file', test).split('=') }}"
Neither of them works :( What should I follow to get it done?
You can also try the following approach to read the contents from file and split them.
---
- hosts: localhost
  tasks:
    - name: add host
      add_host:
        hostname: "{{ server1 }}"
        groups: host1

- hosts: host1
  become: yes
  tasks:
    - name: Fetch the sample file
      slurp:
        src: /tmp/sample.txt
      register: var1

    - name: extract content for matching pattern
      set_fact:
        sample_var1: "{{ var1['content'] | b64decode | regex_findall('(.+=.+)', multiline=True, ignorecase=True) }}"

    - debug:
        msg: "{{ item.split('=')[1] }}"
      loop: "{{ sample_var1 }}"
According to the ansible docs, this is what lineinfile does. So if you want to read some content from one file and write it to another file, this module won't help.
This module ensures a particular line is in a file, or replace an
existing line using a back-referenced regular expression. This is
primarily useful when you want to change a single line in a file
only.
lookup, on the other hand, works on the control machine. Judging by the code you added, maybe you were trying to use a file on the target host, so lookup won't help either.
If the file is available on the local/control host, then read the file, split the content, write it to another file on the control machine, and copy the final file to the target host using the copy module. Here is a sample that reads a file from the control host and splits every line using = as a separator.
- hosts: localhost
  tasks:
    - debug:
        msg: "{{ item.split('=') }}"
      with_lines: "cat /home/usr/Desktop/sample.txt"
If the file is on remote/managed host then you can use something like below:
- hosts: servers
  tasks:
    - command: "cat /home/usr/Desktop/sample.txt"
      register: content

    - debug:
        msg: "{{ item.split('=') }}"
      loop: "{{ content.stdout_lines }}"
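If you then want the extracted values as usable variables rather than debug output, a hedged sketch that builds a dict from the key=value lines (creds is a hypothetical fact name; assumes every line contains exactly one =):

```yaml
- hosts: servers
  tasks:
    - command: "cat /home/usr/Desktop/sample.txt"
      register: content

    - name: Build a creds dict, e.g. {'userid': '"abc"'}
      set_fact:
        creds: "{{ creds | default({}) | combine({item.split('=')[0] | trim: item.split('=')[1] | trim}) }}"
      loop: "{{ content.stdout_lines }}"
```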
I have a playbook which contains more than one play. One of the plays generates a variable and stores it as an artifact using the set_stats module. The subsequent plays need to access the variable, but an error occurs saying that the variable is undefined. How can I access a variable in the artifacts? (Btw, using a workflow, which would save the variable in extra_variables instead of the artifacts container, is not an option in this scenario.)
The Problem in detail:
I have the following playbook which includes 2 plays which get executed on different hosts:
---
- hosts: ansible
  roles:
    - role_parse_strings

- hosts: all, !ansible
  roles:
    - role_setup_basics
    - role_create_accounts
The role "role_parse_strings" in the first play generates the variable "users" which gets stored because of the set_stats module as an artifact. The following content lands in the artifact section of ansible awx:
users:
  - username: user1
    admin: true
  - username: user2
    admin: false
When the role "role_create_accounts" gets executed which tries to access the variable "users" in the following way...
- user: name={{ item.username }}
        shell=/bin/bash
        createhome=yes
        groups=user
        state=present
  with_items: "{{ users }}"
..this error gets displayed:
{
    "msg": "'users' is undefined",
    "_ansible_no_log": false
}
You can use set_fact to share a variable between hosts. The example below shows how to share file content via set_fact.
- hosts: host1
  pre_tasks:
    - name: Slurp the public key
      slurp:
        src: /tmp/ssh_key.pub
      register: my_key_pub

    - name: Save the public key
      set_fact:
        my_slave_key: >-
          {{ my_key_pub['content'] | b64decode }}

- hosts: host2
  vars:
    slave_key: "{{ my_slave_key }}"
  pre_tasks:
    - set_fact:
        my_slave_key: >-
          {{ hostvars[groups["host1"][0]].my_slave_key | trim }}
We saved the content of the public key as a fact named my_slave_key and assigned it to another variable, slave_key, in host2 with:
hostvars[groups["host1"][0]].my_slave_key
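Applied to the question's users variable, a minimal untested sketch (this assumes role_parse_strings can set users with set_fact rather than set_stats, since set_stats artifacts do not populate hostvars):

```yaml
- hosts: ansible
  roles:
    - role_parse_strings   # must set_fact: users: [...] for hostvars to see it

- hosts: all,!ansible
  tasks:
    - user:
        name: "{{ item.username }}"
        shell: /bin/bash
        createhome: yes
        groups: user
        state: present
      with_items: "{{ hostvars[groups['ansible'][0]]['users'] }}"
```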