Ansible SNOW Module

How can we add multiple Configuration Items (CIs) to a ServiceNow incident using the Ansible snow_record module? I tried looping multiple CIs through snow_record -> data -> cmdb_ci, but it appears to update only a single CI on the ticket rather than adding multiple CIs to the Affected CIs list.
- snow_record:
    username: "{{ snow_user }}"
    password: "{{ snow_password }}"
    instance: "{{ snow_instance }}"
    state: present
    number: "INC0XXX"
    data:
      cmdb_ci: example1.com
  #loop:
  #  - example1.com
  #  - example2.com

I think I found the solution. cmdb_ci: only sets the primary CI on the ticket. To add additional CIs to the Affected CIs list, we need to use the task_ci table and pass the additional CI name along with the ticket number. It is working fine now. :)
- snow_record:
    username: "{{ snow_user }}"
    password: "{{ snow_password }}"
    instance: "{{ snow_instance }}"
    state: present
    table: task_ci
    data:
      ci_item: "{{ item }}"
      task: "INCXXXX"
  loop:
    - example1.com
    - example2.com

Related

Ansible can't loop through subelements in variables files

I have the following user lists in separate files.
The idea is to create multiple users and assign them to different user groups.
To keep things short, I have trimmed the lists; in reality they also include passwords and other fields.
First variables file:
userlist-os:
  group: os
  users:
    - comment: Test User
      username: ostest1
      user_id: 9404
      user_state: present
    - comment: Test User
      username: ostest2
      user_id: 9405
      user_state: present
Second variables file:
userlist-zos:
  group: zos
  users:
    - comment: Test User1
      username: zostest1
      user_id: 9204
      user_state: present
    - comment: Test User2
      username: zostest2
      user_id: 9205
      user_state: present
This is what my playbook looks like:
- name: test
  hosts: all
  user: root
  vars_files:
    - [userlist-zos.yml]
    - [userlist-os.yml]
  tasks:
    - name: Create user accounts
      user:
        name: "{{ item.users.username }}"
        update_password: on_create
        uid: "{{ item.users.user_id }}"
        shell: /bin/bash
        create_home: yes
        group: "{{ item.group }}"
        state: present
        comment: "{{ item.users.comment }}"
      when: item.users.user_state == 'present'
      with_items:
        - "{{ userlist-os }}"
        - "{{ userlist-zos }}"
The problem is that I'm not getting at the subelements of users (the variable username is undefined), but when I set an index, e.g. name: "{{ item.users.0.username }}", I do get the first username from each file.
Any help is appreciated.
In your scenario, item.users is a list of users, not a dictionary, so it has no username field itself; its list elements are what carry that field. That is why you were able to access the first element of the list with "item.users.0.username". What I suggest is to access these nested variables through an included task file with a loop variable, as follows:
main.yaml
- name: Trial
  hosts: localhost
  vars:
    # YOUR VARS
  tasks:
    - name: Create user accounts
      include_tasks: helper.yml
      with_items:
        - "{{ userlistos }}"
        - "{{ userlistzos }}"
      loop_control:
        loop_var: list
helper.yml
- name: Create user accounts
  user:
    name: "{{ item.username }}"
    update_password: on_create
    uid: "{{ item.user_id }}"
    shell: /bin/bash
    create_home: yes
    group: "{{ list.group }}"
    state: present
    comment: "{{ item.comment }}"
  when: item.user_state == 'present'
  with_items:
    - "{{ list.users }}"

Use a Jinja2 dict as part of an Ansible module's options

I have the following dict:
endpoint:
  esxi_hostname: servername.domain.com
I'm trying to use it as an option for the vmware_guest module via Jinja2, but have been unsuccessful. The reason I'm doing it this way is that the dict is dynamic: it can be either cluster: clustername or esxi_hostname: hostname, which are mutually exclusive in the vmware_guest module.
Here is how I'm presenting it to the module:
- name: Create VM pysphere
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: no
    datacenter: "{{ ansible_host_datacenter }}"
    folder: "/DCC/{{ ansible_host_datacenter }}/vm"
    "{{ endpoint }}"
    name: "{{ guest }}"
    state: present
    guest_id: "{{ osid }}"
    disk: "{{ disks }}"
    networks: "{{ niclist }}"
    hardware:
      memory_mb: "{{ memory_gb|int * 1024 }}"
      num_cpus: "{{ num_cpus|int }}"
      scsi: "{{ scsi }}"
    customvalues: "{{ customvalues }}"
    cdrom:
      type: client
  delegate_to: localhost
And here is the error I'm getting when including the tasks file:
TASK [Preparation : Include VM tasks] *********************************************************************************************************************************************************************************
fatal: [10.10.10.10]: FAILED! => {"reason": "Syntax Error while loading YAML.
The error appears to have been in '/data01/home/hit/tools/ansible/playbooks/roles/Preparation/tasks/prepareVM.yml': line 36, column 4, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
"{{ endpoint }}"
hostname: "{{ vcenter_hostname }}"
^ here
We could be wrong, but this one looks like it might be an issue with
missing quotes. Always quote template expression brackets when they
start a value. For instance:
with_items:
- {{ foo }}
Should be written as:
with_items:
- "{{ foo }}"
exception type: <class 'yaml.parser.ParserError'>
exception: while parsing a block mapping
in "<unicode string>", line 33, column 3
did not find expected key
in "<unicode string>", line 36, column 4"}
So in summary, I'm not sure how to format this or if it is even possible.
The post from techraf sums up your problem, but for a possible solution: in the docs, especially regarding Jinja2 filters, there is the following bit:
Omitting Parameters
As of Ansible 1.8, it is possible to use the default filter to omit
module parameters using the special omit variable:
- name: touch files with an optional mode
  file: dest={{ item.path }} state=touch mode={{ item.mode | default(omit) }}
  with_items:
    - path: /tmp/foo
    - path: /tmp/bar
    - path: /tmp/baz
      mode: "0444"
For the first two files in the list, the default mode will be determined by the umask of the system, as the mode= parameter will not be sent to the file module, while the final file will receive the mode=0444 option.
So it looks like what should be tried is:
esxi_hostname: "{{ endpoint.esxi_hostname | default(omit) }}"
# however you want the alternative cluster settings done.
# I don't know this module.
cluster: "{{ cluster | default(omit) }}"
This obviously relies on the vars having only one of the two options set.
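A fuller sketch of how those lines could sit in the task from the question; this assumes endpoint carries exactly one of the two keys (esxi_hostname or cluster), so the other falls back to omit and is never sent to the module:
- name: Create VM pysphere
  vmware_guest:
    hostname: "{{ vcenter_hostname }}"
    username: "{{ username }}"
    password: "{{ password }}"
    validate_certs: no
    datacenter: "{{ ansible_host_datacenter }}"
    folder: "/DCC/{{ ansible_host_datacenter }}/vm"
    # Only whichever key exists in endpoint is passed on; the other is omitted.
    esxi_hostname: "{{ endpoint.esxi_hostname | default(omit) }}"
    cluster: "{{ endpoint.cluster | default(omit) }}"
    name: "{{ guest }}"
    state: present
  delegate_to: localhost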
There is no way you could ever use the syntax you tried in the question, because first and foremost Ansible requires a valid YAML file.
The closest workaround would be to use a YAML anchor/alias, although it would work only with literals:
# ...
vars:
  endpoint: &endpoint
    esxi_hostname: servername.domain.com
tasks:
  - name: Create VM pysphere
    vmware_guest:
      hostname: "{{ vcenter_hostname }}"
      username: "{{ username }}"
      password: "{{ password }}"
      validate_certs: no
      datacenter: "{{ ansible_host_datacenter }}"
      folder: "/DCC/{{ ansible_host_datacenter }}/vm"
      <<: *endpoint
      name: "{{ guest }}"
      state: present
      guest_id: "{{ osid }}"
      disk: "{{ disks }}"
      networks: "{{ niclist }}"
      hardware:
        memory_mb: "{{ memory_gb|int * 1024 }}"
        num_cpus: "{{ num_cpus|int }}"
        scsi: "{{ scsi }}"
      customvalues: "{{ customvalues }}"
      cdrom:
        type: client
    delegate_to: localhost

Ansible: How to create uids within a certain range

I am currently working on a host where I have installed Ansible. I have created 2 application group accounts with nologin shells, and within those groups I want to add users, so that every department has its own ansible directory.
My vars look like below:
---
- hosts: localhost
  become: yes
  vars:
    ansible_groupuser:
      - name: "ansible-dictators"
        ansible_groupuser_uid: "3000"
        ansible_users:
          - idia
          - josefs
          - donaldt
          - kimjongu
      - name: "ansible-druglords"
        ansible_groupuser_uid: "3001"
        ansible_users:
          - pabloe
          - javierg
          - frankl
          - rossu
Now I have 2 plays: 1 to create the groupuser:
# This creates the groupuser
- name: Play 1 Create central ansible user and group per department
  user:
    name: "{{ item.name }}"
    shell: "/sbin/nologin"
    home: "/home/{{ item.name }}"
    comment: "{{ item.name }} Group Account"
    uid: "{{ item.ansible_groupuser_uid }}"
    append: "yes"
  with_items:
    - "{{ ansible_groupuser }}"
And 1 to create the "normal" users:
- name: Play 2 Create users
  user:
    name: "{{ item.1 }}"
    shell: "/bin/bash"
    home: "/home/{{ item.1 }}"
    comment: "{{ item.1 }}"
    groups: "{{ item.0.name }}"
    append: "yes"
  with_subelements:
    - "{{ ansible_groupuser }}"
    - ansible_users
If I run this play it creates the groupuser ansible-dictators with uid 3000 and ansible-druglords with 3001; idia gets 3002, josefs gets 3003, etc. It gets kind of messy: when I want to add a third groupuser like ansible-rockstars, it starts counting at the first available uid, 3010. What I want is to place the groupusers and the common users in 2 different ranges (2000 and 3000, for example).
When I use with_together on the first play, like below, it works:
- name: Play1 Create central ansible user and group per department
  user:
    name: "{{ item.0.name }}"
    shell: "/sbin/nologin"
    home: "/home/{{ item.0.name }}"
    comment: "{{ item.0.name }} Group Account"
    uid: "{{ item.1 }}"
    append: "yes"
  with_together:
    - "{{ ansible_groupuser }}"
    - "{{ range(3000, 3020) | list }}"
  when: item.0 != None
But when I use with_together on the second play, it doesn't work:
- name: Create users
  user:
    name: "{{ item.1 }}"
    shell: "/bin/bash"
    home: "/home/{{ item.1 }}"
    comment: "{{ item.1 }}"
    groups: "{{ item.0.name }}"
    append: "yes"
    uid: "{{ item.2 }}"
  with_together:
    - "{{ ansible_groupuser }}"
    - ansible_users
    - "{{ range(2000, 2020) | list }}"
Does anyone have a suggestion for making the second play work with a uid in a certain range, or another way to get the uids into different ranges per group? Giving the groupusers a uid in the vars is no problem, but I am expecting a lot of "common" users (50+) and I don't want to specify a uid for every one of them.
I hope it makes sense. Thanks in advance.
I think the range(...) approach has a flaw: if you delete a user from your list in the future, the IDs of the subsequent entries will shift and you can end up with messed-up file permissions on your system.
You could patch the user module to support the --firstuid/--lastuid arguments of the underlying adduser command, so that you can set a different range for uid generation.
But I'd suggest defining "static" uids for the top-level users in your vars file (from some predefined range, say 3000..30xx); this way you can safely add or remove top-level users/groups in the future.
And leave the "common" users to get their ids automatically, so adding or deleting them will not mess up your ids. If you want them to come from a specific range, you can modify the system-wide /etc/adduser.conf with FIRST_UID=5000/LAST_UID=6000.

Ansible: Pass multiple values with a single defined variable

I need to add a server to a service group every time I create a new server, using the following task.
Task
- name: Create a service group
  a10_service_group_v3:
    validate_certs: no
    host: "{{ item.0.a10_host }}"
    state: "{{ item.1.service_state }}"
    username: "{{ item.0.user }}"
    password: "{{ item.0.pass }}"
    service_group: "{{ item.1.group_name }}"
    reset_on_server_selection_fail: yes
    servers:
      - name: "{{ item.1.server_name1 }}"
        port: "{{ item.1.server_port1 }}"
    overwrite: yes
    write_config: yes
  ignore_errors: yes
  with_nested:
    - "{{ a10 }}"
    - "{{ service_group }}"
Variables:
service_group:
  - group_name: bif_sg
    service_state: present
    server_name1: bif01
    server_port1: 80
I need help with passing variables for server_name and server_port. Let's say I have 3 servers to add to the service group; in the task I then need to add server_name1/server_port1, server_name2/server_port2, and so on.
Every time I add a server I need to update the task as well :(
Is there a way to pass server_name and server_port multiple times with a single defined variable in the task?
If you expect service_group to hold a list of servers, refactor your variable to contain an actual list of servers instead of a bunch of separate subkeys:
service_group:
  - group_name: bif_sg
    service_state: present
    servers:
      - name: bif01
        port: 80
      - name: bif02
        port: 8080
And in your task:
...
servers: "{{ item.1.servers }}"
...
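Put together with the task from the question, that would look roughly like this (only the servers: option changes; everything else is carried over unchanged):
- name: Create a service group
  a10_service_group_v3:
    validate_certs: no
    host: "{{ item.0.a10_host }}"
    state: "{{ item.1.service_state }}"
    username: "{{ item.0.user }}"
    password: "{{ item.0.pass }}"
    service_group: "{{ item.1.group_name }}"
    reset_on_server_selection_fail: yes
    servers: "{{ item.1.servers }}"   # the whole server list now comes from the variable
    overwrite: yes
    write_config: yes
  ignore_errors: yes
  with_nested:
    - "{{ a10 }}"
    - "{{ service_group }}"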

How do I add routes to a VPC's default/main route table with Ansible ec2_vpc_route_table?

I'm having real trouble adding additional routes to a newly created VPC's default Route Table in a NAT scenario, using Ansible and the ec2_vpc_route_table module. The relevant excerpts from my playbook are...
Creating the VPC
- name: Create Production VPC
  ec2_vpc:
    region: "{{ aws.region }}"
    aws_access_key: "{{ aws.access_key }}"
    aws_secret_key: "{{ aws.secret_key }}"
    state: present
    cidr_block: 10.0.0.0/16
    resource_tags: { "Name": "Production" }
    internet_gateway: yes
    dns_hostnames: yes
    dns_support: yes
    subnets:
      - cidr: 10.0.10.0/24
        resource_tags: { "Name": "A - NAT" }
      - cidr: 10.0.20.0/24
        resource_tags: { "Name": "B - Public" }
      - cidr: 10.0.30.0/24
        resource_tags: { "Name": "C - Private" }
    wait: yes
  register: prod_vpc
Gather facts and append new routes
- name: Gather default Route Table facts
  ec2_vpc_route_table_facts:
    region: "{{ aws.region }}"
    filters:
      vpc-id: "{{ prod_vpc.vpc.id }}"
  register: vpc_default_route

- name: Add NAT routes
  ec2_vpc_route_table:
    aws_access_key: "{{ aws.access_key }}"
    aws_secret_key: "{{ aws.secret_key }}"
    vpc_id: "{{ prod_vpc.vpc.id }}"
    region: "{{ aws.region }}"
    route_table_id: "{{ vpc_default_route.route_tables[0].id }}"
    lookup: id
    tags:
      Name: NAT
    subnets:
      - '10.0.10.0/24'
    routes:
      - dest: 0.0.0.0/0
        gateway_id: igw
First off, I tried to include the creation of the routes within the ec2_vpc task, but that actually ended up creating a second Route Table rather than adding the routes to the default table.
So, first question: why does a second table get created instead of the routes being added to the default one?
Because including it in the VPC creation wasn't working, I fell back on the above, which just identifies the default (main) Route Table. The problem now is that when I try to use ec2_vpc_route_table to add the NAT route, Ansible fails with the following error...
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Unable to associate subnets for route table RouteTable:rtb-..., error: EC2ResponseError: 400 Bad Request\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Response><Errors><Error><Code>InvalidParameterValue</Code><Message>cannot disassociate the main route table association rtbassoc-...</Message></Error></Errors><RequestID>...</RequestID></Response>"}
That error makes it sound like, despite the fact that I'm explicitly providing route_table_id and lookup: id, it's not updating the existing table but basically recreating it, trying to delete the original (which is the main route table) in the process.
How can I append additional routes to an existing main route table?
So far the only workaround I've found is to call the EC2 CLI via the command: ... method, but that's obviously not ideal.
We're on Ansible 2.3.0 (devel 14a2757116)
Any help would be greatly appreciated!
I've managed to update the main route table by looking it up and then using the route_table_id to modify it.
- name: Lookup VPC facts
  ec2_vpc_net_facts:
    region: "{{ region }}"
    filters:
      "tag:Name": "{{ vpc_name }}"
  register: vpc_group

- name: Create vgw
  ec2_vpc_vgw:
    region: "{{ region }}"
    vpc_id: "{{ vpc_group.vpcs[0].id }}"
    name: "{{ vpc_name }}"
    type: ipsec.1
    state: present
    validate_certs: no
  register: vpc_vgw

- name: Create igw
  ec2_vpc_igw:
    region: "{{ region }}"
    vpc_id: "{{ vpc_group.vpcs[0].id }}"
    state: present
    validate_certs: no
  register: igw

- name: Lookup route tables
  ec2_vpc_route_table_facts:
    region: "{{ region }}"
    filters:
      vpc-id: "{{ vpc_group.vpcs[0].id }}"
  register: vpc_route_tables

- name: Setup route tables
  ec2_vpc_route_table:
    region: "{{ region }}"
    vpc_id: "{{ vpc_group.vpcs[0].id }}"
    lookup: id
    purge_subnets: false
    route_table_id: "{{ vpc_route_tables.route_tables[0].id }}"
    subnets: "{{ subnet_ids }}"
    routes:
      - dest: 10.0.0.0/8
        gateway_id: "{{ vpc_vgw.vgw.id }}"
      - dest: 0.0.0.0/0
        gateway_id: "{{ igw.gateway_id }}"
Note: You need to disable purging of the subnets, otherwise you'll get an error when Ansible tries to remove the default subnet association from the main route table.
From http://docs.ansible.com/ansible/ec2_vpc_route_table_module.html#options
Look up route table by either tags or by route table ID. Non-unique tag lookup will fail. If no tags are specified, then no lookup for an existing route table is performed and a new route table will be created. To change tags of a route table or delete a route table, you must look up by id.
Try lookup: tag instead of lookup: id.
That should work.
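Applied to the task from the question, that suggestion would look roughly like this; this is only a sketch, keeping the rest of the parameters from the question unchanged and adding purge_subnets as noted in the answer above:
- name: Add NAT routes
  ec2_vpc_route_table:
    aws_access_key: "{{ aws.access_key }}"
    aws_secret_key: "{{ aws.secret_key }}"
    vpc_id: "{{ prod_vpc.vpc.id }}"
    region: "{{ aws.region }}"
    lookup: tag              # look the table up by its tags instead of by id
    tags:
      Name: NAT
    purge_subnets: false     # avoid touching the main table's default association
    subnets:
      - '10.0.10.0/24'
    routes:
      - dest: 0.0.0.0/0
        gateway_id: igw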
