I am experiencing a puzzling problem in Ansible v2.1.0. Take the inventory below:
[DEV:children]
DEV8
[DEV8]
thehost ansible_ssh_host=10.2.131.26 ansible_ssh_user=someuser1
Now, when I debug {{ hostvars[inventory_hostname].group_names }}, it outputs:
TASK [debug] ************************************************************
ok: [thehost] => {
    "msg": [
        "DEV",
        "DEV8"
    ]
}
Now, for another group of machines:
[PRODCTE:children]
CTE3
[CTE3]
thehost1 ansible_ssh_host=10.2.131.30 ansible_ssh_user=someuser2
output:
TASK [debug] *******************************************************************
ok: [thehost1] => {
    "msg": [
        "CTE3",
        "PRODCTE"
    ]
}
PROBLEM:
[PROD:children]
PRODA
[PRODA:children]
PROD1
[PROD1]
thehost2 ansible_ssh_host=10.2.3.33 ansible_ssh_user=someuser3
output:
TASK [debug] *******************************************************************
ok: [thehost2] => {
    "msg": [
        "PROD",
        "PROD1",
        "PRODA"
    ]
}
Now, if Ansible orders these alphabetically, then consistency cannot be achieved. The output always has to be consistent: if group_names[0] or group_names[1] gives me different values for different groups depending on alphabetical order, the playbooks cannot be standardized.
Anyway, even if I go along with this behavior, I am trying to understand what factors determine the order in which Ansible outputs these values.
If it is alphabetical, then how was PROD1 chosen before PRODA? Does Ansible give digits priority over letters here?
Why should it be parent->children?
I guess it is supposed to be alphabetically sorted. From the Ansible source:
results['group_names'] = sorted([ g.name for g in self.get_groups() if g.name != 'all'])
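This matches your output: sorted() orders strings by character code, and in ASCII the digit '1' (0x31) comes before the letter 'A' (0x41), which is why PROD1 sorts ahead of PRODA. You can check in plain Python:

>>> sorted(['PRODA', 'PROD', 'PROD1'])
['PROD', 'PROD1', 'PRODA']

So the order is not parent->children at all; it is a plain lexicographic sort over all the group names.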
I am trying to build a custom module for our private cloud infrastructure.
I followed this doc http://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html
I created a my_module.py module file.
When I run ansible-playbook playbook/my_module.yml, I get this response:
PLAY [Create, Update, Delete VM] *********************************************************************************************************
TASK [Gathering Facts] ************************************************************************************************************************
ok: [localhost]
TASK [VM create] *************************************************************************************************************************
changed: [localhost]
TASK [dump test output] ***********************************************************************************************************************
ok: [localhost] => {
    "msg": {
        "changed": true,
        "failed": false,
        "message": "goodbye",
        "original_message": "pcp_vm_ansible"
    }
}
PLAY RECAP ************************************************************************************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
Which means it is working fine as expected.
my_module.py
from ansible.module_utils.basic import AnsibleModule


def run_module():
    # Arguments the module accepts
    module_args = dict(
        name=dict(type='str', required=True),
        new=dict(type='bool', required=False, default=False)
    )

    # Seed the result dict that is returned to Ansible as JSON
    result = dict(
        changed=False,
        original_message='',
        message=''
    )

    module = AnsibleModule(
        argument_spec=module_args,
        supports_check_mode=True
    )

    # In check mode, report the unchanged result and exit; a bare
    # "return result" here would produce no JSON output at all
    if module.check_mode:
        module.exit_json(**result)

    result['original_message'] = module.params['name']
    result['message'] = 'goodbye'

    if module.params['new']:
        result['changed'] = True

    if module.params['name'] == 'fail me':
        module.fail_json(msg='You requested this to fail', **result)

    module.exit_json(**result)


def main():
    print("================== Main Called =======================")
    run_module()


if __name__ == '__main__':
    main()
I am trying to print logs to visualize my input data, using print() or even logging:
print("================== Main Called =======================")
But nothing gets printed to the console.
As per Conventions, Best Practices, and Pitfalls, "Modules must output valid JSON only. The top level return type must be a hash (dictionary) although they can be nested. Lists or simple scalar values are not supported, though they can be trivially contained inside a dictionary."
Effectively, the core runtime only communicates with the module via JSON, and the core runtime controls stdout, so standard print statements from the module are suppressed. If you want or need more information out of the execution runtime, I suggest a callback plugin.
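For example, here is a minimal sketch of such a callback plugin (the file name and plugin name are arbitrary choices for this sketch); drop it into a callback_plugins/ directory next to your playbook:

# callback_plugins/print_module_result.py
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

import json

from ansible.plugins.callback import CallbackBase


class CallbackModule(CallbackBase):
    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = 'aggregate'
    CALLBACK_NAME = 'print_module_result'

    def v2_runner_on_ok(self, result):
        # result._result is the dict your module passed to exit_json()
        print("%s => %s" % (result._host.get_name(),
                            json.dumps(result._result, indent=2)))

With this in place, every successful task prints the complete module result to the controller's stdout, so you can inspect exactly what data your module returned.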
I'm using the Ansible slack module to send notifications, which include #mentions for notifiable people. I'm having trouble with multi-word display names, though single-word display names work fine.
For example:
ok: [localhost] => {
    "msg": "message: `SANDPIT` started ELMS and LACM MapGuide\nFYI <#alice> <#bob>"
}
But then <#Genghis Khan> comes to work alongside Alice and Bob. And he, because he's Genghis Khan, of course chooses a multi-word display name. Given the following message:
ok: [localhost] => {
    "msg": "message: `SANDPIT` started ELMS and LACM MapGuide\nFYI <#Genghis Khan> <#bob>"
}
#bob is recognised, <#Genghis Khan> isn't.
I've also tried user ids as recommended at https://api.slack.com/changelog/2017-09-the-one-about-usernames but they don't work any better, so I've reverted to display names. But for completeness here is an example of the message content, with valid user ids but an invalid result:
TASK [slack : debug]
***********************************************************
ok: [localhost] => {
    "msg": "message: `SANDPIT` started ELMS and LACM MapGuide\nFYI <#U12345DK> <#U12345TT>"
}
As mentioned, Slack is not buying these user ids either.
Here's the Ansible task that sends the message to Slack. All tokens etc are working as expected, and this is all happening in the same workspace.
- name: Send message via Slack
  slack:
    token: 'valid token'
    color: 'valid color'
    msg: '{{ message }}'
    link_names: 1
  # problems with notifications should not fail the pipeline
  ignore_errors: yes
According to the official documentation of the msg property of the slack plugin, all angle brackets and ampersands need to be escaped using HTML notation:
Message to send. Note that the module does not handle escaping
characters. Plain-text angle brackets and ampersands should be
converted to HTML entities (e.g. & to &amp;) before sending. See
Slack's documentation (https://api.slack.com/docs/message-formatting)
for more.
So you want to try something like this:
TASK [slack : debug]
***********************************************************
ok: [localhost] => {
    "msg": "message: `SANDPIT` started ELMS and LACM MapGuide\nFYI &lt;#U12345DK&gt; &lt;#U12345TT&gt;"
}
I am trying to filter results that arrive from boto3 in Ansible.
When I use a JMESPath query on the results without the "[?starts_with(...)]" part it works well, but it fails when I add the starts_with syntax:
"state_machines[?starts_with(name,'hello')].state_machine_arn"
in order to filter these results:
{u'boto3': u'1.4.4',
 u'state_machines': [
     {u'state_machine_arn': u'<state machine arn 1>', u'name': u'hello_world_sfn', u'creation_date': u'2017-05-16 14:26:39.088000+00:00'},
     {u'state_machine_arn': u'<state machine arn 2>', u'name': u'my_private_sfn', u'creation_date': u'2017-06-08 07:25:49.931000+00:00'},
     {u'state_machine_arn': u'<state machine arn 3>', u'name': u'alex_sfn', u'creation_date': u'2017-06-14 08:35:07.123000+00:00'}],
 u'changed': True}
I expect to get the first state_machine_arn value: "state machine arn 1"
But instead, I get the exception:
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: JMESPathTypeError: In function contains(), invalid type for value: <lamdba_name>, expected one of: ['array', 'string'], received: "unknown" fatal: [localhost]: FAILED!
=> {"failed": true, "msg": "Unexpected failure during module execution.", "stdout": ""}
What can be the problem?
The problem is that the json_query filter expects to get a dictionary with ASCII strings, but what you're providing it are Unicode strings (notice the u'...' prefixes in your input).
This is an issue with json_query that apparently got introduced in Ansible 2.2.1 (although that is not really clear), here are some more details:
https://github.com/ansible/ansible/issues/20379#issuecomment-284034650
I hope this gets fixed in a future version, but for now this is a workaround that worked for us:
"{{ results | to_json | from_json | json_query(jmespath_query) }}"
Where jmespath_query is a variable that contains a starts_with query.
This trick of going to and from json turns the unicode strings into ASCII ones :)
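Applied to the example above, the task would look something like this (a sketch; results stands in for whatever variable you registered the boto3 output into):

- name: Filter state machines whose name starts with 'hello'
  set_fact:
    hello_arns: "{{ results | to_json | from_json | json_query(jmespath_query) }}"
  vars:
    jmespath_query: "state_machines[?starts_with(name,'hello')].state_machine_arn"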
As an alternative to writing | to_json | from_json | json_query(…) everywhere, you can monkey-patch Ansible's json_query filter by creating the following filter_plugins/json_bug_workaround.py file:
import json

from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.plugins.filter.json_query import json_query


class FilterModule(object):
    def filters(self):
        return {
            # Workaround for Unicode bug https://stackoverflow.com/a/44547305
            'json_query': lambda data, query: json_query(
                json.loads(json.dumps(data, cls=AnsibleJSONEncoder)),
                query
            ),
        }
Then you can just use | json_query(…) naturally. This shim does the equivalent of calling | to_json | from_json for you.
You can put it inside your role (roles/role_name/filter_plugins/json_bug_workaround.py) or anywhere in Ansible's plugin search path.
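With the shim in place, the expression from the question works unchanged, for example (results again being the registered boto3 output; | first picks out "state machine arn 1", the first matching ARN):

- set_fact:
    first_hello_arn: "{{ results | json_query(hello_query) | first }}"
  vars:
    hello_query: "state_machines[?starts_with(name,'hello')].state_machine_arn"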
I have a playbook
---
- hosts: all
  gather_facts: True
  tasks:
    - action: debug msg="time = {{ ansible_date_time }}"
Which returns the full json representation for each machine.
How do I further filter that within the playbook so that I only get the iso8601_basic_short part?
[root@pjux playbooks]# ansible --version
ansible 2.1.1.0
  config file = /etc/ansible/ansible.cfg
  configured module search path = Default w/o overrides
TASK [debug] *******************************************************************
ok: [10.99.97.222] => {
    "msg": "time = {u'weekday_number': u'2', u'iso8601_basic_short': u'20160906T182117', u'tz': u'BST', u'weeknumber': u'36', u'hour': u'18', u'year': u'2016', u'minute': u'21', u'tz_offset': u'+0100', u'month': u'09', u'epoch': u'1473182477', u'iso8601_micro': u'2016-09-06T17:21:17.761900Z', u'weekday': u'Tuesday', u'time': u'18:21:17', u'date': u'2016-09-06', u'iso8601': u'2016-09-06T17:21:17Z', u'day': u'06', u'iso8601_basic': u'20160906T182117761843', u'second': u'17'}"
}
ok: [10.99.97.216] => {
    "msg": "time = {u'weekday_number': u'2', u'iso8601_basic_short': u'20160906T182117', u'tz': u'BST', u'weeknumber': u'36', u'hour': u'18', u'year': u'2016', u'minute': u'21', u'tz_offset': u'+0100', u'month': u'09', u'epoch': u'1473182477', u'iso8601_micro': u'2016-09-06T17:21:17.938563Z', u'weekday': u'Tuesday', u'time': u'18:21:17', u'date': u'2016-09-06', u'iso8601': u'2016-09-06T17:21:17Z', u'day': u'06', u'iso8601_basic': u'20160906T182117938491', u'second': u'17'}"
}
Have you tried {{ ansible_date_time.iso8601_basic_short }}?
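That is, since ansible_date_time is just a dictionary of facts, you can address the key directly in the task from the question:

- action: debug msg="time = {{ ansible_date_time.iso8601_basic_short }}"

which prints just the short timestamp, e.g. "time = 20160906T182117".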
I have an Ansible playbook for working with EC2 instances. I'm using dynamic inventory (ec2.py) to get the group of instances that I want to work with (hosts: tag_Service_Foo). When I run it, it produces output like:
GATHERING FACTS ***************************************************************
ok: [54.149.9.198]
ok: [52.11.22.29]
ok: [52.11.0.3]
However, I can fetch the "Name" tag for a particular instance from Amazon (I do this and store it in a variable for use in a couple parts of the playbook).
Is there a way to get Ansible to use this string for the hostname when displaying progress? I'd like to see something more descriptive (since I don't have the IPs memorized):
GATHERING FACTS ***************************************************************
ok: [main-server]
ok: [extra-server]
ok: [my-cool-server]
The output of the ec2.py inventory script looks like this (truncated; it's very long).
{
    "_meta": {
        "hostvars": {
            "54.149.9.198": {
                "ec2__in_monitoring_element": false,
                "ec2_ami_launch_index": "0",
                "ec2_architecture": "x86_64",
                "ec2_client_token": "xxx",
                "ec2_dns_name": "xxx",
                "ec2_ebs_optimized": false,
                "ec2_eventsSet": "",
                "ec2_group_name": "",
                "ec2_hypervisor": "xen",
                "ec2_id": "i-xxx",
                "ec2_image_id": "ami-xxx",
                "ec2_instance_type": "xxx",
                "ec2_ip_address": "xxx",
                "ec2_item": "",
                "ec2_kernel": "",
                "ec2_key_name": "xxx",
                "ec2_launch_time": "xxx",
                "ec2_monitored": xxx,
                "ec2_monitoring": "",
                "ec2_monitoring_state": "xxx",
                "ec2_persistent": false,
                "ec2_placement": "xxx",
                "ec2_platform": "",
                "ec2_previous_state": "",
                "ec2_previous_state_code": 0,
                "ec2_private_dns_name": "xxx",
                "ec2_private_ip_address": "xxx",
                "ec2_public_dns_name": "xxx",
                "ec2_ramdisk": "",
                "ec2_reason": "",
                "ec2_region": "xxx",
                "ec2_requester_id": "",
                "ec2_root_device_name": "/dev/xvda",
                "ec2_root_device_type": "ebs",
                "ec2_security_group_ids": "xxx",
                "ec2_security_group_names": "xxx",
                "ec2_sourceDestCheck": "true",
                "ec2_spot_instance_request_id": "",
                "ec2_state": "running",
                "ec2_state_code": 16,
                "ec2_state_reason": "",
                "ec2_subnet_id": "subnet-xxx",
                "ec2_tag_Name": "main-server",
                "ec2_tag_aws_autoscaling_groupName": "xxx",
                "ec2_virtualization_type": "hvm",
                "ec2_vpc_id": "vpc-xxx"
            }
        }
    },
    "tag_Service_Foo": [
        "54.149.9.198",
        "52.11.22.29",
        "52.11.0.3"
    ],
}
What you need to do is create your own wrapper (say, my_ec2.py) over ec2.py that post-processes its output. The idea is to use the behavioral host variable ansible_ssh_host. You can use any language, not only Python; as long as it prints valid JSON on stdout you're good to go.
It'll be a tiny bit of work, but I hope the pseudocode helps (a runnable sketch follows the example output further down):
output_json_map = new map
for each group in <ec2_output>:  # e.g. tag_Service_Foo; every top-level
                                 # key except _meta is a group name
    for each ip_address in group:
        hname = ec2_output._meta.hostvars.<ip_address>.ec2_tag_Name
        # Add the new host name to the group's member list
        output_json_map.add(key=group, value=hname)
        # Copy all vars from ec2_output._meta.hostvars.<ip_address>
        # to output_json_map._meta.hostvars.<hname>
        # Assign the IP address of this host to ansible_ssh_host in its
        # hostvars, so Ansible still connects via the IP
        output_json_map.add(key=_meta.hostvars.<hname>.ansible_ssh_host,
                            value=ip_address)
print output_json_map to stdout
E.g. for your example the output of my_ec2.py should be:
{
    "_meta": {
        "hostvars": {
            "main-server": {
                "ansible_ssh_host": "54.149.9.198",
                --- snip ---
                "ec2_tag_Name": "main-server",
                --- snip ---
            },
            "extra-server": {
                "ansible_ssh_host": "52.11.22.29",
                --- snip ---
                "ec2_tag_Name": "extra-server",
                --- snip ---
            },
            <other hosts from all groups>
        }
    },
    "tag_Service_Foo": [
        "main-server",
        "extra-server",
        <other hosts in this group>
    ],
    "some other group": [
        <hosts in this group>,
        ...
    ],
}
and obviously, use this my_ec2.py instead of ec2.py as the inventory file. :-)
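For concreteness, here is a minimal runnable sketch of such a wrapper in Python. The file layout (ec2.py executable in the same directory) and the fallback to the raw IP when an instance has no Name tag are my assumptions:

#!/usr/bin/env python
# my_ec2.py - wraps ec2.py and keys hosts by their Name tag
import json
import subprocess


def main():
    # Ask the original ec2.py for the full inventory
    ec2_output = json.loads(subprocess.check_output(['./ec2.py', '--list']))
    hostvars = ec2_output.get('_meta', {}).get('hostvars', {})
    result = {'_meta': {'hostvars': {}}}

    for group, members in ec2_output.items():
        if group == '_meta':
            continue
        # ec2.py emits groups as plain lists of IP addresses,
        # which is what we assume here
        result[group] = []
        for ip in members:
            vars_for_ip = dict(hostvars.get(ip, {}))
            # Fall back to the IP if the instance has no Name tag
            hname = vars_for_ip.get('ec2_tag_Name') or ip
            # Keep connecting via the IP while displaying the tag name
            vars_for_ip['ansible_ssh_host'] = ip
            result[group].append(hname)
            result['_meta']['hostvars'][hname] = vars_for_ip

    print(json.dumps(result, indent=2))


if __name__ == '__main__':
    main()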
-- edit --
1) In the groups, can I only refer to things by one name? 2) There's no notion of an alias? 3) I'm wondering if I could use the IP addr in the groups and just modify the _meta part or if I need to do it all?
Yes*, no, and no.
* Technically, the first yes should be a no. Let me explain.
What we are doing here can be illustrated with a static inventory file. The original ec2.py returns the JSON equivalent of the following inventory:
[tag_Service_Foo]
54.149.9.198 ec2_tag_Name="main-server" ec2_previous_state_code="0" ...
52.11.22.29 ec2_tag_Name="extra-server" ec2_previous_state_code="0" ...
our new my_ec2.py returns this:
[tag_Service_Foo]
main-server ansible_ssh_host="54.149.9.198" ec2_tag_Name="main-server" ec2_previous_state_code="0" ...
extra-server ansible_ssh_host="52.11.22.29" ec2_tag_Name="extra-server" ec2_previous_state_code="0" ...
# Technically it's possible to create "an alias" for main-server like this:
main-server-alias ansible_ssh_host="54.149.9.198" ec2_tag_Name="main-server" ec2_previous_state_code="0" ...
Now you would be able to run a play with main-server-alias in the host list, and Ansible would execute it on 54.149.9.198.
BUT, and this is a big BUT, when you run a play with 'all' as the host pattern, Ansible would run the task on main-server-alias as well as main-server. So what you created is an alias in one context and a new host in another. I've not tested this BUT part, so do come back and correct me if you find out otherwise.
HTH
If you put
vpc_destination_variable = Name
in your ec2.ini file, that should work too.
For more recent versions (since Ansible 2.9, at least), the aws_ec2 inventory plugin allows this as a simple configuration, without the need to create any wrapper around ec2.py.
This can be done using a combination of the hostnames and compose parameters.
The trick is to set the inventory hostname of each EC2 instance via the hostnames parameter to something human readable, for example an AWS tag on the instance, and then to use the compose parameter to point ansible_host at private_ip_address, public_ip_address, private_dns_name or public_dns_name so that the connection still works.
Here would be the minimal configuration of the plugin to allow this:
plugin: aws_ec2
hostnames:
  - tag:Name
  # ^-- Given that you indeed have a tag named "Name" on your EC2 instances
compose:
  ansible_host: public_dns_name
Mind that, in AWS, tags are not unique, so you can have multiple instances tagged with the same name, while in Ansible the hostname must be unique. This means you will probably be better off composing the name from tag:Name postfixed with something truly unique, like the instance ID.
Something like:
plugin: aws_ec2
hostnames:
  - name: 'instance-id'
    separator: '_'
    prefix: 'tag:Name'
compose:
  ansible_host: public_dns_name
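Assuming the inventory file is saved under a name the plugin accepts (it must end in aws_ec2.yml or aws_ec2.yaml, e.g. inventory.aws_ec2.yml), you can then verify the resulting hostnames with:

ansible-inventory -i inventory.aws_ec2.yml --graph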