Role duplication and execution only works on the first role entry - Ansible

Disclaimer: I am new to Ansible, but after a few days of googling and trying different things I am still struggling with a seemingly basic problem. Below are my playbooks; the job runs fine, but only the first role entry actually executes with its variable. Any help with this is greatly appreciated.
---
- connection: local
  hosts: all
  gather_facts: false
  roles:
    - role: slb
      vars:
        name: "test1"
    - { role: slb, vars: { name: "test2" } }
    - { role: slb, vars: { name: "test3" } }
The role's task file is then roles/slb/tasks/main.yml:
- name: create virtual server
  a10_slb_virtual_server:
    a10_host: "10.247.5.29"
    a10_username: "xxxxx"
    a10_password: "xxx"
    a10_port: "443"
    a10_protocol: "https"
    name: " {{ name }} "
    ip_address: "10.1.1.1"
    netmask: "255.255.255.0"
    port_list:
      - port_number: 80
        protocol: tcp
    enable_disable_action: enable
    stats_data_action: stats-data-enable
Edit: here is a code example that works using the same syntax, so maybe it is an issue with the module?
---
- connection: local
  hosts: localhost
  gather_facts: false
  roles:
    - role: text
      vars:
        name: "Scooby"
    - { role: text, vars: { name: "Shaggy" } }
with roles/text/tasks/main.yml:
- name: Create a text file
  file:
    path: "/var/lib/awx/projects/test/{{ name }}.txt"
    state: touch
[root@awx-ansible a10]# ansible-playbook -i hosts main.yml -vvvv
ansible-playbook 2.8.4
config file = /var/lib/awx/projects/a10/ansible.cfg
configured module search path = [u'/usr/share/ansible/plugins/modules/a10_ansible/library']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /bin/ansible-playbook
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
Using /var/lib/awx/projects/a10/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /var/lib/awx/projects/a10/hosts as it did not pass it's verify_file() method
script declined parsing /var/lib/awx/projects/a10/hosts as it did not pass it's verify_file() method
auto declined parsing /var/lib/awx/projects/a10/hosts as it did not pass it's verify_file() method
Parsed /var/lib/awx/projects/a10/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/site-packages/ansible/plugins/callback/default.pyc
PLAYBOOK: main.yml ****************************************************************************************************************************************************************
Positional arguments: main.yml
become_method: sudo
inventory: (u'/var/lib/awx/projects/a10/hosts',)
forks: 5
tags: (u'all',)
verbosity: 4
connection: smart
timeout: 10
1 plays in main.yml
[WARNING]: Found variable using reserved name: name
PLAY [all] ************************************************************************************************************************************************************************
META: ran handlers
TASK [slb : create] ***************************************************************************************************************************************************************
task path: /var/lib/awx/projects/a10/roles/slb/tasks/main.yml:3
<10.247.5.29> ESTABLISH LOCAL CONNECTION FOR USER: root
<10.247.5.29> EXEC /bin/sh -c 'echo ~root && sleep 0'
<10.247.5.29> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1572014930.09-177848662916729 `" && echo ansible-tmp-1572014930.09-177848662916729="`
echo /root/.ansible/tmp/ansible-tmp-1572014930.09-177848662916729 `" ) && sleep 0'
<10.247.5.29> Attempting python interpreter discovery
<10.247.5.29> EXEC /bin/sh -c 'echo PLATFORM; uname; echo FOUND; command -v '"'"'/usr/bin/python'"'"'; command -v '"'"'python3.7'"'"'; command -v '"'"'python3.6'"'"'; command -v '"'"'python3.5'"'"'; command -v '"'"'python2.7'"'"'; command -v '"'"'python2.6'"'"'; command -v '"'"'/usr/libexec/platform-python'"'"'; command -v '"'"'/usr/bin/python3'"'"'; command -v '"'"'python'"'"'; echo ENDFOUND && sleep 0'
<10.247.5.29> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
Using module file /usr/share/ansible/plugins/modules/a10_ansible/library/a10_slb_virtual_server.py
<10.247.5.29> PUT /root/.ansible/tmp/ansible-local-57176X2vg1j/tmp5QQ2WF TO /root/.ansible/tmp/ansible-tmp-1572014930.09-177848662916729/AnsiballZ_a10_slb_virtual_server.py
<10.247.5.29> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1572014930.09-177848662916729/ /root/.ansible/tmp/ansible-tmp-1572014930.09-177848662916729/AnsiballZ_a10_slb_virtual_server.py && sleep 0'
<10.247.5.29> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1572014930.09-177848662916729/AnsiballZ_a10_slb_virtual_server.py && sleep 0'
<10.247.5.29> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1572014930.09-177848662916729/ > /dev/null 2>&1 && sleep 0'
ok: [10.247.5.29] => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"invocation": {
"module_args": {
"a10_host": "10.247.5.29",
"a10_partition": null,
"a10_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"a10_port": 443,
"a10_protocol": "https",
"a10_username": "admin",
"acl_id": null,
"acl_id_shared": null,
"acl_name": null,
"acl_name_shared": null,
"arp_disable": null,
"description": null,
"disable_vip_adv": null,
"enable_disable_action": "enable",
"ethernet": null,
"extended_stats": null,
"get_type": null,
"ha_dynamic": null,
"ip_address": "10.1.1.1",
"ipv6_acl": null,
"ipv6_acl_shared": null,
"ipv6_address": null,
"migrate_vip": null,
"name": " test1 ",
"netmask": "255.255.255.0",
"port_list": [
{
"port_number": 80,
"protocol": "tcp"
}
],
"redistribute_route_map": null,
"redistribution_flagged": null,
"shared_partition_policy_template": null,
"state": "present",
"stats_data_action": "stats-data-enable",
"template_logging": null,
"template_policy": null,
"template_policy_shared": null,
"template_scaleout": null,
"template_virtual_server": null,
"use_if_ip": null,
"user_tag": null,
"uuid": null,
"vport_disable_action": null,
"vrid": null
}
},
"message": "",
"original_message": "",
"result": {},
"virtual-server": {
"a10-url": "/axapi/v3/slb/virtual-server/%20test1%20",
"arp-disable": 0,
"disable-vip-adv": 0,
"enable-disable-action": "enable",
"extended-stats": 0,
"ip-address": "10.1.1.1",
"name": " test1 ",
"netmask": "/24",
"port-list": [
{
"a10-url": "/axapi/v3/slb/virtual-server/%20test1%20/port/80+tcp",
"action": "enable",
"auto": 0,
"clientip-sticky-nat": 0,
"conn-limit": 64000000,
"cpu-compute": 0,
"def-selection-if-pref-failed": "def-selection-if-pref-failed",
"extended-stats": 0,
"force-routing-mode": 0,
"ha-conn-mirror": 0,
"ipinip": 0,
"memory-compute": 0,
"message-switching": 0,
"no-auto-up-on-aflex": 0,
"no-dest-nat": 0,
"no-logging": 0,
"port-number": 80,
"protocol": "tcp",
"range": 0,
"reset": 0,
"reset-on-server-selection-fail": 0,
"rtp-sip-call-id-match": 0,
"scaleout-bucket-count": 32,
"skip-rev-hash": 0,
"snat-on-vip": 0,
"stats-data-action": "stats-data-enable",
"syn-cookie": 0,
"template-tcp": "default",
"template-virtual-port": "default",
"use-alternate-port": 0,
"use-default-if-no-server": 0,
"use-rcv-hop-for-resp": 0,
"uuid": "0c2a963c-f741-11e9-b845-e9b0dd63a720"
}
],
"redistribution-flagged": 0,
"stats-data-action": "stats-data-enable",
"uuid": "0c2a19e6-f741-11e9-b845-e9b0dd63a720"
}
}
TASK [slb : create] ***************************************************************************************************************************************************************
task path: /var/lib/awx/projects/a10/roles/slb/tasks/main.yml:3
<10.247.5.29> ESTABLISH LOCAL CONNECTION FOR USER: root
<10.247.5.29> EXEC /bin/sh -c 'echo ~root && sleep 0'
<10.247.5.29> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1572014931.51-10342010886567 `" && echo ansible-tmp-1572014931.51-10342010886567="` echo /root/.ansible/tmp/ansible-tmp-1572014931.51-10342010886567 `" ) && sleep 0'
Using module file /usr/share/ansible/plugins/modules/a10_ansible/library/a10_slb_virtual_server.py
<10.247.5.29> PUT /root/.ansible/tmp/ansible-local-57176X2vg1j/tmpKJVm5x TO /root/.ansible/tmp/ansible-tmp-1572014931.51-10342010886567/AnsiballZ_a10_slb_virtual_server.py
<10.247.5.29> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1572014931.51-10342010886567/ /root/.ansible/tmp/ansible-tmp-1572014931.51-10342010886567/AnsiballZ_a10_slb_virtual_server.py && sleep 0'
<10.247.5.29> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1572014931.51-10342010886567/AnsiballZ_a10_slb_virtual_server.py && sleep 0'
<10.247.5.29> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1572014931.51-10342010886567/ > /dev/null 2>&1 && sleep 0'
ok: [10.247.5.29] => {
"changed": false,
"invocation": {
"module_args": {
"a10_host": "10.247.5.29",
"a10_partition": null,
"a10_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"a10_port": 443,
"a10_protocol": "https",
"a10_username": "admin",
"acl_id": null,
"acl_id_shared": null,
"acl_name": null,
"acl_name_shared": null,
"arp_disable": null,
"description": null,
"disable_vip_adv": null,
"enable_disable_action": "enable",
"ethernet": null,
"extended_stats": null,
"get_type": null,
"ha_dynamic": null,
"ip_address": "10.1.1.1",
"ipv6_acl": null,
"ipv6_acl_shared": null,
"ipv6_address": null,
"migrate_vip": null,
"name": " test2 ",
"netmask": "255.255.255.0",
"port_list": [
{
"port_number": 80,
"protocol": "tcp"
}
],
"redistribute_route_map": null,
"redistribution_flagged": null,
"shared_partition_policy_template": null,
"state": "present",
"stats_data_action": "stats-data-enable",
"template_logging": null,
"template_policy": null,
"template_policy_shared": null,
"template_scaleout": null,
"template_virtual_server": null,
"use_if_ip": null,
"user_tag": null,
"uuid": null,
"vport_disable_action": null,
"vrid": null
}
},
"message": "",
"original_message": "",
"result": {}
}
TASK [slb : create] ***************************************************************************************************************************************************************
task path: /var/lib/awx/projects/a10/roles/slb/tasks/main.yml:3
<10.247.5.29> ESTABLISH LOCAL CONNECTION FOR USER: root
<10.247.5.29> EXEC /bin/sh -c 'echo ~root && sleep 0'
<10.247.5.29> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /root/.ansible/tmp/ansible-tmp-1572014932.64-244561048768912 `" && echo ansible-tmp-1572014932.64-244561048768912="`
echo /root/.ansible/tmp/ansible-tmp-1572014932.64-244561048768912 `" ) && sleep 0'
Using module file /usr/share/ansible/plugins/modules/a10_ansible/library/a10_slb_virtual_server.py
<10.247.5.29> PUT /root/.ansible/tmp/ansible-local-57176X2vg1j/tmpuWRYRS TO /root/.ansible/tmp/ansible-tmp-1572014932.64-244561048768912/AnsiballZ_a10_slb_virtual_server.py
<10.247.5.29> EXEC /bin/sh -c 'chmod u+x /root/.ansible/tmp/ansible-tmp-1572014932.64-244561048768912/ /root/.ansible/tmp/ansible-tmp-1572014932.64-244561048768912/AnsiballZ_a10_slb_virtual_server.py && sleep 0'
<10.247.5.29> EXEC /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1572014932.64-244561048768912/AnsiballZ_a10_slb_virtual_server.py && sleep 0'
<10.247.5.29> EXEC /bin/sh -c 'rm -f -r /root/.ansible/tmp/ansible-tmp-1572014932.64-244561048768912/ > /dev/null 2>&1 && sleep 0'
ok: [10.247.5.29] => {
"changed": false,
"invocation": {
"module_args": {
"a10_host": "10.247.5.29",
"a10_partition": null,
"a10_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"a10_port": 443,
"a10_protocol": "https",
"a10_username": "admin",
"acl_id": null,
"acl_id_shared": null,
"acl_name": null,
"acl_name_shared": null,
"arp_disable": null,
"description": null,
"disable_vip_adv": null,
"enable_disable_action": "enable",
"ethernet": null,
"extended_stats": null,
"get_type": null,
"ha_dynamic": null,
"ip_address": "10.1.1.1",
"ipv6_acl": null,
"ipv6_acl_shared": null,
"ipv6_address": null,
"migrate_vip": null,
"name": " test3 ",
"netmask": "255.255.255.0",
"port_list": [
{
"port_number": 80,
"protocol": "tcp"
}
],
"redistribute_route_map": null,
"redistribution_flagged": null,
"shared_partition_policy_template": null,
"state": "present",
"stats_data_action": "stats-data-enable",
"template_logging": null,
"template_policy": null,
"template_policy_shared": null,
"template_scaleout": null,
"template_virtual_server": null,
"use_if_ip": null,
"user_tag": null,
"uuid": null,
"vport_disable_action": null,
"vrid": null
}
},
"message": "",
"original_message": "",
"result": {}
}
META: ran handlers
META: ran handlers
PLAY RECAP ************************************************************************************************************************************************************************
10.247.5.29 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

After much work I was able to resolve my issue by starting from scratch. It appears the original problem was caused by the spaces inside the play variable, " {{ name }} ". Once I changed it to "{{ name }}", things worked as expected. I am still getting used to the syntax, but once I got this working I was able to tie plays together as I had hoped. Thanks to both of you for the help.
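For reference, here is a minimal sketch of the corrected setup, with the loop values as placeholders. Besides dropping the surrounding spaces from the Jinja2 expression, the variable is renamed because the verbose log above warns that name is a reserved variable name, and include_role with loop replaces listing the role three times:

---
- hosts: all
  connection: local
  gather_facts: false
  tasks:
    - name: Create one virtual server per entry
      include_role:
        name: slb
      vars:
        vip_name: "{{ item }}"  # renamed from the reserved word "name"
      loop:
        - test1
        - test2
        - test3

The role task would then use name: "{{ vip_name }}" with no surrounding spaces, so the A10 module no longer receives " test1 " as the object name, as it did in the log above.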

Related

Unable to create directory using ansible playbook

Steps to reproduce:
Ensure you have a VM running in VirtualBox (RHEL8)
Create an Ansible Galaxy collection
ansible-galaxy collection init myorg.mycollection
Navigate into the roles directory and execute the following command
ansible-galaxy role init myrole
Add the following code to roles/myrole/tasks/main.yml
---
# tasks file for myrole
- name: Create /home/{{ username }}/.ssh, if not exist
  file:
    path: "/home/{{ username }}/.ssh"
    state: directory
Create a play.yml file with the following content
---
- name: Configure Development Workstation
  hosts: my_user_name-rhel8
  connection: local
  debugger: on_failed
  gather_facts: no
  become_user: my_user_name
  vars:
    uname: "my_user_name"
  roles:
    - role: myorg.mycollection.myrole
      username: "{{ uname }}"
Build your collection with the following command
ansible-galaxy collection build myorg/mycollection
Install your collection with the following command
ansible-galaxy collection install ./myorg-mycollection-1.0.0.tar.gz --force
Run the playbook with the following command
ansible-playbook play.yml -i my_user_name-rhel8, --ask-become-pass -vvv
Expected result: the /home/my_user_name/.ssh folder should be created successfully.
Actual result: Ansible fails with the following output:
[WARNING]: Platform darwin on host my_user_name-rhel8 is using the discovered Python interpreter at /usr/bin/python, but future
installation of another Python interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible/2.11/reference_appendices/interpreter_discovery.html for more information.
fatal: [my_user_name-rhel8]: FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"invocation": {
"module_args": {
"_diff_peek": null,
"_original_basename": null,
"access_time": null,
"access_time_format": "%Y%m%d%H%M.%S",
"attributes": null,
"follow": true,
"force": false,
"group": null,
"mode": null,
"modification_time": null,
"modification_time_format": "%Y%m%d%H%M.%S",
"owner": null,
"path": "/home/my_user_name/.ssh",
"recurse": false,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "directory",
"unsafe_writes": false
}
},
"msg": "There was an issue creating /home/anchavan as requested: [Errno 45] Operation not supported: '/home/my_user_name'",
"path": "/home/my_user_name/.ssh"
}

Ansible command inside a playbook with shell module

Can I use something like the below in Ansible?
---
- hosts: webserver
  gather_facts: False
  tasks:
    - name: Check ping
      shell: ansible -i localhost.yml -m shell -a 'ping'
That localhost.yml inventory contains all hosts, whereas the playbook runs on webserver.
The actual requirement is to run the playbook on webserver, but in one task I need to run a command on all hosts specified in the inventory file.
Thanks in advance!
Just to add, the error is:
fatal: [webservice]: FAILED! => {
"changed": true,
"cmd": "ansible -i localhost.yml -m shell -a 'ping'",
"delta": "0:00:00.009121",
"end": "2020-02-12 04:47:06.174390",
"invocation": {
"module_args": {
"_raw_params": "ansible -i localhost.yml -m shell -a 'ping'",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"msg": "non-zero return code",
"rc": 127,
"start": "2020-02-12 04:47:06.165269",
"stderr": "ansible: not found",
"stderr_lines": [
"ansible: not found"
],
"stdout": "",
"stdout_lines": [] }
Do you want to run an Ansible playbook or a shell command inside a server?
If you want to run a shell command, don't mention a playbook name. If you want to run a playbook, don't pass the -m shell -a shell parameters.
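If the underlying goal is to run one check against every inventory host from a play that targets webserver, a delegation-based sketch (using the illustrative ping module) avoids shelling out to the ansible CLI on the target entirely:

---
- hosts: webserver
  gather_facts: false
  tasks:
    - name: Check connectivity to every host in the inventory
      ping:
      delegate_to: "{{ item }}"
      loop: "{{ groups['all'] }}"

Because each iteration is delegated, the module runs against every host from the control node, so the ansible binary does not need to exist on webserver, which is exactly what the "ansible: not found" error above is complaining about.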

Browsing JSON variable in Ansible

For some reason I'm not allowed to use json_query with Ansible, so I'm trying to reach the stdout element of a results list in a variable registered from a shell call.
The JSON variable is saved this way:
"request": {
"changed": true,
"msg": "All items completed",
"results": [
{
"_ansible_ignore_errors": null,
"_ansible_item_result": true,
"_ansible_no_log": false,
"_ansible_parsed": true,
"changed": true,
"cmd": "echo \"****:********\" | grep -o -P '^*****:[^\\n]*$' | awk '{split($0,a,\":\"); print a[2]}'",
"delta": "0:00:00.003660",
"end": "2018-10-31 17:26:17.697864",
"failed": false,
"invocation": {
"module_args": {
"_raw_params": "echo \"**************\" | grep -o -P '^************:[^\\n]*$' | awk '{split($0,a,\":\"); print a[2]}'",
"_uses_shell": true,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"warn": true
}
},
"item": "**********:************",
"rc": 0,
"start": "2018-10-31 17:26:17.694204",
"stderr": "",
"stderr_lines": [],
"stdout": "**********",
"stdout_lines": [
"*********"
]
}
]
}
}
I'm trying to browse my stdout element this way:
- name: Tarball copy
  copy: src= "{{ '%s/%s' | format( TARBALL_DIR , request.results[0].stdout ) }}" dest= "/tmp/tarball/"
I also tried:
- name: Tarball copy
  copy: src= "{{ '%s/%s' | format( TARBALL_DIR , request.results[.stdout] ) }}" dest= "/tmp/tarball/"
- name: Tarball copy
  copy: src= "{{ '%s/%s' | format( TARBALL_DIR , item.stdout ) }}" dest= "/tmp/tarball/"
  with_items: "{{ request.results }}"
I have no idea why I keep getting the same errors:
- template error while templating string: unexpected '.'. String: {{ request.results[.stdout] }} (when trying with [.stdout])
- The task includes an option with an undefined variable (when using the [0] index)
I finally solved my problem using:
- name: Tarball copy
  copy:
    src: "{{ '%s/%s' | format( TARBALL_DIR , request.results[0].stdout ) }}"
    dest: "/tmp/tarball/"
It seems that src and dest cannot accept a space after the equals sign in the key=value shorthand form.
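As an aside, a quick way to sanity-check the extracted value before wiring it into copy is a debug task (a sketch using the variable names from the question):

- name: Inspect the value being extracted
  debug:
    msg: "{{ request.results[0].stdout }}"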

Docker Image Download with download-frozen-image-v2.sh on Windows

I am working on downloading a Docker image on an internet-connected Windows machine that does not have (and cannot have) Docker installed, to transfer to a non-internet-connected Linux machine that does have Docker. I'm using git-bash to run download-frozen-image-v2.sh. Everything works as expected until the script begins to download the final layer of any given image; on the final layer, the json file is returned empty. Through echo statements, I'm able to see that everything works flawlessly until lines 119-142:
jq "$addJson + ." > "$dir/$layerId/json" <<-'EOJSON'
{
"created": "0001-01-01T00:00:00Z",
"container_config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": null,
"Cmd": null,
"Image": "",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
}
}
EOJSON
Only on the final layer does this code produce an empty json file, which in turn causes an error at line 173:
jq --raw-output "$imageOldConfig + del(.history, .rootfs)" "$dir/$configFile" > "$dir/$imageId/json"
jq: error: syntax error, unexpected '+', expecting $end (Windows cmd shell quoting issues?) at <top-level>, line 1:
+ del(.history, .rootfs)
jq: 1 compile error
Update
Exact steps to replicate
Perform on a Windows 10 computer.
1) Install Scoop for Windows: https://scoop.sh/
2) In PowerShell: scoop install git curl jq go tar
3) Open git-bash
4) In git-bash: curl -o download-frozen-image-v2.sh https://raw.githubusercontent.com/moby/moby/master/contrib/download-frozen-image-v2.sh
5) bash download-frozen-image-v2.sh ubuntu ubuntu:latest
The above will result in the aforementioned error.
In response to @peak below:
The command I'm using is bash download-frozen-image-v2.sh ubuntu ubuntu:latest, which should download 5 layers. The first 4 download flawlessly; it is only the last layer that fails. I tried this process for several other images, and it always fails on the final layer.
addJson:
{ id: "ee6b1042efee4fb07d2fe1a5079ce498567e6f5ac849413f0e623d4582da5bc9", parent: "80a2fb00dfe137a28c24fbc39fde656650cd68028d612e6f33912902d887b108" }
dir/configFile:
ubuntu/113a43faa1382a7404681f1b9af2f0d70b182c569aab71db497e33fa59ed87e6.json
dir/configFile contents:
{
"architecture": "amd64",
"config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/bash"
],
"ArgsEscaped": true,
"Image": "sha256:c2775c69594daa3ee360d8e7bbca93c65d9c925e89bd731f12515f9bf8382164",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": null
},
"container": "6713e927cc43b61a4ce3950a69907336ff55047bae9393256e32613a54321c70",
"container_config": {
"Hostname": "6713e927cc43",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"/bin/bash\"]"
],
"ArgsEscaped": true,
"Image": "sha256:c2775c69594daa3ee360d8e7bbca93c65d9c925e89bd731f12515f9bf8382164",
"Volumes": null,
"WorkingDir": "",
"Entrypoint": null,
"OnBuild": null,
"Labels": {}
},
"created": "2018-06-05T21:20:54.310450149Z",
"docker_version": "17.06.2-ce",
"history": [
{
"created": "2018-06-05T21:20:51.286433694Z",
"created_by": "/bin/sh -c #(nop) ADD file:28c0771e44ff530dba3f237024acc38e8ec9293d60f0e44c8c78536c12f13a0b in / "
},
{
"created": "2018-06-05T21:20:52.045074543Z",
"created_by": "/bin/sh -c set -xe \t\t&& echo '#!/bin/sh' > /usr/sbin/policy-rc.d \t&& echo 'exit 101' >> /usr/sbin/policy-rc.d \t&& chmod +x /usr/sbin/policy-rc.d \t\t&& dpkg-divert --local --rename --add /sbin/initctl \t&& cp -a /usr/sbin/policy-rc.d /sbin/initctl \t&& sed -i 's/^exit.*/exit 0/' /sbin/initctl \t\t&& echo 'force-unsafe-io' > /etc/dpkg/dpkg.cfg.d/docker-apt-speedup \t\t&& echo 'DPkg::Post-Invoke { \"rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true\"; };' > /etc/apt/apt.conf.d/docker-clean \t&& echo 'APT::Update::Post-Invoke { \"rm -f /var/cache/apt/archives/*.deb /var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true\"; };' >> /etc/apt/apt.conf.d/docker-clean \t&& echo 'Dir::Cache::pkgcache \"\"; Dir::Cache::srcpkgcache \"\";' >> /etc/apt/apt.conf.d/docker-clean \t\t&& echo 'Acquire::Languages \"none\";' > /etc/apt/apt.conf.d/docker-no-languages \t\t&& echo 'Acquire::GzipIndexes \"true\"; Acquire::CompressionTypes::Order:: \"gz\";' > /etc/apt/apt.conf.d/docker-gzip-indexes \t\t&& echo 'Apt::AutoRemove::SuggestsImportant \"false\";' > /etc/apt/apt.conf.d/docker-autoremove-suggests"
},
{
"created": "2018-06-05T21:20:52.712120056Z",
"created_by": "/bin/sh -c rm -rf /var/lib/apt/lists/*"
},
{
"created": "2018-06-05T21:20:53.405342638Z",
"created_by": "/bin/sh -c sed -i 's/^#\\s*\\(deb.*universe\\)$/\\1/g' /etc/apt/sources.list"
},
{
"created": "2018-06-05T21:20:54.091704323Z",
"created_by": "/bin/sh -c mkdir -p /run/systemd && echo 'docker' > /run/systemd/container"
},
{
"created": "2018-06-05T21:20:54.310450149Z",
"created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]",
"empty_layer": true
}
],
"os": "linux",
"rootfs": {
"type": "layers",
"diff_ids": [
"sha256:db9476e6d963ed2b6042abef1c354223148cdcdbd6c7416c71a019ebcaea0edb",
"sha256:3a89e0d8654e098e949764b1cb23018e27f299b0931c5fd41c207d610ff356c4",
"sha256:904d60939c360b5f528b886c1b534855a008f9a7fd411d4977e09aa7de74c834",
"sha256:a20a262b87bd8a00717f3b30c001bcdaf0fd85d049e6d10500597caa29c013c5",
"sha256:b6f13d447e00fba3b9bd10c1e5c6697e913462f44aa24af349bfaea2054e32f4"
]
}
}
Any help in figuring out what is occurring here would be greatly appreciated.
Thank you.
I can't tell you why this happens, but it appears to be a problem with how jq parses the input file: it segfaults when reading the file. It's a known issue in the Windows builds, where the problem is triggered by the length of the paths to the files.
Fortunately, there is a way around this issue by modifying the script to go against all conventional wisdom and cat the file to jq.
The script isn't utilizing jq very well and builds some of the JSON manually, so some additional fixes are needed; otherwise it produces INVALID_CHARACTER errors when parsing, probably a manifestation of the same issue, since the script manually builds a lot of the jq programs.
I put up a gist with the updated file that at least doesn't error out, check to see if it works as expected.
Changes start at line 172 and 342.
The way it builds the manifest is just messy. I've cleaned it up a bit, removing all the string interpolations and instead passing all parameters in as arguments to jq.
# munge the top layer image manifest to have the appropriate image configuration for older daemons
local imageOldConfig="$(cat "$dir/$imageId/json" | jq --raw-output --compact-output '{ id: .id } + if .parent then { parent: .parent } else {} end')"
cat "$dir/$configFile" | jq --raw-output "$imageOldConfig + del(.history, .rootfs)" > "$dir/$imageId/json"
local manifestJsonEntry="$(
jq --raw-output --compact-output -n \
--arg configFile "$configFile" \
--arg repoTags "${image#library\/}:$tag" \
--argjson layers "$(IFS=$'\n'; jq --arg a "${layerFiles[*]}" -n '$a | split("\n")')" \
'{
Config: $configFile,
RepoTags: [ $repoTags ],
Layers: $layers
}'
)"
(1) I have verified that, using bash, the sequence:
addJson='{ id: "ee6b1042efee4fb07d2fe1a5079ce498567e6f5ac849413f0e623d4582da5bc9",
parent: "80a2fb00dfe137a28c24fbc39fde656650cd68028d612e6f33912902d887b108" }'
jq "$addJson + ." configFile > layerId.json
succeeds, where configFile has the contents shown in the updated question.
(2) Similarly, I have verified that the following also succeeds:
imageOldConfig="$(jq --raw-output --compact-output '{ id: .id } + if .parent then { parent: .parent } else {} end' layerId.json)"
jq --raw-output "$imageOldConfig + del(.history, .rootfs)" <<-'EOJSON'
<JSON as in the question>
EOJSON
where <JSON as in the question> stands for the JSON shown in the question.
(3) In general, it is not a good idea to pass shell $-variables into jq programs by shell string interpolation.
For example, rather than writing:
jq --raw-output "$imageOldConfig + del(.history, .rootfs)"
it would be much better to write something like:
jq --raw-output --argjson imageOldConfig "$imageOldConfig" '
$imageOldConfig + del(.history, .rootfs)'

How to automate adding license key into hazelcast mancenter

I am playing around with Hazelcast, using AWS CloudFormation and Ansible to spin up a cluster of two Hazelcast nodes plus a separate mancenter.
All the documentation on the mancenter implies everything must be done manually by a user in a browser. However, this is not ideal, as we will have many environments and are given a hardened AMI every few weeks to which we must update the existing environments.
What I am trying to do is create an Ansible role that automatically creates the first admin user and then adds the enterprise license into the mancenter.
I have successfully scripted the user creation (just HTTP for now, baby steps):
- name: Check for first user
  uri:
    url: "http://{{ hazelcastmanagement_dns }}:8080/mancenter/user.do?operation=anyUser&_=1480397059541"
    method: GET
    return_content: no
  register: anyuser
  until: anyuser.json["anyUser"] is defined
  retries: 10
  delay: 5

- name: Register Admin user
  uri:
    url: "http://{{ hazelcastmanagement_dns }}:8080/mancenter/user.do?operation=signUp&username={{ hazelcastmanagement_user }}&password={{ hazelcastmanagement_password }}&confirmpassword={{ hazelcastmanagement_password }}&email={{ hazelcastmanagement_email }}&_=1479951949840"
    method: GET
    return_content: no
  register: result
  until: result.json["success"] is defined
  retries: 10
  delay: 5
  when: anyuser.json["anyUser"] == "false"
However, I am having trouble orchestrating the update-license call.
In a browser, these calls return the JSESSIONID and HTTP 200s. When trying to emulate this in Ansible, however, I always get a 302 redirect to the login page.
I have pasted below the tasks that I am attempting.
These task examples do not contain many headers; however, I have previously tried emulating every single header that a browser sends, with the same result.
- name: Call to update license unauthorized (returns set_cookie)
  uri:
    url: "http://{{ hazelcastmanagement_dns }}:8080/mancenter/main.do"
    method: POST
    return_content: yes
    body: "operation=savelicense_getLicenceInfo&key={{ hazelcast_license }} "
    status_code: 302
  register: cookie

- name: Login (302 ok because browser mirrors this result)
  uri:
    url: "http://{{ hazelcastmanagement_dns }}:8080/mancenter/j_spring_security_check"
    method: POST
    body: "j_username={{ hazelcastmanagement_user }}&j_password={{ hazelcastmanagement_password }}"
    return_content: yes
    status_code: 302
    HEADER_Cookie: "{{ cookie.set_cookie }}"

- name: Call to update license authorized
  uri:
    url: "http://{{ hazelcastmanagement_dns }}:8080/mancenter/main.do"
    method: POST
    return_content: yes
    body: "operation=savelicense_getLicenceInfo&key={{ hazelcast_license }}"
    HEADER_Cookie: "{{ cookie.set_cookie }}"
My Ansible task logs (-vvvv) are below.
Hoping someone else has looked into this previously; I could not find any related questions elsewhere, however.
Ansible Log Output:
TASK [hazelcastmanagement_launch : Call to update license authorized] **********
task path: /app/esg/ansible/roles/hazelcastmanagement_launch/tasks/launch.yml:5
ESTABLISH LOCAL CONNECTION FOR USER: root
hazelcast EXEC ( umask 22 && mkdir -p "$( echo /tmp/ansible-tmp-1480399947.07-7077332634698 )" && echo "$( echo /tmp/ansible-tmp-1480399947.07-7077332634698 )" )
hazelcast PUT /tmp/tmpBbuVj0 TO /tmp/ansible-tmp-1480399947.07-7077332634698/uri
hazelcast EXEC chmod a+r /tmp/ansible-tmp-1480399947.07-7077332634698/uri
hazelcast EXEC /bin/sh -c 'sudo -H -S -n -u esg /bin/sh -c '"'"'echo BECOME-SUCCESS-lemxlebthsblahblahblahcevqzkafjdo; LANG=en_US.UTF-8 HTTP_PROXY=proxy.com LC_MESSAGES=en_US.UTF-8 HTTPS_PROXY=proxy.com no_proxy=proxy.com http_proxy=proxy.com https_proxy=proxy.com NO_PROXY=proxy.com LC_ALL=en_US.UTF-8 /usr/bin/python /tmp/ansible-tmp-1480399947.07-7077332634698/uri'"'"''
hazelcast EXEC rm -f -r /tmp/ansible-tmp-1480399947.07-7077332634698/ > /dev/null 2>&1
ok: [hazelcast] => {"changed": false, "content": "", "content_length": "0", "expires": "Thu, 01 Jan 1970 00:00:00 GMT", "invocation": {"module_args": {"backup": null, "body": "operation=savelicense_getLicenceInfo&key=ENTERPRISELicense12341234123412341234123412341234", "body_format": "raw", "content": null, "creates": null, "delimiter": null, "dest": null, "directory_mode": null, "follow": false, "follow_redirects": "safe", "force": null, "force_basic_auth": false, "group": null, "method": "POST", "mode": null, "owner": null, "password": null, "regexp": null, "remote_src": null, "removes": null, "return_content": true, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "status_code": ["302"], "timeout": 30, "url": "http://internal-esg-aws.elb.amazonaws.com:8080/mancenter/main.do", "user": null, "validate_certs": true}, "module_name": "uri"}, "location": "http://internal-esg-aws.elb.amazonaws.com:8080/mancenter/login.jsp;jsessionid=dq0hzdvm2xm91r4h6eyef1l48", "redirected": false, "server": "Jetty(8.y.z-SNAPSHOT)", "set_cookie": "JSESSIONID=dq0hzdvm2xm91r4h6eyef1l48;Path=/mancenter;HttpOnly", "status": 302}
TASK [hazelcastmanagement_launch : Login] **************************************
task path: /app/app/ansible/roles/hazelcastmanagement_launch/tasks/launch.yml:14
ESTABLISH LOCAL CONNECTION FOR USER: root
hazelcast EXEC ( umask 22 && mkdir -p "$( echo /tmp/ansible-tmp-1480399947.23-71435275964843 )" && echo "$( echo /tmp/ansible-tmp-1480399947.23-71435275964843 )" )
hazelcast PUT /tmp/tmpKhOI1y TO /tmp/ansible-tmp-1480399947.23-71435275964843/uri
hazelcast EXEC chmod a+r /tmp/ansible-tmp-1480399947.23-71435275964843/uri
hazelcast EXEC /bin/sh -c 'sudo -H -S -n -u app /bin/sh -c '"'"'echo BECOME-SUCCESS-rfxrchqnblahblahblahhvryauidnf; LANG=en_US.UTF-8 HTTP_PROXY=proxy.com8 LC_MESSAGES=en_US.UTF-8 HTTPS_PROXY=proxy.com no_proxy=proxy.com http_proxy=proxy.com NO_PROXY=proxy.com LC_ALL=en_US.UTF-8 /usr/bin/python /tmp/ansible-tmp-1480399947.23-71435275964843/uri'"'"''
hazelcast EXEC rm -f -r /tmp/ansible-tmp-1480399947.23-71435275964843/ > /dev/null 2>&1
ok: [hazelcast] => {"changed": false, "content": "", "content_length": "0", "invocation": {"module_args": {"HEADER_Cookie": "JSESSIONID=dq0hzdvm2xm91r4h6eyef1l48;Path=/mancenter;HttpOnly", "backup": null, "body": "j_username=admin&j_password=admin1", "body_format": "raw", "content": null, "creates": null, "delimiter": null, "dest": null, "directory_mode": null, "follow": false, "follow_redirects": "safe", "force": null, "force_basic_auth": false, "group": null, "method": "POST", "mode": null, "owner": null, "password": null, "regexp": null, "remote_src": null, "removes": null, "return_content": true, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "status_code": ["302"], "timeout": 30, "url": "http://internal-aws.elb.amazonaws.com:8080/mancenter/j_spring_security_check", "user": null, "validate_certs": true}, "module_name": "uri"}, "location": "http://internal-aws.elb.amazonaws.com:8080/mancenter/login.jsp?login_error=true", "redirected": false, "server": "Jetty(8.y.z-SNAPSHOT)", "status": 302}
TASK [hazelcastmanagement_launch : Call to update license authorized] **********
task path: /app/app/ansible/roles/hazelcastmanagement_launch/tasks/launch.yml:23
ESTABLISH LOCAL CONNECTION FOR USER: root
hazelcast EXEC ( umask 22 && mkdir -p "$( echo /tmp/ansible-tmp-1480399947.38-137956022601151 )" && echo "$( echo /tmp/ansible-tmp-1480399947.38-137956022601151 )" )
hazelcast PUT /tmp/tmpAbC8uL TO /tmp/ansible-tmp-1480399947.38-137956022601151/uri
hazelcast EXEC chmod a+r /tmp/ansible-tmp-1480399947.38-137956022601151/uri
hazelcast EXEC /bin/sh -c 'sudo -H -S -n -u app /bin/sh -c '"'"'echo BECOME-SUCCESS-cciaazzdblahblahblahdufmpuhe; LANG=en_US.UTF-8 HTTP_PROXY=proxy.com LC_MESSAGES=en_US.UTF-8 HTTPS_PROXY=proxy.com no_proxy=proxy.com http_proxy=proxy.com https_proxy=proxy.com NO_PROXY=proxy.comLC_ALL=en_US.UTF-8 /usr/bin/python /tmp/ansible-tmp-1480399947.38-137956022601151/uri'"'"''
hazelcast EXEC rm -f -r /tmp/ansible-tmp-1480399947.38-137956022601151/ > /dev/null 2>&1
fatal: [hazelcast]: FAILED! => {"changed": false, "content": "", "content_length": "0", "failed": true, "invocation": {"module_args": {"HEADER_Cookie": "JSESSIONID=dq0hzdvm2xm91r4h6eyef1l48;Path=/mancenter;HttpOnly", "backup": null, "body": "operation=savelicense_getLicenceInfo&key=ENTERPRISELicense123412341234123412341234123412341234", "body_format": "raw", "content": null, "creates": null, "delimiter": null, "dest": null, "directory_mode": null, "follow": false, "follow_redirects": "safe", "force": null, "force_basic_auth": false, "group": null, "method": "POST", "mode": null, "owner": null, "password": null, "regexp": null, "remote_src": null, "removes": null, "return_content": true, "selevel": null, "serole": null, "setype": null, "seuser": null, "src": null, "status_code": [200], "timeout": 30, "url": "http://internal-aws.elb.amazonaws.com:8080/mancenter/main.do", "user": null, "validate_certs": true}, "module_name": "uri"}, "location": "http://internal-aws.elb.amazonaws.com:8080/mancenter/login.jsp", "msg": "Status code was not [200]", "redirected": false, "server": "Jetty(8.y.z-SNAPSHOT)", "status": 302}
EDIT:
Thanks for that solution, emre. Using curl was the way to go.
I tried a few more times with the uri Ansible module, but no dice... there must be something going on under the hood.
Since your curls hit the nail on the head, I just wrapped them in the Ansible shell module instead of using the uri module to construct the calls.
I chdir to /tmp to ensure I have write access for the cookie file.
- name: Login to management
  shell: >
    curl -X POST http://{{ hazelcastmanagement_dns }}:8080/mancenter/j_spring_security_check
    -d "j_username={{ hazelcastmanagement_user }}"
    -d "j_password={{ hazelcastmanagement_password }}"
    -c cookies.file
  args:
    chdir: /tmp

- name: Save license in management
  shell: >
    curl -H "Content-Type: application/x-www-form-urlencoded"
    -X POST "http://{{ hazelcastmanagement_dns }}:8080/mancenter/main.do?operation=savelicense"
    -d 'key={{ hazelcast_licence }}' -b cookies.file
  args:
    chdir: /tmp
I don't know about Ansible, but using curl you can log in and set the license key as follows:
curl -X POST http://localhost:8083/mancenter/j_spring_security_check -d "j_username=emre" -d "j_password=Password1" -c cookies.file
curl -H "Content-Type: application/x-www-form-urlencoded" -X POST http://localhost:8083/mancenter/main.do?operation=savelicense -d 'key=aaaa' -b cookies.file
Note that you need to log in with an admin user and the license key you provide needs to be correct for the server to return 200.
Edit:
With Hazelcast Management Center version 3.9.3, a new system property to configure the license was introduced. See the release notes for version 3.9.3 and the relevant section of the latest reference manual for details.
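For automation purposes, a hedged sketch of what that could look like as an Ansible task; the property name hazelcast.mc.license and the war file path are assumptions that should be verified against the 3.9.3 release notes:

- name: Start Management Center with the license set via system property
  # NOTE: property name and path below are assumptions; check the release notes
  command: >
    java -Dhazelcast.mc.license={{ hazelcast_license }}
    -jar /opt/hazelcast/mancenter-3.9.3.war

This sidesteps the login/cookie dance entirely, since the license is applied at startup rather than through the web UI; in practice you would run it via async or a service unit so the play does not block.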
