I want to check which IP exists on the server, where the IP list is in a JSON file as below:
cat /tmp/iplist.json
[
  "10.10.10.182",
  "182.10.10.2",
  "192.168.200.2"
]
The condition is that only one of these IPs exists on the system, so I was running a loop and trying to store only the successful output in a variable, but I am not able to do that. Does anyone know how I can do this?
Here is my playbook:
---
- name: Load values from json file
  hosts: localhost
  gather_facts: false
  vars:
    ip: "{{ lookup('file', '/tmp/iplist.json') | from_json }}"
  tasks:
    - name: Loop over imported iplist
      shell: ip a | grep {{ item }}
      loop: "{{ ip }}"
      changed_when: true
      register: echo
    - debug:
        msg: "{{ echo }}"
And this is how it fails:
PLAY [Load values from json file] *************************************************************************************************************************
TASK [Loop over imported iplist] *******************************************************************************************************************************
changed: [localhost] => (item=10.10.10.182)
failed: [localhost] (item=182.10.10.2) => {"ansible_loop_var": "item", "changed": true, "cmd": "ip a | grep 182.10.10.2", "delta": "0:00:00.012178", "end": "2020-05-09 11:30:06.919913", "item": "182.10.10.2", "msg": "non-zero return code", "rc": 1, "start": "2020-05-09 11:30:06.907735", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
failed: [localhost] (item=192.168.200.2) => {"ansible_loop_var": "item", "changed": true, "cmd": "ip a | grep 192.168.200.2", "delta": "0:00:00.029234", "end": "2020-05-09 11:30:07.178768", "item": "192.168.200.2", "msg": "non-zero return code", "rc": 1, "start": "2020-05-09 11:30:07.149534", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
PLAY RECAP *****************************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
When you enable gather_facts: true, the variable ansible_all_ipv4_addresses will hold the list of all IPv4 addresses of the host. Use the intersect filter to find the common items. For example:
- debug:
    msg: "{{ ansible_all_ipv4_addresses | intersect(ip) }}"
I'm trying to start the logstash service using the playbook below. The output says it is starting, but when I check the status it's in a stopped state.
---
- hosts: test
  gather_facts: False
  remote_user: test
  become: yes
  become_user: root
  become_method: sudo
  tasks:
    - name: starting /etc/init.d/logstash start
      shell: /etc/init.d/logstash start
    - name: status /etc/init.d/logstash status
      shell: /etc/init.d/logstash status
      register: logstash_status
    - name: output
      debug:
        msg: "{{ logstash_status }}"
Output
PLAY [test] ************************************************************************************************************************************************************
TASK [starting /etc/init.d/logstash start] *****************************************************************************************************************************
changed: [192.168.1.10]
TASK [status /etc/init.d/logstash status] ******************************************************************************************************************************
fatal: [192.168.1.10]: FAILED! => {"changed": true, "cmd": "/etc/init.d/logstash status", "delta": "0:00:00.021383", "end": "2021-06-02 20:31:17.701169", "msg": "non-zero return code", "rc": 1, "start": "2021-06-02 20:31:17.679786", "stderr": "", "stderr_lines": [], "stdout": "Stopped", "stdout_lines": ["Stopped"]}
to retry, use: --limit @/home/test/logstat-config/new.retry
PLAY RECAP *************************************************************************************************************************************************************
192.168.1.10 : ok=1 changed=1 unreachable=0 failed=1
I was able to start the service by running it in the background.
---
- hosts: test
  gather_facts: False
  remote_user: test
  become: yes
  become_user: root
  become_method: sudo
  tasks:
    - name: starting /etc/init.d/logstash start
      shell: nohup /etc/init.d/logstash start &
      register: logstash
    - debug:
        msg: "{{ logstash }}"
    - name: status /etc/init.d/logstash status
      shell: /etc/init.d/logstash status
      register: logstash_status
    - name: output
      debug:
        msg: "{{ logstash_status }}"
Output:
PLAY [test] ************************************************************************************************************************************************************
TASK [starting /etc/init.d/logstash start] *****************************************************************************************************************************
changed: [192.168.1.10]
TASK [debug] ***********************************************************************************************************************************************************
ok: [192.168.1.10] => {
    "msg": {
        "changed": true,
        "cmd": "nohup /etc/init.d/logstash start &",
        "delta": "0:00:00.014488",
        "end": "2021-06-03 17:31:02.914306",
        "failed": false,
        "rc": 0,
        "start": "2021-06-03 17:31:02.899818",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "Starting logstash",
        "stdout_lines": [
            "Starting logstash"
        ]
    }
}
TASK [status /etc/init.d/logstash status] ******************************************************************************************************************************
changed: [192.168.1.10]
TASK [output] **********************************************************************************************************************************************************
ok: [192.168.1.10] => {
    "msg": {
        "changed": true,
        "cmd": "/etc/init.d/logstash status",
        "delta": "0:00:00.011286",
        "end": "2021-06-03 17:31:03.272873",
        "failed": false,
        "rc": 0,
        "start": "2021-06-03 17:31:03.261587",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "Running",
        "stdout_lines": [
            "Running"
        ]
    }
}
PLAY RECAP *************************************************************************************************************************************************************
192.168.1.10 : ok=4 changed=2 unreachable=0 failed=0
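The nohup workaround stops the daemon from being tied to the shell session Ansible opens for the task. If logstash is installed as a regular init/systemd service, the service module is usually a cleaner way to express the same intent; a minimal sketch, assuming the service name is logstash:

- name: Ensure logstash is running
  become: yes
  service:
    name: logstash
    state: started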
I am trying to define a template in Ansible Tower where I want to extract the id of the active controller of a Kafka broker cluster, and then use this value in another template/task that performs a rolling restart but makes sure the active controller is restarted last.
When I run this Ansible task
- name: Find active controller
  shell: '/bin/zookeeper-shell 192.168.129.227 get /controller'
  register: resultAC
I get the result below. I want to extract brokerid and assign its value (2) to a variable that can be used in a different task in the same template, or passed to another template when the templates are part of a workflow definition.
I tried using resultAC.stdout_lines[5].brokerid but that does not work.
The structure of resultAC:
{
    "resultAC": {
        "stderr_lines": [],
        "changed": true,
        "end": "2020-08-19 07:36:01.950347",
        "stdout": "Connecting to 192.168.129.227\n\nWATCHER::\n\nWatchedEvent state:SyncConnected type:None path:null\n{\"version\":1,\"brokerid\":2,\"timestamp\":\"1597241391146\"}",
        "cmd": "/bin/zookeeper-shell 192.168.129.227 get /controller",
        "failed": false,
        "delta": "0:00:02.843972",
        "stderr": "",
        "rc": 0,
        "stdout_lines": [
            "Connecting to 192.168.129.227",
            "",
            "WATCHER::",
            "",
            "WatchedEvent state:SyncConnected type:None path:null",
            "{\"version\":1,\"brokerid\":2,\"timestamp\":\"1597241391146\"}"
        ],
        "start": "2020-08-19 07:35:59.106375"
    },
    "_ansible_verbose_always": true,
    "_ansible_no_log": false,
    "changed": false
}
Because your JSON is just one element of a list of strings, it is not parsed or treated as JSON.
You will have to use the Ansible filter from_json in order to parse it back to a dictionary.
Given the playbook:
- hosts: all
  gather_facts: no
  vars:
    resultAC:
      stdout_lines:
        - "Connecting to 192.168.129.227"
        - ""
        - "WATCHER::"
        - ""
        - "WatchedEvent state:SyncConnected type:None path:null"
        - "{\"version\":1,\"brokerid\":2,\"timestamp\":\"1597241391146\"}"
  tasks:
    - debug:
        msg: "{{ (resultAC.stdout_lines[5] | from_json).brokerid }}"
This gives the recap:
PLAY [all] *************************************************************************************************************************************************************
TASK [debug] ***********************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "2"
}
PLAY RECAP *************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Going further, maybe I would select and match the JSON in the stdout_lines list, just in case it is not always at the sixth line:
- hosts: all
  gather_facts: no
  vars:
    resultAC:
      stdout_lines:
        - "Connecting to 192.168.129.227"
        - ""
        - "WATCHER::"
        - ""
        - "WatchedEvent state:SyncConnected type:None path:null"
        - "{\"version\":1,\"brokerid\":2,\"timestamp\":\"1597241391146\"}"
  tasks:
    - debug:
        msg: "{{ (resultAC.stdout_lines | select('match','{.*\"brokerid\":.*}') | first | from_json).brokerid }}"
I am running a command to get the status of tomcat and registering the result in a variable. How do I extract a specific field from that command's output and put it in a variable for further checks?
Play -
- name: Check the state of tomcat service
  shell: "svcs tomcat"
  register: tomcat_status
- name: Show captured processes
  debug:
    msg: "{{ tomcat_status.stdout_lines|list }}"
The output of the above is -
server1 ok: {
    "changed": false,
    "msg": [
        "STATE STIME FMRI",
        "online 20:11:48 svc:/network/tomcat:tomcat"
    ]
}
How do I extract the value of STATE here? I want to know whether it's online, disabled, shutdown, etc.
NOTE - Output with -vvvv
server1 done: {
    "changed": true,
    "cmd": "svcs tomcat",
    "delta": "0:00:00.025711",
    "end": "2020-05-11 12:43:43.323017",
    "invocation": {
        "module_args": {
            "_raw_params": "svcs tomcat",
            "_uses_shell": true,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "stdin_add_newline": true,
            "strip_empty_ends": true,
            "warn": false
        }
    },
    "rc": 0,
    "start": "2020-05-11 12:43:43.297306",
    "stderr": "",
    "stderr_lines": [],
    "stdout": "STATE STIME FMRI\nonline 20:11:48 svc:/network/tomcat:tomcat",
    "stdout_lines": [
        "STATE STIME FMRI",
        "online 20:11:48 svc:/network/tomcat:tomcat"
    ]
}
Based on your debug output, tomcat_status.stdout_lines is a list (you don't need the |list filter here because it's already a list) that looks like:
["STATE STIME FMRI", "online 20:11:48 svc:/network/tomcat:tomcat"]
To get the STATE value, you need the first field from the second line. So:
- set_fact:
    tomcat_state: "{{ tomcat_status.stdout_lines.1.split().0 }}"
That takes the second line (tomcat_status.stdout_lines.1) then splits it on whitespace (.split()), and then takes the first value (.0).
Here's a complete test:
- hosts: localhost
  vars:
    tomcat_status:
      stdout_lines:
        - "STATE STIME FMRI"
        - "online 20:11:48 svc:/network/tomcat:tomcat"
  tasks:
    - set_fact:
        tomcat_state: "{{ tomcat_status.stdout_lines.1.split().0 }}"
    - debug:
        var: tomcat_state
Running that playbook results in:
PLAY [localhost] *****************************************************************************
TASK [set_fact] ******************************************************************************
ok: [localhost]
TASK [debug] *********************************************************************************
ok: [localhost] => {
    "tomcat_state": "online"
}
PLAY RECAP ***********************************************************************************
localhost : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
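Once tomcat_state is set, it can drive further logic directly; for example, a hypothetical follow-up task that aborts the play when the service is not online:

- name: Fail if tomcat is not online
  fail:
    msg: "tomcat is {{ tomcat_state }}"
  when: tomcat_state != 'online'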
Let me try and explain my need:
As part of the regular deployment of our application, we have a SQL script (which would alter tables, add tables, or update data, etc.) that needs to be run on, for example, 3 schemas in one region and 5 schemas in another. The application is in AWS and the database is Aurora (RDS) MySQL. Each schema run can take anywhere between 30 minutes and 3 hours.
This SQL script needs to be run in parallel and with a delay of 2 minutes between each schema run.
This is what I have achieved so far:
A file with the DB details, dbdata.yml:
---
conn_details:
  - { host: localhost, user: root, password: "Password1!" }
  - { host: localhost, user: root, password: "Password1!" }
The playbook:
- hosts: localhost
  vars:
    script_file: "{{ path }}"
  vars_files:
    - dbdata.yml
  tasks:
    - name: shell command to execute script in parallel
      shell: |
        sleep 30s
        "mysql -h {{ item.host }} -u {{ item.user }} -p{{ item.password }} < {{ script_file }} >> /usr/local/testscript.log"
      with_items: "{{ conn_details }}"
      register: sql_query_output
      async: 600
      poll: 0
    - name: Wait for sql execution to finish
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: _jobs
      until: _jobs.finished
      delay: 20 # Check every 20 seconds. Adjust as you like.
      retries: 10
      with_items: "{{ sql_query_output.results }}"
First part: executes the script in parallel, with a time gap of 30 seconds before each execution.
Second part: picks the Ansible job id from the registered output and checks whether the job has completed.
Please note: before I included the 30-second sleep, this playbook was working fine.
We get the following error output upon execution:
ansible-playbook parallel_local.yml --extra-vars "path=RDS_script.sql"
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [localhost] **************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [localhost]
TASK [sample command- ansible-playbook my_sqldeploy.yml --extra-vars "path=/home/NICEONDEMAND/bsahu/RDS_create_user1.sql"] ****************************************************************************************
changed: [localhost] => (item={u'host': u'localhost', u'password': u'Password1!', u'user': u'root'})
changed: [localhost] => (item={u'host': u'localhost', u'password': u'Password1!', u'user': u'root'})
TASK [Wait for creation to finish] ********************************************************************************************************************************************************************************
FAILED - RETRYING: Wait for creation to finish (10 retries left).
FAILED - RETRYING: Wait for creation to finish (9 retries left).
failed: [localhost] (item={'ansible_loop_var': u'item', u'ansible_job_id': u'591787538842.77844', 'item': {u'host': u'localhost', u'password': u'Password1!', u'user': u'root'}, u'started': 1, 'changed': True, 'failed': False, u'finished': 0, u'results_file': u'/root/.ansible_async/591787538842.77844'}) => {"ansible_job_id": "591787538842.77844", "ansible_loop_var": "item", "attempts": 3, "changed": true, "cmd": "sleep 30s\n\"mysql -h localhost -u root -pPassword1! < RDS_script.sql >> /usr/local/testscript.log\"\n", "delta": "0:00:30.073191", "end": "2019-11-28 17:01:57.632285", "finished": 1, "item": {"ansible_job_id": "591787538842.77844", "ansible_loop_var": "item", "changed": true, "failed": false, "finished": 0, "item": {"host": "localhost", "password": "Password1!", "user": "root"}, "results_file": "/root/.ansible_async/591787538842.77844", "started": 1}, "msg": "non-zero return code", "rc": 127, "start": "2019-11-28 17:01:27.559094", "stderr": "/bin/sh: line 1: mysql -h localhost -u root -pPassword1! < RDS_script.sql >> /usr/local/testscript.log: No such file or directory", "stderr_lines": ["/bin/sh: line 1: mysql -h localhost -u root -pPassword1! < RDS_script.sql >> /usr/local/testscript.log: No such file or directory"], "stdout": "", "stdout_lines": []}
failed: [localhost] (item={'ansible_loop_var': u'item', u'ansible_job_id': u'999397686792.77873', 'item': {u'host': u'localhost', u'password': u'Password1!', u'user': u'root'}, u'started': 1, 'changed': True, 'failed': False, u'finished': 0, u'results_file': u'/root/.ansible_async/999397686792.77873'}) => {"ansible_job_id": "999397686792.77873", "ansible_loop_var": "item", "attempts": 1, "changed": true, "cmd": "sleep 30s\n\"mysql -h localhost -u root -pPassword1! < RDS_script.sql >> /usr/local/testscript.log\"\n", "delta": "0:00:30.120136", "end": "2019-11-28 17:01:58.694713", "finished": 1, "item": {"ansible_job_id": "999397686792.77873", "ansible_loop_var": "item", "changed": true, "failed": false, "finished": 0, "item": {"host": "localhost", "password": "Password1!", "user": "root"}, "results_file": "/root/.ansible_async/999397686792.77873", "started": 1}, "msg": "non-zero return code", "rc": 127, "start": "2019-11-28 17:01:28.574577", "stderr": "/bin/sh: line 1: mysql -h localhost -u root -pPassword1! < RDS_script.sql >> /usr/local/testscript.log: No such file or directory", "stderr_lines": ["/bin/sh: line 1: mysql -h localhost -u root -pPassword1! < RDS_script.sql >> /usr/local/testscript.log: No such file or directory"], "stdout": "", "stdout_lines": []}
PLAY RECAP ********************************************************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Any suggestions on how to overcome this? Thanks in advance for all the help.
My bad, a silly mistake was causing the trouble: I needed to remove the surrounding quotes from the line that executes the SQL file. The corrected YAML file is reproduced below.
- hosts: localhost
  vars:
    script_file: "{{ path }}"
  vars_files:
    - dbdata.yml
  tasks:
    - name: sample command- ansible-playbook my_sqldeploy.yml --extra-vars "path=/home/NICEONDEMAND/bsahu/RDS_create_user1.sql"
      shell: |
        sleep 30s
        mysql -h {{ item.host }} -u {{ item.user }} -p{{ item.password }} < {{ script_file }} >> /usr/local/testscript.log
      with_items: "{{ conn_details }}"
      register: sql_query_output
      async: 600
      poll: 0
    - name: Wait for creation to finish
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: _jobs
      until: _jobs.finished
      delay: 20 # Check every 20 seconds. Adjust as you like.
      retries: 10
      with_items: "{{ sql_query_output.results }}"
Thanks all for the help.
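One thing worth noting about the timings in both versions: async: 600 gives each mysql run roughly 10 minutes, and retries: 10 with delay: 20 only polls for about 200 seconds, while the schema runs were said to take 30 minutes to 3 hours. A sketch with the wait task sized for that (the numbers are only illustrative, and async on the shell task would need to be raised to match, e.g. async: 11400):

- name: Wait for sql execution to finish
  async_status:
    jid: "{{ item.ansible_job_id }}"
  register: _jobs
  until: _jobs.finished
  delay: 60        # poll once a minute
  retries: 200     # keep polling for ~3h20m before giving up
  with_items: "{{ sql_query_output.results }}"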
I would like to fetch the first few characters from a registered variable. Can somebody please suggest how to do that?
- hosts: node1
  gather_facts: False
  tasks:
    - name: Check Value mango
      shell: cat /home/vagrant/mango
      register: result
    - name: Display Result In Loop
      debug: msg="Version is {{ result.stdout[5] }}"
The above code displays a single character (the one at index 5) rather than the first 5 characters of the registered string.
PLAY [node1] ******************************************************************
TASK: [Check Value mango] *****************************************************
changed: [10.200.19.21] => {"changed": true, "cmd": "cat /home/vagrant/mango", "delta": "0:00:00.003000", "end": "2015-08-19 09:29:58.229244", "rc": 0, "start": "2015-08-19 09:29:58.226244", "stderr": "", "stdout": "d3aa6131ec1a2e73f69ee150816265b5617d7e69", "warnings": []}
TASK: [Display Result In Loop] ************************************************
ok: [10.200.19.21] => {
"msg": "Version is 1"
}
PLAY RECAP ********************************************************************
10.200.19.21 : ok=2 changed=1 unreachable=0 failed=0
You can work with ranges:
- name: Display Result In Loop
  debug: msg="Version is {{ result.stdout[:5] }}"
This will print the first 5 characters.
[1:5] would print characters 2 to 5, skipping the 1st char.
[5:] would skip the first 5 chars and print the rest of the string.
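A minimal sketch showing the three forms side by side, assuming result is registered as in the question:

- name: Show the slices
  debug:
    msg:
      - "First 5 chars : {{ result.stdout[:5] }}"
      - "Chars 2 to 5  : {{ result.stdout[1:5] }}"
      - "From char 6 on: {{ result.stdout[5:] }}"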