How to check uri status in next task to execute - ansible

Hi, I am using a REST API POST request to create a resource through the uri module in Ansible. I want to check in the next task whether the resource was created before executing it. Can you please suggest how I can do this? Here the resource is a new server I am creating, and I want to install packages in the following tasks once it is spun up and running.

With failed_when: false, the playbook execution will not fail on error status codes. You can then register the result and access the status code through the status key (an example.yml follows):
---
- hosts: localhost
  tasks:
    - name: Example uri module status
      uri:
        url: http://www.example.com
        return_content: no
      register: result
      failed_when: false

    - debug:
        var: result.status
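To answer the original question, the registered status can then gate the following tasks. A minimal sketch, assuming the same example URL stands in for the new server's endpoint and that a 200 means it is up:

```yaml
- name: Wait until the new server answers
  uri:
    url: http://www.example.com   # stand-in for the new server's URL
  register: result
  until: result.status == 200
  retries: 30     # up to 30 * 10s = 5 minutes
  delay: 10

- name: Run only once the server is up
  debug:
    msg: "Server responded with 200, safe to install packages here"
  when: result.status | default(0) == 200
```

The until/retries/delay combination keeps polling until the condition holds, so the package-installation tasks only run against a live server.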

Related

Ansible playbook to fetch JSON file from an API via 'uri'

I wrote an Ansible playbook intended to fetch a list from an API as a JSON file using the uri module, as below.
- name: API check
  gather_facts: no
  tasks:
    - name: "Get the list as JSON"
      uri:
        url: http://sampleapitest.com/api
        user: test
        password: test
        force_basic_auth: yes
        status_code: 200
        body_format: json
        dest: "/home/peterw/list.json"
But when I run the playbook, it complains that it needs a hosts entry. I only have access to the URL, not to SSH port 22.
ERROR! the field 'hosts' is required but was not set
I am new to Ansible. Can anyone please help me fetch the details from the API as a JSON file?
You have several ways to resolve your problem. Either you already have hosts defined and want to run a module only on localhost, in which case you add delegate_to: localhost and run_once: true to say "I just want to run this task one time":
- hosts: listofhosts
  tasks:
    - name: what the task does
      some_module:
        param1: ...
      delegate_to: localhost
      run_once: true
Or you add hosts: localhost to the play.
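As a sketch of that second option, the asker's playbook can target localhost directly; with connection: local no inventory entry or SSH access is needed (the URL, credentials, and dest path are taken from the question):

```yaml
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: Get the list as JSON
      uri:
        url: http://sampleapitest.com/api
        user: test
        password: test
        force_basic_auth: yes
        status_code: 200
        body_format: json
        dest: "/home/peterw/list.json"
```

This can be run with just `ansible-playbook playbook.yml` and no inventory, since localhost is implicit.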
It is possible to write tasks like
- name: Gather stored entitlement from repository service
  local_action:
    module: uri
    url: "https://{{ REPOSITORY_URL }}/api/system/security/certificates"
    method: GET
    url_username: "{{ ansible_user }}"
    url_password: "{{ ansible_password }}"
    validate_certs: yes
    return_content: yes
    status_code: 200
    body_format: json
  check_mode: false
  register: result

- name: Show result
  debug:
    msg: "{{ result.json }}"
  check_mode: false
which in this example gathers the installed certificates from a JFrog Artifactory repository service via a REST API call. The same can be written as
- name: Gather stored entitlement from repository service
  action:
    module: uri
    ...
local_action is the same as action but also implies delegate_to: localhost. See the documentation on Controlling where tasks run: delegation and local actions.

Ansible URI not working in Gitlab CI pipeline in first run

I am currently facing a strange issue and cannot guess what causes it.
I wrote small Ansible scripts to test whether the Kafka schema registry and connectors are running by calling their APIs.
I can run those Ansible playbooks successfully on my local machine. However, when running them in the GitLab CI pipeline (I'm using the same local machine as the GitLab runner), connect_test always breaks with the following error:
fatal: [xx.xxx.x.x]: FAILED! => {"changed": false, "elapsed": 0, "msg": "Status code was -1 and not [200]: Request failed: <urlopen error [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://localhost:8083"}
The strange thing is that the failed job succeeds when I click the retry button in the CI pipeline.
Has anyone an idea about this issue? I would appreciate your help.
schema_test.yml
---
- name: Test schema-registry
  hosts: SCHEMA
  become: yes
  become_user: root
  tasks:
    - name: list schemas
      uri:
        url: http://localhost:8081/subjects
      register: schema

    - debug:
        msg: "{{ schema }}"
connect_test.yml
---
- name: Test connect
  hosts: CONNECT
  become: yes
  become_user: root
  tasks:
    - name: check connect
      uri:
        url: http://localhost:8083
      register: connect

    - debug:
        msg: "{{ connect }}"
.gitlab-ci.yml
test-connect:
  stage: test
  script:
    - ansible-playbook connect_test.yml
  tags:
    - gitlab-runner

test-schema:
  stage: test
  script:
    - ansible-playbook schema_test.yml
  tags:
    - gitlab-runner
Update: I replaced the uri module with shell and see the same behaviour. The initial pipeline run fails and retrying the job fixes it.
Maybe you are restarting the services in a previous job. Keep in mind that Kafka Connect generally needs more time to become available after a restart. Try pausing Ansible for a minute or so after you restart the service.
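Instead of a fixed pause, one way to absorb that startup delay is to retry the uri call until Kafka Connect answers; the retry count and delay below are assumptions to tune for your environment:

```yaml
- name: Wait until Kafka Connect is available
  uri:
    url: http://localhost:8083
  register: connect
  until: connect.status == 200
  retries: 30     # poll for up to 30 * 10s = 5 minutes
  delay: 10
```

This makes the first pipeline run tolerant of a service that is still starting, which matches the symptom that an immediate retry succeeds.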

How to view the single service status via Ansible with modules rather than shell?

I need to check the status of a single service on multiple systems using an Ansible playbook. Is there a way to do this using the service module rather than the shell module?
With the service module (and the related more specific modules like systemd) you can make sure that a service is in a desired state.
For example, the following task will enable Apache to start at boot if not already configured, start Apache if it is stopped, and report changed if any change was made or ok if no change was needed.
- name: Enable and start apache
  service:
    name: apache
    enabled: yes
    state: started
Simply checking the service status without making any change is not supported by those modules. You will have to use the command line and analyse the output / return status. Example with systemd:
- name: Check status of my service
  command: systemctl -q is-active my_service
  check_mode: no
  failed_when: false
  changed_when: false
  register: my_service_status

- name: Report status of my service
  debug:
    msg: "my_service is {{ (my_service_status.rc == 0) | ternary('Up', 'Down') }}"
To be noted:
check_mode: no makes the task run whether or not you use --check on the ansible-playbook command line. Without it, in check mode, the next task would fail with an undefined variable.
failed_when: false prevents the task from failing when the return code differs from 0 (i.e. when the service is not started). You can be more specific by listing all the return codes possible under normal conditions and failing when you get any other (e.g. failed_when: my_service_status.rc not in [0, 3, X]).
changed_when: false makes the task always report ok instead of changed, which is the default for the command and shell modules.
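Alternatively, on reasonably recent Ansible versions the service_facts module can collect service states without changing anything, which avoids the command module entirely; the service name below is an example to adapt:

```yaml
- name: Gather service facts
  service_facts:

- name: Report status of my service
  debug:
    msg: "my_service is {{ ansible_facts.services['my_service.service'].state }}"
```

service_facts populates ansible_facts.services, a dict keyed by unit name (e.g. 'my_service.service' on systemd hosts) whose entries carry state and status fields.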

Ansible: ERROR! conflicting action statements

When I run my Ansible file I get the following error:
conflicting action statements: user, uri
- name: Post Install watcher
  hosts: director.0
  gather_facts: no
  tasks:
    - name: Wait for Elastic Cluster to be ready
      uri:
        url: https://mlaascloudui.{{ lookup('env','ENV') }}.pre.mls.eu.gs.aws.cloud.vwgroup.com/api/v1/clusters/elasticsearch/{{elasticClusterDetails.elasticsea$
        method: GET
      user: admin
      password: "{{rootpw.stdout}}"
      force_basic_auth: yes
      register: result
      until: result['status']|default(0) == 412
      retries: 60
      delay: 10
- name: Install watcher
Syntactically the code looks correct to me. The user and password should be used for basic auth, and I used similar code elsewhere without getting any errors. What am I missing?
Remember your spacing. YAML cares about the alignment of the keys. Your uri: parameters, including user and password, must be indented under uri:; at their current level Ansible thinks there are multiple actions (user is also a module name) associated with the - name: task.
Hope this helps.
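A sketch of the corrected task, with everything the uri module consumes indented under uri: (the URL is shortened here as a placeholder for the one in the question):

```yaml
- name: Wait for Elastic Cluster to be ready
  uri:
    url: "https://example.com/api/v1/clusters/elasticsearch/..."   # shortened placeholder
    method: GET
    user: admin
    password: "{{ rootpw.stdout }}"
    force_basic_auth: yes
  register: result
  until: result['status'] | default(0) == 412
  retries: 60
  delay: 10
```

Note that register, until, retries, and delay stay at the task level: they are task keywords, not uri parameters.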

Running a task on a single host always with Ansible?

I am writing a task to download a database dump from a specific location. It will always be run on the same host.
So I am including the task as follows in the main playbook:
tasks:
  - include: tasks/dl-db.yml
The content of the task is:
---
- name: Fetch the Database
  fetch: src=/home/ubuntu/mydb.sql.gz dest=/tmp/mydb.sql.bz fail_on_missing=yes
But I want it to fetch from a single specific host not all hosts.
Is a task the right approach for this?
If all you need is for the task to run once rather than on every host, you can use run_once like so:
---
- name: Fetch the Database
  run_once: true
  fetch: src=/home/ubuntu/mydb.sql.gz dest=/tmp/mydb.sql.bz fail_on_missing=yes
This will then run on the first host that reaches the task. You can restrict it further with delegate_to if you want to target a specific host:
---
- name: Fetch the Database
  run_once: true
  delegate_to: node1
  fetch: src=/home/ubuntu/mydb.sql.gz dest=/tmp/mydb.sql.bz fail_on_missing=yes
