I have a gitlab-runner running in OpenShift. I am running a playbook using the image registry.redhat.io/ansible-tower-38/ansible-runner-rhel7.
The GitLab CI file:
image:
  name: registry.redhat.io/ansible-tower-38/ansible-runner-rhel7

variables:
  FF_GITLAB_REGISTRY_HELPER_IMAGE: "true"

stages:
  - deploy

run-playbook:
  tags:
    - test
  stage: deploy
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
  script:
    - sh add_user_to_etc.sh
    - export ANSIBLE_CONFIG="/builds/group-name/repo-name/ansible.cfg"
    - echo "$ANSIBLE_CONFIG"
    - ansible-playbook playbooks/linux-node.yml -e "ansible_user=$ROBOT_USERNAME ansible_ssh_pass=$ROBOT_PASSWORD"
The play file:
---
- name: Add Linux servers
  hosts: all
  gather_facts: yes
  any_errors_fatal: true
  environment:
    http_proxy: "{{ proxy_global }}"
    https_proxy: "{{ proxy_global }}"
    no_proxy: "127.0.0.1"
  roles:
    - linux-node
The task where the error occurs:
- name: Build a JSON dashboard def
  template:
    src: templates/dashboard_summary.j2
    dest: /tmp/{{ BLAH }}_dashboard_summary.json
    force: yes
  delegate_to: localhost
  run_once: true
The error:
FAILED! => {"msg": "Failed to get information on remote file (/tmp/my-service-dev_dashboard_summary.json): sudo: PERM_SUDOERS: setresuid(-1, 1, -1): Operation not permitted\nsudo: no valid sudoers sources found, quitting\nsudo: setresuid() [0, 0, 0] -> [1001010000, -1, -1]: Operation not permitted\nsudo: unable to initialize policy plugin\n"}
I'm guessing it's because of the user 1001010000 that OpenShift assigns. But other tasks run as the robot user, which has sudo permissions, and those all work fine. I don't know why this task alone uses 1001010000. Or maybe I've got it all wrong.
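The delegated task runs on the runner pod itself, where the OpenShift-assigned UID (1001010000) has no sudoers entry, so any privilege escalation inherited from the play or role fails there. A minimal sketch of one likely fix, assuming become is enabled somewhere above this task; /tmp on the pod is writable without escalation anyway:

- name: Build a JSON dashboard def
  template:
    src: templates/dashboard_summary.j2
    dest: /tmp/{{ BLAH }}_dashboard_summary.json
    force: yes
  delegate_to: localhost
  run_once: true
  become: false   # assumption: become is inherited from the play/role; turn it off for the local runner pod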
I have two YAML files. One is azure-pipeline.yml:
name: test-resources
trigger: none

resources:
  repositories:
    - repository: pipeline
      type: git
      name: test-templates

parameters:
  - name: whetherYesOrNo
    type: string
    default: Yes
    values:
      - Yes
      - No

extends:
  template: pipelines/ansible-playbook-deploy.yml@pipeline
  parameters:
    folderName: test-3scale
With this file, when I run the pipeline, I can choose Yes or No as options before running it.
The other one is playbook.yml for Ansible:
- hosts: localhost
  connection: local
  become: true
  vars_files:
    - test_service.yml
    - "vars/test.yml"
  collections:
    - test_collection
  tasks:
    - name: Find out playbooks pwd
      shell: pwd
      register: playbook_path_output
      no_log: false
    - debug: var=playbook_path_output.stdout

    - name: echo something
      shell: echo 'test this out'
      register: playbook_ls_content_output
      no_log: false
    - debug: var=playbook_ls_content_output.stdout
I wish to add a condition to the playbook.yml tasks, so that when I choose "Yes" when running the pipeline, the task named "echo something" runs, but if I choose "No", it is skipped. I am really new to YAML syntax and logic. Could someone help? Many thanks!
This runs successfully on my side (the condition is evaluated with no problem; the template expression is expanded at compile time):
azure-pipeline.yml
trigger: none

parameters:
  - name: whetherYesOrNo
    type: string
    default: Yes
    values:
      - Yes
      - No

extends:
  template: pipelines/ansible-playbook-deploy.yml
  parameters:
    whetherYesOrNo: ${{ parameters.whetherYesOrNo }}
ansible-playbook-deploy.yml
parameters:
  - name: whetherYesOrNo
    type: string
    default: No

steps:
  - ${{ if eq(parameters.whetherYesOrNo, 'Yes') }}:
      - task: PowerShell@2
        inputs:
          targetType: 'inline'
          script: |
            # Write your PowerShell commands here.
            Write-Host "Hello World"
(Screenshots omitted: repository structure on my side, and the pipeline run results when choosing Yes and when choosing No.)
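If you would rather make the decision inside the playbook itself, another option is to pass the parameter down as an extra variable and guard the task with when:. A sketch, assuming the template eventually invokes ansible-playbook; the run_echo variable name is hypothetical:

# In the pipeline template (hypothetical step): hand the parameter to Ansible
- script: ansible-playbook playbook.yml -e "run_echo=${{ parameters.whetherYesOrNo }}"

# In playbook.yml: only run the task when run_echo is "Yes"
- name: echo something
  shell: echo 'test this out'
  register: playbook_ls_content_output
  when: run_echo | default('Yes') == 'Yes'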
- name: test
  hosts: all
  gather_facts: no
  tasks:
    # command 1
    - name: ansible-test command 1
      iosxr_command:
        commands:
          - show inventory
      when: ansible_network_os == 'iosxr'
      register: output
    - debug:
        var: output.stdout

    - name: print command executed
      hosts: 127.0.0.1
      connection: local
      command:
        - echo sh inventory
      register: output1
    - debug:
        var: output1.stdout
This is my playbook.
ERROR! conflicting action statements: hosts, command
The error appears to be in '/root/container/playbook2.yaml': line 16, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: print command executed
^ here
I am encountering this error.
Please help me fix the issue.
Indents. You need to start a new play with a new set of hosts and a new task list.
- name: test
  hosts: all
  gather_facts: no
  tasks:
    # command 1
    - name: ansible-test command 1
      iosxr_command:
        commands:
          - show inventory
      when: ansible_network_os == 'iosxr'
      register: output
    - debug:
        var: output.stdout

- name: print command executed
  hosts: 127.0.0.1
  connection: local
  gather_facts: no
  tasks:
    - command: echo sh inventory
      register: output1
    - debug:
        var: output1.stdout
I'm trying to build an inventory file from an Ansible playbook run.
I want to list all the KVM hosts and the guests running on them, by running service libvirtd status and, if successful, virsh list --all, and then store the values in a file on the Ansible host.
I've tried a few different playbook structures, but none have been successful in writing the file (using local_action wrote the ansible_hostname from just one host).
Please can someone guide me on what I'm doing wrong?
This is what I'm running:
- name: Determine KVM hosts
  hosts: all
  become: yes
  #gather_facts: false
  tasks:
    - name: Check if libvirtd service exists
      shell: "service libvirtd status"
      register: libvirtd_status
      failed_when: not (libvirtd_status.rc == 0)
      ignore_errors: true

    - name: List KVM guests
      shell: "virsh list --all"
      register: list_vms
      when: libvirtd_status.rc == 0
      ignore_errors: true

    - name: Write hostname to file
      lineinfile:
        path: /tmp/libvirtd_hosts
        line: "{{ ansible_hostname }} kvm guests: "
        create: true
      #local_action: copy content="{{ item.value }}" dest="/tmp/libvirtd_hosts"
      with_items:
        - variable: ansible_hostname
          value: "{{ ansible_hostname }}"
        - variable: list_vms
          value: "{{ list_vms }}"
      when: libvirtd_status.rc == 0 or list_vms.rc == 0
I was able to cobble together something that's mostly working:
- name: Check if libvirtd service exists
  shell: "service libvirtd status"
  register: libvirtd_status
  failed_when: libvirtd_status.rc not in [0, 1]

- name: List KVM guests
  #shell: "virsh list --all"
  virt:
    command: list_vms
  register: all_vms
  when: libvirtd_status.rc == 0
---
- name: List all KVM hosts
  hosts: production, admin_hosts, kvm_hosts
  become: yes
  tasks:
    - name: create file
      file:
        dest: /tmp/libvirtd_hosts
        state: touch
      delegate_to: localhost

    - name: Copy VMs list
      include_tasks: run_libvirtd_commands.yaml

    - name: saving cumulative result
      lineinfile:
        line: '{{ ansible_hostname }} has {{ all_vms }}'
        dest: /tmp/libvirtd_hosts
        insertafter: EOF
      delegate_to: localhost
      when: groups["list_vms"] is defined and (groups["list_vms"] | length > 0)
Now if only I could clean up the output to filter out the false positives (machines that have no libvirtd service, or an empty or missing list of VMs), because the above doesn't really handle that.
But at least there is output from all the KVM hosts!
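One way to filter those false positives might be to key the when: on the registered virt result rather than on groups, since the virt module invoked with command: list_vms returns the guest names under list_vms. A sketch, assuming all_vms is registered inside run_libvirtd_commands.yaml as shown above:

- name: saving cumulative result
  lineinfile:
    line: '{{ ansible_hostname }} has {{ all_vms.list_vms | join(", ") }}'
    dest: /tmp/libvirtd_hosts
    insertafter: EOF
  delegate_to: localhost
  when:
    - all_vms.list_vms is defined      # skipped hosts register no list_vms key
    - all_vms.list_vms | length > 0    # drop hosts with no guests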
I have a playbook which reads in a list of variables:
vars_files:
  - vars/myvariables.yml

tasks:
  - name: Debug Variable List
    debug:
      msg: "An item: {{ item }}"
    with_list: "{{ myvariables }}"
This prints out the list of "myvariables" from the file vars/myvariables.yml, which contains:
---
myvariables:
  - variable1
  - variable2
I get the following as expected.
"msg": "An item: variable1"
"msg": "An item: variable2"
However, when I connect to another host and run the same debug task, it throws an error:
vars_files:
  - vars/myvariables.yml
tasks:

- name: Configure instance(s)
  hosts: launched
  become: True
  remote_user: ubuntu
  port: 22
  gather_facts: False
  tasks:
    - name: Wait for SSH to come up
      delegate_to: ***
      remote_user: ubuntu
      connection: ssh
      register: item

    - name: Debug Variable List
      debug:
        msg: "An item: {{ item }}"
      with_list: "{{ myvariables }}"
OUTPUT:
"msg": "'myvariables' is undefined"
How do I define the variables file when connecting to another host that is not localhost?
Any help on this would be greatly appreciated.
With "hosts: launched" you started new playbook. Put the vars_files: into the scope of this playbook (see below).
- name: Configure instance(s)
  hosts: launched
  become: True
  remote_user: ubuntu
  port: 22
  gather_facts: False
  vars_files:
    - vars/myvariables.yml
  tasks:
Review the documentation on scoping variables.
I have to run an Ansible playbook to execute the following tasks:
1) Calculate the date in YYYY_MM_DD format and use this prefix to download a file from AWS to my local machine. The filename is of the format 2015_06_04_latest_file.csv.
2) Then create a folder named 2015_06_04 on multiple hosts and upload the file there.
This is my current playbook:
---
- hosts: 127.0.0.1
  connection: local
  sudo: yes
  gather_facts: no
  tasks:
    - name: calculate date
      shell: date "+%Y_%m_%d" --date="1 days ago"
      register: output

    - name: set date variable
      set_fact: latest_date={{ item }}
      with_items: output.stdout_lines

    - local_action: command mkdir -p /tmp/latest_contracts/{{ latest_date }}

    - local_action: command /root/bin/aws s3 cp s3://primarydatafolder/data/{{ latest_date }}_latest_data.csv /tmp/latest_contracts/{{ latest_date }}/ creates=/tmp/latest_contracts/{{ latest_date }}/latest_data.csv
      register: result
      ignore_errors: true

    - local_action: command /root/bin/aws s3 cp s3://secondarydatafolder/data/{{ latest_date }}_latest_data.csv /tmp/latest_contracts/{{ latest_date }}/ creates=/tmp/latest_contracts/{{ latest_date }}/latest_data.csv
      when: result|failed

    # remove the date prefix from the downloaded file
    - local_action: command ./rename_date.sh {{ latest_date }}
      ignore_errors: true

- hosts: contractsServers
  sudo: yes
  gather_facts: no
  tasks:
    - name: create directory
      file: path={{ item.path }} state=directory mode=0775 owner=root group=root
      with_items:
        - { path: '/var/mukul/contracts/{{ latest_date }}' }
        - { path: '/var/mukul/contracts/dummy' }

    - name: copy dummy contracts
      copy: src=dummy dest=/var/mukul/contracts/

    - name: delete previous symlink
      shell: unlink /var/mukul/contracts/latest
      ignore_errors: true

    - name: upload the newly created latest date folder to the host
      copy: src=/tmp/latest_contracts/{{ latest_date }} dest=/var/mukul/contracts/

    - name: create a symbolic link to the folder on the host and call it latest
      action: file state=link src=/var/mukul/contracts/{{ latest_date }} dest=/var/mukul/contracts/latest
As per Ansible's documentation on set_fact, the variable latest_date should be available across plays. However, Ansible fails with the following message:
failed: [192.168.101.177] => (item={'path': u'/var/mukul/contracts/{# latest_date #}'}) => {"failed": true, "item": {"path": "/var/mukul/contracts/{# latest_date #}"}}
msg: this module requires key=value arguments (['path=/var/mukul/contracts/{#', 'latest_date', '#}', 'state=directory', 'mode=0775', 'owner=root', 'group=root'])
It looks as if the second play is unable to get the value of the latest_date fact. Can you please tell me where I'm making a mistake?
Facts are host-specific. As the documentation about set_fact says, "[v]ariables [set with set_fact] are set on a host-by-host basis".
Instead, I'd try using run_once as defined in Delegation, rolling updates, and local actions, like this:
- hosts: contractsServers
  tasks:
    - name: Determine date
      local_action: shell date "+%Y_%m_%d" --date="1 days ago"
      register: yesterday
      always_run: True
      changed_when: False
      run_once: True

    - name: Do something else locally
      local_action: ...
      register: some_variable_name
      always_run: True
      changed_when: False
      run_once: True

    - name: Do something remotely using the variables registered above
      ...
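An alternative sketch that keeps the original two plays: since set_fact binds latest_date to the host it ran on (127.0.0.1 here), the second play can still read it through hostvars:

- hosts: contractsServers
  sudo: yes
  gather_facts: no
  tasks:
    - name: create directory
      # read the fact set on 127.0.0.1 in the first play via hostvars
      file: path=/var/mukul/contracts/{{ hostvars['127.0.0.1'].latest_date }} state=directory mode=0775 owner=root group=root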
You could enable fact caching. You will need to set up a local Redis instance where the facts will then be stored.
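A minimal sketch of the ansible.cfg side of that, assuming a Redis instance listening on localhost:6379 (database 0):

[defaults]
# cache gathered facts in Redis instead of memory
gathering = smart
fact_caching = redis
fact_caching_timeout = 86400
fact_caching_connection = localhost:6379:0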