Ansible playbook does not recognise the '*' wildcard

- name: Copy shipped media files on to symlink
  shell: '/bin/cp -Rf /opt/omnius/ose/media.bak/* /var/opt/ose_media/'
The above task fails with the error below:
fatal: [nl3832yr]: FAILED! => {"changed": true, "cmd": ["/bin/cp", "-Rf", "/opt/omnius/ose/media.bak/*", "/var/opt/ose_media/"], "delta": "0:00:00.004059", "end": "2021-04-15 17:13:31.123072", "msg": "non-zero return code", "rc": 1, "start": "2021-04-15 17:13:31.119013", "stderr": "/bin/cp: cannot stat ‘/opt/omnius/ose/media.bak/*’: No such file or directory", "stderr_lines": ["/bin/cp: cannot stat ‘/opt/omnius/ose/media.bak/*’: No such file or directory"], "stdout": "", "stdout_lines": []}
It seems Ansible doesn't recognize the *.

This does not have anything to do with Ansible directly: Ansible passes the line as-is to a shell, so it is the shell you are using that does not expand the *.
You can change that by using this:
- name: Copy shipped media files on to symlink
  shell: '/bin/cp -Rf /opt/omnius/ose/media.bak/* /var/opt/ose_media/'
  args:
    executable: /bin/bash
But you should actually use the copy-module with the remote_src parameter like this:
- name: Copy shipped media files on to symlink
  copy:
    src: /opt/omnius/ose/media.bak/
    dest: /var/opt/ose_media/
    remote_src: yes
It is always better to use ansible modules than the shell module.
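If you do need glob-style expansion without going through a shell, one alternative (not from the answers above, just a sketch) is to enumerate the files with the find module and loop over the results with copy and remote_src. The paths are the ones from the question, and this only handles a flat directory:
# Sketch: expand the "glob" with the find module instead of the shell,
# then copy each match on the remote host.
- name: Find shipped media files
  find:
    paths: /opt/omnius/ose/media.bak
    file_type: file
  register: media_files
- name: Copy shipped media files on to symlink
  copy:
    src: "{{ item.path }}"
    dest: /var/opt/ose_media/
    remote_src: yes
  with_items: "{{ media_files.files }}"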

Related

Ansible4 builtin shell rewrites trust list command to trust list-modules

- block:
    - name: Check if Certs installed - Linux RHEL8 - step 1
      ansible.builtin.shell:
        cmd: "trust list"
      register: isCertInstalled
Some additional formatting is piped in to whittle this down to a number (omitted for privacy purposes), then:
TASK [Check if Certs installed - Linux RHEL8 - step 1] ********************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["trust", "list"], "delta": "0:00:00.007482", "end": "2022-06-21 18:20:43.759496", "msg": "non-zero return code", "rc": 2, "start": "2022-06-21 18:20:43.752014", "stderr": "p11-kit: 'list-modules' is not a valid command. See 'trust --help'", "stderr_lines": ["p11-kit: 'list-modules' is not a valid command. See 'trust --help'"], "stdout": "", "stdout_lines": []}
Any idea how to force it to literally just run "trust list" instead of "trust list-modules"?
RHEL 8.5, Ansible 4.1, Python 3.9
Use the full path for the trust command and enclose trust list in single quotes. Ansible is only seeing list and tying that to the list module.
i.e. "'trust list' | grep Internal-Cert | wc -l | tr -d '\n'"
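For reference, a sketch of the task using a full path to trust (assuming it lives at /usr/bin/trust, as on a typical RHEL 8 install) together with the pipeline mentioned above:
- name: Check if Certs installed - Linux RHEL8 - step 1
  ansible.builtin.shell:
    # full path so the intended p11-kit trust binary is invoked
    cmd: /usr/bin/trust list | grep Internal-Cert | wc -l | tr -d '\n'
  register: isCertInstalled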

ANSIBLE - shell task returns non-zero return code but otherwise works in terminal

I have an ansible task:
- name: Get vault's binary path
  shell: type -p vault
  register: vault_binary_path
returns
TASK [update_vault : Get vault's binary path] **********************************************************************************************************************************************************************
fatal: [xxxxx]: FAILED! => {"changed": true, "cmd": "type -p vault", "delta": "0:00:00.003303", "end": "2020-04-08 11:37:19.636528", "msg": "non-zero return code", "rc": 1, "start": "2020-04-08 11:37:19.633225", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
but when I run it in shell it returns just fine
[root@ip-xxxxx]# type -p vault
/usr/local/bin/vault
I run ansible as root with become: true. All previous steps are fine up until this one. Any advice appreciated.
Define an update to your PATH in your playbook:
environment:
  PATH: "{{ ansible_env.PATH }}:/usr/local/bin"
...so that /usr/local/bin is guaranteed to be included.
(Also, while type is almost always preferable to which when writing bash-specific code, this isn't such a case: your shell may be /bin/sh, which isn't guaranteed to support any features beyond the POSIX sh specification. Consider changing to shell: command -v vault, which is guaranteed to work as intended on all POSIX-compliant shells.)
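Combining both suggestions, the task might look like this (a sketch only; the PATH addition and the command -v substitution are exactly what the answer describes):
- name: Get vault's binary path
  shell: command -v vault  # POSIX-portable replacement for `type -p`
  register: vault_binary_path
  environment:
    PATH: "{{ ansible_env.PATH }}:/usr/local/bin"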

How to create folder in $USER $HOME using Ansible

I'm new to Ansible. I'm trying to become $USER, then create a .ssh folder inside the $HOME directory, and I'm getting Permission denied:
---
- hosts: amazon
  gather_facts: False
  vars:
    ansible_python_interpreter: "/usr/bin/env python3"
    account: 'jenkins'
    home: "{{out.stdout}}"
  tasks:
    - name: Create .SSH directory
      become: true
      become_method: sudo
      become_user: "{{account}}"
      shell: "echo $HOME"
      register: out
    - file:
        path: "{{home}}/.ssh"
        state: directory
My output is:
MacBook-Pro-60:playbooks stefanov$ ansible-playbook variable.yml -v
Using /Users/stefanov/.ansible/ansible.cfg as config file
PLAY [amazon] *************************************************************************************************************************************************************************************
TASK [Create .SSH directory] **********************************************************************************************************************************************************************
changed: [slave] => {"changed": true, "cmd": "echo $HOME", "delta": "0:00:00.001438", "end": "2017-08-21 10:23:34.882835", "rc": 0, "start": "2017-08-21 10:23:34.881397", "stderr": "", "stderr_lines": [], "stdout": "/home/jenkins", "stdout_lines": ["/home/jenkins"]}
TASK [file] ***************************************************************************************************************************************************************************************
fatal: [slave]: FAILED! => {"changed": false, "failed": true, "msg": "There was an issue creating /home/jenkins/.ssh as requested: [Errno 13] Permission denied: b'/home/jenkins/.ssh'", "path": "/home/jenkins/.ssh", "state": "absent"}
to retry, use: --limit @/Users/stefanov/playbooks/variable.retry
PLAY RECAP ****************************************************************************************************************************************************************************************
slave : ok=1 changed=1 unreachable=0 failed=1
I'm guessing - name and - file are dicts and are considered different tasks,
and what was executed in - name is no longer valid in - file?
Because I switched to the jenkins user in - name, while in - file I'm probably back on the account I use for SSH.
Then how can I concatenate both tasks in one?
What is the right way to do this?
Another thing: how can I do sudo with the file module? I can't see such an option:
http://docs.ansible.com/ansible/latest/file_module.html
Or should I just do shell: mkdir -pv $HOME/.ssh instead of using the file module?
Then how can I concatenate both tasks in one?
You cannot do it, but you can just add become to the second task, which will make it run with the same permissions as the first one:
- file:
    path: "{{home}}/.ssh"
    state: directory
  become: true
  become_method: sudo
  become_user: "{{account}}"
Another thing: how can I do sudo with the file module? I can't see such an option.
That is because become (and the related keywords) is not a parameter of a module, but a general declaration for any task (and play).
I'm guessing - name and - file are dicts and are considered different tasks.
The first task is shell, not name. You can add name to any task (just like become).
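Putting those points together, the second task can be given a name and its own become settings (this is just the answer's suggestion written out with the question's variables):
- name: Create .ssh directory as the target user
  become: true
  become_method: sudo
  become_user: "{{account}}"
  file:
    path: "{{home}}/.ssh"
    state: directory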

Ansible Local connection script argument path failed to detect

I have an Ansible playbook which calls two roles. Role 1 runs on local and has a script that takes a file path, /tmp/inputfile/input.csv, as an argument. The playbook looks like:
- hosts: "{{my_extra_var_IP}}"
  connection: local
  roles:
    - prereq
The role's tasks:
- name: Copy script to local
  copy:
    src: files/csv_to_files.sh
    dest: /tmp/input_dir/
    mode: 0777
- command: ls -ltr /tmp/input_dir
- command: cat /tmp/input_dir/inputFile.csv
#- name: run csv to yml script
#  script: /tmp/input_dir/csv_to_files.sh /tmp/input_dir/inputFile.csv
#  become_user: niceha
The output of the first two tasks is successful and as expected, but on the 3rd and 4th steps I get this error:
FAILED! => {"changed": true, "cmd": ["cat", "/tmp/input_dir/inputFile.csv"], "delta": "0:00:00.007141", "end": "2017-06-09 15:53:58.673450", "failed": true, "rc": 1, "start": "2017-06-09 15:53:58.666309", "stderr": "cat: /tmp/input_dir/inputFile.csv: No such file or directory", "stdout": "", "stdout_lines": [], "warnings": []}
I am running this job from Tower, which uses userA. I also tried changing the users, but no luck.
The indenting does not look right:
- name: Copy script to local
  copy:
    src: files/csv_to_files.sh
    dest: /tmp/input_dir/
    mode: 0777
OK. After much reading I found out that the code is fine: it runs from the console but not from Ansible Tower, and just to cross-check, it worked with other directory paths.
Ansible Tower actually uses the /tmp/ directory as a staging area, so any changes/tasks in the playbook that are meant to act on the /tmp directory won't take effect.
Changing my input file path from /tmp to /home/user did the trick for me.
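For illustration, here is a sketch of the same role tasks with the staging directory moved out of /tmp (the /home/userA/input_dir path is hypothetical; any directory outside /tmp that the Tower user can write to should behave the same):
- name: Copy script to a directory outside /tmp
  copy:
    src: files/csv_to_files.sh
    dest: /home/userA/input_dir/
    mode: 0777
- name: run csv to yml script
  script: /home/userA/input_dir/csv_to_files.sh /home/userA/input_dir/inputFile.csv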

Ansible copy or move files only on remote host

This should work but doesn't and gives the following error (below).
I've read a couple of posts on stackoverflow here and here but there doesn't seem to be a good answer that works in this case. I'm really hoping I'm just missing something dumb and I have been at this for hours so please don't mind my snark but I need to vent.
Since Ansible (2.3.0) can't do something as simple as copying/moving/renaming files ONLY on the remote host (I mean, who would want to do that?), and it also can't act on globs (*) (say, when you don't know which files to act on), a two-step approach seems to be the only way (that I know of) to move some files only on the remote host. But not even this works.
migrate_rhel2centos.yml
---
- hosts: RedHat
  become: true
  become_user: root
  become_method: sudo
  vars:
    repo_dir: /etc/yum.repos.d
  tasks:
    - name: create directory
      file: path=/etc/yum.repos.d/bak/ state=directory
    - name: get repo files
      shell: "ls {{ repo_dir }}/*.repo"
      register: repo_list
    - debug: var=repo_list.stdout_lines
    - name: move repo files
      command: "/bin/mv -f {{ item }} bak"
      args:
        chdir: "{{repo_dir}}"
      with_items: repo_list.stdout_lines
#################################
TASK [get repo files]
**********************************************************************
changed: [myhost]
TASK [debug]
**********************************************************************
ok: [myhost] => {
    "repo_list.stdout_lines": [
        "/etc/yum.repos.d/centric.repo",
        "/etc/yum.repos.d/redhat.repo",
        "/etc/yum.repos.d/rhel-source.repo"
    ]
}
TASK [move repo files]
*******************************************************************
failed: [myhost] (item=repo_list.stdout_lines) => {"changed": true, "cmd": ["/bin/mv", "-f", "repo_list.stdout_lines", "bak"], "delta": "0:00:00.001945", "end": "2016-12-13 15:07:14.103823", "failed": true, "item": "repo_list.stdout_lines", "rc": 1, "start": "2016-12-13 15:07:14.101878", "stderr": "/bin/mv: cannot stat `repo_list.stdout_lines': No such file or directory", "stdout": "", "stdout_lines": [], "warnings": []}
to retry, use: --limit @/home/jimm/.ansible/migrate_rhel2centos.retry
PLAY RECAP
********************************
myhost : ok=5 changed=1 unreachable=0 failed=1
Copy has the flag remote_src:
If no, it will search for src on the originating/master machine.
If yes, it will go to the remote/target machine for src. The default is no.
Edit: per https://docs.ansible.com/ansible/latest/collections/ansible/builtin/copy_module.html, it does now support recursive copying.
If you want to copy files only on the remote server, you need to use the ansible.builtin.copy module with the key
remote_src: yes
Example from the docs:
- name: Copy a "sudoers" file on the remote machine for editing
  ansible.builtin.copy:
    src: /etc/sudoers
    dest: /etc/sudoers.edit
    remote_src: yes
    validate: /usr/sbin/visudo -csf %s
- name: copy files task
  shell: cp source/path/file destination/path/file
This resolved my issue with copying files on the remote host.
You need to use it as below:
with_items: "{{repo_list.stdout_lines}}"
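With that change applied, the failing "move repo files" task from the question would read (only the with_items line differs from the original):
- name: move repo files
  command: "/bin/mv -f {{ item }} bak"
  args:
    chdir: "{{repo_dir}}"
  with_items: "{{repo_list.stdout_lines}}"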

Resources