Ansible local connection: script argument file path not detected - ansible

I have an Ansible playbook which calls 2 roles. Role 1 runs locally and has a script that takes a file path, /tmp/inputfile/input.csv, as an argument. The playbook looks like this:
- hosts: "{{my_extra_var_IP}}"
  connection: local
  roles:
    - prereq
The role's tasks:
- name: Copy script to local
  copy:
    src: files/csv_to_files.sh
    dest: /tmp/input_dir/
    mode: 0777
- command: ls -ltr /tmp/input_dir
- command: cat /tmp/input_dir/inputFile.csv
#- name: run csv to yml script
#  script: /tmp/input_dir/csv_to_files.sh /tmp/input_dir/inputFile.csv
#  become_user: niceha
The output of the first 2 tasks is successful and as expected, but on the 3rd and 4th steps I get this error:
FAILED! => {"changed": true, "cmd": ["cat", "/tmp/input_dir/inputFile.csv"], "delta": "0:00:00.007141", "end": "2017-06-09 15:53:58.673450", "failed": true, "rc": 1, "start": "2017-06-09 15:53:58.666309", "stderr": "cat: /tmp/input_dir/inputFile.csv: No such file or directory", "stdout": "", "stdout_lines": [], "warnings": []}
I am running this job from Tower, which uses userA. I also tried changing the user, but no luck.

The indenting does not look right:
- name: Copy script to local
copy:
src: files/csv_to_files.sh
dest: /tmp/input_dir/
mode: 0777

OK. After much reading I found that the code is fine: it runs from the console but not from Ansible Tower, and just to cross-check, it worked with other directory paths.
Ansible Tower actually uses the /tmp/ directory as a staging area, so any changes/tasks in the playbook that are meant to act on /tmp won't take effect as expected.
Changing my input file path from /tmp to /home/user did the trick for me.
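For illustration, a minimal sketch of the same tasks with the input kept outside /tmp; the /home/user/input_dir path is just a placeholder, not from the original setup:
- name: Copy script to a directory outside /tmp   # avoids Tower's /tmp staging area
  copy:
    src: files/csv_to_files.sh
    dest: /home/user/input_dir/
    mode: 0777
- command: cat /home/user/input_dir/inputFile.csv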

Related

Ansible playbook does not recognise the '*' wildcard

- name: Copy shipped media files on to symlink
  shell: '/bin/cp -Rf /opt/omnius/ose/media.bak/* /var/opt/ose_media/'
The task above failed with the error below:
fatal: [nl3832yr]: FAILED! => {"changed": true, "cmd": ["/bin/cp", "-Rf", "/opt/omnius/ose/media.bak/*", "/var/opt/ose_media/"], "delta": "0:00:00.004059", "end": "2021-04-15 17:13:31.123072", "msg": "non-zero return code", "rc": 1, "start": "2021-04-15 17:13:31.119013", "stderr": "/bin/cp: cannot stat ‘/opt/omnius/ose/media.bak/*’: No such file or directory", "stderr_lines": ["/bin/cp: cannot stat ‘/opt/omnius/ose/media.bak/*’: No such file or directory"], "stdout": "", "stdout_lines": []}
It seems Ansible doesn't recognize the *.
This does not have anything to do with Ansible directly, as Ansible will execute the line as-is in a shell. So the shell you are using does not resolve the *.
You can change that by using this:
- name: Copy shipped media files on to symlink
  shell: '/bin/cp -Rf /opt/omnius/ose/media.bak/* /var/opt/ose_media/'
  args:
    executable: /bin/bash
But you should actually use the copy module with the remote_src parameter, like this:
- name: Copy shipped media files on to symlink
  copy:
    src: /opt/omnius/ose/media.bak/
    dest: /var/opt/ose_media/
    remote_src: yes
It is always better to use Ansible modules than the shell module.
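If you specifically need glob-like selection of individual files, one hedged alternative is to list them with the find module and loop over the results with copy and remote_src; the register name media_files is made up for this sketch, the paths are the ones from the question:
- name: Find shipped media files on the remote host
  find:
    paths: /opt/omnius/ose/media.bak
    patterns: '*'
  register: media_files
- name: Copy each file in place on the remote host
  copy:
    src: "{{ item.path }}"
    dest: /var/opt/ose_media/
    remote_src: yes
  loop: "{{ media_files.files }}"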

How to synchronize a file between two remote servers in Ansible?

The end goal is for me to copy file.txt from Host2 over to Host1. However, I keep getting the same error whenever I run the playbook. I have triple-checked my spacing and made sure I spelled everything correctly, but nothing seems to work.
Command to start the playbook:
ansible-playbook playbook_name.yml -i inventory/inventory_name -u username -k
My Code:
- hosts: Host1
  tasks:
    - name: Synchronization using rsync protocol on delegate host (pull)
      synchronize:
        mode: pull
        src: rsync://Host2.linux.us.com/tmp/file.txt
        dest: /tmp
      delegate_to: Host2.linux.us.com
Expected Result:
Successfully working
Actual Result:
fatal: [Host1.linux.us.com]: FAILED! => {"changed": false, "cmd": "sshpass", "msg": "[Errno 2] No such file or directory", "rc": 2}
I had the same problem as you. Installing sshpass on the target host makes it work normally:
yum install -y sshpass
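If you would rather install it with Ansible itself, a minimal sketch assuming a yum-based target and sufficient privileges:
- name: Install sshpass on the target host
  yum:
    name: sshpass
    state: present
  become: true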

How to create folder in $USER $HOME using Ansible

I'm new to Ansible. I'm trying to become $USER, then create a .ssh folder inside the $HOME directory, and I'm getting Permission denied:
---
- hosts: amazon
  gather_facts: False
  vars:
    ansible_python_interpreter: "/usr/bin/env python3"
    account: 'jenkins'
    home: "{{out.stdout}}"
  tasks:
    - name: Create .SSH directory
      become: true
      become_method: sudo
      become_user: "{{account}}"
      shell: "echo $HOME"
      register: out
    - file:
        path: "{{home}}/.ssh"
        state: directory
My output is:
MacBook-Pro-60:playbooks stefanov$ ansible-playbook variable.yml -v
Using /Users/stefanov/.ansible/ansible.cfg as config file
PLAY [amazon] *************************************************************************************************************************************************************************************
TASK [Create .SSH directory] **********************************************************************************************************************************************************************
changed: [slave] => {"changed": true, "cmd": "echo $HOME", "delta": "0:00:00.001438", "end": "2017-08-21 10:23:34.882835", "rc": 0, "start": "2017-08-21 10:23:34.881397", "stderr": "", "stderr_lines": [], "stdout": "/home/jenkins", "stdout_lines": ["/home/jenkins"]}
TASK [file] ***************************************************************************************************************************************************************************************
fatal: [slave]: FAILED! => {"changed": false, "failed": true, "msg": "There was an issue creating /home/jenkins/.ssh as requested: [Errno 13] Permission denied: b'/home/jenkins/.ssh'", "path": "/home/jenkins/.ssh", "state": "absent"}
to retry, use: --limit #/Users/stefanov/playbooks/variable.retry
PLAY RECAP ****************************************************************************************************************************************************************************************
slave : ok=1 changed=1 unreachable=0 failed=1
I'm guessing - name and - file are dicts and considered different tasks.
And what was executed in - name is no longer valid in - file?
Because I switched to the jenkins user in the - name task, while in the - file task I'm likely back on the account I use for SSH.
Then how can I concatenate both tasks in one?
What is the right way to do this?
Another thing: how can I do sudo with the file module? I can't see such an option:
http://docs.ansible.com/ansible/latest/file_module.html
Or should I just do shell: mkdir -pv $HOME/.ssh instead of using the file module?
Then how can I concatenate both tasks in one?
You cannot do it, but you can just add become to the second task, which will make it run with the same permissions as the first one:
- file:
    path: "{{home}}/.ssh"
    state: directory
  become: true
  become_method: sudo
  become_user: "{{account}}"
Another thing: how can I do sudo with the file module? I can't see such an option.
Because become (and the related keywords) is not a parameter of a module, but a general declaration for any task (and play).
I'm guessing -name and -file are dicts and considered different tasks.
The first task is shell, not name. You can add name to any task (just like become).
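Putting it together, a minimal sketch of both tasks with become applied to each; the mode: '0700' line is an assumption (a common permission for .ssh), not something from the question:
- name: Get the account's home directory
  become: true
  become_method: sudo
  become_user: "{{account}}"
  shell: "echo $HOME"
  register: out
- name: Create the .ssh directory as that account
  become: true
  become_method: sudo
  become_user: "{{account}}"
  file:
    path: "{{out.stdout}}/.ssh"
    state: directory
    mode: '0700'   # assumed permission, adjust as needed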

Add binaries to PATH with Ansible

I'm trying to install the Kiex version manager for the Elixir programming language using Ansible.
These are the plays I use for this:
- name: Kiex Installation
  hosts: web
  gather_facts: false
  remote_user: deployer
  tasks:
    - shell: \curl -sSL https://raw.githubusercontent.com/taylor/kiex/master/install | bash -s
    - name: Add Kiex Bin to Path
      lineinfile:
        dest: /home/deployer/.bashrc
        regexp: '^test -s'
        line: '[[ -s "$HOME/.kiex/scripts/kiex" ]] && source "$HOME/.kiex/scripts/kiex"'
    - name: Reload Path
      shell: source /home/deployer/.bashrc
      args:
        executable: /bin/bash
    - shell: echo $PATH
      register: pathul
    - debug:
        var: pathul

- name: Elixir Installation
  hosts: web
  gather_facts: false
  remote_user: deployer
  tasks:
    - shell: echo $PATH
      register: pathul
    - debug:
        var: pathul
    - name: Install Elixir Version
      command: /home/deployer/.kiex/bin/kiex list
      args:
        executable: /bin/bash
        chdir: /home/deployer/
    - name: Set Elixir Version as Default
      shell: kiex default 1.4
The installation of Kiex succeeds, and if I log in to the remote machine I am able to run it simply by using the kiex command. I can do this because the script in "~/.kiex/scripts/kiex" is sourced. When I echo the $PATH variable it shows the kiex binaries path /home/deployer/.kiex/bin in it:
$ echo $PATH
/home/deployer/.kiex/bin:/home/deployer/.kiex/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
However, kiex, kiex list and even the /home/deoployer/.kiex/bin/kiex list in the Elixir Installation play shown above fail with the message:
TASK [Set Elixir Version as Default] *******************************************
fatal: [local-web-2]: FAILED! => {"changed": true, "cmd": "kiex default 1.4", "delta": "0:00:00.002042", "end": "2017-01-26 22:13:32.898082", "failed": true, "rc": 127, "start": "2017-01-26 22:13:32.896040", "stderr": "/bin/sh: 1: kiex: not found", "stdout": "", "stdout_lines": [], "warnings": []}
Also, the pathul variable that registered the result of echoing the path via Ansible doesn't contain /home/deployer/.kiex/bin:
"stdout": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
How can I make the kiex command work properly via Ansible?
Just use the full, absolute path, like you tried in the Install Elixir Version task, but mind that you have a typo, both in the example and in the explanation you posted:
command: /home/deoployer/.kiex/bin/kiex list
[ ] even the /home/deoployer/.kiex/bin/kiex list [ ] fail[s]
It should likely be deployer, like in the first play, not deoployer.
There is no reason otherwise for Ansible to fail with "kiex: not found" message, if you provide the correct path.
Explanations regarding other tasks:
Quoting man bash:
When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc, if that file exists.
So your ~/.bashrc is not even read when you execute tasks with Ansible, because it's not an interactive session.
This is for example why your pathul variable does not contain changes applied in the ~/.bashrc.
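As an aside (not part of the original answer), if a one-off task really must see what ~/.bashrc sets up, you can force an interactive bash for that single command; note that -i may print harmless job-control warnings on stderr:
- shell: bash -ic 'echo $PATH'   # -i makes bash read ~/.bashrc
  register: pathul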
The following two tasks run separate bash processes. The environment sourced in the first task has no influence on the environment of the second:
- name: Reload Path
  shell: source /home/deployer/.bashrc
  args:
    executable: /bin/bash
- shell: echo $PATH
  register: pathul
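A more Ansible-native way to get the kiex directory on PATH for a single task is the task-level environment keyword; a sketch, assuming facts are gathered so that ansible_env.PATH exists (the plays above set gather_facts: false, so either enable it or hard-code the PATH):
- name: Set Elixir Version as Default
  shell: kiex default 1.4
  environment:
    PATH: "/home/deployer/.kiex/bin:{{ ansible_env.PATH }}"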

Ansible copy or move files only on remote host

This should work but doesn't and gives the following error (below).
I've read a couple of posts on stackoverflow here and here but there doesn't seem to be a good answer that works in this case. I'm really hoping I'm just missing something dumb and I have been at this for hours so please don't mind my snark but I need to vent.
Since Ansible (2.3.0) can't do something as simple as copying/moving/renaming files ONLY on the remote host (I mean, who would want to do that?), and it also can't act on globs (*) (say, when you don't know which files to act on), a two-step approach seems to be the only way (that I know of) to move some files, only on the remote host. But not even this works.
migrate_rhel2centos.yml
---
- hosts: RedHat
  become: true
  become_user: root
  become_method: sudo
  vars:
    repo_dir: /etc/yum.repos.d
  tasks:
    - name: create directory
      file: path=/etc/yum.repos.d/bak/ state=directory
    - name: get repo files
      shell: "ls {{ repo_dir }}/*.repo"
      register: repo_list
    - debug: var=repo_list.stdout_lines
    - name: move repo files
      command: "/bin/mv -f {{ item }} bak"
      args:
        chdir: "{{repo_dir}}"
      with_items: repo_list.stdout_lines
#################################
TASK [get repo files]
**********************************************************************
changed: [myhost]
TASK [debug]
**********************************************************************
ok: [myhost] => {
"repo_list.stdout_lines": [
"/etc/yum.repos.d/centric.repo",
"/etc/yum.repos.d/redhat.repo",
"/etc/yum.repos.d/rhel-source.repo"
]
}
TASK [move repo files]
*******************************************************************
failed: [myhost] (item=repo_list.stdout_lines) => {"changed": true, "cmd": ["/bin/mv", "-f", "repo_list.stdout_lines", "bak"], "delta": "0:00:00.001945", "end": "2016-12-13 15:07:14.103823", "failed": true, "item": "repo_list.stdout_lines", "rc": 1, "start": "2016-12-13 15:07:14.101878", "stderr": "/bin/mv: cannot stat `repo_list.stdout_lines': No such file or directory", "stdout": "", "stdout_lines": [], "warnings": []}
to retry, use: --limit #/home/jimm/.ansible/migrate_rhel2centos.retry
PLAY RECAP
********************************
myhost : ok=5 changed=1 unreachable=0 failed=1
The copy module has the flag
remote_src
If no (the default), it will search for src on the originating/controller machine.
If yes, it will go to the remote/target machine for the src.
Edit: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/copy_module.html
It does now support recursive copying.
If you want to copy a file only on the remote server, you need to use the ansible.builtin.copy module with the key
remote_src: yes
Example from the docs:
- name: Copy a "sudoers" file on the remote machine for editing
  ansible.builtin.copy:
    src: /etc/sudoers
    dest: /etc/sudoers.edit
    remote_src: yes
    validate: /usr/sbin/visudo -csf %s
- name: copy files task
  shell: cp source/path/file destination/path/file
This resolved my issue with copying files on the remote host.
You need to use it as below:
with_items: "{{repo_list.stdout_lines}}"
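Putting that fix into the failing task from the question, it would look like this (everything else unchanged):
- name: move repo files
  command: "/bin/mv -f {{ item }} bak"
  args:
    chdir: "{{repo_dir}}"
  with_items: "{{ repo_list.stdout_lines }}"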
