Folks, I have an ansible.cfg:
[defaults]
remote_user = sysadmin
inventory = hosts.yaml
host_key_checking = False
local_tmp = /Users/juergen/Documents/DPSCodeAcademy/ansible/#dev/ddve-aws/ddve6-7.4
Further down, a playbook:
---
- hosts: ddve
  gather_facts: False
  tasks:
    - name: net show all
      command: net show all
...
The ddve host is a very special Linux box with its own command set, so regular Linux operations do not work. What I was trying to do is redirect the tmp dir to a local dir on my Mac and just fire a valid command on that ddve host, but this fails with:
fatal: [3.126.251.125]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo For example, \"help timezone\" shows all commands relating to timezones./.ansible/tmp `\"&& mkdir \"` echo For example, \"help timezone\" shows all commands relating to timezones./.ansible/tmp/ansible-tmp-1611501684.866448-10774-109898410031575 `\" && echo ansible-tmp-1611501684.866448-10774-109898410031575=\"` echo For example, \"help timezone\" shows all commands relating to timezones./.ansible/tmp/ansible-tmp-1611501684.866448-10774-109898410031575 `\" ), exited with result 40, stdout output: That doesn't look like a valid command, displaying help...\n\nHelp is available on the following topics:\n\n adminaccess ddboost ntp\n alerts disk qos\n alias elicense quota\n authentication enclosure replication\n autosupport filesys smt\n cifs ifgroup snapshot\n client-group log snmp\n cloud migration storage\n compression mtree support\n config net system\n data-movement nfs user\n\nType \"help <topic>\" to view help for the given topic.\n\nType \"help <keyword>\" to search the commands for a specific keyword.\nFor example, \"help timezone\" shows all commands relating to timezones.\n\n", "unreachable": true}
PLAY RECAP ************************************************************************************************************************************************************************************
3.126.251.125 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
but an SSH login works:
❯ ssh sysadmin@3.126.251.125
EMC Data Domain Virtual Edition
Last login: Sun Jan 24 07:21:24 PST 2021 from 95.91.249.86 on ssh
Welcome to Data Domain OS 7.4.0.5-671629
----------------------------------------
sysadmin@ip-172-31-16-174# net show all
Active Network Configuration:
ethV0 Link encap:Ethernet HWaddr 02:C9:AF:87:AC:7C
inet addr:172.31.16.1
Can you help me understand what the error is telling me?
Ansible relies on being able to run Python on the remote host. If "regular linux operations" won't work, this is probably the problem. Note also that local_tmp only affects the controller side; the failing mkdir in the error above runs on the remote host, so redirecting the local tmp dir cannot help.
The simplest workaround is to use the raw module, which simply executes commands via ssh, with no Python or remote temp directory involved. This is the only module you would be able to use to target the remote host.
- name: net show all
  raw: net show all
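For reference, a minimal complete playbook sketch using raw against the ddve group from the question; gather_facts must remain disabled because fact gathering itself requires Python on the target:
---
- hosts: ddve
  gather_facts: False
  tasks:
    - name: net show all
      raw: net show all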
It looks like the remote system is some sort of networking device. There are a number of Ansible modules designed to work with switches and other network devices that don't support regular Linux commands, or Python, etc. See the documentation for Ansible Network Automation for more information. Possibly there is a module for the device you are managing?
Related
I am new to Ansible and I cannot solve an error: I use ansible.builtin.shell to call the pcs utility (Pacemaker). pcs is installed on the remote machine, and I can use it when I ssh to that machine, but Ansible reports a 'command not found' error with return code 127.
Here is my inventory.yml:
---
all:
  children:
    centos7:
      hosts:
        UVMEL7:
          ansible_host: UVMEL7
Here is my playbook, TestPcs.yaml:
---
- name: Test the execution of pcs command
  hosts: UVMEL7
  tasks:
    - name: Call echo
      ansible.builtin.shell: echo
    - name: pcs
      ansible.builtin.shell: pcs
Note: I also used the echo command to verify that I am correctly using ansible.builtin.shell.
I launch my playbook with: ansible-playbook -i inventory.yml TestPcs.yaml --user=traite
And I get this result:
PLAY [Test the execution of pcs command] *****************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************
ok: [UVMEL7]
TASK [Call echo] *****************************************************************************************************************************************************************************************************************************
changed: [UVMEL7]
TASK [pcs] ***********************************************************************************************************************************************************************************************************************************
fatal: [UVMEL7]: FAILED! => {"changed": true, "cmd": "pcs", "delta": "0:00:00.003490", "end": "2022-03-10 15:02:17.418475", "msg": "non-zero return code", "rc": 127, "start": "2022-03-10 15:02:17.414985", "stderr": "/bin/sh: pcs : commande introuvable", "stderr_lines": ["/bin/sh: pcs : commande introuvable"], "stdout": "", "stdout_lines": []}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
UVMEL7 : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The pcs command is failing and in stderr, there is a 'command not found' error.
On the other hand, when I ssh to the machine and run the pcs command, it is executed and returns 1, which is different from 127. It is normal that pcs returns an error: I simplified the test case to the strict minimum to keep my question short.
I expect Ansible to behave the same way: an error on pcs with return code 1.
Here is what I did to simulate what Ansible does (based on remarks by @Zeitounator): ssh <user>@<machine> '/bin/bash -c "echo $PATH"'
I get my default PATH, as explained in the bash manual page. On my system, sh links to bash.
I see that /etc/profile does the PATH manipulation that I need. However, because of the -c option, bash is not started as a login shell, and therefore /etc/profile is not sourced.
I end up doing the job manually:
---
- name: Test the execution of pcs command
  hosts: UVMEL7
  tasks:
    - name: Call echo
      ansible.builtin.shell: echo
    - name: pcs
      ansible.builtin.shell: source /etc/profile && pcs
Which executes pcs as expected.
To sum up, my executable was not executed because the folder holding it was not listed in my PATH environment variable. This was because /bin/sh (aka /bin/bash on my system) was called with the -c flag, which prevents sourcing /etc/profile and other login-shell configuration files. The issue was 'solved' by manually sourcing the configuration file that correctly sets the PATH environment variable.
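As an alternative to sourcing /etc/profile, Ansible's task-level environment keyword can prepend the needed directories to PATH; a minimal sketch, assuming pcs lives somewhere like /usr/sbin (adjust to whatever /etc/profile adds on your system):
- name: pcs
  ansible.builtin.shell: pcs
  environment:
    # these directories are assumptions; list whatever /etc/profile adds
    PATH: "/usr/sbin:/sbin:{{ ansible_env.PATH }}"
Note that ansible_env requires gathered facts, which this playbook already collects by default.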
I have a simple playbook that tries to install packages.
My task is failing (see output below).
I can ping the host, and manually I can run the command as the super user (tco).
my ansible.cfg
[defaults]
inventory = /Users/<myuser>/<automation>/ansible/inventory
remote_user = tco
packages (vars/packages.yml):
packages:
  - yum-utils
  - sshpass
playbook
---
- hosts: all
  vars_files:
    - vars/packages.yml
  tasks:
    - name: testing connection
      ping:
      remote_user: tco
    - name: Installing packages
      yum:
        name: "{{ packages }}"
        state: present
Running playbook:
ansible-playbook my-playbook.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo
Output:
ansible-playbook register_sys_rh.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo
BECOME password:
PLAY [all] ******************************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [xx.xxx.13.105]
TASK [testing connection] ***************************************************************************************************************************************************
ok: [xx.xxx.13.105]
TASK [Installing packages] **************************************************************************************************************************************************
fatal: [xx.xxx.13.105]: FAILED! => {"changed": false, "msg": "This command has to be run under the root user.", "results": []}
PLAY RECAP ******************************************************************************************************************************************************************
xx.xxx.13.105 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
inventory:
ansible-inventory --list | jq '.master'
{
  "hosts": [
    "xx.xxx.13.105"
  ]
}
I have copied my id_rsa.pub to the host already, so I can now log in to the host without a password.
I can log in and do sudo su or run any other command that needs root privilege.
[tco@control-plane-0 ~]$ whoami
tco
[tco@control-plane-0 ~]$ hostname -I
xx.xxx.13.105 192.168.122.1
[tco@control-plane-0 ~]$ sudo su
[sudo] password for tco:
[root@control-plane-0 tco]#
I explicitly override the user and sudo method on the ansible-playbook command line; I have no idea what I am doing wrong here.
Thanks in advance.
Fixed it.
But, I need to understand the Ansible concept better.
I changed ansible.cfg to this (changed become_user to root):
[defaults]
inventory = <my-inventory-path>
remote_user = tco
[privilege_escalation]
become=True
become_method=sudo
become_ask_pass=False
become_user=root
become_pass=<password>
And, running it like this:
ansible-playbook my-playbook.yml --limit master
this gives me an error:
FAILED! => {"msg": "Missing sudo password"}
So, I run like this:
ansible-playbook my-playbook.yml --limit master --ask-become-pass
and when the password is prompted I provide the tco password, as I am not sure what the password for the root user is.
And this works.
Not sure why the cfg file password is not working, even though I provide the same password when prompted.
As per my understanding, when I say become_user and become_pass, that is what Ansible uses to run privileged commands. But here I am saying remote_user: tco and become_user: root.
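Two details explain this behavior: with become_method=sudo, the become password Ansible asks for is the password of the connecting user (tco), not root's, which is why the tco password works at the prompt. And become_pass is not a supported ansible.cfg key, so the value set there is ignored; the supported non-interactive route is the ansible_become_password connection variable. A minimal host_vars sketch (the vaulted variable name is hypothetical):
ansible_user: tco
ansible_become: true
ansible_become_user: root
ansible_become_method: sudo
# Store the sudo password in Ansible Vault rather than plain text;
# vault_tco_sudo_password is an illustrative variable name.
ansible_become_password: "{{ vault_tco_sudo_password }}"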
This error occurs because the playbook is unable to run on the remote nodes as the root user.
We can modify the ansible.cfg file to resolve this issue.
Steps:
Open ansible.cfg; the default location is /etc/ansible/ansible.cfg, e.g.: sudo vim /etc/ansible/ansible.cfg
Search for the regular expression /privilege_escalation (in Vim, / searches for the keyword that follows)
Press i (insert mode) and uncomment the following lines:
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
Note: If the .cfg lines are not modified, the corresponding values can be passed via the terminal during playbook execution (as specified in the question itself).
Run the playbook again with: ansible-playbook playbookname.yml
Note: if there is no SSH key authentication pre-established between the Ansible controller and the nodes, appending -kK is required, i.e. ansible-playbook playbookname.yml -kK.
This will prompt you for the password of the SSH connection as well as the BECOME password (the password for the privileged user on the node).
I am able to provision Windows 10 using Ansible/Chocolatey by running Ansible in Ubuntu WSL. I am now trying to provision the Ubuntu WSL environment using that same Ansible instance. It seems to authenticate properly but I'm getting the following permission error when I try to provision Ubuntu WSL from Ubuntu WSL itself:
fatal: [localhost-wsl]: UNREACHABLE! => {"changed": false, "msg": "Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\". Failed command was: ( umask 77 && mkdir -p \"` echo /tmp/.ansible-${USER}/tmp/ansible-tmp-1594006839.9280272-267367995921233 `\" && echo ansible-tmp-1594006839.9280272-267367995921233=\"` echo /tmp/.ansible-${USER}/tmp/ansible-tmp-1594006839.9280272-267367995921233 `\" ), exited with
result 1, stdout output: ansible-tmp-1594006839.9280272-267367995921233=/tmp/.ansible-***/tmp/ansible-tmp-1594006839.9280272-267367995921233\n", "unreachable": true}
[WARNING]: Failure using method (v2_runner_on_unreachable) in callback plugin
(<ansible.plugins.callback.mail.CallbackModule object at 0x7feccbade550>): [Errno
111] Connection refused
Here's my inventory.yml:
all:
  children:
    ubuntu-wsl:
      hosts:
        localhost-wsl:
          ansible_port: 22
          ansible_host: localhost
          ansible_password: "{{ passwordd }}"
          ansible_user: "{{ usernamee }}"
And here's my ansible.cfg:
[defaults]
inventory = inventory.yml
forks = 50
transport = ssh
gathering = smart
fact_caching = jsonfile
fact_caching_connection = ~/.ansible/factcachingconnection
callback_whitelist = mail
fact_caching_timeout = 60480000
hash_behavior = merge
retry_files_enable = False
pipelining = True
host_key_checking = False
remote_tmp = /tmp/.ansible-${USER}/tmp
[winrm_connection]
server_cert_validation = ignore
transport = credssp,ssl
[ssh_connection]
transfer_method = piped
Can anyone spot an error or suggest a possible solution? I was also unable to get it working using a local-type connection (the above is using SSH).
Thanks
The solution to this was upgrading the Ubuntu WSL environment to WSL 2. See https://learn.microsoft.com/en-us/windows/wsl/install-win10#update-to-wsl-2
Using Ansible, I wish to transfer all files from the Red Hat localhost folder /app/tmpfiles/ to remoteaixhost at /was/IBM/BACKUP/00005/, with the current timestamp and 755 permissions.
Below command shows that I do have a file on my localhost:
[localuser@localhost]$ ls -ltr /app/tmpfiles/*
-rw-rw-r-- 1 localuser user 12 Sep 13 15:53 /app/tmpfiles/testingjsp4.jsp
Below is my ansible playbook for this task.
- name: Play 1
  hosts: remoteaixhost
  tasks:
    - name: "Backup on Destination Server"
      tags: validate
      local_action: command rsync --delay-updates -F --compress --chmod=755 "--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" "--out-format=<<CHANGED>>%i %n%L" "{{ playbook_dir }}/tmpfiles/*" "{{ USER }}@{{ inventory_hostname }}":"{{ vars[inventory_hostname] }}/BACKUP/{{ Number }}/"
      run_once: true
Running the above playbook fails; below is the error log from Ansible:
TASK [Backup on Destination Server remoteaixhost.] *** fatal:
[remoteaixhost -> localhost]: FAILED! => {"changed": true, "cmd":
["rsync", "--delay-updates", "-F", "--compress", "--chmod=755",
"--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o
StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null",
"--out-format=<>%i %n%L", "/app/tmpfiles/",
"user2#remoteaixhost:/was/IBM/BACKUP/00005/"], "delta":
"0:00:00.228097", "end": "2019-10-04 00:23:09.103525", "msg":
"non-zero return code", "rc": 23, "start": "2019-10-04
00:23:08.875428", "stderr": "Warning: Permanently added
'remoteaixhost' (RSA) to the list of known hosts.\r\nThis system is
for the use of authorized users only. Individuals using this computer
system without authority, or in excess of their authority, are subject
to having all of their activities on this system monitored and
recorded by system personnel. In the course of monitoring individuals
improperly using this system, or in the course of system maintenance,
the activities of authorized users may also be monitored. Anyone using
this system expressly consents to such monitoring and is advised that
if such such monitoring reveals possible evidence of criminal
activity, system personnel may provide the evidence of such monitoring
to the law enforcement officials\nrsync: link_stat \"/app/tmpfiles/\"
failed: No such file or directory (2)\nrsync error: some files/attrs
were not transferred (see previous errors) (code 23) at main.c(1178)
[sender=3.1.2]", "stderr_lines": ["Warning: Permanently added
'remoteaixhost' (RSA) to the list of known hosts.", "This system is
for the use of authorized users only. Individuals using this computer
system without authority, or in excess of their authority, are subject
to having all of their activities on this system monitored and
recorded by system personnel. In the course of monitoring individuals
improperly using this system, or in the course of system maintenance,
the activities of authorized users may also be monitored. Anyone using
this system expressly consents to such monitoring and is advised that
if such such monitoring reveals possible evidence of criminal
activity, system personnel may provide the evidence of such monitoring
to the law enforcement officials", "rsync: link_stat
\"/app/tmpfiles/*\" failed: No such file or directory (2)", "rsync
error: some files/attrs were not transferred (see previous errors)
(code 23) at main.c(1178) [sender=3.1.2]"], "stdout": "",
"stdout_lines": []}
NO MORE HOSTS LEFT
PLAY RECAP
********************************************************************* remoteaixhost : ok=8 changed=4 unreachable=0
failed=1 skipped=6 rescued=0 ignored=0 localhost
: ok=4 changed=2 unreachable=0 failed=0 skipped=1
rescued=0 ignored=0
Build step 'Execute shell' marked build as failure Finished: FAILURE
One thing that bothers me in the output is "fatal: [remoteaixhost -> localhost]". Why does it say "remoteaixhost -> localhost" when we are actually specifying local_action? I was expecting it to show "localhost -> remoteaixhost".
However, I am not sure whether this observation is concerning, what the solution is, or whether it helps us look in the right direction.
Note: I also tried removing the asterisk from the rsync command above, like below:
rsync --delay-updates -F --compress --chmod=755 "--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" "--out-format=<<CHANGED>>%i %n%L" "{{ playbook_dir }}/tmpfiles/" "{{ USER }}@{{ inventory_hostname }}":"{{ vars[inventory_hostname] }}/BACKUP/{{ Number }}/"
No errors are received; however, the files do not get copied to the target host.
I manually ran the rsync command on localhost as generated by Ansible; below is the observation:
# with the asterisk, the copy is successful:
[localuser@localhost]$ rsync --delay-updates -F --compress --chmod=755 "--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --out-format="<<CHANGED>>%i %n%L" /app/tmpfiles/* user2@remoteaixhost:/was/IBM/BACKUP/00005/
Warning: Permanently added 'remoteaixhost' (RSA) to the list of known hosts.
This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel. In the course of monitoring individuals improperly using this system, or in the course of system maintenance, the activities of authorized users may also be monitored. Anyone using this system expressly consents to such monitoring and is advised that if such such monitoring reveals possible evidence of criminal activity, system personnel may provide the evidence of such monitoring to the law enforcement officials
<<CHANGED>><f+++++++++ testingjsp4.jsp
[localuser@localhost]$ echo $?
0
# without the asterisk, the files do not get copied over:
[localuser@localhost]$ rsync --delay-updates -F --compress --chmod=755 "--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --out-format="<<CHANGED>>%i %n%L" /app/tmpfiles/ user2@remoteaixhost:/was/IBM/BACKUP/00005/
Warning: Permanently added 'remoteaixhost' (RSA) to the list of known hosts.
This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel. In the course of monitoring individuals improperly using this system, or in the course of system maintenance, the activities of authorized users may also be monitored. Anyone using this system expressly consents to such monitoring and is advised that if such such monitoring reveals possible evidence of criminal activity, system personnel may provide the evidence of such monitoring to the law enforcement officials
skipping directory .
[localuser@localhost]$ echo $?
0
Can you please suggest a fix?
local_action: command rsync -r --delay-updates -F --compress --chmod=755 "--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" "--out-format=<<CHANGED>>%i %n%L" {{ playbook_dir }}/tmpfiles/*.log "{{ USER }}@{{ inventory_hostname }}":"{{ vars[inventory_hostname] }}/BACKUP/{{ Number }}/"
This works; I have removed the double quotes around {{ playbook_dir }}/tmpfiles/*.log.
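As an alternative to shelling out to rsync, the same copy could be expressed with the ansible.posix.synchronize module; a sketch, untested against AIX, using only the module's documented options:
- name: "Backup on Destination Server"
  ansible.posix.synchronize:
    mode: push
    src: "{{ playbook_dir }}/tmpfiles/"
    dest: "{{ vars[inventory_hostname] }}/BACKUP/{{ Number }}/"
    private_key: /app/my_id_rsa
    rsync_opts:
      - "--chmod=755"
  run_once: true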
Remote Server's "/home"
Remote server users:
1. bitnami
2. take02
3. take03
4. take04
But the local host has only the ubuntu user.
I would like to copy the remote host's /home directory with Ansible, keeping the owner information.
This is my playbook:
---
- hosts: discovery_bitnami
  gather_facts: no
  become: yes
  tasks:
    - name: "Creates directory"
      local_action: >
        file path=/tmp/{{ inventory_hostname }}/home/ state=directory
    - name: "remote-to-local sync test"
      become_method: sudo
      synchronize:
        mode: pull
        src: /home/
        dest: /tmp/{{ inventory_hostname }}/home
        rsync_path: "sudo rsync"
Playbook result is:
PLAY [discovery_bitnami] *******************************************************
TASK [Creates directory] *******************************************************
ok: [discovery_bitnami -> localhost]
TASK [remote-to-local sync test] ***********************************************
fatal: [discovery_bitnami]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/ubuntu/.ssh/red_LightsailDefaultPrivateKey.pem -S none -o StrictHostKeyChecking=no -o Port=22' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"bitnami#54.236.34.197:/home/\" \"/tmp/discovery_bitnami/home\"", "failed": true, "msg": "rsync: failed to set times on \"/tmp/discovery_bitnami/home/.\": Operation not permitted (1)\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/bitnami\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/take02\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/take03\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/take04\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [generator=3.1.1]\n", "rc": 23}
to retry, use: --limit #/home/ubuntu/work/esc_discovery/ansible_test/ansible_sync_test.retry
PLAY RECAP *********************************************************************
discovery_bitnami : ok=1 changed=0 unreachable=0 failed=1
But the failed "cmd" works fine when run with sudo on the console:
$ sudo /usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/ubuntu/.ssh/red_PrivateKey.pem -S none -o StrictHostKeyChecking=no -o Port=22' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' bitnami#54.236.34.197:/home/ /tmp/discovery_bitnami/home
How do I run "task" with sudo?
P.S. If I remove become: yes, then everything is owned by "ubuntu".
I guess you are out of options for the synchronize module. It runs locally without sudo, and that behavior is hardcoded.
On the other hand, in the first task you create a directory under /tmp as root, so the permissions are limited to the root user. As a result you get the "Permission denied" errors.
Either:
refactor the code so that you don't need root permissions for the local destination (or add become: no to the "Creates directory" task); since you use the archive option, which implies permission preservation, this might not be an option;
or:
create your own version of the synchronize module and add sudo to the front of the cmd variable;
or:
use the command module with sudo /usr/bin/rsync as the call (see the sketch after this list).
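A minimal sketch of that last option, reusing the key and paths from the question; become: yes on a task delegated to localhost applies sudo on the controller, which is exactly what the manual console run did:
- name: "remote-to-local sync test via command"
  delegate_to: localhost
  become: yes
  command: >
    /usr/bin/rsync --delay-updates -F --compress --archive
    --rsh 'ssh -i /home/ubuntu/.ssh/red_LightsailDefaultPrivateKey.pem -S none -o StrictHostKeyChecking=no -o Port=22'
    --rsync-path="sudo rsync"
    bitnami@54.236.34.197:/home/ /tmp/{{ inventory_hostname }}/home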
Mind that the synchronize module is a non-standard one; there were changes in the past regarding the accounts used, and requests for further changes.
On top of everything, the current documentation for the module is pretty confusing. On one hand it states strongly:
The user and permissions for the synchronize dest are those of the remote_user on the destination host or the become_user if become=yes is active.
But in another place it only hints that the source and destination meaning is reversed when using pull mode:
In pull mode the remote host in context is the source.
So for the case from this question, the following passage is relevant, even though it incorrectly states the "src":
The user and permissions for the synchronize src are those of the user running the Ansible task on the local host (or the remote_user for a delegate_to host when delegate_to is used).