OSError: [Errno 1] Operation not permitted in Ansible

From my CentOS machine (the Ansible controller host) I am trying to run the playbook below.
Ansible version:
$ ansible --version
ansible 2.9.21
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Aug 24 2020, 17:57:11) [GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
---
- hosts: pro-server
  become: yes
  remote_user: root
  tasks:
    - name: Set authorized key taken from file
      ansible.posix.authorized_key:
        user: root
        state: present
        key: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
It fails with the error below.
$ ansible-playbook -i hosts add-ssh-key.yml
PLAY [pro-server] ****************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************
ok: [50.51.52.24]
TASK [Set authorized key taken from file] ********************************************************************************************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: OSError: [Errno 1] Operation not permitted
fatal: [50.51.52.24]: FAILED! => {"changed": false, "msg": "Unable to make /tmp/tmp73HusP into to /root/.ssh/authorized_keys, failed final rename from /root/.ssh/.ansible_tmpy4MPxlauthorized_keys: [Errno 1] Operation not permitted"}
PLAY RECAP ****************************************************************************************************************************************************
50.51.52.24 : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I added the following to /etc/ansible/ansible.cfg, but the same problem persists.
allow_world_readable_tmpfiles = True
Any pointer toward solving this problem would be helpful. Thank you.

As discussed in the comments, the problem is an 'a' attribute set on the authorized_keys file.
From man chattr:
A file with the 'a' attribute set can only be opened in append mode for writing. Only the superuser or a process possessing the CAP_LINUX_IMMUTABLE capability can set or clear this attribute.
This can be fixed using the file module:
- name: make sure the 'a' attribute is removed from the authorized_keys file
  file:
    path: '/root/.ssh/authorized_keys'
    attributes: '-a'
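Putting both steps into one play, a minimal sketch (host pattern and key path taken from the question; the ordering is the point, since the append-only attribute must be cleared before authorized_key can replace the file):

```yaml
---
- hosts: pro-server
  become: yes
  tasks:
    # Clear the append-only flag first; with 'a' set, the module's final
    # rename over authorized_keys fails with "Operation not permitted".
    - name: make sure the 'a' attribute is removed from authorized_keys
      file:
        path: /root/.ssh/authorized_keys
        attributes: '-a'

    - name: Set authorized key taken from file
      ansible.posix.authorized_key:
        user: root
        state: present
        key: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
```

You can confirm the attribute is gone on the target with lsattr /root/.ssh/authorized_keys.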

Related

Ansible host file how to provide # in ansible_ssh_pass

I am new to Ansible. I am facing a problem with my hosts file; the error output is below.
My question is: how do I escape the # in ansible_ssh_pass?
I tried ansible_ssh_pass="airtel\#121" and also ansible_ssh_pass=airtel\#121 without the double quotes; both ways throw the error.
ansible version: ansible-playbook 2.9.6
The hosts file entry is as follows:
[devices]
10.10.10.10 ansible_ssh_user="abcd" ansible_ssh_pass="airtel#121"
The playbook is as follows:
- name: Cisco show version example
  hosts: devices
  gather_facts: false
  vars:
    ansible_connection: ansible.netcommon.network_cli
    ansible_network_os: cisco.ios.ios
    ansible_become: yes
    ansible_become_method: enable
  tasks:
    - name: run show version on the routers
      ios_command:
        commands:
          - show version
      register: output
    - name: print output
      debug:
        var: output.stdout_lines
I get the error below.
xxxx#xxxx:/etc/ansible/playbooks# ansible-playbook check_connectivity_temp.yml -vvvv
ansible-playbook 2.9.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.8.10 (default, Mar 15 2022, 12:22:08) [GCC 9.4.0]
Using /etc/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3/dist-packages/ansible/plugins/callback/default.py
PLAYBOOK: check_connectivity_temp.yml ***************************************************************************************
Positional arguments: check_connectivity_temp.yml
verbosity: 4
connection: smart
timeout: 10
become_method: sudo
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in check_connectivity_temp.yml
PLAY [Cisco show version example] *******************************************************************************************
META: ran handlers
TASK [run show version on the routers] **************************************************************************************
task path: /etc/ansible/playbooks/check_connectivity_temp.yml:14
<10.10.10.10> attempting to start connection
<10.10.10.10> using connection plugin ansible.netcommon.network_cli
<10.10.10.10> local domain socket does not exist, starting it
<10.10.10.10> control socket path is /root/.ansible/pc/aaec916454
<10.10.10.10> local domain socket listeners started successfully
<10.10.10.10> loaded cliconf plugin ansible_collections.cisco.ios.plugins.cliconf.ios from path /root/.ansible/collections/ansible_collections/cisco/ios/plugins/cliconf/ios.py for network_os cisco.ios.ios
<10.10.10.10> ssh type is set to auto
<10.10.10.10> autodetecting ssh_type
[WARNING]: ansible-pylibssh not installed, falling back to paramiko
<10.10.10.10> ssh type is now set to paramiko
<10.10.10.10>
<10.10.10.10> local domain socket path is /root/.ansible/pc/aaec916454
fatal: [10.10.10.10]: FAILED! => {
"changed": false,
"msg": "Failed to authenticate: Authentication failed."
}
PLAY RECAP ******************************************************************************************************************
10.10.10.10 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Put the hash character '#' into an expression {{ '#' }}. For example,
- debug:
    var: ssh_pass
  vars:
    ssh_pass: "airtel{{ '#' }}121"
gives
ssh_pass: airtel#121
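An alternative I would suggest (not part of the answer above): move the credentials out of the INI inventory into a host_vars file. In YAML, a # inside a quoted scalar is never a comment, so no escaping is needed:

```yaml
# hypothetical host_vars/10.10.10.10.yml
ansible_ssh_user: abcd
ansible_ssh_pass: "airtel#121"
```

This also keeps the password out of the shared inventory file, which is usually where you want credentials anyway.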

Playbook reports different locale depending on control host

Hello. In my playbook I try to determine the locale of the target host using the following task:
- name: get locale info
  command: printenv LANG
  register: my_loc
The strange thing is that the result changes depending on which control host I execute the playbook from.
If I run it from my CentOS 8 box I get en_US.UTF-8; if I run it from a CentOS 7 machine I get en_US.utf8.
These values are the same as I would get in a shell on the control host. But I would expect the values to be computed on the target machine, and thus to be the same regardless of which control host I execute the playbook from.
On the CentOS7 machine
[me]$ ansible --version
ansible 2.9.25
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/me/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Nov 16 2020, 22:23:17) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
[me]$ printenv LANG
en_US.utf8
On the CentOS8 machine
[me]$ ansible --version
ansible 2.9.25
config file = /home/me/ansible-toolbox/ansible.cfg
configured module search path = ['/home/me/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Sep 21 2021, 20:17:36) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
[me]$ printenv LANG
en_US.UTF-8
The playbook looks like this:
# playbook for experiments
---
- name: setup servers
  hosts: all
  tasks:
    # make sure the system locale is set to American English
    - name: setting up as centos server
      command: printenv LANG
      register: my_loc
    - name: show locale
      debug:
        var: my_loc.stdout
And I run it with the command
ansible-playbook -i 172.19.1.5 area51.yml
The output is in one case
PLAY [setup servers] *****************************************************************************
TASK [Gathering Facts] ****************************************************************************
ok: [172.19.1.5]
TASK [setting up as centos server] ***************************************************************
changed: [172.19.1.5]
TASK [show locale] ********************************************************************************
ok: [172.19.1.5] => {
"my_loc.stdout": "en_US.UTF-8"
}
PLAY RECAP ****************************************************************************************
172.19.1.5 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
And in the other case
PLAY [setup servers] ***************************************************************************************
TASK [Gathering Facts] **************************************************************************************
ok: [172.19.1.5]
TASK [setting up as centos server] *************************************************************************
changed: [172.19.1.5]
TASK [show locale] ******************************************************************************************
ok: [172.19.1.5] => {
"my_loc.stdout": "en_US.utf8"
}
PLAY RECAP **************************************************************************************************
172.19.1.5 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I have tried the same thing by running the command ssh user@172.19.1.9 printenv LANG and got the same result, so I think it is not a problem with Ansible.
The problem here seems to be that ssh transfers the locale settings from the control host to the remote host; on CentOS this has the effect that the remote host mirrors the locale of the control host instead of using its own default locale.
See
https://github.com/ansible/ansible/issues/10698
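If you want the task result to be independent of what ssh forwards, one option (my suggestion, not from the linked issue) is to pin the variable on the task itself via the environment keyword:

```yaml
- name: get locale info with LANG pinned on the target
  command: printenv LANG
  environment:
    LANG: en_US.UTF-8   # assumed value; use whichever locale you want enforced
  register: my_loc
```

Alternatively, removing LANG and LC_* from the AcceptEnv line in the target's /etc/ssh/sshd_config stops sshd from accepting the forwarded variables in the first place.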

Why ansible module win_template didn't see my Windows folder?

I run Ansible from an Ubuntu server to configure a Windows server.
I have a problem with the win_template module. This is my win_template task:
---
- name: copy hosts
  win_template:
    src: hosts
    dest: C:\windows\system32\drivers\etc\hosts
- name: copy bootstrap.yaml
  win_template:
    src: bootstrap.yaml.j2
    dest: 'C:\Service\bootstrap.yml2'
  notify: restart
When I run my role, I get the following error:
TASK [win : copy hosts] ****************************************************************************************************************************************************************************
Monday 09 August 2021 13:06:29 +0000 (0:00:00.061) 0:00:04.554 *********
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: OSError: [Errno 2] No such file or directory: 'C:\\windows\\system32\\drivers\\etc\\hosts'
fatal: [win-01]: FAILED! => {"changed": false, "msg": "OSError: [Errno 2] No such file or directory: 'C:\\\\windows\\\\system32\\\\drivers\\\\etc\\\\hosts'"}
[WARNING]: Failure using method (v2_runner_on_failed) in callback plugin (<ansible_collections.community.general.plugins.callback.mail.CallbackModule object at 0x7fa1932605d0>): [Errno 99] Cannot
assign requested address
PLAY RECAP ********************************************************************************************************************************************************************************************
win-01 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
But I don't understand why, since this path exists on the server:
C:\windows\system32\drivers\etc\hosts
Why doesn't Ansible see it?
If I run other modules on the Windows server, they work as expected. For example, the following task runs fine:
- name: Start service
  win_service:
    name: my-service
    state: started
But I can't manage to copy the hosts file into C:\windows\system32\drivers\etc.

Ansible is not picking the group_vars and host_vars

I am new to Ansible and couldn't figure out why the playbook is not picking up the group_vars/ and host_vars I have defined. According to the document:
You can also add group_vars/ and host_vars/ directories to your playbook directory. The ansible-playbook command looks for these directories in the current working directory by default.
My playbook, inventory, and file structure are quite simple; they should match the defaults.
Inventory file:
dummy
[spider]
s0ra
s0ra_slave
The playbook:
- name: base mix release upgrade Prod.
  hosts: spider
  gather_facts: false
  # vars_files:
  #   - vars/s0ra_sup.yaml
  tasks:
    - name: check release bin
      stat:
        path: "{{ sh_lastrel }}"
      register: rel_bin
When I try to run the playbook with ansible-playbook -i inventory.ini mix_upgrade.yaml, it complains:
PLAY [base mix release upgrade Prod.] **********************************************************************************
TASK [check release bin] ***********************************************************************************************
fatal: [s0ra]: FAILED! => {"msg": "The task includes an option with an undefined variable.
The error was: 'sh_lastrel' is undefined\n\n
The error appears to be in 'xxx/ansible/mix_upgrade.yaml': line 19, column 7, but may\n
be elsewhere in the file depending on the exact syntax problem.\n\n
The offending line appears to be:\n\n\n
- name: check release bin\n ^ here\n"}
PLAY RECAP *************************************************************************************************************
s0ra : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
But sh_lastrel is actually defined in spider.yaml. I don't know why it is not loaded. I tried turning on -v mode, but it does not seem to produce more debugging info. Any hint about the cause, or how to debug further, is greatly appreciated.
My ansible version is as below:
$ ansible --version
ansible 2.9.9
config file = None
configured module search path = ['/Users/kenchen/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/kenchen/.pyenv/versions/3.8.2/lib/python3.8/site-packages/ansible
executable location = /Users/kenchen/.pyenv/versions/3.8.2/bin/ansible
python version = 3.8.2 (default, May 18 2020, 00:02:00) [Clang 10.0.1 (clang-1001.0.46.4)]
Make sure group_vars/spider.yml is available. For example,
shell> cat group_vars/spider.yml
sh_lastrel: value of sh_lastrel defined in group_vars/spider.yml

shell> cat inventory.ini
dummy

[spider]
s0ra
s0ra_slave

shell> cat mix_upgrade.yaml
- hosts: spider
  gather_facts: false
  tasks:
    - debug:
        var: sh_lastrel

shell> ansible-playbook -i inventory.ini mix_upgrade.yaml
ok: [s0ra] =>
  sh_lastrel: value of sh_lastrel defined in group_vars/spider.yml
ok: [s0ra_slave] =>
  sh_lastrel: value of sh_lastrel defined in group_vars/spider.yml

Executing python script on remote server using ansible Error

I am logged in as root@x.x.x.12 with Ansible 2.8.3 on RHEL 8.
I wish to copy a few files to root@x.x.x.13 (RHEL 8) and then execute a Python script.
I am able to copy the files successfully using Ansible. I have even copied the keys, so ssh now works without a password.
But during execution of the script:
'fatal: [web_node1]: FAILED! => {"changed": false, "msg": "Could not find or access '/root/ansible_copy/write_file.py' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}'
Please note that I am a novice with Ansible.
I guess there is some permission issue.
Please help me out if possible.
Thanks in anticipation.
**yaml_file**
- name: Copy_all_ansible_files_to_servers
  hosts: copy_Servers
  become: true
  become_user: root
  tasks:
    - name: copy_to_all
      copy:
        src: /home/testuser/ansible_project/{{ item }}
        dest: /root/ansible_copy/{{ item }}
        owner: root
        group: root
        mode: u=rxw,g=rxw,o=rxw
      with_items:
        - write_file.py
        - sink.txt
        - ansible_playbook_task.yaml
        - copy_codes_2.yaml
      notify:
        - Run_date_command
    - name: Run_python_script
      script: /root/ansible_copy/write_file.py > /root/ansible_copy/sink.txt
      args:
        #chdir: '{{ role_path }}'
        executable: /usr/bin/python3.6
**inventory_file**
web_node1 ansible_host=x.x.x.13

[control]
thisPc ansible_connection=local

#Groups
[copy_Servers]
web_node1
Command: ansible-playbook copy_codes_2.yaml -i inventory.dat =>
PLAY [Copy_all_ansible_files_to_servers] *******************************************************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************************************************************
ok: [web_node1]
TASK [copy_to_all] *****************************************************************************************************************************************************************************************
ok: [web_node1] => (item=write_file.py)
ok: [web_node1] => (item=sink.txt)
ok: [web_node1] => (item=ansible_playbook_task.yaml)
ok: [web_node1] => (item=copy_codes_2.yaml)
TASK [Run_python_script] ***********************************************************************************************************************************************************************************
fatal: [web_node1]: FAILED! => {"changed": false, "msg": "Could not find or access '/root/ansible_copy/write_file.py' on the Ansible Controller.\nIf you are using a module and expect the file to exist on the remote, see the remote_src option"}
PLAY RECAP *************************************************************************************************************************************************************************************************
web_node1 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The script module actually copies the file from the controller to the remote server before running it. So when it complains about not being able to find or access the script, it is because it is trying to copy /root/ansible_copy/write_file.py from the Ansible controller, where that path does not exist.
If you don't need the script to remain on the server after you execute it, you can remove write_file.py from the copy task and point the script task's source at /home/testuser/ansible_project/write_file.py instead.
Alternatively, instead of using the script module, you can run the already-transferred copy with the shell module (shell, not command, because the > redirection needs a shell to interpret it):
- name: run write_file.py after it has already been transferred
  shell: python3.6 /root/ansible_copy/write_file.py > /root/ansible_copy/sink.txt
(Note: you may need to provide the full path to your python3.6 executable)
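For completeness, the first suggestion as a task (a sketch using the paths from the question): drop write_file.py from the copy loop and let script push it from the controller on each run. The redirection is evaluated by the remote shell, so sink.txt is written on the target:

```yaml
- name: Run_python_script from the controller-side copy
  script: /home/testuser/ansible_project/write_file.py > /root/ansible_copy/sink.txt
  args:
    executable: /usr/bin/python3.6
```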
