I am trying to create a pipeline in GitLab CI to run an Ansible playbook.
Here is my .gitlab-ci.yml file:
image: "my_ansible_image"

before_script:
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh

build:
  script:
    - ansible-playbook -i inventory -u root --private-key "$SSH_PRIVATE_KEY" playbook.yml -vv
The playbook is trying to execute a simple ping module:
---
- name: Ping test ## name of the first task
  ping: ## ping is a special case that does not require any attributes
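For reference, a complete minimal playbook around that task might look like this (the play name Set-UP is taken from the log output in the question; the hosts pattern is an assumption):

```yaml
---
- name: Set-UP
  hosts: all            # assumption: whatever group the inventory defines
  tasks:
    - name: Ping test   # name of the first task
      ping:             # ping needs no arguments
```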
For some reason the SSH connection always fails with the following error:
$ ansible-playbook -i inventory -u root --private-key /builds/my_name/rhe7_set-up-rpm/private_key playbook.yml -vv
[WARNING]: Ansible is being run in a world writable directory
(/builds/aramniko/rhe7_set-up-rpm), ignoring it as an ansible.cfg source. For
more information see
https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-
world-writable-dir
ansible-playbook 2.9.11
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.8/site-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.8.3 (default, Jul 26 2020, 02:36:32) [GCC 9.2.0]
No config file found; using defaults
PLAYBOOK: playbook.yml *********************************************************
1 plays in playbook.yml
PLAY [Set-UP] ******************************************************************
TASK [Gathering Facts] *********************************************************
task path: /builds/my_name/rhe7_set-up-rpm/playbook.yml:2
fatal: [XXXXXXXXX]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'XXXXXXXX' (ECDSA) to the list of known hosts.\r\nno such identity: /builds/my_name/rhe7_set-up-rpm/private_key: No such file or directory\r\nroot@XXXXXXXXXX: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
PLAY RECAP *********************************************************************
XXXXXXXXXXX : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
How can I solve this issue?
EDIT: The solution was to add the following to before_script:
- ssh-keyscan DNS/Remote_IP
- echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
- chmod 644 ~/.ssh/known_hosts
You should verify your host keys by running ssh-keyscan on the private server and copying the results to a GitLab variable called SSH_KNOWN_HOSTS. In your pipeline you need to copy that to the ~/.ssh/known_hosts file in the pipeline environment. Instructions are here:
https://docs.gitlab.com/ee/ci/ssh_keys/#verifying-the-ssh-host-keys.
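As a sketch, the before_script steps for host-key verification could look like this (SSH_KNOWN_HOSTS is assumed to be a GitLab CI variable holding the output of `ssh-keyscan <host>` captured once on a trusted machine):

```shell
# SSH_KNOWN_HOSTS is assumed to be set by GitLab CI; empty fallback so the
# sketch runs standalone.
SSH_KNOWN_HOSTS="${SSH_KNOWN_HOSTS:-}"

# Prepare ~/.ssh and seed known_hosts from the CI variable so the first
# SSH connection does not fail host key verification.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts
chmod 644 ~/.ssh/known_hosts
```

This verifies the host key against a value you trust, rather than disabling checking altogether.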
On a side note, you should consider creating a secure directory to store your config file in. See: https://docs.ansible.com/ansible/latest/reference_appendices/config.html
An easier option is to create an ansible.cfg file and add this setting along with others like remote_user, inventory, private_key_file, etc.:
#ansible.cfg
[defaults]
host_key_checking = False
There is no need to create an ansible.cfg file; just add the ANSIBLE_HOST_KEY_CHECKING variable globally in your .gitlab-ci.yml file:
variables:
  ANSIBLE_HOST_KEY_CHECKING: "False"
or in the build job:
build:
  variables:
    ANSIBLE_HOST_KEY_CHECKING: "False"
  script:
    - ansible-playbook -i inventory -u root --private-key "$SSH_PRIVATE_KEY" playbook.yml -vv
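Putting the pieces together, a minimal job could look like the sketch below (the image name and variable names are taken from the question; this is not a verified pipeline). Note that --private-key expects a file path, not the key contents, which is what caused the "no such identity" error in the question; with the key loaded into ssh-agent the flag can simply be dropped:

```yaml
image: "my_ansible_image"

variables:
  ANSIBLE_HOST_KEY_CHECKING: "False"   # skip known_hosts checks entirely

before_script:
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -

build:
  script:
    - ansible-playbook -i inventory -u root playbook.yml -vv
```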
Related
I have a simple playbook that tries to install packages.
My task is failing (see output).
I can ping the host, and I can manually run the command as the super user (tco).
my ansible.cfg
[defaults]
inventory = /Users/<myuser>/<automation>/ansible/inventory
remote_user = tco
packages
packages:
  - yum-utils
  - sshpass
playbook
---
- hosts: all
  vars_files:
    - vars/packages.yml
  tasks:
    - name: testing connection
      ping:
      remote_user: tco
    - name: Installing packages
      yum:
        name: "{{ packages }}"
        state: present
Running playbook:
ansible-playbook my-playbook.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo
Output:
ansible-playbook register_sys_rh.yml --limit master --become --ask-become-pass --become-user=tco --become-method=sudo
BECOME password:
PLAY [all] ******************************************************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************************************
ok: [xx.xxx.13.105]
TASK [testing connection] ***************************************************************************************************************************************************
ok: [xx.xxx.13.105]
TASK [Installing packages] **************************************************************************************************************************************************
fatal: [xx.xxx.13.105]: FAILED! => {"changed": false, "msg": "This command has to be run under the root user.", "results": []}
PLAY RECAP ******************************************************************************************************************************************************************
xx.xxx.13.105 : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
inventory:
ansible-inventory --list | jq '.master'
{
"hosts": [
"xx.xxx.13.105"
]
}
I have copied my id_rsa.pub to the host already, and I can now log in to the host without a password.
I can log in and do sudo su or run any other command that needs root privilege.
[tco@control-plane-0 ~]$ whoami
tco
[tco@control-plane-0 ~]$ hostname -I
xx.xxx.13.105 192.168.122.1
[tco@control-plane-0 ~]$ sudo su
[sudo] password for tco:
[root@control-plane-0 tco]#
I explicitly override the user and the become method through the Ansible CLI; I have no idea what I am doing wrong here.
Thanks in advance.
Fixed it.
But, I need to understand the Ansible concept better.
I changed ansible.cfg to this (changed become_user to root):
[defaults]
inventory = <my-inventory-path>
remote_user = tco
[privilege_escalation]
become=True
become_method=sudo
become_ask_pass=False
become_user=root
become_pass=<password>
And, running it like this:
ansible-playbook my-playbook.yml --limit master
This gives me an error:
FAILED! => {"msg": "Missing sudo password"}
So I run it like this:
ansible-playbook my-playbook.yml --limit master --ask-become-pass
and when a password is prompted I provide the tco password; I am not sure what the password for the root user is.
And this works.
Not sure why the cfg file password is not working, even though I provide the same password when prompted.
As per my understanding, become_user and become_pass are what Ansible uses to run privileged commands. But here I am saying remote_user: tco and become_user: root.
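One possible explanation (an assumption, not something confirmed in this thread): become_pass is not a recognized ansible.cfg key; the [privilege_escalation] section only supports become, become_method, become_user, and become_ask_pass. The become password is normally supplied as the ansible_become_pass variable instead, for example in group_vars (ideally vault-encrypted):

```yaml
# group_vars/all.yml -- illustrative path, not from the question
ansible_user: tco
ansible_become: true
ansible_become_user: root
ansible_become_pass: "{{ vault_tco_sudo_pass }}"  # hypothetical vaulted variable
```

This would also explain why the tco password works at the --ask-become-pass prompt: with a default sudoers setup, sudo asks for the password of the connecting user (tco), not of root.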
This error is because the playbook is unable to run on the remote nodes as a root user.
We can modify the ansible.cfg file in order to resolve this issue.
Steps:
Open ansible.cfg; the default command is: sudo vim /etc/ansible/ansible.cfg
Search for the regular expression /privilege_escalation (in Vim, / searches for the keyword that follows)
Press i (insert mode) and uncomment the following lines:
[privilege_escalation]
become=True
become_method=sudo
become_user=root
become_ask_pass=False
Note: If the .cfg lines are not modified, the corresponding values can be passed via the terminal during playbook execution (as specified in the question itself).
Run the playbook again with: ansible-playbook playbookname.yml
Note: if there is no SSH authentication pre-established between the Ansible controller and the nodes, appending -kK is required (this asks for the passwords for the remote nodes), i.e. ansible-playbook playbookname.yml -kK
This will prompt you for the password for the SSH connection, as well as the BECOME password (the password for the privileged user on the node).
I am executing some Ansible playbooks via GitLab CI, and what I see is that the playbook executes successfully through the pipeline but does not produce the output it is intended to.
When I retry the GitLab job, it produces the output I need.
This is one of the many playbooks I am executing through gitlab:
1_ca.yaml
---
- hosts: 127.0.0.1
  connection: local
  tasks:
    - name: Create ca-csr.json
      become: true
      copy:
        dest: ca-csr.json
        content: '{"CN":"Kubernetes","key":{"algo":"rsa","size":2048},"names":[{"C":"US","L":"Portland","O":"Kubernetes","OU":"CA","ST":"Oregon"}]}'
    - name: Create ca-config.json
      become: true
      copy:
        dest: ca-config.json
        content: '{"signing":{"default":{"expiry":"8760h"},"profiles":{"kubernetes":{"usages":["signing","key encipherment","server auth","client auth"],"expiry":"8760h"}}}}'
    - name: Create the ca.pem & ca-key.pem
      # become: true
      shell: |
        cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Basically, what this does is create some certs I need.
But on the first attempt, even though the pipeline passes, it does not generate these certs. When I restart that particular job in GitLab (running the same job a second time), it generates the certs.
Why is this happening?
This is what my .gitlab-ci.yaml looks like:
Create-Certificates:
  stage: ansible-play-books-create-certs
  retry:
    max: 2
    when:
      - always
  script:
    - echo "Executing ansible playbooks for generating certficates"
    - ansible-playbook ./ansible-playbooks/1_ca/1_ca.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/2_admin.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/3_kubelet.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/4_kube-controller.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/5_kube-proxy.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/6_kube-scheduler.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/7_kube-api-server.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/8_service-account.yaml
    - ansible-playbook ./ansible-playbooks/1_ca/9_distribute-client-server-cert.yaml
  # when: delayed
  # start_in: 1 minutes
  tags:
    - banuka-gcp-k8s-hard-way
PS: These Ansible playbooks execute on the Ansible host itself, not on remote servers. So I can log into the Ansible master server and check whether these files were created or not.
Running your playbook without the last shell task produces the following output:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [127.0.0.1] **************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [127.0.0.1]
TASK [Create ca-csr.json] *****************************************************************************************************************************************************************************************
[WARNING]: File './ca-csr.json' created with default permissions '600'. The previous default was '666'. Specify 'mode' to avoid this warning.
changed: [127.0.0.1]
TASK [Create ca-config.json] **************************************************************************************************************************************************************************************
[WARNING]: File './ca-config.json' created with default permissions '600'. The previous default was '666'. Specify 'mode' to avoid this warning.
changed: [127.0.0.1]
PLAY RECAP ********************************************************************************************************************************************************************************************************
127.0.0.1 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
and checking the existence:
$ ls ca* -al
-rw------- 1 root root 155 Aug 17 02:48 ca-config.json
-rw------- 1 root root 129 Aug 17 02:48 ca-csr.json
So although it's quite a dirty way of writing a playbook, it works.
Why is it dirty?
you're not using any inventory
you should use local_action and not connection: local for local tasks
you are misusing Ansible, which is multi-node configuration management, for a bash-script task
So in conclusion, there's nothing wrong with your Ansible playbook (except maybe the file permissions?), and if it does not run you should look more in the GitLab CI direction.
You would need to provide more details on the GitLab CI setup, but maybe the stage is not correct?
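A sketch of the cleaner shape those points suggest, targeting the implicit localhost and using an absolute dest with an explicit mode (the path is an illustrative assumption, not from the question):

```yaml
---
- hosts: localhost      # implicit localhost: local connection, no inventory needed
  gather_facts: false
  tasks:
    - name: Create ca-csr.json
      copy:
        dest: /builds/certs/ca-csr.json   # absolute path, so the result does not depend on the job's cwd
        mode: "0600"                      # explicit mode avoids the default-permissions warning
        content: '{"CN": "Kubernetes"}'   # shortened; use the full JSON from the question
```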
I am creating a pipeline which is automatically triggered when I push my code on gitlab.com.
The project is about the provisioning of a machine.
Here is my .gitlab-ci.yml file:
ansible_build:
  image: debian:10
  script:
    - apt-get update -q -y
    - apt-get install -y ansible git openssh-server keychain
    - service ssh stop
    - service ssh start
    - cp files/<ad-hoc-created-key> key.pem && chmod 600 key.pem
    - eval `keychain --eval` > /dev/null 2>&1
    - ssh-add key.pem
    - ansible-galaxy install -r requirements.yml
    - ansible-playbook provision.yml --inventory hosts --limit local
When I push my code, the GitLab environment starts running all the commands, but then it exits with the following error:
$ ansible-playbook provision.yml --inventory hosts --limit local
PLAY [Provision step] **********************************************************
TASK [Gathering Facts] *********************************************************
fatal: [127.0.0.1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Host key verification failed.", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit #/builds/team/ansible/provision.retry
PLAY RECAP *********************************************************************
127.0.0.1 : ok=0 changed=0 unreachable=1 failed=0
On my local PC, I solved this using the ssh-copy-id <path-to-the-key> <localhost> command, but I don't know how to solve it for GitLab CI, given that it is not an environment I can control.
I also tried replacing the 127.0.0.1 IP address with localhost:
ansible-playbook provision.yml --inventory hosts --limit localhost
Then it fails:
ansible-playbook provision.yml --inventory hosts --limit localhost
[WARNING] Ansible is being run in a world writable directory (/builds/teamiguana/minerva-ansible), ignoring it as an ansible.cfg source. For more information see https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir
[WARNING]: Found both group and host with same name: localhost
PLAY [Provision step] **********************************************************
TASK [Gathering Facts] *********************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.\r\nroot@localhost: Permission denied (publickey,password).", "unreachable": true}
NO MORE HOSTS LEFT *************************************************************
to retry, use: --limit #/builds/teamiguana/minerva-ansible/provision.retry
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0
I don't have experience setting up something similar, but my first thought would be to check:
What system user is GitLab trying to SSH as?
What system user has the corresponding public keys on the remote hosts?
You can override which user Ansible connects with either in the playbooks, or via --user <user> command-line flag, see https://docs.ansible.com/ansible/latest/cli/ansible-playbook.html#cmdoption-ansible-playbook-u.
Though maybe I'm misunderstanding, because I just noticed that you've set --limit local in your command?
You may try adding the environment variable ANSIBLE_TRANSPORT with the value "local" to your ansible-playbook command, like this:
ANSIBLE_TRANSPORT=local ansible-playbook provision.yml --inventory hosts --limit local
I am trying to run Ansible playbooks against remote hosts from within Concourse; however, I cannot get it to work. Below are my steps.
Concourse YAML file:
---
resource_types:
  - name: ansible-playbook
    type: docker-image
    source:
      repository: troykinsella/concourse-ansible-playbook-resource
      tag: latest

resources:
  - name: ansible
    type: ansible-playbook
    source:
      debug: true
      user: cloud_user
      ssh_private_key: ((ssh-key))
      verbose: vvv
  - name: source-code
    type: git
    source:
      uri: ((git-repo))
      branch: master
      private_key: ((ssh-key))

jobs:
  - name: ansible-concourse
    plan:
      - get: source-code # git resource
      - put: ansible
        params:
          check: true
          diff: true
          become: true
          become_user: root
          inventory: inventory/hosts
          playbook: site.yml
          path: source-code
Hosts file:
[test]
localhost
Inside the container:
I intercepted the container, and I can reach any IP from inside it; however, I am not able to make an SSH login.
Ansible playbook:
---
- name: "Running Current Working Directory"
  hosts: test
  gather_facts: no
  tasks:
    - name: "Current Working Directory"
      shell: pwd
      register: value
    - debug:
        msg: "The Current Working Directory {{value.stdout_lines}}"
Output in Concourse:
ansible-playbook -i inventory/hosts --private-key /tmp/ansible-playbook-resource-ssh-private-key --user cloud_user -vvv site.yml
ansible-playbook 2.9.0
config file = /tmp/build/put/source-code/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible-playbook
python version = 3.6.8 (default, Oct 7 2019, 12:59:55) [GCC 8.3.0]
Using /tmp/build/put/source-code/ansible.cfg as config file
host_list declined parsing /tmp/build/put/source-code/inventory/hosts as it did not pass its verify_file() method
script declined parsing /tmp/build/put/source-code/inventory/hosts as it did not pass its verify_file() method
auto declined parsing /tmp/build/put/source-code/inventory/hosts as it did not pass its verify_file() method
Parsed /tmp/build/put/source-code/inventory/hosts inventory source with ini plugin
PLAYBOOK: site.yml *************************************************************
1 plays in site.yml
PLAY [Running Current Working Directory] ***************************************
META: ran handlers
TASK [Current Working Directory] ***********************************************
task path: /tmp/build/put/source-code/site.yml:7
Monday 18 November 2019 12:38:49 +0000 (0:00:00.084) 0:00:00.085 *******
<localhost> ESTABLISH SSH CONNECTION FOR USER: cloud_user
<localhost> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/tmp/ansible-playbook-resource-ssh-private-key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="cloud_user"' -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/dc52b3112c localhost '/bin/sh -c '"'"'echo ~cloud_user && sleep 0'"'"''
<localhost> (255, b'', b'')
fatal: [localhost]: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ",
"unreachable": true
}
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
Monday 18 November 2019 12:38:49 +0000 (0:00:00.029) 0:00:00.114 *******
===============================================================================
Current Working Directory ----------------------------------------------- 0.03s
/tmp/build/put/source-code/site.yml:7 -----------------------------------------
localhost is normally accessed through the local connection plugin (unless you are trying to do something really special and have configured access through SSH, which does not seem to be the case judging from your error message above).
If you don't declare it in your inventory, localhost is implicit, uses the local connection, and is not matched by the all group.
However, if you declare localhost explicitly in your inventory, the default connection plugin becomes ssh and the all group matches this host too. You have to set the connection back to local yourself in that case.
You have two options to make your current test work:
Delete your inventory (or use one that does not explicitly declare localhost) and modify your playbook to target localhost directly => hosts: localhost
Keep your playbook as is and modify your inventory
[test]
localhost ansible_connection=local
SSH Authentication problem
Command used: ansible-playbook playbook.yml -i hosts --ask-vault-pass
I tried changing host_key_checking = False in ansible.cfg.
Command used: ansible-playbook playbook.yml -i hosts --ask-vault-pass
=====================================================================
playbook.yml
---
- hosts: all
  gather_facts: false
  roles:
    - backup
===============================================================
Main.yml
C:\Ansible\network\backup\roles\backup\tasks\main.yml
- name: Gather facts (ops)
  vyos_facts:
    gather_subset: all
- name: execute Vyos run to initiate backup
  vyos_command:
    commands:
      - sh configuration commands | no-more
  register: v_backup
- name: local_action
  local_action:
    module: copy
    dest: "C:/Users/Desktop/KVM/Scripting/Ansible/network/backup/RESULTS/Backup.out"
    content: "{{ v_backup.stdout[0] }}"
===============================================================
Ansible.cfg
[defaults]
host_key_checking = False
===============================================================
Hosts
[all]
10.X.X.X
[all:vars]
ansible_user=
ansible_ssh_pass=
ansible_connection=network_cli
ansible_network_os=vyos
We need the backup of the Vyatta devices but keep getting the following error:
Vault password:
PLAY [all] *************************************************************************************************************
TASK [backup : Gather facts (ops)] *************************************************************************************
fatal: [10.X.X.X]: FAILED! => {"msg": " [WARNING] Ansible is being run
in a world writable directory
(/mnt/c/Users/AnirudhSomanchi/Desktop/KVM/Scripting/Ansible/network/backup),
ignoring it as an ansible.cfg source. For more information see
https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir\n{\"socket_path\":
\"/home/SaiAnirudh/.ansible/pc/87fd82198c\", \"exception\":
\"Traceback (most recent call last):\n File
\\"/usr/bin/ansible-connection\\", line 104, in start\n
self.connection._connect()\n File
\\"/usr/lib/python2.7/dist-packages/ansible/plugins/connection/network_cli.py\\",
line 327, in _connect\n ssh = self.paramiko_conn._connect()\n
File
\\"/usr/lib/python2.7/dist-packages/ansible/plugins/connection/paramiko_ssh.py\\",
line 245, in _connect\n self.ssh = SSH_CONNECTION_CACHE[cache_key]
= self._connect_uncached()\n File \\"/usr/lib/python2.7/dist-packages/ansible/plugins/connection/paramiko_ssh.py\\",
line 368, in _connect_uncached\n raise
AnsibleConnectionFailure(msg)\nAnsibleConnectionFailure: paramiko:
The authenticity of host '10.X.X.X' can't be established.\nThe
ssh-rsa key fingerprint is 4b595d868720e28de57bef23c90546ad.\n\",
\"messages\": [[\"vvvv\", \"local domain socket does not exist,
starting it\"], [\"vvvv\", \"control socket path is
/home/SaiAnirudh/.ansible/pc/87fd82198c\"], [\"vvvv\", \"loaded
cliconf plugin for network_os vyos\"], [\"log\", \"network_os is set
to vyos\"], [\"vvvv\", \"\"]], \"error\": \"paramiko: The authenticity
of host '10.X.X.X' can't be established.\nThe ssh-rsa key fingerprint
is 4b595d868720e28de57bef23c90546ad.\"}"}
PLAY RECAP *************************************************************************************************************
10.X.X.X : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
[WARNING] Ansible is in a world writable directory, ignoring it as an ansible.cfg source.
I also got this warning, but it will not cause your command execution in Ansible to fail. It is just a warning that means you have given full permissions (i.e. drwxrwxrwx, or 777) to your directory, which is not secure. You can remove the warning by changing the permissions with chmod (e.g. sudo chmod 755).
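A quick sketch of checking and fixing the offending bit (the directory name is illustrative):

```shell
# Ansible ignores ansible.cfg in a directory that is world-writable,
# i.e. when the "other write" bit (o+w) is set.
mkdir -p demo_dir
chmod 777 demo_dir           # world-writable: would trigger the warning
chmod o-w demo_dir           # drop the other-write bit -> 775
stat -c '%a' demo_dir        # prints 775 on GNU stat
```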
For me this was not working.
I needed to unmount it and then mount it again and use another command:
sudo umount /mnt/c
sudo mount -t drvfs C: /mnt/c -o metadata
Go to the actual folder and:
chmod o-w .
More info: Link1, Link2.
If you don't want to change permissions, all you need to do is append your config file's contents to the default Ansible config file. On Linux it is /etc/ansible/ansible.cfg.