Why are my nfs4 mounts not working with ansible? - ansible

With Ansible, I cannot mount nfs4.
I have nfs4 exports configured on the server, and I can mount both nfs4 and nfs from a bash shell.
I can also get nfs to work in Ansible, just not nfs4.
So I'm wondering how I can mount a share like /pool1/volume1 on the server to the same path on the client:
/pool1/volume1
I tried switching to standard nfs, which worked; I can mount nfs4 from a bash shell, but not with Ansible.
This works:
- name: mount softnas NFS volume
  become: yes
  mount:
    fstype: nfs
    path: "/pool1/volume1"
    opts: rsize=8192,wsize=8192,timeo=14,intr,_netdev
    src: "10.0.11.11:/pool1/volume1"
    state: mounted
But this doesn't:
- name: mount softnas NFS volume
  become: yes
  mount:
    fstype: nfs4
    path: "/pool1/volume1"
    opts: rsize=8192,wsize=8192,timeo=14,intr,_netdev
    src: "10.0.11.11:/pool1/volume1"
    state: mounted
And if I use this command from a shell, it mounts the paths into /test just fine:
sudo mount -t nfs4 10.0.11.11:/ /test
Although it's not quite right, because I'd like /pool1/volume1 and /pool2/volume2 not to appear under /test.
My exports file on the server is this:
/ *(ro,fsid=0)
# These mounts are managed in ansible playbook softnas-ebs-disk-update-exports.yaml
# BEGIN ANSIBLE MANAGED BLOCK /pool1/volume1/
/pool1/volume1/ *(async,insecure,no_subtree_check,no_root_squash,rw,nohide)
# END ANSIBLE MANAGED BLOCK /pool1/volume1/
# BEGIN ANSIBLE MANAGED BLOCK /pool2/volume2/
/pool2/volume2/ *(async,insecure,no_subtree_check,no_root_squash,rw,nohide)
# END ANSIBLE MANAGED BLOCK /pool2/volume2/
When I try to switch to nfs4, I get this error from Ansible:
Error mounting /pool1/volume1/: mount.nfs4: mounting 10.0.11.11:/pool1/volume1/ failed, reason given by server: No such file or directory

I came across this same issue. Using Ansible 2.11.5, the above answer didn't work for me: Ansible complained about the fstype "nfs4", and quoting was needed because of the equals signs in "opts". Here is what I used:
- name: mount AWS EFS Volume
  mount:
    fstype: nfs
    path: "/mnt/efs"
    opts: "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
    src: "{{ efs_share_address[env] }}:/"
    state: mounted
From the role's vars file:
efs_share_address:
  qa: 10.2.2.2
  stage: 10.3.3.3
  production: 10.4.4.4

Check if the mount point exists:
mkdir /pool1/volume1  # if it does not exist; or create an Ansible task to create the directory
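If you'd rather not shell out, the directory can be created as a task with the file module (a minimal sketch; the path is taken from the question):

```yaml
- name: Ensure the mount point exists
  become: yes
  file:
    path: /pool1/volume1
    state: directory
```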
Updated to change the src, since the share is /:
- name: mount softnas NFS volume
  become: yes
  mount:
    fstype: nfs4
    path: "/pool1/volume1"
    opts: rsize=8192,wsize=8192,timeo=14,intr,_netdev
    src: "10.0.11.11:/"
    state: mounted
If you don't want to mount /, then export /pool1/volume1 on the server.

I'm not sure exactly how I fixed it, but I decided to opt for the recommended workflow of binding my exports below the /export folder, using
/export *(ro,fsid=0)
as the root share, and then these:
/export/pool1/volume1 *(async,insecure,no_subtree_check,no_root_squash,rw,nohide)
/export/pool2/volume2 *(async,insecure,no_subtree_check,no_root_squash,rw,nohide)
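For anyone following the same route: the pools have to be bind-mounted under /export for this layout to work, and the client-side task then addresses paths relative to the fsid=0 pseudo-root. A sketch assuming the exports above (the fstab bind entries are my assumption about how the binds are made persistent):

```yaml
# Server-side /etc/fstab: bind the real pools under the NFSv4 pseudo-root
#   /pool1/volume1  /export/pool1/volume1  none  bind  0  0
#   /pool2/volume2  /export/pool2/volume2  none  bind  0  0

# Client side: with fsid=0 on /export, nfs4 src paths are relative to it,
# so /export is omitted from src.
- name: mount softnas NFS volume over NFSv4
  become: yes
  mount:
    fstype: nfs4
    path: "/pool1/volume1"
    opts: rsize=8192,wsize=8192,timeo=14,intr,_netdev
    src: "10.0.11.11:/pool1/volume1"
    state: mounted
```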

Related

How to copy file from Network path to controller machine in ansible

Let's assume I have a text file on some network path, e.g.:
\\cr-ampd-a01.abcd.loc\BulkFolder\textfile.txt
How do I copy that file to my controller machine with Ansible?
Note: I can access it via Win+R with that path; it then prompts for credentials.
This is the sample code with win_copy:
- name: copy from network to controller
  win_copy:
    src: \\cr-ampd-a01.abcd.loc\BulkFolder\textfile.txt
    dest: C:\Temp\OTW\Mastapps
    remote_src: yes
  become: yes
  vars:
    ansible_become_user: XXX\Uxxxxxx
    ansible_become_pass: PasswordHere
The error with this code is: Cannot copy src file as it does not exist.
Another attempt with net_get:
- name: copy from network to controller
  net_get:
    src: \\cr-ampd-a01.abcd.loc\BulkFolder\textfile.txt
    dest: C:\Temp\OTW\Mastapps
    ansible_network_os: "eos"
    remote_src: True
  become: True
  vars:
    ansible_become_user: XXX\Uxxxxxx
    ansible_become_pass: PasswordHere
Error here: ansible_network_os must be specified on this host
How can I achieve this?
You are trying to copy a file from a CIFS resource. I'm relatively sure this won't work with copy or win_copy, because they are used to copy files on that (physical) machine. Since CIFS is a protocol like HTTP or FTP, you need a client to get the file from the remote machine. As you also wrote, you need to identify yourself with credentials, and the win_copy task doesn't have that option.
One option could be to mount the network share to, for example, N:\ and then use win_copy with N:\source.txt; in that case the N: drive is, like C:\ or D:\, a known path on the machine. Have a look at win_share: https://docs.ansible.com/ansible/latest/modules/win_share_module.html
Another option is to call a local CIFS client via command, e.g. command: copy //server/path/file c:/file, or robocopy, but that isn't easy to make idempotent. See Copy files to network computers on windows command line.
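The drive-mapping option could be sketched like this (share path and credentials are the ones from the question; the drive letter N: is an assumption; net use maps the share with credentials so win_copy can then read from a known path on the machine):

```yaml
- name: Map the CIFS share to a drive letter with credentials
  win_command: net use N: \\cr-ampd-a01.abcd.loc\BulkFolder /user:XXX\Uxxxxxx PasswordHere

- name: Copy the file from the mapped drive
  win_copy:
    src: N:\textfile.txt
    dest: C:\Temp\OTW\Mastapps\
    remote_src: yes

- name: Remove the drive mapping again
  win_command: net use N: /delete
```

Note that this copies the file onto the Windows target the tasks run on, not the controller.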
net_get is useful for copying files from "network devices"; please have a look at https://docs.ansible.com/ansible/latest/network/user_guide/platform_index.html#platform-options for a list of supported platforms. Judging by that list, copying from a CIFS share is not supported by net_get.

Download a file from a windows share using Ansible

I have a file in windows share. I need to download this file using ansible.
Playbook:
- name: Copy installer
  get_url:
    url: file:'\\Winserver\Share_Binary\Installer-v6.tar.gz'
    dest: /tmp
Error Output:
"mode": "01777",
"msg": "Request failed: <urlopen error [Errno 2] No such file or directory: \"'\\\\\\\\Winserver\\\\Share_Binary\\\\Installer-v6.tar.gz'\">",
"owner": "root",
"size": 4096,
"state": "directory",
"uid": 0,
"url": "file:'\\\\Winserver\\Share_Binary\\Installer-v6.tar.gz'"
The file exists. When I paste \\Winserver\Share_Binary\Installer-v6.tar.gz into the file explorer on my system, I can see the file. Please advise.
This can be accomplished with a credential file and 3 Ansible tasks.
First, create a credential file (e.g. /home/youruser/smbshare.cred) containing the username and password of a service account with permissions to mount the CIFS share:
username=your_service_account_name
password=your_service_account_password
Make sure your remote Ansible user owns the file (or root, if you're using become) and that it has 0400 permissions. You might also consider generating this credential file with Ansible and/or encrypting it with Ansible Vault.
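Generating that credential file from Ansible could look like this (a sketch; the variable names smb_user/smb_password are assumptions and would ideally come from Ansible Vault):

```yaml
- name: Deploy CIFS credential file
  become: yes
  copy:
    content: |
      username={{ smb_user }}
      password={{ smb_password }}
    dest: /home/youruser/smbshare.cred
    owner: youruser
    mode: 0400
```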
Task 1: Use the mount module to mount the SMB share on the target.
- name: Mount SMB share
  mount:
    path: /mnt/smbshare
    src: //Winserver/Share_Binary
    fstype: cifs
    opts: 'credentials=/home/youruser/smbshare.cred'
    state: mounted
Task 2: Use the copy module to copy the file wherever you actually want it.
- name: Copy installer tarball to target
  copy:
    remote_src: yes   # the source lives on the target, where the share is mounted
    src: /mnt/smbshare/Installer-v6.tar.gz
    dest: /some/local/path/Installer-v6.tar.gz
    owner: foo
    group: foo
    mode: 0640
Task 3: Use the mount module to unmount the SMB share.
- name: Unmount SMB share
  mount:
    path: /mnt/smbshare
    state: unmounted
Note: Depending on your environment, you might need to add more mount options (refer to the mount.cifs(8) man page) to the opts: parameter in task 1.
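For instance, pinning the SMB dialect and setting a domain would look like this (the values vers=3.0 and domain=EXAMPLE are placeholders, not from the original answer):

```yaml
opts: 'credentials=/home/youruser/smbshare.cred,vers=3.0,domain=EXAMPLE,uid=foo,gid=foo'
```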
I'm not sure if get_url is capable of that. Please try this one:
- name: Get file from smb
  command: smbclient //Winserver/Share_Binary/ <pass> -U <user> -c "get Installer-v6.tar.gz"
  args:
    creates: /tmp/Installer-v6.tar.gz
Of course, you have to install smbclient first.
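Installing it can itself be a task; a sketch using the generic package module (the package name is smbclient on Debian-family systems but may differ elsewhere, e.g. samba-client on RHEL-family systems):

```yaml
- name: Ensure smbclient is installed
  become: yes
  package:
    name: smbclient
    state: present
```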

Mount smb share temporarily with Ansible using module instead of shell

I need to mount an SMB share to get access to large, shared installation files in Ansible. This works using the CLI:
- name: Mount share
  become: yes
  shell: "mount.cifs {{ smb_share.path }} {{ smb_share.mount_point }} -o user={{ smb_share.user }},password={{ smb_share.password }},mfsymlinks,exec"
However, this has two disadvantages:
It doesn't follow the best practice of using modules instead of shell commands where possible
There is no detection of whether the share is already mounted; I'd need to implement that myself, e.g. by grepping for the mountpoint in the output of mount
There is a mount module in Ansible. But since this share is just for installation and uses credentials, I don't want to have it permanently mounted. The boot parameter looks like what I need, but sadly not for Linux:
Determines if the filesystem should be mounted on boot.
Only applies to Solaris systems.
I still tried to set boot: no, but as described in the docs, it still creates an /etc/fstab entry with the plain-text password.
Is there any alternative to have a Windows share temporarily mounted on CentOS 7 with any Ansible module?
I am not aware of any temp-mount-specific module in Ansible.
But going by the docs, you can use the mount module in the following way:
- name: Mount network share
  mount:
    src: //path/to/windows/share
    path: /mnt
    fstype: cifs
    opts: 'username=example@domain,password=Password1!'
    state: mounted
  become: true

- name: Unmount network share
  mount:
    path: /mnt
    state: absent
  become: true
The first task (state=mounted) will create a record in /etc/fstab and mount the network share; the second task (state=absent) will unmount the share and remove the corresponding record from /etc/fstab. That is the best option that comes to my mind.
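To make sure the share (and the /etc/fstab entry holding the credentials) is removed even when a step in between fails, the two tasks can be wrapped in a block/always structure; a sketch, where the middle task is a placeholder for the actual installation step:

```yaml
- block:
    - name: Mount network share
      mount:
        src: //path/to/windows/share
        path: /mnt
        fstype: cifs
        opts: 'username=example,password=Password1!'
        state: mounted
    - name: Run installation from the share
      command: /mnt/run_installer.sh   # placeholder for the real install step
  always:
    - name: Unmount network share
      mount:
        path: /mnt
        state: absent
  become: true
```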

Issue with ansible copying file

I'm trying to provision a local Kubernetes cluster using Vagrant (v2.1.2) and VirtualBox (v5.2.20).
My Vagrantfile uses the ansible_local provisioner to run some Ansible playbooks that provision a K8s cluster using kubeadm.
This was all working perfectly a few months back, but it's not working anymore. I'm not sure why it stopped working, but the Ansible playbook for the master node fails when I try to copy the kube config to the vagrant user's home dir.
The ansible task that fails is as follows:
- name: Copy kubeconfig for vagrant user
  copy:
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
This is causing the following error: fatal: [master1]: FAILED! => {"msg": "an error occurred while trying to read the file '/etc/kubernetes/admin.conf': [Errno 13] Permission denied: '/etc/kubernetes/admin.conf'"}
The src file does exist. If I ssh into the VM after the failure, I can copy the file with sudo cp /etc/kubernetes/admin.conf /home/vagrant/, but the failure above causes the Vagrant provisioning to fail/halt.
FYI, I've tried a few combinations of things at the play and task levels, e.g. become: true, remote_user: root:
---
- hosts: all
  become: true
  tasks:
  ...
... but to no avail.
Permissions on admin.conf are as follows:
vagrant@master1:/etc/kubernetes$ ls -al admin.conf
-rw------- 1 root root 5453 Aug 5 14:07 admin.conf
Full master-playbook.yml can be found here.
How do I get ansible to copy the file?
Quoting from copy:
src: Local path to a file to copy to the remote server.
The play failed because the user running the ansible-playbook can't read the file on the controller (local path):
permission denied: '/etc/kubernetes/admin.conf'
Use remote_src: yes:
remote_src: If yes, it will go to the remote/target machine for the src.
- name: Copy kubeconfig for vagrant user
  copy:
    remote_src: yes
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
From the comment, this seems to be what you want to do:
"The src file does exist. If I ssh into the VM after the failure, I can copy the file with sudo cp /etc/kubernetes/admin.conf /home/vagrant/"
This should work if the remote_user at the VM is allowed to sudo su and escalation is properly set:
- hosts: all
  become: yes
  become_user: root
  become_method: sudo
It would not be enough to allow the remote_user sudo cp only. See Can't limit escalation to certain commands:
Privilege escalation permissions have to be general. Ansible does not always use a specific command to do something but runs modules (code) from a temporary file name which changes every time. If you have '/sbin/service' or '/bin/chmod' as the allowed commands this will fail ...
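Putting the two parts together, the failing task from the question would become (same paths as in the question; remote_src: yes makes copy read the file on the VM, and become lets it read the root-owned admin.conf):

```yaml
- name: Copy kubeconfig for vagrant user
  become: yes
  copy:
    remote_src: yes
    src: /etc/kubernetes/admin.conf
    dest: /home/vagrant/.kube/
    owner: vagrant
    group: vagrant
```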

Ansible mount with credential file

I have a playbook containing a mount task which seems to fail when I try using the credentials option.
The task below succeeds:
- name: "mount share"
  mount:
    state: "mounted"
    fstype: "cifs"
    name: "/opt/shared"
    src: "//192.168.2.5/my_shared_drive"
    opts: "username=john,password=secretpass,file_mode=0644,dir_mode=0755,gid=root,uid=root"
While the one below fails with a credentials-related error message.
- name: "mount share"
  mount:
    state: "mounted"
    fstype: "cifs"
    name: "/opt/shared"
    src: "//192.168.2.5/my_shared_drive"
    opts: "credentials=/root/secretcreds,file_mode=0644,dir_mode=0755,gid=root,uid=root"
Below is the content of the credentials file:
[root@localhost]# cat /root/secretcreds
username=john
password=secretpass
It seems like there is a bug; there is an open issue #22930.
