Mount SMB share temporarily with Ansible using a module instead of shell - ansible

I need to mount an SMB share in Ansible to get access to large, shared installation files. This works using the CLI:
- name: Mount share
  become: yes
  shell: "mount.cifs {{ smb_share.path }} {{ smb_share.mount_point }} -o user={{ smb_share.user }},password={{ smb_share.password }},mfsymlinks,exec"
However, this has two disadvantages:
It doesn't follow the best practice of using modules instead of shell commands where possible
There is no detection of whether the share is already mounted - I'd need to implement this myself, e.g. by grepping for the mount point in the output of mount (see the sketch below)
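As a rough idea of what such a check could look like, here is a minimal sketch built around the shell task above; it assumes the mountpoint utility is available on the target, and the register variable name is made up:

- name: Check whether the share is already mounted
  command: "mountpoint -q {{ smb_share.mount_point }}"
  register: mount_check
  failed_when: false
  changed_when: false

- name: Mount share only when not mounted yet
  become: yes
  shell: "mount.cifs {{ smb_share.path }} {{ smb_share.mount_point }} -o user={{ smb_share.user }},password={{ smb_share.password }},mfsymlinks,exec"
  when: mount_check.rc != 0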
There is a mount module in Ansible. But since this share is only needed for installation and uses credentials, I don't want it permanently mounted. The boot parameter looks like what I need, but sadly not for Linux:
Determines if the filesystem should be mounted on boot.
Only applies to Solaris systems.
I still tried setting boot: no, but as described in the docs, it still creates an /etc/fstab entry containing the plain-text password.
Is there any alternative to temporarily mount a Windows share on CentOS 7 with an Ansible module?

I am not aware of a module in Ansible specifically for temporary mounts.
But according to the docs you can use the mount module in the following way:
- name: Mount network share
  mount:
    src: //path/to/windows/share
    path: /mnt
    fstype: cifs
    opts: 'username=example#domain,password=Password1!'
    state: mounted
  become: true

- name: Unmount network share
  mount:
    path: /mnt
    state: absent
  become: true
The first task (state=mounted) creates a record in /etc/fstab and mounts the network share; the second task (state=absent) unmounts the share and removes the corresponding record from /etc/fstab. That is the best option that comes to my mind.
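If you additionally want to make sure the /etc/fstab entry (and the plain-text password in it) never outlives the play, even when an installation step fails, one option is to wrap everything in a block with an always section. A minimal sketch, with a placeholder installation task:

- name: Install from the temporarily mounted share
  become: true
  block:
    - name: Mount network share
      mount:
        src: //path/to/windows/share
        path: /mnt
        fstype: cifs
        opts: 'username=example#domain,password=Password1!'
        state: mounted
    - name: Run the installer  # placeholder for the actual installation tasks
      command: /mnt/install.sh
  always:
    - name: Unmount network share and remove the fstab entry
      mount:
        path: /mnt
        state: absent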

Related

How can I ensure correct permissions when using Ansible to deploy an application

I am having a really hard time understanding how to approach privilege escalation when deploying a Python application using Ansible.
In my current setup:
The VM is provisioned by our sysadmins
My user (let's call it "John") is not root, but can become it by calling sudo.
No matter what I do via Ansible, it ends up being done either as John or as root (with become: true).
I would like to:
install system packages like python3, pip or supervisord
set up a separate system user
download the code from the git repository
create a virtualenv
install requirements in the virtualenv
set up supervisord
The list above is really easy to do manually - I log into the VM, use sudo su - to become the superuser and handle points 1-2, then su new-system-user and complete points 3-5, etc.
Some of the tasks are also really easy to do in Ansible using become: true or my SSH user; however, I get many permission issues when using the newly created system user, and become_user: "{{ system_user }}" in tasks 3-5 does not seem to work as intended.
I would like to ask you - what is the optimal way to tackle this issue? My only workaround is to do everything in the context of my SSH user and then copy & chown to the new system user, but this seems like a hack, and I'm pretty sure I'm missing something when it comes to correct privilege escalation.
Even if I can't answer
What is the optimal way to tackle this issue?
fully, steps 1-3 and 6 should be possible in a way like the following:
---
- hosts: pyapp
  become: true

  tasks:

  - name: Make sure packages are installed
    yum:
      name: supervisor
      state: latest

  - name: Create group in local system
    group:
      name: pyapp
      gid: '1234'

  - name: Configure local account in system
    user:
      name: "pyapp"
      system: yes
      createhome: yes
      uid: '1234'
      group: '1234'
      shell: /sbin/nologin
      comment: "My Python App"
      state: present

  - name: Download and unpack
    unarchive:
      src: "https://{{ DOWNLOAD_URL }}/myPythonApp.tar.gz"
      dest: "/home/pyapp/"
      remote_src: yes
      owner: "pyapp"
      group: "pyapp"

  - name: Make sure log directory exists
    file:
      path: "/var/log/pyapp"
      state: directory
      owner: 'pyapp'
      group: 'pyapp'

  - name: Make sure supervisord config file is provided
    copy:
      src: "pyapp.ini"
      dest: "/etc/supervisord.d/pyapp.ini"

  - name: Make sure service becomes started and enabled
    systemd:
      name: supervisord
      state: started
      enabled: true
      daemon_reload: true

  - name: Make sure application is in started state
    supervisorctl:
      name: pyapp
      state: started
with a configuration file (pyapp.ini) like
[program:pyapp]
directory=/home/pyapp
command=/usr/bin/python /home/pyapp/myPythonApp.py
user=pyapp
process_name=%(program_name)s
autorestart=true
stderr_logfile=/var/log/pyapp/stderr.log
stdout_logfile=/var/log/pyapp/stdout.log
leaving the other values at their defaults.
I am using such an approach and it just works. Step 3, the download, should also be possible using the git module, as sketched below.
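A minimal sketch of that variant; the repository URL is a placeholder, and the follow-up ownership fix is an assumption so the checkout ends up belonging to the application user:

- name: Clone the application repository  # placeholder repo URL
  git:
    repo: "https://example.com/your-org/myPythonApp.git"
    dest: /home/pyapp/myPythonApp
    version: master

- name: Make sure the checkout is owned by the application user
  file:
    path: /home/pyapp/myPythonApp
    state: directory
    recurse: yes
    owner: pyapp
    group: pyapp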
Module Documentation in order of occurrence
Understanding privilege escalation: become
group – Add or remove groups
user – Manage user accounts
unarchive – Unpacks an archive after (optionally) copying it from the local machine
file – Manage files and file properties
copy – Copy files to remote locations
systemd – Manage systemd units
supervisorctl – Manage the state of a program or group of programs running via supervisord
Further Reading
You may then have a look into Manages Python library dependencies (the pip module) and its Examples, as well as Installing a Distribution Package and Running Supervisor. A sketch for the virtualenv part (steps 4-5) follows below.
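For steps 4-5 (creating the virtualenv and installing the requirements into it), a minimal sketch using the pip module could look like this. The paths are assumptions, and note that become_user with an unprivileged account typically requires the acl package on the target so Ansible can hand temporary files over to that user:

- name: Install requirements into a virtualenv owned by the application user
  pip:
    requirements: /home/pyapp/myPythonApp/requirements.txt  # assumed location
    virtualenv: /home/pyapp/venv
    virtualenv_command: /usr/bin/python3 -m venv
  become: true
  become_user: pyapp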

How can I move the centos user's home directory using Ansible?

I've got a CentOS cluster where /home is going to be mounted over NFS. So I think the centos user's home should be moved somewhere that will remain local, maybe /var/lib/centos or something. But given that centos is the ansible_user, I can't use:
- hosts: cluster
  become: yes
  tasks:
    - ansible.builtin.user:
        name: centos
        move_home: yes
        home: "/var/lib/centos"
as unsurprisingly it fails with
usermod: user centos is currently used by process 45028
Any semi-tidy workarounds for this, or better ideas?
I don't think you're going to be able to do this with the user module if you're connecting as the centos user. However, if you handle the individual steps yourself it should work:
---
- hosts: centos
  gather_facts: false
  become: true

  tasks:
    - command: rsync -a /home/centos/ /var/lib/centos/

    - command: sed -i 's,/home/centos,/var/lib/centos,' /etc/passwd
      args:
        warn: false

    - meta: reset_connection

    - command: rm -rf /home/centos
      args:
        warn: false
This relocates the home directory, updates /etc/passwd, and then removes the old home directory. The reset_connection is in there to force a new ssh connection: without that, Ansible will be unhappy when you remove the home directory.
In practice, you'd want to add some logic to the above playbook to make it idempotent, e.g. something like the sketch below.
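A minimal idempotence sketch, assuming the presence of /home/centos is a good enough indicator that the move has not happened yet:

- name: Check whether the old home directory still exists
  stat:
    path: /home/centos
  register: old_home

- name: Relocate the home directory only if it has not been moved yet
  command: rsync -a /home/centos/ /var/lib/centos/
  when: old_home.stat.exists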

How to copy a file from a network path to the controller machine in Ansible

Let's assume I have a text file on some network path,
e.g. \\cr-ampd-a01.abcd.loc\BulkFolder\textfile.txt
How can I copy that file to my controller machine in Ansible?
Note: I can access it manually (Win+R and that path); it then prompts for credentials.
This is the sample code with win_copy
- name: copy from network to controller
  win_copy:
    src: \\cr-ampd-a01.abcd.loc\BulkFolder\textfile.txt
    dest: C:\Temp\OTW\Mastapps
    remote_src: yes
  become: yes
  vars:
    ansible_become_user: XXX\Uxxxxxx
    ansible_become_pass: PasswordHere
The error with this code is: Cannot copy src file as it does not exist
Another attempt with net_get:
- name: copy from network to controller
  net_get:
    src: \\cr-ampd-a01.abcd.loc\BulkFolder\textfile.txt
    dest: C:\Temp\OTW\Mastapps
  ansible_network_os: "eos"
  remote_src: True
  become: True
  vars:
    ansible_become_user: XXX\Uxxxxxx
    ansible_become_pass: PasswordHere
Error here: ansible_network_os must be specified on this host
How can I achieve this?
You are trying to copy a file from a CIFS resource. I'm relatively sure this won't work with copy or win_copy, because they are meant to copy files on that (physical) machine. Since CIFS is a protocol like HTTP or FTP, you need a client to get the file from the remote machine. As you also wrote, you need to identify yourself with some credentials, and the win_copy task doesn't have that option.
One option could be to map the network share to, for example, N: and then use win_copy with N:\source.txt - in that case the N: drive is, like C: or D:, a known path on the machine (a rough sketch of this follows below). Have a look at win_share - https://docs.ansible.com/ansible/latest/modules/win_share_module.html
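A rough sketch of that approach, mapping the share with net use via win_command and then copying from the mapped drive. The drive letter is arbitrary, the credentials are the placeholders from the question, and mapping a drive this way is not idempotent by itself:

- name: Map the network share to a drive letter  # placeholder credentials
  win_command: net use N: \\cr-ampd-a01.abcd.loc\BulkFolder /user:XXX\Uxxxxxx PasswordHere

- name: Copy the file from the mapped drive
  win_copy:
    src: N:\textfile.txt
    dest: C:\Temp\OTW\Mastapps\textfile.txt
    remote_src: yes

- name: Remove the drive mapping again
  win_command: net use N: /delete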
Another option is to call a local CIFS client via command, e.g. command: copy \\server\path\file c:\file, or robocopy, but that isn't easy to make idempotent. See Copy files to network computers on the Windows command line.
net_get is useful for copying files from "network devices" - please have a look at https://docs.ansible.com/ansible/latest/network/user_guide/platform_index.html#platform-options for a list of supported platforms. Based on that list, I would say copying from a CIFS share is not supported by net_get.

Download a file from a Windows share using Ansible

I have a file on a Windows share. I need to download this file using Ansible.
Playbook
- name: Copy installer
  get_url:
    url: file:'\\Winserver\Share_Binary\Installer-v6.tar.gz'
    dest: /tmp
Error Output:
"mode": "01777",
"msg": "Request failed: <urlopen error [Errno 2] No such file or directory: \"'\\\\\\\\Winserver\\\\Share_Binary\\\\Installer-v6.tar.gz'\">",
"owner": "root",
"size": 4096,
"state": "directory",
"uid": 0,
"url": "file:'\\\\Winserver\\Share_Binary\\Installer-v6.tar.gz'"
The file exists. When I paste \\Winserver\Share_Binary\Installer-v6.tar.gz into File Explorer on my system, I can see the file. Please advise.
This can be accomplished with a credential file and 3 Ansible tasks.
First, create a credential file (e.g. /home/youruser/smbshare.cred) containing the username and password of a service account with permissions to mount the CIFS share:
username=your_service_account_name
password=your_service_account_password
Make sure your remote Ansible user owns the file (or root if you're using become) and that it has 0400 permissions. You might consider generating this credential file with Ansible and/or encrypting it with Ansible Vault in the future as well (a sketch of that is shown below).
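A minimal sketch of generating the credential file from vaulted variables; the variable names smb_user and smb_password are assumptions and would live in a Vault-encrypted vars file:

- name: Create CIFS credential file from vaulted variables
  copy:
    dest: /home/youruser/smbshare.cred
    content: |
      username={{ smb_user }}
      password={{ smb_password }}
    owner: youruser
    mode: '0400'
  no_log: true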
Task 1: Use the mount module to mount the SMB share on the target.
- name: Mount SMB share
  mount:
    path: /mnt/smbshare
    src: //Winserver/Share_Binary
    fstype: cifs
    opts: 'credentials=/home/youruser/smbshare.cred'
    state: mounted
Task 2: Use the copy module to copy the file wherever you actually want it.
- name: Copy installer tarball to target
  copy:
    src: /mnt/smbshare/Installer-v6.tar.gz
    dest: /some/local/path/Installer-v6.tar.gz
    remote_src: yes
    owner: foo
    group: foo
    mode: '0640'
Task 3: Use the mount module to unmount the SMB share.
- name: Unmount SMB share
  mount:
    path: /mnt/smbshare
    state: unmounted
Note: Depending on your environment, you might need to add more mount options (refer to the mount.cifs(8) man page) to the opts: parameter in task 1.
I am not sure whether get_url is capable of doing that. Please try this one:
- name: Get file from smb
  command: smbclient //Winserver/Share_Binary/ <pass> -U <user> -c "get Installer-v6.tar.gz"
  args:
    chdir: /tmp
    creates: /tmp/Installer-v6.tar.gz
Of course, you have to install smbclient first

Why are my nfs4 mounts not working with Ansible?

With Ansible, I cannot mount nfs4.
I have nfs4 exports configured on the server, and I can mount both nfs4 and nfs using a bash shell.
I can also get nfs to work in ansible, just not nfs4.
So I'm wondering how I can mount a share like /pool1/volume1 on the server to the same-style path on the client:
/pool1/volume1
I tried switching to standard NFS, which worked, and I can mount nfs4 in a bash shell, but not with Ansible.
This works:
- name: mount softnas NFS volume
  become: yes
  mount:
    fstype: nfs
    path: "/pool1/volume1"
    opts: rsize=8192,wsize=8192,timeo=14,intr,_netdev
    src: "10.0.11.11:/pool1/volume1"
    state: mounted
But this doesn't
- name: mount softnas NFS volume
  become: yes
  mount:
    fstype: nfs4
    path: "/pool1/volume1"
    opts: rsize=8192,wsize=8192,timeo=14,intr,_netdev
    src: "10.0.11.11:/pool1/volume1"
    state: mounted
And if I use this command from a shell, it works fine and mounts the paths under /test:
sudo mount -t nfs4 10.0.11.11:/ /test
Although it's not quite right, because I'd like /pool1/volume1 and /pool2/volume2 not to appear under /test.
My exports file on the server is this:
/ *(ro,fsid=0)
# These mounts are managed in ansible playbook softnas-ebs-disk-update-exports.yaml
# BEGIN ANSIBLE MANAGED BLOCK /pool1/volume1/
/pool1/volume1/ *(async,insecure,no_subtree_check,no_root_squash,rw,nohide)
# END ANSIBLE MANAGED BLOCK /pool1/volume1/
# BEGIN ANSIBLE MANAGED BLOCK /pool2/volume2/
/pool2/volume2/ *(async,insecure,no_subtree_check,no_root_squash,rw,nohide)
# END ANSIBLE MANAGED BLOCK /pool2/volume2/
When I try to switch to nfs4, I get this error with Ansible:
Error mounting /pool1/volume1/: mount.nfs4: mounting 10.0.11.11:/pool1/volume1/ failed, reason given by server: No such file or directory
I came across this same issue; using Ansible 2.11.5, the above answer didn't work for me. Ansible was complaining about the fstype "nfs4", and quoting was needed for the equals signs in "opts". Here is what I used:
- name: mount AWS EFS Volume
  mount:
    fstype: nfs
    path: "/mnt/efs"
    opts: "nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport"
    src: "{{ efs_share_address[env] }}:/"
    state: mounted
From the role's vars file:
efs_share_address:
  qa: 10.2.2.2
  stage: 10.3.3.3
  production: 10.4.4.4
Check if the mount point exists:
mkdir /pool1/volume1 # if it does not exist. Or create an Ansible task to create the directory (see the sketch below).
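A minimal sketch of such a task with the file module:

- name: Make sure the mount point exists
  become: yes
  file:
    path: /pool1/volume1
    state: directory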
Updated, changing the src, since the share (the NFSv4 root) is /:
- name: mount softnas NFS volume
  become: yes
  mount:
    fstype: nfs4
    path: "/pool1/volume1"
    opts: rsize=8192,wsize=8192,timeo=14,intr,_netdev
    src: "10.0.11.11:/"
    state: mounted
If you don't want to mount /, then export /pool1/volume1 on the server.
I'm not sure how I fixed it exactly, but I decided to opt for the recommended workflow of binding my exports below the /export folder, and using
/export *(ro,fsid=0)
...as the root share, and then these (the matching client-side mount task is sketched after the list):
/export/pool1/volume1 *(async,insecure,no_subtree_check,no_root_squash,rw,nohide)
/export/pool2/volume2 *(async,insecure,no_subtree_check,no_root_squash,rw,nohide)
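With that layout, the client-side mount task would look roughly like the following. Because fsid=0 is set on /export, NFSv4 paths are relative to that pseudo-root, so /export/pool1/volume1 is addressed as /pool1/volume1 (a sketch based on the tasks above):

- name: mount softnas NFS volume via the NFSv4 pseudo-root
  become: yes
  mount:
    fstype: nfs4
    path: "/pool1/volume1"
    opts: rsize=8192,wsize=8192,timeo=14,intr,_netdev
    src: "10.0.11.11:/pool1/volume1"
    state: mounted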
