Kubernetes - nfs: access denied by server while mounting - macOS

Unable to mount a pod - nfs: access denied by server while mounting
I have NFS Manager for Mac and I have tried removing all of the security restrictions. Right-clicking in Finder and connecting to the server works as it should, and I can also see the export in showmount -e.
The error message in full:
MountVolume.SetUp failed for volume "magento2-monolith-volume" : mount failed: exit status 32 Mounting command: systemd-run Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/cd5fb5f0-885a-11e9-9fb4-080027390119/volumes/kubernetes.io~nfs/magento2-monolith-volume
--scope -- mount -t nfs 192.168.31.135:/Users/macmini/desktop/magento2-devbox /var/lib/kubelet/pods/cd5fb5f0-885a-11e9-9fb4-080027390119/volumes/kubernetes.io~nfs/magento2-monolith-volume Output: Running scope as unit: run-r256a12ba3ca8416c904902f477ec2397.scope mount.nfs: access denied by server while mounting
192.168.31.135:/Users/macmini/desktop/magento2-devbox
Is there something that I am missing on macOS? Most of the issues I have found seem to involve people running on Linux. Is there anything specific to Mac and Kubernetes?
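One common cause on macOS is the built-in NFS server rejecting mounts from non-reserved ports and unmapped users. A sketch of the usual server-side fix, using the export path from the error above; the `-mapall` username and the client network are assumptions to adapt:

```shell
# /etc/exports on the macOS host -- the path comes from the error message;
# the mapped user (macmini) and the network/mask are assumptions:
/Users/macmini/desktop/magento2-devbox -alldirs -mapall=macmini -network 192.168.31.0 -mask 255.255.255.0

# /etc/nfs.conf -- Linux clients (the Kubernetes node) mount from a
# non-reserved port by default, which macOS refuses unless this is set:
nfs.server.mount.require_resv_port = 0

# Apply the changes and verify the export is visible:
sudo nfsd restart
showmount -e localhost
```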

Related

Homestead with HyperV unable to create SMB folders - mount error(2): No such file or directory

After running vagrant up I get the following error message.
Vagrant requires administrator access to create SMB shares and
may request access to complete setup of configured shares.
==> homestead: Setting hostname...
==> homestead: Mounting SMB shared folders...
homestead: C:/Code => /home/*****/code
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t cifs -o vers=3.02,credentials=/etc/smb_creds_vgt-07cc5c30ef2cc20d12e837c88c36370a-66f0bd5cbca4d218f5f0b8a5f1712727,uid=1000,gid=1000,mfsymlinks,_netdev,nofail //169.254.x.x/vgt-07cc5c30ef2cc20d12e837c88c36370a-66f0bd5cbca4d218f5f0b8a5f1712727 /home/*****/code
The error output from the last command was:
mount error(2): No such file or directory
I am able to SSH into the Hyper-V instance, and running the command there returns the same error. If I look at the properties of the C:/Code folder, I can see the network path is \\PCNAME\vgt-07cc5c30ef2cc20d12e837c88c36370a-66f0bd5cbca4d218f5f0b8a5f1712727, the same as in the mount command except that PCNAME has been replaced by an IP. I can ping that IP from within the instance, and it seems to work fine.
Homestead file:
folders:
    - map: C:/Code
      to: /home/vagrant/code
      type: smb
      smb_username: vagrant
      smb_password: vagrant
The vagrant user has full permissions to the local code folder.
I am running Windows 11, Vagrant 2.3.1, and Hyper-V 10. The External Switch is set up via my Wi-Fi - could that cause an issue?
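mount error(2) from mount -t cifs usually means the userspace mount helper or the cifs kernel module is missing in the guest, rather than a problem with the share itself. A sketch of what to check inside the Homestead guest; the package name assumes a Debian/Ubuntu base image:

```shell
# Install the userspace helper that mount -t cifs delegates to:
sudo apt-get update && sudo apt-get install -y cifs-utils

# Make sure the cifs kernel module loads:
sudo modprobe cifs

# Then retry the failing mount by hand, or re-run vagrant up from the host.
```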

Cannot mount my Storage Gateway filesystem using OCI Storage Gateway

I would like to mount a local folder to a file system created in the OCI Storage Gateway Management Console.
I created the filesystem based on the documentation: https://docs.cloud.oracle.com/iaas/Content/StorageGateway/Tasks/creatingyourfirstfilesystem.htm
When I try to mount it from the local host, nothing happens for minutes.
The mount command I executed:
sudo mount -t nfs -o vers=4,port=32770 <ip address>:/SG-Test /home/opc/ocisg/
Instead of a successful mount, I get an error message:
mount.nfs: Connection timed out
The solution was to restart Docker (in which OCI Storage Gateway runs).
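In other words, the NFS endpoint only comes back once the Storage Gateway containers are running again. A sketch of the recovery steps, assuming a systemd-based host:

```shell
# Restart Docker, which hosts the OCI Storage Gateway containers:
sudo systemctl restart docker

# Once the gateway containers are back up, retry the mount:
sudo mount -t nfs -o vers=4,port=32770 <ip address>:/SG-Test /home/opc/ocisg/
```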

minikube mount crashes: mount system call fails

I am running minikube on my Mac (running macOS 10.14.5).
minikube version: 1.1.0
minikube is using VirtualBox
I would like to have a single set of Kubernetes YAML files that I use in different environments. Therefore, I'm trying to mount into minikube the same directory I would use in those other environments. (If there's a different way to go about this that eases development, let me know.)
Anyway, the mount fails.
$ minikube mount /etc/vsc:/etc/vsc
📁 Mounting host path /etc/vsc into VM as /etc/vsc ...
💾 Mount options:
▪ Type: 9p
▪ UID: docker
▪ GID: docker
▪ Version: 9p2000.L
▪ MSize: 262144
▪ Mode: 755 (-rwxr-xr-x)
▪ Options: map[]
🚀 Userspace file server: ufs starting
💣 mount failed: mount: /etc/vsc: mount(2) system call failed: Connection timed out.
: Process exited with status 32
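The 9p mount times out when the VM cannot reach the userspace file server running on the host, which on macOS is often a firewall problem. A sketch of common workarounds; the flag names are assumed from minikube 1.x:

```shell
# Pin the file server to a fixed port so it can be allowed
# through the macOS firewall (by default a random port is used):
minikube mount /etc/vsc:/etc/vsc --port 46464

# Or fall back to an older 9p protocol version, which some
# guest kernels handle more reliably:
minikube mount /etc/vsc:/etc/vsc --9p-version=9p2000.u
```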

How to resolve vagrant mount issue?

Vagrant up stopped working properly after restarting the machine; before the restart, it was working fine. It hangs after "default: Mounting NFS shared folders" and throws the error "mount.nfs: Connection timed out".
I have checked the exports file and restored it with blank data.
==> default: Mounting NFS shared folders...
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
mount -o vers=3,udp,rw,actimeo=2 192.168.200.1:/Users/USERNAME/vagrant/ol7/vagabond7 /var/nfs//Users/USERNAME/vagrant/ol7/vagabond7
Stdout from the command:
Stderr from the command:
mount.nfs: Connection timed out
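Since Vagrant rewrites /etc/exports on the host, a stale or corrupt entry left over from before the restart is a common cause of this timeout. A sketch of the host-side cleanup, assuming a macOS/BSD host where the nfsd tool provides these subcommands:

```shell
# Validate the current /etc/exports syntax:
sudo nfsd checkexports

# After clearing any stale entries from /etc/exports,
# restart the host NFS server:
sudo nfsd restart

# Then retry the provisioner:
vagrant reload
```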

Volume mapped filebeat.yml permissions from Docker on a Windows host

I'm trying to run the official Filebeat 5.4.3 Docker container via VirtualBox on a Windows host. Rather than creating a custom image, I'm using a volume mapping to pass the filebeat.yml file to the container through the automatically created VirtualBox mount /c/Users, which points to C:\Users on my host.
Unfortunately I'm stuck on this error:
Exiting: error loading config file: config file ("filebeat.yml") can only be writable by the owner but the permissions are "-rwxrwxrwx" (to fix the permissions use: 'chmod go-w /usr/share/filebeat/filebeat.yml')
My docker-compose config is:
filebeat:
    image: "docker.elastic.co/beats/filebeat:5.4.3"
    volumes:
        - "/c/Users/Nathan/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
        - "/c/Users/Nathan/log:/mnt/log:ro"
I've tried SSH-ing into the machine and running the chmod go-w command, but nothing changes. Is this some kind of permission limitation when working with VirtualBox shared folders on a Windows host?
It turns out this is a side effect of the Windows DACL permission system. Fortunately I only need this for a development environment, so I've simply disabled the permission check by overriding the container entry point and passing the -strict.perms=false flag.
filebeat:
    image: "docker.elastic.co/beats/filebeat:5.4.3"
    entrypoint: "filebeat -e -strict.perms=false"
    volumes:
        - "/c/Users/Nathan/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro"
        - "/c/Users/Nathan/log:/mnt/log:ro"