I would like to mount a local folder to a File System created within the OCI Storage Gateway Management Console.
I created the file system based on the documentation: https://docs.cloud.oracle.com/iaas/Content/StorageGateway/Tasks/creatingyourfirstfilesystem.htm
When I try to mount it from the local host, nothing happens for minutes.
The mount command I executed:
sudo mount -t nfs -o vers=4,port=32770 <ip address>:/SG-Test /home/opc/ocisg/
Instead of a successful mount, I get an error message:
mount.nfs: Connection timed out
The solution was to restart Docker, inside which OCI Storage Gateway runs.
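For reference, a minimal sketch of that fix, assuming a systemd-based Linux host (the mount command is the one from above):
sudo systemctl restart docker
# wait for the Storage Gateway containers to come back up, then retry
sudo mount -t nfs -o vers=4,port=32770 <ip address>:/SG-Test /home/opc/ocisg/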
Hi, what I am actually trying to do is connect remotely from a MySQL client in Windows Subsystem for Linux:
mysql -h 172.18.0.2 -P 3306 -u root -p
Before that, I started the Docker container as follows:
docker container run --name testdb --network testnetwork -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysqlRootPassword -e MYSQL_DATABASE=localtestdb -d mariadb/server
The reason I put the container in its own network is that I also have a dockerized Spring Boot application (a GraphQL server) which is supposed to communicate with this database. But whenever I try to connect from the built-in mysql client in my Windows Subsystem for Linux with the command shown above, I get the error message:
ERROR 2002 (HY000): Can't connect to MySQL server on '172.18.0.2' (115)
What I have already tried in order to solve the problem on my own is to check whether the configuration file line (bind-address) is commented out. But it doesn't help. Interestingly, I already managed to set up a Docker container with MariaDB and connect to it from the outside; but now, when I try exactly the same thing, with the only difference that I put the container in its own user-defined network, it doesn't work.
Hopefully someone out there is able to help me with this annoying problem.
Thanks!
So long,
Daniel
//edit:
Now I have tried the advice from this topic: How to configure containers in one network to connect to each other (server -> mysql)?. Furthermore, I linked my Spring Boot (server) application to the MariaDB container with the "--link databaseContainerName" parameter.
Now I am able to start both containers without any error, but I am still not able to connect remotely to the MariaDB container, which is now running in a virtual Docker network with its own subnet.
I explored this recently: it is by design, namely container isolation. Usually only the main host (e.g. an httpd service) is accessible externally, hiding the internal connections (the hosts it communicates with to deliver the response).
A container created in its own network is not accessible from external addresses, not even from containers on the same bridge but in another network (172.19.0.0/16).
Your container should be accessible on the Docker host address (127.0.0.1 if run locally) and the mapped port ("-p 3306:3306"), i.e. 3306. Of course that won't work if several running database containers map to the same host port.
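As a quick check, you can try the mapped port first; a minimal sketch using the client command from the question (assuming you run it on the Docker host itself):
mysql -h 127.0.0.1 -P 3306 -u root -p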
Isolation is done using a firewall: iptables. You can list the rules (iptables -L) to see that, from the Docker host level.
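For example, something like this shows the rules Docker maintains in its DOCKER chain (run on the Docker host; the extra flags just make the output numeric and numbered):
sudo iptables -L DOCKER -n -v --line-numbers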
You can modify the firewall to allow external access to the internal networks. I used this rule:
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT
After that, your containerized MySQL engine should be accessible at the internal address 172.18.0.2 and the original (not mapped) port 3306.
Warnings
it disables all isolation, don't use it in production;
you have to re-run it after every Docker start, because the rules are created/modified by Docker on the fly;
not every Docker container will respond to ping; check it from the Docker host (the Linux subsystem in this case) first, and from the Windows cmd later.
I used this option (in docker.service) to make the rule permanent:
ExecStartPost=/bin/sh -c '/etc/iptables/accept172_16.sh'
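A sketch of how that can be wired up on a systemd host; the drop-in path and the script body are my assumptions, built around the ExecStartPost line above and the iptables rule from earlier:
# /etc/systemd/system/docker.service.d/override.conf (assumed drop-in location)
[Service]
ExecStartPost=/bin/sh -c '/etc/iptables/accept172_16.sh'
# /etc/iptables/accept172_16.sh (the script the unit calls; make it executable)
#!/bin/sh
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT
After editing the unit, reload systemd and restart Docker:
sudo systemctl daemon-reload
sudo systemctl restart docker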
For Docker on an external host (shared on the LAN) you should use route add (or the hosts file on your machine or router) to forward 172.x.x.x addresses to the LAN Docker host.
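From a Windows machine that could look like this (a sketch; 192.168.1.50 is a made-up LAN address for the Docker host, and 255.240.0.0 is the netmask for 172.16.0.0/12):
route ADD 172.16.0.0 MASK 255.240.0.0 192.168.1.50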
Hint: use the Portainer project (with restart policy "always") to manage Docker containers. It makes it easier to spot configuration errors, too.
Unable to mount a pod - nfs: access denied by server while mounting
I have NFS Manager for Mac, and I have tried removing all of the security. Right-clicking in Finder and connecting to the server works as it should. I can also see the export in showmount -e.
The error message in full:
MountVolume.SetUp failed for volume "magento2-monolith-volume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/cd5fb5f0-885a-11e9-9fb4-080027390119/volumes/kubernetes.io~nfs/magento2-monolith-volume --scope -- mount -t nfs 192.168.31.135:/Users/macmini/desktop/magento2-devbox /var/lib/kubelet/pods/cd5fb5f0-885a-11e9-9fb4-080027390119/volumes/kubernetes.io~nfs/magento2-monolith-volume
Output: Running scope as unit: run-r256a12ba3ca8416c904902f477ec2397.scope
mount.nfs: access denied by server while mounting 192.168.31.135:/Users/macmini/desktop/magento2-devbox
Is there something that I am missing on macOS? Most of the issues seem to be with people running on Linux. Is there anything specific to Mac and Kubernetes?
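For reference, the showmount check mentioned above looks like this (the server address is taken from the error; it is worth running it from the machine that performs the mount, i.e. the Kubernetes node, not just from the Mac):
showmount -e 192.168.31.135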
I am running a Linux Docker container on Windows 10. I need my host to have access to the data that my container generates, and I also need the data to persist if I update the container's image.
I created a folder on the host (on an NTFS-formatted drive) and, in the Docker settings, shared that drive with Docker. I then create the container with the host directory mounted (using the -v option on the docker run command).
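For illustration, a minimal sketch of that setup; the folder, container, and image names are made up:
docker run -d --name my-app -v C:\data:/data my-image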
The problem is that Docker creates a CIFS mount to my shared drive on the host, and the CIFS protocol does not seem to be case sensitive. I create two files:
/data/Test
/data/test
But only one file is actually created. I set up the kernel to support case-sensitive files; for example, if I mount the same folder inside a Cygwin bash, I can create those two files without any problem. The problem is with the CIFS implementation, I think.
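A quick way to reproduce the behaviour from inside the container (paths taken from above):
touch /data/Test /data/test
ls -l /data   # only one of the two files shows up on the CIFS mount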
My current thoughts on solving this issue:
Use Cygwin to create an NFS server on the host, and mount the NFS volume from within the Linux container. I am not sure how I can automate this process, though.
Create another Linux container with a Samba server, and create a volume on that container:
docker run -d -v /data --name dbstore a-samba-server
Then use that volume in my container:
docker run -d --volumes-from dbstore --name my-container my-container-image
Then I need to share /data from the Samba server and map that share on my host.
Both solutions seem quite cumbersome, and I would like to know if there is any way I can solve this directly with the CIFS share that Docker natively creates.
I'm using an EC2 instance to run Docker. From my local machine running OSX, I'm using docker-machine to create containers and volumes. However, when I try to mount a local folder into a container, it does not work.
docker create -v /data --name data-only-container ubuntu /bin/true
docker run -it --volumes-from data-only-container -v $(pwd)/data:/backup ubuntu bash
With the first command I create a data-only container, and with the second I start a container that should have the data-only container's volumes plus the folder I'm trying to mount. However, when I access it, the folder /backup is empty.
What am I doing wrong?
EDIT:
I'm trying to mount a host folder in order to restore backed-up data from my PC into the container. What would be a different approach in that case?
Should I try to use Flocker?
A host volume mounted with -v /path/to/dir:/container/mnt mounts a directory from the docker host inside the container. When you run this command on your OSX system, the $(pwd)/data will reference a directory on your local machine that doesn't exist on the docker host, the EC2 instance. If you log into your EC2 instance, you'll likely find the $(pwd)/data directory created there and empty.
If you want to mount folders from your OSX system into a docker container, you'll need to run Docker on the OSX system itself.
Edit: To answer the added question of how to move data up to your container in the cloud: there are often ways to move your data to the cloud provider itself, outside of Docker, and then include it directly inside the container. For a Docker-only approach, you can do something like:
tar -cC /source . | \
docker run --rm -i -v app-data:/target busybox \
/bin/sh -c "tar -xC /target"
This will upload your data with tar over a pipe into a named volume on your docker host. You can then include the named "app-data" volume in any other containers. If you need to do this multiple times with larger data sets, creating an rsync container would be more efficient.
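The named volume can then be attached wherever it is needed; for example (the image name is a placeholder):
docker run -d -v app-data:/data my-app-image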
Background
CoreOS-Kubernetes has a project for multi-node clusters on Vagrant:
https://github.com/coreos/coreos-kubernetes
https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
They have a custom cloud-config for the etcd node, but none for the worker nodes. For those, the Vagrantfile references shell scripts, which contain some cloud-config but mostly Kubernetes YAML:
https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/worker-install.sh
Objective
I'm trying to mount an NFS directory onto the CoreOS worker nodes, for use in a Kubernetes pod. From what I have read in the Kubernetes docs and tutorials, I want to mount it on the node first as a persistent volume, like this on Docker:
http://www.emergingafrican.com/2015/02/enabling-docker-volumes-and-kubernetes.html
I saw some posts saying that mounting in the pod itself can be buggy, and I want to avoid that by mounting on the CoreOS worker node first:
Kubernetes NFS volume mount fail with exit status 32
If mounting directly in the pod is the standard way, just let me know and I'll do that.
Question
Are there options for customizing the cloud config for the worker node? I'm about to start hacking on that shell script, but thought I should check first. I looked through the docs but couldn't find any.
This is the CoreOS cloud-config I'm trying to add to the Vagrantfile:
https://coreos.com/os/docs/latest/mounting-storage.html#mounting-nfs-exports
No NFS mount on CoreOS is needed. Kubernetes will do it for you right in the pod:
http://kubernetes.io/v1.1/examples/nfs/README.html
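A minimal sketch of what that looks like in a pod spec; the server address, export path, and names below are placeholders, not values from the CoreOS setup:
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
  - name: app
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - name: nfs-vol
      mountPath: /mnt        # where the export appears inside the container
  volumes:
  - name: nfs-vol
    nfs:
      server: 10.0.0.5       # placeholder NFS server address
      path: /exports/data    # placeholder export path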
Check out the nfs-busybox replication controller:
http://kubernetes.io/v1.1/examples/nfs/nfs-busybox-rc.yaml
I ran this and got it to write files to the server, which helped me debug the application. Note that even though the NFS mounts do not show up when you SSH into the Kubernetes node and run docker run -it <image> /bin/bash, they are mounted in the Kubernetes pod. That's where most of my misunderstanding occurred. I guess you have to add the mount parameters to the command when doing it manually.
Additionally, my application, Gogs, stored its config files in /data. To get it to work, I first mounted the NFS export to /mnt. Then, as in the Kubernetes nfs-busybox example, I created a command that copies all folders in /data to /mnt. In the replication controller YAML, under the container node, I put a command:
command:
- sh
- -c
- 'sleep 300; cp -a /data /mnt'
This gave me enough time to run the initial configuration of my app. Then I just waited until the sleep time was up and the files had been copied over.
I then changed my mount point to /data, and now the app starts right where it left off when the pod restarts. Coupled with an external MySQL server, it looks stateless so far.