I have tried all of the following:
mount -t xfs /dev/xvdf /vol
mount -t xfs /dev/xvdf1 /vol
mount -t xfs /dev/xvdf2 /vol
and the same again with ext4, but I keep getting the error:
mount: wrong fs type
This volume was previously attached to another EC2 instance as its boot volume; I detached it there and attached it to the new instance.
When you attach an EBS volume to a different EC2 instance, it may not have the same device name as on the previous instance. Make sure you are trying to mount the proper device, use 'lsblk' to determine the block device name.
More details are documented in the AWS EC2 User Guide:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
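For example, on the new instance the volume often appears under a different device name, and the filesystem may live on a partition rather than on the raw disk. A quick check might look like this (hypothetical output; names vary by instance and virtualization type):

lsblk
# NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
# xvda    202:0    0   8G  0 disk
# └─xvda1 202:1    0   8G  0 part /
# xvdf    202:80   0  20G  0 disk
# └─xvdf1 202:81   0  20G  0 part
sudo file -s /dev/xvdf1    # reports the filesystem type on the partition
sudo mount -t xfs /dev/xvdf1 /vol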
Related
I would like to mount a local folder to a file system created within the OCI Storage Gateway Management Console.
I created the filesystem based on the documentation: https://docs.cloud.oracle.com/iaas/Content/StorageGateway/Tasks/creatingyourfirstfilesystem.htm
When I try to mount it from the local host, nothing happens for several minutes. The mount command I executed:
sudo mount -t nfs -o vers=4,port=32770 <ip address>:/SG-Test /home/opc/ocisg/
Instead of a successful mount, I get an error message:
mount.nfs: Connection timed out
The solution was to restart Docker (inside which OCI Storage Gateway runs).
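On a systemd-based host that restart would typically be (assuming the gateway container is configured to restart with the daemon):

sudo systemctl restart docker
# give the gateway a moment to come back up, then retry the mount
sudo mount -t nfs -o vers=4,port=32770 <ip address>:/SG-Test /home/opc/ocisg/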
I am trying to mount an NFS directory in rw mode, but it always gets mounted read-only, even though the uid and gid of the user who owns the directory are the same as on the NFS server.
This is the entry in /etc/fstab:
<nfs-server>:/home /home nfs rw,noatime,vers=3
Not sure what I am doing wrong here.
Strangely, this happens when I use the Jenkins EC2 plugin but works fine if I manually start an instance from an AMI. I checked /etc/exports as well, and it has no entry.
I even tried running sudo mount -o remount,rw /home/, but I still end up with the same error.
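A couple of diagnostics can narrow down where the read-only flag comes from (a sketch; adjust paths to your setup):

# on the client: see the options the kernel actually applied
grep ' /home ' /proc/mounts

# on the NFS server: verify the export itself allows rw
sudo exportfs -v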
I am running a Linux Docker container on Windows 10. I need the host to have access to the data that my container generates, and I need the data to persist if I update the container's image.
I created a folder on the host (on an NTFS-formatted drive) and, in the Docker settings, shared that drive with Docker. I then create the container with the host directory mounted (using the -v option on the docker run command).
The problem is that Docker creates a CIFS mount to my shared drive on the host, and the CIFS protocol does not appear to be case sensitive. I create two files:
/data/Test
/data/test
But only one file is created. I have set up the kernel to support case-sensitive file names; for example, if I mount the same folder inside a Cygwin bash, I can create those two files without any problem. I think the problem is with the CIFS implementation.
My current thoughts on solving this issue:
Use Cygwin to create an NFS server on the host and mount the NFS volume from within the Linux container. I am not sure how I could automate this process, though.
Create another Linux container with a Samba server, and create a volume on that container:
docker run -d -v /data --name dbstore a-samba-server
Then use that volume in my container:
docker run -d --volumes-from dbstore --name my-container my-container-image
Then I need to share /data from the Samba server and map that share on my host, along the lines of the sketch below.
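A rough sketch of that Samba idea, assuming the community dperson/samba image (an assumption; any Samba-capable image would do):

# run the Samba server container, exposing /data as a browseable, writable share
docker run -d -v /data -p 139:139 -p 445:445 --name dbstore \
  dperson/samba -s "data;/data;yes;no"

# on the Windows host, map the share to a drive letter
net use X: \\<docker-host-ip>\data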
Both solutions seem quite cumbersome, and I would like to know if there is any way to solve this directly with the CIFS share that Docker natively creates.
I'm using an EC2 instance to run Docker. From my local machine, running OS X, I'm using docker-machine to create containers and volumes. However, mounting a local folder into any container does not work:
docker create -v /data --name data-only-container ubuntu /bin/true
docker run -it --volumes-from data-only-container -v $(pwd)/data:/backup ubuntu bash
With the first command I create a data-only container, and with the second I start a container that should have the data-only container's volumes plus the folder I'm trying to mount. However, when I access it, the /backup folder is empty.
What am I doing wrong?
EDIT:
I'm trying to mount a host folder in order to restore backed-up data from my PC into a container. What would be a different approach in that case?
Should I try to use Flocker?
A host volume mounted with -v /path/to/dir:/container/mnt mounts a directory from the docker host inside the container. When you run this command on your OSX system, the $(pwd)/data will reference a directory on your local machine that doesn't exist on the docker host, the EC2 instance. If you log into your EC2 instance, you'll likely find the $(pwd)/data directory created there and empty.
If you want to mount folders from your OSX system into a docker container, you'll need to run Docker on the OSX system itself.
Edit: To answer the added question of how to move data up to your container in the cloud, there are often ways to move your data to the cloud provider itself, outside of docker, and then include it directly inside the container. To do a docker only approach, you can do something like:
# create a tar stream of /source on the local machine and unpack it
# into the named volume "app-data" inside a throwaway busybox container
tar -cC /source . | \
docker run --rm -i -v app-data:/target busybox \
/bin/sh -c "tar -xC /target"
This will upload your data with tar over a pipe into a named volume on your docker host. You can then include the named "app-data" volume in any other containers. If you need to do this multiple times with larger data sets, creating an rsync container would be more efficient.
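One shape that rsync container could take (a rough sketch under a few assumptions: a plain alpine image with rsync installed on the fly, and port 873 reachable on the docker host):

# on the docker host: expose the named volume through a temporary rsync daemon
docker run -d -p 873:873 -v app-data:/target --name volsync alpine sh -c \
  'apk add --no-cache rsync &&
   printf "[target]\n  path = /target\n  uid = 0\n  gid = 0\n  read only = no\n" > /etc/rsyncd.conf &&
   rsync --daemon --no-detach'

# from the local machine: push incremental changes into the volume
rsync -a ./data/ rsync://<docker-host>/target/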
Background
The coreos-kubernetes project supports multi-node clusters on Vagrant:
https://github.com/coreos/coreos-kubernetes
https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
They have a custom cloud-config for the etcd node, but none for the worker nodes. For those, the Vagrantfile references shell scripts, which contain some cloud-config but mostly Kubernetes YAML:
https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/worker-install.sh
Objective
I'm trying to mount an NFS directory onto the CoreOS worker nodes for use in a Kubernetes pod. From what I've read in docs and tutorials, I want to mount it on the node first as a persistent volume, as is done with Docker here:
http://www.emergingafrican.com/2015/02/enabling-docker-volumes-and-kubernetes.html
I saw some posts saying that mounting in the pod itself can be buggy, and I want to avoid that by mounting on the CoreOS worker node first:
Kubernetes NFS volume mount fail with exit status 32
If mounting directly in the pod is the standard way, just let me know and I'll do that.
Question
Are there options for customizing the cloud-config for the worker nodes? I'm about to start hacking on that shell script, but thought I should check first; I looked through the docs but couldn't find any.
This is the CoreOS cloud-config I'm trying to add to the Vagrantfile:
https://coreos.com/os/docs/latest/mounting-storage.html#mounting-nfs-exports
No NFS mount on CoreOS is needed. Kubernetes will do it for you right in the pod:
http://kubernetes.io/v1.1/examples/nfs/README.html
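The relevant part of the pod (or replication controller) spec is an nfs volume, which the kubelet mounts for you (server address and export path elided here; see the example for the full manifest):

volumes:
- name: nfs
  nfs:
    server: <nfs-server>
    path: "/"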
Check out the nfs-busybox replication controller:
http://kubernetes.io/v1.1/examples/nfs/nfs-busybox-rc.yaml
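Running it amounts to something like this (assuming kubectl is configured for your cluster and the label in that manifest is name=nfs-busybox):

kubectl create -f nfs-busybox-rc.yaml
kubectl get pods -l name=nfs-busybox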
I ran this and got it to write files to the server, which helped me debug the application. Note that even though the NFS mounts do not show up when you SSH into the Kubernetes node and run docker run -it <image> /bin/bash, they are mounted in the Kubernetes pod. That's where most of my misunderstanding occurred; I guess you have to add the mount parameters to the command when doing it manually.
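That is, a manually started container only sees a mount if you pass it in explicitly, something like (hypothetical image name and path):

docker run -it -v /mnt/nfs:/mnt/nfs <image> /bin/bash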
Additionally, my application, Gogs, stores its config files in /data. To get it to work, I first mounted the NFS share at /mnt. Then, like in the Kubernetes nfs-busybox example, I added a command to copy all the folders in /data to /mnt. In the replication controller YAML, under the container node, I put:
command:
- sh
- -c
- 'sleep 300; cp -a /data /mnt'
This gave me enough time to run the initial configuration of my app; I just waited until the sleep was up and the files had been copied over.
I then changed my mount point to /data, and now the app picks up right where it left off when the pod restarts. Coupled with an external MySQL server, it looks stateless so far.