I'm trying to figure out an approach to mount a directory on my OS X machine into an AWS EC2 instance I've launched with Docker Machine. After some searching online, it sounds like one approach is to use sshfs in concert with docker-machine, although I'm not entirely clear how to make that happen. I've found a reference to this approach here, but I haven't been able to get it to work: https://github.com/docker/machine/issues/691
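Here's roughly what I've been trying, pieced together from that issue. The machine name and paths are placeholders for my setup, and I build the commands as strings and echo them so I can sanity-check them before running anything against the instance:

```shell
# Sketch based on the linked issue: open a reverse tunnel so the EC2
# host can reach the Mac's sshd, then sshfs-mount the local directory
# on the host. MACHINE, LOCAL_DIR, and REMOTE_DIR are placeholders.
MACHINE=aws-sandbox
LOCAL_DIR=/Users/me/project
REMOTE_DIR=/mnt/project
# reverse tunnel so the instance can reach the Mac's sshd
tunnel_cmd="docker-machine ssh $MACHINE -f -N -R 10022:localhost:22"
# then, on the instance, sshfs-mount the Mac directory through the tunnel
mount_cmd="docker-machine ssh $MACHINE 'sshfs -p 10022 me@localhost:$LOCAL_DIR $REMOTE_DIR'"
echo "$tunnel_cmd"
echo "$mount_cmd"
```

This assumes sshd is enabled on the Mac and sshfs is installed on the instance, which may be where I'm going wrong.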
Any thoughts on how I can make this happen?
Thanks!
My working environment is Windows 10 Pro, up to date (13/01/2020), with Docker 19.03.5.
Mapping a volume from a Docker container to a folder on the Windows filesystem is not working, as discussed here:
https://github.com/docker-library/mariadb/issues/152
So, as suggested, I use Docker-managed volumes to persist my data. That works, but what I would like to know is whether there is a way to explore these volumes, which are handled directly by Docker.
As far as I can tell, the file where everything is stored is:
C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx
I tried to open it with some VHDX reader, without success. I think it may be possible to mount it, but apparently I don't have the rights to do so. I may even be wrong about the file name.
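The closest I have gotten is inspecting the volume, which only shows a Linux path inside Docker's VM, not something I can open from Windows ("mydata" is a placeholder volume name; I'm echoing the commands here rather than running them):

```shell
# "mydata" is a placeholder for one of my Docker-managed volumes.
VOL=mydata
# shows the mount point, but it is a path inside Docker's Linux VM
inspect_cmd="docker volume inspect $VOL --format '{{ .Mountpoint }}'"
# browsing the contents through a throwaway container does work
browse_cmd="docker run --rm -it -v $VOL:/data alpine ls -la /data"
echo "$inspect_cmd"
echo "$browse_cmd"
```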
Any help is appreciated. Thanks in advance!
I need some straight answers about this, as the current Docker documentation and general web info mix up Hyper-V and VMware details.
I have installed Docker on my Windows 10 Pro machine. I do not have VMware/VirtualBox installed; I don't need them, since I have Hyper-V. I can use Docker on a Linux Ubuntu box fairly well, and I (think!) I understand volumes. Sorry for the background...
I am developing a Node app and I simply want a volume within my Linux container mapped to a local directory on my Windows machine. This should be simple, but every time I run my (let's say Alpine Linux) container with '-v /c/Users:/somedata', the /somedata directory inside the container is empty.
I just don't get how this functionality works on Windows. If you have a decent link I would be very grateful, as I have been going over the Docker docs for two days and feel I am getting nowhere!
If volumes are not supported between Windows and Linux because of the OS differences, would the answer be to use COPY within a Dockerfile, and simply copy my dev files into the image being built?
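If COPY is the right direction, something like this minimal Dockerfile sketch is what I have in mind (the base image and paths are just guesses for my setup):

```dockerfile
# Hypothetical fallback: bake the dev files into the image at build
# time instead of mounting them from Windows.
FROM node:lts-alpine
WORKDIR /somedata
# copy the project from the build context instead of using -v
COPY . /somedata
RUN npm install
CMD ["node", "index.js"]
```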
MANY MANY THANKS IN ADVANCE!
I have been able to get a link to take place, but I don't really know what the rules are, yet.
(I am using Docker for windows 1.12.0-beta21 (build: 5971) )
You have to share the drive(s) in your docker settings (this may require logging in)
The one or two times I've gotten it to work, I used
-v //d/vms/mysql:/var/lib/mysql
(where I have a folder D:\vms\mysql)
(note the "//d" to indicate the drive letter)
I am trying to reproduce this with a different setup, though, and I am not having any luck. Hopefully the next release will make this even easier for us!
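The drive-letter translation that worked for me can be sketched as a tiny conversion, using my D:\vms\mysql example (adjust paths to taste):

```shell
# Convert a Windows path like D:\vms\mysql into the //d/vms/mysql
# form that worked in my -v flag.
win_path='D:\vms\mysql'
# lowercase the drive letter, flip backslashes, prefix a double slash
drive=$(printf '%s' "${win_path%%:*}" | tr '[:upper:]' '[:lower:]')
rest=$(printf '%s' "${win_path#*:}" | tr '\\' '/')
docker_path="//$drive$rest"
echo "$docker_path"   # //d/vms/mysql
```

So the run flag becomes -v "$docker_path":/var/lib/mysql.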
I'm evaluating a change in development process toward Vagrant, but I frequently develop interdependent, not-yet-released Node modules that are wired together with npm link.
Since Vagrant doesn't have all the source files shared on the guest machine, the symlinks npm link creates are no longer sufficient as a means of developing these modules in sync with one another. For one, there doesn't seem to be any way to get npm link to create hard links. For two, sharing the symlink destinations across the board a la the following won't scale:
config.vm.synced_folder "/usr/local/share/npm/lib/node_modules", "/usr/lib/node_modules"
Now, the question. Is any of the above incorrect (e.g. npm support for hard links exists, and I missed it)? What processes have people used to develop interrelated, private Node modules with testing accomplished via Vagrant?
EDIT: Ultimately, I'm hoping for a solution that will work on both Mac & Windows. Also, for the record, I don't intend to intimate how hard linking a Node module would work; I'm just trying to leverage Vagrant to improve this not-uncommon workflow.
Idea: instead of using the VM sync feature, use a sharing service in the VM to make the files accessible from the host OS.
For example, if your VM runs Linux and the host OS is Windows, you could start up samba and configure it to share the relevant directories. Then have the host OS map the samba share.
If the host OS is Mac, you could use something like macfuse to mount a directory over SSH to the VM.
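For the Mac case, the mount might look roughly like this; 2222 is Vagrant's usual forwarded SSH port, and the user and paths are assumptions for a typical box (the command is built as a string so the sketch stands alone):

```shell
# Sketch: mount the VM's global node_modules on the Mac over SSH.
# Port, user, and paths are assumptions for a typical Vagrant box.
GUEST_DIR=/usr/lib/node_modules
HOST_MOUNT=$HOME/vm-node_modules
mount_cmd="sshfs -p 2222 vagrant@127.0.0.1:$GUEST_DIR $HOST_MOUNT"
echo "$mount_cmd"
```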
Good luck!
I'm an EC2 beginner. I was able to setup a working EC2 instance for my site. The problem is that I want to use a different AMI (a CentOS one).
I'm wondering what's the exact way to transfer files from one EBS volume to another?
To be clear, I've researched online and I see that the best way to do this is to mount the EBS to the new instance, and copy the files then. My problem is that I haven't seen any clear, step-by-step instructions on how to do this.
I'm hoping you guys can give me this, as I don't want to mess up my working EC2 instance by using rsync to sync the files in between it and the new instance. (Unless this is an acceptable way to do it, then by all means please let me know)
Thanks!
For transferring files/folders to an EC2 instance, here are the suggested steps.
Windows to Linux
You can download and install WinSCP (http://download.cnet.com/WinSCP/3000-2160_4-10400769.html); using this application you can transfer files/folders.
Linux to Windows
You can execute the following command (note the @ between the user and the machine name):
scp -i key_pairfile_location zip_file_name.zip root@machine_name:target_folder
Hope this helps!
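Regarding the rsync idea in the question: rsync over SSH is also an acceptable way to do this, and it only reads from the working instance. A sketch where the key file, user, host, and paths are all placeholders (built as a string for review before running):

```shell
# rsync over SSH between instances; key file, user, host, and paths
# below are all placeholders for your own setup.
KEY=/home/me/.ssh/my-key.pem
SRC=/var/www/
DEST=ec2-user@new-instance:/var/www/
rsync_cmd="rsync -avz -e \"ssh -i $KEY\" $SRC $DEST"
echo "$rsync_cmd"
```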
Is there a way to run an Amazon EC2 AMI image in Windows? I'd like to be able to do some testing and configuration locally. I'm looking for something like Virtual PC.
If you build your images from scratch you can do it with VMware (or insert your favorite VM software here).
Build and install your Linux box as you'd like it, then run the AMI packaging/uploading tools in the guest. Then just keep backup copies of your VM image in sync with the different AMIs you upload.
Some caveats: you'll need to make sure you're using compatible kernels, or at least have compatible kernel modules in the VM, or your instance won't boot on the EC2 network. You'll also have to make sure your system can autoconfigure itself (network, mounts, etc.).
If you want to use an existing AMI, it's a little trickier. You need to download and unpack the AMI into a VM image, add a kernel, and boot it. As far as I know, there's no 'one-click' method to make it work. Also, the AMIs might be encrypted (I know they are at least signed).
You may be able to do this by having a 'bootstrap' VM set up specifically to extract the AMIs into a virtual disk using the AMI tools, then boot that virtual disk separately.
I know it's pretty vague, but those are the steps you'd have to go through. You could probably do some scripting to automate the process of converting AMIs to VMDKs.
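The unpack step, if you attempt it, goes through the AMI tools' ec2-unbundle, roughly like this (I'm echoing the command rather than running it, and the key, manifest, and directories are placeholders):

```shell
# Rough shape of extracting a downloaded bundle back into a raw disk
# image with the EC2 AMI tools; every path here is a placeholder.
unbundle_cmd="ec2-unbundle -k pk.pem -m image.manifest.xml -s ./bundle -d ./unpacked"
echo "$unbundle_cmd"
```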
The Amazon forum is also helpful. For example, see this article.
Oh, this article also talks about some of these processes in detail.
Amazon EC2 with Windows Server - announced this morning, very exciting
http://aws.amazon.com/windows/
It's a bit of a square peg in a round hole ... kind of like running MS-Office on Linux.
Depending on how you value your time, it's cheaper to just get another PC and install Linux and Xen.