I'm an EC2 beginner. I was able to set up a working EC2 instance for my site. The problem is that I want to use a different AMI (a CentOS one).
I'm wondering: what's the exact way to transfer files from one EBS volume to another?
To be clear, I've researched online, and it seems the best way to do this is to attach the EBS volume to the new instance and then copy the files over. My problem is that I haven't found any clear, step-by-step instructions on how to do this.
I'm hoping you guys can give me this, as I don't want to mess up my working EC2 instance by using rsync to sync the files between it and the new instance. (Unless that is an acceptable way to do it, in which case by all means let me know.)
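For reference, here's the rough sequence I've pieced together so far; this is only a sketch, and the device name and mount point are assumptions on my part:
# on the new instance, after attaching the old EBS volume in the AWS console
lsblk                                        # confirm the device name (assumed /dev/xvdf here)
sudo mkdir -p /mnt/old-volume
sudo mount -o ro /dev/xvdf /mnt/old-volume   # mount read-only to be safe
sudo rsync -a /mnt/old-volume/var/www/ /var/www/
sudo umount /mnt/old-volume
Is that roughly right?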
Thanks!
For transferring files/folders to an EC2 instance, here are the suggested steps.
Windows to Linux
You can download and install WinSCP (http://download.cnet.com/WinSCP/3000-2160_4-10400769.html). Using this application you can transfer the files/folders.
Linux to Windows
You can execute the following command:
scp -i key_pair_file_location zip_file_name.zip root@machine_name:target_folder
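For example (the key file, archive name, and hostname below are placeholders, not real values):
scp -i ~/keys/my-key-pair.pem site-backup.zip root@ec2-203-0-113-25.compute-1.amazonaws.com:/var/www/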
Hope this helps!
Related
My working environment is Windows 10 Pro, up to date (13/01/2020), with Docker 19.03.5.
Mapping a volume from a Docker container to a folder on the Windows filesystem is not working, as discussed here:
https://github.com/docker-library/mariadb/issues/152
So, as suggested, I use Docker-managed volumes to persist my data. This actually works, but what I would like to know is whether there is a way to explore these volumes that are handled directly by Docker.
So far I think the file where everything is stored is:
C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx
I tried to open it with some VHDX reader without success. I think it may be possible to mount it, but apparently I don't have the rights to do so. Maybe I'm even wrong about the file name.
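For what it's worth, the closest I've come to looking inside a volume is going through a throwaway container; a rough sketch, where the volume name mydata is just an example:
docker volume ls
docker volume inspect mydata
docker run --rm -it -v mydata:/data alpine ls -la /data
But that still only shows the contents from inside the Docker VM, not from the Windows side.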
Any help is appreciated. Thanks in advance!
I'm trying to figure out an approach to mount a directory on my OSX machine to an AWS EC2 instance I've launched with Docker Machine. After some searching online, it sounds like one approach is to use sshfs in concert with docker-machine, although I'm not entirely clear how to make that happen. I've found a reference to this approach here, but I've not been able to get it to work: https://github.com/docker/machine/issues/691
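From what I can piece together, the idea would be a reverse mount over a forwarded SSH port, something like the sketch below; the machine name aws-box, user names, and paths are guesses on my part, and it assumes sshfs is installed on the instance and Remote Login is enabled on my Mac:
ssh -i ~/.docker/machine/machines/aws-box/id_rsa -R 10022:localhost:22 ubuntu@$(docker-machine ip aws-box)
# then, on the EC2 instance:
sshfs -p 10022 -o idmap=user macuser@localhost:/Users/macuser/project ~/project
Is that the right general shape?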
Any thoughts on how I can make this happen?
Thanks!
Ok, so I'm a bit late jumping onto the Vagrant band-wagon, but figured it's about time I did.
Brief background: I've been a freelance developer for quite some time now, developing solutions based on Magento and Drupal, and have finally gathered enough demand to warrant the need to build up a team. Previously, whenever I started development on any new project, I used to clone a preconfigured base VM in VirtualBox and use that. Of course there were still configurations to do on it before I could start with actual development. Every project's web files therefore all resided inside /var/www/projectname on an Ubuntu VM.
Now I've read up on why I should be using Vagrant, especially considering that I now have a team of 4 developers working with me, but I would appreciate any feedback on the following questions I have:
Moderator note: I know this isn't exactly asking a programming question, so please advise if this could be turned into a wiki, as I'm sure that feedback into this will help someone just like me.
I am still reading through the Vagrant docs, so please be kind...noob questions ahead!
I now work on a Mac. Does it matter if I use Parallels, and another developer uses VirtualBox on Windows if we need to share or collaborate on projects?
When I issue the command vagrant up for an existing project, will it start the VM as I would in VirtualBox, or will it recreate the VM?
Is the command vagrant halt the same as issuing sudo poweroff in Ubuntu, for example?
I currently use PhpStorm and its SFTP feature for project file synchronization, with the option to exclude certain files on the remote server (VM) from being imported and synced... will I be able to specify the same using Vagrant folder sharing?
Could I easily zip or archive a Vagrant VM, move it to a file server, and then "re-import" it when and if needed? (For example, for bug fixes or new feature enhancements.)
What do we use to easily provision VMs for common projects? Should we be using Puppet, Chef, Puphpet or Salt? I've seen that Puphpet provides a nice GUI to create a Vagrantfile, which I'm sure, once generated, we could customize for future projects. At a very basic level, we need to ensure that certain applications are installed onto the server (zip, phpMyAdmin, OpenSSL, etc.), along with certain PHP settings, PHP and PEAR modules, and Apache settings. I already have base VMs set up as I'd like them for both Magento projects and Drupal projects.
EDIT: I should also add that I used to enable the Host Adapter in VirtualBox (on Windows), configure the VHost inside Ubuntu, and then update my host machine's hosts file with something like 192.168.56.3 drupalsite1.dev. So I'm unsure if Port Forwarding would be better to use? I'm not very clued up on that, I must admit.
Like I said - noob questions! However, I would really appreciate any feedback on these questions. My deepest thanks!
Most of what you are asking is subjective, so common sense and experience are the best tools.
I recommend all team members use the same provider (Parallels isn't officially supported, and VirtualBox is readily available). The base boxes could have slight variances by provider; you never know.
Vagrant will start the VM similarly, but Vagrant also does other things like configuring the network, hostname, shared folders, etc., so it's not quite the same. The big power lies in being able to tear down the environment and bring it back up in a cleanly provisioned state.
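For example, the teardown/rebuild cycle is just:
vagrant up        # boots the VM; runs provisioning the first time it's created
vagrant halt      # graceful shutdown
vagrant destroy   # throw the environment away entirely
vagrant up        # rebuild it from the base box and provisioning scripts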
Basically, yes.
Yes, your Vagrant VMs are just like your own mini cloud. You would interact with the servers much as you'd interact with external boxes.
Yes; the simple answer is that it's called packaging, and you can share the resultant .box file. However, it's good practice to keep the base box and provisioning scripts under CM (configuration management) so you can rebuild and modify as needed.
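A quick sketch of packaging (the box name is just an example, and vagrant package assumes the VirtualBox provider):
vagrant package --output magento-base.box
vagrant box add magento-base magento-base.box
vagrant init magento-base
vagrant up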
For provisioners, I think it depends on your experience, your familiarity with the provisioner language, and how much you want to invest in learning them. Look through the provisioner support and see what fits your need and budget. Chef has a very steep learning curve, in my experience, but also has a lot of thought built in. Most provisioners have wide libraries of available installation "scripts".
The host adapter can be handled identically in Vagrant, via its private-network configuration.
Learn by doing: I recommend going down the table of contents (navbar) of the Vagrant docs and trying each step where it makes sense. Then make your decisions.
That is my 2 cents. Hope this helps!
I like the functionality of Dreamweaver where you can add a site and define an FTP connection, so that when you save a file it saves a local copy and also uploads the file via FTP. I am trying to get similar functionality on Linux. What I have thought of doing is to have inotify monitor a local folder and upload any new or changed files to an FTP site, but I am having a hard time finding information on this. Any ideas on how I can accomplish this?
Also, I do not want to install any programs on the ftp server.
Thanks
Dean
You might want to take a look at scheduling an rsync job with cron, which will efficiently copy changed files across a network at a chosen interval. rsync uses ssh or rsh (not ftp), so this might not fit your setup, but in most cases it would seem the better way.
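A minimal sketch of the crontab entry (the paths, host, and five-minute interval are just examples):
*/5 * * * * rsync -az --delete /home/dean/site/ user@remotehost:/var/www/site/
Note that --delete removes remote files that no longer exist locally; drop it if that's not what you want.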
I'd throw together a Python script which uses inotify and scp/ftp.
These are all common and should be supported by whatever distro you're using. They're also all pretty well documented.
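If you'd rather skip Python, the same idea works as a small shell loop using inotifywait from the inotify-tools package (assuming your distro ships it); the paths and host are placeholders, and this simple form doesn't preserve subdirectory structure on the remote side:
#!/bin/sh
# push each file to the remote host as soon as it finishes being written
inotifywait -m -r -e close_write --format '%w%f' /home/dean/site |
while read -r file; do
    scp "$file" user@remotehost:/var/www/site/
done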
Is there a way to run an Amazon EC2 AMI image in Windows? I'd like to be able to do some testing and configuration locally. I'm looking for something like Virtual PC.
If you build your images from scratch you can do it with VMware (or insert your favorite VM software here).
Build and install your Linux box as you'd like it, then run the AMI packaging/uploading tools in the guest. Then just keep backup copies of your VM image in sync with the different AMIs you upload.
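From memory, that packaging step looks roughly like this; the bucket name and credential values are placeholders, so check the AMI tools documentation for the exact flags:
# inside the guest VM
ec2-bundle-vol -k pk.pem -c cert.pem -u YOUR_ACCOUNT_ID -d /mnt
ec2-upload-bundle -b your-bucket -m /mnt/image.manifest.xml -a YOUR_ACCESS_KEY -s YOUR_SECRET_KEY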
Some caveats: you'll need to make sure you're using compatible kernels, or at least have compatible kernel modules in the VM, or your instance won't boot on the EC2 network. You'll also have to make sure your system can autoconfigure itself (network, mounts, etc.).
If you want to use an existing AMI, it's a little trickier. You need to download and unpack the AMI into a VM image, add a kernel, and boot it. As far as I know, there's no 'one click' method to make it work. Also, the AMIs might be encrypted (I know they are at least signed).
You may be able to do this by having a 'bootstrap' VM set up to specifically extract the AMI's into a virtual disk using the AMI tools, then boot that virtual disk separately.
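Again from memory, the extraction side of that bootstrap VM would be something like the following (bucket, manifest, and key paths are placeholders):
ec2-download-bundle -b your-bucket -m image.manifest.xml -a YOUR_ACCESS_KEY -s YOUR_SECRET_KEY -k pk.pem -d /tmp/bundle
ec2-unbundle -m /tmp/bundle/image.manifest.xml -k pk.pem -s /tmp/bundle -d /tmp/image
/tmp/image then holds a raw disk image you can convert and boot.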
I know it's pretty vague, but those are the steps you'd have to go through. You could probably do some scripting to automate the process of converting AMIs to VMDKs.
The Amazon forum is also helpful. For example, see this article.
Oh, this article also talks about some of these processes in detail.
Amazon EC2 with Windows Server - announced this morning, very exciting
http://aws.amazon.com/windows/
It's a bit of a square peg in a round hole ... kind of like running MS-Office on Linux.
Depending on how you value your time, it's cheaper to just get another PC and install Linux and Xen.