Using Vagrant with npm-linked dependencies - Windows

I'm evaluating a change in development process toward Vagrant, but I frequently develop interdependent, not-yet-released Node modules that are wired together with npm link.
Since Vagrant doesn't share all the source files with the guest machine, the symlinks npm link creates are no longer sufficient for developing these modules in sync with one another. For one, there doesn't seem to be any way to get npm link to create hard links. For two, sharing the symlink destinations across the board, à la the following, won't scale:
config.vm.synced_folder "/usr/local/share/npm/lib/node_modules", "/usr/lib/node_modules"
Now, the question. Is any of the above incorrect (e.g. npm support for hard links exists, and I missed it)? What processes have people used to develop interrelated, private Node modules with testing accomplished via Vagrant?
EDIT: Ultimately, I'm hoping for a solution that will work on both Mac & Windows. Also, for the record, I don't intend to suggest how hard-linking a Node module would work; I'm just trying to leverage Vagrant to improve this not-uncommon workflow.

Idea: instead of using the VM sync feature, use a sharing service in the VM to make the files accessible from the host OS.
For example, if your VM runs Linux and the host OS is Windows, you could start up Samba and configure it to share the relevant directories, then have the host OS map the Samba share.
If the host OS is a Mac, you could use something like MacFUSE/SSHFS to mount a directory from the VM over SSH.
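A minimal sketch of the SSHFS route from a Mac host, assuming Vagrant's default SSH forward on port 2222 and the global module path from the question (both are assumptions):

# Mount the guest's global node_modules on the host over Vagrant's forwarded SSH port.
mkdir -p ~/vm-node_modules
sshfs -p 2222 vagrant@127.0.0.1:/usr/lib/node_modules ~/vm-node_modules

When you're done, a plain umount ~/vm-node_modules releases the mount.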
Good luck!

Related

Is Vagrant suitable for beginners who are not tech-savvy?

I'm planning to teach a group of people how to set up a website using WordPress. These people have some basic computer-usage knowledge: they can surf the web, write emails, install software on their computer, ... But they are absolutely not developers, and the training does not aim to teach them development.
But I want them to be able to set up a fully working local web environment on their computer, which runs on Windows. I was planning to use XAMPP, but I'm wondering if Vagrant is not more suitable. I could prepare a box with a lot of tools already included, and they would just have to install it. Interaction with the server would take place only via HTTP and FTP (no SSH needed).
Is it possible to create a batch file that they can click on to launch the Vagrant VM? If properly configured, is it as easy to use as that for absolute beginners?
From what you describe, there is almost no Vagrant-specific work left for them: you would be responsible for making the Vagrant box and the Vagrantfile, and you would not expose your students to Vagrant at all. The only thing they need is a .bat file on their desktop (the only command it needs to run is vagrant up; make sure to set the VAGRANT_CWD environment variable), and the server will be up and running.
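A minimal sketch of that launcher (the folder path is a made-up example):

rem start-site.bat - double-click to boot the training VM
set VAGRANT_CWD=C:\wordpress-training
vagrant up
pause

The pause keeps the window open so the students can see that the box finished booting.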
The main advantage I see is that you put your students in exactly the same situation they will be in with their production system: they will face the same tools (FTP, the WordPress admin, ...) in an environment (more or less) identical to a production environment.

Custom developer setup with Vagrant

I currently have a Vagrant VM with a typical web setup (Apache, PHP, etc.). The actual web repo is checked out onto my local machine and accessed through synced folders and forwarded Apache ports (like http://127.0.0.1:4567/web). The question is, how do I make this work for a team, as opposed to just me? Since you have to sync folders, how is it possible for multiple devs to connect to one VM and sync folders?
I initially thought sharing the Vagrantfile would be all that is needed, but the Vagrantfile alone knows nothing about the apt-get commands and Apache/PHP/Node setup. I don't want devs to have to worry about running those.
This is what Vagrant provisioning is for.
Vagrant allows you to automatically alter the configuration of a box or install additional software packages as part of the first vagrant up process. Moreover, you can provision your system with various configuration management systems: Chef, Puppet, Ansible, CFEngine. Or you can simply use shell scripts.
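For example, a minimal sketch of the shell-script route (the box name and package list are placeholders for your stack):

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  # Runs once during the first `vagrant up`; re-run on demand with `vagrant provision`.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y apache2 php5 libapache2-mod-php5
  SHELL
end

Every developer who runs vagrant up against this file gets the same stack without typing a single apt-get command.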
Sitepoint offers a good tutorial on provisioning an Ubuntu 14.04 box with bash scripts to install nginx, PHP-FPM and MySQL. Read it. Later you might want to move to configuration management systems.
Take a look at the Phansible project, which generates Ansible provisioning for PHP-based projects. Puppet provisioning for PHP projects can be generated via PuPHPet. They will give you an idea of how to use configuration management with Vagrant, and might be used as a template for your Vagrantfile.
I always try to craft my Vagrantfiles carefully (provisioning; synced folders, for example mapping the host's ./src/public onto /var/www/html/ on the guest machine; port forwards; and so on) so I can put them in the project's root under version control. Later, when a teammate clones the project's repo on his machine, he can issue vagrant up right away and get a fully functional development environment.
The only problem I haven't solved yet is updating existing VMs when new dependencies are added to a project. For now we update the provisioning scripts, and if developers encounter errors they manually reprovision their VMs by running vagrant reload --provision.
P.S. I asked a question regarding this problem

Docker and file sharing on OS X

OK. I am playing around with different tools to prepare a dev environment, and Docker is a nice option. I created the whole dev environment in Docker and can build a project in it.
The source code for this project lives outside of the Docker container (on the host). This way you can use your IDE to edit it and use Docker just to build it.
However, there is one problem:
a) Docker on OS X uses a VM (a VirtualBox VM)
b) File sharing is reasonably slow (way slower than file I/O on the host)
c) The project has something like a gazillion files (which exacerbates problems (a) and (b))
If I move the source code into the Docker container, I will have the same problem in the IDE (it will have to access shared files, and it will be slow).
I've heard about some workarounds to make this fast. However, I can't seem to find any information on the subject.
Update 1
I used the Docker file-sharing feature (meaning I ran):
docker run -P -i -v <VMDIR>:<DOCKERDIR> -t <imageName> /bin/bash
However, sharing between VM and Docker isn't a problem. It's fast.
The bottleneck is sharing between the host and the VM.
The workaround I use is not to use boot2docker but instead to have a Vagrant VM provisioned with Docker. There is no such big penalty for mounting folders host -> Vagrant -> Docker.
On the downside, I have to pre-map folders into Vagrant (basically my whole work directory) and pre-expose a range of ports from the Vagrant box to the host to have access to the Docker services directly from there.
On the plus side, when I want to clean up unused Docker garbage (images, volumes, etc.) I simply destroy the Vagrant VM and re-create it again :)
Elaboration
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "trusty-docker"
  config.vm.box_url = "https://oss-binaries.phusionpassenger.com/vagrant/boxes/latest/ubuntu-14.04-amd64-vbox.box"
  config.vm.provision "docker"

  # By default we'll claim ports 9080-9090 on the host system.
  for i in 9080..9090
    config.vm.network :forwarded_port, guest: i, host: i
  end

  # NB: this folder mapping will not have the boot2docker issue of slow sync.
  config.vm.synced_folder "~/work", "/home/vagrant/work"
end
Having that:
host$ vagrant up && vagrant ssh
vagrant$ docker run -it --rm -v $(pwd)/work:/work ubuntu:12.04 find /work
This is unfortunately a typical problem that Windows and OS X users are currently struggling with, and it cannot be solved trivially, especially for Windows users. The main culprit is VirtualBox's vboxsf, which is used for file sharing and which, despite being incredibly useful, results in poor filesystem I/O.
There are numerous situations in which developing the project sources inside the guest VM is brought to a crawl, the main two being scores of third-party sources introduced by package managers, and Git repositories with a sizable history.
The obvious approach is to move as many of the project-related files as possible out of vboxsf and somewhere else into the guest. For instance, symlinking the package-manager directory into the project's vboxsf tree, with something like:
mkdir /var/cache/node_modules && ln -s /var/cache/node_modules /myproject/node_modules
This alone improved the startup time from ~28 seconds down to ~4 seconds for a Node.js application with a few dozen dependencies running on my SSD.
Unfortunately, this is not applicable to managing Git repositories, short of splatting/truncating your history and committing to data loss, unless the Git repository itself is provisioned within the guest. That forces you to have two repositories: one to clone the environment for inflating the guest, and another containing the actual sources. Consolidating the two worlds becomes an absolute pain.
The best way to approach the situation is to either:
drop vboxsf in favor of a shared-transport mechanism that results in better I/O in the guest, such as the Network File System. Unfortunately, for Windows users, the only way to get NFS service support is to run an enterprise edition of Windows (which I believe will still be true for Windows 10).
revert to mounting raw disk partitions into the guest, noting the related risks of giving your hypervisor raw disk access
If your developer audience is wholly composed of Linux and OS X users, option 1 might be viable. Create a Vagrant machine, configure NFS shares between your host and guest, and profit. If you do have Windows users then, short of buying them an enterprise license, it would be best to simply ask them to repartition their disks and work inside a guest VM.
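A minimal sketch of option 1 in a Vagrantfile (NFS synced folders require a private network between host and guest; the IP here is arbitrary):

# NFS needs a host-only/private network to serve the share over.
config.vm.network "private_network", ip: "192.168.50.4"
config.vm.synced_folder ".", "/vagrant", type: "nfs"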
I personally use a Windows host and have a 64 GB partition on my SSD that I mount directly into my Arch Linux guest and operate from there. I also switched to GPT and UEFI and have an option to boot directly into Arch Linux in case I want to circumvent the overhead of the virtualized hardware, giving me the best of both worlds with little compromise.
Two steps can improve your performance quite well:
Switch to NFS. You can use this script to do this.
Switch your VBox NIC driver to PCnet-FAST III. You can do this on your default machine by running:
VBoxManage modifyvm default --nictype1 Am79C973
VBoxManage modifyvm default --nictype2 Am79C973
I run a simple watch script that kicks off an rsync inside my container(s) from the shared source-code volume to a container-only volume whenever anything changes. My entry-point only reads from the container-only volume so that avoids the performance issues, and rsync works really well, especially if you set it up correctly to avoid things like .git folders and libraries that don't change frequently.
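A minimal sketch of such a watcher, assuming inotify-tools is installed in the container and using made-up /src-shared (slow shared volume) and /src-local (container-only volume) paths:

#!/bin/sh
# Block until something changes under the shared volume, then re-sync it
# into the container-only volume, skipping .git and other rarely-changing dirs.
while inotifywait -r -e modify,create,delete,move /src-shared; do
  rsync -a --delete --exclude '.git/' --exclude 'node_modules/' /src-shared/ /src-local/
done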
A couple of pieces of info which I found:
I started to use Vagrant to work with VirtualBox and installed Docker in it. It gives more flexibility.
Default sharing in Vagrant's VirtualBox provisioning is very slow.
NFS sharing is much faster. However, it can still be reasonably slow (especially if your build process creates files which need to be written back to the share).
Vagrant 1.5+ has an rsync option (using rsync to copy files from the host to the VM). It's faster because it doesn't have to write back any changes.
This rsync option has auto-sync (to continuously sync changes).
This rsync option consumes a lot of CPU, and people came up with a gem to overcome it (https://github.com/smerrill/vagrant-gatling-rsync)
So, Vagrant + VirtualBox + Rsync shared folder + auto rsync + vagrant gatling looks like a good option for my case (still researching it).
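The rsync pieces above boil down to something like this in a Vagrantfile (the exclude pattern is just an example):

config.vm.synced_folder ".", "/vagrant", type: "rsync", rsync__exclude: ".git/"

Then vagrant rsync-auto watches the host side and pushes changes into the VM continuously.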
I tried vagrant-gatling. However, it results in non-deterministic behavior. I never know whether new files were copied into the VM or not. It wouldn't be a problem if it took 1 second; however, it may take 20 seconds, which is too much (a user can switch windows and start a build before the new files have been synced).
Now I am thinking about some way to copy over ONLY the files which changed. I am still in the research phase. The idea would be to use FSEvents to listen for file changes and send over only the changed files. It looks like there are some tools around which do that.
BTW, Gatling internally uses FSEvents. The only problem is that it triggers a full rsync (which starts comparing dates/times and sizes for 40k files).

Clarity on Vagrant usage and provisioning tool

OK, so I'm a bit late jumping onto the Vagrant bandwagon, but figured it's about time I did.
Brief background: I've been a freelance developer for quite some time now, developing solutions based on Magento and Drupal, and have finally gathered enough demand to warrant building up a team. Previously, whenever I started development on any new project, I used to clone a preconfigured base VM in VirtualBox and use that. Of course there were still configurations to do on it before I could start actual development. Every project's web files therefore resided inside /var/www/projectname on an Ubuntu VM.
Now I've read up on why I should be using Vagrant, especially considering that I now have a team of 4 developers working with me, but I would appreciate any feedback on the following questions:
Moderator note: I know this isn't exactly asking a programming question, so please advise if this could be turned into a wiki, as I'm sure that feedback into this will help someone just like me.
I am still reading through the Vagrant docs, so please be kind...noob questions ahead!
I now work on a Mac. Does it matter if I use Parallels and another developer uses VirtualBox on Windows, if we need to share or collaborate on projects?
When I issue the command vagrant up for an existing project, will it start the VM as I would in VirtualBox, or will it recreate the VM?
Is the command vagrant halt the same as issuing sudo poweroff in Ubuntu, for example?
I currently use PhpStorm and its SFTP feature for project file synchronization, with the option to exclude certain files on the remote server (VM) from being imported and synced... will I be able to specify the same using Vagrant folder sharing?
Could I easily zip or archive a Vagrant VM, move it to a file server, and then "re-import" it when and if needed? (for example for bug fixes or new feature enhancements)
What do we use to easily provision VMs for common projects? Should we be using Puppet, Chef, PuPHPet or Salt? I've seen that PuPHPet provides a nice GUI to create a Vagrantfile, which I'm sure, once generated, we could customize for future projects. At a very basic level, we need to ensure that certain applications are installed on the server (zip, phpMyAdmin, OpenSSL, etc.), along with certain PHP settings, PHP and PEAR modules, and Apache settings. I already have base VMs set up as I'd like them for both Magento projects and Drupal projects.
EDIT: I should also add that I used to enable a host-only adapter in VirtualBox (on Windows), configure the vhost inside Ubuntu, and then update my host machine's hosts file with something like 192.168.56.3 drupalsite1.dev. So I'm unsure whether port forwarding would be better to use? I'm not very clued up on that, I must admit.
Like I said - noob questions! However, I would really appreciate any feedback on these questions. My deepest thanks!
Most of what you are asking is subjective, so common sense and experience are the best tools.
I recommend all team members use the same provider (Parallels isn't officially supported), and VirtualBox is readily available. The base boxes, by provider, could have slight variances; you never know.
Vagrant will start the VM similarly, but Vagrant also does other things like configuring the network, hostname, shared folders, etc. Not quite the same. The big power lies in the ability to tear down the environment and bring it back in a cleanly provisioned state.
Basically, yes.
Yes, your Vagrant VMs are just like your own mini cloud. You would interact with the servers similarly to the way you'd interact with external boxes.
Yes; the simple answer is that it's called packaging and you can share the resultant .box. However, it's good practice to keep the base box and provisioning scripts under CM so you can rebuild and modify as needed.
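For instance, a minimal sketch of that round trip with the VirtualBox provider (the box name is arbitrary):

# On the original machine: snapshot the current VM into a shareable .box file.
vagrant package --output myproject.box
# On another machine (or after restoring from the file server):
vagrant box add myproject ./myproject.box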
For provisioners, I think it depends on your experience and familiarity with the provisioner's language, and how much you want to invest in learning it. Look through the provisioner support and see what fits your needs and budget. Chef has a very steep learning curve, in my experience, but also has a lot of thought built in. Most provisioners have wide libraries of available installation "scripts".
The host-only adapter setup can be handled identically in Vagrant.
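A minimal sketch of the equivalent Vagrantfile line for the drupalsite1.dev example (the IP matches the one from the question):

config.vm.network "private_network", ip: "192.168.56.3"

The hosts-file entry on the host machine stays exactly as before.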
Learn by doing: I recommend going down the table of contents (navbar) of the Vagrant docs and trying each step where it makes sense. Then make your decisions.
That is my 2 cents. Hope this helps!

Scripting VirtualBox and isolating from existing installation

I am trying to create an application that runs on Windows. I want this application to download a "disk image" from the network (from a pre-assigned server) and create a virtual machine based on it. This VM would run for a specified number of hours and then shut down.
I want to do this by scripting VirtualBox. I found the VBoxManage command, and it seems to be what I am looking for. However, it seems that the VirtualBox tools store their configuration as XML files in the user's home directory. I did learn that I can change the value of the VBOX_USER_HOME environment variable to control where they are stored; however, I am not sure whether this is enough.
My problem is that the user may already have VirtualBox installed on his/her computer. I do not want my application's code (and its packaged VirtualBox binaries and configuration) to mess with the existing installation.
How do I cleanly isolate my application specific VirtualBox binaries and configuration from the potentially pre-existing installed VirtualBox setup? (Even if both instances of VirtualBox binaries are being used at the same time)
I chose VirtualBox because of its open-source license, its applicability to commercial use (if I compile my own binaries from the source), and because it works on Windows too (I heard QEMU support for Windows is still not stable). Will VirtualBox suffice for my use case, or should I look elsewhere?
Thanks for reading so far :)
If the VBoxManage command really does rely on the environment variable VBOX_USER_HOME, then you could write your scripts to change that environment variable to reflect your deployment for the execution of the script and its children, staying away from the user's data.
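A minimal sketch of that isolation (POSIX shell shown for brevity; the directory is a made-up example):

# Point VirtualBox at an app-private configuration directory so it never
# touches the user's existing VirtualBox registry and VM list.
export VBOX_USER_HOME=/opt/myapp/vbox-home
mkdir -p "$VBOX_USER_HOME"
VBoxManage list vms   # now operates only on the app-private configuration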
Check out the VirtualBox SDK http://download.virtualbox.org/virtualbox/SDKRef.pdf
