How to tell a linked clone from a full clone in VirtualBox?

I've got a list of virtual machines. I know that most of them are only clones of a few base ones. How to determine if a VM is a full clone or a linked clone? Do linked clones have their own disks or are these only differencing images of their base?

Basically these are the definitions:
Full clone: an independent copy of a virtual machine that shares nothing with the parent virtual machine after the cloning operation. Ongoing operation of a full clone is entirely separate from the parent virtual machine.
Linked clone: a copy of a virtual machine that shares virtual disks with the parent virtual machine in an ongoing manner. This conserves disk space and allows multiple virtual machines to use the same software installation.
A linked clone does have its own virtual disk, but it is only a differencing disk: it stores whatever the parent virtual machine's disk did not yet contain when the snapshot backing the linked clone was taken.
The main difference is that full clones are independent virtual machines, while linked clones are dependent ones, because a linked clone needs the parent's virtual disk as its base disk.
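To check this from the command line: a linked clone's disk is a differencing image and therefore reports a parent, while a full clone's disk is a base image. A sketch (the disk name is a placeholder):
VBoxManage list hdds
# full-clone / independent disks show "Parent UUID: base";
# a linked clone's differencing disk shows the UUID of its base image
VBoxManage showmediuminfo disk "Clone-disk1.vdi"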

A linked clone can be preferable to a full clone when you need to run many copies of the same machine, since each copy only stores its own changes and we still don't run into any issues.

Related

Copy files from drive mounted on master PC to Jenkins slave machine

The environment
Master PC has access to shared drive X
Master PC has Jenkins as a Windows service
Slave PC is a Windows PC in the same network as the master
Slave PC most likely will not have access to drive X (there will be many slave PCs running this in the future)
The scenario
I need to copy some files from drive X to the slave machine, but this is conditional on a job parameter, so it should be a pipeline step, as we don't want to copy the files if they are not needed. The files to copy might be large, so stash/unstash is not an option.
So basically my question is, Is there a simple way to solve the scenario without having to give access to X drive to the slave(s) PC?
I think you should copy the files to a neutral location, such as a binary repository, and copy them from there.
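A minimal sketch of that approach using curl, assuming an artifact store reachable from both master and slave (the URL, credentials, and paths are placeholders):
# publish once from a machine that can read drive X
curl -u user:pass -T /x/drive/files.zip https://repo.example.com/artifacts/files.zip
# in the conditional pipeline step on the slave, fetch the files
curl -u user:pass -o files.zip https://repo.example.com/artifacts/files.zip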
So ultimately I found that stash has no hard limit; for now I'm using stash/unstash even on large files (e.g. 1.5 GB) without errors, until we start using a different method, like the one in Holleoman's answer.

How to create a customizable environment that can be rapidly distributed to a local machine?

I am looking for a way to be able to do the following:
Create an instance of Windows with installed prerequisites and configuration
An isolated environment would be recommended (as in: it will not modify the existing configuration on the local machine, only within that VM-like environment)
Ability to use the internet within that environment
Using it sort of like a "check-point" (Start working on it, doing something wrong and being able to start once again from the instance that we created)
Ability to share the environment
Possibility of creating multiple different environments
Low disk usage if possible
Fast deployment of environment on local machine
I have looked into Docker, which seems pretty good for what I need, but I want to investigate other options as well because it requires Windows 10 x64 Enterprise.
Something that works on Windows 7/Server/8/8.1 would be nice.
I would also love to get arguments on why X option is better than Y option.
Thanks in advance!
If you want a completely separate environment, creating a virtual machine is worth considering.
There are products from VMware and Oracle for creating virtual machines. I have been using Oracle VirtualBox (Oracle's virtual machine software) for some time now and find it pretty useful.
A virtual machine addresses all of your concerns:
Create an instance of Windows with installed prerequisites and configuration - a virtual machine runs on top of your installed OS without making any modifications to the current installation.
An isolated environment - it runs completely isolated, like a separate machine.
Ability to use the internet within that environment - you can use the internet inside a virtual machine.
Using it sort of like a "check-point" - you can take a snapshot and save the state; the next time you start the VM, it resumes from exactly that state (see the sketch after this list).
Ability to share the environment - export a created VM and it can be reused.
Possibility of creating multiple different environments - you can run multiple VMs on your machine; configure the disk usage and RAM accordingly.
Low disk usage if possible - configurable while creating a virtual machine.
Fast deployment of environment on local machine - yes, you'll need the .iso image of your operating system.
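The snapshot, restore, and export points above map onto VBoxManage commands like these (a sketch; the VM name "DevEnv" is an assumption):
# save a known-good state as a check-point
VBoxManage snapshot "DevEnv" take "clean-baseline"
# roll back to it later (with the VM powered off)
VBoxManage snapshot "DevEnv" restore "clean-baseline"
# package the whole VM as an appliance that others can import
VBoxManage export "DevEnv" -o DevEnv.ova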

Docker and file sharing on OS X

OK. I am playing around with different tools to prepare a dev environment. Docker is a nice option. I created the whole dev environment in Docker and can build a project in it.
The source code for this project lives outside the Docker container (on the host). This way you can use your IDE to edit it and use Docker just to build it.
However, there is one problem:
a) Docker on OS X uses a VM (a VirtualBox VM)
b) File sharing is reasonably slow (way slower than file I/O on the host)
c) The project has something like a gazillion files (which exacerbates problems (a) and (b))
If I move the source code into the Docker container, I will have the same problem in the IDE (it will have to access the shared files, and it will be slow).
I have heard about some workarounds to make this faster. However, I can't seem to find any information on the subject.
Update 1
I used Docker's file sharing feature (meaning I run):
docker run -P -it -v <VMDIR>:<DOCKERDIR> <imageName> /bin/bash
However, sharing between the VM and Docker isn't the problem; it's fast.
The bottleneck is the sharing between the host and the VM.
The workaround I use is not to use boot2docker, but instead to have a Vagrant VM provisioned with Docker. There is no such big penalty for mounting folders host -> vagrant -> docker.
On the downside, I have to pre-map folders to Vagrant (basically my whole work directory) and pre-expose a range of ports from the Vagrant box to the host to have access to the Docker services directly from there.
On the plus side, when I want to clean up unused Docker garbage (images, volumes, etc.), I simply destroy the Vagrant VM and re-create it again :)
Elaboration
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "trusty-docker"
  config.vm.box_url = "https://oss-binaries.phusionpassenger.com/vagrant/boxes/latest/ubuntu-14.04-amd64-vbox.box"
  config.vm.provision "docker"

  # by default we'll claim ports 9080-9090 on the host system
  (9080..9090).each do |i|
    config.vm.network :forwarded_port, guest: i, host: i
  end

  # NB: this folder mapping will not have the boot2docker issue of slow sync
  config.vm.synced_folder "~/work", "/home/vagrant/work"
end
Having that:
host$ vagrant up && vagrant ssh
vagrant$ docker run -it --rm -v $(pwd)/work:/work ubuntu:12.04 find /work
This is unfortunately a typical problem that Windows and OS X users are currently struggling with, and it cannot be solved trivially, especially in the case of Windows users. The main culprit is VirtualBox's vboxsf, which is used for file sharing and which, despite being incredibly useful, results in poor filesystem I/O.
There are numerous situations in which developing the project sources inside the guest VM slows to a crawl, the main two being the scores of third-party sources introduced by package managers, and Git repositories with a sizable history.
The obvious approach is to move as many of the project-related files as possible out of vboxsf and elsewhere into the guest. For instance, symlinking the package-manager directory into the project's vboxsf tree with something like:
mkdir /var/cache/node_modules && ln -s /var/cache/node_modules /myproject/node_modules
This alone improved the startup time from ~28 seconds down to ~4 seconds for a Node.js application with a few dozen dependencies running on my SSD.
Unfortunately, this is not applicable to managing Git repositories, short of splatting/truncating your history and committing to data loss, unless the Git repository itself is provisioned within the guest. That forces you to have two repositories: one to clone the environment for inflating the guest, and another containing the actual sources, and consolidating the two worlds becomes an absolute pain.
The best way to approach the situation is to either:
drop vboxsf in favor of a shared transport mechanism that results in better I/O in the guest, such as the Network File System. Unfortunately, for Windows users, the only way to get NFS server support is to run an enterprise edition of Windows (which I believe will still be true for Windows 10);
revert to mounting raw disk partitions into the guest, noting the related risks of giving your hypervisor raw disk access.
If your developer audience is wholly composed of Linux and OS X users, option 1 might be viable: create a Vagrant machine, configure NFS shares between your host and guest, and profit. If you do have Windows users, then, short of buying them an enterprise license, it would be best to simply ask them to repartition their disks and work inside a guest VM.
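For the curious, the NFS route boils down to something like the following, assuming an OS X host at 192.168.33.1 and a guest on a Vagrant private network (the paths and addresses are assumptions; Vagrant's NFS synced-folder type automates these steps):
# on the OS X host: export the work directory
echo '"/Users/me/work" -alldirs -mapall=501:20 -network 192.168.33.0 -mask 255.255.255.0' | sudo tee -a /etc/exports
sudo nfsd restart
# inside the guest: mount the export
sudo mount -t nfs 192.168.33.1:/Users/me/work /home/vagrant/work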
I personally use a Windows host and have a 64 GB partition on my SSD that I mount directly into my Arch Linux guest and operate from there. I also switched to GPT and UEFI and have an option to boot directly into Arch Linux in case I want to circumvent the overhead of the virtualized hardware, giving me the best of both worlds with little compromise.
Two steps can improve your performance quite well:
Switch to NFS. You can use this script to do this.
Switch your VBox NIC driver to PCnet-FAST III. You can do this on your default machine by running:
VBoxManage modifyvm default --nictype1 Am79C973
VBoxManage modifyvm default --nictype2 Am79C973
I run a simple watch script that kicks off an rsync inside my container(s) from the shared source-code volume to a container-only volume whenever anything changes. My entry point only reads from the container-only volume, which avoids the performance issues, and rsync works really well, especially if you configure it to skip things like .git folders and libraries that don't change frequently.
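A minimal sketch of such a watcher, assuming inotify-tools is installed in the container and the hypothetical paths /shared/src (the slow shared volume) and /app/src (the container-only volume):
#!/bin/sh
# block until something under the shared volume changes, then re-sync
while inotifywait -r -e modify,create,delete,move /shared/src; do
  rsync -a --delete --exclude='.git' /shared/src/ /app/src/
done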
A couple of pieces of info which I found:
I started to use Vagrant to work with VirtualBox and install Docker in it. It gives more flexibility.
Default sharing in Vagrant's VirtualBox provisioning is very slow.
NFS sharing is much faster. However, it can still be reasonably slow (especially if your build process creates files which need to be written back to the share).
Vagrant 1.5+ has an rsync option (using rsync to copy files from host to VM). It's faster because it doesn't have to write back any changes.
This rsync option has an auto-sync mode (to continuously sync changes).
This rsync option consumes a lot of CPU, and people came up with a gem to overcome that (https://github.com/smerrill/vagrant-gatling-rsync).
So, Vagrant + VirtualBox + rsync shared folder + auto-rsync + vagrant-gatling looks like a good option for my case (still researching it); a usage sketch follows.
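Driving that combination from the host looks like this (assuming Vagrant 1.5+ with the rsync synced-folder type configured):
vagrant up          # performs the initial one-shot rsync on boot
vagrant rsync-auto  # keeps watching the host folder and re-syncs on change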
I tried vagrant-gatling. However, it results in non-deterministic behavior: I never know whether the new files were copied into the VM or not. It wouldn't be a problem if it took 1 second, but it may take 20 seconds, which is too much (a user can switch windows and start a build before the new files are synced).
Now I am thinking about some way to copy over ONLY the files which changed. I am still in the research phase. The idea would be to use FSEvents to listen for file changes and send over only the changed files. It looks like there are some tools around which do that.
BTW, Gatling internally uses FSEvents. The only problem is that it triggers a full rsync (which goes and starts comparing dates/times and sizes for 40k files).

Use TFS Workspace in Virtual Machine

I have a virtual machine (in Virtual PC) that is used to run/update specific COM objects in our solution. Currently, both the host OS and the VM OS have separate workspaces, and I have to check out the files in either location, then check them in separately as work is completed.
It's also a huge branch (several GB of data) that needs to be pulled down over a slow VPN connection. Given that I need the files on my host and the VM, it means pulling this code down twice.
Is there a way I can configure the VM to make use of the workspace on the host? I'm fairly sure I can map that folder into the VM, but I want it so that when I check out files in the VM, they are checked out from the host's workspace.
Update 1
I tried to fool the system by setting the _CLUSTER_NETWORK_NAME_ environment variable as per this answer. This certainly allowed Visual Studio to see the workspace as valid for the machine. However, when I rebooted the machine, I couldn't connect to it, since the guest and the host now appeared to have the same name.
You cannot have the same workspace on two machines, full stop. This means that you can fool Team Explorer by mapping a common file system for both machines, but be careful: you should always get from one client and not the other.
Now I can suggest you test this recipe, based on DiskMgmt.msc.
Say VM and PM are your two clients, both mapping $/YourProj/src, and you have $/YourProj/src/Common that you want to download only once.
PM's workspace mapping is $/YourProj/src -> C:\src.
PM is at least Win7; create a VHD and mount it at C:\src\Common; now you can get latest.
Unmount the VHD, then start your VM with the same VHD as a secondary disk. Mount this secondary disk at C:\src\Common inside the VM.
Inside the VM the workspace mapping should be:
$/YourProj/src -> C:\src
$/YourProj/src/Common -> (cloaked)
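The VHD steps above can also be scripted with diskpart rather than clicked through DiskMgmt.msc. A sketch (the size and paths are assumptions, and the C:\src\Common mount directory must already exist):
rem save as common-vhd.txt and run: diskpart /s common-vhd.txt
create vdisk file="C:\vhds\common.vhd" maximum=20480 type=expandable
select vdisk file="C:\vhds\common.vhd"
attach vdisk
create partition primary
format fs=ntfs quick label="Common"
assign mount="C:\src\Common"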

How to set VirtualBox's machine-folder relative to current Vagrant Project?

This is a follow-up question to
How can I change where Vagrant looks for its virtual hard drive?
Is it possible to set the machine folder relative to / inside the current Vagrant project (maybe there is a provider option for that)?
Scenario: the Vagrant project is stored on an external drive. The created machine files (.vbox & .vmdk) should also be stored on the external drive (whose mount point / drive letter differs from host to host and might change on the host itself), inside the same project folder. Therefore the global VirtualBox setting is not an option.
With that setting I would instantly have the same state of my virtual machine on any host system.
(this is my first question here - please excuse any unintended noobness :) )
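One workaround, since the machine folder is a global VirtualBox property rather than a per-provider option: repoint it just before bringing the machine up, then restore it. A sketch, assuming the project lives at /Volumes/EXT/myproject:
# store this project's VM files inside the project folder
VBoxManage setproperty machinefolder /Volumes/EXT/myproject/.vbox
vagrant up
# restore the default afterwards so other projects are unaffected
VBoxManage setproperty machinefolder default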
