Use TFS Workspace in Virtual Machine

I have a virtual machine (in Virtual PC) that is used to run/update specific COM objects in our solution. Currently, both the host OS and the VM OS have separate workspaces, and I have to check out the files in either location, then check them in separately as work is completed.
It's also a huge branch (several GB of data) that needs to be pulled down over a slow VPN connection. Given that I need the files on my host and the VM, it means pulling this code down twice.
Is there a way I can configure the VM to make use of the workspace on the host? I'm fairly sure I can map that folder into the VM, but I want check-outs made from inside the VM to go through the host's workspace.
Update 1
I tried to fool the system by setting the _CLUSTER_NETWORK_NAME_ environment variable as per this answer. This certainly allowed Visual Studio to see the workspace as valid for the machine. However, after I rebooted the machine I could no longer connect to it, since the guest and the host now appear to have the same name.
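A per-session variant of the same trick (untested; HOSTPC is a placeholder for the host machine's name) would be to set the variable only for the process that launches Visual Studio, rather than system-wide:

rem Hedged sketch: make only this cmd session report the host's name, then launch VS from it
set _CLUSTER_NETWORK_NAME_=HOSTPC
devenv.exe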

You cannot have the same workspace on two machines, full stop. You can fool Team Explorer by mapping a common file system into both machines, but be careful: you should always run Get from one client and never from the other.
That said, I can suggest a recipe to test, based on DiskMgmt.msc.
Say VM and PM are your two clients, both map $/YourProj/src, and $/YourProj/src/Common is the part you want to download only once.
The PM workspace mapping is $/YourProj/src -> C:\src.
Assuming PM runs at least Windows 7: create a VHD, mount it at C:\src\Common, and run Get Latest.
Unmount the VHD and start your VM with that same VHD attached as a secondary disk. Mount this secondary disk at C:\src\Common inside the VM.
Inside the VM the workspace mapping should be
$/YourProj/src -> C:\src
$/YourProj/src/Common -> (cloaked)
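A rough command-line equivalent of the recipe, assuming diskpart and the tf.exe client are available; the VHD path, size, and workspace name are placeholders:

rem On PM (host): create the empty mount point, then build and mount the VHD on it
mkdir C:\src\Common
diskpart /s create_common_vhd.txt
rem where create_common_vhd.txt contains:
rem   create vdisk file="D:\vhd\common.vhd" maximum=20480 type=expandable
rem   select vdisk file="D:\vhd\common.vhd"
rem   attach vdisk
rem   create partition primary
rem   format fs=ntfs quick
rem   assign mount="C:\src\Common"
rem Populate it once over the VPN (run from C:\src), then detach the VHD
rem and attach it to the VM as a secondary disk
tf get $/YourProj/src/Common /recursive

rem Inside the VM: map the branch but cloak the folder served from the shared VHD
tf workfold /map $/YourProj/src C:\src /workspace:VM_WORKSPACE
tf workfold /cloak $/YourProj/src/Common /workspace:VM_WORKSPACE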


Copy files from drive mounted on master PC to Jenkins slave machine

The environment
Master PC has access to shared drive X
Master PC has Jenkins as a Windows service
Slave PC is a Windows PC on the same network as the master
Slave PC most likely will not have access to drive X (there will be many slave PCs running this in the future)
The scenario
I need to copy some files from drive X to the slave machine, but this is a conditional step based on a parameter of the job, so this should be a pipeline step as we don't want to copy the files if not needed. The files to copy might be large so stash/unstash is not an option.
So basically my question is: is there a simple way to solve this scenario without giving the slave PC(s) access to drive X?
I think you should copy the files to a neutral location, like a binary repo, and copy from there.
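A minimal sketch of that approach, assuming curl is available on both nodes and some HTTP-reachable artifact store exists; the URL, credentials, and paths are placeholders:

rem On the master (which can see drive X): push the files to the neutral repo
curl -u user:apitoken -T X:\drops\payload.zip https://repo.example.com/artifacts/payload.zip
rem On the slave, only when the job parameter asks for it: pull from the repo instead of X
curl -u user:apitoken -o C:\jenkins\workspace\payload.zip https://repo.example.com/artifacts/payload.zip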
So ultimately I found that stash has no hard limit. For now I'm using stash/unstash even on large files (e.g. 1.5 GB) without any errors, until we start using a different method, like the one in Holleoman's answer.

How to create a customizable environment that can be rapidly distributed to a local machine?

I am looking for a way to be able to do the following:
Create an instance of Windows with installed prerequisites and configuration
An isolated environment would be recommended (as in, it will not modify the existing configuration on the local machine, only within that VM-like environment)
Ability to use the internet within that environment
Using it sort of like a "check-point" (start working on it, do something wrong, and be able to start over from the instance that we created)
Ability to share the environment
Possibility of creating multiple different environments
Low disk usage if possible
Fast deployment of environment on local machine
I have looked into Docker, which seems pretty good for what I need, but I want to investigate other options as well because it requires Windows 10 x64 Enterprise. Something that works on Windows 7/Server/8/8.1 would be nice.
I would also love to get arguments on why X option is better than Y option.
Thanks in advance!
If you want a completely separate environment, creating a Virtual Machine will be worth considering.
There are products from VMware and Oracle to create your virtual machine. I have been using Oracle VirtualBox (Oracle's virtual machine software) for some time now and find it pretty useful.
With a virtual machine it addresses all your concerns:
Create an instance of Windows with installed prerequisites and configuration - A virtual machine runs on top of your installed OS without making any modifications to the current installation.
An isolated environment would be recommended - It runs completely isolated, like a separate machine.
Ability to use the internet within that environment - You can use the internet inside a virtual machine.
Using it sort of like a "check-point" - You can take a snapshot and save the state; the next time you start the VM it will resume from exactly that state.
Ability to share the environment - Export a created VM and it can be reused.
Possibility of creating multiple different environments - You can run multiple VMs on your machine; configure the disk usage and RAM accordingly.
Low disk usage if possible - Configurable while creating a virtual machine.
Fast deployment of environment on local machine - Yes, you'll need the .iso image of your operating system.
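For the check-point and sharing points in particular, the VBoxManage command line covers both; a rough sketch, assuming a VM named "DevEnv":

rem Take a named snapshot as a check-point; restore it later to roll back (the VM must be powered off to restore)
VBoxManage snapshot "DevEnv" take "clean-baseline"
VBoxManage snapshot "DevEnv" restore "clean-baseline"
rem Export the whole environment as an appliance so it can be shared and re-imported elsewhere
VBoxManage export "DevEnv" -o DevEnv.ova
VBoxManage import DevEnv.ova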

How to set VirtualBox's machine-folder relative to current Vagrant Project?

This is a follow-up question to
How can I change where Vagrant looks for its virtual hard drive?
Is it possible to set the machine-folder relative to (or inside) the current Vagrant project (maybe there is a provider option for that)?
Scenario: the Vagrant project is stored on an external drive. The created machine files (vbox & vmdk) should also be stored on the external drive (whose mount point / drive letter differs from host to host and might change on the host itself), inside the same project folder. Therefore the general VirtualBox setting is not an option.
With that setting I should instantly have the same state of my virtual machine on any host system.
(this is my first question here - please excuse any unintended noobness :) )
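As an illustration of the machine-folder setting mentioned above (not a Vagrant provider option I can vouch for): the machine folder is a plain VirtualBox property, so one untested workaround sketch is to repoint it at the project directory right before bringing the machine up; the folder name is a placeholder:

rem Run from the project folder on the external drive, so %CD% resolves to whatever the current mount point is
VBoxManage setproperty machinefolder "%CD%\.vbox-machines"
vagrant up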

Running and debugging program from Visual Studio in virtual machine like VirtualPC or VirtualBox

I have a program that I want to test on a clean Windows installation. For now I have an image in VirtualBox and I start the program from a shared folder, but this is not convenient and I can't debug.
For debugging I found that I can use the Remote Debugging Monitor, but I still want to automate the whole process, especially uploading the application to the virtual machine.
I thought that VirtualPC would be better than VirtualBox, because this application was created by Microsoft. Unfortunately I can't find any info on how to connect them.
EDIT:
After research: the only possibility is to treat the virtual machine as a remote computer; there is no easier way. The project needs to be published to the VM using shared folders. After configuring a new build configuration for remote debugging in Visual Studio, everything triggers automatically and works.
I would:
1. Place the program in a pre-defined shared directory, such that it is immediately visible to the virtual machine after redeployment.
2. Automate the remote debugger invocation - all the parameters, such as the users allowed to debug, can be passed on the command line (a rough command sketch follows).
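A minimal sketch of step 2, assuming the Remote Debugging Monitor binaries have already been copied into the VM; the install path and timeout are placeholders:

rem Inside the VM: start msvsmon unattended so Visual Studio on the host can attach
"C:\RemoteDebugger\msvsmon.exe" /noauth /anyuser /nosecuritywarn /timeout 36000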
VirtualBox is quite OK for this task, as it allows you to replace only the disk image with a clean one while leaving the setup, including shared directories, intact. I am sure VirtualPC also allows such a thing, but choosing it just because it's also written by Microsoft does not seem like a valid consideration here.

Creating a virtual machine image as a continuous integration artifact?

I'm currently working on a server-side product that is a bit complex to deploy on a new server, which makes it an ideal candidate for testing out in a VM. We are already using Hudson as our CI system, and I would really like to be able to deploy a virtual machine image with the latest and greatest software as a build artifact.
So, how does one go about doing this exactly? What VM software is recommended for this purpose? How much scripting needs to be done to accomplish this? Are there any issues in particular when using Windows 2003 Server as the OS here?
Sorry to deny anyone an accepted answer here, but based on further research (thanks to your answers!), I've found a better solution and wanted to summarize what I've found.
First, both VirtualBox and VMWare Server are great products, and since both are free, each is worth evaluating. We've decided to go with VMWare Server, since it is a more established product and we can get support for it should we need it. This is especially important since we are also considering distributing our software to clients as a VM instead of a special server installation, assuming that the overhead from the VMWare Player is not too high. Also, there is a VMWare scripting interface called VIX which one can use to install files directly to the VM without needing to install SSH or SFTP, which is a big advantage.
So our solution is basically as follows... first we create a "vanilla" VM image with OS, nothing else, and check it into the repository. Then, we write a script which acts as our installer, putting the artifacts created by Hudson on the VM. This script should have interfaces to copy files directly, over SFTP, and through VIX. This will allow us to continue distributing software directly on the target machine, or through a VM of our choice. This resulting image is then compressed and distributed as an artifact of the CI server.
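For the VIX part, a minimal sketch using vmrun, the command-line front-end that ships with VIX; the host URL, credentials, datastore, and paths are placeholders:

rem Push the Hudson build artifact straight into the powered-on guest, no SSH/SFTP needed
vmrun -T server -h https://buildhost:8333/sdk -u admin -p hostpass ^
      -gu Administrator -gp guestpass ^
      copyFileFromHostToGuest "[standard] product/product.vmx" ^
      C:\hudson\jobs\product\lastSuccessful\archive\installer.msi C:\install\installer.msi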
Regardless of the VM software (I can recommend VirtualBox, too) I think you are looking at the following scenario:
Build is done
CI launches virtual machine (or it is always running)
CI uses scp/sftp to upload the build into the VM over the network
CI uses the ssh (if available on target OS running in VM) or other remote command execution facility to trigger installation in the VM environment
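A minimal sketch of the last two steps, assuming an SSH server is installed in the guest; the host name, paths, and silent-install flags are placeholders:

rem Upload the freshly built installer into the VM, then trigger an unattended install remotely
scp build\output\installer.msi builduser@vm-under-test:/C/install/installer.msi
ssh builduser@vm-under-test "msiexec /i C:\install\installer.msi /quiet /norestart"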
VMWare Server is free and a very stable product. It also gives you the ability to create snapshots of the VM slice and rollback to previous version of your virtual machine when needed. It will run fine on Win 2003.
In terms of provisioning new VM slices for your builds, you can simply copy and paste the folder that contains the VMWare files, change the SID and IP of the new VM, and you have a new machine. It takes 15 minutes depending on the size of your VM slice. No scripting required.
If you use VirtualBox, you'll want to look into running it headless, since it'll be on your server. Normally, VirtualBox runs as a desktop app, but it's possible to start VMs from the commandline and access the virtual machine over RDP.
VBoxManage startvm "Windows 2003 Server" -type vrdp
We are using Jenkins + Vagrant + Chef for this scenario.
So you can do the following process:
Version control your VM environment using vagrant provisioning scripts (Chef or Puppet)
Build your system using Jenkins/Hudson
Run your Vagrant script to fetch the last stable release from CI output
Save the VM state to reuse in future.
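A rough sketch of what the CI job ends up running, assuming the provisioning scripts know how to pull the latest CI output; the deploy script name is a placeholder:

rem Boot (or re-provision) the VM so Chef applies the version-controlled configuration
vagrant up --provision
rem Hypothetical provisioning hook that fetches the last stable release from the CI output
vagrant ssh -c "sudo /opt/deploy/install_latest.sh"
rem Package the provisioned machine so its state can be reused later
vagrant package --output build-%BUILD_NUMBER%.box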
Reference:
vagrantup.com
I'd recommend VirtualBox. It is free and has a well-defined programming interface, although I haven't personally used it in automated build situations.
Choosing VMWare is currently NOT a bad choice. However, just like VMWare provides support for VMWare Server, Sun provides support for VirtualBox.
You can also accomplish this task using VMWare Studio, which is also free.
The basic workflow is this:
1. Create an XML file that describes your virtual machine.
2. Use Studio to create the shell.
3. Use VMWare Server to provision the virtual machine.
