Related
I want Windows 10 x64 Professional hosted on AWS. Is that possible? And if so, how might one go about it?
To expound:
I just want a real Windows 10 environment hosted remotely with a static IP address, so I can use it like a personal computer plus server for some dev stuff.
This is likely what you are looking for:
https://aws.amazon.com/workspaces/
Amazon WorkSpaces is a managed, secure cloud desktop service. You can
use Amazon WorkSpaces to provision either Windows or Linux desktops in
just a few minutes and quickly scale to provide thousands of desktops
to workers across the globe. You can pay either monthly or hourly,
just for the WorkSpaces you launch, which helps you save money when
compared to traditional desktops and on-premises VDI solutions. Amazon
WorkSpaces helps you eliminate the complexity in managing hardware
inventory, OS versions and patches, and Virtual Desktop Infrastructure
(VDI), which helps simplify your desktop delivery strategy. With
Amazon WorkSpaces, your users get a fast, responsive desktop of their
choice that they can access anywhere, anytime, from any supported
device.
And this is how you can give it a static IP:
https://aws.amazon.com/premiumsupport/knowledge-center/associate-elastic-ip-workspace/
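As a rough illustration of what that knowledge-center article describes, here's a hedged boto3 (Python) sketch; the WorkSpace ID and Elastic IP allocation ID are placeholders you'd substitute with your own:

```python
import boto3

WORKSPACE_ID = "ws-xxxxxxxxx"        # hypothetical WorkSpace ID
ALLOCATION_ID = "eipalloc-xxxxxxxx"  # hypothetical Elastic IP allocation ID

workspaces = boto3.client("workspaces")
ec2 = boto3.client("ec2")

# Look up the WorkSpace's private IP, then find its underlying network interface.
ws = workspaces.describe_workspaces(WorkspaceIds=[WORKSPACE_ID])["Workspaces"][0]
eni = ec2.describe_network_interfaces(
    Filters=[{"Name": "addresses.private-ip-address", "Values": [ws["IpAddress"]]}]
)["NetworkInterfaces"][0]

# Associate the Elastic IP with that interface, giving the WorkSpace a static public IP.
ec2.associate_address(
    AllocationId=ALLOCATION_ID,
    NetworkInterfaceId=eni["NetworkInterfaceId"],
)
```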
Edit:
Amazon WorkSpaces now offers bundles that come with a Windows 10
desktop experience, powered by Windows Server 2016. Amazon WorkSpaces
Windows 10 bundles provide you an easy way to move users to a modern
operating system, while also simplifying licensing. Amazon WorkSpaces
continues to offer bundles that come with a Windows 7 desktop
experience, provided by Windows Server 2008 R2. You can also run
Windows 7 and Windows 10 Enterprise operating systems with Amazon
WorkSpaces if your organization meets the licensing requirements set
by Microsoft.
@BrownChiLD
You can create your own AMI on AWS. The steps are below:
1. Create the machine on your system using VMware Workstation or Hyper-V.
2. Export the VM.
3. Upload it to an S3 bucket.
Once your VM is uploaded to S3, follow the steps at the link below:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html#import-vm-image
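To give a flavour of what that guide walks through, here's a minimal boto3 sketch of the import call; the bucket, key, and disk format are assumptions, and the import also requires the "vmimport" service role described in the guide:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical bucket/key -- replace with wherever you uploaded the exported VM.
response = ec2.import_image(
    Description="Windows 10 imported from a VMware Workstation export",
    DiskContainers=[{
        "Description": "Windows 10 system disk",
        "Format": "vmdk",  # or "vhd"/"ova", depending on how you exported
        "UserBucket": {"S3Bucket": "my-vm-import-bucket", "S3Key": "win10.vmdk"},
    }],
)

# The import runs asynchronously; poll describe_import_image_tasks until it finishes.
print(response["ImportTaskId"])
```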
At present, the only way to achieve what you want is by spinning up your own Windows 10 instance, assigning the static internal IP while creating it, or by adding an Elastic IP if it's in an Internet Gateway-enabled subnet.
It's not that convenient: you'll need to set up the environment yourself, including Security Groups, ACLs, etc., to allow a bit of security, and connecting using RDP will be a bit of a pain (besides, doing so over the internet isn't exactly advisable; you might start thinking about Chrome Remote Desktop or even TeamViewer). It will also be very pricey to run. First things first: apparently there's no Windows 10 AMI available, so you'll need to deploy it yourself, and once it's running you'll need to license it. An instance type suitable for this could cost around $80 per month, unreserved.
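For the Security Group part, here's a minimal boto3 sketch that opens RDP to a single address only; the group name and CIDR are made up (and you really shouldn't open 3389 to 0.0.0.0/0):

```python
import boto3

ec2 = boto3.client("ec2")

MY_IP = "203.0.113.10/32"  # hypothetical home/office address

# Create a security group in the default VPC and allow RDP in from one address.
sg = ec2.create_security_group(
    GroupName="win10-rdp", Description="RDP access for the Win10 dev box"
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3389, "ToPort": 3389,
        "IpRanges": [{"CidrIp": MY_IP, "Description": "RDP from my address only"}],
    }],
)
```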
Using AWS WorkSpaces isn't really an option: besides the fact that it is not "Windows 10" but Windows Server 2016 (I needed WSL, which was introduced with Server 2019, so no joy), the only way to have a proper Windows 10 is using BYOL, but... (quoting the FAQ):
You need to commit to running 200 Amazon WorkSpaces in a region per month on hardware that is dedicated to you. If you want to bring your own Windows desktop licenses for graphics use cases, you need to commit to at least 4 monthly or 20 hourly GPU-enabled WorkSpaces.
:-/
Amazon WorkSpaces is a virtual desktop that runs on AWS, but you connect through Amazon client software that acts a lot like VirtualBox, except the OS you're using is not on your local machine. So it's more like a thin-client environment over the internet. I believe the OS in WorkSpaces is managed by AWS as far as patching and updates go, through software called A.C.M.E. (Amazon Client Management Engine).
https://youtu.be/jsqI7KU3S8I
Amazon EC2 also provides Windows instances, which you connect to through an RDP connection. You'll have to manage the patching and updates yourself, though.
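To illustrate, a hedged boto3 sketch of launching a Windows instance; the AMI ID and key pair name are placeholders (Windows AMI IDs vary by region):

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder AMI ID -- look up a current Windows AMI for your region.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",  # used to decrypt the initial Administrator password
)
instance_id = resp["Instances"][0]["InstanceId"]

# PasswordData is empty until Windows has booted and generated the password;
# it comes back base64-encoded and encrypted with the key pair's public key.
print(ec2.get_password_data(InstanceId=instance_id)["PasswordData"])
```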
Here's a link for your reading pleasure
https://aws.amazon.com/windows/resources/licensing/
If I create a container with a Windows image on it, is it possible to use a remote connection to actually see the desktop and, for example, play Minesweeper?
My use case is this:
I have hundreds of users. Each user needs to create their own infrastructure consisting of about 6 machines linked together. After creating it, the user will open some desktop GUI apps on each one using a remote desktop connection.
No, this isn't something you will be able to do.
There are currently two Windows container images, microsoft/windowsservercore and microsoft/nanoserver
nanoserver
This blog post about TP4 (one of the earlier releases) says
The only option available when logging into console of a virtual machine running Nano Server or connecting a crash cart to a physical Nano Server is this very plain emergency console
This section on managing Nano server also states
Nano Server is managed remotely. There is no local logon capability at all, nor does it support Terminal Services.
There is also this article, admittedly not from Microsoft, about Windows Nano Server:
Nano Server strips back the operating system further still, dropping things like the GUI stack, 32-bit Win32 support, local logins, and remote desktop support.
Nano Server is designed for two kinds of workload: cloud apps built on runtimes such as .NET, Java, Node.js, or Python, and cloud infrastructure, such as hosting Hyper-V virtual machines.
servercore
The Docker blog has a pretty interesting entry,
Introducing Docker for Windows Server 2016. This part addresses the question of GUI apps:
The Windows Server Core image comes with a mostly complete userland with the processes and DLLs found on a standard Windows Server Core install. With the exception of GUI apps and apps requiring Windows Remote Desktop, most apps that run on Windows Server can be dockerized to run in an image based on microsoft/windowsservercore with minimal effort.
If you wanted to set up that kind of environment, one option is to use something like Vagrant to orchestrate starting and provisioning regular Windows VMs, as in the sketch below. Though six Windows VMs will not be easy on memory.
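A minimal Python sketch of such orchestration, assuming each user gets a directory containing a Vagrantfile that defines their ~6 linked Windows VMs (the directory names are invented):

```python
import subprocess

# Hypothetical per-user environments, each a Vagrant project directory.
ENVIRONMENTS = ["user-alice", "user-bob"]

def bring_up(env: str) -> None:
    # 'vagrant up' creates and provisions every VM defined in the Vagrantfile.
    subprocess.run(["vagrant", "up"], cwd=env, check=True)

def tear_down(env: str) -> None:
    # 'vagrant destroy -f' removes the environment's VMs without prompting.
    subprocess.run(["vagrant", "destroy", "-f"], cwd=env, check=True)

if __name__ == "__main__":
    for env in ENVIRONMENTS:
        bring_up(env)
```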
I need a virtual server for web development, it'll host Apache+Postgres+Ruby+something else.
What's the most effective software to run such a server? (ie with least virtualization overhead)
Is there a way to run Linux as a service?
I use VirtualBox at the moment, but it's inconvenient in some ways; for instance, it needs an emulator window open, which also captures keyboard input when alt-tabbed into.
(Also, coLinux hangs at boot on my machine, so it's probably not an option.)
Check out the features of VMware Server. It's free; you just have to register.
I've never found VMware to be much of a performance hog unless running 3+ virtual machines.
The latest free server version (VMware Server 2) runs as a service IIRC, so you can set up your dev server to start up and shut down when your PC does, and you can either log on to the VM's console through the web interface, or create a shortcut on your desktop so it's fairly non-obtrusive.
There is a very convenient utility that hides VirtualBox from the foreground completely: vboxctrl. With vboxctrl you can run a Linux server on your Windows machine and make it automatically go to sleep when Windows shuts down or hibernates; then use any SSH client to log in to the server. Or you can use Xming to open graphical windows from the Linux server; I've spent quite a lot of time working in GVim opened through Xming.
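If you'd rather not depend on a third-party tool, stock VBoxManage can do roughly the same; a sketch (the VM name is an assumption):

```python
import subprocess

VM_NAME = "dev-linux-server"  # hypothetical VirtualBox VM name

def start_headless() -> None:
    # Boot the VM with no GUI window at all; reach it over SSH afterwards.
    subprocess.run(
        ["VBoxManage", "startvm", VM_NAME, "--type", "headless"], check=True
    )

def save_state() -> None:
    # The "go to sleep" part: persist RAM to disk and stop, like hibernation.
    subprocess.run(["VBoxManage", "controlvm", VM_NAME, "savestate"], check=True)
```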
If anyone needs more details, leave a comment, I may write an article about this.
After having had a dev PC's hard drive get corrupted, I'm considering the idea of making my development environment fully Virtual PC based.
The core items would be:
- XP Pro 32
- IIS
- VS2003
- VS2008
- SQL Server 2005
- Office 2003
Primary source would reside on a server in SVN, with only a local copy on the VPC.
This would be for Windows based web and desktop development.
Assuming that the host machine has decent performance and provides hardware virtualization, are there any known gotchas with such a setup, i.e., the main pros and cons? Any performance issues or other issues that make this a good or bad idea?
I'd like to go this route so I can create a full backup VPC that can be put on a new PC if one fails and is replaced, or copied to a laptop as needed for offsite work, etc. With the new Virtual PC features of Windows 7, this seems like it may be even better going forward too.
Would like to get some feedback on this before we go down that road...
I wouldn't recommend Virtual PC, because the performance is pretty disappointing compared to VMware.
I've used a virtual development machine inside VMware Workstation and VMware Fusion on the Mac for quite a while, and it works very well. It feels as if you're running on a dedicated machine.
My recommendations are:
- Use a 64-bit OS as your host OS (Vista x64, Windows 7 64-bit, Mac OS X Leopard)
- Have at least 6GB of RAM on your physical machine
- Allocate 3GB of RAM to your VM for 32-bit, or more for a 64-bit guest OS
- Pre-allocate the disk space for your guest OS (no auto-grow)
Another advantage is that you can take your VM from a Windows-based VMware Workstation to a Mac-based VMware Fusion (and the other way around) without any problems.
I have been running multiple virtual development environments in MS Virtual PC and VirtualBox for 2 years now. I am doing mostly ASP.NET applications; some of the solutions are relatively large and use large databases, which I also run inside the VM.
My observations based on this:
It is a good idea for exactly the reasons you mention and it works fine. Go for it!
- 768 MB of RAM for the VM is enough, but more is better.
- Have a multi-core CPU.
- Install the virtual machine additions for the guest OS. (This is basically like installing the proper drivers for your "virtual" hardware, and seems to matter more for performance than having hardware virtualisation support.)
- If possible, have the VM disk image on a separate physical disk from the host OS.
- Use VirtualBox. It's free, and being developed rapidly. It might already be the best.
If you can satisfy the above, performance is no issue. Multiple Visual Studio instances, IIS, SQL, and Office all work just fine.
Running multiple copies of the same guest OS when it is a member of a domain/AD is tricky. If you need to do this, you should read up on the sysprep.exe tool. Basically, you can't just make a copy of the virtual disk; you need to take some special precautions.
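A sketch of the usual generalize-before-clone step, using Microsoft's documented sysprep switches (run inside the guest, not on the host):

```python
import subprocess

# /generalize strips machine-specific state (SID, domain membership),
# /oobe makes the clone re-run first-boot setup, and /shutdown powers
# the VM off so its virtual disk can then be safely copied.
subprocess.run(
    [r"C:\Windows\System32\Sysprep\sysprep.exe",
     "/generalize", "/oobe", "/shutdown"],
    check=True,
)
```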
Virtual PC is very convenient, and it was what I used for starters, but I have to say that VirtualBox seems to have overtaken it now. It was a bit rough in the beginning, but the last few versions have really gotten there.
VirtualBox is fully free, and it has better features than VPC 2007. The main one that made me switch was the support for high resolutions: VirtualBox runs fullscreen on my 1920x1080 display, no problem.
It can also run Virtual PC images, so switching was just a matter of installing VirtualBox and adding my existing Virtual PC disks to it.
An added benefit is that I can run the virtual images just as easily on my new mac as on the old pc.
The commercial options are no longer worth what they cost, IMHO.
One thing you might have to consider is the lack of support for multiple monitors within the VM. I really like using multiple monitors: one for my source, the rest for everything else. As far as I know, this is not possible in Virtual PC. Aside from that, I can't think of anything that should hold you back; it's something I have been considering as well.
VirtualBox from Sun is also a good choice. I am writing this from a Vista laptop with a virtualised Ubuntu dev environment.
One thing that VirtualBox is great for is its seamless mode, in which the guest OS application windows are presented as just windows on the host system, with a single common background (you get 2 status bars: one for Windows and one for Linux).
The Z-orders don't intermingle (i.e., all guest windows appear on the same Z plane in the host window system, with their own Z-order within that plane), which can make it a bit odd, but you get used to it.
It is particularly useful if you need to build across many environments. VirtualBox is getting better and I now have an OpenSolaris environment and a FreeBSD one as well.
It is free as in beer, which can be handy.
I actually run three development environments (and many test environments) in Windows guest virtual machines under an Ubuntu host. It's very good for keeping things separated and for being able to restore test environments to a known point. It's also handy since the backup is a simple directory copy on the host, and you don't have to worry about recovering settings or reinstalling applications, etc.
I prefer VMware over Virtual PC for both performance and usability (keep in mind that's my opinion). You don't need the VMware Workstation product to create a VM; check out EasyVMX here for a way to create VMs easily.
The one thing you'll miss, though, is VMware Tools, which only comes with the Workstation product, not the Player. But VMware has this for download here. I'm unsure of the legality of this even though it's an official download from VMware; you may only be able to use it if you have the paid product.
I actually have a license for Workstation, it's just an earlier version and I prefer the latest Player.
Recently the buzz of virtualization has reached my workplace, where developers are trying out virtual machines on their computers. I've been hearing from several different developers about setting up virtual machines on their desktop computers for the sake of keeping their development environments clean.
There are plenty of Virtual Machine software products in the market:
- Microsoft Virtual PC
- Sun VirtualBox
- VMware Workstation or Player
- Parallels Inc.'s Parallels Desktop
I'm interested to know how you use virtualization effectively in your work. My question is how do you use Virtual Machines for day-to-day development and for what reason?
I just built a real beefy machine at home so that I could run multiple VMs at once. My case is probably extreme though, but here is my logic for doing so.
Testing
When I test, particularly a desktop app, I typically create multiple VMs, one for each platform that my software should run on (Windows 2000/XP/Vista etc). If 32 and 64 bit flavors are available, I also build one of each. I also play with the VM hardware settings (e.g. lots of RAM, little RAM, 1 core, 2 core, etc). I found plenty of little bugs this way, that definitely would have made it into the wild had I not used this approach.
This approach also makes it easy to play with different software scenarios (what happens if the user installing the program doesn't have .NET 3.5 SP1? What happens if he doesn't have component XXX? etc.).
Development
When I develop, I have one VM running my database servers (SQL 2000/2005/2008). This is for two reasons. First, it is more realistic: in a production environment your app is probably not running on the same box as the DB, so why not replicate that when you develop? Also, when I'm not developing (remember, this is also my home machine), do I really need to have all of those database services running? Yes, I could turn them on and off manually, but it's so much easier to switch a VM on.
Clients
If I want to show a client some web work I've done, I can put just a single VM into the DMZ and he can log into the VM and play with the web project, while the rest of my network/computer is safe.
Compatibility
Vista64 is now my main machine. Any older hardware/software I own will not play nicely with that OS. My solution is to have Windows XP 32-bit as a VM for all of those items.
Here's something that hasn't been mentioned yet.
Whenever a project enters maintenance mode (aka abandoned), I create a VM with all the tools, libraries, and source code necessary to build the project. That way, if I have to come back to it a year later, I won't get bitten in the ass by any upgraded tools or libraries on my workstation.
When I started at my current company, most support/dev/PM staff would run Virtual PC with 1-3 VMs on their desktop for testing.
After a few months I put together a proposal, and now we use a VMware ESXi server running a pool of virtual machines (all on 24/7) with different environments for our support staff to test customer problems and reproduce issues on. We have VMs of Windows 2000/XP/Vista with each of Office 2000/2002/2003/2007 installed (so that's 12 VMs), plus some more general test VMs, and some Server 2003/2008 machines running Citrix, Terminal Services, etc. Basically, whenever we hit a new customer configuration that we need to debug, and it's likely other customers also have that configuration, I'll set up a VM for it. (E.g., we're only using three 64-bit VMs at the moment; mostly it's 32-bit.)
On top of that, the same server runs an XP VM that I use for building installers (InstallShield, WiX), debugging (VS 2005), and localization (Lingobit), as well as a second VM that our developers use for automated testing (TestComplete).
The development and installer VMs have been allocated higher priority and are both configured as dual-CPU VMs with 1GB of memory. The remaining VMs have equal priority and 256MB-1GB of RAM.
Everything runs on a dual quad-core Xeon with 8GB of RAM running ESXi and hardware RAID (4x1TB RAID 10).
For little more than a US$2.5k investment we've improved productivity tenfold (imagine the downtime while a support lackey installs an older version of Office on their desktop to replicate a customer problem, or the time that I can't use my desktop because we're building installers). The next step will be to double the RAM to 16GB as we add more memory-hungry Server 2008 and Vista VMs.
We still have the odd VM on our desktops (I've got localized versions of Windows, Ubuntu and Windows 7 running under VMware Workstation for example) but the commonly/heavily used configurations have been offloaded to a dedicated server that we can all remotely connect into. Much, much easier.
Virtualisation (with snapshots or non-persistent disks) is really useful for testing software installation in a known clean configuration (i.e. nothing left over from previous buggy installs of your software).
Having your development box in a single file (as a virtual machine) will make it much easier to back up and restore if an issue occurs.
Other than that, you can also carry your portable development box around different machines, since you aren't restricted to that single particular machine you usually work on.
Not only that, but you can test on different operating systems at once, with a single OS installed on each virtual machine file you have.
Believe me, this will save quite a hassle when doing the jobs I mentioned above.
Another nice use case for VMs is to create a virtual network of machines. For example you can bring up machines running the different tiers of your application stack, each running in its own VM. Think of it as a poor man's datacentre.
These VMs can also appear available on your physical network, so you can use RDP or similar to get a remote terminal session with them.
You can have a beefy machine (lots of memory) running these VMs, while you access them remotely from another machine such as a laptop, or whichever machine you have with the best screen.
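With VirtualBox, for example, wiring up such a network is a couple of commands per machine; a sketch with invented VM names (bridging each NIC onto the host's adapter is what makes the guests appear on the physical network, as described above; run this with the VMs powered off):

```python
import subprocess

# Hypothetical tier VMs making up the "poor man's datacentre".
TIERS = ["web-tier", "app-tier", "db-tier"]

for vm in TIERS:
    # Bridge NIC 1 of each VM onto the host's adapter ("eth0" is an assumption),
    # so every guest shows up on the physical LAN and is reachable over RDP.
    subprocess.run(
        ["VBoxManage", "modifyvm", vm,
         "--nic1", "bridged", "--bridgeadapter1", "eth0"],
        check=True,
    )
```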
I use a VM under Windows to run Linux. Even though there's already a version of emacs for windows, using it in Linux just feels more gratifying for some reason.
Maintaining shelved computers
I have the situation where schools in my region are closed down, but their finance systems have to be maintained for up to 2 years to ensure all outstanding bills are paid. This used to be handled by maintaining the hardware from the mothballed schools, which had some problems:
- This wasted scarce hardware resources and took up a lot of physical space.
- Finance officers had to be physically present at the hardware to work on each system.
Today I host each mothballed school in its own virtual box inside a single physical host. Each individual system is accessed by RDP on the IP address of the host, but with its own port number, and the original security of each school is maintained.
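That one-IP, port-per-school arrangement can be expressed as VirtualBox NAT port-forwarding rules; a sketch with invented VM names and ports (it assumes each VM's NIC 1 is in NAT mode, and the VM is powered off when the rule is added):

```python
import subprocess

# Hypothetical school VMs: forward a distinct host port to each guest's
# RDP port (3389), so every school is reachable on the host's single IP.
SCHOOLS = {"school-a": 33891, "school-b": 33892}

for vm, host_port in SCHOOLS.items():
    # Rule format: name,protocol,host-ip,host-port,guest-ip,guest-port
    subprocess.run(
        ["VBoxManage", "modifyvm", vm,
         "--natpf1", f"rdp-{vm},tcp,,{host_port},,3389"],
        check=True,
    )
```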
Finance officers can now work on the mothballed schools without having to travel to where they are physically located, there is more physical space in the server room, and backing up all the mothballed schools at once is a simple automated process.
With each mothballed school in its own VirtualBox VM, there is no way for cross-contamination of data between systems. Many thousands of dollars' worth of hardware is also freed up for redeployment.
Virtualisation appears to be the perfect solution to this problem.
I used the virtualization approach with VMware Server when the task in front of me was to test a clustered environment of WebSphere Application Server. After setting up VMware Server, I created a new virtual machine and did all the software installations that I would need (WebSphere App Server, Oracle, WebSphere Commerce, etc.), after which I shut down the VM and copied the virtual hard disk image to two different files, one as a clone VM and another as a backup.
I then created a new VM and assigned it one of the copied disk images, so I had two systems up and running, which allowed me to test the clustered-environment scenario. I took a snapshot of the VM through VMware, and if I goofed up any activity I would revert to the snapshot, going back to the previous state and increasing my productivity instead of having to work out what to reverse. The backup disk image can also be used if I need to revert to a very old state, instead of having to start from scratch.
The snapshot functionality, which exists in both VMware and Microsoft's Virtual PC/Server, is reason enough to consider virtualization for scenarios where you think you might make breaking changes that wouldn't be easy to revert.
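For example, with VMware's vmrun tool the take/revert cycle looks roughly like this; the .vmx path and snapshot name are made up, and the "-T ws" product flag may need adjusting for your VMware product:

```python
import subprocess

VMX = "/vms/websphere-node1/node1.vmx"  # hypothetical path to the VM's .vmx file

def take_snapshot(name: str) -> None:
    # Record the VM's current state under a named snapshot.
    subprocess.run(["vmrun", "-T", "ws", "snapshot", VMX, name], check=True)

def revert(name: str) -> None:
    # Discard everything since the snapshot and return to that state.
    subprocess.run(["vmrun", "-T", "ws", "revertToSnapshot", VMX, name], check=True)

if __name__ == "__main__":
    take_snapshot("before-cluster-config")
```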
From what I know, there is nothing like Parallels on the Mac; and it's not just for testing, but for real work.
The integration (with "Coherence", your VM is not running "in a window" of your host system; all programs in the guest system have their own proper window in the host system) is splendid and lets you fill all (ALL!) gaps:
My coworker has it configured so that Outlook (there is nothing like Outlook for Mac OS X) in Windows pops up when he clicks a "mailto:" link on a web page browsed with Firefox on the Mac!
In the other direction, if he gets sent a PDF, he double-clicks the attachment in Outlook (in Windows), which opens the PDF file in the Mac's built-in PDF viewer.
VirtualBox also offers this window-separation capability (at least when Windows is running in a VM on Linux), which is really useful for work.
For testing, etc., of course, there is nothing like a cleanly separated environment.
We have a physical server dedicated to hosting virtual machines in our development environment. The virtual machines are brought up and torn down on a regular basis and are used for testing software on known Standard Operating Environments.
It is also really helpful when we want an application to run on a domain that is different to the development environment.
Also, the organisation I am working for is in the planning stage of creating a large virtual testing ground. This will be a large grid of machines, sitting on its own network, and all of the organisation's internal staff, contractors and third-party vendors will be able to stage their software for testing purposes prior to implementing it in the production environment. The virtual machines will reflect the physical machines in the production environment.
It sounds great, but everyone's a bit skeptical: This is a Government organisation... Bureaucracy and red-tape will probably turn this into a big waste of time and money.
If we are using virtual machines (VPC 2007, Virtual Server 2005, VMware applications, etc.):
1. We can run multiple operating systems (Windows 98/2000/XP/Vista, Windows Server 2003/2008, Windows 7, Linux, Solaris) on a single server.
2. We can reduce hardware costs and data center space.
3. We can reduce power and AC cooling costs.
4. We can reduce admin resources.
5. We can reduce application costs.
6. We can run ADS/DNS/DHCP/Exchange/SQL/SharePoint Server/file servers, etc.