I want to create a virtual monitor. The way this would work is that the virtual monitor would appear in a window on my desktop; as far as Windows knows, it is just another monitor.
It occurs to me that, as a practical matter, it would have to be done as a video card driver (i.e. rather than the video going out a wire to an LCD panel, it would go into another window on the desktop).
Does what I'm describing sound technically possible (from a DDK point of view)?
Note: I can't use a virtual PC, because no virtual PC has resolutions high enough for my needs. Also because it's not what I asked for.
Note: My reasons are unimportant, but I can make some up:
I want to test my application under high-DPI settings (288 dpi)
I want to create a monitor that my iPad can VNC to
the family TV runs on the main monitor
the hijacker is monitoring the bus, and he'll blow it up if he suspects we're getting the passengers off
I'm trying to expand the limits of human knowledge and understanding, for the good of all mankind
I'd say it's definitely possible, since that's what virtualization tools do for their guest utilities, but I wouldn't be able to tell you how in detail. I'd suggest looking at the VirtualBox guest driver code as a starting point:
http://www.virtualbox.org/browser/trunk/src/VBox/Additions/WINNT/Graphics
(This is released under GPL as far as I'm aware.)
It's definitely possible, see for example the UltraVNC mirror driver. But I don't know of any virtual video driver that makes source code available.
I have been searching for something similar and I found a nice solution: spacedesk. You can download it here: http://spacedesk.ph/
In Windows, it installs an extra monitor, which you can open in a browser or a viewer. Enjoy!
Don't know about Windows, but for X (Linux) there is Xvfb (X Virtual Frame Buffer), which is quite a useful thing.
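For example, this starts a virtual 1280x1024 display and points a client at it (the display number and the client are arbitrary choices):

    Xvfb :1 -screen 0 1280x1024x24 &
    DISPLAY=:1 xclock &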
Related
I need to use a monitor as a "private" device for my special application; I want to use it as a flashlight of a sort and draw special patterns on it in full screen. I don't want this monitor to be recognized by the OS (Windows 7) as a usual monitor, i.e. the user should not be able to move the mouse to that monitor, change its resolution, run a screensaver on it or whatever. But I want to be able to interact with it from my application. The monitor is plugged into a video card (most probably nVidia) with an HDMI cable.
What is the simplest way to do this? All solutions are appreciated, including purchasing additional adapters or simple video cards, or any other special devices. The only solution I could imagine for now is to plug the monitor into another computer, run a daemon on that computer, connect it to my computer via Ethernet or whatever, and communicate with that daemon from my computer. That is pretty ugly and requires an additional computer, but I need to solve this problem.
To do this, detach the monitor from the desktop; a detached monitor is not used by Windows for the normal UI.
Sample code for attaching and detaching monitors is in this KB article. Once you've done that, you can use the monitor as an independent display.
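For reference, the detach itself boils down to a ChangeDisplaySettingsEx call with a zero-size DEVMODE. A minimal, untested sketch along the lines of the KB sample (DetachDisplay is my name for it, not the KB's):

    #include <windows.h>

    /* Hypothetical helper: detach display number devNum from the desktop.
       A zero width/height DEVMODE with DM_POSITION set marks the monitor
       as detached; the final call applies the accumulated change. */
    BOOL DetachDisplay(DWORD devNum)
    {
        DISPLAY_DEVICE dd = { sizeof(dd) };   /* cb must be initialized */
        if (!EnumDisplayDevices(NULL, devNum, &dd, 0))
            return FALSE;

        DEVMODE dm = { 0 };
        dm.dmSize   = sizeof(dm);
        dm.dmFields = DM_POSITION | DM_PELSWIDTH | DM_PELSHEIGHT;

        if (ChangeDisplaySettingsEx(dd.DeviceName, &dm, NULL,
                CDS_UPDATEREGISTRY | CDS_NORESET, NULL) != DISP_CHANGE_SUCCESSFUL)
            return FALSE;

        ChangeDisplaySettingsEx(NULL, NULL, NULL, 0, NULL);  /* apply */
        return TRUE;
    }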
Building upon your own idea of using an external PC, and Mark's comment on using a VM as this "external" device:
You could buy an external USB-to-VGA video adapter like one of these, approx. USD40:
http://www.newegg.com/USB-Display-Adapters/SubCategory/ID-3046
Almost every VM software supports some kind of USB passthrough. VirtualBox is a great example.
Only the VM sees the USB device, the host ignores it completely.
So the steps would be:
Buy said USB-to-VGA adapter.
Configure a slim virtual machine and cook up a little utility to receive, over the network, the images to show on the screen (see the rough sketch after this list).
Configure VirtualBox to connect the USB-to-VGA adapter directly to the virtual machine.
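A rough, untested sketch of that utility, just to show the idea: it accepts one TCP connection and blits fixed-size raw 32-bit frames straight onto the guest's desktop. The resolution, port and pixel format are all made-up placeholders; error handling is omitted.

    #include <winsock2.h>
    #include <windows.h>
    #pragma comment(lib, "ws2_32.lib")

    #define W 1024
    #define H 768

    int main(void)
    {
        WSADATA w;
        WSAStartup(MAKEWORD(2, 2), &w);

        SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = { 0 };
        a.sin_family = AF_INET;
        a.sin_port   = htons(9999);           /* made-up port */
        bind(s, (struct sockaddr *)&a, sizeof a);
        listen(s, 1);
        SOCKET c = accept(s, NULL, NULL);

        static BYTE frame[W * H * 4];         /* one 32-bit BGRA frame */
        BITMAPINFO bi = { 0 };
        bi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
        bi.bmiHeader.biWidth       = W;
        bi.bmiHeader.biHeight      = -H;      /* negative = top-down rows */
        bi.bmiHeader.biPlanes      = 1;
        bi.bmiHeader.biBitCount    = 32;
        bi.bmiHeader.biCompression = BI_RGB;

        HDC screen = GetDC(NULL);             /* the guest's whole desktop */
        for (;;) {
            int got = 0;                      /* read exactly one frame */
            while (got < (int)sizeof frame) {
                int n = recv(c, (char *)frame + got, (int)sizeof frame - got, 0);
                if (n <= 0) return 0;         /* sender went away */
                got += n;
            }
            SetDIBitsToDevice(screen, 0, 0, W, H, 0, 0, 0, H,
                              frame, &bi, DIB_RGB_COLORS);
        }
    }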
Here is another simple solution to monitor your application.
Your app should provide a monitoring API, served over HTTP on any port you want (for example http://{userip}:{port}/{appname}/monitor).
Your app monitors itself, keeping monitoring data in memory, in a local file or a database, hidden from the user. The monitor API serves this data to any device you want that has a browser (tablet, phone, netbook, android mini-PC, low cost linux device, any PC or any OS... from the internet, your LAN or direct connection to the PC hosting the app).
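To illustrate (this is not the poster's code): the monitor endpoint can be a page of plain sockets. A minimal POSIX sketch, with a hard-coded port and canned JSON standing in for the real collected data:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0), one = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

        struct sockaddr_in a = { 0 };
        a.sin_family      = AF_INET;
        a.sin_addr.s_addr = htonl(INADDR_ANY);
        a.sin_port        = htons(8080);          /* made-up port */
        bind(srv, (struct sockaddr *)&a, sizeof a);
        listen(srv, 4);

        for (;;) {
            int c = accept(srv, NULL, NULL);
            char req[1024];
            recv(c, req, sizeof req - 1, 0);      /* request is ignored here */

            /* In a real app this would be the monitoring data kept in memory. */
            const char *body = "{\"status\":\"ok\",\"uptime\":1234}";
            char resp[256];
            snprintf(resp, sizeof resp,
                     "HTTP/1.1 200 OK\r\n"
                     "Content-Type: application/json\r\n"
                     "Content-Length: %zu\r\n"
                     "Connection: close\r\n\r\n%s",
                     strlen(body), body);
            send(c, resp, strlen(resp), 0);
            close(c);
        }
    }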
Pros:
Data to monitor is collected (and served) within your app: only one executable
Display can be done remotely: from anywhere!
Access security is easily done using standard HTTP authentication mechanisms
You can monitor several applications (i.e. several monitoring URLs)
You are free to use any browser to monitor (even a local browser window on the same PC for testing purposes)
Monitor from any hardware and OS you want
Simple and flexible!
Cons:
There are few, but tell me...
Choosing this solution depends on what kind of data you need to monitor (text, images, video...) and on the refresh rate you expect, given your network configuration.
Hope it helps :)
I've got an old 16-bit application that was developed for Windows 3.1. It performs some calculations and is part of a more complex system. The system sets up the inputs for the program and collects the output results.
Unfortunately, the 16-bit program is here to stay for the time being, so we have to work around the frustrations it causes on modern operating systems.
The system runs on Windows XP, and on physical Windows XP machines it runs all right. The machine I'm having a problem with is a Windows XP instance running on VirtualBox (version 4.1.12) on a Debian box. The physical computer is an HP ProLiant server with a quad-core 3.4 GHz Xeon. I'm using Remote Desktop to access the computer from my Windows 7 box.
The error I'm getting is: "PROGRAM caused a General Protection Fault in WIN87EM.DLL at address 0001:02C9". The annoying thing is that at times it works and other times it doesn't, making troubleshooting all the more frustrating.
From trawling the internet, I've come across a few sites that mention the same problem. None of them seem to offer real solutions, except to say that WIN87EM.DLL supplies floating point routines, and has some issues with certain printers.
I've uninstalled all printers on the virtual machine, I've also tried installing a PDF writer and setting it as the default printer - so that there is a printer on the machine. I've disabled resource sharing with my Remote Desktop connection. I've updated the Virtual Machine Guest drivers on the machine. I've also tried setting the compatibility to Windows 95 in the properties of the executable.
Any pointers for troubleshooting this problem, or methods I could try to get it working?
This question is old, but I had this exact win87em.dll crash with some 16-bit factory automation software running natively on Windows 7. By following the HIDE87.com method and editing autoexec.nt I was able to make the software stop crashing so that I could make edits.
This machine was running Intel 8 Series/C220 Series chips. I attribute the crash to this configuration, because I have used this same 16-bit software on tons of other Windows 7 machines for years now.
Edit: here are the steps I used to fix the problem:
Download winfloat.exe from http://www.conradshome.com/win31/archive/
Open winfloat.exe with 7-Zip. Find HIDE87.com and extract it to the desktop.
Copy HIDE87.com to C:\Windows\System32\
Open c:\windows\system32\autoexec.nt with Notepad.
At the top of the file, after the first group of comments, add the following:
lh %SystemRoot%\system32\HIDE87.com
Add a comment above that line:
REM Fix for Gen. Protection Fault in win87em.dll
Save the changes to autoexec.nt and reboot the PC.
This was the same error I had with Microsoft XP Mode.
Obviously, WIN87EM.DLL has problems with virtualized processors.
My solution: I "unloaded" the XP version of WIN87EM.DLL in the registry (search for and delete every item with this name), and copied a much older version into the application folder. The old version can be found here: http://support.microsoft.com/kb/86869/de
Good luck!
Video driver fix for win87em.dll
This is the step-by-step resolution to the "win87em.dll" issue we had.
Left-click the START button in the bottom left corner of the screen.
Right-click My Computer and left-click Properties.
Left-click the tab at the top that says Hardware.
Left-click the button that says Device Manager.
Left-click the + sign next to Display Adapters near the top of the list.
Right-click an item shown in the expanded list under Display Adapters and left-click Disable.
Left-click the Yes button that shows when Windows asks if you are sure you want to disable it.
Left-click the No button when Windows asks if you want to reboot.
Repeat the disable process for each item listed under Display Adapters (usually only one or two).
Reboot the PC and the win87em.dll General Protection Fault errors should go away.
This is only applicable for users on Windows XP. Most likely the display adapters listed will be shown as an Intel G41 internal display adapter, but it may be another Intel device. If this does not fix the issue then it is likely a bad printer driver causing the problem.
Disabling the video adapter will not hurt Windows. It will make the computer unable to play videos or 3D games, but Windows will still run and look fine. (You will probably need to change the screen resolution after rebooting.)
VirtualBox 4.3.16 should also have a fix; see https://www.virtualbox.org/ticket/12646 If you want the fix immediately, you'll have to build VirtualBox from the OSE sources.
Update: VirtualBox 4.3.16 containing this fix is now officially released.
I know this is an old thread, but I came across it while searching, as I was having the same issue with Windows XP running in VirtualBox. Eventually I found the following:
https://communities.vmware.com/people/jmattson/blog/2012/03
This is for VMware and seems to have fixed the issue. I couldn't find anything similar for VirtualBox, but as VMware Player is free, it is a good workaround for anyone having this problem.
In the case of virtual machines - VirtualBox (tested), VMware (maybe) -
you just have to switch off all paravirtualization options in the processor section of the VirtualBox settings.
Works like magic!
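If you prefer the command line, recent VirtualBox versions expose the same switch through VBoxManage (the VM name here is a placeholder):

    VBoxManage modifyvm "WinXP-test" --paravirtprovider none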
I'm building an IOKit CFPlugin driver for OS X. I'll be working with incoming network data that will be translated to MIDI data. No hardware is involved other than the built-in AirPort. I have experience with drivers on Windows machines and with firmware, but this is my first dip into doing it on the Mac. So far things are going pretty well, but the Apple documentation says: "For safety reasons, you should not load your driver on your development machine."
I only have one Mac. I really don't want two Macs - sorry, Apple. Should I take this warning seriously? Are there things I need to know?
Thanks, Tom Jeffries
You could also consider running OS X inside a VM as your testbed. It would surely be much more convenient than having a separate boot volume.
The warning is rather poorly worded; what you should consider doing is using a separate boot volume (partition) for trying out your driver, since it's possible to arbitrarily hose your system with your driver.
If you're doing kernel development on any OS that isn't isolated from your main system (via a VM, alternate boot disk, etc.), you're crazy!
What may be a bigger issue is that you can't do any kernel debugging, because the only option for that is to use GDB on a remote OS X system. For this, you may want to consider running OS X in virtualization.
You DEFINITELY want to have some way to recover from a fubar kext installation: a bootable external drive or something you can quickly restore from. This is the main reason for Apple's warning against running in-development kernel extensions on your production machine.
Nicholas is right that in order to debug using gdb (the only way in kernel space) you do need two machines. I've never tried using a VM as Coxy suggests, but I guess it's feasible (assuming that you run your kext on the virtual machine and use the real host machine to run gdb).
My preferred method for tracing and debugging in the kernel is kprintf() routed to FireWire (a.k.a. FireWire kprintf; see man fwkpfv). For this you do need two machines with FireWire ports.
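My (possibly rusty) recollection of the setup, for reference; the debug boot-arg value and the FireWire kprintf pieces come from Apple's FireWire SDK, so double-check man fwkpfv before relying on this:

    # on the target machine: route kprintf output out (example value)
    sudo nvram boot-args="debug=0x8"
    # on the second machine, connected over FireWire:
    fwkpfv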
Finally, being an old computer musician myself, I wonder why you want to program a MIDI synthesizer (or transformer) at the network stack level. My guess is that you would have a much more gratifying experience working in userland (where you can use floating-point math...).
If you need some hints or tips, feel free to get in touch...
|K<
From the ADC Kernel Programming Guide:
"Kernel programming is a black art that should be avoided if at all possible. Fortunately, kernel programming is usually unnecessary. You can write most software entirely in user space. Even most device drivers (FireWire and USB, for example) can be written as applications, rather than as kernel code. A few low-level drivers must be resident in the kernel's address space, however, and this document might be marginally useful if you are writing drivers that fall into this category."
I'm currently setting up VMware Server 2.0 for kernel debugging with gdb (see this setup guide) and someone asked me why not use KVM?
So I ask: KVM vs. VMware for kernel debugging / USB driver development:
what are the pros and cons of each?
Driver development? Are you working on a driver for a particular piece of hardware? If so, then you probably won't be able to use virtualization, because the virtualized instance won't have access to the new hardware.
For this you will need two machines, one running a remote debugger on the other.
Edit: Apparently you're developing a driver for a USB device? This is one area in particular where a VM actually can help. These days most VMs have the ability to delegate specific USB devices to a guest OS.
That said, this situation doesn't really offer any benefits over the remote debugger option, because you still need a way to inspect the state of the running or crashed OS, and VMs offer very little assistance in this regard. You might be able to replay saved states from just before a crash.
You might be able to get a bit of traction using UML, which would allow you to debug locally as you would a regular user process; that's a little less trouble.
Instead of answering the direct question, I'll add another option... Depending on whether the kernel in question is a Linux kernel, and what part(s) of it you are working on, you might find that User-Mode Linux (included in the 2.6.x source, and available as patch sets for 2.4 and 2.2) trumps both of those options.
As it runs the kernel as a userland process under the host kernel, it is easier to attach common debugging tools to. I believe it is very commonly used in the early stages of updates/additions to filesystem-related code. If you are developing/debugging modules that interact directly with hardware, it may be of much less use to you, though.
Reference links: home, other
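For the record, the UML workflow is about as simple as kernel work gets; roughly this (untested from memory; file names are examples):

    # build the kernel as an ordinary userspace binary
    make defconfig ARCH=um
    make ARCH=um
    # then run or debug it like any other process
    gdb --args ./linux ubda=root_fs mem=256M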
I recently started building GNU Mach/HURD and found the combination of QEMU/KVM to work really quite well, for the following reasons:
QEMU presents quite a clean environment
networking has a lot of options
I can easily mount the filesystem using a raw device file / loopback
Bottom line is, for kernel work I just want the minimum of functionality needed to boot and see the result. VMware is aimed much more at general-purpose virtualization than at down-and-dirty kernel work.
There is, however, no comparison to booting on a real machine with real hardware. The VM environment can seem like a safety blanket sometimes... because even my toaster would know what a Realtek RTL8139C was.
If it is a "real hardware" device, of course, VMware will not emulate it, so you won't be able to debug the driver under it (nor under any other virtualization software, unless you extend one to do so).
Device driver debugging can be done to some extent on a real hardware machine with a normal kernel, although there are obviously things you can't do, like set breakpoints.
It is still possible to attach a debugger to the kernel and inspect stuff. Moreover, traditional printf() debugging is quite possible (printk, anyone?), and there are various features in the kernel which make debugging easier. It's possible to build the kernel with various debug options to try to detect pointer problems, memory leaks etc.
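To make the printk point concrete, a minimal module is all it takes (names here are arbitrary; build it with the usual kbuild makefile and watch dmesg):

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>

    MODULE_LICENSE("GPL");

    static int __init demo_init(void)
    {
        printk(KERN_DEBUG "demo: loaded\n");   /* shows up in dmesg */
        return 0;
    }

    static void __exit demo_exit(void)
    {
        printk(KERN_DEBUG "demo: unloaded\n");
    }

    module_init(demo_init);
    module_exit(demo_exit);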
By default, the kernel even gives a nice-ish stack trace on the log when it encounters an OOPS or BUG condition (obviously this does not necessarily get written anywhere if the system hangs or crashes). Of course a pointer-out-of-range condition happening inside an interrupt is a recipe for disaster, but you could still get a stack trace on the screen immediately before the panic :)
What do you recommend for quickly creating images for testing a software product (that needs hardware access - full USB port access)? Does virtualization cover this? I need to be able to quickly re-image the system to test from scratch again, and need good options for Windows and Mac OS.
Virtualization may work for you as long as it is only USB access.
VirtualBox is available with USB support either for "private use or evaluation" or commercially, and works on Windows, Mac and Linux. USB support on Linux and Mac is somewhat sporadic, though, and does not work with all devices. VBox supports snapshots.
VMware has one free product called VMware Server for Windows and Linux, but I'm not sure how far USB support is included in their server products. For Mac there is VMware Fusion, but that's not available for free. Fusion should work with most USB devices. The Workstation products for Windows are more expensive. I think there is a trial version of all of them. All do snapshots.
I don't know how far Parallels (Mac) supports USB devices or snapshots.
You don't need snapshot functionality if you can afford some short downtime between re-imagings: you can shut down the VM, copy the disk image (which is nothing but one or more regular files) and start the VM again. Snapshots can just be reverted to a lot faster (without rebooting).
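With VirtualBox, for example, the snapshot route is a one-liner each way (the VM and snapshot names are placeholders):

    VBoxManage snapshot "TestVM" take clean-install
    VBoxManage snapshot "TestVM" restore clean-install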
If virtual machines will work for you, you can choose between Virtual PC, VMware and VirtualBox.
Virtual PC supports a Windows host and Windows/Linux guests, although there are some caveats with regard to X resolution support.
VMware supports Windows, OS X and Linux hosts, and Windows and Linux guests.
VirtualBox supports the same hosts and guests as VMware.
None of the three officially supports OS X as a guest. The reason is that OS X is licensed only for Apple machines. However, there are some hacks that allow installing OS X under VMware. It might also be possible to install it under VirtualBox or Virtual PC, although I have not seen specific instructions.
If virtualization is not good enough for you, you can use precreated installation images or a disk imaging program.
For precreated installation images for Windows, you can use the sysprep tool (search for sysprep or system preparation tool). I don't know if there are equivalent tools for OS X from Apple.
For disk imaging programs, I know quite a lot of people swear by Symantec Ghost. I personally have not used it, so I can't give you much info about it. There's also a list of disk imaging programs on Wikipedia, so you can evaluate those as well.
Hope that helps.
Whether virtualization is right for you depends on how much direct hardware access you actually need.
If virtualization works, then VMware offers products for Windows and Mac that support a snapshot feature.
There's also VirtualBox, which works on Linux, Windows and Mac, also supports snapshots, and is free.
I use VMware Player for this sort of stuff. I've not tested it with the sort of access you discuss (since I mostly do apps rather than driver-level stuff), but the advantages are many, specifically being able to copy the VM when it's shut down for later restore to a specific point (sort of a poor man's snapshot) and being able to have lots of configurations without blowing the hardware budget.
It certainly provides USB virtualization, and I would say it's the best bet for providing full device access. I would suggest testing it since, if it provides the hardware support you need, it's a very good solution for the other reasons given. The only other (non-VM) suggestion I can think of which would match it is hard disk image backups, which can be restored at will.
I've used Virtual PC heavily for this kind of thing in Windows, without ever hitting any issues. It's free, which is always a bonus ;o)
Edit: Just re-read the question; not sure that it has USB support. It should tick all the other boxes, though.
CloneZilla is a great, free way to reimage machines.
I once worked for a company where we needed to test our software against various combinations of OS versions, service packs and some other libraries our application depended on. For each separately identified combination we had a partition image created with the help of Norton Ghost (the DOS version). All images were put on a server. Whenever a tester got the next version of the system core to test, they would just methodically restore from all applicable images, install the application, test it and report on it.
This approach, though a straightforward one, allows full access to the hardware and provides a 100% native installation.
Nowadays I still use this approach for my private PC. You can also try newer options like Hyper-V; we use it where I work now. When we tried to install Team Foundation Server (the process is far from easy) we had to abandon the installation at some point and just restore the virtual machine from an image, because we realized we had made a few mistakes along the way. Conceptually it's the same approach, and it saves a great deal of time. I'm not really sure, though, how compatible a virtual PC is in the sense of hardware access.
You can try both approaches.
P.S. Today there are two Ghost products, Symantec Ghost (good old one) for corporate use and Norton Ghost for home use (bloatware in my opinion). If you decide to try this option, I would recommend the Symantec Ghost (part of Ghost Solution Suite).
If you can't just use a virtual machine and take snapshots of the fresh install then do a fresh install onto real hardware and use a disk imaging tool (Ghost comes to mind).
If cost is a factor, then there are a few Linux live CDs that will do what you want. This comes to mind. Put a second disk in the machine and image to/from that second disk, unless you have fast networks and network storage; it's way too slow to go to and from the network regularly. If you're using a Linux live CD, you can actually format the second disk as ext3 so Windows won't detect it and assign a drive letter.
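From such a live CD, the imaging itself can be a plain dd, e.g. (device and mount names are examples; double-check them before running):

    # assuming /dev/sda is the system disk and the spare disk is mounted at /mnt/backup
    dd if=/dev/sda of=/mnt/backup/sda.img bs=4M
    # restoring is the same command with if= and of= swapped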
If you have a dedicated workstation for testing, then I would highly recommend Symantec Ghost. Simply get the workstation to the clean state, reboot into Ghost and 'take a snapshot' of the HD or partition. You can then restore the HD or partition from the image, say from CD, or multicast over a network connection from another PC.
I have used it for years now, even to the point of automating the build of 60 test workstations (at the same time).