I'm trying to write a small utility that will enable/disable monitors under Windows 7 with my nVidia graphics card (i.e. "Extend the desktop onto this monitor", etc.).
The reason is that my nVidia GeForce GTX 480 has three outputs (2x DVI, 1x Mini-HDMI) but only allows two to be active at any given time, so I need to enable/disable monitors when I want to switch to my TV (HDMI) display.
The Win32 API function EnumDisplayDevices isn't working because it doesn't show disabled monitors.
nVidia provides an API (NVAPI) with functions to enumerate all monitors (even disabled ones); you can enable a monitor, but you can't disable one. (I'm referring to NvAPI_CreateDisplayFromUnAttachedDisplay.)
UltraMon seems to have figured out how to do this, but I can't find any information on how.
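For reference, a minimal sketch (untested, my own illustration rather than anything from UltraMon) of enumerating display devices with EnumDisplayDevices and checking the attachment state via StateFlags; at least at the adapter level, detached outputs are typically still enumerated:

    // Enumerate display adapters; outputs that are disabled show up
    // without the DISPLAY_DEVICE_ATTACHED_TO_DESKTOP flag.
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        DISPLAY_DEVICEW dd = {};
        dd.cb = sizeof(dd);
        for (DWORD i = 0; EnumDisplayDevicesW(nullptr, i, &dd, 0); ++i)
        {
            wprintf(L"%s attached=%d\n", dd.DeviceName,
                    (dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP) != 0);
        }
        return 0;
    }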
I think that if two of the three displays are already connected, the third one will not be detected; the card stops listening for new hardware.
You have to physically unplug one cable and then insert a new one in a different port, unless there is a way to "eject" the connection, similar to safely removing a USB storage device.
I have a question and I really hope you can provide me with some information.
I want to build a media center because I haven't found any way to cast my content straight to the big screen from my Windows mobile phone.
Of course there is the wireless display adapter from Microsoft, but I don't want to mirror my whole display to my TV.
After testing a few products (Amazon Fire TV box, Apple TV 3, the Display Dock and the wireless dock) I came to the conclusion that there is no all-in-one solution that fits my expectations.
At that point I decided I would have to build my own "TV application".
OK, OK... there is Kodi (XBMC) and so on... but I think that is just a detour.
The following features must be included:
Running on Windows 10
Casting music, videos and pictures
Ability to launch and download Windows Store apps
Project Rome implementation to share data across devices
It seems possible, but here's one big problem...
When we talk about media boxes, we mean those small boxes beside your TV. Instead of building a micro-ATX setup, I want to take this to the next level and use IoT hardware (a Raspberry Pi 3).
Using IoT may have some advantages, but there are a few disadvantages I have to worry about:
Will Windows 10 work properly on IoT (advantages/disadvantages)?
Media streaming?
ARM architecture
Bluetooth, Wi-Fi, Ethernet connectivity
I have never worked with IoT before, so I'm kind of a noob again. I'm asking for some advice to make this possible.
[UWP] How can I stream data (e.g. video, music, images) to another application?
[UWP] How can I implement a remote control, just like the Amazon Fire TV controller?
Advantages/disadvantages of using Windows 10 on a Raspberry Pi?
Can I use Windows 10's default applications (Groove Music, the images and videos apps) to play incoming data?
What do you think? Is it possible to create a media center running on a Raspberry Pi using Windows 10?
Thank you in advance.
The most straightforward idea would be to create an always-running app with a MediaPlayerElement whose Source property can be set programmatically by a remote control app. The remote control app could also trigger the pause, play, next and previous actions.
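A minimal sketch (C++/WinRT, untested) of what setting the source from a remote command could look like; the page, the playerElement name and the transport that delivers the URL are assumptions for illustration:

    // Assumes a XAML page containing <MediaPlayerElement x:Name="playerElement"/>.
    // The remote-control listener (socket/HTTP) that supplies `url` is not shown.
    #include <winrt/Windows.Foundation.h>
    #include <winrt/Windows.Media.Core.h>
    #include <winrt/Windows.Media.Playback.h>
    #include <winrt/Windows.UI.Xaml.Controls.h>

    using namespace winrt;
    using namespace Windows::Foundation;
    using namespace Windows::Media::Core;

    void PlayRemoteMedia(
        Windows::UI::Xaml::Controls::MediaPlayerElement const& playerElement,
        hstring const& url)
    {
        // Point the element at the new media and start playback.
        playerElement.Source(MediaSource::CreateFromUri(Uri{ url }));
        playerElement.MediaPlayer().Play();
    }

The same MediaPlayer object exposes Pause() and a playback session, so the pause/next/previous commands can be wired to the same handler.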
Be aware that there is no hardware video acceleration support for the Raspberry Pi on Windows 10 IoT Core yet, and it probably won't come soon. There are other devices that do have proper video drivers (look at the hardware support page of Windows 10 IoT Core).
Also be aware that there is no Windows Store on Windows 10 IoT Core, unless you are an OEM (in which case you can publish your properly signed apps in an official way to devices that you manage).
A simpler way would be to buy a Windows 10 box from AliExpress. Then you can use Miracast to stream your screen, install apps from the Store, and play films directly on it, for example using Kodi, for which remote control apps exist.
I've got multiple nVidia GPUs (Q2000) in a Windows 7 system, without SLI and with only one monitor.
What I'm trying to do is make a Direct3D 9 device run on a specific GPU.
I can use the Adapter parameter in IDirect3D9::CreateDevice to choose a GPU, but unless I connect a second monitor to that GPU, it will not work (since I only have one desktop in Windows).
If I click the "Detect" button in the Screen Resolution control panel, it creates a "fake" desktop beside my primary desktop, and CreateDevice(1, ...) works well, but this is not what I want.
For OpenGL this is easy thanks to WGL_NV_gpu_affinity: it can make an OpenGL device run on the second GPU with only one monitor connected and one desktop in Windows.
I wonder if there is any API for Direct3D 9 that works like WGL_NV_gpu_affinity.
Any hint would be much appreciated. Thanks in advance!
Note that the first parameter of IDirect3D9::CreateDevice, Adapter, does not identify a physical GPU; it identifies a display adapter, i.e. an output with a monitor attached.
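A small sketch (untested) that shows how the adapter ordinals map to outputs; only adapters attached to the desktop are enumerated, which is why the second GPU does not appear without a second monitor:

    // List Direct3D 9 adapters: each entry is an attached output, not a GPU.
    #include <d3d9.h>
    #include <cstdio>
    #pragma comment(lib, "d3d9.lib")

    int main()
    {
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d) return 1;

        for (UINT i = 0; i < d3d->GetAdapterCount(); ++i)
        {
            D3DADAPTER_IDENTIFIER9 id = {};
            d3d->GetAdapterIdentifier(i, 0, &id);
            HMONITOR mon = d3d->GetAdapterMonitor(i);
            printf("Adapter %u: %s (device %s), HMONITOR=%p\n",
                   i, id.Description, id.DeviceName, (void*)mon);
        }
        d3d->Release();
        return 0;
    }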
I need to use a monitor as a "private" device for my special application. I want to use it as a flashlight of sorts and draw special patterns on it in full screen. I don't want this monitor to be recognized by the OS (Windows 7) as a usual monitor, i.e. the user should not be able to move the mouse onto it, change its resolution, run a screensaver on it, or whatever. But I want to be able to interact with it from my application. The monitor is plugged into a video card (most probably nVidia) via an HDMI cable.
What is the simplest way to do this? All solutions are appreciated, including purchasing additional adapters, simple video cards, or other special devices. The only solution I can imagine so far is to plug the monitor into another computer, run a daemon on that computer, connect it to my computer via Ethernet or whatever, and communicate with that daemon from my computer. That is pretty ugly and requires an additional computer, but I need to solve this problem.
To do this, detach the monitor from the desktop. Detaching a monitor from the desktop prevents Windows from using it for normal UI.
Sample code for attaching and detaching monitors is in this KB article. Once you've done that, you can use the monitor as an independent display.
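Since the KB link may not be at hand, here is a hedged sketch (untested) of the usual detach technique with ChangeDisplaySettingsEx; the device name \\.\DISPLAY2 is a placeholder for whatever EnumDisplayDevices reports for your secondary output:

    #include <windows.h>

    // Detach a display from the desktop. A DEVMODE with DM_POSITION set and
    // zero width/height is the documented way to remove a display.
    void DetachDisplay(const wchar_t* deviceName) // e.g. L"\\\\.\\DISPLAY2"
    {
        DEVMODEW dm = {};
        dm.dmSize = sizeof(dm);
        dm.dmFields = DM_POSITION | DM_PELSWIDTH | DM_PELSHEIGHT;
        // dmPelsWidth/dmPelsHeight stay zero -> detach this display

        ChangeDisplaySettingsExW(deviceName, &dm, nullptr,
                                 CDS_UPDATEREGISTRY | CDS_NORESET, nullptr);
        // Apply all pending changes in one mode switch
        ChangeDisplaySettingsExW(nullptr, nullptr, nullptr, 0, nullptr);
    }

Reattaching works the same way, but with a real resolution and position filled into the DEVMODE.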
Building upon your own idea of using an external PC, and Mark's comment on using a VM as this "external" device:
You could buy an external USB-to-VGA video adapter like one of these, approx. USD40:
http://www.newegg.com/USB-Display-Adapters/SubCategory/ID-3046
Almost every VM software supports some kind of USB passthrough. VirtualBox is a great example.
Only the VM sees the USB device, the host ignores it completely.
So the steps would be:
Buy said USB-to-VGA adapter.
Configure a slim virtual machine and cook up a little utility that receives the images to show on the screen over the network.
Configure VirtualBox to connect the USB-to-VGA adapter directly to the virtual machine.
Here is another simple solution to monitor your application.
Your app should provide a monitoring API, served over HTTP on any port you want (for example http://{userip}:{port}/{appname}/monitor).
Your app monitors itself, keeping monitoring data in memory, in a local file or in a database, hidden from the user. The monitor API serves this data to any device you want that has a browser (tablet, phone, netbook, Android mini-PC, low-cost Linux device, any PC with any OS), whether from the internet, your LAN, or a direct connection to the PC hosting the app.
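As an illustration only (the library choice and the JSON payload are my assumptions, not part of this answer), a minimal monitoring endpoint using the header-only cpp-httplib library could look like this:

    // Serve in-memory monitoring data over HTTP with cpp-httplib
    // (https://github.com/yhirose/cpp-httplib).
    #include "httplib.h"

    int main()
    {
        httplib::Server server;

        // Any browser can now fetch http://host:8080/myapp/monitor
        server.Get("/myapp/monitor",
                   [](const httplib::Request&, httplib::Response& res) {
            res.set_content("{\"status\":\"ok\",\"uptime_s\":1234}",
                            "application/json");
        });

        server.listen("0.0.0.0", 8080);
        return 0;
    }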
Pros:
Data to monitor is collected (and served) within your app: only one executable
Display can be done remotely: from anywhere!
Access security is easily done using standard HTTP authentication mechanisms
You can monitor several applications (i.e. several monitoring URLs)
You are free to use any browser to monitor (even a local window browser on the same PC for testing purposes)
Monitor from any hardware and OS you want
Simple and flexible!
Cons:
There are few, but tell me...
Choosing this solution depends on what kind of data you need to monitor (text, images, video...) and on the refresh rate you expect, given your system and network configuration.
Hope it helps :)
I am using an nVidia GT 440 GPU. It is used for both display and computational purposes, which hurts performance during computation. Can I enable it only for computational purposes? If so, how can I stop it from driving the display?
It depends -- are you working on Windows or Linux? Do you have any other display adapters (graphics cards) in the machine?
If you're on Linux, you can run without the X server (i.e., from a terminal) and SSH into the box (or attach your display to another adapter).
If you're on Windows, you need to have a second display adapter. As long as your display is connected to your GeForce 440 GT, there's no way to use it only for computational purposes. That also includes Remote Desktop, which won't work at all unless you have a Tesla card because of the way the WDDM (Windows Display Driver Model) was designed (it can't be accessed from within Session 0, which is where the RDP service runs).
I'm using Intel integrated graphics for display purposes and GPU for compute purpose on Linux.
You'll need to set the BIOS to use the integrated graphics on the motherboard. This will leave your GPU free. It depends on the hardware you have available. =)
How much does it affect performance? I checked before; the display in Windows does take up some memory (less than 10 MB).
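If you want to measure this yourself, a quick sketch (assuming the CUDA runtime is installed) that reports free vs. total device memory:

    // Report free/total memory on the current CUDA device; the difference
    // from a bare device shows what the display and driver are consuming.
    #include <cuda_runtime.h>
    #include <cstdio>

    int main()
    {
        size_t freeBytes = 0, totalBytes = 0;
        if (cudaMemGetInfo(&freeBytes, &totalBytes) == cudaSuccess)
            printf("free: %zu MB / total: %zu MB\n",
                   freeBytes >> 20, totalBytes >> 20);
        return 0;
    }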
Check that you have write permission on the /dev/nvidia* devices. The CUDA C Getting Started Guide for Linux contains a script that automatically sets the correct permissions at startup.
I have a problem (and have been having it for some time now): the new sound mixer stack in Vista features cool new things but also reinvents the wheel. Many applications that used to use the Volume Mixer on a Windows system to mix different voiced outputs into one input (for example Wave-out + Line-in --> Stereo Mix) have since stopped working. The prime example of this behavior is the Shoutcast DSP plugin (which could be useful for testing a solution).
How can I re-enable the XP mixer controls, or maybe emulate this behavior somehow, so that the program (SC DSP) can properly manage Microphone/Line-in playback volume along with Wave-out playback volume?
My thinking would be to emulate a program hooked into the Vista mixer for Wave-out and Line-out (or mic speaker volume; all playback, shown as separate adjustable "programs" so that the Vista mixer could refer to it) and 'hook' it into the system under some emulation presenting itself to the program as the old volume mixer control interface, but I frankly have no idea how to do that.
To clarify: this is not my PC (it is an HP Pavilion laptop). The problem seems to exist mostly because the Vista mixer controls separate programs, not separate inputs/outputs. The hardware is fully capable of doing what is needed under Windows XP. I am well aware that this is a driver issue, but the driver simply implements what Vista presents to the programmer through its interfaces. The mixer device, as seen in the operating system, however it might look in software, is based on the mixer APIs for Windows audio control.
Search Google for Vista and line-in playback volume control for more info on the problem (and the sheer number of users affected by it). Of course, a rewrite of the Shoutcast Source DSP plug-in for WinAMP would do the trick, but that is not likely to happen...
Controlling a soundcard's individual input/output volume levels in the Windows Vista mixer is possible using the audio endpoint API.
This should allow you to adjust the main volume and the volume of any connected audio inputs. One wrinkle is that when you enumerate the endpoints, if there isn't a microphone plugged into your soundcard, nothing will be enumerated. This means you'll need to change your application to respond to "microphone plugged in" events and notify the user appropriately.
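A hedged sketch (untested) of what this looks like with the Core Audio endpoint interfaces; it sets the master level of the default capture endpoint (e.g. line-in/microphone) and assumes COM has already been initialized with CoInitializeEx:

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <endpointvolume.h>

    HRESULT SetDefaultCaptureVolume(float level) // 0.0f .. 1.0f
    {
        IMMDeviceEnumerator* enumerator = nullptr;
        IMMDevice* device = nullptr;
        IAudioEndpointVolume* volume = nullptr;

        HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr,
                                      CLSCTX_ALL, __uuidof(IMMDeviceEnumerator),
                                      (void**)&enumerator);
        if (SUCCEEDED(hr)) // default capture endpoint = line-in or microphone
            hr = enumerator->GetDefaultAudioEndpoint(eCapture, eConsole, &device);
        if (SUCCEEDED(hr))
            hr = device->Activate(__uuidof(IAudioEndpointVolume), CLSCTX_ALL,
                                  nullptr, (void**)&volume);
        if (SUCCEEDED(hr))
            hr = volume->SetMasterVolumeLevelScalar(level, nullptr);

        if (volume) volume->Release();
        if (device) device->Release();
        if (enumerator) enumerator->Release();
        return hr;
    }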
Another option is to dip below Microsoft Core Audio and access the WaveRT driver directly. This is a lot more work than using the WASAPI/endpoint APIs, but it will give you the most control over access to the soundcard's inputs and outputs.
The audio driver controls which mixer controls are available, and this will depend largely on the capabilities of the hardware.
If the Vista driver doesn't have certain controls, then it's likely to be a shortcoming of that driver and not of Vista.
(Please tell us which sound card/device you are using.)
It would be possible to write a program to create your own mixer controls (this would be a software-only driver for a virtual sound card), but this program wouldn't be able to affect the audio routing inside the device if the actual driver doesn't have some mixer control for this.
If you mark your app as running in Windows XP compatibility, then all the old controls and behaviors will come back.
"If you mark your app as running in Windows XP compatibility, then all the old controls and behaviors will come back."
This is true, but as of Vista SP1 patch KB957388, included in SP2, and with some soundcard drivers, the old mixer API (winmm.dll) functions can hang when the app is in XP compatibility mode. In particular, mixerGetNumDevs and, less often, mixerOpen will not return on some computers.
I've got reports from 5 Vista users, out of around 200 in total, whose copies of my app hang on startup, and I have tracked it down to these functions hanging.
I would like to report this to Microsoft but cannot find anywhere to do so.
All I can do now is release my software without compatibility mode enabled, but this loses functionality in my app: the software cannot control the line-in or microphone mixers.
I don't have time to work with low-level API functions directly. I rely on high-level components, and I cannot find any for the new audio APIs for my development system (Delphi).
I would be interested in paying someone to write a DLL for me!!!
e mail ross att stationplaylist dott com