My goal is to control the volume of the application currently playing media on Windows 11 instead of the system-wide volume. This should be done from a small console program.
This program will then be called using programmable keys on my keyboard.
I already know of the "System Media Transport Controls" (SMTC) API in the .NET Framework for play/pause, skip, and track/artist information, but it doesn't seem to be possible to control the volume through SMTC.
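For reference, the SMTC part currently looks roughly like the sketch below. It uses the WinRT Windows.Media.Control namespace and assumes a console project targeting a Windows 10+ framework (e.g. net8.0-windows10.0.19041.0); treat it as an illustration rather than finished code.

```
// Sketch only: read the current SMTC session's track info and toggle playback.
using System;
using System.Threading.Tasks;
using Windows.Media.Control;

class SmtcDemo
{
    static async Task Main()
    {
        var manager = await GlobalSystemMediaTransportControlsSessionManager.RequestAsync();
        var session = manager.GetCurrentSession();   // the app Windows currently treats as "the" media session
        if (session == null) return;

        var props = await session.TryGetMediaPropertiesAsync();
        Console.WriteLine($"{props.Artist} - {props.Title}");

        await session.TryTogglePlayPauseAsync();     // TryPauseAsync(), TrySkipNextAsync() etc. also exist
    }
}
```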
I'm open to languages other than C# if that's what it takes to accomplish my goal, but since this is for Windows I expect a solution to use the .NET Framework.
Under Linux, this is possible using dbus/MPRIS. I hope there's a solution for Windows.
Is there a way to control the volume of the currently active media player?
Related
I am building a desktop application using Electron JS + Node.js. It will be a desktop usage monitoring application.
My goal is to list the names of all open applications that have a user interface, and their process IDs if possible, e.g. Notepad, MS Office applications, Visual Studio, etc.
Ultimately I need to control which applications can be opened on the desktop and which should be blocked. I cannot maintain a list of every application in the world and check them one by one against the process list. So, as a workaround, if I can get the list of all open applications that have a UI, I can at least show the user whether a given application is allowed or not.
The platforms I am targeting are Unix/Windows/macOS.
If there is no straightforward solution, could something like OpenCV be used to analyse a screenshot and detect the open application windows? In any case, I am mostly interested in the application on top of all others, i.e. the active one.
Is there any way to identify such processes/applications?
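To make the target concrete: on Windows, the data I'm after (PID, process name, window title) is roughly what the C# sketch below would print. This is only an illustration of the desired output, not my actual stack (which is Electron/Node), and treating "owns a visible main window" as "application with a UI" is just an assumed heuristic.

```
// Illustration only: list processes that own a visible main window.
using System;
using System.Diagnostics;
using System.Linq;

class OpenUiApps
{
    static void Main()
    {
        var uiProcesses = Process.GetProcesses()
            .Where(p => p.MainWindowHandle != IntPtr.Zero
                        && !string.IsNullOrEmpty(p.MainWindowTitle));

        foreach (var p in uiProcesses)
            Console.WriteLine($"{p.Id,6}  {p.ProcessName,-20}  {p.MainWindowTitle}");
    }
}
```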
This will be OS-dependent, so here goes:
Windows 7/8/10
I'd like to be able to access a desktop application's UI from another application (one that I've written).
I'd like to be able to simply pass some mouse and keyboard inputs from my application to the external one.
Is this possible and if so, where do I start? Is there some Windows API that I can do this with?
I know I can use the Microsoft Detours API to hook into Direct3D applications/games, but I'm not sure about regular desktop applications.
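To illustrate the level of control I'm after, something along the lines of the sketch below is what I have in mind: find a window, bring it to the foreground, and push keystrokes at it. The window title "Untitled - Notepad" is just a placeholder, and SendKeys is only one assumed way of generating the input.

```
// Rough sketch: focus a window by title and send it keystrokes.
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;   // for SendKeys

class PushInput
{
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr FindWindow(string lpClassName, string lpWindowName);

    [DllImport("user32.dll")]
    static extern bool SetForegroundWindow(IntPtr hWnd);

    [STAThread]
    static void Main()
    {
        IntPtr hWnd = FindWindow(null, "Untitled - Notepad"); // placeholder title
        if (hWnd == IntPtr.Zero) return;

        SetForegroundWindow(hWnd);
        SendKeys.SendWait("Hello from another app{ENTER}");
    }
}
```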
I am considering writing a Kinect v2.0 gesture-to-keyboard/mouse event translator so I can control video games. Since I will be using Microsoft's SDK, cross-platform is out of the question, so it seems natural to distribute this through the Windows Store. However, I know Windows Store apps have significant restrictions. Can a Windows Store app:
Run in the background (possibly with an elevated priority to ensure that the game doesn't miss input)?
Create user input events like "key-down" and "mouse move" that will be read by other applications?
Looking at Microsoft's capability page didn't seem to give me a definite yes or no.
You'll need to write this as a desktop app. Windows Store apps run in a sandboxed context with limited access to the system. They cannot interact with other processes as you'd need, and they cannot inject input events.
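If you go the desktop route, the usual Win32 entry point for synthesizing keyboard and mouse events is SendInput. Below is a minimal C# sketch of a key tap; the virtual-key code is a placeholder, error handling is omitted, and be aware that some games read raw input or DirectInput and may not see synthesized events.

```
// Sketch: press and release one virtual key via Win32 SendInput.
using System;
using System.Runtime.InteropServices;

static class SyntheticInput
{
    const uint INPUT_KEYBOARD = 1;
    const uint KEYEVENTF_KEYUP = 0x0002;

    [StructLayout(LayoutKind.Sequential)]
    struct KEYBDINPUT { public ushort wVk; public ushort wScan; public uint dwFlags; public uint time; public IntPtr dwExtraInfo; }

    [StructLayout(LayoutKind.Sequential)]
    struct MOUSEINPUT { public int dx; public int dy; public uint mouseData; public uint dwFlags; public uint time; public IntPtr dwExtraInfo; }

    [StructLayout(LayoutKind.Explicit)]
    struct InputUnion { [FieldOffset(0)] public MOUSEINPUT mi; [FieldOffset(0)] public KEYBDINPUT ki; }

    [StructLayout(LayoutKind.Sequential)]
    struct INPUT { public uint type; public InputUnion U; }

    [DllImport("user32.dll", SetLastError = true)]
    static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

    // e.g. Tap(0x57) presses and releases 'W'.
    public static void Tap(ushort vk)
    {
        var inputs = new[]
        {
            new INPUT { type = INPUT_KEYBOARD, U = new InputUnion { ki = new KEYBDINPUT { wVk = vk } } },
            new INPUT { type = INPUT_KEYBOARD, U = new InputUnion { ki = new KEYBDINPUT { wVk = vk, dwFlags = KEYEVENTF_KEYUP } } },
        };
        SendInput((uint)inputs.Length, inputs, Marshal.SizeOf(typeof(INPUT)));
    }
}
```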
I'm wondering if anyone has any idea how to capture audio from the device's microphone in the background on the new Windows Phone 7 (Silverlight, not XNA), or any code that does it?
Even in a Silverlight application, the Microphone is accessed via libraries in the Microsoft.Xna.* namespaces.
The use of such namespaces is not supported in a Background Task. See http://msdn.microsoft.com/en-us/library/hh202962(v=vs.92)
This is not possible and would break the security principle of not allowing apps to do something that the user isn't aware of.
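For comparison, foreground capture in a Silverlight page does go through the XNA Microphone class, and it only works if FrameworkDispatcher.Update() is pumped regularly. A rough, untested sketch:

```
// Rough foreground-only sketch: XNA Microphone used from Silverlight.
using System;
using System.Windows.Threading;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;

public class MicCapture
{
    private readonly Microphone mic = Microphone.Default;
    private byte[] buffer;

    public void Start()
    {
        // Pump the XNA dispatcher so BufferReady events get raised.
        var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(33) };
        timer.Tick += (s, e) => FrameworkDispatcher.Update();
        timer.Start();

        mic.BufferDuration = TimeSpan.FromMilliseconds(100);
        buffer = new byte[mic.GetSampleSizeInBytes(mic.BufferDuration)];
        mic.BufferReady += (s, e) => mic.GetData(buffer); // raw 16-bit PCM at mic.SampleRate
        mic.Start();
    }
}
```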
I have an existing cross platform project that runs on Mac, Linux and Windows.
Now I want to add a 'native' UI to it: the ability to show some popup windows (to request user credentials) and perhaps file-open dialogs. By native I mean that I want to use the system's built-in file-open dialog, so on the Mac the native Mac file chooser is shown and on Windows the shell's file-open window is shown.
Qt seems a good fit; its samples show that it can display the correct dialog on all platforms.
However, all the available Qt samples start at the very base level, assuming the entire project is developed in Qt. Is it possible to initialize and use Qt in a more ad-hoc fashion? I want to keep all my Qt UI code in a separate dll/dylib/so file with some simple exports (think ShowLoginPopup).
I think the easiest approach would be to do it the other way around: have the Qt GUI drive the rest of the application. Qt is event-based and relies on its event loop, so you need to keep that loop running.