I have googled and searched a number of forums and developer websites without any success; I believe it is a specific question that needs direct expertise or knowledge, so please read on!
BACKGROUND:
I have an audio enhancement algorithm implemented as a system Audio Processing Object (sAPO) that was developed and tested successfully on Windows 7. As an APO, it applies processing to every audio stream that passes through an endpoint device, including audio originating from Skype.
QUESTION:
Is it true that this is not applicable to Windows 8.x ( 8.1 or greater)? More specifically, does sAPO processing still work for Skype? Does Skype disable any and all APO processing on its stream?
WHAT HAS BEEN TRIED SO FAR:
(1) I have succeeded in following the standard procedure of loading an unsigned APO from Windows 7 in Windows 8.
(2) I have tested this with the Skype audio stream, and that works as well.
HOWEVER:
(1) above fails in the Windows 8.1 developer preview. As a result, I have not been able to test (2).
Please note that I am specifically asking about Windows 8.1 on a laptop or desktop. This is not for mobile phones or tablets. Any information or links regarding this would be much appreciated!
I am also trying to update an APO that was developed for Windows 7/8 to the new format introduced by Windows 8.1; however, it doesn't seem like much documentation has been released yet.
What I have found so far is that Windows 8.1 requires some new methods for discovery and control of effects to be implemented in the APO. This means that Skype is able to discover certain APO effects and disable them if it needs to.
The new interface IAudioSystemEffects2:
link
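For orientation, a rough sketch of what implementing the new discovery method could look like is below. The class name and the effect GUID are placeholders, and this is not taken from the SwapAPO sample.

    #include <windows.h>
    #include <audioenginebaseapo.h>   // IAudioSystemEffects2 (Windows 8.1 SDK)

    // Placeholder GUID for the custom enhancement effect -- generate a real one with uuidgen.
    static const GUID MyEnhancementEffectGuid =
        { 0x12345678, 0x1234, 0x1234, { 0x12, 0x34, 0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc } };

    class CMyEnhancementApo : public IAudioSystemEffects2   // other APO interfaces omitted
    {
    public:
        // Windows 8.1 calls this to discover which effects the APO applies.
        // A client (for example Skype) can then decide to turn the reported effects off.
        STDMETHODIMP GetEffectsList(LPGUID* ppEffectsIds, UINT* pcEffects, HANDLE Event)
        {
            UNREFERENCED_PARAMETER(Event);   // signal this event later if the list changes

            if (ppEffectsIds == nullptr || pcEffects == nullptr)
                return E_POINTER;

            // The caller frees this array with CoTaskMemFree.
            LPGUID ids = static_cast<LPGUID>(CoTaskMemAlloc(sizeof(GUID)));
            if (ids == nullptr)
                return E_OUTOFMEMORY;

            ids[0] = MyEnhancementEffectGuid;
            *ppEffectsIds = ids;
            *pcEffects = 1;
            return S_OK;
        }

        // ... IAudioSystemEffects, IAudioProcessingObject and IUnknown methods omitted ...
    };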
Some updated code can be found in the new SwapAPO example:
link
Not much, but hopefully it can help you get going in the right direction.
I want to detect a buffer under-run situation in a DirectSound environment.
I use two sound buffers (primary and secondary). Sometimes (when the server provides data) I call the Lock method of the IDirectSoundBuffer interface to post data to the secondary sound buffer. If data from the server does not arrive in time, the sound buffer starts playing again from the start of the buffer (and repeats it until I send new data to the buffer). Maybe it's the DSBPLAY_LOOPING flag, but as I read (and tested), the primary buffer cannot be played without this flag (the Play method returned an error).
I tried to get the playing status, but the GetStatus method always returns the same status, even when there is no new data and the player repeats the old data.
So, how can I detect a buffer under-run situation (when there is no new data to play and all the old data has been played)?
Thanks in advance.
IDirectSoundBuffer8::GetCurrentPosition is really the only way you can determine where it is playing from, but it's also only reliable on Windows Vista or later systems that report DSBCAPS_TRUEPLAYPOSITION.
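As a sketch of the usual workaround (names are illustrative, not from any SDK sample): accumulate the movement of the play cursor and compare it against the number of bytes your application has submitted via Lock/Unlock. When the bytes played catch up with the bytes written, DirectSound is looping over stale data, which is exactly the under-run you are trying to detect.

    #include <dsound.h>

    // Returns how many bytes have been played since the previous call, derived from
    // the movement of the play cursor (handles wrap-around of the circular buffer).
    // DSBCAPS_TRUEPLAYPOSITION should be set in DSBUFFERDESC::dwFlags when the
    // secondary buffer is created so the cursor is accurate on Vista and later.
    // Poll often enough that the cursor cannot wrap more than once between calls.
    DWORD BytesPlayedSinceLastCall(IDirectSoundBuffer8* buffer,
                                   DWORD bufferBytes,      // total buffer size
                                   DWORD* lastPlayPos)     // in/out: previous play cursor
    {
        DWORD playPos = 0, writePos = 0;
        if (FAILED(buffer->GetCurrentPosition(&playPos, &writePos)))
            return 0;

        DWORD delta = (playPos + bufferBytes - *lastPlayPos) % bufferBytes;
        *lastPlayPos = playPos;
        return delta;
    }

    // Caller side (illustrative): totalWritten is advanced every time Lock/Unlock
    // submits new server data; when totalPlayed >= totalWritten, everything that
    // was submitted has been consumed, i.e. an under-run.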
A few things to note:
DirectSound is 'legacy', meaning it hasn't been actively worked on, promoted, or tested directly in ages. The last samples were shipped in the also now end-of-life DirectX SDK in November 2007. Versions of DirectSound prior to 8 are not supported for x64 native applications, and the DirectSound 8 headers/libs are in the Windows SDK as of the Windows SDK 7. It's not supported for Windows on ARM, or Windows Store apps, or Universal Windows apps. The documentation for DirectSound can only be found offline in the legacy DirectX SDK and is not on Microsoft Docs--the only DirectSound content still online is for driver writers.
DirectSound is also 'emulated' on modern versions of Windows so there's nothing actually 'direct' about it. The primary buffer is not actually connected directly to the audio hardware or used for mixing at all, so it's just another software buffer like a secondary buffer. It does emulate the legacy restrictions of primary buffers that applied back in Windows 9x/ME, but it doesn't do much at all otherwise.
Starting with Windows Vista, LOC_HARDWARE buffers are no longer supported at all. Windows Vista did add support for multi-channel LOC_SOFTWARE buffers, which on Windows XP were only available as LOC_HARDWARE buffers.
Starting with Windows Vista, Effects (I3DL2, EAX, etc.) are not supported through DirectSound.
TL;DR: Don't use DirectSound in new applications. It is only still supported at all for old software & games.
So, what is a developer supposed to use if not DirectSound?
(1) Windows Core Audio (WASAPI) is a good option if you can provide the sound data at a known data rate and format. If you need any real-time mixing or source-rate conversion, you have to do it yourself, or you can use one of the many existing third-party audio libraries that do the mixing and send the final result to WASAPI. See Microsoft docs (a minimal setup sketch follows this list).
(2) XAudio version 2 (XAudio2) is a good choice if you want to do real-time mixing, source-rate conversion, and software-based DSP effects. It is included in the operating system as of Windows 8, but to support Windows 7 you have to use some legacy distribution and SDKs. See Microsoft docs and this blog.
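Picking up the forward reference from (1): a minimal shared-mode WASAPI render setup might look roughly like the sketch below. This is for orientation only, with error handling trimmed; it assumes COM is already initialized, and the function name and structure are illustrative rather than taken from the Microsoft docs.

    #include <mmdeviceapi.h>
    #include <audioclient.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Opens the default render endpoint in shared mode at the engine's mix format.
    // The caller then loops: GetBuffer / fill PCM / ReleaseBuffer ("audio packets").
    HRESULT OpenDefaultRenderClient(ComPtr<IAudioClient>& audioClient,
                                    ComPtr<IAudioRenderClient>& renderClient,
                                    WAVEFORMATEX** mixFormat)
    {
        ComPtr<IMMDeviceEnumerator> enumerator;
        HRESULT hr = CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr,
                                      CLSCTX_ALL, IID_PPV_ARGS(&enumerator));
        if (FAILED(hr)) return hr;

        ComPtr<IMMDevice> device;
        hr = enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);
        if (FAILED(hr)) return hr;

        hr = device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr,
                              reinterpret_cast<void**>(audioClient.GetAddressOf()));
        if (FAILED(hr)) return hr;

        hr = audioClient->GetMixFormat(mixFormat);      // free with CoTaskMemFree
        if (FAILED(hr)) return hr;

        const REFERENCE_TIME hnsBuffer = 1000000;       // ~100 ms, in 100 ns units
        hr = audioClient->Initialize(AUDCLNT_SHAREMODE_SHARED, 0,
                                     hnsBuffer, 0, *mixFormat, nullptr);
        if (FAILED(hr)) return hr;

        return audioClient->GetService(IID_PPV_ARGS(&renderClient));
    }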
Both WASAPI and XAudio2 use an 'audio packet' model instead of a looping buffer for data submission. As long as a packet is pending processing, you won't get an under-run.
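And for (2), a sketch of the packet model with XAudio2. Each chunk of incoming data is submitted as its own XAUDIO2_BUFFER, and a voice callback (registered through IXAudio2::CreateSourceVoice) counts what is still queued so the application can refill before the voice runs dry; the names here are illustrative.

    #include <windows.h>
    #include <xaudio2.h>

    // Counts queued packets so the application can tell when the voice is about
    // to run dry (no packet left to play). Pass an instance as the pCallback
    // argument of IXAudio2::CreateSourceVoice.
    struct PacketWatcher : public IXAudio2VoiceCallback
    {
        volatile LONG queued = 0;

        void STDMETHODCALLTYPE OnBufferStart(void*) override {}
        void STDMETHODCALLTYPE OnBufferEnd(void*) override { InterlockedDecrement(&queued); }
        void STDMETHODCALLTYPE OnStreamEnd() override {}
        void STDMETHODCALLTYPE OnLoopEnd(void*) override {}
        void STDMETHODCALLTYPE OnVoiceError(void*, HRESULT) override {}
        void STDMETHODCALLTYPE OnVoiceProcessingPassStart(UINT32) override {}
        void STDMETHODCALLTYPE OnVoiceProcessingPassEnd() override {}
    };

    // Submit one network packet's worth of PCM; 'pcm' must stay valid until
    // OnBufferEnd fires for this packet.
    void SubmitPacket(IXAudio2SourceVoice* voice, PacketWatcher& watcher,
                      const BYTE* pcm, UINT32 bytes)
    {
        XAUDIO2_BUFFER buf = {};
        buf.AudioBytes = bytes;
        buf.pAudioData = pcm;
        voice->SubmitSourceBuffer(&buf);
        InterlockedIncrement(&watcher.queued);
        // While watcher.queued stays above 0 there is always data pending,
        // so the voice never loops over stale data the way DirectSound does.
    }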
I would like to take video from a webcam, render some text on the frames and do some motion tracking and pass it on to a virtual webcam so it can be streamed easily.
I found some answers on Stack Overflow suggesting that I should use DirectShow. According to the DirectShow documentation, the DirectShow SDK is part of the Windows SDK. So I installed the latest Windows SDK, but it seems that it doesn't include DirectShow, because there are no DirectShow samples under C:\Program Files (x86)\Microsoft SDKs\Windows. (The Stack Overflow answers are also pretty old - dated around 2010.)
Can you suggest a way to make DirectShow work (including samples working on Visual Studio 2015) or some other alternative to DirectShow, that would help me create a virtual webcam?
A virtual webcam is typically a software-only implementation that applications discover as if it were a device with a physical representation. The mentioned applications use APIs to work with web cameras, and the ability to extend those APIs and add your own video source is the way to create a virtual web camera.
In Windows there are a few APIs to consume video sources: Video for Windows, DirectShow, Media Foundation (in chronological order).
Video for Windows is not really extensible and is limited in capabilities overall. It will only see a virtual device if you provide a kernel-mode driver for the virtual camera.
DirectShow is the API used by most video-capture-enabled Windows applications, and it is present in all Windows versions including Windows 10 (except Windows RT). It is also perfectly extensible, and in most cases the term "virtual webcam" refers to a DirectShow virtual webcam. The methods to create a DirectShow virtual webcam discussed in many StackOverflow questions remain perfectly valid for Windows 10, for applications that implement video capture using DirectShow:
Virtual webcam input as byte stream
Simulate a DirectShow Webcam
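To give a flavor of what those linked answers involve: the virtual camera is a COM source filter (typically built on the CSource/CSourceStream base classes from the SDK samples) whose DllRegisterServer puts it in the video capture category, roughly as in the sketch below. CLSID_MyVirtualCam and the friendly name are placeholders.

    #include <dshow.h>   // link with strmiids.lib for the category/interface GUIDs

    // Registers the filter under "Video Capture Sources" so DirectShow-based apps
    // enumerate it next to real webcams. Called from the filter DLL's DllRegisterServer.
    HRESULT RegisterVirtualCam(REFCLSID clsidFilter)   // e.g. CLSID_MyVirtualCam
    {
        IFilterMapper2* mapper = nullptr;
        HRESULT hr = CoCreateInstance(CLSID_FilterMapper2, nullptr, CLSCTX_INPROC_SERVER,
                                      IID_IFilterMapper2, reinterpret_cast<void**>(&mapper));
        if (FAILED(hr)) return hr;

        REGFILTER2 rf2 = {};
        rf2.dwVersion = 1;
        rf2.dwMerit   = MERIT_DO_NOT_USE;   // found via category enumeration only
        rf2.cPins     = 0;
        rf2.rgPins    = nullptr;

        hr = mapper->RegisterFilter(clsidFilter, L"My Virtual Cam", nullptr,
                                    &CLSID_VideoInputDeviceCategory, nullptr, &rf2);
        mapper->Release();
        return hr;
    }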
DirectShow samples were removed from the Windows SDK, but you can still find them in older releases:
Getting DirectShow Samples on Windows 8
If you provide a kernel-mode driver for a video camera device (your virtual webcam implemented as a custom kernel driver), DirectShow will also see it, just like the other video APIs.
Media Foundation is the supposed successor of DirectShow, but its video capture extensibility simply does not exist1. Microsoft decided not to allow custom video sources that applications would be able to discover the same way as web cameras. Due to Media Foundation's complexity, overhead, and overall unfriendliness, it is used by a modest number of applications. To implement a virtual webcam for a Media Foundation application you again, as in the case of Video for Windows, have to implement a kernel-mode driver.
1 Starting with Windows build 22000 (Windows 11), there is a new API, MFCreateVirtualCamera, which offers virtual camera creation. A developer implements a video source that the API connects to the so-called Windows Camera Frame Server service, which in turn distributes the generated video as a source alongside the regular cameras. Applications see this software implementation the same way as if it were, for example, a webcam.
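As a rough sketch of how that API is used, under the assumption that you have a custom media source registered under its own CLSID (the CLSID string and names below are placeholders, and the enum values are quoted from the current SDK headers as I understand them):

    #include <mfvirtualcamera.h>   // MFCreateVirtualCamera; link against mfsensorgroup.lib
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // Registers and starts a session-scoped virtual camera backed by a custom
    // media source. The source CLSID string is a placeholder -- it must be the
    // CLSID of a media source implementation that the Frame Server can activate.
    HRESULT StartVirtualCamera(ComPtr<IMFVirtualCamera>& camera)
    {
        HRESULT hr = MFCreateVirtualCamera(
            MFVirtualCameraType_SoftwareCameraSource,
            MFVirtualCameraLifetime_Session,            // removed when this process exits
            MFVirtualCameraAccess_CurrentUser,
            L"My Virtual Camera",                       // friendly name apps will display
            L"{00000000-0000-0000-0000-000000000000}",  // placeholder source CLSID
            nullptr, 0,                                 // default camera categories
            camera.GetAddressOf());
        if (FAILED(hr))
            return hr;

        return camera->Start(nullptr);                  // publish it to the Frame Server
    }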
This is an ancient question internet-wise but I thought I could contribute:
I was looking into this about a year ago and almost abandoned my project altogether until I found Microsoft's SimpleMediaSource driver sample on their Github. It is documented here but it is a tough read if you haven't written drivers before - which was the case for me. Fortunately, documentation seems to have been updated and improved since I used it.
To get it working, I had to manually delete and copy-paste the DLL into C:\System32 after each compilation with Visual Studio. I also had to side-download and install the now removed (from what I can tell) devcon utility to add & remove drivers with devcon dp_add/dp_remove commands. You also need the Windows Driver Kit (WDK).
You need to enable unsigned driver loading within Windows so it may not be a great route if you want to distribute it. Anticheat and DRM software may also not appreciate it :)
There are two projects being compiled:
MediaSource - COM DLL project for the custom media source
SimpleMediaSourceDriver - UMDF driver install package
Just install OBS Studio.
In newer versions it automatically installs an easy-to-use virtual webcam that mirrors the OBS scene.
I know the midiXxx API, but I saw it is currently listed under 'legacy' on MSDN.
http://msdn.microsoft.com/en-us/library/windows/desktop/dd743619(v=vs.85).aspx
Is there some other API I should use to target the newer Windows versions?
Will the old API still work on Windows 7 and 8?
Thanx,
Marc
Last Friday Microsoft released a preview Windows Runtime API for MIDI. Check out the //build/ session here:
http://channel9.msdn.com/Events/Build/2014/3-548
MSDN: http://msdn.microsoft.com/en-us/library/windows/apps/dn643522.aspx
Although a preview, apps can go live and be deployed to the Windows Store. Please let us know what you like or don't like. Happy app building!
For desktop applications (non-Metro) you can still use the legacy API safely.
Sadly for WinRT/Metro, there is no MIDI support at all (see this discussion on MSDN).
Hope they will change that.
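For reference, the legacy winmm API really is still this simple in a Windows 7/8 desktop app; a minimal sketch that opens the default MIDI output and plays one note:

    #include <windows.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    int main()
    {
        HMIDIOUT out = nullptr;
        if (midiOutOpen(&out, MIDI_MAPPER, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
            return 1;

        // Low byte = status (0x90: Note On, channel 1), then note 60 (0x3C,
        // middle C), then velocity 0x7F -- packed into a DWORD as the API expects.
        midiOutShortMsg(out, 0x007F3C90);
        Sleep(500);                        // let the note sound briefly
        midiOutShortMsg(out, 0x00003C80);  // Note Off for the same note
        midiOutClose(out);
        return 0;
    }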
Catflier is correct that there is no direct support (at the API level) for MIDI in WinRT. However, if you are wanting to have MIDI-like capabilities in your app, there are workarounds. A protocol growing in popularity is the network-based OSC (Open Sound Control). Since it is network-based, you can use that.
For example, one can use external hardware like The Missing Link which translates from MIDI to OSC. You hook up your MIDI device to The Missing Link, which then translates to OSC messages that are sent to the computer. Your app can then receive OSC messages and talk to the MIDI device. I don't have any code to show here, but I've seen demos of this working in-action.
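If you go that route, the OSC wire format itself is simple enough to hand-roll. Below is a sketch of encoding a single-int32 message (the address name is arbitrary) that you would then send over UDP to whatever bridge translates OSC back to MIDI.

    #include <cstdint>
    #include <string>
    #include <vector>

    // Pads the buffer with zero bytes to the next 4-byte boundary, as OSC requires.
    static void PadTo4(std::vector<uint8_t>& buf)
    {
        while (buf.size() % 4 != 0) buf.push_back(0);
    }

    // Encodes an OSC message with one int32 argument, e.g. EncodeOscInt("/noteon", 60).
    std::vector<uint8_t> EncodeOscInt(const std::string& address, int32_t value)
    {
        std::vector<uint8_t> buf;

        // Address pattern, null-terminated and padded to a 4-byte boundary.
        buf.insert(buf.end(), address.begin(), address.end());
        buf.push_back(0);
        PadTo4(buf);

        // Type tag string: ",i" means a single int32 argument (also null-padded).
        const char tags[] = ",i";
        buf.insert(buf.end(), tags, tags + sizeof(tags));
        PadTo4(buf);

        // int32 argument, big-endian as OSC requires.
        buf.push_back(static_cast<uint8_t>((value >> 24) & 0xFF));
        buf.push_back(static_cast<uint8_t>((value >> 16) & 0xFF));
        buf.push_back(static_cast<uint8_t>((value >> 8) & 0xFF));
        buf.push_back(static_cast<uint8_t>(value & 0xFF));
        return buf;
    }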
You can see OSC in use on iOS as well with popular apps like TouchOSC.
I'm trying to implement an AVRCP/A2DP connection between my Android phone and my car PC. The A2DP bit basically works out of the box so no issue there. I want the PC to be the AVRCP CT (controller) and the A2DP sink. The phone is the AVRCP TG (target) and the A2DP source.
Where I'm having trouble is getting any sort of AVRCP connection that I can use. Windows 7 comes with a toolbar application that at least provides the basic play/pause/skip/stop type functions, so it definitely works with the software I have, without any extra drivers or otherwise. However, my searching has produced few results on any way to do this, or documentation on creating an L2CAP connection, which I believe I need.
The 32feet.NET libraries don't support L2CAP connections unless you use a Broadcom/Widcomm stack. Buying a new BT USB device may be a viable solution, but at the moment I'm trying to do this all in software :) i.e. like this, although there are problems noted there that weren't solved (or not reported as solved):
link: How can I establish an AVRCP connection from Windows 7 (controller) to phone (target) using L2CAP on Widcomm SDK?
I'd prefer to do it in C# if possible, but if I had some kind of library to interface with from my code, that would be fine (like the 32feet.NET library, which works quite well for the things it does support).
This is about the closest I've got, but it's all a bit Greek to me and not quite enough to get me started (I'm an embedded guy):
http://msdn.microsoft.com/en-us/library/windows/hardware/ff536674(v=vs.85).aspx
Is Bluetooth on Windows really as much of a mess as it seems to be from my searching? There are multiple different stacks that all seem to be significantly different in terms of the API, etc.
Can anyone point me in the right direction? I've done a lot of searching/reading other posts here and elsewhere and not really made any progress.
Thanks
Christian
The current OCX controls I'm using for voice recording and playback are not compatible with Windows 7. I'm already feeling the pressure to produce a Windows 7 compatible version of my software. The author has already stated that he is not planning to write a Windows 7 compatible OCX.
I work from xHarbour, so I need to consume an OCX or write the whole thing myself (which I'd like to avoid and don't even know where to start). My basic needs are (1) to record dictation from the microphone, preferably with methods to pause and VOX, (2) to save it to a file, and (3) to play it back later with methods to fast-forward and rewind.
Thank you,
Reinaldo.
I found this ActiveX component that seems to work: http://www.download3k.com/Software-Development/Active-X/Download-Active-Audio-Record-Component.html
Reinaldo.