ESP32 ESP-NOW fails with ESP_ERR_ESPNOW_IF

I've spent many hours on an ESP_ERR_ESPNOW_IF error that, in my case, I can't explain.
I'm most likely missing something obvious... :o
Here are the essential chunks of code (the whole project is far too big to post).
wifi init :
nvs_init();
ESP_ERROR_CHECK(esp_netif_init());
ESP_ERROR_CHECK(esp_event_loop_create_default());
esp_netif_t *ap_netif = esp_netif_create_default_wifi_ap();
esp_netif_t *sta_netif = esp_netif_create_default_wifi_sta();
wifi_init_config_t cfg = WIFI_INIT_CONFIG_DEFAULT();
ESP_ERROR_CHECK(esp_wifi_init(&cfg));
esp_err_t ret = esp_wifi_set_mode(WIFI_MODE_APSTA);
espnow init :
ESP_ERROR_CHECK( esp_now_init() );
ESP_ERROR_CHECK( esp_now_register_send_cb(espnow_send_cb) );
ESP_ERROR_CHECK( esp_now_register_recv_cb(espnow_recv_cb) );
add peer to list :
esp_now_peer_info_t *peer = malloc(sizeof(esp_now_peer_info_t));
peer->channel = 0; // Same channel as wifi softAP or station
peer->ifidx = ESP_IF_WIFI_AP; // ESP32 soft-AP interface
peer->encrypt = false; // No LMK set
memcpy(peer->peer_addr, peer_mac_addr, ESP_NOW_ETH_ALEN); // MAC address of new peer
ESP_ERROR_CHECK( esp_now_add_peer(peer) ); // add peer to the list
When the ESP32 runs, all goes well until it tries to send a message to a peer. Then comes the error ESP_ERR_ESPNOW_IF, which means that the Wi-Fi interface of the peer does not match the one ESP-NOW is using.
But here are the tests I ran to check these parameters:
esp_now_peer_info_t peer_struct_temp;
esp_now_get_peer(&peer_debug_mac1, &peer_struct_temp);
printf("channel : %d, IF : %d\n", peer_struct_temp.channel, peer_struct_temp.ifidx);
uint8_t protocol_bitmap_temp;
err = esp_wifi_get_protocol(ESP_IF_WIFI_STA, &protocol_bitmap_temp);
printf("err : %d, interface STA, bitmap : %d\n", err, protocol_bitmap_temp);
err = esp_wifi_get_protocol(ESP_IF_WIFI_AP, &protocol_bitmap_temp);
printf("err : %d, interface AP, bitmap : %d\n", err, protocol_bitmap_temp);
And the console output I get:
espnow_send FAILED, err : ESP_ERR_ESPNOW_IF
channel : 0, IF : 1
err : 0, interface STA, bitmap : 7
err : 0, interface AP, bitmap : 7
Which means, I believe, that:
- the peer is correctly set on ESP_IF_WIFI_AP (= 1)
- both ESP_IF_WIFI_STA and ESP_IF_WIFI_AP exist (otherwise err would not equal ESP_OK)
...??
If someone has a hint, a suggestion, an idea... it would be a great help.
Thank you all :)
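For reference, here is a minimal Wi-Fi/ESP-NOW bring-up sketch in the style of the ESP-IDF espnow example. Note the esp_wifi_start() call, which the chunks above do not show; whether that is related to the error here is only a guess.

#include "nvs_flash.h"
#include "esp_netif.h"
#include "esp_event.h"
#include "esp_wifi.h"
#include "esp_now.h"

static void wifi_and_espnow_init(void)
{
    ESP_ERROR_CHECK(nvs_flash_init());
    ESP_ERROR_CHECK(esp_netif_init());
    ESP_ERROR_CHECK(esp_event_loop_create_default());
    esp_netif_create_default_wifi_ap();
    esp_netif_create_default_wifi_sta();

    wifi_init_config_t cfg = WIFI_INIT_CONFIG_DEFAULT();
    ESP_ERROR_CHECK(esp_wifi_init(&cfg));
    ESP_ERROR_CHECK(esp_wifi_set_mode(WIFI_MODE_APSTA));
    ESP_ERROR_CHECK(esp_wifi_start());   // Wi-Fi must be started before esp_now_init()

    // Peers added afterwards must use an ifidx (ESP_IF_WIFI_AP / ESP_IF_WIFI_STA)
    // that exists in the current mode; in APSTA mode both are valid.
    ESP_ERROR_CHECK(esp_now_init());
}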

Related

VoiceProcessingIO Audio Unit adds an unexpected input stream to Built-in output device (macOS)

I work on a VoIP app on macOS and use the VoiceProcessingIO Audio Unit for audio processing such as echo cancellation and automatic gain control.
The problem is, when I initialise the audio unit, the list of Core Audio devices changes - not just by adding the new aggregate device that the VP audio unit uses for its own needs, but also because the built-in output device (i.e. "Built-In MacBook Pro Speakers") now appears as an input device as well, i.e. it has an unexpected input stream in addition to the output ones.
This is a list of INPUT devices (aka "microphones") I get from Core Audio before initialising my VP AU:
DEVICE: INPUT 45 BlackHole_UID
DEVICE: INPUT 93 BuiltInMicrophoneDevice
This is the same list when my VP AU is initialised:
DEVICE: INPUT 45 BlackHole_UID
DEVICE: INPUT 93 BuiltInMicrophoneDevice
DEVICE: INPUT 86 BuiltInSpeakerDevice /// WHY?
DEVICE: INPUT 98 VPAUAggregateAudioDevice-0x101046040
This is very frustrating because I need to display a list of devices in the app, and even though I can bluntly filter aggregate devices out of the device list (they are not usable with the VP AU anyway), I cannot exclude the built-in MacBook speaker device.
Maybe some of you have already been through this and have a clue what's going on and whether it can be fixed. Perhaps there is some kAudioObjectPropertyXX I need to watch for to exclude the device from the input list. Of course this might be a bug/feature on Apple's side and I simply have to hack my way around it.
The VP AU works well, and the problem reproduces regardless of the devices used (I tried built-in and external/USB/Bluetooth alike). It reproduces on every macOS version I could test, from 10.13 up to and including 11.0, and on different Macs with different sets of audio devices connected. I find it curious that there is next to zero information on this problem available, which makes me think I did something wrong.
One more strange thing: while the VP AU is working, the HALLab app shows something different: the built-in input gains two more input streams (okay, I could live with it if that were all!). But it does not show the built-in output gaining input streams, as happens in my app.
Here is an extract from the C++ code showing how I set up the VP Audio Unit:
#define MAX_FRAMES_PER_CALLBACK 1024
AudioComponentInstance AvHwVoIP::getComponentInstance(OSType type, OSType subType) {
AudioComponentDescription desc = {0};
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentSubType = subType;
desc.componentType = type;
AudioComponent ioComponent = AudioComponentFindNext(NULL, &desc);
AudioComponentInstance unit;
OSStatus status = AudioComponentInstanceNew(ioComponent, &unit);
if (status != noErr) {
printf("Error: %d\n", status);
}
return unit;
}
void AvHwVoIP::enableIO(uint32_t enableIO, AudioUnit auDev) {
UInt32 no = 0;
setAudioUnitProperty(auDev,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
1,
&enableIO,
sizeof(enableIO));
setAudioUnitProperty(auDev,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
0,
&enableIO,
sizeof(enableIO));
}
void AvHwVoIP::setDeviceAsCurrent(AudioUnit auDev, AudioUnitElement element, AudioObjectID devId) {
//Set the Current Device to the AUHAL.
//this should be done only after IO has been enabled on the AUHAL.
setAudioUnitProperty(auDev,
kAudioOutputUnitProperty_CurrentDevice,
element == 0 ? kAudioUnitScope_Output : kAudioUnitScope_Input,
element,
&devId,
sizeof(AudioDeviceID));
}
void AvHwVoIP::setAudioUnitProperty(AudioUnit auDev,
AudioUnitPropertyID inID,
AudioUnitScope inScope,
AudioUnitElement inElement,
const void* __nullable inData,
uint32_t inDataSize) {
OSStatus status = AudioUnitSetProperty(auDev, inID, inScope, inElement, inData, inDataSize);
if (noErr != status) {
std::cout << "****** ::setAudioUnitProperty failed" << std::endl;
}
}
void AvHwVoIP::start() {
m_auVoiceProcesing = getComponentInstance(kAudioUnitType_Output, kAudioUnitSubType_VoiceProcessingIO);
enableIO(1, m_auVoiceProcesing);
m_format_description = SetAudioUnitStreamFormatFloat(m_auVoiceProcesing);
SetAudioUnitCallbacks(m_auVoiceProcesing);
setDeviceAsCurrent(m_auVoiceProcesing, 0, m_renderDeviceID);//output device AudioDeviceID here
setDeviceAsCurrent(m_auVoiceProcesing, 1, m_capDeviceID);//input device AudioDeviceID here
setInputLevelListener();
setVPEnabled(true);
setAGCEnabled(true);
UInt32 maximumFramesPerSlice = 0;
UInt32 size = sizeof(maximumFramesPerSlice);
OSStatus s1 = AudioUnitGetProperty(m_auVoiceProcesing, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maximumFramesPerSlice, &size);
printf("max frames per callback: %d\n", maximumFramesPerSlice);
maximumFramesPerSlice = MAX_FRAMES_PER_CALLBACK;
s1 = AudioUnitSetProperty(m_auVoiceProcesing, kAudioUnitProperty_MaximumFramesPerSlice, kAudioUnitScope_Global, 0, &maximumFramesPerSlice, size);
OSStatus status = AudioUnitInitialize(m_auVoiceProcesing);
if (noErr != status) {
printf("*** error AU initialize: %d", status);
}
status = AudioOutputUnitStart(m_auVoiceProcesing);
if (noErr != status) {
printf("*** AU start error: %d", status);
}
}
And here is how I get my list of devices:
//does this device have input/output streams?
bool hasStreamsForCategory(AudioObjectID devId, bool input)
{
const AudioObjectPropertyScope scope = (input == true ? kAudioObjectPropertyScopeInput : kAudioObjectPropertyScopeOutput);
AudioObjectPropertyAddress propertyAddress{kAudioDevicePropertyStreams, scope, kAudioObjectPropertyElementWildcard};
uint32_t dataSize = 0;
OSStatus status = AudioObjectGetPropertyDataSize(devId,
&propertyAddress,
0,
NULL,
&dataSize);
if (noErr != status)
printf("%s: Error in AudioObjectGetPropertyDataSize: %d \n", __FUNCTION__, status);
return (dataSize / sizeof(AudioStreamID)) > 0;
}
std::set<AudioDeviceID> scanCoreAudioDeviceUIDs(bool isInput)
{
std::set<AudioDeviceID> deviceIDs{};
// find out how many audio devices there are
AudioObjectPropertyAddress propertyAddress = {kAudioHardwarePropertyDevices, kAudioObjectPropertyScopeGlobal, kAudioObjectPropertyElementMaster};
uint32_t dataSize{0};
OSStatus err = AudioObjectGetPropertyDataSize(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize);
if ( err != noErr )
{
printf("%s: AudioObjectGetPropertyDataSize: %d\n", __FUNCTION__, dataSize);
return deviceIDs;//empty
}
// calculate the number of devices available
uint32_t devicesAvailable = dataSize / sizeof(AudioObjectID);
if ( devicesAvailable < 1 )
{
printf("%s: Core audio available devices were not found\n", __FUNCTION__);
return deviceIDs;//empty
}
AudioObjectID devices[devicesAvailable];//devices to get
err = AudioObjectGetPropertyData(kAudioObjectSystemObject, &propertyAddress, 0, NULL, &dataSize, devices);
if ( err != noErr )
{
printf("%s: Core audio available devices were not found\n", __FUNCTION__);
return deviceIDs;//empty
}
const AudioObjectPropertyScope scope = (isInput == true ? kAudioObjectPropertyScopeInput : kAudioObjectPropertyScopeOutput);
for (uint32_t i = 0; i < devicesAvailable; ++i)
{
const bool hasCorrespondingStreams = hasStreamsForCategory(devices[i], isInput);
if (!hasCorrespondingStreams) {
continue;
}
printf("DEVICE: \t %s \t %d \t %s\n", isInput ? "INPUT" : "OUTPUT", devices[i], deviceUIDFromAudioDeviceID(devices[i]).c_str());
deviceIDs.insert(devices[i]);
}//end for
return deviceIDs;
}
Well, I'm answering my own question 4 months later, now that Apple Feedback Assistant has responded to my request:
"There are two things you were noticing, both of which are expected and considered as implementation details of AUVP:
The speaker device has input stream - this is the reference tap stream for echo cancellation.
There is additional input stream under the built-in mic device - this is the raw mic streams enabled by AUVP.
For #1, We'd advise you to treat built-in speaker and (on certain Macs) headphone with special caution when determining whether it’s input/output device based on its input/output streams.
For #2, We'd advise you to ignore the extra streams on the device."
So they suggest doing exactly what I ended up doing: determine the built-in output device before starting the AU and just memorise it, and ignore any extra streams that appear on built-in devices while the VP AU is operating.
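A minimal sketch of that approach in C (it reuses the hasStreamsForCategory() helper from the question; checking kAudioDevicePropertyTransportType is my own addition, not something Apple's reply prescribes): before the VP AU is initialised, snapshot the built-in devices that currently have no input streams, and later filter exactly those IDs out of any input list.

#include <CoreAudio/CoreAudio.h>
#include <stdbool.h>

// True if the device reports the built-in transport type.
static bool isBuiltInDevice(AudioObjectID devId)
{
    AudioObjectPropertyAddress addr = {
        kAudioDevicePropertyTransportType,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMaster
    };
    UInt32 transport = 0;
    UInt32 size = sizeof(transport);
    if (AudioObjectGetPropertyData(devId, &addr, 0, NULL, &size, &transport) != noErr)
        return false;
    return transport == kAudioDeviceTransportTypeBuiltIn;
}

// Call once before AudioUnitInitialize(): remember built-in devices that have
// no input streams yet (i.e. the speakers / headphones). outIds is a
// caller-provided array of at least n elements; purely illustrative.
static void snapshotBuiltInOutputs(const AudioObjectID *allDevs, UInt32 n,
                                   AudioObjectID *outIds, UInt32 *outCount)
{
    *outCount = 0;
    for (UInt32 i = 0; i < n; ++i) {
        if (isBuiltInDevice(allDevs[i]) && !hasStreamsForCategory(allDevs[i], true))
            outIds[(*outCount)++] = allDevs[i];
    }
}

Any device recorded this way can then be skipped in the scanCoreAudioDeviceUIDs() loop when isInput is true, regardless of the extra streams the VP AU adds later.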

Multiple Slamtec LIDAR Connection Issues with MATLAB

I'm running into an initial LIDAR connection issue when simultaneously connecting 4 Slamtec RPLIDAR A3 units using MATLAB
with the provided interface library found here: https://github.com/ENSTABretagneRobotics/Hardware-MATLAB
The issue is that I have to retry the connection on at least one of the LIDARs before it connects.
Which LIDAR it is can also vary. That is, all but one LIDAR connects the first time.
One time it could be the LIDAR on one COM port, another time a LIDAR on another COM port.
This is the way it is set up right now.
Basically MATLAB loads the provided interface library, hardwarex.dll, which exposes some library methods to be used by MATLAB.
The method to connect the LIDAR does the following:
Opens the RS232 port
Sets port options
Gets some info and health statuses from the lidar
Sets the motor PWM to zero (stop lidar motor)
Uses express scan mode option
Somewhere in here the communication errors out.
Using a serial sniffer I was able to see that the LIDAR errors out after the following message to the LIDAR:
a5 f0 02 ff 03 ab a5 25 a5 82 05 00 00 00 00 00 22
Which I tracked to the following library methods, in that order
SetMotorPWMRequestRPLIDAR()
CheckMotorControlSupportRequestRPLIDAR()
StopRequestRPLIDAR()
StartExpressScanRequestRPLIDAR() <-- Error here
To which the LIDAR responds with:
a5 5a 54 00 00 40 82
Whereas a successful connection response from the LIDAR is much longer.
Things I've tried
Draining (forcing out all write data from) the write buffer with the interface library's DrainComputerRS232Port() method before and/or after any write to the lidar.
Setting the TX/Write OS FIFO buffer to FILE_FLAG_NO_BUFFERING (ie. WriteFile()).
Changing the hardware FIFO buffer from max (16) to min (1).
Using MATLAB's serial() command to flush any input or output buffers prior to loading the library or trying the connections.
This is the system and settings I am working with
Lidar (x4):
Slamtec RPLIDAR A3
Firmware 1.26
Connected via USB (no USB hub used)
No other COM port devices connected
Computer
OS: Windows 10 Pro - Build 1903
CPU: Intel Xeon 3.00Ghz
RAM: 64 GB
HD: SSD - 512GB NVMe
Serial Port Settings
Baud Rate: 256000
Timeout: 1000
Software
MATLAB R2018b (9.5.0)
I've been banging my head on the wall with this. Any help is much much appreciated!
I'm going to answer my own question. Anyone interested in a more detailed discussion, please refer to the issue posted on the MATLAB RPLIDAR repo:
https://github.com/ENSTABretagneRobotics/Hardware-MATLAB/issues/2
As I mentioned, when debugging, the error seemed to happen in ConnectRPLIDAR() --> StartExpressScanRequestRPLIDAR(), specifically here:
// Receive the first data response (2 data responses needed for angles computation...).
memset(pRPLIDAR->esdata_prev, 0, sizeof(pRPLIDAR->esdata_prev));
if (ReadAllRS232Port(&pRPLIDAR->RS232Port, pRPLIDAR->esdata_prev, sizeof(pRPLIDAR->esdata_prev)) != EXIT_SUCCESS)
{
// Failure
printf("A RPLIDAR is not responding correctly. \n");
return EXIT_FAILURE;
}
What seemed to happen before that is that after the command was sent out in WriteAllRS232Port(), sometimes ReadAllRS232Port() would not read a response; esdata_prev would contain nothing.
We tried adding an mSleep(500) delay before that second ReadAllRS232Port(), and it seemed to help (my guess is that the lidar was slow to respond), but the issue was not fully resolved.
The following is what made it work every time with 4 lidars:
inline int StartExpressScanRequestRPLIDAR(RPLIDAR* pRPLIDAR)
{
unsigned char reqbuf[] = { START_FLAG1_RPLIDAR,EXPRESS_SCAN_REQUEST_RPLIDAR,0x05,0,0,0,0,0,0x22 };
unsigned char descbuf[7];
unsigned char sync = 0;
unsigned char ChkSum = 0;
// Send request to output/tx OS FIFO buffer for port
if (WriteAllRS232Port(&pRPLIDAR->RS232Port, reqbuf, sizeof(reqbuf)) != EXIT_SUCCESS)
{
printf("Error writing data to a RPLIDAR. \n");
return EXIT_FAILURE;
}
// Receive the response descriptor.
memset(descbuf, 0, sizeof(descbuf)); // Clear the buffer
if (ReadAllRS232Port(&pRPLIDAR->RS232Port, descbuf, sizeof(descbuf)) != EXIT_SUCCESS)
{
printf("A RPLIDAR is not responding correctly. \n");
return EXIT_FAILURE;
}
// Quick check of the response descriptor.
if ((descbuf[2] != 0x54) || (descbuf[5] != 0x40) || (descbuf[6] != MEASUREMENT_CAPSULED_RESPONSE_RPLIDAR))
{
printf("A RPLIDAR is not responding correctly. \n");
return EXIT_FAILURE;
}
// Keep checking the port read buffer for up to 1.5 seconds
int timeout = 1500;
// Check it every 5 ms
// Note on Checking Period Value:
// Waiting on 82 bytes in lidar payload
// 10 bits per byte for the serial communication
// 820 bits / 256000 baud = 0.0032s = 3.2ms
int checkingperiod = 5;
RS232PORT* pRS232Port = &pRPLIDAR->RS232Port;
int i;
int count = 0;
// Wait for something to show up on the input buffer on port
if (!WaitForRS232Port(&pRPLIDAR->RS232Port, timeout, checkingperiod))
{
//Success - Something is there
// If anything is on the input buffer, wait until there is enough
count = 0;
for (i = 0; i < 50; i++)
{
// Check the input FIFO buffer on the port
GetFIFOComputerRS232Port(pRS232Port->hDev, &count);
// Check if there is enough to get a full payload read
if (count >= sizeof(pRPLIDAR->esdata_prev))
{
// There is enough, stop waiting
break;
}
else
{
// Not enough, wait a little
mSleep(checkingperiod);
}
}
}
else
{
//Failure - After waiting for an input buffer, it wasn't there
printf("[StartExpressScanRequestRPLIDAR] : Failed to detect response on the input FIFO buffer. \n");
return EXIT_FAILURE;
}
// Receive the first data response (2 data responses needed for angles computation...).
memset(pRPLIDAR->esdata_prev, 0, sizeof(pRPLIDAR->esdata_prev));
if (ReadAllRS232Port(&pRPLIDAR->RS232Port, pRPLIDAR->esdata_prev, sizeof(pRPLIDAR->esdata_prev)) != EXIT_SUCCESS)
{
// Failure
printf("A RPLIDAR is not responding correctly. \n");
return EXIT_FAILURE;
}
// Analyze the first data response.
sync = (pRPLIDAR->esdata_prev[0] & 0xF0)|(pRPLIDAR->esdata_prev[1]>>4);
if (sync != START_FLAG1_RPLIDAR)
{
printf("A RPLIDAR is not responding correctly : Bad sync1 or sync2. \n");
return EXIT_FAILURE;
}
ChkSum = (pRPLIDAR->esdata_prev[1]<<4)|(pRPLIDAR->esdata_prev[0] & 0x0F);
// Force ComputeChecksumRPLIDAR() to compute until the last byte...
if (ChkSum != ComputeChecksumRPLIDAR(pRPLIDAR->esdata_prev+2, sizeof(pRPLIDAR->esdata_prev)-1))
{
printf("A RPLIDAR is not responding correctly : Bad ChkSum. \n");
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
So in the above code, we wait up to 1.5 s for the OS read FIFO buffer to show something, checking every 5 ms (WaitForRS232Port()). If anything shows up, we keep waiting until there is enough for a full payload (GetFIFOComputerRS232Port()).
I'm not sure whether it made a difference, but we also disabled the OS write FIFO buffering by changing the CreateFile flags from 0 to FILE_FLAG_NO_BUFFERING:
File: OSComputerRS232Port.h
...
hDev = CreateFile(
tstr,
GENERIC_READ|GENERIC_WRITE,
0, // Must be opened with exclusive-access.
NULL, // No security attributes.
OPEN_EXISTING, // Must use OPEN_EXISTING.
FILE_FLAG_NO_BUFFERING, // Not overlapped I/O. Should use FILE_FLAG_WRITE_THROUGH and maybe also FILE_FLAG_NO_BUFFERING?
NULL // hTemplate must be NULL for comm devices.
);
...
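The comment in that snippet also mentions FILE_FLAG_WRITE_THROUGH; a variant one could try (purely an assumption on my side, not something we verified) would pass both flags:

hDev = CreateFile(
    tstr,
    GENERIC_READ|GENERIC_WRITE,
    0,                                               // Must be opened with exclusive-access.
    NULL,                                            // No security attributes.
    OPEN_EXISTING,                                   // Must use OPEN_EXISTING.
    FILE_FLAG_WRITE_THROUGH|FILE_FLAG_NO_BUFFERING,  // Write through and skip OS buffering.
    NULL                                             // hTemplate must be NULL for comm devices.
);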

MFTransform encoder->ProcessInput returns E_FAIL

When I run encoder->ProcessInput(stream_id, sample.Get(), 0) I get an E_FAIL ("Unspecified error"), which isn't very helpful.
I am trying to either (1) figure out what the real error is and/or (2) get past this unspecified error.
Ultimately, my goal is achieving this: http://alax.info/blog/1716
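(As an aside on (1): most of the setup calls below assign their HRESULTs to a _com_error but never check them, so an early failure can surface later as E_FAIL at ProcessInput. A tiny check helper like this, which is my own addition and not part of the original code, prints the first failing call:)

#include <windows.h>
#include <stdio.h>

// Log any failed HRESULT together with the expression that produced it.
#define CHECK_HR(expr)                                                    \
    do {                                                                  \
        HRESULT hr_ = (expr);                                             \
        if (FAILED(hr_))                                                  \
            printf("%s failed: 0x%08lX\n", #expr, (unsigned long)hr_);    \
    } while (0)

// Usage, wrapping the existing calls:
//   CHECK_HR(encoder->SetOutputType(stream_id, output_type, 0));
//   CHECK_HR(encoder->ProcessMessage(MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, NULL));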
Here's the gist of what I am doing:
(Error occurs in this block)
void encode_frame(ComPtr<ID3D11Texture2D> texture) {
_com_error error = NULL;
IMFTransform *encoder = nullptr;
encoder = get_encoder();
if (!encoder) {
cout << "Did not get a valid encoder to utilize\n";
return;
}
cout << "Making it Direct3D aware...\n";
setup_D3_aware_mft(encoder);
cout << "Setting up input/output media types...\n";
setup_media_types(encoder);
error = encoder->ProcessMessage(MFT_MESSAGE_COMMAND_FLUSH, NULL); // flush all stored data
error = encoder->ProcessMessage(MFT_MESSAGE_NOTIFY_BEGIN_STREAMING, NULL);
error = encoder->ProcessMessage(MFT_MESSAGE_NOTIFY_START_OF_STREAM, NULL); // first sample is about to be processed, req for async
cout << "Encoding image...\n";
IMFMediaEventGenerator *event_generator = nullptr;
error = encoder->QueryInterface(&event_generator);
while (true) {
IMFMediaEvent *event = nullptr;
MediaEventType type;
error = event_generator->GetEvent(0, &event);
error = event->GetType(&type);
uint32_t stream_id = get_stream_id(encoder); // Likely just going to be 0
uint32_t frame = 1;
uint64_t sample_duration = 0;
ComPtr<IMFSample> sample = nullptr;
IMFMediaBuffer *mbuffer = nullptr;
DWORD length = 0;
uint32_t img_size = 0;
MFCalculateImageSize(desktop_info.input_sub_type, desktop_info.width, desktop_info.height, &img_size);
switch (type) {
case METransformNeedInput:
ThrowIfFailed(MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), texture.Get(), 0, false, &mbuffer),
mbuffer, "Failed to generate a media buffer");
ThrowIfFailed(MFCreateSample(&sample), sample.Get(), "Couldn't create sample buffer");
ThrowIfFailed(sample->AddBuffer(mbuffer), sample.Get(), "Couldn't add buffer");
// Test (delete this) - fake buffer
/*byte *buffer_data;
MFCreateMemoryBuffer(img_size, &mbuffer);
mbuffer->Lock(&buffer_data, NULL, NULL);
mbuffer->GetCurrentLength(&length);
memset(buffer_data, 0, img_size);
mbuffer->Unlock();
mbuffer->SetCurrentLength(img_size);
sample->AddBuffer(mbuffer);*/
MFFrameRateToAverageTimePerFrame(desktop_info.fps, 1, &sample_duration);
sample->SetSampleDuration(sample_duration);
// ERROR
ThrowIfFailed(encoder->ProcessInput(stream_id, sample.Get(), 0), sample.Get(), "ProcessInput failed.");
I set up my media types like this:
void setup_media_types(IMFTransform *encoder) {
IMFMediaType *output_type = nullptr;
IMFMediaType *input_type = nullptr;
ThrowIfFailed(MFCreateMediaType(&output_type), output_type, "Failed to create output type");
ThrowIfFailed(MFCreateMediaType(&input_type), input_type, "Failed to create input type");
/*
List of all MF types:
https://learn.microsoft.com/en-us/windows/desktop/medfound/alphabetical-list-of-media-foundation-attributes
*/
_com_error error = NULL;
int stream_id = get_stream_id(encoder);
error = output_type->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
error = output_type->SetGUID(MF_MT_SUBTYPE, desktop_info.output_sub_type);
error = output_type->SetUINT32(MF_MT_AVG_BITRATE, desktop_info.bitrate);
error = MFSetAttributeSize(output_type, MF_MT_FRAME_SIZE, desktop_info.width, desktop_info.height);
error = MFSetAttributeRatio(output_type, MF_MT_FRAME_RATE, desktop_info.fps, 1);
error = output_type->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive); // motion will be smoother, fewer artifacts
error = output_type->SetUINT32(MF_MT_MPEG2_PROFILE, eAVEncH264VProfile_High);
error = output_type->SetUINT32(MF_MT_MPEG2_LEVEL, eAVEncH264VLevel3_1);
error = output_type->SetUINT32(CODECAPI_AVEncCommonRateControlMode, eAVEncCommonRateControlMode_CBR); // probably will change this
ThrowIfFailed(encoder->SetOutputType(stream_id, output_type, 0), output_type, "Couldn't set output type");
error = input_type->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
error = input_type->SetGUID(MF_MT_SUBTYPE, desktop_info.input_sub_type);
error = input_type->SetUINT32(MF_MT_INTERLACE_MODE, MFVideoInterlace_Progressive);
error = MFSetAttributeSize(input_type, MF_MT_FRAME_SIZE, desktop_info.width, desktop_info.height);
error = MFSetAttributeRatio(input_type, MF_MT_FRAME_RATE, desktop_info.fps, 1);
error = MFSetAttributeRatio(input_type, MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
ThrowIfFailed(encoder->SetInputType(stream_id, input_type, 0), input_type, "Couldn't set input type");
}
My desktop_info struct is:
struct desktop_info {
int fps = 30;
int width = 2560;
int height = 1440;
uint32_t bitrate = 10 * 1000000; // 10Mb
GUID input_sub_type = MFVideoFormat_ARGB32;
GUID output_sub_type = MFVideoFormat_H264;
} desktop_info;
Output of my program prior to reaching ProcessInput:
Hello World!
Number of devices: 3
Device #0
Adapter: Intel(R) HD Graphics 630
Got some information about the device:
\\.\DISPLAY2
Attached to desktop : 1
Got some information about the device:
\\.\DISPLAY1
Attached to desktop : 1
Did not find another adapter. Index higher than the # of outputs.
Successfully duplicated output from IDXGIOutput1
Accumulated frames: 0
Created a 2D texture...
Number of encoders/processors available: 1
Encoder name: Intel® Quick Sync Video H.264 Encoder MFT
Making it Direct3D aware...
Setting up input/output media types...
If you're curious what my Locals were right before ProcessInput: http://prntscr.com/mx1i9t
This may be an "unpopular" answer since it doesn't provide a solution for MFT specifically, but after 8 months of working heavily on this stuff, I would highly recommend not using MFT and instead implementing encoders directly.
My solution was implementing a hardware encoder like NVENC/QSV, and you can fall back on a software encoder like x264 if the client doesn't have hardware acceleration available.
The reason is that MFT is far more opaque and not well documented/supported by Microsoft. I think you'll also find that you want more control over the encoders' settings and parameter tuning, and each encoder implementation is subtly different.
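A rough sketch of that fallback chain (the encoder_t type and the create_*_encoder() factories are hypothetical placeholders for whatever NVENC/QSV/x264 wrappers you end up writing, stubbed out here so the sketch compiles):

#include <stddef.h>

typedef struct encoder encoder_t;   /* opaque handle, hypothetical */

/* Hypothetical factories: each returns NULL when its backend is unavailable.
   Real bodies would wrap NVENC, Intel Media SDK (QSV) and x264 respectively. */
static encoder_t *create_nvenc_encoder(int w, int h, int fps, int bitrate) { (void)w; (void)h; (void)fps; (void)bitrate; return NULL; }
static encoder_t *create_qsv_encoder(int w, int h, int fps, int bitrate)   { (void)w; (void)h; (void)fps; (void)bitrate; return NULL; }
static encoder_t *create_x264_encoder(int w, int h, int fps, int bitrate)  { (void)w; (void)h; (void)fps; (void)bitrate; return NULL; }

/* Try hardware encoders first, then fall back to software. */
static encoder_t *create_best_encoder(int w, int h, int fps, int bitrate)
{
    encoder_t *enc = create_nvenc_encoder(w, h, fps, bitrate);
    if (!enc) enc = create_qsv_encoder(w, h, fps, bitrate);
    if (!enc) enc = create_x264_encoder(w, h, fps, bitrate);
    return enc;   /* NULL means no encoder could be created at all */
}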
We have seen this error coming from the Intel graphics driver. (The H.264 encoder MFT uses the Intel GPU to encode the video into H.264 format.)
In our case, I think the bug was triggered by configuring the encoder to a very high bit rate and then reconfiguring it to a low bit rate. In your sample code it does not look like you are changing the bit rate, so I am not sure if it is the same bug.
Intel just released a new driver about two weeks ago that is supposed to contain the fix for the bug we were seeing. So you may want to give that new driver a try -- hopefully it will fix the problem you are having.
The new driver is version 25.20.100.6519. You can get it from the Intel web site: https://downloadcenter.intel.com/download/28566/Intel-Graphics-Driver-for-Windows-10
If the new driver does not fix the problem, you could try running your program on a different PC that uses a NVidia or AMD graphics card, to see if the problem only happens on PCs that have Intel graphics.

How do I make sure insmod fails on error?

I developed a peripheral driver for Linux. The .probe function performs the usual error checks, such as handling memory allocation failures, and also attempts to communicate with the hardware; on any kind of error it deallocates any memory and returns an error code like -ENOMEM or -EIO.
The problem is, although the probe function returns -EIO when the hardware is unreachable, I still see the module listed in the lsmod output. Is it possible to make sure insmod fails completely when there is a problem during initialization?
Here is my current probe function. All device-specific functions return an appropriate error code on failure, usually -EIO.
static int mlx90399_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
int err;
struct mlx90399_data *data;
data = kzalloc(sizeof(*data), GFP_KERNEL);
if (!data) {
dev_err(&client->dev, "Memory allocation fails\n");
err = -ENOMEM;
goto exit;
}
i2c_set_clientdata(client, data);
data->client = client;
mutex_init(&data->lock);
data->mode = MLX90399_MODE_OFF;
err = mlx90399_reset(client);
if (err < 0)
goto exit;
msleep(1); /* nominal 0.6ms from reset */
err = mlx90399_write_register_defaults(client);
if (err < 0)
goto exit;
err = mlx90399_update_scale(client);
if (err < 0)
goto exit;
data->indio_dev = iio_allocate_device(0);
if (data->indio_dev == NULL) {
err = -ENOMEM;
goto exit;
}
data->indio_dev->dev.parent = &client->dev;
data->indio_dev->info = &mlx90399_info;
data->indio_dev->dev_data = (void *)(data);
data->indio_dev->modes = INDIO_DIRECT_MODE;
mlx90399_setup_irq(client);
err = iio_device_register(data->indio_dev);
if(err < 0)
goto exit;
return 0;
exit:
kfree(data);
return err;
}
See the comment in __driver_attach():
/*
* Lock device and try to bind to it. We drop the error
* here and always return 0, because we need to keep trying
* to bind to devices and some drivers will return an error
* simply if it didn't support the device.
*
* driver_probe_device() will spit a warning if there
* is an error.
*/
To make the module initialization fail, unregister the driver and return an error code from your init function.
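A minimal sketch of that idea (the probe_succeeded flag and the exact names are my own illustration, not a prescribed pattern; keep in mind that probing can also be deferred, in which case failing init like this would be wrong):

#include <linux/module.h>
#include <linux/i2c.h>

static bool probe_succeeded;   /* set to true at the end of a successful probe */

static struct i2c_driver mlx90399_driver = {
    .driver = { .name = "mlx90399" },
    /* .probe = mlx90399_probe, .remove = ..., .id_table = ..., */
};

static int __init mlx90399_init(void)
{
    int ret = i2c_add_driver(&mlx90399_driver);
    if (ret)
        return ret;

    /* If the device was present, probe ran during i2c_add_driver() and set
     * the flag. Otherwise undo the registration and make insmod fail. */
    if (!probe_succeeded) {
        i2c_del_driver(&mlx90399_driver);
        return -ENODEV;
    }
    return 0;
}
module_init(mlx90399_init);

static void __exit mlx90399_exit(void)
{
    i2c_del_driver(&mlx90399_driver);
}
module_exit(mlx90399_exit);

MODULE_LICENSE("GPL");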
Note that there isn't necessarily a 1:1 relationship between a module and a device. One module may be used for several devices. With the use of device trees, for example, a device tree may declare several on-board UARTs, all using one serial device kernel module. The module's probe function would be called several times, once for each device. Just because one probe call fails, that doesn't necessarily mean the module should be unloaded.

Getting PCSC reader serial number with WinSCard

I have a problem getting the PC/SC reader serial number when no card is present in the reader. I am using winscard.dll and C++.
The following code works only when a card is present in the reader. Otherwise the SCARDHANDLE is not retrieved, and I haven't found any other way to get an SCARDHANDLE.
SCARDHANDLE hCardHandle;
SCARDCONTEXT hSC;
WCHAR pCardReaderName[256];
LONG lReturn;
lReturn = SCardEstablishContext(SCARD_SCOPE_USER, 0, 0, &hSC);
if (lReturn != SCARD_S_SUCCESS)
{
Console::WriteLine("SCardEstablishContext() failed\n");
return;
}
my_select_reader(hSC, pCardReaderName); // just shows reader names in console and requires you to pick one
// connect to smart card
DWORD dwAP;
lReturn = SCardConnect( hSC,
(LPCWSTR)pCardReaderName,
SCARD_SHARE_SHARED,
SCARD_PROTOCOL_T0 | SCARD_PROTOCOL_T1 | SCARD_PROTOCOL_RAW,
&hCardHandle,
&dwAP );
if ( SCARD_S_SUCCESS != lReturn )
{
Console::WriteLine("Failed SCardConnect\n");
exit(1); // Or other appropriate action.
}
// get reader serial no
LPBYTE pbAtr = NULL;
DWORD cByte = SCARD_AUTOALLOCATE;
lReturn = SCardGetAttrib(hCardHandle,
SCARD_ATTR_VENDOR_IFD_SERIAL_NO,
(LPBYTE)&pbAtr,
&cByte);
if ( SCARD_S_SUCCESS != lReturn )
{
Console::WriteLine("Failed to retrieve Reader Serial\n");
exit(1); // Or other appropriate action.
}
printf("serial no: %s", pbAtr);
SCardFreeMemory(hSC, pbAtr); // takes the context handle, not the card handle
Is there a way to get the reader's serial number without connecting to a card?
Maybe I'm a bit late - but anyway...
You can connect directly to the card reader using the SCARD_SHARE_DIRECT flag with SCardConnect. At least for us this works fine (we use a protocol flag of 0x00).
You are currently using:
lReturn = SCardConnect(hResManager,szAvailRdr,SCARD_SHARE_SHARED,SCARD_PROTOCOL_T1,
&hCardHandle,
&dwActProtocol);
Instead, try using:
lReturn = SCardConnect(hResManager,szAvailRdr,SCARD_SHARE_DIRECT,
NULL,
&hCardHandle,
NULL);
where szAvailRdr refers to the reader name (the smart card reader name) and hCardHandle is the card handle obtained from SCardConnect.
This should keep you going!
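Putting the two answers together, a minimal sketch of the direct-connect flow in C (no card required; error handling trimmed, and note that some reader drivers simply do not support this attribute, in which case SCardGetAttrib fails):

#include <windows.h>
#include <winscard.h>
#include <stdio.h>

// Print the serial number of a reader without a card.
// szReader is the reader name, e.g. one returned by SCardListReaders().
static void print_reader_serial(LPCTSTR szReader)
{
    SCARDCONTEXT hResManager;
    SCARDHANDLE  hCardHandle;
    DWORD        dwActProtocol = 0;
    LPBYTE       pbSerial = NULL;
    DWORD        cbSerial = SCARD_AUTOALLOCATE;

    if (SCardEstablishContext(SCARD_SCOPE_USER, NULL, NULL, &hResManager) != SCARD_S_SUCCESS)
        return;

    // SCARD_SHARE_DIRECT talks to the reader itself, so no card is needed.
    if (SCardConnect(hResManager, szReader, SCARD_SHARE_DIRECT, 0,
                     &hCardHandle, &dwActProtocol) == SCARD_S_SUCCESS)
    {
        // Ask the driver for the reader serial number attribute.
        if (SCardGetAttrib(hCardHandle, SCARD_ATTR_VENDOR_IFD_SERIAL_NO,
                           (LPBYTE)&pbSerial, &cbSerial) == SCARD_S_SUCCESS)
        {
            printf("serial no: %.*s\n", (int)cbSerial, pbSerial);
            SCardFreeMemory(hResManager, pbSerial);
        }
        SCardDisconnect(hCardHandle, SCARD_LEAVE_CARD);
    }
    SCardReleaseContext(hResManager);
}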
