Media Foundation wrong video size - winapi

Sometimes the source reader returns an incorrect video size. It happens with some H.265 video files.
This is the reference video: https://drive.google.com/file/d/12oH9x7MCW7YFZu1MDKGbOnIt4VvxrgtZ/view?usp=share_link
3840x2160 pixels
CComPtr<IMFSourceReader> r;
CComPtr<IMFMediaType> NativeMT;
MFCreateSourceReaderFromURL(L"file.mp4", 0, &r);
r->GetCurrentMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, &NativeMT);
UINT32 wi = 0, he = 0;
MFGetAttributeSize(NativeMT, MF_MT_FRAME_SIZE, &wi, &he);
This returns 3840x2176. Why?
I will follow up with more problems, because this format also fails to convert to another H.264/H.265 video with Media Foundation.

So you have this media type for the video track:
MF_MT_FRAME_SIZE, 16492674418816 (Type VT_UI8, 3840x2176)
MF_MT_MINIMUM_DISPLAY_APERTURE, 00 00 00 00 00 00 00 00 00 0F 00 00 70 08 00 00 (Type VT_VECTOR | VT_UI1)
The latter quoted attribute decodes as an MFVideoArea:
{OffsetX={fract=0 value=0} OffsetY={fract=0 value=0} Area={cx=3840 cy=2160}}
You should take this into account and be ready to accept samples carrying a 3840x2160 payload in 3840x2176 buffers.
Related Q: Handling Image data from IMFSourceReader and IMFSample.
IMO this is still a bug: the H.265 demultiplexer knows it is handling encoded video, where padding makes no sense. It is the decoder that applies padding when it sets up its textures or buffers; the padded size only starts to matter there.
Also, AFAIR this behavior differs from the H.264 codec, and AFAIR the same side effect causes a problem with the property sheet handler, which displays this wrong resolution for media files.
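For example, a minimal sketch (C++, error handling trimmed) of reading the display rectangle off the media type from the question, continuing its NativeMT variable:
// Needs mfapi.h / mfobjects.h. When present, the aperture describes the visible
// 3840x2160 region inside the padded 3840x2176 coded frame.
MFVideoArea Aperture = {};
UINT32 cbAperture = 0;
if (SUCCEEDED(NativeMT->GetBlob(MF_MT_MINIMUM_DISPLAY_APERTURE,
        reinterpret_cast<UINT8*>(&Aperture), sizeof(Aperture), &cbAperture)))
{
    // Display size: Aperture.Area.cx x Aperture.Area.cy, offset by
    // (Aperture.OffsetX.value, Aperture.OffsetY.value) within the coded frame.
}
else
{
    // No aperture attribute: the coded size is also the display size.
    UINT32 Width = 0, Height = 0;
    MFGetAttributeSize(NativeMT, MF_MT_FRAME_SIZE, &Width, &Height);
}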

Related

Can't get the right formula to set frame pts for a stream using libav

I'm trying to save a stream of frames as mp4.
The source framerate is not fixed; it stays in the range [15, 30].
Encoder params:
...
eCodec.time_base = AVRational(1,3000);
eCodec.framerate = AVRational(30, 1);
...
Stream params:
eStream = avformat_new_stream(eFormat, null);
eStream.codecpar = codecParams;
eStream.time_base = eCodec.time_base;
Decoder time_base is 0/1 and it marks each frame with a pts like:
480000
528000
576000
...
PTS(f) is always == PTS(f-1)+48000
Encoding (dFrame is the received frame, micro the elapsed time in microseconds):
av_frame_copy(eFrame, dFrame);
eFrame.pts = micro*3/1000;
This makes the video play too fast.
I can't understand why, but changing micro*3/1000 to micro*3*4/1000 makes the video play at the correct speed (checked against a clock after many minutes of varying fps).
What am I missing?
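For reference, the micro*3/1000 expression is the conversion from microseconds into the 1/3000 encoder time base; a minimal sketch of the same computation with av_rescale_q (names follow the snippets above):
#include <stdint.h>
#include <libavutil/rational.h>      // AVRational
#include <libavutil/mathematics.h>   // av_rescale_q

// Elapsed wall-clock microseconds -> ticks in the 1/3000 encoder time base.
// av_rescale_q(a, bq, cq) returns a * bq / cq, so this equals micro * 3 / 1000.
static int64_t micros_to_pts(int64_t micro)
{
    AVRational us_tb  = { 1, 1000000 };  // source: microseconds
    AVRational enc_tb = { 1, 3000 };     // destination: eCodec.time_base
    return av_rescale_q(micro, us_tb, enc_tb);
}
One thing worth double-checking: avformat_write_header may change eStream.time_base, in which case packet timestamps have to be rescaled into the actual stream time base (e.g. with av_packet_rescale_ts) before writing.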

What is this H264 container?

I have an H.264 video file; when I remove a header that I think is a container, I can play it in VLC or convert it with ffmpeg.
This is the header that appears before an I-frame:
Note: it always starts with 02 32 64 63 and is 0x40 = 64 bytes long.
After that the H.264 NAL starts: 00 00 00 01...
This is the header that appears before a P-frame:
Note: it always starts with 02 33 64 63 and is 0x40 = 64 bytes long.
After that the H.264 NAL starts: 00 00 00 01...
This is the header that appears before an audio frame:
Note: it always starts with 02 34 64 63 and is 0x40 = 64 bytes long.
I don't use it yet because the device has no audio.
Maybe if I knew what it was I could extract more info, or tell ffmpeg to convert it directly.
This signature does not appear in:
https://www.garykessler.net/library/file_sigs.html
https://en.wikipedia.org/wiki/List_of_file_signatures
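In case it helps while the format is unidentified, here is a minimal C++ sketch of the stripping step described above (the file names are placeholders, and the header test - a 0x02 byte with an Annex-B start code 64 bytes later - is an assumption based on the dumps; a real tool would also check the frame-type byte 0x32/0x33/0x34 and the 0x64 0x63 marker at the offsets visible in the hex):
#include <cstdint>
#include <fstream>
#include <iterator>
#include <vector>

int main()
{
    std::ifstream in("capture.bin", std::ios::binary);    // placeholder input name
    std::vector<unsigned char> buf((std::istreambuf_iterator<char>(in)),
                                    std::istreambuf_iterator<char>());
    std::ofstream out("stream.h264", std::ios::binary);   // Annex-B output for VLC/ffmpeg

    for (std::size_t i = 0; i < buf.size(); )
    {
        // Assumed test for the proprietary 64-byte header: it starts with 0x02 and an
        // H.264 start code (00 00 00 01) follows immediately after the 64 bytes.
        bool header = i + 68 <= buf.size()
                   && buf[i] == 0x02
                   && buf[i + 64] == 0x00 && buf[i + 65] == 0x00
                   && buf[i + 66] == 0x00 && buf[i + 67] == 0x01;
        if (header)
            i += 64;                                       // drop the container header
        else
            out.put(static_cast<char>(buf[i++]));          // keep elementary-stream bytes
    }
    return 0;
}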

Xamarin Forms Debug Build to iPhone from PC Causes Crash

I'm building my first multi-platform app for Android and iOS from my PC and have had no issues on the Android side. I also have a Mac so that I can do the iOS side of things. I set up my Mac via the Microsoft guides, have a developer account, and did certificates, provisioning, etc.
So I start by running the app on the iPhone 13 Pro Max simulator (iOS 15.2) - it works flawlessly. The app requires BLE functionality, so obviously I can't test everything in the simulator. So I debug to my actual iPhone 13 Pro Max (iOS 15.2) that's plugged into my PC. The app boots up and immediately the UI looks kind of strange; there are weird information display bugs that aren't present in the simulator, which looks as expected. I navigate from the main page to the second page, no issues. From the second page to the third page: immediate crash, no errors, no exceptions.
Just this:
Native Crash Reporting
=================================================================
Got a abrt while executing native code. This usually indicates
a fatal error in the mono runtime or one of the native libraries
used by your application.
=================================================================
=================================================================
Native stacktrace:
=================================================================
0x10446c1e0 - /private/var/containers/Bundle/Application/0EFE911B-4292-432A-B028-6CEB1EFAFB2A/EMBTuner.iOS.app/Xamarin.PreBuilt.iOS : mono_dump_native_crash_info
0x104462c5c - /private/var/containers/Bundle/Application/0EFE911B-4292-432A-B028-6CEB1EFAFB2A/EMBTuner.iOS.app/Xamarin.PreBuilt.iOS : mono_handle_native_crash
0x10446b728 - /private/var/containers/Bundle/Application/0EFE911B-4292-432A-B028-6CEB1EFAFB2A/EMBTuner.iOS.app/Xamarin.PreBuilt.iOS : sigabrt_signal_handler
0x1f2a5ac10 - /usr/lib/system/libsystem_platform.dylib : <redacted>
0x1b8ea45b8 - /usr/lib/system/libsystem_kernel.dylib : <redacted>
0x1b8ea45ec - /usr/lib/system/libsystem_kernel.dylib : <redacted>
0x1e9b35a54 - /System/Library/PrivateFrameworks/TCC.framework/TCC : <redacted>
0x1e9b36230 - /System/Library/PrivateFrameworks/TCC.framework/TCC : <redacted>
0x1e9b32ea8 - /System/Library/PrivateFrameworks/TCC.framework/TCC : <redacted>
0x1f2a9832c - /usr/lib/system/libxpc.dylib : <redacted>
0x1f2a8b85c - /usr/lib/system/libxpc.dylib : <redacted>
0x181b4b6e0 - /usr/lib/system/libdispatch.dylib : <redacted>
0x181b68ec8 - /usr/lib/system/libdispatch.dylib : <redacted>
0x181b5db60 - /usr/lib/system/libdispatch.dylib : <redacted>
0x1f2a6212c - /usr/lib/system/libsystem_pthread.dylib : _pthread_wqthread
0x1f2a61e94 - /usr/lib/system/libsystem_pthread.dylib : start_wqthread
=================================================================
Basic Fault Address Reporting
=================================================================
Memory around native instruction pointer (0x1b8ea1cf8):
0x1b8ea1ce8  ff 0f 5f d6 c0 03 5f d6 30 41 80 d2 01 10 00 d4  .._..._.0A......
0x1b8ea1cf8  03 01 00 54 7f 23 03 d5 fd 7b bf a9 fd 03 00 91  ...T.#...{......
0x1b8ea1d08  4f 61 ff 97 bf 03 00 91 fd 7b c1 a8 ff 0f 5f d6  Oa.......{...._.
0x1b8ea1d18  c0 03 5f d6 90 32 80 d2 01 10 00 d4 03 01 00 54  .._..2.........T
The app has been terminated.
All I'm doing is clicking a button that calls a push navigation in the same exact way that I do from the first page.
private async void Choose_Device_Menu_OnClicked(object sender, EventArgs e)
{
await Navigation.PushAsync(new ListOfDevices(this));
}
OnAppearing section:
protected override void OnAppearing()
{
base.OnAppearing();
}
Constructor section:
namespace EMBTuner
{
[XamlCompilation(XamlCompilationOptions.Compile)]
public partial class ListOfDevices : ContentPage
{
private readonly IDeviceManipulationPage _page;
public ObservableCollection<IBacGenericDevice> Items { get; set; } = new ObservableCollection<IBacGenericDevice>(BacCommunication.CurrentRepository.BacDevices);
public ListOfDevices(IDeviceManipulationPage page)
{
InitializeComponent();
_page = page;
}
...
}
}
Again, it's very strange since this works perfectly on the simulator.
SOLUTION:
Adding the BLE permissions to Info.plist fixed the error. I wish the debugger was a bit smarter in this situation...
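For anyone hitting the same crash: the TCC.framework frames in the native stack trace are consistent with a privacy-protected resource being accessed without a usage description. The keys involved are most likely the CoreBluetooth ones (the key names below are an assumption, since the original post doesn't list them; the strings shown to the user are up to you):
<!-- Info.plist -->
<key>NSBluetoothAlwaysUsageDescription</key>
<string>This app uses Bluetooth to communicate with your device.</string>
<!-- Only needed if you still support iOS 12 or earlier -->
<key>NSBluetoothPeripheralUsageDescription</key>
<string>This app uses Bluetooth to communicate with your device.</string>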

Image properties "dimensions" with "odd" unicode code points

I am poking around in the file properties for images, specifically jpg files created by a camera/scanner/adobe/etc.
There is one detail that is different than the rest. The image dimensions seems to have a Unicode codepoint that doesn't appear in the displayed text. The text appears as something like: ‪3264 x 2448.
As it turns out, there are codepoints on either end of this string that I cannot figure out. It is probably very straightforward, but after searching I am at a loss.
The property documentation can be found here:
System.Image.Dimensions
property format: {6444048F-4C8B-11D1-8B70-080036B11A03}
0xd => 13 => property id (for System.Image.Dimensions)
3264 x 2448 => image dimensions as they "appear" on the screen
Here is what I have (Python 3.5 output):
0xd => ‪3264 x 2448‬ 0xd => b"?3264 x 2448?" len: 13
This is the actual string converted to hex bytes.
Hex Bytes: e2 80 aa 33 32 36 34 20 78 20 32 34 34 38 e2 80 ac
Character: ?? ?? ?? 3 2 6 4 x 2 4 4 8 ?? ?? ??
Does anyone know what the "0xe280aa" and "0xe280ac" are and what I am missing?
They are the only "interesting" characters in the entire properties collection for a jpg image. I don't know what they are, or why they are present.
Your property text is encoded in UTF-8.
e2 80 aa is the UTF-8 encoding of Unicode codepoint U+202A LEFT-TO-RIGHT EMBEDDING.
e2 80 ac is the UTF-8 encoding of Unicode codepoint U+202C POP DIRECTIONAL FORMATTING.
These markers are used when embedding left-to-right text in bidirectional text.
Raymond Chen blogged about this in relation to a similar issue with filenames displayed in Windows Explorer:
Why is there an invisible U+202A at the start of my file name?
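If the markers get in the way when parsing the value, they can simply be stripped from the raw UTF-8 bytes; a small C++ sketch (StripBidiMarkers is just an illustrative helper, not part of any API):
#include <cstddef>
#include <string>

// Remove the UTF-8 encoded bidi controls U+202A (e2 80 aa) and U+202C (e2 80 ac)
// so that only the plain "3264 x 2448" text remains. Assumes well-formed UTF-8.
std::string StripBidiMarkers(const std::string& utf8)
{
    std::string out;
    for (std::size_t i = 0; i < utf8.size(); )
    {
        if (i + 3 <= utf8.size()
            && static_cast<unsigned char>(utf8[i])     == 0xE2
            && static_cast<unsigned char>(utf8[i + 1]) == 0x80
            && (static_cast<unsigned char>(utf8[i + 2]) == 0xAA      // U+202A LRE
             || static_cast<unsigned char>(utf8[i + 2]) == 0xAC))    // U+202C PDF
        {
            i += 3;          // skip the 3-byte control sequence
            continue;
        }
        out += utf8[i++];
    }
    return out;
}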

NFC: APDU and SNEP length limitation

I'm working on a project to exchange large data from a PC to an Android device through NFC. I'm using an ACR122.
The following is a general example of the data sent:
// ADPU
FF FF 00 00 00 nn // CLA, INS, P1, P2, Le, Lc
D4 40 // TFI, PD0
01 // (Mi), Target
// LLCP
13 20 // DSAP, PTYPE, SSAP
00 // Sequence
D4 40 // TFI, PD0
// SNEP
10 02 // Protocol Version, Action
nn nn nn nn // Total SNEP Length
// NDEF Header
A2 // First byte (MB = 1, ME = 0, Cf = 1, SR = 0, Il, TNF)
22 // Type length
mm mm mm mm // Payload length
// NDEF Content
61.....65 // Type (34 bytes in that case)
01.....01 // Payload (mm mm mm mm bytes)
Here I send a normal record (not a short record), so the NDEF header allows a 4-byte payload length.
Finally, my question is: how can we send such a large payload given the 1-byte APDU Lc?
If this limitation is only due to the PN532 chip or PC/SC, what alternative hardware would you suggest?
Thank you for any clarification.
EDIT:
I found what I was looking for here:
Sending Extended APDU to Javacard
It's a hardware limitation; the PN532 doesn't support extended APDUs.
As you've already found out, the ACR122 does not support extended APDUs due to a limitation of the PN532 chip.
However, there is no need to pack the entire SNEP transfer into a single APDU. You can split the payload into multiple smaller frames and send them one after another. It's only important that the NDEF header gets transmitted as a whole in the first frame.
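A rough C++ sketch of that chunking idea (transmitChunk is a hypothetical helper standing in for your existing PC/SC transmit code, and the 250-byte chunk size is just an example that stays below the 255-byte Lc limit):
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper: wraps `data` in one ordinary short APDU and sends it
// over PC/SC (e.g. via SCardTransmit); provide your own implementation.
void transmitChunk(const std::vector<std::uint8_t>& data);

// Send an arbitrarily large SNEP/NDEF message in short-APDU sized pieces.
// Only the first chunk needs to carry the complete NDEF header.
void sendInChunks(const std::vector<std::uint8_t>& message)
{
    const std::size_t kChunk = 250;
    for (std::size_t off = 0; off < message.size(); off += kChunk)
    {
        const std::size_t len = std::min(kChunk, message.size() - off);
        std::vector<std::uint8_t> chunk(message.begin() + off, message.begin() + off + len);
        transmitChunk(chunk);    // each call fits in a normal (non-extended) APDU
    }
}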
