This is a hard-to-reproduce bug, but I finally managed to reproduce it. However, I do not have a clear understanding of what might have caused it, and I am currently trying to work through it and find the source of the error. I am wondering if someone can give me some direction or hints.
My program deadlocks in the Stop call of a DirectShow filter graph.
Here is the call stack:
ntdll.dll!_ZwDeviceIoControlFile@40() + 0x15 bytes
KernelBase.dll!_CreateEventExW@16() + 0x6e bytes
ksproxy.ax!SetState() + 0x3e bytes
ksproxy.ax!Inactive() + 0x3d bytes
ksproxy.ax!CKsOutputPin::Inactive() + 0x1d bytes
ksproxy.ax!CKsProxy::Stop() + 0x59 bytes
quartz.dll!CFilterGraph::Stop() + 0x123f3 bytes
quartz.dll!CFGControl::CImplMediaControl::Stop() + 0x12dba bytes <--- Called into DirectShow
cam.dll!UVCCamera::Shutdown() Line 140 + 0x1b bytes C++
cam.dll!`anonymous namespace'::closeCamera(unsigned int hCamera) Line 297 C++
cam.dll!`anonymous namespace'::CoreThreadFunc(void * data) Line 916 + 0xb bytes C++
kernel32.dll!_BaseThreadInitThunk@12() + 0x12 bytes
ntdll.dll!__RtlUserThreadStart@8() + 0x27 bytes
ntdll.dll!_RtlUserThreadStart@8() + 0x1b bytes
I may have solved this problem by using the method described at the end of the thread linked below:
http://social.msdn.microsoft.com/Forums/en-US/windowsdirectshowdevelopment/thread/53563921-6398-491c-999c-3bfaa2f218ca/
Now I am getting a different error!
I'm trying to receive an RTP stream encoding H.264 over TCP from my intercom, a Hikvision DS-KH8350-WTE1. By reverse engineering I was able to replicate how the original Hikvision software (Hik-Connect on iPhone and iVMS-4200 on macOS) connects and negotiates streaming. Now I'm getting the very same stream as the original apps, verified through Wireshark. Next I need to make sense of the stream. I know it's RTP because I inspected how iVMS-4200 uses it with /usr/bin/sample on macOS, which yields:
! : 2 CStreamConvert::InputData(void*, int) (in libOpenNetStream.dylib) + 52 [0x11ff7c7a6]
+ ! : 2 SYSTRANS_InputData (in libSystemTransform.dylib) + 153 [0x114f917f2]
+ ! : 1 CRTPDemux::ProcessH264(unsigned char*, unsigned int, unsigned int, unsigned int) (in libSystemTransform.dylib) + 302 [0x114fa2c04]
+ ! : | 1 CRTPDemux::AddAVCStartCode() (in libSystemTransform.dylib) + 47 [0x114fa40f1]
+ ! : 1 CRTPDemux::ProcessH264(unsigned char*, unsigned int, unsigned int, unsigned int) (in libSystemTransform.dylib) + 476 [0x114fa2cb2]
+ ! : 1 CRTPDemux::ProcessVideoFrame(unsigned char*, unsigned int, unsigned int) (in libSystemTransform.dylib) + 1339 [0x114fa29b3]
+ ! : 1 CMPEG2PSPack::InputData(unsigned char*, unsigned int, FRAME_INFO*) (in libSystemTransform.dylib) + 228 [0x114f961d6]
+ ! : 1 CMPEG2PSPack::PackH264Frame(unsigned char*, unsigned int, FRAME_INFO*) (in libSystemTransform.dylib) + 238 [0x114f972fe]
+ ! : 1 CMPEG2PSPack::FindAVCStartCode(unsigned char*, unsigned int) (in libSystemTransform.dylib) + 23 [0x114f97561]
I can catch that with lldb and see that the arriving packet data matches the format I'm describing.
The packet signatures look like the following:
0x24 0x02 0x05 0x85 0x80 0x60 0x01 0x57 0x00 0x00 0x00 0x02 0x00 0x00 0x27 0xde
0x0d 0x80 0x60 0x37 0x94 0x71 0xe3 0x97 0x10 0x77 0x20 0x2c 0x51 | 0x7c
0x85 0xb8 0x00 0x00 0x00 0x00 0x01 0x65 0xb8 0x00 0x00 0x0a 0x35 ...
0x24 0x02 0x05 0x85 0x80 0x60 0x01 0x58 0x00 0x00 0x00 0x02 0x00 0x00 0x27 0xde
0x0d 0x80 0x60 0x37 0x95 0x71 0xe3 0x97 0x10 0x77 0x20 0x2c 0x51 | 0x7c 0x05 0x15 0xac ...
0x24 0x02 0x05 0x85 0x80 0x60 0x01 0x59 0x00 0x00 0x00 0x02 0x00 0x00 0x27 0xde
0x0d 0x80 0x60 0x37 0x96 0x71 0xe3 0x97 0x10 0x77 0x20 0x2c 0x51 | 0x7c 0x05 0x5d 0x00 ...
By reverse engineering the original software I was able to figure out that 0x7c85 indicates a key frame: during processing, the genuine software replaces the 0x7c85 bytes with the H.264 key-frame NALU start sequence 00 00 00 01 65. That's the H.264 Annex B format.
The 0x7c05 packets always follow and carry the remaining payload of the key frame. No NALU prefix is added during their handling (the 0x7c05 is stripped away and the rest of the bytes are copied). None of the bytes preceding 0x7cXX make it into an mp4 recording (which makes sense, as this is the RTP layer, although I'm not sure whether it's entirely standard RTP or partly custom to Hikvision).
If you pay close attention, the header contains two separate bytes indicating ordering, and they always match, so I'm sure no packet loss is occurring.
I also observed non-key frames arriving as 0x7c81 and being converted to a 00 00 00 01 61 NALU, but I want to focus solely on the single key frame for now, mostly because a movie recorded with the original software always begins with the 00 00 00 01 65 key frame (which obviously makes sense).
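For reference, from 0x7c onward this matches standard RTP H.264 packetization (RFC 6184) FU-A fragmentation: 0x7c is the FU indicator (F=0, NRI=3, type 28 = FU-A) and the next byte is the FU header, whose S/E flags and low five bits carry the real NAL type. 0x85 is the start of type 5 (an IDR key frame), 0x05 a continuation of type 5, and 0x81 the start of type 1 (non-IDR), which is exactly why the genuine software turns 0x7c 0x85 into 00 00 00 01 65. A minimal depacketization sketch in C++, assuming the bytes before the | marker are custom Hikvision framing that can be skipped:

#include <cstddef>
#include <cstdint>
#include <vector>

// Reassemble FU-A fragments (RFC 6184) into an Annex B stream.
// 'payload' points at the first byte after the '|' marker, i.e. at 0x7cXX.
void handleFuA(const uint8_t* payload, size_t len, std::vector<uint8_t>& annexB) {
    if (len < 2) return;
    const uint8_t indicator = payload[0];      // 0x7c: F=0, NRI=3, type 28 (FU-A)
    const uint8_t fuHeader  = payload[1];      // 0x85 / 0x05 / 0x81 ...
    const bool    start     = fuHeader & 0x80; // S bit: first fragment of this NAL
    const uint8_t nalType   = fuHeader & 0x1f; // 5 = IDR key frame, 1 = non-IDR
    if (start) {
        // Rebuild the original NAL header from the indicator's F/NRI bits and
        // the FU header's type: (0x7c & 0xe0) | 0x05 == 0x65.
        const uint8_t startCode[4] = {0x00, 0x00, 0x00, 0x01};
        annexB.insert(annexB.end(), startCode, startCode + 4);
        annexB.push_back((indicator & 0xe0) | nalType);
    }
    // Every fragment (start, middle, end) contributes the bytes after the two
    // FU bytes; the final fragment is marked by the E bit (fuHeader & 0x40).
    annexB.insert(annexB.end(), payload + 2, payload + len);
}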
To get a working mp4 I decided to copy-paste the mp4 header of a genuine iVMS-4200 recording (meaning every byte preceding the first frame NALU 00 00 00 01 65 in the mp4 file). I know the resolution will match the actual camera footage. With the strategy of waiting for a key frame, replacing 0x7c85 with the 00 00 00 01 65 NALU and appending the remaining bytes, or only appending the bytes in the 0x7c05 case, I do seem to get something that could eventually work.
When I attempt to ffplay the custom-crafted mp4 result I do get something (with a little stretch of the imagination, that's actually the camera's fisheye image forming), but clearly there is a problem.
It seems that around the 3rd or 4th 0x7c05 packet (the failing packet differs on every run), the copied bytes make the H.264 stream incorrect. Just by eye-inspecting the bytes I don't see anything unusual.
The failing packet is around offset 750 decimal (I know it's around this place because I keep stripping bytes away to see whether the same number of frames still arrives before it breaks).
Moreover, I dumped those bytes from the original software using lldb, taking my own Python implementation out of the equation, and I ran into the very same problem with the original packets.
The mp4 header I use should work, since it works for original recordings even if I manipulate the number of frames and leave just the first key frame.
Correct me if I'm wrong, but the phase of converting this to MPEG2-PS (which iVMS-4200 does and I don't) should be entirely optional and should not be related to my problem.
Update:
I went down the path of setting up a recording and only then dumping the original iVMS-4200 packets. I edited the recorded movie to contain only the key frame of interest, and it works. I found differences, but I cannot explain why they are there yet:
Somehow 00 00 01 E0 13 FA 88 00 02 FF FF is inserted in the genuine recording (that's the 4th packet), but I have no idea how this byte string was generated or what its purpose is.
When I fixed the first difference, the next one is:
The pattern is striking. But what is 00 00 01 E0 13 FA 88 00 02 FF FF actually? And why is it inserted after 18 03 25 10 & 2F F4 80 FC?
The 00 00 01 E0 signature would suggest those are Packetized Elementary Stream (PES) headers.
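For what it's worth, those 11 bytes decode cleanly as a minimal MPEG-PS video PES header under the standard MPEG-2 layout (my reading of the spec, nothing Hikvision-specific):

00 00 01    PES packet start-code prefix
E0          stream id: first MPEG video stream
13 FA       PES packet length: 0x13FA = 5114 bytes follow
88 00       flag bytes; PTS_DTS_flags = 00, so no timestamps follow
02          PES header data length: 2 optional bytes follow
FF FF       stuffing bytes

That also supports the conclusion below: these headers belong to the MPEG2-PS layer, not to the H.264 stream itself.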
Going for the mp4 container wasn't a good choice after all. It turns out the RTP stream essentially yields a raw H.264 stream. To inspect its structure I converted the genuine mp4 recording to .264 like this:
ffmpeg -i recording.mp4 -codec copy recording.264
What I got in the stream is essentially an SPS (00 00 00 01 67) and a PPS (00 00 00 01 68) followed by the frame-data NALUs.
Raw H.264 turned out to be a much simpler structure to aim at, and I don't have to deal with those Packetized Elementary Stream (PES) headers anymore. That yields a correct image. In my case I just took the SPS & PPS settings that the original recordings use from recording.264. That could definitely be resolved dynamically somehow, but I didn't bother.
I have a program that can encrypt and decrypt text with Boneh-Franklin encryption. It works great on a PC, but for some reason it causes a constant reboot on an ESP32 with the following error message:
setup2
setup2.2
setup2.3
Guru Meditation Error: Core 1 panic'ed (Unhandled debug exception)
Debug exception reason: Stack canary watchpoint triggered (loopTask)
Core 1 register dump:
PC : 0x40083774 PS : 0x00060b36 A0 : 0x3ffb0120 A1 : 0x3ffb0060
A2 : 0x68efa751 A3 : 0x3ffb0938 A4 : 0x3ffb0720 A5 : 0xfb879c5c
A6 : 0x61b36b71 A7 : 0x0006970f A8 : 0x01709af4 A9 : 0x01709af4
A10 : 0xfaa5dfed A11 : 0x01a3ff3b A12 : 0x76651dec A13 : 0x00000001
A14 : 0x00000000 A15 : 0x04adbe74 SAR : 0x0000001e EXCCAUSE: 0x00000001
EXCVADDR: 0x00000000 LBEG : 0x400f1cc5 LEND : 0x400f1cc9 LCOUNT : 0x00000000
ELF file SHA256: 0000000000000000
I use the Arduino ESP32 environment; CONFIG_ARDUINO_LOOP_STACK_SIZE is set to 8192 in main.cpp, and an 8 KB stack should be enough to run it. It works perfectly on a PC; it's a mystery to me why it doesn't on the ESP32. Can anyone help? I have absolutely run out of ideas.
For the Boneh-Franklin implementation I used this library: https://github.com/miracl/core
My own code is ~200 lines, I have uploaded it to Google Drive: https://drive.google.com/file/d/1EY0mGC2UiVNhE68b5Q0VB9JIY2Owbpxg/view?usp=sharing
Looking at the code, it isn't unreasonable that the stack requires more than 8192 bytes, as a lot of big objects are allocated on the stack, both in your code and in the library, e.g.:
loop()
csprng RNG – 128 bytes
ECP2 pPublic – 264 bytes
ECP2 cipherPointU – 264 bytes
ECP privateKey – 132 bytes
encrypt(...)
ECP pointQId – 132 bytes
char[] dst – 256 bytes
BIG l – 40 bytes
FP12 theta – 532 bytes
PAIR_fexp()
FP2 X – 88 bytes
BIG x – 40 bytes
FP a, b – 2 * 44 = 88 bytes
FP12 t0, y0, y1, y2, y3 – 5 * 532 = 2660 bytes
Increase your stack size. It will likely help.
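As a rough illustration, one way to get that headroom without rebuilding the core with a bigger CONFIG_ARDUINO_LOOP_STACK_SIZE is to run the crypto in its own FreeRTOS task with a larger stack. The 16 KB figure and the bfibeTask wrapper are assumptions for the sketch, not taken from your code; alternatively, declaring the largest locals static moves them off the stack entirely at the cost of re-entrancy.

#include <Arduino.h>

// Hypothetical wrapper around the Boneh-Franklin setup/encrypt/decrypt calls;
// its big ECP/FP12 locals now live on this task's larger stack.
void bfibeTask(void*) {
    // ... run the MIRACL core code here ...
    vTaskDelete(nullptr);  // delete this task when the work is done
}

void setup() {
    Serial.begin(115200);
    // On ESP-IDF (unlike vanilla FreeRTOS) the stack-depth parameter is in
    // bytes, so this requests a 16 KB stack, pinned to core 1 like loopTask.
    xTaskCreatePinnedToCore(bfibeTask, "bfibe", 16384, nullptr, 1, nullptr, 1);
}

void loop() {}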
I got to this page by googling my own troubles, so I thought I would add my solution here.
I experienced this error recently, and by using a series of Serial.println calls I was able to trace it to an infinite loop in the code.
I have an application using DX11.
The debug build works well, but the release build crashes in exit().
The stack:
000007fef697d630()
user32.dll!DispatchHookA() + 0x72 bytes
user32.dll!CallHookWithSEH() + 0x27 bytes
user32.dll!__fnHkINLPMSG() + 0x59 bytes
ntdll.dll!KiUserCallbackDispatcherContinue()
user32.dll!NtUserPeekMessage() + 0xa bytes
user32.dll!PeekMessageW() + 0x89 bytes
msctf.dll!RemovePrivateMessage() + 0x52 bytes
msctf.dll!SYSTHREAD::DestroyMarshalWindow() - 0x1b7a bytes
msctf.dll!TF_UninitThreadSystem() + 0xc4 bytes
msctf.dll!CicFlsCallback() + 0x40 bytes
ntdll.dll!RtlProcessFlsData() + 0x84 bytes
ntdll.dll!LdrShutdownProcess() + 0xa9 bytes
ntdll.dll!RtlExitUserProcess() + 0x90 bytes
msvcr100.dll!doexit(int code=0, int quick=0, int retcaller=0) Line 621 + 0x11 bytes
If I call LoadLibrary("d3d11.dll") before calling exit(), there is no crash.
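If a workaround is acceptable, the same effect as that LoadLibrary call can be stated more explicitly by pinning the module, since GET_MODULE_HANDLE_EX_FLAG_PIN is documented to keep a DLL loaded until the process terminates. A sketch (the PinD3D11 helper is mine, and this only treats the symptom, not the shutdown-order root cause):

#include <windows.h>

// Keep d3d11.dll loaded for the rest of the process so its cleanup cannot
// run in a bad order during LdrShutdownProcess.
void PinD3D11() {
    HMODULE module = nullptr;
    // Fails if d3d11.dll is not loaded yet; call LoadLibrary first in that case.
    GetModuleHandleExW(GET_MODULE_HANDLE_EX_FLAG_PIN, L"d3d11.dll", &module);
}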
I am looking at the code for the font file here:
http://www.openobject.org/opensourceurbanism/Bike_POV_Beta_4
The code starts like this:
const byte font[][5] = {
{0x00,0x00,0x00,0x00,0x00}, // 0x20 32
{0x00,0x00,0x6f,0x00,0x00}, // ! 0x21 33
{0x00,0x07,0x00,0x07,0x00}, // " 0x22 34
{0x14,0x7f,0x14,0x7f,0x14}, // # 0x23 35
{0x00,0x07,0x04,0x1e,0x00}, // $ 0x24 36
{0x23,0x13,0x08,0x64,0x62}, // % 0x25 37
{0x36,0x49,0x56,0x20,0x50}, // & 0x26 38
{0x00,0x00,0x07,0x00,0x00}, // ' 0x27 39
{0x00,0x1c,0x22,0x41,0x00}, // ( 0x28 40
{0x00,0x41,0x22,0x1c,0x00}, // ) 0x29 41
{0x14,0x08,0x3e,0x08,0x14}, // * 0x2a 42
{0x08,0x08,0x3e,0x08,0x08}, // + 0x2b 43
and so on...
I am very confused as to how this code works - can someone explain it to me please?
Thanks,
Majd
Each array of 5 bytes = 40 bits, which map to the 7x5 = 35 pixels in the character grid (there are 5 unused bits, presumably).
When you want to display a character you copy the corresponding 5-byte bitmap for that character to the appropriate memory location. E.g. to display the character X you would copy the data from font['X' - 0x20] (the table starts at ASCII 0x20, the space character, as the comments in the table show).
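To make that concrete, here is a sketch of how such a table is typically scanned out on a 7-LED POV stick. The setPixel driver call, the COLUMN_TIME constant, and the bit orientation are assumptions, not part of the linked code:

// Each of the 5 bytes is one column; bit n of a byte drives LED n. For
// example '!' = {0x00,0x00,0x6f,0x00,0x00}: only the middle column is lit,
// and 0x6f = 0b1101111 draws a vertical bar with one gap, an exclamation mark.
void drawChar(char c) {
    const byte* glyph = font[c - 0x20];              // table starts at ASCII space
    for (int col = 0; col < 5; col++) {
        for (int row = 0; row < 7; row++) {
            setPixel(row, glyph[col] & (1 << row));  // hypothetical LED driver
        }
        delayMicroseconds(COLUMN_TIME);              // hold while the stick sweeps
    }
}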
I'm getting this exception when trying to use dbgeng from mdbglib: First-chance exception at 0x037ba4f4 (dbgeng.dll) in ASDumpAnalyzer.exe: 0xC000001D: Illegal Instruction. I'm wondering how to go about debugging this?
It is throwing on the assembly instruction vmcpuid. When I step over that instruction the code works as expected.
Stack trace:
dbgeng.dll!X86IsVirtualMachine() + 0x44 bytes
dbgeng.dll!LiveUserDebugServices::GetTargetInfo() + 0x95 bytes
dbgeng.dll!LiveUserTargetInfo::InitFromServices() + 0x95 bytes
dbgeng.dll!LiveUserTargetInfo::WaitForEvent() + 0x4f bytes
dbgeng.dll!WaitForAnyTarget() + 0x5f bytes
dbgeng.dll!RawWaitForEvent() + 0x2ae bytes
dbgeng.dll!DebugClient::WaitForEvent() + 0xb0 bytes
[Managed to Native Transition]
mdbglib.dll!MS::Debuggers::DbgEng::DebugControl::WaitForEvent(unsigned int timeout = 0) Line 107 + 0x38 bytes C++
mdbglib.dll!MS::Debuggers::DbgEng::Debuggee::WaitForEvent(unsigned int timeout = 0) Line 365 C++
ASDumpAnalyzer.exe!ASDumpAnalyzer.Program.WriteMemoryDump() Line 51 + 0xd bytes C#
ASDumpAnalyzer.exe!ASDumpAnalyzer.Program.Main() Line 21 + 0x5 bytes C#
mscoree.dll!__CorExeMain@0() + 0x34 bytes
kernel32.dll!_BaseProcessStart@4() + 0x23 bytes
Have you tried not breaking on first-chance exceptions? I bet that X86IsVirtualMachine has a __try/__except block around VMCPUID; since it's not a valid instruction, you're probably not running under a VM.
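A minimal sketch of that SEH pattern (an assumption about dbgeng's internals, not its actual code); the filter swallows the fault, but a debugger set to break on first-chance exceptions still stops on it, which is what you are seeing:

#include <windows.h>

static bool LooksLikeVirtualMachine() {
    __try {
        ExecuteVmOnlyInstruction();  // hypothetical wrapper around VMCPUID
        return true;                 // instruction executed: hypervisor present
    } __except (GetExceptionCode() == EXCEPTION_ILLEGAL_INSTRUCTION
                    ? EXCEPTION_EXECUTE_HANDLER
                    : EXCEPTION_CONTINUE_SEARCH) {
        return false;                // faulted: almost certainly real hardware
    }
}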