msctf/d3d11 crash on exit() - Windows

I have an application using DX11.
The debug build works well, but the release build crashes on exit().
The stack:
000007fef697d630()
user32.dll!DispatchHookA() + 0x72 bytes
user32.dll!CallHookWithSEH() + 0x27 bytes
user32.dll!__fnHkINLPMSG() + 0x59 bytes
ntdll.dll!KiUserCallbackDispatcherContinue()
user32.dll!NtUserPeekMessage() + 0xa bytes
user32.dll!PeekMessageW() + 0x89 bytes
msctf.dll!RemovePrivateMessage() + 0x52 bytes
msctf.dll!SYSTHREAD::DestroyMarshalWindow() - 0x1b7a bytes
msctf.dll!TF_UninitThreadSystem() + 0xc4 bytes
msctf.dll!CicFlsCallback() + 0x40 bytes
ntdll.dll!RtlProcessFlsData() + 0x84 bytes
ntdll.dll!LdrShutdownProcess() + 0xa9 bytes
ntdll.dll!RtlExitUserProcess() + 0x90 bytes
msvcr100.dll!doexit(int code=0, int quick=0, int retcaller=0) Line 621 + 0x11 bytes
If I call LoadLibrary("d3d11.dll") before calling exit(), there is no crash.
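A minimal sketch of that workaround, with a hypothetical ShutdownAndExit helper (my reading is that the extra, never-released reference keeps d3d11.dll loaded until after msctf.dll's FLS shutdown callback in the stack above has run):

#include <windows.h>
#include <cstdlib>

// Hypothetical shutdown path showing the workaround: pin d3d11.dll with an
// extra module reference that is deliberately never released.
void ShutdownAndExit()
{
    // ... release swap chains, devices, and other D3D11 objects here ...
    LoadLibraryW(L"d3d11.dll"); // intentionally no matching FreeLibrary
    exit(0);
}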

What's wrong with my PNG IDAT chunk?

I'm trying to manually construct a simple, 4x1, uncompressed PNG.
So far, I have:
89504E47 // PNG Header
0D0A1A0A
0000000D // byte length of IHDR chunk contents, 4 bytes, value 13
49484452 // IHDR start - 4 bytes
00000004 // Width 4 bytes }
00000001 // Height 4 bytes }
08 // bit depth 8 = 24/32 bit 1 byte }
06 // color type, 6 - RGBa 1 byte }
00 // compression, 0 = Deflate 1 byte }
00 // filter, 0 = no filter 1 byte }
00 // interlace, 0 = no interlace 1 byte } Total, 13 Bytes
F93C0FCD // CRC of IHDR chunk, 4 bytes
00000013 // byte length of IDAT chunk contents, 4 bytes, value 19
49444154 // IDAT start - 4 bytes
0000 // ZLib 0 compression, 2 bytes }
00 // Filter = 0, 1 bytes }
CC0000FF // Pixel 1, Red-ish, 4 bytes }
00CC00FF // Pixel 2, Green-ish, 4 bytes }
0000CCFF // Pixel 3, Blue-ish, 4 bytes }
CCCCCCCC // Pixel 4, translucent grey, 4 bytes } Total, 19 Bytes
6464C2B0 // CRC of IHDR chunk, 4 bytes
00000000 // byte length of IEND chunk, 4 bytes (value: 0)
49454E44 // IEND start - 4 bytes
AE426082 // CRC of IEND chunk, 4 bytes
Update
I think the issue I'm having is down to the ZLib/Deflate ordering.
I think that I have to include the "Non-compressed blocks format" details from RFC 1951, sec. 3.2.4, but I'm a little unsure as to the interactions. The only examples I can find are for Compressed blocks (understandably!)
So I've now tried:
49444154 // IDAT start - 4 bytes
01 // BFINAL = 1, BTYPE = 00 1 byte }
11EE // LEN & NLEN of data 2 bytes }
00 // Filter = 0, 1 byte }
CC0000FF // Pixel 1, Red-ish, 4 bytes }
00CC00FF // Pixel 2, Green-ish, 4 bytes }
0000CCFF // Pixel 3, Blue-ish, 4 bytes }
CCCCCCCC // Pixel 4, translucent grey, 4 bytes } Total, 19 Bytes
6464C2B0 // CRC of IHDR chunk, 4 bytes
So the whole PNG file is:
89504E47 // PNG Block
0d0a1a0A
0000000D // IHDR Block
49484452
00000004
00000001
08060000
00
F93C0FCD
00000014 // IDAT Block
49444154
0111EE00
CC0000FF
00CC00FF
0000CCFF
CCCCCCCC
6464C2B0
00000000 // IEND Block
49454E44
AE426082
I'd be really grateful for some pointers as to where the issue lies... or even the PNG data for a working file so that I can reverse-engineer it?
Update 2
Thanks to Mark Adler, I've corrected my newbie errors and now have functional code that reproduces the result shown in his answer below, i.e. a 4x1 pixel image. From this I can now happily produce a 100x1 image!
However, as a last step I'd hoped, by tweaking the height field in the IHDR and adding additional non-terminal IDATs, to extend this to, say, a 4x2 image. Unfortunately this doesn't appear to work the way I'd expected.
I now have something like...
89504E47 // PNG Header
0D0A1A0A
0000000D // re calc'ed IHDR with 2 rows
49484452
00000004
00000002 // changed - now 2 rows
08
06
00
00
00
7FA87D63 // CRC of IHDR updated
0000001C // row 1 IDAT, non-terminal
49444154
7801
00 // BFINAL = 0, BTYPE = 00
1100EEFF
00
CC0000FF
00CC00FF
0000CCFF
CCCCCCCC
3D3A0892
5D19A623
0000001C // row 2, terminal IDAT, as Mark Adler's answer
49444154
7801
01 // BFINAL = 1, BTYPE = 00
1100EEFF
00
CC0000FF
00CC00FF
0000CCFF
CCCCCCCC
3D3A0892
BA0400B4
00000000
49454E44
AE426082
This:
11EE // LEN & NLEN of data 2 bytes }
is wrong. LEN and NLEN are both 16 bits, not 8 bits, stored little-endian, with NLEN the one's complement of LEN. Here LEN = 17 = 0x0011 (bytes 11 00) and NLEN = 0xFFEE (bytes EE FF), so that needs to be:
1100EEFF // LEN & NLEN of data 4 bytes }
You also need a zlib wrapper around the deflate data. See RFC 1950.
Lastly you will need to update the CRC of the chunk. (Which has the wrong comment by the way -- it should say CRC of IDAT chunk.)
Thusly repaired:
89504E47 // PNG Header
0D0A1A0A
0000000D // byte length of IHDR chunk contents, 4 bytes, value 13
49484452 // IHDR start - 4 bytes
00000004 // Width 4 bytes }
00000001 // Height 4 bytes }
08 // bit depth 8 = 24/32 bit 1 byte }
06 // color type, 6 - RGBa 1 byte }
00 // compression, 0 = Deflate 1 byte }
00 // filter, 0 = no filter 1 byte }
00 // interlace, 0 = no interlace 1 byte } Total, 13 Bytes
F93C0FCD // CRC of IHDR chunk, 4 bytes
0000001C // byte length of IDAT chunk contents, 4 bytes, value 28
49444154 // IDAT start - 4 bytes
7801 // zlib Header 2 bytes }
01 // BFINAL = 1, BTYPE = 00 1 byte }
1100EEFF // LEN & NLEN of data 4 bytes }
00 // Filter = 0, 1 byte }
CC0000FF // Pixel 1, Red-ish, 4 bytes }
00CC00FF // Pixel 2, Green-ish, 4 bytes }
0000CCFF // Pixel 3, Blue-ish, 4 bytes }
CCCCCCCC // Pixel 4, translucent grey, 4 bytes }
3D3A0892 // Adler-32 check 4 bytes }
BA0400B4 // CRC of IDAT chunk, 4 bytes
00000000 // byte length of IEND chunk, 4 bytes (value: 0)
49454E44 // IEND start - 4 bytes
AE426082 // CRC of IEND chunk, 4 bytes
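As a cross-check on the zlib trailer, here is a small standalone sketch (mine, not part of the original answer) that computes the RFC 1950 Adler-32 over the 17 uncompressed bytes (the filter byte plus the four RGBA pixels); it prints 3D3A0892, matching the value above:

#include <cstddef>
#include <cstdint>
#include <cstdio>

// Adler-32 per RFC 1950: s1 = 1 + sum of bytes, s2 = sum of the running s1,
// both modulo 65521; the result is (s2 << 16) | s1.
static uint32_t adler32(const uint8_t* data, size_t len)
{
    uint32_t s1 = 1, s2 = 0;
    for (size_t i = 0; i < len; ++i) {
        s1 = (s1 + data[i]) % 65521;
        s2 = (s2 + s1) % 65521;
    }
    return (s2 << 16) | s1;
}

int main()
{
    const uint8_t raw[17] = {
        0x00,                   // filter byte
        0xCC, 0x00, 0x00, 0xFF, // pixel 1
        0x00, 0xCC, 0x00, 0xFF, // pixel 2
        0x00, 0x00, 0xCC, 0xFF, // pixel 3
        0xCC, 0xCC, 0xCC, 0xCC, // pixel 4
    };
    printf("%08X\n", adler32(raw, sizeof raw)); // prints 3D3A0892
}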

Deadlocked in Windows filter graph

This is a hard-to-reproduce bug, but I finally managed to reproduce it. However, I do not have a clear understanding of what might have caused it; I am currently trying to work through it and pin down the source of the error.
I'm wondering if someone can give me some directions or hints.
My program is deadlocked in the Stop call on the DirectShow filter graph.
Here is the call stack:
ntdll.dll!_ZwDeviceIoControlFile@40() + 0x15 bytes
KernelBase.dll!_CreateEventExW@16() + 0x6e bytes
ksproxy.ax!SetState() + 0x3e bytes
ksproxy.ax!Inactive() + 0x3d bytes
ksproxy.ax!CKsOutputPin::Inactive() + 0x1d bytes
ksproxy.ax!CKsProxy::Stop() + 0x59 bytes
quartz.dll!CFilterGraph::Stop() + 0x123f3 bytes
quartz.dll!CFGControl::CImplMediaControl::Stop() + 0x12dba bytes <--- Called into DirectShow
cam.dll!UVCCamera::Shutdown() Line 140 + 0x1b bytes C++
cam.dll!`anonymous namespace'::closeCamera(unsigned int hCamera) Line 297 C++
cam.dll!`anonymous namespace'::CoreThreadFunc(void * data) Line 916 + 0xb bytes C++
kernel32.dll!@BaseThreadInitThunk@12() + 0x12 bytes
ntdll.dll!__RtlUserThreadStart@8() + 0x27 bytes
ntdll.dll!_RtlUserThreadStart@8() + 0x1b bytes
I may have solved this problem by using the method described at the end of the link below:
http://social.msdn.microsoft.com/Forums/en-US/windowsdirectshowdevelopment/thread/53563921-6398-491c-999c-3bfaa2f218ca/
Now I am getting a different error!

XMM register storing

I am trying to store an XMM register to a particular memory location, such as address 4534342.
Example:
What I have done:
I know that the XMM registers contain 128-bit values, so my program has already allocated 16 bytes of memory. In addition, since these stores require an aligned address, I have allocated 31 bytes and then found an aligned address within that block. That should prevent any exceptions from being thrown.
What I am trying to do, visually:
Mem Adr | Contents (binary)
4534342 | 0000 0000 0000 0000 ; I want to pass in address 4534342 and
4534346 | 0000 0000 0000 0000 ; the inline assembly will store the
4534348 | 0000 0000 0000 0000 ; register's contents straight down
4534350 | 0000 0000 0000 0000 ; to address 4534350
4534352 | 0000 0000 0000 0000
4534354 | 0000 0000 0000 0000
4534356 | 0000 0000 0000 0000
4534358 | 0000 0000 0000 0000
Setup
cyg_uint8 xmm_container[31]; //31-byte block from which to carve the aligned 16 bytes
cyg_uint8 *start_value;      //pointer to the first value of the allocated block
cyg_uint32 end_address;      //aligned address location value
cyg_uint32 *aligned_value;   //pointer to the value at the end_address

start_value = xmm_container; //get the pointer to the allocated block
end_address = (((unsigned int) start_value) + 0x0000001f) & 0xFFFFFFF8; //find aligned memory
aligned_value = ((cyg_uint32*)end_address); //create a pointer to get the first value of the block
Debug statements BEFORE the assembly call to check the values
printf("aligned_value: %d\n", (cyg_uint32) aligned_value);
printf("*aligned_value: %d\n", *aligned_value);
Assembly Call
__asm__("movdqa %%xmm0, %0\n" : "=m"(*aligned_value)); //assembly call
Debug statements AFTER the assembly call to check the values
printf("aligned_value: %d\n", (cyg_uint32) aligned_value);
printf("*aligned_value: %d\n", *aligned_value);
The output from printf [FAILURE]
aligned_value: 1661836 //Looks good!
*aligned_value: 0 //Looks good!
aligned_value: -1 //Looks wrong :(
//then program gets stuck
Basically, am I doing this process correctly? Why do you think it is getting stuck?
Thank you for your time and effort.
I don't think your alignment logic is correct if you want a 16-byte aligned address.
Just do the math; it's easy:
(0 + 0x1f) & 0xFFFFFFF8 = 0x18 ; 0x18-0=0x18 unused bytes, 0x1F-0x18=7 bytes left
(1 + 0x1f) & 0xFFFFFFF8 = 0x20 ; 0x20-1=0x1F unused bytes, 0x1F-0x1F=0 bytes left
...
(8 + 0x1f) & 0xFFFFFFF8 = 0x20 ; 0x20-8=0x18 unused bytes, 0x1F-0x18=7 bytes left
(9 + 0x1f) & 0xFFFFFFF8 = 0x28 ; 0x28-9=0x1F unused bytes, 0x1F-0x1F=0 bytes left
...
(0xF + 0x1f) & 0xFFFFFFF8 = 0x28 ; 0x28-0xF=0x19 unused bytes, 0x1F-0x19=6 bytes left
(0x10 + 0x1f) & 0xFFFFFFF8 = 0x28 ; 0x28-0x10=0x18 unused bytes, 0x1F-0x18=7 bytes left
(0x11 + 0x1f) & 0xFFFFFFF8 = 0x30 ; 0x30-0x11=0x1F unused bytes, 0x1F-0x1F=0 bytes left
...
(0x18 + 0x1f) & 0xFFFFFFF8 = 0x30 ; 0x30-0x18=0x18 unused bytes, 0x1F-0x18=7 bytes left
(0x19 + 0x1f) & 0xFFFFFFF8 = 0x38 ; 0x38-0x19=0x1F unused bytes, 0x1F-0x1F=0 bytes left
...
(0x1F + 0x1f) & 0xFFFFFFF8 = 0x38 ; 0x38-0x1F=0x19 unused bytes, 0x1F-0x19=6 bytes left
First, to get all zeroes in the 4 least significant bits the mask should be 0xFFFFFFF0.
Next, you're overflowing the 31-byte buffer if you calculate the aligned address this way. As the table shows, the math leaves you with only 0 to 7 bytes of headroom after the aligned address, which isn't sufficient to store 16 bytes.
For correct 16-byte alignment you should write this:
end_address = (((unsigned int)start_value) + 0xF) & 0xFFFFFFF0;
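Putting the fix together, here is a self-contained sketch (mine, not the original poster's code; it uses uintptr_t so the mask is also safe on 64-bit, and a 16-byte array cast in the output constraint so GCC knows the full store width):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    unsigned char xmm_container[31]; // 16 bytes + 15 spare to reach a 16-byte boundary

    // Round up to the next 16-byte boundary; ~(uintptr_t)0xF is the
    // width-independent equivalent of the 0xFFFFFFF0 mask above.
    uintptr_t end_address = ((uintptr_t)xmm_container + 0xF) & ~(uintptr_t)0xF;
    unsigned char *aligned_value = (unsigned char *)end_address;

    // Store xmm0 to the aligned slot; the array cast tells the compiler
    // that all 16 bytes are written.
    __asm__("movdqa %%xmm0, %0"
            : "=m"(*(unsigned char (*)[16])aligned_value));

    printf("aligned_value: %p\n", (void *)aligned_value);
    return 0;
}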

ASCII 7x5 side-feeding characters for LED modules

I am looking at the code for the font file here:
http://www.openobject.org/opensourceurbanism/Bike_POV_Beta_4
The code starts like this:
const byte font[][5] = {
{0x00,0x00,0x00,0x00,0x00}, // 0x20 32
{0x00,0x00,0x6f,0x00,0x00}, // ! 0x21 33
{0x00,0x07,0x00,0x07,0x00}, // " 0x22 34
{0x14,0x7f,0x14,0x7f,0x14}, // # 0x23 35
{0x00,0x07,0x04,0x1e,0x00}, // $ 0x24 36
{0x23,0x13,0x08,0x64,0x62}, // % 0x25 37
{0x36,0x49,0x56,0x20,0x50}, // & 0x26 38
{0x00,0x00,0x07,0x00,0x00}, // ' 0x27 39
{0x00,0x1c,0x22,0x41,0x00}, // ( 0x28 40
{0x00,0x41,0x22,0x1c,0x00}, // ) 0x29 41
{0x14,0x08,0x3e,0x08,0x14}, // * 0x2a 42
{0x08,0x08,0x3e,0x08,0x08}, // + 0x2b 43
and so on...
I am very confused as to how this code works - can someone explain it to me please?
Thanks,
Majd
Each array of 5 bytes = 40 bits, which map to the 7x5 = 35 pixels in the character grid: each byte is one 7-pixel column, so the top bit of each byte (5 bits in total) goes unused.
When you want to display a character you copy the corresponding 5-byte bitmap for that character to the appropriate memory location. E.g. to display the character X you would copy the data from font['X' - 0x20], since the table starts at ASCII 0x20 (space).
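For illustration, here is a small sketch (my own; printGlyph is a hypothetical helper) that treats each byte as a column with bit 0 as the top row — an orientation consistent with the quote glyph 0x07 lighting the top three rows — and dumps a glyph to the console:

#include <cstdio>
typedef unsigned char byte;

const byte font[][5] = {
    {0x00,0x00,0x00,0x00,0x00}, //   0x20 32
    {0x00,0x00,0x6f,0x00,0x00}, // ! 0x21 33
    {0x00,0x07,0x00,0x07,0x00}, // " 0x22 34
    // ... remaining glyphs exactly as in the table above ...
};

// Print one 7x5 glyph, one byte per column, bit 0 = top row (assumed).
void printGlyph(char c)
{
    const byte *glyph = font[c - 0x20]; // table starts at ASCII 0x20 (space)
    for (int row = 0; row < 7; ++row) {
        for (int col = 0; col < 5; ++col)
            putchar(((glyph[col] >> row) & 1) ? '#' : '.');
        putchar('\n');
    }
}

int main() { printGlyph('!'); }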

How do I debug Illegal Instruction exception?

I'm getting this exception when trying to use dbgeng from mdbglib: First-chance exception at 0x037ba4f4 (dbgeng.dll) in ASDumpAnalyzer.exe: 0xC000001D: Illegal Instruction. I'm wondering how to go about debugging this?
It is throwing on the assembly instruction vmcpuid. When I step over that instruction the code works as expected.
Stack trace:
dbgeng.dll!X86IsVirtualMachine() + 0x44 bytes
dbgeng.dll!LiveUserDebugServices::GetTargetInfo() + 0x95 bytes
dbgeng.dll!LiveUserTargetInfo::InitFromServices() + 0x95 bytes
dbgeng.dll!LiveUserTargetInfo::WaitForEvent() + 0x4f bytes
dbgeng.dll!WaitForAnyTarget() + 0x5f bytes
dbgeng.dll!RawWaitForEvent() + 0x2ae bytes
dbgeng.dll!DebugClient::WaitForEvent() + 0xb0 bytes
[Managed to Native Transition]
mdbglib.dll!MS::Debuggers::DbgEng::DebugControl::WaitForEvent(unsigned int timeout = 0) Line 107 + 0x38 bytes C++
mdbglib.dll!MS::Debuggers::DbgEng::Debuggee::WaitForEvent(unsigned int timeout = 0) Line 365 C++
ASDumpAnalyzer.exe!ASDumpAnalyzer.Program.WriteMemoryDump() Line 51 + 0xd bytes C#
ASDumpAnalyzer.exe!ASDumpAnalyzer.Program.Main() Line 21 + 0x5 bytes C#
mscoree.dll!__CorExeMain@0() + 0x34 bytes
kernel32.dll!_BaseProcessStart@4() + 0x23 bytes
Have you tried not breaking on first-chance exceptions? I bet that X86IsVirtualMachine has a __try/__except block around VMCPUID... since it's not a valid instruction, you're probably not running under a VM.
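For illustration, that pattern would look something like this sketch (mine, for 32-bit MSVC; dbgeng's internals aren't public, and UD2 stands in here for the VM-detection opcode):

#include <windows.h>
#include <stdio.h>

// Execute a possibly-illegal instruction and swallow the fault with SEH,
// the way X86IsVirtualMachine presumably does internally.
static int ProbeInstruction(void)
{
    __try {
        __asm _emit 0x0F __asm _emit 0x0B  // UD2: always faults; a real probe
                                           // would emit its magic opcode here
        return 1;  // reached only if the opcode executed (e.g. under a VM)
    }
    __except (GetExceptionCode() == EXCEPTION_ILLEGAL_INSTRUCTION
                  ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH) {
        return 0;  // faulted: not running under the probed VM
    }
}

int main(void)
{
    printf("instruction executed: %d\n", ProbeInstruction());
    return 0;
}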
