Why can't I see the Windows WM_UNICHAR message? - winapi

I'm experimenting with the WM_UNICHAR message, starting from Charles Petzold's KeyView2 code with some enhancements of my own (code on GitHub).
The MSDN page for WM_UNICHAR is quite vague about its relationship with the traditional WM_CHAR message, so I tried it out myself.
However, I never see WM_UNICHAR appear. On Windows 10 22H2, I bring up the IME and type a "red apple" character from outside the BMP (U+1F34E, UTF-8: F0 9F 8D 8E, UTF-16: D83C DF4E); the Unicode build of KeyView2 receives two WM_CHAR messages with wParam values D83C and DF4E, i.e. the UTF-16 surrogate pair.
No WM_UNICHAR is received. Can someone tell me why? Thank you.
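For comparison, here is a minimal window-procedure sketch (my own illustration, not Petzold's actual KeyView2 code) showing both paths: reassembling the surrogate pair from the two WM_CHAR messages, and what a WM_UNICHAR handler would look like, including the documented UNICODE_NOCHAR probe a sender uses to ask whether the window accepts WM_UNICHAR at all.

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    static WCHAR highSurrogate = 0;

    switch (msg) {
    case WM_CHAR:
        /* A non-BMP character such as U+1F34E arrives as two WM_CHAR
           messages carrying the UTF-16 surrogate pair: first 0xD83C,
           then 0xDF4E. Reassemble the full code point here. */
        if (IS_HIGH_SURROGATE(wParam)) {
            highSurrogate = (WCHAR)wParam;
        } else if (IS_LOW_SURROGATE(wParam) && highSurrogate != 0) {
            UINT32 cp = 0x10000
                      + ((UINT32)(highSurrogate - 0xD800) << 10)
                      + ((UINT32)wParam - 0xDC00);   /* cp == 0x1F34E */
            highSurrogate = 0;
            /* ... display/use the full code point cp ... */
        }
        return 0;

    case WM_UNICHAR:
        /* A sender first probes with UNICODE_NOCHAR; returning TRUE
           advertises that this window accepts WM_UNICHAR (UTF-32). */
        if (wParam == UNICODE_NOCHAR)
            return TRUE;
        /* ... otherwise wParam is already the full code point ... */
        return 0;

    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcW(hwnd, msg, wParam, lParam);
}

For what it's worth, MSDN describes WM_UNICHAR as intended for senders that want to post Unicode characters to ANSI windows, rather than as part of the system's normal keyboard-input path, which may be why the IME delivers the character through WM_CHAR instead.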

Related

NXP NTAG 424: write command returns 917E: "Length Error". Why?

I have started working with the NXP NTAG 424 TT chip together with nfcpy and an Identive SCL3711 reader/writer. I can successfully send and receive APDU commands, authenticate securely, and exchange commands in encrypted communication mode.
However, I can't read or write data to the chip, and I don't know why. Here is what I do (mostly taken from the NXP application note, page 24):
I send the command "ISO Select NDEF application using DF Name"
00A4040C07D276000085010100
Then I perform the secure authentication protocol via AuthenticateEV2First with key 0x00.
I try to write some data as follows:
cmd_header = 02000000040000
cmd_data = 00D1FF00 (before padding)
cmd_data = 00D1FF00800000000000000000000000 (after padding)
The complete command which I send looks like this:
cla ins  P1 P2 | Lc | ISO header       | encrypted data                                   | Le
90  8D   00 00 | 1F | 02 000000 040000 | 6688A4D75482FC972C2447A1A20F0AC9C073C1CF506B2BD3 | 00
However, the chip only responds with 917E: "Length Error", which translates to "Command size not allowed".
What am I doing wrong? It can't be the encryption; I tested that with various other commands (GetTTStatus, SetConfiguration) and those all worked fine. I quadruple-checked the header. Did I perhaps fail to select the correct file, or did I miss some other step? Also, what does "Command size not allowed" actually mean? This error is pretty cryptic to me (which is funny when working with encrypted chips :D).
Any help is greatly appreciated!
Best regards,
Phil
The length of the "encrypted data" field in your case is 24 bytes, whereas the length you declared in the ISO header is "040000", i.e. 4 bytes.
The encrypted data length should match the length of the data you are writing.
In your case the two lengths don't match, and that mismatch produces the error.
Hope the information is clear.
Cheers!
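For reference, the length arithmetic implied by the frames above works out like this; a sketch of the bookkeeping only (no actual crypto), following the padding shown in the question:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    const uint8_t plain[] = { 0x00, 0xD1, 0xFF, 0x00 };  /* data to write */
    uint8_t padded[16] = { 0 };

    /* Padding as shown above: append 0x80, then zero-fill to a full
       16-byte AES block: 00D1FF00 -> 00D1FF00800000...00 */
    size_t plen = sizeof plain;
    size_t padded_len = (plen / 16 + 1) * 16;
    memcpy(padded, plain, plen);
    padded[plen] = 0x80;

    /* APDU data field: cmd header (FileNo + 3-byte offset + 3-byte
       length = 7 bytes) + encrypted padded block + 8-byte MAC */
    size_t lc = 7 + padded_len + 8;
    printf("padded block: %zu bytes, Lc = 0x%02zX\n", padded_len, lc);
    /* prints: padded block: 16 bytes, Lc = 0x1F */
    return 0;
}

This matches the Lc of 1F in the frame above, so the outer APDU is self-consistent; the point of the answer is that the 4-byte length declared inside the command header is what has to agree with the data the chip is asked to write.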

Unexpected Windows key and VK_ESCAPE key messages

I don't usually do much coding for Windows, even though my development environment is on Windows. I'm doing some work on an app, however, and I am experiencing something rather unusual:
A second window is opened when the user clicks a particular button. The class is registered, the window created, and a few other things initialized. Everything seems to go without error, except that after creation the window receives a WM_KEYDOWN for RWIN followed by a WM_KEYDOWN/WM_KEYUP pair for VK_ESCAPE.
The VK_ESCAPE is tied to functionality to hide the window, so it appears and then hides because of these messages coming through after it is created.
I never touch the keyboard. The application is launched and interacted with the entire time using a mouse.
I have searched the codebase and inspected every SendMessage, PostMessage and related call. None of them are sending anything of the sort.
Furthermore, the lParam for the WM_KEYUP message looks reasonable, so it's highly unlikely to be an errant message that is completely different in nature. It very much looks like an authentic keypress message.
So I have essentially two questions:
Am I able to track where the message is being sent from?
Is there some known mechanism by which keypress messages might be inserted into the message queue (other than the app itself calling SendMessage)?
EDIT:
For the WM_KEYUP message the values are:
lParam: 3221291009, or 1100 0000 0000 0001 0000 0000 0000 0001 in binary
wParam: 27
Which translates to:
repeat count: 1
scan code: 1
context code: 0
previous keystate: 1
transition state: 1
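On the second of those questions: there is a documented mechanism. Any process can synthesize keystrokes with SendInput (or the older keybd_event), and those arrive as perfectly ordinary WM_KEYDOWN/WM_KEYUP messages, which would explain an authentic-looking lParam. One way to check from outside the app is a low-level keyboard hook, which can see the LLKHF_INJECTED flag on synthesized events. A minimal standalone logger sketch (not part of your app):

#include <windows.h>
#include <stdio.h>

static HHOOK g_hook;

/* Low-level keyboard hook: sees every keyboard event delivered to this
   desktop, including ones injected with SendInput/keybd_event, which
   are marked with LLKHF_INJECTED. */
static LRESULT CALLBACK KbHook(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION) {
        const KBDLLHOOKSTRUCT *k = (const KBDLLHOOKSTRUCT *)lParam;
        printf("msg=0x%04X vk=0x%02X sc=0x%02X injected=%d\n",
               (unsigned)wParam, (unsigned)k->vkCode,
               (unsigned)k->scanCode,
               (k->flags & LLKHF_INJECTED) ? 1 : 0);
    }
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

int main(void)
{
    g_hook = SetWindowsHookExW(WH_KEYBOARD_LL, KbHook,
                               GetModuleHandleW(NULL), 0);
    MSG msg;  /* the hook requires a message loop on this thread */
    while (GetMessageW(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    UnhookWindowsHookEx(g_hook);
    return 0;
}

If the ESC events show up here with injected=1, something is calling SendInput; if they don't show up at all, they are being posted directly to your window rather than going through the input system.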

Is the request ID ASN.1 encoded?

I posted this to the net-snmp mailing list Monday and got no reply, so I am trying here.
I am confused and I hope someone can help.
I am writing an SNMP agent for a Cortex M4 application.
The SNMP books I have bought and what I have read on the net indicate that all data fields should be ASN.1 encoded. I know the OIDs are ASN.1 encoded, but I am not sure whether that applies to other fields such as the Request ID.
Looking at snmp commands sent by net-snmp, it appears that the Request ID field is a simple (4 byte) 32 bit integer.
Here is a screen shot showing an snmpget transaction monitored through Wireshark:
http://www.ko4bb.com/net-snmp/RequestID.png
It shows the Request ID to be 1750020546 decimal, or 0x684F31C2 in hex. The data field in Wireshark likewise shows it as "68 4f 31 c2".
This does not look ASN.1 encoded to me; otherwise the first 3 bytes would have bit 7 set to 1 and the last byte would have bit 7 set to 0, meaning the first 3 values would be >0x7F and the last value would be <0x80.
So is ASN.1 not used for the RequestID field?
I added the wireshark tag, as this is purely a Wireshark issue.
The Request ID field is strictly in ASN.1 BER format; on the wire it is 02 04 68 4f 31 c2 (tag 02 for INTEGER, length 04, then the four content bytes).
Be aware that Wireshark is smart enough to parse the data and hide some of the details from you.
Check the bottom panel where 68 4f 31 c2 is highlighted. Wireshark highlights those bytes but intentionally leaves out the 02 04 ahead of them. That's the problem.
As @GuyHarris pointed out in the comments, this Wireshark behavior is configurable. Other packet analyzers (such as Microsoft Network Monitor) might behave differently in the same scenario.
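To make the framing concrete, here is a small sketch (my own illustration, not net-snmp code) that decodes the request-id exactly as it sits on the wire, tag and length bytes included:

#include <stdint.h>
#include <stdio.h>

/* Decode a BER-encoded INTEGER (tag 0x02) from buf into *out.
   Returns bytes consumed, or -1 on error. Short-form lengths only,
   which is all an SNMP request-id needs. */
static int ber_decode_int(const uint8_t *buf, int buflen, int32_t *out)
{
    if (buflen < 2 || buf[0] != 0x02) return -1;  /* expect INTEGER tag */
    int len = buf[1];
    if (len < 1 || len > 4 || 2 + len > buflen) return -1;
    /* BER integers are signed big-endian: sign-extend from byte 1 */
    uint32_t v = (buf[2] & 0x80) ? 0xFFFFFFFFu : 0;
    for (int i = 0; i < len; i++)
        v = (v << 8) | buf[2 + i];
    *out = (int32_t)v;
    return 2 + len;
}

int main(void)
{
    const uint8_t req_id[] = { 0x02, 0x04, 0x68, 0x4F, 0x31, 0xC2 };
    int32_t v;
    if (ber_decode_int(req_id, sizeof req_id, &v) > 0)
        printf("request-id = %d (0x%08X)\n", v, (uint32_t)v);
    /* prints: request-id = 1750020546 (0x684F31C2) */
    return 0;
}

So the value bytes really are the plain big-endian integer; the ASN.1 part is just the 02 04 wrapper that Wireshark hides.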

How does Turbo Debugger know where an address is?

I would like to understand how Turbo Debugger works. So, for example, I have a message and I move its offset into the DX register.
(I show you how it looks in the debugger):
MOV DX,009E ; this is the version the debugger shows me in debugger mode;
            ; it takes this information from the source line: MOV DX,OFFSET MSG
In fact, the message's first element is at address 9E, as the debugger understands it. But on the debugger screen I can see that, in the segment DS points to, the MSG text starts at A0. How can that be?
I know that code is preferable, but this time a screenshot is more suitable:
As you can see, I marked two addresses, but they are not the same. I can see that my MSG begins at the upper marked address, A0, yet the debugger treats it as 9E and moves that into DX. Can someone explain to me how this can be?
By the way, the program works and prints everything fine; the purpose is just to understand how the debugger handles addresses.
MSG code is simply:
MSG db 'Hello, how do you do!!!!','$'
I believe that if you single-step your code, instruction by instruction, you will see that:
You are correct that your message text starts at DS:00A0.
Turbo Debugger is also correct that MSG starts two bytes before that.
Look carefully at what is located at DS:009E.
What do you see there? Two bytes: 0A and 0D.
That's an ASCII "Line Feed" and an ASCII "Carriage Return".
Your confusion can be reduced by understanding the historical perspective...
Way back when printers used ink and paper, and telephones carried modem signals at 1200 bps, an hour of connection to a city only three states away cost something like ten hours of minimum-wage pay. So there really was an economic imperative in choosing between running the little print head back to the left or just jacking the platen down a line while the print head stayed in the same position.
I mean, you really saw it in your phone bill.
No joke, this one change, using the 0A byte without the 0D byte, could make a 10 or 20 dollar difference in your phone bill; and remember to factor in the inflation since then.
The reason you see the message properly is that your machine first executes the "line feed" (the cursor drops to the next line) and the "carriage return" (the cursor jumps back to the left edge) before putting your message on the screen. This happens much faster than your eye can see.
With the miracle of Turbo Debugger, you can single step this and watch it happen.
So you are correct when you write that your message "starts" at 00A0, but Turbo Debugger is also right when it tells you that MSG starts two bytes before that, at 009E.
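If it helps to see the layout in a runnable form, here is a small C analogue (assuming, as described above, that the full source places two control bytes immediately before the text; names and values here are illustrative):

#include <stdio.h>

/* Mirrors the presumed data layout: the label (like OFFSET MSG)
   refers to the first byte, the LF at offset 0 (DS:009E in the
   debugger), while the visible text begins two bytes later. */
static const unsigned char msg[] = {
    0x0A, 0x0D,                      /* LF, CR           */
    'H','e','l','l','o','!','$'     /* text at MSG+2    */
};

int main(void)
{
    printf("MSG is at offset 0; the readable text starts at offset 2: %c\n",
           msg[2]);
    return 0;
}

The debugger reports the label's offset (009E), while your eye finds the first printable character (00A0); both are right.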

Fix hard-coded display setting without source (24-bit, need 32-bit)

I wrote a program about 10 years ago in Visual Basic 6: basically a full-screen game similar to Breakout/Arkanoid, but with 'demoscene'-style backgrounds. I found the program, but not the source code. Back then I hard-coded the display mode to 800x600x24, and as a result the program now crashes whenever I try to run it. No virtual machine seems to support a 24-bit display when the host display mode is 16- or 32-bit, and it uses DirectX 7, so DOSBox is no use.
I've tried all sorts of decompilers, and at best they give me the form names and a bunch of assembly calls that mean nothing to me. The display mode was set with a DirectX 7 call, but there's no clear reference to it in the decompiled output.
In this situation, are there any pointers on how I can:
pinpoint the function call in the program that sets the display mode to 800x600x24 (ResHacker maybe?) and change the value being passed to it so that it sets 800x600x32
view/intercept DirectX calls being made while it's running
or if that's not possible, at least
run the program in an environment that emulates a 24-bit display
I don't need to recover the source code (as nice as that would be) so much as I just want to get the program running.
One technique you could try in your disassembler is to search for the constants you remember, as the actual bytes they would occupy within the executable. I guess you used the DirectDraw SetDisplayMode call; that's a COM method, so it can't be traced to or from a DLL entry point as easily. It takes parameters for width, height and bits per pixel, and they are DWORDs (32-bit), so search for "58 02 00 00" (600), "20 03 00 00" (800) and "18 00 00 00" (24). Hopefully that will narrow it down to what you need to change.
By the way which disassembler are you using?
This approach may be complicated somewhat if your VB6 program was compiled to p-code rather than native code, as you'll just get a huge chunk of data that represents the program rather than useful assembler instructions.
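As a concrete starting point, here is a small scanner along those lines (my own sketch; the offsets it prints would then be examined in a disassembler or patched with a hex editor):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print every offset at which a byte pattern occurs in the buffer. */
static void find_all(const unsigned char *buf, long n,
                     const unsigned char *pat, int plen, const char *name)
{
    for (long i = 0; i + plen <= n; i++)
        if (memcmp(buf + i, pat, plen) == 0)
            printf("%s found at offset 0x%lX\n", name, i);
}

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s game.exe\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    fseek(f, 0, SEEK_END);
    long n = ftell(f);
    rewind(f);
    unsigned char *buf = malloc(n);
    if (!buf || fread(buf, 1, n, f) != (size_t)n) { fclose(f); return 1; }
    fclose(f);

    /* 800, 600 and 24 as little-endian DWORD immediates */
    const unsigned char w[] = { 0x20, 0x03, 0x00, 0x00 };
    const unsigned char h[] = { 0x58, 0x02, 0x00, 0x00 };
    const unsigned char b[] = { 0x18, 0x00, 0x00, 0x00 };
    find_all(buf, n, w, 4, "800");
    find_all(buf, n, h, 4, "600");
    find_all(buf, n, b, 4, " 24");

    free(buf);
    return 0;
}

A cluster where all three patterns sit close together is a good candidate for the SetDisplayMode arguments; changing the 0x18 there to 0x20 would make the call request 32 bits per pixel.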
Check this:
http://www.sevenforums.com/tutorials/258-color-bit-depth-display-settings.html
If your graphics card doesn't have an entry for 24-bit display... I guess hacking your code is the only possibility. That, or finding an old machine to throw Windows 95 on :P
