What is GetAsyncKeyState's return value?

I'm trying to use GetAsyncKeyState in a project I'm working on. The only problem is I don't know what value it returns, and in turn how to check for it. According to the MSDN documentation:
Type: SHORT
If the function succeeds, the return value specifies whether the key was pressed since the last call to GetAsyncKeyState, and whether the key is currently up or down. If the most significant bit is set, the key is down, and if the least significant bit is set, the key was pressed after the previous call to GetAsyncKeyState. However, you should not rely on this last behavior; for more information, see the Remarks.
I know that type SHORT is a number, but I've seen a lot of different answers across Stack Overflow and the internet. What is GetAsyncKeyState's return value when it evaluates to true? Does it return 0 or 0x8001?

The return value can be one of 4 possible values:
0x0000
0x0001
0x8000
0x8001
Use & 0x8000 (or alternatively < 0 since SHORT is a signed type) to check if "the most significant bit is set" (which makes a signed type negative).
Use & 0x0001 to check if "the least significant bit is set".
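For example, a minimal polling loop might look like this (a sketch, not from the original answer; it assumes <windows.h> and uses the space bar):

#include <windows.h>
#include <cstdio>

int main() {
    for (;;) {
        SHORT state = GetAsyncKeyState(VK_SPACE);
        if (state & 0x8000) {   // most significant bit: key is currently down
            std::puts("space is down");
        }
        if (state & 0x0001) {   // least significant bit: pressed since last call (unreliable)
            std::puts("space was pressed since the last call");
        }
        Sleep(50);              // poll every 50 ms
    }
}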

Related

Define "injected"

Reading the following documentation:
https://learn.microsoft.com/en-us/windows/desktop/api/winuser/ns-winuser-tagkbdllhookstruct
Bit 4 (counting from 0) is defined as: "Specifies whether the event was injected. The value is 1 if that is the case; otherwise, it is 0. Note that bit 1 is not necessarily set when bit 4 is set."
What is the actual definition of an "injected event" in this context?
You'd think it was an easier thing to google.
If you look at Microsoft's documentation for the SendInput function it describes what it does as inserting or injecting input:
"The function returns the number of events that it successfully inserted into the keyboard or mouse input stream. .... This function is subject to UIPI. Applications are permitted to inject input only into applications that are at an equal or lesser integrity level."
Keyboard input generated by the user and sent from the device driver will not have the bit set. Input created using API functions will have the bit set.
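As a hedged sketch (not part of the original answer), a low-level keyboard hook can test that bit through the LLKHF_INJECTED flag:

#include <windows.h>

// WH_KEYBOARD_LL callback: bit 4 of KBDLLHOOKSTRUCT::flags is LLKHF_INJECTED
LRESULT CALLBACK LowLevelKeyboardProc(int nCode, WPARAM wParam, LPARAM lParam) {
    if (nCode == HC_ACTION) {
        const KBDLLHOOKSTRUCT* kb = reinterpret_cast<const KBDLLHOOKSTRUCT*>(lParam);
        if (kb->flags & LLKHF_INJECTED) {
            // Event came from SendInput/keybd_event, not from the keyboard driver
            OutputDebugStringA("injected key event\n");
        }
    }
    return CallNextHookEx(nullptr, nCode, wParam, lParam);
}

// Installed with, e.g.:
// SetWindowsHookExA(WH_KEYBOARD_LL, LowLevelKeyboardProc, GetModuleHandle(nullptr), 0);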

How do I retrieve high and low-order parts of a value from two registers in inline assembly?

I'm currently working on a little game that can run from the boot sector of a hard drive, just for something fun to do. This means my program runs in 16-bit real mode, and I have my compiler flags set up to emit pure i386 code. I'm writing the game in C++, but I do need a lot of inline assembly to talk to the BIOS via interrupt calls. Some of these calls return a 32-bit integer, but stored in two 16-bit registers. Currently I'm doing the following to get my number out of the assembly:
auto getTicks = [](){
    uint16_t ticksL{ 0 }, ticksH{ 0 };
    asm volatile("int $0x1a" : "=c"(ticksH), "=d"(ticksL) : "a"(0x0));
    return static_cast<uint32_t>( (ticksH << 16) | ticksL );
};
This is a lambda function I use to call this interrupt function which returns a tick count. I'm aware that there are better methods to get time data, and that I haven't implemented a check for AL to see if midnight has passed, but that's another topic.
As you can see, I have to use two 16-bit values, get the register values separately, then combine them into a 32-bit number the way you see at the return statement.
Is there any way I could retrieve that data into a single 32-bit number in my code right away, avoiding the shift and bitwise-OR? I know that those 16-bit registers I'm accessing are really just the higher and lower 16 bits of a 32-bit register, but I have no idea how to access the original 32-bit register as a whole.
As Jester has already pointed out, these are in fact 2 separate registers, so there is no way to retrieve "the original 32-bit register."
One other point: That interrupt modifies the ax register (returning the 'past midnight' flag), however your asm doesn't inform gcc that you are changing ax. Might I suggest something like this:
asm volatile("int $0x1a" : "=c"(ticksH), "=d"(ticksL), "=a"(midnight) : "a"(0x0));
Note that midnight is also a uint16_t.
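Putting that together with the original lambda, a corrected version might read like this (a sketch, under the same toolchain assumptions as the question):

auto getTicks = [](){
    uint16_t ticksL{ 0 }, ticksH{ 0 }, midnight{ 0 };
    // INT 1Ah/AH=0h: CX:DX = tick count, AL = past-midnight flag.
    // "=a"(midnight) tells gcc the interrupt writes ax.
    asm volatile("int $0x1a"
                 : "=c"(ticksH), "=d"(ticksL), "=a"(midnight)
                 : "a"(0x0));
    return static_cast<uint32_t>((static_cast<uint32_t>(ticksH) << 16) | ticksL);
};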
As other answers suggest, you can't load DX and CX directly into a 32-bit register; you'd have to combine them as you suggest.
In this case there is an alternative. Rather than using INT 1Ah/AH=0h you can read the BIOS Data Area (BDA) in low memory for the 32-bit DWORD value and load it into a 32-bit register. This is allowed in real mode on i386 processors. Two memory addresses of interest:
40:6C  dword  Daily timer counter, equal to zero at midnight;
              incremented by INT 8; read/set by INT 1A
40:70  byte   Clock rollover flag, set when 40:6C exceeds 24 hrs
These two memory addresses are in segment:offset format, but they are equivalent to the physical addresses 0x0046C and 0x00470.
All you'd have to do is temporarily set the DS register to 0 (saving the previous value), turn off interrupts with CLI, retrieve the values from low memory using C/C++ pointers, re-enable interrupts with STI, and restore DS to the previously saved value. This is of course added overhead in the boot sector compared to using INT 1Ah/AH=0h, but it gives you direct access to the memory addresses the BIOS is reading/writing on your behalf.
Note: if DS is already set to zero, there is no need to save/set/restore it. Since we don't see the code that sets up the environment before calling into the C++ code, I don't know what your default segment values are. And if you only need one of the two values rather than both read atomically, you can eliminate the CLI/STI.
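A rough sketch of that approach (assuming DS is already 0 and that flat pointers at these offsets work under your setup; this is not code from the original answer):

auto getTicksFromBda = [](){
    uint32_t ticks;
    asm volatile("cli");  // keep INT 8 from updating the counter mid-read
    ticks = *reinterpret_cast<volatile uint32_t*>(0x046C);  // 40:6C daily timer counter
    asm volatile("sti");
    return ticks;  // the rollover flag is the byte at 0x00470 (40:70)
};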
You're looking for the 'A' constraint, which refers to the dx:ax register pair as a double-wide value. You can see the full set of defined constraints for x86 in the gcc documentation. Unfortunately there are no constraints for any other register pairs (and this interrupt returns its result in cx:dx), so you have to get them as two values and reassemble them with shift and or, as you describe.
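For a hypothetical service that did return its result in dx:ax, the usage would look something like this (the interrupt number is made up, and the constraint is toolchain-dependent: under gcc -m32, 'A' instead names the 64-bit edx:eax pair):

uint32_t value;
// "=A" binds the dx:ax register pair as one double-wide value
asm volatile("int $0x99" : "=A"(value) : "a"(0x0));  // 0x99 is a hypothetical service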

What is the meaning of 0xdead000000000000?

This value appears in poison.h (Linux source: include/linux/poison.h):
/*
* Architectures might want to move the poison pointer offset
* into some well-recognized area such as 0xdead000000000000,
* that is also not mappable by user-space exploits:
*/
I'm just curious: what is special about the value 0xdead000000000000?
Pretty sure this is just a variant of 0xdeadbeef, i.e. it's just an easily identified signal value (see http://en.wikipedia.org/wiki/Hexspeak for deadbeef).
The idea of pointer poisoning is to ensure that a poisoned list pointer can't be used without causing a crash. Say you unlink a structure from the list it was in. You then want to invalidate the pointer value to make sure it's not used again for traversing the list. If there's a bug somewhere in the code -- a dangling pointer reference -- you want to make sure that any code trying to follow the list through this now-unlinked node crashes immediately (rather than later in some possibly unrelated area of code).
Of course you can poison the pointer simply by putting a null value in it or any other invalid address. Using 0xdead000000000000 as the base value just makes it easier to distinguish an explicitly poisoned value from one that was initialized with zero or got overwritten with zeroes. And it can be used with an offset (LIST_POISON{1,2}) to create multiple distinct poison values that all point into unusable areas of the virtual address space and are identifiable as invalid at a glance.
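A hedged C++ sketch of the idea (the kernel's actual macros live in include/linux/poison.h and take the per-architecture offset from CONFIG_ILLEGAL_POINTER_VALUE; this assumes a 64-bit target):

#include <cstdint>

constexpr std::uintptr_t POISON_POINTER_DELTA = 0xdead000000000000ULL;  // per-arch offset

// Distinct, recognizable, non-dereferenceable values for poisoning list links:
void* const LIST_POISON1 = reinterpret_cast<void*>(0x100 + POISON_POINTER_DELTA);
void* const LIST_POISON2 = reinterpret_cast<void*>(0x200 + POISON_POINTER_DELTA);

// On unlink, the kernel's list_del() stores these into the dead node:
//     entry->next = LIST_POISON1;
//     entry->prev = LIST_POISON2;
// so any stale traversal through the node faults immediately and identifiably.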

How can I use DnsQuery to get a SOA record?

I am implementing a DNS cache in Windows† and I need to know how long to cache negative responses. RFC 2308 says to use the TTL of the SOA record I receive, but despite the SOA record being sent to me on the wire (as confirmed by tcpdump), DnsQuery will not expose it to me. For example, running the following C++ code gives me no output:
PDNS_RECORDA data = nullptr;
DNS_STATUS rv = DnsQuery_A("www.google.com", DNS_TYPE_SOA, DNS_QUERY_STANDARD,
                           nullptr, &data, nullptr);
if (data != nullptr) {
    cout << "Yippee" << endl;
}
I have tried every value between 0 and 0xFF as the second argument, but I am never given a SOA record (though several other record types come through fine, such as A, AAAA, and CNAME). Does anyone know how to get it to behave?
The third parameter accepts various flags. I tried DNS_QUERY_RETURN_MESSAGE which looked promising, but seemed to have no visible effect (I am on Windows 7 so it should be supported per the docs). I also tried a variety of other flags in desperation. Namely DNS_QUERY_BYPASS_CACHE, DNS_QUERY_RETURN_MESSAGE | DNS_QUERY_BYPASS_CACHE, DNS_QUERY_WIRE_ONLY, and DNS_QUERY_DONT_RESET_TTL_VALUES. I have read through the entire list of query options and did not see anything else that looked terribly promising. Note that for each flag, I tried every value between 0 and 0xFF as the second argument, and looked for anything in the resulting list of type DNS_TYPE_SOA.
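For concreteness, the brute-force probe described above might look roughly like this (a sketch, not the code I actually ran; it assumes an ANSI build with windns.h and linking against dnsapi.lib):

// Try every record type 0..0xFF and scan the results for a SOA record
for (WORD type = 0; type <= 0xFF; ++type) {
    PDNS_RECORDA data = nullptr;
    DNS_STATUS rv = DnsQuery_A("www.google.com", type, DNS_QUERY_BYPASS_CACHE,
                               nullptr, &data, nullptr);
    if (rv == ERROR_SUCCESS && data != nullptr) {
        for (PDNS_RECORDA r = data; r != nullptr; r = r->pNext) {
            if (r->wType == DNS_TYPE_SOA) {
                cout << "SOA found for query type " << type << endl;
            }
        }
        DnsRecordListFree(data, DnsFreeRecordList);
    }
}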
† I know Windows has a perfectly good DNS cache.

CC LSB and MSB in a MIDI file

In a MIDI file, if I want to change, say, panning, I write
<delta_time> <176(ch1 control change)> <10(pan control)> <value>
This sets the panning value to a number between 0 and 127 (the MSB). For finer tuning I can also use control message 42, which sets an LSB for the panning. My question is: to set a precise value, do I have to repeat the whole message, such as:
<delta_time> <176(ch1 control change)> <10(pan control msb)> <value>
<delta_time(0)> <176(ch1 control change)> <42(pan control lsb)> <value>
or can I send
<delta_time> <176(ch1 control change)> <10(pan control)> <value(msb)> <value(lsb)>
Also, what happens if I just send the LSB? Will the MSB be assumed to be 0?
Thanks
Each control-change event is an independent event, so it needs its own delta time, its own status byte, and its own parameter byte(s).
(The status byte can be omitted if it has the same value as the previous one, but this depends only on the status byte's value, and not on whether the events are actually related.)
What happens if you send the MSB message without the LSB message is not clearly specified, and even if it were, you could not be sure that devices would implement it correctly.
To be safe, to change a control with a 14-bit value, send both the MSB and LSB messages, in that order.
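For illustration, a 14-bit pan change could be laid out like this in a track chunk (a sketch; the value bytes 0x40 and 0x00 are arbitrary example data):

// Hypothetical SMF track fragment: pan MSB event, then pan LSB event
const unsigned char panChange[] = {
    0x00, 0xB0, 0x0A, 0x40,  // delta 0, status 0xB0 (CC, ch. 1), controller 10 (pan MSB), value
    0x00, 0xB0, 0x2A, 0x00   // delta 0, status 0xB0 (or omitted via running status), controller 42 (pan LSB), value
};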
I'm just researching this myself. The answer above is not correct, it seems to me. If you reference the official MIDI spec (normally you have to pay for the spec, but here's a free reprint):
http://oktopus.hu/uploaded/Tudastar/MIDI%201.0%20Detailed%20Specification.pdf
On page 12, this MSB/LSB scheme is discussed. It says that sending the LSB values is OPTIONAL. I think it's safe to assume that the LSB will be zero if not specified. Furthermore, it clearly states that re-sending the MSB when only the LSB changes is also optional. The only thing that's required is that if you are using LSBs and you change the MSB, you must update the LSB as well.
