HID device info structure from GetRawInputData - windows

Where can I get the structure for an HID device?
For example:
raw data from a device, using GetRawInputData:
( 0 137 117 0 146 130 24 128 0 )
( 0 137 117 0 146 130 8 128 0 )
From these two reports I can see that a button has been released: the 6th char changed from 24 to 8, i.e. its 4th bit was cleared.
By analyzing the raw stream I can figure out where the buttons, switches and analog data are. Is there a way to ask Windows for this information?
My main goal is basically to get a structure like:
Button - 6th char, 4th bit.
Analog - 2nd char
Switch - 6th char, 0-3th bit.
The only solution I found was HID descriptors, but I'm not sure how to use them. After reading the documentation I felt like I had run into a brick wall. Is there maybe a good example of how to use them, or a book that describes them better? (Or an easier way of doing this without descriptors?)
I found HidP_GetButtons and HidP_GetUsages, but I still have no idea how to extract the structure described above.

Oh, you have to use GetRawInputData. There is a somewhat crummy example on MSDN.

The problem seems to be that each device has its own structure. There doesn't seem to be a universal way through the Win32 API to get the interpretation of that structure.
The combination of:
* GetRawInputDeviceInfo (which gives you a RID_DEVICE_INFO struct)
* GetRawInputData
* GetRawInputBuffer
seems to get you all the information you can from Win32.
After that, you probably need some external source of information (or one you generate yourself) that describes the specific fields, etc.
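That said, the HidP_* parser functions the question mentions can do a lot of this interpretation for you. Below is a minimal sketch (no error handling; hDevice is assumed to be the device handle from the RAWINPUT header) that fetches the device's preparsed data with GetRawInputDeviceInfo(RIDI_PREPARSEDDATA) and asks HidP_GetButtonCaps / HidP_GetValueCaps where the buttons and analog values live:

#include <windows.h>
#include <hidsdi.h>
#include <hidpi.h>
#include <vector>
#include <cstdio>
#pragma comment(lib, "hid.lib")

// Ask the HID parser which buttons and values a raw input device reports.
void DumpHidLayout(HANDLE hDevice)
{
    // Fetch the preparsed (descriptor-derived) data for this device.
    UINT size = 0;
    GetRawInputDeviceInfo(hDevice, RIDI_PREPARSEDDATA, NULL, &size);
    std::vector<BYTE> blob(size);
    GetRawInputDeviceInfo(hDevice, RIDI_PREPARSEDDATA, blob.data(), &size);
    PHIDP_PREPARSED_DATA preparsed = (PHIDP_PREPARSED_DATA)blob.data();

    HIDP_CAPS caps;
    HidP_GetCaps(preparsed, &caps);

    // Buttons: each entry describes a single usage or a usage range (e.g. Button 1..16).
    USHORT nButton = caps.NumberInputButtonCaps;
    std::vector<HIDP_BUTTON_CAPS> bcaps(nButton);
    HidP_GetButtonCaps(HidP_Input, bcaps.data(), &nButton, preparsed);
    for (USHORT i = 0; i < nButton; ++i) {
        if (bcaps[i].IsRange)
            std::printf("buttons: usage page 0x%X, usages %u..%u\n",
                        (unsigned)bcaps[i].UsagePage,
                        (unsigned)bcaps[i].Range.UsageMin,
                        (unsigned)bcaps[i].Range.UsageMax);
        else
            std::printf("button: usage page 0x%X, usage 0x%X\n",
                        (unsigned)bcaps[i].UsagePage,
                        (unsigned)bcaps[i].NotRange.Usage);
    }

    // Values: each entry describes one axis, hat switch or other analog control.
    USHORT nValue = caps.NumberInputValueCaps;
    std::vector<HIDP_VALUE_CAPS> vcaps(nValue);
    HidP_GetValueCaps(HidP_Input, vcaps.data(), &nValue, preparsed);
    for (USHORT i = 0; i < nValue; ++i)
        std::printf("value: usage page 0x%X, usage 0x%X, logical %ld..%ld, %u bits\n",
                    (unsigned)vcaps[i].UsagePage,
                    (unsigned)vcaps[i].NotRange.Usage,
                    (long)vcaps[i].LogicalMin, (long)vcaps[i].LogicalMax,
                    (unsigned)vcaps[i].BitSize);
}

At runtime you can then hand each report from GetRawInputData to HidP_GetUsages / HidP_GetUsageValue, which resolve the bit positions for you, so the "6th char, 4th bit" mapping never has to be hard-coded.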

Multidrop Bus addressing

Currently I am reading the MDB interface specification (https://namanow.org/wp-content/uploads/Multi-Drop-Bus-and-Internal-Communication-Protocol.pdf), Version 4.3 (July 2019). In chapter 2.3, page 34, they talk about the peripheral address, and I don't understand how the address scheme is built. One prototype of the address scheme looks like this: 00101xxxB (this can be 28H, for example). The upper five bits are used for addressing and the lower 3 bits are the command. If I apply this to my example, then the address is 5 and the command is 0. I am a little bit confused, can someone please explain this to me?
OK. First, read this:
Then, we have the value 0x00 as a command to an Energy Management System (huh, I never saw this kind of MDB device in the wild). The MDB datasheet doesn't contain any references to this device yet, but it seems it's just the POLL command; the device must respond to POLL with its last status changes (if any) or just ACK with 0x100 - that's not a mistake, it's 0x00 with the 9th bit set. Don't read this datasheet unless you want to lose your mind. I have already read through this awesome spec and put it (mostly) into a hardware implementation; see my GitHub repo for a complete solution.
Cheers.
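To make the bit layout from the question concrete, here is a tiny sketch: the value the spec quotes as the address (28H) is simply the byte with the three command bits zeroed, while the 5-bit slot number computed in the question (5) is that same field read as a plain number.

#include <cstdio>
#include <cstdint>

int main()
{
    uint8_t byte    = 0x28;          // 00101000B, the example from the question
    uint8_t address = byte & 0xF8;   // upper 5 bits kept in place: 0x28, what the spec calls the address
    uint8_t slot    = byte >> 3;     // the same 5 bits as a plain number: 5
    uint8_t command = byte & 0x07;   // lower 3 bits: 0 (read above as POLL for this device)

    std::printf("byte 0x%02X -> address 0x%02X (slot %u), command %u\n",
                byte, address, slot, command);
    return 0;
}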

using int64 type for snmp v2c oid?

I am debugging some SNMP code for an integer overflow problem. Basically we use an integer to store disk/RAID capacity in KB; however, when a disk/RAID larger than 2 TB is used, it overflows.
I read on some internet forums that SNMP v2c supports Integer64 or Unsigned64. In my test it still just sends the lower 32 bits, even though I have set the type to integer64 or unsigned64.
Here is how I did it:
A standalone program obtains the capacity and writes the data to a file. Example lines for RAID capacity:
my-sub-oid
Counter64
7813857280
/etc/snmp/snmpd.conf has a clause to pass the OIDs through:
pass_persist mymiboid /path/to/snmpagent
In the mysnmpagent source, I read the OID map from the file into an oid/type/value structure and print it to stdout:
printf("%s\n", it->first.c_str());
printf("%s\n", it->second.type.c_str());
printf("%s\n", it->second.value.c_str());
fflush(stdout);
Using snmpget to get the sub-OID returns:
mysuboid = Counter32: 3518889984
I use tcpdump and the last segment of the value portion is:
41 0500 d1be 0000
41 should be the tag, 05 should be the length, and the value is only carrying the lower 32 bits of the capacity (note that 7813857280 is 0x1.d1.be.00.00).
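A quick sanity check of that truncation (the constants are just the ones from the capture above):

#include <cstdio>
#include <cstdint>

int main()
{
    uint64_t kb = 7813857280ULL;                      // 0x1D1BE0000, the real capacity
    std::printf("lower 32 bits: %u\n", (uint32_t)kb); // 3518889984 = 0xD1BE0000, exactly what snmpget shows
    return 0;
}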
I do find that using the string type sends the correct value (in octet-string format), but I want to know if there is a way to use a 64-bit integer in SNMP v2c.
I am running NET-SNMP 5.4.2.1 though.
thanks a lot.
Update:
I found the following on the Net-SNMP documentation page for snmpd.conf, regarding pass (and probably also pass_persist). I guess it's forcing the Counter64 down to a Counter32.
Note:
The SMIv2 type counter64 and SNMPv2 noSuchObject exception are not supported.
You are supposed to use two Unsigned32 objects for the lower and upper halves of your large number.
Counter64 is not meant to be used for large numbers this way.
For reference: 17 Common MIB Design Errors (the last one).
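A minimal sketch of that split, with the surrounding pass_persist request loop omitted and the two sub-OIDs invented for the example ("gauge" is the usual pass/pass_persist type string for returning an Unsigned32):

#include <cstdio>
#include <cstdint>

int main()
{
    uint64_t capacityKb = 7813857280ULL;                    // 0x1D1BE0000
    uint32_t hi = (uint32_t)(capacityKb >> 32);             // 1
    uint32_t lo = (uint32_t)(capacityKb & 0xFFFFFFFFULL);   // 0xD1BE0000 = 3518889984

    // Three-line OID/type/value responses, one per half (sub-OIDs are made up).
    std::printf(".1.3.6.1.4.1.99999.1.1\ngauge\n%u\n", hi);
    std::printf(".1.3.6.1.4.1.99999.1.2\ngauge\n%u\n", lo);
    std::fflush(stdout);
    return 0;
}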
SNMP SMIv2 defines a new type, Counter64 (https://www.rfc-editor.org/rfc/rfc2578#page-24), which is in fact an unsigned 64-bit integer. So if your data falls into that range, using Counter64 is proper.
"In my test it'll still just send the lower 32 bits even though I have set the type to integer64 or unsigned64" sounds like a problem, but unless you show more details (like some code) on how you tested it and received that result, nobody can help much further.

String pattern/algorithm for PIN & PUK generated from MSIN

I wonder how mobile phone companies generate both PIN and PUK for their SIM cards?
I have a large database of already generated codes, this database contains 3 columns:
* MSIN : Mobile Subscriber Identification Number (10 digits)
* PIN : Personal Identification Number (4 digits)
* PUK : Personal Unblocking Code (8 digits)
My guess so far is that both PIN and PUK are generated from the MSIN, because the MSIN column is incrementing while the other two look as if they are generated by some logic, something like:
MSIN PIN PUK
1000000000 3234 20005627
1000000001 5993 92870018
1000000002 3465 30327846
...
Is it possible to know how these serials are generated? Using the existing database, is it possible to guess the algorithm used?
I'm asking this for the sake of knowledge only, not to use the provided information in any illegal activity ;)
thanx.
UPDATE
I searched for how many times some pin codes are repeated and found this
0000 -> 261 times
1111 -> 429982 times
2222 -> 275 times
3333 -> 233 times
4444 -> 279 times
5555 -> 277 times
6666 -> 242 times
7777 -> 263 times
8888 -> 249 times
9999 -> 242 times
The PIN 1111 is used far more than the others! So maybe the algorithm is changed from time to time... or there's no logic behind it at all :(
UPDATE 2
I checked the MSINs and found that they make jumps in the incrementing sequence, for example:
1011000000
1011000001
... here they followed incrementing until
1011499999
then they jumped to
1031000000
... the same thing here, up to
1031299999
then a jump to
1131000000
...
This suggests that whenever they want to issue new cards, let's say 500,000 of them, they start with a new MSIN range that doesn't follow the incrementing rule already in the database, and they may also change the algorithm behind the code generation (which would explain why in some batches all the cards were issued with PIN 1111).
The answer can go from really easy to pretty complex.
If I had to design the system, the function (PIN, PUK) = f(MSIN) wouldn't be easy to guess and, moreover, would not be reversible (meaning that if you know (PIN, PUK) you cannot recover the MSIN).
Because the subject is around security and payment, you can probably expect a complex function.
Unless it is documented somewhere on the net (which I doubt) it is very unlikely you will find the function f.
If we make the assumption that PUK/PIN are generated from the MSIN, there's a virtually infinite number of ways they could be doing this. To take one (reasonable) example, they could be using an HMAC. Even assuming you knew what hash algorithm they're using, you'd still have to determine the secret key, and the search space for that is on the order of 2^160 (for HMAC-SHA1) - totally impractical to search exhaustively.
The only chance you have is if they're doing something stupid, like using an easily guessed or determined algorithm to generate the PIN/PUK - and there's no practical mechanical procedure to work that out, just trial, error, and intuition.
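Purely to illustrate the HMAC idea above, here is a hypothetical sketch; the key, the choice of OpenSSL HMAC-SHA1 and the digit-folding step are all invented for the example, and nothing suggests any operator actually does it this way:

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <cstdio>
#include <cstring>

int main()
{
    const char *key  = "issuer-secret-key";   // the unknown secret; ~2^160 search space for HMAC-SHA1
    const char *msin = "1000000000";

    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int  macLen = 0;
    HMAC(EVP_sha1(), key, (int)std::strlen(key),
         (const unsigned char *)msin, std::strlen(msin), mac, &macLen);

    // Fold the first 8 MAC bytes into decimal digits (one arbitrary choice of many).
    unsigned long long v = 0;
    for (unsigned int i = 0; i < 8 && i < macLen; ++i)
        v = (v << 8) | mac[i];

    unsigned pin = (unsigned)(v % 10000ULL);                  // 4 digits
    unsigned puk = (unsigned)((v / 10000ULL) % 100000000ULL); // 8 digits
    std::printf("MSIN %s -> PIN %04u, PUK %08u\n", msin, pin, puk);
    return 0;
}

Even with the full database from the question, recovering the key of a scheme like this by inspection is hopeless, which is the point of the answer above.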
Usually it's not the mobile network operator who generates the PIN and PUK; the SIM card manufacturer does this unless ordered otherwise by the operator.
What makes you believe that one can calculate the PIN and/or PUK from the MSIN? Neither the network operator nor the SIM manufacturer would gain any advantage from this. I would assume that PIN and PUK are as random as economically feasible in order to provide the intended security.
However, I find the 1111 anomaly interesting. Is your sample straight from manufacturing, or did you get an HLR dump? The latter might explain the accumulation of 1111: people change their PIN to something easy to remember and type, and 1111 would be the most common candidate for that.

Fix hard-coded display setting without source (24-bit, need 32-bit)

I wrote a program about 10 years ago in Visual Basic 6 which was basically a full-screen game similar to Breakout / Arkanoid but had 'demoscene'-style backgrounds. I found the program, but not the source code. Back then I hard-coded the display mode to 800x600x24, and the program crashes whenever I try to run it as a result. No virtual machine seems to support 24-bit display when the host display mode is 16/32-bit. It uses DirectX 7 so DOSBox is no use.
I've tried all sorts of decompilers, and at best they give me the form names and a bunch of assembly calls which mean nothing to me. The display mode setting was a DirectX 7 call, but there's no clear reference to it in the decompilation.
In this situation, are there any pointers on how I can:
pinpoint the function call in the program which is setting the display mode to 800x600x24 (ResHacker maybe?) and change the value being passed to it so it sets 800x600x32
view/intercept DirectX calls being made while it's running
or if that's not possible, at least
run the program in an environment that emulates a 24-bit display
I don't need to recover the source code (as nice as that would be) so much as I just want to get it running.
One technique you could try in your disassembler is to search for the constants you remember, but as the actual bytes that would be contained within the executable. I guess you used the DirectDraw SetDisplayMode call, which is a COM method, so it can't be as easily traced to/from an entry point in a DLL. It takes parameters for width, height and bits per pixel, and they are DWORDs (32-bit), so do a search for "58 02 00 00" (600), "20 03 00 00" (800) and "18 00 00 00" (24). Hopefully that will narrow it down to what you need to change.
By the way, which disassembler are you using?
This approach may be complicated somewhat if your VB6 program was compiled to p-code rather than native code, as you'll just get a huge chunk of data that represents the program rather than useful assembler instructions.
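If you'd rather not eyeball a hex editor, here is a small sketch of the same byte search; it only reports places where the 800 and 600 constants sit near each other, since 24 on its own produces far too many hits (the file name game.exe is just a placeholder):

#include <cstdio>
#include <cstring>
#include <fstream>
#include <iterator>
#include <vector>

int main()
{
    std::ifstream in("game.exe", std::ios::binary);
    std::vector<char> img((std::istreambuf_iterator<char>(in)),
                          std::istreambuf_iterator<char>());

    const char w800[] = { 0x20, 0x03, 0x00, 0x00 };   // 800 as a little-endian DWORD
    const char w600[] = { 0x58, 0x02, 0x00, 0x00 };   // 600 as a little-endian DWORD

    for (size_t i = 0; i + 4 <= img.size(); ++i) {
        if (std::memcmp(&img[i], w800, 4) != 0)
            continue;
        // Look for the 600 within 64 bytes of the 800 we just found;
        // the 24 -> 32 byte you want to patch should be close by.
        size_t lo = (i > 64) ? i - 64 : 0;
        size_t hi = (i + 68 <= img.size()) ? i + 64 : img.size() - 4;
        for (size_t j = lo; j <= hi; ++j)
            if (std::memcmp(&img[j], w600, 4) == 0)
                std::printf("candidate: 800 at offset 0x%08zx, 600 at 0x%08zx\n", i, j);
    }
    return 0;
}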
Check this:
http://www.sevenforums.com/tutorials/258-color-bit-depth-display-settings.html
If your graphics card doesn't have an entry for 24-bit display... I guess hacking your code is the only possibility. That, or finding an old machine to throw Windows 95 on :P

Using Win32 Crypto API

I can't find any help on implementing encryption with the PROV_RSA_AES CSP in C++. Is there any article or book to help me out with it?
Here is an article about it.
Here is another one.
I just want to use one. I figured out how to get a context, but I'm still wondering about the size of the buffer I need to use for CryptEncrypt() to get it working with AES-256. I also want to use a random salt.
AES-256 in CBC mode with PKCS#7 padding (which is the default) needs a buffer size equal to the input length rounded up to the next multiple of 16, but always at least one byte more than the input, i.e. 35 -> 48, 52 -> 64, 80 -> 96.
There is no salt involved in AES-256. Are you talking about key derivation? Or do you mean the IV?
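A minimal sketch of that sizing rule in code, assuming hKey is an AES-256 key you have already created on a PROV_RSA_AES provider (e.g. via CryptDeriveKey or CryptImportKey):

#include <windows.h>
#include <wincrypt.h>
#include <cstdio>
#include <cstring>
#include <vector>
#pragma comment(lib, "advapi32.lib")

std::vector<BYTE> EncryptAes256(HCRYPTKEY hKey, const BYTE *plain, DWORD plainLen)
{
    // PKCS#7 padding always adds 1..16 bytes, so round up to the next
    // 16-byte boundary and never stay equal to the input length.
    DWORD bufLen = (plainLen / 16 + 1) * 16;     // 35 -> 48, 52 -> 64, 80 -> 96
    std::vector<BYTE> buf(bufLen);
    std::memcpy(buf.data(), plain, plainLen);

    DWORD dataLen = plainLen;                    // in: plaintext size, out: ciphertext size
    if (!CryptEncrypt(hKey, 0, TRUE, 0, buf.data(), &dataLen, bufLen))
        std::printf("CryptEncrypt failed: 0x%08lX\n", GetLastError());

    buf.resize(dataLen);
    return buf;
}

If the "salt" you have in mind is actually the IV, that is set separately on the key with CryptSetKeyParam(hKey, KP_IV, iv, 0) before encrypting.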
