I am developing a USB HID device using an STMicro microcontroller. I started with STMicro's HID example which works fine. I am using C++ on Windows 7 64-bit for the PC side. I have an application that works with my device. There is one thing I can't figure out, however.
The example firmware only allowed sending and receiving 2 bytes at a time, which is determined by HIDP_CAPS.OutputReportByteLength and HIDP_CAPS.InputReportByteLength. I would like to send more data than this at once, but I can't figure out how to increase the report lengths. I successfully changed the endpoint wMaxPacketSize, the VID and PID, and a few other things, but I can't figure out how Windows calculates the input and output report lengths. There don't seem to be any fields in my report or device descriptors that indicate these lengths, but I can't imagine where else they might be coming from.
Can anyone tell me how Windows determines the HIDP_CAPS.OutputReportByteLength and HIDP_CAPS.InputReportByteLength?
How can I increase these lengths?
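For reference, the two values in question come from HidP_GetCaps. A minimal sketch of how an application reads them on the PC side (deviceHandle is assumed to be a handle to the HID device already opened with CreateFile; link against hid.lib):

#include <windows.h>
#include <hidsdi.h>   // HidD_GetPreparsedData, HidD_FreePreparsedData
#include <hidpi.h>    // HidP_GetCaps, HIDP_CAPS
#include <cstdio>

void printReportLengths(HANDLE deviceHandle)
{
    PHIDP_PREPARSED_DATA preparsed = nullptr;
    if (HidD_GetPreparsedData(deviceHandle, &preparsed))
    {
        HIDP_CAPS caps = {};
        if (HidP_GetCaps(preparsed, &caps) == HIDP_STATUS_SUCCESS)
        {
            // Both lengths include the leading report ID byte.
            printf("in: %u  out: %u\n",
                   (unsigned)caps.InputReportByteLength,
                   (unsigned)caps.OutputReportByteLength);
        }
        HidD_FreePreparsedData(preparsed);
    }
}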
I figured it out. I thought I would post here in case anyone else needs to know. I'm not entirely sure I really understand it all, so if I made a mistake, someone please correct me.
I had to change the report descriptor in my firmware. I had several usages. Windows gets the report descriptor, figures out which usage requires the longest length, and uses that length. On one of my input reports I made the following changes (the input report is just an array of bytes in firmware):
0x27, 0xFF, 0xFF, 0xFF, 0xFF, // Logical Maximum is 4 bytes long and has a value of 0xFFFFFFFF
0x95, 0x01, // Report Count is 1
0x75, 0x20, // Report Size is 32 bits
I did something similar for the output report, but it has no Report Count item (0x95).
Windows now tells me I can send and receive 5 bytes, which I believe is the report ID byte plus Report Count times Report Size (1 + 1 * 4 bytes).
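If you need reports larger than a few bytes, a more common pattern is to keep Report Size at 8 bits and raise Report Count instead. A hedged sketch of a vendor-defined fragment (placed inside your top-level collection; the usage values are placeholders) that should give 64-byte reports, i.e. HIDP_CAPS lengths of 65 including the report ID byte:

0x06, 0x00, 0xFF, // Usage Page (Vendor Defined 0xFF00)
0x15, 0x00,       // Logical Minimum (0)
0x26, 0xFF, 0x00, // Logical Maximum (255)
0x75, 0x08,       // Report Size (8 bits per field)
0x95, 0x40,       // Report Count (64 fields)
0x09, 0x01,       // Usage (vendor defined)
0x81, 0x02,       // Input (Data, Variable, Absolute)
0x09, 0x01,       // Usage (vendor defined)
0x91, 0x02,       // Output (Data, Variable, Absolute)

The interrupt endpoint's wMaxPacketSize has to be large enough for the report as well; full-speed interrupt endpoints max out at 64 bytes per packet.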
I'm using an NTAG I2C plus 2k memory tag and am able to successfully perform a FAST_READ for a particular page address range, but just beyond the range I'm getting an error.
iOS
Start address 0x04 and end address 0x46 reads successfully
await cmd([0x3a, 0x04, 0x46]);
while start address 0x04 and end address 0x47 fails with
await cmd([0x3a, 0x04, 0x47]);
Error
input bytes: 3A0C0C
input bytes: 3A0447
[CoreNFC] 00000002 816c6760 -[NFCTagReaderSession transceive:tagUpdate:error:]:771 Error Domain=NFCError Code=100 "Tag connection lost" UserInfo={NSLocalizedDescription=Tag connection lost}
Android
Start address 0x04 and end address 0x49 reads successfully
await cmd([0x3a, 0x04, 0x49]);
while start address 0x04 and end address 0x4b fails with
await cmd([0x3a, 0x04, 0x4b]);
Error
D/NfcService: Transceive start
D/NfcService: Transceive End, Result: 0 mTransceiveSuccess: 1 mTransceiveFail: 0
D/NfcService: Transceive start
D/NfcService: Transceive End, Result: 2 mTransceiveSuccess: 1 mTransceiveFail: 1
D/ReactNativeNfcManager: transceive fail: android.nfc.TagLostException: Tag was lost.
I/ReactNativeJS: Error: transceive fail
Thanks in advance.
From the Tag's datasheet
Remark: The FAST_READ command is able to read out the entire memory of one sector with one command. Nevertheless, the receive buffer of the NFC device must be able to handle the requested amount of data as no chaining is possible
When I do a FAST_READ on a similar Tag type on Android in native code, I call getMaxTransceiveLength to find out how big the buffer is, divide that by 4 and round down to find the maximum number of pages a FAST_READ can return at once, and break it up into multiple FAST_READs if necessary.
Generally the max transceive length is 253 bytes on Android, i.e. 63 pages.
The react-native-nfc-manager API for Android also has getMaxTransceiveLength in its API, so you can do the same calculation of the maximum pages a FAST_READ can do on your hardware.
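The languages differ (your snippets are react-native JavaScript), but the chunking arithmetic is the same everywhere. A rough sketch in C++, where sendFastRead() is just a placeholder for whatever transceive wrapper you use (e.g. your cmd() helper):

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

// Placeholder for your transceive wrapper; here it just prints the frame.
static void sendFastRead(const std::vector<uint8_t>& frame)
{
    for (uint8_t b : frame) std::printf("%02X ", b);
    std::printf("\n");
}

// Split one large FAST_READ into chunks that fit the transceive buffer.
static void fastReadRange(int startPage, int endPage, int maxTransceiveBytes)
{
    int maxPages = maxTransceiveBytes / 4;  // 4 bytes per page, rounded down
    for (int page = startPage; page <= endPage; page += maxPages)
    {
        int chunkEnd = std::min(page + maxPages - 1, endPage);
        sendFastRead({0x3A, (uint8_t)page, (uint8_t)chunkEnd});  // FAST_READ start..end
    }
}

int main()
{
    fastReadRange(0x04, 0x47, 253);  // the failing range, with a 253-byte buffer
}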
I've not done FAST_READs on iOS, but I expect there is a similar limit (it does have an error code for a transceive packet that is too large, but I've not seen a way to ask for the max transceive length before you send a command).
While getMaxTransceiveLength is probably meant for the size of the command you send, that number of bytes should also be receivable before the transceive timeout is hit, as the send and receive data rates are identical.
The transceive timeout can be set, but not read, using the react-native-nfc-manager API.
Again, there are no options for changing any timeout values on iOS, but there is an error code to indicate that communication with the Tag has timed out.
So you could try increasing the timeout value on Android instead of breaking the read up into a number of FAST_READs, but working out how long the timeout should be could be difficult, and it might have a negative impact if you set it too big.
For Android it's probably easier to assume the max send size is safe to receive as well. For iOS, assume a max receive size from your experiments, or handle the error and re-read with a back-off algorithm.
I am using Go for a project and am transmitting data to an embedded device via the serial port (ttyusb). During fast and "large" transfers I've noticed that the transmitted data did not match the values I'd wanted to send.
I've tried various available libraries; in the end they all read and write using syscalls. So I've attached a logic analyzer to see what's going on.
Then I noticed that the data mismatch in the output had a clear pattern: instead of sending my data, the serial port would interleave it with the following values:
0x55, 0x53, 0x42, 0x53, 0x70, 0x02
Followed by zeros (0x00), 22 bytes in total. The total number of bytes transmitted via the serial line did match the number of bytes I wanted to write, so essentially my data was masked with these 22-byte blocks. The weird thing is that I can translate those bytes to ASCII:
0x55, 0x53, 0x42, 0x53, 0x70 = "USBSp"
Now my question is: can't I send arbitrary data (hex values) over the serial port, or are there some control characters I should be aware of that would make the serial port send out identity information or the like?
[EDIT]: Additional Information:
Host is macOS running Go v1.10; tried with go.bug.st/serial.v1 and github.com/tarm/serial, and various communication settings (bit rate etc.)
Target is nRF52840 preview development kit, using Nordic nRF5 SDK v12.3.0_d7731ad (not the newest, I know, but the only one supporting other boards too). Using app_uart_x API
You have to configure the serial port: the settings on both devices (baud rate, start/stop bits, parity, ...) have to match. There are libraries in Go like https://github.com/jacobsa/go-serial that enable standard serial port communication, and with those you can also send any hex values.
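Under the hood these Go libraries do the equivalent of the usual POSIX termios setup. A minimal C++ sketch of that configuration, just to show which settings have to match (the device path and baud rate here are examples, not your actual values):

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

int main()
{
    // Example device path; on macOS it is typically /dev/cu.usbmodem*,
    // on Linux /dev/ttyUSB* or /dev/ttyACM*.
    int fd = open("/dev/cu.usbmodem14101", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                   // raw mode: no line editing or character translation
    cfsetispeed(&tio, B115200);        // baud rate must match the target's UART settings
    cfsetospeed(&tio, B115200);
    tio.c_cflag &= ~(PARENB | CSTOPB); // 8N1: no parity, one stop bit
    tio.c_cflag |= CS8 | CLOCAL | CREAD;
    tcsetattr(fd, TCSANOW, &tio);

    const unsigned char data[] = {0xDE, 0xAD, 0xBE, 0xEF};  // arbitrary binary data is fine
    write(fd, data, sizeof data);
    close(fd);
}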
I cannot say why the "USBSp" is sent, because you did not post any code and gave no information about which libraries you use. Very likely it is not generated by the kernel module but by higher-layer software, because the kernel module used is usb-serial and "USBSp" does not appear in its source code:
https://elixir.bootlin.com/linux/v4.0/source/drivers/usb/serial/usb-serial.c
It is also not in the kernel module ftdi_sio (if you use an FTDI chip):
https://elixir.bootlin.com/linux/v4.0/source/drivers/usb/serial/ftdi_sio.c
and also not in https://elixir.bootlin.com/linux/v3.3/source/drivers/usb/core/urb.c
I'm using the pcsc-sharp library to communicate with an ACR122U reader and read/write information on MIFARE Classic 1K cards.
After getting familiar with the library and the APDU concept, I'm able to use the card's UID as an identifier in my applications.
Now I need to write my own IDs to the card. Therefore I read some manuals regarding NXP's MIFARE (like MF1S70YYX_V1) and also got some information about ISO 7816-4.
I'm aware of the need to do authentication before accessing the cards memory to perform read/write operations and I know the standard Key value.
I downloaded the pcsc-sharp examples from GitHub and ran the Mifare1kTest example. It works, but card.LoadKey in line 36 fails. The response of the APDU command in LoadKey is SW1=99 SW2=0, which I cannot find in any documentation. Commenting out the "throw new Exception" section makes the example work.
My question now is: which values are the correct ones to pass to Card.LoadKey, or rather, which values are correct for the parameters of the APDU command? What is meant by "key number" (sector number, or a sector/block combination)? Is the LoadKey call necessary if the example works?
Your question is broad, but these values should work for you. The code is explained with comments:
var loadKeySuccessful = card.LoadKey(
    KeyStructure.VolatileMemory, // load the key into volatile reader memory
    0x00,                        // first key slot
    new byte[] {0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF} // key (factory default for MIFARE Classic)
);
I am trying to get the manufacturer serial number (not the volume serial number) of USB external hard drives or disks.
[EDIT]
I don't have any code yet for how to do this. The previous method I tried only returned the volume serial number.
You can use WMI to retrieve this information. Hard drive serial numbers are in Win32_PhysicalMedia. I'm not going to take the time to write the code here; if you have any experience querying WMI in VB 6, you should be able to do it without much trouble. Otherwise, search for sample code online. You won't find much of anything specifically about hard drive serial numbers, but you'll find lots of WMI examples.
Be wary of the fact that you won't always get the serial number in the format you're expecting. For example, comments to this article indicate you might get something like:
Serial No. : 4a3532544e464137202020202020202020202020
in which case, you will have to decode the serial number:
By converting to hexadecimal bytes, we get the following (0x20 is the blank character and is trimmed out):
0x4a, 0x35, 0x32, 0x54, 0x4e, 0x46, 0x41, 0x37
Swapping the odd and even bytes gives the following:
0x35, 0x4a, 0x54, 0x32, 0x46, 0x4e, 0x37, 0x41
The above ASCII codes are equal to the serial number string:
"5JT2FN7A"
I'm also not positive that all external/removable hard drives provide this information. Your mileage may vary, but the suggested method does work fine on internal hard drives.
Alternatively, it appears that you can do this using low-level Windows APIs like DeviceIOControl. You'll need to add declarations for the necessary functions to a module in your VB 6 app. This article on Code Project should help get you started; the code is written in native C++ and it's geared for consumption by .NET languages like C#, but I see no difficulty in adapting the code to VB 6.
Operating System: Windows XP 64 bit, SP2.
I have an unusual problem. I am porting some code from 32-bit to 64-bit. The 32-bit code works just fine. But when I call CreateThread() in the 64-bit version, the call fails. This fails in three places: two call CreateThread(), and one calls _beginthreadex(), which calls CreateThread().
All three calls fail with error code 0x3E6, "Invalid access to memory location".
The problem is that all the input parameters are correct:
HANDLE h;
DWORD threadID;
h = CreateThread(0, // default security
0, // default stack size
myThreadFunc, // valid function to call
myParam, // my param
0, // no flags, start thread immediately
&threadID);
All three calls to CreateThread() are made from a DLL I've injected into the target program at the start of the program execution (this is before the program has got to the start of main()/WinMain()). If I call CreateThread() from the target program (same params) via say a menu, it works. Same parameters etc. Bizarre.
If I pass NULL instead of &threadID, it still fails.
If I pass NULL as myParam, it still fails.
I'm not calling CreateThread from inside DllMain(), so that isn't the problem. I'm confused and searching on Google etc hasn't shown any relevant answers.
If anyone has seen this before or has any ideas, please let me know.
Thanks for reading.
ANSWER
Short answer: Stack Frames on x64 need to be 16 byte aligned.
Longer answer:
After much banging my head against the debugger wall and posting responses to the various suggestions (all of which helped in some way, prodding me to try new directions), I started exploring what-ifs about what was on the stack prior to calling CreateThread(). This proved to be a red herring, but it did lead to the solution.
Adding extra data to the stack changes the stack frame alignment. Sooner or later one of the tests gets you to 16 byte stack frame alignment. At that point the code worked. So I retraced my steps and started putting NULL data onto the stack rather than what I thought was the correct values (I had been pushing return addresses to fake up a call frame). It still worked - so the data isn't important, it must be the actual stack addresses.
I quickly realised it was 16-byte alignment for the stack. Previously I was only aware of 8-byte alignment for data. This Microsoft document explains all the alignment requirements.
If the stack frame is not 16-byte aligned on x64, the compiler may put large (8 bytes or more) data on the wrong alignment boundaries when it pushes data onto the stack.
Hence the problem I faced: the hooking code was called with a stack that was not aligned on a 16-byte boundary.
Quick summary of alignment requirements, expressed as size : alignment
1 : 1
2 : 2
4 : 4
8 : 8
10 : 16
16 : 16
Anything larger than 8 bytes is aligned on the next power of 2 boundary.
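As a quick illustration of that table (not from the original post), alignof reports these requirements directly; __m128 is used here as an example of a 16-byte type:

#include <cstdio>
#include <xmmintrin.h>   // __m128, a 16-byte SSE type

int main()
{
    std::printf("int    : size %zu, alignment %zu\n", sizeof(int), alignof(int));       // 4 : 4
    std::printf("double : size %zu, alignment %zu\n", sizeof(double), alignof(double)); // 8 : 8
    std::printf("__m128 : size %zu, alignment %zu\n", sizeof(__m128), alignof(__m128)); // 16 : 16
}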
I think Microsoft's error code is a bit misleading. The initial STATUS_DATATYPE_MISALIGNMENT could be expressed as a STATUS_STACK_MISALIGNMENT which would be more helpful. But then turning STATUS_DATATYPE_MISALIGNMENT into ERROR_NOACCESS - that actually disguises and misleads as to what the problem is. Very unhelpful.
Thank you to everyone that posted suggestions. Even if I disagreed with the suggestions, they prompted me to test in a wide variety of directions (including the ones I disagreed with).
I've written a more detailed description of the problem of datatype misalignment here: 64 bit porting gotcha #1! x64 Datatype misalignment.
The only reason that 64-bit would make a difference is that threading on 64-bit requires 64-bit aligned values. If threadID isn't 64-bit aligned, you could cause this problem.
OK, that idea's not it. Are you sure it's valid to call CreateThread before main/WinMain? It would explain why it works from a menu, because that's after main/WinMain.
In addition, I'd triple-check the lifetime of myParam. CreateThread returns (this I know from experience) long before the function you pass in is called.
Post the thread routine's code (or just a few lines).
It suddenly occurs to me: are you sure that you're injecting your 64-bit code into a 64-bit process? Because if you had a 64-bit CreateThread call and tried to inject that into a 32-bit process running under WOW64, bad things could happen.
Starting to seriously run out of ideas. Does the compiler report any warnings?
Could the bug be due to a bug in the host program, rather than the DLL? There's some other code, such as loading a DLL if you used __declspec(import/export), that runs before main/WinMain. If that DLL's DllMain, for example, had a bug in it...
I ran into this issue today, and I checked every argument fed into _beginthread/CreateThread/NtCreateThread via rohitab's Windows API Monitor v2. Every argument is aligned properly (AFAIK).
So, where does STATUS_DATATYPE_MISALIGNMENT come from?
The first few lines of NtCreateThread validate parameters passed from user mode.
ProbeForReadSmallStructure (ThreadContext, sizeof (CONTEXT), CONTEXT_ALIGN);
for i386
#define CONTEXT_ALIGN (sizeof(ULONG))
for amd64
#define STACK_ALIGN (16UI64)
...
#define CONTEXT_ALIGN STACK_ALIGN
On amd64, if the ThreadContext pointer is not aligned to 16 bytes, NtCreateThread will return STATUS_DATATYPE_MISALIGNMENT.
CreateThread (actually CreateRemoteThread) allocates ThreadContext on the stack, and does nothing special to guarantee the alignment requirement is satisfied. Things will work smoothly if every piece of your code follows the Microsoft x64 calling convention, which unfortunately is not true for me.
PS: The same code may work on newer Windows (say Vista and newer). I didn't check though. I'm facing this issue on Windows Server 2003 R2 x64.
I'm in the business of using parallel threads under Windows for calculations. No funny business, no DLL calls, and certainly no callbacks. The following works in 32-bit Windows. I set up the stack for my calculation, well within the area reserved for my program. All relevant data about the areas and start addresses is contained in a data structure that is passed to CreateThread as parameter 3. The address that is called contains a small assembler routine that uses this data structure. Indeed this routine finds the address to return to on the stack, then the address of the data structure. There is no reason to go far into this. It just works, and it calculates the number of primes below 2,000,000,000 just fine, in one thread, in two threads or in 20 threads.
Now CreateThread in 64 bits doesn't push the address of the data structure. That seems implausible, so I show you the smoking gun, a dump of a debug session. In the subwindow at the bottom right you see the stack, and there is merely the return address, amidst a sea of zeroes.
The mechanism I use to fill in parameters is portable between 32 and 64 bits. No other call exhibits a difference between word sizes. Moreover, why would the code address work but not the data address?
The bottom line: one would expect that CreateThread passes the data parameter on the stack in the same way in 64 bits as in 32 bits, then does a subroutine call. At the assembler level it doesn't work that way. If there are any hidden requirements on e.g. RSP that are automatically fulfilled in C++, that would be very nasty.
P.S. No, there are no 16-byte alignment problems. That lies ages behind me.
Try using _beginthread() or _beginthreadex() instead; you shouldn't be using CreateThread directly.
See this previous question.
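A minimal sketch of the suggested _beginthreadex() usage (the thread routine and its parameter here are placeholders):

#include <windows.h>
#include <process.h>

// The routine must be unsigned __stdcall and take a void* argument.
static unsigned __stdcall myThreadFunc(void* param)
{
    (void)param;  // ... do the work, using param ...
    return 0;
}

int main()
{
    void* myParam = nullptr;   // whatever the thread needs
    unsigned threadID = 0;
    HANDLE h = (HANDLE)_beginthreadex(
        nullptr,       // default security
        0,             // default stack size
        myThreadFunc,  // thread routine
        myParam,       // argument passed to the routine
        0,             // no flags, start immediately
        &threadID);
    WaitForSingleObject(h, INFINITE);  // wait for the thread to finish
    CloseHandle(h);                    // _beginthreadex handles must be closed explicitly
}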