I have an application that reads from a serial port on a PC. When I read using my standalone application, all the expected bytes are received. But when I incorporate the application into HWUT (Hello World Unit Testing), the .exe output generated in the OUT folder contains only a portion of the received data and fills the rest with NULLs. I use the same receive buffer size in both cases. What could be the reason?
When you run the application on the command line, is the output correct?
Does 'fflush(stdout)' help?
How large is the output? Note that HWUT has built-in oversize detection. If you need larger output, respond to "--hwut-info" with
... printf("SIZE-LIMIT: 4711MB;\n"); ...
Change MB to KB for kilobytes or GB for gigabytes; 4711 is your size limit.
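For example, a minimal sketch of a test executable that raises the limit (assuming, per HWUT convention, that the framework invokes the test with "--hwut-info" as the first argument; the title string here is made up):

#include <stdio.h>
#include <string.h>

int main(int argc, char** argv) {
    if (argc > 1 && strcmp(argv[1], "--hwut-info") == 0) {
        printf("Serial Port Read Test;\n");   /* hypothetical test title */
        printf("SIZE-LIMIT: 4711MB;\n");      /* raise the oversize limit */
        return 0;
    }
    /* ... normal run: read from the serial port and print what arrives ... */
    return 0;
}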
Related
I am trying to write a Bash script to receive a file of a known number of bytes (we are talking about 1 MB) of binary data from a serial device connected to my embedded device. These bytes must then be saved to a file for later ops.
I've tried something like (stty raw; cat > blob.bin) < /dev/ttyS0, but I would like cat to stop when it reaches the number of bytes I am expecting, as the script needs to move on to other functions once the file is complete.
The data flow will be started by the external device and will run continuously until the end of the binary file from the external device.
Working on Linux Buster; unfortunately I cannot use Python or other programming languages.
Thanks!
Thanks to the comment from @meuh, I was able to write a working script using dd:
dd ibs=1 count=$PLBYTE iflag=count_bytes if=/dev/ttyS0 of=/.../dump.bin
using the dd operands count and iflag (counting the received bytes while reading 1 byte per block), with $PLBYTE holding the number of expected bytes.
The script now works as expected.
Make sure to set stty to noncanonical mode (-icanon); otherwise data beyond 4096 bytes will be truncated and dd will not receive the expected number of bytes.
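Putting both pieces together, a minimal sketch of the receive step (assuming $PLBYTE is set beforehand; the output path placeholder is kept as in the answer):

stty -F /dev/ttyS0 raw -icanon   # noncanonical mode, so reads are not cut off at 4096 bytes
dd ibs=1 count="$PLBYTE" iflag=count_bytes if=/dev/ttyS0 of=/.../dump.bin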
I'm using SetFilePointer to rewrite the second half of the MBR with something; it's a user-mode application and I opened a handle to PhysicalDrive.
At first I tried to set the size parameter in WriteFile to 256, but WriteFile failed with INVALID_PARAMETER. As it turns out, based on some other questions here, this is because we are forced to write in multiples of the sector size when the handle is to a PhysicalDrive.
Then I tried to set the file pointer to 256 and write 512 bytes. Both calls return no error, but for some unknown reason the data is written from the beginning of the sector, as if SetFilePointer didn't work, even though SetFilePointer succeeds and returns 256.
So my questions are:
Why does the write size have to be a multiple of the sector size when the handle is to a PhysicalDrive? Which other device handles behave like this?
Why is this happening? When I set the file pointer to 256, why does WriteFile still write from the start of the sector?
Isn't this really redundant? Even if I want to change one byte, I have to read the entire sector, change the byte, and then write the whole sector back instead of just writing one byte; that seems like 10 times the overhead! Isn't there a faster way to write a few bytes within a sector?
I think you are mixing up the file system and the storage (block device). The file system sits above the storage device stack. If your code obtains a handle to a file system device, you can write byte by byte. But if you are accessing the storage device stack directly, you can only write sector by sector (or block by block).
Writing directly to a block device is definitely slow, as you discovered. However, in most cases people just talk to file systems. Most file system drivers maintain a cache and use read and write algorithms that improve performance.
I can't comment on the file-pointer-based offset without seeing the actual code, but I would guess the offset is either not sector-aligned or not used at all.
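To make the read-modify-write cycle concrete, here is a minimal sketch (not the asker's code) that rewrites the second half of the first sector. It assumes 512-byte sectors, PhysicalDrive0, and an elevated process; most error handling is trimmed:

#include <windows.h>
#include <string.h>

int main(void) {
    BYTE sector[512];
    DWORD n;
    HANDLE h = CreateFileA("\\\\.\\PhysicalDrive0",
                           GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    /* Both the offset and the size must be multiples of the sector size. */
    SetFilePointer(h, 0, NULL, FILE_BEGIN);
    if (!ReadFile(h, sector, 512, &n, NULL) || n != 512) { CloseHandle(h); return 1; }

    memset(sector + 256, 0xAA, 256);          /* example payload: modify only the second half */

    SetFilePointer(h, 0, NULL, FILE_BEGIN);   /* rewind to the sector start */
    if (!WriteFile(h, sector, 512, &n, NULL)) { CloseHandle(h); return 1; }

    CloseHandle(h);
    return 0;
}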
I'm writing custom firmware for a SparkFun Logomatic V2 that records binary data to a file on a 2GB micro-SD card. The data file size will range from 100 MB to 1 GB.
The format of the binary data is in flux as the board's firmware evolves (it will actually be dynamically reconfigurable at run-time). Rather than create and maintain a separate decoder/converter program for each version of firmware/configuration, I'd much rather make the data files self-converting to CSV format by starting the data file with a Bash script that is written to the data file before data recording starts.
I know how to create a Here Document, but I suspect Bash would be unable to quickly parse and convert a gigabyte of binary data, so I'd like to make the process run much faster by having the script first compile some C code (assume GCC is present and in the path), then run the resulting program, passing the binary data to stdin.
To make the problem more concrete, assume the firmware will create binary data consisting of 4 16-bit integer values: A timestamp (unsigned) followed by 3 accelerometer axes (signed). There is no separator between records (mainly because I'm saturating the SPI interface to the uSD card).
So, I think I need a script with TWO here documents: One for the C code (parameterized by expanded Bash variables), and another for the binary data. Here's where I am so far:
#!/usr/bin/env bash
# Produced by firmware version 0.0.0.0.0.1 alpha
# Configuration for this data run:
header_string="Time, X, Y, Z"
column_count=4
# Create the converter executable
# Use "<<-" to permit code to be indented for readability.
# Allow variable expansion/substitution.
gcc -xc - -o /tmp/convertit <<-THE_C_CODE
#include <stdio.h>
int main (void) {
    short rec[${column_count}];
    // Write ${header_string} to stdout
    printf("${header_string}\n");
    // Read ${column_count} shorts from stdin; stop at EOF
    while (fread(rec, sizeof(short), ${column_count}, stdin) == ${column_count}) {
        // Write ${column_count} comma-delimited values to stdout
        // (format matches the 4-column case: unsigned timestamp, 3 signed axes)
        printf("%hu,%hd,%hd,%hd\n", (unsigned short)rec[0], rec[1], rec[2], rec[3]);
    }
    return 0;
}
THE_C_CODE
# Pass the binary data to the converter
# Hard-quote the Here tag to prevent subsequent expansion/substitution
/tmp/convertit > "./$1.csv" <<'THE_BINARY_DATA'
...
... hundreds of megabytes of semi-random data ...
...
THE_BINARY_DATA
rm /tmp/convertit
exit 0
Does that look about right? I don't yet have a real data file to test this with, but I wanted to verify the idea before going much further.
Will Bash complain if the closing lines are missing? This may happen if data capture terminates unexpectedly because a shock knocks the battery or uSD card loose, or if the firmware borks.
Is there a faster or better method I should consider? For example, I wonder if Bash will be too slow to copy the binary data as fast as the C program can consume it: Should the C program open the data file directly?
TIA,
-BobC
You may want to have a look at makeself. It allows you to turn any .tar.gz archive into a self-extracting file which is platform-independent (something like a shell script that contains a here document). This will allow you to easily distribute your data and decoder. It also allows you to configure a script contained within the archive to be run when the container script is run. This way you can use makeself for packaging, and inside the archive you can put your data files and a decoder written in C or Bash or whatever language you find suitable.
While it is possible to decode binary data using shell tools (e.g. using od), it's very cumbersome and inefficient. I'd recommend using either a C program or Perl, which is also likely to be found on almost any machine (check this page).
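For illustration only, a rough sketch of the od route for the 4-column format above (hypothetical input file blob.bin), which also shows why it is cumbersome: od cannot mix signed and unsigned fields in one pass, so the unsigned timestamp column would still need a separate fix-up:

# 8 bytes per line = one record of four host-byte-order 16-bit values, all printed signed
od -An -v -td2 -w8 blob.bin | awk '{ print $1 "," $2 "," $3 "," $4 }'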
I set my report size to 64 bytes and want to stream single reports (say 2 for now) to the host. My understanding is that there is a ReadFile buffer where these reports can sit. At the host, I have a 64-byte buffer that I use to read single reports. If I send one report from the device, the host reads it fine. If I use two ReadFiles in a loop, the second ReadFile times out. The device is sending two reports. I don't know if they're landing in the ReadFile buffer at the same time, so that when the host reads the endpoint for the first report, the buffer gets purged and I lose the second report. If there are indeed two reports in the ReadFile buffer, do I need to read them both at once? How would I know how many reports are in the buffer?
ReadFile retrieves input reports from the HID driver's ring buffer. Each call returns one report, and the buffer you pass (numberOfBytesToRead) must be at least the input report length.
The respective HID driver implements everything as needed. You need not worry about whether those packets arrive "simultaneously". They won't.
The first packet should tell you the length of the report (i.e. a collection of packets), which in turn should allow you to figure out whether you have the full report yet.
Of course you will have to keep an internal representation of the data from the report, because the packet buffers can be at most 64 bytes in size according to the specification. So to collect a full report you will have to handle that yourself or use the HidD_*/HidP_* routines described in the WDK.
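A minimal sketch of such a read loop (assumptions: the handle came from CreateFile on the HID device interface path, you link against hid.lib, and the helper name is mine; error handling trimmed):

#include <windows.h>
#include <hidsdi.h>
#include <hidpi.h>
#include <stdlib.h>

void read_reports(HANDLE dev, int count) {
    PHIDP_PREPARSED_DATA pp;
    HIDP_CAPS caps;
    DWORD n;

    /* InputReportByteLength already includes the leading report-ID byte,
       so for a 64-byte report it is typically 65. */
    HidD_GetPreparsedData(dev, &pp);
    HidP_GetCaps(pp, &caps);
    HidD_FreePreparsedData(pp);

    BYTE* buf = (BYTE*)malloc(caps.InputReportByteLength);
    for (int i = 0; i < count; ++i) {
        /* Each ReadFile dequeues one input report from the driver's queue. */
        if (!ReadFile(dev, buf, caps.InputReportByteLength, &n, NULL)) break;
        /* buf[0] is the report ID; buf[1..] is the report payload. */
    }
    free(buf);
}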
Hi all,
I am writing a serial port communication program. How do I achieve the following?
Need to know the number of bytes available for reading.
Flushing
Note: I am creating the file with the Overlapped option.
Thanks in advance
~ Johnnie
You are trying to query the number of bytes available first and then read them. The standard way is to just allocate a buffer (say 1000 chars) and call ReadFile(), which tells you how many bytes were actually read (i.e. less than or equal to 1000).
You can flush the serial I/O buffers using FlushFileBuffers() (http://msdn.microsoft.com/en-us/library/aa364439%28VS.85%29.aspx), but since you want asynchronous I/O, you probably only want to do that when you have written to a file and then want to move it (certainly not on every call to WriteFile()).
More info:
http://msdn.microsoft.com/en-us/library/ms810467.aspx
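For the two points above, a minimal sketch using the documented Win32 comm calls (assuming hPort was opened with CreateFile and FILE_FLAG_OVERLAPPED; the helper names are mine):

#include <windows.h>

/* 1. Bytes available for reading: cbInQue counts bytes received by the
   driver but not yet read by the application. */
DWORD bytes_available(HANDLE hPort) {
    COMSTAT st;
    DWORD errors;
    if (!ClearCommError(hPort, &errors, &st)) return 0;
    return st.cbInQue;
}

/* 2. Flushing: discard whatever is pending in the driver's RX/TX queues. */
void discard_buffers(HANDLE hPort) {
    PurgeComm(hPort, PURGE_RXCLEAR | PURGE_TXCLEAR);
}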