Bash script to read N bytes of binary data from serial device and save them to file - bash

I am trying to write a Bash script that receives a known number of bytes (we are talking about 1 MB) of binary data from a serial device connected to my embedded device. These bytes must then be saved to a file for later processing.
I've tried something like (stty raw; cat > blob.bin) < /dev/ttyS0 but I would like cat to stop when it reaches the number of Bytes I am expecting, as the script needs to progress on other functions when the file is complete.
The data flow is started by the external device and runs continuously until the end of the binary file has been sent.
I'm working on Linux (Buster); unfortunately I cannot use Python or other programming languages.
Thanks!

Thanks to the comment from @meuh, I was able to write a working script using dd:
dd ibs=1 count=$PLBYTE iflag=count_bytes if=/dev/ttyS0 of=/.../dump.bin
using the dd operands count and iflag (counting the received bytes and reading one byte per block), with $PLBYTE holding the number of expected bytes.
The script now works as expected.
Make sure to set the terminal to non-canonical mode with stty (-icanon); otherwise data beyond 4096 bytes will be truncated and dd will not receive the expected number of bytes.
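Putting the answer together, a minimal sketch of the receive step as a helper function (the function name, device path, and byte count are placeholders for illustration):

```shell
#!/bin/sh
# receive_blob DEVICE NBYTES OUTFILE
# Hypothetical wrapper around the stty + dd steps above.
receive_blob() {
    dev=$1 nbytes=$2 out=$3
    # raw, non-canonical mode: without -icanon, input is line-buffered
    # and anything past 4096 bytes per "line" is lost
    stty -F "$dev" raw -icanon -echo 2>/dev/null
    # stop after exactly $nbytes bytes, regardless of block boundaries
    dd ibs=1 count="$nbytes" iflag=count_bytes if="$dev" of="$out" 2>/dev/null
}

# usage: receive_blob /dev/ttyS0 "$PLBYTE" dump.bin
```

ibs=1 with iflag=count_bytes makes dd stop at an exact byte count rather than a block count; the byte-at-a-time reads are slow, but still far faster than a serial link delivers data.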

Related

Bash - How to write a file to a specific address on a disk

I am trying to recreate a disk image manually through Bash. I have an empty disk the same size as the original, and I am trying to insert each file at the same address as on the original disk so that both hashes match. However, I can't seem to find the commands to do this. I was advised to use dd or dcfldd, but I can't figure out how from the documentation online. I have a disk image, image.dmg, and the first file is ._.Trashes with an inode of 4 and a size of 4096 bytes.
With dd you may like to use the following operands:
bs=BYTES
    read and write up to BYTES bytes at a time
count=N
    copy only N input blocks
seek=N
    skip N obs-sized blocks at start of output
skip=N
    skip N ibs-sized blocks at start of input
In other words, to copy N bytes at offset X in file A to offset Y in file B, something like the following should do (conv=notrunc stops dd from truncating B after the copied region, which matters when B is an image file rather than a device):
dd bs=1 count=N if=A skip=X of=B seek=Y conv=notrunc
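A quick self-contained illustration with throwaway files (file names and contents are arbitrary):

```shell
#!/bin/sh
# Copy 4 bytes from offset 2 of A into B at offset 6.
printf 'ABCDEFGHIJ' > A    # source
printf '0123456789' > B    # destination

# bs=1 makes count/skip/seek byte-granular;
# conv=notrunc keeps the rest of B intact
dd bs=1 count=4 if=A skip=2 of=B seek=6 conv=notrunc 2>/dev/null

cat B    # -> 012345CDEF
```

bs=1 is slow for large copies; GNU dd also accepts iflag=skip_bytes,count_bytes and oflag=seek_bytes, which let you keep a larger block size while still addressing byte offsets.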

HWUT - exe.txt in OUT file not populating all the read data

I have an application that reads from a serial port on a PC. When I read using my standalone application, all the expected bytes are received. But when I incorporate the application into HWUT (Hello Worldler's Unit Test), the .exe output generated in the OUT folder contains only a portion of the received data and fills the rest with NULL. I use the same receive buffer size in both cases. What could be the reason?
When you run the application on the command line, is the output correct?
Does fflush(stdout) help?
How large is the output? Note that HWUT has built-in oversize detection. If you need larger output, respond to "--hwut-info" with
... printf("SIZE-LIMIT: 4711MB;\n"); ...
Change MB to KB for kilobytes or GB for gigabytes. 4711 is your size limit.

Test the limit of read/write cycles of a USB flash drive

I read a wiki article (http://en.wikipedia.org/wiki/NAND_flash#Write_endurance) that says flash storage has a limited number of write/erase cycles (for NAND flash this limit is about 10K-100K).
My question is: is it possible to test/find out this limit on my PC in a relatively short time (a few hours or a few days)?
I wrote a simple script (/dev/sdb is the flash disk), but since the flash controller does wear-levelling, I think this script may not work.
echo "0011223344556677" | xxd -r -p > a.bin
for ((n=0; n<1000000; n++)); do
    dd if=a.bin of=/dev/sdb
done
It may be easier to do something like this to fill the device with random data:
dd if=/dev/urandom of=/dev/DEVICE bs=1M
Over and over again until the device fails.
If you want to check for single bit errors you could do the following:
1. Fill the disk with zeroes: dd if=/dev/zero of=/dev/DEVICE bs=1M
2. Get the SHA-1 sum of the whole disk: dd if=/dev/DEVICE | sha1sum
3. Fill the disk with random data: dd if=/dev/urandom of=/dev/DEVICE bs=1M
4. Repeat until the sum from step 2 changes or the disk stops working
That should probably be wrapped up in a bash script to save you some time.
NOTE: I used /dev/DEVICE so nobody would accidentally copy and paste these snippets without thinking. You'll need to change it to your specific device and be very careful that you get it right!
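The steps above wrapped up as a sketch (the function name and arguments are made up for illustration; running it against a real device destroys everything on it):

```shell
#!/bin/sh
# endurance_test DEVICE SIZE_MB PASSES
# DANGER: destroys all data on DEVICE. One pass = zero-fill,
# checksum the zeroed state, then overwrite with random data.
endurance_test() {
    dev=$1 mb=$2 passes=$3 ref="" p=1
    while [ "$p" -le "$passes" ]; do
        # 1. fill with zeroes
        dd if=/dev/zero of="$dev" bs=1M count="$mb" conv=notrunc 2>/dev/null
        # 2. checksum the zeroed state
        sum=$(dd if="$dev" bs=1M count="$mb" 2>/dev/null | sha1sum)
        # first pass establishes the reference checksum
        [ -z "$ref" ] && ref=$sum
        if [ "$sum" != "$ref" ]; then
            echo "FAIL: checksum changed on pass $p"
            return 1
        fi
        # 3. overwrite with random data to consume a write/erase cycle
        dd if=/dev/urandom of="$dev" bs=1M count="$mb" conv=notrunc 2>/dev/null
        p=$((p + 1))
    done
    echo "OK: $passes passes without corruption"
}

# e.g. endurance_test /dev/DEVICE 1024 1000000
```

Keep in mind the original caveat: wear-levelling in the controller means the pass count you measure is not a direct read of the per-cell endurance.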

Wrapping a binary data file to self-convert to CSV?

I'm writing custom firmware for a SparkFun Logomatic V2 that records binary data to a file on a 2GB micro-SD card. The data file size will range from 100 MB to 1 GB.
The format of the binary data is in flux as the board's firmware evolves (it will actually be dynamically reconfigurable at run-time). Rather than create and maintain a separate decoder/converter program for each version of firmware/configuration, I'd much rather make the data files self-converting to CSV format by starting the data file with a Bash script that is written to the data file before data recording starts.
I know how to create a Here Document, but I suspect Bash would be unable to quickly parse and convert a gigabyte of binary data, so I'd like to make the process run much faster by having the script first compile some C code (assume GCC is present and in the path), then run the resulting program, passing the binary data to stdin.
To make the problem more concrete, assume the firmware will create binary data consisting of 4 16-bit integer values: A timestamp (unsigned) followed by 3 accelerometer axes (signed). There is no separator between records (mainly because I'm saturating the SPI interface to the uSD card).
So, I think I need a script with TWO here documents: One for the C code (parameterized by expanded Bash variables), and another for the binary data. Here's where I am so far:
#!/usr/bin/env bash
# Produced by firmware version 0.0.0.0.0.1 alpha
# Configuration for this data run:
header_string="Time, X, Y, Z"
column_count=4
# Create the converter executable
# Use "<<-" to permit code to be indented for readability.
# Allow variable expansion/substitution.
gcc -xc - -o /tmp/convertit <<-THE_C_CODE
#include <stdio.h>
int main (int argc, char **argv) {
// Write ${header_string} to stdout
while (1) {
// Read ${column_count} shorts from stdin
// Break if EOF
// Write ${column_count} comma-delimited values to stdout
}
// Close stdout
return 0;
}
THE_C_CODE
# Pass the binary data to the converter
# Hard-quote the Here tag to prevent subsequent expansion/substitution
/tmp/convertit > "./$1.csv" <<'THE_BINARY_DATA'
...
... hundreds of megabytes of semi-random data ...
...
THE_BINARY_DATA
rm /tmp/convertit
exit 0
Does that look about right? I don't yet have a real data file to test this with, but I wanted to verify the idea before going much further.
Will Bash complain if the closing lines are missing? This may happen if data capture terminates unexpectedly due to a shock knocking loose the battery or uSD card. Or if the firmware borks.
Is there a faster or better method I should consider? For example, I wonder if Bash will be too slow to copy the binary data as fast as the C program can consume it: Should the C program open the data file directly?
TIA,
-BobC
You may want to have a look at makeself. It allows you to change any .tar.gz archive into a self-extracting file which is platform independent (something like a shell script that contains a here document). This will allow you to easily distribute your data and decoder. It also allows you to configure a script contained within the archive to be run when the container script is run. This way you can use makeself for packaging and inside the archive you can put your data files and decoder written in C or bash or whatever language you find suitable.
While it is possible to decode binary data using shell tools (e.g. using od), it's very cumbersome and inefficient. I'd recommend using either a C program or Perl, which is also likely to be found on almost any machine (check this page).
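For reference, the od route for the record format in the question (four 16-bit values per record) can be sketched as below. The function name is made up, and the warts are exactly the point: od -td2 prints everything signed (so an unsigned timestamp above 32767 would come out negative), and the byte order is whatever the host uses.

```shell
#!/bin/sh
# bin2csv FILE -- hypothetical helper: one CSV line per 8-byte record.
# Assumes a little-endian host and treats all four fields as signed.
bin2csv() {
    # -An: no address column; -v: no '*' line squeezing;
    # -td2: signed decimal shorts; -w8: 8 bytes (one record) per line
    od -An -v -td2 -w8 "$1" | awk '{ OFS=","; $1=$1; print }'
}
```

The awk stage just collapses od's whitespace-separated columns into comma-separated fields.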

serial port communication

Hi all,
I am writing a serial port communication program. How do I achieve the following?
Know the number of bytes available for reading.
Flush the buffers.
Note: I am creating the file with the Overlapped option.
Thanks in advance,
~ Johnnie
You are trying to query the number of bytes available first and then read them. The standard way is to just allocate a buffer (say, 1000 chars) and call ReadComm(), which tells you how many bytes were actually read (i.e. less than or equal to 1000).
You can flush the serial I/O buffers using FlushFileBuffers() (http://msdn.microsoft.com/en-us/library/aa364439%28VS.85%29.aspx), but since you want asynchronous I/O, you probably only want to do that when you have written to a file and then want to move the file (certainly not on every call to WriteComm()).
More info:
http://msdn.microsoft.com/en-us/library/ms810467.aspx
