Reimplementing QSerialPort canReadLine() and readLine() methods

I am trying to receive custom-framed raw bytes via QSerialPort, using the byte value 0 as the delimiter, in asynchronous mode (using signals instead of polling).
The inconvenience is that QSerialPort doesn't seem to have a method that can read serial data until a specified byte value is encountered, e.g. read_until(delimiter) in pySerial.
I was wondering if it's possible to reimplement QSerialPort's readLine() method in Python so that it reads until a 0 byte value is encountered instead of '\n'. Similarly, it would be handy to reimplement canReadLine() as well.
I know that it is possible to use the readAll() method and then parse the data for the delimiter value, but that approach likely means more code and lower efficiency. I would like the overhead of processing the frames to be as low as possible (the baud rate and the number of incoming bytes are large). However, if you know a fast way to do it, I would like to take a look.

I ended up parsing the frames myself, and it seems to work well enough.
Below is a method extracted from my script which receives and parses serial data asynchronously. self.serial_buffer is a QByteArray initialized inside the custom class's __init__ method. You can also use a globally declared bytearray, but then you will have to search for your delimiter value in another way.
@pyqtSlot()
def receive(self):
    self.serial_buffer += self.serial.readAll()  # Append everything waiting in the serial buffer
    start_pos = 0
    while True:
        del_pos = self.serial_buffer.indexOf(b'\x00', start_pos)  # b'\x00' is the delimiter byte
        if del_pos == -1:
            break  # indexOf() returns -1 when no delimiter is left
        frame = self.serial_buffer[start_pos:del_pos]  # Copy data up to the delimiter
        start_pos = del_pos + 1  # Resume the search after the old delimiter
        self.process_frame(frame)  # Process the frame
    self.serial_buffer = self.serial_buffer[start_pos:]  # Keep only the unparsed remainder
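If you'd rather keep the canReadLine()/readLine() shape asked about above, the same buffer logic can be wrapped in a pair of helper methods on the custom class. Below is a minimal sketch of that idea; canReadFrame() and readFrame() are made-up names for this example, not QSerialPort API:
def canReadFrame(self):
    # A complete frame is available once the delimiter byte is in the buffer
    return self.serial_buffer.indexOf(b'\x00') != -1

def readFrame(self):
    # Return the next frame without its delimiter, or None if no complete frame yet
    del_pos = self.serial_buffer.indexOf(b'\x00')
    if del_pos == -1:
        return None
    frame = self.serial_buffer[:del_pos]
    self.serial_buffer = self.serial_buffer[del_pos + 1:]
    return frame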

Related

Read FT2332H FIFO Data

I tried to read the FIFO buffer on the FT2332H and it was successful, but the data comes in a format that makes it difficult to process or plot. Here is an example; I am using the ftd2xx library:
while True:
    rxn = d.getQueueStatus()
    if rxn > 1024:
        print(bytearray(d.read(1024)))
The output is as below. Each 4 is a byte received from the buffer. How do I get at each byte individually?
bytearray(b'4444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444')
This is the result without bytearray:
print((d.read(1024)))
b'4444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444444'
Assuming this is Python:
You can index each individual byte in the byte array with []:
my_buffer = bytearray(d.read(1024))
Now my_buffer[0] holds the value of the first byte in your byte array, represented as an integer in the range 0-255. To build a character array / string, you will additionally need to convert each integer to a character; ASCII is the typical mapping between an integer value and its character representation. The order of the bytes in your FIFO buffer depends on what is putting bytes into the FIFO on the non-USB side of the FT232. Many devices send data most-significant byte first, but you should verify this against that device's data sheet.
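For instance, here is a small sketch of pulling individual values out of the buffer (assuming, as the output above suggests, that the device is streaming ASCII characters):
my_buffer = bytearray(d.read(1024))   # mutable buffer of raw bytes

first_value = my_buffer[0]            # an integer 0-255; 52 here, since ASCII '4' is 52
first_char = chr(first_value)         # its character representation: '4'
values = list(my_buffer)              # every byte as an integer
text = my_buffer.decode("ascii")      # the whole buffer as a string, if it really is ASCII text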

How to continuously read a binary file in Crystal and get Bytes out of it?

Reading binary files in Crystal is supposed to be done with Bytes.new(size) and File#read, but... what if you don't know how many bytes you'll read in advance, and you want to keep reading chunks at a time?
Here's an example, reading 3 chunks from an imaginary file format that specifies the length of data chunks with an initial byte:
file = File.open "something.bin", "rb"
The following doesn't work, since Bytes can't be concatenated (as it's really a Slice(UInt8), and slices can't be concatenated):
data = Bytes.new(0)
3.times do
  bytes_to_read = file.read_byte.not_nil!
  chunk = Bytes.new(bytes_to_read)
  file.read(chunk)
  data += chunk
end
The best thing I've come up with is to use an Array(UInt8) instead of Bytes, and call to_a on all the bytes read:
data = [] of UInt8
3.times do
  bytes_to_read = file.read_byte.not_nil!
  chunk = Bytes.new(bytes_to_read)
  file.read(chunk)
  data += chunk.to_a
end
However, there's then seemingly no way to turn that back into Bytes (Array#to_slice was removed), which is needed for many applications and recommended by the authors to be the type of all binary data.
So... how do I keep reading from a file, concatenating to the end of previous data, and get Bytes out of it?
One solution would be to copy the data into a resized Bytes on every iteration. You could also collect the Bytes instances in a container (e.g. an Array) and merge them at the end, but either way that means additional copy operations.
The best solution would probably be to use a buffer that is large enough to fit all the data that could possibly be read, or at least is very likely to (and resize it if necessary).
If the maximum size is just 3 * 255 bytes, this is a no-brainer. You can size the buffer down at the end if it turns out too large.
data = Bytes.new(3 * UInt8::MAX)
bytes_read = 0
3.times do
  bytes_to_read = file.read_byte.not_nil!
  file.read_fully(data[bytes_read, bytes_to_read]) # fill exactly this sub-slice
  bytes_read += bytes_to_read
end
# resize to the actual size at the end:
data = data[0, bytes_read]
Note: As the data format tells you how many bytes to read, you should use read_fully instead of read, which would silently read fewer bytes than requested if fewer are available.
EDIT: Since the number of chunks, and thus the maximum size, is not known in advance (per the comments), you should use a dynamically resizing buffer. This is easy to implement with IO::Memory, which takes care of growing the buffer as necessary.
io = IO::Memory.new
loop do
  bytes_to_read = file.read_byte
  break if bytes_to_read.nil?
  IO.copy(file, io, bytes_to_read)
end
data = io.to_slice

Concatenate multiple input sources into a single IO object in Ruby

I have a collection of input sources -- strings, files, etc. -- that I want to concatenate and pass to an API that expects to read from a single IO object. The files can be quite large (~10 GB), so reading them into memory and concatenating them into a single string isn't an option. (I also considered using IO.pipe, but spinning up extra threads or processes seems like overkill.)
Is there an existing library class for this in Ruby, cf. Java's SequenceInputStream? If not, is there some other way to do it straightforwardly and idiomatically?
Unfortunately, it's writing to a socket with IO.copy_stream.
For IO::copy_stream(src, ...) to work, the « IO-like object for src should have readpartial or read method »
So, let's try to create a class that can read over a sequence of IO objects; here's the spec of IO#read:
read(maxlen = nil) → string or nil
read(maxlen = nil, out_string) → out_string or nil
Reads bytes from the stream (in binary mode):
If maxlen is nil, reads all bytes.
Otherwise reads maxlen bytes, if available.
Otherwise reads all bytes.
Returns a string (either a new string or the given out_string) containing the bytes read. The encoding of the string depends on both maxlen and out_string:
maxlen is nil: uses internal encoding of self (regardless of whether out_string was given).
maxlen not nil:
out_string given: encoding of out_string not modified.
out_string not given: ASCII-8BIT is used.
class ConcatIO
  def initialize(*io)
    @array = io
    @index = 0
  end

  def read(maxlen = nil, out_string = (maxlen.nil? ? "" : String.new))
    out_string.clear
    if maxlen.nil?
      # Read everything remaining from all the underlying IO objects
      if @index < @array.count
        @array[@index..-1].each { |io| out_string.concat(io.read) }
        @index = @array.count
      end
    elsif maxlen >= 0
      # Keep reading, advancing to the next IO whenever the current one is exhausted
      while out_string.bytesize < maxlen && @index < @array.count
        bytes = @array[@index].read(maxlen - out_string.bytesize)
        if bytes.nil?
          @index += 1
        else
          out_string.concat(bytes)
        end
      end
      return nil if maxlen > 0 && out_string.empty? # EOF: read(maxlen) returns nil, not ""
    end
    out_string
  end
end
Note: the code is inaccurate with regard to the encoding part of the spec.
Now let's use this class with IO::copy_stream:
require 'stringio'
io1 = StringIO.new( "1")
io2 = StringIO.new( "22")
io3 = StringIO.new("333")
ioN = StringIO.new( "\n")
catio = ConcatIO.new(io1,io2,io3,ioN)
print catio.read(2), "\n"
IO.copy_stream(catio,STDOUT)
And it works!
12
2333
Aside
In fact there's a multi_io gem for concatenating multiple IO sources into a single IO object; the problem is that its methods don't follow the specs of the IO class. For example, it doesn't work with IO::copy_stream.
Additionally, even if you're able to use ARGF (i.e. you're only handling input files whose names are stored in ARGV), you still have to be cautious: there are slight differences between some of ARGF's and IO's methods, so it's not 100% safe to feed ARGF to an API that needs to read from an IO object.
Conclusion
Because there's no gem nor core class for it, the only sensible work-around is to determine which IO methods the API requires and write a class that implements them. That isn't too demanding, as long as you don't have to implement the whole IO interface. Furthermore, you already have a working read method in my answer 😉.
We can try using StringIO to concatenate multiple input sources and pass them to an API that expects to read from a single IO object. Note that this buffers all the content in memory, so it only suits inputs small enough to fit:
require 'stringio'
# Create a StringIO object from the first input source
input1 = StringIO.new("First input source")
# Create a StringIO object from the second input source
input2 = StringIO.new("Second input source")
# Concatenate the contents into a single IO object (StringIO objects themselves can't be added)
inputs = StringIO.new(input1.string + input2.string)
# Pass the concatenated input sources to the API
api.process(inputs)

Algorithm for fragmenting data into packets

Let's just say I want to fragment some data units into packets (max size per packet is, let's say, 1024 bytes). Each data unit can be of variable size, say:
a = 20 bytes
b = 1000 bytes
c = 10 bytes
d = 800 bytes
Can anyone please suggest an efficient algorithm to create packets from such variable-sized data units while efficiently utilizing the bandwidth? I cannot split the individual data units into bytes; each one goes whole into a packet.
EDIT: The ordering of data units is of no concern!
There are several different ways, depending on your requirements and how much time you want to spend on it. The general problem, as @amit mentioned in the comments, is NP-hard. But you can get some improvement with a few simple changes.
Before we go there, are you sure you really need to do this? Most networking layers have a packet-sized (or larger) buffer. When you write to the network, it puts your data in that buffer. If you don't fill the buffer completely, the code will delay briefly before sending. If you add more data during that delay, the new data is added to the buffer. The buffer is sent once it fills, or after the delay timeout expires.
So if you have a loop that writes one byte at a time to the network, it's not like you'll be creating a large number of one-byte packets.
On the receiving side, the lowest level networking layer receives an entire packet, but there's no guarantee that your call to receive the data will get the entire packet. That is, the sender might send an 800 byte packet, but on the receiving end the first call to read might only return 50 or 273 bytes.
This depends, of course, at what level you're reading the data. If you're talking about something like Java or .NET, where your interface to the network stack is through a socket, you almost certainly can't guarantee that a call to socket.Read() will return an entire packet.
Now, if you can guarantee that every call to read returns an entire packet, then the easiest way to pack things would be to serialize everything into one big buffer and then send it out in multiple 1,024-byte packets. You'll want to create a header at the front of the first packet that says how many total bytes will be sent, so the receiver knows what to expect. The result will be a bunch of 1,024-byte packets, potentially followed by a final packet that is somewhat smaller.
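As a rough illustration of that scheme, here is a short Python sketch (the 4-byte big-endian length header and the packetize name are assumptions for the example):
import struct

PACKET_SIZE = 1024

def packetize(units):
    # Serialize all data units into one buffer, prefixed with the total payload length
    payload = b"".join(units)
    buffer = struct.pack(">I", len(payload)) + payload
    # Slice the buffer into fixed-size packets; the last one may be shorter
    return [buffer[i:i + PACKET_SIZE] for i in range(0, len(buffer), PACKET_SIZE)]

packets = packetize([b"a" * 20, b"b" * 1000, b"c" * 10, b"d" * 800])
# 4 header bytes + 1830 payload bytes -> two packets of 1024 and 810 bytes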
If you want to make sure that a data object is fully contained within a single packet, then you have to do something like:
add a to buffer
if remaining buffer < size of b
    send buffer
    clear buffer
add b to buffer
if remaining buffer < size of c
    send buffer
    clear buffer
add c to buffer
... etc ...
Here's some simple JavaScript pseudo-code. The packets will stay ordered and the bandwidth will be used efficiently.
const PACKET_SIZE = 1024;
let packets = [];
let currentPacket = [];

function write(data) {
    const len = currentPacket.length + data.length;
    if (len < PACKET_SIZE) {            // still room left: keep accumulating
        currentPacket = currentPacket.concat(data);
    } else if (len === PACKET_SIZE) {   // exact fit: emit the combined packet
        packets.push(currentPacket.concat(data));
        currentPacket = [];
    } else {                            // would overflow: emit what we have, start over with data
        packets.push(currentPacket);
        currentPacket = data;
    }
}

function flush() {
    // Emit whatever is left over as a final, possibly partial packet
    if (currentPacket.length > 0) {
        packets.push(currentPacket);
        currentPacket = [];
    }
}

write(data20bytes);
write(data1000bytes);
write(data10bytes);
write(data800bytes);
flush();
EDIT: Since you have all of the data chunks up front and you want to package them optimally without regard to order (bin packing), you're left with either trying every permutation of the chunks for an exact answer, or compromising with a best-guess/first-fit type algorithm.
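For example, here is a minimal Python sketch of a first-fit decreasing heuristic (a reasonable best guess, not an exact bin-packing solver):
PACKET_SIZE = 1024

def first_fit_decreasing(chunks):
    packets = []  # each packet is a list of chunks whose sizes sum to <= PACKET_SIZE
    for chunk in sorted(chunks, key=len, reverse=True):
        for packet in packets:
            if sum(len(c) for c in packet) + len(chunk) <= PACKET_SIZE:
                packet.append(chunk)  # first packet with enough room wins
                break
        else:
            packets.append([chunk])   # nothing fits, so open a new packet
    return packets

packets = first_fit_decreasing([b"a" * 20, b"b" * 1000, b"c" * 10, b"d" * 800])
# -> [[1000-byte, 20-byte], [800-byte, 10-byte]]: two well-filled packets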

What is the appropriate value for the ExtAudioFileWrite inNumberFrames parameter?

I'm working on a FLAC-to-ALAC transcoder and trying to use ExtAudioFile to write to ALAC. I am using the FLAC library's callback-based system to read in the FLAC file, which means that every frame in the FLAC file results in a function call. Within that call, I set up my buffers and call the ExtAudioFileWrite code as follows:
AudioBufferList *fillBufList;
fillBufList = malloc(sizeof *fillBufList + (frame->header.channels - 1) * sizeof fillBufList->mBuffers[0]);
fillBufList->mNumberBuffers = frame->header.channels;
for (int i = 0; i < fillBufList->mNumberBuffers; i++) {
    fillBufList->mBuffers[i].mNumberChannels = 1; // non-interleaved
    fillBufList->mBuffers[i].mDataByteSize = frame->header.blocksize * frame->header.bits_per_sample / 8;
    fillBufList->mBuffers[i].mData = (void *)(buffer[i]);
    NSLog(@"%i", fillBufList->mBuffers[i].mDataByteSize);
}
OSStatus err = ExtAudioFileWrite(outFile, 1, fillBufList);
Now, the number 1 in the final line is something of a magic number that I've chosen because I figured that one frame in the FLAC file would probably correspond to one frame in the corresponding ALAC file, but this seems not to be the case. Every call to ExtAudioFileWrite returns an error value of -50 (error in user parameter list). The obvious culprit is the value I'm providing for the frame parameter.
So I ask, then, what value should I be providing?
Or am I barking up the wrong tree?
(Side note: I suspected that, despite the param-related error value, the true problem could be the buffer setup, so I tried mallocing a zeroed-out dummy buffer just to see what would happen. Same error.)
For ExtAudioFileWrite, the number of frames is the number of sample frames you want to write. If you're working with 32-bit float interleaved data, that is mDataByteSize / (sizeof(Float32) * mNumberChannels); in your snippet it would presumably be frame->header.blocksize. It shouldn't be 1 unless you meant to write only one sample frame, and if you're writing a compressed format, it expects a certain number of samples, I think. It's also possible that the -50 error is a different issue.
One thing to check is that ExtAudioFile expects only one buffer. So your fillBufList->mNumberBuffers should always equal 1, and if you need stereo, you need to interleave the audio data so that mBuffers[0].mNumberChannels equals 2.
