I'm using a SAML21 board to accept some data over a serial connection and, at the moment, just mirror it to a serial port on a computer. However, this data is 6 bytes at ~250 Hz (it was closer to 3 kHz before). As far as I can tell I'm tracking the start and end bytes correctly, but my columnar alignment gets out of whack on occasion in RealTerm.
I have it set up for 6 bytes in single mode, so all columns should be presenting the same bytes up and down. However, over time, as I increase the rate at which I mirror (I am still receiving the data at a fixed rate), the first column's byte tends to drift.
I have not used RealTerm at speeds this high before, so I am not aware of its limitations.
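For reference, the framing I have in mind is roughly the following; a minimal sketch in C, where START_BYTE, END_BYTE and mirror_to_pc() are placeholders for my actual values and UART send routine, not a SAML21 API:

    #include <stdint.h>

    #define FRAME_LEN  6
    #define START_BYTE 0xAA   /* placeholder value */
    #define END_BYTE   0x55   /* placeholder value */

    void mirror_to_pc(const uint8_t *buf, int len);   /* forwards to the PC UART (placeholder) */

    static uint8_t frame[FRAME_LEN];
    static int idx;

    /* Called for every byte received from the incoming UART. */
    void on_rx_byte(uint8_t b)
    {
        if (idx == 0 && b != START_BYTE)
            return;                       /* wait for the start of a frame */

        frame[idx++] = b;

        if (idx == FRAME_LEN) {
            if (frame[FRAME_LEN - 1] == END_BYTE)
                mirror_to_pc(frame, FRAME_LEN);   /* forward only complete frames */
            idx = 0;                      /* resync on the next start byte either way */
        }
    }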
I have the following setup (it's basically a pair of TWS earbuds and a smartphone):
Two audio sink devices (the buds), both connected to the same source device. One of them is the primary (responsible for handling the connection), the other is the secondary (which simply sniffs the data).
The source device transmits a stream of encoded data, and the sink devices need to decode and play it in sync with each other. The problem is that there is a considerable delay between the two receivers (~5 ms at 300 kbps, ~10 ms at 600 kbps and at 900 kbps).
The synchronisation mechanism that is already implemented simply doesn't seem to work, so my only option seems to be to implement another one.
It is possible to send messages between the buds (but because this uses the same radio interface as the sink-to-source communication, only a small number of bytes at a relatively long interval can be transferred, i.e. 48 bytes per 300 ms, maybe a few times more, but probably not by much) and to control the decoder library.
I tried the following simple algorithm: every 50 milliseconds the secondary sends a message to the primary containing the number of packets it has decoded. The primary receives it and updates the decoder state accordingly: the decoder on the primary only decodes if the difference between the number of frames it has already decoded and the number received from the peer is between 0 and 100 (every frame is 2.(6) ms), and the cycle continues.
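In rough code form, the scheme I tried looks like this (a sketch only; send_to_peer() and decode_next_frame() stand in for whatever the real SDK provides, and the counters are simplified):

    #include <stdint.h>

    #define MAX_LEAD_FRAMES 100          /* allowed lead of the primary, in frames (2.(6) ms each) */

    void send_to_peer(const void *msg, uint32_t len);   /* placeholder radio call   */
    void decode_next_frame(void);                       /* placeholder decoder call */

    static uint32_t my_decoded_frames;    /* frames decoded on this device   */
    static uint32_t peer_decoded_frames;  /* last count reported by the peer */

    /* Secondary: runs every 50 ms and reports its decode progress. */
    void secondary_tick(void)
    {
        send_to_peer(&my_decoded_frames, sizeof my_decoded_frames);
    }

    /* Primary: called when a report arrives from the secondary. */
    void on_peer_report(uint32_t peer_count)
    {
        peer_decoded_frames = peer_count;
    }

    /* Primary: called once per frame period; decode only while 0..100 frames ahead. */
    void primary_decode_step(void)
    {
        int32_t lead = (int32_t)(my_decoded_frames - peer_decoded_frames);

        if (lead >= 0 && lead <= MAX_LEAD_FRAMES) {
            decode_next_frame();
            my_decoded_frames++;
        }
    }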
This actually only made things worse: the latency is now about 200 ms or even higher.
Is there something that could be done to improve my synchronization method, or would I be better off using something else? If so, what would work best in such a case? Fixing the existing implementation would probably be the best way, but it appears to be closed-source, so I cannot modify it.
I'm using an STM32F411 with the USB CDC library, and the maximum speed for this library is ~1 Mb/s.
I'm creating a project where I have 8 microphones connected to ADC lines (this part works fine). I need a 16-bit signal, so I increase the resolution by summing groups of 16 samples from one line (the ADC only gives a 12-bit signal). I need 96k 16-bit samples per line, so that is 0.768M samples for all 8 lines. This data needs 12000 Kb of space, but the STM32 has only 128 Kb of SRAM, so I decided to send roughly 120 packets of 100 Kb of data each second.
The conclusion is that I need ~11.72 Mb/s to send this.
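The arithmetic behind that number (assuming Kb/Mb mean kilobits/megabits, with 1 Kbit = 1024 bits, and the 96k samples per line are per second):

    #include <stdio.h>

    /* Back-of-the-envelope check of the required bandwidth. */
    int main(void)
    {
        const long samples_per_line = 96000;   /* 96k samples per line, per second */
        const long lines            = 8;
        const long bits_per_sample  = 16;

        long bps = samples_per_line * lines * bits_per_sample;   /* 12,288,000 bit/s */

        printf("%ld bit/s = %ld Kbit/s = %.2f Mbit/s\n",
               bps, bps / 1024, bps / (1024.0 * 1024.0));        /* 12000 Kbit/s ~ 11.72 Mbit/s */
        return 0;
    }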
The problem is that I'm unable to do that, because USB CDC limits me to ~1 Mb/s.
The question is how to increase the USB speed to 12 Mb/s on the STM32F4. I need some pointer or library.
Or should I maybe set up an "audio device" in CubeMX instead?
If a small 'b' means bytes in your question, the answer is: it is not possible, as your micro has FS USB, whose maximum speed is 12 Mbit per second.
If it means bits, your 1 Mb (bit) speed assumption is wrong. But you will not reach 12 Mbit of payload transfer either.
You may try to write your own class (only if 'b' means bits), but I am afraid you will not find a ready-made library. You will also need to write the device driver on the host computer.
This is just a question out of curiosity. The Windows function GetIfEntry2 returns its output in a MIB_IF_ROW2 structure. Some members of the structure, like InOctets, OutOctets, etc., are of type ULONG64. This Microsoft page says that the maximum value of ULONG64 is 18446744073709551615. Since InOctets, OutOctets, etc. keep increasing, what happens if the value exceeds that limit? Will the call still return, or fail?
It would never happen. That many bytes is equal to ~17179869184 GiB, and to transfer that much data, you'd need to send or receive one full gigabyte per second for more than 540 years.
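A quick sanity check of that figure, assuming a sustained 1 GiB/s:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        const uint64_t max_octets = UINT64_MAX;             /* 2^64 - 1 bytes        */
        const uint64_t gib        = 1024ULL * 1024 * 1024;  /* bytes per GiB         */
        const uint64_t secs_year  = 365ULL * 24 * 3600;     /* seconds per year      */

        uint64_t total_gib = max_octets / gib;              /* ~17179869184 GiB      */
        uint64_t years     = total_gib / secs_year;         /* ~544 years at 1 GiB/s */

        printf("%llu GiB, ~%llu years at 1 GiB/s\n",
               (unsigned long long)total_gib, (unsigned long long)years);
        return 0;
    }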
But if that did somehow happen, the overflow behavior would depend entirely on the network driver, since that's what reports the data (according to this NDIS summary). More than likely, it would just flip over to zero and start counting again. GetIfEntry2 just gives you whatever data it receives from the driver.
I am reading several hundred bytes from a DESFire card using APDU commands.
The data application is authenticated, and the responses are MAC'ed.
I submit a series of READ_DATA commands (0xBD), each retrieving 54 bytes + MAC, increasing the read offset for each command.
Will this operation go much quicker if I use a long READ with ADDITIONAL_FRAME (AF) instead of many sequential reads?
I understand that a simple AF is 1 byte vs 8 bytes for a full READ DATA command, thus reducing the number of bytes transferred by ~10%.
But will use of AF give additional performance benefits, for example because of less processing needed by the card?
I am asking this because I am getting only a ~220 kbit/s effective transfer rate when the theoretical limit is 424 kbit/s. See my question on this here.
I modified my reads to use ADDITIONAL FRAME.
This reduced the total bytes sent + received from 1628 to 1549 bytes, a 5% reduction.
The time used by transceive() was reduced from 602 ms to 593 ms, a 1.5% reduction.
The conclusion is that use of AF gives no additional performance benefit beyond the reduced time for the bytes transferred.
The finding also indicates that, since the time was reduced by a much smaller factor than the data, there must be operations that introduce significant latency independent of the amount of data transceived, but ReadFile is not the one.
Authenticate, SelectApplication or ReadRecord may be significantly more expensive than the time used for the data transfer.
(Wanted to write a comment, but it got quite long...)
I would use the ADDITIONAL FRAME (AF) way; some reasoning follows:
in addition to the mentioned 7 bytes saved in the command, you save 4 MAC bytes in all of the responses (but the last one, of course)
every time you read 54 bytes, you wastefully MAC 2 zero bytes of padding, which might have been MACed as data (given the DES block size of 8). In the "AF way" there is only a single MAC run covering all the data, so this does not happen here
you are not enforcing the actual frame size. It is up to the reader and the card to select the right frame size (here I am not 100% sure how DESFire handles the ISO 14443-4 chaining and FSD)
some readers can handle the AF situation on their own and (magically?) give you the complete answer (doing the AF somehow in their firmware -- I have seen at least one reader doing this)
If my thoughts are (at least partially) correct, these points make only a 9 ms difference in your scenario. But in another scenario they might make much more of a difference.
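For illustration, the "AF way" is essentially the following loop. This is only a sketch: transceive() is a placeholder for whatever your reader stack provides, and the framing assumes the native DESFire protocol (status byte first in each response), so adapt it to your wrapping:

    #include <stdint.h>
    #include <string.h>

    int transceive(const uint8_t *tx, size_t tx_len, uint8_t *rx, size_t rx_cap);  /* placeholder */

    #define ST_OK 0x00   /* operation complete            */
    #define ST_AF 0xAF   /* additional frame(s) to follow */

    int read_data_chained(uint8_t file_no, uint32_t off, uint32_t len,
                          uint8_t *out, size_t out_cap)
    {
        uint8_t cmd[8] = {
            0xBD, file_no,
            (uint8_t)off, (uint8_t)(off >> 8), (uint8_t)(off >> 16),   /* 3-byte offset */
            (uint8_t)len, (uint8_t)(len >> 8), (uint8_t)(len >> 16)    /* 3-byte length */
        };
        uint8_t rsp[64];
        size_t total = 0;

        int n = transceive(cmd, sizeof cmd, rsp, sizeof rsp);
        for (;;) {
            if (n < 1)
                return -1;

            size_t data_len = (size_t)n - 1;
            if (total + data_len > out_cap)
                return -1;
            memcpy(out + total, rsp + 1, data_len);   /* data follows the status byte */
            total += data_len;

            if (rsp[0] == ST_OK)
                break;                                /* last frame received */
            if (rsp[0] != ST_AF)
                return -1;                            /* error status from the card */

            uint8_t af = ST_AF;                       /* 1-byte "additional frame" request */
            n = transceive(&af, 1, rsp, sizeof rsp);
        }
        return (int)total;                            /* data + trailing MAC, still to be verified */
    }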
Additional notes:
I would exclude SELECT APPLICATION and AUTHENTICATE from the benchmark and measure them separately. It is up to you, but I would say that they only interfere with the desired "raw" data transfer measurement.
I would recommend benchmarking the pure "plain data transfer" mode, which is (presumably) the fastest "raw" data transfer possible.
Thank you for sharing your results, not many people do so... Good luck!
One byte is used to store each of the three color channels in a pixel. This gives 256 different levels each of red, green and blue. What would be the effect of increasing the number of bytes per channel to 2 bytes?
2^16 = 65536 values per channel.
The raw image size doubles.
Processing the file takes roughly twice as long ("roughly", because you have more data, but then again the new data size may be better suited to your CPU and/or memory alignment than the previous 3-byte groups -- "3" is an awkward data size for CPUs).
Displaying the image on a typical screen may take more time (where "a typical screen" is 24- or 32-bit and would as yet not have hardware acceleration for this particular job).
Chances are you cannot store the image back into the original file format. (Currently, TIFF is the only file format I know of that routinely uses 16 bits/channel. There may be more. Can yours?)
The image quality may degrade. (If you add bytes, there is no automatically sensible value to put in them. If 3 bytes of 0xFF signified 'white' in your original image, what would be the comparable 16-bit value: 0xFFFF or 0xFF00? Why? And remember, you have to make a similar choice for black. One common convention is sketched at the end of this answer.)
Common library routines may stop working correctly. Only the very best libraries are data size-ignorant (and they'd still need to be rewritten to make use of this new size.)
If this is a real-world scenario -- say, I just finished writing a fully antialiased 2D graphics library, and then my boss offhandedly adds this "requirement" -- it'd have a particular graphic effect on me as well.
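Regarding the "comparable 16-bit value" choice mentioned above, a common convention is to replicate the 8-bit value into both bytes, so full white stays at full scale. A minimal sketch:

    #include <stdint.h>

    /* Expand an 8-bit channel to 16 bits by replicating the byte:
       (v << 8) | v, i.e. v * 257.  This maps 0x00 -> 0x0000 and
       0xFF -> 0xFFFF, so black and white stay at the extremes. */
    static inline uint16_t expand_channel(uint8_t v)
    {
        return (uint16_t)((v << 8) | v);
    }

    /* A plain left shift (0xFF -> 0xFF00) would leave 'white' slightly
       below full scale -- exactly the kind of choice discussed above. */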