Baud rate on Arduino - arduino-uno

I am implementing a simple visible light communication module with two Arduinos, one as a transmitter and one as a receiver, sending a short text message of 120 characters. I have used Manchester encoding with on-off keying (OOK) modulation.
Altogether, my message frame, with Manchester encoding plus preambles and an end-of-frame byte, contains 2480 bits. I have set one bit period to 500 microseconds. On the receiver side, I sample each bit four times, i.e. every 500/4 = 125 microseconds. As I understand it, since each bit is 500 μs, the transmitter is sending 2000 bits/s, so a baud rate of 9600 bits/s should be more than enough. However, 9600 does not work, while any baud rate from 38400 up to 115200 does, and I can properly decode the short message at my receiver.
What is the explanation for this behavior? Why does a baud rate of 9600 not work even though I am only transmitting 2000 bits per second?
Further information: I have set the prescaler to 128, so the ADC sampling frequency is (16 MHz/128)/13 = 9.6 kHz.

Your sudden mention of "baud rates" implies that you're using the hardware serial port on the Arduino. If so, realise that feeding a 2,000 bit/s signal into a device expecting 9,600 bit/s causes problems.
The way an asynchronous UART works is that it waits for a start signal (bit), then samples the next (typically) 9 signals at the configured bit rate. It then checks that the 9th is a stop bit; if it isn't, it discards the byte as a framing error.
Since your signal only changes once every 9600/2000 = 4.8 UART bit periods, the odds are that the 9th "stop" bit will be of the wrong sense, and the data will be lost.
Below is an ASCII diagram for the timing that I'm talking about.
I'll use the bitstream 00101101 for the signal produced by the circuit, with a . as a zero-width separator between the original signal bits;
I'll use a ^ to point out where the UART is sampling the bits;
I'll use a * to indicate a "correct" byte (insofar as the byte ends in a correct stop bit);
I'll use a ! to indicate an "incorrect" byte (insofar as the byte ends in an incorrect stop bit);
For simplicity, I'll assume a UART baud rate of 10,000 bit/s, giving a ratio of 5 rather than 4.8...
00000.00000.11111.00000.11111.11111.00000.11111
^^^^^.^^^^!.......^^^^^.^^^^*.......^^^^^.^^^^*
This sequence would result in the UART recording the following three results:
Error (framing error: the expected stop bit was sampled as 0)
0xF0 (the UART shifts LSB first, so the wire sequence 0000 1111 reads back as 0xF0)
0xF0 (again, LSB first)
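If the goal is simply to recover a 2,000 bit/s signal, the UART can be bypassed entirely. Below is a minimal Arduino sketch of that idea; the input pin (2) is made up for the example, the 500 µs bit period is from the question, and the Manchester decoding itself is left as a stub:

const uint8_t RX_PIN = 2;               // hypothetical input pin
const unsigned long BIT_US = 500;       // bit period from the question

void setup() {
  pinMode(RX_PIN, INPUT);
  Serial.begin(115200);                 // PC debug link only, not the light link
}

void loop() {
  static unsigned long next = micros();
  if ((long)(micros() - next) >= 0) {   // fires once per bit period
    int level = digitalRead(RX_PIN);    // sample the line directly
    next += BIT_US;
    // ...feed 'level' into your Manchester decoder here...
    Serial.print(level);                // debug output of the raw bit stream
  }
}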

Related

Start bit and end bits in Serial Data Transmission Confusion

I am a bit confused about how the start and stop bits are differentiated from the actual data bits. For example, say "data", whose binary is 01100100 01100001 01110100 01100001, is being sent from System A to System B as a single packet (because it's less than 64 kibibytes), bit by bit. Please let me know how the start and stop bits are added to these data bits. There were two related threads on Stack Overflow with only one answer; it was not accepted and is very confusing. Can someone explain it in simple terms, please? Thank you.
When you want to send data over a serial line, you need to synchronize the transmitter and receiver. The start bit simply marks the beginning of the data chunk (typically one byte, with or without a parity bit), and the stop bit marks its end.
In the beginning, there's no data being transmitted - let's say the line idles at '1' for some time. The receiver is waiting for the start bit (the start bit is always '0' and the stop bit always '1', so the start bit stands out against the idle line). When the start bit arrives, the receiver starts an internal timer, and on every tick it reads the value from the line, until all data and parity bits are read. Then it checks for the stop bit and goes back to waiting for a new start bit.
Without the start bit, the receiver would not know when to start reading the data bits. Imagine sending a 0xFF byte without parity: without the start bit, the line would just stay in the idle state the whole time.
The stop bit is not strictly necessary; it's there to enhance reliability: it returns the line to the idle state so the next start bit produces a clean edge, and it lets the receiver check that its clock still matches the transmitter's (both must use the same frequency).
So, the start and stop bits don't need to be distinguished from the data bits. Quite the opposite: they allow the receiver to properly identify the data bits.
When sending your data, you would take it byte by byte, and for each byte you would send the start bit ('0') first, then the individual data bits, then maybe a parity bit, and then a '1' - the stop bit, everything at a given frequency. Then you would wait at least one timer tick before the next byte.
Usually you don't need to do any of this yourself, because there are specialized chips (UARTs) on the board for it. You just provide your data in a buffer and wait until it's sent, or wait for data to be received.
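To make the framing concrete, here is an illustrative bit-banged transmitter in Arduino-style C++. The pin number and baud rate are invented for the example, and a real design would normally use the hardware UART instead:

const uint8_t TX_PIN = 3;              // hypothetical output pin
const unsigned int BIT_US = 104;       // ~9600 baud (1/9600 s per symbol)

void sendByte(uint8_t b) {
  digitalWrite(TX_PIN, LOW);           // start bit ('0')
  delayMicroseconds(BIT_US);
  for (uint8_t i = 0; i < 8; i++) {    // 8 data bits, LSB first
    digitalWrite(TX_PIN, (b >> i) & 1);
    delayMicroseconds(BIT_US);
  }
  digitalWrite(TX_PIN, HIGH);          // stop bit ('1'), back to idle
  delayMicroseconds(BIT_US);
}

void setup() {
  pinMode(TX_PIN, OUTPUT);
  digitalWrite(TX_PIN, HIGH);          // line idles high
  delay(10);
  sendByte('A');                       // 0x41, framed as 10 symbols on the wire
}

void loop() {}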

GPIO32 pin works in analog mode, always reads 0 in digital mode

I'm having some difficulty getting PCNT pulse counting to work with a prototype ESP32 dev board.
I have a water level sensor (model D2LS-A) that signals its state by the frequency of a square wave it sends to GPIO32 (20 Hz, 50 Hz, 100 Hz, 200 Hz, or 400 Hz).
Sadly, the PCNT counter stays at 0.
To troubleshoot, I tried putting GPIO32 in ADC mode (attenuation 0, 10-bit mode) to read the raw signal (sampling it many times a second), and I'm getting values in the range I would expect (0-1023). But trying the same thing in digital GPIO mode, it always returns 0, never 1, across all samples.
Since the ESP-IDF PCNT component depends on reading the pin digitally, the counter never increments past 0.
So the real problem I'm having is: why aren't the ADC readings (between 0 and 1023) translating into digital readings of 0-1, as one would expect?
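For anyone reproducing the failing digital path, a minimal ESP-IDF check might look like the following (legacy GPIO driver; GPIO32 and the sensor frequencies are from the question, everything else is an assumption):

#include "driver/gpio.h"
#include "esp_log.h"

void app_main(void) {
  gpio_reset_pin(GPIO_NUM_32);                       // clear any leftover analog/RTC config
  gpio_set_direction(GPIO_NUM_32, GPIO_MODE_INPUT);

  // A tight loop samples far faster than the sensor's 20-400 Hz square
  // wave, so a healthy signal should show a mix of 0s and 1s.
  int highs = 0;
  const int N = 100000;
  for (int i = 0; i < N; i++) {
    highs += gpio_get_level(GPIO_NUM_32);
  }
  ESP_LOGI("d2ls", "high samples: %d of %d", highs, N);
}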

STM32F411: I need to send a lot of data over USB at high speed

I'm using an STM32F411 with the USB CDC library, and the maximum speed I get with this library is ~1 Mb/s.
I'm creating a project where I have 8 microphones connected to an ADC line (this part works fine). I need a 16-bit signal, so I increase the effective resolution by summing the first 16 readings from one line (the ADC gives only a 12-bit signal). In my project I need 96k 16-bit samples per line, so that's 0.768M samples for all 8 lines. This data needs 12000 Kb of space, but the STM32 has only 128 KB of SRAM, so I decided to send about 120 packets of 100 Kb each per second.
The conclusion is that I need ~11.72 Mb/s to send this.
The problem is that I'm unable to do that, because USB CDC limits me to ~1 Mb/s.
The question is how to increase the USB speed to 12 Mb/s on the STM32F4. I need a hint or a ready-made library.
Or maybe I should set up an "audio device" in CubeMX?
If the small b in your question means bytes, the answer is: it is not possible, as your micro has a full-speed (FS) USB peripheral whose maximum speed is 12 Mbit/s.
If it means bits, then your ~1 Mb/s speed assumption is wrong (CDC is not limited to that), but you will still not reach 12 Mbit/s of payload transfer.
You may try to write your own USB class (only if b means bits), but I'm afraid you will not find a ready-made library. You will also need to write the device driver for the host computer.
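As a sanity check on the numbers in the question, the required payload rate can be worked out directly (assuming 1 Mb = 2^20 bits, which matches the ~11.72 figure):

#include <stdio.h>

int main(void) {
  const double samples_per_sec = 96000.0 * 8;          /* 8 lines @ 96 kS/s */
  const double bits_per_sec = samples_per_sec * 16.0;  /* 16-bit samples    */
  printf("payload: %.2f Mbit/s\n", bits_per_sec / (1024.0 * 1024.0));
  /* ~11.72 Mbit/s: essentially the entire 12 Mbit/s raw rate of full-speed
     USB, before any protocol overhead, so FS USB cannot carry it. */
  return 0;
}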

A COM Port on a Windows PC indicates the bit rate, or the Baud rate?

If you search around the internet, you can easily find websites, Google images, and many (YouTube) videos that explain the various properties of COM/serial/RS232 ports. As far as I can tell, most of them state that the COM port dialogue box shows the baud rate (and not just on Windows OS), such as here, here and even on Sparkfun here. And this is clearly false, since the dialogue explicitly states the bit rate. Here's an image from my Windows 8.1 PC as well:
And we know that bit rate isn't the same as baud rate. Also, I've heard people numerous times, e.g. in YouTube videos, talk about messing around with the "baud rate" on a Windows PC. Now I'm confused. What is going on here? It clearly states the bit rate, doesn't it? Am I missing something?
Despite being marked "bits per second", that dialog actually displays baud as a rate in symbols per second. (Symbols include data bits but also start, stop, and parity. For serial ports these are often also called "bits".)
Besides framing symbols, the other possible cause for a difference between bit rate and baud would be multilevel signalling - however, this doesn't apply to PC serial ports, since they use only binary signalling, so one data symbol = one bit. Don't be confused by the fact that many serial-attached modems use a larger signal constellation; the rate set in this dialog applies to the link between the computer and the modem, not between two modems.
The selections shown in the image in the question will result in 9600 baud, but only 960 bytes per second (1 byte = 8 data bits, but due to the start and stop intervals the serial port sends 10 symbols per byte).
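The 9600-baud-to-960-bytes/s figure is just the 8N1 framing overhead; as a tiny worked example:

#include <stdio.h>

int main(void) {
  const int baud = 9600;                    /* symbols per second          */
  const int symbols_per_byte = 1 + 8 + 1;   /* start + 8 data + stop (8N1) */
  printf("%d baud -> %d bytes/s\n", baud, baud / symbols_per_byte);
  return 0;
}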
According to this answer:
What is the difference between baud rate and bit rate?
It looks like it's due to the fact that with early modems on analog phone lines, bps = baud rate, i.e. 1 symbol = 1 bit. That would lead to the assumption that a UI designer at some point simply mixed the terms, expecting that COM ports would mostly be used to plug in modems.
Modems don't use a strictly digital transmission method; they modulate the signal (FSK in early modems, phase shifts later), which allows one baud (your "symbol") to carry more than one bit of binary data. A phone line has a high-frequency limit of about 3300 Hz. If that were the cutoff, your modem couldn't signal much faster than 2400 baud. By shifting the phase within one cycle, it can transmit more than 1 bit per baud: at 4 bits per symbol, the bit rate goes from 2400 to 9600.
At least that's what I remember from some 20 years ago.

Non-standard comport baudrates in windows

Do the Windows built-in COM port drivers support non-standard baud rates? (Actually, does Windows even have a built-in driver for COM1 & COM2?)
The reason I ask is that I'm having trouble getting a reliable connection to a device that uses the unusual baud rate 5787. The device and PC talk briefly, then seem to lose the dialogue, and then get it again. Once a long message is sent, it gets lost at the other end; a short time later the dialogue is back. This sounds to me like a classic baud rate mismatch: not quite close enough to be reliable, but close enough that some data gets through.
If I use an inexpensive PCI serial board, it works without problems. It's only computers using on-board serial that I've found don't work properly.
Baud rates in a PC are controlled by a UART and a crystal. The crystal frequency determines what baud rates the serial port can generate; the baud rate is typically derived with a divide-by-16 counter. The crystal frequency in a standard PC is normally 1.8432 MHz. Dividing that by 16 gives you 115200, which is usually the maximum the COM port can do.
Inside the UART is a divisor latch (the register selected by the DLAB bit). This further divides the clock. So essentially, to get 5787 baud you're talking about dividing 115200 by 5787, which gives you 19.906687...
Since that's close to 20, the driver would load the divisor latch with 20, and 115200 / 20 gives you 5760. Therefore you're probably getting 5760 baud out of the PC COM port. That's probably enough of a difference to cause the issue you're seeing.
No, the difference from 5760 to 5787 is nowhere near enough to explain any sort of problem. UARTs identify the start of a byte from the leading edge of the start bit, then sample the data in the middle of each bit. This means they tolerate baud rate errors up to the point where the predicted middle of a bit drifts onto an edge. That's a half-bit error over one full byte, because each byte ends in a stop bit, so there's a re-synchronisation event per byte. Half a bit in ten bits (8 data, one start, one stop) is 5%. The difference from 5760 to 5787 is only about 0.5%, miles inside the safe region.
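The divisor arithmetic from both answers can be reproduced in a few lines (16550-style integer divisor, 115200 base clock as described above):

#include <stdio.h>
#include <math.h>

int main(void) {
  const double base = 115200.0;                   /* 1.8432 MHz crystal / 16 */
  const double target = 5787.0;
  const long divisor = lround(base / target);     /* 19.9066... -> 20        */
  const double actual = base / divisor;           /* 5760 baud               */
  const double err = 100.0 * (target - actual) / target;
  printf("divisor=%ld actual=%.0f baud error=%.2f%%\n", divisor, actual, err);
  /* ~0.47% error, far inside the ~5% tolerance of a 10-symbol frame. */
  return 0;
}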