Non-standard COM port baud rates in Windows

Do the Windows built-in COM port drivers support non-standard baud rates? (Actually, does Windows even have a built-in driver for COM1 and COM2?)
The reason I ask is that I'm having trouble getting a reliable connection to a device that uses the unusual baud rate 5787. The device and PC talk briefly, then seem to lose the dialogue, and then pick it up again. Once a long message is sent, it gets lost at the other end; a short time later the dialogue is back. This sounds to me like a classic baud rate mismatch: not quite close enough to be reliable, but close enough that some data gets through.
If I use an inexpensive PCI serial board it works without problems. It's only computers that use on-board serial ports that I've found don't work properly.
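For reference, requesting a non-standard rate through the Win32 comm API looks roughly like this; whether the on-board UART driver honours the exact value is exactly what is in question here. A minimal sketch, with the port name as a placeholder:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Open the serial port (placeholder name). */
        HANDLE h = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        DCB dcb = { 0 };
        dcb.DCBlength = sizeof(dcb);
        GetCommState(h, &dcb);

        dcb.BaudRate = 5787;          /* the non-standard rate from the question */
        dcb.ByteSize = 8;
        dcb.Parity   = NOPARITY;
        dcb.StopBits = ONESTOPBIT;

        if (!SetCommState(h, &dcb))
            printf("SetCommState rejected the settings: %lu\n", GetLastError());
        else
            printf("Driver accepted the request (which does not prove the UART hits 5787 exactly)\n");

        CloseHandle(h);
        return 0;
    }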

Baud rates in a PC are controlled by the UART and its crystal. The crystal frequency determines which baud rates the serial port can generate; the baud rate is usually derived through a divide-by-16 counter. The crystal frequency for a standard PC is normally 1.8432 MHz, and dividing that by 16 gives 115200, which is usually the maximum the COM port can do.
Inside the UART is a divisor latch (accessed via the DLAB bit), which divides the clock further. So essentially, to get 5787 baud you would divide 115200 by 5787, which gives 19.906687...
Since that's close to 20, you'd load the divisor latch with 20, and 115200 / 20 gives 5760. Therefore you're probably getting 5760 baud out of the PC COM port. That's probably enough of a difference to cause the issue you're seeing.
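A quick sketch of that divisor arithmetic for a 16550-style UART (1.8432 MHz crystal, divide-by-16 prescaler), just to show how the numbers fall out:

    #include <stdio.h>

    int main(void)
    {
        const double base   = 1843200.0 / 16.0;   /* 115200 */
        const double target = 5787.0;

        int divisor   = (int)(base / target + 0.5);   /* nearest divisor latch value */
        double actual = base / divisor;
        double error  = (actual - target) / target * 100.0;

        printf("divisor = %d, actual baud = %.1f, error = %.2f%%\n",
               divisor, actual, error);
        /* prints: divisor = 20, actual baud = 5760.0, error = -0.47% */
        return 0;
    }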

No, the difference between 5760 and 5787 is nowhere near enough to explain any sort of problem. UARTs identify the start of a byte from the leading edge of the start bit, then sample the data in the middle of each bit. This means they are tolerant of baud rate errors up to the point where the predicted middle of a bit lands on an edge. That's a half-bit error over one full byte, because each byte ends with a stop bit, so there's a re-synchronisation event per byte. One half bit in ten bits (8 data, one start, one stop) is 5%. The difference between 5760 and 5787 is only about 0.5%, so it's miles inside the safe region.
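To put numbers on that half-bit argument: the receiver re-synchronises on every start bit, so what matters is the drift accumulated by the time the last bit of the frame is sampled. A small check using the figures above:

    #include <stdio.h>

    int main(void)
    {
        const double tx_baud = 5787.0;        /* the device */
        const double rx_baud = 5760.0;        /* what the PC UART actually generates */
        const double bits_per_frame = 10.0;   /* 1 start + 8 data + 1 stop */

        /* Timing error at the sampling point of the last bit, as a fraction
           of one bit period.  The frame only breaks when this exceeds 0.5. */
        double per_bit_error = (tx_baud - rx_baud) / rx_baud;
        double drift_at_last_sample = per_bit_error * (bits_per_frame - 0.5);

        printf("drift at last sample = %.3f bit periods (limit 0.5)\n",
               drift_at_last_sample);
        /* about 0.045 bit periods -- far inside the safe region */
        return 0;
    }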

Related

STM32F411: I need to send a lot of data over USB at high speed

I'm using an STM32F411 with the USB CDC library, and the maximum speed for this library is ~1 Mb/s.
I'm creating a project with 8 microphones connected to ADC lines (this part works fine). I need a 16-bit signal, so I increase the resolution by summing the first 16 readings from one line (the ADC only gives a 12-bit signal). I need 96k 16-bit samples per second for one line, so that's 0.768M samples per second across all 8 lines. That data needs about 12000 Kb per second, but the STM32 has only 128 KB of SRAM, so I decided to send about 120 packets of 100 Kb each per second.
The conclusion is that I need ~11.72 Mb/s to send this.
The problem is that I'm unable to do that because the CDC USB class limits me to ~1 Mb/s.
The question is how to increase the USB speed to 12 Mb/s on the STM32F4. I need a hint or a library.
Or maybe I should set up an "audio device" in CubeMX?
If the small b in your question means bytes, the answer is: it is not possible, as your micro has a Full Speed USB peripheral whose maximum speed is 12 Mbits per second.
If it means bits, your ~1 Mb (bit) speed assumption is wrong, but you will still not reach 12 Mbit of payload transfer.
You may try to write your own class (only if b means bits), but I'm afraid you will not find a ready-made library. You will also need to write the device driver on the host computer.
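For what it's worth, the raw numbers from the question line up like this (a back-of-envelope check only; USB protocol overhead is ignored):

    #include <stdio.h>

    int main(void)
    {
        const double lines           = 8;
        const double samples_per_sec = 96000;   /* per line, after the 16x summing */
        const double bits_per_sample = 16;

        double required = lines * samples_per_sec * bits_per_sample;
        printf("required payload: %.3f Mbit/s\n", required / 1e6);      /* 12.288 */
        printf("(= %.2f Mbit/s in 1024-based units, the ~11.72 figure)\n",
               required / (1024.0 * 1024.0));
        printf("USB Full Speed raw signalling rate: 12.000 Mbit/s\n");
        /* Bulk transfers never reach the raw 12 Mbit/s once framing and
           protocol overhead are included, so the stream cannot fit over FS. */
        return 0;
    }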

FTDI driver (Windows) FT_Write() issue with large (1 KB) chunks (version 2.12.16.0)

My PC application sends a file (2 MB) to an embedded device in chunks of 1 KB.
I use the FTDI Windows driver and the classic FT_Write() API function, as my code is cross-platform.
Note: the issues below appear when I use a 1 KB chunk size. Smaller chunks (I tried 64 bytes) work fine.
The problem is that the function returns "0 bytes sent" every couple of hundred packets and gets stuck. I found a workaround: purging both TX and RX, followed by a ResetDevice() call, recovers the chip. The stall still happens every couple of hundred packets, but at least I can send the whole file (2 MB).
But when I use a USB isolator (http://www.bb-elec.com/Products/USB-Connectivity/USB-Isolators/Compact-USB-Port-Guardian.aspx)
the workaround fails.
I believe my workaround is not a graceful solution.
Note: I use a large chunk because of a suggestion I found in the FTDI application note below:
When writing data to an FTDI device, as much data as possible should
be buffered in the application and written to the device in a single
write function call (either WriteFile for a VCP application using the
Win32 API, FT_Write if using the D2XX classic interface or
FT_WriteFile if using the D2XX FT_W32 interface). The result of this
is that the data will be written to the device with 64 bytes per USB
packet.
Any idea what the proper fix for these issues is? Is it related to FTDI initialization? My driver version is 2.12.16.0 (3/9/2016).
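For reference, the write loop with the workaround described above looks roughly like this (a sketch against the D2XX API; error handling is trimmed, and the purge/reset recovery is the workaround from this question, not an FTDI-recommended fix):

    #include "ftd2xx.h"

    #define CHUNK 1024

    /* Send one chunk; on a stall, apply the purge + reset workaround and retry once. */
    static int send_chunk(FT_HANDLE h, unsigned char *buf, DWORD len)
    {
        DWORD written = 0;
        FT_STATUS st = FT_Write(h, buf, len, &written);

        if (st == FT_OK && written == len)
            return 0;

        /* "0 bytes sent": purge both FIFOs and reset the device, then retry. */
        FT_Purge(h, FT_PURGE_RX | FT_PURGE_TX);
        FT_ResetDevice(h);

        st = FT_Write(h, buf, len, &written);
        return (st == FT_OK && written == len) ? 0 : -1;
    }

Calling something like this once per 1 KB slice of the file mirrors the behaviour described; opening the device and reading the file are left out.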
I also saw the same problem of the FT_Write() API not working correctly when too much data was passed, while working on the library for my USB device Nusbio.
I mostly work in synchronous bit-bang mode rather than UART mode, but after all it is the same hardware, driver and API.
There is the USB 2.0 specification or the FTDI FT232RL specification, and then there is the reality of the electrons and the bits. The expected transfer speeds never really match, at least at first. In other words, it is complicated (see more below in my referenced blog post).
In 2015 I was under the impression that, with the FTDI FT232RL chip, a size of 384 bytes worked well, and that number comes from the chip datasheet (128-byte receive buffer and 256-byte transmit buffer).
Using a size of 500 bytes would still work, but above 600 bytes things would not.
I later used the FT231X chip, which has larger buffers (1 KB total: a 512-byte receive buffer and a 512-byte transmit buffer), and was able to transfer 1 KB and 2 KB buffers of data with FT_Write(), therefore more than doubling my transfer speed.
But above 2 KB things would not work.
In 2016, having read everything you can read about FTDI USB 2.0 Full Speed chips, I came to the conclusion that FT_Write() should support up to 64 KB (see the datasheets for the following chips: FT232RL, FT231X, FT232H, FT260, FT4222).
I also did some research on serial-port communication from .NET faster than 115200 baud.
Somehow I was able to update my C# library to send data in 32 KB buffers with FT_Write(), and it works with both the FT232RL and FT231X chips, but I can't tell you what changed.
I was probably not completely understanding the ins and outs of the USB 2.0 Full Speed FTDI technology.
For example, let's say you are using the FT232RL and transferring 384 bytes at a time with FT_Write(). Knowing that there is at least a 1 millisecond latency in USB 2.0 Full Speed whatever you do, from a USB point of view you are transferring 384 * 1000 / 1024, that is 375 KB/s in theory (that would be the max). That said, what about the baud rate supported by your embedded device?
What baud rate are you using?
The FT232RL's max baud rate is 900 000 baud, which would give you only 900000 / (1 + 8 + 1), about 87 KB/s.
Right away you can tell there is going to be some problem; maybe the FTDI driver takes care of it, or maybe not. I can't tell.
Redo the math based on the baud rate supported by your embedded device and a 384-byte buffer sent 1000 times per second, then slow down your USB writes with a sleep() to match your baud rate.
That is where I would start.
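A rough sketch of that pacing calculation (the baud rate below is only a placeholder; substitute whatever your embedded device actually runs at):

    #include <stdio.h>

    int main(void)
    {
        const double baud        = 115200.0;   /* placeholder: your device's rate */
        const double chunk_bytes = 384.0;      /* bytes per FT_Write() call */

        /* 8N1 framing: 10 symbols on the wire per data byte. */
        double link_bytes_per_sec = baud / 10.0;

        /* Time the UART side needs to drain one chunk; sleep at least this
           long between FT_Write() calls so the FIFOs never back up. */
        double ms_per_chunk = chunk_bytes / link_bytes_per_sec * 1000.0;

        printf("link drains %.0f bytes/s -> wait at least %.1f ms per %.0f-byte chunk\n",
               link_bytes_per_sec, ms_per_chunk, chunk_bytes);
        /* with these numbers: 11520 bytes/s -> about 33.3 ms per chunk */
        return 0;
    }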

Does a COM port on a Windows PC indicate the bit rate or the baud rate?

If you search around the internet you can easily find websites, Google image results, and many (YouTube) videos that explain the various properties of COM/serial/RS-232 ports. As far as I can tell, most of these state that the baud rate can be seen in the COM port dialogue box (and not just on Windows), such as here, here and even on Sparkfun here. And this is clearly false, since the dialogue explicitly states the bit rate. Here's an image from my Windows 8.1 PC as well:
And we know that bit rate isn't the same as baud rate. I've also heard people numerous times, e.g. in YouTube videos, talk about messing around with the "baud rate" on a Windows PC. Now I'm confused. What is going on here? It clearly states the bit rate, doesn't it? Am I missing something?
Despite being labelled "bits per second", that dialog actually displays the baud rate, i.e. a rate in symbols per second. (Symbols include the data bits but also start, stop, and parity bits. For serial ports these are often also called "bits".)
Besides framing symbols, the other cause for a difference between bit rate and baud would be multilevel signalling -- however this doesn't apply to PC serial ports, since they only use binary signalling, therefore one data symbol = one bit. Don't be confused by the fact that many serial-attached modems use a larger signal constellation: that applies to the link between the two modems over the phone line, not to the serial link between the modem and the computer.
The selections shown in the image in the question will result in 9600 baud, but only 960 bytes per second. (1 byte = 8 data bits, but due to the start and stop intervals the serial port sends 10 symbols per byte.)
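The arithmetic behind that last line, for the 9600-8-N-1 settings shown in the dialog:

    #include <stdio.h>

    int main(void)
    {
        const double baud = 9600.0;               /* symbols per second on the wire */
        const int symbols_per_byte = 1 + 8 + 1;   /* start + 8 data + stop, no parity */

        printf("%.0f baud / %d symbols per byte = %.0f bytes per second\n",
               baud, symbols_per_byte, baud / symbols_per_byte);
        /* 9600 / 10 = 960 bytes per second */
        return 0;
    }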
According to this answer:
What is the difference between baud rate and bit rate?
It looks like it's due to the fact that with early analog phones, bps = baud rate, i.e. 1 symbol = 1 bit. That would lead to the assumption that a UI designer at some point simply made some assumptions and mixed the terms, based on the expectation that COM ports were going to be used to plug in modems.
Modems don't use a strict digital transmission method, but instead use FSK, which allows a baud (your "symbol") to carry more than one bit of binary data. A phone line has a high-frequency limit of about 3300 Hz. If that were the cutoff, your modem couldn't send more than 2400 baud (bit rate). By shifting the signal within one cycle, it's able to transmit more than 1 bit per baud. Add 4 shifts and you raise the bit rate from 2400 to 9600.
At least that's what I remember from some 20 years ago.

IR emitter and PWM output

I have been using the FRDM-KL46Z development board to do some IR communication experiments. Right now I have two PWM outputs with the same settings (50% duty cycle, 38 kHz) showing different voltage levels: when both were idle, one measured 1.56 V but the other 3.30 V. When the outputs were used to drive the same IR emitter, the voltages changed to 1.13 V and 2.29 V.
Also, why couldn't I use one PWM output to drive two IR emitters at the same time? When I tried, the frequency seemed to change, so the two IR receivers could not work.
I am not an expert in Freescale parts, but how are you controlling your PWM? I'm guessing each PWM comes from a separate timer, and maybe they are set up differently, for example one in 16-bit mode (the 3.3 V output) and the other in 32-bit mode (the 1.56 V output). In that case, even if both are loaded with the same compare value, that value corresponds to a different fraction of each timer's period, so one output could easily end up with half the duty cycle, and therefore roughly half the average voltage, of the other. So I suggest checking the timer setup.
The reason the voltage changed is that the IR emitters were loading the circuit. In an ideal situation this wouldn't happen, but when a source is asked for too much current, its voltage usually drops a bit.
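A generic illustration of the point about timer setup (the struct below is hypothetical, not the KL46Z's actual TPM registers): both the output frequency and the duty cycle depend on the period register, so the same compare value produces different duty cycles if the two timers' periods differ.

    #include <stdio.h>

    /* Hypothetical PWM timer settings, not actual KL46Z registers. */
    struct pwm_timer {
        double clock_hz;        /* timer input clock             */
        unsigned long period;   /* counts per PWM period (MOD+1) */
        unsigned long compare;  /* counts the output stays high  */
    };

    static void describe(const char *name, const struct pwm_timer *t)
    {
        double freq = t->clock_hz / t->period;
        double duty = 100.0 * t->compare / t->period;
        printf("%s: %.1f kHz, %.1f%% duty\n", name, freq / 1000.0, duty);
    }

    int main(void)
    {
        /* Same compare value, different period register: */
        struct pwm_timer a = { 48e6, 1263, 632 };   /* ~38 kHz, ~50% duty */
        struct pwm_timer b = { 48e6, 2526, 632 };   /* ~19 kHz, ~25% duty */

        describe("timer A", &a);
        describe("timer B", &b);
        return 0;
    }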

What makes a CPU architecture "X-bit"?

Warning: I'm not sure where this type of question belongs. If you know a better place for it, drop a link.
Background: imagine you heard a sentence like this: "this computer/processor has an X-bit architecture". Now, if that computer is standard, you get a lot of information, like the maximum RAM capacity, the maximum unsigned/signed integer value, and so on... But what if the computer is not standard?
The mystery: back in the '70s and '80s, the period referred to as the "8-bit era". Wait, 8-bit? Yes. So, if a CPU architecture is 8-bit, then:
The maximum RAM capacity of the computer is exactly 256 bytes.
The maximum unsigned integer range is 0 to 255 and the maximum signed integer range is -128 to 127.
The maximum ROM capacity is also 256 bytes, because you have to be able to jump around?
However, it's clearly not like that. Look at the technical specifications of the game consoles of that time and you will see that they exceed the 256 limit.
Quotes (http://www.8bitcomputers.co.uk/whatbasics.html):
The Sharp PC1211 is actually a 4-bit computer but cleverly glues two together to look like 8 (a computer able to add up to 16 would not be very useful!)
So if it's a 4-bit computer, why can it manipulate 8-bit integers? And another one...
The Sinclair QL is one of those computers that actually leaves the experts arguing. In parts, it is a 16 bit computer, in some ways it is even like a 32 bit computer but it holds its memory in 8 bits.
What? So why is this mess in www.8bitcomputers.co.uk?
Generally: how is an X-bit computer defined?
The widest data bus it has is X bits (which would make the Sinclair QL a 32-bit computer)?
The CU functions of that computer are X bits long?
It holds its memory (in registers, ROM, RAM, whatever) in X bits?
Other definitions?
Purpose: I think that what I am designing is a 4-bit CPU. I don't really know whether it has a 4-bit architecture, because it uses double ROM addresses and includes functions like "activate ALU" that take another 4 bits from register Y. I want to know if I can still call it a 4-bit CPU. That's it!
Thank you very much in advance :)
An X-bit computer (or CPU) is defined by whether its central units and registers, such as the CPU and ALU, are X bits wide. The addressing does not matter in defining the number X. As you have mentioned, an 8-bit computer (e.g. the Motorola 68HC11; even though it is an MCU, it can still be counted as a computer, with a CPU, I/O and memory) can have 16-bit addressing in order to increase the RAM or memory size.
The data-bus width and the register sizes of the CPU and ALU are the limiting factors in defining the X in an X-bit computer architecture. You can get more information from http://en.wikipedia.org/wiki/Word_(computer_architecture)
The answer to your question is: "Yes, you are designing a 4-bit CPU if the registers and data-bus width are 4 bits."
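To illustrate the "glues two together to look like 8" idea from the Sharp PC1211 quote: a CPU whose ALU is X bits wide can still work on wider values, it just needs more instructions, carrying from one partial operation into the next. A sketch of a 16-bit addition done the way an 8-bit ALU would do it:

    #include <stdio.h>
    #include <stdint.h>

    /* 16-bit addition performed 8 bits at a time: low byte first, then the
       high byte plus the carry that fell out of the low half. */
    static uint16_t add16_with_8bit_alu(uint16_t a, uint16_t b)
    {
        uint8_t lo    = (uint8_t)(a & 0xFF) + (uint8_t)(b & 0xFF);
        uint8_t carry = lo < (uint8_t)(a & 0xFF);   /* did the low half wrap? */
        uint8_t hi    = (uint8_t)(a >> 8) + (uint8_t)(b >> 8) + carry;
        return (uint16_t)((uint16_t)hi << 8 | lo);
    }

    int main(void)
    {
        printf("0x12F0 + 0x0420 = 0x%04X\n", add16_with_8bit_alu(0x12F0, 0x0420));
        /* prints 0x1710, the same as a native 16-bit add */
        return 0;
    }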
