Is there a better way to send a serial break than the SetCommBreak - delay - ClearCommBreak sequence?
I have to communicate with a microcontroller that uses a serial break as the start of a packet at 115k2, and the SetCommBreak approach has two problems:
At 115k2 the break is well below 1 ms, so the timing gets critical.
Since the break must be embedded in the packet stream at the correct position, I expect trouble with the FIFO.
Is there a better way of doing this, without moving the serial communication to a thread without a FIFO? The UART is typically a 16550+.
I have a choice in the sense that the microcontroller setup can be switched (with other firmware) to a more conventional packet format, but the manual warns that the "break" way features hardware integrity checking of the serial stream.
Compiler is Delphi (2009/XE), but any code or even just a reference is welcome.
The short answer is that serial programming with Windows is fairly limited :-(
You're right that the normal way of sending a break is with SetCommBreak(), and yes, you have to handle the delay yourself - which tends to mean the break ends up substantially longer than it needs to be. The good news is that this doesn't usually matter - most devices expecting a break will treat a much longer break in exactly the same way as a short one.
In the event that your microcontroller is fussy about the precise duration of the break, one way of achieving a shorter, precisely-defined break is to change the baud rate on the port to a slower rate, send a zero byte, then change it back again.
The reason that this works is that a byte sent to the serial port is sent as (usually) one start bit (a zero), followed by the bits in the byte, followed by one or more stop bits (high bits). A 'break' is a sequence of zero bits that is too long to be a byte - i.e. the stop bits don't come in time. By choosing a slower baud rate and sending a zero, you end up holding the line at zero for longer than the receiver expects a byte to be, so it interprets it as a break. (It's up to you whether to determine the baud rate to use by precise calculation or trial-and-error of what the microcontroller seems to like :-)
Of course, either method (SetCommBreak() or baud changing) requires you to know when all data has been sent out of the serial port (i.e. there's nothing left in the transmit FIFO). This nice article about Windows Serial programming describes how to use SetCommMask(), WaitCommEvent() etc. to determine this.
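To make this concrete, here is a minimal sketch of the baud-change trick using pyserial (the question is Delphi, but the same sequence of calls applies against the Win32 API); the port name and the slower baud rate are assumptions you would tune to what your microcontroller accepts:

```python
import serial

NORMAL_BAUD = 115200
BREAK_BAUD = 57600   # assumption: a zero byte now spans ~2 character times at 115k2

ser = serial.Serial("COM3", NORMAL_BAUD, timeout=1)   # hypothetical port name

def send_break_then_payload(payload: bytes) -> None:
    ser.flush()                  # wait until everything queued so far has left the FIFO
    ser.baudrate = BREAK_BAUD    # drop to the slower rate
    ser.write(b"\x00")           # start bit + 8 zero bits held low -> receiver sees a break
    ser.flush()                  # make sure the 'break byte' has actually been sent
    ser.baudrate = NORMAL_BAUD   # restore the real rate
    ser.write(payload)           # the rest of the packet follows the break

send_break_then_payload(b"\x01\x02\x03")
```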
Related
I have a device which is continuously sending data through UART.
I'm trying to read it using a terminal application on Windows-based PC. The problem is that I don't know at which baud rate the device is sending the data.
The data I'm getting at higher baud rates doesn't make any sense, so I have narrowed it down to less than or equal to 600 among the standard baud rates available in the terminal.
Is there any software to detect the baud rate, or a method using any microcontroller?
No, not if you want to get done quickly. Ten years doing this type of task says an oscilloscope or even an inexpensive USB-based logic analyzer is your best solution here. This isn't a software problem yet, it's a signal detection problem. You should be able to clear this up in a few minutes with the right instrument.
I'm assuming you're only doing this exercise because the transmitting part is one for which you cannot find a datasheet. If you had a datasheet in hand, that would clear this up, or at least suggest the possible baud rates you should try.
I tried the Realterm software and found out that the data is coming at 300 baud, no parity, 8 data bits and 1 or 2 stop bits. For the remaining options, I get a framing error and break condition in the software. Thank you all...
Old thread but I thought it might be useful.
Something I did was write a python/pyserial script that kept cycling through different baud rates (300 - 115200), listened, and filtered for strings that aren't garbage - something that is easy to decide, on the assumption that the output will be good clear text when you have the right rate.
It worked well enough and found the right rate more than fast enough to Esc into the bootloader of my AP.
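For anyone wanting to try the same thing, a rough sketch of that scan-and-filter idea with pyserial could look like the following; the port name, sample size and the 0.7 "mostly printable" threshold are assumptions to tune:

```python
import serial

# standard rates to try, slowest first
CANDIDATE_BAUDS = [300, 600, 1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200]

def printable_ratio(data):
    # fraction of bytes that look like plain text (printable ASCII, tab, CR, LF)
    if not data:
        return 0.0
    good = sum(1 for b in data if 32 <= b <= 126 or b in (9, 10, 13))
    return good / len(data)

def guess_baud(port="COM3"):          # hypothetical port name
    for baud in CANDIDATE_BAUDS:
        with serial.Serial(port, baud, timeout=2) as ser:
            ser.reset_input_buffer()  # discard whatever arrived at the wrong rate
            sample = ser.read(256)
        if printable_ratio(sample) > 0.7:   # mostly clean text -> probably the right rate
            return baud
    return None

print(guess_baud())
```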
I am working on a project in VHDL which includes multiplying matrices. I would like to be able to load data from a PC into arrays on the FPGA using a UART. I am only taking my first bigger steps in VHDL and I am not sure if I am taking the right approach.
I wanted to declare an array of integer signals, and then implement a UART to receive data from the PC and load it into those signals. However, I can't use a for-loop for that, as it will be synthesised to load the data in parallel (which is impossible, because the values will be coming from the PC one after another over the serial port). And because the matrices may be of various sizes, assigning the signals one by one would require lots of specific code (which appears to be bad practice to me).
Is the idea to use an array of signals and load data to those signals through UART realizable? And if my approach is entirely wrong, how could I achieve that?
What you want is doable but you will probably need to design a kind of hardware monitor to act as an intermediary between your UART and your storage (your array of integer signals). This hardware monitor will interpret commands coming from the UART and perform read/write operations on your storage. It will have one interface with the storage and another with the UART. You will have to define a kind of protocol, with a syntax for your commands and a sequence of operations for each command.
Example: the monitor waits for commands coming from the UART. The first received character indicates whether it is a read (0) or a write (1). The four next characters are the target address, least significant byte first. If the command is a read, the monitor reads the data at the specified address in your storage and sends it to the UART, one byte at a time, least significant byte first. If the command is a write, the address is followed by a data to write in your storage at the specified address, least significant byte first, and your monitor waits until the data is received and writes it in your storage.
Optionally, the monitor could send an exit status byte at the end of each command to indicate potential errors (protocol errors, unmapped addresses, write attempts in read-only regions...)
Of course, depending on the characteristics of your application, you will probably define a completely different protocol, simpler or more complex, but the principle will be the same.
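For illustration only, here is how the host side of such a protocol could look from the PC with pyserial; the opcode values, the 32-bit word width and the port name are assumptions, and the FPGA-side monitor would be the VHDL state machine that decodes exactly these byte sequences:

```python
import serial
import struct

READ_CMD = b"\x00"    # assumed opcode: read
WRITE_CMD = b"\x01"   # assumed opcode: write

ser = serial.Serial("COM3", 115200, timeout=1)   # hypothetical port

def write_word(address, value):
    # opcode, then address and data, both least significant byte first
    ser.write(WRITE_CMD + struct.pack("<I", address) + struct.pack("<I", value))

def read_word(address):
    ser.write(READ_CMD + struct.pack("<I", address))
    reply = ser.read(4)                    # monitor returns the word LSB first
    return struct.unpack("<I", reply)[0]

# load a small matrix into consecutive addresses, then read one element back
for i, coeff in enumerate([3, 1, 4, 1, 5, 9]):
    write_word(i, coeff)
print(read_word(2))
```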
All this is usually implemented in software and runs on a CPU that has the UART as peripheral and the storage in its memory space. But if you do not have a CPU...
Warning: this is quite complex. The UART itself is quite complex. Not sure you should start with this if you are a VHDL beginner.
Your approach is not entirely wrong, but you have a software-oriented way of expressing this, which indicates you are missing the fundamentals. People with strong software backgrounds tend to think in terms of the programming language and not in terms of the actual FPGA-specific structures they want to achieve. It is important to unlearn this if you want to be successful in designing for FPGAs.
Based on what I just wrote you should consider in what type of FPGA structure you would like to store the data. The speed, resource and power requirements govern this choice. One suitable way to store the data would be in either a single or an array of either Block RAM or LUTRAM. Both of these structures can be inferred by using a signal of an array type in the hardware description language which is why I said you are not entirely off track. Consult the manual of your synthesis tool to find templates for how to infer these structures. An alternative is to use a vendor IP block or to instantiate a primitive directly but both those methods are clumsier in my opinion.
Important parameters to consider are the total number of words you need to store, the size of a word and the number of read/write operations per clock cycle. For higher number of reads per cycle an array of memories must be used since most FPGA memories only support two reads per cycle.
I am interested in using atmel avr controllers to read data from LIN bus. Unfortunately, messages on such bus have no beginning or end indicator and only reasonable solution seems to be brute force parsing. Available data from bus is loaded into circular buffer, and brute force method finds valid messages in buffer.
Working with a 64-byte buffer and a 20 MHz ATtiny, how can I test the performance of my code in order to see if buffer overflow is likely to occur? Added: My concern is that the algorithm will run slowly, thus buffering even more data.
A bit about the brute force algorithm (a model of the scan follows below). The second byte in the buffer is assumed to be the message size. For example, if the assumed length is 22, the first 21 bytes are XORed and tested against the 22nd byte in the buffer. If the checksum passes, the code checks whether the first (SRC) and third (DST) bytes are what they are supposed to be.
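For reference, an illustrative model of that scan (in Python rather than AVR C); the expected SRC/DST values and the minimum length are placeholders for whatever the bus actually carries:

```python
EXPECTED_SRC = 0x10   # placeholder
EXPECTED_DST = 0x20   # placeholder

def find_frames(buf):
    # scan every candidate offset in the (already linearised) buffer
    frames = []
    for start in range(len(buf) - 2):
        length = buf[start + 1]                   # second byte = assumed frame length
        if length < 4 or start + length > len(buf):
            continue
        candidate = buf[start:start + length]
        checksum = 0
        for b in candidate[:-1]:                  # XOR everything before the last byte
            checksum ^= b
        if checksum != candidate[-1]:             # last byte is the checksum
            continue
        if candidate[0] == EXPECTED_SRC and candidate[2] == EXPECTED_DST:
            frames.append((start, candidate))
    return frames
```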
AVR is one of the easiest microcontrollers for performance analysis, because it is a RISC machine with a simple instruction set and well-known execution time for each instruction.
So, the basic procedure is that you take the assembly code and start calculating different scenarios. Basic register operations take one clock cycle, branches usually two cycles, and memory accesses three cycles. A XORing loop would take maybe 5-10 cycles per byte, so it is relatively cheap. How you get your hands on the assembly code depends on the compiler, but all compilers tend to give you the end result in a reasonably legible form.
Usually, without seeing the algorithm and knowing anything about the timing requirements, it is quite impossible to give a definite answer to this kind of question. However, as the LIN bus speed is limited to 20 kbit/s, you will have around 10 000 clock cycles for each byte (20 MHz / 20 kbit/s gives 1 000 cycles per bit, and a character takes ten bit times). That is enough for almost anything.
A more difficult question is what to do with the LIN framing, which is dependent on timing. It is not a very nice habit, as it requires some extra timing effort from the microcontroller. (What on earth is wrong with using the 9th bit?)
The LIN frame consists of a
break (at least 13 bit times)
synch delimiter (0x55)
message id (8 bits)
message (0..8 x 8 bits)
checksum (8 bits)
There are at least four possible approaches with their ups and downs:
(Your approach.) Start at all possible starting positions and try to figure out where the checksummed message is. Once you are in sync, this is not needed. (Easy, but returns ghost messages with a probability of 1/256. Remember to discard the synch field.)
Use the internal UART and look for the synch field; try to figure out whether the data after the delimiter makes any sense. (This has lower probability of errors than the above, but requires the synch delimiter to come through without glitches and may thus miss messages.)
Look for the break. The easiest way to do this is to timestamp all arriving bytes. It is quite probably not required to buffer the incoming data in any way, as the data rate is very low (max. 2000 bytes/s). Nominally, the distance between the end of the last character of a frame and the start of the first character of the next frame is at least 13 bits. As receiving a character takes 10 bits, the delay between receiving the end of the last character in the previous message and the end of the first character of the next message is nominally at least 23 bits. In order to allow some tolerance for the bit timing, the limit could be set to, e.g., 17 bits. If the distance in time between "character received" interrupts exceeds this limit, the characters belong to different frames. Once you have detected the break, you may start collecting a new message. (This works almost according to the official spec.)
Do-it-yourself bit-by-bit. If you do not have a good synchronization between the slave and the master, you will have to determine the master clock using this method. The implementation is not very straightforward, but one example is: http://www.atmel.com/images/doc1637.pdf (I do not claim that one to be foolproof, it is rather simplistic.)
I would go with #3. Create an interrupt for incoming data and whenever data comes you compare the current timestamp (for which you need a counter) to the timestamp of the previous interrupt. If the inter-character time is too long, you start a new message, otherwise append to the old message. Then you may need double buffering for the messages (one you are collecting, another you are analyzing) to avoid very long interrupt routines.
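As a behavioural sketch of that idea (the real thing would live in the AVR receive interrupt; a PC's OS buffering makes the timing unreliable there), the gap test could look like this, with the baud rate, port name and the 17-bit-time limit taken from the discussion above:

```python
import time
import serial

BAUD = 19200
GAP_LIMIT = 17.0 / BAUD              # ~17 bit times of silence -> frame boundary

def read_frames(port="COM3"):        # hypothetical port name
    ser = serial.Serial(port, BAUD, timeout=0.1)
    frame = bytearray()
    last_rx = time.monotonic()
    while True:
        b = ser.read(1)
        now = time.monotonic()
        if b:
            if frame and (now - last_rx) > GAP_LIMIT:
                yield bytes(frame)   # gap too long: the previous frame is complete
                frame = bytearray()
            frame.extend(b)
            last_rx = now
```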
The actual implementation depends on the rest of the structure of your code. This shouldn't take much time.
And if you cannot make sure your clock is well enough synchronized (+- 4%) to the master clock, then you'll have to look at #4, which is probably much more instructive but quite tedious.
Your fundamental question is this (as I see it):
how can I test the performance of my code in order to see if buffer overflow is likely to occur?
Set a pin high at the start of the algorithm, set it low at the end. Look at it on an oscilloscope (I assume you have one of these - embedded development is very difficult without it.) You'll be able to measure the max time the algorithm takes, and also get some idea of the variability.
I have designed an asynchronous asymmetric FIFO using VHDL constructs. It is a generic FIFO with depth and prog_full as parameters. It has a 32-bit input and a 16-bit output data width.
You can find the fifo design link here.
The top level asymmetric FIFO (fifo_wrapper.vhd) is built upon a 32-bit asynchronous FIFO (async_fifo.vhd). This internal FIFO (async_fifo) is built using the logic from the generic FIFO on OpenCores (http://opencores.org/project,generic_fifos). I have added a simple testbench to try out this FIFO design.
BUT there is some issue with this design that I am not able to figure out. The FIFO design works perfectly fine when I simulate it, but when I synthesize it and run it along with my other design on hardware I sometimes get erroneous data. Maybe there is some corner case that I am not able to simulate, or is it something else?
That's why I would like anyone who needs this design to try it and let me know if he/she encounters any issues during simulation or after synthesis.
thanks
PS: kindly let me know if there is some other forum where I can put my design for public use. thanks
There are a number of issues to point out in relation to this asynchronous FIFO
design, based on the assumption that the write and read clocks are fully
asynchronous.
A (and probably THE) major problem is that the write side pointer (wp in async_fifo), which is a normal binary counter, is transferred and synchronized to the read side clock without any Gray encoding. So the different bits in the vector may arrive at different times in the read clock domain, and thus the write pointer value seen there can (and most likely will, from time to time) differ from the write side value. The comparison with the read pointer (rp) will therefore make no sense. Binary values that are transferred across clock domains should be Gray encoded before transfer and decoded on arrival. Also use synchronization with two flip-flop levels.
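Not VHDL, but a quick model of what the Gray encoding step buys you: convert the binary pointer to Gray code before it crosses the domain (so only one bit changes per increment), and convert back after the two-flop synchronizer on the other side:

```python
def bin_to_gray(value):
    return value ^ (value >> 1)

def gray_to_bin(gray, width):
    value = gray
    shift = 1
    while shift < width:
        value ^= value >> shift
        shift <<= 1
    return value

# every increment of the binary counter changes exactly one bit of the Gray code
for i in range(8):
    print(i, format(bin_to_gray(i), "03b"), gray_to_bin(bin_to_gray(i), 3))
```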
The two clocks (rd_clk and wr_clk) are assumed to be asynchronous, but there is only a single reset (rst), so timing may be violated when reset is deasserted, unless there are some additional requirements for clocking at the time of reset deassertion.
A similar issue exists with clear, where there is only one signal for use in two different clock domains.
A suggestion would be to use a port naming convention where the clock domain relationship of each port is clearly indicated in the name, like naming all the ports in the write clock domain wr_* (e.g. wr_clk_i, wr_clk_we_i, etc.) and all the ports in the read clock domain rd_*.
Reset is asserted low, so a naming of rst_n would be nice.
I can't access your code (firewall), so I'll just mention the general points in designing these, which might be of help to you and others.
to be completely clock safe, the write side should exchange its pointer to the read side using a fully safe asynchronous handshaking method using 2 meta-stability signalling chains.
The construct for this is a double-buffered register.
The write side registers its write pointer into a buffer, and asserts a valid signal high.
A meta-stability chain reclocks the buffer valid signal to the read clock domain
On the read clock side, once a transition to high of valid is seen at the output of the meta chain, the data in the write side buffer is reregistered onto another register on the read domain. This is ok, because it is known that the data in the buffer is stable. (because of the meta chain).
The read domain asserts an ack signal high.
Another meta-stability chain reclocks the ack signal to the write clock domain.
The write side awaits a transition of the ack signal at the output of the meta chain, once seen it deasserts its valid signal.
The read side awaits a transition of the valid signal at the output of the meta chain to low, once seen it deasserts its ack signal.
The write side awaits a transition of the ack signal at the output of the meta chain to low. The cycle is now complete.
The current write pointer, which may have moved on quite a bit now, may now be transferred again.
A similar approach is taken for transferring the read pointer to the write domain.
It can be seen that although this approach leads to a latency between the write pointer on the write/read side, and the read pointer on the read/write side, this latency can never lead to overflow. Instead it leads to a premature full on the write side, and a premature empty on the read side, which will eventually resolve once the pointers are next exchanged.
This approach is the only completely clock safe design for a fifo that doesn't depend on a-priori knowledge of the clock speeds. Gray coding is not required at all.
The other thing to note is that the logic for addressing / empty / full etc. needs to be duplicated on each clock domain.
In a serial communication link, what is the preferred message framing/sync method?
framing with SOF and escaping sequences, like in HDLC?
relying on using a header with length info and CRC?
It's an embedded system using DMA transfers of data from UART to memory.
I think the framing method with SOF is most attractive, but maybe the other one is good enough?
Does anyone have pros and cons for these two methods?
The following is based on UART serial experience, not research.
I have found fewer communication issues when the following are included - or in other words, do both SOF/EOF and (length - maybe)/check code. Frame (a sketch follows the list):
SOFrame
(Length maybe)
Data (address, to, from, type, sequence #, opcode, bytes, etc.)
CheckCode
EOFrame
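As a minimal sketch of that layout (using CRC-32 for the check code, per the comment further down about preferring 4 bytes): the SOF/EOF values and the 2-byte length field are assumptions, and byte-stuffing for SOF/EOF occurring inside the payload is deliberately left out to keep it short:

```python
import binascii
import struct

SOF = 0x7E   # assumed marker values
EOF = 0x7F

def build_frame(payload):
    crc = binascii.crc32(payload)
    return (bytes([SOF])
            + struct.pack("<H", len(payload))     # length, little-endian
            + payload
            + struct.pack("<I", crc)              # 4-byte check code
            + bytes([EOF]))

def parse_frame(frame):
    # returns the payload, or None if the frame fails any check
    if len(frame) < 8 or frame[0] != SOF or frame[-1] != EOF:
        return None
    length = struct.unpack_from("<H", frame, 1)[0]
    if len(frame) != length + 8:
        return None
    payload = frame[3:3 + length]
    crc = struct.unpack_from("<I", frame, 3 + length)[0]
    if binascii.crc32(payload) != crc:
        return None
    return payload

frame = build_frame(b"\x01\x02hello")
assert parse_frame(frame) == b"\x01\x02hello"
```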
Invariably, the received "frames" include:
good ones - no issues.
Corrupt due to the sender not sending a complete message (it hung, powered down, or produced a partial power-on transmission). (The receiver should time out stale incomplete messages.)
Corrupt due to noise or transmission interference (byte framing errors, parity errors, incorrect data).
Corrupt due to the receiver starting up in the middle of a sent message or missing a few bytes due to an input buffer over-run.
Shared bus collisions.
Break - is this legit in your system?
Whatever framing you use, ensure it is robust enough to address these message types: promptly validate #1, and rapidly identify 2-5 and become ready for the next frame.
SOF has the huge advantage of making it easy to get started again if the receiver is lost due to a previous bad frame, etc.
Length is good, but IMHO the least useful. It can limit throughput if the length needs to be at the beginning of a message. Some low-latency operations just do not know the length before they are ready to begin transmitting.
CRC: recommend more than 2 bytes. A short check-code does not improve things enough for me. I'd rather have no check code than a 1-byte one. If errors that occur from time to time are only caught by the check-code, I want something better than a 2-byte code's 99.999% of the time; I like a 4-byte code's 99.99999997%.
EOF so useful!
BTW: If your protocol is ASCII (instead of binary), I recommend not using CR or LF as the EOFrame. Maybe only use them out-of-frame, where they are not part of a message.
BTW2: If your receiver can auto-detect the baud, it saves on a lot of configuration issues.
BTW3: A sender could consider sending a "nothing" byte (before the SOF) to ensure proper SOF syncing.