Role of FIFO Buffer for COM Port in Windows

Can anyone here please explain the role of the FIFO Buffer checkbox (under the advanced COM port settings in Device Manager) in Windows?
How does checking/unchecking the FIFO buffer affect reading data from the COM port?
Many thanks in advance for helpful replies!

The original UART chip used in IBM-PC designs was the 8250. It could store just one received byte while the receiver was busy receiving the next byte. That puts a high demand on the responsiveness of the operating system's serial port driver, responding to the "data received" interrupt. It must be quick enough to read that byte before the receiver overwrites it. Not being quick enough causes an overrun error and irretrievable data loss. High interrupt rates are also detrimental.
That design was improved upon by the 16550 UART chip. It got a larger buffer, the FIFO, giving the OS more time to empty the buffer before an overrun could occur. The serial port driver can program it to generate an interrupt at a particular fill level, thus reducing the interrupt rate as well.
But chip designs have the same kind of problem that software has: the original 16550 had a bug in its FIFO implementation. It was fixed in the 16550A, version 1.1 in software speak.
Problem was, the driver could not tell whether the machine had the buggy version of the 16550 or a good one. Simple chips like that don't have a GetVersion() equivalent. So it provided a property page that lets the user turn the FIFO support off, thus bypassing the bug.
The odds that today you'll have the buggy version are zero. Turning the FIFO off is no longer necessary.
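For completeness: the FIFO checkbox only controls the UART's on-chip buffer; the serial driver's own software buffers are separate, and user-mode code can suggest their size. Below is a minimal Win32 sketch of opening and configuring a port (the port name, baud rate, and buffer sizes are illustrative, and error handling is trimmed).

    /* Minimal sketch, not a complete program.  The FIFO checkbox affects the
     * UART's hardware buffer; the driver's software receive buffer is separate
     * and its size can be suggested from user mode with SetupComm. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileA("\\\\.\\COM1", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            printf("CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        SetupComm(h, 4096, 4096);            /* suggested driver buffer sizes */

        DCB dcb = {0};
        dcb.DCBlength = sizeof dcb;
        GetCommState(h, &dcb);
        dcb.BaudRate = CBR_115200;
        dcb.ByteSize = 8;
        dcb.Parity   = NOPARITY;
        dcb.StopBits = ONESTOPBIT;
        SetCommState(h, &dcb);

        COMMTIMEOUTS to = {0};
        to.ReadIntervalTimeout = 50;         /* max ms allowed between bytes */
        SetCommTimeouts(h, &to);

        char  buf[256];
        DWORD got = 0;
        ReadFile(h, buf, sizeof buf, &got, NULL);
        /* A hardware overrun, if one ever happens, shows up as CE_OVERRUN in
         * the error mask returned by ClearCommError. */

        CloseHandle(h);
        return 0;
    }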

(libusb) Confusion about continuous isochronous USB streams

I am using a 32-bit AVR microcontroller (AT32UC3A3256) with High speed USB support. I want to stream data regularly from my PC to the device (without acknowledgement of the data), so exactly like a USB audio interface, except that the data I want to send isn't audio. Such an interface is described here: http://www.edn.com/design/consumer/4376143/Fundamentals-of-USB-Audio.
I am a bit confused about USB isochronous transfers. I understand how a single transfer works, but how and when is the next transfer scheduled? I want a continuous stream of data that is calculated a little ahead of time, but streamed with minimum latency and without interruptions (except for some occasional data loss). From my understanding, Windows is not a realtime OS, so I think the transfers should not be scheduled with a timer every x milliseconds, but rather driven by interrupts/events? Or maybe a buffer needs to be filled continuously with as much data as is available?
I think my question is still about the concepts of USB and not code-related, but if anyone wants to see my code, I am testing and modifying the "USB Vendor Class" example in the ASF framework of Atmel Studio, which contains the firmware source for the AVR and the source for the Windows EXE as well. The Windows example program uses libusb with a supplied driver.
Stephen -
You say "exactly like USB Audio"; but beware! The USB Audio class is very, very complicated because it implements a closed-loop servo system to establish long-term synchronisation between the PC and the audio device. You probably don't need all of that in your application.
To explain a bit more about long-term synchronisation: The audio codec at one end (e.g. the USB headphones) may run at a nominal 48KHz sampling rate, and the audio file at the other end (e.g. the PC) may be designed to offer 48 thousand samples per second, but the PC and the headphones are never going to run at exactly the same speed. Sooner or later there is going to be a buffer overrun or under-run. So the USB audio class implements a control pipe as well as the audio pipe(s). The control pipe is used to negotiate a slight speed-up or slow-down at one end, usually the Device end (e.g. headphones), to avoid data loss. That's why the USB descriptors for audio device class products are so incredibly complex.
If your application can tolerate a slight error in the speed at which data is delivered to the AVR from the PC, you can dispense with the closed-loop servo. That makes things much, much simpler.
You are absolutely right in assuming the need for long-term buffering when streaming data using isochronous pipes. A single isochronous transfer is pointless - you may as well use a bulk pipe for that. The whole reason for isochronous pipes is to handle data streaming. So a lot of look-ahead buffering has to be set up, just as you say.
I use LibUsbK for my iso transfers in product-specific applications which do not fit any preconceived USB classes. There is reasonably good documentation at libusbk for iso transfers. In short - you decide how many bytes per packet and how many packets per transfer. You decide how many buffers to pre-fill (I use five), and offer the libusbk driver the whole lot to start things going. Then you get callbacks as each of those buffers gets emptied by the driver, so you can fill them with new data. It works well for me, even though I have awkward sampling rates to deal with. In my case I set up a bunch of twenty-one packets where twenty of them carry 40 bytes and the twenty-first carries 44 bytes!
Hope that helps
- Tony
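To make the buffering pattern concrete, here is a hedged sketch using libusb's asynchronous API (which the question's example already uses) rather than libusbK: several isochronous OUT transfers are pre-filled and submitted, and each one is refilled and resubmitted from its completion callback. The endpoint address, packet size, and counts are placeholders, not values from the ASF example.

    /* Sketch only: pre-queue several isochronous OUT transfers with libusb
     * and refill each one in its completion callback.  The endpoint address
     * (0x02), packet size, and counts are placeholders. */
    #include <libusb-1.0/libusb.h>
    #include <string.h>

    #define NUM_TRANSFERS   5      /* buffers pre-filled before streaming starts */
    #define PKTS_PER_XFER   8      /* iso packets per transfer */
    #define PKT_SIZE        512    /* bytes per packet (depends on the endpoint) */

    static void fill_with_new_data(unsigned char *buf, int len)
    {
        memset(buf, 0, len);       /* stand-in for generating the next chunk */
    }

    static void LIBUSB_CALL iso_cb(struct libusb_transfer *xfer)
    {
        if (xfer->status != LIBUSB_TRANSFER_COMPLETED)
            return;                                /* handle errors/teardown here */
        fill_with_new_data(xfer->buffer, xfer->length);
        libusb_submit_transfer(xfer);              /* keep the stream going */
    }

    int start_stream(libusb_device_handle *dev)
    {
        static unsigned char bufs[NUM_TRANSFERS][PKTS_PER_XFER * PKT_SIZE];

        for (int i = 0; i < NUM_TRANSFERS; i++) {
            struct libusb_transfer *xfer = libusb_alloc_transfer(PKTS_PER_XFER);
            if (!xfer)
                return -1;
            fill_with_new_data(bufs[i], sizeof bufs[i]);
            libusb_fill_iso_transfer(xfer, dev, 0x02 /* OUT endpoint */,
                                     bufs[i], sizeof bufs[i], PKTS_PER_XFER,
                                     iso_cb, NULL, 0);
            libusb_set_iso_packet_lengths(xfer, PKT_SIZE);
            if (libusb_submit_transfer(xfer) != 0)
                return -1;
        }
        /* The caller then pumps libusb_handle_events() in a loop. */
        return 0;
    }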

Questions on how network cards in Windows work

I am trying to figure out how network cards work in Windows, and how the data is being relayed.
I have two hypotheses.
1.
Data is received by the network card.
The card then puts the data in an internal buffer, possibly a double buffer or a ring buffer.
The card accumulates data until some amount has been reached, upon which it sends an interrupt.
Windows copies the data from the card to the RAM and notifies appropriate handlers.
2.
Data is received.
The card puts the data in the RAM using DMA. (Does DMA guarantee that data will not be lost, or does the card still need its own buffer?)
The card fires an interrupt upon putting enough data in the RAM.
Windows receives the interrupt and copies or exposes the data to appropriate handlers.
Are either of my hypotheses correct?
Is there any message from the card or Windows if buffers are full?
In my Windows systems properties for my ethernet controller I can see properties called "Receive buffers" and "Transmit buffers", both are set to 256.
What does this mean?
Is there any good literature on this subject? (I have Tanenbaum's Modern Operating Systems, but it is not specifically related to Windows.)
Your question subsumes (at least!) three very, very broad topics:
1) How does a Layer 2 (Data Link) hardware device work?
2) How does it relate to the operating system's network stack?
... and ...
3) How does it relate to the operating system's kernel-level device driver?
The next link is actually 180 degrees away from your original question (the API is relatively high level, while your question pertains to the lowest software levels), but it wouldn't hurt to look at the .NET API for some perspective on "how things work":
http://msdn.microsoft.com/en-us/library/4as0wz7t.aspx
'Hope that helps ... at least a little bit...
PS:
Linux is a wealth of information about implementing a network stack: all of the kernel source and all of the device drivers are completely available, and very well documented.
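As a purely conceptual illustration (not Windows driver code; the field names are made up), hypothesis 2 is close to how modern NICs work, and the "Receive buffers: 256" setting typically sizes a ring of DMA descriptors along these lines:

    /* Conceptual sketch only - the field names are made up, not from any real
     * driver.  Modern NICs work roughly like hypothesis 2: the driver hands
     * the card a ring of descriptors, each pointing at a DMA-able buffer in
     * RAM; "Receive buffers: 256" typically sets the number of ring entries. */
    #include <stdint.h>

    #define RX_RING_SIZE 256

    struct rx_descriptor {
        uint64_t dma_addr;   /* physical address the NIC writes the frame to */
        uint16_t length;     /* filled in by the NIC: bytes received         */
        uint16_t status;     /* OWN/DONE bits: who owns this entry right now */
    };

    struct rx_ring {
        struct rx_descriptor desc[RX_RING_SIZE];
        void  *buffers[RX_RING_SIZE];   /* driver-side virtual addresses     */
        unsigned next_to_clean;         /* where the driver resumes after an
                                           interrupt (or a poll)             */
    };

    /* On an interrupt (often raised only after several frames, to reduce the
     * interrupt rate), the driver walks the ring from next_to_clean, hands
     * completed buffers up the network stack, re-arms the descriptors with
     * fresh buffers, and advances the hardware tail pointer.  If the ring
     * fills before the driver catches up, the NIC drops frames and bumps a
     * missed-packet counter. */

That dropped-frame counter is also the usual answer to "is there any message if the buffers are full": the data is simply discarded and a statistic is incremented.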

Will moving code into kernel space give more precise timing?

Background information:
I presently have a hardware device that connects to the USB port. The hardware device is responsible for sending out precise periodic messages onto the various networks that it, in turn, connects to. Inside the hardware device I have a couple of Microchip dsPICs. There are two modes of operation.
In the first, we send simple "jobs" down to the dsPICs, which, in turn, can send out the precise messages with 0.001 ms accuracy. This architecture is not ideal for more complex messaging where we need to send a periodic packet that changes based on events going on within the PC application. So we have a second mode of operation in which our PC application sends the periodic messages and the dsPICs simply convert and transmit them. All of this, by the way, is transparent to the end user of our software. Our hardware device is a test tool used in the automotive field.
Currently, we use a USB to serial chip from FTDI and the FTDI Windows drivers to interface the hardware to our PC software.
The problem is that in mode two, where we send the messages from the PC, the best we are able to achieve is around 1 ms accuracy on average, measured at the hardware. We are subject to Windows kernel pre-emption. I've tried a number of "tricks" to improve things, such as:
Making sure our reader and writer threads live on separate CPU affinities when possible.
Increasing the thread priority of the writer while reducing that of the reader.
Informing the user to turn off screen saver and other applications when using our software.
Replacing CreateThread calls with CreateTimerQueueTimer calls.
All our software is written in C/C++. I'm very familiar and comfortable with advanced Windows programming, such as I/O completion ports, overlapped I/O, lockless thread queues (really a design strategy), sockets, threads, semaphores, etc.
However, I know nothing about Windows driver development. I've read through a few papers on KMDF vs. UMDF vs. WDM.
I'm hoping a seasoned Windows kernel mode driver developer will respond here...
The next rev. of our hardware has the option to replace the FTDI chip and either use the dsPIC's USB interface or, possibly, port the open-source Linux FTDI stuff to Windows and continue to use the FTDI chip within our own custom driver. I think that by going to a kernel-mode driver on the PC side, I can send out periodic messages at more precise intervals without preemption, and possibly take advantage of DMA.
We have a competitor in our business who I think does exactly this sort of thing with their tools. As far as I know, user-space applications cannot schedule a thread any better than 1 ms. We currently use timeGetTime in a thread. I've experimented with timer queues (via CreateTimerQueueTimer) with no real improvement.
Is a WDM the correct approach to achieve more precise timing?
Our competitor is somehow achieving very precise timing from Windows-driven signals to their hardware, and they do load a kernel driver (.sys); their device runs over USB 2.0, as does ours.
If WDM is the way to go, can I get some advice on which kernel functions I should be studying for setting up the timings?
Thanks for reading
In kernel mode, you have the luxury of getting a DPC triggered in multiples of 100-nanosecond intervals without dealing with interrupts. A DPC cannot be preempted (i.e. interrupted by the thread scheduler), because the thread scheduler is itself a DPC. An interrupt can still preempt a DPC, though. So an interval value of 10 should do the trick for you to get a callback with the utmost precision.
However, at DPC level you don't have access to many features, such as paged memory or a specific thread's memory space, because DPCs run in an arbitrary context. It could be useful to defer processing to your own user-mode process's context using an APC, which has access to more features.
Kernel threads don't get any special treatment in terms of priority; they are the same as user threads from the scheduler's perspective. There are a couple of higher priority levels that kernel threads can use, but usually no kernel thread uses any of them. I don't think your bottleneck is thread priority. It doesn't matter how big your priority number is; having just one level above everyone else is enough to become the "god thread" that receives top priority. Having the highest priority doesn't mean you'll get continuous attention, either: the OS will still pause your thread to run others, so that quantum starvation does not occur.
Another note on Windows preemption behavior: Windows temporarily boosts a thread's priority when the thread is signaled by an asynchronous event (a GUI click, a timer firing, an I/O completion) to allow the completion code to finish its processing with less preemption. Using an async timer handler should give enough of a boost to prevent preemption for at least a quantum. I wonder why your code does not fall into that window. However, it seems you are not the only one having problems with timer precision: http://www.virtualdub.org/blog/pivot/entry.php?id=272
I agree with Paul on complexity of driver development, but as long as you have a good justification it's not rocket science, just more effort.
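To make the DPC idea concrete, here is a hedged WDM sketch of a periodic custom-timer DPC (illustrative interval, no error handling or teardown; not the poster's actual driver). Note that KeSetTimerEx's periodic interval is specified in whole milliseconds, so sub-millisecond periods are usually obtained by re-arming the timer from the DPC itself, and the achievable resolution is still bounded by the system timer resolution.

    /* Hedged sketch (WDM, not production code): a custom-timer DPC that
     * re-arms itself.  The 100 us interval is illustrative. */
    #include <ntddk.h>

    static KTIMER g_Timer;
    static KDPC   g_TimerDpc;

    /* Runs at DISPATCH_LEVEL in an arbitrary thread context: no paged memory,
     * no waiting - copy or queue the next periodic message here. */
    static VOID TimerDpcRoutine(PKDPC Dpc, PVOID Context,
                                PVOID SysArg1, PVOID SysArg2)
    {
        LARGE_INTEGER due;

        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(Context);
        UNREFERENCED_PARAMETER(SysArg1);
        UNREFERENCED_PARAMETER(SysArg2);

        /* ... kick off the next periodic message here ... */

        /* KeSetTimerEx's periodic Period argument is in whole milliseconds,
         * so sub-millisecond periods are re-armed manually. */
        due.QuadPart = -(100 * 10);      /* relative, 100 us in 100 ns units */
        KeSetTimer(&g_Timer, due, &g_TimerDpc);
    }

    VOID StartPeriodicTimer(VOID)
    {
        LARGE_INTEGER due;

        KeInitializeTimerEx(&g_Timer, SynchronizationTimer);
        KeInitializeDpc(&g_TimerDpc, TimerDpcRoutine, NULL);

        due.QuadPart = -(100 * 10);      /* first fire 100 us from now */
        KeSetTimer(&g_Timer, due, &g_TimerDpc);
    }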
This is one of the fundamental design aspects of the Windows kernel: code running at passive level (which includes all user-mode code) is subject to DPCs and interrupts taking up time, and if you want 1 µs accuracy, you're probably not going to get it with either a UMDF driver or other user-mode code.
However, writing a kernel driver is not a light or cheap undertaking; it is very difficult both to write and to ensure that it works on your customers' machines (a lot of testing is required). Getting it right will cost you significant engineering resources.
As a stopgap, I'd look into MMCSS for >= Vista (http://msdn.microsoft.com/en-us/library/windows/desktop/ms684247(v=vs.85).aspx), it may give you enough priority that you can be satisfied.
If you really want to go down the rabbit hole, KMDF is what you should be using. KMDF is a framework on top of WDM that represents a lot of codified best practices for drivers. Unless you're absolutely forced to use something else, KMDF is always the best way to go for drivers. And to be honest, you're almost certainly going to want to either contract with OSR (http://www.osr.com) or hire someone (several people?) experienced in writing Windows drivers.
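As a concrete illustration of the MMCSS stopgap, here is a hedged user-mode sketch; the "Pro Audio" task name and the thread body are placeholders, so pick whichever registered MMCSS profile fits your workload.

    /* Hedged sketch of the MMCSS stopgap (Vista and later): register the
     * time-critical writer thread with MMCSS so the scheduler gives it an
     * elevated, managed priority.  "Pro Audio" is just one of the profiles
     * registered under HKLM\...\Multimedia\SystemProfile\Tasks. */
    #include <windows.h>
    #include <avrt.h>                    /* link with avrt.lib */

    DWORD WINAPI WriterThread(LPVOID param)
    {
        DWORD  taskIndex = 0;
        HANDLE mmcss = AvSetMmThreadCharacteristicsA("Pro Audio", &taskIndex);
        (void)param;

        /* ... the existing periodic send loop (timeGetTime / timer queue) ... */

        if (mmcss)
            AvRevertMmThreadCharacteristics(mmcss);
        return 0;
    }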
Your focus on drivers and kernel performance misses the forest for the trees. The elephant in the room is the fact that full-speed USB 2 bus frames happen with a 1 ms period, and high-speed USB 2 microframes happen every 1/8 ms.
When you send data over full-speed USB (like for most FTDI chips), the best your application can hope for is that the data will get to the device sometime during the very next frame. With an unloaded USB bus, the transfer will happen very close to the start-of-frame. You'll observe it as 1ms granularity with small random deviation. This is precisely what you're seeing, and is not bad. For example, since all USB devices attached to the same host will see the frames at the same time, it's a simple way to synchronize multiple device clocks with better than microsecond precision. What your application can do is simply send a message that has not only the data, but some time in the near future when it should be sent out. Another issue with USB is that there are no guarantees as to when your requests for data transmission will be serviced. You're sharing a bus with other devices, after all.
I think you need to re-engineer your system and not depend on any sort of timing from the PC end. The application that runs on the PC should be assumed to be, timing-wise, limited to the performance of the human that interacts with it. Anything that requires guaranteed real-time performance must be on your dsPIC devices. Even the USB bus doesn't cut it, as you have no guarantees at all as to how soon your request will be scheduled on the bus.
Basically, if you want guaranteed real-time performance on Windows, then there must be no user mode involved -- it must all run in kernel mode, and you must use communications channels that are for your exclusive use (or you make them act that way, e.g. by filtering right on top of the USB host).

How do interrupts in multicore/multicpu machines work?

I recently started diving into low level OS programming. I am (very slowly) currently working through two older books, XINU and Build Your Own 32 Bit OS, as well as some resources suggested by the fine SO folks in my previous question, How to get started in operating system development.
It could just be that I haven't encountered it in any of those resources yet, probably because most of them were written before multicore systems became ubiquitous, but what I'm wondering is how interrupts work in a multicore/multiprocessor system.
For instance, say the DMA controller wants to signal that a file read operation is complete. Which processor/core acknowledges that an interrupt was signaled? Is it the processor/core that initiated the file read? Is it whichever processor/core gets to it first?
Looking at the IoConnectInterrupt function, you can find the ProcessorEnableMask parameter, which selects the CPUs that are allowed to run the Interrupt Service Routine (ISR).
Based on this, I assume that somewhere at the low level (see Adam's post) it's possible to specify where to route the interrupt.
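For illustration, here is a hedged WDM sketch of what that looks like; in a real driver the vector and IRQL come from the translated resources the PnP manager hands the driver, and the ISR body here is a placeholder.

    /* Hedged WDM sketch: ProcessorEnableMask is a CPU affinity bitmask, so
     * (KAFFINITY)1 asks for the ISR to run on CPU 0 only. */
    #include <ntddk.h>

    static BOOLEAN MyIsr(PKINTERRUPT Interrupt, PVOID ServiceContext)
    {
        UNREFERENCED_PARAMETER(Interrupt);
        UNREFERENCED_PARAMETER(ServiceContext);
        /* ... acknowledge the device, queue a DPC for the real work ... */
        return TRUE;               /* TRUE = our device raised the interrupt */
    }

    NTSTATUS ConnectIsrOnCpu0(PKINTERRUPT *IntObj, ULONG Vector, KIRQL Irql)
    {
        return IoConnectInterrupt(IntObj,
                                  MyIsr,
                                  NULL,           /* ServiceContext         */
                                  NULL,           /* optional spin lock     */
                                  Vector,
                                  Irql,
                                  Irql,           /* SynchronizeIrql        */
                                  LevelSensitive,
                                  TRUE,           /* ShareVector            */
                                  (KAFFINITY)1,   /* ProcessorEnableMask    */
                                  FALSE);         /* FloatingSave           */
    }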
On a side note, a file operation is not really related to interrupts and/or DMA directly. A file operation is a file-system concept that gets translated into something low-level depending on which bus your filesystem lives on: it might be an IDE or SATA disk, or it might even be USB storage. In the USB case a sector read is translated into three logical operations over the USB bus, and there will be an interrupt serviced by the USB host controller driver, but it's not really related to the original file read operation, which was probably split into smaller transactions anyway.
In the old days the interrupt went to all processors. In modern times some kinds of hardware can be programmed by an OS to send an interrupt to one particular processor. Of course if you could choose a processor dynamically instead of statically, you wouldn't want to send the interrupt to whichever processor initiated the I/O, you'd want to send it to whichever processor is least burdened at the present time and can most efficiently start the next I/O operation, and/or whichever processor is least burdened at the present time and can most efficiently execute the thread that was waiting for the results.

What simple method can I use to debug an embedded processor without serial port or video?

We have a small embedded system without any video or serial ports (i.e. we can't output text via printf).
We would like to track the progress of our code through the initialization sequence.
Are there some simple things we can do to help with this?
It is not running any OS, and the hardware platform is somewhat customizable.
The simplest, most scalable solution is state LEDs. Toggle LEDs based on actions, either encoding state in binary or toggling when certain actions occur, if you can narrow your focus.
The most powerful option is a hardware JTAG device. You don't even need to set breakpoints; simply being able to stop the application and inspect the state of memory may be enough. Note that some hardware platforms do not support "fancy" options such as memory watches or hardware breakpoints. The former is usually worked around by constantly stopping the processor and reading memory (which turns your 10 MHz system into a 1 kHz system), while the latter is sometimes implemented using code replacement (replacing the targeted instruction with a jump), which can mask other problems. Be aware of these issues and which embedded processors they apply to.
There are a few strategies you can employ to help with debugging:
If you have Output Pins available, you can hook them up to LEDs (or an oscilloscope) and toggle the output pins high/low to indicate that certain points have been reached in the code.
For example, 1 blink might mean the program loaded, 2 blinks that foozbar is initialized, 3 blinks that it is accepting inputs...
If you have multiple output lines available, you can use a 7 segment LED to convey more information (numbers/letters instead of blinks).
If you have the ability to read memory and have some RAM available, you can use sprintf to do printf-like debugging, but instead of going to a screen/serial port, the output is written to memory.
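A minimal sketch of that "printf into RAM" idea (the buffer size and names are illustrative; the buffer is later dumped with a debugger or a raw memory read):

    /* Format trace messages into a static ring buffer in RAM. */
    #include <stdarg.h>
    #include <stdio.h>

    #define TRACE_BUF_SIZE 1024

    static char         trace_buf[TRACE_BUF_SIZE];
    static unsigned int trace_pos;

    void trace(const char *fmt, ...)
    {
        char    line[64];
        va_list ap;
        int     n, i;

        va_start(ap, fmt);
        n = vsnprintf(line, sizeof line, fmt, ap);
        va_end(ap);

        for (i = 0; i < n && line[i] != '\0'; i++) {
            trace_buf[trace_pos] = line[i];
            trace_pos = (trace_pos + 1) % TRACE_BUF_SIZE;   /* wrap around */
        }
    }

    /* Usage during init:  trace("foozbar init done, status=%d\n", status); */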
It depends on the type of debugging that you're trying to do - in particular if you're after a temporary method of tracing or if you are trying to provide a tool that can be used as an indication of status during the life of the project (or product).
For one-off, in-depth source tracing and debugging, an in-circuit debugger (e.g. JTAG) can be very helpful. However, it is most helpful where your debugging requires setting breakpoints and investigating memory and registers, which makes it of little benefit when you are dealing with time-critical problems.
Where you need to determine program state without having a significant impact on the execution of your system, the use of LEDs connected to spare I/O pins will be helpful. These can also be used as the input to a digital storage oscilloscope (DSO) or logic analyzer.
This technique can be made more powerful by selecting unique patterns of pulses that will be identifiable on the DSO.
For a more versatile debugging tool, though, a serial port is a good solution. To save cost and PCB real estate you may find it useful to use a plug-in module that contains the RS-232 converters.
If you are trying to provide a longer-term indication of status as part of the normal operation of your product, LEDs are again a cheap and simple method. However, in this situation it is best to choose pulse patterns that are slow enough to be easily identified by visual inspection. That way, over time you will learn the particular pattern that represents "normal" behavior.
You can easily emulate serial communications (a UART) using bit-banging from the I/O pins of the system. Hook it up to one of the board's pins and attach an RS-232 converter there (TTL-to-RS-232 converters are easy to buy or build), which goes to your PC's serial port.
A JTAG debugger is also an option, though cumbersome to set up.
If you don't have JTAG, the LEDs suggested by the others are a great idea - although you do tend to end up in a test/rebuild cycle to try to track down the issue.
If you've got more time, spare hardware pins, and memory to spare, you could always bit-bang a low-speed serial interface. I've found that pretty useful in the past.
Others have suggested some pretty good ideas using output pins, so I won't repeat those, although they can be a very good and very cost-effective solution. If your budget and target processor support it, a hardware trace system (either an old-fashioned emulator or a fancy BDM with bus-snooping trace support) can be great for this type of thing. It's very expensive, though.
The idea of using a bit-banged software UART is nice, but there's some effort required in writing one, and you also need some free timers and interrupts. If your hardware has any other unused serial interface (SPI, I2C, ...), using that would be easier. With a small microcontroller you could convert that interface to RS-232.
If you have to go with bit-banging, making the interface a synchronous serial one might be a simpler alternative, as it wouldn't be as timing-critical.
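For reference, the transmit side of a bit-banged asynchronous UART (8N1) is only a few lines. This is a rough sketch assuming hypothetical TX_PIN_HIGH()/TX_PIN_LOW()/delay_bit_time() hooks that you map onto a spare GPIO and a delay calibrated for the chosen baud rate; interrupts may need to be masked around it so the bit timing isn't stretched.

    /* Rough sketch of a transmit-only bit-banged UART (8N1, LSB first).
     * The three hooks below are hypothetical - provide them for your
     * platform (e.g. ~104 us per bit at 9600 baud). */
    #include <stdint.h>

    extern void TX_PIN_HIGH(void);
    extern void TX_PIN_LOW(void);
    extern void delay_bit_time(void);

    void soft_uart_putc(uint8_t c)
    {
        int i;

        TX_PIN_LOW();                   /* start bit */
        delay_bit_time();

        for (i = 0; i < 8; i++) {       /* data bits, least significant first */
            if (c & (1u << i))
                TX_PIN_HIGH();
            else
                TX_PIN_LOW();
            delay_bit_time();
        }

        TX_PIN_HIGH();                  /* stop bit (line idles high) */
        delay_bit_time();
    }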
