diff between IO-APIC-level and PCI-MSI-X [closed] - linux-kernel

In /proc/interrupts I see IO-APIC-level (or IO-APIC-edge) on one system, and on my other system I see PCI-MSI-X, both for the same device, eth0.
I don't understand the difference between the two. Can I change PCI-MSI-X to IO-APIC? Which kernel module, file, configuration, or proc entry does this belong to?
Is it safe to distribute the interrupts to all available CPU cores?

MSI-X interrupts are message-based interrupts, and are the sole method available for PCIe devices to signal interrupts. Instead of asserting a hardware line to signal an interrupt, the device writes a single word to a preconfigured address. That address is either a control register in the CPU, or a register in the PCIe root port which emulates the legacy interrupt system. You're seeing both of those cases.
The BIOS configures the board to send its MSI interrupts to the root port, which emulates INTx interrupts, which get to the CPU via the routing in the APIC. When the OS supports MSI directly, the device driver can reprogram the MSI destination address, so that the interrupt message reaches the CPU interrupt registers directly.
MSI-X differs from MSI mainly in supporting more interrupt vectors (one for each network port on a dual-port NIC, for example, or separate vectors for TX and RX).
MSI performs better than INTx emulation, since emulated INTx interrupts are shared across devices behind the same PCIe bridge, though this really only matters for devices that generate lots of interrupts, which modern NICs actually don't. The question you should be asking is, "why is one of my systems failing to enable MSI-X interrupts on my network card?"
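To the practical parts of the question: whether eth0 shows up as PCI-MSI-X or IO-APIC-level is decided by the NIC driver and the kernel (CONFIG_PCI_MSI), not by a proc or conf file you can edit; booting with pci=nomsi forces the legacy path, and spreading interrupts across cores is normally done through /proc/irq/<N>/smp_affinity or the irqbalance daemon. As a sketch (with hypothetical demo_* names) of how a current driver makes that choice at probe time:

    #include <linux/pci.h>
    #include <linux/interrupt.h>

    static irqreturn_t demo_irq(int irq, void *dev_id)
    {
        /* Acknowledge the device and schedule the real work (NAPI etc.). */
        return IRQ_HANDLED;
    }

    static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        int nvec, err;

        err = pci_enable_device(pdev);
        if (err)
            return err;

        /* Ask for up to 4 MSI-X vectors; fall back to MSI, then legacy INTx.
         * The fallback is what produces IO-APIC-level in /proc/interrupts. */
        nvec = pci_alloc_irq_vectors(pdev, 1, 4,
                                     PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_LEGACY);
        if (nvec < 0)
            return nvec;

        err = request_irq(pci_irq_vector(pdev, 0), demo_irq, 0, "demo", pdev);
        if (err)
            pci_free_irq_vectors(pdev);
        return err;
    }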
References:
http://lwn.net/Articles/44139/
http://en.wikipedia.org/wiki/Message_Signaled_Interrupts

Related

Serial driver in userspace

Is it possible to write a serial driver in user space, yet have the device appear as a regular serial device, /dev/ttyS0, in the system?
The full story is that we have a PCI Express FPGA, and there are several devices behind it: serial ports, CAN bus, I2C, MDIO, etc.
I thought of implementing it with uio_pci_generic, yet the serial driver is a bit problematic because we would rather it appear as a regular serial device, /dev/ttyS0.
If the above is not possible: is it possible to implement some of the PCI devices in the kernel (serial) and others in user space? Is that problematic in terms of interrupts?
Thanks for any idea.
Yes, you can do this using a pty. The user mode driver opens the master end of the pty and the application that wants to use the serial port opens the slave end. Search for Linux pty.
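A minimal sketch of the master side, using only standard POSIX calls (how you publish the slave name to applications, e.g. via a symlink, is up to you and hypothetical here):

    /* User-space "driver" end of the pty: hold the master, shuttle bytes
     * between it and the real hardware behind the PCIe FPGA. */
    #define _XOPEN_SOURCE 600
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int master = posix_openpt(O_RDWR | O_NOCTTY);
        if (master < 0) { perror("posix_openpt"); return 1; }
        if (grantpt(master) < 0 || unlockpt(master) < 0) {
            perror("grantpt/unlockpt"); return 1;
        }
        /* Applications open this path (or a symlink you create to it). */
        printf("slave side is %s\n", ptsname(master));

        char buf[256];
        ssize_t n;
        while ((n = read(master, buf, sizeof buf)) > 0) {
            /* Forward buf[0..n) to the FPGA serial channel here and write
             * whatever the hardware returns back to the master. */
            write(master, buf, n);   /* placeholder: plain echo */
        }
        close(master);
        return 0;
    }

One limitation worth noting: the slave will appear as /dev/pts/N rather than /dev/ttyS0, so a symlink or udev rule is needed if applications insist on a fixed name.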
Anywhere you need to handle interrupts, the code has to be written for kernel space, not user space. Interrupt handlers must run in atomic context, and user space cannot provide atomic context. Second, if you need to write a HAL layer, it also has to be written in kernel space.
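Since the question mentions uio_pci_generic, it is worth noting that even there the interrupt itself is taken and acknowledged on the kernel side; user space only blocks until it has happened. A sketch of that wait loop, assuming the card is bound to uio_pci_generic and appears as /dev/uio0 (a hypothetical node name):

    /* Wait for interrupts delivered through UIO. Each read() blocks until the
     * kernel-side handler has run and returns the total event count so far. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/uio0", O_RDWR);
        if (fd < 0) { perror("open /dev/uio0"); return 1; }

        uint32_t count;
        while (read(fd, &count, sizeof count) == sizeof count) {
            /* Service the device through its mmap()ed BARs here, then
             * re-enable its interrupt (the exact mechanism depends on the
             * UIO driver in use). */
            printf("interrupt event #%u\n", (unsigned)count);
        }
        close(fd);
        return 0;
    }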

Abusing the USB mass storage class for driverless I/O [closed]

I am developing a USB-based peripheral device for use on Windows desktop systems and would prefer to avoid a driver installation step, in part due to the resources required to develop and sign custom drivers, and in part because third-party drivers have proved a significant stumbling block for users.
This suggests the use of a standard USB device class. HID is straightforward and flexible but has poor throughput, aggravated by MCU-specific limitations. Instead I am evaluating a scheme of impersonating a mass-storage device.
The trick is to report the metadata for a FAT filesystem containing a hidden device I/O file, through which the interface application then communicates using raw, unbuffered file I/O. All data outside of the hard-wired I/O file sectors is reloaded into RAM at enumeration and ignored.
Thus far, this has worked surprisingly smoothly, with fast I/O and enumeration through what is presumably a well-optimized path on all systems tested, and no privilege elevation. However, this is clearly an abuse of the system and may fall over if Windows decides to, say, detect that the I/O data is being read back inconsistently, defragment the cluster chain, reformat as exFAT, etc.
My question is whether such a scenario is known to occur in practice, or is likely to occur in the near future. Has such a scheme been attempted in the past? Will the quantity of dodgy USB mass-storage devices out there form an effective shield against the operating system getting fancy?
Finally, are there any other standard USB classes or approaches that I might consider as a more reliable alternative? Windows 10 has finally added standard CDC support, yet supporting earlier versions would involve bypassing signed driver installation (and a history of BSODs, random disconnects, and enumeration failures has left me wary of virtual serial devices).
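For reference, the "raw, unbuffered file I/O" described in the question would boil down to something like the following sketch on the host side; the drive letter, file name, and the assumed 512-byte logical sector size are made up for illustration:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* FILE_FLAG_NO_BUFFERING bypasses the system cache; offsets, lengths
         * and the buffer address must then be sector-aligned. */
        HANDLE h = CreateFileA("E:\\device_io.bin",
                               GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING,
                               FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH,
                               NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        /* VirtualAlloc returns page-aligned memory, which satisfies the
         * sector-alignment requirement. */
        void *buf = VirtualAlloc(NULL, 512, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
        DWORD done = 0;
        if (buf && ReadFile(h, buf, 512, &done, NULL))
            printf("read %lu bytes from the device window\n", done);

        if (buf) VirtualFree(buf, 0, MEM_RELEASE);
        CloseHandle(h);
        return 0;
    }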
Consider using MS OS 2.0 Descriptors. Basically, you can add some special requests and descriptors to your USB device that tell Windows 8.1 and later to automatically use WinUSB for the device. Once WinUSB is installed, you can use libraries like libusb and libusbp to access the device, or just use the WinUSB API directly.
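Once WinUSB is bound that way, host-side access from C can go through libusb; this sketch uses made-up VID/PID values and a made-up bulk IN endpoint (the header path may be simply <libusb.h> depending on how the library is installed):

    #include <stdio.h>
    #include <libusb-1.0/libusb.h>

    int main(void)
    {
        libusb_context *ctx = NULL;
        if (libusb_init(&ctx) != 0) return 1;

        /* 0x1234/0x5678 are hypothetical vendor/product IDs. */
        libusb_device_handle *h =
            libusb_open_device_with_vid_pid(ctx, 0x1234, 0x5678);
        if (!h) { libusb_exit(ctx); return 1; }

        if (libusb_claim_interface(h, 0) == 0) {
            unsigned char buf[64];
            int got = 0;
            /* Bulk IN transfer from endpoint 0x81 with a 1 s timeout. */
            if (libusb_bulk_transfer(h, 0x81, buf, sizeof buf, &got, 1000) == 0)
                printf("received %d bytes\n", got);
            libusb_release_interface(h, 0);
        }
        libusb_close(h);
        libusb_exit(ctx);
        return 0;
    }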
For users who are on Windows 7 or below, you could supply a signed INF-only driver, which is not too hard or expensive; see the document I wrote about it. Or you could just tell them to use a utility like Zadig to install WinUSB. Zadig gets around the driver signing requirement by inventing its own root certificate and installing it as a trusted root certificate on the user's machine.

Are PCIe device drivers beneficial if using Linux as a bootloader for bare-metal code?

I am developing an embedded system on a PowerPC processor and need to communicate with an FPGA via PCIe. I wish to use Linux/embedded Linux as a bootloader to leverage its PCIe initialization code and driver API for simplified PCIe driver development. However, in the end I want to be running bare-metal code (no OS running), so I am looking at using PetitBoot/kexec to jump from Linux to my own code.
Is this possible?
My current understanding of PCIe drivers leads me to believe that once the device is initialized, so long as I have a pointer to the address space, I should be able to simply execute MMIO R/W operations directly to the memory space. So even if kexec overwrites the driver code I should be able to use the device because the driver has done its job already.
Is this correct?
If not, what are my alternatives?
I don't think this approach would be a good idea. Drivers written with the Linux OS in mind assume that all of the OS's resources are available, not just memory allocations. For example, a driver may configure interrupt handlers, but once the OS is no longer available, your hardware may hang because nothing is acknowledging and servicing its interrupt requests.
I'm skeptical of the memory initialization as well. I suppose you could theoretically allocate some DMA memory and pass the resulting physical address to your bare-metal application as it takes over, but the whole process seems sketchy. It would be very difficult to make sure everything in Linux is shut down cleanly while leaving the PCIe subsystem running. You'll have to look at the driver's shutdown routines and see what they do to the card, to make sure they don't shut down the device and leave it unresponsive to your bare-metal code.
I would suggest that you instead go through the Linux-based driver and use it as a guide to construct a new bare-metal driver. Copy the initialization code that you need, and leave out the Linux-specific configuration details.
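To illustrate that direction, here is a heavily simplified sketch of what the bare-metal MMIO side might look like; the BAR address, register offsets, and the eieio barrier for a PowerPC target are assumptions, not values from the question:

    #include <stdint.h>

    #define FPGA_BAR0_PHYS  0x80000000UL  /* hypothetical, lifted from the Linux driver */
    #define REG_CTRL        0x0000
    #define REG_STATUS      0x0004

    /* Plain volatile accesses, no OS services; assumes the MMU is off or the
     * BAR is identity-mapped after kexec. */
    static inline void mmio_write32(uintptr_t base, uintptr_t off, uint32_t val)
    {
        *(volatile uint32_t *)(base + off) = val;
        __asm__ volatile("eieio" ::: "memory");   /* order the store (PowerPC) */
    }

    static inline uint32_t mmio_read32(uintptr_t base, uintptr_t off)
    {
        uint32_t v = *(volatile uint32_t *)(base + off);
        __asm__ volatile("eieio" ::: "memory");
        return v;
    }

    void fpga_poll_example(void)
    {
        mmio_write32(FPGA_BAR0_PHYS, REG_CTRL, 0x1);            /* kick something off */
        while ((mmio_read32(FPGA_BAR0_PHYS, REG_STATUS) & 0x1) == 0)
            ;                                                   /* poll instead of IRQs */
    }

Note that this only works if the PCIe link, bridges, and BARs set up by Linux are left untouched across kexec, which is exactly the fragile part discussed above.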

What's the difference between "COM", "USB", "Serial Port"? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 8 years ago.
Improve this question
I am confused about these three concepts.
My understanding is that "serial port" usually means an RS-232-compatible port (RS = Recommended Standard). USB stands for Universal Serial Bus. Since its name contains "serial", does it support RS-232? And what does the "Universal" mean?
And what does "COM port" mean?
ADD 1
Some understanding from Hans' answer:
To reduce effort, device manufacturers usually make their device able to behave like a serial-port device as well. This relies on the fact that many operating systems and language libraries already include serial-port communication support, though such support is not comparable to a real, matching device driver.
ADD 2
A good reference doc about this is the Serial Port HOW-TO.
And by the way, the Linux Documentation Project is really useful.
A serial port is a type of device that uses a UART chip, a Universal Asynchronous Receiver/Transmitter. It was one of the two basic ways to interface with a computer in the olden days; parallel ports were the other. Serial is simple to hook up and doesn't need a lot of wires. Parallel was useful if you wanted to go fast, typically 8 times faster than serial, but its cables and connectors were expensive. Parallel I/O has completely disappeared from computer designs, overtaken by tremendous advances in bus transceivers, the kind of chip that can transmit an electrical signal down a wire.
COM comes from MS-DOS; it is a device name, short for "COMmunication port". Computers in the 1980s usually had two serial ports, labeled COM1 and COM2 on the back of the machine. The name was carried forward into Windows: almost any driver that simulates a serial port will create a device with "COM" in its name. LPT was the device name for parallel ports, short for "Line PrinTer".
RS-232 was an electrical signaling standard for serial ports. It is the simplest one, with very low demands on the device, supporting just a point-to-point connection. RS-422 and RS-485 were not uncommon; they use a twisted pair for each signal, providing much higher noise immunity and allowing multiple devices to be connected to each other.
USB means Universal Serial Bus. It was made possible by the ability to integrate a microprocessor, a few millimeters in size and costing a few dimes, into devices, and it replaced legacy ports in the late 1990s. It is Universal because it can support many different kinds of devices, from coffee-pot warmers to disk drives to Wi-Fi adapters to audio playback. It is Serial because it only requires 4 wires. And it is a Bus: you can plug a USB device into an arbitrary port. It competed with FireWire, a very similar approach championed by Apple, but won by a landslide.
The only reason serial ports are still relevant on Windows these days is that a USB device requires a custom device driver. Device manufacturers do not like writing and supporting drivers, so they often take a shortcut and make their driver emulate a legacy serial-port device. Programmers can then use the legacy serial-port support built into the operating system and just about any language runtime library. It is rather imperfect support, by the way: these emulators never support plug-and-play well, discovering the specific serial port to open is very difficult, and the drivers often misbehave in impossible-to-diagnose ways when you yank the USB device out while your program is using it.
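As an illustration of that legacy support, this is roughly what opening such an emulated port looks like from C on Linux with termios; the device name /dev/ttyUSB0 and the baud rate are only examples:

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <termios.h>

    int main(void)
    {
        /* A USB-to-serial adapter typically appears as /dev/ttyUSB0 or
         * /dev/ttyACM0 and is opened exactly like a classic /dev/ttyS0. */
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); return 1; }
        cfmakeraw(&tio);                   /* raw 8-bit data, no line discipline */
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);
        if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); return 1; }

        const char msg[] = "hello\r\n";
        write(fd, msg, sizeof msg - 1);
        close(fd);
        return 0;
    }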
USB stands for Universal Serial Bus, not Universal Serial Port. The term "serial port" simply means that the data is transferred one bit at a time over a single signal path; in that sense even Ethernet is serial in nature. The word "serial" in both terms implies no relationship other than the width of the data path.
You are right that the term serial port in the context of a PC normally means an RS-232 port, but there are other serial-port standards, such as RS-422 and RS-485, often used in industrial applications. What these have in common is that they are implemented using a UART (Universal Asynchronous Receiver/Transmitter).
The term Universal in USB merely reflects the fact that it is not a specific device interface such as the dedicated mouse or keyboard ports found on older hardware. Similarly, a UART-based serial port is not device-specific, reflected by the U in UART.
USB differs significantly from RS-232 in a number of ways: it is a master/slave (or host/device in USB terminology) rather than a peer-to-peer arrangement, so the USB device cannot initiate communication and must be polled by the host. USB includes a low-voltage power supply to allow devices with moderate power requirements to be powered by the bus, which is also why USB ports can be used for charging battery-powered devices. USB is significantly more complex than RS-232, which defines only the physical (hardware) layer, whereas USB requires a complete software protocol stack.
The term COM is just a device name prefix used in Windows (and previously MS-DOS) for a serial (UART) port. Short for "communications", you can for example open a COM port as a stream I/O device with, say, FILE* port = fopen("COM1", "r+");.
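For completeness, the usual Win32 route (and the one required for ports above COM9, which need the \\.\ prefix) looks roughly like this sketch; the port name and baud rate are examples:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        /* Configure 115200 baud, 8 data bits, no parity, 1 stop bit. */
        DCB dcb = {0};
        dcb.DCBlength = sizeof dcb;
        if (GetCommState(h, &dcb)) {
            dcb.BaudRate = CBR_115200;
            dcb.ByteSize = 8;
            dcb.Parity   = NOPARITY;
            dcb.StopBits = ONESTOPBIT;
            SetCommState(h, &dcb);
        }

        DWORD written = 0;
        WriteFile(h, "hello\r\n", 7, &written, NULL);
        CloseHandle(h);
        return 0;
    }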

Keeping device functionality inside device controller rather than OS kernel. What are consequences?

A friend of mine asked me this question in class and I could not answer it. He asked:
Since we know the kernel controls the physical hardware via device drivers, what if all this functionality were kept inside the device controller itself rather than the kernel managing it? What would be the consequences of such a scenario, good or bad?
I searched online for this question but could not find information about this scenario. Maybe I'm not googling the right keywords.
Your insight into this will help me clear up my concepts.
Please answer.
Thanks.
Your question seems to propose the elimination of the "device driver" by "keeping" "control (of) the physical hardware ... inside the device controller". The premise for this seems to be:
kernel controls the physical hardware via device drivers.
That description of a device driver is similar to what I've seen written for end-user comprehension rather than from a developer's perspective. The end user is aware of the device, and it is the device driver that takes that abstraction and can control that device down to the specific control bits of each device port.
But a device driver is responsible for mundane housekeeping tasks such as:
maintaining device status and availability;
configuring the device for operation;
managing data flow, setting-up/tearing-down data transfers, copying data between user space and kernel space;
handling interrupts and exceptions.
These tasks are integral to a device driver, and they cannot be transferred out of the purview of the kernel driver to a peripheral device (the skeletal sketch below shows where each of them typically lives).
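A skeletal, entirely hypothetical Linux character-driver fragment, just to illustrate where those housekeeping tasks sit:

    #include <linux/module.h>
    #include <linux/interrupt.h>
    #include <linux/uaccess.h>
    #include <linux/fs.h>

    static char device_buf[256];        /* data most recently produced by the device */

    /* Handling interrupts and exceptions. */
    static irqreturn_t demo_isr(int irq, void *dev_id)
    {
        /* acknowledge the device, pull its data into device_buf */
        return IRQ_HANDLED;
    }

    /* Managing data flow: copying between kernel space and user space. */
    static ssize_t demo_read(struct file *f, char __user *ubuf,
                             size_t len, loff_t *off)
    {
        size_t n = len < sizeof(device_buf) ? len : sizeof(device_buf);

        if (copy_to_user(ubuf, device_buf, n))
            return -EFAULT;
        return n;
    }

    static const struct file_operations demo_fops = {
        .owner = THIS_MODULE,
        .read  = demo_read,
    };

    /* probe() would configure the device for operation and call
     * request_irq(irq, demo_isr, 0, "demo", NULL); remove() undoes it. */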
Sometimes the device driver can only try to manage the device, rather than fully control it, for example, a NIC driver during a packet flood.
There is simply no possibility that you can eliminate a device driver no matter how much of "all this functionality is kept inside the device controller itself". And there would still be control directives/commands issued from the device driver to the peripheral.
The hardware device in question should be a computer peripheral device, not an autonomous robot device. The device should be designed to operate with a computer. Whatever interface there is between processor and device should be suitable for the task. If the peripheral is made more "intelligent", then perhaps the CPU can be unburdened and a high-level command interface can replace low-level sub-operation directives. But only "some" functionality can be transferred to the peripheral, not "all".
