How does the CPU know the PCI address space - linux-kernel

I understand that PCI and PCIe devices can be configured by the CPU (via code in the BIOS or OS) to respond to certain physical memory addresses by writing to specific areas of the device's configuration space.
In fact the Linux kernel has quite a complicated algorithm for doing this, taking into account many requirements of the device (memory alignment, DMA capabilities, etc.).
Seeing that software seems to be in control of if, when and where this memory is mapped, my question is: How can a piece of software control mapping of physical memory?
After this configuration, the PCI device will know to respond to the given address range, but how does the CPU know that it should go on the PCI bus for those specific addresses that were just dynamically decided?

The northbridge is programmed with the address range(s) that are to be routed to the memory controller(s).
All other addresses go to the external bus.

It is based on the address-mapping information the CPU has.
A 64-bit processor can in principle address 2^64 bytes of physical address space.
A typical machine today has something like 16 GB of RAM, which is only 2^34 bytes.
So all the devices the CPU can reach (including legacy PCI and PCIe devices) and their configuration space can be mapped
to addresses above this RAM physical address space.
Any I/O to this space can be forwarded to the respective device.
When the CPU finds that the config space it wants to access belongs to a PCI or PCIe device, it forwards the
access to the host bridge (BDF 00:00.0; run lspci on a box and you will see the host bridge at this BDF).
Once the host bridge determines that the target device is behind it, the access (which can be I/O or memory) is converted into the appropriate TLP request.
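To make the BDF addressing concrete, here is a minimal userspace sketch (x86 Linux, needs root) that reads the host bridge's vendor/device ID at 00:00.0 through the legacy configuration mechanism, ports 0xCF8/0xCFC. PCIe systems also expose config space as a memory-mapped ECAM region, which is not shown here.

    /* Minimal sketch: read the vendor/device ID of the host bridge (00:00.0)
     * through the legacy PCI configuration mechanism (ports 0xCF8/0xCFC).
     * Needs root and iopl(3); firmware and kernels use this mechanism or the
     * memory-mapped ECAM region rather than raw port I/O from userspace. */
    #include <stdio.h>
    #include <stdint.h>
    #include <sys/io.h>

    static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off)
    {
        /* Bit 31 = enable, bits 23:16 = bus, 15:11 = device, 10:8 = function,
         * 7:2 = dword-aligned register offset. */
        uint32_t addr = (1u << 31) | ((uint32_t)bus << 16) |
                        ((uint32_t)dev << 11) | ((uint32_t)fn << 8) | (off & 0xFC);
        outl(addr, 0xCF8);      /* tell the host bridge which register we want   */
        return inl(0xCFC);      /* the bridge answers the read with a config TLP */
    }

    int main(void)
    {
        if (iopl(3) != 0) { perror("iopl"); return 1; }
        uint32_t id = pci_cfg_read32(0, 0, 0, 0x00);   /* BDF 00:00.0, register 0 */
        printf("host bridge vendor=%04x device=%04x\n", id & 0xFFFF, id >> 16);
        return 0;
    }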

Related

Which instructions does a CPU use to communicate with PCIe cards?

I want to understand how a CPU works and so I want to know how it communicates with a PCIe card.
Which instructions does the CPU use to initialize a PCIe port and then read and write to it?
For example OUT or MOV.
A CPU mainly communicates with PCIe cards through memory ranges they expose. This memory may be small for network or sound cards, and very large for graphics cards. Integrated GPUs also have their own tiny memory but share most of the main memory. Most other cards also have read/write access to main memory.
To set up a PCIe device, its configuration space is written to. On x86, the BIOS or bootloader provides the location of this data. PCI devices are connected in a tree which may include hubs and bridges on larger computers, and which can be shown with lspci -t. Thunderbolt can even connect to external devices. This is why the OS needs to recursively "probe" the tree to find PCI devices and configure them.
Synchronization uses interrupts and ring buffers. The device can send a pre-negotiated interrupt to the CPU when it is done doing work. The CPU writes work into a ring buffer, and then writes to another memory location that holds the ring's head pointer. That location is on the device itself, so the device can watch for writes there and wake up when there is work to do.
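A rough sketch of that ring-buffer-plus-doorbell pattern, with a made-up descriptor layout and doorbell register offset (every real device defines its own):

    /* Sketch of the ring-buffer + doorbell pattern described above.
     * The descriptor layout and TX_TAIL_REG offset are hypothetical;
     * every real device defines its own. */
    #include <stdint.h>

    struct desc { uint64_t buf_dma_addr; uint32_t len; uint32_t flags; };

    #define RING_SIZE   256
    #define TX_TAIL_REG 0x18          /* hypothetical doorbell register offset */

    static struct desc tx_ring[RING_SIZE];   /* lives in system RAM, DMA-visible */
    static unsigned tx_tail;                 /* next free slot, owned by the CPU */

    static void post_packet(volatile uint8_t *mmio, uint64_t dma_addr, uint32_t len)
    {
        /* 1. Write the work item into the ring in ordinary memory. */
        tx_ring[tx_tail].buf_dma_addr = dma_addr;
        tx_ring[tx_tail].len          = len;
        tx_ring[tx_tail].flags        = 1;   /* e.g. "descriptor valid" */
        tx_tail = (tx_tail + 1) % RING_SIZE;

        /* 2. Ring the doorbell: a single MMIO write the device is watching.
         *    The device wakes up, DMAs the new descriptors and buffers, and
         *    raises an interrupt when it is done. */
        *(volatile uint32_t *)(mmio + TX_TAIL_REG) = tx_tail;
    }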
Most of the interaction for modern devices will use MOV instead of OUT. The I/O ports concept is very old and not very suitable for the massive amount of data on modern systems. Having devices expose their functionality as a type of memory instead of a separate mechanism allows vectorized variants of MOV to move 32 bytes or similar at a time. With graphics cards and modern network cards supporting offload, they can also use their own hardware to write results back to main memory when instructed to do so. The CPU can then read the results when it's free later, again using MOV.
Before this memory access works, the OS will need to set up the memory mapping properly. The memory mapping is set in the PCI configuration space as BARs. On the CPU side it is set up in the page tables. CPUs usually have caches to keep data locally because access to RAM is slower. This causes a problem when the data needs to get to a PCI device, so the OS will set certain memory as write-through or even uncacheable so this is ensured.
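In the Linux kernel this "map it once, then just use ordinary memory accesses" flow looks roughly like the fragment below: pci_resource_start() gives the BAR's address assigned during enumeration, ioremap() creates an uncached kernel mapping, and readl()/writel() compile down to MOV-style accesses. The REG_STATUS offset is invented for illustration.

    /* Rough sketch of how a Linux driver maps a PCI BAR and then talks to it
     * with plain memory accesses (readl/writel end up as MOVs on x86).
     * REG_STATUS is a hypothetical register offset. */
    #include <linux/pci.h>
    #include <linux/io.h>

    #define REG_STATUS 0x10    /* hypothetical device register */

    static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        void __iomem *regs;
        u32 status;

        if (pci_enable_device(pdev))
            return -ENODEV;

        /* BAR 0: address assigned by firmware/OS during enumeration. */
        regs = ioremap(pci_resource_start(pdev, 0), pci_resource_len(pdev, 0));
        if (!regs)
            return -ENOMEM;

        status = readl(regs + REG_STATUS);   /* MMIO read through an uncached mapping */
        writel(0x1, regs + REG_STATUS);      /* MMIO write */
        (void)status;

        iounmap(regs);
        return 0;
    }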
The term BAR is often marketed by GPU vendors (as "Resizable BAR"). What they are selling is the ability to map a larger region of the card's memory at a time. Without it, the OS has to work through a limited window, repeatedly remapping it to reach different parts of the card's memory. This illustrates how central MOV-style memory access is to talking to PCIe devices.

How are software interrupts different from port IN/OUT?

I am confused about port mapping and ISRs.
I am following an article which says that hardware ports are mapped from 0x00000000 to 0x000003FF,
and that we can talk to the microcontroller of that hardware using these port numbers with the IN and OUT instructions.
But then what is the IVT for? I read that the IVT contains the addresses of interrupt service routines.
Everything is mixed up in my mind.
When we use IN/OUT with a port number, does the CPU check the IVT, and how do the microcontrollers know their numbers?
When hardware ports are mapped to memory locations, this is called memory-mapped I/O.
Hardware is accessed by reading/writing data and commands in its registers. In memory-mapped I/O, instead of transmitting data/commands to hardware registers through a separate port mechanism, the CPU reads/writes at particular memory locations which are mapped to the hardware registers. Communication between the hardware and the CPU therefore happens via reads/writes to specific memory locations.
When a piece of hardware is installed, it is given a set of fixed memory locations for memory-mapped I/O, and these locations are recorded. Also, every device has an ISR whose address is stored in the IVT. When a particular device interrupts the CPU, the CPU finds the interrupting device's ISR address in the IVT. Once the CPU knows which device it needs to communicate with, it talks to that device via memory-mapped I/O, using the fixed memory locations that were allocated for it.
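To make the contrast concrete, here is a small sketch: a memory-mapped register is reached with ordinary loads/stores through its mapped address, while a port is reached only with IN/OUT. The MMIO address and status bit below are invented for illustration; the 0x60 port is the legacy keyboard controller data port.

    /* Contrast between memory-mapped I/O and port I/O.
     * The MMIO address 0xFEB00000 and the status bit are invented for
     * illustration; a real device documents its own registers. */
    #include <stdint.h>

    #define DEV_STATUS  (*(volatile uint32_t *)0xFEB00000u)  /* memory-mapped register */

    static inline uint8_t inb(uint16_t port)                 /* port-mapped I/O (x86)  */
    {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    void poll_device(void)
    {
        /* MMIO: just a normal memory load; the chipset routes it to the device. */
        while ((DEV_STATUS & 0x1) == 0)
            ;

        /* Port I/O: a separate address space reached only via IN/OUT. */
        uint8_t kbd = inb(0x60);   /* e.g. legacy keyboard controller data port */
        (void)kbd;
    }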

How does the CPU know where to look for a given physical memory address?

If I understand correctly, whenever a CPU is turned on it jumps to a manufacturer hardwired default physical memory address and starts executing the binary code present there. Naturally in a PC the default location maps to the ROM containing the BIOS.
Suppose I have a system with RAM installed as well, how does the CPU know that it is supposed to search for that specific address in the ROM chip containing the BIOS and not RAM's address?
Do the manufacturers of the Motherboard and the RAM have some standard or contract where they agree that the memory addresses of their hardware will never overlap?
I think you will see this diagram very often:
It summarizes several things (I assume you are familiar with the definitions of "physical addresses" and "virtual addresses"):
Programs running on the CPU always deal in "virtual addresses"; at that level you never encounter "physical addresses".
To address any DRAM outside the CPU, you need "physical addresses".
The same "virtual address" can map to different "physical addresses". E.g., the following instruction:
load eax, (virtual_address_XXXX) (load from memory into EAX)
The same assembly instruction, running under different processes, will access different parts of physical memory (this is done through the page tables + MMU).
Translation from virtual to physical addresses requires the MMU. Any electrical signals seen outside the CPU are always at the physical address level, so all hardware devices (e.g., memory) work with physical addresses.
https://www.slideserve.com/stacie/computer-architecture-memory-management-units
So to start off, your question in the title is not really correct: programs on the CPU do not see "physical addresses"; the hardware devices do. Everything gets translated by the MMU (or IOMMU), with translations cached by the TLB.
Note that some CPUs do not have an MMU; there, the "physical address" is the same value as the "virtual address".
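On Linux you can actually observe this translation from userspace: /proc/self/pagemap holds one 64-bit entry per virtual page, and (with root privileges) bits 0-54 give the physical frame number. A quick sketch:

    /* Look up the physical address behind a virtual address using
     * /proc/self/pagemap (one 64-bit entry per virtual page; needs root
     * on modern kernels to see the frame number). */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        long pagesize = sysconf(_SC_PAGESIZE);
        void *buf = malloc(pagesize);
        *(volatile char *)buf = 1;                 /* touch it so a frame is allocated */

        int fd = open("/proc/self/pagemap", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        uint64_t vaddr = (uint64_t)(uintptr_t)buf;
        uint64_t entry;
        off_t offset = (vaddr / pagesize) * sizeof(entry);
        if (pread(fd, &entry, sizeof(entry), offset) != sizeof(entry)) {
            perror("pread"); return 1;
        }

        if (entry & (1ULL << 63)) {                /* bit 63: page present */
            uint64_t pfn  = entry & ((1ULL << 55) - 1);   /* bits 0-54: frame number */
            uint64_t phys = pfn * pagesize + (vaddr % pagesize);
            printf("virtual %#llx -> physical %#llx\n",
                   (unsigned long long)vaddr, (unsigned long long)phys);
        } else {
            printf("page not present\n");
        }
        close(fd);
        return 0;
    }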
Let us take it step by step in a quick way:
When you reset your PC, it actually runs the code from BIOS.
The BIOS code is written by the motherboard manufacturer to bring up the board. After the BIOS finishes its job, the MBR (Master Boot Record) runs; it is a piece of code written at the head of your HDD (sector 0) when you installed your operating system, say Windows or Linux.
This piece of code is responsible for jumping to the partition holding your OS and starting it; it is called the bootloader. So,
BIOS(Non-Volatile Memory) -> MBR(HDD) -> OS
If you think about it, this is why there is a boot-sequence option inside the BIOS to choose which device the MBR should be read from: HDD, DVD, etc.
Read more about BIOS:
https://en.wikipedia.org/wiki/BIOS
Read more about MBR:
https://en.wikipedia.org/wiki/Master_boot_record

In X86 Platform, does the DMA operation mean to move data between MMIO addr space and system memory addr space?

On a modern x86/x86_64 platform, given the MMIO mechanism, do DMA operations move data between the MMIO address space and the memory address space? In the Linux kernel, I see that there is a dma_addr_t definition. Is this type used for MMIO addresses?
In general, a DMA operation just refers to a device other than the CPU accessing memory. On x86, there are not separate MMIO and RAM address spaces -- everything is unified. Some examples of typical DMA operations:
A network card might receive a packet from the network and use DMA to write the packet contents into the system's RAM.
A SATA controller might get a write command and use DMA to read the data to send to the hard disk from system RAM.
A graphics card might use DMA to read texture data from system RAM into its own video memory. The video memory is visible to the system CPU through a PCI BAR (MMIO), but that's not really relevant here.
The dma_addr_t type holds a "bus address" in Linux. The address that, for example, a PCI device (like a NIC / SATA controller / GPU) sees a given part of memory mapped at can be different than the address the CPU uses. So Linux has the abstraction of "DMA mapping" to handle this difference.
In the first example above, the network stack would allocate a buffer in RAM, and then pass it to a dma_map function to get a bus address that it hands to the NIC. The NIC would use that address to write the packet into memory.
In older x86 systems, there wasn't really any difference between the physical address that the CPU used and the bus address that external devices used, and the dma_map functions were pretty much NOPs. However, with modern technologies like VT-d, the bus address that a PCI device uses might be completely different than the CPU's physical address, and so it is important to do the DMA mapping and use a dma_addr_t for all addresses that are used by external DMA devices.
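In Linux driver terms, the first (NIC receive) example looks roughly like the sketch below: the driver allocates an ordinary kernel buffer, maps it with the DMA API, and hands the resulting dma_addr_t (the bus address, possibly IOMMU-translated) to the device.

    /* Simplified sketch of setting up a receive buffer for DMA in a Linux
     * driver: the CPU uses the kernel virtual address, the device is given
     * the dma_addr_t (bus address) returned by the DMA mapping API. */
    #include <linux/dma-mapping.h>
    #include <linux/slab.h>

    #define RX_BUF_SIZE 2048

    static int setup_rx_buffer(struct device *dev)
    {
        void *buf;
        dma_addr_t bus_addr;

        buf = kmalloc(RX_BUF_SIZE, GFP_KERNEL);
        if (!buf)
            return -ENOMEM;

        /* Get the address the *device* should use; behind an IOMMU (VT-d)
         * this can differ from the CPU physical address. */
        bus_addr = dma_map_single(dev, buf, RX_BUF_SIZE, DMA_FROM_DEVICE);
        if (dma_mapping_error(dev, bus_addr)) {
            kfree(buf);
            return -EIO;
        }

        /* Program bus_addr into the NIC's receive descriptor (device-specific,
         * not shown). When the packet has arrived and the device has signalled
         * completion: */
        dma_unmap_single(dev, bus_addr, RX_BUF_SIZE, DMA_FROM_DEVICE);
        /* ...now the CPU can safely read the packet data in buf. */
        kfree(buf);
        return 0;
    }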

What happens when we plug a piece of hardware into a computer system?

When we plug a piece of hardware into a computer system, say a NIC (Network Interface Card) or a sound card, what happens under the hood so that we could use that piece of hardware?
I can think of the following 2 scenarios, correct me if I am wrong.
If the hardware has its own memory chips, someone will arrange for a range of address space to map to those memory chips.
If the hardware doesn't have its own memory chips, someone will allocate a range of addresses in the main memory of the computer system to accommodate that hardware.
I am not sure whether the aforementioned someone is the operating system or the CPU.
And another question: Does hardware always need some memory to work?
Am I right on this?
Many thanks.
The world is not that easily defined.
First off, look at the hardware and what it does. Take a mouse for example: it is trying to deliver x and y coordinate changes and button status. That can be as little as a few bytes, or even a single byte where two bits define what the other 6 mean: update x, update y, update buttons, that kind of thing. The memory requirement is just enough to hold those bytes. Take a serial mouse: there is already at least one byte of storage in the serial port, so do you need any more? USB is another story; just to speak USB back and forth takes memory for the messages, but that memory can live in the USB logic, so do you need any more for such a small amount of information?
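To make the "single byte, two bits say what the other six mean" idea concrete, here is a tiny sketch decoding such a (hypothetical) one-byte report format:

    /* Decoding the hypothetical one-byte mouse report described above:
     * the top two bits select what the remaining six bits mean. */
    #include <stdint.h>

    enum { UPD_X = 0, UPD_Y = 1, UPD_BUTTONS = 2 };

    struct mouse_state { int x, y; uint8_t buttons; };

    static void handle_report(struct mouse_state *m, uint8_t report)
    {
        uint8_t kind = report >> 6;              /* two type bits              */
        uint8_t raw  = report & 0x3F;            /* six payload bits           */
        int delta    = (raw ^ 0x20) - 0x20;      /* sign-extend a 6-bit value  */

        switch (kind) {
        case UPD_X:       m->x += delta;     break;
        case UPD_Y:       m->y += delta;     break;
        case UPD_BUTTONS: m->buttons = raw;  break;
        }
    }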
NICs and sound cards are another category and more interesting. For NICs you have packets of data coming and going, and you need some buffer space (ring, FIFO, etc.) to allow multiple packets to be in flight in both directions, for efficiency, interrupt latency and the like. You also need registers; these have their storage in the hardware/logic itself and don't need main memory. In both the sound card case and the NIC case you can either have memory on the board with the hardware, or have it use system memory that it can access semi-directly (DMA, etc.). Sound cards are similar but different in that you can think of the packets as fixed-size and continuous. Basically you need to ping-pong buffers to or from the card at some rate: 44100 Hz, 16 bits per sample, stereo is 44100 * 2 * 2 = 176400 bytes per second. Say, for example, the driver/software prepares the next 8192 bytes at a time; while the hardware is playing the pong buffer, software is filling the ping buffer; when the hardware drains the pong buffer it indicates this to the software, starts draining the ping buffer, and the software refills the pong buffer.
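The ping-pong arithmetic in code form: at 44100 Hz, 16-bit stereo the stream runs at 176400 bytes per second, so an 8192-byte buffer lasts roughly 46 ms. A sketch of the refill loop, with the hardware notification and sample generation reduced to placeholder functions:

    /* Ping-pong buffering sketch for the audio example above.
     * wait_for_buffer_done() and fill_next_audio() are placeholders for the
     * real interrupt/notification and sample-producing code. */
    #include <stdint.h>
    #include <string.h>

    #define BUF_BYTES 8192                       /* ~46 ms at 44100 Hz * 2 ch * 2 bytes */

    static uint8_t pingpong[2][BUF_BYTES];       /* hardware plays one half, software fills the other */

    static int wait_for_buffer_done(void)        /* placeholder: which half just drained? */
    {
        static int last = 1;
        last ^= 1;                               /* pretend the halves drain alternately */
        return last;
    }

    static void fill_next_audio(uint8_t *dst, int len)
    {
        memset(dst, 0, len);                     /* placeholder: produce silence */
    }

    void audio_loop(void)
    {
        fill_next_audio(pingpong[0], BUF_BYTES); /* prime both halves before starting */
        fill_next_audio(pingpong[1], BUF_BYTES);

        for (;;) {
            int drained = wait_for_buffer_done();          /* hardware finished this half... */
            fill_next_audio(pingpong[drained], BUF_BYTES); /* ...so refill it while the other plays */
        }
    }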
All interesting stuff, but to get to the point: with the NIC or sound card you could have as little as two registers, an address/command register and a data register. Quite painful, but that scheme was often used in the old days in restricted systems, and is still used. Or you could go to the other extreme and have all of the memory on the device mapped into the system memory address space, with each register having its own unique address. With audio you don't really need random access to the memory, so you don't really need this; with graphics you do. With NIC cards you could argue either way: do you leave the packet on the NIC, or do you make a copy in system memory where you can have a much larger software buffer/ring, freeing the hardware's limited buffer/ring? If the packet stays on the NIC you would want random access; if not, you don't.
For ISA/PCI/PCIe, etc. on x86 systems, the hardware is usually mapped directly into the processor's memory space. So for 32-bit systems you can address up to 4GB, but even if you have 4GB worth of RAM, some of that memory you cannot get to because video cards, hardware registers, PCI, etc. consume some of that address space (registers or memory or both, whatever the hardware was designed to use). As distasteful as it may appear today, this is why there was a distinction between I/O-mapped I/O and memory-mapped I/O on x86 systems; it's another address bit if you will. You could have all of your registers in I/O space and not lose memory space, and map memory into nice neat aligned chunks, requiring less of your RAM to be displaced by hardware.

Either way, ISA had basically vendor-specific ways of mapping into the memory space available to the ISA bus: jumpers, interesting detection schemes with programmable address decoders, etc. PCI and its successors came up with something more standard. When the computer boots (talking about x86 machines in general now), the BIOS goes out on the PCIe bus and looks to see who is out there, by talking to the config space that is mapped per card in a known place. Using a known protocol, the cards indicate the amount of memory they require; the BIOS then allocates chunks of the processor's flat memory space for each device and tells the device what address and how much it has been allocated. It is certainly possible for the operating system to re-do or override this, but typically the BIOS does this discovery for the system, and the operating system simply reads the config space on each device, which includes the vendor ID and device ID, and then knows how and where to talk to the device. For this memory space I believe the hardware contains the memory/registers. For general system memory to DMA to/from, I believe the operating system and device drivers have to provide the mechanism for allocating that system memory and then telling the hardware what address to DMA to/from.
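The "known protocol" by which a card indicates how much memory it wants is the BAR sizing handshake: write all 1s to a Base Address Register, read it back, and the bits the device forced to zero reveal the size. A sketch, with cfg_read32/cfg_write32 standing in for whatever config-space accessor the platform uses:

    /* BAR sizing sketch: the firmware/OS writes all 1s to a Base Address
     * Register, reads it back, and the read-only zero bits give the size.
     * cfg_read32/cfg_write32 are placeholders for a real config-space
     * accessor (port 0xCF8/0xCFC, ECAM, ...). */
    #include <stdint.h>

    extern uint32_t cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off);
    extern void     cfg_write32(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t off, uint32_t val);

    /* Returns the size in bytes requested by a 32-bit memory BAR (0 if unimplemented). */
    uint32_t bar_size(uint8_t bus, uint8_t dev, uint8_t fn, int bar)
    {
        uint8_t  off  = 0x10 + 4 * bar;              /* BAR0 lives at config offset 0x10 */
        uint32_t orig = cfg_read32(bus, dev, fn, off);

        cfg_write32(bus, dev, fn, off, 0xFFFFFFFF);  /* ask: which bits are writable? */
        uint32_t probed = cfg_read32(bus, dev, fn, off);
        cfg_write32(bus, dev, fn, off, orig);        /* restore the original value */

        if (probed == 0)
            return 0;                                /* BAR not implemented */
        probed &= ~0xFu;                             /* mask off the low type/flag bits */
        return ~probed + 1;                          /* lowest writable bit == size */
    }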
The x86 way of doing it, with the BIOS handling the ugly details and with the system memory address space and PCI address space being the same address space, has its pros and cons. A pro is that the hardware can easily DMA to/from system memory, because it does not have to know how to get from PCIe address space to system address space. The negative is the case of a 32-bit system, where PCIe normally consumes up to 1GB of address space and the DRAM you bought for that hole is not available. The transition from 32 bit to 64 bit is slow and painful: the BIOSes and PCIe chips still limit things to the lower 4GB and limit all the PCIe devices to 1GB, even if the chipset has a 64-bit mode, and this is with 64-bit processors and more than 4GB of RAM. The MMU allows for fragmented memory, so that is not an issue. Slowly the chipsets and BIOSes are catching up, but it is taking time.
USB: these are serial, mostly master/slave protocols. Like a serial port, but bigger and faster and more complicated; and like a serial port, both the master and slave hardware need to have RAM to store the messages, very much like a NIC. Like a NIC, in theory, you can be register-based and pull the memory out sequentially, or have it mapped into system memory and have random access to it, etc. Think of it this way: the USB interface can/does sit on a PCIe interface even if it is on the motherboard. A number of devices are PCIe devices on your motherboard even if they do not have an actual PCIe connector with a card. And they fall into the PCIe category of how you might design your interface and who has what memory where.
Some devices like video cards have lots of memory on board, more than it is practical (or at least it is painful) to map into PCIe memory space all at once. These would want to use a sliding-window type arrangement: tell the video card you want to look at address 0x0000 in the video card's address space, but your window may only be 0x1000 bytes (for example) in system/PCIe space. When you want to look at addresses 0x1000 to 0x1FFF in video memory space, you write some register to move the window, and then the same PCIe memory space accesses different memory on the video card.
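In code, that sliding-window arrangement looks roughly like this; the window-base register and the 0x1000-byte aperture are illustrative, not from any real card:

    /* Sliding-window access to a large on-card memory through a small
     * fixed aperture, as described above. WINDOW_BASE_REG and the 0x1000
     * byte aperture are illustrative, not from a real card. */
    #include <stdint.h>

    #define WINDOW_SIZE     0x1000u
    #define WINDOW_BASE_REG 0x04u    /* hypothetical register: which card address the aperture shows */

    struct card {
        volatile uint32_t *regs;     /* mapped register BAR                           */
        volatile uint8_t  *window;   /* mapped 0x1000-byte aperture into card memory  */
    };

    /* Read one byte from anywhere in the card's (much larger) memory. */
    uint8_t card_read8(struct card *c, uint32_t card_addr)
    {
        /* Point the window at the 0x1000-aligned region containing card_addr... */
        c->regs[WINDOW_BASE_REG / 4] = card_addr & ~(WINDOW_SIZE - 1);
        /* ...then the same PCIe addresses now show a different part of card memory. */
        return c->window[card_addr & (WINDOW_SIZE - 1)];
    }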
x86, being the dominant architecture, has this overlapped PCIe and system memory addressing thing, but that is not how the whole world works. Other solutions include having independent system and PCIe address spaces, with sliding windows like the video card problem above, allowing you to have, say, a 2GB video card mapped flat in PCIe space while limiting the window into PCIe space to something not painful for the host system.
Hardware designs are as varied as software designs. Take 100 software engineers and give them a specification and you may get as many as 100 different solutions. Same with hardware: give them a specification and you may get 100 different PCIe designs. Some standards are in place to limit that, and/or cloning, where you want to make a Sound Blaster compatible card and you don't change the interface; but given the freedom software has, the hardware can and will vary, and with the number of types of PCIe devices (sound, hard disk controllers, video, USB, networking, etc.) you will get that many different mixes of registers and addressable memory.
Sorry for the long answer, hope this helps. I would dig through Linux and/or BSD sources for device drivers, along with programmers' reference manuals if you can get access to them, and see how different hardware designs use register and memory space, and see which designs are painful for the software folks and which are elegant and well done.
The answer depends on what the interface of the hardware is: is it USB or PCI-Express? (There could be other connectivity methods too; USB and PCI-Express are the most common.)
With USB
The host learns about the newly arrived device by reading its descriptors and loads the appropriate device driver. The device presents its ID, which is used for Plug and Play, and the host assigns it an address. Once the device driver kicks in, it configures the device and makes it ready for data transfer. Data transfer is done using IRPs; the transfer technique and how the IRPs are loaded depend on whether the transfer is isochronous, bulk, or another mode.
So to answer your second question: yes, the hardware needs some memory to work. The device driver and the USB host controller driver together set up the memory on the host for the USB device; the USB device driver then communicates with/drives the device accordingly.
With PCI-Express
It is similar; sorry, I do not have hands-on experience with PCI-Express.
