I have some knowledge of the boot loader, but I need to know how the BIOS code jumps to the boot loader, when it initializes the I/O devices, and what functionality the BIOS provides.
Is the BIOS responsible for selecting the boot sector, and if so, how does it select it?
Also, please explain how the reset vector points directly to the BIOS region.
I have some knowledge of the boot loader, but I need to know how the BIOS code jumps to the boot loader.
When booting from a hard disk, the BIOS reads the boot loader from the Master Boot Record, which is the first sector of the hard disk, and loads it into memory at 0000:7C00 (or possibly 07C0:0000 -- in real mode, these refer to the same address).
When does it initialize the I/O devices, and what functionality does the BIOS provide?
Intel's Minimal Intel Architecture Boot Loader white paper gives a concise outline of the basic functions that the BIOS performs as it initializes the system. The major bullet points, in the recommended order of initialization, are as follows:
Power-Up (Reset Vector) Handling
Mode Selection
Preparation for Memory Initialization
Memory Initialization
Post Memory Initialization
Miscellaneous Platform Enabling
Interrupt Enabling
Processor Interrupt Modes
Interrupt Vector Table (IVT)
Interrupt Descriptor Table (IDT)
Timers
Memory Caching Control
Processor Discovery and Initialization
I/O Devices
PCI Device Discovery
Booting a Legacy OS
Memory Map
Non-Volatile (NV) Storage
Note: The Memory Map and Non-Volatile Storage sections are listed in the outline after the Booting a Legacy OS section, but, as stated in the document and as you would expect, mapping system memory occurs before handing control to the boot loader and booting the OS. Additionally, some accesses to non-volatile memory also occur during BIOS initialization (for example, the BIOS may read settings stored in the CMOS to determine the primary boot device).
Is the BIOS responsible for selecting the boot sector, and how does it select it?
The BIOS locates the boot sector based on the type of the device.
In most circumstances (except for CDs/DVDs) the boot sector will be LBA 0 of the device (or equivalently, CHS 0,0,1) -- see the specs of the particular device to find out what addressing mode it uses. For CDs/DVDs, the boot sector is at LBA 17.
For more information, see the quoted article on OSDev.
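To make "selecting the boot sector" a bit more concrete for the legacy hard-disk case: the firmware reads the 512-byte sector at LBA 0 and only treats it as bootable if its last two bytes are 0x55, 0xAA. The following is a rough userspace sketch (not from the OSDev article) that performs the same check on a disk image:

    #include <stdint.h>
    #include <stdio.h>

    /* Rough sketch: read LBA 0 of a disk image and check the 0x55AA boot
     * signature, the same test a legacy BIOS applies before loading the
     * sector to 0000:7C00 and jumping to it. */
    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s disk.img\n", argv[0]);
            return 1;
        }

        FILE *f = fopen(argv[1], "rb");
        if (!f) {
            perror("fopen");
            return 1;
        }

        uint8_t sector[512];
        size_t n = fread(sector, 1, sizeof sector, f);
        fclose(f);
        if (n != sizeof sector) {
            fprintf(stderr, "could not read a full sector at LBA 0\n");
            return 1;
        }

        if (sector[510] == 0x55 && sector[511] == 0xAA)
            puts("bootable: 0x55AA signature present");
        else
            puts("not bootable: signature missing");
        return 0;
    }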
Also, please explain how the reset vector points directly to the BIOS region.
The CPU is designed that way. Upon assertion of the reset signal, the CPU sets its registers, including the instruction pointer, to known values. For example, on most x86 systems this is FFFFFFF0h, and that address maps into the BIOS ROM. This answer provides a more thorough explanation.
I am confused about port mapping and the ISR.
I am following an article which says that hardware ports are mapped to memory from 0x00000000 to 0x000003FF,
so we can talk to the microcontroller of that hardware using these port numbers with the IN and OUT instructions.
But then what is the IVT? I have read that the IVT contains the addresses of the interrupt service routines.
Everything is mixed up in my mind.
When we use IN/OUT with a port number, does the CPU check the IVT, and how do microcontrollers know their numbers?
When hardware ports are mapped to memory locations, this is called memory-mapped I/O.
Hardware is accessed by reading/writing data and commands in its registers. In memory-mapped I/O, instead of transmitting data or commands to hardware registers through a separate port address space, the CPU reads/writes at particular memory locations which are mapped to hardware registers. Therefore, communication between the hardware and the CPU happens via reads and writes to specific memory locations.
When a hardware device is installed, it is given a set of fixed memory locations for the purpose of memory-mapped I/O, and these memory locations are recorded. Also, every hardware device has its ISR, whose address is stored in the IVT. When a particular device interrupts the CPU, the CPU finds the interrupting device's ISR address in the IVT. Once the CPU has identified which device it needs to communicate with, it communicates with that device via memory-mapped I/O, using the fixed memory locations that were allocated for that device.
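To make the memory-mapped side concrete, here is a minimal bare-metal-style sketch; the register addresses and the status bit below are placeholders, not taken from the question or any particular board:

    #include <stdint.h>

    /* Hypothetical device registers exposed via memory-mapped I/O. */
    #define UART_DATA_REG   ((volatile uint32_t *)0x101F1000u)  /* assumed address */
    #define UART_STATUS_REG ((volatile uint32_t *)0x101F1018u)  /* assumed address */
    #define TX_FIFO_FULL    (1u << 5)                           /* assumed status bit */

    static void uart_putc(char c)
    {
        /* Reading the status register is an ordinary load, but it reaches the
         * device rather than RAM, because the address is mapped to hardware. */
        while (*UART_STATUS_REG & TX_FIFO_FULL)
            ;
        /* Likewise, this store lands in the device's data register.  With
         * port-mapped I/O you would use the IN/OUT instructions and a port
         * number instead of a plain pointer dereference. */
        *UART_DATA_REG = (uint32_t)c;
    }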
I've got a Xilinx Zynq 7000-based board with a peripheral in the FPGA fabric that has DMA capability (on an AXI bus). We've developed a circuit and are running Linux on the ARM cores. We're having performance problems accessing a DMA buffer from user space after it's been filled by hardware.
Summary:
We have pre-reserved at boot time a section of DRAM for use as a large DMA buffer. We're apparently using the wrong APIs to map this buffer, because it appears to be uncached, and the access speed is terrible.
Using it even as a bounce-buffer is untenably slow due to horrible performance. IIUC, ARM caches are not DMA coherent, so I would really appreciate some insight on how to do the following:
Map a region of DRAM into the kernel virtual address space but ensure that it is cacheable.
Ensure that mapping it into userspace doesn't also have an undesirable effect, even if that requires we provide an mmap call by our own driver.
Explicitly invalidate a region of physical memory from the cache hierarchy before doing a DMA, to ensure coherency.
More info:
I've been trying to research this thoroughly before asking. Unfortunately, this being an ARM SoC/FPGA, there's very little information available on this, so I have to ask the experts directly.
Since this is an SoC, a lot of stuff is hard-coded for u-boot. For instance, the kernel and a ramdisk are loaded to specific places in DRAM before handing control over to the kernel. We've taken advantage of this to reserve a 64MB section of DRAM for a DMA buffer (it does need to be that big, which is why we pre-reserve it). There isn't any worry about conflicting memory types or the kernel stomping on this memory, because the boot parameters tell the kernel what region of DRAM it has control over.
Initially, we tried to map this physical address range into kernel space using ioremap, but that appears to mark the region uncacheable, and the access speed is horrible, even if we try to use memcpy to make it a bounce buffer. We also use /dev/mem to map this into userspace, and I've timed memcpy at around 70 MB/s.
Based on a fair amount of searching on this topic, it appears that although half the people out there want to use ioremap like this (which is probably where we got the idea from), ioremap is not supposed to be used for this purpose and that there are DMA-related APIs that should be used instead. Unfortunately, it appears that DMA buffer allocation is totally dynamic, and I haven't figured out how to tell it, "here's a physical address already allocated -- use that."
One document I looked at is this one, but it's way too x86 and PC-centric:
https://www.kernel.org/doc/Documentation/DMA-API-HOWTO.txt
And this question also comes up at the top of my searches, but there's no real answer:
get the physical address of a buffer under Linux
Looking at the standard calls, dma_set_mask_and_coherent and family won't take a pre-defined address and wants a device structure for PCI. I don't have such a structure, because this is an ARM SoC without PCI. I could manually populate such a structure, but that smells to me like abusing the API, not using it as intended.
BTW: This is a ring buffer, where we DMA data blocks into different offsets, but we align to cache line boundaries, so there is no risk of false sharing.
Thank you a million for any help you can provide!
UPDATE: It appears that there's no such thing as a cacheable DMA buffer on ARM if you do it the normal way. Maybe if I don't make the ioremap call, the region won't be marked as uncacheable, but then I have to figure out how to do cache management on ARM, which I can't figure out. One of the problems is that memcpy in userspace appears to really suck. Is there a memcpy implementation that's optimized for uncached memory I can use? Maybe I could write one. I have to figure out if this processor has Neon.
Have you tried implementing your own char device with an mmap() method remapping your buffer as cacheable (by means of remap_pfn_range())?
I believe you need a driver that implements mmap() if you want the mapping to be cached.
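A minimal sketch of such a char device is below, assuming the reserved region is described by the made-up constants RESERVED_PHYS_ADDR and RESERVED_SIZE (substitute your boot-time reservation); offset handling and most error paths are omitted:

    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/mm.h>

    #define RESERVED_PHYS_ADDR 0x30000000UL   /* hypothetical boot-time reservation */
    #define RESERVED_SIZE      (64UL << 20)   /* 64 MB, as in the question */

    static int reserved_mmap(struct file *filp, struct vm_area_struct *vma)
    {
        unsigned long size = vma->vm_end - vma->vm_start;

        if (size > RESERVED_SIZE)
            return -EINVAL;

        /* No pgprot_noncached()/pgprot_writecombine() here, so the mapping
         * keeps normal (cacheable) memory attributes.  Cache maintenance
         * around DMA then becomes your responsibility. */
        return remap_pfn_range(vma, vma->vm_start,
                               RESERVED_PHYS_ADDR >> PAGE_SHIFT,
                               size, vma->vm_page_prot);
    }

    static const struct file_operations reserved_fops = {
        .owner = THIS_MODULE,
        .mmap  = reserved_mmap,
    };

    static struct miscdevice reserved_misc = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "reserved_dma",              /* shows up as /dev/reserved_dma */
        .fops  = &reserved_fops,
    };

    static int __init reserved_init(void)
    {
        return misc_register(&reserved_misc);
    }

    static void __exit reserved_exit(void)
    {
        misc_deregister(&reserved_misc);
    }

    module_init(reserved_init);
    module_exit(reserved_exit);
    MODULE_LICENSE("GPL");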
We use two device drivers for this: portalmem and zynqportal. In the Connectal Project, we call the connection between user space software and FPGA logic a "portal". These drivers require dma-buf, which has been stable for us since Linux kernel version 3.8.x.
The portalmem driver provides an ioctl to allocate a reference-counted chunk of memory and returns a file descriptor associated with that memory. This driver implements dma-buf sharing. It also implements mmap() so that user-space applications can access the memory.
At allocation time, the application may choose cached or uncached mapping of the memory. On x86, the mapping is always cached. Our implementation of mmap() currently starts at line 173 of the portalmem driver. If the mapping is uncached, it modifies vma->vm_page_prot using pgprot_writecombine(), enabling buffering of writes but disabling caching.
The portalmem driver also provides an ioctl to invalidate and optionally write back data cache lines.
The portalmem driver has no knowledge of the FPGA. For that, we use the zynqportal driver, which provides an ioctl for transferring a translation table to the FPGA so that we can use logically contiguous addresses on the FPGA and translate them to the actual DMA addresses. The allocation scheme used by portalmem is designed to produce compact translation tables.
We use the same portalmem driver with pcieportal for PCI Express attached FPGAs, with no change to the user software.
The Zynq has NEON instructions, and an assembly-language implementation of memcpy using NEON instructions, with accesses aligned on cache-line boundaries (32 bytes), will achieve rates of 300 MB/s or higher.
I struggled with this for some time with udmabuf and discovered the answer was as simple as adding dma-coherent; to its entry in the device tree. I saw a dramatic speedup in access time from this simple step, though I still need to add code to invalidate/flush whenever I transfer ownership from/to the device.
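For that ownership hand-off, the kernel's streaming DMA API is one option; the sketch below is illustrative only (the struct device would come from your platform driver, the buffer must be a kernel lowmem virtual address for dma_map_single to be valid, and the helper names are made up):

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    /* Illustrative helpers for a buffer the FPGA writes into (DMA_FROM_DEVICE). */

    static dma_addr_t buf_map_for_device(struct device *dev, void *vaddr, size_t len)
    {
        /* Performs the initial cache maintenance and returns the bus address
         * to program into the FPGA's DMA engine; the caller should check the
         * result with dma_mapping_error(). */
        return dma_map_single(dev, vaddr, len, DMA_FROM_DEVICE);
    }

    static void buf_give_to_device(struct device *dev, dma_addr_t handle, size_t len)
    {
        /* Device regains ownership before the next DMA transfer. */
        dma_sync_single_for_device(dev, handle, len, DMA_FROM_DEVICE);
    }

    static void buf_claim_for_cpu(struct device *dev, dma_addr_t handle, size_t len)
    {
        /* Invalidates the relevant cache lines so the CPU sees what the
         * device just wrote. */
        dma_sync_single_for_cpu(dev, handle, len, DMA_FROM_DEVICE);
    }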
Why do we need the memory management unit?
It seems the only task of the memory management unit is to convert virtual addresses to physical addresses. Can't this be done in software? Why do we need another hardware device to do this?
The MMU (Memory Management Unit) is a hardware component, available on most hardware platforms, that translates virtual addresses to physical addresses. This translation brings the following benefits:
Swap: your system can handle more memory than is physically available. For example, on a 32-bit architecture, the system "sees" 4 GB of memory, regardless of the amount of physical memory available. If you use more memory than is actually available, memory pages are swapped out onto the swap disk.
Memory protection: the MMU enforces memory protection by preventing a user-mode task to access the portion of memory owned by other tasks.
Relocation: each task can use addresses at a certain offset (e.g., for variables), regardless of the real addresses assigned at run-time.
It is possible to partially implement such a translation mechanism in software. For example, for relocation you can have a look at the implementation of GCC's -fPIC option. However, a software mechanism can't provide memory protection (which, in turn, affects system security and reliability).
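As a small illustration of translation and relocation (not part of the original answer): after fork(), parent and child can use the exact same virtual address while the MMU maps it to different physical pages, so each process sees its own value:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int *value = malloc(sizeof *value);
        *value = 1;

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: same virtual address as the parent, but a private
             * physical copy once it writes to it. */
            *value = 2;
            printf("child : addr=%p value=%d\n", (void *)value, *value);
            exit(0);
        }

        wait(NULL);
        /* Parent: unaffected by the child's write, despite the identical
         * virtual address printed above. */
        printf("parent: addr=%p value=%d\n", (void *)value, *value);
        free(value);
        return 0;
    }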
The reason for an MMU component in a CPU is to make the logical-to-physical address translation transparent to the executing process. Doing it in software would require stopping to process every memory access a process makes. Plus, you'd have the chicken-and-egg problem: if memory translation is done by software, who does that software's memory translation?
Am I correct when I say that addresses of memory mapped registers are always physical addresses?
If yes then how does MMU deal with these addresses and decide not to do virtual to physical translations for them?
The MMU doesn't decide anything. It merely maps addresses according to what it has been told by the OS, translating virtual addresses to physical ones, and/or interrupts the application program if the mapping for a particular virtual address is marked as "invalid" or is somehow inconsistent with the operation of the current machine instruction (e.g., for instruction fetches, "not executable"; for stores, "read only"; etc.).
The operating system establishes a set of rules and conventions that ensure that applications cannot create grief for one another. If writing to memory-mapped I/O devices is OK for this OS, then the OS will set MMU mappings (e.g., page map registers) to allow it; otherwise it will not set up MMU pages that map to I/O devices.
For most general-purpose OSes, allowing arbitrary programs to write to I/O registers is the definition of "causes grief", and they simply never set up such a mapping. This is how Windows behaves from the point of view of user processes.
For special-purpose OSes, having separate processes share I/O pages may be fine, especially if the processes running are trusted (e.g., part of the OS, or having passed some certification authority that asserts good quality). Then multiple trusted processes might share memory-mapped I/O devices safely and conveniently. Even untrusted processes can be run on such an OS; it simply doesn't give them access to I/O.
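As a concrete, Linux-specific illustration of the OS choosing to set up such a mapping for a privileged process, something along these lines is possible; the register address below is made up:

    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REG_PHYS_ADDR 0x40000000UL   /* hypothetical, page-aligned MMIO base */

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) {
            perror("open /dev/mem");     /* unprivileged processes are refused here */
            return 1;
        }

        /* The kernel decides whether to program the page tables so that this
         * virtual page points at the physical register page. */
        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, REG_PHYS_ADDR);
        if (regs == MAP_FAILED) {
            perror("mmap");
            close(fd);
            return 1;
        }

        printf("first register reads 0x%08x\n", (unsigned)regs[0]);

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }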
Back in 1972, I built a unique virtual memory 16 bit minicomputer. The MMU had two kinds of page mappings: mapping of virtual pages to physical (as you'd expect), and mapping of a page to a single 32 byte I/O device. What this means is that the OS can hand any process any device (not critical to OS function) safely.
In particular, it meant that each I/O driver had its own address space; if it screwed up, no problem. You could debug device drivers while running the OS without fear. (Windows suffered for years from I/O driver corruption destroying the system; I think it still does, but their quality-control "trustedness checking" is wicked strong now.)
Alas, it wasn't a commercial success. I was forced to go into software to make a living :-{
You are correct.
All registers and/or memory locations within a processor's memory map are physical addresses.
Virtual-to-physical translations are done by the MMU and only occur within contiguous blocks of memory from which code can be executed, i.e., RAM or internal flash. No virtual-to-physical translations occur when other parts of the memory map are accessed, because they do not interact with the MMU.
The more I read about low-level languages like C, and about pointers and memory management, the more it makes me wonder about the current state of the art in modern operating systems and memory protection. For example, what kind of checks are in place that prevent some rogue program from randomly trying to read as much address space as it can and disregarding the rules set in place by the operating system?
In general terms, how do these memory protection schemes work? What are their strengths and weaknesses? To put it another way, are there things that simply cannot be done anymore when running a compiled program on a modern OS, even if you have C and your own compiler with whatever tweaks you want?
The protection is enforced by the hardware (i.e., by the CPU). Applications can only express addresses as virtual addresses, and the CPU resolves the mapping of virtual addresses to physical addresses using its translation lookaside buffers. Whenever the CPU needs to resolve a mapping it does not know, it generates a 'page fault', which interrupts the currently running application and switches control to the operating system. The operating system is responsible for looking up its internal structures (page tables) and finding a mapping between the virtual address touched by the application and the actual physical address. Once the mapping is found, the CPU can resume the application.
The CPU instructions needed to load a mapping between a physical address and a virtual one are protected and as such can only be executed by a protected component (i.e., the OS kernel).
Overall the scheme works because:
applications cannot address physical memory
resolving mapping from virtual to physical requires protected operations
only the OS kernel is allowed to execute protected operations
The scheme fails though if a rogue module is loaded in the kernel, because at that protection level it can read and write into any physical address.
Applications can read and write other processes' memory, but only by asking the kernel to do the operation for them (e.g., ReadProcessMemory in Win32), and such APIs are protected by access control (certain privileges are required of the caller).
Memory protection is enforced in hardware, typically with a minimum granularity on the order of KBs.
From the Wikipedia article about memory protection:
In paging, the memory address space is divided into equal, small pieces, called pages. Using a virtual memory mechanism, each page can be made to reside in any location of the physical memory, or be flagged as being protected. Virtual memory makes it possible to have a linear virtual memory address space and to use it to access blocks fragmented over physical memory address space.

Most computer architectures based on pages, most notably x86 architecture, also use pages for memory protection. A page table is used for mapping virtual memory to physical memory. The page table is usually invisible to the process. Page tables make it easier to allocate new memory, as each new page can be allocated from anywhere in physical memory.

By such design, it is impossible for an application to access a page that has not been explicitly allocated to it, simply because any memory address, even a completely random one, that application may decide to use, either points to an allocated page, or generates a page fault (PF) error. Unallocated pages simply do not have any addresses from the application point of view.
You should search for "segmentation fault", "memory violation error", and "general protection fault". These are errors returned by various OSes in response to a program trying to access a memory address it shouldn't access.
Windows Vista (and 7) also randomizes where DLLs are loaded (address space layout randomization), which means that a buffer overflow can take you to different addresses each time it occurs. This makes buffer overflow attacks a little less repeatable.
So, to link the posted answers together with your question: a program that attempts to read any memory address that is not mapped in its address space will cause the processor to issue a page-fault exception, transferring execution control to the operating system code (trusted code). The kernel then checks the faulting address; if there is no mapping for it in the current process's address space, it sends the SIGSEGV (segmentation fault) signal to the process, which typically kills the process (talking about Linux/Unix here). On Windows you get something along the same lines.
Note: you can take a look at mprotect() on Linux and other POSIX operating systems. It allows you to change the protection of pages of memory explicitly. Functions like malloc() return memory on pages with default protection, which you can then modify; this way you can mark areas of memory as read-only, for example (but only in page-sized chunks, typically around 4 KB).
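For example, here is a short sketch of mprotect() in action: a page is made read-only, reads still work, and the subsequent write is trapped by the MMU and delivered to the process as SIGSEGV:

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        long pagesize = sysconf(_SC_PAGESIZE);

        /* Get one page-aligned, initially writable page. */
        char *page = mmap(NULL, pagesize, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        page[0] = 'x';                          /* fine: the page is writable */

        if (mprotect(page, pagesize, PROT_READ) != 0) {   /* now read-only */
            perror("mprotect");
            return 1;
        }

        printf("read back: %c\n", page[0]);     /* reads still succeed */
        page[0] = 'y';                          /* this write raises SIGSEGV */
        return 0;
    }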