I am new to Linux kernel stuff and am reading about the memory layout of the kernel loader, but I am confused by the diagram below:
0A0000 +------------------------+
| Reserved for BIOS | Do not use. Reserved for BIOS EBDA.
09A000 +------------------------+
| Command line |
| Stack/heap | For use by the kernel real-mode code.
098000 +------------------------+
| Kernel setup | The kernel real-mode code.
090200 +------------------------+
| Kernel boot sector | The kernel legacy boot sector.
090000 +------------------------+
| Protected-mode kernel | The bulk of the kernel image.
010000 +------------------------+
| Boot loader | <- Boot sector entry point 0000:7C00
001000 +------------------------+
| Reserved for MBR/BIOS |
000800 +------------------------+
| Typically used by MBR |
000600 +------------------------+
| BIOS use only |
000000 +------------------------+
Now the statement explaining this diagram is a bit confusing for me:
When using bzImage, the protected-mode kernel was relocated to 0x100000 ("high memory"), and the kernel real-mode block (boot sector, setup, and stack/heap) was made relocatable to any address between 0x10000 and end of low memory.
First, where is the address 0x100000 in the above diagram?
Second, when it says the kernel real-mode block was made relocatable to "any address between 0x10000 and end of low memory", does that mean it was relocatable to any address between 0x10000 and 000600?
Initially the kernel real-mode block is placed between 0x10000 and 0x9A000.
"it is desirable to keep the "memory ceiling" -- the highest point in low memory touched by the boot loader -- as low as possible, since some newer BIOSes have begun to allocate some rather large amounts of memory, called the Extended BIOS Data Area, near the top of low memory".
When it says low memory, does that mean memory downwards towards 000600, and high memory upwards towards 0A0000?
First, where is the address 0x100000 in the above diagram?
0x100000 is not on the diagram because only the first megabyte is special. Beyond that point, physical memory is contiguous at least until the 15-16 MB point.
Second, when it says the kernel real-mode block was made relocatable to "any address between 0x10000 and end of low memory", does that mean it was relocatable to any address between 0x10000 and 000600?
Real-mode code can live anywhere below approximately 1 MB, and the end of low memory is probably around there, at 0x9A000 or wherever the EBDA begins.
When it says low memory, does that mean memory downwards towards 000600, and high memory upwards towards 0A0000?
You have it on the diagram: from 0xA0000 downwards, towards 0.
Related
With the x86 32-bit virtual address space, and lower physical memory mapped contiguously after the kernel at 0xc0000000, the upper part of physical memory needed to be mapped into the virtual address space dynamically.
Has this changed in the x86_64 kernel?
Is there still HIGHMEM allocation, or is all physical memory in x86_64 accessible with a simple physical-to-virtual address translation macro?
No. Highmem comes from the ZONE_DMA, ZONE_NORMAL and ZONE_HIGHMEM split. But on x86_64, because the virtual address space is really huge, the kernel space is split into several parts with large holes between them for safety, and there is nothing called high memory there. You can read this for more detail about the structure of the x86_64 kernel address space.
I found this one:
https://www.kernel.org/doc/Documentation/x86/x86_64/mm.txt
ff11000000000000 | -59.75 PB | ff90ffffffffffff | 32 PB | direct mapping of all physical memory (page_offset_base)
I have a simple question regarding Linux.
Let us suppose we have 1 GB of RAM. I read that this 1 GB of RAM is itself divided into High Mem and Low Mem: High Mem is 128 MB and Low Mem is 896 MB (both together make up the whole 1 GB).
My question is: where is the 0 GB to 3 GB data mapped into RAM?
1) User space is 3 GB. Where does it reside in the RAM? If the 896 MB + High Mem already occupy the entire RAM, where is the space for the 3 GB of user space?
4GB +---------------+-------------+
| 128MB | |
+---------------+ <------+ |->|------------+
| 896MB | | | 128MB |
3GB +---------------+ <--+ +------>+------------+
| | | | 896 MB |
| ///// | +---------->+------------+
| |
0GB +---------------+
You're confusing different concepts. The [0-3GB] + [3-4GB] areas are in virtual address space (and that particular layout is very specific to i386 [i.e. x86 32-bit], btw).
If you have 1GB of RAM, the available physical memory is mapped via the virtual address space. It is possible (and in many cases, likely) for the same physical page of memory to be mapped more than once.
By default, in i386, the low 896MB of RAM is direct-mapped into kernel virtual address space starting at the 3GB mark (0xc0000000). The lowest several megabytes is actually used by the kernel for its code and data areas. Most of the rest is then placed into allocation pools where it can subsequently be allocated for use by the kernel or by user processes.
So, user virtual address space uses some of the same physical memory. Physical pages are allocated one-by-one as needed by a process and mapped into the low 3GB of virtual space. This mapping changes every time there is a context switch. That is, process A's virtual address space maps different sets of pages than process B's -- except that the kernel part (above 0xc0000000) will not change.
When actually executing code, every code or data address used in the program is a virtual address. The virtual address gets translated to a physical address in hardware by page tables. The kernel sets up and completely controls the page tables.
I have a doubt about the endianness concept. Please don't refer me to Wikipedia, I've already read it.
Isn't endianness just the two ways that the hardware cabling (between memory and registers, through the data bus) has been implemented in a system?
In my understanding, the picture below is a little-endian implementation (follow the horizontal line from a memory address (e.g. 4000) and then the vertical line to reach the low/high part of the register).
As you see, low memory addresses have been physically connected to the low part of the 4-byte register.
I think that it is not related at all to READ and WRITE instructions in any language (e.g. LDR in ARM).
1-byte memory address:
- 4000 value:XX ------------------|
- 4001 value:XX ---------------| |
- 4002 value:XX ------------| | |
- 4003 value:XX ---------| | | |
| | | |
general-purpose register:XX XX XX XX
Yes and no. (I can't see your diagram, but I think I understand what you're asking). The way data lines are physically connected in the hardware can determine/control whether the representation in memory is treated as big or little endian. However, there is more to it than this; little endian is a means of representation, so for instance data stored on magnetic storage (in a file) might be coded using little endian representation or big endian representation and obviously at this level the hardware is not important.
Furthermore, some 8 bit microcontrollers can perform 16 bit operations, which are performed at the hardware level using two separate memory accesses. They can therefore use either little or big endian representation independent of bus design and ALU connection.
I read the Datasheet for an Intel Xeon Processor and saw the following:
The Integrated Memory Controller (IMC) supports DDR3 protocols with four
independent 64-bit memory channels with 8 bits of ECC for each channel (total of
72-bits) and supports 1 to 3 DIMMs per channel depending on the type of memory
installed.
I need to know what this exactly means from a programmers view.
The documentation on this seems to be rather sparse and I don't have someone from Intel at hand to ask ;)
Can this memory controller execute 4 loads of data simultaneously from non-adjacent memory regions (and request each data from up to 3 memory DIMMs)? I.e. 4x64 Bits, striped from up to 3 DIMMs, e.g:
| X | _ | X | _ | X | _ | X |
(X is loaded data, _ an arbitrarily large region of unloaded data)
Can this IMC execute one load which will load up to 1x256 bits from a contiguous memory region?
| X | X | X | X | _ | _ | _ | _ |
This seems to be implementation specific, depending on compiler, OS and memory controller. The standard is available at: http://www.jedec.org/standards-documents/docs/jesd-79-3d . It seems that if your controller is fully compliant there are specific bits that can be set to indicate interleaved or non-interleaved mode. See pages 24, 25 and 143 of the DDR3 spec, but even in the spec the details are light.
For the i7/i5/i3 series specifically, and likely all newer Intel chips, the memory is interleaved as in your first example. For these newer chips, and presumably a compiler that supports it, yes: one asm/C/C++-level call to load something large enough to be interleaved/striped would initiate the required number of independent hardware-level loads, one to each channel of memory.
In the Triple-channel section of the Multi-channel memory architecture page on Wikipedia there is a small (likely incomplete) list of CPUs that do this: http://en.wikipedia.org/wiki/Multi-channel_memory_architecture
I read in text books that the stack grows by decreasing memory address; that is, from higher address to lower address. It may be a bad question, but I didn't get the concept right. Can you explain?
First, it's platform dependent. In some architectures, the stack is allocated from the bottom of the address space and grows upwards.
Assuming an architecture like x86, where the stack grows downwards from the top of the address space, the idea is pretty simple:
=============== Highest Address (e.g. 0xFFFF)
| |
| STACK |
| |
|-------------| <- Stack Pointer (e.g. 0xEEEE)
| |
. ... .
| |
|-------------| <- Heap Pointer (e.g. 0x2222)
| |
| HEAP |
| |
=============== Lowest Address (e.g. 0x0000)
To grow stack, you'd decrease the stack pointer:
=============== Highest Address (e.g. 0xFFFF)
| |
| STACK |
| |
|.............| <- Old Stack Pointer (e.g. 0xEEEE)
| |
| Newly |
| allocated |
|-------------| <- New Stack Pointer (e.g. 0xAAAA)
. ... .
| |
|-------------| <- Heap Pointer (e.g. 0x2222)
| |
| HEAP |
| |
=============== Lowest Address (e.g. 0x0000)
As you can see, to grow stack, we have decreased the stack pointer from 0xEEEE to 0xAAAA, whereas to grow heap, you have to increase the heap pointer.
Obviously, this is a simplification of memory layout. The actual executable, data section, ... is also loaded in memory. Besides, threads have their own stack space.
You may ask, why should the stack grow downwards? Well, as I said before, some architectures do the reverse, making the heap grow downwards and the stack grow upwards. It makes sense to put the stack and heap on opposite ends, as it prevents overlap and allows both areas to grow freely as long as you have enough address space available.
Another valid question could be: isn't the program supposed to decrease/increase the stack pointer itself? How can an architecture impose one direction over the other on the programmer? Why is it architecture dependent rather than program dependent?
While you can pretty much fight the architecture and somehow get away with growing your stack in the opposite direction, some instructions, notably call and ret, which modify the stack pointer directly, are going to assume the other direction, making a mess.
Nowadays it's largely because it's been done that way for a long time and lots of programs assume it's done that way, and there's no real reason to change it.
Back when dinosaurs roamed the earth and computers had 8kB of memory if you were lucky, though, it was an important space optimization. You put the bottom of the stack at the very top of memory, growing down, and you put the program and its data at the very bottom, with the malloc area growing up. That way, the only limit on the size of the stack was the size of the program + heap, and vice versa. If the stack instead started at 4kB (for instance) and grew up, the heap could never get bigger than 4kB (minus the size of the program) even if the program only needed a few hundred bytes of stack.
Man CLONE : The child_stack argument specifies the location of the stack used by the child process. Since the child and calling process may share memory, it is not possible for the child process to execute in the same stack as the calling process. The calling process must therefore set up memory space for the child stack and pass a pointer to this space to clone(). Stacks grow downward on all processors that run Linux (except the HP PA processors), so child_stack usually points to the topmost address of the memory space set up for the child stack.
On x86, the primary reason the stack grows toward decreasing memory addresses is that the PUSH instruction decrements the stack pointer:
Decrements the stack pointer and then stores the source operand on the top of the stack.
See p. 4-511 in the Intel® 64 and IA-32 Architectures Software Developer's Manual.