I have a simple question regarding Linux.
Let us suppose we have 1GB of RAM. I read that this 1GB of RAM is itself divided into High Mem and Low Mem: High Mem is 128MB and Low Mem is 896MB (together the full 1GB).
My question is: where is the 0GB to 3GB range mapped into RAM?
1) User space is 3GB. Where does it reside in the RAM? If the 896MB of Low Mem plus the 128MB of High Mem already occupy the entire RAM, where is the space for the 3GB of user space?
4GB +---------------+          +------------+
    |     128MB     | -------> |   128MB    |
    +---------------+          +------------+
    |     896MB     | -------> |   896 MB   |
3GB +---------------+          +------------+
    |               |           (1GB RAM)
    |     /////     |
    |               |
0GB +---------------+
 (virtual address space)
You're confusing different concepts. The [0-3GB] + [3-4GB] areas are in virtual address space (and that particular layout is very specific to i386 [i.e. x86 32-bit], btw).
If you have 1GB of RAM, the available physical memory is mapped via the virtual address space. It is possible (and in many cases, likely) for the same physical page of memory to be mapped more than once.
By default, in i386, the low 896MB of RAM is direct-mapped into kernel virtual address space starting at the 3GB mark (0xc0000000). The lowest several megabytes are actually used by the kernel for its code and data areas. Most of the rest is then placed into allocation pools where it can subsequently be allocated for use by the kernel or by user processes.
So, user virtual address space uses some of the same physical memory. Physical pages are allocated one-by-one as needed by a process and mapped into the low 3GB of virtual space. This mapping changes every time there is a context switch. That is, process A's virtual address space maps different sets of pages than process B's -- except that the kernel part (above 0xc0000000) will not change.
When actually executing code, every code or data address used in the program is a virtual address. The virtual address gets translated to a physical address in hardware by page tables. The kernel sets up and completely controls the page tables.
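You can actually watch this translation from user space on Linux via /proc/self/pagemap, which exposes the result of the kernel's page-table lookups. A minimal sketch, assuming 4 KiB pages (note: since Linux 4.0 the physical frame numbers read back as zero unless you run it as root):

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void) {
        long page_size = sysconf(_SC_PAGESIZE);
        char *buf = malloc(page_size);
        buf[0] = 1;  /* touch the page so a physical frame is actually allocated */

        int fd = open("/proc/self/pagemap", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        /* one 64-bit entry per virtual page: bit 63 = present, bits 0-54 = frame number */
        uint64_t entry;
        off_t offset = ((uintptr_t)buf / page_size) * sizeof(entry);
        if (pread(fd, &entry, sizeof(entry), offset) != sizeof(entry)) return 1;

        if (entry & (1ULL << 63))
            printf("virtual %p -> physical frame 0x%llx\n", (void *)buf,
                   (unsigned long long)(entry & ((1ULL << 55) - 1)));
        else
            printf("page not present\n");
        close(fd);
        return 0;
    }

Run it twice: even if ASLR happens to hand out the same virtual address, the physical frame will generally differ, which is exactly the per-process mapping described above.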
Related
With the x86 32-bit virtual address space, where lower physical memory is mapped contiguously after the kernel at 0xc0000000, the upper part of physical memory had to be mapped into the virtual address space dynamically.
Has this changed in the x86_64 kernel?
Is there still HIGHMEM allocation, or is all physical memory in x86_64 accessible with a simple physical-to-virtual address translation macro?
No. The highmem concept comes from the 32-bit zones ZONE_DMA, ZONE_NORMAL and ZONE_HIGHMEM. On x86_64, because the virtual address space is really huge, the kernel space is split into several parts with large holes between them for safety, and there is nothing called high memory there. You can read the following for more detail about the structure of the x86_64 kernel address space.
I found this one:
https://www.kernel.org/doc/Documentation/x86/x86_64/mm.txt
ff11000000000000 | -59.75 PB | ff90ffffffffffff | 32 PB | direct mapping of all physical memory (page_offset_base)
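Because that direct map covers all of physical memory at a single fixed offset, the translation really is a simple macro again, with no HIGHMEM-style temporary mappings. A simplified sketch of what the kernel's __va()/__pa() boil down to on x86_64 (the PAGE_OFFSET value is copied from the table row above, but KASLR can shift it at boot):

    /* base of the direct mapping (page_offset_base in the table above) */
    #define PAGE_OFFSET 0xff11000000000000UL

    static inline void *phys_to_virt(unsigned long phys)
    {
        return (void *)(phys + PAGE_OFFSET);
    }

    static inline unsigned long virt_to_phys(void *virt)
    {
        return (unsigned long)virt - PAGE_OFFSET;
    }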
$ man top
CPU         Percentage of processor usage, broken into user, system, and idle components. The time period for which these percentages are calculated depends on the event counting mode.
Disks       Number and total size of disk reads and writes.
LoadAvg     Load average over 1, 5, and 15 minutes. The load average is the average number of jobs in the run queue.
MemRegions  Number and total size of memory regions, and total size of memory regions broken into private (broken into non-library and library) and shared components.
Networks    Number and total size of input and output network packets.
PhysMem     Physical memory usage, broken into wired, active, inactive, used, and free components.
Procs       Total number of processes and number of processes in each process state.
SharedLibs  Resident sizes of code and data segments, and link editor memory usage.
Threads     Number of threads.
Time        Time, in H:MM:SS format. When running in logging mode, Time is in YYYY/MM/DD HH:MM:SS format by default, but may be overridden with accumulative mode. When running in accumulative event counting mode, the Time is in HH:MM:SS since the beginning of the top process.
VirtMem     Total virtual memory, virtual memory consumed by shared libraries, and number of pageins and pageouts.
Swap        Swap usage: total size of swap areas, amount of swap space in use and amount of swap space available.
Purgeable   Number of pages purged and number of pages currently purgeable.
Below the global state fields, a list of processes is displayed. The fields that are displayed depend on the options that are set. The pid field displays the following for the architecture: + for 64-bit native architecture, - for 32-bit native architecture, or * for a non-native architecture.
I see the following output of top on a Mac. I don't quite understand it, as the manual is not very detailed.
For example, I only have 8GB of memory. Why does it show 15G PhysMem? What are the wired, active, inactive, used, and free components?
For Disks, are the numbers '21281572/769G read' the size of disk reads since the machine started?
For Networks, are the numbers since the machine started?
For VM, what are vsize, framework vsize, swapins, swapouts?
$ top -l 1 | head
Processes: 797 total, 4 running, 1 stuck, 792 sleeping, 1603 threads
2019/05/08 09:48:40
Load Avg: 54.32, 41.08, 34.69
CPU usage: 62.2% user, 36.89% sys, 1.8% idle
SharedLibs: 258M resident, 65M data, 86M linkedit.
MemRegions: 78888 total, 6239M resident, 226M private, 2045M shared.
PhysMem: 15G used (2220M wired), 785M unused.
VM: 3392G vsize, 1299M framework vsize, 0(0) swapins, 0(0) swapouts.
Networks: packets: 24484543/16G in, 24962180/7514M out.
Disks: 21281572/769G read, 20527776/242G written.
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/disk1s1 466G 444G 19G 97% /
/dev/disk1s4 466G 3.1G 19G 14% /private/var/vm
/dev/disk2s1 932G 546G 387G 59% /Volumes/usbhd
com.apple.TimeMachine.2019-05-06-225547#/dev/disk1s1 466G 441G 19G 96% /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/py???s MacBook Air/2019-05-06-225547/Macintosh HD
com.apple.TimeMachine.2019-05-02-082105#/dev/disk1s1 466G 440G 19G 96% /Volumes/com.apple.TimeMachine.localsnapshots/Backups.backupdb/py???s MacBook Air/2019-05-02-082105/Macintosh HD
I only have 8GB of memory. Why does it show 15G PhysMem?
It shows 15G used, which is above the 8GB in the machine, due to the use of virtual memory, where the OS can choose to swap (copy in / out) pages in memory to other storage (hard disk / SSD).
What are wired, active, inactive, used, and free components?
These are the states of physical pages of memory as used by Virtual Memory. So we have:
wired - Memory pages that are in use and can't be swapped (paged) out to disk (e.g. the OS itself)
active - Memory pages that are in use and have been referenced recently. They are not likely to be paged out, unless no other pages are available
inactive - Memory pages that are used for virtual memory, but have not been referenced recently. They are likely to be swapped out if the need arises
used - Sometimes known as "speculative": physical memory that is speculatively mapped because the OS guesses it may soon be required, but it is not yet active
free - Physical memory pages not being used for virtual memory that are instantly available
For Disks, are the numbers '21281572/769G read' the size of disk reads since the machine started?
For Networks, are the numbers since the machine started?
Yes, I believe both of these count from the last time the OS was booted.
For VM, what are vsize, framework vsize, swapins, swapouts?
I expect these are:
vsize - the total amount of virtual address space in use (which is why it can be far larger than physical RAM)
framework vsize - No idea about this one!
swapins - the number of memory pages read back in from swap into physical memory
swapouts - the number of memory pages written out from physical memory to swap
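If you want the raw numbers behind these fields, the same per-page counters are available through the Mach host_statistics64() API. A minimal sketch (macOS only; it prints just a few of the counters discussed above, not a reimplementation of top):

    #include <stdio.h>
    #include <stdint.h>
    #include <mach/mach.h>

    int main(void) {
        mach_port_t host = mach_host_self();
        vm_size_t page_size;
        host_page_size(host, &page_size);

        vm_statistics64_data_t vm;
        mach_msg_type_number_t count = HOST_VM_INFO64_COUNT;
        if (host_statistics64(host, HOST_VM_INFO64,
                              (host_info64_t)&vm, &count) != KERN_SUCCESS) {
            fprintf(stderr, "host_statistics64 failed\n");
            return 1;
        }

        /* page counts scaled to MiB -- the ingredients of top's PhysMem line */
        printf("wired:    %llu MiB\n", (unsigned long long)vm.wire_count     * page_size >> 20);
        printf("active:   %llu MiB\n", (unsigned long long)vm.active_count   * page_size >> 20);
        printf("inactive: %llu MiB\n", (unsigned long long)vm.inactive_count * page_size >> 20);
        printf("free:     %llu MiB\n", (unsigned long long)vm.free_count     * page_size >> 20);
        printf("swapins: %llu, swapouts: %llu\n",
               (unsigned long long)vm.swapins, (unsigned long long)vm.swapouts);
        return 0;
    }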
I am trying to understand memory management by the OS.
What I understand till now is that in a 32-bit system, each process is allocated a space of 4GB [2GB user + 2GB kernel] in the virtual address space.
What confuses me is: is this 4GB space unique for every process? If I have, say, 3 processes p1, p2, p3 running, would I need 12GB of space on the hard disk?
Also, if I have 2GB RAM on a 32-bit system, how will it manage to handle a process which needs 4GB? [Through the paging file?]
[2GB user + 2GB kernel]
That split is a constraint imposed by the OS. On an x86 32-bit system without PAE enabled, the virtual address space is 4 GiB (note that GB usually denotes 1000 MB while GiB stands for 1024 MiB).
What confuses me is: is this 4GB space unique for every process?
Yes, every process has its own 4 GiB virtual address space.
If I have, say, 3 processes p1, p2, p3 running, would I need 12GB of space on the hard disk?
No. With three processes, they can occupy a maximum of 12 GiB of storage. Whether that's primary or secondary storage is left to the kernel (primary is preferred, of course). So, you'd need your primary memory size + some secondary storage space to be at least 12 GiB to contain all three processes if all those processes really occupied the full range of 4 GiB, which is pretty unlikely to happen.
Also, if I have 2GB RAM on a 32-bit system, how will it manage to handle a process which needs 4GB? [Through the paging file?]
Yes, in a way. You mean the right thing, but the "paging file" is just an implementation detail. It is used by Windows, but Linux, for example, uses a separate swap partition instead. So, to be technically correct, "secondary storage (an HDD, for example) is needed to store the remaining 2 GiB of the process" would be right.
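To make the "virtual space is not physical space" point concrete, here is a small illustrative sketch (POSIX mmap on Linux; the 1 GiB size is arbitrary). The process reserves far more address space than it backs with RAM, and physical pages only appear when touched:

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = (size_t)1 << 30;  /* reserve 1 GiB of virtual address space */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* only the mapping exists so far; physical frames arrive on first touch */
        memset(p, 1, 64 * 4096);  /* touch 64 pages -> only ~256 KiB of RAM in use */

        getchar();  /* pause: compare VSZ (~1 GiB) vs RSS (tiny) in ps or top */
        munmap(p, len);
        return 0;
    }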
I am new to Linux kernel stuff and am reading about the memory layout of the kernel loader, but I am confused by the diagram below:
0A0000  +------------------------+
        |  Reserved for BIOS     |  Do not use.  Reserved for BIOS EBDA.
09A000  +------------------------+
        |  Command line          |
        |  Stack/heap            |  For use by the kernel real-mode code.
098000  +------------------------+
        |  Kernel setup          |  The kernel real-mode code.
090200  +------------------------+
        |  Kernel boot sector    |  The kernel legacy boot sector.
090000  +------------------------+
        |  Protected-mode kernel |  The bulk of the kernel image.
010000  +------------------------+
        |  Boot loader           |  <- Boot sector entry point 0000:7C00
001000  +------------------------+
        |  Reserved for MBR/BIOS |
000800  +------------------------+
        |  Typically used by MBR |
000600  +------------------------+
        |  BIOS use only         |
000000  +------------------------+
Now the statement explaining this diagram is a bit confusing for me:
When using bzImage, the protected-mode kernel was relocated to 0x100000 ("high memory"), and the kernel real-mode block (boot sector, setup, and stack/heap) was made relocatable to any address between 0x10000 and end of low memory.
First, where is the address 0x100000 in the above diagram?
Second, when it says the kernel real-mode block was made relocatable to "any address between 0x10000 and end of low memory", does that mean it was relocatable to addresses between 0x10000 and 000600?
Initially, the kernel real-mode block is placed between 0x10000 and 0x9A000.
"it is desirable to keep the "memory ceiling" -- the highest point in low memory touched by the boot loader -- as low as possible, since some newer BIOSes have begun to allocate some rather large amounts of memory, called the Extended BIOS Data Area, near the top of low memory".
When it says low memory, does it mean memory downwards towards 000600, and high memory upwards towards 0A0000?
First, where is the address 0x100000 in the above diagram?
0x100000 is not on the diagram: the diagram stops at 0xA0000 (640 KB), and 0x100000 is the 1 MB mark, just above it. Only the first megabyte is special like this. Beyond that point the physical memory is contiguous, at least until the 15-16MB point.
Second, when it says the kernel real-mode block was made relocatable to "any address between 0x10000 and end of low memory", does that mean it was relocatable to addresses between 0x10000 and 000600?
Real-mode code can live anywhere below approximately 1 MB, and the "end of low memory" is probably around there, at 0x9A000 or wherever the EBDA begins.
When it says low memory, does it mean memory downwards towards 000600, and high memory upwards towards 0A0000?
You have it on the diagram: low memory is the region from 0xA0000 downwards, towards 0.
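As a small worked example of how the real-mode addresses in that diagram come about: a segment:offset pair translates to a linear address as segment * 16 + offset. A tiny sketch (the second line uses 9020:0000, the traditional jump target for the kernel setup code at 090200):

    #include <stdio.h>
    #include <stdint.h>

    /* real-mode translation: linear = segment * 16 + offset */
    static uint32_t linear(uint16_t seg, uint16_t off) {
        return ((uint32_t)seg << 4) + off;
    }

    int main(void) {
        printf("0000:7C00 -> 0x%05X\n", (unsigned)linear(0x0000, 0x7C00)); /* 0x07C00, boot sector entry */
        printf("9020:0000 -> 0x%05X\n", (unsigned)linear(0x9020, 0x0000)); /* 0x90200, kernel setup */
        return 0;
    }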
How do you determine addressability based on address space? How do you determine the size of the address bus based on the addressability? For example: the addressability of a machine is 32 bits; what is the size of the address bus?
The address bus connects the CPU with the main memory. So if the address bus has 32 bits, the max size of the main memory is 2^32 bytes, i.e. 4 GB.
The address bus transfers a physical address, and thus the physical address space in this example is 4 GB.
However the CPU generates virtual addresses, and the virtual addresses are the virtual address space. The virtual addresses have to be mapped to physical addresses by a memory management unit.
In principle, one can map a small virtual address space to a large physical one (as was done earlier, e.g. on the PDP-11 computers), but nowadays mostly a larger virtual address space is mapped to a smaller physical one, e.g. from a 64-bit CPU with a 2^64-byte virtual address space to a physical memory with a 32-bit address bus, which is thus 4 GB large.
So if you have a primitive system without memory management, and you want all addresses that the CPU can generate to be existing main memory addresses, then your address bus must have the same number of bits as the CPU uses for addressing, e.g. 32 bits.
But in a real system the virtual CPU addresses are essentially independent of the physical memory addresses.
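As a tiny worked example of the arithmetic: with a 32-bit address bus there are 2^32 = 4,294,967,296 distinct byte addresses, i.e. 4 GiB. A sketch (illustrative only):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        unsigned bits = 32;             /* width of the address bus */
        uint64_t bytes = 1ULL << bits;  /* 2^32 addressable bytes */
        printf("%u address lines -> %llu bytes = %llu GiB\n",
               bits, (unsigned long long)bytes,
               (unsigned long long)(bytes >> 30));  /* prints 4294967296 bytes = 4 GiB */
        return 0;
    }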