Does the program counter generate physical or logical addresses? - memory-management

So I was reading about memory addresses and got confused about what address the program counter stores. Each thread has its own program counter value, so do these program counters hold addresses relative to the program, or do they hold actual physical memory addresses?
Because in Operating System Concepts by Abraham Silberschatz it says the CPU generates the logical addresses, and the CPU gets those addresses from the program counter. So it seems like the program counters of threads hold logical addresses!
Can I get a heads-up on this? :)

The CPU doesn't generate logical addresses; it finds the physical address corresponding to a logical address by using the page tables. The CPU never 'generates' addresses. The CPU is a dead machine made of circuits that blindly executes instructions. The CPU has an architecture (x64, ARM, RISC-V or others), and it is that architecture which determines the format of the binary instructions it understands. It is the programmer who generates the instructions, using input/output mechanisms like the keyboard, mouse and screen.
The program counter contains logical addresses. The program counter is a physical CPU register, which is a small memory. The thread is an abstract OS construct. The OS programmers write structures in main memory to keep track of each thread, like the Process Control Block (PCB), and several lists of threads for each thread state such as running, ready, waiting for IO, etc. It is the PCB that holds the program counter of a thread that's not currently running. On a thread switch, the current program counter is saved in the PCB of that thread and the PCB of another is read to change the thread running on that core. The job of the OS is to balance threads between available cores to give the impression that each thread is running concurrently.
The program counter's value is thus translated to a physical address by walking the page tables.
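If you want to see this for yourself, a tiny user-space C program is enough: the addresses of your own functions are the very values the program counter will hold when they execute, and they are virtual. This is only an illustrative sketch; with a position-independent executable and ASLR the printed numbers change between runs while the program keeps working.

#include <stdio.h>

void some_function(void) { }

int main(void)
{
    /* These are virtual (logical) addresses: the values the program counter will
       hold while executing this code. Only the kernel's page tables decide which
       physical frames back them. */
    printf("main is at          %p\n", (void *)main);
    printf("some_function is at %p\n", (void *)some_function);
    return 0;
}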

Related

Address in Program Counter Register

We know that the Program Counter contains the address of the next instruction to be executed. I am trying to understand which address this is - logical (CPU) or physical (RAM).
The address is virtual, i.e. it contains the address the CPU sees at that moment. That is true on most PCs and architectures that contain an MMU. On a microcontroller (e.g. an Arduino's CPU, an STM32, etc.), the program counter will always contain physical addresses.
On all the architectures I know, it's a logical (virtual) address. That's really the only way that's useful. You want page translation to apply to instruction fetches just like data accesses, so that all the features of paging can be used for code as well as data. And for architectures that can do PC-relative data addressing, you want virtual addresses there too - since all data instructions are subject to page translation, you can't really do anything useful with a physical address.
(Just to acknowledge dirac3000's point - if the machine doesn't have an MMU or it's disabled, like x86 in real mode, then all addresses are physical and the distinction between "logical" and "physical" doesn't exist, so the question becomes moot.)

Difference between physical/logical/virtual memory address

I am a little confused about the terms physical/logical/virtual addresses in an operating system (I use Linux - openSUSE).
Here is what I understand:
Physical Address - When the processor is in system mode, the address used by the processor is a physical address.
Logical Address - When the processor is in user mode, the address used is a logical address. These are anyway mapped to some physical address by adding a base register to the offset value. This in a way provides a sort of memory protection.
I have come across discussion that virtual and logical addresses/address space are the same. Is it true?
Any help is deeply appreciated.
My answer is true for Intel CPUs running on a modern Linux system, and I am speaking about user-level processes, not kernel code. Still, I think it'll give you enough insight to think about the other possibilities.
Address Types
Regarding question 3:
I have come across discussion that virtual and logical
addresses/address space are the same. Is it true?
As far as I know they are the same, at least in modern OS's running on top of Intel processors.
Let me try to define two notions before I explain more:
Physical Address: The address where something is physically located in the RAM chip.
Logical/Virtual Address: The address that your program uses to reach its things. It's typically converted to a physical address later by a hardware unit (and mostly, not even the CPU is really aware of this conversion).
Virtual/Logical Address
The virtual address is, well, a virtual address. The OS, along with a hardware circuit called the MMU (Memory Management Unit), deludes your program into thinking it's running alone in the system with the whole address space to itself (on a 32-bit system your program will think it has 4 GB of RAM, roughly speaking).
Obviously, if you have more than one program running at the time (you always do, GUI, Init process, Shell, clock app, calendar, whatever), this won't work.
What will happen is that the OS will keep most of your program's memory on the hard disk, and the parts it uses the most will be present in RAM. But hey, that doesn't mean they'll be at the addresses you and your program know.
Example: Your process might have a variable named (counter) that's given the virtual address 0xff (imaginably...) and another variable named (oftenNotUsed) that's given the virtual address (0xaa).
If you read the assembly of your compiled code after all the linking has happened, you'll see them accessed using those addresses, but well, the (oftenNotUsed) variable won't really be there in RAM at 0xaa; it'll be on the hard disk because the process is not using it.
Moreover, the variable (counter) probably won't physically be at (0xff); it'll be somewhere else in RAM. When your CPU tries to fetch what's at 0xff, the MMU and a part of the OS will do a mapping and get that variable from where it's really available in RAM; the CPU won't even notice it wasn't at 0xff.
Now what happens if your program asks for the (oftenNotUsed) variable? The MMU+OS will notice this 'miss' and will fetch it for the CPU from the hard disk into RAM, then hand it over to the CPU as if it were at address (0xaa); this fetching means some data that was present in RAM will be sent back to the hard disk.
Now imagine this running for every process in your system. Every process thinks it has 4 GB of RAM; nobody actually has that, but everything works because everyone has some parts of their program physically available in RAM, while most of the program resides on the hard disk. Don't confuse this part of the program memory being put on the hard disk with the program data you can access through file operations.
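If you want to poke at the "present in RAM or not" idea yourself on Linux, the mincore() system call reports which pages of a mapping are currently resident. Here is a small sketch; it's Linux-specific, and the residency you actually see can vary because the kernel may fault in or reclaim pages on its own.

/* Sketch: which pages of an anonymous mapping are resident in RAM? (Linux-specific) */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t len = 8 * (size_t)page;

    /* Reserve 8 pages; nothing is backed by RAM until we touch it. */
    unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    buf[0] = 1;              /* touch page 0: the kernel now backs it with a frame */
    buf[3 * page] = 1;       /* touch page 3 */

    unsigned char vec[8];
    if (mincore(buf, len, vec) != 0) { perror("mincore"); return 1; }

    for (int i = 0; i < 8; i++)
        printf("page %d: %s\n", i, (vec[i] & 1) ? "resident in RAM" : "not resident");

    munmap(buf, len);
    return 0;
}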
Summary
Virtual address: The address you use in your programs and the address your CPU uses to fetch data. It is not real and gets translated via the MMU to some physical address; everyone has one, and its size depends on your system (32-bit Linux gives a 4 GB address space).
Physical address: The address you'll never reach if you're running on top of an OS. It's where your data, regardless of its virtual address, resides in RAM. It can change whenever your data is sent back and forth to the hard disk to make room for other processes.
All of what I have mentioned above, although a simplified version of the whole concept, is what's called the memory management part of the computer system.
Consequences of this system
Processes cannot access each other's memory; everyone has their own separate virtual addresses, and every process gets a different translation to different areas, even if sometimes you may find that two processes try to access the same virtual address.
This system works well as a caching system: you typically don't use the whole 4 GB you have available, so why waste it? Let others share it and use it too; when your process needs more, the OS will fetch your data from the hard disk and replace other processes' data, at an expense of course.
Physical Address- When the processor is in system mode, the address used by the processor is physical address.
Not necessarily true. It depends on the particular CPU. On x86 CPUs, once you've enabled page translation, all code ceases to operate with physical addresses or addresses trivially convertible into physical addresses (except SMM, AFAIK, but that's not important here).
Logical Address- When the processor is in user mode, the address used is the logical address. these are anyways mapped to some physical address by adding a base register with the offset value.
Logical addresses do not necessarily apply to the user mode exclusively. On x86 CPUs they exist in the kernel mode as well.
I have come across discussion that virtual and logical addresses/address space are the same. Is it true?
It depends on the particular CPU. x86 CPUs can be configured in such a way that segments aren't used explicitly. They are used implicitly and their bases are always 0 (except for thread-local-storage segments). What remains when you drop the segment selector from a logical address is a 32-bit (or 64-bit) offset whose value coincides with the 32-bit (or 64-bit) virtual address. In this simplified set-up you may consider the two to be the same, or that logical addresses don't exist. It's not strictly true, but for most practical purposes it's a good enough approximation.
The answer below is based on the Intel x86 CPU.
Difference Between Logical to Virtual Address
Whenever your program is executing, the CPU generates a logical address for each instruction, which consists of a 16-bit segment selector and a 32-bit offset. Basically, the virtual (linear) address is generated from the fields of the logical address.
The segment selector is a 16-bit field: 13 bits are an index (a pointer to the segment descriptor that resides in the GDT, described below), 1 bit is the TI field (TI = 1: refer to the LDT, TI = 0: refer to the GDT), and the remaining 2 bits are the requested privilege level (RPL).
Now the segment selector, or say segment identifier, refers to the Code Segment, Data Segment or Stack Segment, etc. Linux contains one GDT/LDT (Global/Local Descriptor Table), which contains an 8-byte descriptor for each segment and holds the base (virtual) address of the segment.
So for each logical address, the virtual address is calculated using the steps below.
1) Examines the TI field of the Segment Selector to determine which Descriptor
Table stores the Segment Descriptor. This field indicates that the Descriptor is
either in the GDT (in which case the segmentation unit gets the base linear
address of the GDT from the gdtr register) or in the active LDT (in which case the
segmentation unit gets the base linear address of that LDT from the ldtr register).
2) Computes the address of the Segment Descriptor from the index field of the Segment
Selector. The index field is multiplied by 8 (the size of a Segment Descriptor),
and the result is added to the content of the gdtr or ldtr register.
3) Adds the offset of the logical address to the Base field of the Segment Descriptor,
thus obtaining the linear(Virtual) address.
Now it is the job of the paging unit to translate the linear (virtual) address to a physical address.
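Here is a rough C sketch of those three steps. It is only a toy model: the descriptor layout is collapsed to a single base field (real descriptors split the base across three bit fields), and the gdt/ldt arrays stand in for whatever the gdtr/ldtr registers point at.

#include <stdint.h>
#include <stdio.h>

/* Simplified 8-byte segment descriptor: only the base matters for this sketch. */
typedef struct {
    uint32_t base;
    uint32_t unused;
} segment_descriptor_t;

/* Stand-ins for the tables the gdtr/ldtr registers point at. */
static segment_descriptor_t gdt[16];
static segment_descriptor_t ldt[16];

/* Logical address = 16-bit selector : 32-bit offset  ->  32-bit linear (virtual) address. */
static uint32_t logical_to_linear(uint16_t selector, uint32_t offset)
{
    uint32_t index   = selector >> 3;        /* 13-bit descriptor index              */
    int      use_ldt = (selector >> 2) & 1;  /* TI bit: 1 = LDT, 0 = GDT             */
    /* The low 2 bits are the RPL; they don't enter the address computation.         */

    const segment_descriptor_t *table = use_ldt ? ldt : gdt;
    const segment_descriptor_t *desc  = &table[index];   /* index * 8 bytes in       */

    return desc->base + offset;              /* step 3: base + offset = linear       */
}

int main(void)
{
    gdt[2].base = 0x08000000;                /* pretend descriptor 2 has this base   */
    uint16_t selector = (2 << 3) | 0;        /* index 2, TI = 0 (GDT), RPL = 0       */

    printf("linear = 0x%08x\n", logical_to_linear(selector, 0x1234));
    return 0;
}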
Refer to: Understanding the Linux Kernel, Chapter 2, Memory Addressing.
User virtual addresses
These are the regular addresses seen by user-space programs. User addresses are either 32 or 64 bits in length, depending on the underlying hardware architecture, and each process has its own virtual address space.
Physical addresses
The addresses used between the processor and the system's memory. Physical addresses are 32- or 64-bit quantities; even 32-bit systems can use 64-bit physical addresses in some situations.
Bus addresses
The addresses used between peripheral buses and memory. Often they are the same as the physical addresses used by the processor, but that is not necessarily the case. Bus addresses are highly architecture dependent, of course.
Kernel logical addresses
These make up the normal address space of the kernel. These addresses map most or all of main memory, and are often treated as if they were physical addresses. On most architectures, logical addresses and their associated physical addresses differ only by a constant offset. Logical addresses use the hardware's native pointer size, and thus may be unable to address all of physical memory on heavily equipped 32-bit systems. Logical addresses are usually stored in variables of type unsigned long or void *. Memory returned from kmalloc has a logical address.
Kernel virtual addresses
These differ from logical addresses in that they do not necessarily have a direct mapping to physical addresses. All logical addresses are kernel virtual addresses; memory allocated by vmalloc also has a virtual address (but no direct physical mapping). The function kmap returns virtual addresses. Virtual addresses are usually stored in pointer variables.
If you have a logical address, the macro __pa() (defined in <asm/page.h>) will return its associated physical address. Physical addresses can be mapped back to logical addresses with __va(), but only for low-memory pages.
Reference.
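To make the kmalloc/vmalloc distinction above concrete, here is a minimal kernel-module sketch (not user-space code; it has to be built against your kernel headers, and details vary by kernel version, so treat it as illustrative only).

/* Sketch: kmalloc() memory has a logical address usable with __pa()/__va();
   vmalloc() memory only has a kernel virtual address with no direct mapping. */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>
#include <asm/page.h>

static void *kbuf;
static void *vbuf;

static int __init addr_demo_init(void)
{
    kbuf = kmalloc(4096, GFP_KERNEL);
    vbuf = vmalloc(4096);
    if (!kbuf || !vbuf)
        return -ENOMEM;

    /* Logical address: physical = virtual - constant offset, so __pa() is valid. */
    pr_info("kmalloc virt %px -> phys %llx\n", kbuf, (unsigned long long)__pa(kbuf));

    /* vmalloc memory is only a kernel virtual address; __pa() must NOT be used on it. */
    pr_info("vmalloc virt %px (no direct physical mapping)\n", vbuf);
    return 0;
}

static void __exit addr_demo_exit(void)
{
    vfree(vbuf);
    kfree(kbuf);
}

module_init(addr_demo_init);
module_exit(addr_demo_exit);
MODULE_LICENSE("GPL");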
Normally, every address issued (on the x86 architecture) is a logical address, which is translated to a linear address via the segment tables. After the translation into a linear address, it is then translated to a physical address via the page tables.
A nice article explaining the same in depth:
http://duartes.org/gustavo/blog/post/memory-translation-and-segmentation/
Physical Address is the address that is seen by the memory unit, i.e., the one loaded into the memory address register.
Logical Address is the address that is generated by the CPU.
The user program can never see the real physical address. The memory-management unit converts the logical address to a physical address.
Logical addresses generated by a user process must be mapped to physical memory before they are used.
Logical memory is relative to the respective program, i.e. (start point of program + offset).
Virtual memory uses a page table that maps to RAM and disk. In this way each individual process can be promised more memory than is physically available.
In user mode (user space), all the addresses seen by a program are virtual addresses.
In kernel mode, the addresses seen by the kernel are still virtual but are termed logical, as they are equal to physical plus a constant offset.
Physical addresses are the ones seen by the RAM.
With virtual memory, every address in the program goes through the page tables.
When you write a small program, e.g.:
#include <stdio.h>

int a = 10;

int main()
{
    printf("%d", a);
}
compile: >gcc -c fname.c
>ls
fname.o //fname.o is generated
>readelf -a fname.o >readelf_obj.txt
/* readelf is a command to inspect object files and executables, which are just 0s and 1s. The output is written to the readelf_obj.txt file. */
>vim readelf_obj.txt
/* Under the "Section Headers" you will see the .data, .text and .rodata sections of your object file. Every starting (base) address starts from 0000 and grows up to the respective size given under the heading "Size" ----> these are the logical addresses. */
>gcc fname.c
>ls
a.out //your executable
>readelf -a a.out>readelf_exe.txt
>vim readelf_exe.txt
/* Here the base addresses of the sections are not zero. Each section starts at a particular address and ends at a particular address. The linker gives continuous addresses to all the sections (observe this in the readelf_exe.txt file: look at the base address and size of each section; they follow one another), so only the base addresses are different. ---> this is called the virtual address space. */
Physical address -> the memory (RAM) has physical addresses. When your executable file is loaded into memory, it occupies physical addresses. The virtual addresses are actually mapped to physical addresses for execution.
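Another easy way to look at the virtual side on Linux is /proc/self/maps, which lists the virtual address ranges of the running process: the executable's sections, heap, shared libraries and stack. A minimal reader:

#include <stdio.h>

int main(void)
{
    /* /proc/self/maps lists the virtual address ranges of this process. */
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f) { perror("fopen"); return 1; }

    char line[512];
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);

    fclose(f);
    return 0;
}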

32-bit physical page table resolution

I'm running a 32-bit system in legacy mode on a 64-bit-capable (x86-64, that is) architecture. When a new process is created, the kernel has to decide where in physical memory all of the pages needed at the time of instantiation are to be allocated (assuming a single thread, this may include several memory regions such as the stack, the heaps, etc.).
I'm assuming the kernel keeps some sort of dynamic list of the physical RAM frames that are in use, and also a static list of all the regions of physical memory that have been taken up by devices for systems that use memory-mapped IO. Is this correct?
In addition, I also read that a 32-bit Windows system has a physical memory limit of 4GB (probably due to minimum address bus assumptions) so, even though a system may have more than 4 gigabytes of physical memory installed, a 32 bit kernel will only allocate addresses within the 4GB range.
Specific information regarding low-level operating system implementation for specific cases such as this is quite difficult to find online. Can anyone verify these statements and possibly refer me to a source where I could attain more information?
Thanks for your considerations.
When a new process is created, the kernel has to decide where in physical memory all of the pages needed at the time of instantiation are to be allocated
Why does it have to decide at process creation time? In fact, it only creates them on-demand - it simply creates the PTEs (i.e. "This address range is valid", but the pages are not backed in any way); when the process first starts executing, it immediately page-faults.
What is a page fault though? What happens is, first the CPU reads the TLB to see if it has an address <=> frame mapping. When that fails, it walks the PTEs looking for an entry that matches. If no entry is found, or if the entry indicates that the page isn't backed, a page-fault is generated. This means, that a CPU exception occurs and the CPU immediately jumps to a predefined address. The first thing the kernel then does is save the CPU Context (i.e. the registers at the location of the fault), then dispatches to the page fault handler.
When the page-fault occurs, Mm (the Memory Manager in NT) will read the mapping in its own data structures (remember that all PE images are memory-mapped files) and determine at that time which physical frame (i.e. 'a real piece of memory') which will be used.
Once the page fault is serviced, the page fault handler restores the saved CPU context, jumps back to where it was, and retries the instruction that faulted.
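To make the PTE walk concrete, here is a toy sketch of the classic non-PAE 32-bit x86 two-level lookup: 10 bits of directory index, 10 bits of table index, 12 bits of page offset. The names, the phys_mem array and the phys_to_virt helper are made up for the example; this is not NT or Linux code.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE   4096u
#define PTE_PRESENT 0x1u

static uint32_t phys_mem[16 * PAGE_SIZE / 4];   /* pretend physical RAM: 16 frames */

/* Toy stand-in for the kernel's fixed physical->virtual mapping (e.g. __va()). */
static void *phys_to_virt(uint32_t phys) { return &phys_mem[phys / 4]; }

/* Translate a virtual address using a two-level walk; false signals a page fault. */
static bool translate(uint32_t pgd_phys, uint32_t vaddr, uint32_t *paddr_out)
{
    uint32_t dir_idx = (vaddr >> 22) & 0x3ff;    /* top 10 bits  */
    uint32_t tbl_idx = (vaddr >> 12) & 0x3ff;    /* next 10 bits */
    uint32_t offset  =  vaddr        & 0xfff;    /* low 12 bits  */

    uint32_t *pgd = phys_to_virt(pgd_phys);
    uint32_t pde  = pgd[dir_idx];
    if (!(pde & PTE_PRESENT))
        return false;                            /* no page table here -> page fault */

    uint32_t *pt  = phys_to_virt(pde & ~0xfffu);
    uint32_t pte  = pt[tbl_idx];
    if (!(pte & PTE_PRESENT))
        return false;                            /* page not backed -> page fault    */

    *paddr_out = (pte & ~0xfffu) | offset;       /* frame base + page offset         */
    return true;
}

int main(void)
{
    uint32_t pgd_phys = 0 * PAGE_SIZE;           /* frame 0 holds the directory */
    uint32_t pt_phys  = 1 * PAGE_SIZE;           /* frame 1 holds one table     */
    uint32_t *pgd = phys_to_virt(pgd_phys);
    uint32_t *pt  = phys_to_virt(pt_phys);

    /* Map virtual page 0x00400000 (dir index 1, table index 0) to frame 2. */
    pgd[1] = pt_phys          | PTE_PRESENT;
    pt[0]  = (2 * PAGE_SIZE)  | PTE_PRESENT;

    uint32_t pa;
    if (translate(pgd_phys, 0x00400a54, &pa))
        printf("virtual 0x00400a54 -> physical 0x%08x\n", pa);
    else
        printf("page fault\n");
    return 0;
}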
You're correct that a 32-bit OS will only use 4GB of address space (not RAM! Don't forget those memory-mapped devices and files!); the processor will operate in 32-bit mode and interpret the PTEs as 32-bit entries (remember that AMD64 long mode adds an extra level of page tables and extends the address space to 48 bits).
32-bit systems can only ever address 4 GB directly (2^32 = 4 GB). There are PAE hacks, which let the system have more than 4 GB of physical RAM, but no single process can ever have more than 4 GB available. As well, even if you have 4 GB of RAM, you'll never see more than about 3.5 GB actually available - some of the address space is reserved for memory-mapped hardware devices, such as your video RAM.
For one method of dealing with the physical-virtual memory mapping, look at TLB

What is the difference between the kernel space and the user space?

What is the difference between the kernel space and the user space? Do kernel space, kernel threads, kernel processes and kernel stack mean the same thing? Also, why do we need this differentiation?
The really simplified answer is that the kernel runs in kernel space, and normal programs run in user space. User space is basically a form of sand-boxing -- it restricts user programs so they can't mess with memory (and other resources) owned by other programs or by the OS kernel. This limits (but usually doesn't entirely eliminate) their ability to do bad things like crashing the machine.
The kernel is the core of the operating system. It normally has full access to all memory and machine hardware (and everything else on the machine). To keep the machine as stable as possible, you normally want only the most trusted, well-tested code to run in kernel mode/kernel space.
The stack is just another part of memory, so naturally it's segregated right along with the rest of memory.
The Random Access Memory (RAM) can be logically divided into two distinct regions, namely the kernel space and the user space. (The physical addresses of the RAM are not actually divided, only the virtual addresses; all of this is implemented by the MMU.)
The kernel runs in the part of memory entitled to it. This part of memory cannot be accessed directly by the processes of normal users, while the kernel can access all parts of memory. To access some part of the kernel, the user processes have to use the predefined system calls, i.e. open, read, write, etc. Also, C library functions like printf call the system call write in turn.
The system calls act as an interface between the user processes and the kernel processes. The access rights are placed on the kernel space in order to stop the users from messing with the kernel unknowingly.
So, when a system call occurs, a software interrupt is sent to the kernel. The CPU may hand over control temporarily to the associated interrupt handler routine. The kernel process which was halted by the interrupt resumes after the interrupt handler routine finishes its job.
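As a tiny illustration of that flow, the C library's syscall() wrapper lets you issue a system call explicitly; the write below goes through the same transition (user mode, trap into the kernel, back to user mode) that printf triggers under the hood.

#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    /* write(2) performed explicitly as a system call: the syscall() wrapper
       switches to kernel mode, the kernel writes to fd 1, then control returns. */
    const char msg[] = "hello from user space via a system call\n";
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}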
CPU rings are the clearest distinction
In x86 protected mode, the CPU is always in one of 4 rings. The Linux kernel only uses 0 and 3:
0 for kernel
3 for users
This is the most hard and fast definition of kernel vs userland.
Why Linux does not use rings 1 and 2: CPU Privilege Rings: Why rings 1 and 2 aren't used?
How is the current ring determined?
The current ring is selected by a combination of:
global descriptor table: an in-memory table of GDT entries, and each entry has a field Privl which encodes the ring.
The LGDT instruction sets the address of the current descriptor table.
See also: http://wiki.osdev.org/Global_Descriptor_Table
the segment registers CS, DS, etc., which point to the index of an entry in the GDT.
For example, CS = 0 means the first entry of the GDT is currently active for the executing code. (A quick user-space check of the current ring is sketched just below.)
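Here is that user-space check (x86 Linux, GCC inline assembly; reading a segment register is allowed in ring 3): the low two bits of CS are the current privilege level, so a normal program prints 3.

#include <stdio.h>

int main(void)
{
    unsigned long cs;

    /* Reading a segment register is not a privileged operation. */
    __asm__("mov %%cs, %0" : "=r"(cs));

    printf("CS selector = 0x%lx, CPL (ring) = %lu\n", cs, cs & 3);
    return 0;
}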
What can each ring do?
The CPU chip is physically built so that:
ring 0 can do anything
ring 3 cannot run several instructions and write to several registers, most notably:
cannot change its own ring! Otherwise, it could set itself to ring 0 and rings would be useless.
In other words, cannot modify the current segment descriptor, which determines the current ring.
cannot modify the page tables: How does x86 paging work?
In other words, cannot modify the CR3 register, and paging itself prevents modification of the page tables.
This prevents one process from seeing the memory of other processes for security / ease of programming reasons.
cannot register interrupt handlers. Those are configured by writing to memory locations, which is also prevented by paging.
Handlers run in ring 0, and would break the security model.
In other words, cannot use the LGDT and LIDT instructions.
cannot do IO instructions like in and out, and thus cannot make arbitrary hardware accesses.
Otherwise, for example, file permissions would be useless if any program could directly read from disk.
More precisely, thanks to Michael Petch: it is actually possible for the OS to allow IO instructions in ring 3; this is controlled by the Task State Segment.
What is not possible is for ring 3 to give itself permission to do so if it didn't have it in the first place.
Linux always disallows it. See also: Why doesn't Linux use the hardware context switch via the TSS?
How do programs and operating systems transition between rings?
when the CPU is turned on, it starts running the initial program in ring 0 (well, kind of, but it is a good approximation). You can think of this initial program as being the kernel (but it is normally a bootloader that then calls the kernel, still in ring 0).
when a userland process wants the kernel to do something for it like write to a file, it uses an instruction that generates an interrupt such as int 0x80 or syscall to signal the kernel. x86-64 Linux syscall hello world example:
.data
hello_world:
.ascii "hello world\n"
hello_world_len = . - hello_world
.text
.global _start
_start:
/* write(1, hello_world, hello_world_len) */
mov $1, %rax                /* syscall number 1 = write */
mov $1, %rdi                /* fd 1 = stdout */
mov $hello_world, %rsi      /* buffer */
mov $hello_world_len, %rdx  /* byte count */
syscall
/* exit(0) */
mov $60, %rax               /* syscall number 60 = exit */
mov $0, %rdi                /* exit status */
syscall
compile and run:
as -o hello_world.o hello_world.S
ld -o hello_world.out hello_world.o
./hello_world.out
GitHub upstream.
When this happens, the CPU calls an interrupt callback handler which the kernel registered at boot time. Here is a concrete baremetal example that registers a handler and uses it.
This handler runs in ring 0, decides if the kernel will allow the action, does the action, and restarts the userland program in ring 3.
when the exec system call is used (or when the kernel starts /init), the kernel prepares the registers and memory of the new userland process, then it jumps to the entry point and switches the CPU to ring 3
If the program tries to do something naughty like write to a forbidden register or memory address (because of paging), the CPU also calls some kernel callback handler in ring 0.
But since the userland was naughty, the kernel might kill the process this time, or give it a warning with a signal.
When the kernel boots, it sets up a hardware clock with some fixed frequency, which generates interrupts periodically.
This hardware clock generates interrupts that run in ring 0, and allows the kernel to schedule which userland processes to wake up.
This way, scheduling can happen even if the processes are not making any system calls.
What is the point of having multiple rings?
There are two major advantages of separating kernel and userland:
it is easier to make programs, as you can be more certain one won't interfere with another. E.g., one userland process does not have to worry about overwriting the memory of another program because of paging, nor about putting hardware in an invalid state for another process.
it is more secure. E.g. file permissions and memory separation could prevent a hacking app from reading your bank data. This supposes, of course, that you trust the kernel.
How to play around with it?
I've created a bare metal setup that should be a good way to manipulate rings directly: https://github.com/cirosantilli/x86-bare-metal-examples
I didn't have the patience to make a userland example unfortunately, but I did go as far as paging setup, so userland should be feasible. I'd love to see a pull request.
Alternatively, Linux kernel modules run in ring 0, so you can use them to try out privileged operations, e.g. read the control registers: How to access the control registers cr0,cr2,cr3 from a program? Getting segmentation fault
Here is a convenient QEMU + Buildroot setup to try it out without killing your host.
The downside of kernel modules is that other kthreads are running and could interfere with your experiments. But in theory you can take over all interrupt handlers with your kernel module and own the system, that would be an interesting project actually.
Negative rings
While negative rings are not referenced in the Intel manual, there are CPU modes which have further capabilities than ring 0 itself, and so are a good fit for the "negative ring" name.
One example is the hypervisor mode used in virtualization.
For further details see:
https://security.stackexchange.com/questions/129098/what-is-protection-ring-1
https://security.stackexchange.com/questions/216527/ring-3-exploits-and-existence-of-other-rings
ARM
In ARM, the rings are called Exception Levels instead, but the main ideas remain the same.
There exist 4 exception levels in ARMv8, commonly used as:
EL0: userland
EL1: kernel ("supervisor" in ARM terminology).
Entered with the svc instruction (SuperVisor Call), previously known as swi before unified assembly, which is the instruction used to make Linux system calls. Hello world ARMv8 example:
hello.S
.text
.global _start
_start:
/* write(1, msg, len) */
mov x0, 1          /* fd 1 = stdout */
ldr x1, =msg       /* buffer */
ldr x2, =len       /* byte count */
mov x8, 64         /* syscall number 64 = write on arm64 */
svc 0
/* exit(0) */
mov x0, 0          /* exit status */
mov x8, 93         /* syscall number 93 = exit on arm64 */
svc 0
msg:
.ascii "hello syscall v8\n"
len = . - msg
GitHub upstream.
Test it out with QEMU on Ubuntu 16.04 (the code above is AArch64, so use the 64-bit cross toolchain):
sudo apt-get install qemu-user gcc-aarch64-linux-gnu
aarch64-linux-gnu-as -o hello.o hello.S
aarch64-linux-gnu-ld -o hello hello.o
qemu-aarch64 hello
Here is a concrete baremetal example that registers an SVC handler and does an SVC call.
EL2: hypervisors, for example Xen.
Entered with the hvc instruction (HyperVisor Call).
A hypervisor is to an OS, what an OS is to userland.
For example, Xen allows you to run multiple OSes such as Linux or Windows on the same system at the same time, and it isolates the OSes from one another for security and ease of debug, just like Linux does for userland programs.
Hypervisors are a key part of today's cloud infrastructure: they allow multiple servers to run on a single hardware, keeping hardware usage always close to 100% and saving a lot of money.
AWS for example used Xen until 2017 when its move to KVM made the news.
EL3: yet another level. TODO example.
Entered with the smc instruction (Secure Monitor Call)
The ARMv8 Architecture Reference Manual DDI 0487C.a - Chapter D1 - The AArch64 System Level Programmer's Model - Figure D1-1 illustrates this beautifully.
The ARM situation changed a bit with the advent of ARMv8.1 Virtualization Host Extensions (VHE). This extension allows the kernel to run in EL2 efficiently:
VHE was created because in-Linux-kernel virtualization solutions such as KVM have gained ground over Xen (see e.g. AWS' move to KVM mentioned above), because most clients only need Linux VMs, and as you can imagine, being all in a single project, KVM is simpler and potentially more efficient than Xen. So now the host Linux kernel acts as the hypervisor in those cases.
Note how ARM, maybe with the benefit of hindsight, has a better naming convention for the privilege levels than x86, without the need for negative levels: 0 is the lowest privilege and 3 the highest. Higher levels tend to be created more often than lower ones.
The current EL can be queried with the MRS instruction: what is the current execution mode/exception level, etc?
ARM does not require all exception levels to be present to allow for implementations that don't need the feature to save chip area. ARMv8 "Exception levels" says:
An implementation might not include all of the Exception levels. All implementations must include EL0 and EL1.
EL2 and EL3 are optional.
QEMU for example defaults to EL1, but EL2 and EL3 can be enabled with command line options: qemu-system-aarch64 entering el1 when emulating a53 power up
Code snippets tested on Ubuntu 18.10.
Kernel space and user space are concepts of virtual memory... it doesn't mean RAM (your actual memory) is divided into kernel and user space.
Each process is given virtual memory which is divided into kernel & user space.
So saying
"The random access memory (RAM) can be divided into two distinct regions namely - the kernel space and the user space." is wrong.
& regarding "kernel space vs user space" thing
When a process is created, its virtual memory is divided into user space and kernel space. The user-space region contains the data, code, stack and heap of the process, while the kernel space contains things such as the page table for the process, kernel data structures and kernel code.
To run kernel-space code, control must shift to kernel mode (using the 0x80 software interrupt for system calls); the kernel-space mapping itself is shared among all processes, while each process uses its own kernel stack when executing in kernel space.
Kernel space and user space is the separation of the privileged operating system functions and the restricted user applications. The separation is necessary to prevent user applications from ransacking your computer. It would be a bad thing if any old user program could start writing random data to your hard drive or read memory from another user program's memory space.
User space programs cannot access system resources directly so access is handled on the program's behalf by the operating system kernel. The user space programs typically make such requests of the operating system through system calls.
Kernel threads, processes and stacks do not mean the same thing; they are constructs in kernel space analogous to their counterparts in user space.
Each process has its own 4GB of virtual memory which maps to the physical memory through page tables. The virtual memory is mostly split in two parts: 3 GB for the use of the process and 1 GB for the use of the Kernel. Most of the variables you create lie in the first part of the address space. That part is called user space. The last part is where the kernel resides and is common for all the processes. This is called Kernel space and most of this space is mapped to the starting locations of physical memory where the kernel image is loaded at boot time.
The maximum size of address space depends on the length of the address register on the CPU.
On systems with 32-bit address registers, the maximum size of the address space is 2^32 bytes, or 4 GiB.
Similarly, on 64-bit systems, 2^64 bytes can be addressed.
Such address space is called virtual memory or virtual address space. It is not actually related to physical RAM size.
On Linux platforms, virtual address space is divided into kernel space and user space.
An architecture-specific constant called task size limit, or TASK_SIZE, marks the position where the split occurs:
the address range from 0 up to TASK_SIZE-1 is allotted to user space;
the remainder from TASK_SIZE up to 2^32-1 (or 2^64-1) is allotted to kernel space.
On a particular 32-bit system for example, 3 GiB could be occupied for user space and 1 GiB for kernel space.
Each application/program in a Unix-like operating system is a process; each of those has a unique identifier called the Process Identifier (or simply Process ID, i.e. PID). Linux creates a new process with the fork() system call; the exec() family of calls then replaces the program image of an existing process.
A kernel thread is a lightweight process and also a program under execution.
A single process may consist of several threads sharing the same data and resources but taking different paths through the program code. Linux provides a clone() system call to generate threads.
Example uses of kernel threads are: data synchronization of RAM, helping the scheduler to distribute processes among CPUs, etc.
Briefly: the kernel runs in kernel space, and kernel space has full access to all memory and resources. You can say memory is divided into two parts: a part for the kernel, and a part for the user's own processes (user space), which runs normal programs. User space cannot access kernel space directly, so it asks the kernel for resources through system calls (the predefined syscall wrappers in glibc).
There is a saying that sums up the difference: "User space is just a test load for the kernel"...
To be very clear: the processor architecture allows the CPU to operate in two modes, kernel mode and user mode, and a hardware instruction allows switching from one mode to the other.
Memory can be marked as being part of user space or kernel space.
When the CPU is running in user mode, it can only access memory in user space; if it attempts to access memory in kernel space, the result is a "hardware exception". When the CPU is running in kernel mode, it can directly access both kernel space and user space...
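You can observe that hardware exception from an ordinary program: dereference a kernel-space address and the kernel turns the resulting fault into SIGSEGV. The address below assumes the usual x86-64 layout (it's just an illustrative pick; any kernel address behaves the same from user mode).

#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

static void on_segv(int sig)
{
    (void)sig;
    /* The CPU faulted in user mode; the kernel delivered SIGSEGV to us. */
    static const char msg[] = "SIGSEGV: ring 3 cannot touch kernel-space memory\n";
    write(1, msg, sizeof msg - 1);
    _exit(0);
}

int main(void)
{
    signal(SIGSEGV, on_segv);

    /* 0xffff800000000000 lies in the kernel half of the x86-64 address space. */
    volatile char *kernel_addr = (volatile char *)0xffff800000000000UL;
    char c = *kernel_addr;    /* supervisor-only or unmapped -> hardware exception */

    write(1, &c, 1);          /* never reached */
    return 0;
}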
Kernel space means a memory space that can only be touched by the kernel. On 32-bit Linux it is 1 GB (from 0xC0000000 to 0xffffffff as virtual memory addresses). Every process also has a kernel-side execution context, so for one process there are two stacks: one stack in user space for the process and another in kernel space for its kernel-mode execution.
The kernel stack occupies 2 pages (8 KB on 32-bit Linux), including the task_struct (about 1 KB) and the real stack (about 7 KB). The latter is used to store automatic variables, function-call parameters and function addresses in kernel functions. Here is the code (processor.h in linux/include/asm-i386):
#define THREAD_SIZE (2*PAGE_SIZE)
#define alloc_task_struct() ((struct task_struct *) __get_free_pages(GFP_KERNEL,1))
#define free_task_struct(p) free_pages((unsigned long) (p), 1)
__get_free_pages(GFP_KERNEL, 1) means allocating memory as 2^1 = 2 pages.
But the process (user-space) stack is another thing: its address is just below 0xC0000000 (on 32-bit Linux), its size can be much bigger, and it is used for user-space function calls.
So here a question comes up for system calls: they run in kernel space but are called by a process in user space, so how does that work? Will Linux put their parameters and function addresses on the kernel stack or the process stack? Linux's solution: all system calls are triggered by the software interrupt INT 0x80.
Defined in entry.S (linux/arch/i386/kernel); here are some lines as an example:
ENTRY(sys_call_table)
.long SYMBOL_NAME(sys_ni_syscall) /* 0 - old "setup()" system call*/
.long SYMBOL_NAME(sys_exit)
.long SYMBOL_NAME(sys_fork)
.long SYMBOL_NAME(sys_read)
.long SYMBOL_NAME(sys_write)
.long SYMBOL_NAME(sys_open) /* 5 */
.long SYMBOL_NAME(sys_close)
By Sunil Yadav, on Quora:
The Linux Kernel refers to everything that runs in Kernel mode and is
made up of several distinct layers. At the lowest layer, the Kernel
interacts with the hardware via the HAL. At the middle level, the
UNIX Kernel is divided into 4 distinct areas. The first of the four
areas handles character devices, raw and cooked TTY and terminal
handling. The second area handles network device drivers, routing
protocols and sockets. The third area handles disk device drivers,
page and buffer caches, file system, virtual memory, file naming and
mapping. The fourth and last area handles process dispatching,
scheduling, creation and termination as well as signal handling.
Above all this we have the top layer of the Kernel which includes
system calls, interrupts and traps. This level serves as the
interface to each of the lower level functions. A programmer uses
the various system calls and interrupts to interact with the features
of the operating system.
In short, kernel space is the portion of memory where the Linux kernel runs (the top 1 GB of virtual space in the case of Linux) and user space is the portion of memory where user applications run (the bottom 3 GB of virtual memory in the case of Linux). If you want to know more, see the link given below. :)
http://learnlinuxconcepts.blogspot.in/2014/02/kernel-space-and-user-space.html
Kernel Space and User Space are logical spaces.
Most modern processors are designed to run in different privileged modes. x86 machines can run in 4 different privileged modes.
And a particular machine instruction can be executed only when in (or above) a particular privileged mode.
Because of this design you get system protection, or sand-boxing of the execution environment.
The kernel is a piece of code which manages your hardware and provides system abstractions, so it needs access to all the machine instructions. And it is the most trusted piece of software, so it should be executed with the highest privilege. Ring level 0 is the most privileged mode, so ring level 0 is also called kernel mode.
User applications are pieces of software which come from third-party vendors, and you can't completely trust them. Someone with malicious intent could write code to crash your system if they had complete access to all the machine instructions. So applications should be given access to a limited set of instructions. Ring level 3 is the least privileged mode, so all your applications run in that mode. Hence ring level 3 is also called user mode.
Note: I am not covering ring levels 1 and 2. They are basically modes with intermediate privilege; device driver code may be executed with these privileges. AFAIK, Linux uses only ring levels 0 and 3, for kernel code execution and user applications respectively.
So any operation happening in kernel mode can be considered as kernel space.
And any operation happening in user mode can be considered as user space.
Trying to give a very simplified explanation
Virtual Memory is divided into kernel space and the user space.
Kernel space is that area of virtual memory where kernel processes will run and user space is that area of virtual memory where user processes will be running.
This division is required for memory access protections.
Whenever a bootloader starts a kernel after loading it to a location in RAM (typically on an ARM-based controller), it needs to make sure that the controller is in supervisor mode with FIQs and IRQs disabled.
The correct answer is: there is no such thing as kernel space and user space. The processor instruction set has special permissions for destructive things like setting the root of the page-table map or accessing hardware device memory.
Kernel code has the highest level privileges, and user code the lowest. This prevents user code from crashing the system, modifying other programs, etc.
Generally kernel code is kept under a different memory map than user code (just as user spaces are kept in different memory maps than each other). This is where the "kernel space" and "user space" terms come from. But that is not a hard and fast rule. For example, since the x86 indirectly requires its interrupt/trap handlers to be mapped at all times, part (or some OSes all) of the kernel must be mapped into user space. Again, this does not mean that such code has user privileges.
Why is the kernel/user divide necessary? Some designers disagree that it is, in fact, necessary. Microkernel architecture is based on the idea that the highest privileged sections of code should be as small as possible, with all significant operations done in user privileged code. You would need to study why this might be a good idea, it is not a simple concept (and is famous for both having advantages and drawbacks).
This demarcation needs architecture support: there are some instructions that can only be executed in privileged mode.
The page tables also hold access details: if a user process tries to access an address which lies in the kernel address range, it will get a privilege-violation fault.
So to enter privileged mode, an instruction like a trap must be executed, which changes the CPU mode to privileged and gives access to those instructions as well as the protected memory regions.
In Linux there are two spaces: the first is user space and the other is kernel space. User space consists only of the user applications you want to run. The kernel provides services such as process management, file management, signal handling, memory management, thread management and many more. If you run an application from user space, that application interacts only with the kernel services, and those services in turn interact with the device drivers that sit between the hardware and the kernel.
The main benefit of separating kernel space and user space is security: all user applications stay in user space and the services stay in kernel space, which is part of why Linux is not easily affected by viruses.

How is the same virtual address in different processes mapped to different physical addresses?

I have taken a course about operating system design and concepts and now I am trying to study the Linux kernel thoroughly. I have a question that I cannot get rid of. In modern operating systems each process has its own virtual address space (VAS) (e.g., 0 to 2^32-1 on 32-bit systems). This provides many advantages. But I am confused at some points of the implementation. Let me explain it by giving an example:
Let's say we have two processes p1, p2;
p1 and p2 have their own VASes. The address 0x023f4a54 is mapped to different physical addresses (PA); how can that be? How is this translation done? I mean, I know the translation mechanism, but I cannot understand how the same address maps to different physical addresses when it comes from different processes' address spaces.
0x023f4a54 in p1's VAS => PA 0x12321321
0x023f4a54 in p2's VAS => PA 0x23af2341 # (random addresses)
A CPU that provides virtual memory lets you set up a mapping from the memory addresses as the CPU sees them to physical memory addresses; typically this is done by a hardware unit called the MMU.
The OS kernel can program that MMU, typically not down to individual addresses, but rather in units of pages (4096 bytes is common). This means the MMU can be programmed so that, e.g., virtual addresses 0x1000-0x2000 are translated to physical addresses 0x20000-0x21000.
The OS keeps one set of these mappings per process, and before it schedules a process to run, it loads that mapping into the MMU before it switches control back to the process. This enables different mappings for different processes, and nothing stops those mappings from mapping the same virtual address to different physical addresses.
All this is transparent as far as the program is concerned, it just executes instructions on the CPU, and as the CPU has been set to virtual memory mode (paged mode), every memory access is translated by the MMU before it goes out on the physical bus to the memory.
The actual implementation details are complicated, but here's some references that might provide more insight;
http://wiki.osdev.org/Paging
http://www.usenix.org/event/usenix99/full_papers/cranor/cranor.pdf
http://ptgmedia.pearsoncmg.com/images/0131453483/downloads/gorman_book.pdf
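A quick way to see per-process mappings in action is fork(): parent and child print the same virtual address for a global variable, yet after the child writes to it the kernel gives the child its own physical page (copy-on-write), so the parent's value is untouched. A small sketch:

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int value = 42;   /* lives at one virtual address in every copy of this program */

int main(void)
{
    printf("before fork: &value = %p, value = %d\n", (void *)&value, value);
    fflush(stdout);                 /* avoid duplicated buffered output after fork */

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {
        /* Child: same virtual address, but this write makes the kernel give the
           child its own physical page (copy-on-write), leaving the parent alone. */
        value = 1000;
        printf("child:  &value = %p, value = %d\n", (void *)&value, value);
        return 0;
    }

    wait(NULL);
    /* Parent still sees 42 even though &value printed the same in both processes. */
    printf("parent: &value = %p, value = %d\n", (void *)&value, value);
    return 0;
}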
Your question confuses a virtual address with using an address as a way of identification, so the first step to understanding is to separate the concepts.
A working example is the C runtime library function sprintf(). When properly declared and called, it is incorporated into a program as a shared object module, along with all the subfunctions it needs. The address of sprintf varies from program to program because the library is loaded in an available free address. For a simple hello world program, sprintf might be loaded at address 0x101000. For a complex program which calculates taxes, it might be loaded at 0x763f8000 (because of all the yucky logic the main program contains goes before the libraries it references). From a system perspective, the shared library is loaded into memory in one place only, but the address window (range of addresses) that each process sees that memory is unique to that executable.
Of course, this is complicated further by security features such as address space layout randomization (ASLR), which randomizes the addresses at which different program sections are loaded into memory, including the shared library mapping.
--- clarification ---
As someone correctly points out, the virtual address mapping of each process is specific to that process, not unlike its set of file descriptors, socket connections, process parent and children, etc. That is, p1 might map address 0x1000 to physical 0x710000, while p2 maps address 0x1000 to a page fault, and p3 maps it to some shared library at physical 0x9f32a000. The virtual address mapping is carefully supervised by the operating system, arguably to provide features such as swapping and paging, but also to provide features like shared code and data, and interprocess shared data.
There are two important data structures dealing with paging: the page table and the TLB. The OS maintains different page tables per process. The TLB is just a cache of the page table.
Now, different CPUs are, well, different. x86 accesses page tables directly, using a special register called CR3 which points to the page table in use. MIPS processors don't know anything about the page table, so the OS must work directly with the TLB.
Some CPUs (e.g: MIPS) keep an identifier in the TLB to separate different processes apart, so the OS can just change a control register when doing a context switch (unless it needs to reuse an identifier). Other CPUs require a full TLB flush in every context switch. So, basically, the OS needs to change some control registers and possibly needs to clear the TLB (do a TLB flush) to allow virtual addresses from different processes map to whatever physical addresses they should.
Thanks for all the answers. The point I actually didn't know was how the same virtual address in different processes doesn't clash with another process's physical counterpart. I found the answer in the link below: each process has its own page table.
http://tldp.org/LDP/tlk/mm/memory.html
This mapping (virtual address to physical address) is handled by the OS and the MMU (see #nos' answer); the point of this abstraction is that p1 "thinks" it's accessing 0x023f4a54 when in reality it's accessing 0x12321321.
If you go back to your class on how programs work at the machine-code level, p1 will expect some variable/function/whatever to be at the same place (e.g. 0x023f4a54) every time it's loaded. The OS mapping of virtual to physical addresses provides this abstraction. In reality it won't always be loaded at the same physical address, but your program doesn't care as long as it's at the same virtual address.
I think it is important to keep in mind that each process has its own set of page tables. I had a hard time understanding this as well when I was thinking that there is a single page table for the whole system.
When a specific process refers to its page table and tries to access a page that has not yet been mapped to a page frame, the OS allocates a separate piece of physical memory for that specific process and maps it to the virtual address.

Resources