I am wondering about the difference between these two arguments on the Linux kernel command line:
noexec=off
nosmep
Both relate to denying the kernel the ability to execute code located in userland memory, but I cannot see any difference between them. The error message in dmesg is different, but the behaviour seems to be the same.
Thanks
The noexec parameter controls whether the kernel can use the XD flag (also called the NX flag) in the paging structures to mark pages that are not supposed to be executable. The nosmep parameter, on the other hand, specifies whether SMEP is enabled. Note that nosmep only has an effect when both the kernel version and the processor support SMEP (see: How can I enable/disable kernel kaslr, smep and smap). In addition, XD only has an effect when the kernel is running in 64-bit mode or using PAE (36-bit) paging, and IA32_EFER.NXE is set to 1.
The XD and SMEP flags determine whether the instruction at a given memory location can be fetched. SMEP takes precedence over XD: if SMEP is enabled, supervisor-mode code is not allowed to fetch instructions (for execution) from a User page irrespective of the XD flag. If SMEP is not supported or is disabled, an instruction fetch is not allowed in the following cases:
Supervisor-mode code attempts to fetch instructions from a User or Supervisor page with a translation whose XD flag is 1 in at least one of the paging structures.
User-mode code attempts to fetch instructions from a User page with a translation whose XD flag is 1 in at least one of the paging structures.
User-mode code attempts to fetch instructions from a Supervisor page.
In any of these cases, a page-fault exception (#PF) is raised.
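To see XD in action from user space, here is a minimal sketch (x86-64 Linux and a POSIX toolchain assumed) that jumps into a page deliberately mapped without PROT_EXEC; with NX enforced, the instruction fetch raises #PF and the kernel delivers SIGSEGV to the process:

    /* nx_demo.c - sketch: executing from a non-executable page */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        unsigned char ret_insn = 0xC3;          /* x86 RET */
        /* Readable and writable, but deliberately not executable. */
        void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }
        memcpy(page, &ret_insn, 1);
        puts("jumping into a non-executable page...");
        ((void (*)(void))page)();               /* #PF -> SIGSEGV with NX on */
        puts("no fault: NX is not being enforced"); /* reached only without NX */
        return 0;
    }

On a kernel booted with noexec=off the fault should disappear and the final line is reached (SMEP does not apply here, since it only restricts supervisor-mode fetches).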
A CPU can be either in kernel mode (fully privileged) or in user mode. The kernel requires kernel mode, while applications need to run in user mode. But how can the CPU be in two modes at once?
Processors generally include a mode flag which indicates which mode the processor is in at a given time; that flag need not necessarily do a whole lot. In a simple implementation, the flag might only control whether the processor is allowed to change memory mappings; the processor would include an instruction which simply switches to user mode, and an instruction which simultaneously switches to kernel mode and jumps to a particular address.
If the kernel stores its own code at the aforementioned address and then switches the memory map so that the address in question is write-protected, then user code would be able to ask the kernel to do something by storing its request somewhere and executing the "switch to kernel mode and jump" instruction. The kernel code could then enable its private memory areas, examine the request stored by the user-mode code, act upon the request, disable its private memory areas, switch back to user mode, and return to executing user-mode code.
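As a purely illustrative toy model in C (not real hardware, and not any real kernel's code), the scheme described above might look like this:

    /* mode_flag.c - toy model of a one-bit privilege mechanism */
    #include <stdio.h>

    enum mode { USER, KERNEL };

    static enum mode cpu_mode = KERNEL;

    static void kernel_entry(int request);   /* fixed kernel entry point */

    /* the "switch to kernel mode and jump" instruction */
    static void trap(int request) {
        cpu_mode = KERNEL;
        kernel_entry(request);
    }

    /* a privileged operation: only legal in kernel mode */
    static void change_memory_map(void) {
        if (cpu_mode != KERNEL) {
            puts("fault: privileged operation attempted in user mode");
            return;
        }
        puts("memory map changed");
    }

    static void kernel_entry(int request) {
        printf("kernel: handling request %d\n", request);
        change_memory_map();                  /* allowed: we are in kernel mode */
        cpu_mode = USER;                      /* "switch to user mode" */
    }

    int main(void) {
        cpu_mode = USER;                      /* start executing user code */
        change_memory_map();                  /* denied */
        trap(42);                             /* the only way into kernel mode */
        return 0;
    }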
Can any application, or the system kernel itself, access or even modify the contents of the CPU cache and/or the TLB?
I found a short description of the CPU cache on this website:
"No programming language has direct access to CPU cache. Reading and writing the cache is something done automatically by the hardware; there's NO way to write instructions which treat the cache as any kind of separate entity. Reads and writes to the cache happen as side-effect to all instructions that touch memory."
From this description, it seems there is no way to read/write the contents of the CPU cache/TLB.
However, I also found other information that conflicts with the above: it implies that a debug tool may be able to dump/show the contents of the CPU cache.
I'm confused, so please help me.
I got some answers from another post: dump the contents of TLB buffer of x86 CPU. Thanks adamdunson.
People can read this document about the test registers, but they are only available on very old x86 machines: test registers
Another description, from the wiki https://en.wikipedia.org/wiki/Test_register:
A test register, in the Intel 80486 processor, was a register used by the processor, usually to do a self-test. Most of these registers were undocumented, and used by specialized software. The test registers were named TR3 to TR7. Regular programs don't usually require these registers to work. With the Pentium, the test registers were replaced by a variety of model-specific registers (MSRs).

Two test registers, TR6 and TR7, were provided for the purpose of testing. TR6 was the test command register, and TR7 was the test data register. These registers were accessed by variants of the MOV instruction. A test register may be either the source operand or the destination operand. The MOV instructions are defined in both real-address mode and protected mode. The test registers are privileged resources. In protected mode, the MOV instructions that access them can only be executed at privilege level 0. An attempt to read or write the test registers when executing at any other privilege level causes a general-protection exception. Also, those instructions raise an invalid-opcode exception on any CPU newer than the 80486.
In fact, I'm still hoping for similar functionality on an Intel i7 or i5. Unfortunately, I have not found any related documentation. If anyone has such information, please let me know.
I am trying to understand the mechanism used by Linux to invoke a system call. In particular, I am struggling to understand the vDSO mechanism. Can it be used to invoke all system calls? And what is the difference between the vDSO page and the vsyscall page within the process memory? Are they always there?
For example, using cat /proc/self/maps:
7fff32938000-7fff32939000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
Best,
The vsyscall and vDSO segments are two mechanisms used to accelerate certain system calls in Linux. For instance, gettimeofday is usually invoked through one of them. The first mechanism introduced was vsyscall, added as a way to execute specific system calls which do not need any real level of privilege to run, in order to reduce system-call overhead. Following the previous example, all gettimeofday needs to do is read the kernel's current time. There are applications that call gettimeofday frequently (e.g. to generate timestamps), to the point that they care about even a little bit of overhead. To address this concern, the kernel maps into user space a page containing the current time and a fast gettimeofday implementation (i.e. just a function which reads the time saved in the vsyscall page). Using this virtual system call, the C library can provide a fast gettimeofday which does not have the overhead of the context switch between kernel space and user space usually introduced by the classic system-call mechanisms, INT 0x80 or SYSCALL.
However, this vsyscall mechanism has some limitations: the memory allocated is small and allows only four system calls, and, more importantly, the vsyscall page is statically mapped at the same address in every process, since its location is nailed down in the kernel ABI. This static allocation compromises the benefit of the address-space randomisation commonly used by Linux. An attacker, after compromising an application by exploiting a stack overflow, can invoke a system call from the vsyscall page with arbitrary parameters. All he needs is the address of the system call, which is easily predictable as it is statically allocated (if you run your command again, even with different applications, you'll notice that the address of the vsyscall page does not change).
It would be nice to remove, or at least randomize the location of, the vsyscall page to thwart this type of attack. Unfortunately, applications depend on the existence and exact address of that page, so the page cannot simply be moved.
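To see how predictable this is, here is a minimal sketch (x86-64 Linux assumed) that calls gettimeofday directly through the fixed vsyscall address from the maps output above; on kernels booted with vsyscall=none it will simply segfault:

    /* vsyscall_demo.c - sketch: calling through the fixed vsyscall address */
    #include <stdio.h>
    #include <sys/time.h>

    typedef int (*gtod_fn)(struct timeval *, struct timezone *);

    int main(void) {
        /* The legacy gettimeofday entry sits at the start of the
           [vsyscall] mapping in every process. */
        gtod_fn vsyscall_gtod = (gtod_fn)0xffffffffff600000UL;
        struct timeval tv;
        if (vsyscall_gtod(&tv, NULL) == 0)
            printf("time via vsyscall page: %ld.%06ld\n",
                   (long)tv.tv_sec, (long)tv.tv_usec);
        return 0;
    }

On a modern kernel this may still work, but, as described next, each such call is trapped and emulated rather than executed natively.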
This security issue has been addressed by replacing the system-call instructions at fixed addresses with a special trap instruction. An application trying to call into the vsyscall page traps into the kernel, which then emulates the desired virtual system call in kernel space. The result is a kernel system call emulating a virtual system call which was put there to avoid the kernel system call in the first place: a vsyscall which takes longer to execute but, crucially, does not break the existing ABI. In any case, the slowdown is only seen if the application tries to use the vsyscall page instead of the vDSO. The vDSO (virtual dynamically linked shared object) is a memory area allocated in user space which exposes kernel functionality to user space in a safe manner. It offers the same functionality as the vsyscall page while overcoming its limitations, and was introduced to solve the security threats caused by the vsyscall page.
The vDSO is dynamically allocated, which resolves the security concerns, and it can expose more than four system calls. The vDSO entry points are provided via the glibc library: the linker links in the glibc vDSO functionality, provided that a given routine has an accompanying vDSO version, such as gettimeofday. If your kernel does not have vDSO support when your program executes, a traditional syscall is made instead.
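A rough way to see the difference is to time gettimeofday through libc (vDSO-backed on most x86-64 kernels) against the same call forced through the real syscall path. A minimal sketch, assuming Linux with glibc's syscall(2) wrapper (the iteration count is arbitrary):

    /* gtod_bench.c - sketch: vDSO path vs. forced syscall path */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <sys/time.h>
    #include <time.h>
    #include <unistd.h>

    static double bench(int force_syscall) {
        struct timeval tv;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 1000000; i++) {
            if (force_syscall)
                syscall(SYS_gettimeofday, &tv, NULL); /* always enters the kernel */
            else
                gettimeofday(&tv, NULL);              /* vDSO: stays in user space */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        printf("libc/vDSO   : %.3f s\n", bench(0));
        printf("raw syscall : %.3f s\n", bench(1));
        return 0;
    }

On a vDSO-enabled kernel the first number should be noticeably smaller, since no user/kernel transition takes place.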
Credits and useful links:
Awesome tutorial, how to create your own vDSO.
vsyscall and vDSO, a nice article
useful article and links
What is linux-gate.so.1?
Since the CPU runs in either user or kernel mode, I want to know how the kernel determines this. I mean, when a system call is invoked, the kernel executes it on behalf of the process, but how does the kernel know that it is executing in kernel mode?
You can tell whether you're in user mode or kernel mode from the privilege level set in the code-segment register (CS). Every instruction loaded into the CPU from the memory pointed to by the RIP or EIP register (the instruction pointer register on x86_64 or x86, respectively) is fetched from the segment described in the global descriptor table (GDT) by the current code-segment selector. The lower two bits of that selector hold the current privilege level (CPL) the code is executing at.

When a syscall is made, which is typically done through a software interrupt, the CPU checks the current privilege level and, if it's in user mode, exchanges the current code segment for a kernel-level one as determined by the syscall's software-interrupt gate descriptor, performs a stack switch, and saves the current flags, the user-level CS value, and the RIP value on this new kernel-level stack. When the syscall completes, the user-mode CS value, flags, and instruction pointer (EIP or RIP) are restored from the kernel stack, and a stack switch is made back to the executing process's own stack.
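A quick way to check this from a running program, assuming x86/x86-64 with GCC or Clang inline assembly: read the CS selector and mask off its low two bits. Any ordinary process should print CPL=3:

    /* cpl.c - sketch: reading the current privilege level from CS */
    #include <stdio.h>

    int main(void) {
        unsigned short cs;
        __asm__ volatile ("mov %%cs, %0" : "=r"(cs));
        printf("CS=0x%hx CPL=%d\n", cs, cs & 3);
        return 0;
    }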
Broadly, if it's running kernel code, it's in kernel mode. The transition from user space to kernel mode (say, for a system call) causes a context switch to occur; as part of this context switch, the CPU mode is changed.
Kernel code only executes in kernel mode; there is no way for kernel code to execute in user mode. When an application calls a system call, it generates a trap (a software interrupt), the mode is switched to kernel mode, and the kernel's implementation of the system call is executed. Once it is done, the kernel switches back to user mode and the user application continues processing in user mode.
The term is "supervisor mode", which applies to x86/ARM and many other processors as well.
Read this (which applies only to x86 CPUs):
http://en.wikipedia.org/wiki/Ring_(computer_security)
Rings 0 to 3 are the different privilege levels of an x86 CPU. Normally only rings 0 and 3 are used (kernel and user), but ring 1 has found uses nowadays (e.g., VMware used it to emulate a guest's execution of ring 0). Only ring 0 has the full privilege to run certain privileged instructions (like lgdt or lidt), so a good test at the assembly level is, of course, to execute such an instruction and see whether your program encounters an exception, as in the sketch below.
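Here is that test as a minimal sketch (x86 Linux with GCC or Clang inline assembly assumed): HLT is a privileged instruction, so executing it at CPL 3 raises #GP, which Linux delivers to the process as SIGSEGV:

    /* ring_test.c - sketch: am I allowed to run a privileged instruction? */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void on_fault(int sig) {
        (void)sig;
        puts("caught SIGSEGV: we are not running in ring 0");
        exit(0);
    }

    int main(void) {
        signal(SIGSEGV, on_fault);
        puts("executing HLT...");
        __asm__ volatile ("hlt");
        puts("HLT succeeded: we are in ring 0"); /* never reached from user mode */
        return 0;
    }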
Read this to really identify your current privilege level (look for CPL, which is a pictorial version of Jason's answer):
http://duartes.org/gustavo/blog/post/cpu-rings-privilege-and-protection
It is a simple question and does not need any expert comment as provided above.
The question is how a CPU comes to know whether it is in kernel mode or in user mode.
The answer is the "mode bit".
It is a bit in the status register of the CPU's register set.
When the mode bit is 0, the CPU is considered to be in kernel mode (also called monitor mode, privileged mode, supervisor mode, and many other names).
When the mode bit is 1, the CPU is considered to be in user mode, and the user can now run their own applications without any special kernel intervention.
So simple, isn't it?
How does Windows prevent a user-mode thread from arbitrarily transitioning the CPU to kernel mode?
I understand these things are true:
User-mode threads DO actually transition to kernel-mode when a system call is made through NTDLL.
The transition to kernel-mode is done through processor-specific instructions.
So what is special about these system calls through NTDLL? Why can't a user-mode thread fake it and execute the processor-specific instructions to transition to kernel mode? I know I'm missing some key piece of Windows architecture here... what is it?
You're probably thinking that a thread running in user mode is calling directly into ring 0, but that's not what actually happens. The user-mode thread causes an exception that is caught by the ring-0 code. The user-mode thread is halted and the CPU switches to a kernel/ring-0 thread, which can then inspect the context (e.g., call stack and registers) of the user-mode thread to figure out what to do. Before syscall/sysenter existed, it really was an ordinary exception rather than an instruction designed specifically to invoke ring-0 code.
If you take the advice of the other responses and read the Intel manuals, you'll see that syscall/sysenter don't take any parameters: the OS decides what happens, and you can't call arbitrary code. WinNT uses function numbers that map to whichever kernel-mode function the user-mode code wants executed. For example, NtOpenFile is function 75h on my Windows XP machine; the numbers change all the time, and one of NTDLL's jobs is to map a function call to a function number, put it in EAX, point EDX at the incoming parameters, and then invoke sysenter.
Intel CPUs enforce security using what's called 'Protection Rings'.
There are 4 of these, numbered from 0 to 3. Code running in ring 0 has the highest privileges; it can (practically) do whatever it pleases with your computer. The code in ring 3, on the other hand, is always on a tight leash; it has only limited powers to influence things. And rings 1 and 2 are currently not used for any purpose at all.
A thread running in a higher-privileged ring (such as ring 0) can transition to a lower-privileged ring (such as ring 1, 2 or 3) at will. The transition the other way around, however, is strictly regulated. This is how the security of highly privileged resources such as memory is maintained.
Naturally, your user mode code (applications and all) runs in ring 3 while the OS's code runs in ring 0. This ensures that the user mode threads can't mess with the OS's data structures and other critical resources.
For details on how all this is actually implemented you could read this article. In addition, you may also want to go through Intel Manuals, especially Vol 1 and Vol 3A, which you can download here.
This is the story for Intel processors. I'm sure other architectures have something similar going on.
I think (I may be wrong) that the mechanism it uses for the transition is simple:
User-mode code executes a software interrupt
This (interrupt) causes a branch to a location specified in the interrupt descriptor table (IDT)
The thing that prevents user-mode code from usurping this is as follows: you need to be privileged to write to the IDT, so only the kernel is able to specify what happens when an interrupt is executed.
Code running in user mode (ring 3) can't arbitrarily change to kernel mode (ring 0). It can only do so using special routes: call gates, interrupts, and sysenter vectors. These routes are highly protected, and input is scrubbed so that bad data can't (shouldn't) cause bad behavior.
All of this is set up by the kernel, usually at startup. It can only be configured in kernel mode, so user-mode code can't modify it.
It's probably fair to say that it does it in a (relatively) similar way to Linux. In both cases it's going to be CPU-specific, but on x86 it's probably either a software interrupt with the INT instruction, or via the SYSENTER instruction.
The advantage of looking at how Linux does it is that you can do so without a Windows source licence.
The user-space source part is here at LXR, and for the kernel-space bit, look at entry_32.S and entry_64.S.
Under Linux on x86 there are three different mechanisms: int 0x80, syscall, and sysenter.
To implement the syscall function, the C library calls into the vDSO, a small library built at runtime by the kernel; it uses a different mechanism depending on the CPU and on which system call is being made. The kernel then provides handlers for those mechanisms (where they exist on the specific CPU variant).
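For illustration, here is a minimal sketch of the lowest-level mechanism on x86-64 (GCC or Clang inline assembly assumed); the C library and the vDSO normally hide this, and int 0x80 follows the same pattern with the 32-bit ABI and register set:

    /* raw_syscall.c - sketch: write(2) via the SYSCALL instruction */
    int main(void) {
        static const char msg[] = "hello via SYSCALL\n";
        long ret;
        /* x86-64 ABI: syscall number in rax (1 = write), arguments in
           rdi, rsi, rdx; the kernel clobbers rcx and r11. */
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(1L),             /* __NR_write */
                            "D"(1L),             /* fd = stdout */
                            "S"(msg),            /* buffer */
                            "d"(sizeof msg - 1)  /* length */
                          : "rcx", "r11", "memory");
        return ret < 0;
    }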