How to translate Virtual to Physical Address (WinDbg)?

It seems I don't understand something.
I'm trying to translate VA to PA on Windows 10 (x86) under VirtualBox.
I am using the Microsoft manual for that.
I set up local kernel debugging (bcdedit) and launched CFF Explorer as a test application. Then I started WinDbg, connected to the kernel, and listed the active processes:
!process 0 0
Found my test application:
PROCESS a6bd7900 SessionId: 1 Cid: 0988 Peb: 7ffd9000 ParentCid: 0840
DirBase: ba9ac3c0 ObjectTable: acaeedc0 HandleCount: <Data Not Accessible>
Image: CFF Explorer.exe
Then I got the PEB:
.process /p a6bd7900; !peb 7ffd9000
Implicit process is now a6bd7900
PEB at 7ffd9000
...
ImageBaseAddress: 00400000
...
Ldr 76f99aa0
Ldr.Initialized: Yes
Ldr.InInitializationOrderModuleList: 00881658 . 00887c00
Ldr.InLoadOrderModuleList: 00881728 . 00887bf0
Ldr.InMemoryOrderModuleList: 00881730 . 00887bf8
Base TimeStamp Module
400000 50a8fbd6 Nov 18 18:16:38 2012 C:\Program Files\NTCore\Explorer Suite\CFF Explorer.exe
76e90000 580ee2c9 Oct 25 07:42:49 2016 C:\WINDOWS\SYSTEM32\ntdll.dll
74970000 57cf8f7a Sep 07 06:54:34 2016 C:\WINDOWS\system32\KERNEL32.DLL
...
I typed the "!r" command to print all the registers:
cr0 Value: 00720054
cr2 Value: 00720054
cr3 Value: 00720054
cr4 Value: 00720054
cr4 in bin: 00000000 00001010 11111100 10110110
Bit 5 is set, which means that PAE is enabled.
Then I opened the Memory window and typed the address 400000 to check that the header of CFF Explorer.exe is indeed there in virtual memory.
Then I tried to get the page frame number (PFN) via the !pte extension (following the manual):
lkd> !pte 00400000
VA 00400000
PDE at C0600010 PTE at C0002000
contains 0000000000000000
contains 0000000000000000
not valid
I got a "not valid" result. At the same time, when I tried to get the PFN of kernel32.dll, I got a valid address:
lkd> !pte 74970000
VA 74970000
PDE at C0601D20 PTE at C03A4B80
contains 000000000121B867 contains 800000006F1CE005
pfn 121b ---DA--UWEV pfn 6f1ce -------UR-V
And then I successfully read the header by physical address via "!dc 6f1ce000".
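(For reference, the arithmetic behind that last step - a hedged sketch, since the debugger does it for you: the physical address is the PFN from the PTE shifted left by 12 bits, plus the page offset from the virtual address.)
#include <stdint.h>
/* illustrative only: PFN 0x6f1ce from the PTE above, page offset from the VA 0x74970000 */
uint64_t phys = ((uint64_t)0x6f1ce << 12) | (0x74970000u & 0xFFF);   /* = 0x6f1ce000 */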
Then I checked windbg.exe itself and noticed that kernel32.dll has the same base address there as in CFF Explorer.exe. I always thought that each process has its own mapping of a dependent module into its own memory, but now it seems that's not so.
My questions:
Why do I get "not valid" when trying to translate the address 0x00400000?
Please clarify the situation with kernel32.dll and my doubts about how modules are mapped into each process.
UPDATE 0:
I don't know why, but when I debug the kernel locally I see the same value in ALL the registers. I tried remote kernel debugging instead, and now I see different values for each register:
cr0 Value: 80010033
cr2 Value: 909a301c
cr3 Value: 001a8000
cr4 Value: 000406e9
And now I can't get a translation for kernel32.dll or the other modules either.
The main questions remain open.

!pte may not work without the capital /P when setting the process context, because !pte reads the contents of the page table entries via their virtual addresses in the page-table self-map rooted at nt!MmPteBase (the PXE for this walk is at FFFFF6FB7DBED000 in my case) - these are kernel addresses. Remember that the page tables live in kernel virtual memory: the PTs/PDs/PDPTs/PML4 themselves have kernel virtual addresses, so enabling the user-mode address bypass will not stop kernel addresses from still being translated in hardware.
Without /P, the debugger will naturally use the page table of whatever process is currently on the logical core to access the data at such a virtual address, using the CPU's hardware translation. That works fine for virtual addresses that are not unique to a process, because the same physical page is mapped into all page tables, so it doesn't matter which one is currently in the core. It will not work for user virtual memory, since all user memory is unique to the process (a virtual page maps to a physical page unique to that process), and it will not work for kernel virtual memory that is unique to the process either. An example of kernel virtual memory that is unique to the process is the page tables for user addresses, and the page-table pages at kernel addresses that map the page tables themselves.
/p and /P are used to bypass this: the debugger reads the correct DirBase and walks the page tables in software. /p bypasses hardware translation only for user-mode addresses, while /P bypasses it for kernel-mode addresses as well.
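A hedged sketch of the self-map arithmetic involved, assuming the classic non-randomized x64 layout (PTE region base 0xFFFFF68000000000); the results match the !pte output below:
#include <stdint.h>

#define PTE_BASE 0xFFFFF68000000000ULL   /* assumed non-randomized self-map base */

/* virtual address of the PTE that maps va */
uint64_t pte_va(uint64_t va) { return PTE_BASE + (((va >> 12) & 0xFFFFFFFFFULL) << 3); }
/* applying the self-map again climbs one level: PDE, then PPE, then PXE */
uint64_t pde_va(uint64_t va) { return pte_va(pte_va(va)); }

/* e.g. pte_va(0xFFBE0000) == 0xFFFFF680007FDF00 and
   pde_va(0xFFBE0000) == 0xFFFFF6FB40003FE8, matching the output below */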
lkd> !process 0 0 calc.exe
PROCESS fffffa805d954b10
SessionId: 1 Cid: 3294 Peb: 7fffffdb000 ParentCid: 10f8
DirBase: 27a385000 ObjectTable: fffff8a02a766e60 HandleCount: 81.
Image: calc.exe
lkd> .process /p fffffa805d954b10
Implicit process is now fffffa80`5d954b10
lkd> !pte 0`ffbe0000
VA 00000000ffbe0000
PXE at FFFFF6FB7DBED000 PPE at FFFFF6FB7DA00018 PDE at FFFFF6FB40003FE8 PTE at FFFFF680007FDF00
contains 00C0000263E77867 contains 0000000000000000
pfn 263e77 ---DA--UWEV not valid
----------------------------------------------------------------------------------------
lkd> .process /P fffffa805d954b10
Implicit process is now fffffa80`5d954b10
lkd> !pte 0`ffbe0000
VA 00000000ffbe0000
PXE at FFFFF6FB7DBED000 PPE at FFFFF6FB7DA00018 PDE at FFFFF6FB40003FE8 PTE at FFFFF680007FDF00
contains 00C000000B023867 contains 00D0000759124867 contains 00E0000792FA5867 contains 80F000004D7DD025
pfn b023 ---DA--UWEV pfn 759124 ---DA--UWEV pfn 792fa5 ---DA--UWEV pfn 4d7dd ----A--UR-V
!vtop 0 ffbe0000 will work without /p or /P because it gets the DirBase (the physical address of the PML4) from the EPROCESS structure (the EPROCESS lives in kernel memory that is not process-unique, so it can be read through any page table). It then reads the PML4 page of the correct page table by physical address and walks down the hierarchy, taking the physical address from each entry, mapping that physical page in so it can read the next level, and showing the physical addresses as it goes.
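To make that concrete, here is a hedged sketch (not the debugger's actual code) of such a software walk over a 4-level x64 page table; read_phys() is an assumed helper that reads 8 bytes of physical memory, and present/valid bits and 1 GB pages are ignored for brevity:
#include <stdint.h>

uint64_t read_phys(uint64_t pa);   /* assumed helper */

uint64_t vtop(uint64_t dirbase, uint64_t va)
{
    uint64_t pml4e = read_phys((dirbase & ~0xFFFULL) + (((va >> 39) & 0x1FF) * 8));
    uint64_t pdpte = read_phys((pml4e & 0x000FFFFFFFFFF000ULL) + (((va >> 30) & 0x1FF) * 8));
    uint64_t pde   = read_phys((pdpte & 0x000FFFFFFFFFF000ULL) + (((va >> 21) & 0x1FF) * 8));

    if (pde & (1ULL << 7))   /* large (2 MB) page */
        return (pde & 0x000FFFFFFFE00000ULL) | (va & 0x1FFFFFULL);

    uint64_t pte = read_phys((pde & 0x000FFFFFFFFFF000ULL) + (((va >> 12) & 0x1FF) * 8));
    return (pte & 0x000FFFFFFFFFF000ULL) | (va & 0xFFFULL);
}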
!pte fffffa80`5d954b10 (the EPROCESS address) will work without /p or /P because the physical page backing the EPROCESS happens to be mapped into all page tables at the same virtual address, so it doesn't matter whether the translation is bypassed by the debugger or done in hardware with whatever page table is currently in the core.
It appears to me that you only need to use /p or /P once for the whole debug session, and in order to reset it you have to run .cache nodecodeptes, which you can't do in a local debugging session for some reason:
lkd> .process /P fffffa805d954b10
Implicit process is now fffffa80`5d954b10
lkd> !pte 10000
VA 0000000000010000
PXE at FFFFF6FB7DBED000 PPE at FFFFF6FB7DA00000 PDE at FFFFF6FB40000000 PTE at FFFFF68000000080
contains 00C000000B023867 contains 01300001F5CA7867 contains 014000030E728867 contains 8D400001A4654867
pfn b023 ---DA--UWEV pfn 1f5ca7 ---DA--UWEV pfn 30e728 ---DA--UWEV pfn 1a4654 ---DA--UW-V
------------------------------------------
lkd> .process fffffa8027653b10
Implicit process is now fffffa80`27653b10
lkd> !pte 10000
VA 0000000000010000
PXE at FFFFF6FB7DBED000 PPE at FFFFF6FB7DA00000 PDE at FFFFF6FB40000000 PTE at FFFFF68000000080
contains 12B0000195039867 contains 036000016A13C867 contains 01400001730BD867 contains FFFFFFFF00000480
pfn 195039 ---DA--UWEV pfn 16a13c ---DA--UWEV pfn 1730bd ---DA--UWEV not valid
Proto: VAD
Protect: 4 - ReadWrite
I mean, the documentation does say that the behaviour of /p and /P is the same as .cache forcedecodeuser and .cache forcedecodeptes respectively. Omitting both /p and /P does not perform .cache nodecodeptes but leaves the setting as it is, so once you've set /p on one process it applies to all processes (despite what MSDN says, which I think is wrong); you can then toggle to /P on a new process, and /P will apply to all processes. When you start the session, the current state is .cache nodecodeptes, and in that state translation depends on the page tables that are actually in the logical core of the processor at the time - for a local debug that will be kd.exe's, and for a remote debug it will be those of whatever process owns the thread that has broken into the debugger.

Windbg vtop outputs physical address larger than memory

I'm studying the internals of the Windows kernel, and one of the things I'm looking into is how paging and virtual addresses work in Windows. I was experimenting with WinDbg's !vtop function when I noticed something strange: I was getting an impossible physical address.
For example here is my output of a !process 0 0 command:
PROCESS fffffa8005319b30
SessionId: none Cid: 0104 Peb: 7fffffd8000 ParentCid: 0004
DirBase: a8df3000 ObjectTable: fffff8a0002f6df0 HandleCount: 29.
Image: smss.exe
When I run !vtop a8df3000 fffffa8005319b30, I get the following result:
lkd> !vtop a8df3000 fffffa8005319b30
Amd64VtoP: Virt fffffa80`05319b30, pagedir a8df3000
Amd64VtoP: PML4E a8df3fa8
Amd64VtoP: PDPE 2e54000
Amd64VtoP: PDE 2e55148
Amd64VtoP: Large page mapped phys 1`3eb19b30
Virtual address fffffa8001f07310 translates to physical address 13eb19b30
The problem I have with this is that the VM I'm running this test on only has 4GB, and 0x13eb19b30 is 5,346,794,288...
When I run !dd 13eb19b30 and dd fffffa8001f07310 I get the same result, so Windows seems to be able to access this physical address somehow... Does anyone know how this is done?
I found a post on Cheat Engine where someone seems to have had a similar problem to mine, but no solution was found in that case either.
I see you have also posted this on RESE; I saw it there and didn't understand exactly what you are trying to do.
I see a few discrepancies:
You seem to have used a PFN of a8df3000, but WinDbg seems to be using a PFN of 187000 instead.
By the way, IIRC the PFN should be dirbase & 0xfffff000.
Also, for the virtual address you seem to be using the EPROCESS address of your process - are you sure that this is the right virtual address you want to use?
Also, it seems you are using lkd, which is the local kernel debugging prompt, and I hope you understand that lkd is not real kernel debugging.
So I think I was finally able to come up with a reasonable answer to the problem. It turns out that VMware doesn't actually expose contiguous memory to the VM, but instead segments it into different memory "runs". I was able to confirm this by using volatility:
$ python vol.py -f ~/Desktop/Win7SP1x64-d8737a34.vmss vmwareinfo --verbose | less
Magic: 0xbad1bad1 (Version 1)
Group count: 0x5c
File Offset PhysMem Offset Size
----------- -------------- ----------
0x000010000 0x000000000000 0xc0000000
0x0c0010000 0x000100000000 0xc0000000
Here is a volatility GitHub wiki article that goes into more detail about this: volatility
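A hedged sketch of what those runs imply (struct and field names here are illustrative, not volatility's): a guest-physical address is only meaningful if it falls inside one of the runs, and 0x13eb19b30 falls inside the second run, which starts at guest-physical 0x100000000 - so physical addresses above 4GB can exist even on a small VM, because the guest physical address space has holes in it.
#include <stdint.h>

struct run { uint64_t file_off, phys_off, size; };

static const struct run runs[] = {   /* the two runs from the volatility output above */
    { 0x000010000ULL, 0x000000000ULL, 0xC0000000ULL },
    { 0x0c0010000ULL, 0x100000000ULL, 0xC0000000ULL },
};

/* file offset backing a guest-physical address, or -1 if it falls in a hole */
int64_t phys_to_file_offset(uint64_t pa)
{
    for (unsigned i = 0; i < sizeof(runs) / sizeof(runs[0]); i++)
        if (pa >= runs[i].phys_off && pa - runs[i].phys_off < runs[i].size)
            return (int64_t)(runs[i].file_off + (pa - runs[i].phys_off));
    return -1;
}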

Trying to Analyze Dump File with WinDbg: PEB is Paged Out, Won't load symbols

Hi, I'm trying to use WinDbg to look at a memory.dmp kernel dump file with the aim of diagnosing a crash. When I open the crash file and load the symbols I get the message:
BugCheck A, {2, ff, 4e, fffff801a42ebff2}
CompressedPageDataReader warning: failed to get _SM_PAGE_KEY symbol.
CompressedPageDataReader warning: failed to get _SM_PAGE_KEY symbol.
Probably caused by : ntkrnlmp.exe ( nt!KxWaitForLockOwnerShipWithIrql+12 )
Followup: MachineOwner
---------
0: kd> .reload
Loading Kernel Symbols
..................................CompressedPageDataReader warning: failed to get _SM_PAGE_KEY symbol.
Loading User Symbols
PEB is paged out (Peb.Ldr = 000000e1`114f4018). Type ".hh dbgerr001" for details
Which I assume means it can't load some of the symbols. When I try the !vad approach to fix the PEB paged-out error I get:
0: kd> !vad 000000e1114f4018 1
VAD # ffffca0f084164e0
Start VPN e111400 End VPN e1115ff Control Area 0000000000000000
FirstProtoPte 0000000000000000 LastPte f943916c00000002 Commit Charge 21 (0n33)
Secured.Flink 0 Blink 0 Banked/Extend 0
File Offset 50005
ViewUnmap NoChange PrivateMemory READWRITE
which doesn't correspond to what the internet tells me the result should be.
When I try the !process method I get:
0: kd> !process 000000e1114f4018 1
Searching for Process with Cid == e1114f4018
Invalid Handle: 0x114f4018
***Could not retrieve process handle from the Cid table. Searching...
which is also an error and doesn't load the symbols either. What is wrong - in either the symbol loading or the crash itself, if there is enough info?
NOTE: I've tried the solutions from the MSDN page and they don't work, as noted. Part of the problem is I don't know whether I'm using the 000000e1`114f4018 address from the PEB paged-out error message correctly in the command.
NOTE 2: Here is a link to the crash report from WinDBG. If someone can figure out the cause and explain how they figured it out that would be dandy.
https://www.scribd.com/document/326672131/Crash-Archive
The PEB being paged out is normal. In order for the PEB to be present, the dump must be a full memory dump and the corresponding pages must be resident at the time of the crash.
This mostly doesn't matter because the PEB contains user mode state (user loaded modules, command line, environment variables, etc.) which generally isn't interesting for a kernel mode crash.
What IS interesting is the !analyze -v output, including the kernel mode stack of the faulting thread. Based on what you have provided, we can at least see the crash code:
BugCheck A, {2, ff, 4e, fffff801a42ebff2}
Bugcheck A is an IRQL_NOT_LESS_OR_EQUAL, which means you have an invalid pointer dereference at an elevated IRQL (>= DISPATCH_LEVEL). The first argument is the bad address ("2") and the second argument is the IRQL ("0xFF" - this is WinDbg speak for "interrupts disabled on the processor").
In summary this means that someone has dereferenced address "2", which clearly isn't a good thing. It happened with interrupts disabled on the processor, so you get an IRQL_NOT_LESS_OR_EQUAL. The trick then is to look at the call stack and the faulting instruction and figure out where the "2" came from.

Kernel panic using deferred_io on kmalloced buffer

I'm writing a framebuffer for an SPI LCD display on ARM. Before I complete that, I've written a memory-only driver and trialled it under Ubuntu (Intel, VirtualBox). The driver works fine - I've allocated a block of memory using kmalloc, page aligned it (it's page aligned anyway actually), and used the framebuffer system to create a /dev/fb1. I have my own mmap function if that's relevant (deferred_io ignores it and uses its own by the look of it).
I have set:
info->screen_base = (u8 __iomem *)kmemptr;
info->fix.smem_len = kmem_size;
When I open /dev/fb1 with a test program and mmap it, it works correctly. I can see what is happening by using x11vnc to "share" fb1 out:
x11vnc -rawfb map:/dev/fb1#320x240x16
And view with a vnc viewer:
gvncviewer strontium:0
I've made sure I've no overflows by writing to the entire mmapped buffer and that seems to be fine.
The problem arises when I add in deferred_io. As a test of it, I have a delay of 1 second and the called deferred_io function does nothing except a pr_devel() print. I followed the docs.
Now, the test program opens /dev/fb1 fine, mmap returns ok but as soon as I write to that pointer, I get a kernel panic. The following dump is from the ARM machine actually but it panics on the Ubuntu VM as well:
root#duovero:~/testdrv# ./fbtest1 /dev/fb1
Device opened: /dev/fb3
Screen is: 320 x 240, 16 bpp
Screen size = 153600 bytes
mmap on device succeeded
Unable to handle kernel paging request at virtual address bf81e020
pgd = edbec000
[bf81e020] *pgd=00000000
Internal error: Oops: 5 [#1] SMP ARM
Modules linked in: hhlcd28a(O) sysimgblt sysfillrect syscopyarea fb_sys_fops bnep ipv6 mwifiex_sdio mwifiex btmrvl_sdio firmware_class btmrvl cfg80211 bluetooth rfkill
CPU: 0 Tainted: G O (3.6.0-hh04 #1)
PC is at fb_deferred_io_fault+0x34/0xb0
LR is at fb_deferred_io_fault+0x2c/0xb0
pc : [<c0271b7c>] lr : [<c0271b74>] psr: a0000113
sp : edbdfdb8 ip : 00000000 fp : edbeedb8
r10: edbeedb8 r9 : 00000029 r8 : edbeedb8
r7 : 00000029 r6 : bf81e020 r5 : eda99128 r4 : edbdfdd8
r3 : c081e000 r2 : f0000000 r1 : 00001000 r0 : bf81e020
Flags: NzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user
Control: 10c5387d Table: adbec04a DAC: 00000015
Process fbtest1 (pid: 485, stack limit = 0xedbde2f8)
Stack: (0xedbdfdb8 to 0xedbe0000)
[snipped out hexdump]
[<c0271b7c>] (fb_deferred_io_fault+0x34/0xb0) from [<c00db0c4>] (__do_fault+0xbc/0x470)
[<c00db0c4>] (__do_fault+0xbc/0x470) from [<c00dde0c>] (handle_pte_fault+0x2c4/0x790)
[<c00dde0c>] (handle_pte_fault+0x2c4/0x790) from [<c00de398>] (handle_mm_fault+0xc0/0xd4)
[<c00de398>] (handle_mm_fault+0xc0/0xd4) from [<c049a038>] (do_page_fault+0x140/0x37c)
[<c049a038>] (do_page_fault+0x140/0x37c) from [<c0008348>] (do_DataAbort+0x34/0x98)
[<c0008348>] (do_DataAbort+0x34/0x98) from [<c0498af4>] (__dabt_usr+0x34/0x40)
Exception stack(0xedbdffb0 to 0xedbdfff8)
ffa0: 00000280 0000ffff b6f5c900 00000000
ffc0: 00000003 00000000 00025800 b6f5c900 bea6dc1c 00011048 00000032 b6f5b000
ffe0: 00006450 bea6db70 00000000 000085d6 40000030 ffffffff
Code: 28bd8070 ebffff37 e2506000 0a00001b (e5963000)
---[ end trace 7e5ca57bebd433f5 ]---
Segmentation fault
root#duovero:~/testdrv#
I'm totally stumped - other drivers look more or less the same as mine but I assume they work. Most use vmalloc actually - is there a difference between kmalloc and vmalloc for this purpose?
Confirmed the fix so I'll answer my own question:
deferred_io changes the info mmap to its own, which sets up fault handlers for writes to the video memory pages. In the fault handler it:
checks bounds against info->fix.smem_len, so you must set that
gets the page that was written to.
For the latter, it treats vmalloc differently from kmalloc (by checking info->screen_base to see if it's vmalloc'ed). If you have vmalloc'ed, it uses screen_base as the virtual address. If you have not used vmalloc, it assumes that the address of interest is the physical address in info->fix.smem_start.
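Roughly, the page lookup in the fault path behaves like this (a paraphrased sketch, not the exact kernel source):
#include <linux/fb.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

static struct page *defio_get_page(struct fb_info *info, unsigned long offset)
{
        const void *vaddr = (const void __force *)(info->screen_base + offset);

        if (is_vmalloc_addr(vaddr))
                /* vmalloc'ed buffer: resolve the page through the virtual address */
                return vmalloc_to_page(vaddr);

        /* otherwise fall back to the physical address in smem_start */
        return pfn_to_page((info->fix.smem_start + offset) >> PAGE_SHIFT);
}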
So, to use deferred_io correctly (a minimal setup sketch follows this list):
set screen_base (char __iomem *) and point that to the virtual address.
set info->fix.smem_len to the video buffer size
if you are not using vmalloc, you must set info->fix.smem_start to the video buffer's physical address by using virt_to_phys(vid_buffer);
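Putting those three points together, a minimal setup sketch for a kmalloc'ed buffer might look like this (my_deferred_io, my_defio and my_setup_defio are illustrative names, not the poster's actual driver):
#include <linux/fb.h>
#include <linux/io.h>          /* virt_to_phys() */

static void my_deferred_io(struct fb_info *info, struct list_head *pagelist)
{
        /* push the dirty pages out to the panel here */
}

static struct fb_deferred_io my_defio = {
        .delay       = HZ,     /* 1 second, as in the question */
        .deferred_io = my_deferred_io,
};

static void my_setup_defio(struct fb_info *info, void *vid_buffer, size_t len)
{
        info->screen_base    = (u8 __iomem *)vid_buffer;   /* kernel virtual address */
        info->fix.smem_len   = len;                        /* used for the bounds check */
        info->fix.smem_start = virt_to_phys(vid_buffer);   /* required when not using vmalloc */
        info->fbdefio        = &my_defio;
        fb_deferred_io_init(info);
}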
Confirmed on Ubuntu as fixing the issue.
Really interesting - I'm currently implementing an SPI-based display FB driver too (Sharp Memory LCD display and my VFDHack32 host driver). I am also facing a similar problem where it crashes in deferred_io. Can you share your source code? Mine is at my GitHub repo. P.S. That Memory LCD display is monochrome, so I just pretend to be a color display and check whether the pixel byte is empty (dot off) or not empty (dot on).

Page table in Linux kernel space during boot

I feel confused about page table management in the Linux kernel.
In Linux kernel space, before paging is turned on, the kernel runs in virtual memory with a 1-to-1 mapping mechanism. After paging is turned on, the kernel has to consult the page tables to translate a virtual address into a physical memory address.
Questions are:
At this time, after turning on the page table, is kernel space still 1GB (from 0xC0000000 to 0xFFFFFFFF)?
And in the page tables of the kernel process, are only page table entries (PTEs) in the range 0xC0000000 - 0xFFFFFFFF mapped? Are PTEs outside this range left unmapped because kernel code never jumps there?
Is the mapping of an address the same before and after turning on the page table?
E.g. before turning on the page table, the virtual address 0xC00000FF is mapped to the physical address 0x000000FF; after turning on the page table, that mapping does not change and the virtual address 0xC00000FF is still mapped to the physical address 0x000000FF. The only difference is that after turning on the page table, the CPU has to consult the page table to translate a virtual address into a physical address, which it did not need to do before.
Is the page table in kernel space global, shared across all processes in the system including user processes?
Is this mechanism the same on 32-bit x86 and ARM?
The following discussion is based on 32-bit ARM Linux, with kernel source version 3.9.
All your questions can be addressed if you go through the procedure of setting up the initial page table (which will be overwritten later by the function paging_init) and turning on the MMU.
When the kernel is first launched by the bootloader, the assembly function stext (in arch/arm/kernel/head.S) is the first function to run. Note that the MMU has not been turned on yet at this moment.
Among other things, the important jobs done by this function stext are:
create the initial page table (which will be overwritten later by the function paging_init)
turn on the MMU
jump to the C part of the kernel initialization code and carry on
Before delving into your questions, it is beneficial to know:
Before the MMU is turned on, every address issued by the CPU is a physical address
After the MMU is turned on, every address issued by the CPU is a virtual address
A proper page table should be set up before turning on the MMU, otherwise your code will simply "be blown away"
By convention, the Linux kernel uses the higher 1GB part of the virtual address space and user land uses the lower 3GB part
Now the tricky part:
First trick: using position-independent code.
The assembly function stext is linked at the address "PAGE_OFFSET + TEXT_OFFSET" (0xCxxxxxxx), which is a virtual address. However, since the MMU has not been turned on yet, the actual address where stext runs is "PHYS_OFFSET + TEXT_OFFSET" (the actual value depends on your hardware), which is a physical address.
So here is the thing: the code of function stext "thinks" that it is running at an address like 0xCxxxxxxx, but it is actually running at (0x00000000 + some_offset) (say your hardware configures 0x00000000 as the starting point of RAM). So before turning on the MMU, the assembly code needs to be written very carefully to make sure that nothing goes wrong during execution. In fact, a technique called position-independent code (PIC) is used.
To further explain the above, I extract several assembly code snippets:
ldr r13, =__mmap_switched # address to jump to after MMU has been enabled
b __enable_mmu # jump to function "__enable_mmu" to turn on MMU
Note that the above "ldr" instruction is a pseudo instruction which means "get the (virtual) address of function __mmap_switched and put it into r13"
And function __enable_mmu in turn calls function __turn_mmu_on:
(Note that I removed several instructions from function __turn_mmu_on which are essential to the function but not of interest here.)
ENTRY(__turn_mmu_on)
mcr p15, 0, r0, c1, c0, 0 # write control reg to enable MMU====> This is where MMU is turned on, after this instruction, every address issued by CPU is "virtual address" which will be translated by MMU
mov r3, r13 # r13 stores the (virtual) address to jump to after MMU has been enabled, which is (0xC0000000 + some_offset)
mov pc, r3 # a long jump
ENDPROC(__turn_mmu_on)
Second trick: identity mapping when setting up the initial page table before turning on the MMU.
More specifically, the same address range where the kernel code is running is mapped twice.
The first mapping, as expected, maps the address range 0x00000000 (again, this address depends on hardware config) through (0x00000000 + offset) to 0xCxxxxxxx through (0xCxxxxxxx + offset).
The second mapping, interestingly, maps the address range 0x00000000 through (0x00000000 + offset) to itself (i.e.: 0x00000000 --> (0x00000000 + offset)).
Why do that?
Remember that before the MMU is turned on, every address issued by the CPU is a physical address (starting at 0x00000000), and after the MMU is turned on, every address issued by the CPU is a virtual address (starting at 0xC0000000).
Because ARM has a pipelined architecture, at the moment the MMU is turned on there are still instructions in the pipeline that use (physical) addresses generated by the CPU before the MMU was turned on! To keep these instructions from blowing up, an identity mapping has to be set up to cater to them.
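For a concrete picture, here is a hedged sketch (illustrative names, ARMv7 short-descriptor format) of the 1MB "section" entries such an early page table is built from, covering both the identity mapping and the kernel (0xC0000000) mapping:
#define SECTION_SHIFT 20            /* each first-level entry maps a 1MB section    */
#define SECTION_TYPE  0x2           /* descriptor bits [1:0] = 0b10 -> section entry */

/* the PGD holds 4096 32-bit entries = 16KB, covering 4096 * 1MB = 4GB */
static void map_section(unsigned long *pgd, unsigned long va,
                        unsigned long pa, unsigned long prot)
{
        pgd[va >> SECTION_SHIFT] = (pa & ~((1UL << SECTION_SHIFT) - 1))
                                   | prot | SECTION_TYPE;
}

/* identity mapping: map_section(pgd, phys_addr, phys_addr, prot);
   kernel mapping:   map_section(pgd, PAGE_OFFSET + off, PHYS_OFFSET + off, prot); */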
Now returning to your questions:
At this time, after turning on the page table, is kernel space still 1GB (from 0xC0000000 to 0xFFFFFFFF)?
A: I guess you mean turning on the MMU. The answer is yes, kernel space is 1GB (actually it also occupies several megabytes below 0xC0000000, but that is not of interest here).
And in the page tables of the kernel process, are only page table entries (PTEs) in the range 0xC0000000 - 0xFFFFFFFF mapped? Are PTEs outside this range left unmapped because kernel code never jumps there?
A: The answer to this question is quite complicated, because it involves a lot of details regarding specific kernel configurations.
To fully answer it, you need to read the part of the kernel source code that sets up the initial page table (the assembly function __create_page_tables) and the function that sets up the final page table (the C function paging_init).
To put it simply, there are two levels of page tables in ARM; the first-level page table is the PGD, which occupies 16KB. The kernel first zeros out this PGD during the initialization process and does the initial mapping in the assembly function __create_page_tables. In __create_page_tables, only a very small portion of the address space is mapped.
After that, the final page table is set up in the function paging_init, and in this function a quite large portion of the address space is mapped. Say you only have 512MB of RAM: for most common configurations, this 512MB would be mapped by the kernel section by section (1 section is 1MB). If your RAM is quite large (say 2GB), only a portion of it will be directly mapped.
(I will stop here because there are too many details regarding Question 2)
Is the mapping of an address the same before and after turning on the page table?
A: I think I've already answered this question in my explanation of "Second trick: identity mapping when setting up the initial page table before turning on the MMU."
4. Is the page table in kernel space global, shared across all processes in the system including user processes?
A: Yes and no. Yes, because all processes share the same copy (content) of the kernel page table (the higher 1GB part). No, because each process uses its own 16KB of memory to store the kernel page table (although the content of the page table for the higher 1GB part is identical for every process).
5. Is this mechanism the same on 32-bit x86 and ARM?
Different architectures use different mechanisms.
When Linux enables the MMU, it is only required that the virtual addresses of the kernel space are mapped. This happens very early in booting. At this point, there is no user space. There is no restriction against the MMU mapping multiple virtual addresses to the same physical address. So, when enabling the MMU, it is simplest to have a virt==phys mapping for the kernel code space as well as the link-address mapping (the 0xC0000000 mapping).
Is the mapping of an address the same before and after turning on the page table?
If the physical code address is 0xff and the final link address is 0xc00000ff, then we have a duplicate mapping when turning on the MMU. Both 0xff and 0xc00000ff map to the same physical page. A simple jmp (jump) or b (branch) will move from one address space to the other. At this point, the virt==phys mapping can be removed, as we are executing at the final destination address.
I think the above should answer points 1 through 3. Basically, the booting page tables are not the final page tables.
4. Is the page table in kernel space global, shared across all processes in the system including user processes?
Yes, this is a big win with a VIVT cache and for many other reasons.
5. Is this mechanism the same on 32-bit x86 and ARM?
Of course the underlying mechanics are different. They are different even for different processors within these families; 486 vs P4 vs Amd-K6; ARM926 vs Cortex-A5 vs Cortex-A8, etc. However, the semantics are very similar.
See: Bootmem (lwn.net) - an article on the early Linux memory phase.
Depending on the version, different memory pools and page table mappings are active during boot. The mappings we are all familiar with do not need to be in place until init runs.

Modifying current process' pte through /dev/mem?

AFAIK, /dev/mem exposes physical memory to the user, and it's usually used for device read/write through MMIO. In my use case, I want to modify the current process' PTEs so that two PTEs point to the same physical page. In particular, I move an x86_64 binary above the 4G mark of the virtual address space and mmap virtual space below 4G. I want to make the above-4G PTE and the below-4G PTE point to the same physical page, so that when I write through the above-4G vaddr and read through the below-4G vaddr, I get the same result. Sample code might look like the following:
*(unsigned char *)vaddr1 = 7;   // write through the above-4G vaddr1
val = *(unsigned char *)vaddr2;  // read through the below-4G vaddr2
printf("val should be 7, %d\n", val);
But after I modified the below-4G PTE through /dev/mem to point to the physical page pointed to by the above-4G PTE, the kernel gave me the message below:
BUG: Bad page map in process mmap pte:8000000007eb2067 pmd:07acb067
page:ffffea00001fac80 count:0 mapcount:-1 mapping: (null) index:0x101b7b
page flags: 0x4000000000000014(referenced|dirty)
addr:0000000101b7b000 vm_flags:00100073 anon_vma:ffff880007ab0708 mapping: (null) index:101b7b
Pid: 609, comm: mmap Tainted: G B 3.5.3 #7
Call Trace:
[<ffffffff8107abcc>] ? print_bad_pte+0x1d2/0x1ea
[<ffffffff8107bf18>] ? unmap_single_vma+0x3a0/0x56d
[<ffffffff8107c745>] ? unmap_vmas+0x2c/0x46
[<ffffffff8108106b>] ? exit_mmap+0x6e/0xdd
[<ffffffff8101cc4f>] ? do_page_fault+0x30f/0x348
[<ffffffff81020ce6>] ? mmput+0x20/0xb4
[<ffffffff810256ae>] ? exit_mm+0x105/0x110
[<ffffffff8103bb6c>] ? hrtimer_try_to_cancel+0x67/0x70
[<ffffffff81026b59>] ? do_exit+0x211/0x711
[<ffffffff810272e0>] ? do_group_exit+0x76/0xa0
[<ffffffff8102731c>] ? sys_exit_group+0x12/0x19
[<ffffffff812f3662>] ? system_call_fastpath+0x16/0x1b
BUG: Bad rss-counter state mm:ffff880007a496c0 idx:0 val:-1
BUG: Bad rss-counter state mm:ffff880007a496c0 idx:1 val:1
I guess the kernel checks whether the PTE has been modified, and I did something wrong. Here are vaddr1's and vaddr2's PTEs before/after my PTE rewriting.
above 4G pte: 0x8000000007eb2067
below 4G pte: 0x0000000007ea7067
after rewriting pte...
above 4G pte: 0x8000000007eb2067
below 4G pte: 0x8000000007eb2067
Any idea? Thanks.
Note: Now I know I should release the physical page originally pointed to by vaddr2's PTE, otherwise the kernel will notice that the physical page isn't referenced by any PTE and give those errors. But how? I tried to use __free_page, but got the error below.
BUG: unable to handle kernel paging request at ffffebe00008001c
IP: [<ffffffff8106b908>] __free_pages+0x4/0x2a
PGD 0
Oops: 0000 [#2] PREEMPT SMP
CPU 0
