I can't understand the relationship between mm_struct.start_data and vm_area_struct.vm_start
vma = find_vma(mm, mm->start_data);
DBG("mm->start_data(%p) vma->vm_start(%p) mm->end_data(%p) vma->vm_end(%p)\n", (uvp_t)mm->start_data, (uvp_t)vma->vm_start, (uvp_t)mm->end_data, (uvp_t)vma->vm_end);
ASSERT(mm->start_data >= vma->vm_start);
I find the corresponding vm_area_struct for the address represented by mm->start_data, and I can't understand why the data is not aligned to the vm_start and vm_end boundaries. I have the following:
|vma->vm_start|----------|mm->start_data|---------|vma->vm_end|------|mm->end_data|
mm->start_data(08049f00) vma->vm_start(08049000) mm->end_data(0804a07a) vma->vm_end(0804a000)
vma->vm_start(08049000), vma->vm_end(0804a000) - indicate the (page-aligned) boundaries of the memory pages in use.
mm->start_data(08049f00), mm->end_data(0804a07a) - indicate the actual, byte-exact location of the data.
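The key point is that vm_start and vm_end are always page-aligned, while start_data and end_data are byte-exact, so the data can spill past vm_end into the following VMA. Here is a hedged sketch (the helper name dump_data_vmas is made up; it assumes a kernel context with mmap_sem held) that walks every VMA the [start_data, end_data) range touches:

#include <linux/mm.h>

/* Illustrative sketch (helper name is made up): vm_start/vm_end are
 * page-aligned while start_data/end_data are byte-exact, so the data
 * segment may overlap more than one VMA. Assumes mmap_sem is held. */
static void dump_data_vmas(struct mm_struct *mm)
{
    unsigned long addr = mm->start_data;
    struct vm_area_struct *vma;

    while (addr < mm->end_data) {
        vma = find_vma(mm, addr);      /* first VMA with vm_end > addr */
        if (!vma || vma->vm_start >= mm->end_data)
            break;

        printk("data [%lx-%lx) overlaps vma [%lx-%lx)\n",
               mm->start_data, mm->end_data, vma->vm_start, vma->vm_end);

        addr = vma->vm_end;            /* continue with the next VMA   */
    }
}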
Read more: Linux Device Drivers, Third Edition - Chapter 15: Memory Mapping and DMA.
On Microsoft Docs I read:
In 64-bit Windows, the theoretical amount of virtual address space is 2^64 bytes (16 exabytes), but only a small portion of the 16-exabyte range is actually used. The 8-terabyte range from 0x000'00000000 through 0x7FF'FFFFFFFF is used for user space, and portions of the 248-terabyte range from 0xFFFF0800'00000000 through 0xFFFFFFFF'FFFFFFFF are used for system space.
Since I have 64-bit pointers, I could possibly construct a pointer that points to some 0xFFFFxxxxxxxxxxxx address.
The site continues:
Code running in user mode has access to user space but does not have access to system space.
If I were able to guess a valid address in system virtual address space, what mechanism prevents me from writing there?
I know about memory protection but that doesn't seem to offer something that distinguishes between user memory and system memory.
According to the comments by #RbMm, this information is stored in the PTE (page table entry). There seems to be a bit which defines whether access is granted from user mode.
This is confirmed by an article on OSR online, which says
Bit Name: User access
The structure itself does not seem to be part of Microsoft symbols
0:000> dt ntdll!_page*
ntdll!_PAGED_LOOKASIDE_LIST
ntdll!_PAGEFAULT_HISTORY
0:000> dt ntdll!page*
0:000> dt ntdll!*pte*
00007fff324fe910 ntdll!RtlpTestHookInitialize
The PTEs are closely supported by the CPU (the MMU, memory management unit, specifically). That's why we find additional information at OSDev, which says
U, the 'User/Supervisor' bit, controls access to the page based on privilege level. If the bit is set, then the page may be accessed by all; if the bit is not set, however, only the supervisor can access it.
In some leaked SDK files, the bit seems to be
unsigned __int64 Owner : 1;
Since the PTE is supported by the CPU, we should find similar things in Linux. And voilà, I see this SO answer which also has the bit:
#define _PAGE_USER 0x004
which exactly matches the information of OSDev.
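As a small illustration of that bit, here is a user-space C sketch that tests the quoted bit positions against a raw PTE value; the value itself is made up, only the bit layout (bit 2 = User/Supervisor, matching _PAGE_USER 0x004) comes from the sources cited above:

#include <stdint.h>
#include <stdio.h>

#define PTE_PRESENT 0x001ULL  /* bit 0: page is mapped                */
#define PTE_WRITE   0x002ULL  /* bit 1: writable                      */
#define PTE_USER    0x004ULL  /* bit 2: User/Supervisor bit ("Owner") */

int main(void)
{
    /* Made-up raw PTE value: present and writable, but the user bit is
     * clear, i.e. a typical system-space (kernel) mapping. */
    uint64_t pte = 0x00000000deadb003ULL;

    printf("present: %d, writable: %d, user-accessible: %d\n",
           !!(pte & PTE_PRESENT), !!(pte & PTE_WRITE), !!(pte & PTE_USER));

    /* With the user bit clear, a user-mode access faults even though the
     * address is otherwise valid - that is the mechanism in question. */
    return 0;
}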
We have a 4-byte memory corruption that always occurs at a fixed offset in the physical memory.
The physical frame number is 0x00a4d and the offset within the page ends in dc0.
Question 1) Based on this information, can we say the physical address of the corruption is 0x00a4d * PAGE_SIZE (4096) + 0xdc0 = 0x00A4DDC0? Programmatically, what is the best way to confirm the physical address? Ours is a ppc64-based system.
Question 2) What would be the best way to track down this memory corruption? The more I read, the more I get lost in the plethora of options. Should I use KASAN, the CONFIG_DEBUG_PAGEALLOC (debug_guardpage_minorder) option, or a HW breakpoint?
Question 3) Since we know the corruption is at a fixed offset, if we were to reserve/block that page, what again is the best option? The two I came across are memmap and reserved memory regions.
Thanks
1.) You are right about the physical address.
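For reference, a quick sanity check of that arithmetic (this assumes 4 KB pages, i.e. a page shift of 12, matching the PAGE_SIZE you quoted):

#include <stdio.h>

int main(void)
{
    unsigned long pfn        = 0x00a4d; /* physical frame number from the report */
    unsigned long page_shift = 12;      /* assumes 4KB pages (PAGE_SIZE 4096)    */
    unsigned long offset     = 0xdc0;   /* offset within the page                */

    unsigned long phys = (pfn << page_shift) + offset;

    printf("physical address: 0x%08lx\n", phys);   /* prints 0x00a4ddc0 */
    return 0;
}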
2.) A HW breakpoint is best if you have that possibility. Do you have the appropriate device (a T32 or whatever) and debug port, and can it place a HW breakpoint on a physical address?
Here is a more generic (and cruder) approach which needs no HW support:
If I remember right from your previous post, you suspect kernel code as the cause of the corruption.
If you have read anything about KASAN, you have probably noticed that the gcc part places hooks around kernel loads and stores. The kernel part provides a kasan_store_bla_bla_bla hook, which checks the correctness of each store. Very likely the default functionality wouldn't help you, but you can integrate your own code into this KASAN store hook, which would:
2.1) Take the virtual address passed to the KASAN store hook.
2.2) Find the corresponding physical address by walking the page tables, like this (a more convenient API exists, but I don't remember the function name; a fuller sketch follows after this list):
pgd_t *pgd = pgd_offset(mm, addr);
pud_t *pud = pud_offset(pgd, addr);
pmd_t *pmd = pmd_offset(pud, addr);
...
As I remember from your previous post, you get the crash in a userspace app, so you will need to check the mm of every process on the task list.
2.3) Compare the found physical address to the given one, and check that the written value is zero (as I remember from your previous post).
2.4) If they match, print backtraces for all cores and stop execution.
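As promised in step 2.2, a fuller sketch of the walk might look roughly like this on a pre-4.12 kernel (newer kernels insert a p4d level between pgd and pud); the helper name is mine, huge pages are ignored, and error handling is minimal:

#include <linux/mm.h>

/* Hedged sketch for a pre-4.12 style kernel (no p4d level): translate a
 * user virtual address to a physical address by walking the page tables.
 * Returns 0 if the address is not mapped. */
static unsigned long virt_to_phys_walk(struct mm_struct *mm, unsigned long addr)
{
    pgd_t *pgd;
    pud_t *pud;
    pmd_t *pmd;
    pte_t *pte;
    unsigned long phys;

    pgd = pgd_offset(mm, addr);
    if (pgd_none(*pgd) || pgd_bad(*pgd))
        return 0;

    pud = pud_offset(pgd, addr);
    if (pud_none(*pud) || pud_bad(*pud))
        return 0;

    pmd = pmd_offset(pud, addr);
    if (pmd_none(*pmd) || pmd_bad(*pmd))
        return 0;

    pte = pte_offset_map(pmd, addr);
    if (!pte_present(*pte)) {
        pte_unmap(pte);
        return 0;
    }

    phys = (pte_pfn(*pte) << PAGE_SHIFT) | (addr & ~PAGE_MASK);
    pte_unmap(pte);
    return phys;
}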
I am aware that the ARM architecture emulates Linux's young and dirty flags by setting them in the page fault handler, as discussed here. But recently, for a small binary, I observed that a Linux PTE in one of the anonymous segments was marked both dirty and not writable. The following Linux PTE state was observed:
- L_PTE_PRESENT : 1
- L_PTE_YOUNG : 1
- L_PTE_DIRTY : 1
- L_PTE_RDONLY : 1
- L_PTE_XN : 0
I couldn't find an explanation for this combination of PTE flags. Does the kernel set this combination for special anonymous VMA segments? What does this combination signify? Any pointers will be helpful. Thanks in advance.
I observed that a Linux PTE in one of the anonymous segments was set to be not writable and dirty... What does this combination signify?
TL;DR - This simply means that the page is not in a backing store and it is read-only.
Dirty just means not written to a backing store (swap, mmap file or inode). Many things such as code are always read from a file, so they are backed by an inode.
If you mmap some read-only memory, then you could get this combination, for example. Other possibilities are a stack guard, allocator run-time buffer overflow detection, and copy-on-write functionality.
These are not normal. For a typical allocation, you will have something backed by swap and only a write will cause the page to become dirty. So the case is probably less frequent but valid.
See: ARM Linux PTE bits
ARM Linux emulate dirty/accessed
There seems to be little documentation on what the young bit means. young is information about what to swap: if a page has not been accessed (is not young) for a prolonged time, it is a good candidate to evict. In contrast, dirty says whether the page must be written back before eviction. If a page is dirty, then it has not been written to a backing store (a swap file or mmap file, etc), and the pager must write the page out first. If it is not dirty (clean), the pager can simply discard the memory and re-use it.
The difference between young and dirty is like should and must.
- L_PTE_PRESENT : 1 - it has physical RAM (not swapped)
- L_PTE_YOUNG : 1 - it has been accessed recently
- L_PTE_DIRTY : 1 - it is different than backing store
- L_PTE_RDONLY : 1 - user space can not write.
- L_PTE_XN : 0 - code can execute.
Not present and dirty seem like an impossible condition for instance, but dirty and read-only is valid.
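For example, the copy-on-write case mentioned above produces exactly this combination; here is a hedged sketch using the generic Linux PTE helpers (an illustration of the idea, not a quote of kernel code):

#include <linux/mm.h>

/* Hedged illustration: a private page that was written (dirty) and is then
 * write-protected for copy-on-write ends up exactly in the
 * "dirty + read-only" state the question observed. */
static pte_t make_cow_pte(pte_t pte)
{
    pte = pte_mkdirty(pte);     /* page now differs from its backing store */
    pte = pte_wrprotect(pte);   /* ...but further writes must fault (COW)  */

    /* At this point pte_dirty(pte) is true while pte_write(pte) is false. */
    return pte;
}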
I was reading section 'Part Id' of the following document (I'm not sure how relevant this document is to kernel 2.6.35, for instance); specifically, it says:
..the DMA address of the memory must be within the dma_mask of the device..
and they recommend passing certain flags, such as GFP_DMA, to kmalloc so that the memory is guaranteed to fall within the DMA mask provided.
However, if the memory is allocated from a cache pool created by kmem_cache_create, with kmem_cache_alloc(.., GFP_ATOMIC), doesn't this fail to meet the requirements outlined in DMA-API.txt?
On the other hand, LDD talks about the __GFP_DMA flag with regard to legacy ISA devices, so I'm not sure it is applicable to PCI/PCIe devices.
This is x86 64-bit platform if it matters:
pci_set_dma_mask(dev, 0xffffffffffffffffULL);
pci_set_consistent_dma_mask(dev, 0xffffffffffffffffULL);
I would appreciate some explanation of this.
Regarding GFP_* flags for DMA:
On x86:
ISA - when using kmalloc(), you need to bitwise-OR GFP_DMA with GFP_KERNEL (or GFP_ATOMIC) because of the following:
GFP_DMA guarantees:
(1) physical addresses are consecutive when get_free_page returns more than one page and
(2) only addresses lower than MAX_DMA_ADDRESS are returned. MAX_DMA_ADDRESS is 16MB on the PC because of ISA constraints
PCI - don't need to use GFP_DMA because there is no MAX_DMA_ADDRESS limit
The dma_mask is checked by the device when calling dma_map_* or dma_alloc_coherent.
dma_alloc_coherent ensures the memory allocated is able to be used by dma_map_*, which gives other benefits too. (The implementation may choose to ignore flags that affect the location of the returned memory, like GFP_DMA.)
You can refer to http://coweb.cc.gatech.edu/sysHackfest/uploads/58/DMA_howto.1.txt
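To tie this back to the question: on x86-64, a PCI(e) driver would normally skip GFP_DMA entirely and let the DMA API enforce the mask. A minimal sketch, assuming a probe-time context and a kernel recent enough to have dma_set_mask_and_coherent (on 2.6.35 you would keep the two pci_set_*_dma_mask calls from the question instead):

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/* Sketch only: set the DMA mask once, then use dma_alloc_coherent() (or
 * dma_map_single()) instead of relying on GFP_DMA with kmalloc() or
 * kmem_cache_alloc(). */
static int example_setup_dma(struct pci_dev *pdev)
{
    void *cpu_addr;
    dma_addr_t dma_handle;

    /* Equivalent to the two pci_set_*_dma_mask() calls in the question. */
    if (dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64)))
        return -EIO;

    /* The returned buffer is guaranteed to satisfy the mask; no GFP_DMA. */
    cpu_addr = dma_alloc_coherent(&pdev->dev, 4096, &dma_handle, GFP_KERNEL);
    if (!cpu_addr)
        return -ENOMEM;

    /* ... program dma_handle into the device ... */
    return 0;
}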
I feel confused about page table management in the Linux kernel.
In Linux kernel space, before the page tables are turned on, the kernel runs in virtual memory with a 1-to-1 mapping. After the page tables are turned on, the kernel has to consult the page tables to translate a virtual address into a physical memory address.
Questions are:
1. At this point, after turning on the page tables, is kernel space still 1GB (from 0xC0000000 to 0xFFFFFFFF)?
2. And in the page tables of a kernel process, are only the page table entries (PTEs) in the range 0xC0000000 - 0xFFFFFFFF mapped? Are PTEs outside this range left unmapped because kernel code never jumps there?
3. Is the mapping of an address the same before and after turning on the page tables?
E.g. before turning on the page tables, the virtual address 0xC00000FF is mapped to physical address 0x000000FF; after turning them on, the mapping does not change and virtual address 0xC00000FF is still mapped to physical address 0x000000FF. The only difference is that after turning on the page tables, the CPU has to consult the page table to translate the virtual address into a physical address, which it did not need to do before.
4. Is the page table in kernel space global and shared across all processes in the system, including user processes?
5. Is this mechanism the same on 32-bit x86 and ARM?
The following discussion is based on 32-bit ARM Linux; the kernel source version is 3.9.
All your questions can be addressed if you go through the procedure of setting up the initial page table (which will be overwritten later by the function paging_init) and turning on the MMU.
When the kernel is first launched by the bootloader, the assembly function stext (in arch/arm/kernel/head.S) is the first function to run. Note that the MMU has not been turned on yet at this moment.
Among other things, the important jobs done by this function stext are:
- create the initial page table (which will be overwritten later by the function paging_init)
- turn on the MMU
- jump to the C part of the kernel initialization code and carry on
Before delving into your questions, it is beneficial to know:
- Before the MMU is turned on, every address issued by the CPU is a physical address.
- After the MMU is turned on, every address issued by the CPU is a virtual address.
- A proper page table must be set up before turning on the MMU, otherwise your code will simply "be blown away".
- By convention, the Linux kernel uses the higher 1GB part of the virtual address space and user land uses the lower 3GB part.
Now the tricky part:
First trick: using position-independent code.
The assembly function stext is linked at address "PAGE_OFFSET + TEXT_OFFSET" (0xCxxxxxxx), which is a virtual address. However, since the MMU has not been turned on yet, the actual address where stext runs is "PHYS_OFFSET + TEXT_OFFSET" (the actual value depends on your hardware), which is a physical address.
So, here is the thing: the code of function stext "thinks" it is running at an address like 0xCxxxxxxx, but it is actually running at address (0x00000000 + some_offset) (say your hardware configures 0x00000000 as the starting point of RAM). So before turning on the MMU, the assembly code needs to be written very carefully to make sure that nothing goes wrong during execution. In fact, a technique called position-independent code (PIC) is used.
To further explain the above, I extract several assembly code snippets:
ldr r13, =__mmap_switched # address to jump to after MMU has been enabled
b __enable_mmu # jump to function "__enable_mmu" to turn on MMU
Note that the above "ldr" instruction is a pseudo instruction which means "get the (virtual) address of function __mmap_switched and put it into r13"
And function __enable_mmu in turn calls function __turn_mmu_on:
(Note that I removed several instructions from function __turn_mmu_on which are essential instructions to the function but not of our interest)
ENTRY(__turn_mmu_on)
mcr p15, 0, r0, c1, c0, 0 # write control reg to enable MMU====> This is where MMU is turned on, after this instruction, every address issued by CPU is "virtual address" which will be translated by MMU
mov r3, r13 # r13 stores the (virtual) address to jump to after MMU has been enabled, which is (0xC0000000 + some_offset)
mov pc, r3 # a long jump
ENDPROC(__turn_mmu_on)
Second trick: identical mapping when setting up initial page table before turning on MMU.
More specifically, the same address range where kernel code is running is mapped twice.
- The first mapping, as expected, maps the physical address range 0x00000000 (again, this address depends on your hardware config) through (0x00000000 + offset) to the virtual range 0xCxxxxxxx through (0xCxxxxxxx + offset).
- The second mapping, interestingly, maps the physical address range 0x00000000 through (0x00000000 + offset) to itself (i.e.: 0x00000000 --> (0x00000000 + offset)).
Why do that?
Remember that before the MMU is turned on, every address issued by the CPU is a physical address (starting at 0x00000000), and after the MMU is turned on, every address issued by the CPU is a virtual address (starting at 0xC0000000).
Because ARM has a pipelined structure, at the moment the MMU is turned on there are still instructions in the pipeline that use (physical) addresses generated by the CPU before the MMU was turned on! To prevent these instructions from blowing up, an identical mapping has to be set up to cater to them.
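To make the "mapped twice" idea concrete, here is a hypothetical C rendering of what the assembly in __create_page_tables achieves; the function name and the flag constant are placeholders, not the real kernel code, and each first-level entry covers one 1MB section:

/* Hypothetical C rendering of the two mappings set up by __create_page_tables;
 * names and the flag value are placeholders, not the real kernel code.
 * Each first-level ("PGD") entry maps one 1MB section on ARM. */
#define SECTION_SHIFT  20
#define SECTION_FLAGS  0x0C0EUL          /* placeholder section attributes */

static void create_initial_mappings(unsigned long *pgd,  /* the 16KB table   */
                                     unsigned long phys,  /* e.g. PHYS_OFFSET */
                                     unsigned long virt,  /* PAGE_OFFSET      */
                                     unsigned long size)  /* size of kernel   */
{
    unsigned long off;

    for (off = 0; off < size; off += (1UL << SECTION_SHIFT)) {
        unsigned long entry =
            ((phys + off) & ~((1UL << SECTION_SHIFT) - 1)) | SECTION_FLAGS;

        /* Mapping 1: kernel virtual address -> physical RAM */
        pgd[(virt + off) >> SECTION_SHIFT] = entry;

        /* Mapping 2: identical mapping, physical address -> itself */
        pgd[(phys + off) >> SECTION_SHIFT] = entry;
    }
}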
Now returning to your questions:
At this time, after turning on page table, kernel space is still 1GB (from 0xC0000000 - 0xFFFFFFFF ) ?
A: I guess you mean turning on the MMU. The answer is yes, kernel space is 1GB (actually it also occupies several megabytes below 0xC0000000, but that is not of interest here).
And in the page tables of the kernel process, only page table entries (PTEs) in the range 0xC0000000 - 0xFFFFFFFF are mapped? PTEs outside this range will not be mapped because kernel code never jumps there?
A: The answer to this question is quite complicated because it involves a lot of details regarding specific kernel configurations.
To fully answer this question, you need to read the part of the kernel source code that sets up the initial page table (the assembly function __create_page_tables) and the function that sets up the final page table (the C function paging_init).
To put it simply, there are two levels of page table in ARM; the first-level page table is the PGD, which occupies 16KB. The kernel first zeros out this PGD during initialization and does the initial mapping in the assembly function __create_page_tables. In that function, only a very small portion of the address space is mapped.
After that, the final page table is set up in the function paging_init, and in this function a quite large portion of the address space is mapped. Say you only have 512MB of RAM: for most common configurations, this 512MB of RAM will be mapped by the kernel section by section (1 section is 1MB). If your RAM is quite large (say 2GB), only a portion of it will be directly mapped.
(I will stop here because there are too many details regarding Question 2)
Is the mapping of an address the same before and after turning on the page tables?
A: I think I've already answered this question in my explanation of "Second trick: identical mapping when setting up the initial page table before turning on the MMU."
4. The page table in kernel space is global and will be shared across all processes in the system, including user processes?
A: Yes and no. Yes because all processes share the same copy (content) of the kernel page table (the higher 1GB part). No because each process uses its own 16KB of memory to store the kernel page table (although the content of the page table for the higher 1GB part is identical for every process).
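A hedged sketch of that "yes and no": when a new process's 16KB first-level table is created, the kernel half is simply copied from a reference table, so the content is shared while the table itself is per-process. The names and the index math (3GB/1GB split, 1MB sections) are illustrative assumptions, not the real pgd_alloc code:

/* Hedged illustration (not the real pgd_alloc code): copy the kernel part
 * of the first-level table so every process sees the same kernel mappings
 * while still owning its private 16KB table. */
static void copy_kernel_entries(unsigned long *new_pgd,
                                const unsigned long *init_pgd)
{
    unsigned int first_kernel_idx = 0xC0000000UL >> 20;   /* index 3072 */
    unsigned int i;

    for (i = first_kernel_idx; i < 4096; i++)
        new_pgd[i] = init_pgd[i];          /* share the kernel mappings */
}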
5. Is this mechanism the same on 32-bit x86 and ARM?
Different architectures use different mechanisms.
When Linux enables the MMU, it is only required that the virtual address range of the kernel space is mapped. This happens very early in booting. At this point, there is no user space. There is no restriction preventing the MMU from mapping multiple virtual addresses to the same physical address. So, when enabling the MMU, it is simplest to have both a virt==phys mapping for the kernel code space and the link-address mapping (the 0xC0000000 mapping) to the same physical memory.
Is the mapping of an address the same before and after turning on the page tables?
If the physical code address is 0xff and the final link address is 0xC00000FF, then we have a duplicate mapping when turning on the MMU. Both 0xff and 0xC00000FF map to the same physical page. A simple jmp (jump) or b (branch) will move from one address space to the other. At this point, the virt==phys mapping can be removed, as we are executing at the final destination address.
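The fixed offset behind that duplicate mapping can be written down directly; a minimal sketch with assumed PAGE_OFFSET/PHYS_OFFSET values matching the example (the real kernel expresses this with the __pa()/__va() macros):

/* Minimal sketch of the fixed-offset ("lowmem") translation described above;
 * the offset values are assumptions matching the example. */
#define EX_PAGE_OFFSET 0xC0000000UL   /* kernel link/virtual base */
#define EX_PHYS_OFFSET 0x00000000UL   /* start of physical RAM    */

static inline unsigned long ex_virt_to_phys(unsigned long virt)
{
    return virt - EX_PAGE_OFFSET + EX_PHYS_OFFSET;  /* 0xC00000FF -> 0x000000FF */
}

static inline unsigned long ex_phys_to_virt(unsigned long phys)
{
    return phys - EX_PHYS_OFFSET + EX_PAGE_OFFSET;  /* 0x000000FF -> 0xC00000FF */
}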
I think the above should answer points 1 through 3. Basically, the booting page tables are not the final page tables.
4. The page table in kernel space is global and will be shared across all processes in the system, including user processes?
Yes, this is a big win with a VIVT cache and for many other reasons.
5. Is this mechanism the same on 32-bit x86 and ARM?
Of course the underlying mechanics are different. They are different even for different processors within these families: 486 vs P4 vs AMD K6; ARM926 vs Cortex-A5 vs Cortex-A8, etc. However, the semantics are very similar.
See: Bootmem#lwn.net - An article on the early Linux memory phase.
Depending on the version, different memory pools and page table mappings are active during boot. The mappings we are all familiar with do not need to be in place until init runs.