Why do I have PTE-not-present pages when swap is off? - linux-kernel

I walk through the vm_area_struct areas of the task and try to get the corresponding struct page * for each mapping, but some pages are not present in RAM: pte_present(*pte) returns 0. I can't understand this behaviour because I have no swap area, so I assumed that all of userspace virtual address space maps to pages present in RAM. Could anybody explain this to me?
/* renamed from get_page(): the kernel already defines get_page() in <linux/mm.h> */
static struct page *lookup_page(unsigned long addr)
{
    pgd_t *pgd;
    pud_t *pud;
    pmd_t *pmd;
    pte_t *pte;
    struct page *pg;
    struct mm_struct *mm = current->mm;

    pgd = pgd_offset(mm, addr);
    if (pgd_none_or_clear_bad(pgd)) {
        goto err;
    }
    pud = pud_offset(pgd, addr);
    if (pud_none(*pud) || pud_bad(*pud)) {
        goto err;
    }
    pmd = pmd_offset(pud, addr);
    if (pmd_none(*pmd) || pmd_bad(*pmd)) {
        goto err;
    }
    pte = pte_offset_map(pmd, addr);
    if (!pte) {
        goto err;
    }
    if (!pte_present(*pte)) {
        PR("pte is not present\n");
        pte_unmap(pte);    /* was leaked on this path */
        goto err;
    }
    pg = pte_page(*pte);
    if (!pg) {
        pte_unmap(pte);
        goto err;
    }
    pte_unmap(pte);
    return pg;    /* was missing */

err:
    return NULL;  /* the err label and return were missing from the extract */
}

pte_none() checks that there is no value in the pte at all; pte_present() checks the presence flag:
#define pte_none(pte) (!pte_val(pte))
#define pte_present(pte) (pte_isset((pte), L_PTE_PRESENT))
So the condition for a swapped-out page would be !pte_present && !pte_none.
In your case you are interpreting all empty ptes as swapped out...
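A minimal sketch of that three-way classification, reusing the walker's pte variable and the PR() logger from the question:

    /* Sketch: distinguish "never populated" from "swapped out" (macros as above). */
    if (pte_none(*pte)) {
        PR("empty pte: page was never faulted in (demand paging)\n");
    } else if (!pte_present(*pte)) {
        PR("pte set but not present: page swapped out (or under migration)\n");
    } else {
        pg = pte_page(*pte);    /* resident in RAM */
    }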

It may also be a page that has simply never been faulted in: the memory region is reserved, but not yet backed by data. Even with swap off, anonymous memory is demand-paged, so a physical page is only allocated (via a minor page fault) on first access.
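A small userspace illustration of that demand-paging behaviour (a hypothetical standalone demo, not from the question; error checks omitted):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4 * 4096;
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    unsigned char vec[4];

    mincore(p, len, vec);                   /* residency map before any access */
    printf("first page resident before touch: %d\n", vec[0] & 1);

    p[0] = 1;                               /* minor fault allocates the page */
    mincore(p, len, vec);
    printf("first page resident after touch: %d\n", vec[0] & 1);
    return 0;
}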

Related

Debugging VM after vmlaunch

Description:
I'm following a tutorial on hypervisor development. After day 4 of the series I cannot make my code run. Right after the __vmx_vmlaunch instruction executes successfully, the virtual machine I'm testing the hypervisor on reboots. I believe this is caused by incorrect settings in the VMCS Host State Area (Chapter 24.5). There is no BSOD, crash, nor error message in WinDbg.
I see two ways to approach this problem. One is to somehow extract more information from WinDbg. The second would require someone to spot what stupid thing I'm doing or missing in the VMCS initialization.
Unfortunately the code in the tutorial is incomplete. I've spent a fair amount of time making sure that everything the tutorial covers is the same in my code. I will highlight the parts that were added by me.
I'm really sorry for such an amount of code. Again, let me emphasise that I believe the problem lies within the HOST fields (a crash inside the guest VM shouldn't crash the host).
int init_vmcs(struct __vcpu_t* vcpu)
{
    log_entry("init_vmcs()\n");
    // Determine the exact VMCS size, which is implementation specific.
    // We expect it to be 4KB, but that is not guaranteed.
    union __vmx_basic_msr_t vmx_basic_msr = { 0 };
    vmx_basic_msr.control = __readmsr(IA32_VMX_BASIC);
    if (vmx_basic_msr.bits.vmxon_region_size != sizeof(struct __vmcs_t)) {
        log_error("Non standard vmcs region size: %llx. Support not yet implemented.\n",
                  vmx_basic_msr.bits.vmxon_region_size);
        log_exit("init_vmcs()\n");
        return -1;
    }
    PHYSICAL_ADDRESS physical_max;
    physical_max.QuadPart = MAXULONG64;
    vcpu->vmcs = MmAllocateContiguousMemory(sizeof(struct __vmcs_t), physical_max);
    if (!vcpu->vmcs) {
        log_error("Failed to allocate vcpu->vmcs (MmAllocateContiguousMemory failed).\n");
        log_exit("init_vmcs()\n");
        return -1;
    }
    RtlSecureZeroMemory(vcpu->vmcs, sizeof(struct __vmcs_t));
    vcpu->vmcs_physical = MmGetPhysicalAddress(vcpu->vmcs).QuadPart;
    // Discover the VMCS revision identifier the processor uses by reading
    // the VMX capability MSR IA32_VMX_BASIC.
    vcpu->vmcs->header.bits.revision_identifier = (unsigned int)vmx_basic_msr.bits.vmcs_revision_identifier;
    vcpu->vmcs->header.bits.shadow_vmcs_indicator = 0;
    // Before loading the vmcs, invoke vmclear to flush data the processor might have cached.
    if (__vmx_vmclear(&vcpu->vmcs_physical) || __vmx_vmptrld(&vcpu->vmcs_physical)) {
        log_error("Failed to flush data or load vmcs (__vmx_vmclear or __vmx_vmptrld failed).\n");
        log_exit("init_vmcs()\n");
        return -1;
    }
    // Initialize VMCS Guest State Area.
    if (__vmx_vmwrite(GUEST_CR0, __readcr0()) ||
        __vmx_vmwrite(GUEST_CR3, __readcr3()) ||
        __vmx_vmwrite(GUEST_CR4, __readcr4()) ||
        __vmx_vmwrite(GUEST_DR7, __readdr(7)) ||
        __vmx_vmwrite(GUEST_RSP, vcpu->guest_rsp) ||
        __vmx_vmwrite(GUEST_RIP, vcpu->guest_rip) ||
        __vmx_vmwrite(GUEST_RFLAGS, __readeflags()) ||
        __vmx_vmwrite(GUEST_DEBUG_CONTROL, __readmsr(IA32_DEBUGCTL)) ||
        __vmx_vmwrite(GUEST_SYSENTER_ESP, __readmsr(IA32_SYSENTER_ESP)) ||
        __vmx_vmwrite(GUEST_SYSENTER_EIP, __readmsr(IA32_SYSENTER_EIP)) ||
        __vmx_vmwrite(GUEST_SYSENTER_CS, __readmsr(IA32_SYSENTER_CS)) ||
        __vmx_vmwrite(GUEST_VMCS_LINK_POINTER, ~0ULL) ||
        __vmx_vmwrite(GUEST_FS_BASE, __readmsr(IA32_FS_BASE)) ||
        __vmx_vmwrite(GUEST_GS_BASE, __readmsr(IA32_GS_BASE))
    ) {
        log_error("Failed to set guest state (__vmx_vmwrite failed).\n");
        log_exit("init_vmcs()\n");
        return -1;
    }
    if (__vmx_vmwrite(CR0_READ_SHADOW, __readcr0()) ||
        __vmx_vmwrite(CR4_READ_SHADOW, __readcr4())
    ) {
        log_error("Failed to set cr0_read_shadow or cr4_read_shadow (__vmx_vmwrite failed).\n");
        log_exit("init_vmcs()\n");
        return -1;
    }
    union __vmx_entry_control_t entry_controls = { 0 };
    entry_controls.bits.ia32e_mode_guest = 1;
    vmx_adjust_entry_controls(&entry_controls);
    __vmx_vmwrite(VM_ENTRY_CONTROLS, entry_controls.control);
    union __vmx_exit_control_t exit_controls = { 0 };
    exit_controls.bits.host_address_space_size = 1;
    vmx_adjust_exit_controls(&exit_controls);
    __vmx_vmwrite(VM_EXIT_CONTROLS, exit_controls.control);
    union __vmx_pinbased_control_msr_t pinbased_controls = { 0 };
    vmx_adjust_pinbased_controls(&pinbased_controls);
    __vmx_vmwrite(PIN_BASED_VM_EXECUTION_CONTROLS, pinbased_controls.control);
    union __vmx_primary_processor_based_control_t primary_controls = { 0 };
    primary_controls.bits.use_msr_bitmaps = 1;
    primary_controls.bits.active_secondary_controls = 1;
    vmx_adjust_primary_processor_based_controls(&primary_controls);
    __vmx_vmwrite(PRIMARY_PROCESSOR_BASED_VM_EXECUTION_CONTROLS, primary_controls.control);
    union __vmx_secondary_processor_based_control_t secondary_controls = { 0 };
    secondary_controls.bits.enable_rdtscp = 1;
    secondary_controls.bits.enable_xsave_xrstor = 1;
    secondary_controls.bits.enable_invpcid = 1;
    vmx_adjust_secondary_processor_based_controls(&secondary_controls);
    __vmx_vmwrite(SECONDARY_PROCESSOR_BASED_VM_EXECUTION_CONTROLS, secondary_controls.control);
    __vmx_vmwrite(GUEST_CS_SELECTOR, __read_cs());
    __vmx_vmwrite(GUEST_SS_SELECTOR, __read_ss());
    __vmx_vmwrite(GUEST_DS_SELECTOR, __read_ds());
    __vmx_vmwrite(GUEST_ES_SELECTOR, __read_es());
    __vmx_vmwrite(GUEST_FS_SELECTOR, __read_fs());
    __vmx_vmwrite(GUEST_GS_SELECTOR, __read_gs());
    __vmx_vmwrite(GUEST_LDTR_SELECTOR, __read_ldtr());
    __vmx_vmwrite(GUEST_TR_SELECTOR, __read_tr());
    __vmx_vmwrite(GUEST_CS_LIMIT, __segmentlimit(__read_cs()));
    __vmx_vmwrite(GUEST_SS_LIMIT, __segmentlimit(__read_ss()));
    __vmx_vmwrite(GUEST_DS_LIMIT, __segmentlimit(__read_ds()));
    __vmx_vmwrite(GUEST_ES_LIMIT, __segmentlimit(__read_es()));
    __vmx_vmwrite(GUEST_FS_LIMIT, __segmentlimit(__read_fs()));
    __vmx_vmwrite(GUEST_GS_LIMIT, __segmentlimit(__read_gs()));
    __vmx_vmwrite(GUEST_LDTR_LIMIT, __segmentlimit(__read_ldtr()));
    __vmx_vmwrite(GUEST_TR_LIMIT, __segmentlimit(__read_tr()));
    struct __pseudo_descriptor_64_t gdtr;
    struct __pseudo_descriptor_64_t idtr;
    _sgdt(&gdtr);
    __sidt(&idtr);
    __vmx_vmwrite(GUEST_GDTR_BASE, gdtr.base_address);
    __vmx_vmwrite(GUEST_GDTR_LIMIT, gdtr.limit);
    __vmx_vmwrite(GUEST_IDTR_BASE, idtr.base_address);
    __vmx_vmwrite(GUEST_IDTR_LIMIT, idtr.limit);
    __vmx_vmwrite(GUEST_CS_BASE, get_segment_base(gdtr.base_address, __read_cs()));
    __vmx_vmwrite(GUEST_DS_BASE, get_segment_base(gdtr.base_address, __read_ds()));
    __vmx_vmwrite(GUEST_SS_BASE, get_segment_base(gdtr.base_address, __read_ss()));
    __vmx_vmwrite(GUEST_ES_BASE, get_segment_base(gdtr.base_address, __read_es()));
    __vmx_vmwrite(GUEST_CS_ACCESS_RIGHTS, read_segment_access_rights(__read_cs()));
    __vmx_vmwrite(GUEST_SS_ACCESS_RIGHTS, read_segment_access_rights(__read_ss()));
    __vmx_vmwrite(GUEST_DS_ACCESS_RIGHTS, read_segment_access_rights(__read_ds()));
    __vmx_vmwrite(GUEST_ES_ACCESS_RIGHTS, read_segment_access_rights(__read_es()));
    __vmx_vmwrite(GUEST_FS_ACCESS_RIGHTS, read_segment_access_rights(__read_fs()));
    __vmx_vmwrite(GUEST_GS_ACCESS_RIGHTS, read_segment_access_rights(__read_gs()));
    __vmx_vmwrite(GUEST_LDTR_ACCESS_RIGHTS, read_segment_access_rights(__read_ldtr()));
    __vmx_vmwrite(GUEST_TR_ACCESS_RIGHTS, read_segment_access_rights(__read_tr()));
    __vmx_vmwrite(GUEST_LDTR_BASE, get_segment_base(gdtr.base_address, __read_ldtr()));
    __vmx_vmwrite(GUEST_TR_BASE, get_segment_base(gdtr.base_address, __read_tr()));
    // Initialize VMCS Host State Area.
    __vmx_vmwrite(HOST_CR0, __readcr0()); // Added by me
    __vmx_vmwrite(HOST_CR3, __readcr3()); // Added by me
    __vmx_vmwrite(HOST_CR4, __readcr4()); // Added by me
    // The RPL and TI fields in host selector fields must be cleared.
    unsigned short host_selector_mask = 7;
    __vmx_vmwrite(HOST_CS_SELECTOR, __read_cs() & ~host_selector_mask);
    __vmx_vmwrite(HOST_SS_SELECTOR, __read_ss() & ~host_selector_mask);
    __vmx_vmwrite(HOST_DS_SELECTOR, __read_ds() & ~host_selector_mask);
    __vmx_vmwrite(HOST_ES_SELECTOR, __read_es() & ~host_selector_mask);
    __vmx_vmwrite(HOST_FS_SELECTOR, __read_fs() & ~host_selector_mask);
    __vmx_vmwrite(HOST_GS_SELECTOR, __read_gs() & ~host_selector_mask);
    __vmx_vmwrite(HOST_TR_SELECTOR, __read_tr() & ~host_selector_mask);
    __vmx_vmwrite(HOST_TR_BASE, get_segment_base(gdtr.base_address, __read_tr()));
    __vmx_vmwrite(HOST_GDTR_BASE, gdtr.base_address);
    __vmx_vmwrite(HOST_IDTR_BASE, idtr.base_address);
    unsigned __int64 vmm_stack = (unsigned __int64)vcpu->vmm_context->stack + VMM_STACK_SIZE;
    if (__vmx_vmwrite(HOST_RSP, vmm_stack) ||
        __vmx_vmwrite(HOST_RIP, vmm_entrypoint)
    ) {
        log_error("Failed to set host_rsp, host_rip (__vmx_vmwrite failed).\n");
        log_exit("init_vmcs()\n");
        return -1;
    }
    log_exit("init_vmcs()\n");
    return 0;
}
void init_logical_processor(struct __vmm_context_t *vmm_context, void *guest_rsp)
{
    log_entry("init_logical_processor()\n");
    unsigned long cur_processor_number = KeGetCurrentProcessorNumber();
    struct __vcpu_t* vcpu = vmm_context->vcpu_table[cur_processor_number];
    log_debug("vcpu: %llx, guest_rsp: %llx\n", cur_processor_number, guest_rsp);
    vcpu->guest_rsp = guest_rsp;
    vcpu->guest_rip = (void*) guest_entry_stub;
    adjust_control_registers();
    if (enable_vmx_operation() != 0) {
        log_error("Failed to enable_vmx_operation.\n");
        goto _end;
    }
    if (!vm_has_cpuid_support()) {
        log_error("VMX operation is not supported by the processor.\n");
        goto _end;
    }
    log_success("VMX operation is supported by the processor.\n");
    if (init_vmxon(vcpu)) {
        log_error("Failed to initialize vmxon region.\n");
        goto _end;
    }
    log_success("Initialized vmxon region.\n");
    unsigned char vmxon_res = __vmx_on(&vcpu->vmxon_physical);
    if (vmxon_res != 0) {
        log_error("Failed to put vcpu into VMX operation. Error code: %d\n", vmxon_res);
        goto _end;
    }
    log_success("vmx_on succeeded.\n");
    if (init_vmcs(vcpu)) {
        log_error("Failed to initialize vmcs.\n");
        goto _end;
    }
    log_success("Initialized vmcs.\n");
    unsigned char vmlaunch_res = vmxlaunch(); // just a wrapper over __vmx_vmlaunch
    if (vmlaunch_res != 0) {
        goto _end;
    }
_end:
    log_exit("init_logical_processor()\n");
}
vmm_entrypoint proc
    int 3 ; added by me
vmm_entrypoint endp

guest_entry_stub proc
    mov rax, 1337h
    hlt
guest_entry_stub endp
Update
I re-read the Intel manual section on VM-entry checks and found that my init_vmcs function wasn't setting HOST_FS_BASE and HOST_GS_BASE. After adding these fields it finally worked and trapped inside vmm_entrypoint.
However, I would still love to hear a solution for debugging such unexpected shutdowns.
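For reference, the missing writes presumably mirror the guest-side ones already in init_vmcs above; a sketch using the same __vmx_vmwrite wrapper and MSR constants:

    // Host-state fields whose absence failed the VM-entry checks.
    __vmx_vmwrite(HOST_FS_BASE, __readmsr(IA32_FS_BASE));
    __vmx_vmwrite(HOST_GS_BASE, __readmsr(IA32_GS_BASE));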

Copy structure with included user pointers from user space to kernel space (copy_from_user)

I want to transfer a transaction structure, which contains a user-space pointer to an array, to the kernel by using copy_from_user.
The goal is to get access to the array elements in kernel space.
User space side:
I allocate an array of _sg_param structures in user space and put the address of this array into a transaction structure (line (*)).
Then I transfer the transaction structure to the kernel via ioctl().
Kernel space side:
On executing this ioctl, the complete transaction structure is copied to kernel space (line (**)). Next, kernel memory is allocated to hold the array (line (***)). Then I try to copy the array from user space into the newly allocated kernel memory (line (****)), and here my problems start:
The kernel oopses during execution of this copy. dmesg shows the following output:
[ 54.443106] Unhandled fault: page domain fault (0x01b) at 0xb6f09738
[ 54.448067] pgd = ee5ec000
[ 54.449465] [b6f09738] *pgd=2e9d7831, *pte=2d56875f, *ppte=2d568c7f
[ 54.454411] Internal error: : 1b [#1] PREEMPT SMP ARM
Any ideas?
Following is a simplified extract of my code:
// structure declaration
typedef struct _sg_param {
    void *seg_buf;
    int seg_len;
    int received;
} sg_param_t;

struct transaction {
    ...
    int num_of_elements;
    sg_param_t *pbuf_list; // Array of sg_param structures
    ...
} trans;

// user space side:
if ((pParam = (sg_param_t *) malloc(NR_OF_STRUCTS * sizeof(sg_param_t))) == NULL) {
    return -ENOMEM;
}
else {
    trans.num_of_elements = NR_OF_STRUCTS;
    trans.pbuf_list = pParam; // (*)
}

rc = ioctl(dev->fd, MY_CMD, &trans);
if (rc < 0) {
    return rc;
}
// kernel space side
static long ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
    struct transaction trans;                  // declaration was missing from the extract
    void __user *arg_ptr = (void __user *)arg;

    // Perform the specified command
    switch (cmd) {
    case MY_CMD:
    {
        struct transaction __user *user_trans = (struct transaction __user *)arg_ptr;
        if (copy_from_user(&trans, arg_ptr, sizeof(trans)) != 0) { // (**)
            k_err("Unable to copy transfer info from userspace for "
                  "AXIDMA_DMA_START_DMA.\n");
            return -EFAULT;
        }
        int size = trans.num_of_elements * sizeof(sg_param_t);
        if (trans.pbuf_list != NULL) {
            // Allocate kernel memory for buf_list
            trans.pbuf_list = (sg_param_t *) kmalloc(size, GFP_KERNEL); // (***)
            if (trans.pbuf_list == NULL) {
                k_err("Unable to allocate array for buffers.\n");
                return -ENOMEM;
            }
            // Now copy pbuf_list from user space to kernel space
            if (copy_from_user(trans.pbuf_list, user_trans->pbuf_list, size) != 0) { // (****)
                kfree(trans.pbuf_list);
                return -EFAULT;
            }
        }
        break;
    }
    }
You're directly accessing userspace data (user_trans->pbuf_list) in line (****). You should use the pointer you've already copied to kernel space (trans.pbuf_list).
The code for this would normally look something like:
sg_param_t *local_copy = kmalloc(size, GFP_KERNEL);
// TODO check it succeeded
if (copy_from_user(local_copy, trans.pbuf_list, size))
    return -EFAULT;
trans.pbuf_list = local_copy;
// ... use trans.pbuf_list ...
Note that you also need to validate trans.num_of_elements (0 would make kmalloc return ZERO_SIZE_PTR, and a too-large value might enable a DoS; the size multiplication can also overflow).
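Putting those pieces together, a minimal corrected handler might look like this (names taken from the question; MAX_ELEMENTS is a hypothetical sanity bound you would define for your driver):

    case MY_CMD:
    {
        sg_param_t *local_copy;
        size_t size;

        if (copy_from_user(&trans, arg_ptr, sizeof(trans)))
            return -EFAULT;
        if (trans.num_of_elements <= 0 ||
            trans.num_of_elements > MAX_ELEMENTS)   /* hypothetical bound */
            return -EINVAL;
        size = trans.num_of_elements * sizeof(sg_param_t);
        local_copy = kmalloc(size, GFP_KERNEL);
        if (!local_copy)
            return -ENOMEM;
        /* trans.pbuf_list still holds the user pointer here: copy through
         * it once, then replace it with the kernel-space copy. */
        if (copy_from_user(local_copy, trans.pbuf_list, size)) {
            kfree(local_copy);
            return -EFAULT;
        }
        trans.pbuf_list = local_copy;
        break;
    }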

How to use mmap for devmem on MT7620n

I'm trying to access the GPIOs of an MT7620n via register settings. So far I can access them by using /sys/class/gpio/..., but that is not fast enough for me.
On page 84 of the MT7620 Programming Guide you can see that the GPIO base address is 0x10000600 and the individual registers have an offset of 4 bytes each.
MT7620 Programming Guide
Something like:
devmem 0x10000600
from the shell works absolutely fine, but I cannot access it from inside a C program.
Here is my code:
#define GPIOCHIP_0_ADDDRESS 0x10000600 // base address
#define GPIO_BLOCK 4

volatile unsigned long *gpiochip_0_Address;

int gpioSetup()
{
    int m_mfd;
    if ((m_mfd = open("/dev/mem", O_RDWR)) < 0)
    {
        printf("ERROR open\n");
        return -1;
    }
    gpiochip_0_Address = (unsigned long*)mmap(NULL, GPIO_BLOCK, PROT_READ|PROT_WRITE, MAP_SHARED, m_mfd, GPIOCHIP_0_ADDDRESS);
    close(m_mfd);
    if(gpiochip_0_Address == MAP_FAILED)
    {
        printf("mmap() failed at physical address:%d %s\n", GPIOCHIP_0_ADDDRESS, strerror(errno));
        return -2;
    }
    return 0;
}
The output I get is:
mmap() failed at physical address:268436992 Invalid argument
What do I have to take care of? Do I have to make the memory accessible first? I'm running as root.
Thanks
EDIT
Peter Cordes is right, thank you so much.
Here is my final solution; if somebody finds a bug, please tell me ;)
#define GPIOCHIP_0_ADDDRESS 0x10000600 // base address

volatile unsigned long *gpiochip_0_Address;

int gpioSetup()
{
    const size_t pagesize = sysconf(_SC_PAGE_SIZE);
    unsigned long gpiochip_pageAddress = GPIOCHIP_0_ADDDRESS & ~(pagesize-1); // round down to the containing page boundary
    const unsigned long gpiochip_0_offset = GPIOCHIP_0_ADDDRESS - gpiochip_pageAddress; // offset of the registers within that page
    volatile unsigned long *page_virtual_start_Address; // declaration was missing
    int m_mfd;

    if ((m_mfd = open("/dev/mem", O_RDWR)) < 0)
    {
        printf("ERROR open\n");
        return -1;
    }
    page_virtual_start_Address = (unsigned long*)mmap(NULL, pagesize, PROT_READ|PROT_WRITE, MAP_SHARED, m_mfd, gpiochip_pageAddress);
    close(m_mfd);
    if(page_virtual_start_Address == MAP_FAILED)
    {
        printf("ERROR mmap\n");
        printf("mmap() failed at physical address:%d %s\n", GPIOCHIP_0_ADDDRESS, strerror(errno)); // %s for strerror, not %d
        return -2;
    }
    gpiochip_0_Address = page_virtual_start_Address + (gpiochip_0_offset/sizeof(long));
    return 0;
}
mmap's file offset argument has to be page-aligned, and that's one of the documented reasons for mmap to fail with EINVAL.
0x10000600 is not a multiple of 4k, or even 1k, so that's almost certainly your problem. I don't think any systems have pages as small as 512B.
mmap a whole page that includes the address you want, and access the MMIO registers at an offset within that page.
Either hard-code it, or maybe do something like GPIOCHIP_0_ADDDRESS & ~(page_size-1) to round down an address to a page-aligned boundary. You should be able to do something that gets the page size as a compile-time constant so it still compiles efficiently.
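Once the page-aligned mapping succeeds (as in the EDIT above), register access is plain pointer arithmetic on the mapped page; a hedged usage sketch, with made-up register indices (check the programming guide for the real offsets):

    // Registers are 4 bytes apart; with a 32-bit unsigned long, each array
    // index steps one register. Indices 0 and 1 here are illustrative only.
    unsigned long value = gpiochip_0_Address[0];   // read register at base + 0x00
    gpiochip_0_Address[1] = value | (1UL << 2);    // write register at base + 0x04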

Setting a PTE to point a different physical page - Linux Kernel

Is it possible to make a PTE point to a different physical page?
Say I'm currently in kernel mode, in the context of some process A that currently has the address 400k mapped to physical page no. 5.
Can I make that address (400k) map to physical page no. 6 instead? (For example.)
If so, how?
I tried using these APIs:
set_pte / clear_pte / mk_pte / pfn_to_page
but no luck so far.
EDIT:
Some code:
static pte_t *walk_page_table(struct mm_struct *mm, size_t addr)
{
    pgd_t *pgd;
    pud_t *pud;
    pmd_t *pmd;
    pte_t *ptep;

    pgd = pgd_offset(mm, addr);
    if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
        return NULL;
    pud = pud_offset(pgd, addr);
    if (pud_none(*pud) || unlikely(pud_bad(*pud)))
        return NULL;
    pmd = pmd_offset(pud, addr);
    if (pmd_none(*pmd))
        return NULL;
    ptep = pte_offset_map(pmd, addr);   /* caller must pte_unmap() this */
    return ptep;
}
bool change_pte(size_t address, size_t new_page_phys_address)
{
    pte_t *p = walk_page_table(current->mm, address);
    pte_t new_pte;

    if (!p)
        return false;
    new_pte = pfn_pte(new_page_phys_address >> PAGE_SHIFT,
                      PAGE_KERNEL_EXEC);
    set_pte(p, new_pte);
    pte_unmap(p);   /* balance the pte_offset_map() in the walker */
    __flush_tlb_one(address);
    return true;
}
Some test code:
struct pt_regs* regs = task_pt_regs(current);
hexDump("someData", regs->ip, some_size);
void * newPage = kmalloc(PAGE_SIZE,GFP_KERNEL);
memset(newPage,0,PAGE_SIZE);
change_pte(regs->ip, virtual_to_physical(newPage));
hexDump("post someData", regs->ip, some_size);
Let me share one suggestion with you. There is no guarantee that kmalloc will return a page-aligned address (I don't know for sure, but you can check it). An illustration:

     page1      page2            remapped uspace page
  |     000|000      |           |     000|
  |     000|000      |           |     000|
        ^                              ^
        |                              |
     newPage                        regs->ip
Try to use __get_free_page() instead of kmalloc: it returns a page-aligned page. If that doesn't help, I will try to repeat your experiment.
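A sketch of that substitution in the test code above, using the kernel's virt_to_phys() in place of the question's virtual_to_physical() helper (error handling minimal):

    /* __get_free_page() returns a page-aligned page; a kmalloc() allocation
     * is not guaranteed to start on a page boundary. */
    unsigned long newPage = __get_free_page(GFP_KERNEL);
    if (!newPage)
        return;    /* allocation failed */
    memset((void *)newPage, 0, PAGE_SIZE);
    change_pte(regs->ip, virt_to_phys((void *)newPage));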

How to set write protection on all pages in memory that a process is using?

I want to write a function that sets write protection on every page in a process's virtual memory. At the moment, I do it like this:
mm = task->mm;
spin_lock(&mm->page_table_lock);
for (vma = mm->mmap; vma; vma = vma->vm_next) {   // loop over all VMAs
    addr = vma->vm_start;
    vma->vm_flags &= ~VM_WRITE;
    while (addr < vma->vm_end) {                  // loop over each page
        pgd = pgd_offset(mm, addr);
        if (pgd_none(*pgd)) { addr += PAGE_SIZE; continue; }
        pud = pud_offset(pgd, addr);
        if (pud_none(*pud)) { addr += PAGE_SIZE; continue; }
        pmd = pmd_offset(pud, addr);
        if (pmd_none(*pmd)) { addr += PAGE_SIZE; continue; }
        pte = pte_offset_map(pmd, addr);
        if (pte_present(*pte))
            *pte = pte_wrprotect(*pte);
        pte_unmap(pte);     // was missing
        addr += PAGE_SIZE;  // was missing: the loop never advanced on this path
    }
}
spin_unlock(&mm->page_table_lock);
...........
...........
I work on Ubuntu 14.04 (kernel 3.13). My code runs at kernel level.
But it only sets write protection on part of the memory; it doesn't set write protection on all pages that the process is using.
Can somebody help me, please?
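Two things worth checking, offered as hedged guesses rather than a confirmed fix: pages that were never faulted in have no PTE to write-protect (the demand-paging behaviour discussed in the first question above), and rewritten PTEs do not take effect while stale TLB entries survive. A minimal sketch of the missing flush, assuming the loop above:

    /* After rewriting the PTEs, drop the cached translations for this mm
     * (flush_tlb_mm() is declared in <asm/tlbflush.h>) so the cleared
     * write bits are actually honoured by the CPU. */
    flush_tlb_mm(mm);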
