set_memory_uc() function of linux API - linux-kernel

I know a virtual address from user space, and I want to uncache that memory. I am trying to use the API set_memory_uc from a kernel module.
I looked at the kernel code and found that this API accepts a virtual address as its first argument, but internally it uses __pa() to convert it to a physical address. However, __pa() is only valid for kernel-space virtual addresses, right? So I wonder whether I can call set_memory_uc with a user-space virtual address as the argument.
What I expect is a function like:
uncache(VA), where VA is a user-space virtual address; the function uncaches the one page containing that address (I will mlock the page first).
Thank you!
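For reference, a helper like the uncache(VA) described above is usually built by pinning the user page first and then changing the attribute of its kernel alias, since set_memory_uc() wants a kernel virtual address. A minimal, untested kernel-module sketch (the name uncache_user_page and the exact GUP flags are assumptions):

```c
/* Hypothetical kernel-module helper (sketch, not tested): pin one user
 * page, then mark its kernel linear-map alias uncached. set_memory_uc()
 * expects a kernel virtual address, so we pass page_address() of the
 * pinned page rather than the user VA itself. */
#include <linux/mm.h>
#include <linux/set_memory.h>

static int uncache_user_page(unsigned long user_va)
{
    struct page *page;
    long ret;

    /* Pin the page backing the user VA (write access, one page). */
    ret = get_user_pages_fast(user_va & PAGE_MASK, 1, FOLL_WRITE, &page);
    if (ret != 1)
        return ret < 0 ? ret : -EFAULT;

    /* Uncache via the kernel's linear mapping of the same frame.
     * page_address() is only meaningful for lowmem pages here. */
    ret = set_memory_uc((unsigned long)page_address(page), 1);

    put_page(page);
    return ret;
}
```

Note that this changes only the kernel linear-map alias; whether the user PTE also needs its caching attribute changed depends on the architecture's aliasing rules for set_memory_*.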

Related

How to share a variable between two child processes (in both kernel and user space) in Linux?

I have a scenario where process A forks two child processes, PB and PC. A variable X will be modified by PB and PC in user space, and X will also be modified by PB and PC in kernel space.
What I have done is make PB allocate a user virtual address with aligned_alloc and pass it into the kernel (by hooking a system call), then use get_user_pages + kmap to get a kernel virtual address. In this way, X can be shared between PB's kernel and user space.
My question is how to let PC modify X in both kernel and user space.
The following is what I propose, though it still remains questionable:
The function page_to_phys can convert a struct page into a physical address, and PC can then get a user virtual address by mapping the device /dev/mem at that physical address. In this way, PC can modify X in user space.
PC can get a kernel virtual address by calling kmap on the struct page obtained earlier by get_user_pages. In this way, PC can modify X in kernel space. But I'm not sure whether a struct page is context-dependent or not. Can PC use the struct page that was obtained by PB?

QEMU-KVM: Is a Guest Physical Address (GPA) the same as QEMU-KVM's virtual address on the host?

For example, suppose a process on the guest has data allocated at 0x8000 in guest virtual memory, and that address is mapped to 0x4000 in the guest physical address space. Is the data then located at 0x4000 in the virtual address space of the host-side QEMU-KVM process for that VM? In other words, if I write new code inside QEMU's source (so I can use QEMU-KVM's page table), compile, and run it, can I access the guest process's data directly using the guest physical address that corresponds to the guest virtual address?
No, the page table mapping used for the user-space QEMU process is unrelated to either the page tables used by the guest itself for guest virtual -> guest physical address mapping, or to the page tables used by the KVM kernel code for guest physical address -> host physical address mapping for RAM.
When you're writing code in QEMU that needs to access guest memory, you should do so using the APIs that QEMU provides for that, which deal with converting the guest address to a host virtual address for RAM and also with handling the case when that guest address has an emulated device rather than RAM. The QEMU developer internals docs have a section on APIs for loads and stores, and the functions are also documented in doc comments in the header files.
Usually the best advice is "find some existing code in QEMU which is doing something basically the same as what you're trying to do, and follow that as an example".
I'm assuming your question is about the x86 platform.
KVM uses extended page tables (EPT) to map guest physical addresses (GPA) to host physical addresses; the mapping from a GPA to QEMU's host virtual addresses (HVA) is kept in the memory-slot/memory-region bookkeeping that QEMU registers with KVM. To find the HVA corresponding to a GPA, you look up the memory region that contains it.
When you need to read a guest virtual address, you can use the GDB-stub helper functions in QEMU's source. This snippet reads 4 bytes at virtualAddress in the currently executing process in your guest:
uint8_t instr[4];

if (cpu_memory_rw_debug(cpu, virtualAddress, instr, sizeof(instr), 0)) {
    printf("Failed to read instr!\n");
    return;
}
cpu is a CPUState * for one of your guest's CPUs.
When you want to read an address belonging to a specific process in your guest, set env->cr[3] to that process's CR3 value (env is a CPUX86State *). Don't forget to restore the original value after you finish reading. And of course, read guest memory only while the guest is not executing, otherwise there can be races.

Why does the kernel have a separate virtual address for a user page?

I'm confused about this statement:
From http://web.stanford.edu/class/cs140/projects/pintos/pintos_4.html#SEC63:
In Pintos, every user virtual page is aliased to its kernel virtual
page.
I thought the kernel would just be able to use the user virtual address to refer to the user page, with the kernel virtual addresses above it. In the image below, for instance, wouldn't the entire VAS just be from 0 to 4GB, and the user virtual address space would be restricted to addresses below PHYS_BASE, while the kernel could also access the addresses above it?
(from http://web.stanford.edu/class/cs140/cgi-bin/section/10sp-proj3.pdf)
This doesn't seem to be how it works though, as the PintOS documentation continues:
You must manage these aliases somehow. For example, your code
could check and update the accessed and dirty bits for both addresses.
Alternatively, the kernel could avoid the problem by only accessing
user data through the user virtual address.
This implies that the kernel could access the user data through a separate kernel virtual address. I'm not sure why the two addresses would be different.
Thanks for any clarification.
To access a page, it needs to be mapped in your current virtual address space.
So if the kernel wants to access a user page, there are two solutions:
Map the page into the current address space (the kernel's address space), and make sure the two page table entries stay consistent (you don't strictly have to keep them consistent, but you really want to).
Switch to an address space where that page is already mapped: the user's own address space.
Your kernel seems to pick option 1, which is good for performance: switching to another address space and back takes quite a lot of time.
It could pick option 2 instead and switch to the user's address space every time it wants to access a user page. That might make the code simpler by avoiding some bookkeeping, but it would be awfully slow.

ARM platform, how to convert virtual address to physical address in kernel module?

As we know, on the ARM platform, 16 MB of space below PAGE_OFFSET is reserved for kernel modules.
If I write a module and define a global variable in it, how do I get that variable's physical address?
Obviously, I get a wrong physical address when using the virt_to_phys function.
If virt_to_phys won't work for you, you can use the MMU to do a V=>P mapping, see Find the physical address of exception vector table from kernel module
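The reason virt_to_phys gives a wrong answer here is that module globals live in the vmalloc/module area, outside the kernel's linear map, and virt_to_phys only does linear-map arithmetic. A common workaround is to let the kernel walk the page tables via vmalloc_to_page; an untested kernel sketch (module_var_phys is a hypothetical name):

```c
#include <linux/mm.h>
#include <linux/vmalloc.h>

static int my_global;   /* module global: lives in the module area */

static phys_addr_t module_var_phys(void *va)
{
    /* vmalloc_to_page() walks the page tables, so it works for
     * vmalloc/module-area addresses where virt_to_phys() does not. */
    struct page *pg = vmalloc_to_page(va);

    if (!pg)
        return 0;
    return page_to_phys(pg) + offset_in_page(va);
}
```

The page-offset term matters: vmalloc_to_page only identifies the page frame, so the low bits of the virtual address must be added back by hand.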

Is there any kernel function to convert a physical page to its virtual address?

I get one huge page with struct page *page = alloc_pages(...), and I want to verify that it is a 2 MB page. Is there a kernel function I can use to convert this page to its virtual address?
For the pages allocated with alloc_page() or the like, you can use page_address() to obtain their virtual addresses (see <linux/mm.h>).
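A quick way to combine the two for the 2 MB check, assuming an order-9 (512 × 4 KiB = 2 MiB) allocation: order-N allocations from the buddy allocator are naturally aligned, so the linear-map address from page_address() should be 2 MiB-aligned. An untested kernel sketch:

```c
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/sizes.h>

/* Allocate 2^9 contiguous 4 KiB pages (2 MiB) and check alignment
 * of the kernel linear-map virtual address. */
struct page *page = alloc_pages(GFP_KERNEL, 9);

if (page) {
    void *va = page_address(page);

    pr_info("2 MiB block at %px, 2 MiB-aligned: %d\n",
            va, ((unsigned long)va & (SZ_2M - 1)) == 0);
    __free_pages(page, 9);
}
```

Note this only shows the block is 2 MiB of contiguous memory; whether it is actually mapped by a single 2 MiB TLB entry depends on how the kernel built its linear map.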
