How can I change the cache type of DSR using NS2 (ad hoc)?

There are three types of cache in DSR: mobicache, link cache, and simple cache. Mobicache is the one used by default in DSR. I want to change from mobicache to link cache. How can I do that using NS2?

Related

Mmap a writecombine region to userspace documentation

According to the following documentation
https://www.kernel.org/doc/html/latest/x86/pat.html,
Drivers wanting to export some pages to userspace do it by using mmap interface and a combination of:
pgprot_noncached()
io_remap_pfn_range() or remap_pfn_range() or vmf_insert_pfn()
Note that this set of APIs only works with IO (non RAM) regions. If driver wants to export a RAM region, it has to do set_memory_uc() or set_memory_wc() as step 0 above and also track the usage of those pages and use set_memory_wb() before the page is freed to free pool.
Why is the extra step set_memory_uc() or set_memory_wc() needed for RAM regions?
This is needed since set_memory_uc() and set_memory_wc() are specifically written to work with memory regions; the other API functions you're being told to use here are for I/O regions.
Since you want to work with page(s) in a RAM region using the API functions listed, your driver needs to mark them as uncached or write-combined first so that they can essentially be treated like I/O pages. It can then use those mapping APIs, and it must follow up with set_memory_wb() on the page(s) to restore normal write-back caching before the driver considers itself "finished" with them and returns them to the free pool.
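To make that concrete, here is a minimal sketch (not a drop-in implementation) of a module that exports one RAM buffer to userspace as write-combining. The device name wcbuf, the buffer size, and the pairing of pgprot_writecombine() with remap_pfn_range() are illustrative choices; set_memory_wc()/set_memory_wb() bracket the buffer's lifetime as the PAT document describes.

    #include <linux/module.h>
    #include <linux/miscdevice.h>
    #include <linux/fs.h>
    #include <linux/mm.h>
    #include <linux/gfp.h>
    #include <linux/io.h>           /* virt_to_phys() */
    #include <linux/set_memory.h>   /* set_memory_wc()/set_memory_wb(); <asm/set_memory.h> on older kernels */

    #define WCBUF_ORDER 2           /* 4 pages = 16 KiB, purely illustrative */

    static unsigned long wcbuf;     /* kernel virtual address of the RAM buffer */

    static int wcbuf_mmap(struct file *filp, struct vm_area_struct *vma)
    {
        unsigned long size = vma->vm_end - vma->vm_start;

        if (size > (PAGE_SIZE << WCBUF_ORDER))
            return -EINVAL;

        /* Give the userspace PTEs the write-combining attribute ... */
        vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);

        /* ... and insert the pages as if this were an I/O range. */
        return remap_pfn_range(vma, vma->vm_start,
                               virt_to_phys((void *)wcbuf) >> PAGE_SHIFT,
                               size, vma->vm_page_prot);
    }

    static const struct file_operations wcbuf_fops = {
        .owner = THIS_MODULE,
        .mmap  = wcbuf_mmap,
    };

    static struct miscdevice wcbuf_dev = {
        .minor = MISC_DYNAMIC_MINOR,
        .name  = "wcbuf",           /* hypothetical /dev/wcbuf */
        .fops  = &wcbuf_fops,
    };

    static int __init wcbuf_init(void)
    {
        int ret;

        wcbuf = __get_free_pages(GFP_KERNEL, WCBUF_ORDER);
        if (!wcbuf)
            return -ENOMEM;

        /* "Step 0" from the PAT document: switch the kernel mapping of this
         * RAM region to write-combining so it does not alias a cached mapping. */
        ret = set_memory_wc(wcbuf, 1 << WCBUF_ORDER);
        if (ret)
            goto free;

        ret = misc_register(&wcbuf_dev);
        if (ret)
            goto wb;
        return 0;

    wb:
        set_memory_wb(wcbuf, 1 << WCBUF_ORDER);
    free:
        free_pages(wcbuf, WCBUF_ORDER);
        return ret;
    }

    static void __exit wcbuf_exit(void)
    {
        misc_deregister(&wcbuf_dev);
        /* Restore the normal write-back attribute before the pages return
         * to the free pool, as the documentation requires. */
        set_memory_wb(wcbuf, 1 << WCBUF_ORDER);
        free_pages(wcbuf, WCBUF_ORDER);
    }

    module_init(wcbuf_init);
    module_exit(wcbuf_exit);
    MODULE_LICENSE("GPL");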

Can someone explain the Windows ZwMapViewOfSection system call so that a noob (me) can understand?

I'm investigating a set of Windows API system calls made by a piece of malware running in a sandbox so that I can understand its malicious intent. Unfortunately, I'm struggling to understand the ZwMapViewOfSection function described in the documentation: https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/content/wdm/nf-wdm-zwmapviewofsection
Now, I do understand that this function is related to the mapping of physical memory to virtual memory in a page table. Apart from that, I find the documentation arcane and not friendly to beginners. I am also confused about why they are calling blocks of physical memory "sections" rather than "frames" (if that is what they are indeed referring to; it's not clear to me). Can anyone provide a more intuitive explanation of this system call and what it does in general? Is this a common system call for programs, or is it limited to malware? Thank you.
It is extremely common for normal programs to make this call (not directly, of course); every program will invoke it multiple times during initialization at the very least, since ZwMapViewOfSection provides the memory backing for the executable's own code sections. It is less frequent once a program is up and running, but not uncommon either: it is particularly common when the program performs dynamic DLL loads, and legitimate programs can also do memory-mapped I/O for their own reasons.
It operates on memory section objects (I've never really understood that name either), which are one part of the link between disk files and memory-mapped regions: the section is created via ZwCreateSection or opened with ZwOpenSection, and then the other part comes into play with ZwMapViewOfSection.
What part of this, exactly, is confusing you? Knowing that would make it far easier to provide an informative response.
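For reference, the create-then-map sequence looks roughly like this from user mode at the native-API level. This is only a sketch: the prototypes mirror the ZwCreateSection/ZwMapViewOfSection documentation, the file path is just an example, and since the user-mode SDK headers do not declare these calls they are resolved from ntdll.dll at run time.

    #include <windows.h>
    #include <winternl.h>
    #include <stdio.h>

    /* ntdll.dll exports these, but the user-mode SDK headers do not declare
     * them, so declare matching pointer types and resolve them at run time. */
    typedef NTSTATUS (NTAPI *NtCreateSection_t)(PHANDLE SectionHandle,
            ACCESS_MASK DesiredAccess, POBJECT_ATTRIBUTES ObjectAttributes,
            PLARGE_INTEGER MaximumSize, ULONG SectionPageProtection,
            ULONG AllocationAttributes, HANDLE FileHandle);
    typedef NTSTATUS (NTAPI *NtMapViewOfSection_t)(HANDLE SectionHandle,
            HANDLE ProcessHandle, PVOID *BaseAddress, ULONG_PTR ZeroBits,
            SIZE_T CommitSize, PLARGE_INTEGER SectionOffset, PSIZE_T ViewSize,
            ULONG InheritDisposition, ULONG AllocationType, ULONG Win32Protect);

    int main(void)
    {
        HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
        NtCreateSection_t pNtCreateSection =
            (NtCreateSection_t)GetProcAddress(ntdll, "NtCreateSection");
        NtMapViewOfSection_t pNtMapViewOfSection =
            (NtMapViewOfSection_t)GetProcAddress(ntdll, "NtMapViewOfSection");
        if (!pNtCreateSection || !pNtMapViewOfSection) return 1;

        /* 1. An ordinary file handle; the path is only an example. */
        HANDLE file = CreateFileW(L"C:\\Windows\\notepad.exe", GENERIC_READ,
                                  FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        /* 2. The section object ties the file to future memory mappings. */
        HANDLE section = NULL;
        NTSTATUS st = pNtCreateSection(&section,
                                       SECTION_MAP_READ | SECTION_QUERY, NULL,
                                       NULL,          /* size = whole file   */
                                       PAGE_READONLY, SEC_COMMIT, file);
        if (st < 0) return 1;

        /* 3. The view makes (part of) the section visible in this process. */
        PVOID base = NULL;
        SIZE_T viewSize = 0;                           /* 0 = map the whole section */
        st = pNtMapViewOfSection(section, GetCurrentProcess(), &base, 0, 0,
                                 NULL, &viewSize,
                                 2 /* ViewUnmap (SECTION_INHERIT) */,
                                 0, PAGE_READONLY);
        if (st < 0) return 1;

        printf("mapped %zu bytes at %p (first bytes: %.2s)\n",
               viewSize, base, (const char *)base);

        UnmapViewOfFile(base);     /* Win32 wrapper over NtUnmapViewOfSection */
        CloseHandle(section);
        CloseHandle(file);
        return 0;
    }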
As far as I understand it, you have to open the file and acquire a file handle, which you then map with CreateFileMapping; that calls NtCreateSection, which calls MmCreateSection. If the file is being mapped for the first time, a new segment object and control area are created first; then, depending on whether the section is backed by a data file, an image file, or the page file, MiCreateDataFileMap, MiCreateImageFileMap, or MiCreatePagingFileMap is called.
MiCreateDataFileMap sets up the subsection object and section object. In the normal case only one subsection is created, but under some special conditions multiple subsections are used, e.g. if the file is very large. For data files, the subsection object field SubsectionBase is left blank. Instead, the SegmentPteTemplate field of the segment object is set up properly, and it can be used to create the PPTEs when necessary. This defers the creation of PPTEs until a view is mapped for the first time, which avoids wasting memory when very large data files are mapped. Note that a PPTE is a PTE that is serving as a prototype PTE, whereas an _MMPTE_PROTOTYPE is a PTE that is pointing to a prototype.
MiCreateImageFileMap creates the section object, loads the PE header of the specified file, and verifies it; then one subsection is created for the PE header and one for each PE section. If a very small image file is mapped, only one subsection is created for the complete file. Besides the subsections, the related PPTEs for each of them are also created, and their page-protection flags are set according to the protection settings of the related PE section. These PPTEs will be used as a template for building the real PTEs when a view is mapped and accessed.
After a section is created, it can be mapped into the address space by creating a view from it. The flProtect passed to CreateFileMapping specifies the protection of the section object; all mapped views of the object must be compatible with this protection. You specify dwMaximumSizeLow and dwMaximumSizeHigh as 0 so that the maximum size of the section is set to the length of the file automatically.
You then pass the returned section object handle to MapViewOfFile, which calls NtMapViewOfSection on it, which calls MmMapViewOfSegment, which calls MmCreateMemoryArea, which is where the view is mapped into the VAD of the process with the protection dwDesiredAccess supplied to MapViewOfFile; that serves as the protection type for all PTEs that the VAD entry covers. Passing dwNumberOfBytesToMap = 0 and dwFileOffsetLow = 0 to MapViewOfFile maps the whole file.
When a view is mapped, I believe that all of the PTEs are made to point to the prototype PTEs and are given the protection of the PPTE. For an image file, the PPTEs have already been initialised to subsection PTEs. For a data file, the PPTEs for the view need to be initialised to subsection PTEs. The VAD entry for the view is now created. The VAD entry protection isn't always reflective of the protection of the PTEs it covers, because it can cover multiple subsections and multiple blocks within those subsections.
The first time an address in the mapping is actually accessed, the subsection prototype PTE is filled in on demand with the physical page that was allocated and filled by the paging I/O for that range, and the process PTE is filled in with that same address. For an image, the PPTE was already filled in when the subsections were created, along with protection information derived from the section-header characteristics in the image, so the fault handler just fills in the PTE with that address and the protection information in it.
When the PTE is trimmed from the process working set, the working set manager accesses the PFN entry to locate the PPTE address, decreases the share count, and inserts the PPTE address into the PTE.
I'm not sure when a VAD PTE (which has the prototype bit set, a prototype address of 0xFFFFFFFF0000, and is not valid) occurs. I would have thought the PPTEs are always there at their virtual address and can be pointed to as soon as the VAD entry is created.
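Putting the Win32-level pieces above together, the sequence looks roughly like the sketch below (the file path is illustrative). The zero size arguments to CreateFileMapping and the zero offset/length arguments to MapViewOfFile map the whole file, as described above.

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* File to map; the path is only an example. */
        HANDLE file = CreateFileW(L"C:\\Windows\\win.ini", GENERIC_READ,
                                  FILE_SHARE_READ, NULL, OPEN_EXISTING, 0, NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        /* flProtect = PAGE_READONLY is the section object's protection;
         * dwMaximumSizeHigh/Low = 0 -> section size is the current file size.
         * Internally this ends up in NtCreateSection. */
        HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READONLY, 0, 0, NULL);
        if (!mapping) return 1;

        /* dwDesiredAccess = FILE_MAP_READ must be compatible with PAGE_READONLY;
         * offset = 0 and dwNumberOfBytesToMap = 0 -> map the whole file.
         * Internally this ends up in NtMapViewOfSection. */
        const char *view = (const char *)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (!view) return 1;

        printf("first byte: 0x%02x\n", (unsigned char)view[0]);

        UnmapViewOfFile(view);
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }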

Loading data segment of already loaded shared library

For the global offset table to work, the GOT must be at a fixed offset from the text segment. Now assume that a program needs a shared library, and also that the shared library has already been loaded by the OS for some other process. For our program, since the text section of the shared library is already loaded, only the data segment needs to be loaded, and the shared library's text section is mapped into the virtual address space of our process. But what if there is already some data, text, or anything else at that fixed offset from the virtual address where our shared library is mapped? How does the dynamic linker resolve that conflict? One approach would be to leave the R_386_GOTPC relocation in the text section until load time and let the dynamic linker change it to the new offset. Is this how it is done in practice?
On GNU, even the same DSO is mapped at different addresses in different processes. No data at all is shared between them. This means that the GOT is just private data (like .data), and is initialized at load time with the proper addresses (either stubs or the proper function addresses with BIND_NOW).
(This assumes that prelink is not in use, which is somewhat broken anyway.)
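A quick way to observe this on a GNU/Linux system with ASLR enabled (assuming libm.so.6 is available) is to print where a symbol from the same DSO lands and run the program more than once:

    #include <dlfcn.h>
    #include <stdio.h>

    /* Build with: gcc show_dso_addr.c -ldl   (-ldl is unneeded on newer glibc)
     * Run it twice: with ASLR, the same shared object, and therefore its GOT,
     * which is private data inside the mapping, sits at a different address
     * in each process, while the PC-relative code in the text segment is
     * identical in both. */
    int main(void)
    {
        void *handle = dlopen("libm.so.6", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        void *sym = dlsym(handle, "cos");
        printf("libm handle %p, cos() mapped at %p\n", handle, sym);
        dlclose(handle);
        return 0;
    }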

How to check whether an allocated buffer's corresponding page is in the cache or in main memory?

At the application level I use malloc() and memset(), and in the driver I use get_user_pages_fast() to pin the corresponding pages.
Is there a way in Linux to determine whether these pages are in the cache or in main memory?
Unless you have a device-specific call that allows you to pin them to the cache, the CPU is free to move them in and out of the cache as it sees fit. Even if you could check whether the address in question is in the cache, that information would no longer be reliable by the time you execute the next statement in your driver.
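For completeness, the pinning step mentioned in the question looks roughly like this inside a driver. This is only a sketch: it assumes a recent kernel where get_user_pages_fast() takes gup_flags, the function name pin_user_buffer is made up, and the surrounding char-device/ioctl plumbing is omitted.

    #include <linux/mm.h>
    #include <linux/slab.h>

    /* Pin nr_pages of the user buffer starting at uaddr (assumed page aligned
     * here for simplicity).  On recent kernels get_user_pages_fast() takes
     * (start, nr_pages, gup_flags, pages); older kernels take a 'write' int
     * instead of gup_flags, and newer code may prefer pin_user_pages_fast(). */
    static int pin_user_buffer(unsigned long uaddr, int nr_pages,
                               struct page ***pages_out)
    {
        struct page **pages;
        int pinned;

        pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
            return -ENOMEM;

        pinned = get_user_pages_fast(uaddr, nr_pages, FOLL_WRITE, pages);
        if (pinned < nr_pages) {
            /* Release whatever was pinned before failing. */
            while (pinned > 0)
                put_page(pages[--pinned]);
            kfree(pages);
            return -EFAULT;
        }

        /* Whether a given page's contents currently sit in a CPU cache line
         * or only in DRAM is not visible here; the CPU moves lines in and
         * out of the cache on its own, as the answer above notes. */
        *pages_out = pages;
        return 0;
    }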

Writing a block in the cache whose dirty bit is set

In computer architecture, if the processor wants to read a block in the cache whose dirty bit is set, will the processor first write this block back to memory, or will it just read the block without writing it back?
For reads, the data is read from the cache, as that holds the latest updated data. For writes to the same block, the new data (to the same address) is written into the cache and the dirty bit is set again. Only when there is a conflict miss (due to two different addresses sharing the same cache block) would the data actually be pushed to the next level of the memory hierarchy.
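That behaviour can be sketched as a toy direct-mapped write-back cache model (illustrative only, not tied to any real machine): a hit, read or write, touches only the cache, and a dirty line is written back only when a conflicting block evicts it.

    #include <stdint.h>
    #include <stdbool.h>
    #include <string.h>
    #include <stdio.h>

    #define NUM_LINES  64
    #define LINE_BYTES 64

    struct line {
        bool     valid;
        bool     dirty;
        uint64_t tag;
        uint8_t  data[LINE_BYTES];
    };

    static struct line cache[NUM_LINES];
    static uint8_t memory[1 << 20];          /* toy "main memory" */

    /* Return the cache line holding addr, filling it on a miss.  A dirty
     * line is written back only when a conflicting address evicts it. */
    static struct line *lookup(uint64_t addr)
    {
        uint64_t block = addr / LINE_BYTES;
        struct line *l = &cache[block % NUM_LINES];
        uint64_t tag   = block / NUM_LINES;

        if (l->valid && l->tag == tag)
            return l;                                 /* hit: no memory traffic */

        if (l->valid && l->dirty)                     /* conflict miss: write back */
            memcpy(&memory[(l->tag * NUM_LINES + (block % NUM_LINES)) * LINE_BYTES],
                   l->data, LINE_BYTES);

        memcpy(l->data, &memory[block * LINE_BYTES], LINE_BYTES);   /* fill */
        l->valid = true;
        l->dirty = false;
        l->tag   = tag;
        return l;
    }

    static uint8_t read_byte(uint64_t addr)
    {
        return lookup(addr)->data[addr % LINE_BYTES]; /* dirty or not, read from cache */
    }

    static void write_byte(uint64_t addr, uint8_t v)
    {
        struct line *l = lookup(addr);                /* write-allocate on a miss */
        l->data[addr % LINE_BYTES] = v;
        l->dirty = true;                              /* mark modified, defer write-back */
    }

    int main(void)
    {
        write_byte(0x100, 0xab);                          /* makes the line dirty */
        printf("read back: 0x%02x\n", read_byte(0x100));  /* served from the cache */
        /* 0x100 and 0x100 + NUM_LINES*LINE_BYTES share a line: this read evicts
         * the dirty block and writes it back before the new block is loaded. */
        printf("conflict:  0x%02x\n", read_byte(0x100 + NUM_LINES * LINE_BYTES));
        printf("memory[0x100] after write-back: 0x%02x\n", memory[0x100]);
        return 0;
    }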
