I need a trace of the memory accesses of a process for a simulation.
For this purpose I need a tool that captures and logs all memory accesses of a process.
Is there any program that does this? If there is not, can I do it with modifications to the Linux kernel?
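For what it's worth, dynamic binary instrumentation tools can already produce such traces without kernel changes: Valgrind's lackey tool (valgrind --tool=lackey --trace-mem=yes ./prog) logs every load and store, and Intel Pin lets you write a custom tracing tool. If page-granularity write tracking is enough for your simulation, the stock kernel also exposes it through the soft-dirty bits in /proc/<pid>/pagemap. Below is a minimal C sketch of that mechanism; the pid and probed address are placeholders, error handling is omitted, and reading another process's pagemap requires appropriate privileges.

    /* Sketch: page-granularity write tracking via soft-dirty bits.
     * Requires CONFIG_MEM_SOFT_DIRTY; pid and addr are placeholders. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/types.h>

    int main(void)
    {
        pid_t pid = 1234;           /* placeholder: process to observe */
        uintptr_t addr = 0x400000;  /* placeholder: virtual address to probe */
        char path[64];
        int fd;

        /* 1. Writing "4" to clear_refs resets the soft-dirty bit
         *    on every page of the target process. */
        snprintf(path, sizeof(path), "/proc/%d/clear_refs", (int)pid);
        fd = open(path, O_WRONLY);
        write(fd, "4", 1);
        close(fd);

        sleep(1);  /* ... let the target run for the interval of interest ... */

        /* 2. Each pagemap entry is 64 bits; bit 55 is the soft-dirty
         *    flag, set if the page was written since the reset. */
        snprintf(path, sizeof(path), "/proc/%d/pagemap", (int)pid);
        fd = open(path, O_RDONLY);
        uint64_t entry = 0;
        pread(fd, &entry, sizeof(entry),
              (addr / sysconf(_SC_PAGESIZE)) * sizeof(entry));
        close(fd);

        printf("page at %#lx written since reset: %s\n",
               (unsigned long)addr, ((entry >> 55) & 1) ? "yes" : "no");
        return 0;
    }

For a full per-access trace (every address, in order), user-level instrumentation such as lackey or Pin is the usual route; modifying the kernel is rarely necessary.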
I'm working on a custom Linux kernel for the RISC-V architecture and am debugging with GDB/QEMU now that those tools are available. While debugging, I notice that I am not able to access memory at addresses that are virtualized. That is, once memory transitions from physical to virtual addressing in the kernel, I can no longer access those memory locations in GDB. For example, the kernel shows up like this in QEMU's info mem command:
paddr: 0x80200000 --> vaddr: 0xffffffff80000000
I think this question/problem is more an issue with QEMU, or maybe with my understanding of how to access it in QEMU correctly. As it stands, single-stepping to the point in my kernel where virtual memory starts being used is fine, but single-stepping beyond it causes QEMU to effectively stop: it reports the same instruction on each step. However, if I continue, it boots in QEMU. How can I debug this via single-stepping? Is there something I need to switch in GDB/QEMU?
I did try to access the address 0xffffffff8000007c, for example, and that worked; QEMU just doesn't transition to virtual memory when I single-step past that point.
I'm experiencing a similar problem and have formed the following hypothesis:
I think the kernel switches to a reduced page table when idle,
one that does not map loaded-module memory. An asynchronous break from GDB has a high likelihood of interrupting the CPU while it is idle, of course.
If I single-step out of idle (e.g., after hitting a key in the Linux console) and re-attempt setting a breakpoint on the loaded module, it succeeds at some point.
A viable strategy is probably to break at the end of the module-loading code and set the relevant breakpoints from there.
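One more thing worth trying, though this is an educated guess rather than a confirmed fix: QEMU's gdb stub implements software breakpoints by writing to guest memory, which can misbehave right around the point where the MMU is switched on, whereas hardware breakpoints are purely address-based. A session along these lines, using the vaddr from the info mem output above, may get single-stepping working again:

    (gdb) target remote :1234
    (gdb) hbreak *0xffffffff80000000
    (gdb) continue
    (gdb) stepi

The idea is to place a hardware breakpoint (hbreak) on the first instruction that executes at a virtual address, continue through the MMU enable instead of single-stepping across it, and resume stepping once paging is active.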
For a project with specific hardware integration, we need to modify the Linux kernel's page-fault handler, and I wondered if the following is possible:
1) During do_page_fault, can we tell which thread generated the fault (thread and process)? The platform is ARM, so ARM-specific registers can be used if helpful.
2) Can we access the user-space memory of that process and read some information that our user-mode library left for us beforehand (assuming it is already probed and locked in memory)?
Further explanation is in the comments, if desired.
# 1): If the page fault happened while accessing memory from the userspace application (which is likely), then the page-fault handler runs in the context of that process. From the CPU's point of view, it enters kernel mode because of an exception from the MMU. So yes, you can get the pid/tid of the userspace process that was interrupted.
# 2): Yes, the kernel can access all memory. On a 32-bit system you need highmem support; on a 64-bit system you get it out of the box.
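To make both answers concrete, here is a rough kernel-side sketch of a hook you might call from the ARM do_page_fault path. The hook name and the fixed user address are made up for illustration, and a real handler must only touch user memory from a context where that is legal:

    /* Sketch of a hook called from the ARM do_page_fault path.
     * my_page_fault_hook and MY_MAGIC_UADDR are placeholders. */
    #include <linux/sched.h>
    #include <linux/printk.h>
    #include <linux/uaccess.h>

    #define MY_MAGIC_UADDR 0x10000UL  /* placeholder: where the library left data */

    void my_page_fault_hook(unsigned long fault_addr)
    {
        u32 cookie;

        /* #1: the handler runs in the context of the faulting task,
         * so "current" identifies the thread and process directly. */
        pr_info("fault at %#lx from pid=%d tgid=%d comm=%s\n",
                fault_addr, task_pid_nr(current), task_tgid_nr(current),
                current->comm);

        /* #2: read what the user-mode library left for us. Since the page
         * is assumed probed and locked, copy_from_user() should not fault,
         * but it still reports failure via a nonzero return value. */
        if (copy_from_user(&cookie, (const void __user *)MY_MAGIC_UADDR,
                           sizeof(cookie)) == 0)
            pr_info("user cookie: %#x\n", cookie);
    }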
I have an MFC C++ application that usually runs constantly in the system tray.
It allocates a very extensive tree of objects in memory, which causes the application to take several seconds to free them when it needs to shut down.
All my objects are allocated using new and typically freed using delete.
If I just skip deleting all the objects in order to quit faster, what are the effects, if any?
Does Windows realize the process is dead and reclaim the memory automatically?
I know that not freeing allocated memory is almost sacrilegious, but thought I would ask to see what everyone else thinks.
The application only shuts down when either the users system shuts down, or if they choose to shut the program down themselves.
When a process terminates, the system reclaims all of its resources. This includes releasing open handles to kernel objects and freeing allocated memory. Not freeing memory at process termination has no adverse effect on the operating system.
You will find substantial information about the steps performed during process termination at Terminating a process. With respect to your question, the following is the relevant section:
Terminating a process has the following results:
...
Any resources allocated by the process are freed.
You probably should not skip the cleanup step in your debug builds, though; otherwise you will not get memory-leak diagnostics for real memory leaks.
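If the slow shutdown really is the destructor walk over the object tree, one pattern consistent with this answer (sketched below with plain Win32 calls; free_object_tree is a placeholder for the real cleanup) is to keep the orderly teardown in debug builds, where the CRT leak detector needs it, and terminate the process directly in release builds:

    /* Sketch: skip object teardown in release builds for a fast shutdown.
     * free_object_tree() is a placeholder for the application's cleanup. */
    #include <windows.h>

    void free_object_tree(void);  /* placeholder: deletes the whole tree */

    void shutdown_app(void)
    {
    #ifdef _DEBUG
        /* Debug build: tear everything down so that the leak detector
         * reports only real leaks. */
        free_object_tree();
    #else
        /* Release build: skip the multi-second teardown; the OS reclaims
         * the whole address space at termination anyway. TerminateProcess
         * bypasses destructors and DLL_PROCESS_DETACH, so it must be the
         * very last thing the program does. */
        TerminateProcess(GetCurrentProcess(), 0);
    #endif
    }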
So a process is:
------DOS header/PE header
------executable code and statically linked libraries
------slack space?
------some dynamically linked libraries
------start of heap
------slack space
------top of stack
------bottom of stack
I am unsure of where the kernel-mode and user-mode stacks are relative to each other in the virtual memory allocated for the process. Also, when a new thread is spawned by a multithreaded process, where is the virtual memory for its stack allocated?
Thanks!
On x86 Windows, kernel-mode modules are located in the (virtual) address space from 0x80000000 upward, which is not accessible from a user-mode process, and all user-mode modules are located below 0x80000000.
When a new (user-mode) thread is spawned, stack memory is allocated for it in both the user-mode address space (accessible from both user mode and kernel mode) and the kernel-mode address space (accessible only from kernel mode). Note that there are some system threads that have no user-mode context, and thus no stack allocated in any user-mode process; these threads run purely in the kernel and never in user mode.
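To see the user-mode half of this in practice, here is a small Win32 sketch: each created thread receives its own stack region in the user-mode part of the address space, which you can observe by printing the address of a stack-local variable from each thread.

    /* Sketch: every user-mode thread gets its own stack region. */
    #include <windows.h>
    #include <stdio.h>

    static DWORD WINAPI thread_proc(LPVOID arg)
    {
        int local = 0;  /* lives on this thread's stack */
        (void)arg;
        printf("thread %lu: stack near %p\n",
               GetCurrentThreadId(), (void *)&local);
        return 0;
    }

    int main(void)
    {
        HANDLE threads[2];
        int i;

        for (i = 0; i < 2; i++)
            threads[i] = CreateThread(NULL, 0, thread_proc, NULL, 0, NULL);
        WaitForMultipleObjects(2, threads, TRUE, INFINITE);
        for (i = 0; i < 2; i++)
            CloseHandle(threads[i]);
        return 0;
    }

The two printed addresses fall in distinct regions below 0x80000000; the matching kernel-mode stacks live above that boundary and are not visible from user mode.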
On Linux I used to be sure that whatever resources a process allocates are released when it terminates: memory is freed, open file descriptors are closed, and no memory is leaked when I start and terminate a process several times in a loop.
Recently I've started working with OpenCL.
I understand that the OpenCL compiler keeps compiled kernels in a cache, so when I run a program that uses the same kernels as a previous run (or probably even kernels from another process running the same kernels), they don't need to be compiled again. I guess that cache is on the device.
From that behaviour I suspect that allocated device memory might be cached as well (maybe associated with a magic cookie for later reuse, or something like that) if it was not released explicitly before termination.
So I pose this question to rule out any such suspicion.
kernels survive in cache => other memory allocations survive somehow???
My short answer would be yes, based on this tool: http://www.techpowerup.com/gpuz/
I was investigating a memory leak on my device and I noticed that memory is freed when my process terminates... most of the time. If you have a memory leak like mine, it may linger even after the process has finished.
Another tool that may help is http://www.gremedy.com/download.php
but it's really buggy, so use it judiciously.
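Whatever the driver does behind the scenes, the reliable way to rule out lingering allocations on your side is to release every OpenCL object explicitly before exit. A minimal sketch of the release order follows; all the variable names are placeholders for objects your program created earlier:

    /* Sketch: explicit OpenCL cleanup before process exit. The parameters
     * stand in for objects created earlier in the program. */
    #include <CL/cl.h>

    void cleanup(cl_context ctx, cl_command_queue queue,
                 cl_program prog, cl_kernel kern, cl_mem buf)
    {
        /* Release in roughly the reverse order of creation; the context
         * goes last because the other objects keep references to it. */
        clReleaseMemObject(buf);
        clReleaseKernel(kern);
        clReleaseProgram(prog);
        clReleaseCommandQueue(queue);
        clReleaseContext(ctx);
    }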