How is the Page File available calculated in Windows Task Manager?

In Vista Task Manager, I understand the available page file is listed like this:
Page File: <in use>M / <available>M
In XP it's listed as the Commit Charge Limit.
I had thought that:
Available Virtual Memory = Physical Memory Total + Sum of Page Files
But on my machine I've got Physical Memory = 2038M, Page Files = 4096M, Page File Available = 6051M. Since 2038 + 4096 = 6134, there's 83M unaccounted for here. What's that used for? I thought it might be something to do with the kernel memory, but the numbers don't seem to match up.
Info I've found so far:
See http://msdn.microsoft.com/en-us/library/aa965225(VS.85).aspx for more info.
Page file size can be found here: Computer Properties, advanced, performance settings, advanced.

I think you are correct in your guess that it has something to do with the kernel - kernel memory needs some backing as well.
However, I have to admit that when I tried to verify this, the numbers still did not match well, and there is a significant amount of memory not accounted for by this.
I have:
Available Virtual Memory = 4 033 552 KB
Physical Memory Total = 2 096 148 KB
Sum of Page Files = 2048 MB
Kernel Non-Paged Memory = 28 264 KB
Kernel Paged Memory = 63 668 KB
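Just to make the mismatch concrete, here is the arithmetic on the figures above (plain back-of-the-envelope, nothing more):

# All values in KB; 2048 MB of page file = 2048 * 1024 KB.
available_virtual = 4_033_552
physical_total    = 2_096_148
page_files        = 2048 * 1024                  # 2_097_152 KB
kernel_nonpaged   = 28_264
kernel_paged      = 63_668

gap = physical_total + page_files - available_virtual
print(gap)                                       # 159748 KB, about 156 MB short
print(gap - (kernel_nonpaged + kernel_paged))    # 67816 KB still unexplained

So even after subtracting both kernel pools, roughly 66 MB remains unaccounted for.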


vm_stat displays less memory than is actually installed

Update:
So, restarting the Mac did the trick, but is there any known reason for this bug?
I'm using vm_stat to calculate RAM info as shown here.
But when I add up the values and multiply by the page size, I get approximately 1.3 GB less.
Mach Virtual Memory Statistics: (page size of 4096 bytes)
Pages free: 22064.
Pages active: 580105.
Pages inactive: 472217.
Pages speculative: 5594.
Pages throttled: 0.
Pages wired down: 559999.
Pages purgeable: 29101.
"Translation faults": 261945239.
Pages copy-on-write: 6941679.
Pages zero filled: 165324784.
Pages reactivated: 14573079.
Pages purged: 1602247.
File-backed pages: 203023.
Anonymous pages: 854893.
Pages stored in compressor: 1732046.
Pages occupied by compressor: 456427.
Decompressions: 11423912.
Compressions: 20641865.
Pageins: 4475678.
Pageouts: 32877.
Swapins: 1714616.
Swapouts: 2389086.
So by adding the first six values and multiplying by the page size, I get about 6.7 GB, but my Mac has 8 GB.
So what is going wrong?
Thank you!
Here is the result after the reboot.
It looks to me like you need to add in "Pages occupied by compressor". That gets you to almost exactly 8GB. The reboot probably just reset that to zero so it didn't matter.
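A minimal sketch of that calculation, assuming the vm_stat output format shown above:

import re
import subprocess

# Parse the vm_stat output into a dict of page counts.
out = subprocess.run(["vm_stat"], capture_output=True, text=True).stdout
page_size = int(re.search(r"page size of (\d+) bytes", out).group(1))
pages = {}
for line in out.splitlines()[1:]:
    name, _, value = line.partition(":")
    if value:
        pages[name.strip().strip('"')] = int(value.strip().rstrip("."))

# free + active + inactive + speculative + throttled + wired misses the RAM
# held by the compressor; adding "Pages occupied by compressor" gets you to
# (almost) the full installed amount.
fields = ("Pages free", "Pages active", "Pages inactive", "Pages speculative",
          "Pages throttled", "Pages wired down", "Pages occupied by compressor")
total = sum(pages[f] for f in fields) * page_size
print(f"{total / 1024**3:.2f} GiB")

With the numbers in the question this prints roughly 8.0 GiB, matching the installed RAM.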

Core dump size different from process virtual memory space

I'm working on OS X 10.11 and generated a dump file in the following manner:
1. ulimit -c unlimited
2. kill -10 5228 (process pid)
and got a dump file with the following attributes: 642M Jun 26 15:00 core.5228
Right before that, I checked the process's total memory space using the vmmap command to try to estimate the expected dump size.
However, the estimate (238.7 MB) was much smaller than the actual size (642 MB).
Can this gap be explained?
VIRTUAL REGION
REGION TYPE SIZE COUNT (non-coalesced)
=========== ======= =======
Activity Tracing 2048K 2
Kernel Alloc Once 4K 2
MALLOC guard page 16K 4
MALLOC metadata 180K 6
MALLOC_SMALL 56.0M 4 see MALLOC ZONE table below
MALLOC_SMALL (empty) 8192K 2 see MALLOC ZONE table below
MALLOC_TINY 8192K 3 see MALLOC ZONE table below
STACK GUARD 56.0M 2
Stack 8192K 2
__DATA 1512K 44
__LINKEDIT 90.9M 4
__TEXT 8336K 44
shared memory 12K 4
=========== ======= =======
TOTAL 238.7M 110
VIRTUAL ALLOCATION BYTES REGION
MALLOC ZONE SIZE COUNT ALLOCATED % FULL COUNT
=========== ======= ========= ========= ====== ======
DefaultMallocZone_0x100e42000 72.0M 7096 427K 0% 6
Core dumping can, and does, filter the process memory. See the core man page:
Controlling which mappings are written to the core dump
Since kernel 2.6.23, the Linux-specific /proc/PID/coredump_filter file can be used to control which memory segments are written to the core dump file in the event that a core dump is performed for the process with the corresponding process ID.
The value in the file is a bit mask of memory mapping types (see mmap(2)). If a bit is set in the mask, then memory mappings of the corresponding type are dumped; otherwise they are not dumped. The bits in this file have the following meanings:
bit 0 Dump anonymous private mappings.
bit 1 Dump anonymous shared mappings.
bit 2 Dump file-backed private mappings.
bit 3 Dump file-backed shared mappings.
bit 4 (since Linux 2.6.24)
Dump ELF headers.
bit 5 (since Linux 2.6.28)
Dump private huge pages.
bit 6 (since Linux 2.6.28)
Dump shared huge pages.
bit 7 (since Linux 4.4)
Dump private DAX pages.
bit 8 (since Linux 4.4)
Dump shared DAX pages.
By default, the following bits are set: 0, 1, 4 (if the CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS kernel configuration option is enabled), and 5. This default can be modified at boot time using the coredump_filter boot option.
I assume OS X behaves similarly.
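For the Linux side quoted above, the mask can be read and decoded straight from /proc; a small sketch (Linux only, since OS X has no /proc):

# Read and decode the current process's coredump_filter.
BITS = ["anonymous private", "anonymous shared",
        "file-backed private", "file-backed shared",
        "ELF headers", "private huge pages", "shared huge pages",
        "private DAX pages", "shared DAX pages"]

with open("/proc/self/coredump_filter") as f:
    mask = int(f.read(), 16)

for bit, name in enumerate(BITS):
    state = "dumped" if mask & (1 << bit) else "skipped"
    print(f"bit {bit} ({name}): {state}")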

Size of exe file vs available memory

I have gone through How does a PE file get mapped into memory?, but that is not what I am asking.
I want to know which sections (data, text, code, ...) of a PE file are always completely loaded into memory by the loader, no matter what the conditions are.
As per my understanding, none of the sections (code, data, resources, text, ...) is always loaded completely; they are loaded as and when needed, page by page. If some pages of code (in the middle or at the end) are not required to process the user's request, then those pages will not necessarily get loaded.
I have tried making exe files with lots of code, with and without resources, none of which is ever used, but every time the exe loads into memory it takes more memory than the file size. (I might have been looking at the wrong Memory column in Task Manager.)
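One quick way to see that effect is to compare the on-disk size of an executable with what its process actually occupies. A rough sketch using the third-party psutil package (psutil is my choice here, not something the question uses):

import os
import sys
import psutil  # third-party: pip install psutil

exe = sys.executable
on_disk = os.path.getsize(exe)
mem = psutil.Process().memory_info()

print(f"file size on disk : {on_disk // 1024} KB")
print(f"working set (rss) : {mem.rss // 1024} KB")
print(f"virtual size (vms): {mem.vms // 1024} KB")
# The resident and virtual figures include the heap, stacks and every mapped
# DLL/shared library, which is why they dwarf the size of the exe file itself.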
Matt Pietrek writes here
It's important to note that PE files are not just mapped into memory
as a single memory-mapped file. Instead, the Windows loader looks at
the PE file and decides what portions of the file to map in.
and
A module in memory represents all the code, data, and resources from
an executable file that is needed by a process. Other parts of a PE
file may be read, but not mapped in (for instance, relocations). Some
parts may not be mapped in at all, for example, when debug information
is placed at the end of the file.
In a nutshell:
1- There is an exe of size 1 MB and the available memory (physical + virtual) is less than 1 MB. Is it guaranteed that the loader will always refuse to load it because the available memory is less than the size of the file?
2- An exe of size 1 MB takes 2 MB of memory when loaded (by the time it starts running the first line of user code), while the available memory (physical + virtual) is 1.5 MB. Is it guaranteed that the loader will always refuse to load it because there is not enough memory?
3- There is an exe of size 50 MB (lots of code, data and resources), but it requires only 500 KB to run the first line of user code. Is it guaranteed that this exe will always run its first line of code if the available memory (physical + virtual) is at least 500 KB?

How to calculate Virtual Memory Size in Mavericks

I would like to know if there is a command/API call (or set of commands/API calls) that calculates each of the parameters (Virtual Memory, File Cache and App Memory) listed in the Activity Monitor screenshot above.
You can use the vm_stat and sysctl terminal commands. There was no straightforward way or documentation on how to extract the new attributes from these commands, so we had to do some trial and error until we discovered the relations between the parameters in the commands' output and the attributes we needed to calculate.
The steps are as follows:
Run vm_stat
Run "sysctl hw.memsize" and "sysctl vm.swapusage".
The relationship between the memory usage that appears in Activity Monitor and the previous commands is described in How to calc Memory usage in Mavericks programmatically.
Sample output from vm_stat:
Mach Virtual Memory Statistics: (page size of 4096 bytes)
Pages free: 24428.
Pages active: 1039653.
Pages inactive: 626002.
Pages speculative: 184530.
Pages throttled: 0.
Pages wired down: 156244.
Pages purgeable: 9429.
"Translation faults": 14335334.
Pages copy-on-write: 557301.
Pages zero filled: 5682527.
Pages reactivated: 74.
Pages purged: 52633.
File-backed pages: 660167.
Anonymous pages: 1190018.
Pages stored in compressor: 644.
Pages occupied by compressor: 603.
Decompressions: 18.
Compressions: 859.
Pageins: 253589.
Pageouts: 0.
Swapins: 0.
Swapouts: 0.
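A small sketch tying the steps together; mapping Anonymous/File-backed pages onto Activity Monitor's App Memory and File Cache is the approximate, trial-and-error relationship described above, not an official formula:

import re
import subprocess

def run(*cmd):
    return subprocess.run(cmd, capture_output=True, text=True).stdout

vm_out = run("vm_stat")
page_size = int(re.search(r"page size of (\d+) bytes", vm_out).group(1))
pages = {n.strip().strip('"'): int(v.strip().rstrip("."))
         for n, _, v in (l.partition(":") for l in vm_out.splitlines()[1:]) if v}

memsize = int(run("sysctl", "-n", "hw.memsize"))
swap = run("sysctl", "-n", "vm.swapusage").strip()

print(f"Physical memory: {memsize / 1024**3:.1f} GB")
print(f"App Memory (approx., anonymous pages):   {pages['Anonymous pages'] * page_size / 1024**3:.2f} GB")
print(f"File Cache (approx., file-backed pages): {pages['File-backed pages'] * page_size / 1024**3:.2f} GB")
print(f"Swap usage: {swap}")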

Why is the core file larger than the virtual memory?

I have a multithreaded program running which crashes after a day or two. Moreover, the gdb backtrace of the core dump does not lead anywhere; there are no symbols at the point where it crashes.
Now, the machine that generates the core file has 3 GB of physical memory and 5 GB of swap space, but the core dump we get is around 25 GB. Isn't the core dump actually a memory dump? Why is the core dump so large?
And can anyone give me more lead on how to debug in such situation?
If you are running a 64-bit OS then you can have file-backed mappings that exceed many times the amount of available physical memory + swap space.
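To illustrate, a small sketch (the file name and the 64 GiB size are arbitrary) that maps a sparse file far larger than 3 GB RAM + 5 GB swap; the mapping counts toward the process's virtual size, and a core dump configured to include file-backed mappings can be correspondingly huge:

import mmap
import os

SIZE = 64 * 1024**3                       # 64 GiB, far more than RAM + swap
fd = os.open("/tmp/huge_backing_file", os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)                    # sparse file: no disk blocks allocated yet

# A file-backed mapping needs no swap reservation, so this succeeds on 64-bit.
m = mmap.mmap(fd, SIZE, prot=mmap.PROT_READ | mmap.PROT_WRITE)
m[0] = 1                                  # touch a single page; only 4 KiB is resident
print(f"mapped {SIZE >> 30} GiB into pid {os.getpid()}")
m.close()
os.close(fd)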
Since kernel version 2.6.23, Linux provides a mechanism to control what gets included in the core dump file, called core dump filter. The value of the filter is a bit-field manipulated via the /proc/<pid>/coredump_filter file (see core(5) man page):
bit 0 (0x01) - anonymous private mappings (e.g. dynamically allocated memory)
bit 1 (0x02) - anonymous shared mappings
bit 2 (0x04) - file-backed private mappings
bit 3 (0x08) - file-backed shared mappings (e.g. shared libraries)
bit 4 (0x10) - ELF headers
bit 5 (0x20) - private huge pages
bit 6 (0x40) - shared huge pages
The default value is 0x33, which corresponds to dumping all anonymous mappings as well as the ELF headers (but only if the kernel is compiled with CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS) and the private huge pages. Reading from this file returns the hexadecimal value of the filter. Writing a new hexadecimal value to coredump_filter changes the filter for the particular process, e.g. to enable dumping of all possible mappings one would run:
echo 0x7f > /proc/<pid>/coredump_filter
(where <pid> is the PID of the process)
The value of the core dump filter is inherited in child processes created by fork().
Some Linux distributions might change the filter value for the init process early in the OS boot stage, e.g. to enable dumping the file-backed mappings. This would then affect any process started later.
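A tiny sketch of that inheritance (Linux only; 0x7f is the same "dump everything" mask used above):

import os

# The parent sets its own filter; a child created by fork() inherits it.
with open("/proc/self/coredump_filter", "w") as f:
    f.write("0x7f")

pid = os.fork()
if pid == 0:                                   # child process
    with open("/proc/self/coredump_filter") as f:
        print("child filter:", f.read().strip())   # typically 0000007f
    os._exit(0)
os.waitpid(pid, 0)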
A core dump contains more than just the state of the memory of the process. See the answer at https://stackoverflow.com/a/5321564/91757 for examples of other information included in the core dump (on Linux).
