How to Calculate MemTotal in /proc/meminfo - linux-kernel

When I cat /proc/meminfo, it reports the following:
MemTotal: 2034284 kB
MemFree: 1432728 kB
Buffers: 16568 kB
Cached: 324864 kB
SwapCached: 0 kB
Active: 307344 kB
Inactive: 256916 kB
Active(anon): 223020 kB
Inactive(anon): 74372 kB
Active(file): 84324 kB
Inactive(file): 182544 kB
Unevictable: 0 kB
Mlocked: 0 kB
HighTotal: 1152648 kB
HighFree: 600104 kB
LowTotal: 881636 kB
LowFree: 832624 kB
SwapTotal: 4200960 kB
SwapFree: 4200960 kB
Dirty: 60 kB
Writeback: 0 kB
AnonPages: 222868 kB
Mapped: 80596 kB
Shmem: 74564 kB
Slab: 24268 kB
SReclaimable: 14024 kB
SUnreclaim: 10244 kB
KernelStack: 1672 kB
PageTables: 2112 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 5218100 kB
Committed_AS: 833352 kB
VmallocTotal: 122880 kB
VmallocUsed: 13916 kB
VmallocChunk: 50540 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 4096 kB
DirectMap4k: 20472 kB
DirectMap4M: 888832 kB
I got a formula to calculate MemTotal:
MemTotal = MemFree + Cached + Active + Inactive + Mapped + Shmem + Slab + PageTables + VmallocUsed
but I don't know whether this formula is correct. Can anyone help clarify?

I think it would be difficult to reach the exact value (addition-based total-memory validation) from meminfo.
Nonetheless, in my view the following should get you close to the MemTotal figure:
TotalMemory = MemFree + Buffers + Cached + Dirty + AnonPages + Slab + VmallocUsed
In your example case
1432728 + 16568 + 324864 + 60 + 222868 + 24268 + 13916 = 2035272
Some references:
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/proc.txt?id=HEAD#l451
(from another Stack Overflow answer suggested above)
Apart from that, I believe the volatility is because of VmallocUsed.
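To sanity-check this approximation on a live system, here is a minimal awk sketch (assuming these field names exist in your kernel's /proc/meminfo; note that some kernel versions report VmallocUsed as 0, which widens the gap):
awk '/^(MemFree|Buffers|Cached|Dirty|AnonPages|Slab|VmallocUsed):/ { sum += $2 }   # approximation terms
     /^MemTotal:/ { total = $2 }                                                  # reported total
     END { printf "sum=%d kB  MemTotal=%d kB  diff=%d kB\n", sum, total, sum - total }' /proc/meminfo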


Bash sed awk, format CPU/Mem info from /proc/cpuinfo and /proc/meminfo [closed]

The problem I'm trying to solve is to produce portable output that I can display on all of the servers in our environment, showing basic info at login using generic information available on all CentOS / Red Hat systems. I would like to pluck info from /proc/cpuinfo and /proc/meminfo (or free -m -h); "why not just 'yum install some-great-tool'?" is not ideal, as all of this information is freely available to us right in /proc. I know that this sort of thing can often be a very simple trick for sed/awk experts, but I don't know how to approach it with my limited sed/awk knowledge.
I would like to extract something like the following on a single line:
<model name>, <cpu MHz> MHz, <cpu cores> cores, <detect "vmx" (Intel-VT) or "svm" (AMD-V support)>
e.g. with the below output, this would look like (with "1300.000" rounded to "1300")
"AMD Athlon(tm) II Neo N36L Dual-Core Processor, 1300 MHz, 2 cores, VMX-Virtualization" (or "SVM-Virtualization" or "No Virtualization")
I would like to also combine this info with that of /proc/meminfo or free -mh, so:
"AMD Athlon(tm) II Neo N36L Dual-Core Processor, 1300 MHz, 2 cores, 4.7 GB Memory (1.8 GB Free), SVM-Virtualization"
I have spent some time searching for methods, but without luck. Maybe this is an interesting generic problem, as it involves extracting fields from the table-like formats that a lot of system info is held in, so it has some general application.
$ free -m -h
total used free shared buff/cache available
Mem: 4.5Gi 1.2Gi 1.8Gi 77Mi 1.6Gi 3.0Gi
Swap: 4.8Gi 0B 4.8Gi
$ cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 16
model : 6
model name : AMD Athlon(tm) II Neo N36L Dual-Core Processor
stepping : 3
microcode : 0x10000c8
cpu MHz : 1300.000
cache size : 1024 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a 3dnowprefetch osvw ibs skinit wdt nodeid_msr hw_pstate vmmcall npt lbrv svm_lock nrip_save
bugs : tlb_mmatch apic_c1e fxsave_leak sysret_ss_attrs null_seg amd_e400 spectre_v1 spectre_v2
bogomips : 2595.59
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate
$ cat /proc/meminfo
MemTotal: 4771304 kB
MemFree: 1862372 kB
MemAvailable: 3195768 kB
Buffers: 2628 kB
Cached: 1542788 kB
SwapCached: 0 kB
Active: 1534572 kB
Inactive: 909316 kB
Active(anon): 917792 kB
Inactive(anon): 62468 kB
Active(file): 616780 kB
Inactive(file): 846848 kB
Unevictable: 8384 kB
Mlocked: 0 kB
SwapTotal: 5070844 kB
SwapFree: 5070844 kB
Dirty: 20 kB
Writeback: 0 kB
AnonPages: 881304 kB
Mapped: 395420 kB
Shmem: 79776 kB
KReclaimable: 152892 kB
Slab: 295508 kB
SReclaimable: 152892 kB
SUnreclaim: 142616 kB
KernelStack: 9328 kB
PageTables: 45156 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 7456496 kB
Committed_AS: 5260708 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
Percpu: 2864 kB
HardwareCorrupted: 0 kB
AnonHugePages: 417792 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 314944 kB
DirectMap2M: 4796416 kB
DirectMap1G: 0 kB
Using /proc/cpuinfo and free -mh along with awk, search for the required strings, using : as the field delimiter, and set variables accordingly, splitting the relevant line of free -mh further into an array called arr based on whitespace as the delimiter. At the end, print the data in the required format using the variables created.
When processing the line beginning with flags, we search for the strings svm or vmx using awk's match function. A match is signified by the RSTART variable not being 0, so we check this to find the type of virtualisation being used. As virt is set to "No Virtualisation" at the start, that value is printed when neither string matches.
awk -F: '
/^model name/ { mod=$2 }                 # CPU model string
/^cpu MHz/    { mhz=$2 }                 # clock speed; %d truncates it to an integer later
/^cpu core/   { core=$2 }                # number of cores
/^flags/ {                               # detect virtualisation support
    virt="No Virtualisation"
    match($0,"svm")
    if (RSTART!=0) { virt="SVM-Virtualisation" }
    match($0,"vmx")
    if (RSTART!=0) { virt="VMX-Virtualisation" }
}
/^Mem:/ {                                # free -mh line: total used free shared buff/cache available
    split($2,arr," ")
    tot=arr[1]                           # total memory
    free=arr[3]                          # free memory (not arr[2], which is the "used" column)
}
END {
    printf "%s %dMHz %s core(s) %s %sB Memory (%sB Free)\n",mod,mhz,core,virt,tot,free
}' /proc/cpuinfo <(free -mh)
One-liner:
awk -F: '/^model name/ { mod=$2 } /^cpu MHz/ { mhz=$2 } /^cpu core/ {core=$2} /^flags/ { virt="No Virtualisation";match($0,"svm");if (RSTART!=0) { virt="SVM-Virtualisation" };match($0,"vmx");if (RSTART!=0) { virt="VMX-Virtualisation" } } /^Mem:/ {split($2,arr," ");tot=arr[1];free=arr[3]} END { printf "%s %dMHz %s core(s) %s %sB Memory (%sB Free)\n",mod,mhz,core,virt,tot,free }' /proc/cpuinfo <(free -mh)

How much RAM is actually available for applications in Linux?

I'm working on embedded Linux targets (32-bit ARM) and need to determine how much RAM is available for applications once the kernel and core software are launched. The available memory reported by free and /proc/meminfo doesn't seem to align with what testing shows is actually usable by applications. Is there a way to correctly calculate how much RAM is truly available without running e.g. stress on each system?
The target system used in my tests below has 256 MB of RAM and does not use swap (CONFIG_SWAP is not set). I used the 3.14.79-rt85 kernel in the tests below but have also tried 4.9.39 and see similar results. During boot, the following is reported:
Memory: 183172K/262144K available (5901K kernel code, 377K rwdata, 1876K rodata, 909K init, 453K bss, 78972K reserved)
Once system initialization is complete and the base software is running (e.g., dhcp client, ssh server, etc.), I get the following reported values:
[root@host ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 0 210016 320 7880 0 0 0 0 186 568 0 2 97 0 0
[root@host ~]# free -k
total used free shared buff/cache available
Mem: 249616 31484 209828 68 8304 172996
Swap: 0 0 0
[root@host ~]# cat /proc/meminfo
MemTotal: 249616 kB
MemFree: 209020 kB
MemAvailable: 172568 kB
Buffers: 712 kB
Cached: 4112 kB
SwapCached: 0 kB
Active: 4684 kB
Inactive: 2252 kB
Active(anon): 2120 kB
Inactive(anon): 68 kB
Active(file): 2564 kB
Inactive(file): 2184 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 2120 kB
Mapped: 3256 kB
Shmem: 68 kB
Slab: 13236 kB
SReclaimable: 4260 kB
SUnreclaim: 8976 kB
KernelStack: 864 kB
PageTables: 296 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 124808 kB
Committed_AS: 47944 kB
VmallocTotal: 1810432 kB
VmallocUsed: 3668 kB
VmallocChunk: 1803712 kB
[root@host ~]# sysctl -a | grep '^vm'
vm.admin_reserve_kbytes = 7119
vm.block_dump = 0
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.drop_caches = 3
vm.extfrag_threshold = 500
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 32
vm.max_map_count = 65530
vm.min_free_kbytes = 32768
vm.mmap_min_addr = 4096
vm.nr_pdflush_threads = 0
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.panic_on_oom = 0
vm.percpu_pagelist_fraction = 0
vm.scan_unevictable_pages = 0
vm.stat_interval = 1
vm.swappiness = 60
vm.user_reserve_kbytes = 7119
vm.vfs_cache_pressure = 100
Based on the numbers above, I expected to have ~160 MiB available for future applications. By tweaking sysctl vm.min_free_kbytes I can boost this to nearly 200 MiB since /proc/meminfo appears to take this reserve into account, but for testing I left it set as it is above.
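For reference, the kernel's MemAvailable heuristic is described in Documentation/filesystems/proc.txt. Below is a rough awk sketch of it; the summed per-zone low watermarks are approximated here as vm.min_free_kbytes * 5/4 rather than the true per-zone sum, yet on the numbers above it lands within a few kB of the reported 172568 kB:
awk -v min="$(sysctl -n vm.min_free_kbytes)" '
    /^MemFree:/          { memfree = $2 }
    /^Active\(file\):/   { pagecache += $2 }   # reclaimable page cache
    /^Inactive\(file\):/ { pagecache += $2 }
    /^SReclaimable:/     { sreclaim = $2 }     # reclaimable slab
    END {
        low = min * 5 / 4                      # low watermark is roughly min + min/4
        avail = memfree - low
        avail += pagecache - (pagecache/2 < low ? pagecache/2 : low)
        avail += sreclaim - (sreclaim/2 < low ? sreclaim/2 : low)
        printf "estimated MemAvailable: %d kB\n", avail
    }' /proc/meminfo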
To test how much RAM was actually available, I used the stress tool as follows:
stress --vm 11 --vm-bytes 10M --vm-keep --timeout 5s
At 110 MiB, the system remains responsive and both free and vmstat reflect the increased RAM usage. The lowest reported free/available values are below:
[root@host ~]# free -k
total used free shared buff/cache available
Mem: 249616 146580 93196 68 9840 57124
Swap: 0 0 0
[root@host ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
11 0 0 93204 1792 8048 0 0 0 0 240 679 50 0 50 0 0
Here is where things start to break down. After increasing stress’ memory usage to 120 MiB - still well shy of the 168 MiB reported as available - the system freezes for the 5 seconds while stress is running. Continuously running vmstat during the test (or as continuously as possible due to the freeze) shows:
[root@host ~]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 209664 724 6336 0 0 0 0 237 666 0 1 99 0 0
3 0 0 121916 1024 6724 0 0 289 0 1088 22437 0 45 54 0 0
1 0 0 208120 1328 7128 0 0 1652 0 4431 43519 28 22 50 0 0
Due to the significant increase in interrupts and IO, I’m guessing the kernel is evicting pages containing executable code and then promptly needing to read them back in from flash. My questions are a) is this a correct assessment? and b) why would the kernel be doing this with RAM still available?
Note that if I try to use a single worker with stress and claim 160 MiB of memory, the OOM killer gets activated and kills the test. The OOM killer does not trigger in the scenarios described above.

HDP- Datanodes are crashing

We have a 4-node Hadoop cluster:
2 master nodes
2 data nodes
After some time we found that our data nodes were failing; when we check the logs, they always say "cannot allocate memory".
ENV
HDP version 2.3.6
HAWQ version 2.0.0
Linux OS: CentOS 6.0
The data nodes are crashing with the following error in their logs:
os::commit_memory(0x00007fec816ac000, 12288, 0) failed; error='Cannot allocate memory' (errno=12)
Memory Info
vm.overcommit_memory is 2
MemTotal: 30946088 kB
MemFree: 11252496 kB
Buffers: 496376 kB
Cached: 11938144 kB
SwapCached: 0 kB
Active: 15023232 kB
Inactive: 3116316 kB
Active(anon): 5709860 kB
Inactive(anon): 394092 kB
Active(file): 9313372 kB
Inactive(file): 2722224 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 15728636 kB
SwapFree: 15728636 kB
Dirty: 280 kB
Writeback: 0 kB
AnonPages: 5705052 kB
Mapped: 461876 kB
Shmem: 398936 kB
Slab: 803936 kB
SReclaimable: 692240 kB
SUnreclaim: 111696 kB
KernelStack: 33520 kB
PageTables: 342840 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 31201680 kB
Committed_AS: 26896520 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 73516 kB
VmallocChunk: 34359538628 kB
HardwareCorrupted: 0 kB
AnonHugePages: 2887680 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 6132 kB
DirectMap2M: 2091008 kB
DirectMap1G: 29360128 kB
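If strict overcommit accounting is in effect (vm.overcommit_memory = 2 with vm.overcommit_kbytes = 0), the kernel refuses allocations once Committed_AS reaches CommitLimit = SwapTotal + MemTotal * overcommit_ratio / 100, no matter how much memory is free. A quick sketch checking that formula against the numbers above, assuming the default overcommit_ratio of 50:
awk '/^(MemTotal|SwapTotal|CommitLimit|Committed_AS):/ { v[$1] = $2 }
     END {
         # overcommit_ratio assumed to be the default of 50
         limit = v["SwapTotal:"] + v["MemTotal:"] * 50 / 100
         printf "computed=%d kB  reported=%d kB  Committed_AS=%d kB\n", limit, v["CommitLimit:"], v["Committed_AS:"]
     }' /proc/meminfo
Here 15728636 + 30946088 / 2 = 31201680 kB matches the reported CommitLimit, and Committed_AS (26896520 kB) is already at roughly 86% of it.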

Entry in /proc/meminfo

I am now studying Linux.
cat /proc/meminfo produces the following.
Please tell me the meaning of the entries "Active(file)" and "Inactive(file)".
I can't find an explanation of these entries.
Thanks.
MemTotal: 7736104 kB
MemFree: 166580 kB
Buffers: 604636 kB
Cached: 5965376 kB
SwapCached: 0 kB
Active: 4294464 kB
Inactive: 2319240 kB
Active(anon): 13688 kB
Inactive(anon): 33828 kB
Active(file): 4280776 kB
Inactive(file): 2285412 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 16777208 kB
SwapFree: 16777208 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 43772 kB
Mapped: 11056 kB
Shmem: 3792 kB
Slab: 861004 kB
SReclaimable: 818040 kB
SUnreclaim: 42964 kB
KernelStack: 1624 kB
PageTables: 5460 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 20645260 kB
Committed_AS: 124392 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 450644 kB
VmallocChunk: 34359282660 kB
HardwareCorrupted: 0 kB
AnonHugePages: 2048 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 16384 kB
DirectMap2M: 3880960 kB
DirectMap1G: 4194304 kB
According to the output, the result of Active(file) + Inactive(file) + Shmem doesn't equal that of Cached + Buffers + SwapCached.
Active — The total amount of buffer or page cache memory, in kilobytes, that is in active use. This is memory that has been recently used and is usually not reclaimed for other purposes.
Inactive — The total amount of buffer or page cache memory, in kilobytes, that is free and available. This is memory that has not been recently used and can be reclaimed for other purposes.
Ref: https://www.centos.org/docs/5/html/5.1/Deployment_Guide/s2-proc-meminfo.html
And FYI.
Active = Active(anon) + Active(file)
Inactive = Inactive(anon) + Inactive(file)
Active(file) and Inactive(file) are file-backed: the original data lives in a file on disk, but it has been loaded into RAM so it can be accessed faster.
Active(file) + Inactive(file) + Shmem = Cached + Buffers + SwapCached
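A minimal sketch to compare the two sums on a live system (exact equality is not guaranteed on every kernel; on your output the two sides differ by only 32 kB):
awk '/^(Active\(file\)|Inactive\(file\)|Shmem):/ { lhs += $2 }   # file-backed LRU pages + shmem
     /^(Cached|Buffers|SwapCached):/             { rhs += $2 }   # page cache + buffers + swap cache
     END { printf "lhs=%d kB  rhs=%d kB  diff=%d kB\n", lhs, rhs, lhs - rhs }' /proc/meminfo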

How can I sort columns of text in Vim by size?

Given this text
affiliates 1038 680 KB
article_ratings 699 168 KB
authors 30 40 KB
fs.chunks 3401 633.89 MB
fs.files 1476 680 KB
nodes 1432 24.29 MB
nodes_search 91 2.8 MB
nodes_tags 272 40 KB
page_views 107769 16.37 MB
page_views_map 212 40 KB
recommendations 34305 45.1 MB
rewrite_rules 209 168 KB
sign_ups 10331 12.52 MB
sitemaps 1 14.84 MB
suppliers 13 8 KB
tariff_price_check_reports 34 540 KB
tariff_price_checks 1129 968 KB
tariffs 5 680 KB
users 17 64 KB
users_tags 2 8 KB
versions 18031 156.64 MB
How can I sort by the 4th and then 3rd column so that it's sorted by file size?
I've tried :%!sort -k4 -k3n which partially works, but seems to fail on the 3rd size column.
What am I doing wrong?
I think I've figured it out.
:%!sort -k4 -bk3g
I sort by the 4th column (-k4), followed by the 3rd column. We ignore leading blank spaces (b), and this time we sort using a general numeric sort (g).
I blogged about this too
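For what it's worth, a slightly more explicit form of the same command bounds each key so it stops at its own field rather than running to the end of the line (equivalent here, since column 4 is the last one):
:%!sort -b -k4,4 -k3,3g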
I don't know how to handle it with sort(1). I've found problems with the decimal point even after changing the LC_NUMERIC environment variable, so I would switch to Perl to solve it, like:
:%!perl -0777 -ne '
@l = map { [ $_, split " ", $_ ] } split /\n/, $_;
@l = sort { $a->[-1] cmp $b->[-1] or $a->[-2] <=> $b->[-2] } @l;
print "$_->[0]\n" for @l
'
Put it all on one line to run it from inside vim. It yields:
suppliers 13 8 KB
users_tags 2 8 KB
authors 30 40 KB
nodes_tags 272 40 KB
page_views_map 212 40 KB
users 17 64 KB
article_ratings 699 168 KB
rewrite_rules 209 168 KB
tariff_price_check_reports 34 540 KB
affiliates 1038 680 KB
fs.files 1476 680 KB
tariffs 5 680 KB
tariff_price_checks 1129 968 KB
nodes_search 91 2.8 MB
sign_ups 10331 12.52 MB
sitemaps 1 14.84 MB
page_views 107769 16.37 MB
nodes 1432 24.29 MB
recommendations 34305 45.1 MB
versions 18031 156.64 MB
fs.chunks 3401 633.89 MB
