We have a 4-node Hadoop cluster:
2 master nodes
2 data nodes
After some time we found that our data nodes were failing. When we checked the logs, they always reported "cannot allocate memory".
Environment:
HDP version 2.3.6
HAWQ version 2.0.0
Linux OS: CentOS 6.0
We are getting the following error; the data nodes are crashing with these logs:
os::commit_memory(0x00007fec816ac000, 12288, 0) failed; error='Cannot allocate memory' (errno=12)
Memory Info
vm_overcommit ratio is 2
MemTotal: 30946088 kB
MemFree: 11252496 kB
Buffers: 496376 kB
Cached: 11938144 kB
SwapCached: 0 kB
Active: 15023232 kB
Inactive: 3116316 kB
Active(anon): 5709860 kB
Inactive(anon): 394092 kB
Active(file): 9313372 kB
Inactive(file): 2722224 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 15728636 kB
SwapFree: 15728636 kB
Dirty: 280 kB
Writeback: 0 kB
AnonPages: 5705052 kB
Mapped: 461876 kB
Shmem: 398936 kB
Slab: 803936 kB
SReclaimable: 692240 kB
SUnreclaim: 111696 kB
KernelStack: 33520 kB
PageTables: 342840 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 31201680 kB
Committed_AS: 26896520 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 73516 kB
VmallocChunk: 34359538628 kB
HardwareCorrupted: 0 kB
AnonHugePages: 2887680 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 6132 kB
DirectMap2M: 2091008 kB
DirectMap1G: 29360128 kB
The problem I'm trying to solve is to produce portable output that I can display on all of the servers in our environment to show basic info at login, using information that is generically available on all CentOS / Red Hat systems. I would like to pull the info from /proc/cpuinfo and /proc/meminfo (or free -m -h); "why not just 'yum install some-great-tool'?" is not ideal, as all of this information is freely available to us right in /proc. I know this sort of thing is often a simple trick for sed/awk experts, but I don't know how to approach it with my limited sed/awk knowledge.
I would like to extract something like the following on a single line:
<model name>, <cpu MHz> MHz, <cpu cores> cores, <detect "vmx" (Intel-VT) or "svm" (AMD-V support)>
e.g. with the below output, this would look like (with "1300.000" rounded to "1300")
"AMD Athlon(tm) II Neo N36L Dual-Core Processor, 1300 MHz, 2 cores, VMX-Virtualization" (or "SVM-Virtualization" or "No Virtualization")
I would like to also combine this info with that of /proc/meminfo or free -mh, so:
"AMD Athlon(tm) II Neo N36L Dual-Core Processor, 1300 MHz, 2 cores, 4.7 GB Memory (1.8 GB Free), SVM-Virtualization"
I have spent some time searching for methods, but without luck. This may also be an interesting generic problem, as it involves taking the tabular format that a lot of this information is held in and extracting what is required, so it should have some general application.
$ free -m -h
total used free shared buff/cache available
Mem: 4.5Gi 1.2Gi 1.8Gi 77Mi 1.6Gi 3.0Gi
Swap: 4.8Gi 0B 4.8Gi
$ cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 16
model : 6
model name : AMD Athlon(tm) II Neo N36L Dual-Core Processor
stepping : 3
microcode : 0x10000c8
cpu MHz : 1300.000
cache size : 1024 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 2
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a 3dnowprefetch osvw ibs skinit wdt nodeid_msr hw_pstate vmmcall npt lbrv svm_lock nrip_save
bugs : tlb_mmatch apic_c1e fxsave_leak sysret_ss_attrs null_seg amd_e400 spectre_v1 spectre_v2
bogomips : 2595.59
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate
$ cat /proc/meminfo
MemTotal: 4771304 kB
MemFree: 1862372 kB
MemAvailable: 3195768 kB
Buffers: 2628 kB
Cached: 1542788 kB
SwapCached: 0 kB
Active: 1534572 kB
Inactive: 909316 kB
Active(anon): 917792 kB
Inactive(anon): 62468 kB
Active(file): 616780 kB
Inactive(file): 846848 kB
Unevictable: 8384 kB
Mlocked: 0 kB
SwapTotal: 5070844 kB
SwapFree: 5070844 kB
Dirty: 20 kB
Writeback: 0 kB
AnonPages: 881304 kB
Mapped: 395420 kB
Shmem: 79776 kB
KReclaimable: 152892 kB
Slab: 295508 kB
SReclaimable: 152892 kB
SUnreclaim: 142616 kB
KernelStack: 9328 kB
PageTables: 45156 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 7456496 kB
Committed_AS: 5260708 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
Percpu: 2864 kB
HardwareCorrupted: 0 kB
AnonHugePages: 417792 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 314944 kB
DirectMap2M: 4796416 kB
DirectMap1G: 0 kB
Using /proc/cpuinfo and free -mh along with awk, search for the required strings, using : as the field delimiter, and set variables accordingly, splitting the Mem: line of free -mh further into an array called arr on whitespace. At the end, print the data in the required format using the variables created.
When reading the line beginning with flags, we search for the strings svm or vmx using awk's match function. A match is signified by the RSTART variable being non-zero, so we check this to determine the type of virtualisation supported. As virt is set to "No Virtualisation" at the start of the block, no match results in "No Virtualisation" being printed.
awk -F: '/^model name/ {       # CPU model string
mod=$2
}
/^cpu MHz/ {                   # clock speed (%d below rounds it to a whole number)
mhz=$2
}
/^cpu core/ {                  # number of cores
core=$2
}
/^flags/ {                     # detect virtualisation support from the CPU flags
virt="No Virtualisation";
match($0,"svm");
if (RSTART!=0)
{
virt="SVM-Virtualisation"
};
match($0,"vmx");
if (RSTART!=0) {
virt="VMX-Virtualisation"
}
}
/^Mem:/ {                      # Mem: line of free -mh: total used free ...
split($2,arr," ");
tot=arr[1];                    # total
free=arr[3]                    # free (third column; the second is "used")
}
END {
printf "%s %dMHz %s core(s) %s %sB Memory (%sB Free)\n",mod,mhz,core,virt,tot,free
}' /proc/cpuinfo <(free -mh)
One liner:
awk -F: '/^model name/ { mod=$2 } /^cpu MHz/ { mhz=$2 } /^cpu core/ {core=$2} /^flags/ { virt="No Virtualisation";match($0,"svm");if (RSTART!=0) { virt="SVM-Virtualisation" };match($0,"vmx");if (RSTART!=0) { virt="VMX-Virtualisation" } } /^Mem:/ {split($2,arr," ");tot=arr[1];free=arr[3]} END { printf "%s %dMHz %s core(s) %s %sB Memory (%sB Free)\n",mod,mhz,core,virt,tot,free }' /proc/cpuinfo <(free -mh)
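If the leading spaces that -F: leaves in $2 bother you, or you want output closer to the comma-separated format asked for in the question, a variation along the same lines (trimming each captured field and testing the flags with ~ instead of match) could be:
awk -F: '
function trim(s) { sub(/^[ \t]+/, "", s); sub(/[ \t]+$/, "", s); return s }
/^model name/ { mod  = trim($2) }
/^cpu MHz/    { mhz  = trim($2) }
/^cpu cores/  { core = trim($2) }
/^flags/      { virt = "No Virtualisation"
                if ($0 ~ /svm/) virt = "SVM-Virtualisation"
                if ($0 ~ /vmx/) virt = "VMX-Virtualisation" }
/^Mem:/       { split($2, arr, " "); tot = arr[1]; free = arr[3] }
END { printf "%s, %d MHz, %s cores, %sB Memory (%sB Free), %s\n", mod, mhz, core, tot, free, virt }
' /proc/cpuinfo <(free -mh)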
I'm working on embedded Linux targets (32-bit ARM) and need to determine how much RAM is available for applications once the kernel and core software are launched. The available memory reported by free and /proc/meminfo doesn't seem to align with what testing shows is actually usable by applications. Is there a way to correctly calculate how much RAM is truly available without running e.g. stress on each system?
The target system used in my tests below has 256 MB of RAM and does not use swap (CONFIG_SWAP is not set). I used the 3.14.79-rt85 kernel in the tests below, but have also tried 4.9.39 and see similar results. During boot, the following is reported:
Memory: 183172K/262144K available (5901K kernel code, 377K rwdata, 1876K rodata, 909K init, 453K bss, 78972K reserved)
Once system initialization is complete and the base software is running (e.g., dhcp client, ssh server, etc.), I get the following reported values:
[root@host ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 0 210016 320 7880 0 0 0 0 186 568 0 2 97 0 0
[root@host ~]# free -k
total used free shared buff/cache available
Mem: 249616 31484 209828 68 8304 172996
Swap: 0 0 0
[root@host ~]# cat /proc/meminfo
MemTotal: 249616 kB
MemFree: 209020 kB
MemAvailable: 172568 kB
Buffers: 712 kB
Cached: 4112 kB
SwapCached: 0 kB
Active: 4684 kB
Inactive: 2252 kB
Active(anon): 2120 kB
Inactive(anon): 68 kB
Active(file): 2564 kB
Inactive(file): 2184 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 2120 kB
Mapped: 3256 kB
Shmem: 68 kB
Slab: 13236 kB
SReclaimable: 4260 kB
SUnreclaim: 8976 kB
KernelStack: 864 kB
PageTables: 296 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 124808 kB
Committed_AS: 47944 kB
VmallocTotal: 1810432 kB
VmallocUsed: 3668 kB
VmallocChunk: 1803712 kB
[root@host ~]# sysctl -a | grep '^vm'
vm.admin_reserve_kbytes = 7119
vm.block_dump = 0
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.drop_caches = 3
vm.extfrag_threshold = 500
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 32
vm.max_map_count = 65530
vm.min_free_kbytes = 32768
vm.mmap_min_addr = 4096
vm.nr_pdflush_threads = 0
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.panic_on_oom = 0
vm.percpu_pagelist_fraction = 0
vm.scan_unevictable_pages = 0
vm.stat_interval = 1
vm.swappiness = 60
vm.user_reserve_kbytes = 7119
vm.vfs_cache_pressure = 100
Based on the numbers above, I expected to have ~160 MiB available for future applications. By tweaking sysctl vm.min_free_kbytes I can boost this to nearly 200 MiB, since /proc/meminfo appears to take this reserve into account, but for testing I left it set as shown above.
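(For concreteness, the tweak referred to is just lowering the reserve below the 32768 kB shown in the sysctl output; the value below is only illustrative, not what the dumps above were captured with:)
# Shrink the kernel's free-page reserve so MemAvailable rises; 8192 is a
# hypothetical example value, the dumps above were taken with 32768.
sysctl -w vm.min_free_kbytes=8192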
To test how much RAM was actually available, I used the stress tool as follows:
stress --vm 11 --vm-bytes 10M --vm-keep --timeout 5s
At 110 MiB, the system remains responsive and both free and vmstat reflect the increased RAM usage. The lowest reported free/available values are below:
[root@host ~]# free -k
total used free shared buff/cache available
Mem: 249616 146580 93196 68 9840 57124
Swap: 0 0 0
[root@host ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
11 0 0 93204 1792 8048 0 0 0 0 240 679 50 0 50 0 0
Here is where things start to break down. After increasing stress's memory usage to 120 MiB - still well shy of the 168 MiB reported as available - the system freezes for the 5 seconds that stress is running. Running vmstat continuously during the test (or as continuously as the freeze allows) shows:
[root@host ~]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 209664 724 6336 0 0 0 0 237 666 0 1 99 0 0
3 0 0 121916 1024 6724 0 0 289 0 1088 22437 0 45 54 0 0
1 0 0 208120 1328 7128 0 0 1652 0 4431 43519 28 22 50 0 0
Due to the significant increase in interrupts and IO, I'm guessing the kernel is evicting pages containing executable code and then promptly needing to read them back in from flash. My questions are: a) is this a correct assessment? and b) why would the kernel be doing this with RAM still available?
Note that if I try to use a single worker with stress and claim 160 MiB of memory, the OOM killer is activated and kills the test. The OOM killer does not trigger in the scenarios described above.
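One way to check the eviction theory is to watch the major-fault counters while stress runs; assuming /proc/vmstat on these kernels exposes pgmajfault and the pgsteal counters, a minimal sampling loop would be:
# Sample major-fault and page-steal counters once per second; a sharp rise in
# pgmajfault during the test would support the idea that code pages are being
# evicted and read back in from flash.
while sleep 1; do
    awk '/^(pgmajfault|pgsteal)/ { printf "%s=%s ", $1, $2 } END { print "" }' /proc/vmstat
done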
I am writing a shell script to upload a file to a specific folder on Google Drive. I have started out on the terminal, though, to see how it can be done.
$ now=$(date +"%Y")
$ echo $now
2015
$ drive list | grep -w $now
0B6g6AG_EmqeJM3ZKSHc5cUNJZ2M delete.txt 18.0 B 2015-06-12 10:32:05
0B6g6AG_EmqeJVkZZVXI4OWtHVEk delete.txt 17.0 B 2015-06-11 18:58:19
0B6g6AG_EmqeJTWIxVVBLSjB3YXc Open Drive Replacements 06_11_2015.xls 9.7 KB 2015-06-11 12:03:13
0B6g6AG_EmqeJakdNVjhjSTk0V1U Open Drive Replacements 06_08_2015.xls 13.8 KB 2015-06-08 10:06:17
0B6g6AG_EmqeJQ1JldDFOTUt0Uzg Open Drive Replacements 06_05_2015.xls 798.2 KB 2015-06-05 17:03:46
0B6g6AG_EmqeJQW1LaGU4UnJqdHM YYZ Replacements 06_05_2015.xls 84.0 KB 2015-06-05 14:56:43
0B6g6AG_EmqeJQ2R3QkJDWkp1X2c YVR3 Replacements 06_05_2015.xls 30.2 KB 2015-06-05 14:56:40
0B6g6AG_EmqeJZjMwOS1oZGRLN2M TYO Replacements 06_05_2015.xls 38.4 KB 2015-06-05 14:56:38
0B6g6AG_EmqeJelcwYXBkOVpFeTQ TYO3 Replacements 06_05_2015.xls 108.5 KB 2015-06-05 14:56:35
0B6g6AG_EmqeJZ2E4eXVPUkNaUmM TLV1 Replacements 06_05_2015.xls 34.3 KB 2015-06-05 14:56:33
0B6g6AG_EmqeJUWZESVZGUmc2QWc SYD Replacements 06_05_2015.xls 17.9 KB 2015-06-05 14:56:31
0B6g6AG_EmqeJaExsQmdwOGFiQUU SNV1 Replacements 06_05_2015.xls 58.9 KB 2015-06-05 14:56:27
0B6g6AG_EmqeJVW9YbDdXNzU5SWs SIN Replacements 06_05_2015.xls 22.0 KB 2015-06-05 14:56:24
0B6g6AG_EmqeJN21zRHhkMzhPNnc SEA3 Replacements 06_05_2015.xls 92.2 KB 2015-06-05 14:56:22
0B6g6AG_EmqeJbU81QURvZjVJZUU SEA2 Replacements 06_05_2015.xls 34.3 KB 2015-06-05 14:56:20
0B6g6AG_EmqeJOTZIcDlrUy0tTGc PMO1 Replacements 06_05_2015.xls 22.0 KB 2015-06-05 14:56:18
0B6g6AG_EmqeJQVdXNUwwaE9CRmc PHX2 Replacements 06_05_2015.xls 9.7 KB 2015-06-05 14:56:15
0B6g6AG_EmqeJakVLeFhNb2NnSkU PAR3 Replacements 06_05_2015.xls 186.9 KB 2015-06-05 14:56:12
0B6g6AG_EmqeJNFhDVUZtRjYtNk0 ORD Replacements 06_05_2015.xls 50.7 KB 2015-06-05 14:56:06
0B6g6AG_EmqeJUUxEUDh6Vm0tMXM ORD4 Replacements 06_05_2015.xls 34.3 KB 2015-06-05 14:55:59
0B6g6AG_EmqeJc3hJalc3R25qa1E ORD2 Replacements 06_05_2015.xls 26.1 KB 2015-06-05 14:55:56
0B6g6AG_EmqeJaHhRN1N6NElkd1U MIA1 Replacements 06_05_2015.xls 88.1 KB 2015-06-05 14:55:54
0B6g6AG_EmqeJWktoQU5wTU13YTA MEX1 Replacements 06_05_2015.xls 17.9 KB 2015-06-05 14:55:51
0B6g6AG_EmqeJb2FEWWwwQXF2SEk MDE1 Replacements 06_05_2015.xls 9.7 KB 2015-06-05 14:55:48
0B6g6AG_EmqeJMkxidzNpR1k2alk MAD1 Replacements 06_05_2015.xls 92.2 KB 2015-06-05 14:55:46
0B6g6AG_EmqeJY212ZHdJaDJXa3c MAA1 Replacements 06_05_2015.xls 42.5 KB 2015-06-05 14:55:44
0B6g6AG_EmqeJUy1Ec0NCN09lVTg LON3 Replacements 06_05_2015.xls 145.9 KB 2015-06-05 14:55:41
0B6g6AG_EmqeJV0tOQ1FmUVhtNUE LON2 Replacements 06_05_2015.xls 54.8 KB 2015-06-05 14:55:37
0B6g6AG_EmqeJQXUwMEpMaHBvOEU LIN1 Replacements 06_05_2015.xls 116.7 KB 2015-06-05 14:55:35
0B6g6AG_EmqeJX1ZBNjFvZWkwU0E LCY1 Replacements 06_05_2015.xls 154.1 KB 2015-06-05 14:55:32
0B6g6AG_EmqeJODhxbzM4dmk3Mk0 LAX6 Replacements 06_05_2015.xls 108.5 KB 2015-06-05 14:55:30
0B6g6AG_EmqeJcTlxcm8zb0tCdDg ICN1 Replacements 06_05_2015.xls 42.5 KB 2015-06-05 14:55:27
0B6g6AG_EmqeJZ0x3MVNkTTZOcWs IAD5 Replacements 06_05_2015.xls 104.4 KB 2015-06-05 14:55:25
0B6g6AG_EmqeJc0ZIMzNzN2R6c2c HKG Replacements 06_05_2015.xls 34.3 KB 2015-06-05 14:55:23
0B6g6AG_EmqeJSzhtbm1VV01QNFU FRF Replacements 06_05_2015.xls 79.9 KB 2015-06-05 14:55:20
0B6g6AG_EmqeJam1uMXBxQUxodDA FRF3 Replacements 06_05_2015.xls 178.7 KB 2015-06-05 14:55:18
0B6g6AG_EmqeJMi1EWlJPazlPcWc EWR1 Replacements 06_05_2015.xls 178.7 KB 2015-06-05 14:55:15
0B6g6AG_EmqeJY3Z5TURmdDhaR3M DAL Replacements 06_05_2015.xls 38.4 KB 2015-06-05 14:55:13
0B6g6AG_EmqeJYmdMclVVWWJYVXM DAL2 Replacements 06_05_2015.xls 133.6 KB 2015-06-05 14:55:11
0B6g6AG_EmqeJTXF6TVBCRDl5dWs BAH1 Replacements 06_05_2015.xls 13.8 KB 2015-06-05 14:55:08
0B6g6AG_EmqeJRlRONW9JVXlGbmc ATL2 Replacements 06_05_2015.xls 38.4 KB 2015-06-05 14:55:06
0B6g6AG_EmqeJQzVzSDlVWEVYSFU ATL1 Replacements 06_05_2015.xls 63.0 KB 2015-06-05 14:55:03
0B6g6AG_EmqeJZmpiY25ROXJqYU0 ARN1 Replacements 06_05_2015.xls 63.0 KB 2015-06-05 14:55:01
0B6g6AG_EmqeJMlFDbWp6MDI5X00 AMS Replacements 06_05_2015.xls 38.4 KB 2015-06-05 14:54:58
0B6g6AG_EmqeJbzdkUFdnMlFSUVU AMS3 Replacements 06_05_2015.xls 58.9 KB 2015-06-05 14:54:52
0B6g6AG_EmqeJfkVwNDE4bkxqY3YtdFhaMWFkNGZhQVZPMV9leFhGbWF1MXY4SVFiNXlMNkU 06_05_2015 0.0 B 2015-06-05 14:54:28
0B6g6AG_EmqeJeTBUT00zdGxBeG8 2015 0.0 B 2015-06-02 10:25:52
0B79uElAwuDVMfkdvSnJQNEM1Q1VINzZWcDZEWUJGT1o2RXYwTDRuNFVOcGRmbDJPVm81U1E 2014 0.0 B 2015-05-09 15:39:31
0B6g6AG_EmqeJTXllSEM1QzcxMGM unnamed5.gif 566.0 B 2015-05-26 10:44:20
0B6g6AG_EmqeJbnpLN1FKVkUxTGM unnamed (2).png 502.0 B 2015-05-26 10:44:07
0B6g6AG_EmqeJNlhfaDlQLW9jeE0 unnamed.png 441.0 B 2015-05-26 10:43:48
0B6g6AG_EmqeJV2dqNHBpc2RpTkk unnamed (1).png 325.0 B 2015-05-26 10:43:28
0B6g6AG_EmqeJTUtGcDVmMzhpeVE unnamed.png 453.0 B 2015-05-26 10:38:20
0B6g6AG_EmqeJa2hRNkgwY3gxd0E agvuykfd.jpg 4.7 KB 2015-05-26 10:34:41
The first column is the file/folder id and the second column is the file/folder name from my Google Drive.
I need to look for a folder named yyyy (the current year) and then retrieve its id. If the folder does not exist, I have to create it.
I am currently using gdrive on Ubuntu 14.04.2.
How do I get the desired file id from the above table?
This worked for me:
drive list | grep -w " 2015 " | grep -Eo '^[^ ]+'
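Building on that, a sketch of the full look-up-or-create flow might look like the following; note that the folder-creation line is only a placeholder, since the exact gdrive subcommand and flags depend on the version installed (check drive help):
#!/bin/sh
# Look up the Google Drive folder named after the current year; create it if missing.
now=$(date +"%Y")

# The surrounding spaces keep the match from hitting years embedded in file names
# such as "06_05_2015"; the second grep keeps only the id column.
folder_id=$(drive list | grep -w " $now " | grep -Eo '^[^ ]+')

if [ -z "$folder_id" ]; then
    # Placeholder: create the folder with whatever subcommand your gdrive version
    # provides (e.g. "drive folder -t" on some releases), then look it up again.
    drive folder -t "$now"
    folder_id=$(drive list | grep -w " $now " | grep -Eo '^[^ ]+')
fi

echo "Folder id for $now: $folder_id"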
I am now studying Linux.
cat /proc/meminfo produces the following output.
Please tell me the meaning of the entries "Active(file)" and "Inactive(file)".
I can't find an explanation of these entries.
Thanks.
MemTotal: 7736104 kB
MemFree: 166580 kB
Buffers: 604636 kB
Cached: 5965376 kB
SwapCached: 0 kB
Active: 4294464 kB
Inactive: 2319240 kB
Active(anon): 13688 kB
Inactive(anon): 33828 kB
Active(file): 4280776 kB
Inactive(file): 2285412 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 16777208 kB
SwapFree: 16777208 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 43772 kB
Mapped: 11056 kB
Shmem: 3792 kB
Slab: 861004 kB
SReclaimable: 818040 kB
SUnreclaim: 42964 kB
KernelStack: 1624 kB
PageTables: 5460 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 20645260 kB
Committed_AS: 124392 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 450644 kB
VmallocChunk: 34359282660 kB
HardwareCorrupted: 0 kB
AnonHugePages: 2048 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 16384 kB
DirectMap2M: 3880960 kB
DirectMap1G: 4194304 kB
According to the output, the result of Active(file) + Inactive(file) + Shmem doesn't equal that of Cached + Buffers + SwapCached.
Active — The total amount of buffer or page cache memory, in kilobytes, that is in active use. This is memory that has been recently used and is usually not reclaimed for other purposes.
Inactive — The total amount of buffer or page cache memory, in kilobytes, that is free and available. This is memory that has not been recently used and can be reclaimed for other purposes.
Ref : https://www.centos.org/docs/5/html/5.1/Deployment_Guide/s2-proc-meminfo.html
And FYI.
Active = Active(anon) + Active(file)
Inactive = Inactive(anon) + Inactive(file)
Active(file) and Inactive(file) pages are file-backed, which means the original data lives in a file on disk but has been loaded into RAM so it can be accessed faster.
Active(file) + Inactive(file) + Shmem = Cached + Buffers + SwapCached
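To check this on your own box, a quick awk sum over /proc/meminfo can be used (it is only a rough check; small differences are normal because the fields are sampled at slightly different moments):
awk '/^(Active\(file\)|Inactive\(file\)|Shmem):/ { lru += $2 }
     /^(Cached|Buffers|SwapCached):/ { pc += $2 }
     END { printf "Active(file)+Inactive(file)+Shmem = %d kB, Cached+Buffers+SwapCached = %d kB\n", lru, pc }' /proc/meminfo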
When I cat /proc/meminfo, the report is as follows:
MemTotal: 2034284 kB
MemFree: 1432728 kB
Buffers: 16568 kB
Cached: 324864 kB
SwapCached: 0 kB
Active: 307344 kB
Inactive: 256916 kB
Active(anon): 223020 kB
Inactive(anon): 74372 kB
Active(file): 84324 kB
Inactive(file): 182544 kB
Unevictable: 0 kB
Mlocked: 0 kB
HighTotal: 1152648 kB
HighFree: 600104 kB
LowTotal: 881636 kB
LowFree: 832624 kB
SwapTotal: 4200960 kB
SwapFree: 4200960 kB
Dirty: 60 kB
Writeback: 0 kB
AnonPages: 222868 kB
Mapped: 80596 kB
Shmem: 74564 kB
Slab: 24268 kB
SReclaimable: 14024 kB
SUnreclaim: 10244 kB
KernelStack: 1672 kB
PageTables: 2112 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 5218100 kB
Committed_AS: 833352 kB
VmallocTotal: 122880 kB
VmallocUsed: 13916 kB
VmallocChunk: 50540 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 4096 kB
DirectMap4k: 20472 kB
DirectMap4M: 888832 kB
I found a formula to calculate MemTotal:
Memtotal = MemFree + Cached + Active + Inactive + Mapped + Shmem + Slab + PageTables + VmallocUsed
but I don't know whether the formula is correct or not. Can anyone help clarify it?
I think it would be difficult to reach the exact value (addition-based total-memory validation) from meminfo.
Nonetheless, in my view the following should get you close to the MemTotal figure:
TotalMemory = MemFree + Buffers + Cached + Dirty + AnonPages + Slab + VmallocUsed
In your example case
1432728 + 16568 + 324864 + 60 + 222868 + 24268 + 13916 = 2035272
Some references:
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/filesystems/proc.txt?id=HEAD#l451
(From another stackoverflow article suggested above)
Apart from that, I believe the volatility is because of VmallocUsed.
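For a quick check on any machine, this approximation can be computed straight from /proc/meminfo with a short awk one-liner (again, only a rough cross-check; it will not match MemTotal exactly, for the reasons above):
awk '/^(MemFree|Buffers|Cached|Dirty|AnonPages|Slab|VmallocUsed):/ { sum += $2 }
     /^MemTotal:/ { total = $2 }
     END { printf "approximate sum = %d kB, MemTotal = %d kB\n", sum, total }' /proc/meminfo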