Memory is not getting freed (back to the OS) - Go

I'm running a server written in Go. Its RSS is very high, and the memory is not getting freed back to the OS.
I used pprof to check, but it seems there is no memory leak.
I also tried:
GODEBUG=madvdontneed=1 ./memorytest
Please tell me how to use madvdontneed.
OS: CentOS 7 (Linux)
Arch: amd64
Go version: 1.14.2
Code:
package main

import (
	"fmt"
	"os"
	"runtime"
	"time"
)

var garr []string
var chn chan int

func main() {
	chn = make(chan int, 1)
	go Alloc()
	<-chn
}

func Alloc() {
	for {
		arr := make([]string, 100000000)
		//copy(garr,arr)
		garr = arr
		print_heap_info()
		time.Sleep(5 * time.Second)
	}
}

func print_heap_info() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("env:%v, heapsys:%d,heapalloc:%d,heapidel:%d,heapreleased:%d,heapinuse:%d\n",
		os.Getenv("GODEBUG"), m.HeapSys, m.HeapAlloc, m.HeapIdle, m.HeapReleased, m.HeapInuse)
}
Output:
env:madvdontneed=1, heapsys:1677262848,heapalloc:1600074304,heapidel:76955648,heapreleased:76914688,heapinuse:1600307200
env:madvdontneed=1, heapsys:3287875584,heapalloc:3200081392,heapidel:87556096,heapreleased:87506944,heapinuse:3200319488
env:madvdontneed=1, heapsys:4898619392,heapalloc:4800086512,heapidel:98295808,heapreleased:98115584,heapinuse:4800323584
env:madvdontneed=1, heapsys:6509101056,heapalloc:6400090640,heapidel:108773376,heapreleased:108724224,heapinuse:6400327680
env:madvdontneed=1, heapsys:6509232128,heapalloc:4800086176,heapidel:1708908544,heapreleased:108724224,heapinuse:4800323584
env:madvdontneed=1, heapsys:6509002752,heapalloc:6400090304,heapidel:108675072,heapreleased:108560384,heapinuse:6400327680
env:madvdontneed=1, heapsys:6509199360,heapalloc:4800086712,heapidel:1708875776,heapreleased:108560384,heapinuse:4800323584
env:madvdontneed=1, heapsys:6509068288,heapalloc:6400090744,heapidel:108740608,heapreleased:108560384,heapinuse:6400327680
env:madvdontneed=1, heapsys:6509199360,heapalloc:4800086712,heapidel:1708875776,heapreleased:108560384,heapinuse:4800323584
env:madvdontneed=1, heapsys:6509068288,heapalloc:6400090840,heapidel:108740608,heapreleased:108462080,heapinuse:6400327680

Optimize your memory allocations to lower the peak.
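For illustration only (this is my own sketch, not code from the question): in the loop above, a fresh 1.6 GB slice is allocated while garr still references the previous one, so more than one large slice can be live at once and the peak grows accordingly. Reusing a single buffer keeps the peak near one slice:

func Alloc() {
	// Hypothetical variant: allocate once and reuse the buffer, instead of
	// calling make on every iteration. Only one large slice is ever live.
	buf := make([]string, 100000000)
	for {
		// ... refill or overwrite buf here instead of allocating a new slice ...
		garr = buf
		print_heap_info()
		time.Sleep(5 * time.Second)
	}
}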
See also: debug.FreeOSMemory()
FreeOSMemory forces a garbage collection followed by an attempt to return as much memory to the operating system as possible. (Even if this is not called, the runtime gradually returns memory to the operating system in a background task.)
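If you want the RSS to drop promptly rather than waiting for the background scavenger, a minimal sketch (my own illustration; the variable and the 1 GiB size are made up for the example) is to call it explicitly after dropping a large allocation:

package main

import (
	"runtime/debug"
	"time"
)

func main() {
	// Allocate something large, then drop the only reference to it.
	big := make([]byte, 1<<30) // 1 GiB, chosen arbitrarily for this example
	_ = big
	big = nil

	// Force a GC and ask the runtime to return freed pages to the OS now.
	debug.FreeOSMemory()

	// Keep the process alive so RSS can be inspected with top/ps.
	time.Sleep(time.Minute)
}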
Your formatted output is:
sys: 1599 MB alloc: 1525 MB idel: 73 MB released: 73 MB inuse: 1526 MB
sys: 3135 MB alloc: 3051 MB idel: 83 MB released: 83 MB inuse: 3052 MB
sys: 4671 MB alloc: 4577 MB idel: 93 MB released: 93 MB inuse: 4577 MB
sys: 6207 MB alloc: 6103 MB idel: 103 MB released: 103 MB inuse: 6103 MB
sys: 6207 MB alloc: 4577 MB idel: 1629 MB released: 103 MB inuse: 4577 MB
sys: 6207 MB alloc: 6103 MB idel: 103 MB released: 103 MB inuse: 6103 MB
sys: 6207 MB alloc: 4577 MB idel: 1629 MB released: 103 MB inuse: 4577 MB
sys: 6207 MB alloc: 6103 MB idel: 103 MB released: 103 MB inuse: 6103 MB
sys: 6207 MB alloc: 4577 MB idel: 1629 MB released: 103 MB inuse: 4577 MB
sys: 6207 MB alloc: 6103 MB idel: 103 MB released: 103 MB inuse: 6103 MB
Which is (for the last line):
sys: bytes of heap memory obtained from the OS: 6207 MB
alloc: bytes of allocated heap objects: 6103 MB
idel: bytes in idle (unused) spans: 103 MB
released: bytes of physical memory returned to the OS: 103 MB
inuse: bytes in in-use spans: 6103 MB
It is not a memory leak.
Output on a system with 8 GB RAM in total (checked with the system memory monitor):
command:
GODEBUG=madvdontneed=1 go run .
Output:
env: madvdontneed=1, sys: 1087 MB, alloc: 1024 MB, idel: 63 MB, released: 63 MB, inuse: 1024 MB
env: madvdontneed=1, sys: 2111 MB, alloc: 2048 MB, idel: 63 MB, released: 63 MB, inuse: 2048 MB
env: madvdontneed=1, sys: 3135 MB, alloc: 3072 MB, idel: 63 MB, released: 63 MB, inuse: 3072 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 63 MB, inuse: 4096 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 3072 MB, idel: 1087 MB, released: 63 MB, inuse: 3072 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 63 MB, inuse: 4096 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 3072 MB, idel: 1087 MB, released: 63 MB, inuse: 3072 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 63 MB, inuse: 4096 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 3072 MB, idel: 1087 MB, released: 63 MB, inuse: 3072 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 63 MB, inuse: 4096 MB
Code:
package main

import (
	"fmt"
	"os"
	"runtime"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		a = make([]byte, 1024*meg)
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		fmt.Printf("env: %v, sys: %4d MB, alloc: %4d MB, idel: %4d MB, released: %4d MB, inuse: %4d MB\n",
			os.Getenv("GODEBUG"), m.HeapSys/meg, m.HeapAlloc/meg, m.HeapIdle/meg, m.HeapReleased/meg, m.HeapInuse/meg)
		time.Sleep(1 * time.Second)
	}
}

var a []byte

const meg = 1024 * 1024
htop: (screenshot not reproduced)
Output of the vmstat 1 -S MB command while running the program in two terminals at the same time:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 1161 6369 25 595 0 0 702 452 276 567 6 3 91 1 0
3 0 1160 6300 28 632 0 0 43432 0 6288 14013 13 5 82 1 0
1 0 1160 6243 28 634 0 0 528 0 6342 13008 11 3 85 0 0
1 0 1160 6177 28 634 0 0 56 56 979 2090 4 1 95 0 0
2 0 1160 6110 28 634 0 0 0 0 1061 2664 5 2 94 0 0
0 0 1160 6044 28 634 0 0 12 36 994 2233 4 1 94 0 0
0 0 1160 3991 28 634 0 0 32 188 1074 1787 5 6 89 0 0
2 0 1160 2120 28 634 0 0 0 136 1016 1634 4 5 90 0 0
1 0 1160 973 28 633 0 0 0 4 1077 1660 4 5 92 0 0
2 0 1160 143 25 390 0 0 1780 356 1849 2341 4 9 87 0 0
0 9 1323 102 0 108 0 165 51964 169652 20277 21017 1 20 53 25 0
1 5 1510 99 0 129 2 189 54376 193996 57829 52152 1 16 56 27 0
1 5 1794 99 0 129 1 286 10068 293856 81160 59511 0 16 77 6 0
4 5 2047 101 0 98 5 257 21236 263292 69923 65485 0 23 54 23 0
1 2 1479 2867 0 217 43 23 233508 24380 24023 48536 4 27 47 22 0
2 0 1452 2824 0 232 27 0 43960 0 8168 21085 4 5 90 1 0
0 1 1443 2814 0 233 9 0 10956 0 3341 8468 5 2 93 0 0
3 0 1425 2796 0 231 18 0 19688 0 5780 15490 4 3 92 1 0
1 1 1420 2672 10 337 3 0 121628 1920 3292 7934 5 7 81 7 0
0 0 1394 2646 10 338 25 0 27360 0 7975 21555 3 5 92 0 0
0 1 1359 6856 10 339 1 0 2416 0 1035 2108 3 2 95 0 0
0 0 1353 6847 10 348 4 0 13660 0 1696 3471 4 1 95 0 0
Output of the first GODEBUG=madvdontneed=1 go run . command (the process was killed automatically):
env: madvdontneed=1, sys: 1087 MB, alloc: 1024 MB, idel: 63 MB, released: 63 MB, inuse: 1024 MB
env: madvdontneed=1, sys: 2111 MB, alloc: 2048 MB, idel: 63 MB, released: 63 MB, inuse: 2048 MB
env: madvdontneed=1, sys: 3135 MB, alloc: 3072 MB, idel: 63 MB, released: 63 MB, inuse: 3072 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 63 MB, inuse: 4096 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 3072 MB, idel: 1087 MB, released: 63 MB, inuse: 3072 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 63 MB, inuse: 4096 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 3072 MB, idel: 1087 MB, released: 63 MB, inuse: 3072 MB
signal: killed
Output of the second GODEBUG=madvdontneed=1 go run . command:
env: madvdontneed=1, sys: 1087 MB, alloc: 1024 MB, idel: 63 MB, released: 63 MB, inuse: 1024 MB
env: madvdontneed=1, sys: 2111 MB, alloc: 2048 MB, idel: 63 MB, released: 63 MB, inuse: 2048 MB
env: madvdontneed=1, sys: 3135 MB, alloc: 3072 MB, idel: 63 MB, released: 63 MB, inuse: 3072 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 63 MB, inuse: 4096 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 3072 MB, idel: 1087 MB, released: 63 MB, inuse: 3072 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 63 MB, inuse: 4096 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 3072 MB, idel: 1087 MB, released: 63 MB, inuse: 3072 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 62 MB, inuse: 4096 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 3072 MB, idel: 1087 MB, released: 62 MB, inuse: 3072 MB
env: madvdontneed=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 62 MB, inuse: 4096 MB
Code:
package main

import (
	"fmt"
	"os"
	"runtime"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		a = make([]byte, 1024*meg)
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		fmt.Printf("env: %v, sys: %4d MB, alloc: %4d MB, idel: %4d MB, released: %4d MB, inuse: %4d MB\n\n",
			os.Getenv("GODEBUG"), m.HeapSys/meg, m.HeapAlloc/meg, m.HeapIdle/meg, m.HeapReleased/meg, m.HeapInuse/meg)
		time.Sleep(1 * time.Second)
	}
}

var a []byte

const meg = 1024 * 1024
Command:
GODEBUG=gctrace=1 go run .
Output:
gc 1 @0.008s 2%: 0.071+0.67+0.034 ms clock, 0.57+0.88/0.76/0.041+0.27 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 2 @0.014s 3%: 0.069+1.3+0.027 ms clock, 0.55+0.48/0.80/0.73+0.21 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 3 @0.029s 2%: 0.056+0.62+0.040 ms clock, 0.45+0.39/0.75/0.67+0.32 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 4 @0.045s 3%: 0.31+0.97+0.12 ms clock, 2.5+0.91/1.2/0.92+1.0 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 5 @0.060s 2%: 0.056+0.60+0.019 ms clock, 0.45+0.44/0.70/1.1+0.15 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 6 @0.071s 2%: 0.025+1.0+0.018 ms clock, 0.20+0.47/1.1/3.5+0.15 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 7 @0.084s 2%: 0.12+0.88+0.042 ms clock, 0.97+0.77/1.2/0.88+0.33 ms cpu, 4->4->1 MB, 5 MB goal, 8 P
gc 8 @0.093s 2%: 0.039+0.83+0.028 ms clock, 0.31+0.38/0.66/1.2+0.22 ms cpu, 4->4->0 MB, 5 MB goal, 8 P
gc 1 @0.007s 3%: 0.013+1.7+0.005 ms clock, 0.11+0.86/2.0/1.2+0.042 ms cpu, 4->5->4 MB, 5 MB goal, 8 P
gc 1 @0.002s 5%: 0.024+2.0+0.042 ms clock, 0.19+0.25/1.5/1.3+0.34 ms cpu, 4->6->5 MB, 5 MB goal, 8 P
gc 2 @0.017s 3%: 0.014+4.6+0.044 ms clock, 0.11+0.13/3.3/1.9+0.35 ms cpu, 9->10->7 MB, 10 MB goal, 8 P
gc 3 @0.058s 2%: 0.030+6.7+0.036 ms clock, 0.24+0.12/5.3/1.8+0.29 ms cpu, 13->15->10 MB, 15 MB goal, 8 P
gc 4 @0.100s 2%: 0.034+5.6+0.015 ms clock, 0.27+0/7.2/0.65+0.12 ms cpu, 18->18->12 MB, 21 MB goal, 8 P
gc env: gctrace=1, sys: 1087 MB, alloc: 1024 MB, idel: 63 MB, released: 63 MB, inuse: 1024 MB
1 @0.024s 0%: 0.028+0.41+0.016 ms clock, 0.22+0.11/0.14/0.093+0.13 ms cpu, 1024->1024->1024 MB, 1025 MB goal, 8 P
env: gctrace=1, sys: 2111 MB, alloc: 2048 MB, idel: 63 MB, released: 63 MB, inuse: 2048 MB
gc 2 @1.049s 0%: 0.021+0.44+0.005 ms clock, 0.16+0.12/0.15/0.12+0.045 ms cpu, 2048->2048->2048 MB, 2049 MB goal, 8 P
env: gctrace=1, sys: 3135 MB, alloc: 3072 MB, idel: 63 MB, released: 63 MB, inuse: 3072 MB
env: gctrace=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 63 MB, inuse: 4096 MB
gc 3 @3.096s 0%: 0.023+0.56+0.017 ms clock, 0.18+0.13/0.20/0.17+0.13 ms cpu, 4096->4096->2048 MB, 4097 MB goal, 8 P
env: gctrace=1, sys: 4159 MB, alloc: 3072 MB, idel: 1087 MB, released: 63 MB, inuse: 3072 MB
env: gctrace=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 63 MB, inuse: 4096 MB
gc 4 @5.619s 0%: 0.023+0.31+0.018 ms clock, 0.18+0.18/0.18/0.22+0.15 ms cpu, 4096->4096->2048 MB, 4097 MB goal, 8 P
env: gctrace=1, sys: 4159 MB, alloc: 3072 MB, idel: 1087 MB, released: 63 MB, inuse: 3072 MB
env: gctrace=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 63 MB, inuse: 4096 MB
gc 5 @8.141s 0%: 0.025+0.26+0.004 ms clock, 0.20+0.16/0.15/0.15+0.033 ms cpu, 4096->4096->2048 MB, 4097 MB goal, 8 P
env: gctrace=1, sys: 4159 MB, alloc: 3072 MB, idel: 1087 MB, released: 63 MB, inuse: 3072 MB
gc 6 @10.413s 0%: 0.028+0.51+0.013 ms clock, 0.22+0.23/0.28/0.29+0.11 ms cpu, 4096->4096->2048 MB, 4097 MB goal, 8 P
env: gctrace=1, sys: 4159 MB, alloc: 4096 MB, idel: 63 MB, released: 63 MB, inuse: 4096 MB
The format of this line is subject to change. Currently, it is:
gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # P
where the fields are as follows:
gc # the GC number, incremented at each GC
@#s time in seconds since program start
#% percentage of time spent in GC since program start
#+...+# wall-clock/CPU times for the phases of the GC
#->#-># MB heap size at GC start, at GC end, and live heap
# MB goal goal heap size
# P number of processors used
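If you would rather read similar numbers from inside the program than via GODEBUG=gctrace=1, a small sketch (my own; it simply polls debug.ReadGCStats in a loop) could look like this:

package main

import (
	"fmt"
	"runtime/debug"
	"time"
)

func main() {
	var s debug.GCStats
	for i := 0; i < 5; i++ {
		debug.ReadGCStats(&s) // fills NumGC, LastGC, PauseTotal and recent pauses
		fmt.Printf("numgc: %d, last gc: %s, total pause: %s\n",
			s.NumGC, s.LastGC.Format(time.RFC3339), s.PauseTotal)
		time.Sleep(2 * time.Second)
	}
}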
Note:
go version go1.15.5 linux/amd64
See also:
Go 1.13 RSS keeps on increasing, suspected scavenging issue
https://github.com/golang/go/issues/36398
https://github.com/golang/go/issues/39295
https://go.googlesource.com/proposal/+/master/design/14951-soft-heap-limit.md
https://docs.google.com/document/d/1wmjrocXIWTr1JxU-3EQBI6BK6KgtiFArkG47XK73xIQ/edit#
https://blog.cloudflare.com/go-dont-collect-my-garbage/

Related

How to check the size of packages linked into my Go code

Following up with How do I check the size of a Go project?
The conclusion was:
in order to get a true sense of how much extra weight importing certain packages adds, one has to look at all of the package's sub-dependencies as well.
That's totally understandable. My question is:
Is there any way I can know how much space each component takes up in my compiled binary: the Go runtime, the dependency and sub-dependency packages, and my own code?
I vaguely remember reading something like this before (maybe when Go enhanced its linker).
If there has never been such a discussion before, is there any way the Go or even the C linker can look into my compiled binary and reveal something that I can parse further myself?
The binary will contain debug symbols which we can use to figure out how much space each package takes up.
I wrote a basic program to do this since I don't know of any tool that does this:
package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
	"sort"
	"strings"

	"github.com/go-delve/delve/pkg/proc"
)

func main() {
	// Use delve to decode the DWARF section
	binInfo := proc.NewBinaryInfo(runtime.GOOS, runtime.GOARCH)
	err := binInfo.AddImage(os.Args[1], 0)
	if err != nil {
		panic(err)
	}

	// Make a list of unique packages
	pkgs := make([]string, 0, len(binInfo.PackageMap))
	for _, fullPkgs := range binInfo.PackageMap {
		for _, fullPkg := range fullPkgs {
			exists := false
			for _, pkg := range pkgs {
				if fullPkg == pkg {
					exists = true
					break
				}
			}
			if !exists {
				pkgs = append(pkgs, fullPkg)
			}
		}
	}
	// Sort them for a nice output
	sort.Strings(pkgs)

	// Parse the ELF file ourselves
	elfFile, err := elf.Open(os.Args[1])
	if err != nil {
		panic(err)
	}

	// Get the symbol table
	symbols, err := elfFile.Symbols()
	if err != nil {
		panic(err)
	}

	usage := make(map[string]map[string]int)
	for _, sym := range symbols {
		if sym.Section == elf.SHN_UNDEF || sym.Section >= elf.SectionIndex(len(elfFile.Sections)) {
			continue
		}

		sectionName := elfFile.Sections[sym.Section].Name

		symPkg := ""
		for _, pkg := range pkgs {
			if strings.HasPrefix(sym.Name, pkg) {
				symPkg = pkg
				break
			}
		}
		// Symbol doesn't belong to a known package
		if symPkg == "" {
			continue
		}

		pkgStats := usage[symPkg]
		if pkgStats == nil {
			pkgStats = make(map[string]int)
		}
		pkgStats[sectionName] += int(sym.Size)
		usage[symPkg] = pkgStats
	}

	for _, pkg := range pkgs {
		sections, exists := usage[pkg]
		if !exists {
			continue
		}

		fmt.Printf("%s:\n", pkg)
		for section, size := range sections {
			fmt.Printf("%15s: %8d bytes\n", section, size)
		}
		fmt.Println()
	}
}
Now the actual space used is divided over multiple sections (.text for code, .bss for zero-initialized data, .data for global vars, etc.). This example lists the size per section, but you can modify the code to get the total if that is what you prefer.
Here is the output it generates for its own binary:
bufio:
.text: 12733 bytes
.noptrdata: 64 bytes
.bss: 176 bytes
.rodata: 72 bytes
bytes:
.bss: 48 bytes
.rodata: 64 bytes
.text: 12617 bytes
.noptrdata: 320 bytes
compress/flate:
.text: 20385 bytes
.noptrdata: 248 bytes
.bss: 2112 bytes
.noptrbss: 12 bytes
.rodata: 48 bytes
compress/zlib:
.text: 4138 bytes
.noptrdata: 96 bytes
.bss: 48 bytes
container/list:
.text: 4016 bytes
context:
.text: 387 bytes
.noptrdata: 72 bytes
.bss: 40 bytes
crypto:
.text: 20982 bytes
.noptrdata: 416 bytes
.bss: 96 bytes
.rodata: 58 bytes
.noptrbss: 3 bytes
debug/dwarf:
.rodata: 1088 bytes
.text: 113878 bytes
.noptrdata: 247 bytes
.bss: 64 bytes
debug/elf:
.rodata: 168 bytes
.text: 36557 bytes
.noptrdata: 112 bytes
.data: 5160 bytes
.bss: 16 bytes
debug/macho:
.text: 22980 bytes
.noptrdata: 96 bytes
.data: 456 bytes
.rodata: 80 bytes
debug/pe:
.text: 26004 bytes
.noptrdata: 96 bytes
.rodata: 288 bytes
encoding/base64:
.bss: 32 bytes
.rodata: 48 bytes
.text: 846 bytes
.noptrdata: 56 bytes
encoding/binary:
.text: 27108 bytes
.noptrdata: 72 bytes
.bss: 56 bytes
.rodata: 136 bytes
encoding/hex:
.bss: 16 bytes
.text: 288 bytes
.noptrdata: 64 bytes
encoding/json:
.rodata: 108 bytes
.text: 2930 bytes
.noptrdata: 128 bytes
.bss: 80 bytes
errors:
.rodata: 48 bytes
.text: 744 bytes
.noptrdata: 40 bytes
.bss: 16 bytes
fmt:
.text: 72010 bytes
.noptrdata: 136 bytes
.data: 104 bytes
.bss: 32 bytes
.rodata: 720 bytes
github.com/cilium/ebpf:
.text: 170860 bytes
.noptrdata: 1405 bytes
.bss: 608 bytes
.rodata: 3971 bytes
.data: 16 bytes
.noptrbss: 8 bytes
github.com/go-delve/delve/pkg/dwarf/frame:
.text: 18304 bytes
.noptrdata: 80 bytes
.bss: 8 bytes
.rodata: 211 bytes
github.com/go-delve/delve/pkg/dwarf/godwarf:
.text: 40431 bytes
.noptrdata: 144 bytes
.rodata: 352 bytes
github.com/go-delve/delve/pkg/dwarf/line:
.bss: 48 bytes
.rodata: 160 bytes
.text: 24069 bytes
.noptrdata: 96 bytes
github.com/go-delve/delve/pkg/dwarf/loclist:
.noptrdata: 64 bytes
.rodata: 64 bytes
.text: 4538 bytes
github.com/go-delve/delve/pkg/dwarf/op:
.text: 31142 bytes
.noptrdata: 80 bytes
.bss: 72 bytes
.rodata: 5313 bytes
github.com/go-delve/delve/pkg/dwarf/reader:
.noptrdata: 72 bytes
.bss: 16 bytes
.rodata: 24 bytes
.text: 8037 bytes
github.com/go-delve/delve/pkg/dwarf/regnum:
.bss: 40 bytes
.rodata: 2760 bytes
.text: 3943 bytes
.noptrdata: 48 bytes
github.com/go-delve/delve/pkg/dwarf/util:
.text: 4028 bytes
.noptrdata: 64 bytes
.rodata: 96 bytes
github.com/go-delve/delve/pkg/elfwriter:
.text: 3394 bytes
.noptrdata: 48 bytes
.rodata: 48 bytes
github.com/go-delve/delve/pkg/goversion:
.noptrdata: 104 bytes
.bss: 64 bytes
.rodata: 160 bytes
.text: 4415 bytes
github.com/go-delve/delve/pkg/logflags:
.bss: 32 bytes
.rodata: 40 bytes
.text: 2610 bytes
.noptrdata: 136 bytes
.noptrbss: 3 bytes
github.com/go-delve/delve/pkg/proc:
.text: 432477 bytes
.noptrdata: 718 bytes
.data: 1448 bytes
.bss: 592 bytes
.rodata: 10106 bytes
github.com/go-delve/delve/pkg/version:
.text: 1509 bytes
.noptrdata: 72 bytes
.data: 112 bytes
.rodata: 40 bytes
github.com/hashicorp/golang-lru/simplelru:
.text: 3911 bytes
.noptrdata: 32 bytes
.rodata: 160 bytes
github.com/sirupsen/logrus:
.noptrbss: 20 bytes
.rodata: 696 bytes
.text: 40175 bytes
.noptrdata: 204 bytes
.data: 64 bytes
.bss: 56 bytes
go/ast:
.text: 24407 bytes
.noptrdata: 104 bytes
.data: 112 bytes
.rodata: 120 bytes
go/constant:
.bss: 8 bytes
.rodata: 824 bytes
.text: 33910 bytes
.noptrdata: 88 bytes
go/parser:
.rodata: 1808 bytes
.text: 78751 bytes
.noptrdata: 136 bytes
.bss: 32 bytes
go/printer:
.text: 77202 bytes
.noptrdata: 113 bytes
.data: 24 bytes
.rodata: 1504 bytes
go/scanner:
.rodata: 240 bytes
.text: 18594 bytes
.noptrdata: 93 bytes
.data: 24 bytes
go/token:
.noptrdata: 72 bytes
.data: 1376 bytes
.bss: 8 bytes
.rodata: 192 bytes
.text: 7154 bytes
golang.org/x/arch/arm64/arm64asm:
.rodata: 856 bytes
.text: 116428 bytes
.noptrdata: 80 bytes
.bss: 80 bytes
.data: 46128 bytes
golang.org/x/arch/x86/x86asm:
.noptrdata: 29125 bytes
.bss: 112 bytes
.data: 20928 bytes
.rodata: 1252 bytes
.text: 76721 bytes
golang.org/x/sys/unix:
.text: 1800 bytes
.noptrdata: 128 bytes
.rodata: 70 bytes
.data: 80 bytes
hash/adler32:
.text: 1013 bytes
.noptrdata: 40 bytes
internal/bytealg:
.rodata: 56 bytes
.noptrbss: 8 bytes
.text: 1462 bytes
.noptrdata: 32 bytes
internal/cpu:
.rodata: 500 bytes
.noptrbss: 416 bytes
.noptrdata: 8 bytes
.bss: 24 bytes
.text: 3017 bytes
internal/fmtsort:
.text: 7443 bytes
.noptrdata: 40 bytes
.rodata: 40 bytes
internal/oserror:
.text: 500 bytes
.noptrdata: 40 bytes
.bss: 80 bytes
internal/poll:
.text: 31565 bytes
.rodata: 192 bytes
.noptrdata: 112 bytes
.data: 96 bytes
.bss: 64 bytes
.noptrbss: 12 bytes
internal/reflectlite:
.text: 13761 bytes
.noptrdata: 32 bytes
.data: 456 bytes
.bss: 24 bytes
.rodata: 496 bytes
internal/syscall/unix:
.rodata: 72 bytes
.text: 708 bytes
.noptrdata: 40 bytes
.noptrbss: 4 bytes
internal/testlog:
.text: 827 bytes
.noptrdata: 32 bytes
.noptrbss: 12 bytes
.bss: 16 bytes
.rodata: 72 bytes
io:
.noptrdata: 240 bytes
.bss: 272 bytes
.data: 56 bytes
.noptrbss: 0 bytes
.rodata: 128 bytes
.text: 10824 bytes
log:
.text: 188 bytes
.noptrdata: 80 bytes
.bss: 8 bytes
main:
.text: 3002 bytes
.noptrdata: 80 bytes
.rodata: 104 bytes
math:
.data: 136 bytes
.bss: 2672 bytes
.text: 184385 bytes
.noptrdata: 10211 bytes
.rodata: 2076 bytes
.noptrbss: 2 bytes
net:
.text: 24417 bytes
.noptrdata: 236 bytes
.data: 240 bytes
.bss: 584 bytes
.noptrbss: 16 bytes
.rodata: 48 bytes
os:
.bss: 264 bytes
.data: 32 bytes
.rodata: 352 bytes
.text: 46276 bytes
.noptrdata: 296 bytes
.noptrbss: 1 bytes
path:
.text: 9378 bytes
.noptrdata: 136 bytes
.bss: 48 bytes
.rodata: 48 bytes
reflect:
.noptrbss: 1 bytes
.text: 97417 bytes
.noptrdata: 72 bytes
.rodata: 1728 bytes
.data: 456 bytes
.bss: 160 bytes
regexp:
.rodata: 968 bytes
.text: 126451 bytes
.noptrdata: 558 bytes
.bss: 296 bytes
.noptrbss: 16 bytes
.data: 816 bytes
runtime:
.noptrbss: 20487 bytes
.data: 8520 bytes
.bss: 184836 bytes
.tbss: 8 bytes
.typelink: 9020 bytes
.gopclntab: 0 bytes
.text: 408713 bytes
.noptrdata: 4347 bytes
.rodata: 23102 bytes
.itablink: 2952 bytes
sort:
.text: 13055 bytes
.noptrdata: 32 bytes
.data: 16 bytes
.rodata: 24 bytes
strconv:
.text: 45928 bytes
.noptrdata: 17015 bytes
.data: 1680 bytes
.bss: 32 bytes
.rodata: 144 bytes
strings:
.text: 21070 bytes
.noptrdata: 320 bytes
.rodata: 168 bytes
sync:
.rodata: 476 bytes
.noptrdata: 56 bytes
.bss: 56 bytes
.noptrbss: 8 bytes
.text: 14288 bytes
syscall:
.noptrdata: 127 bytes
.rodata: 978 bytes
.noptrbss: 76 bytes
.bss: 264 bytes
.data: 2720 bytes
.text: 33728 bytes
text/tabwriter:
.data: 96 bytes
.rodata: 88 bytes
.text: 8002 bytes
.noptrdata: 46 bytes
text/template:
.text: 166284 bytes
.noptrdata: 316 bytes
.noptrbss: 8 bytes
.bss: 176 bytes
.data: 376 bytes
.rodata: 3152 bytes
time:
.text: 83290 bytes
.noptrdata: 164 bytes
.data: 912 bytes
.bss: 208 bytes
.noptrbss: 20 bytes
.rodata: 832 bytes
unicode:
.noptrdata: 50398 bytes
.data: 15248 bytes
.bss: 40 bytes
.noptrbss: 0 bytes
.text: 27198 bytes
Note that this program isn't perfect: it only works on ELF binaries (so Linux and similar systems), since it relies on the ELF symbol table. I am sure you can do a similar thing for Windows PE files, but that would have taken me too much time.
Also, this program ignores some parts of the Go runtime, but I am guessing that is not the most important part for you.
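If you would rather see one total per package than the per-section breakdown, a possible tweak (hypothetical; it replaces the final print loop of the program above) is:

// printTotals sums every section for each package and prints one line per
// package; a drop-in replacement for the final loop in the program above.
func printTotals(pkgs []string, usage map[string]map[string]int) {
	for _, pkg := range pkgs {
		total := 0
		for _, size := range usage[pkg] {
			total += size
		}
		if total > 0 {
			fmt.Printf("%-45s %8d bytes\n", pkg, total)
		}
	}
}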
You can run nm to get the sizes of all symbols in the binary.
Example: nm -S /usr/local/go/bin/gofmt. The second column is the size.
0000000000468700 000000000000011c T unicode/utf8.DecodeLastRuneInString
0000000000468240 00000000000001a6 T unicode/utf8.DecodeRune
0000000000468400 00000000000001a6 T unicode/utf8.DecodeRuneInString
0000000000468820 0000000000000157 T unicode/utf8.EncodeRune
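If you want per-package totals out of that, a rough sketch (my own; it assumes nm -S output is piped on stdin and that the size is the hexadecimal second column, as above) could aggregate the symbol sizes like this:

package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strconv"
	"strings"
)

func main() {
	totals := map[string]uint64{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		f := strings.Fields(sc.Text())
		if len(f) < 4 {
			continue // skip lines without a size column
		}
		size, err := strconv.ParseUint(f[1], 16, 64) // nm -S prints the size in hex
		if err != nil {
			continue
		}
		// Package name: everything before the first '.' after the last '/'.
		name := f[3]
		slash := strings.LastIndex(name, "/")
		dot := strings.Index(name[slash+1:], ".")
		if dot < 0 {
			continue
		}
		totals[name[:slash+1+dot]] += size
	}
	pkgs := make([]string, 0, len(totals))
	for p := range totals {
		pkgs = append(pkgs, p)
	}
	sort.Strings(pkgs)
	for _, p := range pkgs {
		fmt.Printf("%-45s %10d bytes\n", p, totals[p])
	}
}

Usage would be something like: nm -S /usr/local/go/bin/gofmt | go run sizes.go (sizes.go being a made-up file name for the sketch).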

Get the size of files and folders in mb through command prompt?

On my system some of my files are several GB in size.
Through the command prompt, the dir command gives the size of the files, but the size it shows is in KB,
and I have to convert it to MB manually.
Command: dir
How can I get the size of the files in MB?
In cmd you need to use a for loop and convert the file info to the format you want.
For %_ in ("C:\FolderPath\*") DO @(Set /A %~z_ / 1048576 &Echo. Mb %~nxt_ )
NOTE: It is essential that you include a File Mask (In my example case *, but it could also be Somefile*.* or *.* or some other variant)
Example Results:
C:\Admin>For %_ in ("C:\*") DO @(Set /A %~z_ / 1048576 &Echo. Mb %~nxt_ )
0 Mb 11/07/2007 08:00 AM eula.1028.txt
0 Mb 11/07/2007 08:00 AM eula.1031.txt
0 Mb 11/07/2007 08:00 AM eula.1033.txt
0 Mb 11/07/2007 08:00 AM eula.1036.txt
0 Mb 11/07/2007 08:00 AM eula.1040.txt
0 Mb 11/07/2007 08:00 AM eula.1041.txt
0 Mb 11/07/2007 08:00 AM eula.1042.txt
0 Mb 11/07/2007 08:00 AM eula.2052.txt
0 Mb 11/07/2007 08:00 AM eula.3082.txt
0 Mb 02/09/2019 10:26 AM External_URL_Monitor_2019-02-09_09.26.14.2614.log
3 Mb 02/09/2019 10:47 AM External_URL_Monitor_2019-02-09_09.27.53.2753.log
3 Mb 02/09/2019 11:20 AM External_URL_Monitor_2019-02-09_09.59.28.5928.log
4 Mb 02/09/2019 11:57 AM External_URL_Monitor_2019-02-09_10.23.56.2356.log
23 Mb 02/09/2019 03:20 PM External_URL_Monitor_2019-02-09_11.49.20.4920.log
4 Mb 02/09/2019 10:47 AM Internal_URL_Monitor_2019-02-09_09.27.55.2755.log
3 Mb 02/09/2019 11:21 AM Internal_URL_Monitor_2019-02-09_09.59.34.5934.log
4 Mb 02/09/2019 11:57 AM Internal_URL_Monitor_2019-02-09_10.23.59.2359.log
0 Mb 02/09/2019 03:21 PM Internal_URL_Monitor_2019-02-09_11.41.15.4115.log
1 Mb 02/09/2019 10:05 AM URL_Monitor_2019-02-09_09.01.40.140.log
1 Mb 11/07/2007 08:50 AM VC_RED.cab
0 Mb 11/07/2007 08:53 AM VC_RED.MSI
1 Mb 06/06/2012 05:23 PM Windows6.1-KB2639043-v5-x64.msu
File sizes in MiB:
powershell -NoLogo -NoProfile -Command ^
"Get-ChildItem -File | ForEach-Object {'{0:N2} {1}' -f #(($_.Length / 1mb), $_.FullName)}"

How much RAM is actually available for applications in Linux?

I’m working on embedded Linux targets (32-bit ARM) and need to determine how much RAM is available for applications once the kernel and core software are launched. The available memory reported by free and /proc/meminfo doesn’t seem to align with what testing shows is actually usable by applications. Is there a way to correctly calculate how much RAM is truly available without running, e.g., stress on each system?
The target system used in my tests below has 256 MB of RAM and does not use swap (CONFIG_SWAP is not set). I used the 3.14.79-rt85 kernel in the tests below but have also tried 4.9.39 and see similar results. During boot, the following is reported:
Memory: 183172K/262144K available (5901K kernel code, 377K rwdata, 1876K rodata, 909K init, 453K bss, 78972K reserved)
Once system initialization is complete and the base software is running (e.g., dhcp client, ssh server, etc.), I get the following reported values:
[root@host ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
1 0 0 210016 320 7880 0 0 0 0 186 568 0 2 97 0 0
[root@host ~]# free -k
total used free shared buff/cache available
Mem: 249616 31484 209828 68 8304 172996
Swap: 0 0 0
[root@host ~]# cat /proc/meminfo
MemTotal: 249616 kB
MemFree: 209020 kB
MemAvailable: 172568 kB
Buffers: 712 kB
Cached: 4112 kB
SwapCached: 0 kB
Active: 4684 kB
Inactive: 2252 kB
Active(anon): 2120 kB
Inactive(anon): 68 kB
Active(file): 2564 kB
Inactive(file): 2184 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 2120 kB
Mapped: 3256 kB
Shmem: 68 kB
Slab: 13236 kB
SReclaimable: 4260 kB
SUnreclaim: 8976 kB
KernelStack: 864 kB
PageTables: 296 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 124808 kB
Committed_AS: 47944 kB
VmallocTotal: 1810432 kB
VmallocUsed: 3668 kB
VmallocChunk: 1803712 kB
[root@host ~]# sysctl -a | grep '^vm'
vm.admin_reserve_kbytes = 7119
vm.block_dump = 0
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
vm.drop_caches = 3
vm.extfrag_threshold = 500
vm.laptop_mode = 0
vm.legacy_va_layout = 0
vm.lowmem_reserve_ratio = 32
vm.max_map_count = 65530
vm.min_free_kbytes = 32768
vm.mmap_min_addr = 4096
vm.nr_pdflush_threads = 0
vm.oom_dump_tasks = 1
vm.oom_kill_allocating_task = 0
vm.overcommit_kbytes = 0
vm.overcommit_memory = 0
vm.overcommit_ratio = 50
vm.page-cluster = 3
vm.panic_on_oom = 0
vm.percpu_pagelist_fraction = 0
vm.scan_unevictable_pages = 0
vm.stat_interval = 1
vm.swappiness = 60
vm.user_reserve_kbytes = 7119
vm.vfs_cache_pressure = 100
Based on the numbers above, I expected to have ~160 MiB available for future applications. By tweaking sysctl vm.min_free_kbytes I can boost this to nearly 200 MiB since /proc/meminfo appears to take this reserve into account, but for testing I left it set as it is above.
To test how much RAM was actually available, I used the stress tool as follows:
stress --vm 11 --vm-bytes 10M --vm-keep --timeout 5s
At 110 MiB, the system remains responsive and both free and vmstat reflect the increased RAM usage. The lowest reported free/available values are below:
[root@host ~]# free -k
total used free shared buff/cache available
Mem: 249616 146580 93196 68 9840 57124
Swap: 0 0 0
[root@host ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
11 0 0 93204 1792 8048 0 0 0 0 240 679 50 0 50 0 0
Here is where things start to break down. After increasing stress’ memory usage to 120 MiB - still well shy of the 168 MiB reported as available - the system freezes for the 5 seconds while stress is running. Continuously running vmstat during the test (or as continuously as possible due to the freeze) shows:
[root@host ~]# vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 209664 724 6336 0 0 0 0 237 666 0 1 99 0 0
3 0 0 121916 1024 6724 0 0 289 0 1088 22437 0 45 54 0 0
1 0 0 208120 1328 7128 0 0 1652 0 4431 43519 28 22 50 0 0
Due to the significant increase in interrupts and IO, I’m guessing the kernel is evicting pages containing executable code and then promptly needing to read them back in from flash. My questions are a) is this a correct assessment? and b) why would the kernel be doing this with RAM still available?
Note that if I try to use a single worker with stress and claim 160 MiB of memory, the OOM killer is activated and kills the test. The OOM killer does not trigger in the scenarios described above.

How can I sort columns of text in Vim by size?

Given this text
affiliates 1038 680 KB
article_ratings 699 168 KB
authors 30 40 KB
fs.chunks 3401 633.89 MB
fs.files 1476 680 KB
nodes 1432 24.29 MB
nodes_search 91 2.8 MB
nodes_tags 272 40 KB
page_views 107769 16.37 MB
page_views_map 212 40 KB
recommendations 34305 45.1 MB
rewrite_rules 209 168 KB
sign_ups 10331 12.52 MB
sitemaps 1 14.84 MB
suppliers 13 8 KB
tariff_price_check_reports 34 540 KB
tariff_price_checks 1129 968 KB
tariffs 5 680 KB
users 17 64 KB
users_tags 2 8 KB
versions 18031 156.64 MB
How can I sort by the 4th and then 3rd column so that it's sorted by file size?
I've tried :%!sort -k4 -k3n which partially works, but seems to fail on the 3rd size column.
What am I doing wrong?
I think I've figured it out.
:%!sort -k4 -bk3g
I sort by the 4th column (-k4), followed by the 3rd column. We ignore leading blank spaces (b), and this time we sort using a general numeric sort (g).
I blogged about this too
I don't know how to handle it with sort(). I ran into problems with the decimal point even after changing the LC_NUMERIC environment variable, so I would switch to perl to solve it, like:
:%!perl -0777 -ne '
@l = map { [ $_, split " ", $_ ] } split /\n/, $_;
@l = sort { $a->[-1] cmp $b->[-1] or $a->[-2] <=> $b->[-2] } @l;
print "$_->[0]\n" for @l
'
Put it all on one line to run it from inside vim. It yields:
suppliers 13 8 KB
users_tags 2 8 KB
authors 30 40 KB
nodes_tags 272 40 KB
page_views_map 212 40 KB
users 17 64 KB
article_ratings 699 168 KB
rewrite_rules 209 168 KB
tariff_price_check_reports 34 540 KB
affiliates 1038 680 KB
fs.files 1476 680 KB
tariffs 5 680 KB
tariff_price_checks 1129 968 KB
nodes_search 91 2.8 MB
sign_ups 10331 12.52 MB
sitemaps 1 14.84 MB
page_views 107769 16.37 MB
nodes 1432 24.29 MB
recommendations 34305 45.1 MB
versions 18031 156.64 MB
fs.chunks 3401 633.89 MB

Oracle 11gR2 failed check of kernel parameters on HP-UX

I'm installing Oracle 11gR2 on a 64-bit Itanium HP-UX (v11.31) system (for HP Operation Manager 9).
According to the installation requirements, I've changed the kernel parameters, but when I start the installation process it doesn't recognize them.
Below are the parameters that I've set:
Parameter (Manual) (On server)
-------------------------------------------------------------
fs_async 0 0
ksi_alloc_max (nproc*8) 10240*8 = 81920
executable_stack 0 0
max_thread_proc 1024 3003
maxdsiz 0x40000000 (1073741824) 2063835136
maxdsiz_64bit 0x80000000 (2147483648) 2147483648
maxfiles 256 (a) 4096
maxssiz 0x8000000 (134217728) 134217728
maxssiz_64bit 0x40000000 (1073741824) 1073741824
maxuprc ((nproc*9)/10) 9216
msgmni (nproc) 10240
msgtql (nproc) 32000
ncsize 35840 95120
nflocks (nproc) 10240
ninode (8*nproc+2048) 83968
nkthread (((nproc*7)/4)+16) 17936
nproc 4096 10240
semmni (nproc) 10240
semmns (semmni*2) 20480
semmnu (nproc-4) 10236
semvmx 32767 65535
shmmax size of memory or 0x40000000 (higher one) 1073741824
shmmni 4096 4096
shmseg 512 1024
vps_ceiling 64 64
if this can help:
[root@HUG30055 /] # swapinfo
Kb Kb Kb PCT START/ Kb
TYPE AVAIL USED FREE USED LIMIT RESERVE PRI NAME
dev 4194304 0 4194304 0% 0 - 1 /dev/vg00/lvol2
dev 8388608 0 8388608 0% 0 - 1 /dev/vg00/lvol10
reserve - 742156 -742156
memory 7972944 3011808 4961136 38%
[root@HUG30055 /] # bdf /tmp
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol6 4194304 1773864 2401576 42% /tmp
