How to properly interpret HeapInuse / HeapIdle / HeapReleased memory stats in golang

I want to monitor the memory usage of my golang program and clean up some internal cache if the system lacks free memory.
The problem is that the HeapAlloc / HeapInuse / HeapReleased stats don't always add up properly (to my understanding).
I'm looking at free system memory (+ buffers/cache) - the value that is shown as available by the free utility:
$ free
total used free shared buff/cache available
Mem: 16123232 409248 15113628 200 600356 15398424
Swap: 73242180 34560 73207620
I also look at HeapIdle - HeapReleased, which, according to the comments in https://godoc.org/runtime#MemStats,
HeapIdle minus HeapReleased estimates the amount of memory
that could be returned to the OS, but is being retained by
the runtime so it can grow the heap without requesting more
memory from the OS.
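For reference, here is a minimal sketch of how such numbers can be sampled from the runtime (the 5-second polling loop is purely illustrative, not my actual code; the Available column comes from the free utility / /proc/meminfo):
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	const mb = 1 << 20
	for {
		var m runtime.MemStats
		runtime.ReadMemStats(&m)
		// HeapIdle - HeapReleased estimates memory the runtime retains
		// even though it could be returned to the OS.
		retained := (m.HeapIdle - m.HeapReleased) / mb
		fmt.Printf("HeapAlloc: %dM, HeapInuse: %dM, HeapIdle: %dM, HeapReleased: %dM, retained: %dM\n",
			m.HeapAlloc/mb, m.HeapInuse/mb, m.HeapIdle/mb, m.HeapReleased/mb, retained)
		time.Sleep(5 * time.Second)
	}
}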
Now the problem: sometimes Available + HeapInuse + HeapIdle - HeapReleased exceeds the total amount of system memory. Usually it happens when HeapIdle is quite high and HeapReleased is neither close to HeapIdle nor to zero:
# Start of test
Available: 15379M, HeapAlloc: 49M, HeapInuse: 51M, HeapIdle: 58M, HeapReleased: 0M
# Work is in progress
# Looks good: 11795 + 3593 = 15388
Available: 11795M, HeapAlloc: 3591M, HeapInuse: 3593M, HeapIdle: 0M, HeapReleased: 0M
# Work has been done
# Looks good: 11745 + 45 + 3602 = 15392
Available: 11745M, HeapAlloc: 42M, HeapInuse: 45M, HeapIdle: 3602M, HeapReleased: 0M
# Golang released some memory to OS
# Looks good: 15224 + 14 + 3632 - 3552 = 15318
Available: 15224M, HeapAlloc: 10M, HeapInuse: 14M, HeapIdle: 3632M, HeapReleased: 3552M
# Some other work started
# Looks SUSPICIOUS: 13995 + 1285 + 2360 - 1769 = 15871
Available: 13995M, HeapAlloc: 1282M, HeapInuse: 1285M, HeapIdle: 2360M, HeapReleased: 1769M
# 5 seconds later
# Looks BAD: 13487 + 994 + 2652 - 398 = 16735 - more than system memory
Available: 13487M, HeapAlloc: 991M, HeapInuse: 994M, HeapIdle: 2652M, HeapReleased: 398M
# This bad situation holds for quite a while, even when work has been done
# Looks BAD: 13488 + 14 + 3631 - 489 = 16644
Available: 13488M, HeapAlloc: 10M, HeapInuse: 14M, HeapIdle: 3631M, HeapReleased: 489M
# It is strange that at this moment HeapIdle - HeapReleased = 3142M,
# which is more than the 2134M of used memory reported by the "free" utility.
$ free
total used free shared buff/cache available
Mem: 16123232 2185696 13337632 200 599904 13621988
Swap: 73242180 34560 73207620
# Still bad when another set of work started
# Looks BAD: 13066 + 2242 + 1403 = 16711
Available: 13066M, HeapAlloc: 2240M, HeapInuse: 2242M, HeapIdle: 1403M, HeapReleased: 0M
# But after 10 seconds it becomes good
# Looks good: 11815 + 2325 + 1320 = 15460
Available: 11815M, HeapAlloc: 2322M, HeapInuse: 2325M, HeapIdle: 1320M, HeapReleased: 0M
I do not understand where this additional "breathing" 1.3GB (16700 - 15400) of memory comes from. Used swap space remained the same during the whole test.
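For completeness, this is roughly the decision I am trying to implement (a simplified sketch: dropCache and the 512MB threshold are placeholders for my real cache logic, and MemAvailable is read from /proc/meminfo, which is what free reports in its "available" column):
package main

import (
	"bufio"
	"os"
	"runtime"
	"strconv"
	"strings"
)

// memAvailableBytes parses the MemAvailable field (reported in kB) from
// /proc/meminfo; this is the value free shows as "available".
func memAvailableBytes() (uint64, error) {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		return 0, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "MemAvailable:" {
			kb, err := strconv.ParseUint(fields[1], 10, 64)
			return kb * 1024, err
		}
	}
	return 0, sc.Err()
}

// maybeDropCache evicts the internal cache when the memory that is realistically
// available (system available + heap memory Go could hand back) drops below threshold.
func maybeDropCache(dropCache func(), threshold uint64) {
	avail, err := memAvailableBytes()
	if err != nil {
		return
	}
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	retained := m.HeapIdle - m.HeapReleased // retained by the runtime, not yet returned to the OS
	if avail+retained < threshold {
		dropCache() // placeholder for the real cache-eviction logic
	}
}

func main() {
	maybeDropCache(func() { /* evict cache entries here */ }, 512<<20)
}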

Related

Golang linux RSS shows more bytes than pprof runtime.MemStats

I have a socket client program written in Go. When it has just started up, Linux /proc/PID/status shows that the process RSS is 15204 kB, but the pprof report shows that HeapAlloc is about 1408 kB, so there is a gap of about 14000 kB.
My Questions:
1. Why is there such a big difference?
2. How is a Go application's memory distributed? Besides heap and stack, are there other memory areas, and how can I find them?
3. More importantly, how can I lower its RSS?
cat /proc/PID/status:
Umask: 0000
State: S (sleeping)
Tgid: 3393
Ngid: 0
Pid: 3393
PPid: 2882
TracerPid: 0
Uid: 500 500 500 500
Gid: 500 500 500 500
FDSize: 32
Groups: 500
NStgid: 3393
NSpid: 3393
NSpgid: 2881
NSsid: 2881
VmPeak: 806492 kB
VmSize: 806492 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 15204 kB
VmRSS: 15204 kB
RssAnon: 5024 kB
RssFile: 10180 kB
RssShmem: 0 kB
VmData: 10988 kB
VmStk: 132 kB
VmExe: 5164 kB
VmLib: 8 kB
VmPTE: 28 kB
VmPMD: 0 kB
VmSwap: 0 kB
Threads: 6
SigQ: 0/937
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000000001
SigCgt: fffffffe7fc1fefe
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
Cpus_allowed: 3
Cpus_allowed_list: 0-1
voluntary_ctxt_switches: 261951
nonvoluntary_ctxt_switches: 21327
go tool pprof heap:
# runtime.MemStats
# Alloc = 1408048
# TotalAlloc = 45071968
# Sys = 10828924
# Lookups = 0
# Mallocs = 889174
# Frees = 885421
# HeapAlloc = 1408048
# HeapSys = 7929856
# HeapIdle = 5677056
# HeapInuse = 2252800
# HeapReleased = 5480448
# HeapObjects = 3753
# Stack = 458752 / 458752
# MSpan = 25120 / 32768
# MCache = 1736 / 16384
# BuckHashSys = 725549
# GCSys = 886912
# OtherSys = 778703
# NextGC = 4194304
# LastGC = 1645757614280889245
"Why is there such a big difference?"
These are two different metrics. What people call "memory consumption" is
incredibly hard to even define on modern machines.
"How is the go application memory distributed?"
This is an implementation detail and varies. Nothing actionable to know here.
"Besides heap and stack, are there other memory areas?"
No, but note that the heap/stack dichotomy is an implementation detail of a certain (albeit common) compiler/compiler version/system combination.
"and how can I find these areas?" You cannot.
"More importantly, how can I lower its rss?"
By reducing how much memory you allocate. But note that lowering RSS most probably simply isn't needed. You probably overestimate how "problematic" a "large" RSS is.
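To make the "two different metrics" point concrete, here is a small sketch (Linux-only; parsing /proc/self/status is an illustrative shortcut) that prints the runtime's own accounting next to the kernel's RSS:
package main

import (
	"bufio"
	"fmt"
	"os"
	"runtime"
	"strings"
)

// vmRSS returns the VmRSS value from /proc/self/status (Linux only).
func vmRSS() string {
	f, err := os.Open("/proc/self/status")
	if err != nil {
		return "unknown"
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "VmRSS:") {
			return strings.TrimSpace(strings.TrimPrefix(sc.Text(), "VmRSS:"))
		}
	}
	return "unknown"
}

func main() {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	// Sys is everything the Go runtime has obtained from the OS:
	// heap, goroutine stacks, GC metadata, allocator structures, etc.
	fmt.Printf("HeapAlloc: %d kB (live heap objects)\n", m.HeapAlloc/1024)
	fmt.Printf("HeapSys:   %d kB (heap address space)\n", m.HeapSys/1024)
	fmt.Printf("StackSys:  %d kB, MSpanSys: %d kB, GCSys: %d kB, OtherSys: %d kB\n",
		m.StackSys/1024, m.MSpanSys/1024, m.GCSys/1024, m.OtherSys/1024)
	fmt.Printf("Sys:       %d kB (total obtained from the OS by the runtime)\n", m.Sys/1024)
	// The kernel's view also counts the mapped binary and shared libraries
	// (RssFile in /proc/PID/status), not just the Go heap.
	fmt.Printf("VmRSS:     %s (kernel view)\n", vmRSS())
}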

Improving Haskell performance for small GET requests

In an effort to become better with Haskell, I'm rewriting a small CLI that I developed originally in Python. The CLI mostly makes GET requests to an API and allows for filtering/formatting the JSON result.
I'm finding my Haskell version to be a lot slower than my Python one.
To help narrow down the problem, I excluded all parts of my Haskell code except the fetching of data - essentially, it's this:
import Data.Aeson
import qualified Data.ByteString.Char8 as BC
import Data.List (intercalate)
import Network.HTTP.Simple
...
-- For testing purposes
getUsers :: [Handle] -> IO ()
getUsers hs = do
  let handles = BC.pack $ intercalate ";" hs
  req <- parseRequest (baseUrl ++ "/user.info")
  let request = setRequestQueryString [("handles", Just handles)] $ req
  response <- httpJSON request
  let (usrs :: Maybe (MyApiResponseType [User])) = getResponseBody response
  print usrs
And I'm using the following dependencies:
dependencies:
- base >= 4.7 && < 5
- aeson
- bytestring
- http-conduit
To test this, I timed how long it takes for my Haskell program to retrieve data for a particular user (without any particular formatting). I compared it with my Python version (which formats the data), and Curl (which I piped into jq to format the data):
I ran each 5 times and took the average of the 3 middle values, excluding the highest and lowest times:
Haskell Python Curl
real: 1017 ms 568 ms 214 ms
user: 1062 ms 367 ms 26 ms
sys: 210 ms 45 ms 10 ms
Ok, so the Haskell version is definitely slower. Next I tried profiling tools to narrow down the cause of the problem.
I profiled the code using an SCC annotation for the function above:
> stack build --profile
...
> stack exec --profile -- my-cli-exe +RTS -p -sstderr
...
244,904,040 bytes allocated in the heap
27,759,640 bytes copied during GC
5,771,840 bytes maximum residency (6 sample(s))
245,912 bytes maximum slop
28 MiB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 228 colls, 228 par 0.849s 0.212s 0.0009s 0.0185s
Gen 1 6 colls, 5 par 0.090s 0.023s 0.0038s 0.0078s
Parallel GC work balance: 30.54% (serial 0%, perfect 100%)
TASKS: 21 (1 bound, 20 peak workers (20 total), using -N8)
SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
INIT time 0.004s ( 0.003s elapsed)
MUT time 0.881s ( 0.534s elapsed)
GC time 0.939s ( 0.235s elapsed)
RP time 0.000s ( 0.000s elapsed)
PROF time 0.000s ( 0.000s elapsed)
EXIT time 0.010s ( 0.001s elapsed)
Total time 1.833s ( 0.773s elapsed)
Alloc rate 277,931,867 bytes per MUT second
Productivity 48.1% of total user, 69.1% of total elapsed
Seems like a lot of time is being spent in garbage collection.
I looked at the generated .prof file, which gave this:
COST CENTRE MODULE SRC %time %alloc
>>=.\.ks' Data.ASN1.Get Data/ASN1/Get.hs:104:13-61 10.2 9.8
fromBase64.decode4 Data.Memory.Encoding.Base64 Data/Memory/Encoding/Base64.hs:(299,9)-(309,37) 9.0 12.3
>>=.\ Data.ASN1.Parse Data/ASN1/Parse.hs:(54,9)-(56,43) 5.4 0.7
fromBase64.loop Data.Memory.Encoding.Base64 Data/Memory/Encoding/Base64.hs:(264,9)-(296,45) 4.2 7.4
>>=.\ Data.ASN1.Get Data/ASN1/Get.hs:(104,9)-(105,38) 4.2 3.5
decodeSignedObject.onContainer Data.X509.Signed Data/X509/Signed.hs:(167,9)-(171,30) 3.6 2.9
runParseState.go Data.ASN1.BinaryEncoding.Parse Data/ASN1/BinaryEncoding/Parse.hs:(98,12)-(129,127) 3.0 3.2
getConstructedEndRepr.getEnd Data.ASN1.Stream Data/ASN1/Stream.hs:(37,11)-(41,82) 3.0 12.7
getConstructedEnd Data.ASN1.Stream Data/ASN1/Stream.hs:(23,1)-(28,93) 3.0 7.8
readCertificates Data.X509.CertificateStore Data/X509/CertificateStore.hs:(92,1)-(96,33) 3.0 2.2
fmap.\.ks' Data.ASN1.Get Data/ASN1/Get.hs:88:13-52 1.8 2.2
decodeConstruction Data.ASN1.BinaryEncoding Data/ASN1/BinaryEncoding.hs:(48,1)-(50,66) 1.8 0.0
fmap Data.ASN1.Parse Data/ASN1/Parse.hs:41:5-57 1.8 1.0
concat.loopCopy Data.ByteArray.Methods Data/ByteArray/Methods.hs:(210,5)-(215,28) 1.2 0.4
fromBase64.rset Data.Memory.Encoding.Base64 Data/Memory/Encoding/Base64.hs:(312,9)-(314,53) 1.2 0.0
localTimeParseE.allDigits Data.Hourglass.Format Data/Hourglass/Format.hs:358:9-37 1.2 0.3
getWord8 Data.ASN1.Get Data/ASN1/Get.hs:(200,1)-(204,43) 1.2 0.0
fmap.\ Data.ASN1.Get Data/ASN1/Get.hs:(88,9)-(89,38) 1.2 0.6
runParseState.runGetHeader.\ Data.ASN1.BinaryEncoding.Parse Data/ASN1/BinaryEncoding/Parse.hs:131:44-66 1.2 0.0
mplusEither Data.ASN1.BinaryEncoding.Parse Data/ASN1/BinaryEncoding/Parse.hs:(67,1)-(70,45) 1.2 4.9
getOID.groupOID Data.ASN1.Prim Data/ASN1/Prim.hs:299:9-92 1.2 0.3
getConstructedEndRepr.getEnd.zs Data.ASN1.Stream Data/ASN1/Stream.hs:40:48-73 1.2 0.0
getConstructedEndRepr.getEnd.(...) Data.ASN1.Stream Data/ASN1/Stream.hs:40:48-73 1.2 0.4
getConstructedEnd.(...) Data.ASN1.Stream Data/ASN1/Stream.hs:28:48-80 1.2 0.3
decodeEventASN1Repr.loop Data.ASN1.BinaryEncoding Data/ASN1/BinaryEncoding.hs:(54,11)-(67,69) 1.2 2.5
put Data.ASN1.Parse Data/ASN1/Parse.hs:(72,1)-(74,24) 1.2 0.0
fromASN1 Data.X509.ExtensionRaw Data/X509/ExtensionRaw.hs:(55,5)-(61,71) 1.2 0.0
compare Data.X509.DistinguishedName Data/X509/DistinguishedName.hs:31:23-25 1.2 0.0
putBinaryVersion Network.TLS.Packet Network/TLS/Packet.hs:(109,1)-(110,41) 1.2 0.0
parseLBS.onSuccess Data.ASN1.BinaryEncoding.Parse Data/ASN1/BinaryEncoding/Parse.hs:(147,11)-(149,64) 0.6 1.7
pemParseLBS Data.PEM.Parser Data/PEM/Parser.hs:(92,1)-(97,41) 0.6 1.0
runParseState.terminateAugment Data.ASN1.BinaryEncoding.Parse Data/ASN1/BinaryEncoding/Parse.hs:(87,12)-(93,53) 0.0 1.7
parseOnePEM.getPemContent Data.PEM.Parser Data/PEM/Parser.hs:(56,9)-(64,93) 0.0 1.8
This doesn't seem too bad, and when I scrolled down to the functions I had defined, they didn't seem to be taking much time either.
This leads me to believe it's a memory leak problem(?), so I profiled the heap:
stack exec --profile -- my-cli-exe +RTS -h
hp2ps my-cli-exe.hp
open my-cli-exe.ps
So it seems as though lots of space is being allocated on the heap, and then suddenly cleared.
The main issue is, I'm not sure where to go from here. My function is relatively small and is only getting a small JSON response of around 500 bytes. So where could the issue be coming from?
It seemed odd that the performance of a common Haskell library was so slow for me, but the following approach resolved my concern:
I found that the performance of my executable was faster when I used stack install to copy the binaries:
stack install
my-cli-exe
instead of using stack build and stack run.
Here are the running times again for comparison:
HS (stack install) HS (stack run) Python Curl
real: 373 ms 1017 ms 568 ms 214 ms
user: 222 ms 1062 ms 367 ms 26 ms
sys: 141 ms 210 ms 45 ms 10 ms

High memory usage on digital ocean droplet

I have a Laravel application which I've installed on a 1GB standard droplet running Ubuntu 20.04, nginx, MySQL 8 and PHP 7.4.
The application isn't even live yet and I notice it's already using over 50% of the memory. Yesterday it was using 80%, and after a system reboot it has returned to around 60% memory usage.
Below is a snapshot of the current high-memory processes. Is this level of memory usage normal for a Laravel application which is not even live, i.e. under limited load?
top - 19:41:00 up 3:46, 1 user, load average: 0.08, 0.04, 0.01
Tasks: 101 total, 1 running, 100 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.3 us, 0.7 sy, 0.0 ni, 98.7 id, 0.3 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 981.3 total, 90.6 free, 601.4 used, 289.3 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 212.2 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
815 mysql 20 0 1305900 417008 13352 S 0.7 41.5 1:32.39 mysqld
2257 www-data 20 0 245988 44992 30180 S 0.0 4.5 0:04.67 php-fpm7.4
2265 www-data 20 0 243700 42204 29572 S 0.0 4.2 0:04.41 php-fpm7.4
2259 www-data 20 0 243960 42104 30380 S 0.0 4.2 0:04.44 php-fpm7.4
988 root 20 0 125160 36188 10604 S 0.3 3.6 0:09.89 php
388 root 19 -1 84404 35116 33932 S 0.0 3.5 0:01.14 systemd-journ+
741 root 20 0 627300 20936 6656 S 0.0 2.1 0:02.11 snapd
738 root 20 0 238392 18588 12624 S 0.0 1.8 0:00.83 php-fpm7.4
743 root 20 0 31348 18344 3844 S 0.0 1.8 0:02.75 supervisord
544 root rt 0 280180 17976 8184 S 0.0 1.8 0:00.90 multipathd
825 root 20 0 108036 15376 7732 S 0.0 1.5 0:00.10 unattended-up+
736 root 20 0 29220 13200 5544 S 0.0 1.3 0:00.11 networkd-disp+
726 do-agent 20 0 559436 12120 6588 S 0.0 1.2 0:01.78 do-agent
1 root 20 0 101964 11124 8024 S 0.0 1.1 0:02.52 systemd
623 systemd+ 20 0 23912 10488 6484 S 0.0 1.0 0:00.42 systemd-resol+
778 www-data 20 0 71004 9964 5240 S 0.0 1.0 0:02.43 nginx
My concern is that once the application goes live and the load increases, with more database connections it is going to run out of memory. I know I can resize the droplet to increase the memory or set up some swap space, but is this amount of memory usage normal for an unused application?
How can I optimize the high-memory processes such as MySQL, nginx and PHP? MySQL 8 appears to be the main culprit hogging all the memory. Below are my MySQL settings:
#
# The MySQL database server configuration file.
#
# One can use all long options that the program supports.
# Run program with --help to get a list of available options and with
# --print-defaults to see which it would actually understand and use.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html
# Here is entries for some specific programs
# The following values assume you have at least 32M ram
[mysqld]
#
# * Basic Settings
#
user = mysql
# pid-file = /var/run/mysqld/mysqld.pid
# socket = /var/run/mysqld/mysqld.sock
# port = 3306
# datadir = /var/lib/mysql
# If MySQL is running as a replication slave, this should be
# changed. Ref https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_tmpdir
# tmpdir = /tmp
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address = 127.0.0.1
#
# * Fine Tuning
#
key_buffer_size = 16M
# max_allowed_packet = 64M
# thread_stack = 256K
# thread_cache_size = -1
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover-options = BACKUP
# max_connections = 151
# table_open_cache = 4000
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
#
# Log all queries
# Be aware that this log type is a performance killer.
# general_log_file = /var/log/mysql/query.log
# general_log = 1
#
# Error log - should be very few entries.
#
log_error = /var/log/mysql/error.log
#
# Here you can see queries with especially long duration
# slow_query_log = 1
# slow_query_log_file = /var/log/mysql/mysql-slow.log
# long_query_time = 2
# log-queries-not-using-indexes
#
# The following can be used as easy to replay backup logs or for replication.
# note: if you are setting up a replication slave, see README.Debian about
# other settings you may need to change.
# server-id = 1
# log_bin = /var/log/mysql/mysql-bin.log
# binlog_expire_logs_seconds = 2592000
max_binlog_size = 100M
# binlog_do_db = include_database_name
# binlog_ignore_db = include_database_name
Any tips and advice are much appreciated, as this is the first time I'm using a VPS.

Large amount of memory allocated by net/protocol.rb:153

While running memory_profiler, I noticed a large amount of memory being allocated by Ruby's net/protocol.rb component. I call it when performing an HTTP request to a server to download a file. The file is 43.67 MB, and net/protocol.rb alone allocates 262,011,476 bytes just to download it.
Looking at the "allocated memory by location" section in the profiler report below, I can see net/protocol.rb:172 and http/response.rb:334 allocating 50-60MB of memory each, which is about the size of the file, so that looks reasonable. However, the top most entry (net/protocol.rb:153) worries me: that's 200MB of memory, at least 4x the size of the file.
I have two questions:
Why does net/protocol need to allocate 5x the size of the file in order to download it?
Is there anything I can do to reduce the amount of memory used by net/protocol?
memory_profiler output:
Total allocated: 314461424 bytes (82260 objects)
Total retained: 0 bytes (0 objects)
allocated memory by gem
-----------------------------------
314461304 ruby-2.1.2/lib
120 client/lib
allocated memory by file
-----------------------------------
262011476 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb
52435727 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb
7971 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb
2178 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb
1663 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb
1260 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb
949 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb
120 /Users/andre.debrito/git/techserv-cache/client/lib/connections/cache_server_connection.rb
80 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/http.rb
allocated memory by location
-----------------------------------
200483909 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:153
60548199 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:172
52428839 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:334
978800 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:155
2537 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:61
2365 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:172
2190 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:54
1280 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:56
960 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:62
836 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:165
792 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:13
738 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:125
698 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:263
576 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:214
489 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:40
480 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:127
360 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:40
328 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:610
320 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:71
320 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:30
320 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:59
308 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:322
256 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:879
240 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1615
239 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:211
232 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:38
224 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:181
200 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:17
192 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:42
179 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:877
169 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:1459
160 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:1029
160 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:434
160 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:435
160 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:445
160 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1617
149 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1445
147 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:1529
129 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:98
128 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:1475
120 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:444
120 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:446
120 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:447
120 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:29
120 /Users/andre.debrito/git/techserv-cache/client/lib/connections/cache_server_connection.rb:45
96 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:899
80 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:39
80 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:45
80 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:46
80 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:145
allocated memory by class
-----------------------------------
309678360 String
3445304 Thread::Backtrace
981096 Array
352376 IO::EAGAINWaitReadable
1960 MatchData
1024 Hash
328 Net::HTTP
256 TCPSocket
256 URI::HTTP
128 Time
120 Net::HTTP::Get
120 Net::HTTPOK
96 Net::BufferedIO
allocated objects by gem
-----------------------------------
82259 ruby-2.1.2/lib
1 client/lib
allocated objects by file
-----------------------------------
81908 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb
129 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb
127 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb
28 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb
23 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb
23 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb
19 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/http.rb
1 /Users/andre.debrito/git/techserv-cache/client/lib/connections/cache_server_connection.rb
allocated objects by location
-----------------------------------
36373 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:153
24470 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:155
21057 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:172
48 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:61
38 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:54
32 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:56
31 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:172
24 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:62
12 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:127
9 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:40
8 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:71
8 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:165
8 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:30
8 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:59
6 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:214
6 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1615
5 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:263
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:1029
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:322
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:17
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:434
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:435
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:445
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:42
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:125
4 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1617
3 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:1529
3 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:444
3 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:446
3 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:447
3 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:40
3 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1445
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http.rb:877
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:39
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:45
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/generic_request.rb:46
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:13
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:145
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/header.rb:31
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:111
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:144
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:98
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:179
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:181
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:213
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1640
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:1642
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:343
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:530
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/generic.rb:557
allocated objects by class
-----------------------------------
47935 String
24519 Array
4894 IO::EAGAINWaitReadable
4894 Thread::Backtrace
7 MatchData
3 Hash
2 URI::HTTP
1 Net::BufferedIO
1 Net::HTTP
1 Net::HTTP::Get
1 Net::HTTPOK
1 TCPSocket
1 Time
retained memory by gem
-----------------------------------
NO DATA
retained memory by file
-----------------------------------
NO DATA
retained memory by location
-----------------------------------
NO DATA
retained memory by class
-----------------------------------
NO DATA
retained objects by gem
-----------------------------------
NO DATA
retained objects by file
-----------------------------------
NO DATA
retained objects by location
-----------------------------------
NO DATA
retained objects by class
-----------------------------------
NO DATA
Allocated String Report
-----------------------------------
11926 ""
7019 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:172
4894 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:153
10 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/http/response.rb:54
2 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/uri/common.rb:179
1 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:67
4894 "Resource temporarily unavailable - read would block"
4894 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:153
4894 "UTF-8"
4894 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:153
4894 "read would block"
4894 /Users/andre.debrito/.rvm/rubies/ruby-2.1.2/lib/ruby/2.1.0/net/protocol.rb:153
...
Relevant code:
report = MemoryProfiler.report do
  begin
    response = nil
    Net::HTTP.new(uri.host, uri.port).start { |http|
      request = Net::HTTP::Get.new uri.request_uri
      response = http.request request
    }
  rescue Net::ReadTimeout => e
    raise RequestTimeoutError.new(e.message)
  rescue Exception => e
    raise ServerConnectionError.new(e.message)
  end
end
report.pretty_print
Network traffic data from Charles proxy:
Request Header: 168 bytes
Response Header: 288 bytes
Request: -
Response: 43.67 MB (45792735 bytes)
Total: 43.67 MB (45793191 bytes)
Almost all of those strings allocated in net/protocol.rb#L153 are short-lived and are reclaimed by the next GC run. Those allocated objects are thus pretty harmless and will not result in a significantly larger process size.
You get a lot of exceptions (which are used for control flow here to read from the socket) and the actual read data, which is appended to the buffer. All of these operations create temporary (internally used) objects.
As such, you are probably measuring the wrong thing. What would probably make more sense is to:
measure the maximum RSS of the process (i.e. the "used" memory);
and to measure the amount of additional memory still allocated after the read.
You will notice that (depending on the memory pressure on your computer) the RSS will not grow significantly above the amount of actually read data, and that the referenced memory after the read is about the same size as the read data, with essentially no internal intermediate objects still referenced.

WHM cPanel Disk Usage Incorrect

I have cPanel/WHM installed on a 40GB partition; however, WHM shows that 8.9GB out of 9.9GB is in use. How do I correct this?
This is on an AWS EC2 instance. The root volume is configured with 40GB.
After running df -h :
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
9.9G 8.9G 574M 95% /
/dev/hda1 99M 26M 69M 28% /boot
tmpfs 1006M 0 1006M 0% /dev/shm
So that shows that /dev/mapper/VolGroup00-LogVol00 is 9.9GB. However, if I run parted and print the configuration, I can see:
Model: QEMU HARDDISK (ide)
Disk /dev/hda: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 107MB 107MB primary ext3 boot
2 107MB 21.5GB 21.4GB primary lvm
I need the whole 40GB for cPanel/WHM. Why would it limit itself to 1/4 of the disk?
After running vgs:
VG #PV #LV #SN Attr VSize VFree
VolGroup00 1 2 0 wz--n- 19.88G 0
pvs:
PV VG Fmt Attr PSize PFree
/dev/hda2 VolGroup00 lvm2 a-- 19.88G 0
lvs:
LV VG Attr LSize Origin Snap% Move Log Copy% Convert
LogVol00 VolGroup00 -wi-ao 10.22G
LogVol01 VolGroup00 -wi-ao 9.66G
fdisk -l
Disk /dev/hda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 13 104391 83 Linux
/dev/hda2 14 2610 20860402+ 8e Linux LVM
Disk /dev/dm-0: 10.9 GB, 10972299264 bytes
255 heads, 63 sectors/track, 1333 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-0 doesn't contain a valid partition table
Disk /dev/dm-1: 10.3 GB, 10368319488 bytes
255 heads, 63 sectors/track, 1260 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/dm-1 doesn't contain a valid partition table
Where are you checking disk usage in WHM? Can you please share the output of the following command so that I can assist you with this:
df -h
I think there is free space on your server in the LVM partition. Can you please check this with the following commands and let me know:
vgs
pvs
lvs
fdisk -l
If you find any free space in your VolGroup, then you will have to extend the logical volume with the lvextend command. You can see how at http://www.24x7servermanagement.com/blog/how-to-increase-the-size-of-the-logical-volume/
