Based on the following output of the !address -summary command, I think I have a native memory leak. In order to determine the call stack of where these allocations are happening, I am following the article at http://www.codeproject.com/KB/cpp/MemoryLeak.aspx
0:000> !address -summary
TEB 7efdd000 in range 7efdb000 7efde000
TEB 7efda000 in range 7efd8000 7efdb000
TEB 7efd7000 in range 7efd5000 7efd8000
TEB 7efaf000 in range 7efad000 7efb0000
TEB 7efac000 in range 7efaa000 7efad000
ProcessParametrs 00441b78 in range 00440000 00540000
Environment 004407f0 in range 00440000 00540000
-------------------- Usage SUMMARY --------------------------
TotSize ( KB) Pct(Tots) Pct(Busy) Usage
551a000 ( 87144) : 04.16% 14.59% : RegionUsageIsVAD
5b8d3000 ( 1499980) : 71.53% 00.00% : RegionUsageFree
2cc3000 ( 45836) : 02.19% 07.68% : RegionUsageImage
4ff000 ( 5116) : 00.24% 00.86% : RegionUsageStack
0 ( 0) : 00.00% 00.00% : RegionUsageTeb
1c040000 ( 459008) : 21.89% 76.87% : RegionUsageHeap
0 ( 0) : 00.00% 00.00% : RegionUsagePageHeap
1000 ( 4) : 00.00% 00.00% : RegionUsagePeb
0 ( 0) : 00.00% 00.00% : RegionUsageProcessParametrs
0 ( 0) : 00.00% 00.00% : RegionUsageEnvironmentBlock
Tot: 7fff0000 (2097088 KB) Busy: 2471d000 (597108 KB)
0:000> !heap -s
LFH Key : 0x7fdcf95f
Termination on corruption : DISABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
00440000 00000002 453568 436656 453568 62 54 32 0 0 LFH
006b0000 00001002 64 16 64 4 2 1 0 0
002b0000 00041002 256 4 256 2 1 1 0 0
00620000 00001002 64 16 64 5 2 1 0 0
00250000 00001002 64 16 64 4 2 1 0 0
007d0000 00041002 256 4 256 0 1 1 0 0
005c0000 00001002 1088 388 1088 7 17 2 0 0 LFH
02070000 00041002 256 4 256 1 1 1 0 0
02270000 00041002 256 144 256 0 1 1 0 0 LFH
04e10000 00001002 3136 1764 3136 384 36 3 0 0 LFH
External fragmentation 21 % (36 free blocks)
-----------------------------------------------------------------------------
But when I run the !heap -p -a command, I don't get any call stack, just the following. Any ideas on how to get the call stack of the allocation source?
0:000> !heap -p -a 0218e008
address 0218e008 found in
_HEAP # 4e10000
HEAP_ENTRY Size Prev Flags UserPtr UserSize - state
0218e000 001c 0000 [00] 0218e008 000d4 - (busy)
You should use Deleaker. It's a powerful tool for debugging.
Use Valgrind on Linux and Deleaker on Windows.
If you don't get a call stack from !heap -p -a, the reason can be that you have not used gflags correctly.
Remember to use the correct image name, including the .exe extension.
Try starting gflags interactively and going to the Image File tab; that might be easier.
Try with page heap as well; that also records call stacks.
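For example, a sketch of enabling the user-mode stack trace database with gflags before reproducing the leak (the image name MyApp.exe is a placeholder for your executable):

gflags /i MyApp.exe +ust        enable the user-mode stack trace database
gflags /i MyApp.exe +hpa        optional: also enable page heap

After restarting the process and reproducing the leak, !heap -p -a <address> in WinDbg should then print the allocating call stack instead of just the heap entry.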
I know nothing about Windows, but at least on Unix systems a debugger (like gdb on Linux) is useful for understanding call stacks.
And you could also circumvent some of your issues by using e.g. Boehm's conservative garbage collector. On many systems you can also hunt memory leaks with the help of Valgrind.
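On Linux, a typical Valgrind invocation for leak hunting might look like this (the program name is a placeholder):

valgrind --leak-check=full ./my_program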
Consider the cache system with the following properties:
Cache (direct mapped cache):
- Cache size 128 bytes, block size 16 bytes (2^4 bytes)
- Tag/Valid bits for cache blocks are as follows:
Block index - 0 1 2 3 4 5 6 7
Tag - 0 6 7 0 5 3 1 3
Valid - 1 0 0 1 0 0 0 1
Find the tag, block index, block offset, and cache hit/miss for the memory addresses 0x7f6 and 0x133.
I am not sure how to solve this.
Since the cache size is 128 bytes and the block size is 16 bytes, the cache has 128/16 = 8 blocks, so the block index takes 3 bits.
Since the block size is 16 bytes, the block offset takes 4 bits.
The address 0x7f6 has 12 bits: 0111 1111 0110:
Offset = 0110 = 6
Index = 111 = 7
Tag = 01111 = 0xF
Block index 7 holds tag 3 with the valid bit set, so the tag does not match (0xF != 3): miss.
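The same bit-slicing applied to both addresses, as a small C sketch (it hard-codes the tag/valid contents from the question and assumes the 5-bit tag / 3-bit index / 4-bit offset split derived above):

#include <stdio.h>

int main(void) {
    unsigned tags[8]  = {0, 6, 7, 0, 5, 3, 1, 3};   /* tag per block index */
    unsigned valid[8] = {1, 0, 0, 1, 0, 0, 0, 1};   /* valid bit per block index */
    unsigned addrs[2] = {0x7f6, 0x133};

    for (int i = 0; i < 2; i++) {
        unsigned a = addrs[i];
        unsigned offset = a & 0xF;        /* low 4 bits: byte within the 16-byte block */
        unsigned index  = (a >> 4) & 0x7; /* next 3 bits: one of the 8 blocks */
        unsigned tag    = a >> 7;         /* remaining high bits */
        int hit = valid[index] && tags[index] == tag;
        printf("0x%03x -> tag=0x%x, index=%u, offset=%u: %s\n",
               a, tag, index, offset, hit ? "hit" : "miss");
    }
    return 0;
}

This prints a miss for both addresses: 0x7f6 maps to block 7 (stored tag 3, looking for 0xF) and 0x133 maps to block 3 (stored tag 0, looking for 0x2).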
I can see a very high percentage of stolen time on an EC2 web server (t2.micro) without any load (one current user), along with high page load times. Is there a correlation between the high load time and the high stolen time? I have the same symptoms with another server from the t2.medium class.
Do you have an explanation?
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
2 0 0 79824 7428 479172 0 0 0 0 52 49 18 0 0 0 82
1 0 0 79792 7436 479172 0 0 0 6 54 49 18 0 0 0 82
1 0 0 79824 7444 479172 0 0 0 5 54 51 18 0 0 0 82
We had a system with a 3-node Cassandra 2.0.6 ring. Over time, the application load on that system increased to the point where the ring could not handle it anymore, causing the typical node-overload failures.
We doubled the size of the ring, and recently added one more node, to try to handle the load, but there are still only 3 nodes taking all the load, and they are not the original 3 nodes of the initial ring.
We followed the bootstrap + cleanup process described in the adding-nodes guide. We also tried repairs on each node after not seeing much improvement in the ring load. Our load is 99.99% writes on this system.
Here's a chart of the cluster load illustrating the issue:
The highest-load tables have a high cardinality on the partition key, which I'd expect to distribute well over vnodes.
Edit: nodetool status output
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN x.y.z.92 56.83 GB 256 13.8% x-y-z-b53e8ab55e0a rack1
UN x.y.z.253 136.87 GB 256 15.2% x-y-z-bd3cf08449c8 rack1
UN x.y.z.70 69.84 GB 256 14.2% x-y-z-39e63dd017cd rack1
UN x.y.z.251 74.03 GB 256 14.4% x-y-z-36a6c8e4a8e8 rack1
UN x.y.z.240 51.77 GB 256 13.0% x-y-z-ea239f65794d rack1
UN x.y.z.189 128.49 GB 256 14.3% x-y-z-7c36c93e0022 rack1
UN x.y.z.99 53.65 GB 256 15.2% x-y-z-746477dc5db9 rack1
Edit: tpstats (from a highly loaded node)
Pool Name Active Pending Completed Blocked All time blocked
ReadStage 0 0 11591287 0 0
RequestResponseStage 0 0 283211224 0 0
MutationStage 32 405875 349531549 0 0
ReadRepairStage 0 0 3591 0 0
ReplicateOnWriteStage 0 0 0 0 0
GossipStage 0 0 3246983 0 0
AntiEntropyStage 0 0 72055 0 0
MigrationStage 0 0 133 0 0
MemoryMeter 0 0 205 0 0
MemtablePostFlusher 0 0 94915 0 0
FlushWriter 0 0 12521 0 0
MiscStage 0 0 34680 0 0
PendingRangeCalculator 0 0 14 0 0
commitlog_archiver 0 0 0 0 0
AntiEntropySessions 1 1 1 0 0
InternalResponseStage 0 0 30 0 0
HintedHandoff 0 0 1957 0 0
Message type Dropped
RANGE_SLICE 0
READ_REPAIR 196
PAGED_RANGE 0
BINARY 0
READ 0
MUTATION 31663792
_TRACE 24409
REQUEST_RESPONSE 4
COUNTER_MUTATION 0
How could I further troubleshoot this issue?
You need to run nodetool cleanup on the previous nodes that were part of the ring. Nodetool cleanup will remove the partition keys that the node no longer owns.
It seems that after the addition of the new nodes, those keys have not been deleted, which causes the load to remain higher on the previous nodes.
Try running
nodetool cleanup
on the previous nodes
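As a sketch of the suggested procedure (run on each of the original nodes, one at a time, since cleanup is I/O intensive; the keyspace name is a placeholder):

nodetool cleanup
nodetool status my_keyspace     afterwards, re-check per-node load and effective ownership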
In my code I am using an external C library, and the library calls madvise with the MADV_SEQUENTIAL option, which takes too long to finish. In my opinion, calling madvise with MADV_SEQUENTIAL alone would be enough for our job. My first question: why are multiple madvise system calls made; is there a logic to calling madvise with different options sequentially? My second question: do you have any idea why madvise with MADV_SEQUENTIAL takes so long, sometimes about 1-2 minutes?
[root@mymachine ~]# strace -ttT my_compiled_code
...
13:11:35.358982 open("/some/big/file", O_RDONLY) = 8 <0.000010>
13:11:35.359060 fstat64(8, {st_mode=S_IFREG|0644, st_size=953360384, ...}) = 0 <0.000006>
13:11:35.359155 mmap2(NULL, 1073741824, PROT_READ, MAP_SHARED, 8, 0) = 0x7755e000 <0.000007>
13:11:35.359223 madvise(0x7755e000, 1073741824, MADV_NORMAL) = 0 <0.000006>
13:11:35.359266 madvise(0x7755e000, 1073741824, MADV_RANDOM) = 0 <0.000006>
13:11:35.359886 madvise(0x7755e000, 1073741824, MADV_SEQUENTIAL) = 0 <0.000006>
13:11:53.730549 madvise(0x7755e000, 1073741824, MADV_RANDOM) = 0 <0.000013>
...
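For reference, the mmap + madvise pattern in the trace corresponds roughly to the following C sketch (file name and error handling are placeholders, not the library's actual code):

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("/some/big/file", O_RDONLY);
    struct stat st;
    fstat(fd, &st);

    /* Map the file read-only and shared, as in the mmap2 call above. */
    void *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);

    /* MADV_SEQUENTIAL hints that the mapping will be read front to back,
       so the kernel may read ahead aggressively and drop pages already read. */
    madvise(p, st.st_size, MADV_SEQUENTIAL);

    /* ... sequential reads over the mapping ... */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}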
I am using a 32-bit Linux kernel: 3.4.52-9
[root@mymachine ~]# free -lk
total used free shared buffers cached
Mem: 4034412 3419344 615068 0 55712 767824
Low: 853572 495436 358136
High: 3180840 2923908 256932
-/+ buffers/cache: 2595808 1438604
Swap: 4192960 218624 3974336
[root@mymachine ~]# cat /proc/buddyinfo
Node 0, zone DMA 89 23 9 4 5 4 4 1 0 2 0
Node 0, zone Normal 9615 7099 3997 1723 931 397 78 0 0 1 1
Node 0, zone HighMem 7313 8089 2187 420 206 92 41 15 8 3 6
On Windows I can get the peak memory usage by calling GetProcessMemoryInfo:
function TProcess.Peek: Cardinal;
var
PMC: PPROCESS_MEMORY_COUNTERS;
PMCSize: Cardinal;
begin
PMCSize := SizeOf(PROCESS_MEMORY_COUNTERS);
GetMem(PMC, PMCSize);
try
PMC^.cb := PMCSize;
if GetProcessMemoryInfo(FHandle, PMC, PMCSize) then
Exit(PMC^.PeakWorkingSetSize)
else
Exit(0);
finally
FreeMem(PMC);
end;
end;
What is the Mac OS equivalent of this?
You can use /usr/bin/time -l <cmd> like this:
/usr/bin/time -l sleep 3
3.00 real 0.00 user 0.00 sys
552960 maximum resident set size <--- this one (in bytes)
0 average shared memory size
0 average unshared data size
0 average unshared stack size
144 page reclaims
0 page faults
0 swaps
0 block input operations
0 block output operations
0 messages sent
0 messages received
0 signals received
0 voluntary context switches
2 involuntary context switches
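If you need the number programmatically from inside the process (closer to what GetProcessMemoryInfo gives you), a minimal C sketch using getrusage should work on Mac OS, where ru_maxrss is reported in bytes (on Linux it is in kilobytes):

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        /* Peak resident set size of the calling process. */
        printf("peak RSS: %ld bytes\n", (long)ru.ru_maxrss);
    }
    return 0;
}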