Correlation between throughput and latency when benchmarking with YCSB

I'm using YCSB to benchmark a number of different NoSQL databases. However, when playing around with the number of client threads, I have a hard time interpreting the throughput vs. latency results.
For example, when benchmarking Cassandra running workload A (50/50 reads and updates) with 16 client threads, the following command is executed:
bin/ycsb run cassandra-cql -p hosts=xx.xx.xx.xx -p recordcount=525600 -p operationcount=525600 -threads 16 -P workloads/workloada -s > workloada_525600_16_threads_run_res.txt
which gives the following output:
[OVERALL], RunTime(ms), 62751
[OVERALL], Throughput(ops/sec), 8375.962136061577
[TOTAL_GCS_PS_Scavenge], Count, 64
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 289
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.46055042947522745
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 64
[TOTAL_GC_TIME], Time(ms), 289
[TOTAL_GC_TIME_%], Time(%), 0.46055042947522745
[READ], Operations, 262650
[READ], AverageLatency(us), 1844.6075042832667
[READ], MinLatency(us), 290
[READ], MaxLatency(us), 116159
[READ], 95thPercentileLatency(us), 3081
[READ], 99thPercentileLatency(us), 7551
[READ], Return=OK, 262650
[CLEANUP], Operations, 16
[CLEANUP], AverageLatency(us), 139458.5
[CLEANUP], MinLatency(us), 1
[CLEANUP], MaxLatency(us), 2232319
[CLEANUP], 95thPercentileLatency(us), 19
[CLEANUP], 99thPercentileLatency(us), 2232319
[UPDATE], Operations, 262950
[UPDATE], AverageLatency(us), 1764.8220193953223
[UPDATE], MinLatency(us), 208
[UPDATE], MaxLatency(us), 95807
[UPDATE], 95thPercentileLatency(us), 2901
[UPDATE], 99thPercentileLatency(us), 7031
[UPDATE], Return=OK, 262950
Running the same operation with 32 threads I get:
[OVERALL], RunTime(ms), 51785
[OVERALL], Throughput(ops/sec), 10149.65723665154
[TOTAL_GCS_PS_Scavenge], Count, 124
[TOTAL_GC_TIME_PS_Scavenge], Time(ms), 310
[TOTAL_GC_TIME_%_PS_Scavenge], Time(%), 0.5986289466061601
[TOTAL_GCS_PS_MarkSweep], Count, 0
[TOTAL_GC_TIME_PS_MarkSweep], Time(ms), 0
[TOTAL_GC_TIME_%_PS_MarkSweep], Time(%), 0.0
[TOTAL_GCs], Count, 124
[TOTAL_GC_TIME], Time(ms), 310
[TOTAL_GC_TIME_%], Time(%), 0.5986289466061601
[READ], Operations, 262848
[READ], AverageLatency(us), 2947.844628834916
[READ], MinLatency(us), 363
[READ], MaxLatency(us), 194559
[READ], 95thPercentileLatency(us), 5079
[READ], 99thPercentileLatency(us), 11055
[READ], Return=OK, 262848
[CLEANUP], Operations, 32
[CLEANUP], AverageLatency(us), 69601.5625
[CLEANUP], MinLatency(us), 1
[CLEANUP], MaxLatency(us), 2228223
[CLEANUP], 95thPercentileLatency(us), 3
[CLEANUP], 99thPercentileLatency(us), 2228223
[UPDATE], Operations, 262752
[UPDATE], AverageLatency(us), 2881.930485781269
[UPDATE], MinLatency(us), 316
[UPDATE], MaxLatency(us), 203391
[UPDATE], 95thPercentileLatency(us), 4987
[UPDATE], 99thPercentileLatency(us), 10711
[UPDATE], Return=OK, 262752
The overall runtime is lower and thus the throughput is higher, but the latencies are higher as well.
I'm not quite sure how to interpret these results. How would you find the "appropriate" number of client threads to run?

In order to have a qualified benchmark, you should first define the SLA requirements you want your system to achieve.
Say your workload pattern is 50/50 write/read and your SLA requirements are 10K ops/sec throughput with a 99th percentile latency below 10 ms. Use the YCSB -target flag to generate the needed throughput, and try various thread counts to see which one meets your SLA.
It makes a lot of sense that when more threads are used the throughput increases (more ops/sec), but that comes at a latency price.
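As a rough illustration of such a sweep (a sketch only, not a YCSB feature: the host, record counts, thread values and SLA numbers below are placeholders modeled on the question, and the parsing assumes the report format shown above):

import re
import subprocess

TARGET_OPS = 10000   # SLA: target throughput in ops/sec (placeholder)
SLA_P99_US = 10000   # SLA: 99th percentile latency < 10 ms, in microseconds (placeholder)

for threads in (8, 16, 32, 64):
    out_file = "workloada_target%d_%d_threads.txt" % (TARGET_OPS, threads)
    cmd = ("bin/ycsb run cassandra-cql -p hosts=xx.xx.xx.xx "
           "-p recordcount=525600 -p operationcount=525600 "
           "-target %d -threads %d -P workloads/workloada -s"
           % (TARGET_OPS, threads))
    with open(out_file, "w") as f:
        subprocess.call(cmd.split(), stdout=f)

    # Pull the READ/UPDATE 99th-percentile latencies out of the report.
    p99 = {}
    for line in open(out_file):
        m = re.match(r"\[(READ|UPDATE)\], 99thPercentileLatency\(us\), ([\d.]+)", line)
        if m:
            p99[m.group(1)] = float(m.group(2))
    meets = p99 and all(v < SLA_P99_US for v in p99.values())
    print("%d threads: %s -> %s" % (threads, p99, "meets SLA" if meets else "violates SLA"))

Whichever thread count sustains the target throughput while staying under the latency SLA is the "appropriate" one for that workload.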
You should look into the relevant database metrics to try and find your bottleneck - it can be the:
Client (need a stronger client, or better parallelism using less threads but more clients)
Network
DB server (Disk / RAM - use a stronger instance).
You can read more about the do's and don'ts of DB benchmarking here

Related

Which field in Apache Bench is the Response-Time?

I'm trying to interpret the fields of the Apache Bench output, but I can't work out which field indicates the response time. Can you help me find it?
Document Path: /
Document Length: 45563 bytes
Concurrency Level: 2
Time taken for tests: 3.955 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 4625489 bytes
HTML transferred: 4556300 bytes
Requests per second: 25.29 [#/sec] (mean)
Time per request: 79.094 [ms] (mean)
Time per request: 39.547 [ms] (mean, across all concurrent requests)
Transfer rate: 1142.21 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd]  median    max
Connect:       40    53    8.1      51     99
Processing:    12    24    9.4      23     98
Waiting:        5    14   10.6      12     95
Total:         57    77   15.0      75    197
Percentage of the requests served within a certain time (ms)
50% 75
66% 77
75% 80
80% 81
90% 85
95% 92
98% 116
99% 197
100% 197 (longest request)

Aerospike - No improvements in latency on moving to in-memory cluster from on-disk cluster

To begin with, we had an Aerospike cluster of 5 i2.2xlarge nodes in AWS, which our production fleet of around 200 servers was using to store/retrieve data. The Aerospike config of the cluster was as follows -
service {
user root
group root
paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
pidfile /var/run/aerospike/asd.pid
service-threads 8
transaction-queues 8
transaction-threads-per-queue 4
fabric-workers 8
transaction-pending-limit 100
proto-fd-max 25000
}
logging {
# Log file must be an absolute path.
file /var/log/aerospike/aerospike.log {
context any info
}
}
network {
service {
address any
port 3000
}
heartbeat {
mode mesh
port 3002 # Heartbeat port for this node.
# List one or more other nodes, one ip-address & port per line:
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
# mesh-seed-address-port <IP> 3002
interval 250
timeout 10
}
fabric {
port 3001
}
info {
port 3003
}
}
namespace FC {
replication-factor 2
memory-size 7G
default-ttl 30d # 30 days, use 0 to never expire/evict.
high-water-disk-pct 80 # How full may the disk become before the server begins eviction
high-water-memory-pct 70 # Evict non-zero TTL data if capacity exceeds # 70% of 15GB
stop-writes-pct 90 # Stop writes if capacity exceeds 90% of 15GB
storage-engine device {
device /dev/xvdb1
write-block-size 256K
}
}
It was properly handling the traffic corresponding to the namespace "FC", with latencies within 14 ms, as shown in the following graph plotted using graphite -
However, on turning on another namespace with much higher traffic on the same cluster, it started to give a lot of timeouts and higher latencies as we scaled up the number of servers using the same 5-node cluster (increasing the number of servers step by step from 20 to 40 to 60), with the following namespace configuration -
namespace HEAVYNAMESPACE {
replication-factor 2
memory-size 35G
default-ttl 30d # 30 days, use 0 to never expire/evict.
high-water-disk-pct 80 # How full may the disk become before the server begins eviction
high-water-memory-pct 70 # Evict non-zero TTL data if capacity exceeds # 70% of 35GB
stop-writes-pct 90 # Stop writes if capacity exceeds 90% of 35GB
storage-engine device {
device /dev/xvdb8
write-block-size 256K
}
}
Following were the observations -
----FC Namespace----
20 - servers, 6k Write TPS, 16K Read TPS
set latency = 10ms
set timeouts = 1
get latency = 15ms
get timeouts = 3
40 - servers, 12k Write TPS, 17K Read TPS
set latency = 12ms
set timeouts = 1
get latency = 20ms
get timeouts = 5
60 - servers, 17k Write TPS, 18K Read TPS
set latency = 25ms
set timeouts = 5
get latency = 30ms
get timeouts = 10-50 (fluctuating)
----HEAVYNAMESPACE----
20 - del servers, 6k Write TPS, 16K Read TPS
set latency = 7ms
set timeouts = 1
get latency = 5ms
get timeouts = 0
no of keys = 47 million x 2
disk usage = 121 gb
ram usage = 5.62 gb
40 - del servers, 12k Write TPS, 17K Read TPS
set latency = 15ms
set timeouts = 5
get latency = 12ms
get timeouts = 2
60 - del servers, 17k Write TPS, 18K Read TPS
set latency = 25ms
set timeouts = 25-75 (fluctuating)
get latency = 25ms
get timeouts = 2-15 (fluctuating)
* Set latency refers to the latency of setting Aerospike cache keys; get latency, similarly, refers to getting keys.
We had to turn off the namespace "HEAVYNAMESPACE" after reaching 60 servers.
We then started a fresh POC with a cluster of r3.4xlarge AWS instances (details here: https://aws.amazon.com/ec2/instance-types/), the key difference in the Aerospike configuration being the use of memory only for caching, hoping it would give better performance. Here is the aerospike.conf file -
service {
user root
group root
paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
pidfile /var/run/aerospike/asd.pid
service-threads 16
transaction-queues 16
transaction-threads-per-queue 4
proto-fd-max 15000
}
logging {
# Log file must be an absolute path.
file /var/log/aerospike/aerospike.log {
context any info
}
}
network {
service {
address any
port 3000
}
heartbeat {
mode mesh
port 3002 # Heartbeat port for this node.
# List one or more other nodes, one ip-address & port per line:
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
mesh-seed-address-port <IP> 3002
interval 250
timeout 10
}
fabric {
port 3001
}
info {
port 3003
}
}
namespace FC {
replication-factor 2
memory-size 30G
storage-engine memory
default-ttl 30d # 30 days, use 0 to never expire/evict.
high-water-memory-pct 80 # Evict non-zero TTL data if capacity exceeds # 70% of 15GB
stop-writes-pct 90 # Stop writes if capacity exceeds 90% of 15GB
}
We began with the FC namespace only, and decided to go ahead with the HEAVYNAMESPACE only if we saw significant improvements with the FC namespace, but we didn't. Here are the current observations with different combinations of node count and server count -
Current stats
Point 1 - 4 nodes serving 130 servers.
Point 2 - 5 nodes serving 80 servers.
Point 3 - 5 nodes serving 100 servers.
These observation points are highlighted in the graphs below -
Get latency -
Set successes (giving a measure of the load handled by the cluster) -
We also observed that -
Total memory usage across cluster is 5.52 GB of 144 GB. Node-wise memory usage is ~ 1.10 GB out of 28.90 GB.
There were no observed write failures yet.
There were occasional get/set timeouts which looked fine.
No evicted objects.
Conclusion
We are not seeing the improvements we had expected from the memory-only configuration. We would like some pointers on how to scale up at the same cost -
- by tweaking the Aerospike configuration
- or by using a more suitable AWS instance type (even if that would lead to cost cutting).
Update
Output of the top command on one of the Aerospike servers, showing si (software interrupts), as pointed out by @Sunil in his answer -
$ top
top - 08:02:21 up 188 days, 48 min, 1 user, load average: 0.07, 0.07, 0.02
Tasks: 179 total, 1 running, 178 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3%us, 0.1%sy, 0.0%ni, 99.4%id, 0.0%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 125904196k total, 2726964k used, 123177232k free, 148612k buffers
Swap: 0k total, 0k used, 0k free, 445968k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
63421 root 20 0 5217m 1.6g 4340 S 6.3 1.3 461:08.83 asd
If I am not wrong, the si value appears to be 0.2%. I checked the same on all the nodes of the cluster; it is 0.2% on one node and 0.1% on the other three.
Also, here is the output of the network stats on the same node -
$ sar -n DEV 10 10
Linux 4.4.30-32.54.amzn1.x86_64 (ip-10-111-215-72) 07/10/17 _x86_64_ (16 CPU)
08:09:16 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:09:26 lo 12.20 12.20 5.61 5.61 0.00 0.00 0.00 0.00
08:09:26 eth0 2763.60 1471.60 299.24 233.08 0.00 0.00 0.00 0.00
08:09:26 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:09:36 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:09:36 eth0 2772.60 1474.50 300.08 233.48 0.00 0.00 0.00 0.00
08:09:36 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:09:46 lo 17.90 17.90 15.21 15.21 0.00 0.00 0.00 0.00
08:09:46 eth0 2802.80 1491.90 304.63 245.33 0.00 0.00 0.00 0.00
08:09:46 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:09:56 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:09:56 eth0 2805.20 1494.30 304.37 237.51 0.00 0.00 0.00 0.00
08:09:56 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:06 lo 9.40 9.40 5.05 5.05 0.00 0.00 0.00 0.00
08:10:06 eth0 3144.10 1702.30 342.54 255.34 0.00 0.00 0.00 0.00
08:10:06 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:16 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:10:16 eth0 2862.70 1522.20 310.15 238.32 0.00 0.00 0.00 0.00
08:10:16 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:26 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:10:26 eth0 2738.40 1453.80 295.85 231.47 0.00 0.00 0.00 0.00
08:10:26 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:36 lo 11.79 11.79 5.59 5.59 0.00 0.00 0.00 0.00
08:10:36 eth0 2758.14 1464.14 297.59 231.47 0.00 0.00 0.00 0.00
08:10:36 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:46 lo 12.00 12.00 5.60 5.60 0.00 0.00 0.00 0.00
08:10:46 eth0 3100.40 1811.30 328.31 289.92 0.00 0.00 0.00 0.00
08:10:46 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:10:56 lo 9.40 9.40 5.05 5.05 0.00 0.00 0.00 0.00
08:10:56 eth0 2753.40 1460.80 297.15 231.98 0.00 0.00 0.00 0.00
Average: IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
Average: lo 12.07 12.07 6.45 6.45 0.00 0.00 0.00 0.00
Average: eth0 2850.12 1534.68 307.99 242.79 0.00 0.00 0.00 0.00
From the above, the total number of packets handled per second should be 2850.12 + 1534.68 = 4384.8 (sum of rxpck/s and txpck/s), which is well within the 250K packets per second mentioned in the Amazon EC2 deployment guide on the Aerospike site, referred to in @RonenBotzer's answer.
Update 2
I ran the asadm command followed by show latency on one of the nodes of the cluster and from the output, it appears that there is no latency beyond 1 ms for both reads and writes -
Admin> show latency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~read Latency~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node Time Ops/Sec >1Ms >8Ms >64Ms
. Span . . . .
ip-10-111-215-72.ec2.internal:3000 11:35:01->11:35:11 1242.1 0.0 0.0 0.0
ip-10-13-215-20.ec2.internal:3000 11:34:57->11:35:07 1297.5 0.0 0.0 0.0
ip-10-150-147-167.ec2.internal:3000 11:35:04->11:35:14 1147.7 0.0 0.0 0.0
ip-10-165-168-246.ec2.internal:3000 11:34:59->11:35:09 1342.2 0.0 0.0 0.0
ip-10-233-158-213.ec2.internal:3000 11:35:00->11:35:10 1218.0 0.0 0.0 0.0
Number of rows: 5
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~write Latency~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Node Time Ops/Sec >1Ms >8Ms >64Ms
. Span . . . .
ip-10-111-215-72.ec2.internal:3000 11:35:01->11:35:11 33.0 0.0 0.0 0.0
ip-10-13-215-20.ec2.internal:3000 11:34:57->11:35:07 37.2 0.0 0.0 0.0
ip-10-150-147-167.ec2.internal:3000 11:35:04->11:35:14 36.4 0.0 0.0 0.0
ip-10-165-168-246.ec2.internal:3000 11:34:59->11:35:09 36.9 0.0 0.0 0.0
ip-10-233-158-213.ec2.internal:3000 11:35:00->11:35:10 33.9 0.0 0.0 0.0
Number of rows: 5
Aerospike has several modes for storage that you can configure:
Data in memory with no persistence
Data in memory, persisted to disk
Data on SSD, primary index in memory (AKA Hybrid Memory architecture)
In-Memory Optimizations
Release 3.11 and release 3.12 of Aerospike include several big performance improvements for in-memory namespaces.
Among these are a change to how partitions are represented, from a single red-black tree to sprigs (many sub-trees). The new config parameters partition-tree-sprigs and partition-tree-locks should be used appropriately. In your case, as r3.4xlarge instances have 122G of DRAM, you can afford the 311M of overhead associated with setting partition-tree-sprigs to the max value of 4096.
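For illustration only, this is roughly where those parameters would sit in the namespace stanza (a hypothetical sketch based on the memory-only config above; check the configuration reference for your release before applying, and note that the partition-tree-locks value is not specified in this answer):

namespace FC {
    replication-factor 2
    memory-size 30G
    storage-engine memory
    default-ttl 30d
    # Many sub-trees (sprigs) per partition instead of a single red-black tree;
    # 4096 is the maximum, with the ~311M of overhead mentioned above.
    partition-tree-sprigs 4096
    # partition-tree-locks should be tuned alongside it.
}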
You should also consider the auto-pin=cpu setting. This option requires a Linux kernel >= 3.19, which ships with Ubuntu >= 15.04 (but not many other distributions yet).
Clustering Improvements
The recent releases 3.13 and 3.14 include a rewrite of the cluster manager. In general you should consider using the latest version, but I'm pointing out the aspects that will directly affect your performance.
EC2 Networking and Aerospike
You don't show the latency numbers of the cluster itself, so I suspect the problem is with the networking, rather than the nodes.
Older instance family types, such as the r3, c3, i2, come with ENIs - NICs which have a single transmit/receive queue. The software interrupts of cores accessing this queue may become a bottleneck as the number of CPUs increases, all of which need to wait for their turn to use the NIC. There's a knowledge base article in the Aerospike community discussion forum on using multiple ENIs with Aerospike to get around the limited performance capacity of the single ENI you initially get with such an instance. The Amazon EC2 deployment guide on the Aerospike site talks about using RPS to maximize TPS when you're in an instance that uses ENIs.
Alternatively, you should consider moving to the newer instances (r4, i3, etc) which come with multiqueue ENAs. These do not require RPS, and support higher TPS without adding extra cards. They also happen to have better chipsets, and cost significantly less than their older siblings (r4 is roughly 30% cheaper than r3, i3 is about 1/3 the price of the i2).
Your title is misleading. Please consider changing it. You moved from on-disk to in-memory.
mem+disk means data is both on disk and mem (using data-in-memory=true).
My best guess is that one CPU is bottlenecked doing network I/O.
Take a look at the top output and check the si (software interrupts) column.
If one CPU is showing a much higher value than the others,
the simplest thing you can try is RPS (Receive Packet Steering):
echo f|sudo tee /sys/class/net/eth0/queues/rx-0/rps_cpus
(the value f is a hexadecimal CPU bitmask - 0xf lets CPUs 0-3 process packets for that receive queue).
Once you confirm that it is a network bottleneck,
you can try ENA as suggested by @Ronen.
Going into details:
When you had 15 ms latency with only FC, I assume the TPS was low.
But when you added the heavy load of HEAVYNAMESPACE in production,
the latency kept increasing as you added more client nodes, and hence more TPS.
Similarly, in your POC the latency increased with the number of client nodes.
The latency is under 15 ms even with 130 servers, which is partly good.
I am not sure I understood your set_success graph; I assume it is in kTPS.
Update:
After looking at the server-side latency histogram, it looks like the server is doing fine.
Most likely it is a client issue. Check CPU and network on the client machine(s).

Generate random binaries with avg hamming distance of 50%?

I want to generate binary vectors where the average Hamming distance between the items is around 50%.
The second condition is that the distance should not fall below ~40% or go above ~60%.
Another complication is that the items are not generated all at once but one at a time, every once in a while, and I don't want to loop over all of the existing items to check and regenerate, because that would become slow after a while.
Is there a mechanism or algorithm to achieve this?
Currently I use the following code:
import numpy as np

def rand(size):
    # Draw a random bit density, then sample each bit with that density.
    op = np.random.uniform()
    return np.random.choice([0, 1], size=size, p=[op, 1 - op])
but it breaks even when I generate only 10 items, e.g. (pairwise Hamming distances):
[ [ 0 2510 8209 4305 3896 1619 7231 6356 8103 3265]
[2510 0 8131 4347 3940 1697 7219 6334 8037 3305]
[8209 8131 0 5858 6449 9312 2100 3317 1030 7196]
[4305 4347 5858 0 4661 4088 5598 5311 5764 4590]
[3896 3940 6449 4661 0 3485 6093 5650 6385 4251]
[1619 1697 9312 4088 3485 0 8034 6739 9152 2716]
[7231 7219 2100 5598 6093 8034 0 3831 2238 6510]
[6356 6334 3317 5311 5650 6739 3831 0 3405 5933]
[8103 8037 1030 5764 6385 9152 2238 3405 0 7112]
[3265 3305 7196 4590 4251 2716 6510 5933 7112 0]]
min: 1030
Avg distance : 0.470624%
By the way, the binaries are 10,000 bits long.
So far the following solution seems to behave according to my expectations:
import numpy as np

def rand(size):
    # One unbiased coin flip per bit.
    return [np.random.randint(0, 2) for _ in xrange(size)]

I will update when I have done more extensive tests, if it holds up.
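For what it's worth, the original rand() draws a fresh bit density op for every item, so two items generated with densities p1 and p2 differ in roughly p1*(1-p2) + (1-p1)*p2 of their bits, which can land far from 50%. With a fixed density of 0.5 the expected pairwise distance is exactly 50% and, for 10,000-bit items, it concentrates very tightly (standard deviation around 0.5%), so the 40%-60% band is satisfied without ever re-checking old items. A minimal sketch (assuming NumPy) that verifies this:

import numpy as np

N_ITEMS, N_BITS = 10, 10000

def rand(size):
    # Unbiased bits: each bit is 0 or 1 with probability 0.5.
    return np.random.randint(0, 2, size=size)

items = np.array([rand(N_BITS) for _ in range(N_ITEMS)])

# Pairwise normalized Hamming distances (fraction of differing bits).
dists = [np.mean(items[i] != items[j])
         for i in range(N_ITEMS) for j in range(i + 1, N_ITEMS)]

print("min %.3f  avg %.3f  max %.3f" % (min(dists), np.mean(dists), max(dists)))
# Expected output: all three values very close to 0.500.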

How to calculate Total average response time

Below are the results
sampler_label  count  average  median  90%_line    min    max
Transaction1       2    61774   61627     61921  61627  61921
Transaction2       4       82      61       190     15    190
Transaction3       4     1862    1317      3612   1141   3612
Transaction4       4     1242     915      1602    911   1602
Transaction5       4      692     608       906    423    906
Transaction6       4     2764    2122      4748   1182   4748
Transaction7       4     9369    9029     11337   7198  11337
Transaction8       4     1245     890      2168    834   2168
Transaction9       4     3475    2678      4586   2520   4586
TOTAL             34     6073    1381      9913     15  61921
My question here is: how is the total average response time (6073) calculated?
In my results I want to exclude Transaction1's response time and then calculate the total average response time.
How can I do that?
Total Avg Response time = ((s1*t1) + (s2*t2)...)/s
s1 = No of times transaction 1 was executed
t1 = Avg response time for transaction 1
s2 = No of times transaction 2 was executed
t2 = Avg response time for transaction 2
s = Total no of samples (s1+s2..)
In your case, all transactions except Transaction1 were executed 4 times each, so a simple average of their average response times (82, 1862, 1242, ...) gives the result you want.
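As a quick sanity check (a throwaway snippet, using only the counts and averages from the table above), the weighted formula reproduces the 6073 total, and dropping Transaction1 gives the figure you are after:

# (count, average) pairs taken from the results table above.
samples = {
    "Transaction1": (2, 61774),
    "Transaction2": (4, 82),
    "Transaction3": (4, 1862),
    "Transaction4": (4, 1242),
    "Transaction5": (4, 692),
    "Transaction6": (4, 2764),
    "Transaction7": (4, 9369),
    "Transaction8": (4, 1245),
    "Transaction9": (4, 3475),
}

def weighted_avg(rows):
    total_time = sum(count * avg for count, avg in rows)
    total_count = sum(count for count, _ in rows)
    return total_time / float(total_count)

print(weighted_avg(list(samples.values())))    # ~6073, matches the TOTAL row
without_t1 = [v for k, v in samples.items() if k != "Transaction1"]
print(weighted_avg(without_t1))                # ~2591, total average excluding Transaction1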

windbg memory leak investigation - missing heap memory

I am investigating a slow memory leak in a Windows application using WinDbg.
!heap -s gives the following output:
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-------------------------------------------------------------------------------------
00000023d62c0000 08000002 1182680 1169996 1181900 15759 2769 78 3 2b63 LFH
00000023d4830000 08008000 64 4 64 2 1 1 0 0
00000023d6290000 08001002 1860 404 1080 43 7 2 0 0 LFH
00000023d6dd0000 08001002 32828 32768 32828 32765 33 1 0 0
External fragmentation 99 % (33 free blocks)
00000023d8fb0000 08001000 16384 2420 16384 2412 5 5 0 3355
External fragmentation 99 % (5 free blocks)
00000023da780000 08001002 60 8 60 5 2 1 0 0
-------------------------------------------------------------------------------------
This shows that the heap with address 00000023d62c0000 has over a gigabyte of reserved memory.
Next I ran the command !heap -stat -h 00000023d62c0000
heap # 00000023d62c0000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
30 19b1 - 4d130 (13.81)
20 1d72 - 3ae40 (10.55)
ccf 40 - 333c0 (9.18)
478 8c - 271a0 (7.01)
27158 1 - 27158 (7.00)
40 80f - 203c0 (5.78)
410 79 - 1eb90 (5.50)
68 43a - 1b790 (4.92)
16000 1 - 16000 (3.94)
50 39e - 12160 (3.24)
11000 1 - 11000 (3.05)
308 54 - fea0 (2.85)
60 28e - f540 (2.75)
8018 1 - 8018 (1.43)
80 f2 - 7900 (1.36)
1000 5 - 5000 (0.90)
70 ac - 4b40 (0.84)
4048 1 - 4048 (0.72)
100 3e - 3e00 (0.69)
48 c9 - 3888 (0.63)
If I add up the total size of the heap blocks from the above command (4d130 + 3ae40 + ...) I get a few megabytes of allocated memory.
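For reference, summing those hexadecimal totals (a quick Python sketch, purely for the arithmetic) confirms how small that figure is compared to the reserved gigabyte:

# Sum the "total" column of the !heap -stat -h output above (hex byte counts).
totals = [
    "4d130", "3ae40", "333c0", "271a0", "27158", "203c0", "1eb90",
    "1b790", "16000", "12160", "11000", "fea0", "f540", "8018",
    "7900", "5000", "4b40", "4048", "3e00", "3888",
]
total_bytes = sum(int(t, 16) for t in totals)
print("%d bytes (~%.1f MB)" % (total_bytes, total_bytes / 1e6))   # roughly 2 MB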
Am I missing something here? How can I find which blocks are consuming the gigabyte of allocated heap memory?
I believe that !heap -stat is broken for 64-bit dumps, at least large ones. I have instead used DebugDiag 1.2 for hunting memory leaks in 64-bit processes.