How to analyze <unclassified> memory usage in WinDbg

This is a .NET 4 Windows service running on an x64 machine. After days of running steadily, the service's memory consumption suddenly spikes until it crashes. I was able to catch it at 1.2 GB and capture a memory dump. Here is what I get.
If I run !address -summary in WinDbg on my dump file, I get the following result:
!address -summary
--- Usage Summary ------ RgnCount ------- Total Size -------- %ofBusy %ofTotal
Free 821 7ff`7e834000 ( 7.998 Tb) 99.98%
<unclassified> 3696 0`6eece000 ( 1.733 Gb) 85.67% 0.02%
Image 1851 0`0ea6f000 ( 234.434 Mb) 11.32% 0.00%
Stack 1881 0`03968000 ( 57.406 Mb) 2.77% 0.00%
TEB 628 0`004e8000 ( 4.906 Mb) 0.24% 0.00%
NlsTables 1 0`00023000 ( 140.000 kb) 0.01% 0.00%
ActivationContextData 3 0`00006000 ( 24.000 kb) 0.00% 0.00%
CsrSharedMemory 1 0`00005000 ( 20.000 kb) 0.00% 0.00%
PEB 1 0`00001000 ( 4.000 kb) 0.00% 0.00%
--- Type Summary (for busy) -- RgnCount ----- Total Size ----- %ofBusy %ofTotal
MEM_PRIVATE 5837 0`7115a000 ( 1.767 Gb) 87.34% 0.02%
MEM_IMAGE 2185 0`0f131000 (241.191 Mb) 11.64% 0.00%
MEM_MAPPED 40 0`01531000 ( 21.191 Mb) 1.02% 0.00%
--- State Summary ------------ RgnCount ------ Total Size ---- %ofBusy %ofTotal
MEM_FREE 821 7ff`7e834000 ( 7.998 Tb) 99.98%
MEM_COMMIT 6127 0`4fd5e000 ( 1.247 Gb) 61.66% 0.02%
MEM_RESERVE 1935 0`31a5e000 (794.367 Mb) 38.34% 0.01%
--Protect Summary(for commit)- RgnCount ------ Total Size --- %ofBusy %ofTotal
PAGE_READWRITE 3412 0`3e862000 (1000.383 Mb) 48.29% 0.01%
PAGE_EXECUTE_READ 220 0`0b12f000 ( 177.184 Mb) 8.55% 0.00%
PAGE_READONLY 646 0`02fd0000 ( 47.813 Mb) 2.31% 0.00%
PAGE_WRITECOPY 410 0`01781000 ( 23.504 Mb) 1.13% 0.00%
PAGE_READWRITE|PAGE_GUARD 1224 0`012f2000 ( 18.945 Mb) 0.91% 0.00%
PAGE_EXECUTE_READWRITE 144 0`007b9000 ( 7.723 Mb) 0.37% 0.00%
PAGE_EXECUTE_WRITECOPY 70 0`001cd000 ( 1.801 Mb) 0.09% 0.00%
PAGE_EXECUTE 1 0`00004000 ( 16.000 kb) 0.00% 0.00%
--- Largest Region by Usage ----Base Address -------- Region Size ----------
Free 0`8fff0000 7fe`59050000 ( 7.994 Tb)
<unclassified> 0`80d92000 0`0f25e000 ( 242.367 Mb)
Image fe`f6255000 0`0125a000 ( 18.352 Mb)
Stack 0`014d0000 0`000fc000 (1008.000 kb)
TEB 0`7ffde000 0`00002000 ( 8.000 kb)
NlsTables 7ff`fffb0000 0`00023000 ( 140.000 kb)
ActivationContextData 0`00030000 0`00004000 ( 16.000 kb)
CsrSharedMemory 0`7efe0000 0`00005000 ( 20.000 kb)
PEB 7ff`fffdd000 0`00001000 ( 4.000 kb)
First, why would <unclassified> show up once as 1.73 GB and another time as 242 MB? (This has been answered. Thank you.)
Second, I understand that <unclassified> can mean managed code; however, my heap size according to !eeheap is only 248 MB, which matches the 242 MB but is nowhere near the 1.73 GB. The dump file is 1.2 GB, which is much larger than normal. Where do I go from here to find out what's using all the memory? Everything in the managed-heap world is under 248 MB, yet the process is using 1.2 GB.
Thanks
EDIT
If I do !heap -s I get the following:
LFH Key : 0x000000171fab7f20
Termination on corruption : ENABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-------------------------------------------------------------------------------------
Virtual block: 00000000017e0000 - 00000000017e0000 (size 0000000000000000)
Virtual block: 0000000045bd0000 - 0000000045bd0000 (size 0000000000000000)
Virtual block: 000000006fff0000 - 000000006fff0000 (size 0000000000000000)
0000000000060000 00000002 113024 102028 113024 27343 1542 11 3 1c LFH
External fragmentation 26 % (1542 free blocks)
0000000000010000 00008000 64 4 64 1 1 1 0 0
0000000000480000 00001002 3136 1380 3136 20 8 3 0 0 LFH
0000000000640000 00041002 512 8 512 3 1 1 0 0
0000000000800000 00001002 3136 1412 3136 15 7 3 0 0 LFH
00000000009d0000 00001002 3136 1380 3136 19 7 3 0 0 LFH
00000000008a0000 00041002 512 16 512 3 1 1 0 0
0000000000630000 00001002 7232 3628 7232 18 53 4 0 0 LFH
0000000000da0000 00041002 1536 856 1536 1 1 2 0 0 LFH
0000000000ef0000 00041002 1536 944 1536 4 12 2 0 0 LFH
00000000034b0000 00001002 1536 1452 1536 6 17 2 0 0 LFH
00000000019c0000 00001002 3136 1396 3136 16 6 3 0 0 LFH
0000000003be0000 00001002 1536 1072 1536 5 7 2 0 3 LFH
0000000003dc0000 00011002 512 220 512 100 60 1 0 2
0000000002520000 00001002 512 8 512 3 2 1 0 0
0000000003b60000 00001002 339712 168996 339712 151494 976 116 0 18 LFH
External fragmentation 89 % (976 free blocks)
Virtual address fragmentation 50 % (116 uncommited ranges)
0000000003f20000 00001002 64 8 64 3 1 1 0 0
0000000003d90000 00001002 64 8 64 3 1 1 0 0
0000000003ee0000 00001002 64 16 64 11 1 1 0 0
-------------------------------------------------------------------------------------

I've recently had a very similar situation and found a couple of techniques useful in the investigation. None is a silver bullet, but each sheds a little more light on the problem.
1) vmmap.exe from SysInternals (http://technet.microsoft.com/en-us/sysinternals/dd535533) does a good job of correlating information on native and managed memory and presenting it in a nice UI. The same information can be gathered using the techniques below, but this is far easier and a nice place to start. Sadly, it doesn't work on dump files; you need a live process.
2) The "!address -summary" output is a rollup of the more detailed "!address" output. I found it useful to drop the detailed output into Excel and run some pivots. Using this technique I discovered that a large number of bytes that were listed as "" were actually MEM_IMAGE pages, likely copies of data pages that were loaded when the DLLs were loaded but then copied when the data was changed. I could also filter to large regions and drill in on specific addresses. Poking around in the memory dump with a toothpick and lots of praying is painful, but can be revealing.
3) Finally, I did a poor man's version of the vmmap.exe technique above. I loaded up the dump file, opened a log, and ran !address, !eeheap, !heap, and !threads. I also targeted the thread environment blocks listed in ~*k with !teb. I closed the log file and loaded it into my favorite editor. I could then find an unclassified block and search to see whether it popped up in the output of one of the more detailed commands. You can pretty quickly correlate native and managed heaps to weed those out of your suspect unclassified regions.
These are all way too manual. I'd love to write a script that would take output similar to what I generated in technique 3 above and produce an .mmp file suitable for viewing in vmmap.exe. Some day.
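As a rough illustration of the parsing half of such a script, here is a minimal Python sketch that turns verbose !address log lines into a CSV you can pivot in Excel (technique 2). The exact line format varies between debugger versions, so the regular expression and field positions below are assumptions to adapt rather than a definitive parser:

import csv
import re
import sys

# Matches verbose "!address" detail lines of the rough form:
#   <base> <end> <size> <type> <state> <protect> <usage...>
# where the first three fields are WinDbg-style hex values such as 0`6eece000.
# This layout is an assumption -- check it against your debugger version.
ADDRESS_LINE = re.compile(
    r"^\s*([0-9a-f`]+)\s+([0-9a-f`]+)\s+([0-9a-f`]+)\s+(\S+)\s+(\S+)\s+(\S+)\s*(.*)$",
    re.IGNORECASE)

def parse_hex(field):
    # Convert a WinDbg hex number like 0`6eece000 to an int.
    return int(field.replace("`", ""), 16)

def main(log_path, csv_path):
    with open(log_path) as log, open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["base", "end", "size_mb", "type", "state", "protect", "usage"])
        for line in log:
            m = ADDRESS_LINE.match(line)
            if not m:
                continue  # headers, blank lines, !eeheap output, etc.
            base, end, size = (parse_hex(f) for f in m.groups()[:3])
            writer.writerow([hex(base), hex(end), round(size / 2**20, 2)] + list(m.groups()[3:]))

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])

Run it as python addr2csv.py address.log regions.csv (the script name is arbitrary); the resulting CSV loads straight into Excel for pivoting by type, state, or usage.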
One last note: I correlated vmmap.exe's output with the !address output and noted the region types that vmmap could identify from various sources (similar to the sources !heap and !eeheap use) but that !address didn't know about. That is, these are things that vmmap.exe labeled but !address didn't:
.data
.pdata
.rdata
.text
64-bit thread stack
Domain 1
Domain 1 High Frequency Heap
Domain 1 JIT Code Heap
Domain 1 Low Frequency Heap
Domain 1 Virtual Call Stub
Domain 1 Virtual Call Stub Lookup Heap
Domain 1 Virtual Call Stub Resolve Heap
GC
Large Object Heap
Native heaps
Thread Environment Blocks
There were still a lot of "private" bytes unaccounted for, but again, I can narrow the problem down if I can weed these out.
Hope this gives you some ideas on how to investigate. I'm in the same boat, so I'd appreciate hearing what you find, too. Thanks!

"Usage Summary" shows that you have 3696 unclassified regions giving a total of 1.733 GB.
"Largest Region" shows that the largest of those unclassified regions is 242 MB.
The rest of the unclassified regions (3695 of them) together make up the difference, bringing the total to 1.733 GB.
Try doing a !heap -s and summing the Virt column to see the size of the native heaps; I think these also fall into the unclassified bucket. (A quick summing sketch follows.)
(NB: earlier versions show the native heap explicitly in !address -summary.)
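If you'd rather not sum the Virt column by hand, a few lines of Python can do it over the pasted !heap -s output. A minimal sketch, assuming the column layout shown in the question (address, flags, Reserv, Commit, Virt, ..., with sizes in KB); adjust the column index if your debugger version differs:

import re
import sys

total_kb = 0
for line in sys.stdin:
    fields = line.split()
    # Heap rows start with a hex heap address; Virt is the 5th column (index 4).
    # Header, separator, and "Virtual block:" rows won't match.
    if len(fields) >= 5 and re.fullmatch(r"[0-9a-f]+", fields[0], re.IGNORECASE):
        try:
            total_kb += int(fields[4])
        except ValueError:
            continue
print(f"Total Virt: {total_kb} KB ({total_kb / 2**20:.2f} GB)")

Paste the !heap -s output into a file and run, e.g., python sum_virt.py < heap.txt (the script name is arbitrary).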

I keep a copy of Debugging Tools for Windows 6.11.1.404, which seems to display something more meaningful for "unclassified".
With that version, I see a list of TEB addresses and then this:
0:000> !address -summary
--------- PEB fffde000 not found ----
TEB fffdd000 in range fffdb000 fffde000
TEB fffda000 in range fffd8000 fffdb000
...snip...
TEB fe01c000 in range fe01a000 fe01d000
ProcessParametrs 002c15e0 in range 002c0000 003c0000
Environment 002c0810 in range 002c0000 003c0000
-------------------- Usage SUMMARY --------------------------
TotSize ( KB) Pct(Tots) Pct(Busy) Usage
41f08000 ( 1080352) : 25.76% 34.88% : RegionUsageIsVAD
42ecf000 ( 1096508) : 26.14% 00.00% : RegionUsageFree
5c21000 ( 94340) : 02.25% 03.05% : RegionUsageImage
c900000 ( 205824) : 04.91% 06.64% : RegionUsageStack
0 ( 0) : 00.00% 00.00% : RegionUsageTeb
68cf8000 ( 1717216) : 40.94% 55.43% : RegionUsageHeap
0 ( 0) : 00.00% 00.00% : RegionUsagePageHeap
0 ( 0) : 00.00% 00.00% : RegionUsagePeb
0 ( 0) : 00.00% 00.00% : RegionUsageProcessParametrs
0 ( 0) : 00.00% 00.00% : RegionUsageEnvironmentBlock
Tot: ffff0000 (4194240 KB) Busy: bd121000 (3097732 KB)
-------------------- Type SUMMARY --------------------------
TotSize ( KB) Pct(Tots) Usage
42ecf000 ( 1096508) : 26.14% : <free>
5e6e000 ( 96696) : 02.31% : MEM_IMAGE
28ed000 ( 41908) : 01.00% : MEM_MAPPED
b49c6000 ( 2959128) : 70.55% : MEM_PRIVATE
-------------------- State SUMMARY --------------------------
TotSize ( KB) Pct(Tots) Usage
9b4d1000 ( 2544452) : 60.67% : MEM_COMMIT
42ecf000 ( 1096508) : 26.14% : MEM_FREE
21c50000 ( 553280) : 13.19% : MEM_RESERVE
Largest free region: Base bc480000 - Size 38e10000 (931904 KB)
With my "current" version (6.12.2.633) I get this from the same dump. Two things I note:
The data seems to be the sum of the HeapAlloc/RegionUsageHeap and VirtualAlloc/RegionUsageIsVAD).
The lovely EFAIL error which is no doubt in part responsible for the missing data!
I'm not sure how that'll help you with your managed code, but I think it actually answers the original question ;-)
0:000> !address -summary
Failed to map Heaps (error 80004005)
--- Usage Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
<unclassified> 7171 aab21000 ( 2.667 Gb) 90.28% 66.68%
Free 637 42ecf000 ( 1.046 Gb) 26.14%
Stack 603 c900000 ( 201.000 Mb) 6.64% 4.91%
Image 636 5c21000 ( 92.129 Mb) 3.05% 2.25%
TEB 201 c9000 ( 804.000 kb) 0.03% 0.02%
ActivationContextData 14 11000 ( 68.000 kb) 0.00% 0.00%
CsrSharedMemory 1 5000 ( 20.000 kb) 0.00% 0.00%
--- Type Summary (for busy) ------ RgnCount ----------- Total Size -------- %ofBusy %ofTotal
MEM_PRIVATE 7921 b49c6000 ( 2.822 Gb) 95.53% 70.55%
MEM_IMAGE 665 5e6e000 ( 94.430 Mb) 3.12% 2.31%
MEM_MAPPED 40 28ed000 ( 40.926 Mb) 1.35% 1.00%
--- State Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
MEM_COMMIT 5734 9b4d1000 ( 2.427 Gb) 82.14% 60.67%
MEM_FREE 637 42ecf000 ( 1.046 Gb) 26.14%
MEM_RESERVE 2892 21c50000 ( 540.313 Mb) 17.86% 13.19%
--- Protect Summary (for commit) - RgnCount ----------- Total Size -------- %ofBusy %ofTotal
PAGE_READWRITE 4805 942bd000 ( 2.315 Gb) 78.37% 57.88%
PAGE_READONLY 215 3cbb000 ( 60.730 Mb) 2.01% 1.48%
PAGE_EXECUTE_READ 78 2477000 ( 36.465 Mb) 1.21% 0.89%
PAGE_WRITECOPY 74 75b000 ( 7.355 Mb) 0.24% 0.18%
PAGE_READWRITE|PAGE_GUARD 402 3d6000 ( 3.836 Mb) 0.13% 0.09%
PAGE_EXECUTE_READWRITE 80 3b0000 ( 3.688 Mb) 0.12% 0.09%
PAGE_EXECUTE_WRITECOPY 80 201000 ( 2.004 Mb) 0.07% 0.05%
--- Largest Region by Usage ----------- Base Address -------- Region Size ----------
<unclassified> 786000 17d9000 ( 23.848 Mb)
Free bc480000 38e10000 ( 910.063 Mb)
Stack 6f90000 fd000 (1012.000 kb)
Image 3c3c000 ebe000 ( 14.742 Mb)
TEB fdf8f000 1000 ( 4.000 kb)
ActivationContextData 190000 4000 ( 16.000 kb)
CsrSharedMemory 7efe0000 5000 ( 20.000 kb)
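As a quick sanity check on the first note: adding the two usage buckets from the 6.11 output does reproduce the <unclassified> total that 6.12 reports. In Python:

# Values in KB, taken from the 6.11 "Usage SUMMARY" above.
region_usage_heap = 1717216    # RegionUsageHeap
region_usage_is_vad = 1080352  # RegionUsageIsVAD

total_kb = region_usage_heap + region_usage_is_vad
print(total_kb / 2**20)  # ~2.667 -- matches the 2.667 Gb of <unclassified>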

Your best bet would be to use the !eeheap and !gchandles commands in WinDbg (http://msdn.microsoft.com/en-us/library/bb190764.aspx) and see whether you can find what might be leaking that way.
Unfortunately, you probably won't get the exact help you're looking for, because diagnosing these types of issues is almost always very time-intensive and, outside of the simplest cases, requires a full analysis of the dump. It's unlikely that someone can point you to a direct answer on Stack Overflow; mostly people will be able to point you to commands that might be helpful. You're going to have to do a lot of digging to find out what is happening.

I recently spent some time diagnosing a customer's issue where their app was using 70 GB before terminating (likely due to hitting an IIS app pool recycling limit, but still unconfirmed). They sent me a 35 GB memory dump. Based on my recent experience, here are some observations I can make about what you've provided:
In the !heap -s output, 284 MB of the 1.247 GB shows up in the Commit column. If you were to open this dump in DebugDiag, it would tell you that heap 0x60000 has 1 GB of committed memory. Add up the commit sizes of the 11 segments it reports, though, and they only come to about 102 MB, not 1 GB. So annoying.
The "missing" memory isn't missing. It's actually hinted at in the !heap -s output by the "Virtual block:" lines. Unfortunately, !heap -s sucks: it doesn't show the end address properly and therefore reports the size as 0. Check the output of the following commands:
!address 17e0000
!address 45bd0000
!address 6fff0000
It will report the proper end address and therefore an accurate "Region Size". Even better, it gives a succinct, human-readable version of the region size. If you add the sizes of those three regions to the 102 MB, you should get pretty close to 1 GB.
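To sanity-check that arithmetic, here is a tiny Python sketch. The three region sizes below are hypothetical placeholders (since !heap -s printed them as 0); substitute the "Region Size" values that !address actually reports for your dump:

def windbg_hex(s):
    # Parse a WinDbg-style hex size such as 0`6eece000.
    return int(s.replace("`", ""), 16)

# Hypothetical sizes for the three "Virtual block" regions -- replace with
# what "!address 17e0000" etc. report.
virtual_blocks = ["0`10000000", "0`18000000", "0`10000000"]

committed_segments_mb = 102  # from summing the heap's segment commit sizes
total_mb = committed_segments_mb + sum(windbg_hex(s) for s in virtual_blocks) / 2**20
print(f"~{total_mb:.0f} MB committed in heap 0x60000")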
So what's in them? Well, you can look using dq. By spelunking you might find a hint at why they were allocated. Perhaps your managed code calls some third-party code which has a native side.
You might be able to find references to your heap by using !heap 6fff0000 -x -v. If there are references, you can see which memory regions they live in by using !address again. In my customer's issue I found a reference that lived in a region with "Usage: Stack". A "More info:" hint referenced the stack's thread, which happened to have some large basic_string append/copy calls at the top.

Related

Is 1 KB = 1,024 bytes or 1,000 bytes?

Everyone keeps saying that:
1,000 KB = 1 MB (decimal) -> megabyte (MB), 10^6 bytes = 1,000,000 bytes
1,024 KB = 1 MiB (binary) -> mebibyte (MiB), 2^20 bytes = 1,048,576 bytes
But when you look at the Windows properties of a folder, the dialog clearly uses 1 KB = 1,024 bytes (and it doesn't say "MiB", yet it still uses 1,024). So, what's the verdict?
Is 1 KB = 1,024 bytes or 1,000?
In casual discourse it can be either one; it's context-sensitive.
Memory sizes tend to use 1,024.
File sizes tend to use 1,000.
Exceptions are common.
Otherwise, see Kilobyte.
Pedantic concerns like this include using K when k should be used, as in 1 kB.
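To make the two conventions concrete, here is a minimal Python sketch that formats the same byte count both ways (the helper names and unit tables are mine, not from any standard library):

def human_si(n):
    # Decimal (SI) units: 1 kB = 1,000 bytes.
    for unit in ("B", "kB", "MB", "GB", "TB"):
        if n < 1000 or unit == "TB":
            return f"{n:.1f} {unit}"
        n /= 1000

def human_binary(n):
    # Binary (IEC) units: 1 KiB = 1,024 bytes.
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if n < 1024 or unit == "TiB":
            return f"{n:.1f} {unit}"
        n /= 1024

print(human_si(1000000), "/", human_binary(1000000))  # 1.0 MB / 976.6 KiB

The folder-properties dialog in the question effectively does the binary division while printing decimal-style labels like "KB", which is exactly the source of the confusion.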

Tag Size and Cache Bits Exercise

I am studying for my computer architecture exam, which is tomorrow, and am stuck on a practice exercise regarding tag size and the total number of cache bits. Here is the question:
Question 8:
This question deals with main and cache memory only.
Address size: 32 bits
Block size: 128 items
Item size: 8 bits
Cache Layout: 6 way set associative
Cache Size: 192 KB (data only)
Write Policy: Write Back
Answer: The tag size is 17 bits. The total number of cache bits is 1602048.
I know that this is a fairly straightforward exercise, but I seem to be lacking the proper formulas. I also know that the address structure for an N-way set-associative cache is |TAG|SET|OFFSET| (for example, |TAG 25 bits|SET 2 bits|OFFSET 5 bits|), and that tag size = address size - set bits - offset bits, which gives the answer of 17 bits for the tag size.
However, how do I calculate the total number of cache bits?
cache size in bytes: 192*1024 = 196608
number of blocks: 196608 / 128 = 1536
number of sets: 1536 / 6 = 256
set number bits: log2(256) = 8
offset number bits: log2(128) = 7
tag size: 32-(8+7) = 17
metadata: valid+dirty = 2 bits
total tag + metadata: (17+2)*1536 = 29184 bits
total data: 1536*128*8 = 1572864 bits
total size: 29184 + 1572864 = 1,602,048
There could also be bits used for the replacement policy, but we can assume it's random to make the answer work.
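The same arithmetic as a quick Python check (the variable names are mine):

address_bits = 32
block_bytes = 128          # 128 items x 8 bits each = 128 bytes per block
ways = 6
cache_bytes = 192 * 1024   # data only

blocks = cache_bytes // block_bytes               # 1536
sets = blocks // ways                             # 256
offset_bits = block_bytes.bit_length() - 1        # log2(128) = 7
set_bits = sets.bit_length() - 1                  # log2(256) = 8
tag_bits = address_bits - set_bits - offset_bits  # 17

metadata_bits = 2  # valid + dirty (write-back policy)
total_bits = blocks * (block_bytes * 8 + tag_bits + metadata_bits)
print(tag_bits, total_bits)  # 17 1602048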

Haskell: Data.HashSet (from unordered-container) Performance for Large Sets

The data
First of all, let's generate some input so we have concrete data to talk about:
python -c 'for f in xrange(4000000): print f' > input.txt
This will generate a file input.txt containing the numbers from 0 to 3999999, each on its own line. That means we should have a file with 4,000,000 lines, adding up to 30,888,890 bytes, roughly 29 MiB.
Everything as a list
Right, let's load everything into memory as a [Text]:
import Data.Conduit
import Data.Text (Text)
import Control.Monad.Trans.Resource (runResourceT)
import qualified Data.Conduit.Binary as CB
import qualified Data.Conduit.Text as CT
import qualified Data.Conduit.List as CL
main :: IO ()
main = do
    hs <- (runResourceT
            $ CB.sourceFile "input.txt"
           $$ CT.decode CT.utf8
           =$ CT.lines
           =$ CL.fold (\b a -> a `seq` b `seq` a:b) [])
    print $ head hs
and run it:
[1 of 1] Compiling Main ( Test.hs, Test.o )
Linking Test ...
"3999999"
2,425,996,328 bytes allocated in the heap
972,945,088 bytes copied during GC
280,665,656 bytes maximum residency (13 sample(s))
5,120,528 bytes maximum slop
533 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 4378 colls, 0 par 0.296s 0.309s 0.0001s 0.0009s
Gen 1 13 colls, 0 par 0.452s 0.661s 0.0508s 0.2511s
INIT time 0.000s ( 0.000s elapsed)
MUT time 0.460s ( 0.465s elapsed)
GC time 0.748s ( 0.970s elapsed)
EXIT time 0.002s ( 0.034s elapsed)
Total time 1.212s ( 1.469s elapsed)
%GC time 61.7% (66.0% elapsed)
Alloc rate 5,271,326,694 bytes per MUT second
Productivity 38.3% of total user, 31.6% of total elapsed
real 0m1.481s
user 0m1.212s
sys 0m0.232s
It runs in 1.4 s and takes 533 MB of memory. According to the Haskell wiki's Memory Footprint page, a Text instance should take 6 words + 2N bytes of memory. I'm on 64-bit, so one word is 8 bytes. That means the 4M Text instances should take (6 * 8 bytes * 4000000) + (2 * 26888890) bytes = 234 MiB. (The 26,888,890 is the count of all the bytes in input.txt that are not newline characters.) For the list which will hold them all, we need additional memory of (1 + 3N) words + N * sizeof(v). sizeof(v) should be 8, because it'll be a pointer to the Text. The list should then use (1 + 3 * 4000000) * 8 bytes + 4000000 * 8 bytes = 122 MiB. So in total (list + strings) we'd expect 356 MiB of memory to be used. I don't know where the difference of 177 MiB (50%) went, but let's ignore that for now.
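For reference, the same back-of-the-envelope arithmetic in a few lines of Python (constants as stated above):

word = 8              # bytes per word on a 64-bit machine
n = 4000000           # number of Text strings
payload = 26888890    # non-newline bytes in input.txt

texts = n * 6 * word + 2 * payload          # 6 words + 2N bytes per Text
list_cells = (1 + 3 * n) * word + n * word  # (1 + 3N) words + N pointers
print(f"{texts / 2**20:.0f} + {list_cells / 2**20:.0f} = {(texts + list_cells) / 2**20:.0f} MiB")
# 234 + 122 = 356 MiB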
The large hash set
Finally, we come to the use case I'm actually interested in: storing all the words in a large Data.HashSet. For that, I changed the program ever so slightly:
import Data.Conduit
import Data.Text (Text)
import Control.Monad.Trans.Resource (runResourceT)
import qualified Data.Conduit.Binary as CB
import qualified Data.Conduit.Text as CT
import qualified Data.Conduit.List as CL
import qualified Data.HashSet as HS
main :: IO ()
main = do
    hs <- (runResourceT
            $ CB.sourceFile "input.txt"
           $$ CT.decode CT.utf8
           =$ CT.lines
           =$ CL.fold (\b a -> a `seq` b `seq` HS.insert a b) HS.empty)
    print $ HS.size hs
if we run that again
$ ghc -fforce-recomp -O3 -rtsopts Test.hs && time ./Test +RTS -sstderr
[1 of 1] Compiling Main ( Test.hs, Test.o )
Linking Test ...
4000000
6,544,900,208 bytes allocated in the heap
6,314,477,464 bytes copied during GC
442,295,792 bytes maximum residency (26 sample(s))
8,868,304 bytes maximum slop
1094 MB total memory in use (0 MB lost due to fragmentation)
Tot time (elapsed) Avg pause Max pause
Gen 0 12420 colls, 0 par 5.756s 5.869s 0.0005s 0.0034s
Gen 1 26 colls, 0 par 3.068s 3.633s 0.1397s 0.6409s
INIT time 0.000s ( 0.000s elapsed)
MUT time 3.567s ( 3.592s elapsed)
GC time 8.823s ( 9.502s elapsed)
EXIT time 0.008s ( 0.097s elapsed)
Total time 12.399s ( 13.192s elapsed)
%GC time 71.2% (72.0% elapsed)
Alloc rate 1,835,018,578 bytes per MUT second
Productivity 28.8% of total user, 27.1% of total elapsed
real 0m13.208s
user 0m12.399s
sys 0m0.646s
It's quite bad: 13 s and 1094 MiB of memory used. The memory footprint page lists 4.5N words + N * sizeof(v) for a hash set, which should come to (4.5 * 4000000 * 8 bytes) + (4000000 * 8 bytes) = 167 MiB. Adding the storage for the strings (234 MiB), I'd expect 401 MiB; the actual 1094 MiB is more than double that, and it feels quite slow on top of it :(.
Thought experiment: manually managing the memory
As a thought experiment: using a language where we can manually control memory layout and implementing the hash set with open addressing, I'd expect the following sizes. For fairness, I'll assume the strings are still UTF-16 (which is what Data.Text uses internally). Given 26,888,890 characters in total (without newlines), the strings in UTF-16 should be roughly 2 * 26,888,890 = 53,777,780 bytes = 51 MiB. We will need to store the length of every string, which is 8 bytes * 4,000,000 = 30 MiB. And we will need space for the hash table (4,000,000 * 8 bytes), again 30 MiB. Given that hash tables normally grow exponentially, one would maybe expect 32 MiB or 64 MiB worst case. Let's go with the worst case: 64 MiB for the table + 30 MiB for the string lengths + 51 MiB for the actual string data, a grand total of 145 MiB.
So given that Data.HashSet is not a specialised implementation for storing strings, the calculated 401 MiB would not be too bad, but the actually used 1094 MiB seems rather wasteful.
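Again, the two estimates above as a quick Python check:

word, n, chars = 8, 4000000, 26888890
MiB = 2**20

# Data.HashSet estimate: 4.5N words of overhead + N pointers, plus the Texts.
hashset = (4.5 * n + n) * word
texts = n * 6 * word + 2 * chars
print(f"{(hashset + texts) / MiB:.0f} MiB")  # ~402 MiB, i.e. the ~401 MiB estimate

# Open-addressing thought experiment: table + string lengths + UTF-16 payload.
print(f"{(64 * MiB + n * word + 2 * chars) / MiB:.0f} MiB")  # ~146 MiB, the ~145 MiB worst case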
The questions finally :)
So we finally got there:
Where is the error in my calculations?
Is there some problem in my implementation or is 1094 MiB really the best we can get?
Versions and stuff
I should probably use ByteString instead of Text, as I only need ASCII characters
I'm on GHC 7.10.1 and unordered-containers-0.2.5.1
For comparison: 4,000,000 Ints:
import Data.List (foldl')
import qualified Data.HashSet as HS

main = do
    let hs = foldl' (\b a -> a `seq` b `seq` HS.insert a b) (HS.empty :: HS.HashSet Int) [1..4000000]
    print $ HS.size hs
doesn't look any better:
[...]
798 MB total memory in use (0 MB lost due to fragmentation)
[...]
real 0m9.956s
That's almost 800 MiB for 4M Ints!

WinDbg memory leak investigation - missing heap memory

I am investigating a slow memory leak in a Windows application using WinDbg.
!heap -s gives the following output:
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-------------------------------------------------------------------------------------
00000023d62c0000 08000002 1182680 1169996 1181900 15759 2769 78 3 2b63 LFH
00000023d4830000 08008000 64 4 64 2 1 1 0 0
00000023d6290000 08001002 1860 404 1080 43 7 2 0 0 LFH
00000023d6dd0000 08001002 32828 32768 32828 32765 33 1 0 0
External fragmentation 99 % (33 free blocks)
00000023d8fb0000 08001000 16384 2420 16384 2412 5 5 0 3355
External fragmentation 99 % (5 free blocks)
00000023da780000 08001002 60 8 60 5 2 1 0 0
-------------------------------------------------------------------------------------
This shows that the heap with address 00000023d62c0000 has over a gigabyte of reserved memory.
Next I ran the command !heap -stat -h 00000023d62c0000
heap # 00000023d62c0000
group-by: TOTSIZE max-display: 20
size #blocks total ( %) (percent of total busy bytes)
30 19b1 - 4d130 (13.81)
20 1d72 - 3ae40 (10.55)
ccf 40 - 333c0 (9.18)
478 8c - 271a0 (7.01)
27158 1 - 27158 (7.00)
40 80f - 203c0 (5.78)
410 79 - 1eb90 (5.50)
68 43a - 1b790 (4.92)
16000 1 - 16000 (3.94)
50 39e - 12160 (3.24)
11000 1 - 11000 (3.05)
308 54 - fea0 (2.85)
60 28e - f540 (2.75)
8018 1 - 8018 (1.43)
80 f2 - 7900 (1.36)
1000 5 - 5000 (0.90)
70 ac - 4b40 (0.84)
4048 1 - 4048 (0.72)
100 3e - 3e00 (0.69)
48 c9 - 3888 (0.63)
If I add up the total sizes of the heap blocks from the above command (0x4d130 + 0x3ae40 + ...), I get only a few megabytes of allocated memory.
Am I missing something here? How can I find which blocks are consuming the gigabyte of allocated heap memory?
I believe that !heap -stat is broken for 64-bit dumps, at least big ones. I have instead used DebugDiag 1.2 for hunting memory leaks in 64-bit processes.

PrestaShop performance - waiting time

I have a PrestaShop installation, and the problem is that pages load very slowly.
When I enter any URL of my site, the browser waits from 2 to 10(!) seconds, and then the site loads normally. Even if I add a product to the cart (an AJAX call), I need to wait a few seconds before the cart refreshes.
Can someone help me with this?
Here you can see the results of the Pingdom speed test:
http://tools.pingdom.com/fpt/#!/d4ktam/minvara.se
As you can see, the site waits while the languages are loading, but I can't find where the problem is.
Here you can see the PrestaShop debug data:
Load time: 9.288s
You'd better run your shop on a toaster
config: 1.376s
constructor: 0ms
init: 4.658s
checkAccess: 0ms
setMedia: 5ms
postProcess: 0ms
initHeader: 0ms
initContent: 3.134s
initFooter: 0ms
display: 114ms
Hook processing: 2.304s / 12.01 Mb
displayRightColumn: 860ms / 1.8 Mb
displayTop: 585ms / 2.73 Mb
moduleRoutes: 290ms / 0.56 Mb
displayLeftColumn: 220ms / 0.64 Mb
displayHeader: 218ms / 5.95 Mb
displayLeftColumnProduct: 120ms / 0.23 Mb
displayMyAccountBlock: 11ms / 0.09 Mb
actionProductOutOfStock: 0ms / 0 Mb
actionDispatcher: 0ms / 0 Mb
displayFooterProduct: 0ms / 0 Mb
actionFrontControllerSetMedia: 0ms / 0 Mb
displayRightColumnProduct: 0ms / 0 Mb
DisplayOverrideTemplate: 0ms / 0 Mb
displayProductButtons: 0ms / 0 Mb
displayProductTab: 0ms / 0 Mb
displayFooter: 0ms / 0 Mb
displayProductTabContent: 0ms / 0 Mb
Memory peak usage: 30.4 Mb
config: 7.15 Mb (7.2 Mb)
constructor: 0 Mb (7.2 Mb)
init: 8.29 Mb (15.6 Mb)
checkAccess: 0 Mb (15.6 Mb)
setMedia: 0.08 Mb (15.7 Mb)
postProcess: 0 Mb (15.7 Mb)
initHeader: 0.01 Mb (15.7 Mb)
initContent: 12.43 Mb (28.2 Mb)
initFooter: 0.01 Mb (28.2 Mb)
display: 1.36 Mb (30.4 Mb)
Total cache size (in Cache class): 0 Mb
DB type: DbPDO
SQL Queries: 129 queries
Time spent querying: 8.532s
Included files: 218
Size of included files: 2.66 Mb
Globals (> 1 Ko only): 830 Ko
_MODULES ≈ 401.5 Ko
_LANG ≈ 313 Ko
_SERVER ≈ 18.7 Ko
HTTP_SERVER_VARS ≈ 18.6 Ko
_ENV ≈ 18.1 Ko
HTTP_ENV_VARS ≈ 18.1 Ko
smarty ≈ 12.6 Ko
context ≈ 12.6 Ko
_COOKIE ≈ 3 Ko
HTTP_COOKIE_VARS ≈ 3 Ko
_MODULE ≈ 1.5 Ko
_transient ≈ 1.5 Ko
_GET ≈ 1.4 Ko
