Microsoft Visual Studio 2017 provides Diagnostic Tools.
Using this tool, we can read the memory used at any time while a program is running, which is very helpful.
However, this reading has to be done manually.
The original problem I want to solve is to get the peak memory usage for each instance in a single process.
If I could output the memory usage together with timestamps from Diagnostic Tools, then I could write a program to find the peak memory usage during each time interval, where each interval corresponds to solving one instance.
I understand that the memory measured by Diagnostic Tools is actually PeakPagedMemorySize64 of the Process class, while what I really want to know is the value of PeakWorkingSet64. Nevertheless, this is acceptable because the two values are quite close (for one instance, PeakPagedMemorySize64 = 1,272,377,344 bytes and PeakWorkingSet64 = 1,225,867,264 bytes).
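One workaround I am considering is to poll the counters myself instead of relying on Diagnostic Tools, and post-process the log. A rough, untested sketch of what I mean (the PID argument and the 100 ms sampling period are placeholders), using GetProcessMemoryInfo from psapi.h, whose WorkingSetSize and PagefileUsage fields correspond to WorkingSet64 and PagedMemorySize64:

    // sample_memory.cpp - poll another process's memory counters and print a timestamped CSV.
    // Build with MSVC: cl /EHsc sample_memory.cpp
    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>
    #include <cstdlib>

    #pragma comment(lib, "psapi.lib")

    int main(int argc, char* argv[])
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }
        DWORD pid = (DWORD)atoi(argv[1]);

        HANDLE hProcess = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
        if (!hProcess) { fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError()); return 1; }

        printf("time_ms,working_set_bytes,pagefile_bytes\n");
        for (;;) {
            PROCESS_MEMORY_COUNTERS pmc = { sizeof(pmc) };
            if (!GetProcessMemoryInfo(hProcess, &pmc, sizeof(pmc)))
                break;  // the target process has probably exited
            // WorkingSetSize ~ WorkingSet64, PagefileUsage ~ PagedMemorySize64
            printf("%llu,%zu,%zu\n",
                   (unsigned long long)GetTickCount64(),
                   pmc.WorkingSetSize, pmc.PagefileUsage);
            Sleep(100);  // sample every 100 ms; a shorter period gives a sharper peak estimate
        }
        CloseHandle(hProcess);
        return 0;
    }

The peak per instance would then simply be the maximum sample that falls inside each recorded interval.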
Related
This is a simple program to decompress a string. I'm just running a loop to show that memory usage increases and the memory used never gets released.
Memory is not getting released even after 8 hours.
Package for decompressing string: https://github.com/Albinzr/lzGo - (simple lz string algorithm)
I'm adding a gist link since the string used for decompressing is large
Source Code:
Code
Activity Monitor
I'm completely new to Go. Can anyone tell me how I can solve the memory issue?
UPDATE Jul 15 20
The app still crashes when the memory limit is reached. Since it only uses 12 MB - 15 MB, this should not happen!
There is a lot going on here.
First, using Go version 1.14.2 your program works fine for me. It does not appear to be leaking memory.
Second, even when I purposely created a memory leak by increasing the loop size to 100 and saving the results in an array, I only used about 100 MB of memory.
Which gets us to the third point: you should not be using Activity Monitor or any other operating-system-level tool to check for memory leaks in a Go program. Operating system memory management is a painfully complex topic, and the OS tools are designed to help you determine how a program is affecting the whole system, not what is going on within the program.
Specifically, macOS "Real Memory" (analogous to RSS, Resident Set Size) includes memory the program is no longer using but the OS has not taken back yet. When the garbage collector frees up memory and tells the OS it does not need that memory anymore, the OS does not immediately take it back. (Why it works that way is way beyond the scope of this answer.)
Also, if the OS is under memory pressure, it can take back not only memory the program has freed, but also (temporarily) memory the program is still using but has not accessed "recently", so that another program that urgently needs memory can use it. In this case, "Real Memory" will be reduced even if the process is not actually using less memory. There is no statistic reported by the operating system that will help you here.
You need to use native Go settings like GODEBUG=gctrace=1 or tools like expvar and expvarmon to see what the garbage collector is doing.
As for why your program ran out of memory when you limited it, keep in mind that by default Go builds a dynamically linked executable and just reading in all the shared libraries can take up a lot of memory. Try building your application with static linking using CGO_ENABLED=0 and see if that helps. See how much memory it uses when you only run 1 iteration of the loop.
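For example, roughly like this (the binary name is a placeholder):

    # watch the garbage collector while the program runs
    GODEBUG=gctrace=1 ./yourprogram

    # build without cgo (statically linked) and compare memory use
    CGO_ENABLED=0 go build -o yourprogram .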
Is there a way for me to see exactly how much memory each Outlook add-in is using? I have a few customers on 32-bit Office who are all having issues with screen flashing and crashing and I suspect that we as a company have deployed too many add-ins, and even with Large Address Awareness (LAA), they're running out of memory which is causing Outlook to freak out.
I didn't see a way to do this in Outlook, so I created a .dmp file and opened it in WinDbg, but I'm new to this application and have no clue how to see memory usage by specific add-ins (the .dmp file is only of outlook.exe).
The following assumes plugins created in .NET.
The allocation of memory with a new statement goes to the .NET memory manager. In order to find out which plugin allocated the memory, that information would need to be stored in the .NET heap as well.
A UST (User Mode Stack Trace) database, like the one available for the Windows Heap Manager, is not available in .NET. Also, the .NET memory manager works directly on top of VirtualAlloc(), so it does not use the Windows Heap Manager. Basically, the reason is garbage collection.
Is there a way for me to see exactly how much memory each Outlook add-in is using?
No, since this information is not stored in crash dumps and there's no setting to enable it.
What you need is a memory profiler which is specific for .NET.
If you work with .NET and Visual Studio already, perhaps you're using JetBrains ReSharper. The Ultimate edition comes with a tool called dotMemory, so you might already have a license and just need to install it via the control panel ("modify" the ReSharper installation).
It has (and other tools probably have as well) a feature to group memory allocations by assembly:
The screenshot shows memory allocated by an application called "MemoryPerformance". It retains 202 MB in objects, and those objects are mostly objects of the .NET framework (mscorlib).
The following assumes plugins created in C++ or other "native" languages, at least not .NET.
The allocation of memory with a new statement goes to HeapAlloc(). In order to find out who allocated the memory, that information would need to be stored in the heap as well.
However, you cannot provide that information in the new statement, and even if it were possible, you would need to rewrite all the new statements in your code.
Another way would be for HeapAlloc() to look at the call stack at the time someone requests memory. In normal operation, that's too much cost (time-wise) and too much overhead (memory-wise). However, it is possible to enable the so-called User Mode Stack Trace database, sometimes abbreviated as UST database. You can do that with the tool GFlags, which ships with WinDbg.
The tool to capture memory snapshots is UMDH, also shipped with WinDbg. It stores the results as plain text files. It should be possible to extract statistical data from those USTs; however, I'm not aware of a tool that does that, so you would need to write one yourself.
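A typical workflow looks roughly like this (the image name and PID are placeholders; gflags takes effect the next time the process starts, and you need a symbol path so the stack traces in the logs resolve to function names):

    set _NT_SYMBOL_PATH=srv*c:\symbols*https://msdl.microsoft.com/download/symbols
    gflags /i outlook.exe +ust
    umdh -p:<pid> -f:before.txt
    (use Outlook for a while)
    umdh -p:<pid> -f:after.txt
    umdh before.txt after.txt > diff.txt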
The third approach is using the concept of "heap tagging". However, it's quite complex and also needs modifications in your code. I never implemented it, but you can look at the question How to benefit from Heap tagging by DLL?
Let's say the UST approach looks most feasible. How large should the UST database be?
Until now, 50 MB was sufficient for me to identify and fix memory leaks. However, for that use case it's not important to get information about all memory. It just needs enough samples to support a hypothesis. Those 50 MB are IMHO allocated in your application's memory, so it may affect the application.
The UST database only stores the addresses, not the call stack as text. So in a 32-bit application, each frame on the call stack only needs 32 bits (4 bytes) of storage.
In your case, 50 MB will not be sufficient. Considering an average depth of 10 frames and an average allocation size of 256 bytes (4 bytes for an int, but also larger things like strings), you get
4 GB / 256 bytes = 16M allocations
16M allocations * 10 frames * 4 byte/frame = 640 MB UST
If the given assumptions are realistic (I can't guarantee that), you would need a 640 MB UST database. That will affect your application considerably, since it reduces the available memory from 4 GB to about 3.3 GB, so the OOM will occur earlier.
The UST information should also be available in the DMP file, if it was configured at the time the crash dump was created. Certainly not in your DMP file, otherwise you would have told us. However, it's not available in a way that's good for statistics. Using the UMDH text files IMHO is a better approach.
Is there a way for me to see exactly how much memory each Outlook add-in is using?
Not with the DMP file you have at the moment. It will still be hard with the tools available with WinDbg.
There are a few other options left:
Disable all plugins and measure the memory of Outlook itself. Then enable one plugin at a time and measure the memory with that plugin enabled. Calculate the difference to find out how much additional memory that plugin needs.
Does it crash immediately at startup? Or later, say after 10 minutes of usage? Could it be a memory leak? Identifying a memory leak could be easier: just enable one plugin at a time and monitor memory usage over time. Use a memory profiler, not WinDbg. It will be much easier to use and it can draw the appropriate graphics you need.
Note that you need to define a clear process to measure memory. Some memory will only be allocated when you do something specific ("lazy initialization"). Perhaps you want to measure that memory, too.
I'm trying to make an application more memory efficient and in order to do that I need to understand its current memory usage. So I've got a little console program where I have it pause at various steps in the process so I can analyze its memory usage with various tools. This is a C++ program using standard STL allocators and containers (unique_ptr, vector, map, etc).
Unfortunately I've apparently run into an issue with how the Windows heap manager works. My understanding is that as the heap usage grows, it commits more memory from the operating system. But as the heap usage shrinks, it does NOT decommit that memory. See this answer and this comment.
Ultimately I can tell how much heap my program is actually using by examining it in windbg with the !heap extension, but that's a bit tedious and it would be nice to be able to use something easier like process explorer or performance monitor. Also, I'm going to get complaints when people open up task manager and see this misleading information.
So I guess my question is: A) is there a way to actually decommit heap memory without exiting the process, or, failing that, B) is there a strategy I can use to avoid this? (A custom allocator?)
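For (A), I know the CRT has _heapmin(), but I'm not sure how much it actually gives back. For (B), what I have in mind by "custom allocator" is giving the memory-heavy phase its own private Win32 heap and destroying it wholesale when the phase ends, so all of its pages are decommitted at once. A rough, untested sketch (the allocator name is mine):

    #include <windows.h>
    #include <vector>
    #include <cstddef>
    #include <new>

    // Minimal STL allocator that takes memory from a caller-owned private heap.
    template <class T>
    struct PhaseHeapAllocator {
        using value_type = T;
        HANDLE heap;  // created and destroyed by the caller

        explicit PhaseHeapAllocator(HANDLE h) : heap(h) {}
        template <class U> PhaseHeapAllocator(const PhaseHeapAllocator<U>& o) : heap(o.heap) {}

        T* allocate(std::size_t n) {
            void* p = HeapAlloc(heap, 0, n * sizeof(T));
            if (!p) throw std::bad_alloc();
            return static_cast<T*>(p);
        }
        void deallocate(T* p, std::size_t) { HeapFree(heap, 0, p); }
    };
    template <class T, class U>
    bool operator==(const PhaseHeapAllocator<T>& a, const PhaseHeapAllocator<U>& b) { return a.heap == b.heap; }
    template <class T, class U>
    bool operator!=(const PhaseHeapAllocator<T>& a, const PhaseHeapAllocator<U>& b) { return !(a == b); }

    int main()
    {
        HANDLE heap = HeapCreate(0, 0, 0);  // growable private heap for one "phase"
        {
            PhaseHeapAllocator<int> alloc(heap);
            std::vector<int, PhaseHeapAllocator<int>> v(alloc);
            v.resize(10000000);             // phase allocations live in the private heap
        }   // the vector is gone, but the heap may still hold committed pages
        HeapDestroy(heap);                  // now everything is released back to the OS
        return 0;
    }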
I'm trying to speed up the time taken to compile my application and one thing I'm investigating is to check what resources, if any, I can add to the build machine to speed things up. To this end, how do I figure out if I should invest in more CPU, more RAM, a better hard disk or whether the process is being bound by some other resource? I already saw this (How to check if app is cpu-bound or memory-bound?) and am looking for more tips and pointers.
What I've tried so far:
Time the process on the build machine vs. on my local machine. I found that the build machine takes twice as long as my machine.
Run "Resource Monitor" and look at the CPU, memory, and disk usage while the process is running. While doing this, I have trouble interpreting the numbers, mainly because I don't understand what each column means, how that translates to a virtual machine vs. a physical box, and what it means on multi-CPU boxes.
Start > Run > perfmon.exe
Performance Monitor can graph many system metrics that you can use to deduce where the bottlenecks are, including CPU load, I/O operations, pagefile hits, and so on.
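If you want those counters captured in a log alongside each build rather than watching the graphs, a small PDH program can sample them. A minimal sketch (the counter paths are only examples; pick whichever ones perfmon shows you):

    // log_counters.cpp - sample a few perfmon counters once per second.
    // Build with MSVC: cl /EHsc log_counters.cpp pdh.lib
    #include <windows.h>
    #include <pdh.h>
    #include <cstdio>

    #pragma comment(lib, "pdh.lib")

    int main()
    {
        PDH_HQUERY query;
        PDH_HCOUNTER cpu, disk, mem;
        PdhOpenQuery(NULL, 0, &query);
        PdhAddEnglishCounterW(query, L"\\Processor(_Total)\\% Processor Time", 0, &cpu);
        PdhAddEnglishCounterW(query, L"\\PhysicalDisk(_Total)\\Avg. Disk Queue Length", 0, &disk);
        PdhAddEnglishCounterW(query, L"\\Memory\\Available MBytes", 0, &mem);

        PdhCollectQueryData(query);            // prime the rate counters
        for (int i = 0; i < 600; ++i) {        // roughly 10 minutes of samples
            Sleep(1000);
            PdhCollectQueryData(query);
            PDH_FMT_COUNTERVALUE c, d, m;
            PdhGetFormattedCounterValue(cpu,  PDH_FMT_DOUBLE, NULL, &c);
            PdhGetFormattedCounterValue(disk, PDH_FMT_DOUBLE, NULL, &d);
            PdhGetFormattedCounterValue(mem,  PDH_FMT_DOUBLE, NULL, &m);
            printf("cpu=%5.1f%%  diskq=%5.2f  availMB=%6.0f\n",
                   c.doubleValue, d.doubleValue, m.doubleValue);
        }
        PdhCloseQuery(query);
        return 0;
    }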
Additionally, the Platform SDK now includes a tool called XPerf that can provide information more relevant to developers.
Random pausing will tell you your percentage split between CPU and I/O time.
Basically, if you grab 10 random stackshots, and if 80% (for example) of the time is in I/O, then on 8+/-1.3 samples the stack will reach into the system routine that reads or writes a buffer.
If you want higher precision, take more samples.
I am now profiling an application that does a lot of disk I/O.
At this point, I want to know how much time is spent on disk I/O, so that I can compare the I/O time against the whole execution time and decide on the next optimization target.
In short, I am seeking tools or methods to:
Calculate and summarize the total time of disk I/O operations in my application.
Stack traces are not mandatory but would be helpful.
Works on Windows or OS X.
I have no control over the component that does the disk I/O, so I have no way to add profiling code to my application to record the I/O time manually.
I have tried the Time Profiler from Xcode Instruments, but it is too heavyweight. I just want a summary of the time spent in I/O operations.
Thanks
Use XPerf on Windows to tell you in excruciating detail about the file I/O times. You can get stack traces with this as well - there's a great PDC09 video on using XPerf by Michael Milrud.
I once wrote some simple hooks for ReadFile/WriteFile under Win32. You just have to call an init() function from main, and the calls from the whole process will be intercepted.
I just don't know how to post them, because they are quite large.
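A much smaller sketch of the same idea (not my original hooks; this one uses Microsoft Detours for the interception and only times ReadFile, but the accounting part is the same):

    #include <windows.h>
    #include <detours.h>   // Microsoft Detours, link against detours.lib
    #include <atomic>
    #include <cstdio>

    // Pointer to the real ReadFile; DetourAttach redirects it to a trampoline.
    static BOOL (WINAPI *TrueReadFile)(HANDLE, LPVOID, DWORD, LPDWORD, LPOVERLAPPED) = ReadFile;

    static std::atomic<long long> g_readTicks{0};
    static std::atomic<long long> g_readCalls{0};

    static BOOL WINAPI HookedReadFile(HANDLE h, LPVOID buf, DWORD n, LPDWORD read, LPOVERLAPPED ov)
    {
        LARGE_INTEGER t0, t1;
        QueryPerformanceCounter(&t0);
        BOOL ok = TrueReadFile(h, buf, n, read, ov);   // forward to the real API
        QueryPerformanceCounter(&t1);
        g_readTicks += (t1.QuadPart - t0.QuadPart);
        ++g_readCalls;
        return ok;
    }

    // Call this once from main() before the I/O-heavy component runs.
    void InstallReadFileHook()
    {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)TrueReadFile, HookedReadFile);
        DetourTransactionCommit();
    }

    // Call this at the end to report the accumulated I/O time.
    void ReportReadFileTime()
    {
        LARGE_INTEGER freq;
        QueryPerformanceFrequency(&freq);
        printf("ReadFile: %lld calls, %.3f s total\n",
               g_readCalls.load(),
               (double)g_readTicks.load() / (double)freq.QuadPart);
    }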