Garbage Collection in VFP9 - visual-foxpro

When I close in-memory cursors in Visual FoxPro 9, the memory is not always reduced. How can I go about releasing the cursor from memory so I can reduce the amount of memory my application uses?

VFP caches things to make execution faster. However, when it needs the memory for something else, it'll release older cached stuff.
If you want to override VFP's decisions, use SYS(1104). You may also want to play with SYS(3050) to tune VFP's memory allocation.
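For example, a minimal sketch using the two functions mentioned above (the 16 MB figure is arbitrary):
=SYS(1104)               && purge memory cached for programs and data
=SYS(3050, 1, 16777216)  && cap foreground (interactive) buffer memory at roughly 16 MB
=SYS(3050, 2, 16777216)  && cap background (non-interactive) buffer memory likewise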

Related

How could I make a Go program use more memory? Is that recommended?

I'm looking for an option similar to -Xmx in Java, that is, a way to assign the maximum runtime memory my Go application can utilise. I was checking the runtime package, but I'm not entirely sure that is the way to go.
I tried setting something like this with func SetMaxStack() (likely very stupid):
debug.SetMaxStack(5000000000) // bytes
model.ExcelCreator()
The reason I want to do this is that there is currently an ample amount of RAM available, but the application won't consume more than 4-6% of it. I might be wrong here, but it could be forcing GC to happen much more often than needed, leading to a performance issue.
What I'm doing
Getting a large dataset from an RDBMS, processing it, and writing it out to Excel.
Another reason I am looking for such an option is to limit the maximum RAM usage on the server where the application will ultimately be deployed.
Any hints on this would be greatly appreciated.
The current stable Go (1.10) has only a single knob which may be used to trade memory for lower CPU usage by the garbage collection the Go runtime performs.
This knob is called GOGC, and its description reads:
The GOGC variable sets the initial garbage collection target percentage. A collection is triggered when the ratio of freshly allocated data to live data remaining after the previous collection reaches this percentage. The default is GOGC=100. Setting GOGC=off disables the garbage collector entirely. The runtime/debug package's SetGCPercent function allows changing this percentage at run time. See https://golang.org/pkg/runtime/debug/#SetGCPercent.
So basically setting it to 200 would supposedly double the amount of memory the Go runtime of your running process may use.
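For instance, a minimal sketch of raising the target at run time via runtime/debug (the value 200 is arbitrary, not a recommendation):
package main

import "runtime/debug"

func main() {
	// Equivalent to starting the process with GOGC=200: the heap may grow to
	// roughly twice the live data before the next collection is triggered.
	old := debug.SetGCPercent(200)
	_ = old // the previous setting (100 by default)

	// ... the rest of the program ...
}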
Having said that, I'd note that the Go runtime actually tries to adjust the behaviour of its garbage collector to the workload of your running program and the CPU processing power at hand.
I mean that normally there's nothing wrong with your program not consuming lots of RAM. If the collector happens to sweep the garbage fast enough without hampering performance in a significant way, I see no reason to worry: Go's GC is one of the most intensely fine-tuned parts of the runtime, and in fact it works very well.
Hence you may try to take another route:
1. Profile memory allocations of your program.
2. Analyze the profile and try to figure out where the hot spots are, and whether (and how) they can be optimized. You might start here and continue with the gazillion other intros to this stuff.
3. Optimize. Typically this amounts to making certain buffers reusable across different calls to the same function(s) consuming them, preallocating slices instead of growing them gradually, using sync.Pool where deemed useful, etc. (see the sketch below). Such measures may actually increase the memory truly used (that is, by live objects, as opposed to garbage), but they may lower the pressure on the GC.
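A sketch of the last two techniques; the function names, types, and sizes are made up for illustration:
package main

import (
	"bytes"
	"sync"
)

// Preallocate a slice when the final size is known (or can be estimated)
// instead of growing it element by element.
func buildRows(n int) [][]string {
	rows := make([][]string, 0, n) // capacity reserved up front
	for i := 0; i < n; i++ {
		rows = append(rows, []string{"cell"})
	}
	return rows
}

// Reuse temporary buffers across calls with sync.Pool to lower GC pressure.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func render(row []string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()
		bufPool.Put(buf)
	}()
	for _, cell := range row {
		buf.WriteString(cell)
	}
	return buf.String()
}

func main() {
	rows := buildRows(3)
	_ = render(rows[0])
}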

How to see exactly how much memory each add-in is using?

Is there a way for me to see exactly how much memory each Outlook add-in is using? I have a few customers on 32-bit Office who are all having issues with screen flashing and crashing, and I suspect that we as a company have deployed too many add-ins; even with Large Address Awareness (LAA), they're running out of memory, which is causing Outlook to freak out.
I didn't see a way to do this in Outlook, so I created a .dmp file and opened it via WinDbg, but I'm new to this application and have no clue how to see specific memory usage by specific add-ins (the .dmp file is only of outlook.exe).
The following assumes plugins created in .NET.
The allocation of memory with a new statement goes to the .NET memory manager. In order to find out which plugin allocated the memory, that information would need to be stored in the .NET heap as well.
A UST (User Mode Stack Trace) database like the one available for the Windows Heap Manager is not available in .NET. Also, the .NET memory manager works directly on top of VirtualAlloc(), so it does not use the Windows Heap Manager. Basically, the reason is garbage collection.
Is there a way for me to see exactly how much memory each Outlook add-in is using?
No, since this information is not stored in crash dumps and there's no setting to enable it.
What you need is a memory profiler which is specific for .NET.
If you work with .NET and Visual Studio already, perhaps you're using JetBrains ReSharper. The Ultimate edition comes with a tool called dotMemory, so you might already have a license and just need to install it via the control panel ("modify" the ReSharper installation).
It has (and other tools probably have as well) a feature to group memory allocations by assembly.
The screenshot shows memory allocated by an application called "MemoryPerformance". It retains 202 MB in objects, and those objects are mostly objects of the .NET Framework (mscorlib).
The following assumes plugins created in C++ or other "native" languages, i.e. anything that is not .NET.
The allocation of memory with a new statement goes to HeapAlloc(). In order to find out who allocated the memory, that information would need to be stored in the heap as well.
However, you cannot provide that information in the new statement, and even if it were possible, you would need to rewrite all the new statements in your code.
Another way would be for HeapAlloc() to have a look at the call stack at the time someone wants memory. In normal operation, that's too much cost (time-wise) and too much overhead (memory-wise). However, it is possible to enable the so-called User Mode Stack Trace Database, sometimes abbreviated as UST database. You can do that with the tool GFlags, which ships with WinDbg.
The tool to capture memory snapshots is UMDH, also available with WinDbg. It will store the results as plain text files. It should be possible to extract statistical data from those USTs, however, I'm not aware of a tool that would do that, which means you would need to write one yourself.
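For reference, the workflow looks roughly like this (the image and log file names are examples, &lt;PID&gt; stands for Outlook's process ID, and you should check the GFlags/UMDH documentation for the exact switches):
gflags /i outlook.exe +ust
umdh -p:&lt;PID&gt; -f:before.txt
umdh -p:&lt;PID&gt; -f:after.txt
umdh before.txt after.txt -f:diff.txt
The first command enables the UST database for outlook.exe, the next two take snapshots before and after the suspect operation, and the last one compares the two logs.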
The third approach is using the concept of "heap tagging". However, it's quite complex and also needs modifications in your code. I never implemented it, but you can look at the question How to benefit from Heap tagging by DLL?
Let's say the UST approach looks most feasible. How large should the UST database be?
Until now, 50 MB was sufficient for me to identify and fix memory leaks. However, for that use case it's not important to get information about all memory. It just needs enough samples to support a hypothesis. Those 50 MB are IMHO allocated in your application's memory, so it may affect the application.
The UST database only stores the addresses, not the call stack text. So in a 32 bit application, each frame on the call stack only needs 32 bit of storage.
In your case, 50 MB will not be sufficient. Considering an average depth of 10 frames and an average allocation size of 256 bytes (4 bytes for an int, but also larger things like strings), you get
4 GB / 256 bytes = 16M allocations
16M allocations * 10 frames * 4 byte/frame = 640 MB UST
If the given assumptions are realistic (I can't guarantee that), you would need a UST database of roughly 640 MB. This will affect your application considerably, since it reduces the usable memory from 4 GB to about 3.4 GB, so the OOM condition occurs earlier.
The UST information should also be available in the DMP file, if it was configured at the time the crash dump was created. It's certainly not in your DMP file, otherwise you would have told us. However, it's not available in a form that's useful for statistics. Using the UMDH text files is IMHO the better approach.
Is there a way for me to see exactly how much memory each Outlook add-in is using?
Not with the DMP file you have at the moment. It will still be hard with the tools available with WinDbg.
There are a few other options left:
Disable all plugins and measure the memory usage of Outlook itself. Then enable one plugin at a time and measure the memory with that plugin enabled. Calculate the difference to find out what additional memory that plugin needs.
Does it crash immediately at startup? Or later, say after 10 minutes of usage? Could it be a memory leak? Identifying a memory leak could be easier: just enable one plugin at a time and monitor memory usage over time. Use a memory profiler, not WinDbg; it will be much easier to use and it can draw the appropriate graphs you need.
Note that you need to define a clear process to measure memory. Some memory will only be allocated when you do something specific ("lazy initialization"). Perhaps you want to measure that memory, too.

GC in .NET 4.0 not affecting Working Set in task manager

OK, just to be clear, I understand that the Task Manager is never a good way to monitor memory consumption of a program. Now that I've cleared the air...
I've used SciTech's .Net memory profiler for a while now, but almost exclusively on the .Net 3.5 version of our app. Typically, I'd run an operation, collect a baseline snapshot, run the operation again and collect a comparison snapshot and attack leaks from there. Generally, the task manager would mimic the rise and fall of memory (within a range of accuracy and within a certain period of time).
In our .NET 4.0 app, our testing department reported a memory leak when performing a certain set of operations (which I'll call operation A). Within the profiler, I'd see a sizable change in live bytes (usually denoting a leak). If I immediately collect another snapshot, the memory is reclaimed (regardless of how long I waited to collect the initial comparison snapshot). I thought the finalizer might be getting stuck, so I manually injected the following calls:
GC.Collect();                  // collect unreferenced objects
GC.WaitForPendingFinalizers(); // let any pending finalizers run
GC.Collect();                  // collect objects released by those finalizers
When I do that, I don't see the initial leak in the profiler, but my working set in the Task Manager is still ridiculously high (operation A involves loading images). I thought the issue might be a stuck finalizer (and that SciTech was able to do some voodoo magic when the profiler collects its snapshots), but for all the hours I spent using WinDbg and MDbg, I couldn't ever find anything that suggested the finalizer was getting stuck. If I just let my app sit for hours, the memory in the working set would never reduce. However, if I proceeded to perform other operations, the memory would drop substantially at random points.
MY QUESTION
I know the GC changed substantially with CLR 4.0, but did it affect how the OS sees allocated memory? My computer has 12 GB RAM, so when I run my app and ramp up my memory usage, I still have TONS free so I'm hypothesizing that it just doesn't care to reduce what it's already allocated (as portrayed in the Task Manager), even if the memory has been "collected". When I run this same operation on a machine with 1GB RAM, I never get an out of memory exception, suggesting that I'm really not leaking memory (which is what the profiler also suggests).
The only reason I care what the Task Manager shows is that it's how our customers monitor our memory usage. If there is a change in the GC that would affect this, I just want to be able to show them documentation that says it's Microsoft's fault, not ours :)
In my due diligence, I've searched a bunch of other SO threads for an answer, but all I've found are articles about the concurrency of the generational cleanup and other unrelated, yet useful, facts.
You cannot expect to see changes in the use of managed heap immediately reflected in process memory. The CLR essentially acts as a memory manager on behalf of your application, so it allocates and frees segments as needed. As allocating and freeing segments is an expensive operation the CLR tries to optimize this. It will typically not free an empty segment immediately as it could be used to serve new managed allocations. If you want to monitor managed memory usage, you should rely on the .NET specific counters for this.
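A small sketch (class name and output format are illustrative) that prints both numbers side by side, which makes the divergence easy to observe:
using System;
using System.Diagnostics;

class MemoryReport
{
    static void Main()
    {
        // The managed heap as the CLR sees it (after forcing a full collection)...
        long managed = GC.GetTotalMemory(forceFullCollection: true);
        // ...versus the working set that the Task Manager column reports.
        long workingSet = Process.GetCurrentProcess().WorkingSet64;

        Console.WriteLine("Managed heap: {0} MB", managed / (1024 * 1024));
        Console.WriteLine("Working set:  {0} MB", workingSet / (1024 * 1024));
    }
}
The ".NET CLR Memory \ # Bytes in all Heaps" performance counter tracks the managed figure over time, which is the number worth showing to your customers rather than the Task Manager column.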

In a GC environment, when will Core Data release its allocated memory?

In my application, right now it seems that Core Data is busy allocating space in memory for different objects, however, it's never releasing that memory. The memory used by the application keeps growing the more it runs.
Is there a call to the Core Data context (or something else) that ensures all memory is cleaned up? When will Core Data release the allocated memory?
Thanks!
Even when Core Data has finished with an object (which might not be when you think), the garbage collector won't necessarily collect it straight away.
The garbage collector has two methods to trigger collection: collectIfNeeded and collectExhaustively. The former doesn't guarantee to collect right now and the latter will probably stop your application for a bit.
You can force Core Data to fault its objects. See Reducing Memory Overhead for details.

Memory management for intentionally memory-intensive applications

Note: I am aware of the question Memory management in memory intensive application; however, that question appears to be about applications that make frequent memory allocations, whereas my question is about applications intentionally designed to consume as much physical memory as is safe.
I have a server application that uses large amounts of memory in order to perform caching and other optimisations (think SQL Server). The application runs on a dedicated machine, and so can (and should) consume as much memory as it wants / is able to in order to speed up and increase throughput and response times without worry of impacting other applications on the system.
The trouble is that if memory usage is underestimated, or if load increases, it's possible to end up with nasty failures as memory allocations fail. In this situation, obviously the best thing to do is to free up memory in order to prevent the failure, at the expense of performance.
Some assumptions:
The application is running on a dedicated machine
The memory requirements of the application exceed the physical memory on the machine (that is, if additional memory was available to the application it would always be able to use that memory to in some way improve response times or throughput)
The memory is effectively managed in a way such that memory fragmentation is not an issue.
The application knows what memory can be safely freed, and what memory should be freed first for the least performance impact.
The app runs on a Windows machine
My question is - how should I handle memory allocations in such an application? In particular:
How can I predict whether or not a memory allocation will fail?
Should I leave a certain amount of memory free in order to ensure that core OS operations remain responsive (and don't in that way adversely impact the application's performance), and how can I find out how much memory that is?
The core objective is to prevent failures as a result of using too much memory, while at the same time using up as much memory as possible.
I'm a C# developer, however my hope is that the basic concepts for any such app are the same regardless of the language.
In Linux, the memory usage percentage is divided into the following levels:
0 - 30% - no swapping
30 - 60% - swap dirty pages only
60 - 90% - swap clean pages also based on LRU policy.
90%+ - invoke the OOM (out of memory) killer and kill the process consuming the most memory.
check this - http://linux-mm.org/OOM_Killer
I think Windows might have a similar policy, so you can check the memory stats and make sure you never reach the maximum threshold.
One way to stop consuming more memory is to go to sleep and give more time for memory cleanup threads to run.
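A small sketch of that idea on Windows, polling the standard "Memory \ Available MBytes" performance counter (the class name, 1 GB threshold, and 5-second interval are arbitrary):
using System;
using System.Diagnostics;
using System.Threading;

class MemoryWatcher
{
    static void Main()
    {
        using (var available = new PerformanceCounter("Memory", "Available MBytes"))
        {
            while (true)
            {
                float availableMb = available.NextValue();
                Console.WriteLine("Available physical memory: {0} MB", availableMb);
                if (availableMb < 1024)
                {
                    // Getting close to the threshold: trim caches, stop taking on
                    // new work, and give cleanup a chance to catch up.
                }
                Thread.Sleep(TimeSpan.FromSeconds(5));
            }
        }
    }
}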
That is a very good question, and bound to be subjective as well, because a fundamental aspect of C# is that all memory management is done by the runtime, i.e. the garbage collector. The garbage collector is a non-deterministic entity that manages and sweeps memory for reclaiming; depending on how often memory gets fragmented, the GC will kick in, so knowing in advance when that happens is not an easy thing to do.
Managing the memory properly sounds tedious, but common sense applies, such as using the using clause to ensure an object gets disposed. You could put in a single handler to trap OutOfMemoryException, but that is an awkward approach: if the program has run out of memory, does it just seize up and bomb out, or should it wait patiently for the GC to kick in? Again, determining that is tricky.
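On the prediction side, .NET provides System.Runtime.MemoryFailPoint, which lets you check up front whether an operation of a given expected size is likely to get the memory it needs, instead of trapping OutOfMemoryException after the fact. A minimal sketch (the class and method names and the 512 MB figure are illustrative):
using System;
using System.Runtime;

class CacheLoader
{
    static void LoadLargeBatch()
    {
        try
        {
            // Ask the CLR whether roughly 512 MB can plausibly be obtained
            // before starting the memory-hungry work.
            using (new MemoryFailPoint(512))
            {
                // ... perform the large operation here ...
            }
        }
        catch (InsufficientMemoryException)
        {
            // Degrade gracefully: trim caches or postpone the work instead of
            // failing later with an OutOfMemoryException.
        }
    }

    static void Main()
    {
        LoadLargeBatch();
    }
}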
The load of the system can adversely affect the GC's job, almost to the point of a denial of service where everything just grinds to a halt. Since the specifications of the machine and the nature of its workload are unknown, I cannot answer fully, but I'll assume it has loads of RAM.
In essence, while this is an excellent question, I think you should not worry about it and leave it to the .NET CLR to handle memory allocation and fragmentation, as it seems to do a pretty good job.
Hope this helps,
Best regards,
Tom.
Your question reminds me of an old discussion, "So what's wrong with 1975 programming?". The architect of Varnish Cache argues that instead of telling the OS to get out of the way and managing all memory yourself, you should rather cooperate with the OS and let it understand what you intend to do with memory.
For example, instead of simply reading data from disk, you should use memory-mapped files. This allows the OS to apply its LRU algorithm to write data back to disk when memory becomes scarce. At the same time, as long as there is enough memory, your data will stay in memory. Thus, your application may potentially use all memory without risking getting killed.
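A minimal sketch of that approach in C# (the file name, map name, and capacity are made up):
using System;
using System.IO;
using System.IO.MemoryMappedFiles;

class MappedCache
{
    static void Main()
    {
        // Map a data file instead of reading it into private memory; the OS keeps
        // hot pages resident and evicts cold ones when memory becomes scarce.
        using (var mmf = MemoryMappedFile.CreateFromFile(
                   "cache.dat", FileMode.OpenOrCreate, "cache", 64L * 1024 * 1024))
        using (var view = mmf.CreateViewAccessor())
        {
            view.Write(0, 42L);             // touching a page brings it into RAM
            long value = view.ReadInt64(0); // later reads are served from the page cache
            Console.WriteLine(value);
        }
    }
}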
