WinRT cache with cleanup on low memory

I have a WinRT application which uses many (sometimes large) images. The images are stored on disk, but it takes some time to load the images, which causes a visual hiccup. To fix that, I load the images beforehand and store them into a cache.
However, I'm a bit hesitant to keep an arbitrary number of images in memory, and would like to use a cache that is automatically cleaned when memory gets low.
How would I go about implementing this? On iOS there is a didReceiveMemoryWarning method, but I can't find an equivalent method for WinRT.

If you're using .NET, you could try holding weak references to the images in the cache: while an image is in use it won't be garbage collected, but once nothing else references it, it can be collected when memory pressure occurs. When retrieving an image from the cache, simply check whether the weak reference is still alive; if it isn't, reload the image from disk before returning it.
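A minimal sketch of that idea in C# (the type and member names are illustrative, and the loader delegate stands in for however you actually load an image from disk, e.g. creating a BitmapImage):

using System;
using System.Collections.Generic;

// Illustrative weak-reference cache: an entry stays alive only while something
// else in the app also references the image; otherwise the GC may reclaim it.
public class WeakImageCache<TImage> where TImage : class
{
    private readonly Dictionary<string, WeakReference<TImage>> _entries =
        new Dictionary<string, WeakReference<TImage>>();
    private readonly Func<string, TImage> _load; // e.g. loads the image file from disk

    public WeakImageCache(Func<string, TImage> load)
    {
        _load = load;
    }

    public TImage Get(string path)
    {
        WeakReference<TImage> weakRef;
        TImage image;

        // Reuse the cached image if the weak reference is still alive.
        if (_entries.TryGetValue(path, out weakRef) && weakRef.TryGetTarget(out image))
        {
            return image;
        }

        // Otherwise (re)load it and remember it via a new weak reference.
        image = _load(path);
        _entries[path] = new WeakReference<TImage>(image);
        return image;
    }
}

Note that a purely weak cache can be collected quite aggressively; in practice you might combine it with a small strongly-held most-recently-used list, but that's beyond this sketch.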

Related

CFSpreadSheet functions using up memory for large data sets

We have a ColdFusion application that runs a large query (up to 100k rows) and then displays it in HTML. The UI then offers an Export button that triggers writing the report to an Excel spreadsheet in .xlsx format using the cfspreadsheet tag and spreadsheet functions, in particular spreadsheetSetCellValue for building out row/column values, and spreadsheetFormatRow and spreadsheetFormatCell for formatting. The ssObj is then sent to the browser as a file attachment using:
<cfheader name="Content-Disposition" value="attachment; filename=OES_#sel_rtype#_#Dateformat(now(),'MMM-DD-YYYY')#.xlsx">
<cfcontent type="application/vnd.ms-excel" variable="#ssObj#" reset="true">
where ssObj is the spreadsheet object. We are seeing file sizes of about 5-10 MB.
However... the memory usage for creating this report and writing the file jumps up by about 1GB. The compounding problem is that the memory is not released by the Java GC right away after the export completes. When we have multiple users running and exporting this type of report, the memory keeps climbing until it reaches the allocated heap size and kills the server's performance, to the point that it brings down the server. A reboot is usually necessary to clear it out.
Is this normal/expected behavior or how should we be dealing with this issue? Is it possible to easily release the memory usage of this operation on demand after the export has completed, so that others running the report readily get access to the freed up space for their reports? Is this type of memory usage for a 5-10Mb file common with cfspreadsheet functions and writing the object out?
We have tried temporarily removing the expensive formatting functions and still the memory usage is large for the creation and writing of the .xlsx file. We have also tried using the spreadsheetAddRows approach and the cfspreadsheet action="write" query="queryname" tag passing in a query object but this too took up a lot of memory.
Why are these functions so memory hoggish? What is the optimal way to generate Excel SS files without this out of memory issue?
I should add that the server is running in an Apache/Tomcat container on Windows and we are using CF2016.
How much memory do you have allocated to your CF instance?
How many instances are you running?
Why are you allowing anyone to view 100k records in HTML?
Why are you allowing anyone to export that much data on the fly?
We had issues of this sort (CF and memory) at my last job. Large file uploads consumed memory, large excel exports consumed memory, it's just going to happen. As your application's user base grows, you'll hit a point where these memory hogging requests kill the site for other users.
Start with your memory settings. You might get a boost across the board by doubling or tripling what the app is allotted. Also, make sure you're on the latest version of the supported JDK for your version of CF. That can make a huge difference too.
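For example (the values here are purely illustrative, check your own configuration before changing anything), the heap limits are controlled by the -Xms/-Xmx arguments on the java.args line of ColdFusion's jvm.config:

java.args=-server -Xms2048m -Xmx4096m -XX:MaxMetaspaceSize=512m

Doubling -Xmx only helps if the machine actually has the physical memory to back it, so size it against the host, not just the error messages.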
Large file uploads would impact the performance of the instance making the request. This meant that others on the same instance doing normal requests were waiting for those resources needlessly. We dedicated a pool of instances to only handle file uploads. Specific URLs were routed to these instances via a load balancer and the application was much happier for it.
That app also handled an insane amount of data and users constantly wanted "all of it". We had to limit search results and certain data sets to reduce the amount shown on screen. The DB was quite happy with that decision. Data exports were moved to a queue so they could craft those large Excel files outside of normal page requests. Maybe users got their data immediately, maybe they waited a while for a notification. Either way, the application performed better across the board.
Presumably a bit late for the OP, but since I ended up here others might too. Whilst there is plenty of sound general advice about memory in the other answer and comments here, I suspect the OP was actually hitting a genuine memory leak bug that has been reported in the CF spreadsheet functions from CF11 through to CF2018.
When generating a spreadsheet object and serving it up with cfheader+cfcontent without writing it to disk, even with careful variable scoping, the memory never gets garbage collected. So if your app runs enough Excel exports using this method then it eventually maxes out memory and then maxes out CPU indefinitely, requiring a CF restart.
See https://tracker.adobe.com/#/view/CF-4199829 - I don't know if he's on SO but credit to Trevor Cotton for the bug report and this workaround:
Write the spreadsheet to a temporary file,
read the spreadsheet from the temporary file back into memory,
delete the temporary file,
stream the spreadsheet from memory to the user's browser.
So given a spreadsheet object that was created in memory with spreadsheetNew() and never written to disk, then this causes a memory leak:
<cfheader name="Content-disposition" value="attachment;filename=#arguments.fileName#" />
<cfcontent type="application/vnd.ms-excel" variable = "#SpreadsheetReadBinary(arguments.theSheet)#" />
...but this does not:
<cfset local.tempFilePath = getTempDirectory()&CreateUUID()&arguments.filename />
<cfset spreadsheetWrite(arguments.theSheet, local.tempFilePath, "", true) />
<cfset local.theSheet = spreadsheetRead(local.tempFilePath) />
<cffile action="delete" file="#local.tempFilePath#" />
<cfheader name="Content-disposition" value="attachment;filename=#arguments.fileName#" />
<cfcontent type="application/vnd.ms-excel" variable = "#SpreadsheetReadBinary(local.theSheet)#" />
It shouldn't be necessary, but Adobe don't appear to be in a hurry to fix this, and I've verified that this works for me in CF2016.

How do I find the memory leak in my application?

I have written a Windows service using .NET technologies. I am using the JetBrains dotMemory tool to understand the memory leak.
I am getting the report below, but as a newbie I am not sure how to read it.
The System namespace is showing the most survived bytes. But how do I know which code is the root cause of the memory leak?
First, you should decide which kind of memory issue you are trying to find:
Constantly growing memory consumption - take a base snapshot, take another after memory consumption has increased, open the snapshot comparison, open the new objects created after the first snapshot, and look at them to work out which of them should have been collected.
Ensure that some key object doesn't leak - put your app into a state where a particular object should no longer be in memory (e.g. close some view), take a snapshot, and use a filter on the "Group by type" view to confirm that the object is not present in memory.
Memory traffic - take a base snapshot if needed, run the action/algorithm in your app that you want to check, and take another snapshot. Open the "Memory Traffic" view and check whether it matches what you implemented, or whether more objects than you expected were allocated during the action.
Grab this free book for other possible memory issues.
P.S. Only you, as the app's author, can answer the question of whether this is a problem or whether it is as designed.
You should look at the survived bytes / retained bytes, which will point you to the base instance or root object of the allocation. Whether a given object in memory should be retained or not depends on your application's design and implementation.
Once you identify the root object, try to break the link to it so that the .NET garbage collector can automatically collect the unwanted objects.
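As a hedged illustration of what "breaking the link" can mean in practice (ReportCache is an invented name): a long-lived root such as a static collection keeps everything it references alive until you remove the reference:

using System;
using System.Collections.Generic;

static class ReportCache
{
    // A static collection is a GC root: anything it references survives every collection.
    public static readonly List<byte[]> Buffers = new List<byte[]>();
}

class Program
{
    static void Main()
    {
        // This buffer survives as long as the static list references it.
        ReportCache.Buffers.Add(new byte[10 * 1024 * 1024]);

        // Breaking the link: once the reference is removed, the GC is free to reclaim the memory.
        ReportCache.Buffers.Clear();
    }
}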
There are no fixed flag points that identify memory leaks.
Using ANTS Memory Profiler
Using WinDbg
One source of memory leaks is event handlers that are never unsubscribed.
Example:
myClass.DoSomething += Event_DoSomething
You need to make sure the handlers are cleared, like below:
myClass.DoSomething -= Event_DoSomething
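A slightly fuller sketch of the pattern (MyClass and Listener are invented names for illustration): subscribe when the listener is created and unsubscribe when it is disposed, so a long-lived publisher doesn't keep the listener alive:

using System;

class MyClass
{
    public event EventHandler DoSomething;

    public void RaiseDoSomething()
    {
        var handler = DoSomething;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

class Listener : IDisposable
{
    private readonly MyClass myClass;

    public Listener(MyClass source)
    {
        myClass = source;
        myClass.DoSomething += Event_DoSomething; // myClass now holds a reference to this listener
    }

    private void Event_DoSomething(object sender, EventArgs e)
    {
        // handle the event
    }

    public void Dispose()
    {
        // Unsubscribing removes the reference from myClass to this listener,
        // so a long-lived myClass no longer keeps the listener alive.
        myClass.DoSomething -= Event_DoSomething;
    }
}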

Is there a Notification when CoreData finished reading data from disk?

I have a Mac (not document-based) app that uses CoreData.
When launching the app, it reads the data stored on the filesystem.
I have to setup some things in -(void)applicationDidFinishLaunching based on the information stored using CoreData.
So it would be nice to know when my app read everything from disk.
If I do my setup in -(void)applicationDidFinishLaunching it doesn't work. If I do it a few seconds later it works!
Thx!
If you are using object controllers that automatically prepare their own content, you can observe arrangedObjects to find out when they have fetched their content. This does not guarantee that the actual objects are not faults. In fact, that's one of the main strengths of Core Data: objects are lazily loaded from disk.
If you for some reason want to make sure that most disk activity has taken place in applicationDidFinishLaunching, you can perform a custom fetch that specifically does not return objects as faults. Look up "prefetching" in the Core Data documentation. However, there is no guarantee that Core Data won't fault these objects at a later time due to memory constraints, thereby incurring another disk read when those objects are loaded again.
You can of course also use the NSBinaryStoreType, in which case the entire store is loaded into memory synchronously when it is added to the persistent store coordinator.

Cocoa Memory Usage

I'm trying to track down some peculiar memory behavior in my Cocoa desktop app. My app does a lot of image processing using NSImage and uploads those images to a website over HTTP using NSURLConnection.
After uploading several hundred images (some very large), when I run Instruments I get no leaks. I've also run through MallocDebug and get no leaks. When I dig into object allocations using Instruments I get output like this:
GeneralBlock-9437184, Net Bytes 9437184, # Net 1
GeneralBlock-192512, Net Bytes 2695168, # Net 14
and so on, for smaller sizes. When I look at these in detail, they're marked as being owned by 'Foundation' and created via NSConcreteMutableData initWithCapacity. During HTTP upload I'm creating a post body using NSMutableData, so I'm guessing these are buffers Cocoa is caching for me when I create the NSMutableData objects.
Is there a way to force Cocoa to free these? I'm 90% positive I'm releasing correctly (and Instruments and MallocDebug seem to confirm this), but I think Cocoa is keeping these around for perf reasons since I'm allocating so many NSMutableData buffers.
If you're certain you're releasing the objects you own correctly, then there's really nothing you can (or should) do. Those blocks are, as Instruments says, owned by Foundation because NSConcreteMutableData, a Foundation object, created them. It's possible that these are some sort of cache that NSData is keeping around on purpose, but there's no way to know what they are.
If you believe this is a bug, you should report it at http://bugreport.apple.com. The rules of memory ownership apply to classes that don't manage memory well, too.
Also, this might be a silly question, but which option are you using for the Object Alloc tool? All objects created or Created and still living? You might be looking at allocations that don't matter anymore.

What is the Oracle KGL SIMULATOR?

What is this thing called a KGL SIMULATOR and how can its memory utilisation be managed by application developers?
The background to the question is that I'm occasionally getting errors like the following, and I would like to get a general understanding of what is using this heap space.
ORA-04031: unable to allocate 4032 bytes of shared memory ("shared pool","select text from view$ where...","sga heap(3,0)","kglsim heap")
I've read forum posts through Google suggesting that the kglsim is related to the KGL SIMULATOR, but there is no definition of that component, or any tips for developers.
KGL = Kernel General Library cache manager; as the name says, it deals with library objects such as cursors and cached stored object definitions (PL/SQL stored procs, table definitions, etc.).
The KGL simulator is used for estimating the benefit of caching if the cache were larger than it currently is. The general idea is that when a library cache object is flushed out, its hash value (and a few other bits of info) is still kept in the KGL simulator hash table. This gives a history of objects which were in memory but have been flushed out.
When loading a library cache object (which means that no existing such object is in the library cache), Oracle checks the KGL simulator hash table to see whether an object with a matching hash value is in there. If a matching object is found, it means the required object had been in the cache in the past but was flushed out due to space pressure.
Using that information about how many library cache object (re)loads could have been avoided if the cache had been bigger (thanks to the KGL simulator history), and knowing how much time the object reloads took, Oracle can predict how much response time would have been saved instance-wide if the shared pool were bigger. This is exposed through v$library_cache_advice.
Anyway, this error was probably raised by a victim session after running out of shared pool space. In other words, someone else may have used up all the memory (or all the large enough chunks), and this allocation for the KGL simulator failed because of that.
v$sgastat would be the first place to look when troubleshooting ORA-4031 errors; you need to identify how much free memory you have in the shared pool (and who's using up most of the memory).
--
Tanel Poder
http://blog.tanelpoder.com
I've found that KGL stands for "Kernel Generic Library".
Your issue could be a memory leak within Oracle. You probably should open a case with Oracle support.
