CFSpreadSheet functions using up memory for large data sets - performance

We have a ColdFusion application that runs a large query (up to 100k rows) and then displays it in HTML. The UI then offers an Export button that writes the report to an Excel spreadsheet in .xlsx format using the cfspreadsheet tag and the spreadsheet functions: in particular, spreadsheetSetCellValue for building out row/column values, and spreadsheetFormatRow and spreadsheetFormatCell for formatting. The ssObj is then streamed to the browser using:
<cfheader name="Content-Disposition" value="attachment; filename=OES_#sel_rtype#_#DateFormat(now(),'MMM-DD-YYYY')#.xlsx">
<cfcontent type="application/vnd.ms-excel" variable="#ssObj#" reset="true">
where ssObj is the spreadsheet object. We are seeing file sizes of about 5-10 MB.
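For context, the generation step looks roughly like this (a minimal sketch, not our exact code; qryReport and the loop details are placeholders):
<cfscript>
// Build the spreadsheet cell by cell (sketch only)
ssObj = spreadsheetNew("Report", true); // true = .xlsx format
cols = listToArray(qryReport.columnList);
for (r = 1; r <= qryReport.recordCount; r++) {
    for (c = 1; c <= arrayLen(cols); c++) {
        spreadsheetSetCellValue(ssObj, qryReport[cols[c]][r], r, c);
    }
    spreadsheetFormatRow(ssObj, {bold = false}, r); // the kind of formatting we later tried removing
}
</cfscript>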
However... the memory usage for creating this report and writing the file jumps by about 1 GB, and the compounding problem is that the Java GC does not release that memory right away after the export completes. When we have multiple users running and exporting this type of report, the memory keeps climbing until it reaches the allocated heap size and kills the server's performance to the point that it brings down the server. A reboot is usually necessary to clear it out.
Is this normal/expected behavior, or how should we be dealing with this issue? Is it possible to release the memory used by this operation on demand after the export has completed, so that others running the report readily get access to the freed-up space for their reports? Is this kind of memory usage common for a 5-10 MB file with the cfspreadsheet functions and writing the object out?
We have tried temporarily removing the expensive formatting functions, and the memory usage for creating and writing the .xlsx file is still large. We have also tried the spreadsheetAddRows approach and the cfspreadsheet action="write" tag with a query object passed in, but these too took up a lot of memory.
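For reference, the alternatives we tried look like this (a sketch; the query and file names are placeholders):
<!--- Attempt 1: bulk-add the whole query in one call --->
<cfset ssObj = spreadsheetNew("Report", true)>
<cfset spreadsheetAddRows(ssObj, qryReport)>
<!--- Attempt 2: have cfspreadsheet write the query straight to a file --->
<cfspreadsheet action="write" query="qryReport" filename="#getTempDirectory()#report.xlsx" overwrite="true">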
Why are these functions such memory hogs? What is the optimal way to generate Excel spreadsheet files without this out-of-memory issue?
I should add that the server is running in an Apache/Tomcat container on Windows and we are using CF2016.

How much memory do you have allocated to your CF instance?
How many instances are you running?
Why are you allowing anyone to view 100k records in HTML?
Why are you allowing anyone to export that much data on the fly?
We had issues of this sort (CF and memory) at my last job. Large file uploads consumed memory, large Excel exports consumed memory; it's just going to happen. As your application's user base grows, you'll hit a point where these memory-hogging requests kill the site for other users.
Start with your memory settings. You might get a boost across the board by doubling or tripling what the app is allotted. Also, make sure you're on the latest version of the supported JDK for your version of CF. That can make a huge difference too.
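For example, on CF2016 the JVM arguments live in the jvm.config file under the instance's bin directory; bumping the heap looks something like this (values purely illustrative - tune them to your hardware):
java.args=-server -Xms2048m -Xmx4096m -XX:MaxMetaspaceSize=512m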
Large file uploads would impact the performance of the instance making the request. This meant that others on the same instance doing normal requests were waiting for those resources needlessly. We dedicated a pool of instances to only handle file uploads. Specific URLs were routed to these instances via a load balancer and the application was much happier for it.
That app also handled an insane amount of data, and users constantly wanted "all of it". We had to cap search results and certain data sets to reduce the amount shown on screen; the DB was quite happy with that decision. Data exports were moved to a queue so those large Excel files could be crafted outside of normal page requests. Maybe users got their data immediately, maybe they waited a while for a notification. Either way, the application performed better across the board.
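In CF terms, that queueing pattern can be as simple as recording the request during the page hit and letting a scheduled task do the heavy lifting (a rough sketch; the datasource, table, and file names are hypothetical):
<!--- In the page request: just record that an export was requested --->
<cfquery datasource="app">
    INSERT INTO export_queue (user_id, report_type, requested_at)
    VALUES (
        <cfqueryparam value="#session.userId#" cfsqltype="cf_sql_integer">,
        <cfqueryparam value="#form.reportType#" cfsqltype="cf_sql_varchar">,
        <cfqueryparam value="#now()#" cfsqltype="cf_sql_timestamp">
    )
</cfquery>
<!--- In a scheduled task: run the query, write the file to disk, then notify the user --->
<cfspreadsheet action="write" query="qryReport" filename="#expandPath('/exports/report_#queueId#.xlsx')#" overwrite="true">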

Presumably a bit late for the OP, but since I ended up here, others might too. While there is plenty of sound general memory advice in the other answer and comments here, I suspect the OP was actually hitting a genuine memory-leak bug that has been reported in the CF spreadsheet functions from CF11 through CF2018.
When generating a spreadsheet object and serving it up with cfheader+cfcontent without writing it to disk, the memory never gets garbage collected, even with careful variable scoping. So if your app runs enough Excel exports using this method, it eventually maxes out memory and then maxes out CPU indefinitely, requiring a CF restart.
See https://tracker.adobe.com/#/view/CF-4199829 - I don't know if he's on SO but credit to Trevor Cotton for the bug report and this workaround:
Write the spreadsheet to a temporary file,
read the spreadsheet from the temporary file back into memory,
delete the temporary file,
stream the spreadsheet from memory to the user's browser.
So given a spreadsheet object that was created in memory with spreadsheetNew() and never written to disk, then this causes a memory leak:
<cfheader name="Content-disposition" value="attachment;filename=#arguments.fileName#" />
<cfcontent type="application/vnd.ms-excel" variable = "#SpreadsheetReadBinary(arguments.theSheet)#" />
...but this does not:
<cfset local.tempFilePath = getTempDirectory()&CreateUUID()&arguments.filename />
<cfset spreadsheetWrite(arguments.theSheet, local.tempFilePath, "", true) />
<cfset local.theSheet = spreadsheetRead(local.tempFilePath) />
<cffile action="delete" file="#local.tempFilePath#" />
<cfheader name="Content-disposition" value="attachment;filename=#arguments.fileName#" />
<cfcontent type="application/vnd.ms-excel" variable = "#SpreadsheetReadBinary(local.theSheet)#" />
It shouldn't be necessary, but Adobe don't appear to be in a hurry to fix this, and I've verified that this works for me in CF2016.

Related

Does calling `writev` repeatedly with the same memory address allow hardware caching?

I've read some performance claims about how Elixir and Erlang use hardware, and I'm trying to see if I understand their basis. Some background:
First, Erlang supports writing nested lists of immutable strings (iolists) to IO (files, sockets, etc) and uses writev and the strings' memory addresses to do so (see Evan Miller's blog post on this).
Second, the docs for an Erlang web framework called Chicago Boss say:
Erlang Respects Your RAM!
Erlang is different from other platforms because when rendering a server-side template, it doesn't create a separate copy of a web page in memory for each connected client. Instead, it constructs pointers to the same pieces of immutable memory across multiple requests.
So if two people request two different profile pages at the same time, they're actually sent the same chunks of memory for the header, footer, and other shared template snippets. The result is a server that can construct complex, uncached web pages for hundreds of users per second without breaking a sweat.
Third, a book about an Elixir (Erlang VM) web framework called Phoenix says:
Templates are precompiled. Phoenix doesn’t need to copy strings for each rendered template. At the hardware level, you’ll see caching come into play for these strings where it never did before.
From looking at the source, I know that this framework uses iolists to represent a completed response template.
Putting all this together, I think what's being implied is that if a web framework uses writev to tell the OS to send the same header and footer strings from the same memory locations, one web request after another, the hardware will be able to say "oh, I know that value, it's already in CPU cache so I don't have to look in RAM for it."
Is that right? (I have very little understanding of system calls and hardware.) If not, any ideas on how hardware caching is involved?
(Bonus if you can tell me how to see or infer what's happening.)
Yes, it's mostly the processor caches that help you. The time needed to retrieve the data is smaller because it's in faster memory (i.e., the CPU caches).
Some pointers for understanding what the caches are and how they work:
https://www.quora.com/How-does-the-cache-memory-in-a-computer-work
http://www.hardwaresecrets.com/how-the-cache-memory-works/
http://lwn.net/Articles/252125/
To see this, measure how long a request takes (client side) during normal server operation. Then have a separate process within the same VM constantly create and write to disk a very large string (it probably has to be megabytes in size - at least the size of your processor's L2/L3 caches). Re-measure how long the request takes - if done correctly, it should be at least an order of magnitude slower.

ColdFusion/Railo: What's the most efficient way to output file contents - fileRead or include?

While I've always cached database calls and placed commonly used data into memory for faster access, I've been finding of late that simple processing and output of data can add a significant amount of time to page load, and thus I've been working on a template caching component that will save parsed HTML to either a file or memory for quicker inclusion on pages.
This is all working very well, reducing some page loads down to 10% of the uncached equivalent - however I find myself wondering what would be the most efficient way to output the content.
Currently I'm using fileRead to pull in the parsed HTML and save to a variable, which is output on the page.
This seems very fast, but I'm noticing the memory used by the Tomcat service gradually increasing - presumably because the fileRead operation is reading the contents into memory, and quite possibly Tomcat isn't removing that data when it's finished.
(Side question: does anyone know a way that I can interrogate the JVM memory and find details/stack traces of the objects that CF has created?)
Alternatively, I could use cfinclude to simply include the parsed HTML file. From all the information I can find it seems that the speed would be about the same - so would this method be more memory efficient?
I've had issues on the server before with memory usage crashing Tomcat, so keeping it down is quite important.
Is there anyone doing something similar that can give me the benefit of their experience?
cfinclude just includes the template into the one being compiled, whereas fileRead has to read it into memory first and then output it, so technically it is going to consume more memory. I don't expect the speed difference to be much, but you can see the difference by just turning on debugging and checking the execution times.
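For reference, the two approaches being compared (the path is a placeholder):
<!--- Approach 1: fileRead into a variable, then output it --->
<cfset html = fileRead(expandPath("/cache/page.html"))>
<cfoutput>#html#</cfoutput>
<!--- Approach 2: include the parsed HTML directly --->
<cfinclude template="/cache/page.html">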
The most efficient way would be to cache it with cachePut() and serve it from cacheGet(). What can be faster than fetching from RAM? Or don't fetch it at all: use proper Expires headers if it's the whole page, or smartly return 304 Not Modified.
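Something along these lines (a sketch; the cache key and timespan are arbitrary):
<cfset cached = cacheGet("page_home")>
<cfif isNull(cached)>
    <cfset cached = fileRead(expandPath("/cache/page.html"))>
    <!--- cache for one hour --->
    <cfset cachePut("page_home", cached, createTimespan(0, 1, 0, 0))>
</cfif>
<cfoutput>#cached#</cfoutput>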
It turns out that CFInclude actually compiles the (already rendered, in this case) content into a class, which itself has overhead. The classes aren't unloaded (according to CFTracker), and as such, too many of these can cause PermGen errors. FileRead() seems to be orders of magnitude more efficient, as all we're doing is inserting content into the output buffer.

Clearing and freeing memory

I am developing a Windows application using C# .NET. It is in fact a plug-in that is installed into a DBMS. The purpose of this plug-in is to read all the records (a record is an object) in the DBMS matching the provided criteria and transfer them across to my local file system as XML files. My problem relates to memory usage. Everything is working fine, but each time I read a record it occupies memory, and after a certain limit the plug-in stops working because it runs out of memory.
I am dealing with around 10k-20k records (objects). Are there any memory-related methods in C# to clear the memory for each record as soon as it has been written to the XML file? I tried all the basic memory handling methods like clear(), flush(), gc(), and finalize(), but to no avail.
Please consider the following:
A record is an object; I cannot change this and use other, more efficient data structures.
Each time I read a record I write it to XML, and repeat this again and again.
C# is a garbage collected language. Therefore, to reclaim memory used by an object, you need to make sure all references to that object are removed so that it is eligible for collection. Specifically, this means you should remove the objects from any data structures that are holding references to them after you're done doing whatever you need to do with them.
If you get a little more specific about what type of data structures you're using we can probably give a more specific answer.

dojo.io.script.get caching

I'm loading data using dojo.io.script.get. The size of each request can be big, and I need to issue lots of them.
The question is: after the data is loaded and later dismissed, is it cached by the browser?
In other words, say I load some data whose content is "myFunc('blah blah blah')". It will execute the myFunc function. What happens to the browser memory after execution? If I load it 100 times and the size of each string within myFunc is, say, 1GB, will the browser run out of memory?
One of the things I have learned about Dojo is that the source code is a great reference.
My quick inspection of dojo/io/script.js shows that there is some logic for marking scripts as dead and destroying script tags, so I guess it should protect against the memory leaks you mention. (Of course, you should always test this kind of stuff yourself, just to be sure.)

How do I load the Oracle schema into memory instead of the hard drive?

I have a certain web application that makes upwards of ~100k updates to an Oracle database in succession. This can take anywhere from 3-5 minutes, which sometimes causes the webpage to time out. A re-design of the application is scheduled soon, but someone told me that there is a way to configure a "loader file" which loads the schema into memory and runs the transactions there instead of on the hard drive, supposedly improving speed by several orders of magnitude. I have tried to research this "loader file", but all I can find is information about SQL*Loader, the bulk data loader. Does anyone know what he's talking about? Is this really possible, and is it a feasible quick fix, or should I just wait until the application is re-designed?
Oracle already does its work in memory - disk I/O is managed behind the scenes. Frequently accessed data stays in memory in the buffer cache. Perhaps your informant was referring to "pinning" an object in memory, but that's really not effective in modern releases of Oracle (since V8), particularly for table data. Let Oracle do its job - it's actually very good at it (probably better than we are). Face it - 100k updates is going to take a while.
