Memory usage of my single-page application increases over time - performance

We have a single-page application which runs well at first but slows down sharply over time. I am trying to investigate the root cause.
I used Chrome DevTools to record the Timeline for the initial page load and a typical user operation. The JS Heap chart shows that memory usage is fine: it rises and falls periodically (presumably because of garbage collection by the browser).
However, when I check the Chrome Task Manager, I find that my page uses 60 MB of memory initially, but after an hour (and some user operations) it grows to 160 MB, while the JavaScript Memory column stays stable. I later observed that the memory usage never goes down.
My guess is that there is some memory leak in our JavaScript code, but the JS Heap looks fine. Does Chrome hold on to that memory and perhaps release it later (when, say, another process needs more memory)?
Here is the Timeline recorded while I was operating the page:
I googled but could not find an explanation for this. Could anybody help? Thanks.

It is caused by an interval that is never cleared: it keeps calling a function too frequently, and everything the callback references stays alive.
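The fix is to keep the interval handle and clear it on teardown. A minimal sketch of the pattern (createPoller and all its names are illustrative, not taken from the original code):

```javascript
// Illustrative sketch: an interval that is never cleared keeps its callback --
// and everything the callback closes over (DOM nodes, large arrays, ...) --
// reachable forever, so the browser can never reclaim that memory.
function createPoller(fn, intervalMs) {
  let active = true;
  const id = setInterval(fn, intervalMs);
  return {
    get active() { return active; },
    // Must be called when the view/component is torn down;
    // forgetting this call is exactly the leak described above.
    stop() {
      clearInterval(id);
      active = false;
    },
  };
}
```

Calling stop() releases the timer so the callback's closure becomes garbage-collectable; forgetting it matches the symptom in the question, where memory only ever grows.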

Related

Can many (similar) processes use a common RAM cache?

As I understand process creation, every process has its own space in RAM for its heap, data, etc., which is allocated upon its creation. Many processes can share their data and storage space in some ways. But since terminating a process erases its allocated memory (and so also its caches), I was wondering whether many (similar) processes can share a cache in memory that is not allocated to any specific process, so that it can be used even when those processes are terminated and other ones are created.
This is a theoretical question from a student's perspective, so I am merely interested in the general sense of an operating system, without adding extra functionality to it to achieve this.
For example, I am thinking of a web server that uses only single-threaded processes (perhaps due to a lack of multi-threading support), so that most of the processes created do similar jobs, like retrieving a certain page.
There are at least four ways in which what you describe can occur.
First, the system address space is shared by all processes. The operating system can save data there that survives the death of a process.
Second, processes can map logical pages to the same physical page frame. The termination of one process does not cause the page frame to be deallocated from the other processes.
Third, some operating systems support writable shared libraries.
Fourth, memory-mapped files.
There are probably others as well.
I think so: when a process is terminated, its RAM is cleared. However, you're right that things such as web pages are stored in the cache for when they are re-called. For example:
You open Google, then go to another tab and close the open Google page; when you next go to Google, it loads faster.
However, what I think you're asking is whether, if the entire program (e.g. Google Chrome or Safari) is closed, the web page you just had open stays in the cache. No: when the program is closed, all its related data is also discarded in order to fully close the program.
I guess this page has some info on it -
https://www.wikipedia.org/wiki/Shared_memory

Strategy for asynchronously flushing view of file to disk

I am writing an application that maps a file into memory to make some information resilient to failures (crash, power outage, etc). I know that the idea is to flush as infrequently as allowable, but to Do Things Right, and considering the goal, it seems to me that I should essentially flush to disk whenever the data has changed.
All the mapped data fits into a single page. I have a burst usage pattern (nothing happens for a looong time, then all of a sudden you'd modify the information ~20 times in a row). For this reason, I'm hesitant about FlushViewOfFile, since it seems to be synchronous. Flushing at every hit on a burst would seem to be inefficient.
Is there not a way I can tell Windows to flush pages the next time it has an idle cycle, and without having me wait until it does it?
I do not believe there is a function in Windows for that; FlushViewOfFile is what you have to work with. You're going to have to design a 'scheduler' for your program that matches your usage profile. Something like this would probably work nicely: start a short timer after each hit, resetting it if another hit arrives and flushing the page when it expires; plus one long timer that flushes the page when it expires even if a burst is still in progress. In any case, you'll need to profile what the usage will be and have the program act accordingly.

How do you see the specific methods responsible for memory allocation in XCode Instruments?

I've been asked to try to reduce the memory usage of an app's code I have been given. The app runs fine in the simulator, but on the device it is terminated: when debugging, it enters a 'Paused' state and the app closes on the device.
When running Instruments I discovered leaks and fixed them; however, there is a large amount of allocation going on. Within a few seconds of launch, the Instruments allocation trace shows 1,021 KB for 'Malloc 16 Bytes'. This is essentially useless information as is; I need to see where the memory is being allocated, but I can't seem to find anything useful. All I can get from a deeper inspection is that 'dyld', 'libsystem_c.dylib', 'libCGFreetype.A.dylib', etc. are allocating a lot, but the responsible caller is never a recognizable method from the app source.
How can I see what methods are causing the most allocations here? I need to get this usage down! Thank you
Opening the extended detail view will show the call stack for memory allocations. Choose View > Extended Detail to open the extended detail view.
Switching to the call tree view will help you find where you are allocating memory in your code. Use the jump bar to switch to the call tree view.
1MB is no big deal. You can't do much in terms of throwing up a full view without using 1MB.
There's a good video from WWDC 2010 (http://developer.apple.com/videos/wwdc/2010/) that covers using instruments to analyze memory use. Title is Advanced Memory Analysis with Instruments. There may be an updated video from 2011.

Programatically read program's page fault count on Windows

I'd like my Windows C++ program to be able to read the number of hard page faults it has caused. The program isn't running as administrator. Edited to add: to be clear, I'm not so much interested in the aggregate page-fault count of the whole system.
It looks like ETW might export counters for this, but I'm having a lot of difficulty figuring out the API, and it's not clear what's accessible by regular users as compared to administrators.
Does anyone have an example of this functionality lying around? Or is it simply not possible on Windows?
(Off-topic, but isn't it sad how much easier this is on *nix? getrusage() and you're done.)
As far as I can tell, the only way to do this is to use ETW (Event Tracing for Windows) to monitor kernel Hard Page Fault events. The event payload has a thread ID that you might be able to correlate with an existing process (this is going to be non-trivial, by the way) to produce a running per-process count. I don't see any way to get historical per-process information.
I can guarantee you that this is A Hard Problem because Process Explorer supports only Page Faults (soft or hard) in its per-process display.
http://msdn.microsoft.com/en-us/magazine/ee412263.aspx
A page fault occurs when a sought-out page table entry is invalid. If the requested page needs to be brought in from disk, it is called a hard page fault (a very expensive operation), and all other types are considered soft page faults (a less expensive operation). A Page Fault event payload contains the virtual memory address for which a page fault happened and the instruction pointer that caused it. A hard page fault requires disk access to occur, which could be the first access to contents in a file or accesses to memory blocks that were paged out. Enabling Page Fault events causes a hard page fault to be logged as a page fault with a type Hard Page Fault. However, a hard fault typically has a considerably larger impact on performance, so a separate event is available just for a hard fault that can be enabled independently. A Hard Fault event payload has more data, such as file key, offset and thread ID, compared with a Page Fault event.
I think you can use GetProcessMemoryInfo() - see http://msdn.microsoft.com/en-us/library/ms683219(v=vs.85).aspx for more information.
Yes, quite sad. Or you could just not assume that Windows fails to provide a page-fault counter, and look it up: Win32_PerfFormattedData_PerfOS_Memory.
There is a C/C++ sample on Microsoft's site that explain how to read performance counters: INFO: PDH Sample Code to Enumerate Performance Counters and Instances
You can copy/paste it, and I think you're interested in the "Memory" / "Page Reads/sec" counters, as stated in this interesting article: The Basics of Page Faults
This is done with performance counters in windows. It's been a while since I've done anything with them. I don't recall whether or not you need to run as administrator to query them.
[Edit]
I don't have example code to provide but according to this page, you can get this information for a particular process:
Process : Page Faults/sec. This is an indication of the number of page faults that occurred due to requests from this particular process. Excessive page faults from a particular process are usually an indication of bad coding practices. Either the functions and DLLs are not organized correctly, or the data set that the application is using is being called in a less than efficient manner.
I don't think you need administrative credentials to enumerate the performance counters. There is a sample at CodeProject: Performance Counters Enumerator

Help understanding Windows memory - "Working Set"

I've been tracking down a few memory leaks in my application. It's been a real pain, but I've finally tightened everything up. However, there's one bit of Windows memory management that is confusing me. Here is a printout of the app's memory usage over time...
Time      PrivateMemorySize64  WorkingSet64
20:00:36  47480                50144
20:01:06  47480                50144
20:01:36  47480                50144
20:02:06  47480                149540
20:02:36  47480                149540
20:03:06  47480                149540
The working set jumped from 49 MB to 146 MB over a span of 30 seconds. This happened overnight, while the application was basically doing nothing.
The working set (which is what Task Manager shows me) can apparently be influenced by other applications, such as debuggers (as I learned while looking for memory leaks). Even after reading the documentation on what the working set is, I still don't have a good understanding of it.
Any help is appreciated.
Update: Thanks to some links from responders as well as some additional searching, I have a better understanding on how a separate process can cause my process' working set to grow. Good to know that a spike in the working set is not necessarily an indication that your app is leaking... Further reason to not rely on Task Manager for memory evaluation :)
Helpful links:
A few words on memory usage or: working set vs. private working set
CyberNotes: Windows Memory Usage Explained
Simply put, the working set is the collection of memory pages currently owned by your process and not swapped out (i.e. in RAM). That is somewhat inaccurate, however; reality is a lot more complicated.
Windows maintains a minimum working set size and a maximum working set size for every process. The minimum working set is easy: it is what Windows will grant to every process (as long as it can, within physical limits).
The maximum working set is more dubious. If your program uses more memory than will fit in its quota, Windows will drop some pages. However, while they are no longer in your working set, these pages are not necessarily "gone".
Rather, those pages are removed from your working set and moved to the pool of available pages. As a consequence, if some other program needs more memory and no cleared pages are left over, your pages will be cleared, and assigned to a different process. When you access them, they will need to be fetched from the swapfile again, possibly purging other pages, if you are still above the maximum working set size.
However, if nobody asked for more memory in the mean time (or if all demands could be satisfied by pages that were unused anyway), then accessing one of those pages will simply make it "magically reappear" and kick out another page in its stead.
Thus, your process can have more pages in RAM than are actually in its working set, but it does not "officially" own them.
The resident set/working set is the portion of the virtual address space that is currently resident in physical memory and therefore isn't swapped out.
