mlock usage in a RAM-only system - embedded-linux

We have a time-critical process (third-party code) that calls mlockall. I am porting this code to our embedded system, which doesn't have a hard disk.
It is a RAM-only system that boots from an SD card and also uses the SD card for storage.
Does mlockall have any performance benefit at all on a RAM-only system?
Just to check whether there is any swap partition, I tried:
cat /proc/swaps
Filename Type Size Used Priority
In this case can I remove mlockall, as it is not going to add any value on our system? Kindly help.

When under memory pressure the Linux kernel may decide to evict pages from RAM. Dirty anonymous pages (modified, writeable data not backed by a file) can only be evicted by swapping them out. Clean pages (read-only data, such as the text section of your process) can simply be dropped; they are brought back in via a page fault when needed.
The first case cannot happen to you, as there is no swap. The second case can happen, and it is exactly what the mlockall call prevents.
Overall the discussion is somewhat theoretical: running a system under memory pressure introduces a lot of non-deterministic behaviour, which is bad for a real-time system.
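For reference, here is a minimal sketch of the kind of call the third-party code presumably makes; the MCL_CURRENT | MCL_FUTURE flags are the usual choice, but the actual third-party code may differ:

/* Lock all current and future pages of the process into RAM so that even
   clean (read-only) pages cannot be dropped and faulted back in later. */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        /* Typically fails without CAP_IPC_LOCK or a sufficient RLIMIT_MEMLOCK. */
        perror("mlockall");
        return 1;
    }

    /* ... time-critical work runs here with its pages pinned ... */

    munlockall();
    return 0;
}

Removing the call only saves the (small) cost of the locking itself; keeping it still protects against the clean-page case described above.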

Related

Is it possible to "gracefully" use virtual memory in a program whose regular use would consume all physical RAM?

I intend to write a program that creates huge relational networks out of unstructured data - the exact implementation is irrelevant, but imagine something like a GPT-3-style large language model. Training such a model could require 100+ gigabytes of available random-access memory as links are reinforced between new and existing nodes in the graph. Only a small portion of the entire model is likely to be loaded at any given time, but potentially any region of memory may be accessed at random.
I do not have a machine with 512 GB of physical RAM. However, I do have one with a 512 GB NVMe SSD that I can dedicate to the purpose. I see two potential options for making this program work without specialized hardware:
I can write my own memory manager that swaps pages between "hot" resident memory and "cold" storage on disk, probably using memory-mapped files or some similar construct (a sketch of this approach follows the question). This would require coding all memory accesses in the modeling program against this custom memory manager, plus writing the page cache, concurrent-access handling and all the other low-level machinery that comes with it, which would take days and very likely introduce bugs. Performance would probably be poor as well. Or,
I can configure the operating system to use the entire SSD as a page file / swap partition, and then just have the program reserve as much virtual memory as it needs - the same as any other normal program - relying on the kernel's memory manager, which already does the page mapping, swapping and caching for me.
The problem I foresee with #2 is getting the operating system to understand what I am trying to do in a "cooperative" way. Ideally I would like to hint to the OS that my process should keep only a specific fraction of its memory resident and swap out the rest, so that overall system RAM usage stays below 90% or so. Otherwise the OS will allocate 99% of physical RAM and then start aggressively compacting and reclaiming memory from other background programs, which ends up making the whole system unresponsive. Linux apparently just starts sacrificing entire processes if things get bad enough.
Does there exist a kernel call in any language or operating system that would let me tell the OS to chill out and proactively swap user memory to disk? I have looked through the VMM functions in kernel32.dll and the Linux paging and swap daemon (kswapd) documentation, but nothing looks like what I need. Perhaps some way to reserve, say, 1 GB of pages and then "donate" them back to the kernel to make sure they get used for processes that aren't my own? Some way to configure memory pressure or limits, or to make kswapd work more aggressively for just my process?
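As an illustration of option #1 from the question, here is a rough sketch (Linux, 64-bit build, with a made-up file path and made-up sizes) of backing the model with an mmap-ed file on the SSD and explicitly handing cold regions back to the kernel with madvise; this is roughly the kind of "donate pages back" behaviour asked about, using only standard calls:

/* Back the model with a file on the NVMe SSD and release cold regions. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MODEL_BYTES (400ULL << 30)   /* 400 GiB backing store, created sparse */

int main(void)
{
    int fd = open("/mnt/nvme/model.bin", O_RDWR | O_CREAT, 0644);  /* path is hypothetical */
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, MODEL_BYTES) != 0) { perror("ftruncate"); return 1; }

    unsigned char *model = mmap(NULL, MODEL_BYTES, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
    if (model == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touch whatever region is "hot" right now; the kernel pages it in on demand. */
    model[0] = 1;

    /* When a region goes cold, flush it and tell the kernel its RAM may be
       reclaimed. The data stays in the backing file and is simply re-read
       (via page faults) on the next access. */
    size_t region = 1UL << 30;       /* first 1 GiB, as an example */
    msync(model, region, MS_SYNC);
    madvise(model, region, MADV_DONTNEED);

    munmap(model, MODEL_BYTES);
    close(fd);
    return 0;
}

This keeps the eviction policy in the application's hands while the kernel's page cache still does the actual I/O and caching.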

What happens when I try to write a file (say, text) that consumes more memory than the RAM?

Suppose I open Notepad (not necessarily) and write a text file of 6 GB (again, suppose). I have no running processes other than Notepad itself, and the memory assigned to the user processes has a limit of less than 6 GB. My disk space is sufficient, though.
What happens to the file now? I know that writing is definitely possible and virtual memory may get involved, but I am not sure how. Does virtual memory actually get involved? Either way, can you please explain what happens from an OS point of view?
Thanks
From the memory point of view, Notepad allocates a 6 GB buffer in memory to store the text you are seeing. The process consists of a data segment (which includes that buffer, among other things) and a code segment (the Notepad native code), so the total process space is going to be larger than 6 GB.
Now, it's all virtual memory as far as the process is concerned (yes, it is involved). If I understand your case correctly, the process won't fit into physical memory, so it is going to crash due to insufficient memory.
the memory assigned to the user processes has a limit of less than 6 GB.
If this is a hard limit enforced by the operating system, it may, at its own discretion, kill the process with some error message. It may also do anything else it wants, depending on its implementation. This part of the answer disregards virtual memory or any other way of swapping RAM to disk.
My disk space is sufficient, though. What happens to the file now? I know that writing is definitely possible and virtual memory may get involved, but I am not sure how.
At this point, now that your question involves the disk, we can start talking about virtual memory and swapping. If virtual memory is involved and the 6 GB limit applies to RAM usage rather than to total virtual memory usage, parts of the file can be moved to disk; for example, the parts of the file currently out of view on screen. The OS then manages which parts of the (more than 6 GB of) data are resident in RAM, and swaps data in and out depending on what the program needs (i.e. where in the file you are working).
Does virtual memory actually get involved?
That depends on whether it is enabled in the OS in question, and on how it is configured.
Yes, a lot of this depends on the OS in question, its implementation, and how it handles cases like this. If the OS is poorly written it may even crash itself.
Before I give you a precise answer, let me explain a few things.
Open the Linux System Monitor or the Windows Task Manager, and then start some heavyweight software such as a game, Android Studio or IntelliJ.
Go to the memory visualization tab. You will notice that each of the applications (processes) consumes a certain amount of memory.
Some machines, if not most, support virtual memory.
The idea is to set aside a certain amount of hard disk space as a backup: if an application (process) consumes a lot of memory but is not currently active, its pages can be moved from main memory to this virtual memory area to give priority to the tasks that are currently busy.
Virtual memory that resides on the hard disk is slower than main memory; on the whole, however, it is still faster than fetching data directly from disk every time, because actively used pages stay cached in RAM.
When you launch an application, not all of it is loaded into memory; only the parts required at that particular time are loaded. That is why you can play a 60 GB game on a machine that has 4 GB of RAM.
To answer your question:
If you launch software that consumes all of your machine's memory resources, the machine will start to freeze. You will even hear its cooling system become louder and faster.
I hope I clarified this well.

Will Windows still be able to allocate memory when free physical memory is very low?

The application is being developed for 32-bit Windows using Visual Studio.
Let's say lots of other applications are running on my machine and have occupied almost all of the physical memory, leaving only 1 MB free. If my application (which has not yet allocated any memory) tries to allocate, say, 2 MB, will the call be successful?
My guess: in theory, each Windows application has 2 GB of virtual address space available.
So I believe this call should be successful (regardless of how much physical memory is available). But I am not sure of this, which is why I am asking here.
Windows gives a rock-hard guarantee that this will always work. A process can only allocate virtual memory when Windows can commit space in the paging file for the allocation. If necessary, it will grow the paging file to make the space available. If that fails, for example when the paging file grows beyond the preset limit, then the allocation fails as well. Windows doesn't have an equivalent of the Linux "OOM killer"; it does not support the kind of over-committing that can force an operating system to start randomly killing processes to find RAM.
Do note that the "always works" clause does have a sting. There is no guarantee on how long it will take. In very extreme circumstances the machine can start thrashing, where just about every memory access in the running processes causes a page fault. Code execution slows to a crawl, and you can lose control, with the mouse pointer frozen, when Explorer or the mouse or video driver starts thrashing as well. You are well past the point of shopping for RAM when that happens. Windows applies quotas to processes to prevent them from hogging the machine, but if you have enough processes running then that doesn't necessarily avoid the problem.
Of course. It would be lousy design if memory had to be wasted now in order to be used later. Operating systems constantly re-purpose memory to its most advantageous use at any moment. They don't have to waste memory by keeping it free just so that it can be used later.
This is one of the benefits of virtual memory with a page file. Because the memory is virtual, the system can allocate more virtual memory than physical memory. Virtual memory that cannot fit in physical memory, is pushed out to the page file.
So the fact that your system may be using all of its physical memory does not mean that your program will be unable to allocate memory. In the scenario you describe, your 2 MB allocation will succeed. If you then access that memory, it will be paged into physical memory, and very likely some other pages (maybe in your process, maybe in another process) will be pushed out to the page file.
Well, it will succeed as long as there's some memory for it - apart from physical memory, there's also the page file.
However, once you reach the limit of both RAM and the page file, you're done for and that's when the out of memory situation really starts being fun.
Now, systems like Windows Vista will try to use all of your available RAM, pretty much for caching. That's a good thing, and when there's a request for memory from an application, the cache will be thrown away as needed.
As for virtual memory, you can request much more than you have available, regardless of your RAM or page file size. Only when you commit the memory does it actually need some backing - either RAM or the page file. On 64-bit, you can easily request terabytes of virtual memory - that doesn't mean you'll get it when you try to commit it, though :P
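To make the reserve/commit distinction concrete, here is a hedged Win32 sketch (64-bit build assumed; the sizes are arbitrary) that reserves far more address space than the machine has RAM and commits only a small piece of it:

/* Reserving only claims address space; committing is what requires RAM or
   page-file backing, and is therefore the call that can fail. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    SIZE_T reserveSize = (SIZE_T)1 << 40;          /* 1 TiB of address space */
    void *base = VirtualAlloc(NULL, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
    if (base == NULL) { printf("reserve failed: %lu\n", GetLastError()); return 1; }

    /* Commit a small piece; only this is charged against RAM + page file. */
    void *chunk = VirtualAlloc(base, 2 * 1024 * 1024, MEM_COMMIT, PAGE_READWRITE);
    if (chunk == NULL) { printf("commit failed: %lu\n", GetLastError()); return 1; }

    memset(chunk, 0, 2 * 1024 * 1024);             /* committed pages are safe to touch */
    VirtualFree(base, 0, MEM_RELEASE);             /* releases the whole reservation */
    return 0;
}

Only the MEM_COMMIT call counts against the commit limit (RAM plus page file); the reservation itself costs essentially nothing.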
If your application is unable to allocate a block of physical memory (RAM) to store information, the operating system takes over and 'pages out' sections that are currently in RAM to disk, freeing physical memory so that your program can perform the allocation. This is done automatically and is completely invisible to your applications.
So, in your example, on a system with 1 MB of RAM free, if your application tries to allocate memory, the operating system will page certain contents of physical memory out to disk and free up RAM for your application. Your application will not crash in this case.
The reality, obviously, is more complicated than that.
There are several ways to configure a page file on Windows (fixed size, variable size, and which disk it lives on). If you run out of physical memory and also run out of hard drive space (because your page file has grown very large due to excessive paging), or you reach the limit of your paging file (if it has a static limit), then your allocations will fail with an out-of-memory error. With today's systems and their large local storage, however, this is a rare event.
Be sure to read about paging for the full picture. Check out:
http://en.wikipedia.org/wiki/Paging
In certain cases you will notice that you have sufficient free physical memory, say 100 MB, and yet your program fails to allocate a 10 MB block to store a large object. This is caused by address space fragmentation (a particular concern for 32-bit processes): although 100 MB is free in total, there is no single contiguous 10 MB region that can be used to store your object. This results in an exception that needs to be handled in your code. If you allocate large objects, you may want to split the allocation into smaller blocks to make it easier to satisfy, and then aggregate them back together in your code logic. For example, instead of a single 10 MB vector, you can declare ten 1 MB vectors in an array and allocate memory for each one individually, as in the sketch below.
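A minimal C sketch of that workaround (the sizes are illustrative only):

/* Replace one large contiguous allocation with several smaller ones and
   index across them, so a fragmented address space can still satisfy it. */
#include <stdio.h>
#include <stdlib.h>

#define NCHUNKS    10
#define CHUNK_SIZE (1024 * 1024)          /* ten 1 MB blocks instead of one 10 MB block */

static unsigned char *chunks[NCHUNKS];

/* Translate a logical offset in the "big" buffer into a pointer into one chunk. */
static unsigned char *at(size_t offset)
{
    return chunks[offset / CHUNK_SIZE] + (offset % CHUNK_SIZE);
}

int main(void)
{
    for (int i = 0; i < NCHUNKS; i++) {
        chunks[i] = malloc(CHUNK_SIZE);   /* each request needs only 1 MB of contiguous address space */
        if (chunks[i] == NULL) { fprintf(stderr, "allocation %d failed\n", i); return 1; }
    }

    *at(5 * (size_t)CHUNK_SIZE + 123) = 42;   /* use it as if it were one 10 MB buffer */

    for (int i = 0; i < NCHUNKS; i++) free(chunks[i]);
    return 0;
}

Each individual request now only needs 1 MB of contiguous address space, which even a fragmented 32-bit process can usually provide.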

Windows page file policy

Generally, my question is why Windows is constantly thrashing my HDD. I'm talking about Windows 7 to be exact (and of course indexing is turned off).
I have a feeling that this is related to the page file, because the phenomenon disappears when the page file is turned off.
More specifically, my question is why Windows ever makes use of the page file while RAM is not yet exhausted.
I have two possible explanations:
Perhaps Windows tends to save memory pages of every running process to the page file "in the background", even before this is really needed. Then, when free RAM is eventually needed, it can be reclaimed more quickly.
In kernel mode, paging is not always possible. Drivers that run at high IRQL can use only physical RAM, so the OS should keep some reserve of RAM that drivers may wish to allocate dynamically.
Anyway, I'm talking about a reserve of gigabytes of free RAM, yet Windows is still thrashing the HDD.
Does anybody know what Windows's exact policy regarding the page file is, and whether it can be adjusted?
Of course I can turn the page file off, but I do want to use it - just only once RAM is genuinely exhausted.
BTW, I thought about buying an SSD as a system drive, but I'm afraid it'll be dead within a year under this kind of abuse.
There are multiple reasons for disk activity in Windows 7: indexing (turned off here), scheduled tasks (collection of usage information, check disk, etc.), disk defrag, antivirus, SuperFetch, paging and many more.
You are correct to suspect that paging can happen even when there is sufficient RAM, and the reason you give is correct. It is called pre-emptive paging, and it prepares the system for spikes in memory demand. Once you have large amounts of RAM this seems unnecessary, and it is, 99% of the time. The remaining 1% covers all those situations where you actually run out of memory and paging is the only option.
There is a way to get the best of both worlds. You can practically eliminate paging and still keep a page file by setting its minimum size to something very small. If you are fine with mini kernel dumps being written when the system crashes, you can set the page file's minimum size to just 16 MB. If you want full kernel dumps, the minimum is 1 GB, which limits paging to a lesser degree. Set the maximum page file size to 4 GB or whatever you want.
This way Windows can always expand the page file and use a bigger one if it REALLY needs to (i.e. it is running out of virtual memory), but it should not use it under normal conditions - probably never. You still keep the safety net of the page file, though.

Memory-mapped files cause low available physical memory

I have 2 GB of RAM and am running a memory-intensive application; the system goes into a low-available-physical-memory state and stops responding to user actions, like opening an application or invoking a menu.
How do I trigger, or tell the system, to swap memory out to the page file and free physical memory?
I'm using Windows XP.
If I run the same application on a 4 GB RAM machine this is not the case; system response is good. Once available physical memory gets low, the system automatically swaps to the page file and frees physical memory, and it is not as bad as on the 2 GB system.
To overcome this problem (on the 2 GB machine) I attempted to use memory-mapped files for the large datasets allocated by the application. In this case the virtual memory of the application (process) is fine, but the system cache is high and the same problem remains: available physical memory is low.
Even though the memory-mapped file is not mapped into the process's virtual memory, the system cache is high. Why???!!! :(
Any help is appreciated.
Thanks.
If your data access pattern for using the memory mapped file is sequential, you might get slightly better page recycling by specifying the FILE_FLAG_SEQUENTIAL_SCAN flag when opening the underlying file. If your data pattern accesses the mapped file in random order, this won't help.
You should consider decreasing the size of your map view. That is where all the memory is actually consumed and cached. Since it appears that you need to handle files that are larger than the available free physical memory, you can probably do a better job of memory management than the virtual memory pager, because you know more about how you are using the memory than the virtual memory manager does. If at all possible, try to adjust your design so that you can operate on portions of the large file using a smaller view.
Even if you can't get rid of the need for full random access across the entire range of the underlying file, it might still be beneficial to tear down and recreate the view as needed, moving it to the section of the file that the next operation needs to access. If your data access patterns tend to cluster around parts of the file before moving on, you won't need to move the view very often. You'll take a hit tearing down and recreating the view object, but since tearing down the view also releases all the cached pages associated with it, it seems likely you'd see a net gain in performance, because the smaller view significantly reduces memory pressure and paging system-wide. Try setting the size of the view based on a portion of the installed system RAM and move the view around as needed by your file processing. The larger the view, the less often you'll need to move it, but the more RAM it will consume, potentially impacting system responsiveness.
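Here is a rough Win32 sketch of that sliding-view idea, assuming a read-only sequential pass over a large file; the file name and the 64 MB window size are placeholders:

/* Map only a 64 MB window of the big file and move the window as processing
   advances, instead of mapping the whole file at once. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const DWORD64 viewSize = 64ULL * 1024 * 1024;   /* multiple of the 64 KB allocation granularity */

    HANDLE file = CreateFileA("bigdata.bin", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    LARGE_INTEGER fileSize;
    if (!GetFileSizeEx(file, &fileSize)) { CloseHandle(file); return 1; }

    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (mapping == NULL) { CloseHandle(file); return 1; }

    for (DWORD64 offset = 0; offset < (DWORD64)fileSize.QuadPart; offset += viewSize) {
        DWORD64 remaining = (DWORD64)fileSize.QuadPart - offset;
        SIZE_T len = (SIZE_T)(remaining < viewSize ? remaining : viewSize);

        const BYTE *view = MapViewOfFile(mapping, FILE_MAP_READ,
                                         (DWORD)(offset >> 32), (DWORD)offset, len);
        if (view == NULL) break;

        /* ... process view[0 .. len-1] here ... */

        UnmapViewOfFile(view);   /* tearing down the view lets its cached pages be released */
    }

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}

For random rather than sequential access the same pattern works; you would just compute the window offset from the position you need (rounded down to the allocation granularity) instead of walking the file linearly.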
As I think you are hinting at in your post, the slow response time is probably at least partially due to delays while the OS writes the contents of memory to the page file to make room for other processes in physical memory.
The obvious solution (and possibly not a practical one) is to use less memory in your application. I'll assume that is not an option, or at least not a simple one. The alternative is to proactively flush data to disk so that physical memory stays available for other applications to run. You can find the total memory on the machine with GlobalMemoryStatusEx, and GetProcessMemoryInfo will return current information about your own application's memory usage. Since you say you are using a memory-mapped file, you may need to account for that separately; for example, I believe the PageFileUsage information returned by that API will not include your memory-mapped file.
If your application is monitoring its usage, you may be able to use FlushViewOfFile to proactively force data from memory to disk. There is also an API (EmptyWorkingSet) that I think attempts to write as many dirty pages to disk as possible, but it seems very likely to hurt the performance of your own application significantly. It could, however, be useful in a situation where you know your application is entering some kind of idle state.
And, finally, one other API that might be useful is SetProcessWorkingSetSizeEx. You might consider using it to give the system a hint about an upper limit for your application's working set size, which might help preserve more memory for other applications.
Edit: This is another obvious statement, but I forgot to mention it earlier, and it may not be practical for you: given that you are running into 32-bit limitations, one of the best things you could do is build your application as 64-bit, run it on a 64-bit OS, and throw a bit more memory at the machine.
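For completeness, a small sketch combining two of the APIs mentioned above, GlobalMemoryStatusEx and SetProcessWorkingSetSizeEx; the 256 MB cap is an arbitrary example, not a recommendation:

/* Report system-wide memory load and hint an upper bound for this process's
   working set so more physical memory stays available to other applications. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX status;
    status.dwLength = sizeof(status);
    if (GlobalMemoryStatusEx(&status)) {
        printf("physical memory load: %lu%%\n", status.dwMemoryLoad);
        printf("available physical:   %llu MB\n", status.ullAvailPhys / (1024 * 1024));
    }

    /* Hint (soft limits, not hard ones) that the working set should stay
       between roughly 16 MB and 256 MB. */
    SIZE_T minWs = 16 * 1024 * 1024;
    SIZE_T maxWs = 256 * 1024 * 1024;
    if (!SetProcessWorkingSetSizeEx(GetCurrentProcess(), minWs, maxWs,
                                    QUOTA_LIMITS_HARDWS_MIN_DISABLE |
                                    QUOTA_LIMITS_HARDWS_MAX_DISABLE)) {
        printf("SetProcessWorkingSetSizeEx failed: %lu\n", GetLastError());
    }
    return 0;
}

Note that SetProcessWorkingSetSizeEx is only a hint unless the hard-limit flags are used; on older systems such as the Windows XP machine in the question, the plain SetProcessWorkingSetSize variant is the one available.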
Well, it sounds like your program needs more than 2GB of working set.
Modern operating systems are designed to use most of the RAM for something at all times, only keeping a fairly small amount free so that it can be immediately handed out to processes that need more. The rest is used to hold memory pages and cached disk blocks that have been used recently; whatever hasn't been used recently is flushed back to disk to replenish the pool of free pages. In short, there isn't supposed to be much free physical memory.
The principal difference between a normal memory allocation and memory-mapped files is where the data gets stored when it must be paged out of memory. It doesn't necessarily have any effect on when the memory will be paged out, and it has little effect on the time it takes to page it out.
The real problem you are seeing is probably not that you have too little free physical memory, but that the paging rate is too high.
My suggestion would be to attempt to reduce the amount of storage needed by your program, and see if you can increase the locality of reference to reduce the amount of paging needed.
