Suppose I'm writing to a RAM location on a Core Duo system through the L1/L2 cache.
Suppose I am going to write to a persistent location in RAM and panic the Linux kernel soon after that. The location is persistent, meaning that it won't be re-initialized during a CPU reboot and will be picked up after the reboot.
Will Linux flush the CPU cache as part of a reboot/panic?
Will the CPU itself flush its cache before rebooting?
Or should I do that manually? How?
Update: my cache is not write-through.
The question is, does the CPU spec define this behavior?
Probably the most appropriate way to do this would be to mark the page containing the persistent location(s) as non-cacheable. That way writes to the persistent location(s) would always bypass the cache (effectively write-through). Of course it may be that your cache is write-through anyway, so this may be redundant - you should check this first.
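A minimal sketch of that idea, assuming a Linux kernel module and assuming the persistent page is kept out of the normal allocator (e.g. reserved with memmap=); PERSIST_PHYS and the helper names here are hypothetical, not from the question:

    #include <linux/kernel.h>
    #include <linux/errno.h>
    #include <linux/io.h>

    #define PERSIST_PHYS 0x7ffff000UL    /* hypothetical reserved RAM page */

    static void __iomem *persist_map;

    static int map_persistent_page(void)
    {
            /* ioremap() gives an uncached mapping on x86; older kernels
             * spell it ioremap_nocache(). */
            persist_map = ioremap(PERSIST_PHYS, PAGE_SIZE);
            return persist_map ? 0 : -ENOMEM;
    }

    static void write_persistent(u32 value)
    {
            writel(value, persist_map);   /* store bypasses L1/L2 */
    }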
The cache may deliberately not be flushed, because a user, system engineer or IT support person may need to run a system diagnostic or debugger to examine and dump the machine state. Whether the cache is flushed at startup depends on the type and version of the operating system, the programming language and the application in use at the time. It may be a selectable option in the BIOS at startup; the cache would likely be initialized at power-on, but not necessarily at a warm restart.
I guess this might come in handy:) http://lxr.linux.no/#linux+v2.6.30/arch/x86/kernel/reboot.c
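For the "should I do that manually" part, here is a minimal sketch, assuming an x86 Linux kernel context; persistent_va is a hypothetical kernel virtual address pointing at the persistent word, and clflush_cache_range()/wbinvd() are the x86 kernel helpers for writing lines back to RAM:

    /* Hedged sketch: flush the cache line holding the persistent value
     * yourself instead of relying on reboot/panic behaviour. */
    #include <linux/kernel.h>
    #include <asm/cacheflush.h>   /* clflush_cache_range() on x86 */

    static void store_persistent_marker(u32 *persistent_va, u32 value)
    {
            *persistent_va = value;
            /* Write the dirty line back to RAM and invalidate it. */
            clflush_cache_range(persistent_va, sizeof(*persistent_va));
            /* Heavier alternative: wbinvd() writes back and invalidates
             * every line in the cache. */
    }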
I have a project where we manipulate large amounts of cached data using memory mapped files. We use Windows 10, NTFS and .NET.
When the user starts the application, we detect if the previous program session was shutdown correctly, and if so we reuse the cache.
However, this is a pain for developers when debugging. It's quite common to just stop the program being debugged. At next startup, the cached data needs to be recalculated, which takes time and is annoying.
So, we've been thinking we could introduce a 'transaction log', so that we can recover even if the previous shutdown was unclean.
Now for the actual problem.
There seem to be no guarantees about the order in which memory mapped files are flushed. If the program is simply stopped, there is no problem, since the entire memory mapped file will be flushed to disk by the operating system. The problem comes if there is a power cut. In that case, there are no guarantees about what state the file is in. Our "transaction log" doesn't help either, unless we always flush the transaction log to disk before modifying the cache. That would defeat the purpose of our architecture, since it would introduce unacceptable performance penalties.
If we could somehow know that our memory mapped file on disk was previously left in a state where the OS didn't manage to flush all pages before operating system shutdown, we could just throw the entire file away at the next startup. There would be a delay, but it would be totally acceptable since it would only occur after a power cut or similar event.
When the operating system boots, it knows that the file is possibly corrupt, because it knows the filesystem was not cleanly unmounted.
And finally, my question:
Is there some way to ask Windows if the file system was clean when it was mounted?
NTFS periodically commits its own logs and so there's a window in which a power fail could occur and NTFS would (correctly) state that the volume (as in, "NTFS DATA" not user data) is clean.
You will likely have to do what databases do which is to lock your cache into physical memory so that you can control the writes-to-disk.
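If you still want to query the dirty bit directly, the control code is FSCTL_IS_VOLUME_DIRTY. A hedged sketch follows; the \\.\C: volume path is an example, opening it typically requires administrator rights, and, as noted above, a clean volume only means the NTFS metadata is consistent, not your user data:

    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE vol = CreateFileW(L"\\\\.\\C:", GENERIC_READ,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE,
                                 NULL, OPEN_EXISTING, 0, NULL);
        if (vol == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }

        ULONG flags = 0;
        DWORD bytes = 0;
        if (DeviceIoControl(vol, FSCTL_IS_VOLUME_DIRTY, NULL, 0,
                            &flags, sizeof(flags), &bytes, NULL)) {
            printf("Volume dirty bit: %s\n",
                   (flags & VOLUME_IS_DIRTY) ? "set" : "clear");
        } else {
            fprintf(stderr, "DeviceIoControl failed: %lu\n", GetLastError());
        }
        CloseHandle(vol);
        return 0;
    }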
Imagine there's a memory-mapped file and the application is writing to it constantly. Eventually, Windows will probably flush that page to disk. How does Windows ensure that a stable snapshot of that page is flushed to disk?
Probably, the disk hardware is copying the memory into its internal memory before writing it. That's not atomic. If the application writes randomly to that page, the disk hardware might copy data that has never existed at any point in time.
Does this mean that memory mapped files might leave a page on disk in a state that has never actually existed? That could be a problem to consistency.
Or does Windows lock the page during flushing? That could be a problem because a write to that page might result in very high latency.
How does Windows ensure that a stable snapshot of that page is flushed to disk?
It doesn't need to. If the page doesn't get changed during the flush operation, the data is consistent. If the page does get changed during the flush operation, then the page is marked as dirty, so it will be flushed again in due course and the data that got written to the disk is ignored.
(Incidentally, the data is probably not copied internally. The system should normally be able to use DMA to transfer it directly to the physical device.)
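If the writer needs a known-consistent on-disk point at a moment of its own choosing, it can request a flush explicitly rather than waiting for the lazy writer. A minimal sketch using FlushViewOfFile()/FlushFileBuffers(); the file name and sizes are placeholders:

    #include <windows.h>

    int main(void)
    {
        HANDLE file = CreateFileW(L"cache.bin", GENERIC_READ | GENERIC_WRITE,
                                  0, NULL, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                                  NULL);
        if (file == INVALID_HANDLE_VALUE) return 1;

        HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READWRITE,
                                            0, 4096, NULL);
        if (!mapping) { CloseHandle(file); return 1; }

        unsigned char *view = MapViewOfFile(mapping, FILE_MAP_WRITE, 0, 0, 4096);
        if (!view) { CloseHandle(mapping); CloseHandle(file); return 1; }

        view[0] = 0x42;                  /* dirty the page                 */
        FlushViewOfFile(view, 4096);     /* hand the dirty page to the FS  */
        FlushFileBuffers(file);          /* push it through to the disk    */

        UnmapViewOfFile(view);
        CloseHandle(mapping);
        CloseHandle(file);
        return 0;
    }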
I am not an OS expert, and I am having trouble understanding my server's memory usage. I need your advice to understand the following:
My server has 8 GB of RAM and operates as a web server. PHP, MySQL and Apache processes consume the majority of the memory. When I issue the command "free" after the system is rebooted, I would normally see something along these lines:
             total       used       free     shared    buffers     cached
Mem:       8059080    2277924    5781156          0        948     310852
-/+ buffers/cache:     1966124    6092956
Swap:      4194296          0    4092668
Obviously, sooner or later the free memory drops and the cached memory increases, and I assume there is nothing wrong with that, since the OS decides what to cache.
What I don't understand is that about 1-2 days after the machine is rebooted, I see a slight increase in used swap. Doesn't this mean that the server has no free memory anymore and is using I/O instead? How can I find out which processes cause this?
I am asking this question to stackoverflow users because if I ask it to my hosting provider, I am sure they would ask more money to increase RAM.
Thanks.
This is perfectly normal. When the machine starts up, a large number of services also start up. As they run their startup code, read their configuration, and so on, they dirty some pages of memory. Many of these services will never run again. By writing this data to swap, the operating system accomplishes two things:
First, if it ever does encounter memory pressure, it can discard the pages without having to write them first, since it has already written them. Second, it can discard the pages to make more free memory to enlarge the cache.
The alternative is to keep information that hasn't been touched in days in physical memory. And that just doesn't make sense.
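As for finding out which processes the swapped-out pages belong to: each process reports a VmSwap value in /proc/<pid>/status on reasonably recent kernels. A hedged sketch that scans /proc and prints the non-zero swap users (Linux-specific; VmSwap may be absent on older kernels):

    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>

    int main(void)
    {
        DIR *proc = opendir("/proc");
        if (!proc) { perror("opendir /proc"); return 1; }

        struct dirent *ent;
        while ((ent = readdir(proc)) != NULL) {
            if (!isdigit((unsigned char)ent->d_name[0]))
                continue;                          /* only PID directories */

            char path[288], line[256], name[128] = "?";
            long swap_kb = 0;
            snprintf(path, sizeof(path), "/proc/%s/status", ent->d_name);

            FILE *f = fopen(path, "r");
            if (!f)
                continue;                          /* process may have exited */
            while (fgets(line, sizeof(line), f)) {
                sscanf(line, "Name: %127s", name);
                sscanf(line, "VmSwap: %ld", &swap_kb);
            }
            fclose(f);

            if (swap_kb > 0)
                printf("%8s  %8ld kB  %s\n", ent->d_name, swap_kb, name);
        }
        closedir(proc);
        return 0;
    }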
If in testing on a computer without a debugger, say a client's computer, I encounter a bug that may have corrupted the state of the program but not actually crashed it, I know I can take a memory dump using the Windows Task Manager (right click on process name, create dump file).
I can use these with WinDbg to peek around in memory, etc., but what would be most useful to me is to be able to restore the dump into memory so that I can continue interacting with the program. Is this possible? If so, how? Is there a tool that can restore it, or do I need to write my own?
The typical usermode dumps or minidumps do not contain enough information to do so. While they contain all usermode memory, they do not contain kernel memory, so open handles to kernel resources like files or network sockets will not be included in the dump (and even if they were, the hard disk has most likely changed so just trying to write to the hard disk may corrupt your system even more).
The only way I see to restore a memory dump is to restore the full memory and all other state, such as the hard disk state, which can be done with most virtual machine software (this will, however, disconnect all your network connections on restore; fortunately, most programs handle lost network connections better than lost file handles).
I discovered that I could do this with Hyper-V snapshots. If I run my program in a virtual machine, I can optionally dump the memory, create a snapshot, transfer the dump if necessary, come back some time later, restore the snapshot and continue the program.
Possible Duplicate:
How to invalidate the file system cache?
I'm writing a disk intensive win32 program. The first time it runs, it runs a lot slower while it scans the user's folders using FindFirstFile()/FindNextFile().
How can I repeat this first time performance without rebooting? Is there any way to force the system to discard everything in its disk cache?
I know that if I were reading a single file, I can disable caching by passing the FILE_FLAG_NO_BUFFERING flag to a call to CreateFile(). But it doesn't seem possible to do this when searching for files.
Have you thought about doing it on a different volume, and dismounting / remounting the volume? That will cause the vast majority of everything to be re-read from disk (though the cache down there won't care).
You need to create enough memory pressure to cause the memory manager and cache manager to discard the previously cached results. For the cache manager, you could try opening a large file (i.e. bigger than physical RAM) with caching enabled and then reading it backwards (to avoid any sequential I/O optimizations). The interactions between the VM and the cache manager are a little more complex and much more dependent on the OS version.
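A rough sketch of that cache-pressure trick under Win32, assuming a pre-made filler file larger than physical RAM (the path and chunk size are placeholders); reading back to front in large steps defeats sequential read-ahead:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        static char buf[64 * 1024];
        HANDLE h = CreateFileW(L"D:\\huge_filler.bin", GENERIC_READ,
                               FILE_SHARE_READ, NULL, OPEN_EXISTING,
                               FILE_FLAG_RANDOM_ACCESS, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }

        LARGE_INTEGER size;
        GetFileSizeEx(h, &size);

        /* Read the file backwards with buffering enabled so the cache
         * manager caches it, displacing whatever was cached before. */
        for (LONGLONG pos = size.QuadPart - (LONGLONG)sizeof(buf);
             pos >= 0; pos -= (LONGLONG)sizeof(buf)) {
            LARGE_INTEGER li;
            DWORD got = 0;
            li.QuadPart = pos;
            SetFilePointerEx(h, li, NULL, FILE_BEGIN);
            ReadFile(h, buf, sizeof(buf), &got, NULL);
        }

        CloseHandle(h);
        return 0;
    }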
There are also caches on the controller (possibly, but unlikely) and on the disk drive itself (likely). There are specific IoCtls to flush this cache, but in my experience, disk firmware is untested in this arena.
Check out the Clear function of CacheSet by SysInternals.
You could avoid a physical reboot by using a virtual machine.
I tried all the methods in the answers, including CacheSet, but they would not work for FindFirstFile()/FindNextFile(). Here is what worked:
Scanning files over the network. When scanning a shared drive, it seems that Windows does not cache the folders, so it is slow every time.
The simplest way to make any algorithm slower is to insert calls to Sleep(). This can reveal lots of problems in multi-threaded code, and that is what I was really trying to do.