How to clear directory/inode cache on Mac OS X

How can I clear the file system cache (e.g. the directory cache and inode cache) on Mac OS X? I would like to do some performance analysis on a cold machine. Is there an approach similar to echo 3 > /proc/sys/vm/drop_caches on a Linux box?

Is purge(8) of any use to you?
Purge can be used to approximate initial boot conditions with a cold disk buffer cache for performance analysis. It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc.
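For example, to approximate a cold cache before each benchmark run, something like the following should work (a minimal sketch; on recent OS X releases purge requires root privileges):
# flush pending writes to disk, then discard the disk buffer cache
sync
sudo purge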

Related

How to completely disable Elasticsearch caching?

I'm trying to measure Elasticsearch performance for some queries and build a benchmark. I'm looking for a way to completely disable the cache. I've already tried a few things, but my first request always takes longer than the subsequent queries, so I think that even though I disabled the cache, it is still working at some level! I've tried:
1. GET my_index/_search?request_cache=false
2. POST /my_index/_cache/clear
3. PUT /my_index/_settings
{ "index.requests.cache.enable": false }
I think you are disabling the Elasticsearch cache correctly. The problem is that there is also the filesystem cache at the OS level, which you cannot disable easily. Every file and segment that has been read from the hard disk is cached in memory until new segments and files arrive. You can check this cache via the free -g command (the buff/cache column).
You can clear the page cache with the command below:
# sync; echo 3 > /proc/sys/vm/drop_caches
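Putting the pieces together, a rough benchmark run might look like the sketch below (assumptions: Elasticsearch is listening on localhost:9200, the index is my_index as in the question, and you have root access for drop_caches):
# 1. clear Elasticsearch's own caches for the index
curl -s -XPOST 'localhost:9200/my_index/_cache/clear'
# 2. drop the OS page cache so segment files are re-read from disk
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
# 3. run the query with the request cache disabled and time it
time curl -s -XGET 'localhost:9200/my_index/_search?request_cache=false' -H 'Content-Type: application/json' -d '{"query": {"match_all": {}}}'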

NVMeOF/RDMA sync file modifications

I just set up the NVMeOF/RDMA environment to play around. I have a target node whose NVMe SSD is accessed by several client nodes. However, when I delete a file, say test, on one client node, the other nodes cannot see this operation and can still read the content of test as normal. I know that RDMA bypasses the kernel, so I guess this is because of the cache? I then tried to clean up the cache using these commands:
sudo sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
sudo sync; echo 1 | sudo tee /proc/sys/vm/drop_caches
sudo sync; echo 2 | sudo tee /proc/sys/vm/drop_caches
Unfortunately, other nodes still keep this file.
So actually I have two questions:
Is this really due to the cache? How does it work?
What is the correct way to clean up the cache so that other nodes can see the deletion without re-mount?
Any help will be greatly appreciated!
Relatively short answer
Like Boris said, you don't want to do that (distributed consistency on storage is a hard problem), and you need something else to do what you want. Flushing caches may not work because you have multiple distinct views of the system, each with its own caching behavior.
Longer answer:
As Boris mentioned, NVMeoF is a block protocol. This means that at a broad level (with some hand-waving) all it can do is read and write blocks at a particular address. In practice, we usually have layers above the NVMe/NVMeoF communication layer like file systems that handle this abstraction.
I can't tell right now whether you're using a file system or reading/writing the device directly, but in either case you are at least partially correct that the page cache may be getting in the way, even with RDMA.
Now, if you are using local file systems on your client nodes, you quickly get inconsistent views. The filesystem (and consequently the overall operating system and its view of the state of the page cache and block storage) has no idea anyone else wrote anything. So even if you write and sync on one client, you may have to bypass the page cache on another (e.g. use O_DIRECT reads, which have their own set of complexities) and make sure you target something that eventually refers to the same block addresses that were written on the NVMe target from your other client.
In theory, this will let you read data written by another client if everything lines up correctly; in practice, though, it can cause confusion, especially if a file system or application on one client writes something at a location and the other client unknowingly attempts to read or write that location. Now you have a consistency problem.
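As a quick illustration of what an uncached read looks like from the shell (a sketch only; /dev/nvme0n1 is a placeholder for however the remote namespace shows up on your client), dd can bypass the page cache with O_DIRECT:
# read 4 KiB straight from the device, bypassing the page cache
sudo dd if=/dev/nvme0n1 of=/dev/null bs=4096 count=1 iflag=direct
# the same read again, but going through the page cache as usual
sudo dd if=/dev/nvme0n1 of=/dev/null bs=4096 count=1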
NVMeoF (with RDMA or any other transport) is a block-level storage protocol, not a file-level storage protocol. Thus, there is no guarantee of the atomicity of file operations across nodes in NVMeoF systems. Even if one node deletes a file, there is no guarantee that:
The delete operation was actually translated to block erase operations and sent to the storage server;
Even if the storage server deleted the blocks, there is no guarantee that other clients that have cached this data will not continue to read it. Moreover, another client can overwrite the deleted file.
Overall, I think that to have any guarantees at the file level, you should consider a distributed file system rather than NVMeoF.
What is the correct way to clean up the cache so that other nodes can see the deletion without re-mount?
There is no good way to do it. Flushing the cache on all nodes and only then reading may work, but it depends on the file system.

The Linux page cache flush order

There is a page cache that data passes through before it is written to disk. Suppose I have two operations:
write(fileA)
write(fileB)
and then the system suddenly shuts down, without us ever calling sync().
Is it possible that the data we wrote to fileB has been flushed to disk, while the data we wrote to fileA has not?
I believe that it is possible for fileB to be written to disk before fileA, as the writes will be bundled into block I/O requests and can be reordered at the block device layer by the I/O scheduler in an attempt to minimise disk seeking.
See the kernel documentation for more info about the I/O scheduler (elevator):
http://lxr.free-electrons.com/source/Documentation/block/biodoc.txt#L885
To answer your question briefly: you may want to consider calling the sync() or fsync() system call in your application after the write() to make sure the data is synced to disk immediately.
The flush (or pdflush) kernel threads are responsible for syncing dirty pages to disk. When the system is shut down properly, all dirty buffers get synced / written to disk. However, this is not the case with abrupt power failures, as data which has not yet been flushed / synced to disk is obviously lost.
If you don't call sync() in your application, dirty buffers get written to disk according to certain kernel tunables. You can control how application data (inactive dirty pages) gets synced through sysctl kernel tunables. You may want to read up on the following (a quick way to inspect them is sketched after the list):
vm.dirty_expire_centisecs - how old (in 1/100ths of a second) dirty pages must be before they are written to disk
vm.dirty_writeback_centisecs - how often the kernel wakes up the BDI-flush thread to sync dirty pages to disk
vm.dirty_background_ratio - the percentage of system memory which, when dirty, lets the system start writing data to the disks in the background
vm.dirty_ratio - the percentage of system memory which, when dirty, makes the process doing writes block and write out dirty pages to the disks
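You can inspect (and, with care, adjust) these tunables via sysctl; a minimal sketch, with values that are only examples, not recommendations:
# show the current writeback-related settings
sysctl vm.dirty_expire_centisecs vm.dirty_writeback_centisecs vm.dirty_background_ratio vm.dirty_ratio
# example: consider dirty pages expired after 1 second (the usual default is 3000, i.e. 30 seconds)
sudo sysctl -w vm.dirty_expire_centisecs=100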

I/O performance of multiple JVMs (Windows 7 affected, Linux works)

I have a program that creates a file of about 50MB size. During the process the program frequently rewrites sections of the file and forces the changes to disk (in the order of 100 times). It uses a FileChannel and direct ByteBuffers via fc.read(...), fc.write(...) and fc.force(...).
New text:
I have a better view on the problem now.
The problem appears to be that I use three different JVMs to modify a file (one creates it, two others (launched from the first) write to it). Every JVM closes the file properly before the next JVM is started.
The problem is that the cost of fc.write() to that file occasionally goes through the roof for the third JVM (on the order of 100 times the normal cost). That is, all write operations are equally slow; it is not just one that hangs for a very long time.
Interestingly, one way to help this is to insert delays (2 seconds) between launching the JVMs. Without the delay, writing is always slow; with the delay, writing is slow about every second time or so.
I also found this Stack Overflow question: How to unmap a file from memory mapped using FileChannel in java? which describes a problem for mapped files, which I'm not using.
What I suspect might be going on:
Java does not completely release the file handle when I call close(). When the next JVM is started, Java (or Windows) recognizes concurrent access to that file and installs some expensive concurrency handler for it, which makes writing expensive.
Would that make sense?
The problem occurs on Windows 7 (Java 6 and 7, tested on two machines), but not under Linux (SuSE 11.3 64).
Old text:
The problem:
Starting the program as a JUnit test harness from Eclipse or from the console works fine; it takes around 3 seconds.
Starting the program through an ant task (or through JUnit by kicking off a separate JVM using a ProcessBuilder) slows the program down to 70-80 seconds for the same task (a factor of 20-30).
Using -Xprof reveals that the usage of 'force0' and 'pwrite' goes through the roof from 34.1% (76+20 tics) to 97.3% (3587+2913+751 tics):
Fast run:
27.0% 0 + 76 sun.nio.ch.FileChannelImpl.force0
7.1% 0 + 20 sun.nio.ch.FileDispatcher.pwrite0
[..]
Slow run:
Interpreted + native Method
48.1% 0 + 3587 sun.nio.ch.FileDispatcher.pwrite0
39.1% 0 + 2913 sun.nio.ch.FileChannelImpl.force0
[..]
Stub + native Method
10.1% 0 + 751 sun.nio.ch.FileDispatcher.pwrite0
[..]
GC and compilation are negligible.
More facts:
No other methods show a significant change in the -Xprof output.
It's either fast or very slow, never something in-between.
Memory is not a problem, all test machines have at least 8GB, the process uses <200MB
rebooting the machine does not help
switching off virus scanners and similar stuff has no effect
When the process is slow, there is virtually no CPU usage
It is never slow when running it from a normal JVM
It is pretty consistently slow when running it in a JVM that was started from the first JVM (via ProcessBuilder or as ant-task)
All JVMs are exactly the same. I output System.getProperty("java.home") and the JVM options via RuntimeMXBean RuntimemxBean = ManagementFactory.getRuntimeMXBean(); List arguments = RuntimemxBean.getInputArguments();
I tested it on two machines with Windows 7 64-bit, Java 7u2, Java 6u26 and JRockit; the hardware of the machines differs, but the results are very similar.
I tested it also from outside Eclipse (command-line ant) but no difference there.
I wrote the whole program myself; all it does is read from and write to this file. No other libraries are used, especially no native libraries.
And some scary facts that I just can't believe make any sense:
Removing all class files and rebuilding the project sometimes (rarely) helps. The program (nested version) runs fast one or two times before becoming extremely slow again.
Installing a new JVM always helps (every single time!) such that the (nested) program runs fast at least once! Installing a JDK counts as two because both the JDK-jre and the JRE-jre work fine at least once. Overinstalling a JVM does not help. Neither does rebooting. I haven't tried deleting/rebooting/reinstalling yet ...
These are the only two ways I ever managed to get fast program runtimes for the nested program.
Questions:
What may cause this performance drop for nested JVMs?
What exactly do these methods do (pwrite0/force0)?
Are you using local disks for all testing (as opposed to any network share) ?
Can you set up Windows with a RAM drive to store the data? When a JVM terminates, by default its file handles will have been closed, but what you might be seeing is the flushing of the data to disk. When you overwrite lots of data, the previous version of the data is discarded and may not cause disk IO. The act of closing the file might make the Windows kernel implicitly flush data to disk. Using a RAM drive would let you confirm that theory, since disk IO time would be removed from your stats.
Find a tool for Windows that allows you to force the kernel to flush all buffers to disk, use it in between the JVM runs, and see how long that takes.
But I would guess you are hitting some interaction between the demands of the process and the demands of the kernel in attempting to manage the disk block buffer cache. On Linux there is a tool like /sbin/blockdev --flushbufs that can do this.
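On Linux, for example, you could force everything out to disk between the JVM runs with something like this (a sketch; /dev/sda is a placeholder for whatever device holds the file, and the Sysinternals sync utility is reportedly the closest Windows equivalent, though I have not verified it in this scenario):
# flush dirty pages to disk, then flush the block device's buffers
sync
sudo /sbin/blockdev --flushbufs /dev/sda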
FWIW
"pwrite" is a Linux/Unix API for allowing concurrent writing to a file descriptor (which would be the best kernel syscall API to use for the JVM, I think Win32 API already has provision for the same kinds of usage to share a file handle between threads in a process, but since Sun have Unix heritige things get named after the Unix way). Google "pwrite(2)" for more info on this API.
"force" I would guess that is a file system sync, meaning the process is requesting the kernel to flush unwritten data (that is currently in disk block buffer cache) into the file on the disk (such as would be needed before you turned your computer off). This action will happen automatically over time, but transactional systems require to know when the data previously written (with pwrite) has actually hit the physical disk and is stored. Because some other disk IO is dependant on knowing that, such as with transactional checkpointing.
One thing that could help is making sure you explicitly set the FileChannel to null. Then call System.runFinalization() and maybe System.gc() at the end of the program. You may need more than 1 call.
System.runFinalizersOnExit(true) may also help, but it's deprecated so you will have to deal with the compiler warnings.

IntelliJ 9; caching a lot of data under C:\Users\

Is there a reason why IntelliJ creates a lot of files under C:\Users\<username>\.IntelliJIdea90 ?
This directory has slowly grown to around 2GB. I can understand that IntelliJ needs to perform some caching for local history and indexing, but 2GB seems a little excessive.
Is there a way to safely clear down some of this data and free up some disk space?
I haven't yet heard of unexplained growth of those indices, so maybe there is a reason after all.
You can safely delete that directory (with IDEA not running), but expect a full rebuild of the index on next startup. If you want to preserve your configuration, though, consider only removing system/caches and system/index.
Edit: Back at work, I had a look on my machine:
$ du -sh ~/Library/Caches/IntelliJIdea90
3,8G /Users/jjungnickel/Library/Caches/IntelliJIdea90
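For instance, with IDEA shut down, you could check what the caches and indices actually cost and remove only those (a sketch assuming the Linux-style ~/.IntelliJIdea90 layout; adjust the path for Windows, C:\Users\<username>\.IntelliJIdea90, or for the Mac caches location shown above):
# see how much the caches and indices take up
du -sh ~/.IntelliJIdea90/system/caches ~/.IntelliJIdea90/system/index
# remove them; IDEA will rebuild the index on the next startup
rm -rf ~/.IntelliJIdea90/system/caches ~/.IntelliJIdea90/system/index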
