I have plenty of RAM on my development machine.
Can I move my Rails application's sources to a tmpfs partition to gain a performance boost, since in-memory storage is an order of magnitude faster than an HDD?
I understand that tmpfs is temporary storage by nature, BUT can I still use it for this if I write a script that copies the sources from the HDD partition to tmpfs and backs them up to the HDD before reboot?
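Roughly, the script I have in mind would look something like this (paths and sizes are just placeholders):
mkdir -p ~/ramdisk
sudo mount -t tmpfs -o size=2G tmpfs ~/ramdisk
# at login: copy the sources into the tmpfs
rsync -a ~/work/myapp/ ~/ramdisk/myapp/
# periodically and before reboot: sync changes back to the HDD copy
rsync -a --delete ~/ramdisk/myapp/ ~/work/myapp/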
Is it sane?
It is sane to use a RAMdisk to speed up access to read-only resources.
It is quite dangerous to use this approach for writable resources, because if you lose power during operation you are guaranteed to lose the data. If you don't mind losing data, or you implement some form of caching mechanism so that data "saved" to the RAMdisk is copied back to the hard disk soon after it is written, then this approach is OK for read/write data too.
However, check out the hardware and OS you're running on as well. If you have SSDs, or disks and I/O subsystems with large caches, you may find that performance isn't that bad in the first place. On the OS front, Windows Vista and 7 (for example) will use any spare RAM for disk caching, and this works very effectively, so you may see little or no performance gain from a RAMdisk.
A RAM disk or cache also only works if you have enough RAM. If there isn't enough RAM in your PC, you'll end up hitting virtual memory (swap) and performance will get worse, not better.
You can quickly try doing this manually to see what sort of performance change you achieve, and then make up your mind if the gain is worth the pain (of copying data from/to the HDD and the extra risks involved).
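For example (paths are placeholders), you could simply time reading the whole source tree from the HDD copy and from the RAMdisk copy; note that a second run against the HDD copy will largely be served from the page cache anyway, which is itself a useful data point:
time find ~/work/myapp -type f -exec cat '{}' + > /dev/null
time find ~/ramdisk/myapp -type f -exec cat '{}' + > /dev/null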
Yes, as long as you don't mind losing the data if your machine reboots unexpectedly (e.g. power loss). I don't know what your use case is, but there are cases where the need for performance exceeds the need to persist every piece of data (e.g. if you don't mind a few hours' worth of data loss). If your use case falls into that category, then tmpfs is a perfectly fine solution.
You can use it this way, but it doesn't make much sense:
If you have enough RAM, then the files will be in the filesystem cache (i.e. in RAM) anyway. So, you don't win anything by using tmpfs, but you also don't lose anything.
If you don't have enough RAM, the tmpfs will be flushed out to swap. Now your Rails sources eat up precious swap space despite the fact that there is already a copy on disk in the filesystem. So you lose swap space and you don't win anything on performance (reading the files back in from swap or from the filesystem is equally expensive).
If you don't want to take that first time hit until all the files are in the cache, you could put something like this in your development environment startup scripts:
find /usr/lib/ruby/gems/1.9.1/{rails,action,active}* -exec cat '{}' + > /dev/null
This will read all the Rails files and send them to /dev/null, pulling them into the page cache as a side effect. (Do this while getting your coding coffee.)
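If you want to see roughly how much ended up cached, plain old free shows the page cache size (the column is labelled "cached" or "buff/cache" depending on your procps version); compare the value before and after the warm-up:
free -m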
Just learned these 3 new techniques from https://unix.stackexchange.com/questions/87908/how-do-you-empty-the-buffers-and-cache-on-a-linux-system:
To free pagecache:
# echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
# echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
# echo 3 > /proc/sys/vm/drop_caches
I am trying to understand what exactly are pagecache, dentries and inodes. What exactly are they?
Does freeing them up also remove the useful memcached and/or redis caches?
--
Why am I asking this question? My Amazon EC2 server's RAM was filling up over the days, from 6% up to 95% in a matter of 7 days. I am having to run a bi-weekly cron job to remove these caches. Then memory usage drops to 6% again.
With some oversimplification, let me try to explain this in what appears to be the context of your question, because there are multiple possible answers.
It appears you are working with memory caching of directory structures. An inode in your context is a data structure that represents a file. A dentry is a data structure that represents a directory entry. These structures can be used to build a memory cache that represents the file structure on disk. To get a directory listing, the OS can go to the dentry cache; if the directory is there, it lists its contents (a series of inodes). If it's not there, it goes to the disk and reads it into memory so that it can be used again.
The page cache could contain any memory mappings to blocks on disk. That could conceivably be buffered I/O, memory mapped files, paged areas of executables--anything that the OS could hold in memory from a file.
Your commands flush these buffers.
I am trying to understand what exactly are pagecache, dentries and inodes. What exactly are they?
user3344003 already gave an exact answer to that specific question, but it's still important to note those memory structures are dynamically allocated.
When there's no better use for "free memory", memory will be used for those caches, but automatically purged and freed when some other "more important" application wants to allocate memory.
No, those caches don't affect any caches maintained by any applications (including redis and memcached).
My Amazon EC2 server's RAM was filling up over the days, from 6% up to 95% in a matter of 7 days. I am having to run a bi-weekly cron job to remove these caches. Then memory usage drops to 6% again.
You're probably misinterpreting the situation: your system may just be making efficient use of its resources.
To simplify things a little bit: "free" memory can also be seen as "unused", or even more dramatically as a waste of resources: you paid for it, but don't make use of it. That's a very uneconomical situation, and the Linux kernel tries to put your "free" memory to some "more useful" use.
Part of its strategy involves using it to avoid various kinds of disk I/O by means of various dynamically sized memory caches. A quick access to cache memory saves a "slow" disk access, so that's often a useful idea.
As soon as a "more important" process wants to allocate memory, the Linux kernel voluntarily frees those caches and makes the memory available to the requesting process. So there's usually no need to "manually free" those caches.
The Linux kernel may even decide to swap out memory of an otherwise idle process to disk (swap space), freeing RAM to be used for "more important" tasks, probably also including to be used as some cache.
So as long as your system is not actively swapping in/out, there's little reason to manually flush caches.
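If you're not sure whether that's happening, something like vmstat will tell you (sample invocation; five one-second samples):
vmstat 1 5
Consistently non-zero values in the si/so (swap-in/swap-out) columns under your normal workload mean the system really is under memory pressure; occasional blips are nothing to worry about.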
A common case for "manually flushing" those caches is purely for benchmark comparisons: your first benchmark run may execute with "empty" caches and so give poor results, while a second run will show much "better" results (due to the pre-warmed caches). By flushing your caches before any benchmark run, you remove the "warmed" caches, and so your benchmark runs can be compared more "fairly" with each other.
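For example, to start each benchmark run with cold caches (as root; the sync makes sure dirty pages are written back before they are dropped):
# sync
# echo 3 > /proc/sys/vm/drop_caches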
A common misconception is that "free memory" is important.
Memory is meant to be used.
So let's clear that up:
There's used memory, which is where important data is stored; if that reaches 100%, you're dead.
Then there's cache/buffer memory, which is used as long as there is space to do so. It's optional memory, used mostly to access disk files faster. If you run out of free memory, it will simply free itself and let you access the disk directly.
Clearing cached memory as you suggest is in most cases useless; it means you're deactivating an optimization, so you'll get a slowdown.
If you really run out of memory, that is if your "used memory" is high, and you begin to see swap usage, then you must do something.
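A quick way to check is plain old free; on reasonably recent versions, the "available" column is the kernel's estimate of how much memory can still be handed to applications without swapping, and the Swap line shows whether you're actually dipping into swap:
free -m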
HOWEVER: there's a known bug on AWS instances, with the dentry cache eating memory for no apparent reason. It's clearly described and solved in this blog.
My own experience with this bug is that the "dentry" cache consumes both "used" and "cached" memory and does not seem to release it in time, eventually causing swapping. The bug itself can consume resources anyway, so you need to look into it.
Hate to bring an old thread back from the dead, but I've been dealing with memory issues lately on my Linux virtual machines. Unfortunately, even though the virtualization of computing machines is great and Linux memory and resource allocation is superb, conflicts occur when the hypervisor applies what it calls "performance features".
VMware will actively page out RAM that hasn't been "written or modified" recently to disk. When your disk is on a SAN, that means reading from that RAM now happens at 1Gbps to 10Gbps at best, if you have a REALLY performant RAID and steady network access (ignoring the fact that the RAM of, say, 100 VMs is now all going through the same SAN). DDR3 RAM operates at 25Gbps+ on modern systems, so I'll assume you can see the problem with systems running at 1/25th to less than 1/2 of the speed anticipated.
The caches on my Linux systems are literally the same speed as disk I/O on the filesystem, meaning they do not help performance, and the hypervisor is actively sending the OS's RAM into swap instead of letting the OS clear its own caches. This is a huge problem thanks to VMware, not Linux, but be aware that cloud infrastructure often does stupid stuff like this, unfortunately. You can read more here: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/perf-vsphere-memory_management.pdf or, if you use VMware, you'll surely notice the "allocated memory" vs "active memory" distinction, and that your VMs will always display a different amount than VMware does because of this distinction and treatment of the memory.
I have a general question about using Apache HBase with a RAMdisk.
There is a big collection of data in a single table, about 25GB in total.
With this data I am doing some basic aggregations, using a Java program.
As I have enough RAM available, I tried to put this data set onto a RAMdisk using tmpfs:
mount -t tmpfs -o size=40G none /home/user/ramdisk
Then I stopped HBase, copied the content of the data folder into the RAMdisk.
Finally I created a symbolic link, linking the old data directory to the new one and started HBase again.
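Roughly, the steps I performed were (paths are approximations of my setup):
./bin/stop-hbase.sh
cp -a /home/user/hbase-data /home/user/ramdisk/hbase-data
mv /home/user/hbase-data /home/user/hbase-data.bak
ln -s /home/user/ramdisk/hbase-data /home/user/hbase-data
./bin/start-hbase.sh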
It works, but when I run the aggregations now, they are slightly slower than before.
I could imagine the RAMdisk not having that much impact if HBase compresses the data (Snappy compression is activated) and so on... but I can't see why a faster medium would lead to slower access to the data. There is enough available RAM left that this cannot be the bottleneck.
Maybe someone has a general idea or insight about this?
I think it's going to be one of two things:
A: Did you really have more than 40G of free RAM before allocating the disk? I'm impressed and all if you actually had that much free, but seeing RAM free afterwards isn't an indicator that you didn't just push a big chunk into swap (see the quick check after point B).
B: Compression (even something fast like Snappy) is going to hurt performance... particularly for something like a database engine that has a lot of wacky optimization in it. You're right that a RAMdisk should be ludicrously faster, but having to jump all over the database for your queries, and then having to jump all over the compressed image to decompress chunks, carries a pretty big overhead.
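For A, the simplest check is to look at memory and swap before and after mounting the 40G tmpfs and copying the data in, e.g.:
free -m
swapon -s
If the tmpfs copy pushed you into swap, the swap usage reported there will have grown noticeably, and that alone can explain a slowdown.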
I have an application which needs to create many small files with maximum performance (less than 1% of them may ever be read later), and I want to avoid using the asynchronous file API to keep my code simple. The total size of the files written cannot be determined in advance, so I figured that to achieve maximum performance I would need:
1. Windows to utilize all unused RAM as cache (especially for file writes), with no regard for reliability or other issues. If I have more than 1GB of unused RAM and I create one million 1KB files, I expect Windows to report "DONE" immediately, even if it has written nothing to disk yet.
OR
2. A memory-based file system backed by a real on-disk file system. Whenever I write files, it should first write everything in memory only and then update the on-disk file system in the background. There should be no delay in synchronous calls unless there isn't enough free memory. Note that this is different from tmpfs or RAM disk implementations on Windows, since those require a fixed amount of memory and fall back to the page file when there isn't enough RAM.
For option 1, I have tested VHD files - while they do offer as much as a 50% increase in total performance under some configurations, they still flush to disk and cause writes to wait unnecessarily. And it appears there is no way for me to disable journaling or further tweak the write-caching behavior...
For option 2, I have yet to find anything similar... Do such things exist?
It's a frequently asked question, I know. But I've tried every solution suggested (apc.stat=0, increasing shared memory, etc.) without any benefit.
Here's the screen with stats (you can see nginx and php5-fpm) and the parameters set in apc.ini:
APC is used for system and user-cache entries on multiple sites (8-9 WordPress sites and one with MediaWiki and SMF).
What would you suggest?
Each WordPress site is going to cache a healthy amount in the user cache. I've looked into this myself in depth and have found the best 'guesstimate' is that if you use the user cache in APC, you should keep the fragmentation under 10%. This can sometimes mean you need to reserve upwards of 10x the amount of memory you intend to actually use for caching in order to avoid fragmentation. Start where you are and keep increasing the memory allocated until fragmentation stays below 10% after running for a while.
BTW: the WordPress pages being cached are huge, so you'll probably need a lot of memory to avoid fragmentation.
Why 10% fragmentation? It's a bit of a black art, but I've observed that this is where performance starts to noticeably decline. I haven't found any good benchmarks (or run my own controlled tests), however.
This 10x amount may seem insane, but the root cause is that APC has no way to defragment other than a restart (which completely dumps the cache). Having a slab of 1G of memory when you only plan to use 100-200M gives a lot of space to fill without having to look for 'holes' to put stuff in. Think of the bad old FAT16 or FAT32 disk performance with Windows 98: the constant need to manually defrag as the disk got over 50% full.
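In apc.ini terms, that boils down to something like the following (the 1G figure is only an example; tune it until your fragmentation stays under 10%, and note that older APC builds expect a plain number of megabytes here rather than the M suffix):
apc.shm_size = 1024M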
If you can't afford the extra memory to spare, you might want to look at either memcached or plain old file caching for your user cache.
I've got a proof-of-concept program which is doing some interprocess communication simply by writing to and reading from the HD. Yes, I know this is really slow, but it was the easiest way to get things up and running. I had always planned on coming back and swapping out that part of the code for a mechanism that does all the IPC (interprocess communication) in RAM.
With the arrival of solid-state disks, do you think that bottleneck is likely to become negligible?
Notes: It's server software written in C# calling some bare metal number-crunching libraries written in FORTRAN.
The short answer is probably no. A famous researcher named Jim Gray gave a talk about storage and performance which included this great analogy. Assume your brain is the processor: accessing a register takes 1 clock tick, which is roughly equivalent to the information already being in your brain. Accessing memory takes 100 clock ticks, roughly equivalent to getting data from somewhere in the city you live in. Accessing a standard disk takes roughly 10^6 ticks, which is the equivalent of the data being on Pluto. Where does solid state fit in? Current SSD technology is somewhere between 10^4 and 10^5 ticks, depending on who you ask. While SSDs can be an order of magnitude faster, there is still a tremendous gap between reading from memory and reading from disk. This is why the answer to your question is likely no: as fast as SSDs become, they will still be significantly slower than memory (at least in the foreseeable future).
I think you will find the bottlenecks just move. As we come to expect higher throughput, we write programs with higher demands.
This pushes bottlenecks to buses, caches and parts other than the read/write mechanism (which is last in the chain anyway).
With a process no longer bound by disk I/O, I think you might find it becomes bound by the scheduler, which limits the number of read/write instructions it can issue (as with all process instructions).
To take full advantage of limitless I/O speed you would require real-time response and very aggressive management of caches and so on.
When disks get faster then so does RAM and processors and the demand on devices. The bottleneck is the same, the workload just gets bigger.
I don't believe that it will change the way I/O bound applications are written the tiniest bit. Having faster processors did not make people pick bubblesort as a sorting algorithm either.
The external memory hierarchies are an inherent problem of computing.
Joel on Software has an article about his experience upgrading to solid state. Not exactly the same issue you have, but my takeaway was:
Solid state drives can significantly speed up I/O bound operations, but many things (like compiling) are still cpu-bound.
I have a solid-state drive, and no, this won't eliminate I/O as a bottleneck. The SSD is nice, but it's not that nice.
It's actually not hard to master your system's IPC primitives or to build something on top of TCP. But if you want to stick with your disk stuff and make it faster, ramdisk or tmpfs might do the trick.
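On Linux, for instance, there's usually nothing to set up: a tmpfs is already mounted at /dev/shm, so simply pointing the file paths in your IPC code at a directory under it takes the physical disk out of the loop without changing the file-based design (the directory name here is made up):
mkdir -p /dev/shm/myapp-ipc
# have both processes read and write their exchange files under /dev/shm/myapp-ipc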
No. Current SSDs are designed as disk replacements. Every layer, from the SATA controller to the filesystem driver, treats them as storage.
This is not a problem of the underlying technology, NAND flash. When NAND flash is directly mapped into memory and uses a rotating log storage system instead of a file system based on named files, it can be quite fast. The fundamental problem is that NAND flash only performs well with block updates. File metadata updates cause expensive read-modify-write operations. Also, NAND blocks are much bigger than typical disk blocks, which doesn't help performance either.
For these reasons, the future of SSDs will be better cached SSDs. DRAM will hide the overhead of poor mapping and a small supercap backup will allow the SSD to commit writes faster.
Solid state drives do make one important improvement to IO throughput: on solid state disks, block locality is less of an issue than with rotating media. This means that high-performance IO-bound applications can shift their focus from structures that arrange data to be accessed in order to structures that optimize IO in other ways, such as keeping data in a single block by means of compression. That said, even solid state drives benefit from linear access patterns, because they can prefetch subsequent blocks into a read cache before the application requests them.
A noticeable regression on solid state disks is that writes take longer than reads, although both are still generally faster than rotating drives, and the difference is narrowing with newer, high end solid state disks.
No, sadly not. They do make it more interesting though: SSD drives have very fast reads and no seek time, but their writes are almost as slow as those of normal hard drives. This means that you will want to read most of the time. However, when you do write to the drive, you should write as much as possible in the same spot, since SSD drives can only write entire blocks at a time.
How about using a RAM drive instead of the disk? You would not have to rewrite anything; just point it at a different file system. Windows and Linux both have them. Make sure you have lots of memory on the machine and create a virtual disk with enough space for your processing. I did this for a system that listened to multiple protocols on a network tap. I never knew what packet I was going to get, and there was too much data to keep it all in memory. I would write it to the RAM drive and, when something was completed, move it and let another process get it off the RAM drive and onto a physical disk. I was able to keep up with really busy server-class network cards this way. Good luck!
Something to keep in mind here:
If the communication involves frequent messages and is on the same system you'll get very good performance because Windows won't actually write the data out in the first place.
I had to resort to this once and discovered it: the drive light did NOT come on at all as long as the data kept getting written.
but it was the easiest way to get things up and running.
I usually find that it's much cheaper to think well once with your own head than to make the CPU think millions of times in vain.