Methods to decrease memory consumption in ArangoDB

We currently have an ArangoDB cluster running version 3.0.10 for a POC, with about 81 GB stored on disk and main memory consumption of about 98 GB distributed across 5 primary DB servers. There are about 200 million vertices and 350 million edges, in 3 edge collections and 3 document collections; most of the memory (around 80%) is consumed by the edges.
I'm exploring methods to decrease the main memory consumption. I'm wondering whether there are any ways to compress/serialize the data so that less main memory is used.
The reason for decreasing memory is to reduce infrastructure costs; I'm willing to trade off speed for my use case.
Could you please let me know whether there are any methods to reduce main memory consumption in ArangoDB?

It took us a while to find out that our original recommendation to set vm.overcommit_memory to 2 is not good in all situations.
It seems that there is an issue with the bundled jemalloc memory allocator in ArangoDB in some environments.
With a vm.overcommit_memory kernel setting of 2, the allocator had a problem splitting existing memory mappings, which made the number of memory mappings of an arangod process grow over time. This could have led to the kernel refusing to hand out more memory to the arangod process, even if physical memory was still available. The kernel will only grant up to vm.max_map_count memory mappings to each process, which defaults to 65530 on many Linux environments.
Another issue when running jemalloc with vm.overcommit_memory set to 2 is that for some workloads the amount of memory that the Linux kernel tracks as "committed memory" also grows over time and does not decrease. So eventually the ArangoDB daemon process (arangod) may not get any more memory simply because it reaches the configured overcommit limit (physical RAM * overcommit_ratio + swap space).
So the solution here is to modify the value of vm.overcommit_memory from 2 to either 1 or 0. This will fix both of these problems.
We are still observing ever-increasing virtual memory consumption when using jemalloc with any overcommit setting, but in practice this should not cause problems.
So adjusting the value of vm.overcommit_memory from 2 to either 0 or 1 (0 is the Linux kernel default, by the way) should improve the situation.
Another way to address the problem, which however requires compiling ArangoDB from source, is to build without jemalloc (-DUSE_JEMALLOC=Off when invoking cmake). I am just listing this as an alternative here for completeness. With the system's libc allocator you should see quite stable memory usage. We also tried another allocator, namely the one from musl libc, and this also shows quite stable memory usage over time. The main problem that makes exchanging the allocator a non-trivial issue is that jemalloc otherwise has very nice performance characteristics.
(quoting Jan Steemann, as can be found on GitHub)
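For reference, a minimal shell sketch of the two adjustments described in the quote; the sysctl.d file name and the source path are placeholders, not part of the original advice:
sysctl vm.overcommit_memory                      # show the current value
sudo sysctl -w vm.overcommit_memory=0            # switch to 0 (the kernel default) at runtime
echo "vm.overcommit_memory = 0" | sudo tee /etc/sysctl.d/99-overcommit.conf   # persist across reboots
# alternative mentioned above: build ArangoDB from source without jemalloc
cmake -DUSE_JEMALLOC=Off <path-to-arangodb-source> && make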
Several additions to the RocksDB storage engine have been made in the meantime. We demonstrate how memory management works in RocksDB.
Many settings of the RocksDB storage engine are exposed to the outside via startup options.
Discussions and research by a user led to further options being exposed for configuration in ArangoDB 3.7 (a usage sketch follows the list):
--rocksdb.cache-index-and-filter-blocks-with-high-priority
--rocksdb.pin-l0-filter-and-index-blocks-in-cache
--rocksdb.pin-top-level-index-and-filter
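A hedged sketch of what passing these at startup could look like; the boolean values are purely illustrative, not recommendations:
arangod \
  --rocksdb.cache-index-and-filter-blocks-with-high-priority true \
  --rocksdb.pin-l0-filter-and-index-blocks-in-cache true \
  --rocksdb.pin-top-level-index-and-filter true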

Related

Is it possible to "gracefully" use virtual memory in a program whose regular use would consume all physical RAM?

I am intending to write a program to create huge relational networks out of unstructured data - the exact implementation is irrelevant but imagine a GPT-3-style large language model. Training such a model would require potentially 100+ gigabytes of available random access memory as links get reinforced between new and existing nodes in the graph. Only a small portion of the entire model would likely be loaded at any given time, but potentially any region of memory may be accessed randomly.
I do not have a machine with 512 GB of physical RAM. However, I do have one with a 512 GB NVMe SSD that I can dedicate for the purpose. I see two potential options for making this program work without specialized hardware:
I can write my own memory manager that would swap pages between "hot" resident memory and "cold" pages on the hard disk, probably using memory-mapped files or some similar construct. This would require me to route all memory accesses in the modeling program through this custom memory manager, and to code the page cache, concurrent access handlers, and all of the other low-level machinery that comes along with it, which would take days and very likely introduce bugs. Performance would also likely be poor. Or,
I can configure the operating system to use the entire SSD as a page file / SWAP filesystem, and then just have the program reserve as much virtual memory as it needs - the same as any other normal program, relying on the kernel's memory manager which is already doing the page mapping + swapping + caching for me.
The problem I foresee with #2 is making the operating system understand what I am trying to do in a "cooperative" way. Ideally I would like to hint to the OS that I would only like a specific fraction of resident memory and swap the rest, to keep overall system RAM usage below 90% or so. Otherwise the OS will allocate 99% of physical RAM and then start aggressively compacting and cutting down memory from other background programs, which ends up making the whole system unresponsive. Linux apparently just starts sacrificing entire processes if it gets too bad.
Does there exist a kernel command in any language or operating system that would let me tell the OS to chill out and proactively swap user memory to disk? I have looked through the VMM functions in kernel32.dll and the Linux paging and swap daemon (kswapd) documentation, but nothing looks like what I need. Perhaps some way to reserve, say, 1 GB of pages and then "donate" them back to the kernel to make sure they get used for processes that aren't my own? Some way to configure memory pressure or limits, or to make kswapd work more aggressively for just my process?
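One Linux mechanism along these lines is the cgroup v2 memory.high limit, which asks the kernel to reclaim a process's memory proactively above a threshold instead of failing allocations. A rough sketch only: "modelcg" is a made-up group name, "./train-model" a hypothetical program, and the 100G threshold purely illustrative; paths assume a cgroup v2 system:
sudo mkdir /sys/fs/cgroup/modelcg
echo 100G | sudo tee /sys/fs/cgroup/modelcg/memory.high   # throttle and reclaim above ~100 GiB of resident memory
echo $$ | sudo tee /sys/fs/cgroup/modelcg/cgroup.procs    # move this shell (and its children) into the group
./train-model                                             # hypothetical workload started from that shell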

What are pagecache, dentries and inodes?

Just learned these 3 new techniques from https://unix.stackexchange.com/questions/87908/how-do-you-empty-the-buffers-and-cache-on-a-linux-system:
To free pagecache:
# echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
# echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
# echo 3 > /proc/sys/vm/drop_caches
I am trying to understand what exactly are pagecache, dentries and inodes. What exactly are they?
Does freeing them up also remove the useful memcached and/or redis cache?
--
Why am I asking this question? My Amazon EC2 server RAM was getting filled up over the days - from 6% up to 95% in a matter of 7 days. I am having to run a bi-weekly cronjob to remove these caches. Then memory usage drops to 6% again.
With some oversimplification, let me try to explain this in what appears to be the context of your question, because there are multiple possible answers.
It appears you are working with memory caching of directory structures. An inode, in your context, is a data structure that represents a file. A dentry is a data structure that represents a directory. These structures can be used to build a memory cache that represents the file structure on a disk. To get a directory listing, the OS can go to the dentry cache: if the directory is there, list its contents (a series of inodes); if not, go to the disk and read it into memory so that it can be used again.
The page cache could contain any memory mappings to blocks on disk. That could conceivably be buffered I/O, memory mapped files, paged areas of executables--anything that the OS could hold in memory from a file.
Your commands flush these buffers.
I am trying to understand what exactly are pagecache, dentries and inodes. What exactly are they?
user3344003 already gave an exact answer to that specific question, but it's still important to note those memory structures are dynamically allocated.
When there's no better use for "free memory", memory will be used for those caches, but automatically purged and freed when some other "more important" application wants to allocate memory.
No, those caches don't affect any caches maintained by any applications (including redis and memcached).
My Amazon EC2 server RAM was getting filled up over the days - from 6% up to 95% in a matter of 7 days. I am having to run a bi-weekly cronjob to remove these caches. Then memory usage drops to 6% again.
Probably you're misinterpreting the situation: your system may just be making efficient use of its resources.
To simplify things a little bit: "free" memory can also be seen as "unused", or even more dramatically as a waste of resources: you paid for it, but don't make use of it. That's a very uneconomic situation, and the Linux kernel tries to make some "more useful" use of your "free" memory.
Part of its strategy involves using it to save various kinds of disk I/O by using various dynamically sized memory caches. A quick access to cache memory saves "slow" disk access, so that's often a useful idea.
As soon as a "more important" process wants to allocate memory, the Linux kernel voluntarily frees those caches and makes the memory available to the requesting process. So there's usually no need to "manually free" those caches.
The Linux kernel may even decide to swap out memory of an otherwise idle process to disk (swap space), freeing RAM to be used for "more important" tasks, probably also including to be used as some cache.
So as long as your system is not actively swapping in/out, there's little reason to manually flush caches.
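To check whether that is actually the case, the standard tools are enough; the "available" column (not "free") and the swap-in/swap-out columns are what matter here:
free -h          # "available" estimates memory usable without swapping; buff/cache is reclaimable
vmstat 5         # watch the si/so columns for sustained swap activity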
A common case to "manually flush" those caches is purely for benchmark comparison: your first benchmark run may run with "empty" caches and so give poor results, while a second run will show much "better" results (due to the pre-warmed caches). By flushing your caches before any benchmark run, you're removing the "warmed" caches and so your benchmark runs are more "fair" to be compared with each other.
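A typical pre-benchmark flush uses the same mechanism as the commands quoted in the question, with a sync first so that dirty pages can actually be dropped (run as root):
# sync
# echo 3 > /proc/sys/vm/drop_caches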
A common misconception is that "free memory" is important.
Memory is meant to be used.
So let's clear that up:
There's used memory, which is where important data is stored, and if that reaches 100% you're dead.
Then there's cache/buffer memory, which is used as long as there is space to do so. It's optional memory, used mostly to access disk files faster. If you run out of free memory, it will simply free itself and let you access the disk directly.
Clearing cached memory as you suggest is in most cases useless and means you're deactivating an optimization, therefore you'll get a slowdown.
If you really run out of memory, that is if your "used memory" is high, and you begin to see swap usage, then you must do something.
HOWEVER: there's a known bug affecting AWS instances, with the dentry cache eating memory for no apparent reason. It's clearly described and solved in this blog.
My own experience with this bug is that "dentry" cache consumes both "used" and "cached" memory and does not seem to release it in time, eventually causing swap. The bug itself can consume resources anyway, so you need to look into it.
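If you suspect that bug, the slab caches (where dentries and inodes live) can be inspected directly before resorting to drop_caches:
grep -E '^(Slab|SReclaimable|SUnreclaim)' /proc/meminfo   # overall slab usage and how much of it is reclaimable
sudo slabtop -o -s c | head -n 15                         # per-cache breakdown sorted by size; look for "dentry"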
Hate to bring an old thread back from the dead, but I've been dealing with memory issues lately on my Linux Virtual Machines. Unfortunately, even with the virtualization of computing machines being great and the advancements of Linux memory and resource allocation being superb, conflicts occur when the hypervisor acts out what it calls "performance features".
VMware will actively send RAM that hasn't been "written or modified" recently to the disk. When your disk is on a SAN, that means reading from RAM is now at 1 Gbps to 10 Gbps at best, and only if you have a really performant RAID and steady network access (ignoring the fact that the RAM of, say, 100 VMs is now all using the same SAN). DDR3 RAM operates at 25 Gbps+ on modern systems, so I'll assume you can see the problem with systems running at 1/25th to less than 1/2 of the speed anticipated.
The caches on my Linux systems are literally the same speed as disk I/O of the filesystem, meaning they do not help our performance and are actively sending the OS's RAM into swap instead of clearing caches. This is a huge problem thanks to VMware, not because of Linux, but be aware that cloud infrastructure often does stupid crap like this all the time, unfortunately. You can read more here: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/perf-vsphere-memory_management.pdf or, if you use VMware, you'll surely notice the "allocated memory" vs "active memory" distinction, where your VMs will always display a different amount than VMware because of this distinction and treatment of the memory.

Go large-memory garbage collection performance

I am considering implementing a memory caching daemon in Go. It has the potential of getting some serious memory utilization (say, a terabyte). Fragmenting into separate heaps is not a good option; I want it all in one memory space. Does anyone have experience running Go with such huge memory sizes? Is GC going to perform acceptably?
I am trying to do the same, but the only project that gave me good performance for caching data was the treap (a binary tree) at https://github.com/stathat/treap, which supported more than 1 million nodes in memory on one machine (Ubuntu 12.04 LTS with 8 GB of memory). Furthermore, it was fast at loading and searching data.
Other projects that I tested were LMDB (which did not support many nodes in memory), kv, go-cache and goleveldb, but none was as fast at recovering data from memory as treap.
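Regarding the GC part of the question: with heaps that large, the main knob for trading CPU against memory overhead is GOGC. A hedged sketch only; the binary name is made up and the value 50 is illustrative, not a recommendation:
GOGC=50 ./mycachedaemon     # collect more often, keeping heap overhead closer to the live data set
GOGC=off ./mycachedaemon    # disable automatic GC entirely and trigger collections manually from the program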

Why the memory fragmentation ratio is less than 1 in Redis

Redis supports 3 memory allocators: libc, jemalloc and tcmalloc. When I do a memory usage test, I find that mem_fragmentation_ratio in INFO MEMORY can be less than 1 with the libc allocator. With jemalloc or tcmalloc, this value is greater than or equal to 1, as it should be.
Could anyone explain why mem_fragmentation_ratio is less than 1 with libc?
Redis version: 2.6.12, on CentOS 6.
Update:
I forgot to mention that one possible reason is that swap happens, in which case mem_fragmentation_ratio will be < 1.
But when I do my test, I adjust swappiness, and even turn swap off. The result is the same. And my redis instance actually does not use much memory.
Generally, you will have less fragmentation with jemalloc or tcmalloc than with libc malloc. This is due to 4 factors:
more granular allocation classes for jemalloc and tcmalloc. It reduces internal fragmentation, especially when Redis has to allocate a lot of very small objects.
better algorithms and data structures to prevent external fragmentation (especially for jemalloc). Obviously, the gain depends on your long term memory allocation patterns.
support of "malloc size". Some allocators offer an API to return the size of allocated memory. With glibc (Linux), malloc does not have this capability, so it is emulated by explicitly adding an extra prefix to each allocated memory block. It increases internal fragmentation. With jemalloc and tcmalloc (or with the BSD libc malloc), there is no such
overhead.
jemalloc (and tcmalloc with some setting changes) can be more aggressive than glibc to release memory to the OS - but again, it depends on the allocation patterns.
Now, how is it possible to get inconsistent values for mem_fragmentation_ratio?
As stated in the INFO documentation, the mem_fragmentation_ratio value is calculated as the ratio between memory resident set size of the process (RSS, measured by the OS), and the total number of bytes allocated by Redis using the allocator.
Now, if more memory is allocated with libc (compared to jemalloc,tcmalloc), or if more memory is used by some other processes on your system during your benchmarks, Redis memory may be swapped out by the OS. It will reduce the RSS (since a part of Redis memory will not be in main memory anymore). The resulting fragmentation ratio will be less than 1.
In other words, this ratio is only relevant if you are sure Redis memory has not been swapped out by the OS (if it is not the case, you will have performance issues anyway).
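One way to verify that the kernel has not swapped out part of the redis-server process (assuming a single local instance):
grep VmSwap /proc/$(pidof redis-server)/status   # anything above 0 kB means part of the process is in swap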
Other than swap, I know 2 ways to make the "memory fragmentation ratio" less than 1:
Have a redis instance with little or no data, but thousands of idling client connections. From my testing, it looks like redis will have to allocate about 20 KB of memory for each client connection, but most of it won't actually be used (i.e. won't appear in RSS) until later.
Have a master-slave setup with, let's say, 8 GB of repl-backlog-size. The 8 GB will be allocated as soon as the replication starts (on the master only for versions < 4.0, on both master and slave otherwise), but the memory will only be used as we start writing to the master. So the ratio will be way below 1 initially, and then get closer and closer to 1 as the replication backlog gets filled.
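A hedged way to watch the ratio converge as the backlog fills, assuming a local instance on the default port:
watch -n 60 "redis-cli info memory | grep -E 'used_memory_human|used_memory_rss_human|mem_fragmentation_ratio'"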

Why does Redis memory usage not decrease when half of the keys are deleted?

Redis is used to save data, but it costs a lot of memory; its memory usage is up to 52.5%.
I deleted half of the keys in redis, and the return code of the delete operation is ok, but its memory usage doesn't decrease.
What's the reason? Thanks in Advance.
My operation code is as below:
// save data
m_pReply = (redisReply *)redisCommand(m_pCntxt, "set %b %b", mykey.data(), mykey.size(), &myval, sizeof(myval));
// del data
m_pReply = (redisReply *)redisCommand(m_pCntxt, "del %b", mykey.data(), mykey.size());
The redis info:
redis 127.0.0.1:6979> info
redis_version:2.4.8
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.6
process_id:28799
uptime_in_seconds:1289592
uptime_in_days:14
lru_clock:127925
used_cpu_sys:148455.30
used_cpu_user:38023.92
used_cpu_sys_children:23187.60
used_cpu_user_children:123989.72
connected_clients:22
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:31903334872
used_memory_human:29.71G
used_memory_rss:34414981120
used_memory_peak:34015653264
used_memory_peak_human:31.68G
mem_fragmentation_ratio:1.08
mem_allocator:jemalloc-2.2.5
loading:0
aof_enabled:0
changes_since_last_save:177467
bgsave_in_progress:0
last_save_time:1343456339
bgrewriteaof_in_progress:0
total_connections_received:820
total_commands_processed:2412759064
expired_keys:0
evicted_keys:0
keyspace_hits:994257907
keyspace_misses:32760132
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:11672476
vm_enabled:0
role:slave
master_host:192.168.252.103
master_port:6479
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
db0:keys=66372158,expires=0
Please refer to the Memory allocation section at the following link:
http://redis.io/topics/memory-optimization
I quoted it here:
Redis will not always free up (return) memory to the OS when keys are
removed. This is not something special about Redis, but it is how most
malloc() implementations work. For example if you fill an instance
with 5GB worth of data, and then remove the equivalent of 2GB of data,
the Resident Set Size (also known as the RSS, which is the number of
memory pages consumed by the process) will probably still be around
5GB, even if Redis will claim that the user memory is around 3GB. This
happens because the underlying allocator can't easily release the
memory. For example often most of the removed keys were allocated in
the same pages as the other keys that still exist.
Since Redis 4.0.0 there's a command for this:
MEMORY PURGE
Should do the trick: https://redis.io/commands/memory-purge
Note however that command docs state:
This command is currently implemented only when using jemalloc as an allocator, and evaluates to a benign NOOP for all others.
And the README reminds us that:
Redis is compiled and linked against libc
malloc by default, with the exception of jemalloc being the default on Linux
systems. This default was picked because jemalloc has proven to have fewer
fragmentation problems than libc malloc.
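A quick way to see whether MEMORY PURGE actually hands pages back to the OS, assuming a local instance on the default port:
redis-cli info memory | grep used_memory_rss_human   # RSS before
redis-cli memory purge                               # ask jemalloc to release dirty pages
redis-cli info memory | grep used_memory_rss_human   # RSS after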
A good starting point is to use the Redis CLI command: MEMORY DOCTOR.
It can give you very valuable information and point you to the potential issue.
some useful links:
MEMORY DOCTOR command docs
What is defragmentation and what are the Redis defragmentation configs
example:
Peak memory: In the past this instance used more than 150% the memory that is currently using. The allocator is normally not able to release memory after a peak, so you can expect to see a big fragmentation ratio, however this is actually harmless and is only due to the memory peak, and if the Redis instance Resident Set Size (RSS) is currently bigger than expected, the memory will be used as soon as you fill the Redis instance with more data. If the memory peak was only occasional and you want to try to reclaim memory, please try the MEMORY PURGE command, otherwise the only other option is to shutdown and restart the instance.
High total RSS: This instance has a memory fragmentation and RSS overhead greater than 1.4 (this means that the Resident Set Size of the Redis process is much larger than the sum of the logical allocations Redis performed). This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. If the problem is a large peak memory, then there is no issue. Otherwise, make sure you are using the Jemalloc allocator and not the default libc malloc. Note: The currently used allocator is "jemalloc-5.1.0".
High allocator fragmentation: This instance has an allocator external fragmentation greater than 1.1. This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. You can try enabling 'activedefrag' config option.
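If the doctor points at allocator fragmentation, the option it mentions can be toggled at runtime (this requires a Redis built against jemalloc, per the notes above):
redis-cli config get activedefrag
redis-cli config set activedefrag yes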

Resources