How to control total Memgraph RAM usage? - memgraphdb

Since Memgraph is an in-memory database, how can you control its total RAM usage?
It looks like storage, procedure calls, and queries all take up part of the available RAM.

Memgraph memory usage can be impacted by storage, queries, and procedure calls.
By setting the --memory-limit flag in the Memgraph configuration file, you can set the maximum amount of memory (in MiB) that a Memgraph instance can allocate during its runtime. If the memory limit is exceeded, only the queries that don't require additional memory are allowed. If the memory limit is exceeded while a query is running, the query is aborted, and its transaction becomes invalid.
The main Memgraph configuration file is available in /etc/memgraph/memgraph.conf.
There are also more granular controls for setting limits per query or per procedure call; more details can be found in the documentation on controlling memory usage.
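For illustration, a minimal sketch of what that flag could look like in /etc/memgraph/memgraph.conf (the 4096 MiB value is only an example, not a recommendation):

# Cap the total memory the instance may allocate at runtime, in MiB.
--memory-limit=4096

The instance typically has to be restarted for a changed configuration file to take effect.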

Related

AWS Lambda: Does pricing depend on allocated memory or function runtime memory?

How can I know the actual memory my AWS Lambda function is taking up?
If my running code takes 50 MB out of the allocated 128 MB, does AWS charge for 50 MB or 128 MB?
What happens if my function requires more than the allocated memory (maybe for a few events)?
Use AWS CloudWatch to monitor memory usage. Alternatively, you can use a third-party service such as Dashbird. In either case you should be able to configure alerts for excess usage or other unexpected behavior.
AWS will charge you for the full 128 MB. You are billed for what is allocated on your behalf.
If you exceed your memory limit, the function call will be terminated.
See also Understanding and Controlling AWS Lambda Costs.
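For example, every invocation writes a REPORT line to its CloudWatch Logs stream, which is the easiest place to compare the configured memory size with the peak memory actually used (the values below are illustrative):

REPORT RequestId: 00000000-0000-0000-0000-000000000000 Duration: 102.25 ms Billed Duration: 103 ms Memory Size: 128 MB Max Memory Used: 50 MB

Here the function is configured (and billed) for 128 MB even though it only used 50 MB at its peak.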
You are charged for the allocated memory, not for the used memory.
Also, if your required memory is greater than the allocated memory, the code will fail at runtime.
For AWS Lambda pricing details refer to the below link.
https://aws.amazon.com/lambda/pricing/

statement_mem seems to limit the node memory instead of the segment memory

According to the Greenplum documentation, GUCs such as statement_mem and gp_vmem_protect_limit should work at the segment level. The same should happen with a resource queue memory allowance.
On our system we have 8 primary segments per node. So if I set the statement_mem of a query to 2GB, I would expect the query to consume (if needed) up to 2GB x 8 = 16GB of RAM per node. But it seems that it only uses 2GB total per node before starting to spill to disk (that is, 2GB/8 per segment). I tried different statement_mem values and saw the same thing.
The max_statement_mem and gp_vmem_protect_limit limits are never reached. RAM usage on the nodes has been monitored using various tools (from GP Command Center to top and free, as well as the Pivotal-suggested session_level_memory_consumption view).
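For reference, this is roughly how the settings above are being set and checked from psql (the 2GB value is the one from the experiment; statement_mem cannot exceed max_statement_mem, which may need to be raised first):

-- Per-query working memory for this session.
SET statement_mem = '2GB';
-- Inspect the related limits.
SHOW statement_mem;
SHOW max_statement_mem;
SHOW gp_vmem_protect_limit;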
EDIT: Added two documentation sources where statement_mem is defined per segment and not per host (@Jon Roberts).
In the GP best practices guide, at the beginning of page 32, it clearly says that if statement_mem is 125MB and we have 8 segments on the server, each query will get 1GB allocated per server.
http://gpdb.docs.pivotal.io/4300/pdf/GPDB43BestPractices.pdf
The article at https://support.pivotal.io/hc/en-us/articles/201947018-Pivotal-Greenplum-GPDB-Memory-Configuration also seems to treat statement_mem as segment memory and not host memory. It keeps interrelating statement_mem with the memory limit of the resource queues as well as with gp_vmem_protect_limit (both parameters are defined on a per-segment basis).
This is why I'm getting confused about how to properly manage the memory resources.
Thanks
I incorrectly stated that statement_mem works on a per-host basis, and that is not the case. This link is talking about the memory on a segment level:
http://gpdb.docs.pivotal.io/4370/guc_config-statement_mem.html#statement_mem
With the default gp_resqueue_memory_policy of "eager_free", memory gets re-used, so the aggregate amount of memory used may look low for a particular query execution. If you change it to "auto", where the memory isn't re-used, the memory usage is more noticeable.
Run an "explain analyze" of your query and look at the slices that are used. With eager_free, the memory gets re-used, so you may only have a single slice wanting more memory than is available, such as this one:
(slice18) * Executor memory: 10399K bytes avg x 2 workers, 10399K bytes max (seg0). Work_mem: 8192K bytes max, 13088K bytes wanted.
And for your question on how to manage the resources, most people don't change the default values. A query that spills to disk is usually an indication that the query needs to be revised or the data model needs some work.
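To tie this back to the "explain analyze" suggestion, a minimal sketch of the check, using a hypothetical sales table (look for the per-slice "Executor memory ... bytes wanted" lines in the output, like the one quoted above):

EXPLAIN ANALYZE
SELECT region, count(*)
FROM sales
GROUP BY region;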

MongoDB process size in task manager

I have been working with MongoDB and inserted up to 1 GB of data into a database collection. I noticed that the MongoDB process size shown in Task Manager is 25 MB, but the overall memory in the Performance tab of Task Manager keeps growing as I insert data. Why is that 1 GB not part of the process size shown by Task Manager? I know that MongoDB stores data in files, but it also caches part of that data in memory.
MongoDB (<= 2.6) uses memory-mapped files. This means that the database asks the operating system to map the data files to a portion of virtual memory. The operating system then handles moving things in and out of physical memory according to what the database accesses. Your 1GB of data is mapped into virtual memory, but is likely not resident in physical memory since you have not accessed it recently. To see more detailed statistics about MongoDB's memory usage, run db.serverStatus() in the shell and look at the mem section. You can read a bit more about the memory-mapped storage engine in the storage FAQ.
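For example, an annotated, illustrative look at that section from the mongo shell (the numbers are made up; values are in MB):

> db.serverStatus().mem
{
    "bits" : 64,        // 64-bit build
    "resident" : 25,    // data actually held in physical memory
    "virtual" : 1200,   // virtual address space, including mapped files
    "mapped" : 1024     // size of the memory-mapped data files
}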

Why does Redis memory usage not reduce when I delete half of the keys?

Redis is used to save data, but it consumes a lot of memory, and its memory usage is up to 52.5%.
I deleted half of the keys in Redis, and the return code of the delete operation is OK, but its memory usage doesn't decrease.
What's the reason? Thanks in advance.
My operation code is as below:
// save data
m_pReply = (redisReply *)redisCommand(m_pCntxt, "set %b %b", mykey.data(), mykey.size(), &myval, sizeof(myval));
// del data
m_pReply = (redisReply *)redisCommand(m_pCntxt, "del %b", mykey.data(), mykey.size());
The redis info:
redis 127.0.0.1:6979> info
redis_version:2.4.8
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.6
process_id:28799
uptime_in_seconds:1289592
uptime_in_days:14
lru_clock:127925
used_cpu_sys:148455.30
used_cpu_user:38023.92
used_cpu_sys_children:23187.60
used_cpu_user_children:123989.72
connected_clients:22
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:31903334872
used_memory_human:29.71G
used_memory_rss:34414981120
used_memory_peak:34015653264
used_memory_peak_human:31.68G
mem_fragmentation_ratio:1.08
mem_allocator:jemalloc-2.2.5
loading:0
aof_enabled:0
changes_since_last_save:177467
bgsave_in_progress:0
last_save_time:1343456339
bgrewriteaof_in_progress:0
total_connections_received:820
total_commands_processed:2412759064
expired_keys:0
evicted_keys:0
keyspace_hits:994257907
keyspace_misses:32760132
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:11672476
vm_enabled:0
role:slave
master_host:192.168.252.103
master_port:6479
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
db0:keys=66372158,expires=0
Please refer to the Memory allocation section at the following link:
http://redis.io/topics/memory-optimization
I quoted it here:
Redis will not always free up (return) memory to the OS when keys are
removed. This is not something special about Redis, but it is how most
malloc() implementations work. For example if you fill an instance
with 5GB worth of data, and then remove the equivalent of 2GB of data,
the Resident Set Size (also known as the RSS, which is the number of
memory pages consumed by the process) will probably still be around
5GB, even if Redis will claim that the user memory is around 3GB. This
happens because the underlying allocator can't easily release the
memory. For example often most of the removed keys were allocated in
the same pages as the other keys that still exist.
Since Redis 4.0.0 there's a command for this:
MEMORY PURGE
Should do the trick: https://redis.io/commands/memory-purge
Note however that command docs state:
This command is currently implemented only when using jemalloc as an allocator, and evaluates to a benign NOOP for all others.
And the README reminds us that:
Redis is compiled and linked against libc
malloc by default, with the exception of jemalloc being the default on Linux
systems. This default was picked because jemalloc has proven to have fewer
fragmentation problems than libc malloc.
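Assuming an instance new enough for the command (4.0.0 or later, unlike the 2.4.8 instance in the question) and built with jemalloc, the check-and-purge sequence from redis-cli would look roughly like this (the RSS value is the one from the INFO dump above):

$ redis-cli -p 6979 info memory | grep used_memory_rss
used_memory_rss:34414981120
$ redis-cli -p 6979 memory purge
OK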
A good starting point is to use the Redis CLI command: MEMORY DOCTOR.
It can give you very valuable information and point you to the potential issue.
Some useful links:
MEMORY DOCTOR command docs
What is defragmentation and what are the Redis defragmentation configs
Example:
Peak memory: In the past this instance used more than 150% the memory that is currently using. The allocator is normally not able to release memory after a peak, so you can expect to see a big fragmentation ratio, however this is actually harmless and is only due to the memory peak, and if the Redis instance Resident Set Size (RSS) is currently bigger than expected, the memory will be used as soon as you fill the Redis instance with more data. If the memory peak was only occasional and you want to try to reclaim memory, please try the MEMORY PURGE command, otherwise the only other option is to shutdown and restart the instance.
High total RSS: This instance has a memory fragmentation and RSS overhead greater than 1.4 (this means that the Resident Set Size of the Redis process is much larger than the sum of the logical allocations Redis performed). This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. If the problem is a large peak memory, then there is no issue. Otherwise, make sure you are using the Jemalloc allocator and not the default libc malloc. Note: The currently used allocator is "jemalloc-5.1.0".
High allocator fragmentation: This instance has an allocator external fragmentation greater than 1.1. This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. You can try enabling 'activedefrag' config option.
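If the doctor output points at allocator fragmentation rather than a past memory peak, active defragmentation can be switched on at runtime (Redis 4.0+ compiled with jemalloc; a sketch against a local instance):

127.0.0.1:6379> CONFIG SET activedefrag yes
OK
127.0.0.1:6379> CONFIG GET activedefrag
1) "activedefrag"
2) "yes"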

cma_alloc fails to create memory chunk of 10M

I am working on a camera driver, and whenever I try to allocate around 10M of memory it fails, while allocating 4-5M succeeds. Is there a limit on memory allocation using cma_alloc? If yes, how do I increase it?
At first glance I'd say allocating 10M of contiguous memory is already very suspicious. Why do you need that much?
You're on Android, so that indicates an embedded platform, making 10M of memory even more suspicious.
From the documentation I've found online, you also need to specify how much CMA memory the system needs at boot time. Did you specify more than 10M?
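For reference, the size of the global CMA area is normally reserved at boot time, either on the kernel command line or in the kernel configuration; a sketch (the 64M figure is only an example and has to fit the platform's RAM budget; some platforms describe the reserved area in the device tree instead):

# Kernel command line (bootargs):
cma=64M

# Or at kernel build time:
CONFIG_DMA_CMA=y
CONFIG_CMA_SIZE_MBYTES=64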
