Setting a RAM limit for the InfluxDB write process (Python client)

The Python InfluxDB client uses too much RAM during the write process. Since we do not have access to the InfluxDB configuration, we need to address this in Python directly. I tried optimizing with different settings for batch_size, flush_interval, and jitter_interval, with no success.
from influxdb_client import WriteOptions
from influxdb_client.client.write_api import WriteType

# `client` is an InfluxDBClient instance created elsewhere in the class
with client.write_api(
        success_callback=None,
        error_callback=callback.error,
        retry_callback=callback.retry,
        write_options=WriteOptions(
            write_type=WriteType.batching,
            batch_size=1_000,
            flush_interval=1_000,
            jitter_interval=1_000
        )) as write_api:
    points = '\n'.join(df_converted)
    write_api.write(record=points, bucket=self.db_name, write_precision=write_precision)
My question is: given a certain upper limit for RAM, can we somehow constrain the client so that it does not exceed it during writes/queries?
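One way to keep the peak bounded is to bypass the batching buffer entirely and stream the line protocol in fixed-size chunks, so only one chunk is held in memory at a time. This is a minimal sketch, not a tested solution; chunk_size and the connection details (url, token, org, db_name) are illustrative assumptions, not part of the original code:

from influxdb_client import InfluxDBClient
from influxdb_client.client.write_api import SYNCHRONOUS

chunk_size = 10_000  # assumption: tune against your RAM budget
with InfluxDBClient(url=url, token=token, org=org) as client:
    write_api = client.write_api(write_options=SYNCHRONOUS)
    for start in range(0, len(df_converted), chunk_size):
        # join and send one slice at a time so the full payload never
        # has to exist in memory at once
        chunk = '\n'.join(df_converted[start:start + chunk_size])
        write_api.write(record=chunk, bucket=db_name, write_precision=write_precision)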

Related

janusgraph-0.5.3 memory configuration

I am using janusgraph-0.5.3 (with Cassandra) and I want to know how to increase the memory allocated to the Gremlin Server process to 2GB.
I am trying to bulk-load data onto my gremlin-server, but it is failing with an error. I would like to know how to check and increase the default memory allocation.
I need help locating the relevant configuration files (.yaml or otherwise) as well as the values in these files that would need to change.
Thanks
I changed the gremlin-server.sh file to allocate additional memory:
# Set Java options
if [ "$JAVA_OPTIONS" = "" ] ; then
    echo "Setting xmx and xss"
    # -Xms/-Xmx set the initial and maximum heap; -Xss sets the per-thread stack size
    JAVA_OPTIONS="-Xms1024m -Xmx3074m -Xss2048k -javaagent:$JANUSGRAPH_LIB/jamm-0.3.0.jar -Dgremlin.io.kryoShimService=org.janusgraph.hadoop.serialize.JanusGraphKryoShimService"
fi
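Since the if guard above only applies those defaults when JAVA_OPTIONS is empty, an alternative (untested sketch) is to pass the settings through the environment instead of editing the script. Note that doing so skips the script's other defaults, such as the jamm javaagent, so carry those over if your setup needs them:

JAVA_OPTIONS="-Xms1024m -Xmx2048m" bin/gremlin-server.sh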

Cannot increase work_mem above 1GB using PostgreSQL 9.3 on Windows Server

I would like to tweak the Postgres config for use on a Windows server. Here is my current postgresql.conf file: http://pastebin.com/KpSi2zSd
I would like to increase work_mem and maintenance_work_mem, but if I raise the values above 1GB I get an error when starting the service.
Nothing is added to the log files (at least not in data\pg_log). How can I figure out what is causing the issue (e.g. by increasing logging)? Could this have anything to do with memory management differences between Windows and Postgres?
Here are my server specs:
Windows Server 2012 R2 Datacenter (64 bit)
Intel CPU E5-2670 v2 @ 2.50 GHz
512 GB RAM
PostgreSQL 9.3
Under Windows the value for work_mem is limited to 2GB (even on a 64-bit system); there is no workaround as far as I know.
I don't know why you couldn't set it above 1GB, though. Maybe the sum of work_mem and maintenance_work_mem hits another limit I am not aware of.
Setting work_mem that high by default is usually not a good idea. With 512GB RAM and just 10 users this might work, but keep in mind that work_mem is requested by a statement for every sort, group, or hash operation in a single query, so a single statement can request this amount of memory 15 or 20 times. With work_mem at 2GB, ten concurrent statements each doing 20 such operations could in principle request 10 × 20 × 2GB = 400GB.
You don't need to change this in postgresql.conf - this can be changed dynamically if you know that the following query will benefit from a large work_mem, by running:
set session work_mem='2097151';
If you use a higher number, you'll get an error message telling you the limit (the value is in kB, so 2097151 is just under 2GB):
ERROR: 2097152 is outside the valid range for parameter "work_mem" (64 .. 2097151)
Even if Postgres isn't using all the memory, it still benefits from it. Postgres (unlike e.g. Oracle) relies heavily on the filesystem cache rather than doing all the caching itself. Values for shared_buffers beyond roughly 8GB rarely show any benefit.
What you do need to tell Postgres is how much memory the operating system usually uses for caching, by setting effective_cache_size to the appropriate value. Postgres doesn't use that for caching, but it influences the planner's choice to e.g. prefer an index scan over a seq scan if the index is likely to be in the file system cache.
You can see the current size of the file system cache in the Windows Task Manager (or e.g. Process Explorer).
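Putting this together, a hypothetical starting point for postgresql.conf on the 512GB server described above could look like the following (illustrative values only, not a recommendation for any specific workload):

shared_buffers = 8GB                # beyond roughly 8GB rarely shows benefit
work_mem = 256MB                    # requested per sort/group/hash operation
maintenance_work_mem = 1GB
effective_cache_size = 400GB        # planner hint about the OS file system cache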
As described above, on Windows it is more beneficial to rely on the OS cache.
If you use RAMMap from Sysinternals (Microsoft), you can see exactly what is held for Postgres in the OS cache, and hence how much is actually cached.

Why is 'Total MB' in a golang heap profile less than 'RES' in top?

I have a service written in Go that takes 6-7G of memory at runtime (RES in top), so I used the pprof tool to try to figure out where the problem is.
go tool pprof --pdf http://<service>/debug/pprof/heap > heap_prof.pdf
But the profile only accounts for about 1-2G of memory ('Total MB' in the PDF). Where's the rest?
I've also tried profiling my service with GOGC=off; in that case, 'Total MB' is exactly the same as 'RES' in top. It seems that memory which has been GCed but not yet returned to the kernel is not profiled.
Any idea?
P.S. I've tested on both 1.0.3 and 1.1rc3.
This is because Go currently does not give the memory of GC-ed objects back to the operating system; to be precise, this is the case only for objects smaller than a predefined limit (32KB). Instead, the memory is cached to speed up future allocations (see Go's malloc implementation). It also seems that this is going to be fixed in a future release.
Edit:
New GC behavior: if the memory is not used for a while (about 5 minutes), the runtime will advise the kernel to remove the physical mappings from the unused virtual address ranges. This process can be forced by calling runtime.FreeOSMemory().

Can ETW (Event Tracing for Windows) be used to also gather memory statistics?

Is it possible to use ETW to also get memory statistics for all processes and the system?
By memory statistics I mean e.g. committed bytes, private bytes, paged pool, working set, ...
I cannot find anything about using xperf to get and see memory statistics; it is always about CPU, disk, and network.
One could probably use performance counters to get that kind of information, but how can one overlay the statistics graphically in one chart (i.e. how to correlate/sync the timestamps)?
Your best bet on Windows 8.1 and higher is the Microsoft-Windows-Kernel-Memory provider, which records per-process memory information every 0.5 s. See https://github.com/google/UIforETW/issues/80 for details. UIforETW enables this by default when it is available.
You could also try the MEMINFO provider. It gives a system-wide overview of memory pressure. It shows the Active List (currently in use memory), the Standby List ('useful' pages not currently in use, such as the disk cache), and the Zero and Free lists (genuinely free memory). This at least lets you tell whether a system is running out of memory.
You could also try MEMINFO_WS and CONTMEMGEN but these are undocumented so I really don't know what they do. They show up in xperf -providers k but when I record with them I can't see any new graphs appearing. Apparently Microsoft ships these providers but no way to view them. Sigh...
If you want more memory details on Windows 7 -- such as per-process working sets -- your best bet is to have a process running which periodically queries this data and emits it in custom ETW events. This is available in a prepackaged form in UIforETW which can query the working set of a specified set of processes once a second. See the announcement post for how to get UIforETW:
https://randomascii.wordpress.com/2015/04/14/uiforetw-windows-performance-made-easier/
UIforETW's Windows 7 working set data shows up in Generic Events under Task Name == WorkingSet. On Windows 8.1 the OS working set data (more detailed, more efficiently recorded) shows up under Memory -> Virtual Memory Snapshots.
You can trace memory usage with ReferenceSet kernel group. It includes the following traceflags:
PROC_THREAD+LOADER+HARD_FAULTS+MEMORY+FOOTPRINT+VIRT_ALLOC+MEMINFO+VAMAP+SESSION+REFSET+MEMINFO_WS
MEMORY = Memory tracing
FOOTPRINT+REFSET = Support footprint analysis
MEMINFO = Memory List Info (active, standby and others you see in ResMon)
VIRT_ALLOC = Virtual allocation reserve and release
VAMAP = mapped files information
MEMINFO_WS = Working set Info
As you can see, xperf can capture a lot of memory data when you use the right flags.
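For example, a minimal capture session using this kernel group could look like the following (memory_trace.etl is an illustrative file name; run from an elevated prompt, and check xperf -providers k for flag availability on your system):

xperf -on ReferenceSet
rem ... run the scenario you want to measure ...
xperf -d memory_trace.etl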

Memory limit in Node.js (and Chrome V8)

In many places on the web, you will see:
What is the memory limit on a node process?
and the answer:
Currently, by default V8 has a memory limit of 512mb on 32-bit systems, and 1gb on 64-bit systems. The limit can be raised by setting --max-old-space-size to a maximum of ~1gb (32-bit) and ~1.7gb (64-bit), but it is recommended that you split your single process into several workers if you are hitting memory limits.
Can somebody confirm this is the case as Node.js seems to update frequently?
And more importantly, will it be the case in the near future?
I want to write JavaScript code which might have to deal with 4GB of JavaScript objects (and speed might not be an issue).
If I can't do it in Node, I will end up doing in java (on a 64bit machine) but I would rather not.
This has been a big concern for some people using Node.js, and there is good news. The new memory limit for V8 is now unknown (untested) on 64-bit, and on 32-bit it has been raised to as much as the 32-bit address space allows.
Read more here: http://code.google.com/p/v8/issues/detail?id=847
Starting a Node.js app with a heap of 8 GB:
node --max-old-space-size=8192 app.js
See node command line options documentation or run:
node --help --v8-options
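To verify which limit actually took effect, a quick check using Node's built-in v8 module (heap_size_limit is reported in bytes, roughly the configured size):

node --max-old-space-size=8192 -e "console.log(require('v8').getHeapStatistics().heap_size_limit)"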
I'm running a process right now on Ubuntu Linux that has a definite memory leak, and node 0.6.0 is pushing 8GB. Think it's handled :).
Memory Limit Max Value is 3049 for 32-bit users
If you are running Node.js where os.arch() === 'ia32' returns true, the max value you can set is 3049,
per my testing with node v11.15.0 on Windows 10:
if you set it to 3050, it will overflow and effectively be set to 1;
if you set it to 4000, it will effectively be set to 951 (4000 - 3049).
Set Memory to Max for Node.js
node --max-old-space-size=3049
Set Memory to Max for Node.js with TypeScript
node -r ts-node/register --max-old-space-size=3049
See: https://github.com/TypeStrong/ts-node/issues/261#issuecomment-402093879
It looks like it's true. When I tried to allocate a 50 MB buffer:
var buf = new Buffer(50*1024*1024);
I got an error:
FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory
Meanwhile, Node.js was using about 457 MB of memory according to the process monitor.
