I'm running Elasticsearch 1.3.2 on CentOS 6.5, started from the terminal like so:
bin/elasticsearch -console
My server has 16GB of RAM. How do I give 8GB of it to ES?
This post may have the answer, but I just couldn't piece it together. Further, the docs have only confused me more...
It's good practice to keep 50% of memory for the Elasticsearch heap size. As you mentioned, you don't have the service wrapper, so in this case you can assign the heap size when starting Elasticsearch:
ES_HEAP_SIZE=8g bin/elasticsearch
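Alternatively, you can export the variable first and then start Elasticsearch the way you already do, then confirm the heap actually changed (a sketch; adjust host and port to your setup):
export ES_HEAP_SIZE=8g
bin/elasticsearch -console
curl 'localhost:9200/_nodes/stats/jvm?pretty' | grep heap_max_in_bytes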
I am writing this question to share the solution we found at our company.
We migrated Solr from a Docker-only setup to a Kubernetes setup.
On Kubernetes, the environment suffered from slowness.
At least for me, the solution was atypical.
Environment:
Solr (8.2.0) with just one node
Solr data of 250GB on disk
Kubernetes over Rancher
Node with 24 vCPUs and 32GB of RAM
Node hosts Solr and the NGINX ingress
Reserved 30GB for the Solr pod in Kubernetes
Reserved 25GB of heap for the Solr JVM
Expected Load:
350 updates/min (PDF and HTML documents)
50 selects/min
The result was Solr degrading over time, with high load on the host. The culprit was heavy disk access.
After one week of frustrating adjustments, this is the simple solution we found:
The Solr JVM had 25GB of heap. We decreased the value to 10GB.
This is the command to start solr with the new values:
/opt/solr/bin/solr start -f -force -a '-Xms10g -Xmx10g' -p 8983
If someone can explain what happened that would be great.
My guess is that Solr was trying to build its cache and Kubernetes kept reaping that cache, so Solr ended up continuously reading from disk trying to rebuild it.
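A rough way to watch this on the node is to compare page-cache and resident memory before and after the heap change (a sketch, assuming standard Linux tooling; the cgroup path is for cgroup v1 and may differ on your hosts):
free -h
cat /sys/fs/cgroup/memory/memory.stat
The 'buff/cache' column from free and the 'cache' counter in memory.stat should grow once the smaller heap leaves room for the OS page cache.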
I just started JanusGraph and my total RAM usage is more than 5GB without firing any queries. I have the following dependent services running
JanusGraph
GremlinServer (via JanusGraph)
Cassandra (via JanusGraph)
Elasticsearch (via JanusGraph)
There are several Java instances running.
I am trying to find the minimum system requirements for JanusGraph, but I cannot find them mentioned in its documentation. Does anyone have a link, or any idea what the minimum system requirements are?
Also, are there any configuration changes to make it consume less RAM?
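Not an authoritative answer, but each bundled service has its own JVM heap settings that can be lowered (a sketch, assuming the full JanusGraph distribution; file locations vary by version):
# Bundled Cassandra: lower MAX_HEAP_SIZE / HEAP_NEWSIZE in its cassandra-env.sh
MAX_HEAP_SIZE="1G"
HEAP_NEWSIZE="256M"
# Bundled Elasticsearch: lower -Xms/-Xmx in its jvm.options
-Xms512m
-Xmx512m
# Gremlin Server: pass smaller heap flags through the JAVA_OPTIONS environment variable
export JAVA_OPTIONS="-Xms512m -Xmx1g"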
We have a Linux machine with 56GB of disk, but querying the maximum available space in ES shows only 28GB. We would like ES to use more of the available space on the machine, but could not find the configuration for this. Thanks.
Elasticsearch uses all the space that is available on your machine. However, if this is the only machine in your cluster, then no copies of the data will be created (replicas only make sense in a distributed environment for fault tolerance). So if you don't have more data, then no more space is used.
It might make sense to talk about your setup and provide more information.
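To see how much disk Elasticsearch actually has available for its data path, you can compare its own view with the filesystem's (a sketch; adjust host/port and the data path to your install):
curl 'localhost:9200/_cat/allocation?v'
df -h /var/lib/elasticsearch
If the filesystem holding path.data is smaller than the whole disk, that is the number Elasticsearch will report.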
I'm using the Elasticsearch service, version 2.3.3, on Windows 7.
The memory usage of the service increases with each query but is not released, even after the client application has exited.
I tried to clear cached data with the command below, but the memory was still held.
POST /_cache/clear
Is there any way to release it?
Thanks in advance!
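For what it's worth, one way to check whether that memory is cache or just committed JVM heap is to look at the node stats (a sketch; the JVM generally keeps heap it has already claimed rather than returning it to the OS):
GET /_nodes/stats/jvm
GET /_nodes/stats/indices/fielddata,query_cache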
Elasticsearch 1.7.2 on CentOS, 8GB RAM
We see that ES_HEAP_SIZE should be increased to 4g.
The only place this seems to be declared in the ES environment is in /etc/init.d/elasticsearch.
We set it to 4g in this init file and restarted ES, but the JVM "heap_max_in_bytes" (as returned from /_nodes/stats) did not move from the default 1g value.
Where and how can we get control of ES_HEAP_SIZE?
(I should add: the similar-looking threads here on SO are either dated [e.g. they apply to earlier versions of ES and not to 1.7.x], are for other platforms [Windows, OS X], or do not work [we have tried them, and you can see many of the responses are tagged 'this is a hack, don't do it'].)
(I should further note that the ES docs document this element, and suggest what to set it to, but do not instruct how or where.)
Note: Below is for Elasticsearch 1.7.x. For 5.3 and higher, it is different.
Per a comment that is rather buried on How to change Elasticsearch max memory size:
On CentOS, /etc/sysconfig/elasticsearch is the appropriate place to make these changes.
This has been tested and verified in my CentOS 7 environment. I strongly expect it to also work on CentOS 6.
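For example (a sketch, assuming the RPM/service install where the init script sources this file; adjust host/port for the verification step):
# /etc/sysconfig/elasticsearch
ES_HEAP_SIZE=4g
Then restart the service and confirm the new heap size:
sudo service elasticsearch restart
curl 'localhost:9200/_nodes/stats/jvm?pretty' | grep heap_max_in_bytes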