EDITED - Based on comments from #opster elasticsearch ninja, I edited the original question to keep it focused on the low disk watermark error for ES.
For more general server optimization on a small machine, see:
Debugging Elasticsearch and tuning on small server, single node
For the follow-up on the original question and considerations related to debugging ES failures, see also:
https://chat.stackoverflow.com/rooms/213776/discussion-between-opster-elasticsearch-ninja-and-user305883
Problem: I noticed that elasticsearch is failing frequently, and I need to restart the server manually.
This question may relate to: High disk watermark exceeded even when there is not much data in my index
I want to have a better understanding of what elasticsearch does when disk space runs low, how to optimise the configuration, and only afterwards, as a last resort, restart the service automatically when the system fails.
Could you help me understand how to read the elasticsearch journal and decide how to fix the problems accordingly, suggesting best practices for tuning server ops on a small server machine?
My priority is that the system does not crash; somewhat lower performance is acceptable, and there is no budget to increase the server size.
Hardware
I am running elasticsearch on a single small server (2GB RAM), have 3 indices (500mb, 20mb and 65mb of store size) and several GB free on disk (solid state): I would like to allow the use of virtual memory rather than consuming RAM.
Below is what I did:
What does the journal say?
journalctl | grep elasticsearch
to explore failures related to ES.
May 13 05:44:15 ubuntu systemd[1]: elasticsearch.service: Main process exited, code=killed, status=9/KILL
May 13 05:44:15 ubuntu systemd[1]: elasticsearch.service: Unit entered failed state.
May 13 05:44:15 ubuntu systemd[1]: elasticsearch.service: Failed with result 'signal'.
Here I can see ES was killed.
EDITED: I have found it was due to an out of memory error from java, see the error below in service elasticsearch status; readers may also find it useful to run:
java -XX:+PrintFlagsFinal -version | grep -iE 'HeapSize|PermSize|ThreadStackSize'
to check current memory assignment.
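On a Debian/Ubuntu package install the heap is configured in /etc/elasticsearch/jvm.options; a quick sketch to see what is set there (the path is the package default and is an assumption, it may differ for other install methods):
# Hedged sketch: show the heap flags the Elasticsearch package install will use
grep -E '^-Xm[sx]' /etc/elasticsearch/jvm.options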
What does the ES log say?
check:
/var/log/elasticsearch
[2020-05-09T14:17:48,766][WARN ][o.e.c.r.a.DiskThresholdMonitor] [my_clustername-master] high disk watermark [90%] exceeded on [Ynm6YG-MQyevaDqT2n9OeA][awesome3-master][/var/lib/elasticsearch/nodes/0] free: 1.7gb[7.6%], shards will be relocated away from this node
[2020-05-09T14:17:48,766][INFO ][o.e.c.r.a.DiskThresholdMonitor] [my_clustername-master] rerouting shards: [high disk watermark exceeded on one or more nodes]
what does "shards will be relocated away from this node" if I only have one server and one instance working ?
service elasticsearch status
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2020-05-09 13:47:02 UTC; 32min ago
Docs: http://www.elastic.co
Process: 22691 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCES
Main PID: 22694 (java)
CGroup: /system.slice/elasticsearch.service
└─22694 /usr/bin/java -Xms512m -Xmx512m -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+U
What does my configuration say ?
I am using the default configuration of /etc/elasticsearch/elasticsearch.yml
and don't have any options configured for the watermark, like in https://stackoverflow.com/a/52006486/305883
Should I include them? What would they do?
Please note I have uncommented #bootstrap.memory_lock: true
because I only have 2gb of ram.
Even if elasticsearch will perform poorly when memory is swapping, my priority is that it does not fail and the site stays up and running.
Running on a single-node machine - how to handle unassigned replicas?
I understood that replicas cannot be assigned on the same node as their primary.
As a consequence, does it make sense to have replicas on a single node?
If a primary shard fails, will replicas come to the rescue or will they be unused anyway?
I wonder if I should delete them to make space, or better not.
Explanation of your question:
"Shards will be relocated away from this node" if I only have one server and one instance working?
Elasticsearch considers the available disk space before deciding whether to allocate new shards, relocate shards away, or put all indices into read-only mode, based on different thresholds of this error. The reason is that Elasticsearch indices consist of shards which are persisted on data nodes, and low disk space can cause the above issues.
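Those thresholds are configurable; a minimal sketch of loosening them on a small single-node box (the host and the percentage values are illustrative assumptions, and the same settings can also be placed in elasticsearch.yml):
# Hedged sketch: raise the disk watermarks so a small disk is not flagged so early
curl -X PUT "localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{
    "transient": {
      "cluster.routing.allocation.disk.watermark.low": "92%",
      "cluster.routing.allocation.disk.watermark.high": "95%",
      "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
    }
  }'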
In your case, as you have just one data node, all the indices on that data node will be put into read-only mode, and even if you free up space they won't become writable again until you explicitly call the API mentioned in opster's guide.
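For reference, that reset typically looks like the sketch below (localhost:9200 and the _all target are assumptions; _all clears the block on every index):
# Hedged sketch: clear the flood-stage read-only block on all indices
curl -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only_allow_delete": null}'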
Edit: On a single node it would be better to disable replicas, as Elasticsearch will not allocate a replica of a shard to the same data node that holds its primary. So it doesn't make sense to have replicas on a single-node Elasticsearch cluster, and keeping them will unnecessarily mark your indices and cluster health as yellow (missing replicas).
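If you decide to drop them, a minimal sketch of the settings call (the host and the _all target are assumptions; you can also target individual indices):
# Hedged sketch: set the replica count to zero on all indices of a single-node cluster
curl -X PUT "localhost:9200/_all/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'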
Related
In some of my indices, I'm doing "index.blocks.read_only_allow_delete": true by using the PUT /index/_settings API call. But after around 10 seconds, the setting disappears and the index is writable again.
I'm wondering if this could be a bug in ES, since in version 6.8 a change was made to reset this setting automatically once a node whose disk had gone over the flood stage dropped back below the normal thresholds.
I'm experiencing that odd behaviour in ES 7.9. What I expected is that, if ES changed the setting to true because of the watermarks, it could reset it to false later; but if an operator changed the setting to true manually, ES would respect that setting.
These are the docs where I read about that behaviour:
Controls the flood stage watermark, which defaults to 95%. Elasticsearch enforces a read-only index block ( index.blocks.read_only_allow_delete ) on every index that has one or more shards allocated on the node, and that has at least one disk exceeding the flood stage. This setting is a last resort to prevent nodes from running out of disk space. The index block is automatically released when the disk utilization falls below the high watermark.
Cross-posted here.
I ended up using index.blocks.read_only instead, as this one is not updated by ElasticSearch automatically.
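For completeness, a minimal sketch of how that block can be set by hand (the index name and host are placeholders, not from the original post):
# Hedged sketch: apply a manual read-only block that Elasticsearch does not clear on its own
curl -X PUT "localhost:9200/my-index/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index.blocks.read_only": true}'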
I have about 130 million articles in my Postgres database on AWS. I am trying to index them with elasticsearch. In a screen, I entered:
python manage.py search_index --rebuild -f --parallel --model [APP NAME].[MODEL NAME]
Everything began correctly. The output was
Deleting index '[MODEL NAME]'
Creating index '[MODEL NAME]'
Indexing 129413202 'MODEL NAME' objects (parallel)
But after about 15 hours, the output was "Killed". I was running this on a t2.xlarge EC2 instance, which has 16 GBs of memory. Interestingly, the "Killed" message happened after I saw that the connection to the AWS server was broken, but that shouldn't matter if the process was run in a screen. Any idea what the issue is? Do I just need to get an even larger EC2 instance?
A process unexpectedly exiting with the message Killed often means it received a SIGKILL; if so, the exit code would be 137. It's hard to be certain here, since a process can obviously print Killed and exit with code 137 on its own, but assuming you're not doing that in your code then this is what I'd check next.
An unexpected SIGKILL often comes from the kernel's OOM killer which takes action when the system runs out of memory and typically kills the process with the largest memory footprint. If so it will have logged details in the kernel logs that you can read with dmesg.
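To confirm, a quick check of the kernel log (a sketch; the exact wording of the OOM-killer message varies by kernel version):
# Hedged sketch: look for OOM-killer activity in the kernel log
dmesg -T | grep -iE 'killed process|out of memory'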
If it was the OOM killer then this sounds like a bug in this indexing code. Indexing a large body of documents into Elasticsearch should require pretty limited working memory, nowhere near 16GB, but it's easy to accidentally keep too much data in memory for too long which would lead to excessive memory usage.
python manage.py search_index suggests you're using the Django Elasticsearch DSL which fixed a performance issue relatively recently. Make sure you're using a version that contains this fix.
Installed Kibana and Elasticsearch on my macOS Catalina via Brew (not sudo), but I'm not able to install the sample data sets. Does anyone have any idea why I'm getting this Forbidden error and how to resolve it? The error message is in the bottom right of the picture.
Go to the elasticsearch configuration, uncomment the path.logs line, and fill it with the right path.
Check whether you have enough disk space available (usage above 90% triggers the high watermark).
A good way to find the reason for any error is the logs, if available :)
I was trying to load the sample data (eCommerce orders, flights, web logs) into my Kibana. I was getting some errors. Logs are shown below.
elasticsearch.log
[o.e.c.r.a.DiskThresholdMonitor] high disk watermark [90%] exceeded on [/Users/xyz/Installs/ElasticSearch/elasticsearch-7.9.3/data/nodes/0] free: 15.1gb[6.4%], shards will be relocated away from this node; currently relocating away shards totalling [0] bytes; the node is expected to continue to exceed the high disk watermark when these relocations are complete
My Mac has 250 GB of space in total; I freed up an extra 20 GB, then it worked. Please check that you have enough free disk space (usage needs to stay below the 90% high watermark).
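To see how close the disk is to the watermark, a quick sketch (assumes a local node listening on port 9200):
# Hedged sketch: check disk usage as the OS and as Elasticsearch see it
df -h /
curl -s "localhost:9200/_cat/allocation?v"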
go to management -> index pattern -> create index pattern.
It looks like for some reason I hit the threshold where the indices are locked because there is no more disk space, so I had to unlock the indices manually.
https://discuss.elastic.co/t/unable-to-create-index-pattern-from-kibana/167184
I am indexing a large amount of daily data, ~160GB per index, into elasticsearch. I am facing a case where I need to update almost all the docs in the indices with a small amount of data (~16GB) which is of the format:
id1,data1
id1,data2
id2,data1
id2,data2
id2,data3
.
.
.
My update operations start at 16000 lines per second and within 5 minutes drop to 1000 lines per second, and don't go up after that. The update process for this 16GB of data currently takes longer than the entire indexing of 160GB.
My conf file for the update operation currently looks as follows
output {
  elasticsearch {
    action => "update"
    doc_as_upsert => true
    hosts => ["host1","host2","host3","host4"]
    index => "logstash-2017-08-1"
    document_id => "%{uniqueid}"
    document_type => "daily"
    retry_on_conflict => 2
    flush_size => 1000
  }
}
The optimizations I have done to speed up indexing in my cluster, based on the suggestions at https://www.elastic.co/guide/en/elasticsearch/guide/current/indexing-performance.html, are (see the sketch after this list for how they can be applied):
Setting "indices.store.throttle.type" : "none"
Index "refresh_interval" : "-1"
I am running my cluster on 4 d2.8xlarge EC2 instances. I have allocated 30GB of heap to each node.
While the update is happening, barely any CPU is used and the load is low as well.
Despite everything the update is extremely slow. Is there something very obvious that I am missing that is causing this issue? While looking at the threadpool data I find that the number of threads working on bulk operations is constantly high.
Any help on this issue would be really helpful
Thanks in advance
There are a couple of rule-outs to try here.
Memory Pressure
With 244GB of RAM, this is not terribly likely, but you can still check it out. Find the jstat command in the JDK for your platform, though there are visual tools for some of them. You want to check both your Logstash JVM and the ElasticSearch JVMs.
jstat -gcutil -h7 {PID of JVM} 2s
This will give you a readout of the various memory pools, garbage collection counts, and GC timings for that JVM as it works. It will update every 2 seconds, and print headers every 7 lines. Spending excessive time in full GC (the FGC/FGCT columns) is a sign that your heap is underallocated.
I/O Pressure
The d2.8xlarge is a dense-storage instance, and may not be great for a highly random, small-block workload. If you're on a Unix platform, top will tell you how much time you're spending in IOWAIT state. If it's high, your storage isn't up to the workload you're sending it.
If that's the case, you may want to consider provisioned IOP EBS instances rather than the instance-local stuff. Or, if your stuff will fit, consider an instance in the i3 family of high I/O instances instead.
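A quick way to eyeball iowait and per-device utilisation (a sketch; iostat ships with the sysstat package and may need to be installed first):
# Hedged sketch: watch CPU iowait and disk utilisation, refreshed every 2 seconds
iostat -x 2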
Logstash version
You don't say what version of Logstash you're using. This being StackOverflow, you're likely to be using 5.2. If that's the case, this isn't a rule-out.
But, if you're using something in the 2.x series, you may want to set the -w flag to 1 at first, and work your way up. Yes, that's single-threading this. But the ElasticSearch output has some concurrency issues in the 2.x series that are largely fixed in the 5.x series.
With elasticsearch version 6.0 we had exactly the same issue of slow updates on AWS, and the culprit was slow I/O. The same data was upserting completely fine on a local test stack, but once in the cloud on the EC2 stack, everything was dying after an initial burst of speedy inserts lasting only a few minutes.
The local test stack was a low-spec server in terms of memory and CPU, but contained SSDs.
The EC2 stack used EBS volumes with the default gp2 300 IOPS.
Converting the volumes to type io1 with 3000 IOPS solved the issue and everything got back on track.
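If you go the same route, a minimal sketch of the conversion with the AWS CLI (the volume ID is a placeholder; gp2 volumes can be modified in place without detaching):
# Hedged sketch: change an EBS volume to io1 with 3000 provisioned IOPS
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --volume-type io1 --iops 3000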
I am using the Amazon AWS Elasticsearch service version 6.0. I need heavy writes/inserts from a series of JSON files into elasticsearch, for 10 billion items. The elasticsearch-py bulk write speed was really slow most of the time, with only occasional bursts of high-speed writes. I tried all kinds of methods, such as splitting the JSON files into smaller pieces, multiprocess reading of the JSON files, and parallel_bulk inserts into elasticsearch; nothing worked. Finally, after I upgraded to an io1 EBS volume, everything went smoothly with 10000 write IOPS.
I'm trying to spin up a Neo4j 3.1 instance in a Docker container (through Docker-Compose), running on OSX (El Capitan). All is well, unless I try to increase the max-heap space available to Neo above the default of 512MB.
According to the docs, this can be achieved by adding the environment variable NEO4J_dbms_memory_heap_maxSize, which then causes the server wrapper script to update the neo4j.conf file accordingly. I've checked and it is being updated as one would expect.
The problem is, when I run docker-compose up to spin up the container, the Neo4j instance crashes out with a 137 status code. A little research tells me this is a linux hard-crash, based on heap-size maximum limits.
$ docker-compose up
Starting elasticsearch
Recreating neo4j31
Attaching to elasticsearch, neo4j31
neo4j31 | Starting Neo4j.
neo4j31 exited with code 137
My questions:
Is this due to a Docker or an OSX limitation?
Is there a way I can modify these limits? If I drop the requested limit to 1GB, it will spin up, but still crashes once I run my heavy query (which is what caused the need for increased Heap space anyway).
The query that I'm running is a large-scale update across a lot of nodes (>150k) containing full-text attributes, so that they can be synchronised to ElasticSearch using the plug-in. Is there a way I can get Neo to step through doing, say, 500 nodes at a time, using only cypher (I'd rather avoid writing a script if I can, feels a little dirty for this).
My docker-compose.yml is as follows:
---
version: '2'
services:
  # ---<SNIP>
  neo4j:
    image: neo4j:3.1
    container_name: neo4j31
    volumes:
      - ./docker/neo4j/conf:/var/lib/neo4j/conf
      - ./docker/neo4j/mnt:/var/lib/neo4j/import
      - ./docker/neo4j/plugins:/plugins
      - ./docker/neo4j/data:/data
      - ./docker/neo4j/logs:/var/lib/neo4j/logs
    ports:
      - "7474:7474"
      - "7687:7687"
    environment:
      - NEO4J_dbms_memory_heap_maxSize=4G
  # ---<SNIP>
Is this due to a Docker or an OSX limitation?
No. Increase the amount of RAM available to Docker to resolve this issue.
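On Docker for Mac the VM's memory cap is small by default (typically around 2GB); a quick sketch to check what the daemon actually has before raising it in the Docker preferences (the grep pattern assumes the standard docker info output):
# Hedged sketch: show how much memory the Docker VM has been given
docker info | grep -i 'total memory'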
Is there a way I can modify these limits? If I drop the requested limit to 1GB, it will spin up, but still crashes once I run my heavy query (which is what caused the need for increased Heap space anyway).
The query that I'm running is a large-scale update across a lot of nodes (>150k) containing full-text attributes, so that they can be synchronised to ElasticSearch using the plug-in. Is there a way I can get Neo to step through doing, say, 500 nodes at a time, using only cypher (I'd rather avoid writing a script if I can, feels a little dirty for this).
N/A This is a Neo4j specific question. It might be better to separate this from the Docker questions listed above.
3. The query that I'm running is a large-scale update across a lot of nodes (>150k) containing full-text attributes, so that they can be synchronised to ElasticSearch using the plug-in. Is there a way I can get Neo to step through doing, say, 500 nodes at a time, using only cypher (I'd rather avoid writing a script if I can, feels a little dirty for this).
You can do this with the help of the APOC plugin for neo4j, more specifically apoc.periodic.iterate or apoc.periodic.commit.
If you use apoc.periodic.commit, your first MATCH should be specific, like in the example below where you mark which nodes you have already synced, because otherwise it can sometimes fall into a loop:
call apoc.periodic.commit("
match (user:User) WHERE user.synced = false
with user limit {limit}
MERGE (city:City {name:user.city})
MERGE (user)-[:LIVES_IN]->(city)
SET user.synced =true
RETURN count(*)
",{limit:10000})
If you use apoc.periodic.iterate you can run it in parallel mode:
CALL apoc.periodic.iterate(
  "MATCH (o:Order) WHERE o.date > '2016-10-13' RETURN o",
  "WITH {o} as o MATCH (o)-[:HAS_ITEM]->(i) WITH o, sum(i.value) as value
   CALL apoc.es.post(host-or-port, index-or-null, type-or-null, query-or-null, payload-or-null) yield value return *",
  {batchSize:100, parallel:true})
Note that there is no need for a second MATCH clause, and apoc.es.post is an APOC procedure that can send POST requests to Elasticsearch.
See the documentation for more info.